Paper Group ANR 268
An Improved Classification Model for Igbo Text Using N-Gram And K-Nearest Neighbour Approaches
Title | An Improved Classification Model for Igbo Text Using N-Gram And K-Nearest Neighbour Approaches |
Authors | Nkechi Ifeanyi-Reuben, Chidiebere Ugwu |
Abstract | This paper presents an improved classification model for Igbo text using N-gram and K-Nearest Neighbour approaches. The N-gram model was used for text representation and the classification was carried out on the text using the K-Nearest Neighbour model. An object-oriented design methodology is used for the work, and the system is implemented in the Python programming language with tools from the Natural Language Toolkit (NLTK). The performance of the Igbo text classification system is measured by computing the precision, recall and F1-measure of the results obtained on unigram-, bigram- and trigram-represented text. The Igbo text classification on bigram-represented text has the highest degree of exactness (precision); the results obtained with the three N-gram models have the same level of completeness (recall), while the trigram model has the lowest level of precision. This shows that classification on bigram-represented Igbo text outperforms the unigram and trigram representations. Therefore, the bigram text representation model is highly recommended for any intelligent text-based system in the Igbo language. |
Tasks | Text Classification |
Published | 2020-04-01 |
URL | https://arxiv.org/abs/2004.00375v1 |
https://arxiv.org/pdf/2004.00375v1.pdf | |
PWC | https://paperswithcode.com/paper/an-improved-classification-model-for-igbo |
Repo | |
Framework | |
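The following is a minimal sketch of the pipeline idea described above, using scikit-learn rather than the NLTK tooling named in the abstract, and a tiny hypothetical Igbo corpus with made-up category labels; it is not the authors' code or data. Switching `ngram_range` to `(1, 1)` or `(3, 3)` gives the unigram and trigram variants the paper compares.

```python
# Minimal sketch (not the authors' code): bigram text representation with a
# K-Nearest Neighbour classifier. The tiny corpus and labels are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

train_texts = [
    "ndi mmadu na-ere ahia n'ahia",      # hypothetical "trade" document
    "onye ahia zuru akwa na akpa",        # hypothetical "trade" document
    "ndi otu egwuregwu meriri asompi",    # hypothetical "sports" document
    "egwuregwu bọọlụ ga-adi echi",        # hypothetical "sports" document
]
train_labels = ["trade", "trade", "sports", "sports"]

# ngram_range=(2, 2) builds the bigram representation recommended by the paper.
model = make_pipeline(
    CountVectorizer(ngram_range=(2, 2), analyzer="word"),
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(train_texts, train_labels)
print(model.predict(["ha na-ere ahia n'ahia"]))   # hypothetical test sentence
```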
Sound field reconstruction in rooms: inpainting meets superresolution
Title | Sound field reconstruction in rooms: inpainting meets superresolution |
Authors | Francesc Lluís, Pablo Martínez-Nuevo, Martin Bo Møller, Sven Ewan Shepstone |
Abstract | In this paper a deep-learning-based method for sound field reconstruction is proposed. It is shown that it is possible to reconstruct the magnitude of the sound pressure in the frequency band 30-300 Hz for an entire room by using a very low number of irregularly distributed microphones arbitrarily arranged. In particular, the presented approach uses a limited number of arbitrary discrete measurements of the magnitude of the sound field pressure in order to extrapolate this field to a higher-resolution grid of discrete points in space with a low computational complexity. The method is based on a U-net-like neural network with partial convolutions trained solely on simulated data, i.e. the dataset is constructed from numerical simulations of the Green’s function across thousands of common rectangular rooms. Although extensible to three dimensions, the method focuses on reconstructing a two-dimensional plane of the room from measurements of the three-dimensional sound field. Experiments using simulated data together with an experimental validation in a real listening room are shown. The results suggest a performance, in terms of mean squared error and structural similarity, which may exceed conventional reconstruction techniques for a low number of microphones and computational requirements. |
Tasks | |
Published | 2020-01-30 |
URL | https://arxiv.org/abs/2001.11263v1 |
https://arxiv.org/pdf/2001.11263v1.pdf | |
PWC | https://paperswithcode.com/paper/sound-field-reconstruction-in-rooms |
Repo | |
Framework | |
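Below is a minimal sketch, assuming PyTorch, of a partial (mask-aware) convolution layer, the building block of the U-net-like network mentioned in the abstract. It follows the common partial-convolution formulation rather than the authors' implementation, and the 32x32 grid with roughly 5% microphone coverage is an illustrative placeholder.

```python
# Sketch of a partial convolution: convolve only observed values and
# re-normalise by the observed fraction in each window.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=True)
        self.padding = padding
        # Fixed all-ones kernel used to count observed pixels per window.
        self.register_buffer("ones", torch.ones(1, in_ch, kernel_size, kernel_size))
        self.window = float(in_ch * kernel_size * kernel_size)

    def forward(self, x, mask):
        # x: sound-pressure grid; mask: 1 where a microphone observation exists.
        valid = F.conv2d(mask, self.ones, padding=self.padding)   # observed count per window
        out = self.conv(x * mask)
        bias = self.conv.bias.view(1, -1, 1, 1)
        scale = self.window / valid.clamp(min=1.0)                # re-normalise
        out = (out - bias) * scale + bias
        out = out * (valid > 0).float()                           # zero where nothing observed
        new_mask = (valid > 0).float()
        return out, new_mask

# Toy usage: a 32x32 magnitude map observed at a handful of microphone positions.
x = torch.randn(1, 1, 32, 32)
mask = (torch.rand(1, 1, 32, 32) < 0.05).float()
layer = PartialConv2d(1, 16)
y, m = layer(x, mask)
print(y.shape, m.shape)
```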
SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models
Title | SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models |
Authors | Yucen Luo, Alex Beatson, Mohammad Norouzi, Jun Zhu, David Duvenaud, Ryan P. Adams, Ricky T. Q. Chen |
Abstract | Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series. If parameterized by an encoder-decoder architecture, the parameters of the encoder can be optimized to minimize the variance of this estimator. We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost. This estimator also allows use of latent variable models for tasks where unbiased estimators, rather than marginal likelihood lower bounds, are preferred, such as minimizing reverse KL divergences and estimating score functions. |
Tasks | Latent Variable Models |
Published | 2020-04-01 |
URL | https://arxiv.org/abs/2004.00353v1 |
https://arxiv.org/pdf/2004.00353v1.pdf | |
PWC | https://paperswithcode.com/paper/sumo-unbiased-estimation-of-log-marginal-1 |
Repo | |
Framework | |
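The randomized-truncation idea can be illustrated on a closed-form toy problem rather than a latent variable model: estimate log E[w] for log-normal importance weights, where the truth is known. The sketch below (NumPy, not the authors' code) shows the estimator's form; note that a simple geometric stopping distribution makes the estimator heavy-tailed, a variance/compute trade-off the paper discusses.

```python
# Toy sketch of the Russian-roulette estimator behind SUMO: an unbiased
# estimate of log E[w], compared against the biased log-of-sample-mean.
import numpy as np

rng = np.random.default_rng(0)
TRUE_VALUE = 0.5   # log E[exp(z)] = sigma^2 / 2 for z ~ N(0, 1)

def running_iwae(weights):
    """IWAE_k = log of the mean of the first k importance weights."""
    k = np.arange(1, len(weights) + 1)
    return np.log(np.cumsum(weights) / k)

def sumo_draw(rng, p_stop=0.3):
    # Randomised truncation: K ~ Geometric(p_stop), so P(K >= k) = (1 - p_stop)^(k-1).
    K = rng.geometric(p_stop)
    w = np.exp(rng.standard_normal(K + 1))          # i.i.d. importance weights
    terms = running_iwae(w)                         # IWAE_1 ... IWAE_{K+1}
    deltas = np.diff(terms)                         # Delta_k = IWAE_{k+1} - IWAE_k
    inv_tail = (1.0 - p_stop) ** (-np.arange(K))    # 1 / P(K >= k) for k = 1..K
    return terms[0] + np.dot(deltas, inv_tail)

n = 100_000
sumo_est = np.mean([sumo_draw(rng) for _ in range(n)])
naive_est = np.log(np.exp(rng.standard_normal((n, 5))).mean(axis=1)).mean()
print(f"true {TRUE_VALUE:.3f} | SUMO {sumo_est:.3f} | biased 5-sample log-mean {naive_est:.3f}")
```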
Short sighted deep learning
Title | Short sighted deep learning |
Authors | Ellen de Mello Koch, Anita de Mello Koch, Nicholas Kastanos, Ling Cheng |
Abstract | A theory explaining how deep learning works is yet to be developed. Previous work suggests that deep learning performs a coarse graining, similar in spirit to the renormalization group (RG). This idea has been explored in the setting of a local (nearest neighbor interactions) Ising spin lattice. We extend the discussion to the setting of a long range spin lattice. Markov Chain Monte Carlo (MCMC) simulations determine both the critical temperature and scaling dimensions of the system. The model is used to train both a single RBM (restricted Boltzmann machine) network, as well as a stacked RBM network. Following earlier Ising model studies, the trained weights of a single layer RBM network define a flow of lattice models. In contrast to results for nearest neighbor Ising, the RBM flow for the long ranged model does not converge to the correct values for the spin and energy scaling dimension. Further, correlation functions between visible and hidden nodes exhibit key differences between the stacked RBM and RG flows. The stacked RBM flow appears to move towards low temperatures whereas the RG flow moves towards high temperature. This again differs from results obtained for nearest neighbor Ising. |
Tasks | |
Published | 2020-02-07 |
URL | https://arxiv.org/abs/2002.02664v1 |
https://arxiv.org/pdf/2002.02664v1.pdf | |
PWC | https://paperswithcode.com/paper/short-sighted-deep-learning |
Repo | |
Framework | |
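As a hedged illustration of the MCMC ingredient mentioned in the abstract, the sketch below runs Metropolis sampling of a one-dimensional long-range Ising chain with power-law couplings; the paper's lattice, scaling-dimension measurements and RBM training are not reproduced, and all constants are placeholders.

```python
# Metropolis MCMC for a long-range Ising chain with couplings J(r) = 1/r^alpha.
import numpy as np

rng = np.random.default_rng(0)
N, alpha, T, sweeps = 64, 1.5, 2.0, 2000   # spins, coupling exponent, temperature, sweeps

# Pairwise couplings with periodic boundary conditions; no self-coupling.
idx = np.arange(N)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, N - dist)
J = np.zeros((N, N))
J[dist > 0] = 1.0 / dist[dist > 0] ** alpha

spins = rng.choice([-1, 1], size=N)
mags = []

for sweep in range(sweeps):
    for _ in range(N):
        i = rng.integers(N)
        dE = 2.0 * spins[i] * (J[i] @ spins)        # energy change of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] *= -1
    if sweep >= sweeps // 2:                         # record after burn-in
        mags.append(abs(spins.mean()))

print(f"mean |magnetisation| per spin at T={T}: {np.mean(mags):.3f}")
```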
Proving the Lottery Ticket Hypothesis: Pruning is All You Need
Title | Proving the Lottery Ticket Hypothesis: Pruning is All You Need |
Authors | Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir |
Abstract | The lottery ticket hypothesis (Frankle and Carbin, 2018) states that a randomly-initialized network contains a small subnetwork that, when trained in isolation, can compete with the performance of the original network. We prove an even stronger hypothesis (as was also conjectured in Ramanujan et al., 2019), showing that for every bounded distribution and every target network with bounded weights, a sufficiently over-parameterized neural network with random weights contains a subnetwork with roughly the same accuracy as the target network, without any further training. |
Tasks | |
Published | 2020-02-03 |
URL | https://arxiv.org/abs/2002.00585v1 |
https://arxiv.org/pdf/2002.00585v1.pdf | |
PWC | https://paperswithcode.com/paper/proving-the-lottery-ticket-hypothesis-pruning |
Repo | |
Framework | |
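A toy numerical illustration of the pruning-only claim (not the paper's proof, which handles ReLU networks via a more careful construction): in a random two-layer linear network, keeping only the hidden unit whose path weight u_i * v_i best matches a target weight already gives an approximation whose error shrinks as the width grows.

```python
# Over-parameterisation + pruning approximates a target weight without training.
import numpy as np

rng = np.random.default_rng(0)
target = 0.73   # illustrative target weight w*

for width in [10, 100, 1_000, 10_000, 100_000]:
    u = rng.uniform(-1, 1, width)   # random input->hidden weights
    v = rng.uniform(-1, 1, width)   # random hidden->output weights
    paths = u * v                   # effective weight of keeping only unit i
    best = paths[np.argmin(np.abs(paths - target))]
    print(f"width {width:>7}: best pruned weight {best:+.5f}, error {abs(best - target):.5f}")
```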
End-to-end deep learning for big data analytics under a quasi-open set assumption
Title | End-to-end deep learning for big data analytics under a quasi-open set assumption |
Authors | Emile R. Engelbrecht, Johan A. du Preez |
Abstract | Neural network classifiers trained using end-to-end learning regimes are argued to be the most viable for big data analytics due to their low system complexity, fast training and low computational cost. Generally, big data classification models are trained using a semi-supervised learning framework due to the available unlabelled samples and the high cost to gather labelled samples. We assume that unlabelled training samples in big data come both from the same classes as the available labelled training samples and from different ones, which we call a quasi-open set. Under quasi-open set assumptions, end-to-end classifier models must accurately classify samples from source classes represented by labelled and unlabelled training samples while also detecting samples from novel classes represented by only unlabelled training samples. To the best of our knowledge, no end-to-end work has trained under a quasi-open set assumption, making our results the first of their kind. Our proposed method extends the semi-supervised learning using GANs framework to also explicitly train a certainty classification measurement via end-to-end means. Different from other certainty measurements that aim to reduce misclassifications of source classes, ours aims to provide tractable means to separate source and novel classes. Experiments are conducted on a simulated quasi-open set using MNIST by selecting seven classes as source classes and using the remaining three classes as possible novel classes. For all experiments, we achieve near-perfect detection of samples from novel classes. On the other hand, source class classification is dependent on the number of labelled training samples provided for the source classes, as per general end-to-end classification learning. End-to-end learning is held as the most tractable solution for big data analytics, but only if models are trained to classify source classes and detect novel classes. |
Tasks | |
Published | 2020-02-04 |
URL | https://arxiv.org/abs/2002.01368v2 |
https://arxiv.org/pdf/2002.01368v2.pdf | |
PWC | https://paperswithcode.com/paper/introduction-to-quasi-open-set-semi |
Repo | |
Framework | |
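For illustration only, the sketch below uses a much simpler baseline than the paper's GAN-based method: a softmax classifier trained on seven source classes of scikit-learn's small digits dataset, with novel classes flagged by thresholding the maximum class probability. The only partial separation such a crude certainty score achieves is part of what motivates training an explicit certainty measurement as the paper does.

```python
# Quasi-open-set toy setup: classes 0-6 are source classes, 7-9 act as novel.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
source = y <= 6
Xs_tr, Xs_te, ys_tr, ys_te = train_test_split(X[source], y[source], random_state=0)
X_novel = X[~source]

clf = LogisticRegression(max_iter=2000).fit(Xs_tr, ys_tr)

def max_prob(Z):
    return clf.predict_proba(Z).max(axis=1)       # crude "certainty" score

threshold = 0.9                                   # illustrative choice
print("source accuracy:        ", clf.score(Xs_te, ys_te))
print("source flagged as novel:", np.mean(max_prob(Xs_te) < threshold))
print("novel flagged as novel: ", np.mean(max_prob(X_novel) < threshold))
```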
OptTyper: Probabilistic Type Inference by Optimising Logical and Natural Constraints
Title | OptTyper: Probabilistic Type Inference by Optimising Logical and Natural Constraints |
Authors | Irene Vlassi Pandi, Earl T. Barr, Andrew D. Gordon, Charles Sutton |
Abstract | We present a new approach to the type inference problem for dynamic languages. Our goal is to combine logical constraints, that is, deterministic information from a type system, with natural constraints, uncertain information about types from sources like identifier names. To this end, we introduce a framework for probabilistic type inference that combines logic and learning: logical constraints on the types are extracted from the program, and deep learning is applied to predict types from surface-level code properties that are statistically associated, such as variable names. The main insight of our method is to constrain the predictions from the learning procedure to respect the logical constraints, which we achieve by relaxing the logical inference problem of type prediction into a continuous optimisation problem. To evaluate the idea, we built a tool called OptTyper to predict a TypeScript declaration file for a JavaScript library. OptTyper combines a continuous interpretation of logical constraints derived by a simple program transformation and static analysis of the JavaScript code, with natural constraints obtained from a deep learning model, which learns naming conventions for types from a large codebase. We evaluate OptTyper on a data set of 5,800 open-source JavaScript projects that have type annotations in the well-known DefinitelyTyped repository. We find that combining logical and natural constraints yields a large improvement in performance over either kind of information individually, and produces 50% fewer incorrect type predictions than previous approaches. |
Tasks | |
Published | 2020-04-01 |
URL | https://arxiv.org/abs/2004.00348v1 |
https://arxiv.org/pdf/2004.00348v1.pdf | |
PWC | https://paperswithcode.com/paper/opttyper-probabilistic-type-inference-by |
Repo | |
Framework | |
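The core relaxation idea can be sketched on a two-variable toy problem (assuming PyTorch; this is not OptTyper's actual constraint system): type distributions proposed by a mocked "natural" naming model are adjusted by gradient descent so that a relaxed logical constraint, here "both variables have the same type", is also respected.

```python
# Combining natural constraints (soft type predictions) with a relaxed logical
# constraint (type equality) via continuous optimisation.
import torch

types = ["number", "string", "boolean"]

# Mocked "natural" predictions, e.g. from identifier names (placeholders).
nat_a = torch.tensor([0.70, 0.20, 0.10])   # leans towards number
nat_b = torch.tensor([0.30, 0.25, 0.45])   # weakly leans towards boolean

logits_a = torch.log(nat_a).clone().requires_grad_(True)
logits_b = torch.log(nat_b).clone().requires_grad_(True)
opt = torch.optim.Adam([logits_a, logits_b], lr=0.05)

for step in range(300):
    p_a, p_b = torch.softmax(logits_a, 0), torch.softmax(logits_b, 0)
    natural_loss = -(nat_a * torch.log(p_a)).sum() - (nat_b * torch.log(p_b)).sum()
    logical_loss = -torch.log((p_a * p_b).sum())   # relaxed "same type" constraint
    loss = natural_loss + 2.0 * logical_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

p_a = torch.softmax(logits_a, 0).tolist()
p_b = torch.softmax(logits_b, 0).tolist()
print("a:", {t: round(p, 2) for t, p in zip(types, p_a)})
print("b:", {t: round(p, 2) for t, p in zip(types, p_b)})
```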
Single Image Optical Flow Estimation with an Event Camera
Title | Single Image Optical Flow Estimation with an Event Camera |
Authors | Liyuan Pan, Miaomiao Liu, Richard Hartley |
Abstract | Event cameras are bio-inspired sensors that asynchronously report intensity changes in microsecond resolution. DAVIS can capture high dynamics of a scene and simultaneously output high temporal resolution events and low frame-rate intensity images. In this paper, we propose an optical flow estimation approach based on a single image (potentially blurred) and events. First, we demonstrate how events can be used to improve flow estimates. To this end, we encode the relation between flow and events effectively by presenting an event-based photometric consistency formulation. Then, we consider the special case of image blur caused by high dynamics in the visual environments and show that including the blur formation in our model further constrains flow estimation. This is in sharp contrast to existing works that ignore the blurred images, while our formulation can naturally handle either blurred or sharp images to achieve accurate flow estimation. Finally, we reduce flow estimation, as well as image deblurring, to an alternating optimization problem of an objective function using the primal-dual algorithm. Experimental results on both synthetic and real data (with blurred and non-blurred images) show the superiority of our model in comparison to state-of-the-art approaches. |
Tasks | Deblurring, Optical Flow Estimation |
Published | 2020-04-01 |
URL | https://arxiv.org/abs/2004.00347v1 |
https://arxiv.org/pdf/2004.00347v1.pdf | |
PWC | https://paperswithcode.com/paper/single-image-optical-flow-estimation-with-an |
Repo | |
Framework | |
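A toy version of the event-based photometric consistency idea (not the paper's blur-aware, primal-dual formulation): idealised events give the per-pixel log-brightness change over a time window, and combining this with brightness constancy lets a brute-force search over constant integer flows recover the true motion of a synthetic scene.

```python
# Toy event-based photometric consistency: residual(u) = |L0(x) - L0(x+u) - C*E(x+u)|.
import numpy as np

rng = np.random.default_rng(0)
C = 0.2                                    # event contrast threshold (illustrative)
true_flow = (3, -2)                        # constant (dy, dx) between the two instants

# Smooth random log-intensity image and its translated copy.
base = rng.random((64, 64))
for _ in range(10):                        # crude smoothing by repeated averaging
    base = (base + np.roll(base, 1, 0) + np.roll(base, 1, 1)
            + np.roll(base, -1, 0) + np.roll(base, -1, 1)) / 5.0
L0 = np.log(base + 1.0)
L1 = np.roll(L0, true_flow, axis=(0, 1))

# Idealised events: per-pixel log-brightness change over the window, scaled by C.
# (Real sensors quantise this by the contrast threshold.)
events = (L1 - L0) / C

def residual(flow):
    # Brightness constancy L0(x) = L1(x+u) and event integration L1 = L0 + C*E,
    # both evaluated at x + u.
    neg = (-flow[0], -flow[1])
    L0_at_xu = np.roll(L0, neg, axis=(0, 1))
    E_at_xu = np.roll(events, neg, axis=(0, 1))
    return np.abs(L0 - (L0_at_xu + C * E_at_xu)).mean()

candidates = [(dy, dx) for dy in range(-4, 5) for dx in range(-4, 5)]
best = min(candidates, key=residual)
print("true flow:", true_flow, " recovered flow:", best)
```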
Multiscale modelling and simulation of physical systems as semiosis
Title | Multiscale modelling and simulation of physical systems as semiosis |
Authors | Martin Thomas Horsch, Silvia Chiacchiera, Michael A. Seaton, Ilian T. Todorov |
Abstract | It is explored how physicalist mereotopology and Peircean semiotics can be applied to represent models, simulations, and workflows in multiscale modelling and simulation of physical systems within a top-level ontology. It is argued that to conceptualize modelling and simulation in such a framework, two major types of semiosis need to be formalized and combined with each other: Interpretation, where a sign and a represented object yield an interpretant (another representamen for the same object), and metonymization, where the represented object and a sign are in a three-way relationship with another object to which the signification is transferred. It is outlined how the main elements of the pre-existing simulation workflow descriptions MODA and OSMO, i.e., use cases, models, solvers, and processors, can be aligned with a top-level ontology that implements this ontological paradigm, which is here referred to as mereosemiotic physicalism. Implications are discussed for the development of the European Materials and Modelling Ontology, an implementation of mereosemiotic physicalism. |
Tasks | |
Published | 2020-03-22 |
URL | https://arxiv.org/abs/2003.11370v2 |
https://arxiv.org/pdf/2003.11370v2.pdf | |
PWC | https://paperswithcode.com/paper/multiscale-modelling-and-simulation-of |
Repo | |
Framework | |
Digit Recognition Using Convolution Neural Network
Title | Digit Recognition Using Convolution Neural Network |
Authors | Kajol Gupta |
Abstract | In pattern recognition, digit recognition has always been a very challenging task. This paper aims to extract correct features so that better accuracy can be achieved for the recognition of digits. Applications of digit recognition, such as password verification and bank check processing, use it to recognize valid user identification. Earlier, several researchers have used various machine learning algorithms for pattern recognition, e.g. KNN, SVM and RFC. The main objective of this work is to obtain the highest accuracy, 99.15%, by using a convolution neural network (CNN) to recognize digits without doing too much pre-processing of the dataset. |
Tasks | |
Published | 2020-04-01 |
URL | https://arxiv.org/abs/2004.00331v1 |
https://arxiv.org/pdf/2004.00331v1.pdf | |
PWC | https://paperswithcode.com/paper/digit-recognition-using-convolution-neural |
Repo | |
Framework | |
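A minimal CNN sketch (assuming PyTorch), trained on scikit-learn's built-in 8x8 digits dataset so it runs without downloads; it is an illustrative baseline, not the paper's MNIST model or its reported 99.15% accuracy.

```python
# Tiny CNN for digit recognition on the 8x8 scikit-learn digits dataset.
import torch
import torch.nn as nn
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = (X / 16.0).reshape(-1, 1, 8, 8).astype("float32")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
X_tr, y_tr = torch.tensor(X_tr), torch.tensor(y_tr)
X_te, y_te = torch.tensor(X_te), torch.tensor(y_te)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 8x8 -> 4x4
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 10),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):                   # full-batch training steps
    opt.zero_grad()
    loss = loss_fn(model(X_tr), y_tr)
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = (model(X_te).argmax(1) == y_te).float().mean().item()
print(f"test accuracy: {acc:.3f}")
```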
Learning Absolute Sound Source Localisation With Limited Supervisions
Title | Learning Absolute Sound Source Localisation With Limited Supervisions |
Authors | Yang Chu, Wayne Luk, Dan Goodman |
Abstract | An accurate auditory space map can be learned from auditory experience, for example during development or in response to altered auditory cues such as a modified pinna. We studied neural network models that learn to localise a single sound source in the horizontal plane using binaural cues based on limited supervisions. These supervisions can be unreliable or sparse in real life. First, a simple model that has unreliable estimation of the sound source location is built, in order to simulate the unreliable auditory orienting response of newborns. It is used as a Teacher that acts as a source of unreliable supervisions. Then we show that it is possible to learn a continuous auditory space map based only on noisy left or right feedback from the Teacher. Furthermore, reinforcement rewards from the environment are used as a source of sparse supervision. By combining the unreliable innate response and the sparse reinforcement rewards, an accurate auditory space map, which is hard to achieve with either of these two kinds of supervision alone, can eventually be learned. Our results show that the auditory space mapping can be calibrated even without explicit supervision. Moreover, this study implies a possibly more general neural mechanism where multiple sub-modules can be coordinated to facilitate each other’s learning process under limited supervisions. |
Tasks | |
Published | 2020-01-28 |
URL | https://arxiv.org/abs/2001.10605v1 |
https://arxiv.org/pdf/2001.10605v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-absolute-sound-source-localisation |
Repo | |
Framework | |
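The learning setting can be sketched with a toy simulation (not the authors' model): a linear student maps a noisy binaural cue to an azimuth estimate using only unreliable left/right feedback from a Teacher, i.e. the sign of its error flipped with some probability. All constants below are illustrative assumptions.

```python
# Learning localisation from unreliable left/right feedback only.
import numpy as np

rng = np.random.default_rng(0)
flip_prob = 0.2            # Teacher gives the wrong left/right answer 20% of the time
w, b, lr = 0.0, 0.0, 0.02  # student: estimated_azimuth = w * cue + b

def cue(azimuth):
    # Idealised interaural cue proportional to sin(azimuth), plus sensor noise.
    return np.sin(azimuth) + 0.05 * rng.standard_normal()

for trial in range(20_000):
    azimuth = rng.uniform(-np.pi / 2, np.pi / 2)
    c = cue(azimuth)
    estimate = w * c + b
    feedback = np.sign(azimuth - estimate)          # +1: source is to one side
    if rng.random() < flip_prob:
        feedback = -feedback                        # unreliable Teacher
    # Sign-based update: nudge the estimate in the indicated direction.
    w += lr * feedback * c
    b += lr * feedback

# Evaluate mean absolute localisation error on fresh sources.
test = rng.uniform(-np.pi / 2, np.pi / 2, 2_000)
errors = [abs(a - (w * cue(a) + b)) for a in test]
print(f"mean absolute error: {np.mean(errors):.3f} rad  (w={w:.2f}, b={b:.2f})")
```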
Learning Architectures for Binary Networks
Title | Learning Architectures for Binary Networks |
Authors | Kunal Pratap Singh, Dahyun Kim, Jonghyun Choi |
Abstract | Backbone architectures of most binary networks are well-known floating point architectures, such as the ResNet family. Conjecturing that the architectures designed for floating-point networks may not be the best for binary networks, we propose to search architectures for binary networks (BNAS). Specifically, based on the cell based search method, we define a new set of layer types, design a new cell template, and rediscover the utility of and propose to use the Zeroise layer to learn well-performing binary networks. In addition, we propose to diversify early search to learn better performing binary architectures. We show that our searched binary networks outperform state-of-the-art binary networks on the CIFAR10 and ImageNet datasets. |
Tasks | |
Published | 2020-02-17 |
URL | https://arxiv.org/abs/2002.06963v1 |
https://arxiv.org/pdf/2002.06963v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-architectures-for-binary-networks |
Repo | |
Framework | |
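For context, the sketch below (assuming PyTorch) shows the basic building block such binary networks are made of: a convolution whose weights and activations are binarised with sign() in the forward pass while gradients flow through a straight-through estimator, plus a Zeroise layer of the kind the abstract refers to. It is not the BNAS search procedure.

```python
# Binary convolution with a straight-through estimator, and a Zeroise layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize(x):
    # Forward: sign(x); backward: identity (often additionally clipped to |x| <= 1).
    return x + (torch.sign(x) - x).detach()

class BinaryConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.1)
        self.padding = padding

    def forward(self, x):
        return F.conv2d(binarize(x), binarize(self.weight), padding=self.padding)

class Zeroise(nn.Module):
    """Layer type from cell-based search spaces: outputs zeros of the input's shape."""
    def forward(self, x):
        return torch.zeros_like(x)

x = torch.randn(2, 3, 8, 8, requires_grad=True)
out = BinaryConv2d(3, 4)(x)
out.sum().backward()                      # gradients exist despite the sign() forward
print(out.shape, x.grad.abs().sum().item() > 0)
```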
Self-adaptation in non-Elitist Evolutionary Algorithms on Discrete Problems with Unknown Structure
Title | Self-adaptation in non-Elitist Evolutionary Algorithms on Discrete Problems with Unknown Structure |
Authors | Brendan Case, Per Kristian Lehre |
Abstract | A key challenge to make effective use of evolutionary algorithms is to choose appropriate settings for their parameters. However, the appropriate parameter setting generally depends on the structure of the optimisation problem, which is often unknown to the user. Non-deterministic parameter control mechanisms adjust parameters using information obtained from the evolutionary process. Self-adaptation – where parameter settings are encoded in the chromosomes of individuals and evolve through mutation and crossover – is a popular parameter control mechanism in evolutionary strategies. However, there is little theoretical evidence that self-adaptation is effective, and self-adaptation has largely been ignored by the discrete evolutionary computation community. Here we show through a theoretical runtime analysis that a non-elitist, discrete evolutionary algorithm which self-adapts its mutation rate not only outperforms EAs which use static mutation rates on LeadingOnes, but also improves asymptotically on an EA using a state-of-the-art control mechanism. The structure of this problem depends on a parameter $k$, which is a priori unknown to the algorithm, and which is needed to appropriately set a fixed mutation rate. The self-adaptive EA achieves the same asymptotic runtime as if this parameter was known to the algorithm beforehand, which is an asymptotic speedup for this problem compared to all other EAs previously studied. An experimental study of how the mutation rates evolve shows that they respond adequately to a diverse range of problem structures. These results suggest that self-adaptation should be adopted more broadly as a parameter control mechanism in discrete, non-elitist evolutionary algorithms. |
Tasks | |
Published | 2020-04-01 |
URL | https://arxiv.org/abs/2004.00327v1 |
https://arxiv.org/pdf/2004.00327v1.pdf | |
PWC | https://paperswithcode.com/paper/self-adaptation-in-non-elitist-evolutionary |
Repo | |
Framework | |
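A minimal sketch of the mechanism studied in the paper: a non-elitist (mu, lambda) EA on LeadingOnes in which each individual carries its own mutation rate, perturbs it multiplicatively, and then uses it to mutate the inherited bitstring. The parameter choices below are illustrative, not the analysed ones.

```python
# Self-adaptive, non-elitist (mu, lambda) EA on LeadingOnes.
import numpy as np

rng = np.random.default_rng(0)
n, mu, lam, A = 100, 10, 40, 1.2     # problem size, parents, offspring, adaptation factor

def leading_ones(x):
    zeros = np.flatnonzero(x == 0)
    return zeros[0] if zeros.size else len(x)

# Population: (bitstring, mutation rate) pairs.
pop = [(rng.integers(0, 2, n), 1.0 / n) for _ in range(mu)]

for gen in range(1000):
    offspring = []
    for _ in range(lam):
        x, rate = pop[rng.integers(mu)]
        # Self-adaptation: the mutation rate itself mutates first...
        rate = min(0.5, max(1.0 / n**2, rate * (A if rng.random() < 0.5 else 1.0 / A)))
        # ...and is then used to flip bits of the inherited bitstring.
        child = np.where(rng.random(n) < rate, 1 - x, x)
        offspring.append((child, rate))
    # Non-elitist (mu, lambda) selection: parents are discarded every generation.
    offspring.sort(key=lambda ind: leading_ones(ind[0]), reverse=True)
    pop = offspring[:mu]

best_x, best_rate = pop[0]
print("best LeadingOnes value:", leading_ones(best_x), " its mutation rate:", round(best_rate, 4))
```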
SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans
Title | SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans |
Authors | Igor Santesteban, Elena Garces, Miguel A. Otaduy, Dan Casas |
Abstract | We present SoftSMPL, a learning-based method to model realistic soft-tissue dynamics as a function of body shape and motion. Datasets to learn such a task are scarce and expensive to generate, which makes training models prone to overfitting. At the core of our method there are three key contributions that enable us to model highly realistic dynamics and better generalization capabilities than state-of-the-art methods, while training on the same data. First, a novel motion descriptor that disentangles the standard pose representation by removing subject-specific features; second, a neural-network-based recurrent regressor that generalizes to unseen shapes and motions; and third, a highly efficient nonlinear deformation subspace capable of representing soft-tissue deformations of arbitrary shapes. We demonstrate qualitative and quantitative improvements over existing methods and, additionally, we show the robustness of our method on a variety of motion capture databases. |
Tasks | Motion Capture |
Published | 2020-04-01 |
URL | https://arxiv.org/abs/2004.00326v1 |
https://arxiv.org/pdf/2004.00326v1.pdf | |
PWC | https://paperswithcode.com/paper/softsmpl-data-driven-modeling-of-nonlinear |
Repo | |
Framework | |
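Purely as a structural sketch (assuming PyTorch), the snippet below shows the kind of recurrent regressor the abstract describes: a GRU mapping per-frame motion descriptors to coefficients of a deformation subspace. Dimensions and data are placeholders, not the SoftSMPL architecture or dataset.

```python
# Recurrent regressor from motion descriptors to soft-tissue subspace coefficients.
import torch
import torch.nn as nn

class SoftTissueRegressor(nn.Module):
    def __init__(self, descriptor_dim=75, hidden_dim=128, subspace_dim=25):
        super().__init__()
        self.gru = nn.GRU(descriptor_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, subspace_dim)

    def forward(self, motion):                 # motion: (batch, frames, descriptor_dim)
        hidden, _ = self.gru(motion)
        return self.head(hidden)               # per-frame subspace coefficients

model = SoftTissueRegressor()
motion = torch.randn(4, 60, 75)                # 4 clips of 60 frames (placeholder data)
coeffs = model(motion)
print(coeffs.shape)                            # (4, 60, 25)

# A decoder (not shown) would map each coefficient vector back to per-vertex
# soft-tissue displacements; training would regress ground-truth coefficients.
```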
Neural Communication Systems with Bandwidth-limited Channel
Title | Neural Communication Systems with Bandwidth-limited Channel |
Authors | Karen Ullrich, Fabio Viola, Danilo Jimenez Rezende |
Abstract | Reliably transmitting messages despite information loss due to a noisy channel is a core problem of information theory. One of the most important aspects of real world communication, e.g. via wifi, is that it may happen at varying levels of information transfer. The bandwidth-limited channel models this phenomenon. In this study we consider learning coding with the bandwidth-limited channel (BWLC). Recently, neural communication models such as variational autoencoders have been studied for the task of source compression. We build upon this work by studying neural communication systems with the BWLC. Specifically, we find three modelling choices that are relevant under expected information loss. First, instead of separating the sub-tasks of compression (source coding) and error correction (channel coding), we propose to model both jointly. Framing the problem as a variational learning problem, we conclude that joint systems outperform their separate counterparts when coding is performed by flexible learnable function approximators such as neural networks. To facilitate learning, we introduce a differentiable and computationally efficient version of the bandwidth-limited channel. Second, we propose a design to model missing information with a prior, and incorporate this into the channel model. Finally, sampling from the joint model is improved by introducing auxiliary latent variables in the decoder. Experimental results justify the validity of our design decisions through improved distortion and FID scores. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13367v2 |
https://arxiv.org/pdf/2003.13367v2.pdf | |
PWC | https://paperswithcode.com/paper/neural-communication-systems-with-bandwidth-1 |
Repo | |
Framework | |
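A minimal sketch (assuming PyTorch) of a differentiable bandwidth-limited channel of the kind described above: only the first B latent dimensions get through, with B sampled per example. The surrounding autoencoder is a generic placeholder rather than the authors' joint source/channel coding model.

```python
# Differentiable bandwidth-limited channel inside a toy autoencoder.
import torch
import torch.nn as nn

class BandwidthLimitedChannel(nn.Module):
    def __init__(self, latent_dim):
        super().__init__()
        self.latent_dim = latent_dim

    def forward(self, z):
        # Sample a bandwidth per example and zero out the dimensions beyond it.
        batch = z.shape[0]
        bandwidth = torch.randint(1, self.latent_dim + 1, (batch, 1), device=z.device)
        keep = torch.arange(self.latent_dim, device=z.device).unsqueeze(0) < bandwidth
        return z * keep.float()               # masking is differentiable w.r.t. z

latent_dim = 16
encoder = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 64))
channel = BandwidthLimitedChannel(latent_dim)

x = torch.randn(8, 64)
x_hat = decoder(channel(encoder(x)))
loss = ((x - x_hat) ** 2).mean()              # distortion term; train end to end
loss.backward()
print(loss.item())
```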