May 7, 2019

3173 words 15 mins read

Paper Group ANR 141

A Large Scale Corpus of Gulf Arabic. A survey of sparse representation: algorithms and applications. Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks. metricDTW: local distance metric learning in Dynamic Time Warping. The high-conductance state enables neural sampling in networks of LIF neuron …

A Large Scale Corpus of Gulf Arabic

Title A Large Scale Corpus of Gulf Arabic
Authors Salam Khalifa, Nizar Habash, Dana Abdulrahim, Sara Hassan
Abstract Most Arabic natural language processing tools and resources are developed to serve Modern Standard Arabic (MSA), which is the official written language in the Arab World. Some Dialectal Arabic varieties, notably Egyptian Arabic, have received some attention lately and have a growing collection of resources that include annotated corpora and morphological analyzers and taggers. Gulf Arabic, however, lags behind in that respect. In this paper, we present the Gumar Corpus, a large-scale corpus of Gulf Arabic consisting of 110 million words from 1,200 forum novels. We annotate the corpus for sub-dialect information at the document level. We also present results of a preliminary study in the morphological annotation of Gulf Arabic which includes developing guidelines for a conventional orthography. The text of the corpus is publicly browsable through a web interface we developed for it.
Tasks
Published 2016-09-09
URL http://arxiv.org/abs/1609.02960v1
PDF http://arxiv.org/pdf/1609.02960v1.pdf
PWC https://paperswithcode.com/paper/a-large-scale-corpus-of-gulf-arabic
Repo
Framework

A survey of sparse representation: algorithms and applications

Title A survey of sparse representation: algorithms and applications
Authors Zheng Zhang, Yong Xu, Jian Yang, Xuelong Li, David Zhang
Abstract Sparse representation has attracted much attention from researchers in the fields of signal processing, image processing, computer vision and pattern recognition. Sparse representation also has a good reputation in both theoretical research and practical applications. Many different algorithms have been proposed for sparse representation. The main purpose of this article is to provide a comprehensive study and an updated review of sparse representation and to provide guidance for researchers. The taxonomy of sparse representation methods can be studied from various viewpoints. For example, in terms of the different norm minimizations used in sparsity constraints, the methods can be roughly categorized into four groups: sparse representation with $l_0$-norm minimization, sparse representation with $l_p$-norm (0$<$p$<$1) minimization, sparse representation with $l_1$-norm minimization, and sparse representation with $l_{2,1}$-norm minimization. In this paper, a comprehensive overview of sparse representation is provided. The available sparse representation algorithms can also be empirically categorized into four groups: greedy strategy approximation, constrained optimization, proximity algorithm-based optimization, and homotopy algorithm-based sparse representation. The rationales of the different algorithms in each category are analyzed and a wide range of sparse representation applications are summarized, which together reveal the potential of sparse representation theory. In addition, an experimental comparative study of these sparse representation algorithms is presented. The Matlab code used in this paper is available at: http://www.yongxu.org/lunwen.html.
Tasks
Published 2016-02-23
URL http://arxiv.org/abs/1602.07017v1
PDF http://arxiv.org/pdf/1602.07017v1.pdf
PWC https://paperswithcode.com/paper/a-survey-of-sparse-representation-algorithms
Repo
Framework
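
As a concrete illustration of the proximity-algorithm family surveyed above, here is a minimal NumPy sketch of ISTA (iterative shrinkage-thresholding) for the $l_1$-regularized least-squares problem. It is a generic textbook implementation, not code from the paper's Matlab release.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=200):
    """ISTA for min_x 0.5*||Ax - y||_2^2 + lam*||x||_1."""
    if step is None:
        # 1/L, where L = ||A||_2^2 is the Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                 # gradient of the smooth term
        z = x - step * grad                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x
```

The soft-thresholding step is exactly the proximity operator of the $l_1$-norm, which is what places ISTA in the proximity algorithm-based category of the survey's taxonomy.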

Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks

Title Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks
Authors Devansh Arpit, Yingbo Zhou, Bhargava U. Kota, Venu Govindaraju
Abstract While the authors of Batch Normalization (BN) identify and address an important problem in training deep networks (Internal Covariate Shift), the current solution has certain drawbacks. Specifically, BN depends on batch statistics for layerwise input normalization during training, which makes the estimates of the mean and standard deviation of the input (distribution) to hidden layers inaccurate during validation due to shifting parameter values (especially during the initial training epochs). Also, BN cannot be used with a batch size of 1 during training. We address these drawbacks by proposing a non-adaptive normalization technique for removing internal covariate shift, which we call Normalization Propagation. Our approach does not depend on batch statistics, but instead uses a data-independent parametric estimate of the mean and standard deviation in every layer, and is thus computationally faster than BN. We exploit the observation that the pre-activations before Rectified Linear Units follow a Gaussian distribution in deep networks, and that once the first- and second-order statistics of any given dataset are normalized, we can forward-propagate this normalization without the need to recalculate the approximate statistics for hidden layers.
Tasks
Published 2016-03-04
URL http://arxiv.org/abs/1603.01431v6
PDF http://arxiv.org/pdf/1603.01431v6.pdf
PWC https://paperswithcode.com/paper/normalization-propagation-a-parametric
Repo
Framework
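
The data-independent estimate mentioned in the abstract has a closed form: if a pre-activation $u$ is standard normal, then ReLU($u$) has mean $1/\sqrt{2\pi}$ and variance $\frac{1}{2}(1 - 1/\pi)$. Below is a toy NumPy sketch of one layer, under the assumed simplifications of a dense weight matrix and approximately standardized, decorrelated inputs; it illustrates the idea rather than reproducing the paper's full scheme.

```python
import numpy as np

# Closed-form statistics of ReLU(u) for u ~ N(0, 1); these constants let each
# layer renormalize its output without ever touching batch statistics.
RELU_MEAN = 1.0 / np.sqrt(2.0 * np.pi)
RELU_STD = np.sqrt(0.5 * (1.0 - 1.0 / np.pi))

def normprop_dense_layer(x, W):
    """Sketch of a dense layer with propagated normalization.
    Assumes x is (approximately) standardized and decorrelated."""
    W = W / np.linalg.norm(W, axis=1, keepdims=True)  # unit-norm weight rows
    pre = x @ W.T                                     # pre-activations ~ N(0, 1)
    post = np.maximum(pre, 0.0)                       # ReLU
    return (post - RELU_MEAN) / RELU_STD              # data-independent renorm
```

Because the correction is a fixed affine transform, nothing changes between training and validation, which is precisely the drawback of BN that the paper targets.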

metricDTW: local distance metric learning in Dynamic Time Warping

Title metricDTW: local distance metric learning in Dynamic Time Warping
Authors Jiaping Zhao, Zerong Xi, Laurent Itti
Abstract We propose to learn multiple local Mahalanobis distance metrics to perform k-nearest neighbor (kNN) classification of temporal sequences. Temporal sequences are first aligned by dynamic time warping (DTW); given the alignment path, similarity between two sequences is measured by the DTW distance, which is computed as the accumulated distance between matched temporal point pairs along the alignment path. Traditionally, the Euclidean metric is used for distance computation between matched pairs, which ignores the data regularities and might not be optimal for the application at hand. Here we propose to learn multiple Mahalanobis metrics, such that the DTW distance becomes the sum of Mahalanobis distances. We adapt the large margin nearest neighbor (LMNN) framework to our case and formulate multiple metric learning as a linear programming problem. Extensive sequence classification results show that our proposed multiple-metric learning approach is effective, insensitive to the quality of the preceding alignment, and reaches state-of-the-art performance on the UCR time series datasets.
Tasks Metric Learning, Time Series
Published 2016-06-11
URL http://arxiv.org/abs/1606.03628v1
PDF http://arxiv.org/pdf/1606.03628v1.pdf
PWC https://paperswithcode.com/paper/metricdtw-local-distance-metric-learning-in
Repo
Framework
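
To make the decomposition in the abstract concrete, here is a plain DTW implementation with a pluggable local metric; swapping the Euclidean callable for a Mahalanobis one is the substitution the paper builds on. Learning the metrics themselves (the LMNN-style linear program) is not shown.

```python
import numpy as np

def dtw(seq_a, seq_b, dist):
    """Classic DTW; dist is any local metric on matched point pairs."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def mahalanobis(M):
    """Local Mahalanobis distance for a (learned) PSD matrix M."""
    return lambda a, b: float(np.sqrt((a - b) @ M @ (a - b)))

# Euclidean DTW vs. a (here, identity) Mahalanobis DTW on toy sequences:
a, b = np.random.randn(20, 3), np.random.randn(25, 3)
print(dtw(a, b, lambda p, q: float(np.linalg.norm(p - q))))
print(dtw(a, b, mahalanobis(np.eye(3))))
```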

The high-conductance state enables neural sampling in networks of LIF neurons

Title The high-conductance state enables neural sampling in networks of LIF neurons
Authors Mihai A. Petrovici, Ilja Bytschok, Johannes Bill, Johannes Schemmel, Karlheinz Meier
Abstract The apparent stochasticity of in-vivo neural circuits has long been hypothesized to represent a signature of ongoing stochastic inference in the brain. More recently, a theoretical framework for neural sampling has been proposed, which explains how sample-based inference can be performed by networks of spiking neurons. One particular requirement of this approach is that the neural response function closely follows a logistic curve. Analytical approaches to calculating neural response functions have been the subject of many theoretical studies. In order to make the problem tractable, particular assumptions regarding the neural or synaptic parameters are usually made. However, biologically significant activity regimes exist which are not covered by these approaches: under strong synaptic bombardment, as is often the case in cortex, the neuron is shifted into a high-conductance state (HCS) characterized by a small membrane time constant. In this regime, synaptic time constants and refractory periods dominate membrane dynamics. The core idea of our approach is to separately consider two different “modes” of spiking dynamics: burst spiking and transient quiescence, in which the neuron does not spike for longer periods. We treat the former by propagating the probability density function (PDF) of the effective membrane potential from spike to spike within a burst, while using a diffusion approximation for the latter. We find that our prediction of the neural response function closely matches simulation data. Moreover, in the HCS scenario, we show that the neural response function becomes symmetric and can be well approximated by a logistic function, thereby providing the correct dynamics in order to perform neural sampling. We hereby provide not only a normative framework for Bayesian inference in cortex, but also powerful applications of low-power, accelerated neuromorphic systems to relevant machine learning tasks.
Tasks Bayesian Inference
Published 2016-01-05
URL http://arxiv.org/abs/1601.00909v1
PDF http://arxiv.org/pdf/1601.00909v1.pdf
PWC https://paperswithcode.com/paper/the-high-conductance-state-enables-neural
Repo
Framework
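
For orientation, the sampling target in this line of work is a Boltzmann distribution over binary states, where each neuron's conditional firing probability is logistic in its input; that is why the logistic shape of the response function matters. A minimal abstract Gibbs-sampler sketch follows (no LIF dynamics, just the distribution being sampled).

```python
import numpy as np

def gibbs_sample(W, b, steps=5000, seed=0):
    """Gibbs sampling from p(z) ∝ exp(0.5 z^T W z + b^T z) over binary z,
    with W symmetric and zero-diagonal. Neural sampling realizes this
    logistic update with the response function of spiking neurons."""
    rng = np.random.default_rng(seed)
    K = len(b)
    z = rng.integers(0, 2, size=K).astype(float)
    samples = np.empty((steps, K))
    for t in range(steps):
        for k in range(K):
            u = b[k] + W[k] @ z - W[k, k] * z[k]   # membrane-potential analogue
            z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
        samples[t] = z
    return samples
```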

An Ensemble Method to Produce High-Quality Word Embeddings (2016)

Title An Ensemble Method to Produce High-Quality Word Embeddings (2016)
Authors Robyn Speer, Joshua Chin
Abstract A currently successful approach to computational semantics is to represent words as embeddings in a machine-learned vector space. We present an ensemble method that combines embeddings produced by GloVe (Pennington et al., 2014) and word2vec (Mikolov et al., 2013) with structured knowledge from the semantic networks ConceptNet (Speer and Havasi, 2012) and PPDB (Ganitkevitch et al., 2013), merging their information into a common representation with a large, multilingual vocabulary. The embeddings it produces achieve state-of-the-art performance on many word-similarity evaluations. Its score of $\rho = .596$ on an evaluation of rare words (Luong et al., 2013) is 16% higher than the previous best known system.
Tasks Word Embeddings
Published 2016-04-06
URL https://arxiv.org/abs/1604.01692v2
PDF https://arxiv.org/pdf/1604.01692v2.pdf
PWC https://paperswithcode.com/paper/an-ensemble-method-to-produce-high-quality
Repo
Framework
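
A rough sketch of the embedding-combination step, under stated assumptions: two embedding tables represented as word-to-vector dicts, combined by L2-normalizing, concatenating over the shared vocabulary, and reducing with an SVD. The paper's actual system additionally retrofits with ConceptNet and PPDB, which this omits.

```python
import numpy as np

def combine_embeddings(emb_a, emb_b, dim=300):
    """Normalize, concatenate, and SVD-reduce two embedding tables
    (dicts mapping word -> 1-D vector) over their shared vocabulary."""
    vocab = sorted(set(emb_a) & set(emb_b))
    X = np.array([np.concatenate([emb_a[w] / np.linalg.norm(emb_a[w]),
                                  emb_b[w] / np.linalg.norm(emb_b[w])])
                  for w in vocab])
    U, S, _ = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return {w: row for w, row in zip(vocab, U[:, :dim] * S[:dim])}
```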

Playing for Data: Ground Truth from Computer Games

Title Playing for Data: Ground Truth from Computer Games
Authors Stephan R. Richter, Vibhav Vineet, Stefan Roth, Vladlen Koltun
Abstract Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just 1/3 of the CamVid training set outperform models trained on the complete CamVid training set.
Tasks Semantic Segmentation
Published 2016-08-07
URL http://arxiv.org/abs/1608.02192v1
PDF http://arxiv.org/pdf/1608.02192v1.pdf
PWC https://paperswithcode.com/paper/playing-for-data-ground-truth-from-computer
Repo
Framework

The Many-Body Expansion Combined with Neural Networks

Title The Many-Body Expansion Combined with Neural Networks
Authors Kun Yao, John E. Herr, John Parkhill
Abstract Fragmentation methods such as the many-body expansion (MBE) are a common strategy for modeling large systems by partitioning energies into a hierarchy of decreasingly significant contributions. The number of fragments required for chemical accuracy is still prohibitively expensive for ab-initio MBE to compete with force field approximations for applications beyond single-point energies. Alongside the MBE, empirical models of ab-initio potential energy surfaces have improved, especially non-linear models based on neural networks (NNs), which can reproduce ab-initio potential energy surfaces rapidly and accurately. Although they are fast, NNs suffer from their own curse of dimensionality; they must be trained on a representative sample of chemical space. In this paper we examine the synergy of the MBE and NNs, and explore their complementarity. The MBE offers a systematic way to treat systems of arbitrary size and to intelligently sample chemical space. NNs reduce, by a factor in excess of $10^6$, the computational overhead of the MBE and reproduce the accuracy of ab-initio calculations without specialized force fields. We show they are remarkably general, providing comparable accuracy with drastically different chemical embeddings. To assess this, we test a new chemical embedding which can be inverted to predict molecules with desired properties.
Tasks
Published 2016-09-22
URL http://arxiv.org/abs/1609.07072v1
PDF http://arxiv.org/pdf/1609.07072v1.pdf
PWC https://paperswithcode.com/paper/the-many-body-expansion-combined-with-neural
Repo
Framework
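
The expansion the abstract refers to truncates $E \approx \sum_i E_i + \sum_{i<j} \Delta E_{ij} + \dots$, with $\Delta E_{ij} = E_{ij} - E_i - E_j$. Here is a sketch of the two-body truncation with the energy oracle abstracted, so a trained NN surrogate can stand in for an ab-initio call; fragments are assumed to be concatenable (e.g., lists of atoms).

```python
from itertools import combinations

def mbe2_energy(fragments, energy):
    """Two-body many-body expansion. `energy` is any callable mapping a
    fragment (or merged pair of fragments) to an energy; a trained NN
    surrogate would be plugged in here in place of an ab-initio call."""
    one_body = {i: energy(f) for i, f in enumerate(fragments)}
    total = sum(one_body.values())
    for i, j in combinations(range(len(fragments)), 2):
        dimer = fragments[i] + fragments[j]        # merged dimer fragment
        total += energy(dimer) - one_body[i] - one_body[j]
    return total
```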

Multi-task Domain Adaptation for Sequence Tagging

Title Multi-task Domain Adaptation for Sequence Tagging
Authors Nanyun Peng, Mark Dredze
Abstract Many domain adaptation approaches rely on learning cross-domain shared representations to transfer the knowledge learned in one domain to other domains. Traditional domain adaptation only considers adapting for one task. In this paper, we explore multi-task representation learning under the domain adaptation scenario. We propose a neural network framework that supports domain adaptation for multiple tasks simultaneously and learns shared representations that generalize better for domain adaptation. We apply the proposed framework to domain adaptation for sequence tagging problems, considering two tasks: Chinese word segmentation and named entity recognition. Experiments show that multi-task domain adaptation works better than disjoint domain adaptation for each task, and achieves state-of-the-art results for both tasks in the social media domain.
Tasks Chinese Word Segmentation, Domain Adaptation, Named Entity Recognition, Representation Learning
Published 2016-08-09
URL http://arxiv.org/abs/1608.02689v2
PDF http://arxiv.org/pdf/1608.02689v2.pdf
PWC https://paperswithcode.com/paper/multi-task-domain-adaptation-for-sequence
Repo
Framework
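
A minimal PyTorch sketch of the general architecture pattern the abstract describes (shared encoder, task- and domain-specific output heads). The layer sizes, label counts, and head layout here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiTaskDATagger(nn.Module):
    """Shared BiLSTM encoder with one tagging head per (task, domain) pair."""
    def __init__(self, vocab_size, embed_dim=100, hidden=200,
                 task_labels=(5, 9), n_domains=2):   # hypothetical sizes
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden // 2, batch_first=True,
                               bidirectional=True)   # shared representation
        self.heads = nn.ModuleList([
            nn.ModuleList([nn.Linear(hidden, n_labels)
                           for _ in range(n_domains)])
            for n_labels in task_labels])

    def forward(self, tokens, task, domain):
        h, _ = self.encoder(self.embed(tokens))      # (batch, seq, hidden)
        return self.heads[task][domain](h)           # per-token label logits
```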

Towards the effectiveness of Deep Convolutional Neural Network based Fast Random Forest Classifier

Title Towards the effectiveness of Deep Convolutional Neural Network based Fast Random Forest Classifier
Authors Mrutyunjaya Panda
Abstract Deep learning is a relatively young area of machine learning research that has proven effective in dealing with complex, high-dimensional datasets, including but not limited to images, text, and speech, using multiple levels of representation and abstraction. Since a plethora of research by various groups already exists on these datasets, outperforming it demands careful attention. Unlike conventional methods with limited parameter settings, careful tuning of deep learning parameters is of paramount importance in order to avoid overfitting. A deep convolutional neural network (DCNN), with multiple layers of composition and appropriate settings, can be an efficient machine learning method that greatly outperforms conventional methods. However, due to its slow learning, there is always a chance of overfitting during the feature selection process, which can be addressed by employing a regularization method called dropout. The Fast Random Forest (FRF) is a powerful ensemble classifier, especially when datasets are noisy and when the number of attributes is large in comparison to the number of instances, as is the case for bioinformatics datasets. Several publicly available bioinformatics, handwritten digit recognition, and image segmentation datasets are considered for evaluation of the proposed approach. The excellent performance obtained by the proposed DCNN-based feature selection with the FRF classifier on high-dimensional datasets makes it a fast and accurate classifier in comparison with the state of the art.
Tasks Feature Selection, Semantic Segmentation
Published 2016-09-28
URL http://arxiv.org/abs/1609.08864v1
PDF http://arxiv.org/pdf/1609.08864v1.pdf
PWC https://paperswithcode.com/paper/towards-the-effectiveness-of-deep
Repo
Framework
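
The pipeline the abstract describes is a two-stage one: a DCNN produces (and implicitly selects) features, and a random forest does the final classification. A sketch with scikit-learn's RandomForestClassifier standing in for Weka's FastRandomForest, and the DCNN features assumed to be precomputed:

```python
from sklearn.ensemble import RandomForestClassifier

def dcnn_then_forest(train_features, train_labels, test_features):
    """Stage 2 of the pipeline: fit an ensemble of trees on features taken
    from a DCNN's penultimate layer (assumed precomputed as 2-D arrays)."""
    forest = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    forest.fit(train_features, train_labels)
    return forest.predict(test_features)
```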

Models, networks and algorithmic complexity

Title Models, networks and algorithmic complexity
Authors Giulio Ruffini
Abstract I aim to show that models, classification or generating functions, invariances and datasets are algorithmically equivalent concepts once properly defined, and provide some concrete examples of them. I then show that a) neural networks (NNs) of different kinds can be seen to implement models, b) that perturbations of inputs and nodes in NNs trained to optimally implement simple models propagate strongly, c) that there is a framework in which recurrent, deep and shallow networks can be seen to fall into a descriptive power hierarchy in agreement with notions from the theory of recursive functions. The motivation for these definitions and following analysis lies in the context of cognitive neuroscience, and in particular in Ruffini (2016), where the concept of model is used extensively, as is the concept of algorithmic complexity.
Tasks
Published 2016-12-13
URL http://arxiv.org/abs/1612.05627v1
PDF http://arxiv.org/pdf/1612.05627v1.pdf
PWC https://paperswithcode.com/paper/models-networks-and-algorithmic-complexity
Repo
Framework

Multidimensional Scaling on Multiple Input Distance Matrices

Title Multidimensional Scaling on Multiple Input Distance Matrices
Authors Song Bai, Xiang Bai, Longin Jan Latecki, Qi Tian
Abstract Multidimensional Scaling (MDS) is a classic technique that seeks vectorial representations for data points, given the pairwise distances between them. However, in recent years, data are usually collected from diverse sources or have multiple heterogeneous representations. How to perform multidimensional scaling on multiple input distance matrices is, to the best of our knowledge, still unsolved. In this paper, we first define this new task formally. Then, we propose a new algorithm called Multi-View Multidimensional Scaling (MVMDS) that treats each input distance matrix as one view. Our algorithm is able to learn the weights of the views (i.e., distance matrices) automatically by exploring the consensus information and complementary nature of the views. Experimental results on synthetic as well as real datasets demonstrate the effectiveness of MVMDS. We hope that our work encourages wider consideration of MDS in the many domains where it is needed.
Tasks
Published 2016-05-01
URL http://arxiv.org/abs/1605.00286v2
PDF http://arxiv.org/pdf/1605.00286v2.pdf
PWC https://paperswithcode.com/paper/multidimensional-scaling-on-multiple-input
Repo
Framework
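
A heavily simplified sketch of the multi-view idea: embed from a weighted combination of the views' double-centered Gram matrices, then re-weight each view inversely to its stress. The actual MVMDS objective and weight update differ; this only illustrates how view weights and a common embedding can be alternated.

```python
import numpy as np

def mvmds_sketch(dist_mats, dim=2, iters=10):
    """Alternate a classical-MDS embedding of the weighted views with an
    inverse-stress re-weighting of the views (illustrative only)."""
    n = dist_mats[0].shape[0]
    J = np.eye(n) - np.ones((n, n)) / n              # double-centering matrix
    grams = [-0.5 * J @ (D ** 2) @ J for D in dist_mats]
    w = np.ones(len(dist_mats)) / len(dist_mats)
    for _ in range(iters):
        B = sum(wi * G for wi, G in zip(w, grams))   # weighted consensus Gram
        vals, vecs = np.linalg.eigh(B)               # ascending eigenvalues
        X = vecs[:, -dim:] * np.sqrt(np.maximum(vals[-dim:], 0.0))
        pdist = np.linalg.norm(X[:, None] - X[None], axis=-1)
        stress = np.array([np.linalg.norm(pdist - D) for D in dist_mats])
        w = 1.0 / (stress + 1e-12)                   # favor well-fit views
        w /= w.sum()
    return X, w
```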

Do We Really Need to Collect Millions of Faces for Effective Face Recognition?

Title Do We Really Need to Collect Millions of Faces for Effective Face Recognition?
Authors Iacopo Masi, Anh Tuan Tran, Jatuporn Toy Leksut, Tal Hassner, Gerard Medioni
Abstract Face recognition capabilities have recently made extraordinary leaps. Though this progress is at least partially due to ballooning training set sizes (huge numbers of face images downloaded and labeled for identity), it is not clear if the formidable task of collecting so many images is truly necessary. We propose a far more accessible means of increasing training data sizes for face recognition systems. Rather than manually harvesting and labeling more faces, we simply synthesize them. We describe novel methods of enriching an existing dataset with important facial appearance variations by manipulating the faces it contains. We further apply this synthesis approach when matching query images represented using a standard convolutional neural network. The effect of training and testing with synthesized images is extensively tested on the LFW and IJB-A (verification and identification) benchmarks and Janus CS2. The performance obtained by our approach matches the state-of-the-art results reported by systems trained on millions of downloaded images.
Tasks Face Recognition, Face Verification
Published 2016-03-23
URL http://arxiv.org/abs/1603.07057v2
PDF http://arxiv.org/pdf/1603.07057v2.pdf
PWC https://paperswithcode.com/paper/do-we-really-need-to-collect-millions-of
Repo
Framework

A Reinforcement Learning Approach to the View Planning Problem

Title A Reinforcement Learning Approach to the View Planning Problem
Authors Mustafa Devrim Kaba, Mustafa Gokhan Uzunbas, Ser Nam Lim
Abstract We present a Reinforcement Learning (RL) solution to the view planning problem (VPP), which generates a sequence of view points capable of sensing all accessible areas of a given object represented as a 3D model. In doing so, the goal is to minimize the number of view points, making the VPP an instance of the set covering optimization problem (SCOP). The SCOP is NP-hard, and inapproximability results tell us that the greedy algorithm provides the best approximation that runs in polynomial time. In order to find a solution that is better than the greedy algorithm, (i) we introduce a novel score function by exploiting the geometry of the 3D model, (ii) we model an intuitive human approach to the VPP using this score function, and (iii) we cast the VPP as a Markov Decision Process (MDP) and solve the MDP in an RL framework using well-known RL algorithms. In particular, we use SARSA, Watkins-Q and TD with function approximation to solve the MDP. We compare the results of our method with the baseline greedy algorithm on an extensive set of test objects, and show that we can outperform the baseline in almost all cases.
Tasks
Published 2016-10-19
URL http://arxiv.org/abs/1610.06204v2
PDF http://arxiv.org/pdf/1610.06204v2.pdf
PWC https://paperswithcode.com/paper/a-reinforcement-learning-approach-to-the-view
Repo
Framework
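
For reference, here is the tabular form of SARSA, one of the three RL algorithms named in the abstract (the paper pairs it with function approximation; a lookup table keeps the sketch self-contained). The `env` interface, with `reset()` returning a hashable state and `step(a)` returning `(state, reward, done)`, is a hypothetical stand-in for the view-planning MDP.

```python
import numpy as np

def sarsa(env, n_actions, alpha=0.1, gamma=0.99, eps=0.1,
          episodes=500, seed=0):
    """Tabular SARSA with an epsilon-greedy policy."""
    rng = np.random.default_rng(seed)
    Q = {}

    def q(s):
        return Q.setdefault(s, np.zeros(n_actions))

    def policy(s):
        if rng.random() < eps:
            return int(rng.integers(n_actions))    # explore
        return int(q(s).argmax())                  # exploit

    for _ in range(episodes):
        s = env.reset()
        a = policy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)              # hypothetical interface
            a2 = policy(s2)
            target = r + (0.0 if done else gamma * q(s2)[a2])
            q(s)[a] += alpha * (target - q(s)[a])  # on-policy TD update
            s, a = s2, a2
    return Q
```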

Fast Image Classification by Boosting Fuzzy Classifiers

Title Fast Image Classification by Boosting Fuzzy Classifiers
Authors Marcin Korytkowski, Leszek Rutkowski, Rafał Scherer
Abstract This paper presents a novel approach to visual object classification based on generating simple fuzzy classifiers that use local image features to distinguish between one known class and other classes. Boosting meta-learning is used to find the most representative local features. The proposed approach is tested on a state-of-the-art image dataset and compared with the bag-of-features image representation model combined with Support Vector Machine classification. The novel method gives better classification accuracy, and the learning and testing time is more than 30% shorter.
Tasks Image Classification, Meta-Learning
Published 2016-10-04
URL http://arxiv.org/abs/1610.01068v1
PDF http://arxiv.org/pdf/1610.01068v1.pdf
PWC https://paperswithcode.com/paper/fast-image-classification-by-boosting-fuzzy
Repo
Framework
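
A generic AdaBoost loop of the kind the abstract relies on; the weak learners (simple fuzzy rules over local image features in the paper) are abstracted as factory callables `make(X, y, w)` returning a predictor, so this is an illustrative skeleton rather than the authors' method.

```python
import numpy as np

def adaboost(weak_learner_factories, X, y, rounds=10):
    """Boost weighted weak classifiers; y and predictions share one label set."""
    n = len(y)
    w = np.ones(n) / n
    ensemble = []
    for _ in range(rounds):
        best, best_err, best_pred = None, np.inf, None
        for make in weak_learner_factories:
            h = make(X, y, w)                        # fit on weighted data
            pred = h(X)
            err = float(w @ (pred != y))
            if err < best_err:
                best, best_err, best_pred = h, err, pred
        alpha = 0.5 * np.log((1.0 - best_err) / max(best_err, 1e-12))
        w = w * np.exp(alpha * (best_pred != y))     # upweight mistakes
        w /= w.sum()
        ensemble.append((alpha, best))
    return ensemble
```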