October 19, 2019

3271 words 16 mins read

Paper Group ANR 123

Measuring and regularizing networks in function space. Classification of sparsely labeled spatio-temporal data through semi-supervised adversarial learning. DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction. Coarse to fine non-rigid registration: a chain of scale-specific neural networks for multimodal image alignme …

Measuring and regularizing networks in function space

Title Measuring and regularizing networks in function space
Authors Ari S. Benjamin, David Rolnick, Konrad Kording
Abstract To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately a matter of optimizing the function that maps inputs to outputs. Since a change in the parameters might serve as a poor proxy for the change in the function, it is of some concern that primacy is given to parameters even though the correspondence has not been tested. Here, we show that it is simple and computationally feasible to calculate distances between functions in an $L^2$ Hilbert space. We examine how typical networks behave in this space, and compare parameter $\ell^2$ distances to function $L^2$ distances between various points of an optimization trajectory. We find that the two distances are nontrivially related. In particular, the $L^2/\ell^2$ ratio decreases throughout optimization, reaching a steady value around the point when test error plateaus. We then investigate how the $L^2$ distance could be applied directly to optimization. We first propose that in multitask learning, one can avoid catastrophic forgetting by directly limiting how much the input/output function changes between tasks. Secondly, we propose a new learning rule that constrains the distance a network can travel through $L^2$-space in any one update. This allows new examples to be learned in a way that minimally interferes with what has previously been learned. These applications demonstrate how one can measure and regularize function distances directly, without relying on parameters or local approximations like loss curvature.
Tasks
Published 2018-05-21
URL https://arxiv.org/abs/1805.08289v3
PDF https://arxiv.org/pdf/1805.08289v3.pdf
PWC https://paperswithcode.com/paper/measuring-and-regularizing-networks-in
Repo
Framework
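
As a rough illustration of the paper's central quantity, the sketch below estimates the $L^2$ distance between two functions by Monte Carlo over a sample of inputs. The helper name `function_l2_distance` and the toy linear "networks" are illustrative assumptions, not the authors' code.

```python
import numpy as np

def function_l2_distance(f, g, inputs):
    """Monte Carlo estimate of the L2 distance between functions f and g,
    i.e. sqrt(E_x ||f(x) - g(x)||^2) over a sample of inputs."""
    diffs = np.array([f(x) - g(x) for x in inputs])          # (N, output_dim)
    return np.sqrt(np.mean(np.sum(diffs ** 2, axis=1)))

# Toy usage: two linear maps stand in for two snapshots of a network.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(3, 5))
xs = rng.normal(size=(1000, 5))
print(function_l2_distance(lambda x: W1 @ x, lambda x: W2 @ x, xs))
```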

Classification of sparsely labeled spatio-temporal data through semi-supervised adversarial learning

Title Classification of sparsely labeled spatio-temporal data through semi-supervised adversarial learning
Authors Atanas Mirchev, Seyed-Ahmad Ahmadi
Abstract In recent years, Generative Adversarial Networks (GAN) have emerged as a powerful method for learning the mapping from noisy latent spaces to realistic data samples in high-dimensional space. So far, the development and application of GANs have been predominantly focused on spatial data such as images. In this project, we instead aim to model spatio-temporal sensor data, i.e. dynamic data over time. The main goal is to encode temporal data into a global and low-dimensional latent vector that captures the dynamics of the spatio-temporal signal. To this end, we incorporate auto-regressive RNNs, the Wasserstein GAN loss, spectral norm weight constraints and a semi-supervised learning scheme into InfoGAN, a method for retrieval of meaningful latents in adversarial learning. To demonstrate the modeling capability of our method, we encode full-body skeletal human motion from a large dataset representing 60 classes of daily activities, recorded in a multi-Kinect setup. Initial results indicate competitive classification performance of the learned latent representations, compared to direct CNN/RNN inference. In future work, we plan to apply this method to a related problem in the medical domain, namely the recovery of meaningful latents in gait analysis of patients with vertigo and balance disorders.
Tasks
Published 2018-01-26
URL http://arxiv.org/abs/1801.08712v2
PDF http://arxiv.org/pdf/1801.08712v2.pdf
PWC https://paperswithcode.com/paper/classification-of-sparsely-labeled-spatio
Repo
Framework
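
The following is a minimal sketch of the kind of sequence critic the abstract describes (an auto-regressive RNN summary with a spectral-norm weight constraint and a Wasserstein loss), assuming PyTorch. The class name, layer sizes, and the 75-dimensional skeleton input are illustrative placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SequenceCritic(nn.Module):
    """Illustrative Wasserstein critic for skeletal motion sequences: an
    auto-regressive GRU summarises the sequence, and a spectrally normalised
    linear head produces a single real-valued score."""
    def __init__(self, in_dim=75, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.utils.spectral_norm(nn.Linear(hidden, 1))

    def forward(self, x):              # x: (batch, time, joints * coords)
        _, h = self.rnn(x)
        return self.head(h[-1])        # one critic score per sequence

def wgan_losses(critic, real, fake):
    """Standard Wasserstein GAN losses for the critic and the generator."""
    d_loss = critic(fake).mean() - critic(real).mean()
    g_loss = -critic(fake).mean()
    return d_loss, g_loss
```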

DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction

Title DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction
Authors Pengda Qin, Weiran Xu, William Yang Wang
Abstract Distant supervision can effectively label data for relation extraction, but it suffers from the noisy labeling problem. Recent works mainly perform soft bag-level noise reduction strategies to find the relatively better samples in a sentence bag, which is suboptimal compared with making a hard decision about false positive samples at the sentence level. In this paper, we introduce an adversarial learning framework, which we name DSGAN, to learn a sentence-level true-positive generator. Inspired by Generative Adversarial Networks, we regard the positive samples generated by the generator as negative samples to train the discriminator. The optimal generator is obtained when the discrimination ability of the discriminator shows the greatest decline. We use the generator to filter the distant supervision training dataset and redistribute the false positive instances into the negative set, thereby providing a cleaned dataset for relation classification. The experimental results show that the proposed strategy significantly improves the performance of distant supervision relation extraction compared to state-of-the-art systems.
Tasks Relation Classification, Relation Extraction
Published 2018-05-24
URL http://arxiv.org/abs/1805.09929v1
PDF http://arxiv.org/pdf/1805.09929v1.pdf
PWC https://paperswithcode.com/paper/dsgan-generative-adversarial-training-for
Repo
Framework
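
A minimal sketch of the filtering step the abstract describes: after adversarial training, the generator's scores are used to split each distantly supervised bag into cleaned positives and redistributed negatives. The function name, threshold, and stand-in scorer are assumptions for illustration, not the authors' implementation.

```python
def filter_distant_supervision(generator_prob, bag_sentences, threshold=0.5):
    """Post-training filtering sketch: the generator scores each distantly
    supervised sentence as a true positive; low-scoring sentences are
    redistributed into the negative set, yielding a cleaned dataset."""
    positives, negatives = [], []
    for s in bag_sentences:
        if generator_prob(s) >= threshold:
            positives.append(s)        # kept as a cleaned positive instance
        else:
            negatives.append(s)        # likely false positive -> negative set
    return positives, negatives

# Toy usage for the relation "founder" with a stand-in scorer
# (the real generator is a trained neural network):
score = lambda s: 0.9 if "founded" in s else 0.1
pos, neg = filter_distant_supervision(
    score, ["Gates founded Microsoft in 1975.", "Gates spoke at a Microsoft event."])
print(pos, neg)
```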

Coarse to fine non-rigid registration: a chain of scale-specific neural networks for multimodal image alignment with application to remote sensing

Title Coarse to fine non-rigid registration: a chain of scale-specific neural networks for multimodal image alignment with application to remote sensing
Authors Armand Zampieri, Guillaume Charpiat, Yuliya Tarabalka
Abstract We tackle the problem of multimodal non-rigid image registration, which is of prime importance in remote sensing and medical imaging. The difficulties encountered by classical registration approaches include feature design and slow optimization by gradient descent. By analyzing these methods, we note the significance of the notion of scale. We design easy-to-train, fully-convolutional neural networks able to learn scale-specific features. Once chained appropriately, they perform global registration in linear time, getting rid of gradient descent schemes by predicting the deformation directly. We show their performance in terms of quality and speed through various tasks of remote sensing multimodal image alignment. In particular, we are able to correctly register cadastral maps of buildings as well as road polylines onto RGB images, and outperform current keypoint matching methods.
Tasks
Published 2018-02-27
URL http://arxiv.org/abs/1802.09816v1
PDF http://arxiv.org/pdf/1802.09816v1.pdf
PWC https://paperswithcode.com/paper/coarse-to-fine-non-rigid-registration-a-chain
Repo
Framework
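
The sketch below illustrates the coarse-to-fine chaining idea under a few assumptions: each scale-specific network maps a concatenated (moving, fixed) pair to a residual flow at its scale, and the flows are upsampled and accumulated. It is written in PyTorch with an illustrative interface, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Bilinearly warp `img` by a dense flow field (in pixels) via grid_sample."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().unsqueeze(0)   # (1, 2, h, w)
    grid = base + flow
    gx = 2 * grid[:, 0] / (w - 1) - 1                          # normalise to [-1, 1]
    gy = 2 * grid[:, 1] / (h - 1) - 1
    return F.grid_sample(img, torch.stack([gx, gy], dim=-1), align_corners=True)

def coarse_to_fine_register(nets, moving, fixed, scales=(8, 4, 2, 1)):
    """Chain scale-specific networks from coarse to fine: each net sees the
    (partially warped) moving image and the fixed image at its scale and
    predicts a residual flow, which is upsampled and accumulated."""
    b, _, h, w = moving.shape
    flow = torch.zeros(b, 2, h, w)
    for net, s in zip(nets, scales):
        mov_s = F.interpolate(warp(moving, flow), scale_factor=1 / s, mode="bilinear")
        fix_s = F.interpolate(fixed, scale_factor=1 / s, mode="bilinear")
        residual = net(torch.cat([mov_s, fix_s], dim=1))   # flow at scale s (pixels)
        flow = flow + F.interpolate(residual, size=(h, w), mode="bilinear") * s
    return flow
```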

Scene Text Recognition from Two-Dimensional Perspective

Title Scene Text Recognition from Two-Dimensional Perspective
Authors Minghui Liao, Jian Zhang, Zhaoyi Wan, Fengming Xie, Jiajun Liang, Pengyuan Lyu, Cong Yao, Xiang Bai
Abstract Inspired by speech recognition, recent state-of-the-art algorithms mostly consider scene text recognition as a sequence prediction problem. Though achieving excellent performance, these methods usually neglect the important fact that text in images is actually distributed in two-dimensional space. This nature is quite different from that of speech, which is essentially a one-dimensional signal. In principle, directly compressing features of text into a one-dimensional form may lose useful information and introduce extra noise. In this paper, we approach scene text recognition from a two-dimensional perspective. A simple yet effective model, called Character Attention Fully Convolutional Network (CA-FCN), is devised for recognizing text of arbitrary shapes. Scene text recognition is realized with a semantic segmentation network, in which an attention mechanism for characters is adopted. Combined with a word formation module, CA-FCN can simultaneously recognize the script and predict the position of each character. Experiments demonstrate that the proposed algorithm outperforms previous methods on both regular and irregular text datasets. Moreover, it is proven to be more robust to imprecise localizations in the text detection phase, which are very common in practice.
Tasks Scene Text Recognition, Semantic Segmentation, Speech Recognition
Published 2018-09-18
URL http://arxiv.org/abs/1809.06508v2
PDF http://arxiv.org/pdf/1809.06508v2.pdf
PWC https://paperswithcode.com/paper/scene-text-recognition-from-two-dimensional
Repo
Framework
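
To make the word-formation idea concrete, here is a small sketch that turns a per-pixel character map (the kind of output a segmentation-based recogniser such as CA-FCN produces) into an ordered character sequence. The grouping by connected components and the majority vote are assumptions for illustration, not the paper's exact module.

```python
import numpy as np
from scipy import ndimage

def form_word(char_map, background=0):
    """Word-formation sketch: connected regions of non-background pixels are
    treated as characters, each region votes for its majority class, and the
    regions are read left to right."""
    labels, n = ndimage.label(char_map != background)
    chars = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        majority = np.bincount(char_map[ys, xs]).argmax()   # most common class in region
        chars.append((xs.mean(), majority))                 # (horizontal position, class)
    return [c for _, c in sorted(chars)]                    # order characters by x position

# Toy map: two blobs labelled with character classes 3 and 1.
toy = np.zeros((8, 16), dtype=int)
toy[2:6, 1:5] = 3
toy[2:6, 9:13] = 1
print(form_word(toy))   # -> [3, 1]
```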

A Geometric Perspective on the Transferability of Adversarial Directions

Title A Geometric Perspective on the Transferability of Adversarial Directions
Authors Zachary Charles, Harrison Rosenberg, Dimitris Papailiopoulos
Abstract State-of-the-art machine learning models frequently misclassify inputs that have been perturbed in an adversarial manner. Adversarial perturbations generated for a given input and a specific classifier often seem to be effective on other inputs and even different classifiers. In other words, adversarial perturbations seem to transfer between different inputs, models, and even different neural network architectures. In this work, we show that in the context of linear classifiers and two-layer ReLU networks, there provably exist directions that give rise to adversarial perturbations for many classifiers and data points simultaneously. We show that these “transferable adversarial directions” are guaranteed to exist for linear separators of a given set, and will exist with high probability for linear classifiers trained on independent sets drawn from the same distribution. We extend our results to large classes of two-layer ReLU networks. We further show that adversarial directions for ReLU networks transfer to linear classifiers while the reverse need not hold, suggesting that adversarial perturbations for more complex models are more likely to transfer to other classifiers. We validate our findings empirically, even for deeper ReLU networks.
Tasks
Published 2018-11-08
URL http://arxiv.org/abs/1811.03531v1
PDF http://arxiv.org/pdf/1811.03531v1.pdf
PWC https://paperswithcode.com/paper/a-geometric-perspective-on-the
Repo
Framework
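
A worked example of the linear-classifier case discussed above: for a linear separator, a single direction derived from the weight vector moves every point toward the decision boundary, so it is adversarial for all inputs at once. The helper names and constants are illustrative.

```python
import numpy as np

def adversarial_direction(w):
    """For a linear classifier sign(w.x + b), the unit direction along -w is
    adversarial for every positively classified point (and +w for negative
    ones): it is the single most margin-reducing direction for all inputs."""
    return -w / np.linalg.norm(w)

def perturb(x, y, w, eps):
    """Move x by eps along the direction that reduces its margin y * (w.x + b)."""
    return x - eps * y * w / np.linalg.norm(w)

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.0
x = rng.normal(size=5)
y = np.sign(w @ x + b)
margin = abs(w @ x + b) / np.linalg.norm(w)
x_adv = perturb(x, y, w, eps=2 * margin)       # step just past the boundary
print(np.sign(w @ x_adv + b), "vs original label", y)
```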

Locally Private Learning without Interaction Requires Separation

Title Locally Private Learning without Interaction Requires Separation
Authors Amit Daniely, Vitaly Feldman
Abstract We consider learning under the constraint of local differential privacy (LDP). For many learning problems, the known efficient algorithms in this model require many rounds of communication between the server and the clients holding the data points. Yet multi-round protocols are prohibitively slow in practice due to network latency and, as a result, currently deployed large-scale systems are limited to a single round. Despite significant research interest, very little is known about which learning problems can be solved by such non-interactive systems. The only lower bound we are aware of is for PAC learning an artificial class of functions with respect to a uniform distribution (Kasiviswanathan et al. 2011). We show that the margin complexity of a class of Boolean functions is a lower bound on the complexity of any non-interactive LDP algorithm for distribution-independent PAC learning of the class. In particular, the classes of linear separators and decision lists require an exponential number of samples to learn non-interactively even though they can be learned in polynomial time by an interactive LDP algorithm. This gives the first example of a natural problem that is significantly harder to solve without interaction and also resolves an open problem of Kasiviswanathan et al. (2011). We complement this lower bound with a new efficient learning algorithm whose complexity is polynomial in the margin complexity of the class. Our algorithm is non-interactive on labeled samples but still needs interactive access to unlabeled samples. All of our results also apply to the statistical query model and any model in which the number of bits communicated about each data point is constrained.
Tasks
Published 2018-09-24
URL https://arxiv.org/abs/1809.09165v3
PDF https://arxiv.org/pdf/1809.09165v3.pdf
PWC https://paperswithcode.com/paper/learning-without-interaction-requires
Repo
Framework
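
For context on the non-interactive model the lower bound concerns, here is a sketch of the classic single-round LDP primitive, randomized response, with a debiased mean estimate. It illustrates the protocol class the paper reasons about, not the paper's algorithm.

```python
import numpy as np

def randomized_response(bit, eps, rng):
    """One-bit randomized response, the basic non-interactive LDP primitive:
    report the true bit with probability e^eps / (e^eps + 1)."""
    p = np.exp(eps) / (np.exp(eps) + 1)
    return bit if rng.random() < p else 1 - bit

def ldp_mean(bits, eps, rng):
    """Debiased estimate of the mean of private bits after a single round."""
    p = np.exp(eps) / (np.exp(eps) + 1)
    noisy = np.array([randomized_response(b, eps, rng) for b in bits])
    return (noisy.mean() - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=10000)
print(bits.mean(), ldp_mean(bits, eps=1.0, rng=rng))
```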

Learning Data-Driven Objectives to Optimize Interactive Systems

Title Learning Data-Driven Objectives to Optimize Interactive Systems
Authors Ziming Li, Julia Kiseleva, Alekh Agarwal, Maarten de Rijke
Abstract Effective optimization is essential for interactive systems to provide a satisfactory user experience. However, it is often challenging to find an objective to optimize for. Generally, such objectives are manually crafted and rarely capture complex user needs in an accurate manner. We propose an approach that infers the objective directly from observed user interactions. These inferences can be made regardless of prior knowledge and across different types of user behavior. We introduce interactive system optimization, a novel algorithm that uses these inferred objectives for optimization. Our main contribution is a new general principled approach to optimizing interactive systems using data-driven objectives. We demonstrate the high effectiveness of interactive system optimization over several simulations.
Tasks
Published 2018-02-17
URL https://arxiv.org/abs/1802.06306v8
PDF https://arxiv.org/pdf/1802.06306v8.pdf
PWC https://paperswithcode.com/paper/optimizing-interactive-systems-with-data
Repo
Framework

Double Supervised Network with Attention Mechanism for Scene Text Recognition

Title Double Supervised Network with Attention Mechanism for Scene Text Recognition
Authors Yuting Gao, Zheng Huang, Yuchen Dai, Cheng Xu, Kai Chen, Jie Tuo
Abstract In this paper, we propose the Double Supervised Network with Attention Mechanism (DSAN), a novel end-to-end trainable framework for scene text recognition. It incorporates a text attention module during feature extraction which forces the model to focus on text regions, and the whole framework is supervised by two branches. One supervision branch comes from context-level modelling and the other comes from an extra supervision enhancement branch which aims at tackling inexplicit semantic information at the character level. These two supervisions can benefit each other and yield better performance. The proposed approach can recognize text of arbitrary length and does not need any predefined lexicon. Our method outperforms the current state-of-the-art methods on three text recognition benchmarks, IIIT5K, ICDAR2013 and SVT, reaching accuracies of 88.6%, 92.3% and 84.1% respectively, which demonstrates the effectiveness of the proposed method.
Tasks Scene Text Recognition
Published 2018-08-02
URL https://arxiv.org/abs/1808.00677v3
PDF https://arxiv.org/pdf/1808.00677v3.pdf
PWC https://paperswithcode.com/paper/double-supervised-network-with-attention
Repo
Framework

Argumentation theory for mathematical argument

Title Argumentation theory for mathematical argument
Authors Joseph Corneli, Ursula Martin, Dave Murray-Rust, Gabriela Rino Nesin, Alison Pease
Abstract To adequately model mathematical arguments the analyst must be able to represent the mathematical objects under discussion and the relationships between them, as well as inferences drawn about these objects and relationships as the discourse unfolds. We introduce a framework with these properties, which has been used to analyse mathematical dialogues and expository texts. The framework can recover salient elements of discourse at, and within, the sentence level, as well as the way mathematical content connects to form larger argumentative structures. We show how the framework might be used to support computational reasoning, and argue that it provides a more natural way to examine the process of proving theorems than do Lamport’s structured proofs.
Tasks
Published 2018-03-17
URL http://arxiv.org/abs/1803.06500v2
PDF http://arxiv.org/pdf/1803.06500v2.pdf
PWC https://paperswithcode.com/paper/argumentation-theory-for-mathematical
Repo
Framework

Racial categories in machine learning

Title Racial categories in machine learning
Authors Sebastian Benthall, Bruce D. Haynes
Abstract Controversies around race and machine learning have sparked debate among computer scientists over how to design machine learning systems that guarantee fairness. These debates rarely engage with how racial identity is embedded in our social experience, making for sociological and psychological complexity. This complexity challenges the paradigm of considering fairness to be a formal property of supervised learning with respect to protected personal attributes. Racial identity is not simply a personal subjective quality. For people labeled “Black” it is an ascribed political category that has consequences for social differentiation embedded in systemic patterns of social inequality achieved through both social and spatial segregation. In the United States, racial classification can best be understood as a system of inherently unequal status categories that places whites as the most privileged category while signifying the Negro/black category as stigmatized. Social stigma is reinforced through the unequal distribution of societal rewards and goods along racial lines that is reinforced by state, corporate, and civic institutions and practices. This creates a dilemma for society and designers: be blind to racial group disparities and thereby reify racialized social inequality by no longer measuring systemic inequality, or be conscious of racial categories in a way that itself reifies race. We propose a third option. By preceding group fairness interventions with unsupervised learning to dynamically detect patterns of segregation, machine learning systems can mitigate the root cause of social disparities, social segregation and stratification, without further anchoring status categories of disadvantage.
Tasks
Published 2018-11-28
URL http://arxiv.org/abs/1811.11668v1
PDF http://arxiv.org/pdf/1811.11668v1.pdf
PWC https://paperswithcode.com/paper/racial-categories-in-machine-learning
Repo
Framework

SCAN: Sliding Convolutional Attention Network for Scene Text Recognition

Title SCAN: Sliding Convolutional Attention Network for Scene Text Recognition
Authors Yi-Chao Wu, Fei Yin, Xu-Yao Zhang, Li Liu, Cheng-Lin Liu
Abstract Scene text recognition has drawn great attention in the computer vision and artificial intelligence communities due to its challenges and wide applications. State-of-the-art recurrent neural network (RNN) based models map an input sequence to a variable-length output sequence, but are usually applied in a black-box manner and lack transparency for further improvement, and maintaining the entire past hidden states prevents parallel computation within a sequence. In this paper, we investigate the intrinsic characteristics of text recognition and, inspired by human cognition mechanisms in reading texts, we propose a scene text recognition method with a sliding convolutional attention network (SCAN). Similar to eye movement during reading, the process of SCAN can be viewed as an alternation between saccades and visual fixations. Compared to previous recurrent models, computations over all elements of SCAN can be fully parallelized during training. Experimental results on several challenging benchmarks, including the IIIT5k, SVT and ICDAR 2003/2013 datasets, demonstrate the superiority of SCAN over state-of-the-art methods in terms of both model interpretability and performance.
Tasks Scene Text Recognition
Published 2018-06-02
URL http://arxiv.org/abs/1806.00578v1
PDF http://arxiv.org/pdf/1806.00578v1.pdf
PWC https://paperswithcode.com/paper/scan-sliding-convolutional-attention-network
Repo
Framework
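
A rough sketch of the two ingredients named in the abstract, assuming PyTorch: sliding-window convolutional features computed in parallel, and a single attentional read (a "fixation") over them. The interfaces and shapes are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn.functional as F

def sliding_window_features(image, cnn, window=32, stride=4):
    """Sliding convolutional front end: crop overlapping windows from the
    text-line image and encode each with a small CNN, so all windows can be
    processed in parallel. `cnn` maps (B, C, H, window) -> (B, D)."""
    _, _, _, w = image.shape
    feats = [cnn(image[:, :, :, x0:x0 + window])
             for x0 in range(0, w - window + 1, stride)]
    return torch.stack(feats, dim=1)              # (B, T, D): one feature per window

def attend(query, feats):
    """One attentional read (a 'fixation') over the window features."""
    scores = torch.einsum("bd,btd->bt", query, feats)
    alpha = F.softmax(scores, dim=1)              # attention weights over windows
    return torch.einsum("bt,btd->bd", alpha, feats)
```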

Scaling up Probabilistic Inference in Linear and Non-Linear Hybrid Domains by Leveraging Knowledge Compilation

Title Scaling up Probabilistic Inference in Linear and Non-Linear Hybrid Domains by Leveraging Knowledge Compilation
Authors Anton Fuxjaeger, Vaishak Belle
Abstract Weighted model integration (WMI) extends weighted model counting (WMC) by providing a computational abstraction for probabilistic inference in mixed discrete-continuous domains. WMC has emerged as an assembly language for state-of-the-art reasoning in Bayesian networks, factor graphs, probabilistic programs and probabilistic databases. In this regard, WMI shows immense promise to be much more widely applicable, especially as many real-world applications involve attribute and feature spaces that are continuous and mixed. Nonetheless, state-of-the-art tools for WMI are limited and less mature than their propositional counterparts. In this work, we propose a new implementation regime that leverages propositional knowledge compilation for scaling up inference. In particular, we use sentential decision diagrams, a tractable representation of Boolean functions, as the underlying model counting and model enumeration scheme. Our regime performs competitively with state-of-the-art WMI systems and is also shown to handle a specific class of non-linear constraints over non-linear potentials.
Tasks
Published 2018-11-29
URL https://arxiv.org/abs/1811.12127v2
PDF https://arxiv.org/pdf/1811.12127v2.pdf
PWC https://paperswithcode.com/paper/scaling-up-probabilistic-inference-in-linear
Repo
Framework
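
To make WMI concrete, here is a brute-force toy: enumerate assignments to the Boolean variables and integrate a weight function over the continuous region each satisfying assignment induces. Real solvers replace the enumeration with SMT reasoning and knowledge compilation (e.g. the sentential decision diagrams used in the paper); everything below is illustrative.

```python
import itertools
from scipy import integrate

def weighted_model_integration(bool_vars, formula, bounds, weight):
    """Brute-force WMI sketch: enumerate Boolean assignments (the 'model
    counting' part) and, for each satisfying one, integrate the weight
    function over the continuous interval it induces."""
    total = 0.0
    for assign in itertools.product([False, True], repeat=len(bool_vars)):
        a = dict(zip(bool_vars, assign))
        lo, hi = bounds(a)
        if formula(a) and lo < hi:
            val, _ = integrate.quad(lambda x: weight(a, x), lo, hi)
            total += val
    return total

# Worked example: Boolean b, real x; if b then x in [0.5, 1], else x in [0, 1]; weight 2x.
wmi = weighted_model_integration(
    ["b"],
    formula=lambda a: True,
    bounds=lambda a: (0.5, 1.0) if a["b"] else (0.0, 1.0),
    weight=lambda a, x: 2 * x,
)
print(wmi)   # 0.75 + 1.0 = 1.75
```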

Unsupervised Online Learning With Multiple Postsynaptic Neurons Based on Spike-Timing-Dependent Plasticity Using a TFT-Type NOR Flash Memory Array

Title Unsupervised Online Learning With Multiple Postsynaptic Neurons Based on Spike-Timing-Dependent Plasticity Using a TFT-Type NOR Flash Memory Array
Authors Soochang Lee, Chul-Heung Kim, Seongbin Oh, Byung-Gook Park, Jong-Ho Lee
Abstract We present a two-layer fully connected neuromorphic system based on a thin-film transistor (TFT)-type NOR flash memory array with multiple postsynaptic (POST) neurons. Unsupervised online learning by spike-timing-dependent plasticity (STDP) on the binary MNIST handwritten datasets is implemented, and the recognition result is determined by measuring the firing rates of the POST neurons. Using the proposed learning scheme, we investigate the impact of the number of POST neurons on the recognition rate. In this neuromorphic system, a lateral inhibition function and a homeostatic property are exploited for competitive learning among multiple POST neurons. The simulation results demonstrate unsupervised online learning of the full black-and-white MNIST handwritten digits by STDP, which indicates that pattern recognition and classification are possible without preprocessing of the input patterns.
Tasks
Published 2018-11-17
URL http://arxiv.org/abs/1811.07115v1
PDF http://arxiv.org/pdf/1811.07115v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-online-learning-with-multiple
Repo
Framework
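
As background for the learning rule, here is a minimal pair-based STDP update of the kind implemented in the synapse array; the constants are illustrative, not the device parameters reported in the paper.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise, with exponential time windows."""
    dt = t_post - t_pre
    if dt > 0:
        w = w + a_plus * np.exp(-dt / tau)    # pre before post -> potentiation
    else:
        w = w - a_minus * np.exp(dt / tau)    # post before pre -> depression
    return np.clip(w, w_min, w_max)

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))   # slightly potentiated weight
```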

Learning low dimensional word based linear classifiers using Data Shared Adaptive Bootstrap Aggregated Lasso with application to IMDb data

Title Learning low dimensional word based linear classifiers using Data Shared Adaptive Bootstrap Aggregated Lasso with application to IMDb data
Authors Ashutosh K. Maurya
Abstract In this article we propose a new supervised ensemble learning method, called Data Shared Adaptive Bootstrap Aggregated (AdaBag) Lasso, for capturing low-dimensional useful features for word-based sentiment analysis and mining problems. The literature on ensemble methods is very rich in both statistics and machine learning. The algorithm is a substantial upgrade of the Data Shared Lasso uplift algorithm. The most significant conceptual addition to the existing literature lies in the final selection of the bag of predictors through a special bootstrap aggregation scheme. We apply the algorithm to one simulated dataset and perform dimension reduction in grouped IMDb data (drama, comedy and horror) to extract a reduced set of word features for predicting sentiment ratings of movie reviews, demonstrating different aspects. We also compare the performance of the present method with classical Principal Components with associated Linear Discrimination (PCA-LD) as a baseline. There are a few limitations to the algorithm. Firstly, the algorithm workflow does not incorporate online sequential data acquisition, and it does not use sentence-based models, which are common in ANN algorithms. As a consequence, our results show a slightly higher error rate compared to the reported state of the art.
Tasks Dimensionality Reduction, Sentiment Analysis
Published 2018-07-26
URL http://arxiv.org/abs/1807.10623v2
PDF http://arxiv.org/pdf/1807.10623v2.pdf
PWC https://paperswithcode.com/paper/learning-low-dimensional-word-based-linear
Repo
Framework
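
A sketch of the bootstrap-aggregation idea behind AdaBag Lasso, assuming scikit-learn: fit a lasso on many bootstrap resamples and keep the features selected in a large fraction of bags. The adaptive weighting and data-sharing components of the actual algorithm are omitted, and all names and constants are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def adabag_lasso_features(X, y, n_bags=50, alpha=0.05, keep_frac=0.6, seed=0):
    """Fit a lasso on bootstrap resamples and keep features with non-zero
    coefficients in at least `keep_frac` of the bags."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)                 # bootstrap resample
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        counts += (coef != 0)
    return np.flatnonzero(counts / n_bags >= keep_frac)  # stable word features

# Toy usage: 5 informative features out of 30.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))
y = X[:, :5] @ np.ones(5) + 0.1 * rng.normal(size=200)
print(adabag_lasso_features(X, y))
```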