July 28, 2019

2934 words 14 mins read

Paper Group ANR 195

Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. Fuzzy Clustering Data Given on the Ordinal Scale Based on Membership and Likelihood Functions Sharing. LIDAR-based Driving Path Generation Using Fully Convolutional Neural Networks. Why You Should Charge Your Friends for Borrowing Your Stuff. Greedy Sampling …

Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks

Title Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks
Authors Peter L. Bartlett, Nick Harvey, Chris Liaw, Abbas Mehrabian
Abstract We prove new upper and lower bounds on the VC-dimension of deep neural networks with the ReLU activation function. These bounds are tight for almost the entire range of parameters. Letting $W$ be the number of weights and $L$ be the number of layers, we prove that the VC-dimension is $O(W L \log(W))$, and provide examples with VC-dimension $\Omega( W L \log(W/L) )$. This improves both the previously known upper bounds and lower bounds. In terms of the number $U$ of non-linear units, we prove a tight bound $\Theta(W U)$ on the VC-dimension. All of these bounds generalize to arbitrary piecewise linear activation functions, and also hold for the pseudodimensions of these function classes. Combined with previous results, this gives an intriguing range of dependencies of the VC-dimension on depth for networks with different non-linearities: there is no dependence for piecewise-constant, linear dependence for piecewise-linear, and no more than quadratic dependence for general piecewise-polynomial.
Tasks
Published 2017-03-08
URL http://arxiv.org/abs/1703.02930v3
PDF http://arxiv.org/pdf/1703.02930v3.pdf
PWC https://paperswithcode.com/paper/nearly-tight-vc-dimension-and-pseudodimension
Repo
Framework
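
For quick reference, the three bounds from the abstract can be collected in one display. Writing $\operatorname{VCdim}(W, L)$ for the largest VC-dimension attainable with $W$ weights and $L$ layers, $U$ for the number of non-linear units, and $c_1, c_2$ for unspecified absolute constants (notation introduced here, not taken from the paper):

```latex
% W = number of weights, L = number of layers, U = number of non-linear units.
\[
  c_1 \, W L \log(W/L) \;\le\; \operatorname{VCdim}(W, L) \;\le\; c_2 \, W L \log W,
  \qquad
  \operatorname{VCdim}(W, U) = \Theta(W U).
\]
```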

Fuzzy Clustering Data Given on the Ordinal Scale Based on Membership and Likelihood Functions Sharing

Title Fuzzy Clustering Data Given on the Ordinal Scale Based on Membership and Likelihood Functions Sharing
Authors Zhengbing Hu, Yevgeniy V. Bodyanskiy, Oleksii K. Tyshchenko, Viktoriia O. Samitova
Abstract The task of clustering data given on an ordinal scale under conditions of overlapping clusters is considered. We propose an approach based on sharing membership and likelihood functions. A series of experiments demonstrates the effectiveness of the proposed method, which is robust to outliers owing to the way values are ordered when constructing the membership functions.
Tasks
Published 2017-02-03
URL http://arxiv.org/abs/1702.01200v1
PDF http://arxiv.org/pdf/1702.01200v1.pdf
PWC https://paperswithcode.com/paper/fuzzy-clustering-data-given-on-the-ordinal
Repo
Framework
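
As a loose illustration of clustering ordinal data with fuzzy memberships, the sketch below maps ordinal categories to rank-based scores and then runs plain fuzzy c-means. This is a generic sketch under those assumptions, not the authors' membership/likelihood sharing procedure.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means (illustrative, not the paper's algorithm)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Ordinal ratings (e.g. 1..5) mapped to rank-based scores in (0, 1):
ratings = np.array([[1, 2], [2, 1], [5, 4], [4, 5], [3, 3]], dtype=float)
ranks = np.argsort(np.argsort(ratings, axis=0), axis=0)
scores = (ranks + 0.5) / ratings.shape[0]          # empirical-CDF-like transform
centers, U = fuzzy_c_means(scores, c=2)
print(np.round(U, 2))                              # soft cluster memberships
```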

LIDAR-based Driving Path Generation Using Fully Convolutional Neural Networks

Title LIDAR-based Driving Path Generation Using Fully Convolutional Neural Networks
Authors Luca Caltagirone, Mauro Bellone, Lennart Svensson, Mattias Wahde
Abstract In this work, a novel learning-based approach has been developed to generate driving paths by integrating LIDAR point clouds, GPS-IMU information, and Google driving directions. The system is based on a fully convolutional neural network that jointly learns to carry out perception and path generation from real-world driving sequences and that is trained using automatically generated training examples. Several combinations of input data were tested in order to assess the performance gain provided by specific information modalities. The fully convolutional neural network trained using all the available sensors together with driving directions achieved the best MaxF score of 88.13% when considering a region of interest of 60x60 meters. By considering a smaller region of interest, the agreement between predicted paths and ground-truth increased to 92.60%. The positive results obtained in this work indicate that the proposed system may help fill the gap between low-level scene parsing and behavior-reflex approaches by generating outputs that are close to vehicle control and at the same time human-interpretable.
Tasks Scene Parsing
Published 2017-03-27
URL http://arxiv.org/abs/1703.08987v2
PDF http://arxiv.org/pdf/1703.08987v2.pdf
PWC https://paperswithcode.com/paper/lidar-based-driving-path-generation-using
Repo
Framework
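
A toy sketch of the kind of fully convolutional setup described above: a multi-channel top-view grid goes in, a per-cell path-confidence map comes out. The channel layout (LIDAR statistics, rasterized GPS-IMU trajectory, driving-direction map) and all layer sizes are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PathFCN(nn.Module):
    """Toy encoder-decoder FCN mapping a top-view grid to path-confidence logits."""
    def __init__(self, in_channels=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # per-cell logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

grid = torch.randn(1, 6, 160, 160)        # e.g. a 60x60 m region rasterized on a grid
path_confidence = torch.sigmoid(PathFCN()(grid))
print(path_confidence.shape)              # torch.Size([1, 1, 160, 160])
```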

Why You Should Charge Your Friends for Borrowing Your Stuff

Title Why You Should Charge Your Friends for Borrowing Your Stuff
Authors Kijung Shin, Euiwoong Lee, Dhivya Eswaran, Ariel D. Procaccia
Abstract We consider goods that can be shared with k-hop neighbors (i.e., the set of nodes within k hops from an owner) on a social network. We examine incentives to buy such a good by devising game-theoretic models where each node decides whether to buy the good or free ride. First, we find that social inefficiency, specifically excessive purchase of the good, occurs in Nash equilibria. Second, the social inefficiency decreases as k increases and thus a good can be shared with more nodes. Third, and most importantly, the social inefficiency can also be significantly reduced by charging free riders an access cost and paying it to owners, leading to the conclusion that organizations and system designers should impose such a cost. These findings are supported by our theoretical analysis in terms of the price of anarchy and the price of stability; and by simulations based on synthetic and real social networks.
Tasks
Published 2017-05-20
URL http://arxiv.org/abs/1705.07343v1
PDF http://arxiv.org/pdf/1705.07343v1.pdf
PWC https://paperswithcode.com/paper/why-you-should-charge-your-friends-for
Repo
Framework
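
A toy best-response simulation of the buy-vs-free-ride decision on a random graph. The purchase price, access cost, and utility form are made-up parameters chosen only to illustrate the mechanics, not the paper's model or its price-of-anarchy analysis.

```python
import networkx as nx

# Assumed costs: buying the good costs PRICE; accessing a good owned within K hops
# costs ACCESS (paid to the owner); having no access at all costs NEED.
PRICE, ACCESS, NEED, K = 10.0, 2.0, 100.0, 2

G = nx.erdos_renyi_graph(n=50, p=0.08, seed=1)
owners = set(G.nodes)                     # start from "everyone buys" and let nodes deviate

def cost(node, owners):
    if node in owners:
        return PRICE
    near = nx.single_source_shortest_path_length(G, node, cutoff=K)
    return ACCESS if any(v in owners for v in near if v != node) else NEED

for _ in range(100):                      # best-response sweeps (capped to guarantee termination)
    changed = False
    for v in G.nodes:
        buy_cost = cost(v, owners | {v})          # always PRICE
        free_cost = cost(v, owners - {v})         # ACCESS if an owner is within K hops, else NEED
        wants_to_buy = buy_cost <= free_cost
        if wants_to_buy != (v in owners):
            owners = owners | {v} if wants_to_buy else owners - {v}
            changed = True
    if not changed:
        break

print(f"owners at equilibrium: {len(owners)} of {G.number_of_nodes()} nodes")
```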

Greedy Sampling of Graph Signals

Title Greedy Sampling of Graph Signals
Authors Luiz F. O. Chamon, Alejandro Ribeiro
Abstract Sampling is a fundamental topic in graph signal processing, having found applications in estimation, clustering, and video compression. In contrast to traditional signal processing, the irregularity of the signal domain makes selecting a sampling set non-trivial and hard to analyze. Indeed, though conditions for graph signal interpolation from noiseless samples exist, they do not lead to a unique sampling set. The presence of noise makes choosing among these sampling sets a hard combinatorial problem. Although greedy sampling schemes are commonly used in practice, they have no performance guarantee. This work takes a twofold approach to address this issue. First, universal performance bounds are derived for the Bayesian estimation of graph signals from noisy samples. In contrast to currently available bounds, they are not restricted to specific sampling schemes and hold for any sampling sets. Second, this paper provides near-optimal guarantees for greedy sampling by introducing the concept of approximate submodularity and updating the classical greedy bound. It then provides explicit bounds on the approximate supermodularity of the interpolation mean-square error showing that it can be optimized with worst-case guarantees using greedy search even though it is not supermodular. Simulations illustrate the derived bound for different graph models and show an application of graph signal sampling to reduce the complexity of kernel principal component analysis.
Tasks Video Compression
Published 2017-04-05
URL http://arxiv.org/abs/1704.01223v2
PDF http://arxiv.org/pdf/1704.01223v2.pdf
PWC https://paperswithcode.com/paper/greedy-sampling-of-graph-signals
Repo
Framework
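
A minimal numpy sketch of greedy sampling-set selection for Bayesian interpolation of a bandlimited graph signal. The MSE expression below is the standard Bayesian linear-model formula; it is used here only to show the kind of set function the greedy search optimizes, not to reproduce the paper's bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 30, 5, 8                        # nodes, bandwidth, sample budget
sigma_w, sigma_n = 1.0, 0.1

# Random symmetric operator; its eigenvectors play the role of the graph Fourier basis.
A = rng.standard_normal((N, N)); A = (A + A.T) / 2
_, V = np.linalg.eigh(A)
Vk = V[:, :K]                             # bandlimited signals: x = Vk @ w, w ~ N(0, sigma_w^2 I)

def interp_mse(S):
    """Bayesian MSE of estimating w from noisy samples y_S = x_S + noise."""
    Vs = Vk[list(S), :]
    P = np.eye(K) / sigma_w**2 + Vs.T @ Vs / sigma_n**2   # posterior precision
    return np.trace(np.linalg.inv(P))

S = []
for _ in range(M):                        # greedy: add the node that most reduces the MSE
    best = min((v for v in range(N) if v not in S), key=lambda v: interp_mse(S + [v]))
    S.append(best)
print("greedy sampling set:", sorted(S), "MSE:", round(interp_mse(S), 4))
```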

Named Entity Disambiguation for Noisy Text

Title Named Entity Disambiguation for Noisy Text
Authors Yotam Eshel, Noam Cohen, Kira Radinsky, Shaul Markovitch, Ikuya Yamada, Omer Levy
Abstract We address the task of Named Entity Disambiguation (NED) for noisy text. We present WikilinksNED, a large-scale NED dataset of text fragments from the web, which is significantly noisier and more challenging than existing news-based datasets. To capture the limited and noisy local context surrounding each mention, we design a neural model and train it with a novel method for sampling informative negative examples. We also describe a new way of initializing word and entity embeddings that significantly improves performance. Our model significantly outperforms existing state-of-the-art methods on WikilinksNED while achieving comparable performance on a smaller newswire dataset.
Tasks Entity Disambiguation, Entity Embeddings
Published 2017-06-28
URL http://arxiv.org/abs/1706.09147v2
PDF http://arxiv.org/pdf/1706.09147v2.pdf
PWC https://paperswithcode.com/paper/named-entity-disambiguation-for-noisy-text
Repo
Framework
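
One plausible reading of "informative negative examples" is to draw negatives from the candidate entities that share the mention's surface form, rather than uniformly from all entities. The toy candidate dictionary below is fabricated for illustration; the paper does not necessarily use this exact scheme.

```python
import random

# Toy candidate dictionary: mention surface form -> plausible entities (fabricated data).
candidates = {
    "jaguar": ["Jaguar_Cars", "Jaguar_(animal)", "Jacksonville_Jaguars"],
    "apple":  ["Apple_Inc.", "Apple_(fruit)", "Apple_Records"],
}

def training_pairs(mention, gold, n_neg=2, seed=0):
    """Pair the gold entity with 'hard' negatives drawn from the same candidate set."""
    rng = random.Random(seed)
    negs = [e for e in candidates[mention] if e != gold]
    negs = rng.sample(negs, min(n_neg, len(negs)))
    return [(mention, gold, 1)] + [(mention, e, 0) for e in negs]

print(training_pairs("jaguar", "Jaguar_Cars"))
```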

Fine Grained Knowledge Transfer for Personalized Task-oriented Dialogue Systems

Title Fine Grained Knowledge Transfer for Personalized Task-oriented Dialogue Systems
Authors Kaixiang Mo, Yu Zhang, Qiang Yang, Pascale Fung
Abstract Training a personalized dialogue system requires a lot of data, and the data collected for a single user is usually insufficient. One common practice for this problem is to share training dialogues between different users and train multiple sequence-to-sequence dialogue models together with transfer learning. However, current sequence-to-sequence transfer learning models operate on the entire sentence, which might cause negative transfer if different personal information from different users is mixed up. We propose a personalized decoder model to transfer finer granularity phrase-level knowledge between different users while keeping personal preferences of each user intact. A novel personal control gate is introduced, enabling the personalized decoder to switch between generating personalized phrases and shared phrases. The proposed personalized decoder model can be easily combined with various deep models and can be trained with reinforcement learning. Real-world experimental results demonstrate that the phrase-level personalized decoder improves the BLEU over multiple sentence-level transfer baseline models by as much as 7.5%.
Tasks Task-Oriented Dialogue Systems, Transfer Learning
Published 2017-11-11
URL http://arxiv.org/abs/1711.04079v1
PDF http://arxiv.org/pdf/1711.04079v1.pdf
PWC https://paperswithcode.com/paper/fine-grained-knowledge-transfer-for
Repo
Framework
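
A sketch of one plausible form of the personal control gate at a single decoding step: a sigmoid gate mixes a shared output distribution with a user-specific one. Layer sizes and the exact mixing form are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class GatedPersonalDecoderStep(nn.Module):
    """One decoding step mixing shared and personalized vocabulary distributions."""
    def __init__(self, hidden=128, vocab=5000):
        super().__init__()
        self.shared_out = nn.Linear(hidden, vocab)     # shared across users
        self.personal_out = nn.Linear(hidden, vocab)   # one per user in practice
        self.gate = nn.Linear(hidden, 1)               # "personal control gate"

    def forward(self, h):
        g = torch.sigmoid(self.gate(h))                        # in (0, 1)
        p_shared = torch.softmax(self.shared_out(h), dim=-1)
        p_personal = torch.softmax(self.personal_out(h), dim=-1)
        return g * p_personal + (1 - g) * p_shared             # mixture over the vocabulary

h = torch.randn(2, 128)                   # decoder hidden states for a batch of two
probs = GatedPersonalDecoderStep()(h)
print(probs.shape, probs.sum(dim=-1))     # (2, 5000), rows sum to 1
```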

Optimal transport maps for distribution preserving operations on latent spaces of Generative Models

Title Optimal transport maps for distribution preserving operations on latent spaces of Generative Models
Authors Eirikur Agustsson, Alexander Sage, Radu Timofte, Luc Van Gool
Abstract Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian. After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample. In this paper, we show that the latent space operations used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on. To address this, we propose to use distribution matching transport maps to ensure that such latent space operations preserve the prior distribution, while minimally modifying the original operation. Our experimental results validate that the proposed operations give higher quality samples compared to the original operations.
Tasks
Published 2017-11-06
URL http://arxiv.org/abs/1711.01970v2
PDF http://arxiv.org/pdf/1711.01970v2.pdf
PWC https://paperswithcode.com/paper/optimal-transport-maps-for-distribution
Repo
Framework
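
A numpy sketch for the special case of an isotropic Gaussian prior: linear interpolation shrinks the latent norm (the distribution mismatch the paper points out), and rescaling by the interpolation factor is the monotone transport map back to the prior. This illustrates the idea for one simple case, not the paper's general construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t = 512, 0.5
z1, z2 = rng.standard_normal(d), rng.standard_normal(d)   # samples from the N(0, I) prior

z_lin = (1 - t) * z1 + t * z2      # linear interpolation: distributed N(0, ((1-t)^2 + t^2) I)
scale = np.sqrt((1 - t) ** 2 + t ** 2)
z_ot = z_lin / scale               # rescaling transports back to N(0, I); rescaling is the
                                   # optimal map between centered isotropic Gaussians

print(f"|z1| ~ {np.linalg.norm(z1):.1f}, |z_lin| ~ {np.linalg.norm(z_lin):.1f}, "
      f"|z_ot| ~ {np.linalg.norm(z_ot):.1f}")   # midpoint norm shrinks by ~sqrt(2), then is restored
```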

Coupled Compound Poisson Factorization

Title Coupled Compound Poisson Factorization
Authors Mehmet E. Basbug, Barbara E. Engelhardt
Abstract We present a general framework, the coupled compound Poisson factorization (CCPF), to capture the missing-data mechanism in extremely sparse data sets by coupling a hierarchical Poisson factorization with an arbitrary data-generating model. We derive a stochastic variational inference algorithm for the resulting model and, as examples of our framework, implement three different data-generating models—a mixture model, linear regression, and factor analysis—to robustly model non-random missing data in the context of clustering, prediction, and matrix factorization. In all three cases, we test our framework against models that ignore the missing-data mechanism on large scale studies with non-random missing data, and we show that explicitly modeling the missing-data mechanism substantially improves the quality of the results, as measured using data log likelihood on a held-out test set.
Tasks
Published 2017-01-09
URL http://arxiv.org/abs/1701.02058v1
PDF http://arxiv.org/pdf/1701.02058v1.pdf
PWC https://paperswithcode.com/paper/coupled-compound-poisson-factorization
Repo
Framework
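
A toy numpy sketch of the compound Poisson construction that couples missingness with the data model: a Poisson count drawn from the factorization decides whether an entry is observed (count zero means missing), and an observed value is the sum of that many draws from an element distribution. The Gamma priors and the Gaussian element model below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K = 20, 15, 3
theta = rng.gamma(0.3, 1.0, size=(I, K))    # row (e.g. user) factors
beta = rng.gamma(0.3, 1.0, size=(J, K))     # column (e.g. item) factors

rate = theta @ beta.T                        # Poisson rates lambda_ij from the factorization
n = rng.poisson(rate)                        # latent counts; n_ij = 0  <=>  entry is missing
obs = n > 0

# Compound step: an observed value is the sum of n_ij i.i.d. N(mu, s^2) draws
# (standing in for the mixture / regression / factor-analysis data models);
# the sum of n such draws is N(n * mu, n * s^2), so it can be sampled directly.
mu, s = 1.0, 0.5
y = np.where(obs, rng.normal(n * mu, np.sqrt(n) * s), np.nan)
print(f"observed fraction: {obs.mean():.2f}")
```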

Sparse Coding by Spiking Neural Networks: Convergence Theory and Computational Results

Title Sparse Coding by Spiking Neural Networks: Convergence Theory and Computational Results
Authors Ping Tak Peter Tang, Tsung-Han Lin, Mike Davies
Abstract In a spiking neural network (SNN), individual neurons operate autonomously and only communicate with other neurons sparingly and asynchronously via spike signals. These characteristics render a massively parallel hardware implementation of SNN a potentially powerful computer, albeit a non von Neumann one. But can one guarantee that a SNN computer solves some important problems reliably? In this paper, we formulate a mathematical model of one SNN that can be configured for a sparse coding problem for feature extraction. With a moderate but well-defined assumption, we prove that the SNN indeed solves sparse coding. To the best of our knowledge, this is the first rigorous result of this kind.
Tasks
Published 2017-05-15
URL http://arxiv.org/abs/1705.05475v1
PDF http://arxiv.org/pdf/1705.05475v1.pdf
PWC https://paperswithcode.com/paper/sparse-coding-by-spiking-neural-networks
Repo
Framework
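
For reference, the sparse coding (LASSO) problem such a network can be configured for, solved here with plain ISTA in numpy. The paper's contribution is proving that the spiking dynamics reach this kind of solution; the sketch below does not simulate the SNN itself.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 40, 100, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)                 # dictionary
b = A @ (rng.standard_normal(n) * (rng.random(n) < 0.05))    # signal generated from a sparse code

# ISTA for  min_x  0.5 * ||A x - b||^2 + lam * ||x||_1
step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the smooth part's gradient
x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - b))                       # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0) # soft-thresholding step

print("nonzeros in recovered code:", int((np.abs(x) > 1e-6).sum()))
```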

Bandwidth limited object recognition in high resolution imagery

Title Bandwidth limited object recognition in high resolution imagery
Authors Laura Lopez-Fuentes, Andrew D. Bagdanov, Joost van de Weijer, Harald Skinnemoen
Abstract This paper proposes a novel method to optimize bandwidth usage for object detection in critical communication scenarios. We develop two operating models of active information seeking. The first model identifies promising regions in low resolution imagery and progressively requests higher resolution regions on which to perform recognition of higher semantic quality. The second model identifies promising regions in low resolution imagery while simultaneously predicting the approximate location of the object of higher semantic quality. From this general framework, we develop a car recognition system via identification of its license plate and evaluate the performance of both models on a car dataset that we introduce. Results are compared with traditional JPEG compression and demonstrate that our system saves up to one order of magnitude of bandwidth while sacrificing little in terms of recognition performance.
Tasks Object Detection, Object Recognition
Published 2017-01-16
URL http://arxiv.org/abs/1701.04210v1
PDF http://arxiv.org/pdf/1701.04210v1.pdf
PWC https://paperswithcode.com/paper/bandwidth-limited-object-recognition-in-high
Repo
Framework
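
A toy simulation of the first operating model: score tiles of a cheap low-resolution preview, then "request" only the most promising tiles at full resolution and compare the transmitted pixel count against sending the whole frame. The variance-based score, tile size, and budget are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 1024
frame = rng.random((H, W))                 # stand-in for a high-resolution frame
TILE, TOP_K = 128, 5

low = frame[::8, ::8]                      # cheap low-resolution preview (sent first)
scores = {}
for i in range(0, H, TILE):
    for j in range(0, W, TILE):
        patch = low[i // 8:(i + TILE) // 8, j // 8:(j + TILE) // 8]
        scores[(i, j)] = patch.var()       # arbitrary "promisingness" score

requested = sorted(scores, key=scores.get, reverse=True)[:TOP_K]
sent = low.size + TOP_K * TILE * TILE      # preview + requested full-resolution tiles
print(f"transmitted pixels: {sent} vs full frame: {frame.size} ({sent / frame.size:.1%})")
```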

Straggler Mitigation in Distributed Optimization Through Data Encoding

Title Straggler Mitigation in Distributed Optimization Through Data Encoding
Authors Can Karakus, Yifan Sun, Suhas Diggavi, Wotao Yin
Abstract Slow running or straggler tasks can significantly reduce computation speed in distributed computation. Recently, coding-theory-inspired approaches have been applied to mitigate the effect of straggling, through embedding redundancy in certain linear computational steps of the optimization algorithm, thus completing the computation without waiting for the stragglers. In this paper, we propose an alternate approach where we embed the redundancy directly in the data itself, and allow the computation to proceed completely oblivious to encoding. We propose several encoding schemes, and demonstrate that popular batch algorithms, such as gradient descent and L-BFGS, applied in a coding-oblivious manner, deterministically achieve sample path linear convergence to an approximate solution of the original problem, using an arbitrarily varying subset of the nodes at each iteration. Moreover, this approximation can be controlled by the amount of redundancy and the number of nodes used in each iteration. We provide experimental results demonstrating the advantage of the approach over uncoded and data replication strategies.
Tasks Distributed Optimization
Published 2017-11-14
URL http://arxiv.org/abs/1711.04969v2
PDF http://arxiv.org/pdf/1711.04969v2.pdf
PWC https://paperswithcode.com/paper/straggler-mitigation-in-distributed
Repo
Framework
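
A numpy sketch of embedding redundancy directly in the data: least squares is run on encoded data (SA, Sb) for a random wide encoding matrix S, and each gradient step uses only the rows held by the nodes that responded. The Gaussian encoding matrix and the straggler model are illustrative, not the paper's specific schemes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 10, 400                         # samples, features, encoded rows (2x redundancy)
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

S = rng.standard_normal((m, n)) / np.sqrt(m)   # random encoding matrix with E[S^T S] = I
Ae, be = S @ A, S @ b                          # nodes store rows of the *encoded* data

x = np.zeros(d)
step = 0.5 / np.linalg.norm(Ae, 2) ** 2
for _ in range(300):
    alive = rng.random(m) > 0.3                # ~30% of nodes straggle at each iteration
    Ai, bi = Ae[alive], be[alive]
    x = x - step * Ai.T @ (Ai @ x - bi)        # gradient step, oblivious to the encoding

print("error vs true solution:", round(np.linalg.norm(x - x_true), 3))
```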

Interpretable Convolutional Neural Networks for Effective Translation Initiation Site Prediction

Title Interpretable Convolutional Neural Networks for Effective Translation Initiation Site Prediction
Authors Jasper Zuallaert, Mijung Kim, Yvan Saeys, Wesley De Neve
Abstract Thanks to rapidly evolving sequencing techniques, the amount of genomic data at our disposal is growing increasingly large. Determining the gene structure is a fundamental requirement to effectively interpret gene function and regulation. An important part in that determination process is the identification of translation initiation sites. In this paper, we propose a novel approach for automatic prediction of translation initiation sites, leveraging convolutional neural networks that allow for automatic feature extraction. Our experimental results demonstrate that we are able to improve the state-of-the-art approaches with a decrease of 75.2% in false positive rate and with a decrease of 24.5% in error rate on chosen datasets. Furthermore, an in-depth analysis of the decision-making process used by our predictive model shows that our neural network implicitly learns biologically relevant features from scratch, without any prior knowledge about the problem at hand, such as the Kozak consensus sequence, the influence of stop and start codons in the sequence and the presence of donor splice site patterns. In summary, our findings yield a better understanding of the internal reasoning of a convolutional neural network when applying such a neural network to genomic data.
Tasks Decision Making
Published 2017-11-27
URL http://arxiv.org/abs/1711.09558v1
PDF http://arxiv.org/pdf/1711.09558v1.pdf
PWC https://paperswithcode.com/paper/interpretable-convolutional-neural-networks
Repo
Framework
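
A minimal PyTorch sketch of the ingredients: one-hot encoding of a nucleotide window around a candidate start codon and a small 1-D CNN producing a binary TIS prediction. The window length, filter sizes, and depth are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

def one_hot(seq):
    """One-hot encode a DNA string as a (4, len) tensor with A, C, G, T channels."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = torch.zeros(4, len(seq))
    for i, base in enumerate(seq):
        x[idx[base], i] = 1.0
    return x

class TISNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.classifier = nn.Linear(64, 1)     # logit: translation initiation site or not

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

window = "ACGT" * 15                           # placeholder 60-nt window around a candidate start codon
logit = TISNet()(one_hot(window).unsqueeze(0))
print(torch.sigmoid(logit))
```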

Unsupervised Feature Learning for Writer Identification and Writer Retrieval

Title Unsupervised Feature Learning for Writer Identification and Writer Retrieval
Authors Vincent Christlein, Martin Gropp, Stefan Fiel, Andreas Maier
Abstract Deep Convolutional Neural Networks (CNN) have shown great success in supervised classification tasks such as character classification or dating. Deep learning methods typically need a lot of annotated training data, which is not available in many scenarios. In these cases, traditional methods are often better than or equivalent to deep learning methods. In this paper, we propose a simple, yet effective, way to learn CNN activation features in an unsupervised manner. To this end, we train a deep residual network using surrogate classes. The surrogate classes are created by clustering the training dataset, where each cluster index represents one surrogate class. The activations from the penultimate CNN layer serve as features for subsequent classification tasks. We evaluate the feature representations on two publicly available datasets. The focus lies on the ICDAR17 competition dataset on historical document writer identification (Historical-WI). We show that the activation features trained without supervision are superior to descriptors of state-of-the-art writer identification methods. Additionally, we achieve comparable results in the case of handwriting classification using the ICFHR16 competition dataset on historical Latin script types (CLaMM16).
Tasks
Published 2017-05-25
URL http://arxiv.org/abs/1705.09369v3
PDF http://arxiv.org/pdf/1705.09369v3.pdf
PWC https://paperswithcode.com/paper/unsupervised-feature-learning-for-writer
Repo
Framework
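
A compressed sketch of the pipeline: cluster descriptors of training patches to obtain surrogate class labels, train a CNN on those pseudo-labels, and use penultimate-layer activations as the writer descriptor. The random descriptors and the cluster count below are stand-ins for real data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.standard_normal((5000, 64))   # stand-in for local descriptors of script patches

# 1) Surrogate classes: each cluster index becomes a pseudo-label.
surrogate_labels = KMeans(n_clusters=100, n_init=10, random_state=0).fit_predict(descriptors)

# 2) Train a CNN (e.g. a residual network) on (patch, surrogate_label) pairs with a
#    standard cross-entropy loss -- omitted here; any image classifier would do.
# 3) At test time, drop the classification layer and use penultimate-layer activations
#    of each patch, aggregated per document, as the writer/script descriptor.
print("smallest / largest surrogate class:",
      np.bincount(surrogate_labels).min(), "/", np.bincount(surrogate_labels).max())
```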

Operation Frames and Clubs in Kidney Exchange

Title Operation Frames and Clubs in Kidney Exchange
Authors Gabriele Farina, John P. Dickerson, Tuomas Sandholm
Abstract A kidney exchange is a centrally-administered barter market where patients swap their willing yet incompatible donors. Modern kidney exchanges use 2-cycles, 3-cycles, and chains initiated by non-directed donors (altruists who are willing to give a kidney to anyone) as the means for swapping. We propose significant generalizations to kidney exchange. We allow more than one donor to donate in exchange for their desired patient receiving a kidney. We also allow for the possibility of a donor willing to donate if any of a number of patients receive kidneys. Furthermore, we combine these notions and generalize them. The generalization is to exchange among organ clubs, where a club is willing to donate organs outside the club if and only if the club receives organs from outside the club according to given specifications. We prove that unlike in the standard model, the uncapped clearing problem is NP-complete. We also present the notion of operation frames that can be used to sequence the operations across batches, and present integer programming formulations for the market clearing problems for these new types of organ exchanges. Experiments show that in the single-donation setting, operation frames improve planning by 34%–51%. Allowing up to two donors to donate in exchange for one kidney donated to their designated patient yields a further increase in social welfare.
Tasks
Published 2017-05-25
URL http://arxiv.org/abs/1705.09328v1
PDF http://arxiv.org/pdf/1705.09328v1.pdf
PWC https://paperswithcode.com/paper/operation-frames-and-clubs-in-kidney-exchange
Repo
Framework
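
For context, a sketch of the standard cycle-formulation ILP for a basic kidney exchange (2- and 3-cycles only), which the paper generalizes with multi-donor exchanges, clubs, and operation frames. The toy compatibility graph and the PuLP model are illustrative only.

```python
import networkx as nx
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

# Toy compatibility digraph: edge (u, v) means pair u's donor can give to patient v.
G = nx.DiGraph([(0, 1), (1, 0), (1, 2), (2, 3), (3, 1), (3, 4), (4, 3)])

# Enumerate cycles of length <= 3 (the usual clearing formulation).
cycles = [c for c in nx.simple_cycles(G) if len(c) <= 3]

prob = LpProblem("kidney_exchange", LpMaximize)
use = {i: LpVariable(f"cycle_{i}", cat=LpBinary) for i in range(len(cycles))}
prob += lpSum(len(c) * use[i] for i, c in enumerate(cycles))        # maximize transplants
for v in G.nodes:                                                   # each pair in at most one cycle
    prob += lpSum(use[i] for i, c in enumerate(cycles) if v in c) <= 1
prob.solve()

chosen = [cycles[i] for i in use if use[i].value() == 1]
print("selected cycles:", chosen)
```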