May 7, 2019

3443 words 17 mins read

Paper Group ANR 71

A series of maximum entropy upper bounds of the differential entropy

Title A series of maximum entropy upper bounds of the differential entropy
Authors Frank Nielsen, Richard Nock
Abstract We present a series of closed-form maximum entropy upper bounds for the differential entropy of a continuous univariate random variable and study the properties of that series. We then show how to use those generic bounds for upper bounding the differential entropy of Gaussian mixture models. This requires calculating the raw moments and raw absolute moments of Gaussian mixtures in closed form, which may also be handy in statistical machine learning and information theory. We report on our experiments and discuss the tightness of those bounds.
Tasks
Published 2016-12-09
URL http://arxiv.org/abs/1612.02954v1
PDF http://arxiv.org/pdf/1612.02954v1.pdf
PWC https://paperswithcode.com/paper/a-series-of-maximum-entropy-upper-bounds-of
Repo
Framework
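
As an illustration of the simplest member of such a series, here is a minimal sketch (my own, not the authors' code) applying the Gaussian maximum-entropy bound h(X) <= 0.5*log(2*pi*e*Var[X]) to a univariate Gaussian mixture, with the mixture variance obtained in closed form from the component moments. Function and variable names are assumptions.

```python
import numpy as np

def gmm_entropy_upper_bound(weights, means, sigmas):
    """Gaussian max-ent bound h(X) <= 0.5*log(2*pi*e*Var[X]) for a
    univariate Gaussian mixture, using its closed-form variance."""
    w = np.asarray(weights, dtype=float)
    mu = np.asarray(means, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    m1 = np.sum(w * mu)                 # first raw moment E[X]
    m2 = np.sum(w * (s**2 + mu**2))     # second raw moment E[X^2]
    var = m2 - m1**2
    return 0.5 * np.log(2.0 * np.pi * np.e * var)   # entropy in nats

# Example: two-component mixture.
print(gmm_entropy_upper_bound([0.3, 0.7], [-2.0, 1.0], [0.5, 1.5]))
```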

Bi-Text Alignment of Movie Subtitles for Spoken English-Arabic Statistical Machine Translation

Title Bi-Text Alignment of Movie Subtitles for Spoken English-Arabic Statistical Machine Translation
Authors Fahad Al-Obaidli, Stephen Cox, Preslav Nakov
Abstract We describe efforts towards getting better resources for English-Arabic machine translation of spoken text. In particular, we look at movie subtitles as a unique, rich resource, as subtitles in one language often get translated into other languages. Movie subtitles are not new as a resource and have been explored in previous research; however, here we create a much larger bi-text (the biggest to date), and we further generate better quality alignment for it. Given the subtitles for the same movie in different languages, a key problem is how to align them at the fragment level. Typically, this is done using length-based alignment, but for movie subtitles, there is also time information. Here we exploit this information to develop an original algorithm that outperforms the current best subtitle alignment tool, subalign. The evaluation results show that adding our bi-text to the IWSLT training bi-text yields an improvement of over two BLEU points absolute.
Tasks Machine Translation
Published 2016-09-05
URL http://arxiv.org/abs/1609.01188v1
PDF http://arxiv.org/pdf/1609.01188v1.pdf
PWC https://paperswithcode.com/paper/bi-text-alignment-of-movie-subtitles-for
Repo
Framework
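
The key idea in the abstract above is to exploit subtitle time stamps rather than fragment length alone. The following is a hypothetical sketch of a time-overlap pairing heuristic, not the authors' algorithm or the subalign tool; the cue format and overlap threshold are assumptions of my own.

```python
def overlap(a, b):
    """Temporal overlap in seconds between two cues (start, end, text)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def align_by_time(src_cues, tgt_cues, min_overlap=0.5):
    """Pair each source cue with the target cue it overlaps most in time.

    src_cues / tgt_cues: lists of (start_sec, end_sec, text).
    Returns a list of (src_text, tgt_text) fragment pairs.
    """
    pairs = []
    for s in src_cues:
        best = max(tgt_cues, key=lambda t: overlap(s, t), default=None)
        if best is not None and overlap(s, best) >= min_overlap:
            pairs.append((s[2], best[2]))
    return pairs

en = [(1.0, 3.5, "Where are you going?"), (4.0, 6.0, "Home.")]
ar = [(1.1, 3.6, "إلى أين أنت ذاهب؟"), (4.1, 6.2, "إلى المنزل.")]
print(align_by_time(en, ar))
```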

ModelHub: Towards Unified Data and Lifecycle Management for Deep Learning

Title ModelHub: Towards Unified Data and Lifecycle Management for Deep Learning
Authors Hui Miao, Ang Li, Larry S. Davis, Amol Deshpande
Abstract Deep learning has improved state-of-the-art results in many important fields, and has been the subject of much research in recent years, leading to the development of several systems for facilitating deep learning. Current systems, however, mainly focus on the model building and training phases, while the issues of data management, model sharing, and lifecycle management are largely ignored. The deep learning modeling lifecycle generates a rich set of data artifacts, such as learned parameters and training logs, and comprises several frequently conducted tasks, e.g., understanding model behavior and trying out new models. Dealing with such artifacts and tasks is cumbersome and largely left to the users. This paper describes our vision and implementation of a data and lifecycle management system for deep learning. First, we generalize model exploration and model enumeration queries from tasks commonly conducted by deep learning modelers, and propose a high-level domain-specific language (DSL), inspired by SQL, to raise the abstraction level and accelerate the modeling process. Second, to manage the data artifacts, especially the large number of checkpointed float parameters, we design a novel model versioning system (dlv) and a read-optimized parameter archival storage system (PAS) that minimizes storage footprint and accelerates query workloads without losing accuracy. PAS archives versioned models using deltas in a multi-resolution fashion by separately storing the less significant bits, and features a novel progressive query (inference) evaluation algorithm. Third, we show that archiving versioned models using deltas poses a new dataset versioning problem, and we develop efficient algorithms for solving it. We conduct extensive experiments over several real datasets from the computer vision domain to show the efficiency of the proposed techniques.
Tasks
Published 2016-11-18
URL http://arxiv.org/abs/1611.06224v1
PDF http://arxiv.org/pdf/1611.06224v1.pdf
PWC https://paperswithcode.com/paper/modelhub-towards-unified-data-and-lifecycle
Repo
Framework
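
A sketch, under assumptions of my own, of the bit-splitting idea PAS is described as using: store a delta of float32 parameters as separate high and low 16-bit halves, so the high halves alone support approximate (progressive) reconstruction while both halves together restore the delta exactly. This is illustrative, not the ModelHub implementation.

```python
import numpy as np

def split_delta(base, new):
    """Split the float32 delta into high/low 16-bit halves (illustrative only)."""
    delta = (new - base).astype(np.float32)
    bits = delta.view(np.uint32)
    hi = (bits >> 16).astype(np.uint16)      # sign, exponent, top mantissa bits
    lo = (bits & 0xFFFF).astype(np.uint16)   # less significant mantissa bits
    return hi, lo

def restore(base, hi, lo=None):
    """Progressive reconstruction: the high halves alone give an approximation."""
    bits = hi.astype(np.uint32) << 16
    if lo is not None:
        bits |= lo.astype(np.uint32)
    return base + bits.view(np.float32)

base = np.random.randn(5).astype(np.float32)
new = base + np.float32(0.01) * np.random.randn(5).astype(np.float32)
hi, lo = split_delta(base, new)
print(np.abs(restore(base, hi) - new).max())       # coarse, high halves only
print(np.abs(restore(base, hi, lo) - new).max())   # both halves: delta restored exactly
```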

Towards Reduced Reference Parametric Models for Estimating Audiovisual Quality in Multimedia Services

Title Towards Reduced Reference Parametric Models for Estimating Audiovisual Quality in Multimedia Services
Authors Edip Demirbilek, Jean-Charles Grégoire
Abstract We have developed reduced reference parametric models for estimating perceived quality in audiovisual multimedia services. We created 144 unique configurations for audiovisual content, covering various application and network parameters such as bitrates and distortions in terms of bandwidth, packet loss rate, and jitter. To generate the data needed for model training and validation, we asked 24 subjects, in a controlled environment, to rate the overall audiovisual quality on the absolute category rating (ACR) 5-level quality scale. We developed models using Random Forest and Neural Network based machine learning methods to estimate Mean Opinion Score (MOS) values, using information retrieved from the packet headers and side information provided as network parameters for model training. The Random Forest based models performed better in terms of Root Mean Square Error (RMSE) and Pearson correlation coefficient, and the side information proved very effective in developing the model. We found that, while performance might be improved by replacing the side information with more accurate bitstream-level measurements, the models already perform well in estimating perceived quality in audiovisual multimedia services.
Tasks
Published 2016-04-25
URL http://arxiv.org/abs/1604.07211v1
PDF http://arxiv.org/pdf/1604.07211v1.pdf
PWC https://paperswithcode.com/paper/towards-reduced-reference-parametric-models
Repo
Framework
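
A minimal sketch of the modeling setup described above, using synthetic data and invented feature names in place of the authors' 144-configuration dataset: a Random Forest regressor maps network-level parameters to MOS, evaluated with RMSE and Pearson correlation.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical parametric features: video bitrate (kbps), audio bitrate (kbps),
# packet loss rate (%), jitter (ms) -- side information from the network.
X = rng.uniform([200, 16, 0.0, 0.0], [4000, 128, 5.0, 50.0], size=(144, 4))
# Synthetic MOS on the 5-level ACR scale (stand-in for the subjective ratings).
mos = np.clip(1 + 4 * (X[:, 0] / 4000) * (1 - X[:, 2] / 5) + rng.normal(0, 0.2, 144), 1, 5)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:100], mos[:100])
pred = model.predict(X[100:])

rmse = mean_squared_error(mos[100:], pred) ** 0.5
r, _ = pearsonr(mos[100:], pred)
print(f"RMSE = {rmse:.3f}   Pearson r = {r:.3f}")
```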

Machine learning applied to single-shot x-ray diagnostics in an XFEL

Title Machine learning applied to single-shot x-ray diagnostics in an XFEL
Authors A. Sanchez-Gonzalez, P. Micaelli, C. Olivier, T. R. Barillot, M. Ilchen, A. A. Lutman, A. Marinelli, T. Maxwell, A. Achner, M. Agåker, N. Berrah, C. Bostedt, J. Buck, P. H. Bucksbaum, S. Carron Montero, B. Cooper, J. P. Cryan, M. Dong, R. Feifel, L. J. Frasinski, H. Fukuzawa, A. Galler, G. Hartmann, N. Hartmann, W. Helml, A. S. Johnson, A. Knie, A. O. Lindahl, J. Liu, K. Motomura, M. Mucke, C. O’Grady, J-E. Rubensson, E. R. Simpson, R. J. Squibb, C. Såthe, K. Ueda, M. Vacher, D. J. Walke, V. Zhaunerchyk, R. N. Coffee, J. P. Marangos
Abstract X-ray free-electron lasers (XFELs) are the only sources currently able to produce bright few-fs pulses with tunable photon energies from 100 eV to more than 10 keV. Due to the stochastic SASE operating principles and other technical issues, the output pulses are subject to large fluctuations, making it necessary to characterize the x-ray pulses on every shot for data sorting purposes. We present a technique that applies machine learning tools to predict x-ray pulse properties using simple electron beam and x-ray parameters as input. Using this technique at the Linac Coherent Light Source (LCLS), we report mean errors below 0.3 eV for the prediction of the photon energy at 530 eV and below 1.6 fs for the prediction of the delay between two x-ray pulses. We also demonstrate spectral shape prediction with a mean agreement of 97%. This approach could potentially be used at the next generation of high-repetition-rate XFELs to provide accurate knowledge of complex x-ray pulses at the full repetition rate.
Tasks
Published 2016-10-11
URL http://arxiv.org/abs/1610.03378v1
PDF http://arxiv.org/pdf/1610.03378v1.pdf
PWC https://paperswithcode.com/paper/machine-learning-applied-to-single-shot-x-ray
Repo
Framework
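
A toy sketch of the prediction task, with synthetic data and invented beam parameters standing in for the LCLS diagnostics: a simple regressor maps per-shot machine parameters to photon energy and reports the mean error in eV. This is not the authors' model.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
# Hypothetical per-shot machine parameters (electron energy, bunch charge,
# peak current, undulator setting), in standardized units.
beam = rng.normal(size=(5000, 4))
# Toy relationship: photon energy near 530 eV, driven mainly by electron energy.
photon_ev = 530.0 + 3.0 * beam[:, 0] + 0.5 * beam[:, 1] + rng.normal(0.0, 0.2, 5000)

model = Ridge(alpha=1.0).fit(beam[:4000], photon_ev[:4000])
pred = model.predict(beam[4000:])
print("mean absolute error (eV):", np.mean(np.abs(pred - photon_ev[4000:])))
```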

Hybrid Jacobian and Gauss-Seidel proximal block coordinate update methods for linearly constrained convex programming

Title Hybrid Jacobian and Gauss-Seidel proximal block coordinate update methods for linearly constrained convex programming
Authors Yangyang Xu
Abstract Recent years have witnessed the rapid development of block coordinate update (BCU) methods, which are particularly suitable for problems involving large-sized data and/or variables. In optimization, BCU first appeared as the coordinate descent method, which works well for smooth problems or those with separable nonsmooth terms and/or separable constraints. When nonseparable constraints exist, BCU can be applied under primal-dual settings. In the literature, it has been shown that for weakly convex problems with a nonseparable linear constraint, BCU with a fully Gauss-Seidel updating rule may fail to converge, while BCU with a fully Jacobian rule converges sublinearly. Empirically, however, the method with the Jacobian update is usually slower than that with the Gauss-Seidel rule. To retain the advantages of both, we propose a hybrid Jacobian and Gauss-Seidel BCU method for solving linearly constrained multi-block structured convex programming, where the objective may have a nonseparable quadratic term and separable nonsmooth terms. At each primal block variable update, the method approximates the augmented Lagrangian function at an affine combination of the previous two iterates, and the affine mixing matrix with the desired properties can be chosen by solving a semidefinite program. We show that the hybrid method enjoys the same theoretical convergence guarantee as Jacobian BCU. In addition, we numerically demonstrate that the method can perform as well as the Gauss-Seidel method and better than a recently proposed randomized primal-dual BCU method.
Tasks
Published 2016-08-13
URL http://arxiv.org/abs/1608.03928v2
PDF http://arxiv.org/pdf/1608.03928v2.pdf
PWC https://paperswithcode.com/paper/hybrid-jacobian-and-gauss-seidel-proximal
Repo
Framework
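
To make the Jacobian versus Gauss-Seidel distinction concrete, here is a toy sketch (my own, not the paper's hybrid method) of exact block updates on the augmented Lagrangian of a two-block, strongly convex problem: min 0.5*||x1||^2 + 0.5*||x2||^2 subject to A1 x1 + A2 x2 = b. The only difference between the two rules is whether the second block sees the freshly updated first block.

```python
import numpy as np

def block_step(A_i, other_resid, y, rho):
    """Exact minimizer of the augmented Lagrangian over one block
    (block objective 0.5*||x_i||^2), with the other block held fixed."""
    n = A_i.shape[1]
    H = np.eye(n) + rho * A_i.T @ A_i
    g = -A_i.T @ y - rho * A_i.T @ other_resid
    return np.linalg.solve(H, g)

def bcu(A1, A2, b, rho=1.0, iters=300, gauss_seidel=True):
    x1, x2 = np.zeros(A1.shape[1]), np.zeros(A2.shape[1])
    y = np.zeros(b.shape[0])
    for _ in range(iters):
        x1_new = block_step(A1, A2 @ x2 - b, y, rho)
        # Gauss-Seidel uses the freshly updated x1; Jacobian uses the old one.
        x1_used = x1_new if gauss_seidel else x1
        x2_new = block_step(A2, A1 @ x1_used - b, y, rho)
        x1, x2 = x1_new, x2_new
        y = y + rho * (A1 @ x1 + A2 @ x2 - b)   # dual (multiplier) update
    return x1, x2

rng = np.random.default_rng(0)
A1, A2, b = rng.normal(size=(6, 4)), rng.normal(size=(6, 4)), rng.normal(size=6)
for gs in (False, True):
    x1, x2 = bcu(A1, A2, b, gauss_seidel=gs)
    resid = np.linalg.norm(A1 @ x1 + A2 @ x2 - b)
    print("Gauss-Seidel" if gs else "Jacobian   ", "feasibility residual:", resid)
```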

Optimal spectral transportation with application to music transcription

Title Optimal spectral transportation with application to music transcription
Authors Rémi Flamary, Cédric Févotte, Nicolas Courty, Valentin Emiya
Abstract Many spectral unmixing methods rely on the non-negative decomposition of spectral data onto a dictionary of spectral templates. In particular, state-of-the-art music transcription systems decompose the spectrogram of the input signal onto a dictionary of representative note spectra. The typical measures of fit used to quantify the adequacy of the decomposition compare the data and template entries frequency-wise. As such, small displacements of energy from one frequency bin to another, as well as variations of timbre, can disproportionately harm the fit. We address these issues by means of optimal transportation and propose a new measure of fit that treats the frequency distributions of energy holistically as opposed to frequency-wise. Building on the harmonic nature of sound, the new measure is invariant to shifts of energy to harmonically related frequencies, as well as to small and local displacements of energy. Equipped with this new measure of fit, the dictionary of note templates can be considerably simplified to a set of Dirac vectors located at the target fundamental frequencies (musical pitch values). This in turn gives rise to a very fast and simple decomposition algorithm that achieves state-of-the-art performance on real musical data.
Tasks
Published 2016-09-30
URL http://arxiv.org/abs/1609.09799v2
PDF http://arxiv.org/pdf/1609.09799v2.pdf
PWC https://paperswithcode.com/paper/optimal-spectral-transportation-with
Repo
Framework
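
A sketch of the harmonic-aware transportation cost described above, under assumptions of my own: moving energy from a frequency bin to a harmonic of a template's fundamental is nearly free, while other displacements cost quadratically, and the template is a single Dirac at the fundamental. The POT library (ot.emd2) is assumed to be available; this is not the authors' implementation.

```python
import numpy as np
import ot   # POT: Python Optimal Transport (assumed available)

def harmonic_cost(freqs, f0, n_harmonics=8, eps=1e-3):
    """Cost of moving energy from each bin to the template at fundamental f0:
    nearly free at harmonics of f0, quadratic in the distance otherwise."""
    harmonics = f0 * np.arange(1, n_harmonics + 1)
    d = np.abs(freqs[:, None] - harmonics[None, :]).min(axis=1)
    return eps + d ** 2

freqs = np.linspace(0.0, 4000.0, 512)
# Toy observed spectrum: energy concentrated near the harmonics of 440 Hz.
spec = np.exp(-0.5 * ((freqs[:, None] - 440.0 * np.arange(1, 5)[None, :]) / 20.0) ** 2).sum(axis=1)
spec /= spec.sum()

for f0 in (392.0, 440.0, 494.0):
    cost = harmonic_cost(freqs, f0)[:, None]       # bins x 1 (single Dirac template)
    fit = ot.emd2(spec, np.array([1.0]), cost)     # total transport cost = measure of fit
    print(f"f0 = {f0:.0f} Hz   OT fit = {fit:.4f}")
```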

A Message Passing Algorithm for the Problem of Path Packing in Graphs

Title A Message Passing Algorithm for the Problem of Path Packing in Graphs
Authors Patrick Eschenfeldt, David Gamarnik
Abstract We consider the problem of packing node-disjoint directed paths in a directed graph. We consider a variant of this problem where each path starts within a fixed subset of root nodes, subject to a given bound on the length of paths. This problem is motivated by the so-called kidney exchange problem, but it has other potential applications and is interesting in its own right. We propose a new algorithm for this problem based on the message passing/belief propagation technique. A priori, this problem does not have an associated graphical model, so in order to apply a belief propagation algorithm we provide a novel representation of the problem as a graphical model. Standard belief propagation on this model has poor scaling behavior, so we provide an efficient implementation that significantly decreases the complexity. We provide numerical results comparing the performance of our algorithm on both artificially created graphs and real-world networks against several alternative algorithms, including algorithms based on integer programming (IP) techniques. These comparisons show that our algorithm scales better to large instances than IP-based algorithms and often finds better solutions than a simple algorithm that greedily selects the longest path from each root node. In some cases it also finds better solutions than the ones found by IP-based algorithms, even when the latter are allowed to run significantly longer than our algorithm.
Tasks
Published 2016-03-18
URL http://arxiv.org/abs/1603.06002v1
PDF http://arxiv.org/pdf/1603.06002v1.pdf
PWC https://paperswithcode.com/paper/a-message-passing-algorithm-for-the-problem
Repo
Framework
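
The abstract's simplest baseline, greedily selecting the longest path from each root node, is easy to sketch. The following hypothetical version (not the message-passing algorithm) packs node-disjoint paths under a length bound; the graph representation and names are my own.

```python
def longest_path_from(root, adj, used, max_len):
    """DFS for the longest simple path from `root` that avoids `used` nodes,
    subject to a bound on path length (number of nodes)."""
    best = [root]
    stack = [(root, [root])]
    while stack:
        node, path = stack.pop()
        if len(path) > len(best):
            best = path
        if len(path) == max_len:
            continue
        for nxt in adj.get(node, []):
            if nxt not in used and nxt not in path:
                stack.append((nxt, path + [nxt]))
    return best

def greedy_path_packing(roots, adj, max_len=4):
    """Node-disjoint packing: claim the longest remaining path per root."""
    used, packing = set(), []
    for r in roots:
        if r in used:
            continue
        path = longest_path_from(r, adj, used, max_len)
        packing.append(path)
        used.update(path)
    return packing

adj = {"r1": ["a", "b"], "a": ["c"], "c": ["d"], "r2": ["b"], "b": ["e"]}
print(greedy_path_packing(["r1", "r2"], adj))
```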

A very fast iterative algorithm for TV-regularized image reconstruction with applications to low-dose and few-view CT

Title A very fast iterative algorithm for TV-regularized image reconstruction with applications to low-dose and few-view CT
Authors Hiroyuki Kudo, Fukashi Yamazaki, Takuya Nemoto, Keita Takaki
Abstract This paper concerns iterative reconstruction for low-dose and few-view CT by minimizing a data-fidelity term regularized with the Total Variation (TV) penalty. We propose a very fast iterative algorithm to solve this problem. The algorithm derivation is outlined as follows. First, the original minimization problem is reformulated into a saddle point (primal-dual) problem by using Lagrangian duality, to which we apply first-order primal-dual iterative methods. Second, we precondition the iteration formula using the ramp filter of the Filtered Backprojection (FBP) reconstruction algorithm in such a way that the problem solution is not altered. The resulting algorithm resembles the structure of the so-called iterative FBP algorithm, and it converges to the exact minimizer of the cost function very quickly.
Tasks Image Reconstruction
Published 2016-09-20
URL http://arxiv.org/abs/1609.06041v1
PDF http://arxiv.org/pdf/1609.06041v1.pdf
PWC https://paperswithcode.com/paper/a-very-fast-iterative-algorithm-for-tv
Repo
Framework
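
A compact sketch of a first-order primal-dual (Chambolle-Pock style) iteration for a TV-regularized problem, with the CT system matrix replaced by the identity (i.e., TV denoising) and without the ramp-filter preconditioning that is the paper's contribution; step sizes and boundary handling are standard choices of my own.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient of a 2-D image (Neumann boundary)."""
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad above."""
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[0, :], dx[1:-1, :], dx[-1, :] = px[0, :], px[1:-1, :] - px[:-2, :], -px[-2, :]
    dy[:, 0], dy[:, 1:-1], dy[:, -1] = py[:, 0], py[:, 1:-1] - py[:, :-2], -py[:, -2]
    return dx + dy

def tv_denoise(f, lam=0.1, iters=200):
    """Primal-dual iteration for min_x 0.5*||x - f||^2 + lam*TV(x)."""
    tau = sigma = 1.0 / np.sqrt(8.0)            # tau*sigma*||grad||^2 <= 1
    x, x_bar = f.copy(), f.copy()
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(x_bar)
        px, py = px + sigma * gx, py + sigma * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)   # project onto ||p|| <= lam
        px, py = px / scale, py / scale
        x_old = x
        x = (x + tau * div(px, py) + tau * f) / (1.0 + tau)     # prox of the data term
        x_bar = 2 * x - x_old                                   # extrapolation step
    return x

noisy = np.clip(np.eye(64) + 0.3 * np.random.randn(64, 64), 0.0, 1.0)
print(tv_denoise(noisy).shape)
```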

Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model

Title Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model
Authors Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, Trevor Blackwell, Joshua Tobin, Pieter Abbeel, Wojciech Zaremba
Abstract Developing control policies in simulation is often more practical and safer than directly running experiments in the real world. This applies to policies obtained from planning and optimization, and even more so to policies obtained from reinforcement learning, which is often very data demanding. However, a policy that succeeds in simulation often doesn’t work when deployed on a real robot. Nevertheless, often the overall gist of what the policy does in simulation remains valid in the real world. In this paper we investigate such settings, where the sequence of states traversed in simulation remains reasonable for the real world, even if the details of the controls are not, as could be the case when the key differences lie in detailed friction, contact, mass and geometry properties. During execution, at each time step our approach computes what the simulation-based control policy would do, but then, rather than executing these controls on the real robot, our approach computes what the simulation expects the resulting next state(s) will be, and then relies on a learned deep inverse dynamics model to decide which real-world action is most suitable to achieve those next states. Deep models are only as good as their training data, and we also propose an approach for data collection to (incrementally) learn the deep inverse dynamics model. Our experiments show that our approach compares favorably with various baselines that have been developed for dealing with simulation-to-real-world model discrepancy, including output error control and Gaussian dynamics adaptation.
Tasks
Published 2016-10-11
URL http://arxiv.org/abs/1610.03518v1
PDF http://arxiv.org/pdf/1610.03518v1.pdf
PWC https://paperswithcode.com/paper/transfer-from-simulation-to-real-world
Repo
Framework
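
A schematic sketch of the execution loop described above, with every component (policy, simulator, inverse dynamics model, robot interface) left as a hypothetical placeholder; this only illustrates the data flow, not the authors' system.

```python
def transfer_control_loop(policy, simulator, inverse_dynamics, robot, horizon=100):
    """Sketch of sim-to-real transfer via a learned deep inverse dynamics model.

    policy(state)                    -> action the simulation-trained policy would take
    simulator.step(state, action)    -> next state the simulation expects
    inverse_dynamics(state, target)  -> real-world action expected to reach `target`
    robot.observe() / robot.step(a)  -> real-robot state interface
    All four callables are hypothetical placeholders.
    """
    state = robot.observe()
    for _ in range(horizon):
        sim_action = policy(state)                     # what the sim policy would do
        target = simulator.step(state, sim_action)     # state the simulation expects next
        real_action = inverse_dynamics(state, target)  # action to reach it on the robot
        state = robot.step(real_action)
    return state
```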

Most central or least central? How much modeling decisions influence a node’s centrality ranking in multiplex networks

Title Most central or least central? How much modeling decisions influence a node’s centrality ranking in multiplex networks
Authors Sude Tavassoli, Katharina Anna Zweig
Abstract To understand a node’s centrality in a multiplex network, its centrality values in all the layers of the network can be aggregated. This requires a normalization of the values to allow their meaningful comparison and aggregation over networks with different sizes and orders. The concrete choices of preprocessing steps such as normalization and aggregation are almost never discussed in network-analytic papers. In this paper, we show that even sticking to the simplest centrality index (the degree), different classic choices of normalization and aggregation strategies can turn a node from being among the most central to being among the least central. We present our results using an aggregation operator that scales between different classic aggregation strategies, based on three multiplex networks. We also introduce a new visualization and characterization of a node’s sensitivity to the choice of normalization and aggregation strategy in multiplex networks. The observed high sensitivity of single nodes to the specific choice of aggregation and normalization strategies is of great importance, especially for all kinds of intelligence-analytic software, as it calls the interpretation of the findings into question.
Tasks
Published 2016-06-17
URL http://arxiv.org/abs/1606.05468v1
PDF http://arxiv.org/pdf/1606.05468v1.pdf
PWC https://paperswithcode.com/paper/most-central-or-least-central-how-much
Repo
Framework
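
A small sketch, with classic strategies chosen by me, showing how the normalization (per-layer maximum versus n-1) and the aggregation (sum versus maximum over layers) of degree centrality can reorder nodes in a multiplex network.

```python
import numpy as np

def degrees(layers):
    """Degree of every node in every layer; layers are adjacency matrices."""
    return np.array([A.sum(axis=1) for A in layers], dtype=float)

def rank_nodes(layers, normalize="max", aggregate="sum"):
    deg = degrees(layers)
    n = deg.shape[1]
    for i in range(len(deg)):
        deg[i] = deg[i] / deg[i].max() if normalize == "max" else deg[i] / (n - 1)
    agg = deg.sum(axis=0) if aggregate == "sum" else deg.max(axis=0)
    return np.argsort(-agg)                    # node indices, most central first

rng = np.random.default_rng(0)
layers = [(rng.random((8, 8)) < p).astype(int) for p in (0.3, 0.8)]
layers = [np.triu(A, 1) + np.triu(A, 1).T for A in layers]    # undirected, no self-loops
print(rank_nodes(layers, normalize="max", aggregate="sum"))
print(rank_nodes(layers, normalize="n-1", aggregate="max"))
```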

A Robust UCB Scheme for Active Learning in Regression from Strategic Crowds

Title A Robust UCB Scheme for Active Learning in Regression from Strategic Crowds
Authors Divya Padmanabhan, Satyanath Bhat, Dinesh Garg, Shirish Shevade, Y. Narahari
Abstract We study the problem of training an accurate linear regression model by procuring labels from multiple noisy crowd annotators, under a budget constraint. We propose a Bayesian model for linear regression in crowdsourcing and use variational inference for parameter estimation. To minimize the number of labels crowdsourced from the annotators, we adopt an active learning approach. In this specific context, we prove the equivalence of well-studied criteria of active learning like entropy minimization and expected error reduction. Interestingly, we observe that we can decouple the problems of identifying an optimal unlabeled instance and identifying an annotator to label it. We observe a useful connection between the multi-armed bandit framework and the annotator selection in active learning. Due to the nature of the distribution of the rewards on the arms, we use the Robust Upper Confidence Bound (UCB) scheme with a truncated empirical mean estimator to solve the annotator selection problem. This yields provable guarantees on the regret. We further apply our model to the scenario where annotators are strategic and design suitable incentives to induce them to put in their best efforts.
Tasks Active Learning
Published 2016-01-25
URL http://arxiv.org/abs/1601.06750v2
PDF http://arxiv.org/pdf/1601.06750v2.pdf
PWC https://paperswithcode.com/paper/a-robust-ucb-scheme-for-active-learning-in
Repo
Framework
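
A schematic sketch of the annotator-selection component: a Robust UCB loop with a truncated empirical mean estimator for heavy-tailed rewards. The truncation threshold and confidence radius follow the general form used in the Robust UCB literature, but the constants and the reward model here are illustrative, not the paper's.

```python
import numpy as np

def truncated_mean(samples, u=1.0, eps=1.0, delta=0.05):
    """Truncated empirical mean: samples above a growing threshold are zeroed out."""
    x = np.asarray(samples, dtype=float)
    s = np.arange(1, len(x) + 1)
    thresh = (u * s / np.log(1.0 / delta)) ** (1.0 / (1.0 + eps))
    return float(np.mean(np.where(np.abs(x) <= thresh, x, 0.0)))

def robust_ucb(arms, horizon=2000, u=1.0, eps=1.0):
    """Illustrative Robust UCB loop (confidence-radius constants simplified)."""
    rewards = [[] for _ in arms]
    for t in range(horizon):
        if t < len(arms):
            a = t                                        # pull each arm once to start
        else:
            delta = 1.0 / (t + 1) ** 2
            scores = []
            for r in rewards:
                n = len(r)
                radius = 4 * u ** (1 / (1 + eps)) * (np.log(1 / delta) / n) ** (eps / (1 + eps))
                scores.append(truncated_mean(r, u, eps, delta) + radius)
            a = int(np.argmax(scores))
        rewards[a].append(arms[a]())                     # heavy-tailed annotator reward
    return [len(r) for r in rewards]

rng = np.random.default_rng(0)
# Two annotator "arms" with heavy-tailed noise; the second has the higher mean quality.
arms = [lambda: 0.4 + 0.1 * rng.standard_t(3), lambda: 0.6 + 0.1 * rng.standard_t(3)]
print(robust_ucb(arms))   # pull counts; the better annotator should dominate
```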

Deeply supervised salient object detection with short connections

Title Deeply supervised salient object detection with short connections
Authors Qibin Hou, Ming-Ming Cheng, Xiao-Wei Hu, Ali Borji, Zhuowen Tu, Philip Torr
Abstract Recent progress on saliency detection is substantial, benefiting mostly from the explosive development of Convolutional Neural Networks (CNNs). Semantic segmentation and saliency detection algorithms developed lately have been mostly based on Fully Convolutional Neural Networks (FCNs). There is still substantial room for improvement over the generic FCN models that do not explicitly deal with the scale-space problem. The Holistically-Nested Edge Detector (HED) provides a skip-layer structure with deep supervision for edge and boundary detection, but the performance gain of HED on saliency detection is not obvious. In this paper, we propose a new method for saliency detection by introducing short connections to the skip-layer structures within the HED architecture. Our framework provides rich multi-scale feature maps at each layer, a property that is critically needed to perform segment detection. Our method produces state-of-the-art results on five widely tested salient object detection benchmarks, with advantages in terms of efficiency (0.15 seconds per image), effectiveness, and simplicity over the existing algorithms.
Tasks Boundary Detection, Object Detection, Saliency Detection, Salient Object Detection, Semantic Segmentation
Published 2016-11-15
URL http://arxiv.org/abs/1611.04849v4
PDF http://arxiv.org/pdf/1611.04849v4.pdf
PWC https://paperswithcode.com/paper/deeply-supervised-salient-object-detection
Repo
Framework
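
A toy PyTorch sketch of the short-connection idea: each side output also receives the upsampled predictions of all deeper side outputs before the final fusion. The channel sizes, fusion rule, and module name are arbitrary choices of mine, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShortConnectionFusion(nn.Module):
    """Toy illustration of short connections between side outputs."""

    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        self.score = nn.ModuleList([nn.Conv2d(c, 1, kernel_size=1) for c in channels])

    def forward(self, feats):
        # feats: backbone feature maps ordered shallow -> deep, each (N, C_i, H_i, W_i)
        side = []
        for i in reversed(range(len(feats))):          # start from the deepest side output
            s = self.score[i](feats[i])
            for deeper in side:                        # short connections from deeper layers
                s = s + F.interpolate(deeper, size=s.shape[-2:],
                                      mode="bilinear", align_corners=False)
            side.append(s)
        full = side[-1].shape[-2:]                     # finest (shallowest) resolution
        fused = sum(F.interpolate(s, size=full, mode="bilinear", align_corners=False)
                    for s in side)
        return torch.sigmoid(fused / len(side))

feats = [torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32), torch.randn(1, 256, 16, 16)]
print(ShortConnectionFusion()(feats).shape)            # torch.Size([1, 1, 64, 64])
```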

Localized Coulomb Descriptors for the Gaussian Approximation Potential

Title Localized Coulomb Descriptors for the Gaussian Approximation Potential
Authors James Barker, Johannes Bulin, Jan Hamaekers, Sonja Mathias
Abstract We introduce a novel class of localized atomic environment representations, based upon the Coulomb matrix. By combining these functions with the Gaussian approximation potential approach, we present LC-GAP, a new system for generating atomic potentials through machine learning (ML). Tests on the QM7, QM7b and GDB9 biomolecular datasets demonstrate that potentials created with LC-GAP can successfully predict atomization energies for molecules larger than those used for training to chemical accuracy, and can (in the case of QM7b) also be used to predict a range of other atomic properties with accuracy in line with the recent literature. As the best-performing representation has only linear dimensionality in the number of atoms in a local atomic environment, this represents an improvement both in prediction accuracy and computational cost when considered against similar Coulomb matrix-based methods.
Tasks
Published 2016-11-16
URL http://arxiv.org/abs/1611.05126v2
PDF http://arxiv.org/pdf/1611.05126v2.pdf
PWC https://paperswithcode.com/paper/localized-coulomb-descriptors-for-the
Repo
Framework
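
A sketch of a localized Coulomb-matrix style descriptor: the standard Coulomb matrix restricted to atoms within a cutoff of a central atom, with a norm-based ordering for permutation invariance. The cutoff, sorting, and flattening are illustrative choices of my own, not necessarily the LC-GAP representation.

```python
import numpy as np

def localized_coulomb_descriptor(charges, positions, center, cutoff=4.0):
    """Coulomb matrix of the atoms within `cutoff` of the central atom."""
    Z = np.asarray(charges, dtype=float)
    R = np.asarray(positions, dtype=float)
    d_to_center = np.linalg.norm(R - R[center], axis=1)
    idx = np.where(d_to_center <= cutoff)[0]
    Z, R = Z[idx], R[idx]
    n = len(idx)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, j] = 0.5 * Z[i] ** 2.4                    # standard diagonal term
            else:
                M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    order = np.argsort(-np.linalg.norm(M, axis=1))             # permutation-invariant ordering
    return M[order][:, order].flatten()

# Methane-like toy environment centered on the carbon atom.
charges = [6, 1, 1, 1, 1]
positions = [[0, 0, 0], [0.63, 0.63, 0.63], [-0.63, -0.63, 0.63],
             [-0.63, 0.63, -0.63], [0.63, -0.63, -0.63]]
print(localized_coulomb_descriptor(charges, positions, center=0).shape)
```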

An Improved Intelligent Agent for Mining Real-Time Databases Using Modified Cortical Learning Algorithms

Title An Improved Intelligent Agent for Mining Real-Time Databases Using Modified Cortical Learning Algorithms
Authors N. E. Osegi
Abstract Cortical Learning Algorithms based on Hierarchical Temporal Memory (HTM) have been developed by Numenta Inc., and variations and modifications of them are currently being investigated. HTM holds promise as a future computational model of the neocortex, the seat of intelligence in the brain. Currently, intelligent agents are embedded in almost every modern electronic system found in homes, offices, and industries worldwide. In this paper, we present a first step towards realising useful HTM-like applications, specifically for mining a synthetic and a real-time dataset based on a novel intelligent agent framework, and we demonstrate how a modified version of this very important computational technique leads to improved recognition.
Tasks
Published 2016-01-02
URL http://arxiv.org/abs/1601.00191v1
PDF http://arxiv.org/pdf/1601.00191v1.pdf
PWC https://paperswithcode.com/paper/an-improved-intelligent-agent-for-mining-real
Repo
Framework