January 28, 2020

Paper Group ANR 952

Backprop Diffusion is Biologically Plausible

Title Backprop Diffusion is Biologically Plausible
Authors Alessandro Betti, Marco Gori
Abstract The Backpropagation algorithm relies on the abstraction of using a neural model that gets rid of the notion of time, since the input is mapped instantaneously to the output. In this paper, we claim that this abstraction of ignoring time, together with the abrupt input changes that occur when feeding the training set, is in fact the reason why, in some papers, the biological plausibility of Backprop is regarded as questionable. We show that as soon as a deep feedforward network operates with neurons with time-delayed responses, the Backprop weight update turns out to be the basic equation of a biologically plausible diffusion process based on forward-backward waves. We also show that such a process approximates the gradient very well for inputs that are not too fast with respect to the depth of the network. These remarks disclose the diffusion process behind the Backprop equation and lead us to interpret the corresponding algorithm as a degeneration of a more general diffusion process that also takes place in neural networks with cyclic connections.
Tasks
Published 2019-12-10
URL https://arxiv.org/abs/1912.04635v1
PDF https://arxiv.org/pdf/1912.04635v1.pdf
PWC https://paperswithcode.com/paper/backprop-diffusion-is-biologically-plausible
Repo
Framework
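
For reference, here is a minimal numpy sketch of the standard, time-free Backprop update on a two-layer network, i.e. the forward and backward waves that the paper reinterprets as a diffusion process. The architecture, loss, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

# Two-layer network; standard time-free backprop, i.e. the update the
# paper re-derives as the degenerate limit of a diffusion process.
rng = np.random.default_rng(0)
x = rng.normal(size=3)                   # input
y = np.array([1.0])                      # target
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))
eta = 0.1                                # learning rate

# Forward wave: the input is mapped instantaneously to the output.
h = np.tanh(W1 @ x)
y_hat = W2 @ h

# Backward wave: the error propagates back through the layers.
delta2 = y_hat - y                       # dL/dy_hat for squared loss
delta1 = (W2.T @ delta2) * (1 - h**2)    # chain rule through tanh

# Backprop weight update.
W2 -= eta * np.outer(delta2, h)
W1 -= eta * np.outer(delta1, x)
```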

Real-time Multiple People Hand Localization in 4D Point Clouds

Title Real-time Multiple People Hand Localization in 4D Point Clouds
Authors Hao Jiang, Quanzeng You
Abstract We propose a novel real-time algorithm to localize hands and associate them with multiple people in cluttered 4D volumetric data (dynamic 3D volumes). Unlike traditional multi-view approaches, which find key points in 2D and then triangulate to recover the 3D locations, our method directly processes the dynamic 3D data, which involve both clutter and crowds. The volumetric representation is more desirable than partial observations from different viewpoints and enables more robust and accurate results. However, due to the large amount of data in the volumetric representation, brute-force 3D schemes are slow. In this paper, we propose novel real-time methods that achieve both higher accuracy and faster speed than previous approaches. Our method detects the 3D bounding box of each subject and localizes the hands of each person. We develop new 2D features for fast candidate proposals and optimize the trajectory linking using a new max-covering bipartite matching formulation, which is critical for robust performance. We also propose a novel decomposition method that reduces key point localization in each person's 3D volume to a sequence of efficient 2D problems. Our experiments show that the proposed method is faster than competing methods and gives almost half the localization error.
Tasks
Published 2019-03-05
URL http://arxiv.org/abs/1903.01695v1
PDF http://arxiv.org/pdf/1903.01695v1.pdf
PWC https://paperswithcode.com/paper/real-time-multiple-people-hand-localization
Repo
Framework
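
The paper's max-covering bipartite matching formulation is not spelled out in the abstract; for reference, here is a sketch of the standard min-cost bipartite matching baseline for frame-to-frame trajectory linking, using SciPy's assignment solver. The distance threshold and point data are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_detections(prev_pts, curr_pts, max_dist=0.3):
    """Frame-to-frame track linking via min-cost bipartite matching on
    3D distances. The paper's max-covering formulation differs; this is
    the standard baseline it builds on."""
    cost = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Keep only plausible links; unmatched detections start new tracks.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

prev = np.array([[0.0, 0.0, 1.0], [1.0, 0.5, 1.2]])
curr = np.array([[0.05, 0.02, 1.0], [0.9, 0.55, 1.2], [3.0, 3.0, 3.0]])
print(link_detections(prev, curr))   # -> [(0, 0), (1, 1)]
```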

DeepJSCC-f: Deep Joint Source-Channel Coding of Images with Feedback

Title DeepJSCC-f: Deep Joint Source-Channel Coding of Images with Feedback
Authors David Burth Kurka, Deniz Gündüz
Abstract We consider wireless transmission of images in the presence of channel output feedback. From a Shannon-theoretic perspective, feedback does not improve the asymptotic end-to-end performance, and separate source coding followed by capacity-achieving channel coding achieves the optimal performance. Although it is well known that separation is not optimal in the practical finite-blocklength regime, there are no known practical joint source-channel coding (JSCC) schemes that can exploit the feedback signal and surpass the performance of separate schemes. Inspired by the recent success of deep learning methods for JSCC, we investigate how noiseless or noisy channel output feedback can be incorporated into the transmission system to improve the reconstruction quality at the receiver. We introduce an autoencoder-based deep JSCC scheme that exploits the channel output feedback and provides considerable improvements in terms of the end-to-end reconstruction quality for fixed-length transmission, or in terms of the average delay for variable-length transmission. To the best of our knowledge, this is the first practical JSCC scheme that can fully exploit channel output feedback, demonstrating yet another setting in which modern machine learning techniques can enable the design of new and efficient communication methods that surpass the performance of traditional structured coding-based designs.
Tasks
Published 2019-11-25
URL https://arxiv.org/abs/1911.11174v1
PDF https://arxiv.org/pdf/1911.11174v1.pdf
PWC https://paperswithcode.com/paper/deepjscc-f-deep-joint-source-channel-coding
Repo
Framework
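
The gain from channel output feedback can be illustrated with a classical linear stand-in for the learned encoder and decoder: a second channel use is spent on the residual that the transmitter can compute once it has seen the channel output (the Schalkwijk-Kailath idea that DeepJSCC-f generalizes with learned codes). The noise level and block size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.3                          # channel noise std per channel use

def channel(z):
    """Unit-power AWGN channel use: normalize z to unit power, add noise."""
    scale = np.sqrt(np.mean(z ** 2))
    return z / scale + sigma * rng.normal(size=z.shape), scale

x = rng.normal(size=256)             # stand-in for an image block
y1, s1 = channel(x)
est1 = y1 * s1                       # receiver's first estimate
# With channel-output feedback the transmitter knows y1 (hence est1) and
# spends the next channel use on the residual, whose low power is
# amplified by the unit-power normalization before the noise is added.
y2, s2 = channel(x - est1)
est2 = est1 + y2 * s2                # receiver undoes the scaling
print(np.mean((x - est1) ** 2), np.mean((x - est2) ** 2))  # MSE drops
```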

A Hamilton-Jacobi Reachability-Based Framework for Predicting and Analyzing Human Motion for Safe Planning

Title A Hamilton-Jacobi Reachability-Based Framework for Predicting and Analyzing Human Motion for Safe Planning
Authors Somil Bansal, Andrea Bajcsy, Ellis Ratner, Anca D. Dragan, Claire J. Tomlin
Abstract Real-world autonomous systems often employ probabilistic predictive models of human behavior during planning to reason about their future motion. Since accurately modeling human behavior a priori is challenging, such models are often parameterized, enabling the robot to adapt predictions based on observations by maintaining a distribution over the model parameters. This leads to a probabilistic prediction problem which, though attractive, can be computationally demanding. In this work, we formalize the prediction problem as a stochastic reachability problem in the joint state space of the human and the belief over the model parameters. We further introduce a Hamilton-Jacobi reachability framework that constructs a deterministic approximation of this stochastic reachability problem by restricting the allowable actions to a set rather than a distribution, while still maintaining the belief as an explicit state. This leads to two advantages: our approach gives rise to a novel predictor whose predictions can be performed at significantly lower computational expense, and to a general framework that also enables us to perform predictor analysis. We compare our approach to a fully stochastic predictor using Bayesian inference and to the worst-case forward reachable set, in simulation and in hardware, and demonstrate how it can enable robust planning while not being overly conservative, even when the human model is inaccurate.
Tasks Bayesian Inference
Published 2019-10-29
URL https://arxiv.org/abs/1910.13369v1
PDF https://arxiv.org/pdf/1910.13369v1.pdf
PWC https://paperswithcode.com/paper/191013369
Repo
Framework
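
The fully stochastic baseline the paper compares against maintains a Bayesian belief over human-model parameters. Below is a toy sketch of such a belief update over a discrete goal parameter under an assumed Boltzmann-rational action model; all dynamics and parameters are illustrative.

```python
import numpy as np

# Discrete belief over a human-model parameter theta (here, which goal
# the human is heading to), updated from observed actions.
thetas = np.array([0.0, 1.0])          # two hypothesized goal positions
belief = np.array([0.5, 0.5])          # prior over theta

def likelihood(action, state, theta, beta=5.0):
    """Boltzmann-rational action model: actions that reduce distance
    to the hypothesized goal are exponentially more likely."""
    candidates = np.array([-0.1, 0.0, 0.1])
    utils = -np.abs(state + candidates - theta)
    p = np.exp(beta * utils)
    p /= p.sum()
    return p[np.argmin(np.abs(candidates - action))]

state = 0.4
for action in (0.1, 0.1, 0.1):          # human keeps moving toward goal 1.0
    belief *= [likelihood(action, state, t) for t in thetas]
    belief /= belief.sum()
    state += action
print(belief)                            # mass shifts toward theta = 1.0
```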

Quantum Latent Semantic Analysis

Title Quantum Latent Semantic Analysis
Authors Fabio A. González, Juan C. Caicedo
Abstract The main goal of this paper is to explore latent topic analysis (LTA) in the context of quantum information retrieval. LTA is a valuable technique for document analysis and representation, which has been extensively used in information retrieval and machine learning. Different LTA techniques have been proposed, some based on geometrical modeling (such as latent semantic analysis, LSA) and others based on a strong statistical foundation. However, these two approaches are not usually mixed. Quantum information retrieval has the remarkable virtue of combining both geometry and probability in a common principled framework. We build on this quantum framework to propose a new LTA method, which has a clear geometrical motivation but also supports a well-founded probabilistic interpretation. An initial exploratory experimentation was performed on three standard datasets. The results show that the proposed method outperforms LSA on two of the three datasets. These results suggest that the quantum-motivated representation is an alternative for geometrical latent topic modeling worthy of further exploration.
Tasks Information Retrieval
Published 2019-03-07
URL http://arxiv.org/abs/1903.03082v1
PDF http://arxiv.org/pdf/1903.03082v1.pdf
PWC https://paperswithcode.com/paper/quantum-latent-semantic-analysis
Repo
Framework
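
For orientation, here is the classical LSA baseline the paper compares against, computed as a truncated SVD of the term-document matrix with scikit-learn; the quantum-motivated representation itself is not sketched here.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# Classical LSA: truncated SVD of the documents-by-terms count matrix.
docs = ["quantum theory of information",
        "retrieval of documents by topic",
        "quantum information retrieval"]
X = CountVectorizer().fit_transform(docs)        # documents x terms
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_topics = lsa.fit_transform(X)                # documents in latent space
print(np.round(doc_topics, 2))
```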

Parity Models: A General Framework for Coding-Based Resilience in ML Inference

Title Parity Models: A General Framework for Coding-Based Resilience in ML Inference
Authors Jack Kosaian, K. V. Rashmi, Shivaram Venkataraman
Abstract Machine learning models are becoming the primary workhorses for many applications. Production services deploy models through prediction serving systems that take in queries and return predictions by performing inference on machine learning models. In order to scale to high query rates, prediction serving systems are run on many machines in cluster settings, and thus are prone to slowdowns and failures that inflate tail latency and cause violations of strict latency targets. Current approaches to reducing tail latency are inadequate for the latency targets of prediction serving, incur high resource overhead, or are inapplicable to the computations performed during inference. We present ParM, a novel, general framework that draws on ideas from erasure coding and machine learning to achieve low-latency, resource-efficient resilience to slowdowns and failures in prediction serving systems. ParM encodes multiple queries together into a single parity query and performs inference on the parity query using a parity model. A decoder uses the output of the parity model to reconstruct approximations of unavailable predictions. ParM uses neural networks to learn parity models that enable simple, fast encoders and decoders to reconstruct unavailable predictions for a variety of inference tasks such as image classification, speech recognition, and object localization. We build ParM atop an open-source prediction serving system and through extensive evaluation show that ParM improves overall accuracy in the face of unavailability with low latency while using 2-4$\times$ fewer additional resources than replication-based approaches. ParM reduces the gap between 99.9th percentile and median latency by up to $3.5\times$ compared to approaches that use an equal amount of resources, while maintaining the same median latency.
Tasks Image Classification, Object Localization, Speech Recognition
Published 2019-05-02
URL https://arxiv.org/abs/1905.00863v2
PDF https://arxiv.org/pdf/1905.00863v2.pdf
PWC https://paperswithcode.com/paper/parity-models-a-general-framework-for-coding
Repo
Framework
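
A minimal sketch of the parity-coding idea, assuming the simple additive encoder and subtractive decoder suggested by the erasure-coding framing, with k=2 queries per parity query. The deployed model and parity model below are affine stand-ins; ParM instead learns the parity model with a neural network.

```python
import numpy as np

def f(x):                     # the deployed model (stand-in: affine map)
    return 2.0 * x + 1.0

def parity_model(xp):         # ideally learned so parity_model(x1 + x2)
    return 2.0 * xp + 2.0     # ~= f(x1) + f(x2); exact here since f is affine

x1, x2 = np.array([1.0, 2.0]), np.array([3.0, -1.0])
xp = x1 + x2                  # encoder: one parity query for k = 2 queries

# Suppose the server holding x2's prediction is slow or has failed:
available = f(x1)
reconstructed = parity_model(xp) - available   # decoder
print(reconstructed, "vs true", f(x2))         # matches for this f
```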

Machine Learning from a Continuous Viewpoint

Title Machine Learning from a Continuous Viewpoint
Authors Weinan E, Chao Ma, Lei Wu
Abstract We present a continuous formulation of machine learning, as a problem in the calculus of variations and differential-integral equations, very much in the spirit of classical numerical analysis and statistical physics. We demonstrate that conventional machine learning models and algorithms, such as the random feature model, the shallow neural network model and the residual neural network model, can all be recovered as particular discretizations of different continuous formulations. We also present examples of new models, such as the flow-based random feature model, and new algorithms, such as the smoothed particle method and spectral method, that arise naturally from this continuous formulation. We discuss how the issues of generalization error and implicit regularization can be studied under this framework.
Tasks
Published 2019-12-30
URL https://arxiv.org/abs/1912.12777v1
PDF https://arxiv.org/pdf/1912.12777v1.pdf
PWC https://paperswithcode.com/paper/machine-learning-from-a-continuous-viewpoint
Repo
Framework
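
As one concrete instance, the random feature model mentioned in the abstract can be read as a Monte Carlo discretization of a continuous representation f(x) = E_{w~pi}[a(w) sigma(w . x)]: sample m features and fit only the outer coefficients. The data, activation, and ridge penalty below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 2, 100
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)   # toy target

W = rng.normal(size=(m, d))                      # fixed random features w_j
Phi = np.tanh(X @ W.T)                           # sigma(w_j . x_i)
# Ridge-regularized least squares for the outer coefficients a_j.
lam = 1e-3
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
pred = Phi @ a
print("train MSE:", np.mean((pred - y) ** 2))
```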

Recruitment-imitation Mechanism for Evolutionary Reinforcement Learning

Title Recruitment-imitation Mechanism for Evolutionary Reinforcement Learning
Authors Shuai Lü, Shuai Han, Wenbo Zhou, Junwei Zhang
Abstract Reinforcement learning, evolutionary algorithms and imitation learning are three principal methods for continuous control tasks. Reinforcement learning is sample efficient, yet sensitive to hyperparameter settings and in need of efficient exploration; evolutionary algorithms are stable, but have low sample efficiency; imitation learning is both sample efficient and stable, but requires the guidance of expert data. In this paper, we propose the Recruitment-imitation Mechanism (RIM) for evolutionary reinforcement learning, a scalable framework that combines the advantages of all three methods. The core of this framework is a dual-actor, single-critic reinforcement learning agent. This agent can recruit high-fitness actors from the population of the evolutionary algorithm, which guide its learning from the experience replay buffer. At the same time, low-fitness actors in the evolutionary population can imitate the behavior patterns of the reinforcement learning agent and improve their adaptability. The reinforcement and imitation learners in this framework can be replaced with any off-policy actor-critic reinforcement learner or data-driven imitation learner. We evaluate RIM on a series of benchmarks for continuous control tasks in MuJoCo. The experimental results show that RIM outperforms prior evolutionary and reinforcement learning methods. The performance of RIM's components is significantly better than that of the components of previous evolutionary reinforcement learning algorithms, and recruitment using a soft update enables the reinforcement learning agent to learn faster than a hard update does.
Tasks Continuous Control, Efficient Exploration, Imitation Learning
Published 2019-12-13
URL https://arxiv.org/abs/1912.06310v1
PDF https://arxiv.org/pdf/1912.06310v1.pdf
PWC https://paperswithcode.com/paper/recruitment-imitation-mechanism-for
Repo
Framework
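
The soft-versus-hard recruitment update mentioned in the abstract can be sketched in a few lines; the parameter names and blending coefficient tau are illustrative, not taken from the paper.

```python
import numpy as np

# Recruiting a high-fitness evolutionary actor into the RL agent: a hard
# update copies the recruited weights outright, while a soft update
# blends them in with coefficient tau (the variant the abstract reports
# learns faster).
def hard_update(rl_params, recruited_params):
    return {k: recruited_params[k].copy() for k in rl_params}

def soft_update(rl_params, recruited_params, tau=0.1):
    return {k: (1 - tau) * rl_params[k] + tau * recruited_params[k]
            for k in rl_params}

rl = {"W": np.zeros((2, 2))}
elite = {"W": np.ones((2, 2))}
print(hard_update(rl, elite)["W"][0, 0])   # 1.0
print(soft_update(rl, elite)["W"][0, 0])   # 0.1
```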

Self-calibrating Deep Photometric Stereo Networks

Title Self-calibrating Deep Photometric Stereo Networks
Authors Guanying Chen, Kai Han, Boxin Shi, Yasuyuki Matsushita, Kwan-Yee K. Wong
Abstract This paper proposes an uncalibrated photometric stereo method for non-Lambertian scenes based on deep learning. Unlike previous approaches that heavily rely on assumptions of specific reflectances and light source distributions, our method is able to determine both shape and light directions of a scene with unknown arbitrary reflectances observed under unknown varying light directions. To achieve this goal, we propose a two-stage deep learning architecture, called SDPS-Net, which can effectively take advantage of intermediate supervision, resulting in reduced learning difficulty compared to a single-stage model. Experiments on both synthetic and real datasets show that our proposed approach significantly outperforms previous uncalibrated photometric stereo methods.
Tasks
Published 2019-03-18
URL http://arxiv.org/abs/1903.07366v1
PDF http://arxiv.org/pdf/1903.07366v1.pdf
PWC https://paperswithcode.com/paper/self-calibrating-deep-photometric-stereo
Repo
Framework
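
For context, here is the calibrated Lambertian baseline that uncalibrated methods such as SDPS-Net generalize: with known light directions, the albedo-scaled normal of a pixel follows from linear least squares. The light configuration below is synthetic.

```python
import numpy as np

# Calibrated Lambertian photometric stereo: I = L @ (albedo * n), with
# L the known light directions. The uncalibrated setting tackled by
# SDPS-Net must estimate L as well; here it is given.
rng = np.random.default_rng(0)
L = rng.normal(size=(8, 3))
L[:, 2] = np.abs(L[:, 2]) + 0.5                 # lights in the upper hemisphere
L /= np.linalg.norm(L, axis=1, keepdims=True)

n_true, albedo = np.array([0.0, 0.0, 1.0]), 0.7
I = L @ (albedo * n_true)                       # shading of one pixel, no shadows

b, *_ = np.linalg.lstsq(L, I, rcond=None)       # recover albedo * normal
print("albedo:", np.linalg.norm(b))             # ~0.7
print("normal:", b / np.linalg.norm(b))         # ~[0, 0, 1]
```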

Meta-Graph Based HIN Spectral Embedding: Methods, Analyses, and Insights

Title Meta-Graph Based HIN Spectral Embedding: Methods, Analyses, and Insights
Authors Carl Yang, Yichen Feng, Pan Li, Yu Shi, Jiawei Han
Abstract In this work, we propose to study the utility of different meta-graphs, as well as how to simultaneously leverage multiple meta-graphs for HIN embedding in an unsupervised manner. Motivated by prolific research on homogeneous networks, especially spectral graph theory, we first conduct a systematic empirical study of the spectrum and embedding quality of different meta-graphs on multiple HINs, which leads to an efficient method of meta-graph assessment. It also helps us gain valuable insight into the higher-order organization of HINs and indicates a practical way of selecting useful embedding dimensions. Further, we explore the challenges of combining multiple meta-graphs to capture the multi-dimensional semantics in HINs through reasoning from mathematical geometry, and arrive at an embedding compression method based on an autoencoder with $\ell_{2,1}$-loss, which finds the most informative meta-graphs and embeddings in an end-to-end unsupervised manner. Finally, empirical analysis suggests a unified workflow to close the gap between our meta-graph assessment and combination methods. To the best of our knowledge, this is the first research effort to provide rich theoretical and empirical analyses of the utility of meta-graphs and their combinations, especially regarding HIN embedding. Extensive experimental comparisons with various state-of-the-art neural-network-based embedding methods on multiple real-world HINs demonstrate the effectiveness and efficiency of our framework in finding useful meta-graphs and generating high-quality HIN embeddings.
Tasks
Published 2019-09-29
URL https://arxiv.org/abs/1910.00004v1
PDF https://arxiv.org/pdf/1910.00004v1.pdf
PWC https://paperswithcode.com/paper/meta-graph-based-hin-spectral-embedding
Repo
Framework
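
The $\ell_{2,1}$-loss mentioned in the abstract is the sum of row-wise $\ell_2$ norms; it drives whole rows to zero and can thereby deselect entire meta-graph embedding blocks (the row-to-meta-graph mapping here is an assumption for illustration):

```python
import numpy as np

# l_{2,1} norm: sum of the l2 norms of the rows. Zero rows drop out of
# the sum entirely, which is what makes it a row-selection penalty.
def l21_norm(M):
    return np.sum(np.linalg.norm(M, axis=1))

M = np.array([[3.0, 4.0],     # row norm 5 -> kept
              [0.0, 0.0],     # row norm 0 -> pruned (meta-graph dropped)
              [1.0, 0.0]])    # row norm 1
print(l21_norm(M))            # 6.0
```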

Application of image processing in an optical method, Moiré deflectometry, for investigating the optical properties of zinc oxide nanoparticles

Title Application of image processing in an optical method, Moiré deflectometry, for investigating the optical properties of zinc oxide nanoparticles
Authors Fatemeh Jamal, Fatemeh Ahmadi, Mohammad Khanzadeh, Saber Malekzadeh
Abstract Spectrophotometric and mechanical methods are commonly used to measure the refractive index of nanomaterials, but they are expensive and indirect. In this paper, a simple optical method based on wavefront analysis and geometric optics, Moiré deflectometry, is used to measure this parameter for zinc oxide nanomaterials with two different stabilizers. In the Moiré deflectometry method, the beam of a laser diode passes through the sample; a change in the sample environment is then observed as deflections of the fringes. By recording these deflections with a CCD and processing the images in MATLAB, the refractive indices of the nanomaterials can be calculated. Due to the high accuracy of this method and the improved image-processing code in MATLAB, the smallest changes of the refractive index in the sample can be measured. Digital image processing is used to select and display the features of interest. The results obtained with this method show a clear improvement over the other methods used.
Tasks
Published 2019-01-02
URL http://arxiv.org/abs/1902.01196v1
PDF http://arxiv.org/pdf/1902.01196v1.pdf
PWC https://paperswithcode.com/paper/application-of-image-processing-in-optical
Repo
Framework
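
One standard way to quantify fringe deflections of this kind is to read the fringe shift off the phase of the fundamental Fourier component of an intensity profile. The paper's MATLAB code is not available, so the Python sketch below only illustrates this generic fringe-analysis step on synthetic data.

```python
import numpy as np

# A cosine fringe pattern shifted by `shift_true` pixels; the shift is
# recovered from the phase of the FFT bin at the fringe frequency.
N, period = 512, 64.0
x = np.arange(N)
shift_true = 7.5
profile = 1 + np.cos(2 * np.pi * (x - shift_true) / period)

k = int(round(N / period))                 # index of the fringe frequency
phase = np.angle(np.fft.fft(profile)[k])
shift_est = -phase * period / (2 * np.pi)
print(shift_est)                           # ~7.5
```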

Censored and Fair Universal Representations using Generative Adversarial Models

Title Censored and Fair Universal Representations using Generative Adversarial Models
Authors Peter Kairouz, Jiachun Liao, Chong Huang, Lalitha Sankar
Abstract We present a data-driven framework for learning \textit{censored and fair universal representations} (CFUR) that ensure statistical fairness guarantees for all downstream learning tasks that may not be known \textit{a priori}. Our framework leverages recent advancements in adversarial learning to allow a data holder to learn censored and fair representations that decouple a set of sensitive attributes from the rest of the dataset. The resulting problem of finding the optimal randomizing mechanism with specific fairness/censoring guarantees is formulated as a constrained minimax game between an encoder and an adversary, where the constraint ensures a measure of usefulness (utility) of the representation. We show that for appropriately chosen adversarial loss functions, our framework enables defining demographic parity for fair representations and also clarifies the optimal adversarial strategy against strong information-theoretic adversaries. We evaluate the performance of our proposed framework on multi-dimensional Gaussian mixture models and publicly available datasets including the UCI Census, GENKI, Human Activity Recognition (HAR), and UTKFace datasets. Our experimental results show that multiple sensitive features can be effectively censored while ensuring accuracy for several \textit{a priori} unknown downstream tasks. Finally, our results also make precise the tradeoff between censoring and fidelity for the representation, as well as the fairness-utility tradeoffs for downstream tasks.
Tasks Activity Recognition, Human Activity Recognition
Published 2019-09-27
URL https://arxiv.org/abs/1910.00411v5
PDF https://arxiv.org/pdf/1910.00411v5.pdf
PWC https://paperswithcode.com/paper/learning-generative-adversarial
Repo
Framework
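
A heavily simplified sketch of the constrained minimax game: a linear encoder is trained by alternating gradient steps to confuse a logistic adversary that predicts the sensitive attribute, with a distortion penalty standing in for the utility constraint. The architectures, losses, and coefficients are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 3
s = rng.integers(0, 2, size=n).astype(float)      # sensitive attribute
x = rng.normal(size=(n, d)) + 1.5 * s[:, None]    # every coordinate leaks s

E = np.eye(d)                      # linear encoder, z = x @ E
w = np.zeros(d)                    # logistic adversary weights
lr, lam = 0.1, 0.5                 # step size, distortion weight

for _ in range(300):
    z = x @ E
    p = 1 / (1 + np.exp(-(z @ w)))
    w -= lr * z.T @ (p - s) / n            # adversary minimizes its CE loss
    dL_dz = np.outer(p - s, w) / n         # d(adversary loss)/dz
    grad_E = x.T @ dL_dz                   # chain rule to the encoder
    dist_E = 2 * x.T @ (z - x) / n         # gradient of mean ||z - x||^2
    E += lr * (grad_E - lam * dist_E)      # encoder maximizes adversary loss

acc = np.mean(((x @ E @ w) > 0) == s)
print("adversary accuracy:", acc)          # pushed toward chance for small lam
```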

ABOUT ML: Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles

Title ABOUT ML: Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles
Authors Inioluwa Deborah Raji, Jingying Yang
Abstract We present the “Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles” (ABOUT ML) project as an initiative to operationalize ML transparency and work towards a standard ML documentation practice. We make the case for the project’s relevance and effectiveness in consolidating disparate efforts across a variety of stakeholders, as well as bringing in the perspectives of currently missing voices that will be valuable in shaping future conversations. We describe the details of the initiative and the gaps we hope this project will help address.
Tasks
Published 2019-12-12
URL https://arxiv.org/abs/1912.06166v3
PDF https://arxiv.org/pdf/1912.06166v3.pdf
PWC https://paperswithcode.com/paper/about-ml-annotation-and-benchmarking-on
Repo
Framework

Minimum Volume Topic Modeling

Title Minimum Volume Topic Modeling
Authors Byoungwook Jang, Alfred Hero
Abstract We propose a new topic modeling procedure that takes advantage of the fact that the Latent Dirichlet Allocation (LDA) log likelihood function is asymptotically equivalent to the logarithm of the volume of the topic simplex. This allows topic modeling to be reformulated as finding the probability simplex that minimizes its volume and encloses the documents that are represented as distributions over words. A convex relaxation of the minimum volume topic model optimization is proposed, and it is shown that the relaxed problem has the same global minimum as the original problem under the separability assumption and the sufficiently scattered assumption introduced by Arora et al. (2013) and Huang et al. (2016). A locally convergent alternating direction method of multipliers (ADMM) approach is introduced for solving the relaxed minimum volume problem. Numerical experiments illustrate the benefits of our approach in terms of computation time and topic recovery performance.
Tasks
Published 2019-04-03
URL http://arxiv.org/abs/1904.02064v1
PDF http://arxiv.org/pdf/1904.02064v1.pdf
PWC https://paperswithcode.com/paper/minimum-volume-topic-modeling
Repo
Framework
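
The reformulation rests on the volume of the simplex spanned by $K$ topic vectors $\beta_k$ (rows of $B$), which equals $\sqrt{\det(A^\top A)}/(K-1)!$ with the edge vectors $\beta_k - \beta_1$ as the columns of $A$; a sketch:

```python
import numpy as np
from math import factorial

def simplex_log_volume(B):
    """Log volume of the (K-1)-simplex with the rows of B as vertices,
    via the Gram determinant of the edge vectors."""
    A = (B[1:] - B[0]).T                   # edge vectors as columns
    gram = A.T @ A
    K = B.shape[0]
    return 0.5 * np.linalg.slogdet(gram)[1] - np.log(factorial(K - 1))

B = np.array([[1.0, 0.0, 0.0],             # 3 topics over 3 words
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(np.exp(simplex_log_volume(B)))       # sqrt(3)/2 ~ 0.866
```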

Practical Newton-Type Distributed Learning using Gradient Based Approximations

Title Practical Newton-Type Distributed Learning using Gradient Based Approximations
Authors Samira Sheikhi
Abstract We study distributed algorithms for expected loss minimization where the datasets are large and have to be stored on different machines. Often we deal with minimizing the average of a set of convex functions, where each function is the empirical risk of the corresponding part of the data. In the distributed setting, where the individual data instances can be accessed only on the local machines, there is a series of rounds of local computations followed by some communication among the machines. Since the cost of communication is usually higher than that of local computation, it is important to reduce it as much as possible. However, we should not allow this to make the computation so expensive that it becomes a burden in practice. Using second-order methods can make the algorithms converge faster and decrease the amount of communication needed. There have been some successful attempts at developing distributed second-order methods. Although these methods have shown fast convergence, their local computation is expensive and leaves room for improvement in practical use. In this study, we modify an existing approach, DANE (Distributed Approximate NEwton), in order to reduce the computational cost while maintaining the accuracy. We tackle this problem by using iterative methods to solve the local subproblems approximately, instead of providing exact solutions in each round of communication. We study how using different iterative methods affects the behavior of the algorithm and try to provide an appropriate tradeoff between the amount of local computation and the required amount of communication. We demonstrate the practicality of our algorithm and compare it to existing distributed gradient-based methods such as SGD.
Tasks
Published 2019-07-22
URL https://arxiv.org/abs/1907.09562v1
PDF https://arxiv.org/pdf/1907.09562v1.pdf
PWC https://paperswithcode.com/paper/practical-newton-type-distributed-learning
Repo
Framework
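
The core modification, solving each round's local subproblem inexactly with an iterative method, can be sketched on a simplified quadratic local problem where the subproblem reduces to a linear system solved with a capped number of conjugate-gradient iterations. The data and iteration counts below are illustrative, not the paper's setup.

```python
import numpy as np
from scipy.sparse.linalg import cg

# Solve the local system H_i u = g inexactly with a capped number of CG
# iterations instead of exactly, trading local computation for a
# slightly less accurate Newton-type direction.
rng = np.random.default_rng(0)
d = 50
A = rng.normal(size=(200, d))
H_local = A.T @ A / 200 + 0.1 * np.eye(d)    # one machine's local Hessian
g = rng.normal(size=d)                       # aggregated global gradient

u_exact = np.linalg.solve(H_local, g)
for iters in (5, 20, 80):
    u_approx, _ = cg(H_local, g, maxiter=iters)
    print(iters, np.linalg.norm(u_approx - u_exact))   # error shrinks
```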