July 27, 2019

2775 words 14 mins read

Paper Group ANR 672


Near Optimal Sketching of Low-Rank Tensor Regression. Visual Multiple-Object Tracking for Unknown Clutter Rate. Recovery of structure of looped jointed objects from multiframes. Ultrafast photonic reinforcement learning based on laser chaos. Guaranteed Parameter Estimation for Discrete Energy Minimization. Face Transfer with Generative Adversarial …

Near Optimal Sketching of Low-Rank Tensor Regression

Title Near Optimal Sketching of Low-Rank Tensor Regression
Authors Jarvis Haupt, Xingguo Li, David P. Woodruff
Abstract We study the least squares regression problem \begin{align*} \min_{\Theta \in \mathcal{S}_{\odot D,R}} \|A\Theta-b\|_2, \end{align*} where $\mathcal{S}_{\odot D,R}$ is the set of $\Theta$ for which $\Theta = \sum_{r=1}^{R} \theta_1^{(r)} \circ \cdots \circ \theta_D^{(r)}$ for vectors $\theta_d^{(r)} \in \mathbb{R}^{p_d}$ for all $r \in [R]$ and $d \in [D]$, and $\circ$ denotes the outer product of vectors. That is, $\Theta$ is a low-dimensional, low-rank tensor. This is motivated by the fact that the number of parameters in $\Theta$ is only $R \cdot \sum_{d=1}^D p_d$, which is significantly smaller than the $\prod_{d=1}^{D} p_d$ number of parameters in ordinary least squares regression. We consider the above CP decomposition model of tensors $\Theta$, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on {\it sparse} random projections $\Phi \in \mathbb{R}^{m \times n}$, with $m \ll n$, to reduce the problem to a much smaller problem $\min_{\Theta} \|\Phi A \Theta - \Phi b\|_2$, for which if $\Theta'$ is a near-optimum to the smaller problem, then it is also a near-optimum to the original problem. We obtain significantly smaller dimension and sparsity in $\Phi$ than is possible for ordinary least squares regression, and we also provide a number of numerical simulations supporting our theory.
Tasks Dimensionality Reduction
Published 2017-09-20
URL http://arxiv.org/abs/1709.07093v1
PDF http://arxiv.org/pdf/1709.07093v1.pdf
PWC https://paperswithcode.com/paper/near-optimal-sketching-of-low-rank-tensor
Repo
Framework
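
To make the sketch-and-solve idea above concrete, here is a minimal NumPy sketch: a sparse, CountSketch-style projection $\Phi$ compresses $(A, b)$ before solving the least squares problem. The CP/Tucker structure of $\Theta$ is omitted, and the dimensions and sketch size are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def countsketch(m, n, rng):
    """Sparse random projection: one +/-1 entry per column (CountSketch-style)."""
    rows = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    S = np.zeros((m, n))
    S[rows, np.arange(n)] = signs
    return S

rng = np.random.default_rng(0)
n, p, m = 5000, 60, 400          # samples, parameters, sketch size (illustrative)
A = rng.standard_normal((n, p))
b = A @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

Phi = countsketch(m, n, rng)
theta_sketch, *_ = np.linalg.lstsq(Phi @ A, Phi @ b, rcond=None)
theta_full, *_ = np.linalg.lstsq(A, b, rcond=None)

print("relative residual gap:",
      np.linalg.norm(A @ theta_sketch - b) / np.linalg.norm(A @ theta_full - b))
```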

Visual Multiple-Object Tracking for Unknown Clutter Rate

Title Visual Multiple-Object Tracking for Unknown Clutter Rate
Authors Du Yong Kim
Abstract In multi-object tracking applications, model parameter tuning is a prerequisite for reliable performance. In particular, it is difficult to know the statistics of false measurements due to various sensing conditions and changes in the field of view. In this paper we are interested in designing a multi-object tracking algorithm that handles an unknown false measurement rate. The recently proposed robust multi-Bernoulli filter is employed for clutter estimation, while the generalized labeled multi-Bernoulli filter is considered for target tracking. Performance evaluation with real videos demonstrates the effectiveness of the tracking algorithm for real-world scenarios.
Tasks Multi-Object Tracking, Multiple Object Tracking, Object Tracking
Published 2017-01-09
URL http://arxiv.org/abs/1701.02273v3
PDF http://arxiv.org/pdf/1701.02273v3.pdf
PWC https://paperswithcode.com/paper/visual-multiple-object-tracking-for-unknown
Repo
Framework

Recovery of structure of looped jointed objects from multiframes

Title Recovery of structure of looped jointed objects from multiframes
Authors Mieczysław Kłopotek
Abstract A method to recover the structural parameters of looped jointed objects from multiframes is developed. Each rigid part of the jointed body needs to be traced at only two points (that is, at the junctions). The method has been linearized for 4-part loops, with recovery from at least 19 frames.
Tasks
Published 2017-05-02
URL http://arxiv.org/abs/1705.01148v1
PDF http://arxiv.org/pdf/1705.01148v1.pdf
PWC https://paperswithcode.com/paper/recovery-of-structure-of-looped-jointed
Repo
Framework

Ultrafast photonic reinforcement learning based on laser chaos

Title Ultrafast photonic reinforcement learning based on laser chaos
Authors Makoto Naruse, Yuta Terashima, Atsushi Uchida, Song-Ju Kim
Abstract Reinforcement learning involves decision making in dynamic and uncertain environments, and constitutes one important element of artificial intelligence (AI). In this paper, we experimentally demonstrate that the ultrafast chaotic oscillatory dynamics of lasers efficiently solve the multi-armed bandit problem (MAB), which requires decision making concerning a class of difficult trade-offs called the exploration-exploitation dilemma. To solve the MAB, a certain degree of randomness is required for exploration purposes. However, pseudo-random numbers generated using conventional electronic circuitry encounter severe limitations in terms of their data rate and the quality of randomness due to their algorithmic foundations. We generate laser chaos signals using a semiconductor laser sampled at a maximum rate of 100 GSample/s, and combine it with a simple decision-making principle called tug-of-war with a variable threshold, to ensure ultrafast, adaptive and accurate decision making at a maximum adaptation speed of 1 GHz. We found that decision-making performance was maximized with an optimal sampling interval, and we highlight the exact coincidence between the negative autocorrelation inherent in laser chaos and decision-making performance. This study paves the way for a new realm of ultrafast photonics in the age of AI, where the ultrahigh bandwidth of photons can provide new value.
Tasks Decision Making
Published 2017-04-14
URL http://arxiv.org/abs/1704.04379v1
PDF http://arxiv.org/pdf/1704.04379v1.pdf
PWC https://paperswithcode.com/paper/ultrafast-photonic-reinforcement-learning
Repo
Framework
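
As a rough software analogue of the decision principle described above, the sketch below replaces the laser-chaos waveform with a logistic-map sequence and uses a simplified variable-threshold, tug-of-war-style update on a two-armed bandit; the update rule and constants are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

def logistic_chaos(n, x0=0.4, r=3.99):
    """Chaotic surrogate signal (software stand-in for the sampled laser-chaos waveform)."""
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = r * x[t - 1] * (1.0 - x[t - 1])
    return 2.0 * x - 1.0                    # rescale to [-1, 1]

rng = np.random.default_rng(1)
p = [0.4, 0.6]                              # hidden reward probabilities of the two arms
signal = logistic_chaos(20000)

threshold, step = 0.0, 0.02                 # simplified variable-threshold update (assumption)
pulls = np.zeros(2)
for s in signal:
    arm = 0 if s >= threshold else 1        # compare the chaotic sample to the threshold
    reward = rng.random() < p[arm]
    pulls[arm] += 1
    # push the threshold so the better-rewarded arm is chosen more often
    if arm == 0:
        threshold += -step if reward else +step
    else:
        threshold += +step if reward else -step
    threshold = float(np.clip(threshold, -1.0, 1.0))

print("empirical pick rate of best arm:", pulls[1] / pulls.sum())
```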

Guaranteed Parameter Estimation for Discrete Energy Minimization

Title Guaranteed Parameter Estimation for Discrete Energy Minimization
Authors Mengtian Li, Daniel Huber
Abstract Structural learning, a method to estimate the parameters for discrete energy minimization, has been proven to be effective in solving computer vision problems, especially in 3D scene parsing. As the complexity of the models increases, structural learning algorithms turn to approximate inference to retain tractability. Unfortunately, such methods often fail because the approximation can be arbitrarily poor. In this work, we propose a method to overcome this limitation through exploiting the properties of the joint problem of training time inference and learning. With the help of the learning framework, we transform the inapproximable inference problem into a polynomial time solvable one, thereby enabling tractable exact inference while still allowing an arbitrary graph structure and full potential interactions. Our learning algorithm is guaranteed to return a solution with a bounded error to the global optimal within the feasible parameter space. We demonstrate the effectiveness of this method on two point cloud scene parsing datasets. Our approach runs much faster and solves a problem that is intractable for previous, well-known approaches.
Tasks Scene Parsing
Published 2017-01-11
URL http://arxiv.org/abs/1701.03151v1
PDF http://arxiv.org/pdf/1701.03151v1.pdf
PWC https://paperswithcode.com/paper/guaranteed-parameter-estimation-for-discrete
Repo
Framework

Face Transfer with Generative Adversarial Network

Title Face Transfer with Generative Adversarial Network
Authors Runze Xu, Zhiming Zhou, Weinan Zhang, Yong Yu
Abstract Face transfer animates the facial performances of the character in the target video by a source actor. Traditional methods are typically based on face modeling. We propose an end-to-end face transfer method based on Generative Adversarial Network. Specifically, we leverage CycleGAN to generate the face image of the target character with the corresponding head pose and facial expression of the source. In order to improve the quality of generated videos, we adopt PatchGAN and explore the effect of different receptive field sizes on generated images.
Tasks Face Transfer
Published 2017-10-17
URL http://arxiv.org/abs/1710.06090v1
PDF http://arxiv.org/pdf/1710.06090v1.pdf
PWC https://paperswithcode.com/paper/face-transfer-with-generative-adversarial
Repo
Framework
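
For readers curious about the receptive-field discussion above, here is a minimal PatchGAN-style discriminator in PyTorch: each additional strided convolution enlarges the patch that each output score sees. The layer widths and depth are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: outputs a grid of real/fake scores, one per patch."""
    def __init__(self, in_channels=3, base=64, n_layers=3):
        super().__init__()
        layers = [nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
                  nn.LeakyReLU(0.2, inplace=True)]
        ch = base
        for _ in range(1, n_layers):        # each extra strided conv enlarges the receptive field
            layers += [nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
                       nn.InstanceNorm2d(ch * 2),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch *= 2
        layers += [nn.Conv2d(ch, 1, 4, stride=1, padding=1)]   # per-patch logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

d = PatchDiscriminator()
scores = d(torch.randn(1, 3, 256, 256))
print(scores.shape)   # torch.Size([1, 1, 31, 31]): one score per overlapping patch
```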

3D Facial Action Units Recognition for Emotional Expression

Title 3D Facial Action Units Recognition for Emotional Expression
Authors N. Hussain, H. Ujir, I. Hipiny, J-L Minoi
Abstract Muscular activity causes the activation of certain AUs for every facial expression over a certain duration of time throughout the expression. This paper presents methods to recognise facial Action Units (AUs) using distances between the facial features that the muscles activate. The seven facial action units involved are AU1, AU4, AU6, AU12, AU15, AU17 and AU25, which characterise happy and sad expressions. Recognition is performed for each AU according to rules defined on the distances between facial points. The facial distances are extracted from twelve facial features and then used to train a Support Vector Machine (SVM) and a Neural Network (NN). Classification results using the SVM are presented for several different kernels, while results using the NN are presented for the training, validation and testing phases.
Tasks
Published 2017-12-01
URL http://arxiv.org/abs/1712.00195v1
PDF http://arxiv.org/pdf/1712.00195v1.pdf
PWC https://paperswithcode.com/paper/3d-facial-action-units-recognition-for
Repo
Framework
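
A minimal sketch of the distance-then-classify pipeline described above, using synthetic landmarks and scikit-learn; the landmark pairs, data, and kernel choice are placeholders rather than the paper's twelve features and rules.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def au_distance_features(landmarks, pairs):
    """Euclidean distances between selected landmark pairs (placeholder pairs)."""
    return np.array([[np.linalg.norm(face[i] - face[j]) for i, j in pairs]
                     for face in landmarks])

rng = np.random.default_rng(0)
# Synthetic stand-in data: 200 faces x 12 landmarks x 3D coordinates, binary AU label.
landmarks = rng.standard_normal((200, 12, 3))
labels = rng.integers(0, 2, size=200)
pairs = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9), (10, 11)]   # illustrative pairs

X = au_distance_features(landmarks, pairs)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)     # the paper also compares other kernels and an NN
print("held-out accuracy:", clf.score(X_te, y_te))
```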

Stein Variational Adaptive Importance Sampling

Title Stein Variational Adaptive Importance Sampling
Authors Jun Han, Qiang Liu
Abstract We propose a novel adaptive importance sampling algorithm which incorporates the Stein variational gradient descent algorithm (SVGD) with importance sampling (IS). Our algorithm leverages the nonparametric transforms in SVGD to iteratively decrease the KL divergence between our importance proposal and the target distribution. The advantages of this algorithm are twofold: first, our algorithm turns SVGD into a standard IS algorithm, allowing us to use standard diagnostic and analytic tools of IS to evaluate and interpret the results; second, we do not restrict the choice of our importance proposal to predefined distribution families like traditional (adaptive) IS methods. Empirical experiments demonstrate that our algorithm performs well on evaluating partition functions of restricted Boltzmann machines and testing the likelihood of variational auto-encoders.
Tasks
Published 2017-04-18
URL http://arxiv.org/abs/1704.05201v6
PDF http://arxiv.org/pdf/1704.05201v6.pdf
PWC https://paperswithcode.com/paper/stein-variational-adaptive-importance
Repo
Framework
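
For reference, a bare-bones SVGD update with an RBF kernel (the transform the algorithm above builds on) is sketched below in NumPy; the adaptive importance-weighting step of the paper is not reproduced, and the median-heuristic bandwidth is a common default rather than the authors' choice.

```python
import numpy as np

def svgd_step(x, grad_logp, step=0.1):
    """One SVGD update: particles move along a kernel-smoothed score plus a repulsive term."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]          # diff[i, j] = x_i - x_j
    d2 = np.sum(diff ** 2, axis=-1)
    h = np.median(d2) / np.log(n + 1) + 1e-8      # median-heuristic bandwidth
    K = np.exp(-d2 / h)                           # RBF kernel matrix
    grad = grad_logp(x)
    # attraction: kernel-weighted scores; repulsion: sum_j grad_{x_j} k(x_j, x_i)
    repulsion = (2.0 / h) * (K[:, :, None] * diff).sum(axis=1)
    phi = (K @ grad + repulsion) / n
    return x + step * phi

# Target: standard 2-D Gaussian, so grad log p(x) = -x.
rng = np.random.default_rng(0)
particles = rng.standard_normal((100, 2)) + 5.0   # start far from the target mean
for _ in range(500):
    particles = svgd_step(particles, lambda x: -x)
print("particle mean (should be near 0):", particles.mean(axis=0))
```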

Deep Mixture of Diverse Experts for Large-Scale Visual Recognition

Title Deep Mixture of Diverse Experts for Large-Scale Visual Recognition
Authors Tianyi Zhao, Jun Yu, Zhenzhong Kuang, Wei Zhang, Jianping Fan
Abstract In this paper, a deep mixture of diverse experts algorithm is developed for seamlessly combining a set of base deep CNNs (convolutional neural networks) with diverse outputs (task spaces), e.g., such base deep CNNs are trained to recognize different subsets of tens of thousands of atomic object classes. First, a two-layer (category layer and object class layer) ontology is constructed to achieve a more effective solution for task group generation, e.g., assigning the semantically-related atomic object classes at the sibling leaf nodes into the same task group because they may share similar learning complexities. Second, one particular base deep CNN with $M+1$ ($M \leq 1,000$) outputs is learned for each task group to recognize its $M$ atomic object classes effectively and identify one special class of “not-in-group” automatically, and the network structure (numbers of layers and units in each layer) of the well-designed AlexNet is directly used to configure such base deep CNNs. A deep multi-task learning algorithm is developed to leverage the inter-class visual similarities to learn more discriminative base deep CNNs and a multi-task softmax for enhancing the separability of the atomic object classes in the same task group. Finally, all these base deep CNNs with diverse outputs (task spaces) are seamlessly combined to form a deep mixture of diverse experts for recognizing tens of thousands of atomic object classes. Our experimental results demonstrate that our deep mixture of diverse experts algorithm can achieve very competitive results on large-scale visual recognition.
Tasks Multi-Task Learning, Object Recognition
Published 2017-06-24
URL http://arxiv.org/abs/1706.07901v1
PDF http://arxiv.org/pdf/1706.07901v1.pdf
PWC https://paperswithcode.com/paper/deep-mixture-of-diverse-experts-for-large
Repo
Framework
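
One plausible way to fuse per-group softmax outputs that each include a "not-in-group" class is sketched below: it down-weights an expert's class probabilities by its in-group confidence and averages overlapping experts. This is an illustration of the combination step only, not the paper's exact fusion scheme.

```python
import numpy as np

def fuse_expert_outputs(expert_probs, expert_classes, num_global_classes):
    """
    Combine per-group softmax outputs (each ending with a 'not-in-group' entry)
    into global class scores.
    """
    scores = np.zeros(num_global_classes)
    counts = np.zeros(num_global_classes)
    for probs, classes in zip(expert_probs, expert_classes):
        in_group = 1.0 - probs[-1]          # expert's confidence that the object is in its group
        scores[classes] += in_group * probs[:-1]
        counts[classes] += 1
    return scores / np.maximum(counts, 1)   # average over the experts covering each class

# Toy example: two experts, four global classes.
p1 = np.array([0.6, 0.2, 0.2])              # expert 1 covers global classes [0, 1]
p2 = np.array([0.1, 0.7, 0.2])              # expert 2 covers global classes [2, 3]
fused = fuse_expert_outputs([p1, p2], [np.array([0, 1]), np.array([2, 3])], 4)
print("predicted class:", int(fused.argmax()), fused)
```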

Low-Rank Matrix Approximation in the Infinity Norm

Title Low-Rank Matrix Approximation in the Infinity Norm
Authors Nicolas Gillis, Yaroslav Shitov
Abstract The low-rank matrix approximation problem with respect to the entry-wise $\ell_{\infty}$-norm is the following: given a matrix $M$ and a factorization rank $r$, find a matrix $X$ whose rank is at most $r$ and that minimizes $\max_{i,j} |M_{ij} - X_{ij}|$. In this paper, we prove that the decision variant of this problem for $r=1$ is NP-complete using a reduction from the problem 'not all equal 3SAT'. We also analyze several cases when the problem can be solved in polynomial time, and propose a simple practical heuristic algorithm which we apply on the problem of the recovery of a quantized low-rank matrix.
Tasks
Published 2017-05-31
URL http://arxiv.org/abs/1706.00078v1
PDF http://arxiv.org/pdf/1706.00078v1.pdf
PWC https://paperswithcode.com/paper/low-rank-matrix-approximation-in-the-infinity
Repo
Framework
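
A simple alternating heuristic for the rank-1 case of the problem above, $\min_{u,v} \max_{i,j} |M_{ij} - u_i v_j|$, is sketched below; each scalar subproblem is convex and solved by ternary search. This is a generic Chebyshev-style alternating scheme for illustration, not necessarily the heuristic proposed in the paper.

```python
import numpy as np

def _best_scalar(row, v, iters=60):
    """Ternary search for the u minimizing max_j |row_j - u * v_j| (convex in u)."""
    mask = np.abs(v) > 1e-12
    if not mask.any():
        return 0.0
    ratios = row[mask] / v[mask]
    lo, hi = ratios.min(), ratios.max()      # the minimizer lies between the per-term minimizers
    f = lambda u: np.max(np.abs(row - u * v))
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def rank1_chebyshev(M, iters=20):
    """Alternating heuristic for min_{u, v} max_{i, j} |M_ij - u_i v_j|."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)   # warm start from the Frobenius solution
    u, v = U[:, 0] * np.sqrt(s[0]), Vt[0] * np.sqrt(s[0])
    for _ in range(iters):
        u = np.array([_best_scalar(M[i], v) for i in range(M.shape[0])])
        v = np.array([_best_scalar(M[:, j], u) for j in range(M.shape[1])])
    return u, v

rng = np.random.default_rng(0)
M = np.round(4 * np.outer(rng.standard_normal(8), rng.standard_normal(6))) / 4  # quantized rank-1 matrix
u, v = rank1_chebyshev(M)
print("max abs error:", np.max(np.abs(M - np.outer(u, v))))
```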

An Entropy-based Pruning Method for CNN Compression

Title An Entropy-based Pruning Method for CNN Compression
Authors Jian-Hao Luo, Jianxin Wu
Abstract This paper aims to simultaneously accelerate and compress off-the-shelf CNN models via a filter pruning strategy. The importance of each filter is first evaluated by the proposed entropy-based method. Then several unimportant filters are discarded to obtain a smaller CNN model. Finally, fine-tuning is adopted to recover the generalization ability that is damaged during filter pruning. Our method can reduce the size of intermediate activations, which dominate the memory footprint during the model training stage but are less considered in previous compression methods. Experiments on the ILSVRC-12 benchmark demonstrate the effectiveness of our method. Compared with previous filter importance evaluation criteria, our entropy-based method obtains better performance. We achieve a 3.3x speed-up and 16.64x compression on VGG-16, and a 1.54x acceleration and 1.47x compression on ResNet-50, both with about a 1% top-5 accuracy decrease.
Tasks
Published 2017-06-19
URL http://arxiv.org/abs/1706.05791v1
PDF http://arxiv.org/pdf/1706.05791v1.pdf
PWC https://paperswithcode.com/paper/an-entropy-based-pruning-method-for-cnn
Repo
Framework
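
The entropy score itself is easy to reproduce: collect per-filter activation statistics over a set of images, histogram them, and keep the highest-entropy filters. The binning, the pooled-activation input, and the keep ratio below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def filter_entropy(activations, n_bins=32):
    """
    Entropy score per filter, computed from pooled activations.

    activations: (num_images, num_filters), e.g. globally average-pooled feature
    maps of one convolutional layer collected over a calibration set.
    """
    scores = np.empty(activations.shape[1])
    for c in range(activations.shape[1]):
        hist, _ = np.histogram(activations[:, c], bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]
        scores[c] = -(p * np.log(p)).sum()
    return scores

rng = np.random.default_rng(0)
acts = rng.standard_normal((1000, 64))       # stand-in for pooled layer activations
acts[:, :8] = 0.0                            # dead filters: constant output -> zero entropy
keep = np.argsort(filter_entropy(acts))[::-1][:56]   # keep the 56 highest-entropy filters
print("pruned filters:", sorted(set(range(64)) - set(keep.tolist())))
```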

Sufficient Markov Decision Processes with Alternating Deep Neural Networks

Title Sufficient Markov Decision Processes with Alternating Deep Neural Networks
Authors Longshaokan Wang, Eric B. Laber, Katie Witkiewitz
Abstract Advances in mobile computing technologies have made it possible to monitor and apply data-driven interventions across complex systems in real time. Markov decision processes (MDPs) are the primary model for sequential decision problems with a large or indefinite time horizon. Choosing a representation of the underlying decision process that is both Markov and low-dimensional is non-trivial. We propose a method for constructing a low-dimensional representation of the original decision process for which: 1. the MDP model holds; 2. a decision strategy that maximizes mean utility when applied to the low-dimensional representation also maximizes mean utility when applied to the original process. We use a deep neural network to define a class of potential process representations and estimate the process of lowest dimension within this class. The method is illustrated using data from a mobile study on heavy drinking and smoking among college students.
Tasks
Published 2017-04-25
URL http://arxiv.org/abs/1704.07531v2
PDF http://arxiv.org/pdf/1704.07531v2.pdf
PWC https://paperswithcode.com/paper/sufficient-markov-decision-processes-with
Repo
Framework

Large Scale Novel Object Discovery in 3D

Title Large Scale Novel Object Discovery in 3D
Authors Siddharth Srivastava, Gaurav Sharma, Brejesh Lall
Abstract We present a method for discovering never-seen-before objects in 3D point clouds obtained from sensors like Microsoft Kinect. We generate supervoxels directly from the point cloud data and use them with a Siamese network, built on a recently proposed 3D convolutional neural network architecture. We use known objects to train a non-linear embedding of supervoxels, by optimizing the criteria that supervoxels which fall on the same object should be closer than those which fall on different objects, in the embedding space. We test on unknown objects, which were not seen during training, and perform clustering in the learned embedding space of supervoxels to effectively perform novel object discovery. We validate the method with extensive experiments, quantitatively showing that it can discover numerous unseen objects while being trained on only a few dense 3D models. We also show very good qualitative results of object discovery in point cloud data when the test objects, either specific instances or even categories, were never seen during training.
Tasks
Published 2017-01-22
URL http://arxiv.org/abs/1701.07046v2
PDF http://arxiv.org/pdf/1701.07046v2.pdf
PWC https://paperswithcode.com/paper/large-scale-novel-object-discovery-in-3d
Repo
Framework
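
The embedding criterion described above (same-object supervoxels pulled together, different-object supervoxels pushed apart) corresponds to a standard Siamese contrastive loss; a minimal PyTorch version is sketched below, with the 3D CNN replaced by random embeddings and the margin chosen arbitrarily.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same_object, margin=1.0):
    """
    Siamese contrastive loss over pairs of supervoxel embeddings.

    emb_a, emb_b: (batch, dim) embeddings of the two supervoxels in each pair.
    same_object:  (batch,) float tensor, 1 if both supervoxels lie on the same object.
    """
    d = F.pairwise_distance(emb_a, emb_b)
    pull = same_object * d.pow(2)                              # same object: pull together
    push = (1 - same_object) * F.relu(margin - d).pow(2)       # different objects: push apart
    return 0.5 * (pull + push).mean()

# Toy usage with random embeddings standing in for the 3D CNN's supervoxel features.
emb_a, emb_b = torch.randn(16, 32), torch.randn(16, 32)
labels = (torch.rand(16) > 0.5).float()
print(contrastive_loss(emb_a, emb_b, labels))
```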

Primal-Dual $π$ Learning: Sample Complexity and Sublinear Run Time for Ergodic Markov Decision Problems

Title Primal-Dual $π$ Learning: Sample Complexity and Sublinear Run Time for Ergodic Markov Decision Problems
Authors Mengdi Wang
Abstract Consider the problem of approximating the optimal policy of a Markov decision process (MDP) by sampling state transitions. In contrast to existing reinforcement learning methods that are based on successive approximations to the nonlinear Bellman equation, we propose a Primal-Dual $\pi$ Learning method in light of the linear duality between the value and policy. The $\pi$ learning method is model-free and makes primal-dual updates to the policy and value vectors as new data are revealed. For an infinite-horizon undiscounted Markov decision process with finite state space $S$ and finite action space $A$, the $\pi$ learning method finds an $\epsilon$-optimal policy using the following number of sample transitions $$ \tilde{O}( \frac{(\tau\cdot t^*_{mix})^2 S A }{\epsilon^2} ),$$ where $t^*_{mix}$ is an upper bound of the mixing times across all policies and $\tau$ is a parameter characterizing the range of stationary distributions across policies. The $\pi$ learning method also applies to the computational problem of MDPs where the transition probabilities and rewards are explicitly given as the input. In the case where each state transition can be sampled in $\tilde{O}(1)$ time, the $\pi$ learning method gives a sublinear-time algorithm for solving the average-reward MDP.
Tasks
Published 2017-10-17
URL http://arxiv.org/abs/1710.06100v1
PDF http://arxiv.org/pdf/1710.06100v1.pdf
PWC https://paperswithcode.com/paper/primal-dual-learning-sample-complexity-and
Repo
Framework
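
To make the value/policy linear duality that $\pi$ learning exploits concrete, the snippet below solves a tiny average-reward MDP through its occupancy-measure LP with SciPy and reads the optimal policy off the solution; the paper's stochastic primal-dual updates are not reproduced here, and the MDP numbers are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny 2-state, 2-action average-reward MDP (illustrative numbers).
S, A = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # P[s, a, s'] transition probabilities
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0],                   # r[s, a] rewards
              [0.0, 2.0]])

# Primal LP over the occupancy measure mu(s, a):
#   max sum mu(s,a) r(s,a)  s.t.  sum_a mu(s',a) = sum_{s,a} P[s,a,s'] mu(s,a),  sum mu = 1.
c = -r.reshape(-1)                                         # linprog minimizes
A_eq = np.zeros((S + 1, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = (sp == s) - P[s, a, sp]  # flow conservation
A_eq[S, :] = 1.0                                           # normalization
b_eq = np.zeros(S + 1)
b_eq[S] = 1.0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (S * A))
mu = res.x.reshape(S, A)
policy = mu / mu.sum(axis=1, keepdims=True)                # optimal stationary policy
print("average reward:", -res.fun)
print("policy:\n", policy)
```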

Approximating Optimization Problems using EAs on Scale-Free Networks

Title Approximating Optimization Problems using EAs on Scale-Free Networks
Authors Ankit Chauhan, Tobias Friedrich, Francesco Quinzan
Abstract It has been observed that many complex real-world networks have certain properties, such as a high clustering coefficient, a low diameter, and a power-law degree distribution. A network with a power-law degree distribution is known as a scale-free network. In order to study these networks, various random graph models have been proposed, e.g. Preferential Attachment, Chung-Lu, or Hyperbolic. We look at the interplay between the power-law degree distribution and the run time of optimization techniques for well-known combinatorial problems. We observe that on scale-free networks, simple evolutionary algorithms (EAs) quickly reach a constant-factor approximation ratio on common covering problems. We prove that the single-objective (1+1) EA reaches a constant-factor approximation ratio on the Minimum Dominating Set problem, the Minimum Vertex Cover problem, the Minimum Connected Dominating Set problem, and the Maximum Independent Set problem in an expected polynomial number of calls to the fitness function. Furthermore, we prove that the multi-objective GSEMO algorithm reaches a better approximation ratio than the (1+1) EA on those problems, within a polynomial number of fitness evaluations.
Tasks
Published 2017-04-12
URL http://arxiv.org/abs/1704.03664v3
PDF http://arxiv.org/pdf/1704.03664v3.pdf
PWC https://paperswithcode.com/paper/approximating-optimization-problems-using-eas
Repo
Framework
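
A bare-bones (1+1) EA for Minimum Vertex Cover on a Barabási-Albert (scale-free) graph is sketched below; the fitness penalizes uncovered edges so that feasible covers dominate, which is one standard formulation and not necessarily the exact fitness analyzed in the paper.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(200, 2, seed=0)             # scale-free test instance
edges = np.array(list(G.edges()))
n = G.number_of_nodes()

def fitness(x):
    """Cover size plus a heavy penalty per uncovered edge (smaller is better)."""
    uncovered = np.sum(~(x[edges[:, 0]] | x[edges[:, 1]]))
    return x.sum() + (n + 1) * uncovered

x = rng.random(n) < 0.5                                  # random initial bit string
fx = fitness(x)
for _ in range(20000):                                   # (1+1) EA: flip each bit w.p. 1/n
    flips = rng.random(n) < 1.0 / n
    y = np.where(flips, ~x, x)
    fy = fitness(y)
    if fy <= fx:                                         # keep the offspring if not worse
        x, fx = y, fy

print("cover size:", int(x.sum()),
      "uncovered edges:", int(np.sum(~(x[edges[:, 0]] | x[edges[:, 1]]))))
```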