Paper Group ANR 628
Learn-Memorize-Recall-Reduce: A Robotic Cloud Computing Paradigm
Title | Learn-Memorize-Recall-Reduce: A Robotic Cloud Computing Paradigm |
Authors | Shaoshan Liu, Bolin Ding, Jie Tang, Dawei Sun, Zhe Zhang, Grace Tsai, Jean-Luc Gaudiot |
Abstract | The rise of robotic applications has led to the generation of a huge volume of unstructured data, whereas the current cloud infrastructure was designed to process limited amounts of structured data. To address this problem, we propose a learn-memorize-recall-reduce paradigm for robotic cloud computing. The learning stage converts incoming unstructured data into structured data; the memorization stage provides effective storage for the massive amount of data; the recall stage provides efficient means to retrieve the raw data; while the reduction stage provides means to make sense of this massive amount of unstructured data with limited computing resources. |
Tasks | |
Published | 2017-04-16 |
URL | http://arxiv.org/abs/1704.04712v2 |
http://arxiv.org/pdf/1704.04712v2.pdf | |
PWC | https://paperswithcode.com/paper/learn-memorize-recall-reduce-a-robotic-cloud |
Repo | |
Framework | |
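A minimal sketch of how the four stages could compose into a single ingestion pipeline. Every name here (learner, store, index, reducer and their methods) is an illustrative placeholder, not an interface from the paper:

```python
def learn_memorize_recall_reduce(raw_stream, learner, store, index, reducer):
    """Hypothetical four-stage pipeline; all component interfaces are assumed."""
    for blob in raw_stream:
        record = learner.structure(blob)  # learn: unstructured -> structured
        key = store.put(blob, record)     # memorize: persist the raw data at scale
        index.add(key, record)            # recall: keep the raw data retrievable
    return reducer.summarize(index)       # reduce: summarize within a compute budget
```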
Learning when to skim and when to read
Title | Learning when to skim and when to read |
Authors | Alexander Rosenberg Johansen, Richard Socher |
Abstract | Many recent advances in deep learning for natural language processing have come at increasing computational cost, but the power of these state-of-the-art models is not needed for every example in a dataset. We demonstrate two approaches to reducing unnecessary computation in cases where a fast but weak baseline classifier and a stronger, slower model are both available. Applying an AUC-based metric to the task of sentiment classification, we find significant efficiency gains with both a probability-threshold method for reducing computational cost and one that uses a secondary decision network. |
Tasks | Sentiment Analysis |
Published | 2017-12-15 |
URL | http://arxiv.org/abs/1712.05483v1 |
http://arxiv.org/pdf/1712.05483v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-when-to-skim-and-when-to-read |
Repo | |
Framework | |
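A minimal sketch of the probability-threshold strategy from the abstract, assuming scikit-learn-style `predict_proba`/`predict` interfaces; the models and the threshold value are placeholders:

```python
import numpy as np

def skim_or_read(x, fast_model, slow_model, threshold=0.9):
    """Invoke the slow, strong model only when the fast baseline is unsure."""
    probs = fast_model.predict_proba([x])[0]
    if probs.max() >= threshold:
        return int(np.argmax(probs))    # "skim": the cheap prediction suffices
    return slow_model.predict([x])[0]   # "read": fall back to the strong model
```

Sweeping `threshold` trades accuracy against average compute, which is the trade-off the paper's AUC-based metric summarizes.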
Benchmarking Super-Resolution Algorithms on Real Data
Title | Benchmarking Super-Resolution Algorithms on Real Data |
Authors | Thomas Köhler, Michel Bätz, Farzad Naderi, André Kaup, Andreas K. Maier, Christian Riess |
Abstract | Over the past decades, various super-resolution (SR) techniques have been developed to enhance the spatial resolution of digital images. Despite the great number of methodical contributions, there is still a lack of comparative validations of SR under practical conditions, as capturing real ground truth data is a challenging task. Therefore, current studies are either evaluated 1) on simulated data or 2) on real data without a pixel-wise ground truth. To facilitate comprehensive studies, this paper introduces the publicly available Super-Resolution Erlangen (SupER) database that includes real low-resolution images along with high-resolution ground truth data. Our database comprises image sequences with more than 20k images captured from 14 scenes under various types of motions and photometric conditions. The datasets cover four spatial resolution levels using camera hardware binning. With this database, we benchmark 15 single-image and multi-frame SR algorithms. Our experiments quantitatively analyze SR accuracy and robustness under realistic conditions including independent object and camera motion or photometric variations. |
Tasks | Super-Resolution |
Published | 2017-09-08 |
URL | http://arxiv.org/abs/1709.04881v1 |
http://arxiv.org/pdf/1709.04881v1.pdf | |
PWC | https://paperswithcode.com/paper/benchmarking-super-resolution-algorithms-on |
Repo | |
Framework | |
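A sketch of the kind of evaluation such pixel-wise ground truth enables. `sr_methods` maps algorithm names to callables, and PSNR stands in for whichever accuracy metrics the benchmark actually reports:

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio between ground truth and an SR estimate."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def benchmark(sr_methods, lr_frames, hr_ground_truth):
    """Score each super-resolution algorithm against the same ground truth."""
    return {name: psnr(hr_ground_truth, method(lr_frames))
            for name, method in sr_methods.items()}
```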
Multi-task Learning with Gradient Guided Policy Specialization
Title | Multi-task Learning with Gradient Guided Policy Specialization |
Authors | Wenhao Yu, C. Karen Liu, Greg Turk |
Abstract | We present a method for efficient learning of control policies for multiple related robotic motor skills. Our approach consists of two stages, joint training and specialization training. During the joint training stage, a neural network policy is trained with minimal information to disambiguate the motor skills. This forces the policy to learn a common representation of the different tasks. Then, during the specialization training stage we selectively split the weights of the policy based on a per-weight metric that measures the disagreement among the multiple tasks. By splitting part of the control policy, it can be further trained to specialize to each task. To update the control policy during learning, we use Trust Region Policy Optimization with Generalized Advantage Function (TRPO-GAE). We propose a modification to the gradient update stage of TRPO to better accommodate multi-task learning scenarios. We evaluate our approach on three continuous motor skill learning problems in simulation: 1) a locomotion task where three single-legged robots with considerable difference in shape and size are trained to hop forward, 2) a manipulation task where three robot manipulators with different sizes and joint types are trained to reach different locations in 3D space, and 3) locomotion of a two-legged robot, whose range of motion of one leg is constrained in different ways. We compare our training method to three baselines. The first baseline uses only joint training for the policy, the second trains independent policies for each task, and the last randomly selects weights to split. We show that our approach learns more efficiently than each of the baseline methods. |
Tasks | Legged Robots, Multi-Task Learning |
Published | 2017-09-23 |
URL | http://arxiv.org/abs/1709.07979v3 |
http://arxiv.org/pdf/1709.07979v3.pdf | |
PWC | https://paperswithcode.com/paper/multi-task-learning-with-gradient-guided |
Repo | |
Framework | |
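A hedged sketch of the specialization step: score every weight by how much the per-task gradients disagree, then split the most contested ones. The paper defines its own per-weight metric; the normalized standard deviation below is just one plausible stand-in:

```python
import numpy as np

def disagreement_scores(task_grads):
    """task_grads: (n_tasks, n_weights) array of per-task policy gradients."""
    G = np.asarray(task_grads)
    return G.std(axis=0) / (np.abs(G).mean(axis=0) + 1e-8)

def weights_to_split(task_grads, frac=0.2):
    """Indices of the weights with the highest cross-task disagreement."""
    scores = disagreement_scores(task_grads)
    k = max(1, int(len(scores) * frac))
    return np.argsort(scores)[-k:]
```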
Weakly- and Semi-Supervised Object Detection with Expectation-Maximization Algorithm
Title | Weakly- and Semi-Supervised Object Detection with Expectation-Maximization Algorithm |
Authors | Ziang Yan, Jian Liang, Weishen Pan, Jin Li, Changshui Zhang |
Abstract | Object detection when provided image-level labels instead of instance-level labels (i.e., bounding boxes) during training is an important problem in computer vision, since large scale image datasets with instance-level labels are extremely costly to obtain. In this paper, we address this challenging problem by developing an Expectation-Maximization (EM) based object detection method using deep convolutional neural networks (CNNs). Our method is applicable to both the weakly-supervised and semi-supervised settings. Extensive experiments on PASCAL VOC 2007 benchmark show that (1) in the weakly supervised setting, our method provides significant detection performance improvement over current state-of-the-art methods, (2) having access to a small number of strongly (instance-level) annotated images, our method can almost match the performance of the fully supervised Fast R-CNN. We share our source code at https://github.com/ZiangYan/EM-WSD. |
Tasks | Object Detection |
Published | 2017-02-28 |
URL | http://arxiv.org/abs/1702.08740v1 |
http://arxiv.org/pdf/1702.08740v1.pdf | |
PWC | https://paperswithcode.com/paper/weakly-and-semi-supervised-object-detection |
Repo | |
Framework | |
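A rough sketch of the EM alternation in the weakly supervised setting. The `detector.detect`/`detector.train` interface and the one-box-per-class E-step are illustrative assumptions, not the paper's exact procedure:

```python
def em_detection(detector, images, image_labels, n_iters=5):
    """Alternate latent-box inference (E-step) with detector retraining (M-step)."""
    for _ in range(n_iters):
        pseudo_boxes = []
        for img, labels in zip(images, image_labels):
            dets = detector.detect(img)
            # E-step: for each image-level class, keep the top-scoring box
            boxes = [max((d for d in dets if d.cls == c),
                         key=lambda d: d.score, default=None)
                     for c in labels]
            pseudo_boxes.append([b for b in boxes if b is not None])
        detector.train(images, pseudo_boxes)  # M-step on the pseudo-labels
    return detector
```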
Front-to-End Bidirectional Heuristic Search with Near-Optimal Node Expansions
Title | Front-to-End Bidirectional Heuristic Search with Near-Optimal Node Expansions |
Authors | Jingwei Chen, Robert C. Holte, Sandra Zilles, Nathan R. Sturtevant |
Abstract | It is well-known that any admissible unidirectional heuristic search algorithm must expand all states whose $f$-value is smaller than the optimal solution cost when using a consistent heuristic. Such states are called “surely expanded” (s.e.). A recent study characterized s.e. pairs of states for bidirectional search with consistent heuristics: if a pair of states is s.e. then at least one of the two states must be expanded. This paper derives a lower bound, VC, on the minimum number of expansions required to cover all s.e. pairs, and presents a new admissible front-to-end bidirectional heuristic search algorithm, Near-Optimal Bidirectional Search (NBS), that is guaranteed to do no more than 2VC expansions. We further prove that no admissible front-to-end algorithm has a worst case better than 2VC. Experimental results show that NBS competes with or outperforms existing bidirectional search algorithms, and often outperforms A* as well. |
Tasks | |
Published | 2017-03-10 |
URL | http://arxiv.org/abs/1703.03868v2 |
http://arxiv.org/pdf/1703.03868v2.pdf | |
PWC | https://paperswithcode.com/paper/front-to-end-bidirectional-heuristic-search |
Repo | |
Framework | |
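The quantity driving NBS is the lower bound attached to a pair of frontier nodes; a sketch under the usual definitions (forward/backward g- and h-values, minimum edge cost `eps`):

```python
def pair_lower_bound(g_f, h_f, g_b, h_b, eps=0.0):
    """lb(u, v) = max(f_F(u), f_B(v), g_F(u) + g_B(v) + eps)."""
    return max(g_f + h_f, g_b + h_b, g_f + g_b + eps)
```

NBS repeatedly expands a pair minimizing this bound and can stop once the best solution found so far costs no more than the minimum bound over all open pairs; expanding at most both endpoints of each selected pair is what yields the 2VC worst case.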
SuperSpike: Supervised learning in multi-layer spiking neural networks
Title | SuperSpike: Supervised learning in multi-layer spiking neural networks |
Authors | Friedemann Zenke, Surya Ganguli |
Abstract | A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in-vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in-silico. Here we revisit the problem of supervised learning in temporally coding multi-layer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three factor learning rule capable of training multi-layer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike-time patterns. |
Tasks | |
Published | 2017-05-31 |
URL | http://arxiv.org/abs/1705.11146v2 |
http://arxiv.org/pdf/1705.11146v2.pdf | |
PWC | https://paperswithcode.com/paper/superspike-supervised-learning-in-multi-layer |
Repo | |
Framework | |
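A sketch of the surrogate-gradient ingredient: the undefined derivative of the spike nonlinearity is replaced by a smooth function of the membrane potential, here the fast-sigmoid form commonly associated with SuperSpike (threshold and steepness values are illustrative):

```python
import numpy as np

def surrogate_grad(v, theta=1.0, beta=10.0):
    """Smooth stand-in for d(spike)/d(membrane potential) at potential v."""
    return 1.0 / (beta * np.abs(v - theta) + 1.0) ** 2

# Three-factor flavor of the resulting weight update (all arrays per-synapse):
#   dw = lr * error_signal * presynaptic_trace * surrogate_grad(v_post)
```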
Convergence diagnostics for stochastic gradient descent with constant step size
Title | Convergence diagnostics for stochastic gradient descent with constant step size |
Authors | Jerry Chee, Panos Toulis |
Abstract | Many iterative procedures in stochastic optimization exhibit a transient phase followed by a stationary phase. During the transient phase the procedure converges towards a region of interest, and during the stationary phase the procedure oscillates in that region, commonly around a single point. In this paper, we develop a statistical diagnostic test to detect such phase transition in the context of stochastic gradient descent with constant learning rate. We present theory and experiments suggesting that the region where the proposed diagnostic is activated coincides with the convergence region. For a class of loss functions, we derive a closed-form solution describing such region. Finally, we suggest an application to speed up convergence of stochastic gradient descent by halving the learning rate each time stationarity is detected. This leads to a new variant of stochastic gradient descent, which in many settings is comparable to the state of the art. |
Tasks | Stochastic Optimization |
Published | 2017-10-17 |
URL | http://arxiv.org/abs/1710.06382v2 |
http://arxiv.org/pdf/1710.06382v2.pdf | |
PWC | https://paperswithcode.com/paper/convergence-diagnostics-for-stochastic |
Repo | |
Framework | |
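A sketch of the halving variant mentioned at the end of the abstract, using a running inner product of successive stochastic gradients as the stationarity statistic (a Pflug-style diagnostic; the paper's exact statistic and burn-in details may differ):

```python
def sgd_with_halving(grad, w, lr, n_steps, rng):
    """grad(w, rng) returns one stochastic gradient at w (a numpy vector)."""
    stat, g_prev = 0.0, None
    for _ in range(n_steps):
        g = grad(w, rng)
        w = w - lr * g
        if g_prev is not None:
            stat += float(g @ g_prev)
            if stat < 0.0:            # oscillation dominates: stationarity detected
                lr *= 0.5             # halve the constant step size and restart
                stat, g_prev = 0.0, None
                continue
        g_prev = g
    return w, lr
```

The intuition: during the transient phase successive gradients point in similar directions (positive inner products), while around a stationary point they oscillate and the running sum drifts negative.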
Machine Learning Friendly Set Version of Johnson-Lindenstrauss Lemma
Title | Machine Learning Friendly Set Version of Johnson-Lindenstrauss Lemma |
Authors | Mieczysław A. Kłopotek |
Abstract | In this paper we make a novel use of the Johnson-Lindenstrauss Lemma. The Lemma has an existential form saying that there exists a JL transformation $f$ of the data points into a lower-dimensional space such that all of them fall into a predefined error range $\delta$. We formulate in this paper a theorem stating that we can choose the target dimensionality in a random projection type JL linear transformation in such a way that with probability $1-\epsilon$ all of them fall into a predefined error range $\delta$, for any user-predefined failure probability $\epsilon$. This result is important for applications such as data clustering, where we want an a priori dimensionality-reducing transformation instead of trying out a (large) number of them, as with the traditional Johnson-Lindenstrauss Lemma. In particular, we take a closer look at the $k$-means algorithm and prove that a good solution in the projected space is also a good solution in the original space. Furthermore, under proper assumptions local optima in the original space are also local optima in the projected space. We also define conditions under which the clusterability property of the original space carries over to the projected space, so that special-case algorithms for the original space are also applicable in the projected space. |
Tasks | |
Published | 2017-03-04 |
URL | http://arxiv.org/abs/1703.01507v5 |
http://arxiv.org/pdf/1703.01507v5.pdf | |
PWC | https://paperswithcode.com/paper/machine-learning-friendly-set-version-of |
Repo | |
Framework | |
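For intuition, a standard Dasgupta-Gupta-style sizing rule for the target dimension that makes all pairwise distortions at most `delta` with failure probability at most `eps`. The paper's constants and setting differ, so treat this as a generic JL calculation rather than the theorem proved there:

```python
import math

def jl_target_dim(n, delta, eps):
    """Smallest k with n^2 * exp(-k*c/2) <= eps, where c = delta^2/2 - delta^3/3
    controls the per-pair failure probability of a random projection."""
    c = delta ** 2 / 2 - delta ** 3 / 3
    return math.ceil(2.0 * math.log(n ** 2 / eps) / c)
```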
A unified method for super-resolution recovery and real exponential-sum separation
Title | A unified method for super-resolution recovery and real exponential-sum separation |
Authors | Charles K. Chui, Hrushikesh N. Mhaskar |
Abstract | In this paper, motivated by diffraction of traveling light waves, a simple mathematical model is proposed, both for the multivariate super-resolution problem and the problem of blind-source separation of real-valued exponential sums. This model facilitates the development of a unified theory and a unified solution of both problems in this paper. Our consideration of the super-resolution problem is aimed at applications to fluorescence microscopy and observational astronomy, and the motivation for our consideration of the second problem is the current need to extract multivariate exponential features in magnetic resonance spectroscopy (MRS) for the neurologist and radiologist, as well as to provide a mathematical tool for isotope separation in nuclear chemistry. The unified method introduced in this paper can be easily realized by processing only finitely many data, sampled at locations that are not necessarily prescribed in advance, with a computational scheme consisting only of matrix-vector multiplication, peak finding, and clustering. |
Tasks | Super-Resolution |
Published | 2017-07-26 |
URL | http://arxiv.org/abs/1707.09428v1 |
http://arxiv.org/pdf/1707.09428v1.pdf | |
PWC | https://paperswithcode.com/paper/a-unified-method-for-super-resolution |
Repo | |
Framework | |
Human in the Loop: Interactive Passive Automata Learning via Evidence-Driven State-Merging Algorithms
Title | Human in the Loop: Interactive Passive Automata Learning via Evidence-Driven State-Merging Algorithms |
Authors | Christian A. Hammerschmidt, Radu State, Sicco Verwer |
Abstract | We present an interactive version of an evidence-driven state-merging (EDSM) algorithm for learning variants of finite state automata. Learning these automata often amounts to recovering or reverse engineering the model generating the data despite noisy, incomplete, or imperfectly sampled data sources, rather than optimizing a purely numeric target function. Domain expertise and human knowledge about the target domain can guide this process, and are typically captured in parameter settings. Often, domain expertise is subconscious and not expressed explicitly. Directly interacting with the learning algorithm makes it easier to utilize this knowledge effectively. |
Tasks | |
Published | 2017-07-28 |
URL | http://arxiv.org/abs/1707.09430v1 |
http://arxiv.org/pdf/1707.09430v1.pdf | |
PWC | https://paperswithcode.com/paper/human-in-the-loop-interactive-passive |
Repo | |
Framework | |
Specifying and Computing Causes for Query Answers in Databases via Database Repairs and Repair Programs
Title | Specifying and Computing Causes for Query Answers in Databases via Database Repairs and Repair Programs |
Authors | Leopoldo Bertossi |
Abstract | A correspondence between database tuples as causes for query answers in databases and tuple-based repairs of inconsistent databases with respect to denial constraints has already been established. In this work, answer-set programs that specify repairs of databases are used as a basis for solving computational and reasoning problems about causes. Here, causes are also introduced at the attribute level by appealing to a repair semantics that is both null-based and attribute-based. The corresponding repair programs are presented, and they are used as a basis for computation and reasoning about attribute-level causes. They are extended to deal with the case of causality under integrity constraints. Several examples with the DLV system are shown. |
Tasks | |
Published | 2017-12-04 |
URL | http://arxiv.org/abs/1712.01001v5 |
http://arxiv.org/pdf/1712.01001v5.pdf | |
PWC | https://paperswithcode.com/paper/specifying-and-computing-causes-for-query |
Repo | |
Framework | |
Dynamic Layer Normalization for Adaptive Neural Acoustic Modeling in Speech Recognition
Title | Dynamic Layer Normalization for Adaptive Neural Acoustic Modeling in Speech Recognition |
Authors | Taesup Kim, Inchul Song, Yoshua Bengio |
Abstract | Layer normalization is a recently introduced technique for normalizing the activities of neurons in deep neural networks to improve the training speed and stability. In this paper, we introduce a new layer normalization technique called Dynamic Layer Normalization (DLN) for adaptive neural acoustic modeling in speech recognition. By dynamically generating the scaling and shifting parameters in layer normalization, DLN adapts neural acoustic models to the acoustic variability arising from various factors such as speakers, channel noises, and environments. Unlike other adaptive acoustic models, our proposed approach does not require additional adaptation data or speaker information such as i-vectors. Moreover, the model size is fixed as it dynamically generates adaptation parameters. We apply our proposed DLN to deep bidirectional LSTM acoustic models and evaluate them on two benchmark datasets for large vocabulary ASR experiments: WSJ and TED-LIUM release 2. The experimental results show that our DLN improves neural acoustic models in terms of transcription accuracy by dynamically adapting to various speakers and environments. |
Tasks | Speech Recognition |
Published | 2017-07-19 |
URL | http://arxiv.org/abs/1707.06065v1 |
http://arxiv.org/pdf/1707.06065v1.pdf | |
PWC | https://paperswithcode.com/paper/dynamic-layer-normalization-for-adaptive |
Repo | |
Framework | |
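A sketch of the core DLN idea in numpy: the layer-norm gain and bias are not fixed parameters but are generated from a summary of the current utterance. The linear generator and the single summary vector are simplifications of the paper's setup:

```python
import numpy as np

def dynamic_layer_norm(h, summary, W_g, b_g, W_b, b_b, eps=1e-5):
    """h: activations of one layer; summary: utterance-level feature vector."""
    gain = W_g @ summary + b_g            # dynamically generated scaling
    bias = W_b @ summary + b_b            # dynamically generated shifting
    mu, sigma = h.mean(), h.std()
    return gain * (h - mu) / (sigma + eps) + bias
```

Because the adaptation parameters are produced on the fly, no i-vectors or adaptation data are needed and the model size stays fixed, as the abstract notes.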
Efficient and Scalable View Generation from a Single Image using Fully Convolutional Networks
Title | Efficient and Scalable View Generation from a Single Image using Fully Convolutional Networks |
Authors | Sung-Ho Bae, Mohamed Elgharib, Mohamed Hefeeda, Wojciech Matusik |
Abstract | Single-image-based view generation (SIVG) is important for producing 3D stereoscopic content. Here, handling different spatial resolutions as input and optimizing both reconstruction accuracy and processing speed is desirable. The latest approaches are based on convolutional neural networks (CNNs) and generate promising results. However, their use of fully connected layers as well as pre-trained VGG forces a compromise between reconstruction accuracy and processing speed. In addition, this approach is limited to the use of a specific spatial resolution. To remedy these problems, we propose exploiting fully convolutional networks (FCNs) for SIVG. We present two FCN architectures for SIVG. The first one is based on the combination of an FCN and a view-rendering network, called DeepView$_{ren}$. The second one consists of decoupled networks for luminance and chrominance signals, denoted by DeepView$_{dec}$. To train our solutions, we present a large dataset of 2M stereoscopic images. Results show that both of our architectures improve accuracy and speed over the state of the art. DeepView$_{ren}$ achieves accuracy competitive with the state of the art at the fastest processing speed of all: it is 5x faster and consumes 24x less memory than the state of the art. DeepView$_{dec}$ has much higher accuracy, but still runs 2.5x faster with 12x lower memory consumption. We evaluated our approach with both objective and subjective studies. |
Tasks | |
Published | 2017-05-10 |
URL | https://arxiv.org/abs/1705.03737v3 |
https://arxiv.org/pdf/1705.03737v3.pdf | |
PWC | https://paperswithcode.com/paper/efficient-and-scalable-view-generation-from-a |
Repo | |
Framework | |
Estimating Mixture Entropy with Pairwise Distances
Title | Estimating Mixture Entropy with Pairwise Distances |
Authors | Artemy Kolchinsky, Brendan D. Tracey |
Abstract | Mixture distributions arise in many parametric and non-parametric settings – for example, in Gaussian mixture models and in non-parametric estimation. It is often necessary to compute the entropy of a mixture, but, in most cases, this quantity has no closed-form expression, making some form of approximation necessary. We propose a family of estimators based on a pairwise distance function between mixture components, and show that this estimator class has many attractive properties. For many distributions of interest, the proposed estimators are efficient to compute, differentiable in the mixture parameters, and become exact when the mixture components are clustered. We prove this family includes lower and upper bounds on the mixture entropy. The Chernoff $\alpha$-divergence gives a lower bound when chosen as the distance function, with the Bhattacharyya distance providing the tightest lower bound for components that are symmetric and members of a location family. The Kullback-Leibler divergence gives an upper bound when used as the distance function. We provide closed-form expressions of these bounds for mixtures of Gaussians, and discuss their applications to the estimation of mutual information. We then demonstrate that our bounds are significantly tighter than well-known existing bounds using numeric simulations. This estimator class is very useful in optimization problems involving maximization/minimization of entropy and mutual information, such as MaxEnt and rate distortion problems. |
Tasks | |
Published | 2017-06-08 |
URL | http://arxiv.org/abs/1706.02419v4 |
http://arxiv.org/pdf/1706.02419v4.pdf | |
PWC | https://paperswithcode.com/paper/estimating-mixture-entropy-with-pairwise |
Repo | |
Framework | |
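The estimator family admits a compact form, $\hat{H}_D = \sum_i w_i \big[ H(p_i) - \ln \sum_j w_j \exp(-D(p_i \| p_j)) \big]$. A sketch for one-dimensional Gaussian components, where choosing $D$ as the KL divergence yields the upper bound discussed in the abstract:

```python
import numpy as np

def gauss_entropy(s2):
    """Differential entropy of a 1-D Gaussian with variance s2."""
    return 0.5 * np.log(2.0 * np.pi * np.e * s2)

def kl_gauss(m0, s20, m1, s21):
    """KL(N(m0, s20) || N(m1, s21)) for 1-D Gaussians."""
    return 0.5 * (s20 / s21 + (m1 - m0) ** 2 / s21 - 1.0 + np.log(s21 / s20))

def mixture_entropy_estimate(w, mu, s2, dist=kl_gauss):
    """H_hat = sum_i w_i * (H(p_i) - log sum_j w_j exp(-D(p_i || p_j)))."""
    H = 0.0
    for i in range(len(w)):
        z = sum(w[j] * np.exp(-dist(mu[i], s2[i], mu[j], s2[j]))
                for j in range(len(w)))
        H += w[i] * (gauss_entropy(s2[i]) - np.log(z))
    return H
```

Swapping `dist` for a Chernoff $\alpha$-divergence (e.g., the Bhattacharyya distance) turns the same routine into the lower bound.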