Paper Group ANR 765
Denoising Imaging Polarimetry by an Adapted BM3D Method. Cross-Domain Perceptual Reward Functions. Brain Responses During Robot-Error Observation. Video Captioning via Hierarchical Reinforcement Learning. Integer Echo State Networks: Hyperdimensional Reservoir Computing. Stochastic Separation Theorems. A Revisit on Deep Hashings for Large-scale Content Based Image Retrieval. DepthCut: Improved Depth Edge Estimation Using Multiple Unreliable Channels. SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks. Towards thinner convolutional neural networks through Gradually Global Pruning. Knowledge Representation in Bicategories of Relations. Prediction and Control with Temporal Segment Models. Kronecker Recurrent Units. Towards a Deep Reinforcement Learning Approach for Tower Line Wars. On Bayesian Exponentially Embedded Family for Model Order Selection.
Denoising Imaging Polarimetry by an Adapted BM3D Method
Title | Denoising Imaging Polarimetry by an Adapted BM3D Method |
Authors | Alexander B. Tibbs, Ilse M. Daly, Nicholas W. Roberts, David R. Bull |
Abstract | Imaging polarimetry allows more information to be extracted from a scene than conventional intensity or colour imaging. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method based on BM3D. This algorithm, PBM3D, gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show, by comparison with spectroscopy methods, that denoising polarization images using PBM3D allows the degree of polarization to be calculated more accurately. |
Tasks | Denoising |
Published | 2017-11-13 |
URL | http://arxiv.org/abs/1711.04853v2 |
http://arxiv.org/pdf/1711.04853v2.pdf | |
PWC | https://paperswithcode.com/paper/denoising-imaging-polarimetry-by-an-adapted |
Repo | |
Framework | |
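The practical stakes are easiest to see in the downstream quantity the paper evaluates: the degree of linear polarization (DoLP) is a ratio of differences of intensity images, so noise in the inputs is strongly amplified. A minimal sketch of that computation from four polarizer-angle images follows; PBM3D itself is not reproduced here, and the random frames are stand-ins for real polarimeter data.

```python
# Degree of linear polarization (DoLP) via Stokes parameters, from intensity
# images taken through linear polarizers at 0, 45, 90 and 135 degrees.
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135, eps=1e-8):
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal components
    return np.sqrt(s1**2 + s2**2) / (s0 + eps)

# Toy usage on random "images"; in practice these come from a polarimeter.
rng = np.random.default_rng(0)
frames = [rng.uniform(0.0, 1.0, size=(64, 64)) for _ in range(4)]
print(degree_of_linear_polarization(*frames).mean())
```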
Cross-Domain Perceptual Reward Functions
Title | Cross-Domain Perceptual Reward Functions |
Authors | Ashley D. Edwards, Srijan Sood, Charles L. Isbell Jr |
Abstract | In reinforcement learning, we often define goals by specifying rewards within desirable states. One problem with this approach is that we typically need to redefine the rewards each time the goal changes, which often requires some understanding of the solution in the agent’s environment. When humans are learning to complete tasks, we regularly utilize alternative sources that guide our understanding of the problem. Such task representations allow one to specify goals on their own terms, thus providing specifications that can be appropriately interpreted across various environments. This motivates our own work, in which we represent goals in environments that are different from the agent’s. We introduce Cross-Domain Perceptual Reward (CDPR) functions, learned rewards that represent the visual similarity between an agent’s state and a cross-domain goal image. We report results for learning the CDPRs with a deep neural network and using them to solve two tasks with deep reinforcement learning. |
Tasks | |
Published | 2017-05-25 |
URL | http://arxiv.org/abs/1705.09045v3 |
http://arxiv.org/pdf/1705.09045v3.pdf | |
PWC | https://paperswithcode.com/paper/cross-domain-perceptual-reward-functions |
Repo | |
Framework | |
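As a rough sketch of the idea (reward as learned visual similarity between the agent's current frame and a cross-domain goal image), the snippet below pairs an assumed small convolutional encoder with cosine similarity; the architecture, input size, and similarity measure are illustrative choices, not the paper's.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Small illustrative image encoder; CDPR's actual network differs."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
    def forward(self, x):
        return self.net(x)

encoder = FrameEncoder()
state = torch.rand(1, 3, 84, 84)   # agent's current observation
goal = torch.rand(1, 3, 84, 84)    # goal image from a different domain
reward = torch.cosine_similarity(encoder(state), encoder(goal)).item()
print(reward)
```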
Brain Responses During Robot-Error Observation
Title | Brain Responses During Robot-Error Observation |
Authors | Dominik Welke, Joos Behncke, Marina Hader, Robin Tibor Schirrmeister, Andreas Schönau, Boris Eßmann, Oliver Müller, Wolfram Burgard, Tonio Ball |
Abstract | Brain-controlled robots are a promising new type of assistive device for severely impaired persons. Little is known, however, about how to optimize the interaction of humans and brain-controlled robots. Information about the human’s perceived correctness of robot performance might provide a useful teaching signal for adaptive control algorithms and thus help enhance robot control. Here, we studied whether watching robots perform erroneous vs. correct actions elicits differential brain responses that can be decoded from single trials of electroencephalographic (EEG) recordings, and whether brain activity during human-robot interaction is modulated by the robot’s visual similarity to a human. To address these topics, we designed two experiments. In experiment I, participants watched a robot arm pour liquid into a cup. The robot performed the action either erroneously or correctly, i.e. it either spilled some liquid or not. In experiment II, participants observed two different types of robots, humanoid and non-humanoid, grabbing a ball. The robots either managed to grab the ball or not. We recorded high-resolution EEG during the observation tasks in both experiments to train a Filter Bank Common Spatial Pattern (FBCSP) pipeline on the multivariate EEG signal and decode for the correctness of the observed action, and for the type of the observed robot. Our findings show that it was possible to decode both correctness and robot type for the majority of participants at levels significantly, though often only slightly, above chance. Our findings suggest that non-invasive recordings of brain responses elicited when observing robots indeed contain decodable information about the correctness of the robot’s action and the type of observed robot. |
Tasks | EEG |
Published | 2017-08-04 |
URL | http://arxiv.org/abs/1708.01465v2 |
http://arxiv.org/pdf/1708.01465v2.pdf | |
PWC | https://paperswithcode.com/paper/brain-responses-during-robot-error |
Repo | |
Framework | |
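A single band of the FBCSP pipeline reduces to Common Spatial Patterns plus log-variance features. Below is a minimal numpy/scipy sketch of that building block on synthetic "error" vs. "correct" trials; the full pipeline in the paper adds a filter bank, feature selection, and a classifier, none of which are reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(class_a, class_b, n_pairs=3):
    """class_a/class_b: arrays of shape (trials, channels, samples)."""
    cov = lambda x: np.mean([t @ t.T / np.trace(t @ t.T) for t in x], axis=0)
    ca, cb = cov(class_a), cov(class_b)
    # Generalized eigenproblem: maximize class-A variance relative to A+B.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # both extremes
    return vecs[:, picks].T

def log_var_features(trials, w):
    proj = np.einsum('fc,tcs->tfs', w, trials)        # spatially filter
    var = proj.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

rng = np.random.default_rng(1)
err = rng.standard_normal((40, 32, 250))   # "error" trials: ch x samples
cor = rng.standard_normal((40, 32, 250))   # "correct" trials
w = csp_filters(err, cor)
print(log_var_features(err, w).shape)      # (40, 6) features per trial
```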
Video Captioning via Hierarchical Reinforcement Learning
Title | Video Captioning via Hierarchical Reinforcement Learning |
Authors | Xin Wang, Wenhu Chen, Jiawei Wu, Yuan-Fang Wang, William Yang Wang |
Abstract | Video captioning is the task of automatically generating a textual description of the actions in a video. Although previous work (e.g. sequence-to-sequence models) has shown promising results in abstracting a coarse description of a short video, it is still very challenging to caption a video containing multiple fine-grained actions with a detailed description. This paper aims to address the challenge by proposing a novel hierarchical reinforcement learning framework for video captioning, where a high-level Manager module learns to design sub-goals and a low-level Worker module recognizes the primitive actions to fulfill each sub-goal. With this compositional framework to reinforce video captioning at different levels, our approach significantly outperforms all the baseline methods on a newly introduced large-scale dataset for fine-grained video captioning. Furthermore, our non-ensemble model has already achieved state-of-the-art results on the widely used MSR-VTT dataset. |
Tasks | Hierarchical Reinforcement Learning, Video Captioning |
Published | 2017-11-29 |
URL | http://arxiv.org/abs/1711.11135v3 |
http://arxiv.org/pdf/1711.11135v3.pdf | |
PWC | https://paperswithcode.com/paper/video-captioning-via-hierarchical |
Repo | |
Framework | |
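The Manager/Worker decomposition can be made concrete with a schematic decoding loop: the Manager refreshes a sub-goal at a coarser timescale, and the Worker emits one word per step conditioned on it. All dimensions, the goal-refresh interval, and the greedy decoding below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

hid, vocab, goal_dim, goal_every = 128, 1000, 32, 4
manager = nn.GRUCell(hid, goal_dim)        # designs sub-goals
worker = nn.GRUCell(goal_dim + hid, hid)   # realizes them word by word
out = nn.Linear(hid, vocab)

video_ctx = torch.rand(1, hid)             # pooled video features (stand-in)
h_w, h_m = torch.zeros(1, hid), torch.zeros(1, goal_dim)
caption = []
for t in range(12):
    if t % goal_every == 0:                # Manager acts on a coarser clock
        h_m = manager(h_w, h_m)
    h_w = worker(torch.cat([h_m, video_ctx], dim=1), h_w)
    caption.append(out(h_w).argmax(dim=1).item())  # greedy word choice
print(caption)
```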
Integer Echo State Networks: Hyperdimensional Reservoir Computing
Title | Integer Echo State Networks: Hyperdimensional Reservoir Computing |
Authors | Denis Kleyko, Edward Paxon Frady, Evgeny Osipov |
Abstract | We propose an approximation of Echo State Networks (ESN) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed Integer Echo State Network (intESN) is a vector containing only n-bit integers (where n<8 is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The intESN architecture is verified with typical tasks in reservoir computing: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. Such an architecture results in dramatic improvements in memory footprint and computational efficiency, with minimal performance loss. |
Tasks | Time Series |
Published | 2017-06-01 |
URL | http://arxiv.org/abs/1706.00280v2 |
http://arxiv.org/pdf/1706.00280v2.pdf | |
PWC | https://paperswithcode.com/paper/integer-echo-state-networks-hyperdimensional |
Repo | |
Framework | |
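The abstract's core trick translates almost directly into code: the reservoir update becomes a cyclic shift plus a quantized input projection, with clipping keeping the state within an n-bit integer range. The sketch below assumes bipolar input weights and n=6 bits; readout training is omitted.

```python
import numpy as np

def intesn_step(x, u, w_in, n_bits=6):
    """x: integer reservoir state (N,); u: input sample; w_in: bipolar (N,)."""
    lim = 2 ** (n_bits - 1)
    x = np.roll(x, 1)                    # cyclic shift replaces W_rec @ x
    x = x + w_in * int(np.round(u))      # quantized input projection
    return np.clip(x, -lim, lim - 1)     # keep the state in n-bit range

rng = np.random.default_rng(2)
N = 500
x = np.zeros(N, dtype=np.int64)
w_in = rng.choice([-1, 1], size=N)       # bipolar input weights (assumption)
for u in rng.uniform(-3, 3, size=100):   # drive with a random signal
    x = intesn_step(x, u, w_in)
print(x.min(), x.max())                  # stays inside the 6-bit range
```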
Stochastic Separation Theorems
Title | Stochastic Separation Theorems |
Authors | A. N. Gorban, I. Y. Tyukin |
Abstract | The problem of non-iterative one-shot and non-destructive correction of unavoidable mistakes arises in all Artificial Intelligence applications in the real world. Its solution requires robust separation of samples with errors from samples where the system works properly. We demonstrate that in (moderately) high dimension this separation can be achieved with probability close to one by linear discriminants. Surprisingly, separation of a new image from a very large set of known images is almost always possible even in moderately high dimensions by linear functionals, and coefficients of these functionals can be found explicitly. Based on fundamental properties of measure concentration, we show that for $M < a\exp(bn)$, random $M$-element sets in $\mathbb{R}^n$ are linearly separable with probability $p$, $p>1-\vartheta$, where $1>\vartheta>0$ is a given small constant. Exact values of $a,b>0$ depend on the probability distribution that determines how the random $M$-element sets are drawn, and on the constant $\vartheta$. These {\em stochastic separation theorems} provide a new instrument for the development, analysis, and assessment of machine learning methods and algorithms in high dimension. Theoretical statements are illustrated with numerical examples. |
Tasks | |
Published | 2017-03-03 |
URL | http://arxiv.org/abs/1703.01203v3 |
http://arxiv.org/pdf/1703.01203v3.pdf | |
PWC | https://paperswithcode.com/paper/stochastic-separation-theorems |
Repo | |
Framework | |
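A small Monte-Carlo experiment illustrates the phenomenon: in moderately high dimension, a freshly drawn point is almost always separated from a large random set by an explicitly constructed linear functional. The uniform-in-cube distribution and the inner-product discriminant below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, M, trials = 100, 5000, 100
hits = 0
for _ in range(trials):
    data = rng.uniform(-1, 1, size=(M, n))     # the "known" set
    x = rng.uniform(-1, 1, size=n)             # the new point
    # Explicit functional l(y) = <y, x> with threshold <x, x>:
    # x itself scores ||x||^2, while <y, x> concentrates near 0.
    if (data @ x).max() < x @ x:
        hits += 1
print(f"separable in {hits}/{trials} trials")  # close to 100/100 for n=100
```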
A Revisit on Deep Hashings for Large-scale Content Based Image Retrieval
Title | A Revisit on Deep Hashings for Large-scale Content Based Image Retrieval |
Authors | Deng Cai, Xiuye Gu, Chaoqi Wang |
Abstract | There is a growing trend in studying deep hashing methods for content-based image retrieval (CBIR), where hash functions and binary codes are learnt using deep convolutional neural networks and the binary codes can then be used to do approximate nearest neighbor (ANN) search. All the existing deep hashing papers report their methods’ superior performance over traditional hashing methods according to their experimental results. However, there are serious flaws in the evaluations of existing deep hashing papers: (1) The datasets they used are too small and simple to simulate the real CBIR situation. (2) They did not correctly include the search time in their evaluation criteria, while the search time is crucial in real CBIR systems. (3) The performance of some unsupervised hashing algorithms (e.g., LSH) can easily be boosted if one uses multiple hash tables, an important factor that should be considered in the evaluation, yet most of the deep hashing papers failed to do so. We re-evaluate several state-of-the-art deep hashing methods with a carefully designed experimental setting. Empirical results reveal that the performance of these deep hashing methods is inferior to that of multi-table IsoH, a very simple unsupervised hashing method. Thus, the conclusions in all the deep hashing papers should be carefully re-examined. |
Tasks | Content-Based Image Retrieval, Image Retrieval |
Published | 2017-11-16 |
URL | http://arxiv.org/abs/1711.06016v1 |
http://arxiv.org/pdf/1711.06016v1.pdf | |
PWC | https://paperswithcode.com/paper/a-revisit-on-deep-hashings-for-large-scale |
Repo | |
Framework | |
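The multi-table effect the paper highlights in point (3) is easy to reproduce with classic random-hyperplane LSH, used here as a simpler stand-in for the multi-table IsoH the paper actually benchmarks: querying the union of buckets across several tables enlarges the candidate pool at modest cost. Table count, code length, and data below are arbitrary.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(4)
dim, bits, n_tables = 128, 16, 8
data = rng.standard_normal((10000, dim)).astype(np.float32)
powers = 1 << np.arange(bits)              # bit weights for packing codes

tables, planes = [], []
for _ in range(n_tables):
    p = rng.standard_normal((dim, bits)).astype(np.float32)
    codes = ((data @ p) > 0).astype(np.int64) @ powers   # 16-bit bucket keys
    buckets = defaultdict(list)
    for i, c in enumerate(codes):
        buckets[int(c)].append(i)
    planes.append(p)
    tables.append(buckets)

def query(q):
    """Candidates = union of the query's bucket in every table."""
    cand = set()
    for p, buckets in zip(planes, tables):
        code = int(((q @ p) > 0).astype(np.int64) @ powers)
        cand.update(buckets.get(code, []))
    return cand

q = rng.standard_normal(dim).astype(np.float32)
print(len(query(q)))                       # candidate pool to rank exactly
```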
DepthCut: Improved Depth Edge Estimation Using Multiple Unreliable Channels
Title | DepthCut: Improved Depth Edge Estimation Using Multiple Unreliable Channels |
Authors | Paul Guerrero, Holger Winnemöller, Wilmot Li, Niloy J. Mitra |
Abstract | In the context of scene understanding, a variety of methods exist to estimate different information channels from mono or stereo images, including disparity, depth, and normals. Although several advances have been reported in recent years for these tasks, the estimated information is often imprecise, particularly near depth discontinuities or creases. Studies have, however, shown that precisely such depth edges carry critical cues for the perception of shape, and play important roles in tasks like depth-based segmentation or foreground selection. Unfortunately, the currently extracted channels often carry conflicting signals, making it difficult for subsequent applications to effectively use them. In this paper, we focus on the problem of obtaining high-precision depth edges (i.e., depth contours and creases) by jointly analyzing such unreliable information channels. We propose DepthCut, a data-driven fusion of the channels using a convolutional neural network trained on a large dataset with known depth. The resulting depth edges can be used for segmentation, decomposing a scene into depth layers with relatively flat depth, or improving the accuracy of the depth estimate near depth edges by constraining its gradients to agree with these edges. Quantitatively, we compare against 15 variants of baselines and demonstrate that our depth edges result in improved segmentation performance and an improved depth estimate near depth edges compared to data-agnostic channel fusion. Qualitatively, we demonstrate that the depth edges result in superior segmentation and depth orderings. |
Tasks | Scene Understanding |
Published | 2017-05-22 |
URL | http://arxiv.org/abs/1705.07844v2 |
http://arxiv.org/pdf/1705.07844v2.pdf | |
PWC | https://paperswithcode.com/paper/depthcut-improved-depth-edge-estimation-using |
Repo | |
Framework | |
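Schematically, the data-driven fusion amounts to stacking the unreliable channels and letting a CNN predict a per-pixel depth-edge probability. The toy network below (7 input channels: RGB, disparity, normals) is a far smaller stand-in for DepthCut's actual architecture.

```python
import torch
import torch.nn as nn

fusion = nn.Sequential(                 # 3 (RGB) + 1 (disparity) + 3 (normals)
    nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1), nn.Sigmoid(),  # per-pixel depth-edge probability
)

rgb = torch.rand(1, 3, 128, 128)
disparity = torch.rand(1, 1, 128, 128)  # e.g. from a stereo matcher
normals = torch.rand(1, 3, 128, 128)    # e.g. from a normal estimator
edges = fusion(torch.cat([rgb, disparity, normals], dim=1))
print(edges.shape)                      # torch.Size([1, 1, 128, 128])
```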
SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks
Title | SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks |
Authors | Angshuman Parashar, Minsoo Rhu, Anurag Mukkara, Antonio Puglielli, Rangharajan Venkatesan, Brucek Khailany, Joel Emer, Stephen W. Keckler, William J. Dally |
Abstract | Convolutional Neural Networks (CNNs) have emerged as a fundamental technology for machine learning. High performance and extreme energy efficiency are critical for deployments of CNNs in a wide range of situations, especially mobile platforms such as autonomous vehicles, cameras, and electronic personal assistants. This paper introduces the Sparse CNN (SCNN) accelerator architecture, which improves performance and energy efficiency by exploiting the zero-valued weights that stem from network pruning during training and the zero-valued activations that arise from the common ReLU operator applied during inference. Specifically, SCNN employs a novel dataflow that enables maintaining the sparse weights and activations in a compressed encoding, which eliminates unnecessary data transfers and reduces storage requirements. Furthermore, the SCNN dataflow facilitates efficient delivery of those weights and activations to the multiplier array, where they are extensively reused. In addition, the accumulation of multiplication products is performed in a novel accumulator array. Our results show that on contemporary neural networks, SCNN can improve both performance and energy by factors of 2.7x and 2.3x, respectively, over a comparably provisioned dense CNN accelerator. |
Tasks | Autonomous Vehicles, Network Pruning |
Published | 2017-05-23 |
URL | http://arxiv.org/abs/1708.04485v1 |
http://arxiv.org/pdf/1708.04485v1.pdf | |
PWC | https://paperswithcode.com/paper/scnn-an-accelerator-for-compressed-sparse |
Repo | |
Framework | |
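The dataflow can be mimicked in software for one input channel and one filter: store only the nonzero activations and weights, form every cross product, and scatter-accumulate each product into its output coordinate. This is an analogy for the hardware idea, not a performance model.

```python
import numpy as np

def sparse_conv2d_valid(act, wgt):
    """'valid' correlation using only the nonzero entries of act and wgt."""
    H, W = act.shape
    R, S = wgt.shape
    out = np.zeros((H - R + 1, W - S + 1))
    nz_a = list(zip(*np.nonzero(act)))   # compressed activations
    nz_w = list(zip(*np.nonzero(wgt)))   # compressed (pruned) weights
    for (y, x) in nz_a:                  # Cartesian product of nonzeros
        for (r, s) in nz_w:
            oy, ox = y - r, x - s        # scatter target in the output
            if 0 <= oy < out.shape[0] and 0 <= ox < out.shape[1]:
                out[oy, ox] += act[y, x] * wgt[r, s]
    return out

rng = np.random.default_rng(5)
act = rng.standard_normal((8, 8)) * (rng.random((8, 8)) < 0.3)  # ReLU-like
wgt = rng.standard_normal((3, 3)) * (rng.random((3, 3)) < 0.5)  # pruned
ref = np.array([[(act[i:i+3, j:j+3] * wgt).sum() for j in range(6)]
                for i in range(6)])      # dense reference result
print(np.allclose(sparse_conv2d_valid(act, wgt), ref))          # True
```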
Towards thinner convolutional neural networks through Gradually Global Pruning
Title | Towards thinner convolutional neural networks through Gradually Global Pruning |
Authors | Zhengtao Wang, Ce Zhu, Zhiqiang Xia, Qi Guo, Yipeng Liu |
Abstract | Deep network pruning is an effective method to reduce the storage and computation cost of deep neural networks when applying them to resource-limited devices. Among many pruning granularities, neuron-level pruning removes redundant neurons and filters from the model and results in thinner networks. In this paper, we propose a gradually global pruning scheme for neuron-level pruning. In each pruning step, a small percentage of neurons is selected and dropped across all layers in the model. We also propose a simple method to eliminate the biases in evaluating the importance of neurons to make the scheme feasible. Compared with layer-wise pruning schemes, our scheme avoids the difficulty of determining the redundancy in each layer and is more effective for deep networks. Our scheme automatically finds a thinner sub-network within the original network under a given performance constraint. |
Tasks | Network Pruning |
Published | 2017-03-29 |
URL | http://arxiv.org/abs/1703.09916v1 |
http://arxiv.org/pdf/1703.09916v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-thinner-convolutional-neural-networks |
Repo | |
Framework | |
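One gradually-global step is straightforward to sketch: score every neuron in every layer on a common scale, then zero out the globally weakest few percent. The per-layer-normalized l2-norm importance below is an illustrative stand-in for the paper's bias-corrected criterion.

```python
import numpy as np

rng = np.random.default_rng(6)
layers = [rng.standard_normal((64, 128)),   # weight matrices: (neurons, inputs)
          rng.standard_normal((128, 256)),
          rng.standard_normal((32, 64))]

def prune_step(layers, frac=0.05):
    scores = []
    for li, w in enumerate(layers):
        imp = np.linalg.norm(w, axis=1)     # per-neuron importance
        imp = imp / imp.mean()              # normalize so layers are comparable
        scores += [(s, li, ni) for ni, s in enumerate(imp)]
    scores.sort()
    k = int(frac * len(scores))             # drop the weakest k neurons overall
    return [(li, ni) for _, li, ni in scores[:k]]

for li, ni in prune_step(layers):
    layers[li][ni, :] = 0.0                 # zero out the pruned neuron
print(sum((np.linalg.norm(w, axis=1) == 0).sum() for w in layers))
```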
Knowledge Representation in Bicategories of Relations
Title | Knowledge Representation in Bicategories of Relations |
Authors | Evan Patterson |
Abstract | We introduce the relational ontology log, or relational olog, a knowledge representation system based on the category of sets and relations. It is inspired by Spivak and Kent’s olog, a recent categorical framework for knowledge representation. Relational ologs interpolate between ologs and description logic, the dominant formalism for knowledge representation today. In this paper, we investigate relational ologs both for their own sake and to gain insight into the relationship between the algebraic and logical approaches to knowledge representation. On a practical level, we show by example that relational ologs have a friendly and intuitive, yet fully precise, graphical syntax, derived from the string diagrams of monoidal categories. We explain several other useful features of relational ologs not possessed by most description logics, such as a type system and a rich, flexible notion of instance data. In a more theoretical vein, we draw on categorical logic to show how relational ologs can be translated to and from logical theories in a fragment of first-order logic. Although we make extensive use of categorical language, this paper is designed to be self-contained and has considerable expository content. The only prerequisites are knowledge of first-order logic and the rudiments of category theory. |
Tasks | |
Published | 2017-06-02 |
URL | http://arxiv.org/abs/1706.00526v2 |
http://arxiv.org/pdf/1706.00526v2.pdf | |
PWC | https://paperswithcode.com/paper/knowledge-representation-in-bicategories-of |
Repo | |
Framework | |
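The algebra underlying relational ologs, the category Rel of sets and relations, fits in a few lines when relations are represented as sets of pairs; composition and the converse (dagger) are the basic operations. The olog machinery itself (types, instance data, graphical syntax) is not reproduced.

```python
def compose(r, s):
    """Relational composition: x (r;s) z iff there is y with x r y and y s z."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

def dagger(r):
    """Converse relation, the dagger structure of Rel."""
    return {(y, x) for (x, y) in r}

parent = {("alice", "bob"), ("bob", "carol")}
print(compose(parent, parent))   # {('alice', 'carol')} -- grandparent
print(dagger(parent))            # the child-of relation
```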
Prediction and Control with Temporal Segment Models
Title | Prediction and Control with Temporal Segment Models |
Authors | Nikhil Mishra, Pieter Abbeel, Igor Mordatch |
Abstract | We introduce a method for learning the dynamics of complex nonlinear systems based on deep generative models over temporal segments of states and actions. Unlike dynamics models that operate over individual discrete timesteps, we learn the distribution over future state trajectories conditioned on past state, past action, and planned future action trajectories, as well as a latent prior over action trajectories. Our approach is based on convolutional autoregressive models and variational autoencoders. It makes stable and accurate predictions over long horizons for complex, stochastic systems, effectively expressing uncertainty and modeling the effects of collisions, sensory noise, and action delays. The learned dynamics model and action prior can be used for end-to-end, fully differentiable trajectory optimization and model-based policy optimization, which we use to evaluate the performance and sample-efficiency of our method. |
Tasks | |
Published | 2017-03-12 |
URL | http://arxiv.org/abs/1703.04070v2 |
http://arxiv.org/pdf/1703.04070v2.pdf | |
PWC | https://paperswithcode.com/paper/prediction-and-control-with-temporal-segment |
Repo | |
Framework | |
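A skeleton of the segment-level idea: encode a past state-action segment into a latent with 1-D convolutions, then decode a future state segment. Layer sizes and the single deterministic decoder head below are illustrative assumptions, not the paper's architecture or training objective.

```python
import torch
import torch.nn as nn

state_dim, act_dim, seg_len, z_dim = 8, 2, 16, 32
enc = nn.Sequential(                                 # past segment -> latent
    nn.Conv1d(state_dim + act_dim, 32, 4, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(2 * z_dim),          # -> (mu, logvar)
)
dec = nn.Sequential(                                 # latent -> future segment
    nn.Linear(z_dim, 32 * seg_len), nn.ReLU(),
    nn.Unflatten(1, (32, seg_len)),
    nn.Conv1d(32, state_dim, 3, padding=1),          # mean of future states
)

past = torch.rand(1, state_dim + act_dim, seg_len)   # past segment (B, C, T)
mu, logvar = enc(past).chunk(2, dim=1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp() # reparameterization
future_mean = dec(z)
print(future_mean.shape)                             # torch.Size([1, 8, 16])
```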
Kronecker Recurrent Units
Title | Kronecker Recurrent Units |
Authors | Cijo Jose, Moustapha Cisse, Francois Fleuret |
Abstract | Our work addresses two important issues with recurrent neural networks: (1) they are over-parameterized, and (2) the recurrence matrix is ill-conditioned. The former increases the sample complexity of learning and the training time. The latter causes the vanishing and exploding gradient problem. We present a flexible recurrent neural network model called Kronecker Recurrent Units (KRU). KRU achieves parameter efficiency in RNNs through a Kronecker-factored recurrent matrix. It overcomes the ill-conditioning of the recurrent matrix by enforcing soft unitary constraints on the factors. Thanks to the small dimensionality of the factors, maintaining these constraints is computationally efficient. Our experimental results on seven standard datasets reveal that KRU can reduce the number of parameters in the recurrent weight matrix by three orders of magnitude compared to existing recurrent models, without sacrificing statistical performance. In particular, these results show that while there are advantages to having a high-dimensional recurrent space, the capacity of the recurrent part of the model can be dramatically reduced. |
Tasks | |
Published | 2017-05-29 |
URL | http://arxiv.org/abs/1705.10142v7 |
http://arxiv.org/pdf/1705.10142v7.pdf | |
PWC | https://paperswithcode.com/paper/kronecker-recurrent-units |
Repo | |
Framework | |
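The factorization trick is easy to demonstrate: a recurrent matrix stored as a Kronecker product of small factors can be applied without ever materializing the full matrix. Two square factors are shown below; KRU composes more factors and adds soft unitary constraints, which are not enforced here.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((16, 16))
B = rng.standard_normal((16, 16))
h = rng.standard_normal(256)                  # hidden state, size 16*16

full = np.kron(A, B) @ h                      # naive: 256x256 matrix
fast = (A @ h.reshape(16, 16) @ B.T).ravel()  # factored: two 16x16 products
print(np.allclose(full, fast))                # True

# Parameter count: 2 * 16*16 = 512 vs. 256*256 = 65536 for the dense matrix.
```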
Towards a Deep Reinforcement Learning Approach for Tower Line Wars
Title | Towards a Deep Reinforcement Learning Approach for Tower Line Wars |
Authors | Per-Arne Andersen, Morten Goodwin, Ole-Christoffer Granmo |
Abstract | There have been numerous breakthroughs in reinforcement learning in recent years, perhaps most notably Deep Reinforcement Learning successfully playing and winning relatively advanced computer games. There is undoubtedly an anticipation that Deep Reinforcement Learning will play a major role when the first AI masters the complicated game plays needed to beat a professional Real-Time Strategy game player. For this to be possible, there needs to be a game environment that targets and fosters AI research, and specifically Deep Reinforcement Learning. Some game environments already exist; however, these are either overly simplistic, such as Atari 2600, or complex, such as StarCraft II from Blizzard Entertainment. We propose a game environment in between Atari 2600 and StarCraft II, particularly targeting Deep Reinforcement Learning algorithm research. The environment is a variant of Tower Line Wars from Warcraft III, Blizzard Entertainment. Further, as a proof of concept that the environment can harbor Deep Reinforcement Learning algorithms, we propose and apply a Deep Q-Reinforcement architecture. The architecture simplifies the state space so that it is applicable to Q-learning, and in turn improves performance compared to current state-of-the-art methods. Our experiments show that the proposed architecture can learn to play the environment well, and score 33% better than standard Deep Q-learning, which in turn demonstrates the usefulness of the game environment. |
Tasks | Q-Learning, Starcraft, Starcraft II |
Published | 2017-12-17 |
URL | http://arxiv.org/abs/1712.06180v1 |
http://arxiv.org/pdf/1712.06180v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-a-deep-reinforcement-learning |
Repo | |
Framework | |
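For context, one deep Q-learning update of the generic kind such an environment is meant to exercise is sketched below on stand-in tensors; the paper's environment and its state-space simplification are not reproduced.

```python
import torch
import torch.nn as nn

n_actions, state_dim, gamma = 6, 40, 0.99
q = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
opt = torch.optim.Adam(q.parameters(), lr=1e-3)

# One batch of (s, a, r, s', done) transitions with random stand-in data.
s = torch.rand(32, state_dim)
a = torch.randint(n_actions, (32,))
r = torch.rand(32)
s2 = torch.rand(32, state_dim)
done = torch.zeros(32)

with torch.no_grad():                          # bootstrapped TD target
    target = r + gamma * (1 - done) * q(s2).max(dim=1).values
pred = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(pred, target)
opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```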
On Bayesian Exponentially Embedded Family for Model Order Selection
Title | On Bayesian Exponentially Embedded Family for Model Order Selection |
Authors | Zhenghan Zhu, Steven Kay |
Abstract | In this paper, we derive a Bayesian model order selection rule by using the exponentially embedded family method, termed Bayesian EEF. Unlike many other Bayesian model selection methods, the Bayesian EEF can use vague proper priors and improper noninformative priors to be objective in the elicitation of parameter priors. Moreover, the penalty term of the rule is shown to be the sum of half of the parameter dimension and the estimated mutual information between parameter and observed data. This helps to reveal the EEF mechanism in selecting model orders and may provide new insights into the open problems of choosing an optimal penalty term for model order selection and choosing a good prior from information theoretic viewpoints. The important example of linear model order selection is given to illustrate the algorithms and arguments. Lastly, the Bayesian EEF that uses Jeffreys prior coincides with the EEF rule derived by frequentist strategies. This shows another interesting relationship between the frequentist and Bayesian philosophies for model selection. |
Tasks | Model Selection |
Published | 2017-03-30 |
URL | http://arxiv.org/abs/1703.10513v2 |
http://arxiv.org/pdf/1703.10513v2.pdf | |
PWC | https://paperswithcode.com/paper/on-bayesian-exponentially-embedded-family-for |
Repo | |
Framework | |
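A generic penalized-likelihood order-selection loop on the paper's example class (linear/polynomial models) is sketched below. The BIC-style (d/2) log N penalty is only a stand-in: the Bayesian EEF penalty is half the parameter dimension plus an estimated parameter-data mutual information, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)
N, sigma = 200, 0.5
t = np.linspace(-1, 1, N)
y = sum(c * t**k for k, c in enumerate([1.0, -2.0, 0.5, 1.5]))  # true order 3
y = y + sigma * rng.standard_normal(N)

best = None
for d in range(1, 9):                                # candidate orders
    X = np.vander(t, d + 1, increasing=True)
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    loglik = -0.5 * N * np.log(resid @ resid / N)    # profiled Gaussian loglik
    score = loglik - 0.5 * (d + 1) * np.log(N)       # penalize dimension
    if best is None or score > best[0]:
        best = (score, d)
print("selected order:", best[1])                    # typically 3
```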