July 27, 2019

2848 words 14 mins read

Paper Group ANR 622

Hardware-efficient on-line learning through pipelined truncated-error backpropagation in binary-state networks. An Adaptive Framework for Missing Depth Inference Using Joint Bilateral Filter. Adversarial Examples that Fool Detectors. BOOK: Storing Algorithm-Invariant Episodes for Deep Reinforcement Learning. Exploring Question Understanding and Ada …

Hardware-efficient on-line learning through pipelined truncated-error backpropagation in binary-state networks

Title Hardware-efficient on-line learning through pipelined truncated-error backpropagation in binary-state networks
Authors Hesham Mostafa, Bruno Pedroni, Sadique Sheik, Gert Cauwenberghs
Abstract Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1Gb DDR2 DRAM, which shows only a small degradation in test error compared to an equivalently sized binary ANN trained off-line using standard backpropagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks.
Tasks
Published 2017-06-15
URL http://arxiv.org/abs/1707.03049v2
PDF http://arxiv.org/pdf/1707.03049v2.pdf
PWC https://paperswithcode.com/paper/hardware-efficient-on-line-learning-through
Repo
Framework
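
A minimal NumPy sketch of the multiplication-free arithmetic this entry describes: with binary {-1, +1} states and ternary {-1, 0, +1} truncated errors, every weight update reduces to signed additions. The layer sizes, truncation threshold, and one-step error propagation below are illustrative assumptions, not the paper's FPGA pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    """Binary state variables: activations live in {-1, +1}."""
    return np.where(x >= 0, 1.0, -1.0)

def ternarize(err, threshold=0.1):
    """Truncated-error backpropagation: keep only the sign of sufficiently large errors."""
    return np.sign(err) * (np.abs(err) > threshold)

# Toy two-layer network; the input x is assumed to be already binarized to {-1, +1}.
W1 = rng.normal(0, 0.1, (784, 256))
W2 = rng.normal(0, 0.1, (256, 10))
lr = 0.01

def train_step(x, y_onehot):
    global W1, W2
    h = binarize(x @ W1)                      # forward pass with binary states
    logits = h @ W2
    err_out = ternarize(logits - y_onehot)    # ternary output error
    err_hid = ternarize(err_out @ W2.T)       # ternary error propagated one layer back
    # With binary activations and ternary errors, these outer products contain only
    # {-1, 0, +1}: every weight update is an addition, a subtraction, or a no-op.
    W2 -= lr * np.outer(h, err_out)
    W1 -= lr * np.outer(x, err_hid)
```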

An Adaptive Framework for Missing Depth Inference Using Joint Bilateral Filter

Title An Adaptive Framework for Missing Depth Inference Using Joint Bilateral Filter
Authors Rajer Sindhu, Jayesh Ananya
Abstract Depth imaging research has largely focused on sensor and intrinsic properties. However, the accuracy of an acquired pixel depends largely on the capture conditions. We propose a new depth estimation and approximation algorithm that takes an arbitrary 3D point cloud as input, potentially with complex geometric structures, and automatically generates a bounding box used to clamp the 3D distribution into a valid range. We then infer the desired compact geometric network from complex 3D geometries using a series of adaptive joint bilateral filters. Our approach leverages the input depth in the construction of a compact, descriptive adaptive filter framework. The resulting system allows a user to control how the captured depth map fits the target geometry. In addition, it is desirable to visualize structurally problematic areas of the depth data in a dynamic environment. To provide this feature, we investigate a fast update algorithm that estimates the fragility of each pixel's corresponding 3D point using machine learning. We present a new form of feature vector analysis and demonstrate its effectiveness on the dataset. In our experiments, we demonstrate the practicality and benefits of the proposed method by computing accurate solutions for captured depth maps from different types of sensors, showing better results than existing methods.
Tasks Depth Estimation
Published 2017-10-14
URL http://arxiv.org/abs/1710.05221v1
PDF http://arxiv.org/pdf/1710.05221v1.pdf
PWC https://paperswithcode.com/paper/an-adaptive-framework-for-missing-depth
Repo
Framework
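
A minimal sketch of the joint bilateral filtering step this entry builds on: missing depth pixels are filled from valid neighbours, weighted by spatial distance and by similarity in a guide (colour) image. The window radius and the two sigmas are illustrative; the paper's adaptive parameter selection and fragility prediction are not shown.

```python
import numpy as np

def joint_bilateral_fill(depth, guide, radius=5, sigma_s=3.0, sigma_r=0.1):
    """depth: HxW array with np.nan at missing pixels; guide: HxW grayscale image in [0, 1]."""
    H, W = depth.shape
    out = depth.copy()
    ys, xs = np.where(np.isnan(depth))
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        x0, x1 = max(0, x - radius), min(W, x + radius + 1)
        patch = depth[y0:y1, x0:x1]
        gpatch = guide[y0:y1, x0:x1]
        valid = ~np.isnan(patch)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        # Spatial weight (distance to the hole) times range weight (guide-image similarity).
        w_spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
        w_range = np.exp(-((gpatch - guide[y, x]) ** 2) / (2 * sigma_r ** 2))
        w = w_spatial * w_range * valid
        if w.sum() > 0:
            out[y, x] = np.sum(w * np.nan_to_num(patch)) / w.sum()
    return out
```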

Adversarial Examples that Fool Detectors

Title Adversarial Examples that Fool Detectors
Authors Jiajun Lu, Hussein Sibai, Evan Fabry
Abstract An adversarial example is an example that has been adjusted to produce a wrong label when presented to a system at test time. To date, adversarial example constructions have been demonstrated for classifiers, but not for detectors. If adversarial examples that could fool a detector exist, they could be used to (for example) maliciously create security hazards on roads populated with smart vehicles. In this paper, we demonstrate a construction that successfully fools two standard detectors, Faster RCNN and YOLO. The existence of such examples is surprising, as attacking a classifier is very different from attacking a detector, and the structure of detectors - which must search for their own bounding box, and which cannot estimate that box very accurately - makes it quite likely that adversarial patterns would be strongly disrupted. We show that our construction produces adversarial examples that generalize well across sequences digitally, even though large perturbations are needed. We also show that our construction yields physical objects that are adversarial.
Tasks
Published 2017-12-07
URL http://arxiv.org/abs/1712.02494v1
PDF http://arxiv.org/pdf/1712.02494v1.pdf
PWC https://paperswithcode.com/paper/adversarial-examples-that-fool-detectors
Repo
Framework
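
A hedged PyTorch sketch of one generic way to attack a detector: iteratively perturb the image to suppress the detector's confidence for a target class under an L-infinity budget. The `detector` interface (image to per-box class scores) is a hypothetical stand-in; the paper's actual construction for Faster RCNN and YOLO, and its physical-object attacks, differ in detail.

```python
import torch

def suppress_class(detector, image, target_class, eps=8 / 255, steps=20, alpha=1 / 255):
    """Projected gradient attack that drives down the scores of `target_class` boxes."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        scores = detector(x_adv)              # assumed shape: [num_boxes, num_classes]
        loss = scores[:, target_class].sum()  # total confidence for the target class
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv - alpha * x_adv.grad.sign()          # descend on confidence
            x_adv = image + (x_adv - image).clamp(-eps, eps)   # stay inside the L_inf budget
            x_adv = x_adv.clamp(0, 1)                          # keep a valid image
        x_adv = x_adv.detach()
    return x_adv
```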

BOOK: Storing Algorithm-Invariant Episodes for Deep Reinforcement Learning

Title BOOK: Storing Algorithm-Invariant Episodes for Deep Reinforcement Learning
Authors Simyung Chang, YoungJoon Yoo, Jaeseok Choi, Nojun Kwak
Abstract We introduce a novel method to train agents of reinforcement learning (RL) by sharing knowledge in a way similar to the concept of using a book. The recorded information in the form of a book is the main means by which humans learn knowledge. Nevertheless, the conventional deep RL methods have mainly focused either on experiential learning where the agent learns through interactions with the environment from the start or on imitation learning that tries to mimic the teacher. In contrast, our proposed book learning shares key information among different agents in a book-like manner by delving into the following two characteristic features: (1) By defining the linguistic function, input states can be clustered semantically into a relatively small number of core clusters, which are forwarded to other RL agents in a prescribed manner. (2) By defining state priorities and the contents for recording, core experiences can be selected and stored in a small container. We call this container a `BOOK’. Our method learns hundreds to thousands of times faster than the conventional methods by learning only a handful of core cluster information, which shows that deep RL agents can effectively learn through the shared knowledge from other agents.
Tasks Imitation Learning
Published 2017-09-05
URL http://arxiv.org/abs/1709.01308v3
PDF http://arxiv.org/pdf/1709.01308v3.pdf
PWC https://paperswithcode.com/paper/book-storing-algorithm-invariant-episodes-for
Repo
Framework
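
A simplified sketch of the two ingredients described in this entry: (1) cluster input states into a small number of core clusters, and (2) keep only the highest-priority experience per cluster in a small, shareable container. The clustering choice (k-means) and the priority rule (episode return) are illustrative assumptions, not the paper's exact linguistic function or priority definition.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

class Book:
    """A small, shareable container of one representative experience per core cluster."""

    def __init__(self, n_clusters=64, seed=0):
        self.kmeans = MiniBatchKMeans(n_clusters=n_clusters, random_state=seed)
        self.entries = {}                      # cluster id -> (priority, state, action)

    def fit_clusters(self, states):
        self.kmeans.fit(states)                # semantic grouping of raw input states

    def record(self, state, action, episode_return):
        cid = int(self.kmeans.predict(state.reshape(1, -1))[0])
        prev = self.entries.get(cid)
        if prev is None or episode_return > prev[0]:   # keep the highest-priority entry
            self.entries[cid] = (episode_return, state, action)

    def dump(self):
        return list(self.entries.values())     # the "book" handed to another agent
```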

Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering

Title Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering
Authors Junbei Zhang, Xiaodan Zhu, Qian Chen, Lirong Dai, Si Wei, Hui Jiang
Abstract The last several years have seen intensive interest in exploring neural-network-based models for machine comprehension (MC) and question answering (QA). In this paper, we approach these problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and propose adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline.
Tasks Question Answering, Reading Comprehension
Published 2017-03-14
URL http://arxiv.org/abs/1703.04617v2
PDF http://arxiv.org/pdf/1703.04617v2.pdf
PWC https://paperswithcode.com/paper/exploring-question-understanding-and
Repo
Framework

A New Method for Performance Analysis in Nonlinear Dimensionality Reduction

Title A New Method for Performance Analysis in Nonlinear Dimensionality Reduction
Authors Jiaxi Liang, Shojaeddin Chenouri, Christopher G. Small
Abstract In this paper, we develop a local rank correlation measure which quantifies the performance of dimension reduction methods. The local rank correlation is easily interpretable, and robust against the extreme skewness of nearest neighbor distributions in high dimensions. Some benchmark datasets are studied. We find that the local rank correlation closely corresponds to our visual interpretation of the quality of the output. In addition, we demonstrate that the local rank correlation is useful in estimating the intrinsic dimensionality of the original data, and in selecting a suitable value of tuning parameters used in some algorithms.
Tasks Dimensionality Reduction
Published 2017-11-16
URL http://arxiv.org/abs/1711.06252v1
PDF http://arxiv.org/pdf/1711.06252v1.pdf
PWC https://paperswithcode.com/paper/a-new-method-for-performance-analysis-in
Repo
Framework
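
A hedged sketch of a local rank correlation in the spirit of this entry: for each point, compare the ordering of its k nearest original-space neighbours with the ordering those same neighbours receive in the embedding, via Spearman correlation, then average over points. The paper's exact definition may differ.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr

def local_rank_correlation(X_high, X_low, k=10):
    """X_high: [n, D] original data; X_low: [n, d] embedding. Returns a score in [-1, 1]."""
    D_high = cdist(X_high, X_high)
    D_low = cdist(X_low, X_low)
    scores = []
    for i in range(len(X_high)):
        nbrs = np.argsort(D_high[i])[1:k + 1]          # k nearest neighbours in the original space
        rho, _ = spearmanr(D_high[i, nbrs], D_low[i, nbrs])
        scores.append(rho)
    return float(np.mean(scores))
```

Under this reading, values near 1 indicate that the embedding preserves each point's local neighbour ordering, and the k-nearest-neighbour restriction gives the robustness to skewed high-dimensional distance distributions that the abstract mentions.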

Parsing with CYK over Distributed Representations

Title Parsing with CYK over Distributed Representations
Authors Fabio Massimo Zanzotto, Giordano Cristini, Giorgio Satta
Abstract Syntactic parsing is a key task in natural language processing. This task has been dominated by symbolic, grammar-based parsers. Neural networks, with their distributed representations, are challenging these methods. In this article we show that existing symbolic parsing algorithms can cross the border and be entirely formulated over distributed representations. To this end we introduce a version of the traditional Cocke-Younger-Kasami (CYK) algorithm, called D-CYK, which is entirely defined over distributed representations. Our D-CYK uses matrix multiplication on real number matrices of size independent of the length of the input string. These operations are compatible with traditional neural networks. Experiments show that our D-CYK approximates the original CYK algorithm. By showing that CYK can be entirely performed on distributed representations, we open the way to the definition of recurrent layers of CYK-informed neural networks.
Tasks
Published 2017-05-24
URL http://arxiv.org/abs/1705.08843v2
PDF http://arxiv.org/pdf/1705.08843v2.pdf
PWC https://paperswithcode.com/paper/parsing-with-cyk-over-distributed
Repo
Framework
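
For context, a compact implementation of the traditional CYK recognizer over a grammar in Chomsky normal form, which is the table-filling procedure the paper re-expresses over distributed (matrix) representations; the D-CYK formulation itself is not reproduced here.

```python
def cyk_recognize(words, lexical, binary_rules, start="S"):
    """lexical: {terminal: {A, ...}}; binary_rules: {(B, C): {A, ...}} for rules A -> B C."""
    n = len(words)
    # table[i][j] = set of nonterminals deriving words[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(lexical.get(w, ()))
    for span in range(2, n + 1):                 # span length
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                # split point
                for B in table[i][k]:
                    for C in table[k + 1][j]:
                        table[i][j] |= binary_rules.get((B, C), set())
    return start in table[0][n - 1]

# Toy grammar: S -> NP VP, VP -> V NP, NP -> 'she' | 'stars', V -> 'sees'
lexical = {"she": {"NP"}, "sees": {"V"}, "stars": {"NP"}}
binary_rules = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}
print(cyk_recognize(["she", "sees", "stars"], lexical, binary_rules))  # True
```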

Recurrent Scene Parsing with Perspective Understanding in the Loop

Title Recurrent Scene Parsing with Perspective Understanding in the Loop
Authors Shu Kong, Charless Fowlkes
Abstract Objects may appear at arbitrary scales in perspective images of a scene, posing a challenge for recognition systems that process images at a fixed resolution. We propose a depth-aware gating module that adaptively selects the pooling field size in a convolutional network architecture according to the object scale (inversely proportional to the depth) so that small details are preserved for distant objects while larger receptive fields are used for those nearby. The depth gating signal is provided by stereo disparity or estimated directly from monocular input. We integrate this depth-aware gating into a recurrent convolutional neural network to perform semantic segmentation. Our recurrent module iteratively refines the segmentation results, leveraging the depth and semantic predictions from the previous iterations. Through extensive experiments on four popular large-scale RGB-D datasets, we demonstrate this approach achieves competitive semantic segmentation performance with a model which is substantially more compact. We carry out extensive analysis of this architecture including variants that operate on monocular RGB but use depth as side-information during training, unsupervised gating as a generic attentional mechanism, and multi-resolution gating. We find that gated pooling for joint semantic segmentation and depth yields state-of-the-art results for quantitative monocular depth estimation.
Tasks Depth Estimation, Monocular Depth Estimation, Scene Parsing, Semantic Segmentation
Published 2017-05-20
URL http://arxiv.org/abs/1705.07238v2
PDF http://arxiv.org/pdf/1705.07238v2.pdf
PWC https://paperswithcode.com/paper/recurrent-scene-parsing-with-perspective
Repo
Framework
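
A schematic PyTorch sketch of the depth-aware gating idea described in this entry: features are pooled at several field sizes, and a gate predicted from the depth map softly selects which scale each spatial location uses, so nearby objects receive larger receptive fields and distant ones smaller. Layer sizes and pooling scales are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthGatedPooling(nn.Module):
    def __init__(self, channels, pool_sizes=(1, 3, 5, 7)):
        super().__init__()
        self.pool_sizes = pool_sizes
        # Predict one gate per pooling scale from the single-channel depth map.
        self.gate = nn.Conv2d(1, len(pool_sizes), kernel_size=3, padding=1)

    def forward(self, feats, depth):
        # feats: [B, C, H, W]; depth: [B, 1, H, W] (stereo disparity or a monocular estimate)
        pooled = [
            F.avg_pool2d(feats, kernel_size=k, stride=1, padding=k // 2)
            for k in self.pool_sizes
        ]                                                    # list of [B, C, H, W]
        weights = torch.softmax(self.gate(depth), dim=1)     # [B, S, H, W] scale selection
        stacked = torch.stack(pooled, dim=1)                 # [B, S, C, H, W]
        return (weights.unsqueeze(2) * stacked).sum(dim=1)   # gated mixture, [B, C, H, W]
```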

Privileged Multi-label Learning

Title Privileged Multi-label Learning
Authors Shan You, Chang Xu, Yunhe Wang, Chao Xu, Dacheng Tao
Abstract This paper presents privileged multi-label learning (PrML) to explore and exploit the relationship between labels in multi-label learning problems. We suggest that each individual label can not only be implicitly connected with other labels via the low-rank constraint over label predictors, but can also receive explicit comments on its performance on examples from the other labels, which together act as an Oracle teacher. We generate a privileged label feature for each example and its individual label, and then integrate it into the framework of low-rank based multi-label learning. The proposed algorithm can therefore comprehensively explore and exploit label relationships by inheriting all the merits of privileged information and low-rank constraints. We show that PrML can be efficiently solved by a dual coordinate descent algorithm using an iterative optimization strategy with cheap updates. Experiments on benchmark datasets show that through privileged label features, the performance can be significantly improved and PrML is superior to several competing methods in most cases.
Tasks Multi-Label Learning
Published 2017-01-25
URL http://arxiv.org/abs/1701.07194v1
PDF http://arxiv.org/pdf/1701.07194v1.pdf
PWC https://paperswithcode.com/paper/privileged-multi-label-learning
Repo
Framework
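
A minimal sketch of just the low-rank ingredient referenced above: the label predictor matrix is factorised as W = U V so that label relationships are shared through a common low-dimensional subspace. The privileged label features and the dual coordinate descent solver of PrML are not shown; the logistic loss and gradient steps are illustrative assumptions.

```python
import numpy as np

def lowrank_multilabel_fit(X, Y, rank=5, lr=0.01, epochs=200, seed=0):
    """X: [n, d] features, Y: [n, L] labels in {0, 1}. Returns (U, V) with W = U @ V."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    L = Y.shape[1]
    U = rng.normal(0, 0.01, (d, rank))
    V = rng.normal(0, 0.01, (rank, L))
    for _ in range(epochs):
        P = 1.0 / (1.0 + np.exp(-(X @ U @ V)))   # sigmoid score per example and label
        G = (P - Y) / n                          # gradient of the logistic loss w.r.t. scores
        U -= lr * X.T @ (G @ V.T)
        V -= lr * (X @ U).T @ G
    return U, V
```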

Feature Selection Facilitates Learning Mixtures of Discrete Product Distributions

Title Feature Selection Facilitates Learning Mixtures of Discrete Product Distributions
Authors Vincent Zhao, Steven W. Zucker
Abstract Feature selection can facilitate the learning of mixtures of discrete random variables as they arise, e.g. in crowdsourcing tasks. Intuitively, not all workers are equally reliable but, if the less reliable ones could be eliminated, then learning should be more robust. By analogy with Gaussian mixture models, we seek a low-order statistical approach, and here introduce an algorithm based on the (pairwise) mutual information. This induces an order over workers that is well structured for the `one coin’ model. More generally, it is justified by a goodness-of-fit measure and is validated empirically. Improvement in real data sets can be substantial.
Tasks Feature Selection
Published 2017-11-25
URL http://arxiv.org/abs/1711.09195v1
PDF http://arxiv.org/pdf/1711.09195v1.pdf
PWC https://paperswithcode.com/paper/feature-selection-facilitates-learning
Repo
Framework
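
A hedged sketch of the pairwise-mutual-information idea described in this entry: each worker is scored by the total mutual information between its answers and those of the other workers, inducing an order in which unreliable workers can be dropped before fitting the mixture. The cutoff in the usage example is illustrative; the paper's goodness-of-fit criterion is not reproduced.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def rank_workers(answers):
    """answers: [n_items, n_workers] matrix of discrete labels. Returns worker indices, best first."""
    n_workers = answers.shape[1]
    score = np.zeros(n_workers)
    for i in range(n_workers):
        for j in range(n_workers):
            if i != j:
                score[i] += mutual_info_score(answers[:, i], answers[:, j])
    return np.argsort(-score)        # most informative workers first

# Example: keep the top half of workers before learning the mixture.
# answers = np.random.randint(0, 2, size=(500, 10))
# kept = rank_workers(answers)[: answers.shape[1] // 2]
```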

Robust Particle Swarm Optimizer based on Chemomimicry

Title Robust Particle Swarm Optimizer based on Chemomimicry
Authors Casey Kneale, Karl S. Booksh
Abstract A particle swarm optimizer (PSO) loosely based on the phenomena of crystallization and a chaos factor which follows the complementary error function is described. The method features three phases: diffusion, directed motion, and nucleation. During the diffusion phase, random walk is the only contributor to particle motion. As the algorithm progresses, the contribution from chaos decreases and movement toward global best locations is pursued until convergence has occurred. The algorithm was found to be more robust to local minima in multimodal test functions than a standard PSO algorithm and is designed for problems which feature experimental precision.
Tasks
Published 2017-02-03
URL http://arxiv.org/abs/1702.00993v2
PDF http://arxiv.org/pdf/1702.00993v2.pdf
PWC https://paperswithcode.com/paper/robust-particle-swarm-optimizer-based-on
Repo
Framework
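
An illustrative sketch of the three-phase idea in this entry: particle motion mixes a random-walk (chaos) term whose weight decays along the complementary error function with a directed term toward the global best. The constants and the convergence handling are simplified assumptions, not the paper's values.

```python
import numpy as np
from scipy.special import erfc

def chemomimetic_pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    best_x = x[np.argmin([objective(p) for p in x])].copy()
    best_f = objective(best_x)
    for t in range(iters):
        chaos = erfc(4.0 * t / iters)                  # decays from ~1 (diffusion) toward 0
        step = (chaos * rng.normal(0, 1, x.shape)      # random-walk contribution
                + (1.0 - chaos) * rng.uniform(0, 1, x.shape) * (best_x - x))
        x = np.clip(x + step, lo, hi)
        for p in x:
            f = objective(p)
            if f < best_f:
                best_f, best_x = f, p.copy()
    return best_x, best_f

# Example on a multimodal test function (Rastrigin):
# rastrigin = lambda v: 10 * len(v) + np.sum(v**2 - 10 * np.cos(2 * np.pi * v))
# print(chemomimetic_pso(rastrigin, dim=2))
```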

Deep Discrete Hashing with Self-supervised Pairwise Labels

Title Deep Discrete Hashing with Self-supervised Pairwise Labels
Authors Jingkuan Song, Tao He, Hangbo Fan, Lianli Gao
Abstract Hashing methods have been widely used for applications of large-scale image retrieval and classification. Non-deep hashing methods using handcrafted features have been significantly outperformed by deep hashing methods due to their better feature representation and end-to-end learning framework. However, the most striking successes in deep hashing have mostly involved discriminative models, which require labels. In this paper, we propose a novel unsupervised deep hashing method, named Deep Discrete Hashing (DDH), for large-scale image retrieval and classification. In the proposed framework, we address two main problems: 1) how to directly learn discrete binary codes? 2) how to equip the binary representation with the ability of accurate image retrieval and classification in an unsupervised way? We resolve these problems by introducing an intermediate variable and a loss function steering the learning process, which is based on the neighborhood structure in the original space. Experimental results on standard datasets (CIFAR-10, NUS-WIDE, and Oxford-17) demonstrate that our DDH significantly outperforms existing hashing methods by a large margin in terms of mAP for image retrieval and object recognition. Code is available at https://github.com/htconquer/ddh.
Tasks Image Retrieval, Object Recognition
Published 2017-07-07
URL http://arxiv.org/abs/1707.02112v1
PDF http://arxiv.org/pdf/1707.02112v1.pdf
PWC https://paperswithcode.com/paper/deep-discrete-hashing-with-self-supervised
Repo
Framework
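
A hedged sketch of the self-supervised pairwise labels referenced above: a similarity matrix S is built from the neighbourhood structure of features, with S_ij = 1 for k-nearest neighbours and -1 otherwise, and a pairwise loss encourages the (relaxed) binary codes to reproduce it. The network, the intermediate variable, and the discrete optimisation of DDH are not shown; see the linked repository for the authors' code.

```python
import numpy as np
from scipy.spatial.distance import cdist

def pairwise_labels(features, k=10):
    """Build a {-1, +1} similarity matrix from the k-nearest-neighbour structure of `features`."""
    D = cdist(features, features)
    nn = np.argsort(D, axis=1)[:, 1:k + 1]
    S = -np.ones((len(features), len(features)))
    for i, nbrs in enumerate(nn):
        S[i, nbrs] = 1.0
    np.fill_diagonal(S, 1.0)
    return np.maximum(S, S.T)            # symmetrise: neighbours in either direction count

def pairwise_code_loss(B, S):
    """B: [n, bits] relaxed codes in [-1, 1]; encourage B B^T / bits to match S."""
    return float(np.mean((B @ B.T / B.shape[1] - S) ** 2))
```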

Machine Learning Based Source Code Classification Using Syntax Oriented Features

Title Machine Learning Based Source Code Classification Using Syntax Oriented Features
Authors Shaul Zevin, Catherine Holzem
Abstract As of today, the programming language of the vast majority of published source code is either manually specified or programmatically assigned based on the file extension alone. In this paper we show that the source code programming language identification task can be fully automated using machine learning techniques. We first define the criteria that a production-level automatic programming language identification solution should meet. Our criteria include accuracy, programming language coverage, extensibility and performance. We then describe our approach: how training files are preprocessed for extracting features that mimic grammar productions, and how these extracted grammar productions are used for the training and testing of our classifier. We achieve a 99 percent accuracy rate while classifying 29 of the most popular programming languages with a Maximum Entropy classifier.
Tasks Language Identification
Published 2017-03-04
URL http://arxiv.org/abs/1703.07638v1
PDF http://arxiv.org/pdf/1703.07638v1.pdf
PWC https://paperswithcode.com/paper/machine-learning-based-source-code
Repo
Framework
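
A small sketch of the classification setup this entry describes, with character n-gram features standing in for the paper's grammar-production-mimicking features and a multinomial logistic regression model playing the role of the Maximum Entropy classifier. The tiny training set and feature choice are illustrative only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

samples = [
    ('printf("hi\\n");', "C"),
    ('System.out.println("hi");', "Java"),
    ("print('hi')", "Python"),
    ('std::cout << "hi";', "C++"),
]
texts, labels = zip(*samples)

model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # syntax-flavoured n-gram features
    LogisticRegression(max_iter=1000),                        # maximum entropy classifier
)
model.fit(texts, labels)
print(model.predict(['printf("%d", x);']))  # expected: ['C']
```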

3D Pose Regression using Convolutional Neural Networks

Title 3D Pose Regression using Convolutional Neural Networks
Authors Siddharth Mahendran, Haider Ali, Rene Vidal
Abstract 3D pose estimation is a key component of many important computer vision tasks such as autonomous navigation and 3D scene understanding. Most state-of-the-art approaches to 3D pose estimation solve this problem as a pose-classification problem in which the pose space is discretized into bins and a CNN classifier is used to predict a pose bin. We argue that the 3D pose space is continuous and propose to solve the pose estimation problem in a CNN regression framework with a suitable representation, data augmentation and loss function that captures the geometry of the pose space. Experiments on PASCAL3D+ show that the proposed 3D pose regression approach achieves competitive performance compared to the state-of-the-art.
Tasks 3D Pose Estimation, Autonomous Navigation, Data Augmentation, Pose Estimation, Scene Understanding
Published 2017-08-18
URL http://arxiv.org/abs/1708.05628v1
PDF http://arxiv.org/pdf/1708.05628v1.pdf
PWC https://paperswithcode.com/paper/3d-pose-regression-using-convolutional-neural
Repo
Framework
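
A brief sketch of the kind of geometry-aware ingredient this entry points to: measuring rotation error with the geodesic distance on SO(3) rather than a Euclidean loss over pose bins. The CNN, the pose representation details, and the data augmentation of the paper are not reproduced here.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def geodesic_distance(R1, R2):
    """Angle (radians) of the relative rotation between two rotation matrices."""
    R = R1.T @ R2
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.arccos(cos_theta))

# Example: a prediction 10 degrees off around the z-axis.
R_true = Rotation.from_euler("z", 30, degrees=True).as_matrix()
R_pred = Rotation.from_euler("z", 40, degrees=True).as_matrix()
print(np.degrees(geodesic_distance(R_true, R_pred)))   # ~10.0
```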

A Systematic Study of Online Class Imbalance Learning with Concept Drift

Title A Systematic Study of Online Class Imbalance Learning with Concept Drift
Authors Shuo Wang, Leandro L. Minku, Xin Yao
Abstract As an emerging research topic, online class imbalance learning often combines the challenges of both class imbalance and concept drift. It deals with data streams having very skewed class distributions, where concept drift may occur. It has recently received increased research attention; however, very little work addresses the combined problem where both class imbalance and concept drift coexist. As the first systematic study of handling concept drift in class-imbalanced data streams, this paper first provides a comprehensive review of current research progress in this field, including current research focuses and open challenges. Then, an in-depth experimental study is performed, with the goal of understanding how to best overcome concept drift in online learning with class imbalance. Based on the analysis, a general guideline is proposed for the development of an effective algorithm.
Tasks
Published 2017-03-20
URL http://arxiv.org/abs/1703.06683v1
PDF http://arxiv.org/pdf/1703.06683v1.pdf
PWC https://paperswithcode.com/paper/a-systematic-study-of-online-class-imbalance
Repo
Framework