Paper Group ANR 1085
Composing Neural Algorithms with Fugu
Title | Composing Neural Algorithms with Fugu |
Authors | James B Aimone, William Severa, Craig M Vineyard |
Abstract | Neuromorphic hardware architectures represent a growing family of potential post-Moore’s Law Era platforms. Largely due to event-driven processing inspired by the human brain, these computing platforms can offer significant energy benefits compared to traditional von Neumann processors. Unfortunately, there still remains considerable difficulty in successfully programming, configuring and deploying neuromorphic systems. We present the Fugu framework as an answer to this need. Rather than requiring a developer to attain intricate knowledge of how to program and exploit spiking neural dynamics to realize the potential benefits of neuromorphic computing, Fugu is designed to provide a higher-level abstraction as a hardware-independent mechanism for linking a variety of scalable spiking neural algorithms from a variety of sources. Individual kernels linked together provide sophisticated processing through compositionality. Fugu is intended to be suitable for a wide range of neuromorphic applications, including machine learning, scientific computing, and more brain-inspired neural algorithms. Ultimately, we hope the community adopts this and other open standardization attempts, allowing for the free exchange and easy implementation of the ever-growing list of spiking neural algorithms. |
Tasks | |
Published | 2019-05-28 |
URL | https://arxiv.org/abs/1905.12130v1 |
https://arxiv.org/pdf/1905.12130v1.pdf | |
PWC | https://paperswithcode.com/paper/composing-neural-algorithms-with-fugu |
Repo | |
Framework | |
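The compositionality described in the abstract can be pictured as small self-contained spiking kernels ("bricks") wired into one hardware-independent network description. The sketch below illustrates that idea only; the class and function names are hypothetical and are not the actual Fugu API.

```python
# Minimal sketch of the compositional idea: each "brick" is a self-contained
# spiking kernel exposing named input/output neurons, and compose() wires bricks
# into one hardware-independent network description. Names are illustrative.
class Brick:
    def __init__(self, name, n_in, n_out):
        self.inputs = [f"{name}/in{i}" for i in range(n_in)]
        self.outputs = [f"{name}/out{i}" for i in range(n_out)]

    def local_synapses(self):
        # Stand-in internal wiring: every input neuron drives every output neuron.
        return [(i, o, {"weight": 1.0, "delay": 1})
                for i in self.inputs for o in self.outputs]

def compose(bricks, connections):
    """Return (neurons, synapses) for a set of bricks wired output-to-input."""
    neurons, synapses = [], []
    for b in bricks:
        neurons += b.inputs + b.outputs
        synapses += b.local_synapses()
    for src, dst in connections:
        synapses += [(o, i, {"weight": 1.0, "delay": 1})
                     for o, i in zip(src.outputs, dst.inputs)]
    return neurons, synapses

# Example: feed a 4-input "sort" kernel into a "max" kernel.
sort_brick, max_brick = Brick("sort", 4, 4), Brick("max", 4, 1)
neurons, synapses = compose([sort_brick, max_brick], [(sort_brick, max_brick)])
print(len(neurons), "neurons,", len(synapses), "synapses")
```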
Capturing Object Detection Uncertainty in Multi-Layer Grid Maps
Title | Capturing Object Detection Uncertainty in Multi-Layer Grid Maps |
Authors | Sascha Wirges, Marcel Reith-Braun, Martin Lauer, Christoph Stiller |
Abstract | We propose a deep convolutional object detector for automated driving applications that also estimates classification, pose and shape uncertainty for each detected object. The input consists of a multi-layer grid map, which is well suited for sensor fusion, free-space estimation and machine learning. Based on the estimated pose and shape uncertainty, we approximate object hulls with bounded collision probability, which we find helpful for subsequent trajectory planning tasks. We train our models on the KITTI object detection data set. In a quantitative and qualitative evaluation, some models show similar performance and superior robustness compared to previously developed object detectors. However, our evaluation also points to undesired data set properties which should be addressed when training data-driven models or creating new data sets. |
Tasks | Object Detection, Sensor Fusion |
Published | 2019-01-31 |
URL | http://arxiv.org/abs/1901.11284v1 |
http://arxiv.org/pdf/1901.11284v1.pdf | |
PWC | https://paperswithcode.com/paper/capturing-object-detection-uncertainty-in |
Repo | |
Framework | |
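One way estimated pose uncertainty can yield "object hulls with bounded collision probability" is to inflate each detected box by a confidence region of its position estimate. The sketch below illustrates that construction under a Gaussian-position assumption; it is not necessarily the paper's exact method.

```python
# Hedged sketch: inflate a detected box by its position uncertainty so the
# resulting hull contains the true object with probability >= 1 - p_miss.
import numpy as np
from scipy.stats import chi2

def inflated_hull(center, length, width, yaw, pos_cov, p_miss=0.05):
    """Return 4 hull corners covering the (1 - p_miss) position confidence region."""
    # Confidence-region scaling for a 2D Gaussian (chi-square with 2 dof).
    k = chi2.ppf(1.0 - p_miss, df=2)
    # Rotate the covariance into the object frame and take per-axis margins.
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    cov_obj = R.T @ pos_cov @ R
    margin = np.sqrt(k * np.diag(cov_obj))          # along object x / y axes
    half = np.array([length / 2 + margin[0], width / 2 + margin[1]])
    corners_obj = np.array([[ half[0],  half[1]], [ half[0], -half[1]],
                            [-half[0], -half[1]], [-half[0],  half[1]]])
    return corners_obj @ R.T + center               # back to the world frame

hull = inflated_hull(np.array([10.0, 2.0]), 4.5, 1.8, 0.3,
                     np.array([[0.09, 0.0], [0.0, 0.04]]))
print(hull.round(2))
```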
Towards automated symptoms assessment in mental health
Title | Towards automated symptoms assessment in mental health |
Authors | Maxim Osipov |
Abstract | Activity and motion analysis has the potential to be used as a diagnostic tool for mental disorders. However, to date, little work has been performed on turning stratification measures of activity into useful symptom markers. The research presented in this thesis has focused on the identification of objective activity and behaviour metrics that could be useful for the analysis of mental health symptoms in the dimensions mentioned above. Particular attention is given to the analysis of objective differences between disorders, as well as the identification of clinical episodes of mania and depression in bipolar patients, and deterioration in borderline personality disorder patients. A principled framework is proposed for mHealth monitoring of psychiatric patients, based on measurable changes in behaviour, represented in physical activity time series collected via mobile and wearable devices. The framework defines methods for direct computational analysis of symptoms in the disorganisation and psychomotor dimensions, as well as measures for indirect assessment of mood, using patterns of physical activity, sleep and circadian rhythms. The approach of computational behaviour analysis proposed in this thesis has the potential for early identification of clinical deterioration in ambulatory patients, and allows for the specification of distinct and measurable behavioural phenotypes, thus enabling better understanding and treatment of mental disorders. |
Tasks | Time Series |
Published | 2019-08-14 |
URL | https://arxiv.org/abs/1908.06013v1 |
https://arxiv.org/pdf/1908.06013v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-automated-symptoms-assessment-in |
Repo | |
Framework | |
Deep Neural Networks with Auxiliary-Model Regulated Gating for Resilient Multi-Modal Sensor Fusion
Title | Deep Neural Networks with Auxiliary-Model Regulated Gating for Resilient Multi-Modal Sensor Fusion |
Authors | Chenye Zhao, Myung Seok Shim, Yang Li, Xuchong Zhang, Peng Li |
Abstract | Deep neural networks allow for fusion of high-level features from multiple modalities and have become a promising end-to-end solution for multi-modal sensor fusion. While recently proposed gating architectures improve the conventional fusion mechanisms employed in CNNs, these models are not always resilient, particularly in the presence of sensor failures. This paper shows that the existing gating architectures fail to robustly learn the fusion weights that critically gate different modalities, leading to the issue of fusion weight inconsistency. We propose a new gating architecture that incorporates an auxiliary model to regularize the main model such that the fusion weight for each sensory modality can be robustly learned. As a result, this new auxiliary-model regulated architecture and its variants outperform the existing non-gating and gating fusion architectures under both clean and corrupted sensory inputs resulting from sensor failures. The performance gains are particularly significant in the latter case. |
Tasks | Sensor Fusion |
Published | 2019-01-29 |
URL | http://arxiv.org/abs/1901.10610v1 |
http://arxiv.org/pdf/1901.10610v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-neural-networks-with-auxiliary-model |
Repo | |
Framework | |
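The gating-with-auxiliary-regularization idea can be sketched as follows: a gate network produces per-modality fusion weights, and per-modality auxiliary heads provide a confidence signal the gate is regularized towards. The architecture and loss weighting below are illustrative assumptions, not the paper's exact design.

```python
# Illustrative PyTorch sketch of gated multi-modal fusion with an auxiliary
# model regularizing the fusion weights; not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFusion(nn.Module):
    def __init__(self, dims, feat_dim=64, n_classes=10):
        super().__init__()
        self.encoders = nn.ModuleList([nn.Linear(d, feat_dim) for d in dims])
        self.gate = nn.Linear(feat_dim * len(dims), len(dims))
        self.main_head = nn.Linear(feat_dim, n_classes)
        # Auxiliary per-modality heads used only to regularize the gate.
        self.aux_heads = nn.ModuleList([nn.Linear(feat_dim, n_classes) for _ in dims])

    def forward(self, xs, y=None):
        feats = [F.relu(enc(x)) for enc, x in zip(self.encoders, xs)]
        w = F.softmax(self.gate(torch.cat(feats, dim=1)), dim=1)   # fusion weights
        fused = sum(w[:, i:i + 1] * f for i, f in enumerate(feats))
        logits = self.main_head(fused)
        if y is None:
            return logits
        loss = F.cross_entropy(logits, y)
        # Auxiliary model: per-modality confidence acts as a target for the gate.
        aux_logits = [head(f) for head, f in zip(self.aux_heads, feats)]
        aux_conf = torch.stack([F.softmax(l, 1).gather(1, y[:, None]).squeeze(1)
                                for l in aux_logits], dim=1)
        target_w = aux_conf / aux_conf.sum(dim=1, keepdim=True)
        loss = loss + sum(F.cross_entropy(l, y) for l in aux_logits)
        loss = loss + F.mse_loss(w, target_w.detach())             # gate regularization
        return logits, loss

model = GatedFusion(dims=[32, 48])
xs = [torch.randn(8, 32), torch.randn(8, 48)]
logits, loss = model(xs, torch.randint(0, 10, (8,)))
```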
BayesNAS: A Bayesian Approach for Neural Architecture Search
Title | BayesNAS: A Bayesian Approach for Neural Architecture Search |
Authors | Hongpeng Zhou, Minghao Yang, Jun Wang, Wei Pan |
Abstract | One-Shot Neural Architecture Search (NAS) is a promising method to significantly reduce search time without any separate training. It can be treated as a network compression problem on the architecture parameters of an over-parameterized network. However, there are two issues associated with most one-shot NAS methods. First, dependencies between a node and its predecessors and successors are often disregarded, which results in improper treatment of zero operations. Second, pruning architecture parameters based on their magnitude is questionable. In this paper, we employ the classic Bayesian learning approach to alleviate these two issues by modeling architecture parameters using hierarchical automatic relevance determination (HARD) priors. Unlike other NAS methods, we train the over-parameterized network for only one epoch and then update the architecture. Impressively, this enabled us to find the architecture on CIFAR-10 within only 0.2 GPU days using a single GPU. Competitive performance can also be achieved by transferring to ImageNet. As a byproduct, our approach can be applied directly to compress convolutional neural networks by enforcing structural sparsity, achieving extremely sparse networks without accuracy deterioration. |
Tasks | Neural Architecture Search |
Published | 2019-05-13 |
URL | https://arxiv.org/abs/1905.04919v2 |
https://arxiv.org/pdf/1905.04919v2.pdf | |
PWC | https://paperswithcode.com/paper/bayesnas-a-bayesian-approach-for-neural |
Repo | |
Framework | |
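The contrast the abstract draws, pruning by learned relevance rather than by magnitude, can be illustrated with a classic automatic relevance determination loop. The toy below applies ARD to a linear model whose columns stand in for candidate operations; it only shows how per-parameter precisions, not weight magnitudes, decide what gets pruned, and is not the BayesNAS algorithm itself.

```python
# Toy sketch of automatic relevance determination (ARD): candidate operation
# weights are pruned by their learned prior precision, not by raw magnitude.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 8                       # d candidate "operations", 3 truly useful
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.8, 0, 0, 0, 0, 0])
y = X @ w_true + 0.1 * rng.normal(size=n)

alpha = np.ones(d)                  # per-weight prior precisions
beta = 100.0                        # noise precision (assumed known here)
for _ in range(50):
    # Posterior over weights given the current precisions.
    S = np.linalg.inv(beta * X.T @ X + np.diag(alpha))
    mu = beta * S @ X.T @ y
    # MacKay-style relevance update: large alpha => weight pruned to ~0.
    gamma = 1.0 - alpha * np.diag(S)
    alpha = gamma / (mu ** 2 + 1e-12)

keep = alpha < 1e3                  # prune by relevance, not |mu|
print("posterior mean:", mu.round(2))
print("kept operations:", np.where(keep)[0])
```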
Polynomial Neural Networks and Taylor maps for Dynamical Systems Simulation and Learning
Title | Polynomial Neural Networks and Taylor maps for Dynamical Systems Simulation and Learning |
Authors | Andrei Ivanov, Anna Golovkina, Uwe Iben |
Abstract | The connection between Taylor maps and polynomial neural networks (PNN) for solving ordinary differential equations (ODEs) numerically is considered. Given a system of ODEs, it is possible to calculate the weights of a PNN that simulates the dynamics of these equations. It is shown that the proposed PNN architecture can provide better accuracy with less computational time in comparison with traditional numerical solvers. Moreover, a neural network derived from the ODEs can be used to simulate the system dynamics with different initial conditions, without any training procedure. On the other hand, if the equations are unknown, the weights of the PNN can be fitted in a data-driven way. In the paper we describe the connection between PNNs and differential equations theoretically, along with examples of both dynamics simulation and learning from data. |
Tasks | |
Published | 2019-12-19 |
URL | https://arxiv.org/abs/1912.09986v1 |
https://arxiv.org/pdf/1912.09986v1.pdf | |
PWC | https://paperswithcode.com/paper/polynomial-neural-networks-and-taylor-maps |
Repo | |
Framework | |
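The data-driven case described above amounts to fitting a polynomial (Taylor) map from state to next state and then iterating it. A minimal sketch, assuming a second-order map fitted by least squares on trajectories of a simple damped oscillator:

```python
# Sketch of the data-driven case: fit a second-order polynomial map
# x_{k+1} = W^T [1, x, x⊗x] from trajectory data, then roll it out from a new
# initial condition without any further training.
import numpy as np

def features(x):
    # Constant, linear and quadratic monomials of the state.
    quad = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate(([1.0], x, quad))

# Generate training trajectories from a (here, known) ODE: a damped oscillator.
def step(x, dt=0.05):
    dx = np.array([x[1], -x[0] - 0.1 * x[1]])
    return x + dt * dx

rng = np.random.default_rng(1)
X, Y = [], []
for _ in range(30):
    x = rng.uniform(-1, 1, size=2)
    for _ in range(100):
        X.append(features(x)); x_next = step(x); Y.append(x_next); x = x_next
W = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)[0]   # fit map weights

# Roll out the learned polynomial map from an unseen initial condition.
x_sim, x_ref = np.array([0.7, -0.3]), np.array([0.7, -0.3])
for _ in range(100):
    x_sim, x_ref = features(x_sim) @ W, step(x_ref)
print("simulated:", x_sim.round(3), " reference:", x_ref.round(3))
```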
An end-to-end Generative Retrieval Method for Sponsored Search Engine – Decoding Efficiently into a Closed Target Domain
Title | An end-to-end Generative Retrieval Method for Sponsored Search Engine – Decoding Efficiently into a Closed Target Domain |
Authors | Yijiang Lian, Zhijie Chen, Jinlong Hu, Kefeng Zhang, Chunwei Yan, Muchenxuan Tong, Wenying Han, Hanju Guan, Ying Li, Ying Cao, Yang Yu, Zhigang Li, Xiaochun Liu, Yue Wang |
Abstract | In this paper, we present a generative retrieval method for a sponsored search engine, which uses neural machine translation (NMT) to generate keywords directly from the query. This method is completely end-to-end, skipping the query rewriting and relevance judging phases of traditional retrieval systems. Different from standard machine translation, the target space in the retrieval setting is a constrained closed set, where only committed keywords should be generated. We present a Trie-based pruning technique in beam search to address this problem. The biggest challenge in deploying this method in a real industrial environment is the latency impact of running the decoder. Self-normalized training coupled with Trie-based dynamic pruning dramatically reduces the inference time, yielding a speedup of more than 20 times. We also devise a mixed online-offline serving architecture to reduce latency and CPU consumption. To encourage the NMT to generate new keywords not covered by the existing system, training data is carefully selected. This model has been successfully applied in Baidu’s commercial search engine as a supplementary retrieval branch, which has brought a remarkable revenue improvement of more than 10 percent. |
Tasks | Machine Translation |
Published | 2019-02-02 |
URL | http://arxiv.org/abs/1902.00592v2 |
http://arxiv.org/pdf/1902.00592v2.pdf | |
PWC | https://paperswithcode.com/paper/an-end-to-end-generative-retrieval-method-for |
Repo | |
Framework | |
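The Trie-based pruning can be sketched as beam search where, at every step, only tokens that extend a valid keyword prefix are expanded. The vocabulary, keyword set and uniform scoring function below are stand-ins; only the pruning logic reflects the idea described above.

```python
# Hedged sketch of Trie-constrained decoding: at every step only tokens that
# extend a committed keyword prefix are allowed, so the decoder can never
# leave the closed target set. The scoring model here is a stand-in.
import numpy as np

class TrieNode:
    def __init__(self):
        self.children, self.is_end = {}, False

def build_trie(keywords):
    root = TrieNode()
    for kw in keywords:
        node = root
        for tok in kw:
            node = node.children.setdefault(tok, TrieNode())
        node.is_end = True
    return root

def constrained_beam_search(score_fn, trie, vocab, beam=3, max_len=4):
    beams = [([], trie, 0.0)]                          # (tokens, trie node, logprob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for toks, node, lp in beams:
            logits = score_fn(toks)                    # model log-probs over vocab
            for tok, child in node.children.items():   # Trie pruning step
                new_lp = lp + logits[vocab.index(tok)]
                if child.is_end:
                    finished.append((toks + [tok], new_lp))
                candidates.append((toks + [tok], child, new_lp))
        beams = sorted(candidates, key=lambda c: -c[2])[:beam]
        if not beams:
            break
    return sorted(finished, key=lambda c: -c[1])

vocab = ["cheap", "flights", "hotels", "to", "paris"]
keywords = [["cheap", "flights"], ["cheap", "hotels"], ["flights", "to", "paris"]]
dummy_scores = lambda toks: np.log(np.ones(len(vocab)) / len(vocab))  # uniform stand-in
print(constrained_beam_search(dummy_scores, build_trie(keywords), vocab))
```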
Stochastic Primal-Dual Algorithms with Faster Convergence than $O(1/\sqrt{T})$ for Problems without Bilinear Structure
Title | Stochastic Primal-Dual Algorithms with Faster Convergence than $O(1/\sqrt{T})$ for Problems without Bilinear Structure |
Authors | Yan Yan, Yi Xu, Qihang Lin, Lijun Zhang, Tianbao Yang |
Abstract | Previous studies on stochastic primal-dual algorithms for solving min-max problems with faster convergence rely heavily on the bilinear structure of the problem, which restricts their applicability to a narrow range of problems. The main contribution of this paper is the design and analysis of new stochastic primal-dual algorithms that use a mixture of stochastic gradient updates and a logarithmic number of deterministic dual updates to solve a family of convex-concave problems with no bilinear structure assumed. Faster convergence rates than $O(1/\sqrt{T})$, with $T$ being the number of stochastic gradient updates, are established under mild conditions on the involved functions of the primal and dual variables. For example, for a family of problems that enjoy weak strong convexity in the primal variable and strong concavity in the dual variable, the convergence rate of the proposed algorithm is $O(1/T)$. We also investigate the effectiveness of the proposed algorithms for learning robust models and for empirical AUC maximization. |
Tasks | |
Published | 2019-04-23 |
URL | https://arxiv.org/abs/1904.10112v2 |
https://arxiv.org/pdf/1904.10112v2.pdf | |
PWC | https://paperswithcode.com/paper/stochastic-primal-dual-algorithms-with-faster |
Repo | |
Framework | |
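The update pattern, many cheap stochastic gradient steps on the primal variable interleaved with only a logarithmic number of deterministic dual updates, can be illustrated on a toy non-bilinear saddle-point problem. The objective and schedule below are illustrative assumptions, not the paper's algorithm or analysis.

```python
# Toy sketch of the primal-dual update pattern: cheap stochastic gradient steps
# on the primal variable x, with only O(log T) deterministic (full) updates of
# the dual variable y. Illustrative objective, not from the paper:
#   min_x max_{y>=0}  mean_i (a_i^T x - b_i)^2 / 2 + y * (||x||^2 - r) - y^2 / 2
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 500, 10, 1.0
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
x, y, eta = np.zeros(d), 0.0, 0.01

T = 5000
for t in range(1, T + 1):
    i = rng.integers(n)                                   # stochastic primal step
    grad_x = (A[i] @ x - b[i]) * A[i] + y * 2.0 * x
    x -= eta * grad_x
    if (t & (t - 1)) == 0:                                # t is a power of two:
        y = max(0.0, float(x @ x) - r)                    # exact dual maximization

print("||x||^2 =", round(float(x @ x), 3), " dual y =", round(y, 3))
```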
Learning Robotic Manipulation through Visual Planning and Acting
Title | Learning Robotic Manipulation through Visual Planning and Acting |
Authors | Angelina Wang, Thanard Kurutach, Kara Liu, Pieter Abbeel, Aviv Tamar |
Abstract | Planning for robotic manipulation requires reasoning about the changes a robot can effect on objects. When such interactions can be modelled analytically, as in domains with rigid objects, efficient planning algorithms exist. However, in both domestic and industrial domains, the objects of interest can be soft, or deformable, and hard to model analytically. For such cases, we posit that a data-driven modelling approach is more suitable. In recent years, progress in deep generative models has produced methods that learn to `imagine’ plausible images from data. Building on the recent Causal InfoGAN generative model, in this work we learn to imagine goal-directed object manipulation directly from raw image data of self-supervised interaction of the robot with the object. After learning, given a goal observation of the system, our model can generate an imagined plan – a sequence of images that transition the object into the desired goal. To execute the plan, we use it as a reference trajectory to track with a visual servoing controller, which we also learn from the data as an inverse dynamics model. In a simulated manipulation task, we show that separating the problem into visual planning and visual tracking control is more sample efficient and more interpretable than alternative data-driven approaches. We further demonstrate our approach on learning to imagine and execute in 3 environments, the final of which is deformable rope manipulation on a PR2 robot. |
Tasks | Visual Tracking |
Published | 2019-05-11 |
URL | https://arxiv.org/abs/1905.04411v1 |
https://arxiv.org/pdf/1905.04411v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-robotic-manipulation-through-visual |
Repo | |
Framework | |
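Structurally, the pipeline is: imagine a sequence of sub-goal images, then execute it with a learned inverse-dynamics controller that servos from the current observation to the next sub-goal. The sketch below uses untrained stand-in networks and a dummy environment purely to show that control loop; it is not the Causal InfoGAN model.

```python
# Structural sketch of "visual planning and acting": a generative model imagines
# a sequence of sub-goal observations, and a learned inverse-dynamics model
# supplies the action that moves the current observation toward the next one.
import torch
import torch.nn as nn

obs_dim, act_dim, plan_len = 64, 4, 5
imagine_plan = nn.Sequential(nn.Linear(2 * obs_dim, 128), nn.ReLU(),
                             nn.Linear(128, plan_len * obs_dim))
inverse_dynamics = nn.Sequential(nn.Linear(2 * obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim))

def execute(env_step, start_obs, goal_obs):
    with torch.no_grad():
        plan = imagine_plan(torch.cat([start_obs, goal_obs])).view(plan_len, obs_dim)
    obs = start_obs
    for subgoal in plan:                       # visual servoing along the plan
        with torch.no_grad():
            action = inverse_dynamics(torch.cat([obs, subgoal]))
        obs = env_step(obs, action)            # apply action, observe new frame
    return obs

# Dummy environment: the action nudges the observation directly.
env_step = lambda obs, act: obs + 0.01 * act.sum() * torch.ones_like(obs)
final = execute(env_step, torch.zeros(obs_dim), torch.ones(obs_dim))
print(final.shape)
```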
Multi-resolution Networks For Flexible Irregular Time Series Modeling (Multi-FIT)
Title | Multi-resolution Networks For Flexible Irregular Time Series Modeling (Multi-FIT) |
Authors | Bhanu Pratap Singh, Iman Deznabi, Bharath Narasimhan, Bryon Kucharski, Rheeya Uppaal, Akhila Josyula, Madalina Fiterau |
Abstract | Missing values, irregularly collected samples, and multi-resolution signals commonly occur in multivariate time series data, making predictive tasks difficult. These challenges are especially prevalent in the healthcare domain, where patients’ vital signs and electronic records are collected at different frequencies and have occasionally missing information due to imperfections in equipment or patient circumstances. Researchers have handled each of these issues differently, often handling missing data through mean value imputation and then using sequence models over the multivariate signals while ignoring the different resolutions of the signals. We propose a unified model named Multi-resolution Flexible Irregular Time series Network (Multi-FIT). The building block for Multi-FIT is the FIT network. The FIT network creates an informative dense representation at each time step using signal information such as the last observed value, the time difference since the last observed time stamp and the overall mean for the signal. Vertical FIT (FIT-V) is a variant of FIT which also models the relationship between different temporal signals while creating the informative dense representations for the signal. The Multi-FIT model uses multiple FIT networks for sets of signals with different resolutions, further facilitating the construction of flexible representations. Our model makes three main contributions: (a) it does not impute values but rather creates informative representations, giving the model flexibility to create task-specific representations; (b) it models the relationship between different signals in the form of support signals; and (c) it models different resolutions in parallel before merging them for the final prediction task. The FIT, FIT-V and Multi-FIT networks improve upon state-of-the-art models for three predictive tasks, including the forecasting of patient survival. |
Tasks | Imputation, Irregular Time Series, Time Series |
Published | 2019-04-30 |
URL | http://arxiv.org/abs/1905.00125v1 |
http://arxiv.org/pdf/1905.00125v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-resolution-networks-for-flexible |
Repo | |
Framework | |
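The FIT-style per-time-step representation described above, last observed value, time since last observation and overall signal mean in place of an imputed value, can be computed as follows. This is a sketch of the feature construction only, not the full Multi-FIT network.

```python
# Sketch of the dense per-time-step features: for each signal, emit
# (last observed value, time since last observation, overall signal mean)
# instead of imputing the missing value itself.
import numpy as np

def fit_representation(times, values):
    """times: (T,); values: (T, S) with NaN where a signal was not observed."""
    T, S = values.shape
    means = np.nanmean(values, axis=0)
    last_val, last_time = means.copy(), np.full(S, times[0])
    feats = np.zeros((T, S, 3))
    for t in range(T):
        observed = ~np.isnan(values[t])
        last_val[observed] = values[t, observed]
        last_time[observed] = times[t]
        feats[t, :, 0] = last_val                 # last observed value (or mean)
        feats[t, :, 1] = times[t] - last_time     # time since last observation
        feats[t, :, 2] = means                    # overall signal mean
    return feats

times = np.array([0.0, 0.5, 2.0, 3.5])
values = np.array([[1.0, np.nan], [np.nan, 4.0], [2.0, np.nan], [np.nan, np.nan]])
print(fit_representation(times, values)[:, :, :2])
```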
BIT: Biologically Inspired Tracker
Title | BIT: Biologically Inspired Tracker |
Authors | Bolun Cai, Xiangmin Xu, Xiaofen Xing, Kui Jia, Jie Miao, Dacheng Tao |
Abstract | Visual tracking is challenging due to image variations caused by various factors, such as object deformation, scale change, illumination change and occlusion. Given the superior tracking performance of the human visual system (HVS), an ideal design of a biologically inspired model is expected to improve computer visual tracking. This is, however, a difficult task due to the incomplete understanding of neurons’ working mechanisms in the HVS. This paper aims to address this challenge based on an analysis of the visual cognitive mechanism of the ventral stream in the visual cortex, which simulates shallow neurons (S1 units and C1 units) to extract low-level biologically inspired features for the target appearance and imitates an advanced learning mechanism (S2 units and C2 units) to combine generative and discriminative models for target location. In addition, fast Gabor approximation (FGA) and the fast Fourier transform (FFT) are adopted for real-time learning and detection in this framework. Extensive experiments on large-scale benchmark datasets show that the proposed biologically inspired tracker performs favorably against state-of-the-art methods in terms of efficiency, accuracy, and robustness. The acceleration technique in particular ensures that BIT maintains a speed of approximately 45 frames per second. |
Tasks | Visual Tracking |
Published | 2019-04-23 |
URL | http://arxiv.org/abs/1904.10411v1 |
http://arxiv.org/pdf/1904.10411v1.pdf | |
PWC | https://paperswithcode.com/paper/bit-biologically-inspired-tracker |
Repo | |
Framework | |
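The FFT acceleration mentioned in the abstract replaces spatial sliding-window correlation with a frequency-domain product. A minimal matched-filter sketch of that speed-up mechanism (not the full BIT pipeline):

```python
# Sketch of FFT-accelerated detection: correlating a target template with a
# search window in the frequency domain instead of sliding it spatially.
import numpy as np

def fft_correlate(search, template):
    """Locate a template in a search window via circular cross-correlation."""
    pad = np.zeros_like(search)
    pad[:template.shape[0], :template.shape[1]] = template
    score = np.real(np.fft.ifft2(np.fft.fft2(search) * np.conj(np.fft.fft2(pad))))
    return np.unravel_index(np.argmax(score), score.shape)

rng = np.random.default_rng(0)
search = rng.normal(size=(64, 64))
template = search[20:36, 30:46].copy()        # ground-truth location (20, 30)
print(fft_correlate(search, template))        # expected: (20, 30)
```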
Robust guarantees for learning an autoregressive filter
Title | Robust guarantees for learning an autoregressive filter |
Authors | Holden Lee, Cyril Zhang |
Abstract | The optimal predictor for a linear dynamical system (with hidden state and Gaussian noise) takes the form of an autoregressive linear filter, namely the Kalman filter. However, a fundamental problem in reinforcement learning and control theory is to make optimal predictions in an unknown dynamical system. To this end, we take the approach of directly learning an autoregressive filter for time-series prediction under unknown dynamics. Our analysis differs from previous statistical analyses in that we regress not only on the inputs to the dynamical system, but also on the outputs, which is essential to dealing with process noise. The main challenge is to estimate the filter under worst-case input (in the $\mathcal{H}_\infty$ norm), for which we use an $L^\infty$-based objective rather than ordinary least squares. For learning an autoregressive model, our algorithm has optimal sample complexity in terms of the rollout length, which does not seem to be attained by naive least squares. |
Tasks | Time Series, Time Series Prediction |
Published | 2019-05-23 |
URL | https://arxiv.org/abs/1905.09897v1 |
https://arxiv.org/pdf/1905.09897v1.pdf | |
PWC | https://paperswithcode.com/paper/robust-guarantees-for-learning-an |
Repo | |
Framework | |
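The contrast with ordinary least squares is that the filter coefficients are chosen to minimize the worst-case (maximum) residual, which can be cast as a small linear program. A sketch, assuming a simple single-input single-output autoregression of order 4 that regresses on past outputs as well as inputs:

```python
# Sketch of an L∞ (minimax-residual) fit of an autoregressive filter via a
# linear program, in contrast to the ordinary least-squares fit.
import numpy as np
from scipy.optimize import linprog

def fit_ar_linf(u, y, p=4):
    """Fit y_t ≈ sum_j a_j y_{t-j} + sum_j b_j u_{t-j}, minimizing max |residual|."""
    rows, targets = [], []
    for t in range(p, len(y)):
        rows.append(np.concatenate([y[t - p:t][::-1], u[t - p:t][::-1]]))
        targets.append(y[t])
    Phi, Y = np.array(rows), np.array(targets)
    n, d = Phi.shape
    # Variables z = [w (d), s]; minimize s subject to |Y - Phi w| <= s.
    c = np.concatenate([np.zeros(d), [1.0]])
    A_ub = np.vstack([np.hstack([ Phi, -np.ones((n, 1))]),
                      np.hstack([-Phi, -np.ones((n, 1))])])
    b_ub = np.concatenate([Y, -Y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * d + [(0, None)])
    return res.x[:d], res.x[d]

rng = np.random.default_rng(0)
u = rng.normal(size=400)
y = np.zeros(400)
for t in range(2, 400):                       # a simple noisy second-order system
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + u[t - 1] + 0.05 * rng.normal()
w, worst = fit_ar_linf(u, y)
print(w.round(2), round(worst, 3))
```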
BlurNet: Defense by Filtering the Feature Maps
Title | BlurNet: Defense by Filtering the Feature Maps |
Authors | Ravi Raju, Mikko Lipasti |
Abstract | Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. Adversarial examples are generated by a malicious adversary either by obtaining access to the model parameters, such as gradient information, to alter the input, or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations ($RP_2$), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the $RP_2$ attack. First, we motivate the defense with a frequency analysis of the first-layer feature maps of the network on the LISA dataset, demonstrating that high-frequency noise is introduced into the input image by the $RP_2$ algorithm. To attenuate this high-frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. Finally, we present a regularization scheme to incorporate this low-pass filtering behavior into the training regime of the network. |
Tasks | |
Published | 2019-08-06 |
URL | https://arxiv.org/abs/1908.02256v1 |
https://arxiv.org/pdf/1908.02256v1.pdf | |
PWC | https://paperswithcode.com/paper/blurnet-defense-by-filtering-the-feature-maps |
Repo | |
Framework | |
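The defense itself, a fixed depthwise low-pass blur inserted after the first layer, is straightforward to write down. A minimal PyTorch sketch with a uniform blur kernel and an illustrative small classifier around it (the surrounding architecture and class count are assumptions, not the paper's network):

```python
# Minimal sketch of the defense: a fixed depthwise blur (low-pass) convolution
# inserted after the first layer to suppress high-frequency perturbations.
import torch
import torch.nn as nn

class DepthwiseBlur(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.blur = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=channels, bias=False)
        weight = torch.full((channels, 1, kernel_size, kernel_size),
                            1.0 / kernel_size ** 2)          # uniform blur kernel
        self.blur.weight = nn.Parameter(weight, requires_grad=False)

    def forward(self, x):
        return self.blur(x)

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    DepthwiseBlur(32),                  # low-pass filter on first-layer feature maps
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 16),  # e.g. 16 sign classes
)
print(model(torch.randn(2, 3, 32, 32)).shape)   # torch.Size([2, 16])
```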
A Multi-view Dimensionality Reduction Algorithm Based on Smooth Representation Model
Title | A Multi-view Dimensionality Reduction Algorithm Based on Smooth Representation Model |
Authors | Haohao Li, Huibing Wang |
Abstract | Over the past few decades, we have witnessed a large family of algorithms designed to provide different solutions to the problem of dimensionality reduction (DR). DR is an essential tool for extracting the important information from high-dimensional data by mapping the data to a low-dimensional subspace. Furthermore, given the diversity of varied high-dimensional data, multi-view features can be utilized to improve learning performance. However, many DR methods fail to integrate multiple views. Although the features from different views are extracted in different manners, they are utilized to describe the same sample, which implies that they are highly related. Therefore, how to learn a subspace for high-dimensional features by utilizing the consistency and complementary properties of multi-view features is an important current problem. In this paper, we propose an effective multi-view dimensionality reduction algorithm named Multi-view Smooth Preserve Projection. Firstly, we construct a single-view DR method named Smooth Preserve Projection based on the Smooth Representation model. The proposed method aims to find a subspace for the high-dimensional data in which the smooth reconstructive weights are preserved as much as possible. Then, we extend it to a multi-view version in which we exploit the Hilbert-Schmidt Independence Criterion to jointly learn one common subspace for all views. Extensive experiments on multi-view datasets show the excellent performance of the proposed method. |
Tasks | Dimensionality Reduction |
Published | 2019-10-10 |
URL | https://arxiv.org/abs/1910.04439v2 |
https://arxiv.org/pdf/1910.04439v2.pdf | |
PWC | https://paperswithcode.com/paper/a-multi-view-dimensionality-reduction |
Repo | |
Framework | |
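The Hilbert-Schmidt Independence Criterion used to couple the per-view subspaces has the standard empirical form $\mathrm{HSIC}(X,Z) = \mathrm{tr}(KHLH)/(n-1)^2$ with centred Gram matrices. A sketch with linear kernels on two toy views (the projection-learning step itself is omitted):

```python
# Sketch of the empirical HSIC dependence measure between two views.
import numpy as np

def hsic(X, Z):
    n = X.shape[0]
    K, L = X @ X.T, Z @ Z.T                   # linear-kernel Gram matrices
    H = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
view1 = rng.normal(size=(50, 5))
view2_dep = view1 @ rng.normal(size=(5, 3)) + 0.1 * rng.normal(size=(50, 3))
view2_ind = rng.normal(size=(50, 3))
print(hsic(view1, view2_dep), ">", hsic(view1, view2_ind))   # dependence scores higher
```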
Hacking Google reCAPTCHA v3 using Reinforcement Learning
Title | Hacking Google reCAPTCHA v3 using Reinforcement Learning |
Authors | Ismail Akrout, Amal Feriani, Mohamed Akrout |
Abstract | We present a Reinforcement Learning (RL) methodology to bypass Google reCAPTCHA v3. We formulate the problem as a grid world where the agent learns how to move the mouse and click on the reCAPTCHA button to receive a high score. We study the performance of the agent when we vary the cell size of the grid world and show that the performance drops when the agent takes big steps toward the goal. Finally, we use a divide-and-conquer strategy to defeat the reCAPTCHA system for any grid resolution. Our proposed method achieves a success rate of 97.4% on a 100x100 grid and 96.7% on a 1000x1000 screen resolution. |
Tasks | |
Published | 2019-03-03 |
URL | http://arxiv.org/abs/1903.01003v3 |
http://arxiv.org/pdf/1903.01003v3.pdf | |
PWC | https://paperswithcode.com/paper/hacking-google-recaptcha-v3-using |
Repo | |
Framework | |
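The grid-world formulation can be mocked up with plain tabular Q-learning: the agent moves a cursor on an N x N grid and is rewarded for reaching the cell containing the button. The grid size, rewards and hyperparameters below are arbitrary choices for illustration, not the paper's setup.

```python
# Toy sketch of the grid-world formulation with tabular Q-learning standing in
# for the paper's RL agent: move the cursor from the corner to the button cell.
import numpy as np

N, goal, actions = 10, (9, 9), [(-1, 0), (1, 0), (0, -1), (0, 1)]
Q = np.zeros((N, N, len(actions)))
rng = np.random.default_rng(0)

for episode in range(2000):
    pos = (0, 0)
    for _ in range(4 * N):
        a = rng.integers(4) if rng.random() < 0.1 else int(np.argmax(Q[pos]))
        nxt = (min(max(pos[0] + actions[a][0], 0), N - 1),
               min(max(pos[1] + actions[a][1], 0), N - 1))
        reward = 1.0 if nxt == goal else -0.01
        Q[pos][a] += 0.1 * (reward + 0.95 * Q[nxt].max() - Q[pos][a])
        pos = nxt
        if pos == goal:
            break

# Greedy rollout after training: count steps to reach the button.
pos, steps = (0, 0), 0
while pos != goal and steps < 4 * N:
    a = int(np.argmax(Q[pos]))
    pos = (min(max(pos[0] + actions[a][0], 0), N - 1),
           min(max(pos[1] + actions[a][1], 0), N - 1))
    steps += 1
print("steps to button:", steps)     # the shortest path on a 10x10 grid is 18
```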