October 19, 2019

3496 words 17 mins read

Paper Group ANR 320



VirtualHome: Simulating Household Activities via Programs

Title VirtualHome: Simulating Household Activities via Programs
Authors Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, Antonio Torralba
Abstract In this paper, we are interested in modeling complex activities that occur in a typical household. We propose to use programs, i.e., sequences of atomic actions and interactions, as a high level representation of complex tasks. Programs are interesting because they provide a non-ambiguous representation of a task, and allow agents to execute them. However, nowadays, there is no database providing this type of information. Towards this goal, we first crowd-source programs for a variety of activities that happen in people’s homes, via a game-like interface used for teaching kids how to code. Using the collected dataset, we show how we can learn to extract programs directly from natural language descriptions or from videos. We then implement the most common atomic (inter)actions in the Unity3D game engine, and use our programs to “drive” an artificial agent to execute tasks in a simulated household environment. Our VirtualHome simulator allows us to create a large activity video dataset with rich ground-truth, enabling training and testing of video understanding models. We further showcase examples of our agent performing tasks in our VirtualHome based on language descriptions.
Tasks Video Understanding
Published 2018-06-19
URL http://arxiv.org/abs/1806.07011v1
PDF http://arxiv.org/pdf/1806.07011v1.pdf
PWC https://paperswithcode.com/paper/virtualhome-simulating-household-activities
Repo
Framework
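
The "programs" above are simply sequences of atomic (inter)actions applied to objects. Below is a minimal sketch of such a representation; the action and object names are illustrative assumptions, not the vocabulary released with the paper.

# Minimal sketch of a household-activity "program" as a sequence of atomic steps.
# Action and object names are hypothetical, not the paper's actual grammar.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Step:
    action: str                   # atomic action, e.g. "walk", "grab", "switch_on"
    obj: Optional[str] = None     # primary object the action applies to
    target: Optional[str] = None  # secondary object for interactions like "put"

watch_tv: List[Step] = [
    Step("walk", "living_room"),
    Step("walk", "remote_control"),
    Step("grab", "remote_control"),
    Step("walk", "sofa"),
    Step("sit", "sofa"),
    Step("switch_on", "television"),
]

def render(program: List[Step]) -> None:
    # Print the program in a simple script-like form an agent could execute.
    for i, s in enumerate(program, 1):
        parts = [s.action] + [x for x in (s.obj, s.target) if x]
        print(f"[{i}] " + " ".join(parts))

render(watch_tv)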

Metric Embedding Autoencoders for Unsupervised Cross-Dataset Transfer Learning

Title Metric Embedding Autoencoders for Unsupervised Cross-Dataset Transfer Learning
Authors Alexey Potapov, Sergey Rodionov, Hugo Latapie, Enzo Fenoglio
Abstract Cross-dataset transfer learning is an important problem in person re-identification (Re-ID). Unfortunately, not too many deep transfer Re-ID models exist for realistic settings of practical Re-ID systems. We propose a purely deep transfer Re-ID model consisting of a deep convolutional neural network and an autoencoder. The latent code is divided into metric embedding and nuisance variables. We then utilize an unsupervised training method that does not rely on co-training with non-deep models. Our experiments show improvements over both the baseline and competitors’ transfer learning models.
Tasks Person Re-Identification, Transfer Learning
Published 2018-07-18
URL http://arxiv.org/abs/1807.10591v1
PDF http://arxiv.org/pdf/1807.10591v1.pdf
PWC https://paperswithcode.com/paper/metric-embedding-autoencoders-for
Repo
Framework
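
A minimal sketch of the latent-code split described above, written in PyTorch; the layer sizes are assumptions, and the paper's actual model pairs the autoencoder with a deep convolutional backbone that is omitted here.

# Sketch: autoencoder whose latent code is split into a metric embedding and
# nuisance variables. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class SplitLatentAE(nn.Module):
    def __init__(self, feat_dim=2048, embed_dim=128, nuisance_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, embed_dim + nuisance_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim + nuisance_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )
        self.embed_dim = embed_dim

    def forward(self, x):
        z = self.encoder(x)
        embedding, nuisance = z[:, :self.embed_dim], z[:, self.embed_dim:]
        recon = self.decoder(z)
        return embedding, nuisance, recon

# The metric embedding would feed a metric loss (e.g. a triplet loss) on labeled
# source data, while the reconstruction loss can be trained on unlabeled target data.
model = SplitLatentAE()
emb, nui, rec = model(torch.randn(8, 2048))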

Facial Expression and Peripheral Physiology Fusion to Decode Individualized Affective Experience

Title Facial Expression and Peripheral Physiology Fusion to Decode Individualized Affective Experience
Authors Yu Yin, Mohsen Nabian, Miolin Fan, ChunAn Chou, Maria Gendron, Sarah Ostadabbas
Abstract In this paper, we present a multimodal approach to simultaneously analyze facial movements and several peripheral physiological signals to decode individualized affective experiences under positive and negative emotional contexts, while considering their personalized resting dynamics. We propose a person-specific recurrence network to quantify the dynamics present in the person’s facial movements and physiological data. Facial movement is represented using a robust head vs. 3D face landmark localization and tracking approach, and physiological data are processed by extracting known attributes related to the underlying affective experience. The dynamical coupling between different input modalities is then assessed through the extraction of several complex recurrence network metrics. Inference models are then trained using these metrics as features to predict an individual’s affective experience in a given context, after their resting dynamics are excluded from their response. We validated our approach using a multimodal dataset consisting of (i) facial videos and (ii) several peripheral physiological signals, synchronously recorded from 12 participants while watching 4 emotion-eliciting video-based stimuli. The affective experience prediction results show that our multimodal fusion method improves prediction accuracy by up to 19% compared to prediction using only one or a subset of the input modalities. Furthermore, we gained additional prediction improvement for affective experience by considering the effect of individualized resting dynamics.
Tasks
Published 2018-11-18
URL http://arxiv.org/abs/1811.07392v1
PDF http://arxiv.org/pdf/1811.07392v1.pdf
PWC https://paperswithcode.com/paper/facial-expression-and-peripheral-physiology
Repo
Framework
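
The person-specific recurrence analysis above builds on recurrence matrices of the synchronized signals. A minimal sketch of computing a recurrence matrix and one simple recurrence-quantification feature (the recurrence rate) follows; the threshold and the choice of metric are assumptions, not the paper's settings.

# Sketch: recurrence matrix of a (possibly multivariate) signal and the
# recurrence rate, one of the simpler recurrence-quantification metrics.
import numpy as np

def recurrence_matrix(x: np.ndarray, eps: float) -> np.ndarray:
    # x: array of shape (T, d) -- T time points, d channels (face + physiology).
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d <= eps).astype(np.uint8)

def recurrence_rate(R: np.ndarray) -> float:
    T = R.shape[0]
    off_diag = R.sum() - np.trace(R)   # ignore trivial self-recurrences
    return off_diag / (T * (T - 1))

rng = np.random.default_rng(0)
signal = rng.standard_normal((200, 3))   # stand-in for synchronized modalities
R = recurrence_matrix(signal, eps=1.0)
print(recurrence_rate(R))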

Automated Classification of Sleep Stages and EEG Artifacts in Mice with Deep Learning

Title Automated Classification of Sleep Stages and EEG Artifacts in Mice with Deep Learning
Authors Justus T. C. Schwabedal, Daniel Sippel, Moritz D. Brandt, Stephan Bialonski
Abstract Sleep scoring is a necessary and time-consuming task in sleep studies. In animal models (such as mice) or in humans, automating this tedious process promises to facilitate long-term studies and to promote sleep biology as a data-driven field. We introduce a deep neural network model that is able to predict different states of consciousness (Wake, Non-REM, REM) in mice from EEG and EMG recordings, with excellent scoring results for out-of-sample data. Predictions are made on epochs of 4 seconds in length, and each epoch is also classified as artifact-free or not. The model architecture draws on recent advances in deep learning and convolutional neural network research. In contrast to previous approaches to automated sleep scoring, our model does not rely on manually defined features of the data but learns predictive features automatically. We expect deep learning models like ours to become widely applied in different fields, automating many repetitive cognitive tasks that were previously difficult to tackle.
Tasks EEG, EEG Artifact Removal, Sleep Stage Detection
Published 2018-09-22
URL http://arxiv.org/abs/1809.08443v1
PDF http://arxiv.org/pdf/1809.08443v1.pdf
PWC https://paperswithcode.com/paper/automated-classification-of-sleep-stages-and
Repo
Framework
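
A minimal sketch of a 1D convolutional classifier over 4-second, two-channel (EEG + EMG) epochs; the sampling rate and layers are assumptions, not the paper's architecture, and artifacts are folded in as a fourth class here for simplicity rather than handled as a separate output.

# Sketch: 1D CNN mapping a 4-second EEG/EMG epoch to {Wake, Non-REM, REM, artifact}.
import torch
import torch.nn as nn

FS = 128                 # assumed sampling rate (Hz)
EPOCH_LEN = 4 * FS       # 4-second epochs

class SleepStager(nn.Module):
    def __init__(self, n_channels=2, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.head = nn.Linear(32 * (EPOCH_LEN // 16), n_classes)

    def forward(self, x):              # x: (batch, channels, EPOCH_LEN)
        h = self.features(x)
        return self.head(h.flatten(1))

logits = SleepStager()(torch.randn(8, 2, EPOCH_LEN))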

Temporal Difference Learning with Neural Networks - Study of the Leakage Propagation Problem

Title Temporal Difference Learning with Neural Networks - Study of the Leakage Propagation Problem
Authors Hugo Penedones, Damien Vincent, Hartmut Maennel, Sylvain Gelly, Timothy Mann, Andre Barreto
Abstract Temporal-Difference learning (TD) [Sutton, 1988] with function approximation can converge to solutions that are worse than those obtained by Monte-Carlo regression, even in the simple case of on-policy evaluation. To increase our understanding of the problem, we investigate how approximation errors in areas of sharp discontinuity of the value function are further propagated by bootstrap updates. We show empirical evidence of this leakage propagation, and show analytically that it must occur, in a simple Markov chain, when function approximation errors are present. For reversible policies, the result can be interpreted as the tension between two terms of the loss function that TD minimises, as recently described by [Ollivier, 2018]. We show that the upper bounds from [Tsitsiklis and Van Roy, 1997] hold, but they do not reveal whether leakage propagation occurs or under what conditions. Finally, we test whether the problem can be mitigated with a better state representation, and whether such a representation can be learned in an unsupervised manner, without rewards or privileged information.
Tasks
Published 2018-07-09
URL http://arxiv.org/abs/1807.03064v1
PDF http://arxiv.org/pdf/1807.03064v1.pdf
PWC https://paperswithcode.com/paper/temporal-difference-learning-with-neural
Repo
Framework
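
To make the bootstrapping mechanism concrete, here is a minimal sketch of on-policy TD(0) with linear function approximation on a small random-walk chain. It is not the paper's experimental setup, but it shows the bootstrap update through which approximation error at one state can leak into its neighbours' estimates.

# Sketch: TD(0) with linear function approximation on a symmetric random walk.
import numpy as np

n_states, gamma, alpha = 10, 1.0, 0.05
rng = np.random.default_rng(0)

# Coarse features (pairs of states share one weight) force approximation error.
phi = np.zeros((n_states, n_states // 2))
for s in range(n_states):
    phi[s, s // 2] = 1.0

w = np.zeros(n_states // 2)
for _ in range(2000):                         # episodes
    s = n_states // 2
    while 0 < s < n_states - 1:               # states 0 and n_states-1 are terminal
        s_next = s + 1 if rng.random() < 0.5 else s - 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        v_next = 0.0 if s_next in (0, n_states - 1) else phi[s_next] @ w
        # Bootstrap update: error in phi[s_next] @ w leaks into the update for s.
        w += alpha * (r + gamma * v_next - phi[s] @ w) * phi[s]
        s = s_next

print(phi @ w)   # compare against a Monte-Carlo regression onto the same features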

Blockchain to Improve Security, Knowledge and Collaboration Inter-Agent Communication over Restrict Domains of the Internet Infrastructure

Title Blockchain to Improve Security, Knowledge and Collaboration Inter-Agent Communication over Restrict Domains of the Internet Infrastructure
Authors Juliao Braga, Joao Nuno Silva, Patricia Takako Endo, Jessica Ribas, Nizam Omar
Abstract This paper describes the deployment and implementation of a blockchain to improve the security, knowledge, intelligence and collaboration of inter-agent communication processes in restricted domains of the Internet Infrastructure. The work proposes applying a platform-independent blockchain to a particular model of agents, but the approach can be used in similar proposals, since the results on the specific model were satisfactory.
Tasks
Published 2018-05-14
URL http://arxiv.org/abs/1805.05250v4
PDF http://arxiv.org/pdf/1805.05250v4.pdf
PWC https://paperswithcode.com/paper/blockchain-to-improve-security-knowledge-and
Repo
Framework

A Deep Learning Approach for Privacy Preservation in Assisted Living

Title A Deep Learning Approach for Privacy Preservation in Assisted Living
Authors Ismini Psychoula, Erinc Merdivan, Deepika Singh, Liming Chen, Feng Chen, Sten Hanke, Johannes Kropf, Andreas Holzinger, Matthieu Geist
Abstract In the era of Internet of Things (IoT) technologies, the potential for privacy invasion is becoming a major concern, especially with regard to healthcare data and Ambient Assisted Living (AAL) environments. Systems that offer AAL technologies make extensive use of personal data in order to provide services that are context-aware and personalized. This makes privacy preservation a very important issue, especially since users are not always aware of the privacy risks they could face. While much progress has been made in deep learning, there has been a lack of research on privacy preservation of sensitive personal data with the use of deep learning. In this paper we focus on a Long Short Term Memory (LSTM) Encoder-Decoder, a principal building block of deep learning, and propose a new encoding technique that allows the creation of different AAL data views, depending on the access level of the end user and the information they require access to. The efficiency and effectiveness of the proposed method are demonstrated with experiments on a simulated AAL dataset. Qualitatively, we show that the proposed model learns privacy operations such as disclosure, deletion and generalization, and can perform encoding and decoding of the data with almost perfect recovery.
Tasks
Published 2018-02-22
URL http://arxiv.org/abs/1802.09359v1
PDF http://arxiv.org/pdf/1802.09359v1.pdf
PWC https://paperswithcode.com/paper/a-deep-learning-approach-for-privacy
Repo
Framework
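
A minimal sketch of an LSTM encoder-decoder over sequences of sensor readings, with assumed feature and hidden dimensions; the paper's specific encoding scheme for access-level-dependent data views is not reproduced here.

# Sketch: LSTM encoder-decoder that compresses a sensor-event sequence into a
# latent summary and reconstructs it. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2SeqAE(nn.Module):
    def __init__(self, n_features=16, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                      # x: (batch, time, n_features)
        _, (h, c) = self.encoder(x)            # latent summary of the sequence
        # Teacher forcing with the shifted input; at deployment the decoder
        # would be fed its own previous predictions instead.
        dec_in = torch.zeros_like(x)
        dec_in[:, 1:] = x[:, :-1]
        dec_out, _ = self.decoder(dec_in, (h, c))
        return self.out(dec_out)

recon = Seq2SeqAE()(torch.randn(4, 20, 16))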

Privacy-Preserving Collaborative Deep Learning with Unreliable Participants

Title Privacy-Preserving Collaborative Deep Learning with Unreliable Participants
Authors Lingchen Zhao, Qian Wang, Qin Zou, Yan Zhang, Yanjiao Chen
Abstract With powerful parallel-computing GPUs and massive user data, neural-network-based deep learning can exert its full power in problem modeling and solving, and has achieved great success in many applications such as image classification, speech recognition and machine translation. While deep learning has become increasingly popular, the problem of privacy leakage is becoming more and more urgent. Given that the training data may contain highly sensitive information, e.g., personal medical records, directly sharing them among the users (i.e., participants) or centrally storing them in one single location may pose a considerable threat to user privacy. In this paper, we present a practical privacy-preserving collaborative deep learning system that allows users to cooperatively build a collective deep learning model with the data of all participants, without direct data sharing or central data storage. In our system, each participant trains a local model with their own data and only shares model parameters with the others. To further avoid potential privacy leakage from sharing model parameters, we use the functional mechanism to perturb the objective function of the neural network during training to achieve $\epsilon$-differential privacy. In particular, for the first time, we consider the existence of unreliable participants, i.e., participants with low-quality data, and propose a solution to reduce the impact of these participants while protecting their privacy. We evaluate the performance of our system on two well-known real-world datasets for regression and classification tasks. The results demonstrate that the proposed system is robust against unreliable participants, and achieves accuracy close to that of a model trained in the traditional centralized manner while ensuring rigorous privacy protection.
Tasks Image Classification, Machine Translation, Speech Recognition
Published 2018-12-25
URL https://arxiv.org/abs/1812.10113v3
PDF https://arxiv.org/pdf/1812.10113v3.pdf
PWC https://paperswithcode.com/paper/privacy-preserving-collaborative-deep
Repo
Framework
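
A minimal sketch of the collaborative loop in which participants train locally and share only parameters (here simply averaged); the paper's functional-mechanism perturbation of the objective for $\epsilon$-differential privacy and its weighting of unreliable participants are deliberately omitted from this sketch.

# Sketch: parameter-sharing collaborative training on a toy regression task.
# Raw data never leaves a participant; only parameters are aggregated.
import numpy as np

def local_train(w, X, y, lr=0.1, steps=50):
    # Plain least-squares gradient descent on the participant's private data.
    for _ in range(steps):
        w = w - lr * (2.0 / len(y)) * X.T @ (X @ w - y)
    return w

rng = np.random.default_rng(0)
d = 5
w_true = np.arange(1.0, d + 1)
datasets = []
for _ in range(3):                               # three participants
    X = rng.standard_normal((100, d))
    y = X @ w_true + 0.1 * rng.standard_normal(100)
    datasets.append((X, y))

w_global = np.zeros(d)
for _ in range(10):                              # communication rounds
    local_ws = [local_train(w_global.copy(), X, y) for X, y in datasets]
    w_global = np.mean(local_ws, axis=0)         # only parameters are shared

print(w_global)                                  # close to w_true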

Physical Adversarial Examples for Object Detectors

Title Physical Adversarial Examples for Object Detectors
Authors Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song
Abstract Deep neural networks (DNNs) are vulnerable to adversarial examples: maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has shown that these attacks generalize to the physical domain, creating perturbations on physical objects that fool image classifiers under a variety of real-world conditions. Such attacks pose a risk to deep learning models used in safety-critical cyber-physical systems. In this work, we extend physical attacks to the more challenging setting of object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene. Improving upon a previous physical attack on image classifiers, we create perturbed physical objects that are either ignored or mislabeled by object detection models. We implement a Disappearance Attack, in which we cause a Stop sign to “disappear” according to the detector, either by covering the sign with an adversarial Stop sign poster or by adding adversarial stickers onto the sign. In a video recorded in a controlled lab environment, the state-of-the-art YOLOv2 detector failed to recognize these adversarial Stop signs in over 85% of the video frames. In an outdoor experiment, YOLO was fooled by the poster and sticker attacks in 72.5% and 63.5% of the video frames, respectively. We also use Faster R-CNN, a different object detection model, to demonstrate the transferability of our adversarial perturbations. The created poster perturbation is able to fool Faster R-CNN in 85.9% of the video frames in a controlled lab environment, and in 40.2% of the video frames in an outdoor environment. Finally, we present preliminary results with a new Creation Attack, wherein innocuous physical stickers fool a model into detecting nonexistent objects.
Tasks Object Detection
Published 2018-07-20
URL http://arxiv.org/abs/1807.07769v2
PDF http://arxiv.org/pdf/1807.07769v2.pdf
PWC https://paperswithcode.com/paper/physical-adversarial-examples-for-object
Repo
Framework
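
A minimal sketch of the optimization behind such attacks: a perturbation restricted to a poster/sticker mask is updated to suppress the detector's confidence in the target object. The detector and scoring function below are hypothetical placeholders, not YOLOv2 or the paper's loss, and the robustness-to-physical-transformations machinery is omitted.

# Sketch: mask-restricted adversarial perturbation against an object detector.
import torch

def detector_score(img):            # placeholder "detector": mean activation
    return img.mean()               # stands in for a stop-sign objectness score

img = torch.rand(3, 416, 416)       # scene containing the sign
mask = torch.zeros_like(img)
mask[:, 150:260, 150:260] = 1.0     # region covered by the poster / stickers

delta = torch.zeros_like(img, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for _ in range(100):
    adv = torch.clamp(img + mask * delta, 0.0, 1.0)
    loss = detector_score(adv)      # Disappearance Attack: drive the score down
    opt.zero_grad()
    loss.backward()
    opt.step()

adv_image = torch.clamp(img + mask * delta.detach(), 0.0, 1.0)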

Marginal Singularity, and the Benefits of Labels in Covariate-Shift

Title Marginal Singularity, and the Benefits of Labels in Covariate-Shift
Authors Samory Kpotufe, Guillaume Martinet
Abstract We present new minimax results that concisely capture the relative benefits of source and target labeled data, under covariate-shift. Namely, we show that the benefits of target labels are controlled by a transfer-exponent $\gamma$ that encodes how singular Q is locally w.r.t. P, and interestingly allows situations where transfer did not seem possible under previous insights. In fact, our new minimax analysis - in terms of $\gamma$ - reveals a continuum of regimes ranging from situations where target labels have little benefit, to regimes where target labels dramatically improve classification. We then show that a recently proposed semi-supervised procedure can be extended to adapt to unknown $\gamma$, and therefore requests labels only when beneficial, while achieving minimax transfer rates.
Tasks
Published 2018-03-05
URL http://arxiv.org/abs/1803.01833v2
PDF http://arxiv.org/pdf/1803.01833v2.pdf
PWC https://paperswithcode.com/paper/marginal-singularity-and-the-benefits-of
Repo
Framework

Phase retrieval for Fourier Ptychography under varying amount of measurements

Title Phase retrieval for Fourier Ptychography under varying amount of measurements
Authors Lokesh Boominathan, Mayug Maniparambil, Honey Gupta, Rahul Baburajan, Kaushik Mitra
Abstract Fourier Ptychography is a recently proposed imaging technique that yields high-resolution images by computationally transcending the diffraction blur of an optical system. At the crux of this method is the phase retrieval algorithm, which is used for computationally stitching together low-resolution images taken under varying illumination angles of a coherent light source. However, the traditional iterative phase retrieval technique relies heavily on initialization and also needs a good amount of overlap in the Fourier domain between successively captured low-resolution images, increasing the acquisition time and data requirements. We show that an autoencoder-based architecture can be adaptively trained for phase retrieval both under low overlap, where traditional techniques completely fail, and at higher levels of overlap. For the low-overlap case, we show that a supervised deep learning technique using an autoencoder generator is a good choice for solving the Fourier ptychography problem. For the high-overlap case, we show that optimizing the generator to reduce the forward-model error is an appropriate choice. Using simulations for the challenging case of uncorrelated phase and amplitude, we show that our method outperforms many of the previously proposed Fourier ptychography phase retrieval techniques.
Tasks
Published 2018-05-09
URL http://arxiv.org/abs/1805.03593v1
PDF http://arxiv.org/pdf/1805.03593v1.pdf
PWC https://paperswithcode.com/paper/phase-retrieval-for-fourier-ptychography
Repo
Framework
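
For context, a minimal sketch of the Fourier ptychography forward model: each illumination angle selects a shifted pupil-sized region of the high-resolution field's Fourier spectrum, and the camera records only intensities. The aperture radius and shift grid below are arbitrary choices, and the usual sensor-side downsampling is omitted for brevity.

# Sketch: Fourier ptychography forward model with overlapping pupil positions.
import numpy as np

def capture(hi_res_field, center, radius):
    F = np.fft.fftshift(np.fft.fft2(hi_res_field))
    yy, xx = np.mgrid[:F.shape[0], :F.shape[1]]
    pupil = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    low_res_field = np.fft.ifft2(np.fft.ifftshift(F * pupil))
    return np.abs(low_res_field) ** 2        # phase is lost at the sensor

rng = np.random.default_rng(0)
amplitude = rng.random((128, 128))
phase = 2 * np.pi * rng.random((128, 128))   # uncorrelated amplitude and phase
field = amplitude * np.exp(1j * phase)

# Overlapping pupil positions; phase retrieval stitches these captures together.
captures = [capture(field, (64 + dy, 64 + dx), radius=20)
            for dy in (-10, 0, 10) for dx in (-10, 0, 10)]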

The Utility of Sparse Representations for Control in Reinforcement Learning

Title The Utility of Sparse Representations for Control in Reinforcement Learning
Authors Vincent Liu, Raksha Kumaraswamy, Lei Le, Martha White
Abstract We investigate sparse representations for control in reinforcement learning. While these representations are widely used in computer vision, their prevalence in reinforcement learning is limited to sparse coding where extracting representations for new data can be computationally intensive. Here, we begin by demonstrating that learning a control policy incrementally with a representation from a standard neural network fails in classic control domains, whereas learning with a representation obtained from a neural network that has sparsity properties enforced is effective. We provide evidence that the reason for this is that the sparse representation provides locality, and so avoids catastrophic interference, and particularly keeps consistent, stable values for bootstrapping. We then discuss how to learn such sparse representations. We explore the idea of Distributional Regularizers, where the activation of hidden nodes is encouraged to match a particular distribution that results in sparse activation across time. We identify a simple but effective way to obtain sparse representations, not afforded by previously proposed strategies, making it more practical for further investigation into sparse representations for reinforcement learning.
Tasks
Published 2018-11-15
URL http://arxiv.org/abs/1811.06626v1
PDF http://arxiv.org/pdf/1811.06626v1.pdf
PWC https://paperswithcode.com/paper/the-utility-of-sparse-representations-for
Repo
Framework
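
A minimal sketch of a KL-based sparsity penalty in the spirit of the Distributional Regularizers mentioned above: each hidden unit's average activation is pushed toward a small target rate. The Bernoulli-KL form and the target value are assumptions made here for illustration; the paper studies matching other activation distributions.

# Sketch: encourage sparse hidden activations by penalising the KL divergence
# between each unit's mean activation and a small target rate.
import torch

def sparsity_penalty(hidden, target=0.05, eps=1e-6):
    # hidden: (batch, n_units) activations in [0, 1], e.g. after a sigmoid.
    rho_hat = hidden.mean(dim=0).clamp(eps, 1 - eps)   # per-unit mean activation
    rho = torch.full_like(rho_hat, target)
    kl = rho * torch.log(rho / rho_hat) + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))
    return kl.sum()

h = torch.sigmoid(torch.randn(32, 64))     # stand-in for a hidden layer's output
value_loss = torch.tensor(0.0)             # placeholder for the usual value/TD loss
total_loss = value_loss + 0.01 * sparsity_penalty(h)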

Unbiased Image Style Transfer

Title Unbiased Image Style Transfer
Authors Hyun-Chul Choi, Minseong Kim
Abstract Recent fast image style transfer methods use feed-forward neural networks to generate an output image of the desired style strength from an input pair of a content image and a target style image. In existing methods, an image of intermediate style between the content and the target style is obtained by decoding a linearly interpolated feature in the encoded feature space. However, there has been no work analyzing the effectiveness of this kind of style-strength interpolation so far. In this paper, we provide this missing in-depth analysis of style interpolation and propose a method that is more effective in controlling style strength. We interpret the training task of a style transfer network as regression learning between the control parameter and the output style strength. Under this interpretation, the existing methods are biased because training is performed only with one-sided data of full style strength (alpha = 1.0). Thus, this biased learning does not guarantee the generation of a desired intermediate style corresponding to a style control parameter between 0.0 and 1.0. To solve this problem, we propose an unbiased learning technique which uses unbiased training data and a corresponding unbiased loss for alpha = 0.0 to make the feed-forward network generate a zero-style image, i.e., the content image, when alpha = 0.0. Our experimental results verified that our unbiased learning method achieves reconstruction of the content image at zero style strength, a better regression relationship between the style control parameter and the output style, and more stable style transfer that is insensitive to the weight of the style loss, without additional complexity in the image generation process.
Tasks Style Transfer
Published 2018-07-04
URL http://arxiv.org/abs/1807.01424v2
PDF http://arxiv.org/pdf/1807.01424v2.pdf
PWC https://paperswithcode.com/paper/unbiased-image-style-transfer
Repo
Framework
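
A minimal sketch of the style-strength interpolation described above, together with the extra alpha = 0 reconstruction term that the unbiased training adds. The encoder, decoder, style transform and style loss are identity/averaging placeholders so the sketch runs; they are not the paper's networks.

# Sketch: style-strength control by interpolating encoded features, plus an
# alpha = 0 term asking the network to reproduce the content image exactly.
import torch
import torch.nn.functional as F

def blended_output(content, style, alpha, encode, decode, stylize):
    f_c = encode(content)
    f_cs = stylize(f_c, encode(style))                 # fully stylized features
    return decode((1.0 - alpha) * f_c + alpha * f_cs)  # style-strength interpolation

def unbiased_loss(content, style, encode, decode, stylize, style_loss):
    # Usual full-strength objective at alpha = 1.0 ...
    full = style_loss(blended_output(content, style, 1.0, encode, decode, stylize), style)
    # ... plus the unbiased alpha = 0.0 term: the output should be the content image.
    zero = F.mse_loss(blended_output(content, style, 0.0, encode, decode, stylize), content)
    return full + zero

# Placeholder components so the sketch runs end to end.
encode = lambda x: x
decode = lambda f: f
stylize = lambda f_c, f_s: 0.5 * (f_c + f_s)
style_loss = lambda out, target: F.mse_loss(out, target)

c, s = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(unbiased_loss(c, s, encode, decode, stylize, style_loss))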

Semi-supervised FusedGAN for Conditional Image Generation

Title Semi-supervised FusedGAN for Conditional Image Generation
Authors Navaneeth Bodla, Gang Hua, Rama Chellappa
Abstract We present FusedGAN, a deep network for conditional image synthesis with controllable sampling of diverse images. Fidelity, diversity and controllable sampling are the main quality measures of a good image generation model. Most existing models fall short in all three aspects. FusedGAN can perform controllable sampling of diverse images with very high fidelity. We argue that controllability can be achieved by disentangling the generation process into various stages. In contrast to stacked GANs, where multiple stages of GANs are trained separately with full supervision of labeled intermediate images, FusedGAN has a single-stage pipeline with a built-in stacking of GANs. Unlike existing methods, which require full supervision with paired conditions and images, FusedGAN can effectively leverage more abundant images without corresponding conditions during training to produce more diverse samples with high fidelity. We achieve this by fusing two generators: one for unconditional image generation and the other for conditional image generation, where the two partly share a common latent space, thereby disentangling the generation. We demonstrate the efficacy of FusedGAN on fine-grained image generation tasks such as text-to-image and attribute-to-face generation.
Tasks Conditional Image Generation, Face Generation, Image Generation
Published 2018-01-17
URL http://arxiv.org/abs/1801.05551v1
PDF http://arxiv.org/pdf/1801.05551v1.pdf
PWC https://paperswithcode.com/paper/semi-supervised-fusedgan-for-conditional
Repo
Framework
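
A minimal sketch of fusing an unconditional and a conditional generator through a shared intermediate feature, as described above. The architectures and dimensions are placeholders, not the paper's networks, and the discriminators and training losses are omitted.

# Sketch: two fused generators. The unconditional path maps noise to a shared
# intermediate feature (and on to an image); the conditional path consumes the
# same feature plus a condition vector. Dimensions are assumptions.
import torch
import torch.nn as nn

class FusedGenerators(nn.Module):
    def __init__(self, z_dim=64, cond_dim=16, feat_dim=128, img_dim=784):
        super().__init__()
        self.to_shared = nn.Sequential(nn.Linear(z_dim, feat_dim), nn.ReLU())
        self.uncond_head = nn.Linear(feat_dim, img_dim)
        self.cond_head = nn.Sequential(
            nn.Linear(feat_dim + cond_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, img_dim),
        )

    def forward(self, z, cond=None):
        shared = self.to_shared(z)                 # latent stage shared by both paths
        x_uncond = torch.tanh(self.uncond_head(shared))
        if cond is None:                           # unlabeled images can train this path
            return x_uncond
        x_cond = torch.tanh(self.cond_head(torch.cat([shared, cond], dim=1)))
        return x_uncond, x_cond

g = FusedGenerators()
imgs = g(torch.randn(4, 64), torch.randn(4, 16))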

A Hierarchical Bayesian Linear Regression Model with Local Features for Stochastic Dynamics Approximation

Title A Hierarchical Bayesian Linear Regression Model with Local Features for Stochastic Dynamics Approximation
Authors Behnoosh Parsa, Keshav Rajasekaran, Franziska Meier, Ashis G. Banerjee
Abstract One of the challenges in model-based control of stochastic dynamical systems is that the state transition dynamics are involved, and it is not easy or efficient to make good-quality predictions of the states. Moreover, there are not many representational models for the majority of autonomous systems, as it is not easy to build a compact model that captures all of the dynamical subtleties and uncertainties. In this work, we present a hierarchical Bayesian linear regression model with local features to learn the dynamics of a micro-robotic system, as well as two simpler examples consisting of a stochastic mass-spring damper and a stochastic double inverted pendulum on a cart. The model is hierarchical because we assume non-stationary priors for the model parameters. These non-stationary priors make the model more flexible by imposing priors on the priors of the model. To solve the maximum likelihood (ML) problem for this hierarchical model, we use the variational expectation maximization (EM) algorithm, and enhance the procedure by introducing hidden target variables. The algorithm yields parsimonious model structures, and consistently provides fast and accurate predictions for all our examples involving large training and test sets. This demonstrates the effectiveness of the method in learning stochastic dynamics, which makes it suitable for future use in a paradigm such as model-based reinforcement learning to compute optimal control policies in real time.
Tasks
Published 2018-07-11
URL http://arxiv.org/abs/1807.03931v2
PDF http://arxiv.org/pdf/1807.03931v2.pdf
PWC https://paperswithcode.com/paper/a-hierarchical-bayesian-linear-regression
Repo
Framework
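
A schematic of a hierarchical Bayesian linear regression of this general form, written as a generic ARD-style model; the paper's exact priors and local-feature construction may differ.

y_n = \mathbf{w}^\top \boldsymbol{\phi}(\mathbf{x}_n) + \varepsilon_n, \qquad \varepsilon_n \sim \mathcal{N}(0, \beta^{-1}),
\mathbf{w} \mid \boldsymbol{\alpha} \sim \mathcal{N}\big(\mathbf{0}, \operatorname{diag}(\boldsymbol{\alpha})^{-1}\big), \qquad \alpha_j \sim \mathrm{Gamma}(a_0, b_0).

Variational EM then alternates between updating an approximate posterior over $\mathbf{w}$ (E-step) and the hyperparameters $\boldsymbol{\alpha}$ and $\beta$ that act as priors on the priors (M-step).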