February 2, 2020

3441 words 17 mins read

Paper Group AWR 21

Disentangled Makeup Transfer with Generative Adversarial Network. Perceptual Generative Autoencoders. Detection of Malfunctioning Smart Electricity Meter. Towards a Reinforcement Learning Environment Toolbox for Intelligent Electric Motor Control. Question Generation by Transformers. Deep Learning for Time Series Forecasting: The Electric Load Case …

Disentangled Makeup Transfer with Generative Adversarial Network

Title Disentangled Makeup Transfer with Generative Adversarial Network
Authors Honglun Zhang, Wenqing Chen, Hao He, Yaohui Jin
Abstract Facial makeup transfer is a widely-used technology that aims to transfer the makeup style from a reference face image to a non-makeup face. Existing methods leverage an adversarial loss so that the generated faces are of high quality and as realistic as real ones, but they are only able to produce fixed outputs. Inspired by recent advances in disentangled representation, in this paper we propose DMT (Disentangled Makeup Transfer), a unified generative adversarial network that covers different scenarios of makeup transfer. Our model contains an identity encoder as well as a makeup encoder to disentangle the personal identity and the makeup style of arbitrary face images. Based on the outputs of the two encoders, a decoder is employed to reconstruct the original faces. We also apply a discriminator to distinguish real faces from fake ones. As a result, our model can not only transfer the makeup styles from one or more reference face images to a non-makeup face with controllable strength, but also produce varied outputs with styles sampled from a prior distribution. Extensive experiments demonstrate that our model is superior to existing methods, generating high-quality results across the different scenarios of makeup transfer.
Tasks Facial Makeup Transfer
Published 2019-07-02
URL https://arxiv.org/abs/1907.01144v1
PDF https://arxiv.org/pdf/1907.01144v1.pdf
PWC https://paperswithcode.com/paper/disentangled-makeup-transfer-with-generative
Repo https://github.com/Honlan/DMT
Framework tf
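
The core idea, two encoders that factor a face into an identity code and a makeup code plus a decoder that recombines them, fits in a few lines. Below is a minimal PyTorch sketch; all layer sizes and module names are illustrative stand-ins, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DMTSketch(nn.Module):
    def __init__(self, id_dim=64, style_dim=8):
        super().__init__()
        # Spatial identity code: keeps the face structure.
        self.identity_enc = nn.Sequential(nn.Conv2d(3, id_dim, 4, 2, 1), nn.ReLU())
        # Global makeup code: a small vector describing the style.
        self.makeup_enc = nn.Sequential(
            nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, style_dim))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(id_dim + style_dim, 3, 4, 2, 1), nn.Tanh())

    def forward(self, face, style_source):
        ident = self.identity_enc(face)
        style = self.makeup_enc(style_source)
        # Broadcast the style vector over the identity feature map and decode.
        style_map = style[:, :, None, None].expand(-1, -1, *ident.shape[2:])
        return self.decoder(torch.cat([ident, style_map], dim=1))

model = DMTSketch()
face = torch.randn(1, 3, 64, 64)       # non-makeup face
reference = torch.randn(1, 3, 64, 64)  # reference face with makeup
transferred = model(face, reference)   # makeup transfer
self_recon = model(face, face)         # reconstruction, as in training
```

Controllable strength then amounts to interpolating between two makeup codes before decoding, and sampling the style vector from a prior gives the varied outputs the abstract mentions.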

Perceptual Generative Autoencoders

Title Perceptual Generative Autoencoders
Authors Zijun Zhang, Ruixiang Zhang, Zongpeng Li, Yoshua Bengio, Liam Paull
Abstract Modern generative models are usually designed to match target distributions directly in the data space, where the intrinsic dimensionality of data can be much lower than the ambient dimensionality. We argue that this discrepancy may contribute to the difficulties in training generative models. We therefore propose to map both the generated and target distributions to the latent space using the encoder of a standard autoencoder, and train the generator (or decoder) to match the target distribution in the latent space. The resulting method, perceptual generative autoencoder (PGA), is then incorporated with a maximum likelihood or variational autoencoder (VAE) objective to train the generative model. With maximum likelihood, PGAs generalize the idea of reversible generative models to unrestricted neural network architectures and arbitrary latent dimensionalities. When combined with VAEs, PGAs can generate sharper samples than vanilla VAEs. Compared to other autoencoder-based generative models using simple priors, PGAs achieve state-of-the-art FID scores on CIFAR-10 and CelebA.
Tasks
Published 2019-06-25
URL https://arxiv.org/abs/1906.10335v1
PDF https://arxiv.org/pdf/1906.10335v1.pdf
PWC https://paperswithcode.com/paper/perceptual-generative-autoencoders
Repo https://github.com/rosinality/lvpga-pytorch
Framework pytorch
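
A toy sketch of the central move, matching distributions in latent space rather than data space, is below. The first-moment matching loss is a crude stand-in for the paper's maximum-likelihood/VAE objectives, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

enc = nn.Linear(784, 32)   # encoder of a standard autoencoder
dec = nn.Linear(32, 784)   # decoder, doubling as the generator
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.randn(64, 784)   # stand-in for a batch of real data
z = torch.randn(64, 32)    # samples from the latent prior

recon = ((dec(enc(x)) - x) ** 2).mean()  # standard autoencoder term
# PGA's key step: re-encode generated samples and compare them with encoded
# data in the latent space (crude first-moment matching, for illustration).
latent_match = ((enc(dec(z)).mean(0) - enc(x).mean(0)) ** 2).mean()

opt.zero_grad()
(recon + latent_match).backward()
opt.step()
```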

Detection of Malfunctioning Smart Electricity Meter

Title Detection of Malfunctioning Smart Electricity Meter
Authors Ming Liu, Dongpeng Liu, Guangyu Sun, Yi Zhao, Duolin Wang, Fangxing Liu, Xiang Fang, Qing He, Dong Xu
Abstract Detecting malfunctioning smart meters based on electricity usage and targeting them for replacement can save significant resources. For this purpose, we developed a novel deep-learning method for malfunctioning smart meter detection based on long short-term memory (LSTM) and a modified convolutional neural network (CNN). Our method uses LSTM to predict the reading of a master meter based on data collected from submeters. If the predicted value differs significantly from the master meter reading over a period of time, the diagnosis stage is activated, classifying every submeter with a CNN to identify the malfunctioning one. We propose a time series-recurrence plot (TS-RP) CNN, which combines the sequential raw electricity data and its recurrence plots in phase space as dual input branches of the CNN. By combining the time-sequential (TS) raw data with the recurrence plots (RP), we found that classification performance was much better than when using the sequential raw data alone. We compared our method with several classical methods, including elastic net and gradient boosting regression, and found that our method performs better. To the best of our knowledge, our TS-RP CNN is the first method to apply deep learning to malfunctioning meter detection. It is also relatively unique in the way it combines sequential data and its phase-space transformation as dual inputs for general sequential-data classification. This method is not only useful for extending the service life span of smart meters by preventing unnecessary replacement, but also provides a general approach for managing other instruments that produce sequential data.
Tasks Time Series, Window Detection
Published 2019-07-26
URL https://arxiv.org/abs/1907.11377v2
PDF https://arxiv.org/pdf/1907.11377v2.pdf
PWC https://paperswithcode.com/paper/detection-of-malfunctioning-smart-electricity
Repo https://github.com/minoriwww/MeterDetection
Framework tf
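
The RP branch is easy to reproduce: embed the series in phase space with a delay and threshold pairwise distances. A NumPy sketch, with illustrative embedding parameters:

```python
import numpy as np

def recurrence_plot(x, dim=2, delay=1, eps=0.1):
    # Delay embedding: each row is a point in phase space.
    n = len(x) - (dim - 1) * delay
    emb = np.stack([x[i * delay : i * delay + n] for i in range(dim)], axis=1)
    # R[i, j] = 1 where phase-space points i and j are closer than eps.
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dists < eps).astype(np.float32)

reading = np.sin(np.linspace(0, 20, 200)) + 0.05 * np.random.randn(200)
rp = recurrence_plot(reading)  # one CNN input branch; the raw series is the other
print(rp.shape)                # (199, 199) for dim=2, delay=1
```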

Towards a Reinforcement Learning Environment Toolbox for Intelligent Electric Motor Control

Title Towards a Reinforcement Learning Environment Toolbox for Intelligent Electric Motor Control
Authors Arne Traue, Gerrit Book, Wilhelm Kirchgässner, Oliver Wallscheid
Abstract Electric motors are used in many applications and their efficiency is strongly dependent on their control. Among others, PI approaches and model predictive control methods are well known in the scientific literature and industrial practice. A novel approach is to use reinforcement learning (RL) to have an agent learn electric drive control from scratch merely by interacting with a suitable control environment. RL has achieved remarkable results with super-human performance in many games (e.g. Atari classics or Go) and is also becoming more popular in control tasks like cartpole or swinging-pendulum benchmarks. In this work, the open-source Python package gym-electric-motor (GEM) is developed to ease the training of RL agents for electric motor control. Furthermore, this package can be used to compare the trained agents with other state-of-the-art control approaches. It is based on the OpenAI Gym framework, which provides a widely used interface for the evaluation of RL agents. The initial package version covers different DC motor variants and the prevalent permanent magnet synchronous motor, as well as different power electronic converters and a mechanical load model. Due to the modular setup of the proposed toolbox, it can easily be extended with additional motors, loads, and power electronic devices in the future. Furthermore, different secondary effects like controller interlocking time or noise are considered. An intelligent controller example based on the deep deterministic policy gradient algorithm, which controls a series DC motor, is presented and compared to a cascaded PI controller as a baseline for future research. Fellow researchers are encouraged to use the framework in their RL investigations or to contribute to the functional scope (e.g. further motor types) of the package.
Tasks
Published 2019-10-21
URL https://arxiv.org/abs/1910.09434v1
PDF https://arxiv.org/pdf/1910.09434v1.pdf
PWC https://paperswithcode.com/paper/towards-a-reinforcement-learning-environment
Repo https://github.com/upb-lea/gym-electric-motor
Framework none
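
Because GEM follows the OpenAI Gym interface, interacting with it looks like any Gym loop. The environment id below is an assumption for illustration (registered ids have changed across package versions), and GEM observations additionally carry reference values, so consult the repository docs for the exact structure.

```python
import gym_electric_motor as gem

# Hypothetical id for a continuously controlled series DC motor; check the
# gym-electric-motor repository for the ids actually registered.
env = gem.make('DcSeriesCont-v1')
state = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # a trained DDPG agent would act here
    state, reward, done, info = env.step(action)
env.close()
```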

Question Generation by Transformers

Title Question Generation by Transformers
Authors Kettip Kriangchaivech, Artit Wangperawong
Abstract A machine learning model was developed to automatically generate questions from Wikipedia passages using transformers, an attention-based architecture that eschews the recurrent neural network (RNN) paradigm. The model was trained on the inverted Stanford Question Answering Dataset (SQuAD), a reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles. After training, the question generation model is able to generate simple questions relevant to unseen passages and answers, containing an average of 8 words per question. The word error rate (WER) was used as a metric to compare the similarity between SQuAD questions and the model-generated questions. Although the high average WER suggests that the generated questions differ from the original SQuAD questions, they are mostly grammatically correct and plausible in their own right.
Tasks Question Answering, Question Generation, Reading Comprehension
Published 2019-09-09
URL https://arxiv.org/abs/1909.05017v2
PDF https://arxiv.org/pdf/1909.05017v2.pdf
PWC https://paperswithcode.com/paper/question-generation-by-transformers
Repo https://github.com/artitw/BERT_QA
Framework tf
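
WER, the paper's evaluation metric, is the word-level Levenshtein distance normalized by the reference length. A self-contained implementation:

```python
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("what year was the treaty signed",
          "when was the treaty signed"))  # 2 edits / 6 words = 0.333...
```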

Deep Learning for Time Series Forecasting: The Electric Load Case

Title Deep Learning for Time Series Forecasting: The Electric Load Case
Authors Alberto Gasparin, Slobodan Lukovic, Cesare Alippi
Abstract Management and efficient operations in critical infrastructure such as Smart Grids benefit greatly from accurate power load forecasting, which, due to its nonlinear nature, remains a challenging task. Recently, deep learning has emerged in the machine learning field, achieving impressive performance in a vast range of tasks, from image classification to machine translation. Applications of deep learning models to the electric load forecasting problem are gaining interest among researchers as well as industry, but a comprehensive and sound comparison among different architectures is not yet available in the literature. This work aims to fill that gap by reviewing and experimentally evaluating the most recent trends in electric load forecasting on two real-world datasets, contrasting deep learning architectures on short-term forecasting (one-day-ahead prediction). Specifically, we focus on feedforward and recurrent neural networks, sequence-to-sequence models, and temporal convolutional neural networks, along with architectural variants that are known in the signal processing community but novel to the load forecasting one.
Tasks Load Forecasting, Time Series Forecasting
Published 2019-07-22
URL https://arxiv.org/abs/1907.09207v1
PDF https://arxiv.org/pdf/1907.09207v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-for-time-series-forecasting-the
Repo https://github.com/albertogaspar/dts
Framework tf
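
All compared architectures share the same supervised setup: predict the next day of load from a window of past observations. A sketch of that windowing with illustrative sizes (one week in, one day out, hourly data):

```python
import numpy as np

def make_windows(load, lookback=168, horizon=24):
    X, y = [], []
    for t in range(lookback, len(load) - horizon + 1):
        X.append(load[t - lookback:t])  # e.g. the previous week of load
        y.append(load[t:t + horizon])   # the next day, forecast jointly
    return np.array(X), np.array(y)

hourly_load = np.random.rand(24 * 60)   # stand-in for a real load series
X, y = make_windows(hourly_load)
print(X.shape, y.shape)                 # (1249, 168) (1249, 24)
```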

LSANet: Feature Learning on Point Sets by Local Spatial Aware Layer

Title LSANet: Feature Learning on Point Sets by Local Spatial Aware Layer
Authors Lin-Zhuo Chen, Xuan-Yi Li, Deng-Ping Fan, Kai Wang, Shao-Ping Lu, Ming-Ming Cheng
Abstract Directly learning features from point clouds has become an active research direction in 3D understanding. Existing learning-based methods usually construct local regions from the point cloud and extract the corresponding features. However, most of these processes do not adequately take the spatial distribution of the point cloud into account, limiting the ability to perceive fine-grained patterns. We design a novel Local Spatial Aware (LSA) layer, which learns to generate Spatial Distribution Weights (SDWs) hierarchically based on the spatial relationships within a local region. This establishes the relationship between spatially independent operations and the spatial distribution, and thus captures the local geometric structure sensitively. We further propose LSANet, a network built on the LSA layer that better aggregates spatial information with the associated features at each layer. Experiments show that LSANet achieves on-par or better performance than state-of-the-art methods on challenging benchmark datasets. For example, LSANet achieves 93.2% accuracy on the ModelNet40 dataset using only 1024 points, significantly higher than other methods under the same conditions. The source code is available at https://github.com/LinZhuoChen/LSANet.
Tasks
Published 2019-05-14
URL https://arxiv.org/abs/1905.05442v3
PDF https://arxiv.org/pdf/1905.05442v3.pdf
PWC https://paperswithcode.com/paper/lsanet-feature-learning-on-point-sets-by
Repo https://github.com/LinZhuoChen/LSANet
Framework tf
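
A toy PyTorch rendering of the SDW idea: learn per-channel weights from the relative coordinates inside each local region and apply them to the point features before aggregation. Layer sizes and the aggregation choice are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LSASketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Maps 3-D relative coordinates to per-channel weights (the SDWs).
        self.weight_net = nn.Sequential(
            nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, channels), nn.Sigmoid())

    def forward(self, rel_xyz, features):
        # rel_xyz: (B, N, K, 3) neighbor offsets; features: (B, N, K, C)
        sdw = self.weight_net(rel_xyz)             # (B, N, K, C)
        return (sdw * features).max(dim=2).values  # spatially aware aggregation

layer = LSASketch(channels=32)
rel = torch.randn(2, 128, 16, 3)    # 128 local regions, 16 neighbors each
feats = torch.randn(2, 128, 16, 32)
out = layer(rel, feats)             # (2, 128, 32)
```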

Recurrent Autoencoder with Skip Connections and Exogenous Variables for Traffic Forecasting

Title Recurrent Autoencoder with Skip Connections and Exogenous Variables for Traffic Forecasting
Authors Pedro Herruzo, Josep L. Larriba-Pey
Abstract The increasing complexity of mobility and the growing population in cities, together with the importance of privacy when sharing data from vehicles or any device, make traffic forecasting using data from infrastructure and citizens an open and challenging task. In this paper, we introduce a novel approach to predicting speed, volume, and main traffic direction in a new aggregated representation of traffic data presented as videos. The approach leverages the continuity of a sequence of frames and its dynamics, learning to predict changing areas in a low-dimensional space and then recovering static features when reconstructing the original space. Exogenous variables like weather, time, and calendar are also added to the model. Furthermore, we introduce a novel sampling approach for sequences that ensures diversity when creating batches, running in parallel to the optimization process.
Tasks
Published 2019-10-28
URL https://arxiv.org/abs/1910.13295v1
PDF https://arxiv.org/pdf/1910.13295v1.pdf
PWC https://paperswithcode.com/paper/recurrent-autoencoder-with-skip-connections
Repo https://github.com/pherrusa7/Traffic4cast_NeurIPS_2019
Framework tf
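
One plausible reading of the diversity-aware sampler (an interpretation for illustration, not the authors' exact procedure): bucket sequence start times by hour of day and fill each batch round-robin across buckets, so that no batch is dominated by a single traffic regime.

```python
import random
from collections import defaultdict

def diverse_batches(start_times, batch_size=8):
    buckets = defaultdict(list)
    for idx, t in enumerate(start_times):
        buckets[t % 24].append(idx)               # bucket by hour of day
    pools = [random.sample(b, len(b)) for b in buckets.values()]
    while any(pools):
        batch = []
        for pool in pools:                        # round-robin across hours
            if pool and len(batch) < batch_size:
                batch.append(pool.pop())
        if batch:
            yield batch

for batch in diverse_batches(list(range(48))):    # toy hourly start times
    print(batch)                                  # each batch spans 8 hours
```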

RuleKit: A Comprehensive Suite for Rule-Based Learning

Title RuleKit: A Comprehensive Suite for Rule-Based Learning
Authors Adam Gudyś, Marek Sikora, Łukasz Wróbel
Abstract Rule-based models are often used for data analysis as they combine interpretability with predictive power. We present RuleKit, a versatile tool for rule learning. Based on a sequential covering induction algorithm, it is suitable for classification, regression, and survival problems. User-guided induction facilitates verifying hypotheses concerning data dependencies which are expected or of interest. The powerful and flexible experimental environment allows straightforward investigation of different induction schemes. Analyses can be performed in batch mode, through a RapidMiner plug-in, or via an R package. A documented Java API is also provided for convenience. The software is publicly available on GitHub under the GNU AGPL-3.0 license.
Tasks
Published 2019-08-02
URL https://arxiv.org/abs/1908.01031v1
PDF https://arxiv.org/pdf/1908.01031v1.pdf
PWC https://paperswithcode.com/paper/rulekit-a-comprehensive-suite-for-rule-based
Repo https://github.com/adaa-polsl/RuleKit
Framework none
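
RuleKit's induction core is a sequential covering (separate-and-conquer) loop. The Python sketch below illustrates that algorithm class, not RuleKit's actual API (which is exposed through Java, a RapidMiner plug-in, and R); the toy rule grower is a deliberately naive stand-in.

```python
def grow_rule(examples):
    # Naive grower: pick the single feature/threshold with the best precision
    # on the positive class among the still-uncovered examples.
    best = None
    for f in range(len(examples[0][0])):
        for feats, _ in examples:
            thr = feats[f]
            covered = [lbl for x, lbl in examples if x[f] >= thr]
            prec = sum(covered) / len(covered)
            if best is None or prec > best[0]:
                best = (prec, f, thr)
    _, f, thr = best
    return lambda x: x[f] >= thr

def sequential_covering(examples, min_covered=1):
    rules, remaining = [], list(examples)
    while any(lbl for _, lbl in remaining):   # positives remain uncovered
        rule = grow_rule(remaining)
        covered = [e for e in remaining if rule(e[0])]
        if len(covered) < min_covered:
            break
        rules.append(rule)
        remaining = [e for e in remaining if not rule(e[0])]  # separate...
    return rules                                              # ...and conquer

data = [([1.0, 0.2], 1), ([0.9, 0.8], 1), ([0.1, 0.4], 0), ([0.2, 0.9], 0)]
print(len(sequential_covering(data)), "rules induced")
```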

Multi-task Learning and Catastrophic Forgetting in Continual Reinforcement Learning

Title Multi-task Learning and Catastrophic Forgetting in Continual Reinforcement Learning
Authors João Ribeiro, Francisco S. Melo, João Dias
Abstract In this paper we investigate two hypotheses regarding the use of deep reinforcement learning in multiple tasks. The first hypothesis is driven by the question of whether a deep reinforcement learning algorithm, trained on two similar tasks, is able to outperform two single-task, individually trained algorithms by more efficiently learning a new, similar task that none of the three algorithms has encountered before. The second hypothesis is driven by the question of whether the same multi-task deep RL algorithm, trained on two similar tasks and augmented with elastic weight consolidation (EWC), is able to retain performance on the new task similar to that of an algorithm without EWC, whilst being able to overcome catastrophic forgetting in the two previous tasks. We show that a multi-task Asynchronous Advantage Actor-Critic (GA3C) algorithm, trained on Space Invaders and Demon Attack, is in fact able to outperform two single-task GA3C versions, each trained individually on one task, when evaluated on a new, third task, namely Phoenix. We also show that, when training two multi-task GA3C algorithms on the third task, the one augmented with EWC is not only able to achieve similar performance on the new task, but is also capable of overcoming a substantial amount of catastrophic forgetting on the two previous tasks.
Tasks Continual Learning, Multi-Task Learning
Published 2019-09-22
URL https://arxiv.org/abs/1909.10008v1
PDF https://arxiv.org/pdf/1909.10008v1.pdf
PWC https://paperswithcode.com/paper/190910008
Repo https://github.com/jmribeiro/UGP
Framework tf
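
The EWC augmentation adds a quadratic penalty that anchors parameters important for earlier tasks. A compact PyTorch sketch follows; the Fisher values here are placeholders (in practice they are estimated from squared gradients on the earlier tasks), and the toy loss merely stands in for the GA3C objective.

```python
import torch

def ewc_penalty(model, anchor_params, fisher, lam=100.0):
    # Quadratic pull toward the post-task-A parameters, weighted per-parameter
    # by (diagonal) Fisher information, i.e. how much task A relied on them.
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - anchor_params[name]) ** 2).sum()
    return 0.5 * lam * loss

model = torch.nn.Linear(4, 2)
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # placeholder

task_b_loss = model(torch.randn(8, 4)).pow(2).mean()  # stand-in for the RL loss
total = task_b_loss + ewc_penalty(model, anchor, fisher)
total.backward()
```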

Computing Full Conformal Prediction Set with Approximate Homotopy

Title Computing Full Conformal Prediction Set with Approximate Homotopy
Authors Eugene Ndiaye, Ichiro Takeuchi
Abstract If you predict the label $y$ of a new object with $\hat y$, how confident are you that $y = \hat y$? Conformal prediction methods provide an elegant framework for answering such questions by building a $100(1 - \alpha)\%$ confidence region without assumptions on the distribution of the data. They are based on a refitting procedure that parses all the possibilities for $y$ to select the most likely ones. Although they provide strong coverage guarantees, conformal sets are impractical to compute exactly for many regression problems. We propose efficient algorithms to compute conformal prediction sets using approximate solutions of (convex) regularized empirical risk minimization. Our approaches rely on a new homotopy continuation technique for tracking the solution path with respect to sequential changes of the observations. We also provide a detailed analysis quantifying its complexity.
Tasks
Published 2019-09-20
URL https://arxiv.org/abs/1909.09365v2
PDF https://arxiv.org/pdf/1909.09365v2.pdf
PWC https://paperswithcode.com/paper/computing-full-conformal-prediction-set-with
Repo https://github.com/EugeneNdiaye/homotopy_conformal_prediction
Framework none
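
For reference, the exact object the paper approximates can be written as a brute-force grid computation: refit with each candidate label appended and keep the candidates whose conformity rank is not extreme. The homotopy contribution replaces this expensive exact refitting; the sketch below only illustrates the definition.

```python
import numpy as np
from sklearn.linear_model import Ridge

def full_conformal_set(X, y, x_new, grid, alpha=0.1):
    kept = []
    for y_cand in grid:
        Xa = np.vstack([X, x_new])            # append the new point...
        ya = np.append(y, y_cand)             # ...with the candidate label
        res = np.abs(ya - Ridge(alpha=1.0).fit(Xa, ya).predict(Xa))
        # Keep y_cand if its residual's rank among all n+1 residuals is
        # not in the alpha-fraction most extreme.
        rank = (res <= res[-1]).sum()
        if rank <= np.ceil((1 - alpha) * len(ya)):
            kept.append(y_cand)
    return kept

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
x_new = rng.normal(size=(1, 3))
print(full_conformal_set(X, y, x_new, grid=np.linspace(-6, 6, 121)))
```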

Energy and Policy Considerations for Deep Learning in NLP

Title Energy and Policy Considerations for Deep Learning in NLP
Authors Emma Strubell, Ananya Ganesh, Andrew McCallum
Abstract Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.
Tasks
Published 2019-06-05
URL https://arxiv.org/abs/1906.02243v1
PDF https://arxiv.org/pdf/1906.02243v1.pdf
PWC https://paperswithcode.com/paper/energy-and-policy-considerations-for-deep
Repo https://github.com/DerwenAI/pytextrank
Framework none
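
The accounting itself is simple arithmetic: scale measured hardware draw by a datacenter PUE and convert to emissions. The PUE of 1.58 and the US-average factor of 0.954 lbs CO2e per kWh are the constants reported in the paper; the wattages in this example are made up.

```python
def training_footprint(hours, cpu_w, dram_w, gpu_w, n_gpus, pue=1.58):
    # Total energy: measured component draw, scaled by the power usage
    # effectiveness of the datacenter, converted from Wh to kWh.
    kwh = pue * hours * (cpu_w + dram_w + n_gpus * gpu_w) / 1000.0
    lbs_co2e = 0.954 * kwh   # US-average grid emission factor
    return kwh, lbs_co2e

kwh, co2 = training_footprint(hours=120, cpu_w=100, dram_w=30, gpu_w=250, n_gpus=8)
print(f"{kwh:.0f} kWh, {co2:.0f} lbs CO2e")  # ~404 kWh, ~385 lbs for this toy run
```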

Generate (non-software) Bugs to Fool Classifiers

Title Generate (non-software) Bugs to Fool Classifiers
Authors Hiromu Yakura, Youhei Akimoto, Jun Sakuma
Abstract In adversarial attacks intended to confound deep learning models, most studies have focused on limiting the magnitude of the modification so that humans do not notice the attack. On the other hand, during an attack against autonomous cars, for example, most drivers would not find it strange if a small insect image were placed on a stop sign, or they may overlook it. In this paper, we present a systematic approach to generate natural adversarial examples against classification models by employing such natural-appearing perturbations that imitate a certain object or signal. We first show the feasibility of this approach in an attack against an image classifier by employing generative adversarial networks that produce image patches that have the appearance of a natural object to fool the target model. We also introduce an algorithm to optimize placement of the perturbation in accordance with the input image, which makes the generation of adversarial examples fast and likely to succeed. Moreover, we experimentally show that the proposed approach can be extended to the audio domain, for example, to generate perturbations that sound like the chirping of birds to fool a speech classifier.
Tasks
Published 2019-11-20
URL https://arxiv.org/abs/1911.08644v1
PDF https://arxiv.org/pdf/1911.08644v1.pdf
PWC https://paperswithcode.com/paper/generate-non-software-bugs-to-fool
Repo https://github.com/hiromu/adversarial_examples_with_bugs
Framework tf
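
The placement step can be sketched as a search over positions for a fixed natural-looking patch, keeping whichever position most raises the classifier's loss. The paper optimizes placement with a smarter algorithm; the exhaustive grid below is just the simplest correct baseline, with a random linear model standing in for a real classifier.

```python
import torch
import torch.nn.functional as F

def best_placement(model, image, patch, true_label, stride=16):
    _, _, H, W = image.shape
    _, _, ph, pw = patch.shape
    best_loss, best_xy = -1.0, (0, 0)
    for top in range(0, H - ph + 1, stride):
        for left in range(0, W - pw + 1, stride):
            attacked = image.clone()
            attacked[:, :, top:top + ph, left:left + pw] = patch  # paste patch
            loss = F.cross_entropy(model(attacked), true_label).item()
            if loss > best_loss:           # keep the most damaging position
                best_loss, best_xy = loss, (top, left)
    return best_xy, best_loss

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 10))
image = torch.rand(1, 3, 64, 64)
patch = torch.rand(1, 3, 12, 12)   # stands in for a GAN-generated insect patch
print(best_placement(model, image, patch, torch.tensor([3])))
```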

Feature Intertwiner for Object Detection

Title Feature Intertwiner for Object Detection
Authors Hongyang Li, Bo Dai, Shaoshuai Shi, Wanli Ouyang, Xiaogang Wang
Abstract A well-trained model should classify objects with a unanimous score for every category. This requires that the high-level semantic features be as alike as possible across samples. To achieve this, previous works focus on re-designing the loss or proposing new regularization constraints. In this paper, we provide a new perspective. For each category, we assume there are two feature sets: one with reliable information and the other from a less reliable source. We argue that the reliable set can guide the feature learning of the less reliable set during training, in the spirit of a student mimicking a teacher's behavior, thus pushing towards a more compact class centroid in the feature space. Such a scheme also benefits the reliable set, since samples become closer within the same category, making them easier for the classifier to identify. We refer to this mutual learning process as the feature intertwiner and embed it into object detection. It is well known that objects of low resolution are more difficult to detect due to the loss of detailed information during the network forward pass (e.g., the RoI operation). We thus regard objects of high resolution as the reliable set and objects of low resolution as the less reliable set. Specifically, an intertwiner is designed to minimize the distribution divergence between the two sets. The choice of an effective feature representation for the reliable set is further investigated, where we introduce optimal transport (OT) theory into the framework. Samples in the less reliable set are better aligned with the aid of the OT metric. Incorporated with such a plug-and-play intertwiner, we achieve an evident improvement over the previous state of the art.
Tasks Object Detection
Published 2019-03-28
URL http://arxiv.org/abs/1903.11851v1
PDF http://arxiv.org/pdf/1903.11851v1.pdf
PWC https://paperswithcode.com/paper/feature-intertwiner-for-object-detection-1
Repo https://github.com/hli2020/feature_intertwiner
Framework pytorch
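
The OT component can be illustrated with entropy-regularized optimal transport (Sinkhorn iterations) between the two feature sets; the resulting transport cost serves as an alignment signal. Feature dimensions and the regularization strength are illustrative.

```python
import numpy as np

def sinkhorn_cost(A, B, eps=0.05, iters=200):
    # Pairwise squared-distance cost between the two feature sets,
    # normalized so the exponentials below stay numerically stable.
    C = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    C = C / C.max()
    K = np.exp(-C / eps)
    a = np.full(len(A), 1.0 / len(A))   # uniform weights on each set
    b = np.full(len(B), 1.0 / len(B))
    u = np.ones(len(A))
    for _ in range(iters):              # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]     # transport plan with marginals a, b
    return (P * C).sum()                # alignment cost between the sets

reliable = np.random.randn(32, 16)            # high-resolution RoI features
less_reliable = np.random.randn(24, 16) + 0.5 # low-resolution counterparts
print(sinkhorn_cost(reliable, less_reliable))
```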

Multi-label Co-regularization for Semi-supervised Facial Action Unit Recognition

Title Multi-label Co-regularization for Semi-supervised Facial Action Unit Recognition
Authors Xuesong Niu, Hu Han, Shiguang Shan, Xilin Chen
Abstract Facial action unit (AU) recognition is essential for emotion analysis and has been widely applied in mental state analysis. Existing work on AU recognition usually requires a big face dataset with AU labels; however, manual AU annotation requires expertise and can be time-consuming. In this work, inspired by co-training methods, we propose a semi-supervised approach for AU recognition utilizing a large number of web face images without AU labels and a relatively small face dataset with AU annotations. Unlike traditional co-training methods that require provided multi-view features and model re-training, we propose a novel co-training method, namely multi-label co-regularization, for semi-supervised facial AU recognition. Two deep neural networks are used to generate multi-view features for both labeled and unlabeled face images, and a multi-view loss is designed to enforce the two feature generators to produce conditionally independent representations. To constrain the prediction consistency of the two views, we further propose a multi-label co-regularization loss that minimizes the distance between the predicted AU probability distributions of the two views. In addition, prior knowledge of the relationships between individual AUs is embedded through a graph convolutional network (GCN) to exploit useful information from the big unlabeled dataset. Experiments on several benchmarks show that the proposed approach can effectively leverage large datasets of face images without AU labels to improve AU recognition accuracy and outperform state-of-the-art semi-supervised AU recognition methods.
Tasks Emotion Recognition, Facial Action Unit Detection
Published 2019-10-24
URL https://arxiv.org/abs/1910.11012v1
PDF https://arxiv.org/pdf/1910.11012v1.pdf
PWC https://paperswithcode.com/paper/multi-label-co-regularization-for-semi
Repo https://github.com/nxsEdson/MLCR
Framework pytorch
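
The prediction-consistency part of the loss is straightforward to sketch: two view networks, supervised BCE where AU labels exist, and a distance between the two views' predicted probabilities on unlabeled images. Shapes and the weighting are illustrative; the paper's multi-view loss and the GCN over AU relations are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

view1 = nn.Linear(512, 12)  # stand-ins for the two feature generators + heads
view2 = nn.Linear(512, 12)  # 12 AUs, multi-label prediction

feat_l = torch.randn(8, 512)                        # labeled faces (features)
labels = torch.randint(0, 2, (8, 12)).float()       # per-AU binary labels
feat_u = torch.randn(32, 512)                       # unlabeled web faces

p1_l, p2_l = torch.sigmoid(view1(feat_l)), torch.sigmoid(view2(feat_l))
supervised = (F.binary_cross_entropy(p1_l, labels)
              + F.binary_cross_entropy(p2_l, labels))

# Co-regularization: push the two views' AU probabilities together on
# the large unlabeled set.
p1_u, p2_u = torch.sigmoid(view1(feat_u)), torch.sigmoid(view2(feat_u))
co_reg = ((p1_u - p2_u) ** 2).mean()

loss = supervised + 0.1 * co_reg                    # 0.1 is an illustrative weight
loss.backward()
```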