October 17, 2019

3375 words 16 mins read

Paper Group ANR 721

A study on the use of Boundary Equilibrium GAN for Approximate Frontalization of Unconstrained Faces to aid in Surveillance. The Taboo Trap: Behavioural Detection of Adversarial Samples. Constrained Counting and Sampling: Bridging the Gap between Theory and Practice. A Survey on Nonconvex Regularization Based Sparse and Low-Rank Recovery in Signal …

A study on the use of Boundary Equilibrium GAN for Approximate Frontalization of Unconstrained Faces to aid in Surveillance

Title A study on the use of Boundary Equilibrium GAN for Approximate Frontalization of Unconstrained Faces to aid in Surveillance
Authors Wazeer Zulfikar, Sebastin Santy, Sahith Dambekodi, Tirtharaj Dash
Abstract Face frontalization is the process of synthesizing frontal views of faces from their angled poses. We implement a generative adversarial network (GAN) with spherical linear interpolation (Slerp) for frontalization of unconstrained facial images. We focus in particular on generating approximate frontal faces from side-posed images captured by surveillance cameras. Specifically, the present work is a comprehensive study of an auto-encoder based Boundary Equilibrium GAN (BEGAN) that generates frontal faces from an interpolation of a side-view face and its mirrored view. To increase the quality of the interpolated output, we implement a BEGAN with Slerp. This approach produces promising output along with faster and more stable training. The BEGAN model additionally has a balanced generator-discriminator combination, which prevents mode collapse, along with a global convergence measure. Such an approximate face generation model is expected to be able to replace the face composites used in surveillance and crime detection.
Tasks Face Generation
Published 2018-09-14
URL http://arxiv.org/abs/1809.05611v1
PDF http://arxiv.org/pdf/1809.05611v1.pdf
PWC https://paperswithcode.com/paper/a-study-on-the-use-of-boundary-equilibrium
Repo
Framework
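
The spherical linear interpolation (Slerp) the abstract refers to walks along the great circle between two latent codes rather than a straight line, which tends to keep interpolants on the data manifold. A minimal NumPy sketch, assuming the two codes come from the BEGAN encoder for a side-view face and its mirrored view (variable names are illustrative, not from the paper's code):

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors."""
    z0_n = z0 / np.linalg.norm(z0)
    z1_n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0_n, z1_n), -1.0, 1.0))  # angle between the codes
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # (nearly) parallel vectors: fall back to lerp
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Hypothetical usage: z_side and z_mirror would come from the BEGAN encoder;
# t = 0.5 gives the midpoint code decoded into the approximate frontal face.
z_side, z_mirror = np.random.randn(64), np.random.randn(64)
z_front = slerp(z_side, z_mirror, 0.5)
```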

The Taboo Trap: Behavioural Detection of Adversarial Samples

Title The Taboo Trap: Behavioural Detection of Adversarial Samples
Authors Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson
Abstract Deep Neural Networks (DNNs) have become a powerful tool for a wide range of problems. Yet recent work has found an increasing variety of adversarial samples that can fool them. Most existing detection mechanisms against adversarial attacks impose significant costs, either by using additional classifiers to spot adversarial samples, or by requiring the DNN to be restructured. In this paper, we introduce a novel defence. We train our DNN so that, as long as it is working as intended on the kind of inputs we expect, its behaviour is constrained, in that some set of behaviours are taboo. If it is exposed to adversarial samples, they will often cause a taboo behaviour, which we can detect. Taboos can be both subtle and diverse, so their choice can encode and hide information. It is a well-established design principle that the security of a system should not depend on the obscurity of its design, but on some variable (the key) which can differ between implementations and be changed as necessary. We discuss how taboos can be used to equip a classifier with just such a key, and how to tune the keying mechanism to adversaries of various capabilities. We evaluate the performance of a prototype against a wide range of attacks and show how our simple defence can defend against cheap attacks at scale with zero run-time computation overhead, making it a suitable defence method for IoT devices.
Tasks
Published 2018-11-18
URL https://arxiv.org/abs/1811.07375v2
PDF https://arxiv.org/pdf/1811.07375v2.pdf
PWC https://paperswithcode.com/paper/the-taboo-trap-behavioural-detection-of
Repo
Framework
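
As a rough sketch of the general idea (not the paper's exact keying scheme), one can pick a secret activation threshold, add a penalty during training so that benign inputs stay below it, and flag any input that exceeds it at run time. The threshold, layer choice, and penalty weight below are illustrative assumptions:

```python
import torch

# Sketch of a taboo-style trap: benign activations are trained to stay under a secret
# threshold (part of the "key"); any input that trips it is flagged as suspicious.

TABOO_THRESHOLD = 5.0  # illustrative; in practice chosen per layer and kept secret

def taboo_penalty(activations, threshold=TABOO_THRESHOLD):
    """Hinge-style penalty added to the training loss for over-threshold activations."""
    return torch.clamp(activations - threshold, min=0.0).pow(2).sum()

def trips_taboo(activations, threshold=TABOO_THRESHOLD):
    """Run-time check with negligible overhead: any activation above the threshold is flagged."""
    return bool((activations > threshold).any())

# Hypothetical usage with a hidden-layer output h of shape (batch, features):
h = torch.randn(8, 128)
extra_loss = 0.01 * taboo_penalty(h)   # would be added to the usual task loss during training
print(trips_taboo(h * 10))             # a large-activation input trips the trap
```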

Constrained Counting and Sampling: Bridging the Gap between Theory and Practice

Title Constrained Counting and Sampling: Bridging the Gap between Theory and Practice
Authors Kuldeep S. Meel
Abstract Constrained counting and sampling are two fundamental problems in Computer Science with numerous applications, including network reliability, privacy, probabilistic reasoning, and constrained-random verification. In constrained counting, the task is to compute the total weight, subject to a given weighting function, of the set of solutions of the given constraints. In constrained sampling, the task is to sample randomly, subject to a given weighting function, from the set of solutions to a set of given constraints. Consequently, constrained counting and sampling have been subject to intense theoretical and empirical investigations over the years. Prior work, however, offered either heuristic techniques with poor guarantees of accuracy or approaches with proven guarantees but poor performance in practice. In this thesis, we introduce a novel hashing-based algorithmic framework for constrained sampling and counting that combines the classical algorithmic technique of universal hashing with the dramatic progress made in combinatorial reasoning tools, in particular SAT and SMT, over the past two decades. The resulting frameworks for counting (ApproxMC2) and sampling (UniGen) can handle formulas with up to a million variables, a significant boost over prior state-of-the-art tools, which could handle only a few hundred variables. If the initial set of constraints is expressed in Disjunctive Normal Form (DNF), ApproxMC is the only known Fully Polynomial Randomized Approximation Scheme (FPRAS) that does not involve Monte Carlo steps. By exploiting the connection between the definability of formulas and the variance of the distribution of solutions in a cell defined by 3-universal hash functions, we introduce an algorithmic technique, MIS, that reduces the size of the XOR constraints employed in the underlying universal hash functions by as much as two orders of magnitude.
Tasks
Published 2018-06-06
URL http://arxiv.org/abs/1806.02239v1
PDF http://arxiv.org/pdf/1806.02239v1.pdf
PWC https://paperswithcode.com/paper/constrained-counting-and-sampling-bridging
Repo
Framework
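
The hashing-based idea can be illustrated with a toy counter in which brute-force enumeration stands in for the SAT oracle: random XOR (parity) constraints split the solution space into roughly equal cells, one cell is counted, and the count is scaled by the number of cells. This is only a sketch of the principle, not ApproxMC2 itself:

```python
import itertools, random, statistics

def solutions(n, formula):
    """Brute-force stand-in for a SAT oracle: all satisfying assignments of n variables."""
    return [a for a in itertools.product([0, 1], repeat=n) if formula(a)]

def random_xor(n):
    """A random XOR (parity) constraint over n Boolean variables."""
    coeffs = [random.randint(0, 1) for _ in range(n)]
    parity = random.randint(0, 1)
    return lambda a: sum(c * x for c, x in zip(coeffs, a)) % 2 == parity

def hashed_count(n, formula, m, trials=20):
    """Estimate the solution count: count one hash cell, scale by the 2**m cells."""
    estimates = []
    for _ in range(trials):
        xors = [random_xor(n) for _ in range(m)]
        cell = [a for a in solutions(n, formula) if all(h(a) for h in xors)]
        estimates.append(len(cell) * (2 ** m))
    return statistics.median(estimates)

# Example: count assignments of 10 variables with at least six ones (true count is 386).
f = lambda a: sum(a) >= 6
print(len(solutions(10, f)), hashed_count(10, f, m=3))
```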

A Survey on Nonconvex Regularization Based Sparse and Low-Rank Recovery in Signal Processing, Statistics, and Machine Learning

Title A Survey on Nonconvex Regularization Based Sparse and Low-Rank Recovery in Signal Processing, Statistics, and Machine Learning
Authors Fei Wen, Lei Chu, Peilin Liu, Robert C. Qiu
Abstract In the past decade, sparse and low-rank recovery have drawn much attention in many areas such as signal/image processing, statistics, bioinformatics and machine learning. To induce sparsity and/or low-rankness, the $\ell_1$ norm and the nuclear norm are among the most popular regularization penalties due to their convexity. While the $\ell_1$ and nuclear norms are convenient because the related convex optimization problems are usually tractable, it has been shown in many applications that a nonconvex penalty can yield significantly better performance. Recently, nonconvex regularization based sparse and low-rank recovery has attracted considerable interest and is, in fact, a main driver of the recent progress in nonconvex and nonsmooth optimization. This paper gives an overview of this topic across various fields in signal processing, statistics and machine learning, including compressive sensing (CS), sparse regression and variable selection, sparse signal separation, sparse principal component analysis (PCA), estimation of large covariance and inverse covariance matrices, matrix completion, and robust PCA. We present recent developments of nonconvex regularization based sparse and low-rank recovery in these fields, addressing the issues of penalty selection, applications, and the convergence of nonconvex algorithms. Code is available at https://github.com/FWen/ncreg.git.
Tasks Compressive Sensing, Matrix Completion
Published 2018-08-16
URL https://arxiv.org/abs/1808.05403v3
PDF https://arxiv.org/pdf/1808.05403v3.pdf
PWC https://paperswithcode.com/paper/nonconvex-regularization-based-sparse-and-low
Repo
Framework
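
The difference between convex and nonconvex penalties is easiest to see through their thresholding (proximal) operators. The sketch below contrasts the $\ell_1$ soft-thresholding rule with a simple hard-thresholding rule standing in for a nonconvex, $\ell_0$-type penalty; the threshold values are illustrative only:

```python
import numpy as np

def soft_threshold(x, lam):
    """Prox of lam*||x||_1: shrinks every coefficient toward zero by lam (introduces bias)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    """Hard thresholding (an l0-style rule): keeps large coefficients unbiased, zeroes the rest."""
    return np.where(np.abs(x) > lam, x, 0.0)

x = np.array([-3.0, -0.4, 0.2, 1.5, 4.0])
print(soft_threshold(x, 1.0))   # large entries are shrunk toward zero
print(hard_threshold(x, 1.0))   # large entries kept exactly, small ones removed
```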

Accurate Facial Parts Localization and Deep Learning for 3D Facial Expression Recognition

Title Accurate Facial Parts Localization and Deep Learning for 3D Facial Expression Recognition
Authors Asim Jan, Huaxiong Ding, Hongying Meng, Liming Chen, Huibin Li
Abstract Meaningful facial parts can convey key cues for both facial action unit detection and expression prediction. A textured 3D face scan can provide both detailed 3D geometric shape and 2D texture appearance cues of the face, which are beneficial for Facial Expression Recognition (FER). However, accurate facial parts extraction as well as their fusion are challenging tasks. In this paper, a novel system for 3D FER is designed based on accurate facial parts extraction and deep feature fusion of facial parts. In particular, each textured 3D face scan is first represented as a 2D texture map and a depth map with one-to-one dense correspondence. Then, the facial parts of both the texture map and the depth map are extracted using a novel 4-stage process consisting of facial landmark localization, facial rotation correction, facial resizing, facial parts bounding box extraction and post-processing procedures. Finally, deep fusion Convolutional Neural Network (CNN) features of all facial parts are learned from both texture maps and depth maps, respectively, and nonlinear SVMs are used for expression prediction. Experiments are conducted on the BU-3DFE database, demonstrating the effectiveness of combining different facial parts, texture, and depth cues, and reporting state-of-the-art results in comparison with all existing methods under the same setting.
Tasks 3D Facial Expression Recognition, Action Unit Detection, Face Alignment, Facial Action Unit Detection, Facial Expression Recognition
Published 2018-03-04
URL http://arxiv.org/abs/1803.05846v1
PDF http://arxiv.org/pdf/1803.05846v1.pdf
PWC https://paperswithcode.com/paper/accurate-facial-parts-localization-and-deep
Repo
Framework
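
Two of the steps named in the abstract, rotation correction from eye landmarks and per-part bounding box extraction, are straightforward to sketch. The landmark coordinates and margin below are made-up examples, not the paper's settings:

```python
import numpy as np

def roll_angle(left_eye, right_eye):
    """In-plane rotation angle (degrees) that would make the eye line horizontal."""
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    return np.degrees(np.arctan2(dy, dx))

def part_bbox(landmarks, margin=8):
    """Axis-aligned bounding box around a group of part landmarks, padded by a margin."""
    xs, ys = landmarks[:, 0], landmarks[:, 1]
    return (xs.min() - margin, ys.min() - margin, xs.max() + margin, ys.max() + margin)

# Hypothetical mouth landmarks (pixel coordinates) and eye centers:
mouth = np.array([[110, 160], [150, 158], [130, 175]])
print(roll_angle((60, 100), (140, 96)), part_bbox(mouth))
```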

Training deep learning based image denoisers from undersampled measurements without ground truth and without image prior

Title Training deep learning based image denoisers from undersampled measurements without ground truth and without image prior
Authors Magauiya Zhussip, Shakarim Soltanayev, Se Young Chun
Abstract Compressive sensing is a method to recover the original image from undersampled measurements. In order to overcome the ill-posedness of this inverse problem, image priors are used, such as sparsity in the wavelet domain, minimum total variation, or self-similarity. Recently, deep learning based compressive image recovery methods have been proposed and have yielded state-of-the-art performance. They use deep learning based data-driven approaches instead of hand-crafted image priors to solve the ill-posed inverse problem with undersampled data. Ironically, training deep neural networks for them requires “clean” ground truth images, but obtaining the best quality images from undersampled data requires well-trained deep neural networks. To resolve this dilemma, we propose novel methods based on two well-grounded theories: denoiser-approximate message passing and Stein’s unbiased risk estimator. Our proposed methods were able to train deep learning based image denoisers from undersampled measurements without ground truth images and without image priors, and to recover images of state-of-the-art quality from undersampled data. We evaluated our methods on various compressive sensing recovery problems with Gaussian random, coded diffraction pattern, and compressive sensing MRI measurement matrices. Our methods yielded state-of-the-art performance in all cases without ground truth images and without image priors. They also yielded performance comparable to methods trained with ground truth data.
Tasks Compressive Sensing
Published 2018-06-04
URL http://arxiv.org/abs/1806.00961v2
PDF http://arxiv.org/pdf/1806.00961v2.pdf
PWC https://paperswithcode.com/paper/training-deep-learning-based-image-denoisers
Repo
Framework
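
Stein’s unbiased risk estimator can be turned into a training loss that needs only the noisy data itself, typically via a Monte-Carlo estimate of the divergence term. The PyTorch sketch below follows that general MC-SURE recipe, not the paper's exact training code; the noise level sigma and probe step eps are assumed known or tuned:

```python
import torch

def mc_sure_loss(denoiser, y, sigma, eps=1e-3):
    """Monte-Carlo SURE: fidelity term minus sigma^2 plus a randomized divergence estimate."""
    n = y.numel()
    out = denoiser(y)
    b = torch.randn_like(y)                                     # random probe vector
    div = (b * (denoiser(y + eps * b) - out)).sum() / eps       # MC estimate of the divergence
    return ((out - y) ** 2).sum() / n - sigma ** 2 + (2 * sigma ** 2 / n) * div

# Hypothetical usage with any torch denoiser mapping an image to an image of the same shape:
denoiser = torch.nn.Sequential(torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
                               torch.nn.Conv2d(16, 1, 3, padding=1))
y = torch.randn(1, 1, 32, 32)            # stand-in for a noisy measurement
loss = mc_sure_loss(denoiser, y, sigma=0.1)
loss.backward()                          # trains the denoiser without clean ground truth
```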

A Deeper Look into Dependency-Based Word Embeddings

Title A Deeper Look into Dependency-Based Word Embeddings
Authors Sean MacAvaney, Amir Zeldes
Abstract We investigate the effect of various dependency-based word embeddings on distinguishing between functional and domain similarity, word similarity rankings, and two downstream tasks in English. Variations include word embeddings trained using context windows from Stanford and Universal dependencies at several levels of enhancement (ranging from unlabeled, to Enhanced++ dependencies). Results are compared to basic linear contexts and evaluated on several datasets. We found that embeddings trained with Universal and Stanford dependency contexts excel at different tasks, and that enhanced dependencies often improve performance.
Tasks Word Embeddings
Published 2018-04-16
URL http://arxiv.org/abs/1804.05972v1
PDF http://arxiv.org/pdf/1804.05972v1.pdf
PWC https://paperswithcode.com/paper/a-deeper-look-into-dependency-based-word
Repo
Framework
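
Dependency-based contexts replace the linear window around a word with its syntactic neighbours, so the same sentence yields quite different (word, context) pairs. The toy parse below is hand-written for illustration; the paper's actual contexts come from Stanford or Universal dependency parsers:

```python
# Toy sentence with hand-written (head index, relation, dependent index) triples.
sentence = ["australian", "scientist", "discovers", "star", "with", "telescope"]
deps = [(1, "amod", 0), (2, "nsubj", 1), (2, "dobj", 3), (2, "prep_with", 5)]

def dependency_contexts(tokens, deps):
    """(word, context) pairs from dependency arcs, labeled in both directions."""
    pairs = []
    for head, rel, dep in deps:
        pairs.append((tokens[head], f"{tokens[dep]}/{rel}"))       # head sees its dependent
        pairs.append((tokens[dep], f"{tokens[head]}/{rel}^-1"))    # dependent sees its head
    return pairs

def window_contexts(tokens, k=2):
    """Basic linear-window contexts for comparison."""
    return [(w, c) for i, w in enumerate(tokens)
            for c in tokens[max(0, i - k):i] + tokens[i + 1:i + 1 + k]]

print(dependency_contexts(sentence, deps)[:4])
print(window_contexts(sentence)[:4])
```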

Scale-Invariant Structure Saliency Selection for Fast Image Fusion

Title Scale-Invariant Structure Saliency Selection for Fast Image Fusion
Authors Yixiong Liang, Yuan Mao, Jiazhi Xia, Yao Xiang, Jianfeng Liu
Abstract In this paper, we present a fast yet effective method for pixel-level, scale-invariant image fusion in the spatial domain based on scale-space theory. Specifically, we propose a scale-invariant structure saliency selection scheme based on the difference-of-Gaussian (DoG) pyramid of images to build the weights or activity map. Due to the scale-invariant structure saliency selection, our method can preserve both the details of small objects and the integrity of large objects in images. In addition, our method is very efficient since no complex operations are involved, and it is easy to implement, so it can be used for fast fusion of high-resolution images. Experimental results demonstrate that the proposed method yields competitive or even better results compared to state-of-the-art image fusion methods, both in terms of visual quality and objective evaluation metrics. Furthermore, the proposed method is very fast and can be used to fuse high-resolution images in real time. Code is available at https://github.com/yiqingmy/Fusion.
Tasks
Published 2018-10-30
URL http://arxiv.org/abs/1810.12553v1
PDF http://arxiv.org/pdf/1810.12553v1.pdf
PWC https://paperswithcode.com/paper/scale-invariant-structure-saliency-selection
Repo
Framework
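
A rough sketch of the kind of DoG-based activity map the abstract describes: per-pixel saliency is accumulated from difference-of-Gaussian responses across scales and then normalized into fusion weights. The scales, sigmas and two-image setting are illustrative choices, not the paper's configuration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_saliency(img, sigmas=(1.0, 2.0, 4.0)):
    """Accumulate absolute difference-of-Gaussian responses across scales as a saliency map."""
    blurred = [gaussian_filter(img, s) for s in (0.0,) + sigmas]
    return sum(np.abs(blurred[i] - blurred[i + 1]) for i in range(len(sigmas)))

def fuse(img_a, img_b, eps=1e-8):
    """Weight each pixel by the relative saliency of the two source images."""
    w_a, w_b = dog_saliency(img_a), dog_saliency(img_b)
    w_a = w_a / (w_a + w_b + eps)                # per-pixel activity/weight map
    return w_a * img_a + (1.0 - w_a) * img_b

a, b = np.random.rand(64, 64), np.random.rand(64, 64)   # stand-ins for registered source images
print(fuse(a, b).shape)
```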

Learning acoustic word embeddings with phonetically associated triplet network

Title Learning acoustic word embeddings with phonetically associated triplet network
Authors Hyungjun Lim, Younggwan Kim, Youngmoon Jung, Myunghun Jung, Hoirin Kim
Abstract Previous research on acoustic word embeddings used in query-by-example spoken term detection has shown remarkable performance improvements when using a triplet network. However, the triplet network is trained using only limited information about acoustic similarity between words. In this paper, we propose a novel architecture, the phonetically associated triplet network (PATN), which aims to increase the discriminative power of acoustic word embeddings by utilizing phonetic information as well as word identity. The proposed model is trained to minimize a combined loss function constructed by adding a cross-entropy loss to the lower layer of an LSTM-based triplet network. We observed that the proposed method performs significantly better than the baseline triplet network on a word discrimination task with the WSJ dataset, resulting in over 20% relative improvement in recall rate at 1.0 false alarm per hour. Finally, we examined its generalization ability by conducting an out-of-domain test on the RM dataset.
Tasks Word Embeddings
Published 2018-11-07
URL http://arxiv.org/abs/1811.02736v3
PDF http://arxiv.org/pdf/1811.02736v3.pdf
PWC https://paperswithcode.com/paper/learning-acoustic-word-embeddings-with
Repo
Framework
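
The combined objective can be sketched as a triplet loss on the embedding outputs plus a cross-entropy (phonetic) loss attached to a lower layer. The loss weight, margin and tensor shapes below are assumptions for illustration, not the paper's hyperparameters:

```python
import torch
import torch.nn.functional as F

def patn_style_loss(anchor, positive, negative, phone_logits, phone_targets,
                    margin=0.5, alpha=0.3):
    """Triplet loss on word embeddings plus a weighted cross-entropy loss on phone predictions."""
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    phonetic = F.cross_entropy(phone_logits, phone_targets)
    return triplet + alpha * phonetic

# Hypothetical tensors standing in for network outputs (batch of 16, 128-dim embeddings,
# 50 frames per word, 40 phone classes):
anchor, positive, negative = (torch.randn(16, 128) for _ in range(3))
phone_logits = torch.randn(16 * 50, 40)
phone_targets = torch.randint(0, 40, (16 * 50,))
print(patn_style_loss(anchor, positive, negative, phone_logits, phone_targets))
```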

Improved Image Selection for Stack-Based HDR Imaging

Title Improved Image Selection for Stack-Based HDR Imaging
Authors Peter van Beek
Abstract Stack-based high dynamic range (HDR) imaging is a technique for achieving a larger dynamic range in an image by combining several low dynamic range images acquired at different exposures. Minimizing the set of images to combine, while ensuring that the resulting HDR image fully captures the scene’s irradiance, is important to avoid long image acquisition and post-processing times. The problem of selecting the set of images has received much attention. However, existing methods either are not fully automatic, can be slow, or can fail to fully capture more challenging scenes. In this paper, we propose a fully automatic method for selecting the set of exposures to acquire that is both fast and more accurate than existing approaches. We show on an extensive set of benchmark scenes that our proposed method leads to improved HDR images as measured against ground truth using the mean squared error, a pixel-based metric, and a visible difference predictor and a quality score, both perception-based metrics.
Tasks
Published 2018-06-19
URL http://arxiv.org/abs/1806.07420v1
PDF http://arxiv.org/pdf/1806.07420v1.pdf
PWC https://paperswithcode.com/paper/improved-image-selection-for-stack-based-hdr
Repo
Framework

Learning Adaptive Display Exposure for Real-Time Advertising

Title Learning Adaptive Display Exposure for Real-Time Advertising
Authors Weixun Wang, Junqi Jin, Jianye Hao, Chunjie Chen, Chuan Yu, Weinan Zhang, Jun Wang, Xiaotian Hao, Yixi Wang, Han Li, Jian Xu, Kun Gai
Abstract In E-commerce advertising, where product recommendations and product ads are presented to users simultaneously, the traditional setting is to display ads at fixed positions. However, under such a setting, the advertising system loses the flexibility to control the number and positions of ads, resulting in sub-optimal platform revenue and user experience. Consequently, major e-commerce platforms (e.g., Taobao.com) have begun to consider more flexible ways to display ads. In this paper, we investigate the problem of advertising with adaptive exposure: can we dynamically determine the number and positions of ads for each user visit under certain business constraints so that the platform revenue can be increased? More specifically, we consider two types of constraints: a request-level constraint that ensures user experience for each user visit, and a platform-level constraint that controls the overall platform monetization rate. We model this problem as a Constrained Markov Decision Process with per-state constraints (psCMDP) and propose a constrained two-level reinforcement learning approach to decompose the original problem into two relatively independent sub-problems. To accelerate policy learning, we also devise a constrained hindsight experience replay mechanism. Experimental evaluations on industry-scale real-world datasets demonstrate both the merit of our approach in obtaining higher revenue under the constraints and the effectiveness of the constrained hindsight experience replay mechanism.
Tasks
Published 2018-09-10
URL https://arxiv.org/abs/1809.03149v2
PDF https://arxiv.org/pdf/1809.03149v2.pdf
PWC https://paperswithcode.com/paper/learning-to-advertise-with-adaptive-exposure
Repo
Framework

Curiosity Driven Exploration of Learned Disentangled Goal Spaces

Title Curiosity Driven Exploration of Learned Disentangled Goal Spaces
Authors Adrien Laversanne-Finot, Alexandre Péré, Pierre-Yves Oudeyer
Abstract Intrinsically motivated goal exploration processes enable agents to autonomously sample goals in order to efficiently explore complex environments with high-dimensional continuous actions. They have been applied successfully to real-world robots to discover repertoires of policies producing a wide diversity of effects. These algorithms have often relied on engineered goal spaces, but it was recently shown that one can use deep representation learning algorithms to learn an adequate goal space in simple environments. However, in the case of more complex environments containing multiple objects or distractors, efficient exploration requires that the structure of the goal space reflect that of the environment. In this paper we show that using a disentangled goal space leads to better exploration performance than an entangled goal space. We further show that when the representation is disentangled, one can leverage it by sampling goals that maximize learning progress in a modular manner. Finally, we show that the measure of learning progress used to drive curiosity-driven exploration can simultaneously be used to discover abstract, independently controllable features of the environment.
Tasks Efficient Exploration, Representation Learning
Published 2018-07-04
URL http://arxiv.org/abs/1807.01521v3
PDF http://arxiv.org/pdf/1807.01521v3.pdf
PWC https://paperswithcode.com/paper/curiosity-driven-exploration-of-learned
Repo
Framework
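
One simple way to realize modular, learning-progress-driven goal sampling is to treat each disentangled latent dimension as its own goal module, track a running estimate of competence progress per module, and sample modules in proportion to that progress. The sketch below makes several simplifying assumptions (scalar competence scores, uniform within-module goal sampling, illustrative window and epsilon), so it is an illustration of the principle rather than the paper's algorithm:

```python
import random

class GoalModule:
    """One goal module, e.g. one disentangled latent dimension with its competence history."""
    def __init__(self, low=-1.0, high=1.0):
        self.low, self.high, self.history = low, high, []

    def record(self, competence):
        self.history.append(competence)

    def learning_progress(self, window=10):
        h = self.history
        if len(h) < 2 * window:
            return 1.0                      # optimistic default before enough data
        recent = sum(h[-window:]) / window
        older = sum(h[-2 * window:-window]) / window
        return abs(recent - older)          # absolute change in competence = progress

    def sample_goal(self):
        return random.uniform(self.low, self.high)

def sample_modular_goal(modules, eps=0.1):
    """Pick a module proportionally to its learning progress, then a goal inside it."""
    if random.random() < eps:
        m = random.choice(modules)          # keep some uniform exploration
    else:
        progress = [mod.learning_progress() for mod in modules]
        m = random.choices(modules, weights=progress)[0]
    return m, m.sample_goal()

modules = [GoalModule() for _ in range(4)]  # e.g. one per disentangled latent dimension
print(sample_modular_goal(modules))
```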

An Approach to Vehicle Trajectory Prediction Using Automatically Generated Traffic Maps

Title An Approach to Vehicle Trajectory Prediction Using Automatically Generated Traffic Maps
Authors Jannik Quehl, Haohao Hu, Sascha Wirges, Martin Lauer
Abstract Trajectory and intention prediction of traffic participants is an important task in automated driving and crucial for safe interaction with the environment. In this paper, we present a new approach to vehicle trajectory prediction based on automatically generated maps containing statistical information about the behavior of traffic participants in a given area. These maps are generated based on trajectory observations using image processing and map matching techniques and contain all typical vehicle movements and probabilities in the considered area. Our prediction approach matches an observed trajectory to a behavior contained in the map and uses this information to generate a prediction. We evaluated our approach on a dataset containing over 14000 trajectories and found that it produces significantly more precise mid-term predictions compared to motion model-based prediction approaches.
Tasks Trajectory Prediction
Published 2018-02-23
URL http://arxiv.org/abs/1802.08632v2
PDF http://arxiv.org/pdf/1802.08632v2.pdf
PWC https://paperswithcode.com/paper/an-approach-to-vehicle-trajectory-prediction
Repo
Framework

Resource Allocation for a Wireless Coexistence Management System Based on Reinforcement Learning

Title Resource Allocation for a Wireless Coexistence Management System Based on Reinforcement Learning
Authors Philip Soeffker, Dimitri Block, Nico Wiebusch, Uwe Meier
Abstract In industrial environments, an increasing number of wireless devices are used, which utilize license-free bands. As a consequence, mutual interference between these wireless systems might degrade the state of coexistence. Therefore, a central coexistence management system is needed, which allocates conflict-free resources to wireless systems. To ensure conflict-free resource utilization, it is useful to predict the prospective medium utilization before resources are allocated. This paper presents a self-learning concept based on reinforcement learning. A simulative evaluation of reinforcement learning agents based on neural networks, called deep Q-networks and double deep Q-networks, was carried out for exemplary and practically relevant coexistence scenarios. The evaluation of the double deep Q-network showed that a prediction accuracy of at least 98% can be reached in all investigated scenarios.
Tasks
Published 2018-05-24
URL http://arxiv.org/abs/1806.04702v1
PDF http://arxiv.org/pdf/1806.04702v1.pdf
PWC https://paperswithcode.com/paper/resource-allocation-for-a-wireless
Repo
Framework
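
The double deep Q-network mentioned above decouples action selection from action evaluation: the online network picks the next action and the target network scores it. A small sketch of that standard target computation (batch shapes and the discount factor are illustrative):

```python
import torch

def double_dqn_target(reward, next_q_online, next_q_target, done, gamma=0.99):
    """Double DQN target: select the next action with the online net, evaluate it with the target net."""
    best_action = next_q_online.argmax(dim=1, keepdim=True)          # action selection (online)
    next_value = next_q_target.gather(1, best_action).squeeze(1)     # action evaluation (target)
    return reward + gamma * (1.0 - done) * next_value

# Hypothetical batch of 4 transitions over 8 discrete resource/channel choices:
reward, done = torch.rand(4), torch.zeros(4)
next_q_online, next_q_target = torch.randn(4, 8), torch.randn(4, 8)
print(double_dqn_target(reward, next_q_online, next_q_target, done))
```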

Statistical Robust Chinese Remainder Theorem for Multiple Numbers: Wrapped Gaussian Mixture Model

Title Statistical Robust Chinese Remainder Theorem for Multiple Numbers: Wrapped Gaussian Mixture Model
Authors Nan Du, Zhikang Wang, Hanshen Xiao
Abstract The generalized Chinese Remainder Theorem (CRT) has been shown to be a powerful approach to solving the ambiguity resolution problem. However, given its close relationship to number theory, study in this area has mainly taken a coding theory perspective under deterministic conditions. Nevertheless, it can be proved that even under the best known deterministic condition, the probability of successful robust reconstruction degrades exponentially as the number of estimands increases. In this paper, we present the first rigorous analysis of the underlying statistical model of CRT-based multiple parameter estimation, where a generalized Gaussian mixture with background knowledge of the sampling is proposed. To address the problem, two novel approaches are introduced. One is to directly calculate the conditional maximum a posteriori (MAP) estimate of the residue clustering, and the other is to iteratively search for the MAP of both the common residues and the clustering. Moreover, remainder error-correcting codes are introduced to further improve robustness. It is shown that this statistically based scheme achieves much stronger robustness compared to state-of-the-art deterministic schemes, especially in low and medium Signal-to-Noise Ratio (SNR) scenarios.
Tasks
Published 2018-11-28
URL http://arxiv.org/abs/1811.11339v1
PDF http://arxiv.org/pdf/1811.11339v1.pdf
PWC https://paperswithcode.com/paper/statistical-robust-chinese-remainder-theorem
Repo
Framework
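
For background, plain noise-free Chinese Remainder reconstruction with pairwise coprime moduli looks as follows; the paper's contribution is the robust, statistical extension of this setting, where residues are noisy and must first be clustered across multiple unknowns:

```python
from math import prod

def crt(residues, moduli):
    """Reconstruct x mod prod(moduli) from its residues; moduli must be pairwise coprime."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # pow(..., -1, m) is the modular inverse (Python 3.8+)
    return x % M

moduli = [3, 5, 7]
n = 52
residues = [n % m for m in moduli]
print(crt(residues, moduli))             # recovers 52 from its residues mod 3, 5, 7
```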