October 19, 2019

2984 words 15 mins read

Paper Group ANR 167

DIY Human Action Data Set Generation. Every Smile is Unique: Landmark-Guided Diverse Smile Generation. NAPS: Natural Program Synthesis Dataset. Towards Mixed Optimization for Reinforcement Learning with Program Synthesis. Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Enslaving the A …

DIY Human Action Data Set Generation

Title DIY Human Action Data Set Generation
Authors Mehran Khodabandeh, Hamid Reza Vaezi Joze, Ilya Zharkov, Vivek Pradeep
Abstract The recent successes in applying deep learning techniques to standard computer vision problems have inspired researchers to propose new computer vision problems in different domains. As previously established in the field, training data plays a significant role in the machine learning process, especially for deep learning approaches, which are data hungry. To solve each new problem and achieve decent performance, a large amount of data needs to be captured, which may in many cases pose logistical difficulties. Therefore, the ability to generate de novo data or expand an existing data set, however small, to satisfy the data requirements of current networks may be invaluable. Herein, we introduce a novel way to partition an action video clip into action, subject and context. Each part is manipulated separately and reassembled with our proposed video generation technique. Furthermore, our novel human skeleton trajectory generation, together with our proposed video generation technique, enables us to generate unlimited action recognition training data. These techniques enable us to generate video action clips from a small set without costly and time-consuming data acquisition. Lastly, we demonstrate through an extensive set of experiments on two small human action recognition data sets that this new data generation technique can improve the performance of current action recognition neural networks.
Tasks Temporal Action Localization, Video Generation
Published 2018-03-29
URL http://arxiv.org/abs/1803.11264v1
PDF http://arxiv.org/pdf/1803.11264v1.pdf
PWC https://paperswithcode.com/paper/diy-human-action-data-set-generation
Repo
Framework

Every Smile is Unique: Landmark-Guided Diverse Smile Generation

Title Every Smile is Unique: Landmark-Guided Diverse Smile Generation
Authors Wei Wang, Xavier Alameda-Pineda, Dan Xu, Pascal Fua, Elisa Ricci, Nicu Sebe
Abstract Each smile is unique: one person surely smiles in different ways (e.g., closing/opening the eyes or mouth). Given one input image of a neutral face, can we generate multiple smile videos with distinctive characteristics? To tackle this one-to-many video generation problem, we propose a novel deep learning architecture named Conditional Multi-Mode Network (CMM-Net). To better encode the dynamics of facial expressions, CMM-Net explicitly exploits facial landmarks for generating smile sequences. Specifically, a variational auto-encoder is used to learn a facial landmark embedding. This single embedding is then exploited by a conditional recurrent network which generates a landmark embedding sequence conditioned on a specific expression (e.g., spontaneous smile). Next, the generated landmark embeddings are fed into a multi-mode recurrent landmark generator, producing a set of landmark sequences still associated with the given smile class but clearly distinct from each other. Finally, these landmark sequences are translated into face videos. Our experimental results demonstrate the effectiveness of our CMM-Net in generating realistic videos of multiple smile expressions.
Tasks Video Generation
Published 2018-02-06
URL http://arxiv.org/abs/1802.01873v3
PDF http://arxiv.org/pdf/1802.01873v3.pdf
PWC https://paperswithcode.com/paper/every-smile-is-unique-landmark-guided-diverse
Repo
Framework

NAPS: Natural Program Synthesis Dataset

Title NAPS: Natural Program Synthesis Dataset
Authors Maksym Zavershynskyi, Alex Skidanov, Illia Polosukhin
Abstract We present a program synthesis-oriented dataset consisting of human-written problem statements and solutions for these problems. The problem statements were collected via crowdsourcing and the program solutions were extracted from human-written solutions in programming competitions, accompanied by input/output examples. We propose using this dataset for program synthesis tasks aimed at working with real user-generated data. As a baseline we present a few models, with the best model achieving 8.8% accuracy, showcasing both the complexity of the dataset and the large room for future research.
Tasks Program Synthesis
Published 2018-07-06
URL http://arxiv.org/abs/1807.03168v1
PDF http://arxiv.org/pdf/1807.03168v1.pdf
PWC https://paperswithcode.com/paper/naps-natural-program-synthesis-dataset
Repo
Framework

Towards Mixed Optimization for Reinforcement Learning with Program Synthesis

Title Towards Mixed Optimization for Reinforcement Learning with Program Synthesis
Authors Surya Bhupatiraju, Kumar Krishna Agrawal, Rishabh Singh
Abstract Deep reinforcement learning has led to several recent breakthroughs, though the learned policies are often based on black-box neural networks. This makes them difficult to interpret and to impose desired specification constraints during learning. We present an iterative framework, MORL, for improving the learned policies using program synthesis. Concretely, we propose to use synthesis techniques to obtain a symbolic representation of the learned policy, which can then be debugged manually or automatically using program repair. After the repair step, we use behavior cloning to obtain the policy corresponding to the repaired program, which is then further improved using gradient descent. This process continues until the learned policy satisfies desired constraints. We instantiate MORL for the simple CartPole problem and show that the programmatic representation allows for high-level modifications that in turn lead to improved learning of the policies.
Tasks Program Synthesis
Published 2018-07-01
URL http://arxiv.org/abs/1807.00403v2
PDF http://arxiv.org/pdf/1807.00403v2.pdf
PWC https://paperswithcode.com/paper/towards-mixed-optimization-for-reinforcement
Repo
Framework
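The MORL loop described above (synthesize a symbolic policy, repair it, then behavior-clone it back into a parametric policy) can be sketched on a CartPole-like toy. Everything below is illustrative, not the authors' implementation: the symbolic policy is a hand-written threshold rule, and behavior cloning uses closed-form least squares instead of gradient descent.

```python
import random

def symbolic_policy(angle, threshold=0.0):
    # Symbolic rule for the "repaired" program: push right (+1) if the
    # pole leans past the threshold, otherwise push left (-1).
    return 1.0 if angle > threshold else -1.0

def behavior_clone(threshold, n=2000, seed=0):
    # Distill the symbolic rule into a linear policy a ~ w*angle + b
    # via closed-form least squares on sampled states.
    rng = random.Random(seed)
    angles = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    actions = [symbolic_policy(a, threshold) for a in angles]
    mean_a = sum(angles) / n
    mean_t = sum(actions) / n
    cov = sum((a - mean_a) * (t - mean_t) for a, t in zip(angles, actions)) / n
    var = sum((a - mean_a) ** 2 for a in angles) / n
    w = cov / var
    b = mean_t - w * mean_a
    return w, b

# "Repair" step stand-in: fix the threshold at 0 so the rule reacts
# symmetrically, then clone it; further gradient-based improvement would
# start from (w, b).
w, b = behavior_clone(threshold=0.0)
```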

Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis

Title Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis
Authors Veronika Cheplygina, Marleen de Bruijne, Josien P. W. Pluim
Abstract Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a frequently mentioned challenge for supervised ML algorithms is the lack of annotated data. As a result, various methods that can learn with less or other types of supervision have been proposed. We review semi-supervised, multiple instance, and transfer learning in medical imaging, in both diagnosis/detection and segmentation tasks. We also discuss connections between these learning scenarios and opportunities for future research.
Tasks Transfer Learning
Published 2018-04-17
URL http://arxiv.org/abs/1804.06353v2
PDF http://arxiv.org/pdf/1804.06353v2.pdf
PWC https://paperswithcode.com/paper/not-so-supervised-a-survey-of-semi-supervised
Repo
Framework

Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?

Title Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?
Authors Lilian Edwards, Michael Veale
Abstract As concerns about unfairness and discrimination in “black box” machine learning systems rise, a legal “right to an explanation” has emerged as a compellingly attractive approach for challenge and redress. We outline recent debates on the limited provisions in European data protection law, and introduce and analyze newer explanation rights in French administrative law and the draft modernized Council of Europe Convention 108. While individual rights can be useful, in privacy law they have historically unreasonably burdened the average data subject. “Meaningful information” about algorithmic logics is more technically possible than commonly thought, but this exacerbates a new “transparency fallacy”—an illusion of remedy rather than anything substantively helpful. While rights-based approaches deserve a firm place in the toolbox, other forms of governance, such as impact assessments, “soft law,” judicial review, and model repositories deserve more attention, alongside catalyzing agencies acting for users to control algorithmic system design.
Tasks
Published 2018-03-20
URL http://arxiv.org/abs/1803.07540v2
PDF http://arxiv.org/pdf/1803.07540v2.pdf
PWC https://paperswithcode.com/paper/enslaving-the-algorithm-from-a-right-to-an
Repo
Framework

Quantifying Uncertainty in Discrete-Continuous and Skewed Data with Bayesian Deep Learning

Title Quantifying Uncertainty in Discrete-Continuous and Skewed Data with Bayesian Deep Learning
Authors Thomas Vandal, Evan Kodra, Jennifer Dy, Sangram Ganguly, Ramakrishna Nemani, Auroop R. Ganguly
Abstract Deep Learning (DL) methods have been transforming computer vision with innovative adaptations to other domains, including climate change. For DL to pervade Science and Engineering (S&E) applications where risk management is a core component, well-characterized uncertainty estimates must accompany predictions. However, S&E observations and model simulations often follow heavily skewed distributions and are not well modeled with DL approaches, since these usually optimize a Gaussian, or Euclidean, likelihood loss. Recent developments in Bayesian Deep Learning (BDL), which attempts to capture both aleatoric uncertainty (from noisy observations) and epistemic uncertainty (from unknown model parameters), provide us with a foundation. Here we present a discrete-continuous BDL model with Gaussian and lognormal likelihoods for uncertainty quantification (UQ). We demonstrate the approach by developing UQ estimates on DeepSD, a super-resolution based DL model for Statistical Downscaling (SD) in climate, applied to precipitation, which follows an extremely skewed distribution. We find that the discrete-continuous models outperform a basic Gaussian distribution in terms of predictive accuracy and uncertainty calibration. Furthermore, we find that the lognormal distribution, which can handle skewed distributions, produces quality uncertainty estimates at the extremes. Such results may be important across S&E, as well as in other domains such as finance and economics, where extremes are often of significant interest. Furthermore, to our knowledge, this is the first UQ model in SD where both aleatoric and epistemic uncertainties are characterized.
Tasks Calibration, Super-Resolution
Published 2018-02-13
URL http://arxiv.org/abs/1802.04742v2
PDF http://arxiv.org/pdf/1802.04742v2.pdf
PWC https://paperswithcode.com/paper/quantifying-uncertainty-in-discrete
Repo
Framework
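The discrete-continuous likelihood at the heart of the model can be written down concretely. The sketch below assumes a Bernoulli "wet/dry" mass at zero plus a lognormal density for positive precipitation; the paper's exact parameterization may differ.

```python
import math

def disc_cont_lognormal_nll(y, p_wet, mu, sigma):
    # Negative log likelihood of a discrete-continuous lognormal model:
    # P(dry) = 1 - p_wet puts a point mass at y = 0; for y > 0 the
    # density is p_wet * LogNormal(y; mu, sigma).
    if y <= 0.0:
        return -math.log(1.0 - p_wet)
    log_density = (-math.log(y * sigma * math.sqrt(2 * math.pi))
                   - (math.log(y) - mu) ** 2 / (2 * sigma ** 2))
    return -(math.log(p_wet) + log_density)

# Hypothetical parameter values, for illustration only.
nll_dry = disc_cont_lognormal_nll(0.0, p_wet=0.3, mu=0.0, sigma=1.0)
nll_wet = disc_cont_lognormal_nll(1.0, p_wet=0.3, mu=0.0, sigma=1.0)
```

In the BDL setting, `p_wet`, `mu`, and `sigma` would be outputs of the network, and this NLL would be the training loss.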

Evolving Real-Time Heuristics Search Algorithms with Building Blocks

Title Evolving Real-Time Heuristics Search Algorithms with Building Blocks
Authors Md Solimul Chowdhury, Victor Silva
Abstract The research area of real-time heuristic search has produced many algorithms. In this landscape, it is not rare to find that an algorithm X that appears to perform better than algorithm Y on one group of problems performs worse than Y on another. If these published algorithms are combined to generate a more powerful space of algorithms, that novel space may solve a distribution of problems more efficiently. Based on this intuition, a recent work (Bulitko 2016) defined the task of finding a combination of heuristic search algorithms as a survival task. In this evolutionary approach, a space of algorithms is defined over a set of building blocks drawn from published algorithms, and simulated evolution is used to recombine these building blocks to find the best algorithm in that space. In this paper, we extend the set of building blocks by adding one published algorithm, namely the lookahead-based A*-shaped local search space generation method from LSS-LRTA*, plus a novel, previously unpublished strategy that generates the local search space with Greedy Best First Search. We then perform experiments in the new space of algorithms, which show that the best algorithms selected by the evolutionary process have the following property: the deeper an algorithm's lookahead, the lower its suboptimality and scrubbing complexity.
Tasks
Published 2018-05-21
URL http://arxiv.org/abs/1805.08256v1
PDF http://arxiv.org/pdf/1805.08256v1.pdf
PWC https://paperswithcode.com/paper/evolving-real-time-heuristics-search
Repo
Framework

Learning convex bounds for linear quadratic control policy synthesis

Title Learning convex bounds for linear quadratic control policy synthesis
Authors Jack Umenberger, Thomas B. Schön
Abstract Learning to make decisions from observed data in dynamic environments remains a problem of fundamental importance in a number of fields, from artificial intelligence and robotics, to medicine and finance. This paper concerns the problem of learning control policies for unknown linear dynamical systems so as to maximize a quadratic reward function. We present a method to optimize the expected value of the reward over the posterior distribution of the unknown system parameters, given data. The algorithm involves sequential convex programming, and enjoys reliable local convergence and robust stability guarantees. Numerical simulations and stabilization of a real-world inverted pendulum are used to demonstrate the approach, with strong performance and robustness properties observed in both.
Tasks
Published 2018-06-01
URL http://arxiv.org/abs/1806.00319v1
PDF http://arxiv.org/pdf/1806.00319v1.pdf
PWC https://paperswithcode.com/paper/learning-convex-bounds-for-linear-quadratic
Repo
Framework

Brain Age Prediction Based on Resting-State Functional Connectivity Patterns Using Convolutional Neural Networks

Title Brain Age Prediction Based on Resting-State Functional Connectivity Patterns Using Convolutional Neural Networks
Authors Hongming Li, Theodore D. Satterthwaite, Yong Fan
Abstract Brain age prediction based on neuroimaging data could help characterize both typical brain development and neuropsychiatric disorders. Pattern recognition models built upon functional connectivity (FC) measures derived from resting state fMRI (rsfMRI) data have been successfully used to predict the brain age. However, most existing studies focus on coarse-grained FC measures between brain regions or intrinsic connectivity networks (ICNs), which may sacrifice fine-grained FC information of the rsfMRI data. Whole brain voxel-wise FC measures could provide fine-grained FC information of the brain and may improve the prediction performance. In this study, we develop a deep learning method that uses convolutional neural networks (CNNs) to learn informative features from the fine-grained whole brain FC measures for brain age prediction. Experimental results on a large resting-state fMRI dataset demonstrate that the deep learning model with fine-grained FC measures could better predict the brain age.
Tasks
Published 2018-01-11
URL http://arxiv.org/abs/1801.04013v1
PDF http://arxiv.org/pdf/1801.04013v1.pdf
PWC https://paperswithcode.com/paper/brain-age-prediction-based-on-resting-state
Repo
Framework

Contextual Multi-Armed Bandits for Causal Marketing

Title Contextual Multi-Armed Bandits for Causal Marketing
Authors Neela Sawant, Chitti Babu Namballa, Narayanan Sadagopan, Houssam Nassif
Abstract This work explores the idea of a causal contextual multi-armed bandit approach to automated marketing, where we estimate and optimize the causal (incremental) effects. Focusing on causal effects leads to better return on investment (ROI) by targeting only the persuadable customers who wouldn't have taken the action organically. Our approach draws on the strengths of causal inference, uplift modeling, and multi-armed bandits. It optimizes on causal treatment effects rather than pure outcomes, and incorporates counterfactual generation within data collection. Following uplift modeling results, we optimize over the incremental business metric. Multi-armed bandit methods allow us to scale to multiple treatments and to perform off-policy evaluation on logged data. The Thompson sampling strategy in particular enables exploration of treatments on similar customer contexts and materialization of counterfactual outcomes. Preliminary offline experiments on a retail fashion marketing dataset show the merits of our proposal.
Tasks Causal Inference, Multi-Armed Bandits
Published 2018-10-02
URL http://arxiv.org/abs/1810.01859v1
PDF http://arxiv.org/pdf/1810.01859v1.pdf
PWC https://paperswithcode.com/paper/contextual-multi-armed-bandits-for-causal
Repo
Framework
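The Thompson sampling component can be illustrated with a minimal, context-free sketch; the paper additionally conditions on customer context and optimizes incremental (uplift) effects rather than raw conversions. The arm count and conversion rates below are made up.

```python
import random

class BetaArm:
    # Each treatment ("arm") keeps a Beta posterior over its conversion rate.
    def __init__(self):
        self.successes, self.failures = 1, 1  # Beta(1, 1) uniform prior

    def sample(self, rng):
        # Thompson sampling: draw a plausible conversion rate from the posterior.
        return rng.betavariate(self.successes, self.failures)

    def update(self, converted):
        if converted:
            self.successes += 1
        else:
            self.failures += 1

rng = random.Random(42)
true_rates = [0.05, 0.15, 0.10]  # hypothetical per-treatment conversion rates
arms = [BetaArm() for _ in true_rates]

for _ in range(5000):
    # Play the arm whose posterior sample is highest, then observe a conversion.
    chosen = max(range(len(arms)), key=lambda i: arms[i].sample(rng))
    arms[chosen].update(rng.random() < true_rates[chosen])

# The best arm accumulates the most pulls as the posteriors concentrate.
pulls = [a.successes + a.failures for a in arms]
best = max(range(len(arms)), key=lambda i: pulls[i])
```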

Can Eye Movement Data Be Used As Ground Truth For Word Embeddings Evaluation?

Title Can Eye Movement Data Be Used As Ground Truth For Word Embeddings Evaluation?
Authors Amir Bakarov
Abstract In recent years, a certain success in the task of modeling lexical semantics has been obtained with distributional semantic models. Nevertheless, the scientific community is still unsure what the most reliable evaluation method for these models is. Some researchers argue that the only possible gold standard could be obtained from neuro-cognitive resources that store information about human cognition. One such resource is eye movement data on silent reading. The goal of this work is to test the hypothesis that such data could be used to evaluate distributional semantic models across different languages. We propose experiments with English and Russian eye movement datasets (Provo Corpus, GECO and the Russian Sentence Corpus), word vectors (Skip-Gram models trained on national corpora and Web corpora), and word similarity datasets for Russian and English assessed by humans, in order to test for a correlation between embeddings and eye movement data and to test the hypothesis that this correlation is language-independent. As a result, we found that the validity of the hypothesis being tested could be questioned.
Tasks Word Embeddings
Published 2018-04-23
URL http://arxiv.org/abs/1804.08749v1
PDF http://arxiv.org/pdf/1804.08749v1.pdf
PWC https://paperswithcode.com/paper/can-eye-movement-data-be-used-as-ground-truth
Repo
Framework
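The evaluation described above ultimately reduces to correlating two sets of scores, for which a rank correlation such as Spearman's rho is the usual choice. A minimal implementation (no tie handling), with made-up similarity scores:

```python
def rank(values):
    # Assign rank 1..n by ascending value (assumes no ties).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r + 1
    return ranks

def spearman(x, y):
    # Spearman's rho via the classic formula 1 - 6*sum(d^2) / (n(n^2-1)).
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

model_sims = [0.9, 0.7, 0.3, 0.1]   # hypothetical embedding similarities
gaze_scores = [0.8, 0.6, 0.4, 0.2]  # hypothetical eye-movement-derived scores
rho = spearman(model_sims, gaze_scores)  # perfectly concordant rankings -> 1.0
```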

Queue-based Resampling for Online Class Imbalance Learning

Title Queue-based Resampling for Online Class Imbalance Learning
Authors Kleanthis Malialis, Christos G. Panayiotou, Marios M. Polycarpou
Abstract Online class imbalance learning constitutes a new problem and an emerging research topic that focuses on the challenges of online learning under class imbalance and concept drift. Class imbalance deals with data streams that have very skewed distributions, while concept drift deals with changes in the class imbalance status. Little work exists that addresses these challenges, and in this paper we introduce queue-based resampling, a novel algorithm that successfully addresses the co-existence of class imbalance and concept drift. The central idea of the proposed resampling algorithm is to selectively include in the training set a subset of the examples that appeared in the past. Results on two popular benchmark datasets demonstrate the effectiveness of queue-based resampling over state-of-the-art methods in terms of learning speed and quality.
Tasks
Published 2018-09-27
URL http://arxiv.org/abs/1809.10388v2
PDF http://arxiv.org/pdf/1809.10388v2.pdf
PWC https://paperswithcode.com/paper/queue-based-resampling-for-online-class
Repo
Framework
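The central idea, bounded per-class queues of past examples, can be sketched in a few lines. The queue size and training-set construction below are illustrative choices, not the paper's exact algorithm.

```python
from collections import deque

class QueueResampler:
    # One bounded FIFO queue per class: the minority class retains its
    # past examples while each full queue evicts its oldest example,
    # which also helps adapt to concept drift.
    def __init__(self, queue_size=5):
        self.queues = {}
        self.queue_size = queue_size

    def add(self, x, label):
        self.queues.setdefault(label, deque(maxlen=self.queue_size)).append(x)

    def training_set(self):
        # Train on the union of the per-class queues: at most queue_size
        # examples per class, regardless of the stream's imbalance.
        return [(x, y) for y, q in self.queues.items() for x in q]

sampler = QueueResampler(queue_size=3)
for i in range(100):
    sampler.add(i, 0)   # majority class: 100 streamed examples
sampler.add(-1, 1)      # minority class: a single example
batch = sampler.training_set()  # 3 majority + 1 minority examples
```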

Comparison of VCA and GAEE algorithms for Endmember Extraction

Title Comparison of VCA and GAEE algorithms for Endmember Extraction
Authors Douglas Winston. R. S., Gustavo T. Laureano, Celso G. Camilo Jr
Abstract Endmember extraction is a critical step in hyperspectral image analysis and classification. It is a useful method for decomposing a mixed spectrum into a collection of spectra and their corresponding proportions. In this paper, we solve a linear endmember extraction problem as an evolutionary optimization task, maximizing the simplex volume in the endmember space. We propose a standard genetic algorithm and a variation with an In Vitro Fertilization module (IVFm) to find the best solutions, and compare the results with the state-of-the-art Vertex Component Analysis (VCA) method and the traditional algorithms Pixel Purity Index (PPI) and N-FINDR. The experimental results on real and synthetic hyperspectral data confirm that the proposed approaches outperform the aforementioned algorithms in performance and accuracy.
Tasks
Published 2018-05-27
URL http://arxiv.org/abs/1805.10644v1
PDF http://arxiv.org/pdf/1805.10644v1.pdf
PWC https://paperswithcode.com/paper/comparison-of-vca-and-gaee-algorithms-for
Repo
Framework
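The fitness that such simplex-volume methods maximize can be made concrete: for p candidate endmembers, the simplex volume is |det(E)| / (p-1)!, where E stacks the difference vectors e_i - e_0. The sketch below computes the p = 3 (triangle) case in 2-D; real pipelines first reduce the spectral data to p-1 dimensions.

```python
def simplex_area(e0, e1, e2):
    # Area of the triangle spanned by three endmembers in 2-D:
    # |det([e1-e0, e2-e0])| / 2!, i.e. half the absolute cross product.
    ax, ay = e1[0] - e0[0], e1[1] - e0[1]
    bx, by = e2[0] - e0[0], e2[1] - e0[1]
    return abs(ax * by - ay * bx) / 2.0

# A genetic algorithm would score each candidate endmember triple this way.
area = simplex_area((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))  # unit right triangle
```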

Prediction Factory: automated development and collaborative evaluation of predictive models

Title Prediction Factory: automated development and collaborative evaluation of predictive models
Authors Gaurav Sheni, Benjamin Schreck, Roy Wedge, James Max Kanter, Kalyan Veeramachaneni
Abstract In this paper, we present a data science automation system called Prediction Factory. The system uses several key automation algorithms to enable data scientists to rapidly develop predictive models and share them with domain experts. To assess the system's impact, we implemented 3 different interfaces for creating predictive modeling projects: baseline automation, full automation, and optional automation. With a dataset of online grocery shopper behaviors, we divided data scientists among the interfaces to specify prediction problems, learn and evaluate models, and write reports for domain experts to judge whether or not to fund continued work on them. In total, 22 data scientists created 94 reports that were judged 296 times by 26 experts. In a head-to-head trial, reports generated using the full-automation interface were funded 57.5% of the time, while those generated with baseline automation were funded only 42.5% of the time. Reports generated with the intermediate, optional-automation interface were funded 58.6% more often than the baseline. Full-automation and optional-automation reports were funded about equally when put head-to-head. These results demonstrate that Prediction Factory has implemented a critical amount of automation to augment the role of data scientists and improve business outcomes.
Tasks
Published 2018-11-29
URL http://arxiv.org/abs/1811.11960v1
PDF http://arxiv.org/pdf/1811.11960v1.pdf
PWC https://paperswithcode.com/paper/prediction-factory-automated-development-and
Repo
Framework