January 26, 2020

2771 words 14 mins read

Paper Group ANR 1392

A novel repetition normalized adversarial reward for headline generation. AutoER: Automated Entity Resolution using Generative Modelling. The Verbal and Non Verbal Signals of Depression – Combining Acoustics, Text and Visuals for Estimating Depression Level. An Improved multi-objective genetic algorithm based on orthogonal design and adaptive clus …

A novel repetition normalized adversarial reward for headline generation

Title A novel repetition normalized adversarial reward for headline generation
Authors Peng Xu, Pascale Fung
Abstract While reinforcement learning can effectively improve language generation models, it often suffers from generating incoherent and repetitive phrases (Paulus et al., 2017). In this paper, we propose a novel repetition normalized adversarial reward to mitigate these problems. Our repetition-penalized reward greatly reduces the repetition rate, while adversarial training mitigates the generation of incoherent phrases. Our model significantly outperforms the baseline on ROUGE-1 (+3.24) and ROUGE-L (+2.25), with a decreased repetition rate (-4.98%).
Tasks Text Generation
Published 2019-02-19
URL http://arxiv.org/abs/1902.07110v1
PDF http://arxiv.org/pdf/1902.07110v1.pdf
PWC https://paperswithcode.com/paper/a-novel-repetition-normalized-adversarial
Repo
Framework
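
The abstract does not spell out the reward formula, so the following is only a minimal sketch of the idea under an assumed form: an adversarial (discriminator) score scaled down by how repetitive the generated headline is. The function names, the `alpha` knob, and the exact normalization are illustrative assumptions, not the paper's definition.

```python
def repetition_rate(tokens):
    """Fraction of tokens that repeat an earlier token in the sequence."""
    if not tokens:
        return 0.0
    return 1.0 - len(set(tokens)) / len(tokens)

def repetition_normalized_reward(discriminator_score, tokens, alpha=1.0):
    """Scale an adversarial reward down as the generated headline repeats itself.

    discriminator_score: probability the discriminator assigns to the headline
                         being human-written (the adversarial part of the reward).
    alpha: strength of the repetition penalty (hypothetical knob).
    """
    return discriminator_score / (1.0 + alpha * repetition_rate(tokens))

# A repetitive headline receives a smaller reward than a clean one.
clean = "markets rally after central bank cuts rates".split()
noisy = "markets markets rally rally after rates rates".split()
print(repetition_normalized_reward(0.8, clean))   # 0.8
print(repetition_normalized_reward(0.8, noisy))   # roughly 0.56
```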

AutoER: Automated Entity Resolution using Generative Modelling

Title AutoER: Automated Entity Resolution using Generative Modelling
Authors Renzhi Wu, Sanya Chaba, Saurabh Sawlani, Xu Chu, Saravanan Thirumuruganathan
Abstract Entity resolution (ER) refers to the problem of identifying records in one or more relations that refer to the same real-world entity. ER has been extensively studied by the database community, with supervised machine learning approaches achieving state-of-the-art results. However, supervised ML requires many labeled examples, both matches and unmatches, which are expensive to obtain. In this paper, we investigate an important problem: how can we design an unsupervised algorithm for ER that achieves performance comparable to supervised approaches? We propose an automated ER solution, AutoER, that requires zero labeled examples. Our central insight is that the similarity vectors for matches should look different from those for unmatches. A number of innovations are needed to translate this intuition into an actual algorithm for ER. We advocate the use of generative models to capture the two similarity vector distributions (the match distribution and the unmatch distribution). We propose an expectation-maximization based algorithm to learn the model parameters. Our algorithm addresses many practical challenges, including feature correlations, model overfitting, class imbalance, and transitivity between matches. On six datasets from four different domains, we show that the performance of AutoER is comparable to, and sometimes even better than, supervised ML approaches.
Tasks Entity Resolution
Published 2019-08-16
URL https://arxiv.org/abs/1908.06049v1
PDF https://arxiv.org/pdf/1908.06049v1.pdf
PWC https://paperswithcode.com/paper/autoer-automated-entity-resolution-using
Repo
Framework
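
As a concrete illustration of the generative-model intuition above (match and unmatch similarity vectors follow different distributions, learned with EM), here is a minimal sketch using a two-component diagonal Gaussian mixture. AutoER's actual model also handles feature correlations, overfitting, class imbalance, and transitivity, none of which appear here.

```python
import numpy as np

def fit_two_component_gmm(X, n_iter=100, seed=0):
    """EM for a 2-component diagonal Gaussian mixture over similarity vectors.

    The component with the higher mean similarity is read as 'match', the
    other as 'unmatch'. Returns a boolean array of predicted match labels.
    """
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, 2, replace=False)].copy()     # init from two random rows
    variances = np.ones((2, d)) * X.var(axis=0, keepdims=True) + 1e-6
    weights = np.array([0.5, 0.5])

    for _ in range(n_iter):
        # E-step: responsibilities under each diagonal Gaussian
        log_p = np.zeros((n, 2))
        for k in range(2):
            log_p[:, k] = (np.log(weights[k])
                           - 0.5 * np.sum(np.log(2 * np.pi * variances[k]))
                           - 0.5 * np.sum((X - means[k]) ** 2 / variances[k], axis=1))
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: update mixture weights, means and variances
        nk = resp.sum(axis=0) + 1e-12
        weights = nk / n
        for k in range(2):
            means[k] = resp[:, k] @ X / nk[k]
            variances[k] = resp[:, k] @ (X - means[k]) ** 2 / nk[k] + 1e-6

    match = int(np.argmax(means.mean(axis=1)))            # higher-similarity component
    return resp[:, match] > 0.5
```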

The Verbal and Non Verbal Signals of Depression – Combining Acoustics, Text and Visuals for Estimating Depression Level

Title The Verbal and Non Verbal Signals of Depression – Combining Acoustics, Text and Visuals for Estimating Depression Level
Authors Syed Arbaaz Qureshi, Mohammed Hasanuzzaman, Sriparna Saha, Gaël Dias
Abstract Depression is a serious medical condition that is suffered by a large number of people around the world. It significantly affects the way one feels, causing a persistent lowering of mood. In this paper, we propose a novel attention-based deep neural network which facilitates the fusion of various modalities. We use this network to regress the depression level. Acoustic, text and visual modalities have been used to train our proposed network. Various experiments have been carried out on the benchmark dataset, namely, Distress Analysis Interview Corpus - a Wizard of Oz (DAIC-WOZ). From the results, we empirically justify that the fusion of all three modalities helps in giving the most accurate estimation of depression level. Our proposed approach outperforms the state-of-the-art by 7.17% on root mean squared error (RMSE) and 8.08% on mean absolute error (MAE).
Tasks
Published 2019-04-02
URL http://arxiv.org/abs/1904.07656v1
PDF http://arxiv.org/pdf/1904.07656v1.pdf
PWC https://paperswithcode.com/paper/190407656
Repo
Framework
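
The network itself is not described in the abstract, so the snippet below is only a toy sketch of attention-based fusion: each modality embedding receives a softmax attention weight, and the weighted combination is regressed to a depression score. All parameter names (`w_att`, `w_out`, `b_out`) are illustrative, not the paper's architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fusion(acoustic, text, visual, w_att, w_out, b_out):
    """Fuse three same-sized modality embeddings with scalar attention,
    then apply a linear regression head to predict the depression level."""
    modalities = np.stack([acoustic, text, visual])   # (3, d)
    scores = modalities @ w_att                       # one attention score per modality
    weights = softmax(scores)                         # attention distribution over modalities
    fused = weights @ modalities                      # (d,) weighted combination
    return float(fused @ w_out + b_out)

d = 16
rng = np.random.default_rng(0)
print(attention_fusion(rng.normal(size=d), rng.normal(size=d), rng.normal(size=d),
                       rng.normal(size=d), rng.normal(size=d), 0.0))
```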

An Improved multi-objective genetic algorithm based on orthogonal design and adaptive clustering pruning strategy

Title An Improved multi-objective genetic algorithm based on orthogonal design and adaptive clustering pruning strategy
Authors Xinwu Yang, Guizeng You, Chong Zhao, Mengfei Dou, Xinian Guo
Abstract Two important characteristics of multi-objective evolutionary algorithms are distribution and convergence. As a classic multi-objective genetic algorithm, NSGA-II is widely used in multi-objective optimization fields. However, in NSGA-II, the random population initialization and the distance-based population maintenance strategy cannot maintain the distribution or convergence of the population well. To address these two deficiencies, this paper proposes an improved algorithm, OTNSGA-II, which has better distribution and convergence. The new algorithm adopts an orthogonal experiment, which selects individuals by means of a new discontinuing non-dominated sorting and crowding distance, to produce the initial population. A new clustering-based pruning strategy is proposed to self-adaptively prune individuals that have similar features and poor performance under non-dominated sorting and crowding distance, or individuals that are far away from the Pareto front, according to the degree of intra-class aggregation of the clustering results. The new pruning strategy makes the population converge to the Pareto front more easily and maintains the distribution of the population. OTNSGA-II and NSGA-II are compared on various types of test functions to verify the improvement of OTNSGA-II in terms of distribution and convergence.
Tasks
Published 2019-01-03
URL http://arxiv.org/abs/1901.00577v1
PDF http://arxiv.org/pdf/1901.00577v1.pdf
PWC https://paperswithcode.com/paper/an-improved-multi-objective-genetic-algorithm
Repo
Framework
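
As a rough sketch of the clustering-based pruning idea only: cluster the population in objective space and keep one representative per cluster, which removes near-duplicate individuals while preserving the spread of the front. The representative choice and the use of scikit-learn's KMeans are assumptions; the paper's strategy additionally relies on non-dominated sorting, crowding distance, and intra-class aggregation.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustering_prune(objectives, keep):
    """Prune a population to `keep` individuals by clustering objective vectors
    and retaining, per cluster, the member closest to the cluster centre."""
    objectives = np.asarray(objectives, dtype=float)
    km = KMeans(n_clusters=keep, n_init=10, random_state=0).fit(objectives)
    survivors = []
    for c in range(keep):
        members = np.flatnonzero(km.labels_ == c)
        if members.size == 0:          # k-means can occasionally leave a cluster empty
            continue
        dists = np.linalg.norm(objectives[members] - km.cluster_centers_[c], axis=1)
        survivors.append(int(members[np.argmin(dists)]))
    return sorted(survivors)

pop = np.random.default_rng(1).random((40, 2))   # 40 individuals, 2 objectives
print(clustering_prune(pop, keep=10))            # indices of the kept individuals
```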

Quantized deep learning models on low-power edge devices for robotic systems

Title Quantized deep learning models on low-power edge devices for robotic systems
Authors Anugraha Sinha, Naveen Kumar, Murukesh Mohanan, MD Muhaimin Rahman, Yves Quemener, Amina Mim, Suzana Ilić
Abstract In this work, we present a quantized deep neural network deployed on a low-power edge device, inferring learned motor-movements of a suspended robot in a defined space. This serves as the fundamental building block for the original setup, a robotic system for farms or greenhouses aimed at a wide range of agricultural tasks. Deep learning on edge devices and its implications could have a substantial impact on farming systems in the developing world, leading not only to sustainable food production and income, but also increased data privacy and autonomy.
Tasks
Published 2019-11-30
URL https://arxiv.org/abs/1912.00186v1
PDF https://arxiv.org/pdf/1912.00186v1.pdf
PWC https://paperswithcode.com/paper/quantized-deep-learning-models-on-low-power
Repo
Framework
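
The paper's exact quantization scheme is not given in the abstract; below is a generic, minimal sketch of symmetric post-training int8 quantization, the kind of step that makes a network fit on a low-power edge device.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization to int8; returns the quantized
    tensor plus the scale needed to dequantize it."""
    scale = np.max(np.abs(weights)) / 127.0 + 1e-12
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(64, 32)).astype(np.float32)
q, s = quantize_int8(w)
print("max abs reconstruction error:", np.abs(w - dequantize(q, s)).max())
```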

Exploring Author Context for Detecting Intended vs Perceived Sarcasm

Title Exploring Author Context for Detecting Intended vs Perceived Sarcasm
Authors Silviu Oprea, Walid Magdy
Abstract We investigate the impact of using author context on textual sarcasm detection. We define author context as the embedded representation of their historical posts on Twitter and suggest neural models that extract these representations. We experiment with two tweet datasets, one labelled manually for sarcasm, and the other via tag-based distant supervision. We achieve state-of-the-art performance on the second dataset, but not on the one labelled manually, indicating a difference between intended sarcasm, captured by distant supervision, and perceived sarcasm, captured by manual labelling.
Tasks Sarcasm Detection
Published 2019-10-25
URL https://arxiv.org/abs/1910.11932v1
PDF https://arxiv.org/pdf/1910.11932v1.pdf
PWC https://paperswithcode.com/paper/exploring-author-context-for-detecting-1
Repo
Framework
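
A minimal stand-in for "author context as the embedded representation of historical posts": mean-pool the author's historical tweet embeddings and concatenate them with the target tweet embedding before classification. The paper learns these representations with neural models; mean pooling here is only the simplest assumption.

```python
import numpy as np

def author_context(history_embeddings):
    """Summarize an author's history as the mean of their tweet embeddings."""
    return np.mean(history_embeddings, axis=0)

def sarcasm_features(tweet_embedding, history_embeddings):
    """Concatenate the target tweet with its author context; a downstream
    sarcasm classifier would consume this vector."""
    return np.concatenate([tweet_embedding, author_context(history_embeddings)])

d = 8
rng = np.random.default_rng(0)
print(sarcasm_features(rng.normal(size=d), rng.normal(size=(20, d))).shape)  # (16,)
```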

GEN-SLAM: Generative Modeling for Monocular Simultaneous Localization and Mapping

Title GEN-SLAM: Generative Modeling for Monocular Simultaneous Localization and Mapping
Authors Punarjay Chakravarty, Praveen Narayanan, Tom Roussel
Abstract We present a Deep Learning based system for the twin tasks of localization and obstacle avoidance essential to any mobile robot. Our system learns from conventional geometric SLAM, and outputs, using a single camera, the topological pose of the camera in an environment, and the depth map of obstacles around it. We use a CNN to localize in a topological map, and a conditional VAE to output depth for a camera image, conditional on this topological location estimation. We demonstrate the effectiveness of our monocular localization and depth estimation system on simulated and real datasets.
Tasks Depth Estimation, Simultaneous Localization and Mapping
Published 2019-02-06
URL http://arxiv.org/abs/1902.02086v1
PDF http://arxiv.org/pdf/1902.02086v1.pdf
PWC https://paperswithcode.com/paper/gen-slam-generative-modeling-for-monocular
Repo
Framework

From Plots to Endings: A Reinforced Pointer Generator for Story Ending Generation

Title From Plots to Endings: A Reinforced Pointer Generator for Story Ending Generation
Authors Yan Zhao, Lu Liu, Chunhua Liu, Ruoyao Yang, Dong Yu
Abstract We introduce a new task named Story Ending Generation (SEG), which aims at generating a coherent story ending from a sequence of story plot. We propose a framework consisting of a Generator and a Reward Manager for this task. The Generator follows the pointer-generator network with coverage mechanism to deal with out-of-vocabulary (OOV) and repetitive words. Moreover, a mixed loss method is introduced to enable the Generator to produce story endings of high semantic relevance with story plots. In the Reward Manager, the reward is computed to fine-tune the Generator with policy-gradient reinforcement learning (PGRL). We conduct experiments on the recently-introduced ROCStories Corpus. We evaluate our model in both automatic evaluation and human evaluation. Experimental results show that our model exceeds the sequence-to-sequence baseline model by 15.75% and 13.57% in terms of CIDEr and consistency score respectively.
Tasks
Published 2019-01-11
URL http://arxiv.org/abs/1901.03459v1
PDF http://arxiv.org/pdf/1901.03459v1.pdf
PWC https://paperswithcode.com/paper/from-plots-to-endings-a-reinforced-pointer
Repo
Framework
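
The fine-tuning objective is not written out in the abstract; the sketch below shows the generic mixed MLE + REINFORCE loss commonly used for this kind of reward-based fine-tuning. The `gamma` mixing weight and the use of a baseline are assumptions, not the paper's exact formulation.

```python
import numpy as np

def mixed_loss(log_probs_sampled, reward, baseline, log_probs_teacher, gamma=0.9):
    """Mixed maximum-likelihood + policy-gradient loss for a story-ending generator.

    log_probs_sampled : per-token log-probabilities of a sampled ending
    reward            : scalar score from the Reward Manager for that ending
    baseline          : e.g. the reward of the greedy ending, to reduce variance
    log_probs_teacher : per-token log-probabilities of the gold ending
    """
    rl_loss = -(reward - baseline) * np.sum(log_probs_sampled)   # REINFORCE term
    mle_loss = -np.sum(log_probs_teacher)                        # teacher-forced term
    return gamma * rl_loss + (1.0 - gamma) * mle_loss
```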

Learning Deep Attribution Priors Based On Prior Knowledge

Title Learning Deep Attribution Priors Based On Prior Knowledge
Authors Ethan Weinberger, Joseph Janizek, Su-In Lee
Abstract Feature attribution methods are an essential tool for understanding the behavior of complex deep learning models. However, ensuring that models produce meaningful explanations, rather than ones that rely on noise, is not straightforward. Exacerbating this problem is the fact that attribution methods do not provide insight as to why features are assigned their attribution values, leading to explanations that are difficult to interpret. In real-world problems we often have sets of additional information for each feature that are predictive of that feature’s importance to the task at hand. Here we propose the deep attribution prior (DAPr) framework to exploit such information to overcome the limitations of attribution methods. Our framework jointly learns a relationship between prior information and feature importance, and biases models toward explanations that rely on features predicted to be important. We find that our framework both results in networks that generalize better to out-of-sample data and admits new methods for interpreting model explanations.
Tasks Feature Importance
Published 2019-12-20
URL https://arxiv.org/abs/1912.10065v2
PDF https://arxiv.org/pdf/1912.10065v2.pdf
PWC https://paperswithcode.com/paper/learned-feature-attribution-priors
Repo
Framework
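
A minimal sketch of the joint objective described above, assuming gradient-times-input attributions and a mean-squared penalty tying attributions to the prior model's predicted importances; the actual DAPr framework is agnostic to the attribution method, and the names here (`prior_model`, `lam`) are illustrative.

```python
import torch

def dapr_loss(model, prior_model, x, y, meta, lam=0.1):
    """Task loss plus a penalty aligning feature attributions with the
    importances predicted from per-feature prior knowledge `meta`."""
    x = x.clone().requires_grad_(True)
    pred = model(x).squeeze(-1)
    task_loss = torch.nn.functional.mse_loss(pred, y)

    # gradient-times-input attributions, averaged over the batch
    grads, = torch.autograd.grad(pred.sum(), x, create_graph=True)
    attributions = (grads * x).abs().mean(dim=0)             # (n_features,)

    predicted_importance = prior_model(meta).squeeze(-1)      # (n_features,)
    prior_loss = torch.nn.functional.mse_loss(attributions, predicted_importance)
    return task_loss + lam * prior_loss
```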

Noise Contrastive Meta-Learning for Conditional Density Estimation using Kernel Mean Embeddings

Title Noise Contrastive Meta-Learning for Conditional Density Estimation using Kernel Mean Embeddings
Authors Jean-Francois Ton, Lucian Chan, Yee Whye Teh, Dino Sejdinovic
Abstract Current meta-learning approaches focus on learning functional representations of relationships between variables, i.e. on estimating conditional expectations in regression. In many applications, however, we are faced with conditional distributions which cannot be meaningfully summarized using expectation only (due to e.g. multimodality). Hence, we consider the problem of conditional density estimation in the meta-learning setting. We introduce a novel technique for meta-learning which combines neural representation and noise-contrastive estimation with the established literature of conditional mean embeddings into reproducing kernel Hilbert spaces. The method is validated on synthetic and real-world problems, demonstrating the utility of sharing learned representations across multiple conditional density estimation tasks.
Tasks Density Estimation, Meta-Learning
Published 2019-06-05
URL https://arxiv.org/abs/1906.02236v1
PDF https://arxiv.org/pdf/1906.02236v1.pdf
PWC https://paperswithcode.com/paper/noise-contrastive-meta-learning-for
Repo
Framework

Monocular Depth Estimation: A Survey

Title Monocular Depth Estimation: A Survey
Authors Amlaan Bhoi
Abstract Monocular depth estimation is often described as an ill-posed and inherently ambiguous problem. Estimating depth from 2D images is a crucial step in scene reconstruction, 3D object recognition, segmentation, and detection. The problem can be framed as: given a single RGB image as input, predict a dense depth map for each pixel. The problem is worsened by the fact that most scenes have large texture and structural variations, object occlusions, and rich geometric detailing. All these factors contribute to the difficulty of accurate depth estimation. In this paper, we review five papers that attempt to solve the depth estimation problem with various techniques, including supervised, weakly-supervised, and unsupervised learning techniques. We then compare these papers and examine the improvements made over one another. Finally, we explore potential improvements that can help better solve this problem.
Tasks Depth Estimation, Monocular Depth Estimation
Published 2019-01-27
URL http://arxiv.org/abs/1901.09402v1
PDF http://arxiv.org/pdf/1901.09402v1.pdf
PWC https://paperswithcode.com/paper/monocular-depth-estimation-a-survey
Repo
Framework

Learning to Count Objects with Few Exemplar Annotations

Title Learning to Count Objects with Few Exemplar Annotations
Authors Jianfeng Wang, Rong Xiao, Yandong Guo, Lei Zhang
Abstract In this paper, we study the problem of object counting with incomplete annotations. Based on the observation that in many object counting problems the target objects are normally repeated and highly similar to each other, we are particularly interested in the setting where only a few exemplar annotations are provided. Directly applying object detection with incomplete annotations results in severe accuracy degradation due to improper handling of unlabeled object instances. To address the problem, we propose a positiveness-focused object detector (PFOD) to progressively propagate the incomplete labels before applying the general object detection algorithm. The PFOD focuses on the positive samples and ignores the negative instances for most of the learning time. This strategy, though simple, dramatically boosts the object counting accuracy. On the CARPK dataset for parking lot car counting, we improved mAP@0.5 from 4.58% to 72.44% using only 5 training images, each with 5 bounding boxes. On the Drink35 dataset for shelf product counting, mAP@0.5 is improved from 14.16% to 53.73% using 10 training images, each with 5 bounding boxes.
Tasks Object Counting, Object Detection
Published 2019-05-20
URL https://arxiv.org/abs/1905.07898v1
PDF https://arxiv.org/pdf/1905.07898v1.pdf
PWC https://paperswithcode.com/paper/learning-to-count-objects-with-few-exemplar
Repo
Framework
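
A toy illustration of the positiveness-focused idea only: with incomplete annotations, proposals that match no exemplar box are not reliable negatives, so their loss contribution is masked out for most of training. The real PFOD operates inside a full detector; this sketch keeps just the loss-masking intuition.

```python
import numpy as np

def positive_focused_loss(scores, labels, ignore_negatives=True):
    """Binary cross-entropy over proposals that optionally ignores
    unlabeled (presumed-negative) proposals."""
    eps = 1e-9
    pos = labels == 1
    loss_pos = -np.log(scores[pos] + eps)
    if ignore_negatives:
        return float(loss_pos.mean()) if pos.any() else 0.0
    loss_neg = -np.log(1.0 - scores[~pos] + eps)
    return float(np.concatenate([loss_pos, loss_neg]).mean())

scores = np.array([0.9, 0.2, 0.7, 0.4])   # detector confidences
labels = np.array([1, 0, 1, 0])           # only the 1s are exemplar-verified
print(positive_focused_loss(scores, labels))
```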

Temporal Density Extrapolation using a Dynamic Basis Approach

Title Temporal Density Extrapolation using a Dynamic Basis Approach
Authors Georg Krempl, Dominik Lang, Vera Hofer
Abstract Density estimation is a versatile technique underlying many data mining tasks and techniques, ranging from exploration and presentation of static data to probabilistic classification, or identifying changes or irregularities in streaming data. With the pervasiveness of embedded systems and digitisation, this latter type of streaming and evolving data becomes more important. Nevertheless, research in density estimation has so far focused on stationary data, leaving the task of extrapolating and predicting density at time points outside a training window an open problem. For this task, Temporal Density Extrapolation (TDX) is proposed. This novel method models and predicts gradual monotonous changes in a distribution. It is based on an expansion of basis functions, whose weights are modelled as functions of compositional data over time by using an isometric log-ratio transformation. Extrapolated density estimates are then obtained by extrapolating the weights to the requested time point and querying the density from the basis functions with back-transformed weights. Our approach aims for broad applicability by neither being restricted to a specific parametric distribution nor relying on cluster structure in the data. It requires only two additional extrapolation-specific parameters, for which reasonable defaults exist. Experimental evaluation on various data streams, synthetic as well as from the real-world domains of credit scoring and environmental health, shows that the model manages to capture monotonous drift patterns accurately and better than existing methods, while requiring no more than 1.5 times the run time of a corresponding static density estimation approach.
Tasks Density Estimation
Published 2019-06-03
URL https://arxiv.org/abs/1906.00912v1
PDF https://arxiv.org/pdf/1906.00912v1.pdf
PWC https://paperswithcode.com/paper/190600912
Repo
Framework
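
A compact sketch of the TDX recipe under simplifying assumptions: the density is a fixed Gaussian basis expansion, the basis weights (compositional data) are mapped to unconstrained log-ratio coordinates, each coordinate is extrapolated linearly over time, and the weights are back-transformed. The paper uses the isometric log-ratio transform; the additive log-ratio is used here only to keep the sketch short.

```python
import numpy as np

def alr(w):
    """Additive log-ratio coordinates of a composition on the simplex."""
    return np.log(w[:-1] / w[-1])

def alr_inv(z):
    e = np.exp(np.append(z, 0.0))
    return e / e.sum()

def extrapolate_weights(times, weight_history, t_query):
    """Fit a line per log-ratio coordinate over time and evaluate at t_query."""
    Z = np.array([alr(w) for w in weight_history])          # (T, K-1)
    coords = [np.polyval(np.polyfit(times, Z[:, j], deg=1), t_query)
              for j in range(Z.shape[1])]
    return alr_inv(np.array(coords))                        # back to the simplex

def density(x, centres, bandwidth, weights):
    """Weighted sum of fixed Gaussian basis functions."""
    basis = np.exp(-0.5 * ((x - centres) / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return float(weights @ basis)

# Toy usage: basis weights drift over three time steps; query a future time point.
times = np.array([0.0, 1.0, 2.0])
history = [np.array([0.6, 0.3, 0.1]),
           np.array([0.5, 0.3, 0.2]),
           np.array([0.4, 0.3, 0.3])]
w_future = extrapolate_weights(times, history, t_query=3.0)
print(w_future, density(0.5, np.array([-1.0, 0.0, 1.0]), 0.5, w_future))
```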

Query Scheduling in the Presence of Complex User Profiles

Title Query Scheduling in the Presence of Complex User Profiles
Authors Haggai Roitman, Avigdor Gal, Louiqa Raschid
Abstract Advances in Web technology enable personalization proxies that assist users in satisfying their complex information monitoring and aggregation needs through the repeated querying of multiple volatile data sources. Such proxies face a scalability challenge when trying to maximize the number of clients served while at the same time fully satisfying clients’ complex user profiles. In this work we use an abstraction of complex execution intervals (CEIs) constructed over simple execution intervals (EIs) to represent user profiles, and use an existing offline approximation as a baseline for maximizing the completeness of capturing CEIs. We present three heuristic solutions for the online problem of query scheduling to satisfy complex user profiles. The first considers only properties of individual EIs, while the other two exploit properties of all EIs in the CEI. We use an extensive set of experiments on real traces and synthetic data to show that heuristics that exploit knowledge of the CEIs dominate across multiple parameter settings.
Tasks
Published 2019-02-27
URL http://arxiv.org/abs/1902.10384v1
PDF http://arxiv.org/pdf/1902.10384v1.pdf
PWC https://paperswithcode.com/paper/query-scheduling-in-the-presence-of-complex
Repo
Framework

Density Propagation with Characteristics-based Deep Learning

Title Density Propagation with Characteristics-based Deep Learning
Authors Tenavi Nakamura-Zimmerer, Daniele Venturi, Qi Gong, Wei Kang
Abstract Uncertainty propagation in nonlinear dynamic systems remains an outstanding problem in scientific computing and control. Numerous approaches have been developed, but they are limited in their capability to tackle problems with more than a few uncertain variables or require large amounts of simulation data. In this paper, we propose a data-driven method for approximating joint probability density functions (PDFs) of nonlinear dynamic systems with initial condition and parameter uncertainty. Our approach leverages the power of deep learning to deal with high-dimensional inputs, but we overcome the need for huge quantities of training data by encoding PDF evolution equations directly into the optimization problem. We demonstrate the potential of the proposed method by applying it to evaluate the robustness of a feedback controller for a six-dimensional rigid body with parameter uncertainty.
Tasks
Published 2019-11-21
URL https://arxiv.org/abs/1911.09311v1
PDF https://arxiv.org/pdf/1911.09311v1.pdf
PWC https://paperswithcode.com/paper/density-propagation-with-characteristics
Repo
Framework
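
The abstract does not state which PDF evolution equation is encoded into the optimization problem; for a deterministic system $\dot{x} = f(x, t)$ with uncertain initial state and parameters, the standard choice is the Liouville equation, shown here only as an illustration of the constraint a learned density $p(x, t)$ would have to satisfy:

$$\frac{\partial p(x,t)}{\partial t} + \nabla_x \cdot \bigl( f(x,t)\, p(x,t) \bigr) = 0,$$

which along the characteristics $\dot{x} = f(x,t)$ reduces to the scalar ODE $\frac{d}{dt} \log p = -\nabla_x \cdot f$, the form that a characteristics-based method can learn pointwise.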