May 6, 2019

2889 words 14 mins read

Paper Group ANR 343

Memory shapes time perception and intertemporal choices. Backward-Forward Search for Manipulation Planning. The Analysis of Local Motion and Deformation in Image Sequences Inspired by Physical Electromagnetic Interaction. Functional Distributional Semantics. Depth and depth-based classification with R-package ddalpha. Learning Dense Correspondence …

Memory shapes time perception and intertemporal choices

Title Memory shapes time perception and intertemporal choices
Authors Pedro A. Ortega, Naftali Tishby
Abstract There is a consensus that human and non-human subjects experience temporal distortions in many stages of their perceptual and decision-making systems. Similarly, intertemporal choice research has shown that decision-makers undervalue future outcomes relative to immediate ones. Here we combine techniques from information theory and artificial intelligence to show how both temporal distortions and intertemporal choice preferences can be explained as a consequence of the coding efficiency of sensorimotor representation. In particular, the model implies that interactions that constrain future behavior are perceived as being both longer in duration and more valuable. Furthermore, using simulations of artificial agents, we investigate how memory constraints enforce a renormalization of the perceived timescales. Our results show that qualitatively different discount functions, such as exponential and hyperbolic discounting, arise as a consequence of an agent’s probabilistic model of the world.
Tasks Decision Making
Published 2016-04-18
URL http://arxiv.org/abs/1604.05129v2
PDF http://arxiv.org/pdf/1604.05129v2.pdf
PWC https://paperswithcode.com/paper/memory-shapes-time-perception-and
Repo
Framework
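
The abstract contrasts exponential and hyperbolic discounting. As a quick illustration of how differently the two families weight delayed outcomes, here is a minimal sketch; the rate parameters are illustrative, not values from the paper.

```python
import numpy as np

# Exponential discounting applies a constant per-step decay; hyperbolic
# discounting falls off quickly for short delays and slowly for long
# ones. Parameter values are illustrative.

def exponential_discount(t, rate=0.1):
    """Value multiplier for a reward delayed by t steps."""
    return np.exp(-rate * t)

def hyperbolic_discount(t, k=0.25):
    """Hyperbolic discounting, steep early and flat late."""
    return 1.0 / (1.0 + k * t)

for t in range(0, 50, 10):
    print(f"t={t:2d}  exp={exponential_discount(t):.3f}  "
          f"hyp={hyperbolic_discount(t):.3f}")
```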

Backward-Forward Search for Manipulation Planning

Title Backward-Forward Search for Manipulation Planning
Authors Caelan Reed Garrett, Tomas Lozano-Perez, Leslie Pack Kaelbling
Abstract In this paper we address planning problems in high-dimensional hybrid configuration spaces, with a particular focus on manipulation planning problems involving many objects. We present the hybrid backward-forward (HBF) planning algorithm that uses a backward identification of constraints to direct the sampling of the infinite action space in a forward search from the initial state towards a goal configuration. The resulting planner is probabilistically complete and can effectively construct long manipulation plans requiring both prehensile and nonprehensile actions in cluttered environments.
Tasks
Published 2016-04-12
URL http://arxiv.org/abs/1604.03468v1
PDF http://arxiv.org/pdf/1604.03468v1.pdf
PWC https://paperswithcode.com/paper/backward-forward-search-for-manipulation
Repo
Framework
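
The key idea in the abstract is that backward reasoning from the goal directs which actions the forward search tries. The toy below makes that concrete on a discrete grid, where the backward pass is a uniform-cost search; the authors' HBF algorithm instead identifies constraints in a hybrid, infinite action space, so this is only an illustrative analogue.

```python
import heapq

GRID = 10
GOAL = (9, 9)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def backward_costs(goal):
    # Backward pass: cost-to-go from the goal by uniform-cost search.
    # In HBF this role is played by backward-identified constraints.
    cost = {goal: 0}
    frontier = [(0, goal)]
    while frontier:
        c, (x, y) = heapq.heappop(frontier)
        for dx, dy in ACTIONS:
            nx, ny = x + dx, y + dy
            if 0 <= nx < GRID and 0 <= ny < GRID and (nx, ny) not in cost:
                cost[(nx, ny)] = c + 1
                heapq.heappush(frontier, (c + 1, (nx, ny)))
    return cost

def forward_search(start, goal):
    # Forward search from the start, steered by the backward information.
    guide = backward_costs(goal)
    state, plan = start, []
    while state != goal:
        # Greedily pick the successor that most reduces the backward cost.
        state = min(
            ((state[0] + dx, state[1] + dy) for dx, dy in ACTIONS
             if (state[0] + dx, state[1] + dy) in guide),
            key=guide.get,
        )
        plan.append(state)
    return plan

print(len(forward_search((0, 0), GOAL)), "steps to goal")
```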

The Analysis of Local Motion and Deformation in Image Sequences Inspired by Physical Electromagnetic Interaction

Title The Analysis of Local Motion and Deformation in Image Sequences Inspired by Physical Electromagnetic Interaction
Authors Xiaodong Zhuang, N. E. Mastorakis
Abstract To analyze the motion and deformation of objects in an image sequence, a novel method is presented for analyzing the local changes of object edges between two related images (such as two adjacent frames in a video sequence), inspired by physical electromagnetic interaction. The changes of edges between adjacent frames are analyzed by simulating a virtual current interaction, which reflects changes in an object’s position or shape. A virtual current along the main edge line is proposed, based on significant edge extraction. The virtual interaction between the current elements in the two related images is then studied by imitating the interaction between physical current-carrying wires. The experimental results show that the distribution of magnetic forces that the current elements in one image exert on those in the other reflects the local change of edge lines from one image to the other, which is important for further analysis.
Tasks
Published 2016-10-12
URL http://arxiv.org/abs/1610.03612v1
PDF http://arxiv.org/pdf/1610.03612v1.pdf
PWC https://paperswithcode.com/paper/the-analysis-of-local-motion-and-deformation
Repo
Framework
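
The "virtual current" machinery can be made concrete with Ampère-style forces between current elements along two edge polylines. The sketch below drops physical constants and uses synthetic edge points; a real pipeline would take the points from the significant-edge extraction step the paper describes.

```python
import numpy as np

# Treat the two edge curves as current-carrying wires and compute, for
# each current element on edge A, the magnetic force exerted by all
# elements on edge B (Biot-Savart / Ampere form, constants dropped).

def element_forces(edge_a, edge_b):
    """Net force vector on each segment of edge_a due to edge_b.

    edge_a, edge_b: (N, 2) arrays of ordered edge points.
    """
    # Current elements: midpoints and direction vectors of segments.
    mid_a = (edge_a[1:] + edge_a[:-1]) / 2
    dl_a = edge_a[1:] - edge_a[:-1]
    mid_b = (edge_b[1:] + edge_b[:-1]) / 2
    dl_b = edge_b[1:] - edge_b[:-1]

    forces = np.zeros_like(mid_a)
    for i, (p, da) in enumerate(zip(mid_a, dl_a)):
        r = p - mid_b                      # vectors from B elements to p
        dist = np.linalg.norm(r, axis=1)
        # In 2D, dl_b x r_hat has only a z component; crossing da with
        # that z component gives an in-plane force direction.
        bz = (dl_b[:, 0] * r[:, 1] - dl_b[:, 1] * r[:, 0]) / dist**3
        forces[i] = np.sum(bz) * np.array([da[1], -da[0]])
    return forces

edge1 = np.stack([np.linspace(0, 10, 20), np.zeros(20)], axis=1)
edge2 = edge1 + np.array([0.5, 1.0])       # the same edge, displaced
print(element_forces(edge1, edge2)[:3])
```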

Functional Distributional Semantics

Title Functional Distributional Semantics
Authors Guy Emerson, Ann Copestake
Abstract Vector space models have become popular in distributional semantics, despite the challenges they face in capturing various semantic phenomena. We propose a novel probabilistic framework which draws on both formal semantics and recent advances in machine learning. In particular, we separate predicates from the entities they refer to, allowing us to perform Bayesian inference based on logical forms. We describe an implementation of this framework using a combination of Restricted Boltzmann Machines and feedforward neural networks. Finally, we demonstrate the feasibility of this approach by training it on a parsed corpus and evaluating it on established similarity datasets.
Tasks Bayesian Inference
Published 2016-06-26
URL http://arxiv.org/abs/1606.08003v1
PDF http://arxiv.org/pdf/1606.08003v1.pdf
PWC https://paperswithcode.com/paper/functional-distributional-semantics
Repo
Framework
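
The framework's central move is to represent a predicate not as a vector but as a semantic function mapping an entity representation to a probability of truth. The sketch below uses a toy logistic classifier per predicate; the paper's implementation models entities jointly with Restricted Boltzmann Machines, so the random weights and dimensions here are purely illustrative.

```python
import numpy as np

# A predicate as a "semantic function": a classifier over latent entity
# features that outputs the probability the predicate is true of the
# entity. Weights are random stand-ins for a trained model.

rng = np.random.default_rng(0)

def make_predicate(dim=8):
    w, b = rng.normal(size=dim), rng.normal()
    return lambda entity: 1.0 / (1.0 + np.exp(-(entity @ w + b)))

cat = make_predicate()
entity = rng.normal(size=8)        # a latent entity representation
print("P(cat(entity) is true) =", round(float(cat(entity)), 3))
```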

Depth and depth-based classification with R-package ddalpha

Title Depth and depth-based classification with R-package ddalpha
Authors Oleksii Pokotylo, Pavlo Mozharovskyi, Rainer Dyckerhoff
Abstract Following the seminal idea of Tukey, data depth is a function that measures how close an arbitrary point of the space lies to an implicitly defined center of a data cloud. Having undergone theoretical and computational developments, it is now employed in numerous applications, with classification being the most popular one. The R package ddalpha is software designed to fuse the practitioner’s experience with recent achievements in the area of data depth and depth-based classification. ddalpha provides an implementation for exact and approximate computation of the most reasonable and widely applied notions of data depth. These can further be used in the depth-based multivariate and functional classifiers implemented in the package, with the $DD\alpha$-procedure as the main focus. The package can be extended with user-defined custom depth methods and separators. The implemented functions for depth visualization and the built-in benchmark procedures may also provide insight into the geometry of the data and the quality of pattern recognition.
Tasks
Published 2016-08-14
URL http://arxiv.org/abs/1608.04109v1
PDF http://arxiv.org/pdf/1608.04109v1.pdf
PWC https://paperswithcode.com/paper/depth-and-depth-based-classification-with-r
Repo
Framework
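
ddalpha is an R package; as a language-neutral illustration of the depth idea itself, here is Mahalanobis depth, one of the simplest depth notions (and among those the package computes): points near the center of the cloud receive depth near 1, outliers near 0.

```python
import numpy as np

# Mahalanobis depth: D(x) = 1 / (1 + (x - mu)' S^{-1} (x - mu)),
# where mu and S are the mean and covariance of the data cloud.

def mahalanobis_depth(x, data):
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    d2 = (x - mu) @ cov_inv @ (x - mu)
    return 1.0 / (1.0 + d2)

rng = np.random.default_rng(1)
cloud = rng.normal(size=(200, 2))
print("center: ", round(mahalanobis_depth(np.zeros(2), cloud), 3))
print("outlier:", round(mahalanobis_depth(np.array([4.0, 4.0]), cloud), 3))
```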

Learning Dense Correspondence via 3D-guided Cycle Consistency

Title Learning Dense Correspondence via 3D-guided Cycle Consistency
Authors Tinghui Zhou, Philipp Krähenbühl, Mathieu Aubry, Qixing Huang, Alexei A. Efros
Abstract Discriminative deep learning approaches have shown impressive results for problems where human-labeled ground truth is plentiful, but what about tasks where labels are difficult or impossible to obtain? This paper tackles one such problem: establishing dense visual correspondence across different object instances. For this task, although we do not know what the ground-truth is, we know it should be consistent across instances of that category. We exploit this consistency as a supervisory signal to train a convolutional neural network to predict cross-instance correspondences between pairs of images depicting objects of the same category. For each pair of training images we find an appropriate 3D CAD model and render two synthetic views to link in with the pair, establishing a correspondence flow 4-cycle. We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and real-to-synthetic correspondences that are cycle-consistent with the ground-truth. At test time, no CAD models are required. We demonstrate that our end-to-end trained ConvNet supervised by cycle-consistency outperforms state-of-the-art pairwise matching methods in correspondence-related tasks.
Tasks
Published 2016-04-18
URL http://arxiv.org/abs/1604.05383v1
PDF http://arxiv.org/pdf/1604.05383v1.pdf
PWC https://paperswithcode.com/paper/learning-dense-correspondence-via-3d-guided
Repo
Framework
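
The 4-cycle supervision can be sketched with 1-D index maps standing in for dense 2-D flows: composing the three predicted legs (synthetic to real, real to real, real to synthetic) should reproduce the renderer's ground-truth synthetic-to-synthetic correspondence. The circular-shift maps below are hand-made toys, not ConvNet outputs.

```python
import numpy as np

# Correspondences as index maps on 1-D "images" of length N. The loss
# counts positions where the composed cycle disagrees with the
# ground-truth synthetic-to-synthetic map from the renderer.

N = 8
syn1_to_real1 = np.roll(np.arange(N), 1)    # predicted
real1_to_real2 = np.roll(np.arange(N), 2)   # predicted
real2_to_syn2 = np.roll(np.arange(N), -1)   # predicted
syn1_to_syn2_gt = np.roll(np.arange(N), 2)  # ground truth (renderer)

composed = real2_to_syn2[real1_to_real2[syn1_to_real1]]
cycle_error = np.mean(composed != syn1_to_syn2_gt)
print("cycle-consistency error:", cycle_error)
```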

Evaluating Creative Language Generation: The Case of Rap Lyric Ghostwriting

Title Evaluating Creative Language Generation: The Case of Rap Lyric Ghostwriting
Authors Peter Potash, Alexey Romanov, Anna Rumshisky
Abstract Language generation tasks that seek to mimic human ability to use language creatively are difficult to evaluate, since one must consider creativity, style, and other non-trivial aspects of the generated text. The goal of this paper is to develop evaluation methods for one such task, ghostwriting of rap lyrics, and to provide an explicit, quantifiable foundation for the goals and future directions of this task. Ghostwriting must produce text that is similar in style to the emulated artist, yet distinct in content. We develop a novel evaluation methodology that addresses several complementary aspects of this task, and illustrate how such evaluation can be used to meaningfully analyze system performance. We provide a corpus of lyrics for 13 rap artists, annotated for stylistic similarity, which allows us to assess the feasibility of manual evaluation for generated verse.
Tasks Text Generation
Published 2016-12-09
URL http://arxiv.org/abs/1612.03205v1
PDF http://arxiv.org/pdf/1612.03205v1.pdf
PWC https://paperswithcode.com/paper/evaluating-creative-language-generation-the
Repo
Framework
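
One axis of the evaluation is that a ghostwritten verse must be distinct in content from the emulated artist's lyrics. A simple proxy for content overlap, sketched below, is the fraction of the verse's n-grams already present in the artist's corpus; this is an assumption-level stand-in, not the paper's full methodology, which also covers stylistic similarity and manual annotation.

```python
# Lower overlap means more novel content; a verse that merely quotes
# the training lyrics scores near 1.0.

def ngram_overlap(generated, corpus, n=3):
    def ngrams(text):
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    gen = ngrams(generated)
    return len(gen & ngrams(corpus)) / max(len(gen), 1)

corpus = "started from the bottom now we here started from the bottom"
verse = "started from the top now we fall"
print("3-gram overlap:", round(ngram_overlap(verse, corpus), 3))
```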

A Novel Scene Text Detection Algorithm Based On Convolutional Neural Network

Title A Novel Scene Text Detection Algorithm Based On Convolutional Neural Network
Authors Xiaohang Ren, Kai Chen, Jun Sun
Abstract Candidate text region extraction plays a critical role in convolutional neural network (CNN) based text detection from natural images. In this paper, we propose a CNN-based scene text detection algorithm with a new text region extractor. The candidate text region extractor, called I-MSER, is based on Maximally Stable Extremal Regions (MSER) and improves the independence and completeness of the extracted candidate text regions. The design of I-MSER is motivated by the observation that text MSERs are highly similar and close to each other. The independence of the candidate text regions obtained by I-MSER is guaranteed by selecting the most representative regions from an MSER tree, which is generated according to the spatial overlap among the MSERs. A multi-layer CNN model is trained to score the confidence of the regions extracted by I-MSER for text detection. The new text detection algorithm based on I-MSER is evaluated on the widely used ICDAR 2011 and 2013 datasets and shows improved detection performance compared to existing algorithms.
Tasks Scene Text Detection
Published 2016-04-07
URL http://arxiv.org/abs/1604.01894v1
PDF http://arxiv.org/pdf/1604.01894v1.pdf
PWC https://paperswithcode.com/paper/a-novel-scene-text-detection-algorithm-based
Repo
Framework
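
The selection step can be sketched with OpenCV's stock MSER detector plus an overlap-based filter: keep a region only if it does not heavily overlap an already-kept one. The IoU threshold and the largest-region-first ordering are illustrative assumptions; the paper instead selects representatives from an MSER tree built on spatial overlap.

```python
import cv2
import numpy as np

def iou(a, b):
    # Intersection-over-union of two (x, y, w, h) bounding boxes.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / float(aw * ah + bw * bh - inter)

def candidate_regions(gray, iou_thresh=0.7):
    mser = cv2.MSER_create()
    _, boxes = mser.detectRegions(gray)
    keep = []
    # Largest first: keep a box only if it is independent of kept ones.
    for box in sorted(boxes, key=lambda b: b[2] * b[3], reverse=True):
        if all(iou(box, k) < iou_thresh for k in keep):
            keep.append(box)
    return keep

img = np.full((60, 200), 255, np.uint8)
cv2.putText(img, "TEXT", (10, 45), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 0, 3)
print(len(candidate_regions(img)), "candidate text regions")
```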

Curiosity-Aware Bargaining

Title Curiosity-Aware Bargaining
Authors Cédric Buron, Sylvain Ductor, Zahia Guessoum
Abstract Opponent modeling consists of modeling the strategy or preferences of an agent from the data it provides. In the context of automated negotiation, machine learning can yield an advantage so overwhelming that it may deter casual agents from taking part in the bargaining process. We call an agent “curious” if it is driven by the desire to negotiate in order to collect information and improve its opponent model. However, neither curiosity-based rationality nor curiosity-robust protocols have been studied in automated negotiation. In this paper, we rely on mechanism design to propose three extensions of the standard bargaining protocol that limit information leakage. These extensions are supported by an enhanced rationality model that accounts for the exchanged information, and they are analyzed theoretically and evaluated experimentally.
Tasks
Published 2016-12-30
URL http://arxiv.org/abs/1612.09433v1
PDF http://arxiv.org/pdf/1612.09433v1.pdf
PWC https://paperswithcode.com/paper/curiosity-aware-bargaining
Repo
Framework
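
The information leakage that motivates the protocol extensions is easy to see in a stripped-down toy: each rejected offer tells a curious agent something about the opponent's hidden reservation value, so plain alternating offers amount to a search over that value. This is an illustrative toy, not one of the paper's three extensions.

```python
# A curious agent bisects the opponent's hidden reservation value
# purely from accept/reject responses.

opponent_reservation = 0.6          # hidden from the curious agent

low, high = 0.0, 1.0                # the agent's belief interval
for round_no in range(10):
    offer = (low + high) / 2        # probe the midpoint of the belief
    if offer >= opponent_reservation:
        high = offer                # leak: reservation <= offer
        print(f"round {round_no}: {offer:.3f} accepted")
        break
    low = offer                     # leak: reservation > offer
    print(f"round {round_no}: {offer:.3f} rejected")

print(f"agent's belief: reservation in ({low:.3f}, {high:.3f}]")
```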

WikiReading: A Novel Large-scale Language Understanding Task over Wikipedia

Title WikiReading: A Novel Large-scale Language Understanding Task over Wikipedia
Authors Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, David Berthelot
Abstract We present WikiReading, a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles. The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs). We compare various state-of-the-art DNN-based architectures for document classification, information extraction, and question answering. We find that models supporting a rich answer space, such as word or character sequences, perform best. Our best-performing model, a word-level sequence to sequence model with a mechanism to copy out-of-vocabulary words, obtains an accuracy of 71.8%.
Tasks Document Classification, Question Answering
Published 2016-08-11
URL http://arxiv.org/abs/1608.03542v2
PDF http://arxiv.org/pdf/1608.03542v2.pdf
PWC https://paperswithcode.com/paper/wikireading-a-novel-large-scale-language
Repo
Framework
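
A copy mechanism in the style of the best-performing model can be sketched as mixing the decoder's vocabulary softmax with its attention over source tokens, so an out-of-vocabulary answer can be copied from the article. The exact formulation here is an assumption (the standard pointer/copy mixture), and all probabilities are hand-picked for illustration.

```python
import numpy as np

# p(w) = p_gen * p_vocab(w) + (1 - p_gen) * attention mass on w,
# over the union of the vocabulary and the source tokens.

vocab = ["<unk>", "france", "paris"]
source_tokens = ["montmirail", "is", "in", "france"]

p_vocab = np.array([0.1, 0.7, 0.2])           # decoder softmax
attention = np.array([0.8, 0.05, 0.05, 0.1])  # over source tokens
p_gen = 0.3                                   # gate: generate vs copy

final = {w: p_gen * p for w, p in zip(vocab, p_vocab)}
for w, a in zip(source_tokens, attention):
    final[w] = final.get(w, 0.0) + (1 - p_gen) * a

best = max(final, key=final.get)              # OOV word wins by copying
print(best, "->", round(final[best], 3))
```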

Bootstrapping with Models: Confidence Intervals for Off-Policy Evaluation

Title Bootstrapping with Models: Confidence Intervals for Off-Policy Evaluation
Authors Josiah P. Hanna, Peter Stone, Scott Niekum
Abstract For an autonomous agent, executing a poor policy may be costly or even dangerous. For such agents, it is desirable to determine confidence interval lower bounds on the performance of any given policy without executing said policy. Current methods for exact high confidence off-policy evaluation that use importance sampling require a substantial amount of data to achieve a tight lower bound. Existing model-based methods only address the problem in discrete state spaces. Since exact bounds are intractable for many domains we trade off strict guarantees of safety for more data-efficient approximate bounds. In this context, we propose two bootstrapping off-policy evaluation methods which use learned MDP transition models in order to estimate lower confidence bounds on policy performance with limited data in both continuous and discrete state spaces. Since direct use of a model may introduce bias, we derive a theoretical upper bound on model bias for when the model transition function is estimated with i.i.d. trajectories. This bound broadens our understanding of the conditions under which model-based methods have high bias. Finally, we empirically evaluate our proposed methods and analyze the settings in which different bootstrapping off-policy confidence interval methods succeed and fail.
Tasks
Published 2016-06-20
URL http://arxiv.org/abs/1606.06126v3
PDF http://arxiv.org/pdf/1606.06126v3.pdf
PWC https://paperswithcode.com/paper/bootstrapping-with-models-confidence
Repo
Framework
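
The model-based bootstrap can be sketched on a one-parameter toy MDP: resample the behavior data with replacement, fit a transition model to each resample, evaluate the target policy in each fitted model, and report a percentile of the resulting values as the lower confidence bound. The toy dynamics and the 5% level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def collect(n):
    # Behavior data from a toy MDP: the action succeeds (reward 1)
    # with probability 0.7, fails (reward 0) otherwise.
    return (rng.random(n) < 0.7).astype(float)

def policy_value(p_success):
    return p_success                 # expected one-step reward

data = collect(50)
values = []
for _ in range(1000):
    boot = rng.choice(data, size=len(data), replace=True)
    p_hat = boot.mean()              # transition model fit to resample
    values.append(policy_value(p_hat))

print("95% lower bound on policy value:",
      round(float(np.percentile(values, 5)), 3))
```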

Does quantification without adjustments work?

Title Does quantification without adjustments work?
Authors Dirk Tasche
Abstract Classification is the task of predicting the class labels of objects based on the observation of their features. In contrast, quantification has been defined as the task of determining the prevalences of the different class labels in a target dataset. The simplest approach to quantification is Classify & Count, where a classifier is optimised for classification on a training set and applied to the target dataset to predict class labels. In the case of binary quantification, the number of predicted positive labels is then used as an estimate of the prevalence of the positive class in the target dataset. Since the performance of Classify & Count for quantification is known to be inferior, its results are typically subject to adjustments. However, some researchers have recently suggested that Classify & Count might actually work without adjustments if it is based on a classifier that was specifically trained for quantification. We discuss the theoretical foundation of this claim and explore its potential and limitations with a numerical example based on the binormal model with equal variances. In order to identify an optimal quantifier in the binormal setting, we introduce the concept of local Bayes optimality. As a side remark, we present a complete proof of a theorem by Ye et al. (2012).
Tasks
Published 2016-02-28
URL http://arxiv.org/abs/1602.08780v2
PDF http://arxiv.org/pdf/1602.08780v2.pdf
PWC https://paperswithcode.com/paper/does-quantification-without-adjustments-work
Repo
Framework
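
The relationship between Classify & Count and its standard correction is a one-line formula: the expected rate of positive predictions is p*tpr + (1-p)*fpr, and the "adjusted count" inverts this relation using the classifier's rates measured on held-out data. The numbers below are illustrative.

```python
# Classify & Count versus the adjusted count correction.

tpr, fpr = 0.8, 0.1          # classifier rates from validation data
true_prevalence = 0.3

# Expected fraction of positive predictions on the target dataset:
cc = true_prevalence * tpr + (1 - true_prevalence) * fpr

# Adjusted count inverts that relation to recover the prevalence:
acc = (cc - fpr) / (tpr - fpr)
print(f"Classify&Count: {cc:.2f}  Adjusted: {acc:.2f}")
```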

Deep Q-Networks for Accelerating the Training of Deep Neural Networks

Title Deep Q-Networks for Accelerating the Training of Deep Neural Networks
Authors Jie Fu
Abstract In this paper, we propose a principled deep reinforcement learning (RL) approach that is able to accelerate the convergence rate of general deep neural networks (DNNs). With our approach, a deep RL agent (synonym for optimizer in this work) is used to automatically learn policies about how to schedule learning rates during the optimization of a DNN. The state features of the agent are learned from the weight statistics of the optimizee during training. The reward function of this agent is designed to learn policies that minimize the optimizee’s training time given a certain performance goal. The actions of the agent correspond to changing the learning rate for the optimizee during training. As far as we know, this is the first attempt to use deep RL to learn how to optimize a large-sized DNN. We perform extensive experiments on a standard benchmark dataset and demonstrate the effectiveness of the policies learned by our approach.
Tasks
Published 2016-06-05
URL http://arxiv.org/abs/1606.01467v10
PDF http://arxiv.org/pdf/1606.01467v10.pdf
PWC https://paperswithcode.com/paper/deep-q-networks-for-accelerating-the-training
Repo
Framework
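
The control loop the abstract describes looks like this: the agent observes weight statistics of the optimizee, and its action rescales the learning rate. The skeleton below uses a random agent and a toy quadratic optimizee as placeholders for the paper's deep Q-network and benchmark DNNs.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10)              # optimizee weights, loss = ||w||^2
lr = 0.1
ACTIONS = [0.5, 1.0, 2.0]            # multiply LR down, keep, up

for step in range(20):
    # State features: weight statistics plus the current LR; a trained
    # DQN would map this state to an action, we sample randomly.
    state = np.array([np.mean(np.abs(w)), np.std(w), lr])
    action = rng.choice(ACTIONS)
    lr = float(np.clip(lr * action, 1e-4, 0.4))
    w -= lr * 2 * w                  # one gradient step on the optimizee

print("final loss:", round(float(w @ w), 4))
```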

Domain Adaptation for Named Entity Recognition in Online Media with Word Embeddings

Title Domain Adaptation for Named Entity Recognition in Online Media with Word Embeddings
Authors Vivek Kulkarni, Yashar Mehdad, Troy Chevalier
Abstract Content on the Internet is heterogeneous and arises from various domains such as News, Entertainment, Finance and Technology. Understanding such content requires identifying named entities (persons, places and organizations) as one of the key steps. Traditionally, Named Entity Recognition (NER) systems have been built using available annotated datasets (such as CoNLL and MUC) and demonstrate excellent performance. However, these models fail to generalize to other domains, such as Sports and Finance, where conventions and language use can differ significantly. Furthermore, several domains do not have large amounts of annotated labeled data for training robust Named Entity Recognition models. A key step towards addressing this challenge is to adapt models learned on domains where large amounts of annotated training data are available to domains with scarce annotated data. In this paper, we propose methods to effectively adapt models learned on one domain to other domains using distributed word representations. First, we analyze the linguistic variation present across domains to identify key linguistic insights that can boost performance across domains. We propose methods to capture domain-specific semantics of word usage in addition to global semantics. We then demonstrate how to effectively use such domain-specific knowledge to learn NER models that outperform previous baselines in the domain adaptation setting.
Tasks Domain Adaptation, Named Entity Recognition, Word Embeddings
Published 2016-12-01
URL http://arxiv.org/abs/1612.00148v1
PDF http://arxiv.org/pdf/1612.00148v1.pdf
PWC https://paperswithcode.com/paper/domain-adaptation-for-named-entity
Repo
Framework
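
The feature idea, sketched: represent each token by a pretrained embedding (global semantics) concatenated with an embedding capturing target-domain usage (domain-specific semantics), then feed these features to the NER model. The tiny random lookup tables below stand in for real pretrained vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
global_emb = {w: rng.normal(size=4) for w in ["messi", "scored", "goals"]}
domain_emb = {w: rng.normal(size=2) for w in ["messi", "scored", "goals"]}

def token_features(token):
    # Concatenate global and domain-specific representations;
    # unknown tokens fall back to zero vectors.
    g = global_emb.get(token, np.zeros(4))   # global semantics
    d = domain_emb.get(token, np.zeros(2))   # sports-domain semantics
    return np.concatenate([g, d])

print(token_features("messi").shape)         # (6,) feature vector
```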

Motion Estimated-Compensated Reconstruction with Preserved-Features in Free-Breathing Cardiac MRI

Title Motion Estimated-Compensated Reconstruction with Preserved-Features in Free-Breathing Cardiac MRI
Authors Aurelien Bustin, Anne Menini, Martin A. Janich, Darius Burschka, Jacques Felblinger, Anja C. S. Brau, Freddy Odille
Abstract To develop an efficient motion-compensated reconstruction technique for free-breathing cardiac magnetic resonance imaging (MRI) that allows high-quality images to be reconstructed from multiple undersampled single-shot acquisitions. The proposed method is a joint image reconstruction and motion correction method consisting of several steps, including a non-rigid motion extraction and a motion-compensated reconstruction. The reconstruction includes a denoising with the Beltrami regularization, which offers an ideal compromise between feature preservation and staircasing reduction. Results were assessed in simulation, phantom and volunteer experiments. The proposed joint image reconstruction and motion correction method exhibits visible quality improvement over previous methods while reconstructing sharper edges. Moreover, when the acceleration factor increases, standard methods show blurry results while the proposed method preserves image quality. The method was applied to free-breathing single-shot cardiac MRI, successfully achieving high image quality and higher spatial resolution than conventional segmented methods, with the potential to offer high-quality delayed enhancement scans in challenging patients.
Tasks Denoising, Image Reconstruction
Published 2016-11-15
URL http://arxiv.org/abs/1611.04655v1
PDF http://arxiv.org/pdf/1611.04655v1.pdf
PWC https://paperswithcode.com/paper/motion-estimated-compensated-reconstruction
Repo
Framework
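
The regularizer named in the abstract can be written down directly: the Beltrami energy sqrt(1 + beta^2 |grad u|^2) behaves like a smooth quadratic penalty where gradients are small and like total variation at edges, which is the feature-preserving, staircase-reducing compromise described. The value of beta below is illustrative.

```python
import numpy as np

# Discrete Beltrami energy of an image: sum of sqrt(1 + b^2 |grad u|^2)
# over pixels, with forward differences from np.gradient.

def beltrami_energy(u, beta=1.0):
    gy, gx = np.gradient(u.astype(float))
    return np.sum(np.sqrt(1.0 + beta**2 * (gx**2 + gy**2)))

img = np.zeros((32, 32))
img[:, 16:] = 1.0                    # a sharp edge to be preserved
print("energy:", round(float(beltrami_energy(img)), 2))
```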