July 29, 2019

2893 words 14 mins read

Paper Group ANR 29

SilNet : Single- and Multi-View Reconstruction by Learning from Silhouettes. Inverse Reinforcement Learning for Marketing. Debbie, the Debate Bot of the Future. Deep Haptic Model Predictive Control for Robot-Assisted Dressing. Accelerate RNN-based Training with Importance Sampling. Surprise Search for Evolutionary Divergence. Deep Multi-view Models …

SilNet : Single- and Multi-View Reconstruction by Learning from Silhouettes

Title SilNet : Single- and Multi-View Reconstruction by Learning from Silhouettes
Authors Olivia Wiles, Andrew Zisserman
Abstract The objective of this paper is 3D shape understanding from single and multiple images. To this end, we introduce a new deep-learning architecture and loss function, SilNet, that can handle multiple views in an order-agnostic manner. The architecture is fully convolutional, and for training we use a proxy task of silhouette prediction, rather than directly learning a mapping from 2D images to 3D shape as has been the target in most recent work. We demonstrate that with the SilNet architecture there is generalisation over the number of views – for example, SilNet trained on 2 views can be used with 3 or 4 views at test-time; and performance improves with more views. We introduce two new synthetic datasets: a blobby object dataset useful for pre-training, and a challenging and realistic sculpture dataset; and demonstrate on these datasets that SilNet has indeed learnt 3D shape. Finally, we show that SilNet exceeds the state of the art on the ShapeNet benchmark dataset, and use SilNet to generate novel views of the sculpture dataset.
Tasks
Published 2017-11-21
URL http://arxiv.org/abs/1711.07888v1
PDF http://arxiv.org/pdf/1711.07888v1.pdf
PWC https://paperswithcode.com/paper/silnet-single-and-multi-view-reconstruction
Repo
Framework
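A minimal sketch (PyTorch) of the two ideas the abstract highlights: fusing per-view features with an order-agnostic pooling operation, so the same network accepts any number of views, and supervising with a silhouette-prediction proxy loss. The module names, layer sizes, and max-pooling fusion are illustrative assumptions, not the authors' SilNet architecture.

```python
import torch
import torch.nn as nn

class ViewEncoder(nn.Module):
    """Fully convolutional encoder applied independently to each input view."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class SilhouettePredictor(nn.Module):
    """Fuses per-view features by element-wise max (order-agnostic) and
    decodes a silhouette probability map."""
    def __init__(self):
        super().__init__()
        self.encoder = ViewEncoder()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, views):            # views: (B, V, 3, H, W)
        b, v, c, h, w = views.shape
        feats = self.encoder(views.view(b * v, c, h, w))
        feats = feats.view(b, v, *feats.shape[1:])
        fused = feats.max(dim=1).values  # pooling over views => view-count agnostic
        return self.decoder(fused)       # silhouette logits (B, 1, H, W)

model = SilhouettePredictor()
views = torch.rand(2, 3, 3, 64, 64)              # works for any number of views V
target_sil = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = nn.BCEWithLogitsLoss()(model(views), target_sil)
```

Because the fusion is a symmetric pooling over the view axis, a model trained with 2 views can be evaluated with 3 or 4 views without any change, which is the generalisation property the abstract describes.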

Inverse Reinforcement Learning for Marketing

Title Inverse Reinforcement Learning for Marketing
Authors Igor Halperin
Abstract Learning customer preferences from observed behaviour is an important topic in the marketing literature. Structural models typically model forward-looking customers or firms as utility-maximizing agents whose utility is estimated using methods of Stochastic Optimal Control. We suggest an alternative approach to studying dynamic consumer demand, based on Inverse Reinforcement Learning (IRL). We develop a version of Maximum Entropy IRL that leads to a highly tractable model formulation, amounting to low-dimensional convex optimization in the search for optimal model parameters. Using simulations of consumer demand, we show that observational noise for identical customers can easily be confused with apparent consumer heterogeneity.
Tasks
Published 2017-12-13
URL http://arxiv.org/abs/1712.04612v1
PDF http://arxiv.org/pdf/1712.04612v1.pdf
PWC https://paperswithcode.com/paper/inverse-reinforcement-learning-for-marketing
Repo
Framework
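To make the "low-dimensional convex optimization" concrete, here is a hedged, generic sketch of a maximum-entropy style fit: observed actions are assumed to follow a softmax over linear rewards θ·φ(s, a), and θ is recovered by minimizing the negative log-likelihood, which is convex in θ. The feature construction and data are invented for illustration; this is not the paper's specific demand model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_obs, n_actions, n_feat = 500, 4, 3
phi = rng.normal(size=(n_obs, n_actions, n_feat))    # features phi(s, a)
true_theta = np.array([1.0, -0.5, 0.3])
logits = phi @ true_theta
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
actions = np.array([rng.choice(n_actions, p=p) for p in probs])  # observed choices

def neg_log_likelihood(theta):
    z = phi @ theta                                   # (n_obs, n_actions)
    log_norm = np.log(np.exp(z).sum(axis=1))
    return -(z[np.arange(n_obs), actions] - log_norm).sum()

theta_hat = minimize(neg_log_likelihood, np.zeros(n_feat), method="BFGS").x
print("recovered reward weights:", theta_hat)
```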

Debbie, the Debate Bot of the Future

Title Debbie, the Debate Bot of the Future
Authors Geetanjali Rakshit, Kevin K. Bowden, Lena Reed, Amita Misra, Marilyn Walker
Abstract Chatbots are a rapidly expanding application of dialogue systems, with companies switching to bot services for customer support and new applications for users interested in casual conversation. One style of casual conversation is argument: many people love nothing more than a good argument. Moreover, there are a number of existing corpora of argumentative dialogues, annotated for agreement and disagreement, stance, sarcasm and argument quality. This paper introduces Debbie, a novel arguing bot that selects arguments from conversational corpora and aims to use them appropriately in context. We present an initial working prototype of Debbie with some preliminary evaluation, and describe future work.
Tasks
Published 2017-09-10
URL http://arxiv.org/abs/1709.03167v1
PDF http://arxiv.org/pdf/1709.03167v1.pdf
PWC https://paperswithcode.com/paper/debbie-the-debate-bot-of-the-future
Repo
Framework
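Since the bot's core move is selecting a corpus argument that fits the current context, here is a minimal retrieval sketch: pick the stored turn most similar to the user's utterance, filtered to the opposing stance. The toy corpus, stance labels, and TF-IDF retrieval are assumptions for illustration, not Debbie's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    ("Gun control reduces accidental deaths.", "pro"),
    ("Gun control only restricts law-abiding citizens.", "con"),
    ("Background checks are a minimal burden.", "pro"),
]

def respond(user_turn, user_stance):
    # Argue back: only consider corpus turns with the opposite stance.
    candidates = [text for text, stance in corpus if stance != user_stance]
    vec = TfidfVectorizer().fit(candidates + [user_turn])
    sims = cosine_similarity(vec.transform([user_turn]), vec.transform(candidates))
    return candidates[sims.argmax()]

print(respond("I think gun control saves lives.", "pro"))
```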

Deep Haptic Model Predictive Control for Robot-Assisted Dressing

Title Deep Haptic Model Predictive Control for Robot-Assisted Dressing
Authors Zackory Erickson, Henry M. Clever, Greg Turk, C. Karen Liu, Charles C. Kemp
Abstract Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model only use haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real-world physical human-robot interaction can be time consuming, costly, and put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2s prediction horizon, our controller succeeded at high rates and lowered applied force while navigating the garment around a person's fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance, with the sleeve catching on the participants' fists and elbows, demonstrating the value of our model's predictions. These behaviors of mitigating catches emerged from our deep predictive model and the controller objective function, which primarily penalizes high forces.
Tasks Common Sense Reasoning
Published 2017-09-27
URL https://arxiv.org/abs/1709.09735v3
PDF https://arxiv.org/pdf/1709.09735v3.pdf
PWC https://paperswithcode.com/paper/deep-haptic-model-predictive-control-for
Repo
Framework
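A hedged sketch of the control loop described above: sample candidate end-effector actions, score each with a force-penalizing objective using a predictive model, and execute the best one. `predict_forces` below is a stand-in for the paper's learned recurrent model, and the cost weights and action ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_forces(state, action, horizon=5):
    """Placeholder for the learned model: predicted garment forces (N) over the
    horizon for a proposed action. Here: random values, for illustration only."""
    return np.abs(rng.normal(loc=np.linalg.norm(action), scale=0.5, size=horizon))

def mpc_step(state, n_candidates=64, force_weight=10.0):
    best_action, best_cost = None, np.inf
    for _ in range(n_candidates):
        action = rng.uniform(-0.05, 0.05, size=3)      # small Cartesian motion (m)
        forces = predict_forces(state, action)
        progress = action[0]                           # assume +x moves the gown along the arm
        cost = force_weight * forces.max() - progress  # penalize high predicted force
        if cost < best_cost:
            best_action, best_cost = action, cost
    return best_action

print("chosen action:", mpc_step(state=None))
```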

Accelerate RNN-based Training with Importance Sampling

Title Accelerate RNN-based Training with Importance Sampling
Authors Fei Wang, Xiaofeng Gao, Guihai Chen, Jun Ye
Abstract Importance sampling (IS), an elegant and efficient variance reduction (VR) technique for accelerating stochastic optimization, has attracted much research attention recently. Unlike the commonly adopted uniform sampling in stochastic optimization, IS-integrated algorithms sample training data at each iteration with respect to a weighted sampling probability distribution $P$, which is constructed from precomputed importance factors. Previous experimental results show that IS achieves remarkable progress in accelerating training convergence. Unfortunately, the calculation of the sampling probability distribution $P$ imposes a major limitation on IS: it requires the input data to be well-structured, i.e., the feature vector must be properly defined. Consequently, recurrent neural networks (RNNs), a popular class of learning algorithms, cannot enjoy the benefits of IS, because their raw input data, i.e., the training sequences, are often unstructured, which makes the calculation of $P$ impossible. Given the popularity of RNN-based learning applications and their relatively long training times, we are interested in accelerating them through IS. This paper proposes a novel Fast-Importance-Mining algorithm to calculate the importance factors for unstructured data, which makes the application of IS in RNN-based applications possible. Our experimental evaluation on popular open-source RNN-based learning applications validates the effectiveness of IS in improving the convergence rate of RNNs.
Tasks Stochastic Optimization
Published 2017-10-31
URL http://arxiv.org/abs/1711.00004v1
PDF http://arxiv.org/pdf/1711.00004v1.pdf
PWC https://paperswithcode.com/paper/accelerate-rnn-based-training-with-importance
Repo
Framework
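A short sketch of the importance-sampling step itself: draw minibatch examples from a non-uniform distribution $P$ built from the importance factors, and reweight each drawn example by $1/(n p_i)$ so the stochastic gradient estimate stays unbiased with respect to the uniform-sampling objective. The importance factors are assumed given here; computing them for unstructured sequences is what the paper's Fast-Importance-Mining algorithm addresses.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
importance = rng.exponential(size=n) + 1e-3     # precomputed importance factors (assumed given)
p = importance / importance.sum()               # sampling distribution P

def sample_batch(batch_size=32):
    idx = rng.choice(n, size=batch_size, p=p)
    # Weight each example by 1 / (n * p_i) to keep the gradient estimate unbiased.
    weights = 1.0 / (n * p[idx])
    return idx, weights

idx, w = sample_batch()
print(idx[:5], w[:5])
```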

Surprise Search for Evolutionary Divergence

Title Surprise Search for Evolutionary Divergence
Authors Daniele Gravina, Antonios Liapis, Georgios N. Yannakakis
Abstract Inspired by the notion of surprise for unconventional discovery, we introduce a general search algorithm we name surprise search as a new method of evolutionary divergent search. Surprise search is grounded in the divergent search paradigm and built on the principles of evolutionary search. The algorithm mimics the self-surprise cognitive process and equips evolutionary search with the ability to seek solutions that deviate from the algorithm’s expected behaviour. The predictive model of expected solutions is based on historical trails of where the search has been and local information about the search space. Surprise search is tested extensively in a robot maze navigation task: experiments are held in four authored deceptive mazes and in 60 generated mazes, and compared against objective-based evolutionary search and novelty search. The key findings of this study reveal that surprise search is advantageous compared to the other two search processes. In particular, it outperforms objective search and is as efficient as novelty search in all tasks examined. Most importantly, surprise search is faster, on average, and more robust in solving the navigation problem than any other algorithm examined. Finally, our analysis reveals that surprise search explores the behavioural space more extensively and yields higher population diversity compared to novelty search. What distinguishes surprise search from other forms of divergent search, such as the search for novelty, is its ability to diverge not from earlier and seen solutions but rather from predicted and unseen points in the domain considered.
Tasks
Published 2017-06-08
URL http://arxiv.org/abs/1706.02556v1
PDF http://arxiv.org/pdf/1706.02556v1.pdf
PWC https://paperswithcode.com/paper/surprise-search-for-evolutionary-divergence
Repo
Framework
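To illustrate "deviating from predicted, unseen points", here is a simplified surprise score: predict where the population's behaviours are heading by linearly extrapolating the last two generations' centroids, then reward individuals for their distance from that prediction. The authors' predictive model is richer (multiple local predictions from historical trails); this single-centroid version is only an assumption-laden illustration.

```python
import numpy as np

def surprise_scores(prev_behaviours, curr_behaviours, new_behaviours):
    prev_c = prev_behaviours.mean(axis=0)
    curr_c = curr_behaviours.mean(axis=0)
    predicted = curr_c + (curr_c - prev_c)          # expected next behaviour centroid
    return np.linalg.norm(new_behaviours - predicted, axis=1)  # higher = more surprising

rng = np.random.default_rng(0)
gen0 = rng.normal(0.0, 1.0, size=(20, 2))           # behaviour descriptors (e.g. final maze x, y)
gen1 = rng.normal(0.5, 1.0, size=(20, 2))
gen2 = rng.normal(1.0, 1.0, size=(20, 2))
print(surprise_scores(gen0, gen1, gen2)[:5])
```

Contrast this with novelty search, which would score `gen2` individuals by distance to previously seen behaviours rather than to a predicted, not-yet-visited point.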

Deep Multi-view Models for Glitch Classification

Title Deep Multi-view Models for Glitch Classification
Authors Sara Bahaadini, Neda Rohani, Scott Coughlin, Michael Zevin, Vicky Kalogera, Aggelos K Katsaggelos
Abstract Non-cosmic, non-Gaussian disturbances known as “glitches” show up in gravitational-wave data of the Advanced Laser Interferometer Gravitational-wave Observatory, or aLIGO. In this paper, we propose a deep multi-view convolutional neural network to classify glitches automatically. The primary purpose of classifying glitches is to understand their characteristics and origin, which facilitates their removal from the data or from the detector entirely. We visualize glitches as spectrograms and leverage state-of-the-art image classification techniques in our model. The proposed classifier is a multi-view deep neural network that exploits four different views for classification. The experimental results demonstrate that the proposed model improves overall classification accuracy compared to traditional single-view algorithms.
Tasks Image Classification
Published 2017-04-28
URL http://arxiv.org/abs/1705.00034v1
PDF http://arxiv.org/pdf/1705.00034v1.pdf
PWC https://paperswithcode.com/paper/deep-multi-view-models-for-glitch
Repo
Framework
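A hedged sketch of a four-view CNN with late fusion (PyTorch): each view (a spectrogram, e.g. at a different time duration) goes through its own convolutional branch, and the branch features are concatenated before a shared classifier. The branch sizes, fusion by concatenation, and number of classes are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
    def forward(self, x):
        return self.net(x)

class MultiViewGlitchNet(nn.Module):
    def __init__(self, n_views=4, n_classes=20):
        super().__init__()
        self.branches = nn.ModuleList(Branch() for _ in range(n_views))
        self.classifier = nn.Linear(32 * n_views, n_classes)
    def forward(self, views):            # list of 4 tensors, each (B, 1, H, W)
        feats = [b(v) for b, v in zip(self.branches, views)]
        return self.classifier(torch.cat(feats, dim=1))

model = MultiViewGlitchNet()
views = [torch.rand(8, 1, 64, 64) for _ in range(4)]
print(model(views).shape)                # (8, 20) class logits
```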

Selective-Candidate Framework with Similarity Selection Rule for Evolutionary Optimization

Title Selective-Candidate Framework with Similarity Selection Rule for Evolutionary Optimization
Authors Sheng Xin Zhang, Wing Shing Chan, Zi Kang Peng, Shao Yong Zheng, Kit Sang Tang
Abstract Achieving better exploitation and exploration capabilities (EEC) has always been an important yet challenging issue in evolutionary optimization algorithm (EOA) design. The difficulty lies in obtaining a good balance in EEC, which is cooperatively determined by the operations and parameters of an EOA. When deficiencies in exploitation or exploration are observed, most existing works only consider supplementing them, either by designing new operations or by altering the parameters. Unfortunately, when different situations are encountered, these proposals may fail to perform best. To address these problems, this paper proposes an explicit EEC control method named the selective-candidate framework with similarity selection rule (SCSS). On the one hand, M (M > 1) candidates are generated from each current solution with independent operations and parameters to enrich the search. On the other hand, a similarity selection rule is designed to determine the final candidate. By considering the fitness ranking of the current solution and its Euclidean distance to each of these M candidates, superior current solutions select the closest candidate for efficient local exploitation, while inferior ones favor the farthest candidate for exploration purposes. In this way, the rule is able to synthesize exploitation and exploration, making the evolution more effective. The proposed SCSS framework is general and easy to implement. It has been applied to three classic, four state-of-the-art and four up-to-date EOAs from the branches of differential evolution, evolution strategy and particle swarm optimization. As confirmed by simulation results, significant performance enhancement is achieved.
Tasks
Published 2017-12-18
URL https://arxiv.org/abs/1712.06338v4
PDF https://arxiv.org/pdf/1712.06338v4.pdf
PWC https://paperswithcode.com/paper/selective-candidate-framework-with-similarity
Repo
Framework
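A sketch of the selective-candidate idea with a similarity selection rule, layered on a basic DE/rand/1/bin candidate generator. The parameter values, the 50/50 rank split, and the use of DE as the base operator are illustrative choices; the framework itself is operator-agnostic.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_candidate(pop, i, f=0.5, cr=0.9):
    a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
    mutant = a + f * (b - c)
    mask = rng.random(pop.shape[1]) < cr
    return np.where(mask, mutant, pop[i])

def scss_generation(pop, fitness, m=2):
    ranks = np.argsort(np.argsort(fitness))          # 0 = best (minimization)
    new_pop = np.empty_like(pop)
    for i in range(len(pop)):
        candidates = np.array([de_candidate(pop, i) for _ in range(m)])
        dists = np.linalg.norm(candidates - pop[i], axis=1)
        if ranks[i] < len(pop) // 2:                 # superior solution: exploit, take the closest
            new_pop[i] = candidates[dists.argmin()]
        else:                                        # inferior solution: explore, take the farthest
            new_pop[i] = candidates[dists.argmax()]
    return new_pop

pop = rng.uniform(-5, 5, size=(10, 4))
fitness = (pop ** 2).sum(axis=1)                     # sphere function as a toy objective
print(scss_generation(pop, fitness).shape)
```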

Recognizing Descriptive Wikipedia Categories for Historical Figures

Title Recognizing Descriptive Wikipedia Categories for Historical Figures
Authors Yanqing Chen, Steven Skiena
Abstract Wikipedia is a useful knowledge source that benefits many applications in language processing and knowledge representation. An important feature of Wikipedia is its categories. Wikipedia pages are assigned categories according to their contents; these human-annotated labels can be used in information retrieval, ad hoc search improvements, entity ranking and tag recommendations. However, important pages are usually assigned too many categories, which makes it difficult to recognize the most important ones that give the best descriptions. In this paper, we propose an approach to recognizing the most descriptive Wikipedia categories. We observe that historical figures in a precise category are presumably mutually similar, and such categorical coherence can be evaluated via the texts or Wikipedia links of the category's members. We rank the descriptive level of Wikipedia categories according to their coherence, and our ranking yields an overall agreement of 88.27% compared with human wisdom.
Tasks Information Retrieval
Published 2017-04-24
URL http://arxiv.org/abs/1704.07427v1
PDF http://arxiv.org/pdf/1704.07427v1.pdf
PWC https://paperswithcode.com/paper/recognizing-descriptive-wikipedia-categories
Repo
Framework
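A sketch of a text-based coherence score for a category: the average pairwise cosine similarity of its members' article texts, with categories then ranked by this score. The toy texts below are invented, and the paper also uses link-based similarity, which is not shown here.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def coherence(member_texts):
    tfidf = TfidfVectorizer().fit_transform(member_texts)
    sims = cosine_similarity(tfidf)
    n = len(member_texts)
    return (sims.sum() - n) / (n * (n - 1))          # mean off-diagonal similarity

categories = {
    "Presidents of the United States": [
        "served as president of the united states and led the nation",
        "a president of the united states who signed landmark legislation",
    ],
    "1732 births": [
        "served as president of the united states and led the nation",
        "a classical composer active in the eighteenth century",
    ],
}
ranked = sorted(categories, key=lambda c: coherence(categories[c]), reverse=True)
print(ranked)   # the more descriptive category should score higher
```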

An Empirical Comparison of Simple Domain Adaptation Methods for Neural Machine Translation

Title An Empirical Comparison of Simple Domain Adaptation Methods for Neural Machine Translation
Authors Chenhui Chu, Raj Dabre, Sadao Kurohashi
Abstract In this paper, we propose a novel domain adaptation method named “mixed fine tuning” for neural machine translation (NMT). We combine two existing approaches namely fine tuning and multi domain NMT. We first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus which is a mix of the in-domain and out-of-domain corpora. All corpora are augmented with artificial tags to indicate specific domains. We empirically compare our proposed method against fine tuning and multi domain methods and discuss its benefits and shortcomings.
Tasks Domain Adaptation, Machine Translation
Published 2017-01-12
URL http://arxiv.org/abs/1701.03214v2
PDF http://arxiv.org/pdf/1701.03214v2.pdf
PWC https://paperswithcode.com/paper/an-empirical-comparison-of-simple-domain
Repo
Framework
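The data-side recipe in this paper is simple enough to sketch directly: tag every source sentence with an artificial domain token, train first on the out-of-domain corpus, then fine-tune on a mix of in-domain and out-of-domain data. The tag strings and the 1:1 mixing shown here are illustrative choices, not the paper's exact settings.

```python
# Sketch of the corpus preparation for "mixed fine tuning".
def tag(pairs, domain_token):
    return [(f"{domain_token} {src}", tgt) for src, tgt in pairs]

out_domain = [("the cat sat on the mat", "le chat est assis sur le tapis")]
in_domain = [("the patient received 5 mg daily", "le patient a reçu 5 mg par jour")]

stage1_corpus = tag(out_domain, "<2general>")            # stage 1: train NMT model from scratch
stage2_corpus = tag(in_domain, "<2medical>") + tag(out_domain, "<2general>")
# stage 2: continue training (fine-tune) the stage-1 model on the mixed, tagged corpus

print(stage2_corpus[0])
```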

Thompson Sampling for the MNL-Bandit

Title Thompson Sampling for the MNL-Bandit
Authors Shipra Agrawal, Vashist Avadhanula, Vineet Goyal, Assaf Zeevi
Abstract We consider a sequential subset selection problem under parameter uncertainty, where at each time step, the decision maker selects a subset of cardinality $K$ from $N$ possible items (arms), and observes a (bandit) feedback in the form of the index of one of the items in said subset, or none. Each item in the index set is ascribed a certain value (reward), and the feedback is governed by a Multinomial Logit (MNL) choice model whose parameters are a priori unknown. The objective of the decision maker is to maximize the expected cumulative rewards over a finite horizon $T$, or alternatively, minimize the regret relative to an oracle that knows the MNL parameters. We refer to this as the MNL-Bandit problem. This problem is representative of a larger family of exploration-exploitation problems that involve a combinatorial objective, and arise in several important application domains. We present an approach to adapt Thompson Sampling to this problem and show that it achieves near-optimal regret as well as attractive numerical performance.
Tasks
Published 2017-06-03
URL http://arxiv.org/abs/1706.00977v7
PDF http://arxiv.org/pdf/1706.00977v7.pdf
PWC https://paperswithcode.com/paper/thompson-sampling-for-the-mnl-bandit
Repo
Framework
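The quantity optimized in each round of such an algorithm is the expected revenue of an assortment under the MNL model with sampled parameters. Below is a hedged sketch of one decision step: sample item utilities from a simplistic stand-in posterior, then pick the assortment of size at most K that maximizes MNL expected revenue over revenue-ordered prefixes. The paper uses a carefully constructed epoch-based posterior and proves regret bounds; none of that machinery is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
revenues = np.array([1.0, 0.8, 0.6, 0.5, 0.3])       # r_i for N = 5 items
v_mean, v_std = rng.uniform(0.2, 1.0, 5), 0.1        # stand-in posterior over utilities v_i

def expected_revenue(items, v):
    """MNL choice model: P(buy i | S) = v_i / (1 + sum_{j in S} v_j)."""
    denom = 1.0 + v[items].sum()
    return (revenues[items] * v[items]).sum() / denom

def choose_assortment(v, K=3):
    order = np.argsort(-revenues)                    # revenue-ordered candidate sets
    prefixes = [order[:k] for k in range(1, K + 1)]
    return max(prefixes, key=lambda s: expected_revenue(s, v))

v_sample = np.maximum(rng.normal(v_mean, v_std), 1e-6)   # one posterior sample
print("offer items:", choose_assortment(v_sample))
```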

ASHACL: Alternative Shapes Constraint Language

Title ASHACL: Alternative Shapes Constraint Language
Authors Peter F. Patel-Schneider
Abstract ASHACL, a variant of the W3C Shapes Constraint Language, is designed to determine whether an RDF graph meets some conditions. These conditions are grouped into shapes, which validate whether particular RDF terms each meet the constraints of the shape. Shapes are themselves expressed as RDF triples in an RDF graph, called a shapes graph.
Tasks
Published 2017-02-06
URL http://arxiv.org/abs/1702.01795v2
PDF http://arxiv.org/pdf/1702.01795v2.pdf
PWC https://paperswithcode.com/paper/ashacl-alternative-shapes-constraint-language
Repo
Framework
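As a hand-rolled illustration of the kind of condition a shape encodes, the sketch below checks that every resource typed ex:Person in an RDF graph has an ex:name. This is not ASHACL (or SHACL) itself, and the namespace and constraint are invented; it only shows shape-style validation of RDF terms against a constraint.

```python
from rdflib import Graph, Literal, Namespace, RDF

ex = Namespace("http://example.org/")
g = Graph()
g.add((ex.alice, RDF.type, ex.Person))
g.add((ex.alice, ex.name, Literal("Alice")))
g.add((ex.bob, RDF.type, ex.Person))          # bob has no name -> constraint violation

def validate_person_shape(graph):
    """Return all ex:Person resources missing the required ex:name property."""
    violations = []
    for person in graph.subjects(RDF.type, ex.Person):
        if graph.value(person, ex.name) is None:
            violations.append(person)
    return violations

print(validate_person_shape(g))               # [URIRef('http://example.org/bob')]
```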

Tweeting AI: Perceptions of Lay vs Expert Twitterati

Title Tweeting AI: Perceptions of Lay vs Expert Twitterati
Authors Lydia Manikonda, Subbarao Kambhampati
Abstract With the recent advancements in Artificial Intelligence (AI), various organizations and individuals are debating whether the progress of AI is a blessing or a curse for the future of society. This paper investigates how the public perceives the progress of AI by utilizing data shared on Twitter. Specifically, it performs a comparative analysis of the understanding of users belonging to two categories – general AI-Tweeters (AIT) and expert AI-Tweeters (EAIT) who share posts about AI on Twitter. Our analysis revealed that users from the two categories express distinct emotions and interests towards AI. Users from both categories regard AI as positive and are optimistic about its progress, but the experts are more negative than the general AI-Tweeters. Expert AI-Tweeters share a relatively large percentage of tweets about their personal news compared to technical aspects of AI. However, the effects of automation on the future are of greater concern to AIT than to EAIT. When the expert category is sub-categorized, the emotion analysis revealed that students and industry professionals offer more insight in their tweets about AI than academicians.
Tasks Emotion Recognition
Published 2017-09-25
URL http://arxiv.org/abs/1709.09534v1
PDF http://arxiv.org/pdf/1709.09534v1.pdf
PWC https://paperswithcode.com/paper/tweeting-ai-perceptions-of-lay-vs-expert
Repo
Framework

Learning Knowledge Graph Embeddings with Type Regularizer

Title Learning Knowledge Graph Embeddings with Type Regularizer
Authors Bhushan Kotnis, Vivi Nastase
Abstract Learning relations based on evidence from knowledge bases relies on processing the available relation instances. Many relations, however, have clear domain and range, which we hypothesize could help learn a better, more generalizing, model. We include such information in the RESCAL model in the form of a regularization factor added to the loss function that takes into account the types (categories) of the entities that appear as arguments to relations in the knowledge base. We note increased performance compared to the baseline model in terms of mean reciprocal rank and hits@N, N = 1, 3, 10. Furthermore, we discover scenarios that significantly impact the effectiveness of the type regularizer.
Tasks Knowledge Graph Embeddings
Published 2017-06-28
URL http://arxiv.org/abs/1706.09278v2
PDF http://arxiv.org/pdf/1706.09278v2.pdf
PWC https://paperswithcode.com/paper/learning-knowledge-graph-embeddings-with-type
Repo
Framework
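RESCAL scores a triple (s, r, o) bilinearly as e_s^T W_r e_o; the paper adds a regularization term to the loss that uses the declared types (domain and range) of a relation's arguments. The sketch below shows the scoring function and an indicator-style type penalty as a stand-in; the exact form of the paper's regularizer, the type sets, and the loss weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_entities = 8, 4
E = rng.normal(size=(n_entities, d))          # entity embeddings
W_r = rng.normal(size=(d, d))                 # relation matrix for relation r

def rescal_score(s, o):
    return E[s] @ W_r @ E[o]                  # e_s^T W_r e_o

# Type information: which entity types relation r accepts as subject/object.
entity_type = np.array([0, 0, 1, 1])          # e.g. 0 = person, 1 = city
domain_types, range_types = {0}, {1}          # r: person -> city

def type_penalty(s, o, weight=1.0):
    bad = (entity_type[s] not in domain_types) + (entity_type[o] not in range_types)
    return weight * bad                       # added to the loss for triple (s, r, o)

s, o = 0, 2
loss = -np.log(1 / (1 + np.exp(-rescal_score(s, o)))) + type_penalty(s, o)
print(float(loss))
```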

The Reachability of Computer Programs

Title The Reachability of Computer Programs
Authors Reginaldo I. Silva Filho, Ricardo L. Azevedo da Rocha, Camila Leite Silva, Ricardo H. Gracini Guiraldelli
Abstract Would it be possible to explain the emergence of new computational ideas using computation itself? Would it be feasible to describe the discovery process of new algorithmic solutions using only mathematics? This study is the first effort to analyze the nature of such inquiry from the viewpoint of the effort required to find a new algorithmic solution to a given problem. We define program reachability as a probability function whose argument is a form of the energetic cost (algorithmic entropy) of the problem.
Tasks
Published 2017-08-23
URL http://arxiv.org/abs/1708.06877v1
PDF http://arxiv.org/pdf/1708.06877v1.pdf
PWC https://paperswithcode.com/paper/the-reachability-of-computer-programs
Repo
Framework