January 25, 2020

3158 words 15 mins read

Paper Group ANR 1771

Unsupervised Common Question Generation from Multiple Documents using Reinforced Contrastive Coordinator

Title Unsupervised Common Question Generation from Multiple Documents using Reinforced Contrastive Coordinator
Authors Woon Sang Cho, Yizhe Zhang, Sudha Rao, Asli Celikyilmaz, Chenyan Xiong, Jianfeng Gao, Mengdi Wang, Bill Dolan
Abstract Web search engines today return a ranked list of document links in response to a user’s query. However, when a user query is vague, the resultant documents span multiple subtopics. In such a scenario, it would be helpful if the search engine provided clarification options to the user’s initial query in a way that each clarification option is closely related to the documents in one subtopic and is far away from the documents in all other subtopics. Motivated by this scenario, we address the task of contrastive common question generation where given a “positive” set of documents and a “negative” set of documents, we generate a question that is closely related to the “positive” set and is far away from the “negative” set. We propose Multi-Source Coordinated Question Generator (MSCQG), a novel coordinator model trained using reinforcement learning to optimize a reward based on document-question ranker score. We also develop an effective auxiliary objective, named Set-induced Contrastive Regularization (SCR) that draws the coordinator’s generation behavior more closely toward “positive” documents and away from “negative” documents. We show that our model significantly outperforms strong retrieval baselines as well as a baseline model developed for a similar task, as measured by various metrics.
Tasks Question Generation
Published 2019-11-08
URL https://arxiv.org/abs/1911.03047v1
PDF https://arxiv.org/pdf/1911.03047v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-common-question-generation-from
Repo
Framework
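
A minimal sketch of the contrastive-reward idea from the abstract above. The token-overlap ranker, the document sets, and all names here are illustrative stand-ins: the actual MSCQG trains a coordinator with REINFORCE against a learned document-question ranker, which is not reproduced here.

```python
# Sketch: reward a generated question by how much closer it sits to the
# "positive" document set than to the "negative" one, using a stand-in
# ranker (token Jaccard overlap) in place of the paper's learned ranker.

def overlap_score(question: str, document: str) -> float:
    """Stand-in ranker: Jaccard overlap between token sets."""
    q, d = set(question.lower().split()), set(document.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def contrastive_reward(question, positive_docs, negative_docs):
    """High when the question is close to the positive set and far
    from the negative set; this difference is the RL training signal."""
    pos = sum(overlap_score(question, d) for d in positive_docs) / len(positive_docs)
    neg = sum(overlap_score(question, d) for d in negative_docs) / len(negative_docs)
    return pos - neg

if __name__ == "__main__":
    pos_docs = ["python garbage collection generational gc",
                "reference counting in python memory management"]
    neg_docs = ["java virtual machine garbage collection tuning"]
    print(contrastive_reward("how does python manage memory", pos_docs, neg_docs))
```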

All It Takes is 20 Questions!: A Knowledge Graph Based Approach

Title All It Takes is 20 Questions!: A Knowledge Graph Based Approach
Authors Alvin Dey, Harsh Kumar Jain, Vikash Kumar Pandey, Tanmoy Chakraborty
Abstract 20 Questions (20Q) is a two-player game. One player is the answerer, and the other is the questioner. The answerer chooses an entity from a specified domain and does not reveal it to the other player. The questioner can ask at most 20 questions of the answerer to guess the entity. The answerer replies with yes/no/maybe. In this paper, we propose a novel knowledge-graph-based approach to designing a 20Q game on Bollywood movies. The system assumes the role of the questioner and asks questions to predict the movie the answerer has in mind. It uses a probabilistic learning model for template-based question generation and answer prediction. A dataset of interrelated entities is represented as a weighted knowledge graph, which is updated as the game progresses and questions are asked. An evolutionary approach helps the model gain a better understanding of user choices and predict the answer in fewer questions over time. Experimental results show that our model predicted the correct movie in fewer than 10 questions in more than half of the games played. Models of this kind can be used to design applications that detect diseases by asking symptom-based questions, improve recommendation systems, and so on.
Tasks Question Generation, Recommendation Systems
Published 2019-11-12
URL https://arxiv.org/abs/1911.05161v1
PDF https://arxiv.org/pdf/1911.05161v1.pdf
PWC https://paperswithcode.com/paper/all-it-takes-is-20-questions-a-knowledge
Repo
Framework
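
One way to make the question-selection step concrete is greedy information gain over the remaining candidate entities. The sketch below is a textbook construction with toy data of my own, not the paper's probabilistic template model, which additionally reweights the knowledge graph as games accumulate.

```python
# Sketch: pick the question whose yes/no split over the remaining
# candidates is most balanced (maximum entropy), halving the search
# space as fast as possible.

import math

def entropy(p: float) -> float:
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def best_question(entities, questions):
    """entities: candidate names; questions: dict question -> set of
    entities for which the answer is 'yes'."""
    def gain(q):
        yes = len(questions[q] & set(entities))
        return entropy(yes / len(entities))
    return max(questions, key=gain)

movies = ["Sholay", "Lagaan", "Dangal", "3 Idiots"]
qs = {
    "Is it a sports film?": {"Lagaan", "Dangal"},
    "Was it released before 2000?": {"Sholay"},
}
print(best_question(movies, qs))  # -> "Is it a sports film?" (2/2 split)
```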

Confirmatory Bayesian Online Change Point Detection in the Covariance Structure of Gaussian Processes

Title Confirmatory Bayesian Online Change Point Detection in the Covariance Structure of Gaussian Processes
Authors Jiyeon Han, Kyowoon Lee, Anh Tong, Jaesik Choi
Abstract In the analysis of sequential data, detecting abrupt changes is important for predicting future behavior. In this paper, we propose statistical hypothesis tests for detecting covariance-structure changes in locally smooth time series modeled by Gaussian Processes (GPs). We provide theoretically justified thresholds for the tests and use them to improve Bayesian Online Change Point Detection (BOCPD) by confirming statistically significant changes and non-changes. Our Confirmatory BOCPD (CBOCPD) algorithm finds multiple structural breaks in GPs even when hyperparameters are not tuned precisely. We also provide conditions under which CBOCPD achieves lower prediction error than BOCPD. Experimental results on synthetic and real-world datasets show that our new tests correctly detect changes in the covariance structure of GPs. The proposed algorithm also outperforms existing methods for predicting nonstationarity in terms of both regression error and log likelihood.
Tasks Change Point Detection, Gaussian Processes, Time Series
Published 2019-05-30
URL https://arxiv.org/abs/1905.13168v2
PDF https://arxiv.org/pdf/1905.13168v2.pdf
PWC https://paperswithcode.com/paper/confirmatory-bayesian-online-change-point
Repo
Framework
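
For context, here is a compact version of the underlying BOCPD recursion (Adams and MacKay style) for a Gaussian mean-shift model. This is plain BOCPD, not CBOCPD: the paper's contribution, the confirmatory hypothesis tests on the covariance structure, is omitted, and the hazard rate and priors below are arbitrary choices of mine.

```python
# Sketch: maintain a posterior over the "run length" (time since the last
# change point); at each step, probability mass either grows the current
# run or resets it with probability `hazard`.

import numpy as np

def bocpd(data, hazard=1 / 50, mu0=0.0, var0=1.0, varx=1.0):
    """Run-length posterior for a Gaussian mean-shift model with known
    noise variance varx and a Normal(mu0, var0) prior on each segment mean."""
    T = len(data)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0
    mu, var = np.array([mu0]), np.array([var0])
    for t, x in enumerate(data, 1):
        pred_var = var + varx                      # predictive variance per run length
        pred = np.exp(-0.5 * (x - mu) ** 2 / pred_var) / np.sqrt(2 * np.pi * pred_var)
        R[t, 1:t + 1] = R[t - 1, :t] * pred * (1 - hazard)   # run grows
        R[t, 0] = (R[t - 1, :t] * pred).sum() * hazard       # change point
        R[t] /= R[t].sum()
        new_var = 1.0 / (1.0 / var + 1.0 / varx)   # conjugate posterior update
        new_mu = new_var * (mu / var + x / varx)
        mu, var = np.append(mu0, new_mu), np.append(var0, new_var)
    return R

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
R = bocpd(data)
print(R[-1].argmax())  # most likely current run length, close to 100
```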

Assessment of Multiple-Biomarker Classifiers: fundamental principles and a proposed strategy

Title Assessment of Multiple-Biomarker Classifiers: fundamental principles and a proposed strategy
Authors Waleed A. Yousef
Abstract The multiple-biomarker classifier problem and its assessment are reviewed against the background of fundamental principles from the field of statistical pattern recognition, machine learning, or the recently so-called “data science”. A narrow reading of that literature has led many authors to neglect the contribution to the total uncertainty of performance assessment that comes from the finite training sample. Yet the latter is a fundamental indicator of the stability of a classifier, so its neglect may be contributing to the problematic status of many studies. A three-level strategy is proposed for moving forward in this field. The lowest level is construction, where candidate features are selected and the choice of classifier architecture is made. At that point, the effective dimensionality of the classifier is estimated and used to size the next level of analysis, a pilot study on previously unseen cases. The total (training and testing) uncertainty resulting from the pilot study is, in turn, used to size the highest level of analysis, a pivotal study with a target level of uncertainty. Some resources available in the literature for implementing this approach are reviewed. Although the concepts explained in the present article may be fundamental and straightforward for many researchers in the machine learning community, they are subtle for many practitioners, for whom we provided general advice on best practice in \cite{Shi2010MAQCII} and elaborate on here in the present paper.
Tasks
Published 2019-10-30
URL https://arxiv.org/abs/1910.14502v1
PDF https://arxiv.org/pdf/1910.14502v1.pdf
PWC https://paperswithcode.com/paper/assessment-of-multiple-biomarker-classifiers
Repo
Framework
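
To illustrate the paper's central point, that the finite training sample contributes real uncertainty to performance assessment, here is a small simulation of my own construction (not the paper's procedure): the test set is held fixed, yet measured accuracy still varies because each rerun draws a new training sample.

```python
# Sketch: refit a trivial threshold classifier on resampled training sets
# and observe the spread of accuracy on one fixed test set. That spread is
# the training-sample contribution to assessment uncertainty.

import numpy as np

rng = np.random.default_rng(0)

def fit_threshold(x, y):
    """Trivial classifier: threshold at the midpoint of the class means."""
    return (x[y == 0].mean() + x[y == 1].mean()) / 2

def training_uncertainty(n_train=30, n_test=500, reps=200):
    x_test = np.concatenate([rng.normal(0, 1, n_test), rng.normal(1, 1, n_test)])
    y_test = np.repeat([0, 1], n_test)
    accs = []
    for _ in range(reps):
        x_tr = np.concatenate([rng.normal(0, 1, n_train), rng.normal(1, 1, n_train)])
        y_tr = np.repeat([0, 1], n_train)
        thr = fit_threshold(x_tr, y_tr)
        accs.append(((x_test > thr) == y_test).mean())
    return np.std(accs)

print("accuracy spread across training sets:", training_uncertainty())
```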

Learning to Auto Weight: Entirely Data-driven and Highly Efficient Weighting Framework

Title Learning to Auto Weight: Entirely Data-driven and Highly Efficient Weighting Framework
Authors Zhenmao Li, Yichao Wu, Ken Chen, Yudong Wu, Shunfeng Zhou, Jiaheng Liu, Junjie Yan
Abstract Example weighting is an effective solution to the training-bias problem; however, most previous methods are limited by human knowledge and require laborious tuning of hyperparameters. In this paper, we propose a novel example weighting framework called Learning to Auto Weight (LAW). The proposed framework finds step-dependent weighting policies adaptively and can be jointly trained with target networks without any assumptions or prior knowledge about the dataset. It consists of three key components: a Stage-based Searching Strategy (3SM) shrinks the huge search space over a complete training process; Duplicate Network Reward (DNR) gives more accurate supervision by removing randomness from the searching process; and Full Data Update (FDU) further improves updating efficiency. Experimental results demonstrate the superiority of the weighting policies explored by LAW over the standard training pipeline. Compared with baselines, LAW finds better weighting schedules that achieve superior accuracy on both biased CIFAR and ImageNet.
Tasks
Published 2019-05-27
URL https://arxiv.org/abs/1905.11058v3
PDF https://arxiv.org/pdf/1905.11058v3.pdf
PWC https://paperswithcode.com/paper/law-learning-to-auto-weight
Repo
Framework
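
For intuition, here is what a hand-crafted example-weighting policy looks like: dropping the top-10% highest-loss examples each step as a crude guard against label noise. This fixed rule is exactly the kind of human-designed heuristic that LAW replaces with a learned, step-dependent policy; none of the code below is from the paper.

```python
# Sketch: one SGD step of logistic regression where per-example weights
# suppress the highest-loss (possibly mislabeled) examples.

import numpy as np

rng = np.random.default_rng(0)

def weighted_update(w, X, y, lr=0.1):
    p = 1 / (1 + np.exp(-X @ w))
    loss = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    keep = (loss < np.quantile(loss, 0.9)).astype(float)  # drop top-10% loss
    grad = X.T @ (keep * (p - y)) / keep.sum()
    return w - lr * grad

X = rng.normal(size=(200, 5))
y = (X @ np.ones(5) + rng.normal(size=200) > 0).astype(float)
y[:10] = 1 - y[:10]                      # inject label noise
w = np.zeros(5)
for _ in range(100):
    w = weighted_update(w, X, y)
print(w)  # stays roughly aligned with the all-ones generating direction
```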

A Compressive Sensing Video dataset using Pixel-wise coded exposure

Title A Compressive Sensing Video dataset using Pixel-wise coded exposure
Authors Sathyaprakash Narayanan, Yeshwanth Bethi, Chetan Singh Thakur
Abstract A vast amount of video data is generated every minute, ranging from surveillance to broadcast footage. Two roadblocks restrain us from using this data as such: first, storage, which limits how much information can be retained given hardware constraints; and second, the computation required to process this data, which is expensive enough to make working on it infeasible. Compressive sensing (CS) [2] is a signal processing technique [11] in which, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Shannon-Nyquist sampling theorem. Recovery is possible under two conditions. The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, which is applied through the isometric property and is sufficient for sparse signals [9][10]. To sustain these characteristics, preserving all attributes in the uncompressed domain would help any kind of research in this field. However, existing datasets fall short in terms of continuous tracking of all the objects present in a scene; very few video datasets offer comprehensive continuous tracking of objects. To address these problems collectively, in this work we propose a new comprehensive video dataset in which the data is compressed using pixel-wise coded exposure [3], which resolves various other impediments.
Tasks Compressive Sensing
Published 2019-05-24
URL https://arxiv.org/abs/1905.10054v2
PDF https://arxiv.org/pdf/1905.10054v2.pdf
PWC https://paperswithcode.com/paper/a-compressive-sensing-video-dataset-using
Repo
Framework
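
A minimal sketch of the pixel-wise coded exposure measurement model that such a dataset is built around. The dimensions and the random binary mask are assumptions of mine; the actual dataset's codes and resolution may differ.

```python
# Sketch: a video cube of T frames is collapsed into a single coded
# snapshot by giving every pixel its own on/off exposure pattern in time.

import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 64, 64                     # temporal window and frame size
video = rng.random((T, H, W))           # stand-in for T raw frames
mask = rng.integers(0, 2, (T, H, W))    # per-pixel binary exposure code

coded = (mask * video).sum(axis=0)      # one 2-D measurement for T frames
print(coded.shape)                      # (64, 64): T-fold temporal compression
# Recovery would solve a sparse inverse problem: find the video cube that
# is sparse in some transform domain and reproduces `coded` through `mask`.
```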

Active Linear Regression

Title Active Linear Regression
Authors Xavier Fontaine, Pierre Perrault, Vianney Perchet
Abstract We consider the problem of active linear regression where a decision maker has to choose between several covariates to sample in order to obtain the best estimate $\hat{\beta}$ of the parameter $\beta^{\star}$ of the linear model, in the sense of minimizing $\mathbb{E} \lVert\hat{\beta}-\beta^{\star}\rVert^2$. Using bandit and convex optimization techniques we propose an algorithm to define the sampling strategy of the decision maker and we compare it with other algorithms. We provide theoretical guarantees of our algorithm in different settings, including a $\mathcal{O}(T^{-2})$ regret bound in the case where the covariates form a basis of the feature space, generalizing and improving existing results. Numerical experiments validate our theoretical findings.
Tasks
Published 2019-06-20
URL https://arxiv.org/abs/1906.08509v1
PDF https://arxiv.org/pdf/1906.08509v1.pdf
PWC https://paperswithcode.com/paper/active-linear-regression
Repo
Framework
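
To make the setting concrete, here is an illustrative toy version (not the paper's algorithm, which comes with regret guarantees) of active sampling when the covariates form an orthonormal basis: sampling coordinate i returns beta_i plus noise of unknown standard deviation sigma_i, and samples are steered toward the coordinates where the estimated variance per sample is largest.

```python
# Sketch: allocate samples adaptively across coordinates; noisier
# coordinates get more samples, which minimizes the total squared
# estimation error for a fixed budget.

import numpy as np

rng = np.random.default_rng(0)
beta = np.array([1.0, -2.0, 0.5])       # unknown parameter
sigma = np.array([0.1, 1.0, 0.3])       # unknown per-coordinate noise std

K = len(beta)
counts, sums, sq_sums = np.zeros(K, dtype=int), np.zeros(K), np.zeros(K)

def pull(i):
    y = rng.normal(beta[i], sigma[i])
    counts[i] += 1; sums[i] += y; sq_sums[i] += y * y

for i in range(K):                       # small forced exploration phase
    for _ in range(5):
        pull(i)

for _ in range(500):                     # adaptive phase
    var_hat = sq_sums / counts - (sums / counts) ** 2  # plug-in variance
    pull(int(np.argmax(var_hat / counts)))  # biggest marginal error reduction

print(counts)        # most samples go to the noisiest coordinate
print(sums / counts) # estimate of beta
```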

Inverse Ising inference from high-temperature re-weighting of observations

Title Inverse Ising inference from high-temperature re-weighting of observations
Authors Junghyo Jo, Danh-Tai Hoang, Vipul Periwal
Abstract Maximum Likelihood Estimation (MLE) is the bread and butter of system inference for stochastic systems. In some generality, MLE will converge to the correct model in the infinite data limit. In the context of physical approaches to system inference, such as Boltzmann machines, MLE requires the arduous computation of partition functions summing over all configurations, both observed and unobserved. We present here a conceptually and computationally transparent data-driven approach to system inference that is based on the simple question: How should the Boltzmann weights of observed configurations be modified to make the probability distribution of observed configurations close to a flat distribution? This algorithm gives accurate inference by using only observed configurations for systems with a large number of degrees of freedom where other approaches are intractable.
Tasks
Published 2019-09-10
URL https://arxiv.org/abs/1909.04305v1
PDF https://arxiv.org/pdf/1909.04305v1.pdf
PWC https://paperswithcode.com/paper/inverse-ising-inference-from-high-temperature
Repo
Framework
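
A sketch of one reading of the flattening idea; this is a reconstruction of mine, not the authors' algorithm. For a true Boltzmann model, observed counts satisfy c(s) proportional to exp(-E(s)), so reweighted counts c(s)·exp(+E_theta(s)) become flat exactly when theta matches the generating parameters. The code fits couplings by minimizing the variance of the log reweighted counts over observed configurations, using scipy for the optimization.

```python
# Sketch: recover Ising couplings by making the reweighted empirical
# distribution of observed spin configurations as flat as possible.

import numpy as np
from itertools import product
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 4
iu = np.triu_indices(n, 1)
theta_true = rng.normal(0, 0.5, len(iu[0]))

def couplings(theta):
    J = np.zeros((n, n)); J[iu] = theta
    return J + J.T

states = np.array(list(product([-1, 1], repeat=n)), dtype=float)
energy = lambda J: -0.5 * np.einsum('ki,ij,kj->k', states, J, states)

p = np.exp(-energy(couplings(theta_true))); p /= p.sum()
counts = rng.multinomial(200_000, p)      # "observed" configuration counts
mask = counts > 0                         # use observed configurations only

def unflatness(theta):
    logw = np.log(counts[mask]) + energy(couplings(theta))[mask]
    return logw.var()                     # zero iff reweighted counts are flat

res = minimize(unflatness, np.zeros(len(iu[0])))
print(np.abs(res.x - theta_true).max())   # small: couplings recovered
```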

LEAP nets for power grid perturbations

Title LEAP nets for power grid perturbations
Authors Benjamin Donnot, Balthazar Donon, Isabelle Guyon, Zhengying Liu, Antoine Marot, Patrick Panciatici, Marc Schoenauer
Abstract We propose a novel neural network embedding approach to model power transmission grids, in which high-voltage lines are disconnected and reconnected with one another from time to time, either accidentally or deliberately. We call our architecture LEAP net, for Latent Encoding of Atypical Perturbation. Our method implements a form of transfer learning, permitting training on a few source domains and then generalizing to new target domains without learning from any example of those domains. We evaluate the viability of this technique for rapidly assessing curative actions that human operators take in emergency situations, using real historical data from the French high-voltage power grid.
Tasks Network Embedding, Transfer Learning
Published 2019-08-22
URL https://arxiv.org/abs/1908.08314v1
PDF https://arxiv.org/pdf/1908.08314v1.pdf
PWC https://paperswithcode.com/paper/leap-nets-for-power-grid-perturbations
Repo
Framework
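
A toy sketch of what a latent-perturbation architecture of this kind could look like. The dimensions, random weights, and the specific modulation form h + (W @ tau) * h are assumptions guessed from the abstract, not taken from the paper: the point is only that tau = 0 recovers the nominal system and the perturbation acts additively in latent space, which is what lets unseen tau combinations generalize.

```python
# Sketch: encode the grid state, apply a perturbation-dependent "leap" in
# latent space, and decode; tau flags which lines are disconnected.

import numpy as np

rng = np.random.default_rng(0)
d_in, d_lat, d_out, n_lines = 10, 16, 5, 3
E = rng.normal(0, 0.3, (d_lat, d_in))     # encoder weights
D = rng.normal(0, 0.3, (d_out, d_lat))    # decoder weights
W = rng.normal(0, 0.3, (d_lat, n_lines))  # per-line latent modulation

def leap(x, tau):
    """tau is a binary vector flagging disconnected lines."""
    h = np.tanh(E @ x)
    h = h + (W @ tau) * h     # latent leap: tau = 0 leaves h unchanged
    return D @ h

x = rng.normal(size=d_in)
print(leap(x, np.zeros(n_lines)))        # nominal grid
print(leap(x, np.array([0., 1., 0.])))   # line 2 disconnected
```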

Semantic Hilbert Space for Text Representation Learning

Title Semantic Hilbert Space for Text Representation Learning
Authors Benyou Wang, Qiuchi Li, Massimo Melucci, Dawei Song
Abstract Capturing the meaning of sentences has long been a challenging task. Current models tend to apply linear combinations of word features to compose semantics for bigger-granularity units, e.g., phrases, sentences, and documents. However, semantic linearity does not always hold in human language. For instance, the meaning of the phrase “ivory tower” cannot be deduced by linearly combining the meanings of “ivory” and “tower”. To address this issue, we propose a new framework that models different levels of semantic units (e.g., sememe, word, sentence, and semantic abstraction) in a single Semantic Hilbert Space, which naturally admits non-linear semantic composition by means of a complex-valued vector word representation. An end-to-end neural network (code: https://github.com/wabyking/qnn) is proposed to implement the framework for the text classification task, and evaluation results on six benchmark text classification datasets demonstrate the effectiveness, robustness, and self-explanation power of the proposed model. Furthermore, intuitive case studies are conducted to help end users understand how the framework works.
Tasks Representation Learning, Semantic Composition, Text Classification
Published 2019-02-26
URL http://arxiv.org/abs/1902.09802v1
PDF http://arxiv.org/pdf/1902.09802v1.pdf
PWC https://paperswithcode.com/paper/semantic-hilbert-space-for-text
Repo
Framework
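
For intuition, a toy construction (mine, with arbitrary dimensions) of the complex-valued composition: words are unit complex vectors, a phrase is their normalized superposition, and a feature is read off as the squared projection onto a measurement state, which is non-linear in the word vectors.

```python
# Sketch: complex superposition plus projective measurement yields a
# probability-like feature that is not a linear mix of word features.

import numpy as np

rng = np.random.default_rng(0)
d = 8

def random_state():
    """Unit-norm complex vector: amplitudes with phases."""
    z = rng.normal(size=d) + 1j * rng.normal(size=d)
    return z / np.linalg.norm(z)

ivory, tower = random_state(), random_state()
phrase = ivory + tower
phrase /= np.linalg.norm(phrase)   # superposition, not a feature average

measurement = random_state()       # a trainable projector in the real model
feature = np.abs(np.vdot(measurement, phrase)) ** 2
print(feature)                     # in [0, 1], non-linear in the word vectors
```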

A Survey of Data Quality Measurement and Monitoring Tools

Title A Survey of Data Quality Measurement and Monitoring Tools
Authors Lisa Ehrlinger, Elisa Rusz, Wolfram Wöß
Abstract High-quality data is key to interpretable and trustworthy data analytics and the basis for meaningful data-driven decisions. In practical scenarios, data quality is typically associated with data preprocessing, profiling, and cleansing for subsequent tasks like data integration or data analytics. However, from a scientific perspective, much research has been published about the measurement (i.e., the detection) of data quality issues, and various generally applicable data quality dimensions and metrics have been discussed. In this work, we close the gap between research into data quality measurement and practical implementations by investigating the functional scope of current data quality tools. With a systematic search, we identified 667 software tools dedicated to “data quality”, from which we evaluated 13 tools with respect to three functionality areas: (1) data profiling, (2) data quality measurement in terms of metrics, and (3) continuous data quality monitoring. We selected the evaluated tools according to pre-defined exclusion criteria to ensure that they are domain-independent, provide the investigated functions, and can be evaluated freely or as a trial version. This survey aims to give a comprehensive overview of state-of-the-art data quality tools and reveals potential for their functional enhancement. Additionally, the results allow a critical discussion of concepts that are widely accepted in research but hardly implemented in any tool observed, for example, generally applicable data quality metrics.
Tasks
Published 2019-07-18
URL https://arxiv.org/abs/1907.08138v1
PDF https://arxiv.org/pdf/1907.08138v1.pdf
PWC https://paperswithcode.com/paper/a-survey-of-data-quality-measurement-and
Repo
Framework
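
As a concrete example of the kind of generally applicable metric the survey finds largely unimplemented, here is a column-level completeness measure. This is an illustration of mine, not code from any of the surveyed tools.

```python
# Sketch: completeness = fraction of non-missing entries in a column,
# one of the standard data quality metrics discussed in the literature.

def completeness(column):
    """Share of non-missing entries; 1.0 means fully populated."""
    non_null = sum(v is not None and v == v for v in column)  # v == v filters NaN
    return non_null / len(column) if column else 1.0

print(completeness([1.0, None, 3.5, float("nan"), 2.2]))  # 0.6
```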

SemEval-2016 Task 4: Sentiment Analysis in Twitter

Title SemEval-2016 Task 4: Sentiment Analysis in Twitter
Authors Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, Veselin Stoyanov
Abstract This paper discusses the fourth year of the “Sentiment Analysis in Twitter” task. SemEval-2016 Task 4 comprises five subtasks, three of which represent a significant departure from previous editions. The first two subtasks are reruns from prior years and ask to predict the overall sentiment of a tweet and the sentiment towards a topic in a tweet. The three new subtasks focus on two variants of the basic “sentiment classification in Twitter” task. The first variant adopts a five-point scale, which confers an ordinal character to the classification task. The second variant focuses on correctly estimating the prevalence of each class of interest, a task which has been called quantification in the supervised learning literature. The task continues to be very popular, attracting a total of 43 teams.
Tasks Sentiment Analysis
Published 2019-12-03
URL https://arxiv.org/abs/1912.01973v1
PDF https://arxiv.org/pdf/1912.01973v1.pdf
PWC https://paperswithcode.com/paper/semeval-2016-task-4-sentiment-analysis-in-1
Repo
Framework
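
The quantification subtask mentioned above can be illustrated with the textbook adjusted-count estimator: naive classify-and-count is biased, and the correction inverts the classifier's response using its true and false positive rates measured on validation data. The numbers below are made up.

```python
# Sketch: estimate class prevalence from classifier outputs by inverting
# p_observed = tpr * prevalence + fpr * (1 - prevalence).

def adjusted_count(predicted_positive_rate, tpr, fpr):
    """Solve for the true prevalence, clipped to the valid range."""
    prev = (predicted_positive_rate - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, prev))

# classifier flags 40% of tweets positive; on validation it has
# TPR 0.8 and FPR 0.2 -> true prevalence is about 33%, not 40%
print(adjusted_count(0.40, tpr=0.8, fpr=0.2))  # 0.333...
```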

Learning Representations of Graph Data – A Survey

Title Learning Representations of Graph Data – A Survey
Authors Mital Kinderkhedia
Abstract Deep Neural Networks have shown tremendous success in the area of object recognition, image classification and natural language processing. However, designing optimal Neural Network architectures that can learn and output arbitrary graphs is an ongoing research problem. The objective of this survey is to summarize and discuss the latest advances in methods to Learn Representations of Graph Data. We start by identifying commonly used types of graph data and review basics of graph theory. This is followed by a discussion of the relationships between graph kernel methods and neural networks. Next we identify the major approaches used for learning representations of graph data namely: Kernel approaches, Convolutional approaches, Graph neural networks approaches, Graph embedding approaches and Probabilistic approaches. A variety of methods under each of the approaches are discussed and the survey is concluded with a brief discussion of the future of learning representation of graph data.
Tasks Graph Embedding, Image Classification, Object Recognition
Published 2019-06-07
URL https://arxiv.org/abs/1906.02989v2
PDF https://arxiv.org/pdf/1906.02989v2.pdf
PWC https://paperswithcode.com/paper/learning-representations-of-graph-data-a
Repo
Framework

Sample Complexity Bounds for Recurrent Neural Networks with Application to Combinatorial Graph Problems

Title Sample Complexity Bounds for Recurrent Neural Networks with Application to Combinatorial Graph Problems
Authors Nil-Jana Akpinar, Bernhard Kratzwald, Stefan Feuerriegel
Abstract Learning to predict solutions to real-valued combinatorial graph problems promises efficient approximations. As demonstrated on the NP-hard edge clique cover number, recurrent neural networks (RNNs) are particularly suited for this task and can even outperform state-of-the-art heuristics. However, the theoretical framework for estimating real-valued RNNs is only poorly understood. As our primary contribution, this is the first work that upper bounds the sample complexity for learning real-valued RNNs: while such derivations have been made earlier for feed-forward and convolutional neural networks, ours is the first for recurrent ones. Given a single-layer RNN with $a$ rectified linear units and input of length $b$, we show that a population prediction error of $\varepsilon$ can be realized with at most $\tilde{\mathcal{O}}(a^4b/\varepsilon^2)$ samples. We further derive comparable results for multi-layer RNNs. Accordingly, a size-adaptive RNN fed with graphs of at most $n$ vertices can be learned with $\tilde{\mathcal{O}}(n^6/\varepsilon^2)$ samples, i.e., only a polynomial number. For combinatorial graph problems, this provides a theoretical foundation that renders RNNs competitive.
Tasks
Published 2019-01-29
URL https://arxiv.org/abs/1901.10289v2
PDF https://arxiv.org/pdf/1901.10289v2.pdf
PWC https://paperswithcode.com/paper/sample-complexity-bounds-for-recurrent-neural
Repo
Framework
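
To make the scaling concrete, here is a hypothetical instantiation of the single-layer bound; the numbers are illustrative and the logarithmic factors suppressed by $\tilde{\mathcal{O}}$ are ignored:

$$a = 32, \quad b = 100, \quad \varepsilon = 0.1 \quad\Rightarrow\quad \frac{a^4 b}{\varepsilon^2} = \frac{32^4 \cdot 100}{0.1^2} \approx 1.05 \times 10^{10}.$$

Large in absolute terms, but polynomial in the network size, which is the point of the result.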

Improving Search with Supervised Learning in Trick-Based Card Games

Title Improving Search with Supervised Learning in Trick-Based Card Games
Authors Christopher Solinas, Douglas Rebstock, Michael Buro
Abstract In trick-taking card games, a two-step process of state sampling and evaluation is widely used to approximate move values. While the evaluation component is vital, the accuracy of move value estimates is also fundamentally linked to how well the sampling distribution corresponds to the true distribution. Despite this, recent work in trick-taking card game AI has mainly focused on improving evaluation algorithms, with limited work on improving sampling. In this paper, we focus on the effect of sampling on the strength of a player and propose a novel method of sampling more realistic states given the move history. In particular, we use predictions about the locations of individual cards, made by a deep neural network trained on data from human gameplay, to sample likely worlds for evaluation. This technique, used in conjunction with Perfect Information Monte Carlo (PIMC) search, provides a substantial increase in cardplay strength in the popular trick-taking card game of Skat.
Tasks Card Games
Published 2019-03-22
URL http://arxiv.org/abs/1903.09604v1
PDF http://arxiv.org/pdf/1903.09604v1.pdf
PWC https://paperswithcode.com/paper/improving-search-with-supervised-learning-in
Repo
Framework
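
A minimal sketch of inference-guided world sampling for imperfect-information search. The cards, probabilities, and the independence assumption (hand-size constraints are ignored) are simplifications of mine; in the paper the per-card probabilities come from a deep network trained on human games.

```python
# Sketch: instead of dealing unseen cards uniformly, sample each world in
# proportion to a model's per-card location probabilities, then let PIMC
# run perfect-information search on every sampled world.

import random

random.seed(0)
unseen = ["QC", "JD", "9S", "KH"]
# p_in_A[c]: model's probability that card c is in opponent A's hand
# rather than opponent B's (illustrative values).
p_in_A = {"QC": 0.9, "JD": 0.2, "9S": 0.5, "KH": 0.7}

def sample_world():
    """Assign each unseen card to A or B according to the model."""
    hand_A = [c for c in unseen if random.random() < p_in_A[c]]
    hand_B = [c for c in unseen if c not in hand_A]
    return hand_A, hand_B

for _ in range(5):
    print(sample_world())  # PIMC evaluates moves across such worlds
```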