Paper Group ANR 878
Findings of the Third Workshop on Neural Generation and Translation
Title | Findings of the Third Workshop on Neural Generation and Translation |
Authors | Hiroaki Hayashi, Yusuke Oda, Alexandra Birch, Ioannis Konstas, Andrew Finch, Minh-Thang Luong, Graham Neubig, Katsuhito Sudoh |
Abstract | This document describes the findings of the Third Workshop on Neural Generation and Translation, held in concert with the Conference on Empirical Methods in Natural Language Processing (EMNLP 2019). First, we summarize the research trends of papers presented in the proceedings. Second, we describe the results of the two shared tasks: 1) efficient neural machine translation (NMT), where participants were tasked with creating NMT systems that are both accurate and efficient, and 2) document-level generation and translation (DGT), where participants were tasked with developing systems that generate summaries from structured data, potentially with assistance from text in another language. |
Tasks | Machine Translation |
Published | 2019-10-29 |
URL | https://arxiv.org/abs/1910.13299v2 |
PDF | https://arxiv.org/pdf/1910.13299v2.pdf |
PWC | https://paperswithcode.com/paper/191013299 |
Repo | |
Framework | |
Posterior Distribution for the Number of Clusters in Dirichlet Process Mixture Models
Title | Posterior Distribution for the Number of Clusters in Dirichlet Process Mixture Models |
Authors | Chiao-Yu Yang, Nhat Ho, Michael I. Jordan |
Abstract | Dirichlet process mixture models (DPMM) play a central role in Bayesian nonparametrics, with applications throughout statistics and machine learning. DPMMs are generally used in clustering problems where the number of clusters is not known in advance, and the posterior distribution is treated as providing inference for this number. Recently, however, it has been shown that the DPMM is inconsistent in inferring the true number of components in certain cases. This is an asymptotic result, and it would be desirable to understand whether it holds with finite samples, and to understand the full posterior more completely. In this work, we provide a rigorous study of the posterior distribution of the number of clusters in DPMMs under different prior distributions on the parameters and constraints on the distributions of the data. We provide novel lower bounds on the ratios of probabilities between $s+1$ clusters and $s$ clusters when the prior distributions on parameters are chosen to be Gaussian or uniform distributions. |
Tasks | |
Published | 2019-05-23 |
URL | https://arxiv.org/abs/1905.09959v1 |
PDF | https://arxiv.org/pdf/1905.09959v1.pdf |
PWC | https://paperswithcode.com/paper/posterior-distribution-for-the-number-of |
Repo | |
Framework | |
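The paper above studies how the DPMM posterior spreads mass over the number of clusters. As a hedged, minimal illustration of the general setting (not the authors' analysis), the sketch below fits a truncated Dirichlet process mixture with scikit-learn's variational approximation and counts components with non-negligible posterior weight; the truncation level, concentration parameter, and 0.01 weight cutoff are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Toy data: three well-separated Gaussian clusters in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(200, 2)) for c in (-4, 0, 4)])

# Truncated Dirichlet process mixture (variational approximation).
dpmm = BayesianGaussianMixture(
    n_components=20,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,                    # DP concentration parameter
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(X)

# Components with non-negligible weight serve as a rough proxy for the
# inferred number of clusters.
n_clusters = int(np.sum(dpmm.weights_ > 1e-2))
print("components with weight > 0.01:", n_clusters)
```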
Vulnerable road user detection: state-of-the-art and open challenges
Title | Vulnerable road user detection: state-of-the-art and open challenges |
Authors | Patrick Mannion |
Abstract | Correctly identifying vulnerable road users (VRUs), e.g. cyclists and pedestrians, remains one of the most challenging environment perception tasks for autonomous vehicles (AVs). This work surveys the current state-of-the-art in VRU detection, covering topics such as benchmarks and datasets, object detection techniques and relevant machine learning algorithms. The article concludes with a discussion of remaining open challenges and promising future research directions for this domain. |
Tasks | Autonomous Vehicles, Object Detection |
Published | 2019-02-10 |
URL | http://arxiv.org/abs/1902.03601v1 |
PDF | http://arxiv.org/pdf/1902.03601v1.pdf |
PWC | https://paperswithcode.com/paper/vulnerable-road-user-detection-state-of-the |
Repo | |
Framework | |
Anomaly Detection in High Dimensional Data
Title | Anomaly Detection in High Dimensional Data |
Authors | Priyanga Dilini Talagala, Rob J. Hyndman, Kate Smith-Miles |
Abstract | The HDoutliers algorithm is a powerful unsupervised algorithm for detecting anomalies in high-dimensional data, with a strong theoretical foundation. However, under certain circumstances it suffers from limitations that significantly hinder its performance. In this article, we propose an algorithm that addresses these limitations. We define an anomaly as an observation that deviates markedly from the majority with a large distance gap. An approach based on extreme value theory is used to calculate the anomaly threshold. Using various synthetic and real datasets, we demonstrate the wide applicability and usefulness of our algorithm, which we call the stray algorithm. We also demonstrate how this algorithm can assist in detecting anomalies present in other data structures using feature engineering. We show the situations where the stray algorithm outperforms the HDoutliers algorithm in both accuracy and computational time. This framework is implemented in the open-source R package stray. |
Tasks | Anomaly Detection, Feature Engineering |
Published | 2019-08-12 |
URL | https://arxiv.org/abs/1908.04000v1 |
PDF | https://arxiv.org/pdf/1908.04000v1.pdf |
PWC | https://paperswithcode.com/paper/anomaly-detection-in-high-dimensional-data |
Repo | |
Framework | |
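The core idea of flagging observations whose nearest-neighbour distance sits beyond a large gap in the score distribution can be illustrated with a simplified sketch. This is not the R stray package, and the gap-in-the-tail cutoff below stands in for the extreme-value-theory threshold the paper actually uses; `k` and `tail_frac` are arbitrary illustrative parameters.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_gap_outliers(X, k=10, tail_frac=0.1):
    """Flag points whose k-NN distance lies above the largest gap in the
    upper tail of the sorted score distribution (simplified sketch)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)
    scores = dist[:, -1]                       # distance to the k-th neighbour
    sorted_scores = np.sort(scores)
    tail = max(2, int(tail_frac * len(scores)))
    tail_scores = sorted_scores[-tail:]        # upper tail of the scores
    gaps = np.diff(tail_scores)
    cut_idx = int(np.argmax(gaps))             # position of the largest gap
    threshold = tail_scores[cut_idx + 1]
    return scores >= threshold, scores

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(500, 5)),           # bulk of the data
               rng.normal(8.0, 1.0, size=(5, 5))])  # a few far-away anomalies
flags, scores = knn_gap_outliers(X)
print("flagged:", int(flags.sum()), "of", len(X))
```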
Coupling the reduced-order model and the generative model for an importance sampling estimator
Title | Coupling the reduced-order model and the generative model for an importance sampling estimator |
Authors | Xiaoliang Wan, Shuangqing Wei |
Abstract | In this work, we develop an importance sampling estimator by coupling the reduced-order model and the generative model in a problem setting of uncertainty quantification. The target is to estimate the probability that the quantity of interest (QoI) in a complex system is beyond a given threshold. To avoid the prohibitive cost of sampling a large-scale system, the reduced-order model is usually considered for a trade-off between efficiency and accuracy. However, the Monte Carlo estimator given by the reduced-order model is biased due to the error from dimension reduction. To correct the bias, we still need to sample the fine model. An effective variance reduction technique is importance sampling, where we employ the generative model to estimate the distribution of the data from the reduced-order model and use it for the change of measure in the importance sampling estimator. To compensate for the approximation errors of the reduced-order model, more data that induce a slightly smaller QoI than the threshold need to be included in the training set. Although the amount of these data can be controlled by a posterior error estimate, redundant data, which may outnumber the effective data, will be kept due to the epistemic uncertainty. To deal with this issue, we introduce a weighted empirical distribution to process the data from the reduced-order model. The generative model is then trained by minimizing the cross entropy between it and the weighted empirical distribution. We also introduce a penalty term into the objective function to mitigate overfitting and improve robustness. Numerical results are presented to demonstrate the effectiveness of the proposed methodology. |
Tasks | Dimensionality Reduction |
Published | 2019-01-23 |
URL | http://arxiv.org/abs/1901.07977v1 |
PDF | http://arxiv.org/pdf/1901.07977v1.pdf |
PWC | https://paperswithcode.com/paper/coupling-the-reduced-order-model-and-the |
Repo | |
Framework | |
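A minimal, generic importance-sampling estimator for a tail probability P(Q(x) > t) looks like the sketch below. The Gaussian input, the identity QoI, and the shifted-Gaussian proposal are all illustrative assumptions; in the paper's setting the proposal's role is played by a generative model trained on reduced-order-model data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Nominal input distribution p(x); the rare event is Q(x) = x exceeding the threshold.
p = stats.norm(loc=0.0, scale=1.0)
threshold = 3.0                                  # P(x > 3) is roughly 1.35e-3

# Proposal q(x): a Gaussian shifted toward the rare-event region (illustrative only).
q = stats.norm(loc=3.5, scale=1.0)

n = 20_000
x = q.rvs(size=n, random_state=rng)
weights = p.pdf(x) / q.pdf(x)                    # change of measure dP/dQ
is_estimate = np.mean(weights * (x > threshold))

x_mc = p.rvs(size=n, random_state=rng)
mc_estimate = np.mean(x_mc > threshold)

print(f"importance sampling: {is_estimate:.2e}")
print(f"plain Monte Carlo:   {mc_estimate:.2e}  (true value ~1.35e-03)")
```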
Early Prediction of 30-day ICU Re-admissions Using Natural Language Processing and Machine Learning
Title | Early Prediction of 30-day ICU Re-admissions Using Natural Language Processing and Machine Learning |
Authors | Zhiheng Li, Xinyue Xing, Bingzhang Lu, Zhixiang Li |
Abstract | ICU readmission is associated with longer hospitalization, mortality and adverse outcomes. Early recognition of ICU readmission risk can help prevent patient deterioration and lower treatment costs. Given the abundance of Electronic Health Records (EHRs), it has become popular to design clinical decision tools with machine learning techniques operating on large-scale healthcare data. We designed data-driven predictive models to estimate the risk of ICU readmission. The discharge summary of each hospital admission was carefully represented using natural language processing techniques. The Unified Medical Language System (UMLS) was further used to standardize inconsistencies across discharge summaries. Five machine learning classifiers were adopted to construct predictive models. The best configuration yielded a competitive AUC of 0.748. Our work suggests that natural language processing of discharge summaries can warn clinicians of unplanned 30-day readmissions at the time of discharge. |
Tasks | |
Published | 2019-10-06 |
URL | https://arxiv.org/abs/1910.02545v1 |
PDF | https://arxiv.org/pdf/1910.02545v1.pdf |
PWC | https://paperswithcode.com/paper/early-prediction-of-30-day-icu-re-admissions |
Repo | |
Framework | |
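A hedged, minimal sketch of the general recipe (bag-of-words features over discharge summaries plus an off-the-shelf classifier, evaluated by AUC) is shown below. It omits the UMLS normalization step and uses placeholder data, so it only illustrates the shape of such a pipeline, not the paper's exact configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder data: discharge-summary text and 30-day ICU readmission labels.
summaries = ["pt stable, discharged home ...", "septic shock, prolonged icu stay ..."] * 50
labels = [0, 1] * 50

X_tr, X_te, y_tr, y_te = train_test_split(
    summaries, labels, test_size=0.3, random_state=0, stratify=labels
)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # unigram/bigram TF-IDF features
    LogisticRegression(max_iter=1000),              # one of several possible classifiers
)
model.fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, probs))
```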
Instance-Invariant Adaptive Object Detection via Progressive Disentanglement
Title | Instance-Invariant Adaptive Object Detection via Progressive Disentanglement |
Authors | Aming Wu, Yahong Han, Linchao Zhu, Yi Yang |
Abstract | Most state-of-the-art methods of object detection suffer from poor generalization ability when the training and test data are from different domains, e.g., with different styles. To address this problem, previous methods mainly use holistic representations to align feature-level and pixel-level distributions of different domains, which may neglect the instance-level characteristics of objects in images. Besides, when transferring detection ability across different domains, it is important to obtain instance-level features that are domain-invariant, instead of the styles that are domain-specific. Therefore, in order to extract instance-invariant features, we should disentangle the domain-invariant features from the domain-specific features. To this end, a progressive disentangled framework is first proposed to solve domain adaptive object detection. In particular, based on disentangled learning for feature decomposition, we devise two disentangled layers to decompose domain-invariant and domain-specific features. The instance-invariant features are then extracted based on the domain-invariant features. Finally, to enhance the disentanglement, a three-stage training mechanism including multiple loss functions is devised to optimize our model. In experiments, we verify the effectiveness of our method on three domain-shift scenarios, where it outperforms the baseline method \cite{saito2019strong} by 2.3%, 3.6%, and 4.0%, respectively. |
Tasks | Object Detection |
Published | 2019-11-20 |
URL | https://arxiv.org/abs/1911.08712v1 |
PDF | https://arxiv.org/pdf/1911.08712v1.pdf |
PWC | https://paperswithcode.com/paper/instance-invariant-adaptive-object-detection |
Repo | |
Framework | |
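As a rough illustration of the feature-decomposition idea only, the toy layer below splits a backbone feature map into a domain-invariant and a domain-specific branch with a simple reconstruction constraint. The class name, the two 1x1-conv branches, and the reconstruction loss are assumptions for this sketch, not the authors' progressive three-stage architecture.

```python
import torch
import torch.nn as nn

class DisentangleLayer(nn.Module):
    """Toy disentangling layer: splits a backbone feature map into a
    domain-invariant (DI) part and a domain-specific (DS) part.
    Illustrative sketch only, not the paper's architecture."""

    def __init__(self, channels: int):
        super().__init__()
        self.di_branch = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.ReLU())
        self.ds_branch = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.ReLU())

    def forward(self, feat):
        f_di = self.di_branch(feat)
        f_ds = self.ds_branch(feat)
        # One possible disentanglement constraint: the two parts should
        # jointly reconstruct the original feature map.
        recon_loss = ((f_di + f_ds - feat) ** 2).mean()
        return f_di, f_ds, recon_loss

feat = torch.randn(2, 256, 32, 32)               # backbone features (N, C, H, W)
f_di, f_ds, recon_loss = DisentangleLayer(256)(feat)
print(f_di.shape, f_ds.shape, float(recon_loss))
```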
M3ER: Multiplicative Multimodal Emotion Recognition Using Facial, Textual, and Speech Cues
Title | M3ER: Multiplicative Multimodal Emotion Recognition Using Facial, Textual, and Speech Cues |
Authors | Trisha Mittal, Uttaran Bhattacharya, Rohan Chandra, Aniket Bera, Dinesh Manocha |
Abstract | We present M3ER, a learning-based method for emotion recognition from multiple input modalities. Our approach combines cues from multiple co-occurring modalities (such as face, text, and speech) and is also more robust than other methods to sensor noise in any of the individual modalities. M3ER models a novel, data-driven multiplicative fusion method to combine the modalities, which learns to emphasize the more reliable cues and suppress the others on a per-sample basis. By introducing a check step which uses Canonical Correlation Analysis to differentiate between ineffective and effective modalities, M3ER is robust to sensor noise. M3ER also generates proxy features in place of the ineffectual modalities. We demonstrate the efficiency of our network through experimentation on two benchmark datasets, IEMOCAP and CMU-MOSEI. We report a mean accuracy of 82.7% on IEMOCAP and 89.0% on CMU-MOSEI, which, collectively, is an improvement of about 5% over prior work. |
Tasks | Emotion Recognition, Multimodal Emotion Recognition |
Published | 2019-11-09 |
URL | https://arxiv.org/abs/1911.05659v2 |
PDF | https://arxiv.org/pdf/1911.05659v2.pdf |
PWC | https://paperswithcode.com/paper/m3er-multiplicative-multimodal-emotion |
Repo | |
Framework | |
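The modality "check step" can be illustrated with a small hedged sketch: fit a CCA between two modalities' feature matrices and treat a low leading canonical correlation as a sign that one stream may be corrupted. The feature shapes and the 0.4 cutoff below are arbitrary, and this is only an approximation of the idea, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def modality_check(feat_a, feat_b, min_corr=0.4):
    """Return the leading canonical correlation between two modality feature
    matrices and whether it clears a hand-picked threshold (sketch only)."""
    a_c, b_c = CCA(n_components=1).fit_transform(feat_a, feat_b)
    corr = np.corrcoef(a_c[:, 0], b_c[:, 0])[0, 1]
    return corr, corr >= min_corr

rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 1))                                   # common emotional signal
face = np.hstack([shared + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 7))])
speech_ok = np.hstack([shared + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 7))])
speech_noisy = rng.normal(size=(500, 8))                             # e.g. a failed microphone

print(modality_check(face, speech_ok))      # high correlation -> treat as effective
print(modality_check(face, speech_noisy))   # low correlation -> likely ineffective
```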
Cleaning tasks knowledge transfer between heterogeneous robots: a deep learning approach
Title | Cleaning tasks knowledge transfer between heterogeneous robots: a deep learning approach |
Authors | Jaeseok Kim, Nino Cauli, Pedro Vicente, Bruno Damas, Alexandre Bernardino, José Santos-Victor, Filippo Cavallo |
Abstract | Autonomous service robots are becoming an important topic in robotics research. Unlike typical industrial scenarios with highly controlled environments, service robots must show additional robustness to task perturbations and changes in the characteristics of their sensory feedback. In this paper, a robot is taught to perform two different cleaning tasks over a table, using a learning-from-demonstration paradigm. However, unlike other approaches, a convolutional neural network is used to generalize the demonstrations to different, previously unseen dirt or stain patterns on the same table using only visual feedback, and to perform cleaning movements accordingly. Robustness to robot posture and illumination changes is achieved using data augmentation techniques and camera image transformations. This robustness allows the transfer of knowledge regarding execution of cleaning tasks between heterogeneous robots operating in different environmental settings. To demonstrate the viability of the proposed approach, a network trained in Lisbon to perform cleaning tasks, using the iCub robot, is successfully employed by the DoRo robot in Peccioli, Italy. |
Tasks | Data Augmentation, Transfer Learning |
Published | 2019-03-13 |
URL | https://arxiv.org/abs/1903.05635v2 |
PDF | https://arxiv.org/pdf/1903.05635v2.pdf |
PWC | https://paperswithcode.com/paper/cleaning-tasks-knowledge-transfer-between |
Repo | |
Framework | |
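A hedged sketch of the kind of augmentation the abstract mentions (illumination and viewpoint perturbations applied to demonstration images) might use torchvision transforms as below; the specific transforms and parameter values are placeholders, not the ones used in the paper.

```python
from torchvision import transforms

# Illustrative augmentation pipeline for robustness to illumination changes
# and small posture/viewpoint changes (parameter values are placeholders).
augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.3),          # illumination
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.9, 1.1)),  # posture/viewpoint
    transforms.RandomPerspective(distortion_scale=0.2, p=0.5),                      # camera image transform
    transforms.ToTensor(),
])

# Hypothetical usage: augmented = augment(pil_image) for each demonstration frame.
```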
Shift-of-Perspective Identification Within Legal Cases
Title | Shift-of-Perspective Identification Within Legal Cases |
Authors | Gathika Ratnayaka, Thejan Rupasinghe, Nisansa de Silva, Viraj Salaka Gamage, Menuka Warushavithana, Amal Shehan Perera |
Abstract | Arguments, counter-arguments, facts, and evidence obtained via documents related to previous court cases are essential for legal professionals. Therefore, automatic information extraction from documents containing legal opinions related to court cases is of significant importance. This study is focused on the identification of sentences in legal opinion texts which convey different perspectives on a certain topic or entity. We combined several approaches based on semantic analysis, open information extraction, and sentiment analysis to achieve our objective. Our methodology was then evaluated with the help of human judges. The outcomes of the evaluation demonstrate that our system is successful in detecting situations where two sentences deliver different opinions on the same topic or entity. The proposed methodology can be used to facilitate other information extraction tasks related to the legal domain. One such task is the automated detection of counter-arguments for a given argument. Another is the identification of opponent parties in a court case. |
Tasks | Open Information Extraction, Sentiment Analysis |
Published | 2019-06-06 |
URL | https://arxiv.org/abs/1906.02430v4 |
PDF | https://arxiv.org/pdf/1906.02430v4.pdf |
PWC | https://paperswithcode.com/paper/shift-of-perspective-identification-within |
Repo | |
Framework | |
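One ingredient the abstract names, sentiment analysis over sentences that discuss the same entity, can be sketched as follows. This is a simplified illustration using NLTK's VADER scorer and a hand-picked polarity gap, not the authors' combined pipeline of semantic analysis, open information extraction, and sentiment analysis.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def opposing_opinions(sent_a: str, sent_b: str, gap: float = 0.5) -> bool:
    """Return True when two sentences carry clearly different sentiment,
    a rough proxy for a shift of perspective on the same entity."""
    score_a = sia.polarity_scores(sent_a)["compound"]
    score_b = sia.polarity_scores(sent_b)["compound"]
    return abs(score_a - score_b) >= gap

a = "The witness was honest and gave a clear, trustworthy account."
b = "The witness was dishonest and the account was full of lies."
print(opposing_opinions(a, b))
```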
The Many AI Challenges of Hearthstone
Title | The Many AI Challenges of Hearthstone |
Authors | Amy K. Hoover, Julian Togelius, Scott Lee, Fernando de Mesentier Silva |
Abstract | Games have benchmarked AI methods since the inception of the field, with classic board games such as Chess and Go recently leaving room for video games with related yet different sets of challenges. The set of AI problems associated with video games has in recent decades expanded from simply playing games to win, to playing games in particular styles, generating game content, modeling players etc. Different games pose very different challenges for AI systems, and several different AI challenges can typically be posed by the same game. In this article we analyze the popular collectible card game Hearthstone (Blizzard 2014) and describe a varied set of interesting AI challenges posed by this game. Collectible card games are relatively understudied in the AI community, despite their popularity and the interesting challenges they pose. Analyzing a single game in-depth in the manner we do here allows us to see the entire field of AI and Games through the lens of a single game, discovering a few new variations on existing research topics. |
Tasks | Board Games, Card Games |
Published | 2019-07-15 |
URL | https://arxiv.org/abs/1907.06562v1 |
PDF | https://arxiv.org/pdf/1907.06562v1.pdf |
PWC | https://paperswithcode.com/paper/the-many-ai-challenges-of-hearthstone |
Repo | |
Framework | |
The Winnability of Klondike Solitaire and Many Other Patience Games
Title | The Winnability of Klondike Solitaire and Many Other Patience Games |
Authors | Charlie Blake, Ian P. Gent |
Abstract | Our ignorance of the winnability percentage of the game in the Windows Solitaire program, more properly called ‘Klondike’, has been described as “one of the embarrassments of applied mathematics”. Klondike is just one of many single-player card games, generically called ‘patience’ or ‘solitaire’ games, for which players have long wanted to know how likely a particular game is to be winnable. A number of different games have been studied empirically in the academic literature and by non-academic enthusiasts. Here we show that a single general purpose Artificial Intelligence program, called “Solvitaire”, can be used to determine the winnability percentage of 45 different single-player card games with a 95% confidence interval of +/- 0.1% or better. For example, we report the winnability of Klondike as 81.956% +/- 0.096% (in the ‘thoughtful’ variant where the player knows the location of all cards), a 30-fold reduction in confidence interval over the best previous result. Almost all our results are either entirely new or represent significant improvements on previous knowledge. |
Tasks | Card Games |
Published | 2019-06-28 |
URL | https://arxiv.org/abs/1906.12314v3 |
PDF | https://arxiv.org/pdf/1906.12314v3.pdf |
PWC | https://paperswithcode.com/paper/the-winnability-of-klondike-and-many-other |
Repo | |
Framework | |
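The headline number (81.956% +/- 0.096%) is a binomial proportion with a 95% confidence interval. The small worked sketch below shows how such an interval, and the number of solved deals needed for a +/- 0.1% half-width, can be computed; the 1,000,000-deal tally is an assumption for illustration, not Solvitaire's actual experiment size.

```python
import math

def proportion_ci(wins: int, n: int, z: float = 1.96):
    """Normal-approximation 95% confidence interval for a win rate."""
    p = wins / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, half_width

# Hypothetical tally: 819,560 winnable deals out of 1,000,000 solved deals.
p, hw = proportion_ci(819_560, 1_000_000)
print(f"winnability ~ {100 * p:.3f}% +/- {100 * hw:.3f}%")

# Deals needed for a +/- 0.1% (0.001) half-width at p ~ 0.82:
p = 0.82
n_needed = (1.96 ** 2) * p * (1 - p) / 0.001 ** 2
print(f"deals needed for +/- 0.1%: about {n_needed:,.0f}")
```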
Per-sample Prediction Intervals for Extreme Learning Machines
Title | Per-sample Prediction Intervals for Extreme Learning Machines |
Authors | Anton Akusok, Yoan Miche, Kaj-Mikael Björk, Amaury Lendasse |
Abstract | Prediction intervals in supervised Machine Learning bound the region where the true outputs of new samples may fall. They are necessary in the task of separating reliable predictions of a trained model from near-random guesses, minimizing the rate of False Positives, and other problem-specific tasks in applied Machine Learning. Many real problems have heteroscedastic stochastic outputs, which explains the need for input-dependent prediction intervals. This paper proposes to estimate the input-dependent prediction intervals with a separate Extreme Learning Machine model, using the variance of its predictions as a correction term accounting for the model uncertainty. The variance is estimated from the model’s linear output layer with a weighted Jackknife method. The methodology is very fast, robust to heteroscedastic outputs, and handles both extremely large datasets and an insufficient amount of training data. |
Tasks | |
Published | 2019-12-19 |
URL | https://arxiv.org/abs/1912.09090v1 |
PDF | https://arxiv.org/pdf/1912.09090v1.pdf |
PWC | https://paperswithcode.com/paper/per-sample-prediction-intervals-for-extreme |
Repo | |
Framework | |
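A hedged sketch of the overall idea, one ELM for the mean prediction and a second ELM giving an input-dependent interval width, is shown below. The random-feature sizes, the ridge penalty, and the use of squared residuals in place of the paper's weighted Jackknife variance estimate are all simplifying assumptions.

```python
import numpy as np

class ELMRegressor:
    """Minimal Extreme Learning Machine: random hidden layer + ridge-solved output weights."""

    def __init__(self, n_hidden=100, alpha=1e-3, seed=0):
        self.n_hidden, self.alpha, self.seed = n_hidden, alpha, seed

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))   # fixed random input weights
        self.b = rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Ridge solution for the linear output layer.
        self.beta = np.linalg.solve(H.T @ H + self.alpha * np.eye(self.n_hidden), H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Heteroscedastic toy data: noise grows with |x|.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.05 + 0.2 * np.abs(X[:, 0]))

mean_model = ELMRegressor(seed=0).fit(X, y)
resid2 = (y - mean_model.predict(X)) ** 2
var_model = ELMRegressor(seed=1).fit(X, resid2)       # second ELM models the local variance

X_new = np.array([[-2.5], [0.0], [2.5]])
mu = mean_model.predict(X_new)
sigma = np.sqrt(np.clip(var_model.predict(X_new), 1e-8, None))
for x, m, s in zip(X_new[:, 0], mu, sigma):
    print(f"x={x:+.1f}: {m:+.3f} +/- {1.96 * s:.3f}")  # ~95% per-sample interval
```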
Batch-Size Independent Regret Bounds for the Combinatorial Multi-Armed Bandit Problem
Title | Batch-Size Independent Regret Bounds for the Combinatorial Multi-Armed Bandit Problem |
Authors | Nadav Merlis, Shie Mannor |
Abstract | We consider the combinatorial multi-armed bandit (CMAB) problem, where the reward function is nonlinear. In this setting, the agent chooses a batch of arms on each round and receives feedback from each arm of the batch. The reward that the agent aims to maximize is a function of the selected arms and their expectations. In many applications, the reward function is highly nonlinear, and the performance of existing algorithms relies on a global Lipschitz constant to encapsulate the function’s nonlinearity. This may lead to loose regret bounds, since a large gradient by itself does not necessarily cause a large regret; it does so only in regions where the uncertainty in the reward’s parameters is high. To overcome this problem, we introduce a new smoothness criterion, which we term \emph{Gini-weighted smoothness}, that takes into account both the nonlinearity of the reward and concentration properties of the arms. We show that the linear dependence of the regret on the batch size in existing algorithms can be replaced by this smoothness parameter. This, in turn, leads to much tighter regret bounds when the smoothness parameter is batch-size independent. For example, in the probabilistic maximum coverage (PMC) problem, which has many applications, including influence maximization, diverse recommendations and more, we achieve dramatic improvements in the upper bounds. We also prove matching lower bounds for the PMC problem and show that our algorithm is tight, up to a logarithmic factor in the problem’s parameters. |
Tasks | |
Published | 2019-05-08 |
URL | https://arxiv.org/abs/1905.03125v3 |
PDF | https://arxiv.org/pdf/1905.03125v3.pdf |
PWC | https://paperswithcode.com/paper/batch-size-independent-regret-bounds-for-the |
Repo | |
Framework | |
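For concreteness, the probabilistic maximum coverage objective mentioned in the abstract is commonly written as follows (a standard formulation with notation chosen here, not taken from the paper): choosing a batch $S$ of at most $k$ arms, where arm $i$ covers item $j$ independently with probability $p_{i,j}$, the expected reward is

$$
r(S; p) \;=\; \sum_{j=1}^{m} \Big( 1 - \prod_{i \in S} \big(1 - p_{i,j}\big) \Big),
\qquad |S| \le k .
$$

This is the objective for which the paper reports tighter, batch-size-independent regret bounds together with matching lower bounds.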
Learning Policies from Human Data for Skat
Title | Learning Policies from Human Data for Skat |
Authors | Douglas Rebstock, Christopher Solinas, Michael Buro |
Abstract | Decision-making in large imperfect information games is difficult. Thanks to recent success in Poker, Counterfactual Regret Minimization (CFR) methods have been at the forefront of research in these games. However, most of the success in large games comes with the use of a forward model and powerful state abstractions. In trick-taking card games like Bridge or Skat, large information sets and an inability to advance the simulation without fully determinizing the state make forward search problematic. Furthermore, state abstractions can be especially difficult to construct because the precise holdings of each player directly impact move values. In this paper we explore learning model-free policies for Skat from human game data using deep neural networks (DNN). We produce a new state-of-the-art system for bidding and game declaration by introducing methods to a) directly vary the aggressiveness of the bidder and b) declare games based on expected value while mitigating issues with rarely observed state-action pairs. Although cardplay policies learned through imitation are slightly weaker than the current best search-based method, they run orders of magnitude faster. We also explore how these policies could be learned directly from experience in a reinforcement learning setting and discuss the value of incorporating human data for this task. |
Tasks | Card Games, Decision Making |
Published | 2019-05-27 |
URL | https://arxiv.org/abs/1905.10907v1 |
PDF | https://arxiv.org/pdf/1905.10907v1.pdf |
PWC | https://paperswithcode.com/paper/learning-policies-from-human-data-for-skat |
Repo | |
Framework | |
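A minimal sketch of the imitation-learning component (a small policy network trained with cross-entropy on human state-action pairs) is given below; the feature size, action count, network shape, and random placeholder data are assumptions for illustration, not the paper's actual Skat state encoding or training setup.

```python
import torch
import torch.nn as nn

N_FEATURES, N_ACTIONS = 120, 32          # placeholder state-encoding / action-space sizes

policy = nn.Sequential(
    nn.Linear(N_FEATURES, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_ACTIONS),           # logits over possible actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch of human game data: encoded states and the actions humans chose.
states = torch.randn(64, N_FEATURES)
human_actions = torch.randint(0, N_ACTIONS, (64,))

for _ in range(100):                     # imitation: maximize likelihood of human actions
    logits = policy(states)
    loss = loss_fn(logits, human_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final imitation loss:", float(loss))
```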