Paper Group NANR 171
HGMR: Hierarchical Gaussian Mixtures for Adaptive 3D Registration. Interpret Neural Networks by Identifying Critical Data Routing Paths. Inferring Light Fields From Shadows. Residual Loss Prediction: Reinforcement Learning With No Incremental Feedback. Uni-DUE Student Team: Tackling fact checking through decomposable attention neural network. Impro …
HGMR: Hierarchical Gaussian Mixtures for Adaptive 3D Registration
Title | HGMR: Hierarchical Gaussian Mixtures for Adaptive 3D Registration |
Authors | B. Eckart, K. Kim, J. Kautz |
Abstract | Point cloud registration sits at the core of many important and challenging 3D perception problems including autonomous navigation, SLAM, object/scene recognition, and augmented reality. In this paper, we present a new registration algorithm that is able to achieve state-of-the-art speed and accuracy through its use of a Hierarchical Gaussian Mixture representation. Our method, Hierarchical Gaussian Mixture Registration (HGMR), constructs a top-down multi-scale representation of point cloud data by recursively running many small-scale data likelihood segmentations in parallel on a GPU. We leverage the resulting representation using a novel optimization criterion that adaptively finds the best scale to perform data association between spatial subsets of point cloud data. Compared to previous Iterative Closest Point and GMM-based techniques, our tree-based point association algorithm performs data association in logarithmic time while dynamically adjusting the level of detail to best match the complexity and spatial distribution characteristics of local scene geometry. In addition, unlike other GMM methods that restrict covariances to be isotropic, our new PCA-based optimization criterion well-approximates the true MLE solution even when fully anisotropic Gaussian covariances are used. Efficient data association, multi-scale adaptability, and a robust MLE approximation produce an algorithm that is up to an order of magnitude both faster and more accurate than the current state of the art on a wide variety of 3D datasets captured by sensors ranging from LiDAR to structured light. |
Tasks | Autonomous Navigation, Point Cloud Registration, Scene Recognition |
Published | 2018-09-01 |
URL | http://openaccess.thecvf.com/content_ECCV_2018/html/Benjamin_Eckart_Fast_and_Accurate_ECCV_2018_paper.html |
http://openaccess.thecvf.com/content_ECCV_2018/papers/Benjamin_Eckart_Fast_and_Accurate_ECCV_2018_paper.pdf | |
PWC | https://paperswithcode.com/paper/hgmr-hierarchical-gaussian-mixtures-for |
Repo | |
Framework | |
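A minimal, single-level sketch of the registration idea summarized in the abstract above, assuming numpy and scikit-learn: fit one Gaussian mixture to the target cloud, then alternate soft data association with a Kabsch rigid update. The hierarchical, GPU-parallel, scale-adaptive machinery that defines HGMR is not reproduced, and `n_components` and `n_iters` are illustrative choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_register(source, target, n_components=16, n_iters=20):
    """Single-level GMM registration sketch: fit a mixture to `target` (N x 3),
    then alternate soft data association (E-step) with a rigid Kabsch update
    that moves `source` (M x 3) onto its responsibility-weighted means."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full").fit(target)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        moved = source @ R.T + t
        resp = gmm.predict_proba(moved)            # M x K soft assignments
        virtual = resp @ gmm.means_                # per-point weighted target point
        mu_s, mu_v = source.mean(axis=0), virtual.mean(axis=0)
        H = (source - mu_s).T @ (virtual - mu_v)   # 3 x 3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                         # proper rotation (no reflection)
        t = mu_v - R @ mu_s
    return R, t                                    # maps source onto target: R x + t
```

A call such as `R, t = gmm_register(src, tgt)` yields a transform for which `src @ R.T + t` should roughly align with `tgt`.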
Interpret Neural Networks by Identifying Critical Data Routing Paths
Title | Interpret Neural Networks by Identifying Critical Data Routing Paths |
Authors | Yulong Wang, Hang Su, Bo Zhang, Xiaolin Hu |
Abstract | Interpretability of a deep neural network aims to explain the rationale behind its decisions and enable users to understand the intelligent agent, which has become increasingly important in practical applications. To address this issue, we develop a Distillation Guided Routing method, a flexible framework that interprets a deep neural network by identifying critical data routing paths and analyzing the functional processing behavior of the corresponding layers. Specifically, we propose to discover the critical nodes on the data routing paths during the network's inference on individual input samples by learning associated control gates for each layer's output channels. The routing paths can therefore be represented by the responses of the concatenated control gates from all layers, which reflect the network's semantic selectivity with respect to the input patterns and the more detailed functional processes across different layer levels. Based on these findings, we propose an adversarial sample detection algorithm that learns a classifier to discriminate whether the critical data routing paths come from real or adversarial samples. Experiments demonstrate that our algorithm effectively achieves a high defense rate with minor training overhead. |
Tasks | |
Published | 2018-06-01 |
URL | http://openaccess.thecvf.com/content_cvpr_2018/html/Wang_Interpret_Neural_Networks_CVPR_2018_paper.html |
http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Interpret_Neural_Networks_CVPR_2018_paper.pdf | |
PWC | https://paperswithcode.com/paper/interpret-neural-networks-by-identifying |
Repo | |
Framework | |
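The control-gate idea in the abstract above lends itself to a short sketch, written here in PyTorch as a hedged approximation of Distillation Guided Routing: per-channel gates are attached to each convolutional layer via forward hooks and optimized to preserve the original prediction while an L1 penalty drives most gates toward zero. The penalty weight `lam`, the step count, and the non-negativity projection are illustrative choices, not the authors' settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def critical_routing_gates(model, x, n_steps=30, lam=0.05, lr=0.1):
    """Learn per-channel control gates so the gated network keeps the original
    prediction on input `x` while as few channels as possible stay open; the
    surviving channels approximate a critical data routing path."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)
    with torch.no_grad():
        target = model(x).argmax(dim=1)            # original top-1 prediction

    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    gates = [torch.ones(c.out_channels, requires_grad=True) for c in convs]

    def make_hook(g):
        return lambda module, inp, out: out * g.view(1, -1, 1, 1)
    hooks = [c.register_forward_hook(make_hook(g)) for c, g in zip(convs, gates)]

    opt = torch.optim.SGD(gates, lr=lr, momentum=0.9)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), target) + lam * sum(g.abs().sum() for g in gates)
        loss.backward()
        opt.step()
        with torch.no_grad():                      # keep gates non-negative
            for g in gates:
                g.clamp_(min=0.0)

    for h in hooks:
        h.remove()
    return [g.detach() for g in gates]             # one gate vector per conv layer
```

Channels whose gates remain well above zero form the candidate routing path; the paper's adversarial-sample detector then classifies these gate-response patterns.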
Inferring Light Fields From Shadows
Title | Inferring Light Fields From Shadows |
Authors | Manel Baradad, Vickie Ye, Adam B. Yedidia, Frédo Durand, William T. Freeman, Gregory W. Wornell, Antonio Torralba |
Abstract | We present a method for inferring a 4D light field of a hidden scene from the 2D shadows cast by a known occluder on a diffuse wall. We do this by determining how light naturally reflected off surfaces in the hidden scene interacts with the occluder. By modeling the light transport as a linear system, and incorporating prior knowledge about light field structures, we can invert the system to recover the hidden scene. We demonstrate results of our inference method across simulations and experiments with different types of occluders. For instance, using the shadow cast by a real house plant, we are able to recover low-resolution light fields with different levels of texture and parallax complexity. We provide experimental results for two hidden scenes: a human subject and two planar elements at different depths. |
Tasks | |
Published | 2018-06-01 |
URL | http://openaccess.thecvf.com/content_cvpr_2018/html/Baradad_Inferring_Light_Fields_CVPR_2018_paper.html |
http://openaccess.thecvf.com/content_cvpr_2018/papers/Baradad_Inferring_Light_Fields_CVPR_2018_paper.pdf | |
PWC | https://paperswithcode.com/paper/inferring-light-fields-from-shadows |
Repo | |
Framework | |
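The linear-system view in the abstract above invites a short numpy illustration: treat the wall image as y = A x, with x the vectorized hidden light field and A the light transport (occluder shadowing plus diffuse reflection), and recover x with a regularized least-squares solve. The quadratic (ridge) prior stands in for the paper's richer light-field priors, and `A` and `lam` are caller-supplied assumptions.

```python
import numpy as np

def recover_light_field(shadow_image, A, lam=1e-2):
    """MAP estimate of the hidden light field under a simple quadratic prior:
    x_hat = argmin_x ||A x - y||^2 + lam ||x||^2, where y is the vectorized
    wall image and A is the caller-supplied light transport matrix."""
    y = shadow_image.ravel()
    AtA = A.T @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(AtA, A.T @ y)
```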
Residual Loss Prediction: Reinforcement Learning With No Incremental Feedback
Title | Residual Loss Prediction: Reinforcement Learning With No Incremental Feedback |
Authors | Hal Daumé III, John Langford, Amr Sharaf |
Abstract | We consider reinforcement learning and bandit structured prediction problems with very sparse loss feedback: only at the end of an episode. We introduce a novel algorithm, RESIDUAL LOSS PREDICTION (RESLOPE), that solves such problems by automatically learning an internal representation of a denser reward function. RESLOPE operates as a reduction to contextual bandits, using its learned loss representation to solve the credit assignment problem, and a contextual bandit oracle to trade off exploration and exploitation. RESLOPE enjoys a no-regret reduction-style theoretical guarantee and outperforms state-of-the-art reinforcement learning algorithms in both MDP environments and bandit structured prediction settings. |
Tasks | Multi-Armed Bandits, Structured Prediction |
Published | 2018-01-01 |
URL | https://openreview.net/forum?id=HJNMYceCW |
https://openreview.net/pdf?id=HJNMYceCW | |
PWC | https://paperswithcode.com/paper/residual-loss-prediction-reinforcement |
Repo | |
Framework | |
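A toy sketch of the residual-credit idea described above, not the authors' implementation: a linear per-step loss predictor is kept, and at episode end the only observed signal (the total loss) minus the predicted losses of the non-deviation steps becomes the cost used to update the predictor at the single exploration (deviation) step. The epsilon-greedy exploration, feature shapes, and learning rate are assumptions.

```python
import numpy as np

class ResidualLossBandit:
    """Toy sketch of residual loss prediction: per-action linear loss
    predictors plus an epsilon-greedy contextual-bandit policy."""

    def __init__(self, n_features, n_actions, lr=0.05, epsilon=0.1):
        self.w = np.zeros((n_actions, n_features))   # per-action loss predictors
        self.lr, self.epsilon, self.n_actions = lr, epsilon, n_actions

    def act(self, x):
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)  # exploration (deviation) step
        return int(np.argmin(self.w @ x))             # exploit the predicted loss

    def predict_loss(self, x, a):
        return float(self.w[a] @ x)

    def update(self, trajectory, deviation_step, total_loss):
        """trajectory: list of (features, action); only total_loss is observed."""
        others = sum(self.predict_loss(x, a)
                     for t, (x, a) in enumerate(trajectory) if t != deviation_step)
        residual = total_loss - others                # denser, per-step credit
        x, a = trajectory[deviation_step]
        err = self.predict_loss(x, a) - residual      # square-loss regression step
        self.w[a] -= self.lr * err * x
```

In the paper, the bandit oracle and the loss representation are considerably more sophisticated; the sketch only shows where the residual enters the update.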
Uni-DUE Student Team: Tackling fact checking through decomposable attention neural network
Title | Uni-DUE Student Team: Tackling fact checking through decomposable attention neural network |
Authors | Jan Kowollik, Ahmet Aker |
Abstract | In this paper we present our system for the FEVER Challenge. The task of this challenge is to verify claims by extracting information from Wikipedia. Our system has two parts. In the first part it searches for candidate sentences by treating the claims as queries. In the second part it filters out noise from these candidates and uses the remaining ones to decide whether they support the claim, refute it, or provide not enough information to verify it. We show that this system achieves a FEVER score of 0.3927 on the FEVER shared task development data set, which is a 25.5% improvement over the baseline score. |
Tasks | Natural Language Inference |
Published | 2018-11-01 |
URL | https://www.aclweb.org/anthology/W18-5518/ |
https://www.aclweb.org/anthology/W18-5518 | |
PWC | https://paperswithcode.com/paper/uni-due-student-team-tackling-fact-checking |
Repo | |
Framework | |
Improved Regret Bounds for Thompson Sampling in Linear Quadratic Control Problems
Title | Improved Regret Bounds for Thompson Sampling in Linear Quadratic Control Problems |
Authors | Marc Abeille, Alessandro Lazaric |
Abstract | Thompson sampling (TS) is an effective approach to trade off exploration and exploitation in reinforcement learning. Despite its empirical success and recent advances, its theoretical analysis is often limited to the Bayesian setting, finite state-action spaces, or finite-horizon problems. In this paper, we study an instance of TS in the challenging setting of infinite-horizon linear quadratic (LQ) control, which models problems with continuous state-action variables, linear dynamics, and quadratic cost. In particular, we analyze the regret in the frequentist sense (i.e., for a fixed unknown environment) in one-dimensional systems. We derive the first $O(\sqrt{T})$ frequentist regret bound for this problem, thus significantly improving the $O(T^{2/3})$ bound of Abeille & Lazaric (2017) and matching the frequentist performance derived by Abbasi-Yadkori & Szepesvári (2011) for an optimistic approach and the Bayesian result of Ouyang et al. (2017). We obtain this result by developing a novel bound on the regret due to policy switches, which holds for LQ systems of any dimensionality and allows updating the parameters and the policy at each step, thus overcoming previous limitations due to lazy updates. Finally, we report numerical simulations supporting the conjecture that our result extends to multi-dimensional systems. |
Tasks | |
Published | 2018-07-01 |
URL | https://icml.cc/Conferences/2018/Schedule?showEvent=2353 |
http://proceedings.mlr.press/v80/abeille18a/abeille18a.pdf | |
PWC | https://paperswithcode.com/paper/improved-regret-bounds-for-thompson-sampling |
Repo | |
Framework | |
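A toy, one-dimensional illustration of the Thompson-sampling loop the abstract analyzes, assuming numpy: maintain a Gaussian least-squares posterior over the unknown dynamics theta = (a, b), sample a model, solve its scalar Riccati equation, and act with the resulting linear gain. All constants are illustrative, and nothing here reproduces the paper's regret analysis.

```python
import numpy as np

def ts_lq_1d(steps=2000, q=1.0, r=1.0, true_theta=(0.9, 0.5), noise=0.1, lam=1.0):
    """Thompson sampling for the scalar LQ system x' = a x + b u + w with
    stage cost q x^2 + r u^2: sample (a, b) from a Gaussian posterior, solve
    the sampled model's Riccati equation, act greedily, update the posterior."""
    V, bvec = lam * np.eye(2), np.zeros(2)         # regularized least-squares stats
    x = 0.0
    for _ in range(steps):
        mean = np.linalg.solve(V, bvec)
        theta = np.random.multivariate_normal(mean, noise**2 * np.linalg.inv(V))
        a, b = theta
        p = q                                      # scalar Riccati by fixed point
        for _ in range(200):
            p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        k = a * b * p / (r + b * b * p)            # optimal gain: u = -k x
        u = -k * x
        z = np.array([x, u])
        x = true_theta[0] * x + true_theta[1] * u + noise * np.random.randn()
        V += np.outer(z, z)                        # posterior (RLS) update
        bvec += z * x
    return np.linalg.solve(V, bvec)                # posterior mean estimate of (a, b)
```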
Towards the Inference of Semantic Relations in Complex Nominals: a Pilot Study
Title | Towards the Inference of Semantic Relations in Complex Nominals: a Pilot Study |
Authors | Melania Cabezas-García, Pilar León-Araúz |
Abstract | |
Tasks | |
Published | 2018-05-01 |
URL | https://www.aclweb.org/anthology/L18-1399/ |
https://www.aclweb.org/anthology/L18-1399 | |
PWC | https://paperswithcode.com/paper/towards-the-inference-of-semantic-relations |
Repo | |
Framework | |
Socially Responsible NLP
Title | Socially Responsible NLP |
Authors | Yulia Tsvetkov, Vinodkumar Prabhakaran, Rob Voigt |
Abstract | As language technologies have become increasingly prevalent, there is a growing awareness that decisions we make about our data, methods, and tools are often tied up with their impact on people and societies. This tutorial will provide an overview of real-world applications of language technologies and the potential ethical implications associated with them. We will discuss philosophical foundations of ethical research along with state of the art techniques. Through this tutorial, we intend to provide the NLP researcher with an overview of tools to ensure that the data, algorithms, and models that they build are socially responsible. These tools will include a checklist of common pitfalls that one should avoid (e.g., demographic bias in data collection), as well as methods to adequately mitigate these issues (e.g., adjusting sampling rates or de-biasing through regularization). The tutorial is based on a new course on Ethics and NLP developed at Carnegie Mellon University. |
Tasks | Decision Making |
Published | 2018-06-01 |
URL | https://www.aclweb.org/anthology/N18-6005/ |
https://www.aclweb.org/anthology/N18-6005 | |
PWC | https://paperswithcode.com/paper/socially-responsible-nlp |
Repo | |
Framework | |
SIRIUS-LTG: An Entity Linking Approach to Fact Extraction and Verification
Title | SIRIUS-LTG: An Entity Linking Approach to Fact Extraction and Verification |
Authors | Farhad Nooralahzadeh, Lilja Øvrelid |
Abstract | This article presents the SIRIUS-LTG system for the Fact Extraction and VERification (FEVER) Shared Task. It consists of three components: 1) Wikipedia Page Retrieval: we first extract the entities in the claim, then find potential Wikipedia URI candidates for each entity using a SPARQL query over DBpedia. 2) Sentence selection: we investigate various techniques, i.e. Smooth Inverse Frequency (SIF), Word Mover's Distance (WMD), Soft-Cosine Similarity, and cosine similarity with unigram Term Frequency Inverse Document Frequency (TF-IDF), to rank sentences by their similarity to the claim. 3) Textual Entailment: we compare three models for the task of claim classification: a Decomposable Attention (DA) model (Parikh et al., 2016), a Decomposed Graph Entailment (DGE) model (Khot et al., 2018), and a Gradient-Boosted Decision Trees (TalosTree) model (Sean et al., 2017). The experiments show that the pipeline using simple cosine similarity with TF-IDF for sentence selection, together with the DA model as the labelling model, achieves the best results on the development set (evidence F1: 32.17, label accuracy: 59.61, FEVER score: 0.3778). Furthermore, it obtains 30.19, 48.87, and 36.55 in terms of evidence F1, label accuracy, and FEVER score, respectively, on the test set. Our system ranks 15th among 23 participants in the shared task, prior to any human evaluation of the evidence. |
Tasks | Entity Linking, Information Retrieval, Natural Language Inference, Question Answering |
Published | 2018-11-01 |
URL | https://www.aclweb.org/anthology/W18-5519/ |
https://www.aclweb.org/anthology/W18-5519 | |
PWC | https://paperswithcode.com/paper/sirius-ltg-an-entity-linking-approach-to-fact |
Repo | |
Framework | |
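The sentence-selection variant that the abstract reports as the strongest (cosine similarity over unigram TF-IDF) is easy to sketch with scikit-learn; the retrieval (SPARQL over DBpedia) and entailment stages are not reproduced, and `top_k` is an illustrative cutoff.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_sentences(claim, sentences, top_k=5):
    """Rank candidate Wikipedia sentences by TF-IDF cosine similarity to the
    claim and return the top_k (sentence, score) pairs."""
    vec = TfidfVectorizer(ngram_range=(1, 1), stop_words="english")
    mat = vec.fit_transform([claim] + sentences)      # row 0 is the claim
    sims = cosine_similarity(mat[0], mat[1:]).ravel()
    order = sims.argsort()[::-1][:top_k]
    return [(sentences[i], float(sims[i])) for i in order]
```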
Modeling Sentiment Association in Discourse for Humor Recognition
Title | Modeling Sentiment Association in Discourse for Humor Recognition |
Authors | Lizhen Liu, Donghai Zhang, Wei Song |
Abstract | Humor is one of the most attractive parts of human communication. However, automatically recognizing humor in text is challenging due to the complex characteristics of humor. This paper proposes to model sentiment association between discourse units to indicate how the punchline breaks the expectation of the setup. We found that discourse relation, sentiment conflict, and sentiment transition are effective indicators for humor recognition. From the perspective of using sentiment-related features, sentiment association in discourse is more useful than counting the number of emotional words. |
Tasks | Common Sense Reasoning, Sentiment Analysis |
Published | 2018-07-01 |
URL | https://www.aclweb.org/anthology/P18-2093/ |
https://www.aclweb.org/anthology/P18-2093 | |
PWC | https://paperswithcode.com/paper/modeling-sentiment-association-in-discourse |
Repo | |
Framework | |
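The indicators the abstract names can be computed from any sentence-level polarity scorer; the sketch below assumes a caller-supplied `polarity` function returning values in [-1, 1] (a lexicon lookup or an off-the-shelf sentiment model) and is only one plausible realization of the features, not the authors' exact feature set.

```python
def sentiment_features(setup_sents, punchline, polarity):
    """Discourse-level sentiment features: conflict means the setup and the
    punchline carry opposite polarity; transitions count sign changes across
    consecutive discourse units."""
    setup_scores = [polarity(s) for s in setup_sents]
    punch_score = polarity(punchline)
    setup_mean = sum(setup_scores) / max(len(setup_scores), 1)
    conflict = int(setup_mean * punch_score < 0)
    sequence = setup_scores + [punch_score]
    transitions = sum(1 for a, b in zip(sequence, sequence[1:]) if a * b < 0)
    return {"setup_polarity": setup_mean,
            "punchline_polarity": punch_score,
            "sentiment_conflict": conflict,
            "sentiment_transitions": transitions}
```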
Transfer of Value Functions via Variational Methods
Title | Transfer of Value Functions via Variational Methods |
Authors | Andrea Tirinzoni, Rafael Rodriguez Sanchez, Marcello Restelli |
Abstract | We consider the problem of transferring value functions in reinforcement learning. We propose an approach that uses the given source tasks to learn a prior distribution over optimal value functions and provide an efficient variational approximation of the corresponding posterior in a new target task. We show our approach to be general, in the sense that it can be combined with complex parametric function approximators and distribution models, while providing two practical algorithms based on Gaussians and Gaussian mixtures. We theoretically analyze them by deriving a finite-sample analysis and provide a comprehensive empirical evaluation in four different domains. |
Tasks | |
Published | 2018-12-01 |
URL | http://papers.nips.cc/paper/7856-transfer-of-value-functions-via-variational-methods |
http://papers.nips.cc/paper/7856-transfer-of-value-functions-via-variational-methods.pdf | |
PWC | https://paperswithcode.com/paper/transfer-of-value-functions-via-variational |
Repo | |
Framework | |
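A heavily simplified sketch of the Gaussian variant described in the abstract above, assuming linear value-function features and numpy: the source tasks' learned weight vectors define a Gaussian prior, and the target task updates it with a standard Bayesian linear-regression posterior. The paper's variational approximation handles richer function approximators and mixture priors; `noise_var` and the diagonal jitter are illustrative.

```python
import numpy as np

def gaussian_prior_from_sources(source_weights):
    """Fit a Gaussian prior over (linear) value-function weights from the
    weight vectors learned on the source tasks (needs at least two tasks)."""
    W = np.stack(source_weights)                 # T x d
    return W.mean(axis=0), np.cov(W, rowvar=False) + 1e-6 * np.eye(W.shape[1])

def target_posterior(mu0, Sigma0, Phi, targets, noise_var=1.0):
    """Gaussian posterior over target-task value weights given regression
    targets (Phi: N x d features, targets: length-N vector); this is a plain
    Bayesian linear-regression update."""
    prior_prec = np.linalg.inv(Sigma0)
    Sigma = np.linalg.inv(prior_prec + Phi.T @ Phi / noise_var)
    mu = Sigma @ (prior_prec @ mu0 + Phi.T @ targets / noise_var)
    return mu, Sigma
```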
Team SWEEPer: Joint Sentence Extraction and Fact Checking with Pointer Networks
Title | Team SWEEPer: Joint Sentence Extraction and Fact Checking with Pointer Networks |
Authors | Christopher Hidey, Mona Diab |
Abstract | Many tasks such as question answering and reading comprehension rely on information extracted from unreliable sources. These systems would thus benefit from knowing whether a statement from an unreliable source is correct. We present experiments on the FEVER (Fact Extraction and VERification) task, a shared task that involves selecting sentences from Wikipedia and predicting whether a claim is supported by those sentences, refuted, or there is not enough information. Fact checking is a task that benefits from not only asserting or disputing the veracity of a claim but also finding evidence for that position. As these tasks are dependent on each other, an ideal model would consider the veracity of the claim when finding evidence and also find only the evidence that is relevant. We thus jointly model sentence extraction and verification on the FEVER shared task. Among all participants, we ranked 5th on the blind test set (prior to any additional human evaluation of the evidence). |
Tasks | Information Retrieval, Multi-Task Learning, Natural Language Inference, Question Answering, Reading Comprehension |
Published | 2018-11-01 |
URL | https://www.aclweb.org/anthology/W18-5525/ |
https://www.aclweb.org/anthology/W18-5525 | |
PWC | https://paperswithcode.com/paper/team-sweeper-joint-sentence-extraction-and |
Repo | |
Framework | |
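The pointer-network component named in the title can be illustrated with a small attention module, sketched here in PyTorch under assumed encodings (one vector per candidate sentence and one for the claim); it produces a distribution over sentence positions from which evidence can be selected. This is a generic pointer-attention layer, not the authors' joint architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentencePointer(nn.Module):
    """Pointer-style attention over candidate sentences conditioned on the
    claim: the output is a probability distribution over input positions."""
    def __init__(self, dim):
        super().__init__()
        self.w_sent = nn.Linear(dim, dim, bias=False)
        self.w_claim = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, sent_enc, claim_enc):
        # sent_enc: (batch, n_sents, dim); claim_enc: (batch, dim)
        scores = self.v(torch.tanh(self.w_sent(sent_enc) +
                                   self.w_claim(claim_enc).unsqueeze(1))).squeeze(-1)
        return F.softmax(scores, dim=-1)           # pointer over candidate sentences
```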
QED: A fact verification system for the FEVER shared task
Title | QED: A fact verification system for the FEVER shared task |
Authors | Jackson Luken, Nanjiang Jiang, Marie-Catherine de Marneffe |
Abstract | This paper describes our system submission to the 2018 Fact Extraction and VERification (FEVER) shared task. The system uses a heuristics-based approach for evidence extraction and a modified version of the inference model by Parikh et al. (2016) for classification. Our process is broken down into three modules: potentially relevant documents are gathered based on key phrases in the claim, then any possible evidence sentences inside those documents are extracted, and finally our classifier discards any evidence deemed irrelevant and uses the remaining to classify the claim's veracity. Our system beats the shared task baseline by 12% and is successful at finding correct evidence (evidence retrieval F1 of 62.5% on the development set). |
Tasks | |
Published | 2018-11-01 |
URL | https://www.aclweb.org/anthology/W18-5526/ |
https://www.aclweb.org/anthology/W18-5526 | |
PWC | https://paperswithcode.com/paper/qed-a-fact-verification-system-for-the-fever |
Repo | |
Framework | |
Adapting Descriptions of People to the Point of View of a Moving Observer
Title | Adapting Descriptions of People to the Point of View of a Moving Observer |
Authors | Gonzalo Méndez, Raquel Hervás, Pablo Gervás, Ricardo de la Rosa, Daniel Ruiz |
Abstract | This paper addresses the task of generating descriptions of people for an observer that is moving within a scene. As the observer moves, the descriptions of the people around him also change. A referring expression generation algorithm adapted to this task needs to continuously monitor the changes in the field of view of the observer, his relative position to the people being described, and the relative position of these people to any landmarks around them, and to take these changes into account in the referring expressions generated. This task presents two advantages: many of the mechanisms already available for static contexts may be applied with small adaptations, and it introduces the concept of changing conditions into the task of referring expression generation. In this paper we describe the design of an algorithm that takes these aspects into account in order to create descriptions of people within a 3D virtual environment. The evaluation of this algorithm has shown that, by changing the descriptions in real time according to the observer's point of view, observers are able to identify the described person quickly and effectively. |
Tasks | Text Generation |
Published | 2018-11-01 |
URL | https://www.aclweb.org/anthology/W18-6540/ |
https://www.aclweb.org/anthology/W18-6540 | |
PWC | https://paperswithcode.com/paper/adapting-descriptions-of-people-to-the-point |
Repo | |
Framework | |
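The continuous monitoring the abstract describes can be illustrated with a small geometric helper, written with numpy under assumed conventions (2-D positions, a 90-degree field of view): it returns the people currently visible to the observer, ordered by distance, which a referring-expression generator could then re-describe as the observer moves. None of this is the authors' algorithm; it only shows the kind of state that must be tracked.

```python
import numpy as np

def visible_people(observer_pos, observer_dir, people, fov_deg=90.0):
    """Keep only the people inside the observer's field of view, ordered by
    distance, with a coarse left/right cue relative to the viewing direction.
    `people` maps a name to a 2-D position."""
    o = np.asarray(observer_pos, dtype=float)
    d = np.asarray(observer_dir, dtype=float)
    d /= np.linalg.norm(d)
    half_fov = np.deg2rad(fov_deg) / 2.0
    in_view = []
    for name, pos in people.items():
        v = np.asarray(pos, dtype=float) - o
        dist = np.linalg.norm(v)
        if dist == 0.0:
            continue
        angle = np.arccos(np.clip(v @ d / dist, -1.0, 1.0))
        if angle <= half_fov:
            # 2-D cross product sign: positive means the person is to the left
            side = "left" if d[0] * v[1] - d[1] * v[0] > 0 else "right"
            in_view.append((dist, name, side))
    # closer people first: easier, less ambiguous referents
    return [(name, side, round(dist, 1)) for dist, name, side in sorted(in_view)]
```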
The Linguistic Ideologies of Deep Abusive Language Classification
Title | The Linguistic Ideologies of Deep Abusive Language Classification |
Authors | Michael Castelle |
Abstract | This paper brings together theories from sociolinguistics and linguistic anthropology to critically evaluate the so-called "language ideologies" (the set of beliefs and ways of speaking about language) in the practices of abusive language classification in modern machine learning-based NLP. This argument is made at both a conceptual and empirical level, as we review approaches to abusive language from different fields, and use two neural network methods to analyze three datasets developed for abusive language classification tasks (drawn from Wikipedia, Facebook, and StackOverflow). By evaluating and comparing these results, we argue for the importance of incorporating theories of pragmatics and metapragmatics into both the design of classification tasks and ML architectures. |
Tasks | |
Published | 2018-10-01 |
URL | https://www.aclweb.org/anthology/W18-5120/ |
https://www.aclweb.org/anthology/W18-5120 | |
PWC | https://paperswithcode.com/paper/the-linguistic-ideologies-of-deep-abusive |
Repo | |
Framework | |