October 15, 2019

3056 words 15 mins read

Paper Group NANR 98

Joint optimization for compressive video sensing and reconstruction under hardware constraints. Introduction to the Special Issue on Language in Social Media: Exploiting Discourse and Other Contextual Information. Revisiting Bayes by Backprop. Knowledge Graph Embedding with Numeric Attributes of Entities. An Annotated Corpus of Picture Stories Reto …

Joint optimization for compressive video sensing and reconstruction under hardware constraints

Title Joint optimization for compressive video sensing and reconstruction under hardware constraints
Authors Michitaka Yoshida, Akihiko Torii, Masatoshi Okutomi, Kenta Endo, Yukinobu Sugiyama, Rin-ichiro Taniguchi, Hajime Nagahara
Abstract Compressive video sensing is the process of encoding multiple sub-frames into a single frame with controlled sensor exposures and reconstructing the sub-frames from the single compressed frame. It is known that spatially and temporally random exposures provide the most balanced compression in terms of signal recovery. However, sensors that achieve a fully random exposure on each pixel cannot be easily realized in practice, because the required circuitry complicates the sensor and degrades its sensitivity and resolution. Therefore, it is necessary to design an exposure pattern that respects the constraints imposed by the hardware. In this paper, we propose a method of jointly optimizing the exposure patterns of compressive sensing and the reconstruction framework under hardware constraints. Through simulations and actual experiments, we demonstrate that the proposed framework can reconstruct multiple sub-frame images with higher quality.
Tasks Compressive Sensing
Published 2018-09-01
URL http://openaccess.thecvf.com/content_ECCV_2018/html/Michitaka_Yoshida_Joint_optimization_for_ECCV_2018_paper.html
PDF http://openaccess.thecvf.com/content_ECCV_2018/papers/Michitaka_Yoshida_Joint_optimization_for_ECCV_2018_paper.pdf
PWC https://paperswithcode.com/paper/joint-optimization-for-compressive-video
Repo
Framework
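The sensing model described in the abstract (accumulating sub-frames through a per-pixel exposure mask) can be illustrated with a small NumPy sketch. This is not the paper's optimized pattern or learned reconstruction: the mask here is purely random and the decode is a minimum-norm pseudo-inverse stand-in, with all sizes invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 4, 4                       # sub-frames, height, width

# Ground-truth sub-frames and a random binary exposure pattern
# (the paper optimizes this pattern under hardware constraints;
# here it is purely random for illustration).
x = rng.random((T, H, W))
mask = rng.integers(0, 2, size=(T, H, W)).astype(float)

# Sensing: each pixel of the single compressed frame accumulates
# the sub-frame values where its exposure mask is on.
y = (mask * x).sum(axis=0)              # shape (H, W)

# Minimum-norm per-pixel decode (a stand-in for the learned
# reconstruction in the paper): for each pixel, solve the
# underdetermined system y = s . x_hat with the pseudo-inverse.
x_hat = np.zeros_like(x)
for i in range(H):
    for j in range(W):
        s = mask[:, i, j]               # (T,) exposure vector
        if s.sum() > 0:
            x_hat[:, i, j] = s * y[i, j] / (s @ s)

# The decode reproduces the measurement exactly, even though the
# individual sub-frames are only recoverable up to the null space.
assert np.allclose((mask * x_hat).sum(axis=0), y)
```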

Introduction to the Special Issue on Language in Social Media: Exploiting Discourse and Other Contextual Information

Title Introduction to the Special Issue on Language in Social Media: Exploiting Discourse and Other Contextual Information
Authors Farah Benamara, Diana Inkpen, Maite Taboada
Abstract Social media content is changing the way people interact with each other and share information, personal messages, and opinions about situations, objects, and past experiences. Most social media texts are short online conversational posts or comments that do not contain enough information for natural language processing (NLP) tools, as they are often accompanied by non-linguistic contextual information, including meta-data (e.g., the user's profile, the social network of the user, and their interactions with other users). Exploiting such different types of context and their interactions makes the automatic processing of social media texts a challenging research task. Indeed, simply applying traditional text mining tools is clearly sub-optimal, as, typically, these tools take into account neither the interactive dimension nor the particular nature of this data, which shares properties with both spoken and written language. This special issue contributes to a deeper understanding of the role of these interactions to process social media data from a new perspective in discourse interpretation. This introduction first provides the necessary background to understand what context is from both the linguistic and computational linguistic perspectives, then presents the most recent context-based approaches to NLP for social media. We conclude with an overview of the papers accepted in this special issue, highlighting what we believe are the future directions in processing social media texts.
Tasks
Published 2018-12-01
URL https://www.aclweb.org/anthology/J18-4006/
PDF https://www.aclweb.org/anthology/J18-4006
PWC https://paperswithcode.com/paper/introduction-to-the-special-issue-on-language
Repo
Framework

Revisiting Bayes by Backprop

Title Revisiting Bayes by Backprop
Authors Meire Fortunato, Charles Blundell, Oriol Vinyals
Abstract In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, while also reducing the number of parameters by 80%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics. We show that this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks. We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improves our model over a variety of other schemes for training them. We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.
Tasks Image Captioning, Language Modelling
Published 2018-01-01
URL https://openreview.net/forum?id=Hkp3uhxCW
PDF https://openreview.net/pdf?id=Hkp3uhxCW
PWC https://paperswithcode.com/paper/revisiting-bayes-by-backprop
Repo
Framework
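The core of the Bayes by Backprop scheme the abstract revisits — a Gaussian variational posterior over each weight, sampled via the reparameterisation trick, plus an analytic KL complexity cost against the prior — can be sketched in a few lines of NumPy. The sizes and parameter values below are invented; a real implementation would backpropagate through this sample inside an RNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    """Keeps the standard deviation positive: sigma = log(1 + e^rho)."""
    return np.log1p(np.exp(x))

# Variational parameters for a tiny weight vector (illustrative sizes).
mu = rng.normal(0, 0.1, size=5)
rho = np.full(5, -3.0)
sigma = softplus(rho)

# Reparameterised sample: w = mu + sigma * eps, with eps ~ N(0, I),
# so gradients flow to (mu, rho) through an ordinary forward pass.
eps = rng.standard_normal(5)
w = mu + sigma * eps

# Analytic KL(q(w) || p(w)) against a standard-normal prior -- the
# complexity cost added to the data log-likelihood during training.
kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))
assert kl >= 0.0
```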

Knowledge Graph Embedding with Numeric Attributes of Entities

Title Knowledge Graph Embedding with Numeric Attributes of Entities
Authors Yanrong Wu, Zhichun Wang
Abstract Knowledge Graph (KG) embedding projects entities and relations into a low-dimensional vector space, and has been successfully applied to the KG completion task. Previous embedding approaches only model entities and their relations, ignoring the large number of entities' numeric attributes in KGs. In this paper, we propose a new KG embedding model which jointly models entity relations and numeric attributes. Our approach combines an attribute embedding model with a translation-based structure embedding model, and learns the embeddings of entities, relations, and attributes simultaneously. Experiments on link prediction on YAGO and Freebase show that the performance is effectively improved by adding entities' numeric attributes to the embedding model.
Tasks Graph Embedding, Knowledge Graph Embedding, Knowledge Graphs, Link Prediction, Representation Learning
Published 2018-07-01
URL https://www.aclweb.org/anthology/W18-3017/
PDF https://www.aclweb.org/anthology/W18-3017
PWC https://paperswithcode.com/paper/knowledge-graph-embedding-with-numeric
Repo
Framework
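The two terms the abstract combines — a translation-based structure score and a regression loss on numeric attributes — can be sketched as follows. This is a toy illustration, not the paper's model: the embeddings, the linear attribute regressor, and the 0.1 weighting are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Toy embeddings for a single triple (h, r, t); t is constructed to
# make the triple near-perfect under the translation assumption.
h = rng.normal(size=dim)
r = rng.normal(size=dim)
t = h + r + 0.01 * rng.normal(size=dim)

# Translation-based structure score: smaller ||h + r - t|| is better.
structure_score = np.linalg.norm(h + r - t)

# Attribute model: predict a standardised numeric value of the head
# entity from its embedding with a per-attribute linear regressor
# (weights and the target value are invented for illustration).
w_attr = rng.normal(size=dim)
value_true = 0.5
value_pred = h @ w_attr
attribute_loss = (value_pred - value_true) ** 2

# A joint objective sums both terms; the trade-off weight is a free
# modelling choice, not a value from the paper.
joint_loss = structure_score + 0.1 * attribute_loss
assert structure_score < 1.0
```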

An Annotated Corpus of Picture Stories Retold by Language Learners

Title An Annotated Corpus of Picture Stories Retold by Language Learners
Authors Christine Köhn, Arne Köhn
Abstract Corpora with language learner writing usually consist of essays, which are difficult to annotate reliably and to process automatically due to the high degree of freedom and the nature of learner language. We develop a task which mildly constrains learner utterances to facilitate consistent annotation and reliable automatic processing but at the same time does not prime learners with textual information. In this task, learners retell a comic strip. We present the resulting task-based corpus of stories written by learners of German. We designed the corpus to be able to serve multiple purposes: The corpus was manually annotated, including target hypotheses and syntactic structures. We achieve a very high inter-annotator agreement: κ = 0.765 for the annotation of minimal target hypotheses and κ = 0.507 for the extended target hypotheses. We attribute this to the design of our task and the annotation guidelines, which are based on those for the Falko corpus (Reznicek et al., 2012).
Tasks Grammatical Error Correction, Reading Comprehension
Published 2018-08-01
URL https://www.aclweb.org/anthology/W18-4914/
PDF https://www.aclweb.org/anthology/W18-4914
PWC https://paperswithcode.com/paper/an-annotated-corpus-of-picture-stories-retold
Repo
Framework
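The κ values reported in the abstract are inter-annotator agreement scores; for readers unfamiliar with the statistic, Cohen's kappa corrects raw agreement for the agreement expected by chance. A minimal sketch, with invented toy labels (the corpus's actual annotations are target hypotheses and syntactic structures):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each annotator's marginal label counts.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n**2
    return (observed - expected) / (1 - expected)

# Toy annotations (invented labels, two annotators, six items).
ann1 = ["ok", "err", "ok", "ok", "err", "ok"]
ann2 = ["ok", "err", "ok", "err", "err", "ok"]
print(round(cohens_kappa(ann1, ann2), 3))   # -> 0.667
```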

The Whole is Greater than the Sum of its Parts: Towards the Effectiveness of Voting Ensemble Classifiers for Complex Word Identification

Title The Whole is Greater than the Sum of its Parts: Towards the Effectiveness of Voting Ensemble Classifiers for Complex Word Identification
Authors Nikhil Wani, Sandeep Mathias, Jayashree Aanand Gajjam, Pushpak Bhattacharyya
Abstract In this paper, we present an effective system using voting ensemble classifiers to detect contextually complex words for non-native English speakers. To make the final decision, we channel a set of eight calibrated classifiers based on lexical, size, and vocabulary features and train our model on annotated datasets collected from a mixture of native and non-native speakers. Thereafter, we test our system on three datasets, namely News, WikiNews, and Wikipedia, and report competitive results with an F1-score ranging from 0.777 to 0.855 across the datasets. Our system outperforms multiple other models and falls within 0.026 to 0.042 of the best-performing model's score in the shared task.
Tasks Complex Word Identification, Lexical Simplification
Published 2018-06-01
URL https://www.aclweb.org/anthology/W18-0522/
PDF https://www.aclweb.org/anthology/W18-0522
PWC https://paperswithcode.com/paper/the-whole-is-greater-than-the-sum-of-its
Repo
Framework
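The voting step described in the abstract — combining several calibrated classifiers into one decision — can be illustrated with a soft-voting sketch. The eight probability rows below are invented stand-ins for the paper's base classifiers; a real system would produce them from trained models.

```python
import numpy as np

# Calibrated P(complex) estimates from eight hypothetical base
# classifiers on three candidate words (all values invented).
probs = np.array([
    [0.9, 0.2, 0.6],
    [0.8, 0.1, 0.5],
    [0.7, 0.3, 0.4],
    [0.9, 0.2, 0.7],
    [0.6, 0.4, 0.5],
    [0.8, 0.1, 0.6],
    [0.9, 0.3, 0.5],
    [0.7, 0.2, 0.4],
])

# Soft voting: average the calibrated probabilities across the
# ensemble, then threshold the mean at 0.5.
avg = probs.mean(axis=0)
labels = (avg >= 0.5).astype(int)
print(labels)
```

Averaging calibrated probabilities (soft voting) lets confident classifiers outweigh uncertain ones, which a plain majority vote over hard labels would not.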

探索結合快速文本及卷積神經網路於可讀性模型之建立 (Exploring Combination of FastText and Convolutional Neural Networks for Building Readability Models) [In Chinese]

Title 探索結合快速文本及卷積神經網路於可讀性模型之建立 (Exploring Combination of FastText and Convolutional Neural Networks for Building Readability Models) [In Chinese]
Authors Hou-Chiang Tseng, Berlin Chen, Yao-Ting Sung
Abstract
Tasks
Published 2018-10-01
URL https://www.aclweb.org/anthology/O18-1012/
PDF https://www.aclweb.org/anthology/O18-1012
PWC https://paperswithcode.com/paper/c-caaeaacccc2e-14a-e-aa1aoc-exploring
Repo
Framework

Event versus entity co-reference: Effects of context and form of referring expression

Title Event versus entity co-reference: Effects of context and form of referring expression
Authors Sharid Loáiciga, Luca Bevacqua, Hannah Rohde, Christian Hardmeier
Abstract Anaphora resolution systems require both an enumeration of possible candidate antecedents and an identification process of the antecedent. This paper focuses on (i) the impact of the form of referring expression on entity-vs-event preferences and (ii) how properties of the passage interact with referential form. Two crowd-sourced story-continuation experiments were conducted, using constructed and naturally-occurring passages, to see how participants interpret "It" and "This" pronouns following a context sentence that makes available event and entity referents. Our participants show a strong, but not categorical, bias to use "This" to refer to events and "It" to refer to entities. However, these preferences vary with passage characteristics such as verb class (a proxy in our constructed examples for the number of explicit and implicit entities) and more subtle author intentions regarding subsequent re-mention (the original event-vs-entity re-mention of our corpus items).
Tasks
Published 2018-06-01
URL https://www.aclweb.org/anthology/W18-0711/
PDF https://www.aclweb.org/anthology/W18-0711
PWC https://paperswithcode.com/paper/event-versus-entity-co-reference-effects-of
Repo
Framework

Developing and Evaluating Annotation Procedures for Twitter Data during Hazard Events

Title Developing and Evaluating Annotation Procedures for Twitter Data during Hazard Events
Authors Kevin Stowe, Martha Palmer, Jennings Anderson, Marina Kogan, Leysia Palen, Kenneth M. Anderson, Rebecca Morss, Julie Demuth, Heather Lazrus
Abstract When a hazard such as a hurricane threatens, people are forced to make a wide variety of decisions, and the information they receive and produce can influence their own and others' actions. As social media grows more popular, an increasing number of people are using social media platforms to obtain and share information about approaching threats and discuss their interpretations of the threat and their protective decisions. This work aims to improve understanding of natural disasters through social media and provide an annotation scheme to identify themes in users' social media behavior and facilitate efforts in supervised machine learning. To that end, this work has three contributions: (1) the creation of an annotation scheme to consistently identify hazard-related themes in Twitter, (2) an overview of agreement rates and difficulties in identifying annotation categories, and (3) a public release of both the dataset and guidelines developed from this scheme.
Tasks Decision Making
Published 2018-08-01
URL https://www.aclweb.org/anthology/W18-4915/
PDF https://www.aclweb.org/anthology/W18-4915
PWC https://paperswithcode.com/paper/developing-and-evaluating-annotation
Repo
Framework

LEARNING TO ORGANIZE KNOWLEDGE WITH N-GRAM MACHINES

Title LEARNING TO ORGANIZE KNOWLEDGE WITH N-GRAM MACHINES
Authors Fan Yang, Jiazhong Nie, William W. Cohen, Ni Lao
Abstract Deep neural networks (DNNs) have had great success on NLP tasks such as language modeling, machine translation, and certain question answering (QA) tasks. However, this success is limited on more knowledge-intensive tasks such as QA over a large corpus. Existing end-to-end deep QA models (Miller et al., 2016; Weston et al., 2014) need to read the entire text after observing the question, and therefore their complexity in answering a question is linear in the text size. This is prohibitive for practical tasks such as QA from Wikipedia, a novel, or the Web. We propose to solve this scalability issue by using symbolic meaning representations, which can be indexed and retrieved efficiently with complexity that is independent of the text size. More specifically, we use sequence-to-sequence models to encode knowledge symbolically and generate programs to answer questions from the encoded knowledge. We apply our approach, called the N-Gram Machine (NGM), to the bAbI tasks (Weston et al., 2015) and a special version of them (“life-long bAbI”) which has stories of up to 10 million sentences. Our experiments show that NGM can solve both of these tasks accurately and efficiently. Unlike fully differentiable memory models, NGM’s time complexity and answering quality are not affected by the story length. The whole system of NGM is trained end-to-end with REINFORCE (Williams, 1992). To avoid high variance in gradient estimation, which is typical in discrete latent variable models, we use beam search instead of sampling. To tackle the exponentially large search space, we use a stabilized auto-encoding objective and a structure tweak procedure to iteratively reduce and refine the search space.
Tasks Language Modelling, Latent Variable Models, Machine Translation, Question Answering
Published 2018-01-01
URL https://openreview.net/forum?id=By3v9k-RZ
PDF https://openreview.net/pdf?id=By3v9k-RZ
PWC https://paperswithcode.com/paper/learning-to-organize-knowledge-with-n-gram
Repo
Framework
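The scalability argument in the abstract — symbolic representations that can be indexed once and then queried in time independent of corpus size — comes down to an inverted index over the encoded knowledge. A toy sketch in that spirit (the tuples and the hash-based store are invented for illustration; the paper learns the encodings with sequence-to-sequence models):

```python
# A toy symbolic store: sentences encoded as (subject, relation,
# object) tuples, indexed once, then queried by hash lookup.
story = [
    ("mary", "went_to", "kitchen"),
    ("john", "went_to", "garden"),
    ("mary", "picked_up", "apple"),
]

# Build an inverted index over (subject, relation). Indexing is a
# one-time linear pass over the story.
index = {}
for s, r, o in story:
    index.setdefault((s, r), []).append(o)

def answer(subject, relation):
    """Answer by dictionary lookup -- no re-reading of the story,
    so query time does not grow with the story length."""
    return index.get((subject, relation), [])

assert answer("mary", "went_to") == ["kitchen"]
```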

Provable Variational Inference for Constrained Log-Submodular Models

Title Provable Variational Inference for Constrained Log-Submodular Models
Authors Josip Djolonga, Stefanie Jegelka, Andreas Krause
Abstract Submodular maximization problems appear in several areas of machine learning and data science, as many useful modelling concepts such as diversity and coverage satisfy this natural diminishing returns property. Because the data defining these functions, as well as the decisions made with the computed solutions, are subject to statistical noise and randomness, it is arguably necessary to go beyond computing a single approximate optimum and quantify its inherent uncertainty. To this end, we define a rich class of probabilistic models associated with constrained submodular maximization problems. These capture log-submodular dependencies of arbitrary order between the variables, but also satisfy hard combinatorial constraints: the variables are assumed to take on one of a possibly exponentially large set of states, which form the bases of a matroid. To perform inference in these models we design novel variational inference algorithms, which carefully leverage the combinatorial and probabilistic properties of these objects. In addition to providing completely tractable and well-understood variational approximations, our approach results in the minimization of a convex upper bound on the log-partition function. The bound can be efficiently evaluated using greedy algorithms and optimized using any first-order method. Moreover, for the case of facility location and weighted coverage functions, we prove the first constant factor guarantee in this setting: an efficiently certifiable e/(e-1) approximation of the log-partition function. Finally, we empirically demonstrate the effectiveness of our approach on several instances.
Tasks
Published 2018-12-01
URL http://papers.nips.cc/paper/7535-provable-variational-inference-for-constrained-log-submodular-models
PDF http://papers.nips.cc/paper/7535-provable-variational-inference-for-constrained-log-submodular-models.pdf
PWC https://paperswithcode.com/paper/provable-variational-inference-for
Repo
Framework
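The abstract mentions that the bound can be evaluated using greedy algorithms; for readers unfamiliar with the setting, the classic greedy routine on a weighted-coverage function (one of the two function classes the paper analyses) looks like the following. The sets, weights, and budget are invented toy data, and this sketch maximizes the coverage function itself rather than evaluating the paper's variational bound.

```python
# Greedy maximization of a weighted-coverage function under a
# cardinality constraint (toy data, invented for illustration).
weights = {"a": 3.0, "b": 2.0, "c": 1.0, "d": 1.0}
sets = {
    "S1": {"a", "b"},
    "S2": {"b", "c"},
    "S3": {"c", "d"},
}

def coverage(chosen):
    """Total weight of elements covered by the chosen sets."""
    covered = set().union(*(sets[s] for s in chosen)) if chosen else set()
    return sum(weights[e] for e in covered)

# Classic greedy: repeatedly pick the set with the largest marginal
# gain. For monotone submodular functions this achieves the familiar
# (1 - 1/e) approximation guarantee.
chosen, budget = [], 2
for _ in range(budget):
    best = max(sets.keys() - set(chosen),
               key=lambda s: coverage(chosen + [s]) - coverage(chosen))
    chosen.append(best)

print(chosen, coverage(chosen))
```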

Joint Modeling for Query Expansion and Information Extraction with Reinforcement Learning

Title Joint Modeling for Query Expansion and Information Extraction with Reinforcement Learning
Authors Motoki Taniguchi, Yasuhide Miura, Tomoko Ohkuma
Abstract Information extraction about an event can be improved by incorporating external evidence. In this study, we propose a joint model for pseudo-relevance-feedback-based query expansion and information extraction with reinforcement learning. Our model generates an event-specific query to effectively retrieve documents relevant to the event. We demonstrate that our model performs comparably to or better than the previous model on two publicly available datasets. Furthermore, we analyze the influence of retrieval effectiveness on the extraction performance of our model.
Tasks Decision Making
Published 2018-11-01
URL https://www.aclweb.org/anthology/W18-5506/
PDF https://www.aclweb.org/anthology/W18-5506
PWC https://paperswithcode.com/paper/joint-modeling-for-query-expansion-and
Repo
Framework

Grouping-By-ID: Guarding Against Adversarial Domain Shifts

Title Grouping-By-ID: Guarding Against Adversarial Domain Shifts
Authors Christina Heinze-Deml, Nicolai Meinshausen
Abstract When training a deep neural network for supervised image classification, one can broadly distinguish between two types of latent features of images that will drive the classification of class Y. Following the notation of Gong et al. (2016), we can divide features broadly into the classes of (i) “core” or “conditionally invariant” features X^ci whose distribution P(X^ci | Y) does not change substantially across domains and (ii) “style” or “orthogonal” features X^orth whose distribution P(X^orth | Y) can change substantially across domains. These latter orthogonal features would generally include features such as position, rotation, image quality or brightness but also more complex ones like hair color or posture for images of persons. We try to guard against future adversarial domain shifts by ideally just using the “conditionally invariant” features for classification. In contrast to previous work, we assume that the domain itself is not observed and hence a latent variable. We can hence not directly see the distributional change of features across different domains. We do assume, however, that we can sometimes observe a so-called identifier or ID variable. We might know, for example, that two images show the same person, with ID referring to the identity of the person. In data augmentation, we generate several images from the same original image, with ID referring to the relevant original image. The method requires only a small fraction of images to have an ID variable. We provide a causal framework for the problem by adding the ID variable to the model of Gong et al. (2016). However, we are interested in settings where we cannot observe the domain directly and we treat domain as a latent variable. If two or more samples share the same class and identifier, (Y, ID)=(y,i), then we treat those samples as counterfactuals under different style interventions on the orthogonal or style features.
Using this grouping-by-ID approach, we regularize the network to provide near constant output across samples that share the same ID by penalizing with an appropriate graph Laplacian. This is shown to substantially improve performance in settings where domains change in terms of image quality, brightness, color changes, and more complex changes such as changes in movement and posture. We show links to questions of interpretability, fairness and transfer learning.
Tasks Data Augmentation, Image Classification, Transfer Learning
Published 2018-01-01
URL https://openreview.net/forum?id=HyPpD0g0Z
PDF https://openreview.net/pdf?id=HyPpD0g0Z
PWC https://paperswithcode.com/paper/grouping-by-id-guarding-against-adversarial
Repo
Framework
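The grouping penalty described in the abstract — pushing the network toward near-constant output across samples that share an ID — can be sketched directly: the squared deviation of each output from its group mean is a graph Laplacian quadratic form over a graph connecting same-ID samples. The outputs and IDs below are invented stand-ins for network logits and augmentation groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Network outputs (logits) for six samples; samples sharing an ID are
# augmented views of the same underlying image (values invented).
outputs = rng.normal(size=(6, 3))
ids = np.array([0, 0, 1, 1, 2, 2])

# Grouping penalty: squared deviation of each output from its group
# mean. Summed over groups, this equals the quadratic form of a graph
# Laplacian whose edges connect samples with the same (class, ID) pair.
penalty = 0.0
for g in np.unique(ids):
    group = outputs[ids == g]
    penalty += ((group - group.mean(axis=0)) ** 2).sum()

# Identical outputs within every group would make the penalty zero,
# which is the regularizer's target; added to the classification loss,
# it discourages reliance on style features that vary within a group.
assert penalty > 0.0
```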

KLUEnicorn at SemEval-2018 Task 3: A Naive Approach to Irony Detection

Title KLUEnicorn at SemEval-2018 Task 3: A Naive Approach to Irony Detection
Authors Luise Dürlich
Abstract This paper describes the KLUEnicorn system submitted to the SemEval-2018 task on “Irony detection in English tweets”. The proposed system uses a naive Bayes classifier to exploit rather simple lexical, pragmatic, and semantic features as well as sentiment. It further takes a closer look at different adverb categories and named entities, and factors in word-embedding information.
Tasks Sarcasm Detection
Published 2018-06-01
URL https://www.aclweb.org/anthology/S18-1099/
PDF https://www.aclweb.org/anthology/S18-1099
PWC https://paperswithcode.com/paper/kluenicorn-at-semeval-2018-task-3-a-naive
Repo
Framework
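For readers unfamiliar with the classifier the abstract names, a minimal multinomial naive Bayes over bag-of-words counts looks like the following. The tweets and labels are invented, the priors are implicitly uniform, and the real system uses much richer lexical, pragmatic, and sentiment features.

```python
from collections import Counter
from math import log

# Tiny invented training set: 1 = ironic, 0 = not ironic.
train = [
    ("i just love being ignored", 1),
    ("great another monday", 1),
    ("the weather is nice today", 0),
    ("i enjoyed the concert", 0),
]

# Per-class word counts, used with add-one (Laplace) smoothing.
counts = {0: Counter(), 1: Counter()}
for text, label in train:
    counts[label].update(text.split())
vocab = set(counts[0]) | set(counts[1])

def log_likelihood(text, label):
    """Smoothed multinomial log-likelihood of the text under a class."""
    total = sum(counts[label].values()) + len(vocab)
    return sum(log((counts[label][w] + 1) / total) for w in text.split())

def predict(text):
    # Uniform class priors, so the likelihood alone decides.
    return max((0, 1), key=lambda c: log_likelihood(text, c))

assert predict("love being ignored") == 1
```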

Discrete-Valued Neural Networks Using Variational Inference

Title Discrete-Valued Neural Networks Using Variational Inference
Authors Wolfgang Roth, Franz Pernkopf
Abstract The increasing demand for neural networks (NNs) being employed on embedded devices has led to plenty of research investigating methods for training low precision NNs. While most methods involve a quantization step, we propose a principled Bayesian approach where we first infer a distribution over a discrete weight space from which we subsequently derive hardware-friendly low precision NNs. To this end, we introduce a probabilistic forward pass to approximate the intractable variational objective that allows us to optimize over discrete-valued weight distributions for NNs with sign activation functions. In our experiments, we show that our model achieves state of the art performance on several real world data sets. In addition, the resulting models exhibit a substantial amount of sparsity that can be utilized to further reduce the computational costs for inference.
Tasks Quantization
Published 2018-01-01
URL https://openreview.net/forum?id=r1h2DllAW
PDF https://openreview.net/pdf?id=r1h2DllAW
PWC https://paperswithcode.com/paper/discrete-valued-neural-networks-using
Repo
Framework
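The probabilistic forward pass the abstract introduces — propagating moments of discrete-valued weight distributions through a sign activation — can be sketched for a single unit. This is an illustrative central-limit approximation with invented probability tables, not the paper's full variational objective.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Each weight takes a value in {-1, 0, +1}; each row below is that
# weight's probability vector over the three values (numbers invented).
values = np.array([-1.0, 0.0, 1.0])
probs = rng.dirichlet(np.ones(3), size=4)       # 4 input weights

# First and second moments of each discrete weight distribution.
w_mean = probs @ values
w_var = probs @ values**2 - w_mean**2

# Probabilistic forward pass for one unit: the pre-activation is a sum
# of independent discrete terms, so we propagate its mean and variance
# and treat it as Gaussian (central-limit approximation).
x = np.array([0.5, -1.0, 2.0, 0.3])             # deterministic input
pre_mean = w_mean @ x
pre_var = w_var @ x**2

# Probability that the sign activation outputs +1 under the Gaussian
# approximation of the pre-activation.
p_plus = 0.5 * (1.0 + erf(pre_mean / sqrt(2.0 * pre_var)))
assert 0.0 <= p_plus <= 1.0
```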