January 24, 2020


Paper Group NANR 219



DiscoNet: Shapes Learning on Disconnected Manifolds for 3D Editing

Title DiscoNet: Shapes Learning on Disconnected Manifolds for 3D Editing
Authors Eloi Mehr, Ariane Jourdan, Nicolas Thome, Matthieu Cord, Vincent Guitteny
Abstract Editing 3D models is a very challenging task, as it requires complex interactions with the 3D shape to reach the targeted design, while preserving the global consistency and plausibility of the shape. In this work, we present an intelligent and user-friendly 3D editing tool, where the edited model is constrained to lie on a learned manifold of realistic shapes. Due to the topological variability of real 3D models, they often lie close to a disconnected manifold, which cannot be learned with a common learning algorithm. Therefore, our tool is based on a new deep learning model, DiscoNet, which extends 3D surface autoencoders in two ways. Firstly, our deep learning model uses several autoencoders to automatically learn each connected component of a disconnected manifold, without any supervision. Secondly, each autoencoder infers the output 3D surface by deforming a pre-learned 3D template specific to each connected component. Both advances translate into improved 3D synthesis, thus enhancing the quality of our 3D editing tool.
Tasks
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Mehr_DiscoNet_Shapes_Learning_on_Disconnected_Manifolds_for_3D_Editing_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Mehr_DiscoNet_Shapes_Learning_on_Disconnected_Manifolds_for_3D_Editing_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/disconet-shapes-learning-on-disconnected
Repo
Framework
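
The core idea, several autoencoders competing so that each one captures a single connected component, with every shape assigned to whichever reconstructs it best, can be sketched with linear "autoencoders", i.e. k-subspaces clustering. This is an illustrative stand-in, not the paper's deep model; `fit_k_subspaces` and the toy data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two disconnected "components": point clouds near two orthogonal lines in 3-D.
a = rng.normal(size=(100, 1)) * np.array([1.0, 0.0, 0.0]) + 0.01 * rng.normal(size=(100, 3))
b = rng.normal(size=(100, 1)) * np.array([0.0, 1.0, 0.0]) + 0.01 * rng.normal(size=(100, 3))
X = np.vstack([a, b])

def fit_k_subspaces(X, k=2, dim=1, iters=20, assign0=None, seed=0):
    """Alternate between assigning points to the linear 'autoencoder'
    (subspace) that reconstructs them best, and refitting each subspace
    by PCA on its currently assigned points."""
    rng = np.random.default_rng(seed)
    assign = rng.integers(0, k, size=len(X)) if assign0 is None else assign0.copy()
    for _ in range(iters):
        bases = []
        for j in range(k):
            pts = X[assign == j]
            if len(pts) <= dim:  # re-seed an empty/degenerate cluster
                pts = X[rng.choice(len(X), dim + 2, replace=False)]
            mean = pts.mean(0)
            # Top `dim` right-singular vectors span the best-fit subspace.
            _, _, vt = np.linalg.svd(pts - mean, full_matrices=False)
            bases.append((mean, vt[:dim]))
        # Reconstruction error of every point under every subspace.
        errs = np.stack([np.linalg.norm((X - m) - ((X - m) @ B.T) @ B, axis=1)
                         for m, B in bases])
        assign = errs.argmin(0)  # winner-take-all assignment
    return assign, bases

assign_demo, bases_demo = fit_k_subspaces(X)
```

Each subspace plays the role of one autoencoder's template: points are "encoded" as coordinates along the basis and "decoded" by projection, and the competition discovers the components without supervision.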

KB-NLG: From Knowledge Base to Natural Language Generation

Title KB-NLG: From Knowledge Base to Natural Language Generation
Authors Wen Cui, Minghui Zhou, Rongwen Zhao, Narges Norouzi
Abstract We perform the natural language generation (NLG) task by mapping sets of Resource Description Framework (RDF) triples into text. First, we investigate the impact of increasing the number of entity types in delexicalisation on generation quality. Second, we conduct experiments to evaluate two widely applied language generation systems, the encoder-decoder with attention and the Transformer model, on a large benchmark dataset. We evaluate the models on automatic metrics as well as training time. To our knowledge, we are the first to apply the Transformer model to this task.
Tasks Text Generation
Published 2019-08-01
URL https://www.aclweb.org/anthology/papers/W/W19/W19-3626/
PDF https://www.aclweb.org/anthology/W19-3626
PWC https://paperswithcode.com/paper/kb-nlg-from-knowledge-base-to-natural
Repo
Framework
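
Delexicalisation, the preprocessing step whose entity-type granularity the paper studies, replaces entity mentions from the RDF triple with typed placeholders before generation and restores them afterwards. A minimal sketch (the helper names and the toy triple are illustrative, not from the paper):

```python
def delexicalise(sentence, triple, types):
    """Replace subject/object mentions with typed placeholder tags.
    triple = (subject, predicate, object); types maps entity -> type name."""
    mapping = {}
    out = sentence
    for entity in (triple[0], triple[2]):
        tag = types.get(entity, "ENTITY").upper()
        out = out.replace(entity, tag)
        mapping[tag] = entity
    return out, mapping

def relexicalise(template, mapping):
    """Restore the original entities into the generated template."""
    for tag, entity in mapping.items():
        template = template.replace(tag, entity)
    return template

triple = ("Alan Bean", "birthPlace", "Wheeler, Texas")
types = {"Alan Bean": "astronaut", "Wheeler, Texas": "city"}
sent = "Alan Bean was born in Wheeler, Texas."
delex, mapping = delexicalise(sent, triple, types)
```

The generator is then trained on slot-filled templates like `ASTRONAUT was born in CITY.`, so rare entity names never appear in its vocabulary; increasing the number of distinct type tags is the knob the paper varies.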

SINAI-DL at SemEval-2019 Task 5: Recurrent networks and data augmentation by paraphrasing

Title SINAI-DL at SemEval-2019 Task 5: Recurrent networks and data augmentation by paraphrasing
Authors Arturo Montejo-Ráez, Salud María Jiménez-Zafra, Miguel A. García-Cumbreras, Manuel Carlos Díaz-Galiano
Abstract This paper describes the participation of the SINAI-DL team in Task 5 of SemEval 2019, called HatEval. We have applied classic neural network layers, such as word embeddings and LSTMs, to build a neural classifier for both proposed subtasks. Because the amount of training data provided is small compared to what is expected for an adequate learning stage in deep architectures, we explore the use of paraphrasing tools as a source of data augmentation. Our results show that this method is promising, as some improvement has been found over non-augmented training sets.
Tasks Data Augmentation, Word Embeddings
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2085/
PDF https://www.aclweb.org/anthology/S19-2085
PWC https://paperswithcode.com/paper/sinai-dl-at-semeval-2019-task-5-recurrent
Repo
Framework
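
The augmentation recipe, generating paraphrases of each training sentence and reusing the original label, can be sketched with a toy synonym lexicon standing in for the external paraphrasing tool the authors used (all names and data here are illustrative):

```python
import itertools

# Stand-in for a paraphrasing tool: a tiny synonym lexicon.
SYNONYMS = {"small": ["tiny", "little"], "fast": ["quick", "rapid"]}

def paraphrases(sentence, max_new=4):
    """Generate up to max_new surface variants by swapping in synonyms."""
    words = sentence.split()
    slots = [(i, SYNONYMS[w]) for i, w in enumerate(words) if w in SYNONYMS]
    out = []
    for combo in itertools.product(*(alts for _, alts in slots)):
        new = words[:]
        for (i, _), alt in zip(slots, combo):
            new[i] = alt
        cand = " ".join(new)
        if cand != sentence:
            out.append(cand)
        if len(out) >= max_new:
            break
    return out

def augment(dataset, max_new=4):
    """Each paraphrase inherits the label of its source example."""
    extra = [(p, y) for x, y in dataset for p in paraphrases(x, max_new)]
    return dataset + extra

data = [("a small and fast model", 1)]
aug = augment(data)
```

The augmented set is then fed to the same classifier; the key property is that the label is assumed invariant under paraphrasing.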

Modeling Parts, Structure, and System Dynamics via Predictive Learning

Title Modeling Parts, Structure, and System Dynamics via Predictive Learning
Authors Zhenjia Xu, Zhijian Liu, Chen Sun, Kevin Murphy, William T. Freeman, Joshua B. Tenenbaum, Jiajun Wu
Abstract Humans easily recognize object parts and their hierarchical structure by watching how they move; they can then predict how each part moves in the future. In this paper, we propose a novel formulation that simultaneously learns a hierarchical, disentangled object representation and a dynamics model for object parts from unlabeled videos in a self-supervised manner. Our Parts, Structure, and Dynamics (PSD) model learns to first recognize the object parts via a layered image representation; second, predict hierarchy via a structural descriptor that composes low-level concepts into a hierarchical structure; and third, model the system dynamics by predicting the future. Experiments on multiple real and synthetic datasets demonstrate that our PSD model works well on all three tasks: segmenting object parts, building their hierarchical structure, and capturing their motion distributions.
Tasks
Published 2019-05-01
URL https://openreview.net/forum?id=rJe10iC5K7
PDF https://openreview.net/pdf?id=rJe10iC5K7
PWC https://paperswithcode.com/paper/modeling-parts-structure-and-system-dynamics
Repo
Framework

Compositional Hyponymy with Positive Operators

Title Compositional Hyponymy with Positive Operators
Authors Martha Lewis
Abstract Language is used to describe concepts, and many of these concepts are hierarchical. Moreover, this hierarchy should be compatible with forming phrases and sentences. We use linear-algebraic methods that allow us to encode words as collections of vectors. The representations we use have an ordering, related to subspace inclusion, which we interpret as modelling hierarchical information. The word representations built can be understood within a compositional distributional semantic framework, providing methods for composing words to form phrase and sentence level representations. We show that the resulting representations give competitive results on both word-level hyponymy and sentence-level entailment datasets.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1075/
PDF https://www.aclweb.org/anthology/R19-1075
PWC https://paperswithcode.com/paper/compositional-hyponymy-with-positive
Repo
Framework
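
The ordering the paper relies on can be illustrated with the Loewner order on positive operators: a word's operator is a sum of outer products of vectors standing in for its instances or hyponyms, and `p` is a hyponym of `q` when `q - p` is positive semidefinite. A toy sketch (the vectors and words are illustrative, and this crisp PSD check stands in for the paper's graded hyponymy measures):

```python
import numpy as np

def operator(vectors):
    """Positive operator for a word: sum of outer products of vectors
    standing in for its instances/hyponyms."""
    return sum(np.outer(v, v) for v in vectors)

def loewner_below(p, q, eps=1e-9):
    """p is below q in the Loewner order iff q - p has no (significantly)
    negative eigenvalue, i.e. q - p is positive semidefinite."""
    return np.linalg.eigvalsh(q - p).min() >= -eps

dog = operator([np.array([1.0, 0.0, 0.0])])
fish = operator([np.array([0.0, 1.0, 1.0])])
# "animal" subsumes its hyponyms, so its operator dominates each of theirs.
animal = dog + fish + operator([np.array([0.0, 0.0, 1.0])])

hypo = loewner_below(dog, animal)        # dog is below animal
not_hypo = loewner_below(animal, dog)    # but not vice versa
```

Because a hypernym's operator is built as a sum over its hyponyms' operators, the subspace-inclusion ordering falls out for free, and it composes through the usual compositional distributional maps.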

Semantic Component Decomposition for Face Attribute Manipulation

Title Semantic Component Decomposition for Face Attribute Manipulation
Authors Ying-Cong Chen, Xiaohui Shen, Zhe Lin, Xin Lu, I-Ming Pao, Jiaya Jia
Abstract Deep neural network-based methods have been proposed for face attribute manipulation. There still exist, however, two major issues: insufficient visual quality (or resolution) of the results and lack of user control. These limit the applicability of existing methods, since users may have different editing preferences for facial attributes. In this paper, we address these issues by proposing a semantic component model. The model decomposes a facial attribute into multiple semantic components, each corresponding to a specific face region. This not only allows users to control the edit strength on different parts based on their preferences, but also makes it effective to remove unwanted editing effects. Further, each semantic component is composed of two fundamental elements, which determine the edit effect and the edit region respectively. This property provides fine interactive control. As shown in experiments, our model not only produces high-quality results but also allows effective user interaction.
Tasks
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Chen_Semantic_Component_Decomposition_for_Face_Attribute_Manipulation_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Chen_Semantic_Component_Decomposition_for_Face_Attribute_Manipulation_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/semantic-component-decomposition-for-face
Repo
Framework

To Annotate or Not? Predicting Performance Drop under Domain Shift

Title To Annotate or Not? Predicting Performance Drop under Domain Shift
Authors Hady Elsahar, Matthias Gallé
Abstract Performance drop due to domain shift is an endemic problem for NLP models in production. It creates pressure to continuously annotate evaluation datasets to measure the expected drop in model performance, which can be prohibitively expensive and slow. In this paper, we study the problem of predicting the performance drop of modern NLP models under domain shift, in the absence of any target-domain labels. We investigate three families of methods ($\mathcal{H}$-divergence, reverse classification accuracy and confidence measures), show how they can be used to predict the performance drop, and study their robustness to adversarial domain shifts. Our results on sentiment classification and sequence labelling show that our method is able to predict performance drops with an error rate as low as 2.15% and 0.89% for sentiment analysis and POS tagging respectively.
Tasks Sentiment Analysis
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1222/
PDF https://www.aclweb.org/anthology/D19-1222
PWC https://paperswithcode.com/paper/to-annotate-or-not-predicting-performance
Repo
Framework
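
Of the three families studied, the confidence-measure approach is the simplest to sketch: train on the source domain, then use the model's mean maximum class probability on unlabeled target data as a proxy for how far performance will drop. A toy version with a numpy logistic regression (the simulated shift and all names are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient-descent binary logistic regression."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(X)
        b -= lr * g.mean()
    return w, b

def confidence(X, w, b):
    """Mean max-probability: the model's average confidence on X."""
    p = 1 / (1 + np.exp(-(X @ w + b)))
    return np.maximum(p, 1 - p).mean()

# Source: two well-separated classes; target: same task under heavy noise,
# a stand-in for domain shift.
Xs = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
ys = np.array([0] * 200 + [1] * 200)
Xt = Xs + rng.normal(0, 2.0, Xs.shape)

w, b = train_logreg(Xs, ys)
conf_src = confidence(Xs, w, b)
conf_tgt = confidence(Xt, w, b)
```

The gap `conf_src - conf_tgt` requires no target labels at all, which is exactly the regime the paper targets; their contribution is calibrating such signals into an accurate drop estimate.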

Translator2Vec: Understanding and Representing Human Post-Editors

Title Translator2Vec: Understanding and Representing Human Post-Editors
Authors António Góis, André F. T. Martins
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-6605/
PDF https://www.aclweb.org/anthology/W19-6605
PWC https://paperswithcode.com/paper/translator2vec-understanding-and-representing-1
Repo
Framework

Event Detection without Triggers

Title Event Detection without Triggers
Authors Shulin Liu, Yang Li, Feng Zhang, Tao Yang, Xinpeng Zhou
Abstract The goal of event detection (ED) is to detect the occurrences of events and categorize them. Previous work solved this task by recognizing and classifying event triggers, defined as the word or phrase that most clearly expresses an event occurrence. As a consequence, existing approaches required both annotated triggers and event types in training data. However, triggers are nonessential to event detection, and it is time-consuming for annotators to pick out the “most clearly” expressing word from a given sentence, especially a long one. The expensive annotation of training corpora limits the application of existing approaches. To reduce manual effort, we explore detecting events without triggers. In this work, we propose a novel framework dubbed Type-aware Bias Neural Network with Attention Mechanisms (TBNNAM), which encodes the representation of a sentence based on target event types. Experimental results demonstrate its effectiveness. Remarkably, the proposed approach even achieves performance competitive with state-of-the-art methods that use annotated triggers.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-1080/
PDF https://www.aclweb.org/anthology/N19-1080
PWC https://paperswithcode.com/paper/event-detection-without-triggers
Repo
Framework
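
The type-aware idea, scoring a sentence against a target event type without locating a trigger by using the type embedding as an attention query, can be sketched as follows (a toy scoring function, not the TBNNAM architecture; all vectors are illustrative):

```python
import numpy as np

def type_aware_score(word_vecs, type_vec):
    """Use the target event type as an attention query over word vectors,
    then score the attention-pooled sentence against that same type."""
    sims = word_vecs @ type_vec
    att = np.exp(sims - sims.max())
    att /= att.sum()                 # softmax attention weights
    pooled = att @ word_vecs         # type-conditioned sentence vector
    return float(pooled @ type_vec)

attack = np.array([1.0, 0.0])                     # toy "Attack" type embedding
with_cue = np.array([[0.9, 0.1], [0.0, 1.0]])     # sentence with an attack-like word
without_cue = np.array([[0.0, 1.0], [0.1, 0.9]])  # sentence with no such word

s_attack = type_aware_score(with_cue, attack)
s_none = type_aware_score(without_cue, attack)
```

Because the sentence representation is recomputed per candidate type, no single word ever has to be labeled as the trigger: the attention weights implicitly find type-relevant evidence wherever it sits.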

Object Guided External Memory Network for Video Object Detection

Title Object Guided External Memory Network for Video Object Detection
Authors Hanming Deng, Yang Hua, Tao Song, Zongpu Zhang, Zhengui Xue, Ruhui Ma, Neil Robertson, Haibing Guan
Abstract Video object detection is more challenging than image object detection because of deteriorated frame quality. To enhance the feature representation, state-of-the-art methods propagate temporal information into the deteriorated frame by aligning and aggregating entire feature maps from multiple nearby frames. However, restricted by the low storage efficiency of feature maps and their vulnerable content-address allocation, long-term temporal information is not fully exploited by these methods. In this work, we propose the first object-guided external memory network for online video object detection. Storage efficiency is handled by object-guided hard attention that selectively stores valuable features, and long-term information is protected when stored in an addressable external data matrix. A set of read/write operations is designed to accurately propagate, allocate, and delete multi-level memory features under object guidance. We evaluate our method on the ImageNet VID dataset and achieve state-of-the-art performance as well as a good speed-accuracy tradeoff. Furthermore, by visualizing the external memory, we show the detailed object-level reasoning process across frames.
Tasks Object Detection, Video Object Detection
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Deng_Object_Guided_External_Memory_Network_for_Video_Object_Detection_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Deng_Object_Guided_External_Memory_Network_for_Video_Object_Detection_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/object-guided-external-memory-network-for
Repo
Framework
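
The storage pattern described, writing selected object features into free slots of an addressable matrix and reading by content-based attention, can be sketched as a toy external memory (the class and its slot policies are illustrative, not the paper's learned read/write operations):

```python
import numpy as np

class ExternalMemory:
    """Toy addressable memory: write object features to free slots, read by
    cosine-similarity attention over the stored keys."""

    def __init__(self, slots, dim):
        self.keys = np.zeros((slots, dim))
        self.used = np.zeros(slots, dtype=bool)

    def write(self, feat):
        free = np.flatnonzero(~self.used)
        i = free[0] if len(free) else 0  # naive policy: overwrite slot 0 when full
        self.keys[i] = feat
        self.used[i] = True
        return i

    def read(self, query):
        if not self.used.any():
            return np.zeros_like(query)
        K = self.keys[self.used]
        sim = K @ query / (np.linalg.norm(K, axis=1) * np.linalg.norm(query) + 1e-8)
        att = np.exp(sim - sim.max())
        att /= att.sum()                 # softmax over stored entries
        return att @ K                   # attention-weighted readout

mem = ExternalMemory(slots=4, dim=3)
mem.write(np.array([1.0, 0.0, 0.0]))   # feature of object A from an earlier frame
mem.write(np.array([0.0, 1.0, 0.0]))   # feature of object B
out = mem.read(np.array([1.0, 0.1, 0.0]))  # query from the current frame
```

Because only per-object features are stored (rather than whole feature maps), the memory stays compact enough to hold long-term information, which is the storage-efficiency argument the abstract makes.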

Alignment Based Matching Networks for One-Shot Classification and Open-Set Recognition

Title Alignment Based Matching Networks for One-Shot Classification and Open-Set Recognition
Authors Paresh Malalur, Tommi Jaakkola
Abstract Deep learning for object classification relies heavily on convolutional models. While effective, CNNs are rarely interpretable after the fact. An attention mechanism can be used to highlight the area of the image that the model focuses on, offering a narrow view into the mechanism of classification. We expand on this idea by forcing the method to explicitly align images to be classified with reference images representing the classes. The mechanism of alignment is learned, and therefore does not require that the reference objects resemble those being classified. Beyond explanation, our exemplar-based cross-alignment method enables classification with only a single example per category (one-shot). Our model cuts the 5-way, 1-shot error rate on Omniglot from 2.1% to 1.4% and on MiniImageNet from 53.5% to 46.5%, while providing point-wise alignment information that offers some insight into what the network captures. This alignment also enables recognition of an unsupported (open-set) class in the one-shot setting, maintaining an F1-score above 0.5 on Omniglot even with 19 other distracting classes, whereas baselines completely fail to separate the open-set class.
Tasks Object Classification, Omniglot, Open Set Learning
Published 2019-05-01
URL https://openreview.net/forum?id=Skl6k209Ym
PDF https://openreview.net/pdf?id=Skl6k209Ym
PWC https://paperswithcode.com/paper/alignment-based-mathching-networks-for-one
Repo
Framework
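
The one-shot and open-set behaviour can be sketched in its simplest form: match a query against a single reference feature per class, and reject low-similarity queries as open-set. This nearest-reference sketch omits the learned cross-alignment that is the paper's actual contribution; the threshold and features are illustrative:

```python
import numpy as np

def one_shot_classify(query, references, threshold=0.5):
    """Match a query feature to one reference per class by cosine similarity;
    reject as open-set when even the best match falls below the threshold."""
    sims = {label: ref @ query / (np.linalg.norm(ref) * np.linalg.norm(query))
            for label, ref in references.items()}
    best = max(sims, key=sims.get)
    return best if sims[best] >= threshold else "open-set"

# One reference feature per class (the "one shot").
refs = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}

pred1 = one_shot_classify(np.array([0.9, 0.1]), refs)    # near the cat reference
pred2 = one_shot_classify(np.array([-1.0, -1.0]), refs)  # unlike any reference
```

In the paper, the similarity is replaced by a learned point-wise alignment between query and reference, which both improves matching and makes the decision inspectable.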

Learning Unsupervised Multilingual Word Embeddings with Incremental Multilingual Hubs

Title Learning Unsupervised Multilingual Word Embeddings with Incremental Multilingual Hubs
Authors Geert Heyman, Bregt Verreet, Ivan Vulić, Marie-Francine Moens
Abstract Recent research has discovered that a shared bilingual word embedding space can be induced by projecting monolingual word embedding spaces from two languages using a self-learning paradigm without any bilingual supervision. However, it has also been shown that for distant language pairs such fully unsupervised self-learning methods are unstable and often get stuck in poor local optima due to reduced isomorphism between starting monolingual spaces. In this work, we propose a new robust framework for learning unsupervised multilingual word embeddings that mitigates the instability issues. We learn a shared multilingual embedding space for a variable number of languages by incrementally adding new languages one by one to the current multilingual space. Through the gradual language addition the method can leverage the interdependencies between the new language and all other languages in the current multilingual space. We find that it is beneficial to project more distant languages later in the iterative process. Our fully unsupervised multilingual embedding spaces yield results that are on par with the state-of-the-art methods in the bilingual lexicon induction (BLI) task, and simultaneously obtain state-of-the-art scores on two downstream tasks: multilingual document classification and multilingual dependency parsing, outperforming even supervised baselines. This finding also accentuates the need to establish evaluation protocols for cross-lingual word embeddings beyond the omnipresent intrinsic BLI task in future work.
Tasks Dependency Parsing, Document Classification, Multilingual Word Embeddings, Word Embeddings
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-1188/
PDF https://www.aclweb.org/anthology/N19-1188
PWC https://paperswithcode.com/paper/learning-unsupervised-multilingual-word
Repo
Framework
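
The incremental step, projecting each newly added language into the current multilingual hub, reduces in its supervised core to orthogonal Procrustes alignment. A sketch with a synthetic rotated space (the self-learning loop that removes the need for a bilingual dictionary is omitted, and all names here are illustrative):

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal map W minimising ||Y W - X||_F: the closed-form solution is
    U V^T from the SVD of Y^T X. Used to project a new language's embeddings
    Y into the current hub space X."""
    u, _, vt = np.linalg.svd(Y.T @ X)
    return u @ vt

rng = np.random.default_rng(1)
hub = rng.normal(size=(50, 4))                 # current multilingual hub embeddings
R = np.linalg.qr(rng.normal(size=(4, 4)))[0]   # hidden rotation of the new language
new_lang = hub @ R                             # same "words", rotated space

W = procrustes_align(hub, new_lang)
aligned = new_lang @ W                         # new language mapped into the hub
```

After each alignment, the hub grows to include the new language, so later (more distant) languages can exploit correspondences with every language already in the space, which is why the paper finds it beneficial to add distant languages last.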

EusDisParser: improving an under-resourced discourse parser with cross-lingual data

Title EusDisParser: improving an under-resourced discourse parser with cross-lingual data
Authors Mikel Iruskieta, Chloé Braud
Abstract Development of discourse parsers to annotate the relational discourse structure of a text is crucial for many downstream tasks. However, most existing work focuses on English, assuming a quite large dataset. Discourse data have been annotated for Basque, but training a system on these data is challenging since the corpus is very small. In this paper, we create the first RST-based demonstrator for Basque, and we investigate the use of data in another language to improve the performance of a Basque discourse parser. More precisely, we build a monolingual system using the small set of data available, and we investigate the use of multilingual word embeddings to train a system for Basque on data annotated for another language. We found that our approach, building a system limited to the small set of data available for Basque, improves over previous approaches that make use of larger amounts of data annotated in other languages. At best, we get 34.78 F1 for the full discourse structure. More annotated data are necessary to improve on these results. We also describe which relations match the gold standard, in order to understand these results.
Tasks Multilingual Word Embeddings, Word Embeddings
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2709/
PDF https://www.aclweb.org/anthology/W19-2709
PWC https://paperswithcode.com/paper/eusdisparser-improving-an-under-resourced
Repo
Framework

Selecting causal brain features with a single conditional independence test per feature

Title Selecting causal brain features with a single conditional independence test per feature
Authors Atalanti Mastakouri, Bernhard Schölkopf, Dominik Janzing
Abstract We propose a constraint-based causal feature selection method for identifying causes of a given target variable, selecting from a set of candidate variables, while hidden variables may also act as common causes with the target. We prove that if we observe a cause for each candidate cause, then a single conditional independence test with one conditioning variable is sufficient to decide whether a candidate associated with the target is indeed causing it. We thus improve upon existing methods by significantly simplifying statistical testing and requiring a weaker version of causal faithfulness. Our main assumption is inspired by neuroscience paradigms where the activity of a single neuron is considered to be also caused by its own previous state. We demonstrate successful application of our method to simulated data, as well as to encephalographic data of twenty-one participants recorded at the Max Planck Institute for Intelligent Systems. The detected causes of motor performance are in accordance with the latest consensus about neurophysiological pathways and can provide new insights into personalised brain stimulation.
Tasks Feature Selection
Published 2019-12-01
URL http://papers.nips.cc/paper/9419-selecting-causal-brain-features-with-a-single-conditional-independence-test-per-feature
PDF http://papers.nips.cc/paper/9419-selecting-causal-brain-features-with-a-single-conditional-independence-test-per-feature.pdf
PWC https://paperswithcode.com/paper/selecting-causal-brain-features-with-a-single
Repo
Framework
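
The central claim, one conditional independence test with a single conditioning variable per candidate, can be illustrated with a partial-correlation test that conditions each candidate's association with the target on the candidate's own observed cause. All variables below are synthetic, and partial correlation stands in for whatever CI test one prefers:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out the single conditioning
    variable z: the one-conditioning-variable CI test, in linear form."""
    def residual(a, z):
        Z = np.column_stack([z, np.ones_like(z)])
        coef, *_ = np.linalg.lstsq(Z, a, rcond=None)
        return a - Z @ coef
    rx, ry = residual(x, z), residual(y, z)
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
n = 2000
cause_prev = rng.normal(size=n)                       # observed cause of each candidate
candidate = cause_prev + 0.5 * rng.normal(size=n)     # truly causes the target
target = candidate + 0.5 * rng.normal(size=n)
spurious = cause_prev + 0.5 * rng.normal(size=n)      # associated with target, not a cause

# A true cause stays dependent on the target given its own observed cause;
# a merely associated variable becomes (nearly) independent.
pc_cause = partial_corr(candidate, target, cause_prev)
pc_spur = partial_corr(spurious, target, cause_prev)
```

One such test per candidate suffices here: conditioning on the candidate's own cause blocks the confounding path while leaving the candidate's direct effect on the target intact.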

Automatic Argument Quality Assessment - New Datasets and Methods

Title Automatic Argument Quality Assessment - New Datasets and Methods
Authors Assaf Toledo, Shai Gretz, Edo Cohen-Karlik, Roni Friedman, Elad Venezian, Dan Lahav, Michal Jacovi, Ranit Aharonov, Noam Slonim
Abstract We explore the task of automatic assessment of argument quality. To that end, we actively collected 6.3k arguments, more than a factor of five compared to previously examined data. Each argument was explicitly and carefully annotated for its quality. In addition, 14k pairs of arguments were annotated independently, identifying the higher quality argument in each pair. In spite of the inherent subjective nature of the task, both annotation schemes led to surprisingly consistent results. We release the labeled datasets to the community. Furthermore, we suggest neural methods based on a recently released language model, for argument ranking as well as for argument-pair classification. In the former task, our results are comparable to state-of-the-art; in the latter task our results significantly outperform earlier methods.
Tasks Language Modelling
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1564/
PDF https://www.aclweb.org/anthology/D19-1564
PWC https://paperswithcode.com/paper/automatic-argument-quality-assessment-new-1
Repo
Framework