January 25, 2020

2949 words 14 mins read

Paper Group NAWR 31

Aligning Vector-spaces with Noisy Supervised Lexicon. A Hybrid Approach for Aspect-Based Sentiment Analysis Using a Lexicalized Domain Ontology and Attentional Neural Models. Exploration Bonus for Regret Minimization in Discrete and Continuous Average Reward MDPs. HyperGCN: A New Method For Training Graph Convolutional Networks on Hypergraphs. Gene …

Aligning Vector-spaces with Noisy Supervised Lexicon

Title Aligning Vector-spaces with Noisy Supervised Lexicon
Authors Noa Yehezkel Lubin, Jacob Goldberger, Yoav Goldberg
Abstract The problem of learning to translate between two vector spaces given a set of aligned points arises in several application areas of NLP. Current solutions assume that the lexicon which defines the alignment pairs is noise-free. We consider the case where the set of aligned points is allowed to contain some noise, in the form of incorrect lexicon pairs, and show that such noise arises in practice by analyzing edited dictionaries after the cleaning process. We demonstrate that such noise substantially degrades the accuracy of the learned translation when using current methods. We propose a model that accounts for noisy pairs. This is achieved by introducing a generative model with a compatible iterative EM algorithm. The algorithm jointly learns the noise level in the lexicon, finds the set of noisy pairs, and learns the mapping between the spaces. We demonstrate the effectiveness of our proposed algorithm on two alignment problems: bilingual word embedding translation, and mapping between diachronic embedding spaces for recovering the semantic shifts of words across time periods.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-1045/
PDF https://www.aclweb.org/anthology/N19-1045
PWC https://paperswithcode.com/paper/aligning-vector-spaces-with-noisy-supervised-1
Repo https://github.com/NoaKel/Noise-Aware-Alignment
Framework none
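
The noise-aware formulation described above can be pictured as a two-component mixture: each lexicon pair is either generated by the linear mapping plus Gaussian noise, or drawn from a background "noise" distribution, and EM alternates between scoring pairs and refitting the map. The sketch below is a minimal NumPy illustration of that idea under our own simplifying assumptions (isotropic residual variance, a constant background density); it is not the authors' implementation, which lives in the linked repository.

```python
import numpy as np

def noise_aware_alignment(X, Y, iters=50):
    """EM sketch: learn a linear map W with Y ~ X @ W for 'clean' lexicon pairs,
    while allowing some pairs to be lexicon noise.
    X, Y: (n, d) matrices of aligned source / target vectors."""
    n, d = X.shape
    W = np.linalg.lstsq(X, Y, rcond=None)[0]   # initial mapping from all pairs
    alpha = 0.5                                # prior prob. that a pair is clean
    sigma2 = 1.0                               # isotropic residual variance (assumption)
    log_noise = np.log(1e-3)                   # constant background density (assumption)
    for _ in range(iters):
        # E-step: responsibility that each pair was generated by the mapping
        resid = ((Y - X @ W) ** 2).sum(axis=1)
        log_clean = -resid / (2 * sigma2) - (d / 2) * np.log(2 * np.pi * sigma2)
        a = np.log(alpha) + log_clean
        b = np.log(1 - alpha) + log_noise
        m = np.maximum(a, b)                   # log-sum-exp for numerical stability
        r = np.exp(a - m) / (np.exp(a - m) + np.exp(b - m))
        # M-step: weighted least squares for W, then update noise level and prior
        Xw = X * r[:, None]
        W = np.linalg.solve(X.T @ Xw + 1e-6 * np.eye(d), Xw.T @ Y)
        sigma2 = (r * ((Y - X @ W) ** 2).sum(axis=1)).sum() / (r.sum() * d + 1e-12)
        alpha = r.mean()
    return W, r   # mapping and per-pair probability of being a clean pair
```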

A Hybrid Approach for Aspect-Based Sentiment Analysis Using a Lexicalized Domain Ontology and Attentional Neural Models

Title A Hybrid Approach for Aspect-Based Sentiment Analysis Using a Lexicalized Domain Ontology and Attentional Neural Models
Authors Olaf Wallaart, Flavius Frasincar
Abstract This work focuses on sentence-level aspect-based sentiment analysis for restaurant reviews. A two-stage sentiment analysis algorithm is proposed. In this method, a lexicalized domain ontology is first used to predict the sentiment, and a neural network with a rotatory attention mechanism (LCR-Rot) is used as a backup algorithm. Furthermore, two extensions are added to the backup algorithm. The first extension changes the order in which the rotatory attention mechanism operates (LCR-Rot-inv). The second extension runs over the rotatory attention mechanism for multiple iterations (LCR-Rot-hop). Using the SemEval-2015 and SemEval-2016 data, we conclude that the two-stage method outperforms the baseline methods, albeit by a small margin. Moreover, we find that the method that iterates multiple times over the rotatory attention mechanism performs best.
Tasks Aspect-Based Sentiment Analysis, Sentiment Analysis
Published 2019-03-04
URL https://personal.eur.nl/frasincar/papers/ESWC2019/eswc2019.pdf
PDF https://personal.eur.nl/frasincar/papers/ESWC2019/eswc2019.pdf
PWC https://paperswithcode.com/paper/a-hybrid-approach-for-aspect-based-sentiment
Repo https://github.com/ofwallaart/HAABSA
Framework tf
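
The two-stage design is easy to see as control flow: the ontology reasoner handles an aspect whenever its rules fire conclusively, and the attentional neural model (LCR-Rot or one of its variants) is consulted only as a backup. The snippet below illustrates that flow only; `ontology` and `neural_backup` are hypothetical stand-ins, not the interfaces of the linked HAABSA repository.

```python
def predict_sentiment(sentence, aspect, ontology, neural_backup):
    """Two-stage hybrid prediction (sketch).
    Stage 1: lexicalized domain ontology; Stage 2: neural backup model."""
    label = ontology.predict(sentence, aspect)       # 'positive'/'negative' or None
    if label is not None:                            # ontology rules were conclusive
        return label
    return neural_backup.predict(sentence, aspect)   # e.g. an LCR-Rot-hop model
```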

Exploration Bonus for Regret Minimization in Discrete and Continuous Average Reward MDPs

Title Exploration Bonus for Regret Minimization in Discrete and Continuous Average Reward MDPs
Authors Jian Qian, Ronan Fruit, Matteo Pirotta, Alessandro Lazaric
Abstract The exploration bonus is an effective approach to manage the exploration-exploitation trade-off in Markov Decision Processes (MDPs). While it has been analyzed in infinite-horizon discounted and finite-horizon problems, we focus on designing and analyzing the exploration bonus in the more challenging infinite-horizon undiscounted setting. We first introduce SCAL+, a variant of SCAL (Fruit et al. 2018), that uses a suitable exploration bonus to solve any discrete unknown weakly-communicating MDP for which an upper bound $c$ on the span of the optimal bias function is known. We prove that SCAL+ enjoys the same regret guarantees as SCAL, which relies on the less efficient extended value iteration approach. Furthermore, we leverage the flexibility provided by the exploration bonus scheme to generalize SCAL+ to smooth MDPs with continuous state space and discrete actions. We show that the resulting algorithm (SCCAL+) achieves the same regret bound as UCCRL (Ortner and Ryabko, 2012) while being the first implementable algorithm for this setting.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/8735-exploration-bonus-for-regret-minimization-in-discrete-and-continuous-average-reward-mdps
PDF http://papers.nips.cc/paper/8735-exploration-bonus-for-regret-minimization-in-discrete-and-continuous-average-reward-mdps.pdf
PWC https://paperswithcode.com/paper/exploration-bonus-for-regret-minimization-in-1
Repo https://github.com/RonanFR/UCRL
Framework none
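
At a high level, an exploration-bonus algorithm of this kind replaces optimism over a set of plausible MDPs with an optimistic reward: the empirical reward of each state-action pair is inflated by a bonus that shrinks with the visit count, and planning is then done in the resulting "bonus MDP". The sketch below is a generic count-based bonus plus value iteration for a finite MDP, assuming a bonus of order c/sqrt(N(s,a)); for simplicity it plans with a discount factor rather than in the average-reward setting of the paper, and it is not the exact SCAL+ bonus or its span-constrained planning step.

```python
import numpy as np

def optimistic_value_iteration(P_hat, R_hat, N, c, iters=200, gamma=0.99):
    """Plan in an MDP whose rewards are inflated by a count-based bonus (sketch).
    P_hat: (S, A, S) estimated transitions, R_hat: (S, A) estimated rewards,
    N: (S, A) visit counts, c: scale of the exploration bonus."""
    bonus = c / np.sqrt(np.maximum(N, 1))        # shrinks as (s, a) is visited more
    R_opt = R_hat + bonus                        # optimistic reward
    S, A = R_hat.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R_opt + gamma * P_hat @ V            # (S, A) one-step lookahead
        V = Q.max(axis=1)
    return Q.argmax(axis=1)                      # greedy (optimistic) policy
```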

HyperGCN: A New Method For Training Graph Convolutional Networks on Hypergraphs

Title HyperGCN: A New Method For Training Graph Convolutional Networks on Hypergraphs
Authors Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, Partha Talukdar
Abstract In many real-world network datasets such as co-authorship, co-citation, email communication, etc., relationships are complex and go beyond pairwise. Hypergraphs provide a flexible and natural way to model such complex relationships. The obvious existence of such complex relationships in many real-world networks naturally motivates the problem of learning with hypergraphs. A popular learning paradigm is hypergraph-based semi-supervised learning (SSL), where the goal is to assign labels to initially unlabeled vertices in a hypergraph. Motivated by the fact that a graph convolutional network (GCN) has been effective for graph-based SSL, we propose HyperGCN, a novel GCN for SSL on attributed hypergraphs. Additionally, we show how HyperGCN can be used as a learning-based approach for combinatorial optimisation on NP-hard hypergraph problems. We demonstrate HyperGCN’s effectiveness through detailed experimentation on real-world hypergraphs. We have made HyperGCN’s source code available to foster reproducible research.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/8430-hypergcn-a-new-method-for-training-graph-convolutional-networks-on-hypergraphs
PDF http://papers.nips.cc/paper/8430-hypergcn-a-new-method-for-training-graph-convolutional-networks-on-hypergraphs.pdf
PWC https://paperswithcode.com/paper/hypergcn-a-new-method-for-training-graph
Repo https://github.com/malllabiisc/HyperGCN
Framework pytorch
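
The central trick in HyperGCN-style training is to collapse each hyperedge into a small set of graph edges chosen from the current node signal (in the basic variant, the pair of vertices whose representations differ the most), and then run an ordinary GCN step on the resulting graph. The sketch below shows only that hyperedge-to-edge reduction under our own simplifications; the full method (mediators, normalization, the GCN layers themselves) is in the linked repository.

```python
import numpy as np

def hyperedges_to_edges(H, hyperedges):
    """For each hyperedge, keep the single pair of vertices whose current
    representations are farthest apart (sketch of the basic reduction).
    H: (n, d) node features, hyperedges: list of vertex-index lists."""
    edges = []
    for e in hyperedges:
        e = list(e)
        # pairwise squared distances restricted to the hyperedge
        diff = H[e][:, None, :] - H[e][None, :, :]
        dist = (diff ** 2).sum(-1)
        i, j = np.unravel_index(dist.argmax(), dist.shape)
        edges.append((e[i], e[j]))
    return edges   # feed these to a standard GCN layer as the graph for this step
```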

General E(2)-Equivariant Steerable CNNs

Title General E(2)-Equivariant Steerable CNNs
Authors Maurice Weiler, Gabriele Cesa
Abstract The big empirical success of group equivariant networks has led in recent years to the sprouting of a great variety of equivariant network architectures. A particular focus has thereby been on rotation and reflection equivariant CNNs for planar images. Here we give a general description of E(2)-equivariant convolutions in the framework of Steerable CNNs. The theory of Steerable CNNs thereby yields constraints on the convolution kernels which depend on group representations describing the transformation laws of feature spaces. We show that these constraints for arbitrary group representations can be reduced to constraints under irreducible representations. A general solution of the kernel space constraint is given for arbitrary representations of the Euclidean group E(2) and its subgroups. We implement a wide range of previously proposed and entirely new equivariant network architectures and extensively compare their performances. E(2)-steerable convolutions are further shown to yield remarkable gains on CIFAR-10, CIFAR-100 and STL-10 when used as a drop-in replacement for non-equivariant convolutions.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/9580-general-e2-equivariant-steerable-cnns
PDF http://papers.nips.cc/paper/9580-general-e2-equivariant-steerable-cnns.pdf
PWC https://paperswithcode.com/paper/general-e2-equivariant-steerable-cnns
Repo https://github.com/QUVA-Lab/e2cnn
Framework pytorch
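
The linked e2cnn library exposes this through group spaces and typed feature fields: one picks the symmetry group acting on the plane, declares the representation type of every feature map, and the steerable convolution solves the kernel constraint internally. The snippet below follows the usage pattern from the e2cnn documentation for a C8-equivariant layer; exact module names may differ between library versions.

```python
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

# symmetry group: the 8 planar rotations acting on 2D feature maps
gspace = gspaces.Rot2dOnR2(N=8)

# input: a 3-channel image transforming trivially under rotations
in_type = enn.FieldType(gspace, 3 * [gspace.trivial_repr])
# output: 16 regular-representation fields (rotation-equivariant features)
out_type = enn.FieldType(gspace, 16 * [gspace.regular_repr])

conv = enn.R2Conv(in_type, out_type, kernel_size=5, padding=2)
relu = enn.ReLU(out_type)

x = enn.GeometricTensor(torch.randn(4, 3, 32, 32), in_type)  # wrap a plain tensor
y = relu(conv(x))   # equivariant feature fields of type `out_type`
```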

Scene Parsing via Integrated Classification Model and Variance-Based Regularization

Title Scene Parsing via Integrated Classification Model and Variance-Based Regularization
Authors Hengcan Shi, Hongliang Li, Qingbo Wu, Zichen Song
Abstract Scene parsing is a challenging task in computer vision, which can be formulated as a pixel-wise classification problem. Existing deep-learning-based methods usually use one general classifier to recognize all object categories. However, the general classifier easily makes mistakes on confusing categories that share similar appearances or semantics. In this paper, we propose an integrated classification model and a variance-based regularization to achieve more accurate classifications. On the one hand, the integrated classification model contains multiple classifiers: not only the general classifier but also a refinement classifier to distinguish the confusing categories. On the other hand, the variance-based regularization pushes the scores of the different categories apart as much as possible to reduce misclassifications. Specifically, the integrated classification model includes three steps. The first is to extract the features of each pixel. Based on the features, the second step is to classify each pixel across all categories to generate a preliminary classification result. In the third step, we leverage a refinement classifier to refine the classification result, focusing on differentiating the high-preliminary-score categories. An integrated loss with the variance-based regularization is used to train the model. Extensive experiments on three common scene parsing datasets demonstrate the effectiveness of the proposed method.
Tasks Scene Parsing
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Shi_Scene_Parsing_via_Integrated_Classification_Model_and_Variance-Based_Regularization_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Shi_Scene_Parsing_via_Integrated_Classification_Model_and_Variance-Based_Regularization_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/scene-parsing-via-integrated-classification
Repo https://github.com/shihengcan/ICM-matcaffe
Framework none
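
The variance-based regularization can be read as an extra term that pushes the per-pixel category scores apart so that one category clearly dominates. The loss below is our own minimal reading of that idea (cross-entropy minus a scaled variance of the softmax scores), not the exact formulation in the paper.

```python
import torch
import torch.nn.functional as F

def variance_regularized_loss(logits, target, lam=0.1):
    """Cross-entropy plus a term rewarding high variance among per-pixel class
    scores (sketch). logits: (N, C, H, W), target: (N, H, W) long labels."""
    ce = F.cross_entropy(logits, target, ignore_index=255)
    probs = logits.softmax(dim=1)     # per-pixel class scores
    var = probs.var(dim=1).mean()     # low variance = easily confused categories
    return ce - lam * var             # encourage well-separated scores
```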

Instance-aware Image-to-Image Translation

Title Instance-aware Image-to-Image Translation
Authors Sangwoo Mo, Minsu Cho, Jinwoo Shin
Abstract Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs). However, previous methods often fail in challenging cases, in particular, when an image has multiple target instances and a translation task involves significant changes in shape, e.g., translating pants to skirts in fashion images. To tackle these issues, we propose a novel method, coined instance-aware GAN (InstaGAN), that incorporates the instance information (e.g., object segmentation masks) and improves multi-instance transfiguration. The proposed method translates both an image and the corresponding set of instance attributes while maintaining the permutation invariance property of the instances. To this end, we introduce a context preserving loss that encourages the network to learn the identity function outside of target instances. We also propose a sequential mini-batch inference/training technique that handles multiple instances with limited GPU memory and helps the network generalize better to multiple instances. Our comparative evaluation demonstrates the effectiveness of the proposed method on different image datasets, in particular, in the aforementioned challenging cases.
Tasks Image-to-Image Translation, Semantic Segmentation, Unsupervised Image-To-Image Translation
Published 2019-05-01
URL https://openreview.net/forum?id=ryxwJhC9YX
PDF https://openreview.net/pdf?id=ryxwJhC9YX
PWC https://paperswithcode.com/paper/instance-aware-image-to-image-translation
Repo https://github.com/sangwoomo/instagan
Framework pytorch
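
The context preserving loss mentioned above penalizes any change outside the union of instance masks, so the generator only has license to edit the target instances. A minimal sketch, assuming binary masks and an L1 penalty on the shared background:

```python
import torch

def context_preserving_loss(x, y, masks_x, masks_y):
    """Penalize changes in the background shared by source and translated image.
    x, y: (N, 3, H, W) source and translated images;
    masks_x, masks_y: (N, K, H, W) binary instance masks (sketch)."""
    bg = (1 - masks_x.amax(dim=1, keepdim=True)) * (1 - masks_y.amax(dim=1, keepdim=True))
    return (bg * (x - y).abs()).mean()
```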

Generalized Tuning of Distributional Word Vectors for Monolingual and Cross-Lingual Lexical Entailment

Title Generalized Tuning of Distributional Word Vectors for Monolingual and Cross-Lingual Lexical Entailment
Authors Goran Glavaš, Ivan Vulić
Abstract Lexical entailment (LE; also known as hyponymy-hypernymy or the is-a relation) is a core asymmetric lexical relation that supports tasks like taxonomy induction and text generation. In this work, we propose a simple and effective method for fine-tuning distributional word vectors for LE. Our Generalized Lexical ENtailment model (GLEN) is decoupled from the word embedding model and applicable to any distributional vector space. Yet, unlike existing retrofitting models, it captures a general specialization function allowing for LE-tuning of the entire distributional space and not only the vectors of words seen in lexical constraints. Coupled with a multilingual embedding space, GLEN seamlessly enables cross-lingual LE detection. We demonstrate the effectiveness of GLEN in graded LE and report large improvements (over 20% in accuracy) over the state of the art in cross-lingual LE detection.
Tasks Text Generation
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1476/
PDF https://www.aclweb.org/anthology/P19-1476
PWC https://paperswithcode.com/paper/generalized-tuning-of-distributional-word
Repo https://github.com/codogogo/glen
Framework tf
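
The decisive design choice in GLEN is that the specialization function is a standalone network trained on the lexical constraints and then applied to every vector in the (possibly multilingual) space, rather than retrofitting only the constrained vectors. The sketch below captures that structure with a hypothetical two-layer specializer and a margin objective on (hyponym, hypernym) pairs; the actual GLEN objective and its asymmetric scoring differ.

```python
import torch
import torch.nn as nn

class Specializer(nn.Module):
    """Global specialization function f, applied to *all* word vectors,
    not just those seen in the lexical constraints (sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

def le_margin_loss(f, hypo, hyper, negative, margin=1.0):
    """Pull specialized hyponym vectors toward their hypernyms and away from
    negatives (illustrative objective, not the paper's)."""
    d_pos = (f(hypo) - f(hyper)).norm(dim=1)
    d_neg = (f(hypo) - f(negative)).norm(dim=1)
    return torch.relu(margin + d_pos - d_neg).mean()

# After training, the whole embedding matrix can be specialized in one pass:
# specialized = f(torch.as_tensor(embedding_matrix, dtype=torch.float32))
```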

Connective Cognition Network for Directional Visual Commonsense Reasoning

Title Connective Cognition Network for Directional Visual Commonsense Reasoning
Authors Aming Wu, Linchao Zhu, Yahong Han, Yi Yang
Abstract Visual commonsense reasoning (VCR) has been introduced to boost research on cognition-level visual understanding, i.e., a thorough understanding of correlated details of the scene plus an inference with related commonsense knowledge. Recent studies in neuroscience have suggested that brain function or cognition can be described as a global and dynamic integration of local neuronal connectivity, which is context-sensitive to specific cognition tasks. Inspired by this idea, towards VCR, we propose a connective cognition network (CCN) to dynamically reorganize the visual neuron connectivity that is contextualized by the meaning of questions and answers. Concretely, we first develop visual neuron connectivity to fully model correlations of visual content. Then, a contextualization process is introduced to fuse the sentence representation with that of visual neurons. Finally, based on the output of contextualized connectivity, we propose directional connectivity to infer answers or rationales. Experimental results on the VCR dataset demonstrate the effectiveness of our method. Particularly, in $Q \to AR$ mode, our method is around 4% higher than the state-of-the-art method.
Tasks Visual Commonsense Reasoning
Published 2019-12-01
URL http://papers.nips.cc/paper/8804-connective-cognition-network-for-directional-visual-commonsense-reasoning
PDF http://papers.nips.cc/paper/8804-connective-cognition-network-for-directional-visual-commonsense-reasoning.pdf
PWC https://paperswithcode.com/paper/connective-cognition-network-for-directional
Repo https://github.com/AmingWu/CCN
Framework pytorch

SWOW-8500: Word Association task for Intrinsic Evaluation of Word Embeddings

Title SWOW-8500: Word Association task for Intrinsic Evaluation of Word Embeddings
Authors Avijit Thawani, Biplav Srivastava, Anil Singh
Abstract Downstream evaluation of pretrained word embeddings is expensive, more so for tasks where current state-of-the-art models are very large architectures. Intrinsic evaluation using word similarity or analogy datasets, on the other hand, suffers from several disadvantages. We propose a novel intrinsic evaluation task employing large word association datasets (particularly the Small World of Words dataset). We observe correlations not just between performance on SWOW-8500 and previously proposed intrinsic tasks of word similarity prediction, but also with downstream tasks (e.g., Text Classification and Natural Language Inference). Most importantly, we report better confidence intervals for scores on our word association task, with no drop in correlation with downstream performance.
Tasks Natural Language Inference, Text Classification, Word Embeddings
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2006/
PDF https://www.aclweb.org/anthology/W19-2006
PWC https://paperswithcode.com/paper/swow-8500-word-association-task-for-intrinsic
Repo https://github.com/avi-jit/SWOW-eval
Framework none
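
An intrinsic evaluation of this kind reduces to checking how well embedding similarity between a cue and a response tracks human association strength. The helper below is a generic sketch of that protocol (cosine similarity vs. association strength, Spearman correlation); it is not the released SWOW-eval code, and the data layout is an assumption.

```python
import numpy as np
from scipy.stats import spearmanr

def word_association_score(embeddings, pairs):
    """embeddings: dict word -> vector; pairs: list of (cue, response, strength).
    Returns the Spearman correlation between cosine similarity and association
    strength over the pairs covered by the vocabulary (sketch)."""
    sims, strengths = [], []
    for cue, resp, strength in pairs:
        if cue in embeddings and resp in embeddings:
            u, v = embeddings[cue], embeddings[resp]
            sims.append(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
            strengths.append(strength)
    return spearmanr(sims, strengths).correlation
```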

Self-Supervised Representation Learning From Videos for Facial Action Unit Detection

Title Self-Supervised Representation Learning From Videos for Facial Action Unit Detection
Authors Yong Li, Jiabei Zeng, Shiguang Shan, Xilin Chen
Abstract In this paper, we aim to learn discriminative representations for facial action unit (AU) detection from a large amount of videos without manual annotations. Inspired by the fact that facial actions are the movements of facial muscles, we depict the movements as the transformation between two face images in different frames and use it as the self-supervisory signal to learn the representations. However, under the uncontrolled condition, the transformation is caused by both facial actions and head motions. To remove the influence of head motions, we propose a Twin-Cycle Autoencoder (TCAE) that can disentangle the facial action related movements and the head motion related ones. Specifically, TCAE is trained to respectively change the facial actions and head poses of the source face to those of the target face. Our experiments validate TCAE’s capability of decoupling the movements. Experimental results also demonstrate that the learned representation is discriminative for AU detection, where TCAE outperforms or is comparable with the state-of-the-art self-supervised learning methods and supervised AU detection methods.
Tasks Action Unit Detection, Facial Action Unit Detection, Representation Learning
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Li_Self-Supervised_Representation_Learning_From_Videos_for_Facial_Action_Unit_Detection_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Self-Supervised_Representation_Learning_From_Videos_for_Facial_Action_Unit_Detection_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/self-supervised-representation-learning-from
Repo https://github.com/mysee1989/TCAE
Framework pytorch
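
The disentangling idea can be pictured as an encoder that splits the source-to-target change into two codes, one for facial actions and one for head pose, each driving its own decoder. The skeleton below is a highly simplified, hypothetical rendering of that structure; the real TCAE works on images with convolutional encoders, displacement fields, and twin cycle-consistency objectives, none of which are shown here.

```python
import torch
import torch.nn as nn

class DisentangledMotionEncoder(nn.Module):
    """Encode a (source, target) face feature pair into two movement codes:
    one for facial actions (AUs), one for head pose (sketch)."""
    def __init__(self, feat_dim=256, code_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim * 2, 256), nn.ReLU())
        self.to_action = nn.Linear(256, code_dim)   # AU-related movement code
        self.to_pose = nn.Linear(256, code_dim)     # head-motion-related movement code

    def forward(self, src_feat, tgt_feat):
        h = self.backbone(torch.cat([src_feat, tgt_feat], dim=1))
        return self.to_action(h), self.to_pose(h)

# Each code then drives its own decoder, so the source face can be changed to the
# target's expression and to the target's pose independently (cycle losses omitted).
```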

TeraVR empowers precise reconstruction of complete 3-D neuronal morphology in the whole brain

Title TeraVR empowers precise reconstruction of complete 3-D neuronal morphology in the whole brain
Authors Yimin Wang, Qi Li, Lijuan Liu, Zhi Zhou, Zongcai Ruan, Lingsheng Kong, Yaoyao Li, Yun Wang, Ning Zhong, Renjie Chai, Xiangfeng Luo, Yike Guo, Michael Hawrylycz, Qingming Luo, Zhongze Gu, Wei Xie, Hongkui Zeng, Hanchuan Peng
Abstract Neuron morphology is recognized as a key determinant of cell type, yet the quantitative profiling of a mammalian neuron’s complete three-dimensional (3-D) morphology remains arduous when the neuron has complex arborization and long projection. Whole-brain reconstruction of neuron morphology is even more challenging as it involves processing tens of teravoxels of imaging data. Validating such reconstructions is extremely laborious. We develop TeraVR, an open-source virtual reality annotation system, to address these challenges. TeraVR integrates immersive and collaborative 3-D visualization, interaction, and hierarchical streaming of teravoxel-scale images. Using TeraVR, we have produced precise 3-D full morphology of long-projecting neurons in whole mouse brains and developed a collaborative workflow for highly accurate neuronal reconstruction.
Tasks Electron Microscopy Image Segmentation, Image Reconstruction
Published 2019-08-02
URL https://doi.org/10.1038/s41467-019-11443-y
PDF https://www.nature.com/articles/s41467-019-11443-y.pdf
PWC https://paperswithcode.com/paper/teravr-empowers-precise-reconstruction-of
Repo https://github.com/Vaa3D/release
Framework none

Balancing Efficiency and Fairness in On-Demand Ridesourcing

Title Balancing Efficiency and Fairness in On-Demand Ridesourcing
Authors Nixie S. Lesmana, Xuan Zhang, Xiaohui Bei
Abstract We investigate the problem of assigning trip requests to available vehicles in on-demand ridesourcing. Much of the literature has focused on maximizing the total value of served requests, achieving efficiency on the passengers’ side. However, such solutions may result in some drivers being assigned to insufficient or undesired trips, therefore losing fairness from the drivers’ perspective. In this paper, we focus on both the system efficiency and the fairness among drivers and quantitatively analyze the trade-offs between these two objectives. In particular, we give an explicit answer to the question of whether there always exists an assignment that achieves any target efficiency and fairness. We also propose a simple reassignment algorithm that can achieve any selected trade-off. Finally, we demonstrate the effectiveness of the algorithms through extensive experiments on real-world datasets.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/8772-balancing-efficiency-and-fairness-in-on-demand-ridesourcing
PDF http://papers.nips.cc/paper/8772-balancing-efficiency-and-fairness-in-on-demand-ridesourcing.pdf
PWC https://paperswithcode.com/paper/balancing-efficiency-and-fairness-in-on
Repo https://github.com/zxok365/On-Demand-Ridesourcing-Project
Framework none
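
The efficiency/fairness tension above is easiest to see on a concrete assignment: efficiency is the total value of served requests, while one natural notion of fairness is the value received by the worst-off driver. The sketch below computes both for the efficiency-optimal matching, using the Hungarian algorithm as a stand-in; the paper's reassignment algorithm and its exact fairness objective are not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def efficiency_and_fairness(value):
    """value: (drivers, requests) matrix of trip values per driver-request pair.
    Returns (total value, min per-driver value) of the efficiency-optimal matching."""
    rows, cols = linear_sum_assignment(value, maximize=True)
    per_driver = value[rows, cols]
    return per_driver.sum(), per_driver.min()

# value = np.array([[5.0, 1.0], [4.0, 0.5]])
# print(efficiency_and_fairness(value))  # a high total can still leave a driver with little
```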

A Conditional Generative Adversarial Network for Rendering Point Clouds

Title A Conditional Generative Adversarial Network for Rendering Point Clouds
Authors Rowel Atienza
Abstract In computer graphics, point clouds from laser scanning devices are difficult to render into photo-realistic images due to the lack of information they carry about color, normals, lighting, and connections between points. Rendering a point cloud after surface mesh reconstruction generally results in poor image quality with many noticeable artifacts. In this paper, we propose a conditional generative adversarial network that directly renders a point cloud given the azimuth and elevation angles of the camera viewpoint. The proposed method, called pc2pix, renders point clouds into objects with higher class similarity to the ground truth compared to images from surface reconstruction. pc2pix is also significantly faster, more robust to noise, and can operate on fewer points. The code is available at: https://github.com/roatienza/pc2pix.
Tasks
Published 2019-06-17
URL http://openaccess.thecvf.com/content_CVPRW_2019/papers/3D-WidDGET/Atienza_A_Conditional_Generative_Adversarial_Network_for_Rendering_Point_Clouds_CVPRW_2019_paper.pdf
PDF http://openaccess.thecvf.com/content_CVPRW_2019/papers/3D-WidDGET/Atienza_A_Conditional_Generative_Adversarial_Network_for_Rendering_Point_Clouds_CVPRW_2019_paper.pdf
PWC https://paperswithcode.com/paper/a-conditional-generative-adversarial-network
Repo https://github.com/roatienza/pc2pix
Framework tf
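
The conditioning in pc2pix is on both the object (a point-cloud encoding) and the camera viewpoint (azimuth and elevation). The snippet below only sketches such a view-conditioned generator interface; the architecture is a placeholder of ours, not the pc2pix network from the linked repository.

```python
import torch
import torch.nn as nn

class ViewConditionedGenerator(nn.Module):
    """Maps a point-cloud code plus (azimuth, elevation) to an image (sketch)."""
    def __init__(self, code_dim=512, img_size=32):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(code_dim + 2, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * img_size * img_size), nn.Tanh(),
        )

    def forward(self, pc_code, azimuth, elevation):
        view = torch.stack([azimuth, elevation], dim=1)   # (N, 2) camera viewpoint
        x = torch.cat([pc_code, view], dim=1)             # condition on object + view
        return self.net(x).view(-1, 3, self.img_size, self.img_size)
```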

Self-Supervised Representation Learning by Rotation Feature Decoupling

Title Self-Supervised Representation Learning by Rotation Feature Decoupling
Authors Zeyu Feng, Chang Xu, Dacheng Tao
Abstract We introduce a self-supervised learning method that focuses on beneficial properties of representations and their ability to generalize to real-world tasks. The method incorporates rotation invariance, one of many well-studied properties of visual representations, into the feature learning framework; this property is rarely appreciated or exploited by previous deep convolutional neural network based self-supervised representation learning methods. Specifically, our model learns a split representation that contains both rotation related and unrelated parts. We train neural networks by jointly predicting image rotations and discriminating individual instances. In particular, our model decouples rotation discrimination from instance discrimination, which allows us to improve rotation prediction by mitigating the influence of rotation label noise, as well as to discriminate instances without regard to image rotations. The resulting features generalize better to a wider variety of tasks. Experimental results show that our model outperforms current state-of-the-art methods on standard self-supervised feature learning benchmarks.
Tasks Representation Learning, Unsupervised Representation Learning
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Feng_Self-Supervised_Representation_Learning_by_Rotation_Feature_Decoupling_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Feng_Self-Supervised_Representation_Learning_by_Rotation_Feature_Decoupling_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/self-supervised-representation-learning-by
Repo https://github.com/philiptheother/FeatureDecoupling
Framework pytorch
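
The decoupling described above can be written as a loss over a split feature vector: one half is supervised by rotation prediction, the other by instance discrimination. The sketch below shows only that split and the two loss heads under our simplifying assumptions; the per-image noise weighting and the instance-discrimination memory bank of the actual method are omitted.

```python
import torch
import torch.nn.functional as F

def decoupled_loss(features, rot_labels, instance_ids, rot_head, inst_head):
    """features: (N, 2d) backbone output, split into a rotation-related half and a
    rotation-unrelated half. rot_labels: (N,) in {0,1,2,3} (which rotation was applied);
    instance_ids: (N,) pseudo-labels, one class per training image (sketch)."""
    d = features.shape[1] // 2
    f_rot, f_inst = features[:, :d], features[:, d:]
    loss_rot = F.cross_entropy(rot_head(f_rot), rot_labels)       # rotation prediction head
    loss_inst = F.cross_entropy(inst_head(f_inst), instance_ids)  # instance discrimination head
    return loss_rot + loss_inst
```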