February 2, 2020

Paper Group AWR 14

Direction Concentration Learning: Enhancing Congruency in Machine Learning. Semantically Conditioned Dialog Response Generation via Hierarchical Disentangled Self-Attention. Lightweight and Robust Representation of Economic Scales from Satellite Imagery. Mining Unfollow Behavior in Large-Scale Online Social Networks via Spatial-Temporal Interaction …

Direction Concentration Learning: Enhancing Congruency in Machine Learning

Title Direction Concentration Learning: Enhancing Congruency in Machine Learning
Authors Yan Luo, Yongkang Wong, Mohan S. Kankanhalli, Qi Zhao
Abstract One of the well-known challenges in computer vision tasks is the visual diversity of images, which could result in an agreement or disagreement between the learned knowledge and the visual content exhibited by the current observation. In this work, we first define such an agreement in a concept learning process as congruency. Formally, given a particular task and a sufficiently large dataset, the congruency issue arises in the learning process when the task-specific semantics in the training data vary widely. We propose a Direction Concentration Learning (DCL) method to improve congruency in the learning process, where enhancing congruency makes the convergence path less circuitous. The experimental results show that the proposed DCL method generalizes to state-of-the-art models and optimizers, and improves performance on saliency prediction, continual learning, and classification tasks. Moreover, it helps mitigate the catastrophic forgetting problem in the continual learning task. The code is publicly available at https://github.com/luoyan407/congruency.
Tasks Continual Learning, Saliency Prediction
Published 2019-12-17
URL https://arxiv.org/abs/1912.08136v2
PDF https://arxiv.org/pdf/1912.08136v2.pdf
PWC https://paperswithcode.com/paper/direction-concentration-learning-enhancing
Repo https://github.com/luoyan407/congruency
Framework pytorch
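
For intuition, here is a minimal sketch of one way to encourage directional agreement between the current mini-batch gradient and an accumulated reference direction. The half-space projection rule, the running-average reference, and the helper names are illustrative assumptions, not the paper's exact DCL update (see the linked repo for that).

```python
import torch

def project_for_agreement(grad, ref):
    # Illustrative rule: if the current gradient disagrees with the reference
    # direction (negative inner product), project it onto the agreeing half-space.
    dot = torch.dot(grad, ref)
    if dot < 0:
        grad = grad - dot / (ref.norm() ** 2 + 1e-12) * ref
    return grad

def train_step(model, loss_fn, batch, optimizer, ref_grad, momentum=0.9):
    # ref_grad: running reference direction, a flat tensor with one entry per parameter
    x, y = batch
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    flat = torch.cat([p.grad.reshape(-1) for p in model.parameters() if p.grad is not None])
    flat = project_for_agreement(flat, ref_grad)
    i = 0
    for p in model.parameters():                      # write the adjusted gradient back
        if p.grad is None:
            continue
        n = p.grad.numel()
        p.grad.copy_(flat[i:i + n].view_as(p.grad))
        i += n
    optimizer.step()
    ref_grad.mul_(momentum).add_(flat, alpha=1 - momentum)  # update the reference direction
    return loss.item()
```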

Semantically Conditioned Dialog Response Generation via Hierarchical Disentangled Self-Attention

Title Semantically Conditioned Dialog Response Generation via Hierarchical Disentangled Self-Attention
Authors Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, William Yang Wang
Abstract Semantically controlled neural response generation in limited domains has achieved strong performance. However, moving to large-scale multi-domain scenarios is difficult because the possible combinations of semantic inputs grow exponentially with the number of domains. To alleviate this scalability issue, we exploit the structure of dialog acts to build a multi-layer hierarchical graph, where each act is represented as a root-to-leaf route on the graph. Then, we incorporate this graph structure as an inductive bias to build a hierarchical disentangled self-attention network, where we disentangle attention heads to model designated nodes on the dialog act graph. By activating different (disentangled) heads at each layer, combinatorially many dialog act semantics can be modeled to control the neural response generation. On the large-scale Multi-Domain-WOZ dataset, our model yields a significant improvement over the baselines on various automatic and human evaluation metrics.
Tasks
Published 2019-05-30
URL https://arxiv.org/abs/1905.12866v3
PDF https://arxiv.org/pdf/1905.12866v3.pdf
PWC https://paperswithcode.com/paper/semantically-conditioned-dialog-response
Repo https://github.com/wenhuchen/HDSA-Dialog
Framework pytorch
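
The disentangling idea can be sketched as attention heads that are each bound to one node of the dialog-act graph and switched on only when that node is active for the current turn. The layer below is a rough illustration under that assumption; the shapes, gating scheme, and class name are made up and do not reproduce the released HDSA architecture.

```python
import torch
import torch.nn as nn

class ActGatedSelfAttention(nn.Module):
    """Self-attention where each head is bound to one dialog-act node and is
    zeroed out when that node is inactive (illustrative sketch only)."""
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, act_mask):
        # x: (B, S, d_model); act_mask: (B, n_heads) in {0, 1}, one flag per head/node
        B, S, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        def split(t):  # (B, S, d_model) -> (B, h, S, d)
            return t.view(B, S, self.h, self.d).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        att = torch.softmax(q @ k.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        heads = att @ v                                 # (B, h, S, d)
        heads = heads * act_mask[:, :, None, None]      # gate heads per example
        merged = heads.transpose(1, 2).reshape(B, S, -1)
        return self.out(merged)
```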

Lightweight and Robust Representation of Economic Scales from Satellite Imagery

Title Lightweight and Robust Representation of Economic Scales from Satellite Imagery
Authors Sungwon Han, Donghyun Ahn, Hyunji Cha, Jeasurk Yang, Sungwon Park, Meeyoung Cha
Abstract Satellite imagery has long been an attractive data source that provides a wealth of information on human-inhabited areas. While super-resolution satellite images are rapidly becoming available, few studies have focused on how to extract meaningful information about human habitation patterns and economic scales from such data. We present READ, a new approach for obtaining essential spatial representations for any given district from high-resolution satellite imagery based on deep neural networks. Our method combines transfer learning and embedded statistics to efficiently learn critical spatial characteristics of arbitrarily sized areas and represent them as a fixed-length vector with minimal information loss. Even with a small set of labels, READ can distinguish subtle differences between rural and urban areas and infer the degree of urbanization. An extensive evaluation demonstrates that the model outperforms the state-of-the-art in predicting economic scales, such as population density for South Korea (R^2=0.9617), and shows high potential for use in developing countries where district-level economic scales are not known.
Tasks Super-Resolution, Transfer Learning
Published 2019-12-18
URL https://arxiv.org/abs/1912.08197v1
PDF https://arxiv.org/pdf/1912.08197v1.pdf
PWC https://paperswithcode.com/paper/lightweight-and-robust-representation-of
Repo https://github.com/Sungwon-Han/READ
Framework pytorch
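
A minimal sketch of the transfer-learning-plus-statistics recipe described in the abstract: a pretrained CNN embeds a variable number of satellite tiles for a district, and summary statistics over those embeddings give a fixed-length district vector. The choice of ResNet-18 and of mean/std/max statistics is an assumption for illustration, not necessarily READ's exact configuration.

```python
import torch
import torchvision.models as models

# Pretrained CNN used as a frozen feature extractor (transfer learning).
backbone = models.resnet18(pretrained=True)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def district_vector(tiles):
    """tiles: (n_tiles, 3, 224, 224) tensor of satellite patches covering one
    district (n_tiles varies per district). Returns a fixed-length summary."""
    feats = backbone(tiles)                                     # (n_tiles, 512)
    stats = torch.cat([feats.mean(0), feats.std(0), feats.max(0).values])
    return stats                                                # fixed length regardless of n_tiles
```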

Mining Unfollow Behavior in Large-Scale Online Social Networks via Spatial-Temporal Interaction

Title Mining Unfollow Behavior in Large-Scale Online Social Networks via Spatial-Temporal Interaction
Authors Haozhe Wu, Zhiyuan Hu, Jia Jia, Yaohua Bu, Xiangnan He, Tat-Seng Chua
Abstract Online Social Networks (OSNs) evolve through two pervasive behaviors: follow and unfollow, which respectively signify relationship creation and relationship dissolution. Research on social network evolution has mainly focused on the follow behavior, while the unfollow behavior has largely been ignored. Mining unfollow behavior is challenging because a user's decision to unfollow is affected not only by a simple combination of user attributes such as informativeness and reciprocity, but also by the complex interactions among them. Meanwhile, prior datasets seldom contain sufficient records for inferring such complex interactions. To address these issues, we first construct a large-scale real-world Weibo dataset, which records detailed post content and relationship dynamics of 1.8 million Chinese users. Next, we group user attributes into two categories: spatial attributes (e.g., a user's social role) and temporal attributes (e.g., a user's post content). Leveraging the constructed dataset, we systematically study how the interaction effects between users' spatial and temporal attributes contribute to the unfollow behavior. Afterwards, we propose a novel unified model with heterogeneous information (UMHI) for unfollow prediction. Specifically, our UMHI model: 1) captures users' spatial attributes through the social network structure; 2) infers users' temporal attributes through user-posted content and unfollow history; and 3) models the interaction between spatial and temporal attributes with nonlinear MLP layers. Comprehensive evaluations on the constructed dataset demonstrate that the proposed UMHI model outperforms baseline methods by 16.44% on average in terms of precision. In addition, factor analyses verify that both spatial attributes and temporal attributes are essential for mining unfollow behavior.
Tasks
Published 2019-11-17
URL https://arxiv.org/abs/1911.07156v1
PDF https://arxiv.org/pdf/1911.07156v1.pdf
PWC https://paperswithcode.com/paper/mining-unfollow-behavior-in-large-scale
Repo https://github.com/wuhaozhe/Unfollow-Prediction
Framework none
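
A toy version of the interaction modeling in step 3): the spatial and temporal attribute embeddings of a (follower, followee) pair are concatenated and passed through nonlinear MLP layers to predict the unfollow probability. All dimensions and names are illustrative; the full UMHI model also learns these embeddings from the network structure, post content, and unfollow history.

```python
import torch
import torch.nn as nn

class UnfollowMLP(nn.Module):
    """Illustrative interaction model: nonlinear MLP over concatenated spatial
    and temporal embeddings of a (follower, followee) pair."""
    def __init__(self, d_spatial=64, d_temporal=64, d_hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_spatial + d_temporal, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, spatial_emb, temporal_emb):
        x = torch.cat([spatial_emb, temporal_emb], dim=-1)
        return torch.sigmoid(self.net(x)).squeeze(-1)   # P(unfollow)
```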

Automated Essay Scoring based on Two-Stage Learning

Title Automated Essay Scoring based on Two-Stage Learning
Authors Jiawei Liu, Yang Xu, Yaguang Zhu
Abstract Current state-of-the-art feature-engineered and end-to-end Automated Essay Scoring (AES) methods have been shown to be unable to detect adversarial samples, e.g. essays composed of permuted sentences and prompt-irrelevant essays. To address this problem, we develop a Two-Stage Learning Framework (TSLF) which integrates the advantages of both feature-engineered and end-to-end AES models. In experiments, we compare TSLF against a number of strong baselines, and the results demonstrate the effectiveness and robustness of our models. TSLF surpasses all the baselines on five-eighths of the prompts and achieves new state-of-the-art average performance in the setting without adversarial samples. After adding some adversarial essays to the original datasets, TSLF outperforms the feature-engineered and end-to-end baselines by a large margin, demonstrating its robustness.
Tasks
Published 2019-01-23
URL https://arxiv.org/abs/1901.07744v2
PDF https://arxiv.org/pdf/1901.07744v2.pdf
PWC https://paperswithcode.com/paper/automated-essay-scoring-based-on-two-stage
Repo https://github.com/ustcljw/fupugec-score
Framework tf
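
One plausible way to combine the two stages, sketched under assumptions: a first-stage end-to-end model produces an essay score that is stacked with handcrafted features and fed to a second-stage regressor. `neural_score` and `handcrafted_features` are hypothetical helpers, and the choice of gradient boosting for stage two is illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def two_stage_score(train_essays, train_scores, test_essays,
                    neural_score, handcrafted_features):
    """Illustrative two-stage combination; both helper functions are assumed."""
    def featurize(essays):
        return np.column_stack([
            np.array([neural_score(e) for e in essays]),          # stage-1 model output
            np.array([handcrafted_features(e) for e in essays]),  # e.g. length, grammar stats
        ])
    stage2 = GradientBoostingRegressor()
    stage2.fit(featurize(train_essays), train_scores)
    return stage2.predict(featurize(test_essays))
```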

Relational Collaborative Filtering: Modeling Multiple Item Relations for Recommendation

Title Relational Collaborative Filtering: Modeling Multiple Item Relations for Recommendation
Authors Xin Xin, Xiangnan He, Yongfeng Zhang, Yongdong Zhang, Joemon Jose
Abstract Existing item-based collaborative filtering (ICF) methods leverage only the relation of collaborative similarity. Nevertheless, there exist multiple relations between items in real-world scenarios. Distinct from the collaborative similarity that implies co-interact patterns from the user perspective, these relations reveal fine-grained knowledge on items from different perspectives of meta-data, functionality, etc. However, how to incorporate multiple item relations is less explored in recommendation research. In this work, we propose Relational Collaborative Filtering (RCF), a general framework to exploit multiple relations between items in recommender systems. We find that both the relation type and the relation value are crucial in inferring user preference. To this end, we develop a two-level hierarchical attention mechanism to model user preference. The first-level attention discriminates which types of relations are more important, and the second-level attention considers the specific relation values to estimate the contribution of a historical item in recommending the target item. To make the item embeddings reflective of the relational structure between items, we further formulate a task to preserve the item relations and jointly train it with the recommendation task of preference modeling. Empirical results on two real datasets demonstrate the strong performance of RCF. Furthermore, we conduct qualitative analyses to show the benefits of the explanations brought by modeling multiple item relations.
Tasks Recommendation Systems
Published 2019-04-29
URL https://arxiv.org/abs/1904.12796v3
PDF https://arxiv.org/pdf/1904.12796v3.pdf
PWC https://paperswithcode.com/paper/relational-collaborative-filteringmodeling
Repo https://github.com/XinGla/RCF
Framework tf
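
A simplified sketch of the two-level hierarchical attention: the first level weights relation types, and the second level weights the historical items connected to the target by each type according to their specific relation values. Embedding sizes, scoring functions, and the class name are illustrative assumptions rather than RCF's exact formulation.

```python
import torch
import torch.nn as nn

class TwoLevelRelationAttention(nn.Module):
    """Illustrative two-level attention over relation types and relation values."""
    def __init__(self, n_types, d=64):
        super().__init__()
        self.type_emb = nn.Embedding(n_types, d)
        self.type_query = nn.Linear(d, d)    # target item -> relation-type query
        self.value_query = nn.Linear(d, d)   # target item -> relation-value query

    def forward(self, target_emb, hist_embs, rel_types, rel_value_embs):
        # target_emb: (d,); hist_embs: (n, d); rel_types: (n,) long; rel_value_embs: (n, d)
        types = rel_types.unique()
        # level 1: attention over relation types
        t_scores = (self.type_query(target_emb) * self.type_emb(types)).sum(-1)
        t_alpha = torch.softmax(t_scores, dim=0)
        profile = torch.zeros_like(target_emb)
        for ta, t in zip(t_alpha, types):
            idx = (rel_types == t).nonzero(as_tuple=True)[0]
            # level 2: attention over items linked to the target by this relation type
            v_scores = (self.value_query(target_emb) * rel_value_embs[idx]).sum(-1)
            v_alpha = torch.softmax(v_scores, dim=0)
            profile = profile + ta * (v_alpha[:, None] * hist_embs[idx]).sum(0)
        return torch.dot(profile, target_emb)   # preference score for the target item
```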

Learning to play the Chess Variant Crazyhouse above World Champion Level with Deep Neural Networks and Human Data

Title Learning to play the Chess Variant Crazyhouse above World Champion Level with Deep Neural Networks and Human Data
Authors Johannes Czech, Moritz Willig, Alena Beyer, Kristian Kersting, Johannes Fürnkranz
Abstract Deep neural networks have been successfully applied in learning the board games Go, chess and shogi without prior knowledge by making use of reinforcement learning. Although starting from zero knowledge has been shown to yield impressive results, it is associated with high computational costs, especially for complex games. With this paper, we present CrazyAra, a neural-network-based engine trained solely in a supervised manner for the chess variant crazyhouse. Crazyhouse has a higher branching factor than chess, and only limited data of lower quality is available compared to AlphaGo. Therefore, we focus on improving efficiency in multiple aspects while relying on low computational resources. These improvements include modifications in the neural network design and training configuration, the introduction of a data normalization step, and a more sample-efficient Monte-Carlo tree search which has a lower chance of blundering. After training on 569,537 human games for 1.5 days we achieve a move prediction accuracy of 60.4%. During development, versions of CrazyAra played professional human players. Most notably, CrazyAra achieved a four-to-one win over the 2017 crazyhouse world champion Justin Tan (aka LM Jann Lee), who is rated more than 400 Elo higher than the average player in our training set. Furthermore, we test the playing strength of CrazyAra on CPU against all participants of the second Crazyhouse Computer Championships 2017, winning against twelve of the thirteen participants. Finally, for CrazyAraFish we continue training our model on generated engine games. In ten long-time-control matches against Stockfish 10, CrazyAraFish wins three games and draws one.
Tasks Board Games
Published 2019-08-19
URL https://arxiv.org/abs/1908.06660v2
PDF https://arxiv.org/pdf/1908.06660v2.pdf
PWC https://paperswithcode.com/paper/learning-to-play-the-chess-variant-crazyhouse
Repo https://github.com/QueensGambit/CrazyAra-Engine
Framework none
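
For context, the tree search in AlphaZero-style engines such as CrazyAra selects moves with a PUCT rule balancing the network's prior, the visit counts, and the estimated value. The snippet below shows that generic rule only; the constant and the paper's blunder-reducing, sample-efficiency modifications are not reproduced here.

```python
import math

def puct_select(children, c_puct=2.5):
    """Pick the child maximizing Q + U (generic AlphaZero-style PUCT rule).
    `children` is a list of dicts with prior P, visit count N, and mean value Q."""
    total_visits = sum(ch["N"] for ch in children)
    def score(ch):
        u = c_puct * ch["P"] * math.sqrt(total_visits + 1) / (1 + ch["N"])
        return ch["Q"] + u
    return max(children, key=score)
```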

Domain Transfer in Dialogue Systems without Turn-Level Supervision

Title Domain Transfer in Dialogue Systems without Turn-Level Supervision
Authors Joachim Bingel, Victor Petrén Bach Hansen, Ana Valeria Gonzalez, Paweł Budzianowski, Isabelle Augenstein, Anders Søgaard
Abstract Task-oriented dialogue systems rely heavily on specialized dialogue state tracking (DST) modules for dynamically predicting user intent throughout the conversation. State-of-the-art DST models are typically trained in a supervised manner from manual annotations at the turn level. However, these annotations are costly to obtain, which makes it difficult to create accurate dialogue systems for new domains. To address these limitations, we propose a method, based on reinforcement learning, for transferring DST models to new domains without turn-level supervision. Across several domains, our experiments show that this method quickly adapts off-the-shelf models to new domains and performs on par with models trained with turn-level supervision. We also show that our method can improve models trained using turn-level supervision by subsequently fine-tuning them toward dialog-level rewards.
Tasks Dialogue State Tracking, Task-Oriented Dialogue Systems
Published 2019-09-16
URL https://arxiv.org/abs/1909.07101v1
PDF https://arxiv.org/pdf/1909.07101v1.pdf
PWC https://paperswithcode.com/paper/domain-transfer-in-dialogue-systems-without
Repo https://github.com/coastalcph/dialog-rl
Framework none
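
A rough sketch of adapting a dialogue model with only a dialogue-level reward via REINFORCE, as opposed to turn-level supervision. The `policy`, `dialogue.turns`, and `dialogue.final_reward` interfaces are hypothetical stand-ins; the paper's actual reward design and training setup are not shown.

```python
import torch

def reinforce_finetune_step(policy, dialogue, optimizer, baseline=0.0):
    """One policy-gradient step driven only by a dialogue-level reward
    (illustrative interfaces; no turn-level labels are used)."""
    log_probs, states = [], []
    for turn in dialogue.turns:
        dist = policy(turn, states)          # distribution over state updates / actions
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        states.append(action)
    reward = dialogue.final_reward(states)   # single scalar for the whole dialogue
    loss = -(reward - baseline) * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward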

Guiding Extractive Summarization with Question-Answering Rewards

Title Guiding Extractive Summarization with Question-Answering Rewards
Authors Kristjan Arumae, Fei Liu
Abstract Highlighting while reading is a natural behavior for people to track the salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth labels. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper, we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
Tasks Question Answering
Published 2019-04-04
URL http://arxiv.org/abs/1904.02321v1
PDF http://arxiv.org/pdf/1904.02321v1.pdf
PWC https://paperswithcode.com/paper/guiding-extractive-summarization-with
Repo https://github.com/ucfnlp/summ_qa_rewards
Framework none
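
As a toy proxy for the question-answering reward, one could reward a candidate summary for containing the answers to questions derived from the human abstract. The function below is only that proxy; the paper's system evaluates summaries with a learned QA component rather than string matching.

```python
def qa_reward(summary_sentences, qa_pairs):
    """Toy proxy reward: fraction of (question, answer) pairs whose answer
    string appears in the selected summary (illustrative only)."""
    summary_text = " ".join(summary_sentences).lower()
    answered = sum(1 for _, answer in qa_pairs if answer.lower() in summary_text)
    return answered / max(len(qa_pairs), 1)
```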

Model-Based Reinforcement Learning with Adversarial Training for Online Recommendation

Title Model-Based Reinforcement Learning with Adversarial Training for Online Recommendation
Authors Xueying Bai, Jian Guan, Hongning Wang
Abstract Reinforcement learning is well suited for optimizing the policies of recommender systems. Current solutions mostly focus on model-free approaches, which require frequent interactions with the real environment and are thus expensive in model learning. Offline evaluation methods, such as importance sampling, can alleviate such limitations, but usually require a large amount of logged data and do not work well when the action space is large. In this work, we propose a model-based reinforcement learning solution which models user-agent interaction for offline policy learning via a generative adversarial network. To reduce bias in the learned model and policy, we use a discriminator to evaluate the quality of generated data and scale the generated rewards. Our theoretical analysis and empirical evaluations demonstrate the effectiveness of our solution in learning policies from the offline and generated data.
Tasks Recommendation Systems
Published 2019-11-10
URL https://arxiv.org/abs/1911.03845v3
PDF https://arxiv.org/pdf/1911.03845v3.pdf
PWC https://paperswithcode.com/paper/a-model-based-reinforcement-learning-with
Repo https://github.com/JianGuanTHU/RecGAN
Framework tf
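
The reward-scaling idea can be sketched as follows: a discriminator scores how realistic each generated user interaction is, and that score down-weights the corresponding reward before policy learning. The discriminator interface and the sigmoid scaling rule are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def scaled_rewards(discriminator, generated_interactions, raw_rewards):
    """Down-weight rewards from low-quality generated data using a discriminator
    (illustrative interface and scaling rule)."""
    with torch.no_grad():
        quality = torch.sigmoid(discriminator(generated_interactions))  # in (0, 1)
    return raw_rewards * quality   # unrealistic generated interactions contribute less
```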

PointNetLK: Robust & Efficient Point Cloud Registration using PointNet

Title PointNetLK: Robust & Efficient Point Cloud Registration using PointNet
Authors Yasuhiro Aoki, Hunter Goforth, Rangaprasad Arun Srivatsan, Simon Lucey
Abstract PointNet has revolutionized how we think about representing point clouds. For classification and segmentation tasks, the approach and its subsequent extensions are state-of-the-art. To date, the successful application of PointNet to point cloud registration has remained elusive. In this paper we argue that PointNet itself can be thought of as a learnable “imaging” function. As a consequence, classical vision algorithms for image alignment can be applied to the problem, namely the Lucas & Kanade (LK) algorithm. Our central innovations stem from: (i) how to modify the LK algorithm to accommodate the PointNet imaging function, and (ii) unrolling PointNet and the LK algorithm into a single trainable recurrent deep neural network. We describe the architecture and compare its performance against the state-of-the-art in common registration scenarios. The architecture offers some remarkable properties, including generalization across shape categories and computational efficiency, opening up new paths of exploration for the application of deep learning to point cloud registration. Code and videos are available at https://github.com/hmgoforth/PointNetLK.
Tasks Point Cloud Registration
Published 2019-03-13
URL http://arxiv.org/abs/1903.05711v2
PDF http://arxiv.org/pdf/1903.05711v2.pdf
PWC https://paperswithcode.com/paper/pointnetlk-robust-efficient-point-cloud
Repo https://github.com/izhangrui/paper_to_read
Framework none

Deep Closest Point: Learning Representations for Point Cloud Registration

Title Deep Closest Point: Learning Representations for Point Cloud Registration
Authors Yue Wang, Justin M. Solomon
Abstract Point cloud registration is a key problem for computer vision applied to robotics, medical imaging, and other applications. This problem involves finding a rigid transformation from one point cloud to another so that they align. Iterative Closest Point (ICP) and its variants provide simple and easily implemented iterative methods for this task, but these algorithms can converge to spurious local optima. To address local optima and other difficulties in the ICP pipeline, we propose a learning-based method, titled Deep Closest Point (DCP), inspired by recent techniques in computer vision and natural language processing. Our model consists of three parts: a point cloud embedding network; an attention-based module combined with a pointer generation layer to approximate combinatorial matching; and a differentiable singular value decomposition (SVD) layer to extract the final rigid transformation. We train our model end-to-end on the ModelNet40 dataset and show in several settings that it performs better than ICP, its variants (e.g., Go-ICP, FGR), and the recently proposed learning-based method PointNetLK. Beyond providing a state-of-the-art registration technique, we evaluate the suitability of our learned features when transferred to unseen objects. We also provide a preliminary analysis of our learned model to help understand whether domain-specific and/or global features facilitate rigid registration.
Tasks Point Cloud Registration
Published 2019-05-08
URL https://arxiv.org/abs/1905.03304v1
PDF https://arxiv.org/pdf/1905.03304v1.pdf
PWC https://paperswithcode.com/paper/190503304
Repo https://github.com/WangYueFt/dcp
Framework pytorch
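
The final stage of this kind of pipeline is a differentiable Procrustes/SVD step: given soft correspondences from the attention and pointer modules, the rigid rotation and translation can be recovered in closed form. The sketch below shows that generic step with assumed shapes; it is not the authors' exact layer.

```python
import torch

def rigid_transform_from_soft_matches(src, tgt, match_probs):
    """Generic SVD (Procrustes) step: src (N, 3), tgt (M, 3), match_probs (N, M)
    row-stochastic soft correspondences. Returns rotation R and translation t
    such that tgt ~ src @ R.T + t (illustrative sketch)."""
    matched = match_probs @ tgt                    # soft-matched target for each source point
    src_c = src - src.mean(0, keepdim=True)
    mat_c = matched - matched.mean(0, keepdim=True)
    H = src_c.T @ mat_c                            # 3x3 cross-covariance
    U, S, Vt = torch.linalg.svd(H)
    d = torch.sign(torch.det(Vt.T @ U.T))          # guard against reflections
    ones = torch.ones_like(d)
    D = torch.diag(torch.stack([ones, ones, d]))
    R = Vt.T @ D @ U.T
    t = matched.mean(0) - R @ src.mean(0)
    return R, t
```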

Self-Attentive Model for Headline Generation

Title Self-Attentive Model for Headline Generation
Authors Daniil Gavrilov, Pavel Kalaidin, Valentin Malykh
Abstract Headline generation is a special type of text summarization task. While the amount of available training data for this task is almost unlimited, it remains challenging, as generating headlines for news articles requires strong natural language reasoning from the model. To overcome this issue, we applied the recent Universal Transformer architecture paired with the byte-pair encoding technique and achieved new state-of-the-art results on the New York Times Annotated Corpus with a ROUGE-L F1-score of 24.84 and a ROUGE-2 F1-score of 13.48. We also present the new RIA corpus, on which we reach a ROUGE-L F1-score of 36.81 and a ROUGE-2 F1-score of 22.15.
Tasks Text Summarization
Published 2019-01-23
URL http://arxiv.org/abs/1901.07786v1
PDF http://arxiv.org/pdf/1901.07786v1.pdf
PWC https://paperswithcode.com/paper/self-attentive-model-for-headline-generation
Repo https://github.com/RossiyaSegodnya/ria_news_dataset
Framework none
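
The key architectural difference from a vanilla Transformer encoder is that the Universal Transformer applies a single shared layer repeatedly across depth. The minimal sketch below shows only that weight tying; the original architecture also adds per-step position/timestep signals and adaptive computation time, which are omitted here.

```python
import torch.nn as nn

class TinyUniversalEncoder(nn.Module):
    """One shared encoder layer applied `steps` times (weight tying across depth),
    a core idea of the Universal Transformer. Dimensions are illustrative."""
    def __init__(self, d_model=256, nhead=4, steps=6):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.steps = steps

    def forward(self, x):  # x: (batch, seq, d_model)
        for _ in range(self.steps):
            x = self.layer(x)
        return x
```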

SDRSAC: Semidefinite-Based Randomized Approach for Robust Point Cloud Registration without Correspondences

Title SDRSAC: Semidefinite-Based Randomized Approach for Robust Point Cloud Registration without Correspondences
Authors Huu Le, Thanh-Toan Do, Tuan Hoang, Ngai-Man Cheung
Abstract This paper presents a novel randomized algorithm for robust point cloud registration without correspondences. Most existing registration approaches require a set of putative correspondences obtained by extracting invariant descriptors. However, such descriptors can become unreliable in noisy and contaminated settings. In these settings, methods that directly handle input point sets are preferable. Without correspondences, however, conventional randomized techniques require a very large number of samples in order to reach satisfactory solutions. In this paper, we propose a novel approach to address this problem. In particular, our work enables the use of randomized methods for point cloud registration without the need for putative correspondences. By considering point cloud alignment as a special instance of graph matching and employing an efficient semi-definite relaxation, we propose a novel sampling mechanism in which the size of the sampled subsets can be larger than minimal. Our tight relaxation scheme enables fast rejection of outliers in the sampled sets, resulting in high-quality hypotheses. We conduct extensive experiments to demonstrate that our approach outperforms other state-of-the-art methods. Importantly, our proposed method serves as a generic framework which can be extended to problems with known correspondences.
Tasks Graph Matching, Point Cloud Registration
Published 2019-04-06
URL http://arxiv.org/abs/1904.03483v2
PDF http://arxiv.org/pdf/1904.03483v2.pdf
PWC https://paperswithcode.com/paper/sdrsac-semidefinite-based-randomized-approach
Repo https://github.com/intellhave/SDRSAC
Framework none

Towards Efficient and Unbiased Implementation of Lipschitz Continuity in GANs

Title Towards Efficient and Unbiased Implementation of Lipschitz Continuity in GANs
Authors Zhiming Zhou, Jian Shen, Yuxuan Song, Weinan Zhang, Yong Yu
Abstract Lipschitz continuity has recently become popular in generative adversarial networks (GANs). It has been observed that a Lipschitz-regularized discriminator leads to improved training stability and sample quality. The mainstream implementations of Lipschitz continuity include gradient penalty and spectral normalization. In this paper, we demonstrate that gradient penalty introduces undesired bias, while spectral normalization might be overly restrictive. We accordingly propose a new method which is efficient and unbiased. Our experiments verify our analysis and show that the proposed method is able to achieve successful training in various situations where gradient penalty and spectral normalization fail.
Tasks
Published 2019-04-02
URL http://arxiv.org/abs/1904.01184v1
PDF http://arxiv.org/pdf/1904.01184v1.pdf
PWC https://paperswithcode.com/paper/towards-efficient-and-unbiased-implementation
Repo https://github.com/ZhimingZhou/AdaShift-Lipschitz-GANs-MaxGP
Framework tf
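
To make the object of the analysis concrete, here is the standard WGAN-GP-style gradient penalty, one of the two mainstream Lipschitz implementations the paper critiques as biased. This is the baseline penalty only, not the paper's proposed unbiased method.

```python
import torch

def gradient_penalty(discriminator, real, fake, lam=10.0):
    """Standard gradient penalty: push the discriminator's gradient norm toward 1
    at points interpolated between real and fake samples (baseline, not the
    paper's method)."""
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_out = discriminator(interp)
    grads = torch.autograd.grad(outputs=d_out.sum(), inputs=interp,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()
```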