February 1, 2020

3117 words 15 mins read

Paper Group AWR 178


Large-scale Multi-view Subspace Clustering in Linear Time

Title Large-scale Multi-view Subspace Clustering in Linear Time
Authors Zhao Kang, Wangtao Zhou, Zhitong Zhao, Junming Shao, Meng Han, Zenglin Xu
Abstract A plethora of multi-view subspace clustering (MVSC) methods have been proposed over the past few years, and researchers have managed to boost clustering accuracy from different points of view. However, many state-of-the-art MVSC algorithms have quadratic or even cubic complexity, making them inefficient and inherently difficult to apply at large scales. In the era of big data, this computational issue becomes critical. To fill the gap, we propose a large-scale MVSC (LMVSC) algorithm with linear-order complexity. Inspired by the idea of anchor graphs, we first learn a smaller graph for each view. Then, a novel approach is designed to integrate those graphs so that spectral clustering can be performed on a smaller graph. Interestingly, it turns out that our model also applies to the single-view scenario. Extensive experiments on various large-scale benchmark datasets validate the effectiveness and efficiency of our approach with respect to state-of-the-art clustering methods.
Tasks Multi-view Subspace Clustering
Published 2019-11-21
URL https://arxiv.org/abs/1911.09290v1
PDF https://arxiv.org/pdf/1911.09290v1.pdf
PWC https://paperswithcode.com/paper/large-scale-multi-view-subspace-clustering-in
Repo https://github.com/sckangz/LMVSC
Framework none
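
To make the anchor-graph idea concrete, below is a minimal single-view sketch: learn a small n-by-m similarity graph against m << n anchors, then cluster the spectral embedding obtained from that graph's SVD. The Gaussian-similarity construction of Z and the parameter choices are illustrative assumptions rather than the paper's exact optimization; for the multi-view case, the paper integrates one such graph per view before the spectral step.

```python
# Anchor-graph spectral clustering, single-view sketch (assumptions above).
import numpy as np
from sklearn.cluster import KMeans

def anchor_graph_clustering(X, n_clusters, n_anchors=100, gamma=1.0):
    # 1) Pick anchors as k-means centroids (cheap relative to n samples).
    anchors = KMeans(n_clusters=n_anchors, n_init=3).fit(X).cluster_centers_
    # 2) Build an n-by-m anchor graph from Gaussian similarities,
    #    row-normalized so each row is a soft assignment over anchors.
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    Z = np.exp(-gamma * d2)
    Z /= Z.sum(axis=1, keepdims=True)
    # 3) Spectral embedding: left singular vectors of the degree-normalized
    #    anchor graph replace eigenvectors of the full n-by-n Laplacian;
    #    the SVD of an n-by-m matrix costs O(n m^2), i.e. linear in n.
    D = np.diag(1.0 / np.sqrt(Z.sum(axis=0)))
    U, _, _ = np.linalg.svd(Z @ D, full_matrices=False)
    # 4) k-means on the leading spectral coordinates yields the clusters.
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U[:, :n_clusters])
```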

Nested Named Entity Recognition via Second-best Sequence Learning and Decoding

Title Nested Named Entity Recognition via Second-best Sequence Learning and Decoding
Authors Takashi Shibuya, Eduard Hovy
Abstract When an entity name contains other names within it, identifying all combinations of names can become difficult and expensive. We propose a new method to recognize not only outermost named entities but also inner nested ones. We design an objective function for training a neural model that treats the tag sequence for nested entities as the second-best path within the span of their parent entity. In addition, we provide a decoding method for inference that extracts entities iteratively from outermost to inner ones in an outside-to-inside way. Our method introduces no hyperparameters beyond those of the conditional random field (CRF)-based model widely used for flat named entity recognition tasks. Experiments demonstrate that our method outperforms existing methods capable of handling nested entities, achieving F1-scores of 84.97%, 83.99%, and 77.19% on the ACE-2004, ACE-2005, and GENIA datasets, respectively.
Tasks Named Entity Recognition, Nested Mention Recognition, Nested Named Entity Recognition
Published 2019-09-05
URL https://arxiv.org/abs/1909.02250v2
PDF https://arxiv.org/pdf/1909.02250v2.pdf
PWC https://paperswithcode.com/paper/nested-named-entity-recognition-via-second
Repo https://github.com/yahshibu/nested-ner-2019-bert
Framework pytorch
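
A toy illustration of the outside-to-inside decoding loop: decode the best tag sequence, then, inside each detected entity span, take the second-best sequence to surface nested entities, and recurse. The real model scores sequences with a CRF and an efficient second-best Viterbi; here brute-force enumeration over a tiny tag set stands in for both, purely to show the control flow.

```python
# Toy outside-to-inside nested decoding; brute force replaces (second-best)
# Viterbi purely for readability. Emissions: list of {tag: score} dicts.
from itertools import product

TAGS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]

def score(seq, emissions, transitions):
    # Linear-chain CRF scoring: emission terms plus first-order transitions.
    return (sum(emissions[i][t] for i, t in enumerate(seq))
            + sum(transitions.get((a, b), 0.0) for a, b in zip(seq, seq[1:])))

def two_best(emissions, transitions):
    ranked = sorted(product(TAGS, repeat=len(emissions)),
                    key=lambda s: score(s, emissions, transitions),
                    reverse=True)
    return ranked[0], ranked[1]

def spans_of(path, offset=0):
    # Extract (start, end, label) spans from a BIO tag sequence.
    spans, start, label = [], None, None
    for i, tag in enumerate(list(path) + ["O"]):
        if start is not None and not tag.startswith("I-"):
            spans.append((start + offset, i + offset, label))
            start = None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
    return spans

def decode_nested(emissions, transitions, lo=0, hi=None, depth=0):
    hi = len(emissions) if hi is None else hi
    best, second = two_best(emissions[lo:hi], transitions)
    path = best if depth == 0 else second      # inside a span: 2nd-best path
    for start, end, label in spans_of(path, offset=lo):
        print("  " * depth + f"{label} [{start}:{end})")
        # Recurse into strictly smaller spans to find deeper nesting
        # (the size check keeps this toy from recursing forever).
        if end - start > 1 and (depth == 0 or end - start < hi - lo):
            decode_nested(emissions, transitions, start, end, depth + 1)
```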

BLVD: Building A Large-scale 5D Semantics Benchmark for Autonomous Driving

Title BLVD: Building A Large-scale 5D Semantics Benchmark for Autonomous Driving
Authors Jianru Xue, Jianwu Fang, Tao Li, Bohua Zhang, Pu Zhang, Zhen Ye, Jian Dou
Abstract In the autonomous driving community, numerous benchmarks have been established to assist tasks such as 3D/2D object detection, stereo vision, and semantic/instance segmentation. However, the more meaningful dynamic evolution of the objects surrounding the ego-vehicle is rarely exploited and lacks a large-scale dataset platform. To address this, we introduce BLVD, a large-scale 5D semantics benchmark that does not concentrate on the static detection or semantic/instance segmentation tasks already tackled adequately. Instead, BLVD aims to provide a platform for dynamic 4D (3D+temporal) tracking, 5D (4D+interactive) event recognition, and intention prediction. This benchmark will enable a deeper understanding of traffic scenes than ever before. In total, we provide 249,129 3D annotations, 4,902 independent individuals for tracking with an overall length of 214,922 points, 6,004 valid fragments for 5D interactive event recognition, and 4,900 individuals for 5D intention prediction. These tasks are set in four kinds of scenarios depending on object density (low and high) and lighting conditions (daytime and nighttime). The benchmark can be downloaded from our project site https://github.com/VCCIV/BLVD/.
Tasks Autonomous Driving, Instance Segmentation, Object Detection, Semantic Segmentation
Published 2019-03-15
URL http://arxiv.org/abs/1903.06405v1
PDF http://arxiv.org/pdf/1903.06405v1.pdf
PWC https://paperswithcode.com/paper/blvd-building-a-large-scale-5d-semantics
Repo https://github.com/VCCIV/BLVD
Framework none

Making AI Forget You: Data Deletion in Machine Learning

Title Making AI Forget You: Data Deletion in Machine Learning
Authors Antonio Ginart, Melody Y. Guan, Gregory Valiant, James Zou
Abstract Intense recent discussions have focused on how to provide individuals with control over when their data can and cannot be used: the EU's Right To Be Forgotten regulation is an example of this effort. In this paper we initiate a framework for studying what to do when it is no longer permissible to deploy models derived from specific user data. In particular, we formulate the problem of efficiently deleting individual data points from trained machine learning models. For many standard ML models, the only way to completely remove an individual's data is to retrain the whole model from scratch on the remaining data, which is often not computationally practical. We investigate algorithmic principles that enable efficient data deletion in ML. For the specific setting of k-means clustering, we propose two provably efficient deletion algorithms which achieve an average of over 100X improvement in deletion efficiency across 6 datasets, while producing clusters of comparable statistical quality to a canonical k-means++ baseline.
Tasks
Published 2019-07-11
URL https://arxiv.org/abs/1907.05012v2
PDF https://arxiv.org/pdf/1907.05012v2.pdf
PWC https://paperswithcode.com/paper/making-ai-forget-you-data-deletion-in-machine
Repo https://github.com/tginart/deletion-efficient-kmeans
Framework none
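
A simplified illustration of the bookkeeping that makes deletion cheap in clustering models: keep per-cluster sufficient statistics (sum and count) so removing one user's point is an O(1) update to a single centroid rather than a full retrain. The paper's provably efficient algorithms (quantized and divide-and-conquer k-means) add machinery for stability guarantees that this sketch omits.

```python
# k-means state with O(1) deletion via per-cluster sufficient statistics.
import numpy as np

class DeletableKMeans:
    def __init__(self, X, assignments, k):
        X = np.asarray(X, dtype=float)
        self.sums = np.zeros((k, X.shape[1]))
        self.counts = np.zeros(k, dtype=int)
        self.points = {}
        for idx, (x, c) in enumerate(zip(X, assignments)):
            self.sums[c] += x
            self.counts[c] += 1
            self.points[idx] = (c, x)

    @property
    def centers(self):
        # Centroids follow directly from the maintained statistics.
        return self.sums / np.maximum(self.counts[:, None], 1)

    def delete(self, idx):
        # Removing one individual's point touches only its own cluster,
        # instead of retraining the whole model from scratch.
        c, x = self.points.pop(idx)
        self.sums[c] -= x
        self.counts[c] -= 1
```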

Viewpoint Optimization for Autonomous Strawberry Harvesting with Deep Reinforcement Learning

Title Viewpoint Optimization for Autonomous Strawberry Harvesting with Deep Reinforcement Learning
Authors Jonathon Sather, Xiaozheng Jane Zhang
Abstract Autonomous harvesting may provide a viable solution to mounting labor pressures in the United States' strawberry industry. However, due to bottlenecks in machine perception and economic viability, a profitable and commercially adopted strawberry harvesting system remains elusive. In this research, we explore the feasibility of using deep reinforcement learning to overcome these bottlenecks and develop a practical algorithm for the sub-objective of viewpoint optimization: learning a control policy that directs a camera to favorable vantage points for autonomous harvesting. We evaluate the algorithm's performance in a custom, open-source simulated environment and observe encouraging results. Our trained agent yields 8.7 times higher returns than random actions and 8.8 percent faster exploration than our best baseline policy, which uses visual servoing. Visual investigation shows the agent is able to fixate on favorable viewpoints, despite having no explicit means to propagate information through time. Overall, we conclude that deep reinforcement learning is a promising area of research for advancing the state of the art in autonomous strawberry harvesting.
Tasks
Published 2019-03-05
URL http://arxiv.org/abs/1903.02074v2
PDF http://arxiv.org/pdf/1903.02074v2.pdf
PWC https://paperswithcode.com/paper/viewpoint-optimization-for-autonomous
Repo https://github.com/jsather/harvester-sim
Framework none
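
For readers unfamiliar with the policy-gradient machinery involved, here is a toy REINFORCE loop for the viewpoint-optimization setting: an agent nudges a camera angle toward an unknown favorable viewpoint and is rewarded by proximity. The one-dimensional environment, reward, and all constants are invented stand-ins; the paper trains a deep RL agent against camera images in its open-source harvester simulator.

```python
# Toy 1-D viewpoint task solved with REINFORCE (all specifics are invented).
import numpy as np

rng = np.random.default_rng(0)
target = 0.7                      # hidden "favorable" viewpoint
W = np.zeros((3, 2))              # action logits from features [angle, 1]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for episode in range(2000):
    angle, grads, rewards = rng.uniform(0, 1), [], []
    for t in range(20):
        feats = np.array([angle, 1.0])
        probs = softmax(W @ feats)
        a = rng.choice(3, p=probs)               # move left / stay / right
        grad = -probs[:, None] * feats[None, :]  # d log pi(a) / dW ...
        grad[a] += feats                         # ... = (onehot(a)-probs) f^T
        angle = float(np.clip(angle + (a - 1) * 0.1, 0.0, 1.0))
        grads.append(grad)
        rewards.append(-abs(angle - target))     # closer viewpoint = reward
    returns = np.cumsum(rewards[::-1])[::-1]     # undiscounted returns-to-go
    for g, ret in zip(grads, returns):
        W += 0.01 * ret * g                      # policy-gradient ascent
```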

DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images

Title DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images
Authors Yuying Ge, Ruimao Zhang, Lingyun Wu, Xiaogang Wang, Xiaoou Tang, Ping Luo
Abstract Understanding fashion images has been advanced by benchmarks with rich annotations such as DeepFashion, whose labels include clothing categories, landmarks, and consumer-commercial image pairs. However, DeepFashion has nonnegligible issues such as a single clothing item per image, sparse landmarks (4~8 only), and no per-pixel masks, leaving a significant gap between it and real-world scenarios. We fill in this gap by presenting DeepFashion2 to address these issues. It is a versatile benchmark of four tasks including clothes detection, pose estimation, segmentation, and retrieval. It has 801K clothing items, where each item has rich annotations such as style, scale, viewpoint, occlusion, bounding box, dense landmarks, and masks. There are also 873K commercial-consumer clothes pairs. A strong baseline is proposed, called Match R-CNN, which builds upon Mask R-CNN to solve the above four tasks in an end-to-end manner. Extensive evaluations are conducted with different criteria on DeepFashion2.
Tasks Pose Estimation, Semantic Segmentation
Published 2019-01-23
URL http://arxiv.org/abs/1901.07973v1
PDF http://arxiv.org/pdf/1901.07973v1.pdf
PWC https://paperswithcode.com/paper/deepfashion2-a-versatile-benchmark-for
Repo https://github.com/switchablenorms/DeepFashion2
Framework tf

Approximated Oracle Filter Pruning for Destructive CNN Width Optimization

Title Approximated Oracle Filter Pruning for Destructive CNN Width Optimization
Authors Xiaohan Ding, Guiguang Ding, Yuchen Guo, Jungong Han, Chenggang Yan
Abstract It is not easy to design and run Convolutional Neural Networks (CNNs) because: 1) finding the optimal number of filters (i.e., the width) at each layer is tricky for a given architecture; and 2) the computational intensity of CNNs impedes deployment on computationally limited devices. Oracle Pruning is designed to remove unimportant filters from a well-trained CNN by ablating them in turn and evaluating the model to estimate each filter's importance. It thus delivers high accuracy, but it suffers from intolerable time complexity and requires the resulting width to be given rather than found automatically. To address these problems, we propose Approximated Oracle Filter Pruning (AOFP), which searches for the least important filters in a binary-search manner, makes pruning attempts by masking out filters randomly, accumulates the resulting errors, and finetunes the model via a multi-path framework. As AOFP enables simultaneous pruning on multiple layers, we can prune an existing very deep CNN with acceptable time cost, negligible accuracy drop, and no heuristic knowledge, or re-design a model that achieves higher accuracy and faster inference.
Tasks
Published 2019-05-12
URL https://arxiv.org/abs/1905.04748v1
PDF https://arxiv.org/pdf/1905.04748v1.pdf
PWC https://paperswithcode.com/paper/approximated-oracle-filter-pruning-for
Repo https://github.com/ShawnDing1994/AOFP
Framework tf
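
The search loop at the heart of the method can be sketched as follows: instead of ablating filters one at a time (oracle pruning needs one evaluation per filter), mask a random half of a candidate set, attribute the observed loss increase to the masked filters, and shrink the candidate set by half each round. The multi-path finetuning that AOFP interleaves with this search is omitted, and `evaluate_loss` is a hypothetical callback; this is a sketch of the search logic only.

```python
# Approximate-oracle importance search via random masking + binary search.
# `evaluate_loss(mask)` is an assumed callback that runs the network on a
# validation batch with the given boolean filter mask applied to one layer.
import numpy as np

def aofp_search(n_filters, evaluate_loss, n_prune, trials_per_round=20, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    base = evaluate_loss(np.ones(n_filters, dtype=bool))
    candidates = np.arange(n_filters)              # current search space
    while len(candidates) > n_prune:
        damage, hits = np.zeros(n_filters), np.zeros(n_filters)
        for _ in range(trials_per_round):
            # Mask a random half of the candidates; attribute the loss rise
            # to every filter in the masked subset.
            masked = rng.choice(candidates, size=len(candidates) // 2,
                                replace=False)
            mask = np.ones(n_filters, dtype=bool)
            mask[masked] = False
            delta = evaluate_loss(mask) - base
            damage[masked] += delta
            hits[masked] += 1
        scores = damage / np.maximum(hits, 1)      # avg damage when masked
        # Halve the search space, keeping the filters whose masking hurt
        # least: the strongest candidates for pruning.
        order = candidates[np.argsort(scores[candidates])]
        candidates = order[: max(len(candidates) // 2, n_prune)]
    return candidates                              # filter indices to prune
```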

All about Structure: Adapting Structural Information across Domains for Boosting Semantic Segmentation

Title All about Structure: Adapting Structural Information across Domains for Boosting Semantic Segmentation
Authors Wei-Lun Chang, Hui-Po Wang, Wen-Hsiao Peng, Wei-Chen Chiu
Abstract In this paper we tackle the problem of unsupervised domain adaptation for semantic segmentation, where we attempt to transfer knowledge learned on synthetic datasets with ground-truth labels to real-world images without any annotation. With the hypothesis that the structural content of images is the most informative and decisive factor for semantic segmentation and can be readily shared across domains, we propose a Domain Invariant Structure Extraction (DISE) framework to disentangle images into domain-invariant structure and domain-specific texture representations, which further enables image translation across domains and label transfer to improve segmentation performance. Extensive experiments verify the effectiveness of our proposed DISE model and demonstrate its superiority over several state-of-the-art approaches.
Tasks Domain Adaptation, Image-to-Image Translation, Semantic Segmentation, Synthetic-to-Real Translation, Unsupervised Domain Adaptation
Published 2019-03-26
URL http://arxiv.org/abs/1903.12212v1
PDF http://arxiv.org/pdf/1903.12212v1.pdf
PWC https://paperswithcode.com/paper/all-about-structure-adapting-structural
Repo https://github.com/a514514772/DISE-Domain-Invariant-Structure-Extraction
Framework pytorch
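
A skeletal PyTorch version of the disentanglement idea: one encoder for domain-invariant structure, one for domain-specific texture (squeezed to a global code so it cannot carry spatial layout), and a decoder that reconstructs an image from any (structure, texture) pair, so swapping texture codes across domains performs image translation. Layer sizes are placeholders, and the adversarial and perceptual losses of the full framework are omitted.

```python
# Minimal structure/texture disentanglement skeleton (sizes are placeholders).
import torch
import torch.nn as nn

class DISE(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.structure_enc = nn.Sequential(      # spatial, layout-preserving
            nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(ch, ch, 4, 2, 1), nn.ReLU())
        self.texture_enc = nn.Sequential(        # pooled to a global code
            nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(ch, ch, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Tanh())

    def forward(self, x_content, x_style):
        s = self.structure_enc(x_content)        # structure of one image
        t = self.texture_enc(x_style)            # texture of (maybe) another
        t = t.expand(-1, -1, s.size(2), s.size(3))
        # Swapping x_style across domains turns reconstruction into translation.
        return self.dec(torch.cat([s, t], dim=1))
```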

Denoising based Sequence-to-Sequence Pre-training for Text Generation

Title Denoising based Sequence-to-Sequence Pre-training for Text Generation
Authors Liang Wang, Wei Zhao, Ruoyu Jia, Sujian Li, Jingming Liu
Abstract This paper presents a new sequence-to-sequence (seq2seq) pre-training method PoDA (Pre-training of Denoising Autoencoders), which learns representations suitable for text generation tasks. Unlike encoder-only (e.g., BERT) or decoder-only (e.g., OpenAI GPT) pre-training approaches, PoDA jointly pre-trains both the encoder and decoder by denoising the noise-corrupted text, and it also has the advantage of keeping the network architecture unchanged in the subsequent fine-tuning stage. Meanwhile, we design a hybrid model of Transformer and pointer-generator networks as the backbone architecture for PoDA. We conduct experiments on two text generation tasks: abstractive summarization, and grammatical error correction. Results on four datasets show that PoDA can improve model performance over strong baselines without using any task-specific techniques and significantly speed up convergence.
Tasks Abstractive Text Summarization, Denoising, Grammatical Error Correction, Text Generation
Published 2019-08-22
URL https://arxiv.org/abs/1908.08206v1
PDF https://arxiv.org/pdf/1908.08206v1.pdf
PWC https://paperswithcode.com/paper/denoising-based-sequence-to-sequence-pre
Repo https://github.com/yuantiku/PoDA
Framework pytorch
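
The denoising setup can be sketched as a noising function over token sequences; the model is then trained to map add_noise(sentence) back to sentence, with the same encoder-decoder kept for fine-tuning. The noise types below (deletion, replacement, bounded local shuffle) are common choices for this kind of pre-training; the paper's exact noise process and rates may differ.

```python
# Noising function for denoising pre-training (rates are illustrative).
import random

def add_noise(tokens, vocab, p_del=0.1, p_repl=0.1, shuffle_window=3, rng=random):
    noised = []
    for tok in tokens:
        r = rng.random()
        if r < p_del:
            continue                              # randomly drop the token
        if r < p_del + p_repl:
            noised.append(rng.choice(vocab))      # replace with random token
        else:
            noised.append(tok)
    # Bounded local shuffle: each token may drift at most `shuffle_window`
    # positions, lightly scrambling word order.
    order = sorted(range(len(noised)),
                   key=lambda i: i + rng.uniform(0, shuffle_window))
    return [noised[i] for i in order]

# Pre-training pairs are (add_noise(sentence, vocab), sentence): the model
# learns to restore clean text with the architecture left unchanged.
```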

Stable prediction with radiomics data

Title Stable prediction with radiomics data
Authors Carel F. W. Peeters, Caroline Übelhör, Steven W. Mes, Roland Martens, Thomas Koopman, Pim de Graaf, Floris H. P. van Velden, Ronald Boellaard, Jonas A. Castelijns, Dennis E. te Beest, Martijn W. Heymans, Mark A. van de Wiel
Abstract Motivation: Radiomics refers to the high-throughput mining of quantitative features from radiographic images. It is a promising field in that it may provide a non-invasive solution for screening and classification. Standard machine learning classification and feature selection techniques, however, tend to perform poorly in terms of predictive performance and its stability. This is due to the heavy multicollinearity present in radiomic data. We set out to provide an easy-to-use approach that deals with this problem. Results: We developed a four-step approach that projects the original high-dimensional feature space onto a lower-dimensional latent-feature space, while retaining most of the covariation in the data. It consists of (i) penalized maximum likelihood estimation of a redundancy-filtered correlation matrix. The resulting matrix (ii) is the input for a maximum likelihood factor analysis procedure. This two-stage maximum-likelihood approach can be used to (iii) produce a compact set of stable features that (iv) can be directly used in any (regression-based) classifier or predictor. It outperforms other classification (and feature selection) techniques in both external and internal validation settings regarding survival in squamous cell cancers.
Tasks Feature Selection
Published 2019-03-27
URL http://arxiv.org/abs/1903.11696v1
PDF http://arxiv.org/pdf/1903.11696v1.pdf
PWC https://paperswithcode.com/paper/stable-prediction-with-radiomics-data
Repo https://github.com/CFWP/FMradio
Framework none
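
The four-step pipeline, sketched in Python for orientation (the authors' FMradio package is in R): (i) a regularized correlation estimate with redundancy filtering, (ii) maximum-likelihood-style factor analysis, (iii) factor scores as compact stable features, and (iv) any regression-based classifier on top. The shrinkage estimator and filtering rule below are stand-ins for the package's penalized estimator, not a reimplementation.

```python
# Radiomics "stable features" pipeline, mirrored with sklearn components.
import numpy as np
from sklearn.covariance import LedoitWolf
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression

def stable_radiomics_features(X, y, n_factors=5, redundancy_cut=0.95):
    # (i) Shrinkage covariance -> correlation; drop near-duplicate features.
    C = LedoitWolf().fit(X).covariance_
    corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))
    keep = []
    for j in range(corr.shape[0]):
        if all(abs(corr[j, k]) < redundancy_cut for k in keep):
            keep.append(j)
    Xf = X[:, keep]
    # (ii) Factor analysis on the filtered features, then (iii) factor
    # scores act as the compact set of stable meta-features.
    fa = FactorAnalysis(n_components=n_factors).fit(Xf)
    scores = fa.transform(Xf)
    # (iv) Any regression-based classifier can consume the factor scores.
    clf = LogisticRegression(max_iter=1000).fit(scores, y)
    return clf, fa, keep
```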

Complement Objective Training

Title Complement Objective Training
Authors Hao-Yun Chen, Pei-Hsin Wang, Chun-Hao Liu, Shih-Chieh Chang, Jia-Yu Pan, Yu-Ting Chen, Wei Wei, Da-Cheng Juan
Abstract Learning with a primary objective, such as softmax cross entropy for classification and sequence generation, has been the norm for training deep neural networks for years. Although it is a widely adopted approach, using cross entropy as the primary objective exploits mostly the information from the ground-truth class for maximizing data likelihood, and largely ignores information from the complement (incorrect) classes. We argue that, in addition to the primary objective, training with a complement objective that leverages information from the complement classes can be effective in improving model performance. This motivates us to study a new training paradigm that maximizes the likelihood of the ground-truth class while neutralizing the probabilities of the complement classes. We conduct extensive experiments on multiple tasks ranging from computer vision to natural language understanding. The experimental results confirm that, compared to conventional training with just one primary objective, also training with the complement objective further improves the performance of state-of-the-art models across all tasks. In addition to the accuracy improvement, we show that models trained with both primary and complement objectives are more robust to single-step adversarial attacks.
Tasks
Published 2019-03-04
URL http://arxiv.org/abs/1903.01182v2
PDF http://arxiv.org/pdf/1903.01182v2.pdf
PWC https://paperswithcode.com/paper/complement-objective-training
Repo https://github.com/henry8527/COT
Framework pytorch
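
The complement objective is easy to state in code: alongside cross entropy on the ground-truth class, flatten the predicted distribution over the complement (incorrect) classes by maximizing its entropy. The sketch below follows the paper's released PyTorch code in spirit; the epsilon handling and the exact way the two objectives are combined (the paper alternates updates between them) are our simplifications.

```python
# Complement entropy loss (sketch; eps handling is ours).
import torch
import torch.nn.functional as F

def complement_entropy(logits, targets, eps=1e-7):
    probs = F.softmax(logits, dim=1)
    gt = probs.gather(1, targets.unsqueeze(1))          # p(ground truth)
    comp = probs / (1.0 - gt + eps)                     # renormalize the rest
    comp = comp.scatter(1, targets.unsqueeze(1), 0.0)   # drop the true class
    entropy = -(comp * torch.log(comp + eps)).sum(dim=1)
    return -entropy.mean()  # minimizing this maximizes complement entropy

# Training then interleaves the primary and complement objectives, e.g.
# alternating steps on F.cross_entropy(logits, y) and
# complement_entropy(logits, y).
```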

Identifying Unclear Questions in Community Question Answering Websites

Title Identifying Unclear Questions in Community Question Answering Websites
Authors Jan Trienes, Krisztian Balog
Abstract Thousands of complex natural language questions are submitted to community question answering websites on a daily basis, rendering them as one of the most important information sources these days. However, oftentimes submitted questions are unclear and cannot be answered without further clarification questions by expert community members. This study is the first to investigate the complex task of classifying a question as clear or unclear, i.e., if it requires further clarification. We construct a novel dataset and propose a classification approach that is based on the notion of similar questions. This approach is compared to state-of-the-art text classification baselines. Our main finding is that the similar questions approach is a viable alternative that can be used as a stepping stone towards the development of supportive user interfaces for question formulation.
Tasks Community Question Answering, Question Answering, Text Classification
Published 2019-01-18
URL http://arxiv.org/abs/1901.06168v1
PDF http://arxiv.org/pdf/1901.06168v1.pdf
PWC https://paperswithcode.com/paper/identifying-unclear-questions-in-community
Repo https://github.com/jantrienes/ecir2019-qac
Framework none
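
A compact sketch of the similar-questions idea: represent each incoming question by features derived from its nearest neighbors in the archive, such as the share of similar past questions that were flagged unclear, then train a standard classifier on those features. The two features used below are illustrative assumptions; the paper engineers a richer similarity-based feature set. Labels are assumed binary (1 = unclear).

```python
# Similar-questions features -> standard classifier (feature set assumed).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def fit_unclear_classifier(train_texts, train_labels, k=10):
    labels = np.asarray(train_labels)
    vec = TfidfVectorizer(min_df=2).fit(train_texts)
    Xt = vec.transform(train_texts)
    nn = NearestNeighbors(metric="cosine").fit(Xt)

    def features(X, drop_self=False):
        dist, idx = nn.kneighbors(X, n_neighbors=k + 1)
        if drop_self:                     # training queries match themselves
            dist, idx = dist[:, 1:], idx[:, 1:]
        else:
            dist, idx = dist[:, :k], idx[:, :k]
        return np.column_stack([labels[idx].mean(axis=1),  # unclear share
                                dist.mean(axis=1)])        # avg distance

    clf = LogisticRegression().fit(features(Xt, drop_self=True), labels)
    return lambda texts: clf.predict(features(vec.transform(texts)))
```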

Searching for Effective Neural Extractive Summarization: What Works and What’s Next

Title Searching for Effective Neural Extractive Summarization: What Works and What’s Next
Authors Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, Xuanjing Huang
Abstract Recent years have seen remarkable success in the use of deep neural networks for text summarization. However, there is no clear understanding of why they perform so well, or how they might be improved. In this paper, we seek to better understand how neural extractive summarization systems could benefit from different types of model architectures, transferable knowledge, and learning schemas. Additionally, based on our observations and analyses, we find an effective way to improve current frameworks and achieve state-of-the-art results on CNN/DailyMail by a large margin. We hope our work can provide more clues for future research on extractive summarization.
Tasks Text Summarization
Published 2019-07-08
URL https://arxiv.org/abs/1907.03491v1
PDF https://arxiv.org/pdf/1907.03491v1.pdf
PWC https://paperswithcode.com/paper/searching-for-effective-neural-extractive
Repo https://github.com/zhdbwe/Paper-DailyReading
Framework tf

From Zero-Shot Learning to Cold-Start Recommendation

Title From Zero-Shot Learning to Cold-Start Recommendation
Authors Jingjing Li, Mengmeng Jing, Ke Lu, Lei Zhu, Yang Yang, Zi Huang
Abstract Zero-shot learning (ZSL) and cold-start recommendation (CSR) are two challenging problems in computer vision and recommender systems, respectively. In general, they are independently investigated in different communities. This paper, however, reveals that ZSL and CSR are two extensions of the same intension. Both of them, for instance, attempt to predict unseen classes and involve two spaces, one for direct feature representation and the other for supplementary description. Yet there is no existing approach that addresses CSR from the ZSL perspective. This work, for the first time, formulates CSR as a ZSL problem, and a tailor-made ZSL method is proposed to handle CSR. Specifically, we propose a Low-rank Linear Auto-Encoder (LLAE) that addresses three cruxes: domain shift, spurious correlations, and computing efficiency. LLAE consists of two parts: a low-rank encoder that maps user behavior into user attributes, and a symmetric decoder that reconstructs user behavior from user attributes. Extensive experiments on both ZSL and CSR tasks verify that the proposed method is a win-win formulation, i.e., not only can CSR be handled by ZSL models with a significant performance improvement compared with several conventional state-of-the-art methods, but the consideration of CSR can benefit ZSL as well.
Tasks Recommendation Systems, Zero-Shot Learning
Published 2019-06-20
URL https://arxiv.org/abs/1906.08511v2
PDF https://arxiv.org/pdf/1906.08511v2.pdf
PWC https://paperswithcode.com/paper/from-zero-shot-learning-to-cold-start
Repo https://github.com/lijin118/LLAE
Framework none
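
The closed-form core shared by the family of linear semantic auto-encoders that LLAE extends: with tied encoder/decoder weights, the objective min_W ||X - W^T S||^2 + lam ||W X - S||^2 reduces to a Sylvester equation. For CSR, the columns of X are user behaviors and the columns of S the corresponding user attributes. The low-rank constraint that gives LLAE its name is omitted here; this is the unconstrained special case, shown for intuition only.

```python
# Tied-weight linear auto-encoder in closed form via a Sylvester equation.
# X: d x n user behaviors; S: a x n user attributes; returns W: a x d.
import numpy as np
from scipy.linalg import solve_sylvester

def linear_autoencoder(X, S, lam=1.0):
    # Stationarity of the objective gives
    #   S S^T W + lam W X X^T = (1 + lam) S X^T.
    A = S @ S.T
    B = lam * (X @ X.T)
    C = (1.0 + lam) * (S @ X.T)
    return solve_sylvester(A, B, C)

# Encoding (behavior -> attributes): W @ x;  decoding: W.T @ s.
```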

Joint Learning of Answer Selection and Answer Summary Generation in Community Question Answering

Title Joint Learning of Answer Selection and Answer Summary Generation in Community Question Answering
Authors Yang Deng, Wai Lam, Yuexiang Xie, Daoyuan Chen, Yaliang Li, Min Yang, Ying Shen
Abstract Community question answering (CQA) has recently gained increasing popularity in both academia and industry. However, the redundancy and lengthiness of crowdsourced answers limit the performance of answer selection and lead to reading difficulties and misunderstandings for community users. To solve these problems, we tackle the tasks of answer selection and answer summary generation in CQA with a novel joint learning model. Specifically, we design a question-driven pointer-generator network, which exploits the correlation information between question-answer pairs to attend to the essential information when generating answer summaries. Meanwhile, we leverage the answer summaries to alleviate noise in the original lengthy answers when ranking the relevance of question-answer pairs. In addition, we construct a new large-scale CQA corpus, WikiHowQA, which contains long answers for answer selection as well as reference summaries for answer summarization. The experimental results show that the joint learning method effectively addresses the answer redundancy issue in CQA and achieves state-of-the-art results on both answer selection and text summarization tasks. Furthermore, the proposed model shows strong transferability and applicability to resource-poor CQA tasks that lack reference answer summaries.
Tasks Answer Selection, Community Question Answering, Question Answering, Text Summarization
Published 2019-11-22
URL https://arxiv.org/abs/1911.09801v1
PDF https://arxiv.org/pdf/1911.09801v1.pdf
PWC https://paperswithcode.com/paper/joint-learning-of-answer-selection-and-answer
Repo https://github.com/dengyang17/wikihowQA
Framework none
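
Schematically, the joint objective combines a token-level generation loss for the answer summary with a pairwise ranking loss for answer selection, so each task regularizes the other. All tensor names and the weighting below are placeholders; the paper's model is a question-driven pointer-generator with its own relevance scorer.

```python
# Joint objective sketch: generation loss + pairwise ranking loss.
import torch
import torch.nn.functional as F

def joint_loss(summary_logits, summary_targets,
               relevance_pos, relevance_neg, alpha=0.5, margin=1.0):
    # Summarization: token-level cross entropy against reference summaries.
    # summary_logits: (batch, time, vocab); summary_targets: (batch, time).
    gen = F.cross_entropy(summary_logits.flatten(0, 1),
                          summary_targets.flatten())
    # Selection: score the true answer above a sampled negative by a margin.
    rank = F.relu(margin - relevance_pos + relevance_neg).mean()
    return alpha * gen + (1 - alpha) * rank
```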