January 25, 2020

2536 words 12 mins read

Paper Group NANR 13


Efficient and Thrifty Voting by Any Means Necessary. Learning to Bootstrap for Entity Set Expansion. Semantic Textual Similarity with Siamese Neural Networks. Density Map Regression Guided Detection Network for RGB-D Crowd Counting and Localization. Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP …

Efficient and Thrifty Voting by Any Means Necessary

Title Efficient and Thrifty Voting by Any Means Necessary
Authors Debmalya Mandal, Ariel D. Procaccia, Nisarg Shah, David Woodruff
Abstract We take an unorthodox view of voting by expanding the design space to include both the elicitation rule, whereby voters map their (cardinal) preferences to votes, and the aggregation rule, which transforms the reported votes into collective decisions. Intuitively, there is a tradeoff between the communication requirements of the elicitation rule (i.e., the number of bits of information that voters need to provide about their preferences) and the efficiency of the outcome of the aggregation rule, which we measure through distortion (i.e., how well the utilitarian social welfare of the outcome approximates the maximum social welfare in the worst case). Our results chart the Pareto frontier of the communication-distortion tradeoff.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/8939-efficient-and-thrifty-voting-by-any-means-necessary
PDF http://papers.nips.cc/paper/8939-efficient-and-thrifty-voting-by-any-means-necessary.pdf
PWC https://paperswithcode.com/paper/efficient-and-thrifty-voting-by-any-means
Repo
Framework
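The distortion measure from the abstract can be made concrete with a toy sketch: given voters' cardinal utilities, distortion is the ratio of the optimal utilitarian social welfare to the welfare of the alternative a voting rule actually selects. The rule and function names below (`plurality_winner`, `distortion`) are illustrative, not the paper's constructions.

```python
def plurality_winner(utilities):
    """Each voter votes for their favourite alternative; ties break by index."""
    n_alts = len(utilities[0])
    votes = [0] * n_alts
    for u in utilities:
        votes[u.index(max(u))] += 1
    return votes.index(max(votes))

def distortion(utilities, winner):
    """Optimal utilitarian social welfare divided by the welfare of `winner`."""
    n_alts = len(utilities[0])
    welfare = [sum(u[a] for u in utilities) for a in range(n_alts)]
    return max(welfare) / welfare[winner]

# Three voters, two alternatives: plurality picks alternative 0,
# but alternative 1 has the higher total utility.
profile = [[0.6, 0.4], [0.6, 0.4], [0.0, 1.0]]
w = plurality_winner(profile)
print(w, distortion(profile, w))  # winner 0, distortion ~ 1.5
```

This illustrates the tradeoff the paper studies: plurality needs very few bits per voter but can lose welfare, which is exactly the kind of point on the communication-distortion frontier the authors chart.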

Learning to Bootstrap for Entity Set Expansion

Title Learning to Bootstrap for Entity Set Expansion
Authors Lingyong Yan, Xianpei Han, Le Sun, Ben He
Abstract Bootstrapping for Entity Set Expansion (ESE) aims at iteratively acquiring new instances of a specific target category. Traditional bootstrapping methods often suffer from two problems: 1) delayed feedback, i.e., pattern evaluation relies both on a pattern's direct extraction quality and on extraction quality in later iterations; 2) sparse supervision, i.e., only a few seed entities are used as supervision. To address these two problems, we propose a novel bootstrapping method combining the Monte Carlo Tree Search (MCTS) algorithm with a deep similarity network, which can efficiently estimate delayed feedback for pattern evaluation and adaptively score entities given sparse supervision signals. Experimental results confirm the effectiveness of the proposed method.
Tasks
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1028/
PDF https://www.aclweb.org/anthology/D19-1028
PWC https://paperswithcode.com/paper/learning-to-bootstrap-for-entity-set
Repo
Framework
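As background for the delayed-feedback problem the abstract describes, here is a hedged toy of the greedy bootstrapping loop that such work improves on: score each pattern by the fraction of its extractions that are already known entities, then absorb the best pattern's extractions. The corpus, patterns, and scoring function are illustrative stand-ins; the paper replaces this myopic scoring with MCTS lookahead.

```python
# Toy pattern -> extracted-entity table (illustrative, not from the paper).
patterns = {
    "capital of *": {"paris", "rome", "berlin"},
    "visited *":    {"paris", "rome", "bob"},
    "* said":       {"bob", "alice"},
}
known = {"paris"}  # sparse seed supervision

for _ in range(2):  # two bootstrapping iterations
    # Greedy score: precision of a pattern against the current entity set.
    best = max(patterns, key=lambda p: len(patterns[p] & known) / len(patterns[p]))
    known |= patterns[best]

print(sorted(known))
```

The greedy score only sees the current iteration's precision; a noisy pattern that looks good now can poison later rounds, which is the delayed feedback that MCTS-style lookahead is meant to estimate.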

Semantic Textual Similarity with Siamese Neural Networks

Title Semantic Textual Similarity with Siamese Neural Networks
Authors Tharindu Ranasinghe, Constantin Orasan, Ruslan Mitkov
Abstract Calculating Semantic Textual Similarity (STS) is an important research area in natural language processing which plays a significant role in many applications such as question answering, document summarisation, information retrieval and information extraction. This paper evaluates Siamese recurrent architectures, a special type of neural network, which are used here to measure STS. Several variants of the architecture are compared with existing methods.
Tasks Information Retrieval, Question Answering, Semantic Textual Similarity
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1116/
PDF https://www.aclweb.org/anthology/R19-1116
PWC https://paperswithcode.com/paper/semantic-textual-similarity-with-siamese
Repo
Framework
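A minimal numpy sketch of the Siamese idea: both inputs pass through the same shared encoder, and similarity is computed from the distance between the two encodings. The paper uses recurrent (LSTM-style) encoders; here the encoder is a stand-in mean-of-word-vectors function, and the `exp(-L1)` similarity is one common Siamese choice, so all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=8) for w in "a cat sat on the mat dog ran".split()}

def encode(sentence):
    """Shared encoder: mean of word vectors (stand-in for an LSTM encoder)."""
    return np.mean([vocab[w] for w in sentence.split()], axis=0)

def similarity(s1, s2):
    """exp(-L1 distance) between the two encodings, bounded in (0, 1]."""
    return float(np.exp(-np.abs(encode(s1) - encode(s2)).sum()))

print(similarity("a cat sat on the mat", "a cat sat on the mat"))  # identical -> 1.0
print(similarity("a cat sat on the mat", "the dog ran"))           # strictly below 1.0
```

Weight sharing is the key design choice: because one encoder embeds both sentences, similar inputs land near each other in the learned space by construction.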

Density Map Regression Guided Detection Network for RGB-D Crowd Counting and Localization

Title Density Map Regression Guided Detection Network for RGB-D Crowd Counting and Localization
Authors Dongze Lian, Jing Li, Jia Zheng, Weixin Luo, Shenghua Gao
Abstract To simultaneously estimate head counts and localize heads with bounding boxes, a regression guided detection network (RDNet) is proposed for RGB-D crowd counting. Specifically, to improve the robustness of detection-based approaches for small/tiny heads, we leverage a density map to improve head/non-head classification in the detection network, where the density map serves as the probability of a pixel being a head. A depth-adaptive kernel that accounts for the variation in head sizes is also introduced to generate high-fidelity density maps for more robust density map regression. Further, a depth-aware anchor is designed for better initialization of anchor sizes in the detection framework. We then use bounding boxes whose sizes are estimated from depth to train our RDNet. Because existing RGB-D datasets are too small for evaluating data-driven approaches, we collect a large-scale RGB-D crowd counting dataset. Experiments on both our RGB-D dataset and the MICC RGB-D counting dataset show that our method achieves the best performance for RGB-D crowd counting and localization. Further, our method can be readily extended to RGB image based crowd counting and achieves comparable performance on the ShanghaiTech Part_B dataset for both counting and localization.
Tasks Crowd Counting
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Lian_Density_Map_Regression_Guided_Detection_Network_for_RGB-D_Crowd_Counting_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Lian_Density_Map_Regression_Guided_Detection_Network_for_RGB-D_Crowd_Counting_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/density-map-regression-guided-detection
Repo
Framework
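A hedged sketch of the depth-adaptive kernel idea from the abstract: each head annotation contributes a normalized Gaussian whose spread shrinks with depth, since nearer heads (smaller depth) appear larger in the image. The inverse scaling constant `k` and the function names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def density_map(shape, heads, k=50.0):
    """heads: list of (row, col, depth); kernel width scales inversely with depth."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    dmap = np.zeros(shape)
    for r, c, depth in heads:
        sigma = max(k / depth, 1.0)  # closer head -> larger apparent size -> wider kernel
        g = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()          # normalise so each head integrates to 1
    return dmap

dm = density_map((64, 64), [(20, 20, 10.0), (40, 45, 30.0)])
print(round(dm.sum(), 4))            # total density equals the head count (2)
```

Because each kernel is renormalized over the grid, the map's integral stays equal to the head count, which is what makes counting-by-regression well defined.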

Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)

Title Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)
Authors
Abstract
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1000/
PDF https://www.aclweb.org/anthology/R19-1000
PWC https://paperswithcode.com/paper/natural-language-processing-in-a-deep
Repo
Framework

Residual Regression With Semantic Prior for Crowd Counting

Title Residual Regression With Semantic Prior for Crowd Counting
Authors Jia Wan, Wenhan Luo, Baoyuan Wu, Antoni B. Chan, Wei Liu
Abstract Crowd counting is a challenging task due to factors such as large variations in crowdedness and severe occlusions. Although recent deep learning based counting algorithms have achieved great progress, the correlation knowledge among samples and the semantic prior have not yet been fully exploited. In this paper, a residual regression framework is proposed for crowd counting that utilizes the correlation information among samples. By incorporating such information into our network, we discover that more intrinsic characteristics can be learned by the network, which thus generalizes better to unseen scenarios. Besides, we show how to effectively leverage the semantic prior to improve the performance of crowd counting. We also observe that an adversarial loss can be used to improve the quality of predicted density maps, thus leading to an improvement in crowd counting. Experiments on public datasets demonstrate the effectiveness and generalization ability of the proposed method.
Tasks Crowd Counting
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Wan_Residual_Regression_With_Semantic_Prior_for_Crowd_Counting_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Wan_Residual_Regression_With_Semantic_Prior_for_Crowd_Counting_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/residual-regression-with-semantic-prior-for
Repo
Framework

Storyboarding of Recipes: Grounded Contextual Generation

Title Storyboarding of Recipes: Grounded Contextual Generation
Authors Khyathi Chandu, Eric Nyberg, Alan W Black
Abstract The information need of humans is essentially multimodal in nature, enabling maximum exploitation of situated context. We introduce a dataset for sequential procedural (how-to) text generation from images in the cooking domain. The dataset consists of 16,441 cooking recipes with 160,479 photos associated with different steps. We set up a baseline motivated by the best performing model, in terms of human evaluation, for the Visual Story Telling (ViST) task. In addition, we introduce two models to incorporate high-level structure learnt by a Finite State Machine (FSM) into the neural sequential generation process: (1) Scaffolding Structure in Decoder (SSiD) and (2) Scaffolding Structure in Loss (SSiL). Our best performing model (SSiL) achieves a METEOR score of 0.31, an improvement of 0.6 over the baseline model. We also conducted human evaluation of the generated grounded recipes, which reveals that 61% found our proposed (SSiL) model better than the baseline model in terms of overall recipes. We also analyse the output, highlighting key NLP issues for prospective directions.
Tasks Text Generation
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1606/
PDF https://www.aclweb.org/anthology/P19-1606
PWC https://paperswithcode.com/paper/storyboarding-of-recipes-grounded-contextual
Repo
Framework

SC-UPB at the VarDial 2019 Evaluation Campaign: Moldavian vs. Romanian Cross-Dialect Topic Identification

Title SC-UPB at the VarDial 2019 Evaluation Campaign: Moldavian vs. Romanian Cross-Dialect Topic Identification
Authors Cristian Onose, Dumitru-Clementin Cercel, Stefan Trausan-Matu
Abstract This paper describes our models for the Moldavian vs. Romanian Cross-Topic Identification (MRC) evaluation campaign, part of the VarDial 2019 workshop. We focus on the three subtasks for MRC: binary classification between the Moldavian (MD) and the Romanian (RO) dialects, and two cross-dialect multi-class classifications between six news topics, MD to RO and RO to MD. We propose several deep learning models based on long short-term memory cells, Bidirectional Gated Recurrent Units (BiGRU) and Hierarchical Attention Networks (HAN). We also employ three word embedding models to represent the text as a low-dimensional vector. Our official submission includes two runs of the BiGRU and HAN models for each of the three subtasks. The best submitted model obtained the following macro-averaged F1 scores: 0.708 for subtask 1, 0.481 for subtask 2 and 0.480 for the last one. Due to a read error caused by the quoting behaviour of the test file, our final submissions contained fewer items than expected; more than 50% of the submission files were corrupted. Thus, we also present the results obtained with the corrected labels, for which the HAN model achieves the following results: 0.930 for subtask 1, 0.590 for subtask 2 and 0.687 for the third one.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-1418/
PDF https://www.aclweb.org/anthology/W19-1418
PWC https://paperswithcode.com/paper/sc-upb-at-the-vardial-2019-evaluation
Repo
Framework
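The submission bug described in the abstract (items lost to quoting behaviour when reading the test file) is a classic pitfall: an unpaired double quote in a text field makes a default CSV/TSV reader swallow several records as one. A hedged illustration with Python's `csv` module (the team's actual tooling is not stated in the paper; the data below is invented):

```python
import csv
import io

# Row 1's text field starts with an unpaired quote, as raw news text often does.
tsv = 'id\ttext\n1\t"citat fara pereche\n2\tal doilea\n3\tal treilea\n'

# Default quoting: the reader enters "quoted field" mode at the bare quote
# and consumes the remaining rows into one giant field.
naive = list(csv.reader(io.StringIO(tsv), delimiter="\t"))

# QUOTE_NONE treats the quote character as ordinary text: all rows survive.
safe = list(csv.reader(io.StringIO(tsv), delimiter="\t", quoting=csv.QUOTE_NONE))

print(len(naive), len(safe))  # 2 4
```

For raw social-media or news text, disabling quote handling (or validating row counts after loading) is the usual defence against exactly this kind of silent data loss.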

Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects

Title Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects
Authors
Abstract
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-1400/
PDF https://www.aclweb.org/anthology/W19-1400
PWC https://paperswithcode.com/paper/proceedings-of-the-sixth-workshop-on-nlp-for
Repo
Framework

Divide and Extract – Disentangling Clause Splitting and Proposition Extraction

Title Divide and Extract – Disentangling Clause Splitting and Proposition Extraction
Authors Darina Gold, Torsten Zesch
Abstract Proposition extraction from sentences is an important task for information extraction systems. Evaluation of such systems usually conflates two aspects: splitting complex sentences into clauses, and the extraction of propositions. It is thus difficult to independently determine the quality of the proposition extraction step. We create a manually annotated proposition dataset from sentences taken from restaurant reviews that distinguishes between clauses that need to be split and those that do not. The resulting proposition evaluation dataset allows us to independently compare the performance of proposition extraction systems on simple and complex clauses. Although performance drastically drops on more complex sentences, we show that the same systems perform best on both simple and complex clauses. Furthermore, we show that specific kinds of subordinate clauses pose difficulties to most systems.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1047/
PDF https://www.aclweb.org/anthology/R19-1047
PWC https://paperswithcode.com/paper/divide-and-extract-disentangling-clause
Repo
Framework

Multiple Encoder-Decoders Net for Lane Detection

Title Multiple Encoder-Decoders Net for Lane Detection
Authors Yuetong Du, Xiaodong Gu, Junqin Liu, Liwen He
Abstract For semantic image segmentation and lane detection, nets with a single spatial pyramid structure or encoder-decoder structure are usually exploited. Convolutional neural networks (CNNs) show great results on both high-level and low-level feature representations; however, this capability has not been fully exploited for the lane detection task. In particular, it is still a challenge for model-based lane detection to combine multi-scale context with pixel-level accuracy, because of the weak visual appearance and strong prior information. In this paper, we propose a novel network for lane detection with three main contributions. First, we employ a multiple encoder-decoders module in an end-to-end manner and show promising results for lane detection. Second, we analyze different configurations of multiple encoder-decoder nets. Third, we rethink the evaluation methods for lane detection, given the limitations of the popular IoU-based methods.
Tasks Lane Detection, Semantic Segmentation
Published 2019-05-01
URL https://openreview.net/forum?id=SJgiNo0cKX
PDF https://openreview.net/pdf?id=SJgiNo0cKX
PWC https://paperswithcode.com/paper/multiple-encoder-decoders-net-for-lane
Repo
Framework
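As background for the evaluation critique in the abstract, here is a minimal numpy sketch of how mask IoU is usually computed for lane markings (illustrative names, not the paper's code). Thin structures make IoU extremely sensitive to small offsets, which is the kind of limitation the authors allude to:

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

gt = np.zeros((10, 10), dtype=bool)
gt[:, 4] = True                      # a 1-pixel-wide vertical lane
pred = np.zeros_like(gt)
pred[:, 5] = True                    # the same lane, shifted one pixel right

print(mask_iou(gt, gt), mask_iou(gt, pred))  # 1.0 0.0
```

A one-pixel shift of a one-pixel-wide lane drops IoU from perfect to zero, even though the prediction is visually almost correct; this is why lane benchmarks often dilate the masks or measure point-wise distances instead.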

Learning and Understanding Different Categories of Sexism Using Convolutional Neural Network’s Filters

Title Learning and Understanding Different Categories of Sexism Using Convolutional Neural Network’s Filters
Authors Sima Sharifirad, Alon Jacovi
Abstract Sexism is very common in social media and makes the boundaries of free speech tighter for female users. Automatically flagging and removing sexist content requires niche identification and description of the categories. In this study, inspired by social science work, we propose three categories of sexism toward women: "Indirect sexism", "Sexual sexism" and "Physical sexism". We build classifiers such as a Convolutional Neural Network (CNN) to automatically detect different types of sexism and address problems of annotation. The inherent non-interpretability of CNNs is a challenge for users who detect sexism, since the reason for classifying a given speech instance as sexist is difficult to glean from a CNN; however, recent research has developed interpretable CNN filters for text data. In a CNN, filters followed by different activation patterns, along with global max-pooling, can help us tease apart the most important ngrams from the rest. In this paper, we interpret a CNN model trained to classify sexism in order to understand different categories of sexism, by detecting semantic categories of ngrams and clustering them. These ngrams in each category are then used to improve the performance of the classification task. This preliminary work uses machine learning and natural language techniques to learn the concept of sexism, and distinguishes itself by looking at more precise categories of sexism in social media along with an in-depth investigation of the CNN's filters.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/papers/W/W19/W19-3609/
PDF https://www.aclweb.org/anthology/W19-3609
PWC https://paperswithcode.com/paper/learning-and-understanding-different
Repo
Framework
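A hedged numpy sketch of the interpretation step the abstract describes: slide a 1-D convolutional filter over token embeddings, global-max-pool, and report the ngram at the argmax as the one the filter "detects". The embeddings and filter weights here are random stand-ins for a trained model, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
tokens = "women should stay silent online".split()
emb = {t: rng.normal(size=6) for t in tokens}
X = np.stack([emb[t] for t in tokens])        # (seq_len, dim) token matrix

def top_ngram_start(X, filt, n=2):
    """Index where the filter's activation is maximal (global max-pooling)."""
    scores = [float((X[i:i + n] * filt).sum()) for i in range(len(X) - n + 1)]
    return int(np.argmax(scores))

filt = rng.normal(size=(2, 6))                # one bigram-width filter
i = top_ngram_start(X, filt)
print(" ".join(tokens[i:i + 2]))              # the filter's most-activating bigram
```

Running this for every filter of a trained CNN and clustering the resulting ngrams is, in outline, how the per-filter "semantic categories" in the paper are obtained.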

Adapting Object Detectors via Selective Cross-Domain Alignment

Title Adapting Object Detectors via Selective Cross-Domain Alignment
Authors Xinge Zhu, Jiangmiao Pang, Ceyuan Yang, Jianping Shi, Dahua Lin
Abstract State-of-the-art object detectors are usually trained on public datasets. They often face substantial difficulties when applied to a different domain, where the imaging condition differs significantly and the corresponding annotated data are unavailable (or expensive to acquire). A natural remedy is to adapt the model by aligning the image representations on both domains. This can be achieved, for example, by adversarial learning, and has been shown to be effective in tasks like image classification. However, we found that in object detection, the improvement obtained in this way is quite limited. An important reason is that conventional domain adaptation methods strive to align images as a whole, while object detection, by nature, focuses on local regions that may contain objects of interest. Motivated by this, we propose a novel approach to domain adaptation for object detection that handles the issues of "where to look" and "how to align". Our key idea is to mine the discriminative regions, namely those that are directly pertinent to object detection, and focus on aligning them across both domains. Experiments show that the proposed method performs remarkably better than existing methods, with about 4% to 6% improvement under various domain-shift scenarios, while keeping good scalability.
Tasks Domain Adaptation, Image Classification, Object Detection, Unsupervised Domain Adaptation
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Zhu_Adapting_Object_Detectors_via_Selective_Cross-Domain_Alignment_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhu_Adapting_Object_Detectors_via_Selective_Cross-Domain_Alignment_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/adapting-object-detectors-via-selective-cross
Repo
Framework

Rethinking Phonotactic Complexity

Title Rethinking Phonotactic Complexity
Authors Tiago Pimentel, Brian Roark, Ryan Cotterell
Abstract In this work, we propose the use of phone-level language models to estimate phonotactic complexity, measured in bits per phoneme, which makes cross-linguistic comparison straightforward. We compare the entropy across languages using this simple measure, gaining insight into how complex different languages' phonotactics are. Finally, we show a very strong negative correlation between phonotactic complexity and the average length of words (Spearman's rho = -0.744) when analysing a collection of 106 languages with 1016 basic concepts each.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/papers/W/W19/W19-3628/
PDF https://www.aclweb.org/anthology/W19-3628
PWC https://paperswithcode.com/paper/rethinking-phonotactic-complexity
Repo
Framework
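The complexity measure above, bits per phoneme, is the average negative log2-probability a phone-level language model assigns to phoneme strings. A hedged sketch with a unigram phone model over toy data (the paper uses stronger neural models; all names and the toy lexicon are illustrative):

```python
import math
from collections import Counter

train = ["kat", "tak", "akt", "kta"]          # toy "words" as phone strings
counts = Counter(ph for w in train for ph in w)
total = sum(counts.values())
p = {ph: c / total for ph, c in counts.items()}   # unigram phone model

def bits_per_phoneme(words):
    """Average -log2 p(phone) over all phones in the given words."""
    phones = [ph for w in words for ph in w]
    return -sum(math.log2(p[ph]) for ph in phones) / len(phones)

# Three equiprobable phones -> log2(3) = 1.585 bits per phoneme.
print(round(bits_per_phoneme(["kat", "tak"]), 4))
```

With a uniform three-phone inventory the measure is exactly log2(3); a real phone-level LM lowers it by exploiting phonotactic constraints, which is why the number reflects phonotactic complexity rather than inventory size alone.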

Variance Reduced Policy Evaluation with Smooth Function Approximation

Title Variance Reduced Policy Evaluation with Smooth Function Approximation
Authors Hoi-To Wai, Mingyi Hong, Zhuoran Yang, Zhaoran Wang, Kexin Tang
Abstract Policy evaluation with smooth and nonlinear function approximation has shown great potential for reinforcement learning. Compared to linear function approximation, it allows for using a richer class of approximation functions, such as neural networks. Traditional algorithms are based on two-timescale stochastic approximation, whose convergence rate is often slow. This paper focuses on an offline setting where a trajectory of $m$ state-action pairs is observed. We formulate the policy evaluation problem as a non-convex primal-dual, finite-sum optimization problem, whose primal sub-problem is non-convex and dual sub-problem is strongly concave. We suggest a single-timescale primal-dual gradient algorithm with variance reduction, and show that it converges to an $\epsilon$-stationary point using $O(m/\epsilon)$ calls (in expectation) to a gradient oracle.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/8814-variance-reduced-policy-evaluation-with-smooth-function-approximation
PDF http://papers.nips.cc/paper/8814-variance-reduced-policy-evaluation-with-smooth-function-approximation.pdf
PWC https://paperswithcode.com/paper/variance-reduced-policy-evaluation-with
Repo
Framework
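A hedged sketch of the variance-reduction ingredient (SVRG-style) on a toy convex finite sum f(x) = (1/m) * sum_i (x - a_i)^2. The paper applies this idea inside a single-timescale non-convex primal-dual scheme, which this sketch does not reproduce; all names are illustrative.

```python
import random

a = [1.0, 2.0, 3.0, 6.0]                      # m = 4 component "data" points
m = len(a)
grad_i = lambda x, i: 2 * (x - a[i])          # gradient of the i-th component
full_grad = lambda x: sum(grad_i(x, i) for i in range(m)) / m

random.seed(0)
x, lr = 0.0, 0.05
for epoch in range(30):
    snap, g_snap = x, full_grad(x)            # snapshot point + its full gradient
    for _ in range(m):
        i = random.randrange(m)
        # Variance-reduced stochastic gradient: unbiased for full_grad(x),
        # with variance that vanishes as x approaches the snapshot.
        v = grad_i(x, i) - grad_i(snap, i) + g_snap
        x -= lr * v
print(round(x, 3))                            # converges to mean(a) = 3.0
```

The correction term `grad_i(x, i) - grad_i(snap, i)` keeps the estimate unbiased while shrinking its variance, which is what lets such methods use a single timescale and still match the fast convergence rates the paper proves in its harder primal-dual setting.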