Paper Group NANR 118
Interactive Shadow Removal and Ground Truth for Difficult Shadow Scenes
Title | Interactive Shadow Removal and Ground Truth for Difficult Shadow Scenes |
Authors | Han Gong; Darren Cosker |
Abstract | A user-centric method for fast, interactive, robust and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. To perform detection, an on-the-fly learning approach is adopted, guided by two rough user inputs marking pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame, which allows efficient estimation of non-uniform shadow illumination changes and results in accurate and robust removal. Another major contribution of this work is the first validated, multi-scene-category ground truth for shadow removal algorithms. This dataset of 186 images eliminates inconsistencies between shadow and shadow-free images and covers a range of shadow types such as soft, textured, colored and broken shadows. Using this data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed algorithm to outperform the state of the art across several measures and shadow categories. To complement our dataset, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research. |
Tasks | |
Published | 2016-09-01 |
URL | https://www.osapublishing.org/abstract.cfm?uri=josaa-33-9-1798 |
https://arxiv.org/pdf/1608.00762 | |
PWC | https://paperswithcode.com/paper/interactive-shadow-removal-and-ground-truth |
Repo | |
Framework | |
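The abstract above only outlines the removal step (penumbra registration followed by estimation of non-uniform illumination changes). As a rough, hypothetical illustration of the simplest related idea, relighting user-marked shadow pixels with a constant per-channel illumination ratio, a sketch could look like the following; the function name, the uniform scaling, and the array layout are assumptions for illustration and this is not the paper's penumbra-based method.

```python
import numpy as np

def simple_shadow_relight(image, shadow_mask, lit_mask):
    """Hypothetical sketch: relight shadow pixels using a per-channel
    illumination ratio estimated from user-marked shadow and lit samples.
    `image` is an HxWx3 float array in [0, 1]; the masks are boolean HxW arrays."""
    result = image.copy()
    for c in range(3):
        lit_mean = image[..., c][lit_mask].mean()
        shadow_mean = image[..., c][shadow_mask].mean()
        scale = lit_mean / max(shadow_mean, 1e-6)  # constant illumination change per channel
        result[..., c][shadow_mask] = np.clip(image[..., c][shadow_mask] * scale, 0.0, 1.0)
    return result
```

A method like the one described in the abstract would instead estimate a spatially varying illumination field across the penumbra rather than a single scale factor per channel.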
UFAL at SemEval-2016 Task 5: Recurrent Neural Networks for Sentence Classification
Title | UFAL at SemEval-2016 Task 5: Recurrent Neural Networks for Sentence Classification |
Authors | Aleš Tamchyna, Kateřina Veselovská |
Abstract | |
Tasks | Aspect-Based Sentiment Analysis, Feature Engineering, Sentence Classification, Sentiment Analysis |
Published | 2016-06-01 |
URL | https://www.aclweb.org/anthology/S16-1059/ |
https://www.aclweb.org/anthology/S16-1059 | |
PWC | https://paperswithcode.com/paper/ufal-at-semeval-2016-task-5-recurrent-neural |
Repo | |
Framework | |
English-Chinese Knowledge Base Translation with Neural Network
Title | English-Chinese Knowledge Base Translation with Neural Network |
Authors | Xiaocheng Feng, Duyu Tang, Bing Qin, Ting Liu |
Abstract | Knowledge bases (KBs) such as Freebase play an important role in many natural language processing tasks. English KBs are considerably larger and of higher quality than those in low-resource languages such as Chinese. To expand a Chinese KB by leveraging English KB resources, an effective approach is to translate the English KB (source) into Chinese (target). In this direction, the two major challenges are modeling triple semantics and building a robust KB translator. We address these challenges by presenting a neural network approach that learns continuous triple representations with a gated neural network, so that source triples and target triples are mapped into the same semantic vector space. We build a new dataset for English-Chinese KB translation from Freebase and compare with several baselines on it. Experimental results show that the proposed method improves translation accuracy compared with baseline methods. We also show that the adaptive composition model improves on standard solutions such as the neural tensor network in terms of translation accuracy. |
Tasks | Information Retrieval, Machine Translation |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/C16-1276/ |
https://www.aclweb.org/anthology/C16-1276 | |
PWC | https://paperswithcode.com/paper/english-chinese-knowledge-base-translation |
Repo | |
Framework | |
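The abstract describes learning continuous triple representations with a gated neural network so that source and target triples share one semantic vector space. The sketch below is a minimal, hypothetical version of such a gated composition; the parameter names, dimensions, and random toy inputs are assumptions rather than the paper's architecture, and in practice the parameters would be trained, for example with a ranking loss that pulls translated triple pairs together.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # embedding dimension (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_triple_encoding(subj, rel, obj, W_gate, W_comp):
    """Hypothetical gated composition of a (subject, relation, object) triple
    into a single d-dimensional vector."""
    x = np.concatenate([subj, rel, obj])   # 3d-dimensional input
    gate = sigmoid(W_gate @ x)             # element-wise gate in (0, 1)
    candidate = np.tanh(W_comp @ x)        # candidate triple representation
    return gate * candidate                # gated representation in R^d

# Toy usage: random parameters and embeddings stand in for learned ones.
W_gate, W_comp = rng.normal(size=(d, 3 * d)), rng.normal(size=(d, 3 * d))
en_triple = gated_triple_encoding(*rng.normal(size=(3, d)), W_gate, W_comp)
zh_triple = gated_triple_encoding(*rng.normal(size=(3, d)), W_gate, W_comp)
cosine = en_triple @ zh_triple / (np.linalg.norm(en_triple) * np.linalg.norm(zh_triple))
```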
Exploring Fine-Grained Emotion Detection in Tweets
Title | Exploring Fine-Grained Emotion Detection in Tweets |
Authors | Jasy Suet Yan Liew, Howard R. Turtle |
Abstract | |
Tasks | Emotion Classification, Sentiment Analysis |
Published | 2016-06-01 |
URL | https://www.aclweb.org/anthology/N16-2011/ |
https://www.aclweb.org/anthology/N16-2011 | |
PWC | https://paperswithcode.com/paper/exploring-fine-grained-emotion-detection-in |
Repo | |
Framework | |
Discourse Relation Sense Classification with Two-Step Classifiers
Title | Discourse Relation Sense Classification with Two-Step Classifiers |
Authors | Yusuke Kido, Akiko Aizawa |
Abstract | |
Tasks | |
Published | 2016-08-01 |
URL | https://www.aclweb.org/anthology/K16-2018/ |
https://www.aclweb.org/anthology/K16-2018 | |
PWC | https://paperswithcode.com/paper/discourse-relation-sense-classification-with |
Repo | |
Framework | |
DA-IICT Submission for PDTB-styled Discourse Parser
Title | DA-IICT Submission for PDTB-styled Discourse Parser |
Authors | Devanshu Jain, Prasenjit Majumder |
Abstract | |
Tasks | |
Published | 2016-08-01 |
URL | https://www.aclweb.org/anthology/K16-2017/ |
https://www.aclweb.org/anthology/K16-2017 | |
PWC | https://paperswithcode.com/paper/da-iict-submission-for-pdtb-styled-discourse |
Repo | |
Framework | |
Discourse Relation Sense Classification Using Cross-argument Semantic Similarity Based on Word Embeddings
Title | Discourse Relation Sense Classification Using Cross-argument Semantic Similarity Based on Word Embeddings |
Authors | Todor Mihaylov, Anette Frank |
Abstract | |
Tasks | Semantic Similarity, Semantic Textual Similarity, Word Embeddings |
Published | 2016-08-01 |
URL | https://www.aclweb.org/anthology/K16-2014/ |
https://www.aclweb.org/anthology/K16-2014 | |
PWC | https://paperswithcode.com/paper/discourse-relation-sense-classification-using |
Repo | |
Framework | |
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Title | Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue |
Authors | |
Abstract | |
Tasks | |
Published | 2016-09-01 |
URL | https://www.aclweb.org/anthology/W16-3600/ |
https://www.aclweb.org/anthology/W16-3600 | |
PWC | https://paperswithcode.com/paper/proceedings-of-the-17th-annual-meeting-of-the |
Repo | |
Framework | |
Collecting Reliable Human Judgements on Machine-Generated Language: The Case of the QG-STEC Data
Title | Collecting Reliable Human Judgements on Machine-Generated Language: The Case of the QG-STEC Data |
Authors | Keith Godwin, Paul Piwek |
Abstract | |
Tasks | Text Generation |
Published | 2016-09-01 |
URL | https://www.aclweb.org/anthology/W16-6634/ |
https://www.aclweb.org/anthology/W16-6634 | |
PWC | https://paperswithcode.com/paper/collecting-reliable-human-judgements-on |
Repo | |
Framework | |
Semantic Annotation Aggregation with Conditional Crowdsourcing Models and Word Embeddings
Title | Semantic Annotation Aggregation with Conditional Crowdsourcing Models and Word Embeddings |
Authors | Paul Felt, Eric Ringger, Kevin Seppi |
Abstract | In modern text annotation projects, crowdsourced annotations are often aggregated using item response models or by majority vote. Recently, item response models enhanced with generative data models have been shown to yield substantial benefits over those with conditional or no data models. However, suitable generative data models do not exist for many tasks, such as semantic labeling tasks. When no generative data model exists, we demonstrate that similar benefits may be derived by conditionally modeling documents that have been previously embedded in a semantic space using recent work in vector space models. We use this approach to show state-of-the-art results on a variety of semantic annotation aggregation tasks. |
Tasks | Word Embeddings |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/C16-1168/ |
https://www.aclweb.org/anthology/C16-1168 | |
PWC | https://paperswithcode.com/paper/semantic-annotation-aggregation-with |
Repo | |
Framework | |
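The abstract contrasts the proposed conditional, embedding-based aggregation with item response models and majority vote. As a point of reference only, and not the paper's model, here is a minimal Dawid-Skene-style EM aggregator; the interface and initialization are assumptions.

```python
import numpy as np

def dawid_skene(votes, n_classes, n_iter=50):
    """Minimal Dawid-Skene-style item-response aggregation.
    `votes[i]` maps annotator id -> observed label for item i."""
    n_items = len(votes)
    # Initialise item posteriors with a smoothed majority vote.
    post = np.full((n_items, n_classes), 1e-6)
    for i, ann in enumerate(votes):
        for lab in ann.values():
            post[i, lab] += 1.0
    post /= post.sum(axis=1, keepdims=True)

    annotators = {a for ann in votes for a in ann}
    for _ in range(n_iter):
        # M-step: class prior and per-annotator confusion matrices (rows = true class).
        prior = post.mean(axis=0)
        conf = {a: np.full((n_classes, n_classes), 1e-6) for a in annotators}
        for i, ann in enumerate(votes):
            for a, lab in ann.items():
                conf[a][:, lab] += post[i]
        for a in annotators:
            conf[a] /= conf[a].sum(axis=1, keepdims=True)
        # E-step: recompute item posteriors over the true class.
        log_post = np.log(prior)[None, :].repeat(n_items, axis=0)
        for i, ann in enumerate(votes):
            for a, lab in ann.items():
                log_post[i] += np.log(conf[a][:, lab])
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
    return post.argmax(axis=1)

# Toy usage: three annotators label two items from {0, 1}.
labels = dawid_skene([{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 0, "c": 0}], n_classes=2)
```

The paper's point is that when no generative data model fits the task, conditioning such an aggregation model on document embeddings can recover similar benefits; that conditional component is not shown here.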
Learning Parametric Sparse Models for Image Super-Resolution
Title | Learning Parametric Sparse Models for Image Super-Resolution |
Authors | Yongbo Li, Weisheng Dong, Xuemei Xie, Guangming Shi, Xin Li, Donglai Xu |
Abstract | Learning accurate prior knowledge of natural images is of great importance for single image super-resolution (SR). Existing SR methods either learn the prior from low/high-resolution patch pairs or estimate the prior model from the input low-resolution (LR) image. The former methods learn high-frequency details and, though effective, are heuristic and have limitations when dealing with blurred LR images; the latter suffer from frequency aliasing. In this paper, we propose to combine these two lines of ideas for image super-resolution. More specifically, the parametric sparse prior of the desired high-resolution (HR) image patches is learned from both the input LR image and a training image dataset. With the learned sparse priors, the sparse codes, and thus the HR image patches, can be accurately recovered by solving a sparse coding problem. Experimental results show that the proposed SR method outperforms existing state-of-the-art methods in terms of both subjective and objective image quality. |
Tasks | Image Super-Resolution, Super-Resolution |
Published | 2016-12-01 |
URL | http://papers.nips.cc/paper/6378-learning-parametric-sparse-models-for-image-super-resolution |
http://papers.nips.cc/paper/6378-learning-parametric-sparse-models-for-image-super-resolution.pdf | |
PWC | https://paperswithcode.com/paper/learning-parametric-sparse-models-for-image |
Repo | |
Framework | |
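The abstract states that, given the learned sparse priors, HR patches are recovered by solving a sparse coding problem. The following is a generic sparse-coding sketch using ISTA with an assumed coupled LR/HR dictionary pair; it illustrates the recovery step only and is not the paper's parametric prior model.

```python
import numpy as np

def ista_sparse_code(y, D, lam=0.1, n_iter=100):
    """Recover a sparse code a with y ~= D @ a via ISTA: gradient steps on the
    least-squares term followed by soft-thresholding (generic sketch)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Toy usage: code an LR patch against a low-res dictionary, then reconstruct the
# HR patch with the coupled high-res dictionary (D_lr, D_hr assumed to be learned).
rng = np.random.default_rng(0)
D_lr, D_hr = rng.normal(size=(36, 128)), rng.normal(size=(144, 128))
lr_patch = rng.normal(size=36)
code = ista_sparse_code(lr_patch, D_lr)
hr_patch = D_hr @ code
```

In the paper the sparsity penalty is not a fixed constant but a parametric prior estimated from both the LR input and external training data; the fixed `lam` here is a stand-in for that learned prior.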
On Regularizing Rademacher Observation Losses
Title | On Regularizing Rademacher Observation Losses |
Authors | Richard Nock |
Abstract | It has recently been shown that supervised learning of linear classifiers with two of the most popular losses, the logistic and square losses, is equivalent to optimizing an equivalent loss over sufficient statistics about the class: Rademacher observations (rados). It has also been shown that learning over rados brings solutions to two prominent problems for which the state of the art of learning from examples can be comparatively inferior and in fact less convenient: protecting and learning from private examples, and learning from distributed datasets without entity resolution. Bis repetita placent: the two proofs of equivalence are different and rely on specific properties of the corresponding losses, so whether these can be unified and generalized inevitably comes to mind. This is our first contribution: we show how they can be fit into the same theory for the equivalence between example and rado losses. As a second contribution, we show that the generalization unveils a surprising new connection to regularized learning, and in particular a sufficient condition under which regularizing the loss over examples is equivalent to regularizing the rados (i.e. the data) in the equivalent rado loss, in such a way that an efficient algorithm for one regularized rado loss may remain efficient when changing the regularizer. This is our third contribution: we give a formal boosting algorithm for the regularized exponential rado-loss which boosts with any of the ridge, lasso, SLOPE, ℓ∞, or elastic-net regularizers, using the same master routine for all. Because the regularized exponential rado-loss is the equivalent of the regularized logistic loss over examples, we obtain the first efficient proxy to the minimisation of the regularized logistic loss over examples using such a wide spectrum of regularizers. Experiments with readily available code show that regularization significantly improves rado-based learning and compares favourably with example-based learning. |
Tasks | Entity Resolution |
Published | 2016-12-01 |
URL | http://papers.nips.cc/paper/6572-on-regularizing-rademacher-observation-losses |
http://papers.nips.cc/paper/6572-on-regularizing-rademacher-observation-losses.pdf | |
PWC | https://paperswithcode.com/paper/on-regularizing-rademacher-observation-losses |
Repo | |
Framework | |
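The abstract does not restate what a Rademacher observation is. Assuming the definition from earlier rado work, namely pi_sigma = (1/2) * sum_i (sigma_i + y_i) * x_i for a signature sigma in {-1, +1}^m, together with an unregularized exponential rado loss, a minimal sketch looks like this; the regularized variants and the boosting algorithm that constitute the paper's contributions are not implemented here.

```python
import numpy as np

def rado(X, y, sigma):
    """Rademacher observation for signature sigma in {-1, +1}^m:
    pi_sigma = 0.5 * sum_i (sigma_i + y_i) * x_i, i.e. the sum of y_i * x_i over
    examples where sigma agrees with the label (definition assumed from prior
    rado work; the abstract itself does not restate it)."""
    return 0.5 * ((sigma + y)[:, None] * X).sum(axis=0)

def exp_rado_loss(theta, rados):
    """Exponential rado loss: mean of exp(-theta . pi_sigma) over a set of rados."""
    return np.mean([np.exp(-theta @ pi) for pi in rados])

# Toy usage: build a few random-signature rados from a small labelled sample.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(8, 3)), rng.choice([-1.0, 1.0], size=8)
rados = [rado(X, y, rng.choice([-1.0, 1.0], size=8)) for _ in range(16)]
loss = exp_rado_loss(np.zeros(3), rados)
```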
Introducing the LCC Metaphor Datasets
Title | Introducing the LCC Metaphor Datasets |
Authors | Michael Mohler, Mary Brunson, Bryan Rink, Marc Tomlinson |
Abstract | In this work, we present the Language Computer Corporation (LCC) annotated metaphor datasets, which represent the largest and most comprehensive resource for metaphor research to date. These datasets were produced over the course of three years by a staff of nine annotators working in four languages (English, Spanish, Russian, and Farsi). As part of these datasets, we provide (1) metaphoricity ratings for within-sentence word pairs on a four-point scale, (2) scored links to our repository of 114 source concept domains and 32 target concept domains, and (3) ratings for the affective polarity and intensity of each pair. Altogether, we provide 188,741 annotations in English (for 80,100 pairs), 159,915 annotations in Spanish (for 63,188 pairs), 99,740 annotations in Russian (for 44,632 pairs), and 137,186 annotations in Farsi (for 57,239 pairs). In addition, we are providing a large set of likely metaphors which have been independently extracted by our two state-of-the-art metaphor detection systems but which have not been analyzed by our team of annotators. |
Tasks | |
Published | 2016-05-01 |
URL | https://www.aclweb.org/anthology/L16-1668/ |
https://www.aclweb.org/anthology/L16-1668 | |
PWC | https://paperswithcode.com/paper/introducing-the-lcc-metaphor-datasets |
Repo | |
Framework | |
How Challenging is Sarcasm versus Irony Classification?: A Study With a Dataset from English Literature
Title | How Challenging is Sarcasm versus Irony Classification?: A Study With a Dataset from English Literature |
Authors | Aditya Joshi, Vaibhav Tripathi, Pushpak Bhattacharyya, Mark Carman, Meghna Singh, Jaya Saraswati, Rajita Shukla |
Abstract | |
Tasks | Sarcasm Detection |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/U16-1013/ |
https://www.aclweb.org/anthology/U16-1013 | |
PWC | https://paperswithcode.com/paper/how-challenging-is-sarcasm-versus-irony |
Repo | |
Framework | |
Scrutable Feature Sets for Stance Classification
Title | Scrutable Feature Sets for Stance Classification |
Authors | Angrosh Mandya, Advaith Siddharthan, Adam Wyner |
Abstract | |
Tasks | Argument Mining, Sentiment Analysis |
Published | 2016-08-01 |
URL | https://www.aclweb.org/anthology/W16-2807/ |
https://www.aclweb.org/anthology/W16-2807 | |
PWC | https://paperswithcode.com/paper/scrutable-feature-sets-for-stance |
Repo | |
Framework | |