Paper Group NANR 125
Random Projections with Asymmetric Quantization. Learning to Jointly Generate and Separate Reflections. Level-Up: Learning to Improve Proficiency Level of Essays. An Ensemble of Humour, Sarcasm, and Hate Speech for Sentiment Classification in Online Reviews. Implementing an archival, multilingual and Semantic Web-compliant taxonomy by means of SKOS …
Random Projections with Asymmetric Quantization
Title | Random Projections with Asymmetric Quantization |
Authors | Xiaoyun Li, Ping Li |
Abstract | The method of random projection has been a popular tool for data compression, similarity search, and machine learning. In many practical scenarios, applying quantization to randomly projected data can further reduce storage cost and facilitate more efficient retrieval, while suffering only a small loss in accuracy. In real-world applications, however, data collected from different sources may be quantized under different schemes, which calls for a study of the asymmetric quantization problem. In this paper, we investigate the cosine similarity estimators derived in such a setting under the Lloyd-Max (LM) quantization scheme. We thoroughly analyze the biases and variances of a series of estimators, including the basic simple estimators, their normalized versions, and their debiased versions. Furthermore, by studying the monotonicity, we show that the expectation of the proposed estimators increases with the true cosine similarity on a broader family of stair-shaped quantizers. Experiments on nearest neighbor search justify the theory and illustrate the effectiveness of our proposed estimators. |
Tasks | Quantization |
Published | 2019-12-01 |
URL | http://papers.nips.cc/paper/9268-random-projections-with-asymmetric-quantization |
PDF | http://papers.nips.cc/paper/9268-random-projections-with-asymmetric-quantization.pdf |
PWC | https://paperswithcode.com/paper/random-projections-with-asymmetric |
Repo | |
Framework | |
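To make the asymmetric setting concrete, here is a minimal NumPy sketch of estimating cosine similarity from randomly projected data when the two sources use different quantizers. The sign quantizer scaled by sqrt(2/pi) coincides with the 1-bit Lloyd-Max codebook for a standard normal, while the uniform rounder is only a convenient stand-in for a second scheme; the two estimators shown are the generic "simple" and "normalized" forms, and the paper's debiased corrections are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, rho = 512, 8192, 0.7                      # data dim, #projections, target cosine

# Two unit vectors with cosine similarity rho
x = rng.normal(size=d); x /= np.linalg.norm(x)
e = rng.normal(size=d); e -= (e @ x) * x; e /= np.linalg.norm(e)
y = rho * x + np.sqrt(1 - rho**2) * e

# Gaussian random projections: projected coordinates are approximately N(0, 1)
R = rng.normal(size=(k, d))
u, v = R @ x, R @ y

# Asymmetric quantization: each "source" is quantized under a different scheme
q_u = np.sign(u) * np.sqrt(2 / np.pi)           # 1-bit (Lloyd-Max) quantizer
q_v = np.round(v / 0.5) * 0.5                   # coarse uniform quantizer, step 0.5

simple = (q_u @ q_v) / k                        # basic estimator (biased)
normalized = (q_u @ q_v) / (np.linalg.norm(q_u) * np.linalg.norm(q_v))
print(f"true={rho:.3f}  simple={simple:.3f}  normalized={normalized:.3f}")
```

Both raw estimates are biased toward zero; analyzing and correcting exactly this kind of bias is what the paper's normalized and debiased estimators address.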
Learning to Jointly Generate and Separate Reflections
Title | Learning to Jointly Generate and Separate Reflections |
Authors | Daiqian Ma, Renjie Wan, Boxin Shi, Alex C. Kot, Ling-Yu Duan |
Abstract | Existing learning-based single-image reflection removal methods that use paired training data have fundamental limitations in generalizing to real-world reflections, due to the limited variations in training pairs. In this work, we propose to jointly generate and separate reflections within a weakly-supervised learning framework, aiming to model the reflection image formation more comprehensively with abundant unpaired supervision. By imposing adversarial losses and a combinable mapping mechanism in a multi-task structure, the proposed framework elegantly integrates the two separate stages of reflection generation and separation into a unified model. A gradient constraint is incorporated into the concurrent multi-task training process as well. In particular, we build an unpaired reflection dataset with 4,027 images, which is useful for facilitating the weakly-supervised learning of the reflection removal model. Extensive experiments on a public benchmark dataset show that our framework performs favorably against state-of-the-art methods and consistently produces visually appealing results. |
Tasks | Multi-Task Learning |
Published | 2019-10-01 |
URL | http://openaccess.thecvf.com/content_ICCV_2019/html/Ma_Learning_to_Jointly_Generate_and_Separate_Reflections_ICCV_2019_paper.html |
PDF | http://openaccess.thecvf.com/content_ICCV_2019/papers/Ma_Learning_to_Jointly_Generate_and_Separate_Reflections_ICCV_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/learning-to-jointly-generate-and-separate |
Repo | |
Framework | |
Level-Up: Learning to Improve Proficiency Level of Essays
Title | Level-Up: Learning to Improve Proficiency Level of Essays |
Authors | Wen-Bin Han, Jhih-Jie Chen, Chingyu Yang, Jason Chang |
Abstract | We introduce a method for generating suggestions for improving the proficiency level of a given sentence. In our approach, the sentence is transformed into a sequence of grammatical elements, with the aim of suggesting more advanced grammatical elements based on the originals. The method involves parsing the sentence, identifying grammatical elements, and ranking related elements to recommend a higher-level grammatical element. We present a prototype tutoring system, Level-Up, that applies the method to English learners' essays in order to assist them in writing and reading. Evaluation on a set of essays shows that our method does assist users in writing. |
Tasks | |
Published | 2019-07-01 |
URL | https://www.aclweb.org/anthology/P19-3033/ |
PWC | https://paperswithcode.com/paper/level-up-learning-to-improve-proficiency |
Repo | |
Framework | |
An Ensemble of Humour, Sarcasm, and Hate Speech for Sentiment Classification in Online Reviews
Title | An Ensemble of Humour, Sarcasm, and Hate Speech for Sentiment Classification in Online Reviews |
Authors | Rohan Badlani, Nishit Asnani, Manan Rai |
Abstract | Due to the nature of online user reviews, sentiment analysis on such data requires a deep semantic understanding of the text. Many online reviews are sarcastic, humorous, or hateful. Signals from such language nuances may reinforce or completely alter the sentiment of a review as predicted by a machine learning model that attempts to detect sentiment alone. Thus, having a model that is explicitly aware of these features should help it perform better on reviews that are characterized by them. We propose a composite two-step model that extracts features pertaining to sarcasm, humour, hate speech, as well as sentiment, in the first step, feeding them in conjunction to inform sentiment classification in the second step. We show that this multi-step approach leads to a better empirical performance for sentiment classification than a model that predicts sentiment alone. A qualitative analysis reveals that the conjunctive approach can better capture the nuances of sentiment as expressed in online reviews. |
Tasks | Sentiment Analysis |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-5544/ |
PWC | https://paperswithcode.com/paper/an-ensemble-of-humour-sarcasm-and-hate |
Repo | |
Framework | |
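As a rough illustration of the two-step idea, the sketch below first trains stand-in detectors for sarcasm, humour, and hate speech, then feeds their predicted probabilities alongside the text features into the sentiment classifier. The toy reviews, labels, and TF-IDF plus logistic regression components are assumptions for the example; the paper's actual feature extractors and datasets are not specified here.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy reviews with sentiment labels plus labels for the three language nuances.
reviews = [
    "great phone, totally love it",
    "oh sure, best purchase ever... not",
    "the manual is funnier than the product, still works fine",
    "terrible support, waste of money",
    "yeah right, 'premium' quality that broke in a day",
    "hateful nonsense from the seller, avoid these people",
]
sentiment = np.array([1, 0, 1, 0, 0, 0])
sarcasm   = np.array([0, 1, 0, 0, 1, 0])
humour    = np.array([0, 0, 1, 0, 0, 0])
hate      = np.array([0, 0, 0, 0, 0, 1])

vec = TfidfVectorizer()
X = vec.fit_transform(reviews).toarray()

# Step 1: one detector per nuance; its predicted probability becomes an auxiliary feature.
aux = np.column_stack([
    LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    for y in (sarcasm, humour, hate)
])

# Step 2: sentiment classifier over text features + nuance signals, fed in conjunction.
final = LogisticRegression(max_iter=1000).fit(np.hstack([X, aux]), sentiment)
print(final.predict(np.hstack([X, aux])))
```

In practice the step-1 detectors would be trained on separate corpora for each nuance and applied to held-out reviews rather than refit on the same data as the sentiment classifier.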
Implementing an archival, multilingual and Semantic Web-compliant taxonomy by means of SKOS (Simple Knowledge Organization System)
Title | Implementing an archival, multilingual and Semantic Web-compliant taxonomy by means of SKOS (Simple Knowledge Organization System) |
Authors | Francesco Gelati |
Abstract | The paper shows how a multilingual hierarchical thesaurus, or taxonomy, can be created and implemented in compliance with Semantic Web requirements by means of the data model SKOS (Simple Knowledge Organization System). It takes the EHRI (European Holocaust Research Infrastructure) portal as an example, and shows how open-source software like SKOS Play! can facilitate the task. |
Tasks | |
Published | 2019-09-01 |
URL | https://www.aclweb.org/anthology/W19-9005/ |
PWC | https://paperswithcode.com/paper/implementing-an-archival-multilingual-and |
Repo | |
Framework | |
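For readers unfamiliar with SKOS, the snippet below builds a tiny multilingual, hierarchical concept scheme with rdflib and serializes it as Turtle. The https://example.org/ namespace and the concept labels are placeholders, not the EHRI portal's actual URIs or terms.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://example.org/vocab/")   # placeholder namespace

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

scheme = EX.thesaurus
g.add((scheme, RDF.type, SKOS.ConceptScheme))

camps, subcamps = EX.camps, EX.subcamps
for concept in (camps, subcamps):
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.inScheme, scheme))

# Multilingual preferred labels
g.add((camps, SKOS.prefLabel, Literal("camps", lang="en")))
g.add((camps, SKOS.prefLabel, Literal("Lager", lang="de")))
g.add((subcamps, SKOS.prefLabel, Literal("subcamps", lang="en")))
g.add((subcamps, SKOS.prefLabel, Literal("Außenlager", lang="de")))

# Hierarchy: broader/narrower links make the taxonomy machine-readable
g.add((subcamps, SKOS.broader, camps))
g.add((camps, SKOS.narrower, subcamps))
g.add((camps, SKOS.topConceptOf, scheme))

print(g.serialize(format="turtle"))
```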
Self-Discriminative Learning for Unsupervised Document Embedding
Title | Self-Discriminative Learning for Unsupervised Document Embedding |
Authors | Hong-You Chen, Chin-Hua Hu, Leila Wehbe, Shou-De Lin |
Abstract | Unsupervised document representation learning is an important task that provides pre-trained features for NLP applications. Unlike most previous work, which learns embeddings based on self-prediction of the surface of text, we explicitly exploit inter-document information and directly model the relations of documents in embedding space with a discriminative network and a novel objective. Extensive experiments on both small and large public datasets show the competitiveness of the proposed method. In evaluations on standard document classification, our model has errors that are 5 to 13% lower than state-of-the-art unsupervised embedding models. The reduction in error is even more pronounced in the scarce-label setting. |
Tasks | Document Classification, Document Embedding, Representation Learning |
Published | 2019-06-01 |
URL | https://www.aclweb.org/anthology/N19-1255/ |
PWC | https://paperswithcode.com/paper/self-discriminative-learning-for-unsupervised |
Repo | |
Framework | |
Calls to Action on Social Media: Detection, Social Impact, and Censorship Potential
Title | Calls to Action on Social Media: Detection, Social Impact, and Censorship Potential |
Authors | Anna Rogers, Olga Kovaleva, Anna Rumshisky |
Abstract | Calls to action on social media are known to be an effective means of mobilization in social movements, and a frequent target of censorship. We investigate the possibility of their automatic detection and their potential for predicting real-world protest events, using historical data from the Bolotnaya protests in Russia (2011-2013). We find that political calls to action can be annotated and detected with relatively high accuracy, and that, in our sample, their volume has a moderate positive correlation with rally attendance. |
Tasks | |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-5005/ |
PWC | https://paperswithcode.com/paper/calls-to-action-on-social-media-detection |
Repo | |
Framework | |
A Synaptic Neural Network and Synapse Learning
Title | A Synaptic Neural Network and Synapse Learning |
Authors | Chang Li |
Abstract | A Synaptic Neural Network (SynaNN) consists of synapses and neurons. Inspired by synapse research in neuroscience, we built a synapse model with a nonlinear synapse function of excitatory and inhibitory channel probabilities. Having introduced the concept of surprisal space and constructed a commutative diagram, we proved that the inhibitory probability function -log(1-exp(-x)) in surprisal space is the topologically conjugate function of the inhibitory complementary probability 1-x in probability space. Furthermore, we found that the derivative of the synapse over the parameter in surprisal space is equal to the negative Bose-Einstein distribution. In addition, we constructed a fully connected synapse graph (tensor) as a synapse block of a synaptic neural network. Moreover, we proved the gradient formula of a cross-entropy loss function over the parameters, so synapse learning can work with the gradient descent and backpropagation algorithms. In a proof-of-concept experiment, we performed MNIST training and testing on an MLP model with synapse networks as hidden layers. |
Tasks | |
Published | 2019-05-01 |
URL | https://openreview.net/forum?id=ryGpEiAcFQ |
PDF | https://openreview.net/pdf?id=ryGpEiAcFQ |
PWC | https://paperswithcode.com/paper/a-synaptic-neural-network-and-synapse |
Repo | |
Framework | |
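The two analytical claims quoted in the abstract are easy to check numerically. The NumPy sketch below verifies (i) the conjugacy F = h(G(h^-1(x))), where h(p) = -log p maps probability space to surprisal space, G(p) = 1 - p is the inhibitory complementary probability, and F(x) = -log(1 - exp(-x)); and (ii) that dF/dx = -1/(e^x - 1), i.e., the negative Bose-Einstein distribution. Taking h to be the surprisal map is an assumption consistent with the abstract's wording.

```python
import numpy as np

x = np.linspace(0.5, 6.0, 200)                     # points in surprisal space (x > 0)

h     = lambda p: -np.log(p)                       # surprisal map: probability -> surprisal
h_inv = lambda s: np.exp(-s)
G     = lambda p: 1.0 - p                          # inhibitory complementary probability
F     = lambda s: -np.log(1.0 - np.exp(-s))        # inhibitory function in surprisal space

# (i) Topological conjugacy: F = h o G o h^-1
assert np.allclose(F(x), h(G(h_inv(x))))

# (ii) dF/dx equals the negative Bose-Einstein distribution -1/(e^x - 1)
eps = 1e-6
numeric = (F(x + eps) - F(x - eps)) / (2 * eps)    # central finite differences
assert np.allclose(numeric, -1.0 / (np.exp(x) - 1.0), atol=1e-4)
print("both identities hold numerically")
```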
Differentiable Kernel Evolution
Title | Differentiable Kernel Evolution |
Authors | Yu Liu, Jihao Liu, Ailing Zeng, Xiaogang Wang |
Abstract | This paper proposes a differentiable kernel evolution (DKE) algorithm to find a better layer operator for convolutional neural networks. Unlike most other neural architecture search (NAS) technologies, we consider the search space at a fundamental scope: kernel space, which encodes the assembly of basic multiply-accumulate (MAC) operations into a conv-kernel. We first deduce a strict form of the generalized convolutional operator under some necessary constraints and construct a continuous search space for its extra degree of freedom, namely the connection of each MAC. Then a novel unsupervised greedy evolution algorithm called gradient agreement guided searching (GAGS) is proposed to learn the optimal location for each MAC in the spatially continuous search space. We leverage DKE on multiple kinds of tasks, such as object classification, face/object detection, and large-scale fine-grained recognition, with various kinds of backbone architectures. Beyond the consistent performance gain, we find that the proposed DKE can further act as an auto-dilated operator, which makes it easy to boost the performance of miniaturized neural networks on multiple tasks. |
Tasks | Object Classification, Object Detection |
Published | 2019-10-01 |
URL | http://openaccess.thecvf.com/content_ICCV_2019/html/Liu_Differentiable_Kernel_Evolution_ICCV_2019_paper.html |
PDF | http://openaccess.thecvf.com/content_ICCV_2019/papers/Liu_Differentiable_Kernel_Evolution_ICCV_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/differentiable-kernel-evolution |
Repo | |
Framework | |
Glocal: Incorporating Global Information in Local Convolution for Keyphrase Extraction
Title | Glocal: Incorporating Global Information in Local Convolution for Keyphrase Extraction |
Authors | Animesh Prasad, Min-Yen Kan |
Abstract | Graph Convolutional Networks (GCNs) are a class of spectral clustering techniques that leverage localized convolution filters to perform supervised classification directly on graphical structures. While such methods model nodes' local pairwise importance, they lack the capability to model global importance relative to other nodes of the graph. This causes such models to miss critical information in tasks where global ranking is a key component, such as keyphrase extraction. We address this shortcoming by properly incorporating global information into the GCN family of models through the use of scaled node weights. In the context of keyphrase extraction, incorporating global random walk scores obtained from TextRank boosts performance significantly. With our proposed method, we achieve state-of-the-art results, bettering a strong baseline by an absolute 2% increase in F1 score. |
Tasks | |
Published | 2019-06-01 |
URL | https://www.aclweb.org/anthology/N19-1182/ |
PWC | https://paperswithcode.com/paper/glocal-incorporating-global-information-in |
Repo | |
Framework | |
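The sketch below illustrates one way to inject a global random-walk score into a GCN-style propagation step: PageRank over a toy word graph (TextRank is PageRank on such a graph) supplies per-node weights that scale the node features before the usual normalized aggregation. The specific scaling scheme, toy graph, and random parameters are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
import networkx as nx

# Toy word graph over candidate keyphrase tokens (hypothetical example).
G = nx.Graph([("graph", "convolution"), ("convolution", "network"),
              ("network", "keyphrase"), ("keyphrase", "extraction"),
              ("extraction", "graph"), ("network", "ranking")])
nodes = list(G.nodes())
A_hat = nx.to_numpy_array(G, nodelist=nodes) + np.eye(len(nodes))   # adjacency + self-loops

# Global importance from a random walk: TextRank is PageRank on this graph.
pr = nx.pagerank(G)
w = np.array([pr[n] for n in nodes])
w = w / w.max()                                   # scaled node weights in (0, 1]

# One "glocal" layer: scale each node's features by its global weight, then apply
# the standard normalized GCN aggregation D^{-1/2} A_hat D^{-1/2} (w * X) Theta.
rng = np.random.default_rng(0)
X = rng.normal(size=(len(nodes), 8))              # node features
Theta = rng.normal(size=(8, 4))                   # layer parameters
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ (w[:, None] * X) @ Theta, 0.0)
print(H.shape)                                    # (num_nodes, 4)
```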
KnockoffGAN: Generating Knockoffs for Feature Selection using Generative Adversarial Networks
Title | KnockoffGAN: Generating Knockoffs for Feature Selection using Generative Adversarial Networks |
Authors | James Jordon, Jinsung Yoon, Mihaela van der Schaar |
Abstract | Feature selection is a pervasive problem. The discovery of relevant features can be as important for performing a particular task (such as avoiding overfitting in prediction) as it can be for understanding the underlying processes governing the true label (such as discovering relevant genetic factors for a disease). Machine-learning-driven feature selection can enable discovery from large, high-dimensional, non-linear observational datasets by creating a subset of features for experts to focus on. In order to use expert time most efficiently, we need a principled methodology capable of controlling the False Discovery Rate. In this work, we build on the promising knockoff framework by developing a flexible knockoff generation model. We adapt the Generative Adversarial Networks framework to allow us to generate knockoffs with no assumptions on the feature distribution. Our model consists of four networks: a generator, a discriminator, a stability network, and a power network. We demonstrate the capability of our model to perform feature selection, showing that it performs as well as the originally proposed knockoff generation model in the Gaussian setting and that it outperforms the original model in non-Gaussian settings, including on a real-world dataset. |
Tasks | Feature Selection |
Published | 2019-05-01 |
URL | https://openreview.net/forum?id=ByeZ5jC5YQ |
PDF | https://openreview.net/pdf?id=ByeZ5jC5YQ |
PWC | https://paperswithcode.com/paper/knockoffgan-generating-knockoffs-for-feature |
Repo | |
Framework | |
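The GAN in this paper only produces the knockoff copies; the FDR-controlled selection step it plugs into is the standard knockoff filter of Barber and Candès. The sketch below shows that downstream step with the usual lasso-coefficient-difference statistic; the lasso penalty is arbitrary, and the column-permutation "knockoffs" in the usage example are a crude stand-in where a trained KnockoffGAN (or the exact Gaussian construction) would be used in practice.

```python
import numpy as np
from sklearn.linear_model import Lasso

def knockoff_select(X, X_knockoff, y, fdr=0.1):
    """Knockoff+ filter: select features whose lasso coefficient exceeds that of
    their knockoff copy by more than a data-driven threshold controlling the FDR."""
    n, d = X.shape
    coef = Lasso(alpha=0.05, max_iter=10000).fit(np.hstack([X, X_knockoff]), y).coef_
    W = np.abs(coef[:d]) - np.abs(coef[d:])              # feature statistics
    for t in np.sort(np.abs(W[W != 0])):                 # candidate thresholds
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= fdr:
            return np.flatnonzero(W >= t)
    return np.array([], dtype=int)

# Usage on synthetic Gaussian data with six truly relevant features.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = X[:, :6] @ np.array([2.0, -1.5, 1.5, -2.0, 1.0, 2.5]) + rng.normal(size=600)
X_ko = rng.permuted(X, axis=0)                           # crude stand-in knockoffs
print(knockoff_select(X, X_ko, y, fdr=0.25))
```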
Adaptive Pyramid Context Network for Semantic Segmentation
Title | Adaptive Pyramid Context Network for Semantic Segmentation |
Authors | Junjun He, Zhongying Deng, Lei Zhou, Yali Wang, Yu Qiao |
Abstract | Recent studies have shown that context features can significantly improve the performance of deep semantic segmentation networks. Current context-based segmentation methods differ in how they construct context features and perform differently in practice. This paper first introduces three desirable properties of context features for the segmentation task. Specifically, we find that Global-guided Local Affinity (GLA) plays a vital role in constructing effective context features, while this property has been largely ignored in previous works. Based on this analysis, this paper proposes the Adaptive Pyramid Context Network (APCNet) for semantic segmentation. APCNet adaptively constructs multi-scale contextual representations with multiple well-designed Adaptive Context Modules (ACMs). Specifically, each ACM leverages a global image representation as guidance to estimate the local affinity coefficients for each sub-region, and then calculates a context vector with these affinities. We empirically evaluate APCNet on three semantic segmentation and scene parsing datasets: PASCAL VOC 2012, Pascal-Context, and ADE20K. Experimental results show that APCNet achieves state-of-the-art performance on all three benchmarks, and obtains a new record of 84.2% on the PASCAL VOC 2012 test set without MS COCO pre-training or any post-processing. |
Tasks | Scene Parsing, Semantic Segmentation |
Published | 2019-06-01 |
URL | http://openaccess.thecvf.com/content_CVPR_2019/html/He_Adaptive_Pyramid_Context_Network_for_Semantic_Segmentation_CVPR_2019_paper.html |
PDF | http://openaccess.thecvf.com/content_CVPR_2019/papers/He_Adaptive_Pyramid_Context_Network_for_Semantic_Segmentation_CVPR_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/adaptive-pyramid-context-network-for-semantic |
Repo | |
Framework | |
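For intuition, here is a loose NumPy sketch of a single Adaptive Context Module at one pyramid scale s: sub-region descriptors come from average pooling, the global average-pooled representation acts as guidance, and a stand-in linear map plus softmax produces the per-pixel affinities over the s×s sub-regions. The 1×1 convolutions, normalization details, and multi-scale fusion of the actual APCNet are simplified away, and the random weights are placeholders.

```python
import numpy as np

def adaptive_context_module(feat, s=2, rng=np.random.default_rng(0)):
    """Sketch of one Adaptive Context Module at pyramid scale s: global-guided
    local affinities over s x s sub-regions, then per-pixel context vectors."""
    c, h, w = feat.shape
    # s x s sub-region descriptors via average pooling
    ys, xs = np.array_split(np.arange(h), s), np.array_split(np.arange(w), s)
    regions = np.stack([feat[:, y][:, :, x].mean(axis=(1, 2)) for y in ys for x in xs])
    # global image representation used as guidance
    g = feat.mean(axis=(1, 2))
    # affinity logits from [pixel feature ; global guidance] via a stand-in linear map
    W_a = rng.normal(scale=0.01, size=(2 * c, s * s))
    pix = feat.reshape(c, h * w).T                                # (h*w, c)
    logits = np.hstack([pix, np.tile(g, (h * w, 1))]) @ W_a       # (h*w, s*s)
    alpha = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return (alpha @ regions).T.reshape(c, h, w)                   # per-pixel context

feat = np.random.default_rng(1).normal(size=(16, 8, 8))           # toy C x H x W features
print(adaptive_context_module(feat, s=2).shape)                   # (16, 8, 8)
```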
CAUnLP at NLP4IF 2019 Shared Task: Context-Dependent BERT for Sentence-Level Propaganda Detection
Title | CAUnLP at NLP4IF 2019 Shared Task: Context-Dependent BERT for Sentence-Level Propaganda Detection |
Authors | Wenjun Hou, Ying Chen |
Abstract | The goal of fine-grained propaganda detection is to determine whether a given sentence uses propaganda techniques (sentence-level) or to recognize which techniques are used (fragment-level). This paper presents the system of our participation in the sentence-level subtask of the propaganda detection shared task. In order to better utilize the document information, we construct context-dependent input pairs (sentence-title pair and sentence-context pair) to fine-tune the pretrained BERT, and we also use undersampling to tackle the problem of imbalanced data. |
Tasks | |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-5010/ |
PWC | https://paperswithcode.com/paper/caunlp-at-nlp4if-2019-shared-task-context |
Repo | |
Framework | |
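The sketch below shows one plausible way to build the two context-dependent input pairs from an article and to undersample the majority class before fine-tuning; the one-sentence context window, the 1:1 balancing ratio, and the helper names are assumptions, not details taken from the system description. Each resulting pair would be fed to BERT as a standard two-segment input (sentence as segment A, title or context as segment B).

```python
import random

def build_pairs(title, sentences, labels):
    """Construct the two input-pair views named in the abstract for each sentence:
    (sentence, article title) and (sentence, surrounding context)."""
    examples = []
    for i, (sent, label) in enumerate(zip(sentences, labels)):
        context = " ".join(sentences[max(0, i - 1): i + 2])   # assumed +/-1 sentence window
        examples.append({
            "sentence_title": (sent, title),
            "sentence_context": (sent, context),
            "label": label,
        })
    return examples

def undersample(examples, seed=13):
    """Randomly drop non-propaganda examples until the two classes are balanced."""
    pos = [e for e in examples if e["label"] == 1]
    neg = [e for e in examples if e["label"] == 0]
    random.Random(seed).shuffle(neg)
    return pos + neg[: len(pos)]

sents = ["They want to destroy everything we hold dear.",
         "The committee met on Tuesday.",
         "Officials released the budget figures.",
         "Only a fool would believe their promises."]
data = undersample(build_pairs("Budget talks continue", sents, [1, 0, 0, 1]))
print(len(data), data[0]["sentence_title"])
```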
University of Edinburgh’s submission to the Document-level Generation and Translation Shared Task
Title | University of Edinburgh’s submission to the Document-level Generation and Translation Shared Task |
Authors | Ratish Puduppully, Jonathan Mallinson, Mirella Lapata |
Abstract | The University of Edinburgh participated in all six tracks: NLG, MT, and MT+NLG, with both English and German as target languages. For the NLG track, we submitted a multilingual system based on the Content Selection and Planning model of Puduppully et al. (2019). For the MT track, we submitted Transformer-based neural machine translation models, where out-of-domain parallel data was augmented with in-domain data extracted from monolingual corpora. Our MT+NLG systems disregard the structured input data and instead rely exclusively on the source summaries. |
Tasks | Machine Translation |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-5630/ |
PWC | https://paperswithcode.com/paper/university-of-edinburghs-submission-to-the |
Repo | |
Framework | |
Personalized Fashion Design
Title | Personalized Fashion Design |
Authors | Cong Yu, Yang Hu, Yan Chen, Bing Zeng |
Abstract | Fashion recommendation is the task of suggesting a fashion item that fits well with a given item. In this work, we propose to automatically synthesize new items for recommendation. We jointly consider the two key issues of the task, i.e., compatibility and personalization. We propose a personalized fashion design framework built on generative adversarial training. A convolutional network is first used to map the query image into a latent vector representation. This latent representation, together with another vector that characterizes the user’s style preference, is taken as the input to the generator network to generate the target item image. Two discriminator networks are built to guide the generation process. One is the classic real/fake discriminator. The other is a matching network that simultaneously models the compatibility between fashion items and learns users’ preference representations. The performance of the proposed method is evaluated on thousands of outfits composed by online users. The experiments show that the items generated by our model are quite realistic. They have better visual quality and a higher matching degree than those generated by alternative methods. |
Tasks | |
Published | 2019-10-01 |
URL | http://openaccess.thecvf.com/content_ICCV_2019/html/Yu_Personalized_Fashion_Design_ICCV_2019_paper.html |
PDF | http://openaccess.thecvf.com/content_ICCV_2019/papers/Yu_Personalized_Fashion_Design_ICCV_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/personalized-fashion-design |
Repo | |
Framework | |
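A minimal PyTorch wiring sketch of the architecture described above: a convolutional encoder maps the query item to a latent vector, which is concatenated with a user-preference vector and fed to the generator, while a real/fake critic and a compatibility-scoring matching network play the roles of the two discriminators. All layer widths, image sizes, and the way the matching network consumes user preferences are placeholders, and no adversarial training loop is shown.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):            # query item image -> latent vector
    def __init__(self, z=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, z))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):          # [item latent ; user preference] -> target item image
    def __init__(self, z=64, u=16):
        super().__init__()
        self.fc = nn.Linear(z + u, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())
    def forward(self, latent, user):
        h = self.fc(torch.cat([latent, user], dim=1)).view(-1, 128, 8, 8)
        return self.net(h)

# Discriminator 1: classic real/fake critic on generated images.
real_fake = nn.Sequential(nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
# Discriminator 2: matching network scoring (query, candidate, user) compatibility.
matcher = nn.Sequential(nn.Linear(64 + 64 + 16, 64), nn.ReLU(), nn.Linear(64, 1))

enc, gen = Encoder(), Generator()
query = torch.randn(2, 3, 32, 32)
user_pref = torch.randn(2, 16)
target = gen(enc(query), user_pref)                       # synthesized recommendation
score = matcher(torch.cat([enc(query), enc(target), user_pref], dim=1))
print(target.shape, real_fake(target).shape, score.shape)
```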