Paper Group NANR 39
Roles and Success in Wikipedia Talk Pages: Identifying Latent Patterns of Behavior. Influence Maximization with \varepsilon-Almost Submodular Threshold Functions. Apples to Apples: Learning Semantics of Common Entities Through a Novel Comprehension Task. Efficient Sublinear-Regret Algorithms for Online Sparse Linear Regression with Limited Observat …
Roles and Success in Wikipedia Talk Pages: Identifying Latent Patterns of Behavior
Title | Roles and Success in Wikipedia Talk Pages: Identifying Latent Patterns of Behavior |
Authors | Keith Maki, Michael Yoder, Yohan Jo, Carolyn Rosé |
Abstract | In this work we investigate how role-based behavior profiles of a Wikipedia editor, considered against the backdrop of roles taken up by other editors in discussions, predict the success of the editor at achieving an impact on the associated article. We first contribute a new public dataset including a task predicting the success of Wikipedia editors involved in discussion, measured by an operationalization of the lasting impact of their edits in the article. We then propose a probabilistic graphical model that advances earlier work inducing latent discussion roles using the light supervision of success in the negotiation task. We evaluate the performance of the model and interpret findings of roles and group configurations that lead to certain outcomes on Wikipedia. |
Tasks | |
Published | 2017-11-01 |
URL | https://www.aclweb.org/anthology/I17-1103/ |
https://www.aclweb.org/anthology/I17-1103 | |
PWC | https://paperswithcode.com/paper/roles-and-success-in-wikipedia-talk-pages |
Repo | |
Framework | |
Influence Maximization with \varepsilon-Almost Submodular Threshold Functions
Title | Influence Maximization with \varepsilon-Almost Submodular Threshold Functions |
Authors | Qiang Li, Wei Chen, Xiaoming Sun, Jialin Zhang |
Abstract | Influence maximization is the problem of selecting $k$ nodes in a social network to maximize their influence spread. The problem has been extensively studied but most works focus on submodular influence diffusion models. In this paper, motivated by empirical evidence, we explore influence maximization in the non-submodular regime. In particular, we study the general threshold model in which a fraction of nodes have non-submodular threshold functions, but their threshold functions are closely upper- and lower-bounded by some submodular functions (we call them $\varepsilon$-almost submodular). We first show a strong hardness result: there is no $1/n^{\gamma/c}$ approximation for influence maximization (unless P = NP) for all networks with up to $n^{\gamma}$ $\varepsilon$-almost submodular nodes, where $\gamma$ is in (0,1) and $c$ is a parameter depending on $\varepsilon$. This indicates that influence maximization is still hard to approximate even though threshold functions are close to submodular. We then provide $(1-\varepsilon)^{\ell}(1-1/e)$ approximation algorithms when the number of $\varepsilon$-almost submodular nodes is $\ell$. Finally, we conduct experiments on a number of real-world datasets, and the results demonstrate that our approximation algorithms outperform other baseline algorithms. |
Tasks | |
Published | 2017-12-01 |
URL | http://papers.nips.cc/paper/6970-influence-maximization-with-varepsilon-almost-submodular-threshold-functions |
http://papers.nips.cc/paper/6970-influence-maximization-with-varepsilon-almost-submodular-threshold-functions.pdf | |
PWC | https://paperswithcode.com/paper/influence-maximization-with-varepsilon-almost |
Repo | |
Framework | |
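The $(1-\varepsilon)^{\ell}(1-1/e)$ guarantee in the abstract above degrades gracefully from the classic greedy bound for submodular spread functions. As a minimal illustration (not the paper's algorithm), here is greedy seed selection with Monte Carlo spread estimation under the independent cascade model, a standard submodular special case; the graph, activation probability, and trial count are all made up:

```python
import random

def simulate_spread(graph, seeds, prob=0.1, trials=200, seed=0):
    """Monte Carlo estimate of the expected influence spread of a seed
    set under the independent cascade model (a submodular special case
    of the general threshold model)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and rng.random() < prob:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, prob=0.1):
    """Greedy seed selection: repeatedly add the node with the largest
    estimated marginal gain in spread.  For submodular spread functions
    this gives the classic (1 - 1/e) approximation guarantee."""
    seeds = set()
    for _ in range(k):
        base = simulate_spread(graph, seeds, prob)
        best, best_gain = None, float('-inf')
        for v in sorted(graph):
            if v in seeds:
                continue
            gain = simulate_spread(graph, seeds | {v}, prob) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds
```

On a toy graph where one node reaches the most others, the greedy pass picks that node first and then the best remaining complement.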
Apples to Apples: Learning Semantics of Common Entities Through a Novel Comprehension Task
Title | Apples to Apples: Learning Semantics of Common Entities Through a Novel Comprehension Task |
Authors | Omid Bakhshandeh, James Allen |
Abstract | Understanding common entities and their attributes is a primary requirement for any system that comprehends natural language. In order to enable learning about common entities, we introduce a novel machine comprehension task, GuessTwo: given a short paragraph comparing different aspects of two real-world semantically-similar entities, a system should guess what those entities are. Accomplishing this task requires deep language understanding which enables inference, connecting each comparison paragraph to different levels of knowledge about world entities and their attributes. So far we have crowdsourced a dataset of more than 14K comparison paragraphs comparing entities from a variety of categories such as fruits and animals. We have designed two schemes for evaluation: open-ended, and binary-choice prediction. For benchmarking further progress in the task, we have collected a set of paragraphs as the test set on which humans can accomplish the task with an accuracy of 94.2% on open-ended prediction. We have implemented various models for tackling the task, ranging from semantic-driven to neural models. The semantic-driven approach outperforms the neural models; however, the results indicate that the task is very challenging across the models. |
Tasks | Part-Of-Speech Tagging, Reading Comprehension, Semantic Textual Similarity, Word Embeddings |
Published | 2017-07-01 |
URL | https://www.aclweb.org/anthology/P17-1084/ |
https://www.aclweb.org/anthology/P17-1084 | |
PWC | https://paperswithcode.com/paper/apples-to-apples-learning-semantics-of-common |
Repo | |
Framework | |
Efficient Sublinear-Regret Algorithms for Online Sparse Linear Regression with Limited Observation
Title | Efficient Sublinear-Regret Algorithms for Online Sparse Linear Regression with Limited Observation |
Authors | Shinji Ito, Daisuke Hatano, Hanna Sumita, Akihiro Yabe, Takuro Fukunaga, Naonori Kakimura, Ken-Ichi Kawarabayashi |
Abstract | Online sparse linear regression is the task of applying linear regression analysis to examples arriving sequentially, subject to the resource constraint that only a limited number of features of each example can be observed. Despite its importance in many practical applications, it has recently been shown that there is no polynomial-time sublinear-regret algorithm unless NP$\subseteq$BPP, and only an exponential-time sublinear-regret algorithm has been found. In this paper, we introduce mild assumptions to solve the problem. Under these assumptions, we present polynomial-time sublinear-regret algorithms for online sparse linear regression. In addition, thorough experiments with publicly available data demonstrate that our algorithms outperform other known algorithms. |
Tasks | |
Published | 2017-12-01 |
URL | http://papers.nips.cc/paper/6998-efficient-sublinear-regret-algorithms-for-online-sparse-linear-regression-with-limited-observation |
http://papers.nips.cc/paper/6998-efficient-sublinear-regret-algorithms-for-online-sparse-linear-regression-with-limited-observation.pdf | |
PWC | https://paperswithcode.com/paper/efficient-sublinear-regret-algorithms-for |
Repo | |
Framework | |
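To make the observation-limited protocol in the abstract above concrete, here is a naive baseline sketch (not the paper's sublinear-regret algorithm): each round the learner observes only k of the d feature coordinates, predicts from those alone, and takes a gradient step on the squared loss. The coordinate-selection rule and learning rate are illustrative choices:

```python
import random

def online_sparse_regression(stream, d, k, lr=0.1, seed=0):
    """Online linear regression when only k of d feature coordinates
    may be observed per round: pick the coordinates with the largest
    current |weight| (random tie-break), predict using those alone,
    then do a gradient step on the squared loss.  A naive baseline
    illustrating the setting, not the paper's algorithm."""
    rng = random.Random(seed)
    w = [0.0] * d
    losses = []
    for x, y in stream:
        # observe at most k coordinates, preferring large current weights
        idx = sorted(range(d), key=lambda j: (-abs(w[j]), rng.random()))[:k]
        pred = sum(w[j] * x[j] for j in idx)  # only observed coordinates
        err = pred - y
        losses.append(err * err)
        for j in idx:                         # update observed coordinates only
            w[j] -= lr * err * x[j]
    return w, losses
```

On a stream whose target depends on a single feature, the weight on that coordinate converges and the per-round loss shrinks.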
Modeling Context Words as Regions: An Ordinal Regression Approach to Word Embedding
Title | Modeling Context Words as Regions: An Ordinal Regression Approach to Word Embedding |
Authors | Shoaib Jameel, Steven Schockaert |
Abstract | Vector representations of word meaning have found many applications in the field of natural language processing. Word vectors intuitively represent the average context in which a given word tends to occur, but they cannot explicitly model the diversity of these contexts. Although region representations of word meaning offer a natural alternative to word vectors, only a few methods have been proposed that can effectively learn word regions. In this paper, we propose a new word embedding model which is based on SVM regression. We show that the underlying ranking interpretation of word contexts is sufficient to match, and sometimes outperform, the performance of popular methods such as Skip-gram. Furthermore, we show that by using a quadratic kernel, we can effectively learn word regions, which outperform existing unsupervised models for the task of hypernym detection. |
Tasks | Word Embeddings |
Published | 2017-08-01 |
URL | https://www.aclweb.org/anthology/K17-1014/ |
https://www.aclweb.org/anthology/K17-1014 | |
PWC | https://paperswithcode.com/paper/modeling-context-words-as-regions-an-ordinal |
Repo | |
Framework | |
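The hypernym-detection use of word regions in the abstract above rests on a containment intuition: if a word's region covers the contexts it can occur in, a hypernym's region should contain its hyponym's. A minimal sketch with axis-aligned boxes and made-up coordinates (the paper learns regions with a quadratic kernel; these toy boxes only illustrate the containment test):

```python
def contains(outer, inner):
    """True iff the axis-aligned box `inner` lies inside `outer`.
    Boxes are (lows, highs) pairs of coordinate lists."""
    (lo_o, hi_o), (lo_i, hi_i) = outer, inner
    return all(o <= i for o, i in zip(lo_o, lo_i)) and \
           all(i <= o for i, o in zip(hi_i, hi_o))

# Hypothetical regions: 'fruit' should contain 'apple', not vice versa.
fruit = ([0.0, 0.0], [1.0, 1.0])
apple = ([0.2, 0.3], [0.6, 0.8])
```

Asymmetry is the point: containment gives a directed hypernymy signal that symmetric vector similarity cannot.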
Revisiting Tones in Twic East Dinka
Title | Revisiting Tones in Twic East Dinka |
Authors | Yu-Leng Lin |
Abstract | |
Tasks | |
Published | 2017-11-01 |
URL | https://www.aclweb.org/anthology/Y17-1053/ |
https://www.aclweb.org/anthology/Y17-1053 | |
PWC | https://paperswithcode.com/paper/revisiting-tones-in-twic-east-dinka |
Repo | |
Framework | |
Coarse-To-Fine Parsing for Expressive Grammar Formalisms
Title | Coarse-To-Fine Parsing for Expressive Grammar Formalisms |
Authors | Christoph Teichmann, Alexander Koller, Jonas Groschwitz |
Abstract | We generalize coarse-to-fine parsing to grammar formalisms that are more expressive than PCFGs and/or describe languages of trees or graphs. We evaluate our algorithm on PCFG, PTAG, and graph parsing. While we achieve the expected performance gains on PCFGs, coarse-to-fine does not help for PTAG and can even slow down parsing for graphs. We discuss the implications of this finding. |
Tasks | |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/W17-6317/ |
https://www.aclweb.org/anthology/W17-6317 | |
PWC | https://paperswithcode.com/paper/coarse-to-fine-parsing-for-expressive-grammar |
Repo | |
Framework | |
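The coarse-to-fine scheme the abstract above generalizes can be sketched with an unweighted CKY recognizer: project fine labels onto a coarse grammar, parse once with it, and let the fine pass build only items whose projection survived the coarse chart. The grammar, labels, and projection below are made up; the paper works with weighted grammars and more expressive formalisms:

```python
from collections import defaultdict

def cky_items(tokens, unary, binary, keep=lambda i, j, lab: True):
    """CKY recognizer over a CNF grammar.  `unary` maps token -> labels,
    `binary` maps (left, right) -> parent labels; `keep` can veto items,
    which is where coarse-to-fine pruning plugs in."""
    n = len(tokens)
    chart = defaultdict(set)
    for i, tok in enumerate(tokens):
        for lab in unary.get(tok, ()):
            if keep(i, i + 1, lab):
                chart[(i, i + 1)].add(lab)
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for m in range(i + 1, j):
                for left in chart[(i, m)]:
                    for right in chart[(m, j)]:
                        for parent in binary.get((left, right), ()):
                            if keep(i, j, parent):
                                chart[(i, j)].add(parent)
    return {(i, j, lab) for (i, j), labs in chart.items() for lab in labs}

def coarse_to_fine(tokens, fine_unary, fine_binary, project):
    """Parse once with the projected (coarse) grammar, then let the fine
    pass build only items whose projection is in the coarse chart.
    A toy unweighted recognizer illustrating the scheme."""
    coarse_unary = {t: {project(l) for l in ls} for t, ls in fine_unary.items()}
    coarse_binary = defaultdict(set)
    for (l, r), parents in fine_binary.items():
        coarse_binary[(project(l), project(r))] |= {project(p) for p in parents}
    coarse = cky_items(tokens, coarse_unary, coarse_binary)
    return cky_items(tokens, fine_unary, fine_binary,
                     keep=lambda i, j, lab: (i, j, project(lab)) in coarse)
```

The paper's negative finding for PTAG and graph parsing suggests the coarse pass's filtering gain does not always outweigh its cost.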
Evaluating LSTM models for grammatical function labelling
Title | Evaluating LSTM models for grammatical function labelling |
Authors | Bich-Ngoc Do, Ines Rehbein |
Abstract | To improve grammatical function labelling for German, we augment the labelling component of a neural dependency parser with a decision history. We present different ways to encode the history, using different LSTM architectures, and show that our models yield significant improvements, resulting in a LAS for German that is close to the best result from the SPMRL 2014 shared task (without the reranker). |
Tasks | Dependency Parsing |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/W17-6318/ |
https://www.aclweb.org/anthology/W17-6318 | |
PWC | https://paperswithcode.com/paper/evaluating-lstm-models-for-grammatical |
Repo | |
Framework | |
Capturing Dependency Syntax with "Deep" Sequential Models
Title | Capturing Dependency Syntax with "Deep" Sequential Models |
Authors | Yoav Goldberg |
Abstract | |
Tasks | |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/W17-6501/ |
https://www.aclweb.org/anthology/W17-6501 | |
PWC | https://paperswithcode.com/paper/capturing-dependency-syntax-with-deep |
Repo | |
Framework | |
Assessing the Annotation Consistency of the Universal Dependencies Corpora
Title | Assessing the Annotation Consistency of the Universal Dependencies Corpora |
Authors | Marie-Catherine de Marneffe, Matias Grioni, Jenna Kanerva, Filip Ginter |
Abstract | |
Tasks | |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/W17-6514/ |
https://www.aclweb.org/anthology/W17-6514 | |
PWC | https://paperswithcode.com/paper/assessing-the-annotation-consistency-of-the |
Repo | |
Framework | |
Improving Opinion Summarization by Assessing Sentence Importance in On-line Reviews
Title | Improving Opinion Summarization by Assessing Sentence Importance in On-line Reviews |
Authors | Rafael Anchiêta, Rogerio Figueredo Sousa, Raimundo Moura, Thiago Pardo |
Abstract | |
Tasks | |
Published | 2017-10-01 |
URL | https://www.aclweb.org/anthology/W17-6605/ |
https://www.aclweb.org/anthology/W17-6605 | |
PWC | https://paperswithcode.com/paper/improving-opinion-summarization-by-assessing |
Repo | |
Framework | |
NCYU at IJCNLP-2017 Task 2: Dimensional Sentiment Analysis for Chinese Phrases using Vector Representations
Title | NCYU at IJCNLP-2017 Task 2: Dimensional Sentiment Analysis for Chinese Phrases using Vector Representations |
Authors | Jui-Feng Yeh, Jian-Cheng Tsai, Bo-Wei Wu, Tai-You Kuang |
Abstract | This paper presents two vector representations proposed by National Chiayi University (NCYU) for phrase-based sentiment detection, used to compete in the Dimensional Sentiment Analysis for Chinese Phrases (DSACP) shared task at IJCNLP 2017. Vector-based models for analyzing sentiment phrase-like units are proposed. E-HowNet-based clustering is first used to obtain valence and arousal values for sentiment words, and an out-of-vocabulary function is defined to estimate the dimensional emotion values of unknown words. A vector-based approach is then proposed for predicting the corresponding values of sentiment phrase-like units. The experimental results show the proposed approach to be effective. |
Tasks | Sentiment Analysis |
Published | 2017-12-01 |
URL | https://www.aclweb.org/anthology/I17-4018/ |
https://www.aclweb.org/anthology/I17-4018 | |
PWC | https://paperswithcode.com/paper/ncyu-at-ijcnlp-2017-task-2-dimensional |
Repo | |
Framework | |
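The out-of-vocabulary step in the NCYU abstract above can be illustrated with a common nearest-neighbour baseline (this is an assumption for illustration, not NCYU's exact formula): estimate an unknown word's (valence, arousal) as the average over the k lexicon words with the most similar vectors. The lexicon, vectors, and scores below are all made up:

```python
import math

def cosine(u, v):
    """Cosine similarity of two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_va(vec, lexicon, k=2):
    """Estimate (valence, arousal) for an out-of-vocabulary word as the
    average over the k lexicon entries with the most similar vectors.
    `lexicon` maps word -> (vector, (valence, arousal))."""
    ranked = sorted(lexicon.values(),
                    key=lambda entry: cosine(vec, entry[0]),
                    reverse=True)[:k]
    val = sum(va[0] for _, va in ranked) / len(ranked)
    aro = sum(va[1] for _, va in ranked) / len(ranked)
    return val, aro
```

A query vector near the positive words inherits their high valence.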
Consistent Robust Regression
Title | Consistent Robust Regression |
Authors | Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, Purushottam Kar |
Abstract | We present the first efficient and provably consistent estimator for the robust regression problem. The area of robust learning and optimization has generated a significant amount of interest in the learning and statistics communities in recent years owing to its applicability in scenarios with corrupted data, as well as in handling model mis-specifications. In particular, special interest has been devoted to the fundamental problem of robust linear regression where estimators that can tolerate corruption in up to a constant fraction of the response variables are widely studied. Surprisingly, however, to date we are not aware of a polynomial-time estimator that offers a consistent estimate in the presence of dense, unbounded corruptions. In this work we present such an estimator, called CRR. This solves an open problem put forward in the work of (Bhatia et al., 2015). Our consistency analysis requires a novel two-stage proof technique involving a careful analysis of the stability of ordered lists which may be of independent interest. We show that CRR not only offers consistent estimates, but is empirically far superior to several other recently proposed algorithms for the robust regression problem, including extended Lasso and the TORRENT algorithm. In comparison, CRR offers comparable or better model recovery but with runtimes that are faster by an order of magnitude. |
Tasks | |
Published | 2017-12-01 |
URL | http://papers.nips.cc/paper/6806-consistent-robust-regression |
http://papers.nips.cc/paper/6806-consistent-robust-regression.pdf | |
PWC | https://paperswithcode.com/paper/consistent-robust-regression |
Repo | |
Framework | |
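The flavour of estimator the CRR abstract above describes can be illustrated with an iterative hard-thresholding sketch in the spirit of TORRENT/CRR (not the paper's exact algorithm or its consistency analysis): alternately fit the slope by least squares on the points currently deemed clean, then re-flag the points with the largest residuals as corrupted. One-dimensional and assumption-laden, purely for illustration:

```python
def robust_fit_1d(xs, ys, n_corrupt, iters=10):
    """Robust 1-D regression through the origin by alternating
    (i) least-squares fit of the slope w on points deemed clean with
    (ii) re-flagging the n_corrupt largest-residual points as corrupted."""
    n = len(xs)
    clean = list(range(n))
    w = 0.0
    for _ in range(iters):
        num = sum(xs[i] * ys[i] for i in clean)
        den = sum(xs[i] * xs[i] for i in clean) or 1.0
        w = num / den
        # keep the n - n_corrupt points with the smallest residuals
        order = sorted(range(n), key=lambda i: abs(ys[i] - w * xs[i]))
        clean = order[:n - n_corrupt]
    return w
```

With a few grossly corrupted responses, the first fit is biased, but the corrupted points then show huge residuals, get excluded, and the second fit recovers the true slope.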
Argument Relation Classification Using a Joint Inference Model
Title | Argument Relation Classification Using a Joint Inference Model |
Authors | Yufang Hou, Charles Jochim |
Abstract | In this paper, we address the problem of argument relation classification where argument units are from different texts. We design a joint inference method for the task by modeling argument relation classification and stance classification jointly. We show that our joint model improves the results over several strong baselines. |
Tasks | Argument Mining, Relation Classification |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/W17-5107/ |
https://www.aclweb.org/anthology/W17-5107 | |
PWC | https://paperswithcode.com/paper/argument-relation-classification-using-a |
Repo | |
Framework | |
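The joint-inference idea in the abstract above, modeling relation and stance classification together, can be sketched as a tiny constrained search: choose the (relation, stance, stance) triple maximizing summed classifier scores, subject to 'support' requiring matching stances and 'attack' opposing ones. The score dictionaries and the specific consistency constraint are made up for illustration, not the paper's model:

```python
import itertools

def joint_predict(rel_scores, stance_a, stance_b):
    """Pick the (relation, stance_a, stance_b) triple with maximal
    summed scores, subject to stance/relation consistency constraints."""
    best, best_score = None, float('-inf')
    for rel, sa, sb in itertools.product(rel_scores, stance_a, stance_b):
        if rel == 'support' and sa != sb:
            continue  # support requires matching stances
        if rel == 'attack' and sa == sb:
            continue  # attack requires opposing stances
        score = rel_scores[rel] + stance_a[sa] + stance_b[sb]
        if score > best_score:
            best, best_score = (rel, sa, sb), score
    return best
```

The point of joint inference: the independently best labels may violate the constraint, so the joint optimum can differ from the per-classifier argmaxes.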
Conceptualizing EDUCATION in Hong Kong and China (1984-2014)
Title | Conceptualizing EDUCATION in Hong Kong and China (1984-2014) |
Authors | Kathleen Ahrens, Huiheng Zeng |
Abstract | |
Tasks | |
Published | 2017-11-01 |
URL | https://www.aclweb.org/anthology/Y17-1041/ |
https://www.aclweb.org/anthology/Y17-1041 | |
PWC | https://paperswithcode.com/paper/conceptualizing-education-in-hong-kong-and |
Repo | |
Framework | |