May 4, 2019

Paper Group NANR 224

Split LBI: An Iterative Regularization Path with Structural Sparsity

Title Split LBI: An Iterative Regularization Path with Structural Sparsity
Authors Chendi Huang, Xinwei Sun, Jiechao Xiong, Yuan Yao
Abstract An iterative regularization path with structural sparsity is proposed in this paper based on variable splitting and the Linearized Bregman Iteration, hence called *Split LBI*. Despite its simplicity, Split LBI outperforms the popular generalized Lasso in both theory and experiments. A theory of path consistency is presented, showing that, equipped with proper early stopping, Split LBI can achieve model selection consistency under a family of Irrepresentable Conditions that can be weaker than the necessary and sufficient condition for the generalized Lasso. Furthermore, some $\ell_2$ error bounds are also given at the minimax optimal rates. The utility and benefit of the algorithm are illustrated by applications to both traditional image denoising and a novel example of partial order ranking.
Tasks Denoising, Image Denoising, Model Selection
Published 2016-12-01
URL http://papers.nips.cc/paper/6288-split-lbi-an-iterative-regularization-path-with-structural-sparsity
PDF http://papers.nips.cc/paper/6288-split-lbi-an-iterative-regularization-path-with-structural-sparsity.pdf
PWC https://paperswithcode.com/paper/split-lbi-an-iterative-regularization-path
Repo
Framework
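The variable-splitting-plus-LBI scheme the abstract describes can be sketched as follows. This is a hedged reconstruction from the standard Linearized Bregman literature, not the authors' reference code: the loss $\ell(\beta,\gamma) = \|y - X\beta\|^2/(2n) + \|D\beta - \gamma\|^2/(2\nu)$, the step sizes `kappa` and `alpha`, and the soft-thresholding form of the `gamma` update are all assumptions.

```python
import numpy as np

def split_lbi(X, y, D, nu=1.0, kappa=5.0, alpha=1e-3, n_iter=5000):
    """Sketch of Split LBI: variable splitting into a dense beta and a
    sparse gamma, with a Linearized Bregman Iteration driving gamma."""
    n, p = X.shape
    m = D.shape[0]
    beta = np.zeros(p)
    gamma = np.zeros(m)
    z = np.zeros(m)  # auxiliary variable that accumulates gradient information
    path = []
    for _ in range(n_iter):
        # gradients of ||y - X beta||^2/(2n) + ||D beta - gamma||^2/(2 nu)
        grad_beta = X.T @ (X @ beta - y) / n + D.T @ (D @ beta - gamma) / nu
        grad_gamma = (gamma - D @ beta) / nu
        beta = beta - kappa * alpha * grad_beta
        z = z - alpha * grad_gamma
        # soft-thresholding: coordinates of gamma activate one by one,
        # tracing out a regularization path as the iteration proceeds
        gamma = kappa * np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)
        path.append(gamma.copy())
    return beta, gamma, np.array(path)
```

On a noiseless sparse recovery problem with `D` the identity, the support of `gamma` along the path typically picks up the true nonzero coordinates first, which is the early-stopping behaviour the consistency theory formalizes.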

Report of NEWS 2016 Machine Transliteration Shared Task

Title Report of NEWS 2016 Machine Transliteration Shared Task
Authors Xiangyu Duan, Rafael Banchs, Min Zhang, Haizhou Li, A. Kumaran
Abstract
Tasks Information Retrieval, Machine Translation, Transliteration
Published 2016-08-01
URL https://www.aclweb.org/anthology/W16-2709/
PDF https://www.aclweb.org/anthology/W16-2709
PWC https://paperswithcode.com/paper/report-of-news-2016-machine-transliteration
Repo
Framework

A Bayesian method for reducing bias in neural representational similarity analysis

Title A Bayesian method for reducing bias in neural representational similarity analysis
Authors Ming Bo Cai, Nicolas W. Schuck, Jonathan W. Pillow, Yael Niv
Abstract In neuroscience, the similarity matrix of neural activity patterns in response to different sensory stimuli or under different cognitive states reflects the structure of neural representational space. Existing methods derive point estimations of neural activity patterns from noisy neural imaging data, and the similarity is calculated from these point estimations. We show that this approach translates structured noise from estimated patterns into spurious bias structure in the resulting similarity matrix, which is especially severe when the signal-to-noise ratio is low and experimental conditions cannot be fully randomized in a cognitive task. We propose an alternative Bayesian framework for computing representational similarity in which we treat the covariance structure of neural activity patterns as a hyper-parameter in a generative model of the neural data, and directly estimate this covariance structure from imaging data while marginalizing over the unknown activity patterns. Converting the estimated covariance structure into a correlation matrix offers a much less biased estimate of neural representational similarity. Our method can also simultaneously estimate a signal-to-noise map that informs where the learned representational structure is supported more strongly, and the learned covariance matrix can be used as a structured prior to constrain Bayesian estimation of neural activity patterns. Our code is freely available in Brain Imaging Analysis Kit (Brainiak) (https://github.com/IntelPNI/brainiak), a Python toolkit for brain imaging analysis.
Tasks
Published 2016-12-01
URL http://papers.nips.cc/paper/6131-a-bayesian-method-for-reducing-bias-in-neural-representational-similarity-analysis
PDF http://papers.nips.cc/paper/6131-a-bayesian-method-for-reducing-bias-in-neural-representational-similarity-analysis.pdf
PWC https://paperswithcode.com/paper/a-bayesian-method-for-reducing-bias-in-neural
Repo
Framework
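The bias the abstract describes — structured noise in point estimates leaking into the similarity matrix — can be illustrated with a toy simulation. This is a hypothetical numpy sketch, not the Brainiak implementation; the shared-noise setup stands in for overlapping regressors in a real design:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 1000

# True activity patterns for two conditions, made exactly orthogonal,
# so the true representational similarity is ~0.
u1 = rng.standard_normal(n_voxels)
u2 = rng.standard_normal(n_voxels)
u2 -= u1 * (u1 @ u2) / (u1 @ u1)

# Point estimates contaminated by a *shared* noise component, as happens
# when regressors for the two conditions overlap in time.
shared = rng.standard_normal(n_voxels)
est1 = u1 + shared + rng.standard_normal(n_voxels)
est2 = u2 + shared + rng.standard_normal(n_voxels)

r_true = np.corrcoef(u1, u2)[0, 1]     # ~0 by construction
r_est = np.corrcoef(est1, est2)[0, 1]  # spuriously positive (around 1/3)
```

The correlation of the noisy point estimates is strongly positive even though the true patterns are orthogonal, which is exactly the spurious structure the Bayesian approach avoids by marginalizing over the unknown patterns.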

Temporal Modelling of Geospatial Words in Twitter

Title Temporal Modelling of Geospatial Words in Twitter
Authors Bo Han, Antonio Jimeno Yepes, Andrew MacKinlay, Lianhua Chi
Abstract
Tasks Language Modelling
Published 2016-12-01
URL https://www.aclweb.org/anthology/U16-1015/
PDF https://www.aclweb.org/anthology/U16-1015
PWC https://paperswithcode.com/paper/temporal-modelling-of-geospatial-words-in
Repo
Framework

Summarizing Multi-Party Argumentative Conversations in Reader Comment on News

Title Summarizing Multi-Party Argumentative Conversations in Reader Comment on News
Authors Emma Barker, Robert Gaizauskas
Abstract
Tasks Argument Mining
Published 2016-08-01
URL https://www.aclweb.org/anthology/W16-2802/
PDF https://www.aclweb.org/anthology/W16-2802
PWC https://paperswithcode.com/paper/summarizing-multi-party-argumentative
Repo
Framework

Recognizing the Absence of Opposing Arguments in Persuasive Essays

Title Recognizing the Absence of Opposing Arguments in Persuasive Essays
Authors Christian Stab, Iryna Gurevych
Abstract
Tasks Argument Mining, Document Classification
Published 2016-08-01
URL https://www.aclweb.org/anthology/W16-2813/
PDF https://www.aclweb.org/anthology/W16-2813
PWC https://paperswithcode.com/paper/recognizing-the-absence-of-opposing-arguments
Repo
Framework

Error Analysis of Generalized Nyström Kernel Regression

Title Error Analysis of Generalized Nyström Kernel Regression
Authors Hong Chen, Haifeng Xia, Heng Huang, Weidong Cai
Abstract The Nyström method has been used successfully to improve the computational efficiency of kernel ridge regression (KRR). Recently, theoretical analysis of Nyström KRR, including generalization bounds and convergence rates, has been established based on the reproducing kernel Hilbert space (RKHS) associated with a symmetric positive semi-definite kernel. However, in real-world applications, an RKHS is not always optimal and the kernel function need not be symmetric or positive semi-definite. In this paper, we consider generalized Nyström kernel regression (GNKR) with $\ell_2$ coefficient regularization, where the kernel is only required to be continuous and bounded. Error analysis is provided to characterize its generalization performance, and column norm sampling is introduced to construct the refined hypothesis space. In particular, a fast learning rate with polynomial decay is reached for GNKR. Experimental analysis demonstrates the satisfactory performance of GNKR with column norm sampling.
Tasks
Published 2016-12-01
URL http://papers.nips.cc/paper/6602-error-analysis-of-generalized-nystrom-kernel-regression
PDF http://papers.nips.cc/paper/6602-error-analysis-of-generalized-nystrom-kernel-regression.pdf
PWC https://paperswithcode.com/paper/error-analysis-of-generalized-nystrom-kernel
Repo
Framework
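The two ingredients the abstract names — a Nyström-style hypothesis space built from sampled kernel columns with $\ell_2$ coefficient regularization, and column norm sampling of the landmarks — can be sketched as below. The Gaussian kernel, the function names, and the parameter choices are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def gnkr_fit(X, y, m=20, lam=1e-3, sigma=1.0, rng=None):
    """Sketch of generalized Nystrom kernel regression: sample m landmark
    columns with probability proportional to their norms, then solve an
    l2-regularized least squares problem over that basis."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    # column norm sampling: larger-norm columns are more likely landmarks
    p = np.linalg.norm(K, axis=0)
    p = p / p.sum()
    idx = rng.choice(n, size=m, replace=False, p=p)
    K_nm = K[:, idx]
    alpha = np.linalg.solve(K_nm.T @ K_nm + lam * n * np.eye(m), K_nm.T @ y)
    return idx, alpha

def gnkr_predict(X_train, idx, alpha, X_new, sigma=1.0):
    return gaussian_kernel(X_new, X_train[idx], sigma) @ alpha
```

Note that nothing here requires the kernel to be positive semi-definite: the coefficients are regularized directly in $\ell_2$ rather than in an RKHS norm, which is what "generalized" refers to.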

Learning to Recognize Ancillary Information for Automatic Paraphrase Identification

Title Learning to Recognize Ancillary Information for Automatic Paraphrase Identification
Authors Simone Filice, Alessandro Moschitti
Abstract
Tasks Paraphrase Identification
Published 2016-06-01
URL https://www.aclweb.org/anthology/N16-1129/
PDF https://www.aclweb.org/anthology/N16-1129
PWC https://paperswithcode.com/paper/learning-to-recognize-ancillary-information
Repo
Framework

FBK HLT-MT at SemEval-2016 Task 1: Cross-lingual Semantic Similarity Measurement Using Quality Estimation Features and Compositional Bilingual Word Embeddings

Title FBK HLT-MT at SemEval-2016 Task 1: Cross-lingual Semantic Similarity Measurement Using Quality Estimation Features and Compositional Bilingual Word Embeddings
Authors Duygu Ataman, José G. C. de Souza, Marco Turchi, Matteo Negri
Abstract
Tasks Cross-Lingual Semantic Textual Similarity, Machine Translation, Natural Language Inference, Paraphrase Identification, Semantic Similarity, Semantic Textual Similarity, Sentiment Analysis, Word Embeddings, Word Sense Disambiguation
Published 2016-06-01
URL https://www.aclweb.org/anthology/S16-1086/
PDF https://www.aclweb.org/anthology/S16-1086
PWC https://paperswithcode.com/paper/fbk-hlt-mt-at-semeval-2016-task-1-cross
Repo
Framework

Maximization of Approximately Submodular Functions

Title Maximization of Approximately Submodular Functions
Authors Thibaut Horel, Yaron Singer
Abstract We study the problem of maximizing a function that is approximately submodular under a cardinality constraint. Approximate submodularity implicitly appears in a wide range of applications, as in many cases errors in the evaluation of a submodular function break submodularity. Say that $F$ is $\varepsilon$-approximately submodular if there exists a submodular function $f$ such that $(1-\varepsilon)f(S) \leq F(S) \leq (1+\varepsilon)f(S)$ for all subsets $S$. We are interested in characterizing the query complexity of maximizing $F$ subject to a cardinality constraint $k$ as a function of the error level $\varepsilon > 0$. We provide both lower and upper bounds: for $\varepsilon > n^{-1/2}$ we show an exponential query-complexity lower bound. In contrast, when $\varepsilon < 1/k$ or under a stronger bounded curvature assumption, we give constant-factor approximation algorithms.
Tasks
Published 2016-12-01
URL http://papers.nips.cc/paper/6236-maximization-of-approximately-submodular-functions
PDF http://papers.nips.cc/paper/6236-maximization-of-approximately-submodular-functions.pdf
PWC https://paperswithcode.com/paper/maximization-of-approximately-submodular
Repo
Framework
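The $\varepsilon$-approximate submodularity definition can be made concrete with a noisy coverage oracle: the underlying $f$ is a set-cover size (submodular), and the oracle $F$ multiplies it by noise in $[1-\varepsilon, 1+\varepsilon]$. The ground sets and the plain greedy loop below are illustrative inventions, not the paper's algorithms:

```python
import random

def greedy_max(F, items, k):
    """Plain greedy: k rounds, each adding the element whose (noisy)
    marginal value is largest. F is only approximately submodular."""
    S = set()
    for _ in range(k):
        best = max((x for x in items if x not in S), key=lambda x: F(S | {x}))
        S.add(best)
    return S

# Underlying submodular f: size of the covered union. The oracle F returns
# f perturbed multiplicatively, matching (1-eps) f <= F <= (1+eps) f.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}, 3: {1, 5, 6, 7}}
eps = 0.05
noise = random.Random(0)

def F(S):
    f = len(set().union(*(sets[i] for i in S))) if S else 0
    return f * (1 + noise.uniform(-eps, eps))
```

With `eps = 0.05 < 1/k` for `k = 2`, the noise here cannot flip any of the clearly better greedy choices; the paper's point is that once $\varepsilon$ grows past $n^{-1/2}$, no polynomial-query algorithm can cope with adversarial perturbations.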

Automatic Prediction of Linguistic Decline in Writings of Subjects with Degenerative Dementia

Title Automatic Prediction of Linguistic Decline in Writings of Subjects with Degenerative Dementia
Authors Davy Weissenbacher, Travis A. Johnson, Laura Wojtulewicz, Amylou Dueck, Dona Locke, Richard Caselli, Graciela Gonzalez
Abstract
Tasks
Published 2016-06-01
URL https://www.aclweb.org/anthology/N16-1143/
PDF https://www.aclweb.org/anthology/N16-1143
PWC https://paperswithcode.com/paper/automatic-prediction-of-linguistic-decline-in
Repo
Framework

Scaled Least Squares Estimator for GLMs in Large-Scale Problems

Title Scaled Least Squares Estimator for GLMs in Large-Scale Problems
Authors Murat A. Erdogdu, Lee H. Dicker, Mohsen Bayati
Abstract We study the problem of efficiently estimating the coefficients of generalized linear models (GLMs) in the large-scale setting where the number of observations $n$ is much larger than the number of predictors $p$, i.e., $n \gg p \gg 1$. We show that in GLMs with random (not necessarily Gaussian) design, the GLM coefficients are approximately proportional to the corresponding ordinary least squares (OLS) coefficients. Using this relation, we design an algorithm that achieves the same accuracy as the maximum likelihood estimator (MLE) through iterations that attain up to a cubic convergence rate, and that are cheaper than any batch optimization algorithm by at least a factor of $\mathcal{O}(p)$. We provide theoretical guarantees for our algorithm, and analyze the convergence behavior in terms of data dimensions. Finally, we demonstrate the performance of our algorithm through extensive numerical studies on large-scale real and synthetic datasets, and show that it outperforms several other widely used optimization algorithms.
Tasks
Published 2016-12-01
URL http://papers.nips.cc/paper/6522-scaled-least-squares-estimator-for-glms-in-large-scale-problems
PDF http://papers.nips.cc/paper/6522-scaled-least-squares-estimator-for-glms-in-large-scale-problems.pdf
PWC https://paperswithcode.com/paper/scaled-least-squares-estimator-for-glms-in
Repo
Framework
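The proportionality the abstract exploits can be sketched for logistic regression: compute OLS once, then fit only the scalar multiplier by a one-dimensional Newton search on the likelihood. This is an illustrative reconstruction; the variable names and the plain Newton loop are assumptions, not the paper's (more refined) iterations:

```python
import numpy as np

def sls_logistic(X, y, n_newton=25):
    """Scaled-least-squares sketch: logistic coefficients ~ c * beta_OLS,
    so the p-dimensional fit reduces to OLS plus a 1-d search over c."""
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    eta = X @ beta_ols                      # OLS linear predictor
    c = 1.0
    for _ in range(n_newton):
        p = 1.0 / (1.0 + np.exp(-c * eta))
        grad = eta @ (y - p)                # d/dc of the log-likelihood
        hess = -(eta ** 2) @ (p * (1 - p))  # always negative: concave in c
        c -= grad / hess
    return c * beta_ols
```

The expensive part is a single least-squares solve; each Newton step on the scalar costs only $O(n)$, which is where the $O(p)$-per-iteration saving over batch optimizers comes from.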

CNN- and LSTM-based Claim Classification in Online User Comments

Title CNN- and LSTM-based Claim Classification in Online User Comments
Authors Chinnappa Guggilla, Tristan Miller, Iryna Gurevych
Abstract When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks.
Tasks Sarcasm Detection, Word Embeddings
Published 2016-12-01
URL https://www.aclweb.org/anthology/C16-1258/
PDF https://www.aclweb.org/anthology/C16-1258
PWC https://paperswithcode.com/paper/cnn-and-lstm-based-claim-classification-in
Repo
Framework

Privacy Odometers and Filters: Pay-as-you-Go Composition

Title Privacy Odometers and Filters: Pay-as-you-Go Composition
Authors Ryan M. Rogers, Aaron Roth, Jonathan Ullman, Salil Vadhan
Abstract In this paper we initiate the study of adaptive composition in differential privacy when the length of the composition, and the privacy parameters themselves, can be chosen adaptively, as a function of the outcomes of previously run analyses. This case is much more delicate than the setting covered by existing composition theorems, in which the algorithms themselves can be chosen adaptively, but the privacy parameters must be fixed up front. Indeed, it is not even clear how to define differential privacy in the adaptive parameter setting. We proceed by defining two objects which cover the two main use cases of composition theorems. A privacy filter is a stopping time rule that allows an analyst to halt a computation before his pre-specified privacy budget is exceeded. A privacy odometer allows the analyst to track realized privacy loss as he goes, without needing to pre-specify a privacy budget. We show that unlike the case in which privacy parameters are fixed, in the adaptive parameter setting these two use cases are distinct. We show that there exist privacy filters with bounds comparable (up to constants) to existing privacy composition theorems. We also give a privacy odometer that nearly matches non-adaptive private composition theorems, but is sometimes worse by a small asymptotic factor. Moreover, we show that this is inherent, and that any valid privacy odometer in the adaptive parameter setting must lose this factor, which shows a formal separation between the filter and odometer use cases.
Tasks
Published 2016-12-01
URL http://papers.nips.cc/paper/6170-privacy-odometers-and-filters-pay-as-you-go-composition
PDF http://papers.nips.cc/paper/6170-privacy-odometers-and-filters-pay-as-you-go-composition.pdf
PWC https://paperswithcode.com/paper/privacy-odometers-and-filters-pay-as-you-go
Repo
Framework
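The two interfaces the abstract distinguishes can be sketched as follows. This toy version tracks only basic (linear) composition, where the losses simply add; the paper's actual constructions give much tighter, advanced-composition-style bounds for adaptively chosen parameters:

```python
class PrivacyFilter:
    """Sketch of a privacy filter under basic composition: refuse any
    analysis that would push the realized epsilon past a pre-specified
    global budget (a stopping-time rule)."""
    def __init__(self, eps_budget):
        self.eps_budget = eps_budget
        self.spent = 0.0

    def try_spend(self, eps):
        if self.spent + eps > self.eps_budget:
            return False  # HALT: this analysis would exceed the budget
        self.spent += eps
        return True

class PrivacyOdometer:
    """Sketch of an odometer: no budget fixed up front, just a running
    tally of realized privacy loss under basic composition."""
    def __init__(self):
        self.spent = 0.0

    def record(self, eps):
        self.spent += eps
        return self.spent
```

The separation result says the odometer's running bound must be asymptotically slightly worse than the best filter bound for a fixed budget, so the difference between these two interfaces is not just cosmetic.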

RankDCG: Rank-Ordering Evaluation Measure

Title RankDCG: Rank-Ordering Evaluation Measure
Authors Denys Katerenchuk, Andrew Rosenberg
Abstract Ranking is used for a wide array of problems, most notably information retrieval (search). Kendall's τ, Average Precision, and nDCG are a few popular approaches to the evaluation of ranking. When dealing with problems such as user ranking or recommendation systems, all these measures suffer from various problems, including the inability to deal with elements of the same rank, inconsistent and ambiguous lower-bound scores, and an inappropriate cost function. We propose a new measure, a modification of the popular nDCG algorithm, named rankDCG, that addresses these problems. We provide a number of criteria for any effective ranking algorithm and show that only rankDCG satisfies them all. Results are presented on constructed and real data sets. We release a publicly available rankDCG evaluation package.
Tasks Information Retrieval, Recommendation Systems
Published 2016-05-01
URL https://www.aclweb.org/anthology/L16-1583/
PDF https://www.aclweb.org/anthology/L16-1583
PWC https://paperswithcode.com/paper/rankdcg-rank-ordering-evaluation-measure
Repo
Framework
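rankDCG itself is the paper's contribution; as context, the standard nDCG it modifies can be computed as below (a sketch assuming the common log2 discount convention). The tie-handling and lower-bound problems the abstract lists are properties of exactly this baseline:

```python
import numpy as np

def dcg(relevances):
    """Discounted cumulative gain with the common 1/log2(rank+1) discount."""
    rel = np.asarray(relevances, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    return float((rel * discounts).sum())

def ndcg(relevances):
    """nDCG: DCG of the proposed order divided by DCG of the ideal order."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A perfectly ordered list scores exactly 1.0, while any misordering scores below it; note that the worst-case score depends on the relevance distribution, which is one of the ambiguous-lower-bound issues rankDCG is designed to fix.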