May 6, 2019

2802 words 14 mins read

Paper Group ANR 234

Optimized Kernel-based Projection Space of Riemannian Manifolds. Fair Division via Social Comparison. A novel learning-based frame pooling method for Event Detection. The RNN-ELM Classifier. Lexicons and Minimum Risk Training for Neural Machine Translation: NAIST-CMU at WAT2016. Efficient Continuous Relaxations for Dense CRF. Enhancing Use Case Poi …

Optimized Kernel-based Projection Space of Riemannian Manifolds

Title Optimized Kernel-based Projection Space of Riemannian Manifolds
Authors Azadeh Alavi, Vishal M Patel, Rama Chellappa
Abstract It has been shown that encoding images and videos as Symmetric Positive Definite (SPD) matrices, and considering the Riemannian geometry of the resulting space, can lead to increased classification performance. Manifold geometry is typically taken into account by embedding the manifolds in tangent spaces or Reproducing Kernel Hilbert Spaces (RKHS). Recently, it was shown that embedding such manifolds into a Random Projection Space (RPS), rather than an RKHS or tangent space, leads to higher classification and clustering performance. However, depending on the structure and dimensionality of the randomly generated hyperplanes, classification performance over RPS can vary significantly. In addition, fine-tuning RPS is data-expensive (it requires validation data), time-consuming, and resource-demanding. In this paper, we introduce an approach to learn an optimized kernel-based projection (with fixed dimensionality) by employing the concept of subspace clustering. Specifically, we encode the association of each data point with its underlying subspace to generate meaningful hyperplanes. Further, we adopt dictionary learning, sparse coding, and discriminative analysis for the optimized kernel-based projection space (OPS) on SPD manifolds. We validate our algorithm on several classification tasks. The experimental results demonstrate that the proposed method outperforms state-of-the-art methods on such manifolds.
Tasks Dictionary Learning
Published 2016-02-10
URL http://arxiv.org/abs/1602.03570v3
PDF http://arxiv.org/pdf/1602.03570v3.pdf
PWC https://paperswithcode.com/paper/optimized-kernel-based-projection-space-of
Repo
Framework
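
The projection-space idea above can be made concrete with a small sketch: map each SPD descriptor to a fixed-length vector of kernel responses against a set of anchor SPD matrices. This is only an illustration of the general mechanism; the log-Euclidean RBF kernel, the random anchors, and all function names below are assumptions, whereas the paper learns its hyperplanes via subspace clustering and dictionary learning.

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def log_euclidean_rbf(X, Y, gamma=0.5):
    """Log-Euclidean RBF kernel between two SPD matrices (an assumed kernel choice)."""
    d = np.linalg.norm(spd_log(X) - spd_log(Y), ord="fro")
    return np.exp(-gamma * d ** 2)

def kernel_projection(spd_samples, anchors, gamma=0.5):
    """Map each SPD matrix to a fixed-length feature vector of kernel responses against
    anchor SPD matrices. The paper learns such anchors (hyperplanes) via subspace
    clustering and dictionary learning; here they are simply supplied by the caller."""
    return np.array([[log_euclidean_rbf(X, A, gamma) for A in anchors]
                     for X in spd_samples])

def random_spd(d, rng):
    """Random SPD matrix, an illustrative stand-in for learned anchors or descriptors."""
    M = rng.standard_normal((d, d))
    return M @ M.T + d * np.eye(d)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = [random_spd(5, rng) for _ in range(10)]   # e.g. region covariance descriptors
    anchors = [random_spd(5, rng) for _ in range(8)]    # fixed projection dimensionality: 8
    print(kernel_projection(samples, anchors).shape)    # (10, 8)
```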

Fair Division via Social Comparison

Title Fair Division via Social Comparison
Authors Rediet Abebe, Jon Kleinberg, David Parkes
Abstract In the classical cake cutting problem, a resource must be divided among agents with different utilities so that each agent believes they have received a fair share of the resource relative to the other agents. We introduce a variant of the problem in which we model an underlying social network on the agents with a graph, and agents only evaluate their shares relative to their neighbors' shares in the network. This formulation captures many situations in which it is unrealistic to assume a global view, and also exposes interesting phenomena in the original problem. Specifically, we say an allocation is locally envy-free if no agent envies a neighbor's allocation, and locally proportional if each agent values her own allocation at least as much as the average value of her neighbors' allocations, with the former implying the latter. While global envy-freeness implies local envy-freeness, global proportionality does not imply local proportionality, nor vice versa. A general result is that for any two distinct graphs on the same set of nodes and an allocation, there exists a set of valuation functions such that the allocation is locally proportional on one but not on the other. We fully characterize the set of graphs for which an oblivious single-cutter protocol (a protocol that uses a single agent to cut the cake into pieces) admits a bounded protocol with $O(n^2)$ query complexity for locally envy-free allocations in the Robertson-Webb model. We also consider the price of envy-freeness, which compares the total utility of an optimal allocation to the best utility of an allocation that is envy-free. We show that a lower bound of $\Omega(\sqrt{n})$ on the price of envy-freeness for global allocations in fact holds for local envy-freeness in any connected undirected graph. Thus, sparse graphs surprisingly do not provide more flexibility with respect to the quality of envy-free allocations.
Tasks
Published 2016-11-20
URL http://arxiv.org/abs/1611.06589v2
PDF http://arxiv.org/pdf/1611.06589v2.pdf
PWC https://paperswithcode.com/paper/fair-division-via-social-comparison
Repo
Framework
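
The local fairness notions defined in the abstract translate directly into checkable conditions on a graph. Below is a minimal sketch assuming a toy representation (an adjacency dict, interval pieces of a uniform cake, and a valuation callable), none of which is prescribed by the paper.

```python
def locally_envy_free(adj, value, alloc):
    """adj[i] lists agent i's neighbors; value(i, piece) is agent i's value for a piece;
    alloc[i] is the piece assigned to agent i. Locally envy-free: no agent values a
    neighbor's piece above her own."""
    return all(value(i, alloc[i]) >= value(i, alloc[j])
               for i in adj for j in adj[i])

def locally_proportional(adj, value, alloc):
    """Each agent values her own piece at least as much as the average value she
    assigns to her neighbors' pieces."""
    return all(value(i, alloc[i]) >=
               sum(value(i, alloc[j]) for j in adj[i]) / len(adj[i])
               for i in adj if adj[i])

if __name__ == "__main__":
    # Toy instance: 3 agents on a path, pieces are subintervals of [0, 1], and each
    # agent's valuation is simply interval length (a uniform cake).
    adj = {0: [1], 1: [0, 2], 2: [1]}
    alloc = {0: (0.0, 0.4), 1: (0.4, 0.7), 2: (0.7, 1.0)}
    value = lambda i, piece: piece[1] - piece[0]
    print(locally_envy_free(adj, value, alloc))     # False: agent 1 envies agent 0
    print(locally_proportional(adj, value, alloc))  # False: agent 1 is below her neighbors' average
```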

A novel learning-based frame pooling method for Event Detection

Title A novel learning-based frame pooling method for Event Detection
Authors Lan Wang, Chenqiang Gao, Jiang Liu, Deyu Meng
Abstract Detecting complex events in a large video collection crawled from video websites is a challenging task. When directly applying strong image-based feature representations, e.g., HOG or SIFT, to videos, we face the problem of how to pool multiple per-frame feature representations into a single representation. In this paper, we propose a novel learning-based frame pooling method. We formulate pooling weight learning as an optimization problem, so our method can automatically learn the best pooling weight configuration for each specific event category. Experimental results on TRECVID MED 2011 show that our method outperforms the commonly used average pooling and max pooling strategies on both high-level and low-level 2D image features.
Tasks
Published 2016-03-07
URL http://arxiv.org/abs/1603.02078v2
PDF http://arxiv.org/pdf/1603.02078v2.pdf
PWC https://paperswithcode.com/paper/a-novel-learning-based-frame-pooling-method
Repo
Framework
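
To make the pooling comparison concrete, here is a hedged sketch of weighted frame pooling next to the average and max pooling baselines. Rank-ordering frames by their mean response and the particular weight vector are illustrative assumptions; the paper learns the per-category weights by solving an optimization problem.

```python
import numpy as np

def mean_pool(frames):
    return frames.mean(axis=0)

def max_pool(frames):
    return frames.max(axis=0)

def weighted_pool(frames, weights):
    """Pool frame features with a per-category weight vector. Frames are rank-ordered
    by their mean response so that a fixed-length weight vector can be applied
    regardless of frame order (an assumed design, not the paper's exact formulation)."""
    order = np.argsort(-frames.mean(axis=1))           # strongest frames first
    ranked = frames[order]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                     # keep weights on the simplex
    return w @ ranked

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.random((6, 4))                          # 6 frames, 4-dim features
    w = [0.5, 0.2, 0.1, 0.1, 0.05, 0.05]                # hypothetical learned weights
    print(mean_pool(video), max_pool(video), weighted_pool(video, w), sep="\n")
```

Uniform weights recover average pooling, so the learned scheme is a strict generalization of that baseline.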

The RNN-ELM Classifier

Title The RNN-ELM Classifier
Authors Athanasios Vlontzos
Abstract In this paper we examine learning methods combining the Random Neural Network, a biologically inspired neural network, and the Extreme Learning Machine, which achieve state-of-the-art classification performance while requiring much shorter training time. The Random Neural Network is an integrate-and-fire computational model of a neural network whose mathematical structure permits the efficient analysis of large ensembles of neurons. An activation function is derived from the RNN and used in an Extreme Learning Machine. We compare the performance of this combination against the ELM with various activation functions, reduce the input dimensionality via PCA, and compare its performance against autoencoder-based versions of the RNN-ELM.
Tasks
Published 2016-09-25
URL http://arxiv.org/abs/1609.07724v1
PDF http://arxiv.org/pdf/1609.07724v1.pdf
PWC https://paperswithcode.com/paper/the-rnn-elm-classifier
Repo
Framework
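
A minimal ELM sketch illustrates the overall pipeline described above: a fixed random hidden layer followed by a least-squares output layer, with the hidden activation swapped for a saturating rational function loosely inspired by the Random Neural Network's firing probability. The exact activation derived in the paper is not reproduced; the form below is an assumption.

```python
import numpy as np

def rnn_like_activation(x, r=1.0):
    """Saturating rational activation loosely modelled on the Random Neural Network's
    firing probability q = excitation / (rate + inhibition), capped at 1. This exact
    form is an assumption, not the activation derived in the paper."""
    pos = np.maximum(x, 0.0)   # positive net input treated as excitatory rate
    neg = np.maximum(-x, 0.0)  # negative net input treated as inhibitory rate
    return np.minimum(pos / (r + neg), 1.0)

class ELM:
    """Minimal Extreme Learning Machine: a fixed random hidden layer followed by a
    least-squares output layer."""
    def __init__(self, n_hidden, activation, seed=0):
        self.n_hidden = n_hidden
        self.activation = activation
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return self.activation(X @ self.W + self.b)

    def fit(self, X, Y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # solve H @ beta ≈ Y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((200, 10))
    Y = np.eye(3)[rng.integers(0, 3, 200)]                  # one-hot class targets
    clf = ELM(n_hidden=50, activation=rnn_like_activation).fit(X, Y)
    print(clf.predict(X[:5]).argmax(axis=1))
```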

Lexicons and Minimum Risk Training for Neural Machine Translation: NAIST-CMU at WAT2016

Title Lexicons and Minimum Risk Training for Neural Machine Translation: NAIST-CMU at WAT2016
Authors Graham Neubig
Abstract This year, the Nara Institute of Science and Technology (NAIST)/Carnegie Mellon University (CMU) submission to the Japanese-English translation track of the 2016 Workshop on Asian Translation was based on attentional neural machine translation (NMT) models. In addition to the standard NMT model, we make a number of improvements, most notably the use of discrete translation lexicons to improve probability estimates, and the use of minimum risk training to optimize the MT system for BLEU score. As a result, our system achieved the highest translation evaluation scores for the task.
Tasks Machine Translation
Published 2016-10-20
URL http://arxiv.org/abs/1610.06542v1
PDF http://arxiv.org/pdf/1610.06542v1.pdf
PWC https://paperswithcode.com/paper/lexicons-and-minimum-risk-training-for-neural
Repo
Framework
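
The minimum risk training objective mentioned above can be sketched independently of any particular NMT toolkit: it is the expected loss (here 1 minus sentence-level BLEU) under a renormalized distribution over sampled translations. This follows the standard MRT formulation; the sampling strategy, the alpha value, and the BLEU scorer are left abstract.

```python
import numpy as np

def expected_risk(log_probs, bleu_scores, alpha=0.005):
    """Minimum risk training loss over sampled translations of one source sentence.
    log_probs: model log-probabilities of each sampled candidate; bleu_scores:
    sentence-level BLEU of each candidate against the reference; alpha: the usual
    sharpness hyper-parameter of the renormalized distribution."""
    scaled = alpha * np.asarray(log_probs, dtype=float)
    scaled -= scaled.max()                       # numerical stability
    q = np.exp(scaled) / np.exp(scaled).sum()    # distribution restricted to the samples
    risk = 1.0 - np.asarray(bleu_scores)         # risk = 1 - BLEU
    return float((q * risk).sum())

if __name__ == "__main__":
    # Hypothetical candidates: training would minimize this expectation w.r.t. the
    # model parameters, pushing probability mass toward high-BLEU translations.
    print(expected_risk(log_probs=[-10.2, -11.5, -13.0],
                        bleu_scores=[0.42, 0.35, 0.18]))
```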

Efficient Continuous Relaxations for Dense CRF

Title Efficient Continuous Relaxations for Dense CRF
Authors Alban Desmaison, Rudy Bunel, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar
Abstract Dense conditional random fields (CRF) with Gaussian pairwise potentials have emerged as a popular framework for several computer vision applications such as stereo correspondence and semantic segmentation. By modeling long-range interactions, dense CRFs provide a more detailed labelling compared to their sparse counterparts. Variational inference in these dense models is performed using a filtering-based mean-field algorithm in order to obtain a fully-factorized distribution minimising the Kullback-Leibler divergence to the true distribution. In contrast to the continuous relaxation-based energy minimisation algorithms used for sparse CRFs, the mean-field algorithm fails to provide strong theoretical guarantees on the quality of its solutions. To address this deficiency, we show that it is possible to use the same filtering approach to speed up the optimisation of several continuous relaxations. Specifically, we solve a convex quadratic programming (QP) relaxation using the efficient Frank-Wolfe algorithm. This also allows us to solve difference-of-convex relaxations via the iterative concave-convex procedure where each iteration requires solving a convex QP. Finally, we develop a novel divide-and-conquer method to compute the subgradients of a linear programming relaxation that provides the best theoretical bounds for energy minimisation. We demonstrate the advantage of continuous relaxations over the widely used mean-field algorithm on publicly available datasets.
Tasks Semantic Segmentation
Published 2016-08-22
URL http://arxiv.org/abs/1608.06192v1
PDF http://arxiv.org/pdf/1608.06192v1.pdf
PWC https://paperswithcode.com/paper/efficient-continuous-relaxations-for-dense
Repo
Framework
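
The Frank-Wolfe step used for the QP relaxation is easy to sketch on a toy labelling problem, since the feasible set is a product of per-pixel simplices and the linear minimization oracle just places all mass on the best label under the current gradient. The toy Potts-style energy below, the explicit dense weight matrix K, and the step-size rule are illustrative assumptions; the paper optimizes a convexified QP and evaluates gradients with fast Gaussian filtering rather than an explicit K.

```python
import numpy as np

def frank_wolfe_labeling(unary, K, n_iter=50):
    """Frank-Wolfe (conditional gradient) over the product of per-pixel simplices.
    unary: (N, L) unary potentials; K: (N, N) symmetric pairwise weights, zero diagonal.
    The toy energy is the straightforward relaxation of a Potts model,
    E(Q) = <unary, Q> + sum_{i<j} K_ij (1 - sum_l Q_il Q_jl)."""
    N, L = unary.shape
    Q = np.full((N, L), 1.0 / L)                     # start from the uniform labelling
    for t in range(n_iter):
        grad = unary - K @ Q                         # dE/dQ for the toy energy above
        S = np.zeros_like(Q)                         # linear minimization oracle:
        S[np.arange(N), grad.argmin(axis=1)] = 1.0   # all mass on the best label per pixel
        gamma = 2.0 / (t + 2.0)                      # standard Frank-Wolfe step size
        Q = (1 - gamma) * Q + gamma * S
    return Q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    unary = rng.random((6, 3))                       # 6 "pixels", 3 labels
    K = rng.random((6, 6))
    K = 0.5 * (K + K.T)
    np.fill_diagonal(K, 0.0)
    print(frank_wolfe_labeling(unary, K).argmax(axis=1))
```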

Enhancing Use Case Points Estimation Method Using Soft Computing Techniques

Title Enhancing Use Case Points Estimation Method Using Soft Computing Techniques
Authors Ali Bou Nassif, Luiz Fernando Capretz, Danny Ho
Abstract Software estimation is a crucial task in software engineering. Software estimation encompasses cost, effort, schedule, and size. The importance of software estimation becomes critical in the early stages of the software life cycle, when the details of the software have not yet been revealed. Several commercial and non-commercial tools exist to estimate software in the early stages. Most software effort estimation methods require software size as one of the important metric inputs, and consequently software size estimation in the early stages becomes essential. One approach that has been used for about two decades for early size and effort estimation is use case points. The use case points method relies on the use case diagram to estimate the size and effort of software projects. Although the use case points method has been widely used, it has some limitations that might adversely affect the accuracy of estimation. This paper presents some techniques using fuzzy logic and neural networks to improve the accuracy of the use case points method. Results show that an improvement of up to 22% can be obtained using the proposed approach.
Tasks
Published 2016-12-04
URL http://arxiv.org/abs/1612.01078v1
PDF http://arxiv.org/pdf/1612.01078v1.pdf
PWC https://paperswithcode.com/paper/enhancing-use-case-points-estimation-method
Repo
Framework
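
For reference, this is the standard use case points calculation that the paper enhances, using the conventional Karner weights and adjustment-factor formulas; the fuzzy logic and neural network components of the proposed method are not reproduced here.

```python
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}   # conventional Karner weights
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}

def use_case_points(use_cases, actors, tech_factor_sum, env_factor_sum):
    """Standard use case points (UCP) calculation, the baseline the paper enhances.
    use_cases/actors map a complexity class to a count; tech_factor_sum and
    env_factor_sum are the weighted sums of the 13 technical and 8 environmental
    factors."""
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())
    tcf = 0.6 + 0.01 * tech_factor_sum     # technical complexity factor
    ecf = 1.4 - 0.03 * env_factor_sum      # environmental complexity factor
    return (uucw + uaw) * tcf * ecf

def effort_person_hours(ucp, productivity_factor=20.0):
    """Convert UCP to effort; 20 person-hours per UCP is a commonly quoted default."""
    return ucp * productivity_factor

if __name__ == "__main__":
    ucp = use_case_points(use_cases={"simple": 4, "average": 8, "complex": 2},
                          actors={"simple": 2, "average": 1, "complex": 1},
                          tech_factor_sum=34, env_factor_sum=18)
    print(round(ucp, 1), round(effort_person_hours(ucp), 1))
```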

Towards Transparent AI Systems: Interpreting Visual Question Answering Models

Title Towards Transparent AI Systems: Interpreting Visual Question Answering Models
Authors Yash Goyal, Akrit Mohapatra, Devi Parikh, Dhruv Batra
Abstract Deep neural networks have shown striking progress and obtained state-of-the-art results in many AI research fields in recent years. However, it is often unsatisfying not to know why they predict what they do. In this paper, we address the problem of interpreting Visual Question Answering (VQA) models. Specifically, we are interested in finding what part of the input (pixels in images or words in questions) the VQA model focuses on while answering the question. To tackle this problem, we use two visualization techniques, guided backpropagation and occlusion, to find important words in the question and important regions in the image. We then present qualitative and quantitative analyses of these importance maps. We find that even without explicit attention mechanisms, VQA models may sometimes be implicitly attending to relevant regions in the image, and often to appropriate words in the question.
Tasks Question Answering, Visual Question Answering
Published 2016-08-31
URL http://arxiv.org/abs/1608.08974v2
PDF http://arxiv.org/pdf/1608.08974v2.pdf
PWC https://paperswithcode.com/paper/towards-transparent-ai-systems-interpreting
Repo
Framework
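
The occlusion technique is straightforward to sketch: slide a gray patch over the image and record how much the probability of the originally predicted answer drops. The answer_prob interface and patch size below are hypothetical; guided backpropagation, the other technique used in the paper, requires access to the network's gradients and is not shown.

```python
import numpy as np

def occlusion_map(image, question, answer_prob, patch=16, fill=0.5):
    """Occlusion-based importance map: answer_prob(image, question) is a caller-supplied
    black box returning the probability of the originally predicted answer (a
    hypothetical interface, not a specific VQA API)."""
    H, W = image.shape[:2]
    base = answer_prob(image, question)
    heat = np.zeros((H // patch, W // patch))
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - answer_prob(occluded, question)
    return heat  # large values mark regions the model relied on for its answer

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))
    # Toy stand-in model: "probability" is just the mean brightness of the top-left corner.
    toy_model = lambda im, q: im[:16, :16].mean()
    print(occlusion_map(img, "what color is the cat?", toy_model))
```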

Amodal Instance Segmentation

Title Amodal Instance Segmentation
Authors Ke Li, Jitendra Malik
Abstract We consider the problem of amodal instance segmentation, the objective of which is to predict the region encompassing both visible and occluded parts of each object. Thus far, the lack of publicly available amodal segmentation annotations has stymied the development of amodal segmentation methods. In this paper, we sidestep this issue by relying solely on standard modal instance segmentation annotations to train our model. The result is a new method for amodal instance segmentation, which represents the first such method to the best of our knowledge. We demonstrate the proposed method’s effectiveness both qualitatively and quantitatively.
Tasks Instance Segmentation, Semantic Segmentation
Published 2016-04-27
URL http://arxiv.org/abs/1604.08202v2
PDF http://arxiv.org/pdf/1604.08202v2.pdf
PWC https://paperswithcode.com/paper/amodal-instance-segmentation
Repo
Framework

AGI and Reflexivity

Title AGI and Reflexivity
Authors Pascal Faudemay
Abstract We define a property of intelligent systems, which we call reflexivity. In human beings, it is one aspect of consciousness and an element of deliberation. We propose a conjecture that this property is conditioned by a topological property of the processes which implement it. These processes may be symbolic or non-symbolic, e.g., connectionist. An architecture which implements reflexivity may be based on the interaction of one or several deep learning modules, which may or may not be specialized, interconnected in a relevant way. A necessary condition of reflexivity is the existence of recurrence in its processes; we examine in which cases this condition may be sufficient. We then examine how this topology and this property make possible the expression of a second property, deliberation. In a final section, we propose an evaluation of intelligent systems based on the fulfillment of all or some of these properties.
Tasks
Published 2016-04-15
URL http://arxiv.org/abs/1604.05557v3
PDF http://arxiv.org/pdf/1604.05557v3.pdf
PWC https://paperswithcode.com/paper/agi-and-reflexivity
Repo
Framework

Blocking Collapsed Gibbs Sampler for Latent Dirichlet Allocation Models

Title Blocking Collapsed Gibbs Sampler for Latent Dirichlet Allocation Models
Authors Xin Zhang, Scott A. Sisson
Abstract The latent Dirichlet allocation (LDA) model is a widely used latent variable model in machine learning for text analysis. Inference for this model typically involves a single-site collapsed Gibbs sampling step for latent variables associated with observations. The efficiency of the sampling is critical to the success of the model in practical large-scale applications. In this article, we introduce a blocking scheme to the collapsed Gibbs sampler for the LDA model which can, with a theoretical guarantee, improve chain mixing efficiency. We develop two procedures, an O(K)-step backward simulation and an O(log K)-step nested simulation, to directly sample the latent variables within each block. We demonstrate that the blocking scheme achieves substantial improvements in chain mixing compared to the state-of-the-art single-site collapsed Gibbs sampler. We also show that when the number of topics is in the hundreds, the nested-simulation blocking scheme can achieve a significant reduction in computation time compared to the single-site sampler.
Tasks
Published 2016-08-02
URL http://arxiv.org/abs/1608.00945v1
PDF http://arxiv.org/pdf/1608.00945v1.pdf
PWC https://paperswithcode.com/paper/blocking-collapsed-gibbs-sampler-for-latent
Repo
Framework
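
For context, here is the single-site collapsed Gibbs sweep that serves as the baseline in the abstract, using the standard LDA conditional p(z = k) proportional to (n_dk + alpha)(n_kw + beta)/(n_k + V*beta). The paper's blocking scheme and its O(K) and O(log K) within-block simulations are not reproduced.

```python
import numpy as np

def gibbs_sweep(docs, z, n_dk, n_kw, n_k, alpha, beta, rng):
    """One sweep of the standard single-site collapsed Gibbs sampler for LDA.
    docs[d] is a list of word ids, z[d][n] the current topic of the n-th token of
    document d; n_dk, n_kw, n_k are the usual count matrices."""
    K, V = n_kw.shape
    for d, words in enumerate(docs):
        for n, w in enumerate(words):
            k = z[d][n]
            n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1      # remove current assignment
            p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
            k = rng.choice(K, p=p / p.sum())                   # resample the token's topic
            z[d][n] = k
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1      # restore counts
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    docs = [[0, 1, 2, 1], [2, 3, 3, 0]]                        # toy corpus of word ids
    K, V, alpha, beta = 2, 4, 0.1, 0.01
    z = [[int(rng.integers(K)) for _ in doc] for doc in docs]  # random initial topics
    n_dk = np.zeros((len(docs), K)); n_kw = np.zeros((K, V)); n_k = np.zeros(K)
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            n_dk[d, z[d][n]] += 1; n_kw[z[d][n], w] += 1; n_k[z[d][n]] += 1
    for _ in range(10):
        z = gibbs_sweep(docs, z, n_dk, n_kw, n_k, alpha, beta, rng)
    print(z)
```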

Classify or Select: Neural Architectures for Extractive Document Summarization

Title Classify or Select: Neural Architectures for Extractive Document Summarization
Authors Ramesh Nallapati, Bowen Zhou, Mingbo Ma
Abstract We present two novel and contrasting Recurrent Neural Network (RNN) based architectures for extractive summarization of documents. The Classifier based architecture sequentially accepts or rejects each sentence in the original document order for its membership in the final summary. The Selector architecture, on the other hand, is free to pick one sentence at a time in any arbitrary order to piece together the summary. Our models under both architectures jointly capture the notions of salience and redundancy of sentences. In addition, these models have the advantage of being very interpretable, since they allow visualization of their predictions broken up by abstract features such as information content, salience and redundancy. We show that our models reach or outperform state-of-the-art supervised models on two different corpora. We also recommend the conditions under which one architecture is superior to the other based on experimental evidence.
Tasks Document Summarization, Extractive Document Summarization
Published 2016-11-14
URL http://arxiv.org/abs/1611.04244v1
PDF http://arxiv.org/pdf/1611.04244v1.pdf
PWC https://paperswithcode.com/paper/classify-or-select-neural-architectures-for
Repo
Framework
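
A minimal sketch of the "Classifier" style of architecture described above: a GRU reads sentence embeddings in document order and a linear head emits one inclusion logit per sentence. The sentence encoder, dimensions, and threshold are assumptions, and the paper's explicit modelling of salience and redundancy is not reproduced.

```python
import torch
import torch.nn as nn

class SentenceClassifier(nn.Module):
    """GRU over precomputed sentence embeddings with a per-sentence inclusion score,
    a hedged sketch of the sequential accept/reject architecture."""
    def __init__(self, sent_dim=128, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(sent_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, sent_embs):                 # (batch, n_sentences, sent_dim)
        states, _ = self.rnn(sent_embs)           # (batch, n_sentences, hidden)
        return self.score(states).squeeze(-1)     # one inclusion logit per sentence

if __name__ == "__main__":
    model = SentenceClassifier()
    doc = torch.randn(1, 12, 128)                 # one document with 12 sentence vectors
    logits = model(doc)
    picked = (torch.sigmoid(logits) > 0.5).nonzero(as_tuple=True)[1]
    print(picked.tolist())                        # indices of sentences kept for the summary
```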

Artificial neural networks and fuzzy logic for recognizing alphabet characters and mathematical symbols

Title Artificial neural networks and fuzzy logic for recognizing alphabet characters and mathematical symbols
Authors Giuseppe Airò Farulla, Tiziana Armano, Anna Capietto, Nadir Murru, Rosaria Rossini
Abstract Optical Character Recognition (OCR) software is an important tool for obtaining accessible texts. We propose the use of artificial neural networks (ANNs) to develop pattern recognition algorithms capable of recognizing both normal text and formulae. We present an original improvement of the backpropagation algorithm. Moreover, we describe a novel image segmentation algorithm that exploits fuzzy logic for separating touching characters.
Tasks Optical Character Recognition, Semantic Segmentation
Published 2016-07-06
URL http://arxiv.org/abs/1607.02028v1
PDF http://arxiv.org/pdf/1607.02028v1.pdf
PWC https://paperswithcode.com/paper/artificial-neural-networks-and-fuzzy-logic
Repo
Framework

A theory of contemplation

Title A theory of contemplation
Authors Jonathan Darren Nix
Abstract This paper explores the application of some notable Boolean-derived methods, namely the Disjunctive Normal Form representation of logic table expansions, and extends them to a real-valued logic model able to use quantities on ranges such as [0,1], [-1,1], [a,b], (x,y), and (x,y,z). This produces a logical programming of arbitrary range, precision, and dimensionality, thereby enabling contemplation at a logical level over arbitrary data, colors, and spatial constructs, with an example of the production of a game character's logic in mathematical form.
Tasks Probabilistic Programming
Published 2016-02-18
URL https://arxiv.org/abs/1602.05705v8
PDF https://arxiv.org/pdf/1602.05705v8.pdf
PWC https://paperswithcode.com/paper/applying-boolean-discrete-methods-in-the
Repo
Framework
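
The extension of a DNF from Boolean values to graded truth values can be sketched with the common min/max/(1 - x) connectives; the paper's exact real-valued connectives may differ, so treat this purely as an illustration of evaluating a logic-table expansion on [0, 1].

```python
def real_not(x):
    return 1.0 - x

def real_and(*xs):
    return min(xs)

def real_or(*xs):
    return max(xs)

def eval_dnf(terms, assignment):
    """Evaluate a disjunctive normal form over real-valued truth degrees in [0, 1].
    terms is a list of conjunctions; each conjunction is a list of (variable, polarity)
    literals. min/max/(1 - x) stand in for AND/OR/NOT, a common fuzzy-logic choice."""
    def literal(var, positive):
        v = assignment[var]
        return v if positive else real_not(v)
    return real_or(*(real_and(*(literal(v, pos) for v, pos in term)) for term in terms))

if __name__ == "__main__":
    # XOR-like truth table expanded to DNF: (a AND NOT b) OR (NOT a AND b),
    # evaluated on graded truth values instead of {0, 1}.
    dnf = [[("a", True), ("b", False)], [("a", False), ("b", True)]]
    print(eval_dnf(dnf, {"a": 0.9, "b": 0.2}))   # 0.8: mostly true
    print(eval_dnf(dnf, {"a": 0.9, "b": 0.8}))   # 0.2: mostly false
```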

In the mood: the dynamics of collective sentiments on Twitter

Title In the mood: the dynamics of collective sentiments on Twitter
Authors Nathaniel Charlton, Colin Singleton, Danica Vukadinović Greetham
Abstract We study the relationship between the sentiment levels of Twitter users and the evolving network structure that the users created by @-mentioning each other. We use a large dataset of tweets to which we apply three sentiment scoring algorithms, including the open source SentiStrength program. Specifically we make three contributions. Firstly we find that people who have potentially the largest communication reach (according to a dynamic centrality measure) use sentiment differently than the average user: for example they use positive sentiment more often and negative sentiment less often. Secondly we find that when we follow structurally stable Twitter communities over a period of months, their sentiment levels are also stable, and sudden changes in community sentiment from one day to the next can in most cases be traced to external events affecting the community. Thirdly, based on our findings, we create and calibrate a simple agent-based model that is capable of reproducing measures of emotive response comparable to those obtained from our empirical dataset.
Tasks
Published 2016-04-11
URL http://arxiv.org/abs/1604.03427v1
PDF http://arxiv.org/pdf/1604.03427v1.pdf
PWC https://paperswithcode.com/paper/in-the-mood-the-dynamics-of-collective
Repo
Framework