May 6, 2019

2927 words 14 mins read

Paper Group ANR 256

Learning Sparse Graphs Under Smoothness Prior. Relaxation of the EM Algorithm via Quantum Annealing. SAD-GAN: Synthetic Autonomous Driving using Generative Adversarial Networks. A natural language interface to a graph-based bibliographic information retrieval system. Linear Learning with Sparse Data. Toward a Deep Neural Approach for Knowledge-Base …

Learning Sparse Graphs Under Smoothness Prior

Title Learning Sparse Graphs Under Smoothness Prior
Authors Sundeep Prabhakar Chepuri, Sijia Liu, Geert Leus, Alfred O. Hero III
Abstract In this paper, we are interested in learning the underlying graph structure behind training data. Solving this basic problem is essential to carry out any graph signal processing or machine learning task. To realize this, we assume that the data is smooth with respect to the graph topology, and we parameterize the graph topology using an edge sampling function. That is, the graph Laplacian is expressed in terms of a sparse edge selection vector, which provides an explicit handle to control the sparsity level of the graph. We solve the sparse graph learning problem given some training data in both the noiseless and noisy settings. Given the true smooth data, the posed sparse graph learning problem can be solved optimally and is based on simple rank ordering. Given the noisy data, we show that the joint sparse graph learning and denoising problem can be simplified to designing only the sparse edge selection vector, which can be solved using convex optimization.
Tasks Denoising
Published 2016-09-12
URL http://arxiv.org/abs/1609.03448v1
PDF http://arxiv.org/pdf/1609.03448v1.pdf
PWC https://paperswithcode.com/paper/learning-sparse-graphs-under-smoothness-prior
Repo
Framework
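In the noiseless setting, the abstract says the optimal sparse edge selection reduces to simple rank ordering of candidate edges by how smooth the data is across them. A minimal numpy sketch of that rank-ordering idea (the function name, toy signals, and candidate-edge list are mine for illustration; the paper works with a Laplacian parameterized by an edge selection vector):

```python
import numpy as np

def learn_sparse_graph(X, candidate_edges, k):
    """Select the k edges whose endpoints carry the most similar
    (smoothest) signal values, via rank ordering of edge costs.

    X: (n_nodes, n_signals) matrix of graph signals.
    candidate_edges: list of (i, j) node pairs.
    k: number of edges to keep (the sparsity handle).
    """
    # Smoothness cost of edge (i, j): total squared signal difference.
    costs = [np.sum((X[i] - X[j]) ** 2) for i, j in candidate_edges]
    # Rank order: keep the k edges with the smallest cost.
    order = np.argsort(costs)[:k]
    return [candidate_edges[t] for t in order]

# Toy example: nodes 0 and 1 carry nearly identical signals,
# so the single smoothest edge connects them.
X = np.array([[1.0, 2.0], [1.1, 2.1], [5.0, -3.0]])
edges = [(0, 1), (0, 2), (1, 2)]
print(learn_sparse_graph(X, edges, 1))  # → [(0, 1)]
```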

Relaxation of the EM Algorithm via Quantum Annealing

Title Relaxation of the EM Algorithm via Quantum Annealing
Authors Hideyuki Miyahara, Koji Tsumura
Abstract The EM algorithm is a standard numerical method for obtaining maximum likelihood estimates and is often used in practical calculations. However, many maximum likelihood estimation problems are nonconvex, and the EM algorithm is known to fail to reach the optimal estimate by becoming trapped in local optima. To deal with this difficulty, we propose a deterministic quantum annealing EM algorithm that introduces the mathematical mechanism of quantum fluctuations into the conventional EM algorithm; quantum fluctuations induce the tunnel effect and are expected to ease the difficulty of nonconvex optimization in maximum likelihood estimation. We give a theorem that guarantees its convergence and numerical experiments that verify its efficiency.
Tasks
Published 2016-06-05
URL http://arxiv.org/abs/1606.01484v1
PDF http://arxiv.org/pdf/1606.01484v1.pdf
PWC https://paperswithcode.com/paper/relaxation-of-the-em-algorithm-via-quantum-1
Repo
Framework
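To see the general shape of an annealed EM loop, here is a toy deterministic-annealing fit of two Gaussian-mixture means, where responsibilities are tempered by an inverse temperature beta that is gradually raised to 1. This is the classic thermal annealing idea only; the paper's relaxation comes from quantum fluctuations and has a different free energy, which this sketch does not reproduce. The fixed within-component sigma and equal mixing weights are simplifying assumptions of mine:

```python
import numpy as np

def annealed_em_means(x, sigma=0.5, betas=(0.2, 0.5, 1.0), n_iter=30):
    """Fit the means of a two-component 1-D Gaussian mixture with an
    annealed EM loop: at small beta the responsibilities are flattened,
    smoothing the likelihood surface before the final beta=1 stage."""
    mu = np.array([-1.0, 1.0])                 # crude initial means
    for beta in betas:                         # raise beta toward 1
        for _ in range(n_iter):
            # Tempered E-step: soften log-responsibilities by beta.
            logp = -0.5 * ((x[:, None] - mu) / sigma) ** 2
            r = np.exp(beta * (logp - logp.max(axis=1, keepdims=True)))
            r /= r.sum(axis=1, keepdims=True)
            # M-step: responsibility-weighted means.
            mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return np.sort(mu)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 0.5, 300), rng.normal(3, 0.5, 300)])
print(annealed_em_means(x))  # means close to [-3, 3]
```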

SAD-GAN: Synthetic Autonomous Driving using Generative Adversarial Networks

Title SAD-GAN: Synthetic Autonomous Driving using Generative Adversarial Networks
Authors Arna Ghosh, Biswarup Bhattacharya, Somnath Basu Roy Chowdhury
Abstract Autonomous driving is a topic of intense recent interest, aimed at replicating human driving behavior while keeping safety in mind. We approach the problem of learning synthetic driving using generative neural networks. The main idea is to build a controller-trainer network that uses images plus key-press data to mimic human learning. We use the architecture of a stable GAN to make predictions between driving scenes using key presses. We train our model on one video game (Road Rash), then test its accuracy by running the model on other maps in Road Rash to determine the extent of learning.
Tasks Autonomous Driving
Published 2016-11-27
URL http://arxiv.org/abs/1611.08788v1
PDF http://arxiv.org/pdf/1611.08788v1.pdf
PWC https://paperswithcode.com/paper/sad-gan-synthetic-autonomous-driving-using
Repo
Framework

A natural language interface to a graph-based bibliographic information retrieval system

Title A natural language interface to a graph-based bibliographic information retrieval system
Authors Yongjun Zhu, Erjia Yan, Il-Yeol Song
Abstract With the ever-increasing scientific literature, there is a need for a natural language interface to bibliographic information retrieval systems so that related information can be retrieved effectively. In this paper, we propose a natural language interface, NLI-GIBIR, to a graph-based bibliographic information retrieval system. In designing NLI-GIBIR, we developed a novel framework applicable to graph-based bibliographic information retrieval systems that integrates algorithms/heuristics for interpreting and analyzing natural language bibliographic queries. NLI-GIBIR allows users to search for a variety of bibliographic data through natural language. A series of text- and linguistic-based techniques are used to analyze and answer natural language queries, including tokenization, named entity recognition, and syntactic analysis. We find that our framework can effectively represent and address complex bibliographic information needs. The contributions of this paper are as follows. First, to our knowledge, this is the first attempt to provide a natural language interface for graph-based bibliographic information retrieval. Second, we propose a novel customized natural language processing framework that integrates several original algorithms/heuristics for interpreting and analyzing natural language bibliographic queries. Third, we show that the proposed framework and natural language interface provide a practical solution for building real-world natural language interface-based bibliographic information retrieval systems. Our experimental results show that the presented system correctly answers 39 out of 40 example natural language queries of varying length and complexity.
Tasks Information Retrieval, Named Entity Recognition, Tokenization
Published 2016-12-10
URL http://arxiv.org/abs/1612.03231v1
PDF http://arxiv.org/pdf/1612.03231v1.pdf
PWC https://paperswithcode.com/paper/a-natural-language-interface-to-a-graph-based
Repo
Framework
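The processing stages the abstract names (tokenization, named entity recognition, syntactic analysis) can be illustrated with a tiny dictionary-and-regex query parser. The patterns, field names, and author list below are invented for illustration; NLI-GIBIR's actual grammar and entity resources are not reproduced here:

```python
import re

# Toy entity list standing in for a real bibliographic author index.
KNOWN_AUTHORS = {"yongjun zhu", "erjia yan", "il-yeol song"}

def parse_biblio_query(query):
    """Map a natural language bibliographic query to structured fields."""
    tokens = re.findall(r"[\w-]+", query.lower())       # tokenization
    parsed = {"authors": [], "topic": None}
    # Entity recognition: match known author names against token bigrams.
    for i in range(len(tokens) - 1):
        bigram = f"{tokens[i]} {tokens[i + 1]}"
        if bigram in KNOWN_AUTHORS:
            parsed["authors"].append(bigram)
    # Syntactic cue: treat the phrase after "about" as the topic.
    m = re.search(r"about (.+)$", query.lower())
    if m:
        parsed["topic"] = m.group(1)
    return parsed

print(parse_biblio_query("papers by Erjia Yan about citation analysis"))
# → {'authors': ['erjia yan'], 'topic': 'citation analysis'}
```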

Linear Learning with Sparse Data

Title Linear Learning with Sparse Data
Authors Ofer Dekel
Abstract Linear predictors are especially useful when the data is high-dimensional and sparse. One of the standard techniques used to train a linear predictor is the Averaged Stochastic Gradient Descent (ASGD) algorithm. We present an efficient implementation of ASGD that avoids dense vector operations. We also describe a translation invariant extension called Centered Averaged Stochastic Gradient Descent (CASGD).
Tasks
Published 2016-12-29
URL http://arxiv.org/abs/1612.09147v2
PDF http://arxiv.org/pdf/1612.09147v2.pdf
PWC https://paperswithcode.com/paper/linear-learning-with-sparse-data
Repo
Framework
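The key efficiency claim is that averaging can be maintained without dense vector operations. One standard way to get this, sketched below in the spirit of the paper (this is not the authors' exact algorithm), is to keep per-coordinate timestamps and lazily catch up the running sum of iterates only for the coordinates a sparse example actually touches:

```python
import numpy as np

def sparse_asgd(examples, dim, lr=0.1):
    """Averaged SGD for least-squares on sparse inputs. Only the nonzero
    coordinates of each example are touched inside the loop; the sum of
    iterates is maintained lazily via per-coordinate timestamps.

    examples: list of (indices, values, target), where indices/values are
    parallel arrays describing one sparse input vector.
    """
    w = np.zeros(dim)          # current iterate
    wsum = np.zeros(dim)       # lazily maintained sum of all iterates
    last = np.zeros(dim, int)  # step at which each coordinate was synced
    t = 0
    for idx, val, y in examples:
        idx, val = np.asarray(idx), np.asarray(val)
        # Catch up: w[idx] was constant for the (t - last[idx]) skipped steps.
        wsum[idx] += w[idx] * (t - last[idx])
        pred = w[idx] @ val                    # sparse dot product
        w[idx] -= lr * (pred - y) * val        # sparse gradient step
        t += 1
        wsum[idx] += w[idx]                    # include the new iterate
        last[idx] = t
    wsum += w * (t - last)                     # one final dense catch-up
    return wsum / t                            # averaged iterate
```

On two toy sparse examples in three dimensions, this returns exactly the average of the two SGD iterates computed densely, while the inner loop never allocates or scans a dense vector.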

Toward a Deep Neural Approach for Knowledge-Based IR

Title Toward a Deep Neural Approach for Knowledge-Based IR
Authors Gia-Hung Nguyen, Lynda Tamine, Laure Soulier, Nathalie Bricon-Souf
Abstract This paper tackles the problem of the semantic gap between a document and a query within an ad-hoc information retrieval task. In this context, knowledge bases (KBs) have already been acknowledged as valuable means, since they allow the representation of explicit relations between entities. However, they do not necessarily represent implicit relations that could be hidden in a corpus. This latter issue is tackled by recent work on deep representation learning of texts. With this in mind, we argue that embedding KBs within deep neural architectures supporting document-query matching would give rise to fine-grained latent representations of both words and their semantic relations. In this paper, we review the main approaches to neural document ranking as well as approaches for latent representation of entities and relations via KBs. We then propose some avenues to incorporate KBs in deep neural approaches for document ranking. More particularly, this paper advocates that KBs can be used either to support enhanced latent representations of queries and documents based on both distributional and relational semantics, or to serve as a semantic translator between their latent distributional representations.
Tasks Ad-Hoc Information Retrieval, Document Ranking, Information Retrieval
Published 2016-06-23
URL http://arxiv.org/abs/1606.07211v1
PDF http://arxiv.org/pdf/1606.07211v1.pdf
PWC https://paperswithcode.com/paper/toward-a-deep-neural-approach-for-knowledge
Repo
Framework

Robot Vision Architecture for Autonomous Clothes Manipulation

Title Robot Vision Architecture for Autonomous Clothes Manipulation
Authors Li Sun, Gerardo Aragon-Camarasa, Simon Rogers, J. Paul Siebert
Abstract This paper presents a novel robot vision architecture for perceiving generic 3D clothes configurations. Our architecture is hierarchically structured, starting from low-level curvature features, through mid-level geometric shape and topology descriptions, up to high-level semantic surface structure descriptions. We demonstrate our robot vision architecture on a customised dual-arm industrial robot with our self-designed, off-the-shelf stereo vision system, carrying out autonomous grasping and dual-arm flattening. It is worth noting that the proposed dual-arm flattening approach is unique among state-of-the-art autonomous robot systems, and is the major contribution of this paper. The experimental results show that the proposed stereo-vision-based dual-arm flattening markedly outperforms both single-arm flattening and the widely cited Kinect-based sensing for dexterous manipulation tasks. In addition, the proposed grasping approach achieves satisfactory performance on grasping various kinds of garments, verifying that the proposed visual perception architecture can be adapted to more than one clothing manipulation task.
Tasks
Published 2016-10-18
URL http://arxiv.org/abs/1610.05824v1
PDF http://arxiv.org/pdf/1610.05824v1.pdf
PWC https://paperswithcode.com/paper/robot-vision-architecture-for-autonomous
Repo
Framework

Socially-Informed Timeline Generation for Complex Events

Title Socially-Informed Timeline Generation for Complex Events
Authors Lu Wang, Claire Cardie, Galen Marchetti
Abstract Existing timeline generation systems for complex events consider only information from traditional media, ignoring the rich social context provided by user-generated content that reveals representative public interests or insightful opinions. We instead aim to generate socially-informed timelines that contain both news article summaries and selected user comments. We present an optimization framework designed to balance topical cohesion between the article and comment summaries along with their informativeness and coverage of the event. Automatic evaluations on real-world datasets that cover four complex events show that our system produces more informative timelines than state-of-the-art systems. In human evaluation, the associated comment summaries are furthermore rated more insightful than editor’s picks and comments ranked highly by users.
Tasks
Published 2016-06-17
URL http://arxiv.org/abs/1606.05699v1
PDF http://arxiv.org/pdf/1606.05699v1.pdf
PWC https://paperswithcode.com/paper/socially-informed-timeline-generation-for
Repo
Framework

When Do Luxury Cars Hit the Road? Findings by A Big Data Approach

Title When Do Luxury Cars Hit the Road? Findings by A Big Data Approach
Authors Yang Feng, Jiebo Luo
Abstract In this paper, we focus on studying the times at which different kinds of cars appear on the road. This information enables us to infer the lifestyles of the car owners, and the results can further be used to guide marketing toward car owners. Conventionally, this kind of study is carried out with questionnaires, which are limited in scale and diversity. To solve this problem, we propose a fully automatic method based on publicly available surveillance camera data. To make the results reliable, we only use high-resolution cameras (i.e., resolution greater than $1280 \times 720$). Images from the public cameras are downloaded every minute. After obtaining 50,000 images, we apply Faster R-CNN (region-based convolutional neural network) to detect the cars in the downloaded images, and a fine-tuned VGG16 model is used to recognize the car makes. Based on the recognition results, we present a data-driven analysis of the relationship between car makes and their appearance times, with implications for lifestyles.
Tasks
Published 2016-05-10
URL http://arxiv.org/abs/1605.02827v2
PDF http://arxiv.org/pdf/1605.02827v2.pdf
PWC https://paperswithcode.com/paper/when-do-luxury-cars-hit-the-road-findings-by
Repo
Framework
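Once detection (Faster R-CNN) and make recognition (fine-tuned VGG16) have produced (make, timestamp) pairs, the analysis stage reduces to aggregating appearance times per make. A minimal stdlib sketch of that final step, on invented toy records rather than the paper's data:

```python
from collections import Counter, defaultdict

def appearance_histogram(records):
    """records: iterable of (make, hour_of_day) pairs, one per detected car.
    Returns, per make, a Counter over the hours at which it was seen."""
    hist = defaultdict(Counter)
    for make, hour in records:
        hist[make][hour] += 1
    return hist

# Toy detections: one luxury make seen mostly in the morning rush hour.
records = [("BMW", 8), ("BMW", 8), ("BMW", 19),
           ("Toyota", 8), ("Toyota", 12)]
hist = appearance_histogram(records)
print(hist["BMW"].most_common(1))  # → [(8, 2)]
```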

Efficiently Computing Piecewise Flat Embeddings for Data Clustering and Image Segmentation

Title Efficiently Computing Piecewise Flat Embeddings for Data Clustering and Image Segmentation
Authors Renee T. Meinhold, Tyler L. Hayes, Nathan D. Cahill
Abstract Image segmentation is a popular area of research in computer vision that has many applications in automated image processing. A recent technique called piecewise flat embeddings (PFE) has been proposed for use in image segmentation; PFE transforms image pixel data into a lower dimensional representation where similar pixels are pulled close together and dissimilar pixels are pushed apart. This technique has shown promising results, but its original formulation is not computationally feasible for large images. We propose two improvements to the algorithm for computing PFE: first, we reformulate portions of the algorithm to enable various linear algebra operations to be performed in parallel; second, we propose utilizing an iterative linear solver (preconditioned conjugate gradient) to quickly solve a linear least-squares problem that occurs in the inner loop of a nested iteration. With these two computational improvements, we show on a publicly available image database that PFE can be sped up by an order of magnitude without sacrificing segmentation performance. Our results make this technique more practical for use on large data sets, not only for image segmentation, but for general data clustering problems.
Tasks Semantic Segmentation
Published 2016-12-20
URL http://arxiv.org/abs/1612.06496v1
PDF http://arxiv.org/pdf/1612.06496v1.pdf
PWC https://paperswithcode.com/paper/efficiently-computing-piecewise-flat
Repo
Framework
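The second speedup hinges on a preconditioned conjugate gradient (PCG) solver for the inner least-squares problem. Below is a self-contained PCG on the normal equations with a Jacobi (diagonal) preconditioner, which shows the mechanism; the paper's actual PFE objective and choice of preconditioner are not reproduced here:

```python
import numpy as np

def pcg_least_squares(A, b, tol=1e-10, max_iter=200):
    """Solve min_x ||Ax - b||_2 by preconditioned conjugate gradient on
    the normal equations (A^T A) x = A^T b, using a Jacobi preconditioner."""
    AtA, Atb = A.T @ A, A.T @ b
    M_inv = 1.0 / np.diag(AtA)             # Jacobi preconditioner
    x = np.zeros(A.shape[1])
    r = Atb - AtA @ x                      # residual
    z = M_inv * r                          # preconditioned residual
    p = z.copy()                           # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = AtA @ p
        alpha = rz / (p @ Ap)              # exact line search step
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p          # conjugate direction update
        rz = rz_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)
# Agrees with the direct least-squares solution:
print(np.allclose(pcg_least_squares(A, b),
                  np.linalg.lstsq(A, b, rcond=None)[0]))  # → True
```

In an inner loop like PFE's, the win over a direct solve is that each iteration costs only matrix-vector products, and a warm start from the previous outer iterate typically needs few iterations.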

Light Field Compression with Disparity Guided Sparse Coding based on Structural Key Views

Title Light Field Compression with Disparity Guided Sparse Coding based on Structural Key Views
Authors Jie Chen, Junhui Hou, Lap-Pui Chau
Abstract Imaging technologies are rapidly evolving to sample richer and more immersive representations of the 3D world, and one of the emerging technologies is the light field (LF) camera based on micro-lens arrays. To record the directional information of the light rays, an LF image requires much more storage space and transmission bandwidth than a conventional 2D image of similar spatial dimension, so the compression of LF data becomes a vital part of its application. In this paper, we propose an LF codec that fully exploits the intrinsic geometry between LF sub-views by first approximating the LF with disparity-guided sparse coding over a perspective-shifted light field dictionary. The sparse coding is based only on several optimized Structural Key Views (SKVs); however, the entire LF can be recovered from the coding coefficients. Because the approximation is kept identical between encoder and decoder, only the residuals of the non-key views, the disparity map, and the SKVs need to be compressed into the bit stream. An optimized SKV selection method is proposed so that most of the LF spatial information is preserved, and to achieve optimal dictionary efficiency, the LF is divided into several Coding Regions (CRs), over which the reconstruction works individually. Experiments and comparisons carried out on a benchmark LF dataset show that the proposed SC-SKV codec produces convincing compression results in terms of both rate-distortion performance and visual quality compared with High Efficiency Video Coding (HEVC): 47.87% BD-rate reduction and 1.59 dB BD-PSNR improvement are achieved on average, with up to 4 dB improvement in low-bit-rate scenarios.
Tasks
Published 2016-10-12
URL http://arxiv.org/abs/1610.03684v2
PDF http://arxiv.org/pdf/1610.03684v2.pdf
PWC https://paperswithcode.com/paper/light-field-compression-with-disparity-guided
Repo
Framework

Structured Factored Inference: A Framework for Automated Reasoning in Probabilistic Programming Languages

Title Structured Factored Inference: A Framework for Automated Reasoning in Probabilistic Programming Languages
Authors Avi Pfeffer, Brian Ruttenberg, William Kretschmer
Abstract Reasoning on large and complex real-world models is a computationally difficult task, yet one that is required for effective use of many AI applications. A plethora of inference algorithms have been developed that work well on specific models or only on parts of general models. Consequently, a system that can intelligently apply these inference algorithms to different parts of a model for fast reasoning is highly desirable. We introduce a new framework called structured factored inference (SFI) that provides the foundation for such a system. Using models encoded in a probabilistic programming language, SFI provides a sound means to decompose a model into sub-models, apply an inference algorithm to each sub-model, and combine the resulting information to answer a query. Our results show that SFI is nearly as accurate as exact inference yet retains the benefits of approximate inference methods.
Tasks Probabilistic Programming
Published 2016-06-10
URL http://arxiv.org/abs/1606.03298v1
PDF http://arxiv.org/pdf/1606.03298v1.pdf
PWC https://paperswithcode.com/paper/structured-factored-inference-a-framework-for
Repo
Framework

Discriminative Gaifman Models

Title Discriminative Gaifman Models
Authors Mathias Niepert
Abstract We present discriminative Gaifman models, a novel family of relational machine learning models. Gaifman models learn feature representations bottom up from representations of locally connected and bounded-size regions of knowledge bases (KBs). Considering local and bounded-size neighborhoods of knowledge bases renders logical inference and learning tractable, mitigates the problem of overfitting, and facilitates weight sharing. Gaifman models sample neighborhoods of knowledge bases so as to make the learned relational models more robust to missing objects and relations, a common situation in open-world KBs. We present the core ideas of Gaifman models and apply them to large-scale relational learning problems. We also discuss the ways in which Gaifman models relate to some existing relational machine learning approaches.
Tasks Link Prediction, Relational Reasoning
Published 2016-10-28
URL http://arxiv.org/abs/1610.09369v1
PDF http://arxiv.org/pdf/1610.09369v1.pdf
PWC https://paperswithcode.com/paper/discriminative-gaifman-models
Repo
Framework

Improving a Credit Scoring Model by Incorporating Bank Statement Derived Features

Title Improving a Credit Scoring Model by Incorporating Bank Statement Derived Features
Authors Rory P. Bunker, Wenjun Zhang, M. Asif Naeem
Abstract In this paper, we investigate the extent to which features derived from bank statements provided by loan applicants, which are not declared on an application form, can enhance a credit scoring model for a New Zealand lending company. The potential of such information to improve credit scoring models has not been studied previously. We construct one baseline model based solely on the existing scoring features obtained from the loan application form, and a second baseline model based solely on the new bank statement-derived features. A combined feature model is then created by augmenting the application form features with the new bank statement-derived features. Our experimental results using ROC analysis show that the combined feature model performs better than both baseline models, and that a number of the bank statement-derived features have value in improving the credit scoring model. The target data set used for modelling was highly imbalanced, and Naive Bayes was found to be the best-performing model, outperforming a number of other classifiers commonly used in credit scoring; this suggests its potential for future use on highly imbalanced data sets.
Tasks
Published 2016-10-30
URL http://arxiv.org/abs/1611.00252v2
PDF http://arxiv.org/pdf/1611.00252v2.pdf
PWC https://paperswithcode.com/paper/improving-a-credit-scoring-model-by
Repo
Framework
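For readers unfamiliar with the classifier the paper found best, here is a minimal Gaussian Naive Bayes in numpy, with class priors that let the model respect an imbalanced base rate. This is a generic toy re-implementation; the paper's features, preprocessing, and evaluation are not reproduced:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means/variances
    plus log class priors, with independent-Gaussian likelihoods."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu, self.var, self.logprior = [], [], []
        for c in self.classes:
            Xc = X[y == c]
            self.mu.append(Xc.mean(axis=0))
            self.var.append(Xc.var(axis=0) + 1e-9)   # variance smoothing
            self.logprior.append(np.log(len(Xc) / len(X)))
        return self

    def predict(self, X):
        scores = []
        for mu, var, lp in zip(self.mu, self.var, self.logprior):
            # Per-class log-likelihood under independent Gaussians + prior.
            ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
            scores.append(ll.sum(axis=1) + lp)
        return self.classes[np.argmax(scores, axis=0)]

# Imbalanced toy data: 95 "good" loans near 0, 5 "bad" loans near 4.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(4, 1, (5, 2))])
y = np.array([0] * 95 + [1] * 5)
clf = GaussianNB().fit(X, y)
print(clf.predict(np.array([[4.0, 4.0], [0.0, 0.0]])))  # → [1 0]
```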

Sampled Fictitious Play is Hannan Consistent

Title Sampled Fictitious Play is Hannan Consistent
Authors Zifan Li, Ambuj Tewari
Abstract Fictitious play is a simple and widely studied adaptive heuristic for playing repeated games. It is well known that fictitious play fails to be Hannan consistent. Several variants of fictitious play including regret matching, generalized regret matching and smooth fictitious play, are known to be Hannan consistent. In this note, we consider sampled fictitious play: at each round, the player samples past times and plays the best response to previous moves of other players at the sampled time points. We show that sampled fictitious play, using Bernoulli sampling, is Hannan consistent. Unlike several existing Hannan consistency proofs that rely on concentration of measure results, ours instead uses anti-concentration results from Littlewood-Offord theory.
Tasks
Published 2016-10-05
URL http://arxiv.org/abs/1610.01687v2
PDF http://arxiv.org/pdf/1610.01687v2.pdf
PWC https://paperswithcode.com/paper/sampled-fictitious-play-is-hannan-consistent
Repo
Framework
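The rule the note analyzes is easy to state in code: at each round, include each past opponent move independently with some probability (Bernoulli sampling) and best-respond to the sampled empirical play. A toy sketch for a two-player matrix game follows; the payoff matrix, fallback for an empty sample, and parameter names are my illustrative choices, not the note's formal setup:

```python
import numpy as np

def sampled_fp_action(history, payoff, p=0.5, rng=None):
    """One round of sampled fictitious play for the row player.

    history: list of the opponent's past column choices.
    payoff:  row player's payoff matrix (rows = own actions).
    p:       Bernoulli inclusion probability for each past round.
    """
    rng = rng or np.random.default_rng()
    history = np.asarray(history)
    mask = rng.random(len(history)) < p        # Bernoulli sample of rounds
    sample = history[mask] if mask.any() else history
    # Empirical counts of the opponent's sampled moves.
    counts = np.bincount(sample, minlength=payoff.shape[1])
    # Best response to the sampled empirical (unnormalized) strategy.
    return int(np.argmax(payoff @ counts))

# Matching pennies, row player's payoffs; opponent has mostly played 0.
payoff = np.array([[1, -1], [-1, 1]])
history = [0, 0, 0, 1]
print(sampled_fp_action(history, payoff, rng=np.random.default_rng(1)))
```

With p=1 every past round is included and the rule reduces to classical fictitious play, which the note contrasts with the Hannan-consistent sampled variant.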