July 27, 2019

3092 words 15 mins read

Paper Group ANR 487


Multilingual Topic Models

Title Multilingual Topic Models
Authors Kriste Krstovski, Michael J. Kurtz, David A. Smith, Alberto Accomazzi
Abstract Scientific publications have evolved several features for mitigating vocabulary mismatch when indexing, retrieving, and computing similarity between articles. These mitigation strategies range from simply focusing on high-value article sections, such as titles and abstracts, to assigning keywords, often from controlled vocabularies, either manually or through automatic annotation. Various document representation schemes possess different cost-benefit tradeoffs. In this paper, we propose to model different representations of the same article as translations of each other, all generated from a common latent representation in a multilingual topic model. We start with a methodological overview of latent variable models for parallel document representations that could be used across many information science tasks. We then show how solving the inference problem of mapping diverse representations into a shared topic space allows us to evaluate representations based on how topically similar they are to the original article. In addition, our proposed approach provides a means to discover where different concept vocabularies require improvement.
Tasks Latent Variable Models, Topic Models
Published 2017-12-18
URL http://arxiv.org/abs/1712.06704v1
PDF http://arxiv.org/pdf/1712.06704v1.pdf
PWC https://paperswithcode.com/paper/multilingual-topic-models
Repo
Framework
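
The abstract above turns on mapping different representations of an article into a shared topic space and judging each representation by its topical similarity to the full article. The sketch below is a loose, hedged stand-in: a plain monolingual gensim LDA replaces the multilingual topic model, the three "articles" and their keyword representations are invented, and Jensen-Shannon distance is an assumed similarity measure; none of these choices come from the paper.

```python
from gensim import corpora, models
from scipy.spatial.distance import jensenshannon

# Toy stand-in corpus: each "article" has a full-text and a keyword representation.
full_texts = [["topic", "model", "latent", "variable", "inference", "corpus"],
              ["face", "recognition", "deep", "network", "feature", "image"],
              ["gradient", "coding", "distributed", "straggler", "worker", "node"]]
keywords = [["topic", "model"], ["face", "image"], ["gradient", "distributed"]]

dictionary = corpora.Dictionary(full_texts)
bow_corpus = [dictionary.doc2bow(doc) for doc in full_texts]
lda = models.LdaModel(bow_corpus, num_topics=3, id2word=dictionary,
                      random_state=0, passes=10)

def topic_vector(tokens):
    """Infer a topic distribution for any representation of an article."""
    dist = lda.get_document_topics(dictionary.doc2bow(tokens), minimum_probability=0.0)
    return [prob for _, prob in sorted(dist)]

# Score each alternative representation by topical distance to its full article.
for full, keys in zip(full_texts, keywords):
    print(round(float(jensenshannon(topic_vector(full), topic_vector(keys))), 3))
```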

Cross-Country Skiing Gears Classification using Deep Learning

Title Cross-Country Skiing Gears Classification using Deep Learning
Authors Aliaa Rassem, Mohammed El-Beltagy, Mohamed Saleh
Abstract Human Activity Recognition has witnessed significant progress in the last decade. Although a great deal of work in this field goes into recognizing normal human activities, few studies have focused on identifying motion in sports. Recognizing human movements in different sports has a high impact on understanding the different styles of players and on improving their performance. As deep learning models have proved to give good results in many classification problems, this paper utilizes deep learning to classify cross-country skiing movements, known as gears, collected using a 3D accelerometer. It also provides a comparison between different deep learning models, such as convolutional and recurrent neural networks, and a standard multi-layer perceptron. Results show that deep learning is more effective and achieves the highest classification accuracy.
Tasks Activity Recognition, Human Activity Recognition
Published 2017-06-27
URL http://arxiv.org/abs/1706.08924v1
PDF http://arxiv.org/pdf/1706.08924v1.pdf
PWC https://paperswithcode.com/paper/cross-country-skiing-gears-classification
Repo
Framework
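
As a concrete illustration of the kind of model compared in the abstract above, here is a minimal 1D convolutional classifier over windowed 3-axis accelerometer data in PyTorch. The window length, channel widths, and the four-class output are placeholder assumptions, not the architecture or gear taxonomy used in the paper.

```python
import torch
import torch.nn as nn

class GearCNN(nn.Module):
    """Minimal 1D CNN over windowed 3-axis accelerometer data.
    Window length, channel widths and the 4-class output are placeholders."""
    def __init__(self, n_classes=4, window=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (window // 4), n_classes)

    def forward(self, x):                    # x: (batch, 3 axes, window samples)
        return self.classifier(self.features(x).flatten(1))

# A batch of 8 accelerometer windows, 128 samples each, 3 axes.
logits = GearCNN()(torch.randn(8, 3, 128))
print(logits.shape)                          # torch.Size([8, 4])
```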

Who’s Better? Who’s Best? Pairwise Deep Ranking for Skill Determination

Title Who’s Better? Who’s Best? Pairwise Deep Ranking for Skill Determination
Authors Hazel Doughty, Dima Damen, Walterio Mayol-Cuevas
Abstract We present a method for assessing skill from video, applicable to a variety of tasks, ranging from surgery to drawing and rolling pizza dough. We formulate the problem as pairwise (who’s better?) and overall (who’s best?) ranking of video collections, using supervised deep ranking. We propose a novel loss function that learns discriminative features when a pair of videos exhibits a difference in skill, and learns shared features when a pair of videos exhibits comparable skill levels. Results demonstrate our method is applicable across tasks, with the percentage of correctly ordered pairs of videos ranging from 70% to 83% for four datasets. We demonstrate the robustness of our approach via sensitivity analysis of its parameters. We see this work as an effort toward the automated organization of how-to video collections and, overall, toward generic skill determination in video.
Tasks
Published 2017-03-29
URL http://arxiv.org/abs/1703.09913v2
PDF http://arxiv.org/pdf/1703.09913v2.pdf
PWC https://paperswithcode.com/paper/whos-better-whos-best-pairwise-deep-ranking
Repo
Framework
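
The loss described above combines a ranking term for pairs with a clear skill ordering and a similarity term for pairs of comparable skill. The PyTorch sketch below is a hedged reading of that idea using a standard margin ranking loss plus a feature MSE term; the weighting scheme, margin, and feature dimension are assumptions, not the exact loss of the paper.

```python
import torch
import torch.nn.functional as F

def pairwise_skill_loss(score_hi, score_lo, feat_a, feat_b, comparable, margin=1.0):
    """Hedged reading of the pairwise objective: a margin ranking term for pairs
    with a clear skill ordering, and a feature-similarity term for pairs labelled
    as comparable in skill. Not the exact loss of the paper."""
    rank = F.margin_ranking_loss(score_hi, score_lo,
                                 target=torch.ones_like(score_hi),
                                 margin=margin, reduction='none')
    similar = F.mse_loss(feat_a, feat_b, reduction='none').mean(dim=1)
    # `comparable` is 1.0 for similar-skill pairs and 0.0 for ordered pairs.
    return ((1 - comparable) * rank + comparable * similar).mean()

scores_better, scores_worse = torch.randn(16), torch.randn(16)
feats_a, feats_b = torch.randn(16, 128), torch.randn(16, 128)
loss = pairwise_skill_loss(scores_better, scores_worse, feats_a, feats_b,
                           comparable=torch.zeros(16))   # all pairs strictly ordered
```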

Learning Channel Inter-dependencies at Multiple Scales on Dense Networks for Face Recognition

Title Learning Channel Inter-dependencies at Multiple Scales on Dense Networks for Face Recognition
Authors Qiangchang Wang, Guodong Guo, Mohammad Iqbal Nouyed
Abstract We propose a new deep network structure for unconstrained face recognition. The proposed network integrates several key components in order to characterize complex data distributions, such as those in unconstrained face images. Inspired by recent progress in deep networks, we consider several important concepts, including multi-scale feature learning, dense connections of network layers, and weighting of different network flows, in building our deep network structure. The developed network is evaluated on unconstrained face matching, showing its capability to learn the complex data distributions caused by face images of varying quality.
Tasks Face Recognition
Published 2017-11-28
URL http://arxiv.org/abs/1711.10103v2
PDF http://arxiv.org/pdf/1711.10103v2.pdf
PWC https://paperswithcode.com/paper/learning-channel-inter-dependencies-at
Repo
Framework
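
One way to picture "learning channel inter-dependencies" and "weighting different network flows" from the abstract above is a squeeze-and-excitation-style block that pools each channel globally and learns per-channel weights. The PyTorch sketch below shows only that ingredient; it is not the paper's multi-scale dense architecture, and the channel count and reduction ratio are arbitrary.

```python
import torch
import torch.nn as nn

class ChannelReweight(nn.Module):
    """Squeeze-and-excitation-style block: pool each channel globally, learn
    inter-channel dependencies with a small MLP, and reweight the channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, C, H, W)
        weights = self.fc(x.mean(dim=(2, 3)))  # global average pool per channel
        return x * weights[:, :, None, None]

feature_maps = torch.randn(2, 64, 28, 28)
print(ChannelReweight(64)(feature_maps).shape)   # torch.Size([2, 64, 28, 28])
```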

Linear-Time Sequence Classification using Restricted Boltzmann Machines

Title Linear-Time Sequence Classification using Restricted Boltzmann Machines
Authors Son N. Tran, Srikanth Cherla, Artur Garcez, Tillman Weyde
Abstract Classification of sequence data is a topic of interest for dynamic Bayesian models and Recurrent Neural Networks (RNNs). While the former can explicitly model the temporal dependencies between class variables, the latter are capable of learning representations. Several attempts have been made to improve performance by combining these two approaches or by increasing the processing capability of the hidden units in RNNs. This often results in complex models with a large number of learning parameters. In this paper, a compact model is proposed which offers both representation learning and temporal inference of class variables by rolling Restricted Boltzmann Machines (RBMs) and class variables over time. We address the key issue of intractability in this variant of RBMs by optimising a conditional distribution instead of a joint distribution. Experiments reported in the paper on melody modelling and optical character recognition show that the proposed model can outperform the state of the art. Also, experimental results on optical character recognition, part-of-speech tagging and text chunking demonstrate that our model is comparable to recurrent neural networks with complex memory gates while requiring far fewer parameters.
Tasks Chunking, Optical Character Recognition, Part-Of-Speech Tagging, Representation Learning
Published 2017-10-06
URL http://arxiv.org/abs/1710.02245v3
PDF http://arxiv.org/pdf/1710.02245v3.pdf
PWC https://paperswithcode.com/paper/linear-time-sequence-classification-using
Repo
Framework
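
To make the "conditional rather than joint" idea concrete, the sketch below computes the closed-form class posterior of a classification RBM (in the style of Larochelle and Bengio); it is a static, non-recurrent stand-in for the rolled-out model in the paper, and all shapes and parameter values are illustrative assumptions.

```python
import numpy as np

def rbm_class_posterior(v, W, U, c, d):
    """Closed-form p(y | v) for a classification RBM: no sampling needed, since the
    hidden units can be summed out analytically (one softplus term per hidden unit).
    Shapes: v (n_vis,), W (n_hid, n_vis), U (n_hid, n_cls), c (n_hid,), d (n_cls,)."""
    pre = c[:, None] + U + (W @ v)[:, None]          # hidden pre-activations per class
    log_scores = d + np.logaddexp(0.0, pre).sum(axis=0)
    log_scores -= log_scores.max()                   # numerical stability
    p = np.exp(log_scores)
    return p / p.sum()

rng = np.random.default_rng(0)
n_vis, n_hid, n_cls = 20, 8, 4
posterior = rbm_class_posterior(rng.normal(size=n_vis),
                                rng.normal(size=(n_hid, n_vis)),
                                rng.normal(size=(n_hid, n_cls)),
                                np.zeros(n_hid), np.zeros(n_cls))
print(posterior, posterior.sum())                    # a distribution over 4 classes
```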

A Popperian Falsification of Artificial Intelligence - Lighthill Defended

Title A Popperian Falsification of Artificial Intelligence - Lighthill Defended
Authors Steven Meyer
Abstract The area of computation called artificial intelligence (AI) is falsified by describing a previous 1972 falsification of AI by the British applied mathematician James Lighthill. It is explained how Lighthill’s arguments continue to apply to current AI. It is argued that AI should use the Popperian scientific method, in which it is the duty of every scientist to attempt to falsify theories and, if theories are falsified, to replace or modify them. The paper describes the Popperian method in detail and discusses Paul Nurse’s application of the method to cell biology, which also involves questions of mechanism and behavior. Arguments used by Lighthill in his original 1972 report that falsified AI are discussed. The Lighthill arguments are then shown to apply to current AI. The argument uses recent scholarship to explain Lighthill’s assumptions and to show how the arguments based on those assumptions continue to falsify modern AI. An important focus of the argument involves Hilbert’s philosophical programme, which defined knowledge and truth as provable formal sentences. Current AI takes the Hilbert programme as dogma beyond criticism, while Lighthill, as a mid-20th-century applied mathematician, had abandoned it. The paper uses recent scholarship to explain John von Neumann’s criticism of AI, which I claim was assumed by Lighthill. The paper discusses computer chess programs to show that Lighthill’s combinatorial explosion still applies to AI but not to humans. An argument showing that Turing Machines (TM) are not the correct description of computation is given. The paper concludes by advocating the study of computation as Peter Naur’s Dataology.
Tasks
Published 2017-04-23
URL http://arxiv.org/abs/1704.08111v2
PDF http://arxiv.org/pdf/1704.08111v2.pdf
PWC https://paperswithcode.com/paper/a-popperian-falsification-of-artificial
Repo
Framework

Fast Preprocessing for Robust Face Sketch Synthesis

Title Fast Preprocessing for Robust Face Sketch Synthesis
Authors Yibing Song, Jiawei Zhang, Linchao Bao, Qingxiong Yang
Abstract Exemplar-based face sketch synthesis methods usually face the challenge that input photos are captured under lighting conditions different from those of the training photos. The critical step causing the failure is the search for similar patch candidates for an input photo patch. Conventional illumination-invariant patch distances are adopted rather than relying directly on pixel intensity differences, but they fail when the local contrast within a patch changes. In this paper, we propose a fast preprocessing method named Bidirectional Luminance Remapping (BLR), which interactively adjusts the lighting of training and input photos. Our method can be directly integrated into state-of-the-art exemplar-based methods to improve their robustness at negligible computational cost.
Tasks Face Sketch Synthesis
Published 2017-08-01
URL http://arxiv.org/abs/1708.00224v1
PDF http://arxiv.org/pdf/1708.00224v1.pdf
PWC https://paperswithcode.com/paper/fast-preprocessing-for-robust-face-sketch
Repo
Framework
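
The failure mode described above is a lighting mismatch between input and training photos. As a minimal illustration of luminance remapping (one direction only, not the bidirectional, integrated BLR procedure of the paper), the sketch below matches the mean and standard deviation of one luminance channel to another; array sizes and value ranges are arbitrary.

```python
import numpy as np

def remap_luminance(src, ref):
    """Hypothetical one-way remap: shift and scale src's luminance so that its mean
    and standard deviation match ref's. BLR itself is bidirectional and integrated
    into the synthesis pipeline; this only illustrates the basic operation."""
    src, ref = src.astype(np.float64), ref.astype(np.float64)
    mu_s, sd_s = src.mean(), src.std() + 1e-8
    mu_r, sd_r = ref.mean(), ref.std()
    out = (src - mu_s) / sd_s * sd_r + mu_r
    return np.clip(out, 0, 255).astype(np.uint8)

input_lum = np.random.randint(0, 256, (64, 64), dtype=np.uint8)    # input photo luminance
train_lum = np.random.randint(40, 200, (64, 64), dtype=np.uint8)   # training photo luminance
adjusted = remap_luminance(input_lum, train_lum)
```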

Determination of hysteresis in finite-state random walks using Bayesian cross validation

Title Determination of hysteresis in finite-state random walks using Bayesian cross validation
Authors Joshua C. Chang
Abstract Consider the problem of modeling hysteresis for finite-state random walks using higher-order Markov chains. This Letter introduces a Bayesian framework to determine, from data, the number of prior states of recent history upon which a trajectory is statistically dependent. The general recommendation is to use leave-one-out cross validation, using an easily computable formula that is provided in closed form. Importantly, Bayes factors using flat model priors are biased in favor of too complex a model (more hysteresis) when a large amount of data is present, and the Akaike information criterion (AIC) is biased in favor of too sparse a model (less hysteresis) when few data are present.
Tasks
Published 2017-02-21
URL http://arxiv.org/abs/1702.06221v2
PDF http://arxiv.org/pdf/1702.06221v2.pdf
PWC https://paperswithcode.com/paper/determination-of-hysteresis-in-finite-state
Repo
Framework
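
The model-selection question above is how many prior states a trajectory depends on. The toy sketch below fits k-th order Markov chains with add-alpha smoothing and compares held-out log-likelihood across orders; it only illustrates the question and does not use the paper's closed-form Bayesian leave-one-out formula. The generated three-state walk and its two-step memory are invented for the demo.

```python
import numpy as np
from collections import defaultdict

def heldout_loglik(seq, order, n_states, alpha=1.0):
    """Fit an order-k Markov chain on the first half of `seq` with add-alpha
    smoothing and score the second half. Purely illustrative: the paper instead
    derives a closed-form Bayesian leave-one-out estimate."""
    split = len(seq) // 2
    counts = defaultdict(lambda: np.full(n_states, alpha))
    for t in range(order, split):
        counts[tuple(seq[t - order:t])][seq[t]] += 1
    loglik = 0.0
    for t in range(split + order, len(seq)):
        probs = counts[tuple(seq[t - order:t])]
        loglik += np.log(probs[seq[t]] / probs.sum())
    return loglik

rng = np.random.default_rng(1)
seq = [0, 1]
for _ in range(2000):                 # toy 3-state walk with genuine two-step memory
    nxt = (seq[-1] + seq[-2]) % 3 if rng.random() < 0.9 else rng.integers(0, 3)
    seq.append(int(nxt))
for k in (1, 2, 3):
    print(k, round(heldout_loglik(seq, k, n_states=3), 1))
```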

Approximate Gradient Coding via Sparse Random Graphs

Title Approximate Gradient Coding via Sparse Random Graphs
Authors Zachary Charles, Dimitris Papailiopoulos, Jordan Ellenberg
Abstract Distributed algorithms are often beset by the straggler effect, where the slowest compute nodes in the system dictate the overall running time. Coding-theoretic techniques have been recently proposed to mitigate stragglers via algorithmic redundancy. Prior work in coded computation and gradient coding has mainly focused on exact recovery of the desired output. However, slightly inexact solutions can be acceptable in applications that are robust to noise, such as model training via gradient-based algorithms. In this work, we present computationally simple gradient codes based on sparse graphs that guarantee fast and approximately accurate distributed computation. We demonstrate that sacrificing a small amount of accuracy can significantly increase algorithmic robustness to stragglers.
Tasks
Published 2017-11-17
URL http://arxiv.org/abs/1711.06771v1
PDF http://arxiv.org/pdf/1711.06771v1.pdf
PWC https://paperswithcode.com/paper/approximate-gradient-coding-via-sparse-random
Repo
Framework
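
To make the straggler/approximation trade-off above tangible, here is a toy numpy sketch in which each gradient is replicated on d randomly chosen workers and the master simply sums whatever the surviving workers return. This is a replication-style illustration under invented sizes, not the specific sparse-graph codes constructed in the paper.

```python
import numpy as np

def approximate_gradient_coding(grads, n_workers, d, stragglers, rng):
    """Toy replication-style sketch: each of the k gradients is placed on d workers
    chosen at random with weight 1/d; the master sums whatever the non-straggling
    workers return. Recovery is approximate, not exact, and the assignment here is
    random rather than the paper's specific sparse-graph constructions."""
    k, dim = grads.shape
    assignment = np.zeros((n_workers, k))
    for j in range(k):
        workers = rng.choice(n_workers, size=d, replace=False)
        assignment[workers, j] = 1.0 / d
    alive = np.ones(n_workers, dtype=bool)
    alive[list(stragglers)] = False
    partial_sums = assignment[alive] @ grads      # each surviving worker's message
    return partial_sums.sum(axis=0)               # master's estimate of the full sum

rng = np.random.default_rng(0)
grads = rng.normal(size=(20, 5))                  # 20 per-sample gradients in R^5
estimate = approximate_gradient_coding(grads, n_workers=10, d=3,
                                       stragglers={0, 7}, rng=rng)
print(np.linalg.norm(estimate - grads.sum(axis=0)))   # error caused by the stragglers
```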

Detection, Recognition and Tracking of Moving Objects from Real-time Video via SP Theory of Intelligence and Species Inspired PSO

Title Detection, Recognition and Tracking of Moving Objects from Real-time Video via SP Theory of Intelligence and Species Inspired PSO
Authors Kumar S Ray, Sayandip Dutta, Anit Chakraborty
Abstract In this paper, we address the basic problem of recognizing moving objects in video images using the SP Theory of Intelligence. The SP Theory of Intelligence, a framework for artificial intelligence in which S stands for Simplicity and P stands for Power, was first introduced by J. Gerard Wolff. Using the concept of multiple alignment, we detect and recognize the object of interest in video frames with multilevel hierarchical parts and subparts, based on polythetic categories. We track the recognized objects using species-based Particle Swarm Optimization (PSO). First, we extract the multiple alignments of our object of interest from training images. In order to recognize accurately and handle occlusion, we use polythetic concepts on the raw data line to omit redundant noise, searching for the best alignment representing the features among the extracted alignments. We recognize the domain of interest in the video scenes in the form of a wide variety of multiple alignments to handle scene variability. Unsupervised learning is done in the SP model following the DONSVIC principle, and natural structures are discovered via information compression and pattern analysis. After successful recognition of objects, we use the species-based PSO algorithm, as the alignments of our object of interest are analogous to the observation likelihood and the fitness of a species. Subsequently, we analyze the competition and repulsion among species with annealed-Gaussian-based PSO. We have tested our algorithms on the David, Walking2, FaceOcc1, Jogging and Dudek sequences, obtaining very satisfactory and competitive results.
Tasks
Published 2017-04-12
URL http://arxiv.org/abs/1704.07312v1
PDF http://arxiv.org/pdf/1704.07312v1.pdf
PWC https://paperswithcode.com/paper/detection-recognition-and-tracking-of-moving-1
Repo
Framework
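
Since the tracker above is driven by particle swarm optimisation, the sketch below shows the standard PSO core loop on a toy quadratic objective. The species-based partitioning and annealed Gaussian updates used in the paper are not implemented here; parameter values are conventional defaults, not the authors'.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimisation. The paper builds on a species-based variant
    with annealed Gaussian updates for tracking; this is only the standard core loop,
    with conventional (not the authors') parameter values."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Toy objective standing in for an observation likelihood over candidate positions.
print(pso_minimize(lambda p: np.sum((p - 1.5) ** 2), dim=2))   # converges near [1.5, 1.5]
```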

Contextual Outlier Interpretation

Title Contextual Outlier Interpretation
Authors Ninghao Liu, Donghwa Shin, Xia Hu
Abstract Outlier detection plays an essential role in many data-driven applications to identify isolated instances that are different from the majority. While many statistical learning and data mining techniques have been used to develop more effective outlier detection algorithms, the interpretation of detected outliers does not receive much attention. Interpretation is becoming increasingly important to help people trust and evaluate the developed models by providing intrinsic reasons why certain outliers are chosen. It is difficult, if not impossible, to simply apply feature selection for explaining outliers, due to the distinct characteristics of various detection models, the complicated structures of data in certain applications, and the imbalanced distribution of outliers and normal instances. In addition, the role of contrastive contexts where outliers are located, as well as the relation between outliers and their contexts, is usually overlooked in interpretation. To tackle the issues above, in this paper we propose a novel Contextual Outlier INterpretation (COIN) method to explain the abnormality of existing outliers spotted by detectors. The interpretability for an outlier is achieved from three aspects: an outlierness score, the attributes that contribute to the abnormality, and a contextual description of its neighborhood. Experimental results on various types of datasets demonstrate the flexibility and effectiveness of the proposed framework compared with existing interpretation approaches.
Tasks Feature Selection, Outlier Detection
Published 2017-11-28
URL http://arxiv.org/abs/1711.10589v3
PDF http://arxiv.org/pdf/1711.10589v3.pdf
PWC https://paperswithcode.com/paper/contextual-outlier-interpretation
Repo
Framework
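
COIN explains an outlier through an outlierness score, contributing attributes, and a description of its contextual neighborhood. The sketch below is a much-simplified stand-in in that spirit: it takes the k nearest instances as the context and reports the distance to their centroid plus the most deviating attributes. The neighborhood size, the scoring rule, and the toy data are assumptions; the actual method builds interpretable classifiers against clustered contexts.

```python
import numpy as np

def explain_outlier(x, data, k=20):
    """Much-simplified interpretation in the spirit of COIN: take the k nearest
    instances as the outlier's context, report an outlierness score (distance to
    the context centroid) and the attributes that deviate most from that context.
    The actual method builds interpretable classifiers against clustered contexts."""
    dists = np.linalg.norm(data - x, axis=1)
    context = data[np.argsort(dists)[:k]]
    mu, sd = context.mean(axis=0), context.std(axis=0) + 1e-8
    deviation = (x - mu) / sd
    outlierness = float(np.linalg.norm(x - mu))
    top_attrs = np.argsort(-np.abs(deviation))[:3]
    return outlierness, top_attrs, deviation[top_attrs]

rng = np.random.default_rng(2)
normal_data = rng.normal(size=(500, 6))
outlier = np.array([0.1, 5.0, -0.2, 0.0, 4.0, 0.3])   # abnormal in attributes 1 and 4
print(explain_outlier(outlier, normal_data))
```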

Analysis of Agent Expertise in Ms. Pac-Man using Value-of-Information-based Policies

Title Analysis of Agent Expertise in Ms. Pac-Man using Value-of-Information-based Policies
Authors Isaac J. Sledge, Jose C. Principe
Abstract Conventional reinforcement learning methods for Markov decision processes rely on weakly guided, stochastic searches to drive the learning process. It can therefore be difficult to predict what agent behaviors might emerge. In this paper, we consider an information-theoretic cost function for performing constrained stochastic searches that promote the formation of risk-averse to risk-favoring behaviors. This cost function is the value of information, which provides the optimal trade-off between the expected return of a policy and the policy’s complexity; policy complexity is measured in bits and controlled by a single hyperparameter on the cost function. As the policy complexity is reduced, the agents increasingly eschew risky actions. This reduces the potential for high accrued rewards. As the policy complexity increases, the agents take actions, regardless of the risk, that can raise the long-term rewards. The obtainable reward depends on a single, tunable hyperparameter that regulates the degree of policy complexity. We evaluate the performance of value-of-information-based policies on a stochastic version of Ms. Pac-Man. A major component of this paper is the demonstration that different ranges of policy complexity values yield different game-play styles, together with an explanation of why this occurs. We also show that our reinforcement-learning search mechanism is more efficient than the others we utilize. This result implies that the value of information is an appropriate criterion for framing the exploitation-exploration trade-off in reinforcement learning.
Tasks
Published 2017-02-28
URL http://arxiv.org/abs/1702.08628v3
PDF http://arxiv.org/pdf/1702.08628v3.pdf
PWC https://paperswithcode.com/paper/analysis-of-agent-expertise-in-ms-pac-man
Repo
Framework
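
The criterion above trades expected return against policy complexity through a single hyperparameter. A common way to read such value-of-information objectives is a Blahut-Arimoto-style fixed point in which the policy is a prior-weighted softmax of the action values; the sketch below implements that reading on a two-state toy problem. The update scheme, beta values, and toy Q-table are assumptions for illustration, not the authors' Ms. Pac-Man learning procedure.

```python
import numpy as np

def voi_policy(Q, state_probs, beta, iters=50):
    """Blahut-Arimoto-style fixed point for a value-of-information policy: trade the
    expected return against policy complexity (mutual information between states and
    actions), with beta setting the trade-off. A schematic reading of the criterion,
    not the authors' learning procedure."""
    n_states, n_actions = Q.shape
    prior = np.full(n_actions, 1.0 / n_actions)        # marginal action distribution
    for _ in range(iters):
        logits = np.log(prior) + beta * Q              # p(a|s) proportional to p(a) exp(beta Q(s,a))
        policy = np.exp(logits - logits.max(axis=1, keepdims=True))
        policy /= policy.sum(axis=1, keepdims=True)
        prior = state_probs @ policy                   # re-estimate the marginal
    return policy

Q = np.array([[1.0, 0.2, 0.0],
              [0.1, 0.9, 0.3]])                        # toy action values for 2 states
rho = np.array([0.5, 0.5])
print(voi_policy(Q, rho, beta=0.1))    # low beta: low-complexity, near-uniform policy
print(voi_policy(Q, rho, beta=10.0))   # high beta: nearly greedy, return-maximizing policy
```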

Learning Disordered Topological Phases by Statistical Recovery of Symmetry

Title Learning Disordered Topological Phases by Statistical Recovery of Symmetry
Authors Nobuyuki Yoshioka, Yutaka Akagi, Hosho Katsura
Abstract In this letter, we apply artificial neural networks in a supervised manner to map out the quantum phase diagram of a disordered topological superconductor in class DIII. Given disorder that keeps the discrete symmetries of the ensemble as a whole, translational symmetry, which is broken in each individual quasiparticle distribution, is recovered statistically by taking an ensemble average. Using this, we classify the phases with an artificial neural network trained on the quasiparticle distribution in the clean limit, and show that the result is fully consistent with calculations by the transfer matrix method or the noncommutative geometry approach. If all three phases, namely the $\mathbb{Z}_2$, trivial, and thermal metal phases, appear in the clean limit, the machine can classify them with high confidence over the entire phase diagram. If only the former two phases are present, we find that the machine remains confused in a certain region, leading us to conclude that an unknown phase has been detected, which is eventually identified as the thermal metal phase. In our method, only the first moment of the quasiparticle distribution is used as input, but applications to a wider variety of systems are expected through the inclusion of higher moments.
Tasks
Published 2017-09-18
URL http://arxiv.org/abs/1709.05790v3
PDF http://arxiv.org/pdf/1709.05790v3.pdf
PWC https://paperswithcode.com/paper/learning-disordered-topological-phases-by
Repo
Framework

Elite Bases Regression: A Real-time Algorithm for Symbolic Regression

Title Elite Bases Regression: A Real-time Algorithm for Symbolic Regression
Authors Chen Chen, Changtong Luo, Zonglin Jiang
Abstract Symbolic regression is an important but challenging research topic in data mining. It can detect the underlying mathematical models. Genetic programming (GP) is one of the most popular methods for symbolic regression. However, its convergence speed might be too slow for large-scale problems with a large number of variables. This drawback has become a bottleneck in practical applications. In this paper, a new non-evolutionary real-time algorithm for symbolic regression, Elite Bases Regression (EBR), is proposed. EBR generates a set of candidate basis functions coded as parse matrices under specific mapping rules. Meanwhile, a certain number of elite bases are preserved and updated iteratively according to their correlation coefficients with respect to the target model. The regression model is then spanned by the elite bases. A comparative study between EBR and a recently proposed machine learning method for symbolic regression, Fast Function eXtraction (FFX), is conducted. Numerical results indicate that EBR can solve symbolic regression problems more effectively.
Tasks
Published 2017-04-24
URL http://arxiv.org/abs/1704.07313v2
PDF http://arxiv.org/pdf/1704.07313v2.pdf
PWC https://paperswithcode.com/paper/elite-bases-regression-a-real-time-algorithm
Repo
Framework
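
To make the elite-bases idea above concrete, the sketch below generates a small pool of candidate basis functions, keeps the few most correlated with the target, and fits the model they span by least squares. The candidate pool, the elite count, and the single-variable setting are invented for the demo; the real EBR encodes bases as parse matrices and updates the elite set iteratively.

```python
import numpy as np

def elite_bases_fit(x, y, n_elite=3):
    """Simplified sketch of the EBR idea: generate candidate basis functions, keep
    the n_elite most correlated with the target, and span the model by least squares.
    The real algorithm encodes bases as parse matrices and updates elites iteratively."""
    candidates = {
        "x": x, "x^2": x**2, "x^3": x**3,
        "sin(x)": np.sin(x), "cos(x)": np.cos(x),
        "exp(x)": np.exp(x), "log(1+|x|)": np.log1p(np.abs(x)),
    }
    corr = {name: abs(np.corrcoef(f, y)[0, 1]) for name, f in candidates.items()}
    elites = sorted(corr, key=corr.get, reverse=True)[:n_elite]
    A = np.column_stack([candidates[name] for name in elites] + [np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return elites, coef

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, 200)
y = 1.5 * np.sin(x) + 0.5 * x**2 + 0.05 * rng.normal(size=200)   # hidden target model
print(elite_bases_fit(x, y))
```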

On Convergence Property of Implicit Self-paced Objective

Title On Convergence Property of Implicit Self-paced Objective
Authors Zilu Ma, Shiqi Liu, Deyu Meng
Abstract Self-paced learning (SPL) is a new methodology that simulates the learning principle of humans and animals: start with the easier aspects of a learning task, and then gradually take more complex examples into training. This emerging learning regime has been empirically substantiated to be effective in various computer vision and pattern recognition tasks. Recently, it has been proved that the SPL regime has a close relationship to an implicit self-paced objective function. While this implicit objective could provide helpful insight into the effectiveness, and especially the robustness, of the SPL paradigm, there are still no rigorously proved theoretical results verifying such a relationship. To address this issue, in this paper we provide some convergence results on this implicit objective of SPL. Specifically, we prove that the learning process of SPL always converges to critical points of this implicit objective under some mild conditions. This result verifies the intrinsic relationship between SPL and the implicit objective, and makes the previous robustness analysis of SPL complete and theoretically sound.
Tasks
Published 2017-03-29
URL http://arxiv.org/abs/1703.09923v1
PDF http://arxiv.org/pdf/1703.09923v1.pdf
PWC https://paperswithcode.com/paper/on-convergence-property-of-implicit-self
Repo
Framework
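
For readers unfamiliar with the SPL regime analysed above, the sketch below runs the standard alternating scheme on least squares: hard sample weights select examples whose current loss is below a threshold, the model is refit on the selected set, and the threshold grows so that harder examples are admitted. It illustrates the iteration whose implicit objective the paper studies; the data, threshold schedule, and regression setting are invented, and none of the paper's convergence machinery appears here.

```python
import numpy as np

def spl_linear_regression(X, y, lam=0.5, growth=1.3, steps=10):
    """Standard self-paced alternating scheme on least squares: select samples whose
    current loss is below lam (easy samples first), refit on the selection, then grow
    lam to admit harder samples."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        losses = (X @ w - y) ** 2
        v = (losses < lam).astype(float)        # hard 0/1 self-paced weights
        if v.sum() == 0:                        # nothing selected yet: take all samples
            v[:] = 1.0
        Xv = X * v[:, None]
        w = np.linalg.lstsq(Xv.T @ X, Xv.T @ y, rcond=None)[0]
        lam *= growth                           # gradually admit harder samples
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
y[:10] += 8.0                                   # a few gross outliers
print(spl_linear_regression(X, y))              # compare with the true coefficients [1.0, -2.0, 0.5]
```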