May 6, 2019

3012 words 15 mins read

Paper Group ANR 233

Action Recognition in Video Using Sparse Coding and Relative Features. Searching for Topological Symmetry in Data Haystack. Self-Paced Learning: an Implicit Regularization Perspective. Social Computing for Mobile Big Data in Wireless Networks. A Base Camp for Scaling AI. Global Constraint Catalog, Volume II, Time-Series Constraints. A landmark-base …

Action Recognition in Video Using Sparse Coding and Relative Features

Title Action Recognition in Video Using Sparse Coding and Relative Features
Authors Anali Alfaro, Domingo Mery, Alvaro Soto
Abstract This work presents an approach to category-based action recognition in video using sparse coding techniques. The proposed approach includes two main contributions: i) a new method to handle intra-class variations by decomposing each video into a reduced set of representative atomic action acts, or key-sequences, and ii) a new video descriptor, ITRA: Inter-Temporal Relational Act Descriptor, that exploits the power of comparative reasoning to capture relative similarity relations among key-sequences. To obtain key-sequences, we introduce a loss function that, for each video, leads to the identification of a sparse set of representative key-frames capturing both the relevant particularities arising in the input video and the relevant generalities arising in the complete class collection. To obtain the ITRA descriptor, we introduce a novel scheme to quantify relative intra- and inter-class similarities among local temporal patterns arising in the videos. The resulting ITRA descriptor proves highly effective at discriminating among action categories. As a result, the proposed approach reaches remarkable action recognition performance on several popular benchmark datasets, outperforming alternative state-of-the-art techniques by a large margin.
Tasks Temporal Action Localization
Published 2016-05-10
URL http://arxiv.org/abs/1605.03222v1
PDF http://arxiv.org/pdf/1605.03222v1.pdf
PWC https://paperswithcode.com/paper/action-recognition-in-video-using-sparse
Repo
Framework
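
The comparative-reasoning idea behind ITRA can be illustrated compactly: a key-sequence is described not by its raw features but by how close it is to reference key-sequences of every class, relative to the other classes. The sketch below encodes this as normalised proximity ranks; the function name, the nearest-reference distance, and the rank encoding are illustrative assumptions rather than the paper's exact descriptor.

```python
import numpy as np

def relative_descriptor(key_seq_feat, class_refs):
    """Comparative-reasoning encoding of one key-sequence feature vector:
    for each class, take the distance to its nearest reference key-sequence,
    then report every class's proximity rank (0 = closest), normalised to [0, 1].
    Illustrative simplification, not the paper's exact ITRA construction."""
    nearest = np.array([
        np.min(np.linalg.norm(refs - key_seq_feat, axis=1))
        for refs in class_refs
    ])
    ranks = np.argsort(np.argsort(nearest))      # rank of each class by proximity
    return ranks / (len(class_refs) - 1)         # relative (not absolute) similarities

# toy usage: 3 classes, 5 reference key-sequences each, 16-dimensional features
rng = np.random.default_rng(0)
refs = [rng.normal(size=(5, 16)) for _ in range(3)]
print(relative_descriptor(rng.normal(size=16), refs))
```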

Searching for Topological Symmetry in Data Haystack

Title Searching for Topological Symmetry in Data Haystack
Authors Kallol Roy, Anh Tong, Jaesik Choi
Abstract Finding interesting symmetrical topological structures in high-dimensional systems is an important problem in statistical machine learning. The limited amount of available high-dimensional data and its sensitivity to noise pose computational challenges to finding symmetry. Our paper presents a new method to find local symmetries in a low-dimensional 2-D grid structure that is embedded in a high-dimensional structure. To compute the symmetry in a grid structure, we introduce three legal grid moves, (i) commutation, (ii) cyclic permutation, and (iii) stabilization, on sets of local grid squares called grid blocks. The three grid moves are legal transformations because they preserve the statistical distribution of Hamming distances in each grid block. We coin the term grid symmetry of data on the 2-D data grid for the invariance of the statistical distribution of Hamming distances under a sequence of grid moves. We compute and analyze the grid symmetry of data drawn from multivariate Gaussian and Gamma distributions with noise.
Tasks
Published 2016-03-11
URL http://arxiv.org/abs/1603.03703v1
PDF http://arxiv.org/pdf/1603.03703v1.pdf
PWC https://paperswithcode.com/paper/searching-for-topological-symmetry-in-data
Repo
Framework
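
The legality of a grid move comes down to one concrete check: the distribution of pairwise Hamming distances inside a grid block must be unchanged after the move. Below is a minimal sketch of that check for one of the moves, cyclic permutation; the binarisation and the 4x4 block size are assumptions made purely for illustration.

```python
import numpy as np
from itertools import combinations

def hamming_distribution(block):
    """Sorted multiset of pairwise Hamming distances between rows of a binary block."""
    return sorted(int(np.sum(a != b)) for a, b in combinations(block, 2))

def cyclic_permutation(block, shift=1):
    """One of the legal grid moves: cyclically permute the rows of the block."""
    return np.roll(block, shift, axis=0)

rng = np.random.default_rng(1)
block = (rng.normal(size=(4, 4)) > 0).astype(int)     # binarised 4x4 grid block

before = hamming_distribution(block)
after = hamming_distribution(cyclic_permutation(block))
print(before == after)   # True: the move preserves the Hamming-distance distribution
```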

Self-Paced Learning: an Implicit Regularization Perspective

Title Self-Paced Learning: an Implicit Regularization Perspective
Authors Yanbo Fan, Ran He, Jian Liang, Bao-Gang Hu
Abstract Self-paced learning (SPL) mimics the cognitive mechanism of humans and animals that gradually learn from easy to hard samples. One key issue in SPL is obtaining a better weighting strategy, which is determined by the minimizer function. Existing methods usually pursue this by artificially designing the explicit form of an SPL regularizer. In this paper, we focus on the minimizer function and study a group of new regularizers, named self-paced implicit regularizers, that are deduced from robust loss functions. Based on convex conjugacy theory, the minimizer function for a self-paced implicit regularizer can be learned directly from the latent loss function, even when the analytic form of the regularizer itself is unknown. A general framework for SPL, named SPL-IR, is developed accordingly. We demonstrate that the learning procedure of SPL-IR is associated with latent robust loss functions and can therefore provide some theoretical insight into its working mechanism. We further analyze the relation between SPL-IR and half-quadratic optimization. Finally, we apply SPL-IR to both supervised and unsupervised tasks, and experimental results corroborate our ideas and demonstrate the correctness and effectiveness of implicit regularizers.
Tasks
Published 2016-06-01
URL http://arxiv.org/abs/1606.00128v3
PDF http://arxiv.org/pdf/1606.00128v3.pdf
PWC https://paperswithcode.com/paper/self-paced-learning-an-implicit
Repo
Framework
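
The central object in SPL-IR is the minimizer function that maps a sample's current loss to its weight and is derived from a robust loss rather than from a hand-designed regularizer. As one assumed instance, the Welsch loss phi(l) = lam * (1 - exp(-l / lam)) induces the closed-form weight v(l) = exp(-l / lam); the sketch below plugs that weighting into the generic alternating SPL loop for least-squares regression, a deliberate simplification of the paper's full framework.

```python
import numpy as np

def spl_ir_weights(losses, lam):
    """Minimizer function induced by the Welsch loss phi(l) = lam*(1 - exp(-l/lam)):
    easy (low-loss) samples get weight near 1, hard samples are downweighted smoothly."""
    return np.exp(-losses / lam)

def self_paced_regression(X, y, lam=1.0, n_rounds=10):
    """Generic self-paced loop: alternate sample weighting and weighted least squares.
    Simplified illustration of the idea, not the paper's full SPL-IR framework."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]         # warm start: ordinary least squares
    for _ in range(n_rounds):
        losses = (X @ w - y) ** 2
        v = spl_ir_weights(losses, lam)              # implicit-regularizer weights
        XtW = X.T * v                                # X^T diag(v)
        w = np.linalg.solve(XtW @ X + 1e-8 * np.eye(X.shape[1]), XtW @ y)
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)
y[:10] += 20                                          # a few gross outliers
print(self_paced_regression(X, y).round(2))           # close to the clean coefficients
```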

Social Computing for Mobile Big Data in Wireless Networks

Title Social Computing for Mobile Big Data in Wireless Networks
Authors Xing Zhang, Zhenglei Yi, Zhi Yan, Geyong Min, Wenbo Wang, Sabita Maharjan, Yan Zhang
Abstract Mobile big data contains rich statistical features across multiple dimensions, including the spatial, temporal, and underlying social domains. Understanding and exploiting the features of mobile data from a social network perspective will be extremely beneficial to wireless networks, from planning, operation, and maintenance to optimization and marketing. In this paper, we categorize and analyze the big data collected from real wireless cellular networks. Then, we study the social characteristics of mobile big data and highlight several research directions for mobile big data in the social computing areas.
Tasks
Published 2016-09-30
URL http://arxiv.org/abs/1609.09597v1
PDF http://arxiv.org/pdf/1609.09597v1.pdf
PWC https://paperswithcode.com/paper/social-computing-for-mobile-big-data-in
Repo
Framework

A Base Camp for Scaling AI

Title A Base Camp for Scaling AI
Authors C. J. C. Burges, T. Hart, Z. Yang, S. Cucerzan, R. W. White, A. Pastusiak, J. Lewis
Abstract Modern statistical machine learning (SML) methods share a major limitation with the early approaches to AI: there is no scalable way to adapt them to new domains. Human learning solves this in part by leveraging a rich, shared, updateable world model. Such scalability requires modularity: updating part of the world model should not impact unrelated parts. We have argued that such modularity will require both “correctability” (so that errors can be corrected without introducing new errors) and “interpretability” (so that we can understand what components need correcting). To achieve this, one could attempt to adapt state-of-the-art SML systems to be interpretable and correctable; or one could see how far the simplest possible interpretable, correctable learning methods can take us, and try to control the limitations of SML methods by applying them only where needed. Here we focus on the latter approach and investigate two main ideas: “Teacher Assisted Learning”, which leverages crowdsourcing to learn language; and “Factored Dialog Learning”, which factors the process of application development into roles in which the required language competencies are isolated, enabling non-experts to quickly create new applications. We test these ideas in an “Automated Personal Assistant” (APA) setting, with two scenarios: detecting user intent from a user-APA dialog, and creating a class of event reminder applications, where a non-expert “teacher” can then create specific apps. For the intent detection task, we use a dataset of a thousand labeled utterances from user dialogs with Cortana, and we show that our approach matches state-of-the-art SML methods, but in addition provides full transparency: the whole (editable) model can be summarized on one human-readable page. For the reminder app task, we ran small user studies to verify the efficacy of the approach.
Tasks Dialog Learning, Intent Detection
Published 2016-12-23
URL http://arxiv.org/abs/1612.07896v1
PDF http://arxiv.org/pdf/1612.07896v1.pdf
PWC https://paperswithcode.com/paper/a-base-camp-for-scaling-ai
Repo
Framework
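
The "simplest possible interpretable, correctable" flavour of intent detection can be made concrete with a toy sketch: the entire model is an editable table of cue phrases per intent, so correcting an error means editing a single entry. The intent names and cue phrases below are invented for illustration and are not the paper's Cortana model or results.

```python
# editable "world model": each intent is a short, human-readable list of cue phrases
INTENTS = {
    "set_reminder": ["remind me", "set a reminder", "don't let me forget"],
    "check_weather": ["weather", "is it going to rain", "forecast"],
    "send_message": ["text", "send a message", "tell"],
}

def detect_intent(utterance):
    """Score each intent by how many of its cue phrases appear in the utterance.
    Correctable by construction: fixing an error means editing one list entry."""
    text = utterance.lower()
    scores = {intent: sum(cue in text for cue in cues) for intent, cues in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(detect_intent("Could you remind me to call mom at 5?"))   # -> set_reminder
```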

Global Constraint Catalog, Volume II, Time-Series Constraints

Title Global Constraint Catalog, Volume II, Time-Series Constraints
Authors Ekaterina Arafailova, Nicolas Beldiceanu, Rémi Douence, Mats Carlsson, Pierre Flener, María Andreína Francisco Rodríguez, Justin Pearson, Helmut Simonis
Abstract First, this report presents a restricted set of finite transducers used to synthesise structural time-series constraints described by means of a multi-layered function composition scheme. Second, it provides the corresponding synthesised catalogue of structural time-series constraints, where each constraint is explicitly described in terms of automata with registers.
Tasks Time Series
Published 2016-09-26
URL http://arxiv.org/abs/1609.08925v2
PDF http://arxiv.org/pdf/1609.08925v2.pdf
PWC https://paperswithcode.com/paper/global-constraint-catalog-volume-ii-time
Repo
Framework
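
A time-series constraint in the catalogue is synthesised from a transducer over the signature alphabet {<, =, >} of consecutive comparisons, with registers accumulating the result. The sketch below is a much-reduced example in that spirit: a two-state automaton with a single counter register that counts (simplified) peaks. The catalogue's actual regular expressions and register updates are richer.

```python
def signature(series):
    """Map consecutive pairs of the series to the comparison alphabet '<', '=', '>'."""
    return ['<' if a < b else '>' if a > b else '=' for a, b in zip(series, series[1:])]

def nb_peaks(series):
    """Register automaton over the signature: two states and one counter register.
    A (simplified) peak is an ascent that is later followed by a descent."""
    state, count = 'flat', 0
    for s in signature(series):
        if s == '<':
            state = 'up'
        elif s == '>' and state == 'up':
            count += 1            # close a peak and reset
            state = 'flat'
    return count

print(nb_peaks([1, 3, 3, 2, 5, 6, 4, 4, 7, 2]))   # -> 3
```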

A landmark-based algorithm for automatic pattern recognition and abnormality detection

Title A landmark-based algorithm for automatic pattern recognition and abnormality detection
Authors S. Huzurbazar, Long Lee, Dongyang Kuang
Abstract We study a class of mathematical and statistical algorithms with the aim of establishing a computer-based framework for fast and reliable automatic abnormality detection on landmark-represented image templates. Under this framework, we apply a landmark-based algorithm for finding a group average as an estimator that best represents the common features of the group under study. This algorithm extracts momentum information at each landmark through the process of template matching. If it converges, the proposed algorithm produces a local coordinate system for each member of the observed group, in terms of the residual momentum. We use a Bayesian approach on the collected residual momentum representations to make inferences. For illustration, we apply this framework to a small database of brain images for detecting structural abnormalities. The brain structure changes identified by our framework are highly consistent with studies in the literature.
Tasks Anomaly Detection
Published 2016-02-17
URL http://arxiv.org/abs/1602.05572v2
PDF http://arxiv.org/pdf/1602.05572v2.pdf
PWC https://paperswithcode.com/paper/a-landmark-based-algorithm-for-automatic
Repo
Framework
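
Operationally, the framework computes a group-average landmark template, represents each subject by its residual from that average, and flags subjects whose residuals are improbable for the group. The sketch below substitutes plain landmark displacements and per-landmark z-scores for the paper's momentum representation and Bayesian inference; both substitutions are assumptions made to keep the example short.

```python
import numpy as np

def group_average(landmarks):
    """Group-average template; landmarks has shape (n_subjects, n_landmarks, 2)."""
    return landmarks.mean(axis=0)

def abnormality_scores(landmarks, template):
    """Residual of each subject from the template, summarised as the mean
    per-landmark z-score (a stand-in for residual momentum + Bayesian inference)."""
    residuals = np.linalg.norm(landmarks - template, axis=2)   # (n_subjects, n_landmarks)
    mu, sigma = residuals.mean(axis=0), residuals.std(axis=0) + 1e-8
    return ((residuals - mu) / sigma).mean(axis=1)             # one score per subject

rng = np.random.default_rng(3)
group = rng.normal(size=(30, 12, 2)) * 0.1 + rng.normal(size=(1, 12, 2))
group[0, 3] += 2.0                          # inject a structural abnormality in subject 0
scores = abnormality_scores(group, group_average(group))
print(int(np.argmax(scores)))               # subject 0 should score highest
```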

Duality between Feature Selection and Data Clustering

Title Duality between Feature Selection and Data Clustering
Authors Chung Chan, Ali Al-Bashabsheh, Qiaoqiao Zhou, Tie Liu
Abstract The feature-selection problem is formulated from an information-theoretic perspective. We show that the problem can be efficiently solved by an extension of the recently proposed info-clustering paradigm. This reveals the fundamental duality between feature selection and data clustering, which is a consequence of the more general duality between the principal partition and the principal lattice of partitions in combinatorial optimization.
Tasks Combinatorial Optimization, Feature Selection
Published 2016-09-27
URL http://arxiv.org/abs/1609.08312v2
PDF http://arxiv.org/pdf/1609.08312v2.pdf
PWC https://paperswithcode.com/paper/duality-between-feature-selection-and-data
Repo
Framework
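
The information-theoretic formulation is easiest to see through its basic ingredient: scoring how much information a feature shares with the target variable. The sketch below computes that score for discrete variables; it is an assumed simplification and does not reproduce the principal-partition / principal-lattice duality that the paper actually establishes.

```python
import numpy as np

def mutual_information(x, y):
    """I(X;Y) in nats for two discrete (integer-coded) variables."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=1000)                         # binary target
f_relevant = np.where(rng.random(1000) < 0.9, y, 1 - y)   # noisy copy of the target
f_noise = rng.integers(0, 2, size=1000)                   # independent of the target
# the informative feature scores much higher, so it is the one worth selecting
print(mutual_information(f_relevant, y), mutual_information(f_noise, y))
```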

Optimizing human-interpretable dialog management policy using Genetic Algorithm

Title Optimizing human-interpretable dialog management policy using Genetic Algorithm
Authors Hang Ren, Weiqun Xu, Yonghong Yan
Abstract Automatic optimization of spoken dialog management policies that are robust to environmental noise has long been a goal for both academia and industry. Approaches based on reinforcement learning have proved effective. However, the numerical representation of a dialog policy is incomprehensible to humans and difficult for dialog system designers to verify or modify, which limits its practical application. In this paper we propose a novel framework for optimizing dialog policies specified in a domain language using a genetic algorithm. The human-interpretable representation of the policy makes the method suitable for practical deployment. We present learning algorithms using user simulation and real human-machine dialogs, respectively. Empirical results are given to show the effectiveness of the proposed approach.
Tasks
Published 2016-05-12
URL http://arxiv.org/abs/1605.03915v2
PDF http://arxiv.org/pdf/1605.03915v2.pdf
PWC https://paperswithcode.com/paper/optimizing-human-interpretable-dialog
Repo
Framework
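
A policy "specified in domain language" can be as simple as two named confidence thresholds that decide between executing, confirming, and re-asking; the genetic algorithm then searches over these interpretable parameters against a user simulator. The simulator, the reward values, and the two-threshold policy below are assumed stand-ins, not the paper's setup.

```python
import random
random.seed(5)

def simulate_dialog(policy, n_turns=50):
    """Toy user simulator: reward executing on confident, correct recognitions;
    penalise executing on noise or re-asking too often. An assumed stand-in reward."""
    confirm_th, execute_th = policy
    reward = 0.0
    for _ in range(n_turns):
        conf = random.random()                 # simulated ASR confidence
        correct = random.random() < conf       # noisy recognition channel
        if conf >= execute_th:
            reward += 1.0 if correct else -2.0 # execute immediately
        elif conf >= confirm_th:
            reward += 0.5 if correct else -0.5 # confirm first: cheaper mistake
        else:
            reward -= 0.1                      # re-ask: small cost
    return reward

def optimize_policy(pop_size=30, generations=40):
    """Generic GA over the two human-readable thresholds."""
    pop = [sorted((random.random(), random.random())) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate_dialog, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 + random.gauss(0, 0.05) for x, y in zip(a, b)]
            children.append(sorted(min(max(v, 0.0), 1.0) for v in child))
        pop = parents + children
    return pop[0]   # (confirm_threshold, execute_threshold), readable as-is

print(optimize_policy())
```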

Sparse recovery via Orthogonal Least-Squares under presence of Noise

Title Sparse recovery via Orthogonal Least-Squares under presence of Noise
Authors Abolfazl Hashemi, Haris Vikalo
Abstract We consider the Orthogonal Least-Squares (OLS) algorithm for the recovery of an $m$-dimensional $k$-sparse signal from a small number of noisy linear measurements. The Exact Recovery Condition (ERC) in the bounded-noise scenario is established for OLS under a certain condition on the nonzero elements of the signal. The new result also improves the existing guarantees for the Orthogonal Matching Pursuit (OMP) algorithm. In addition, this framework is employed to provide probabilistic guarantees for the case where the coefficient matrix is drawn at random according to a Gaussian or Bernoulli distribution, by exploiting some concentration properties. It is shown that under certain conditions, OLS recovers the true support in $k$ iterations with high probability. This in turn demonstrates that ${\cal O}\left(k\log m\right)$ measurements are sufficient for exact recovery of sparse signals via OLS.
Tasks
Published 2016-08-08
URL http://arxiv.org/abs/1608.02554v1
PDF http://arxiv.org/pdf/1608.02554v1.pdf
PWC https://paperswithcode.com/paper/sparse-recovery-via-orthogonal-least-squares
Repo
Framework
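
OLS differs from OMP in its selection rule: rather than picking the column most correlated with the current residual, it picks the column whose inclusion minimises the residual after re-fitting least squares on the enlarged support. A direct (unoptimised) sketch of that rule on a Gaussian measurement matrix:

```python
import numpy as np

def ols_recover(A, y, k):
    """Orthogonal Least-Squares: greedily add the column that yields the smallest
    residual after re-fitting least squares on the enlarged support."""
    m = A.shape[1]
    support = []
    for _ in range(k):
        best_j, best_res = None, np.inf
        for j in range(m):
            if j in support:
                continue
            trial = support + [j]
            coef, *_ = np.linalg.lstsq(A[:, trial], y, rcond=None)
            res = np.linalg.norm(y - A[:, trial] @ coef)
            if res < best_res:
                best_j, best_res = j, res
        support.append(best_j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x = np.zeros(m)
    x[support] = coef
    return x, sorted(support)

rng = np.random.default_rng(6)
n, m, k = 40, 100, 4
A = rng.normal(size=(n, m)) / np.sqrt(n)      # Gaussian measurement matrix
x_true = np.zeros(m)
x_true[[3, 17, 42, 77]] = [2.0, -1.5, 1.0, 3.0]
y = A @ x_true + 0.01 * rng.normal(size=n)    # bounded noise
x_hat, support = ols_recover(A, y, k)
print(support)                                # should recover [3, 17, 42, 77]
```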

algcomparison: Comparing the Performance of Graphical Structure Learning Algorithms with TETRAD

Title algcomparison: Comparing the Performance of Graphical Structure Learning Algorithms with TETRAD
Authors Joseph D. Ramsey, Daniel Malinsky, Kevin V. Bui
Abstract In this report we describe a tool for comparing the performance of graphical causal structure learning algorithms implemented in the TETRAD freeware suite of causal analysis methods. Currently the tool is available as a package in the TETRAD source code (written in Java). Simulations can be done varying the number of runs, sample sizes, and data modalities. Performance on this simulated data can then be compared for a number of algorithms, with parameters varied and with performance statistics as selected, producing a publishable report. The order of the algorithms in the output can be adjusted to the user’s preference using a utility function over the statistics. Data sets from simulation can be saved along with their graphs to a file and loaded back in for further analysis, or used for analysis by other tools. The package presented here may also be used to compare structure learning methods across platforms and programming languages, i.e., to compare algorithms implemented in TETRAD with those implemented in MATLAB, Python, or R.
Tasks
Published 2016-07-27
URL https://arxiv.org/abs/1607.08110v6
PDF https://arxiv.org/pdf/1607.08110v6.pdf
PWC https://paperswithcode.com/paper/comparing-the-performance-of-graphical
Repo
Framework
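
The statistics the tool reports boil down to comparing each algorithm's estimated graph against the known simulated graph, for example adjacency precision and recall. Below is a minimal, platform-independent version of that bookkeeping; the edge encoding and the toy algorithm outputs are assumptions, not TETRAD's API.

```python
def adjacency_stats(true_edges, est_edges):
    """Adjacency precision/recall over undirected edges (each edge a pair of node names)."""
    true_set = {frozenset(e) for e in true_edges}
    est_set = {frozenset(e) for e in est_edges}
    tp = len(true_set & est_set)
    precision = tp / len(est_set) if est_set else 1.0
    recall = tp / len(true_set) if true_set else 1.0
    return precision, recall

# hypothetical simulated truth and two hypothetical algorithm outputs
truth = [("A", "B"), ("B", "C"), ("C", "D")]
algorithms = {"alg1": [("A", "B"), ("B", "C"), ("A", "D")],
              "alg2": [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")]}
for name, est in algorithms.items():
    p, r = adjacency_stats(truth, est)
    print(f"{name}: adjacency precision={p:.2f} recall={r:.2f}")
```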

Mean-Field Variational Inference for Gradient Matching with Gaussian Processes

Title Mean-Field Variational Inference for Gradient Matching with Gaussian Processes
Authors Nico S. Gorbach, Stefan Bauer, Joachim M. Buhmann
Abstract Gradient matching with Gaussian processes is a promising tool for learning the parameters of ordinary differential equations (ODEs). The essence of gradient matching is to model the prior over state variables as a Gaussian process, which implies that the joint distribution given the ODEs and GP kernels is also Gaussian distributed. The state derivatives are integrated out analytically since they are modelled as latent variables. However, the state variables themselves are also latent variables because they are contaminated by noise. Previous work sampled the state variables since integrating them out is not analytically tractable. In this paper we use a mean-field approximation to establish tight variational lower bounds that decouple the state variables and are therefore, in contrast to the integral over state variables, analytically tractable and even concave for a restricted family of ODEs, including nonlinear and periodic ODEs. Such variational lower bounds facilitate “hill climbing” to determine the maximum a posteriori estimate of the ODE parameters. An additional advantage of our approach over sampling methods is that it yields a proxy for the intractable posterior distribution over the state variables given the observations and the ODEs.
Tasks Gaussian Processes
Published 2016-10-21
URL http://arxiv.org/abs/1610.06949v1
PDF http://arxiv.org/pdf/1610.06949v1.pdf
PWC https://paperswithcode.com/paper/mean-field-variational-inference-for-gradient
Repo
Framework
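
Stripped of the GP prior and the mean-field bounds, gradient matching estimates ODE parameters by making the modelled vector field agree with derivatives estimated from the (noisy) state trajectories, so no repeated numerical integration is needed. The sketch below uses finite-difference derivatives and least squares on a Lotka-Volterra system purely as an assumed illustration of that principle; the paper's actual contribution, the variational treatment of the latent states, is not reproduced here.

```python
import numpy as np

def lotka_volterra(state, theta):
    """Assumed example ODE: predator-prey dynamics, linear in the parameters."""
    x, y = state
    a, b, c, d = theta
    return np.array([a * x - b * x * y, -c * y + d * x * y])

# simulate a noisy trajectory from known "true" parameters
theta_true = np.array([1.0, 0.4, 1.0, 0.3])
dt, T = 0.01, 2000
traj = np.zeros((T, 2))
traj[0] = [5.0, 3.0]
for t in range(T - 1):
    traj[t + 1] = traj[t] + dt * lotka_volterra(traj[t], theta_true)   # Euler steps
rng = np.random.default_rng(7)
obs = traj + 0.01 * rng.normal(size=traj.shape)                        # noisy latent states

# gradient matching: make the ODE right-hand side agree with estimated derivatives
dstates = np.gradient(obs, dt, axis=0)          # crude stand-in for GP-derived derivatives
x, y = obs[:, 0], obs[:, 1]
Phi_x = np.column_stack([x, -x * y])            # dx/dt ~ a*x - b*x*y
Phi_y = np.column_stack([-y, x * y])            # dy/dt ~ -c*y + d*x*y
a, b = np.linalg.lstsq(Phi_x, dstates[:, 0], rcond=None)[0]
c, d = np.linalg.lstsq(Phi_y, dstates[:, 1], rcond=None)[0]
print(np.round([a, b, c, d], 2))                # close to theta_true = [1.0, 0.4, 1.0, 0.3]
```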

Exploiting Web Images for Dataset Construction: A Domain Robust Approach

Title Exploiting Web Images for Dataset Construction: A Domain Robust Approach
Authors Yazhou Yao, Jian Zhang, Fumin Shen, Xiansheng Hua, Jingsong Xu, Zhenmin Tang
Abstract Labelled image datasets have played a critical role in high-level image understanding. However, the process of manual labelling is both time-consuming and labor-intensive. To reduce the cost of manual labelling, there has been increased research interest in automatically constructing image datasets by exploiting web images. Datasets constructed by existing methods tend to have a weak domain adaptation ability, which is known as the “dataset bias problem”. To address this issue, we present a novel image dataset construction framework that generalizes well to unseen target domains. Specifically, the given queries are first expanded by searching the Google Books Ngrams Corpus to obtain a rich semantic description, from which the visually non-salient and less relevant expansions are filtered out. By treating each selected expansion as a “bag” and the retrieved images as “instances”, image selection can be formulated as a multi-instance learning problem with constrained positive bags. We propose to solve the resulting problems with the cutting-plane and concave-convex procedure (CCCP) algorithms. In this way, images from different distributions can be kept while noisy images are filtered out. To verify the effectiveness of our proposed approach, we build an image dataset with 20 categories. Extensive experiments on image classification, cross-dataset generalization, diversity comparison and object detection demonstrate the domain robustness of our dataset.
Tasks Domain Adaptation, Image Classification, Object Detection
Published 2016-11-22
URL http://arxiv.org/abs/1611.07156v4
PDF http://arxiv.org/pdf/1611.07156v4.pdf
PWC https://paperswithcode.com/paper/exploiting-web-images-for-dataset
Repo
Framework

Shape Estimation from Defocus Cue for Microscopy Images via Belief Propagation

Title Shape Estimation from Defocus Cue for Microscopy Images via Belief Propagation
Authors Arnav Bhavsar
Abstract In recent years, the usefulness of 3D shape estimation has been recognized in microscopic or close-range imaging, as the 3D information can be used in various further applications. Due to the limited depth of field at such small distances, the defocus blur induced in images can provide information about the 3D shape of the object. The task of shape from defocus (SFD) involves estimating good-quality 3D shape from images with depth-dependent defocus blur. While the research area of SFD is quite well established, existing approaches have largely demonstrated results on objects with bulk/coarse shape variation. However, objects studied under microscopes often involve fine/detailed structures, which have not been explicitly considered in most methods. In addition, given that large data volumes are typically associated with microscopy-related applications, it is also important for such SFD methods to be efficient. In this work, we provide an indication of the usefulness of the Belief Propagation (BP) approach in addressing these concerns for SFD. BP is known to be an efficient combinatorial optimization approach and has been empirically demonstrated to yield good-quality solutions in low-level vision problems such as image restoration and stereo disparity estimation. To exploit the efficiency of BP in SFD, we assume local space-invariance of the defocus blur, which enables the application of BP in a straightforward manner. Even with this assumption, the ability of BP to provide good-quality solutions while using non-convex priors is reflected in plausible shape estimates in the presence of fine structures on objects under microscopy imaging.
Tasks Combinatorial Optimization, Disparity Estimation, Image Restoration
Published 2016-12-30
URL http://arxiv.org/abs/1612.09411v1
PDF http://arxiv.org/pdf/1612.09411v1.pdf
PWC https://paperswithcode.com/paper/shape-estimation-from-defocus-cue-for
Repo
Framework
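
Under the local space-invariance assumption, SFD reduces to a discrete MRF labelling problem: each pixel takes a depth label, a data cost scores how well that label explains the observed defocus, a smoothness cost couples neighbours, and min-sum BP passes messages on the 4-connected grid. The sketch below implements generic synchronous min-sum BP with a truncated-linear smoothness term; the defocus-specific data cost is replaced by an assumed synthetic one.

```python
import numpy as np

def min_sum_bp(data_cost, lam=1.0, trunc=2.0, n_iters=10):
    """Synchronous min-sum BP on a 4-connected grid.
    data_cost: (H, W, L) unary costs; pairwise cost: lam * min(|l - l'|, trunc)."""
    H, W, L = data_cost.shape
    labels = np.arange(L)
    pair = lam * np.minimum(np.abs(labels[:, None] - labels[None, :]), trunc)   # (L, L)
    dirs = {'up': (-1, 0), 'down': (1, 0), 'left': (0, -1), 'right': (0, 1)}
    opposite = {'up': 'down', 'down': 'up', 'left': 'right', 'right': 'left'}
    msgs = {d: np.zeros((H, W, L)) for d in dirs}   # msgs[d][i, j]: message (i, j) sends in direction d

    def shift(m, dy, dx):
        """Deliver messages: out[i, j] = m[i - dy, j - dx], zero-padded at the borders."""
        out = np.zeros_like(m)
        out[max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)] = \
            m[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
        return out

    def incoming():
        # messages arriving at each pixel, keyed by their direction of travel
        # (e.g. incoming()['right'] arrived from the left-hand neighbour)
        return {d: shift(msgs[d], dy, dx) for d, (dy, dx) in dirs.items()}

    for _ in range(n_iters):
        arrived = incoming()
        new = {}
        for d in dirs:
            # unary cost plus all incoming messages except the one from the target pixel
            belief = data_cost + sum(arrived[o] for o in dirs if o != opposite[d])
            new[d] = np.min(belief[..., None, :] + pair[None, None], axis=-1)
            new[d] -= new[d].mean(axis=-1, keepdims=True)    # normalise for numerical stability
        msgs = new

    belief = data_cost + sum(incoming().values())
    return np.argmin(belief, axis=-1)                        # depth label per pixel

# toy usage: a 4-level staircase scene with a noisy, assumed unary cost
rng = np.random.default_rng(8)
H, W, L = 20, 20, 4
true_depth = np.repeat(np.arange(L), W // L)[None, :].repeat(H, axis=0)
data_cost = np.abs(np.arange(L)[None, None, :] - true_depth[..., None]).astype(float)
data_cost += 1.5 * rng.random((H, W, L))                     # strong unary noise
print((min_sum_bp(data_cost, lam=0.5) == true_depth).mean()) # fraction correct, close to 1
```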

Learning to Blend Computer Game Levels

Title Learning to Blend Computer Game Levels
Authors Matthew Guzdial, Mark Riedl
Abstract We present an approach to generate novel computer game levels that blend different game concepts in an unsupervised fashion. Our primary contribution is an analogical reasoning process to construct blends between level design models learned from gameplay videos. The models represent probabilistic relationships between elements in the game. An analogical reasoning process maps features between two models to produce blended models that can then generate new level chunks. As a proof of concept we train our system on the classic platformer game Super Mario Bros. due to its highly regarded and well-understood level design. We evaluate the extent to which the models represent stylistic level design knowledge and demonstrate the ability of our system to explain levels that were blended by human expert designers.
Tasks
Published 2016-03-08
URL http://arxiv.org/abs/1603.02738v1
PDF http://arxiv.org/pdf/1603.02738v1.pdf
PWC https://paperswithcode.com/paper/learning-to-blend-computer-game-levels
Repo
Framework