July 27, 2019

2837 words 14 mins read

Paper Group ANR 643

Genealogical Distance as a Diversity Estimate in Evolutionary Algorithms. Segmentation of Instances by Hashing. Learning Linear Dynamical Systems with High-Order Tensor Data for Skeleton based Action Recognition. Randomized Iterative Reconstruction for Sparse View X-ray Computed Tomography. Creativity: Generating Diverse Questions using Variational …

Genealogical Distance as a Diversity Estimate in Evolutionary Algorithms

Title Genealogical Distance as a Diversity Estimate in Evolutionary Algorithms
Authors Thomas Gabor, Lenz Belzner
Abstract The evolutionary edit distance between two individuals in a population, i.e., the number of applications of any genetic operator it would take the evolutionary process to generate one individual starting from the other, seems like a promising estimate of the diversity between said individuals. We introduce genealogical diversity, i.e., estimating two individuals’ degree of relatedness by analyzing large, unused parts of their genome, as a computationally efficient method to approximate that measure of diversity.
Tasks
Published 2017-04-27
URL http://arxiv.org/abs/1704.08774v1
PDF http://arxiv.org/pdf/1704.08774v1.pdf
PWC https://paperswithcode.com/paper/genealogical-distance-as-a-diversity-estimate
Repo
Framework
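
The abstract’s central idea, measuring relatedness via parts of the genome that selection never touches, can be illustrated with a small sketch. This is not the authors’ implementation; the bit-string genome, mutation rate, and marker slice below are made-up assumptions purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(genome, rate=0.01):
    """Flip each bit independently with probability `rate` (point mutation)."""
    flips = rng.random(genome.shape) < rate
    return np.where(flips, 1 - genome, genome)

def genealogical_distance(a, b, marker=slice(64, 128)):
    """Normalized Hamming distance over the unused 'marker' part of the genome.

    Since the marker bits never affect fitness, they only drift by mutation,
    so their divergence roughly tracks how many generations separate the two
    individuals -- the genealogical-diversity estimate described above.
    """
    return float(np.mean(a[marker] != b[marker]))

# Toy demo: one ancestor and two descendants mutated for different numbers of generations.
ancestor = rng.integers(0, 2, size=128)
close_kin, distant_kin = ancestor.copy(), ancestor.copy()
for _ in range(2):
    close_kin = mutate(close_kin)
for _ in range(50):
    distant_kin = mutate(distant_kin)

print("close  :", genealogical_distance(ancestor, close_kin))
print("distant:", genealogical_distance(ancestor, distant_kin))
```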

Segmentation of Instances by Hashing

Title Segmentation of Instances by Hashing
Authors J. D. Curtó, I. C. Zarza, A. Smola, L. Van Gool
Abstract We propose a novel approach to the Simultaneous Detection and Segmentation problem. Using hierarchical structures, we apply an efficient and accurate procedure that exploits hierarchical feature information via Locality Sensitive Hashing. We build on recent work that uses convolutional neural networks to detect bounding boxes in an image, and then select the most similar hierarchical region that best fits each bounding box after hashing; we call this approach CZ Segmentation. We then refine the final segmentation results by automatic hierarchy pruning. CZ Segmentation introduces a train-free alternative to Hypercolumns. We conduct extensive experiments on the PASCAL VOC 2012 segmentation dataset, showing that CZ gives competitive state-of-the-art object segmentations.
Tasks
Published 2017-02-27
URL http://arxiv.org/abs/1702.08160v9
PDF http://arxiv.org/pdf/1702.08160v9.pdf
PWC https://paperswithcode.com/paper/segmentation-of-instances-by-hashing
Repo
Framework
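
The matching step described above, finding the hierarchical region most similar to each detected bounding box via Locality Sensitive Hashing, can be sketched with random-hyperplane (SimHash) hashing. The feature dimensions, hash size, and toy data are assumptions; the authors’ actual CZ pipeline (CNN features, hierarchy construction, pruning) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def simhash(features, planes):
    """Random-hyperplane LSH: keep the sign pattern of random projections as hash bits."""
    return (features @ planes.T) > 0

def best_region_for_box(box_feat, region_feats, planes):
    """Index of the hierarchical region whose hash code is closest to the box's."""
    box_bits = simhash(box_feat[None, :], planes)[0]
    region_bits = simhash(region_feats, planes)
    hamming = np.count_nonzero(region_bits != box_bits, axis=1)
    return int(np.argmin(hamming))

dim, n_regions, n_bits = 256, 1000, 64
planes = rng.standard_normal((n_bits, dim))            # shared hash functions
region_feats = rng.standard_normal((n_regions, dim))   # features of hierarchy regions
box_feat = region_feats[42] + 0.1 * rng.standard_normal(dim)  # a noisy detection feature

print("matched region:", best_region_for_box(box_feat, region_feats, planes))
```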

Learning Linear Dynamical Systems with High-Order Tensor Data for Skeleton based Action Recognition

Title Learning Linear Dynamical Systems with High-Order Tensor Data for Skeleton based Action Recognition
Authors Wenwen Ding, Kai Liu
Abstract In recent years, there has been renewed interest in developing methods for skeleton-based human action recognition. A skeleton sequence can be naturally represented as a high-order tensor time series. In this paper, we model and analyze tensor time series with the Linear Dynamical System (LDS), which is the most common model for encoding spatio-temporal time-series data in various disciplines due to its relative simplicity and efficiency. However, the traditional LDS treats the latent and observation states at each frame of video as a column vector. Such a vector representation fails to take into account the curse of dimensionality as well as valuable structural information within human actions. Considering this fact, we propose the generalized Linear Dynamical System (gLDS) for modeling tensor observations in the time series and employ Tucker decomposition to estimate the LDS parameters as action descriptors. Therefore, an action can be represented as a subspace corresponding to a point on a Grassmann manifold. We then perform classification using dictionary learning and sparse coding over the Grassmann manifold. Experiments on the MSR Action3D, UCF Kinect and Northwestern-UCLA Multiview Action3D datasets demonstrate that our proposed method achieves superior performance to state-of-the-art algorithms.
Tasks Dictionary Learning, Skeleton Based Action Recognition, Temporal Action Localization, Time Series
Published 2017-01-14
URL http://arxiv.org/abs/1701.03869v1
PDF http://arxiv.org/pdf/1701.03869v1.pdf
PWC https://paperswithcode.com/paper/learning-linear-dynamical-systems-with-high
Repo
Framework
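
For context, a classical subspace fit of a plain vector LDS (the baseline the paper generalizes) looks as follows, with each skeleton tensor simply flattened per frame. The paper’s gLDS instead keeps the tensor structure and uses Tucker decomposition to estimate the parameters; that part is not reproduced in this sketch.

```python
import numpy as np

def fit_lds(Y, k):
    """SVD-based fit of x_{t+1} = A x_t, y_t = C x_t from observations Y of shape (d, T)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :k]                               # observation matrix
    X = np.diag(s[:k]) @ Vt[:k, :]             # latent state trajectory, shape (k, T)
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])   # one-step least squares for the transition
    return A, C, X

# Toy skeleton sequence: 20 joints x 3 coordinates over 40 frames, flattened per frame.
rng = np.random.default_rng(2)
frames = rng.standard_normal((40, 20, 3))
Y = frames.reshape(40, -1).T                   # (60, 40)
A, C, X = fit_lds(Y, k=5)
print(A.shape, C.shape, X.shape)               # (5, 5) (60, 5) (5, 40)
```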

Randomized Iterative Reconstruction for Sparse View X-ray Computed Tomography

Title Randomized Iterative Reconstruction for Sparse View X-ray Computed Tomography
Authors D. Trinca, Y. Zhong
Abstract With the availability of more powerful computers, iterative reconstruction algorithms are the subject of ongoing work on the design of more efficient reconstruction algorithms for X-ray computed tomography. In this work, we show how two analytical reconstruction algorithms can be improved by correcting the corresponding reconstructions using a randomized iterative reconstruction algorithm. The combined analytical reconstruction followed by randomized iterative reconstruction can also be viewed as a reconstruction algorithm which, in the experiments we have conducted, uses up to 35% fewer projection angles than the analytical reconstruction algorithms and produces results of the same reconstruction quality, without significantly increasing the execution time.
Tasks
Published 2017-03-06
URL http://arxiv.org/abs/1703.04393v1
PDF http://arxiv.org/pdf/1703.04393v1.pdf
PWC https://paperswithcode.com/paper/randomized-iterative-reconstruction-for
Repo
Framework
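
The abstract does not spell out the randomized iterative algorithm, so the sketch below uses randomized Kaczmarz, a standard member of this family, on a toy linear system standing in for the CT projection model. It is an illustration of the general idea, not the authors’ method.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=20000, seed=0):
    """Randomized Kaczmarz: repeatedly project x onto the hyperplane of a random row.

    Rows are sampled with probability proportional to their squared norm
    (Strohmer & Vershynin), which gives an expected linear convergence rate."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)
    probs = row_norms / row_norms.sum()
    for _ in range(n_iter):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy consistent system standing in for a (tiny) sparse-view projection model.
rng = np.random.default_rng(3)
A = rng.standard_normal((400, 100))
x_true = rng.standard_normal(100)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```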

Creativity: Generating Diverse Questions using Variational Autoencoders

Title Creativity: Generating Diverse Questions using Variational Autoencoders
Authors Unnat Jain, Ziyu Zhang, Alexander Schwing
Abstract Generating diverse questions for given images is an important task for computational education, entertainment and AI assistants. Different from many conventional prediction techniques is the need for algorithms to generate a diverse set of plausible questions, which we refer to as “creativity”. In this paper we propose a creative algorithm for visual question generation which combines the advantages of variational autoencoders with long short-term memory networks. We demonstrate that our framework is able to generate a large set of varying questions given a single input image.
Tasks Question Generation
Published 2017-04-11
URL http://arxiv.org/abs/1704.03493v1
PDF http://arxiv.org/pdf/1704.03493v1.pdf
PWC https://paperswithcode.com/paper/creativity-generating-diverse-questions-using
Repo
Framework
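
At generation time, the diversity described above comes from decoding several latent samples for the same image. The toy PyTorch sketch below only shows that control flow (an untrained LSTM decoder conditioned on an image feature and a prior sample z); the architecture sizes are invented and no VAE training loop is included.

```python
import torch
import torch.nn as nn

class QuestionDecoder(nn.Module):
    """Toy LSTM decoder conditioned on an image feature and a latent code z."""
    def __init__(self, vocab=1000, emb=64, hidden=128, img_dim=256, z_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.init_h = nn.Linear(img_dim + z_dim, hidden)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    @torch.no_grad()
    def generate(self, img_feat, z, max_len=10, bos=1):
        # Initial hidden state mixes the image feature with the latent sample.
        h = torch.tanh(self.init_h(torch.cat([img_feat, z], dim=-1)))[None]
        c = torch.zeros_like(h)
        tok, question = torch.tensor([[bos]]), []
        for _ in range(max_len):
            o, (h, c) = self.lstm(self.embed(tok), (h, c))
            tok = self.out(o[:, -1]).argmax(dim=-1, keepdim=True)  # greedy decoding
            question.append(int(tok))
        return question

dec = QuestionDecoder()
img_feat = torch.randn(1, 256)         # stand-in for a CNN image embedding
for i in range(3):                     # three prior samples -> three (here untrained) questions
    z = torch.randn(1, 32)
    print(f"sample {i}:", dec.generate(img_feat, z))
```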

Learning neural trans-dimensional random field language models with noise-contrastive estimation

Title Learning neural trans-dimensional random field language models with noise-contrastive estimation
Authors Bin Wang, Zhijian Ou
Abstract Trans-dimensional random field language models (TRF LMs), in which sentences are modeled as a collection of random fields, have shown performance close to LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs to large training corpora. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by marrying a deep convolutional neural network (CNN) and a bidirectional LSTM into the potential function to extract deep hierarchical features and bidirectionally sequential features. Utilizing all the above techniques enables the successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 of the training time, and further reduces the WER by a relative 4.7% on top of a strong LSTM LM baseline.
Tasks Speech Recognition
Published 2017-10-30
URL http://arxiv.org/abs/1710.10739v1
PDF http://arxiv.org/pdf/1710.10739v1.pdf
PWC https://paperswithcode.com/paper/learning-neural-trans-dimensional-random
Repo
Framework
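
The NCE step mentioned above treats learning as classifying data samples against noise samples, with the normalizing constant absorbed into the model score. A generic sketch of that objective (not the authors’ TRF-specific implementation) is below.

```python
import numpy as np

def nce_loss(log_p_model, log_p_noise, is_data, nu):
    """Noise-contrastive estimation as binary classification (Gutmann & Hyvarinen).

    log_p_model : unnormalized model log-scores (learned log-normalizer absorbed)
    log_p_noise : log-probabilities under the noise distribution
    is_data     : 1 for true data samples, 0 for noise samples
    nu          : number of noise samples drawn per data sample
    """
    # Posterior probability that a sample came from the data rather than the noise.
    logit = log_p_model - log_p_noise - np.log(nu)
    p_data = 1.0 / (1.0 + np.exp(-logit))
    eps = 1e-12
    return -np.mean(is_data * np.log(p_data + eps) + (1 - is_data) * np.log(1 - p_data + eps))

# Toy numbers: 2 data samples followed by nu=4 noise samples each.
log_p_model = np.array([-3.1, -2.7, -8.0, -7.5, -9.1, -6.8, -7.9, -8.4, -9.0, -7.2])
log_p_noise = np.array([-5.0, -4.8, -4.5, -4.9, -5.1, -4.7, -4.6, -5.0, -4.8, -4.9])
is_data     = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
print("NCE objective:", nce_loss(log_p_model, log_p_noise, is_data, nu=4))
```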

Data Augmentation of Wearable Sensor Data for Parkinson’s Disease Monitoring using Convolutional Neural Networks

Title Data Augmentation of Wearable Sensor Data for Parkinson’s Disease Monitoring using Convolutional Neural Networks
Authors Terry Taewoong Um, Franz Michael Josef Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, Dana Kulić
Abstract While convolutional neural networks (CNNs) have been successfully applied to many challenging classification applications, they typically require large datasets for training. When the availability of labeled data is limited, data augmentation is a critical preprocessing step for CNNs. However, data augmentation for wearable sensor data has not been deeply investigated yet. In this paper, various data augmentation methods for wearable sensor data are proposed. The proposed methods and CNNs are applied to the classification of the motor state of Parkinson’s Disease patients, which is challenging due to small dataset size, noisy labels, and large intra-class variability. Appropriate augmentation improves the classification performance from 77.54% to 86.88%.
Tasks Data Augmentation
Published 2017-06-02
URL http://arxiv.org/abs/1706.00527v2
PDF http://arxiv.org/pdf/1706.00527v2.pdf
PWC https://paperswithcode.com/paper/data-augmentation-of-wearable-sensor-data-for
Repo
Framework
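
A loose re-implementation of a few time-series augmentations in the spirit of those proposed for wearable sensor data (jittering, scaling, time-warping). Parameter values and the exact warping scheme are assumptions, not the authors’ code.

```python
import numpy as np

rng = np.random.default_rng(4)

def jitter(x, sigma=0.05):
    """Add Gaussian noise to every sample (simulates sensor noise)."""
    return x + rng.normal(0.0, sigma, x.shape)

def scaling(x, sigma=0.1):
    """Multiply each channel by a random factor (simulates gain changes)."""
    return x * rng.normal(1.0, sigma, (1, x.shape[1]))

def time_warp(x, sigma=0.2, knots=4):
    """Smoothly stretch/compress the time axis by resampling along a random warp."""
    T = x.shape[0]
    knot_speeds = rng.normal(1.0, sigma, knots + 2)            # random smooth speed profile
    speeds = np.interp(np.arange(T), np.linspace(0, T - 1, knots + 2), knot_speeds)
    warp = np.cumsum(speeds)
    warp = (warp - warp[0]) / (warp[-1] - warp[0]) * (T - 1)   # rescale to [0, T-1]
    return np.stack([np.interp(warp, np.arange(T), x[:, c]) for c in range(x.shape[1])], axis=1)

signal = rng.standard_normal((500, 3))        # 500 timesteps x 3 accelerometer axes
augmented = time_warp(scaling(jitter(signal)))
print(augmented.shape)                        # (500, 3)
```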

Predicting Human Activities Using Stochastic Grammar

Title Predicting Human Activities Using Stochastic Grammar
Authors Siyuan Qi, Siyuan Huang, Ping Wei, Song-Chun Zhu
Abstract This paper presents a novel method to predict future human activities from partially observed RGB-D videos. Human activity prediction is generally difficult due to its non-Markovian property and the rich context between human and environments. We use a stochastic grammar model to capture the compositional structure of events, integrating human actions, objects, and their affordances. We represent the event by a spatial-temporal And-Or graph (ST-AOG). The ST-AOG is composed of a temporal stochastic grammar defined on sub-activities, and spatial graphs representing sub-activities that consist of human actions, objects, and their affordances. Future sub-activities are predicted using the temporal grammar and Earley parsing algorithm. The corresponding action, object, and affordance labels are then inferred accordingly. Extensive experiments are conducted to show the effectiveness of our model on both semantic event parsing and future activity prediction.
Tasks Activity Prediction
Published 2017-08-02
URL http://arxiv.org/abs/1708.00945v1
PDF http://arxiv.org/pdf/1708.00945v1.pdf
PWC https://paperswithcode.com/paper/predicting-human-activities-using-stochastic
Repo
Framework
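
The prediction step can be mimicked on a toy scale: given a stochastic grammar over sub-activities and an observed prefix, the distribution of the next sub-activity follows from conditioning on that prefix. The sketch below uses Monte Carlo sampling of a hypothetical two-rule grammar where the paper uses an ST-AOG and exact Earley parsing.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(9)

def sample_event():
    """Toy stochastic grammar over sub-activities:
       Prepare -> reach pour (0.6) | reach open pour (0.4)
       Consume -> drink (0.7)      | move drink (0.3)
       Event   -> Prepare Consume"""
    prepare = ["reach", "pour"] if rng.random() < 0.6 else ["reach", "open", "pour"]
    consume = ["drink"] if rng.random() < 0.7 else ["move", "drink"]
    return prepare + consume

def predict_next(prefix, n_samples=20000):
    """Monte Carlo stand-in for grammar-based prediction: sample event strings,
    keep those consistent with the observed prefix, and tally the next symbol."""
    counts = Counter()
    for _ in range(n_samples):
        seq = sample_event()
        if seq[:len(prefix)] == list(prefix) and len(seq) > len(prefix):
            counts[seq[len(prefix)]] += 1
    total = sum(counts.values())
    return {sym: round(c / total, 3) for sym, c in counts.items()}

print(predict_next(["reach", "pour"]))   # roughly {'drink': 0.7, 'move': 0.3}
```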

Weighted Motion Averaging for the Registration of Multi-View Range Scans

Title Weighted Motion Averaging for the Registration of Multi-View Range Scans
Authors Rui Guo, Jihua Zhu, Yaochen Li, Dapeng Chen, Zhongyu Li, Yongqin Zhang
Abstract Multi-view registration is a fundamental but challenging problem in 3D reconstruction and robot vision. Although the original motion averaging algorithm has been introduced as an effective means to solve the multi-view registration problem, it does not consider the reliability and accuracy of each relative motion. Accordingly, this paper proposes a novel motion averaging algorithm for multi-view registration. First, it utilizes the pair-wise registration algorithm to estimate the relative motion and overlapping percentage of each scan pair with a certain degree of overlap. The overlapping percentage is then used as the weight of each scan pair in the proposed weighted motion averaging algorithm, which pays more attention to reliable and accurate relative motions. By treating each relative motion distinctively, more accurate registration can be achieved by applying weighted motion averaging to multi-view range scans. Experimental results demonstrate the superiority of our proposed approach over state-of-the-art methods in terms of accuracy, robustness and efficiency.
Tasks 3D Reconstruction
Published 2017-02-21
URL http://arxiv.org/abs/1702.06264v3
PDF http://arxiv.org/pdf/1702.06264v3.pdf
PWC https://paperswithcode.com/paper/weighted-motion-averaging-for-the
Repo
Framework
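
The weighting idea, trusting each scan pair in proportion to its overlap, can be shown on a simplified problem. The sketch below averages relative translations only (the paper averages full rigid motions) by solving a weighted least-squares problem; the data and weights are invented.

```python
import numpy as np

def weighted_translation_averaging(n_scans, pairs, weights, rel_t):
    """Solve for global translations t_i minimizing sum_k w_k || t_j - t_i - t_ij ||^2.

    pairs   : list of (i, j) scan pairs with estimated relative translation rel_t[k]
    weights : overlap percentages used as per-pair weights
    Scan 0 is fixed at the origin to remove the global offset ambiguity."""
    dim = rel_t.shape[1]
    A = np.zeros((len(pairs) * dim, n_scans * dim))
    b = np.zeros(len(pairs) * dim)
    for k, (i, j) in enumerate(pairs):
        w, rows = np.sqrt(weights[k]), slice(k * dim, (k + 1) * dim)
        A[rows, j * dim:(j + 1) * dim] = w * np.eye(dim)
        A[rows, i * dim:(i + 1) * dim] = -w * np.eye(dim)
        b[rows] = w * rel_t[k]
    sol, *_ = np.linalg.lstsq(A[:, dim:], b, rcond=None)   # drop scan 0's columns (t_0 = 0)
    return np.vstack([np.zeros(dim), sol.reshape(n_scans - 1, dim)])

# Toy example: 4 scans along a line, noisy pairwise translations, overlap-based weights.
rng = np.random.default_rng(5)
true_t = np.array([[0., 0., 0.], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
pairs = [(0, 1), (1, 2), (2, 3), (0, 2)]
weights = [0.9, 0.8, 0.85, 0.3]                 # low overlap -> low weight
rel_t = np.array([true_t[j] - true_t[i] for i, j in pairs]) + 0.01 * rng.standard_normal((4, 3))
print(weighted_translation_averaging(4, pairs, weights, rel_t))
```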

CrescendoNet: A Simple Deep Convolutional Neural Network with Ensemble Behavior

Title CrescendoNet: A Simple Deep Convolutional Neural Network with Ensemble Behavior
Authors Xiang Zhang, Nishant Vishwamitra, Hongxin Hu, Feng Luo
Abstract We introduce a new deep convolutional neural network, CrescendoNet, built by stacking simple building blocks without residual connections. Each Crescendo block contains independent convolution paths with increasing depths. The numbers of convolution layers and parameters grow only linearly in Crescendo blocks. In experiments, CrescendoNet with only 15 layers outperforms almost all networks without residual connections on the CIFAR10, CIFAR100, and SVHN benchmark datasets. Given a sufficient amount of data, as in the SVHN dataset, CrescendoNet with 15 layers and 4.1M parameters can match the performance of DenseNet-BC with 250 layers and 15.3M parameters. CrescendoNet provides a new way to construct high-performance deep convolutional neural networks without residual connections. Moreover, by investigating the behavior and performance of subnetworks in CrescendoNet, we note that its high performance may come from its implicit ensemble behavior, which differs from FractalNet, another deep convolutional neural network without residual connections. Furthermore, the independence between paths in CrescendoNet allows us to introduce a new path-wise training procedure, which can reduce the memory needed for training.
Tasks
Published 2017-10-30
URL http://arxiv.org/abs/1710.11176v2
PDF http://arxiv.org/pdf/1710.11176v2.pdf
PWC https://paperswithcode.com/paper/crescendonet-a-simple-deep-convolutional
Repo
Framework
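
A minimal PyTorch sketch of the block structure described above: independent convolution paths of linearly increasing depth and no residual connections. Channel widths, kernel sizes, and the averaging used to merge paths are assumptions rather than the paper’s exact configuration.

```python
import torch
import torch.nn as nn

class CrescendoBlock(nn.Module):
    """Toy Crescendo block: parallel conv paths of depth 1..n_paths, outputs averaged."""
    def __init__(self, in_ch, out_ch, n_paths=3):
        super().__init__()
        def conv(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.paths = nn.ModuleList()
        for depth in range(1, n_paths + 1):   # depth grows linearly, hence "crescendo"
            layers = [conv(in_ch, out_ch)] + [conv(out_ch, out_ch) for _ in range(depth - 1)]
            self.paths.append(nn.Sequential(*layers))

    def forward(self, x):
        # Each path is independent, which is what enables path-wise training.
        return torch.stack([p(x) for p in self.paths]).mean(dim=0)

block = CrescendoBlock(in_ch=16, out_ch=32, n_paths=3)
x = torch.randn(2, 16, 32, 32)
print(block(x).shape)                  # torch.Size([2, 32, 32, 32])
```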

Evolving a Vector Space with any Generating Set

Title Evolving a Vector Space with any Generating Set
Authors Richard Nock, Frank Nielsen
Abstract In Valiant’s model of evolution, a class of representations is evolvable iff a polynomial-time process of random mutations guided by selection converges with high probability to a representation as $\epsilon$-close as desired to the optimal one, for any required $\epsilon>0$. Several previous positive results exist that can be related to evolving a vector space, but each imposes disproportionate representations or restrictions on (re)initialisations, distributions, performance functions and/or the mutator. In this paper, we show that all it takes to evolve a normed vector space is merely a set that generates the space. Furthermore, it takes only $\tilde{O}(1/\epsilon^2)$ steps, and the process is essentially stable, agnostic and handles target drifts that rival some proven in fairly restricted settings. Our algorithm can be viewed as a close relative of a popular fifty-year-old gradient-free optimization method for which little is still known from the convergence standpoint: the Nelder-Mead simplex method.
Tasks
Published 2017-04-10
URL http://arxiv.org/abs/1704.02708v2
PDF http://arxiv.org/pdf/1704.02708v2.pdf
PWC https://paperswithcode.com/paper/evolving-a-vector-space-with-any-generating
Repo
Framework

Invariance of Weight Distributions in Rectified MLPs

Title Invariance of Weight Distributions in Rectified MLPs
Authors Russell Tsuchida, Farbod Roosta-Khorasani, Marcus Gallagher
Abstract An interesting approach to analyzing neural networks that has received renewed attention is to examine the equivalent kernel of the neural network. This is based on the fact that a fully connected feedforward network with one hidden layer, a certain weight distribution, an activation function, and an infinite number of neurons can be viewed as a mapping into a Hilbert space. We derive the equivalent kernels of MLPs with ReLU or Leaky ReLU activations for all rotationally-invariant weight distributions, generalizing a previous result that required Gaussian weight distributions. Additionally, the Central Limit Theorem is used to show that for certain activation functions, kernels corresponding to layers with weight distributions having $0$ mean and finite absolute third moment are asymptotically universal, and are well approximated by the kernel corresponding to layers with spherical Gaussian weights. In deep networks, as depth increases the equivalent kernel approaches a pathological fixed point, which can be used to argue why training randomly initialized networks can be difficult. Our results also have implications for weight initialization.
Tasks
Published 2017-11-24
URL http://arxiv.org/abs/1711.09090v3
PDF http://arxiv.org/pdf/1711.09090v3.pdf
PWC https://paperswithcode.com/paper/invariance-of-weight-distributions-in
Repo
Framework
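
For spherical Gaussian weights, the equivalent kernel of a ReLU layer has a well-known closed form (the degree-1 arc-cosine kernel); the paper’s point is that the same kernel arises for all rotationally-invariant weight distributions, and asymptotically for zero-mean distributions with finite absolute third moment. A quick numerical check of the Gaussian case:

```python
import numpy as np

def relu_equivalent_kernel(x, y):
    """Closed-form E_w[ReLU(w.x) ReLU(w.y)] for w ~ N(0, I) (arc-cosine kernel, degree 1)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    theta = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
    return nx * ny / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

def monte_carlo_kernel(x, y, n_hidden=200_000, seed=0):
    """Empirical kernel of a wide ReLU layer with spherical Gaussian weights."""
    W = np.random.default_rng(seed).standard_normal((n_hidden, x.size))
    return np.mean(np.maximum(W @ x, 0) * np.maximum(W @ y, 0))

x = np.array([1.0, 2.0, -0.5])
y = np.array([-0.3, 1.5, 0.8])
print("closed form :", relu_equivalent_kernel(x, y))
print("monte carlo :", monte_carlo_kernel(x, y))
```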

Statistical Inference for Data-adaptive Doubly Robust Estimators with Survival Outcomes

Title Statistical Inference for Data-adaptive Doubly Robust Estimators with Survival Outcomes
Authors Iván Díaz
Abstract The consistency of doubly robust estimators relies on consistent estimation of at least one of two nuisance regression parameters. In moderate to large dimensions, the use of flexible data-adaptive regression estimators may aid in achieving this consistency. However, $n^{1/2}$-consistency of doubly robust estimators is not guaranteed if one of the nuisance estimators is inconsistent. In this paper we present a doubly robust estimator for survival analysis with the novel property that it converges to a Gaussian variable at $n^{1/2}$-rate for a large class of data-adaptive estimators of the nuisance parameters, under the only assumption that at least one of them is consistently estimated at a $n^{1/4}$-rate. This result is achieved through adaptation of recent ideas in semiparametric inference, which amount to: (i) Gaussianizing (i.e., making asymptotically linear) a drift term that arises in the asymptotic analysis of the doubly robust estimator, and (ii) using cross-fitting to avoid entropy conditions on the nuisance estimators. We present the formula of the asymptotic variance of the estimator, which allows computation of doubly robust confidence intervals and p-values. We illustrate the finite-sample properties of the estimator in simulation studies, and demonstrate its use in a phase III clinical trial for estimating the effect of a novel therapy for the treatment of HER2 positive breast cancer.
Tasks Survival Analysis
Published 2017-09-01
URL http://arxiv.org/abs/1709.00401v3
PDF http://arxiv.org/pdf/1709.00401v3.pdf
PWC https://paperswithcode.com/paper/statistical-inference-for-data-adaptive
Repo
Framework
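
The cross-fitting device, fitting nuisance models on one fold and evaluating them on the held-out fold, is easiest to see in the simpler non-survival setting. The sketch below is a cross-fitted AIPW estimate of an average treatment effect with random-forest nuisances; the paper’s survival-outcome estimator and its asymptotic variance formula are more involved and not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def cross_fitted_aipw(X, A, Y, n_splits=2, seed=0):
    """Cross-fitted doubly robust (AIPW) estimate of E[Y(1)] - E[Y(0)] with a 95% CI."""
    psi = np.zeros(len(Y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Nuisances are fit on `train` and only evaluated on the held-out `test` fold.
        ps = RandomForestClassifier(random_state=seed).fit(X[train], A[train])
        e = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
        mu1 = RandomForestRegressor(random_state=seed).fit(X[train][A[train] == 1], Y[train][A[train] == 1])
        mu0 = RandomForestRegressor(random_state=seed).fit(X[train][A[train] == 0], Y[train][A[train] == 0])
        m1, m0 = mu1.predict(X[test]), mu0.predict(X[test])
        psi[test] = A[test] * (Y[test] - m1) / e - (1 - A[test]) * (Y[test] - m0) / (1 - e) + m1 - m0
    est, se = psi.mean(), psi.std(ddof=1) / np.sqrt(len(Y))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Toy data with a true average treatment effect of 2.
rng = np.random.default_rng(6)
X = rng.standard_normal((2000, 3))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
Y = 2 * A + X[:, 0] + rng.standard_normal(2000)
print(cross_fitted_aipw(X, A, Y))
```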

Wisdom of the crowd from unsupervised dimension reduction

Title Wisdom of the crowd from unsupervised dimension reduction
Authors Lingfei Wang, Tom Michoel
Abstract Wisdom of the crowd, the collective intelligence derived from responses of multiple human or machine individuals to the same questions, can be more accurate than each individual and improve social decision-making and prediction accuracy. It can also integrate multiple programs or datasets, each treated as an individual, for the same predictive questions. Crowd wisdom estimates each individual’s independent error level arising from their limited knowledge, and finds the crowd consensus that minimizes the overall error. However, previous studies have merely built isolated, problem-specific models with limited generalizability, and mainly for binary (yes/no) responses. Here we show with simulation and real-world data that the crowd wisdom problem is analogous to one-dimensional unsupervised dimension reduction in machine learning. This provides a natural class of crowd wisdom solutions, such as principal component analysis and Isomap, which can handle binary and also continuous responses, like confidence levels, and consequently can be more accurate than existing solutions. They can even outperform supervised-learning-based collective intelligence that is calibrated on historical performance of individuals, e.g. penalized linear regression and random forest. This study unifies crowd wisdom and unsupervised dimension reduction, and thereupon introduces a broad range of high-performing and widely applicable crowd wisdom methods. As the costs for data acquisition and processing rapidly decrease, this study will promote and guide crowd wisdom applications in the social and natural sciences, including data fusion, meta-analysis, crowd-sourcing, and committee decision making.
Tasks Decision Making, Dimensionality Reduction
Published 2017-11-28
URL http://arxiv.org/abs/1711.11034v1
PDF http://arxiv.org/pdf/1711.11034v1.pdf
PWC https://paperswithcode.com/paper/wisdom-of-the-crowd-from-unsupervised
Repo
Framework
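
The paper’s central observation, that finding the crowd consensus is a one-dimensional dimension-reduction problem, can be demonstrated with plain PCA. The toy data below (individuals whose answers track a hidden truth with varying strength) is an assumption for illustration, not the paper’s datasets.

```python
import numpy as np

def pca_crowd_consensus(R):
    """Aggregate responses with 1-D PCA (questions are samples, individuals are features).

    R : (n_individuals, n_questions).  The leading principal direction weights the
    individuals; projecting each question onto it gives the consensus score."""
    D = R.T - R.mean(axis=1)               # center each individual's responses
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    weights = Vt[0]                         # per-individual loading
    consensus = D @ weights                 # consensus score per question
    if np.corrcoef(consensus, R.mean(axis=0))[0, 1] < 0:   # fix the arbitrary PCA sign
        consensus, weights = -consensus, -weights
    return consensus, weights

# Toy crowd: 10 individuals whose answers track the truth with varying strength.
rng = np.random.default_rng(7)
truth = rng.standard_normal(300)
skill = np.linspace(2.0, 0.0, 10)[:, None]          # experts first, a pure guesser last
R = skill * truth + rng.standard_normal((10, 300))
consensus, weights = pca_crowd_consensus(R)
print("PCA consensus vs truth:", np.corrcoef(consensus, truth)[0, 1])
print("plain mean    vs truth:", np.corrcoef(R.mean(axis=0), truth)[0, 1])
```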

A causation coefficient and taxonomy of correlation/causation relationships

Title A causation coefficient and taxonomy of correlation/causation relationships
Authors Joshua Brulé
Abstract This paper introduces a causation coefficient which is defined in terms of probabilistic causal models. This coefficient is suggested as the natural causal analogue of the Pearson correlation coefficient and permits comparing causation and correlation to each other in a simple, yet rigorous manner. Together, these coefficients provide a natural way to classify the possible correlation/causation relationships that can occur in practice and examples of each relationship are provided. In addition, the typical relationship between correlation and causation is analyzed to provide insight into why correlation and causation are often conflated. Finally, example calculations of the causation coefficient are shown on a real data set.
Tasks
Published 2017-08-05
URL http://arxiv.org/abs/1708.05069v1
PDF http://arxiv.org/pdf/1708.05069v1.pdf
PWC https://paperswithcode.com/paper/a-causation-coefficient-and-taxonomy-of
Repo
Framework
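
The paper’s causation coefficient is defined within probabilistic causal models and is not reproduced here; as a loose numerical reminder of the distinction it formalizes, the sketch below contrasts the observational Pearson correlation with the interventional effect in a toy confounded structural causal model (all coefficients are invented).

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000

def simulate(do_x=None):
    """Toy linear SCM with a confounder Z:  Z -> X,  Z -> Y,  and a weak X -> Y."""
    z = rng.standard_normal(n)
    x = z + 0.5 * rng.standard_normal(n) if do_x is None else np.full(n, float(do_x))
    y = 0.2 * x + 2.0 * z + 0.5 * rng.standard_normal(n)
    return x, y

# Observational (correlational) view: strong association, mostly due to Z.
x_obs, y_obs = simulate()
print("Pearson correlation          :", np.corrcoef(x_obs, y_obs)[0, 1])

# Interventional view: slope of E[Y | do(X = x)] between two interventions.
_, y0 = simulate(do_x=0.0)
_, y1 = simulate(do_x=1.0)
print("interventional effect of X->Y:", y1.mean() - y0.mean())
```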