May 7, 2019

3064 words 15 mins read

Paper Group ANR 37

Admissible Hierarchical Clustering Methods and Algorithms for Asymmetric Networks

Title Admissible Hierarchical Clustering Methods and Algorithms for Asymmetric Networks
Authors Gunnar Carlsson, Facundo Mémoli, Alejandro Ribeiro, Santiago Segarra
Abstract This paper characterizes hierarchical clustering methods that abide by two previously introduced axioms – thus, denominated admissible methods – and proposes tractable algorithms for their implementation. We leverage the fact that, for asymmetric networks, every admissible method must be contained between reciprocal and nonreciprocal clustering, and describe three families of intermediate methods. Grafting methods exchange branches between dendrograms generated by different admissible methods. The convex combination family combines admissible methods through a convex operation in the space of dendrograms, and thirdly, the semi-reciprocal family clusters nodes that are related by strong cyclic influences in the network. Algorithms for the computation of hierarchical clusters generated by reciprocal and nonreciprocal clustering as well as the grafting, convex combination, and semi-reciprocal families are derived using matrix operations in a dioid algebra. Finally, the introduced clustering methods and algorithms are exemplified through their application to a network describing the interrelation between sectors of the United States (U.S.) economy.
Tasks
Published 2016-07-21
URL http://arxiv.org/abs/1607.06335v1
PDF http://arxiv.org/pdf/1607.06335v1.pdf
PWC https://paperswithcode.com/paper/admissible-hierarchical-clustering-methods
Repo
Framework
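The reciprocal and nonreciprocal ultrametrics that bound every admissible method can both be obtained from min-max chain costs, i.e. matrix "powers" in the (min, max) dioid algebra the paper uses. A minimal numpy sketch under that reading (the 3-node network in the usage example is an illustrative assumption, not the paper's U.S. economy network):

```python
import numpy as np

def minmax_closure(d):
    # entry (i, j) of the closure is the minimum over directed chains from
    # i to j of the largest dissimilarity along the chain, computed by
    # iterating the (min, max) dioid matrix product to a fixed point
    u = d.copy()
    np.fill_diagonal(u, 0.0)
    while True:
        # dioid product: prod[i, j] = min_k max(u[i, k], u[k, j])
        prod = np.min(np.maximum(u[:, :, None], u[None, :, :]), axis=1)
        if np.array_equal(prod, u):
            return u
        u = prod

def reciprocal(d):
    # symmetrize first (max of the two directions), then take min-max chains
    return minmax_closure(np.maximum(d, d.T))

def nonreciprocal(d):
    # min-max chains in each direction separately, then symmetrize
    u = minmax_closure(d)
    return np.maximum(u, u.T)
```

On any asymmetric network the nonreciprocal ultrametric is entrywise no larger than the reciprocal one, which is the containment result the paper builds on.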

Multiview Differential Geometry of Curves

Title Multiview Differential Geometry of Curves
Authors Ricardo Fabbri, Benjamin Kimia
Abstract The field of multiple view geometry has seen tremendous progress in reconstruction and calibration due to methods for extracting reliable point features and key developments in projective geometry. Point features, however, are not available in certain applications and result in unstructured point cloud reconstructions. General image curves provide a complementary feature when keypoints are scarce, and result in 3D curve geometry, but face challenges not addressed by the usual projective geometry of points and algebraic curves. We address these challenges by laying the theoretical foundations of a framework based on the differential geometry of general curves, including stationary curves, occluding contours, and non-rigid curves, aiming at stereo correspondence, camera estimation (including calibration, pose, and multiview epipolar geometry), and 3D reconstruction given measured image curves. By gathering previous results into a cohesive theory, novel results were made possible, yielding three contributions. First we derive the differential geometry of an image curve (tangent, curvature, curvature derivative) from that of the underlying space curve (tangent, curvature, curvature derivative, torsion). Second, we derive the differential geometry of a space curve from that of two corresponding image curves. Third, the differential motion of an image curve is derived from camera motion and the differential geometry and motion of the space curve. The availability of such a theory enables novel curve-based multiview reconstruction and camera estimation systems to augment existing point-based approaches. This theory has been used to reconstruct a “3D curve sketch”, to determine camera pose from local curve geometry, and tracking; other developments are underway.
Tasks 3D Reconstruction, Calibration
Published 2016-04-27
URL http://arxiv.org/abs/1604.08256v1
PDF http://arxiv.org/pdf/1604.08256v1.pdf
PWC https://paperswithcode.com/paper/multiview-differential-geometry-of-curves
Repo
Framework

Contour-based 3d tongue motion visualization using ultrasound image sequences

Title Contour-based 3d tongue motion visualization using ultrasound image sequences
Authors Kele Xu, Yin Yang, Clémence Leboullenger, Pierre Roussel, Bruce Denby
Abstract This article describes a contour-based 3D tongue deformation visualization framework using B-mode ultrasound image sequences. A robust, automatic tracking algorithm characterizes tongue motion via a contour, which is then used to drive a generic 3D Finite Element Model (FEM). A novel contour-based 3D dynamic modeling method is presented. Modal reduction and modal warping techniques are applied to model the deformation of the tongue physically and efficiently. This work can be helpful in a variety of fields, such as speech production, silent speech recognition, articulation training, speech disorder study, etc.
Tasks Speech Recognition
Published 2016-05-19
URL http://arxiv.org/abs/1605.05967v1
PDF http://arxiv.org/pdf/1605.05967v1.pdf
PWC https://paperswithcode.com/paper/contour-based-3d-tongue-motion-visualization
Repo
Framework

Tracking Completion

Title Tracking Completion
Authors Yao Sui, Guanghui Wang, Yafei Tang, Li Zhang
Abstract A fundamental component of modern trackers is an online learned tracking model, which is typically modeled either globally or locally. The two kinds of models perform differently in terms of effectiveness and robustness under different challenging situations. This work exploits the advantages of both models. A subspace model, from a global perspective, is learned from previously obtained targets via rank minimization to address the tracking, and a pixel-level local observation is leveraged simultaneously, from a local point of view, to augment the subspace model. A matrix completion method is employed to integrate the two models. Unlike previous tracking methods, which locate the target among all fully observed target candidates, the proposed approach first estimates an expected target via matrix completion through partially observed target candidates, and then identifies the target according to the estimation accuracy with respect to the target candidates. Specifically, tracking is formulated as a problem of target appearance estimation. Extensive experiments on various challenging video sequences verify the effectiveness of the proposed approach and demonstrate that the proposed tracker outperforms other popular state-of-the-art trackers.
Tasks Matrix Completion
Published 2016-08-29
URL http://arxiv.org/abs/1608.08171v2
PDF http://arxiv.org/pdf/1608.08171v2.pdf
PWC https://paperswithcode.com/paper/tracking-completion
Repo
Framework
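The matrix-completion step can be illustrated with the simplest completion primitive: alternate a low-rank SVD projection with re-imposing the observed entries. This "hard impute" sketch is a generic stand-in of my own, not the authors' rank-minimization formulation:

```python
import numpy as np

def complete_low_rank(M, mask, rank=1, iters=200):
    # fill missing entries of M (mask == False) by alternating a rank-r SVD
    # projection with re-imposing the observed entries ("hard impute")
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # best rank-r approximation
        X[mask] = M[mask]                            # keep what was observed
    return X
```

In the tracker's terms, the low-rank matrix plays the role of the target appearance model and the unobserved entries correspond to the partially observed candidates.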

Short-term traffic flow forecasting with spatial-temporal correlation in a hybrid deep learning framework

Title Short-term traffic flow forecasting with spatial-temporal correlation in a hybrid deep learning framework
Authors Yuankai Wu, Huachun Tan
Abstract Deep learning approaches have achieved celebrity status in the field of artificial intelligence, and their success has relied mostly on Convolutional Networks (CNN) and Recurrent Networks. By exploiting fundamental spatial properties of images and videos, the CNN achieves dominant performance on visual tasks, while Recurrent Networks (RNN), especially long short-term memory methods (LSTM), successfully characterize temporal correlation and thus exhibit superior capability for time series tasks. Traffic flow data have rich characteristics in both the time and space domains. However, applications of CNN and LSTM approaches to traffic flow remain limited. In this paper, we propose a novel deep architecture combining a CNN and LSTMs to forecast future traffic flow (CLTFP). A 1-dimensional CNN is exploited to capture spatial features of traffic flow, and two LSTMs are utilized to mine the short-term variability and periodicities of traffic flow. Given these meaningful features, feature-level fusion is performed to achieve short-term forecasting. The proposed CLTFP is compared with other popular forecasting methods on an open dataset. Experimental results indicate that CLTFP has considerable advantages in traffic flow forecasting. In addition, CLTFP is analyzed from the view of Granger causality, and several interesting properties of CLTFP are discovered and discussed.
Tasks Time Series
Published 2016-12-03
URL http://arxiv.org/abs/1612.01022v1
PDF http://arxiv.org/pdf/1612.01022v1.pdf
PWC https://paperswithcode.com/paper/short-term-traffic-flow-forecasting-with
Repo
Framework
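The fusion idea — spatial features from a 1-D convolution over neighboring sensors, temporal features from an LSTM, concatenated for a linear forecast — can be sketched in numpy. All shapes and weights are illustrative assumptions, and a single untrained LSTM stands in for the paper's two trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, hidden = 12, 8

def spatial_features(x, kernels):
    # 1-D convolution across neighboring sensors (the spatial branch)
    return np.concatenate([np.convolve(x, k, mode="valid") for k in kernels])

def lstm_step(x, h, c, W, U, b):
    # one step of a plain LSTM cell; gates stacked as [i, f, o, g]
    z = W @ x + U @ h + b
    s = 1.0 / (1.0 + np.exp(-z[:3 * hidden]))    # sigmoid input/forget/output
    i, f, o = s[:hidden], s[hidden:2 * hidden], s[2 * hidden:]
    g = np.tanh(z[3 * hidden:])                  # candidate cell state
    c = f * c + i * g
    return o * np.tanh(c), c

seq = rng.random((24, n_sensors))                # toy readings: (time, sensors)
kernels = rng.standard_normal((4, 3))            # four width-3 spatial filters
W = 0.1 * rng.standard_normal((4 * hidden, n_sensors))
U = 0.1 * rng.standard_normal((4 * hidden, hidden))
b = np.zeros(4 * hidden)

h = c = np.zeros(hidden)
for x in seq:                                    # temporal branch over time
    h, c = lstm_step(x, h, c, W, U, b)
spatial = spatial_features(seq[-1], kernels)     # spatial branch, latest step

fused = np.concatenate([spatial, h])             # feature-level fusion
w_out = 0.01 * rng.standard_normal(fused.size)
forecast = float(w_out @ fused)                  # next-interval flow estimate
```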

Random Feature Maps via a Layered Random Projection (LaRP) Framework for Object Classification

Title Random Feature Maps via a Layered Random Projection (LaRP) Framework for Object Classification
Authors A. G. Chung, M. J. Shafiee, A. Wong
Abstract The approximation of nonlinear kernels via linear feature maps has recently gained interest due to their applications in reducing the training and testing time of kernel-based learning algorithms. Current random projection methods avoid the curse of dimensionality by embedding the nonlinear feature space into a low dimensional Euclidean space to create nonlinear kernels. We introduce a Layered Random Projection (LaRP) framework, where we model the linear kernels and nonlinearity separately for increased training efficiency. The proposed LaRP framework was assessed using the MNIST hand-written digits database and the COIL-100 object database, and showed notable improvement in object classification performance relative to other state-of-the-art random projection methods.
Tasks Object Classification
Published 2016-02-04
URL http://arxiv.org/abs/1602.01818v1
PDF http://arxiv.org/pdf/1602.01818v1.pdf
PWC https://paperswithcode.com/paper/random-feature-maps-via-a-layered-random
Repo
Framework
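LaRP's layered construction is specific to the paper, but a canonical example of the random feature maps the abstract refers to — random Fourier features approximating a Gaussian kernel with a linear inner product — can be sketched as:

```python
import numpy as np

def rff(X, D=5000, gamma=1.0, seed=0):
    # random Fourier features: z(x) . z(y) ~= exp(-gamma * ||x - y||^2)
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.default_rng(1).random((5, 3))
Z = rff(X)
K_approx = Z @ Z.T                               # linear inner products
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

The approximation error shrinks as O(1/sqrt(D)), which is the efficiency trade-off random projection methods exploit.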

LOH and behold: Web-scale visual search, recommendation and clustering using Locally Optimized Hashing

Title LOH and behold: Web-scale visual search, recommendation and clustering using Locally Optimized Hashing
Authors Yannis Kalantidis, Lyndon Kennedy, Huy Nguyen, Clayton Mellina, David A. Shamma
Abstract We propose a novel hashing-based matching scheme, called Locally Optimized Hashing (LOH), based on a state-of-the-art quantization algorithm that can be used for efficient, large-scale search, recommendation, clustering, and deduplication. We show that matching with LOH only requires set intersections and summations to compute and so is easily implemented in generic distributed computing systems. We further show application of LOH to: a) large-scale search tasks where performance is on par with other state-of-the-art hashing approaches; b) large-scale recommendation where queries consisting of thousands of images can be used to generate accurate recommendations from collections of hundreds of millions of images; and c) efficient clustering with a graph-based algorithm that can be scaled to massive collections in a distributed environment or can be used for deduplication for small collections, like search results, performing better than traditional hashing approaches while only requiring a few milliseconds to run. In this paper we experiment on datasets of up to 100 million images, but in practice our system can scale to larger collections and can be used for other types of data that have a vector representation in a Euclidean space.
Tasks Quantization
Published 2016-04-21
URL http://arxiv.org/abs/1604.06480v2
PDF http://arxiv.org/pdf/1604.06480v2.pdf
PWC https://paperswithcode.com/paper/loh-and-behold-web-scale-visual-search
Repo
Framework
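The claim that LOH matching "only requires set intersections and summations" can be made concrete: if each item is represented by a small set of integer codes, matching and multi-image recommendation reduce to the following (the toy code sets are assumptions; producing them is the quantization step the paper describes):

```python
def loh_style_score(codes_a, codes_b):
    # matching score is the size of the intersection of two small code sets
    return len(set(codes_a) & set(codes_b))

def recommend(query_codes, collection, k=3):
    # multi-image query: sum intersection scores over every query image
    scores = {item: sum(loh_style_score(q, c) for q in query_codes)
              for item, c in collection.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Because both operations distribute trivially over shards of the collection, this maps directly onto generic distributed computing systems, as the abstract notes.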

The De-Biased Whittle Likelihood

Title The De-Biased Whittle Likelihood
Authors Adam M. Sykulski, Sofia C. Olhede, Arthur P. Guillaumin, Jonathan M. Lilly, Jeffrey J. Early
Abstract The Whittle likelihood is a widely used and computationally efficient pseudo-likelihood. However, it is known to produce biased parameter estimates for large classes of models. We propose a method for de-biasing Whittle estimates for second-order stationary stochastic processes. The de-biased Whittle likelihood can be computed in the same $\mathcal{O}(n\log n)$ operations as the standard approach. We demonstrate the superior performance of the method in simulation studies and in application to a large-scale oceanographic dataset, where in both cases the de-biased approach reduces bias by up to two orders of magnitude, achieving estimates that are close to exact maximum likelihood, at a fraction of the computational cost. We prove that the method yields estimates that are consistent at an optimal convergence rate of $n^{-1/2}$, under weaker assumptions than standard theory, where we do not require that the power spectral density is continuous in frequency. We describe how the method can be easily combined with standard methods of bias reduction, such as tapering and differencing, to further reduce bias in parameter estimates.
Tasks
Published 2016-05-22
URL http://arxiv.org/abs/1605.06718v3
PDF http://arxiv.org/pdf/1605.06718v3.pdf
PWC https://paperswithcode.com/paper/the-de-biased-whittle-likelihood
Repo
Framework
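The de-biasing replaces the model spectral density with the expected periodogram, which is the Fejér-weighted Fourier transform of the model autocovariance and hence computable in the same O(n log n) as the standard Whittle likelihood. A sketch for an AR(1) model (the model choice and function names are my assumptions):

```python
import numpy as np

def ar1_acf(phi, sigma2, n):
    # autocovariance of an AR(1): c(tau) = sigma2 * phi**|tau| / (1 - phi**2)
    return sigma2 * phi ** np.arange(n) / (1.0 - phi ** 2)

def expected_periodogram(acf):
    # E[I(w_k)] = sum_{|tau|<n} (1 - |tau|/n) c(tau) exp(-i w_k tau),
    # at Fourier frequencies w_k = 2*pi*k/n, folded so one FFT suffices
    n = len(acf)
    w = 1.0 - np.arange(n) / n               # triangle (Fejer) weights
    g = np.zeros(n)
    g[0] = acf[0]
    m = np.arange(1, n)
    g[m] = w[m] * acf[m] + w[n - m] * acf[n - m]
    return np.fft.fft(g).real

def debiased_whittle(x, acf):
    # negative de-biased Whittle log-likelihood: the expected periodogram
    # replaces the spectral density of the standard Whittle likelihood
    n = len(x)
    I = np.abs(np.fft.fft(x)) ** 2 / n       # periodogram
    Sbar = expected_periodogram(acf)
    keep = Sbar > 0                          # guard degenerate frequencies
    return float(np.sum(np.log(Sbar[keep]) + I[keep] / Sbar[keep]))
```

Minimizing `debiased_whittle` over the model parameters gives the de-biased estimates; the standard Whittle version would use the spectral density itself in place of `Sbar`.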

Learning under Distributed Weak Supervision

Title Learning under Distributed Weak Supervision
Authors Martin Rajchl, Matthew C. H. Lee, Franklin Schrans, Alice Davidson, Jonathan Passerat-Palmbach, Giacomo Tarroni, Amir Alansary, Ozan Oktay, Bernhard Kainz, Daniel Rueckert
Abstract The availability of training data for supervision is a frequently encountered bottleneck of medical image analysis methods. While typically established by a clinical expert rater, the increase in acquired imaging data renders traditional pixel-wise segmentations less feasible. In this paper, we examine the use of a crowdsourcing platform for the distribution of super-pixel weak annotation tasks and collect such annotations from a crowd of non-expert raters. The crowd annotations are subsequently used for training a fully convolutional neural network to address the problem of fetal brain segmentation in T2-weighted MR images. Using this approach we report encouraging results compared to highly targeted, fully supervised methods and potentially address a frequent problem impeding image analysis research.
Tasks Brain Segmentation
Published 2016-06-03
URL http://arxiv.org/abs/1606.01100v1
PDF http://arxiv.org/pdf/1606.01100v1.pdf
PWC https://paperswithcode.com/paper/learning-under-distributed-weak-supervision
Repo
Framework

Probabilistic Bisection Converges Almost as Quickly as Stochastic Approximation

Title Probabilistic Bisection Converges Almost as Quickly as Stochastic Approximation
Authors Peter I. Frazier, Shane G. Henderson, Rolf Waeber
Abstract The probabilistic bisection algorithm (PBA) solves a class of stochastic root-finding problems in one dimension by successively updating a prior belief on the location of the root based on noisy responses to queries at chosen points. The responses indicate the direction of the root from the queried point, and are incorrect with a fixed probability. The fixed-probability assumption is problematic in applications, and so we extend the PBA to apply when this assumption is relaxed. The extension involves the use of a power-one test at each queried point. We explore the convergence behavior of the extended PBA, showing that it converges at a rate arbitrarily close to, but slower than, the canonical “square root” rate of stochastic approximation.
Tasks
Published 2016-12-12
URL http://arxiv.org/abs/1612.03964v1
PDF http://arxiv.org/pdf/1612.03964v1.pdf
PWC https://paperswithcode.com/paper/probabilistic-bisection-converges-almost-as
Repo
Framework
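The fixed-error-probability PBA that the paper extends can be sketched directly: query the posterior median, then apply a Bayes update with the known error probability p (the grid discretization and parameter values are illustrative assumptions):

```python
import numpy as np

def pba(root, n_queries=200, p=0.3, grid_size=10_000, seed=0):
    # probabilistic bisection with a fixed, known error probability p < 1/2;
    # each query reports the direction of `root` from the query point and is
    # wrong with probability p; the posterior lives on a discrete grid
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, grid_size)
    f = np.full(grid_size, 1.0 / grid_size)          # uniform prior mass
    for _ in range(n_queries):
        q = x[np.searchsorted(np.cumsum(f), 0.5)]    # query posterior median
        right = root > q                             # true direction
        if rng.random() < p:                         # flip with probability p
            right = not right
        up, down = (1 - p, p) if right else (p, 1 - p)
        f = f * np.where(x > q, up, down)            # Bayes update
        f /= f.sum()
    return x[np.searchsorted(np.cumsum(f), 0.5)]     # posterior median
```

The paper's extension replaces the single fixed-p update with a power-one test at each queried point, which this sketch does not attempt.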

AMSOM: Adaptive Moving Self-organizing Map for Clustering and Visualization

Title AMSOM: Adaptive Moving Self-organizing Map for Clustering and Visualization
Authors Gerasimos Spanakis, Gerhard Weiss
Abstract Self-Organizing Map (SOM) is a neural network model used to obtain a topology-preserving mapping from the (usually high-dimensional) input/feature space to an output/map space of fewer dimensions (usually two or three, to facilitate visualization). Neurons in the output space are connected with each other, but this structure remains fixed throughout training, and learning is achieved through the updating of neuron reference vectors in feature space. Although growing variants of SOM overcome the fixed-structure limitation, they increase computational cost and do not allow the removal of a neuron after its introduction. In this paper, a variant of SOM is proposed called AMSOM (Adaptive Moving Self-Organizing Map) that, on the one hand, creates a more flexible structure in which neuron positions are dynamically altered during training and, on the other hand, tackles the drawback of a predefined grid by allowing neuron addition and/or removal during training. Experiments on multiple literature datasets show that the proposed method improves the training performance of SOM, leads to better visualization of the input dataset, and provides a framework for determining the optimal number and structure of neurons.
Tasks
Published 2016-05-19
URL http://arxiv.org/abs/1605.06047v1
PDF http://arxiv.org/pdf/1605.06047v1.pdf
PWC https://paperswithcode.com/paper/amsom-adaptive-moving-self-organizing-map-for
Repo
Framework
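The classic fixed-grid SOM update that AMSOM relaxes — move every reference vector toward the input, weighted by a Gaussian of its grid distance to the best-matching unit — can be sketched as (grid size and decay schedules are illustrative assumptions):

```python
import numpy as np

def train_som(data, grid_w=5, grid_h=5, epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    # classic fixed-grid SOM: the baseline whose fixed neuron positions and
    # fixed neuron count AMSOM makes adaptive
    rng = np.random.default_rng(seed)
    pos = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)
    w = rng.random((grid_w * grid_h, data.shape[1]))  # reference vectors
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                   # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5       # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((pos - pos[bmu]) ** 2).sum(axis=1)      # grid distances
            w += lr * np.exp(-d2 / (2 * sigma ** 2))[:, None] * (x - w)
    return w
```

AMSOM additionally lets `pos` move during training and adds or removes rows of `w`, which this fixed-grid sketch does not.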

UnrealStereo: Controlling Hazardous Factors to Analyze Stereo Vision

Title UnrealStereo: Controlling Hazardous Factors to Analyze Stereo Vision
Authors Yi Zhang, Weichao Qiu, Qi Chen, Xiaolin Hu, Alan Yuille
Abstract A reliable stereo algorithm is critical for many robotics applications. But textureless and specular regions can easily cause failure by making feature matching difficult. Understanding whether an algorithm is robust to these hazardous regions is important. Although many stereo benchmarks have been developed to evaluate performance, it is hard to quantify the effect of hazardous regions in real images because the location and severity of these regions are unknown. In this paper, we develop a synthetic image generation tool that enables control of hazardous factors, such as making objects more specular or transparent, to produce hazardous regions of varying severity. The densely controlled sampling strategy in virtual worlds enables effective stress testing of stereo algorithms by varying the types and degrees of the hazard. We generate a large synthetic image dataset with automatically computed hazardous regions and analyze algorithms on these regions. The observations from synthetic images are further validated by annotating hazardous regions in the real-world datasets Middlebury and KITTI (which give a sparse sampling of the hazards). Our synthetic image generation tool is based on the game engine Unreal Engine 4 and will be open-sourced along with the virtual scenes used in our experiments. Many publicly available realistic game contents can be used by our tool, providing an enormous resource for the development and evaluation of algorithms.
Tasks Image Generation
Published 2016-12-14
URL http://arxiv.org/abs/1612.04647v2
PDF http://arxiv.org/pdf/1612.04647v2.pdf
PWC https://paperswithcode.com/paper/unrealstereo-controlling-hazardous-factors-to
Repo
Framework

Parallels of human language in the behavior of bottlenose dolphins

Title Parallels of human language in the behavior of bottlenose dolphins
Authors R. Ferrer-i-Cancho, D. Lusseau, B. McCowan
Abstract A short review of similarities between dolphins and humans with the help of quantitative linguistics and information theory.
Tasks
Published 2016-05-05
URL http://arxiv.org/abs/1605.01661v1
PDF http://arxiv.org/pdf/1605.01661v1.pdf
PWC https://paperswithcode.com/paper/parallels-of-human-language-in-the-behavior
Repo
Framework

Nonlinear Structural Vector Autoregressive Models for Inferring Effective Brain Network Connectivity

Title Nonlinear Structural Vector Autoregressive Models for Inferring Effective Brain Network Connectivity
Authors Yanning Shen, Brian Baingana, Georgios B. Giannakis
Abstract Structural equation models (SEMs) and vector autoregressive models (VARMs) are two broad families of approaches that have been shown useful in effective brain connectivity studies. While VARMs postulate that a given region of interest in the brain is directionally connected to another one by virtue of time-lagged influences, SEMs assert that causal dependencies arise due to contemporaneous effects, and may even be adopted when nodal measurements are not necessarily multivariate time series. To unify these complementary perspectives, linear structural vector autoregressive models (SVARMs) that leverage both contemporaneous and time-lagged nodal data have recently been put forth. Albeit simple and tractable, linear SVARMs are quite limited since they are incapable of modeling nonlinear dependencies between neuronal time series. To this end, the overarching goal of the present paper is to considerably broaden the span of linear SVARMs by capturing nonlinearities through kernels, which have recently emerged as a powerful nonlinear modeling framework in canonical machine learning tasks, e.g., regression, classification, and dimensionality reduction. The merits of kernel-based methods are extended here to the task of learning the effective brain connectivity, and an efficient regularized estimator is put forth to leverage the edge sparsity inherent to real-world complex networks. Judicious kernel choice from a preselected dictionary of kernels is also addressed using a data-driven approach. Extensive numerical tests on ECoG data captured through a study on epileptic seizures demonstrate that it is possible to unveil previously unknown causal links between brain regions of interest.
Tasks Dimensionality Reduction, Time Series
Published 2016-10-20
URL http://arxiv.org/abs/1610.06551v1
PDF http://arxiv.org/pdf/1610.06551v1.pdf
PWC https://paperswithcode.com/paper/nonlinear-structural-vector-autoregressive
Repo
Framework

Machine Learning with Guarantees using Descriptive Complexity and SMT Solvers

Title Machine Learning with Guarantees using Descriptive Complexity and SMT Solvers
Authors Charles Jordan, Łukasz Kaiser
Abstract Machine learning is a thriving part of computer science. There are many efficient approaches to machine learning that do not provide strong theoretical guarantees, and a beautiful general learning theory. Unfortunately, machine learning approaches that give strong theoretical guarantees have not been efficient enough to be applicable. In this paper we introduce a logical approach to machine learning. Models are represented by tuples of logical formulas and inputs and outputs are logical structures. We present our framework together with several applications where we evaluate it using SAT and SMT solvers. We argue that this approach to machine learning is particularly suited to bridge the gap between efficiency and theoretical soundness. We exploit results from descriptive complexity theory to prove strong theoretical guarantees for our approach. To show its applicability, we present experimental results including learning complexity-theoretic reductions rules for board games. We also explain how neural networks fit into our framework, although the current implementation does not scale to provide guarantees for real-world neural networks.
Tasks Board Games
Published 2016-09-09
URL http://arxiv.org/abs/1609.02664v1
PDF http://arxiv.org/pdf/1609.02664v1.pdf
PWC https://paperswithcode.com/paper/machine-learning-with-guarantees-using
Repo
Framework