July 28, 2019

2806 words 14 mins read

Paper Group ANR 364

Dictionary-based Monitoring of Premature Ventricular Contractions: An Ultra-Low-Cost Point-of-Care Service. Community Recovery in Hypergraphs. Deep learning evaluation using deep linguistic processing. Smartphone App Usage Prediction Using Points of Interest. One Representation per Word - Does it make Sense for Composition?. Jointly Trained Sequent …

Dictionary-based Monitoring of Premature Ventricular Contractions: An Ultra-Low-Cost Point-of-Care Service

Title Dictionary-based Monitoring of Premature Ventricular Contractions: An Ultra-Low-Cost Point-of-Care Service
Authors Bollepalli S. Chandra, Challa S. Sastry, Laxminarayana Anumandla, Soumya Jana
Abstract While cardiovascular diseases (CVDs) are prevalent across economic strata, the economically disadvantaged population is disproportionately affected due to the high cost of traditional CVD management. Accordingly, developing an ultra-low-cost alternative, affordable even to groups at the bottom of the economic pyramid, has emerged as a societal imperative. Against this backdrop, we propose an inexpensive yet accurate home-based electrocardiogram (ECG) monitoring service. Specifically, we seek to provide point-of-care monitoring of premature ventricular contractions (PVCs), a high frequency of which could indicate the onset of potentially fatal arrhythmia. Note that a traditional telecardiology system acquires the ECG, transmits it to a professional diagnostic centre without processing, and nearly achieves the diagnostic accuracy of a bedside setup, albeit at high bandwidth cost. In this context, we aim at reducing cost without significantly sacrificing reliability. To this end, we develop a dictionary-based algorithm that detects, with high sensitivity, only the anomalous beats, which are then transmitted. We further compress those transmitted beats using class-specific dictionaries subject to suitable reconstruction/diagnostic fidelity. Such a scheme would not only reduce the overall bandwidth requirement, but also localise anomalous beats, thereby reducing physicians’ burden. Finally, using Monte Carlo cross validation on the MIT-BIH arrhythmia database, we evaluate the performance of the proposed system. In particular, with a sensitivity target of at most one undetected PVC in one hundred beats, and a percentage root mean squared difference less than 9% (a clinically acceptable level of fidelity), we achieved about a 99.15% reduction in bandwidth cost, equivalent to 118-fold savings over traditional telecardiology.
Tasks
Published 2017-05-24
URL http://arxiv.org/abs/1705.08619v1
PDF http://arxiv.org/pdf/1705.08619v1.pdf
PWC https://paperswithcode.com/paper/dictionary-based-monitoring-of-premature
Repo
Framework
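
A minimal sketch of the dictionary-based screening idea behind this paper: sparse-code each beat over a "normal-beat" dictionary and flag beats with a large reconstruction residual for transmission. The dictionary, the OMP routine, and the 0.3 threshold below are illustrative stand-ins, not the paper's actual learned dictionaries or tuned operating point.

```python
import numpy as np

def omp(D, x, n_nonzero=8):
    """Greedy orthogonal matching pursuit: sparse-code beat x over dictionary D."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit on the chosen atoms and update the residual
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef, np.linalg.norm(residual) / np.linalg.norm(x)

# toy "normal-beat" dictionary: columns are unit-norm atoms (random here)
rng = np.random.default_rng(0)
D_normal = rng.standard_normal((128, 64))
D_normal /= np.linalg.norm(D_normal, axis=0)

beat = rng.standard_normal(128)           # stand-in for one windowed ECG beat
coef, rel_err = omp(D_normal, beat)

# a beat poorly explained by the normal-beat dictionary is flagged and transmitted
is_anomalous = rel_err > 0.3              # threshold is illustrative only
print(f"relative residual = {rel_err:.3f}, transmit = {is_anomalous}")
```

The same sparse codes double as the compressed representation of the transmitted beats, which is where the bandwidth savings come from.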

Community Recovery in Hypergraphs

Title Community Recovery in Hypergraphs
Authors Kwangjun Ahn, Kangwook Lee, Changho Suh
Abstract Community recovery is a central problem that arises in a wide variety of applications such as network clustering, motion segmentation, face clustering and protein complex detection. The objective of the problem is to cluster data points into distinct communities based on a set of measurements, each of which is associated with the values of a certain number of data points. While most of the prior works focus on a setting in which the number of data points involved in a measurement is two, this work explores a generalized setting in which the number can be more than two. Motivated by applications particularly in machine learning and channel coding, we consider two types of measurements: (1) homogeneity measurement which indicates whether or not the associated data points belong to the same community; (2) parity measurement which denotes the modulo-2 sum of the values of the data points. Such measurements are possibly corrupted by Bernoulli noise. We characterize the fundamental limits on the number of measurements required to reconstruct the communities for the considered models.
Tasks Motion Segmentation
Published 2017-09-12
URL http://arxiv.org/abs/1709.03670v1
PDF http://arxiv.org/pdf/1709.03670v1.pdf
PWC https://paperswithcode.com/paper/community-recovery-in-hypergraphs
Repo
Framework
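
The two measurement models described in the abstract are easy to simulate. The sketch below generates noisy homogeneity and parity measurements over random size-3 subsets of binary community labels; the sizes and the 0.1 flip probability are illustrative, and the recovery algorithm itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 12                                     # number of data points
x = rng.integers(0, 2, size=n)             # hidden binary community labels

def homogeneity(idx):
    """1 if all sampled points share a community, else 0."""
    return int(len(set(x[idx])) == 1)

def parity(idx):
    """Modulo-2 sum of the sampled points' values."""
    return int(np.sum(x[idx]) % 2)

def noisy(value, flip_prob=0.1):
    """Pass the measurement through a binary symmetric (Bernoulli) channel."""
    return value ^ int(rng.random() < flip_prob)

# each measurement touches d = 3 points, as in the hypergraph setting
d, m = 3, 5
for _ in range(m):
    idx = rng.choice(n, size=d, replace=False)
    print(idx, "homogeneity:", noisy(homogeneity(idx)), "parity:", noisy(parity(idx)))
```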

Deep learning evaluation using deep linguistic processing

Title Deep learning evaluation using deep linguistic processing
Authors Alexander Kuhnle, Ann Copestake
Abstract We discuss problems with the standard approaches to evaluation for tasks like visual question answering, and argue that artificial data can be used to address these as a complement to current practice. We demonstrate that with the help of existing ‘deep’ linguistic processing technology we are able to create challenging abstract datasets, which enable us to investigate the language understanding abilities of multimodal deep learning models in detail, as compared to a single performance value on a static and monolithic dataset.
Tasks Question Answering, Visual Question Answering
Published 2017-06-05
URL http://arxiv.org/abs/1706.01322v2
PDF http://arxiv.org/pdf/1706.01322v2.pdf
PWC https://paperswithcode.com/paper/deep-learning-evaluation-using-deep
Repo
Framework
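
The core idea, generating artificial data whose ground-truth answers come for free, can be illustrated without the paper's deep linguistic machinery. The sketch below builds abstract scenes of coloured shapes and derives templated question-answer pairs from them; the paper instead uses grammar-based generation, so everything here (scene format, templates) is a simplified assumption.

```python
import itertools, random

random.seed(0)
COLORS = ["red", "green", "blue"]
SHAPES = ["square", "circle", "triangle"]

def make_scene(n_objects=4):
    """An abstract scene is just a bag of (color, shape) objects."""
    return [(random.choice(COLORS), random.choice(SHAPES)) for _ in range(n_objects)]

def questions(scene):
    """Templated questions whose answers are computed from the scene itself."""
    for color, shape in itertools.product(COLORS, SHAPES):
        yield (f"Is there a {color} {shape}?",
               "yes" if (color, shape) in scene else "no")
    for color in COLORS:
        yield (f"How many {color} objects are there?",
               str(sum(1 for c, _ in scene if c == color)))

scene = make_scene()
for q, a in questions(scene):
    print(q, "->", a)
```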

Smartphone App Usage Prediction Using Points of Interest

Title Smartphone App Usage Prediction Using Points of Interest
Authors Donghan Yu, Yong Li, Fengli Xu, Pengyu Zhang, Vassilis Kostakos
Abstract In this paper we present the first population-level, city-scale analysis of application usage on smartphones. Using deep packet inspection at the network operator level, we obtained a geo-tagged dataset with more than 6 million unique devices that launched more than 10,000 unique applications across the city of Shanghai over one week. We develop a technique that leverages transfer learning to predict which applications are most popular and estimate the whole usage distribution based on the Point of Interest (POI) information of that particular location. We demonstrate that our technique has an 83.0% hit rate in successfully identifying the top five popular applications, and a 0.15 RMSE when estimating usage with just 10% sampled sparse data. It outperforms existing state-of-the-art approaches by about 25.7%. Our findings pave the way for predicting which apps are relevant to a user given their current location, and which applications are popular where. The implications of our findings are broad: they enable a range of systems to benefit from such timely predictions, including operating systems, network operators, app stores, advertisers, and service providers.
Tasks Transfer Learning
Published 2017-11-26
URL http://arxiv.org/abs/1711.09337v1
PDF http://arxiv.org/pdf/1711.09337v1.pdf
PWC https://paperswithcode.com/paper/smartphone-app-usage-prediction-using-points
Repo
Framework
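
For concreteness, here is one plausible reading of the two evaluation metrics quoted in the abstract: top-5 hit rate as the overlap between the true and predicted top-5 app sets, and RMSE over the estimated usage distribution. The exact definitions used in the paper may differ, and the data below is synthetic.

```python
import numpy as np

def top_k_hit_rate(true_dist, pred_dist, k=5):
    """Fraction of the truly top-k apps that also appear in the predicted top-k."""
    true_top = set(np.argsort(true_dist)[::-1][:k])
    pred_top = set(np.argsort(pred_dist)[::-1][:k])
    return len(true_top & pred_top) / k

def rmse(true_dist, pred_dist):
    return float(np.sqrt(np.mean((np.asarray(true_dist) - np.asarray(pred_dist)) ** 2)))

rng = np.random.default_rng(2)
true_usage = rng.dirichlet(np.ones(20))            # usage share of 20 apps at one location
pred_usage = true_usage + rng.normal(0, 0.01, 20)  # noisy stand-in for a model's estimate

print("top-5 hit rate:", top_k_hit_rate(true_usage, pred_usage))
print("RMSE:", rmse(true_usage, pred_usage))
```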

One Representation per Word - Does it make Sense for Composition?

Title One Representation per Word - Does it make Sense for Composition?
Authors Thomas Kober, Julie Weeds, John Wilkie, Jeremy Reffin, David Weir
Abstract In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone. We evaluate the performance of off-the-shelf single-vector and multi-sense vector models on a benchmark phrase similarity task and a novel task for word-sense discrimination. We find that single-sense vector models perform as well or better than multi-sense vector models despite arguably less clean elementary representations. Our findings furthermore show that simple composition functions such as pointwise addition are able to recover sense specific information from a single-sense vector model remarkably well.
Tasks
Published 2017-02-22
URL http://arxiv.org/abs/1702.06696v1
PDF http://arxiv.org/pdf/1702.06696v1.pdf
PWC https://paperswithcode.com/paper/one-representation-per-word-does-it-make
Repo
Framework
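
The composition function studied here is simply pointwise addition of word vectors. The sketch below shows the mechanics with random toy vectors (so the similarity values themselves are meaningless); in practice the vectors would come from a pre-trained single-sense model.

```python
import numpy as np

# toy single-sense vectors; in practice these would come from a pre-trained model
rng = np.random.default_rng(3)
vocab = {w: rng.standard_normal(50) for w in ["bank", "river", "money", "deposit", "fish"]}

def compose(words):
    """Pointwise addition as the composition function."""
    return np.sum([vocab[w] for w in words], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

phrase = compose(["river", "bank"])
print("sim(river bank, fish):  ", cosine(phrase, vocab["fish"]))
print("sim(river bank, money): ", cosine(phrase, vocab["money"]))
```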

Jointly Trained Sequential Labeling and Classification by Sparse Attention Neural Networks

Title Jointly Trained Sequential Labeling and Classification by Sparse Attention Neural Networks
Authors Mingbo Ma, Kai Zhao, Liang Huang, Bing Xiang, Bowen Zhou
Abstract Sentence-level classification and sequential labeling are two fundamental tasks in language understanding. While these two tasks are usually modeled separately, in reality, they are often correlated, for example in intent classification and slot filling, or in topic classification and named-entity recognition. In order to utilize the potential benefits from their correlations, we propose a jointly trained model for learning the two tasks simultaneously via Long Short-Term Memory (LSTM) networks. This model predicts the sentence-level category and the word-level label sequence from the stepwise output hidden representations of LSTM. We also introduce a novel mechanism of “sparse attention” to weigh words differently based on their semantic relevance to sentence-level classification. The proposed method outperforms baseline models on ATIS and TREC datasets.
Tasks Intent Classification, Named Entity Recognition, Slot Filling
Published 2017-09-28
URL http://arxiv.org/abs/1709.10191v1
PDF http://arxiv.org/pdf/1709.10191v1.pdf
PWC https://paperswithcode.com/paper/jointly-trained-sequential-labeling-and
Repo
Framework
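
A rough PyTorch sketch of the joint architecture: one LSTM feeds both a per-token labelling head and an attention-pooled sentence-classification head. Standard softmax attention is used here as a stand-in for the paper's "sparse attention" mechanism, and all sizes are illustrative; joint training would sum the cross-entropy losses of the two heads.

```python
import torch
import torch.nn as nn

class JointTagger(nn.Module):
    """One LSTM, two heads: per-token labels and an attention-pooled sentence class."""
    def __init__(self, vocab, n_labels, n_classes, emb=64, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.tag_head = nn.Linear(hid, n_labels)      # word-level (e.g., slot filling)
        self.attn = nn.Linear(hid, 1)                 # attention score per time step
        self.cls_head = nn.Linear(hid, n_classes)     # sentence-level (e.g., intent)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))          # (batch, time, hid)
        tag_logits = self.tag_head(h)                 # per-token label scores
        weights = torch.softmax(self.attn(h), dim=1)  # (batch, time, 1)
        pooled = (weights * h).sum(dim=1)             # attention-weighted sentence vector
        return tag_logits, self.cls_head(pooled)

model = JointTagger(vocab=1000, n_labels=10, n_classes=5)
tokens = torch.randint(0, 1000, (2, 7))               # two sentences of length 7
tag_logits, cls_logits = model(tokens)
print(tag_logits.shape, cls_logits.shape)              # (2, 7, 10) and (2, 5)
```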

Random Projection and Its Applications

Title Random Projection and Its Applications
Authors Mahmoud Nabil
Abstract Random Projection is a foundational research topic that connects a number of machine learning algorithms through a common mathematical basis. It is used to reduce the dimensionality of a dataset by efficiently projecting data points into a lower-dimensional space while approximately preserving the relative distances between them. In this paper, we explain the random projection method by presenting its mathematical background and foundations, the applications that currently adopt it, and an overview of its current research directions.
Tasks
Published 2017-10-09
URL http://arxiv.org/abs/1710.03163v1
PDF http://arxiv.org/pdf/1710.03163v1.pdf
PWC https://paperswithcode.com/paper/random-projection-and-its-applications
Repo
Framework
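
The method itself is a one-liner: multiply the data by a random Gaussian matrix scaled by 1/sqrt(k). A minimal sketch, with dimensions chosen arbitrarily, showing that pairwise distances are roughly preserved after projection:

```python
import numpy as np

rng = np.random.default_rng(4)

n, d, k = 500, 1000, 100                    # 500 points, from 1000-d down to 100-d
X = rng.standard_normal((n, d))

# Gaussian random projection; the 1/sqrt(k) scaling keeps distances roughly unchanged
R = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ R

# compare a pairwise distance before and after projection
i, j = 0, 1
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Y[i] - Y[j])
print(f"original distance {orig:.2f}, projected distance {proj:.2f}, ratio {proj / orig:.3f}")
```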

Long-term Correlation Tracking using Multi-layer Hybrid Features in Sparse and Dense Environments

Title Long-term Correlation Tracking using Multi-layer Hybrid Features in Sparse and Dense Environments
Authors Nathanael L. Baisa, Deepayan Bhowmik, Andrew Wallace
Abstract Tracking a target of interest in both sparse and crowded environments is a challenging problem, not yet successfully addressed in the literature. In this paper, we propose a new long-term visual tracking algorithm, learning discriminative correlation filters and using an online classifier, to track a target of interest in both sparse and crowded video sequences. First, we learn a translation correlation filter using a multi-layer hybrid of convolutional neural networks (CNN) and traditional hand-crafted features. We combine advantages of both the lower convolutional layer which retains more spatial details for precise localization and the higher convolutional layer which encodes semantic information for handling appearance variations, and then integrate these with histogram of oriented gradients (HOG) and color-naming traditional features. Second, we include a re-detection module for overcoming tracking failures due to long-term occlusions by training an incremental (online) SVM on the most confident frames using hand-engineered features. This re-detection module is activated only when the correlation response of the object is below some pre-defined threshold. This generates high score detection proposals which are temporally filtered using a Gaussian mixture probability hypothesis density (GM-PHD) filter to find the detection proposal with the maximum weight as the target state estimate by removing the other detection proposals as clutter. Finally, we learn a scale correlation filter for estimating the scale of a target by constructing a target pyramid around the estimated or re-detected position using the HOG features. We carry out extensive experiments on both sparse and dense data sets which show that our method significantly outperforms state-of-the-art methods.
Tasks Visual Tracking
Published 2017-05-31
URL http://arxiv.org/abs/1705.11175v6
PDF http://arxiv.org/pdf/1705.11175v6.pdf
PWC https://paperswithcode.com/paper/long-term-correlation-tracking-using-multi
Repo
Framework
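
As context for the translation correlation filter, here is a single-channel, MOSSE-style filter learned in the frequency domain from one patch and applied to a shifted search patch. This is a deliberately simplified stand-in for the paper's multi-layer hybrid features, scale filter, and re-detection module; the patch, regulariser, and Gaussian width are illustrative.

```python
import numpy as np

def gaussian_response(h, w, sigma=2.0):
    """Desired correlation output: a Gaussian peak centred on the target."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(5)
patch = rng.standard_normal((64, 64))            # stand-in for a feature channel of the target

F = np.fft.fft2(patch)
G = np.fft.fft2(gaussian_response(64, 64))
lam = 1e-2                                       # regulariser

# closed-form single-channel filter (MOSSE-style): H* = (G .* conj(F)) / (F .* conj(F) + lam)
H_conj = (G * np.conj(F)) / (F * np.conj(F) + lam)

# correlation response on a new search patch; the peak location gives the translation
search = np.roll(patch, (3, -5), axis=(0, 1))
response = np.real(np.fft.ifft2(np.fft.fft2(search) * H_conj))
peak = np.unravel_index(np.argmax(response), response.shape)
print("response peak at", peak)                  # offset from (32, 32) reflects the shift
```

Dropping below a threshold on this response is what would trigger the paper's re-detection module.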

An Analysis of Action Recognition Datasets for Language and Vision Tasks

Title An Analysis of Action Recognition Datasets for Language and Vision Tasks
Authors Spandana Gella, Frank Keller
Abstract A large amount of recent research has focused on tasks that combine language and vision, resulting in a proliferation of datasets and methods. One such task is action recognition, whose applications include image annotation, scene understanding and image retrieval. In this survey, we categorize the existing approaches based on how they conceptualize this problem and provide a detailed review of existing datasets, highlighting their diversity as well as advantages and disadvantages. We focus on recently developed datasets which link visual information with linguistic resources and provide a fine-grained syntactic and semantic analysis of actions in images.
Tasks Image Retrieval, Temporal Action Localization
Published 2017-04-24
URL http://arxiv.org/abs/1704.07129v1
PDF http://arxiv.org/pdf/1704.07129v1.pdf
PWC https://paperswithcode.com/paper/an-analysis-of-action-recognition-datasets
Repo
Framework

Information Bottleneck in Control Tasks with Recurrent Spiking Neural Networks

Title Information Bottleneck in Control Tasks with Recurrent Spiking Neural Networks
Authors Madhavun Candadai Vasu, Eduardo Izquierdo
Abstract The nervous system encodes continuous information from the environment in the form of discrete spikes, and then decodes these to produce smooth motor actions. Understanding how spikes integrate, represent, and process information to produce behavior is one of the greatest challenges in neuroscience. Information theory has the potential to help us address this challenge. Informational analyses of deep and feed-forward artificial neural networks solving static input-output tasks have led to the proposal of the Information Bottleneck principle, which states that deeper layers encode more relevant yet minimal information about the inputs. Such an analysis on networks that are recurrent, spiking, and perform control tasks is relatively unexplored. Here, we present results from a mutual information analysis of a recurrent spiking neural network that was evolved to perform the classic pole-balancing task. Our results show that these networks deviate from the Information Bottleneck principle prescribed for feed-forward networks.
Tasks
Published 2017-06-06
URL http://arxiv.org/abs/1706.01831v1
PDF http://arxiv.org/pdf/1706.01831v1.pdf
PWC https://paperswithcode.com/paper/information-bottleneck-in-control-tasks-with
Repo
Framework
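
The workhorse of such an analysis is an empirical mutual information estimate between a task variable and a neural signal. Below is a simple plug-in (histogram) estimator on synthetic data; the paper's actual estimator, variables, and binning may differ.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in mutual information estimate (in bits) from a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(6)
pole_angle = rng.standard_normal(5000)                             # stand-in for the task variable
neuron_rate = 0.8 * pole_angle + 0.6 * rng.standard_normal(5000)   # a correlated "neuron"

print(f"I(angle; neuron) = {mutual_information(pole_angle, neuron_rate):.3f} bits")
print(f"I(angle; noise)  = {mutual_information(pole_angle, rng.standard_normal(5000)):.3f} bits")
```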

Total stability of kernel methods

Title Total stability of kernel methods
Authors Andreas Christmann, Daohong Xiang, Ding-Xuan Zhou
Abstract Regularized empirical risk minimization using kernels and their corresponding reproducing kernel Hilbert spaces (RKHSs) plays an important role in machine learning. However, the actually used kernel often depends on one or on a few hyperparameters, or the kernel is even data dependent in a much more complicated manner. Examples are Gaussian RBF kernels, kernel learning, and hierarchical Gaussian kernels which were recently proposed for deep learning. Therefore, the actually used kernel is often computed by a grid search or in an iterative manner and can often only be considered as an approximation to the “ideal” or “optimal” kernel. The paper gives conditions under which classical kernel-based methods based on a convex Lipschitz loss function and on a bounded and smooth kernel are stable if the probability measure $P$, the regularization parameter $\lambda$, and the kernel $k$ may slightly change in a simultaneous manner. Similar results are also given for pairwise learning. Therefore, the topic of this paper is somewhat more general than in classical robust statistics, where usually only the influence of small perturbations of the probability measure $P$ on the estimated function is considered.
Tasks
Published 2017-09-22
URL http://arxiv.org/abs/1709.07625v1
PDF http://arxiv.org/pdf/1709.07625v1.pdf
PWC https://paperswithcode.com/paper/total-stability-of-kernel-methods
Repo
Framework
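
A numerical illustration of the question the paper studies: how much does the estimated function move when the regularization parameter and the kernel are perturbed slightly and simultaneously? Kernel ridge regression (squared loss) is used here purely for convenience, even though the paper's results are stated for Lipschitz losses, and the data and perturbation sizes are made up.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)
X_test = np.linspace(-3, 3, 400).reshape(-1, 1)

def fit_predict(alpha, gamma):
    """Kernel ridge regression with a Gaussian RBF kernel of bandwidth parameter gamma."""
    return KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma).fit(X, y).predict(X_test)

f_ref = fit_predict(alpha=0.1, gamma=1.0)
f_pert = fit_predict(alpha=0.11, gamma=1.05)   # slightly perturbed lambda and kernel

print("sup-norm change in the estimated function:", np.max(np.abs(f_ref - f_pert)))
```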

Multiple Kernel Learning and Automatic Subspace Relevance Determination for High-dimensional Neuroimaging Data

Title Multiple Kernel Learning and Automatic Subspace Relevance Determination for High-dimensional Neuroimaging Data
Authors Murat Seckin Ayhan, Vijay Raghavan, Alzheimer’s disease Neuroimaging Initiative
Abstract Alzheimer’s disease is a major cause of dementia. Its diagnosis requires accurate biomarkers that are sensitive to disease stages. In this respect, we regard probabilistic classification as a method of designing a probabilistic biomarker for disease staging. Probabilistic biomarkers naturally support the interpretation of decisions and evaluation of uncertainty associated with them. In this paper, we obtain probabilistic biomarkers via Gaussian Processes. Gaussian Processes enable probabilistic kernel machines that offer flexible means to accomplish Multiple Kernel Learning. Exploiting this flexibility, we propose a new variation of Automatic Relevance Determination and tackle the challenges of high dimensionality through multiple kernels. Our research results demonstrate that the Gaussian Process models are competitive with or better than the well-known Support Vector Machine in terms of classification performance even in the cases of single kernel learning. Extending the basic scheme towards the Multiple Kernel Learning, we improve the efficacy of the Gaussian Process models and their interpretability in terms of the known anatomical correlates of the disease. For instance, the disease pathology starts in and around the hippocampus and entorhinal cortex. Through the use of Gaussian Processes and Multiple Kernel Learning, we have automatically and efficiently determined those portions of neuroimaging data. In addition to their interpretability, our Gaussian Process models are competitive with recent deep learning solutions under similar settings.
Tasks Gaussian Processes
Published 2017-06-02
URL http://arxiv.org/abs/1706.00856v1
PDF http://arxiv.org/pdf/1706.00856v1.pdf
PWC https://paperswithcode.com/paper/multiple-kernel-learning-and-automatic
Repo
Framework
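
A small sketch of Automatic Relevance Determination in a Gaussian Process classifier: one length-scale per feature, learned by maximizing the marginal likelihood, so that irrelevant features end up with large length-scales. This uses scikit-learn with synthetic data and an anisotropic RBF kernel as a stand-in; the paper's multiple-kernel, subspace-level scheme over neuroimaging data is more involved.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(8)
n, d = 120, 5
X = rng.standard_normal((n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # only the first two features matter

# one length-scale per feature: Automatic Relevance Determination in kernel form
kernel = RBF(length_scale=np.ones(d), length_scale_bounds=(1e-2, 1e3))
gpc = GaussianProcessClassifier(kernel=kernel).fit(X, y)

# small learned length-scales mark relevant features; large ones are effectively pruned
print("learned length-scales:", gpc.kernel_.length_scale)
print("train accuracy:", gpc.score(X, y))
```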

AI Buzzwords Explained: Multi-Agent Path Finding (MAPF)

Title AI Buzzwords Explained: Multi-Agent Path Finding (MAPF)
Authors Hang Ma, Sven Koenig
Abstract Explanation of the hot topic “multi-agent path finding”.
Tasks Multi-Agent Path Finding
Published 2017-10-10
URL http://arxiv.org/abs/1710.03774v2
PDF http://arxiv.org/pdf/1710.03774v2.pdf
PWC https://paperswithcode.com/paper/ai-buzzwords-explained-multi-agent-path
Repo
Framework

A dual framework for low-rank tensor completion

Title A dual framework for low-rank tensor completion
Authors Madhav Nimishakavi, Pratik Jawanpuria, Bamdev Mishra
Abstract One of the popular approaches for low-rank tensor completion is to use the latent trace norm regularization. However, most existing works in this direction learn a sparse combination of tensors. In this work, we fill this gap by proposing a variant of the latent trace norm that helps in learning a non-sparse combination of tensors. We develop a dual framework for solving the low-rank tensor completion problem. We first show a novel characterization of the dual solution space with an interesting factorization of the optimal solution. Overall, the optimal solution is shown to lie on a Cartesian product of Riemannian manifolds. Furthermore, we exploit the versatile Riemannian optimization framework to propose a computationally efficient trust-region algorithm. The experiments illustrate the efficacy of the proposed algorithm on several real-world datasets across applications.
Tasks
Published 2017-12-04
URL http://arxiv.org/abs/1712.01193v4
PDF http://arxiv.org/pdf/1712.01193v4.pdf
PWC https://paperswithcode.com/paper/a-dual-framework-for-low-rank-tensor
Repo
Framework
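
For background, trace-norm regularizers for tensors are built from the nuclear norms of the mode-wise unfoldings. The sketch below computes those unfoldings and their nuclear norms for a near-rank-one tensor; it illustrates the ingredients only and does not reproduce the paper's latent trace norm variant or its dual Riemannian algorithm.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def nuclear_norm(M):
    return float(np.sum(np.linalg.svd(M, compute_uv=False)))

rng = np.random.default_rng(9)
# a near-rank-one 3-way tensor: outer-product structure keeps the unfoldings low-rank
a, b, c = rng.standard_normal(10), rng.standard_normal(12), rng.standard_normal(8)
T = np.einsum("i,j,k->ijk", a, b, c) + 0.01 * rng.standard_normal((10, 12, 8))

for mode in range(3):
    print(f"mode-{mode} nuclear norm:", round(nuclear_norm(unfold(T, mode)), 3))
```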

LocNet: Global localization in 3D point clouds for mobile vehicles

Title LocNet: Global localization in 3D point clouds for mobile vehicles
Authors Huan Yin, Li Tang, Xiaqing Ding, Yue Wang, Rong Xiong
Abstract Global localization in 3D point clouds is a challenging problem of estimating the pose of vehicles without any prior knowledge. In this paper, a solution to this problem is presented by achieving place recognition and metric pose estimation in the global prior map. Specifically, we present a semi-handcrafted representation learning method for LiDAR point clouds using siamese LocNets, which casts the place recognition problem as a similarity modeling problem. With the final representations learned by LocNet, a global localization framework with range-only observations is proposed. To demonstrate the performance and effectiveness of our global localization system, the KITTI dataset is employed for comparison with other algorithms, and our long-term multi-session datasets are used for evaluation. The results show that our system achieves high accuracy.
Tasks Pose Estimation, Representation Learning
Published 2017-12-06
URL http://arxiv.org/abs/1712.02165v2
PDF http://arxiv.org/pdf/1712.02165v2.pdf
PWC https://paperswithcode.com/paper/locnet-global-localization-in-3d-point-clouds
Repo
Framework
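
To make the place-recognition-as-retrieval idea concrete, here is a toy pipeline that swaps the learned LocNet representation for a trivial handcrafted, rotation-invariant range histogram and matches a query scan against a database of map scans by nearest neighbour. The descriptor, scan sizes, and noise level are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def range_histogram(scan_xyz, bins=32, max_range=50.0):
    """A rotation-invariant, handcrafted descriptor: histogram of point ranges."""
    ranges = np.linalg.norm(scan_xyz, axis=1)
    hist, _ = np.histogram(ranges, bins=bins, range=(0.0, max_range), density=True)
    return hist

def recognise(query, database):
    """Nearest neighbour in descriptor space gives the matched place index."""
    dists = [np.linalg.norm(query - d) for d in database]
    return int(np.argmin(dists)), float(np.min(dists))

rng = np.random.default_rng(10)
map_scans = [rng.uniform(-40, 40, size=(2000, 3)) for _ in range(5)]   # prior map "places"
database = [range_histogram(s) for s in map_scans]

# a query scan that revisits place 3, with sensor noise
query_scan = map_scans[3] + rng.normal(0, 0.2, size=(2000, 3))
idx, dist = recognise(range_histogram(query_scan), database)
print("matched place:", idx, "descriptor distance:", round(dist, 4))
```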