July 28, 2019

Paper Group ANR 265


The Use of Autoencoders for Discovering Patient Phenotypes

Title The Use of Autoencoders for Discovering Patient Phenotypes
Authors Harini Suresh, Peter Szolovits, Marzyeh Ghassemi
Abstract We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes that we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed-length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on approximately 35,500 patients in the latest MIMIC-III dataset from Beth Israel Deaconess Hospital.
Tasks
Published 2017-03-20
URL http://arxiv.org/abs/1703.07004v1
PDF http://arxiv.org/pdf/1703.07004v1.pdf
PWC https://paperswithcode.com/paper/the-use-of-autoencoders-for-discovering
Repo
Framework
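A minimal sketch of the first variant described above: an autoencoder over concatenated timesteps that yields a low-dimensional patient embedding. Layer sizes, the embedding dimension, and all names are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a fixed-length autoencoder over concatenated timesteps.
# Layer sizes and the embedding dimension are illustrative assumptions.
import torch
import torch.nn as nn

class TimestepAutoencoder(nn.Module):
    def __init__(self, n_timesteps=24, n_features=10, embed_dim=32):
        super().__init__()
        flat = n_timesteps * n_features          # concatenate timesteps into one vector
        self.encoder = nn.Sequential(nn.Linear(flat, 128), nn.ReLU(),
                                     nn.Linear(128, embed_dim))
        self.decoder = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU(),
                                     nn.Linear(128, flat))

    def forward(self, x):                        # x: (batch, n_timesteps, n_features)
        z = self.encoder(x.flatten(start_dim=1)) # low-dimensional phenotype embedding
        recon = self.decoder(z).view_as(x)
        return recon, z

model = TimestepAutoencoder()
x = torch.randn(8, 24, 10)                       # toy batch of patient time series
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)          # reconstruction objective
```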

On the Generalized Essential Matrix Correction: An efficient solution to the problem and its applications

Title On the Generalized Essential Matrix Correction: An efficient solution to the problem and its applications
Authors Pedro Miraldo, Joao R. Cardoso
Abstract This paper addresses the problem of finding the closest generalized essential matrix to a given $6\times 6$ matrix, with respect to the Frobenius norm. To the best of our knowledge, this nonlinear constrained optimization problem has not yet been addressed in the literature. Although it can be solved directly, it involves a large number of constraints, and any optimization method applied to it would require considerable computational effort. We start by deriving a couple of unconstrained formulations of the problem. After that, we convert the original problem into a new one involving only orthogonal constraints and propose an efficient steepest-descent-type algorithm to find its solution. To test the algorithms, we evaluate the methods on synthetic data and conclude that the proposed steepest-descent-type approach is much faster than the direct application of general optimization techniques to the original formulation with 33 constraints and to the unconstrained ones. To further motivate the relevance of our method, we apply it to two pose problems (relative and absolute) using synthetic and real data.
Tasks
Published 2017-09-19
URL https://arxiv.org/abs/1709.06328v3
PDF https://arxiv.org/pdf/1709.06328v3.pdf
PWC https://paperswithcode.com/paper/fitting-generalized-essential-matrices-from
Repo
Framework
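As a simpler analogue of the correction problem above, the sketch below shows the standard Frobenius-norm projection of a 3x3 matrix onto the classical essential-matrix manifold (two equal singular values, one zero). This is not the paper's 6x6 generalized method; it only illustrates what "closest matrix under the Frobenius norm, subject to manifold constraints" means in the familiar 3x3 case.

```python
# Hedged illustration: SVD-based projection of a 3x3 matrix onto the classical
# essential-matrix manifold, as an analogue of the 6x6 generalized problem.
import numpy as np

def project_to_essential(F):
    U, s, Vt = np.linalg.svd(F)
    sigma = (s[0] + s[1]) / 2.0          # average the two largest singular values
    return U @ np.diag([sigma, sigma, 0.0]) @ Vt

A = np.random.randn(3, 3)
E = project_to_essential(A)
print(np.linalg.svd(E, compute_uv=False))  # ~ [sigma, sigma, 0]
```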

D-SLATS: Distributed Simultaneous Localization and Time Synchronization

Title D-SLATS: Distributed Simultaneous Localization and Time Synchronization
Authors Amr Alanwar, Henrique Ferraz, Kevin Hsieh, Rohit Thazhath, Paul Martin, Joao Hespanha, Mani Srivastava
Abstract Through the last decade, we have witnessed a surge of Internet of Things (IoT) devices, and with that a greater need to choreograph their actions across both time and space. Although these two problems, namely time synchronization and localization, share many aspects in common, they are traditionally treated separately or combined in centralized approaches that result in an inefficient use of resources, or in solutions that do not scale with the number of IoT devices. Therefore, we propose D-SLATS, a framework comprised of three different and independent algorithms to jointly solve the time synchronization and localization problems in a distributed fashion. The first two algorithms are based mainly on the distributed Extended Kalman Filter (EKF), whereas the third one uses optimization techniques. No fusion center is required, and the devices communicate only with their neighbors. The proposed methods are evaluated on a custom Ultra-Wideband communication testbed and a quadrotor, representing a network of both static and mobile nodes. Our algorithms achieve up to three microseconds of time synchronization accuracy and 30 cm of localization error.
Tasks
Published 2017-11-10
URL http://arxiv.org/abs/1711.03906v1
PDF http://arxiv.org/pdf/1711.03906v1.pdf
PWC https://paperswithcode.com/paper/d-slats-distributed-simultaneous-localization
Repo
Framework
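A toy EKF-style sketch of joint position and clock estimation for a single node, in the spirit of the abstract above. The state layout, noise values, and the range-plus-offset measurement model are illustrative assumptions, not the paper's distributed formulation.

```python
# Toy EKF: state = [px, py, clock offset (s), clock skew]; one UWB range update
# to a neighbour at a known position, where the range absorbs c * offset.
import numpy as np

x = np.array([0.0, 0.0, 0.0, 0.0])
P = np.eye(4)
Q = np.diag([1e-3, 1e-3, 1e-9, 1e-12])   # process noise (assumed values)
R = 1e-2                                  # range measurement noise
c = 3e8

def ekf_range_update(x, P, anchor, measured_range):
    dx, dy = x[0] - anchor[0], x[1] - anchor[1]
    d = np.hypot(dx, dy)
    h = d + c * x[2]                      # predicted measurement
    H = np.array([dx / d, dy / d, c, 0.0])
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (measured_range - h)
    P = (np.eye(4) - np.outer(K, H)) @ P
    return x, P

# prediction step: clock offset drifts with skew over dt seconds
dt = 0.1
F = np.eye(4); F[2, 3] = dt
x, P = F @ x, F @ P @ F.T + Q
x, P = ekf_range_update(x, P, anchor=np.array([5.0, 0.0]), measured_range=5.2)
print(x)
```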

Modeling The Intensity Function Of Point Process Via Recurrent Neural Networks

Title Modeling The Intensity Function Of Point Process Via Recurrent Neural Networks
Authors Shuai Xiao, Junchi Yan, Stephen M. Chu, Xiaokang Yang, Hongyuan Zha
Abstract Event sequences, asynchronously generated with random timestamps, are ubiquitous across applications. The precise and arbitrary timestamps can carry important clues about the underlying dynamics, and they make event data fundamentally different from time series, where the series is indexed at fixed and equal time intervals. One expressive mathematical tool for modeling events is the point process. The intensity functions of many point processes involve two components: the background and the effect of the history. Due to its inherent spontaneity, the background can be treated as a time series, while the other component needs to handle the history of events. In this paper, we model the background with a Recurrent Neural Network (RNN) whose units are aligned with the time-series indexes, while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture long-range dynamics. The whole model, with event type and timestamp prediction output layers, can be trained end-to-end. Our approach takes an RNN perspective on point processes, modeling both the background and the history effect. For utility, our method allows a black-box treatment of the intensity, which is often a pre-defined parametric form in point processes. Meanwhile, end-to-end training opens the venue for reusing existing rich techniques in deep networks for point process modeling. We apply our model to the predictive maintenance problem using a log dataset from more than 1000 ATMs of a global bank headquartered in North America.
Tasks Point Processes, Time Series
Published 2017-05-24
URL http://arxiv.org/abs/1705.08982v1
PDF http://arxiv.org/pdf/1705.08982v1.pdf
PWC https://paperswithcode.com/paper/modeling-the-intensity-function-of-point
Repo
Framework
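A minimal PyTorch sketch of the two-RNN idea described above: one RNN over evenly sampled background covariates, another over asynchronous event embeddings, with joint heads for the next event type and the next inter-event time. Dimensions and the way the two hidden states are fused are illustrative assumptions.

```python
# Two-stream sketch: background-series RNN + asynchronous-event RNN, fused for
# next-event-type and next-time-gap prediction. Sizes are assumed, not the paper's.
import torch
import torch.nn as nn

class TwoStreamIntensity(nn.Module):
    def __init__(self, n_types=5, ts_dim=3, hidden=32):
        super().__init__()
        self.bg_rnn = nn.GRU(ts_dim, hidden, batch_first=True)       # background / time series
        self.ev_rnn = nn.GRU(n_types + 1, hidden, batch_first=True)  # event type + time gap
        self.type_head = nn.Linear(2 * hidden, n_types)
        self.time_head = nn.Linear(2 * hidden, 1)

    def forward(self, ts_seq, ev_seq):
        _, h_bg = self.bg_rnn(ts_seq)
        _, h_ev = self.ev_rnn(ev_seq)
        h = torch.cat([h_bg[-1], h_ev[-1]], dim=-1)
        return self.type_head(h), torch.relu(self.time_head(h))      # next-type logits, next gap

model = TwoStreamIntensity()
ts = torch.randn(4, 20, 3)          # 4 sequences, 20 regular steps, 3 covariates
ev = torch.randn(4, 15, 6)          # 15 past events: one-hot type (5) + time gap (1)
type_logits, next_gap = model(ts, ev)
```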

Reduction of Overfitting in Diabetes Prediction Using Deep Learning Neural Network

Title Reduction of Overfitting in Diabetes Prediction Using Deep Learning Neural Network
Authors Akm Ashiquzzaman, Abdul Kawsar Tushar, Md. Rashedul Islam, Jong-Myon Kim
Abstract Augmented accuracy in the prediction of diabetes will open up new frontiers in health prognostics. Data overfitting is a performance-degrading issue in diabetes prognosis. In this study, a prediction system for the disease of diabetes is presented in which the issue of overfitting is minimized by using the dropout method. A deep learning neural network is used in which all fully connected layers are followed by dropout layers. The output performance of the proposed neural network is shown to outperform other state-of-the-art methods, and it is recorded as by far the best performance for the Pima Indians Diabetes Data Set.
Tasks Diabetes Prediction
Published 2017-07-26
URL http://arxiv.org/abs/1707.08386v1
PDF http://arxiv.org/pdf/1707.08386v1.pdf
PWC https://paperswithcode.com/paper/reduction-of-overfitting-in-diabetes
Repo
Framework
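A minimal sketch of the pattern the abstract describes: a fully connected classifier where every hidden layer is followed by dropout. Layer widths and the dropout rate are illustrative assumptions, not the paper's reported configuration.

```python
# Fully connected network with dropout after each hidden layer (sketch).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(0.5),   # 8 Pima features in
    nn.Linear(64, 32), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(32, 1), nn.Sigmoid(),                 # probability of diabetes
)
x = torch.randn(16, 8)
y = torch.randint(0, 2, (16, 1)).float()
loss = nn.functional.binary_cross_entropy(model(x), y)
```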

Conformative Filtering for Implicit Feedback Data

Title Conformative Filtering for Implicit Feedback Data
Authors Farhan Khawar, Nevin L. Zhang
Abstract Implicit feedback is the simplest form of user feedback that can be used for item recommendation. It is easy to collect and is domain independent. However, there is a lack of negative examples. Previous work tackles this problem by assuming that users are not interested, or less interested, in the unconsumed items. Those assumptions are often severely violated, since non-consumption can be due to factors like unawareness or lack of resources. Therefore, non-consumption by a user does not always mean disinterest or irrelevance. In this paper, we propose a novel method called Conformative Filtering (CoF) to address the issue. The motivating observation is that if there is a large group of users who share the same taste and none of them have consumed an item before, then it is likely that the item is not of interest to the group. We perform multidimensional clustering on implicit feedback data using hierarchical latent tree analysis (HLTA) to identify user ‘taste’ groups, and make recommendations for a user based on her memberships in the groups and on the past behavior of the groups. Experiments on two real-world datasets from different domains show that CoF has superior performance compared to several common baselines.
Tasks
Published 2017-04-06
URL http://arxiv.org/abs/1704.01889v2
PDF http://arxiv.org/pdf/1704.01889v2.pdf
PWC https://paperswithcode.com/paper/conformative-filtering-for-implicit-feedback
Repo
Framework
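A hedged sketch of the group-based scoring step: given soft taste-group memberships and the groups' past consumption, score items for a user. The HLTA clustering itself is not shown; the memberships and counts below are hypothetical inputs.

```python
# Score items for a user from taste-group memberships and group consumption counts.
import numpy as np

memberships = np.array([0.7, 0.2, 0.1])          # user's weight in 3 taste groups
group_item_counts = np.array([                    # consumptions: group x item
    [40,  0,  5],
    [ 2, 30,  1],
    [ 0,  1, 25],
], dtype=float)

group_prefs = group_item_counts / group_item_counts.sum(axis=1, keepdims=True)
scores = memberships @ group_prefs                # user-level item scores
ranking = np.argsort(-scores)                     # recommend highest-scoring unconsumed items
print(scores, ranking)
```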

Two-Archive Evolutionary Algorithm for Constrained Multi-Objective Optimization

Title Two-Archive Evolutionary Algorithm for Constrained Multi-Objective Optimization
Authors Ke Li, Renzhi Chen, Guangtao Fu, Xin Yao
Abstract When solving constrained multi-objective optimization problems, an important issue is how to balance convergence, diversity and feasibility simultaneously. To address this issue, this paper proposes a parameter-free constraint handling technique, a two-archive evolutionary algorithm, for constrained multi-objective optimization. It maintains two co-evolving populations simultaneously: one, denoted as the convergence archive, is the driving force that pushes the population toward the Pareto front; the other, denoted as the diversity archive, mainly tends to maintain the population diversity. In particular, to complement the behavior of the convergence archive and provide as much diversified information as possible, the diversity archive aims at exploring areas under-exploited by the convergence archive, including the infeasible regions. To leverage the complementary effects of both archives, we develop a restricted mating selection mechanism that adaptively chooses appropriate mating parents from them according to their evolution status. Comprehensive experiments on a series of benchmark problems and a real-world case study fully demonstrate the competitiveness of our proposed algorithm compared with five state-of-the-art constrained evolutionary multi-objective optimizers.
Tasks
Published 2017-11-21
URL http://arxiv.org/abs/1711.07907v1
PDF http://arxiv.org/pdf/1711.07907v1.pdf
PWC https://paperswithcode.com/paper/two-archive-evolutionary-algorithm-for
Repo
Framework
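A highly simplified skeleton of the two-archive idea on a toy constrained bi-objective problem: a convergence archive of feasible non-dominated solutions, a diversity archive that also keeps infeasible points, and mating that draws one parent from each. The update rules, truncation, and mating scheme below are stripped-down assumptions, not the algorithm's actual operators.

```python
# Toy two-archive skeleton (not the paper's exact update/mating rules).
import numpy as np

rng = np.random.default_rng(0)

def evaluate(x):                       # toy objectives and one constraint
    f = np.array([x[0], (1 - x[0]) + x[1] ** 2])
    violation = max(0.0, 0.3 - x[1])   # feasible iff x[1] >= 0.3
    return f, violation

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

conv_archive, div_archive = [], []     # (x, f, violation) tuples
pop = [rng.random(2) for _ in range(20)]

for _ in range(50):
    for x in pop:
        f, v = evaluate(x)
        if v == 0 and not any(dominates(cf, f) for _, cf, _ in conv_archive):
            conv_archive = [(cx, cf, cv) for cx, cf, cv in conv_archive
                            if not dominates(f, cf)] + [(x, f, v)]
        div_archive.append((x, f, v))  # diversity archive also keeps infeasible points
    div_archive = div_archive[-40:]    # crude diversity truncation
    pop = []                           # restricted mating: one parent from each archive
    for _ in range(20):
        p1 = conv_archive[rng.integers(len(conv_archive))][0] if conv_archive \
             else div_archive[rng.integers(len(div_archive))][0]
        p2 = div_archive[rng.integers(len(div_archive))][0]
        pop.append(np.clip((p1 + p2) / 2 + rng.normal(0, 0.05, 2), 0, 1))

print(len(conv_archive), "non-dominated feasible solutions")
```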

Multi-level Residual Networks from Dynamical Systems View

Title Multi-level Residual Networks from Dynamical Systems View
Authors Bo Chang, Lili Meng, Eldad Haber, Frederick Tung, David Begert
Abstract Deep residual networks (ResNets) and their variants are widely used in many computer vision applications and natural language processing tasks. However, the theoretical principles for designing and training ResNets are still not fully understood. Recently, several points of view have emerged to try to interpret ResNet theoretically, such as unraveled view, unrolled iterative estimation and dynamical systems view. In this paper, we adopt the dynamical systems point of view, and analyze the lesioning properties of ResNet both theoretically and experimentally. Based on these analyses, we additionally propose a novel method for accelerating ResNet training. We apply the proposed method to train ResNets and Wide ResNets for three image classification benchmarks, reducing training time by more than 40% with superior or on-par accuracy.
Tasks Image Classification
Published 2017-10-27
URL http://arxiv.org/abs/1710.10348v2
PDF http://arxiv.org/pdf/1710.10348v2.pdf
PWC https://paperswithcode.com/paper/multi-level-residual-networks-from-dynamical
Repo
Framework
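The dynamical-systems view adopted above reads a residual block as a forward-Euler step of an ODE, x_{t+1} = x_t + h f(x_t). The sketch below shows only this reading; the step size h and block definition are illustrative, and the paper's multi-level training scheme is not reproduced here.

```python
# A residual block written explicitly as one forward-Euler step.
import torch
import torch.nn as nn

class EulerResBlock(nn.Module):
    def __init__(self, channels=16, h=1.0):
        super().__init__()
        self.h = h
        self.f = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                               nn.ReLU(),
                               nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return x + self.h * self.f(x)   # x_{t+1} = x_t + h * f(x_t)

x = torch.randn(1, 16, 8, 8)
print(EulerResBlock(h=0.5)(x).shape)
```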

Thresholding Bandits with Augmented UCB

Title Thresholding Bandits with Augmented UCB
Authors Subhojyoti Mukherjee, K. P. Naveen, Nandan Sudarsanam, Balaraman Ravindran
Abstract In this paper we propose the Augmented-UCB (AugUCB) algorithm for a fixed-budget version of the thresholding bandit problem (TBP), where the objective is to identify a set of arms whose quality is above a threshold. A key feature of AugUCB is that it uses both mean and variance estimates to eliminate arms that have been sufficiently explored; to the best of our knowledge this is the first algorithm to employ such an approach for the considered TBP. Theoretically, we obtain an upper bound on the loss (probability of misclassification) incurred by AugUCB. Although UCBEV in the literature provides a better guarantee, it is important to emphasize that UCBEV has access to the problem complexity (whose computation requires the arms’ means and variances), and hence is not realistic in practice; this is in contrast to AugUCB, whose implementation does not require any such complexity inputs. We conduct extensive simulation experiments to validate the performance of AugUCB. Through our simulation work, we establish that AugUCB, owing to its utilization of variance estimates, performs significantly better than the state-of-the-art APT, CSAR and other non-variance-based algorithms.
Tasks
Published 2017-04-07
URL https://arxiv.org/abs/1704.02281v3
PDF https://arxiv.org/pdf/1704.02281v3.pdf
PWC https://paperswithcode.com/paper/thresholding-bandits-with-augmented-ucb
Repo
Framework
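A simplified sketch of a variance-aware elimination rule for the thresholding bandit: pull surviving arms round-robin, keep mean and variance estimates, and stop sampling an arm once its confidence interval falls entirely on one side of the threshold. The confidence radius below is a generic empirical-Bernstein-style bound, not AugUCB's exact schedule or constants.

```python
# Variance-aware thresholding-bandit sketch (generic bound, not AugUCB's exact one).
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.45, 0.55, 0.8])
tau, budget = 0.5, 2000
K = len(true_means)
counts, sums, sq_sums = np.zeros(K), np.zeros(K), np.zeros(K)
active = list(range(K))

for t in range(budget):
    i = active[t % len(active)]
    r = rng.normal(true_means[i], 0.3)
    counts[i] += 1; sums[i] += r; sq_sums[i] += r * r
    mu = sums[i] / counts[i]
    var = max(sq_sums[i] / counts[i] - mu ** 2, 0.0)
    radius = np.sqrt(2 * var * np.log(budget) / counts[i]) + 3 * np.log(budget) / counts[i]
    if mu - radius > tau or mu + radius < tau:    # confidently above or below threshold
        active.remove(i)
        if not active:
            break

above = [i for i in range(K) if counts[i] and sums[i] / counts[i] > tau]
print("arms estimated above threshold:", above)
```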

Photographic Image Synthesis with Cascaded Refinement Networks

Title Photographic Image Synthesis with Cascaded Refinement Networks
Authors Qifeng Chen, Vladlen Koltun
Abstract We present an approach to synthesizing photographic images conditioned on semantic layouts. Given a semantic label map, our approach produces an image with photographic appearance that conforms to the input layout. The approach thus functions as a rendering engine that takes a two-dimensional semantic specification of the scene and produces a corresponding photographic image. Unlike recent and contemporaneous work, our approach does not rely on adversarial training. We show that photographic images can be synthesized from semantic layouts by a single feedforward network with appropriate structure, trained end-to-end with a direct regression objective. The presented approach scales seamlessly to high resolutions; we demonstrate this by synthesizing photographic images at 2-megapixel resolution, the full resolution of our training data. Extensive perceptual experiments on datasets of outdoor and indoor scenes demonstrate that images synthesized by the presented approach are considerably more realistic than alternative approaches. The results are shown in the supplementary video at https://youtu.be/0fhUJT21-bs
Tasks Image Generation, Image-to-Image Translation
Published 2017-07-28
URL http://arxiv.org/abs/1707.09405v1
PDF http://arxiv.org/pdf/1707.09405v1.pdf
PWC https://paperswithcode.com/paper/photographic-image-synthesis-with-cascaded
Repo
Framework
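A minimal sketch of a cascaded refinement generator in the spirit of the abstract: each module receives the semantic layout downsampled to its working resolution plus the upsampled features of the previous module, and the resolution doubles module by module. Channel widths, normalization, and the training loss are simplified assumptions, not the paper's configuration.

```python
# Cascaded refinement sketch: layout in, image out, resolution doubling per module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefinementModule(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                                  nn.LeakyReLU(0.2),
                                  nn.Conv2d(out_ch, out_ch, 3, padding=1),
                                  nn.LeakyReLU(0.2))

    def forward(self, x):
        return self.conv(x)

class CRNSketch(nn.Module):
    def __init__(self, n_labels=20, base=4, n_modules=4, width=64):
        super().__init__()
        self.base = base
        mods = [RefinementModule(n_labels, width)]
        mods += [RefinementModule(width + n_labels, width) for _ in range(n_modules - 1)]
        self.refine = nn.ModuleList(mods)
        self.to_rgb = nn.Conv2d(width, 3, 1)

    def forward(self, layout):                        # layout: (B, n_labels, H, W) one-hot
        size = self.base
        feat = self.refine[0](F.interpolate(layout, size=(size, size)))
        for m in self.refine[1:]:
            size *= 2
            feat = F.interpolate(feat, scale_factor=2, mode="bilinear", align_corners=False)
            lay = F.interpolate(layout, size=(size, size))
            feat = m(torch.cat([feat, lay], dim=1))
        return torch.tanh(self.to_rgb(feat))

layout = torch.zeros(1, 20, 64, 64); layout[:, 0] = 1.0
print(CRNSketch()(layout).shape)                      # (1, 3, 32, 32) with base=4, 4 modules
```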

Improving a Strong Neural Parser with Conjunction-Specific Features

Title Improving a Strong Neural Parser with Conjunction-Specific Features
Authors Jessica Ficler, Yoav Goldberg
Abstract While dependency parsers reach very high overall accuracy, some dependency relations are much harder than others. In particular, dependency parsers perform poorly on coordination constructions (i.e., correctly attaching the “conj” relation). We extend a state-of-the-art dependency parser with conjunction-specific features, focusing on the similarity between the conjuncts’ head words. Training the extended parser yields an improvement in “conj” attachment as well as in overall dependency parsing accuracy on the Stanford dependency conversion of the Penn TreeBank.
Tasks Dependency Parsing
Published 2017-02-22
URL http://arxiv.org/abs/1702.06733v1
PDF http://arxiv.org/pdf/1702.06733v1.pdf
PWC https://paperswithcode.com/paper/improving-a-strong-neural-parser-with
Repo
Framework
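A toy illustration of the kind of conjunction-specific feature described above: a similarity score between the two conjuncts' head-word embeddings, which could be fed into a parser's scoring function for candidate "conj" arcs. The embeddings are made up and the parser integration is not shown; this is only the feature computation.

```python
# Cosine similarity between conjunct head-word embeddings (toy vectors).
import numpy as np

embeddings = {
    "apples":  np.array([0.9, 0.1, 0.0]),
    "oranges": np.array([0.8, 0.2, 0.1]),
    "quickly": np.array([0.0, 0.1, 0.9]),
}

def conjunct_similarity(head1, head2):
    a, b = embeddings[head1], embeddings[head2]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(conjunct_similarity("apples", "oranges"))   # high: plausible conjuncts
print(conjunct_similarity("apples", "quickly"))   # low: unlikely "conj" attachment
```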

Long-Term Visual Object Tracking Benchmark

Title Long-Term Visual Object Tracking Benchmark
Authors Abhinav Moudgil, Vineet Gandhi
Abstract We propose a new long video dataset (called Track Long and Prosper - TLP) and benchmark for single object tracking. The dataset consists of 50 HD videos from real-world scenarios, encompassing a duration of over 400 minutes (676K frames), making it more than 20 times larger in average duration per sequence and more than 8 times larger in total covered duration than existing generic datasets for visual tracking. The proposed dataset paves the way to suitably assess long-term tracking performance and to train better deep learning architectures (avoiding/reducing augmentation, which may not reflect real-world behaviour). We benchmark the dataset on 17 state-of-the-art trackers and rank them according to tracking accuracy and runtime speed. We further present a thorough qualitative and quantitative evaluation highlighting the importance of the long-term aspect of tracking. Our most interesting observations are (a) existing short-sequence benchmarks fail to bring out the inherent differences between tracking algorithms, which widen when tracking on long sequences, and (b) the accuracy of trackers drops abruptly on challenging long sequences, suggesting the potential need for research efforts in the direction of long-term tracking.
Tasks Object Tracking, Visual Object Tracking, Visual Tracking
Published 2017-12-04
URL http://arxiv.org/abs/1712.01358v4
PDF http://arxiv.org/pdf/1712.01358v4.pdf
PWC https://paperswithcode.com/paper/long-term-visual-object-tracking-benchmark
Repo
Framework

Comparative Opinion Mining: A Review

Title Comparative Opinion Mining: A Review
Authors Kasturi Dewi Varathan, Anastasia Giachanou, Fabio Crestani
Abstract Opinion mining refers to the use of natural language processing, text analysis and computational linguistics to identify and extract subjective information in textual material. Opinion mining, also known as sentiment analysis, has received a lot of attention in recent times, as it provides a number of tools to analyse public opinion on a number of different topics. Comparative opinion mining is a subfield of opinion mining that deals with identifying and extracting information that is expressed in a comparative form (e.g., “paper X is better than paper Y”). Comparative opinion mining plays a very important role when one tries to evaluate something, as it provides a reference point for the comparison. This paper provides a review of the area of comparative opinion mining. It is the first review that specifically covers this topic, as previous reviews dealt mostly with general opinion mining. The survey covers comparative opinion mining from two different angles: one from the perspective of techniques and the other from the perspective of comparative opinion elements. It also covers preprocessing tools as well as datasets used by past researchers, which can be useful to future researchers in the field of comparative opinion mining.
Tasks Opinion Mining, Sentiment Analysis
Published 2017-12-24
URL http://arxiv.org/abs/1712.08941v1
PDF http://arxiv.org/pdf/1712.08941v1.pdf
PWC https://paperswithcode.com/paper/comparative-opinion-mining-a-review
Repo
Framework

Duality of Graphical Models and Tensor Networks

Title Duality of Graphical Models and Tensor Networks
Authors Elina Robeva, Anna Seigal
Abstract In this article we show the duality between tensor networks and undirected graphical models with discrete variables. We study tensor networks on hypergraphs, which we call tensor hypernetworks. We show that the tensor hypernetwork on a hypergraph exactly corresponds to the graphical model given by the dual hypergraph. We translate various notions under duality. For example, marginalization in a graphical model is dual to contraction in the tensor network. Algorithms also translate under duality. We show that belief propagation corresponds to a known algorithm for tensor network contraction. This article is a reminder that the research areas of graphical models and tensor networks can benefit from interaction.
Tasks Tensor Networks
Published 2017-10-04
URL http://arxiv.org/abs/1710.01437v1
PDF http://arxiv.org/pdf/1710.01437v1.pdf
PWC https://paperswithcode.com/paper/duality-of-graphical-models-and-tensor
Repo
Framework
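A small concrete instance of the duality described above: marginalizing a three-variable discrete model p(a, b, c) proportional to f(a, b) g(b, c) is exactly a tensor contraction over the shared index b, written here with einsum. The factor shapes are arbitrary toy values.

```python
# Marginalization in a graphical model == contraction in the dual tensor network.
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((2, 3))          # factor over (a, b)
g = rng.random((3, 4))          # factor over (b, c)

marginal_ac = np.einsum("ab,bc->ac", f, g)   # sum_b f(a,b) g(b,c)
Z = np.einsum("ab,bc->", f, g)               # full contraction = normalization constant
print(marginal_ac / Z)
```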

On Pairwise Clustering with Side Information

Title On Pairwise Clustering with Side Information
Authors Stephen Pasteris, Fabio Vitale, Claudio Gentile, Mark Herbster
Abstract Pairwise clustering, in general, partitions a set of items via a known similarity function. In our treatment, clustering is modeled as a transductive prediction problem. Thus, rather than beginning with a known similarity function, the function is hidden and the learner only receives a random sample consisting of a subset of the pairwise similarities. An additional set of pairwise side information may be given to the learner, which then determines the inductive bias of our algorithms. We measure performance not by the recovery of the hidden similarity function, but by how well we classify each item. We give tight bounds on the number of misclassifications. We provide two algorithms. The first, SACA, is a simple agglomerative clustering algorithm that runs in near-linear time and serves as a baseline for our analyses. The second, RGCA, enables the incorporation of side information, which may lead to improved bounds at the cost of a longer running time.
Tasks
Published 2017-06-19
URL http://arxiv.org/abs/1706.06474v1
PDF http://arxiv.org/pdf/1706.06474v1.pdf
PWC https://paperswithcode.com/paper/on-pairwise-clustering-with-side-information
Repo
Framework