October 20, 2019

3387 words 16 mins read

Paper Group ANR 102

Algorithmic Bio-surveillance For Precise Spatio-temporal Prediction of Zoonotic Emergence. Fast Semantic Segmentation on Video Using Block Motion-Based Feature Interpolation. Exploiting deep residual networks for human action recognition from skeletal data. ScaffoldNet: Detecting and Classifying Biomedical Polymer-Based Scaffolds via a Convolutiona …

Algorithmic Bio-surveillance For Precise Spatio-temporal Prediction of Zoonotic Emergence

Title Algorithmic Bio-surveillance For Precise Spatio-temporal Prediction of Zoonotic Emergence
Authors Jaideep Dhanoa, Balaji Manicassamy, Ishanu Chattopadhyay
Abstract Viral zoonoses have emerged as the key drivers of recent pandemics. Human infections by zoonotic viruses are either spillover events – isolated infections that fail to cause a widespread contagion – or species jumps, where successful adaptation to the new host leads to a pandemic. Despite expensive bio-surveillance efforts, emergence response has historically been reactive and post-hoc. Here we use machine inference to demonstrate a high-accuracy predictive bio-surveillance capability, designed to proactively localize an impending species jump via automated interrogation of massive sequence databases of viral proteins. Our results suggest that a jump might not purely be the result of an isolated unfortunate cross-infection localized in space and time; there are subtle yet detectable patterns of genotypic changes accumulating in the global viral population leading up to emergence. Using tens of thousands of protein sequences simultaneously, we train models that track the maximum achievable accuracy for disambiguating host tropism from the primary structure of surface proteins, and show that the inverse classification accuracy is a quantitative indicator of jump risk. We validate our claim in the context of the 2009 swine flu outbreak and the 2004 emergence of the H5N1 subspecies of Influenza A from avian reservoirs, illustrating that interrogation of the global viral population can unambiguously track a near-monotonic risk elevation over the several years preceding eventual emergence.
Tasks
Published 2018-01-23
URL http://arxiv.org/abs/1801.07807v1
PDF http://arxiv.org/pdf/1801.07807v1.pdf
PWC https://paperswithcode.com/paper/algorithmic-bio-surveillance-for-precise
Repo
Framework
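
A minimal sketch of the idea described in the abstract above: train a host-tropism classifier on each year's protein sequences and read a jump-risk signal off the inverse classification accuracy. The record format (year, host label, sequence), the k-mer features, and the logistic-regression classifier are illustrative assumptions, not the authors' actual inference pipeline.

```python
# Hypothetical input: records of (year, host_label, protein_sequence).
# k-mer bag-of-words features and logistic regression are stand-ins for the paper's models.
from collections import Counter

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def kmer_counts(seq, k=3):
    """Count overlapping amino-acid k-mers in a protein sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))


def jump_risk_by_year(records, k=3):
    """For each year, fit a host-tropism classifier on that year's sequences and
    report 1 - cross-validated accuracy as a crude jump-risk proxy: the harder it
    is to tell hosts apart from sequence alone, the closer the viral population
    may be to a species jump."""
    risks = {}
    for year in sorted({y for y, _, _ in records}):
        hosts = [h for y, h, _ in records if y == year]
        feats = DictVectorizer().fit_transform(
            kmer_counts(s, k) for y, _, s in records if y == year)
        acc = cross_val_score(LogisticRegression(max_iter=1000), feats, hosts, cv=5).mean()
        risks[year] = 1.0 - acc
    return risks
```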

Fast Semantic Segmentation on Video Using Block Motion-Based Feature Interpolation

Title Fast Semantic Segmentation on Video Using Block Motion-Based Feature Interpolation
Authors Samvit Jain, Joseph E. Gonzalez
Abstract Convolutional networks optimized for accuracy on challenging, dense prediction tasks are prohibitively slow to run on each frame in a video. The spatial similarity of nearby video frames, however, suggests opportunity to reuse computation. Existing work has explored basic feature reuse and feature warping based on optical flow, but has encountered limits to the speedup attainable with these techniques. In this paper, we present a new, two-part approach to accelerating inference on video. First, we propose a fast feature propagation technique that utilizes the block motion vectors present in compressed video (e.g., H.264) to cheaply propagate features from frame to frame. Second, we develop a novel feature estimation scheme, termed feature interpolation, that fuses features propagated from enclosing keyframes to render accurate feature estimates, even at sparse keyframe frequencies. We evaluate our system on the Cityscapes and CamVid datasets, comparing to both a frame-by-frame baseline and related work. We find that we are able to substantially accelerate segmentation on video, achieving near real-time frame rates (20.1 frames per second) on large images (960 x 720 pixels), while maintaining competitive accuracy. This represents an improvement of almost 6x over the single-frame baseline and 2.5x over the fastest prior work.
Tasks Optical Flow Estimation, Semantic Segmentation
Published 2018-03-21
URL http://arxiv.org/abs/1803.07742v5
PDF http://arxiv.org/pdf/1803.07742v5.pdf
PWC https://paperswithcode.com/paper/fast-semantic-segmentation-on-video-using
Repo
Framework
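
The two ideas in the abstract above lend themselves to a short sketch: (1) propagating a keyframe's feature map using the block motion vectors already present in the compressed bitstream, and (2) interpolating features for an intermediate frame from its two enclosing keyframes. The shapes, block size, motion-vector convention, and linear fusion weights below are assumptions, not the paper's exact scheme.

```python
import numpy as np


def propagate(features, motion_vectors, block=16):
    """Shift each block of an (H, W, C) feature map by its (dy, dx) motion vector.
    motion_vectors has shape (H // block, W // block, 2), in feature-map pixels."""
    h, w, _ = features.shape
    out = np.zeros_like(features)
    for by in range(h // block):
        for bx in range(w // block):
            dy, dx = motion_vectors[by, bx]
            y0, x0 = by * block, bx * block
            sy = int(np.clip(y0 - dy, 0, h - block))  # source block in the keyframe
            sx = int(np.clip(x0 - dx, 0, w - block))
            out[y0:y0 + block, x0:x0 + block] = features[sy:sy + block, sx:sx + block]
    return out


def interpolate(feat_from_prev_key, feat_from_next_key, t, keyframe_gap):
    """Fuse features propagated from the previous and next keyframes, weighted by
    the frame's temporal distance to each keyframe (a simple linear blend)."""
    alpha = t / float(keyframe_gap)
    return (1.0 - alpha) * feat_from_prev_key + alpha * feat_from_next_key
```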

Exploiting deep residual networks for human action recognition from skeletal data

Title Exploiting deep residual networks for human action recognition from skeletal data
Authors Huy-Hieu Pham, Louahdi Khoudour, Alain Crouzil, Pablo Zegers, Sergio A. Velastin
Abstract The computer vision community is currently focusing on solving action recognition problems in real videos, which contain thousands of samples with many challenges. In this process, Deep Convolutional Neural Networks (D-CNNs) have played a significant role in advancing the state-of-the-art in various vision-based action recognition systems. Recently, the introduction of residual connections in conjunction with a more traditional CNN model in a single architecture called Residual Network (ResNet) has shown impressive performance and great potential for image recognition tasks. In this paper, we investigate and apply deep ResNets for human action recognition using skeletal data provided by depth sensors. Firstly, the 3D coordinates of the human body joints carried in skeleton sequences are transformed into image-based representations and stored as RGB images. These color images are able to capture the spatial-temporal evolution of 3D motions from skeleton sequences and can be efficiently learned by D-CNNs. We then propose a novel deep learning architecture based on ResNets to learn features from the obtained color-based representations and classify them into action classes. The proposed method is evaluated on three challenging benchmark datasets: MSR Action 3D, KARD, and NTU-RGB+D. Experimental results demonstrate that our method achieves state-of-the-art performance on all these benchmarks while requiring fewer computational resources. In particular, the proposed method surpasses previous approaches by a significant margin of 3.4% on the MSR Action 3D dataset, 0.67% on the KARD dataset, and 2.5% on the NTU-RGB+D dataset.
Tasks Temporal Action Localization
Published 2018-03-21
URL http://arxiv.org/abs/1803.07781v1
PDF http://arxiv.org/pdf/1803.07781v1.pdf
PWC https://paperswithcode.com/paper/exploiting-deep-residual-networks-for-human
Repo
Framework
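
As a rough illustration of the skeleton-to-image encoding mentioned above, the sketch below maps each frame's 3D joint coordinates to one column of an RGB image (joints along the rows, frames along the columns, x/y/z mapped to R/G/B). The normalization and layout are assumptions; the paper's exact transformation and ResNet classifier are not reproduced here.

```python
import numpy as np


def skeleton_to_image(sequence):
    """sequence: array of shape (num_frames, num_joints, 3) of 3D joint coordinates.
    Returns a uint8 RGB image of shape (num_joints, num_frames, 3)."""
    seq = np.asarray(sequence, dtype=np.float64)
    lo = seq.min(axis=(0, 1))                      # per-coordinate minimum over the sequence
    hi = seq.max(axis=(0, 1))
    norm = (seq - lo) / np.maximum(hi - lo, 1e-8)  # scale each of x, y, z into [0, 1]
    img = (255.0 * norm).astype(np.uint8)          # (frames, joints, 3)
    return img.transpose(1, 0, 2)                  # joints as rows, frames as columns
```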

ScaffoldNet: Detecting and Classifying Biomedical Polymer-Based Scaffolds via a Convolutional Neural Network

Title ScaffoldNet: Detecting and Classifying Biomedical Polymer-Based Scaffolds via a Convolutional Neural Network
Authors Darlington Ahiale Akogo, Xavier-Lewis Palmer
Abstract We developed a Convolutional Neural Network model to identify and classify Airbrushed (alternatively known as Blow-spun), Electrospun and Steel Wire scaffolds. Our model, ScaffoldNet, is a 6-layer Convolutional Neural Network trained and tested on 3,043 images of Airbrushed, Electrospun and Steel Wire scaffolds. The model takes as input an imaged scaffold and outputs the scaffold type (Airbrushed, Electrospun or Steel Wire) as predicted probabilities for the 3 classes. Our model achieved 99.44% accuracy, demonstrating potential for adaptation to investigating and solving complex machine learning problems aimed at abstract spatial contexts, or for screening the complex, biological, fibrous structures seen in cortical bone and fibrous shells.
Tasks
Published 2018-05-17
URL http://arxiv.org/abs/1805.08702v1
PDF http://arxiv.org/pdf/1805.08702v1.pdf
PWC https://paperswithcode.com/paper/scaffoldnet-detecting-and-classifying
Repo
Framework
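
The abstract above describes a 6-layer CNN over scaffold images with a 3-way softmax output. The Keras sketch below is only an illustrative guess at such an architecture; the input resolution, filter counts, and layer arrangement are not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_scaffoldnet(input_shape=(128, 128, 3), num_classes=3):
    """Toy 6-weight-layer CNN: four conv layers plus two dense layers."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # Airbrushed / Electrospun / Steel Wire
    ])
```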

Fire SSD: Wide Fire Modules based Single Shot Detector on Edge Device

Title Fire SSD: Wide Fire Modules based Single Shot Detector on Edge Device
Authors Hengfui Liau, Nimmagadda Yamini, YengLiong Wong
Abstract With the emergence of edge computing, there is an increasing need for running convolutional neural network based object detection on small form factor edge computing devices with limited compute and thermal budget, for applications such as video surveillance. To address this problem, efficient object detection frameworks such as YOLO and SSD were proposed. However, SSD-based object detection that uses VGG16 as the backend network is insufficient to achieve real-time speed on edge devices. To further improve the detection speed, the backend network is replaced by more efficient networks such as SqueezeNet and MobileNet. Although the speed is greatly improved, it comes at the price of lower accuracy. In this paper, we propose an efficient SSD named Fire SSD. Fire SSD achieves 70.7 mAP on the Pascal VOC 2007 test set and runs at 30.6 FPS on a low-power mainstream CPU, about 6 times faster than SSD300 with a roughly 4 times smaller model size. Fire SSD also achieves 22.2 FPS on an integrated GPU.
Tasks Object Detection
Published 2018-06-14
URL http://arxiv.org/abs/1806.05363v5
PDF http://arxiv.org/pdf/1806.05363v5.pdf
PWC https://paperswithcode.com/paper/fire-ssd-wide-fire-modules-based-single-shot
Repo
Framework
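
For reference, the sketch below shows a standard SqueezeNet-style fire module (squeeze with 1x1 convolutions, expand with parallel 1x1 and 3x3 convolutions), the building block that Fire SSD widens according to the abstract above. The paper's exact wide fire module layout is not reproduced; channel counts here are placeholders.

```python
from tensorflow.keras import layers


def fire_module(x, squeeze_channels, expand_channels):
    """Standard fire module: squeeze with 1x1 convs, expand with parallel 1x1 and 3x3 convs."""
    s = layers.Conv2D(squeeze_channels, 1, activation="relu")(x)
    e1 = layers.Conv2D(expand_channels, 1, padding="same", activation="relu")(s)
    e3 = layers.Conv2D(expand_channels, 3, padding="same", activation="relu")(s)
    return layers.Concatenate()([e1, e3])


# Example usage on an arbitrary-size feature map with 64 channels.
inputs = layers.Input(shape=(None, None, 64))
features = fire_module(inputs, squeeze_channels=16, expand_channels=64)
```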

An easy-to-use empirical likelihood ABC method

Title An easy-to-use empirical likelihood ABC method
Authors Sanjay Chaudhuri, Subhro Ghosh, David J. Nott, Kim Cuc Pham
Abstract Many scientifically well-motivated statistical models in natural, engineering and environmental sciences are specified through a generative process, but in some cases it may not be possible to write down a likelihood for these models analytically. Approximate Bayesian computation (ABC) methods, which allow Bayesian inference in these situations, are typically computationally intensive. Recently, computationally attractive empirical likelihood based ABC methods have been suggested in the literature. These methods heavily rely on the availability of a set of suitable analytically tractable estimating equations. We propose an easy-to-use empirical likelihood ABC method, where the only inputs required are a choice of summary statistic, its observed value, and the ability to simulate summary statistics for any parameter value under the model. It is shown that the posterior obtained using the proposed method is consistent, and its performance is explored using various examples.
Tasks Bayesian Inference
Published 2018-10-03
URL http://arxiv.org/abs/1810.01675v2
PDF http://arxiv.org/pdf/1810.01675v2.pdf
PWC https://paperswithcode.com/paper/an-easy-to-use-empirical-likelihood-abc
Repo
Framework
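
The abstract above emphasizes that the method needs only a summary statistic, its observed value, and a simulator of summaries under the model. The sketch below wires those same three inputs into a plain ABC rejection sampler purely for illustration; the paper's empirical-likelihood weighting is not reproduced here.

```python
import numpy as np


def abc_rejection(simulate_summary, observed_summary, sample_prior, n_draws=10000, tol=0.1):
    """Draw parameters from the prior, simulate a summary for each draw, and keep the
    draws whose simulated summary lands within `tol` of the observed summary."""
    accepted = []
    obs = np.atleast_1d(observed_summary)
    for _ in range(n_draws):
        theta = sample_prior()
        sim = np.atleast_1d(simulate_summary(theta))
        if np.linalg.norm(sim - obs) <= tol:
            accepted.append(theta)
    return np.array(accepted)
```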

MetaAnchor: Learning to Detect Objects with Customized Anchors

Title MetaAnchor: Learning to Detect Objects with Customized Anchors
Authors Tong Yang, Xiangyu Zhang, Zeming Li, Wenqiang Zhang, Jian Sun
Abstract We propose a novel and flexible anchor mechanism named MetaAnchor for object detection frameworks. Unlike many previous detectors, which model anchors in a predefined manner, in MetaAnchor anchor functions can be dynamically generated from arbitrary customized prior boxes. Taking advantage of weight prediction, MetaAnchor is able to work with most anchor-based object detection systems such as RetinaNet. Compared with the predefined anchor scheme, we empirically find that MetaAnchor is more robust to anchor settings and bounding box distributions; in addition, it also shows potential on transfer tasks. Our experiments on the COCO detection task show that MetaAnchor consistently outperforms its counterparts in various scenarios.
Tasks Object Detection
Published 2018-07-03
URL http://arxiv.org/abs/1807.00980v2
PDF http://arxiv.org/pdf/1807.00980v2.pdf
PWC https://paperswithcode.com/paper/metaanchor-learning-to-detect-objects-with
Repo
Framework
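
The core mechanism described above, generating an anchor function from a customized prior box via weight prediction, can be pictured as a small generator network that maps a (width, height) prior box to the parameters of a per-anchor prediction head. The layer sizes and the way the generated weights are applied below are assumptions, not the paper's exact formulation.

```python
import tensorflow as tf


class AnchorFunctionGenerator(tf.keras.layers.Layer):
    """Maps a customized prior box to the weights of a per-anchor linear prediction head."""

    def __init__(self, feature_dim, out_dim):
        super().__init__()
        self.feature_dim, self.out_dim = feature_dim, out_dim
        self.hidden = tf.keras.layers.Dense(128, activation="relu")
        self.weight_head = tf.keras.layers.Dense(feature_dim * out_dim)
        self.bias_head = tf.keras.layers.Dense(out_dim)

    def call(self, anchor_box, features):
        """anchor_box: (batch, 2) normalized (w, h); features: (batch, feature_dim)."""
        h = self.hidden(anchor_box)
        w = tf.reshape(self.weight_head(h), (-1, self.feature_dim, self.out_dim))
        b = self.bias_head(h)
        # Apply the generated per-anchor head to the shared image features.
        return tf.einsum("bf,bfo->bo", features, w) + b
```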

Topological Data Analysis Made Easy with the Topology ToolKit

Title Topological Data Analysis Made Easy with the Topology ToolKit
Authors Guillaume Favelier, Charles Gueunet, Attila Gyulassy, Julien Kitware, Joshua Levine, Jonas Lukasczyk, Daisuke Sakurai, Maxime Soler, Julien Tierny, Will Usher, Qi Wu
Abstract This tutorial presents topological methods for the analysis and visualization of scientific data from a user’s perspective, with the Topology ToolKit (TTK), a recently released open-source library for topological data analysis. Topological methods have gained considerable popularity and maturity over the last twenty years, and success stories of established methods have been documented in a wide range of applications (combustion, chemistry, astrophysics, material sciences, etc.) with both acquired and simulated data, in both post-hoc and in-situ contexts. While reference textbooks have been published on the topic, no tutorial at IEEE VIS has covered this area in recent years, and never at a software level and from a user’s point of view. This tutorial fills this gap by providing a beginner’s introduction to topological methods for practitioners, researchers, students, and lecturers. In particular, instead of focusing on theoretical aspects and algorithmic details, this tutorial focuses on how topological methods can be useful in practice for concrete data analysis tasks such as segmentation, feature extraction or tracking. The tutorial describes in detail how to achieve these tasks with TTK. First, after an introduction to topological methods and their application in data analysis, a brief overview of TTK’s main entry point for end users, namely ParaView, will be presented. Second, an overview of TTK’s main features will be given. A running example will be described in detail, showcasing how to access TTK’s features via ParaView, Python, VTK/C++, and C++. Third, hands-on sessions will concretely show how to use TTK in ParaView for multiple, representative data analysis tasks. Fourth, the usage of TTK will be presented for developers, in particular by describing several examples of visualization and data analysis projects that were built on top of TTK. Finally, some feedback regarding the usage of TTK as a teaching platform for topological analysis will be given. Presenters of this tutorial include experts in topological methods, core authors of TTK as well as active users, coming from academia, labs, or industry. A large part of the tutorial will be dedicated to hands-on exercises, and a rich material package (including TTK pre-installs in virtual machines, code, data, demos, video tutorials, etc.) will be provided to the participants. This tutorial mostly targets students, practitioners and researchers who are not experts in topological methods but who are interested in using them in their daily tasks. We also target researchers who are already familiar with topological methods and who are interested in using or contributing to TTK.
Tasks Topological Data Analysis
Published 2018-06-21
URL http://arxiv.org/abs/1806.08126v1
PDF http://arxiv.org/pdf/1806.08126v1.pdf
PWC https://paperswithcode.com/paper/topological-data-analysis-made-easy-with-the
Repo
Framework

Modeling Coherence for Discourse Neural Machine Translation

Title Modeling Coherence for Discourse Neural Machine Translation
Authors Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang
Abstract Discourse coherence plays an important role in the translation of a text. However, most previously reported models focus on improving performance over individual sentences while ignoring cross-sentence links and dependencies, which affects the coherence of the text. In this paper, we propose to use discourse context and reward to refine the translation quality from the discourse perspective. In particular, we first generate the translation of individual sentences. Next, we deliberate over the preliminarily produced translations, and train the model to learn a policy that produces discourse-coherent text via a reward teacher. Results on multiple discourse test datasets indicate that our model significantly improves translation quality over the state-of-the-art baseline system by +1.23 BLEU. Moreover, our model generates more discourse-coherent text and obtains a +2.2 BLEU improvement when evaluated by discourse metrics.
Tasks Machine Translation
Published 2018-11-14
URL http://arxiv.org/abs/1811.05683v1
PDF http://arxiv.org/pdf/1811.05683v1.pdf
PWC https://paperswithcode.com/paper/modeling-coherence-for-discourse-neural
Repo
Framework
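
The two-pass procedure sketched in the abstract above (translate each sentence, deliberate over the drafts with document context, and score the result with a reward teacher) can be summarized as below. The translate, revise, and reward callables are placeholders for the paper's trained models, not real APIs.

```python
def deliberate_translate(sentences, translate, revise, coherence_reward):
    """Two-pass, reward-scored document translation (illustrative skeleton only)."""
    drafts = [translate(s) for s in sentences]          # pass 1: sentence-level drafts
    context = " ".join(drafts)                          # document-level context for deliberation
    finals = [revise(src, draft, context)               # pass 2: context-aware revision
              for src, draft in zip(sentences, drafts)]
    return finals, coherence_reward(finals)             # reward teacher scores the revised document
```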

Providing Explanations for Recommendations in Reciprocal Environments

Title Providing Explanations for Recommendations in Reciprocal Environments
Authors Akiva Kleinerman, Ariel Rosenfeld, Sarit Kraus
Abstract Automated platforms which support users in finding a mutually beneficial match, such as online dating and job recruitment sites, are becoming increasingly popular. These platforms often include recommender systems that assist users in finding a suitable match. While recommender systems which provide explanations for their recommendations have shown many benefits, explanation methods have yet to be adapted and tested in recommending suitable matches. In this paper, we introduce and extensively evaluate the use of “reciprocal explanations” – explanations which provide reasoning as to why both parties are expected to benefit from the match. Through an extensive empirical evaluation, in both simulated and real-world dating platforms with 287 human participants, we find that when the acceptance of a recommendation involves a significant cost (e.g., monetary or emotional), reciprocal explanations outperform standard explanation methods which consider the recommendation receiver alone. However, contrary to what one may expect, when the cost of accepting a recommendation is negligible, reciprocal explanations are shown to be less effective than the traditional explanation methods.
Tasks Recommendation Systems
Published 2018-07-03
URL http://arxiv.org/abs/1807.01227v1
PDF http://arxiv.org/pdf/1807.01227v1.pdf
PWC https://paperswithcode.com/paper/providing-explanations-for-recommendations-in
Repo
Framework
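
To make the notion of a reciprocal explanation concrete, the toy sketch below lists the top reasons the match benefits each party, instead of only the recommendation receiver. The feature-weight representation and the wording template are illustrative assumptions, not the system evaluated in the paper.

```python
def reciprocal_explanation(receiver_prefs, candidate_profile,
                           candidate_prefs, receiver_profile, top_k=2):
    """Each *_prefs maps feature -> importance; each *_profile maps feature -> value."""
    def top_reasons(prefs, profile):
        ranked = sorted(prefs, key=lambda f: prefs[f] * profile.get(f, 0.0), reverse=True)
        return ranked[:top_k]

    you_like = top_reasons(receiver_prefs, candidate_profile)    # why the receiver should like them
    they_like = top_reasons(candidate_prefs, receiver_profile)   # why they should like the receiver
    return (f"Recommended because you value {', '.join(you_like)}, "
            f"and they are likely to value your {', '.join(they_like)}.")
```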

Abnormal Event Detection and Location for Dense Crowds using Repulsive Forces and Sparse Reconstruction

Title Abnormal Event Detection and Location for Dense Crowds using Repulsive Forces and Sparse Reconstruction
Authors Pei Lv, Shunhua Liu, Mingliang Xu, Bing Zhou
Abstract This paper proposes a method based on repulsive forces and sparse reconstruction for the detection and location of abnormal events in crowded scenes. In order to avoid the challenging problem of accurately tracking each specific individual in a dense or complex scene, we divide each frame of the surveillance video into a fixed number of grids and select a single representative point in each grid as the individual to track. The repulsive force model, which can accurately reflect interactive behaviors of crowds, is used to calculate the interactive forces between grid particles in crowded scenes and to construct a force flow matrix using these discrete forces from a fixed number of continuous frames. The force flow matrix, which contains spatial and temporal information, is adopted to train a group of visual dictionaries by sparse coding. To further improve the detection efficiency and avoid concept drift, we propose a fully unsupervised global and local dynamic updating algorithm, based on sparse reconstruction and a group of word pools. For anomaly location, since our method is based on a fixed grid, we can judge whether anomalies occur in a region intuitively according to the reconstruction error of the corresponding visual words. We experimentally verify the proposed method using the UMN dataset, the UCSD dataset and the Web dataset separately. The results indicate that our method can not only detect abnormal events accurately, but can also pinpoint the location of anomalies.
Tasks
Published 2018-08-21
URL http://arxiv.org/abs/1808.06749v1
PDF http://arxiv.org/pdf/1808.06749v1.pdf
PWC https://paperswithcode.com/paper/abnormal-event-detection-and-location-for
Repo
Framework
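
A rough sketch of the force-flow construction described above: one representative particle per grid cell, a simple inverse-distance repulsive force between particles, and the forces from several consecutive frames stacked into a force flow matrix. The actual interaction force model and the sparse-coding dictionary stage of the paper are not reproduced here.

```python
import numpy as np


def repulsive_forces(positions, strength=1.0, eps=1e-6):
    """positions: (N, 2) grid-particle positions in one frame. Returns (N, 2) net forces."""
    diff = positions[:, None, :] - positions[None, :, :]   # pairwise displacement vectors
    dist2 = np.sum(diff ** 2, axis=-1) + eps
    np.fill_diagonal(dist2, np.inf)                         # no self-interaction
    forces = strength * diff / dist2[..., None]             # inverse-distance repulsion
    return forces.sum(axis=1)


def force_flow_matrix(frames_of_positions):
    """Stack per-frame force fields from consecutive frames into one matrix
    (one row per frame, columns are the flattened per-particle forces)."""
    return np.stack([repulsive_forces(p).ravel() for p in frames_of_positions])
```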

Redundant Perception and State Estimation for Reliable Autonomous Racing

Title Redundant Perception and State Estimation for Reliable Autonomous Racing
Authors Nikhil Bharadwaj Gosala, Andreas Bühler, Manish Prajapat, Claas Ehmke, Mehak Gupta, Ramya Sivanesan, Abel Gawel, Mark Pfeiffer, Mathias Bürki, Inkyu Sa, Renaud Dubé, Roland Siegwart
Abstract In autonomous racing, vehicles operate close to the limits of handling and a sensor failure can have critical consequences. To limit the impact of such failures, this paper presents the redundant perception and state estimation approaches developed for an autonomous race car. Redundancy in perception is achieved by estimating the color and position of the track-delimiting objects using two sensor modalities independently. Specifically, learning-based approaches are used to generate color and pose estimates from LiDAR and camera data respectively. The redundant perception inputs are fused by a particle filter based SLAM algorithm that operates in real time. Velocity is estimated using slip dynamics, with reliability being ensured through a probabilistic failure detection algorithm. The sub-modules are extensively evaluated in real-world racing conditions using the autonomous race car “gotthard driverless”, achieving lateral accelerations of up to 1.7 g and a top speed of 90 km/h.
Tasks
Published 2018-09-26
URL http://arxiv.org/abs/1809.10099v1
PDF http://arxiv.org/pdf/1809.10099v1.pdf
PWC https://paperswithcode.com/paper/redundant-perception-and-state-estimation-for
Repo
Framework
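
One way to picture the probabilistic failure detection mentioned above is a Mahalanobis gate: a sensor or velocity estimate is flagged as unreliable when it falls too far from the filter's current belief. The Gaussian-belief assumption and the gate value below are illustrative, not the team's actual pipeline.

```python
import numpy as np


def measurement_ok(measurement, belief_mean, belief_cov, gate=9.0):
    """Chi-square style gating: accept the measurement if its squared Mahalanobis
    distance to the current belief is below `gate` (about a 3-sigma bound in 1D)."""
    innovation = np.atleast_1d(measurement) - np.atleast_1d(belief_mean)
    cov = np.atleast_2d(belief_cov)
    d2 = float(innovation @ np.linalg.solve(cov, innovation))
    return d2 <= gate
```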

A Context-based Approach for Dialogue Act Recognition using Simple Recurrent Neural Networks

Title A Context-based Approach for Dialogue Act Recognition using Simple Recurrent Neural Networks
Authors Chandrakant Bothe, Cornelius Weber, Sven Magg, Stefan Wermter
Abstract Dialogue act recognition is an important part of natural language understanding. We investigate the way dialogue act corpora are annotated and the learning approaches used so far. We find that the dialogue act is context-sensitive within the conversation for most of the classes. Nevertheless, previous models of dialogue act classification work on the utterance-level and only very few consider context. We propose a novel context-based learning method to classify dialogue acts using a character-level language model utterance representation, and we notice significant improvement. We evaluate this method on the Switchboard Dialogue Act corpus, and our results show that the consideration of the preceding utterances as a context of the current utterance improves dialogue act detection.
Tasks Dialogue Act Classification, Language Modelling
Published 2018-05-16
URL http://arxiv.org/abs/1805.06280v1
PDF http://arxiv.org/pdf/1805.06280v1.pdf
PWC https://paperswithcode.com/paper/a-context-based-approach-for-dialogue-act
Repo
Framework
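
A minimal sketch of the context-based setup described above: each utterance is first encoded into a fixed vector (the paper uses a character-level language-model representation; any encoder can stand in here), and a simple recurrent network over the preceding utterances plus the current one predicts the dialogue act. The dimensions, context length, and 42-class output below are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_context_dar_model(utt_dim=128, context_len=3, num_acts=42):
    """Input: (context_len, utt_dim) stacked utterance vectors, oldest first and the
    current utterance last. Output: dialogue-act probabilities."""
    return models.Sequential([
        layers.Input(shape=(context_len, utt_dim)),
        layers.SimpleRNN(64),                        # simple recurrent context encoder
        layers.Dense(num_acts, activation="softmax"),
    ])
```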

The Query Complexity of a Permutation-Based Variant of Mastermind

Title The Query Complexity of a Permutation-Based Variant of Mastermind
Authors Peyman Afshani, Manindra Agrawal, Benjamin Doerr, Carola Doerr, Kasper Green Larsen, Kurt Mehlhorn
Abstract We study the query complexity of a permutation-based variant of the guessing game Mastermind. In this variant, the secret is a pair $(z,\pi)$ which consists of a binary string $z \in \{0,1\}^n$ and a permutation $\pi$ of $[n]$. The secret must be unveiled by asking queries of the form $x \in \{0,1\}^n$. For each such query, we are returned the score $$f_{z,\pi}(x) := \max \{ i \in [0..n] \mid \forall j \leq i: z_{\pi(j)} = x_{\pi(j)} \},$$ i.e., the score of $x$ is the length of the longest common prefix of $x$ and $z$ with respect to the order imposed by $\pi$. The goal is to minimize the number of queries needed to identify $(z,\pi)$. This problem originates from the study of black-box optimization heuristics, where it is known as the \textsc{LeadingOnes} problem. In this work, we prove matching upper and lower bounds for the deterministic and randomized query complexity of this game, which are $\Theta(n \log n)$ and $\Theta(n \log \log n)$, respectively.
Tasks
Published 2018-12-20
URL http://arxiv.org/abs/1812.08480v1
PDF http://arxiv.org/pdf/1812.08480v1.pdf
PWC https://paperswithcode.com/paper/the-query-complexity-of-a-permutation-based
Repo
Framework
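
The score function defined in the abstract above is easy to state in code; the sketch below uses a 0-indexed permutation (a Python list) rather than the paper's 1-indexed notation.

```python
def score(x, z, pi):
    """Length of the longest common prefix of x and z in the order imposed by pi.
    x, z: equal-length binary strings; pi: permutation of range(len(z)) as a list."""
    count = 0
    for j in pi:
        if x[j] != z[j]:
            break
        count += 1
    return count


# Example: with z = "101" and pi = [1, 2, 0], the query "001" agrees with z at
# positions 1 and 2 (in pi's order) but not at position 0, so its score is 2.
assert score("001", "101", [1, 2, 0]) == 2
```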

Unicorn: Continual Learning with a Universal, Off-policy Agent

Title Unicorn: Continual Learning with a Universal, Off-policy Agent
Authors Daniel J. Mankowitz, Augustin Žídek, André Barreto, Dan Horgan, Matteo Hessel, John Quan, Junhyuk Oh, Hado van Hasselt, David Silver, Tom Schaul
Abstract Some real-world domains are best characterized as a single task, but for others this perspective is limiting. Instead, some tasks continually grow in complexity, in tandem with the agent’s competence. In continual learning, also referred to as lifelong learning, there are no explicit task boundaries or curricula. As learning agents have become more powerful, continual learning remains one of the frontiers that has resisted quick progress. To test continual learning capabilities we consider a challenging 3D domain with an implicit sequence of tasks and sparse rewards. We propose a novel agent architecture called Unicorn, which demonstrates strong continual learning and outperforms several baseline agents on the proposed domain. The agent achieves this by jointly representing and learning multiple policies efficiently, using a parallel off-policy learning setup.
Tasks Continual Learning
Published 2018-02-22
URL http://arxiv.org/abs/1802.08294v2
PDF http://arxiv.org/pdf/1802.08294v2.pdf
PWC https://paperswithcode.com/paper/unicorn-continual-learning-with-a-universal
Repo
Framework
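
The phrase "jointly representing and learning multiple policies efficiently" in the abstract above suggests a single value network conditioned on both the observation and a task signal, so that all tasks share parameters. The Keras sketch below illustrates that general idea; the sizes, the concatenation scheme, and the off-policy training loop are assumptions, not the Unicorn architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_joint_q_network(obs_dim=64, goal_dim=8, num_actions=6):
    """One Q-network shared across tasks: action values conditioned on a task/goal signal."""
    obs = layers.Input(shape=(obs_dim,), name="observation")
    goal = layers.Input(shape=(goal_dim,), name="task_signal")
    h = layers.Dense(128, activation="relu")(layers.Concatenate()([obs, goal]))
    q = layers.Dense(num_actions, name="q_values")(h)   # one value per action for this task
    return models.Model([obs, goal], q)
```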