October 20, 2019

2894 words 14 mins read

Paper Group ANR 17

Extraction of Behavioral Features from Smartphone and Wearable Data. REFUGE CHALLENGE 2018-Task 2:Deep Optic Disc and Cup Segmentation in Fundus Images Using U-Net and Multi-scale Feature Matching Networks. Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings. Trading the Twitter Sentiment with Reinforcement Learnin …

Extraction of Behavioral Features from Smartphone and Wearable Data

Title Extraction of Behavioral Features from Smartphone and Wearable Data
Authors Afsaneh Doryab, Prerna Chikarsel, Xinwen Liu, Anind K. Dey
Abstract The rich set of sensors in smartphones and wearable devices makes it possible to passively collect streams of data in the wild. The raw data streams, however, can rarely be used directly in a modeling pipeline. We provide a generic framework that processes raw data streams and extracts useful features related to non-verbal human behavior. This framework can be used by researchers in the field who are interested in processing data from smartphones and wearable devices.
Tasks
Published 2018-12-18
URL http://arxiv.org/abs/1812.10394v2
PDF http://arxiv.org/pdf/1812.10394v2.pdf
PWC https://paperswithcode.com/paper/extraction-of-behavioral-features-from
Repo
Framework
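
As a rough illustration of what such a feature-extraction framework does, the sketch below aggregates raw screen and step-count streams into simple per-day behavioural features with pandas. The column names and the two features are illustrative assumptions, not the paper's actual sensor set or feature list.

```python
import pandas as pd

def daily_behavior_features(screen: pd.DataFrame, steps: pd.DataFrame) -> pd.DataFrame:
    """Aggregate raw phone/wearable streams into simple per-day behavioural features.

    screen: columns [timestamp, event] with event in {"on", "off"}
    steps:  columns [timestamp, step_count]
    (Hypothetical schemas; the framework described in the paper covers many more sensors.)
    """
    screen = screen.assign(date=pd.to_datetime(screen["timestamp"]).dt.date)
    steps = steps.assign(date=pd.to_datetime(steps["timestamp"]).dt.date)

    # how often the screen was turned on each day
    unlocks = (screen[screen["event"] == "on"]
               .groupby("date").size().rename("screen_on_count"))

    # total steps and number of epochs with any movement each day
    activity = steps.groupby("date")["step_count"].agg(
        total_steps="sum", active_epochs=lambda s: (s > 0).sum())

    return pd.concat([unlocks, activity], axis=1).fillna(0)
```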

REFUGE CHALLENGE 2018-Task 2: Deep Optic Disc and Cup Segmentation in Fundus Images Using U-Net and Multi-scale Feature Matching Networks

Title REFUGE CHALLENGE 2018-Task 2: Deep Optic Disc and Cup Segmentation in Fundus Images Using U-Net and Multi-scale Feature Matching Networks
Authors Vivek Kumar Singh, Hatem A. Rashwan, Adel Saleh, Farhan Akram, Md Mostafa Kamal Sarker, Nidhi Pandey, Saddam Abdulwahab
Abstract In this paper, an optic disc and cup segmentation method is proposed using U-Net followed by a multi-scale feature matching network. The proposed method targets Task 2 of the REFUGE Challenge 2018. To solve the segmentation problem of Task 2, we first crop the input image using a single shot multibox detector (SSD). The cropped image is then passed to an encoder-decoder network with skip connections, also known as a generator. Afterwards, both the ground truth and generated images are fed to a convolutional neural network (CNN) to extract their multi-level features. A Dice loss function is then used to match the features of the two images by minimizing the error at each layer. The aggregated error from each layer is back-propagated through the generator network to force it to generate a segmented image closer to the ground truth. The CNN improves the performance of the generator network without increasing the complexity of the model.
Tasks
Published 2018-07-30
URL http://arxiv.org/abs/1807.11433v1
PDF http://arxiv.org/pdf/1807.11433v1.pdf
PWC https://paperswithcode.com/paper/refuge-challenge-2018-task-2deep-optic-disc
Repo
Framework
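
The PyTorch sketch below illustrates the multi-scale feature matching idea: both the generated and the ground-truth segmentation maps are passed through a small CNN, and a Dice-style mismatch is accumulated over the intermediate feature maps. The layer sizes and the soft Dice formulation are assumptions for illustration, not the challenge submission's actual architecture.

```python
import torch
import torch.nn as nn

def soft_dice_loss(a, b, eps=1e-6):
    # soft Dice mismatch between two non-negative tensors of shape (B, C, H, W)
    inter = (a * b).sum(dim=(1, 2, 3))
    denom = a.pow(2).sum(dim=(1, 2, 3)) + b.pow(2).sum(dim=(1, 2, 3))
    return 1 - (2 * inter + eps) / (denom + eps)

class FeatureMatcher(nn.Module):
    """Small CNN whose intermediate activations are compared between the
    generated segmentation and the ground truth (layer sizes are illustrative)."""
    def __init__(self, in_ch=2):      # two channels: optic disc and optic cup masks
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2)),
        ])

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        return feats

def multiscale_matching_loss(matcher, pred_mask, gt_mask):
    """Pixel-level Dice plus per-layer Dice between feature maps; the sum is
    back-propagated into the generator (U-Net) through pred_mask."""
    loss = soft_dice_loss(pred_mask, gt_mask).mean()
    for f_pred, f_gt in zip(matcher(pred_mask), matcher(gt_mask)):
        loss = loss + soft_dice_loss(f_pred, f_gt).mean()
    return loss
```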

Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings

Title Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings
Authors Kamal Al-Sabahi, Zhang Zuping, Yang Kang
Abstract Since the amount of information on the internet is growing rapidly, it is not easy for a user to find information relevant to his/her query. To tackle this issue, much attention has been paid to automatic document summarization. The key point in any successful document summarizer is a good document representation. Traditional approaches based on word overlap mostly fail to produce that kind of representation. Word embeddings, distributed representations of words, have shown excellent performance by allowing words to be matched at the semantic level. Naively concatenating word embeddings, however, makes common words dominant, which in turn diminishes the quality of the representation. In this paper, we employ word embeddings to improve the weighting schemes used to calculate the input matrix of the Latent Semantic Analysis method. Two embedding-based weighting schemes are proposed and then combined to calculate the values of this matrix. The new weighting schemes are modified versions of the augment weight and the entropy frequency, combining the strengths of the traditional schemes with word embeddings. The proposed approach is experimentally evaluated on three well-known English datasets: DUC 2002, DUC 2004, and Multilingual 2015 Single-document Summarization for English. The proposed model consistently outperforms state-of-the-art methods by at least 1% in ROUGE, leading to the conclusion that it provides a better document representation and, as a result, a better summary.
Tasks Document Summarization, Word Embeddings
Published 2018-07-08
URL http://arxiv.org/abs/1807.02748v2
PDF http://arxiv.org/pdf/1807.02748v2.pdf
PWC https://paperswithcode.com/paper/latent-semantic-analysis-approach-for
Repo
Framework
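
A hedged sketch of the overall pipeline: build an embedding-weighted term-sentence matrix, apply SVD (LSA), and select the highest-scoring sentences. The cosine-to-sentence-centroid weighting used below is only a stand-in for the paper's modified augment-weight and entropy-frequency schemes.

```python
import numpy as np

def lsa_summarize(sentences, tokenized, embeddings, n_select=3):
    """LSA summarization over an embedding-weighted term-sentence matrix.

    sentences:  list of raw sentence strings
    tokenized:  list of token lists, one per sentence
    embeddings: dict token -> vector (e.g. pre-trained word2vec/GloVe)
    The weighting (term count scaled by cosine similarity between a word vector
    and its sentence centroid) is an illustrative assumption, not the paper's scheme.
    """
    vocab = sorted({w for sent in tokenized for w in sent if w in embeddings})
    index = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(tokenized)))
    for j, sent in enumerate(tokenized):
        vecs = [embeddings[w] for w in sent if w in embeddings]
        if not vecs:
            continue
        centroid = np.mean(vecs, axis=0)
        for w in sent:
            if w in index:
                v = embeddings[w]
                sim = v @ centroid / (np.linalg.norm(v) * np.linalg.norm(centroid) + 1e-12)
                A[index[w], j] += max(sim, 0.0)

    # LSA step: SVD of the term-sentence matrix, then score each sentence by its
    # weight in the leading singular vectors (Steinberger/Jezek-style selection).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    scores = (Vt[:n_select] ** 2 * (s[:n_select, None] ** 2)).sum(axis=0)
    top = np.argsort(-scores)[:n_select]
    return [sentences[j] for j in sorted(top)]
```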

Trading the Twitter Sentiment with Reinforcement Learning

Title Trading the Twitter Sentiment with Reinforcement Learning
Authors Catherine Xiao, Wanfeng Chen
Abstract This paper explores the possibility of using alternative data and artificial intelligence techniques to trade stocks. The efficacy of daily Twitter sentiment in predicting stock returns is examined using machine learning methods. Reinforcement learning (Q-learning) is applied to generate the optimal trading policy based on the sentiment signal. The predictive power of the sentiment signal is more significant when the stock price is driven by expectations of company growth and when the company has a major event that draws public attention. The optimal trading strategy based on reinforcement learning outperforms the trading strategy based on the machine learning prediction.
Tasks Q-Learning
Published 2018-01-07
URL http://arxiv.org/abs/1801.02243v1
PDF http://arxiv.org/pdf/1801.02243v1.pdf
PWC https://paperswithcode.com/paper/trading-the-twitter-sentiment-with
Repo
Framework
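
To make the trading setup concrete, here is a toy tabular Q-learning loop over a discretised daily sentiment signal. The state/action design (five sentiment bins, flat-or-long positions, reward equal to the position times the next-day return) is an assumption for illustration rather than the paper's exact formulation.

```python
import numpy as np

def q_learning_trader(sentiment, returns, n_bins=5, alpha=0.1, gamma=0.95, eps=0.1, epochs=50):
    """Tabular Q-learning on a discretised sentiment signal.

    sentiment, returns: aligned 1-D numpy arrays of daily sentiment scores and
    next-day stock returns. Actions: 0 = flat, 1 = long.
    """
    bins = np.quantile(sentiment, np.linspace(0, 1, n_bins + 1)[1:-1])
    states = np.digitize(sentiment, bins)            # state index in 0..n_bins-1
    Q = np.zeros((n_bins, 2))
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for t in range(len(states) - 1):
            s, s_next = states[t], states[t + 1]
            a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
            r = a * returns[t]                       # reward: position * next-day return
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q  # greedy policy: go long whenever Q[state, 1] > Q[state, 0]
```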

A real-time warning system for rear-end collision based on random forest classifier

Title A real-time warning system for rear-end collision based on random forest classifier
Authors Fateme Teimouri, Mehdi Ghatee
Abstract A rear-end collision warning system plays an important role in enhancing driving safety. Such a system uses several measures to estimate danger and warns drivers to be more cautious. Its processing must run in real time so that enough time and distance remain to avoid a collision with the vehicle ahead. To this end, this paper develops a new system based on a random forest classifier. To evaluate the performance of the proposed system, vehicle trajectory data from the 100-Car database of the Virginia Tech Transportation Institute are used, and the methods are compared based on their accuracy and processing time. Using the TOPSIS multi-criteria selection method, we show that the implemented classifier outperforms a range of alternatives, including a Bayesian network, naive Bayes, an MLP neural network, a support vector machine, nearest neighbor, rule-based methods, and a decision tree. The presented experiments reveal that the random forest is an acceptable algorithm for the proposed driver assistance system, with 88.4% accuracy in detecting warning situations and 94.7% in detecting safe situations.
Tasks
Published 2018-03-29
URL http://arxiv.org/abs/1803.10988v1
PDF http://arxiv.org/pdf/1803.10988v1.pdf
PWC https://paperswithcode.com/paper/a-real-time-warning-system-for-rear-end
Repo
Framework
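
A minimal scikit-learn sketch of the classification stage, using synthetic stand-ins for trajectory-derived features (speed, relative speed, headway, time-to-collision); the real system trains on features extracted from the 100-Car naturalistic driving database.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical feature matrix: [speed, relative_speed, headway, time_to_collision];
# labels: 1 = warning situation, 0 = safe situation. Synthetic data for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
y = (X[:, 3] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) < -0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, max_depth=10, n_jobs=-1, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["safe", "warning"]))
```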

Glassy nature of the hard phase in inference problems

Title Glassy nature of the hard phase in inference problems
Authors Fabrizio Antenucci, Silvio Franz, Pierfrancesco Urbani, Lenka Zdeborová
Abstract An algorithmically hard phase has been described in a range of inference problems: even if the signal can be reconstructed with small error from an information-theoretic point of view, known algorithms fail unless the noise-to-signal ratio is sufficiently small. This hard phase is typically understood as a metastable branch of the dynamical evolution of message passing algorithms. In this work we study the metastable branch for a prototypical inference problem, low-rank matrix factorization, that presents a hard phase. We show that for noise-to-signal ratios below the information-theoretic threshold, the posterior measure is composed of an exponential number of metastable glassy states, and we compute their entropy, called the complexity. We show that this glassiness extends even slightly below the algorithmic threshold, below which the well-known approximate message passing (AMP) algorithm is able to closely reconstruct the signal. Counter-intuitively, we find that the performance of the AMP algorithm is not improved by taking into account the glassy nature of the hard phase. This result provides further evidence that the hard phase in inference problems is algorithmically impenetrable for some deep computational reasons that remain to be uncovered.
Tasks
Published 2018-05-15
URL http://arxiv.org/abs/1805.05857v4
PDF http://arxiv.org/pdf/1805.05857v4.pdf
PWC https://paperswithcode.com/paper/glassy-nature-of-the-hard-phase-in-inference
Repo
Framework
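
To make the inference setting concrete, below is a toy AMP iteration for the rank-one spiked Wigner model with a Rademacher prior, one common textbook instance of the low-rank estimation problems the paper studies. The scalings follow that textbook convention and are not taken from the paper itself.

```python
import numpy as np

def amp_spiked_wigner(Y, lam, n_iter=50):
    """AMP for Y = sqrt(lam/N) * x x^T + noise with x_i in {+1, -1}.
    Onsager-corrected iteration with the tanh denoiser for the +/-1 prior."""
    N = Y.shape[0]
    xhat = np.random.default_rng(0).normal(scale=1e-3, size=N)   # small random init
    xhat_old = np.zeros(N)
    for _ in range(n_iter):
        onsager = (lam / N) * np.sum(1.0 - xhat ** 2) * xhat_old
        B = np.sqrt(lam / N) * (Y @ xhat) - onsager
        xhat_old, xhat = xhat, np.tanh(B)
    return xhat

# Toy experiment: above lam = 1 the overlap with the planted signal becomes macroscopic.
N, lam = 2000, 2.0
rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=N)
W = rng.normal(size=(N, N))
W = np.triu(W, 1)
W = W + W.T
Y = np.sqrt(lam / N) * np.outer(x, x) + W
xhat = amp_spiked_wigner(Y, lam)
print("overlap with planted signal:", abs(xhat @ x) / N)
```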

Simultaneous Task Allocation and Planning Under Uncertainty

Title Simultaneous Task Allocation and Planning Under Uncertainty
Authors Fatma Faruq, Bruno Lacerda, Nick Hawes, David Parker
Abstract We propose novel techniques for task allocation and planning in multi-robot systems operating in uncertain environments. Task allocation is performed simultaneously with planning, which provides more detailed information about individual robot behaviour, but also exploits independence between tasks to do so efficiently. We use Markov decision processes to model robot behaviour and linear temporal logic to specify tasks and safety constraints. Building upon techniques and tools from formal verification, we show how to generate a sequence of multi-robot policies, iteratively refining them to reallocate tasks if individual robots fail, and providing probabilistic guarantees on the performance (and safe operation) of the team of robots under the resulting policy. We implement our approach and evaluate it on a benchmark multi-robot example.
Tasks
Published 2018-03-07
URL http://arxiv.org/abs/1803.02906v2
PDF http://arxiv.org/pdf/1803.02906v2.pdf
PWC https://paperswithcode.com/paper/simultaneous-task-allocation-and-planning
Repo
Framework
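
The probabilistic guarantees rest on computing optimal policies for reachability objectives in an MDP; a bare-bones value iteration for the maximal probability of reaching a goal set is sketched below. The transition structure is a made-up toy, and the paper additionally layers LTL mission specifications and task allocation across robots on top of this building block.

```python
import numpy as np

def max_reach_probability(P, goal, n_iter=200):
    """Value iteration for the maximal probability of eventually reaching `goal`.

    P[s][a] is a dict {next_state: prob}; goal is a set of absorbing goal states.
    Returns the value vector and a memoryless optimal policy.
    """
    n = len(P)
    v = np.array([1.0 if s in goal else 0.0 for s in range(n)])
    policy = [0] * n
    for _ in range(n_iter):
        for s in range(n):
            if s in goal:
                continue
            vals = [sum(p * v[t] for t, p in P[s][a].items()) for a in range(len(P[s]))]
            policy[s] = int(np.argmax(vals))
            v[s] = max(vals)
    return v, policy

# Tiny example: state 2 is the goal; action 1 in state 0 is faster but riskier.
P = [
    [{1: 1.0}, {2: 0.8, 3: 0.2}],   # state 0
    [{2: 0.9, 3: 0.1}],             # state 1
    [{2: 1.0}],                     # state 2 (goal, absorbing)
    [{3: 1.0}],                     # state 3 (failure, absorbing)
]
v, policy = max_reach_probability(P, goal={2})
print(v, policy)   # optimal choice in state 0 is the safer route via state 1
```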

Ranking with Adaptive Neighbors

Title Ranking with Adaptive Neighbors
Authors Muge Li, Liangyue Li, Feiping Nie
Abstract Retrieving the objects most similar to a given query in a large-scale database is a fundamental building block in many application domains, ranging from web search to visual, cross-media, and document retrieval. State-of-the-art approaches have mainly focused on capturing the underlying geometry of the data manifolds. Graph-based approaches, in particular, define various diffusion processes on weighted data graphs. Despite their success, these approaches rely on fixed-weight graphs, making the ranking sensitive to the input affinity matrix. In this study, we propose a new ranking algorithm that simultaneously learns the data affinity matrix and the ranking scores. The proposed optimization formulation assigns adaptive neighbors to each data point based on local connectivity, and the smoothness constraint assigns similar ranking scores to similar data points. We develop a novel and efficient algorithm to solve the optimization problem. Evaluations using synthetic and real datasets suggest that the proposed algorithm outperforms existing methods.
Tasks
Published 2018-03-14
URL http://arxiv.org/abs/1803.05105v1
PDF http://arxiv.org/pdf/1803.05105v1.pdf
PWC https://paperswithcode.com/paper/ranking-with-adaptive-neighbors
Repo
Framework
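
A rough sketch of the two ingredients: a closed-form adaptive-neighbor affinity (each point receives exactly k non-zero, locally scaled weights) and a manifold-ranking solve on the resulting graph. The paper optimises the affinity matrix and the ranking scores jointly; this sketch simply runs the two steps once, in sequence.

```python
import numpy as np

def adaptive_neighbor_affinity(X, k=7):
    """Per-row affinities from min_s sum_j (d_ij * s_ij + gamma * s_ij^2),
    s_i on the simplex, with gamma chosen so exactly k weights are non-zero."""
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared Euclidean distances
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])
        idx = idx[idx != i][:k + 1]                       # k nearest plus the (k+1)-th
        d = D[i, idx]
        S[i, idx[:k]] = np.maximum((d[k] - d[:k]) / (k * d[k] - d[:k].sum() + 1e-12), 0)
    return S

def rank_scores(S, y, alpha=0.9):
    """Closed-form manifold ranking on the learned graph.

    y: query indicator vector (1 at query items, 0 elsewhere)."""
    W = (S + S.T) / 2
    Dm = np.diag(1.0 / np.sqrt(W.sum(1) + 1e-12))
    Sn = Dm @ W @ Dm                                      # symmetric normalisation
    n = len(y)
    return np.linalg.solve(np.eye(n) - alpha * Sn, (1 - alpha) * y)
```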

Semi-Automatic RECIST Labeling on CT Scans with Cascaded Convolutional Neural Networks

Title Semi-Automatic RECIST Labeling on CT Scans with Cascaded Convolutional Neural Networks
Authors Youbao Tang, Adam P. Harrison, Mohammadhadi Bagheri, Jing Xiao, Ronald M. Summers
Abstract Response evaluation criteria in solid tumors (RECIST) is the standard measurement for tumor extent to evaluate treatment responses in cancer patients. As such, RECIST annotations must be accurate. However, RECIST annotations manually labeled by radiologists require professional knowledge and are time-consuming, subjective, and prone to inconsistency among different observers. To alleviate these problems, we propose a cascaded convolutional neural network based method to semi-automatically label RECIST annotations and drastically reduce annotation time. The proposed method consists of two stages: lesion region normalization and RECIST estimation. We employ the spatial transformer network (STN) for lesion region normalization, where a localization network is designed to predict the lesion region and the transformation parameters with a multi-task learning strategy. For RECIST estimation, we adapt the stacked hourglass network (SHN), introducing a relationship constraint loss to improve the estimation precision. STN and SHN can both be learned in an end-to-end fashion. We train our system on the DeepLesion dataset, obtaining a consensus model trained on RECIST annotations performed by multiple radiologists over a multi-year period. Importantly, when judged against the inter-reader variability of two additional radiologist raters, our system performs more stably and with less variability, suggesting that RECIST annotations can be reliably obtained with reduced labor and time.
Tasks Multi-Task Learning
Published 2018-06-25
URL http://arxiv.org/abs/1806.09507v1
PDF http://arxiv.org/pdf/1806.09507v1.pdf
PWC https://paperswithcode.com/paper/semi-automatic-recist-labeling-on-ct-scans
Repo
Framework
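
A minimal PyTorch spatial transformer for the lesion-normalisation stage: a small localisation network predicts an affine transform, which is then applied with affine_grid/grid_sample. The layer sizes are arbitrary assumptions; the paper's localisation network is trained with a multi-task objective and is followed by a stacked hourglass network for RECIST estimation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineSTN(nn.Module):
    """Minimal spatial transformer: predict an affine transform, then resample."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(10, 6),
        )
        # initialise the last layer to output the identity transform
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False), theta

# x = torch.randn(4, 1, 64, 64)     # e.g. CT patches around a lesion
# warped, theta = AffineSTN()(x)    # spatially normalised patches for the next stage
```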

A Block Coordinate Ascent Algorithm for Mean-Variance Optimization

Title A Block Coordinate Ascent Algorithm for Mean-Variance Optimization
Authors Bo Liu, Tengyang Xie, Yangyang Xu, Mohammad Ghavamzadeh, Yinlam Chow, Daoming Lyu, Daesub Yoon
Abstract Risk management in dynamic decision problems is a primary concern in many fields, including financial investment, autonomous driving, and healthcare. The mean-variance function is one of the most widely used objective functions in risk management due to its simplicity and interpretability. Existing algorithms for mean-variance optimization are based on multi-time-scale stochastic approximation, whose learning rate schedules are often hard to tune, and which have only asymptotic convergence proofs. In this paper, we develop a model-free policy search framework for mean-variance optimization with finite-sample error bound analysis (to local optima). Our starting point is a reformulation of the original mean-variance function via its Fenchel dual, from which we propose a stochastic block coordinate ascent policy search algorithm. Both the asymptotic convergence guarantee of the last iteration's solution and the convergence rate of the randomly picked solution are provided, and their applicability is demonstrated on several benchmark domains.
Tasks Autonomous Driving
Published 2018-09-07
URL http://arxiv.org/abs/1809.02292v3
PDF http://arxiv.org/pdf/1809.02292v3.pdf
PWC https://paperswithcode.com/paper/a-block-coordinate-ascent-algorithm-for-mean
Repo
Framework
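
The reformulation can be summarised in one line: the square of the expected return, which blocks standard stochastic approximation, is rewritten through the Fenchel dual of the square, giving an objective that is concave quadratic in a dual variable y and linear in expectations for fixed y, hence amenable to alternating (block coordinate) ascent. The derivation below is a paraphrase in the spirit of the paper's construction, with τ the risk-aversion weight.

```latex
\begin{align*}
J(\theta) &= \mathbb{E}[R_\theta] - \tau\,\mathrm{Var}[R_\theta]
           = \mathbb{E}[R_\theta] - \tau\,\mathbb{E}[R_\theta^2]
             + \tau\bigl(\mathbb{E}[R_\theta]\bigr)^2 \\
% Fenchel dual of the square: z^2 = \max_y (2yz - y^2), attained at y = z
\max_\theta J(\theta)
  &= \max_{\theta,\,y}\;
     \mathbb{E}\!\bigl[\,R_\theta - \tau R_\theta^2 + \tau\,(2 y R_\theta - y^2)\bigr].
\end{align*}
```

For fixed θ the inner maximisation over y is a simple concave quadratic, and for fixed y the objective is a plain expectation over trajectories, which is what makes block coordinate ascent with sampled returns possible.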

Block Belief Propagation for Parameter Learning in Markov Random Fields

Title Block Belief Propagation for Parameter Learning in Markov Random Fields
Authors You Lu, Zhiyuan Liu, Bert Huang
Abstract Traditional learning methods for training Markov random fields require doing inference over all variables to compute the likelihood gradient. The iteration complexity for those methods therefore scales with the size of the graphical models. In this paper, we propose \emph{block belief propagation learning} (BBPL), which uses block-coordinate updates of approximate marginals to compute approximate gradients, removing the need to compute inference on the entire graphical model. Thus, the iteration complexity of BBPL does not scale with the size of the graphs. We prove that the method converges to the same solution as that obtained by using full inference per iteration, despite these approximations, and we empirically demonstrate its scalability improvements over standard training methods.
Tasks
Published 2018-11-09
URL http://arxiv.org/abs/1811.04064v1
PDF http://arxiv.org/pdf/1811.04064v1.pdf
PWC https://paperswithcode.com/paper/block-belief-propagation-for-parameter
Repo
Framework
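
For context, the quantity BBPL approximates is the model expectation in the standard maximum-likelihood gradient of an MRF with sufficient statistics φ:

```latex
\begin{equation*}
\nabla_\theta\,\ell(\theta)
  \;=\; \mathbb{E}_{\text{data}}\bigl[\phi(x)\bigr]
  \;-\; \mathbb{E}_{p_\theta}\bigl[\phi(x)\bigr].
\end{equation*}
```

The second (model) term normally requires inference over the whole graph; BBPL replaces it with belief-propagation pseudo-marginals that are refreshed on only one block of the graph per parameter update, which is why the per-iteration cost stops scaling with the full graph size.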

Recognizing Film Entities in Podcasts

Title Recognizing Film Entities in Podcasts
Authors Ahmet Salih Gundogdu, Arjun Sanghvi, Keith Harrigian
Abstract In this paper, we propose a Named Entity Recognition (NER) system to identify film titles in podcast audio. Taking inspiration from NER systems for noisy text in social media, we implement a two-stage approach that is robust to computer transcription errors and does not require significant computational expense to accommodate new film titles/releases. Evaluating on a diverse set of podcasts, we demonstrate more than a 20% increase in F1 score across three baseline approaches when combining fuzzy-matching with a linear model aware of film-specific metadata.
Tasks Named Entity Recognition
Published 2018-09-24
URL http://arxiv.org/abs/1809.08711v1
PDF http://arxiv.org/pdf/1809.08711v1.pdf
PWC https://paperswithcode.com/paper/recognizing-film-entities-in-podcasts
Repo
Framework
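
A sketch of the first, gazetteer-driven stage: transcript n-grams fuzzy-matched against film titles with Python's difflib, tolerating ASR transcription errors. The gazetteer, span length, and threshold are toy assumptions; the paper's second stage scores candidates with a linear model over film-specific metadata.

```python
import difflib

FILM_TITLES = ["Blade Runner 2049", "The Godfather", "Get Out", "Inception"]  # toy gazetteer

def film_candidates(transcript_tokens, max_len=5, threshold=0.85):
    """Fuzzy-match transcript n-grams against a film-title gazetteer.

    Returns (start, end, matched_title, similarity) tuples for candidate mentions.
    """
    titles = {t.lower(): t for t in FILM_TITLES}
    hits = []
    for n in range(1, max_len + 1):
        for i in range(len(transcript_tokens) - n + 1):
            span = " ".join(transcript_tokens[i:i + n]).lower()
            match = difflib.get_close_matches(span, list(titles), n=1, cutoff=threshold)
            if match:
                score = difflib.SequenceMatcher(None, span, match[0]).ratio()
                hits.append((i, i + n, titles[match[0]], score))
    return hits

# e.g. an ASR transcript that splits the title into separate words:
print(film_candidates("we rewatched the god father last weekend".split()))
```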

Domain Adaptation for Deviating Acquisition Protocols in CNN-based Lesion Classification on Diffusion-Weighted MR Images

Title Domain Adaptation for Deviating Acquisition Protocols in CNN-based Lesion Classification on Diffusion-Weighted MR Images
Authors Jennifer Kamphenkel, Paul F. Jaeger, Sebastian Bickelhaupt, Frederik Bernd Laun, Wolfgang Lederer, Heidi Daniel, Tristan Anselm Kuder, Stefan Delorme, Heinz-Peter Schlemmer, Franziska Koenig, Klaus H. Maier-Hein
Abstract End-to-end deep learning improves breast cancer classification on diffusion-weighted MR images (DWI) using a convolutional neural network (CNN) architecture. A limitation of CNN as opposed to previous model-based approaches is the dependence on specific DWI input channels used during training. However, in the context of large-scale application, methods agnostic towards heterogeneous inputs are desirable, due to the high deviation of scanning protocols between clinical sites. We propose model-based domain adaptation to overcome input dependencies and avoid re-training of networks at clinical sites by restoring training inputs from altered input channels given during deployment. We demonstrate the method’s significant increase in classification performance and superiority over implicit domain adaptation provided by training-schemes operating on model-parameters instead of raw DWI images.
Tasks Domain Adaptation
Published 2018-07-17
URL http://arxiv.org/abs/1807.06277v1
PDF http://arxiv.org/pdf/1807.06277v1.pdf
PWC https://paperswithcode.com/paper/domain-adaptation-for-deviating-acquisition
Repo
Framework
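
A hedged sketch of model-based restoration of missing input channels: fit a per-voxel signal model to the b-values a site actually acquired, then synthesise the channels the trained CNN expects. A monoexponential ADC model is used here purely for illustration; the paper's actual diffusion signal model may differ.

```python
import numpy as np

def restore_dwi_channels(signals, b_acquired, b_target):
    """Map DWI intensities from acquired b-values onto the b-values a trained CNN expects.

    signals: array (n_voxels, len(b_acquired)) of positive DWI intensities.
    Fits log S = log S0 - b * ADC per voxel, then evaluates at the target b-values.
    """
    b_acq = np.asarray(b_acquired, dtype=float)
    logS = np.log(np.clip(signals, 1e-6, None))
    A = np.stack([np.ones_like(b_acq), -b_acq], axis=1)        # design matrix (n_b, 2)
    coeffs, *_ = np.linalg.lstsq(A, logS.T, rcond=None)        # (2, n_voxels)
    logS0, adc = coeffs
    b_tgt = np.asarray(b_target, dtype=float)
    return np.exp(logS0[:, None] - np.outer(adc, b_tgt))       # (n_voxels, len(b_target))

# Example: site acquired b = 0, 800, 1500 s/mm^2, network expects b = 0, 100, 750, 1500.
restored = restore_dwi_channels(np.array([[900.0, 400.0, 180.0]]),
                                [0, 800, 1500], [0, 100, 750, 1500])
```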

Towards Training Recurrent Neural Networks for Lifelong Learning

Title Towards Training Recurrent Neural Networks for Lifelong Learning
Authors Shagun Sodhani, Sarath Chandar, Yoshua Bengio
Abstract Catastrophic forgetting and capacity saturation are the central challenges of any parametric lifelong learning system. In this work, we study these challenges in the context of sequential supervised learning with an emphasis on recurrent neural networks. To evaluate models in the lifelong learning setting, we propose a curriculum-based, simple, and intuitive benchmark where models are trained on tasks with increasing levels of difficulty. To measure the impact of catastrophic forgetting, the model is tested on all previous tasks as it completes each task. As a step towards developing true lifelong learning systems, we unify Gradient Episodic Memory (a catastrophic forgetting alleviation approach) and Net2Net (a capacity expansion approach). Both of these models were proposed in the context of feedforward networks, and we evaluate the feasibility of using them for recurrent networks. Evaluation on the proposed benchmark shows that the unified model is more suitable than its constituent models for the lifelong learning setting.
Tasks
Published 2018-11-16
URL https://arxiv.org/abs/1811.07017v3
PDF https://arxiv.org/pdf/1811.07017v3.pdf
PWC https://paperswithcode.com/paper/on-training-recurrent-neural-networks-for
Repo
Framework
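
As a concrete flavour of the episodic-memory half of the unified model, here is an A-GEM-style gradient projection, a deliberately simplified single-constraint stand-in for GEM, which instead solves a quadratic program with one constraint per previous task.

```python
import torch

def project_gradient(grad, ref_grad):
    """If the current gradient conflicts with the gradient on memory examples
    from earlier tasks (negative dot product), project it onto the half-space
    where it no longer increases the memory loss (single-constraint, A-GEM style)."""
    dot = torch.dot(grad, ref_grad)
    if dot < 0:
        grad = grad - (dot / ref_grad.pow(2).sum().clamp_min(1e-12)) * ref_grad
    return grad

# Usage inside a training step (flatten parameter gradients, project, write back):
# g     = torch.cat([p.grad.view(-1) for p in model.parameters()])
# g_ref = flattened gradient of the loss on a batch sampled from episodic memory
# g_new = project_gradient(g, g_ref)
```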

SmartPM: Automatic Adaptation of Dynamic Processes at Run-Time

Title SmartPM: Automatic Adaptation of Dynamic Processes at Run-Time
Authors Andrea Marrella
Abstract The research activity outlined in this PhD thesis is devoted to defining a general approach, a concrete architecture, and a prototype Process Management System (PMS) for the automated adaptation of dynamic processes at run-time, on the basis of a declarative specification of process tasks and relying on well-established reasoning-about-actions and planning techniques. The purpose is to demonstrate that combining procedural and imperative models with declarative elements, along with exploiting techniques from the field of artificial intelligence (AI) such as Situation Calculus, IndiGolog, and automated planning, can increase the ability of existing PMSs to support dynamic processes. To this end, a prototype PMS named SmartPM, specifically tailored to supporting the collaborative work of process participants in pervasive scenarios, has been developed. The adaptation mechanism deployed in SmartPM is based on execution monitoring for detecting failures at run-time, which does not require the adaptation strategy to be defined in the process itself (as most current approaches do), and on automated planning techniques for synthesizing the recovery procedure.
Tasks
Published 2018-10-12
URL http://arxiv.org/abs/1810.06374v1
PDF http://arxiv.org/pdf/1810.06374v1.pdf
PWC https://paperswithcode.com/paper/smartpm-automatic-adaptation-of-dynamic
Repo
Framework