July 29, 2019

3548 words 17 mins read

Paper Group ANR 86

Development of Statewide AADT Estimation Model from Short-Term Counts: A Comparative Study for South Carolina

Title Development of Statewide AADT Estimation Model from Short-Term Counts: A Comparative Study for South Carolina
Authors Sakib Mahmud Khan, Sababa Islam, MD Zadid Khan, Kakan Dey, Mashrur Chowdhury, Nathan Huynh
Abstract Annual Average Daily Traffic (AADT) is an important parameter used in traffic engineering analysis. Departments of Transportation (DOTs) continually collect traffic counts using both permanent count stations (i.e., Automatic Traffic Recorders or ATRs) and temporary short-term count stations. In South Carolina, 87% of the ATRs are located on interstates and arterial highways. For most secondary highways (i.e., collectors and local roads), AADT is estimated based on short-term counts. This paper develops AADT estimation models for different roadway functional classes with two machine learning techniques: Artificial Neural Network (ANN) and Support Vector Regression (SVR). The models aim to predict AADT from short-term counts. The results are first compared against each other to identify the best model. Then, the results of the best model are compared against a regression method and a factor-based method. The comparison reveals the superiority of SVR over all other methods for AADT estimation across different roadway functional classes. Among all developed models for the different functional roadway classes, the SVR-based model shows a minimum root mean square error (RMSE) of 0.22 and a mean absolute percentage error (MAPE) of 11.3% for the interstate/expressway functional class. This model also shows a higher R-squared value compared to the traditional factor-based model and regression model. SVR models are validated for each roadway functional class using the 2016 ATR data and selected short-term count data collected by the South Carolina Department of Transportation (SCDOT). The validation results show that the SVR-based AADT estimation models can be used by the SCDOT as a reliable option to predict AADT from short-term counts.
Tasks
Published 2017-11-30
URL http://arxiv.org/abs/1712.01257v1
PDF http://arxiv.org/pdf/1712.01257v1.pdf
PWC https://paperswithcode.com/paper/development-of-statewide-aadt-estimation
Repo
Framework
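As a rough illustration of the modeling step described above, the sketch below fits a Support Vector Regression model that maps short-term count features to AADT. The feature set, data, and hyperparameters are hypothetical stand-ins; the paper's actual inputs and tuning are not specified in the abstract.

```python
# Hypothetical sketch: SVR mapping short-term count features to AADT.
# Features, data, and hyperparameters are illustrative, not the paper's.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Each row is one count station: [48-hour short-term count, functional class, lane count]
X = rng.uniform([500, 1, 2], [40000, 7, 6], size=(200, 3))
y = X[:, 0] * rng.uniform(0.9, 1.1, size=200)             # synthetic AADT target

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, np.log(y))                                    # log target often stabilizes count regression

pred = np.exp(model.predict(X))
print("in-sample RMSE:", round(float(np.sqrt(mean_squared_error(y, pred))), 1), "veh/day")
```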

Decision structure of risky choice

Title Decision structure of risky choice
Authors Lamb Wubin, Naixin Ren
Abstract There is a well-known controversy between economists and psychologists about decision making under risk. We discuss how to build a unified theory of risky choice that would explain both compensatory and non-compensatory theories. For risky choice, we argue that, given limited cognitive ability, people cannot build a continuous and accurate subjective probability world, only a few ordered concepts such as small, middle and large probability. People make decisions based on information, experience, imagination and other resources. Because these resources are so vast, people have to prepare strategies; that is, people use different strategies when facing different situations. The distributions of these resources give rise to different decision structures. More precisely, decision making is a process of simplifying the decision structure. However, this simplification does not follow a fixed rut: it takes different paths when the same problem is faced repeatedly, which is why preference reversals so often occur. The most efficient ways to simplify the decision structure are to calculate expected value or to decide based on one or two dimensions. We also argue that deliberation time has at least four parts: substitution time, first-order time, second-order time and calculation time. The decision structure can also simply explain well-known paradoxes and anomalies. JEL Codes: C10, D03, D81
Tasks Decision Making
Published 2017-01-30
URL http://arxiv.org/abs/1701.08567v2
PDF http://arxiv.org/pdf/1701.08567v2.pdf
PWC https://paperswithcode.com/paper/decision-structure-of-risky-choice
Repo
Framework

On the Reconstruction Risk of Convolutional Sparse Dictionary Learning

Title On the Reconstruction Risk of Convolutional Sparse Dictionary Learning
Authors Shashank Singh, Barnabás Póczos, Jian Ma
Abstract Sparse dictionary learning (SDL) has become a popular method for adaptively identifying parsimonious representations of a dataset, a fundamental problem in machine learning and signal processing. While most work on SDL assumes a training dataset of independent and identically distributed samples, a variant known as convolutional sparse dictionary learning (CSDL) relaxes this assumption, allowing more general sequential data sources, such as time series or other dependent data. Although recent work has explored the statistical properties of classical SDL, the statistical properties of CSDL remain unstudied. This paper begins to study this by identifying the minimax convergence rate of CSDL in terms of reconstruction risk, by both upper bounding the risk of an established CSDL estimator and proving a matching information-theoretic lower bound. Our results indicate that consistency in reconstruction risk is possible precisely in the “ultra-sparse” setting, in which the sparsity (i.e., the number of feature occurrences) is in $o(N)$ in terms of the length N of the training sequence. Notably, our results make very weak assumptions, allowing arbitrary dictionaries and dependent measurement noise. Finally, we verify our theoretical results with numerical experiments on synthetic data.
Tasks Dictionary Learning, Time Series
Published 2017-08-29
URL http://arxiv.org/abs/1708.08587v2
PDF http://arxiv.org/pdf/1708.08587v2.pdf
PWC https://paperswithcode.com/paper/on-the-reconstruction-risk-of-convolutional
Repo
Framework
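To make the quantity being bounded concrete, here is a minimal numpy sketch of a convolutional sparse reconstruction and its empirical reconstruction risk; the dictionary and codes are synthetic placeholders rather than the CSDL estimator analyzed in the paper.

```python
# Synthetic sketch of convolutional sparse reconstruction and its empirical risk.
import numpy as np

rng = np.random.default_rng(1)
N, K, L = 1024, 4, 16                                # signal length, atoms, atom length

D = rng.standard_normal((K, L))
D /= np.linalg.norm(D, axis=1, keepdims=True)        # unit-norm dictionary atoms

# "Ultra-sparse" codes: the total number of feature occurrences is o(N).
Z = np.zeros((K, N - L + 1))
for k in range(K):
    idx = rng.choice(N - L + 1, size=5, replace=False)
    Z[k, idx] = rng.standard_normal(5)

x_clean = sum(np.convolve(Z[k], D[k]) for k in range(K))   # length-N signal
x_noisy = x_clean + 0.1 * rng.standard_normal(N)           # measurement noise

x_hat = x_clean                                      # stand-in for an estimator's reconstruction
print("empirical reconstruction risk:", np.mean((x_hat - x_noisy) ** 2))
```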

Detection and Tracking of General Movable Objects in Large 3D Maps

Title Detection and Tracking of General Movable Objects in Large 3D Maps
Authors Nils Bore, Johan Ekekrantz, Patric Jensfelt, John Folkesson
Abstract This paper studies the problem of detection and tracking of general objects with long-term dynamics, observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, it can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances, through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
Tasks
Published 2017-12-22
URL http://arxiv.org/abs/1712.08409v2
PDF http://arxiv.org/pdf/1712.08409v2.pdf
PWC https://paperswithcode.com/paper/detection-and-tracking-of-general-movable
Repo
Framework
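The decomposition described above can be illustrated with a toy filter: local motion is tracked analytically with a Kalman filter, while rare global "jumps" are handled by sampling a new location. All parameters below are illustrative, not the paper's.

```python
# Toy sketch: Kalman-tracked local motion plus sampled global "jumps".
import numpy as np

rng = np.random.default_rng(2)

def kalman_update(mu, P, z, Q=0.01 * np.eye(2), R=0.05 * np.eye(2)):
    """Random-walk motion model: predict, then correct with observation z."""
    P_pred = P + Q                                  # predict (identity transition)
    K = P_pred @ np.linalg.inv(P_pred + R)          # Kalman gain
    return mu + K @ (z - mu), (np.eye(2) - K) @ P_pred

mu, P = np.zeros(2), np.eye(2)
p_jump = 0.05                                       # small probability of global motion
for t in range(50):
    if rng.random() < p_jump:
        mu, P = rng.uniform(-10, 10, size=2), np.eye(2)   # sampled global re-location
    z = mu + rng.normal(0.0, 0.2, size=2)           # noisy observation of the object position
    mu, P = kalman_update(mu, P, z)
print("final position estimate:", mu)
```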

Incomplete Dot Products for Dynamic Computation Scaling in Neural Network Inference

Title Incomplete Dot Products for Dynamic Computation Scaling in Neural Network Inference
Authors Bradley McDanel, Surat Teerapittayanon, H. T. Kung
Abstract We propose the use of incomplete dot products (IDP) to dynamically adjust the number of input channels used in each layer of a convolutional neural network during feedforward inference. IDP adds monotonically non-increasing coefficients, referred to as a “profile”, to the channels during training. The profile orders the contribution of each channel in non-increasing order. At inference time, the number of channels used can be dynamically adjusted to trade off accuracy for lowered power consumption and reduced latency by selecting only a beginning subset of channels. This approach allows for a single network to dynamically scale over a computation range, as opposed to training and deploying multiple networks to support different levels of computation scaling. Additionally, we extend the notion to multiple profiles, each optimized for some specific range of computation scaling. We present experiments on the computation and accuracy trade-offs of IDP for popular image classification models and datasets. We demonstrate that, for MNIST and CIFAR-10, IDP reduces computation significantly, e.g., by 75%, without significantly compromising accuracy. We argue that IDP provides a convenient and effective means for devices to lower computation costs dynamically to reflect the current computation budget of the system. For example, VGG-16 with 50% IDP (using only the first 50% of channels) achieves 70% in accuracy on the CIFAR-10 dataset compared to the standard network which achieves only 35% accuracy when using the reduced channel set.
Tasks Image Classification
Published 2017-10-21
URL http://arxiv.org/abs/1710.07830v1
PDF http://arxiv.org/pdf/1710.07830v1.pdf
PWC https://paperswithcode.com/paper/incomplete-dot-products-for-dynamic
Repo
Framework
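The core mechanism is easy to state in a few lines. Below is a numpy sketch of an incomplete dot product with a monotonically non-increasing profile; the linear profile shape is just one possible choice, not necessarily the paper's.

```python
# Sketch of an incomplete dot product (IDP): channels are weighted by a
# non-increasing "profile" and only a leading fraction of channels is used.
import numpy as np

rng = np.random.default_rng(3)
C = 64                                      # number of input channels
x = rng.standard_normal(C)                  # per-channel activations
w = rng.standard_normal(C)                  # per-channel weights
profile = np.linspace(1.0, 1.0 / C, C)      # monotonically non-increasing coefficients

def idp(x, w, profile, fraction):
    """Dot product over only the first `fraction` of channels."""
    k = max(1, int(round(fraction * len(x))))
    return float(np.sum(profile[:k] * x[:k] * w[:k]))

print("full IDP:", idp(x, w, profile, 1.0))
print("50% IDP:", idp(x, w, profile, 0.5))   # dynamic scaling at inference time
```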

SPLODE: Semi-Probabilistic Point and Line Odometry with Depth Estimation from RGB-D Camera Motion

Title SPLODE: Semi-Probabilistic Point and Line Odometry with Depth Estimation from RGB-D Camera Motion
Authors Pedro F. Proença, Yang Gao
Abstract Active depth cameras suffer from several limitations, which cause incomplete and noisy depth maps, and may consequently affect the performance of RGB-D Odometry. To address this issue, this paper presents a visual odometry method based on point and line features that leverages both measurements from a depth sensor and depth estimates from camera motion. Depth estimates are generated continuously by a probabilistic depth estimation framework for both types of features to compensate for the lack of depth measurements and inaccurate feature depth associations. The framework models explicitly the uncertainty of triangulating depth from both point and line observations to validate and obtain precise estimates. Furthermore, depth measurements are exploited by propagating them through a depth map registration module and using a frame-to-frame motion estimation method that considers 3D-to-2D and 2D-to-3D reprojection errors, independently. Results on RGB-D sequences captured on large indoor and outdoor scenes, where depth sensor limitations are critical, show that the combination of depth measurements and estimates through our approach is able to overcome the absence and inaccuracy of depth measurements.
Tasks Depth Estimation, Motion Estimation, Visual Odometry
Published 2017-08-09
URL http://arxiv.org/abs/1708.02837v1
PDF http://arxiv.org/pdf/1708.02837v1.pdf
PWC https://paperswithcode.com/paper/splode-semi-probabilistic-point-and-line
Repo
Framework
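As a generic illustration of combining a sensor depth measurement with a depth estimate triangulated from camera motion (not SPLODE's exact probabilistic model), inverse-variance weighting fuses the two sources according to their uncertainties:

```python
# Generic inverse-variance fusion of a measured and a triangulated depth.
def fuse_depth(z_sensor, var_sensor, z_triangulated, var_triangulated):
    """Return the fused depth and its variance."""
    w_s, w_t = 1.0 / var_sensor, 1.0 / var_triangulated
    return (w_s * z_sensor + w_t * z_triangulated) / (w_s + w_t), 1.0 / (w_s + w_t)

# A feature whose sensor depth is noisy but whose triangulated depth is confident:
depth, var = fuse_depth(z_sensor=4.2, var_sensor=0.5, z_triangulated=3.9, var_triangulated=0.05)
print(f"fused depth: {depth:.2f} m (variance {var:.3f})")
```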

Calibration for Stratified Classification Models

Title Calibration for Stratified Classification Models
Authors Chandler Zuo
Abstract In classification problems, sampling bias between training data and testing data is critical to the ranking performance of classification scores. Such bias can be both unintentionally introduced by data collection and intentionally introduced by the algorithm, such as under-sampling or weighting techniques applied to imbalanced data. When such sampling bias exists, using the raw classification score to rank observations in the testing data can lead to suboptimal results. In this paper, I investigate the optimal calibration strategy in general settings, and develop a practical solution for one specific sampling bias case, where the sampling bias is introduced by stratified sampling. The optimal solution is developed by analytically solving the problem of optimizing the ROC curve. For practical data, I propose a ranking algorithm for general classification models with stratified data. Numerical experiments demonstrate that the proposed algorithm effectively addresses the stratified sampling bias issue. Interestingly, the proposed method shows its potential applicability in two other machine learning areas: unsupervised learning and model ensembling, which can be future research topics.
Tasks Calibration
Published 2017-10-31
URL http://arxiv.org/abs/1711.00064v1
PDF http://arxiv.org/pdf/1711.00064v1.pdf
PWC https://paperswithcode.com/paper/calibration-for-stratified-classification
Repo
Framework
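For intuition about why raw scores mis-rank under sampling bias, here is one standard correction for the simple case where the negative class was undersampled at a known rate beta; it only illustrates the kind of adjustment studied in the paper, whose optimal stratified solution is derived from the ROC curve.

```python
# Standard prior-correction for negatives kept with probability beta
# (an illustration of score calibration under sampling bias, not the
# paper's optimal stratified solution).
def correct_undersampled_score(p_biased, beta):
    """Map a score learned after undersampling negatives back to an unbiased posterior."""
    return beta * p_biased / (beta * p_biased - p_biased + 1.0)

# A model trained after keeping only 10% of negatives over-estimates the positive class:
for p in (0.2, 0.5, 0.9):
    print(p, "->", round(correct_undersampled_score(p, beta=0.1), 3))
```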

Pathological Pulmonary Lobe Segmentation from CT Images using Progressive Holistically Nested Neural Networks and Random Walker

Title Pathological Pulmonary Lobe Segmentation from CT Images using Progressive Holistically Nested Neural Networks and Random Walker
Authors Kevin George, Adam P. Harrison, Dakai Jin, Ziyue Xu, Daniel J. Mollura
Abstract Automatic pathological pulmonary lobe segmentation (PPLS) enables regional analyses of lung disease, a clinically important capability. Due to often incomplete lobe boundaries, PPLS is difficult even for experts, and most prior art requires inference from contextual information. To address this, we propose a novel PPLS method that couples deep learning with the random walker (RW) algorithm. We first employ the recent progressive holistically-nested network (P-HNN) model to identify potential lobar boundaries, then generate final segmentations using an RW that is seeded and weighted by the P-HNN output. We are the first to apply deep learning to PPLS. The advantages are independence from prior airway/vessel segmentations, increased robustness in diseased lungs, and methodological simplicity that does not sacrifice accuracy. Our method posts a high mean Jaccard score of 0.888$\pm$0.164 on a held-out set of 154 CT scans from lung-disease patients, while also significantly (p < 0.001) outperforming a state-of-the-art method.
Tasks
Published 2017-08-15
URL http://arxiv.org/abs/1708.04503v1
PDF http://arxiv.org/pdf/1708.04503v1.pdf
PWC https://paperswithcode.com/paper/pathological-pulmonary-lobe-segmentation-from
Repo
Framework
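The second stage can be sketched with scikit-image's random walker, seeded from per-lobe probability maps standing in for the P-HNN output; the thresholds and beta below are illustrative, not the paper's settings.

```python
# Sketch: random walker segmentation seeded from (stand-in) lobe probability maps.
import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(4)
ct_slice = rng.normal(size=(128, 128))                 # stand-in for a CT slice
prob_maps = rng.random((3, 128, 128))                  # stand-in for P-HNN lobe probabilities
prob_maps /= prob_maps.sum(axis=0, keepdims=True)

# Seed pixels where one lobe clearly dominates; leave the rest (label 0) to the walker.
labels = np.zeros(ct_slice.shape, dtype=np.int32)
for lobe in range(3):
    labels[prob_maps[lobe] > 0.5] = lobe + 1

segmentation = random_walker(ct_slice, labels, beta=130, mode="bf")
print("lobe labels present:", np.unique(segmentation))
```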

Copy-move Forgery Detection based on Convolutional Kernel Network

Title Copy-move Forgery Detection based on Convolutional Kernel Network
Authors Yaqi Liu, Qingxiao Guan, Xianfeng Zhao
Abstract In this paper, a copy-move forgery detection method based on the Convolutional Kernel Network is proposed. Unlike methods based on conventional hand-crafted features, the Convolutional Kernel Network is a data-driven local descriptor with a deep convolutional structure. Thanks to the development of deep learning theory and widely available datasets, data-driven methods can achieve competitive performance under different conditions owing to their excellent discriminative capability. In addition, our Convolutional Kernel Network is reformulated as a series of matrix computations and convolutional operations that are easy to parallelize and accelerate on a GPU, leading to high efficiency. Appropriate preprocessing and postprocessing for the Convolutional Kernel Network are then adopted to achieve copy-move forgery detection. In particular, a segmentation-based keypoint distribution strategy is proposed and a GPU-based adaptive oversegmentation method is adopted. Numerous experiments demonstrate the effectiveness and robustness of the GPU version of the Convolutional Kernel Network, and the state-of-the-art performance of the proposed copy-move forgery detection method.
Tasks
Published 2017-07-05
URL http://arxiv.org/abs/1707.01221v1
PDF http://arxiv.org/pdf/1707.01221v1.pdf
PWC https://paperswithcode.com/paper/copy-move-forgery-detection-based-on
Repo
Framework
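A hedged sketch of the matching step common to copy-move pipelines: the descriptors here are random stand-ins for the Convolutional Kernel Network output, and pairs of keypoints with nearly identical descriptors but distant locations are flagged as copy-move candidates.

```python
# Generic copy-move candidate matching with stand-in descriptors.
import numpy as np

def copy_move_candidates(keypoints, descriptors, dist_thresh=0.1, min_offset=10.0):
    """Return index pairs whose descriptors nearly coincide but whose locations differ."""
    pairs = []
    for i in range(len(descriptors)):
        for j in range(i + 1, len(descriptors)):
            same_content = np.linalg.norm(descriptors[i] - descriptors[j]) < dist_thresh
            far_apart = np.linalg.norm(keypoints[i] - keypoints[j]) > min_offset
            if same_content and far_apart:
                pairs.append((i, j))
    return pairs

rng = np.random.default_rng(5)
kps = rng.uniform(0, 512, size=(50, 2))          # keypoint locations in a 512x512 image
descs = rng.standard_normal((50, 64))            # stand-ins for CKN descriptors
descs[40] = descs[10] + 0.001                    # simulate a copied-and-pasted patch
print(copy_move_candidates(kps, descs))          # expected: [(10, 40)]
```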

Synthesizing Filamentary Structured Images with GANs

Title Synthesizing Filamentary Structured Images with GANs
Authors He Zhao, Huiqi Li, Li Cheng
Abstract This paper aims at synthesizing filamentary structured images such as retinal fundus images and neuronal images: given a ground-truth, generate multiple realistic-looking phantoms. A ground-truth could be a binary segmentation map containing the filamentary structured morphology, while the synthesized output image is the same size as the ground-truth and has a visual appearance similar to what has been presented in the training set. Our approach is inspired by recent progress in generative adversarial nets (GANs) as well as image style transfer. In particular, it is dedicated to our problem context with the following properties: rather than requiring a large-scale dataset, it works well in the presence of as few as 10 training examples, which is common in medical image analysis; it is capable of synthesizing diverse images from the same ground-truth; and, most importantly, the synthetic images produced by our approach are demonstrated to be useful in boosting image analysis performance. Empirical examination over various benchmarks of fundus and neuronal images demonstrates the advantages of the proposed approach.
Tasks Style Transfer
Published 2017-06-07
URL http://arxiv.org/abs/1706.02185v1
PDF http://arxiv.org/pdf/1706.02185v1.pdf
PWC https://paperswithcode.com/paper/synthesizing-filamentary-structured-images
Repo
Framework

Causally Regularized Learning with Agnostic Data Selection Bias

Title Causally Regularized Learning with Agnostic Data Selection Bias
Authors Zheyan Shen, Peng Cui, Kun Kuang, Bo Li, Peixuan Chen
Abstract Most previous machine learning algorithms are based on the i.i.d. hypothesis. However, this ideal assumption is often violated in real applications, where selection bias may arise between the training and testing processes. Moreover, in many scenarios, the testing data is not even available during training, which makes traditional methods like transfer learning infeasible because they require prior knowledge of the test distribution. Therefore, addressing agnostic selection bias for robust model learning is of paramount importance for both academic research and real applications. In this paper, under the assumption that causal relationships among variables are robust across domains, we incorporate causal techniques into predictive modeling and propose a novel Causally Regularized Logistic Regression (CRLR) algorithm that jointly optimizes global confounder balancing and weighted logistic regression. Global confounder balancing helps to identify causal features, whose causal effects on the outcome are stable across domains; performing logistic regression on those causal features then yields a predictive model that is robust against the agnostic bias. To validate the effectiveness of our CRLR algorithm, we conduct comprehensive experiments on both synthetic and real-world datasets. The experimental results clearly demonstrate that our CRLR algorithm outperforms the state-of-the-art methods, and the interpretability of our method is fully illustrated by feature visualization.
Tasks Transfer Learning
Published 2017-08-22
URL http://arxiv.org/abs/1708.06656v2
PDF http://arxiv.org/pdf/1708.06656v2.pdf
PWC https://paperswithcode.com/paper/causally-regularized-learning-with-agnostic
Repo
Framework
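A simplified sketch of the kind of joint objective described above: a weighted logistic loss plus a penalty that asks the sample weights to balance the remaining covariates across each binary feature's "treated" and "control" groups. This is an illustration of the idea, not the paper's exact CRLR formulation.

```python
# Simplified CRLR-style objective: weighted logistic loss + confounder balancing.
import numpy as np

def crlr_style_objective(beta, w, X, y, lam=1.0):
    """X: binary features (n, p); y in {0, 1}; w: nonnegative sample weights."""
    p_hat = 1.0 / (1.0 + np.exp(-(X @ beta)))
    eps = 1e-9
    log_loss = -np.sum(w * (y * np.log(p_hat + eps) + (1 - y) * np.log(1 - p_hat + eps)))

    balance = 0.0
    for j in range(X.shape[1]):                      # treat each feature as the "treatment" in turn
        treated, control = X[:, j] == 1, X[:, j] == 0
        if w[treated].sum() == 0 or w[control].sum() == 0:
            continue
        others = np.delete(X, j, axis=1)
        mean_t = (w[treated][:, None] * others[treated]).sum(0) / w[treated].sum()
        mean_c = (w[control][:, None] * others[control]).sum(0) / w[control].sum()
        balance += np.sum((mean_t - mean_c) ** 2)    # weighted covariate means should match
    return log_loss + lam * balance

rng = np.random.default_rng(8)
X = rng.integers(0, 2, size=(100, 5)).astype(float)
y = rng.integers(0, 2, size=100)
print(crlr_style_objective(beta=np.zeros(5), w=np.ones(100), X=X, y=y))
```

Minimizing such an objective jointly over the coefficients and the sample weights (with the weights constrained to be nonnegative and normalized) captures the spirit of the joint optimization the abstract refers to.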

DeepRain: ConvLSTM Network for Precipitation Prediction using Multichannel Radar Data

Title DeepRain: ConvLSTM Network for Precipitation Prediction using Multichannel Radar Data
Authors Seongchan Kim, Seungkyun Hong, Minsu Joh, Sa-kwang Song
Abstract Accurate rainfall forecasting is critical because it has a great impact on people's social and economic activities. Recent trends in the literature show that deep learning (neural networks) is a promising methodology for tackling many challenging tasks. In this study, we introduce a new data-driven precipitation prediction model called DeepRain. This model predicts the amount of rainfall from weather radar data, which is three-dimensional, four-channel data, using a convolutional LSTM (ConvLSTM). ConvLSTM is a variant of the LSTM (Long Short-Term Memory) that contains a convolution operation inside the LSTM cell. For the experiment, we used radar reflectivity data from a two-year period; the input is a time series in units of 6 minutes, divided into 15 records. The output is the predicted rainfall for the input data. Experimental results show that a two-stacked ConvLSTM reduced RMSE by 23.0% compared to linear regression.
Tasks Time Series
Published 2017-11-07
URL http://arxiv.org/abs/1711.02316v1
PDF http://arxiv.org/pdf/1711.02316v1.pdf
PWC https://paperswithcode.com/paper/deeprain-convlstm-network-for-precipitation
Repo
Framework
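A hedged Keras sketch of a two-stacked ConvLSTM regressor for gridded radar input: the 15-step window and four channels follow the abstract, while the 32x32 spatial size, filter counts, and the dense regression head are assumptions.

```python
# Two-stacked ConvLSTM regressor for radar sequences (sizes partly assumed).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(15, 32, 32, 4)),    # 15 time steps of 32x32 grids, 4 channels
    tf.keras.layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=True),
    tf.keras.layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=False),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1),                         # predicted rainfall amount
])
model.compile(optimizer="adam", loss="mse")           # RMSE is the square root of this loss
model.summary()
```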

An Improved Epsilon Constraint-handling Method in MOEA/D for CMOPs with Large Infeasible Regions

Title An Improved Epsilon Constraint-handling Method in MOEA/D for CMOPs with Large Infeasible Regions
Authors Zhun Fan, Wenji Li, Xinye Cai, Han Huang, Yi Fang, Yugen You, Jiajie Mo, Caimin Wei, Erik Goodman
Abstract This paper proposes an improved epsilon constraint-handling mechanism and combines it with a decomposition-based multi-objective evolutionary algorithm (MOEA/D) to solve constrained multi-objective optimization problems (CMOPs). The proposed constrained multi-objective evolutionary algorithm (CMOEA) is named MOEA/D-IEpsilon. It adjusts the epsilon level dynamically according to the ratio of feasible to total solutions (RFS) in the current population. To evaluate the performance of MOEA/D-IEpsilon, a new set of CMOPs with two and three objectives is designed with large infeasible regions (relative to the feasible regions); these are called LIR-CMOPs. The fourteen benchmarks, LIR-CMOP1 to LIR-CMOP14, are then used to test MOEA/D-IEpsilon and four other decomposition-based CMOEAs: MOEA/D-Epsilon, MOEA/D-SR, MOEA/D-CDP and C-MOEA/D. The experimental results indicate that MOEA/D-IEpsilon is significantly better than the other four CMOEAs on all of the test instances, which shows that MOEA/D-IEpsilon is more suitable for solving CMOPs with large infeasible regions. Furthermore, a real-world problem, namely the robot gripper optimization problem, is used to test the five CMOEAs. The experimental results demonstrate that MOEA/D-IEpsilon also outperforms the other four CMOEAs on this problem.
Tasks
Published 2017-07-27
URL http://arxiv.org/abs/1707.08767v1
PDF http://arxiv.org/pdf/1707.08767v1.pdf
PWC https://paperswithcode.com/paper/an-improved-epsilon-constraint-handling
Repo
Framework
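The dynamic epsilon idea can be illustrated with a toy update rule keyed to the ratio of feasible solutions; this is an illustrative scheme only, not the exact update used by MOEA/D-IEpsilon.

```python
# Toy epsilon-level update driven by the ratio of feasible solutions (RFS).
def update_epsilon(eps, rfs, alpha=0.5, tau=0.1):
    """eps: current epsilon level; rfs: fraction of feasible solutions in the population."""
    if rfs < alpha:
        return (1.0 + tau) * eps      # mostly infeasible: relax the constraint level
    return (1.0 - tau) * eps          # enough feasible solutions: tighten toward zero

eps = 10.0
for gen, rfs in enumerate([0.1, 0.2, 0.4, 0.6, 0.8], start=1):
    eps = update_epsilon(eps, rfs)
    print(f"generation {gen}: rfs={rfs:.1f}, epsilon={eps:.2f}")
```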

Learning from Label Proportions in Brain-Computer Interfaces: Online Unsupervised Learning with Guarantees

Title Learning from Label Proportions in Brain-Computer Interfaces: Online Unsupervised Learning with Guarantees
Authors D Hübner, T Verhoeven, K Schmid, K-R Müller, M Tangermann, P-J Kindermans
Abstract Objective: Using traditional approaches, a Brain-Computer Interface (BCI) requires the collection of calibration data for new subjects prior to online use. Calibration time can be reduced or eliminated, e.g., by transfer of a pre-trained classifier or by unsupervised adaptive classification methods which learn from scratch and adapt over time. While such heuristics work well in practice, none of them can provide theoretical guarantees. Our objective is to modify an event-related potential (ERP) paradigm to work in unison with the machine learning decoder to achieve a reliable calibration-less decoding with a guarantee to recover the true class means. Method: We introduce learning from label proportions (LLP) to the BCI community as a new unsupervised and easy-to-implement classification approach for ERP-based BCIs. The LLP estimates the mean target and non-target responses based on known proportions of these two classes in different groups of the data. We modified a visual ERP speller to meet the requirements of the LLP. For evaluation, we ran simulations on artificially created data sets and conducted an online BCI study with N=13 subjects performing a copy-spelling task. Results: Theoretical considerations show that LLP is guaranteed to minimize the loss function similarly to a corresponding supervised classifier. It performed well in simulations and in the online application, where 84.5% of characters were spelled correctly on average without prior calibration. Significance: The continuously adapting LLP classifier is the first unsupervised decoder for ERP BCIs guaranteed to find the true class means. This makes it an ideal solution to avoid a tedious calibration and to tackle non-stationarities in the data. Additionally, LLP works on complementary principles compared to existing unsupervised methods, allowing for their further enhancement when combined with LLP.
Tasks Calibration
Published 2017-01-25
URL http://arxiv.org/abs/1701.07213v1
PDF http://arxiv.org/pdf/1701.07213v1.pdf
PWC https://paperswithcode.com/paper/learning-from-label-proportions-in-brain
Repo
Framework
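The heart of LLP is a small linear system: each stimulus group's mean response is a known mixture of the target and non-target class means, so two groups with different proportions suffice to recover both means. The sketch below uses synthetic data and proportions.

```python
# LLP class-mean recovery from group means with known mixing proportions (synthetic data).
import numpy as np

rng = np.random.default_rng(6)
d = 8                                              # feature dimension of an ERP epoch
mu_target = rng.normal(1.0, 1.0, size=d)
mu_nontarget = rng.normal(-1.0, 1.0, size=d)

p1, p2 = 0.7, 0.2                                  # known target proportions of two stimulus groups
g1 = p1 * mu_target + (1 - p1) * mu_nontarget + rng.normal(0, 0.01, size=d)
g2 = p2 * mu_target + (1 - p2) * mu_nontarget + rng.normal(0, 0.01, size=d)

# Solve [[p1, 1-p1], [p2, 1-p2]] @ [mu_T; mu_N] = [g1; g2] for the class means.
A = np.array([[p1, 1 - p1], [p2, 1 - p2]])
mu_hat = np.linalg.solve(A, np.vstack([g1, g2]))
print("target-mean error:", np.linalg.norm(mu_hat[0] - mu_target))
```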

Guided Signal Reconstruction Theory

Title Guided Signal Reconstruction Theory
Authors Andrew Knyazev, Akshay Gadde, Hassan Mansour, Dong Tian
Abstract An axiomatic approach to signal reconstruction is formulated, involving a sample consistent set and a guiding set, describing desired reconstructions. New frame-less reconstruction methods are proposed, based on a novel concept of a reconstruction set, defined as a shortest pathway between the sample consistent set and the guiding set. Existence and uniqueness of the reconstruction set are investigated in a Hilbert space, where the guiding set is a closed subspace and the sample consistent set is a closed plane, formed by a sampling subspace. Connections to earlier known consistent, generalized, and regularized reconstructions are clarified. New stability and reconstruction error bounds are derived, using the largest nontrivial angle between the sampling and guiding subspaces. Conjugate gradient iterative reconstruction algorithms are proposed and illustrated numerically for image magnification.
Tasks
Published 2017-02-02
URL http://arxiv.org/abs/1702.00852v1
PDF http://arxiv.org/pdf/1702.00852v1.pdf
PWC https://paperswithcode.com/paper/guided-signal-reconstruction-theory
Repo
Framework
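One plausible, heavily simplified reading of such a guided reconstruction (not the paper's exact formulation): trade off consistency with the samples y = Sx against closeness to a guiding subspace, and solve the resulting normal equations by conjugate gradient.

```python
# Simplified guided reconstruction: sample consistency + guiding-subspace penalty, solved by CG.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(7)
n, m, k = 64, 16, 8
S = rng.standard_normal((m, n))                        # sampling operator
G = np.linalg.qr(rng.standard_normal((n, k)))[0]       # orthonormal basis of the guiding subspace
x_true = G @ rng.standard_normal(k)                    # a signal lying in the guiding subspace
y = S @ x_true                                         # its samples

lam = 1.0
P_G = G @ G.T                                          # projector onto the guiding subspace
A = S.T @ S + lam * (np.eye(n) - P_G)                  # normal equations of the penalized problem
x_hat, info = cg(A, S.T @ y)
print("CG converged:", info == 0, " reconstruction error:", np.linalg.norm(x_hat - x_true))
```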