July 28, 2019

3187 words 15 mins read

Paper Group ANR 306

Direct White Matter Bundle Segmentation using Stacked U-Nets. Discrepancy-Based Algorithms for Non-Stationary Rested Bandits. Incremental learning of high-level concepts by imitation. Robust and Precise Vehicle Localization based on Multi-sensor Fusion in Diverse City Scenes. Jointly Optimizing Placement and Inference for Beacon-based Localization. …

Direct White Matter Bundle Segmentation using Stacked U-Nets

Title Direct White Matter Bundle Segmentation using Stacked U-Nets
Authors Jakob Wasserthal, Peter F. Neher, Fabian Isensee, Klaus H. Maier-Hein
Abstract The state-of-the-art method for automatically segmenting white matter bundles in diffusion-weighted MRI is tractography in conjunction with streamline cluster selection. This process involves long chains of processing steps that are not only computationally expensive but also complex to set up and tedious with respect to quality control. Direct bundle segmentation methods treat the task as a traditional image segmentation problem. While they have so far not delivered competitive results, they can potentially mitigate many of these issues. We present a novel supervised approach for direct tract segmentation that shows major performance gains. It builds upon a stacked U-Net architecture trained on manual bundle segmentations from Human Connectome Project subjects. We evaluate our approach in vivo as well as in silico using the ISMRM 2015 Tractography Challenge phantom dataset. We achieve human segmentation performance and a major performance gain over previous pipelines. We show how the learned spatial priors efficiently guide the segmentation even at lower image qualities, with little quality loss.
Tasks Semantic Segmentation
Published 2017-03-06
URL http://arxiv.org/abs/1703.02036v1
PDF http://arxiv.org/pdf/1703.02036v1.pdf
PWC https://paperswithcode.com/paper/direct-white-matter-bundle-segmentation-using
Repo
Framework
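
The stacking idea, in which a second network refines the first network's prediction, can be illustrated with a deliberately tiny sketch. The two stages below are placeholder functions (a threshold and a neighbourhood majority vote standing in for the two U-Nets, over a 1-D "signal" rather than a diffusion volume), not the paper's architecture:

```python
def stage1(signal, threshold=0.5):
    # Coarse per-voxel prediction: a bare threshold stands in for the
    # first U-Net.
    return [1 if v > threshold else 0 for v in signal]

def stage2(coarse):
    # Refinement: a majority vote over a 3-voxel neighbourhood stands in
    # for the second U-Net (a real stacked model would also see the raw
    # input). It removes isolated false positives and fills small holes,
    # which is the kind of spatial prior the stacking provides.
    refined = []
    for i in range(len(coarse)):
        window = coarse[max(0, i - 1):i + 2]
        refined.append(1 if 2 * sum(window) > len(window) else 0)
    return refined

signal = [0.1, 0.9, 0.2, 0.8, 0.9, 0.7, 0.1, 0.05]
coarse = stage1(signal)   # isolated spike at index 1, hole at index 2
refined = stage2(coarse)  # spike removed, hole filled
```

The refinement stage cleans up exactly the mistakes a purely local first stage tends to make, which is why cascading the two helps.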

Discrepancy-Based Algorithms for Non-Stationary Rested Bandits

Title Discrepancy-Based Algorithms for Non-Stationary Rested Bandits
Authors Corinna Cortes, Giulia DeSalvo, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang
Abstract We study the multi-armed bandit problem where the rewards are realizations of general non-stationary stochastic processes, a setting that generalizes many existing lines of work and analyses. In particular, we present a theoretical analysis and derive regret guarantees for rested bandits in which the reward distribution of each arm changes only when we pull that arm. Remarkably, our regret bounds are logarithmic in the number of rounds under several natural conditions. We introduce a new algorithm based on classical UCB ideas combined with the notion of weighted discrepancy, a useful tool for measuring the non-stationarity of a stochastic process. We show that the notion of discrepancy can be used to design very general algorithms and a unified framework for the analysis of multi-armed rested bandit problems with non-stationary rewards. In particular, we show that we can recover the regret guarantees of many specific instances of bandit problems with non-stationary rewards that have been studied in the literature. We also provide experiments demonstrating that our algorithms can enjoy a significant improvement in practice compared to standard benchmarks.
Tasks
Published 2017-10-29
URL http://arxiv.org/abs/1710.10657v2
PDF http://arxiv.org/pdf/1710.10657v2.pdf
PWC https://paperswithcode.com/paper/discrepancy-based-algorithms-for-non
Repo
Framework
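
The flavour of a UCB-style algorithm adapted to non-stationary rewards can be sketched as follows. This is a sliding-window UCB, a simple stand-in for the paper's discrepancy-weighted statistics (the actual algorithm derives its weights from the weighted discrepancy of each arm's reward process); the window size and reward functions are illustrative:

```python
import math

def sliding_window_ucb(reward_fns, horizon, window=50):
    # UCB computed only on each arm's most recent rewards, so stale
    # observations from a drifted process are eventually forgotten.
    history = [[] for _ in reward_fns]
    counts = [0] * len(reward_fns)
    for t in range(1, horizon + 1):
        if t <= len(reward_fns):
            arm = t - 1                        # pull each arm once
        else:
            def index(a):
                n = len(history[a])
                return sum(history[a]) / n + math.sqrt(2 * math.log(t) / n)
            arm = max(range(len(reward_fns)), key=index)
        history[arm].append(reward_fns[arm](counts[arm]))
        history[arm] = history[arm][-window:]  # forget stale rewards
        counts[arm] += 1
    return counts

# Rested setting: an arm's reward depends only on how often that arm
# itself has been pulled (here the arms are simply constant).
counts = sliding_window_ucb([lambda n: 0.9, lambda n: 0.3], horizon=200)
```

With a clearly better arm, the index rule concentrates almost all pulls on it while still occasionally re-checking the other.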

Incremental learning of high-level concepts by imitation

Title Incremental learning of high-level concepts by imitation
Authors Mina Alibeigi, Majid Nili Ahmadabadi, Babak Nadjar Araabi
Abstract Nowadays, robots are becoming companions in everyday life. To be well accepted by humans, robots should efficiently understand the meanings of their partners’ motions and body language, and respond accordingly. Learning concepts by imitation brings them this ability in a user-friendly way. This paper presents a fast and robust model for Incremental Learning of Concepts by Imitation (ILoCI). In ILoCI, observed multimodal spatio-temporal demonstrations are incrementally abstracted and generalized based on both their perceptual and functional similarities during the imitation. In this method, perceptually similar demonstrations are abstracted by a dynamic model of the mirror neuron system. An incremental method is proposed to learn their functional similarities through a limited number of interactions with the teacher. Learning all concepts together via the proposed memory rehearsal enables the robot to exploit the common structural relations among concepts, which not only expedites the learning process, especially at the initial stages, but also improves generalization ability and robustness against discrepancies between observed demonstrations. Performance of ILoCI is assessed using the standard LASA handwriting benchmark dataset. The results show the efficiency of ILoCI in concept acquisition, recognition, and generation, in addition to its robustness against variability in demonstrations.
Tasks
Published 2017-04-14
URL http://arxiv.org/abs/1704.04408v1
PDF http://arxiv.org/pdf/1704.04408v1.pdf
PWC https://paperswithcode.com/paper/incremental-learning-of-high-level-concepts
Repo
Framework

Robust and Precise Vehicle Localization based on Multi-sensor Fusion in Diverse City Scenes

Title Robust and Precise Vehicle Localization based on Multi-sensor Fusion in Diverse City Scenes
Authors Guowei Wan, Xiaolong Yang, Renlan Cai, Hao Li, Hao Wang, Shiyu Song
Abstract We present a robust and precise localization system that achieves centimeter-level localization accuracy in disparate city scenes. Our system adaptively uses information from complementary sensors such as GNSS, LiDAR, and IMU to achieve high localization accuracy and resilience in challenging scenes, such as urban downtown, highways, and tunnels. Rather than relying only on LiDAR intensity or 3D geometry, we make innovative use of LiDAR intensity and altitude cues to significantly improve localization system accuracy and robustness. Our GNSS RTK module utilizes the help of the multi-sensor fusion framework and achieves a better ambiguity resolution success rate. An error-state Kalman filter is applied to fuse the localization measurements from different sources with novel uncertainty estimation. We validate, in detail, the effectiveness of our approaches, achieving 5-10cm RMS accuracy and outperforming previous state-of-the-art systems. Importantly, our system, while deployed in a large autonomous driving fleet, made our vehicles fully autonomous in crowded city streets despite road construction that occurred from time to time. A dataset including more than 60 km of real traffic driving on various urban roads is used to comprehensively test our system.
Tasks Autonomous Driving, Sensor Fusion
Published 2017-11-15
URL https://arxiv.org/abs/1711.05805v2
PDF https://arxiv.org/pdf/1711.05805v2.pdf
PWC https://paperswithcode.com/paper/robust-and-precise-vehicle-localization-based
Repo
Framework
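
The core of Kalman-filter fusion, weighting each measurement by its uncertainty, can be shown in one dimension. This is a scalar stand-in for the paper's error-state filter over full vehicle pose, and the variances are illustrative, not from the paper:

```python
def kalman_update(x, p, z, r):
    # One scalar Kalman update: fuse the current estimate (mean x,
    # variance p) with a measurement z of variance r.
    k = p / (p + r)                  # Kalman gain
    return x + k * (z - x), (1 - k) * p

# Fuse two position fixes of very different quality: the filter moves
# almost all the way to the precise LiDAR match, then barely budges for
# the noisy GNSS fix, while its variance shrinks with each update.
x, p = 0.0, 1.0                            # prior position and variance
x, p = kalman_update(x, p, z=1.0, r=0.1)   # precise LiDAR-map match
x, p = kalman_update(x, p, z=2.0, r=4.0)   # noisy GNSS fix
```

The same gain computation, generalized to matrices over an error state, is what lets the system keep centimeter accuracy when one sensor degrades (e.g. GNSS in a tunnel).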

Jointly Optimizing Placement and Inference for Beacon-based Localization

Title Jointly Optimizing Placement and Inference for Beacon-based Localization
Authors Charles Schaff, David Yunis, Ayan Chakrabarti, Matthew R. Walter
Abstract The ability of robots to estimate their location is crucial for a wide variety of autonomous operations. In settings where GPS is unavailable, measurements of transmissions from fixed beacons provide an effective means of estimating a robot’s location as it navigates. The accuracy of such a beacon-based localization system depends both on how beacons are distributed in the environment, and how the robot’s location is inferred based on noisy and potentially ambiguous measurements. We propose an approach for making these design decisions automatically and without expert supervision, by explicitly searching for the placement and inference strategies that, together, are optimal for a given environment. Since this search is computationally expensive, our approach encodes beacon placement as a differentiable neural layer that interfaces with a neural network for inference. This formulation allows us to employ standard techniques for training neural networks to carry out the joint optimization. We evaluate this approach on a variety of environments and settings, and find that it is able to discover designs that enable high localization accuracy.
Tasks
Published 2017-03-24
URL http://arxiv.org/abs/1703.08612v2
PDF http://arxiv.org/pdf/1703.08612v2.pdf
PWC https://paperswithcode.com/paper/jointly-optimizing-placement-and-inference
Repo
Framework
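
Why placement and inference must be designed together can be seen in a toy 1-D corridor. The paper optimizes a differentiable placement layer jointly with a neural inference network by gradient descent; the sketch below instead scores placements by measurement ambiguity and searches exhaustively, which is only feasible because the toy is so small:

```python
def ambiguity(beacon, positions):
    # A range-only reading |x - beacon| cannot distinguish a position x
    # from its mirror image 2*beacon - x. Count the positions whose
    # mirror also lies inside the corridor: each one confuses inference.
    lo, hi = min(positions), max(positions)
    count = 0
    for x in positions:
        mirror = 2 * beacon - x
        if mirror != x and lo <= mirror <= hi:
            count += 1
    return count

positions = [i * 0.5 for i in range(21)]   # 1-D corridor [0, 10]
best = min(positions, key=lambda b: ambiguity(b, positions))
```

A beacon in the middle of the corridor leaves almost every position ambiguous, while a beacon at either end resolves all of them; the quality of a placement only makes sense relative to the inference it must support.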

Machine Learning, Deepest Learning: Statistical Data Assimilation Problems

Title Machine Learning, Deepest Learning: Statistical Data Assimilation Problems
Authors Henry Abarbanel, Paul Rozdeba, Sasha Shirman
Abstract We formulate a strong equivalence between machine learning and artificial intelligence methods and statistical data assimilation as used widely in the physical and biological sciences. The correspondence is that layer number in the artificial network setting is the analog of time in the data assimilation setting. Within the discussion of this equivalence we show that adding more layers (making the network deeper) is analogous to adding temporal resolution in a data assimilation framework. How one can find a candidate for the global minimum of the cost functions in the machine learning context using a method from data assimilation is discussed. Calculations on simple models from each side of the equivalence are reported. Also discussed is a framework in which the time or layer label is taken to be continuous, providing a differential equation, the Euler-Lagrange equation, which shows that the problem being solved is a two point boundary value problem familiar in the discussion of variational methods. The use of continuous layers is denoted “deepest learning”. These problems respect a symplectic symmetry in continuous time/layer phase space. Both Lagrangian versions and Hamiltonian versions of these problems are presented. Their implementation in discrete time/layers, while respecting the symplectic structure, is also addressed. The Hamiltonian version provides a direct rationale for back propagation as a solution method for the canonical momentum.
Tasks
Published 2017-07-05
URL http://arxiv.org/abs/1707.01415v1
PDF http://arxiv.org/pdf/1707.01415v1.pdf
PWC https://paperswithcode.com/paper/machine-learning-deepest-learning-statistical
Repo
Framework

Efficiently Discovering Locally Exceptional yet Globally Representative Subgroups

Title Efficiently Discovering Locally Exceptional yet Globally Representative Subgroups
Authors Janis Kalofolias, Mario Boley, Jilles Vreeken
Abstract Subgroup discovery is a local pattern mining technique to find interpretable descriptions of sub-populations that stand out on a given target variable. That is, these sub-populations are exceptional with regard to the global distribution. In this paper we argue that in many applications, such as scientific discovery, subgroups are only useful if they are additionally representative of the global distribution with regard to a control variable. That is, when the distribution of this control variable is the same, or almost the same, as over the whole data. We formalise this objective function and give an efficient algorithm to compute its tight optimistic estimator for the case of a numeric target and a binary control variable. This enables us to use the branch-and-bound framework to efficiently discover the top-k subgroups that are both exceptional as well as representative. Experimental evaluation on a wide range of datasets shows that with this algorithm we discover meaningful representative patterns and are up to orders of magnitude faster in terms of node evaluations as well as time.
Tasks
Published 2017-09-22
URL http://arxiv.org/abs/1709.07941v1
PDF http://arxiv.org/pdf/1709.07941v1.pdf
PWC https://paperswithcode.com/paper/efficiently-discovering-locally-exceptional
Repo
Framework
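
The branch-and-bound pattern with a tight optimistic estimator can be sketched as below. This sketch shows only the exceptionality side (coverage times mean target shift, pruned by the sum of positive deviations); the paper's added representativeness constraint with respect to a control variable, and its specific estimator, are omitted:

```python
def quality(rows, mu):
    # Impact quality: coverage times the subgroup's mean target shift
    # away from the global mean mu.
    if not rows:
        return 0.0
    return len(rows) * (sum(t for _, t in rows) / len(rows) - mu)

def optimistic(rows, mu):
    # Tight optimistic estimate: no refinement of this description can
    # score better than keeping exactly the rows above the global mean.
    return sum(t - mu for _, t in rows if t > mu)

def best_subgroup(data, attrs):
    mu = sum(t for _, t in data) / len(data)
    best_q, best_desc = 0.0, ()
    stack = [((), data)]
    while stack:
        desc, rows = stack.pop()
        q = quality(rows, mu)
        if q > best_q:
            best_q, best_desc = q, desc
        if optimistic(rows, mu) <= best_q:
            continue                        # prune the whole branch
        for a in attrs:
            if desc and a <= desc[-1]:
                continue                    # visit each conjunction once
            stack.append((desc + (a,), [(f, t) for f, t in rows if f[a]]))
    return best_desc, best_q

data = [({"x": True,  "y": False}, 9.0),
        ({"x": True,  "y": True},  8.0),
        ({"x": False, "y": True},  2.0),
        ({"x": False, "y": False}, 1.0)]
desc, q = best_subgroup(data, ["x", "y"])
```

Because the estimator is an upper bound on every refinement's quality, whole subtrees of the description lattice are discarded without being enumerated, which is where the orders-of-magnitude speedups come from.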

An Online Convex Optimization Approach to Dynamic Network Resource Allocation

Title An Online Convex Optimization Approach to Dynamic Network Resource Allocation
Authors Tianyi Chen, Qing Ling, Georgios B. Giannakis
Abstract Existing approaches to online convex optimization (OCO) make sequential one-slot-ahead decisions, which lead to (possibly adversarial) losses that drive subsequent decision iterates. Their performance is evaluated by the so-called regret that measures the difference of losses between the online solution and the best yet fixed overall solution in hindsight. The present paper deals with online convex optimization involving adversarial loss functions and adversarial constraints, where the constraints are revealed after making decisions, and where instantaneous violations are tolerable provided the constraints are satisfied in the long term. Performance of an online algorithm in this setting is assessed by: i) the difference of its losses relative to the best dynamic solution with one-slot-ahead information of the loss function and the constraint (here termed dynamic regret); and, ii) the accumulated amount of constraint violations (here termed dynamic fit). In this context, a modified online saddle-point (MOSP) scheme is developed, and proved to simultaneously yield sub-linear dynamic regret and fit, provided that the accumulated variations of per-slot minimizers and constraints grow sub-linearly with time. MOSP is also applied to the dynamic network resource allocation task, and it is compared with the well-known stochastic dual gradient method. Under various scenarios, numerical experiments demonstrate the performance gain of MOSP relative to the state-of-the-art.
Tasks
Published 2017-01-14
URL http://arxiv.org/abs/1701.03974v2
PDF http://arxiv.org/pdf/1701.03974v2.pdf
PWC https://paperswithcode.com/paper/an-online-convex-optimization-approach-to
Repo
Framework
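
The saddle-point mechanics behind such schemes, descend in the decision variable, ascend in a multiplier that prices constraint violation, can be sketched on a toy problem. This is a generic online primal-dual step, not the paper's exact MOSP update, and the loss, constraint, and step sizes are illustrative:

```python
def saddle_point_step(x, lam, grad_f, g, grad_g, eta=0.1, mu=0.1):
    # One primal-dual step on the Lagrangian f(x) + lam * g(x):
    # gradient descent in x, projected gradient ascent in lam >= 0.
    x_new = x - eta * (grad_f(x) + lam * grad_g(x))
    lam_new = max(0.0, lam + mu * g(x_new))
    return x_new, lam_new

# Toy instance: per-slot loss f(x) = (x - 2)^2 under the long-term
# constraint x <= 1 (kept time-invariant here for simplicity; the
# setting in the paper lets both the loss and the constraint drift).
x, lam = 0.0, 0.0
for t in range(200):
    x, lam = saddle_point_step(
        x, lam,
        grad_f=lambda v: 2 * (v - 2.0),
        g=lambda v: v - 1.0,       # constraint g(x) = x - 1 <= 0
        grad_g=lambda v: 1.0,
    )
```

The iterates briefly violate the constraint (the "fit"), the multiplier grows in response, and the pair settles at the constrained optimum x = 1 with multiplier 2; sub-linear fit means such transient violations average out over time.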

Bayesian Nonparametric Poisson-Process Allocation for Time-Sequence Modeling

Title Bayesian Nonparametric Poisson-Process Allocation for Time-Sequence Modeling
Authors Hongyi Ding, Mohammad Emtiyaz Khan, Issei Sato, Masashi Sugiyama
Abstract Analyzing the underlying structure of multiple time-sequences provides insights into the understanding of social networks and human activities. In this work, we present the Bayesian nonparametric Poisson process allocation (BaNPPA), a latent-function model for time-sequences, which automatically infers the number of latent functions. We model the intensity of each sequence as an infinite mixture of latent functions, each of which is obtained using a function drawn from a Gaussian process. We show that a technical challenge for the inference of such mixture models is the unidentifiability of the weights of the latent functions. We propose to cope with the issue by regulating the volume of each latent function within a variational inference algorithm. Our algorithm is computationally efficient and scales well to large data sets. We demonstrate the usefulness of our proposed model through experiments on both synthetic and real-world data sets.
Tasks
Published 2017-05-19
URL http://arxiv.org/abs/1705.07006v5
PDF http://arxiv.org/pdf/1705.07006v5.pdf
PWC https://paperswithcode.com/paper/bayesian-nonparametric-poisson-process
Repo
Framework

Scale-Robust Localization Using General Object Landmarks

Title Scale-Robust Localization Using General Object Landmarks
Authors Andrew Holliday, Gregory Dudek
Abstract Visual localization under large changes in scale is an important capability in many robotic mapping applications, such as localizing at low altitudes in maps built at high altitudes, or performing loop closure over long distances. Existing approaches, however, are robust only up to about a 3x difference in scale between map and query images. We propose a novel combination of deep-learning-based object features and state-of-the-art SIFT point-features that yields improved robustness to scale change. This technique is training-free and class-agnostic, and in principle can be deployed in any environment out-of-the-box. We evaluate the proposed technique on the KITTI Odometry benchmark and on a novel dataset of outdoor images exhibiting changes in visual scale of 7x and greater, which we have released to the public. Our technique consistently outperforms localization using either SIFT features or the proposed object features alone, achieving both greater accuracy and much lower failure rates under large changes in scale.
Tasks Visual Localization
Published 2017-10-28
URL http://arxiv.org/abs/1710.10466v2
PDF http://arxiv.org/pdf/1710.10466v2.pdf
PWC https://paperswithcode.com/paper/scale-robust-localization-using-general
Repo
Framework

Reinforced dynamics for enhanced sampling in large atomic and molecular systems

Title Reinforced dynamics for enhanced sampling in large atomic and molecular systems
Authors Linfeng Zhang, Han Wang, Weinan E
Abstract A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom, explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.
Tasks Efficient Exploration
Published 2017-12-10
URL http://arxiv.org/abs/1712.03461v6
PDF http://arxiv.org/pdf/1712.03461v6.pdf
PWC https://paperswithcode.com/paper/reinforced-dynamics-for-enhanced-sampling-in
Repo
Framework

An enhanced method to compute the similarity between concepts of ontology

Title An enhanced method to compute the similarity between concepts of ontology
Authors Noreddine Gherabi, Abdelhadi Daoui, Abderrahim Marzouk
Abstract With the use of ontologies in several domains such as the semantic web, information retrieval, and artificial intelligence, similarity measurement has become an important area of research. In this paper, we propose a similarity-measuring method that uses Dijkstra’s algorithm to find the shortest path between two concepts defined in the same ontology hierarchy, uses this path to compute their semantic distance, and derives the semantic similarity from that distance. Finally, we present an experimental comparison between our method and other similarity-measuring methods.
Tasks Information Retrieval, Semantic Similarity, Semantic Textual Similarity
Published 2017-09-26
URL http://arxiv.org/abs/1709.08880v1
PDF http://arxiv.org/pdf/1709.08880v1.pdf
PWC https://paperswithcode.com/paper/an-enhanced-method-to-compute-the-similarity
Repo
Framework
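
The pipeline, shortest path, then distance, then similarity, is straightforward to sketch. The graph, the unit edge weights, and the 1/(1 + distance) conversion below are illustrative stand-ins; the paper's weighting and similarity formula differ:

```python
import heapq

def shortest_path_length(graph, src, dst):
    # Dijkstra's algorithm over the concept hierarchy, treated here as
    # an undirected graph with unit edge weights.
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue                     # stale heap entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

def similarity(graph, a, b):
    # Illustrative conversion from semantic distance to similarity.
    return 1.0 / (1.0 + shortest_path_length(graph, a, b))

# Tiny is-a hierarchy: animal -> {mammal, bird}, mammal -> {dog, cat}.
edges = [("animal", "mammal", 1), ("animal", "bird", 1),
         ("mammal", "dog", 1), ("mammal", "cat", 1)]
graph = {}
for u, v, w in edges:
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))
```

Siblings like dog and cat end up closer (path length 2 through mammal) than dog and bird (path length 3 through animal), so the similarity ranking matches the intuitive taxonomy.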

Total Variation-Based Dense Depth from Multi-Camera Array

Title Total Variation-Based Dense Depth from Multi-Camera Array
Authors Hossein Javidnia, Peter Corcoran
Abstract Multi-Camera arrays are increasingly employed in both consumer and industrial applications, and various passive techniques are documented to estimate depth from such camera arrays. Current depth estimation methods provide useful estimations of depth in an imaged scene but are often impractical due to significant computational requirements. This paper presents a novel framework that generates a high-quality continuous depth map from multi-camera array/light field cameras. The proposed framework utilizes analysis of the local Epipolar Plane Image (EPI) to initiate the depth estimation process. The estimated depth map is then processed using Total Variation (TV) minimization based on the Fenchel-Rockafellar duality. Evaluation of this method on a well-known benchmark, containing both photorealistic and non-photorealistic scenes, indicates that the proposed framework performs well in terms of accuracy when compared to the top-ranked depth estimation methods and a baseline algorithm. Notably, the computation required to achieve equivalent accuracy is significantly reduced compared to the top algorithms. As a consequence, the proposed framework is suitable for deployment in consumer and industrial applications.
Tasks Depth Estimation
Published 2017-11-21
URL http://arxiv.org/abs/1711.07719v1
PDF http://arxiv.org/pdf/1711.07719v1.pdf
PWC https://paperswithcode.com/paper/total-variation-based-dense-depth-from-multi
Repo
Framework
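
The effect of the TV term, smoothing flat regions while preserving sharp depth edges, can be demonstrated on a 1-D signal. The sketch below minimizes the TV-regularized objective by plain gradient descent on a smoothed TV term; it is a simple stand-in for the Fenchel-Rockafellar primal-dual solver used in the paper, and 1-D rather than a full depth map, with all parameter values illustrative:

```python
import math

def tv_denoise(f, lam=0.1, eps=1e-2, steps=400, lr=0.1):
    # Minimise 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)
    # by gradient descent; eps smooths |d| so the gradient exists at 0.
    u = list(f)
    for _ in range(steps):
        grad = [u[i] - f[i] for i in range(len(u))]   # data fidelity
        for i in range(len(u) - 1):
            d = u[i + 1] - u[i]
            g = lam * d / math.sqrt(d * d + eps)      # d/du of smoothed |d|
            grad[i] -= g
            grad[i + 1] += g
        u = [u[i] - lr * grad[i] for i in range(len(u))]
    return u

noisy = [0.1, -0.1, 0.2, 1.1, 0.9, 1.0]   # a noisy step edge
smooth = tv_denoise(noisy)
```

The within-plateau noise is suppressed (total variation drops) while the large jump between the two plateaus, the "depth edge", survives, which is precisely why TV regularization suits depth maps better than quadratic smoothing.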

Ensemble Framework for Real-time Decision Making

Title Ensemble Framework for Real-time Decision Making
Authors Philip Rodgers, John Levine
Abstract This paper introduces a new framework for real-time decision making in video games. An Ensemble agent is a compound agent composed of multiple agents, each with its own tasks or goals to achieve. Usually when dealing with real-time decision making, reactive agents are used; that is, agents that return a decision based on the current state. While reactive agents are very fast, most games require more than just a rule-based agent to achieve good results. Deliberative agents, which use a forward model to search future states, are very useful in games with no hard time limit, such as Go or Backgammon, but generally take too long for real-time games. The Ensemble framework addresses this issue by allowing the agent to be both deliberative and reactive at the same time. This is achieved by breaking up the gameplay into logical roles and having highly focused components for each role, with each component disregarding anything outside its own role. Reactive agents can be used where a reactive agent suits the role, and where a deliberative approach is required, branching is kept to a minimum by the removal of all extraneous factors, enabling an informed decision to be made within a much smaller time-frame. An Arbiter is used to combine the component results, allowing high-performing agents to be created from simple, efficient components.
Tasks Decision Making
Published 2017-06-21
URL http://arxiv.org/abs/1706.06952v1
PDF http://arxiv.org/pdf/1706.06952v1.pdf
PWC https://paperswithcode.com/paper/ensemble-framework-for-real-time-decision
Repo
Framework
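
The arbiter pattern can be sketched in a few lines. The components, their priorities, and the priority-based combination rule below are all illustrative assumptions, not the paper's design (the paper leaves the combination strategy to the framework user):

```python
def arbiter(components, state):
    # Each role-focused component proposes (priority, action); this
    # arbiter simply executes the highest-priority proposal.
    proposals = [component(state) for component in components]
    return max(proposals, key=lambda p: p[0])[1]

def dodge(state):
    # Reactive component: an instant rule-based response to threats.
    return (10, "dodge") if state["threat_close"] else (0, "idle")

def planner(state):
    # Deliberative component: in a real agent this would search future
    # states with a forward model restricted to its own role.
    return (5, "move_to_goal")

action = arbiter([dodge, planner], {"threat_close": True})
```

Because each component only sees its own role, the reactive one stays cheap and the deliberative one searches a drastically smaller branching space, while the arbiter reconciles their outputs every frame.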

New Methods of Enhancing Prediction Accuracy in Linear Models with Missing Data

Title New Methods of Enhancing Prediction Accuracy in Linear Models with Missing Data
Authors Mohammad Amin Fakharian, Ashkan Esmaeili, Farokh Marvasti
Abstract In this paper, prediction for linear systems with missing information is investigated. New methods are introduced to improve the Mean Squared Error (MSE) on the test set in comparison to state-of-the-art methods, through appropriate tuning of the bias-variance trade-off. First, the proposed Soft Weighted Prediction (SWP) algorithm and its efficacy are demonstrated and compared to previous work on non-missing scenarios. The algorithm is then modified and optimized for missing scenarios. It is shown that controlled over-fitting by the suggested algorithms improves prediction accuracy in various cases. Simulation results confirm the effectiveness of our heuristics in enhancing prediction accuracy.
Tasks
Published 2017-01-03
URL http://arxiv.org/abs/1701.00677v1
PDF http://arxiv.org/pdf/1701.00677v1.pdf
PWC https://paperswithcode.com/paper/new-methods-of-enhancing-prediction-accuracy
Repo
Framework