Paper Group ANR 1463
Parameterized Exploration. Statistically Discriminative Sub-trajectory Mining. Task Selection Policies for Multitask Learning. Average reward reinforcement learning with unknown mixing times. Monkey Optimization System with Active Membranes: A New Meta-heuristic Optimization System. A Convolutional Cost-Sensitive Crack Localization Algorithm for Automated and Reliable RC Bridge Inspection. …
Parameterized Exploration
Title | Parameterized Exploration |
Authors | Jesse Clifton, Lili Wu, Eric Laber |
Abstract | We introduce Parameterized Exploration (PE), a simple family of methods for model-based tuning of the exploration schedule in sequential decision problems. Unlike common heuristics for exploration, our method accounts for the time horizon of the decision problem as well as the agent’s current state of knowledge of the dynamics of the decision problem. We show that our method, applied to several common exploration techniques, outperforms un-tuned counterparts in Bernoulli and Gaussian multi-armed bandits, contextual bandits, and a Markov decision process based on a mobile health (mHealth) study. We also examine the effects of the accuracy of the estimated dynamics model on the performance of PE. |
Tasks | Multi-Armed Bandits |
Published | 2019-07-13 |
URL | https://arxiv.org/abs/1907.06090v1 |
PDF | https://arxiv.org/pdf/1907.06090v1.pdf |
PWC | https://paperswithcode.com/paper/parameterized-exploration |
Repo | |
Framework | |
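The abstract describes tuning an exploration schedule against a model of the problem dynamics over the remaining horizon. As a rough illustration only (not the authors' algorithm), here is a minimal sketch in which an epsilon-greedy schedule eps_t = min(1, c/t) is tuned by simulating rollouts from plug-in arm estimates; the schedule form, the candidate grid, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_eps_greedy(p_hat, horizon, c, rng):
    """Simulate epsilon-greedy with schedule eps_t = min(1, c / t)
    on a Bernoulli bandit with (estimated) arm means p_hat."""
    counts = np.zeros(len(p_hat))
    means = np.zeros(len(p_hat))
    total = 0.0
    for t in range(1, horizon + 1):
        eps = min(1.0, c / t)
        if rng.random() < eps:
            a = int(rng.integers(len(p_hat)))
        else:
            a = int(np.argmax(means))
        r = float(rng.random() < p_hat[a])
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]
        total += r
    return total

# Tune c against the *estimated* model over the remaining horizon,
# then act with the best schedule found.
p_hat = np.array([0.4, 0.55, 0.6])   # plug-in estimates of arm means
horizon = 500
candidates = [0.5, 1.0, 2.0, 5.0, 10.0]
scores = [np.mean([run_eps_greedy(p_hat, horizon, c, rng) for _ in range(200)])
          for c in candidates]
print("tuned exploration parameter c =", candidates[int(np.argmax(scores))])
```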
Statistically Discriminative Sub-trajectory Mining
Title | Statistically Discriminative Sub-trajectory Mining |
Authors | Vo Nguyen Le Duy, Takuto Sakuma, Taiju Ishiyama, Hiroki Toda, Kazuya Nishi, Masayuki Karasuyama, Yuta Okubo, Masayuki Sunaga, Yasuo Tabei, Ichiro Takeuchi |
Abstract | We study the problem of discriminative sub-trajectory mining. Given two groups of trajectories, the goal of this problem is to extract moving patterns in the form of sub-trajectories which are more similar to sub-trajectories of one group and less similar to those of the other. We propose a new method called Statistically Discriminative Sub-trajectory Mining (SDSM) for this problem. An advantage of the SDSM method is that the statistical significance of the extracted sub-trajectories is properly controlled, in the sense that the probability of finding a false positive sub-trajectory is smaller than a specified significance threshold alpha (e.g., 0.05), which is indispensable when the method is used in scientific or social studies in noisy environments. Finding such statistically discriminative sub-trajectories from massive trajectory datasets is both computationally and statistically challenging. In the SDSM method, we resolve the difficulties by introducing a tree representation among sub-trajectories and running an efficient permutation-based statistical inference method on the tree. To the best of our knowledge, SDSM is the first method that can efficiently extract statistically discriminative sub-trajectories from massive trajectory datasets. We illustrate the effectiveness and scalability of the SDSM method by applying it to a real-world dataset with 1,000,000 trajectories, which contains 16,723,602,505 sub-trajectories. |
Tasks | |
Published | 2019-05-06 |
URL | https://arxiv.org/abs/1905.01788v1 |
PDF | https://arxiv.org/pdf/1905.01788v1.pdf |
PWC | https://paperswithcode.com/paper/statistically-discriminative-sub-trajectory |
Repo | |
Framework | |
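The key statistical device in the abstract is a permutation test that controls the false-positive probability at a chosen alpha. A minimal sketch of that building block follows, for one candidate sub-trajectory's similarity scores; the paper's actual contribution, the tree representation that makes this feasible over billions of sub-trajectories, is not reproduced here, and the toy data and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def discrimination_score(scores, labels):
    """Difference of mean similarity-to-pattern between the two groups."""
    return scores[labels == 1].mean() - scores[labels == 0].mean()

def permutation_p_value(scores, labels, n_perm=1000, rng=rng):
    """p-value of the observed score under random group assignment."""
    observed = discrimination_score(scores, labels)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)
        if discrimination_score(scores, perm) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Toy data: per-trajectory similarity to one candidate sub-trajectory.
labels = np.array([1] * 20 + [0] * 20)
scores = np.concatenate([rng.normal(1.0, 1.0, 20), rng.normal(0.0, 1.0, 20)])
p = permutation_p_value(scores, labels)
print("p =", round(p, 4), "reject at alpha=0.05:", p < 0.05)
```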
Task Selection Policies for Multitask Learning
Title | Task Selection Policies for Multitask Learning |
Authors | John Glover, Chris Hokamp |
Abstract | One of the questions that arises when designing models that learn to solve multiple tasks simultaneously is how much of the available training budget should be devoted to each individual task. We refer to any formalized approach to addressing this problem (learned or otherwise) as a task selection policy. In this work we provide an empirical evaluation of the performance of some common task selection policies in a synthetic bandit-style setting, as well as on the GLUE benchmark for natural language understanding. We connect task selection policy learning to existing work on automated curriculum learning and off-policy evaluation, and suggest a method based on counterfactual estimation that leads to improved model performance in our experimental settings. |
Tasks | |
Published | 2019-07-14 |
URL | https://arxiv.org/abs/1907.06214v1 |
PDF | https://arxiv.org/pdf/1907.06214v1.pdf |
PWC | https://paperswithcode.com/paper/task-selection-policies-for-multitask |
Repo | |
Framework | |
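The paper frames task selection as a bandit-style problem connected to automated curriculum learning. As an illustration of one common baseline from that literature (not necessarily the authors' counterfactual-estimation method), here is a hedged sketch of an EXP3 selector whose reward is a stand-in for measured learning progress; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

class Exp3TaskSelector:
    """EXP3 bandit over tasks; reward (assumed in [0, 1]) is a proxy
    for per-step learning progress on the selected task."""
    def __init__(self, n_tasks, gamma=0.1):
        self.w = np.zeros(n_tasks)      # log-weights
        self.gamma = gamma
        self.n = n_tasks

    def probs(self):
        ew = np.exp(self.w - self.w.max())
        p = ew / ew.sum()
        return (1 - self.gamma) * p + self.gamma / self.n   # mix in uniform

    def select(self):
        return rng.choice(self.n, p=self.probs())

    def update(self, task, reward):
        # Importance-weighted reward keeps the estimate unbiased.
        self.w[task] += self.gamma * reward / (self.n * self.probs()[task])

# Usage: pick a task, train one step on it, reward = measured loss decrease.
selector = Exp3TaskSelector(n_tasks=4)
for step in range(100):
    task = selector.select()
    reward = rng.random()              # stand-in for loss decrease
    selector.update(task, reward)
print("final task distribution:", np.round(selector.probs(), 3))
```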
Average reward reinforcement learning with unknown mixing times
Title | Average reward reinforcement learning with unknown mixing times |
Authors | Tom Zahavy, Alon Cohen, Haim Kaplan, Yishay Mansour |
Abstract | We derive and analyze learning algorithms for policy evaluation, apprenticeship learning, and policy gradient for average reward criteria. Existing algorithms explicitly require an upper bound on the mixing time. In contrast, we build on ideas from Markov chain theory and derive sampling algorithms that do not require such an upper bound. For these algorithms, we provide theoretical bounds on their sample-complexity and running time. |
Tasks | |
Published | 2019-05-23 |
URL | https://arxiv.org/abs/1905.09704v1 |
PDF | https://arxiv.org/pdf/1905.09704v1.pdf |
PWC | https://paperswithcode.com/paper/average-reward-reinforcement-learning-with |
Repo | |
Framework | |
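For readers unfamiliar with the average-reward criterion, the quantity under study is the gain rho = lim_{T->inf} (1/T) sum_t r_t. The sketch below only illustrates that criterion with a naive single-trajectory estimate on a toy two-state chain; the paper's algorithms achieve sample-complexity guarantees without a mixing-time bound, which this naive estimator does not, and the chain and all names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# A small Markov reward process: P[s, s'] under a fixed policy, reward r[s].
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
r = np.array([1.0, 0.0])

def empirical_average_reward(P, r, T, rng=rng):
    """Single-trajectory estimate of rho = lim (1/T) sum r_t."""
    s, total = 0, 0.0
    for _ in range(T):
        total += r[s]
        s = rng.choice(len(P), p=P[s])
    return total / T

# The stationary distribution gives the exact answer for comparison.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()
print("exact rho:", pi @ r)                  # = 2/3 for this chain
for T in [100, 1000, 10000]:                 # accuracy improves with horizon,
    print(T, empirical_average_reward(P, r, T))  # at a rate tied to mixing
```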
Monkey Optimization System with Active Membranes: A New Meta-heuristic Optimization System
Title | Monkey Optimization System with Active Membranes: A New Meta-heuristic Optimization System |
Authors | Moustafa Zein, Aboul Ella Hassanien, Ammar Adl, Adam Slowik |
Abstract | Optimization techniques, used to obtain optimal solutions in search spaces, have not solved the problem of excessive time consumption. The objective of this study is to tackle the sequential-processing problem in the Monkey Algorithm and to simulate the naturally parallel behavior of monkeys. To this end, a P system with active membranes is constructed by providing a codification of the Monkey Algorithm within the context of a cell-like P system, defining accordingly the elements of the model: membrane structure, objects, rules, and its behavior. Unlike the original algorithm, the proposed algorithm models the natural behavior of the climb process using separate membranes. Moreover, it introduces a membrane-migration process to select the best solution, and a time stamp is added as an additional stopping criterion to control the timing of the algorithm. The results indicate a substantial remedy for the time-consumption problem, a faithful representation of the natural behavior of monkeys, and a considerable chance of reaching the best solution in the meta-heuristic setting. In addition, experiments use commonly used benchmark functions to test the performance of the algorithm, as well as the expected running time of the proposed P Monkey optimization algorithm and of the traditional Monkey Algorithm as a function of population size. Time units are calculated based on the complexity of the algorithms: P Monkey takes one time unit to fire its rule(s) over a population of size n, whereas the Monkey Algorithm takes one time unit to run each mathematical equation step over the population. |
Tasks | |
Published | 2019-09-30 |
URL | https://arxiv.org/abs/1910.06283v1 |
PDF | https://arxiv.org/pdf/1910.06283v1.pdf |
PWC | https://paperswithcode.com/paper/monkey-optimization-system-with-active |
Repo | |
Framework | |
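The climb process referenced in the abstract moves a solution along a pseudo-gradient estimated from random perturbations. Below is a rough sketch under stated assumptions: "membranes" are modeled as independent sub-populations and migration as keeping the best solution across them; the benchmark function, step sizes, and names are illustrative, not the paper's P-system rules.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):
    return float(np.sum(x ** 2))   # common benchmark, minimum at 0

def climb(x, f, step=0.1, iters=50, rng=rng):
    """Monkey Algorithm climb: move along a pseudo-gradient estimated
    from random +/- step perturbations of the coordinates."""
    for _ in range(iters):
        dx = step * rng.choice([-1.0, 1.0], size=x.shape)
        pseudo_grad = (f(x + dx) - f(x - dx)) / (2 * dx)
        candidate = x - step * np.sign(pseudo_grad)   # descend
        if f(candidate) < f(x):
            x = candidate
    return x

# "Membranes" as independent sub-populations climbed separately; a
# migration step keeps the best solution found across membranes.
membranes = [rng.uniform(-5, 5, size=3) for _ in range(4)]
results = [climb(x, sphere) for x in membranes]
best = min(results, key=sphere)
print("best solution:", best, "f =", sphere(best))
```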
A Convolutional Cost-Sensitive Crack Localization Algorithm for Automated and Reliable RC Bridge Inspection
Title | A Convolutional Cost-Sensitive Crack Localization Algorithm for Automated and Reliable RC Bridge Inspection |
Authors | Seyed Omid Sajedi, Xiao Liang |
Abstract | Bridges are an essential part of the transportation infrastructure and need to be monitored periodically. Visual inspections by dedicated teams have been one of the primary tools in structural health monitoring (SHM) of bridge structures. However, such conventional methods have certain shortcomings. Manual inspections may be challenging in harsh environments and are commonly biased in nature. In the last decade, camera-equipped unmanned aerial vehicles (UAVs) have been widely used for visual inspections; however, the task of automatically extracting useful information from raw images is still challenging. In this paper, a deep learning semantic segmentation framework is proposed to automatically localize surface cracks. Due to the high imbalance of crack and background classes in images, different strategies are investigated to improve performance and reliability. The trained models are tested on real-world crack images showing impressive robustness in terms of the metrics defined by the concepts of precision and recall. These techniques can be used in SHM of bridges to extract useful information from the unprocessed images taken from UAVs. |
Tasks | Semantic Segmentation |
Published | 2019-05-23 |
URL | https://arxiv.org/abs/1905.09716v1 |
PDF | https://arxiv.org/pdf/1905.09716v1.pdf |
PWC | https://paperswithcode.com/paper/a-convolutional-cost-sensitive-crack |
Repo | |
Framework | |
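One standard strategy for the class imbalance the abstract mentions is cost-sensitive weighting of the rare crack class. A minimal sketch of a weighted pixel-wise cross-entropy follows; the weights and toy data are assumptions, not the paper's configuration.

```python
import numpy as np

def weighted_bce(p, y, w_crack=10.0, w_bg=1.0, eps=1e-7):
    """Pixel-wise cost-sensitive cross-entropy: up-weighting the rare
    crack class counteracts the crack/background imbalance."""
    p = np.clip(p, eps, 1 - eps)
    loss = -(w_crack * y * np.log(p) + w_bg * (1 - y) * np.log(1 - p))
    return loss.mean()

# Toy predicted crack-probability map and ground-truth mask.
rng = np.random.default_rng(5)
pred = rng.random((64, 64))
mask = (rng.random((64, 64)) < 0.02).astype(float)   # ~2% crack pixels
print("weighted loss:", weighted_bce(pred, mask))
```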
Active learning for binary classification with variable selection
Title | Active learning for binary classification with variable selection |
Authors | Zhanfeng Wang, Yumi Kwon, Yuan-chin Ivan Chang |
Abstract | Modern computing and communication technologies can make data collection procedures very efficient. However, our ability to analyze large data sets and/or to extract information from them is hard-pressed to keep up with our capacity for data collection. Among these huge data sets, some are not collected for any particular research purpose. For a classification problem, this means that the essential label information may not be readily obtainable in the data set at hand, and an extra labeling procedure is required so that we have enough label information for constructing a classification model. When the size of a data set is huge, labeling each subject in it costs a lot in both money and time. Thus, it is an important issue to decide which subjects should be labeled first in order to efficiently reduce the training cost/time. Active learning is a promising approach for this situation, because with active learning ideas we can select the unlabeled subjects sequentially without knowing their label information. In addition, there is no confirmed information about which variables are essential for constructing an efficient classification rule. Thus, how to merge a variable selection scheme with an active learning procedure is of interest. In this paper, we propose a procedure for building binary classification models when complete label information is not available at the beginning of the training stage. We study a model-based active learning procedure with sequential variable selection schemes, and discuss the results of the proposed procedure from both theoretical and numerical perspectives. |
Tasks | Active Learning |
Published | 2019-01-29 |
URL | http://arxiv.org/abs/1901.10079v1 |
PDF | http://arxiv.org/pdf/1901.10079v1.pdf |
PWC | https://paperswithcode.com/paper/active-learning-for-binary-classification |
Repo | |
Framework | |
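A common instantiation of the two ingredients in the abstract is uncertainty sampling combined with an L1-penalized logistic model for variable selection. The sketch below uses that combination on synthetic data; it illustrates the general idea, not the authors' specific procedure or its theory.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Synthetic pool: only the first 2 of 10 variables are informative.
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=2000) > 0).astype(int)

labeled = list(rng.choice(2000, size=20, replace=False))  # seed labels
for _ in range(10):                                       # query rounds
    # The L1 penalty performs variable selection as labels accumulate.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X)[:, 1]
    uncertainty = -np.abs(proba - 0.5)                    # closest to 0.5
    uncertainty[labeled] = -np.inf                        # skip labeled
    labeled.append(int(np.argmax(uncertainty)))           # "query" a label
print("selected variables:", np.nonzero(model.coef_[0])[0])
```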
Self-supervised Feature Learning for 3D Medical Images by Playing a Rubik’s Cube
Title | Self-supervised Feature Learning for 3D Medical Images by Playing a Rubik’s Cube |
Authors | Xinrui Zhuang, Yuexiang Li, Yifan Hu, Kai Ma, Yujiu Yang, Yefeng Zheng |
Abstract | With the development of deep learning, an increasing number of studies try to build computer-aided diagnosis systems for 3D volumetric medical data. However, as annotations of 3D medical data are difficult to acquire, the number of annotated 3D medical images is often not enough to train deep learning networks well. Self-supervised learning, which deeply exploits the information in raw data, is one potential solution for loosening the requirement on training data. In this paper, we propose a self-supervised learning framework for volumetric medical images. A novel proxy task, i.e., Rubik’s cube recovery, is formulated to pre-train 3D neural networks. The proxy task involves two operations, i.e., cube rearrangement and cube rotation, which enforce networks to learn translation- and rotation-invariant features from raw 3D data. Compared to the train-from-scratch strategy, fine-tuning from the pre-trained network leads to better accuracy on various tasks, e.g., brain hemorrhage classification and brain tumor segmentation. We show that our self-supervised learning approach can substantially boost the accuracy of 3D deep learning networks on volumetric medical datasets without using extra data. To the best of our knowledge, this is the first work focusing on the self-supervised learning of 3D neural networks. |
Tasks | Brain Tumor Segmentation |
Published | 2019-10-05 |
URL | https://arxiv.org/abs/1910.02241v1 |
PDF | https://arxiv.org/pdf/1910.02241v1.pdf |
PWC | https://paperswithcode.com/paper/self-supervised-feature-learning-for-3d |
Repo | |
Framework | |
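The proxy task can be illustrated directly: cut a volume into sub-cubes, permute them, and rotate each one, keeping the permutation and rotations as self-supervision labels. A minimal data-generation sketch follows; the grid size, rotation set, and names are assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

def rubiks_cube_sample(volume, grid=2, rng=rng):
    """Cut a 3D volume into grid^3 sub-cubes, then randomly permute
    them and rotate each one; a network's proxy task is to recover
    the permutation and the per-cube rotations."""
    d = volume.shape[0] // grid
    cubes = [volume[i*d:(i+1)*d, j*d:(j+1)*d, k*d:(k+1)*d]
             for i in range(grid) for j in range(grid) for k in range(grid)]
    perm = rng.permutation(len(cubes))                 # rearrangement label
    rots = rng.integers(0, 4, size=len(cubes))         # rotation labels
    shuffled = [np.rot90(cubes[p], k=int(r), axes=(0, 1))
                for p, r in zip(perm, rots)]
    return shuffled, perm, rots

volume = rng.normal(size=(32, 32, 32))                 # e.g. a CT/MR crop
cubes, perm, rots = rubiks_cube_sample(volume)
print(len(cubes), "cubes; permutation label:", perm)
```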
Signaling Friends and Head-Faking Enemies Simultaneously: Balancing Goal Obfuscation and Goal Legibility
Title | Signaling Friends and Head-Faking Enemies Simultaneously: Balancing Goal Obfuscation and Goal Legibility |
Authors | Anagha Kulkarni, Siddharth Srivastava, Subbarao Kambhampati |
Abstract | In order to be useful in the real world, AI agents need to plan and act in the presence of others, who may include adversarial and cooperative entities. In this paper, we consider the problem where an autonomous agent needs to act in a manner that clarifies its objectives to cooperative entities while preventing adversarial entities from inferring those objectives. We show that this problem is solvable when cooperative entities and adversarial entities use different types of sensors and/or prior knowledge. We develop two new solution approaches for computing such plans. One approach provides an optimal solution to the problem by using an IP solver to provide maximum obfuscation for adversarial entities while providing maximum legibility for cooperative entities in the environment, whereas the other approach provides a satisficing solution using heuristic-guided forward search to achieve preset levels of obfuscation and legibility for adversarial and cooperative entities respectively. We show the feasibility and utility of our algorithms through extensive empirical evaluation on problems derived from planning benchmarks. |
Tasks | |
Published | 2019-05-25 |
URL | https://arxiv.org/abs/1905.10672v2 |
PDF | https://arxiv.org/pdf/1905.10672v2.pdf |
PWC | https://paperswithcode.com/paper/balancing-goal-obfuscation-and-goal |
Repo | |
Framework | |
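The core tension in the abstract, legible to a cooperative observer yet obfuscated to an adversarial one, can be made concrete with Bayesian goal inference under two different sensor models. The toy scorer below picks the candidate plan that maximizes the cooperative observer's posterior on the true goal plus the adversarial observer's posterior entropy; it is a conceptual sketch only, not the paper's IP or heuristic-search formulation, and every name and number is an assumption.

```python
import numpy as np

def goal_posterior(obs_seq, likelihoods, n_goals=2):
    """Bayesian goal inference: P(goal | observations) for an observer
    whose sensor model gives P(obs | goal)."""
    post = np.ones(n_goals) / n_goals
    for o in obs_seq:
        post = post * likelihoods[o]
        post /= post.sum()
    return post

# Two observers with *different* sensor models over symbols 'a'/'b':
coop_lik = {"a": np.array([0.9, 0.1]), "b": np.array([0.2, 0.8])}
adv_lik  = {"a": np.array([0.6, 0.4]), "b": np.array([0.4, 0.6])}  # weaker

true_goal = 0
candidates = [["a", "a"], ["a", "b"], ["b", "b"]]   # toy "plans"

def score(plan):
    legibility = goal_posterior(plan, coop_lik)[true_goal]
    adv = goal_posterior(plan, adv_lik)
    obfuscation = -np.sum(adv * np.log(adv))        # entropy: flat is good
    return legibility + obfuscation

print("chosen plan:", max(candidates, key=score))
```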
VACL: Variance-Aware Cross-Layer Regularization for Pruning Deep Residual Networks
Title | VACL: Variance-Aware Cross-Layer Regularization for Pruning Deep Residual Networks |
Authors | Shuang Gao, Xin Liu, Lung-Sheng Chien, William Zhang, Jose M. Alvarez |
Abstract | Improving weight sparsity is a common strategy for producing lightweight deep neural networks. However, pruning models with residual learning is more challenging. In this paper, we introduce Variance-Aware Cross-Layer regularization (VACL), a novel approach to address this problem. VACL consists of two parts: cross-layer grouping and variance-aware regularization. In cross-layer grouping, the $i^{th}$ filters of layers connected by skip-connections are grouped into one regularization group. The variance-aware regularization term then takes into account both the first- and second-order statistics of the connected layers to constrain the variance within a group. Our approach can effectively improve the structural sparsity of residual models. For CIFAR10, the proposed method reduces a ResNet model by up to 79.5% with no accuracy drop, and reduces a ResNeXt model by up to 82% with less than 1% accuracy drop. For ImageNet, it yields a pruning ratio of up to 63.3% with less than 1% top-5 accuracy drop. Our experimental results show that the proposed approach significantly outperforms other state-of-the-art methods in terms of overall model size and accuracy. |
Tasks | |
Published | 2019-09-10 |
URL | https://arxiv.org/abs/1909.04485v1 |
PDF | https://arxiv.org/pdf/1909.04485v1.pdf |
PWC | https://paperswithcode.com/paper/vacl-variance-aware-cross-layer |
Repo | |
Framework | |
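A rough sketch of the regularizer's shape, under stated assumptions: the i-th filters of the skip-connected layers form one group, and the penalty combines a group norm (for structural sparsity) with the variance of the per-layer filter norms inside the group. The exact statistics and weighting in the paper may differ; everything here, including the coefficients, is illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def vacl_penalty(layers, lam_group=1e-3, lam_var=1e-3):
    """Variance-aware cross-layer penalty (sketch): for each filter
    index i, collect the norms of the i-th filter across the
    skip-connected layers, then penalize the group norm plus the
    variance of those norms within the group."""
    n_filters = layers[0].shape[0]
    total = 0.0
    for i in range(n_filters):
        norms = np.array([np.linalg.norm(W[i]) for W in layers])
        total += lam_group * np.linalg.norm(norms) + lam_var * norms.var()
    return total

# Three conv layers joined by skip connections: (filters, in, k, k).
layers = [rng.normal(size=(16, 8, 3, 3)) for _ in range(3)]
print("penalty:", vacl_penalty(layers))
```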
Hardware/Software Co-Exploration of Neural Architectures
Title | Hardware/Software Co-Exploration of Neural Architectures |
Authors | Weiwen Jiang, Lei Yang, Edwin Sha, Qingfeng Zhuge, Shouzhen Gu, Sakyasingha Dasgupta, Yiyu Shi, Jingtong Hu |
Abstract | We propose a novel hardware and software co-exploration framework for efficient neural architecture search (NAS). Different from existing hardware-aware NAS, which assumes a fixed hardware design and explores only the neural architecture search space, our framework simultaneously explores both the architecture search space and the hardware design space to identify the best neural architecture and hardware pairs that maximize both test accuracy and hardware efficiency. Such a practice greatly opens up the design freedom and pushes forward the Pareto frontier between hardware efficiency and test accuracy for better design tradeoffs. The framework iteratively performs a two-level (fast and slow) exploration. Without lengthy training, the fast exploration can effectively fine-tune hyperparameters and prune inferior architectures in terms of hardware specifications, which significantly accelerates the NAS process. Then, the slow exploration trains candidates on a validation set and updates a controller using reinforcement learning to maximize the expected accuracy together with the hardware efficiency. Experiments on ImageNet show that our co-exploration NAS can find neural architectures and associated hardware designs with the same accuracy but 35.24% higher throughput, 54.05% higher energy efficiency and 136x reduced search time, compared with state-of-the-art hardware-aware NAS. |
Tasks | Neural Architecture Search |
Published | 2019-07-06 |
URL | https://arxiv.org/abs/1907.04650v2 |
PDF | https://arxiv.org/pdf/1907.04650v2.pdf |
PWC | https://paperswithcode.com/paper/hardwaresoftware-co-exploration-of-neural |
Repo | |
Framework | |
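The two-level loop in the abstract can be caricatured with random search: a fast analytic hardware proxy prunes candidate pairs before a slow accuracy evaluation is paid for. The sketch below substitutes random search and toy proxy functions for the paper's RL controller and real training; every function and threshold is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)

def hw_efficiency(arch, hw):            # fast, analytic proxy (toy)
    return 1.0 / (1.0 + arch["width"] * arch["depth"] / hw["pe_count"])

def accuracy_proxy(arch):               # slow evaluation (stand-in)
    return 1 - np.exp(-0.05 * arch["width"] * arch["depth"]) \
             + 0.01 * rng.normal()

best, best_score = None, -np.inf
for _ in range(200):
    arch = {"depth": int(rng.integers(4, 20)),
            "width": int(rng.integers(16, 128))}
    hw = {"pe_count": int(rng.integers(64, 1024))}
    # Fast level: prune pairs that violate the hardware-efficiency spec
    # without paying for training.
    eff = hw_efficiency(arch, hw)
    if eff < 0.3:
        continue
    # Slow level: evaluate accuracy and keep the best joint design.
    score = accuracy_proxy(arch) + eff
    if score > best_score:
        best, best_score = (arch, hw), score
print("best architecture/hardware pair:", best)
```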
Anchoring Theory in Sequential Stackelberg Games
Title | Anchoring Theory in Sequential Stackelberg Games |
Authors | Jan Karwowski, Jacek Mańdziuk, Adam Żychowski |
Abstract | An underlying assumption of Stackelberg Games (SGs) is perfect rationality of the players. However, in real-life situations (which are often modeled by SGs) the followers (terrorists, thieves, poachers or smugglers) – as humans in general – may act not in a perfectly rational way, as their decisions may be affected by biases of various kinds which bound the rationality of their decisions. One of the popular models of bounded rationality (BR) is Anchoring Theory (AT), which claims that humans have a tendency to flatten probabilities of available options, i.e., they perceive a distribution of these probabilities as being closer to the uniform distribution than it really is. This paper proposes an efficient formulation of AT in sequential extensive-form SGs (named ATSG), suitable for Mixed-Integer Linear Program (MILP) solution methods. ATSG is implemented in three MILP/LP-based state-of-the-art methods for solving sequential SGs and two recently introduced non-MILP approaches: one relying on Monte Carlo sampling (O2UCT) and the other (EASG) employing Evolutionary Algorithms. Experimental evaluation indicates that both non-MILP heuristic approaches scale better in time than MILP solutions while providing optimal or close-to-optimal solutions. Beyond competitive time scalability, an additional asset of the non-MILP methods is the flexibility of the potential BR formulations they are able to incorporate. While MILP approaches accept BR formulations with linear constraints only, no restrictions on the BR form are imposed in either of the two non-MILP methods. |
Tasks | |
Published | 2019-12-07 |
URL | https://arxiv.org/abs/1912.03564v2 |
PDF | https://arxiv.org/pdf/1912.03564v2.pdf |
PWC | https://paperswithcode.com/paper/anchoring-theory-in-sequential-stackelberg |
Repo | |
Framework | |
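The AT distortion itself is one line: the follower perceives the leader's mixed strategy flattened toward the uniform distribution. A minimal sketch, with the flattening weight gamma as an assumed parameter:

```python
import numpy as np

def anchored(p, gamma=0.3):
    """Anchoring Theory distortion: the follower perceives the leader's
    mixed strategy p as flattened toward the uniform distribution."""
    return (1 - gamma) * np.asarray(p) + gamma / len(p)

p = np.array([0.7, 0.2, 0.1])       # true coverage probabilities
print("perceived:", anchored(p))     # -> [0.59 0.24 0.17]
```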
Rethinking Medical Image Reconstruction via Shape Prior, Going Deeper and Faster: Deep Joint Indirect Registration and Reconstruction
Title | Rethinking Medical Image Reconstruction via Shape Prior, Going Deeper and Faster: Deep Joint Indirect Registration and Reconstruction |
Authors | Jiulong Liu, Angelica I. Aviles-Rivero, Hui Ji, Carola-Bibiane Schönlieb |
Abstract | Indirect image registration is a promising technique to improve image reconstruction quality by providing a shape prior for the reconstruction task. In this paper, we propose a novel hybrid method that seeks to reconstruct high-quality images from few measurements whilst requiring low computational cost. To this end, our framework intertwines the indirect registration and reconstruction tasks in a single functional. It is based on two major novelties. Firstly, we introduce a model based on deep nets to solve the indirect registration problem, in which the inversion and registration mappings are recurrently connected through a fixed-point interaction based sparse optimisation. Secondly, we introduce specific inversion blocks, which use the explicit physical forward operator, to map the acquired measurements to the image reconstruction. We also introduce registration blocks based on deep nets to predict the registration parameters and warp transformation accurately and efficiently. We demonstrate, through extensive numerical and visual experiments, that our framework significantly outperforms classic reconstruction schemes and other bi-task methods, in terms of both image quality and computational time. Finally, we show the generalisation capabilities of our approach by demonstrating its performance on fast Magnetic Resonance Imaging (MRI), sparse-view computed tomography (CT) and low-dose CT with measurements much below the Nyquist limit. |
Tasks | Computed Tomography (CT), Image Reconstruction, Image Registration |
Published | 2019-12-16 |
URL | https://arxiv.org/abs/1912.07648v1 |
PDF | https://arxiv.org/pdf/1912.07648v1.pdf |
PWC | https://paperswithcode.com/paper/rethinking-medical-image-reconstruction-via |
Repo | |
Framework | |
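The idea of registering a shape prior through the measurements, then reconstructing with that prior, can be shown in a 1D toy. The sketch below uses an integer-shift warp, a subsampling forward operator, and plain gradient descent as stand-ins; it is not the paper's recurrent deep-net architecture, and all sizes and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

n = 64
x_true = np.zeros(n); x_true[20:30] = 1.0            # unknown image
template = np.roll(x_true, 7)                         # shape prior, misaligned
A = np.eye(n)[::4]                                    # sparse measurements
y = A @ x_true + 0.01 * rng.normal(size=A.shape[0])

# Indirect registration: align the prior to the data *through* the
# forward operator, since x itself is never observed directly.
theta = min(range(-10, 11),
            key=lambda t: np.sum((A @ np.roll(template, t) - y) ** 2))
prior = np.roll(template, theta)

# Reconstruction: gradient descent on ||Ax - y||^2 + mu ||x - prior||^2,
# the data fit plus the registered shape prior.
x, mu = np.zeros(n), 0.5
for _ in range(300):
    grad = 2 * A.T @ (A @ x - y) + 2 * mu * (x - prior)
    x -= 0.1 * grad

print("recovered shift:", theta, "(true: -7)")
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```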
Activation Functions for Generalized Learning Vector Quantization - A Performance Comparison
Title | Activation Functions for Generalized Learning Vector Quantization - A Performance Comparison |
Authors | Thomas Villmann, John Ravichandran, Andrea Villmann, David Nebel, Marika Kaden |
Abstract | An appropriate choice of the activation function (like ReLU, sigmoid or swish) plays an important role in the performance of (deep) multilayer perceptrons (MLP) for classification and regression learning. Prototype-based classification learning methods like (generalized) learning vector quantization (GLVQ) are powerful alternatives. These models also deal with activation functions, but here they are applied to the so-called classifier function instead. In this paper we investigate activation functions known to be successful for MLPs regarding their application in GLVQ and their influence on performance. |
Tasks | Quantization |
Published | 2019-01-17 |
URL | http://arxiv.org/abs/1901.05995v1 |
PDF | http://arxiv.org/pdf/1901.05995v1.pdf |
PWC | https://paperswithcode.com/paper/activation-functions-for-generalized-learning |
Repo | |
Framework | |
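The classifier function the abstract refers to is mu(x) = (d+ - d-)/(d+ + d-), where d+ (d-) is the squared distance to the closest prototype with the correct (wrong) label; the activation is applied to mu inside the loss. A minimal sketch under that reading, with prototypes, the beta parameter, and the candidate activations chosen for illustration:

```python
import numpy as np

def glvq_mu(x, protos, labels, y):
    """GLVQ classifier function mu(x) = (d+ - d-) / (d+ + d-);
    mu < 0 means a correct classification."""
    d = np.sum((protos - x) ** 2, axis=1)
    d_plus = d[labels == y].min()
    d_minus = d[labels != y].min()
    return (d_plus - d_minus) / (d_plus + d_minus)

def sigmoid(m, beta=2.0):          # one candidate activation f(mu)
    return 1.0 / (1.0 + np.exp(-beta * m))

def swish(m, beta=2.0):            # another candidate from the MLP world
    return m * sigmoid(m, beta)

protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
mu = glvq_mu(np.array([0.2, 0.1]), protos, labels, y=0)
print("mu:", mu, "sigmoid loss:", sigmoid(mu), "swish loss:", swish(mu))
```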
Best-scored Random Forest Classification
Title | Best-scored Random Forest Classification |
Authors | Hanyuan Hang, Xiaoyu Liu, Ingo Steinwart |
Abstract | We propose an algorithm named best-scored random forest for binary classification problems. The terminology “best-scored” means that each single tree in the forest is selected as the one with the best empirical performance out of a certain number of purely random tree candidates. In this way, the resulting forest can be more accurate than the original purely random forest. From the theoretical perspective, within the framework of regularized empirical risk minimization penalized on the number of splits, we establish almost optimal convergence rates for the proposed best-scored random trees under certain conditions, which can be extended to the best-scored random forest. In addition, we present a counterexample to illustrate that in order to ensure the consistency of the forest, every dimension must have the chance to be split. In the numerical experiments, for the sake of efficiency, we employ an adaptive random splitting criterion. Comparative experiments with other state-of-the-art classification methods demonstrate the accuracy of our best-scored random forest. |
Tasks | |
Published | 2019-05-27 |
URL | https://arxiv.org/abs/1905.11028v1 |
PDF | https://arxiv.org/pdf/1905.11028v1.pdf |
PWC | https://paperswithcode.com/paper/best-scored-random-forest-classification |
Repo | |
Framework | |
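A hedged sketch of the selection idea: each forest member is the best, on held-out data, of several randomized candidate trees. sklearn's splitter="random" is used here as a stand-in for the paper's purely random partitions, and the candidate counts and scoring are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(11)

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                            random_state=0)

def best_scored_tree(n_candidates=10):
    """Each forest member is the best of several randomized trees,
    scored on held-out data."""
    trees = [DecisionTreeClassifier(splitter="random",
                                    random_state=int(rng.integers(10**6)))
             .fit(X_tr, y_tr) for _ in range(n_candidates)]
    return max(trees, key=lambda t: t.score(X_val, y_val))

forest = [best_scored_tree() for _ in range(25)]
votes = np.mean([t.predict(X_val) for t in forest], axis=0)  # majority vote
print("ensemble accuracy:", np.mean((votes > 0.5) == y_val))
```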