Paper Group ANR 309
Knowledge Adaptation for Efficient Semantic Segmentation
Title | Knowledge Adaptation for Efficient Semantic Segmentation |
Authors | Tong He, Chunhua Shen, Zhi Tian, Dong Gong, Changming Sun, Youliang Yan |
Abstract | Both accuracy and efficiency are of significant importance to the task of semantic segmentation. Existing deep FCNs suffer from heavy computations due to a series of high-resolution feature maps for preserving the detailed knowledge in dense estimation. Although reducing the feature map resolution (i.e., applying a large overall stride) via subsampling operations (e.g., pooling and convolution striding) can instantly increase the efficiency, it dramatically decreases the estimation accuracy. To tackle this dilemma, we propose a knowledge distillation method tailored for semantic segmentation to improve the performance of compact FCNs with a large overall stride. To handle the inconsistency between the features of the student and teacher networks, we optimize the feature similarity in a transferred latent domain formulated by utilizing a pre-trained autoencoder. Moreover, an affinity distillation module is proposed to capture long-range dependencies by calculating the non-local interactions across the whole image. To validate the effectiveness of our proposed method, extensive experiments have been conducted on three popular benchmarks: Pascal VOC, Cityscapes and Pascal Context. Built upon a highly competitive baseline, our proposed method can improve the performance of a student network by 2.5% (mIoU improves from 70.2 to 72.7 on the Cityscapes test set) and can train a better compact model with only 8% of the floating-point operations (FLOPs) of a model that achieves comparable performance. |
Tasks | Semantic Segmentation |
Published | 2019-03-12 |
URL | http://arxiv.org/abs/1903.04688v1 | http://arxiv.org/pdf/1903.04688v1.pdf |
PWC | https://paperswithcode.com/paper/knowledge-adaptation-for-efficient-semantic |
Repo | |
Framework | |
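The affinity distillation module described above captures long-range dependencies by matching non-local pairwise interactions between student and teacher feature maps. A minimal NumPy sketch of that idea, under illustrative assumptions (the function names and the mean-squared matching loss are ours, not necessarily the paper's exact formulation):

```python
import numpy as np

def affinity_matrix(feats):
    """Pairwise affinities between all spatial positions of a feature map.

    feats: array of shape (H*W, C) -- one feature vector per position.
    Rows are L2-normalised so affinities lie in [-1, 1].
    """
    norm = np.linalg.norm(feats, axis=1, keepdims=True)
    unit = feats / np.maximum(norm, 1e-12)
    return unit @ unit.T  # (H*W, H*W) non-local interaction matrix

def affinity_distillation_loss(student_feats, teacher_feats):
    """Mean squared difference between student and teacher affinity maps."""
    a_s = affinity_matrix(student_feats)
    a_t = affinity_matrix(teacher_feats)
    return float(np.mean((a_s - a_t) ** 2))
```

In practice the two inputs would be intermediate feature maps of the compact student FCN and the large teacher FCN, flattened over spatial positions.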
SampleFix: Learning to Correct Programs by Sampling Diverse Fixes
Title | SampleFix: Learning to Correct Programs by Sampling Diverse Fixes |
Authors | Hossein Hajipour, Apratim Bhattacharya, Mario Fritz |
Abstract | Automatic program correction is an active topic of research, which holds the potential of dramatically improving productivity of programmers during the software development process and correctness of software in general. Recent advances in machine learning, deep learning and NLP have rekindled the hope to eventually fully automate the process of repairing programs. A key challenge is ambiguity, as multiple codes – or fixes – can implement the same functionality. In addition, datasets by nature fail to capture the variance introduced by such ambiguities. Therefore, we propose a deep generative model to automatically correct programming errors by learning a distribution of potential fixes. Our model is formulated as a deep conditional variational autoencoder that samples diverse fixes for the given erroneous programs. In order to account for ambiguity and inherent lack of representative datasets, we propose a novel regularizer to encourage the model to generate diverse fixes. Our evaluations on common programming errors show for the first time the generation of diverse fixes and strong improvements over the state-of-the-art approaches by fixing up to 65% of the mistakes. |
Tasks | |
Published | 2019-06-24 |
URL | https://arxiv.org/abs/1906.10502v2 | https://arxiv.org/pdf/1906.10502v2.pdf |
PWC | https://paperswithcode.com/paper/samplefix-learning-to-correct-programs-by |
Repo | |
Framework | |
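The abstract mentions a novel regularizer that encourages diverse fixes but does not spell out its form. One common device for training a sampler toward diverse hypotheses under ambiguity is a best-of-k objective, in which only the closest of k sampled candidates is penalized; the sketch below is an illustrative stand-in, not the paper's actual regularizer:

```python
import numpy as np

def best_of_k_loss(candidates, target):
    """Best-of-k objective: only the closest of k sampled fixes is penalised.

    candidates: (k, d) array of k sampled fixes (e.g. embedded token sequences).
    target: (d,) array, the single reference fix present in the dataset.
    Because only the best sample receives gradient, the remaining samples are
    free to spread out, encouraging diverse hypotheses for ambiguous inputs.
    """
    dists = np.sum((candidates - target) ** 2, axis=1)
    return float(dists.min())
```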
Scaling Robot Supervision to Hundreds of Hours with RoboTurk: Robotic Manipulation Dataset through Human Reasoning and Dexterity
Title | Scaling Robot Supervision to Hundreds of Hours with RoboTurk: Robotic Manipulation Dataset through Human Reasoning and Dexterity |
Authors | Ajay Mandlekar, Jonathan Booher, Max Spero, Albert Tung, Anchit Gupta, Yuke Zhu, Animesh Garg, Silvio Savarese, Li Fei-Fei |
Abstract | Large, richly annotated datasets have accelerated progress in fields such as computer vision and natural language processing, but replicating these successes in robotics has been challenging. While prior data collection methodologies such as self-supervision have resulted in large datasets, the data can have poor signal-to-noise ratio. By contrast, previous efforts to collect task demonstrations with humans provide better quality data, but they cannot reach the same data magnitude. Furthermore, neither approach places guarantees on the diversity of the data collected, in terms of solution strategies. In this work, we leverage and extend the RoboTurk platform to scale up data collection for robotic manipulation using remote teleoperation. The primary motivation for our platform is two-fold: (1) to address the shortcomings of prior work and increase the total quantity of manipulation data collected through human supervision by an order of magnitude without sacrificing the quality of the data and (2) to collect data on challenging manipulation tasks across several operators and observe a diverse set of emergent behaviors and solutions. We collected over 111 hours of robot manipulation data across 54 users and 3 challenging manipulation tasks in 1 week, resulting in the largest robot dataset collected via remote teleoperation. We evaluate the quality of our platform, the diversity of demonstrations in our dataset, and the utility of our dataset via quantitative and qualitative analysis. For additional results, supplementary videos, and to download our dataset, visit http://roboturk.stanford.edu/realrobotdataset . |
Tasks | |
Published | 2019-11-11 |
URL | https://arxiv.org/abs/1911.04052v1 | https://arxiv.org/pdf/1911.04052v1.pdf |
PWC | https://paperswithcode.com/paper/scaling-robot-supervision-to-hundreds-of |
Repo | |
Framework | |
Optimal Experiment Design in Nonlinear Parameter Estimation with Exact Confidence Regions
Title | Optimal Experiment Design in Nonlinear Parameter Estimation with Exact Confidence Regions |
Authors | Anwesh Reddy Gottu Mukkula, Radoslav Paulen |
Abstract | A model-based optimal experiment design (OED) of nonlinear systems is studied. OED represents a methodology for optimizing the geometry of the parametric joint-confidence regions (CRs), which are obtained in an a posteriori analysis of the least-squares parameter estimates. The optimal design is achieved by using the available (experimental) degrees of freedom such that more informative measurements are obtained. Unlike the commonly used approaches, which base the OED procedure upon the linearized CRs, we explore a path where we explicitly consider the exact CRs in the OED framework. We use a methodology for a finite parametrization of the exact CRs within the OED problem and we introduce a novel approximation technique of the exact CRs using inner- and outer-approximating ellipsoids as a computationally less demanding alternative. The employed techniques give the OED problem as a finite-dimensional mathematical program of bilevel nature. We use two small-scale illustrative case studies to study various OED criteria and compare the resulting optimal designs with the commonly used linearization-based approach. We also assess the performance of two simple heuristic numerical schemes for bilevel optimization within the studied problems. |
Tasks | Bilevel Optimization
Published | 2019-02-03 |
URL | http://arxiv.org/abs/1902.00931v1 | http://arxiv.org/pdf/1902.00931v1.pdf |
PWC | https://paperswithcode.com/paper/optimal-experiment-design-in-nonlinear |
Repo | |
Framework | |
Understanding Human Context in 3D Scenes by Learning Spatial Affordances with Virtual Skeleton Models
Title | Understanding Human Context in 3D Scenes by Learning Spatial Affordances with Virtual Skeleton Models |
Authors | Lasitha Piyathilaka, Sarath Kodagoda |
Abstract | Robots are often required to operate in environments where humans are not present, yet still require human context information for better human-robot interaction. Even when humans are present in the environment, detecting their presence in cluttered scenes can be challenging. As a solution to this problem, this paper presents the concept of a spatial affordance map, which learns human context by looking at geometric features of the environment. Instead of observing real humans to learn human context, it uses virtual human models and their relationships with the environment to map hidden human affordances in 3D scenes, placing virtual skeleton models in the scenes together with their confidence values. The spatial affordance map learning problem is formulated as a multi-label classification problem that can be learned using Support Vector Machine (SVM) based learners. Experiments carried out on a real 3D scene dataset produced promising results and demonstrated the applicability of the affordance map for capturing human context. |
Tasks | Multi-Label Classification |
Published | 2019-06-13 |
URL | https://arxiv.org/abs/1906.05498v1 | https://arxiv.org/pdf/1906.05498v1.pdf |
PWC | https://paperswithcode.com/paper/understanding-human-context-in-3d-scenes-by |
Repo | |
Framework | |
Modeling AGI Safety Frameworks with Causal Influence Diagrams
Title | Modeling AGI Safety Frameworks with Causal Influence Diagrams |
Authors | Tom Everitt, Ramana Kumar, Victoria Krakovna, Shane Legg |
Abstract | Proposals for safe AGI systems are typically made at the level of frameworks, specifying how the components of the proposed system should be trained and interact with each other. In this paper, we model and compare the most promising AGI safety frameworks using causal influence diagrams. The diagrams show the optimization objective and causal assumptions of the framework. The unified representation permits easy comparison of frameworks and their assumptions. We hope that the diagrams will serve as an accessible and visual introduction to the main AGI safety frameworks. |
Tasks | |
Published | 2019-06-20 |
URL | https://arxiv.org/abs/1906.08663v1 | https://arxiv.org/pdf/1906.08663v1.pdf |
PWC | https://paperswithcode.com/paper/modeling-agi-safety-frameworks-with-causal |
Repo | |
Framework | |
Anticipatory Thinking: A Metacognitive Capability
Title | Anticipatory Thinking: A Metacognitive Capability |
Authors | Adam Amos-Binks, Dustin Dannenhauer |
Abstract | Anticipatory thinking is a complex cognitive process for assessing and managing risk in many contexts. Humans use anticipatory thinking to identify potential future issues and proactively take actions to manage their risks. In this paper we define a cognitive systems approach to anticipatory thinking as a metacognitive goal reasoning mechanism. The contributions of this paper include (1) defining anticipatory thinking in the MIDCA cognitive architecture, (2) operationalizing anticipatory thinking as a three step process for managing risk in plans, and (3) a numeric risk assessment calculating an expected cost-benefit ratio for modifying a plan with anticipatory actions. |
Tasks | |
Published | 2019-06-28 |
URL | https://arxiv.org/abs/1906.12249v1 | https://arxiv.org/pdf/1906.12249v1.pdf |
PWC | https://paperswithcode.com/paper/anticipatory-thinking-a-metacognitive |
Repo | |
Framework | |
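Contribution (3) above is a numeric risk assessment based on an expected cost-benefit ratio for adding anticipatory actions to a plan. The paper's exact formula is not reproduced in the abstract; one plausible minimal form (the function name, parameters, and the formula itself are illustrative assumptions) is:

```python
def anticipatory_value(p_issue, issue_cost, action_cost, p_mitigated=1.0):
    """Expected cost-benefit ratio of adding an anticipatory action to a plan.

    p_issue: probability that the anticipated future issue occurs.
    issue_cost: cost incurred if the issue occurs and is unmitigated.
    action_cost: cost of inserting the anticipatory action now.
    p_mitigated: probability that the action actually averts the issue.
    A ratio above 1 suggests the modification is worth its cost.
    """
    expected_avoided = p_issue * p_mitigated * issue_cost
    return expected_avoided / action_cost
```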
Interactive user interface based on Convolutional Auto-encoders for annotating CT-scans
Title | Interactive user interface based on Convolutional Auto-encoders for annotating CT-scans |
Authors | Martin Längkvist, Jonas Widell, Per Thunberg, Amy Loutfi, Mats Lidén |
Abstract | High resolution computed tomography (HRCT) is the most important imaging modality for interstitial lung diseases, where the radiologists are interested in identifying certain patterns, and their volumetric and regional distribution. The use of machine learning can assist the radiologists with both these tasks by performing semantic segmentation. In this paper, we propose an interactive annotation tool for semantic segmentation that assists the radiologist in labeling CT scans. The annotation tool is evaluated by six radiologists and radiology residents classifying healthy lung and reticular pattern in HRCT images. The usability of the system is evaluated with a System Usability Scale (SUS) score and interaction information from the readers that used the tool for annotating the CT volumes. The experienced usability and the way the users interacted with the system differed between users. A higher SUS score was given by users that prioritized learning speed over model accuracy and spent less time on manual labeling, instead utilizing the suggestions provided by the GUI. An analysis of the annotation variations between the readers shows substantial agreement (Cohen's kappa = 0.69) for classification of healthy and affected lung parenchyma in pulmonary fibrosis. The inter-reader variation is a challenge for the definition of ground truth. |
Tasks | Semantic Segmentation |
Published | 2019-04-26 |
URL | http://arxiv.org/abs/1904.11701v1 | http://arxiv.org/pdf/1904.11701v1.pdf |
PWC | https://paperswithcode.com/paper/interactive-user-interface-based-on |
Repo | |
Framework | |
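The abstract above reports inter-reader agreement as Cohen's kappa = 0.69. For reference, Cohen's kappa for two raters is the observed agreement corrected for the agreement expected by chance; a small self-contained implementation:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    labels_a, labels_b: equal-length sequences of category labels, one
    entry per item rated by both raters. Returns 1.0 for perfect
    agreement and 0.0 for agreement at chance level.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    count_a = Counter(labels_a)
    count_b = Counter(labels_b)
    # Chance agreement: probability both raters pick the same category
    # if each rated independently with their own marginal frequencies.
    expected = sum(count_a[c] * count_b[c]
                   for c in set(count_a) | set(count_b)) / (n * n)
    return (observed - expected) / (1 - expected)
```

(The formula is undefined when chance agreement is exactly 1, i.e. both raters always emit the same single label.)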
Explainable Machine Learning in Deployment
Title | Explainable Machine Learning in Deployment |
Authors | Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José M. F. Moura, Peter Eckersley |
Abstract | Explainable machine learning offers the potential to provide stakeholders with insights into model behavior by using various methods such as feature importance scores, counterfactual explanations, or influential training data. Yet there is little understanding of how organizations use these methods in practice. This study explores how organizations view and use explainability for stakeholder consumption. We find that, currently, the majority of deployments are not for end users affected by the model but rather for machine learning engineers, who use explainability to debug the model itself. There is thus a gap between explainability in practice and the goal of transparency, since explanations primarily serve internal stakeholders rather than external ones. Our study synthesizes the limitations of current explainability techniques that hamper their use for end users. To facilitate end user interaction, we develop a framework for establishing clear goals for explainability. We end by discussing concerns raised regarding explainability. |
Tasks | Feature Importance |
Published | 2019-09-13 |
URL | https://arxiv.org/abs/1909.06342v2 | https://arxiv.org/pdf/1909.06342v2.pdf |
PWC | https://paperswithcode.com/paper/explainable-machine-learning-in-deployment |
Repo | |
Framework | |
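Feature importance scores are one of the explanation methods the study surveys. A widely used, model-agnostic example is permutation importance, sketched here (the helper names and the accuracy metric are illustrative; this is not tied to any specific deployment in the study):

```python
import numpy as np

def permutation_importance(model_fn, X, y, metric, n_repeats=5, seed=0):
    """Permutation feature importance: score drop when a feature is shuffled.

    model_fn(X): returns predictions for X.
    metric(y_true, y_pred): quality score where higher is better.
    Shuffling a column breaks its link to the target; the resulting drop
    in score measures how much the model relies on that feature.
    """
    rng = np.random.RandomState(seed)
    baseline = metric(y, model_fn(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # shuffle column j in place
            drops.append(baseline - metric(y, model_fn(Xp)))
        importances[j] = np.mean(drops)
    return importances
```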
Learning a Generator Model from Terminal Bus Data
Title | Learning a Generator Model from Terminal Bus Data |
Authors | Nikolay Stulov, Dejan J Sobajic, Yury Maximov, Deepjyoti Deka, Michael Chertkov |
Abstract | In this work we investigate approaches to reconstruct generator models from measurements available at the generator terminal bus using machine learning (ML) techniques. The goal is to develop an emulator which is trained online and is capable of fast predictive computations. The training is illustrated on synthetic data generated from an available open-source dynamical generator model. Two ML techniques were developed and tested: (a) a standard vector auto-regressive (VAR) model; and (b) a novel customized long short-term memory (LSTM) deep learning model. Trade-offs in reconstruction ability between the computationally light but linear VAR model and the powerful but computationally demanding LSTM model are established and analyzed. |
Tasks | |
Published | 2019-01-03 |
URL | http://arxiv.org/abs/1901.00781v1 | http://arxiv.org/pdf/1901.00781v1.pdf |
PWC | https://paperswithcode.com/paper/learning-a-generator-model-from-terminal-bus |
Repo | |
Framework | |
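The VAR baseline in (a) can be illustrated with a least-squares fit of a first-order model. The sketch below is a generic VAR(1) fit on synthetic data, not the authors' implementation:

```python
import numpy as np

def fit_var1(series):
    """Least-squares fit of x_t = A @ x_{t-1} for a multivariate series.

    series: (T, d) array of T observations of a d-dimensional signal.
    Returns the estimated (d, d) transition matrix A.
    """
    past, future = series[:-1], series[1:]
    # Solve future ≈ past @ A.T in the least-squares sense.
    coef, *_ = np.linalg.lstsq(past, future, rcond=None)
    return coef.T

def predict_var1(A, x_last, steps):
    """Roll the fitted linear model forward from the last observed state."""
    preds = []
    x = x_last
    for _ in range(steps):
        x = A @ x
        preds.append(x)
    return np.array(preds)
```

Higher-order VAR(p) models follow the same pattern with p lagged states stacked into the regression matrix.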
MSFD:Multi-Scale Receptive Field Face Detector
Title | MSFD:Multi-Scale Receptive Field Face Detector |
Authors | Qiushan Guo, Yuan Dong, Yu Guo, Hongliang Bai |
Abstract | We aim to study the multi-scale receptive fields of a single convolutional neural network to detect faces of varied scales. This paper presents our Multi-Scale Receptive Field Face Detector (MSFD), which has superior performance on detecting faces at different scales and enjoys real-time inference speed. MSFD agglomerates context and texture by a hierarchical structure. The additional information and richer receptive fields bring significant improvement at only a marginal cost in inference time. We simultaneously propose an anchor assignment strategy which can cover faces with a wide range of scales to improve the recall rate of small faces and rotated faces. To reduce the false positive rate, we train our detector with focal loss, which keeps the easy samples from overwhelming training. As a result, MSFD reaches superior results on the FDDB, Pascal-Faces and WIDER FACE datasets, and can run at 31 FPS on GPU for VGA-resolution images. |
Tasks | |
Published | 2019-03-11 |
URL | http://arxiv.org/abs/1903.04147v1 | http://arxiv.org/pdf/1903.04147v1.pdf |
PWC | https://paperswithcode.com/paper/msfdmulti-scale-receptive-field-face-detector |
Repo | |
Framework | |
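The focal loss used to keep easy samples from dominating training can be sketched in a few lines (binary form; the paper applies it inside a full detection pipeline, so this shows only the loss itself):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0):
    """Binary focal loss: down-weights well-classified (easy) examples.

    probs: predicted probability of the positive class, in (0, 1).
    targets: 0/1 ground-truth labels.
    The (1 - p_t)^gamma factor shrinks the loss of easy samples so they
    do not overwhelm the gradient signal from hard ones; gamma = 0
    recovers plain cross-entropy.
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    p_t = np.where(targets == 1, probs, 1 - probs)
    return float(np.mean(-((1 - p_t) ** gamma) * np.log(p_t)))
```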
Optimal approximation for unconstrained non-submodular minimization
Title | Optimal approximation for unconstrained non-submodular minimization |
Authors | Marwa El Halabi, Stefanie Jegelka |
Abstract | Submodular function minimization is a well studied problem; existing algorithms solve it exactly or up to arbitrary accuracy. However, in many applications, the objective function is not exactly submodular. No theoretical guarantees exist in this case. While submodular minimization algorithms rely on intricate connections between submodularity and convexity, we show that these relations can be extended sufficiently to obtain approximation guarantees for non-submodular minimization. In particular, we prove how a projected subgradient method can perform well even for certain non-submodular functions. This includes important examples, such as objectives for structured sparse learning and variance reduction in Bayesian optimization. We also extend this result to noisy function evaluations. Our algorithm works in the value oracle model. We prove that in this model, the approximation result we obtain is the best possible with a subexponential number of queries. |
Tasks | Sparse Learning |
Published | 2019-05-29 |
URL | https://arxiv.org/abs/1905.12145v3 | https://arxiv.org/pdf/1905.12145v3.pdf |
PWC | https://paperswithcode.com/paper/minimizing-approximately-submodular-functions |
Repo | |
Framework | |
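The abstract's key algorithmic tool is a projected subgradient method. A generic sketch of the scheme (projection onto a box stands in for the actual feasible set, and the toy objective in the usage test is a non-smooth scalar function, not a non-submodular set function):

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, steps=200, step_size=0.1):
    """Generic projected subgradient descent.

    subgrad(x): returns any subgradient of the objective at x.
    project(x): Euclidean projection onto the feasible set.
    Uses a diminishing step size step_size / sqrt(t), a standard choice
    for non-smooth objectives.
    """
    x = project(np.asarray(x0, dtype=float))
    for t in range(1, steps + 1):
        x = project(x - (step_size / np.sqrt(t)) * subgrad(x))
    return x
```

For example, minimizing |x - 3| over the box [0, 2] drives the iterate to the boundary point 2.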
An Optimal Private Stochastic-MAB Algorithm Based on an Optimal Private Stopping Rule
Title | An Optimal Private Stochastic-MAB Algorithm Based on an Optimal Private Stopping Rule |
Authors | Touqir Sajed, Or Sheffet |
Abstract | We present a provably optimal differentially private algorithm for the stochastic multi-arm bandit problem, as opposed to the private analogue of the UCB-algorithm [Mishra and Thakurta, 2015; Tossou and Dimitrakakis, 2016] which doesn’t meet the recently discovered lower-bound of $\Omega \left(\frac{K\log(T)}{\epsilon} \right)$ [Shariff and Sheffet, 2018]. Our construction is based on a different algorithm, Successive Elimination [Even-Dar et al. 2002], that repeatedly pulls all remaining arms until an arm is found to be suboptimal and is then eliminated. In order to devise a private analogue of Successive Elimination we visit the problem of private stopping rule, that takes as input a stream of i.i.d samples from an unknown distribution and returns a multiplicative $(1 \pm \alpha)$-approximation of the distribution’s mean, and prove the optimality of our private stopping rule. We then present the private Successive Elimination algorithm which meets both the non-private lower bound [Lai and Robbins, 1985] and the above-mentioned private lower bound. We also compare empirically the performance of our algorithm with the private UCB algorithm. |
Tasks | |
Published | 2019-05-22 |
URL | https://arxiv.org/abs/1905.09383v1 | https://arxiv.org/pdf/1905.09383v1.pdf |
PWC | https://paperswithcode.com/paper/an-optimal-private-stochastic-mab-algorithm |
Repo | |
Framework | |
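The non-private Successive Elimination algorithm that the paper privatizes repeatedly pulls every surviving arm and discards an arm once its upper confidence bound falls below another arm's lower confidence bound. An illustrative sketch of the base algorithm (the confidence radius here is one standard choice; the paper's contribution, the private stopping rule, is not reproduced):

```python
import math

def successive_elimination(pull, n_arms, horizon, delta=0.05):
    """Non-private Successive Elimination for stochastic bandits.

    pull(arm): returns a reward in [0, 1] for the chosen arm.
    Pulls all surviving arms in rounds and eliminates an arm once its
    upper confidence bound drops below the best lower confidence bound.
    Returns the set of arms still active when the horizon is reached.
    """
    active = list(range(n_arms))
    sums = [0.0] * n_arms
    counts = [0] * n_arms
    t = 0
    while t < horizon and len(active) > 1:
        for arm in active:
            sums[arm] += pull(arm)
            counts[arm] += 1
            t += 1
        means = {a: sums[a] / counts[a] for a in active}
        # Anytime confidence radius from a union bound over arms and rounds.
        radius = {a: math.sqrt(math.log(4 * n_arms * counts[a] ** 2 / delta)
                               / (2 * counts[a])) for a in active}
        best_lcb = max(means[a] - radius[a] for a in active)
        active = [a for a in active if means[a] + radius[a] >= best_lcb]
    return active
```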
Provable Non-Convex Optimization and Algorithm Validation via Submodularity
Title | Provable Non-Convex Optimization and Algorithm Validation via Submodularity |
Authors | Yatao An Bian |
Abstract | Submodularity is one of the most well-studied properties of problem classes in combinatorial optimization and many applications of machine learning and data mining, with strong implications for guaranteed optimization. In this thesis, we investigate the role of submodularity in provable non-convex optimization and validation of algorithms. A profound understanding of which classes of functions can be tractably optimized remains a central challenge for non-convex optimization. By advancing the notion of submodularity to continuous domains (termed “continuous submodularity”), we characterize a class of generally non-convex and non-concave functions – continuous submodular functions – and derive algorithms for approximately maximizing them with strong approximation guarantees. Meanwhile, continuous submodularity captures a wide spectrum of applications, ranging from revenue maximization with general marketing strategies and MAP inference for DPPs to mean field inference for probabilistic log-submodular models, which renders it valuable domain knowledge in optimizing this class of objectives. Validation of algorithms is an information-theoretic framework for investigating the robustness of algorithms to fluctuations in the input/observations and their generalization ability. We investigate various algorithms for one of the paradigmatic unconstrained submodular maximization problems: MaxCut. Due to the submodularity of the MaxCut objective, we are able to present efficient approaches to calculate the algorithmic information content of MaxCut algorithms. The results provide insights into the robustness of different algorithmic techniques for MaxCut. |
Tasks | Combinatorial Optimization |
Published | 2019-12-18 |
URL | https://arxiv.org/abs/1912.08495v1 | https://arxiv.org/pdf/1912.08495v1.pdf |
PWC | https://paperswithcode.com/paper/provable-non-convex-optimization-and |
Repo | |
Framework | |
Part-based approximations for morphological operators using asymmetric auto-encoders
Title | Part-based approximations for morphological operators using asymmetric auto-encoders |
Authors | Bastien Ponchon, Santiago Velasco-Forero, Samy Blusseau, Jesus Angulo, Isabelle Bloch |
Abstract | This paper addresses the issue of building a part-based representation of a dataset of images. More precisely, we look for a non-negative, sparse decomposition of the images on a reduced set of atoms, in order to unveil a morphological and interpretable structure of the data. Additionally, we want this decomposition to be computed online for any new sample that is not part of the initial dataset. Therefore, our solution relies on a sparse, non-negative auto-encoder where the encoder is deep (for accuracy) and the decoder shallow (for interpretability). This method compares favorably to the state-of-the-art online methods on two datasets (MNIST and Fashion MNIST), according to classical metrics and to a new one we introduce, based on the invariance of the representation to morphological dilation. |
Tasks | |
Published | 2019-03-20 |
URL | http://arxiv.org/abs/1904.00763v2 | http://arxiv.org/pdf/1904.00763v2.pdf |
PWC | https://paperswithcode.com/paper/part-based-approximations-for-morphological |
Repo | |
Framework | |