Paper Group ANR 503
Jaccard Filtration and Stable Paths in the Mapper
Title | Jaccard Filtration and Stable Paths in the Mapper |
Authors | Dustin L. Arendt, Matthew Broussard, Bala Krishnamoorthy, Nathaniel Saul |
Abstract | The contributions of this paper are two-fold. We define a new filtration called the cover filtration, built from a single cover based on a generalized Jaccard distance. We provide stability results for the cover filtration and show how the construction is equivalent to the Cech filtration under certain settings. We then develop a language and theory for stable paths within this filtration, inspired by ideas from persistent homology. We demonstrate how the filtration and paths can be applied to a variety of applications in which defining a metric is not obvious but a cover is readily available, and we illustrate the usefulness of this construction in the context of recommendation systems and explainable machine learning. For recommendation systems, we present a new perspective for modeling such data sets that does not require manufacturing a bespoke metric; this extends work on graph-based recommendation systems and allows a topological perspective. As an explicit example, we look at a movie data set and find that the stable paths identified in our framework represent a sequence of movies constituting a gentle transition and ordering from one genre to another. For explainable machine learning, we apply the Mapper for model induction, providing explanations in the form of paths between subpopulations or observations. Our framework provides an alternative way of building a filtration from a single mapper, which is then used to explore stable paths. As a direct illustration, we build a mapper from a supervised machine learning model trained on the FashionMNIST data set and show that the stable paths in the cover filtration provide improved explanations of relationships between subpopulations of images. |
Tasks | Recommendation Systems |
Published | 2019-06-19 |
URL | https://arxiv.org/abs/1906.08256v1 |
https://arxiv.org/pdf/1906.08256v1.pdf | |
PWC | https://paperswithcode.com/paper/jaccard-filtration-and-stable-paths-in-the |
Repo | |
Framework | |
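The cover filtration described above is built by assigning each overlapping pair of cover elements a generalized Jaccard distance. Below is a minimal sketch of that construction for the unweighted case, assuming cover elements are given as plain Python sets; the function names are illustrative and not taken from the authors' code.

```python
from itertools import combinations

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| for two cover elements given as sets."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def cover_filtration_edges(cover):
    """Weighted 1-skeleton of the nerve: one edge per overlapping pair of
    cover elements, with the Jaccard distance as its filtration value."""
    edges = []
    for (i, a), (j, b) in combinations(enumerate(cover), 2):
        if a & b:  # only overlapping cover elements enter the nerve
            edges.append((i, j, jaccard_distance(a, b)))
    return sorted(edges, key=lambda e: e[2])

# toy cover over items {0..9}
cover = [set(range(0, 6)), set(range(4, 9)), set(range(7, 10))]
for i, j, d in cover_filtration_edges(cover):
    print(f"edge ({i},{j}) appears at t = {d:.2f}")
```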
Variable Selection with Rigorous Uncertainty Quantification using Deep Bayesian Neural Networks: Posterior Concentration and Bernstein-von Mises Phenomenon
Title | Variable Selection with Rigorous Uncertainty Quantification using Deep Bayesian Neural Networks: Posterior Concentration and Bernstein-von Mises Phenomenon |
Authors | Jeremiah Zhe Liu |
Abstract | This work develops a rigorous theoretical basis for the fact that deep Bayesian neural networks (BNNs) are an effective tool for high-dimensional variable selection with rigorous uncertainty quantification. We develop new Bayesian non-parametric theorems to show that a properly configured deep BNN (1) learns variable importance effectively in high dimensions, and its learning rate can sometimes “break” the curse of dimensionality, and (2) quantifies uncertainty about variable importance rigorously, in the sense that its 95% credible intervals for variable importance indeed cover the truth 95% of the time (i.e., the Bernstein-von Mises (BvM) phenomenon). The theoretical results suggest a simple variable selection algorithm based on the BNN’s credible intervals. Extensive simulations confirm the theoretical findings and show that the proposed algorithm outperforms existing classic and neural-network-based variable selection methods, particularly in high dimensions. |
Tasks | |
Published | 2019-12-03 |
URL | https://arxiv.org/abs/1912.01189v1 |
https://arxiv.org/pdf/1912.01189v1.pdf | |
PWC | https://paperswithcode.com/paper/variable-selection-with-rigorous-uncertainty |
Repo | |
Framework | |
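The proposed selection rule is based on the BNN's 95% credible intervals for variable importance. Below is a minimal sketch of that final selection step only, assuming posterior draws of a variable-importance measure are already available (from MCMC or variational sampling); the interval rule shown here is the straightforward reading of the abstract, not necessarily the paper's exact algorithm.

```python
import numpy as np

def select_variables(importance_samples, level=0.95, null_value=0.0):
    """importance_samples: array of shape (n_posterior_draws, n_variables).
    Keep a variable when its credible interval excludes the null value."""
    alpha = 1.0 - level
    lo = np.quantile(importance_samples, alpha / 2, axis=0)
    hi = np.quantile(importance_samples, 1 - alpha / 2, axis=0)
    selected = (lo > null_value) | (hi < null_value)
    return selected, np.stack([lo, hi], axis=1)

# toy posterior draws: variables 0 and 1 matter, variable 2 is noise
rng = np.random.default_rng(0)
draws = np.column_stack([
    rng.normal(1.5, 0.2, 2000),   # clearly important
    rng.normal(0.8, 0.3, 2000),   # weakly important
    rng.normal(0.0, 0.3, 2000),   # noise
])
mask, intervals = select_variables(draws)
print(mask)        # expected: [ True  True False]
print(intervals)   # 95% credible interval per variable
```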
Optical machine learning with incoherent light and a single-pixel detector
Title | Optical machine learning with incoherent light and a single-pixel detector |
Authors | Shuming Jiao, Jun Feng, Yang Gao, Ting Lei, Zhenwei Xie, Xiaocong Yuan |
Abstract | An optical diffractive neural network (DNN) can be implemented with a cascaded phase mask architecture. Like an optical computer, the system can perform machine learning tasks such as digit recognition in an all-optical manner. However, the system can only work under coherent light illumination, and the precision requirements in practical experiments are quite high. This paper proposes an optical machine learning framework based on single-pixel imaging (MLSPI). The MLSPI system can perform the same linear pattern recognition tasks as a DNN. Furthermore, it can work under incoherent lighting conditions, has lower experimental complexity, and is easily programmable. |
Tasks | |
Published | 2019-04-24 |
URL | https://arxiv.org/abs/1904.10851v3 |
https://arxiv.org/pdf/1904.10851v3.pdf | |
PWC | https://paperswithcode.com/paper/optical-machine-learning-with-incoherent |
Repo | |
Framework | |
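The MLSPI framework performs linear pattern recognition with a single-pixel detector: each measurement is the total intensity of the scene modulated by one illumination pattern, i.e., an inner product. Below is a minimal numerical sketch of that linear-algebra view, with random stand-in patterns; it illustrates the computation, not the authors' optical hardware or training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_classes = 28 * 28, 10

# one illumination pattern per class; in the optical system each pattern is
# displayed sequentially and the single-pixel detector records one intensity
patterns = rng.normal(size=(n_classes, n_pixels))  # stand-in for learned patterns

def single_pixel_scores(image_flat, patterns):
    # each measurement = total intensity of (pattern * scene) on the detector
    return patterns @ image_flat

image = rng.random(n_pixels)               # stand-in for an input scene
scores = single_pixel_scores(image, patterns)
print("predicted class:", int(np.argmax(scores)))
```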
Monocular Depth Estimation with Directional Consistency by Deep Networks
Title | Monocular Depth Estimation with Directional Consistency by Deep Networks |
Authors | Fabian Truetsch, Alfred Schöttl |
Abstract | As processing power has become more widely available, increasingly human-like artificial intelligence systems are being created to solve image processing tasks that humans are inherently good at. Along these lines, we propose a model that estimates depth from a monocular image. Our approach combines structure from motion and stereo disparity: we estimate a pose between the source image and a different viewpoint together with a dense depth map, and use a simple transformation to reconstruct the image seen from that viewpoint. The real image at that viewpoint then acts as supervision to train our model. The metric chosen for image comparison employs a standard L1 term and structural similarity, together with a consistency constraint between depth maps and a smoothness constraint. We show that, similar to human perception, exploiting the correlation within the provided data through two different approaches increases accuracy and outperforms the individual components. |
Tasks | Depth Estimation, Monocular Depth Estimation |
Published | 2019-05-11 |
URL | https://arxiv.org/abs/1905.04467v1 |
https://arxiv.org/pdf/1905.04467v1.pdf | |
PWC | https://paperswithcode.com/paper/monocular-depth-estimation-with-directional |
Repo | |
Framework | |
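The image-comparison metric above combines an L1 term with structural similarity, plus depth-consistency and smoothness constraints. Below is a minimal PyTorch sketch of the L1 + SSIM photometric term and an edge-aware smoothness penalty, in the style common to self-supervised monocular depth methods; the 3x3 SSIM window, the weight alpha=0.85 and the function names are assumptions, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM over 3x3 windows; x, y: (B, C, H, W) in [0, 1]."""
    mu_x = F.avg_pool2d(x, 3, 1, padding=1)
    mu_y = F.avg_pool2d(y, 3, 1, padding=1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, padding=1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, padding=1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, padding=1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return num / den

def photometric_loss(reconstructed, target, alpha=0.85):
    """alpha-weighted mix of (1 - SSIM)/2 and L1, as is common in this line of work."""
    l1 = (reconstructed - target).abs().mean()
    ssim_term = ((1 - ssim(reconstructed, target)) / 2).mean()
    return alpha * ssim_term + (1 - alpha) * l1

def smoothness_loss(depth, image):
    """Edge-aware first-order smoothness penalty on the predicted depth map."""
    d_dx = (depth[:, :, :, 1:] - depth[:, :, :, :-1]).abs()
    d_dy = (depth[:, :, 1:, :] - depth[:, :, :-1, :]).abs()
    i_dx = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(1, keepdim=True)
    i_dy = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(1, keepdim=True)
    return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()
```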
Single-Component Privacy Guarantees in Helper Data Systems and Sparse Coding with Ambiguation
Title | Single-Component Privacy Guarantees in Helper Data Systems and Sparse Coding with Ambiguation |
Authors | Behrooz Razeghi, Taras Stanko, Boris Škorić, Slava Voloshynovskiy |
Abstract | We investigate the privacy of two approaches to (biometric) template protection: Helper Data Systems and Sparse Ternary Coding with Ambiguization. In particular, we focus on a privacy property that is often overlooked, namely how much leakage exists about one specific binary property of one component of the feature vector. This property is, for example, the sign of the component or an indicator that a threshold is exceeded. We provide evidence that both approaches are able to protect such sensitive binary variables, and discuss how the system parameters need to be set. |
Tasks | |
Published | 2019-07-15 |
URL | https://arxiv.org/abs/1907.06388v2 |
https://arxiv.org/pdf/1907.06388v2.pdf | |
PWC | https://paperswithcode.com/paper/single-component-privacy-guarantees-in-helper |
Repo | |
Framework | |
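The privacy property studied here is the leakage about one binary property of a single feature component (for example its sign) given the stored helper data. Below is a minimal sketch of how such leakage can be estimated empirically as a mutual information between the binary property and quantized helper data; the synthetic data model is purely illustrative and not one of the paper's schemes.

```python
import numpy as np

def empirical_mutual_information(b, h):
    """I(b; h) in bits, estimated from samples of a binary variable b
    and a discrete variable h via the empirical joint distribution."""
    b, h = np.asarray(b), np.asarray(h)
    mi = 0.0
    for bv in np.unique(b):
        for hv in np.unique(h):
            p_joint = np.mean((b == bv) & (h == hv))
            if p_joint > 0:
                mi += p_joint * np.log2(p_joint / (np.mean(b == bv) * np.mean(h == hv)))
    return mi

# toy model: feature component x; the "helper data" is a coarse quantization
# of |x|, which discards the sign entirely
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
sign_bit = (x > 0).astype(int)                      # sensitive binary property
helper = np.digitize(np.abs(x), [0.5, 1.0, 2.0])    # what the system stores
print(f"estimated leakage about the sign: "
      f"{empirical_mutual_information(sign_bit, helper):.4f} bits")  # ~0 bits
```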
Deep neural network-based classification model for Sentiment Analysis
Title | Deep neural network-based classification model for Sentiment Analysis |
Authors | Donghang Pan, Jingling Yuan, Lin Li, Deming Sheng |
Abstract | The rapid growth of social networks has brought great challenges to mining the sentimental tendencies of users. As more and more researchers pay attention to the sentimental tendencies of online users, rich results have been obtained for the sentiment classification of explicit texts. However, research on the implicit sentiment of users is still in its infancy. To address the difficulty of implicit sentiment classification, we study implicit sentiment classification models based on deep neural networks. Classification models based on a DNN, LSTM, Bi-LSTM and CNN are established to judge the sentiment tendency of users’ implicit sentiment texts. Building on the Bi-LSTM model, we further study a classification model with a word-level attention mechanism. Experimental results on a public dataset show that the LSTM-family classification models and the CNN classification model achieve good sentiment classification performance, significantly better than the DNN model. The Bi-LSTM-based attention model obtains the best R value for identifying the positive category. |
Tasks | Sentiment Analysis |
Published | 2019-07-03 |
URL | https://arxiv.org/abs/1907.02046v1 |
https://arxiv.org/pdf/1907.02046v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-neural-network-based-classification |
Repo | |
Framework | |
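Among the compared models, the Bi-LSTM with a word-level attention mechanism performed best on the positive category. Below is a minimal PyTorch sketch of such a Bi-LSTM + word-level attention classifier; the vocabulary size, layer widths and class count are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class BiLSTMAttentionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)   # word-level attention scores
        self.out = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        h, _ = self.bilstm(self.embed(token_ids))  # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # attention over words
        context = (weights * h).sum(dim=1)         # weighted sentence representation
        return self.out(context)                   # class logits

model = BiLSTMAttentionClassifier(vocab_size=20_000)
logits = model(torch.randint(1, 20_000, (4, 30)))  # 4 sentences, 30 tokens each
print(logits.shape)                                # torch.Size([4, 3])
```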
Distance transform regression for spatially-aware deep semantic segmentation
Title | Distance transform regression for spatially-aware deep semantic segmentation |
Authors | Nicolas Audebert, Alexandre Boulch, Bertrand Le Saux, Sébastien Lefèvre |
Abstract | Understanding visual scenes relies more and more on dense pixel-wise classification obtained via deep fully convolutional neural networks. However, due to the nature of these networks, predictions often suffer from blurry boundaries and ill-segmented shapes, fueling the need for post-processing. This work introduces a new semantic segmentation regularization based on the regression of a distance transform. After computing the distance transform on the label masks, we train an FCN in a multi-task setting in both discrete and continuous spaces by jointly learning classification and distance regression. This requires almost no modification of the network structure and adds very low overhead to the training process. Learning to approximate the distance transform back-propagates spatial cues that implicitly regularize the segmentation. We validate this technique with several architectures on various datasets, and we show significant improvements compared to competitive baselines. |
Tasks | Semantic Segmentation |
Published | 2019-09-04 |
URL | https://arxiv.org/abs/1909.01671v1 |
https://arxiv.org/pdf/1909.01671v1.pdf | |
PWC | https://paperswithcode.com/paper/distance-transform-regression-for-spatially |
Repo | |
Framework | |
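The regularization above adds a distance-transform regression task next to the usual per-pixel classification. Below is a minimal sketch of how regression targets can be derived from label masks and combined into a joint loss, assuming scipy's Euclidean distance transform and an MSE regression term; the clipping, normalization and loss weight are illustrative choices, not the paper's exact settings.

```python
import numpy as np
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def distance_targets(label_mask, n_classes, clip=20.0):
    """Per-class distance transform of a (H, W) integer label mask,
    clipped and normalized to [0, 1]; returns shape (n_classes, H, W)."""
    h, w = label_mask.shape
    dist = np.zeros((n_classes, h, w), dtype=np.float32)
    for c in range(n_classes):
        inside = (label_mask == c)
        if inside.any():
            dist[c] = np.minimum(distance_transform_edt(inside), clip) / clip
    return dist

def multitask_loss(class_logits, dist_pred, label_mask, dist_target, lam=1.0):
    """Joint classification + distance-regression loss.
    class_logits, dist_pred, dist_target: (B, C, H, W); label_mask: (B, H, W) long."""
    ce = F.cross_entropy(class_logits, label_mask)
    reg = F.mse_loss(dist_pred, dist_target)
    return ce + lam * reg   # lam is an assumed task weight
```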
GET-AID: Visual Recognition of Human Rights Abuses via Global Emotional Traits
Title | GET-AID: Visual Recognition of Human Rights Abuses via Global Emotional Traits |
Authors | Grigorios Kalliatakis, Shoaib Ehsan, Maria Fasli, Klaus D. McDonald-Maier |
Abstract | In the era of social media and big data, the use of visual evidence to document conflict and human rights abuse has become an important element for human rights organizations and advocates. In this paper, we address the task of detecting two types of human rights abuses in challenging, everyday photos: (1) child labour, and (2) displaced populations. We propose a novel model that is driven by a human-centric approach. Our hypothesis is that the emotional state of a person – how positive or pleasant an emotion is, and the control level of the situation by the person – are powerful cues for perceiving potential human rights violations. To exploit these cues, our model learns to predict global emotional traits over a given image based on the joint analysis of every detected person and the whole scene. By integrating these predictions with a data-driven convolutional neural network (CNN) classifier, our system efficiently infers potential human rights abuses in a clean, end-to-end system we call GET-AID (from Global Emotional Traits for Abuse IDentification). Extensive experiments are performed to verify our method on the recently introduced subset of Human Rights Archive (HRA) dataset (2 violation categories with the same number of positive and negative samples), where we show quantitatively compelling results. Compared with previous works and the sole use of a CNN classifier, this paper improves the coverage up to 23.73% for child labour and 57.21% for displaced populations. Our dataset, codes and trained models are available online at https://github.com/GKalliatakis/GET-AID. |
Tasks | |
Published | 2019-02-11 |
URL | http://arxiv.org/abs/1902.03817v1 |
http://arxiv.org/pdf/1902.03817v1.pdf | |
PWC | https://paperswithcode.com/paper/get-aid-visual-recognition-of-human-rights |
Repo | |
Framework | |
ExTra: Transfer-guided Exploration
Title | ExTra: Transfer-guided Exploration |
Authors | Anirban Santara, Rishabh Madan, Balaraman Ravindran, Pabitra Mitra |
Abstract | In this work we present a novel approach to transfer-guided exploration in reinforcement learning, inspired by the human tendency to leverage experiences from similar encounters in the past while navigating a new task. Given an optimal policy in a related task-environment, we show that its bisimulation distance from the current task-environment gives a lower bound on the optimal advantage of state-action pairs in the current task-environment. Transfer-guided Exploration (ExTra) samples actions from a softmax distribution over these lower bounds. In this way, actions with potentially higher optimal advantage are sampled more frequently. In our experiments on gridworld environments, we demonstrate that, given access to an optimal policy in a related task-environment, ExTra can outperform popular domain-specific exploration strategies, viz. epsilon-greedy, Model-Based Interval Estimation with Exploration Bonus (MBIE-EB), Pursuit and Boltzmann, in terms of sample complexity and rate of convergence. We further show that ExTra is robust to the choice of source task and degrades gracefully as the dissimilarity of the source task increases. We also demonstrate that ExTra, when used alongside traditional exploration algorithms, improves their rate of convergence, and is thus capable of complementing their efficacy. |
Tasks | |
Published | 2019-06-27 |
URL | https://arxiv.org/abs/1906.11785v2 |
https://arxiv.org/pdf/1906.11785v2.pdf | |
PWC | https://paperswithcode.com/paper/extra-transfer-guided-exploration |
Repo | |
Framework | |
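The core sampling rule is stated directly in the abstract: actions are drawn from a softmax over lower bounds on the optimal advantage, obtained via the bisimulation distance to a related task. Below is a minimal sketch of that sampling step only, assuming the lower bounds are already computed; the temperature parameter is an added knob, not something the abstract specifies.

```python
import numpy as np

def extra_sample_action(advantage_lower_bounds, temperature=1.0, rng=None):
    """Sample an action from a softmax over lower bounds of the optimal advantage,
    so actions with potentially higher optimal advantage are drawn more often."""
    rng = rng or np.random.default_rng()
    z = np.asarray(advantage_lower_bounds, dtype=float) / temperature
    z -= z.max()                        # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(probs), p=probs), probs

# toy state with 4 actions: action 2 has the highest lower bound
action, probs = extra_sample_action([-0.3, 0.1, 0.8, 0.0], temperature=0.5)
print(action, np.round(probs, 3))
```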
Model Order Selection in DoA Scenarios via Cross-Entropy based Machine Learning Techniques
Title | Model Order Selection in DoA Scenarios via Cross-Entropy based Machine Learning Techniques |
Authors | Andreas Barthelme, Reinhard Wiesmayr, Wolfgang Utschick |
Abstract | In this paper, we present a machine learning approach for estimating the number of incident wavefronts in a direction of arrival scenario. In contrast to previous works, a multilayer neural network with a cross-entropy objective is trained. Furthermore, we investigate an online training procedure that allows the neural network to adapt to imperfections of an antenna array without explicitly calibrating the array manifold. We show via simulations that the proposed method outperforms classical model order selection schemes based on information criteria in terms of accuracy, especially for a small number of snapshots and at low signal-to-noise ratios. Moreover, the online training procedure enables the neural network to adapt with only a few online training samples when initialized by offline training on artificial data. |
Tasks | |
Published | 2019-10-21 |
URL | https://arxiv.org/abs/1910.09284v1 |
https://arxiv.org/pdf/1910.09284v1.pdf | |
PWC | https://paperswithcode.com/paper/model-order-selection-in-doa-scenarios-via |
Repo | |
Framework | |
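The estimator treats model order selection as classification trained with a cross-entropy objective. Below is a minimal PyTorch sketch of such a classifier; feeding it the sorted eigenvalues of the sample covariance matrix is my assumption about a reasonable input representation, not necessarily the features used in the paper.

```python
import torch
import torch.nn as nn

class ModelOrderNet(nn.Module):
    """MLP that maps features of the sample covariance to a model-order class."""
    def __init__(self, n_antennas=8, max_order=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_antennas, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, max_order + 1),   # classes 0..max_order wavefronts
        )

    def forward(self, cov_features):        # e.g. sorted covariance eigenvalues
        return self.net(cov_features)

model = ModelOrderNet()
criterion = nn.CrossEntropyLoss()           # the cross-entropy objective
features = torch.rand(16, 8)                # stand-in batch of eigenvalue features
labels = torch.randint(0, 6, (16,))         # true number of wavefronts per sample
loss = criterion(model(features), labels)
loss.backward()                             # one supervised training step (no optimizer shown)
```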
Graduated Non-Convexity for Robust Spatial Perception: From Non-Minimal Solvers to Global Outlier Rejection
Title | Graduated Non-Convexity for Robust Spatial Perception: From Non-Minimal Solvers to Global Outlier Rejection |
Authors | Heng Yang, Pasquale Antonante, Vasileios Tzoumas, Luca Carlone |
Abstract | Semidefinite Programming (SDP) and Sums-of-Squares (SOS) relaxations have led to certifiably optimal non-minimal solvers for several robotics and computer vision problems. However, most non-minimal solvers rely on least-squares formulations, and, as a result, are brittle against outliers. While a standard approach to regain robustness against outliers is to use robust cost functions, the latter typically introduce other non-convexities, preventing the use of existing non-minimal solvers. In this paper, we enable the simultaneous use of non-minimal solvers and robust estimation by providing a general-purpose approach for robust global estimation, which can be applied to any problem where a non-minimal solver is available for the outlier-free case. To this end, we leverage the Black-Rangarajan duality between robust estimation and outlier processes (which has been traditionally applied to early vision problems), and show that graduated non-convexity (GNC) can be used in conjunction with non-minimal solvers to compute robust solutions, without requiring an initial guess. Although GNC’s global optimality cannot be guaranteed, we demonstrate the empirical robustness of the resulting robust non-minimal solvers in applications, including point cloud and mesh registration, pose graph optimization, and image-based object pose estimation (also called shape alignment). Our solvers are robust to 70-80% of outliers, outperform RANSAC, are more accurate than specialized local solvers, and faster than specialized global solvers. We also propose the first certifiably optimal non-minimal solver for shape alignment using SOS relaxation. |
Tasks | Pose Estimation |
Published | 2019-09-18 |
URL | https://arxiv.org/abs/1909.08605v3 |
https://arxiv.org/pdf/1909.08605v3.pdf | |
PWC | https://paperswithcode.com/paper/graduated-non-convexity-for-robust-spatial |
Repo | |
Framework | |
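The GNC scheme alternates a weighted non-minimal solve with a closed-form weight update from the Black-Rangarajan duality, while a continuation parameter gradually re-introduces the non-convexity of the robust cost. Below is a minimal scalar sketch of that loop using a Geman-McClure-style weight update, where the "non-minimal solver" is just a weighted mean; the initialization and schedule of mu are typical choices for this kind of continuation, not necessarily the paper's exact values.

```python
import numpy as np

def gnc_robust_mean(measurements, c=0.1, mu_factor=1.4, iters=50):
    """Graduated non-convexity with a Geman-McClure-style weight update.
    Here the 'non-minimal solver' is a weighted mean of scalar measurements."""
    z = np.asarray(measurements, dtype=float)
    w = np.ones_like(z)
    x = np.average(z, weights=w)               # initial (non-robust) solve
    mu = 2.0 * np.max((z - x) ** 2) / c ** 2   # start from a nearly convex surrogate
    for _ in range(iters):
        x = np.average(z, weights=w)           # variable update: weighted solve
        r2 = (z - x) ** 2
        w = (mu * c ** 2 / (r2 + mu * c ** 2)) ** 2   # weight update (outlier process)
        if mu <= 1.0:                          # original robust cost recovered
            break
        mu = max(mu / mu_factor, 1.0)          # gradually increase non-convexity
    return x, w

rng = np.random.default_rng(0)
inliers = rng.normal(2.0, 0.01, 80)
outliers = rng.uniform(-10, 10, 20)            # 20% outliers
estimate, weights = gnc_robust_mean(np.concatenate([inliers, outliers]))
print(f"robust estimate ≈ {estimate:.3f}")     # close to 2.0 despite the outliers
```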
Using Honeypots to Catch Adversarial Attacks on Neural Networks
Title | Using Honeypots to Catch Adversarial Attacks on Neural Networks |
Authors | Shawn Shan, Emily Wenger, Bolun Wang, Bo Li, Haitao Zheng, Ben Y. Zhao |
Abstract | Deep neural networks (DNN) are known to be vulnerable to adversarial attacks. Numerous efforts either try to patch weaknesses in trained models, or try to make it difficult or costly to compute adversarial examples that exploit them. In our work, we explore a new “honeypot” approach to protect DNN models. We intentionally inject trapdoors, honeypot weaknesses in the classification manifold that attract attackers searching for adversarial examples. Attackers’ optimization algorithms gravitate towards trapdoors, leading them to produce attacks similar to trapdoors in the feature space. Our defense then identifies attacks by comparing neuron activation signatures of inputs to those of trapdoors. In this paper, we introduce trapdoors and describe an implementation of a trapdoor-enabled defense. First, we analytically prove that trapdoors shape the computation of adversarial attacks so that attack inputs will have feature representations very similar to those of trapdoors. Second, we experimentally show that trapdoor-protected models can detect, with high accuracy, adversarial examples generated by state-of-the-art attacks (Projected Gradient Descent, optimization-based CW, Elastic Net, BPDA), with negligible impact on normal classification. These results generalize across classification domains, including image, facial, and traffic-sign recognition. We also validate trapdoors’ robustness against strong adaptive attacks (countermeasures), including attackers who can identify and unlearn trapdoors. |
Tasks | Face Recognition, Traffic Sign Recognition |
Published | 2019-04-18 |
URL | https://arxiv.org/abs/1904.08554v5 |
https://arxiv.org/pdf/1904.08554v5.pdf | |
PWC | https://paperswithcode.com/paper/gotta-catch-em-all-using-concealed-trapdoors |
Repo | |
Framework | |
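Detection works by comparing the neuron activation signature of an input with the stored trapdoor signature and flagging inputs that are too similar. Below is a minimal sketch of that comparison step using cosine similarity of penultimate-layer activations; the feature dimension, threshold and signature construction are placeholders, and in practice the threshold would be calibrated on clean inputs.

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def is_adversarial(activation, trapdoor_signature, threshold=0.8):
    """Flag an input whose penultimate-layer activation is suspiciously close
    to the trapdoor signature (e.g. mean activation of trapdoored inputs)."""
    return cosine_similarity(activation, trapdoor_signature) > threshold

# toy usage with stand-in activation vectors
rng = np.random.default_rng(0)
trapdoor_signature = rng.normal(size=256)
clean_activation = rng.normal(size=256)                              # unrelated to trapdoor
attack_activation = trapdoor_signature + 0.1 * rng.normal(size=256)  # gravitates to trapdoor
print(is_adversarial(clean_activation, trapdoor_signature))   # False (with high probability)
print(is_adversarial(attack_activation, trapdoor_signature))  # True
```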
DeepBlindness: Fast Blindness Map Estimation and Blindness Type Classification for Outdoor Scene from Single Color Image
Title | DeepBlindness: Fast Blindness Map Estimation and Blindness Type Classification for Outdoor Scene from Single Color Image |
Authors | Jiaxiong Qiu, Xinyuan Yu, Guoqiang Yang, Shuaicheng Liu |
Abstract | Outdoor vision robotic systems and autonomous cars suffer from many image-quality issues, particularly haze, defocus blur, and motion blur, which we define generically as “blindness issues”. These blindness issues may seriously affect the performance of robotic systems and could lead to unsafe decisions being made. However, existing solutions either focus on one type of blindness only or lack the ability to estimate the degree of blindness accurately. Besides, heavy computation is needed, so these solutions cannot run in real time on practical systems. In this paper, we provide a method that simultaneously detects the type of blindness and provides a blindness map indicating to what degree vision is limited on a pixel-by-pixel basis. Both the blindness type and the per-pixel blindness estimate are essential for tasks like deblurring, dehazing, or the fail-safe functioning of robotic systems. We demonstrate the effectiveness of our approach on the KITTI and CUHK datasets, where experiments show that our method outperforms other state-of-the-art approaches, achieving speeds of about 130 frames per second (fps). |
Tasks | |
Published | 2019-11-02 |
URL | https://arxiv.org/abs/1911.00652v1 |
https://arxiv.org/pdf/1911.00652v1.pdf | |
PWC | https://paperswithcode.com/paper/deepblindness-fast-blindness-map-estimation |
Repo | |
Framework | |
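The method outputs both a per-pixel blindness map and a blindness-type prediction from a single image. Below is a minimal PyTorch sketch of a two-head network with that output structure (a dense map head plus a global classification head over a shared encoder); the layer sizes and the three-way type set are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoHeadBlindnessNet(nn.Module):
    """Shared encoder with a per-pixel blindness-map head and a type-classification head."""
    def __init__(self, n_types=3):                 # e.g. haze / defocus blur / motion blur
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.map_head = nn.Conv2d(64, 1, 1)        # per-pixel degree of blindness
        self.type_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_types)
        )

    def forward(self, image):
        feat = self.encoder(image)
        return torch.sigmoid(self.map_head(feat)), self.type_head(feat)

net = TwoHeadBlindnessNet()
blindness_map, type_logits = net(torch.rand(2, 3, 128, 256))
print(blindness_map.shape, type_logits.shape)      # (2, 1, 128, 256) and (2, 3)
```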
Can Sophisticated Dispatching Strategy Acquired by Reinforcement Learning? - A Case Study in Dynamic Courier Dispatching System
Title | Can Sophisticated Dispatching Strategy Acquired by Reinforcement Learning? - A Case Study in Dynamic Courier Dispatching System |
Authors | Yujie Chen, Yu Qian, Yichen Yao, Zili Wu, Rongqi Li, Yinzhi Zhou, Haoyuan Hu, Yinghui Xu |
Abstract | In this paper, we study a courier dispatching problem (CDP) raised from an online pickup-service platform of Alibaba. The CDP aims to assign a set of couriers to serve pickup requests with stochastic spatial and temporal arrival rates among urban regions. The objective is to maximize the revenue of served requests given a limited number of couriers over a period of time. Many online algorithms, such as dynamic matching and vehicle routing strategies from the existing literature, could be applied to this problem. However, these methods rely on appropriately predefined optimization objectives at each decision point, which is hard to achieve in dynamic situations. This paper formulates the CDP as a Markov decision process (MDP) and proposes a data-driven approach to derive the optimal dispatching rule-set under different scenarios. Our method stacks multi-layer images of the spatial-and-temporal map and applies multi-agent reinforcement learning (MARL) techniques to evolve dispatching models. This method solves the learning inefficiency caused by traditional centralized MDP modeling. Through comprehensive experiments on both an artificial dataset and a real-world dataset, we show that: 1) by utilizing historical data and considering long-term revenue gains, MARL achieves better performance than myopic online algorithms; 2) MARL is able to construct the mapping from complex scenarios to sophisticated decisions such as dispatching rules; and 3) MARL has the scalability to be adopted in large-scale real-world scenarios. |
Tasks | Multi-agent Reinforcement Learning |
Published | 2019-03-07 |
URL | http://arxiv.org/abs/1903.02716v1 |
http://arxiv.org/pdf/1903.02716v1.pdf | |
PWC | https://paperswithcode.com/paper/can-sophisticated-dispatching-strategy |
Repo | |
Framework | |
NGEMM: Optimizing GEMM for Deep Learning via Compiler-based Techniques
Title | NGEMM: Optimizing GEMM for Deep Learning via Compiler-based Techniques |
Authors | Wenlei Bao, Li-Wen Chang, Yang Chen, Ke Deng, Amit Agarwal, Emad Barsoum, Abe Taha |
Abstract | Quantization has emerged as an effective way to significantly boost the performance of deep neural networks (DNNs) by utilizing low-bit computations. Despite having lower numerical precision, quantized DNNs are able to reduce both memory bandwidth and computation cycles with little loss of accuracy. Integer GEMM (General Matrix Multiplication) is critical to running quantized DNN models efficiently, as GEMM operations often dominate the computations in these models. Various approaches have been developed by leveraging techniques such as vectorization and memory layout to improve the performance of integer GEMM. However, these existing approaches are not fast enough in certain scenarios. We developed NGEMM, a compiler-based GEMM implementation for accelerating lower-precision training and inference. NGEMM makes better use of the vector units by avoiding unnecessary vector computation that is introduced during tree reduction. We compared NGEMM’s performance with state-of-the-art BLAS libraries such as MKL. Our experimental results showed that NGEMM outperformed MKL’s non-pack and pack versions by an average of 1.86x and 1.16x, respectively. We have applied NGEMM to a number of production services in Microsoft. |
Tasks | Quantization |
Published | 2019-10-01 |
URL | https://arxiv.org/abs/1910.00178v2 |
https://arxiv.org/pdf/1910.00178v2.pdf | |
PWC | https://paperswithcode.com/paper/ngemm-optimizing-gemm-for-deep-learning-via |
Repo | |
Framework | |
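Integer GEMM is the core primitive here: int8 operands are multiplied with accumulation in int32, and the result is rescaled back to floating point. Below is a minimal numpy sketch of that reference computation, which is what an optimized kernel like NGEMM must reproduce, without any of the compiler-level vectorization or layout tricks; the symmetric per-tensor quantization is a simplification.

```python
import numpy as np

def quantize(x, scale):
    """Symmetric int8 quantization of a float matrix."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def int8_gemm(a_q, b_q):
    """Reference integer GEMM: int8 inputs, accumulation in int32."""
    return a_q.astype(np.int32) @ b_q.astype(np.int32)

rng = np.random.default_rng(0)
a, b = rng.normal(size=(64, 128)), rng.normal(size=(128, 32))
sa, sb = np.abs(a).max() / 127, np.abs(b).max() / 127

acc = int8_gemm(quantize(a, sa), quantize(b, sb))   # exact int32 accumulator
c_approx = acc * (sa * sb)                          # dequantize back to float
print("max abs error vs float GEMM:", np.abs(c_approx - a @ b).max())
```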