Paper Group ANR 1004
DLA: Dense-Layer-Analysis for Adversarial Example Detection. Sparse Lifting of Dense Vectors: Unifying Word and Sentence Representations. Liability Design for Autonomous Vehicles and Human-Driven Vehicles: A Hierarchical Game-Theoretic Approach. Robust Dense Mapping for Large-Scale Dynamic Environments. Prospect Theory Based Crowdsourcing for Class …
DLA: Dense-Layer-Analysis for Adversarial Example Detection
Title | DLA: Dense-Layer-Analysis for Adversarial Example Detection |
Authors | Philip Sperl, Ching-Yu Kao, Peng Chen, Konstantin Böttinger |
Abstract | In recent years, Deep Neural Networks (DNNs) have achieved remarkable results and even shown super-human capabilities in a broad range of domains. This has led people to trust DNNs’ classifications and the resulting actions even in security-sensitive environments like autonomous driving. Despite their impressive achievements, DNNs are known to be vulnerable to adversarial examples. Such inputs contain small perturbations that intentionally fool the attacked model. In this paper, we present a novel end-to-end framework to detect such attacks during classification without influencing the target model’s performance. Inspired by recent research in neuron-coverage guided testing, we show that dense layers of DNNs carry security-sensitive information. With a secondary DNN, we analyze the activation patterns of the dense layers at classification runtime, which enables effective and real-time detection of adversarial examples. Our prototype implementation successfully detects adversarial examples in image, natural language, and audio processing. We thereby cover a variety of target DNNs, including Long Short-Term Memory (LSTM) architectures. In addition to effectively defending against state-of-the-art attacks, our approach generalizes between different sets of adversarial examples. Thus, our method most likely enables us to detect even future, yet unknown, attacks. Finally, we show that our method cannot be easily bypassed by white-box adaptive attacks. |
Tasks | Autonomous Driving |
Published | 2019-11-05 |
URL | https://arxiv.org/abs/1911.01921v1 |
https://arxiv.org/pdf/1911.01921v1.pdf | |
PWC | https://paperswithcode.com/paper/dla-dense-layer-analysis-for-adversarial |
Repo | |
Framework | |
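The core idea of DLA — treating dense-layer activation patterns as features for a secondary detector — can be sketched on synthetic data. The Gaussian stand-in activations and the nearest-centroid rule below are illustrative assumptions of this sketch; the paper trains a secondary DNN on the activations of a real target model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for dense-layer activations: benign and adversarial inputs
# produce activation vectors drawn from two shifted distributions.
benign = rng.normal(loc=0.0, scale=1.0, size=(200, 64))
adversarial = rng.normal(loc=0.8, scale=1.0, size=(200, 64))

X = np.vstack([benign, adversarial])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = adversarial

# Secondary "detector": a nearest-centroid classifier over activation space
# (the paper uses a secondary DNN; a centroid rule keeps the sketch minimal).
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def detect(activations: np.ndarray) -> int:
    """Return 1 if the activation pattern looks adversarial, else 0."""
    dists = np.linalg.norm(centroids - activations, axis=1)
    return int(np.argmin(dists))

preds = np.array([detect(x) for x in X])
accuracy = (preds == y).mean()
```

Because detection only reads activations, it runs alongside classification without altering the target model's outputs, matching the "no influence on performance" claim.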
Sparse Lifting of Dense Vectors: Unifying Word and Sentence Representations
Title | Sparse Lifting of Dense Vectors: Unifying Word and Sentence Representations |
Authors | Wenye Li, Senyue Hao |
Abstract | As the first step in automated natural language processing, representing words and sentences is of central importance and has attracted significant research attention. Different approaches have been proposed, from the early one-hot and bag-of-words representations to more recent distributional dense and sparse representations. Despite the successful results that have been achieved, such vectors tend to consist of uninterpretable components and face nontrivial challenges in both memory and computational requirements in practical applications. In this paper, we design a novel representation model that projects dense word vectors into a higher-dimensional space and favors a highly sparse and binary representation of word vectors with potentially interpretable components, while trying to maintain the pairwise inner products between the original vectors as much as possible. Computationally, our model is relaxed to a symmetric non-negative matrix factorization problem which admits a fast yet effective solution. In a series of empirical evaluations, the proposed model exhibited consistent improvement and high potential in practical applications. |
Tasks | |
Published | 2019-11-05 |
URL | https://arxiv.org/abs/1911.01625v1 |
https://arxiv.org/pdf/1911.01625v1.pdf | |
PWC | https://paperswithcode.com/paper/sparse-lifting-of-dense-vectors-unifying-word |
Repo | |
Framework | |
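The relaxation described in the abstract — preserve pairwise inner products while lifting into a higher-dimensional non-negative space — can be sketched with a damped multiplicative update for symmetric NMF. Clipping negative Gram entries and the top-k binarization rule are assumptions of this sketch, not necessarily the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dense word vectors (n words, d dimensions), standing in for
# word2vec/GloVe outputs.
n, d, m = 50, 20, 80          # lift into m > d dimensions
V = rng.normal(size=(n, d))

# Pairwise inner products to preserve; clipping negatives so the Gram
# matrix admits a non-negative factorization.
G = np.maximum(V @ V.T, 0.0)

def frobenius_error(W):
    return np.linalg.norm(G - W @ W.T)

# Symmetric NMF  G ~ W W^T  via damped multiplicative updates.
W = rng.random((n, m))
error_before = frobenius_error(W)
beta = 0.5
for _ in range(300):
    W *= (1 - beta) + beta * (G @ W) / (W @ (W.T @ W) + 1e-9)
error_after = frobenius_error(W)

# Sparse binary codes: keep the k largest entries of each row.
k = 8
codes = np.zeros_like(W)
codes[np.arange(n)[:, None], np.argsort(W, axis=1)[:, -k:]] = 1.0
```

Each word ends up as an m-dimensional binary vector with exactly k active components, giving the sparse, potentially interpretable representation the abstract describes.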
Liability Design for Autonomous Vehicles and Human-Driven Vehicles: A Hierarchical Game-Theoretic Approach
Title | Liability Design for Autonomous Vehicles and Human-Driven Vehicles: A Hierarchical Game-Theoretic Approach |
Authors | Xuan Di, Xu Chen, Eric Talley |
Abstract | Autonomous vehicles (AVs) are inevitably entering our lives with potential benefits for improved traffic safety, mobility, and accessibility. However, AVs’ benefits also introduce a serious potential challenge, in the form of complex interactions with human-driven vehicles (HVs). The emergence of AVs introduces uncertainty in the behavior of human actors and in the impact of the AV manufacturer on autonomous driving design. This paper thus aims to investigate how AVs affect road safety and to design socially optimal liability rules for AVs and human drivers. A unified game is developed, including a Nash game between human drivers, a Stackelberg game between the AV manufacturer and HVs, and a Stackelberg game between the lawmaker and the other users. We also establish the existence and uniqueness of the equilibrium of the game. The game is then simulated with numerical examples to investigate the emergence of human drivers’ moral hazard, the AV manufacturer’s role in traffic safety, and the lawmaker’s role in liability design. Our findings demonstrate that human drivers can develop moral hazard if they perceive their road environment as having become safer, and that an optimal liability rule design is crucial to improving social welfare with advanced transportation technologies. More generally, the game-theoretic model developed in this paper provides an analytical tool to assist policymakers in AV policymaking and, hopefully, to mitigate uncertainty in the existing regulatory landscape around AV technologies. |
Tasks | Autonomous Driving, Autonomous Vehicles |
Published | 2019-11-05 |
URL | https://arxiv.org/abs/1911.02405v2 |
https://arxiv.org/pdf/1911.02405v2.pdf | |
PWC | https://paperswithcode.com/paper/liability-design-for-autonomous-vehicles-and |
Repo | |
Framework | |
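The moral-hazard effect the simulations study can be illustrated with a toy best-response computation: when liability shifts away from human drivers, their equilibrium care level falls. The quadratic effort cost and multiplicative accident probability below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def best_response(a_other, liability, effort_cost=1.0, base_risk=1.0):
    """Care level minimizing  effort_cost*a^2 + liability*base_risk*(1-a)*(1-a_other)."""
    a = liability * base_risk * (1 - a_other) / (2 * effort_cost)
    return float(np.clip(a, 0.0, 1.0))

def nash_care(liability, iters=200):
    """Iterate best responses of two symmetric drivers to a fixed point."""
    a1 = a2 = 0.5
    for _ in range(iters):
        a1, a2 = best_response(a2, liability), best_response(a1, liability)
    return a1

care_full = nash_care(1.0)   # drivers bear full liability
care_low = nash_care(0.2)    # most liability shifted away (e.g. to the AV side)
```

With full liability the fixed point solves a = (1 - a)/2, giving care 1/3; with a 0.2 liability share it solves a = 0.1(1 - a), giving care 1/11 — equilibrium care drops as drivers are insulated from accident costs, the moral-hazard pattern the paper's numerical examples probe.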
Robust Dense Mapping for Large-Scale Dynamic Environments
Title | Robust Dense Mapping for Large-Scale Dynamic Environments |
Authors | Ioan Andrei Bârsan, Peidong Liu, Marc Pollefeys, Andreas Geiger |
Abstract | We present a stereo-based dense mapping algorithm for large-scale dynamic urban environments. In contrast to other existing methods, we simultaneously reconstruct the static background, the moving objects, and the potentially moving but currently stationary objects separately, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments. We use both instance-aware semantic segmentation and sparse scene flow to classify objects as either background, moving, or potentially moving, thereby ensuring that the system is able to model objects with the potential to transition from static to dynamic, such as parked cars. Given camera poses estimated from visual odometry, both the background and the (potentially) moving objects are reconstructed separately by fusing the depth maps computed from the stereo input. In addition to visual odometry, sparse scene flow is also used to estimate the 3D motions of the detected moving objects, in order to reconstruct them accurately. A map pruning technique is further developed to improve reconstruction accuracy and reduce memory consumption, leading to increased scalability. We evaluate our system thoroughly on the well-known KITTI dataset. Our system is capable of running on a PC at approximately 2.5Hz, with the primary bottleneck being the instance-aware semantic segmentation, which is a limitation we hope to address in future work. The source code is available from the project website (http://andreibarsan.github.io/dynslam). |
Tasks | Semantic Segmentation, Visual Odometry |
Published | 2019-05-07 |
URL | https://arxiv.org/abs/1905.02781v1 |
https://arxiv.org/pdf/1905.02781v1.pdf | |
PWC | https://paperswithcode.com/paper/robust-dense-mapping-for-large-scale-dynamic |
Repo | |
Framework | |
Prospect Theory Based Crowdsourcing for Classification in the Presence of Spammers
Title | Prospect Theory Based Crowdsourcing for Classification in the Presence of Spammers |
Authors | Baocheng Geng, Qunwei Li, Pramod K. Varshney |
Abstract | We consider the $M$-ary classification problem via crowdsourcing, where crowd workers respond to simple binary questions and the answers are aggregated via decision fusion. The workers have a reject option to skip answering a question when they do not have the expertise, or when the confidence of answering that question correctly is low. We further consider that there are spammers in the crowd who respond to the questions with random guesses. Under the payment mechanism that encourages the reject option, we study the behavior of honest workers and spammers, whose objectives are to maximize their monetary rewards. To accurately characterize human behavioral aspects, we employ prospect theory to model the rationality of the crowd workers, whose perceptions of costs and probabilities are distorted based on some value and weight functions, respectively. Moreover, we estimate the number of spammers and employ a weighted majority voting decision rule, where we assign an optimal weight for every worker to maximize the system performance. The probability of correct classification and asymptotic system performance are derived. We also provide simulation results to demonstrate the effectiveness of our approach. |
Tasks | |
Published | 2019-09-03 |
URL | https://arxiv.org/abs/1909.01463v1 |
https://arxiv.org/pdf/1909.01463v1.pdf | |
PWC | https://paperswithcode.com/paper/prospect-theory-based-crowdsourcing-for |
Repo | |
Framework | |
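The fusion step can be sketched as a weighted majority vote with a reject option. The log-odds weights below are the classic rule for independent workers of known accuracy — an assumption of this sketch; the paper derives optimal weights under prospect-theoretic behavior, which is not modeled here.

```python
import numpy as np

def weighted_majority(answers, reliabilities):
    """Fuse binary answers (+1/-1, 0 = reject) with log-odds weights.

    A worker with reliability p gets weight log(p/(1-p)), so spammers
    (p ~ 0.5) get weight ~ 0 and rejects contribute nothing to the score.
    """
    answers = np.asarray(answers, dtype=float)
    p = np.asarray(reliabilities, dtype=float)
    weights = np.log(p / (1 - p))
    score = np.sum(weights * answers)
    return 1 if score >= 0 else -1

# Three honest workers (80% accurate), two spammers (random guessers).
answers = [1, 1, -1, -1, 0]               # last worker rejects
reliabilities = [0.8, 0.8, 0.8, 0.5, 0.5]
decision = weighted_majority(answers, reliabilities)
```

Because spammer weights vanish, a single reliable worker can overrule any number of random guessers, which is the intuition behind down-weighting estimated spammers in the paper's rule.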
Explicitly Bayesian Regularizations in Deep Learning
Title | Explicitly Bayesian Regularizations in Deep Learning |
Authors | Xinjie Lan, Kenneth E. Barner |
Abstract | Generalization is essential for deep learning. In contrast to previous works claiming that Deep Neural Networks (DNNs) have an implicit regularization implemented by stochastic gradient descent, we demonstrate explicitly Bayesian regularizations in a specific category of DNNs, i.e., Convolutional Neural Networks (CNNs). First, we introduce a novel probabilistic representation for the hidden layers of CNNs and demonstrate that CNNs correspond to Bayesian networks with the serial connection. Furthermore, we show that the hidden layers close to the input formulate prior distributions, and thus CNNs have explicitly Bayesian regularizations based on the Bayesian regularization theory. In addition, we clarify two recently observed empirical phenomena that are inconsistent with traditional theories of generalization. Finally, we validate the proposed theory on a synthetic dataset. |
Tasks | |
Published | 2019-10-22 |
URL | https://arxiv.org/abs/1910.09732v1 |
https://arxiv.org/pdf/1910.09732v1.pdf | |
PWC | https://paperswithcode.com/paper/explicitly-bayesian-regularizations-in-deep |
Repo | |
Framework | |
The space complexity of inner product filters
Title | The space complexity of inner product filters |
Authors | Rasmus Pagh, Johan Sivertsen |
Abstract | Motivated by the problem of filtering candidate pairs in inner product similarity joins we study the following inner product estimation problem: Given parameters $d\in {\bf N}$, $\alpha>\beta\geq 0$ and unit vectors $x,y\in {\bf R}^{d}$ consider the task of distinguishing between the cases $\langle x, y\rangle\leq\beta$ and $\langle x, y\rangle\geq \alpha$ where $\langle x, y\rangle = \sum_{i=1}^d x_i y_i$ is the inner product of vectors $x$ and $y$. The goal is to distinguish these cases based on information on each vector encoded independently in a bit string of the shortest length possible. In contrast to much work on compressing vectors using randomized dimensionality reduction, we seek to solve the problem deterministically, with no probability of error. Inner product estimation can be solved in general via estimating $\langle x, y\rangle$ with an additive error bounded by $\varepsilon = \alpha - \beta$. We show that $d \log_2 \left(\tfrac{\sqrt{1-\beta}}{\varepsilon}\right) \pm \Theta(d)$ bits of information about each vector is necessary and sufficient. Our upper bound is constructive and improves a known upper bound of $d \log_2(1/\varepsilon) + O(d)$ by up to a factor of 2 when $\beta$ is close to $1$. The lower bound holds even in a stronger model where one of the vectors is known exactly, and an arbitrary estimation function is allowed. |
Tasks | Dimensionality Reduction |
Published | 2019-09-24 |
URL | https://arxiv.org/abs/1909.10766v2 |
https://arxiv.org/pdf/1909.10766v2.pdf | |
PWC | https://paperswithcode.com/paper/the-space-complexity-of-inner-product-filters |
Repo | |
Framework | |
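The distinguishing task reduces to additive inner-product estimation, and a simple deterministic rounding scheme already achieves it — with more bits per vector than the paper's improved bound, so treat this as a weaker baseline. Rounding each coordinate of a unit vector to a grid of step $\delta = \varepsilon/(2\sqrt{d})$ keeps $\|\hat x - x\| \leq \varepsilon/4$, so the inner-product error stays below $\varepsilon$.

```python
import numpy as np

rng = np.random.default_rng(2)
d, epsilon = 64, 0.1

# Deterministic per-coordinate quantization: round to a grid of step delta.
# ||x_hat - x|| <= (delta/2)*sqrt(d), so delta = epsilon/(2*sqrt(d)) bounds
# the additive inner-product error by epsilon (a loose textbook bound,
# not the paper's sharper one).
delta = epsilon / (2 * np.sqrt(d))

def quantize(v):
    return np.round(v / delta) * delta

def unit(v):
    return v / np.linalg.norm(v)

x, y = unit(rng.normal(size=d)), unit(rng.normal(size=d))
estimate = quantize(x) @ quantize(y)
error = abs(estimate - x @ y)
```

Each coordinate lies in [-1, 1] and needs about log2(2/delta) bits, i.e. roughly d·log2(1/ε) plus lower-order terms per vector — the regime the paper improves by up to a factor of 2 when β is close to 1.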
On the Vietnamese Name Entity Recognition: A Deep Learning Method Approach
Title | On the Vietnamese Name Entity Recognition: A Deep Learning Method Approach |
Authors | Ngoc C. Lê, Ngoc-Yen Nguyen, Anh-Duong Trinh |
Abstract | Named entity recognition (NER) plays an important role in text-based information retrieval. In this paper, we combine Bidirectional Long Short-Term Memory (Bi-LSTM) \cite{hochreiter1997,schuster1997} with Conditional Random Fields (CRF) \cite{lafferty2001} to create a novel deep learning model for the NER problem. Each input word of the deep learning model is represented by a Word2vec-trained vector; the word embedding set was trained on about one million articles collected in 2018 through a Vietnamese news portal (baomoi.com). In addition, we concatenate the Word2vec\cite{mikolov2013}-trained vector with a semantic feature vector (Part-Of-Speech (POS) tag, chunk tag) and a hidden syntactic feature vector (extracted by a Bi-LSTM network) to achieve the (so far) best result for a Vietnamese NER system. The evaluation was conducted on the dataset of the VLSP2016 (Vietnamese Language and Speech Processing 2016 \cite{vlsp2016}) competition. |
Tasks | Information Retrieval, Named Entity Recognition, Part-Of-Speech Tagging |
Published | 2019-11-18 |
URL | https://arxiv.org/abs/1912.01109v1 |
https://arxiv.org/pdf/1912.01109v1.pdf | |
PWC | https://paperswithcode.com/paper/on-the-vietnamese-name-entity-recognition-a |
Repo | |
Framework | |
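The input representation described above — a Word2vec vector concatenated with POS and chunk features — can be sketched as follows. The dimensions, tag inventories, and lookup are hypothetical placeholders; real vectors would come from a Word2vec model trained on the baomoi.com corpus, and the result would be fed to the Bi-LSTM-CRF.

```python
import numpy as np

rng = np.random.default_rng(3)

W2V_DIM = 300                               # assumed embedding size
POS_TAGS = ["N", "V", "A", "P", "other"]    # illustrative tag sets, not VLSP's
CHUNK_TAGS = ["B-NP", "I-NP", "O"]

def one_hot(value, vocabulary):
    vec = np.zeros(len(vocabulary))
    vec[vocabulary.index(value)] = 1.0
    return vec

def word_features(w2v_vector, pos, chunk):
    """Concatenate word2vec + POS one-hot + chunk one-hot into one input vector."""
    return np.concatenate([w2v_vector,
                           one_hot(pos, POS_TAGS),
                           one_hot(chunk, CHUNK_TAGS)])

vec = word_features(rng.normal(size=W2V_DIM), "N", "B-NP")
```

Concatenation keeps the distributional and hand-labeled semantic signals in disjoint coordinate blocks, so the downstream network can weight them independently.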
A Novel Automation-Assisted Cervical Cancer Reading Method Based on Convolutional Neural Network
Title | A Novel Automation-Assisted Cervical Cancer Reading Method Based on Convolutional Neural Network |
Authors | Yao Xiang, Wanxin Sun, Changli Pan, Meng Yan, Zhihua Yin, Yixiong Liang |
Abstract | While most previous automation-assisted reading methods can improve efficiency, their performance often relies on the success of accurate cell segmentation and hand-crafted feature extraction. This paper presents an efficient and totally segmentation-free method for automated cervical cell screening that utilizes a modern object detector to directly detect cervical cells or clumps, without the design of specific hand-crafted features. Specifically, we use a state-of-the-art CNN-based object detection method, YOLOv3, as our baseline model. In order to improve the classification performance on hard examples, which are four highly similar categories, we cascade an additional task-specific classifier. We also investigate the presence of unreliable annotations and cope with them by smoothing the distribution of noisy labels. We comprehensively evaluate our methods on a test set consisting of 1,014 annotated cervical cell images of size 4000×3000 with complex cellular situations, corresponding to 10 categories. Our model achieves 97.5% sensitivity (Sens) and 67.8% specificity (Spec) on cervical cell image-level screening. Moreover, we obtain a mean Average Precision (mAP) of 63.4% on cervical cell-level diagnosis, and improve the Average Precision (AP) of hard examples which are valuable but difficult to distinguish. Our automation-assisted cervical cell reading method not only achieves cervical cell image-level classification but also provides more detailed location and category information for abnormal cells. The results indicate the feasibility of our method, together with its efficiency and robustness, providing a new direction for the future development of computer-assisted reading systems in clinical cervical screening. |
Tasks | Cell Segmentation, Object Detection |
Published | 2019-12-14 |
URL | https://arxiv.org/abs/1912.06649v1 |
https://arxiv.org/pdf/1912.06649v1.pdf | |
PWC | https://paperswithcode.com/paper/a-novel-automation-assisted-cervical-cancer |
Repo | |
Framework | |
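The noisy-label handling mentioned in the abstract is a form of label smoothing. A standard recipe is shown below; the smoothing strength `alpha` used by the paper is not stated here and is an assumption of this sketch.

```python
import numpy as np

def smooth_labels(one_hot, alpha=0.1):
    """Soften a one-hot target to (1 - alpha) * one_hot + alpha / K.

    Spreading a small probability mass over all K classes reduces the
    penalty for confidently wrong annotations, a common way to cope
    with unreliable labels.
    """
    one_hot = np.asarray(one_hot, dtype=float)
    k = one_hot.shape[-1]
    return (1.0 - alpha) * one_hot + alpha / k

target = smooth_labels([0, 0, 1, 0, 0], alpha=0.1)
```

The smoothed target is still a valid distribution (it sums to 1), so it can replace the hard label directly in a cross-entropy loss.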
Auto-labelling of Markers in Optical Motion Capture by Permutation Learning
Title | Auto-labelling of Markers in Optical Motion Capture by Permutation Learning |
Authors | Saeed Ghorbani, Ali Etemad, Nikolaus F. Troje |
Abstract | Optical marker-based motion capture is a vital tool in applications such as motion and behavioural analysis, animation, and biomechanics. Labelling, that is, assigning optical markers to pre-defined positions on the body, is a time-consuming and labour-intensive postprocessing part of current motion capture pipelines. The problem can be considered as a ranking process in which markers shuffled by an unknown permutation matrix are sorted to recover the correct order. In this paper, we present a framework for automatic marker labelling which first estimates a permutation matrix for each individual frame using a differentiable permutation learning model and then utilizes temporal consistency to identify and correct remaining labelling errors. Experiments conducted on the test data show the effectiveness of our framework. |
Tasks | Motion Capture |
Published | 2019-07-31 |
URL | https://arxiv.org/abs/1907.13580v1 |
https://arxiv.org/pdf/1907.13580v1.pdf | |
PWC | https://paperswithcode.com/paper/auto-labelling-of-markers-in-optical-motion |
Repo | |
Framework | |
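The standard differentiable relaxation of permutation matrices is Sinkhorn normalization: alternately normalizing the rows and columns of a positive score matrix yields a near doubly stochastic matrix, from which a hard assignment is read off per frame. The paper's exact architecture may differ; this is a minimal sketch of the mechanism.

```python
import numpy as np

def sinkhorn(scores, iters=50, tau=0.1):
    """Relax a score matrix into a (near) doubly stochastic matrix by
    alternating row and column normalization of exp(scores / tau)."""
    P = np.exp(scores / tau)
    for _ in range(iters):
        P /= P.sum(axis=1, keepdims=True)   # normalize rows
        P /= P.sum(axis=0, keepdims=True)   # normalize columns
    return P

rng = np.random.default_rng(4)
true_perm = rng.permutation(5)

# Scores: high where marker i should map to label slot true_perm[i].
scores = rng.normal(scale=0.1, size=(5, 5))
scores[np.arange(5), true_perm] += 3.0

P = sinkhorn(scores)
recovered = P.argmax(axis=1)   # hard assignment used for labelling
```

Because every operation in `sinkhorn` is differentiable, the relaxation can sit at the end of a neural network and be trained end-to-end, with the temporal-consistency pass cleaning up residual per-frame errors.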
Active Annotation: bootstrapping annotation lexicon and guidelines for supervised NLU learning
Title | Active Annotation: bootstrapping annotation lexicon and guidelines for supervised NLU learning |
Authors | Federico Marinelli, Alessandra Cervone, Giuliano Tortoreto, Evgeny A. Stepanov, Giuseppe Di Fabbrizio, Giuseppe Riccardi |
Abstract | Natural Language Understanding (NLU) models are typically trained in a supervised learning framework. In the case of intent classification, the predicted labels are predefined and based on the designed annotation schema, while the labelling process is a laborious task in which annotators manually inspect each utterance and assign the corresponding label. We propose an Active Annotation (AA) approach where we combine an unsupervised learning method in the embedding space, a human-in-the-loop verification process, and linguistic insights to create lexicons with open categories that can be adapted over time. In particular, annotators define the y-label space on the fly during annotation using an iterative process and without the need for prior knowledge about the input data. We evaluate the proposed annotation paradigm in a real-use-case NLU scenario. Results show that our Active Annotation paradigm produces accurate, higher-quality training data at an annotation speed an order of magnitude higher than the traditional human-only driven baseline annotation methodology. |
Tasks | Intent Classification |
Published | 2019-08-12 |
URL | https://arxiv.org/abs/1908.04092v1 |
https://arxiv.org/pdf/1908.04092v1.pdf | |
PWC | https://paperswithcode.com/paper/active-annotation-bootstrapping-annotation |
Repo | |
Framework | |
Weakly Supervised Adversarial Domain Adaptation for Semantic Segmentation in Urban Scenes
Title | Weakly Supervised Adversarial Domain Adaptation for Semantic Segmentation in Urban Scenes |
Authors | Qi Wang, Junyu Gao, Xuelong Li |
Abstract | Semantic segmentation, a pixel-level vision task, has developed rapidly through the use of convolutional neural networks (CNNs). Training CNNs requires a large amount of labeled data, but manually annotating data is difficult. To reduce the annotation burden, several synthetic datasets have been released in recent years. However, they still differ from real scenes, so a model trained on synthetic data (the source domain) does not achieve good performance on real urban scenes (the target domain). In this paper, we propose a weakly supervised adversarial domain adaptation method to improve segmentation performance from synthetic data to real scenes, which consists of three deep neural networks. Specifically, a detection and segmentation (“DS” for short) model focuses on detecting objects and predicting the segmentation map; a pixel-level domain classifier (“PDC” for short) tries to distinguish which domain image features come from; and an object-level domain classifier (“ODC” for short) discriminates which domain objects come from and predicts their classes. PDC and ODC are treated as discriminators, and DS is considered the generator. Through adversarial learning, DS learns domain-invariant features. In experiments, our proposed method sets a new record on the mIoU metric for this problem. |
Tasks | Domain Adaptation, Semantic Segmentation |
Published | 2019-04-19 |
URL | http://arxiv.org/abs/1904.09092v1 |
http://arxiv.org/pdf/1904.09092v1.pdf | |
PWC | https://paperswithcode.com/paper/weakly-supervised-adversarial-domain |
Repo | |
Framework | |
Recognition Of Surface Defects On Steel Sheet Using Transfer Learning
Title | Recognition Of Surface Defects On Steel Sheet Using Transfer Learning |
Authors | Jingwen Fu, Xiaoyan Zhu, Yingbin Li |
Abstract | Automatic defect recognition is one of the research hotspots in steel production, but most current methods extract features manually and use machine learning classifiers to recognize defects; they cannot cope with situations where little training data is available, and they are confined to a particular scene. In this paper, we therefore propose a new approach that uses part of a pretrained VGG16 as a feature extractor and a new CNN as a classifier to recognize defects on the steel strip surface from the feature maps created by the feature extractor. Our method achieves accuracies of 99.1% and 96.0% when the dataset contains 150 and 10 images per class, respectively, which is much better than previous methods. |
Tasks | Transfer Learning |
Published | 2019-09-07 |
URL | https://arxiv.org/abs/1909.03258v2 |
https://arxiv.org/pdf/1909.03258v2.pdf | |
PWC | https://paperswithcode.com/paper/recognition-of-surface-defects-on-steel-sheet |
Repo | |
Framework | |
Automatic Colon Polyp Detection using Region based Deep CNN and Post Learning Approaches
Title | Automatic Colon Polyp Detection using Region based Deep CNN and Post Learning Approaches |
Authors | Younghak Shin, Hemin Ali Qadir, Lars Aabakken, Jacob Bergsland, Ilangko Balasingham |
Abstract | Automatic detection of colonic polyps remains an unsolved problem due to the large variation of polyps in terms of shape, texture, size, and color, and the existence of various polyp-like mimics during colonoscopy. In this study, we apply a recent region-based convolutional neural network (CNN) approach to the automatic detection of polyps in images and videos obtained from colonoscopy examinations. We use a deep CNN model (Inception ResNet) in a transfer learning scheme in the detection system. To overcome the obstacles of polyp detection and the small number of polyp images, we examine image augmentation strategies for training deep networks. We further propose two efficient post-learning methods, namely automatic false-positive learning and offline learning, both of which can be incorporated into the region-based detection system for reliable polyp detection. Using large colonoscopy databases, experimental results demonstrate that the suggested detection systems show better performance compared to other systems in the literature. Furthermore, we show improved detection performance using the proposed post-learning schemes on colonoscopy videos. |
Tasks | Image Augmentation, Transfer Learning |
Published | 2019-06-27 |
URL | https://arxiv.org/abs/1906.11463v1 |
https://arxiv.org/pdf/1906.11463v1.pdf | |
PWC | https://paperswithcode.com/paper/automatic-colon-polyp-detection-using-region |
Repo | |
Framework | |
An Effective Hit-or-Miss Layer Favoring Feature Interpretation as Learned Prototypes Deformations
Title | An Effective Hit-or-Miss Layer Favoring Feature Interpretation as Learned Prototypes Deformations |
Authors | A. Deliege, A. Cioppa, M. Van Droogenbroeck |
Abstract | Neural networks designed for the task of classification have become a commodity in recent years. Many works target the development of more effective networks, which results in a complexification of their architectures with more layers, multiple sub-networks, or even the combination of multiple classifiers, but this often comes at the expense of producing uninterpretable black boxes. In this paper, we redesign a simple capsule network to enable it to synthesize class-representative samples, called prototypes, by replacing the last layer with a novel Hit-or-Miss layer. This layer contains activated vectors, called capsules, that we train to hit or miss a fixed target capsule by tailoring a specific centripetal loss function. This makes it possible to develop a data augmentation step that combines information from the data space and the feature space, resulting in a hybrid data augmentation process. We show that our network, named HitNet, is able to reach better performance than the initial CapsNet on several datasets, while allowing the nature of the extracted features to be visualized as deformations of the prototypes, which provides direct insight into the feature representation learned by the network. |
Tasks | Data Augmentation |
Published | 2019-02-23 |
URL | https://arxiv.org/abs/1911.05588v1 |
https://arxiv.org/pdf/1911.05588v1.pdf | |
PWC | https://paperswithcode.com/paper/an-effective-hit-or-miss-layer-favoring |
Repo | |
Framework | |
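The hit-or-miss mechanism can be sketched as a margin loss that pulls the true-class capsule toward a fixed target capsule and pushes the others away. The target value, margins, and exact functional form below are assumptions of this illustration; see the paper for HitNet's actual centripetal loss.

```python
import numpy as np

def centripetal_loss(capsules, label, target=0.5, m_hit=0.1, m_miss=0.4):
    """Illustrative centripetal loss over a (num_classes, capsule_dim) array:
    the capsule of the true class should lie within m_hit of the target
    capsule (all components = target); every other capsule should stay at
    least m_miss away from it."""
    dists = np.linalg.norm(capsules - target, axis=1)
    hit = max(0.0, dists[label] - m_hit) ** 2          # true class: be close
    miss = sum(max(0.0, m_miss - d) ** 2               # others: stay far
               for i, d in enumerate(dists) if i != label)
    return hit + miss

caps_good = np.full((3, 4), 0.5)    # true-class capsule exactly on target
caps_good[1:] = 0.0                 # other capsules far from target
loss_good = centripetal_loss(caps_good, label=0)

caps_bad = np.zeros((3, 4))         # true-class capsule misses the target
loss_bad = centripetal_loss(caps_bad, label=0)
```

Driving the true-class capsule onto a fixed, known target is what lets the decoder later synthesize prototypes: feeding the target capsule itself through the decoder yields a class-representative sample.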