April 3, 2020

3271 words 16 mins read

Paper Group ANR 67



Assembly robots with optimized control stiffness through reinforcement learning

Title Assembly robots with optimized control stiffness through reinforcement learning
Authors Masahide Oikawa, Kyo Kutsuzawa, Sho Sakaino, Toshiaki Tsuji
Abstract There is an increased demand for task automation in robots. Contact-rich tasks, wherein multiple contact transitions occur in a series of operations, are being extensively studied to realize high accuracy. In this study, we propose a methodology that uses reinforcement learning (RL) to achieve high performance in robots for the execution of assembly tasks that require precise contact with objects without causing damage. The proposed method ensures the online generation of stiffness matrices that help improve the performance of local trajectory optimization. The method has the advantage of rapid response owing to the short sampling time of the trajectory planning. The effectiveness of the method was verified via experiments involving two contact-rich tasks. The results indicate that the proposed method can be implemented in various contact-rich manipulations. A demonstration video shows the performance. (https://youtu.be/gxSCl7Tp4-0)
Published 2020-02-27
URL https://arxiv.org/abs/2002.12207v1
PDF https://arxiv.org/pdf/2002.12207v1.pdf
PWC https://paperswithcode.com/paper/assembly-robots-with-optimized-control
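The abstract does not give the control law, but stiffness-modulated assembly control is typically built on an impedance loop. As a hedged illustration only (1-D, unit mass, critical damping, semi-implicit Euler integration; none of these choices come from the paper), a sketch of how a stiffness gain shapes the response:

```python
import numpy as np

def impedance_step(x, v, x_ref, f_ext, K, D, M=1.0, dt=0.01):
    """One semi-implicit Euler step of a unit mass under an impedance law:
    stiffness K pulls toward the reference, damping D dissipates,
    f_ext is the measured contact force."""
    a = (K * (x_ref - x) - D * v + f_ext) / M
    v_next = v + a * dt
    x_next = x + v_next * dt
    return x_next, v_next

def simulate(K, steps=100, dt=0.01):
    """Track x_ref = 1 from rest, with critical damping D = 2*sqrt(K*M)."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        x, v = impedance_step(x, v, 1.0, 0.0, K, 2.0 * np.sqrt(K), dt=dt)
    return x
```

An RL agent in the paper's setting would output the stiffness (a full matrix, regenerated at each short sampling interval); here K is simply a constant per simulation, which already shows that a stiffer gain converges to the reference faster.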

Proximity-Based Active Learning on Streaming Data: A Personalized Eating Moment Recognition

Title Proximity-Based Active Learning on Streaming Data: A Personalized Eating Moment Recognition
Authors Marjan Nourollahi, Seyed Ali Rokni, Hassan Ghasemzadeh
Abstract Detecting when eating occurs is an essential step toward automatic dietary monitoring, medication adherence assessment, and diet-related health interventions. Wearable technologies play a central role in designing unobtrusive diet monitoring solutions by leveraging machine learning algorithms that work on time-series sensor data to detect eating moments. While much research has been done on developing activity recognition and eating moment detection algorithms, the performance of the detection algorithms drops substantially when the model trained on one user is utilized by a new user. To facilitate development of personalized models, we propose PALS, Proximity-based Active Learning on Streaming data, a novel proximity-based model for recognizing eating gestures with the goal of significantly decreasing the need for labeled data with new users. In particular, we propose an optimization problem to perform active learning under a limited query budget by leveraging unlabeled data. Our extensive analysis of data collected in both controlled and uncontrolled settings indicates that the F-score of PALS ranges from 22% to 39% for a budget that varies from 10 to 60 queries. Furthermore, compared to the state-of-the-art approaches, off-line PALS, on average, achieves 40% higher recall and 12% higher F-score in detecting eating gestures.
Tasks Active Learning, Activity Recognition, Time Series
Published 2020-03-29
URL https://arxiv.org/abs/2003.13098v1
PDF https://arxiv.org/pdf/2003.13098v1.pdf
PWC https://paperswithcode.com/paper/proximity-based-active-learning-on-streaming
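The paper's query-selection optimization is not specified in the abstract; a minimal stand-in is greedy farthest-point selection plus nearest-neighbour label propagation, which captures the proximity idea under a fixed query budget (the function names and the selection rule here are illustrative assumptions, not PALS itself):

```python
import numpy as np

def select_queries(unlabeled, budget, rng=None):
    """Greedy farthest-point selection: pick the points that best cover
    the unlabeled pool, spending at most `budget` label queries."""
    rng = np.random.default_rng(0) if rng is None else rng
    idx = [int(rng.integers(len(unlabeled)))]
    for _ in range(budget - 1):
        # distance of every point to its nearest already-queried point
        d = np.min(np.linalg.norm(
            unlabeled[:, None] - unlabeled[idx][None], axis=-1), axis=1)
        idx.append(int(np.argmax(d)))
    return idx

def propagate_labels(unlabeled, queried_idx, queried_labels):
    """Assign each point the label of its nearest queried neighbour."""
    d = np.linalg.norm(
        unlabeled[:, None] - unlabeled[queried_idx][None], axis=-1)
    return queried_labels[np.argmin(d, axis=1)]
```

On well-clustered gesture features, a handful of queries placed this way can label the whole pool accurately, which is the intuition behind trading query budget against proximity structure.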

$\text{A}^3$: Activation Anomaly Analysis

Title $\text{A}^3$: Activation Anomaly Analysis
Authors Philip Sperl, Jan-Philipp Schulze, Konstantin Böttinger
Abstract Inspired by the recent advances in coverage-guided analysis of neural networks (NNs), we propose a novel anomaly detection approach. We show that the hidden activation values in NNs contain information to distinguish between normal and anomalous samples. Common approaches for anomaly detection base the amount of novelty of each data point solely on one single decision variable. We refine this approach by incorporating the entire context of the model. With our data-driven method, we achieve strong anomaly detection results on common baseline data sets, e.g., MNIST and CSE-CIC-IDS2018, purely based on the automatic analysis of the data. Our anomaly detection method makes it easy to inspect data across different domains for anomalies without expert knowledge.
Tasks Anomaly Detection
Published 2020-03-03
URL https://arxiv.org/abs/2003.01801v1
PDF https://arxiv.org/pdf/2003.01801v1.pdf
PWC https://paperswithcode.com/paper/texta3-activation-anomaly-analysis
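A heavily simplified sketch of the underlying idea: score a sample by how far its hidden activations deviate from the activation profile observed on normal data. The per-unit z-score used here is a stand-in for the authors' learned anomaly network, and the fixed random weights stand in for a trained target model:

```python
import numpy as np

def hidden_activations(x, W, b):
    """One ReLU hidden layer; stands in for the target network whose
    activations are inspected (weights here are arbitrary)."""
    return np.maximum(0.0, x @ W + b)

def fit_normal_profile(acts):
    """Per-unit mean and std of activations on normal data."""
    return acts.mean(axis=0), acts.std(axis=0) + 1e-8

def anomaly_score(x, W, b, mu, sigma):
    """Mean absolute z-score of hidden activations: large when the
    activation pattern deviates from the normal profile."""
    a = hidden_activations(x, W, b)
    return np.abs((a - mu) / sigma).mean(axis=1)
```

The point of the sketch is that the score uses the whole hidden-layer context, not a single decision variable.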

A multi-agent ontologies-based clinical decision support system

Title A multi-agent ontologies-based clinical decision support system
Authors Ying Shen, Jacquet-Andrieu Armelle, Joël Colloc
Abstract Clinical decision support systems combine knowledge and data from a variety of sources, represented either by quantitative models based on stochastic methods or by qualitative models based on expert heuristics and deductive reasoning. At the same time, case-based reasoning (CBR) memorizes and returns the experience of solving similar problems. The cooperation of heterogeneous clinical knowledge bases (knowledge objects, semantic distances, evaluation functions, logical rules, databases…) is based on medical ontologies. A multi-agent decision support system (MADSS) enables the integration and cooperation of agents specialized in different fields of knowledge (semiology, pharmacology, clinical cases, etc.). Each specialist agent operates a knowledge base defining the conduct to be maintained in conformity with the state of the art, associated with an ontological basis that expresses the semantic relationships between the terms of the domain in question. Our approach is based on the specialization of agents adapted to the knowledge models used during the clinical steps and ontologies. This modular approach is suitable for the realization of MADSS in many areas.
Published 2020-01-21
URL https://arxiv.org/abs/2001.07374v1
PDF https://arxiv.org/pdf/2001.07374v1.pdf
PWC https://paperswithcode.com/paper/a-multi-agent-ontologies-based-clinical

Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition

Title Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition
Authors Yuanhang Zhang, Shuang Yang, Jingyun Xiao, Shiguang Shan, Xilin Chen
Abstract Recent advances in deep learning have heightened interest among researchers in the field of visual speech recognition (VSR). Currently, most existing methods equate VSR with automatic lip reading, which attempts to recognise speech by analysing lip motion. However, human experience and psychological studies suggest that we do not always fix our gaze at each other’s lips during a face-to-face conversation, but rather scan the whole face repetitively. This inspires us to revisit a fundamental yet somewhat overlooked problem: can VSR models benefit from reading extraoral facial regions, i.e. beyond the lips? In this paper, we perform a comprehensive study to evaluate the effects of different facial regions with state-of-the-art VSR models, including the mouth, the whole face, the upper face, and even the cheeks. Experiments are conducted on both word-level and sentence-level benchmarks with different characteristics. We find that despite the complex variations of the data, incorporating information from extraoral facial regions, even the upper face, consistently benefits VSR performance. Furthermore, we introduce a simple yet effective method based on Cutout to learn more discriminative features for face-based VSR, hoping to maximise the utility of information encoded in different facial regions. Our experiments show obvious improvements over existing state-of-the-art methods that use only the lip region as inputs, a result we believe will provide the VSR community with new and exciting insights.
Tasks Speech Recognition, Visual Speech Recognition
Published 2020-03-06
URL https://arxiv.org/abs/2003.03206v2
PDF https://arxiv.org/pdf/2003.03206v2.pdf
PWC https://paperswithcode.com/paper/can-we-read-speech-beyond-the-lips-rethinking
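The Cutout-based method itself is not detailed in the abstract, but the underlying augmentation is standard: zero out a random patch so the model cannot rely on any single facial region. A minimal sketch (patch size and placement policy are assumptions):

```python
import numpy as np

def cutout(image, size, rng):
    """Zero out one random square patch of side `size` (clipped at the
    borders), returning a copy. Forces features to spread beyond any
    single region, e.g. beyond the lips in face-based VSR."""
    img = image.copy()
    h, w = img.shape[:2]
    cy = int(rng.integers(h))
    cx = int(rng.integers(w))
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    img[y0:y1, x0:x1] = 0.0
    return img
```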

Gesticulator: A framework for semantically-aware speech-driven gesture generation

Title Gesticulator: A framework for semantically-aware speech-driven gesture generation
Authors Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, Simon Alexanderson, Iolanda Leite, Hedvig Kjellström
Abstract During speech, people spontaneously gesticulate, which plays a key role in conveying information. Similarly, realistic co-speech gestures are crucial to enable natural and smooth interactions with social agents. Current data-driven co-speech gesture generation systems use a single modality for representing speech: either audio or text. These systems are therefore confined to producing either acoustically-linked beat gestures or semantically-linked gesticulation (e.g., raising a hand when saying “high”): they cannot appropriately learn to generate both gesture types. We present a model designed to produce arbitrary beat and semantic gestures together. Our deep-learning based model takes both acoustic and semantic representations of speech as input, and generates gestures as a sequence of joint angle rotations as output. The resulting gestures can be applied to both virtual agents and humanoid robots. We illustrate the model’s efficacy with subjective and objective evaluations.
Tasks Gesture Generation
Published 2020-01-25
URL https://arxiv.org/abs/2001.09326v1
PDF https://arxiv.org/pdf/2001.09326v1.pdf
PWC https://paperswithcode.com/paper/gesticulator-a-framework-for-semantically

Performance of a Deep Neural Network at Detecting North Atlantic Right Whale Upcalls

Title Performance of a Deep Neural Network at Detecting North Atlantic Right Whale Upcalls
Authors Oliver S. Kirsebom, Fabio Frazao, Yvan Simard, Nathalie Roy, Stan Matwin, Samuel Giard
Abstract Passive acoustics provides a powerful tool for monitoring the endangered North Atlantic right whale (Eubalaena glacialis), but robust detection algorithms are needed to handle diverse and variable acoustic conditions and differences in recording techniques and equipment. Here, we investigate the potential of deep neural networks for addressing this need. ResNet, an architecture commonly used for image recognition, is trained to recognize the time-frequency representation of the characteristic North Atlantic right whale upcall. The network is trained on several thousand examples recorded at various locations in the Gulf of St. Lawrence in 2018 and 2019, using different equipment and deployment techniques. Used as a detection algorithm on fifty 30-minute recordings from the years 2015-2017 containing over one thousand upcalls, the network achieves recalls up to 80%, while maintaining a precision of 90%. Importantly, the performance of the network improves as more variance is introduced into the training dataset, whereas the opposite trend is observed using a conventional linear discriminant analysis approach. Our work demonstrates that deep neural networks can be trained to identify North Atlantic right whale upcalls under diverse and variable conditions with a performance that compares favorably to that of existing algorithms.
Published 2020-01-24
URL https://arxiv.org/abs/2001.09127v2
PDF https://arxiv.org/pdf/2001.09127v2.pdf
PWC https://paperswithcode.com/paper/performance-of-a-deep-neural-network-at
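The network's input is the time-frequency representation of the upcall. A minimal sketch of such a front end (window length, hop size, and the Hann window are illustrative choices, not the paper's settings):

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Magnitude STFT: slice the signal into Hann-windowed frames and
    take the real FFT of each. Returns (freq_bins, time_frames)."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T
```

An image network such as ResNet then treats this array exactly like a single-channel picture, with an upcall appearing as a rising contour.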

Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree

Title Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree
Authors Wenhan Wang, Ge Li, Bo Ma, Xin Xia, Zhi Jin
Abstract Code clones are pairs of code fragments that are semantically similar but may be syntactically similar or different. Detection of code clones can help to reduce the cost of software maintenance and prevent bugs. Numerous approaches for detecting code clones have been proposed previously, but most of them focus on detecting syntactic clones and do not work well on semantic clones with different syntactic features. To detect semantic clones, researchers have tried to adopt deep learning for code clone detection to automatically learn latent semantic features from data. In particular, to leverage grammar information, several approaches used abstract syntax trees (AST) as input and achieved significant progress on code clone benchmarks in various programming languages. However, these AST-based approaches still cannot fully leverage the structural information of code fragments, especially semantic information such as control flow and data flow. To leverage control and data flow information, in this paper we build a graph representation of programs called the flow-augmented abstract syntax tree (FA-AST). We construct FA-AST by augmenting original ASTs with explicit control and data flow edges. Then we apply two different types of graph neural networks (GNN) on FA-AST to measure the similarity of code pairs. To the best of our knowledge, we are the first to apply graph neural networks to the domain of code clone detection. We apply our FA-AST and graph neural networks to two Java datasets: Google Code Jam and BigCloneBench. Our approach outperforms the state-of-the-art approaches on both the Google Code Jam and BigCloneBench tasks.
Published 2020-02-20
URL https://arxiv.org/abs/2002.08653v1
PDF https://arxiv.org/pdf/2002.08653v1.pdf
PWC https://paperswithcode.com/paper/detecting-code-clones-with-graph-neural
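The construction can be pictured as: take the AST edge list, add explicit flow edges, then run message passing over the combined graph and compare graph embeddings. A toy sketch (dense adjacency, mean aggregation, no learned weights; the paper's GNN variants are more elaborate):

```python
import numpy as np

def add_flow_edges(ast_edges, flow_edges, n):
    """Build a symmetric adjacency matrix from AST edges plus explicit
    control/data-flow edges -- the FA-AST idea, simplified."""
    A = np.zeros((n, n))
    for u, v in ast_edges + flow_edges:
        A[u, v] = A[v, u] = 1.0
    return A

def gnn_embed(A, X, steps=2):
    """Mean-aggregation message passing over node features X; returns a
    graph-level embedding whose similarity can score clone pairs."""
    deg = A.sum(axis=1, keepdims=True) + 1e-8
    H = X
    for _ in range(steps):
        H = np.tanh((A @ H) / deg + H)
    return H.mean(axis=0)
```

Identical graphs map to identical embeddings, while changing the flow edges changes the embedding, which is what lets the similarity score see beyond pure syntax.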

One-Shot Bayes Opt with Probabilistic Population Based Training

Title One-Shot Bayes Opt with Probabilistic Population Based Training
Authors Jack Parker-Holder, Vu Nguyen, Stephen Roberts
Abstract Selecting optimal hyperparameters is a key challenge in machine learning. An exciting recent result showed it is possible to learn high-performing hyperparameter schedules on the fly in a single training run through methods inspired by Evolutionary Algorithms. These approaches have been shown to increase performance across a wide variety of machine learning tasks, ranging from supervised (SL) to reinforcement learning (RL). However, since they remain primarily evolutionary, they act in a greedy fashion and thus require a combination of vast computational resources and carefully selected meta-parameters to effectively explore the hyperparameter space. To address these shortcomings we look to Bayesian Optimization (BO), where a Gaussian Process surrogate model is combined with an acquisition function to produce a principled mechanism to trade off exploration vs. exploitation. Our approach, which we call Probabilistic Population-Based Training ($\mathrm{P2BT}$), is able to transfer the sample efficiency of BO to the online setting, making it possible to achieve these traits in a single training run. We show that $\mathrm{P2BT}$ is able to achieve high performance with only a small population size, making it useful for all researchers regardless of their computational resources.
Published 2020-02-06
URL https://arxiv.org/abs/2002.02518v1
PDF https://arxiv.org/pdf/2002.02518v1.pdf
PWC https://paperswithcode.com/paper/one-shot-bayes-opt-with-probabilistic
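The BO machinery the abstract refers to — a GP surrogate plus an acquisition function — can be sketched in a few lines. This is generic 1-D BO with an upper-confidence-bound acquisition, not the P2BT algorithm itself (kernel, length scale, and beta are illustrative defaults):

```python
import numpy as np

def gp_posterior(X, y, Xs, ls=1.0, noise=1e-4):
    """GP regression posterior with an RBF kernel -- the surrogate model."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mu, np.maximum(var, 0.0)

def ucb_next(X, y, candidates, beta=2.0):
    """Upper-confidence-bound acquisition: exploit high posterior mean,
    explore high posterior uncertainty."""
    mu, var = gp_posterior(np.asarray(X), np.asarray(y), candidates)
    return float(candidates[np.argmax(mu + beta * np.sqrt(var))])
```

Iterating evaluate-then-acquire concentrates samples near the optimum with few evaluations, which is the sample efficiency P2BT aims to carry over to the online, single-run setting.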

Federated Extra-Trees with Privacy Preserving

Title Federated Extra-Trees with Privacy Preserving
Authors Yang Liu, Mingxin Chen, Wenxi Zhang, Junbo Zhang, Yu Zheng
Abstract Data are commonly scattered across sites and difficult to centralize, and data privacy and security have become sensitive topics. Laws and regulations such as the European Union’s General Data Protection Regulation (GDPR) are designed to protect the public’s data privacy. However, machine learning requires a large amount of data for better performance, and the current circumstances put deploying real-life AI applications in an extremely difficult situation. To tackle these challenges, in this paper we propose a novel privacy-preserving federated machine learning model, named Federated Extra-Trees, which applies local differential privacy in the federated trees model. A secure multi-institutional machine learning system was developed to provide superior performance by processing the modeling jointly on different clients without exchanging any raw data. We validated the accuracy of our work by conducting extensive experiments on public datasets, and the efficiency and robustness were verified by simulating real-world scenarios. Overall, we present an extensible, scalable and practical solution to handle the data island problem.
Published 2020-02-18
URL https://arxiv.org/abs/2002.07323v1
PDF https://arxiv.org/pdf/2002.07323v1.pdf
PWC https://paperswithcode.com/paper/federated-extra-trees-with-privacy-preserving

#MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media

Title #MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media
Authors Viet Duong, Phu Pham, Ritwik Bose, Jiebo Luo
Abstract Recently, the emergence of the #MeToo trend on social media has empowered thousands of people to share their own sexual harassment experiences. This viral trend, in conjunction with the massive personal information and content available on Twitter, presents a promising opportunity to extract data-driven insights to complement the ongoing survey-based studies about sexual harassment in college. In this paper, we analyze the influence of the #MeToo trend on a pool of college followers. The results show that the majority of topics embedded in those #MeToo tweets detail sexual harassment stories, and there exists a significant correlation between the prevalence of this trend and official reports in several major geographical regions. Furthermore, we discover the outstanding sentiments of the #MeToo tweets using deep semantic meaning representations and their implications on the affected users experiencing different types of sexual harassment. We hope this study can raise further awareness regarding sexual misconduct in academia.
Published 2020-01-16
URL https://arxiv.org/abs/2001.05970v1
PDF https://arxiv.org/pdf/2001.05970v1.pdf
PWC https://paperswithcode.com/paper/metoo-on-campus-studying-college-sexual

EXPLAIN-IT: Towards Explainable AI for Unsupervised Network Traffic Analysis

Title EXPLAIN-IT: Towards Explainable AI for Unsupervised Network Traffic Analysis
Authors Andrea Morichetta, Pedro Casas, Marco Mellia
Abstract The application of unsupervised learning approaches, and in particular of clustering techniques, represents a powerful exploration means for the analysis of network measurements. Discovering underlying data characteristics, grouping similar measurements together, and identifying eventual patterns of interest are some of the applications which can be tackled through clustering. Being unsupervised, clustering does not always provide precise and clear insight into the produced output, especially when the input data structure and distribution are complex and difficult to grasp. In this paper we introduce EXPLAIN-IT, a methodology which deals with unlabeled data, creates meaningful clusters, and suggests an explanation of the clustering results to the end-user. EXPLAIN-IT relies on a novel explainable Artificial Intelligence (AI) approach, which makes it possible to understand the reasons leading to a particular decision of a supervised learning-based model, additionally extending its application to the unsupervised learning domain. We apply EXPLAIN-IT to the problem of YouTube video quality classification under encrypted traffic scenarios, showing promising results.
Published 2020-03-03
URL https://arxiv.org/abs/2003.01670v1
PDF https://arxiv.org/pdf/2003.01670v1.pdf
PWC https://paperswithcode.com/paper/explain-it-towards-explainable-ai-for

Improving the affordability of robustness training for DNNs

Title Improving the affordability of robustness training for DNNs
Authors Sidharth Gupta, Parijat Dube, Ashish Verma
Abstract Projected Gradient Descent (PGD) based adversarial training has become one of the most prominent methods for building robust deep neural network models. However, the computational complexity associated with this approach, due to the maximization of the loss function when finding adversaries, is a longstanding problem and may be prohibitive when using larger and more complex models. In this paper, we propose a modification of the PGD method for adversarial training and demonstrate that models can be trained much more efficiently without any loss in accuracy on natural and adversarial samples. We argue that the initial phase of adversarial training is redundant and can be replaced with natural training, thereby increasing the computational efficiency significantly. We support our argument with insights on the nature of the adversaries and their relative strength during the training process. We show that our proposed method can reduce the training time to as little as 38% of the original training time, with comparable model accuracy and generalization on various strengths of adversarial attacks.
Published 2020-02-11
URL https://arxiv.org/abs/2002.04237v1
PDF https://arxiv.org/pdf/2002.04237v1.pdf
PWC https://paperswithcode.com/paper/improving-the-affordability-of-robustness
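The proposal — replace the early, arguably redundant adversarial phase with cheap natural training — can be sketched on a logistic model. PGD here is the standard L-infinity attack; the warm-up split, step sizes, and the model are illustrative choices, not the paper's setup:

```python
import numpy as np

def pgd_attack(w, x, y, eps=0.3, alpha=0.1, steps=5):
    """L-inf PGD on a logistic model: ascend the loss w.r.t. the input,
    projecting back into the eps-ball around x after each step."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w)))
        grad = (p - y)[:, None] * w[None, :]   # d(loss)/d(input)
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

def train(X, y, warmup, epochs, lr=0.5):
    """Adversarial training with a natural warm-up: the first `warmup`
    epochs use clean inputs (skipping the costly attack), the rest
    train on freshly generated PGD adversaries."""
    w = np.zeros(X.shape[1])
    for epoch in range(epochs):
        Xb = X if epoch < warmup else pgd_attack(w, X, y)
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w
```

The savings come from the warm-up epochs, each of which avoids `steps` extra gradient passes.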

Rethinking Depthwise Separable Convolutions: How Intra-Kernel Correlations Lead to Improved MobileNets

Title Rethinking Depthwise Separable Convolutions: How Intra-Kernel Correlations Lead to Improved MobileNets
Authors Daniel Haase, Manuel Amthor
Abstract We introduce blueprint separable convolutions (BSConv) as highly efficient building blocks for CNNs. They are motivated by quantitative analyses of kernel properties from trained models, which show the dominance of correlations along the depth axis. Based on our findings, we formulate a theoretical foundation from which we derive efficient implementations using only standard layers. Moreover, our approach provides a thorough theoretical derivation, interpretation, and justification for the application of depthwise separable convolutions (DSCs) in general, which have become the basis of many modern network architectures. Ultimately, we reveal that DSC-based architectures such as MobileNets implicitly rely on cross-kernel correlations, while our BSConv formulation is based on intra-kernel correlations and thus allows for a more efficient separation of regular convolutions. Extensive experiments on large-scale and fine-grained classification datasets show that BSConvs clearly and consistently improve MobileNets and other DSC-based architectures without introducing any further complexity. For fine-grained datasets, we achieve an improvement of up to 13.7 percentage points. In addition, if used as drop-in replacement for standard architectures such as ResNets, BSConv variants also outperform their vanilla counterparts by up to 9.5 percentage points on ImageNet.
Published 2020-03-30
URL https://arxiv.org/abs/2003.13549v2
PDF https://arxiv.org/pdf/2003.13549v2.pdf
PWC https://paperswithcode.com/paper/2003-13549
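The key structural difference is the order of factors: a classic depthwise separable convolution is depthwise-then-pointwise, while the unconstrained BSConv variant applies the pointwise (blueprint-mixing) step first. A numpy sketch of that ordering (shapes and the parameter comparison are illustrative; strides, biases, and training are omitted):

```python
import numpy as np

def pointwise(x, W):
    """1x1 convolution: per-pixel linear mix of channels.
    x: (C_in, H, W) -> (C_out, H, W); W: (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', W, x)

def depthwise(x, K):
    """3x3 depthwise convolution with zero padding: one kernel per
    channel. x: (C, H, W); K: (C, 3, 3)."""
    C, H, Wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            out += K[:, dy, dx][:, None, None] * xp[:, dy:dy + H, dx:dx + Wd]
    return out

def bsconv(x, Wp, K):
    """Unconstrained BSConv: pointwise first, then depthwise -- the
    reverse order of a classic DSC block."""
    return depthwise(pointwise(x, Wp), K)
```

Either ordering uses C_out·C_in + C_out·9 parameters versus C_out·C_in·9 for a full 3x3 convolution; the paper's claim is that the pointwise-first factorization better matches the intra-kernel correlations seen in trained models.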

Feature Importance Estimation with Self-Attention Networks

Title Feature Importance Estimation with Self-Attention Networks
Authors Blaž Škrlj, Sašo Džeroski, Nada Lavrač, Matej Petkovič
Abstract Black-box neural network models are widely used in industry and science, yet are hard to understand and interpret. Recently, the attention mechanism was introduced, offering insights into the inner workings of neural language models. This paper explores the use of attention-based neural network mechanisms for estimating feature importance, as a means of explaining models learned from propositional (tabular) data. Feature importance estimates, assessed by the proposed Self-Attention Network (SAN) architecture, are compared with the established ReliefF, Mutual Information and Random Forest-based estimates, which are widely used in practice for model interpretation. For the first time, we conduct scale-free comparisons of feature importance estimates across algorithms on ten real and synthetic data sets to study the similarities and differences of the resulting feature importance estimates, showing that SANs identify similar high-ranked features as the other methods. We demonstrate that SANs identify feature interactions which in some cases yield better predictive performance than the baselines, suggesting that attention extends beyond interactions of just a few key features and detects larger feature subsets relevant for the considered learning task.
Tasks Feature Importance
Published 2020-02-11
URL https://arxiv.org/abs/2002.04464v1
PDF https://arxiv.org/pdf/2002.04464v1.pdf
PWC https://paperswithcode.com/paper/feature-importance-estimation-with-self
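A toy version of the idea: let a softmax attention vector gate the input features, train the model end-to-end, and read feature importance off the attention weights. A single global attention vector and a linear output layer are drastic simplifications of the SAN architecture, used here only to show the mechanism:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def train_san(X, y, epochs=300, lr=0.5):
    """Minimal attention model: pred = X @ (softmax(s) * w), trained by
    gradient descent on squared error. The learned softmax(s) serves as
    a feature-importance estimate."""
    d = X.shape[1]
    s = np.zeros(d)                       # attention logits
    w = np.zeros(d)                       # output weights
    for _ in range(epochs):
        a = softmax(s)
        err = X @ (a * w) - y
        g = X.T @ err / len(y)            # d(loss)/d(a*w)
        gw = g * a                        # chain rule through the gate
        ga = g * w
        gs = a * (ga - (ga * a).sum())    # softmax backprop
        w -= lr * gw
        s -= lr * gs
    return softmax(s)
```

On data where only one feature carries signal, the attention mass concentrates on it, which is the interpretability behavior the paper evaluates at scale.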