Paper Group ANR 198
PUMiner: Mining Security Posts from Developer Question and Answer Websites with PU Learning. Modeling 3D Shapes by Reinforcement Learning. MQA: Answering the Question via Robotic Manipulation. Overfitting Can Be Harmless for Basis Pursuit: Only to a Degree. Improving Deep Learning For Airbnb Search. Modelling and Quantifying Membership Information …
PUMiner: Mining Security Posts from Developer Question and Answer Websites with PU Learning
Title | PUMiner: Mining Security Posts from Developer Question and Answer Websites with PU Learning |
Authors | Triet H. M. Le, David Hin, Roland Croft, M. Ali Babar |
Abstract | Security is an increasing concern in software development. Developer Question and Answer (Q&A) websites provide a large amount of security discussion. Existing studies have used human-defined rules to mine security discussions, but these works still miss many posts, which may lead to an incomplete analysis of the security practices reported on Q&A websites. Traditional supervised Machine Learning methods can automate the mining process; however, the required negative (non-security) class is too expensive to obtain. We propose a novel learning framework, PUMiner, to automatically mine security posts from Q&A websites. PUMiner builds a context-aware embedding model to extract features of the posts, and then develops a two-stage PU model to identify security content using the labelled Positive and Unlabelled posts. We evaluate PUMiner on more than 17.2 million posts on Stack Overflow and 52,611 posts on Security StackExchange. We show that PUMiner is effective, with a validation performance of at least 0.85 across all model configurations. Moreover, the Matthews Correlation Coefficient (MCC) of PUMiner is 0.906, 0.534 and 0.084 points higher than one-class SVM, positive-similarity filtering, and one-stage PU models on unseen testing posts, respectively. PUMiner also performs well, with an MCC of 0.745, in scenarios where string matching fails entirely. Even when the ratio of labelled positive posts to unlabelled ones is only 1:100, PUMiner still achieves a strong MCC of 0.65, which is 160% better than fully-supervised learning. Using PUMiner, we provide the largest and most up-to-date collection of security content on Q&A websites for practitioners and researchers. |
Tasks | |
Published | 2020-03-08 |
URL | https://arxiv.org/abs/2003.03741v1 |
PDF | https://arxiv.org/pdf/2003.03741v1.pdf |
PWC | https://paperswithcode.com/paper/puminer-mining-security-posts-from-developer |
Repo | |
Framework | |
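PUMiner's exact pipeline is not reproduced here, but the two-stage PU idea behind it (first mine "reliable negatives" from the unlabelled pool, then train an ordinary binary classifier on positives vs. reliable negatives) can be sketched in a few lines. The TF-IDF features, scikit-learn models, and the 30% negative-selection fraction below are illustrative assumptions, not the paper's implementation.

```python
# Minimal two-stage PU-learning sketch (illustrative; not PUMiner's exact pipeline).
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def two_stage_pu(positive_texts, unlabelled_texts, neg_fraction=0.3):
    vec = TfidfVectorizer(max_features=20000)
    X = vec.fit_transform(positive_texts + unlabelled_texts)
    X_pos, X_unl = X[:len(positive_texts)], X[len(positive_texts):]

    # Stage 1: fit a rough model separating positives from the unlabelled pool,
    # then treat the least positive-looking unlabelled posts as reliable negatives.
    y_rough = np.r_[np.ones(X_pos.shape[0]), np.zeros(X_unl.shape[0])]
    rough = LogisticRegression(max_iter=1000).fit(X, y_rough)
    scores = rough.predict_proba(X_unl)[:, 1]
    reliable_neg_idx = np.argsort(scores)[: int(neg_fraction * len(scores))]

    # Stage 2: ordinary supervised training on positives vs. reliable negatives.
    X_neg = X_unl[reliable_neg_idx]
    X_stage2 = vstack([X_pos, X_neg])
    y_stage2 = np.r_[np.ones(X_pos.shape[0]), np.zeros(X_neg.shape[0])]
    clf = LogisticRegression(max_iter=1000).fit(X_stage2, y_stage2)
    return vec, clf

# Usage sketch: vec, clf = two_stage_pu(security_posts, other_posts)
#               clf.predict(vec.transform(["How do I prevent SQL injection?"]))
```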
Modeling 3D Shapes by Reinforcement Learning
Title | Modeling 3D Shapes by Reinforcement Learning |
Authors | Cheng Lin, Tingxiang Fan, Wenping Wang, Matthias Nießner |
Abstract | We explore how to enable machines to model 3D shapes like human modelers using reinforcement learning (RL). In 3D modeling software like Maya, a modeler usually creates a mesh model in two steps: (1) approximating the shape using a set of primitives; (2) editing the meshes of the primitives to create detailed geometry. Inspired by such artist-based modeling, we propose a two-step neural framework based on RL to learn 3D modeling policies. By taking actions and collecting rewards in an interactive environment, the agents first learn to parse a target shape into primitives and then to edit the geometry. To effectively train the modeling agents, we introduce a novel training algorithm that combines heuristic policy, imitation learning and reinforcement learning. Our experiments show that the agents can learn good policies to produce regular and structure-aware mesh models, which demonstrates the feasibility and effectiveness of the proposed RL framework. |
Tasks | Imitation Learning |
Published | 2020-03-27 |
URL | https://arxiv.org/abs/2003.12397v1 |
PDF | https://arxiv.org/pdf/2003.12397v1.pdf |
PWC | https://paperswithcode.com/paper/modeling-3d-shapes-by-reinforcement-learning |
Repo | |
Framework | |
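The training recipe in the abstract mixes imitation of heuristic demonstrations with reinforcement learning. A minimal, hedged sketch of that combination (behaviour cloning to warm-start the policy, then REINFORCE updates on environment rewards) is shown below; the network, data interfaces and losses are generic placeholders, not the paper's modeling agents.

```python
# Illustrative imitation + policy-gradient training steps (not the paper's agents).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                  nn.Linear(256, n_actions))

    def forward(self, obs):
        return self.body(obs)  # unnormalised action logits

def bc_step(policy, opt, demo_obs, demo_actions):
    """Imitation phase: match the heuristic/demonstrated actions."""
    loss = F.cross_entropy(policy(demo_obs), demo_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def reinforce_step(policy, opt, obs, actions, returns):
    """RL phase: REINFORCE with the episode return as the learning signal."""
    logp = F.log_softmax(policy(obs), dim=-1)
    logp_taken = logp.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(logp_taken * returns).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```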
MQA: Answering the Question via Robotic Manipulation
Title | MQA: Answering the Question via Robotic Manipulation |
Authors | Yuhong Deng, Naifu Zhang, Di Guo, Huaping Liu, Fuchun Sun, Chen Pang, Jing Pang |
Abstract | In this paper, we propose a novel task of Manipulation Question Answering (MQA), a class of Question Answering (QA) tasks in which the robot is required to find the answer to the question by actively interacting with the environment via manipulation. Considering the tabletop scenario, a heatmap of the scene is generated to give the robot a semantic understanding of the scene, and an imitation learning approach with a semantic understanding metric is proposed to generate manipulation actions that guide the manipulator to explore the tabletop and find the answer to the question. In addition, a novel dataset containing a variety of tabletop scenarios and corresponding question-answer pairs is established. Extensive experiments have been conducted to validate the effectiveness of the proposed framework. |
Tasks | Imitation Learning, Question Answering |
Published | 2020-03-10 |
URL | https://arxiv.org/abs/2003.04641v1 |
PDF | https://arxiv.org/pdf/2003.04641v1.pdf |
PWC | https://paperswithcode.com/paper/mqa-answering-the-question-via-robotic |
Repo | |
Framework | |
Overfitting Can Be Harmless for Basis Pursuit: Only to a Degree
Title | Overfitting Can Be Harmless for Basis Pursuit: Only to a Degree |
Authors | Peizhong Ju, Xiaojun Lin, Jia Liu |
Abstract | Recently, there has been significant interest in studying the generalization power of linear regression models in the overparameterized regime, with the hope that such analysis may provide the first step towards understanding why overparameterized deep neural networks generalize well even when they overfit the training data. Studies on min $\ell_2$-norm solutions that overfit the training data have suggested that such solutions exhibit the “double-descent” behavior, i.e., the test error decreases with the number of features $p$ in the overparameterized regime when $p$ is larger than the number of samples $n$. However, for linear models with i.i.d. Gaussian features, for large $p$ the model errors of such min $\ell_2$-norm solutions approach the “null risk,” i.e., the error of a trivial estimator that always outputs zero, even when the noise is very low. In contrast, we study the overfitting solution with the minimum $\ell_1$-norm, which is known as Basis Pursuit (BP) in the compressed sensing literature. Under a sparse true linear model with i.i.d. Gaussian features, we show that for a large range of $p$ up to a limit that grows exponentially with $n$, with high probability the model error of BP is upper bounded by a value that decreases with $p$ and is proportional to the noise level. To the best of our knowledge, this is the first result in the literature showing that, without any explicit regularization in such settings where both $p$ and the dimension of data are much larger than $n$, the test errors of a practical-to-compute overfitting solution can exhibit double-descent and approach the order of the noise level independently of the null risk. Our upper bound also reveals a descent floor for BP that is proportional to the noise level. Further, this descent floor is independent of $n$ and the null risk, but increases with the sparsity level of the true model. |
Tasks | |
Published | 2020-02-02 |
URL | https://arxiv.org/abs/2002.00492v1 |
PDF | https://arxiv.org/pdf/2002.00492v1.pdf |
PWC | https://paperswithcode.com/paper/overfitting-can-be-harmless-for-basis-pursuit |
Repo | |
Framework | |
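For readers less familiar with the two interpolating estimators being contrasted, they can be written out explicitly. The notation below is the standard compressed-sensing convention (assuming $X$ has full row rank), chosen for illustration and not necessarily matching the paper's symbols: both estimators fit the training data exactly and differ only in which norm is minimised among the interpolants.

```latex
% Training data X \in \mathbb{R}^{n \times p}, y \in \mathbb{R}^{n}, with p > n;
% the overfitting (interpolating) solutions are \{w : Xw = y\}.
\hat{w}_{\ell_2} = \arg\min_{w}\ \|w\|_2 \quad \text{s.t. } Xw = y
  \qquad \text{(min $\ell_2$-norm solution, } \hat{w}_{\ell_2} = X^{\top}(XX^{\top})^{-1}y\text{)}

\hat{w}_{\mathrm{BP}} = \arg\min_{w}\ \|w\|_1 \quad \text{s.t. } Xw = y
  \qquad \text{(Basis Pursuit)}
```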
Improving Deep Learning For Airbnb Search
Title | Improving Deep Learning For Airbnb Search |
Authors | Malay Haldar, Mustafa Abdool, Prashant Ramanathan, Tyler Sax, Lanbo Zhang, Aamir Mansawala, Shulin Yang, Bradley Turnbull, Junshuo Liao |
Abstract | The application of deep learning to search ranking was one of the most impactful product improvements at Airbnb. But what comes next after you launch a deep learning model? In this paper we describe the journey beyond, discussing what we refer to as the ABCs of improving search: A for architecture, B for bias and C for cold start. For architecture, we describe a new ranking neural network, focusing on the process that evolved our existing DNN beyond a fully connected two layer network. On handling positional bias in ranking, we describe a novel approach that led to one of the most significant improvements in tackling inventory that the DNN historically found challenging. To solve cold start, we describe our perspective on the problem and changes we made to improve the treatment of new listings on the platform. We hope ranking teams transitioning to deep learning will find this a practical case study of how to iterate on DNNs. |
Tasks | |
Published | 2020-02-10 |
URL | https://arxiv.org/abs/2002.05515v1 |
PDF | https://arxiv.org/pdf/2002.05515v1.pdf |
PWC | https://paperswithcode.com/paper/improving-deep-learning-for-airbnb-search |
Repo | |
Framework | |
Modelling and Quantifying Membership Information Leakage in Machine Learning
Title | Modelling and Quantifying Membership Information Leakage in Machine Learning |
Authors | Farhad Farokhi, Mohamed Ali Kaafar |
Abstract | Machine learning models have been shown to be vulnerable to membership inference attacks, i.e., inferring whether individuals’ data have been used for training models. The lack of understanding of the factors contributing to the success of these attacks motivates the need for modelling membership information leakage using information theory and for investigating properties of machine learning models and training algorithms that can reduce membership information leakage. We use conditional mutual information leakage to measure the amount of information leakage from the trained machine learning model about the presence of an individual in the training dataset. We devise an upper bound for this measure of information leakage using Kullback–Leibler divergence that is more amenable to numerical computation. We prove a direct relationship between the Kullback–Leibler membership information leakage and the probability of success for a hypothesis-testing adversary examining whether a particular data record belongs to the training dataset of a machine learning model. We show that the mutual information leakage is a decreasing function of the training dataset size and the regularization weight. We also prove that, if the sensitivity of the machine learning model (defined in terms of the derivatives of the fitness with respect to model parameters) is high, more membership information is potentially leaked. This illustrates that complex models, such as deep neural networks, are more susceptible to membership inference attacks in comparison to simpler models with fewer degrees of freedom. We show that the amount of the membership information leakage is reduced by $\mathcal{O}(\log^{1/2}(\delta^{-1})\epsilon^{-1})$ when using Gaussian $(\epsilon,\delta)$-differentially-private additive noises. |
Tasks | |
Published | 2020-01-29 |
URL | https://arxiv.org/abs/2001.10648v1 |
PDF | https://arxiv.org/pdf/2001.10648v1.pdf |
PWC | https://paperswithcode.com/paper/modelling-and-quantifying-membership |
Repo | |
Framework | |
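A hedged sketch of the quantities involved (the notation here is chosen for illustration and may differ from the paper's): the leakage about a record $z_1$ is the conditional mutual information between the trained model $\hat{\theta}$ and $z_1$ given the rest of the training set, and conditional mutual information is itself an expected Kullback–Leibler divergence, which is what makes a KL-based upper bound a natural object to compute.

```latex
% Membership information leakage of record z_1 (illustrative notation):
\rho \;=\; I\!\left(\hat{\theta};\, z_1 \mid z_2,\dots,z_n\right)
\;=\; \mathbb{E}_{z_1,\dots,z_n}\!\left[
      D_{\mathrm{KL}}\!\left(
        P_{\hat{\theta}\,\mid\, z_1,\dots,z_n}
        \,\middle\|\,
        P_{\hat{\theta}\,\mid\, z_2,\dots,z_n}
      \right)\right]
```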
How human judgment impairs automated deception detection performance
Title | How human judgment impairs automated deception detection performance |
Authors | Bennett Kleinberg, Bruno Verschuere |
Abstract | Background: Deception detection is a prevalent problem for security practitioners. With a need for more large-scale approaches, automated methods using machine learning have gained traction. However, detection performance still implies considerable error rates. Findings from other domains suggest that hybrid human-machine integrations could offer a viable path in deception detection tasks. Method: We collected a corpus of truthful and deceptive answers about participants’ autobiographical intentions (n=1640) and tested whether a combination of supervised machine learning and human judgment could improve deception detection accuracy. Human judges were presented with the outcome of the automated credibility judgment of truthful and deceptive statements. They could either fully overrule it (hybrid-overrule condition) or adjust it within a given boundary (hybrid-adjust condition). Results: The data suggest that in neither of the hybrid conditions did the human judgment add a meaningful contribution. Machine learning in isolation identified truth-tellers and liars with an overall accuracy of 69%. Human involvement through hybrid-overrule decisions brought the accuracy back to the chance level. The hybrid-adjust condition did not improve deception detection performance. The decision-making strategies of humans suggest that the truth bias - the tendency to assume the other is telling the truth - could explain the detrimental effect. Conclusion: The current study does not support the notion that humans can meaningfully add to the deception detection performance of a machine learning system. |
Tasks | Deception Detection, Decision Making |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13316v1 |
PDF | https://arxiv.org/pdf/2003.13316v1.pdf |
PWC | https://paperswithcode.com/paper/how-human-judgment-impairs-automated |
Repo | |
Framework | |
Explainable Object-induced Action Decision for Autonomous Vehicles
Title | Explainable Object-induced Action Decision for Autonomous Vehicles |
Authors | Yiran Xu, Xiaoyin Yang, Lihang Gong, Hsuan-Chu Lin, Tz-Ying Wu, Yunsheng Li, Nuno Vasconcelos |
Abstract | A new paradigm is proposed for autonomous driving. The new paradigm lies between the end-to-end and pipelined approaches, and is inspired by how humans solve the problem. While it relies on scene understanding, the latter only considers objects that could give rise to a hazard. These are denoted as action-inducing, since changes in their state should trigger vehicle actions. They also define a set of explanations for these actions, which should be produced jointly with the actions themselves. An extension of the BDD100K dataset, annotated for a set of 4 actions and 21 explanations, is proposed. A new multi-task formulation of the problem, which optimizes the accuracy of both action commands and explanations, is then introduced. A CNN architecture is finally proposed to solve this problem by combining reasoning about action-inducing objects with global scene context. Experimental results show that the requirement of explanations improves the recognition of action-inducing objects, which in turn leads to better action predictions. |
Tasks | Autonomous Driving, Autonomous Vehicles, Scene Understanding |
Published | 2020-03-20 |
URL | https://arxiv.org/abs/2003.09405v1 |
PDF | https://arxiv.org/pdf/2003.09405v1.pdf |
PWC | https://paperswithcode.com/paper/explainable-object-induced-action-decision |
Repo | |
Framework | |
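The multi-task formulation described in the abstract (predict 4 action commands and 21 explanations jointly) boils down to a shared image backbone with two prediction heads and a weighted sum of their losses. The sketch below is a generic version of that idea, not the paper's network; the ResNet-18 backbone, head sizes, BCE losses and loss weight are all assumptions.

```python
# Generic multi-task head: shared backbone, action head + explanation head,
# trained with a weighted sum of the two losses (illustrative, not the paper's model).
import torch
import torch.nn as nn
import torchvision

class ActionExplanationNet(nn.Module):
    def __init__(self, n_actions=4, n_explanations=21, lambda_expl=1.0):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()              # keep the pooled image features
        self.backbone = backbone
        self.action_head = nn.Linear(feat_dim, n_actions)
        self.expl_head = nn.Linear(feat_dim, n_explanations)
        self.lambda_expl = lambda_expl
        # Both actions and explanations can be multi-label, so BCE is a reasonable choice.
        self.criterion = nn.BCEWithLogitsLoss()

    def forward(self, images):
        feats = self.backbone(images)
        return self.action_head(feats), self.expl_head(feats)

    def loss(self, images, action_targets, expl_targets):
        action_logits, expl_logits = self(images)
        return (self.criterion(action_logits, action_targets)
                + self.lambda_expl * self.criterion(expl_logits, expl_targets))
```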
Intelligent and Reconfigurable Architecture for KL Divergence Based Online Machine Learning Algorithm
Title | Intelligent and Reconfigurable Architecture for KL Divergence Based Online Machine Learning Algorithm |
Authors | S. V. Sai Santosh, Sumit J. Darak |
Abstract | Online machine learning (OML) algorithms do not need any training phase and can be deployed directly in an unknown environment. OML includes multi-armed bandit (MAB) algorithms that can identify the best arm among several arms by achieving a balance between exploration of all arms and exploitation of the optimal arm. The Kullback-Leibler divergence based upper confidence bound (KLUCB) is the state-of-the-art MAB algorithm that optimizes the exploration-exploitation trade-off, but it is complex due to the underlying optimization routine. This limits its usefulness for robotics and radio applications, which demand integration of KLUCB with the PHY on a system on chip (SoC). In this paper, we efficiently map the KLUCB algorithm onto an SoC by realizing the optimization routine via an alternative synthesizable computation without compromising performance. The proposed architecture is dynamically reconfigurable, such that the number of arms, as well as the type of algorithm, can be changed on-the-fly. Specifically, after initial learning, an on-the-fly switch to light-weight UCB offers around a 10-fold improvement in latency and throughput. Since the learning duration depends on the unknown arm statistics, we embed intelligence in the architecture to decide the switching instant. We validate the functional correctness and usefulness of the proposed architecture via a realistic wireless application, and a detailed complexity analysis demonstrates its feasibility for realizing intelligent radios. |
Tasks | |
Published | 2020-02-18 |
URL | https://arxiv.org/abs/2002.07713v1 |
PDF | https://arxiv.org/pdf/2002.07713v1.pdf |
PWC | https://paperswithcode.com/paper/intelligent-and-reconfigurable-architecture |
Repo | |
Framework | |
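The "optimization routine" that makes KLUCB expensive is the per-arm index computation: the largest candidate mean whose KL divergence from the empirical mean stays within an exploration budget. A software sketch of that index for Bernoulli arms (solved by bisection) and of the light-weight UCB1 index the architecture can switch to is given below. The budget $\log t$ and the stopping tolerance are common textbook choices, not necessarily the exact variant realized on the SoC.

```python
# KL-UCB index for Bernoulli arms, solved by bisection, plus the plain UCB1 fallback.
# Illustrative constants; assumes each arm has been pulled at least once.
import math

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def klucb_index(mean, pulls, t, tol=1e-4):
    """Largest q in [mean, 1] with pulls * kl(mean, q) <= log(t)."""
    budget = math.log(max(t, 2)) / pulls
    lo, hi = mean, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bernoulli_kl(mean, mid) <= budget:
            lo = mid            # still within the exploration budget: move up
        else:
            hi = mid
    return lo

def ucb1_index(mean, pulls, t):
    """Light-weight UCB1 index used after the initial learning phase."""
    return mean + math.sqrt(2 * math.log(max(t, 2)) / pulls)
```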
Variational Template Machine for Data-to-Text Generation
Title | Variational Template Machine for Data-to-Text Generation |
Authors | Rong Ye, Wenxian Shi, Hao Zhou, Zhongyu Wei, Lei Li |
Abstract | How can we generate descriptions from structured data organized in tables? Existing approaches using neural encoder-decoder models often suffer from a lack of diversity. We claim that an open set of templates is crucial for enriching the phrase constructions and realizing varied generations. Learning such templates is prohibitive since it often requires a large paired <table, description> corpus, which is seldom available. This paper explores the problem of automatically learning reusable “templates” from paired and non-paired data. We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables. Our contributions include: a) we carefully devise a specific model architecture and losses to explicitly disentangle template and semantic content information in the latent space, and b) we utilize both small parallel data and large raw text without aligned tables to enrich the template learning. Experiments on datasets from a variety of different domains show that VTM is able to generate more diverse descriptions while maintaining good fluency and quality. |
Tasks | Data-to-Text Generation, Text Generation |
Published | 2020-02-04 |
URL | https://arxiv.org/abs/2002.01127v2 |
PDF | https://arxiv.org/pdf/2002.01127v2.pdf |
PWC | https://paperswithcode.com/paper/variational-template-machine-for-data-to-text-1 |
Repo | |
Framework | |
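As a rough mental model of the "template + content" factorisation, one can think of a content variable determined by the table and a template latent inferred from text. The equations below are a generic latent-template VAE view, written for illustration only: the notation is assumed, and VTM's full objective additionally exploits raw text without aligned tables, which is not shown here.

```latex
% Generic two-latent view (illustrative; not VTM's exact objective):
% table x -> content c(x); template latent z ~ p(z); description y generated from both.
p(y \mid x) \;=\; \int p\!\left(y \mid z,\, c(x)\right)\, p(z)\, dz,
\qquad
\mathcal{L}_{\mathrm{ELBO}} \;=\;
\mathbb{E}_{q(z \mid y)}\!\left[\log p\!\left(y \mid z,\, c(x)\right)\right]
\;-\; \mathrm{KL}\!\left(q(z \mid y)\,\middle\|\,p(z)\right)
```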
Risk Bounds for Multi-layer Perceptrons through Spectra of Integral Operators
Title | Risk Bounds for Multi-layer Perceptrons through Spectra of Integral Operators |
Authors | Meyer Scetbon, Zaid Harchaoui |
Abstract | We characterize the behavior of integral operators associated with multi-layer perceptrons in two eigenvalue decay regimes. As a result, we obtain sharper risk bounds for multi-layer perceptrons, highlighting their behavior in high dimensions. In doing so, we also improve on previous results on integral operators related to power series kernels on spheres, with sharper eigenvalue decay estimates in a wider range of eigenvalue decay regimes. |
Tasks | |
Published | 2020-02-28 |
URL | https://arxiv.org/abs/2002.12640v1 |
PDF | https://arxiv.org/pdf/2002.12640v1.pdf |
PWC | https://paperswithcode.com/paper/risk-bounds-for-multi-layer-perceptrons |
Repo | |
Framework | |
SeismiQB – a novel framework for deep learning with seismic data
Title | SeismiQB – a novel framework for deep learning with seismic data |
Authors | Alexander Koryagin, Roman Khudorozhkov, Sergey Tsimfer, Darima Mylzenova |
Abstract | In recent years, Deep Neural Networks were successfully adopted in numerous domains to solve various image-related tasks, ranging from simple classification to fine border annotation. Naturally, many researchers have proposed using them to solve geological problems. Unfortunately, many seismic processing tools were developed years before the era of machine learning, including the most popular SEG-Y data format for storing seismic cubes. Its slow loading speed heavily hampers experimentation, which is essential for getting acceptable results. Worse yet, there is no widely used format for storing surfaces inside the volume (for example, seismic horizons). To address these problems, we have developed an open-source Python framework, with an emphasis on working with neural networks, that provides convenient tools for (i) quickly loading seismic cubes in multiple data formats and converting between them, (ii) generating crops of the desired shape and augmenting them with various transformations, and (iii) pairing cube data with labeled horizons or other types of geobodies. |
Tasks | |
Published | 2020-01-10 |
URL | https://arxiv.org/abs/2001.06416v1 |
PDF | https://arxiv.org/pdf/2001.06416v1.pdf |
PWC | https://paperswithcode.com/paper/seismiqb-a-novel-framework-for-deep-learning |
Repo | |
Framework | |
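The crop-generation part of the workflow (point ii in the abstract) is easy to picture with plain NumPy: sample random 3D windows from a seismic cube and apply a simple augmentation. This is only an illustration of the kind of tooling described; it does not use seismiqb's actual API, and the array names and shapes are assumptions.

```python
# Illustrative random 3D crop generation from a seismic cube (not seismiqb's API).
import numpy as np

def random_crops(cube, crop_shape=(64, 64, 64), n_crops=8, rng=None):
    """Return a batch of random 3D crops (ilines, xlines, depth) from a seismic cube."""
    rng = np.random.default_rng() if rng is None else rng
    crops = []
    for _ in range(n_crops):
        start = [rng.integers(0, s - c + 1) for s, c in zip(cube.shape, crop_shape)]
        window = tuple(slice(st, st + c) for st, c in zip(start, crop_shape))
        crop = cube[window].astype(np.float32)
        # Simple augmentation: random flip along the first (inline) axis.
        if rng.random() < 0.5:
            crop = crop[::-1]
        crops.append(crop)
    return np.stack(crops)

# Example: cube = np.load("seismic_cube.npy"); batch = random_crops(cube)
```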
Deep-HR: Fast Heart Rate Estimation from Face Video Under Realistic Conditions
Title | Deep-HR: Fast Heart Rate Estimation from Face Video Under Realistic Conditions |
Authors | Mohammad Sabokrou, Masoud Pourreza, Xiaobai Li, Mahmood Fathy, Guoying Zhao |
Abstract | This paper presents a novel method for remote heart rate (HR) estimation. Recent studies have shown that blood pumping by the heart is highly correlated with the color intensity of face pixels and, surprisingly, can be utilized for remote HR estimation. Researchers have proposed several methods for this task, but making them work in realistic situations is still a challenging problem for the computer vision community. Furthermore, learning to solve such a complex task on a dataset with very limited annotated samples is not reasonable. Consequently, researchers have tended to avoid deep learning approaches for this problem. In this paper, we propose a simple yet efficient approach that benefits from the advantages of Deep Neural Networks (DNNs) by simplifying HR estimation from a complex task to learning from a representation that is highly correlated with HR. Inspired by previous work, we learn a component called the Front-End (FE) to provide a discriminative representation of face videos; afterwards, a light deep regression auto-encoder, the Back-End (BE), is learned to map the FE representation to HR. The regression task on this informative representation is simple and can be learned efficiently on limited training samples. In addition, to be more accurate and work well on low-quality videos, two deep encoder-decoder networks are trained to refine the output of the FE. We also introduce a challenging dataset (HR-D) to show that our method can work efficiently in realistic conditions. Experimental results on the HR-D and MAHNOB datasets confirm that our method runs in real time and estimates the average HR better than state-of-the-art methods. |
Tasks | Heart rate estimation |
Published | 2020-02-12 |
URL | https://arxiv.org/abs/2002.04821v1 |
PDF | https://arxiv.org/pdf/2002.04821v1.pdf |
PWC | https://paperswithcode.com/paper/deep-hr-fast-heart-rate-estimation-from-face |
Repo | |
Framework | |
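The Front-End/Back-End split described in the abstract maps naturally onto two small modules: a feature extractor over face-video clips and a light regression auto-encoder that both reconstructs the representation and outputs an HR value. The sketch below is a generic illustration with placeholder layers and sizes, not the paper's architecture.

```python
# Illustrative Front-End / Back-End split for remote HR regression (placeholder layers).
import torch
import torch.nn as nn

class FrontEnd(nn.Module):
    """Maps a face-video clip (B, 3, T, H, W) to a compact HR-correlated representation."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(16, feat_dim)

    def forward(self, clip):
        return self.fc(self.conv(clip).flatten(1))

class BackEnd(nn.Module):
    """Light regression auto-encoder: reconstruct the representation and predict HR."""
    def __init__(self, feat_dim=128, bottleneck=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, bottleneck), nn.ReLU())
        self.decoder = nn.Linear(bottleneck, feat_dim)
        self.regressor = nn.Linear(bottleneck, 1)

    def forward(self, feats):
        z = self.encoder(feats)
        # Returns (reconstruction, HR estimate); train with MSE on both.
        return self.decoder(z), self.regressor(z).squeeze(-1)
```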
Spatio-Temporal Relation and Attention Learning for Facial Action Unit Detection
Title | Spatio-Temporal Relation and Attention Learning for Facial Action Unit Detection |
Authors | Zhiwen Shao, Lixin Zou, Jianfei Cai, Yunsheng Wu, Lizhuang Ma |
Abstract | Spatio-temporal relations among facial action units (AUs) convey significant information for AU detection yet have not been thoroughly exploited. The main reasons are the limited capability of current AU detection works in simultaneously learning spatial and temporal relations, and the lack of precise localization information for AU feature learning. To tackle these limitations, we propose a novel spatio-temporal relation and attention learning framework for AU detection. Specifically, we introduce a spatio-temporal graph convolutional network to capture both spatial and temporal relations from dynamic AUs, in which the AU relations are formulated as a spatio-temporal graph with adaptively learned instead of predefined edge weights. Moreover, the learning of spatio-temporal relations among AUs requires individual AU features. Considering the dynamism and shape irregularity of AUs, we propose an attention regularization method to adaptively learn regional attentions that capture highly relevant regions and suppress irrelevant regions so as to extract a complete feature for each AU. Extensive experiments show that our approach achieves substantial improvements over the state-of-the-art AU detection methods on BP4D and especially DISFA benchmarks. |
Tasks | Action Unit Detection, Facial Action Unit Detection |
Published | 2020-01-05 |
URL | https://arxiv.org/abs/2001.01168v1 |
PDF | https://arxiv.org/pdf/2001.01168v1.pdf |
PWC | https://paperswithcode.com/paper/spatio-temporal-relation-and-attention |
Repo | |
Framework | |
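The key modelling choice in the abstract, AU relations as a graph whose edge weights are learned rather than predefined, can be illustrated with a single graph-convolution block whose adjacency matrix is a free parameter. This is a generic sketch, not the paper's spatio-temporal layer; in a full spatio-temporal model a temporal convolution over per-frame AU features would typically be stacked around it.

```python
# Graph convolution with adaptively learned AU-relation edge weights (generic sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedGraphConv(nn.Module):
    def __init__(self, n_nodes, in_dim, out_dim):
        super().__init__()
        # Edge weights are free parameters, learned jointly with the rest of the model.
        self.edge_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                               # x: (batch, n_nodes, in_dim)
        adj = torch.softmax(self.edge_logits, dim=-1)   # row-normalised adjacency
        return F.relu(self.proj(adj @ x))               # aggregate neighbours, then project

# Example: layer = LearnedGraphConv(n_nodes=12, in_dim=64, out_dim=64)
#          out = layer(torch.randn(8, 12, 64))          # one feature vector per AU
```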
GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images
Title | GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images |
Authors | Lei Kang, Pau Riba, Yaxing Wang, Marçal Rusiñol, Alicia Fornés, Mauricio Villegas |
Abstract | Although current image generation methods have reached impressive quality levels, they are still unable to produce plausible yet diverse images of handwritten words. On the contrary, when writing by hand, great variability is observed across different writers, and even when analyzing words scribbled by the same individual, involuntary variations are conspicuous. In this work, we take a step closer to producing realistic and varied artificially rendered handwritten words. We propose a novel method that is able to produce credible handwritten word images by conditioning the generative process on both calligraphic style features and textual content. Our generator is guided by three complementary learning objectives: to produce realistic images, to imitate a certain handwriting style and to convey a specific textual content. Our model is not constrained to any predefined vocabulary and is able to render any input word. Given a sample writer, it is also able to mimic that writer’s calligraphic features in a few-shot setup. We significantly advance over prior art and demonstrate, with qualitative, quantitative and human-based evaluations, the realism of our synthetically produced images. |
Tasks | Image Generation |
Published | 2020-03-05 |
URL | https://arxiv.org/abs/2003.02567v1 |
PDF | https://arxiv.org/pdf/2003.02567v1.pdf |
PWC | https://paperswithcode.com/paper/ganwriting-content-conditioned-generation-of |
Repo | |
Framework | |
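The three complementary generator objectives named in the abstract (realism, writer style, and textual content) are typically combined as a weighted sum. The sketch below illustrates that combination with placeholder sub-networks and weights; the recognizer is assumed to return a recognition loss (e.g. CTC) against the conditioned text, and none of this is claimed to be the paper's exact loss.

```python
# Weighted combination of the three generator objectives (illustrative placeholders).
import torch
import torch.nn.functional as F

def generator_loss(discriminator, writer_classifier, recognizer,
                   fake_images, writer_ids, target_text,
                   w_adv=1.0, w_writer=1.0, w_text=1.0):
    # 1) Realism: fool the discriminator (non-saturating GAN loss on its logits).
    adv = F.softplus(-discriminator(fake_images)).mean()
    # 2) Style: the generated word should be attributed to the intended writer.
    writer = F.cross_entropy(writer_classifier(fake_images), writer_ids)
    # 3) Content: a text recognizer reads back the conditioned string;
    #    `recognizer` is assumed to return a recognition loss (e.g. CTC).
    text = recognizer(fake_images, target_text)
    return w_adv * adv + w_writer * writer + w_text * text
```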