April 1, 2020

3378 words 16 mins read

Paper Group ANR 462

On the Estimation of Complex Circuits Functional Failure Rate by Machine Learning Techniques. Computing the Feedback Capacity of Finite State Channels using Reinforcement Learning. Deep Semantic Matching with Foreground Detection and Cycle-Consistency. Visual Summary of Value-level Feature Attribution in Prediction Classes with Recurrent Neural Net …

On the Estimation of Complex Circuits Functional Failure Rate by Machine Learning Techniques

Title On the Estimation of Complex Circuits Functional Failure Rate by Machine Learning Techniques
Authors Thomas Lange, Aneesh Balakrishnan, Maximilien Glorieux, Dan Alexandrescu, Luca Sterpone
Abstract De-Rating or Vulnerability Factors are a major feature of failure analysis efforts mandated by today’s Functional Safety requirements. Determining the Functional De-Rating of sequential logic cells typically requires computationally intensive fault-injection simulation campaigns. In this paper, a new approach is proposed that uses Machine Learning to estimate the Functional De-Rating of individual flip-flops, thereby optimising and enhancing fault injection efforts. First, a set of per-instance features is described and extracted through an analysis approach combining static elements (cell properties, circuit structure, synthesis attributes) and dynamic elements (signal activity). Second, reference data is obtained through first-principles fault simulation approaches. Finally, one part of the reference dataset is used to train the Machine Learning algorithm and the remainder is used to validate and benchmark the accuracy of the trained tool. The intended goal is to obtain a trained model able to provide accurate per-instance Functional De-Rating data for the full list of circuit instances, an objective that is difficult to reach using classical methods. The presented methodology is accompanied by a practical example that determines the performance of various Machine Learning models for different training set sizes.
Tasks
Published 2020-02-18
URL https://arxiv.org/abs/2002.09945v1
PDF https://arxiv.org/pdf/2002.09945v1.pdf
PWC https://paperswithcode.com/paper/on-the-estimation-of-complex-circuits
Repo
Framework
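
A minimal sketch of the workflow the abstract describes — extract per-instance features, obtain reference de-rating values from fault injection, train a regressor, validate on held-out flip-flops. The feature set, the synthetic data, and the random-forest regressor below are illustrative assumptions, not the paper’s model.

```python
# Sketch: predict per-flip-flop Functional De-Rating from static/dynamic features.
# Features and targets are synthetic stand-ins for real circuit data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_flops = 1000
# Hypothetical columns: fan-in, fan-out, logic depth, signal activity.
X = rng.random((n_flops, 4))
# Reference de-rating would come from fault-injection simulation; here it is
# just a synthetic function of the features plus noise.
y = 0.5 * X[:, 3] + 0.2 * X[:, 0] + 0.05 * rng.normal(size=n_flops)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE on held-out flip-flops:", mean_absolute_error(y_test, model.predict(X_test)))
```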

Computing the Feedback Capacity of Finite State Channels using Reinforcement Learning

Title Computing the Feedback Capacity of Finite State Channels using Reinforcement Learning
Authors Ziv Aharoni, Oron Sabag, Haim Henry Permuter
Abstract In this paper, we propose a novel method to compute the feedback capacity of channels with memory using reinforcement learning (RL). In RL, one seeks to maximize cumulative rewards collected in a sequential decision-making environment. This is done by collecting samples of the underlying environment and using them to learn the optimal decision rule. The main advantage of this approach is its computational efficiency, even in high dimensional problems. Hence, RL can be used to estimate numerically the feedback capacity of unifilar finite state channels (FSCs) with large alphabet size. The outcome of the RL algorithm sheds light on the properties of the optimal decision rule, which in our case, is the optimal input distribution of the channel. These insights can be converted into analytic, single-letter capacity expressions by solving corresponding lower and upper bounds. We demonstrate the efficiency of this method by analytically solving the feedback capacity of the well-known Ising channel with a ternary alphabet. We also provide a simple coding scheme that achieves the feedback capacity.
Tasks Decision Making
Published 2020-01-27
URL https://arxiv.org/abs/2001.09685v1
PDF https://arxiv.org/pdf/2001.09685v1.pdf
PWC https://paperswithcode.com/paper/computing-the-feedback-capacity-of-finite
Repo
Framework
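
The abstract’s core loop — collect samples from the environment and improve the decision rule — can be illustrated with ordinary tabular Q-learning on a toy MDP. This is only a sketch of that pattern; it does not reproduce the paper’s estimator for the feedback capacity of unifilar FSCs, and the toy dynamics and rewards are placeholders.

```python
# Tabular Q-learning on a toy two-state MDP: sample transitions, update the
# decision rule toward the Bellman target.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))

def step(s, a):
    # Toy dynamics/reward standing in for the channel environment.
    s_next = (s + a) % n_states
    reward = 1.0 if s_next == 1 else 0.0
    return s_next, reward

s = 0
alpha, gamma, eps = 0.1, 0.95, 0.1
for t in range(5000):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    # Standard Q-learning update toward the sampled Bellman target.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("Learned Q-values:\n", Q)
```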

Deep Semantic Matching with Foreground Detection and Cycle-Consistency

Title Deep Semantic Matching with Foreground Detection and Cycle-Consistency
Authors Yun-Chun Chen, Po-Hsiang Huang, Li-Yu Yu, Jia-Bin Huang, Ming-Hsuan Yang, Yen-Yu Lin
Abstract Establishing dense semantic correspondences between object instances remains a challenging problem due to background clutter, significant scale and pose differences, and large intra-class variations. In this paper, we address weakly supervised semantic matching based on a deep network where only image pairs without manual keypoint correspondence annotations are provided. To facilitate network training with this weaker form of supervision, we 1) explicitly estimate the foreground regions to suppress the effect of background clutter and 2) develop cycle-consistent losses to enforce the predicted transformations across multiple images to be geometrically plausible and consistent. We train the proposed model using the PF-PASCAL dataset and evaluate the performance on the PF-PASCAL, PF-WILLOW, and TSS datasets. Extensive experimental results show that the proposed approach performs favorably against the state-of-the-art methods.
Tasks
Published 2020-03-31
URL https://arxiv.org/abs/2004.00144v1
PDF https://arxiv.org/pdf/2004.00144v1.pdf
PWC https://paperswithcode.com/paper/deep-semantic-matching-with-foreground
Repo
Framework
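
A small sketch of the cycle-consistency idea on predicted geometric transformations: composing the A-to-B and B-to-A transforms should give approximately the identity, and the deviation can be penalized. The affine parameterization and example values are assumptions, not the authors’ implementation.

```python
# Cycle-consistency penalty on a pair of predicted 2x3 affine transforms.
import numpy as np

def compose_affine(T1, T2):
    """Compose two 2x3 affine transforms (apply T1, then T2)."""
    A1, b1 = T1[:, :2], T1[:, 2]
    A2, b2 = T2[:, :2], T2[:, 2]
    return np.hstack([A2 @ A1, (A2 @ b1 + b2)[:, None]])

def cycle_consistency_loss(T_ab, T_ba):
    """Squared deviation of the round-trip transform from the identity."""
    identity = np.hstack([np.eye(2), np.zeros((2, 1))])
    return float(np.sum((compose_affine(T_ab, T_ba) - identity) ** 2))

# Hypothetical transforms predicted by a matching network for an image pair.
T_ab = np.array([[1.0, 0.05, 2.0], [-0.05, 1.0, -1.0]])
T_ba = np.array([[1.0, -0.05, -2.0], [0.05, 1.0, 1.0]])
print("cycle loss:", cycle_consistency_loss(T_ab, T_ba))
```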

Visual Summary of Value-level Feature Attribution in Prediction Classes with Recurrent Neural Networks

Title Visual Summary of Value-level Feature Attribution in Prediction Classes with Recurrent Neural Networks
Authors Chuan Wang, Xumeng Wang, Kwan-Liu Ma
Abstract Deep Recurrent Neural Networks (RNNs) are increasingly used in decision-making with temporal sequences. However, understanding how RNN models produce final predictions remains a major challenge. Existing work on interpreting RNN models for sequence predictions often focuses on explaining predictions for individual data instances (e.g., patients or students). Because state-of-the-art predictive models are formed with millions of parameters optimized over millions of instances, explaining predictions for single data instances can easily miss the bigger picture. In addition, many high-performing RNN models use multi-hot encoding to represent the presence/absence of features, where the interpretability of feature value attribution is missing. We present ViSFA, an interactive system that visually summarizes feature attribution over time for different feature values. ViSFA scales to large data such as the MIMIC dataset, which contains the electronic health records of 1.2 million high-dimensional temporal events. We demonstrate that ViSFA can help us reason about RNN predictions and uncover insights from data by distilling complex attribution into compact and easy-to-interpret visualizations.
Tasks Decision Making
Published 2020-01-23
URL https://arxiv.org/abs/2001.08379v1
PDF https://arxiv.org/pdf/2001.08379v1.pdf
PWC https://paperswithcode.com/paper/visual-summary-of-value-level-feature
Repo
Framework
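
A rough sketch of the kind of summary ViSFA builds: aggregate per-time-step attribution scores across many instances, grouped by feature value, rather than explaining one instance at a time. The data and the attribution scores below are synthetic placeholders, not output of the actual system.

```python
# Value-level summary of feature attribution over time, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_instances, n_steps = 500, 20
# Multi-hot style feature: value 0 or 1 present at each time step.
feature_value = rng.integers(0, 2, size=(n_instances, n_steps))
# Hypothetical per-time-step attribution scores from some RNN attribution method.
attribution = rng.normal(loc=feature_value * 0.3, scale=0.2)

# Summarize: mean attribution over instances, per time step and feature value.
summary = {v: attribution.mean(axis=0, where=(feature_value == v)) for v in (0, 1)}
for v, curve in summary.items():
    print(f"value={v}: mean attribution over time = {np.round(curve, 2)}")
```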

Provable Noisy Sparse Subspace Clustering using Greedy Neighbor Selection: A Coherence-Based Perspective

Title Provable Noisy Sparse Subspace Clustering using Greedy Neighbor Selection: A Coherence-Based Perspective
Authors Jwo-Yuh Wu, Wen-Hsuan Li, Liang-Chi Huang, Yen-Ping Lin, Chun-Hung Liu, Rung-Hung Gau
Abstract Sparse subspace clustering (SSC) using greedy-based neighbor selection, such as matching pursuit (MP) and orthogonal matching pursuit (OMP), has been known as a popular, computationally efficient alternative to the conventional L1-minimization-based methods. In this paper, we derive coherence-based sufficient conditions guaranteeing correct neighbor identification using MP/OMP under deterministic bounded noise corruption. Our analyses exploit the maximum/minimum inner product between two noisy data points subject to a known upper bound on the noise level. The obtained sufficient conditions clearly reveal the impact of noise on greedy-based neighbor recovery. Specifically, they assert that, as long as the noise is sufficiently small so that the resultant perturbed residual vectors stay close to the desired subspace, both MP and OMP succeed in returning a correct neighbor subset. A striking finding is that, when the ground-truth subspaces are well separated from each other and the noise is not large, MP-based iterations, while enjoying lower algorithmic complexity, yield smaller perturbation of residuals and are thereby better able to identify correct neighbors and, in turn, achieve higher global data clustering accuracy. Extensive numerical experiments corroborate our theoretical study.
Tasks
Published 2020-02-02
URL https://arxiv.org/abs/2002.00401v1
PDF https://arxiv.org/pdf/2002.00401v1.pdf
PWC https://paperswithcode.com/paper/provable-noisy-sparse-subspace-clustering
Repo
Framework
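
A compact sketch of OMP-based neighbor selection for one data point, the greedy primitive the analysis concerns: the dictionary is formed by all other points, the column most correlated with the current residual is selected at each step, and the coefficients are re-fit on the selected set. The synthetic subspace data and the choice of k are illustrative.

```python
# Greedy (OMP) neighbor selection for sparse subspace clustering.
import numpy as np

def omp_neighbors(x, D, k):
    """Return indices of k columns of D greedily selected to represent x."""
    residual = x.copy()
    selected = []
    for _ in range(k):
        correlations = np.abs(D.T @ residual)
        correlations[selected] = -np.inf          # do not reselect
        selected.append(int(np.argmax(correlations)))
        # Re-fit coefficients on the selected set (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, selected], x, rcond=None)
        residual = x - D[:, selected] @ coeffs
    return selected

rng = np.random.default_rng(0)
# Synthetic data: points from a 2-dimensional subspace of R^10 plus small noise.
basis = rng.normal(size=(10, 2))
points = basis @ rng.normal(size=(2, 30)) + 0.01 * rng.normal(size=(10, 30))
x, D = points[:, 0], points[:, 1:]
print("neighbors of point 0:", omp_neighbors(x, D, k=2))
```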

Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems

Title Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems
Authors Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, Elena L. Glassman
Abstract Explainable artificially intelligent (XAI) systems form part of sociotechnical systems, e.g., human+AI teams tasked with making decisions. Yet, current XAI systems are rarely evaluated by measuring the performance of human+AI teams on actual decision-making tasks. We conducted two online experiments and one in-person think-aloud study to evaluate two currently common techniques for evaluating XAI systems: (1) using proxy, artificial tasks such as how well humans predict the AI’s decision from the given explanations, and (2) using subjective measures of trust and preference as predictors of actual performance. The results of our experiments demonstrate that evaluations with proxy tasks did not predict the results of the evaluations with the actual decision-making tasks. Further, the subjective measures on evaluations with actual decision-making tasks did not predict the objective performance on those same tasks. Our results suggest that by employing misleading evaluation methods, our field may be inadvertently slowing its progress toward developing human+AI teams that can reliably perform better than humans or AIs alone.
Tasks Decision Making
Published 2020-01-22
URL https://arxiv.org/abs/2001.08298v1
PDF https://arxiv.org/pdf/2001.08298v1.pdf
PWC https://paperswithcode.com/paper/proxy-tasks-and-subjective-measures-can-be
Repo
Framework

DaST: Data-free Substitute Training for Adversarial Attacks

Title DaST: Data-free Substitute Training for Adversarial Attacks
Authors Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu
Abstract Machine learning models are vulnerable to adversarial examples. In the black-box setting, current substitute attacks need pre-trained models to generate adversarial examples. However, pre-trained models are hard to obtain in real-world tasks. In this paper, we propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks without the requirement of any real data. To achieve this, DaST utilizes specially designed generative adversarial networks (GANs) to train the substitute models. In particular, we design a multi-branch architecture and label-control loss for the generative model to deal with the uneven distribution of synthetic samples. The substitute model is then trained on the synthetic samples generated by the generative model, which are subsequently labeled by the attacked model. The experiments demonstrate that the substitute models produced by DaST achieve competitive performance compared with baseline models trained on the same training set as the attacked models. Additionally, to evaluate the practicability of the proposed method on a real-world task, we attack an online machine learning model on the Microsoft Azure platform. The remote model misclassifies 98.35% of the adversarial examples crafted by our method. To the best of our knowledge, we are the first to train a substitute model for adversarial attacks without any real data.
Tasks
Published 2020-03-28
URL https://arxiv.org/abs/2003.12703v2
PDF https://arxiv.org/pdf/2003.12703v2.pdf
PWC https://paperswithcode.com/paper/dast-data-free-substitute-training-for
Repo
Framework
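
A simplified sketch of the data-free substitute training loop the abstract describes, with tiny stand-in models on 2-D inputs: a generator synthesizes samples, the black-box victim labels them, the substitute imitates those labels, and the generator is pushed toward samples on which the substitute still disagrees with the victim. DaST’s multi-branch architecture and label-control loss are not reproduced here.

```python
# Data-free substitute training loop with toy models (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
victim = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 3))      # black box
substitute = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 3))
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

opt_sub = torch.optim.Adam(substitute.parameters(), lr=1e-3)
opt_gen = torch.optim.Adam(generator.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

for step in range(200):
    z = torch.randn(64, 8)
    x_fake = generator(z)
    with torch.no_grad():
        labels = victim(x_fake).argmax(dim=1)        # query the black box

    # Substitute: imitate the victim's labels on the synthetic samples.
    loss_sub = ce(substitute(x_fake.detach()), labels)
    opt_sub.zero_grad(); loss_sub.backward(); opt_sub.step()

    # Generator: seek samples where the substitute still disagrees with the
    # victim, i.e. maximize the substitute's loss on them.
    loss_gen = -ce(substitute(generator(z)), labels)
    opt_gen.zero_grad(); loss_gen.backward(); opt_gen.step()
```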

Online Parameter Estimation for Safety-Critical Systems with Gaussian Processes

Title Online Parameter Estimation for Safety-Critical Systems with Gaussian Processes
Authors Mouhyemen Khan, Abhijit Chatterjee
Abstract Parameter estimation is crucial for modeling, tracking, and control of complex dynamical systems. However, parameter uncertainties can compromise system performance under a controller relying on nominal parameter values. Typically, parameters are estimated using numerical regression approaches framed as inverse problems. However, they suffer from non-uniqueness due to existence of multiple local optima, reliance on gradients, numerous experimental data, or stability issues. Addressing these drawbacks, we present a Bayesian optimization framework based on Gaussian processes (GPs) for online parameter estimation. It uses an efficient search strategy over a response surface in the parameter space for finding the global optima with minimal function evaluations. The response surface is modeled as correlated surrogates using GPs on noisy data. The GP posterior predictive variance is exploited for smart adaptive sampling. This balances the exploration versus exploitation trade-off which is key in reaching the global optima under limited budget. We demonstrate our technique on an actuated planar pendulum and safety-critical quadrotor in simulation with changing parameters. We also benchmark our results against solvers using interior point method and sequential quadratic program. By reconfiguring the controller with new optimized parameters iteratively, we drastically improve trajectory tracking of the system versus the nominal case and other solvers.
Tasks Gaussian Processes
Published 2020-02-18
URL https://arxiv.org/abs/2002.07870v1
PDF https://arxiv.org/pdf/2002.07870v1.pdf
PWC https://paperswithcode.com/paper/online-parameter-estimation-for-safety
Repo
Framework
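
A minimal sketch of GP-based Bayesian optimization for a single parameter, using the posterior standard deviation to drive sampling via a lower-confidence-bound rule (the objective is minimized). The objective function, kernel, and acquisition constant are placeholder choices, not the authors’ framework.

```python
# GP-based Bayesian optimization of one unknown parameter (illustrative).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(theta):
    # Hypothetical tracking error of the system as a function of the parameter.
    return (theta - 0.7) ** 2

candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
X_seen = [[0.0], [2.0]]                       # initial probes
y_seen = [objective(x[0]) for x in X_seen]

for it in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4)
    gp.fit(np.array(X_seen), np.array(y_seen))
    mean, std = gp.predict(candidates, return_std=True)
    lcb = mean - 2.0 * std                    # explore where uncertainty is high
    theta_next = candidates[int(np.argmin(lcb))]
    X_seen.append(list(theta_next))
    y_seen.append(objective(theta_next[0]))

print("estimated parameter:", X_seen[int(np.argmin(y_seen))][0])
```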

Reward Engineering for Object Pick and Place Training

Title Reward Engineering for Object Pick and Place Training
Authors Raghav Nagpal, Achyuthan Unni Krishnan, Hanshen Yu
Abstract Robotic grasping is a crucial area of research as it can accelerate the automation of several industries utilizing robots, ranging from manufacturing to healthcare. Reinforcement learning is the field of study where an agent learns a policy to execute an action by exploring and exploiting rewards from an environment. Reinforcement learning can thus be used by the agent to learn how to execute a certain task, in our case grasping an object. We have used the Pick and Place environment provided by OpenAI’s Gym to engineer rewards. Hindsight Experience Replay (HER) has shown promising results on problems with a sparse reward. In the default configuration of the OpenAI baseline and environment, the reward function is calculated using the distance between the target location and the robot end-effector. By weighting the cost based on the distance of the end-effector from the goal along the x, y, and z axes, an intuitive strategy that further reduced learning time, we were able to almost halve the learning time compared to the baselines provided by OpenAI. In this project, we were also able to introduce certain user-desired trajectories into the learnt policies (city-block / Manhattan trajectories). This shows that by engineering the rewards we can tune the agent to learn policies in a desired manner, even if that manner is not the most optimal.
Tasks Robotic Grasping
Published 2020-01-11
URL https://arxiv.org/abs/2001.03792v1
PDF https://arxiv.org/pdf/2001.03792v1.pdf
PWC https://paperswithcode.com/paper/reward-engineering-for-object-pick-and-place
Repo
Framework
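
A small sketch contrasting the default sparse, distance-threshold reward of the Fetch-style pick-and-place tasks with an axis-weighted dense cost of the kind the abstract describes. The weights are illustrative, not the values used in the project.

```python
# Default sparse reward vs. an axis-weighted shaped cost for pick-and-place.
import numpy as np

def default_reward(achieved_goal, goal, threshold=0.05):
    """Sparse reward as in the standard Fetch environments: 0 within the
    distance threshold of the goal, -1 otherwise."""
    return 0.0 if np.linalg.norm(achieved_goal - goal) < threshold else -1.0

def axis_weighted_reward(achieved_goal, goal, weights=(1.0, 1.0, 2.0)):
    """Dense shaped reward: negative per-axis weighted distance, e.g. to
    emphasize progress along z for lifting (weights are hypothetical)."""
    return -float(np.dot(weights, np.abs(achieved_goal - goal)))

achieved = np.array([0.1, 0.2, 0.0])
goal = np.array([0.0, 0.0, 0.3])
print(default_reward(achieved, goal), axis_weighted_reward(achieved, goal))
```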

Learning to Play Soccer by Reinforcement and Applying Sim-to-Real to Compete in the Real World

Title Learning to Play Soccer by Reinforcement and Applying Sim-to-Real to Compete in the Real World
Authors Hansenclever F. Bassani, Renie A. Delgado, Jose Nilton de O. Lima Junior, Heitor R. Medeiros, Pedro H. M. Braga, Alain Tapp
Abstract This work presents an application of Reinforcement Learning (RL) for the complete control of real soccer robots of the IEEE Very Small Size Soccer (VSSS), a traditional league in the Latin American Robotics Competition (LARC). In the VSSS league, two teams of three small robots play against each other. We propose a simulated environment in which continuous or discrete control policies can be trained, and a Sim-to-Real method to allow using the obtained policies to control a robot in the real world. The results show that the learned policies display a broad repertoire of behaviors that are difficult to specify by hand. This approach, called VSSS-RL, was able to beat the human-designed policy for the striker of the team ranked 3rd place in the 2018 LARC, in 1-vs-1 matches.
Tasks
Published 2020-03-24
URL https://arxiv.org/abs/2003.11102v1
PDF https://arxiv.org/pdf/2003.11102v1.pdf
PWC https://paperswithcode.com/paper/learning-to-play-soccer-by-reinforcement-and
Repo
Framework

Abstractive Snippet Generation

Title Abstractive Snippet Generation
Authors Wei-Fan Chen, Shahbaz Syed, Benno Stein, Matthias Hagen, Martin Potthast
Abstract An abstractive snippet is an originally created piece of text to summarize a web page on a search engine results page. Compared to the conventional extractive snippets, which are generated by extracting phrases and sentences verbatim from a web page, abstractive snippets circumvent copyright issues; even more interesting is the fact that they open the door for personalization. Abstractive snippets have been evaluated as equally powerful in terms of user acceptance and expressiveness—but the key question remains: Can abstractive snippets be automatically generated with sufficient quality? This paper introduces a new approach to abstractive snippet generation: We identify the first two large-scale sources for distant supervision, namely anchor contexts and web directories. By mining the entire ClueWeb09 and ClueWeb12 for anchor contexts and by utilizing the DMOZ Open Directory Project, we compile the Webis Abstractive Snippet Corpus 2020, comprising more than 3.5 million triples of the form $\langle$query, snippet, document$\rangle$ as training examples, where the snippet is either an anchor context or a web directory description in lieu of a genuine query-biased abstractive snippet of the web document. We propose a bidirectional abstractive snippet generation model and assess the quality of both our corpus and the generated abstractive snippets with standard measures, crowdsourcing, and in comparison to the state of the art. The evaluation shows that our novel data sources along with the proposed model allow for producing usable query-biased abstractive snippets while minimizing text reuse.
Tasks
Published 2020-02-25
URL https://arxiv.org/abs/2002.10782v2
PDF https://arxiv.org/pdf/2002.10782v2.pdf
PWC https://paperswithcode.com/paper/abstractive-snippet-generation
Repo
Framework

Interventions for Ranking in the Presence of Implicit Bias

Title Interventions for Ranking in the Presence of Implicit Bias
Authors L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi
Abstract Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member of a particular social group (e.g., defined by gender or race). Studies on implicit bias have shown that these unconscious stereotypes can have adverse outcomes in various social contexts, such as job screening, teaching, or policing. Recently, Kleinberg and Raghavan (2018) considered a mathematical model for implicit bias and showed the effectiveness of the Rooney Rule as a constraint to improve the utility of the outcome for certain cases of the subset selection problem. Here we study the problem of designing interventions for the generalization of subset selection – ranking – which requires outputting an ordered set and is a central primitive in various social and computational contexts. We present a family of simple and interpretable constraints and show that they can optimally mitigate implicit bias for a generalization of the model studied by Kleinberg and Raghavan (2018). Subsequently, we prove that under natural distributional assumptions on the utilities of items, simple, Rooney Rule-like constraints can, perhaps surprisingly, recover almost all the utility lost due to implicit biases. Finally, we augment our theoretical results with empirical findings on real-world distributions from the IIT-JEE (2009) dataset and the Semantic Scholar Research corpus.
Tasks
Published 2020-01-23
URL https://arxiv.org/abs/2001.08767v1
PDF https://arxiv.org/pdf/2001.08767v1.pdf
PWC https://paperswithcode.com/paper/interventions-for-ranking-in-the-presence-of
Repo
Framework
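
A toy sketch of the intervention idea: observed utilities of one group are deflated by an implicit-bias factor, and a Rooney-Rule-like floor on every top-k prefix is enforced during greedy ranking. The bias factor, the floor, and the data are illustrative assumptions; the paper’s constraints and guarantees are more general.

```python
# Ranking under implicit bias with a per-prefix representation floor (toy example).
import numpy as np

rng = np.random.default_rng(0)
n = 10
true_utility = rng.random(n)
group = rng.integers(0, 2, size=n)                 # 1 = underrepresented group
observed = true_utility * np.where(group == 1, 0.5, 1.0)   # implicit-bias deflation

def rank_with_floor(observed, group, floor_per_prefix=0.5):
    """Greedy ranking by observed utility; if a prefix falls short of the
    required share of group-1 items, the best remaining group-1 item is forced in."""
    remaining = list(np.argsort(-observed))
    ranking = []
    while remaining:
        k = len(ranking) + 1
        need = int(np.ceil(floor_per_prefix * k))
        have = sum(group[i] for i in ranking)
        forced = [i for i in remaining if group[i] == 1]
        pick = forced[0] if (have < need and forced) else remaining[0]
        ranking.append(pick)
        remaining.remove(pick)
    return ranking

unconstrained = list(np.argsort(-observed))
constrained = rank_with_floor(observed, group)
top3 = lambda order: true_utility[order[:3]].sum()
print("true utility of top-3, unconstrained vs constrained:",
      round(top3(unconstrained), 3), round(top3(constrained), 3))
```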

Fast Dense Residual Network: Enhancing Global Dense Feature Flow for Text Recognition

Title Fast Dense Residual Network: Enhancing Global Dense Feature Flow for Text Recognition
Authors Zhao Zhang, Zemin Tang, Yang Wang, Jie Qin, Haijun Zhang, Shuicheng Yan
Abstract Deep Convolutional Neural Networks (CNNs), such as Dense Convolutional Networks (DenseNet), have achieved great success in image representation by discovering deep hierarchical information. However, most existing networks simply stack the convolutional layers and hence fail to fully discover local and global feature information among layers. In this paper, we mainly explore how to enhance the local and global dense feature flow by fully exploiting hierarchical features from all the convolutional layers. Technically, we propose an efficient and effective CNN framework, i.e., Fast Dense Residual Network (FDRN), for text recognition. To construct FDRN, we propose a new fast residual dense block (f-RDB) that retains the local feature fusion and local residual learning abilities of the original RDB while reducing the computational effort. After fully learning local residual dense features, we utilize the sum operation and several f-RDBs to define a new block, termed global dense block (GDB), which imitates the construction of dense blocks to learn global dense residual features adaptively in a holistic way. Finally, we use two convolutional layers to construct a down-sampling block that reduces the global feature size and extracts deeper features. Extensive simulations show that FDRN obtains enhanced recognition results compared with other related models.
Tasks
Published 2020-01-23
URL https://arxiv.org/abs/2001.09021v1
PDF https://arxiv.org/pdf/2001.09021v1.pdf
PWC https://paperswithcode.com/paper/fast-dense-residual-network-enhancing-global
Repo
Framework
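
A rough sketch of a residual dense block with the two properties the f-RDB is said to retain: local feature fusion (a 1x1 convolution over all concatenated features) and local residual learning (adding the block input back). The channel sizes and layer count are arbitrary; this is not the paper’s exact f-RDB.

```python
# Minimal residual dense block with local feature fusion and residual learning.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=32, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            in_ch += growth                      # dense connectivity
        # Local feature fusion: 1x1 conv over all concatenated features.
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        fused = self.fuse(torch.cat(features, dim=1))
        return x + fused                         # local residual learning

block = ResidualDenseBlock()
out = block(torch.randn(1, 32, 16, 64))
print(out.shape)                                 # torch.Size([1, 32, 16, 64])
```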

Approximating Activation Functions

Title Approximating Activation Functions
Authors Nicholas Gerard Timmons, Andrew Rice
Abstract ReLU is widely seen as the default choice for activation functions in neural networks. However, there are cases where more complicated functions are required. In particular, recurrent neural networks (such as LSTMs) make extensive use of both hyperbolic tangent and sigmoid functions. These functions are expensive to compute. We used function approximation techniques to develop replacements for these functions and evaluated them empirically on three popular network configurations. We find safe approximations that yield a 10% to 37% improvement in training times on the CPU. These approximations were suitable for all cases we considered and we believe are appropriate replacements for all networks using these activation functions. We also develop ranged approximations which only apply in some cases due to restrictions on their input domain. Our ranged approximations yield a performance improvement of 20% to 53% in network training time. Our functions also match or considerably outperform the ad-hoc approximations used in Theano and in the implementation of Word2Vec.
Tasks
Published 2020-01-17
URL https://arxiv.org/abs/2001.06370v1
PDF https://arxiv.org/pdf/2001.06370v1.pdf
PWC https://paperswithcode.com/paper/approximating-activation-functions
Repo
Framework
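
An illustrative sketch of a "ranged" approximation in the sense the abstract uses: clamp the input to a bounded interval, evaluate a cheap low-order polynomial inside it, and compare against the exact sigmoid. The coefficients and bound are placeholders, not the approximations derived in the paper; the printed error shows how crude this particular surrogate is.

```python
# Cheap ranged approximation of the sigmoid, checked against the exact function.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ranged_sigmoid(x, bound=4.0):
    """Clamp the input, then evaluate a low-order odd polynomial around 0
    (coefficients are illustrative, not fitted carefully)."""
    x = np.clip(x, -bound, bound)
    return np.clip(0.5 + 0.197 * x - 0.004 * x ** 3, 0.0, 1.0)

xs = np.linspace(-6, 6, 1001)
err = np.max(np.abs(sigmoid(xs) - ranged_sigmoid(xs)))
print("max absolute error on [-6, 6]:", round(float(err), 4))
```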

Sentence Analogies: Exploring Linguistic Relationships and Regularities in Sentence Embeddings

Title Sentence Analogies: Exploring Linguistic Relationships and Regularities in Sentence Embeddings
Authors Xunjie Zhu, Gerard de Melo
Abstract While important properties of word vector representations have been studied extensively, far less is known about the properties of sentence vector representations. Word vectors are often evaluated by assessing to what degree they exhibit regularities with regard to relationships of the sort considered in word analogies. In this paper, we investigate to what extent commonly used sentence vector representation spaces also reflect certain kinds of regularities. We propose a number of schemes to induce evaluation data, based on lexical analogy data as well as semantic relationships between sentences. Our experiments consider a wide range of sentence embedding methods, including ones based on BERT-style contextual embeddings. We find that different models differ substantially in their ability to reflect such regularities.
Tasks Sentence Embedding, Sentence Embeddings
Published 2020-03-09
URL https://arxiv.org/abs/2003.04036v1
PDF https://arxiv.org/pdf/2003.04036v1.pdf
PWC https://paperswithcode.com/paper/sentence-analogies-exploring-linguistic
Repo
Framework
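
A minimal sketch of the analogy test applied at the sentence level: for a : b :: c : d, check whether emb(b) - emb(a) + emb(c) is nearest (by cosine similarity) to emb(d) among candidate sentences, excluding the query sentences themselves. The embed() function below is a random placeholder, so its output is essentially arbitrary; any of the sentence encoders evaluated in the paper could be dropped in instead.

```python
# Sentence-analogy check with a placeholder encoder.
import numpy as np

sentences = {
    "a": "He is walking to work.",
    "b": "He walked to work.",
    "c": "She is driving to school.",
    "d": "She drove to school.",
}
candidates = list(sentences.values()) + ["The weather is nice today.",
                                         "They are eating lunch."]

def embed(sentence):
    # Placeholder: deterministic random vector per sentence (not a real encoder).
    seed = abs(hash(sentence)) % (2 ** 32)
    return np.random.default_rng(seed).normal(size=128)

def analogy_holds(a, b, c, d, candidates):
    vecs = {s: embed(s) for s in candidates}
    query = vecs[b] - vecs[a] + vecs[c]
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    scored = [(s, cos(query, vecs[s])) for s in candidates if s not in (a, b, c)]
    return max(scored, key=lambda t: t[1])[0] == d

print(analogy_holds(sentences["a"], sentences["b"],
                    sentences["c"], sentences["d"], candidates))
```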