January 30, 2020

2869 words 14 mins read

Paper Group ANR 362

Articulatory and bottleneck features for speaker-independent ASR of dysarthric speech. Situational Grounding within Multimodal Simulations. The invisible power of fairness. How machine learning shapes democracy. Why we need an AI-resilient society. Benign Overfitting in Linear Regression. When Does Non-Orthogonal Tensor Decomposition Have No Spurio …

Articulatory and bottleneck features for speaker-independent ASR of dysarthric speech

Title Articulatory and bottleneck features for speaker-independent ASR of dysarthric speech
Authors Emre Yılmaz, Vikramjit Mitra, Ganesh Sivaraman, Horacio Franco
Abstract Rapid population aging has stimulated the development of assistive devices that provide personalized medical support to those in need suffering from various etiologies. One prominent clinical application is a computer-assisted speech training system which enables personalized speech therapy for patients impaired by communication disorders in their home environment. Such a system relies on robust automatic speech recognition (ASR) technology to be able to provide accurate articulation feedback. With the long-term aim of developing off-the-shelf ASR systems that can be incorporated in a clinical context without prior speaker information, we compare the ASR performance of speaker-independent bottleneck and articulatory features on dysarthric speech, used in conjunction with dedicated neural network-based acoustic models that have been shown to be robust against spectrotemporal deviations. We report the ASR performance of these systems on two dysarthric speech datasets with different characteristics to quantify the achieved performance gains. Despite the remaining performance gap between dysarthric and normal speech, significant improvements are reported on both datasets using speaker-independent ASR architectures.
Tasks Speech Recognition
Published 2019-05-16
URL https://arxiv.org/abs/1905.06533v2
PDF https://arxiv.org/pdf/1905.06533v2.pdf
PWC https://paperswithcode.com/paper/articulatory-and-bottleneck-features-for
Repo
Framework
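
The "bottleneck features" compared in this entry are activations taken from a narrow hidden layer of a neural network trained on frame-level acoustic targets. Below is a minimal PyTorch sketch of such an extractor; the layer sizes, target count, and names are illustrative assumptions, and the paper's articulatory features and dedicated acoustic models are not reproduced.

```python
import torch
import torch.nn as nn

class BottleneckNet(nn.Module):
    """Frame classifier whose narrow hidden layer provides bottleneck features."""
    def __init__(self, n_feats=40, n_targets=500, bottleneck_dim=40):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_feats, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, bottleneck_dim),        # the bottleneck layer
        )
        self.classifier = nn.Sequential(nn.ReLU(), nn.Linear(bottleneck_dim, n_targets))

    def forward(self, x):
        return self.classifier(self.encoder(x))

    def bottleneck_features(self, x):
        # Activations fed to a downstream acoustic model in place of raw features
        with torch.no_grad():
            return self.encoder(x)

frames = torch.randn(100, 40)   # 100 frames of 40-dim filterbank-like features
print(BottleneckNet().bottleneck_features(frames).shape)   # torch.Size([100, 40])
```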

Situational Grounding within Multimodal Simulations

Title Situational Grounding within Multimodal Simulations
Authors James Pustejovsky, Nikhil Krishnaswamy
Abstract In this paper, we argue that simulation platforms enable a novel type of embodied spatial reasoning, one facilitated by a formal model of object and event semantics that renders the continuous quantitative search space of an open-world, real-time environment tractable. We provide examples for how a semantically-informed AI system can exploit the precise, numerical information provided by a game engine to perform qualitative reasoning about objects and events, facilitate learning novel concepts from data, and communicate with a human to improve its models and demonstrate its understanding. We argue that simulation environments, and game engines in particular, bring together many different notions of “simulation” and many different technologies to provide a highly-effective platform for developing both AI systems and tools to experiment in both machine and human intelligence.
Tasks
Published 2019-02-05
URL http://arxiv.org/abs/1902.01886v1
PDF http://arxiv.org/pdf/1902.01886v1.pdf
PWC https://paperswithcode.com/paper/situational-grounding-within-multimodal
Repo
Framework

The invisible power of fairness. How machine learning shapes democracy

Title The invisible power of fairness. How machine learning shapes democracy
Authors Elena Beretta, Antonio Santangelo, Bruno Lepri, Antonio Vetrò, Juan Carlos De Martin
Abstract Many machine learning systems make extensive use of large amounts of data regarding human behaviors. Several researchers have found various discriminatory practices related to the use of human-related machine learning systems, for example in the fields of criminal justice, credit scoring and advertising. Fair machine learning is therefore emerging as a new field of study to mitigate biases that are inadvertently incorporated into algorithms. Data scientists and computer engineers are making various efforts to provide definitions of fairness. In this paper, we provide an overview of the most widespread definitions of fairness in the field of machine learning, arguing that the ideas underlying each formalization are closely related to different ideas of justice and to different interpretations of democracy embedded in our culture. This work intends to analyze the definitions of fairness that have been proposed to date, to interpret the underlying criteria and to relate them to different ideas of democracy.
Tasks
Published 2019-03-22
URL http://arxiv.org/abs/1903.09493v1
PDF http://arxiv.org/pdf/1903.09493v1.pdf
PWC https://paperswithcode.com/paper/the-invisible-power-of-fairness-how-machine
Repo
Framework
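
The formal fairness definitions this paper surveys can be made concrete in a few lines of code. The hedged sketch below computes two of the most widespread group-fairness quantities, the demographic parity gap and the equal opportunity (true-positive-rate) gap; the function names and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(yhat=1 | group=0) - P(yhat=1 | group=1)| for binary predictions."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy predictions for two demographic groups
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group), equal_opportunity_gap(y_true, y_pred, group))
```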

Why we need an AI-resilient society

Title Why we need an AI-resilient society
Authors Thomas Bartz-Beielstein
Abstract Artificial intelligence is considered a key technology. It has a huge impact on our society. Besides many positive effects, there are also negative effects or threats. Some of these threats to society are well-known, e.g., weapons or killer robots. But there are also threats that are ignored. These unknown-knowns or blind spots affect privacy, and facilitate manipulation and mistaken identities. We cannot trust data, audio, video, and identities any more. Democracies are able to cope with known threats, the known-knowns. Transforming unknown-knowns into known-knowns is one important cornerstone of resilient societies. An AI-resilient society is able to transform threats caused by new AI technologies such as generative adversarial networks. Resilience can be seen as a positive adaptation to these threats. We propose three strategies for achieving this adaptation: awareness, agreements, and red flags. This article accompanies the TEDx talk “Why we urgently need an AI-resilient society”, see https://youtu.be/f6c2ngp7rqY.
Tasks
Published 2019-12-18
URL https://arxiv.org/abs/1912.08786v1
PDF https://arxiv.org/pdf/1912.08786v1.pdf
PWC https://paperswithcode.com/paper/why-we-need-an-ai-resilient-society
Repo
Framework

Benign Overfitting in Linear Regression

Title Benign Overfitting in Linear Regression
Authors Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler
Abstract The phenomenon of benign overfitting is one of the key mysteries uncovered by deep learning methodology: deep neural networks seem to predict well, even with a perfect fit to noisy training data. Motivated by this phenomenon, we consider when a perfect fit to training data in linear regression is compatible with accurate prediction. We give a characterization of linear regression problems for which the minimum norm interpolating prediction rule has near-optimal prediction accuracy. The characterization is in terms of two notions of the effective rank of the data covariance. It shows that overparameterization is essential for benign overfitting in this setting: the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size. By studying examples of data covariance properties that this characterization shows are required for benign overfitting, we find an important role for finite-dimensional data: the accuracy of the minimum norm interpolating prediction rule approaches the best possible accuracy for a much narrower range of properties of the data distribution when the data lies in an infinite dimensional space versus when the data lies in a finite dimensional space whose dimension grows faster than the sample size.
Tasks
Published 2019-06-26
URL https://arxiv.org/abs/1906.11300v3
PDF https://arxiv.org/pdf/1906.11300v3.pdf
PWC https://paperswithcode.com/paper/benign-overfitting-in-linear-regression
Repo
Framework
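
The minimum norm interpolating rule analyzed in this paper is easy to write down explicitly. The sketch below, on assumed synthetic Gaussian data rather than anything from the paper, fits an overparameterized linear model that perfectly interpolates noisy training labels via the pseudoinverse and then checks its test error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 2000                               # overparameterized: d >> n
w_true = np.zeros(d); w_true[:5] = 1.0
X = rng.normal(size=(n, d))
y = X @ w_true + 0.5 * rng.normal(size=n)     # noisy labels

# Minimum norm interpolator: w = X^T (X X^T)^{-1} y, i.e. the pseudoinverse solution
w_hat = np.linalg.pinv(X) @ y
assert np.allclose(X @ w_hat, y)              # perfect fit to the noisy training data

X_test = rng.normal(size=(1000, d))
y_test = X_test @ w_true
print("test MSE:", np.mean((X_test @ w_hat - y_test) ** 2))
```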

When Does Non-Orthogonal Tensor Decomposition Have No Spurious Local Minima?

Title When Does Non-Orthogonal Tensor Decomposition Have No Spurious Local Minima?
Authors Maziar Sanjabi, Sina Baharlouei, Meisam Razaviyayn, Jason D. Lee
Abstract We study the optimization problem for decomposing $d$ dimensional fourth-order tensors with $k$ non-orthogonal components. We derive deterministic conditions under which such a problem does not have spurious local minima. In particular, we show that if $\kappa = \frac{\lambda_{max}}{\lambda_{min}} < \frac{5}{4}$, and the incoherence coefficient is of the order $O(\frac{1}{\sqrt{d}})$, then all the local minima are globally optimal. Using standard techniques, these conditions can easily be transformed into conditions that hold with high probability in high dimensions when the components are generated randomly. Finally, we prove that the tensor power method with deflation and restarts can efficiently extract all the components within a tolerance level $O(\kappa \sqrt{k\tau^3})$ that seems to be the noise floor of non-orthogonal tensor decomposition.
Tasks
Published 2019-11-22
URL https://arxiv.org/abs/1911.09815v1
PDF https://arxiv.org/pdf/1911.09815v1.pdf
PWC https://paperswithcode.com/paper/when-does-non-orthogonal-tensor-decomposition
Repo
Framework
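
A minimal sketch of the tensor power method with deflation referred to in the abstract, written for a symmetric fourth-order tensor built from random near-orthogonal components; restarts, the incoherence condition, and the paper's tolerance analysis are not reproduced.

```python
import numpy as np

def power_step(T, v):
    # One power iteration for a symmetric 4th-order tensor: v <- T(I, v, v, v), normalized
    u = np.einsum('ijkl,j,k,l->i', T, v, v, v)
    return u / np.linalg.norm(u)

def decompose(T, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    T = T.copy()
    lambdas, vecs = [], []
    for _ in range(k):
        v = rng.normal(size=T.shape[0]); v /= np.linalg.norm(v)
        for _ in range(iters):
            v = power_step(T, v)
        lam = np.einsum('ijkl,i,j,k,l->', T, v, v, v, v)
        lambdas.append(lam); vecs.append(v)
        T = T - lam * np.einsum('i,j,k,l->ijkl', v, v, v, v)   # deflation
    return np.array(lambdas), np.array(vecs)

# Synthetic check: T = sum_r a_r ⊗ a_r ⊗ a_r ⊗ a_r with orthonormal a_r
d, k = 10, 3
A = np.linalg.qr(np.random.default_rng(1).normal(size=(d, k)))[0]
T = sum(np.einsum('i,j,k,l->ijkl', a, a, a, a) for a in A.T)
print(decompose(T, k)[0])   # recovered weights, close to 1 for each component
```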

Exact Recovery in the Latent Space Model

Title Exact Recovery in the Latent Space Model
Authors Chuyang Ke, Jean Honorio
Abstract We analyze the necessary and sufficient conditions for exact recovery of the symmetric Latent Space Model (LSM) with two communities. In an LSM, each node is associated with a latent vector following some probability distribution. We show that exact recovery can be achieved using a semidefinite programming (SDP) approach. We also analyze when the NP-hard maximum likelihood estimation is correct. Our analysis predicts the experimental correctness of SDP with high accuracy, showing the suitability of our focus on the Karush-Kuhn-Tucker (KKT) conditions and the second minimum eigenvalue of a properly defined matrix.
Tasks
Published 2019-01-28
URL https://arxiv.org/abs/1902.03099v2
PDF https://arxiv.org/pdf/1902.03099v2.pdf
PWC https://paperswithcode.com/paper/exact-recovery-in-the-latent-space-model
Repo
Framework
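
A hedged sketch of a standard SDP relaxation for two-community recovery, the general approach the abstract refers to (not necessarily the paper's exact program), using cvxpy and rounding with the leading eigenvector of the SDP solution.

```python
import numpy as np
import cvxpy as cp

def sdp_two_communities(A):
    """Relax max_x x^T A x over x in {-1,+1}^n to a semidefinite program."""
    n = A.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.diag(X) == 1]
    cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints).solve()
    w, V = np.linalg.eigh(X.value)
    return np.sign(V[:, -1])        # community labels from the leading eigenvector

# Tiny example: a noisy two-block similarity matrix
labels = np.array([1] * 5 + [-1] * 5)
noise = 0.3 * np.random.default_rng(0).normal(size=(10, 10))
A = np.outer(labels, labels) + (noise + noise.T) / 2
print(sdp_two_communities(A))
```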

A Study of Feature Extraction techniques for Sentiment Analysis

Title A Study of Feature Extraction techniques for Sentiment Analysis
Authors Avinash Madasu, Sivasankar E
Abstract Sentiment analysis refers to the study of systematically extracting the meaning of subjective text. When analysing sentiments from subjective text using machine learning techniques, feature extraction becomes a significant part. We perform a study on the performance of the feature extraction techniques TF-IDF (Term Frequency-Inverse Document Frequency) and Doc2vec (Document to Vector) using the Cornell movie review dataset, UCI sentiment labeled datasets and the Stanford movie review dataset, effectively classifying the text into positive and negative polarities by using various pre-processing methods like stop-word elimination and tokenization, which increase the performance of sentiment analysis in terms of accuracy and time taken by the classifier. The features obtained after applying feature extraction techniques on the text sentences are trained and tested using the classifiers Logistic Regression, Support Vector Machines, K-Nearest Neighbours, Decision Tree and Bernoulli Naive Bayes.
Tasks Sentiment Analysis, Tokenization
Published 2019-06-04
URL https://arxiv.org/abs/1906.01573v1
PDF https://arxiv.org/pdf/1906.01573v1.pdf
PWC https://paperswithcode.com/paper/a-study-of-feature-extraction-techniques-for
Repo
Framework
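
A minimal sketch of the TF-IDF branch of the study, assuming scikit-learn and a tiny illustrative corpus rather than the Cornell, UCI and Stanford datasets used in the paper; the Doc2vec branch (e.g., via gensim) would follow the same train/test pattern.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Illustrative stand-in for the movie-review corpora used in the paper
texts = ["great movie, loved it", "terrible plot and bad acting",
         "wonderful performance", "boring and predictable",
         "an absolute delight", "waste of time"]
labels = [1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.33, random_state=0)

# stop_words='english' plays the role of the stop-word elimination step;
# the vectorizer's built-in analyzer handles tokenization
clf = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```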

Towards Compact and Robust Deep Neural Networks

Title Towards Compact and Robust Deep Neural Networks
Authors Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana
Abstract Deep neural networks have achieved impressive performance in many applications, but their large number of parameters leads to significant computational and storage overheads. Several recent works attempt to mitigate these overheads by designing compact networks using pruning of connections. However, we observe that most of the existing strategies to design compact networks fail to preserve network robustness against adversarial examples. In this work, we rigorously study the extension of network pruning strategies to preserve both benign accuracy and robustness of a network. Starting with a formal definition of the pruning procedure, including pre-training, weights pruning, and fine-tuning, we propose a new pruning method that can create compact networks while preserving both benign accuracy and robustness. Our method is based on two main insights: (1) we ensure that the training objectives of the pre-training and fine-tuning steps match the training objective of the desired robust model (e.g., adversarial robustness/verifiable robustness), and (2) we keep the pruning strategy agnostic to pre-training and fine-tuning objectives. We evaluate our method on four different networks on the CIFAR-10 dataset and measure benign accuracy, empirical robust accuracy, and verifiable robust accuracy. We demonstrate that our pruning method can preserve on average 93% benign accuracy, 92.5% empirical robust accuracy, and 85.0% verifiable robust accuracy while compressing the tested network by 10$\times$.
Tasks Network Pruning
Published 2019-06-14
URL https://arxiv.org/abs/1906.06110v1
PDF https://arxiv.org/pdf/1906.06110v1.pdf
PWC https://paperswithcode.com/paper/towards-compact-and-robust-deep-neural
Repo
Framework
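
The sketch below shows a generic global magnitude-pruning step in PyTorch, the kind of "weights pruning" phase the abstract formalizes. The paper's central point, keeping the pre-training and fine-tuning objectives matched to the desired robust objective, is only indicated in the trailing comment; function names and the sparsity level are illustrative.

```python
import torch

def global_magnitude_prune(model, sparsity=0.9):
    """Zero out the smallest-magnitude weights across all weight matrices; return the masks."""
    all_w = torch.cat([p.detach().abs().flatten()
                       for p in model.parameters() if p.dim() > 1])
    threshold = torch.kthvalue(all_w, int(sparsity * all_w.numel())).values
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:
            masks[name] = (p.detach().abs() > threshold).float()
            p.data.mul_(masks[name])           # prune in place
    return masks

# After pruning, fine-tune with the same objective used for pre-training
# (e.g., an adversarial-training loss if adversarial robustness is the goal),
# re-applying the masks after every optimizer step so pruned weights stay zero.
```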

Implicit Neural Solver for Time-dependent Linear PDEs with Convergence Guarantee

Title Implicit Neural Solver for Time-dependent Linear PDEs with Convergence Guarantee
Authors Suprosanna Shit, Abinav Ravi Venkatakrishnan, Ivan Ezhov, Jana Lipkova, Marie Piraud, Bjoern Menze
Abstract Fast and accurate solution of time-dependent partial differential equations (PDEs) is of key interest in many research fields including physics, engineering, and biology. Generally, implicit schemes are preferred over the explicit ones for better stability and correctness. The existing implicit schemes are usually iterative and employ a general-purpose solver which may be sub-optimal for a specific class of PDEs. In this paper, we propose a neural solver to learn an optimal iterative scheme for a class of PDEs, in a data-driven fashion. We attain this objective by modifying an iteration of an existing semi-implicit solver using a deep neural network. Further, we prove theoretically that our approach preserves the correctness and convergence guarantees provided by the existing iterative-solvers. We also demonstrate that our model generalizes to a different parameter setting than the one seen during training and achieves faster convergence compared to the semi-implicit schemes.
Tasks
Published 2019-10-08
URL https://arxiv.org/abs/1910.03452v3
PDF https://arxiv.org/pdf/1910.03452v3.pdf
PWC https://paperswithcode.com/paper/implicit-neural-solver-for-time-dependent
Repo
Framework
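
To make "modifying an iteration of an existing semi-implicit solver" concrete, here is a hedged sketch of a weighted-Jacobi iteration for an implicit-Euler step of the 1D heat equation with periodic boundaries. In the spirit of the paper, the relaxation weight omega (fixed here) is the kind of quantity a neural network would predict; no network is implemented.

```python
import numpy as np

def implicit_heat_step(u, dt, dx, alpha=1.0, iters=50, omega=1.0):
    """Solve (I - dt*alpha*Laplacian) u_next = u with weighted-Jacobi iterations."""
    r = alpha * dt / dx ** 2
    diag = 1.0 + 2.0 * r
    u_next = u.copy()
    for _ in range(iters):
        neighbors = np.roll(u_next, 1) + np.roll(u_next, -1)   # periodic boundaries
        u_jacobi = (u + r * neighbors) / diag
        u_next = (1.0 - omega) * u_next + omega * u_jacobi      # omega: the learnable knob
    return u_next

x = np.linspace(0.0, 1.0, 128, endpoint=False)
u0 = np.sin(2 * np.pi * x)
print(implicit_heat_step(u0, dt=1e-3, dx=x[1] - x[0]).max())    # slightly damped sine
```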

Latent Dirichlet Allocation Based Acoustic Data Selection for Automatic Speech Recognition

Title Latent Dirichlet Allocation Based Acoustic Data Selection for Automatic Speech Recognition
Authors Mortaza Doulaty, Thomas Hain
Abstract Selecting in-domain data from a large pool of diverse and out-of-domain data is a non-trivial problem. In most cases simply using all of the available data will lead to sub-optimal and in some cases even worse performance compared to carefully selecting a matching set. This is true even for data-inefficient neural models. Acoustic Latent Dirichlet Allocation (aLDA) is shown to be useful in a variety of speech technology related tasks, including domain adaptation of acoustic models for automatic speech recognition and entity labeling for information retrieval. In this paper we propose to use aLDA as a data similarity criterion in a data selection framework. Given a large pool of out-of-domain and potentially mismatched data, the task is to select the best-matching training data to a set of representative utterances sampled from a target domain. Our target data consists of around 32 hours of meeting data (both far-field and close-talk) and the pool contains 2k hours of meeting, talks, voice search, dictation, command-and-control, audio books, lectures, generic media and telephony speech data. The proposed technique for training data selection significantly outperforms random selection, posterior-based selection, as well as using all of the available data.
Tasks Domain Adaptation, Information Retrieval, Speech Recognition
Published 2019-07-02
URL https://arxiv.org/abs/1907.01302v1
PDF https://arxiv.org/pdf/1907.01302v1.pdf
PWC https://paperswithcode.com/paper/latent-dirichlet-allocation-based-acoustic
Repo
Framework
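
A rough sketch of the selection framework on top of generic count features; the paper's acoustic LDA operates on discretized acoustic representations, and that feature construction is not reproduced here. Pool items are ranked by the cosine similarity between their topic posteriors and the average posterior of the target sample; names and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def select_matching_data(pool_counts, target_counts, n_topics=32, top_k=1000):
    """Rank pool items by topic-posterior similarity to the target domain."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(np.vstack([pool_counts, target_counts]))
    pool_topics = lda.transform(pool_counts)                      # per-item topic posteriors
    target_topic = lda.transform(target_counts).mean(axis=0, keepdims=True)
    sims = cosine_similarity(pool_topics, target_topic).ravel()
    return np.argsort(-sims)[:top_k]                              # indices of best-matching items
```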

Suction Grasp Region Prediction using Self-supervised Learning for Object Picking in Dense Clutter

Title Suction Grasp Region Prediction using Self-supervised Learning for Object Picking in Dense Clutter
Authors Quanquan Shao, Jie Hu, Weiming Wang, Yi Fang, Wenhai Liu, Jin Qi, Jin Ma
Abstract This paper focuses on robotic picking tasks in cluttered scenarios. Because of the diversity of object poses, the types of stacking and the complicated backgrounds in bin-picking situations, it is very difficult to recognize objects and estimate their poses before grasping them. Here, this paper combines ResNet with a U-Net structure, a special framework of convolutional neural networks (CNNs), to predict the picking region without recognition and pose estimation. This makes the robotic picking system learn picking skills from scratch. At the same time, we train the network end to end with online samples. At the end of this paper, several experiments are conducted to demonstrate the performance of our method.
Tasks Pose Estimation
Published 2019-04-16
URL http://arxiv.org/abs/1904.07402v2
PDF http://arxiv.org/pdf/1904.07402v2.pdf
PWC https://paperswithcode.com/paper/suction-grasp-region-prediction-using-self
Repo
Framework
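
A toy PyTorch encoder-decoder with a single skip connection, sketching the kind of ResNet/U-Net-style network the abstract describes for predicting a per-pixel picking-region map; the actual architecture, residual blocks, and the self-supervised labeling pipeline are not reproduced, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class TinyGraspNet(nn.Module):
    """Minimal encoder-decoder with one skip connection; outputs a per-pixel grasp score map."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU())
        self.head = nn.Conv2d(32, 1, 1)   # applied after concatenating the skip connection

    def forward(self, x):
        e1 = self.enc1(x)                              # full resolution
        e2 = self.enc2(e1)                             # half resolution
        d = self.dec(e2)                               # back to full resolution
        return torch.sigmoid(self.head(torch.cat([d, e1], dim=1)))

print(TinyGraspNet()(torch.zeros(1, 3, 64, 64)).shape)   # torch.Size([1, 1, 64, 64])
```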

Federated Learning for Ranking Browser History Suggestions

Title Federated Learning for Ranking Browser History Suggestions
Authors Florian Hartmann, Sunah Suh, Arkadiusz Komarzewski, Tim D. Smith, Ilana Segall
Abstract Federated Learning is a new subfield of machine learning that allows fitting models without collecting the training data itself. Instead of sharing data, users collaboratively train a model by only sending weight updates to a server. To improve the ranking of suggestions in the Firefox URL bar, we make use of Federated Learning to train a model on user interactions in a privacy-preserving way. This trained model replaces a handcrafted heuristic, and our results show that users now type over half a character less to find what they are looking for. To be able to deploy our system to real users without degrading their experience during training, we design the optimization process to be robust. To this end, we use a variant of Rprop for optimization, and implement additional safeguards. By using a numerical gradient approximation technique, our system is able to optimize anything in Firefox that is currently based on handcrafted heuristics. Our paper shows that Federated Learning can be used successfully to train models in privacy-respecting ways.
Tasks
Published 2019-11-26
URL https://arxiv.org/abs/1911.11807v1
PDF https://arxiv.org/pdf/1911.11807v1.pdf
PWC https://paperswithcode.com/paper/federated-learning-for-ranking-browser
Repo
Framework
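
A simplified sketch of the server-side optimization: clients send gradient estimates, the server averages them and applies a sign-based, Rprop-style step with per-weight step sizes. The real system's update protocol, safeguards, and numerical gradient approximation are not shown, and all names and constants are illustrative.

```python
import numpy as np

def rprop_server_step(weights, client_updates, state,
                      eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=0.1):
    """Aggregate client gradient estimates and apply a sign-based (Rprop-style) update."""
    grad = np.mean(client_updates, axis=0)
    same_sign = np.sign(grad) * np.sign(state["prev_grad"])
    step = np.where(same_sign > 0, state["step"] * eta_plus,
           np.where(same_sign < 0, state["step"] * eta_minus, state["step"]))
    state["step"] = np.clip(step, step_min, step_max)
    state["prev_grad"] = grad
    return weights - np.sign(grad) * state["step"]

# One round with 3 simulated clients and 4 model weights
w = np.zeros(4)
state = {"step": np.full(4, 0.01), "prev_grad": np.zeros(4)}
clients = [np.array([0.2, -0.1, 0.0, 0.3]) +
           0.05 * np.random.default_rng(i).normal(size=4) for i in range(3)]
print(rprop_server_step(w, clients, state))
```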

Bi-cross validation for estimating spectral clustering hyper parameters

Title Bi-cross validation for estimating spectral clustering hyper parameters
Authors Sioan Zohar, Chun-Hong Yoon
Abstract One challenge impeding the analysis of terabyte-scale x-ray scattering data from the Linac Coherent Light Source (LCLS) is determining the number of clusters required for the execution of traditional clustering algorithms. Here we demonstrate that previous work using bi-cross validation (BCV) to determine the number of singular vectors directly maps to the spectral clustering problem of estimating both the number of clusters and hyperparameter values. These results indicate that the process of estimating the number of clusters should not be divorced from the process of estimating other hyperparameters. Applying this method to LCLS x-ray scattering data enables the identification of dropped shots without manually setting boundaries on detector fluence and provides a path towards identifying rare and anomalous events.
Tasks
Published 2019-08-10
URL https://arxiv.org/abs/1908.03747v3
PDF https://arxiv.org/pdf/1908.03747v3.pdf
PWC https://paperswithcode.com/paper/estimation-of-spectral-clustering-hyper
Repo
Framework
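
A hedged sketch of Owen–Perry-style bi-cross-validation for choosing the number of singular vectors: hold out a block of rows and columns, build a rank-k prediction of the held-out block from the retained block, and score the residual. Jointly selecting the other spectral-clustering hyperparameters, as the abstract describes, is not shown.

```python
import numpy as np

def bcv_residual(X, k, row_mask, col_mask):
    """Held-out block A is predicted from the retained block D via A_hat = B D_k^+ C."""
    A = X[np.ix_(row_mask, col_mask)]
    B = X[np.ix_(row_mask, ~col_mask)]
    C = X[np.ix_(~row_mask, col_mask)]
    D = X[np.ix_(~row_mask, ~col_mask)]
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    D_k_pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T    # pseudo-inverse of rank-k part of D
    return np.linalg.norm(A - B @ D_k_pinv @ C) ** 2

# Pick the rank by minimizing the held-out residual
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 40)) + 0.1 * rng.normal(size=(60, 40))
rows, cols = rng.random(60) < 0.25, rng.random(40) < 0.25
print(min(range(1, 8), key=lambda k: bcv_residual(X, k, rows, cols)))   # likely 3
```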

On Pruning for Score-Based Bayesian Network Structure Learning

Title On Pruning for Score-Based Bayesian Network Structure Learning
Authors Alvaro H. C. Correia, James Cussens, Cassio P. de Campos
Abstract Many algorithms for score-based Bayesian network structure learning (BNSL) take as input a collection of potentially optimal parent sets for each variable in a data set. Constructing these collections naively is computationally intensive since the number of parent sets grows exponentially with the number of variables. Therefore, pruning techniques are not only desirable but essential. While effective pruning exists for the Bayesian Information Criterion (BIC), current results for the Bayesian Dirichlet equivalent uniform (BDeu) score reduce the search space very modestly, hampering the use of (the often preferred) BDeu. We derive new non-trivial theoretical upper bounds for the BDeu score that considerably improve on the state of the art. Since the new bounds are efficient and easy to implement, they can be promptly integrated into many BNSL methods. We show that gains can be significant in multiple UCI data sets so as to highlight practical implications of the theoretical advances.
Tasks
Published 2019-05-23
URL https://arxiv.org/abs/1905.09943v1
PDF https://arxiv.org/pdf/1905.09943v1.pdf
PWC https://paperswithcode.com/paper/on-pruning-for-score-based-bayesian-network
Repo
Framework
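
The new BDeu bounds themselves are theoretical and not reproduced here, but the way score upper bounds prune candidate parent sets can be sketched generically: a candidate is discarded when some already-kept subset scores at least as high as an upper bound valid for the candidate and all of its supersets. The `score` and `upper_bound` callables below are placeholders, not the paper's BDeu quantities.

```python
from itertools import combinations

def prune_parent_sets(variables, score, upper_bound, max_size=3):
    """Enumerate candidate parent sets by size, keeping only potentially optimal ones."""
    kept = {frozenset(): score(frozenset())}
    for size in range(1, max_size + 1):
        for cand in map(frozenset, combinations(variables, size)):
            best_subset = max(kept.get(frozenset(s), float("-inf"))
                              for s in combinations(cand, size - 1))
            if upper_bound(cand) <= best_subset:
                continue          # a smaller parent set already does at least as well
            kept[cand] = score(cand)
    return kept
```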