Paper Group ANR 389
Real-time Ultrasound-enhanced Multimodal Imaging of Tongue using 3D Printable Stabilizer System: A Deep Learning Approach
Title | Real-time Ultrasound-enhanced Multimodal Imaging of Tongue using 3D Printable Stabilizer System: A Deep Learning Approach |
Authors | M. Hamed Mozaffari, Won-Sook Lee |
Abstract | Despite renewed awareness of the importance of articulation, it remains a challenge for instructors to handle the pronunciation needs of language learners, and pedagogical tools for pronunciation teaching and learning remain relatively scarce. Unlike inefficient, traditional pronunciation instruction such as listening and repeating, electronic visual feedback (EVF) systems such as ultrasound technology have been employed in new approaches. Recently, an ultrasound-enhanced multimodal method has been developed for visualizing the tongue movements of a language learner overlaid on the side of the speaker’s face. That system was evaluated in several language courses via a blended learning paradigm at the university level. The results indicated that visualizing the articulatory system as biofeedback to language learners significantly improves the efficiency of articulation learning. Despite the successful use of multimodal techniques for pronunciation training, these systems still require manual work and human intervention. In this article, we aim to contribute to this growing body of research by addressing the difficulties of previous approaches and proposing a comprehensive, automatic, real-time multimodal pronunciation training system that benefits from powerful artificial intelligence techniques. The main objective of this research was to combine the advantages of ultrasound technology, three-dimensional printing, and deep learning algorithms to enhance the performance of previous systems. Our preliminary pedagogical evaluation of the proposed system revealed a significant improvement in flexibility, control, robustness, and autonomy. |
Tasks | |
Published | 2019-11-22 |
URL | https://arxiv.org/abs/1911.09840v1 |
https://arxiv.org/pdf/1911.09840v1.pdf | |
PWC | https://paperswithcode.com/paper/real-time-ultrasound-enhanced-multimodal |
Repo | |
Framework | |
Representation Disentanglement for Multi-task Learning with application to Fetal Ultrasound
Title | Representation Disentanglement for Multi-task Learning with application to Fetal Ultrasound |
Authors | Qingjie Meng, Nick Pawlowski, Daniel Rueckert, Bernhard Kainz |
Abstract | One of the biggest challenges for deep learning algorithms in medical image analysis is the indiscriminate mixing of image properties, e.g. artifacts and anatomy. These entangled image properties lead to a semantically redundant feature encoding for the relevant task and thus lead to poor generalization of deep learning algorithms. In this paper we propose a novel representation disentanglement method to extract semantically meaningful and generalizable features for different tasks within a multi-task learning framework. Deep neural networks are utilized to ensure that the encoded features are maximally informative with respect to relevant tasks, while an adversarial regularization encourages these features to be disentangled and minimally informative about irrelevant tasks. We aim to use the disentangled representations to generalize the applicability of deep neural networks. We demonstrate the advantages of the proposed method on synthetic data as well as fetal ultrasound images. Our experiments illustrate that our method is capable of learning disentangled internal representations. It outperforms baseline methods in multiple tasks, especially on images with new properties, e.g. previously unseen artifacts in fetal ultrasound. |
Tasks | Multi-Task Learning |
Published | 2019-08-21 |
URL | https://arxiv.org/abs/1908.07885v1 |
https://arxiv.org/pdf/1908.07885v1.pdf | |
PWC | https://paperswithcode.com/paper/190807885 |
Repo | |
Framework | |
Neuromorphic Hardware learns to learn
Title | Neuromorphic Hardware learns to learn |
Authors | Thomas Bohnstingl, Franz Scherr, Christian Pehle, Karlheinz Meier, Wolfgang Maass |
Abstract | Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand. In contrast, the hyperparameters and learning algorithms of the networks of neurons in the brain, which such hardware aims to emulate, have been optimized through extensive evolutionary and developmental processes for specific ranges of computing and learning tasks. Occasionally this process has been emulated through genetic algorithms, but these themselves require hand-designed details and tend to provide a limited range of improvements. We instead employ other powerful gradient-free optimization tools, such as cross-entropy methods and evolutionary strategies, in order to port the function of biological optimization processes to neuromorphic hardware. As an example, we show that this method produces neuromorphic agents that learn very efficiently from rewards. In particular, meta-plasticity, i.e., the optimization of the learning rule they use, substantially enhances the reward-based learning capability of the hardware. In addition, we demonstrate for the first time the benefits of Learning-to-Learn for such hardware, in particular the capability to extract abstract knowledge from prior learning experiences, which speeds up the learning of new but related tasks. Learning-to-Learn is especially suited for accelerated neuromorphic hardware, since such hardware makes it feasible to carry out the required very large number of network computations. |
Tasks | |
Published | 2019-03-15 |
URL | http://arxiv.org/abs/1903.06493v2 |
http://arxiv.org/pdf/1903.06493v2.pdf | |
PWC | https://paperswithcode.com/paper/neuromorphic-hardware-learns-to-learn |
Repo | |
Framework | |
Robust Local Features for Improving the Generalization of Adversarial Training
Title | Robust Local Features for Improving the Generalization of Adversarial Training |
Authors | Chuanbiao Song, Kun He, Jiadong Lin, Liwei Wang, John E. Hopcroft |
Abstract | Adversarial training has been demonstrated to be one of the most effective methods for training robust models to defend against adversarial examples. However, adversarially trained models often lack adversarially robust generalization on unseen testing data. Recent works show that adversarially trained models are more biased towards global structure features. In this work, we instead investigate the relationship between the generalization of adversarial training and robust local features, as robust local features generalize well under unseen shape variation. To learn robust local features, we develop a Random Block Shuffle (RBS) transformation that breaks up the global structure features of normal adversarial examples. We then propose a new approach called Robust Local Features for Adversarial Training (RLFAT), which first learns robust local features by adversarial training on RBS-transformed adversarial examples, and then transfers the robust local features into the training on normal adversarial examples. To demonstrate the generality of our argument, we implement RLFAT in current state-of-the-art adversarial training frameworks. Extensive experiments on STL-10, CIFAR-10 and CIFAR-100 show that RLFAT significantly improves both the adversarially robust generalization and the standard generalization of adversarial training. Additionally, we demonstrate that our models capture more local features of the object in the images, aligning better with human perception. |
Tasks | |
Published | 2019-09-23 |
URL | https://arxiv.org/abs/1909.10147v5 |
https://arxiv.org/pdf/1909.10147v5.pdf | |
PWC | https://paperswithcode.com/paper/190910147 |
Repo | |
Framework | |
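The RBS transformation described above is simple to sketch: partition the image into a k x k grid of blocks and permute the blocks uniformly at random, destroying global structure while preserving local patches. A minimal NumPy sketch (the block count k and the interface are assumptions, not the paper's exact implementation):

```python
import numpy as np

def random_block_shuffle(image, k=2, rng=None):
    """Split an H x W x C image into a k x k grid of blocks and
    shuffle the blocks uniformly at random (a sketch of the
    Random Block Shuffle idea; the block count k is an assumption)."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[0] // k, image.shape[1] // k
    # Collect the k*k blocks in row-major order.
    blocks = [image[i*h:(i+1)*h, j*w:(j+1)*w]
              for i in range(k) for j in range(k)]
    order = rng.permutation(len(blocks))
    out = image.copy()
    for dst, src in enumerate(order):
        i, j = divmod(dst, k)
        out[i*h:(i+1)*h, j*w:(j+1)*w] = blocks[src]
    return out
```

Because the blocks partition the image, the shuffled output contains exactly the same pixels, only rearranged at the block level.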
A Novel CMB Component Separation Method: Hierarchical Generalized Morphological Component Analysis
Title | A Novel CMB Component Separation Method: Hierarchical Generalized Morphological Component Analysis |
Authors | Sebastian Wagner-Carena, Max Hopkins, Ana Diaz Rivero, Cora Dvorkin |
Abstract | We present a novel technique for Cosmic Microwave Background (CMB) foreground subtraction based on the framework of blind source separation. Inspired by previous work incorporating local variation to Generalized Morphological Component Analysis (GMCA), we introduce Hierarchical GMCA (HGMCA), a Bayesian hierarchical framework for source separation. We test our method on $N_{\rm side}=256$ simulated sky maps that include dust, synchrotron, free-free and anomalous microwave emission, and show that HGMCA reduces foreground contamination by $25\%$ over GMCA in both the regions included and excluded by the Planck UT78 mask, decreases the error in the measurement of the CMB temperature power spectrum to the $0.02$-$0.03\%$ level at $\ell>200$ (and $<0.26\%$ for all $\ell$), and reduces correlation to all the foregrounds. We find equivalent or improved performance when compared to state-of-the-art Internal Linear Combination (ILC)-type algorithms on these simulations, suggesting that HGMCA may be a competitive alternative to foreground separation techniques previously applied to observed CMB data. Additionally, we show that our performance does not suffer when we perturb model parameters or alter the CMB realization, which suggests that our algorithm generalizes well beyond our simplified simulations. Our results open a new avenue for constructing CMB maps through Bayesian hierarchical analysis. |
Tasks | |
Published | 2019-10-17 |
URL | https://arxiv.org/abs/1910.08077v1 |
https://arxiv.org/pdf/1910.08077v1.pdf | |
PWC | https://paperswithcode.com/paper/a-novel-cmb-component-separation-method |
Repo | |
Framework | |
Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks
Title | Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks |
Authors | Noshaba Cheema, Somayeh Hosseini, Janis Sprenger, Erik Herrmann, Han Du, Klaus Fischer, Philipp Slusallek |
Abstract | Human motion capture data has been widely used in data-driven character animation. In order to generate realistic, natural-looking motions, most data-driven approaches require considerable efforts of pre-processing, including motion segmentation and annotation. Existing (semi-) automatic solutions either require hand-crafted features for motion segmentation or do not produce the semantic annotations required for motion synthesis and building large-scale motion databases. In addition, human labeled annotation data suffers from inter- and intra-labeler inconsistencies by design. We propose a semi-automatic framework for semantic segmentation of motion capture data based on supervised machine learning techniques. It first transforms a motion capture sequence into a “motion image” and applies a convolutional neural network for image segmentation. Dilated temporal convolutions enable the extraction of temporal information from a large receptive field. Our model outperforms two state-of-the-art models for action segmentation, as well as a popular network for sequence modeling. Most of all, our method is very robust under noisy and inaccurate training labels and thus can handle human errors during the labeling process. |
Tasks | action segmentation, Motion Capture, Motion Segmentation, Semantic Segmentation |
Published | 2019-03-02 |
URL | http://arxiv.org/abs/1903.00695v1 |
http://arxiv.org/pdf/1903.00695v1.pdf | |
PWC | https://paperswithcode.com/paper/fine-grained-semantic-segmentation-of-motion |
Repo | |
Framework | |
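The dilated temporal convolutions credited above enlarge the receptive field rapidly with depth: a stack of 1-D convolutions with kernel size k and dilations d_1, ..., d_L covers 1 + sum_i (k-1)*d_i frames of the "motion image". A minimal sketch of that calculation (the kernel size and dilation schedule below are illustrative, not the paper's configuration):

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in frames) of a stack of dilated 1-D convolutions,
    each layer i using the given kernel size with dilation dilations[i]."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Kernel size 3 with dilations 1, 2, 4, 8 covers 31 frames.
rf = receptive_field(3, [1, 2, 4, 8])
```

Doubling the dilation per layer thus grows the temporal context exponentially while the parameter count grows only linearly in depth.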
Multi-Scale Vector Quantization with Reconstruction Trees
Title | Multi-Scale Vector Quantization with Reconstruction Trees |
Authors | Enrico Cecini, Ernesto De Vito, Lorenzo Rosasco |
Abstract | We propose and study a multi-scale approach to vector quantization. We develop an algorithm, dubbed reconstruction trees, inspired by decision trees. Here the objective is parsimonious reconstruction of unsupervised data, rather than classification. In contrast to more standard vector quantization methods, such as K-means, the proposed approach leverages a family of given partitions to quickly explore the data in a coarse-to-fine (multi-scale) fashion. Our main technical contribution is an analysis of the expected distortion achieved by the proposed algorithm, when the data are assumed to be sampled from a fixed unknown distribution. In this context, we derive both asymptotic and finite sample results under suitable regularity assumptions on the distribution. As a special case, we consider the setting where the data generating distribution is supported on a compact Riemannian sub-manifold. Tools from differential geometry and concentration of measure are useful in our analysis. |
Tasks | Quantization |
Published | 2019-07-08 |
URL | https://arxiv.org/abs/1907.03875v2 |
https://arxiv.org/pdf/1907.03875v2.pdf | |
PWC | https://paperswithcode.com/paper/multi-scale-vector-quantization-with |
Repo | |
Framework | |
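The coarse-to-fine idea can be sketched with a simple recursive partition quantizer: split each cell, represent each leaf by its mean, and let the codebook refine with depth. The median split along the widest coordinate below is an illustrative choice; the paper analyzes a generic family of given partitions rather than this particular rule:

```python
import numpy as np

def reconstruction_tree(X, depth):
    """Sketch of a multi-scale quantizer: recursively split the data at
    the median of its widest coordinate and represent each leaf cell by
    its mean. The split rule is an assumption for illustration."""
    if depth == 0 or len(X) <= 1:
        return [X.mean(axis=0)]
    dim = np.argmax(X.max(axis=0) - X.min(axis=0))  # widest coordinate
    med = np.median(X[:, dim])
    left, right = X[X[:, dim] <= med], X[X[:, dim] > med]
    if len(left) == 0 or len(right) == 0:  # degenerate split: stop here
        return [X.mean(axis=0)]
    return (reconstruction_tree(left, depth - 1)
            + reconstruction_tree(right, depth - 1))
```

At depth L the codebook has at most 2^L centers, so truncating the recursion earlier gives the coarser scales of the same hierarchy.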
Clinical acceptance of software based on artificial intelligence technologies (radiology)
Title | Clinical acceptance of software based on artificial intelligence technologies (radiology) |
Authors | S. P. Morozov, A. V. Vladzymyrskyy, V. G. Klyashtornyy, A. E. Andreychenko, N. S. Kulberg, V. A. Gombolevsky, K. A. Sergunova |
Abstract | Aim: provide a methodological framework for the process of clinical tests, clinical acceptance, and scientific assessment of algorithms and software based on the artificial intelligence (AI) technologies. Clinical tests are considered as a preparation stage for the software registration as a medical product. The authors propose approaches to evaluate accuracy and efficiency of the AI algorithms for radiology. |
Tasks | |
Published | 2019-08-01 |
URL | https://arxiv.org/abs/1908.00381v2 |
https://arxiv.org/pdf/1908.00381v2.pdf | |
PWC | https://paperswithcode.com/paper/clinical-acceptance-of-software-based-on |
Repo | |
Framework | |
Rule Extraction in Unsupervised Anomaly Detection for Model Explainability: Application to OneClass SVM
Title | Rule Extraction in Unsupervised Anomaly Detection for Model Explainability: Application to OneClass SVM |
Authors | Alberto Barbado, Óscar Corcho |
Abstract | OneClass SVM is a popular method for unsupervised anomaly detection. Like many other methods, it suffers from the \textit{black box} problem: it is difficult to justify, in an intuitive and simple manner, why the decision frontier identifies data points as anomalous or non-anomalous. This type of problem has been widely addressed for supervised models, but it remains an uncharted area for unsupervised learning. In this paper, we evaluate some of the most important rule extraction techniques over OneClass SVM models and present alternative designs for some of those XAI algorithms. In addition, we propose algorithms to compute XAI-related metrics for the “comprehensibility”, “representativeness”, “stability” and “diversity” of the extracted rules. We evaluate our proposals with different datasets, including real-world data coming from industry. With this, our proposal contributes to extending Explainable AI techniques to unsupervised machine learning models. |
Tasks | Anomaly Detection, Unsupervised Anomaly Detection |
Published | 2019-11-21 |
URL | https://arxiv.org/abs/1911.09315v2 |
https://arxiv.org/pdf/1911.09315v2.pdf | |
PWC | https://paperswithcode.com/paper/rule-extraction-in-unsupervised-anomaly |
Repo | |
Framework | |
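As an illustration of what rule extraction over an anomaly detector produces, the sketch below summarizes the points a fitted detector accepts as non-anomalous with per-feature interval bounds, i.e. one hyper-rectangle rule. This is a toy stand-in, not one of the algorithms evaluated in the paper; the detector's decisions (e.g. from a OneClass SVM) are assumed to be given as a boolean mask:

```python
import numpy as np

def box_rules(X, inlier_mask):
    """Summarize the accepted (non-anomalous) points with one
    min/max bound per feature. `inlier_mask` is assumed to come
    from an already-fitted anomaly detector."""
    inliers = X[inlier_mask]
    return [(f"x{j}", inliers[:, j].min(), inliers[:, j].max())
            for j in range(X.shape[1])]
```

A point satisfying every interval would then be explained as non-anomalous by the rule "x0 in [lo0, hi0] and x1 in [lo1, hi1] and ..."; the techniques in the paper refine and score such rules rather than taking the raw bounding box.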
Learned Interpolation for 3D Generation
Title | Learned Interpolation for 3D Generation |
Authors | Austin Dill, Songwei Ge, Eunsu Kang, Chun-Liang Li, Barnabas Poczos |
Abstract | In order to generate novel 3D shapes with machine learning, one must allow for interpolation. The typical approach for incorporating this creative process is to interpolate in a learned latent space so as to avoid the problem of generating unrealistic instances by exploiting the model’s learned structure. The process of the interpolation is supposed to form a semantically smooth morphing. While this approach is sound for synthesizing realistic media such as lifelike portraits or new designs for everyday objects, it subjectively fails to directly model the unexpected, unrealistic, or creative. In this work, we present a method for learning how to interpolate point clouds. By encoding prior knowledge about real-world objects, the intermediate forms are both realistic and unlike any existing forms. We show not only how this method can be used to generate “creative” point clouds, but how the method can also be leveraged to generate 3D models suitable for sculpture. |
Tasks | |
Published | 2019-12-08 |
URL | https://arxiv.org/abs/1912.10787v2 |
https://arxiv.org/pdf/1912.10787v2.pdf | |
PWC | https://paperswithcode.com/paper/learned-interpolation-for-3d-generation |
Repo | |
Framework | |
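The core operation, interpolating in a learned latent space, reduces to convex combinations of two latent codes; each intermediate code is then decoded into a point cloud. A sketch of just the interpolation step (the encoder and decoder networks are assumed and not shown):

```python
import numpy as np

def latent_interpolation(z_a, z_b, steps):
    """Linear interpolation between two latent codes. Decoding each
    intermediate code with the (assumed) decoder yields the morphing
    sequence between the two shapes."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * z_a + t * z_b for t in ts]
```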
Uninformed Students: Student-Teacher Anomaly Detection with Discriminative Latent Embeddings
Title | Uninformed Students: Student-Teacher Anomaly Detection with Discriminative Latent Embeddings |
Authors | Paul Bergmann, Michael Fauser, David Sattlegger, Carsten Steger |
Abstract | We introduce a powerful student-teacher framework for the challenging problem of unsupervised anomaly detection and pixel-precise anomaly segmentation in high-resolution images. Student networks are trained to regress the output of a descriptive teacher network that was pretrained on a large dataset of patches from natural images. This circumvents the need for prior data annotation. Anomalies are detected when the outputs of the student networks differ from that of the teacher network. This happens when they fail to generalize outside the manifold of anomaly-free training data. The intrinsic uncertainty in the student networks is used as an additional scoring function that indicates anomalies. We compare our method to a large number of existing deep learning based methods for unsupervised anomaly detection. Our experiments demonstrate improvements over state-of-the-art methods on a number of real-world datasets, including the recently introduced MVTec Anomaly Detection dataset that was specifically designed to benchmark anomaly segmentation algorithms. |
Tasks | Anomaly Detection, Unsupervised Anomaly Detection |
Published | 2019-11-06 |
URL | https://arxiv.org/abs/1911.02357v2 |
https://arxiv.org/pdf/1911.02357v2.pdf | |
PWC | https://paperswithcode.com/paper/uninformed-students-student-teacher-anomaly |
Repo | |
Framework | |
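The scoring rule described above combines the students' regression error against the teacher with their predictive variance. A sketch on precomputed descriptor maps (the shapes and the plain additive combination are assumptions; the paper normalizes and combines the two scores more carefully):

```python
import numpy as np

def anomaly_score(teacher_out, student_outs):
    """Per-location anomaly score, sketching the student-teacher idea:
    regression error of the student mean against the teacher descriptor,
    plus the variance across students (their 'intrinsic uncertainty')."""
    students = np.stack(student_outs)              # (S, ..., D) descriptors
    mean = students.mean(axis=0)
    err = ((mean - teacher_out) ** 2).sum(axis=-1)  # regression error
    var = students.var(axis=0).sum(axis=-1)         # ensemble uncertainty
    return err + var
```

On anomaly-free data the students have learned to match the teacher, so both terms are small; outside that manifold they disagree with the teacher and with each other, and the score rises.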
How Well Do WGANs Estimate the Wasserstein Metric?
Title | How Well Do WGANs Estimate the Wasserstein Metric? |
Authors | Anton Mallasto, Guido Montúfar, Augusto Gerolin |
Abstract | Generative modelling is often cast as minimizing a similarity measure between a data distribution and a model distribution. Recently, a popular choice for the similarity measure has been the Wasserstein metric, which can be expressed in the Kantorovich duality formulation as the optimum difference of the expected values of a potential function under the real data distribution and the model hypothesis. In practice, the potential is approximated with a neural network and is called the discriminator. Duality constraints on the function class of the discriminator are enforced approximately, and the expectations are estimated from samples. This gives at least three sources of errors: the approximated discriminator and constraints, the estimation of the expectation value, and the optimization required to find the optimal potential. In this work, we study how well the methods used in generative adversarial networks to approximate the Wasserstein metric perform. We consider, in particular, the $c$-transform formulation, which eliminates the need to enforce the constraints explicitly. We demonstrate that the $c$-transform allows for a more accurate estimation of the true Wasserstein metric from samples, but surprisingly, does not perform the best in the generative setting. |
Tasks | |
Published | 2019-10-09 |
URL | https://arxiv.org/abs/1910.03875v1 |
https://arxiv.org/pdf/1910.03875v1.pdf | |
PWC | https://paperswithcode.com/paper/how-well-do-wgans-estimate-the-wasserstein |
Repo | |
Framework | |
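The $c$-transform formulation discussed above can be estimated directly from samples: given potential values phi(x_i) at samples of one distribution, the transform is phi^c(y_j) = min_i [ c(x_i, y_j) - phi(x_i) ], and the dual objective is mean phi + mean phi^c, with no explicit constraint on phi. A sketch (in the paper phi is a discriminator network; here it is just an array of values at the samples):

```python
import numpy as np

def c_transform_estimate(x, y, phi_x, c):
    """Empirical Kantorovich-dual estimate using the c-transform:
    phi^c(y_j) = min_i [ c(x_i, y_j) - phi(x_i) ], and the metric
    estimate is mean phi(x) + mean phi^c(y)."""
    cost = np.array([[c(xi, yj) for yj in y] for xi in x])  # (n, m) costs
    phi_c_y = (cost - phi_x[:, None]).min(axis=0)           # c-transform
    return phi_x.mean() + phi_c_y.mean()
```

With identical samples and the zero potential the estimate is zero, as expected for identical distributions; in training, phi would be optimized to maximize this quantity.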
Deep Weakly-supervised Anomaly Detection
Title | Deep Weakly-supervised Anomaly Detection |
Authors | Guansong Pang, Chunhua Shen, Huidong Jin, Anton van den Hengel |
Abstract | Anomaly detection is typically posited as an unsupervised learning task in the literature due to the prohibitive cost and difficulty of obtaining large-scale labeled anomaly data, but this ignores the fact that a very small number (e.g., a few dozen) of labeled anomalies can often be made available with small/trivial cost in many real-world anomaly detection applications. To leverage such labeled anomaly data, we study an important anomaly detection problem termed weakly-supervised anomaly detection, in which, in addition to a large amount of unlabeled data, a limited number of labeled anomalies are available during modeling. Learning with the small labeled anomaly data enables anomaly-informed modeling, which helps identify anomalies of interest and address the notorious high false positives in unsupervised anomaly detection. However, the problem is especially challenging, since (i) the limited amount of labeled anomaly data often, if not always, cannot cover all types of anomalies and (ii) the unlabeled data is often dominated by normal instances but has anomaly contamination. We address the problem by formulating it as a pairwise relation prediction task. Particularly, our approach defines a two-stream ordinal regression neural network to learn the relation of randomly sampled instance pairs, i.e., whether the instance pair contains two labeled anomalies, one labeled anomaly, or just unlabeled data instances. The resulting model effectively leverages both the labeled and unlabeled data to substantially augment the training data and learn well-generalized representations of normality and abnormality. Comprehensive empirical results on 40 real-world datasets show that our approach (i) significantly outperforms four state-of-the-art methods in detecting both known and previously unseen anomalies and (ii) is substantially more data-efficient. |
Tasks | Anomaly Detection, Unsupervised Anomaly Detection |
Published | 2019-10-30 |
URL | https://arxiv.org/abs/1910.13601v2 |
https://arxiv.org/pdf/1910.13601v2.pdf | |
PWC | https://paperswithcode.com/paper/weakly-supervised-deep-anomaly-detection-with |
Repo | |
Framework | |
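The pairwise relation task above turns a handful of weak labels into abundant training pairs: each randomly sampled pair is labeled by how many labeled anomalies it contains (two, one, or none), which is the ordinal target of the two-stream network. A sketch of the pair sampler (the interface is an assumption; the paper also controls the sampling proportions of the three pair types):

```python
import random

def sample_pair(anomalies, unlabeled, rng=random):
    """Draw one random instance pair and its ordinal label:
    2 if both members are labeled anomalies, 1 if one is,
    0 if both come from the unlabeled pool."""
    pool = [(a, 1) for a in anomalies] + [(u, 0) for u in unlabeled]
    (x1, l1), (x2, l2) = rng.choice(pool), rng.choice(pool)
    return (x1, x2), l1 + l2
```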
Analyzing Data Selection Techniques with Tools from the Theory of Information Losses
Title | Analyzing Data Selection Techniques with Tools from the Theory of Information Losses |
Authors | Brandon Foggo, Nanpeng Yu |
Abstract | In this paper, we present and illustrate some new tools for rigorously analyzing training data selection methods. These tools focus on the information theoretic losses that occur when sampling data. We use this framework to prove that two methods, Facility Location Selection and Transductive Experimental Design, reduce these losses. These are meant to act as generalizable theoretical examples of applying the field of Information Theoretic Deep Learning Theory to the fields of data selection and active learning. Both analyses yield insight into their respective methods and increase their interpretability. In the case of Transductive Experimental Design, the provided analysis greatly increases the method’s scope as well. |
Tasks | Active Learning |
Published | 2019-02-25 |
URL | https://arxiv.org/abs/1902.09602v3 |
https://arxiv.org/pdf/1902.09602v3.pdf | |
PWC | https://paperswithcode.com/paper/interpreting-active-learning-methods-through |
Repo | |
Framework | |
Minimax experimental design: Bridging the gap between statistical and worst-case approaches to least squares regression
Title | Minimax experimental design: Bridging the gap between statistical and worst-case approaches to least squares regression |
Authors | Michał Dereziński, Kenneth L. Clarkson, Michael W. Mahoney, Manfred K. Warmuth |
Abstract | In experimental design, we are given a large collection of vectors, each with a hidden response value that we assume derives from an underlying linear model, and we wish to pick a small subset of the vectors such that querying the corresponding responses will lead to a good estimator of the model. A classical approach in statistics is to assume the responses are linear, plus zero-mean i.i.d. Gaussian noise, in which case the goal is to provide an unbiased estimator with smallest mean squared error (A-optimal design). A related approach, more common in computer science, is to assume the responses are arbitrary but fixed, in which case the goal is to estimate the least squares solution using few responses, as quickly as possible, for worst-case inputs. Despite many attempts, characterizing the relationship between these two approaches has proven elusive. We address this by proposing a framework for experimental design where the responses are produced by an arbitrary unknown distribution. We show that there is an efficient randomized experimental design procedure that achieves strong variance bounds for an unbiased estimator using few responses in this general model. Nearly tight bounds for the classical A-optimality criterion, as well as improved bounds for worst-case responses, emerge as special cases of this result. In the process, we develop a new algorithm for a joint sampling distribution called volume sampling, and we propose a new i.i.d. importance sampling method: inverse score sampling. A key novelty of our analysis is in developing new expected error bounds for worst-case regression by controlling the tail behavior of i.i.d. sampling via the jointness of volume sampling. Our result motivates a new minimax-optimality criterion for experimental design which can be viewed as an extension of both A-optimal design and sampling for worst-case regression. |
Tasks | |
Published | 2019-02-04 |
URL | http://arxiv.org/abs/1902.00995v1 |
http://arxiv.org/pdf/1902.00995v1.pdf | |
PWC | https://paperswithcode.com/paper/minimax-experimental-design-bridging-the-gap |
Repo | |
Framework | |
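Score-based importance sampling of the kind the abstract mentions starts from the statistical leverage scores of the design-matrix rows; the paper's "inverse score sampling" reweights these. A sketch of the standard leverage-score computation via a QR factorization (shown for context, not as the paper's exact procedure):

```python
import numpy as np

def leverage_scores(A):
    """Statistical leverage scores of the rows of a tall matrix A:
    squared row norms of an orthonormal basis for A's column space.
    Each score lies in [0, 1] and they sum to rank(A)."""
    Q, _ = np.linalg.qr(A)        # reduced QR: Q has orthonormal columns
    return (Q ** 2).sum(axis=1)   # squared row norms of Q
```

Sampling rows with probability proportional to these scores concentrates the sample on the most influential observations, which is why score-based schemes give strong worst-case guarantees for least squares.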