April 2, 2020

3517 words 17 mins read

Paper Group ANR 233

Unsupervised Pidgin Text Generation By Pivoting English Data and Self-Training. Coronary Wall Segmentation in CCTA Scans via a Hybrid Net with Contours Regularization. Extracting dispersion curves from ambient noise correlations using deep learning. Differentiating through the Fréchet Mean. Classification of Chest Diseases using Wavelet Transforms …

Unsupervised Pidgin Text Generation By Pivoting English Data and Self-Training

Title Unsupervised Pidgin Text Generation By Pivoting English Data and Self-Training
Authors Ernie Chang, David Ifeoluwa Adelani, Xiaoyu Shen, Vera Demberg
Abstract West African Pidgin English is a language widely spoken in West Africa, with at least 75 million speakers. Nevertheless, proper machine translation systems and relevant NLP datasets for Pidgin English are virtually absent. In this work, we develop techniques targeted at bridging the gap between Pidgin English and English in the context of natural language generation. As a proof of concept, we explore the proposed techniques in the area of data-to-text generation. By building upon the previously released monolingual Pidgin English text and parallel English data-to-text corpus, we hope to build a system that can automatically generate Pidgin English descriptions from structured data. We first train a data-to-English text generation system, before employing techniques in unsupervised neural machine translation and self-training to establish the Pidgin-to-English cross-lingual alignment. The human evaluation performed on the generated Pidgin texts shows that, though still far from being practically usable, the pivoting + self-training technique improves both Pidgin text fluency and relevance.
Tasks Data-to-Text Generation, Machine Translation, Text Generation
Published 2020-03-18
URL https://arxiv.org/abs/2003.08272v1
PDF https://arxiv.org/pdf/2003.08272v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-pidgin-text-generation-by
Repo
Framework
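As a rough, hypothetical sketch of the self-training loop the abstract describes (not the authors' implementation; `translate_fn` and `train_fn` stand in for the underlying NMT model's decoding and update steps):

```python
def self_training_rounds(model, monolingual_pidgin, translate_fn, train_fn, rounds=3):
    """One possible shape of the loop: the current model translates monolingual
    Pidgin into English, and the resulting silver (pidgin, english) pairs are
    used to retrain the model for the next round."""
    for _ in range(rounds):
        silver = [(src, translate_fn(model, src)) for src in monolingual_pidgin]
        # crude quality filter: keep pairs with plausible length ratios
        silver = [(s, t) for s, t in silver
                  if 0.5 <= len(t.split()) / max(len(s.split()), 1) <= 2.0]
        model = train_fn(model, silver)
    return model
```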

Coronary Wall Segmentation in CCTA Scans via a Hybrid Net with Contours Regularization

Title Coronary Wall Segmentation in CCTA Scans via a Hybrid Net with Contours Regularization
Authors Kaikai Huang, Antonio Tejero-de-Pablos, Hiroaki Yamane, Yusuke Kurose, Junichi Iho, Youji Tokunaga, Makoto Horie, Keisuke Nishizawa, Yusaku Hayashi, Yasushi Koyama, Tatsuya Harada
Abstract Providing closed and well-connected boundaries of the coronary artery is essential to assist cardiologists in the diagnosis of coronary artery disease (CAD). Recently, several deep learning-based methods have been proposed for boundary detection and segmentation in medical images. However, when applied to coronary wall detection, they tend to produce disconnected and inaccurate boundaries. In this paper, we propose a novel boundary detection method for coronary arteries that focuses on the continuity and connectivity of the boundaries. In order to model the spatial continuity of consecutive images, our hybrid architecture takes a volume (i.e., a segment of the coronary artery) as input and detects the boundary of the target slice (i.e., the central slice of the segment). Then, to ensure closed boundaries, we propose a contour-constrained weighted Hausdorff distance loss. We evaluate our method on a dataset of coronary CT angiography scans with curved planar reconstruction (CCTA-CPR) of the arteries (i.e., cross-sections) from 34 patients. Experimental results show that our method can produce smooth, closed boundaries, outperforming the state of the art in accuracy.
Tasks Boundary Detection
Published 2020-02-27
URL https://arxiv.org/abs/2002.12263v1
PDF https://arxiv.org/pdf/2002.12263v1.pdf
PWC https://paperswithcode.com/paper/coronary-wall-segmentation-in-ccta-scans-via
Repo
Framework
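For intuition, here is a minimal PyTorch sketch of a weighted Hausdorff distance between a boundary probability map and ground-truth contour coordinates, in the spirit of Ribera et al.; it is a simplified stand-in rather than the paper's contour-constrained loss, and the generalized-mean exponent `alpha` is an assumed default.

```python
import torch

def weighted_hausdorff_loss(prob_map, gt_points, alpha=-1.0, eps=1e-6):
    """prob_map: (H, W) boundary probabilities; gt_points: (N, 2) pixel coords."""
    h, w = prob_map.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack([ys, xs], dim=-1).float().view(-1, 2)   # (H*W, 2)
    p = prob_map.reshape(-1)                                     # (H*W,)
    d = torch.cdist(coords, gt_points.float())                   # (H*W, N)
    d_max = float(max(h, w))
    # term 1: confident pixels should lie near some ground-truth contour point
    term1 = (p * d.min(dim=1).values).sum() / (p.sum() + eps)
    # term 2: every contour point should be covered by a confident pixel
    # (a generalized mean with negative alpha acts as a soft minimum over pixels)
    m = (p.unsqueeze(1) * d + (1.0 - p.unsqueeze(1)) * d_max).clamp(min=eps)
    term2 = ((m ** alpha).mean(dim=0)) ** (1.0 / alpha)          # (N,)
    return term1 + term2.mean()
```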

Extracting dispersion curves from ambient noise correlations using deep learning

Title Extracting dispersion curves from ambient noise correlations using deep learning
Authors Xiaotian Zhang, Zhe Jia, Zachary E. Ross, Robert W. Clayton
Abstract We present a machine-learning approach to classifying the phases of surface wave dispersion curves. Standard FTAN analysis of surface waves observed on an array of receivers is converted to an image, in which each pixel is classified as fundamental mode, first overtone, or noise. We use a convolutional neural network (U-Net) architecture with a supervised learning objective and incorporate transfer learning. The training is initially performed with synthetic data to learn coarse structure, followed by fine-tuning of the network using approximately 10% of the real data based on human classification. The results show that the machine classification is nearly identical to the human-picked phases. Expanding the method to process multiple images at once did not improve the performance. The developed technique will facilitate automated processing of large dispersion curve datasets.
Tasks Transfer Learning
Published 2020-02-05
URL https://arxiv.org/abs/2002.02040v1
PDF https://arxiv.org/pdf/2002.02040v1.pdf
PWC https://paperswithcode.com/paper/extracting-dispersion-curves-from-ambient
Repo
Framework
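A minimal sketch of the transfer-learning step described above, assuming a generic PyTorch U-Net whose encoder parameter names start with `encoder` (an assumption); the network is pretrained on synthetic data elsewhere and only fine-tuned here on the small human-labeled subset:

```python
import torch
import torch.nn as nn

def fine_tune(unet: nn.Module, real_loader, epochs=10, lr=1e-4, freeze_prefix="encoder"):
    """Freeze the pretrained encoder and adapt the rest on ~10% of real,
    human-labeled spectra with 3 classes: fundamental mode, first overtone, noise."""
    for name, param in unet.named_parameters():
        param.requires_grad = not name.startswith(freeze_prefix)
    opt = torch.optim.Adam([p for p in unet.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, masks in real_loader:        # masks: (B, H, W) class ids
            opt.zero_grad()
            loss = loss_fn(unet(images), masks)  # logits: (B, 3, H, W)
            loss.backward()
            opt.step()
    return unet
```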

Differentiating through the Fréchet Mean

Title Differentiating through the Fréchet Mean
Authors Aaron Lou, Isay Katsman, Qingxuan Jiang, Serge Belongie, Ser-Nam Lim, Christopher De Sa
Abstract Recent advances in deep representation learning on Riemannian manifolds extend classical deep learning operations to better capture the geometry of the manifold. One possible extension is the Fréchet mean, the generalization of the Euclidean mean; however, it has been difficult to apply because it lacks a closed form with an easily computable derivative. In this paper, we show how to differentiate through the Fréchet mean for arbitrary Riemannian manifolds. Then, focusing on hyperbolic space, we derive explicit gradient expressions and a fast, accurate, and hyperparameter-free Fréchet mean solver. This fully integrates the Fréchet mean into the hyperbolic neural network pipeline. To demonstrate this integration, we present two case studies. First, we apply our Fréchet mean to the existing Hyperbolic Graph Convolutional Network, replacing its projected aggregation to obtain state-of-the-art results on datasets with high hyperbolicity. Second, to demonstrate the Fréchet mean's capacity to generalize Euclidean neural network operations, we develop a hyperbolic batch normalization method that gives an improvement parallel to the one observed in the Euclidean setting.
Tasks Representation Learning
Published 2020-02-29
URL https://arxiv.org/abs/2003.00335v2
PDF https://arxiv.org/pdf/2003.00335v2.pdf
PWC https://paperswithcode.com/paper/differentiating-through-the-frechet-mean
Repo
Framework
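For reference, the (weighted) Fréchet mean the paper differentiates through is the minimizer of the weighted sum of squared geodesic distances; on Euclidean space it reduces to the ordinary weighted average:

```latex
\mu_{\mathrm{fr}}(x_1,\dots,x_n; w) \;=\; \operatorname*{arg\,min}_{y \in \mathcal{M}} \; \sum_{i=1}^{n} w_i \, d_{\mathcal{M}}(x_i, y)^{2},
\qquad
\mathcal{M} = \mathbb{R}^{d} \;\Rightarrow\; \mu_{\mathrm{fr}} = \frac{\sum_i w_i x_i}{\sum_i w_i}.
```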

Classification of Chest Diseases using Wavelet Transforms and Transfer Learning

Title Classification of Chest Diseases using Wavelet Transforms and Transfer Learning
Authors Ahmed Rasheed, Muhammad Shahzad Younis, Muhammad Bilal, Maha Rasheed
Abstract Chest X-ray is one of the modalities most often used by radiologists to diagnose many chest-related diseases in their initial stages. The proposed system helps radiologists make decisions about the diseases found in the scans more efficiently. Our system combines image-processing techniques for feature enhancement with deep learning for classification among diseases. We used the ChestX-ray14 database to train our deep learning model on the 14 different labeled diseases found in it. The proposed research shows a significant improvement in the results when wavelet transforms are used as a pre-processing technique.
Tasks Transfer Learning
Published 2020-02-03
URL https://arxiv.org/abs/2002.00625v1
PDF https://arxiv.org/pdf/2002.00625v1.pdf
PWC https://paperswithcode.com/paper/classification-of-chest-diseases-using
Repo
Framework
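As an illustration of wavelet-based pre-processing (the exact enhancement scheme is an assumption, not taken from the paper), a single-level 2D wavelet decomposition with PyWavelets, detail-band boosting, and reconstruction might look like this:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_enhance(xray: np.ndarray, wavelet: str = "haar", boost: float = 1.5) -> np.ndarray:
    """Decompose the X-ray, amplify the detail (edge) bands, and reconstruct
    an enhanced image to feed the transfer-learned classifier."""
    cA, (cH, cV, cD) = pywt.dwt2(xray.astype(float), wavelet)
    return pywt.idwt2((cA, (boost * cH, boost * cV, boost * cD)), wavelet)
```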

Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor

Title Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor
Authors Sevegni Odilon Clement Allognon, Alessandro L. Koerich, Alceu de S. Britto Jr
Abstract Automatic facial expression recognition is an important research area in emotion recognition and computer vision. Applications can be found in several domains such as medical treatment, driver fatigue surveillance, sociable robotics, and several other human-computer interaction systems. Therefore, it is crucial that the machine be able to recognize the emotional state of the user with high accuracy. In recent years, deep neural networks have been used with great success in recognizing emotions. In this paper, we present a new model for continuous emotion recognition based on facial expression recognition, using an unsupervised learning approach based on transfer learning and autoencoders. The proposed approach also includes preprocessing and post-processing techniques which contribute favorably to improving the prediction of the concordance correlation coefficient (CCC) for the arousal and valence dimensions. Experimental results for predicting spontaneous and natural emotions on the RECOLA 2016 dataset show that the proposed approach, based on visual information, can achieve CCCs of 0.516 and 0.264 for valence and arousal, respectively.
Tasks Emotion Recognition, Facial Expression Recognition, Transfer Learning
Published 2020-01-31
URL https://arxiv.org/abs/2001.11976v1
PDF https://arxiv.org/pdf/2001.11976v1.pdf
PWC https://paperswithcode.com/paper/continuous-emotion-recognition-via-deep
Repo
Framework
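The concordance correlation coefficient (CCC) reported above is a standard agreement measure; a small NumPy helper computing it:

```python
import numpy as np

def concordance_cc(pred: np.ndarray, gold: np.ndarray) -> float:
    """CCC = 2*cov(pred, gold) / (var(pred) + var(gold) + (mean difference)^2)."""
    pm, gm = pred.mean(), gold.mean()
    cov = ((pred - pm) * (gold - gm)).mean()
    return 2.0 * cov / (pred.var() + gold.var() + (pm - gm) ** 2)
```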

Learning to Encode Position for Transformer with Continuous Dynamical Model

Title Learning to Encode Position for Transformer with Continuous Dynamical Model
Authors Xuanqing Liu, Hsiang-Fu Yu, Inderjit Dhillon, Cho-Jui Hsieh
Abstract We introduce a new way of learning to encode position information for non-recurrent models, such as Transformer models. Unlike RNNs and LSTMs, which carry an inductive bias by consuming the input tokens sequentially, non-recurrent models are less sensitive to position. The main reason is that position information among input units is not inherently encoded, i.e., the models are permutation equivariant; this explains why all of the existing models are accompanied by a sinusoidal encoding/embedding layer at the input. However, this solution has clear limitations: the sinusoidal encoding is not flexible enough as it is manually designed and does not contain any learnable parameters, whereas the position embedding restricts the maximum length of input sequences. It is thus desirable to design a new position layer that contains learnable parameters to adjust to different datasets and different architectures. At the same time, we would also like the encodings to extrapolate in accordance with the variable length of inputs. In our proposed solution, we borrow from the recent Neural ODE approach, which may be viewed as a versatile continuous version of a ResNet. This model is capable of modeling many kinds of dynamical systems. We model the evolution of the encoded results along the position index with such a dynamical system, thereby overcoming the above limitations of existing methods. We evaluate our new position layers on a variety of neural machine translation and language understanding tasks; the experimental results show consistent improvements over the baselines.
Tasks Machine Translation
Published 2020-03-13
URL https://arxiv.org/abs/2003.09229v1
PDF https://arxiv.org/pdf/2003.09229v1.pdf
PWC https://paperswithcode.com/paper/learning-to-encode-position-for-transformer
Repo
Framework
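To make the idea concrete, here is a hypothetical sketch of a position encoder whose state evolves along the position index under a learned dynamics; it uses a simple Euler discretization rather than a full Neural ODE solver, so it illustrates the concept rather than the paper's method:

```python
import torch
import torch.nn as nn

class ODEPositionEncoder(nn.Module):
    """Roll a learnable initial encoding forward along the position index."""
    def __init__(self, d_model: int, step: float = 0.1):
        super().__init__()
        self.dynamics = nn.Sequential(nn.Linear(d_model, d_model), nn.Tanh(),
                                      nn.Linear(d_model, d_model))
        self.h0 = nn.Parameter(torch.zeros(d_model))
        self.step = step

    def forward(self, seq_len: int) -> torch.Tensor:
        h, encodings = self.h0, []
        for _ in range(seq_len):               # extrapolates to any length
            encodings.append(h)
            h = h + self.step * self.dynamics(h)
        return torch.stack(encodings)          # (seq_len, d_model)
```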

Capturing document context inside sentence-level neural machine translation models with self-training

Title Capturing document context inside sentence-level neural machine translation models with self-training
Authors Elman Mansimov, Gábor Melis, Lei Yu
Abstract Neural machine translation (NMT) has arguably achieved human-level parity when trained and evaluated at the sentence level. Document-level neural machine translation has received less attention and lags behind its sentence-level counterpart. The majority of the proposed document-level approaches investigate ways of conditioning the model on several source or target sentences to capture document context. These approaches require training a specialized NMT model from scratch on parallel document-level corpora. We propose an approach that does not require training a specialized model on parallel document-level corpora and is applied to a trained sentence-level NMT model at decoding time. We process the document from left to right multiple times and self-train the sentence-level model on pairs of source sentences and generated translations. Our approach reinforces the choices made by the model, thus making it more likely that the same choices will be made in other sentences in the document. We evaluate our approach on three document-level datasets: NIST Chinese-English, WMT'19 Chinese-English, and OpenSubtitles English-Russian. We demonstrate that our approach achieves a higher BLEU score and higher human preference than the baseline. Qualitative analysis shows that the choices made by the model are consistent across the document.
Tasks Machine Translation
Published 2020-03-11
URL https://arxiv.org/abs/2003.05259v1
PDF https://arxiv.org/pdf/2003.05259v1.pdf
PWC https://paperswithcode.com/paper/capturing-document-context-inside-sentence
Repo
Framework
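A rough sketch of the decode-time procedure as we read the abstract (`translate_fn` and `finetune_fn` are placeholders for the sentence-level model's decoding and update steps, not the authors' code):

```python
def document_self_train(model, doc_sentences, translate_fn, finetune_fn, passes=2):
    """Translate the document left to right, self-train on the model's own
    (source, translation) pairs, and repeat so earlier choices are reinforced."""
    translations = []
    for _ in range(passes):
        translations = []
        for src in doc_sentences:                 # left-to-right pass
            hyp = translate_fn(model, src)
            translations.append(hyp)
            finetune_fn(model, [(src, hyp)])      # reinforce the choices made
    return translations
```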

Grammar Filtering For Syntax-Guided Synthesis

Title Grammar Filtering For Syntax-Guided Synthesis
Authors Kairo Morton, William Hallahan, Elven Shum, Ruzica Piskac, Mark Santolucito
Abstract Programming-by-example (PBE) is a synthesis paradigm that allows users to generate functions by simply providing input-output examples. While a promising interaction paradigm, synthesis is still too slow for real-time interaction and more widespread adoption. Existing approaches to PBE synthesis have used automated reasoning tools, such as SMT solvers, as well as approaches applying machine learning techniques. At its core, the automated reasoning approach relies on highly domain-specific knowledge of programming languages. On the other hand, the machine learning approaches exploit the fact that, when working with program code, it is possible to generate arbitrarily large training datasets. In this work, we propose a system for using machine learning in tandem with automated reasoning techniques to solve Syntax-Guided Synthesis (SyGuS) style PBE problems. By preprocessing SyGuS PBE problems with a neural network, we can use a data-driven approach to reduce the size of the search space, then allow automated reasoning-based solvers to more quickly find a solution analytically. Our system is able to run atop existing SyGuS PBE synthesis tools, decreasing the runtime of the winner of the 2019 SyGuS Competition for the PBE Strings track by 47.65% to outperform all of the competing tools.
Tasks
Published 2020-02-07
URL https://arxiv.org/abs/2002.02884v1
PDF https://arxiv.org/pdf/2002.02884v1.pdf
PWC https://paperswithcode.com/paper/grammar-filtering-for-syntax-guided-synthesis
Repo
Framework
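Conceptually, the pre-processing step can be pictured as pruning grammar productions that a learned model scores as unlikely to be needed before handing the problem to the SyGuS solver; `score_fn` below is a placeholder for such a model, not part of the described system:

```python
def filter_grammar(problem, grammar_rules, score_fn, keep_threshold=0.5):
    """Keep only productions the model considers useful for this PBE problem;
    fall back to the full grammar if pruning removes everything."""
    kept = [rule for rule in grammar_rules if score_fn(problem, rule) >= keep_threshold]
    return kept or list(grammar_rules)
```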

Unsupervised and Interpretable Domain Adaptation to Rapidly Filter Social Web Data for Emergency Services

Title Unsupervised and Interpretable Domain Adaptation to Rapidly Filter Social Web Data for Emergency Services
Authors Jitin Krishnan, Hemant Purohit, Huzefa Rangwala
Abstract During the onset of a disaster event, filtering relevant information from social web data is challenging due to its sparse availability and the practical limitations of labeling datasets of an ongoing crisis. In this paper, we show that unsupervised domain adaptation through multi-task learning can be a useful framework to leverage data from past crisis events, as well as to exploit additional web resources for training efficient information filtering models during an ongoing crisis. We present a novel method to classify relevant tweets during an ongoing crisis without seeing any new examples, using the publicly available dataset of TREC incident streams, which provides labeled tweets with 4 relevance classes across 10 different crisis events. Additionally, our method addresses a crucial but missing component in current web science research on crisis data filtering models: interpretability. Specifically, we first identify a standard single-task attention-based neural network architecture and then construct a customized multi-task architecture for the crisis domain: the Multi-Task Domain Adversarial Attention Network. This model consists of dedicated attention layers for each task and a domain classifier for gradient reversal. Evaluation of domain adaptation for crisis events is performed by choosing a target event as the test set and training on the rest. Our results show that the multi-task model outperformed its single-task counterpart, and training with additional web resources yielded a further performance boost. Furthermore, we show that the attention layer can be used as a guide to explain the model predictions by showcasing the words in a tweet that are deemed important in the classification process. Our research aims to pave the way towards fully unsupervised and interpretable domain adaptation of low-resource crisis web data to aid emergency responders quickly and effectively.
Tasks Domain Adaptation, Multi-Task Learning, Unsupervised Domain Adaptation
Published 2020-03-04
URL https://arxiv.org/abs/2003.04991v1
PDF https://arxiv.org/pdf/2003.04991v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-and-interpretable-domain
Repo
Framework
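The gradient reversal used by the domain classifier is a standard construction; a common PyTorch implementation (a generic sketch, not the authors' code) looks like this:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd: float = 1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd: float = 1.0):
    return GradReverse.apply(x, lambd)
```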

Bayesian Inversion Of Generative Models For Geologic Storage Of Carbon Dioxide

Title Bayesian Inversion Of Generative Models For Geologic Storage Of Carbon Dioxide
Authors Gavin H. Graham, Yan Chen
Abstract Carbon capture and storage (CCS) can aid decarbonization of the atmosphere to limit further global temperature increases. A framework utilizing unsupervised learning is used to generate a range of subsurface geologic volumes to investigate potential sites for long-term storage of carbon dioxide. Generative adversarial networks are used to create geologic volumes, with a further neural network used to sample the posterior distribution of a trained Generator conditional to sparsely sampled physical measurements. These generative models are further conditioned to historic dynamic fluid flow data through Bayesian inversion to improve the resolution of the forecast of the storage capacity of injected carbon dioxide.
Tasks
Published 2020-01-08
URL https://arxiv.org/abs/2001.04829v1
PDF https://arxiv.org/pdf/2001.04829v1.pdf
PWC https://paperswithcode.com/paper/bayesian-inversion-of-generative-models-for
Repo
Framework
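As a simplified illustration of conditioning a trained Generator on sparse measurements, the sketch below finds a MAP latent code under a standard-normal prior; the paper's Bayesian inversion and posterior-sampling network are more involved, and `observe_fn` is a placeholder for the forward measurement operator:

```python
import torch

def invert_generator(G, observe_fn, observations, z_dim=128, steps=500, lr=1e-2, noise_std=0.1):
    """Optimize a latent code z so that G(z) matches the sparse measurements,
    regularized by a Gaussian prior on z (a point estimate, not a posterior sample)."""
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        misfit = ((observe_fn(G(z)) - observations) ** 2).sum() / (2.0 * noise_std ** 2)
        prior = 0.5 * (z ** 2).sum()
        (misfit + prior).backward()
        opt.step()
    return z.detach()
```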

Robust Out-of-distribution Detection in Neural Networks

Title Robust Out-of-distribution Detection in Neural Networks
Authors Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha
Abstract Detecting anomalous inputs is critical for safely deploying deep learning models in the real world. Existing approaches for detecting out-of-distribution (OOD) examples work well when evaluated on natural samples drawn from a sufficiently different distribution than the training data distribution. However, in this paper, we show that existing detection mechanisms can be extremely brittle when evaluated on inputs with minimal adversarial perturbations that do not change their semantics. Formally, we introduce a novel and challenging problem, Robust Out-of-Distribution Detection, and propose an algorithm that can fool existing OOD detectors by adding small perturbations to the inputs while preserving their semantics and thus their distributional membership. We take a first step toward solving this challenge and propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples. Our method can be flexibly combined with existing methods to render them robust. On common benchmark datasets, we show that ALOE substantially improves the robustness of state-of-the-art OOD detection, with a 58.4% AUROC improvement on CIFAR-10 and a 46.59% improvement on CIFAR-100. Finally, we provide a theoretical analysis of our method, underpinning the empirical results above.
Tasks Out-of-Distribution Detection
Published 2020-03-21
URL https://arxiv.org/abs/2003.09711v2
PDF https://arxiv.org/pdf/2003.09711v2.pdf
PWC https://paperswithcode.com/paper/robust-out-of-distribution-detection-in
Repo
Framework
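Our reading of the training objective, sketched below in PyTorch: classify adversarially perturbed inliers correctly while pushing adversarially perturbed outliers toward a uniform predictive distribution. `attack_fn` is a placeholder for an adversarial perturbation (e.g., PGD), and the weighting `lam` is an assumed default, not a value from the paper.

```python
import torch.nn.functional as F

def aloe_style_loss(model, x_in, y_in, x_out, attack_fn, lam=0.5):
    """Robust training on adversarially crafted inlier and outlier examples."""
    logits_in = model(attack_fn(model, x_in, y_in))
    ce = F.cross_entropy(logits_in, y_in)
    logits_out = model(attack_fn(model, x_out, None))
    to_uniform = -F.log_softmax(logits_out, dim=1).mean()  # encourages uniform outputs
    return ce + lam * to_uniform
```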

Topic Extraction of Crawled Documents Collection using Correlated Topic Model in MapReduce Framework

Title Topic Extraction of Crawled Documents Collection using Correlated Topic Model in MapReduce Framework
Authors Mi Khine Oo, May Aye Khine
Abstract The tremendous increase in the amount of available research documents impels researchers to propose topic models to extract the latent semantic themes of a document collection. However, how to extract the hidden topics of the document collection has become a crucial task for many topic model applications. Moreover, conventional topic modeling approaches suffer from scalability problems as the size of the document collection increases. In this paper, the Correlated Topic Model with a variational Expectation-Maximization algorithm is implemented in the MapReduce framework to solve the scalability problem. The proposed approach utilizes a dataset crawled from a public digital library. In addition, the full texts of the crawled documents are analysed to enhance the accuracy of MapReduce CTM. Experiments are conducted to demonstrate the performance of the proposed algorithm. The evaluation shows that the proposed approach has comparable performance, in terms of topic coherence, to LDA implemented in the MapReduce framework.
Tasks Topic Models
Published 2020-01-06
URL https://arxiv.org/abs/2001.01669v1
PDF https://arxiv.org/pdf/2001.01669v1.pdf
PWC https://paperswithcode.com/paper/topic-extraction-of-crawled-documents
Repo
Framework
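The MapReduce split can be summarized as: the map phase runs the per-document variational E-step and emits sufficient statistics, the reduce phase aggregates them, and the driver performs the M-step. A skeleton with placeholder functions (`e_step`, `combine`, and `m_step` are assumptions, not the paper's code):

```python
from functools import reduce

def variational_em_mapreduce(docs, model, e_step, combine, m_step, iterations=20):
    """One EM iteration = map (per-document E-step) + reduce (aggregate stats) + M-step."""
    for _ in range(iterations):
        stats = map(lambda doc: e_step(doc, model), docs)   # map phase
        aggregated = reduce(combine, stats)                 # reduce phase
        model = m_step(model, aggregated)                   # driver-side update
    return model
```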

Performance Evaluation of Low-Cost Machine Vision Cameras for Image-Based Grasp Verification

Title Performance Evaluation of Low-Cost Machine Vision Cameras for Image-Based Grasp Verification
Authors Deebul Nair, Amirhossein Pakdaman, Paul G. Plöger
Abstract Grasp verification is advantageous for autonomous manipulation robots as it provides the feedback required by higher-level planning components about successful task completion. However, a major obstacle in grasp verification is sensor selection. In this paper, we propose a vision-based grasp verification system using machine vision cameras, with the verification problem formulated as an image classification task. Machine vision cameras consist of a camera and a processing unit capable of on-board deep learning inference. Inference on this low-power hardware is done near the data source, reducing the robot's dependence on a centralized server, leading to reduced latency and improved reliability. Machine vision cameras provide deep learning inference capabilities using different neural accelerators. However, it is not clear from the documentation of these cameras how these neural accelerators affect performance metrics such as latency and throughput. To systematically benchmark these machine vision cameras, we propose a parameterized model generator that generates end-to-end Convolutional Neural Network (CNN) models. Using these generated models, we benchmark the latency and throughput of two machine vision cameras, the JeVois A33 and the Sipeed Maix Bit. Our experiments demonstrate that the selected machine vision camera and the deep learning models can robustly verify grasps with 97% per-frame accuracy.
Tasks Image Classification
Published 2020-03-23
URL https://arxiv.org/abs/2003.10167v1
PDF https://arxiv.org/pdf/2003.10167v1.pdf
PWC https://paperswithcode.com/paper/performance-evaluation-of-low-cost-machine
Repo
Framework
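A hypothetical version of such a parameterized model generator, plus a host-side latency measurement in PyTorch; the actual evaluation would export the generated models to the JeVois A33 and Sipeed Maix Bit rather than time them on the host:

```python
import time
import torch
import torch.nn as nn

def make_cnn(depth: int, width: int, num_classes: int = 2) -> nn.Module:
    """Generate an end-to-end CNN parameterized by depth and channel width."""
    layers, in_ch = [], 3
    for _ in range(depth):
        layers += [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        in_ch = width
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, num_classes)]
    return nn.Sequential(*layers)

def latency_ms(model: nn.Module, input_size=(1, 3, 224, 224), runs: int = 50) -> float:
    """Average per-frame inference latency in milliseconds."""
    x = torch.randn(*input_size)
    model.eval()
    with torch.no_grad():
        model(x)                                   # warm-up
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs * 1000.0
```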

A Computer-Aided Diagnosis System Using Artificial Intelligence for Proximal Femoral Fractures Enables Residents to Achieve a Diagnostic Rate Equivalent to Orthopedic Surgeons – multi-institutional joint development research

Title A Computer-Aided Diagnosis System Using Artificial Intelligence for Proximal Femoral Fractures Enables Residents to Achieve a Diagnostic Rate Equivalent to Orthopedic Surgeons – multi-institutional joint development research
Authors Yoichi Sato, Takamune Asamoto, Yutaro Ono, Ryosuke Goto, Asahi Kitamura, Seiwa Honda
Abstract [Objective] To develop a CAD system for proximal femoral fracture on plain frontal hip radiographs, using a CNN trained on a large dataset collected at multiple institutions, and to assess whether residents can improve their diagnostic rate for proximal femoral fracture by using this CAD system as a diagnostic aid. [Materials and methods] In total, 4851 proximal femoral fracture patients who visited the participating institutions between 2009 and 2019 were included. 5242 plain pelvic radiographs were extracted from a DICOM server, and a total of 10484 images (5242 with fracture and 5242 without fracture) were used for machine learning. A CNN approach was used, based on EfficientNet-B4 with PyTorch 1.3 and fastai 1.0. In the final evaluation, accuracy, sensitivity, specificity, F-value, and AUC were evaluated. Grad-CAM was used to visualize the basis of the diagnosis made by the CAD system. An image diagnosis test was carried out with 31 residents and 4 orthopedic surgeons on 600 images of proximal femoral fracture randomly extracted from the test image dataset, and the diagnostic rates with and without diagnostic support from the CAD system were evaluated. [Results] The diagnostic accuracy of the learning model was 96.1%, sensitivity 95.2%, specificity 96.9%, F-value 0.961, and AUC 0.99. Grad-CAM highlighted the image regions underlying the diagnosis. In the image diagnosis test, residents achieved diagnostic ability equivalent to that of the orthopedic surgeons when using the diagnostic aid of the CAD system. [Conclusions] The AI-based CAD system for proximal femoral fracture that we developed can provide the reasoning behind its diagnoses and serves as an image diagnosis tool with high diagnostic accuracy. It may contribute to improving diagnostic rates in actual clinical environments such as the emergency room.
Tasks
Published 2020-03-11
URL https://arxiv.org/abs/2003.12443v1
PDF https://arxiv.org/pdf/2003.12443v1.pdf
PWC https://paperswithcode.com/paper/a-computer-aided-diagnosis-system-using
Repo
Framework
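For clarity, the per-class measures reported in the results can be computed from a binary confusion matrix (fracture as the positive class); AUC additionally requires the ranked prediction scores:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, sensitivity, specificity, and F-value from a confusion matrix."""
    sensitivity = tp / (tp + fn)                  # recall on fracture cases
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_value = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f_value": f_value}
```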