October 18, 2019

3398 words 16 mins read

Paper Group ANR 485

An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols. Non-local Low-rank Cube-based Tensor Factorization for Spectral CT Reconstruction. Deep Learning Classification of Polygenic Obesity using Genome Wide Association Study SNPs. PlaneMatch: Patch Coplanarity Prediction for Robust RGB-D Reconstruction. Zero-Shot Cross-lingual …

An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols

Title An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols
Authors Chaitanya Kulkarni, Wei Xu, Alan Ritter, Raghu Machiraju
Abstract We describe an effort to annotate a corpus of natural language instructions consisting of 622 wet lab protocols to facilitate automatic or semi-automatic conversion of protocols into a machine-readable format and benefit biological research. Experimental results demonstrate the utility of our corpus for developing machine learning approaches to shallow semantic parsing of instructional texts. We make our annotated Wet Lab Protocol Corpus available to the research community.
Tasks Reading Comprehension, Semantic Parsing
Published 2018-05-01
URL http://arxiv.org/abs/1805.00195v1
PDF http://arxiv.org/pdf/1805.00195v1.pdf
PWC https://paperswithcode.com/paper/an-annotated-corpus-for-machine-reading-of
Repo
Framework
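
As a rough illustration of the shallow semantic parsing this corpus enables, the sketch below represents a single protocol step as BIO-tagged tokens and groups them into labeled action/argument spans. The tag inventory and the example sentence are hypothetical stand-ins, not the corpus's actual annotation schema.

```python
# Minimal sketch: a wet-lab protocol step as BIO-tagged tokens, decoded into
# labeled spans. Tags like "B-Action" / "B-Reagent" are illustrative only.
from dataclasses import dataclass

@dataclass
class Token:
    text: str
    tag: str  # BIO tag, e.g. "B-Action", "I-Speed", "O"

def extract_spans(tokens):
    """Group BIO-tagged tokens into (label, text) spans."""
    spans, current, label = [], [], None
    for tok in tokens:
        if tok.tag.startswith("B-"):
            if current:
                spans.append((label, " ".join(current)))
            current, label = [tok.text], tok.tag[2:]
        elif tok.tag.startswith("I-") and current:
            current.append(tok.text)
        else:
            if current:
                spans.append((label, " ".join(current)))
            current, label = [], None
    if current:
        spans.append((label, " ".join(current)))
    return spans

step = [Token("Centrifuge", "B-Action"), Token("the", "O"),
        Token("sample", "B-Reagent"), Token("at", "O"),
        Token("4000", "B-Speed"), Token("rpm", "I-Speed")]
print(extract_spans(step))
# [('Action', 'Centrifuge'), ('Reagent', 'sample'), ('Speed', '4000 rpm')]
```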

Non-local Low-rank Cube-based Tensor Factorization for Spectral CT Reconstruction

Title Non-local Low-rank Cube-based Tensor Factorization for Spectral CT Reconstruction
Authors Weiwen Wu, Fenglin Liu, Yanbo Zhang, Qian Wang, Hengyong Yu
Abstract Spectral computed tomography (CT) reconstructs material-dependent attenuation images from the projections of multiple narrow energy windows, which is meaningful for material identification and decomposition. Unfortunately, the multi-energy projection dataset always contains strong, complicated noise, resulting in projections with a low signal-to-noise ratio (SNR). Very recently, the spatial-spectral cube matching frame (SSCMF) was proposed to explore non-local spatial-spectrum similarities for spectral CT. The method constructs a group by clustering a series of non-local spatial-spectrum cubes. The small spatial patch size of such a group makes SSCMF fail to encode the sparsity and low-rank properties. In addition, the hard-thresholding and collaborative filtering operations in SSCMF are too coarse to recover image features and spatial edges. Moreover, because all steps operate on 4-D groups, the computational and memory load may be unaffordable in practice. To avoid these limitations and further improve image quality, we first formulate a non-local cube-based tensor instead of the group to encode the sparsity and low-rank properties. Then, Kronecker-Basis-Representation (KBR) tensor factorization is employed as a new regularizer in a basic spectral CT reconstruction model to enhance the ability to extract image features and protect spatial edges, yielding the non-local low-rank cube-based tensor factorization (NLCTF) method. Finally, the split-Bregman strategy is adopted to solve the NLCTF model. Both numerical simulations and realistic preclinical mouse studies are performed to validate and assess the NLCTF algorithm. The results show that the NLCTF method outperforms its competitors.
Tasks Computed Tomography (CT)
Published 2018-07-24
URL http://arxiv.org/abs/1807.10610v3
PDF http://arxiv.org/pdf/1807.10610v3.pdf
PWC https://paperswithcode.com/paper/non-local-low-rank-cube-based-tensor
Repo
Framework
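
The KBR factorization itself is involved, but the underlying non-local low-rank idea can be sketched with a simpler stand-in: stack similar spatial-spectral cubes into a 4-D tensor and suppress noise with a truncated higher-order SVD. The shapes, ranks, and the HOSVD below are illustrative proxies for the paper's KBR regularizer, not its algorithm.

```python
# Simplified sketch of the non-local low-rank idea: denoise a 4-D group of
# spatial-spectral cubes with a truncated higher-order SVD.
import numpy as np

def mode_multiply(tensor, matrix, mode):
    """Mode-n product: contract `matrix` with `tensor` along axis `mode`."""
    moved = np.moveaxis(tensor, mode, 0)
    flat = moved.reshape(moved.shape[0], -1)
    out = (matrix @ flat).reshape((matrix.shape[0],) + moved.shape[1:])
    return np.moveaxis(out, 0, mode)

def truncated_hosvd(tensor, ranks):
    """Low-rank approximation: project each mode onto leading singular vectors."""
    factors, core = [], tensor
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(u[:, :r])
        core = mode_multiply(core, u[:, :r].T, mode)
    approx = core
    for mode, u in enumerate(factors):
        approx = mode_multiply(approx, u, mode)
    return approx

# Toy 4-D "group": (patch_x, patch_y, energy_bin, group_member)
rng = np.random.default_rng(0)
clean = np.einsum('i,j,k,l->ijkl', *(rng.normal(size=s) for s in (8, 8, 6, 10)))
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = truncated_hosvd(noisy, ranks=(2, 2, 2, 2))
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))  # True
```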

Deep Learning Classification of Polygenic Obesity using Genome Wide Association Study SNPs

Title Deep Learning Classification of Polygenic Obesity using Genome Wide Association Study SNPs
Authors Casimiro Adays Curbelo Montañez, Paul Fergus, Almudena Curbelo Montañez, Carl Chalmers
Abstract In this paper, association results from genome-wide association studies (GWAS) are combined with a deep learning framework to test the predictive capacity of statistically significant single nucleotide polymorphisms (SNPs) associated with the obesity phenotype. Our approach demonstrates the potential of deep learning as a powerful framework for GWAS analysis that can capture information about SNPs and the important interactions between them. Basic statistical methods and techniques for the analysis of genetic SNP data from population-based genome-wide studies have been considered. Statistical association testing between individual SNPs and obesity was conducted under an additive model using logistic regression. Four subsets of loci after quality-control (QC) and association analysis were selected: P-values lower than 1×10^-5 (5 SNPs), 1×10^-4 (32 SNPs), 1×10^-3 (248 SNPs) and 1×10^-2 (2465 SNPs). A deep learning classifier was initialised using these sets of SNPs and fine-tuned to classify obese and non-obese observations. Using a deep learning classifier and the genetic variants with P-value < 1×10^-2 (2465 SNPs), strong results were obtained (SE=0.9604, SP=0.9712, Gini=0.9817, LogLoss=0.1150, AUC=0.9908 and MSE=0.0300). As the P-value threshold became more stringent, an evident deterioration in performance was observed. Results demonstrate that single SNP analysis fails to capture the cumulative effect of less significant variants and their overall contribution to the outcome in disease prediction, which is captured using a deep learning framework.
Tasks Disease Prediction
Published 2018-04-09
URL http://arxiv.org/abs/1804.03198v2
PDF http://arxiv.org/pdf/1804.03198v2.pdf
PWC https://paperswithcode.com/paper/deep-learning-classification-of-polygenic
Repo
Framework
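
The two-stage pipeline (per-SNP association filtering, then a classifier on the surviving genotypes) can be sketched end to end on synthetic data. Note the stand-ins: the paper fits per-SNP logistic regressions under an additive model and trains an H2O deep network, whereas the sketch below uses a simple t-test filter and a scikit-learn MLP on made-up genotypes.

```python
# Sketch of the GWAS-then-deep-learning pipeline on synthetic genotype data.
import numpy as np
from scipy import stats
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 1000, 500
X = rng.integers(0, 3, size=(n, p)).astype(float)  # additive 0/1/2 genotype coding
beta = np.zeros(p)
beta[:20] = 0.4                                    # 20 truly associated SNPs
logit = X @ beta
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(logit - logit.mean()))))

# Stage 1: keep SNPs whose association P-value clears the chosen threshold.
pvals = np.array([stats.ttest_ind(X[y == 1, j], X[y == 0, j]).pvalue
                  for j in range(p)])
kept = X[:, pvals < 1e-2]

# Stage 2: train a small neural classifier on the selected SNPs.
Xtr, Xte, ytr, yte = train_test_split(kept, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
print(f"kept {kept.shape[1]} of {p} SNPs, test AUC = {auc:.3f}")
```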

PlaneMatch: Patch Coplanarity Prediction for Robust RGB-D Reconstruction

Title PlaneMatch: Patch Coplanarity Prediction for Robust RGB-D Reconstruction
Authors Yifei Shi, Kai Xu, Matthias Niessner, Szymon Rusinkiewicz, Thomas Funkhouser
Abstract We introduce a novel RGB-D patch descriptor designed for detecting coplanar surfaces in SLAM reconstruction. The core of our method is a deep convolutional neural net that takes in RGB, depth, and normal information of a planar patch in an image and outputs a descriptor that can be used to find coplanar patches from other images. We train the network on 10 million triplets of coplanar and non-coplanar patches, and evaluate on a new coplanarity benchmark created from commodity RGB-D scans. Experiments show that our learned descriptor outperforms alternatives extended for this new task by a significant margin. In addition, we demonstrate the benefits of coplanarity matching in a robust RGB-D reconstruction formulation. We find that coplanarity constraints detected with our method are sufficient to get reconstruction results comparable to state-of-the-art frameworks on most scenes, and that they outperform other methods on standard benchmarks when combined with a simple keypoint method.
Tasks
Published 2018-03-22
URL http://arxiv.org/abs/1803.08407v3
PDF http://arxiv.org/pdf/1803.08407v3.pdf
PWC https://paperswithcode.com/paper/planematch-patch-coplanarity-prediction-for
Repo
Framework
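
The training signal is a standard triplet objective: embed an anchor patch, a coplanar patch, and a non-coplanar patch, then penalise the anchor for being closer to the negative than to the positive. The sketch below shows only that loss geometry; the descriptor dimension and margin are illustrative, and random vectors stand in for the CNN's outputs.

```python
# Triplet margin loss over patch descriptors, as used to train coplanarity
# embeddings. Random vectors stand in for network outputs.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss on embedding distances for a batch of triplets."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

rng = np.random.default_rng(0)
anchor = rng.normal(size=(4, 128))                     # descriptor stand-ins
coplanar = anchor + 0.05 * rng.normal(size=(4, 128))   # close to the anchor
non_coplanar = rng.normal(size=(4, 128))               # unrelated patch
print(triplet_loss(anchor, coplanar, non_coplanar))    # 0.0: triplets satisfied
print(triplet_loss(anchor, non_coplanar, coplanar))    # large: triplets violated
```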

Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation

Title Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation
Authors Akiko Eriguchi, Melvin Johnson, Orhan Firat, Hideto Kazawa, Wolfgang Macherey
Abstract Transferring representations from large supervised tasks to downstream tasks has shown promising results in AI fields such as Computer Vision and Natural Language Processing (NLP). In parallel, the recent progress in Machine Translation (MT) has enabled one to train multilingual Neural MT (NMT) systems that can translate between multiple languages and are also capable of performing zero-shot translation. However, little attention has been paid to leveraging representations learned by a multilingual NMT system to enable zero-shot multilinguality in other NLP tasks. In this paper, we demonstrate a simple framework, a multilingual Encoder-Classifier, for cross-lingual transfer learning by reusing the encoder from a multilingual NMT system and stitching it with a task-specific classifier component. Our proposed model achieves significant improvements in the English setup on three benchmark tasks - Amazon Reviews, SST and SNLI. Further, our system can perform classification in a new language for which no classification data was seen during training, showing that zero-shot classification is possible and remarkably competitive. In order to understand the underlying factors contributing to this finding, we conducted a series of analyses on the effect of the shared vocabulary, the training data type for NMT, classifier complexity, encoder representation power, and model generalization on zero-shot performance. Our results provide strong evidence that the representations learned from multilingual NMT systems are widely applicable across languages and tasks.
Tasks Cross-Lingual Transfer, Machine Translation, Transfer Learning, Zero-Shot Learning
Published 2018-09-12
URL http://arxiv.org/abs/1809.04686v1
PDF http://arxiv.org/pdf/1809.04686v1.pdf
PWC https://paperswithcode.com/paper/zero-shot-cross-lingual-classification-using
Repo
Framework
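
A minimal sketch of the Encoder-Classifier stitching, assuming the NMT encoder can be treated as a module whose pooled states feed a task-specific head. Here a randomly initialised GRU stands in for the multilingual NMT encoder; a real system would load pretrained encoder weights and freeze or fine-tune them, and the vocabulary and dimensions below are arbitrary.

```python
# Sketch: reuse an "NMT" encoder and stitch a task classifier on top of its
# pooled states. The GRU is a stand-in for a real multilingual NMT encoder.
import torch
import torch.nn as nn

class EncoderClassifier(nn.Module):
    def __init__(self, vocab=32000, dim=256, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)  # NMT-encoder stand-in
        self.classifier = nn.Linear(dim, classes)          # task-specific head

    def forward(self, tokens):
        states, _ = self.encoder(self.embed(tokens))
        pooled = states.mean(dim=1)        # pool encoder states over time
        return self.classifier(pooled)

model = EncoderClassifier()
logits = model(torch.randint(0, 32000, (8, 20)))  # batch of token-id sequences
print(logits.shape)  # torch.Size([8, 2])
```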

JSR-Net: A Deep Network for Joint Spatial-Radon Domain CT Reconstruction from incomplete data

Title JSR-Net: A Deep Network for Joint Spatial-Radon Domain CT Reconstruction from incomplete data
Authors Haimiao Zhang, Bin Dong, Baodong Liu
Abstract CT image reconstruction from incomplete data, such as sparse-view and limited-angle reconstruction, is an important and challenging problem in medical imaging. This work proposes a new deep convolutional neural network (CNN), called JSR-Net, that jointly reconstructs CT images and their associated Radon-domain projections. JSR-Net combines the traditional model-based approach with the architecture design of deep learning. A hybrid loss function is adopted to improve the performance of JSR-Net, making it more effective at protecting important image structures. Numerical experiments demonstrate that JSR-Net outperforms recent model-based reconstruction methods, as well as a recently proposed deep model.
Tasks Image Reconstruction
Published 2018-12-03
URL http://arxiv.org/abs/1812.00510v2
PDF http://arxiv.org/pdf/1812.00510v2.pdf
PWC https://paperswithcode.com/paper/jsr-net-a-deep-network-for-joint-spatial
Repo
Framework
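
Joint spatial-Radon reconstruction implies supervising in both domains at once. Below is a hedged sketch of such a hybrid loss using scikit-image's radon transform; the weighting alpha, the angle set, and the phantom test image are illustrative choices, and the paper's actual loss and network are more elaborate.

```python
# Sketch of a hybrid loss: weighted MSE in the image domain plus MSE between
# the Radon-domain projections of prediction and ground truth.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, rescale

def hybrid_loss(pred_img, true_img, theta, alpha=0.5):
    """Combine image-domain and Radon-domain reconstruction errors."""
    image_term = np.mean((pred_img - true_img) ** 2)
    sino_term = np.mean((radon(pred_img, theta, circle=False)
                         - radon(true_img, theta, circle=False)) ** 2)
    return alpha * image_term + (1.0 - alpha) * sino_term

truth = rescale(shepp_logan_phantom(), 0.25)          # small test image
noisy = truth + 0.05 * np.random.default_rng(0).normal(size=truth.shape)
theta = np.linspace(0.0, 180.0, 30, endpoint=False)   # sparse-view angles
print(hybrid_loss(noisy, truth, theta))
```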

Tensor-Train Long Short-Term Memory for Monaural Speech Enhancement

Title Tensor-Train Long Short-Term Memory for Monaural Speech Enhancement
Authors Suman Samui, Indrajit Chakrabarti, Soumya K. Ghosh
Abstract In recent years, Long Short-Term Memory (LSTM) has become a popular choice for speech separation and speech enhancement tasks. The capability of an LSTM network can be enhanced by widening it and adding more layers. However, this introduces millions of parameters into the network and increases the requirement for computational resources. These limitations hinder the efficient implementation of RNN models in low-end devices such as mobile phones and embedded systems with limited memory. To overcome these issues, we propose an efficient alternative that reduces parameters by representing the LSTM weight matrices in Tensor-Train (TT) format. We call this Tensor-Train factorized LSTM the TT-LSTM model. Based on these TT-LSTM units, we propose a deep TensorNet model for the single-channel speech enhancement task. Experimental results in various test conditions, in terms of standard speech quality and intelligibility metrics, demonstrate that the proposed deep TT-LSTM based speech enhancement framework achieves performance competitive with a state-of-the-art uncompressed RNN model, even though the proposed model architecture is orders of magnitude less complex.
Tasks Speech Enhancement, Speech Separation
Published 2018-12-25
URL http://arxiv.org/abs/1812.10095v1
PDF http://arxiv.org/pdf/1812.10095v1.pdf
PWC https://paperswithcode.com/paper/tensor-train-long-short-term-memory-for
Repo
Framework
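
The parameter saving comes from replacing each dense LSTM weight matrix with a Tensor-Train matrix: a chain of small cores that the input is contracted through. The sketch below applies a TT-factorised 256×256 matrix to a vector; the mode sizes and TT-ranks are illustrative, not the paper's settings.

```python
# Sketch of a TT-matrix times vector: 256 = 4*8*8 on both input and output
# sides, with cores of shape (r_k, m_k, n_k, r_{k+1}).
import numpy as np

in_modes, out_modes, ranks = (4, 8, 8), (4, 8, 8), (1, 4, 4, 1)
rng = np.random.default_rng(0)
cores = [rng.normal(size=(ranks[k], in_modes[k], out_modes[k], ranks[k + 1]))
         for k in range(3)]
print("TT parameters:", sum(c.size for c in cores))  # 1344 vs 65,536 dense

def tt_matvec(cores, x):
    """Apply the TT-factorised matrix to a vector of length prod(in_modes)."""
    z = x.reshape((1,) + tuple(c.shape[1] for c in cores))  # (r0=1, m1, m2, m3)
    for core in cores:
        # contract the leading rank index and the current input mode
        z = np.tensordot(z, core, axes=([0, 1], [0, 1]))
        z = np.moveaxis(z, -1, 0)  # bring the next rank index to the front
    return z.reshape(-1)           # output multi-index (n1, n2, n3), flattened

y = tt_matvec(cores, np.ones(4 * 8 * 8))
print(y.shape)  # (256,)
```

Here 1,344 core parameters stand in for the 65,536 entries of the dense matrix; a TT-LSTM would apply such a contraction in place of each input-to-hidden multiplication.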

Data science is science’s second chance to get causal inference right: A classification of data science tasks

Title Data science is science’s second chance to get causal inference right: A classification of data science tasks
Authors Miguel A. Hernán, John Hsu, Brian Healy
Abstract Causal inference from observational data is the goal of many data analyses in the health and social sciences. However, academic statistics has often frowned upon data analyses with a causal objective. The introduction of the term “data science” provides a historic opportunity to redefine data analysis in such a way that it naturally accommodates causal inference from observational data. Like others before, we organize the scientific contributions of data science into three classes of tasks: Description, prediction, and counterfactual prediction (which includes causal inference). An explicit classification of data science tasks is necessary to discuss the data, assumptions, and analytics required to successfully accomplish each task. We argue that a failure to adequately describe the role of subject-matter expert knowledge in data analysis is a source of widespread misunderstandings about data science. Specifically, causal analyses typically require not only good data and algorithms, but also domain expert knowledge. We discuss the implications for the use of data science to guide decision-making in the real world and to train data scientists.
Tasks Causal Inference, Decision Making
Published 2018-04-28
URL http://arxiv.org/abs/1804.10846v6
PDF http://arxiv.org/pdf/1804.10846v6.pdf
PWC https://paperswithcode.com/paper/data-science-is-sciences-second-chance-to-get
Repo
Framework

Using recurrences in time and frequency within U-net architecture for speech enhancement

Title Using recurrences in time and frequency within U-net architecture for speech enhancement
Authors Tomasz Grzywalski, Szymon Drgas
Abstract When designing a fully-convolutional neural network, there is a trade-off between receptive field size, number of parameters, and spatial resolution of features in deeper layers of the network. In this work we present a novel network design based on a combination of convolutional and recurrent layers that resolves this trade-off. We compare our solution with U-net-based models known from the literature and with other baseline models on a speech enhancement task. We test our solution on TIMIT speech utterances combined with noise segments extracted from the NOISEX-92 database and show a clear advantage of the proposed solution in terms of SDR (signal-to-distortion ratio), SIR (signal-to-interference ratio) and STOI (short-time objective intelligibility) metrics compared to the current state of the art.
Tasks Speech Enhancement
Published 2018-11-16
URL http://arxiv.org/abs/1811.06805v1
PDF http://arxiv.org/pdf/1811.06805v1.pdf
PWC https://paperswithcode.com/paper/using-recurrences-in-time-and-frequency
Repo
Framework
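
One way to realise recurrences in time and frequency is to scan a convolutional block's feature map with a bidirectional RNN along the time axis and then along the frequency axis, widening the receptive field without stacking more convolutions. The block below is a hedged sketch of that idea, not the paper's exact architecture; channel counts and hidden sizes are illustrative.

```python
# Sketch of a conv block followed by time-axis and frequency-axis recurrences
# over a spectrogram-shaped feature map.
import torch
import torch.nn as nn

class TimeFreqRecurrence(nn.Module):
    def __init__(self, channels=16, hidden=16):
        super().__init__()
        self.conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.time_rnn = nn.GRU(channels, hidden, batch_first=True,
                               bidirectional=True)
        self.freq_rnn = nn.GRU(2 * hidden, hidden, batch_first=True,
                               bidirectional=True)

    def forward(self, spec):               # spec: (batch, 1, freq, time)
        b, _, f, t = spec.shape
        h = self.conv(spec)                # (b, c, f, t)
        # recur over time independently for each frequency bin
        h = h.permute(0, 2, 3, 1).reshape(b * f, t, -1)
        h, _ = self.time_rnn(h)            # (b*f, t, 2*hidden)
        # recur over frequency independently for each time frame
        h = h.reshape(b, f, t, -1).permute(0, 2, 1, 3).reshape(b * t, f, -1)
        h, _ = self.freq_rnn(h)            # (b*t, f, 2*hidden)
        return h.reshape(b, t, f, -1).permute(0, 3, 2, 1)  # (b, 2*hidden, f, t)

out = TimeFreqRecurrence()(torch.randn(2, 1, 64, 100))
print(out.shape)  # torch.Size([2, 32, 64, 100])
```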

Proceedings of the 2018 Workshop on Compositional Approaches in Physics, NLP, and Social Sciences

Title Proceedings of the 2018 Workshop on Compositional Approaches in Physics, NLP, and Social Sciences
Authors Martha Lewis, Bob Coecke, Jules Hedges, Dimitri Kartsaklis, Dan Marsden
Abstract The ability to compose parts to form a more complex whole, and to analyze a whole as a combination of elements, is desirable across disciplines. This workshop brings together researchers applying compositional approaches to physics, NLP, cognitive science, and game theory. Within NLP, a long-standing aim is to represent how words can combine to form phrases and sentences. Within the framework of distributional semantics, words are represented as vectors in vector spaces. The categorical model of Coecke et al. [2010], inspired by quantum protocols, has provided a convincing account of compositionality in vector space models of NLP. There is furthermore a history of vector space models in cognitive science. Theories of categorization such as those developed by Nosofsky [1986] and Smith et al. [1988] utilise notions of distance between feature vectors. More recently Gärdenfors [2004, 2014] has developed a model of concepts in which conceptual spaces provide geometric structures, and information is represented by points, vectors and regions in vector spaces. The same compositional approach has been applied to this formalism, giving conceptual spaces theory a richer model of compositionality than previously available [Bolt et al., 2018]. Compositional approaches have also been applied in the study of strategic games and Nash equilibria. In contrast to classical game theory, where games are studied monolithically as one global object, compositional game theory works bottom-up by building large and complex games from smaller components. Such an approach is inherently difficult since the interaction between games has to be considered. Research into categorical compositional methods for this field has recently begun [Ghani et al., 2018]. Moreover, the interaction between the three disciplines of cognitive science, linguistics and game theory is a fertile ground for research. Game theory in cognitive science is a well-established area [Camerer, 2011]. Similarly, game theoretic approaches have been applied in linguistics [Jäger, 2008]. Lastly, the study of linguistics and cognitive science is intimately intertwined [Smolensky and Legendre, 2006, Jackendoff, 2007]. Physics supplies compositional approaches via vector spaces and categorical quantum theory, allowing the interplay between the three disciplines to be examined.
Tasks
Published 2018-11-06
URL http://arxiv.org/abs/1811.02701v1
PDF http://arxiv.org/pdf/1811.02701v1.pdf
PWC https://paperswithcode.com/paper/proceedings-of-the-2018-workshop-on
Repo
Framework

On the dissection of degenerate cosmologies with machine learning

Title On the dissection of degenerate cosmologies with machine learning
Authors Julian Merten, Carlo Giocoli, Marco Baldi, Massimo Meneghetti, Austin Peel, Florian Lalande, Jean-Luc Starck, Valeria Pettorino
Abstract Based on the DUSTGRAIN-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensionality-reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics that are common to the field of digital image processing. To learn new features directly from the data we use a Convolutional Neural Network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers; one based on a nearest-neighbour search and one that is based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59% for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76% with no observational degeneracies remaining. Visualising the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to understand better why they perform so well.
Tasks
Published 2018-10-25
URL http://arxiv.org/abs/1810.11027v2
PDF http://arxiv.org/pdf/1810.11027v2.pdf
PWC https://paperswithcode.com/paper/on-the-dissection-of-degenerate-cosmologies
Repo
Framework
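
The classification setup reduces to mapping a stack of convergence maps to one of nine model labels. A minimal CNN classifier of that shape is sketched below; the architecture, map resolution, and the choice of four tomographic bins are assumptions for illustration, not the paper's configuration.

```python
# Sketch: a small CNN mapping a tomographic stack of convergence maps to a
# probability over candidate cosmological models.
import torch
import torch.nn as nn

n_models, n_zbins = 9, 4          # candidate cosmologies, tomographic bins
cnn = nn.Sequential(
    nn.Conv2d(n_zbins, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, n_models),
)
maps = torch.randn(8, n_zbins, 128, 128)  # batch of convergence map stacks
print(cnn(maps).softmax(dim=1).shape)     # torch.Size([8, 9])
```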

Anomaly Detection and Interpretation using Multimodal Autoencoder and Sparse Optimization

Title Anomaly Detection and Interpretation using Multimodal Autoencoder and Sparse Optimization
Authors Yasuhiro Ikeda, Keisuke Ishibashi, Yuusuke Nakano, Keishiro Watanabe, Ryoichi Kawahara
Abstract Automated anomaly detection is essential for managing information and communications technology (ICT) systems to maintain reliable services with minimum burden on operators. For detecting varying and continually emerging anomalies as differences from normal states, learning normal relationships inherent among cross-domain data monitored from ICT systems is essential. Deep-learning-based anomaly detection using an autoencoder (AE) is therefore promising for such complicated learning; however, its interpretation is still problematic. Since the dimensions of the input data contributing to the detected anomaly are not directly indicated in an AE, they are not suitable for localizing anomalies in large ICT systems composed of a huge amount of equipment. We propose an algorithm using sparse optimization for estimating contributing dimensions to anomalies detected with AEs. We also propose a multimodal AE (MAE) for effectively learning the relationships among cross-domain data, which can induce nonlinearity and differences in learnability among data types. We evaluated our algorithms with several datasets including real measured data in comparison with conventional algorithms and confirmed the superiority of our estimation algorithm in specifying contributing dimensions of anomalous data and our MAE in detecting anomalies in cross-domain data.
Tasks Anomaly Detection
Published 2018-12-18
URL http://arxiv.org/abs/1812.07136v1
PDF http://arxiv.org/pdf/1812.07136v1.pdf
PWC https://paperswithcode.com/paper/anomaly-detection-and-interpretation-using
Repo
Framework
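
To make the interpretation step concrete, the sketch below reproduces the spirit of the contribution-estimation idea on a linear stand-in: with reconstruction residuals given by a projection learned from normal data, find a sparse correction e such that x - e reconstructs well, and read the contributing dimensions off the nonzero entries of e. This is an L1-regularised problem solved here by ISTA; the paper's autoencoder and its exact algorithm differ in detail.

```python
# Sparse estimation of anomaly-contributing dimensions against a linear
# (PCA-style) stand-in for a trained autoencoder.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 10))
data = rng.normal(size=(500, 3)) @ W           # normal data on a 3-D subspace
_, _, vt = np.linalg.svd(data, full_matrices=False)
P = vt[:3].T @ vt[:3]                          # projector onto normal subspace
R = np.eye(10) - P                             # reconstruction-residual operator

x = rng.normal(size=3) @ W
x[4] += 5.0                                    # inject an anomaly into dim 4

def sparse_contributions(x, R, lam=0.1, lr=0.4, steps=500):
    """ISTA for min_e ||R (x - e)||^2 + lam * ||e||_1."""
    e = np.zeros_like(x)
    for _ in range(steps):
        grad = -2.0 * R @ (x - e)              # gradient of the smooth term
        e = e - lr * grad
        e = np.sign(e) * np.maximum(np.abs(e) - lr * lam, 0.0)  # soft-threshold
    return e

e = sparse_contributions(x, R)
print(np.argmax(np.abs(e)))   # expect 4: the corrupted dimension
```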

Enhancing the Structural Performance of Additively Manufactured Objects

Title Enhancing the Structural Performance of Additively Manufactured Objects
Authors Erva Ulu
Abstract The ability to accurately quantify the performance of an additively manufactured (AM) product is important for widespread industry adoption of AM, as the design is required to: (1) satisfy geometrical constraints, (2) satisfy structural constraints dictated by its intended function, and (3) be cost-effective compared to traditional manufacturing methods. Optimization techniques offer design aids for creating cost-effective structures that meet the prescribed structural objectives. The fundamental problem in existing approaches lies in the difficulty of quantifying structural performance: each unique design leads to a new set of analyses to determine the structural robustness, and such analyses can be very costly due to the complexity of the in-use forces experienced by the structure. This work develops computationally tractable methods tailored to maximize the structural performance of AM products. A geometry-preserving build orientation optimization method as well as data-driven shape optimization approaches to structural design are presented. The proposed methods greatly enhance the value of AM technology by taking advantage of the design space it enables for a broad class of problems involving complex in-use loads.
Tasks
Published 2018-11-01
URL http://arxiv.org/abs/1811.00548v1
PDF http://arxiv.org/pdf/1811.00548v1.pdf
PWC https://paperswithcode.com/paper/enhancing-the-structural-performance-of
Repo
Framework

Who is Addressed in this Comment? Automatically Classifying Meta-Comments in News Comments

Title Who is Addressed in this Comment? Automatically Classifying Meta-Comments in News Comments
Authors Marlo Häring, Wiebke Loosen, Walid Maalej
Abstract User comments have become an essential part of online journalism. However, newsrooms are often overwhelmed by the vast number of diverse comments, for which a manual analysis is barely feasible. Identifying meta-comments that address or mention newsrooms, individual journalists, or moderators and that may call for reactions is particularly critical. In this paper, we present an automated approach to identify and classify meta-comments. We compare comment classification based on manually extracted features with an end-to-end learning approach. We develop, optimize, and evaluate multiple classifiers on a comment dataset of the large German online newsroom SPIEGEL Online and the ‘One Million Posts’ corpus of DER STANDARD, an Austrian newspaper. Both optimized classification approaches achieved encouraging $F_{0.5}$ values between 76% and 91%. We report on the most significant classification features with the results of a qualitative analysis and discuss how our work contributes to making participation in online journalism more constructive.
Tasks
Published 2018-10-02
URL http://arxiv.org/abs/1810.01114v1
PDF http://arxiv.org/pdf/1810.01114v1.pdf
PWC https://paperswithcode.com/paper/who-is-addressed-in-this-comment
Repo
Framework
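
The $F_{0.5}$ measure reported here weights precision more heavily than recall, which suits a setting where falsely flagging a comment as a meta-comment is costlier than missing one. A quick sketch of the general F-beta formula (scikit-learn's fbeta_score computes the same quantity from labels):

```python
# F-beta: beta < 1 emphasises precision, beta > 1 emphasises recall.
def f_beta(precision, recall, beta=0.5):
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.9, 0.7))  # 0.851, pulled toward the precision of 0.9
```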

Faithfully Explaining Rankings in a News Recommender System

Title Faithfully Explaining Rankings in a News Recommender System
Authors Maartje ter Hoeve, Anne Schuth, Daan Odijk, Maarten de Rijke
Abstract There is an increasing demand for algorithms to explain their outcomes. So far, there is no method that explains the rankings produced by a ranking algorithm. To address this gap we propose LISTEN, a LISTwise ExplaiNer, to explain rankings produced by a ranking algorithm. To efficiently use LISTEN in production, we train a neural network to learn the underlying explanation space created by LISTEN; we call this model Q-LISTEN. We show that LISTEN produces faithful explanations and that Q-LISTEN is able to learn these explanations. Moreover, we show that LISTEN is safe to use in a real world environment: users of a news recommendation system do not behave significantly differently when they are exposed to explanations generated by LISTEN instead of manually generated explanations.
Tasks Recommendation Systems
Published 2018-05-14
URL http://arxiv.org/abs/1805.05447v1
PDF http://arxiv.org/pdf/1805.05447v1.pdf
PWC https://paperswithcode.com/paper/faithfully-explaining-rankings-in-a-news
Repo
Framework