April 1, 2020

3228 words 16 mins read

Paper Group ANR 501

AMR Similarity Metrics from Principles. Semantic Sensitive TF-IDF to Determine Word Relevance in Documents. Multi-label Prediction in Time Series Data using Deep Neural Networks. HumBug Zooniverse: a crowd-sourced acoustic mosquito dataset. Core-Collapse Supernova Gravitational-Wave Search and Deep Learning Classification. Accelerating RNN Transduc …

AMR Similarity Metrics from Principles

Title AMR Similarity Metrics from Principles
Authors Juri Opitz, Letitia Parcalabescu, Anette Frank
Abstract Different metrics have been proposed to compare Abstract Meaning Representation (AMR) graphs. The canonical Smatch metric (Cai and Knight, 2013) aligns variables from one graph to another and compares the matching triples. The recently released SemBleu metric (Song and Gildea, 2019) is based on the machine-translation metric Bleu (Papineni et al., 2002), increasing computational efficiency by ablating a variable-alignment step and aiming at capturing more global graph properties. Our aims are threefold: i) we establish criteria that allow us to perform a principled comparison between metrics of symbolic meaning representations like AMR; ii) we undertake a thorough analysis of Smatch and SemBleu where we show that the latter exhibits some undesirable properties. E.g., it violates the identity of indiscernibles rule and introduces biases that are hard to control; iii) we propose a novel metric S2match that is more benevolent to only very slight meaning deviations and targets the fulfilment of all established criteria. We assess its suitability and show its advantages over Smatch and SemBleu.
Tasks Machine Translation
Published 2020-01-29
URL https://arxiv.org/abs/2001.10929v1
PDF https://arxiv.org/pdf/2001.10929v1.pdf
PWC https://paperswithcode.com/paper/amr-similarity-metrics-from-principles
Repo
Framework
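
A minimal sketch of the soft triple matching idea behind S2match, assuming the two graphs are given as triple sets with already-aligned variable names (the Smatch-style alignment search is omitted) and that an embedding lookup is available; the function name, threshold and scoring rule are illustrative, not the authors' implementation.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def soft_triple_f1(triples_a, triples_b, emb, tau=0.5):
    """triples: lists of (source, relation, target); emb: dict concept -> vector."""
    def score(t1, t2):
        if t1[0] != t2[0] or t1[1] != t2[1]:   # source and relation must match exactly
            return 0.0
        if t1[2] == t2[2]:                     # identical target concept
            return 1.0
        if t1[2] in emb and t2[2] in emb:      # partial credit for near-synonymous concepts
            s = cosine(emb[t1[2]], emb[t2[2]])
            return s if s >= tau else 0.0
        return 0.0

    matched = sum(max((score(ta, tb) for tb in triples_b), default=0.0) for ta in triples_a)
    precision = matched / max(len(triples_a), 1)
    recall = matched / max(len(triples_b), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)
```

With exact string matching only, this reduces to a Smatch-like triple F1; with soft matching, a graph that differs only by a near-synonymous concept is penalised slightly rather than treated as a full mismatch.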

Semantic Sensitive TF-IDF to Determine Word Relevance in Documents

Title Semantic Sensitive TF-IDF to Determine Word Relevance in Documents
Authors Amir Jalilifard, Vinicius Caridá, Alex Mansano, Rogers Cristo
Abstract Keyword extraction has received increasing attention as an important research topic that can lead to advancements in diverse applications such as document context categorization, text indexing and document classification. In this paper we propose STF-IDF, a novel semantic method based on TF-IDF for scoring word importance in informal documents in a corpus. A set of nearly four million documents from health-care social media was collected and used to train a semantic model and obtain word embeddings. The features of this semantic space were then used to rearrange the original TF-IDF scores through an iterative solution, so as to improve the moderate performance of the algorithm on informal texts. Tested on 200 randomly chosen documents, the proposed method decreased the TF-IDF mean error rate by a factor of 50%, reaching a mean error of 13.7% as opposed to 27.2% for the original TF-IDF.
Tasks Document Classification, Keyword Extraction, Word Embeddings
Published 2020-01-06
URL https://arxiv.org/abs/2001.09896v1
PDF https://arxiv.org/pdf/2001.09896v1.pdf
PWC https://paperswithcode.com/paper/semantic-sensitive-tf-idf-to-determine-word
Repo
Framework
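
The iterative rescoring can be pictured roughly as below: a plain TF-IDF matrix is blended, over a few iterations, with scores propagated from semantically similar words. The update rule, mixing weight and 50-dimensional embedding fallback are assumptions for illustration, not the exact STF-IDF formulation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def stf_idf(docs, embeddings, n_iter=3, alpha=0.5):
    """docs: list of strings; embeddings: dict word -> vector (assumed 50-dim here)."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(docs).toarray()           # docs x vocab TF-IDF matrix
    vocab = vec.get_feature_names_out()
    E = np.array([embeddings.get(w, np.zeros(50)) for w in vocab])
    E = E / (np.linalg.norm(E, axis=1, keepdims=True) + 1e-12)
    S = np.clip(E @ E.T, 0.0, None)                 # vocab x vocab cosine similarity
    np.fill_diagonal(S, 0.0)
    S = S / (S.sum(axis=1, keepdims=True) + 1e-12)  # row-normalised neighbour weights
    for _ in range(n_iter):                         # propagate scores through semantic space
        X = alpha * X + (1 - alpha) * X @ S.T
    return X, vocab
```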

Multi-label Prediction in Time Series Data using Deep Neural Networks

Title Multi-label Prediction in Time Series Data using Deep Neural Networks
Authors Wenyu Zhang, Devesh K. Jha, Emil Laftchiev, Daniel Nikovski
Abstract This paper addresses a multi-label predictive fault classification problem for multidimensional time-series data. While fault (event) detection problems have been thoroughly studied in literature, most of the state-of-the-art techniques can’t reliably predict faults (events) over a desired future horizon. In the most general setting of these types of problems, one or more samples of data across multiple time series can be assigned several concurrent fault labels from a finite, known set and the task is to predict the possibility of fault occurrence over a desired time horizon. This type of problem is usually accompanied by strong class imbalances where some classes are represented by only a few samples. Importantly, in many applications of the problem such as fault prediction and predictive maintenance, it is exactly these rare classes that are of most interest. To address the problem, this paper proposes a general approach that utilizes a multi-label recurrent neural network with a new cost function that accentuates learning in the imbalanced classes. The proposed algorithm is tested on two public benchmark datasets: an industrial plant dataset from the PHM Society Data Challenge, and a human activity recognition dataset. The results are compared with state-of-the-art techniques for time-series classification and evaluation is performed using the F1-score, precision and recall.
Tasks Activity Recognition, Human Activity Recognition, Time Series, Time Series Classification
Published 2020-01-27
URL https://arxiv.org/abs/2001.10098v1
PDF https://arxiv.org/pdf/2001.10098v1.pdf
PWC https://paperswithcode.com/paper/multi-label-prediction-in-time-series-data
Repo
Framework
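
A minimal PyTorch sketch of the general setup: a recurrent network over multivariate time-series windows with a multi-label (sigmoid) output, trained with a cost that up-weights rare positive classes. Per-class pos_weight in BCEWithLogitsLoss stands in for the paper's accentuated cost function; layer sizes and weights are placeholders.

```python
import torch
import torch.nn as nn

class MultiLabelRNN(nn.Module):
    def __init__(self, n_features, n_labels, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_labels)

    def forward(self, x):                  # x: (batch, time, n_features)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])            # raw logits, one per label

# pos_weight ~ (#negatives / #positives) per class accentuates the rare classes
model = MultiLabelRNN(n_features=12, n_labels=5)
pos_weight = torch.tensor([1.0, 3.0, 10.0, 25.0, 50.0])
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

x = torch.randn(8, 100, 12)               # toy batch: 8 windows of 100 time steps
y = torch.randint(0, 2, (8, 5)).float()   # concurrent multi-label targets
loss = criterion(model(x), y)
loss.backward()
```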

HumBug Zooniverse: a crowd-sourced acoustic mosquito dataset

Title HumBug Zooniverse: a crowd-sourced acoustic mosquito dataset
Authors Ivan Kiskin, Adam D. Cobb, Lawrence Wang, Stephen Roberts
Abstract Mosquitoes are the only known vector of malaria, which leads to hundreds of thousands of deaths each year. Understanding the number and location of potential mosquito vectors is of paramount importance to aid the reduction of malaria transmission cases. In recent years, deep learning has become widely used for bioacoustic classification tasks. In order to enable further research applications in this field, we release a new dataset of mosquito audio recordings. With over a thousand contributors, we obtained 195,434 labels of two second duration, of which approximately 10 percent signify mosquito events. We present an example use of the dataset, in which we train a convolutional neural network on log-Mel features, showcasing the information content of the labels. We hope this will become a vital resource for those researching all aspects of malaria, and add to the existing audio datasets for bioacoustic detection and signal processing.
Tasks
Published 2020-01-14
URL https://arxiv.org/abs/2001.04733v2
PDF https://arxiv.org/pdf/2001.04733v2.pdf
PWC https://paperswithcode.com/paper/humbug-zooniverse-a-crowd-sourced-acoustic
Repo
Framework
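
A small sketch of the example use described above: log-Mel features computed from two-second clips and fed to a convolutional classifier. The file name, sample rate and architecture are placeholders rather than the paper's exact configuration.

```python
import librosa
import torch
import torch.nn as nn

def log_mel(path, sr=8000, n_mels=64):
    y, _ = librosa.load(path, sr=sr, duration=2.0)       # two-second clip
    m = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(m)                         # (n_mels, frames)

class MosquitoCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 2),                # mosquito vs. background
        )

    def forward(self, x):                                  # x: (batch, 1, n_mels, frames)
        return self.net(x)

feats = log_mel("clip_0001.wav")                           # hypothetical file name
logits = MosquitoCNN()(torch.tensor(feats)[None, None].float())
```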

Core-Collapse Supernova Gravitational-Wave Search and Deep Learning Classification

Title Core-Collapse Supernova Gravitational-Wave Search and Deep Learning Classification
Authors Alberto Iess, Elena Cuoco, Filip Morawski, Jade Powell
Abstract We describe a search and classification procedure for gravitational waves emitted by core-collapse supernova (CCSN) explosions, using a convolutional neural network (CNN) combined with an event trigger generator known as Wavelet Detection Filter (WDF). We employ both a 1-D CNN search using time series gravitational-wave data as input, and a 2-D CNN search with time-frequency representation of the data as input. To test the accuracies of our 1-D and 2-D CNN classification, we add CCSN waveforms from the most recent hydrodynamical simulations of neutrino-driven core-collapse to simulated Gaussian colored noise with the Virgo interferometer and the planned Einstein Telescope sensitivity curve. We find classification accuracies, for a single detector, of over 95% for both 1-D and 2-D CNN pipelines. For the first time in machine learning CCSN studies, we add short duration detector noise transients to our data to test the robustness of our method against false alarms created by detector noise artifacts. Further to this, we show that the CNN can distinguish between different types of CCSN waveform models.
Tasks Time Series
Published 2020-01-01
URL https://arxiv.org/abs/2001.00279v1
PDF https://arxiv.org/pdf/2001.00279v1.pdf
PWC https://paperswithcode.com/paper/core-collapse-supernova-gravitational-wave
Repo
Framework
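
A compact sketch of the 1-D branch: a convolutional classifier applied directly to strain time-series segments. The architecture is illustrative only; the WDF trigger generation, the simulated detector noise and the 2-D time-frequency branch are not shown.

```python
import torch
import torch.nn as nn

class Strain1DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=16, stride=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_classes)     # e.g. CCSN signal vs. noise

    def forward(self, x):                              # x: (batch, 1, n_samples)
        return self.classifier(self.features(x))

logits = Strain1DCNN()(torch.randn(4, 1, 4096))        # four toy strain segments
```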

Accelerating RNN Transducer Inference via One-Step Constrained Beam Search

Title Accelerating RNN Transducer Inference via One-Step Constrained Beam Search
Authors Juntae Kim, Yoonhan Lee
Abstract We propose a one-step constrained (OSC) beam search to accelerate recurrent neural network (RNN) transducer (RNN-T) inference. The original RNN-T beam search contains a while-loop that slows down the decoding process. The OSC beam search eliminates this while-loop by vectorizing multiple hypotheses. This vectorization is nontrivial because the expansions of the hypotheses within the original RNN-T beam search can differ from each other. However, we found that the hypotheses are expanded only once at each decoding step in most cases; thus, we constrain the maximum number of expansions to one, thereby allowing vectorization of the hypotheses. For further acceleration, we assign constraints to the prefixes of the hypotheses to prune the redundant search space. In addition, the OSC beam search checks for duplicates among hypotheses during decoding, as duplication can undesirably shrink the search space. We achieve a significant speedup over other RNN-T beam search methods, with lower phoneme and word error rates.
Tasks
Published 2020-02-10
URL https://arxiv.org/abs/2002.03577v1
PDF https://arxiv.org/pdf/2002.03577v1.pdf
PWC https://paperswithcode.com/paper/accelerating-rnn-transducer-inference-via-one
Repo
Framework
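
The core constraint can be sketched as follows for a single time frame: every live hypothesis is expanded by at most one non-blank label (no inner while-loop), and duplicate prefixes are merged before pruning to the beam width. The joint-network call that produces the log-probabilities is omitted, and duplicates are merged with a max rather than a log-sum; this is a loose illustration of the idea, not the authors' algorithm.

```python
import numpy as np

def osc_beam_step(hyps, scores, log_probs, beam=4, blank=0):
    """One frame of a simplified one-step-constrained beam search.
    hyps: list of label sequences; scores: their log-probabilities;
    log_probs: (len(hyps), vocab) joint-network output for this frame."""
    candidates = []
    for i, (h, s) in enumerate(zip(hyps, scores)):
        candidates.append((h, s + log_probs[i, blank]))        # stay: emit blank
        for k in np.argsort(log_probs[i])[::-1][:beam]:        # expand at most once
            if k != blank:
                candidates.append((h + [int(k)], s + log_probs[i, k]))
    best = {}                                                  # duplication check
    for h, s in candidates:
        best[tuple(h)] = max(best.get(tuple(h), -np.inf), s)
    top = sorted(best.items(), key=lambda kv: -kv[1])[:beam]   # prune to beam width
    return [list(h) for h, _ in top], [s for _, s in top]
```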

Gravitational-wave parameter estimation with autoregressive neural network flows

Title Gravitational-wave parameter estimation with autoregressive neural network flows
Authors Stephen R. Green, Christine Simpson, Jonathan Gair
Abstract We introduce the use of autoregressive normalizing flows for rapid likelihood-free inference of binary black hole system parameters from gravitational-wave data with deep neural networks. A normalizing flow is an invertible mapping on a sample space that can be used to induce a transformation from a simple probability distribution to a more complex one: if the simple distribution can be rapidly sampled and its density evaluated, then so can the complex distribution. Our first application to gravitational waves uses an autoregressive flow, conditioned on detector strain data, to map a multivariate standard normal distribution into the posterior distribution over system parameters. We train the model on artificial strain data consisting of IMRPhenomPv2 waveforms drawn from a five-parameter $(m_1, m_2, \phi_0, t_c, d_L)$ prior and stationary Gaussian noise realizations with a fixed power spectral density. This gives performance comparable to current best deep-learning approaches to gravitational-wave parameter estimation. We then build a more powerful latent variable model by incorporating autoregressive flows within the variational autoencoder framework. This model has performance comparable to Markov chain Monte Carlo and, in particular, successfully models the multimodal $\phi_0$ posterior. Finally, we train the autoregressive latent variable model on an expanded parameter space, including also aligned spins $(\chi_{1z}, \chi_{2z})$ and binary inclination $\theta_{JN}$, and show that all parameters and degeneracies are well-recovered. In all cases, sampling is extremely fast, requiring less than two seconds to draw $10^4$ posterior samples.
Tasks
Published 2020-02-18
URL https://arxiv.org/abs/2002.07656v1
PDF https://arxiv.org/pdf/2002.07656v1.pdf
PWC https://paperswithcode.com/paper/gravitational-wave-parameter-estimation-with-1
Repo
Framework
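
A toy version of the forward (sampling) direction of a conditional autoregressive flow: each output dimension is an affine transform of a standard-normal sample, with the shift and log-scale predicted from the earlier dimensions and a context vector standing in for the strain-derived conditioning. Training, density evaluation and the variational-autoencoder extension are not shown; the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class AffineAutoregressiveFlow(nn.Module):
    """Toy conditional autoregressive flow: dimension i of the output is an affine
    function of the base sample, with shift/scale predicted from the earlier output
    dimensions and a context vector (e.g. features of the detector strain)."""
    def __init__(self, dim, context_dim, hidden=64):
        super().__init__()
        self.conds = nn.ModuleList([
            nn.Sequential(nn.Linear(i + context_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 2))
            for i in range(dim)
        ])

    def forward(self, z, context):                      # z: (batch, dim) standard normal
        xs = []
        for i, net in enumerate(self.conds):
            prev = torch.cat(xs, dim=1) if xs else z.new_zeros(z.size(0), 0)
            shift, log_scale = net(torch.cat([prev, context], dim=1)).chunk(2, dim=1)
            xs.append(shift + torch.exp(log_scale) * z[:, i:i + 1])
        return torch.cat(xs, dim=1)

flow = AffineAutoregressiveFlow(dim=5, context_dim=32)  # 5 parameters, toy context size
samples = flow(torch.randn(1000, 5), torch.randn(1000, 32))
```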

An enhanced Tree-LSTM architecture for sentence semantic modeling using typed dependencies

Title An enhanced Tree-LSTM architecture for sentence semantic modeling using typed dependencies
Authors Jeena Kleenankandy, K. A. Abdul Nazeer
Abstract Tree-based Long Short-Term Memory (LSTM) networks have become the state of the art for modeling the meaning of language texts, as they can effectively exploit grammatical syntax and thereby non-linear dependencies among the words of a sentence. However, most of these models cannot recognize differences in meaning caused by a change in the semantic roles of words or phrases, because they do not take into account the type of grammatical relations, also known as typed dependencies, in the sentence structure. This paper proposes an enhanced LSTM architecture, called relation gated LSTM, which can model the relationship between two inputs of a sequence using a control input. We also introduce a Tree-LSTM model called Typed Dependency Tree-LSTM that uses the sentence dependency parse structure as well as the dependency types to embed sentence meaning into a dense vector. The proposed model outperformed its type-unaware counterpart on two typical NLP tasks - Semantic Relatedness Scoring and Sentiment Analysis - in fewer training epochs. The results were comparable or competitive with other state-of-the-art models. Qualitative analysis showed that changes in the voice of sentences had little effect on the model’s predicted scores, while changes in nominal (noun) words had a more significant impact. The model recognized subtle semantic relationships in sentence pairs. The magnitudes of the learned typed dependency embeddings were also in agreement with human intuition. These findings indicate the significance of grammatical relations in sentence modeling, and the proposed models can serve as a base for future research in this direction.
Tasks Sentiment Analysis
Published 2020-02-18
URL https://arxiv.org/abs/2002.07775v1
PDF https://arxiv.org/pdf/2002.07775v1.pdf
PWC https://paperswithcode.com/paper/an-enhanced-tree-lstm-architecture-for
Repo
Framework
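
One way to picture the relation gating is sketched below: in a child-sum Tree-LSTM cell, each child's hidden state is gated by an embedding of its typed dependency relation before being summed into the parent. This is an illustration of the idea with made-up dimensions, not the authors' exact relation gated LSTM equations.

```python
import torch
import torch.nn as nn

class RelationGatedChildSumCell(nn.Module):
    """Child-sum Tree-LSTM cell where each child's hidden state is gated by an
    embedding of its typed dependency relation before aggregation (a sketch)."""
    def __init__(self, x_dim, h_dim, n_relations):
        super().__init__()
        self.rel_emb = nn.Embedding(n_relations, h_dim)
        self.iou = nn.Linear(x_dim + h_dim, 3 * h_dim)   # input, output, update gates
        self.f = nn.Linear(x_dim + h_dim, h_dim)         # per-child forget gate

    def forward(self, x, child_h, child_c, child_rel):
        # x: (x_dim,); child_h, child_c: (n_children, h_dim); child_rel: (n_children,)
        gated_h = child_h * torch.sigmoid(self.rel_emb(child_rel))   # relation gating
        h_sum = gated_h.sum(dim=0)
        i, o, u = self.iou(torch.cat([x, h_sum])).chunk(3)
        f = torch.sigmoid(self.f(torch.cat([x.expand(child_h.size(0), -1), gated_h], dim=1)))
        c = torch.sigmoid(i) * torch.tanh(u) + (f * child_c).sum(dim=0)
        return torch.sigmoid(o) * torch.tanh(c), c

cell = RelationGatedChildSumCell(x_dim=50, h_dim=64, n_relations=40)
h, c = cell(torch.randn(50), torch.randn(3, 64), torch.randn(3, 64), torch.tensor([2, 5, 7]))
```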

Sentiment Analysis Using Averaged Weighted Word Vector Features

Title Sentiment Analysis Using Averaged Weighted Word Vector Features
Authors Ali Erkan, Tunga Gungor
Abstract People use the world wide web heavily to share their experiences with entities such as products, services, or travel destinations. Texts that provide online feedback in the form of reviews and comments are essential for making consumer decisions. These comments create a valuable source that may be used to measure satisfaction with products or services. Sentiment analysis is the task of identifying opinions expressed in such text fragments. In this work, we develop two methods that combine different types of word vectors to learn and estimate the polarity of reviews. We build average review vectors from word vectors and add weights to these review vectors using word frequencies in positive and negative sensitivity-tagged reviews. We applied the methods to several datasets from different domains that are used as standard benchmarks for sentiment analysis. We ensembled the techniques with each other and with existing methods, and we compared them with the approaches in the literature. The results show that our approaches outperform state-of-the-art success rates.
Tasks Sentiment Analysis
Published 2020-02-13
URL https://arxiv.org/abs/2002.05606v1
PDF https://arxiv.org/pdf/2002.05606v1.pdf
PWC https://paperswithcode.com/paper/sentiment-analysis-using-averaged-weighted
Repo
Framework
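
The weighting idea can be sketched as follows: a review vector is the average of its word vectors, with each word weighted by how strongly its frequency is skewed towards positive or negative training reviews. The log-ratio weighting below is an assumed illustration, not the paper's exact scheme.

```python
import numpy as np

def weighted_review_vector(tokens, emb, pos_freq, neg_freq):
    """tokens: review words; emb: dict word -> vector;
    pos_freq/neg_freq: word counts in positive/negative training reviews."""
    vecs, weights = [], []
    for w in tokens:
        if w not in emb:
            continue
        p, n = pos_freq.get(w, 0) + 1, neg_freq.get(w, 0) + 1   # add-one smoothing
        weights.append(abs(np.log(p / n)) + 1e-3)               # polarity strength
        vecs.append(emb[w])
    if not vecs:
        return np.zeros(next(iter(emb.values())).shape)
    return np.average(np.array(vecs), axis=0, weights=weights)
```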

Related Tasks can Share! A Multi-task Framework for Affective language

Title Related Tasks can Share! A Multi-task Framework for Affective language
Authors Kumar Shikhar Deep, Md Shad Akhtar, Asif Ekbal, Pushpak Bhattacharyya
Abstract Expressing the polarity of sentiment as ‘positive’ or ‘negative’ usually has limited scope compared with the intensity/degree of polarity. These two tasks (i.e. sentiment classification and sentiment intensity prediction) are closely related and may assist each other during the learning process. In this paper, we propose to leverage the relatedness of multiple tasks in a multi-task learning framework. Our multi-task model is based on a convolutional-Gated Recurrent Unit (GRU) framework, which is further assisted by a diverse hand-crafted feature set. Evaluation and analysis suggest that joint learning of the related tasks in a multi-task framework can outperform each of the individual tasks in single-task frameworks.
Tasks Multi-Task Learning, Sentiment Analysis
Published 2020-02-06
URL https://arxiv.org/abs/2002.02154v1
PDF https://arxiv.org/pdf/2002.02154v1.pdf
PWC https://paperswithcode.com/paper/related-tasks-can-share-a-multi-task
Repo
Framework
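
A minimal sketch of a shared convolution-plus-GRU encoder with two heads, one for sentiment classification and one for intensity regression, trained with a joint loss. Layer sizes and the omission of the hand-crafted feature set are simplifications for illustration.

```python
import torch
import torch.nn as nn

class MultiTaskConvGRU(nn.Module):
    def __init__(self, vocab, emb_dim=100, hidden=64, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.cls_head = nn.Linear(hidden, n_classes)    # sentiment class
        self.reg_head = nn.Linear(hidden, 1)            # sentiment intensity

    def forward(self, tokens):                          # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)            # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)    # back to (batch, seq, hidden)
        _, h = self.gru(x)
        return self.cls_head(h[-1]), self.reg_head(h[-1])

model = MultiTaskConvGRU(vocab=20000)
logits, intensity = model(torch.randint(0, 20000, (8, 40)))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (8,))) \
     + nn.MSELoss()(intensity.squeeze(1), torch.rand(8))   # joint multi-task loss
loss.backward()
```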

Real-Time target detection in maritime scenarios based on YOLOv3 model

Title Real-Time target detection in maritime scenarios based on YOLOv3 model
Authors Alessandro Betti, Benedetto Michelozzi, Andrea Bracci, Andrea Masini
Abstract In this work a novel ship dataset is proposed, consisting of more than 56k images of marine vessels collected by means of web scraping and covering 12 ship categories. A YOLOv3 single-stage detector based on the Keras API is built on top of this dataset. Current results on four categories (cargo ship, naval ship, oil ship and tug ship) show Average Precision of up to 96% for an Intersection over Union (IoU) of 0.5 and satisfactory detection performance up to an IoU of 0.8. A Data Analytics GUI service based on the QT framework and the Darknet-53 engine is also implemented in order to simplify the deployment process and analyse massive amounts of images, even for people without Data Science expertise.
Tasks
Published 2020-02-10
URL https://arxiv.org/abs/2003.00800v1
PDF https://arxiv.org/pdf/2003.00800v1.pdf
PWC https://paperswithcode.com/paper/real-time-target-detection-in-maritime
Repo
Framework
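
The detector itself builds on the Keras YOLOv3 stack and is not reproduced here; the snippet below only shows the Intersection over Union criterion used in the evaluation, where a detection of the correct class counts as a true positive when its IoU with a ground-truth box exceeds the chosen threshold (0.5 or 0.8 above).

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-12)

print(iou((10, 10, 60, 60), (15, 15, 60, 60)))   # ~0.81: a hit at IoU 0.5, a miss at 0.8
```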

Using Deep Learning to Explore Local Physical Similarity for Global-scale Bridging in Thermal-hydraulic Simulation

Title Using Deep Learning to Explore Local Physical Similarity for Global-scale Bridging in Thermal-hydraulic Simulation
Authors Han Bao, Nam Dinh, Linyu Lin, Robert Youngblood, Jeffrey Lane, Hongbin Zhang
Abstract Current system thermal-hydraulic codes have limited credibility in simulating real plant conditions, especially when the geometry and boundary conditions are extrapolated beyond the range of test facilities. This paper proposes a data-driven approach, Feature Similarity Measurement (FSM), to establish a technical basis for overcoming these difficulties by exploring local patterns using machine learning. The underlying local patterns in multiscale data are represented by a set of physical features that embody information from the physical system of interest, empirical correlations, and the effect of mesh size. After performing a limited number of high-fidelity numerical simulations and a sufficient amount of fast-running coarse-mesh simulations, an error database is built, and deep learning is applied to construct and explore the relationship between the local physical features and the simulation errors. Case studies based on mixed convection have been designed to demonstrate the capability of data-driven models in bridging global scale gaps.
Tasks
Published 2020-01-06
URL https://arxiv.org/abs/2001.04298v1
PDF https://arxiv.org/pdf/2001.04298v1.pdf
PWC https://paperswithcode.com/paper/using-deep-learning-to-explore-local-physical
Repo
Framework
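
A minimal sketch of the data-driven step, assuming the error database has already been assembled: a small network maps local physical features extracted from coarse-mesh results to the recorded local simulation error. The feature count, architecture and random stand-in data are placeholders.

```python
import torch
import torch.nn as nn

error_model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),       # 8 local physical features (placeholder count)
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),                  # predicted local simulation error
)

features = torch.randn(256, 8)         # stand-in for error-database inputs
errors = torch.randn(256, 1)           # stand-in for recorded coarse-vs-fine errors
opt = torch.optim.Adam(error_model.parameters(), lr=1e-3)
for _ in range(100):                   # short illustrative training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(error_model(features), errors)
    loss.backward()
    opt.step()
```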

Predicting TUG score from gait characteristics based on video analysis and machine learning

Title Predicting TUG score from gait characteristics based on video analysis and machine learning
Authors Jian Ma
Abstract Falls are a leading cause of death that burdens the elderly and society. The Timed Up and Go (TUG) test is a common tool for fall risk assessment. In this paper, we propose a method for predicting the TUG score from gait characteristics extracted from video, based on computer vision and machine learning technologies. First, 3D pose is estimated from video captured with 2D and 3D cameras during human motion, and a group of gait characteristics is computed from the 3D pose series. After that, copula entropy is used to select the characteristics that are most strongly associated with the TUG score. Finally, the selected characteristics are fed into predictive models to predict the TUG score. Experiments on real-world data demonstrated the effectiveness of the proposed method. As a byproduct, associations between the TUG score and several gait characteristics were discovered, which lays the scientific foundation of the proposed method and makes the predictive models built this way interpretable to clinical users.
Tasks
Published 2020-02-23
URL https://arxiv.org/abs/2003.00875v1
PDF https://arxiv.org/pdf/2003.00875v1.pdf
PWC https://paperswithcode.com/paper/predicting-tug-score-from-gait
Repo
Framework
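
The pipeline can be sketched schematically as below, assuming the gait characteristics have already been extracted from the 3D pose series. Mutual information is used here as a stand-in for the copula-entropy association measure of the paper, and the synthetic data and linear regressor are placeholders.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
gait_features = rng.normal(size=(120, 20))            # 120 subjects, 20 characteristics
tug_scores = gait_features[:, 0] * 2 + rng.normal(scale=0.5, size=120)

assoc = mutual_info_regression(gait_features, tug_scores)   # association with TUG score
selected = np.argsort(assoc)[::-1][:5]                       # keep the 5 most associated
model = LinearRegression().fit(gait_features[:, selected], tug_scores)
predicted = model.predict(gait_features[:, selected])
```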

Deep Networks as Logical Circuits: Generalization and Interpretation

Title Deep Networks as Logical Circuits: Generalization and Interpretation
Authors Christopher Snyder, Sriram Vishwanath
Abstract Not only are Deep Neural Networks (DNNs) black box models, but also we frequently conceptualize them as such. We lack good interpretations of the mechanisms linking inputs to outputs. Therefore, we find it difficult to analyze in human-meaningful terms (1) what the network learned and (2) whether the network learned. We present a hierarchical decomposition of the DNN discrete classification map into logical (AND/OR) combinations of intermediate (True/False) classifiers of the input. Those classifiers that cannot be further decomposed, called atoms, are (interpretable) linear classifiers. Taken together, we obtain a logical circuit with linear classifier inputs that computes the same label as the DNN. This circuit does not structurally resemble the network architecture, and it may require many fewer parameters, depending on the configuration of weights. In these cases, we obtain simultaneously an interpretation and a generalization bound (for the original DNN), connecting two fronts which have historically been investigated separately. Unlike compression techniques, our representation is exact. We motivate the utility of this perspective by studying DNNs in simple, controlled settings, where we obtain superior generalization bounds despite using only combinatorial information (e.g. no margin information). We demonstrate how to “open the black box” on the MNIST dataset. We show that the learned, internal, logical computations correspond to semantically meaningful (unlabeled) categories that allow DNN descriptions in plain English. We improve the generalization of an already trained network by interpreting, diagnosing, and replacing components of the logical circuit that is the DNN.
Tasks
Published 2020-03-25
URL https://arxiv.org/abs/2003.11619v1
PDF https://arxiv.org/pdf/2003.11619v1.pdf
PWC https://paperswithcode.com/paper/deep-networks-as-logical-circuits
Repo
Framework
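
A small illustration of the fact the decomposition builds on: once the ReLU on/off pattern is fixed, the network restricted to that region is an exact linear function of the input, which is what makes the atoms linear classifiers. The snippet recovers that local linear map for one input of a toy two-layer network; the full AND/OR circuit construction of the paper is not shown.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
x = torch.randn(1, 4)

with torch.no_grad():
    pattern = (net[0](x) > 0).float()                 # fixed ReLU on/off pattern for x
    W1, b1 = net[0].weight, net[0].bias
    W2, b2 = net[2].weight, net[2].bias
    w_local = W2 @ (pattern.T * W1)                   # effective linear weights
    b_local = W2 @ (pattern.squeeze(0) * b1) + b2     # effective bias
    # within this activation region the network IS this linear classifier
    assert torch.allclose(net(x), x @ w_local.T + b_local, atol=1e-5)
```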

Knowledge graph based methods for record linkage

Title Knowledge graph based methods for record linkage
Authors B. Gautam, O. Ramos Terrades, J. M. Pujades, M. Valls
Abstract Nowadays, the use of individual-level data is common in Historical Demography, as a consequence of a predominantly life-course approach to understanding demographic behaviour, family transitions, mobility, etc. Advances in record linkage are key in these disciplines, since they allow the volume and complexity of the data to be analyzed to increase. However, current methods are constrained to linking data coming from the same kind of sources. Knowledge graphs are flexible semantic representations which allow data variability and semantic relations to be encoded in a structured manner. In this paper we propose the use of knowledge graphs to tackle the record linkage task. The proposed method, named WERL, takes advantage of the main knowledge graph properties and learns embedding vectors to encode census information. These embeddings are properly weighted to maximize record linkage performance. We have evaluated this method on benchmark data sets and compared it to related methods, with stimulating and satisfactory results.
Tasks
Published 2020-03-06
URL https://arxiv.org/abs/2003.03136v1
PDF https://arxiv.org/pdf/2003.03136v1.pdf
PWC https://paperswithcode.com/paper/knowledge-graph-based-methods-for-record
Repo
Framework
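
A schematic sketch of the linkage step, assuming attribute-level embedding vectors for each census record are already available: a weighted cosine similarity over shared attributes scores candidate pairs, and pairs above a threshold are linked. The weighting and threshold below are illustrative rather than the learned WERL objective, and the knowledge-graph embedding learning itself is not shown.

```python
import numpy as np

def record_similarity(rec_a, rec_b, weights):
    """rec_a, rec_b: dict attribute -> embedding; weights: dict attribute -> float."""
    score, total = 0.0, 0.0
    for attr, w in weights.items():
        if attr in rec_a and attr in rec_b:
            u, v = rec_a[attr], rec_b[attr]
            cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
            score += w * cos
            total += w
    return score / total if total else 0.0

def link(records_a, records_b, weights, threshold=0.8):
    """Return index pairs of records whose weighted similarity clears the threshold."""
    return [(i, j) for i, a in enumerate(records_a)
                   for j, b in enumerate(records_b)
                   if record_similarity(a, b, weights) >= threshold]
```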