Paper Group ANR 70
Classifying the Valence of Autobiographical Memories from fMRI Data. Question Answering over Knowledge Graphs via Structural Query Patterns. Detection of Glottal Closure Instants from Speech Signals: a Quantitative Review. CharBot: A Simple and Effective Method for Evading DGA Classifiers. Deep kernel learning for integral measurements. Predicting …
Classifying the Valence of Autobiographical Memories from fMRI Data
Title | Classifying the Valence of Autobiographical Memories from fMRI Data |
Authors | Alex Frid, Larry M. Manevitz, Norberto Eiji Nawa |
Abstract | We show that fMRI analysis using machine learning tools is sufficient to distinguish valence (i.e., positive or negative) of freely retrieved autobiographical memories in a cross-participant setting. Our methodology uses feature selection (ReliefF) in combination with boosting methods, both applied directly to data represented in voxel space. In previous work using the same data set, Nawa and Ando showed that whole-brain based classification could achieve above-chance classification accuracy only when both training and testing data came from the same individual. In a cross-participant setting, classification results were not statistically significant. Additionally, on average the classification accuracy obtained when using ReliefF is substantially higher than previous results - 81% for the within-participant classification, and 62% for the cross-participant classification. Furthermore, since features are defined in voxel space, it is possible to show brain maps indicating the regions that are most relevant in determining the results of the classification. Interestingly, the voxels that were selected using the proposed computational pipeline seem to be consistent with current neurophysiological theories regarding the brain regions actively involved in autobiographical memory processes. |
Tasks | Feature Selection |
Published | 2019-09-10 |
URL | https://arxiv.org/abs/1909.04390v1 |
https://arxiv.org/pdf/1909.04390v1.pdf | |
PWC | https://paperswithcode.com/paper/classifying-the-valence-of-autobiographical |
Repo | |
Framework | |
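The pipeline this entry describes, ReliefF feature selection followed by a boosting classifier on voxel-space features, can be sketched with off-the-shelf components. This is a minimal, hypothetical sketch: the paper does not specify an implementation, so the `skrebate` ReliefF and scikit-learn's gradient boosting are assumptions, and the voxel matrix `X` and valence labels `y` are random placeholders, not the paper's data.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from skrebate import ReliefF  # assumed stand-in for the paper's ReliefF step

# Placeholder data: rows are retrieval trials, columns are voxels, labels are valence (0/1).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5000))   # 120 trials x 5000 voxels
y = rng.integers(0, 2, size=120)   # positive (1) vs negative (0) memories

pipeline = Pipeline([
    # Keep the voxels ranked highest by ReliefF.
    ("relieff", ReliefF(n_features_to_select=200, n_neighbors=10)),
    # Boosted trees on the selected voxels.
    ("boost", GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)),
])

# A faithful cross-participant evaluation would group folds by participant; plain CV shown here.
scores = cross_val_score(pipeline, X, y, cv=5)
print("mean accuracy:", scores.mean())
```

Because ReliefF scores are attached to individual voxels, the selected feature indices can be mapped back to brain coordinates, which is what makes the brain maps mentioned in the abstract possible.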
Question Answering over Knowledge Graphs via Structural Query Patterns
Title | Question Answering over Knowledge Graphs via Structural Query Patterns |
Authors | Weiguo Zheng, Mei Zhang |
Abstract | Natural language question answering over knowledge graphs is an important and interesting task as it enables common users to gain accurate answers in an easy and intuitive manner. However, it remains a challenge to bridge the gap between unstructured questions and structured knowledge graphs. To address the problem, a natural approach is to build a structured query that represents the input question. Evaluating the structured query over the knowledge graph then produces answers to the question. Distinct from the existing methods that are based on semantic parsing or templates, we propose an effective approach powered by a novel notion, the structural query pattern, in this paper. Given an input question, we first generate its query sketch that is compatible with the underlying structure of the knowledge graph. Then, we complete the query graph by labeling the nodes and edges under the guidance of the structural query pattern. Finally, answers can be retrieved by executing the constructed query graph over the knowledge graph. Evaluations on three question answering benchmarks show that our proposed approach outperforms state-of-the-art methods significantly. |
Tasks | Knowledge Graphs, Question Answering, Semantic Parsing |
Published | 2019-10-22 |
URL | https://arxiv.org/abs/1910.09760v2 |
https://arxiv.org/pdf/1910.09760v2.pdf | |
PWC | https://paperswithcode.com/paper/question-answering-over-knowledge-graphs-via |
Repo | |
Framework | |
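To make the sketch-then-label idea concrete, here is a toy illustration of the final step, executing a completed query graph over a knowledge graph. This is not the paper's algorithm; the entity and relation names, and the dictionary-based graph, are invented for illustration.

```python
# Toy knowledge graph as a set of (subject, relation, object) triples.
KG = {
    ("Ang_Lee", "directed", "Life_of_Pi"),
    ("Ang_Lee", "directed", "Brokeback_Mountain"),
    ("Life_of_Pi", "based_on", "Life_of_Pi_novel"),
}

def execute_query_graph(edges, bindings=None):
    """Match a query graph (list of triples with '?x'-style variables) against KG."""
    bindings = bindings or {}
    if not edges:
        return [bindings]
    (s, r, o), rest = edges[0], edges[1:]
    results = []
    for (ks, kr, ko) in KG:
        if kr != r:
            continue
        new = dict(bindings)
        ok = True
        for q_term, kg_term in ((s, ks), (o, ko)):
            if q_term.startswith("?"):
                if new.get(q_term, kg_term) != kg_term:
                    ok = False        # conflicts with an earlier binding
                new[q_term] = kg_term
            elif q_term != kg_term:
                ok = False            # constant in the query does not match
        if ok:
            results.extend(execute_query_graph(rest, new))
    return results

# "Which films directed by Ang Lee are based on a novel?"
# Structural pattern: entity -directed-> ?film -based_on-> ?work, slots then labeled.
query = [("Ang_Lee", "directed", "?film"), ("?film", "based_on", "?work")]
print([b["?film"] for b in execute_query_graph(query)])   # ['Life_of_Pi']
```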
Detection of Glottal Closure Instants from Speech Signals: a Quantitative Review
Title | Detection of Glottal Closure Instants from Speech Signals: a Quantitative Review |
Authors | Thomas Drugman, Mark Thomas, Jon Gudnason, Patrick Naylor, Thierry Dutoit |
Abstract | The pseudo-periodicity of voiced speech can be exploited in several speech processing applications. This requires however that the precise locations of the Glottal Closure Instants (GCIs) are available. The focus of this paper is the evaluation of automatic methods for the detection of GCIs directly from the speech waveform. Five state-of-the-art GCI detection algorithms are compared using six different databases with contemporaneous electroglottographic recordings as ground truth, and containing many hours of speech by multiple speakers. The five techniques compared are the Hilbert Envelope-based detection (HE), the Zero Frequency Resonator-based method (ZFR), the Dynamic Programming Phase Slope Algorithm (DYPSA), the Speech Event Detection using the Residual Excitation And a Mean-based Signal (SEDREAMS) and the Yet Another GCI Algorithm (YAGA). The efficacy of these methods is first evaluated on clean speech, both in terms of reliability and accuracy. Their robustness to additive noise and to reverberation is also assessed. A further contribution of the paper is the evaluation of their performance on a concrete application of speech processing: the causal-anticausal decomposition of speech. It is shown that for clean speech, SEDREAMS and YAGA are the best performing techniques, both in terms of identification rate and accuracy. ZFR and SEDREAMS also show a superior robustness to additive noise and reverberation. |
Tasks | |
Published | 2019-12-28 |
URL | https://arxiv.org/abs/2001.00473v1 |
https://arxiv.org/pdf/2001.00473v1.pdf | |
PWC | https://paperswithcode.com/paper/detection-of-glottal-closure-instants-from |
Repo | |
Framework | |
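The reliability and accuracy measures used in this kind of comparison are typically defined per larynx cycle: the identification rate is the fraction of reference cycles containing exactly one detected GCI, and accuracy summarizes the timing error of correctly identified GCIs. Below is a simplified sketch of those two measures against EGG-derived references; the 0.25 ms error threshold is a commonly used value, assumed here for illustration rather than taken from this paper.

```python
import numpy as np

def gci_metrics(ref_gcis, det_gcis, accuracy_threshold=0.00025):
    """Identification rate and accuracy of detected GCIs against reference instants.

    ref_gcis, det_gcis: sorted arrays of instants in seconds.
    A larynx cycle is the interval between two consecutive reference GCIs.
    """
    ref_gcis, det_gcis = np.asarray(ref_gcis), np.asarray(det_gcis)
    errors, identified = [], 0
    n_cycles = len(ref_gcis) - 1
    for start, end in zip(ref_gcis[:-1], ref_gcis[1:]):
        in_cycle = det_gcis[(det_gcis >= start) & (det_gcis < end)]
        if len(in_cycle) == 1:              # exactly one detection: cycle identified
            identified += 1
            errors.append(abs(in_cycle[0] - start))
        # zero detections is a miss, several detections are false alarms (not scored here)
    identification_rate = identified / n_cycles
    errors = np.array(errors)
    # Accuracy: share of identified GCIs whose timing error is below the threshold.
    accuracy = float(np.mean(errors < accuracy_threshold)) if len(errors) else 0.0
    return identification_rate, accuracy

print(gci_metrics([0.010, 0.018, 0.026, 0.034], [0.0101, 0.0185, 0.030]))
```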
CharBot: A Simple and Effective Method for Evading DGA Classifiers
Title | CharBot: A Simple and Effective Method for Evading DGA Classifiers |
Authors | Jonathan Peck, Claire Nie, Raaghavi Sivaguru, Charles Grumer, Femi Olumofin, Bin Yu, Anderson Nascimento, Martine De Cock |
Abstract | Domain generation algorithms (DGAs) are commonly leveraged by malware to create lists of domain names which can be used for command and control (C&C) purposes. Approaches based on machine learning have recently been developed to automatically detect generated domain names in real-time. In this work, we present a novel DGA called CharBot which is capable of producing large numbers of unregistered domain names that are not detected by state-of-the-art classifiers for real-time detection of DGAs, including the recently published methods FANCI (a random forest based on human-engineered features) and LSTM.MI (a deep learning approach). CharBot is very simple and effective, and requires no knowledge of the targeted DGA classifiers. We show that retraining the classifiers on CharBot samples is not a viable defense strategy. We believe these findings show that DGA classifiers are inherently vulnerable to adversarial attacks if they rely only on the domain name string to make a decision. Designing a robust DGA classifier may, therefore, necessitate the use of additional information besides the domain name alone. To the best of our knowledge, CharBot is the simplest and most efficient black-box adversarial attack against DGA classifiers proposed to date. |
Tasks | Adversarial Attack |
Published | 2019-05-03 |
URL | https://arxiv.org/abs/1905.01078v2 |
https://arxiv.org/pdf/1905.01078v2.pdf | |
PWC | https://paperswithcode.com/paper/charbot-a-simple-and-effective-method-for |
Repo | |
Framework | |
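As a purely illustrative sketch of the kind of black-box, character-level domain perturbation the abstract alludes to (this is not the published CharBot algorithm; the seed list, number of edits, and character set are invented for illustration):

```python
import random
import string

ALLOWED = string.ascii_lowercase + string.digits + "-"

def perturb_domain(benign_domain, n_changes=2, rng=random.Random(7)):
    """Return a DGA-style domain by swapping a few characters of a benign name.

    Illustrative only: a real attack must also verify the result is unregistered.
    """
    name, tld = benign_domain.rsplit(".", 1)
    chars = list(name)
    positions = rng.sample(range(len(chars)), k=min(n_changes, len(chars)))
    for pos in positions:
        replacement = rng.choice(ALLOWED)
        while replacement == chars[pos]:        # force an actual change
            replacement = rng.choice(ALLOWED)
        chars[pos] = replacement
    return "".join(chars) + "." + tld

seed_domains = ["wikipedia.org", "example.com", "openstreetmap.org"]
print([perturb_domain(d) for d in seed_domains])
```

The point such a scheme illustrates is the one made in the abstract: perturbed names retain the character statistics of benign domains, so a classifier that sees only the domain string has little signal to work with.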
Deep kernel learning for integral measurements
Title | Deep kernel learning for integral measurements |
Authors | Carl Jidling, Johannes Hendriks, Thomas B. Schön, Adrian Wills |
Abstract | Deep kernel learning refers to a Gaussian process that incorporates neural networks to improve the modelling of complex functions. We present a method that makes this approach feasible for problems where the data consists of line integral measurements of the target function. The performance is illustrated on computed tomography reconstruction examples. |
Tasks | |
Published | 2019-09-04 |
URL | https://arxiv.org/abs/1909.01844v1 |
https://arxiv.org/pdf/1909.01844v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-kernel-learning-for-integral |
Repo | |
Framework | |
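A deep kernel in the sense used above composes a neural-network warping with a standard base kernel, $k(x, x') = k_\mathrm{base}(g_\theta(x), g_\theta(x'))$. Below is a minimal PyTorch sketch of such a kernel; the network size and RBF base kernel are illustrative choices, and handling line-integral measurements would additionally require integrating this kernel along the measurement paths, which is omitted here.

```python
import torch
import torch.nn as nn

class DeepRBFKernel(nn.Module):
    """RBF kernel evaluated on neural-network-warped inputs."""

    def __init__(self, in_dim, feature_dim=8, lengthscale=1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32), nn.Tanh(),
            nn.Linear(32, feature_dim),
        )
        self.log_lengthscale = nn.Parameter(torch.tensor(float(lengthscale)).log())

    def forward(self, x1, x2):
        z1, z2 = self.net(x1), self.net(x2)                  # warp inputs
        d2 = torch.cdist(z1, z2).pow(2)                      # squared distances
        return torch.exp(-0.5 * d2 / self.log_lengthscale.exp() ** 2)

# GP prior covariance over a batch of 2-D inputs (e.g. CT image coordinates).
x = torch.rand(50, 2)
kernel = DeepRBFKernel(in_dim=2)
K = kernel(x, x) + 1e-6 * torch.eye(50)                      # jitter for stability
print(K.shape, torch.linalg.cholesky(K).shape)               # positive-definiteness check
```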
Predicting the Results of LTL Model Checking using Multiple Machine Learning Algorithms
Title | Predicting the Results of LTL Model Checking using Multiple Machine Learning Algorithms |
Authors | Weijun Zhu, Mingliang Xu, Jianwei Wang |
Abstract | In this paper, we study how to predict the results of LTL model checking using several machine learning algorithms. A data set is built from Kripke structures, LTL formulas, and their model checking results. Approaches based on the Random Forest (RF), K-Nearest Neighbors (KNN), Decision Tree (DT), and Logistic Regression (LR) are used for training and prediction. The experimental results show that the predictive accuracies of the RF-, KNN-, DT- and LR-based approaches are 97.9%, 98.2%, 97.1% and 98.2%, respectively, and that their average computational efficiencies are 7102500, 598, 4132364 and 5543415 times that of the existing approach, respectively, when the length of each LTL formula is 500. |
Tasks | |
Published | 2019-01-23 |
URL | http://arxiv.org/abs/1901.07891v2 |
http://arxiv.org/pdf/1901.07891v2.pdf | |
PWC | https://paperswithcode.com/paper/predicting-the-results-of-ltl-model-checking |
Repo | |
Framework | |
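The four classifiers mentioned can be compared with a few lines of scikit-learn. The sketch below assumes each (Kripke structure, LTL formula) pair has already been turned into a fixed-length numeric feature vector with a binary model-checking verdict as label; that featurisation is the non-trivial part and is only stubbed here with random data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stub features: one row per (Kripke structure, LTL formula) pair; label 1 = formula holds.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 64))
y = rng.integers(0, 2, size=2000)

models = {
    "RF": RandomForestClassifier(n_estimators=100),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")   # prediction replaces running the model checker itself
```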
Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition
Title | Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition |
Authors | Pengfei Zhang, Cuiling Lan, Wenjun Zeng, Junliang Xing, Jianru Xue, Nanning Zheng |
Abstract | Skeleton-based human action recognition has attracted great interest thanks to the easy accessibility of the human skeleton data. Recently, there has been a trend of using very deep feedforward neural networks to model the 3D coordinates of joints without considering computational efficiency. In this paper, we propose a simple yet effective semantics-guided neural network (SGN) for skeleton-based action recognition. We explicitly introduce the high-level semantics of joints (joint type and frame index) into the network to enhance the feature representation capability. In addition, we exploit the relationship of joints hierarchically through two modules, i.e., a joint-level module for modeling the correlations of joints in the same frame and a frame-level module for modeling the dependencies of frames by taking the joints in the same frame as a whole. A strong baseline is proposed to facilitate the study of this field. With an order of magnitude smaller model size than most previous works, SGN achieves the state-of-the-art performance on the NTU60, NTU120, and SYSU datasets. |
Tasks | Skeleton Based Action Recognition, Temporal Action Localization |
Published | 2019-04-02 |
URL | https://arxiv.org/abs/1904.01189v2 |
https://arxiv.org/pdf/1904.01189v2.pdf | |
PWC | https://paperswithcode.com/paper/semantics-guided-neural-networks-for |
Repo | |
Framework | |
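The core of the semantics-guided idea is injecting learned embeddings of joint type and frame index alongside the 3D joint coordinates. Here is a minimal PyTorch sketch of that input encoding only; the dimensions are illustrative, and the joint-level and frame-level correlation modules of the full SGN are not reproduced.

```python
import torch
import torch.nn as nn

class SemanticJointEncoding(nn.Module):
    """Fuse 3D joint coordinates with joint-type and frame-index embeddings."""

    def __init__(self, num_joints=25, num_frames=20, dim=64):
        super().__init__()
        self.coord_proj = nn.Linear(3, dim)                  # project (x, y, z)
        self.joint_type = nn.Embedding(num_joints, dim)      # which joint (elbow, knee, ...)
        self.frame_index = nn.Embedding(num_frames, dim)     # which frame in the clip

    def forward(self, coords):
        # coords: (batch, frames, joints, 3)
        b, t, j, _ = coords.shape
        joint_ids = torch.arange(j, device=coords.device)
        frame_ids = torch.arange(t, device=coords.device)
        feat = self.coord_proj(coords)                               # (b, t, j, dim)
        feat = feat + self.joint_type(joint_ids)[None, None, :, :]   # add joint semantics
        feat = feat + self.frame_index(frame_ids)[None, :, None, :]  # add frame semantics
        return feat

x = torch.randn(4, 20, 25, 3)              # a batch of skeleton sequences
print(SemanticJointEncoding()(x).shape)    # torch.Size([4, 20, 25, 64])
```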
Neural Zero-Inflated Quality Estimation Model For Automatic Speech Recognition System
Title | Neural Zero-Inflated Quality Estimation Model For Automatic Speech Recognition System |
Authors | Kai Fan, Jiayi Wang, Bo Li, Boxing Chen, Niyu Ge |
Abstract | The performance of automatic speech recognition (ASR) systems is usually evaluated with the word error rate (WER) metric when manually transcribed data are provided; such transcripts are, however, expensive to obtain in real scenarios. In addition, the empirical distribution of WER for most ASR systems usually tends to put a significant mass near zero, making it difficult to model with a single continuous distribution. In order to address these two issues of ASR quality estimation (QE), we propose a novel neural zero-inflated model to predict the WER of the ASR result without transcripts. We design a neural zero-inflated beta regression on top of a bidirectional transformer language model conditioned on speech features (speech-BERT). We adopt the pre-training strategy of token-level masked language modeling for speech-BERT as well, and further fine-tune with our zero-inflated layer for the mixture of discrete and continuous outputs. The experimental results show that our approach achieves better performance on WER prediction in the Pearson and MAE metrics, compared with most existing quality estimation algorithms for ASR or machine translation. |
Tasks | Language Modelling, Machine Translation, Speech Recognition |
Published | 2019-10-03 |
URL | https://arxiv.org/abs/1910.01289v1 |
https://arxiv.org/pdf/1910.01289v1.pdf | |
PWC | https://paperswithcode.com/paper/neural-zero-inflated-quality-estimation-model |
Repo | |
Framework | |
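A zero-inflated beta output layer mixes a point mass at zero with a beta distribution on (0, 1): the model predicts the zero probability $\pi$ and the beta parameters $(\alpha, \beta)$. Below is a minimal PyTorch sketch of such a head and its negative log-likelihood, with the speech-BERT encoder replaced by a placeholder feature vector; the layer sizes and example WER values are invented.

```python
import torch
import torch.nn as nn

class ZeroInflatedBetaHead(nn.Module):
    """Predict WER in [0, 1) as a mixture of a point mass at 0 and a Beta distribution."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.pi = nn.Linear(hidden_dim, 1)       # logit of P(WER == 0)
        self.alpha = nn.Linear(hidden_dim, 1)    # Beta concentration parameters
        self.beta = nn.Linear(hidden_dim, 1)

    def forward(self, h):
        pi = torch.sigmoid(self.pi(h)).squeeze(-1)
        alpha = torch.nn.functional.softplus(self.alpha(h)).squeeze(-1) + 1e-4
        beta = torch.nn.functional.softplus(self.beta(h)).squeeze(-1) + 1e-4
        return pi, alpha, beta

def zib_nll(pi, alpha, beta, wer, eps=1e-6):
    """Negative log-likelihood of the zero-inflated beta model."""
    is_zero = (wer <= eps).float()
    log_beta = torch.distributions.Beta(alpha, beta).log_prob(wer.clamp(min=eps, max=1 - eps))
    ll = is_zero * torch.log(pi + eps) + (1 - is_zero) * (torch.log(1 - pi + eps) + log_beta)
    return -ll.mean()

h = torch.randn(8, 256)                    # placeholder sentence-level encoder features
wer = torch.tensor([0.0, 0.0, 0.1, 0.25, 0.0, 0.4, 0.05, 0.0])
pi, alpha, beta = ZeroInflatedBetaHead(256)(h)
print(zib_nll(pi, alpha, beta, wer))
```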
Presence-absence estimation in audio recordings of tropical frog communities
Title | Presence-absence estimation in audio recordings of tropical frog communities |
Authors | Andrés Estrella Terneux, Damián Nicolalde, Daniel Nicolalde, Andrés Merino-Viteri |
Abstract | One non-invasive way to study frog communities is by analyzing long-term samples of acoustic material containing calls. This immense task has been optimized by the development of Machine Learning tools to extract ecological information. We explored a likelihood-ratio audio detector based on Gaussian mixture model classification of 10 frog species, and applied it to estimate presence-absence in audio recordings from an actual amphibian monitoring campaign performed at Yasuní National Park in the Ecuadorian Amazon. A modified filter-bank was used to extract 20 cepstral features that model the spectral content of frog calls. Experiments were carried out to investigate the hyperparameters and the minimum frog-call time needed to train an accurate GMM classifier. With 64 Gaussians and 12 seconds of training time, the classifier achieved an average weighted error rate of 0.9% on the 10-fold cross-validation for nine-species classification, as compared to 3% with MFCC and 1.8% with PLP features. For testing, 10 GMMs were trained using all of the available training-validation data to study 23.5 hours of unidentified real-world audio, in 141 ten-minute samples, recorded at two frog communities in 2001 with analog equipment. To evaluate automatic presence-absence estimation, we characterized the audio samples with 10 binary variables, each corresponding to a frog species, and manually labeled a subset of 18 samples using headphones. A recall of 87.5% and precision of 100% with average accuracy of 96.66% suggest good generalization ability of the algorithm, and provide evidence of the validity of this approach to study real-world audio recorded in a tropical acoustic environment. Finally, we applied the algorithm to the available corpus, and show its potential to provide insights into the temporal reproductive behavior of frogs. |
Tasks | |
Published | 2019-01-08 |
URL | http://arxiv.org/abs/1901.02495v1 |
http://arxiv.org/pdf/1901.02495v1.pdf | |
PWC | https://paperswithcode.com/paper/presence-absence-estimation-in-audio |
Repo | |
Framework | |
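The core detector described above is a per-species GMM scored against a background model, with presence declared when the likelihood ratio exceeds a threshold. A minimal scikit-learn sketch of that pattern follows; the extraction of the 20 cepstral coefficients is stubbed with random data, and the threshold is an assumed placeholder rather than the paper's operating point.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Stub training features: frames x 20 cepstral coefficients.
species_frames = rng.normal(loc=1.0, size=(1200, 20))     # frames containing the target call
background_frames = rng.normal(loc=0.0, size=(5000, 20))  # everything else

species_gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(species_frames)
background_gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(background_frames)

def species_present(segment_frames, threshold=0.0):
    """Likelihood-ratio detector: average per-frame log-likelihood ratio over a segment."""
    llr = species_gmm.score(segment_frames) - background_gmm.score(segment_frames)
    return llr > threshold

test_segment = rng.normal(loc=1.0, size=(300, 20))
print(species_present(test_segment))
```

Running one such detector per species over each 10-minute sample yields the 10 binary presence-absence variables described in the abstract.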
Estimating the Standard Error of Cross-Validation-Based Estimators of Classification Rules Performance
Title | Estimating the Standard Error of Cross-Validation-Based Estimators of Classification Rules Performance |
Authors | Waleed A. Yousef |
Abstract | First, we analyze the variance of the Cross Validation (CV)-based estimators used for estimating the performance of classification rules. Second, we propose a novel estimator to estimate this variance using the Influence Function (IF) approach, which has previously been used very successfully to estimate the variance of bootstrap-based estimators. The motivation for this research is that, to the best of our knowledge, the literature lacks a rigorous method for estimating the variance of the CV-based estimators. What is available is a set of ad-hoc procedures that have no mathematical foundation since they ignore the covariance structure among dependent random variables. The conducted experiments show that the proposed IF method has small RMS error but some bias. However, surprisingly, the ad-hoc methods still work better than the IF-based method. Unfortunately, this is due to a lack of sufficient smoothness compared to the bootstrap estimator. This opens three research directions: (1) a more comprehensive simulation study to clarify when the IF method wins or loses; (2) more mathematical analysis to explain why the ad-hoc methods work well; and (3) further mathematical treatment to establish the connection between the appropriate amount of “smoothness” and decreasing the bias of the IF method. |
Tasks | |
Published | 2019-08-01 |
URL | https://arxiv.org/abs/1908.00325v1 |
https://arxiv.org/pdf/1908.00325v1.pdf | |
PWC | https://paperswithcode.com/paper/estimating-the-standard-error-of-cross |
Repo | |
Framework | |
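For context, the "ad-hoc" variance estimate the abstract refers to is the kind that treats the K fold error estimates as if they were independent. The small numpy sketch below shows that naive estimator only; it is illustrative, since the paper's point is precisely that this ignores the covariance between folds, and the IF-based alternative is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

fold_errors = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    fold_errors.append(1.0 - model.score(X[test_idx], y[test_idx]))

fold_errors = np.array(fold_errors)
cv_error = fold_errors.mean()
# Naive ("ad-hoc") standard error: sample std of fold errors / sqrt(K),
# which pretends the folds are independent even though they share training data.
naive_se = fold_errors.std(ddof=1) / np.sqrt(len(fold_errors))
print(f"CV error estimate: {cv_error:.3f} +/- {naive_se:.3f}")
```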
Recurrent Neural Networks For Accurate RSSI Indoor Localization
Title | Recurrent Neural Networks For Accurate RSSI Indoor Localization |
Authors | Minh Tu Hoang, Brosnan Yuen, Xiaodai Dong, Tao Lu, Robert Westendorp, Kishore Reddy |
Abstract | This paper proposes recurrent neural networks (RNNs) for fingerprinting indoor localization using WiFi. Instead of locating the user’s position one point at a time as in conventional algorithms, our RNN solution aims at trajectory positioning and takes into account the relation among the received signal strength indicator (RSSI) measurements in a trajectory. Furthermore, a weighted average filter is proposed for both the input RSSI data and the sequential output locations to enhance accuracy amid the temporal fluctuations of RSSI. Results using different types of RNN, including vanilla RNN, long short-term memory (LSTM), gated recurrent unit (GRU) and bidirectional LSTM (BiLSTM), are presented. On-site experiments demonstrate that the proposed structure achieves an average localization error of $0.75$ m with $80\%$ of the errors under $1$ m, which outperforms conventional KNN algorithms and probabilistic algorithms by approximately $30\%$ under the same test environment. |
Tasks | |
Published | 2019-03-27 |
URL | https://arxiv.org/abs/1903.11703v2 |
https://arxiv.org/pdf/1903.11703v2.pdf | |
PWC | https://paperswithcode.com/paper/recurrent-neural-networks-for-accurate-rssi |
Repo | |
Framework | |
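The trajectory-level formulation above maps a sequence of RSSI vectors to a sequence of 2-D positions. Here is a minimal PyTorch LSTM sketch of that mapping together with the kind of weighted moving-average smoothing the abstract mentions; all sizes and filter weights are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class RSSITrajectoryLSTM(nn.Module):
    """Map a trajectory of RSSI vectors (one value per access point) to (x, y) positions."""

    def __init__(self, num_aps=100, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(num_aps, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, rssi_seq):                 # (batch, time, num_aps)
        out, _ = self.lstm(rssi_seq)
        return self.head(out)                    # (batch, time, 2) predicted positions

def weighted_average_filter(seq, weights=(0.2, 0.6, 0.2)):
    """Smooth a (batch, time, dim) sequence with a short symmetric weighted average."""
    w = torch.tensor(weights).view(1, 1, -1)
    padded = torch.nn.functional.pad(seq.transpose(1, 2), (1, 1), mode="replicate")
    smoothed = torch.nn.functional.conv1d(
        padded.reshape(-1, 1, padded.shape[-1]), w
    ).reshape(seq.shape[0], seq.shape[-1], -1)
    return smoothed.transpose(1, 2)

rssi = torch.randn(4, 30, 100)                   # 4 trajectories, 30 time steps, 100 APs
positions = RSSITrajectoryLSTM()(weighted_average_filter(rssi))
print(positions.shape)                           # torch.Size([4, 30, 2])
```

The same filter can be applied to the predicted position sequence as a post-processing step, mirroring the abstract's use of smoothing on both input and output.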
Detecting muscle activation using ultrasound speed of sound inversion with deep learning
Title | Detecting muscle activation using ultrasound speed of sound inversion with deep learning |
Authors | Micha Feigin, Manuel Zwecker, Daniel Freedman, Brian W. Anthony |
Abstract | Functional muscle imaging is essential for the diagnostics of a multitude of musculoskeletal afflictions such as degenerative muscle diseases, muscle injuries, muscle atrophy, and neurological issues such as spasticity. However, there is currently no solution, imaging or otherwise, capable of providing a map of active muscles over a large field of view in dynamic scenarios. In this work, we look at the feasibility of applying longitudinal sound speed measurements to the task of dynamic muscle imaging of contraction or activation. We perform the assessment using a deep learning network applied to pre-beamformed ultrasound channel data for sound speed inversion. Preliminary results show that dynamic muscle contraction can be detected in the calf and that this contraction can be positively assigned to the operating muscles. Frame rates in the hundreds to thousands of frames per second are potentially necessary to accomplish this. |
Tasks | |
Published | 2019-10-20 |
URL | https://arxiv.org/abs/1910.09046v2 |
https://arxiv.org/pdf/1910.09046v2.pdf | |
PWC | https://paperswithcode.com/paper/detecting-muscle-activation-using-ultrasound |
Repo | |
Framework | |
Non-asymptotic Analysis of Biased Stochastic Approximation Scheme
Title | Non-asymptotic Analysis of Biased Stochastic Approximation Scheme |
Authors | Belhal Karimi, Blazej Miasojedow, Eric Moulines, Hoi-To Wai |
Abstract | Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most of the prior analyses are made under restrictive assumptions such as unbiased gradient estimates and a convex objective function, which significantly limit their applications to sophisticated tasks such as online and reinforcement learning. These restrictions are all essentially relaxed in this work. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider an update procedure whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, covering approximate second-order methods and allowing asymptotic bias in the one-step updates. We illustrate these settings with the online EM algorithm and the policy-gradient method for average reward maximization in reinforcement learning. |
Tasks | |
Published | 2019-02-02 |
URL | https://arxiv.org/abs/1902.00629v4 |
https://arxiv.org/pdf/1902.00629v4.pdf | |
PWC | https://paperswithcode.com/paper/non-asymptotic-analysis-of-biased-stochastic |
Repo | |
Framework | |
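For reference, the scheme analysed is the standard stochastic approximation recursion with a state-dependent Markovian drift; in common SA notation (the symbols below are generic conventions, not the paper's exact statement):

$$\theta_{n+1} = \theta_n - \gamma_{n+1}\, H_{\theta_n}(X_{n+1}), \qquad h(\theta) = \mathbb{E}_{X \sim \pi_\theta}\left[ H_\theta(X) \right],$$

where $\{X_n\}$ is a Markov chain whose transition kernel may depend on the current iterate $\theta_n$, $\gamma_n$ are step sizes, and the mean field $h$ need not be a gradient; the bias of a one-step update is the gap between $H_{\theta_n}(X_{n+1})$ and $h(\theta_n)$.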
Online Learning for Measuring Incentive Compatibility in Ad Auctions
Title | Online Learning for Measuring Incentive Compatibility in Ad Auctions |
Authors | Zhe Feng, Okke Schrijvers, Eric Sodomka |
Abstract | In this paper we investigate the problem of measuring end-to-end Incentive Compatibility (IC) regret given black-box access to an auction mechanism. Our goal is to 1) compute an estimate for IC regret in an auction, 2) provide a measure of certainty around the estimate of IC regret, and 3) minimize the time it takes to arrive at an accurate estimate. We consider two main problems, with different informational assumptions: In the \emph{advertiser problem} the goal is to measure IC regret for some known valuation $v$, while in the more general \emph{demand-side platform (DSP) problem} we wish to determine the worst-case IC regret over all possible valuations. The problems are naturally phrased in an online learning model and we design $Regret-UCB$ algorithms for both problems. We give an online learning algorithm where for the advertiser problem the error of determining IC shrinks as $O\Big(\frac{B}{T}\cdot\Big(\frac{\ln T}{n} + \sqrt{\frac{\ln T}{n}}\Big)\Big)$ (where $B$ is the finite set of bids, $T$ is the number of time steps, and $n$ is number of auctions per time step), and for the DSP problem it shrinks as $O\Big(\frac{B}{T}\cdot\Big( \frac{B\ln T}{n} + \sqrt{\frac{B\ln T}{n}}\Big)\Big)$. For the DSP problem, we also consider stronger IC regret estimation and extend our $Regret-UCB$ algorithm to achieve better IC regret error. We validate the theoretical results using simulations with Generalized Second Price (GSP) auctions, which are known to not be incentive compatible and thus have strictly positive IC regret. |
Tasks | |
Published | 2019-01-21 |
URL | http://arxiv.org/abs/1901.06808v2 |
http://arxiv.org/pdf/1901.06808v2.pdf | |
PWC | https://paperswithcode.com/paper/190106808 |
Repo | |
Framework | |
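For the advertiser problem, IC regret for a known valuation $v$ compares the utility of bidding truthfully against the best fixed alternative bid; an empirical estimate simply averages observed allocations and payments per bid. The sketch below shows that plain estimator under simulated black-box auction access; the auction simulator and bid grid are invented stand-ins (a single-item second-price auction, which is incentive compatible, so the estimate should be near zero), and the paper's Regret-UCB exploration strategy is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

def run_auction(bid, n=1000):
    """Black-box stand-in for n auctions: returns allocations (0/1) and payments."""
    competing = rng.uniform(0, 1, size=n)
    alloc = (bid > competing).astype(float)
    payment = alloc * competing        # second-price-style payment, for illustration only
    return alloc, payment

def empirical_ic_regret(valuation, bid_grid, n=1000):
    """IC regret estimate: best alternative-bid utility minus truthful-bid utility."""
    alloc_v, pay_v = run_auction(valuation, n)
    truthful_utility = np.mean(valuation * alloc_v - pay_v)
    best_alternative = max(
        np.mean(valuation * alloc_b - pay_b)
        for b in bid_grid
        for alloc_b, pay_b in [run_auction(b, n)]
    )
    return max(0.0, best_alternative - truthful_utility)

bids = np.linspace(0.0, 1.0, 21)
print(empirical_ic_regret(valuation=0.7, bid_grid=bids))
```

Replacing the stand-in with a GSP-style multi-slot auction would produce strictly positive regret, which is the setting the paper uses for validation.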
The Long and the Short of It: Summarising Event Sequences with Serial Episodes
Title | The Long and the Short of It: Summarising Event Sequences with Serial Episodes |
Authors | Nikolaj Tatti, Jilles Vreeken |
Abstract | An ideal outcome of pattern mining is a small set of informative patterns, containing no redundancy or noise, that identifies the key structure of the data at hand. Standard frequent pattern miners do not achieve this goal, as due to the pattern explosion typically very large numbers of highly redundant patterns are returned. We pursue this ideal for sequential data by employing a pattern set mining approach, an approach where, instead of ranking patterns individually, we consider results as a whole. Pattern set mining has been successfully applied to transactional data, but has been surprisingly understudied for sequential data. In this paper, we employ the MDL principle to identify the set of sequential patterns that summarises the data best. In particular, we formalise how to encode sequential data using sets of serial episodes, and use the encoded length as a quality score. As search strategy, we propose two approaches: the first algorithm selects a good pattern set from a large candidate set, while the second is a parameter-free any-time algorithm that mines pattern sets directly from the data. Experimentation on synthetic and real data demonstrates that we efficiently discover small sets of informative patterns. |
Tasks | |
Published | 2019-02-07 |
URL | http://arxiv.org/abs/1902.02834v1 |
http://arxiv.org/pdf/1902.02834v1.pdf | |
PWC | https://paperswithcode.com/paper/the-long-and-the-short-of-it-summarising |
Repo | |
Framework | |
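The "encoded length as a quality score" idea can be illustrated with a drastically simplified two-part code: the total cost is the bits needed to describe the pattern set plus the bits needed to describe the sequence in terms of those patterns, and a pattern is worth keeping only if it lowers the total. The sketch below is not the paper's encoding (which handles gaps, overlapping occurrences, and proper code tables); it only shows the shape of such an MDL score, with an invented per-event model cost.

```python
import math
from collections import Counter

def encoded_length(sequence, patterns):
    """Crude two-part MDL score: model cost + data cost under a greedy cover."""
    # Greedily replace non-overlapping occurrences of each pattern (longest first).
    symbols, i = [], 0
    patterns = sorted(patterns, key=len, reverse=True)
    while i < len(sequence):
        for p in patterns:
            if tuple(sequence[i:i + len(p)]) == p:
                symbols.append(p)
                i += len(p)
                break
        else:
            symbols.append(sequence[i])
            i += 1
    # Data cost: Shannon code lengths from usage frequencies of the chosen symbols.
    counts = Counter(symbols)
    total = sum(counts.values())
    data_bits = -sum(c * math.log2(c / total) for c in counts.values())
    # Model cost: a flat per-event charge for describing each pattern in the set.
    model_bits = sum(8 * len(p) for p in patterns)
    return model_bits + data_bits

events = list("abcx" * 10)                                   # "abc" recurs as a serial episode
print(encoded_length(events, patterns=[]))                   # singletons only: 80.0 bits
print(encoded_length(events, patterns=[("a", "b", "c")]))    # with the episode: 44.0 bits
```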