January 25, 2020

3222 words 16 mins read

Paper Group ANR 1645


Severity Detection Tool for Patients with Infectious Disease

Title Severity Detection Tool for Patients with Infectious Disease
Authors Girmaw Abebe Tadesse, Tingting Zhu, Nhan Le Nguyen Thanh, Nguyen Thanh Hung, Ha Thi Hai Duong, Truong Huu Khanh, Pham Van Quang, Duc Duong Tran, Lam Minh Yen, H Rogier Van Doorn, Nguyen Van Hao, John Prince, Hamza Javed, Dani Kiyasseh, Le Van Tan, Louise Thwaites, David A. Clifton
Abstract Hand, foot and mouth disease (HFMD) and tetanus are serious infectious diseases in low and middle income countries. Tetanus in particular has a high mortality rate and its treatment is resource-demanding. Furthermore, HFMD often affects a large number of infants and young children. As a result, its treatment consumes enormous healthcare resources, especially when outbreaks occur. Autonomic nervous system dysfunction (ANSD) is the main cause of death for both HFMD and tetanus patients, yet its early detection remains a challenging problem. In this paper, we aim to provide a proof-of-principle for detecting the ANSD level automatically by applying machine learning techniques to physiological patient data, such as electrocardiogram (ECG) and photoplethysmogram (PPG) waveforms, which can be collected using low-cost wearable sensors. Efficient features are extracted that encode variations in the waveforms in the time and frequency domains. A support vector machine is employed to classify the ANSD levels. The proposed approach is validated on multiple datasets of HFMD and tetanus patients in Vietnam. Results show that encouraging performance is achieved in classifying ANSD levels. Moreover, the proposed features are simple and more generalisable, and they outperform standard heart rate variability (HRV) analysis. The proposed approach would facilitate both the diagnosis and treatment of infectious diseases in low and middle income countries, and thereby improve overall patient care.
Tasks Heart Rate Variability
Published 2019-12-10
URL https://arxiv.org/abs/1912.05345v1
PDF https://arxiv.org/pdf/1912.05345v1.pdf
PWC https://paperswithcode.com/paper/severity-detection-tool-for-patients-with
Repo
Framework
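
The classification pipeline above is straightforward to prototype. Below is a minimal sketch under assumed feature choices (the exact feature set used in the paper is not reproduced here): a few time- and frequency-domain descriptors per waveform segment, fed to an RBF-kernel SVM.

```python
# Hypothetical features + SVM classifier; data here is random placeholder input.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def waveform_features(x, fs=125):
    """Simple time/frequency descriptors of one ECG/PPG segment (illustrative)."""
    freqs, psd = welch(x, fs=fs, nperseg=min(256, len(x)))
    return np.array([
        x.mean(), x.std(),                            # time-domain variation
        np.percentile(x, 90) - np.percentile(x, 10),  # amplitude spread
        freqs[np.argmax(psd)],                        # dominant frequency
        psd.sum(),                                    # total spectral power
    ])

rng = np.random.default_rng(0)
X_raw = [rng.standard_normal(1000) for _ in range(40)]  # dummy segments
y = rng.integers(0, 3, size=40)                         # dummy ANSD levels

X = np.vstack([waveform_features(seg) for seg in X_raw])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```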

Learning Semantic Parsers from Denotations with Latent Structured Alignments and Abstract Programs

Title Learning Semantic Parsers from Denotations with Latent Structured Alignments and Abstract Programs
Authors Bailin Wang, Ivan Titov, Mirella Lapata
Abstract Semantic parsing aims to map natural language utterances onto machine interpretable meaning representations, aka programs whose execution against a real-world environment produces a denotation. Weakly-supervised semantic parsers are trained on utterance-denotation pairs treating programs as latent. The task is challenging due to the large search space and spuriousness of programs which may execute to the correct answer but do not generalize to unseen examples. Our goal is to instill an inductive bias in the parser to help it distinguish between spurious and correct programs. We capitalize on the intuition that correct programs would likely respect certain structural constraints were they to be aligned to the question (e.g., program fragments are unlikely to align to overlapping text spans) and propose to model alignments as structured latent variables. In order to make the latent-alignment framework tractable, we decompose the parsing task into (1) predicting a partial “abstract program” and (2) refining it while modeling structured alignments with differentiable dynamic programming. We obtain state-of-the-art performance on the WIKITABLEQUESTIONS and WIKISQL datasets. When compared to a standard attention baseline, we observe that the proposed structured-alignment mechanism is highly beneficial.
Tasks Semantic Parsing
Published 2019-09-09
URL https://arxiv.org/abs/1909.04165v1
PDF https://arxiv.org/pdf/1909.04165v1.pdf
PWC https://paperswithcode.com/paper/learning-semantic-parsers-from-denotations
Repo
Framework
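
As a toy illustration of the structural constraint the abstract mentions (program fragments should not align to overlapping text spans), the sketch below scores candidate alignments by brute force. This is not the paper's model, which makes the search tractable with differentiable dynamic programming; fragment names and spans are made up.

```python
from itertools import product

def spans_overlap(a, b):
    # Spans are half-open (start, end) token intervals.
    return a[0] < b[1] and b[0] < a[1]

def best_alignment(fragments, candidate_spans, score):
    """score(fragment, span) -> float. Exhaustively picks one span per
    fragment so that no two chosen spans overlap and the total score is maximal."""
    best, best_total = None, float("-inf")
    for chosen in product(candidate_spans, repeat=len(fragments)):
        if any(spans_overlap(s, t)
               for i, s in enumerate(chosen)
               for t in chosen[i + 1:]):
            continue  # violates the non-overlap constraint
        total = sum(score(f, s) for f, s in zip(fragments, chosen))
        if total > best_total:
            best, best_total = chosen, total
    return best, best_total

# e.g. best_alignment(["max", "year"], [(0, 2), (2, 3), (3, 5)],
#                     lambda f, s: 1.0 if f == "max" else 0.5)
```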

NM-Net: Mining Reliable Neighbors for Robust Feature Correspondences

Title NM-Net: Mining Reliable Neighbors for Robust Feature Correspondences
Authors Chen Zhao, Zhiguo Cao, Chi Li, Xin Li, Jiaqi Yang
Abstract Feature correspondence selection is pivotal to many feature-matching based tasks in computer vision. Searching for spatially k-nearest neighbors is a common strategy for extracting local information in many previous works. However, there is no guarantee that the spatially k-nearest neighbors of correspondences are consistent because the spatial distribution of false correspondences is often irregular. To address this issue, we present a compatibility-specific mining method to search for consistent neighbors. Moreover, in order to extract and aggregate more reliable features from neighbors, we propose a hierarchical network named NM-Net with a series of convolution layers taking the generated graph as input, which is insensitive to the order of correspondences. Our experimental results show that the proposed method achieves state-of-the-art performance on four datasets with various inlier ratios and varying numbers of feature consistencies.
Tasks
Published 2019-03-31
URL http://arxiv.org/abs/1904.00320v1
PDF http://arxiv.org/pdf/1904.00320v1.pdf
PWC https://paperswithcode.com/paper/nm-net-mining-reliable-neighbors-for-robust
Repo
Framework
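
A rough sketch of the neighbor-mining idea follows, with an assumed compatibility measure (similarity of the motion each correspondence induces) standing in for the paper's definition:

```python
import numpy as np

def mine_neighbors(pts1, pts2, k=8):
    """pts1, pts2: (N, 2) matched keypoints. Returns (N, k) neighbor indices."""
    motion = pts2 - pts1                           # displacement per correspondence
    # Heuristic compatibility: correspondences with similar motion vectors are
    # likely consistent (both inliers under a smooth transformation).
    d = np.linalg.norm(motion[:, None, :] - motion[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                    # exclude self
    return np.argsort(d, axis=1)[:, :k]            # k most compatible neighbors
```

Spatial kNN would instead sort by distances between the keypoints themselves, which is exactly the strategy the paper argues is unreliable when outliers are scattered irregularly.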

Time-Dependent Deep Image Prior for Dynamic MRI

Title Time-Dependent Deep Image Prior for Dynamic MRI
Authors Kyong Hwan Jin, Harshit Gupta, Jerome Yerly, Matthias Stuber, Michael Unser
Abstract We propose a novel unsupervised deep-learning-based algorithm to solve the inverse problem found in dynamic magnetic resonance imaging (MRI). Our method needs neither prior training nor additional data; in particular, it does not require either electrocardiogram or spokes-reordering in the context of cardiac images. It generalizes to sequences of images the recently introduced deep-image-prior approach. The essence of the proposed algorithm is to proceed in two steps to fit k-space synthetic measurements to sparsely acquired dynamic MRI data. In the first step, we deploy a convolutional neural network (CNN) driven by a sequence of low-dimensional latent variables to generate a dynamic series of MRI images. In the second step, we submit the generated images to a nonuniform fast Fourier transform that represents the forward model of the MRI system. By manipulating the weights of the CNN, we fit our synthetic measurements to the acquired MRI data. The corresponding images from the CNN then provide the output of our system; their evolution through time is driven by controlling the sequence of latent variables whose interpolation gives access to the sub-frame—or even continuous—temporal control of reconstructed dynamic images. We perform experiments on simulated and real cardiac images of a fetus acquired through 5-spoke-based golden-angle measurements. Our results show improvement over the current state-of-the-art.
Tasks
Published 2019-10-03
URL https://arxiv.org/abs/1910.01684v1
PDF https://arxiv.org/pdf/1910.01684v1.pdf
PWC https://paperswithcode.com/paper/time-dependent-deep-image-prior-for-dynamic
Repo
Framework
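
The two-step fitting loop can be sketched as follows, with simplifications that are assumptions of this sketch rather than the paper's design: a placeholder CNN generator, and a masked FFT standing in for the nonuniform FFT forward model.

```python
import torch
import torch.nn as nn

T, H, W, latent_dim = 16, 64, 64, 32
generator = nn.Sequential(                         # stand-in for the paper's CNN
    nn.Linear(latent_dim, 16 * H * W), nn.Unflatten(1, (16, H, W)),
    nn.Conv2d(16, 1, 3, padding=1),
)
# A smooth latent trajectory: interpolating these latents gives temporal control.
z = torch.linspace(0, 1, T)[:, None] * torch.randn(1, latent_dim)
mask = torch.rand(T, 1, H, W) < 0.2                # sparse sampling pattern
y = torch.randn(T, 1, H, W, dtype=torch.cfloat) * mask  # dummy acquired k-space

opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
for _ in range(200):
    imgs = generator(z)                            # step 1: generate image series
    kspace = torch.fft.fft2(imgs.to(torch.cfloat)) # step 2: forward model
    loss = (torch.abs(kspace * mask - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()                                # fit CNN weights to measurements
    opt.step()
```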

Towards Interpretable Deep Extreme Multi-label Learning

Title Towards Interpretable Deep Extreme Multi-label Learning
Authors Yihuang Kang, I-Ling Cheng, Wenjui Mao, Bowen Kuo, Pei-Ju Lee
Abstract Many machine learning algorithms, such as deep neural networks, have long been criticized for being “black boxes”: models that cannot explain how they arrive at a decision without further interpretation effort. This problem has raised concerns about trust, safety, nondiscrimination, and other ethical issues in model applications. In this paper, we discuss the machine learning interpretability of a real-world application, eXtreme Multi-label Learning (XML), which involves learning models from annotated data with many pre-defined labels. We propose a two-step XML approach that combines a deep non-negative autoencoder with other multi-label classifiers to tackle different data applications with a large number of labels. Our experimental results show that the proposed approach is able to cope with many-label problems and to provide interpretable label hierarchies and dependencies that help us understand how the model recognizes the existence of objects in an image.
Tasks Multi-Label Learning
Published 2019-07-03
URL https://arxiv.org/abs/1907.01723v1
PDF https://arxiv.org/pdf/1907.01723v1.pdf
PWC https://paperswithcode.com/paper/towards-interpretable-deep-extreme-multi
Repo
Framework
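
A minimal sketch of the non-negative autoencoder ingredient, assuming a simple clamping scheme to enforce non-negative weights (the paper's exact architecture and constraint mechanism may differ):

```python
import torch
import torch.nn as nn

n_labels, n_hidden = 500, 32
enc = nn.Linear(n_labels, n_hidden, bias=False)
dec = nn.Linear(n_hidden, n_labels, bias=False)
model = nn.Sequential(enc, nn.ReLU(), dec, nn.ReLU())
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

Y = (torch.rand(256, n_labels) < 0.05).float()     # dummy sparse label matrix
for _ in range(100):
    loss = ((model(Y) - Y) ** 2).mean()            # reconstruct the label vectors
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                          # non-negativity -> parts-based,
        enc.weight.clamp_(min=0)                   # interpretable factors
        dec.weight.clamp_(min=0)
```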

Study of the impact of climate change on precipitation in Paris area using method based on iterative multiscale dynamic time warping (IMS-DTW)

Title Study of the impact of climate change on precipitation in Paris area using method based on iterative multiscale dynamic time warping (IMS-DTW)
Authors Mohamed Djallel Dilmi, Laurent Barthès, Cécile Mallet, Aymeric Chazottes
Abstract Studying the impact of climate change on precipitation is constrained by finding a way to evaluate the evolution of precipitation variability over time. Classical (feature-based) approaches have shown their limitations for this issue due to the intermittent and irregular nature of precipitation. In this study, we present a novel variant of the dynamic time warping method that quantifies the dissimilarity between two rainfall time series based on shape comparison, and we use it to cluster annual time series recorded at a daily scale. This shape-based approach retains all of the information (variability, trends, and intermittency). We further labeled each cluster using a feature-based approach. Testing the proposed approach on the time series of Paris Montsouris, we found that precipitation variability has increased over the years in the Paris area.
Tasks Time Series
Published 2019-10-22
URL https://arxiv.org/abs/1910.10809v1
PDF https://arxiv.org/pdf/1910.10809v1.pdf
PWC https://paperswithcode.com/paper/study-of-the-impact-of-climate-change-on
Repo
Framework
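
For reference, the core dynamic-time-warping recurrence that the proposed iterative multiscale variant builds on (the multiscale iteration itself is omitted here):

```python
import numpy as np

def dtw(a, b):
    """DTW distance between two 1-D series, e.g. two annual daily-rainfall series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match (the warping steps)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```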

Pelvis Surface Estimation From Partial CT for Computer-Aided Pelvic Osteotomies

Title Pelvis Surface Estimation From Partial CT for Computer-Aided Pelvic Osteotomies
Authors Robert Grupp, Yoshito Otake, Ryan Murphy, Javad Parvizi, Mehran Armand, Russell Taylor
Abstract Computer-aided surgical systems commonly use preoperative CT scans when performing pelvic osteotomies for intraoperative navigation. These systems have the potential to improve the safety and accuracy of pelvic osteotomies; however, exposing the patient to radiation is a significant drawback. In order to reduce radiation exposure, we propose a new smooth extrapolation method leveraging a partial pelvis CT and a statistical shape model (SSM) of the full pelvis in order to estimate a patient’s complete pelvis. An SSM of normal, complete, female pelvis anatomy was created and evaluated from 42 subjects. A leave-one-out test was performed to characterise the inherent generalisation capability of the SSM. An additional leave-one-out test was conducted to measure the performance of the smooth extrapolation method and an existing “cut-and-paste” extrapolation method. Unknown anatomy was simulated by keeping the axial slices of the patient’s acetabulum intact and varying the amount of the superior iliac crest retained, from 0% to 15% of the total pelvis extent. The smooth technique showed an average improvement over the cut-and-paste method of 1.31 mm and 3.61 mm in RMS and maximum surface error, respectively. With 5% of the iliac crest retained, the smoothly estimated surface had an RMS surface error of 2.21 mm, an improvement of 1.25 mm over retaining none of the iliac crest. This anatomical estimation method creates the possibility of a patient and surgeon benefiting from the use of a CAS system while simultaneously reducing the patient’s radiation exposure.
Tasks
Published 2019-09-23
URL https://arxiv.org/abs/1909.10452v1
PDF https://arxiv.org/pdf/1909.10452v1.pdf
PWC https://paperswithcode.com/paper/190910452
Repo
Framework
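
The estimation idea can be sketched with plain PCA as the SSM: fit the model's coefficients to the observed vertices by least squares, then read the full surface off the reconstruction. Data, dimensions, and the 10-mode truncation below are illustrative, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
shapes = rng.standard_normal((42, 3000))           # 42 training pelves, 1000 pts x 3
mean = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
modes = Vt[:10]                                    # first 10 shape modes

known = np.arange(1200)                            # indices of observed coordinates
partial = shapes[0, known] + 0.01 * rng.standard_normal(len(known))

# Solve for coefficients that best explain the observed part...
coef, *_ = np.linalg.lstsq(modes[:, known].T, partial - mean[known], rcond=None)
full_estimate = mean + coef @ modes                # ...and extrapolate the rest
```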

Sequential Training of Neural Networks with Gradient Boosting

Title Sequential Training of Neural Networks with Gradient Boosting
Authors Gonzalo Martínez-Muñoz
Abstract This paper presents a novel technique based on gradient boosting to train a shallow neural network (NN). Gradient boosting is an additive expansion algorithm in which a series of models are trained sequentially to approximate a given function. A one-hidden-layer neural network can also be seen as an additive model, where the scalar product of the responses of the hidden layer and its weights provides the final output of the network. Instead of training the network as a whole, the proposed algorithm trains the network sequentially in $T$ steps. First, the bias term of the network is initialized with a constant approximation that minimizes the average loss of the data. Then, at each step, a portion of the network, composed of $K$ neurons, is trained to approximate the pseudo-residuals on the training data computed from the previous iteration. Finally, the $T$ partial models and bias are integrated as a single NN with $T \times K$ neurons in the hidden layer. We show that the proposed algorithm is more robust to overfitting than a standard neural network with respect to the number of neurons of the last hidden layer. Furthermore, we show that the design of the proposed method makes it possible to reduce the number of neurons used without a significant loss of generalization ability, which allows the model to be adapted on the fly to different classification-speed requirements. Extensive experiments in classification and regression tasks, as well as in combination with a deep convolutional neural network, are carried out, showing better generalization performance than a standard neural network.
Tasks
Published 2019-09-26
URL https://arxiv.org/abs/1909.12098v1
PDF https://arxiv.org/pdf/1909.12098v1.pdf
PWC https://paperswithcode.com/paper/sequential-training-of-neural-networks-with
Repo
Framework
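
A compact sketch of the sequential scheme for squared loss, where the pseudo-residuals reduce to $y$ minus the current prediction. Each step fits a $K$-neuron network to the residuals; merging the $T$ blocks into a single hidden layer, as the paper describes, is omitted here and the additive form is kept instead.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def boost_nn(X, y, T=10, K=3):
    models, bias = [], y.mean()                    # constant initialization
    residual = y - bias
    for _ in range(T):
        m = MLPRegressor(hidden_layer_sizes=(K,), max_iter=2000).fit(X, residual)
        models.append(m)
        residual = residual - m.predict(X)         # pseudo-residuals for next step
    return bias, models

def predict(bias, models, X):
    return bias + sum(m.predict(X) for m in models)
```

Dropping the last few entries of `models` at prediction time is what lets the approach trade accuracy for classification speed on the fly.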

Normalizing Flows: An Introduction and Review of Current Methods

Title Normalizing Flows: An Introduction and Review of Current Methods
Authors Ivan Kobyzev, Simon Prince, Marcus A. Brubaker
Abstract Normalizing Flows are generative models which produce tractable distributions where both sampling and density evaluation can be efficient and exact. The goal of this survey article is to give a coherent and comprehensive review of the literature around the construction and use of Normalizing Flows for distribution learning. We aim to provide context and explanation of the models, review current state-of-the-art literature, and identify open questions and promising future directions.
Tasks
Published 2019-08-25
URL https://arxiv.org/abs/1908.09257v2
PDF https://arxiv.org/pdf/1908.09257v2.pdf
PWC https://paperswithcode.com/paper/normalizing-flows-introduction-and-ideas
Repo
Framework
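
The identity at the heart of the constructions this survey reviews is the change-of-variables formula: for a bijection $f$ mapping data $x$ to a latent $z$ with tractable density $p_Z$,

```latex
\log p_X(x) = \log p_Z\bigl(f(x)\bigr) + \log \left| \det \frac{\partial f(x)}{\partial x} \right|
```

Composing bijections simply adds their log-determinant terms, which is what makes both exact sampling and exact density evaluation feasible for deep flows.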

Leveraging Legacy Data to Accelerate Materials Design via Preference Learning

Title Leveraging Legacy Data to Accelerate Materials Design via Preference Learning
Authors Xiaolin Sun, Zhufeng Hou, Masato Sumita, Shinsuke Ishihara, Ryo Tamura, Koji Tsuda
Abstract Machine learning applications in materials science are often hampered by a shortage of experimental data. Integration with legacy data from past experiments is a viable way to solve the problem, but complex calibration is often necessary to use data obtained under different conditions. In this paper, we present a novel calibration-free strategy to enhance the performance of Bayesian optimization with preference learning. The entire learning process is based solely on pairwise comparison of quantities (i.e., higher or lower) within the same dataset, and experimental design can be done without comparing quantities across different datasets. We demonstrate that Bayesian optimization is significantly enhanced via the addition of legacy data for organic molecules and inorganic solid-state materials.
Tasks Calibration
Published 2019-10-25
URL https://arxiv.org/abs/1910.11516v1
PDF https://arxiv.org/pdf/1910.11516v1.pdf
PWC https://paperswithcode.com/paper/leveraging-legacy-data-to-accelerate
Repo
Framework
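
The calibration-free idea can be made concrete with a small helper that extracts supervision only from within-dataset comparisons, so measurements on different scales never need to be reconciled. Names and data layout below are hypothetical.

```python
def preference_pairs(datasets):
    """datasets: list of lists of (candidate, measured_quantity) tuples."""
    pairs = []
    for data in datasets:                          # never compare across datasets
        for i, (xi, yi) in enumerate(data):
            for xj, yj in data[i + 1:]:
                if yi != yj:
                    # Record (winner, loser); the raw scale of yi, yj drops out.
                    pairs.append((xi, xj) if yi > yj else (xj, xi))
    return pairs
```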

Asymptotic Normality and Variance Estimation For Supervised Ensembles

Title Asymptotic Normality and Variance Estimation For Supervised Ensembles
Authors Zhengze Zhou, Lucas Mentch, Giles Hooker
Abstract Ensemble methods based on bootstrapping have improved the predictive accuracy of base learners, but fail to provide a framework in which formal statistical inference can be conducted. Recent theoretical developments suggest taking subsamples without replacement and analyzing the resulting estimator in the context of a U-statistic, thus demonstrating asymptotic normality properties. However, we observe that current methods for variance estimation exhibit severe bias when the number of base learners is not large enough, compromising the validity of the resulting confidence intervals or hypothesis tests. This paper shows that similar asymptotics can be achieved by means of V-statistics, corresponding to taking subsamples with replacement. Further, we develop a bias-correction algorithm for estimating variance in the limiting distribution, which yields satisfactory results with a moderate number of base learners.
Tasks
Published 2019-12-02
URL https://arxiv.org/abs/1912.01089v1
PDF https://arxiv.org/pdf/1912.01089v1.pdf
PWC https://paperswithcode.com/paper/191201089
Repo
Framework
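
A sketch of the V-statistic construction under discussion: base learners trained on subsamples drawn with replacement. The naive between-learner spread at the end is shown only to fix ideas; it is the kind of estimator whose bias the paper corrects, and the correction itself is not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def v_ensemble(X, y, n_estimators=200, subsample=100, seed=0):
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_estimators):
        idx = rng.choice(len(X), size=subsample, replace=True)  # with replacement
        trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    return trees

def predict_with_spread(trees, X):
    preds = np.stack([t.predict(X) for t in trees])  # (n_estimators, n_points)
    return preds.mean(axis=0), preds.var(axis=0)     # naive, biased spread
```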

Vehicles Detection Based on Background Modeling

Title Vehicles Detection Based on Background Modeling
Authors Mohamed Shehata, Reda Abo-Al-Ez, Farid Zaghlool, Mohamed Taha Abou-Kreisha
Abstract Background image subtraction is a common approach that detects moving objects in a video sequence by finding significant differences between the video frames and a static background model. This paper presents a system that achieves vehicle detection by using a block-based background image subtraction algorithm followed by a deep-learning data-validation algorithm. The main idea is to segment the image into equal-size blocks and to model the static reference background image (SRBI) by calculating the variance between the pixels of each block and those of its counterpart block in the adjacent frame. The system is implemented with four different methods: Absolute Difference, Image Entropy, Exclusive OR (XOR), and Discrete Cosine Transform (DCT). The experimental results showed that the DCT method has the highest vehicle detection accuracy.
Tasks
Published 2019-01-13
URL http://arxiv.org/abs/1901.04077v1
PDF http://arxiv.org/pdf/1901.04077v1.pdf
PWC https://paperswithcode.com/paper/vehicles-detection-based-on-background
Repo
Framework
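
A sketch of the block-wise Absolute Difference variant, with illustrative block size and threshold (the paper's parameters and the deep-learning validation stage are not reproduced):

```python
import numpy as np

def moving_blocks(frame, background, block=16, thresh=12.0):
    """frame, background: (H, W) grayscale arrays with H, W divisible by block."""
    h, w = frame.shape
    diff = np.abs(frame.astype(float) - background.astype(float))
    # Mean absolute difference per block, via the reshape-into-tiles trick
    blocks = diff.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return blocks > thresh                         # True where a vehicle may be
```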

A theory of incremental compression

Title A theory of incremental compression
Authors Arthur Franz, Oleksandr Antonenko, Roman Soletskyi
Abstract The ability to find short representations, i.e. to compress data, is crucial for many intelligent systems. We present a theory of incremental compression showing that arbitrary data strings that can be described by a set of features can be compressed by searching for those features incrementally, which results in a partition of the information content of the string into a complete set of pairwise independent pieces. The description length of this partition turns out to be close to optimal in terms of the Kolmogorov complexity of the string. At the same time, the incremental nature of our method constitutes a major step toward faster compression compared to non-incremental versions of universal search, while still staying general. We further show that our concept of a feature is closely related to Martin-Löf randomness tests, thereby formalizing the meaning of “property” for computable objects.
Tasks
Published 2019-08-10
URL https://arxiv.org/abs/1908.03781v1
PDF https://arxiv.org/pdf/1908.03781v1.pdf
PWC https://paperswithcode.com/paper/a-theory-of-incremental-compression
Repo
Framework

Large-Scale Joint Topic, Sentiment & User Preference Analysis for Online Reviews

Title Large-Scale Joint Topic, Sentiment & User Preference Analysis for Online Reviews
Authors Xinli Yu, Zheng Chen, Wei-Shih Yang, Xiaohua Hu, Erjia Yan
Abstract This paper presents a non-trivial reconstruction of a previous joint topic-sentiment-preference review model, TSPRA, with a stick-breaking representation under the framework of variational inference (VI) and stochastic variational inference (SVI). TSPRA is a Gibbs-sampling based model that solves topics, word sentiments and user preferences altogether and has been shown to achieve good performance, but for a large dataset it can only learn from a relatively small sample. We develop the variational models vTSPRA and svTSPRA to improve running time, and our new approach is capable of processing millions of reviews. We rebuild the generative process, improve the rating regression, derive and present the coordinate-ascent updates of the variational parameters, and show that the time complexity of each iteration is theoretically linear in the corpus size; experiments on Amazon datasets show it converges faster than TSPRA and attains better results given the same amount of time. In addition, we tune svTSPRA into an online algorithm, ovTSPRA, that can monitor oscillations of sentiment and preference over time. Some interesting fluctuations are captured and possible explanations are provided. The results give strong visual evidence that user preference is better treated as a factor independent of sentiment.
Tasks
Published 2019-01-14
URL http://arxiv.org/abs/1901.04993v1
PDF http://arxiv.org/pdf/1901.04993v1.pdf
PWC https://paperswithcode.com/paper/large-scale-joint-topic-sentiment-user
Repo
Framework
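
For readers unfamiliar with the stick-breaking representation the reconstruction relies on, a truncated draw of mixture weights looks like this:

```python
import numpy as np

def stick_breaking(alpha, K, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=K)               # fraction broken off each step
    # Weight k gets v[k] times whatever stick length remains after steps < k.
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining                           # weights sum to < 1 (truncated)

# e.g. stick_breaking(alpha=2.0, K=10) -> 10 topic weights
```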

Gradient Information Guided Deraining with A Novel Network and Adversarial Training

Title Gradient Information Guided Deraining with A Novel Network and Adversarial Training
Authors Yinglong Wang, Haokui Zhang, Yu Liu, Qinfeng Shi, Bing Zeng
Abstract In recent years, deep learning based methods have made significant progress in rain removal. However, existing methods usually do not generalize well: almost all of them perform satisfactorily on removing a specific type of rain streak but may perform relatively poorly on other types. In this paper, aiming to remove multiple types of rain streaks from single images, we propose a novel deraining framework (GRASPP-GAN) with better generalization capacity. Specifically, a modified ResNet-18, which extracts the deep features of rainy images, and a revised ASPP structure, which adapts to the various shapes and sizes of rain streaks, are composed together to form the backbone of our deraining network. Taking into consideration the more prominent characteristics of rain streaks in the gradient domain, a gradient loss is introduced to help supervise the deraining training process, for which a Sobel convolution layer is built to extract gradient information flexibly. To further boost performance, an adversarial learning scheme is employed for the first time to train the proposed network. Extensive experiments on both real-world and synthetic datasets demonstrate that our method outperforms state-of-the-art deraining methods quantitatively and qualitatively. In addition, without any modifications, our proposed framework also achieves good visual performance on dehazing.
Tasks Rain Removal
Published 2019-10-09
URL https://arxiv.org/abs/1910.03839v1
PDF https://arxiv.org/pdf/1910.03839v1.pdf
PWC https://paperswithcode.com/paper/gradient-information-guided-deraining-with-a
Repo
Framework
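
The gradient-loss ingredient is easy to sketch: a fixed Sobel convolution extracts horizontal and vertical gradients, and an L1 penalty compares them between the derained output and the clean target. The exact formulation and loss weighting here are assumptions, not the paper's published choices.

```python
import torch
import torch.nn.functional as F

# Fixed Sobel kernels for x- and y-gradients, shaped (out=2, in=1, 3, 3)
SOBEL = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]],
                      [[[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]]]])

def gradient_loss(pred, target):
    """pred, target: (N, 1, H, W) grayscale images."""
    g_pred = F.conv2d(pred, SOBEL, padding=1)      # (N, 2, H, W): dx and dy
    g_target = F.conv2d(target, SOBEL, padding=1)
    return (g_pred - g_target).abs().mean()        # L1 in the gradient domain
```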