October 19, 2019

3021 words 15 mins read

Paper Group ANR 388

Contextual Topic Modeling For Dialog Systems

Title Contextual Topic Modeling For Dialog Systems
Authors Chandra Khatri, Rahul Goel, Behnam Hedayatnia, Angeliki Metanillou, Anushree Venkatesh, Raefer Gabriel, Arindam Mandal
Abstract Accurate prediction of conversation topics can be a valuable signal for creating coherent and engaging dialog systems. In this work, we focus on context-aware topic classification methods for identifying topics in free-form human-chatbot dialogs. We extend previous work on neural topic classification and unsupervised topic keyword detection by incorporating conversational context and dialog act features. On annotated data, we show that incorporating context and dialog acts yields relative gains of 35% in topic classification accuracy and 11% in unsupervised keyword detection recall for conversational interactions where topics frequently span multiple utterances. We show that topical metrics such as topical depth are highly correlated with dialog evaluation metrics such as coherence and engagement, implying that conversational topic models can predict user satisfaction. Our work on detecting conversation topics and keywords can be used to guide chatbots towards coherent dialog.
Tasks Chatbot, Topic Models
Published 2018-10-18
URL http://arxiv.org/abs/1810.08135v2
PDF http://arxiv.org/pdf/1810.08135v2.pdf
PWC https://paperswithcode.com/paper/contextual-topic-modeling-for-dialog-systems
Repo
Framework

Dimensionality Reduction and (Bucket) Ranking: a Mass Transportation Approach

Title Dimensionality Reduction and (Bucket) Ranking: a Mass Transportation Approach
Authors Mastane Achab, Anna Korba, Stephan Clémençon
Abstract Whereas most dimensionality reduction techniques (e.g. PCA, ICA, NMF) for multivariate data essentially rely on linear algebra to a certain extent, summarizing ranking data, viewed as realizations of a random permutation $\Sigma$ on a set of items indexed by $i\in\{1,\ldots,n\}$, is a great statistical challenge, due to the absence of vector space structure for the set of permutations $\mathfrak{S}_n$. It is the goal of this article to develop an original framework for possibly reducing the number of parameters required to describe the distribution of a statistical population composed of rankings/permutations, on the premise that the collection of items under study can be partitioned into subsets/buckets, such that, with high probability, items in a certain bucket are either all ranked higher or else all ranked lower than items in another bucket. In this context, $\Sigma$'s distribution can hopefully be represented in a sparse manner by a bucket distribution, i.e. a bucket ordering plus the ranking distributions within each bucket. More precisely, we introduce a dedicated distortion measure, based on a mass transportation metric, in order to quantify the accuracy of such representations. The performance of bucket orders minimizing an empirical version of the distortion is investigated through a rate bound analysis. Complexity penalization techniques are also considered to select the shape of a bucket order with minimum expected distortion. Beyond theoretical concepts and results, numerical experiments on real ranking data are displayed in order to provide empirical evidence of the relevance of the approach promoted.
Tasks Dimensionality Reduction
Published 2018-10-15
URL https://arxiv.org/abs/1810.06291v2
PDF https://arxiv.org/pdf/1810.06291v2.pdf
PWC https://paperswithcode.com/paper/dimensionality-reduction-and-bucket-ranking-a
Repo
Framework
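The parameter saving that motivates the bucket representation can be made concrete with a quick count: a general distribution over $n$ items has $n!-1$ free parameters, while a bucket distribution only needs the within-bucket ranking distributions once the bucket ordering is fixed. A minimal sketch (the helper names are illustrative, not from the paper):

```python
from math import factorial

def full_params(n):
    """Parameters needed to specify an arbitrary distribution over all n! permutations."""
    return factorial(n) - 1

def bucket_params(bucket_sizes):
    """Parameters for a bucket distribution: the bucket ordering is fixed,
    so only the within-bucket ranking distributions remain."""
    return sum(factorial(k) - 1 for k in bucket_sizes)

# 10 items: a full ranking distribution vs. five buckets of two items each
print(full_params(10))          # 3628799
print(bucket_params([2] * 5))   # 5
```

With a single bucket containing all items the two counts coincide, so the bucket representation strictly generalizes the full one.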

Detecting and interpreting myocardial infarction using fully convolutional neural networks

Title Detecting and interpreting myocardial infarction using fully convolutional neural networks
Authors Nils Strodthoff, Claas Strodthoff
Abstract Objective: We aim to provide an algorithm for the detection of myocardial infarction that operates directly on ECG data without any preprocessing and to investigate its decision criteria. Approach: We train an ensemble of fully convolutional neural networks on the PTB ECG dataset and apply state-of-the-art attribution methods. Main results: Our classifier reaches 93.3% sensitivity and 89.7% specificity evaluated using 10-fold cross-validation with sampling based on patients. The presented method outperforms state-of-the-art approaches and reaches the performance level of human cardiologists for detection of myocardial infarction. We are able to discriminate channel-specific regions that contribute most significantly to the neural network’s decision. Interestingly, the network’s decision is influenced by signs also recognized by human cardiologists as indicative of myocardial infarction. Significance: Our results demonstrate the high prospects of algorithmic ECG analysis for future clinical applications considering both its quantitative performance as well as the possibility of assessing decision criteria on a per-example basis, which enhances the comprehensibility of the approach.
Tasks
Published 2018-06-18
URL http://arxiv.org/abs/1806.07385v2
PDF http://arxiv.org/pdf/1806.07385v2.pdf
PWC https://paperswithcode.com/paper/detecting-and-interpreting-myocardial
Repo
Framework
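The reported 93.3% sensitivity and 89.7% specificity follow the standard confusion-matrix definitions; a quick sketch (the counts below are hypothetical, chosen only to reproduce those rates):

```python
def sensitivity(tp, fn):
    """True positive rate: fraction of actual positives correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: fraction of actual negatives correctly rejected."""
    return tn / (tn + fp)

# Hypothetical counts matching the reported rates
print(sensitivity(933, 67))   # 0.933
print(specificity(897, 103))  # 0.897
```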

Machine Learning for Forecasting Mid Price Movement using Limit Order Book Data

Title Machine Learning for Forecasting Mid Price Movement using Limit Order Book Data
Authors Paraskevi Nousi, Avraam Tsantekidis, Nikolaos Passalis, Adamantios Ntakaris, Juho Kanniainen, Anastasios Tefas, Moncef Gabbouj, Alexandros Iosifidis
Abstract Forecasting the movements of stock prices is one of the most challenging problems in financial markets analysis. In this paper, we use Machine Learning (ML) algorithms for the prediction of future price movements using limit order book data. Two different sets of features are combined and evaluated: handcrafted features based on the raw order book data and features extracted by ML algorithms, resulting in feature vectors with highly variant dimensionalities. Three classifiers are evaluated using combinations of these sets of features on two different evaluation setups and three prediction scenarios. Even though the large scale and high frequency nature of the limit order book poses several challenges, the scope of the conducted experiments and the significance of the experimental results indicate that Machine Learning is highly suited to this task, carving the path towards future research in this field.
Tasks
Published 2018-09-19
URL http://arxiv.org/abs/1809.07861v2
PDF http://arxiv.org/pdf/1809.07861v2.pdf
PWC https://paperswithcode.com/paper/machine-learning-for-forecasting-mid-price
Repo
Framework
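The prediction target, mid-price movement, derives directly from the best quotes in the order book. A minimal sketch of the mid price and a three-class movement label; the thresholding rule and `eps` value here are assumptions, not the paper's exact labeling scheme:

```python
import numpy as np

def mid_price(best_bid, best_ask):
    """Mid price: midpoint of the best bid and best ask quotes."""
    return (best_bid + best_ask) / 2.0

def label_movement(mids, horizon=1, eps=1e-6):
    """Three-class movement labels over a horizon: 1 = up, -1 = down, 0 = stationary."""
    diff = mids[horizon:] - mids[:-horizon]
    return np.where(diff > eps, 1, np.where(diff < -eps, -1, 0))

bids = np.array([100.0, 100.1, 100.1, 100.0])
asks = np.array([100.2, 100.3, 100.3, 100.2])
mids = mid_price(bids, asks)
print(label_movement(mids))  # [ 1  0 -1]
```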

Code-switched Language Models Using Dual RNNs and Same-Source Pretraining

Title Code-switched Language Models Using Dual RNNs and Same-Source Pretraining
Authors Saurabh Garg, Tanmay Parekh, Preethi Jyothi
Abstract This work focuses on building language models (LMs) for code-switched text. We propose two techniques that significantly improve these LMs: 1) A novel recurrent neural network unit with dual components that focus on each language in the code-switched text separately 2) Pretraining the LM using synthetic text from a generative model estimated using the training data. We demonstrate the effectiveness of our proposed techniques by reporting perplexities on a Mandarin-English task and derive significant reductions in perplexity.
Tasks
Published 2018-09-06
URL http://arxiv.org/abs/1809.01962v1
PDF http://arxiv.org/pdf/1809.01962v1.pdf
PWC https://paperswithcode.com/paper/code-switched-language-models-using-dual-rnns
Repo
Framework
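The idea of a dual-component recurrent unit can be sketched as per-step weight selection by language tag: each token updates the hidden state through the component for its language. Everything below (dimensions, the plain tanh update, the two-language assumption) is an illustrative simplification, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class DualRNNCell:
    """Sketch of a recurrent unit with one weight set per language; the
    language tag of the current token selects which set is applied."""
    def __init__(self, dim):
        self.W = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(2)]
        self.U = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(2)]

    def step(self, x, h, lang):
        # Apply the input/recurrent weights of the token's language
        return np.tanh(self.W[lang] @ x + self.U[lang] @ h)

cell = DualRNNCell(4)
h = np.zeros(4)
# Code-switched sequence: (token embedding, language id) pairs
tokens = [(rng.standard_normal(4), 0), (rng.standard_normal(4), 1)]
for x, lang in tokens:
    h = cell.step(x, h, lang)
```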

Clearing noisy annotations for computed tomography imaging

Title Clearing noisy annotations for computed tomography imaging
Authors Roman Khudorozhkov, Alexander Koryagin, Alexey Kozhevin
Abstract One of the problems on the way to successful implementation of neural networks is the quality of annotation. For instance, different annotators can annotate images in different ways, and very often their decisions do not match exactly and in extreme cases are even mutually exclusive, which results in noisy annotations and, consequently, inaccurate predictions. To avoid this problem in the task of computed tomography (CT) imaging segmentation, we propose a clearing algorithm for annotations. It consists of three stages: (1) annotator scoring, which assigns a higher confidence level to better annotators; (2) nodule scoring, which assigns a higher confidence level to nodules confirmed by good annotators; (3) nodule merging, which aggregates annotations according to nodule confidence. In general, the algorithm can be applied to many different tasks (namely, binary and multi-class semantic segmentation, and also, with trivial adjustments, to classification and regression) where there are several annotators labeling each image.
Tasks Computed Tomography (CT), Semantic Segmentation
Published 2018-07-23
URL http://arxiv.org/abs/1807.09151v1
PDF http://arxiv.org/pdf/1807.09151v1.pdf
PWC https://paperswithcode.com/paper/clearing-noisy-annotations-for-computed
Repo
Framework
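The three stages compose into a simple pipeline; a minimal sketch in which the specific scoring rules (agreement fraction for annotators, score summation and thresholding for nodules) are assumptions for illustration, not the paper's exact formulas:

```python
def score_annotators(annotations, consensus):
    """Stage 1: annotator confidence = fraction of their nodules
    confirmed by a consensus set (illustrative scoring rule)."""
    scores = {}
    for annotator, nodules in annotations.items():
        hits = sum(1 for n in nodules if n in consensus)
        scores[annotator] = hits / len(nodules) if nodules else 0.0
    return scores

def score_nodules(annotations, annotator_scores):
    """Stage 2: nodule confidence = sum of scores of annotators marking it."""
    conf = {}
    for annotator, nodules in annotations.items():
        for n in nodules:
            conf[n] = conf.get(n, 0.0) + annotator_scores[annotator]
    return conf

def merge_nodules(nodule_conf, threshold):
    """Stage 3: keep nodules whose aggregated confidence passes a threshold."""
    return {n for n, c in nodule_conf.items() if c >= threshold}

annotations = {"a1": ["n1", "n2"], "a2": ["n1"], "a3": ["n3"]}
consensus = {"n1"}
s = score_annotators(annotations, consensus)
kept = merge_nodules(score_nodules(annotations, s), threshold=1.0)
print(kept)  # {'n1'}
```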

A Weighted Sparse Sampling and Smoothing Frame Transition Approach for Semantic Fast-Forward First-Person Videos

Title A Weighted Sparse Sampling and Smoothing Frame Transition Approach for Semantic Fast-Forward First-Person Videos
Authors Michel Melo Silva, Washington Luis Souza Ramos, Joao Klock Ferreira, Felipe Cadar Chamone, Mario Fernando Montenegro Campos, Erickson Rangel Nascimento
Abstract Thanks to the advances in the technology of low-cost digital cameras and the popularity of the self-recording culture, the amount of visual data on the Internet is going to the opposite side of the available time and patience of the users. Thus, most of the uploaded videos are doomed to be forgotten and unwatched in a computer folder or website. In this work, we address the problem of creating smooth fast-forward videos without losing the relevant content. We present a new adaptive frame selection formulated as a weighted minimum reconstruction problem, which combined with a smoothing frame transition method accelerates first-person videos emphasizing the relevant segments and avoids visual discontinuities. The experiments show that our method is able to fast-forward videos to retain as much relevant information and smoothness as the state-of-the-art techniques in less time. We also present a new 80-hour multimodal (RGB-D, IMU, and GPS) dataset of first-person videos with annotations for recorder profile, frame scene, activities, interaction, and attention.
Tasks
Published 2018-02-23
URL http://arxiv.org/abs/1802.08722v4
PDF http://arxiv.org/pdf/1802.08722v4.pdf
PWC https://paperswithcode.com/paper/a-weighted-sparse-sampling-and-smoothing-1
Repo
Framework

Detecting Learning vs Memorization in Deep Neural Networks using Shared Structure Validation Sets

Title Detecting Learning vs Memorization in Deep Neural Networks using Shared Structure Validation Sets
Authors Elias Chaibub Neto
Abstract The roles played by learning and memorization represent an important topic in deep learning research. Recent work on this subject has shown that the optimization behavior of DNNs trained on shuffled labels is qualitatively different from DNNs trained with real labels. Here, we propose a novel permutation approach that can differentiate memorization from learning in deep neural networks (DNNs) trained as usual (i.e., using the real labels to guide the learning, rather than shuffled labels). The evaluation of whether the DNN has learned and/or memorized happens in a separate step, where we compare the predictive performance of a shallow classifier trained with the features learned by the DNN against multiple instances of the same classifier, trained on the same input, but using shuffled labels as outputs. By evaluating these shallow classifiers on validation sets that share structure with the training set, we are able to tell apart learning from memorization. Application of our permutation approach to multi-layer perceptrons and convolutional neural networks trained on image data corroborated many findings from other groups. Most importantly, our illustrations also uncovered interesting dynamic patterns about how DNNs memorize over increasing numbers of training epochs, and support the surprising result that DNNs are still able to learn, rather than only memorize, when trained with pure Gaussian noise as input.
Tasks
Published 2018-02-21
URL http://arxiv.org/abs/1802.07714v1
PDF http://arxiv.org/pdf/1802.07714v1.pdf
PWC https://paperswithcode.com/paper/detecting-learning-vs-memorization-in-deep
Repo
Framework
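The real-versus-shuffled comparison can be sketched end to end: if the features encode learned structure, a shallow classifier fit with real labels should clearly beat its shuffled-label counterparts on a validation set. Here synthetic separable features stand in for DNN-extracted ones, and a nearest-centroid rule stands in for the paper's shallow classifier; both are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_acc(Xtr, ytr, Xval, yval):
    """Validation accuracy of a shallow nearest-centroid classifier."""
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(Xval[:, None, :] - centroids[None], axis=2)
    return float((d.argmin(axis=1) == yval).mean())

# Stand-in for features extracted by the DNN: two well-separated classes
n = 200
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, 5)) + 3.0 * y[:, None]
Xtr, ytr, Xval, yval = X[:150], y[:150], X[150:], y[150:]

acc_real = nearest_centroid_acc(Xtr, ytr, Xval, yval)
acc_shuf = nearest_centroid_acc(Xtr, rng.permutation(ytr), Xval, yval)
# Learned structure: the real-label classifier beats the shuffled-label one
print(acc_real > acc_shuf)  # True
```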

Context-Aware Attention for Understanding Twitter Abuse

Title Context-Aware Attention for Understanding Twitter Abuse
Authors Tuhin Chakrabarty, Kilol Gupta
Abstract The original goal of any social media platform is to facilitate users to indulge in healthy and meaningful conversations. But more often than not, it has been found that it becomes an avenue for wanton attacks. We want to alleviate this issue, and hence we provide a detailed analysis of how abusive behavior can be monitored on Twitter. The complexity of natural language constructs makes this task challenging. We show how applying contextual attention to Long Short-Term Memory networks helps us achieve near state-of-the-art results on multiple benchmark abuse detection datasets from Twitter.
Tasks Abuse Detection
Published 2018-09-24
URL https://arxiv.org/abs/1809.08726v2
PDF https://arxiv.org/pdf/1809.08726v2.pdf
PWC https://paperswithcode.com/paper/context-aware-attention-for-understanding
Repo
Framework
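Contextual attention over the recurrent outputs can be sketched as scoring each hidden state against a context vector and summarizing with the resulting weights. Dot-product scoring and the toy dimensions below are assumptions; the paper's exact attention form may differ:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attend(hidden_states, context):
    """Score each hidden state against a context vector and return the
    attention weights plus the attention-weighted summary."""
    scores = hidden_states @ context
    weights = softmax(scores)
    return weights, weights @ hidden_states

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # per-token hidden states
ctx = np.array([1.0, 0.0])                          # context vector
w, summary = attend(H, ctx)
print(w.round(3))  # [0.422 0.155 0.422]
```

Tokens whose states align with the context receive more weight in the summary, which then feeds the classifier.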

Simulating the future urban growth in Xiongan New Area: an upcoming big city in China

Title Simulating the future urban growth in Xiongan New Area: an upcoming big city in China
Authors Xun Liang
Abstract China announced the creation of the Xiongan New Area in Hebei on April 1, 2017. Thus a new megacity about 110 km southwest of Beijing will emerge. Xiongan New Area is of great practical and historical significance for transferring Beijing's non-capital functions. Simulating the urban dynamics in Xiongan New Area can help planners decide where to build the new urban area and further manage future urban growth. However, little research has focused on future urban development in Xiongan New Area. In addition, previous models are unable to simulate the urban dynamics there, because there is no original high-density urban land from which these models can learn the transition rules. In this study, we propose a C-FLUS model to solve such problems. This framework is implemented by coupling a modified Cellular Automata (CA) model. An elaborately designed random planted seeds mechanism based on local maxima is introduced in the CA model to better simulate the emergence of new urban areas. Through an analysis of the current driving forces, C-FLUS can detect the potential start zone and simulate urban development under different scenarios in Xiongan New Area. Our study shows that the new urban area is most likely to emerge in the northwest of Xiongxian, and it will rapidly extend to Rongcheng and Anxin until it covers almost the entire northern part of Xiongan New Area. Moreover, the method can help planners to evaluate the impact of urban expansion in Xiongan New Area.
Tasks
Published 2018-03-16
URL http://arxiv.org/abs/1803.06916v1
PDF http://arxiv.org/pdf/1803.06916v1.pdf
PWC https://paperswithcode.com/paper/simulating-the-future-urban-growth-in-xiongan
Repo
Framework
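The random planted seeds mechanism can be sketched as picking seed cells among local maxima of a development-probability surface; the 8-neighbour maximum test and probability-weighted sampling below are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_maxima(prob):
    """Boolean mask of cells whose development probability exceeds all 8 neighbours'."""
    padded = np.pad(prob, 1, constant_values=-np.inf)
    h, w = prob.shape
    neighbours = np.stack([
        padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
        for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)
    ])
    return prob > neighbours.max(axis=0)

def plant_seeds(prob, n_seeds):
    """Randomly plant urban seeds among the local maxima, weighted by probability."""
    ii, jj = np.nonzero(local_maxima(prob))
    p = prob[ii, jj]
    pick = rng.choice(len(ii), size=min(n_seeds, len(ii)), replace=False, p=p / p.sum())
    return list(zip(ii[pick], jj[pick]))

prob = rng.random((20, 20))  # stand-in development-probability surface
seeds = plant_seeds(prob, 3)
```

Planted seeds give the CA transition rules a starting point in areas with no existing urban cells, which is exactly the situation in Xiongan New Area.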

A Statistical Approach to Assessing Neural Network Robustness

Title A Statistical Approach to Assessing Neural Network Robustness
Authors Stefan Webb, Tom Rainforth, Yee Whye Teh, M. Pawan Kumar
Abstract We present a new approach to assessing the robustness of neural networks based on estimating the proportion of inputs for which a property is violated. Specifically, we estimate the probability of the event that the property is violated under an input model. Our approach critically varies from the formal verification framework in that when the property can be violated, it provides an informative notion of how robust the network is, rather than just the conventional assertion that the network is not verifiable. Furthermore, it provides an ability to scale to larger networks than formal verification approaches. Though the framework still provides a formal guarantee of satisfiability whenever it successfully finds one or more violations, these advantages do come at the cost of only providing a statistical estimate of unsatisfiability whenever no violation is found. Key to the practical success of our approach is an adaptation of multi-level splitting, a Monte Carlo approach for estimating the probability of rare events, to our statistical robustness framework. We demonstrate that our approach is able to emulate formal verification procedures on benchmark problems, while scaling to larger networks and providing reliable additional information in the form of accurate estimates of the violation probability.
Tasks
Published 2018-11-17
URL http://arxiv.org/abs/1811.07209v4
PDF http://arxiv.org/pdf/1811.07209v4.pdf
PWC https://paperswithcode.com/paper/a-statistical-approach-to-assessing-neural
Repo
Framework
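The quantity being estimated, the probability that a property is violated under an input model, can be sketched with naive Monte Carlo; the toy "network" and bound below are made up, and the paper's contribution is precisely that multi-level splitting replaces this naive estimator when violations are too rare for it to work:

```python
import numpy as np

rng = np.random.default_rng(0)

def violation_probability(property_holds, sample_input, n=100_000):
    """Naive Monte Carlo estimate of P(property violated) under an input model.
    Multi-level splitting targets the same quantity but stays accurate
    when violations are rare events."""
    x = sample_input(n)
    return float((~property_holds(x)).mean())

# Toy "network": the property holds when |f(x)| stays below a bound
f = lambda x: 2.0 * x + 1.0
holds = lambda x: np.abs(f(x)) < 4.0
est = violation_probability(holds, lambda n: rng.uniform(-2, 2, n))
print(round(est, 3))  # close to the true value 0.125
```

Unlike a verifier's binary verdict, the estimate itself conveys how robust the model is when violations exist.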

Domain Adaptive Generation of Aircraft on Satellite Imagery via Simulated and Unsupervised Learning

Title Domain Adaptive Generation of Aircraft on Satellite Imagery via Simulated and Unsupervised Learning
Authors Junghoon Seo, Seunghyun Jeon, Taegyun Jeon
Abstract Object detection and classification of aircraft are among the most important tasks in satellite image analysis. The success of modern detection and classification methods has been based on machine learning and deep learning. One of the key requirements for those learning processes is a huge amount of training data. However, aircraft data are scarce, since the targets are involved in military actions and operations. Considering the characteristics of satellite imagery, this paper attempts to provide a framework for a simulated and unsupervised methodology without any additional supervision or physical assumptions. Finally, the qualitative and quantitative analysis revealed a potential to replenish insufficient data for machine learning platforms for satellite image analysis.
Tasks Object Detection
Published 2018-06-08
URL http://arxiv.org/abs/1806.03002v1
PDF http://arxiv.org/pdf/1806.03002v1.pdf
PWC https://paperswithcode.com/paper/domain-adaptive-generation-of-aircraft-on
Repo
Framework

Comparison of various image fusion methods for impervious surface classification from VNREDSat-1

Title Comparison of various image fusion methods for impervious surface classification from VNREDSat-1
Authors Hung V. Luu, Manh V. Pham, Chuc D. Man, Hung Q. Bui, Thanh T. N. Nguyen
Abstract Impervious surface is an important indicator for urban development monitoring. Accurate urban impervious surface mapping with VNREDSat-1 remains challenging due to spectral diversity not captured by the individual PAN image. In this article, five multi-resolution image fusion techniques were compared for the classification of urban impervious surfaces. The results show that, for the VNREDSat-1 dataset, the UNB and Wavelet transform methods are the best techniques for preserving the spatial and spectral information of the original MS image, respectively. However, the UNB technique gives the best results when it comes to impervious surface classification, especially in the case of shadow areas included in the non-impervious surface group.
Tasks
Published 2018-03-06
URL http://arxiv.org/abs/1803.02326v2
PDF http://arxiv.org/pdf/1803.02326v2.pdf
PWC https://paperswithcode.com/paper/comparison-of-various-image-fusion-methods
Repo
Framework

Adversarial Attacks on Stochastic Bandits

Title Adversarial Attacks on Stochastic Bandits
Authors Kwang-Sung Jun, Lihong Li, Yuzhe Ma, Xiaojin Zhu
Abstract We study adversarial attacks that manipulate the reward signals to control the actions chosen by a stochastic multi-armed bandit algorithm. We propose the first attack against two popular bandit algorithms: $\epsilon$-greedy and UCB, \emph{without} knowledge of the mean rewards. The attacker is able to spend only logarithmic effort, multiplied by a problem-specific parameter that becomes smaller as the bandit problem gets easier to attack. The result means the attacker can easily hijack the behavior of the bandit algorithm to promote or obstruct certain actions, say, a particular medical treatment. As bandits are seeing increasingly wide use in practice, our study exposes a significant security threat.
Tasks
Published 2018-10-29
URL http://arxiv.org/abs/1810.12188v1
PDF http://arxiv.org/pdf/1810.12188v1.pdf
PWC https://paperswithcode.com/paper/adversarial-attacks-on-stochastic-bandits
Repo
Framework
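The hijacking effect can be sketched against ε-greedy: by depressing the observed rewards of every non-target arm, the attacker makes the algorithm lock onto a suboptimal arm. The constant-offset attack below is a simplification of the paper's scheme (which spends only logarithmic total effort), and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(rewards_fn, n_arms, T, eps=0.1):
    """Standard epsilon-greedy bandit: explore with prob. eps, else exploit."""
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    pulls = []
    for _ in range(T):
        arm = rng.integers(n_arms) if rng.random() < eps else int(means.argmax())
        r = rewards_fn(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # running mean update
        pulls.append(arm)
    return np.array(pulls)

true_means = np.array([0.9, 0.2])
target_arm = 1  # the suboptimal arm the attacker wants promoted

def attacked_reward(arm):
    r = true_means[arm] + 0.1 * rng.standard_normal()
    # Attack: depress the observed reward of every non-target arm
    return r - 1.0 if arm != target_arm else r

pulls = epsilon_greedy(attacked_reward, 2, T=2000)
print((pulls == target_arm).mean())  # the suboptimal arm dominates
```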

Policy Evaluation and Optimization with Continuous Treatments

Title Policy Evaluation and Optimization with Continuous Treatments
Authors Nathan Kallus, Angela Zhou
Abstract We study the problem of policy evaluation and learning from batched contextual bandit data when treatments are continuous, going beyond previous work on discrete treatments. Previous work for discrete treatment/action spaces focuses on inverse probability weighting (IPW) and doubly robust (DR) methods that use a rejection sampling approach for evaluation and the equivalent weighted classification problem for learning. In the continuous setting, this reduction fails as we would almost surely reject all observations. To tackle the case of continuous treatments, we extend the IPW and DR approaches to the continuous setting using a kernel function that leverages treatment proximity to attenuate discrete rejection. Our policy estimator is consistent and we characterize the optimal bandwidth. The resulting continuous policy optimizer (CPO) approach using our estimator achieves convergent regret and approaches the best-in-class policy for learnable policy classes. We demonstrate that the estimator performs well and, in particular, outperforms a discretization-based benchmark. We further study the performance of our policy optimizer in a case study on personalized dosing based on a dataset of Warfarin patients, their covariates, and final therapeutic doses. Our learned policy outperforms benchmarks and nears the oracle-best linear policy.
Tasks
Published 2018-02-16
URL http://arxiv.org/abs/1802.06037v1
PDF http://arxiv.org/pdf/1802.06037v1.pdf
PWC https://paperswithcode.com/paper/policy-evaluation-and-optimization-with
Repo
Framework
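The kernel extension of IPW can be sketched directly: the exact-match indicator used for discrete actions is replaced by a kernel of the distance between the logged treatment and the policy's treatment. The Gaussian kernel, the uniform logging policy, and the toy outcome model below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_ipw_value(y, t, t_pi, propensity_density, h=0.1):
    """Kernel-smoothed IPW estimate of the value of a deterministic policy
    assigning treatment t_pi[i] to unit i. A Gaussian kernel of bandwidth h
    replaces the exact-match indicator of the discrete-action estimator."""
    k = np.exp(-0.5 * ((t - t_pi) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return float(np.mean(k * y / propensity_density))

# Toy logged data: treatments uniform on [-1, 2] (density 1/3);
# the outcome is highest when the treatment matches the covariate x
n = 20_000
x = rng.random(n)
t = rng.uniform(-1, 2, n)
y = 1.0 - (t - x) ** 2
est = kernel_ipw_value(y, t, t_pi=x, propensity_density=1.0 / 3.0)
print(round(est, 2))  # close to the true policy value of 1.0
```

Shrinking the bandwidth reduces the smoothing bias but raises the variance, which is why the choice of an optimal bandwidth matters in the paper's analysis.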