October 16, 2019

2950 words 14 mins read

Paper Group ANR 1128



Deep Recurrent Neural Networks for Product Attribute Extraction in eCommerce

Title Deep Recurrent Neural Networks for Product Attribute Extraction in eCommerce
Authors Bodhisattwa Prasad Majumder, Aditya Subramanian, Abhinandan Krishnan, Shreyansh Gandhi, Ajinkya More
Abstract Extracting accurate attribute qualities from product titles is a vital component of providing eCommerce customers with a rewarding online shopping experience via enriched faceted search. We demonstrate the potential of deep recurrent networks in this domain, primarily models such as Bidirectional LSTMs and Bidirectional LSTM-CRFs, with and without an attention mechanism. These models improve overall F1 scores over the previous benchmarks (More et al.) by at least 0.0391, achieving an overall precision of 97.94%, a recall of 94.12%, and an F1 score of 0.9599. This lets us cover a significant share of important product facets and attributes, which not only shows the efficacy of deep recurrent models over earlier machine learning benchmarks but also greatly enhances the overall customer experience while shopping online.
Tasks
Published 2018-03-29
URL http://arxiv.org/abs/1803.11284v1
PDF http://arxiv.org/pdf/1803.11284v1.pdf
PWC https://paperswithcode.com/paper/deep-recurrent-neural-networks-for-product
Repo
Framework
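
To make the modeling concrete, here is a minimal sketch of a BiLSTM sequence tagger of the kind the abstract describes, written in PyTorch. It omits the CRF layer and the attention mechanism, and the vocabulary size, tag set, and dimensions are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Minimal BiLSTM sequence tagger for attribute extraction (no CRF layer)."""
    def __init__(self, vocab_size, tagset_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)   # (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)         # (batch, seq_len, 2 * hidden_dim)
        return self.fc(h)           # per-token tag logits

# Toy usage: tag a batch of two product titles (token ids are hypothetical).
model = BiLSTMTagger(vocab_size=5000, tagset_size=7)
titles = torch.randint(1, 5000, (2, 12))
logits = model(titles)
tags = logits.argmax(dim=-1)        # predicted BIO-style tag per token
print(tags.shape)                   # torch.Size([2, 12])
```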

Automated Detection of Adverse Drug Reactions in the Biomedical Literature Using Convolutional Neural Networks and Biomedical Word Embeddings

Title Automated Detection of Adverse Drug Reactions in the Biomedical Literature Using Convolutional Neural Networks and Biomedical Word Embeddings
Authors Diego Saldana Miranda
Abstract Monitoring the biomedical literature for cases of Adverse Drug Reactions (ADRs) is a critically important and time-consuming task in pharmacovigilance. The development of computer-assisted approaches to aid this process in different forms has been the subject of many recent works. One particular area that has shown promise is the use of Deep Neural Networks, in particular Convolutional Neural Networks (CNNs), for the detection of ADR-relevant sentences. Using token-level convolutions and general-purpose word embeddings, this architecture has shown good performance relative to more traditional models as well as Long Short-Term Memory (LSTM) models. In this work, we evaluate and compare two different CNN architectures using the ADE corpus. In addition, we show that by de-duplicating the ADR-relevant sentences, we can greatly reduce overoptimism in the classification results. Finally, we evaluate the use of word embeddings specifically developed for biomedical text and show that they lead to better performance on this task.
Tasks Word Embeddings
Published 2018-04-24
URL http://arxiv.org/abs/1804.09148v1
PDF http://arxiv.org/pdf/1804.09148v1.pdf
PWC https://paperswithcode.com/paper/automated-detection-of-adverse-drug-reactions
Repo
Framework
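
As a rough illustration of the token-level CNN architecture the abstract refers to, here is a Kim-style sentence classifier in PyTorch. The kernel widths, filter counts, and embedding size are assumptions; in the paper's setting, the embedding matrix would be initialized from pretrained biomedical word vectors rather than learned from scratch.

```python
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    """Token-level CNN sentence classifier (Kim-2014 style) for ADR detection."""
    def __init__(self, vocab_size, embed_dim=200, n_filters=100,
                 kernel_sizes=(3, 4, 5), n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, n_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq)
        # Convolve, apply ReLU, then max-pool over time for each kernel width.
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))   # ADR / non-ADR logits

model = SentenceCNN(vocab_size=20000)
batch = torch.randint(1, 20000, (4, 40))           # 4 sentences, 40 tokens each
print(model(batch).shape)                          # torch.Size([4, 2])
```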

SfMLearner++: Learning Monocular Depth & Ego-Motion using Meaningful Geometric Constraints

Title SfMLearner++: Learning Monocular Depth & Ego-Motion using Meaningful Geometric Constraints
Authors Vignesh Prasad, Brojeshwar Bhowmick
Abstract Most geometric approaches to monocular Visual Odometry (VO) provide robust pose estimates but only sparse or semi-dense depth estimates. Of late, deep methods have shown good performance in generating dense depths and VO from monocular images by optimizing the photometric consistency between images. Despite being intuitive, a naive photometric loss does not ensure proper pixel correspondences between two views, which is the key factor for accurate depth and relative pose estimation. It is well known that simply minimizing such an error is prone to failure. We propose a method using epipolar constraints to make the learning more geometrically sound. We use the Essential matrix, obtained using Nister’s Five Point Algorithm, to enforce meaningful geometric constraints on the loss, rather than using it as labels for training. Our method, though simple, is more geometrically meaningful and uses fewer parameters, yet gives performance comparable to state-of-the-art methods that use complex losses and large networks, showing the effectiveness of epipolar constraints. Such a geometrically constrained learning method succeeds even in cases where simply minimizing the photometric error would fail.
Tasks Monocular Visual Odometry, Visual Odometry
Published 2018-12-20
URL http://arxiv.org/abs/1812.08370v1
PDF http://arxiv.org/pdf/1812.08370v1.pdf
PWC https://paperswithcode.com/paper/sfmlearner-learning-monocular-depth-ego
Repo
Framework
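
The key idea, penalizing violations of the epipolar constraint x2ᵀ E x1 = 0 rather than using the essential matrix as a training label, can be sketched in a few lines of PyTorch. This is a simplified stand-in for the paper's loss term; the matrix and correspondences below are random placeholders.

```python
import torch

def epipolar_loss(E, x1, x2):
    """Penalize deviation from the epipolar constraint x2^T E x1 = 0.

    E  : (3, 3) essential matrix, e.g. from Nister's five-point algorithm
    x1 : (N, 3) homogeneous normalized coordinates in view 1
    x2 : (N, 3) corresponding coordinates in view 2
    """
    residual = torch.einsum('ni,ij,nj->n', x2, E, x1)
    return residual.abs().mean()

# Toy usage with random correspondences (in practice x1/x2 come from the
# predicted depth and relative pose, and E from a five-point solver).
E = torch.randn(3, 3)
x1 = torch.cat([torch.randn(8, 2), torch.ones(8, 1)], dim=1)
x2 = torch.cat([torch.randn(8, 2), torch.ones(8, 1)], dim=1)
print(epipolar_loss(E, x1, x2))
```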

Adaptive Grey-Box Fuzz-Testing with Thompson Sampling

Title Adaptive Grey-Box Fuzz-Testing with Thompson Sampling
Authors Siddharth Karamcheti, Gideon Mann, David Rosenberg
Abstract Fuzz testing, or “fuzzing,” refers to a widely deployed class of techniques for testing programs by generating a set of inputs for the express purpose of finding bugs and identifying security flaws. Grey-box fuzzing, the most popular fuzzing strategy, combines light program instrumentation with a data-driven process to generate new program inputs. In this work, we present a machine learning approach that builds on AFL, the preeminent grey-box fuzzer, by adaptively learning a probability distribution over its mutation operators on a program-specific basis. These operators, which are selected uniformly at random in AFL and in mutational fuzzers generally, dictate how new inputs are generated, a core part of the fuzzer’s efficacy. Our main contributions are two-fold: First, we show that a sampling distribution over mutation operators estimated from training programs can significantly improve the performance of AFL. Second, we introduce a Thompson Sampling, bandit-based optimization approach that fine-tunes the mutator distribution adaptively during the course of fuzzing an individual program. A set of experiments across complex programs demonstrates that tuning the mutation operator distribution generates sets of inputs that yield significantly higher code coverage and find more crashes, faster and more reliably, than both baseline versions of AFL and other AFL-based learning approaches.
Tasks
Published 2018-08-24
URL http://arxiv.org/abs/1808.08256v1
PDF http://arxiv.org/pdf/1808.08256v1.pdf
PWC https://paperswithcode.com/paper/adaptive-grey-box-fuzz-testing-with-thompson
Repo
Framework
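
A minimal sketch of the bandit component, assuming a Beta-Bernoulli model of each mutation operator's chance of producing an "interesting" input (e.g. new coverage). The operator names and the success probabilities in the toy loop are hypothetical; AFL's real operator set and reward signal are richer.

```python
import random

class ThompsonMutatorScheduler:
    """Beta-Bernoulli Thompson sampling over a fuzzer's mutation operators.

    Each operator keeps a Beta(successes + 1, failures + 1) posterior over
    its probability of producing an 'interesting' input.
    """
    def __init__(self, operators):
        self.stats = {op: [1, 1] for op in operators}  # [alpha, beta]

    def choose(self):
        # Sample a success rate from each posterior; pick the argmax.
        draws = {op: random.betavariate(a, b)
                 for op, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, op, interesting):
        if interesting:
            self.stats[op][0] += 1   # success: increment alpha
        else:
            self.stats[op][1] += 1   # failure: increment beta

# Toy fuzzing loop with hypothetical operators and a stand-in coverage oracle.
sched = ThompsonMutatorScheduler(["bitflip", "arith", "havoc", "splice"])
true_rates = {"bitflip": 0.02, "arith": 0.05, "havoc": 0.10, "splice": 0.01}
for _ in range(1000):
    op = sched.choose()
    sched.update(op, random.random() < true_rates[op])
print(sched.stats)  # 'havoc' should accumulate the most pulls and successes
```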

Retrieve and Refine: Improved Sequence Generation Models For Dialogue

Title Retrieve and Refine: Improved Sequence Generation Models For Dialogue
Authors Jason Weston, Emily Dinan, Alexander H. Miller
Abstract Sequence generation models for dialogue are known to have several problems: they tend to produce short, generic sentences that are uninformative and unengaging. Retrieval models, on the other hand, can surface interesting responses but are restricted to the given retrieval set, leading to erroneous replies that cannot be tuned to the specific context. In this work we develop a model that combines the two approaches to avoid both their deficiencies: first retrieve a response and then refine it, with the final sequence generator treating the retrieval as additional context. We show on the recent CONVAI2 challenge task that our approach produces responses superior to both standard retrieval and generation models in human evaluations.
Tasks
Published 2018-08-14
URL http://arxiv.org/abs/1808.04776v2
PDF http://arxiv.org/pdf/1808.04776v2.pdf
PWC https://paperswithcode.com/paper/retrieve-and-refine-improved-sequence
Repo
Framework
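
The retrieve-then-refine wiring can be sketched as follows: a retriever scores a candidate set against the dialogue context, and the generator is then fed the context concatenated with the retrieved reply behind a separator token. The TF-IDF retriever, candidate set, and [RETRIEVED] marker below are illustrative assumptions, not the paper's actual components.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

candidates = [
    "i love hiking on weekends, the views are amazing",
    "my favorite food is pizza with extra cheese",
    "i have two dogs and a cat at home",
]

def retrieve(context):
    """Return the candidate response most similar to the dialogue context."""
    vec = TfidfVectorizer().fit(candidates + [context])
    sims = cosine_similarity(vec.transform([context]),
                             vec.transform(candidates))
    return candidates[sims.argmax()]

context = "do you have any pets ?"
retrieved = retrieve(context)
# The generator conditions on the context *plus* the retrieved reply,
# separated by a marker token, and refines it into the final response.
generator_input = context + " [RETRIEVED] " + retrieved
print(generator_input)
```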

Statistical Inference with Local Optima

Title Statistical Inference with Local Optima
Authors Yen-Chi Chen
Abstract We study the statistical properties of an estimator derived by applying a gradient ascent method with multiple initializations to a multi-modal likelihood function. We derive the population quantity that is the target of this estimator and study the properties of confidence intervals (CIs) constructed from asymptotic normality and the bootstrap approach. In particular, we analyze the coverage deficiency due to the finite number of random initializations. We also investigate the CIs obtained by inverting the likelihood ratio test, the score test, and the Wald test, and we show that the resulting CIs may be very different. We provide a summary of the uncertainties that need to be considered when making inferences about the population. Note that we do not provide a solution to the problem of multiple local maxima; instead, our goal is to investigate the effect of local maxima on the behavior of our estimator. In addition, we analyze the performance of the EM algorithm under random initializations and derive the coverage of a CI with a finite number of initializations. Finally, we extend our analysis to a nonparametric mode hunting problem.
Tasks
Published 2018-07-12
URL http://arxiv.org/abs/1807.04431v1
PDF http://arxiv.org/pdf/1807.04431v1.pdf
PWC https://paperswithcode.com/paper/statistical-inference-with-local-optima
Repo
Framework
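
To see the estimator being studied, here is a minimal sketch of gradient ascent with multiple random initializations on a toy bimodal log-likelihood, using SciPy. The mixture, search range, and number of starts K are arbitrary choices for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def log_likelihood(theta):
    """A toy multi-modal log-likelihood: mixture of two Gaussian bumps."""
    return np.log(0.6 * np.exp(-0.5 * (theta - 1.0) ** 2)
                  + 0.4 * np.exp(-0.5 * (theta + 2.0) ** 2))

rng = np.random.default_rng(0)
K = 20  # number of random initializations
starts = rng.uniform(-5, 5, size=K)
# Ascend the likelihood from each start (minimize the negative).
optima = [minimize(lambda t: -log_likelihood(t[0]), x0=[s]).x[0]
          for s in starts]
# The reported estimator is the local maximum with the highest likelihood;
# with few starts, the global mode can be missed (the coverage deficiency
# the paper analyzes).
theta_hat = max(optima, key=log_likelihood)
print(sorted(set(np.round(optima, 3))), theta_hat)
```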

Removal of Parameter Adjustment of Frangi Filters in Case of Coronary Angiograms

Title Removal of Parameter Adjustment of Frangi Filters in Case of Coronary Angiograms
Authors Dhruv Gosain, Rishabh Joshi
Abstract Frangi filters are among the most widely used filters for enhancing vessels in medical images. Since they were first proposed, the thresholds of the Frangi vesselness function have had to be tuned for each individual application; for enhancing coronary angiogram images, these thresholds are adjusted manually for each individual fluoroscope. Hence there is a need to eliminate this per-fluoroscope tuning of threshold values. The current paper’s approach is devised to treat coronary angiogram images uniformly, irrespective of the fluoroscope that produced them and of patient demographics, for subsequent stenosis detection. To the best of our knowledge, this problem has not been addressed before. In our approach, before the image is fed to the Frangi filter, non-uniform illumination of the input image is removed using homomorphic filters and the image is enhanced using the Non-Subsampled Contourlet Transform (NSCT). Experiments were conducted on data accumulated from various hospitals in India, and the results verify the removal of parameter dependency without compromising the quality of the Frangi filter output.
Tasks
Published 2018-12-07
URL http://arxiv.org/abs/1812.03186v1
PDF http://arxiv.org/pdf/1812.03186v1.pdf
PWC https://paperswithcode.com/paper/removal-of-parameter-adjustment-of-frangi
Repo
Framework
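
A rough sketch of the preprocessing idea, assuming a simple Gaussian low-pass homomorphic filter and scikit-image's frangi; the paper's NSCT enhancement step is omitted here, and the synthetic "angiogram" is just a stand-in image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import frangi

def homomorphic(img, sigma=30):
    """Remove slowly-varying (non-uniform) illumination in the log domain."""
    log_img = np.log1p(img.astype(np.float64))
    illumination = gaussian_filter(log_img, sigma)  # low-frequency component
    return np.expm1(log_img - illumination)         # reflectance estimate

# Toy angiogram stand-in: a dark curved 'vessel' on a shaded background.
y, x = np.mgrid[0:256, 0:256]
img = 0.5 + 0.4 * (x / 256.0)                         # non-uniform lighting
img -= 0.3 * np.exp(-((y - 0.5 * x - 40) ** 2) / 18)  # vessel-like ridge
corrected = homomorphic(img)
# Frangi vesselness; black_ridges=True because vessels appear dark in X-ray.
vesselness = frangi(corrected, sigmas=range(1, 6), black_ridges=True)
print(vesselness.shape, vesselness.max())
```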

Towards a Near Universal Time Series Data Mining Tool: Introducing the Matrix Profile

Title Towards a Near Universal Time Series Data Mining Tool: Introducing the Matrix Profile
Authors Chin-Chia Michael Yeh
Abstract The last decade has seen a flurry of research on all-pairs-similarity-search (or self-join) for text, DNA, and a handful of other datatypes, and these systems have been applied to many diverse data mining problems. Surprisingly, however, little progress has been made on addressing this problem for time series subsequences. In this thesis, we introduce a near-universal time series data mining tool called the matrix profile, which solves the all-pairs-similarity-search problem and caches the output in an easy-to-access fashion. The proposed algorithm is not only parameter-free, exact, and scalable, but also applicable to both single- and multi-dimensional time series. By building time series data mining methods on top of the matrix profile, many time series data mining tasks (e.g., motif discovery, discord discovery, shapelet discovery, semantic segmentation, and clustering) can be solved efficiently. Because the same matrix profile can be shared by a diverse set of time series data mining methods, the matrix profile is a versatile, compute-once-use-many-times data structure. We demonstrate the utility of the matrix profile for many time series data mining problems, including motif discovery, discord discovery, weakly labeled time series classification, and representation learning, on domains as diverse as seismology, entomology, music processing, bioinformatics, human activity monitoring, electrical power-demand monitoring, and medicine. We hope the matrix profile is not the end but the beginning of many more time series data mining projects.
Tasks Representation Learning, Semantic Segmentation, Time Series, Time Series Classification
Published 2018-11-05
URL http://arxiv.org/abs/1811.03064v1
PDF http://arxiv.org/pdf/1811.03064v1.pdf
PWC https://paperswithcode.com/paper/towards-a-near-universal-time-series-data
Repo
Framework
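
For intuition, here is a naive O(n²) reference implementation of the matrix profile: for each length-m subsequence, the z-normalized Euclidean distance to its nearest non-trivial match. The thesis's scalable algorithms compute the same output far faster; this sketch is only for illustration.

```python
import numpy as np

def matrix_profile(ts, m):
    """Naive matrix profile of a 1-D time series for subsequence length m."""
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    # z-normalize each subsequence
    subs = (subs - subs.mean(axis=1, keepdims=True)) \
        / subs.std(axis=1, keepdims=True)
    mp = np.full(n, np.inf)
    excl = m // 2  # exclusion zone around trivial (self) matches
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - excl):i + excl + 1] = np.inf
        mp[i] = d.min()
    return mp

# Toy series with a planted repeated motif: its two occurrences receive the
# lowest profile values, while a discord would receive the highest.
rng = np.random.default_rng(1)
ts = rng.standard_normal(300)
motif = np.sin(np.linspace(0, 2 * np.pi, 25))
ts[40:65] += 3 * motif
ts[200:225] += 3 * motif
mp = matrix_profile(ts, m=25)
print("motif locations:", np.argsort(mp)[:2])  # should point near 40 and 200
```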

Learning what and where to attend

Title Learning what and where to attend
Authors Drew Linsley, Dan Shiebler, Sven Eberhardt, Thomas Serre
Abstract Most recent gains in visual recognition have originated from the inclusion of attention mechanisms in deep convolutional networks (DCNs). Because these networks are optimized for object recognition, they learn where to attend using only a weak form of supervision derived from image class labels. Here, we demonstrate the benefit of using stronger supervisory signals by teaching DCNs to attend to image regions that humans deem important for object recognition. We first describe a large-scale online experiment (ClickMe) used to supplement ImageNet with nearly half a million human-derived “top-down” attention maps. Using human psychophysics, we confirm that the identified top-down features from ClickMe are more diagnostic than “bottom-up” saliency features for rapid image categorization. As a proof of concept, we extend a state-of-the-art attention network and demonstrate that adding ClickMe supervision significantly improves its accuracy and yields visual features that are more interpretable and more similar to those used by human observers.
Tasks Image Categorization, Object Recognition
Published 2018-05-22
URL https://arxiv.org/abs/1805.08819v4
PDF https://arxiv.org/pdf/1805.08819v4.pdf
PWC https://paperswithcode.com/paper/global-and-local-attention-networks-for
Repo
Framework
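
One simple way to realize the "stronger supervisory signal" idea is to add a map-matching term to the classification loss, pulling the network's attention map toward the human-derived ClickMe map. The cosine-style penalty below is an assumed form for illustration; the paper's actual formulation differs.

```python
import torch
import torch.nn.functional as F

def joint_loss(logits, labels, attn_map, human_map, weight=1.0):
    """Classification loss plus supervision that aligns the network's
    attention map with a human-derived (ClickMe-style) importance map."""
    cls = F.cross_entropy(logits, labels)
    # Normalize both maps and reward their similarity; this is just one
    # simple choice of map-matching penalty, not the paper's exact loss.
    a = F.normalize(attn_map.flatten(1), dim=1)
    h = F.normalize(human_map.flatten(1), dim=1)
    return cls + weight * (1 - (a * h).sum(dim=1)).mean()

# Toy tensors standing in for a batch of 4 images with 14x14 attention maps.
logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
attn = torch.rand(4, 14, 14, requires_grad=True)
human = torch.rand(4, 14, 14)
print(joint_loss(logits, labels, attn, human))
```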

The Online Saddle Point Problem: Applications to Online Convex Optimization with Knapsacks

Title The Online Saddle Point Problem: Applications to Online Convex Optimization with Knapsacks
Authors Adrian Rivera, He Wang, Huan Xu
Abstract We study the online saddle point problem, an online learning problem where at each iteration a pair of actions need to be chosen without knowledge of the current and future (convex-concave) payoff functions. The objective is to minimize the gap between the cumulative payoffs and the saddle point value of the aggregate payoff function, which we measure using a metric called “SP-regret”. The problem generalizes the online convex optimization framework and can be interpreted as finding the Nash equilibrium for the aggregate of a sequence of two-player zero-sum games. We propose an algorithm that achieves $\tilde{O}(\sqrt{T})$ SP-regret in the general case, and $O(\log T)$ SP-regret for the strongly convex-concave case. We then consider an online convex optimization with knapsacks problem motivated by a wide variety of applications such as: dynamic pricing, auctions, and crowdsourcing. We relate this problem to the online saddle point problem and establish $O(\sqrt{T})$ regret using a primal-dual algorithm.
Tasks
Published 2018-06-21
URL http://arxiv.org/abs/1806.08301v2
PDF http://arxiv.org/pdf/1806.08301v2.pdf
PWC https://paperswithcode.com/paper/the-online-saddle-point-problem-applications
Repo
Framework
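
For intuition about the setting, here is a sketch of online projected gradient descent-ascent on a stream of bilinear payoffs f_t(x, y) = xᵀA_t y. This is a generic baseline for the online saddle point problem, not the paper's algorithm, and the payoff stream below is synthetic.

```python
import numpy as np

def online_gda(payoffs, eta=0.1, radius=1.0):
    """Online projected gradient descent-ascent for a stream of
    convex-concave payoffs (here bilinear: f_t(x, y) = x^T A_t y)."""
    def proj(v):  # projection onto the Euclidean ball of given radius
        norm = np.linalg.norm(v)
        return v if norm <= radius else v * (radius / norm)

    x, y = np.zeros(2), np.zeros(2)
    xs, ys = [], []
    for A in payoffs:
        xs.append(x.copy()); ys.append(y.copy())
        grad_x = A @ y                 # gradient of x^T A y in x
        grad_y = A.T @ x               # gradient in y
        x = proj(x - eta * grad_x)     # minimizing player descends
        y = proj(y + eta * grad_y)     # maximizing player ascends
    return np.array(xs), np.array(ys)

rng = np.random.default_rng(2)
payoffs = [np.eye(2) + 0.1 * rng.standard_normal((2, 2)) for _ in range(500)]
xs, ys = online_gda(payoffs)
print(xs.mean(axis=0), ys.mean(axis=0))  # averages approach the saddle point
```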

Bitcoin Volatility Forecasting with a Glimpse into Buy and Sell Orders

Title Bitcoin Volatility Forecasting with a Glimpse into Buy and Sell Orders
Authors Tian Guo, Albert Bifet, Nino Antulov-Fantulin
Abstract In this paper, we study the ability to make short-term predictions of exchange price fluctuations against the United States dollar for the Bitcoin market. We use realized-volatility data collected from one of the largest Bitcoin digital trading offices in 2016 and 2017, as well as order information. Experiments are performed to evaluate a variety of statistical and machine learning approaches.
Tasks
Published 2018-02-12
URL http://arxiv.org/abs/1802.04065v3
PDF http://arxiv.org/pdf/1802.04065v3.pdf
PWC https://paperswithcode.com/paper/bitcoin-volatility-forecasting-with-a-glimpse
Repo
Framework
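
As a flavor of the statistical baselines such a study might compare, here is a HAR-style lagged-volatility regression on a synthetic realized-volatility series. The data, horizon, and features are placeholders; the paper's order-book features are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# Stand-in for a realized-volatility series (the paper uses exchange data).
rv = np.abs(np.cumsum(rng.standard_normal(1000)) * 0.01) + 0.1

def har_features(rv, t):
    """HAR-style predictors: daily, weekly, and monthly average volatility."""
    return [rv[t - 1], rv[t - 7:t].mean(), rv[t - 30:t].mean()]

X = np.array([har_features(rv, t) for t in range(30, len(rv))])
y = rv[30:]
split = 800  # train on the first 800 points, test on the rest
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("out-of-sample MSE:", np.mean((pred - y[split:]) ** 2))
```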

Scraping and Preprocessing Commercial Auction Data for Fraud Classification

Title Scraping and Preprocessing Commercial Auction Data for Fraud Classification
Authors Ahmad Alzahrani, Samira Sadaoui
Abstract In the last three decades, we have seen a significant increase in trading goods and services through online auctions. However, this business has created an attractive environment for malicious moneymakers who commit different types of fraud, such as Shill Bidding (SB). The latter is predominant across many auctions, but this type of fraud is difficult to detect due to its similarity to normal bidding behaviour. The unavailability of SB datasets makes the development of SB detection and classification models burdensome. Furthermore, to implement efficient SB detection models, we should produce SB data from actual auctions on commercial sites. In this study, we first scraped a large number of eBay auctions of a popular product. After preprocessing the raw auction data, we built a high-quality SB dataset based on the most reliable SB strategies. The aim of our research is to share the preprocessed auction dataset as well as the SB training (unlabelled) dataset, so that researchers can apply various machine learning techniques to authentic auction and fraud data.
Tasks
Published 2018-06-02
URL http://arxiv.org/abs/1806.00656v2
PDF http://arxiv.org/pdf/1806.00656v2.pdf
PWC https://paperswithcode.com/paper/scraping-and-preprocessing-commercial-auction
Repo
Framework
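
A toy sketch of the scrape-and-featurize step, assuming a hypothetical bid-history markup (real eBay pages differ, and the CSS selectors here are illustrative only). It computes one classic shill-bidding indicator: a bidder's share of an auction's bids.

```python
from collections import Counter
from bs4 import BeautifulSoup

# Hypothetical bid-history markup standing in for a scraped auction page.
html = """
<table id="bids">
  <tr><td class="bidder">u1</td><td class="amount">10.0</td></tr>
  <tr><td class="bidder">u2</td><td class="amount">10.5</td></tr>
  <tr><td class="bidder">u1</td><td class="amount">11.0</td></tr>
  <tr><td class="bidder">u1</td><td class="amount">11.5</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")
bids = [(r.select_one(".bidder").text, float(r.select_one(".amount").text))
        for r in soup.select("#bids tr")]

# Bidder ratio: the fraction of an auction's bids placed by each bidder.
counts = Counter(bidder for bidder, _ in bids)
bidder_ratio = {b: c / len(bids) for b, c in counts.items()}
print(bidder_ratio)  # u1 places 75% of the bids: suspiciously high
```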

PhaseNet for Video Frame Interpolation

Title PhaseNet for Video Frame Interpolation
Authors Simone Meyer, Abdelaziz Djelouah, Brian McWilliams, Alexander Sorkine-Hornung, Markus Gross, Christopher Schroers
Abstract Most approaches for video frame interpolation require accurate dense correspondences to synthesize an in-between frame. Therefore, they do not perform well in challenging scenarios with e.g. lighting changes or motion blur. Recent deep learning approaches that rely on kernels to represent motion can only alleviate these problems to some extent. In those cases, methods that use a per-pixel phase-based motion representation have been shown to work well. However, they are only applicable for a limited amount of motion. We propose a new approach, PhaseNet, that is designed to robustly handle challenging scenarios while also coping with larger motion. Our approach consists of a neural network decoder that directly estimates the phase decomposition of the intermediate frame. We show that this is superior to the hand-crafted heuristics previously used in phase-based methods and also compares favorably to recent deep learning based approaches for video frame interpolation on challenging datasets.
Tasks Video Frame Interpolation
Published 2018-04-03
URL http://arxiv.org/abs/1804.00884v1
PDF http://arxiv.org/pdf/1804.00884v1.pdf
PWC https://paperswithcode.com/paper/phasenet-for-video-frame-interpolation
Repo
Framework
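
PhaseNet learns the steerable-pyramid phase decomposition of the intermediate frame; as a much cruder illustration of the phase-as-motion idea only, here is naive Fourier-phase interpolation between two frames. It works only for small, global motion and is not the paper's method.

```python
import numpy as np

def phase_interpolate(f0, f1, t=0.5):
    """Naive phase-based in-betweening: linearly interpolate the Fourier
    magnitude and (wrapped) phase of two frames."""
    F0, F1 = np.fft.fft2(f0), np.fft.fft2(f1)
    mag = (1 - t) * np.abs(F0) + t * np.abs(F1)
    # Interpolate phase along the shortest angular path.
    dphi = np.angle(F1 * np.conj(F0))
    phase = np.angle(F0) + t * dphi
    return np.real(np.fft.ifft2(mag * np.exp(1j * phase)))

# A small translating blob: the interpolated frame sits between the two.
x, y = np.meshgrid(np.arange(64), np.arange(64))
frame = lambda cx: np.exp(-((x - cx) ** 2 + (y - 32) ** 2) / 20.0)
mid = phase_interpolate(frame(20), frame(26))
peak = np.unravel_index(mid.argmax(), mid.shape)
print(peak)  # peak column lands between 20 and 26, near 23
```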

A Functional Taxonomy of Music Generation Systems

Title A Functional Taxonomy of Music Generation Systems
Authors Dorien Herremans, Ching-Hua Chuan, Elaine Chew
Abstract Digital advances have transformed the face of automatic music generation since its beginnings at the dawn of computing. Despite the many breakthroughs, issues such as the musical tasks targeted by different machines and the degree to which they succeed remain open questions. We present a functional taxonomy for music generation systems with reference to existing systems. The taxonomy organizes systems according to the purposes for which they were designed. It also reveals the inter-relatedness amongst the systems. This design-centered approach contrasts with predominant methods-based surveys and facilitates the identification of grand challenges to set the stage for new breakthroughs.
Tasks Music Generation
Published 2018-12-11
URL http://arxiv.org/abs/1812.04186v1
PDF http://arxiv.org/pdf/1812.04186v1.pdf
PWC https://paperswithcode.com/paper/a-functional-taxonomy-of-music-generation
Repo
Framework

Modeling and Simultaneously Removing Bias via Adversarial Neural Networks

Title Modeling and Simultaneously Removing Bias via Adversarial Neural Networks
Authors John Moore, Joel Pfeiffer, Kai Wei, Rishabh Iyer, Denis Charles, Ran Gilad-Bachrach, Levi Boyles, Eren Manavoglu
Abstract In real-world systems, the predictions of deployed machine-learned models affect the training data available to build subsequent models. This introduces a bias in the training data that needs to be addressed. Existing solutions attempt to resolve this problem either by casting it in the reinforcement learning framework or by quantifying the bias and re-weighting the loss functions. In this work, we develop a novel Adversarial Neural Network (ANN) model, an alternative approach that creates a representation of the data that is invariant to the bias. We take the Paid Search auction as our working example and ad display position features as the confounding features for this setting. We show the success of this approach empirically on both synthetic data and real-world paid search auction data from a major search engine.
Tasks
Published 2018-04-18
URL http://arxiv.org/abs/1804.06909v1
PDF http://arxiv.org/pdf/1804.06909v1.pdf
PWC https://paperswithcode.com/paper/modeling-and-simultaneously-removing-bias-via
Repo
Framework
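
One standard way to build such a bias-invariant representation is adversarial training with a gradient-reversal layer: an adversary tries to recover the confounding feature (here, ad display position) from the representation, and the reversed gradients push the encoder to discard it. The sketch below is a generic realization under that assumption, not the paper's exact architecture, and the tensors are random stand-ins.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on the
    backward pass, so the encoder is trained to *defeat* the adversary."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

encoder = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
predictor = nn.Linear(16, 1)   # main task head, e.g. click prediction
adversary = nn.Linear(16, 1)   # tries to recover the ad display position

x = torch.randn(32, 10)        # input features
y = torch.rand(32, 1)          # task labels
pos = torch.rand(32, 1)        # confounding display-position signal

z = encoder(x)
task_loss = nn.functional.mse_loss(predictor(z), y)
adv_loss = nn.functional.mse_loss(adversary(GradReverse.apply(z, 1.0)), pos)
(task_loss + adv_loss).backward()  # encoder receives reversed adversary grads
```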