January 28, 2020

3044 words 15 mins read

Paper Group ANR 851



On the overestimation of widely applicable Bayesian information criterion

Title On the overestimation of widely applicable Bayesian information criterion
Authors Toru Imai
Abstract The widely applicable Bayesian information criterion (Watanabe, 2013) applies to both regular and singular models in the model selection problem. The criterion tends to overestimate the log marginal likelihood. We identify an overestimating term of the widely applicable Bayesian information criterion. Adjusting this term gives an asymptotically unbiased estimator of the leading two terms of the asymptotic expansion of the log marginal likelihood. In numerical experiments on regular and singular models, the adjustment resulted in smaller bias than the original criterion. (The criterion's standard definition is reproduced after this entry.)
Tasks Model Selection
Published 2019-08-28
URL https://arxiv.org/abs/1908.10572v1
PDF https://arxiv.org/pdf/1908.10572v1.pdf
PWC https://paperswithcode.com/paper/on-the-overestimation-of-widely-applicable
Repo
Framework
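For context, the criterion under discussion has a standard definition in Watanabe (2013). The notation below is a reconstruction for the reader's convenience and is not taken from this paper.

```latex
% Definition of WBIC (Watanabe, 2013): a tempered-posterior expectation of the
% empirical negative log-likelihood, with inverse temperature 1/log n.
\[
  \mathrm{WBIC} \;=\; \mathbb{E}_w^{\beta}\!\bigl[\, n L_n(w) \,\bigr],
  \qquad
  \beta = \frac{1}{\log n},
  \qquad
  n L_n(w) = -\sum_{i=1}^{n} \log p(X_i \mid w).
\]
% WBIC targets the leading two terms of the negative log marginal likelihood,
\[
  -\log Z_n \;=\; n L_n(w_0) \;+\; \lambda \log n \;+\; o_p(\log n),
\]
% where w_0 is the optimal parameter and lambda is the real log canonical threshold.
```

The adjustment proposed in the paper removes an overestimating term so that the two leading terms above are estimated without asymptotic bias.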

Learning to segment images with classification labels

Title Learning to segment images with classification labels
Authors Ozan Ciga, Anne L. Martel
Abstract Two of the most common tasks in medical imaging are classification and segmentation. Both tasks require labeled data annotated by experts, which is scarce and expensive to collect. Annotating data for segmentation is generally considered more laborious, as the annotator has to draw around the boundaries of regions of interest rather than assign image patches a class label. Furthermore, in tasks such as breast cancer histopathology, any realistic clinical application often includes working with whole slide images, whereas most publicly available training data are in the form of image patches, which are given a class label. We propose an architecture that can alleviate the requirement for segmentation-level ground truth by making use of image-level labels to reduce the amount of time spent on data curation. In addition, this architecture can help unlock the potential of previously acquired image-level datasets for segmentation tasks by annotating a small number of regions of interest. In our experiments, we show that, using only one segmentation-level annotation per class, we can achieve performance comparable to that of a fully annotated dataset.
Tasks
Published 2019-12-28
URL https://arxiv.org/abs/1912.12533v1
PDF https://arxiv.org/pdf/1912.12533v1.pdf
PWC https://paperswithcode.com/paper/learning-to-segment-images-with
Repo
Framework

EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos

Title EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos
Authors Haipeng Zeng, Xingbo Wang, Aoyu Wu, Yong Wang, Quan Li, Alex Endert, Huamin Qu
Abstract Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming. There is a lack of tool support to help conduct an efficient and in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison from the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.
Tasks
Published 2019-07-29
URL https://arxiv.org/abs/1907.12918v2
PDF https://arxiv.org/pdf/1907.12918v2.pdf
PWC https://paperswithcode.com/paper/emoco-visual-analysis-of-emotion-coherence-in
Repo
Framework

Emotional Contribution Analysis of Online Reviews

Title Emotional Contribution Analysis of Online Reviews
Authors Elisa Claire Alemán Carreón, Hirofumi Nonaka, Toru Hiraoka, Minoru Kumano, Takao Ito, Masaharu Hirota
Abstract In response to the constant increase in population and tourism worldwide, there is a need for cross-language market research tools that are more cost- and time-effective than surveys or interviews. Focusing on the Chinese tourism boom and the hotel industry in Japan, we extracted the most influential keywords in emotional judgement from Chinese online reviews of Japanese hotels on the portal site Ctrip. Using an entropy-based mathematical model and a machine learning algorithm, we determined the words that most closely represent the demands and emotions of this customer base. (An illustrative entropy-scoring sketch follows this entry.)
Tasks
Published 2019-05-01
URL http://arxiv.org/abs/1905.00185v1
PDF http://arxiv.org/pdf/1905.00185v1.pdf
PWC https://paperswithcode.com/paper/emotional-contribution-analysis-of-online
Repo
Framework
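The exact entropy model used in the paper is not spelled out in the abstract. The snippet below is only a generic illustration of entropy-based keyword scoring over sentiment-labeled reviews; the scoring formula, names, and toy data are our own assumptions, not the authors' method.

```python
from collections import Counter
from math import log2

def keyword_scores(reviews):
    """reviews: list of (tokens, label) pairs, e.g. label in {"pos", "neg"}.
    Returns {word: score}, higher = more emotionally discriminative.
    Score = (1 - normalized class entropy) * log2(1 + document frequency),
    a common entropy-based heuristic (illustrative only)."""
    class_counts = {}              # word -> Counter over labels
    doc_freq = Counter()           # word -> number of reviews containing it
    for tokens, label in reviews:
        for w in set(tokens):
            class_counts.setdefault(w, Counter())[label] += 1
            doc_freq[w] += 1

    scores = {}
    for w, dist in class_counts.items():
        n = sum(dist.values())
        entropy = -sum((c / n) * log2(c / n) for c in dist.values())
        max_entropy = log2(len(dist)) if len(dist) > 1 else 1.0
        purity = 1.0 - entropy / max_entropy   # 1 = used in one class only
        scores[w] = purity * log2(1 + doc_freq[w])
    return scores

# Toy usage with three tokenized hotel reviews.
reviews = [(["clean", "room", "great", "service"], "pos"),
           (["dirty", "room", "bad", "service"], "neg"),
           (["great", "location", "clean"], "pos")]
print(sorted(keyword_scores(reviews).items(), key=lambda kv: -kv[1])[:3])
```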

Prediction Focused Topic Models for Electronic Health Records

Title Prediction Focused Topic Models for Electronic Health Records
Authors Jason Ren, Russell Kunes, Finale Doshi-Velez
Abstract Electronic Health Record (EHR) data can be represented as discrete counts over a high dimensional set of possible procedures, diagnoses, and medications. Supervised topic models present an attractive option for incorporating EHR data as features into a prediction problem: given a patient’s record, we estimate a set of latent factors that are predictive of the response variable. However, existing methods for supervised topic modeling struggle to balance prediction quality and coherence of the latent factors. We introduce a novel approach, the prediction-focused topic model, that uses the supervisory signal to retain only features that improve, or do not hinder, prediction performance. By removing features with irrelevant signal, the topic model is able to learn task-relevant, interpretable topics. We demonstrate on an EHR dataset and a movie review dataset that, compared to existing approaches, prediction-focused topic models are able to learn much more coherent topics while maintaining competitive predictions.
Tasks Topic Models
Published 2019-11-15
URL https://arxiv.org/abs/1911.08551v1
PDF https://arxiv.org/pdf/1911.08551v1.pdf
PWC https://paperswithcode.com/paper/prediction-focused-topic-models-for
Repo
Framework

Combining RGB and Points to Predict Grasping Region for Robotic Bin-Picking

Title Combining RGB and Points to Predict Grasping Region for Robotic Bin-Picking
Authors Quanquan Shao, Jie Hu
Abstract This paper focuses on robotic picking tasks in cluttered scenarios. Because of the diversity of objects and the clutter created by how they are placed, it is difficult to recognize objects and estimate their poses before grasping. Here, we use U-Net, a special convolutional neural network (CNN), to combine RGB images and depth information to predict the picking region without recognition or pose estimation. The performance of diverse visual inputs to the network was compared, including RGB, RGB-D and RGB-Points, and we found that the RGB-Points input achieved a precision of 95.74%. (A minimal U-Net-style sketch follows this entry.)
Tasks Pose Estimation
Published 2019-04-16
URL http://arxiv.org/abs/1904.07394v2
PDF http://arxiv.org/pdf/1904.07394v2.pdf
PWC https://paperswithcode.com/paper/combining-rgb-and-points-to-predict-grasping
Repo
Framework
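The abstract does not give the network details, so the sketch below is only a minimal U-Net-style encoder-decoder that maps a 4-channel RGB-D input to a pixel-wise picking-region logit map. Layer sizes, names, and the single skip connection are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Illustrative encoder-decoder with one skip connection; the paper's
    actual network is deeper and also uses the RGB-Points input."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.enc1 = block(in_channels, 32)
        self.enc2 = block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)    # per-pixel pick-region logit

    def forward(self, x):
        e1 = self.enc1(x)                  # (B, 32, H, W)
        e2 = self.enc2(self.pool(e1))      # (B, 64, H/2, W/2)
        d1 = self.up(e2)                   # (B, 32, H, W)
        d1 = self.dec1(torch.cat([d1, e1], dim=1))   # skip connection
        return self.head(d1)               # pick-region logits

rgbd = torch.randn(1, 4, 64, 64)           # RGB + depth channels
print(TinyUNet()(rgbd).shape)              # torch.Size([1, 1, 64, 64])
```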

Counterexample-Guided Strategy Improvement for POMDPs Using Recurrent Neural Networks

Title Counterexample-Guided Strategy Improvement for POMDPs Using Recurrent Neural Networks
Authors Steven Carr, Nils Jansen, Ralf Wimmer, Alexandru C. Serban, Bernd Becker, Ufuk Topcu
Abstract We study strategy synthesis for partially observable Markov decision processes (POMDPs). The particular problem is to determine strategies that provably adhere to (probabilistic) temporal logic constraints. This problem is computationally intractable and theoretically hard. We propose a novel method that combines techniques from machine learning and formal verification. First, we train a recurrent neural network (RNN) to encode POMDP strategies. The RNN accounts for memory-based decisions without the need to expand the full belief space of a POMDP. Secondly, we restrict the RNN-based strategy to represent a finite-memory strategy and implement it on a specific POMDP. For the resulting finite Markov chain, efficient formal verification techniques provide provable guarantees against temporal logic specifications. If the specification is not satisfied, counterexamples supply diagnostic information. We use this information to improve the strategy by iteratively training the RNN. Numerical experiments show that the proposed method elevates the state of the art in POMDP solving by up to three orders of magnitude in terms of solving times and model sizes.
Tasks
Published 2019-03-20
URL http://arxiv.org/abs/1903.08428v2
PDF http://arxiv.org/pdf/1903.08428v2.pdf
PWC https://paperswithcode.com/paper/counterexample-guided-strategy-improvement
Repo
Framework

A Retrospective Recount of Computer Architecture Research with a Data-Driven Study of Over Four Decades of ISCA Publications

Title A Retrospective Recount of Computer Architecture Research with a Data-Driven Study of Over Four Decades of ISCA Publications
Authors Omer Anjum, Wen-Mei Hwu, Jinjun Xiong
Abstract This study began with a research project, called DISCvR, conducted at the IBM-ILLINOIS Center for Cognitive Computing Systems Research. The goal of DISCvR was to build a practical NLP-based AI pipeline for document understanding which would help us better understand the computation patterns and requirements of modern computing systems. While building such a prototype, an early use case came to us thanks to the 2017 IEEE/ACM International Symposium on Microarchitecture (MICRO-50) Program Co-chairs, Drs. Hillery Hunter and Jaime Moreno. They asked us whether we could perform a data-driven analysis of the past 50 years of MICRO papers and show some interesting historical perspectives on MICRO’s 50 years of publication. We learned two important lessons from that experience: (1) building an AI solution that truly understands unstructured data is hard in spite of the many claimed successes in natural language understanding; and (2) providing a data-driven perspective on computer architecture research is a very interesting and fun project. Recently we decided to conduct a more thorough study based on all past papers of the International Symposium on Computer Architecture (ISCA) from 1973 to 2018, which resulted in this article. We recognize that we have just scratched the surface of natural language understanding of unstructured data, and there are many more aspects that we can improve. But even with our current study, we felt there were enough interesting findings worth sharing with the community. Hence we decided to write this article to summarize our findings so far based only on ISCA publications. Our hope is to generate further interest from the community in this topic, and we welcome collaboration from the community to deepen our understanding both of computer architecture research and of the challenges of NLP-based AI solutions.
Tasks
Published 2019-06-22
URL https://arxiv.org/abs/1906.09380v1
PDF https://arxiv.org/pdf/1906.09380v1.pdf
PWC https://paperswithcode.com/paper/a-retrospective-recount-of-computer
Repo
Framework

Adaptive Kernel Value Caching for SVM Training

Title Adaptive Kernel Value Caching for SVM Training
Authors Qinbin Li, Zeyi Wen, Bingsheng He
Abstract Support Vector Machines (SVMs) can solve structured multi-output learning problems such as multi-label classification, multiclass classification and vector regression. SVM training is expensive, especially for large and high dimensional datasets. The bottleneck of SVM training often lies in the kernel value computation. In many real-world problems, the same kernel values are used in many iterations during the training, which makes caching of kernel values potentially useful. The majority of existing studies simply adopt the LRU (least recently used) replacement strategy for caching kernel values. However, as we analyze in this paper, the LRU strategy generally achieves a high hit ratio near the final stage of the training, but does not work well across the whole training process. Therefore, we propose a new caching strategy called EFU (less frequently used), which enhances LFU (least frequently used) by replacing the less frequently used kernel values. Our experimental results show that EFU often has a 20% higher hit ratio than LRU in training with the Gaussian kernel. To further optimize the strategy, we propose a caching strategy called HCST (hybrid caching for the SVM training), which has a novel mechanism to automatically adopt the better caching strategy in different stages of the training. We have integrated the caching strategies into ThunderSVM, a recent SVM library for many-core processors. Our experiments show that HCST adaptively achieves high hit ratios with little runtime overhead across different problems, including multi-label classification, multiclass classification and regression. Compared with other existing caching strategies, HCST achieves 20% more reduction in training time on average. (A minimal frequency-based cache sketch follows this entry.)
Tasks Multi-Label Classification
Published 2019-11-08
URL https://arxiv.org/abs/1911.03011v1
PDF https://arxiv.org/pdf/1911.03011v1.pdf
PWC https://paperswithcode.com/paper/adaptive-kernel-value-caching-for-svm
Repo
Framework
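ThunderSVM's actual EFU/HCST implementation is not reproduced here. The snippet is a minimal frequency-based (LFU-style) kernel-row cache that illustrates the replacement idea the abstract describes; the class and method names are our own.

```python
class FrequencyCache:
    """Minimal LFU-style cache for kernel rows: on overflow, evict the entry
    with the smallest access count (illustrative, not ThunderSVM's EFU/HCST)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.rows = {}      # row index -> cached kernel values
        self.hits = {}      # row index -> access count

    def get(self, i, compute_row):
        if i in self.rows:                      # cache hit
            self.hits[i] += 1
            return self.rows[i]
        if len(self.rows) >= self.capacity:     # evict least frequently used
            victim = min(self.hits, key=self.hits.get)
            del self.rows[victim], self.hits[victim]
        self.rows[i] = compute_row(i)           # cache miss: compute and store
        self.hits[i] = 1
        return self.rows[i]

# Toy usage: the lambda stands in for the expensive kernel-row computation.
cache = FrequencyCache(capacity=2)
row = cache.get(0, lambda i: [i * 0.1, i * 0.2])
```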

The Use of Unlabeled Data versus Labeled Data for Stopping Active Learning for Text Classification

Title The Use of Unlabeled Data versus Labeled Data for Stopping Active Learning for Text Classification
Authors Garrett Beatty, Ethan Kochis, Michael Bloodgood
Abstract Annotation of training data is the major bottleneck in the creation of text classification systems. Active learning is a commonly used technique to reduce the amount of training data one needs to label. A crucial aspect of active learning is determining when to stop labeling data. Three potential sources for informing when to stop active learning are an additional labeled set of data, an unlabeled set of data, and the training data that is labeled during the process of active learning. To date, no one has compared and contrasted the advantages and disadvantages of stopping methods based on these three information sources. We find that stopping methods that use unlabeled data are more effective than methods that use labeled data. (A sketch of an unlabeled-data stopping check follows this entry.)
Tasks Active Learning, Text Classification
Published 2019-01-26
URL http://arxiv.org/abs/1901.09126v2
PDF http://arxiv.org/pdf/1901.09126v2.pdf
PWC https://paperswithcode.com/paper/the-use-of-unlabeled-data-versus-labeled-data
Repo
Framework
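One widely used family of unlabeled-data stopping criteria checks whether successively trained models have stopped changing their predictions on a fixed unlabeled "stop set". The sketch below shows that general idea for any scikit-learn-style classifier; it is not claimed to be the exact method evaluated in the paper, and the agreement threshold is an arbitrary choice.

```python
def should_stop(prev_model, curr_model, stop_set, threshold=0.99):
    """Stop active learning once two consecutively trained models agree on
    (almost) all examples in an unlabeled stop set. Works with any model
    exposing a scikit-learn-style predict(); illustrative only -- published
    methods typically also smooth agreement over a window of iterations."""
    prev = prev_model.predict(stop_set)
    curr = curr_model.predict(stop_set)
    agreement = sum(p == c for p, c in zip(prev, curr)) / len(stop_set)
    return agreement >= threshold
```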

Predicting Drug-Drug Interactions from Molecular Structure Images

Title Predicting Drug-Drug Interactions from Molecular Structure Images
Authors Devendra Singh Dhami, Gautam Kunapuli, David Page, Sriraam Natarajan
Abstract Predicting and discovering drug-drug interactions (DDIs) is an important problem that has been studied extensively from both medical and machine learning points of view. Almost all of the machine learning approaches have focused on text data or textual representations of the structural data of drugs. We present the first work that uses drug structure images as the input and utilizes a Siamese convolutional network architecture to predict DDIs. (A minimal Siamese-network sketch follows this entry.)
Tasks
Published 2019-11-14
URL https://arxiv.org/abs/1911.06356v1
PDF https://arxiv.org/pdf/1911.06356v1.pdf
PWC https://paperswithcode.com/paper/predicting-drug-drug-interactions-from
Repo
Framework
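The abstract names the overall pattern (a Siamese convolutional network over drug-structure images) but not its layers. Below is a minimal, self-contained PyTorch sketch of that pattern; the backbone, fusion by concatenation, and all sizes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SiameseDDI(nn.Module):
    """Two molecular-structure images pass through a shared CNN encoder;
    the concatenated pair embedding is scored for interaction."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())        # -> (B, 32)
        self.classifier = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, img_a, img_b):
        za, zb = self.encoder(img_a), self.encoder(img_b)    # shared weights
        return self.classifier(torch.cat([za, zb], dim=1))   # interaction logit

pair = torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)
print(SiameseDDI()(*pair).shape)    # torch.Size([2, 1])
```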

Black Box Algorithm Selection by Convolutional Neural Network

Title Black Box Algorithm Selection by Convolutional Neural Network
Authors Yaodong He, Shiu Yin Yuen
Abstract Although a large number of optimization algorithms have been proposed for black box optimization problems, the no free lunch theorems inform us that no algorithm can beat all others on all types of problems. Different types of optimization problems need different optimization algorithms. To deal with this issue, researchers propose algorithm selection to suggest the best optimization algorithm from an algorithm set for a given unknown optimization problem. Usually, algorithm selection is treated as a classification or regression task. Deep learning, which has been shown to perform well on various classification and regression tasks, is applied to the algorithm selection problem in this paper. Our deep learning architecture is based on a convolutional neural network and follows the main architecture of the Visual Geometry Group (VGG) network, which has been applied to many different types of 2-D data. Moreover, we propose a novel method to extract landscape information from optimization problems and save the information as 2-D images. In the experimental section, we conduct three experiments to investigate the classification and optimization capability of our approach on the BBOB functions. The results indicate that our new approach can effectively solve the algorithm selection problem. (A simple landscape-to-image sketch follows this entry.)
Tasks
Published 2019-12-22
URL https://arxiv.org/abs/2001.01685v1
PDF https://arxiv.org/pdf/2001.01685v1.pdf
PWC https://paperswithcode.com/paper/black-box-algorithm-selection-by
Repo
Framework
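The landscape-to-image extraction is not specified in the abstract; the snippet shows the simplest version of the idea, sampling a 2-D objective on a regular grid and normalizing it into an image a CNN could consume. Grid resolution, normalization, and the toy function are our assumptions.

```python
import numpy as np

def landscape_image(objective, bounds, resolution=64):
    """Sample a 2-D objective on a regular grid and rescale it to a uint8
    image -- a simple stand-in for the paper's landscape-to-image step."""
    (x_lo, x_hi), (y_lo, y_hi) = bounds
    xs = np.linspace(x_lo, x_hi, resolution)
    ys = np.linspace(y_lo, y_hi, resolution)
    grid = np.array([[objective(np.array([x, y])) for x in xs] for y in ys])
    span = grid.max() - grid.min()
    grid = (grid - grid.min()) / (span + 1e-12)   # normalize to [0, 1]
    return (255 * grid).astype(np.uint8)          # 2-D image for the CNN

sphere = lambda z: float(np.sum(z ** 2))          # toy BBOB-like function
img = landscape_image(sphere, bounds=((-5, 5), (-5, 5)))
print(img.shape, img.dtype)                       # (64, 64) uint8
```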

Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study

Title Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study
Authors Chinnadhurai Sankar, Sandeep Subramanian, Christopher Pal, Sarath Chandar, Yoshua Bengio
Abstract Neural generative models have become increasingly popular for building conversational agents. They offer flexibility, can be easily adapted to new domains, and require minimal domain engineering. A common criticism of these systems is that they seldom understand or use the available dialog history effectively. In this paper, we take an empirical approach to understanding how these models use the available dialog history by studying the sensitivity of the models to artificially introduced unnatural changes or perturbations to their context at test time. We experiment with 10 different types of perturbations on 4 multi-turn dialog datasets and find that commonly used neural dialog architectures, like recurrent and transformer-based seq2seq models, are rarely sensitive to most perturbations such as missing or reordered utterances, shuffled words, etc. By open-sourcing our code, we believe it will serve as a useful diagnostic tool for evaluating dialog systems in the future. (Two simple perturbation functions are sketched after this entry.)
Tasks
Published 2019-06-04
URL https://arxiv.org/abs/1906.01603v2
PDF https://arxiv.org/pdf/1906.01603v2.pdf
PWC https://paperswithcode.com/paper/do-neural-dialog-systems-use-the-conversation
Repo
Framework
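The paper's own perturbation code is open-sourced; the two functions below merely illustrate, generically, the kinds of perturbations named in the abstract (reordering utterances and shuffling words within utterances).

```python
import random

def shuffle_utterances(history, rng=random):
    """Utterance-level perturbation: reorder the turns in the dialog history."""
    perturbed = history[:]
    rng.shuffle(perturbed)
    return perturbed

def shuffle_words(history, rng=random):
    """Word-level perturbation: shuffle the words inside every utterance."""
    perturbed = []
    for utterance in history:
        words = utterance.split()
        rng.shuffle(words)
        perturbed.append(" ".join(words))
    return perturbed

history = ["how are you ?", "fine thanks , and you ?", "great !"]
print(shuffle_utterances(history))
print(shuffle_words(history))
```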

Relation-Aware Global Attention for Person Re-identification

Title Relation-Aware Global Attention for Person Re-identification
Authors Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, Xin Jin, Zhibo Chen
Abstract For person re-identification (re-id), attention mechanisms have become attractive as they aim at strengthening discriminative features and suppressing irrelevant ones, which matches the key of re-id well, i.e., discriminative feature learning. Previous approaches typically learn attention using local convolutions, ignoring the mining of knowledge from global structure patterns. Intuitively, the affinities among spatial positions/nodes in the feature map provide clustering-like information and are helpful for inferring semantics and thus attention, especially for person images where the feasible human poses are constrained. In this work, we propose an effective Relation-Aware Global Attention (RGA) module which captures the global structural information for better attention learning. Specifically, for each feature position, in order to compactly grasp the structural information of global scope and local appearance information, we propose to stack the relations, i.e., its pairwise correlations/affinities with all the feature positions (e.g., in raster scan order), together with the feature itself, and learn the attention with a shallow convolutional model. Extensive ablation studies demonstrate that our RGA can significantly enhance the feature representation power and help achieve state-of-the-art performance on several popular benchmarks. The source code is available at https://github.com/microsoft/Relation-Aware-Global-Attention-Networks. (A simplified spatial-attention sketch follows this entry.)
Tasks Image Classification, Person Re-Identification, Scene Segmentation
Published 2019-04-05
URL https://arxiv.org/abs/1904.02998v2
PDF https://arxiv.org/pdf/1904.02998v2.pdf
PWC https://paperswithcode.com/paper/relation-aware-global-attention
Repo
Framework
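The authors' code is linked above; the module below is only a simplified sketch of the spatial variant of the idea, stacking each position's pairwise affinities with an embedding of the feature itself and scoring attention with shallow 1x1 convolutions. Channel sizes and the exact branch design are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn

class SpatialRGA(nn.Module):
    """Simplified spatial relation-aware attention in the spirit of RGA:
    each position's pairwise affinities with all positions are stacked with
    an embedding of the feature itself, then shallow convolutions produce a
    spatial attention map. Dimensions are illustrative only."""
    def __init__(self, channels, height, width, reduced=8):
        super().__init__()
        n = height * width
        self.embed = nn.Conv2d(channels, reduced, 1)      # feature branch
        self.theta = nn.Conv2d(channels, reduced, 1)      # affinity branches
        self.phi = nn.Conv2d(channels, reduced, 1)
        self.score = nn.Sequential(                       # shallow conv model
            nn.Conv2d(reduced + 2 * n, reduced, 1), nn.ReLU(),
            nn.Conv2d(reduced, 1, 1))

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        t = self.theta(x).view(b, -1, n)                  # (B, r, N)
        p = self.phi(x).view(b, -1, n)                    # (B, r, N)
        affinity = torch.bmm(t.transpose(1, 2), p)        # (B, N, N)
        rel = torch.cat([affinity, affinity.transpose(1, 2)], dim=2)  # (B, N, 2N)
        rel = rel.transpose(1, 2).reshape(b, 2 * n, h, w)  # relations as channels
        stacked = torch.cat([self.embed(x), rel], dim=1)   # (B, r + 2N, H, W)
        attn = torch.sigmoid(self.score(stacked))          # (B, 1, H, W)
        return x * attn

x = torch.randn(2, 64, 16, 8)
print(SpatialRGA(64, 16, 8)(x).shape)   # torch.Size([2, 64, 16, 8])
```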

Distributed representation of patients and its use for medical cost prediction

Title Distributed representation of patients and its use for medical cost prediction
Authors Xianlong Zeng, Soheil Moosavinasab, En-Ju D Lin, Simon Lin, Razvan Bunescu, Chang Liu
Abstract Efficient representation of patients is very important in the healthcare domain and can help with many tasks such as medical risk prediction. Many existing methods, such as Diagnostic Cost Groups (DCG), rely on expert knowledge to build patient representations from medical data, which is resource consuming and not scalable. Unsupervised machine learning algorithms are a good choice for automating the representation learning process. However, there is very little research focusing on patient-level representation learning directly from medical claims. In this paper, we propose a novel patient vector learning architecture that learns high quality, fixed-length patient representations from claims data. We conducted several experiments to test the quality of our learned representation, and the empirical results show that our learned patient vectors are superior to vectors learned through other methods, including a popular commercial model. Lastly, we provide potential clinical interpretation for using our representation on predictive tasks, as interpretability is vital in the healthcare domain.
Tasks Representation Learning
Published 2019-09-13
URL https://arxiv.org/abs/1909.07157v1
PDF https://arxiv.org/pdf/1909.07157v1.pdf
PWC https://paperswithcode.com/paper/distributed-representation-of-patients-and
Repo
Framework