October 19, 2019

3368 words 16 mins read

Paper Group ANR 237

Maximum-Entropy Fine-Grained Classification. Social Media Would Not Lie: Prediction of the 2016 Taiwan Election via Online Heterogeneous Data. Direct Prediction of Cardiovascular Mortality from Low-dose Chest CT using Deep Learning. Super learning in the SAS system. Generalization Bounds for Vicinal Risk Minimization Principle. A Computational Meth …

Maximum-Entropy Fine-Grained Classification

Title Maximum-Entropy Fine-Grained Classification
Authors Abhimanyu Dubey, Otkrist Gupta, Ramesh Raskar, Nikhil Naik
Abstract Fine-Grained Visual Classification (FGVC) is an important computer vision problem that involves small diversity within the different classes, and often requires expert annotators to collect data. Utilizing this notion of small visual diversity, we revisit Maximum-Entropy learning in the context of fine-grained classification, and provide a training routine that maximizes the entropy of the output probability distribution for training convolutional neural networks on FGVC tasks. We provide a theoretical as well as empirical justification of our approach, and achieve state-of-the-art performance across a variety of classification tasks in FGVC, which can potentially be extended to any fine-tuning task. Our method is robust to different hyperparameter values, the amount of training data, and the amount of training label noise, and can hence be a valuable tool in many similar problems.
Tasks Fine-Grained Image Classification
Published 2018-09-16
URL http://arxiv.org/abs/1809.05934v2
PDF http://arxiv.org/pdf/1809.05934v2.pdf
PWC https://paperswithcode.com/paper/maximum-entropy-fine-grained-classification-1
Repo
Framework
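
As a rough illustration of the training objective described in the abstract above, here is a minimal PyTorch sketch of a cross-entropy loss with a maximum-entropy regularizer on the output distribution; the weight `gamma` and the function name are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def max_entropy_loss(logits, targets, gamma=0.1):
    """Cross-entropy minus a weighted entropy bonus on the predicted distribution.

    gamma is a hypothetical regularization weight; the paper's exact
    formulation and schedule may differ.
    """
    ce = F.cross_entropy(logits, targets)
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=1).mean()
    return ce - gamma * entropy  # maximizing entropy = subtracting it from the loss

# toy usage: a batch of logits over, say, 200 fine-grained classes
logits = torch.randn(8, 200)
targets = torch.randint(0, 200, (8,))
print(float(max_entropy_loss(logits, targets)))
```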

Social Media Would Not Lie: Prediction of the 2016 Taiwan Election via Online Heterogeneous Data

Title Social Media Would Not Lie: Prediction of the 2016 Taiwan Election via Online Heterogeneous Data
Authors Zheng Xie, Guannan Liu, Junjie Wu, Yong Tan
Abstract The prevalence of online media has attracted researchers from various domains to explore human behavior and make interesting predictions. In this research, we leverage heterogeneous social media data collected from various online platforms to predict Taiwan’s 2016 presidential election. In contrast to most existing research, we take a “signal” view of heterogeneous information and adopt the Kalman filter to fuse multiple signals into daily vote predictions for the candidates. We also consider events that influenced the election in a quantitative manner based on the so-called event study model that originated in the field of financial research. We obtained the following interesting findings. First, public opinions in online media dominate traditional polls in Taiwan election prediction in terms of both predictive power and timeliness. But offline polls can still help alleviate the sample bias of online opinions. Second, although online signals converge as election day approaches, the simple Facebook “Like” is consistently the strongest indicator of the election result. Third, most influential events have a strong connection to cross-strait relations, and the Chou Tzu-yu flag incident followed by the apology video one day before the election increased the vote share of Tsai Ing-Wen by 3.66%. This research justifies the predictive power of online media in politics and the advantages of information fusion. The combined use of the Kalman filter and the event study method contributes to the data-driven political analytics paradigm for both prediction and attribution purposes.
Tasks
Published 2018-03-21
URL http://arxiv.org/abs/1803.08010v2
PDF http://arxiv.org/pdf/1803.08010v2.pdf
PWC https://paperswithcode.com/paper/social-media-would-not-lie-prediction-of-the
Repo
Framework
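
To illustrate the signal-fusion idea, here is a minimal sketch of a one-dimensional Kalman filter that fuses several noisy daily signals into a single vote-share estimate under a random-walk state model; the variances and the toy data are assumptions for demonstration only, not the paper's fitted model.

```python
import numpy as np

def kalman_fuse(signals, signal_vars, process_var=1e-4, x0=0.5, p0=1.0):
    """Fuse daily signals (rows: days, cols: sources) into one latent
    vote-share estimate with a random-walk state model.

    signal_vars are per-source observation variances; all values here
    are illustrative assumptions, not the paper's parameters.
    """
    x, p = x0, p0
    estimates = []
    for day_obs in signals:
        p = p + process_var                  # predict: random-walk transition
        for z, r in zip(day_obs, signal_vars):
            if np.isnan(z):
                continue                     # source missing that day
            k = p / (p + r)                  # Kalman gain
            x = x + k * (z - x)              # state update
            p = (1.0 - k) * p                # variance update
        estimates.append(x)
    return np.array(estimates)

# toy usage: three noisy sources tracking a true vote share of 0.56
rng = np.random.default_rng(0)
obs = 0.56 + rng.normal(0, [0.05, 0.02, 0.08], size=(30, 3))
print(kalman_fuse(obs, signal_vars=[0.05**2, 0.02**2, 0.08**2])[-1])
```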

Direct Prediction of Cardiovascular Mortality from Low-dose Chest CT using Deep Learning

Title Direct Prediction of Cardiovascular Mortality from Low-dose Chest CT using Deep Learning
Authors Sanne G. M. van Velzen, Majd Zreik, Nikolas Lessmann, Max A. Viergever, Pim A. de Jong, Helena M. Verkooijen, Ivana Išgum
Abstract Cardiovascular disease (CVD) is a leading cause of death in the lung cancer screening population. Chest CT scans made in lung cancer screening are suitable for identification of participants at risk of CVD. Existing methods analyzing CT images from lung cancer screening for prediction of CVD events or mortality use engineered features extracted from the images combined with patient information. In this work, we propose a method that automatically predicts 5-year cardiovascular mortality directly from chest CT scans without the need for hand-crafting image features. A set of 1,583 participants of the National Lung Screening Trial was included (1,188 survivors, 395 non-survivors). Low-dose chest CT images acquired at baseline were analyzed and the follow-up time was 5 years. To limit the analysis to the heart region, the heart was first localized by our previously developed algorithm for organ localization exploiting convolutional neural networks. Thereafter, a convolutional autoencoder was used to encode the identified heart region. Finally, based on the extracted encodings, subjects were classified into survivors or non-survivors using a support vector machine classifier. The performance of the method was assessed in eight cross-validation experiments with 1,433 images used for training, 50 for validation and 100 for testing. The method achieved an area under the ROC curve of 0.72. The results demonstrate that prediction of cardiovascular mortality directly from low-dose screening chest CT scans, without hand-crafted features, is feasible, allowing identification of subjects at risk of fatal CVD events.
Tasks
Published 2018-10-04
URL http://arxiv.org/abs/1810.02277v1
PDF http://arxiv.org/pdf/1810.02277v1.pdf
PWC https://paperswithcode.com/paper/direct-prediction-of-cardiovascular-mortality
Repo
Framework
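
A minimal sketch of the pipeline's last two stages as described in the abstract (encode the localized heart region with a convolutional encoder, then classify the encodings with an SVM); the tiny encoder, crop size, and synthetic data are placeholders, not the published architecture or dataset.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Placeholder encoder standing in for the paper's convolutional autoencoder;
# layer sizes are assumptions, not the published architecture.
encoder = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

# Synthetic stand-ins for heart-region crops and 5-year survival labels.
rng = np.random.default_rng(0)
crops = torch.randn(200, 1, 64, 64)
labels = rng.integers(0, 2, size=200)

with torch.no_grad():
    encodings = encoder(crops).numpy()

svm = SVC(kernel="rbf", probability=True).fit(encodings[:150], labels[:150])
scores = svm.predict_proba(encodings[150:])[:, 1]
print("toy AUC:", roc_auc_score(labels[150:], scores))
```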

Super learning in the SAS system

Title Super learning in the SAS system
Authors Alexander P. Keil, Daniel Westreich, Jessie K Edwards, Stephen R Cole
Abstract Background and objective: Stacking is an ensemble machine learning method that averages predictions from multiple other algorithms, such as generalized linear models and regression trees. An implementation of stacking, called super learning, has been developed as a general approach to supervised learning and has seen frequent usage, in part due to the availability of an R package. We develop super learning in the SAS software system using a new macro, and demonstrate its performance relative to the R package. Methods: Following previous work using the R SuperLearner package, we assess the performance of super learning in a number of domains. We compare the R package with the new SAS macro in a small set of simulations assessing curve fitting in a predictive model, as well as in a set of 14 publicly available datasets to assess cross-validated accuracy. Results: Across the simulated data and the publicly available data, the SAS macro performed similarly to the R package, despite a different set of potential algorithms available natively in R and SAS. Conclusions: Our super learner macro performs as well as the R package at a number of tasks. Further, by extending the macro to include the use of R packages, the macro can leverage both the robust, enterprise-oriented procedures in SAS and the nimble, cutting-edge packages in R. In the spirit of ensemble learning, this macro extends the potential library of algorithms beyond a single software system and provides a simple avenue into machine learning in SAS.
Tasks Causal Inference
Published 2018-05-21
URL https://arxiv.org/abs/1805.08058v3
PDF https://arxiv.org/pdf/1805.08058v3.pdf
PWC https://paperswithcode.com/paper/super-learning-in-the-sas-system
Repo
Framework
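
For illustration, a minimal Python sketch of the stacking idea behind super learning: cross-validated predictions from candidate learners are combined by a non-negative least-squares meta-learner. The learner library and toy data are assumptions; the SAS macro and R SuperLearner package use richer libraries and loss functions.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeRegressor

# Toy regression data and a small library of candidate learners.
X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)
learners = [LinearRegression(), DecisionTreeRegressor(max_depth=4, random_state=0)]

# Cross-validated predictions from each candidate ("level-one" data).
Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in learners])

# Non-negative least-squares meta-learner gives the ensemble weights.
weights, _ = nnls(Z, y)
weights = weights / weights.sum()
print("super learner weights:", weights)
```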

Generalization Bounds for Vicinal Risk Minimization Principle

Title Generalization Bounds for Vicinal Risk Minimization Principle
Authors Chao Zhang, Min-Hsiu Hsieh, Dacheng Tao
Abstract The vicinal risk minimization (VRM) principle, first proposed by \citet{vapnik1999nature}, is an empirical risk minimization (ERM) variant that replaces Dirac masses with vicinal functions. Although there is strong numerical evidence showing that VRM outperforms ERM if appropriate vicinal functions are chosen, a comprehensive theoretical understanding of VRM is still lacking. In this paper, we study the generalization bounds for VRM. Our results support Vapnik’s original arguments and additionally provide deeper insights into VRM. First, we prove that the complexity of function classes convolving with vicinal functions can be controlled by that of the original function classes under the assumption that the function class is composed of Lipschitz-continuous functions. Then, the resulting generalization bounds for VRM suggest that the generalization performance of VRM is also affected by the choice of vicinity function and the quality of function classes. These findings can be used to examine whether the choice of vicinal function is appropriate for the VRM-based learning setting. Finally, we provide a theoretical explanation for existing VRM models, e.g., uniform distribution-based models, Gaussian distribution-based models, and mixup models.
Tasks
Published 2018-11-11
URL http://arxiv.org/abs/1811.04351v1
PDF http://arxiv.org/pdf/1811.04351v1.pdf
PWC https://paperswithcode.com/paper/generalization-bounds-for-vicinal-risk
Repo
Framework
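
A minimal sketch of one VRM instance mentioned in the abstract, a Gaussian vicinal distribution: each training example is replaced by a sample drawn from a Gaussian centred at it before the usual ERM update. The model, `sigma`, and data are illustrative assumptions; mixup is another vicinal choice covered by the analysis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy model, data, and optimizer; sigma is an illustrative vicinity width.
model = nn.Linear(10, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(32, 10)
y = torch.randint(0, 3, (32,))
sigma = 0.1

# One Gaussian-vicinity VRM step: perturb each example, then do the ERM update.
x_vicinal = x + sigma * torch.randn_like(x)   # draw from the vicinal distribution
loss = F.cross_entropy(model(x_vicinal), y)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```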

A Computational Method for Evaluating UI Patterns

Title A Computational Method for Evaluating UI Patterns
Authors Bardia Doosti, Tao Dong, Biplab Deka, Jeffrey Nichols
Abstract UI design languages, such as Google’s Material Design, make applications both easier to develop and easier to learn by providing a set of standard UI components. Nonetheless, it is hard to assess the impact of design languages in the wild. Moreover, designers often get stranded by strongly opinionated debates around the merit of certain UI components, such as the Floating Action Button and the Navigation Drawer. To address these challenges, this short paper introduces a method for measuring the impact of design languages and informing design debates through analyzing a dataset consisting of view hierarchies, screenshots, and app metadata for more than 9,000 mobile apps. Our data analysis shows that use of Material Design is positively correlated with app ratings, and to some extent, also with the number of installs. Furthermore, we show that use of UI components varies by app category, suggesting that a more nuanced view is needed in design debates.
Tasks
Published 2018-07-11
URL http://arxiv.org/abs/1807.04191v1
PDF http://arxiv.org/pdf/1807.04191v1.pdf
PWC https://paperswithcode.com/paper/a-computational-method-for-evaluating-ui
Repo
Framework

Improving High Resolution Histology Image Classification with Deep Spatial Fusion Network

Title Improving High Resolution Histology Image Classification with Deep Spatial Fusion Network
Authors Yongxiang Huang, Albert Chi-shing Chung
Abstract Histology imaging is an essential diagnostic method to finalize the grade and stage of cancer of different tissues, especially for breast cancer diagnosis. Specialists often disagree on the final diagnosis of biopsy tissue due to the complex morphological variety. Although convolutional neural networks (CNNs) have advantages in extracting discriminative features in image classification, directly training a CNN on high resolution histology images is currently computationally infeasible. Besides, inconsistent discriminative features are often distributed over the whole histology image, which poses challenges for patch-based CNN classification methods. In this paper, we propose a novel architecture for automatic classification of high resolution histology images. First, an adapted residual network is employed to explore hierarchical features without attenuation. Second, we develop a robust deep fusion network to utilize the spatial relationship between patches and learn to correct the prediction bias generated from inconsistent discriminative feature distribution. The proposed method is evaluated using 10-fold cross-validation on 400 high resolution breast histology images with balanced labels and reports 95% accuracy on 4-class classification and 98.5% accuracy, 99.6% AUC on 2-class classification (carcinoma and non-carcinoma), which substantially outperforms previous methods and is close to pathologist performance.
Tasks Image Classification
Published 2018-07-27
URL http://arxiv.org/abs/1807.10552v1
PDF http://arxiv.org/pdf/1807.10552v1.pdf
PWC https://paperswithcode.com/paper/improving-high-resolution-histology-image
Repo
Framework
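
A minimal sketch of the patch-then-fuse idea from the abstract: a patch-level CNN scores each tile of a high-resolution image and a small fusion network combines the spatially arranged patch outputs into an image-level prediction. Patch size, grid shape, and layer widths are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the patch CNN and the spatial fusion network.
patch_cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4),
)
fusion_net = nn.Sequential(
    nn.Flatten(), nn.Linear(4 * 3 * 3, 64), nn.ReLU(), nn.Linear(64, 4),
)

image = torch.randn(1, 3, 768, 768)                       # one high-resolution region
patches = image.unfold(2, 256, 256).unfold(3, 256, 256)   # 3x3 grid of 256x256 patches
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, 256, 256)

patch_logits = patch_cnn(patches)                                            # (9, 4) per-patch scores
grid = patch_logits.softmax(dim=1).reshape(1, 3, 3, 4).permute(0, 3, 1, 2)   # (1, 4, 3, 3) spatial map
print(fusion_net(grid).shape)                                                # image-level 4-class logits
```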

Deep Adaptive Temporal Pooling for Activity Recognition

Title Deep Adaptive Temporal Pooling for Activity Recognition
Authors Sibo Song, Ngai-Man Cheung, Vijay Chandrasekhar, Bappaditya Mandal
Abstract Deep neural networks have recently achieved competitive accuracy for human activity recognition. However, there is room for improvement, especially in modeling long-term temporal importance and determining the activity relevance of different temporal segments in a video. To address this problem, we propose a learnable and differentiable module: Deep Adaptive Temporal Pooling (DATP). DATP applies a self-attention mechanism to adaptively pool the classification scores of different video segments. Specifically, using frame-level features, DATP regresses the importance of different temporal segments and generates weights for them. Remarkably, DATP is trained using only the video-level label; no additional supervision is needed beyond the video-level activity class label. We conduct extensive experiments to investigate various input features and different weight models. Experimental results show that DATP can learn to assign large weights to key video segments. More importantly, DATP can improve training of the frame-level feature extractor. This is because relevant temporal segments are assigned large weights during back-propagation. Overall, we achieve state-of-the-art performance on the UCF101, HMDB51 and Kinetics datasets.
Tasks Activity Recognition, Human Activity Recognition
Published 2018-08-22
URL http://arxiv.org/abs/1808.07272v1
PDF http://arxiv.org/pdf/1808.07272v1.pdf
PWC https://paperswithcode.com/paper/deep-adaptive-temporal-pooling-for-activity
Repo
Framework
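
A minimal sketch of adaptive temporal pooling as described: per-segment importance weights are regressed from segment features, normalized with a softmax over time, and used to pool per-segment classification scores into a video-level score. Feature size, segment count, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

feat_dim, n_classes, n_segments = 512, 101, 7
attn = nn.Linear(feat_dim, 1)            # importance regressor over segments
classifier = nn.Linear(feat_dim, n_classes)

segment_feats = torch.randn(4, n_segments, feat_dim)    # (batch, segments, features)
weights = torch.softmax(attn(segment_feats), dim=1)      # (batch, segments, 1)
segment_scores = classifier(segment_feats)               # (batch, segments, classes)
video_scores = (weights * segment_scores).sum(dim=1)     # adaptively pooled video-level scores
print(video_scores.shape)                                 # torch.Size([4, 101])
```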

Towards the identification of Parkinson’s Disease using only T1 MR Images

Title Towards the identification of Parkinson’s Disease using only T1 MR Images
Authors Sara Soltaninejad, Irene Cheng, Anup Basu
Abstract Parkinson’s Disease (PD) is one of the most common types of neurological diseases, caused by progressive degeneration of dopaminergic neurons in the brain. Even though there is no fixed cure for this neurodegenerative disease, earlier diagnosis followed by earlier treatment can help patients have a better quality of life. Magnetic Resonance Imaging (MRI) has been one of the most popular diagnostic tools in recent years because it avoids harmful radiation. In this paper, we investigate the plausibility of using MRIs for automatically diagnosing PD. Our proposed method has three main steps: 1) Preprocessing, 2) Feature Extraction, and 3) Classification. The FreeSurfer library is used for the first and the second steps. For classification, three main types of classifiers, including Logistic Regression (LR), Random Forest (RF) and Support Vector Machine (SVM), are applied and their classification ability is compared. The Parkinson’s Progression Markers Initiative (PPMI) data set is used to evaluate the proposed method. The proposed system proves to be promising in assisting the diagnosis of PD.
Tasks
Published 2018-06-19
URL http://arxiv.org/abs/1806.07489v1
PDF http://arxiv.org/pdf/1806.07489v1.pdf
PWC https://paperswithcode.com/paper/towards-the-identification-of-parkinsons
Repo
Framework
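
A minimal scikit-learn sketch of the classification step: comparing Logistic Regression, Random Forest, and SVM with cross-validation. The features here are random stand-ins for FreeSurfer-derived morphometric measures, not PPMI data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for FreeSurfer features (e.g. regional volumes/thicknesses)
# and PD vs. control labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))
y = rng.integers(0, 2, size=200)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```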

Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval

Title Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
Authors Sagar Uprety, Dimitris Gkoumas, Dawei Song
Abstract Relevance judgment in Information Retrieval is influenced by multiple factors. These include not only the topicality of the documents but also other user-oriented factors like trust, user interest, etc. Recent works have grouped these various factors into seven dimensions of relevance. In a previous work, these relevance dimensions were quantified and a user’s cognitive state with respect to a document was represented as a state vector in a Hilbert Space, with each relevance dimension representing a basis. It was observed that relevance dimensions are incompatible in some documents when making a judgment. Incompatibility being a fundamental feature of Quantum Theory, this motivated us to test the Quantum nature of relevance judgments using Bell-type inequalities. However, none of the Bell-type inequalities tested have shown any violation. We discuss our methodology to construct incompatible bases for documents from real-world query log data, the experiments to test Bell inequalities on this dataset, and possible reasons for the lack of violation.
Tasks Information Retrieval
Published 2018-11-16
URL http://arxiv.org/abs/1811.06645v2
PDF http://arxiv.org/pdf/1811.06645v2.pdf
PWC https://paperswithcode.com/paper/investigating-bell-inequalities-for
Repo
Framework
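
For orientation, a minimal sketch of a CHSH-type Bell test on paired binary outcomes, the kind of inequality the abstract refers to; the classical bound is |S| <= 2, and the random data here are stand-ins for judgments derived from query-log measurements.

```python
import numpy as np

# A and A' are two measurement settings on one "side" (e.g. two relevance
# dimensions), B and B' on the other; outcomes are coded as +1 / -1.
rng = np.random.default_rng(1)
n = 5000
A, Ap, B, Bp = (rng.choice([-1, 1], size=n) for _ in range(4))

def E(x, y):
    return float(np.mean(x * y))   # empirical correlation of paired outcomes

S = E(A, B) + E(A, Bp) + E(Ap, B) - E(Ap, Bp)
print("CHSH S =", S, "| violation" if abs(S) > 2 else "| no violation")
```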

Nonparametric Bayesian volatility learning under microstructure noise

Title Nonparametric Bayesian volatility learning under microstructure noise
Authors Shota Gugushvili, Frank van der Meulen, Moritz Schauer, Peter Spreij
Abstract Aiming at financial applications, we study the problem of learning the volatility under market microstructure noise. Specifically, we consider noisy discrete time observations from a stochastic differential equation and develop a novel computational method to learn the diffusion coefficient of the equation. We take a nonparametric Bayesian approach, where we model the volatility function a priori as piecewise constant. Its prior is specified via the inverse Gamma Markov chain. Sampling from the posterior is accomplished by incorporating the Forward Filtering Backward Simulation algorithm in the Gibbs sampler. Good performance of the method is demonstrated on two representative synthetic data examples. Finally, we apply the method on the EUR/USD exchange rate dataset.
Tasks
Published 2018-05-15
URL http://arxiv.org/abs/1805.05606v1
PDF http://arxiv.org/pdf/1805.05606v1.pdf
PWC https://paperswithcode.com/paper/nonparametric-bayesian-volatility-learning
Repo
Framework
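
A minimal sketch of the observation model in the abstract: a diffusion with piecewise-constant volatility observed at discrete times under additive microstructure noise, with a crude blockwise realized-volatility baseline. The inverse Gamma Markov chain prior and the FFBS-within-Gibbs sampler are not reproduced here; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 2000, 1.0 / 2000
t = np.arange(n) * dt
sigma = np.where(t < 0.5, 0.5, 1.5)            # true piecewise-constant volatility
dX = sigma * np.sqrt(dt) * rng.standard_normal(n)
X = np.cumsum(dX)                               # latent diffusion path
Y = X + 0.01 * rng.standard_normal(n)           # observations with microstructure noise

# Crude blockwise realized-volatility estimate, as a naive baseline to compare
# a Bayesian posterior against (it is biased upward by the noise).
blocks = np.array_split(np.diff(Y), 20)
rv = [np.sqrt(np.sum(b**2) / (len(b) * dt)) for b in blocks]
print(np.round(rv, 2))
```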

Multi-scale Processing of Noisy Images using Edge Preservation Losses

Title Multi-scale Processing of Noisy Images using Edge Preservation Losses
Authors Nati Ofir, Yosi Keller
Abstract Noisy image processing is a fundamental task of computer vision. The first example is the detection of faint edges in noisy images, a challenging problem studied over recent decades. A recent study introduced a fast method to detect faint edges with the highest accuracy among all existing approaches. Its complexity is nearly linear in the number of image pixels, and its runtime is seconds for a noisy image. That approach utilizes a multi-scale binary partitioning of the image. By utilizing the multi-scale U-net architecture, we show in this paper that their method can be dramatically improved in both run time and accuracy. By training the network on a dataset of binary images, we developed an approach for faint edge detection that works in linear complexity. Our runtime on a noisy image is milliseconds on a GPU. Even though our method is orders of magnitude faster, we still achieve higher detection accuracy under many challenging scenarios. In addition, we show that our approach to performing multi-scale preprocessing of noisy images using U-net improves the ability to perform other vision tasks in the presence of noise. We demonstrate this on the problems of noisy object classification and classical image denoising. We show that multi-scale denoising can be carried out by a novel edge preservation loss. As our experiments show, we achieve high-quality results in the three aspects of faint edge detection, noisy image classification and natural image denoising.
Tasks Denoising, Edge Detection, Image Classification, Image Denoising
Published 2018-03-26
URL http://arxiv.org/abs/1803.09420v5
PDF http://arxiv.org/pdf/1803.09420v5.pdf
PWC https://paperswithcode.com/paper/deep-faster-detection-of-faint-edges-in-noisy
Repo
Framework
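
A minimal sketch of an edge-preservation loss of the kind the abstract describes for multi-scale denoising: a pixel-wise term plus an L1 term on Sobel image gradients. The exact form and weighting used by the authors may differ; `lambda_edge` is an illustrative choice.

```python
import torch
import torch.nn.functional as F

# Sobel kernels for horizontal and vertical image gradients.
sobel_x = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
sobel_y = sobel_x.transpose(2, 3)

def grads(img):
    return F.conv2d(img, sobel_x, padding=1), F.conv2d(img, sobel_y, padding=1)

def edge_preservation_loss(pred, target, lambda_edge=0.5):
    """Pixel L1 loss plus an L1 penalty on the difference of image gradients."""
    pgx, pgy = grads(pred)
    tgx, tgy = grads(target)
    pixel = F.l1_loss(pred, target)
    edge = F.l1_loss(pgx, tgx) + F.l1_loss(pgy, tgy)
    return pixel + lambda_edge * edge

pred, target = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
print(float(edge_preservation_loss(pred, target)))
```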

Generating a Fusion Image: One’s Identity and Another’s Shape

Title Generating a Fusion Image: One’s Identity and Another’s Shape
Authors Donggyu Joo, Doyeon Kim, Junmo Kim
Abstract Generating a novel image by manipulating two input images is an interesting research problem in the study of generative adversarial networks (GANs). We propose a new GAN-based network that generates a fusion image with the identity of input image x and the shape of input image y. Our network can simultaneously train on more than two image datasets in an unsupervised manner. We define an identity loss L_I to capture the identity of image x and a shape loss L_S to get the shape of y. In addition, we propose a novel training method called Min-Patch training to focus the generator on crucial parts of an image, rather than its entirety. We show qualitative results on the VGG Youtube Pose dataset, Eye dataset (MPIIGaze and UnityEyes), and the Photo-Sketch-Cartoon dataset.
Tasks
Published 2018-04-20
URL http://arxiv.org/abs/1804.07455v1
PDF http://arxiv.org/pdf/1804.07455v1.pdf
PWC https://paperswithcode.com/paper/generating-a-fusion-image-ones-identity-and
Repo
Framework

Finding the Answers with Definition Models

Title Finding the Answers with Definition Models
Authors Jack Parry
Abstract Inspired by a previous attempt to answer crossword questions using neural networks (Hill, Cho, Korhonen, & Bengio, 2015), this dissertation implements extensions to improve the performance of this existing definition model on the task of answering crossword questions. A discussion and evaluation of the original implementation finds that there are some ways in which the recurrent neural model could be extended. Insights from the related fields of neural language modeling and neural machine translation provide the justification and means required for these extensions. Two extensions are applied to the LSTM encoder: first, taking the average of LSTM states across the sequence, and second, using a bidirectional LSTM; both implementations serve to improve model performance on a definitions and crossword test set. In order to improve performance on crossword questions, the training data is increased to include crossword questions and answers, and this serves to improve results on definitions as well as crossword questions. The final experiments are conducted using sub-word unit segmentation, first on the source side, and later preliminary experimentation is conducted to facilitate character-level output. Initially, an exact reproduction of the baseline results proves unsuccessful. Despite this, the extensions improve performance, allowing the definition model to surpass the performance of the recurrent neural network variants of the previous work (Hill, et al., 2015).
Tasks Language Modelling, Machine Translation
Published 2018-09-01
URL http://arxiv.org/abs/1809.00224v1
PDF http://arxiv.org/pdf/1809.00224v1.pdf
PWC https://paperswithcode.com/paper/finding-the-answers-with-definition-models
Repo
Framework
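
A minimal sketch of the two encoder extensions discussed: a bidirectional LSTM over the definition, with the answer representation taken as the average of LSTM outputs across the sequence and projected into the word-embedding space. Vocabulary and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden = 1000, 64, 128
embed = nn.Embedding(vocab_size, emb_dim)
lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
project = nn.Linear(2 * hidden, emb_dim)   # map to the word-embedding space

definition = torch.randint(0, vocab_size, (8, 12))   # batch of definition token ids
outputs, _ = lstm(embed(definition))                  # (batch, seq, 2 * hidden)
avg_state = outputs.mean(dim=1)                       # average of LSTM states over the sequence
answer_vec = project(avg_state)                       # compared against candidate answer embeddings
print(answer_vec.shape)                               # torch.Size([8, 64])
```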

Group Sparsity Residual with Non-Local Samples for Image Denoising

Title Group Sparsity Residual with Non-Local Samples for Image Denoising
Authors Zhiyuan Zha, Xinggan Zhang, Qiong Wang, Yechao Bai, Lan Tang, Xin Yuan
Abstract Inspired by group-based sparse coding, the recently proposed group sparsity residual (GSR) scheme has demonstrated superior performance in image processing. However, one challenge in GSR is to estimate the residual by using a proper reference for the group-based sparse coding (GSC), which is desired to be as close to the truth as possible. Previous studies utilized estimations from other algorithms (i.e., GMM or BM3D), which are either not accurate or too slow. In this paper, we propose to use Non-Local Samples (NLS) as the reference in the GSR regime for image denoising, thus termed GSR-NLS. More specifically, we first obtain a good estimation of the group sparse coefficients by exploiting the image nonlocal self-similarity, and then solve the GSR model with an effective iterative shrinkage algorithm. Experimental results demonstrate that the proposed GSR-NLS not only outperforms many state-of-the-art methods, but also delivers a competitive advantage in speed.
Tasks Denoising, Image Denoising
Published 2018-03-22
URL http://arxiv.org/abs/1803.08412v1
PDF http://arxiv.org/pdf/1803.08412v1.pdf
PWC https://paperswithcode.com/paper/group-sparsity-residual-with-non-local
Repo
Framework
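
As a rough illustration of the iterative shrinkage step mentioned in the abstract, a minimal sketch of soft-thresholding a coefficient residual toward a reference estimate; the non-local grouping and the NLS reference construction are not reproduced, and all values are illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-thresholding, the core operator of iterative shrinkage."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

rng = np.random.default_rng(0)
alpha = rng.normal(size=100)             # current group sparse coefficients
reference = 0.1 * rng.normal(size=100)   # reference coefficients (e.g. from non-local samples)

# Shrink the residual between the coefficients and the reference.
residual = alpha - reference
alpha_hat = reference + soft_threshold(residual, tau=0.05)
print(np.count_nonzero(alpha_hat - reference), "non-zero residual entries")
```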