July 28, 2019

3038 words 15 mins read

Paper Group ANR 355

An Ensemble of Deep Convolutional Neural Networks for Alzheimer’s Disease Detection and Classification. End-to-End Musical Key Estimation Using a Convolutional Neural Network. Real-time Load Prediction with High Velocity Smart Home Data Stream. On the Contribution of Discourse Structure on Text Complexity Assessment. Learning Generative ConvNets via Multi-grid Modeling and Sampling …

An Ensemble of Deep Convolutional Neural Networks for Alzheimer’s Disease Detection and Classification

Title An Ensemble of Deep Convolutional Neural Networks for Alzheimer’s Disease Detection and Classification
Authors Jyoti Islam, Yanqing Zhang
Abstract Alzheimer’s Disease destroys brain cells, causing people to lose their memory, mental functions and the ability to continue daily activities. It is a severe neurological brain disorder which is not curable, but earlier detection of Alzheimer’s Disease can help with proper treatment and prevent further brain tissue damage. Detection and classification of Alzheimer’s Disease (AD) is challenging because the signs that distinguish Alzheimer’s Disease in MRI data can sometimes also be found in the normal healthy brain MRI data of older people. Moreover, relatively little data is available to train an automated Alzheimer’s Disease detection and classification model. In this paper, we present a novel Alzheimer’s Disease detection and classification model using brain MRI data analysis. We develop an ensemble of deep convolutional neural networks and demonstrate superior performance on the Open Access Series of Imaging Studies (OASIS) dataset.
Tasks
Published 2017-12-02
URL http://arxiv.org/abs/1712.01675v2
PDF http://arxiv.org/pdf/1712.01675v2.pdf
PWC https://paperswithcode.com/paper/an-ensemble-of-deep-convolutional-neural
Repo
Framework
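
The entry above publishes no code here, but the ensemble idea is easy to sketch: train several CNNs independently and average their softmax outputs at inference time. Below is a minimal, hypothetical PyTorch sketch; the tiny architecture, the three-member ensemble, and the four-class output are illustrative assumptions, not details from the paper.

```python
# Hypothetical soft-voting ensemble of small CNNs for an MRI classification
# task (class count and architecture are assumptions, not from the paper).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Each member would be trained independently (e.g. different seeds or folds);
# at inference time the softmax outputs are averaged.
members = [SmallCNN() for _ in range(3)]
x = torch.randn(8, 1, 64, 64)          # batch of single-channel MRI slices
with torch.no_grad():
    probs = torch.stack([m(x).softmax(dim=1) for m in members]).mean(0)
pred = probs.argmax(dim=1)             # ensemble prediction per image
```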

End-to-End Musical Key Estimation Using a Convolutional Neural Network

Title End-to-End Musical Key Estimation Using a Convolutional Neural Network
Authors Filip Korzeniowski, Gerhard Widmer
Abstract We present an end-to-end system for musical key estimation, based on a convolutional neural network. The proposed system not only outperforms existing key estimation methods proposed in the academic literature; it is also capable of learning a unified model for diverse musical genres that performs comparably to existing systems specialised for specific genres. Our experiments confirm that different genres do differ in their interpretation of tonality, and thus a system tuned e.g. for pop music performs poorly on pieces of electronic music. They also reveal that such cross-genre setups evoke specific types of error (predicting the relative or parallel minor). However, using the data-driven approach proposed in this paper, we can train models that deal with multiple musical styles adequately, and without major losses in accuracy.
Tasks
Published 2017-06-09
URL http://arxiv.org/abs/1706.02921v1
PDF http://arxiv.org/pdf/1706.02921v1.pdf
PWC https://paperswithcode.com/paper/end-to-end-musical-key-estimation-using-a
Repo
Framework
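
As a rough illustration of an end-to-end key classifier (not the authors' architecture): a small ConvNet maps a spectrogram to one of 24 key classes, the common 12-tonics-times-major/minor convention. Input shape, layer sizes, and the class-index mapping below are made up.

```python
# Illustrative sketch only: a ConvNet mapping a spectrogram patch to one of
# 24 keys (12 tonics x major/minor), a common key-estimation convention.
import torch
import torch.nn as nn

key_net = nn.Sequential(
    nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 24),                 # logits over the 24 key classes
)

spec = torch.randn(1, 1, 105, 200)     # (batch, channel, freq bins, frames)
key_id = key_net(spec).argmax(dim=1)   # e.g. 0 = C major, 12 = C minor (assumed mapping)
```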

Real-time Load Prediction with High Velocity Smart Home Data Stream

Title Real-time Load Prediction with High Velocity Smart Home Data Stream
Authors Christoph Doblander, Martin Strohbach, Holger Ziekow, Hans-Arno Jacobsen
Abstract This paper addresses the use of smart-home sensor streams for continuous prediction of the energy loads of individual households which participate as agents in local markets. We introduce a new device-level energy consumption dataset recorded over three years, which includes high-resolution energy measurements from electrical devices collected within a pilot program. Using data from that pilot, we analyze the applicability of various machine learning mechanisms for continuous load prediction. Specifically, we address the short-term load prediction that is required for load balancing in electrical micro-grids. We report on the prediction performance and the computational requirements of a broad range of prediction mechanisms. Furthermore, we present an architecture and an experimental evaluation of applying this prediction in the stream.
Tasks
Published 2017-08-12
URL http://arxiv.org/abs/1708.04613v1
PDF http://arxiv.org/pdf/1708.04613v1.pdf
PWC https://paperswithcode.com/paper/real-time-load-prediction-with-high-velocity
Repo
Framework
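
To make the streaming setting concrete, here is a minimal sketch assuming a 1 Hz stream of device-level wattage readings: a sliding window feeds a naive mean forecast. The paper evaluates proper machine learning predictors; this stub only shows how a prediction can be kept up to date as readings arrive.

```python
# Minimal streaming sketch, assuming one reading per second (assumed rate).
# Real systems would plug in the learned models the paper evaluates.
from collections import deque

window = deque(maxlen=900)             # last 15 minutes of readings

def on_reading(watts: float) -> float:
    """Ingest one measurement and return the short-term load forecast."""
    window.append(watts)
    return sum(window) / len(window)   # naive mean forecast as placeholder

for w in [210.0, 215.5, 198.2, 205.1]:
    forecast = on_reading(w)
print(f"next-interval forecast: {forecast:.1f} W")
```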

On the Contribution of Discourse Structure on Text Complexity Assessment

Title On the Contribution of Discourse Structure on Text Complexity Assessment
Authors Elnaz Davoodi, Leila Kosseim
Abstract This paper investigates the influence of discourse features on text complexity assessment. To do so, we created two data sets based on the Penn Discourse Treebank and the Simple English Wikipedia corpora and compared the influence of coherence, cohesion, surface, lexical and syntactic features on assessing text complexity. Results show that with both data sets, coherence features are more correlated with text complexity than the other types of features. In addition, feature selection revealed that with both data sets the single most discriminating feature is a coherence feature.
Tasks Feature Selection
Published 2017-08-19
URL http://arxiv.org/abs/1708.05800v1
PDF http://arxiv.org/pdf/1708.05800v1.pdf
PWC https://paperswithcode.com/paper/on-the-contribution-of-discourse-structure-on
Repo
Framework
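
A hedged sketch of the evaluation idea: rank feature groups by the absolute correlation of each feature with a complexity score. The data below is randomly generated as a stand-in; only the procedure mirrors the paper's analysis.

```python
# Toy correlation ranking; features and scores are random stand-ins,
# not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))               # columns: coherence, cohesion, surface, lexical
y = 0.8 * X[:, 0] + rng.normal(size=500)    # complexity score (toy generator)

names = ["coherence", "cohesion", "surface", "lexical"]
corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
for name, r in sorted(zip(names, corrs), key=lambda t: -t[1]):
    print(f"{name:>10}: |r| = {r:.2f}")      # coherence ranks first here by construction
```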

Learning Generative ConvNets via Multi-grid Modeling and Sampling

Title Learning Generative ConvNets via Multi-grid Modeling and Sampling
Authors Ruiqi Gao, Yang Lu, Junpei Zhou, Song-Chun Zhu, Ying Nian Wu
Abstract This paper proposes a multi-grid method for learning energy-based generative ConvNet models of images. For each grid, we learn an energy-based probabilistic model where the energy function is defined by a bottom-up convolutional neural network (ConvNet or CNN). Learning such a model requires generating synthesized examples from the model. Within each iteration of our learning algorithm, for each observed training image, we generate synthesized images at multiple grids by initializing the finite-step MCMC sampling from a minimal 1 x 1 version of the training image. The synthesized image at each subsequent grid is obtained by a finite-step MCMC initialized from the synthesized image generated at the previous coarser grid. After obtaining the synthesized examples, the parameters of the models at multiple grids are updated separately and simultaneously based on the differences between synthesized and observed examples. We show that this multi-grid method can learn realistic energy-based generative ConvNet models, and it outperforms the original contrastive divergence (CD) and persistent CD.
Tasks
Published 2017-09-26
URL http://arxiv.org/abs/1709.08868v2
PDF http://arxiv.org/pdf/1709.08868v2.pdf
PWC https://paperswithcode.com/paper/learning-generative-convnets-via-multi-grid
Repo
Framework
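
The coarse-to-fine sampling schedule can be sketched as follows, with Langevin dynamics standing in for the paper's finite-step MCMC and placeholder energy functions in place of the learned ConvNet energies. Grid sizes, step counts, and step sizes are assumptions.

```python
# Sketch of the multi-grid sampling schedule; the energies are placeholders
# (the paper learns a ConvNet energy per grid). Each grid's chain is a few
# Langevin steps initialized from the upsampled sample of the coarser grid.
import torch
import torch.nn.functional as F

def langevin(x, energy, steps=20, step_size=0.01):
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        g, = torch.autograd.grad(energy(x).sum(), x)
        x = x - 0.5 * step_size * g + (step_size ** 0.5) * torch.randn_like(x)
    return x.detach()

sizes = [1, 4, 16, 64]                    # grids: 1x1 -> 4x4 -> 16x16 -> 64x64
energies = {s: (lambda z: (z ** 2).mean()) for s in sizes}  # stand-in energies

img = torch.randn(8, 3, 64, 64)           # observed training batch
x = F.adaptive_avg_pool2d(img, 1)         # minimal 1x1 version of each image
for s in sizes:
    x = F.interpolate(x, size=(s, s), mode="bilinear", align_corners=False)
    x = langevin(x, energies[s])          # finite-step MCMC at this grid
```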

Multi-modal Geolocation Estimation Using Deep Neural Networks

Title Multi-modal Geolocation Estimation Using Deep Neural Networks
Authors Jesse M. Johns, Jeremiah Rounds, Michael J. Henry
Abstract Estimating the location where an image was taken based solely on the contents of the image is a challenging task, even for humans, as properly labeling an image in such a fashion relies heavily on contextual information, and is not as simple as identifying a single object in the image. Thus any method which attempts to do so must somehow account for these complexities, and no single model to date is completely capable of addressing all challenges. This work contributes to the state of research in image geolocation inference by introducing a novel global meshing strategy, outlining a variety of training procedures to overcome the considerable data limitations when training these models, and demonstrating how incorporating additional information can improve the overall performance of a geolocation inference model. In this work, it is shown that Delaunay triangles are an effective type of mesh for geolocation in relatively low volume scenarios when compared to results from state-of-the-art models which use quad trees and an order of magnitude more training data. In addition, the time of posting, learned user albuming, and other metadata are easily incorporated to improve geolocation accuracy by up to 11% at the country level (750 km) and up to 3% at the city level (25 km).
Tasks
Published 2017-12-26
URL http://arxiv.org/abs/1712.09458v1
PDF http://arxiv.org/pdf/1712.09458v1.pdf
PWC https://paperswithcode.com/paper/multi-modal-geolocation-estimation-using-deep
Repo
Framework
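
The meshing strategy can be illustrated with SciPy: triangulate a set of seed coordinates with Delaunay and treat the containing triangle as the geolocation class. The seed points below are random stand-ins; the paper derives its mesh from training data, and a real system would account for the sphericity that this planar sketch ignores.

```python
# Delaunay mesh as geolocation classes; seeds are made-up, and treating
# (lat, lon) as planar coordinates is a deliberate simplification.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
seeds = rng.uniform([-90, -180], [90, 180], size=(200, 2))  # (lat, lon) seeds
mesh = Delaunay(seeds)

photo_coord = np.array([[48.86, 2.35]])       # e.g. a photo taken in Paris
cell = mesh.find_simplex(photo_coord)[0]      # triangle id = class (-1 if outside hull)
print(f"class/cell id: {cell}")
```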

cyTRON and cyTRON/JS: two Cytoscape-based applications for the inference of cancer evolution models

Title cyTRON and cyTRON/JS: two Cytoscape-based applications for the inference of cancer evolution models
Authors Lucrezia Patruno, Edoardo Galimberti, Daniele Ramazzotti, Giulio Caravagna, Luca De Sano, Marco Antoniotti, Alex Graudenzi
Abstract The increasing availability of sequencing data from cancer samples is fueling the development of algorithmic strategies to investigate tumor heterogeneity and infer reliable models of cancer evolution. Here we build on previous work on cancer progression inference from genomic alteration data to deliver two distinct Cytoscape-based applications, which allow users to produce, visualize and manipulate cancer evolution models, also by interacting with public genomic and proteomics databases. In particular, we introduce cyTRON, a stand-alone Cytoscape app, and cyTRON/JS, a web application which employs the functionalities of Cytoscape/JS. cyTRON was developed in Java; the code is available at https://github.com/BIMIB-DISCo/cyTRON and on the Cytoscape App Store http://apps.cytoscape.org/apps/cytron. cyTRON/JS was developed in JavaScript and R; the source code of the tool is available at https://github.com/BIMIB-DISCo/cyTRON-js and the tool is accessible from https://bimib.disco.unimib.it/cytronjs/welcome.
Tasks
Published 2017-05-08
URL https://arxiv.org/abs/1705.03067v2
PDF https://arxiv.org/pdf/1705.03067v2.pdf
PWC https://paperswithcode.com/paper/cytron-and-cytronjs-two-cytoscape-based
Repo
Framework

On Fundamental Limits of Robust Learning

Title On Fundamental Limits of Robust Learning
Authors Jiashi Feng
Abstract We consider the problems of robust PAC learning from distributed and streaming data, which may contain malicious errors and outliers, and analyze their fundamental complexity questions. In particular, we establish lower bounds on the communication complexity for distributed robust learning performed on multiple machines, and on the space complexity for robust learning from streaming data on a single machine. These results demonstrate that gaining robustness of learning algorithms is usually at the expense of increased complexities. As far as we know, this work gives the first complexity results for distributed and online robust PAC learning.
Tasks
Published 2017-03-30
URL http://arxiv.org/abs/1703.10444v1
PDF http://arxiv.org/pdf/1703.10444v1.pdf
PWC https://paperswithcode.com/paper/on-fundamental-limits-of-robust-learning
Repo
Framework

Iterative Block Tensor Singular Value Thresholding for Extraction of Low Rank Component of Image Data

Title Iterative Block Tensor Singular Value Thresholding for Extraction of Low Rank Component of Image Data
Authors Longxi Chen, Yipeng Liu, Ce Zhu
Abstract Tensor principal component analysis (TPCA) is a multi-linear extension of principal component analysis which converts a set of correlated measurements into several principal components. In this paper, we propose a new robust TPCA method to extract the principal components of multi-way data based on tensor singular value decomposition. The tensor is split into a number of blocks of the same size. The low rank component of each block tensor is extracted using an iterative tensor singular value thresholding method. The principal components of the multi-way data are the concatenation of the low rank components of all the block tensors. We give block tensor incoherence conditions that guarantee successful decomposition. This factorization has optimality properties similar to those of the low rank matrix factorization derived from singular value decomposition. Experimentally, we demonstrate its effectiveness in two applications: motion separation for surveillance videos and illumination normalization for face images.
Tasks
Published 2017-01-15
URL http://arxiv.org/abs/1701.04043v1
PDF http://arxiv.org/pdf/1701.04043v1.pdf
PWC https://paperswithcode.com/paper/iterative-block-tensor-singular-value
Repo
Framework
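
A simplified matrix analogue of the block thresholding step follows; the paper operates on tensor blocks via the tensor SVD, whereas this sketch applies singular value thresholding to 2-D blocks (with an assumed block size) to show the principle.

```python
# Block-wise singular value thresholding on a matrix, as a 2-D stand-in for
# the paper's tensor t-SVD version.
import numpy as np

def svt(block: np.ndarray, tau: float) -> np.ndarray:
    """Shrink singular values by tau; small ones vanish, leaving a low rank part."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

X = np.random.default_rng(2).normal(size=(64, 64))
b = 16                                         # block side length (assumed)
low_rank = np.zeros_like(X)
for i in range(0, 64, b):
    for j in range(0, 64, b):
        low_rank[i:i+b, j:j+b] = svt(X[i:i+b, j:j+b], tau=2.0)
```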

Understanding Kernel Size in Blind Deconvolution

Title Understanding Kernel Size in Blind Deconvolution
Authors Li Si-Yao, Dongwei Ren, Qian Yin
Abstract Most blind deconvolution methods pre-define a large kernel size to guarantee the support domain. Blur kernel estimation error is likely to be introduced, yielding severe artifacts in deblurring results. In this paper, we first analyze, theoretically and experimentally, the mechanism behind estimation error with oversized kernels, and show that it holds even on blurry images without noise. Then, to suppress this adverse effect, we propose a low rank-based regularization on the blur kernel to exploit the structural information in degraded kernels, by which the larger-kernel effect can be effectively suppressed, and we propose an efficient optimization algorithm to solve it. Experimental results on benchmark datasets show that the proposed method is comparable with the state of the art when a proper kernel size is set, and performs much better, quantitatively and qualitatively, in handling larger kernel sizes. Deblurring results on real-world blurry images further validate the effectiveness of the proposed method.
Tasks Deblurring
Published 2017-06-06
URL http://arxiv.org/abs/1706.01797v5
PDF http://arxiv.org/pdf/1706.01797v5.pdf
PWC https://paperswithcode.com/paper/understanding-kernel-size-in-blind
Repo
Framework
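
One way to picture a low rank prior on a blur kernel is an SVD projection step, sketched below. The rank, kernel size, and the projection-plus-renormalization scheme are illustrative assumptions, not the paper's exact regularizer or optimization algorithm.

```python
# Hypothetical low-rank projection of an oversized blur-kernel estimate.
import numpy as np

def low_rank_project(kernel: np.ndarray, rank: int = 2) -> np.ndarray:
    U, s, Vt = np.linalg.svd(kernel)
    k = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # keep the top singular directions
    k = np.clip(k, 0, None)                    # blur kernels are non-negative
    return k / k.sum()                         # and sum to one

oversized = np.random.default_rng(3).random((31, 31))  # noisy oversized estimate
regularized = low_rank_project(oversized / oversized.sum())
```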

Visual Forecasting by Imitating Dynamics in Natural Sequences

Title Visual Forecasting by Imitating Dynamics in Natural Sequences
Authors Kuo-Hao Zeng, William B. Shen, De-An Huang, Min Sun, Juan Carlos Niebles
Abstract We introduce a general framework for visual forecasting, which directly imitates visual sequences without additional supervision. As a result, our model can be applied at several semantic levels and does not require any domain knowledge or handcrafted features. We achieve this by formulating visual forecasting as an inverse reinforcement learning (IRL) problem, and directly imitate the dynamics in natural sequences from their raw pixel values. The key challenge is the high-dimensional and continuous state-action space that prohibits the application of previous IRL algorithms. We address this computational bottleneck by extending recent progress in model-free imitation with trainable deep feature representations, which (1) bypasses the exhaustive state-action pair visits in dynamic programming by using a dual formulation and (2) avoids explicit state sampling at gradient computation using a deep feature reparametrization. This allows us to apply IRL at scale and directly imitate the dynamics in high-dimensional continuous visual sequences from the raw pixel values. We evaluate our approach at three different levels of abstraction, from low-level pixels to higher-level semantics: future frame generation, action anticipation, and visual story forecasting. At all levels, our approach outperforms existing methods.
Tasks
Published 2017-08-19
URL http://arxiv.org/abs/1708.05827v1
PDF http://arxiv.org/pdf/1708.05827v1.pdf
PWC https://paperswithcode.com/paper/visual-forecasting-by-imitating-dynamics-in
Repo
Framework
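
A heavily simplified, GAIL-style imitation step conveys the flavor of model-free imitation with deep features (it does not reproduce the paper's dual formulation or feature reparametrization): a discriminator scores expert versus generated transitions, and the generator is updated to fool it. All shapes and networks below are placeholders.

```python
# Adversarial imitation sketch; everything here is a stand-in.
import torch
import torch.nn as nn

feat_dim = 128
disc = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
gen = nn.Sequential(nn.Linear(feat_dim, feat_dim))      # stand-in forecaster
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

expert = torch.randn(32, feat_dim)              # features of real next frames
context = torch.randn(32, feat_dim)             # features of current frames

fake = gen(context)                             # forecasted next-frame features
d_loss = (bce(disc(expert), torch.ones(32, 1))
          + bce(disc(fake.detach()), torch.zeros(32, 1)))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

g_loss = bce(disc(fake), torch.ones(32, 1))     # generator tries to look "expert"
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```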

Automatically Redundant Features Removal for Unsupervised Feature Selection via Sparse Feature Graph

Title Automatically Redundant Features Removal for Unsupervised Feature Selection via Sparse Feature Graph
Authors Shuchu Han, Hao Huang, Hong Qin
Abstract Redundant features in high dimensional datasets affect the performance of learning and mining algorithms, and how to detect and remove them is an important research topic in machine learning and data mining. In this paper, we propose a graph-based approach to find and remove redundant features automatically in high dimensional data. Building on the sparse-learning-based unsupervised feature selection framework, the Sparse Feature Graph (SFG) is introduced not only to model the redundancy between two features, but also to disclose the group redundancy between two groups of features. With SFG, we can divide the whole feature set into different groups, and improve the intrinsic structure of the data by removing detected redundant features. With an accurate data structure, quality indicator vectors can be obtained to improve the learning performance of existing unsupervised feature selection algorithms such as multi-cluster feature selection (MCFS). Our experimental results on benchmark datasets show that the proposed SFG and feature redundancy removal algorithm can consistently improve the performance of unsupervised feature selection algorithms.
Tasks Feature Selection, Sparse Learning
Published 2017-05-13
URL http://arxiv.org/abs/1705.04804v2
PDF http://arxiv.org/pdf/1705.04804v2.pdf
PWC https://paperswithcode.com/paper/automatically-redundant-features-removal-for
Repo
Framework
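
A hedged sketch of graph-based redundancy removal: connect features whose correlation exceeds a threshold and keep one representative per connected group. The paper builds its Sparse Feature Graph from sparse coding rather than plain correlation; the threshold and toy data here are assumptions.

```python
# Redundancy removal via a feature graph; correlation stands in for the
# paper's sparse-coding-based edges.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 6))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=300)   # feature 3 duplicates feature 0

C = np.abs(np.corrcoef(X, rowvar=False))
parent = list(range(X.shape[1]))                   # union-find over features

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i in range(X.shape[1]):                        # link features with |r| > 0.95
    for j in range(i + 1, X.shape[1]):
        if C[i, j] > 0.95:
            parent[find(j)] = find(i)

keep = sorted({find(i) for i in range(X.shape[1])})
print("kept features:", keep)                      # feature 3 is dropped
```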

Correlation and Class Based Block Formation for Improved Structured Dictionary Learning

Title Correlation and Class Based Block Formation for Improved Structured Dictionary Learning
Authors Nagendra Kumar, Rohit Sinha
Abstract In recent years, the creation of block-structured dictionaries has attracted a lot of interest. Learning such dictionaries involves a two-step process: block formation and dictionary update. Both steps are important in producing an effective dictionary. Existing works mostly assume that the block structure is known a priori while learning the dictionary. For finding the unknown block structure given a dictionary, sparse agglomerative clustering (SAC) is commonly used; it groups atoms based on their consistency in sparse coding with respect to the unstructured dictionary. This paper explores two innovations towards improving the reconstruction as well as the classification ability achieved with a block-structured dictionary. First, we propose a novel block structuring approach that makes use of the correlation among dictionary atoms. Unlike the SAC approach, which groups diverse atoms, the proposed approach forms blocks by grouping the most correlated atoms in the dictionary. The proposed block clustering approach is noted to yield significant reductions in redundancy as well as to provide direct control on the block size when compared with the existing SAC-based block structuring. Later, motivated by works using a supervised a priori known block structure, we also explore the incorporation of class information in the proposed block formation approach to further enhance the classification ability of the block dictionary. The reconstruction ability of the proposed innovations is assessed on synthetic data, while the classification ability is evaluated on a large-variability speaker verification task.
Tasks Dictionary Learning, Speaker Verification
Published 2017-08-04
URL http://arxiv.org/abs/1708.01448v2
PDF http://arxiv.org/pdf/1708.01448v2.pdf
PWC https://paperswithcode.com/paper/correlation-and-class-based-block-formation
Repo
Framework
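
The described block formation can be sketched directly: repeatedly seed a block with an unassigned atom and fill it with its most correlated remaining atoms, giving direct control over the block size. Dictionary dimensions and block size below are made up.

```python
# Correlation-based block formation over unit-norm dictionary atoms.
import numpy as np

rng = np.random.default_rng(5)
D = rng.normal(size=(32, 12))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
G = np.abs(D.T @ D)                      # atom-to-atom correlation (coherence)

block_size, unassigned, blocks = 3, set(range(12)), []
while unassigned:
    seed = unassigned.pop()
    peers = sorted(unassigned, key=lambda j: -G[seed, j])[:block_size - 1]
    blocks.append([seed] + peers)
    unassigned -= set(peers)
print(blocks)                            # each block groups highly correlated atoms
```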

Online Article Ranking as a Constrained, Dynamic, Multi-Objective Optimization Problem

Title Online Article Ranking as a Constrained, Dynamic, Multi-Objective Optimization Problem
Authors Jeya Balaji Balasubramanian, Akshay Soni, Yashar Mehdad, Nikolay Laptev
Abstract The content ranking problem in a social news website is typically treated as maximizing a scalar metric of interest like dwell-time. However, as in most real-world applications, we are interested in more than one metric (for instance, simultaneously maximizing click-through rate, monetization metrics, and dwell-time) and must also satisfy the traffic requirements promised to different publishers. All this needs to be done on online data and in settings where the objective function and the constraints can change dynamically; this could happen if, for instance, new publishers are added, some contracts are adjusted, or some contracts end. In this paper, we formulate this problem as a constrained, dynamic, multi-objective optimization problem. We propose a novel framework that extends a successful genetic optimization algorithm, NSGA-II, to solve this online, data-driven problem. We design the modules of NSGA-II to suit our problem. We evaluate optimization performance using Hypervolume and introduce a confidence interval metric for assessing the practicality of a solution. We demonstrate the application of this framework on a real-world article ranking problem. We observe considerable improvements in both time and performance over a brute-force baseline technique that is currently in production.
Tasks
Published 2017-05-16
URL http://arxiv.org/abs/1705.05765v1
PDF http://arxiv.org/pdf/1705.05765v1.pdf
PWC https://paperswithcode.com/paper/online-article-ranking-as-a-constrained
Repo
Framework
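
At the core of NSGA-II is Pareto dominance over multiple objectives. The sketch below shows a dominance check and a non-dominated filter on made-up candidate rankings scored by two maximized objectives (say, click-through rate and dwell time); the full algorithm adds non-dominated sorting into fronts, crowding distance, selection, and variation operators.

```python
# Pareto dominance and a non-dominated front, the NSGA-II building block.
def dominates(a, b):
    """a dominates b iff a is >= on every objective and > on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# (click-through rate, dwell time) for three hypothetical rankings
candidates = {"r1": (0.12, 30.0), "r2": (0.10, 45.0), "r3": (0.09, 29.0)}
front = [n for n, s in candidates.items()
         if not any(dominates(t, s) for m, t in candidates.items() if m != n)]
print(front)                             # -> ['r1', 'r2']; r3 is dominated by r1
```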

PowerAI DDL

Title PowerAI DDL
Authors Minsik Cho, Ulrich Finkler, Sameer Kumar, David Kung, Vaibhav Saxena, Dheeraj Sreedhar
Abstract As deep neural networks become more complex and input datasets grow larger, it can take days or even weeks to train a deep neural network to the desired accuracy. Therefore, distributed Deep Learning at a massive scale is a critical capability, since it offers the potential to reduce the training time from weeks to hours. In this paper, we present a software-hardware co-optimized distributed Deep Learning system that can achieve near-linear scaling up to hundreds of GPUs. The core algorithm is a multi-ring communication pattern that provides a good tradeoff between latency and bandwidth and adapts to a variety of system configurations. The communication algorithm is implemented as a library for easy use, and has been integrated into Tensorflow, Caffe, and Torch. We train Resnet-101 on Imagenet 22K with 64 IBM Power8 S822LC servers (256 GPUs) in about 7 hours to a validation accuracy of 33.8%. Microsoft’s ADAM and Google’s DistBelief results did not reach 30% validation accuracy for Imagenet 22K. Compared to Facebook AI Research’s recent paper on 256-GPU training, we use a different communication algorithm, and our combined software and hardware system offers better communication overhead for Resnet-50. A PowerAI DDL enabled version of Torch completed 90 epochs of training on Resnet-50 for 1K classes in 50 minutes using 64 IBM Power8 S822LC servers (256 GPUs).
Tasks
Published 2017-08-07
URL http://arxiv.org/abs/1708.02188v1
PDF http://arxiv.org/pdf/1708.02188v1.pdf
PWC https://paperswithcode.com/paper/powerai-ddl
Repo
Framework
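
PowerAI DDL's core is a multi-ring communication pattern; the classic single-ring all-reduce it generalizes can be simulated in one process, as below. This toy omits the multi-ring decomposition and topology awareness the paper describes.

```python
# Toy single-process simulation of a ring all-reduce: a reduce-scatter phase
# followed by an all-gather phase, chunk by chunk around the ring.
import numpy as np

def ring_allreduce(grads):
    """Sum equal-length gradient vectors the way a ring of workers would."""
    n = len(grads)
    chunks = [np.array_split(g.copy(), n) for g in grads]
    for step in range(n - 1):                    # reduce-scatter phase
        for rank in range(n):
            c = (rank - step - 1) % n
            chunks[(rank + 1) % n][c] += chunks[rank][c]
    for step in range(n - 1):                    # all-gather phase
        for rank in range(n):
            c = (rank - step) % n
            chunks[(rank + 1) % n][c] = chunks[rank][c].copy()
    return [np.concatenate(ch) for ch in chunks]

grads = [np.full(8, float(r)) for r in range(4)]   # ranks 0..3
print(ring_allreduce(grads)[0])                    # every rank ends with the sum (6.0)
```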