January 31, 2020

3071 words 15 mins read

Paper Group ANR 14

Distinguishing between Normal and Cancer Cells Using Autoencoder Node Saliency. ISL: Optimal Policy Learning With Optimal Exploration-Exploitation Trade-Off. Visualization of Very Large High-Dimensional Data Sets as Minimum Spanning Trees. Adaptive Genomic Evolution of Neural Network Topologies (AGENT) for State-to-Action Mapping in Autonomous Agents, and more.

Distinguishing between Normal and Cancer Cells Using Autoencoder Node Saliency

Title Distinguishing between Normal and Cancer Cells Using Autoencoder Node Saliency
Authors Ya Ju Fan, Jonathan E. Allen, Sam Ade Jacobs, Brian C. Van Essen
Abstract Gene expression profiles have been widely used to characterize patterns of cellular responses to diseases. As more data become available, scalable learning toolkits become essential for processing large datasets with deep learning models of complex biological processes. We present an autoencoder to capture nonlinear relationships recovered from gene expression profiles. The autoencoder is a nonlinear dimension reduction technique using an artificial neural network, which learns hidden representations of unlabeled data. We train the autoencoder on a large collection of tumor samples from the National Cancer Institute Genomic Data Commons, and obtain a generalized and unsupervised latent representation. We leverage an HPC-focused deep learning toolkit, Livermore Big Artificial Neural Network (LBANN), to efficiently parallelize the training algorithm, reducing computation times from several hours to a few minutes. With the trained autoencoder, we generate latent representations of a small dataset containing pairs of normal and cancer cells of various tumor types. A novel measure called autoencoder node saliency (ANS) is introduced to identify the hidden nodes that best differentiate various pairs of cells. We compare our findings on the best classifying nodes with principal component analysis and t-distributed stochastic neighbor embedding visualizations. We demonstrate that the autoencoder effectively extracts distinct gene features for multiple learning tasks in the dataset.
Tasks Dimensionality Reduction
Published 2019-01-30
URL http://arxiv.org/abs/1901.11152v1
PDF http://arxiv.org/pdf/1901.11152v1.pdf
PWC https://paperswithcode.com/paper/distinguishing-between-normal-and-cancer
Repo
Framework
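
The saliency idea can be sketched in a few lines: given latent activations from a trained encoder and binary labels, score each hidden node by how well its normalized, binned activations separate the two classes. The information-gain score and `node_saliency` helper below are simplified stand-ins for the paper's histogram-based ANS measure, and the random data replaces the GDC expression profiles; none of this is LBANN code.

```python
import numpy as np

# Score each latent node by the information gain its binned activations
# provide about a binary label -- a simplified stand-in for ANS.
def entropy(p):
    return -(p * np.log2(p + 1e-12) + (1 - p) * np.log2(1 - p + 1e-12))

def node_saliency(Z, y, bins=10):
    """Z: (samples, latent nodes) activations; y: binary labels."""
    scores = np.zeros(Z.shape[1])
    h_prior = entropy(y.mean())
    for j in range(Z.shape[1]):
        a = Z[:, j]
        a = (a - a.min()) / (a.max() - a.min() + 1e-12)  # scale to [0, 1]
        idx = np.minimum((a * bins).astype(int), bins - 1)
        h_cond = sum(
            (idx == b).mean() * entropy(y[idx == b].mean())
            for b in range(bins) if (idx == b).any()
        )
        scores[j] = h_prior - h_cond                     # information gain
    return scores

# toy demo: node 0 separates the two classes, node 1 is pure noise
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
Z = rng.normal(size=(500, 2))
Z[:, 0] += 2.0 * y
print(node_saliency(Z, y))   # node 0 scores much higher than node 1
```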

ISL: Optimal Policy Learning With Optimal Exploration-Exploitation Trade-Off

Title ISL: Optimal Policy Learning With Optimal Exploration-Exploitation Trade-Off
Authors Lucas Cassano, Ali H. Sayed
Abstract Maximum entropy reinforcement learning (RL) has received considerable attention recently. Some of the algorithms within this framework exhibit state-of-the-art performance in many challenging tasks. These algorithms exhibit improved exploration; however, they are still inefficient at performing deep exploration. The contribution of this paper is the introduction of a new kind of soft RL algorithm (referred to as the ISL strategy) that is efficient at performing deep exploration. Similarly to maximum entropy RL, we achieve this objective by augmenting the traditional RL objective with a novel regularization term. A distinctive feature of our approach is that, as opposed to other works that tackle the problem of deep exploration, in our derivation both the learning equations and the exploration-exploitation strategy are derived in tandem as the solution to a well-posed optimization problem whose minimization leads to the optimal value function. Empirically, we show that our method exhibits state-of-the-art performance on a range of challenging deep-exploration benchmarks.
Tasks Q-Learning
Published 2019-09-13
URL https://arxiv.org/abs/1909.06293v3
PDF https://arxiv.org/pdf/1909.06293v3.pdf
PWC https://paperswithcode.com/paper/isl-optimal-policy-learning-with-optimal
Repo
Framework
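
The ISL derivation itself is not reproduced here, but the soft-RL family it belongs to is easy to sketch: a tabular Q-learner whose bootstrap target uses the entropy-regularized ("soft") value tau * log-sum-exp(Q / tau) and whose behaviour policy is the corresponding softmax. The chain MDP and all constants below are toy assumptions; ISL additionally derives the exploration strategy from the same optimization problem.

```python
import numpy as np

# Tabular soft Q-learning on a short chain MDP: the bootstrap target is
# the entropy-regularized value tau * log-sum-exp(Q / tau) and actions
# are drawn from the softmax policy.
n_states, n_actions = 6, 2
tau, gamma, alpha = 0.5, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(1)

def step(s, a):                                  # a == 1 moves right
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1)         # reward only at the end

s = 0
for _ in range(20000):
    logits = Q[s] / tau
    p = np.exp(logits - logits.max()); p /= p.sum()     # soft policy
    a = rng.choice(n_actions, p=p)
    s2, r = step(s, a)
    v_soft = tau * np.log(np.exp(Q[s2] / tau).sum())    # soft value of s2
    Q[s, a] += alpha * (r + gamma * v_soft - Q[s, a])
    s = 0 if s2 == n_states - 1 else s2                 # episode reset

print(np.argmax(Q, axis=1))   # non-terminal states prefer action 1 (right)
```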

Visualization of Very Large High-Dimensional Data Sets as Minimum Spanning Trees

Title Visualization of Very Large High-Dimensional Data Sets as Minimum Spanning Trees
Authors Daniel Probst, Jean-Louis Reymond
Abstract The chemical sciences are producing an unprecedented amount of large, high-dimensional data sets containing chemical structures and associated properties. However, there are currently no algorithms to visualize such data while preserving both global and local features with a sufficient level of detail to allow for human inspection and interpretation. Here, we propose a solution to this problem with a new data visualization method, TMAP, capable of representing data sets of up to millions of data points and arbitrarily high dimensionality as a two-dimensional tree (http://tmap.gdb.tools). Visualizations based on TMAP are better suited than t-SNE or UMAP for the exploration and interpretation of large data sets due to their tree-like nature, increased local and global neighborhood and structure preservation, and the transparency of the methods the algorithm is based on. We apply TMAP to widely used chemistry data sets, including databases of molecules such as ChEMBL, FDB17, the Natural Products Atlas, and DSSTox, as well as to the MoleculeNet benchmark collection of data sets. We also show its broad applicability with further examples from biology, particle physics, and literature.
Tasks
Published 2019-08-16
URL https://arxiv.org/abs/1908.10410v3
PDF https://arxiv.org/pdf/1908.10410v3.pdf
PWC https://paperswithcode.com/paper/visualization-of-very-large-high-dimensional
Repo
Framework
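
The tree construction at the heart of TMAP can be approximated in a few lines: build a k-nearest-neighbour graph and keep only its minimum spanning tree, which is what gets laid out in two dimensions. The exact k-NN and random data below are illustrative assumptions; the real implementation (http://tmap.gdb.tools) uses locality-sensitive hashing to scale to millions of points.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.neighbors import kneighbors_graph

# TMAP's core construction, approximated: a k-NN graph pruned down to
# its minimum spanning tree, which is then laid out in 2D as a tree.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                 # high-dimensional points

knn = kneighbors_graph(X, n_neighbors=10, mode="distance")
mst = minimum_spanning_tree(knn)               # sparse tree (or forest)
rows, cols = mst.nonzero()
print(f"{len(rows)} tree edges connect {X.shape[0]} points")
# A tree layout (TMAP uses a spring-based layout from the OGDF library)
# then gives the final 2D visualization.
```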

Adaptive Genomic Evolution of Neural Network Topologies (AGENT) for State-to-Action Mapping in Autonomous Agents

Title Adaptive Genomic Evolution of Neural Network Topologies (AGENT) for State-to-Action Mapping in Autonomous Agents
Authors Amir Behjat, Sharat Chidambaran, Souma Chowdhury
Abstract Neuroevolution is a process of training neural networks (NN) through an evolutionary algorithm, usually to serve as a state-to-action mapping model in control or reinforcement learning-type problems. This paper builds on the NeuroEvolution of Augmenting Topologies (NEAT) formalism that allows designing topology- and weight-evolving NNs. Fundamental advancements are made to the neuroevolution process to address premature stagnation and convergence issues, central among which is the incorporation of automated mechanisms to control the population diversity and average fitness improvement within the neuroevolution process. Insights into the performance and efficiency of the new algorithm are obtained by evaluating it on three benchmark problems from the OpenAI platform and an Unmanned Aerial Vehicle (UAV) collision avoidance problem.
Tasks
Published 2019-03-17
URL http://arxiv.org/abs/1903.07107v1
PDF http://arxiv.org/pdf/1903.07107v1.pdf
PWC https://paperswithcode.com/paper/adaptive-genomic-evolution-of-neural-network
Repo
Framework
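
AGENT's NEAT-style genome encoding is out of scope for a short sketch, but its central control idea, adapting evolutionary pressure from observed population statistics, can be illustrated with a toy real-valued evolution strategy that raises its mutation scale when the best fitness stagnates. Everything here (the objective, population size, and 0.8/1.2 adaptation factors) is an illustrative assumption, not the authors' controller.

```python
import numpy as np

# Toy (mu, lambda) evolution strategy that raises its mutation scale
# when the best fitness stagnates and lowers it while progress is made,
# a much-simplified analogue of AGENT's automated diversity and
# fitness-improvement controllers.
rng = np.random.default_rng(0)

def fitness(w):                       # stand-in objective, optimum at w = 1
    return -np.sum((w - 1.0) ** 2)

pop, sigma, last_best = rng.normal(size=(20, 8)), 0.5, -np.inf
for gen in range(200):
    fits = np.array([fitness(w) for w in pop])
    improved = fits.max() > last_best + 1e-6
    sigma = float(np.clip(sigma * (0.8 if improved else 1.2), 0.01, 1.0))
    last_best = max(last_best, fits.max())
    parents = pop[np.argsort(fits)[-5:]]              # keep the best 5
    pop = np.repeat(parents, 4, axis=0) + sigma * rng.normal(size=(20, 8))

print(round(last_best, 3))                            # climbs toward 0
```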

Single Image Super-Resolution via CNN Architectures and TV-TV Minimization

Title Single Image Super-Resolution via CNN Architectures and TV-TV Minimization
Authors Marija Vella, João F. C. Mota
Abstract Super-resolution (SR) is a technique for increasing the resolution of a given image. Because SR has applications in many areas, from medical imaging to consumer electronics, several SR methods have been proposed. Currently, the best performing methods are based on convolutional neural networks (CNNs) and require extensive datasets for training. However, at test time, they fail to impose consistency between the super-resolved image and the given low-resolution image, a property that classic reconstruction-based algorithms naturally enforce in spite of having poorer performance. Motivated by this observation, we propose a new framework that joins both approaches and produces images of higher quality than either of the prior methods. Although our framework requires additional computation, our experiments on Set5, Set14, and BSD100 show that it systematically produces images with better peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) than the current state-of-the-art CNN architectures for SR.
Tasks Image Super-Resolution, Super-Resolution
Published 2019-07-11
URL https://arxiv.org/abs/1907.05380v2
PDF https://arxiv.org/pdf/1907.05380v2.pdf
PWC https://paperswithcode.com/paper/single-image-super-resolution-via-cnn
Repo
Framework
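
The consistency property the paper restores can be shown concretely: at test time, the CNN output w is corrected so that downsampling it reproduces the given low-resolution image b exactly. The sketch below implements only this hard constraint, as an exact projection for an assumed block-mean downsampling operator A; the full method additionally minimizes TV(x) + beta * TV(x - w) over the constraint set, which is not shown here.

```python
import numpy as np

# Enforce the consistency constraint {x : downsample(x) = b} on a
# stand-in "CNN output" w, for A = s x s block-mean downsampling.
def downsample(x, s):
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def project_consistent(w_img, b, s):
    # exact projection onto {x : downsample(x) = b} for block-mean A
    residual = b - downsample(w_img, s)
    return w_img + np.kron(residual, np.ones((s, s)))

rng = np.random.default_rng(0)
hi = rng.random((8, 8))
b = downsample(hi, 2)                          # observed low-res image
w_img = hi + 0.1 * rng.normal(size=hi.shape)   # stand-in CNN output
x = project_consistent(w_img, b, 2)
print(np.abs(downsample(x, 2) - b).max())      # ~0: constraint enforced
```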

Towards Optimal and Efficient Best Arm Identification in Linear Bandits

Title Towards Optimal and Efficient Best Arm Identification in Linear Bandits
Authors Mohammadi Zaki, Avinash Mohan, Aditya Gopalan
Abstract We give a new algorithm for best arm identification in linearly parameterised bandits in the fixed confidence setting. The algorithm generalises the well-known LUCB algorithm of Kalyanakrishnan et al. (2012) by playing an arm which minimises a suitable notion of geometric overlap of the statistical confidence set for the unknown parameter, and is fully adaptive and computationally efficient compared to several state-of-the-art methods. We theoretically analyse the sample complexity of the algorithm for problems with two and three arms, showing optimality in many cases. Numerical results indicate favourable performance compared with other algorithms.
Tasks
Published 2019-11-05
URL https://arxiv.org/abs/1911.01695v2
PDF https://arxiv.org/pdf/1911.01695v2.pdf
PWC https://paperswithcode.com/paper/towards-optimal-and-efficient-best-arm
Repo
Framework
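
A generic fixed-confidence loop for linearly parameterised bandits makes the setting concrete: maintain the regularised least-squares estimate of the parameter, compute ellipsoidal confidence widths per arm, and stop when one arm's lower confidence bound beats every other arm's upper bound. The round-robin sampling and the beta schedule below are simplifying assumptions; the paper's contribution is precisely a smarter, geometry-aware arm-selection rule.

```python
import numpy as np

# Fixed-confidence best-arm identification with a linear reward model:
# theta is estimated by least squares, and per-arm confidence widths
# come from the inverse design matrix V.
rng = np.random.default_rng(0)
arms = np.array([[1.0, 0.0], [0.0, 1.0], [0.8, 0.6]])
theta_true = np.array([0.4, 0.9])
delta, sigma = 0.05, 0.1

V = 1e-3 * np.eye(2)             # regularised design matrix
s_vec = np.zeros(2)
for t in range(1, 100001):
    a = arms[t % len(arms)]      # round-robin sampling (paper: adaptive)
    r = a @ theta_true + sigma * rng.normal()
    V += np.outer(a, a); s_vec += r * a
    theta = np.linalg.solve(V, s_vec)
    beta = sigma * np.sqrt(2 * np.log(len(arms) * t**2 / delta))
    width = beta * np.sqrt(np.einsum("ij,jk,ik->i", arms, np.linalg.inv(V), arms))
    mean = arms @ theta
    best = int(mean.argmax())
    others = np.delete(np.arange(len(arms)), best)
    if mean[best] - width[best] > (mean[others] + width[others]).max():
        print("best arm:", best, "after", t, "pulls"); break
```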

Hybrid Residual Attention Network for Single Image Super Resolution

Title Hybrid Residual Attention Network for Single Image Super Resolution
Authors Abdul Muqeet, Md Tauhid Bin Iqbal, Sung-Ho Bae
Abstract The extraction and proper utilization of convolutional neural network (CNN) features have a significant impact on the performance of image super-resolution (SR). Although CNN features contain both spatial and channel information, current deep SR techniques often fail to maximize performance because they use either the spatial or the channel information. Moreover, they integrate such information within a deep or wide network rather than exploiting all the available features, eventually resulting in high computational complexity. To address these issues, we present a binarized feature fusion (BFF) structure that utilizes the extracted features from residual groups (RG) in an effective way. Each residual group consists of multiple hybrid residual attention blocks (HRAB) that effectively integrate a multiscale feature extraction module and a channel attention mechanism in a single block. Furthermore, we use dilated convolutions with different dilation factors to extract multiscale features. We also adopt global, short, and long skip connections and a residual group structure to ease the flow of information without losing important feature details. We call this overall network architecture the hybrid residual attention network (HRAN). In experiments, we observe the efficacy of our method against state-of-the-art methods in both quantitative and qualitative comparisons.
Tasks Image Super-Resolution, Super-Resolution
Published 2019-07-11
URL https://arxiv.org/abs/1907.05514v1
PDF https://arxiv.org/pdf/1907.05514v1.pdf
PWC https://paperswithcode.com/paper/hybrid-residual-attention-network-for-single
Repo
Framework
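
One HRAB, as described in the abstract, combines multiscale dilated convolutions with channel attention inside a residual connection. A PyTorch sketch under assumed layer sizes follows; the fusion and dimensions are illustrative guesses, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

# Sketch of a hybrid residual attention block: multiscale features from
# dilated convolutions, a squeeze-and-excitation-style channel gate,
# and a short skip connection.
class HRAB(nn.Module):
    def __init__(self, ch=64, reduction=16):
        super().__init__()
        # multiscale branch: same kernel size, different dilation factors
        self.conv_d1 = nn.Conv2d(ch, ch, 3, padding=1, dilation=1)
        self.conv_d2 = nn.Conv2d(ch, ch, 3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        # channel attention: global pool -> bottleneck MLP -> sigmoid gate
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat([self.act(self.conv_d1(x)),
                           self.act(self.conv_d2(x))], dim=1)
        y = self.fuse(feats)
        return x + y * self.att(y)     # short skip connection

x = torch.randn(1, 64, 32, 32)
print(HRAB()(x).shape)  # torch.Size([1, 64, 32, 32])
```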

Distilling with Residual Network for Single Image Super Resolution

Title Distilling with Residual Network for Single Image Super Resolution
Authors Xiaopeng Sun, Wen Lu, Rui Wang, Furui Bai
Abstract Recently, deep convolutional neural networks (CNN) have made remarkable progress in single image super-resolution (SISR). However, blindly using residual and dense structures to extract features from LR images can cause the network to become bloated and difficult to train. To address these problems, we propose a simple and efficient distilling-with-residual network (DRN) for SISR. In detail, we propose a residual distilling block (RDB) containing two branches: one branch performs a residual operation while the other distills effective information. To further improve efficiency, we design a residual distilling group (RDG) by stacking several RDBs and one long skip connection, which can effectively extract local features and fuse them with global features. These efficient features contribute beneficially to image reconstruction. Experiments on benchmark datasets demonstrate that our DRN is superior to state-of-the-art methods and, specifically, achieves a better trade-off between performance and model size.
Tasks Image Reconstruction, Image Super-Resolution, Super-Resolution
Published 2019-07-05
URL https://arxiv.org/abs/1907.02843v1
PDF https://arxiv.org/pdf/1907.02843v1.pdf
PWC https://paperswithcode.com/paper/distilling-with-residual-network-for-single
Repo
Framework
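
A plausible reading of the two-branch RDB in PyTorch: a cheap 1x1 convolution distills part of the features while a residual branch refines the rest, and the outputs are concatenated. The channel split and layer choices below are guesses from the abstract, not the authors' configuration.

```python
import torch
import torch.nn as nn

# Guessed residual distilling block: a distillation branch keeps a
# cheap summary, a residual branch refines the remaining channels.
class RDB(nn.Module):
    def __init__(self, ch=64, distill_ratio=0.25):
        super().__init__()
        self.d_ch = int(ch * distill_ratio)       # distilled channels
        self.r_ch = ch - self.d_ch                # channels refined further
        self.distill = nn.Conv2d(ch, self.d_ch, 1)
        self.residual = nn.Sequential(
            nn.Conv2d(ch, self.r_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(self.r_ch, self.r_ch, 3, padding=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        distilled = self.act(self.distill(x))          # distillation branch
        refined = self.residual(x) + x[:, self.d_ch:]  # residual operation
        return torch.cat([distilled, refined], dim=1)

x = torch.randn(1, 64, 24, 24)
print(RDB()(x).shape)  # torch.Size([1, 64, 24, 24])
```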

Radio Galaxy Zoo: Unsupervised Clustering of Convolutionally Auto-encoded Radio-astronomical Images

Title Radio Galaxy Zoo: Unsupervised Clustering of Convolutionally Auto-encoded Radio-astronomical Images
Authors Nicholas O. Ralph, Ray P. Norris, Gu Fang, Laurence A. F. Park, Timothy J. Galvin, Matthew J. Alger, Heinz Andernach, Chris Lintott, Lawrence Rudnick, Stanislav Shabala, O. Ivy Wong
Abstract This paper demonstrates a novel and efficient unsupervised clustering method combining a Self-Organising Map (SOM) with a convolutional autoencoder. The rapidly increasing volume of radio-astronomical data has increased demand for machine learning methods as solutions to classification and outlier detection. Major astronomical discoveries are unplanned and found in the unexpected, making unsupervised machine learning highly desirable because it operates without assumptions or labelled training data. Our approach shows that SOM training time is drastically reduced and that high-level features can be clustered by training on auto-encoded feature vectors instead of raw images. Our results demonstrate that this method is capable of accurately separating outliers on a SOM using neighbourhood similarity and K-means clustering of radio-astronomical feature complexity. We present this method as a powerful new approach to data exploration that provides a detailed understanding of the morphology and relationships of Radio Galaxy Zoo (RGZ) dataset image features, and that can be applied to new radio survey data.
Tasks Outlier Detection
Published 2019-06-07
URL https://arxiv.org/abs/1906.02864v1
PDF https://arxiv.org/pdf/1906.02864v1.pdf
PWC https://paperswithcode.com/paper/radio-galaxy-zoo-unsupervised-clustering-of
Repo
Framework
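
The pipeline in miniature: encode images into compact feature vectors, then train a SOM on the codes rather than the raw pixels. The random-projection "encoder" below is a stand-in for the paper's convolutional autoencoder, and the 5x5 map is far smaller than anything used on real radio surveys.

```python
import numpy as np

# Train a minimal SOM on (stand-in) auto-encoded feature vectors
# instead of raw images, the speed-up the paper exploits.
rng = np.random.default_rng(0)
images = rng.random((500, 64 * 64))
W_enc = rng.normal(size=(64 * 64, 16)) / 64.0
codes = np.tanh(images @ W_enc)                  # "auto-encoded" features

grid = rng.random((5, 5, 16))                    # 5x5 map of 16-d prototypes
gy, gx = np.mgrid[0:5, 0:5]
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)
    radius = 2.0 * (1 - epoch / 20) + 0.5
    for v in codes:
        d = np.linalg.norm(grid - v, axis=2)
        by, bx = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * radius**2))
        grid += lr * h[:, :, None] * (v - grid)

print("trained SOM prototypes:", grid.shape)
# Outliers are then points far from their best-matching unit; K-means on
# the prototypes gives higher-level clusters.
```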

An Online Reinforcement Learning Approach to Quality-Cost-Aware Task Allocation for Multi-Attribute Social Sensing

Title An Online Reinforcement Learning Approach to Quality-Cost-Aware Task Allocation for Multi-Attribute Social Sensing
Authors Yang Zhang, Daniel Zhang, Nathan Vance, Dong Wang
Abstract Social sensing has emerged as a new sensing paradigm where humans (or devices on their behalf) collectively report measurements about the physical world. This paper focuses on a quality-cost-aware task allocation problem in multi-attribute social sensing applications. The goal is to identify a task allocation strategy (i.e., decide when and where to collect sensing data) to achieve an optimized tradeoff between the data quality and the sensing cost. While recent progress has been made to tackle similar problems, three important challenges have not been well addressed: (i) “online task allocation”: the task allocation schemes need to respond quickly to the potentially large dynamics of the measured variables in social sensing; (ii) “multi-attribute constrained optimization”: minimizing the overall sensing error given the dependencies and constraints of multiple attributes of the measured variables is a non-trivial problem to solve; (iii) “nonuniform task allocation cost”: the task allocation cost in social sensing often has a nonuniform distribution which adds additional complexity to the optimized task allocation problem. This paper develops a Quality-Cost-Aware Online Task Allocation (QCO-TA) scheme to address the above challenges using a principled online reinforcement learning framework. We evaluate the QCO-TA scheme through a real-world social sensing application and the results show that our scheme significantly outperforms the state-of-the-art baselines in terms of both sensing accuracy and cost.
Tasks
Published 2019-09-11
URL https://arxiv.org/abs/1909.05388v1
PDF https://arxiv.org/pdf/1909.05388v1.pdf
PWC https://paperswithcode.com/paper/an-online-reinforcement-learning-approach-to
Repo
Framework

Task Oriented Channel State Information Quantization

Title Task Oriented Channel State Information Quantization
Authors Hang Zou, Chao Zhang, Samson Lasaulce
Abstract In this paper, we propose a new perspective for quantizing a signal, and more specifically the channel state information (CSI). The proposed point of view is fully relevant for a receiver that has to send a quantized version of the channel state to the transmitter. Roughly, the key idea is that the receiver sends just the right amount of information to the transmitter so that the latter is able to make its (resource allocation) decision. More formally, the decision task of the transmitter is to maximize a utility function u(x; g) with respect to x (e.g., a power allocation vector) given knowledge of a quantized version of the function parameters g. We exhibit a special case of an energy-efficient power control (PC) problem for which the optimal task-oriented CSI quantizer (TOCQ) can be found analytically. For more general utility functions, we propose to use neural network (NN)-based learning. Simulations show that the compression rate obtained by adapting the feedback information rate to the function to be optimized may be significantly increased.
Tasks Quantization
Published 2019-04-02
URL http://arxiv.org/abs/1904.04057v1
PDF http://arxiv.org/pdf/1904.04057v1.pdf
PWC https://paperswithcode.com/paper/task-oriented-channel-state-information
Repo
Framework
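
The difference from classical quantization is easiest to see in one dimension: choose the codebook to minimise the utility lost when the transmitter acts on the quantized channel gain, rather than to minimise distortion on the gain itself. The rate-minus-power utility and brute-force codebook search below are illustrative assumptions, not the paper's analytical TOCQ construction.

```python
import numpy as np

# 1-bit task-oriented quantizer for a toy power-control utility
# u(x; g) = log(1 + g*x) - c*x, whose optimal power is known in closed
# form. The codebook is chosen to minimise expected *utility* loss.
c = 0.5
def u(x, g): return np.log1p(g * x) - c * x
def x_star(g): return np.maximum(0.0, 1.0 / c - 1.0 / g)   # optimal power

rng = np.random.default_rng(0)
g_samples = rng.exponential(1.0, 5000) + 0.1               # channel gains

def utility_loss(codebook, g):
    # transmitter sees the nearest codeword and acts optimally for it
    q = codebook[np.abs(g[:, None] - codebook[None, :]).argmin(axis=1)]
    return np.mean(u(x_star(g), g) - u(x_star(q), g))

candidates = np.linspace(0.1, 5.0, 40)
best = min(
    ((a, b) for a in candidates for b in candidates if a < b),
    key=lambda cb: utility_loss(np.array(cb), g_samples),
)
print("task-oriented 1-bit codebook:", np.round(best, 2))
```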

Short-duration Speaker Verification (SdSV) Challenge 2020: the Challenge Evaluation Plan

Title Short-duration Speaker Verification (SdSV) Challenge 2020: the Challenge Evaluation Plan
Authors Hossein Zeinali, Kong Aik Lee, Jahangir Alam, Lukas Burget
Abstract This document describes the Short-duration Speaker Verification (SdSV) Challenge 2020. The main goal of the challenge is to evaluate new technologies for text-dependent (TD) and text-independent (TI) speaker verification (SV) in a short-duration scenario. The proposed challenge evaluates SdSV with varying degrees of phonetic overlap between the enrollment and test utterances (cross-lingual). It is the first challenge with a broad focus on systematic benchmarking and analysis of varying degrees of phonetic variability in short-duration speaker recognition. We expect that modern methods (deep neural networks in particular) will play a key role.
Tasks Speaker Recognition, Speaker Verification, Text-Dependent Speaker Verification, Text-Independent Speaker Verification
Published 2019-12-13
URL https://arxiv.org/abs/1912.06311v2
PDF https://arxiv.org/pdf/1912.06311v2.pdf
PWC https://paperswithcode.com/paper/short-duration-speaker-verification-sdsv
Repo
Framework

Money on the Table: Statistical information ignored by Softmax can improve classifier accuracy

Title Money on the Table: Statistical information ignored by Softmax can improve classifier accuracy
Authors Charles B. Delahunt, Courosh Mehanian, J. Nathan Kutz
Abstract Softmax is a standard final layer used in Neural Nets (NNs) to summarize information encoded in the trained NN and return a prediction. However, Softmax leverages only a subset of the class-specific structure encoded in the trained model and ignores potentially valuable information: During training, models encode an array $D$ of class response distributions, where $D_{ij}$ is the distribution of the $j^{th}$ pre-Softmax readout neuron’s responses to the $i^{th}$ class. Given a test sample, Softmax implicitly uses only the row of this array $D$ that corresponds to the readout neurons’ responses to the sample’s true class. Leveraging more of this array $D$ can improve classifier accuracy, because the likelihoods of two competing classes can be encoded in other rows of $D$. To explore this potential resource, we develop a hybrid classifier (Softmax-Pooling Hybrid, $SPH$) that uses Softmax on high-scoring samples, but on low-scoring samples uses a log-likelihood method that pools the information from the full array $D$. We apply $SPH$ to models trained on a vectorized MNIST dataset to varying levels of accuracy. $SPH$ replaces only the final Softmax layer in the trained NN, at test time only. All training is the same as for Softmax. Because the pooling classifier performs better than Softmax on low-scoring samples, $SPH$ reduces test set error by 6% to 23%, using the exact same trained model, regardless of the baseline Softmax accuracy. This reduction in error reflects hidden capacity of the trained NN that is left unused by Softmax.
Tasks
Published 2019-01-26
URL https://arxiv.org/abs/1901.09283v2
PDF https://arxiv.org/pdf/1901.09283v2.pdf
PWC https://paperswithcode.com/paper/money-on-the-table-statistical-information
Repo
Framework
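
The decision rule is simple to sketch: trust Softmax when its top probability is high, and otherwise pick the class whose full readout profile has the highest log-likelihood pooled across the response distributions $D_{ij}$. The Gaussian summaries of $D$ and the 0.9 threshold below are simplifying assumptions.

```python
import numpy as np

# SPH sketch: per (class i, readout neuron j), summarize D[i, j] by a
# Gaussian; pool log-likelihoods across all neurons for hard samples.
rng = np.random.default_rng(0)
n_classes = 3
mu = rng.normal(0.0, 0.5, (n_classes, n_classes))   # mu[i, j]
mu[np.diag_indices(n_classes)] += 3.0               # neuron i fires for class i
sd = 0.5 + rng.random((n_classes, n_classes))       # sd[i, j]

def classify(z, threshold=0.9):
    p = np.exp(z - z.max()); p /= p.sum()           # softmax
    if p.max() >= threshold:                        # high-scoring sample
        return int(p.argmax())
    # low-scoring sample: pool log-likelihoods over all readout neurons
    ll = (-0.5 * ((z[None, :] - mu) / sd) ** 2 - np.log(sd)).sum(axis=1)
    return int(ll.argmax())

z = mu[1] + sd[1] * rng.normal(size=n_classes)      # sample from class 1
print(classify(z))                                  # expected: 1
```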

Monotonic Multihead Attention

Title Monotonic Multihead Attention
Authors Xutai Ma, Juan Pino, James Cross, Liezl Puzon, Jiatao Gu
Abstract Simultaneous machine translation models start generating a target sequence before they have encoded or read the source sequence. Recent approaches to this task either apply a fixed policy to a state-of-the-art Transformer model, or use learnable monotonic attention on a weaker recurrent neural network-based structure. In this paper, we propose a new attention mechanism, Monotonic Multihead Attention (MMA), which extends the monotonic attention mechanism to multihead attention. We also introduce two novel and interpretable approaches for latency control that are specifically designed for multiple attention heads. We apply MMA to the simultaneous machine translation task and demonstrate better latency-quality tradeoffs compared to MILk, the previous state-of-the-art approach. We also analyze how the latency controls affect the attention span, and we motivate the introduction of our model by analyzing the effect of the number of decoder layers and heads on quality and latency.
Tasks Machine Translation
Published 2019-09-26
URL https://arxiv.org/abs/1909.12406v1
PDF https://arxiv.org/pdf/1909.12406v1.pdf
PWC https://paperswithcode.com/paper/monotonic-multihead-attention-1
Repo
Framework
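
Greedy inference for one hard monotonic head, the building block MMA extends to multiple heads, can be sketched as follows: the head scans the source left to right from its previous position and stops at the first position whose selection probability exceeds 0.5. The random energies below stand in for learned attention energies, and MMA's latency-control terms are omitted.

```python
import numpy as np

# One hard monotonic attention head at inference time: scan the source
# from the previous read position and stop where sigmoid(energy) > 0.5.
# MMA runs several such heads in parallel per decoder layer.
rng = np.random.default_rng(0)
src_len, n_heads = 10, 4
energies = rng.normal(0.0, 2.0, (n_heads, src_len))  # stand-in energies

def advance(head, prev_pos):
    p_select = 1.0 / (1.0 + np.exp(-energies[head, prev_pos:]))
    stops = np.nonzero(p_select > 0.5)[0]
    return prev_pos + stops[0] if stops.size else src_len - 1

positions = np.zeros(n_heads, dtype=int)
for step in range(3):                                 # three target tokens
    positions = np.array([advance(h, positions[h]) for h in range(n_heads)])
    print(f"step {step}: heads read up to source positions {positions}")
```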

Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently

Title Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently
Authors Rong Ge, Runzhe Wang, Haoyu Zhao
Abstract It has been observed \citep{zhang2016understanding} that deep neural networks can memorize: they achieve 100% accuracy on training data. Recent theoretical results explained such behavior in highly overparametrized regimes, where the number of neurons in each layer is larger than the number of training samples. In this paper, we show that neural networks can be trained to memorize training data perfectly in a mildly overparametrized regime, where the number of parameters is just a constant factor more than the number of training samples, and the number of neurons is much smaller.
Tasks
Published 2019-09-26
URL https://arxiv.org/abs/1909.11837v1
PDF https://arxiv.org/pdf/1909.11837v1.pdf
PWC https://paperswithcode.com/paper/mildly-overparametrized-neural-nets-can
Repo
Framework
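
The phenomenon being analysed is easy to reproduce empirically: a one-hidden-layer net trained by plain gradient descent reaches 100% accuracy on random labels. The width below is generous for speed; the paper's point is that a much milder parameter count (a constant factor above the sample count) already suffices.

```python
import numpy as np

# Fit a one-hidden-layer tanh network to *random* labels with plain
# gradient descent on the squared loss; it memorizes the training set.
rng = np.random.default_rng(0)
n, d, h, lr = 50, 10, 200, 0.05
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], n)                        # random labels
W1 = rng.normal(size=(d, h)) / np.sqrt(d)
w2 = rng.normal(size=h) / np.sqrt(h)

for _ in range(5000):
    a = np.tanh(X @ W1)
    err = a @ w2 - y                                  # residuals
    w2 -= lr * a.T @ err / n
    W1 -= lr * X.T @ ((err[:, None] * w2) * (1 - a ** 2)) / n

out = np.tanh(X @ W1) @ w2
print("train accuracy:", (np.sign(out) == y).mean())  # expected: 1.0
```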