January 31, 2020

2930 words 14 mins read

Paper Group ANR 92

Statistical Linear Models in Virus Genomic Alignment-free Classification: Application to Hepatitis C Viruses. Neural Generative Rhetorical Structure Parsing. Utility Analysis of Network Architectures for 3D Point Cloud Processing. A Robust Hybrid Approach for Textual Document Classification. Cross-Dataset Person Re-Identification via Unsupervised P …

Statistical Linear Models in Virus Genomic Alignment-free Classification: Application to Hepatitis C Viruses

Title Statistical Linear Models in Virus Genomic Alignment-free Classification: Application to Hepatitis C Viruses
Authors Amine M. Remita, Abdoulaye Baniré Diallo
Abstract Viral sequence classification is an important task in pathogen detection, epidemiological surveys and evolutionary studies. Statistical learning methods are widely used to classify and identify viral sequences in environmental samples. These methods face several challenges associated with the nature and properties of viral genomes, such as recombination, mutation rate and diversity. In addition, new generations of sequencing technologies raise further difficulties by generating massive amounts of fragmented sequences. While linear classifiers are often used to classify viruses, the accuracy space of existing models has not been thoroughly explored in the context of alignment-free approaches. In this study, we present an exhaustive assessment procedure exploring the power of linear classifiers in genotyping and subtyping partial and complete genomes, applied to the Hepatitis C viruses (HCV). Several variables are considered in this investigation, such as the classifier type (generative or discriminative) and its hyper-parameters (smoothing value and regularization penalty function), the classification task (genotyping or subtyping), the length of the tested sequences (partial or complete) and the length of the k-mer words. Overall, several classifiers perform well given precise combinations of the experimental variables mentioned above. Finally, we provide the procedure and benchmark data to allow for more robust assessment of classification from virus genomes.
Tasks
Published 2019-10-11
URL https://arxiv.org/abs/1910.05421v2
PDF https://arxiv.org/pdf/1910.05421v2.pdf
PWC https://paperswithcode.com/paper/statistical-linear-models-in-virus-genomic
Repo
Framework
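
A minimal sketch of the kind of experiment described in this entry: k-mer counts feeding a generative versus a discriminative linear classifier, with the smoothing value and the regularization penalty as the tuned hyper-parameters. The sequences, labels and hyper-parameter values below are illustrative placeholders, not the paper's benchmark data or code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical partial genome fragments and subtype labels (placeholders).
sequences = ["ATGGCACGTATTCGGA", "ATGGCACGAATTCGGC",
             "ATGCTTCGTAGTCAAA", "ATGCTTCGAAGTCAAC"]
labels = ["1a", "1a", "1b", "1b"]

def kmer_pipeline(clf, k=6):
    """Character k-mer counting followed by a linear classifier."""
    return make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(k, k), lowercase=False),
        clf,
    )

models = {
    "generative (multinomial naive Bayes, smoothing alpha=0.1)":
        kmer_pipeline(MultinomialNB(alpha=0.1)),
    "discriminative (logistic regression, L2 penalty)":
        kmer_pipeline(LogisticRegression(penalty="l2", max_iter=1000)),
}

for name, model in models.items():
    print(name, cross_val_score(model, sequences, labels, cv=2).mean())
```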

Neural Generative Rhetorical Structure Parsing

Title Neural Generative Rhetorical Structure Parsing
Authors Amandla Mabona, Laura Rimell, Stephen Clark, Andreas Vlachos
Abstract Rhetorical structure trees have been shown to be useful for several document-level tasks including summarization and document classification. Previous approaches to RST parsing have used discriminative models; however, these are less sample efficient than generative models, and RST parsing datasets are typically small. In this paper, we present the first generative model for RST parsing. Our model is a document-level RNN grammar (RNNG) with a bottom-up traversal order. We show that, for our parser’s traversal order, previous beam search algorithms for RNNGs have a left-branching bias which is ill-suited for RST parsing. We develop a novel beam search algorithm that keeps track of both structure- and word-generating actions without exhibiting this branching bias and results in absolute improvements of 6.8 and 2.9 on unlabelled and labelled F1 over previous algorithms. Overall, our generative model outperforms a discriminative model with the same features by 2.6 F1 points and achieves performance comparable to the state-of-the-art, outperforming all published parsers from a recent replication study that do not use additional training data.
Tasks Document Classification
Published 2019-09-24
URL https://arxiv.org/abs/1909.11049v1
PDF https://arxiv.org/pdf/1909.11049v1.pdf
PWC https://paperswithcode.com/paper/neural-generative-rhetorical-structure
Repo
Framework
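
A hedged, model-agnostic sketch of word-synchronous beam search, the general idea behind tracking structure- and word-generating actions without letting long runs of cheap structural actions crowd out the beam. The hooks (`candidate_actions`, `apply_action`, `score_action`, `is_word_action`) are hypothetical stand-ins for an RNNG; this is not the paper's exact algorithm.

```python
import heapq

def word_synchronous_beam_search(init_state, candidate_actions, apply_action,
                                 score_action, is_word_action, n_words,
                                 beam_size=10, max_struct_steps=50):
    """Advance hypotheses word by word, pruning only among hypotheses that
    have generated the same number of words."""
    beam = [(0.0, init_state)]                      # (log-prob, parser state)
    for _ in range(n_words):
        frontier, advanced = beam, []               # `advanced`: emitted the next word
        for _ in range(max_struct_steps):           # bound purely-structural expansion
            expansions = []
            for logp, state in frontier:
                for action in candidate_actions(state):
                    cand = (logp + score_action(state, action),
                            apply_action(state, action))
                    (advanced if is_word_action(action) else expansions).append(cand)
            if not expansions:
                break
            frontier = heapq.nlargest(beam_size, expansions, key=lambda c: c[0])
        beam = heapq.nlargest(beam_size, advanced, key=lambda c: c[0])
    return max(beam, key=lambda c: c[0])
```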

Utility Analysis of Network Architectures for 3D Point Cloud Processing

Title Utility Analysis of Network Architectures for 3D Point Cloud Processing
Authors Shikun Huang, Binbin Zhang, Wen Shen, Zhihua Wei, Quanshi Zhang
Abstract In this paper, we diagnose deep neural networks for 3D point cloud processing to explore the utilities of different network architectures. We propose a number of hypotheses on the effects of specific network architectures on the representation capacity of DNNs. In order to prove the hypotheses, we design five metrics to diagnose various types of DNNs from the following perspectives: information discarding, information concentration, rotation robustness, adversarial robustness, and neighborhood inconsistency. We conduct comparative studies based on these metrics to verify the hypotheses. We further use the verified hypotheses to revise the architectures of existing DNNs and improve their utilities. Experiments demonstrate the effectiveness of our method.
Tasks
Published 2019-11-20
URL https://arxiv.org/abs/1911.09053v1
PDF https://arxiv.org/pdf/1911.09053v1.pdf
PWC https://paperswithcode.com/paper/utility-analysis-of-network-architectures-for
Repo
Framework
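
A toy illustration, not the paper's metric definitions, of how one might probe rotation robustness: compare intermediate features of a point cloud before and after a random 3D transform. The "network" here is a stand-in random-projection encoder; in practice one would use a real point-cloud DNN's layer output.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_encoder(points, W):
    """Placeholder per-point projection + max-pool, mimicking a PointNet-style layer."""
    return np.maximum(points @ W, 0.0).max(axis=0)   # (d,) global feature

def random_orthogonal(rng):
    """Random orthogonal transform (a rotation, up to a possible reflection)."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))

points = rng.normal(size=(1024, 3))                  # hypothetical point cloud
W = rng.normal(size=(3, 64))                         # stand-in network weights

f_orig = toy_encoder(points, W)
f_rot = toy_encoder(points @ random_orthogonal(rng).T, W)

# One simple robustness score: cosine similarity of the two features (1.0 = invariant).
cos = f_orig @ f_rot / (np.linalg.norm(f_orig) * np.linalg.norm(f_rot) + 1e-12)
print("rotation-robustness (cosine similarity):", cos)
```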

A Robust Hybrid Approach for Textual Document Classification

Title A Robust Hybrid Approach for Textual Document Classification
Authors Muhammad Nabeel Asim, Muhammad Usman Ghani Khan, Muhammad Imran Malik, Andreas Dengel, Sheraz Ahmed
Abstract Text document classification is an important task for diverse natural language processing based applications. Traditional machine learning approaches mainly focused on reducing the dimensionality of textual data to perform classification. Although this improved overall classification accuracy, the classifiers still faced a sparsity problem due to the lack of better data representation techniques. Deep learning based text document classification, on the other hand, benefited greatly from the invention of word embeddings, which solved the sparsity problem, and researchers' focus mainly remained on the development of deep architectures. Deeper architectures, however, learn some redundant features that limit the performance of deep learning based solutions. In this paper, we propose a two-stage text document classification methodology which combines traditional feature engineering with automatic feature engineering (using deep learning). The proposed methodology comprises a filter-based feature selection (FSE) algorithm followed by a deep convolutional neural network. It is evaluated on the two most commonly used public datasets, i.e., the 20 Newsgroups and BBC news datasets. Evaluation results reveal that the proposed methodology outperforms state-of-the-art (traditional) machine learning and deep learning based text document classification methodologies by a significant margin of 7.7% on 20 Newsgroups and 6.6% on BBC news.
Tasks Document Classification, Feature Engineering, Feature Selection, Word Embeddings
Published 2019-09-12
URL https://arxiv.org/abs/1909.05478v1
PDF https://arxiv.org/pdf/1909.05478v1.pdf
PWC https://paperswithcode.com/paper/a-robust-hybrid-approach-for-textual-document
Repo
Framework
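
A rough sketch of the two-stage pipeline shape described in this entry, on hypothetical toy data: a filter (chi-squared) feature-selection pass to trim the vocabulary, followed by a small 1D-convolutional classifier over embedded tokens. This is not the authors' FSE algorithm or network; the data, layer sizes and selected-term count are illustrative.

```python
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["the match ended in a draw", "new gpu accelerates training",
        "the striker scored twice", "compilers optimize tensor kernels"]
labels = [0, 1, 0, 1]                                # e.g. sport vs. tech (toy data)

# Stage 1: filter-based selection keeps the top-k most class-informative terms.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
selector = SelectKBest(chi2, k=6).fit(X, labels)
kept_terms = [t for t, keep in zip(vectorizer.get_feature_names_out(),
                                   selector.get_support()) if keep]
print("selected vocabulary:", kept_terms)

# Stage 2: a small CNN over embeddings of the selected terms.
class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=32, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 16, kernel_size=3, padding=1)
        self.fc = nn.Linear(16, n_classes)

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, embed, seq)
        x = torch.relu(self.conv(x)).max(dim=2).values
        return self.fc(x)

model = TextCNN(vocab_size=len(kept_terms) + 1)      # +1 for an OOV/pad index
logits = model(torch.zeros(2, 10, dtype=torch.long))
print(logits.shape)                                  # torch.Size([2, 2])
```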

Cross-Dataset Person Re-Identification via Unsupervised Pose Disentanglement and Adaptation

Title Cross-Dataset Person Re-Identification via Unsupervised Pose Disentanglement and Adaptation
Authors Yu-Jhe Li, Ci-Siang Lin, Yan-Bo Lin, Yu-Chiang Frank Wang
Abstract Person re-identification (re-ID) aims at recognizing the same person from images taken across different cameras. To address this challenging task, existing re-ID models typically rely on a large amount of labeled training data, which is not practical for real-world applications. To alleviate this limitation, researchers now target cross-dataset re-ID, which focuses on generalizing the discriminative ability to an unlabeled target domain when given a labeled source domain dataset. To achieve this goal, our proposed Pose Disentanglement and Adaptation Network (PDA-Net) aims at learning a deep image representation with pose and domain information properly disentangled. With the learned cross-domain pose-invariant feature space, our proposed PDA-Net is able to perform pose disentanglement across domains without supervision in identities, and the resulting features can be applied to cross-dataset re-ID. Both our qualitative and quantitative results on two benchmark datasets confirm the effectiveness of our approach and its superiority over state-of-the-art cross-dataset re-ID approaches.
Tasks Person Re-Identification
Published 2019-09-20
URL https://arxiv.org/abs/1909.09675v1
PDF https://arxiv.org/pdf/1909.09675v1.pdf
PWC https://paperswithcode.com/paper/190909675
Repo
Framework

Instance-Based Classification through Hypothesis Testing

Title Instance-Based Classification through Hypothesis Testing
Authors Zengyou He, Chaohua Sheng, Yan Liu, Quan Zou
Abstract Classification is a fundamental problem in machine learning and data mining. During the past decades, numerous classification methods have been presented based on different principles. However, most existing classifiers cast the classification problem as an optimization problem and do not address the issue of statistical significance. In this paper, we formulate the binary classification problem as a two-sample testing problem. More precisely, our classification model is a generic framework that is composed of two steps. In the first step, the distance between the test instance and each training instance is calculated to derive two distance sets. In the second step, a two-sample test is performed under the null hypothesis that the two sets of distances are drawn from the same cumulative distribution. After these two steps, we have two p-values for each test instance, and the test instance is assigned to the class associated with the smaller p-value. Essentially, the presented classification method can be regarded as an instance-based classifier based on hypothesis testing. Experimental results on 40 real data sets show that our method achieves the same level of performance as state-of-the-art classifiers and performs significantly better than existing testing-based classifiers. Furthermore, we can handle outlying instances and control the false discovery rate of test instances assigned to each class under the same framework.
Tasks
Published 2019-01-03
URL http://arxiv.org/abs/1901.00560v2
PDF http://arxiv.org/pdf/1901.00560v2.pdf
PWC https://paperswithcode.com/paper/instance-based-classification-through
Repo
Framework
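
A hedged sketch of the two-step framework described above, on toy Gaussian data. The abstract does not restate the exact test statistic or p-value construction, so this illustration uses one-sided Mann-Whitney U tests on the two distance sets as a stand-in; treat it as a picture of the framework, not a reimplementation of the authors' classifier.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
class0 = rng.normal(loc=-1.0, size=(50, 2))     # toy training data, class 0
class1 = rng.normal(loc=+1.0, size=(50, 2))     # toy training data, class 1
x_test = np.array([0.8, 1.1])                   # toy test instance

# Step 1: distances from the test instance to each class's training instances.
d0 = np.linalg.norm(class0 - x_test, axis=1)
d1 = np.linalg.norm(class1 - x_test, axis=1)

# Step 2: one p-value per class, testing whether distances to that class are
# stochastically smaller than distances to the other class.
p0 = mannwhitneyu(d0, d1, alternative="less").pvalue
p1 = mannwhitneyu(d1, d0, alternative="less").pvalue

predicted = 0 if p0 < p1 else 1
print(f"p-values: class 0 = {p0:.3g}, class 1 = {p1:.3g} -> predict class {predicted}")
```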

Song Hit Prediction: Predicting Billboard Hits Using Spotify Data

Title Song Hit Prediction: Predicting Billboard Hits Using Spotify Data
Authors Kai Middlebrook, Kian Sheik
Abstract In this work, we attempt to solve the Hit Song Science problem, which aims to predict which songs will become chart-topping hits. We constructed a dataset with approximately 1.8 million hit and non-hit songs and extracted their audio features using the Spotify Web API. We tested four models on our dataset. Our best model was a random forest, which was able to predict Billboard song success with 88% accuracy.
Tasks
Published 2019-08-22
URL https://arxiv.org/abs/1908.08609v2
PDF https://arxiv.org/pdf/1908.08609v2.pdf
PWC https://paperswithcode.com/paper/song-hit-prediction-predicting-billboard-hits
Repo
Framework
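
A small sketch of the modelling step named above: a random forest over Spotify-style audio features. The feature columns and labels below are synthetic placeholders, not the authors' 1.8-million-song dataset.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
features = pd.DataFrame({
    "danceability": rng.uniform(0, 1, n),
    "energy": rng.uniform(0, 1, n),
    "tempo": rng.uniform(60, 200, n),
    "valence": rng.uniform(0, 1, n),
})
# Synthetic "hit" label loosely tied to the features, just so the demo runs.
hit = (features["danceability"] + features["energy"]
       + rng.normal(0, 0.3, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, hit, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```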

Polyphonic Music Composition with LSTM Neural Networks and Reinforcement Learning

Title Polyphonic Music Composition with LSTM Neural Networks and Reinforcement Learning
Authors Harish Kumar, Balaraman Ravindran
Abstract In the domain of algorithmic music composition, machine learning-driven systems eliminate the need for carefully hand-crafting rules for composition. In particular, the capability of recurrent neural networks to learn complex temporal patterns lends itself well to the musical domain. Promising results have been observed across a number of recent attempts at music composition using deep RNNs. These approaches generally aim at first training neural networks to reproduce subsequences drawn from existing songs. Subsequently, they are used to compose music either at the audio sample-level or at the note-level. We designed a representation that divides polyphonic music into a small number of monophonic streams. This representation greatly reduces the complexity of the problem and eliminates an exponential number of probably poor compositions. On top of our LSTM neural network that learnt musical sequences in this representation, we built an RL agent that learnt to find combinations of songs whose joint dominance produced pleasant compositions. We present Amadeus, an algorithmic music composition system that composes music that consists of intricate melodies, basic chords, and even occasional contrapuntal sequences.
Tasks
Published 2019-02-05
URL http://arxiv.org/abs/1902.01973v2
PDF http://arxiv.org/pdf/1902.01973v2.pdf
PWC https://paperswithcode.com/paper/polyphonic-music-composition-with-lstm-neural
Repo
Framework
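
A toy sketch of the representational idea above: splitting polyphonic input into a fixed number of monophonic streams. Here, simultaneous notes at each time step are simply assigned to streams by pitch order (highest voice to stream 0, and so on); the paper's actual representation may differ.

```python
from typing import List

def to_monophonic_streams(chords: List[List[int]], n_streams: int = 4) -> List[List[int]]:
    """chords: one list of MIDI pitches per time step; returns n_streams
    pitch sequences, padding missing voices with a rest token (-1)."""
    streams = [[] for _ in range(n_streams)]
    for chord in chords:
        voices = sorted(chord, reverse=True)[:n_streams]    # keep the top voices
        voices += [-1] * (n_streams - len(voices))           # pad with rests
        for stream, pitch in zip(streams, voices):
            stream.append(pitch)
    return streams

# Example: a C major triad, then a two-note dyad, then a single note.
print(to_monophonic_streams([[60, 64, 67], [62, 65], [64]], n_streams=3))
# -> [[67, 65, 64], [64, 62, -1], [60, -1, -1]]
```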

Lexicase Selection of Specialists

Title Lexicase Selection of Specialists
Authors Thomas Helmuth, Edward Pantridge, Lee Spector
Abstract Lexicase parent selection filters the population by considering one random training case at a time, eliminating any individuals with errors for the current case that are worse than the best error in the selection pool, until a single individual remains. This process often stops before considering all training cases, meaning that it will ignore the error values on any cases that were not yet considered. Lexicase selection can therefore select specialist individuals that have poor errors on some training cases, if they have great errors on others and those errors come near the start of the random list of cases used for the parent selection event in question. We hypothesize here that selecting these specialists, which may have poor total error, plays an important role in lexicase selection’s observed performance advantages over error-aggregating parent selection methods such as tournament selection, which select specialists much less frequently. We conduct experiments examining this hypothesis, and find that lexicase selection’s performance and diversity maintenance degrade when we deprive it of the ability of selecting specialists. These findings help explain the improved performance of lexicase selection compared to tournament selection, and suggest that specialists help drive evolution under lexicase selection toward global solutions.
Tasks
Published 2019-05-22
URL https://arxiv.org/abs/1905.09372v3
PDF https://arxiv.org/pdf/1905.09372v3.pdf
PWC https://paperswithcode.com/paper/lexicase-selection-of-specialists
Repo
Framework
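
Lexicase parent selection as described in the abstract fits in a few lines: shuffle the training cases, then repeatedly keep only the individuals whose error on the current case matches the best error in the surviving pool. The error matrix below is toy data; `errors[i][c]` is individual i's error on case c.

```python
import random

def lexicase_select(errors, rng=random):
    n_cases = len(errors[0])
    pool = list(range(len(errors)))                 # indices of candidate parents
    for case in rng.sample(range(n_cases), n_cases):
        best = min(errors[i][case] for i in pool)
        pool = [i for i in pool if errors[i][case] == best]
        if len(pool) == 1:                          # often stops before all cases
            break
    return rng.choice(pool)                         # tie-break among survivors

# A "specialist": individual 2 is perfect on case 0 but terrible elsewhere,
# yet it is still selected whenever case 0 comes first.
errors = [
    [3, 2, 2],
    [2, 3, 1],
    [0, 9, 9],
]
print([lexicase_select(errors) for _ in range(5)])
```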

Group Fairness in Bandit Arm Selection

Title Group Fairness in Bandit Arm Selection
Authors Candice Schumann, Zhi Lang, Nicholas Mattei, John P. Dickerson
Abstract We propose a novel formulation of group fairness in the contextual multi-armed bandit (CMAB) setting. In the CMAB setting a sequential decision maker must at each time step choose an arm to pull from a finite set of arms after observing some context for each of the potential arm pulls. In our model arms are partitioned into two or more sensitive groups based on some protected feature (e.g., age, race, or socio-economic status). Despite the fact that there may be differences in expected payout between the groups, we may wish to ensure some form of fairness between picking arms from the various groups. In this work we explore two definitions of fairness: equal group probability, wherein the probability of pulling an arm from any of the protected groups is the same; and proportional parity, wherein the probability of choosing an arm from a particular group is proportional to the size of that group. We provide a novel algorithm that can accommodate these notions of fairness for an arbitrary number of groups, and provide bounds on the regret for our algorithm. We then validate our algorithm using synthetic data as well as two real-world datasets for intervention settings wherein we want to allocate resources fairly across protected groups.
Tasks
Published 2019-12-09
URL https://arxiv.org/abs/1912.03802v2
PDF https://arxiv.org/pdf/1912.03802v2.pdf
PWC https://paperswithcode.com/paper/group-fairness-in-bandit-arm-selection
Repo
Framework
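
A hedged sketch of the two fairness notions defined above, grafted onto a plain UCB arm-selection rule: first sample a protected group (uniformly for equal group probability, or in proportion to group size for proportional parity), then pick an arm within that group. This illustrates the constraints only; it is not the authors' algorithm, and no regret guarantee is implied.

```python
import math
import random

def pick_group(groups, fairness="equal"):
    names = list(groups)
    if fairness == "equal":                       # equal group probability
        weights = [1.0] * len(names)
    else:                                         # proportional parity
        weights = [len(groups[g]) for g in names]
    return random.choices(names, weights=weights, k=1)[0]

def pick_arm_ucb(arms, counts, means, t):
    def ucb(a):
        if counts[a] == 0:
            return float("inf")
        return means[a] + math.sqrt(2 * math.log(t + 1) / counts[a])
    return max(arms, key=ucb)

# Toy setup: arms partitioned into two sensitive groups of different sizes.
groups = {"A": [0, 1], "B": [2, 3, 4]}
counts = {a: 0 for g in groups.values() for a in g}
means = {a: 0.0 for a in counts}

for t in range(10):
    g = pick_group(groups, fairness="proportional")
    arm = pick_arm_ucb(groups[g], counts, means, t)
    reward = random.random()                      # stand-in for the true payout
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]
print(counts)
```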

Database Alignment with Gaussian Features

Title Database Alignment with Gaussian Features
Authors Osman Emre Dai, Daniel Cullina, Negar Kiyavash
Abstract We consider the problem of aligning a pair of databases with jointly Gaussian features. We consider two algorithms, complete database alignment via MAP estimation among all possible database alignments, and partial alignment via a thresholding approach of log likelihood ratios. We derive conditions on mutual information between feature pairs, identifying the regimes where the algorithms are guaranteed to perform reliably and those where they cannot be expected to succeed.
Tasks
Published 2019-03-04
URL https://arxiv.org/abs/1903.01422v2
PDF https://arxiv.org/pdf/1903.01422v2.pdf
PWC https://paperswithcode.com/paper/database-alignment-with-gaussian-features
Repo
Framework
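
A small numerical sketch of the two procedures discussed above, under a toy model in which each matched row pair has standard Gaussian features with per-coordinate correlation rho and unmatched rows are independent. MAP alignment then reduces to a maximum-weight assignment over pairwise log-likelihood ratios, and partial alignment keeps only pairs whose LLR clears a threshold. The toy generative model and cutoff are illustrative, not the paper's conditions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, d, rho = 8, 20, 0.8

# Generate correlated feature pairs, then hide the correspondence with a shuffle.
X = rng.standard_normal((n, d))
Y = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal((n, d))
perm = rng.permutation(n)
Y = Y[perm]

# Pairwise log-likelihood ratios log f(x, y) / (f(x) f(y)) under the Gaussian model.
cross = X @ Y.T
x2 = (X**2).sum(axis=1, keepdims=True)
y2 = (Y**2).sum(axis=1, keepdims=True).T
llr = (-0.5 * d * np.log(1 - rho**2)
       + rho * (2 * cross - rho * x2 - rho * y2) / (2 * (1 - rho**2)))

# Complete (MAP) alignment: maximum-weight matching over the LLR matrix.
rows, cols = linear_sum_assignment(-llr)
true_cols = np.empty(n, dtype=int)
true_cols[perm] = np.arange(n)                 # which column of Y holds X's row i
print("MAP alignment accuracy:", np.mean(cols == true_cols))

# Partial alignment: declare only pairs whose LLR favours a match over chance.
threshold = 0.0                                # illustrative cutoff
confident = llr[rows, cols] > threshold
print("pairs declared:", int(confident.sum()), "of", n)
```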

Why Attention? Analyzing and Remedying BiLSTM Deficiency in Modeling Cross-Context for NER

Title Why Attention? Analyzing and Remedying BiLSTM Deficiency in Modeling Cross-Context for NER
Authors Peng-Hsuan Li, Tsu-Jui Fu, Wei-Yun Ma
Abstract State-of-the-art approaches of NER have used sequence-labeling BiLSTM as a core module. This paper formally shows the limitation of BiLSTM in modeling cross-context patterns. Two types of simple cross-structures – self-attention and Cross-BiLSTM – are shown to effectively remedy the problem. On both OntoNotes 5.0 and WNUT 2017, clear and consistent improvements are achieved over bare-bone models, up to 8.7% on some of the multi-token mentions. In-depth analyses across several aspects of the improvements, especially the identification of multi-token mentions, are further given.
Tasks
Published 2019-10-07
URL https://arxiv.org/abs/1910.02586v2
PDF https://arxiv.org/pdf/1910.02586v2.pdf
PWC https://paperswithcode.com/paper/why-attention-analyzing-and-remedying-bilstm
Repo
Framework
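
A hedged PyTorch sketch of one of the two remedies named above: a self-attention layer on top of a sequence-labeling BiLSTM, so each token can directly condition on distant context. The dimensions, residual combination and tag-set size are placeholders, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class BiLSTMSelfAttnTagger(nn.Module):
    def __init__(self, vocab_size, n_tags, embed_dim=100, hidden=128, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):                 # (batch, seq)
        h, _ = self.bilstm(self.embed(token_ids)) # (batch, seq, 2*hidden)
        ctx, _ = self.attn(h, h, h)               # tokens attend across the sentence
        return self.out(h + ctx)                  # per-token tag logits

model = BiLSTMSelfAttnTagger(vocab_size=10000, n_tags=9)
logits = model(torch.randint(0, 10000, (2, 12)))
print(logits.shape)                               # torch.Size([2, 12, 9])
```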

Class-Conditional Compression and Disentanglement: Bridging the Gap between Neural Networks and Naive Bayes Classifiers

Title Class-Conditional Compression and Disentanglement: Bridging the Gap between Neural Networks and Naive Bayes Classifiers
Authors Rana Ali Amjad, Bernhard C. Geiger
Abstract In this draft, which reports on work in progress, we 1) adapt the information bottleneck functional by replacing the compression term with class-conditional compression, 2) relax this functional using a variational bound related to class-conditional disentanglement, 3) consider this functional as a training objective for stochastic neural networks, and 4) show that the latent representations are learned such that they can be used in a naive Bayes classifier. We continue by suggesting a series of experiments along the lines of Nonlinear Information Bottleneck [Kolchinsky et al., 2018], Deep Variational Information Bottleneck [Alemi et al., 2017], and Information Dropout [Achille and Soatto, 2018]. We furthermore suggest a neural network where the decoder architecture is a parameterized naive Bayes decoder.
Tasks
Published 2019-06-06
URL https://arxiv.org/abs/1906.02576v1
PDF https://arxiv.org/pdf/1906.02576v1.pdf
PWC https://paperswithcode.com/paper/class-conditional-compression-and
Repo
Framework
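
A speculative PyTorch sketch of the kind of objective outlined above: a stochastic encoder trained with cross-entropy plus a class-conditional compression term, i.e. a KL divergence to a per-class Gaussian prior rather than a single shared prior. The priors, stand-in decoder and weighting are my own illustrative choices, not the authors' formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassConditionalVIB(nn.Module):
    def __init__(self, in_dim=20, latent=8, n_classes=3, beta=1e-2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 2 * latent))
        self.class_means = nn.Parameter(torch.zeros(n_classes, latent))  # prior means
        self.classifier = nn.Linear(latent, n_classes)                   # stand-in decoder
        self.beta = beta

    def forward(self, x, y):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)          # reparameterize
        logits = self.classifier(z)
        # KL( N(mu, diag(var)) || N(mu_y, I) ), per example, summed over dimensions.
        prior_mu = self.class_means[y]
        kl = 0.5 * (logvar.exp() + (mu - prior_mu) ** 2 - 1.0 - logvar).sum(dim=-1)
        return F.cross_entropy(logits, y) + self.beta * kl.mean()

model = ClassConditionalVIB()
loss = model(torch.randn(16, 20), torch.randint(0, 3, (16,)))
loss.backward()
print(float(loss))
```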

Asymmetric Deep Semantic Quantization for Image Retrieval

Title Asymmetric Deep Semantic Quantization for Image Retrieval
Authors Zhan Yang, Osolo Ian Raymond, WuQing Sun, Jun Long
Abstract Due to its fast retrieval and storage efficiency, hashing has been widely used in nearest neighbor retrieval tasks. By using deep learning based techniques, hashing can outperform non-learning based hashing techniques in many applications. However, we argue that current deep learning based hashing methods ignore some critical problems (e.g., the learned hash codes are not discriminative because the hashing methods are unable to discover rich semantic information, and the training strategy has difficulty optimizing the discrete binary codes). In this paper, we propose a novel image hashing method, termed Asymmetric Deep Semantic Quantization (ADSQ). ADSQ is implemented using a three-stream framework, which consists of one LabelNet and two ImgNets. The LabelNet leverages three fully-connected layers, which are used to capture rich semantic information between image pairs. The two ImgNets each adopt the same convolutional neural network structure, but with different weights (i.e., asymmetric convolutional neural networks), and are used to generate discriminative compact hash codes. Specifically, the function of the LabelNet is to capture rich semantic information that is used to guide the two ImgNets in minimizing the gap between the real-continuous features and the discrete binary codes. Furthermore, ADSQ can utilize the most critical semantic information to guide the feature learning process and consider the consistency of the common semantic space and Hamming space. Experimental results on three benchmarks (i.e., CIFAR-10, NUS-WIDE, and ImageNet) demonstrate that the proposed ADSQ outperforms current state-of-the-art methods.
Tasks Image Retrieval, Quantization
Published 2019-03-29
URL https://arxiv.org/abs/1903.12493v2
PDF https://arxiv.org/pdf/1903.12493v2.pdf
PWC https://paperswithcode.com/paper/asymmetric-deep-semantic-quantization-for
Repo
Framework
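
A structural sketch of the three-stream setup described in the abstract: one LabelNet built from three fully-connected layers and two ImgNets that share an architecture but not weights, each emitting tanh-relaxed hash codes. The layer sizes, backbone and omitted losses are placeholders, not the ADSQ model.

```python
import torch
import torch.nn as nn

def make_imgnet(code_bits):
    """A tiny CNN stand-in for each ImgNet branch (same structure, fresh weights)."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, code_bits), nn.Tanh(),          # continuous relaxation of ±1 codes
    )

class ThreeStreamHashing(nn.Module):
    def __init__(self, n_labels=10, code_bits=48):
        super().__init__()
        self.label_net = nn.Sequential(               # three FC layers over label vectors
            nn.Linear(n_labels, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, code_bits), nn.Tanh(),
        )
        self.img_net_a = make_imgnet(code_bits)       # asymmetric: two instances with
        self.img_net_b = make_imgnet(code_bits)       # identical structure, different weights

    def forward(self, img_a, img_b, label_vec):
        return self.img_net_a(img_a), self.img_net_b(img_b), self.label_net(label_vec)

model = ThreeStreamHashing()
codes = model(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64), torch.rand(4, 10))
print([c.shape for c in codes])                       # three (4, 48) code tensors
```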

Generalization bounds for deep convolutional neural networks

Title Generalization bounds for deep convolutional neural networks
Authors Philip M. Long, Hanie Sedghi
Abstract We prove bounds on the generalization error of convolutional networks. The bounds are in terms of the training loss, the number of parameters, the Lipschitz constant of the loss and the distance from the weights to the initial weights. They are independent of the number of pixels in the input, and the height and width of hidden feature maps. We present experiments using CIFAR-10 with varying hyperparameters of a deep convolutional network, comparing our bounds with practical generalization gaps.
Tasks
Published 2019-05-29
URL https://arxiv.org/abs/1905.12600v5
PDF https://arxiv.org/pdf/1905.12600v5.pdf
PWC https://paperswithcode.com/paper/size-free-generalization-bounds-for
Repo
Framework
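
The bound above is stated in terms of quantities that are easy to track in practice, notably the distance from the trained weights to their initialization and the parameter count. A minimal sketch of measuring those quantities for a small conv net follows; the network, data and training loop are placeholders, not the paper's CIFAR-10 experiments.

```python
import copy
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
init = copy.deepcopy(net).state_dict()            # snapshot of the initialization

# Stand-in "training": a few gradient steps on random data.
opt = torch.optim.SGD(net.parameters(), lr=0.1)
for _ in range(20):
    x, y = torch.randn(32, 3, 32, 32), torch.randint(0, 10, (32,))
    opt.zero_grad()
    nn.functional.cross_entropy(net(x), y).backward()
    opt.step()

# Distance from initialization, one of the terms the bound depends on.
dist = torch.sqrt(sum(((p - init[name]) ** 2).sum()
                      for name, p in net.state_dict().items()))
print("||w - w_0||:", float(dist))

n_params = sum(p.numel() for p in net.parameters())
print("number of parameters:", n_params)
```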