July 28, 2019

3186 words 15 mins read

Paper Group ANR 173

Adversarial Dropout Regularization. A Visual Representation of Wittgenstein’s Tractatus Logico-Philosophicus. JADE: Joint Autoencoders for Dis-Entanglement. Leveraging Distributional Semantics for Multi-Label Learning. Improving Classification by Improving Labelling: Introducing Probabilistic Multi-Label Object Interaction Recognition. Unsupervised …

Adversarial Dropout Regularization

Title Adversarial Dropout Regularization
Authors Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, Kate Saenko
Abstract We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.
Tasks Domain Adaptation, Image Classification, Semantic Segmentation, Unsupervised Domain Adaptation
Published 2017-11-05
URL http://arxiv.org/abs/1711.01575v3
PDF http://arxiv.org/pdf/1711.01575v3.pdf
PWC https://paperswithcode.com/paper/adversarial-dropout-regularization
Repo
Framework
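
The ADR recipe above replaces a domain critic with the classifier’s own dropout noise. A minimal PyTorch sketch of that idea follows; the network sizes, the symmetric-KL sensitivity term, and the three-step update schedule are illustrative assumptions rather than the authors’ exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_dim, num_classes = 128, 10
G = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, feature_dim))  # feature generator
C = nn.Sequential(nn.Dropout(0.5), nn.Linear(feature_dim, num_classes))          # classifier acting as critic

def sensitivity(feats):
    """Disagreement between two dropout-sampled predictions (the ADR critic signal)."""
    p1 = F.softmax(C(feats), dim=1)
    p2 = F.softmax(C(feats), dim=1)  # second stochastic pass, fresh dropout mask
    return 0.5 * (F.kl_div(p1.log(), p2, reduction="batchmean")
                  + F.kl_div(p2.log(), p1, reduction="batchmean"))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(C.parameters(), lr=1e-4)

def adr_step(x_src, y_src, x_tgt):
    # 1) supervised loss on labeled source data, updating both G and C
    loss_cls = F.cross_entropy(C(G(x_src)), y_src)
    opt_g.zero_grad(); opt_c.zero_grad()
    loss_cls.backward()
    opt_g.step(); opt_c.step()

    # 2) critic update: C keeps source accuracy while maximizing sensitivity on target features
    loss_c = F.cross_entropy(C(G(x_src)), y_src) - sensitivity(G(x_tgt).detach())
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # 3) generator update: G minimizes sensitivity, avoiding non-discriminative regions
    loss_g = sensitivity(G(x_tgt))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The generator step pulls target features away from regions where two dropout-sampled classifiers disagree, which is the sense in which dropout plays the role of the critic.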

A Visual Representation of Wittgenstein’s Tractatus Logico-Philosophicus

Title A Visual Representation of Wittgenstein’s Tractatus Logico-Philosophicus
Authors Anca Bucur, Sergiu Nisioi
Abstract In this paper we present a data visualization method together with its potential usefulness in digital humanities and philosophy of language. We compile a multilingual parallel corpus from different versions of Wittgenstein’s Tractatus Logico-Philosophicus, including the original in German and translations into English, Spanish, French, and Russian. Using this corpus, we compute a similarity measure between propositions and render a visual network of relations for different languages.
Tasks
Published 2017-03-13
URL http://arxiv.org/abs/1703.04336v1
PDF http://arxiv.org/pdf/1703.04336v1.pdf
PWC https://paperswithcode.com/paper/a-visual-representation-of-wittgensteins
Repo
Framework
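
The pipeline in this entry is essentially: align proposition-level translations, score pairwise similarity, and draw the resulting graph. A small sketch under loose assumptions; TF-IDF cosine similarity and an arbitrary 0.3 edge threshold stand in for whatever measure the authors actually use.

```python
import matplotlib.pyplot as plt
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny excerpt of Tractatus propositions, keyed by their numbering.
propositions = {
    "1": "The world is all that is the case.",
    "1.1": "The world is the totality of facts, not of things.",
    "2": "What is the case - a fact - is the existence of states of affairs.",
}

ids = list(propositions)
docs = [propositions[i] for i in ids]
sim = cosine_similarity(TfidfVectorizer().fit_transform(docs))

G = nx.Graph()
G.add_nodes_from(ids)
for a in range(len(ids)):
    for b in range(a + 1, len(ids)):
        if sim[a, b] > 0.3:  # keep only sufficiently similar proposition pairs
            G.add_edge(ids[a], ids[b], weight=float(sim[a, b]))

nx.draw_networkx(G)  # render the visual network of relations
plt.show()
```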

JADE: Joint Autoencoders for Dis-Entanglement

Title JADE: Joint Autoencoders for Dis-Entanglement
Authors Ershad Banijamali, Amir-Hossein Karimi, Alexander Wong, Ali Ghodsi
Abstract The problem of feature disentanglement has been explored in the literature for the purpose of image and video processing and text analysis. State-of-the-art methods for disentangling feature representations rely on the presence of many labeled samples. In this work, we present a novel method for disentangling factors of variation in data-scarce regimes. Specifically, we explore the application of feature disentangling to the problem of supervised classification in a setting where few labeled samples exist and there are no unlabeled samples for use in unsupervised training. Instead, a similar dataset exists that shares at least one direction of variation with the sample-constrained dataset. We train our model end-to-end using the framework of variational autoencoders and experimentally demonstrate that using an auxiliary dataset with similar variation factors contributes positively to classification performance, yielding results competitive with the state of the art in unsupervised learning.
Tasks
Published 2017-11-24
URL http://arxiv.org/abs/1711.09163v1
PDF http://arxiv.org/pdf/1711.09163v1.pdf
PWC https://paperswithcode.com/paper/jade-joint-autoencoders-for-dis-entanglement
Repo
Framework
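
A loose sketch of how a joint autoencoder of this kind could be wired: two VAEs share the encoder that captures the direction of variation common to the auxiliary dataset and the label-scarce dataset, and a small classifier is trained on the shared code of the few labeled samples. Layer sizes, loss weights, and the exact sharing scheme are assumptions, not the paper’s architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallVAE(nn.Module):
    def __init__(self, in_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * z_dim)   # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, in_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        recon = torch.sigmoid(self.dec(z))                      # assumes inputs scaled to [0, 1]
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
        return z, F.binary_cross_entropy(recon, x, reduction="mean") + kl

vae_main, vae_aux = SmallVAE(), SmallVAE()
vae_aux.enc = vae_main.enc      # share the encoder for the common factor of variation
clf = nn.Linear(16, 10)         # classifier on the shared code of the labeled samples

def joint_loss(x_labeled, y, x_aux):
    z, elbo_main = vae_main(x_labeled)
    _, elbo_aux = vae_aux(x_aux)
    return elbo_main + elbo_aux + F.cross_entropy(clf(z), y)
```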

Leveraging Distributional Semantics for Multi-Label Learning

Title Leveraging Distributional Semantics for Multi-Label Learning
Authors Rahul Wadbude, Vivek Gupta, Piyush Rai, Nagarajan Natarajan, Harish Karnick, Prateek Jain
Abstract We present a novel and scalable label embedding framework for large-scale multi-label learning, a.k.a. ExMLDS (Extreme Multi-Label Learning using Distributional Semantics). Our approach draws inspiration from ideas rooted in distributional semantics, specifically the Skip-Gram Negative Sampling (SGNS) approach widely used to learn word embeddings for natural language processing tasks; learning such embeddings can be reduced to a certain matrix factorization. Our approach is novel in that it highlights interesting connections between label embedding methods used for multi-label learning and paragraph/document embedding methods commonly used for learning representations of text data. The framework can also be easily extended to incorporate auxiliary information such as label-label correlations; this is crucial especially when there are many missing labels in the training data. We demonstrate the effectiveness of our approach through an extensive set of experiments on a variety of benchmark datasets, and show that the proposed learning methods perform favorably compared to several baselines and state-of-the-art methods for large-scale multi-label learning. To facilitate end-to-end learning, we also develop a joint learning algorithm that learns both the embeddings and a regression model that predicts these embeddings from input features, via efficient gradient-based methods.
Tasks Document Embedding, Multi-Label Learning, Word Embeddings
Published 2017-09-18
URL http://arxiv.org/abs/1709.05976v3
PDF http://arxiv.org/pdf/1709.05976v3.pdf
PWC https://paperswithcode.com/paper/leveraging-distributional-semantics-for-multi
Repo
Framework
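
The abstract’s reduction to matrix factorization can be illustrated with the Levy-and-Goldberg view of SGNS: labels that co-occur on an instance play the role of words co-occurring in a context, the shifted positive PMI matrix is factorized for label embeddings, and a regressor maps input features to the aggregated embedding of an instance’s labels. The PPMI shift, embedding size, and ridge regressor below are placeholder choices, not the paper’s exact algorithm.

```python
import numpy as np
from sklearn.linear_model import Ridge

def label_embeddings(Y, dim=32, shift=1.0):
    """Y: binary instance-by-label matrix of shape (n_samples, n_labels)."""
    C = Y.T @ Y                                        # label-label co-occurrence counts
    total = C.sum()
    marg = C.sum(axis=1, keepdims=True)
    pmi = np.log((C * total) / (marg @ marg.T + 1e-12) + 1e-12)
    sppmi = np.maximum(pmi - np.log(shift), 0.0)       # shifted positive PMI (SGNS-style target)
    U, S, _ = np.linalg.svd(sppmi)
    return U[:, :dim] * np.sqrt(S[:dim])               # one embedding row per label

def fit_instance_regressor(X, Y, E):
    """Learn to predict the mean embedding of an instance's positive labels from features X."""
    targets = (Y @ E) / np.maximum(Y.sum(axis=1, keepdims=True), 1)
    return Ridge(alpha=1.0).fit(X, targets)

# At test time: map a new x into embedding space, then rank labels by similarity to the rows of E.
```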

Improving Classification by Improving Labelling: Introducing Probabilistic Multi-Label Object Interaction Recognition

Title Improving Classification by Improving Labelling: Introducing Probabilistic Multi-Label Object Interaction Recognition
Authors Michael Wray, Davide Moltisanti, Walterio Mayol-Cuevas, Dima Damen
Abstract This work deviates from easy-to-define class boundaries for object interactions. For the task of object interaction recognition, often captured using an egocentric view, we show that semantic ambiguities in verbs and recognising sub-interactions along with concurrent interactions result in legitimate class overlaps (Figure 1). We thus aim to model the mapping between observations and interaction classes, as well as class overlaps, towards a probabilistic multi-label classifier that emulates human annotators. Given a video segment containing an object interaction, we model the probability for a verb, out of a list of possible verbs, to be used to annotate that interaction. The probability is learnt from crowdsourced annotations, and is tested on two public datasets, comprising 1405 video sequences for which we provide annotations on 90 verbs. We outperform conventional single-label classification by 11% and 6% on the two datasets respectively, and show that learning from annotation probabilities outperforms majority voting and enables discovery of co-occurring labels.
Tasks
Published 2017-03-24
URL http://arxiv.org/abs/1703.08338v2
PDF http://arxiv.org/pdf/1703.08338v2.pdf
PWC https://paperswithcode.com/paper/improving-classification-by-improving
Repo
Framework
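
Learning from annotation probabilities rather than a single majority-vote verb boils down to training against a soft target distribution per segment. A minimal sketch with an assumed feature backbone and a soft cross-entropy loss; the paper’s exact loss and its multi-label treatment may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_verbs = 90
model = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(), nn.Linear(512, num_verbs))

def soft_label_loss(features, verb_probs):
    """verb_probs[i, v] = fraction of annotators who chose verb v for segment i."""
    log_pred = F.log_softmax(model(features), dim=1)
    return -(verb_probs * log_pred).sum(dim=1).mean()   # cross-entropy with soft targets

# Toy example: three annotators said one verb, one annotator said another, for the same segment.
target = torch.zeros(1, num_verbs)
target[0, 0], target[0, 1] = 0.75, 0.25                  # hypothetical verb indices
loss = soft_label_loss(torch.randn(1, 2048), target)
```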

Unsupervised learning of object frames by dense equivariant image labelling

Title Unsupervised learning of object frames by dense equivariant image labelling
Authors James Thewlis, Hakan Bilen, Andrea Vedaldi
Abstract One of the key challenges of visual perception is to extract abstract models of 3D objects and object categories from visual measurements, which are affected by complex nuisance factors such as viewpoint, occlusion, motion, and deformations. Starting from the recent idea of viewpoint factorization, we propose a new approach that, given a large number of images of an object and no other supervision, can extract a dense object-centric coordinate frame. This coordinate frame is invariant to deformations of the images and comes with a dense equivariant labelling neural network that can map image pixels to their corresponding object coordinates. We demonstrate the applicability of this method to simple articulated objects and deformable objects such as human faces, learning embeddings from random synthetic transformations or optical flow correspondences, all without any manual supervision.
Tasks Optical Flow Estimation, Unsupervised Facial Landmark Detection
Published 2017-06-09
URL http://arxiv.org/abs/1706.02932v2
PDF http://arxiv.org/pdf/1706.02932v2.pdf
PWC https://paperswithcode.com/paper/unsupervised-learning-of-object-frames-by
Repo
Framework
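
The core constraint is equivariance: the dense coordinate map predicted on a warped image must agree with the warped coordinate map of the original image, for warps whose correspondences are known. A condensed sketch with a toy conv net and small random translations standing in for the paper’s architecture and training transformations; on its own this term admits trivial constant solutions, so treat it as the equivariance objective only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 2, 3, padding=1))       # 2 channels: predicted (u, v) object coords

def equivariance_loss(image):
    n = image.size(0)
    theta = torch.eye(2, 3).unsqueeze(0).repeat(n, 1, 1)
    theta[:, :, 2] = 0.2 * torch.rand(n, 2) - 0.1          # small random translation per sample
    grid = F.affine_grid(theta, image.shape, align_corners=False)

    warped = F.grid_sample(image, grid, align_corners=False)   # g . x
    pred_orig = net(image)                                      # Phi(x)
    pred_warp = net(warped)                                     # Phi(g . x)
    # Phi(x), read off at the correspondences induced by g, should match Phi(g . x).
    pred_orig_at_g = F.grid_sample(pred_orig, grid, align_corners=False)
    return F.mse_loss(pred_orig_at_g, pred_warp)

loss = equivariance_loss(torch.rand(4, 3, 64, 64))
```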

Gaze Distribution Analysis and Saliency Prediction Across Age Groups

Title Gaze Distribution Analysis and Saliency Prediction Across Age Groups
Authors Onkar Krishna, Kiyoharu Aizawa, Andrea Helo, Rama Pia
Abstract Knowledge of the human visual system helps to develop better computational models of visual attention. State-of-the-art models have been developed to mimic the visual attention system of young adults, but they largely ignore the variations that occur with age. In this paper, we investigate how visual scene processing changes with age and propose an age-adapted framework that helps to develop a computational model that can predict saliency across different age groups. Our analysis uncovers how the explorativeness of an observer varies with age, how well saliency maps of an age group agree with fixation points of observers from the same or different age groups, and how age influences the center bias. We analyzed the eye movement behavior of 82 observers belonging to four age groups while they explored visual scenes. Explorativeness was quantified in terms of the entropy of a saliency map, and the area under the curve (AUC) metric was used to quantify the agreement analysis and the center bias. These results were used to develop age-adapted saliency models. Our results suggest that the proposed age-adapted saliency model outperforms existing saliency models in predicting the regions of interest across age groups.
Tasks Saliency Prediction
Published 2017-05-20
URL http://arxiv.org/abs/1705.07284v2
PDF http://arxiv.org/pdf/1705.07284v2.pdf
PWC https://paperswithcode.com/paper/gaze-distribution-analysis-and-saliency
Repo
Framework
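
Two of the quantities driving the analysis above, the entropy of a saliency map as an explorativeness proxy and an AUC score for cross-group agreement, are easy to compute; treating the saliency map as a distribution directly and using sklearn’s roc_auc_score are implementation assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def saliency_entropy(sal_map):
    """Entropy of a saliency map treated as a probability distribution over pixels."""
    p = sal_map.astype(float).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())        # higher entropy ~ more spread-out gaze

def fixation_auc(sal_map, fixation_mask):
    """How well sal_map scores fixated pixels above non-fixated ones (AUC agreement)."""
    return roc_auc_score(fixation_mask.ravel().astype(int), sal_map.ravel().astype(float))

sal = np.random.rand(48, 64)                              # synthetic saliency map
fix = (np.random.rand(48, 64) > 0.98).astype(int)         # sparse synthetic fixations
print(saliency_entropy(sal), fixation_auc(sal, fix))
```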

Analysis of planar ornament patterns via motif asymmetry assumption and local connections

Title Analysis of planar ornament patterns via motif asymmetry assumption and local connections
Authors Venera Adanova, Sibel Tari
Abstract Planar ornaments, a.k.a. wallpapers, are regular repetitive patterns which exhibit translational symmetry in two independent directions. There are exactly $17$ distinct planar symmetry groups. We present a fully automatic method for complete analysis of planar ornaments in $13$ of these groups, specifically, the groups called $p6m$, $p6$, $p4g$, $p4m$, $p4$, $p31m$, $p3m1$, $p3$, $cmm$, $pgg$, $pg$, $p2$ and $p1$. Given the image of an ornament fragment, we present a method to simultaneously classify the input into one of the $13$ groups and extract the so-called fundamental domain (FD), the minimum region that is sufficient to reconstruct the entire ornament. A nice feature of our method is that even when the given ornament image is a small portion that does not contain multiple translational units, the symmetry group as well as the fundamental domain can still be defined. This is because, in contrast to the common approach, we do not attempt to first identify a global translational repetition lattice. Though the presented constructions work for quite a wide range of ornament patterns, a key assumption we make is that the perceivable motifs (shapes that repeat) alone do not provide clues for the underlying symmetries of the ornament. In this sense, our main target is the planar arrangements of asymmetric interlocking shapes, as in the symmetry art of Escher.
Tasks
Published 2017-10-12
URL http://arxiv.org/abs/1710.04623v1
PDF http://arxiv.org/pdf/1710.04623v1.pdf
PWC https://paperswithcode.com/paper/analysis-of-planar-ornament-patterns-via
Repo
Framework

An Ensemble Quadratic Echo State Network for Nonlinear Spatio-Temporal Forecasting

Title An Ensemble Quadratic Echo State Network for Nonlinear Spatio-Temporal Forecasting
Authors Patrick L. McDermott, Christopher K. Wikle
Abstract Spatio-temporal data and processes are prevalent across a wide variety of scientific disciplines. These processes are often characterized by nonlinear time dynamics that include interactions across multiple scales of spatial and temporal variability. The data sets associated with many of these processes are increasing in size due to advances in automated data measurement, management, and numerical simulator output. Nonlinear spatio-temporal models have only recently seen interest in statistics, but there are many classes of such models in the engineering and geophysical sciences. Traditionally, these models are more heuristic than those that have been presented in the statistics literature, but are often intuitive and quite efficient computationally. We show here that with fairly simple, but important, enhancements, the echo state network (ESN) machine learning approach can be used to generate long-lead forecasts of nonlinear spatio-temporal processes, with reasonable uncertainty quantification, and at only a fraction of the computational expense of traditional parametric nonlinear spatio-temporal models.
Tasks Spatio-Temporal Forecasting
Published 2017-08-16
URL http://arxiv.org/abs/1708.05094v1
PDF http://arxiv.org/pdf/1708.05094v1.pdf
PWC https://paperswithcode.com/paper/an-ensemble-quadratic-echo-state-network-for
Repo
Framework
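
A compact sketch of a quadratic echo state network: a fixed random reservoir is driven by the input series, the readout regresses the target on the hidden state and its element-wise square, and an ensemble over random reservoirs gives a crude forecast spread. Reservoir size, spectral radius, and ridge penalty are illustrative choices, and the paper’s specific enhancements are not reproduced here.

```python
import numpy as np

def run_reservoir(u, n_res=200, rho=0.9, seed=0):
    """Drive a random tanh reservoir with input series u of shape (T, d)."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_res, u.shape[1]))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))    # rescale to the desired spectral radius
    h, states = np.zeros(n_res), []
    for u_t in u:
        h = np.tanh(W @ h + W_in @ u_t)
        states.append(np.concatenate([h, h ** 2]))     # quadratic readout features
    return np.array(states)

def fit_readout(states, y, ridge=1e-4):
    """Ridge-regression readout from reservoir states to targets."""
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ y)

def esn_ensemble_forecast(u, y, u_new, n_members=10):
    preds = []
    for seed in range(n_members):
        H = run_reservoir(np.vstack([u, u_new]), seed=seed)
        W_out = fit_readout(H[: len(u)], y)
        preds.append(H[len(u):] @ W_out)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)        # forecast and crude ensemble spread
```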

Optimal Transport for Deep Joint Transfer Learning

Title Optimal Transport for Deep Joint Transfer Learning
Authors Ying Lu, Liming Chen, Alexandre Saidi
Abstract Training a Deep Neural Network (DNN) from scratch requires a large amount of labeled data. For a classification task where only a small amount of training data is available, a common solution is to fine-tune a DNN that is pre-trained with related source data. This consecutive training process is time consuming and does not explicitly consider the relatedness between different source and target tasks. In this paper, we propose a novel method to jointly fine-tune a Deep Neural Network with source data and target data. By adding an Optimal Transport loss (OT loss) between source and target classifier predictions as a constraint on the source classifier, the proposed Joint Transfer Learning Network (JTLN) can effectively learn useful knowledge for target classification from source data. Furthermore, by using different kinds of metrics as the cost matrix for the OT loss, JTLN can incorporate different prior knowledge about the relatedness between target categories and source categories. We carried out experiments with JTLN based on AlexNet on image classification datasets, and the results verify the effectiveness of the proposed JTLN in comparison with standard consecutive fine-tuning. This joint transfer learning with an OT loss is general and can also be applied to other kinds of neural networks.
Tasks Image Classification, Transfer Learning
Published 2017-09-09
URL http://arxiv.org/abs/1709.02995v1
PDF http://arxiv.org/pdf/1709.02995v1.pdf
PWC https://paperswithcode.com/paper/optimal-transport-for-deep-joint-transfer
Repo
Framework
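
One way the OT constraint could look in code: couple the average class-probability vectors of the source and target heads through an entropy-regularised (Sinkhorn) OT loss whose cost matrix encodes how related source and target categories are. The Sinkhorn routine, the averaging of predictions, and the loss weighting are hedged guesses at the formulation, not the paper’s exact objective.

```python
import torch
import torch.nn.functional as F

def sinkhorn_loss(p, q, cost, eps=0.1, n_iter=50):
    """Entropy-regularised OT cost between histograms p (source classes) and q (target classes)."""
    K = torch.exp(-cost / eps)
    u = torch.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.t() @ u + 1e-9)
        u = p / (K @ v + 1e-9)
    transport = u.unsqueeze(1) * K * v.unsqueeze(0)   # approximate transport plan
    return (transport * cost).sum()

def joint_transfer_loss(src_logits, src_labels, tgt_logits, tgt_labels, cost, lam=0.1):
    p = F.softmax(src_logits, dim=1).mean(dim=0)      # average source-class prediction
    q = F.softmax(tgt_logits, dim=1).mean(dim=0)      # average target-class prediction
    return (F.cross_entropy(src_logits, src_labels)
            + F.cross_entropy(tgt_logits, tgt_labels)
            + lam * sinkhorn_loss(p, q, cost))

# cost[i, j] is any prior dissimilarity between source class i and target class j,
# e.g. distances between class-name embeddings; a random placeholder here:
cost = torch.rand(10, 5)
```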

Adaptive Correlation Filters with Long-Term and Short-Term Memory for Object Tracking

Title Adaptive Correlation Filters with Long-Term and Short-Term Memory for Object Tracking
Authors Chao Ma, Jia-Bin Huang, Xiaokang Yang, Ming-Hsuan Yang
Abstract Object tracking is challenging as target objects often undergo drastic appearance changes over time. Recently, adaptive correlation filters have been successfully applied to object tracking. However, tracking algorithms relying on highly adaptive correlation filters are prone to drift due to noisy updates. Moreover, as these algorithms do not maintain long-term memory of target appearance, they cannot recover from tracking failures caused by heavy occlusion or target disappearance in the camera view. In this paper, we propose to learn multiple adaptive correlation filters with both long-term and short-term memory of target appearance for robust object tracking. First, we learn a kernelized correlation filter with an aggressive learning rate for locating target objects precisely. We take into account the appropriate size of surrounding context and the feature representations. Second, we learn a correlation filter over a feature pyramid centered at the estimated target position for predicting scale changes. Third, we learn a complementary correlation filter with a conservative learning rate to maintain long-term memory of target appearance. We use the output responses of this long-term filter to determine if tracking failure occurs. In the case of tracking failures, we apply an incrementally learned detector to recover the target position in a sliding window fashion. Extensive experimental results on large-scale benchmark datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods in terms of efficiency, accuracy, and robustness.
Tasks Object Tracking
Published 2017-07-07
URL http://arxiv.org/abs/1707.02309v2
PDF http://arxiv.org/pdf/1707.02309v2.pdf
PWC https://paperswithcode.com/paper/adaptive-correlation-filters-with-long-term
Repo
Framework
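
A stripped-down numpy sketch of the long-/short-term memory idea on top of a MOSSE-style correlation filter: the short-term filter is updated aggressively every frame, the long-term filter conservatively, and a low long-term response flags a tracking failure that would hand over to a re-detector. Learning rates, the Gaussian target, and the threshold are assumptions; the actual tracker uses kernelized filters, learned features, and a scale pyramid.

```python
import numpy as np

def gaussian_target(h, w, sigma=2.0):
    ys, xs = np.mgrid[:h, :w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def train_filter(patch, target, lam=1e-2):
    P, G = np.fft.fft2(patch), np.fft.fft2(target)
    return (G * np.conj(P)) / (P * np.conj(P) + lam)   # closed-form ridge solution in the Fourier domain

def respond(filt, patch):
    return np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))

h, w = 64, 64
target = gaussian_target(h, w)
first_patch = np.random.rand(h, w)                     # stand-in for a grayscale image patch
H_short = H_long = train_filter(first_patch, target)

def track_step(patch, lr_short=0.1, lr_long=0.01, failure_thresh=0.2):
    global H_short, H_long
    resp_short, resp_long = respond(H_short, patch), respond(H_long, patch)
    position = np.unravel_index(np.argmax(resp_short), resp_short.shape)
    if resp_long.max() < failure_thresh:
        position = None            # low long-term confidence: hand over to a re-detector
    H_new = train_filter(patch, target)
    H_short = (1 - lr_short) * H_short + lr_short * H_new   # aggressive short-term memory
    H_long = (1 - lr_long) * H_long + lr_long * H_new       # conservative long-term memory
    return position
```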

Disentangled Variational Auto-Encoder for Semi-supervised Learning

Title Disentangled Variational Auto-Encoder for Semi-supervised Learning
Authors Yang Li, Quan Pan, Suhang Wang, Haiyun Peng, Tao Yang, Erik Cambria
Abstract Semi-supervised learning is attracting increasing attention because datasets in many domains lack enough labeled data. The Variational Auto-Encoder (VAE), in particular, has demonstrated the benefits of semi-supervised learning. The majority of existing semi-supervised VAEs utilize a classifier to exploit label information, where the parameters of the classifier are introduced into the VAE. Given the limited labeled data, learning the parameters of such classifiers may not be an optimal way to exploit label information. Therefore, in this paper, we develop a novel approach for semi-supervised VAEs without a classifier. Specifically, we propose a new model called Semi-supervised Disentangled VAE (SDVAE), which encodes the input data into a disentangled representation and a non-interpretable representation; the category information is then directly utilized to regularize the disentangled representation via an equality constraint. To further enhance the feature learning ability of the proposed VAE, we incorporate reinforcement learning to mitigate the lack of data. The dynamic framework is capable of dealing with both image and text data with its corresponding encoder and decoder networks. Extensive experiments on image and text datasets demonstrate the effectiveness of the proposed framework.
Tasks
Published 2017-09-15
URL http://arxiv.org/abs/1709.05047v2
PDF http://arxiv.org/pdf/1709.05047v2.pdf
PWC https://paperswithcode.com/paper/disentangled-variational-auto-encoder-for
Repo
Framework
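
A hedged sketch of the classifier-free construction: the encoder splits its code into a disentangled part treated directly as class scores and a non-interpretable part, and labeled examples constrain the disentangled part to match their category (the equality constraint) while the usual VAE terms handle reconstruction. Layer sizes and loss weights are placeholders, and the reinforcement-learning component mentioned in the abstract is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, z_dim = 10, 16

enc = nn.Linear(784, num_classes + 2 * z_dim)   # [class scores | mu | logvar]
dec = nn.Linear(num_classes + z_dim, 784)

def sdvae_loss(x, y=None):
    code = enc(x)
    class_scores = code[:, :num_classes]         # disentangled, human-readable part
    mu, logvar = code[:, num_classes:].chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    recon = torch.sigmoid(dec(torch.cat([F.softmax(class_scores, dim=1), z], dim=1)))
    loss = F.binary_cross_entropy(recon, x) \
           - 0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # reconstruction + KL
    if y is not None:                             # labeled data: equality constraint on the disentangled part
        loss = loss + F.cross_entropy(class_scores, y)
    return loss
```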

Sentence-level quality estimation by predicting HTER as a multi-component metric

Title Sentence-level quality estimation by predicting HTER as a multi-component metric
Authors Eleftherios Avramidis
Abstract This submission investigates alternative machine learning models for predicting the HTER score at the sentence level. Instead of directly predicting the HTER score, we suggest a model that jointly predicts the amounts of the four distinct post-editing operations, which are then used to calculate the HTER score. This also makes it possible to correct invalid (e.g. negative) predicted values prior to the calculation of the HTER score. Without any feature exploration, a multi-layer perceptron with four outputs yields small but significant improvements over the baseline.
Tasks
Published 2017-07-19
URL http://arxiv.org/abs/1707.06167v1
PDF http://arxiv.org/pdf/1707.06167v1.pdf
PWC https://paperswithcode.com/paper/sentence-level-quality-estimation-by
Repo
Framework
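
The multi-component formulation is straightforward to mock up: a regressor with four outputs predicts the post-editing operation counts, negative predictions are clipped to zero, and HTER is recomputed as total edits divided by reference length. The MLPRegressor settings and the random feature matrix below are placeholders for the submission’s sentence-level quality-estimation features.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# output columns: insertions, deletions, substitutions, shifts
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)

def predict_hter(model, features, ref_lengths):
    ops = model.predict(features)                # shape (n_sentences, 4)
    ops = np.clip(ops, 0, None)                  # correct invalid negative counts
    return ops.sum(axis=1) / np.maximum(ref_lengths, 1)

# toy usage with random features and operation counts
X = np.random.rand(100, 20)
Y = np.random.randint(0, 5, size=(100, 4)).astype(float)
model.fit(X, Y)
print(predict_hter(model, X[:3], ref_lengths=np.array([12, 20, 9])))
```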

Multi-dimensional Gated Recurrent Units for Automated Anatomical Landmark Localization

Title Multi-dimensional Gated Recurrent Units for Automated Anatomical Landmark Localization
Authors Simon Andermatt, Simon Pezold, Michael Amann, Philippe C. Cattin
Abstract We present an automated method for localizing an anatomical landmark in three-dimensional medical images. The method combines two recurrent neural networks in a coarse-to-fine approach: The first network determines a candidate neighborhood by analyzing the complete given image volume. The second network localizes the actual landmark precisely and accurately in the candidate neighborhood. Both networks take advantage of multi-dimensional gated recurrent units in their main layers, which allow for high model complexity with a comparatively small set of parameters. We localize the medullopontine sulcus in 3D magnetic resonance images of the head and neck. We show that the proposed approach outperforms similar localization techniques both in terms of mean distance in millimeters and voxels w.r.t. manual labelings of the data. With a mean localization error of 1.7 mm, the proposed approach performs on par with neurological experts, as we demonstrate in an interrater comparison.
Tasks
Published 2017-08-09
URL http://arxiv.org/abs/1708.02766v1
PDF http://arxiv.org/pdf/1708.02766v1.pdf
PWC https://paperswithcode.com/paper/multi-dimensional-gated-recurrent-units-for
Repo
Framework

Heterogeneous domain adaptation: An unsupervised approach

Title Heterogeneous domain adaptation: An unsupervised approach
Authors Feng Liu, Guanquan Zhang, Jie Lu
Abstract Domain adaptation leverages the knowledge in one domain - the source domain - to improve learning efficiency in another domain - the target domain. Existing heterogeneous domain adaptation research is relatively well-progressed, but only in situations where the target domain contains at least a few labeled instances. In contrast, heterogeneous domain adaptation with an unlabeled target domain has not been well-studied. To contribute to the research in this emerging field, this paper presents: (1) an unsupervised knowledge transfer theorem that guarantees the correctness of transferring knowledge; and (2) a principal angle-based metric to measure the distance between two pairs of domains: one pair comprises the original source and target domains and the other pair comprises two homogeneous representations of two domains. The theorem and the metric have been implemented in an innovative transfer model, called a Grassmann-Linear monotonic maps-geodesic flow kernel (GLG), that is specifically designed for heterogeneous unsupervised domain adaptation (HeUDA). The linear monotonic maps meet the conditions of the theorem and are used to construct homogeneous representations of the heterogeneous domains. The metric shows the extent to which the homogeneous representations have preserved the information in the original source and target domains. By minimizing the proposed metric, the GLG model learns the homogeneous representations of heterogeneous domains and transfers knowledge through these learned representations via a geodesic flow kernel. To evaluate the model, five public datasets were reorganized into ten HeUDA tasks across three applications: cancer detection, credit assessment, and text classification. The experiments demonstrate that the proposed model delivers superior performance over the existing baselines.
Tasks Domain Adaptation, Text Classification, Transfer Learning, Unsupervised Domain Adaptation
Published 2017-01-10
URL https://arxiv.org/abs/1701.02511v5
PDF https://arxiv.org/pdf/1701.02511v5.pdf
PWC https://paperswithcode.com/paper/heterogeneous-transfer-learning-an
Repo
Framework
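
A brief sketch of the principal-angle ingredient: once both domains have homogeneous representations, each is summarized by the subspace of its top principal directions, and domain distance is scored by the principal angles between the two subspaces. The PCA dimension and the summed-angle distance are assumptions; the full GLG model additionally learns the linear monotonic maps and transfers through a geodesic flow kernel.

```python
import numpy as np
from scipy.linalg import subspace_angles

def domain_basis(X, dim=10):
    """Orthonormal basis of the top principal directions of a centered domain."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:dim].T                       # shape (n_features, dim)

def principal_angle_distance(X_src, X_tgt, dim=10):
    angles = subspace_angles(domain_basis(X_src, dim), domain_basis(X_tgt, dim))
    return float(np.sum(angles))            # 0 when the two subspaces coincide

src = np.random.randn(500, 50)
tgt = np.random.randn(400, 50) @ np.random.randn(50, 50)   # differently oriented domain
print(principal_angle_distance(src, tgt))
```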