October 16, 2019

2836 words 14 mins read

Paper Group ANR 985

Active Learning for Interactive Neural Machine Translation of Data Streams. Gnirut: The Trouble With Being Born Human In An Autonomous World. Online Visual Robot Tracking and Identification using Deep LSTM Networks. On the Differences between L2-Boosting and the Lasso. Robustness of classifiers to uniform $\ell_p$ and Gaussian noise. A Comparative …

Active Learning for Interactive Neural Machine Translation of Data Streams

Title Active Learning for Interactive Neural Machine Translation of Data Streams
Authors Álvaro Peris, Francisco Casacuberta
Abstract We study the application of active learning techniques to the translation of unbounded data streams via interactive neural machine translation. The main idea is to select, from an unbounded stream of source sentences, those worth being supervised by a human agent. The user interactively translates those samples. Once validated, these data are useful for adapting the neural machine translation model. We propose two novel methods for selecting the samples to be validated, exploiting information from the attention mechanism of a neural machine translation system. Our experiments show that including active learning techniques in this pipeline reduces the effort required during the process while increasing the quality of the translation system. Moreover, it makes it possible to balance the human effort required for achieving a certain translation quality. Finally, our neural system outperforms classical approaches by a large margin.
Tasks Active Learning, Machine Translation
Published 2018-07-30
URL http://arxiv.org/abs/1807.11243v2
PDF http://arxiv.org/pdf/1807.11243v2.pdf
PWC https://paperswithcode.com/paper/active-learning-for-interactive-neural
Repo
Framework
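
The selection criterion below is only a rough illustration of how attention information could drive sampling from a stream: it scores each source sentence by the mean entropy of its attention rows and keeps the most dispersed ones for human supervision. The function names and the threshold are assumptions made for the sketch, not the paper's actual selection methods.

```python
# Illustrative sketch (not the paper's exact criterion): score each source
# sentence by the entropy of its NMT attention rows and select the most
# "uncertain" ones for human supervision.
import numpy as np

def attention_entropy_score(attention):
    """attention: (target_len, source_len) matrix with rows summing to 1.
    Higher mean row entropy -> more dispersed attention, used here as a
    proxy for translation uncertainty."""
    eps = 1e-12
    row_entropy = -np.sum(attention * np.log(attention + eps), axis=1)
    return float(row_entropy.mean())

def select_for_supervision(stream, score_fn, threshold):
    """Yield (index, sentence) pairs from an unbounded stream whose
    attention-based score exceeds the sampling threshold."""
    for i, (sentence, attention) in enumerate(stream):
        if score_fn(attention) > threshold:
            yield i, sentence

# Toy usage: one sentence with sharp attention, one with dispersed attention.
sharp = np.array([[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]])
flat = np.full((2, 3), 1.0 / 3)
stream = [("la casa azul", sharp), ("frase dificil", flat)]
picked = list(select_for_supervision(stream, attention_entropy_score, 0.8))
print(picked)  # only the dispersed-attention sentence is selected
```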

Gnirut: The Trouble With Being Born Human In An Autonomous World

Title Gnirut: The Trouble With Being Born Human In An Autonomous World
Authors Luca Viganó, Diego Sempreboni
Abstract What if we delegated so much to autonomous AI and intelligent machines that they passed a law forbidding humans from carrying out a number of professions? We conceive the plot of a new episode of Black Mirror to reflect on what might await us and how we can deal with such a future.
Tasks
Published 2018-07-11
URL http://arxiv.org/abs/1807.06078v1
PDF http://arxiv.org/pdf/1807.06078v1.pdf
PWC https://paperswithcode.com/paper/gnirut-the-trouble-with-being-born-human-in
Repo
Framework

Online Visual Robot Tracking and Identification using Deep LSTM Networks

Title Online Visual Robot Tracking and Identification using Deep LSTM Networks
Authors Hafez Farazi, Sven Behnke
Abstract Collaborative robots working on a common task are necessary for many applications. One of the challenges for achieving collaboration in a team of robots is mutual tracking and identification. We present a novel pipeline for online vision-based detection, tracking and identification of robots with a known and identical appearance. Our method runs in real time on the limited hardware of the observer robot. Unlike previous works addressing robot tracking and identification, we use a data-driven approach based on recurrent neural networks to learn relations between sequential inputs and outputs. We formulate the data association problem as multiple classification problems. A deep LSTM network was trained on a simulated dataset and fine-tuned on a small set of real data. Experiments on two challenging datasets, one synthetic and one real, which include long-term occlusions, show promising results.
Tasks
Published 2018-10-11
URL http://arxiv.org/abs/1810.04941v2
PDF http://arxiv.org/pdf/1810.04941v2.pdf
PWC https://paperswithcode.com/paper/online-visual-robot-tracking-and
Repo
Framework
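
A minimal PyTorch sketch of the general recipe of framing data association as per-frame classification with an LSTM; the feature dimension, hidden size and number of identities are invented for illustration and do not reflect the authors' architecture.

```python
# An LSTM consumes a sequence of per-frame detection features and emits, at
# every time step, a class distribution over track identities.
import torch
import torch.nn as nn

class TrackIdentifierLSTM(nn.Module):
    def __init__(self, feat_dim=8, hidden_dim=64, num_identities=4):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_identities)

    def forward(self, detections):
        # detections: (batch, time, feat_dim) per-frame detection features
        hidden, _ = self.lstm(detections)
        return self.classifier(hidden)       # (batch, time, num_identities)

model = TrackIdentifierLSTM()
batch = torch.randn(2, 10, 8)                # 2 sequences, 10 frames each
logits = model(batch)
labels = torch.randint(0, 4, (2, 10))        # ground-truth identity per frame
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 4), labels.reshape(-1))
loss.backward()
print(logits.shape, loss.item())
```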

On the Differences between L2-Boosting and the Lasso

Title On the Differences between L2-Boosting and the Lasso
Authors Michael Vogt
Abstract We prove that L2-Boosting lacks a theoretical property which is central to the behaviour of $\ell_1$-penalized methods such as basis pursuit and the Lasso: whereas $\ell_1$-penalized methods are guaranteed to recover the sparse parameter vector in a high-dimensional linear model under an appropriate restricted nullspace property, L2-Boosting is not guaranteed to do so. Hence, L2-Boosting behaves quite differently from $\ell_1$-penalized methods when it comes to parameter recovery/estimation in high-dimensional linear models.
Tasks
Published 2018-12-13
URL http://arxiv.org/abs/1812.05421v1
PDF http://arxiv.org/pdf/1812.05421v1.pdf
PWC https://paperswithcode.com/paper/on-the-differences-between-l2-boosting-and
Repo
Framework
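
The contrast can be reproduced numerically with standard componentwise L2-Boosting against scikit-learn's Lasso on a sparse linear model. This is a generic illustration of the two estimators, not the paper's construction; the step size, number of iterations and regularization strength are arbitrary choices.

```python
# Componentwise least-squares L2-Boosting vs. the Lasso for sparse recovery.
import numpy as np
from sklearn.linear_model import Lasso

def l2_boost(X, y, n_steps=200, nu=0.1):
    n, p = X.shape
    beta, residual = np.zeros(p), y.copy()
    for _ in range(n_steps):
        # pick the coordinate whose univariate fit reduces the residual most
        scores = X.T @ residual
        norms = (X ** 2).sum(axis=0)
        j = np.argmax(scores ** 2 / norms)
        step = scores[j] / norms[j]
        beta[j] += nu * step
        residual -= nu * step * X[:, j]
    return beta

rng = np.random.default_rng(0)
n, p = 100, 500
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:5] = 3.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_boost = l2_boost(X, y)
beta_lasso = Lasso(alpha=0.05).fit(X, y).coef_
print("boosting error:", np.linalg.norm(beta_boost - beta_true))
print("lasso error:   ", np.linalg.norm(beta_lasso - beta_true))
```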

Robustness of classifiers to uniform $\ell_p$ and Gaussian noise

Title Robustness of classifiers to uniform $\ell_p$ and Gaussian noise
Authors Jean-Yves Franceschi, Alhussein Fawzi, Omar Fawzi
Abstract We study the robustness of classifiers to various kinds of random noise models. In particular, we consider noise drawn uniformly from the $\ell_p$ ball for $p \in [1, \infty]$ and Gaussian noise with an arbitrary covariance matrix. We characterize this robustness to random noise in terms of the distance to the decision boundary of the classifier. This analysis applies to linear classifiers as well as classifiers with locally approximately flat decision boundaries, a condition which is satisfied by state-of-the-art deep neural networks. The predicted robustness is verified experimentally.
Tasks
Published 2018-02-22
URL http://arxiv.org/abs/1802.07971v1
PDF http://arxiv.org/pdf/1802.07971v1.pdf
PWC https://paperswithcode.com/paper/robustness-of-classifiers-to-uniform-ell_p
Repo
Framework
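
For the linear case the characterization is easy to verify numerically: the probability that isotropic Gaussian noise flips the decision depends only on the distance of the point to the hyperplane. The sketch below compares the closed-form prediction with a Monte-Carlo estimate; the particular weights and noise level are arbitrary.

```python
# For f(x) = w.x + b, Gaussian noise N(0, sigma^2 I) flips the decision with
# probability Phi(-d / sigma), where d = |w.x + b| / ||w|| is the distance of
# x to the decision boundary.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
w, b = np.array([2.0, -1.0, 0.5]), 0.3
x = np.array([1.0, 0.5, -0.2])
sigma = 0.8

dist = abs(w @ x + b) / np.linalg.norm(w)
predicted = norm.cdf(-dist / sigma)              # theoretical flip probability

noise = sigma * rng.standard_normal((200_000, x.size))
flipped = np.sign(w @ (x + noise).T + b) != np.sign(w @ x + b)
print(f"distance to boundary: {dist:.3f}")
print(f"predicted flip prob:  {predicted:.4f}")
print(f"empirical flip prob:  {flipped.mean():.4f}")
```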

A Comparative Study of Pairwise Learning Methods based on Kernel Ridge Regression

Title A Comparative Study of Pairwise Learning Methods based on Kernel Ridge Regression
Authors Michiel Stock, Tapio Pahikkala, Antti Airola, Bernard De Baets, Willem Waegeman
Abstract Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction or network inference problems. During the last decade kernel methods have played a dominant role in pairwise learning. They still obtain state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify existing kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent-task kernel ridge regression, two-step kernel ridge regression and a linear matrix filter arise naturally as special cases of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency and spectral filtering properties. Our theoretical results provide valuable insights for assessing the advantages and limitations of existing pairwise learning methods.
Tasks Zero-Shot Learning
Published 2018-03-05
URL http://arxiv.org/abs/1803.01575v1
PDF http://arxiv.org/pdf/1803.01575v1.pdf
PWC https://paperswithcode.com/paper/a-comparative-study-of-pairwise-learning
Repo
Framework
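
A compact sketch of the closed-form Kronecker kernel ridge regression fit, computed through the eigendecompositions of the two side kernels rather than the explicit Kronecker system. The kernels and label matrix below are synthetic, and this shows only the basic instantiation, not the special cases discussed in the paper.

```python
# Kronecker kernel ridge regression without ever forming the Kronecker product.
import numpy as np

def kron_krr_fit(K_d, K_t, Y, lam):
    """Solve (K_t kron K_d + lam*I) vec(A) = vec(Y).
    K_d: (n,n) kernel over one object set, K_t: (m,m) kernel over the other,
    Y: (n,m) pairwise label matrix."""
    lam_d, U = np.linalg.eigh(K_d)
    lam_t, V = np.linalg.eigh(K_t)
    C = U.T @ Y @ V
    C /= (np.outer(lam_d, lam_t) + lam)      # divide by lam_i * sig_j + lam
    return U @ C @ V.T                        # dual coefficients A

rng = np.random.default_rng(0)
n, m = 30, 20
Xd, Xt = rng.standard_normal((n, 5)), rng.standard_normal((m, 4))
K_d, K_t = Xd @ Xd.T, Xt @ Xt.T
Y = rng.standard_normal((n, m))

A = kron_krr_fit(K_d, K_t, Y, lam=1.0)
F = K_d @ A @ K_t                             # in-sample pairwise predictions
# sanity check against the explicit Kronecker system
lhs = np.kron(K_t, K_d) + 1.0 * np.eye(n * m)
A_ref = np.linalg.solve(lhs, Y.reshape(-1, order="F")).reshape(n, m, order="F")
print(np.allclose(A, A_ref))                  # True
```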

Deep Saliency Hashing

Title Deep Saliency Hashing
Authors Sheng Jin, Hongxun Yao, Xiaoshuai Sun, Shangchen Zhou, Lei Zhang, Xiansheng Hua
Abstract In recent years, hashing methods have proven effective and efficient for large-scale Web media search. However, existing general hashing methods have limited discriminative power for describing fine-grained objects that share a similar overall appearance but have subtle differences. To solve this problem, we introduce, for the first time, the attention mechanism to the learning of fine-grained hashing codes. Specifically, we propose a novel deep hashing model, named deep saliency hashing (DSaH), which automatically mines salient regions and learns semantic-preserving hashing codes simultaneously. DSaH is a two-step end-to-end model consisting of an attention network and a hashing network. Our loss function contains three basic components: the semantic loss, the saliency loss, and the quantization loss. As the core of DSaH, the saliency loss guides the attention network to mine discriminative regions from pairs of images. We conduct extensive experiments on both fine-grained and general retrieval datasets for performance evaluation. Experimental results on fine-grained datasets, including Oxford Flowers-17, Stanford Dogs-120, and CUB Bird, demonstrate that our DSaH performs best on the fine-grained retrieval task and beats the strongest competitor (DTQ) by approximately 10% on both Stanford Dogs-120 and CUB Bird. DSaH is also comparable to several state-of-the-art hashing methods on general datasets, including CIFAR-10 and NUS-WIDE.
Tasks Quantization
Published 2018-07-04
URL http://arxiv.org/abs/1807.01459v2
PDF http://arxiv.org/pdf/1807.01459v2.pdf
PWC https://paperswithcode.com/paper/deep-saliency-hashing
Repo
Framework
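
The loss below is only a placeholder illustration of how semantic, saliency and quantization terms can be combined into one objective; the exact DSaH formulations differ, and the margin, weights and attention-agreement term here are invented for the sketch.

```python
# Rough PyTorch sketch of a three-part hashing objective (illustrative only).
import torch
import torch.nn.functional as F

def hashing_loss(h1, h2, similar, att1, att2, w_sal=0.1, w_quant=0.01):
    # semantic term: similar pairs get close codes, dissimilar pairs far apart
    dist = (h1 - h2).pow(2).sum(dim=1)
    semantic = torch.where(similar, dist, F.relu(4.0 - dist)).mean()
    # saliency term (placeholder): attention maps of similar pairs should agree
    saliency = ((att1 - att2).pow(2).mean(dim=(1, 2)) * similar.float()).mean()
    # quantization term: push relaxed codes toward {-1, +1}
    quant = (h1.abs() - 1).pow(2).mean() + (h2.abs() - 1).pow(2).mean()
    return semantic + w_sal * saliency + w_quant * quant

h1, h2 = torch.tanh(torch.randn(8, 32)), torch.tanh(torch.randn(8, 32))
att1, att2 = torch.rand(8, 14, 14), torch.rand(8, 14, 14)
similar = torch.randint(0, 2, (8,)).bool()
print(hashing_loss(h1, h2, similar, att1, att2))
```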

Learning Topics using Semantic Locality

Title Learning Topics using Semantic Locality
Authors Ziyi Zhao, Krittaphat Pugdeethosapol, Sheng Lin, Zhe Li, Caiwen Ding, Yanzhi Wang, Qinru Qiu
Abstract Topic modeling discovers the latent topic probabilities of given text documents. To generate more meaningful topics that better represent a given document, we propose a new feature extraction technique that can be used in the data preprocessing stage. The method consists of three steps. First, it generates words and word pairs from every single document. Second, it applies a two-way TF-IDF algorithm to the words and word pairs for semantic filtering. Third, it uses the K-means algorithm to merge word pairs that have similar semantic meanings. Experiments are carried out on the Open Movie Database (OMDb), the Reuters dataset and the 20NewsGroup dataset, with the mean Average Precision score used as the evaluation metric. Compared with other state-of-the-art topic models, such as latent Dirichlet allocation and traditional restricted Boltzmann machines, our proposed data preprocessing improves the generated topic accuracy by up to 12.99%.
Tasks Topic Models
Published 2018-04-11
URL http://arxiv.org/abs/1804.04205v1
PDF http://arxiv.org/pdf/1804.04205v1.pdf
PWC https://paperswithcode.com/paper/learning-topics-using-semantic-locality
Repo
Framework
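
A loose scikit-learn sketch of the three preprocessing steps; the thresholds, the interpretation of "two-way" TF-IDF filtering and the features handed to K-means are assumptions made for illustration, not the paper's exact procedure.

```python
# Step 1: words and adjacent word pairs; Step 2: TF-IDF filtering;
# Step 3: K-means merging of terms with similar usage profiles.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "the cat sat on the mat",
    "a dog chased the cat across the mat",
    "stocks fell sharply as markets closed today",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))   # unigrams and word pairs
tfidf = vectorizer.fit_transform(docs)
terms = np.array(vectorizer.get_feature_names_out())

# Stand-in for the two-way filtering: keep terms whose TF-IDF is notable both
# within some document and across the corpus.
per_doc_max = tfidf.max(axis=0).toarray().ravel()
corpus_mean = np.asarray(tfidf.mean(axis=0)).ravel()
keep = (per_doc_max > 0.2) & (corpus_mean > 0.03)
kept_terms = terms[keep]
kept_vectors = tfidf.toarray().T[keep]             # term-by-document profiles

n_clusters = min(3, len(kept_terms))
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(kept_vectors)
for c in range(n_clusters):
    print(c, list(kept_terms[labels == c]))
```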

A Hand-Held Multimedia Translation and Interpretation System with Application to Diet Management

Title A Hand-Held Multimedia Translation and Interpretation System with Application to Diet Management
Authors Albert Parra, Andrew W. Haddad, Mireille Boutin, Edward J. Delp
Abstract We propose a network-independent, hand-held system to translate and disambiguate foreign restaurant menu items in real time. The system is based on the use of a portable multimedia device, such as a smartphone or a PDA. An accurate and fast translation is obtained using a machine translation engine and context-specific corpora to which we apply two pre-processing steps, called translation standardization and $n$-gram consolidation. The generated phrase table is orders of magnitude lighter than the ones commonly used in market applications, thus making translations computationally less expensive and decreasing battery usage. Translation ambiguities are mitigated using multimedia information, including images of dishes and ingredients, along with ingredient lists. We implemented a prototype of our system on an iPod Touch Second Generation for English speakers traveling in Spain. Our tests indicate that our translation method yields higher accuracy than translation engines such as Google Translate, and does so almost instantaneously. The memory requirements of the application, including the database of images, are also well within the limits of the device. By combining it with a database of nutritional information, our proposed system can be used to help individuals who follow a medical diet maintain this diet while traveling.
Tasks Machine Translation
Published 2018-07-17
URL http://arxiv.org/abs/1807.07149v1
PDF http://arxiv.org/pdf/1807.07149v1.pdf
PWC https://paperswithcode.com/paper/a-hand-held-multimedia-translation-and
Repo
Framework
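
A toy sketch in the spirit of the two preprocessing steps: standardizing source phrases so surface variants collapse to one entry, then consolidating the phrase table. The actual standardization rules and $n$-gram consolidation in the paper are more involved, and the helper names here are hypothetical.

```python
# Collapse case/accent variants of source phrases and keep, for each canonical
# source entry, the most probable translation.
from collections import defaultdict
import unicodedata

def standardize(phrase):
    # fold case and accents so variants map to one canonical source entry
    stripped = unicodedata.normalize("NFKD", phrase.lower())
    return "".join(c for c in stripped if not unicodedata.combining(c))

def consolidate(phrase_table):
    """phrase_table: list of (source, target, prob). Merge entries whose
    standardized source matches, keeping the most probable translation."""
    best = defaultdict(lambda: ("", 0.0))
    for src, tgt, prob in phrase_table:
        key = standardize(src)
        if prob > best[key][1]:
            best[key] = (tgt, prob)
    return dict(best)

table = [
    ("Jamón ibérico", "Iberian ham", 0.9),
    ("jamon iberico", "cured ham", 0.6),
    ("Gazpacho", "cold tomato soup", 0.8),
]
print(consolidate(table))
# {'jamon iberico': ('Iberian ham', 0.9), 'gazpacho': ('cold tomato soup', 0.8)}
```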

Mask-aware networks for crowd counting

Title Mask-aware networks for crowd counting
Authors Shengqin Jiang, Xiaobo Lu, Yinjie Lei, Lingqiao Liu
Abstract The crowd counting problem aims to count the number of objects within an image or a video frame and is usually solved by estimating the density map generated from the object location annotations. The values in the density map, by nature, take two possible states: zero, indicating no objects nearby, or a non-zero value, indicating the presence of objects, with the value denoting the local object density. In contrast to traditional methods, which do not differentiate the density prediction of these two states, we propose to use a dedicated network branch to predict the object/non-object mask and then combine its prediction with the input image to produce the density map. Our rationale is that mask prediction can be better modeled as a binary segmentation problem and that the difficulty of estimating the density is reduced if the mask is known. A key to the proposed scheme is the strategy of incorporating the mask prediction into the density map estimator. To this end, we study five possible solutions, and via analysis and experimental validation we identify the most effective one. Through extensive experiments on five public datasets, we demonstrate the superior performance of the proposed approach over the baselines and show that our network achieves state-of-the-art performance.
Tasks Crowd Counting
Published 2018-12-18
URL https://arxiv.org/abs/1901.00039v2
PDF https://arxiv.org/pdf/1901.00039v2.pdf
PWC https://paperswithcode.com/paper/mask-aware-networks-for-crowd-counting
Repo
Framework
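
A small PyTorch sketch of the mask-aware idea: a mask branch predicts object/non-object regions and the density branch consumes the features together with that mask. The layer sizes and the fusion-by-concatenation choice are assumptions; the paper compares several incorporation strategies.

```python
# Mask branch + density branch, fused by concatenating the predicted mask.
import torch
import torch.nn as nn

class MaskAwareCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.mask_head = nn.Sequential(nn.Conv2d(16, 1, 1), nn.Sigmoid())
        self.density_head = nn.Conv2d(16 + 1, 1, 1)

    def forward(self, image):
        features = self.backbone(image)
        mask = self.mask_head(features)                    # object/non-object
        fused = torch.cat([features, mask], dim=1)         # one fusion option
        density = torch.relu(self.density_head(fused))
        return density, mask

model = MaskAwareCounter()
density, mask = model(torch.randn(1, 3, 64, 64))
print(density.shape, mask.shape, density.sum().item())    # sum ~ crowd count
```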

Efficient Decentralized Deep Learning by Dynamic Model Averaging

Title Efficient Decentralized Deep Learning by Dynamic Model Averaging
Authors Michael Kamp, Linara Adilova, Joachim Sicking, Fabian Hüger, Peter Schlicht, Tim Wirtz, Stefan Wrobel
Abstract We propose an efficient protocol for decentralized training of deep neural networks from distributed data sources. The proposed protocol handles different phases of model training equally well and quickly adapts to concept drifts. This leads to a reduction in communication by an order of magnitude compared to periodically communicating state-of-the-art approaches. Moreover, we derive a communication bound that scales well with the hardness of the serialized learning problem. The reduction in communication comes at almost no cost, as the predictive performance remains virtually unchanged. Indeed, the proposed protocol retains the loss bounds of periodically averaging schemes. An extensive empirical evaluation validates a major improvement in the trade-off between model performance and communication, which could be beneficial for numerous decentralized learning applications, such as autonomous driving, or voice recognition and image classification on mobile phones.
Tasks Autonomous Driving, Image Classification
Published 2018-07-09
URL http://arxiv.org/abs/1807.03210v2
PDF http://arxiv.org/pdf/1807.03210v2.pdf
PWC https://paperswithcode.com/paper/efficient-decentralized-deep-learning-by
Repo
Framework
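
The sketch below illustrates a dynamic-averaging rule in the spirit described above: workers train locally and trigger an averaging round only when their parameters drift too far from the last shared model. The drift criterion, threshold and toy "local step" are illustrative simplifications of the protocol.

```python
# Synchronize (average) only when the workers' mean squared drift from the
# last shared reference model exceeds a threshold.
import numpy as np

rng = np.random.default_rng(0)

def local_step(w):
    return w - 0.1 * rng.standard_normal(w.shape)    # stand-in for an SGD step

def dynamic_averaging(n_workers=4, dim=10, rounds=50, delta=0.5):
    reference = np.zeros(dim)
    workers = [reference.copy() for _ in range(n_workers)]
    communications = 0
    for _ in range(rounds):
        workers = [local_step(w) for w in workers]
        drift = np.mean([np.sum((w - reference) ** 2) for w in workers])
        if drift > delta:                            # sync only when needed
            reference = np.mean(workers, axis=0)
            workers = [reference.copy() for _ in range(n_workers)]
            communications += 1
    return reference, communications

model, syncs = dynamic_averaging()
print("synchronizations triggered:", syncs, "out of 50 rounds")
```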

Ensemble Pruning based on Objection Maximization with a General Distributed Framework

Title Ensemble Pruning based on Objection Maximization with a General Distributed Framework
Authors Yijun Bian, Yijun Wang, Yaqiang Yao, Huanhuan Chen
Abstract Ensemble pruning, selecting a subset of individual learners from an original ensemble, alleviates the time and space costs of ensemble learning. Accuracy and diversity are two crucial factors, yet they usually conflict with each other. To balance both of them, we formalize the ensemble pruning problem as an objection maximization problem based on information entropy. We then propose an ensemble pruning method with a centralized version and a distributed version, where the latter speeds up the former. Finally, we extract a general distributed framework for ensemble pruning, which is widely applicable to most existing ensemble pruning methods and reduces time consumption without much accuracy degradation. Experimental results validate the efficiency of our framework and methods, particularly a remarkable improvement in execution speed accompanied by satisfactory accuracy.
Tasks
Published 2018-06-13
URL https://arxiv.org/abs/1806.04899v3
PDF https://arxiv.org/pdf/1806.04899v3.pdf
PWC https://paperswithcode.com/paper/ensemble-pruning-based-on-objection
Repo
Framework
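
A greedy sketch of accuracy/diversity-driven pruning; the entropy-based objective below is only a stand-in for the paper's objection-maximization formulation, and the weighting between the two terms is arbitrary.

```python
# Greedily grow a sub-ensemble that balances majority-vote accuracy with an
# entropy-based disagreement (diversity) term.
import numpy as np

def pruning_objective(preds, labels, alpha=0.5):
    votes = np.mean(preds, axis=0)                     # fraction voting "1"
    accuracy = np.mean((votes > 0.5).astype(int) == labels)
    eps = 1e-12
    diversity = -np.mean(votes * np.log(votes + eps)
                         + (1 - votes) * np.log(1 - votes + eps))
    return alpha * accuracy + (1 - alpha) * diversity

def greedy_prune(preds, labels, target_size):
    """preds: (n_learners, n_samples) binary predictions. Greedily add the
    learner that most improves the objective until target_size is reached."""
    selected, remaining = [], list(range(preds.shape[0]))
    while len(selected) < target_size:
        best = max(remaining,
                   key=lambda j: pruning_objective(preds[selected + [j]], labels))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
preds = ((rng.random((15, 200)) < 0.7) * labels
         + (rng.random((15, 200)) < 0.2) * (1 - labels)).astype(int)
print(greedy_prune(preds, labels, target_size=5))
```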

Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings

Title Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings
Authors Mohsan Alvi, Andrew Zisserman, Christoffer Nellaker
Abstract Neural networks achieve state-of-the-art results in image classification tasks. However, they can encode spurious variations or biases that may be present in the training data. For example, training an age predictor on a dataset that is not balanced for gender can lead to gender-biased predictions (e.g. wrongly predicting that males are older if only elderly males are in the training set). We present two distinct contributions: 1) An algorithm that can remove multiple sources of variation from the feature representation of a network. We demonstrate that this algorithm can be used to remove biases from the feature representation, and thereby improve classification accuracies, when training networks on extremely biased datasets. 2) An ancestral origin database of 14,000 images of individuals from East Asia, the Indian subcontinent, sub-Saharan Africa, and Western Europe. We demonstrate on this dataset, for a number of facial attribute classification tasks, that we are able to remove racial biases from the network feature representation.
Tasks Facial Attribute Classification, Image Classification
Published 2018-09-06
URL http://arxiv.org/abs/1809.02169v2
PDF http://arxiv.org/pdf/1809.02169v2.pdf
PWC https://paperswithcode.com/paper/turning-a-blind-eye-explicit-removal-of
Repo
Framework
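
A PyTorch sketch of the general "confusion loss" idea for removing a bias variable from a shared embedding: a bias head is trained to predict the bias while the feature extractor is simultaneously pushed to make that head's output uniform. The layer sizes, loss weighting and the detach-based training split are illustrative assumptions, not the authors' exact training procedure.

```python
# Joint task loss + bias-prediction loss + confusion loss toward a uniform
# bias distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
task_head = nn.Linear(64, 10)       # primary task (e.g. age group)
bias_head = nn.Linear(64, 2)        # spurious variable (e.g. gender)

x = torch.randn(32, 128)
y_task = torch.randint(0, 10, (32,))
y_bias = torch.randint(0, 2, (32,))

z = feat(x)
task_loss = F.cross_entropy(task_head(z), y_task)
# the bias head learns the bias from a detached embedding ...
bias_loss = F.cross_entropy(bias_head(z.detach()), y_bias)
# ... while the embedding is pushed toward a uniform bias prediction
log_probs = F.log_softmax(bias_head(z), dim=1)
uniform = torch.full_like(log_probs, 1.0 / log_probs.size(1))
confusion_loss = F.kl_div(log_probs, uniform, reduction="batchmean")

total = task_loss + bias_loss + 0.1 * confusion_loss
total.backward()
print(task_loss.item(), bias_loss.item(), confusion_loss.item())
```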

Physical Layer Communications System Design Over-the-Air Using Adversarial Networks

Title Physical Layer Communications System Design Over-the-Air Using Adversarial Networks
Authors Timothy J. O’Shea, Tamoghna Roy, Nathan West, Benjamin C. Hilburn
Abstract This paper presents a novel method for synthesizing new physical layer modulation and coding schemes for communications systems using a learning-based approach which does not require an analytic model of the impairments in the channel. It extends prior work published on the channel autoencoder to consider the case where the channel response is not known or cannot be easily modeled in a closed-form analytic expression. By adopting an adversarial approach for channel response approximation and information encoding, we can jointly learn a good solution to both tasks over a wide range of channel environments. We describe the operation of the proposed adversarial system, share results for its training and validation over-the-air, and discuss implications and future work in the area.
Tasks
Published 2018-03-08
URL http://arxiv.org/abs/1803.03145v1
PDF http://arxiv.org/pdf/1803.03145v1.pdf
PWC https://paperswithcode.com/paper/physical-layer-communications-system-design
Repo
Framework
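
A compact PyTorch sketch of the alternating scheme: a channel approximation network is fit to (transmitted, received) observations, and the encoder/decoder autoencoder is then trained end-to-end through that learned surrogate. For brevity the adversarial approximation of the paper is replaced here by a plain regression fit, and the toy AWGN "real channel" and layer sizes are assumptions.

```python
# Alternate between (1) fitting a surrogate channel to observed tx/rx pairs
# and (2) training the encoder/decoder through the surrogate.
import torch
import torch.nn as nn

bits, n_channel = 4, 8
enc = nn.Sequential(nn.Linear(bits, 32), nn.ReLU(), nn.Linear(32, n_channel))
dec = nn.Sequential(nn.Linear(n_channel, 32), nn.ReLU(), nn.Linear(32, bits))
channel_approx = nn.Sequential(nn.Linear(n_channel, 32), nn.ReLU(),
                               nn.Linear(32, n_channel))

def real_channel(x):                       # unknown to the learner in practice
    return x + 0.1 * torch.randn_like(x)   # toy AWGN stand-in

opt_ch = torch.optim.Adam(channel_approx.parameters(), lr=1e-3)
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for step in range(200):
    msgs = torch.randint(0, 2, (64, bits)).float()
    tx = enc(msgs)
    # phase 1: fit the channel approximation to over-the-air observations
    rx = real_channel(tx.detach())
    opt_ch.zero_grad()
    nn.functional.mse_loss(channel_approx(tx.detach()), rx).backward()
    opt_ch.step()
    # phase 2: train encoder/decoder through the learned surrogate channel
    opt_ae.zero_grad()
    recon = dec(channel_approx(tx))
    nn.functional.binary_cross_entropy_with_logits(recon, msgs).backward()
    opt_ae.step()

final = nn.functional.binary_cross_entropy_with_logits(
    dec(channel_approx(enc(msgs))), msgs)
print("final reconstruction loss:", final.item())
```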

Imbalanced Deep Learning by Minority Class Incremental Rectification

Title Imbalanced Deep Learning by Minority Class Incremental Rectification
Authors Qi Dong, Shaogang Gong, Xiatian Zhu
Abstract Model learning from class imbalanced training data is a long-standing and significant challenge for machine learning. In particular, existing deep learning methods consider mostly either class balanced data or moderately imbalanced data in model training, and ignore the challenge of learning from significantly imbalanced training data. To address this problem, we formulate a class imbalanced deep learning model based on batch-wise incremental minority (sparsely sampled) class rectification by hard sample mining in majority (frequently sampled) classes during model training. This model is designed to minimise the dominant effect of majority classes by discovering sparsely sampled boundaries of minority classes in an iterative batch-wise learning process. To that end, we introduce a Class Rectification Loss (CRL) function that can be deployed readily in deep network architectures. Extensive experimental evaluations are conducted on three imbalanced person attribute benchmark datasets (CelebA, X-Domain, DeepFashion) and one balanced object category benchmark dataset (CIFAR-100). These experimental results demonstrate the performance advantages and model scalability of the proposed batch-wise incremental minority class rectification model over the existing state-of-the-art models for addressing the problem of imbalanced data learning.
Tasks Facial Attribute Classification
Published 2018-04-28
URL http://arxiv.org/abs/1804.10851v1
PDF http://arxiv.org/pdf/1804.10851v1.pdf
PWC https://paperswithcode.com/paper/imbalanced-deep-learning-by-minority-class
Repo
Framework
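
A rough PyTorch sketch of batch-wise minority-class rectification with hard mining: each minority-class anchor is paired with its hardest positive and hardest negative under a triplet-style margin. This is an illustrative stand-in, not the exact Class Rectification Loss.

```python
# Hard mining restricted to minority-class anchors within a mini-batch.
import torch
import torch.nn.functional as F

def minority_rectification_loss(embeddings, labels, minority_classes, margin=0.5):
    dists = torch.cdist(embeddings, embeddings)           # pairwise distances
    losses = []
    for i, y in enumerate(labels):
        if y.item() not in minority_classes:
            continue
        pos = (labels == y).clone(); pos[i] = False
        neg = labels != y
        if pos.any() and neg.any():
            hardest_pos = dists[i][pos].max()              # furthest same-class
            hardest_neg = dists[i][neg].min()              # closest other-class
            losses.append(F.relu(hardest_pos - hardest_neg + margin))
    return torch.stack(losses).mean() if losses else embeddings.sum() * 0.0

emb = F.normalize(torch.randn(16, 32), dim=1)
labels = torch.randint(0, 4, (16,))
print(minority_rectification_loss(emb, labels, minority_classes={3}))
```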