Paper Group AWR 6
Bayesian sparse reconstruction: a brute-force approach to astronomical imaging and machine learning
Title | Bayesian sparse reconstruction: a brute-force approach to astronomical imaging and machine learning |
Authors | Edward Higson, Will Handley, Michael Hobson, Anthony Lasenby |
Abstract | We present a principled Bayesian framework for signal reconstruction, in which the signal is modelled by basis functions whose number (and form, if required) is determined by the data themselves. This approach is based on a Bayesian interpretation of conventional sparse reconstruction and regularisation techniques, in which sparsity is imposed through priors via Bayesian model selection. We demonstrate our method for noisy 1- and 2-dimensional signals, including astronomical images. Furthermore, by using a product-space approach, the number and type of basis functions can be treated as integer parameters and their posterior distributions sampled directly. We show that order-of-magnitude increases in computational efficiency are possible from this technique compared to calculating the Bayesian evidences separately, and that further computational gains are possible using it in combination with dynamic nested sampling. Our approach can also be readily applied to neural networks, where it allows the network architecture to be determined by the data in a principled Bayesian manner by treating the number of nodes and hidden layers as parameters. |
Tasks | Model Selection |
Published | 2018-09-12 |
URL | http://arxiv.org/abs/1809.04598v2 |
PDF | http://arxiv.org/pdf/1809.04598v2.pdf |
PWC | https://paperswithcode.com/paper/bayesian-sparse-reconstruction-a-brute-force |
Repo | https://github.com/ejhigson/bsr |
Framework | none |
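The abstract above models a signal as a sum of basis functions whose number is itself inferred from the data. As a rough illustration only (not the authors' bsr package), the numpy sketch below evaluates a 1-D signal built from a variable number of Gaussian basis functions and its Gaussian log-likelihood; the nested-sampling machinery that would actually explore the number of basis functions is omitted, and all parameter values are made up.

```python
import numpy as np

def gaussian_basis(x, amplitude, centre, width):
    """A single Gaussian basis function evaluated on grid x."""
    return amplitude * np.exp(-0.5 * ((x - centre) / width) ** 2)

def signal(x, params):
    """Sum of B Gaussian basis functions; params has shape (B, 3)."""
    return sum(gaussian_basis(x, a, c, w) for a, c, w in params)

def log_likelihood(params, x, y, sigma_noise):
    """Gaussian log-likelihood of noisy data y given the reconstructed signal."""
    residual = y - signal(x, params)
    return -0.5 * np.sum((residual / sigma_noise) ** 2) \
           - 0.5 * len(y) * np.log(2 * np.pi * sigma_noise ** 2)

# Toy example: noisy data generated from two basis functions.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
true_params = np.array([[1.0, 0.3, 0.05], [0.5, 0.7, 0.10]])
y = signal(x, true_params) + rng.normal(0.0, 0.05, x.size)
print(log_likelihood(true_params, x, y, sigma_noise=0.05))
```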
AttentionXML: Label Tree-based Attention-Aware Deep Model for High-Performance Extreme Multi-Label Text Classification
Title | AttentionXML: Label Tree-based Attention-Aware Deep Model for High-Performance Extreme Multi-Label Text Classification |
Authors | Ronghui You, Zihan Zhang, Ziye Wang, Suyang Dai, Hiroshi Mamitsuka, Shanfeng Zhu |
Abstract | Extreme multi-label text classification (XMTC) is an important problem in the era of big data, for tagging a given text with the most relevant multiple labels from an extremely large-scale label set. XMTC can be found in many applications, such as item categorization, web page tagging, and news annotation. Traditionally, most methods have used bag-of-words (BOW) as inputs, ignoring word context as well as deep semantic information. Recent attempts to overcome the problems of BOW by deep learning still suffer from 1) failure to capture the important subtext for each label and 2) a lack of scalability against the huge number of labels. We propose a new label tree-based deep learning model for XMTC, called AttentionXML, with two unique features: 1) a multi-label attention mechanism with raw text as input, which allows the model to capture the most relevant part of the text for each label; and 2) a shallow and wide probabilistic label tree (PLT), which allows the model to handle millions of labels, especially “tail labels”. We empirically compared the performance of AttentionXML with that of eight state-of-the-art methods over six benchmark datasets, including Amazon-3M with around 3 million labels. AttentionXML outperformed all competing methods under all experimental settings. Experimental results also show that AttentionXML achieved the best performance on tail labels among label tree-based methods. The code and datasets are available at http://github.com/yourh/AttentionXML . |
Tasks | Multi-Label Text Classification, News Annotation, Product Categorization, Text Classification, Web Page Tagging |
Published | 2018-11-01 |
URL | https://arxiv.org/abs/1811.01727v3 |
PDF | https://arxiv.org/pdf/1811.01727v3.pdf |
PWC | https://paperswithcode.com/paper/attentionxml-extreme-multi-label-text |
Repo | https://github.com/yourh/AttentionXML |
Framework | pytorch |
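As a rough sketch of the multi-label attention mechanism described above (not the released AttentionXML code, and ignoring the probabilistic label tree), the PyTorch snippet below lets every label attend over token states from some encoder and scores each label-specific document vector; all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class MultiLabelAttention(nn.Module):
    """Each label attends over token states and gets its own document vector."""
    def __init__(self, hidden_dim, num_labels):
        super().__init__()
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_dim))
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, token_states):               # (batch, seq_len, hidden_dim)
        scores = torch.einsum('bsh,lh->bls', token_states, self.label_queries)
        attn = torch.softmax(scores, dim=-1)        # (batch, labels, seq_len)
        label_docs = torch.einsum('bls,bsh->blh', attn, token_states)
        return self.classifier(label_docs).squeeze(-1)   # per-label logits

# Toy usage with a random stand-in for a BiLSTM output.
states = torch.randn(2, 50, 256)
layer = MultiLabelAttention(hidden_dim=256, num_labels=1000)
logits = layer(states)          # per-label scores before the sigmoid
print(logits.shape)             # torch.Size([2, 1000])
```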
Hierarchical Quantized Representations for Script Generation
Title | Hierarchical Quantized Representations for Script Generation |
Authors | Noah Weber, Leena Shekhar, Niranjan Balasubramanian, Nathanael Chambers |
Abstract | Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges to learning scripts is the hierarchical nature of the knowledge. For example, a suspect arrested might plead innocent or guilty, and a very different track of events is then expected to happen. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide what portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks and achieving substantially lower perplexity scores. |
Tasks | Language Modelling, Quantization |
Published | 2018-08-28 |
URL | http://arxiv.org/abs/1808.09542v1 |
PDF | http://arxiv.org/pdf/1808.09542v1.pdf |
PWC | https://paperswithcode.com/paper/hierarchical-quantized-representations-for |
Repo | https://github.com/StonyBrookNLP/HAQAE |
Framework | pytorch |
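The vector-quantization step the abstract refers to can be illustrated with the minimal PyTorch sketch below: each encoder output is snapped to its nearest codebook embedding with a straight-through gradient. The hierarchical latent structure and the decoder attention over value embeddings are omitted, and this is not the HAQAE repository code.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Nearest-codebook lookup with a straight-through gradient estimator."""
    def __init__(self, num_codes, dim):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                             # (batch, dim)
        # Distances to every code vector, then pick the nearest code.
        dists = torch.cdist(z, self.codebook.weight)  # (batch, num_codes)
        codes = dists.argmin(dim=-1)                  # discrete latent values
        quantized = self.codebook(codes)
        # Straight-through: forward uses the code, backward passes gradients to z.
        quantized_st = z + (quantized - z).detach()
        commit_loss = torch.mean((z - quantized.detach()) ** 2)
        return quantized_st, codes, commit_loss

vq = VectorQuantizer(num_codes=512, dim=64)
z = torch.randn(8, 64, requires_grad=True)
q, codes, loss = vq(z)
print(q.shape, codes.shape, loss.item())
```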
Simple Baselines for Human Pose Estimation and Tracking
Title | Simple Baselines for Human Pose Estimation and Tracking |
Authors | Bin Xiao, Haiping Wu, Yichen Wei |
Abstract | There has been significant progress on pose estimation and increasing interest in pose tracking in recent years. At the same time, overall algorithm and system complexity has increased, making algorithm analysis and comparison more difficult. This work provides simple and effective baseline methods. They are helpful for inspiring and evaluating new ideas for the field. State-of-the-art results are achieved on challenging benchmarks. The code will be available at https://github.com/leoxiaobin/pose.pytorch. |
Tasks | Keypoint Detection, Pose Estimation, Pose Tracking |
Published | 2018-04-17 |
URL | http://arxiv.org/abs/1804.06208v2 |
PDF | http://arxiv.org/pdf/1804.06208v2.pdf |
PWC | https://paperswithcode.com/paper/simple-baselines-for-human-pose-estimation |
Repo | https://github.com/simochen/flowtrack.pytorch |
Framework | pytorch |
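The paper's central idea, a few deconvolution layers on top of backbone features followed by a 1x1 convolution that predicts one heatmap per keypoint, can be sketched as below. Channel sizes and the number of joints are illustrative, the backbone itself is omitted, and this is not the released pose.pytorch code.

```python
import torch
import torch.nn as nn

class DeconvHead(nn.Module):
    """Three deconv layers + 1x1 conv, producing one heatmap per keypoint."""
    def __init__(self, in_channels=2048, num_joints=17):
        super().__init__()
        layers, channels = [], in_channels
        for _ in range(3):
            layers += [nn.ConvTranspose2d(channels, 256, kernel_size=4,
                                          stride=2, padding=1),
                       nn.BatchNorm2d(256), nn.ReLU(inplace=True)]
            channels = 256
        self.deconv = nn.Sequential(*layers)
        self.final = nn.Conv2d(256, num_joints, kernel_size=1)

    def forward(self, features):                 # backbone features (B, C, H, W)
        heatmaps = self.final(self.deconv(features))
        b, j, h, w = heatmaps.shape
        flat = heatmaps.view(b, j, -1).argmax(dim=-1)
        coords = torch.stack((flat % w, flat // w), dim=-1)  # (x, y) per joint
        return heatmaps, coords

head = DeconvHead()
feats = torch.randn(1, 2048, 8, 6)               # e.g. a ResNet-50 conv5 output
heatmaps, coords = head(feats)
print(heatmaps.shape, coords.shape)              # (1, 17, 64, 48), (1, 17, 2)
```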
Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction
Title | Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction |
Authors | Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, Yoav Artzi |
Abstract | We propose to decompose instruction execution to goal prediction and action generation. We design a model that maps raw visual observations to goals using LINGUNET, a language-conditioned image generation network, and then generates the actions required to complete them. Our model is trained from demonstration only without external resources. To evaluate our approach, we introduce two benchmarks for instruction following: LANI, a navigation task; and CHAI, where an agent executes household instructions. Our evaluation demonstrates the advantages of our model decomposition, and illustrates the challenges posed by our new benchmarks. |
Tasks | Conditional Image Generation, Image Generation |
Published | 2018-09-04 |
URL | http://arxiv.org/abs/1809.00786v2 |
PDF | http://arxiv.org/pdf/1809.00786v2.pdf |
PWC | https://paperswithcode.com/paper/mapping-instructions-to-actions-in-3d |
Repo | https://github.com/clic-lab/ciff |
Framework | pytorch |
Learning Not to Learn: Training Deep Neural Networks with Biased Data
Title | Learning Not to Learn: Training Deep Neural Networks with Biased Data |
Authors | Byungju Kim, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, Junmo Kim |
Abstract | We propose a novel regularization algorithm for training deep neural networks when the training data is severely biased. Since a neural network efficiently learns the data distribution, it is likely to learn the bias information when categorizing input data, which leads to poor performance at test time if the bias is, in fact, irrelevant to the categorization. In this paper, we formulate a regularization loss based on the mutual information between the feature embedding and the bias. Based on the idea of minimizing this mutual information, we propose an iterative algorithm to unlearn the bias information. We employ an additional network to predict the bias distribution and train it adversarially against the feature embedding network. At the end of training, the bias prediction network is unable to predict the bias not because it is poorly trained, but because the feature embedding network has successfully unlearned the bias information. We also present quantitative and qualitative experimental results showing that our algorithm effectively removes the bias information from the feature embedding. |
Tasks | |
Published | 2018-12-26 |
URL | http://arxiv.org/abs/1812.10352v2 |
PDF | http://arxiv.org/pdf/1812.10352v2.pdf |
PWC | https://paperswithcode.com/paper/learning-not-to-learn-training-deep-neural |
Repo | https://github.com/feidfoe/learning-not-to-learn |
Framework | pytorch |
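One common way to realise the adversarial unlearning described above is a gradient-reversal layer: the bias head is trained to predict the bias while the feature network receives the negated gradient. The PyTorch sketch below shows only this mechanism with made-up layer sizes; it is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

feature_net = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
class_head  = nn.Linear(128, 10)     # main classification head
bias_head   = nn.Linear(128, 3)      # adversarial bias predictor

x = torch.randn(16, 784)
features = feature_net(x)
class_logits = class_head(features)
# The bias head sees reversed gradients, so minimising its loss pushes the
# feature network towards embeddings that carry no bias information.
bias_logits = bias_head(grad_reverse(features))
print(class_logits.shape, bias_logits.shape)
```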
Detection, localisation and tracking of pallets using machine learning techniques and 2D range data
Title | Detection, localisation and tracking of pallets using machine learning techniques and 2D range data |
Authors | Ihab S. Mohamed, Alessio Capitanelli, Fulvio Mastrogiovanni, Stefano Rovetta, Renato Zaccaria |
Abstract | The problem of autonomous transportation in industrial scenarios is receiving a renewed interest due to the way it can revolutionise internal logistics, especially in unstructured environments. This paper presents a novel architecture allowing a robot to detect, localise, and track (possibly multiple) pallets using machine learning techniques based on an on-board 2D laser rangefinder only. The architecture is composed of two main components: the first stage is a pallet detector employing a Faster Region-based Convolutional Neural Network (Faster R-CNN) detector cascaded with a CNN-based classifier; the second stage is a Kalman filter for localising and tracking detected pallets, which we also use to defer commitment to a pallet detected in the first stage until sufficient confidence has been acquired via a sequential data acquisition process. For fine-tuning the CNNs, the architecture has been systematically evaluated using a real-world dataset containing 340 labeled 2D scans, which have been made freely available in an online repository. Detection performance has been assessed on the basis of the average accuracy over k-fold cross-validation, and it scored 99.58% in our tests. Concerning pallet localisation and tracking, experiments have been performed in a scenario where the robot is approaching the pallet in order to fork it. Although the data were originally acquired by considering only one pallet, as per the specification of the use case we consider, artificial data have been generated as well to mimic the presence of multiple pallets in the robot workspace. Our experimental results confirm that the system is capable of identifying, localising and tracking pallets with a high success rate while being robust to false positives. |
Tasks | |
Published | 2018-03-29 |
URL | http://arxiv.org/abs/1803.11254v3 |
PDF | http://arxiv.org/pdf/1803.11254v3.pdf |
PWC | https://paperswithcode.com/paper/detection-localisation-and-tracking-of |
Repo | https://github.com/EMAROLab/PDT |
Framework | none |
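The second stage described above can be illustrated with a minimal constant-velocity Kalman filter in numpy, which smooths and tracks a detected pallet position over successive scans. The state layout and noise levels are illustrative, not taken from the paper or the PDT repository.

```python
import numpy as np

class ConstantVelocityKalman:
    """Tracks (x, y) position and velocity from noisy position detections."""
    def __init__(self, dt=0.1, process_var=1e-2, meas_var=1e-1):
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q = process_var * np.eye(4)
        self.R = meas_var * np.eye(2)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):                      # z: detected (x, y) from stage one
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

kf = ConstantVelocityKalman()
for z in np.array([[1.0, 2.0], [1.1, 2.1], [1.2, 2.2]]):
    kf.predict()
    print(kf.update(z))
```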
ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms
Title | ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms |
Authors | Martin Aumüller, Erik Bernhardsson, Alexander Faithfull |
Abstract | This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several different ways of integrating $k$-NN algorithms, and its configuration system automatically tests a range of parameter settings for each algorithm. Algorithms are compared with respect to many different (approximate) quality measures, and adding more is easy and fast; the included plotting front-ends can visualise these as images, $\LaTeX$ plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of $k$-NN algorithms. In the short term, this overview allows users to choose the correct $k$-NN algorithm and parameters for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different approaches to $k$-NN search yield comparable quality-performance trade-offs. The system is available at http://ann-benchmarks.com . |
Tasks | |
Published | 2018-07-15 |
URL | https://arxiv.org/abs/1807.05614v2 |
PDF | https://arxiv.org/pdf/1807.05614v2.pdf |
PWC | https://paperswithcode.com/paper/ann-benchmarks-a-benchmarking-tool-for |
Repo | https://github.com/erikbern/ann-benchmarks |
Framework | none |
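The core quality measure such a benchmark reports is recall@k of an approximate index against exact brute-force neighbours. The numpy sketch below computes it, standing in for the approximate index with exact search over a random subsample; it is purely illustrative and unrelated to the ann-benchmarks code.

```python
import numpy as np

def exact_knn(queries, data, k):
    """Brute-force k-NN indices by Euclidean distance."""
    d = np.linalg.norm(queries[:, None, :] - data[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, :k]

def recall_at_k(approx_ids, exact_ids):
    """Fraction of true neighbours recovered by the approximate index."""
    hits = [len(set(a) & set(e)) for a, e in zip(approx_ids, exact_ids)]
    return np.mean(hits) / exact_ids.shape[1]

rng = np.random.default_rng(0)
data, queries = rng.normal(size=(1000, 32)), rng.normal(size=(10, 32))
truth = exact_knn(queries, data, k=10)
# Stand-in for an approximate index: exact search over a random subsample.
subset = rng.choice(1000, size=400, replace=False)
approx = subset[exact_knn(queries, data[subset], k=10)]
print('recall@10 =', recall_at_k(approx, truth))
```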
A Deep Neural Model Of Emotion Appraisal
Title | A Deep Neural Model Of Emotion Appraisal |
Authors | Pablo Barros, Emilia Barakova, Stefan Wermter |
Abstract | Emotional concepts play a huge role in our daily life since they take part in many cognitive processes: from the perception of the environment around us to different learning processes and natural communication. Social robots need to communicate with humans, which has also increased the popularity of affective embodied models that adopt different emotional concepts in many everyday tasks. However, there is still a gap between the development of these solutions and the integration and development of a complex emotion appraisal system, which is necessary for truly social robots. In this paper, we propose a deep neural model designed in the light of different aspects of developmental learning of emotional concepts, providing an integrated solution for internal and external emotion appraisal. We evaluate the performance of the proposed model on different challenging corpora and compare it with state-of-the-art models for external emotion appraisal. To extend the evaluation of the proposed model, we designed and collected a novel dataset based on a Human-Robot Interaction (HRI) scenario. We deployed the model on an iCub robot and evaluated the capability of the robot to learn and describe the affective behavior of different persons based on observation. The experiments demonstrate that the proposed model is competitive with the state of the art in describing emotion behavior in general. In addition, it is able to generate internal emotional concepts that evolve through time: it continuously forms and updates the formed emotional concepts, which is a step towards creating an emotional appraisal model grounded in the robot's experiences. |
Tasks | |
Published | 2018-08-01 |
URL | http://arxiv.org/abs/1808.00252v1 |
PDF | http://arxiv.org/pdf/1808.00252v1.pdf |
PWC | https://paperswithcode.com/paper/a-deep-neural-model-of-emotion-appraisal |
Repo | https://github.com/pablovin/AffectiveMemoryFramework |
Framework | tf |
Efficient Inference in Multi-task Cox Process Models
Title | Efficient Inference in Multi-task Cox Process Models |
Authors | Virginia Aglietti, Theodoros Damoulas, Edwin Bonilla |
Abstract | We generalize the log Gaussian Cox process (LGCP) framework to model multiple correlated point data jointly. The observations are treated as realizations of multiple LGCPs, whose log intensities are given by linear combinations of latent functions drawn from Gaussian process priors. The combination coefficients are also drawn from Gaussian processes and can incorporate additional dependencies. We derive closed-form expressions for the moments of the intensity functions and develop an efficient variational inference algorithm that is orders of magnitude faster than competing deterministic and stochastic approximations of multivariate LGCP, coregionalization models, and multi-task permanental processes. Our approach outperforms these benchmarks in multiple problems, offering the current state of the art in modeling multivariate point processes. |
Tasks | Gaussian Processes, Point Processes |
Published | 2018-05-24 |
URL | http://arxiv.org/abs/1805.09781v3 |
PDF | http://arxiv.org/pdf/1805.09781v3.pdf |
PWC | https://paperswithcode.com/paper/efficient-inference-in-multi-task-cox-process |
Repo | https://github.com/VirgiAgl/MCPM |
Framework | tf |
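The generative structure described above, latent functions drawn from GP priors, combined linearly and exponentiated to give each task's intensity, can be sketched in numpy as below. The mixing weights are fixed scalars here rather than GP-distributed, and the variational inference algorithm is not shown; everything is illustrative rather than the MCPM code.

```python
import numpy as np

def rbf_kernel(x, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance matrix on a 1-D grid."""
    sq = (x[:, None] - x[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
K = rbf_kernel(x) + 1e-6 * np.eye(x.size)         # jitter for numerical stability

num_latent, num_tasks = 2, 3
# Latent functions f_q ~ GP(0, K), shared across tasks.
F = rng.multivariate_normal(np.zeros(x.size), K, size=num_latent)   # (Q, N)
# Mixing weights; in the paper these are themselves GP-distributed,
# here they are fixed scalars for simplicity.
W = rng.normal(size=(num_tasks, num_latent))
# Each task's intensity is the exponential of a linear combination of latents.
intensities = np.exp(W @ F)                                          # (P, N)
expected_counts = intensities.mean(axis=1)     # ≈ expected points on [0, 1]
print(expected_counts)
```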
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
Title | On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models |
Authors | Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, Pushmeet Kohli |
Abstract | Recent work has shown that it is possible to train deep neural networks that are provably robust to norm-bounded adversarial perturbations. Most of these methods are based on minimizing an upper bound on the worst-case loss over all possible adversarial perturbations. While these techniques show promise, they often result in difficult optimization procedures that remain hard to scale to larger networks. Through a comprehensive analysis, we show how a simple bounding technique, interval bound propagation (IBP), can be exploited to train large provably robust neural networks that beat the state-of-the-art in verified accuracy. While the upper bound computed by IBP can be quite weak for general networks, we demonstrate that an appropriate loss and clever hyper-parameter schedule allow the network to adapt such that the IBP bound is tight. This results in a fast and stable learning algorithm that outperforms more sophisticated methods and achieves state-of-the-art results on MNIST, CIFAR-10 and SVHN. It also allows us to train the largest model to be verified beyond vacuous bounds on a downscaled version of ImageNet. |
Tasks | |
Published | 2018-10-30 |
URL | https://arxiv.org/abs/1810.12715v4 |
PDF | https://arxiv.org/pdf/1810.12715v4.pdf |
PWC | https://paperswithcode.com/paper/on-the-effectiveness-of-interval-bound |
Repo | https://github.com/Ping-C/certifiedpatchdefense |
Framework | pytorch |
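Interval bound propagation itself is simple to sketch: propagate the centre and radius of an input box through an affine layer and a ReLU to obtain elementwise bounds on the activations under any perturbation inside the box. The numpy snippet below shows one such step with made-up weights; the training loss and hyper-parameter schedule from the paper are not included.

```python
import numpy as np

def ibp_affine(center, radius, W, b):
    """Propagate an axis-aligned box through an affine layer W x + b."""
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center, new_radius

def ibp_relu(center, radius):
    """Propagate the box through an elementwise ReLU."""
    lower = np.maximum(center - radius, 0.0)
    upper = np.maximum(center + radius, 0.0)
    return (lower + upper) / 2.0, (upper - lower) / 2.0

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
x, eps = rng.normal(size=3), 0.1                # input and perturbation radius

c, r = ibp_affine(x, np.full(3, eps), W, b)
c, r = ibp_relu(c, r)
print('lower bounds:', c - r)
print('upper bounds:', c + r)
```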
Neural Nearest Neighbors Networks
Title | Neural Nearest Neighbors Networks |
Authors | Tobias Plötz, Stefan Roth |
Abstract | Non-local methods exploiting the self-similarity of natural signals have been well studied, for example in image analysis and restoration. Existing approaches, however, rely on k-nearest neighbors (KNN) matching in a fixed feature space. The main hurdle in optimizing this feature space w.r.t. application performance is the non-differentiability of the KNN selection rule. To overcome this, we propose a continuous deterministic relaxation of KNN selection that maintains differentiability w.r.t. pairwise distances, but retains the original KNN as the limit of a temperature parameter approaching zero. To exploit our relaxation, we propose the neural nearest neighbors block (N3 block), a novel non-local processing layer that leverages the principle of self-similarity and can be used as a building block in modern neural network architectures. We show its effectiveness for the set reasoning task of correspondence classification as well as for image restoration, including image denoising and single image super-resolution, where we outperform strong convolutional neural network (CNN) baselines and recent non-local models that rely on KNN selection in hand-chosen feature spaces. |
Tasks | Denoising, Image Denoising, Image Restoration, Image Super-Resolution, Super-Resolution |
Published | 2018-10-30 |
URL | http://arxiv.org/abs/1810.12575v1 |
PDF | http://arxiv.org/pdf/1810.12575v1.pdf |
PWC | https://paperswithcode.com/paper/neural-nearest-neighbors-networks |
Repo | https://github.com/visinf/n3net |
Framework | pytorch |
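The continuous relaxation of KNN selection can be illustrated as below: neighbour weights come from a softmax over negative squared distances with a temperature, and hard nearest-neighbour selection is approached as the temperature goes to zero. This one-neighbour sketch omits the N3 block's selection of k distinct neighbours and is not the visinf/n3net code.

```python
import torch

def soft_nearest_neighbours(query, database, temperature):
    """Soft neighbour weights from a softmax over negative squared distances."""
    d2 = torch.cdist(query, database) ** 2            # (Q, N) pairwise distances
    weights = torch.softmax(-d2 / temperature, dim=-1)
    return weights @ database                          # soft nearest neighbour

query = torch.randn(5, 16)
database = torch.randn(100, 16)

soft = soft_nearest_neighbours(query, database, temperature=0.1)
# As the temperature goes to zero, the soft selection approaches hard 1-NN.
hard = database[torch.cdist(query, database).argmin(dim=-1)]
almost_hard = soft_nearest_neighbours(query, database, temperature=1e-4)
print(torch.allclose(almost_hard, hard, atol=1e-3))
```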
Bridging machine learning and cryptography in defence against adversarial attacks
Title | Bridging machine learning and cryptography in defence against adversarial attacks |
Authors | Olga Taran, Shideh Rezaeifar, Slava Voloshynovskiy |
Abstract | In the last decade, deep learning algorithms have become very popular thanks to the performance achieved in many machine learning and computer vision tasks. However, most deep learning architectures are vulnerable to so-called adversarial examples. This questions the security of deep neural networks (DNN) for many security- and trust-sensitive domains. The majority of existing adversarial attacks are based on the differentiability of the DNN cost function. Defence strategies are mostly based on machine learning and signal processing principles that either try to detect-reject or filter out the adversarial perturbations, and completely neglect the classical cryptographic component in the defence. In this work, we propose a new defence mechanism based on the second Kerckhoffs’s cryptographic principle, which states that the defence and classification algorithm are supposed to be known, but not the key. To be compliant with the assumption that the attacker does not have access to the secret key, we primarily focus on a gray-box scenario and do not address a white-box one. More specifically, we assume that the attacker does not have direct access to the secret block, but (a) he completely knows the system architecture, (b) he has access to the data used for training and testing, and (c) he can observe the output of the classifier for each given input. We show empirically that our system is effective against the most well-known state-of-the-art attacks in black-box and gray-box scenarios. |
Tasks | |
Published | 2018-09-05 |
URL | http://arxiv.org/abs/1809.01715v1 |
PDF | http://arxiv.org/pdf/1809.01715v1.pdf |
PWC | https://paperswithcode.com/paper/bridging-machine-learning-and-cryptography-in |
Repo | https://github.com/nikste/crypto-defense |
Framework | tf |
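The defence principle above, a secret, key-dependent block between the input and an otherwise known classifier, can be illustrated with a key-seeded random permutation of input features. This numpy sketch is only an illustration of the Kerckhoffs-style principle (the attacker may know the algorithm but not the key); it is not the paper's actual secret transform or the linked repository code.

```python
import numpy as np

def key_permutation(num_features, secret_key):
    """Derive a fixed permutation of input features from a secret key."""
    rng = np.random.default_rng(secret_key)
    return rng.permutation(num_features)

def apply_defence(x, perm):
    """Key-dependent transform applied before the (publicly known) classifier."""
    return x[..., perm]

secret_key = 123456                  # known only to the defender
perm = key_permutation(28 * 28, secret_key)

x = np.random.rand(1, 28 * 28)       # flattened input image
x_defended = apply_defence(x, perm)
# The classifier is trained on permuted inputs; gradient-based attacks
# crafted without the key target the wrong input representation.
print(x_defended.shape)
```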
Automatic Pavement Crack Detection Based on Structured Prediction with the Convolutional Neural Network
Title | Automatic Pavement Crack Detection Based on Structured Prediction with the Convolutional Neural Network |
Authors | Zhun Fan, Yuming Wu, Jiewei Lu, Wenji Li |
Abstract | Automated pavement crack detection is a challenging task that has been researched for decades due to the complicated pavement conditions in the real world. In this paper, a supervised method based on deep learning is proposed, which has the capability of dealing with different pavement conditions. Specifically, a convolutional neural network (CNN) is used to learn the structure of the cracks from raw images, without any preprocessing. Small patches are extracted from crack images as inputs to generate a large training database, a CNN is trained, and crack detection is modeled as a multi-label classification problem. Typically, crack pixels are far fewer than non-crack pixels. To deal with this severely imbalanced data, a strategy of modifying the ratio of positive to negative samples is proposed. The method is tested on two public databases and compared with five existing methods. Experimental results show that it outperforms the other methods. |
Tasks | Multi-Label Classification, Structured Prediction |
Published | 2018-02-01 |
URL | http://arxiv.org/abs/1802.02208v1 |
PDF | http://arxiv.org/pdf/1802.02208v1.pdf |
PWC | https://paperswithcode.com/paper/automatic-pavement-crack-detection-based-on |
Repo | https://github.com/brijml/xcaliber-task |
Framework | none |
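The patch-based setup with a controlled positive-to-negative ratio can be sketched as below: extract fixed-size patches around pixels and cap the number of negative (non-crack) patches relative to positives. Patch size and ratio are illustrative, and the CNN and structured output labels from the paper are omitted.

```python
import numpy as np

def extract_patch(image, row, col, size=27):
    """Square patch centred on (row, col); assumes it lies inside the image."""
    half = size // 2
    return image[row - half:row + half + 1, col - half:col + half + 1]

def sample_patches(image, crack_mask, neg_per_pos=3, size=27, seed=0):
    """Balance severely imbalanced data by fixing the negative:positive ratio."""
    rng = np.random.default_rng(seed)
    half = size // 2
    inner = np.zeros_like(crack_mask, dtype=bool)
    inner[half:-half, half:-half] = True              # keep patches inside image
    pos = np.argwhere(crack_mask & inner)
    neg = np.argwhere(~crack_mask & inner)
    neg = neg[rng.choice(len(neg), size=neg_per_pos * len(pos), replace=False)]
    patches = [extract_patch(image, r, c, size) for r, c in np.vstack([pos, neg])]
    labels = np.array([1] * len(pos) + [0] * len(neg))
    return np.stack(patches), labels

image = np.random.rand(256, 256)
mask = np.zeros((256, 256), dtype=bool)
mask[100:110, 50:200] = True                     # synthetic "crack" pixels
X, y = sample_patches(image, mask)
print(X.shape, y.mean())                         # fraction of positives = 0.25
```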
Transferable Interactiveness Knowledge for Human-Object Interaction Detection
Title | Transferable Interactiveness Knowledge for Human-Object Interaction Detection |
Authors | Yong-Lu Li, Siyuan Zhou, Xijie Huang, Liang Xu, Ze Ma, Hao-Shu Fang, Yan-Feng Wang, Cewu Lu |
Abstract | Human-Object Interaction (HOI) detection is an important problem for understanding how humans interact with objects. In this paper, we explore interactiveness knowledge, which indicates whether a human and an object interact with each other or not. We found that interactiveness knowledge can be learned across HOI datasets, regardless of HOI category settings. Our core idea is to exploit an interactiveness network to learn general interactiveness knowledge from multiple HOI datasets and perform Non-Interaction Suppression before HOI classification at inference time. Owing to its generalization, the interactiveness network is a transferable knowledge learner and can be combined with any HOI detection model to achieve desirable results. We extensively evaluate the proposed method on the HICO-DET and V-COCO datasets. Our framework outperforms state-of-the-art HOI detection results by a large margin, verifying its efficacy and flexibility. Code is available at https://github.com/DirtyHarryLYL/Transferable-Interactiveness-Network. |
Tasks | Human-Object Interaction Detection |
Published | 2018-11-20 |
URL | https://arxiv.org/abs/1811.08264v4 |
PDF | https://arxiv.org/pdf/1811.08264v4.pdf |
PWC | https://paperswithcode.com/paper/transferable-interactiveness-prior-for-human |
Repo | https://github.com/DirtyHarryLYL/Transferable-Interactiveness-Network |
Framework | tf |
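Non-Interaction Suppression can be sketched as a simple filter: candidate human-object pairs whose interactiveness score falls below a threshold are discarded before per-category HOI classification. The data structures and threshold below are illustrative, not taken from the released Transferable-Interactiveness-Network code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PairCandidate:
    human_box: tuple
    object_box: tuple
    interactiveness: float      # output of the interactiveness network
    hoi_scores: dict            # per-category scores from the HOI classifier

def non_interaction_suppression(pairs: List[PairCandidate],
                                threshold: float = 0.1) -> List[PairCandidate]:
    """Discard pairs the interactiveness network judges to be non-interacting."""
    return [p for p in pairs if p.interactiveness >= threshold]

pairs = [
    PairCandidate((0, 0, 50, 100), (40, 60, 80, 90), 0.85, {'ride_bicycle': 0.7}),
    PairCandidate((0, 0, 50, 100), (200, 10, 240, 40), 0.03, {'hold_cup': 0.4}),
]
kept = non_interaction_suppression(pairs)
# Only the first pair survives; its HOI scores could also be re-weighted by
# the interactiveness score before final ranking.
print(len(kept), kept[0].hoi_scores)
```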