February 1, 2020

3204 words 16 mins read

Paper Group AWR 191

Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs. Cosmological N-body simulations: a challenge for scalable generative models. Precise Detection in Densely Packed Scenes. DAGCN: Dual Attention Graph Convolutional Networks. Product-Aware Answer Generation in E-Commerce Question-Answering. Neural Chinese Word Segmentation …

Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs

Title Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs
Authors Debjit Paul, Anette Frank
Abstract To make machines better understand sentiments, research needs to move from polarity identification to understanding the reasons that underlie the expression of sentiment. Categorizing the goals or needs of humans is one way to explain the expression of sentiment in text. Humans are good at understanding situations described in natural language and can easily connect them to the character’s psychological needs using commonsense knowledge. We present a novel method to extract, rank, filter and select multi-hop relation paths from a commonsense knowledge resource to interpret the expression of sentiment in terms of their underlying human needs. We efficiently integrate the acquired knowledge paths in a neural model that interfaces context representations with knowledge using a gated attention mechanism. We assess the model’s performance on a recently published dataset for categorizing human needs. Selectively integrating knowledge paths boosts performance and establishes a new state-of-the-art. Our model offers interpretability through the learned attention map over commonsense knowledge paths. Human evaluation highlights the relevance of the encoded knowledge.
Tasks Common Sense Reasoning
Published 2019-04-01
URL http://arxiv.org/abs/1904.00676v1
PDF http://arxiv.org/pdf/1904.00676v1.pdf
PWC https://paperswithcode.com/paper/ranking-and-selecting-multi-hop-knowledge
Repo https://github.com/debjitpaul/Multi-Hop-Knowledge-Paths-Human-Needs
Framework tf
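
As a rough illustration of the gated attention mechanism described in the abstract, the sketch below attends over a set of knowledge-path embeddings given a context vector and gates how much of the resulting knowledge summary is mixed back into the context. Shapes and parameter names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a gated attention read over knowledge-path embeddings.
import torch
import torch.nn.functional as F

def gated_knowledge_read(context, paths, W_att, W_gate):
    """context: (d,) sentence representation; paths: (n_paths, d); W_att, W_gate: (d, d)."""
    scores = paths @ (W_att @ context)               # relevance of each path to the context
    alpha = F.softmax(scores, dim=0)                 # attention map over knowledge paths
    knowledge = alpha @ paths                        # (d,) weighted path summary
    gate = torch.sigmoid(context @ (W_gate @ knowledge))  # scalar gate on the knowledge
    return gate * knowledge + (1 - gate) * context, alpha

d = 64
context = torch.randn(d)
paths = torch.randn(10, d)
W_att, W_gate = torch.randn(d, d), torch.randn(d, d)
fused, attn = gated_knowledge_read(context, paths, W_att, W_gate)
```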

Cosmological N-body simulations: a challenge for scalable generative models

Title Cosmological N-body simulations: a challenge for scalable generative models
Authors Nathanaël Perraudin, Ankit Srivastava, Aurelien Lucchi, Tomasz Kacprzak, Thomas Hofmann, Alexandre Réfrégier
Abstract Deep generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs), have been demonstrated to produce images of high visual quality. However, the existing hardware severely limits the size of the images that can be generated. The rapid growth of high-dimensional data in many fields of science therefore poses a significant challenge for generative models. In cosmology, the large-scale, three-dimensional matter distribution, modeled with N-body simulations, plays a crucial role in understanding the evolution of the universe. As these simulations are computationally very expensive, GANs have recently generated interest as a possible method to emulate these datasets, but they have been, so far, mostly limited to two-dimensional data. In this work, we introduce a new benchmark for the generation of three-dimensional N-body simulations, in order to stimulate new ideas in the machine learning community and move closer to the practical use of generative models in cosmology. As a first benchmark result, we propose a scalable GAN approach for training a generator of N-body three-dimensional cubes. Our technique relies on two key building blocks: (i) splitting the generation of the high-dimensional data into smaller parts, and (ii) using a multi-scale approach that efficiently captures global image features that might otherwise be lost in the splitting process. We evaluate the performance of our model for the generation of N-body samples using various statistical measures commonly used in cosmology. Our results show that the proposed model produces samples of high visual quality, although the statistical analysis reveals that capturing rare features in the data poses significant problems for the generative models. We make the data, quality evaluation routines, and the proposed GAN architecture publicly available at https://github.com/nperraud/3DcosmoGAN
Tasks
Published 2019-08-15
URL https://arxiv.org/abs/1908.05519v2
PDF https://arxiv.org/pdf/1908.05519v2.pdf
PWC https://paperswithcode.com/paper/cosmological-n-body-simulations-a-challenge
Repo https://github.com/nperraud/3DcosmoGAN
Framework tf
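
The first of the two building blocks above, splitting the generation of a large cube into smaller parts, boils down to tiling a 3-D volume into sub-cubes and stitching them back together. A minimal NumPy sketch of that tiling (not the released 3DcosmoGAN code):

```python
import numpy as np

def split_cube(cube, part):
    """cube: (N, N, N) with N divisible by `part`; returns (k^3, part, part, part) sub-cubes."""
    N = cube.shape[0]
    k = N // part
    return (cube.reshape(k, part, k, part, k, part)
                .transpose(0, 2, 4, 1, 3, 5)
                .reshape(-1, part, part, part))

def assemble_cube(parts, N):
    part = parts.shape[1]
    k = N // part
    return (parts.reshape(k, k, k, part, part, part)
                 .transpose(0, 3, 1, 4, 2, 5)
                 .reshape(N, N, N))

cube = np.random.rand(64, 64, 64)
parts = split_cube(cube, 32)                     # 8 sub-cubes of 32^3
assert np.allclose(assemble_cube(parts, 64), cube)
```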

Precise Detection in Densely Packed Scenes

Title Precise Detection in Densely Packed Scenes
Authors Eran Goldman, Roei Herzig, Aviv Eisenschtat, Oria Ratzon, Itsik Levi, Jacob Goldberger, Tal Hassner
Abstract Man-made scenes can be densely packed, containing numerous objects, often identical, positioned in close proximity. We show that precise object detection in such scenes remains a challenging frontier even for state-of-the-art object detectors. We propose a novel, deep-learning based method for precise object detection, designed for such challenging settings. Our contributions include: (1) A layer for estimating the Jaccard index as a detection quality score; (2) a novel EM merging unit, which uses our quality scores to resolve detection overlap ambiguities; finally, (3) an extensive, annotated data set, SKU-110K, representing packed retail environments, released for training and testing under such extreme settings. Detection tests on SKU-110K and counting tests on the CARPK and PUCPR+ show our method to outperform existing state-of-the-art with substantial margins. The code and data will be made available on \url{www.github.com/eg4000/SKU110K_CVPR19}.
Tasks Dense Object Detection, Object Detection
Published 2019-04-01
URL http://arxiv.org/abs/1904.00853v3
PDF http://arxiv.org/pdf/1904.00853v3.pdf
PWC https://paperswithcode.com/paper/precise-detection-in-densely-packed-scenes
Repo https://github.com/eg4000/SKU110K_CVPR19
Framework tf
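
The quality score learned by the paper's extra layer is the Jaccard index (intersection over union) between a predicted box and its object. A minimal reference computation, assuming boxes in [x1, y1, x2, y2] format:

```python
def jaccard(box_a, box_b):
    # overlap region
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

print(jaccard([0, 0, 10, 10], [5, 5, 15, 15]))   # ~0.143: overlap 25 over union 175
```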

DAGCN: Dual Attention Graph Convolutional Networks

Title DAGCN: Dual Attention Graph Convolutional Networks
Authors Fengwen Chen, Shirui Pan, Jing Jiang, Huan Huo, Guodong Long
Abstract Graph convolutional networks (GCNs) have recently become one of the most powerful tools for graph analytics tasks in numerous applications, ranging from social networks and natural language processing to bioinformatics and chemoinformatics, thanks to their ability to capture the complex relationships between concepts. At present, the vast majority of GCNs use a neighborhood aggregation framework to learn a continuous and compact vector, then perform a pooling operation to generalize the graph embedding for the classification task. These approaches have two disadvantages in the graph classification task: (1) when only the largest sub-graph structure ($k$-hop neighborhood) is used for neighborhood aggregation, a large amount of early-stage information is lost during the graph convolution step; (2) simple average/sum pooling or max pooling is used, which loses the characteristics of each node and the topology between nodes. In this paper, we propose a novel framework called dual attention graph convolutional networks (DAGCN) to address these problems. DAGCN automatically learns the importance of neighbors at different hops using a novel attention graph convolution layer, and then employs a second attention component, a self-attention pooling layer, to generalize the graph representation from the various aspects of a matrix graph embedding. The dual attention network is trained in an end-to-end manner for the graph classification task. We compare our model with state-of-the-art graph kernels and other deep learning methods. The experimental results show that our framework not only outperforms other baselines but also achieves a better rate of convergence.
Tasks Graph Classification, Graph Embedding
Published 2019-04-04
URL http://arxiv.org/abs/1904.02278v1
PDF http://arxiv.org/pdf/1904.02278v1.pdf
PWC https://paperswithcode.com/paper/dagcn-dual-attention-graph-convolutional
Repo https://github.com/dawenzi123/DAGCN
Framework pytorch
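
A rough PyTorch sketch of the first attention component: attending over the representations produced at different propagation hops rather than keeping only the last k-hop aggregation. Layer shapes and names here are assumptions, not the released DAGCN code.

```python
import torch
import torch.nn.functional as F

def attend_over_hops(adj, x, weights, att_vec):
    """adj: (n, n) normalized adjacency; x: (n, d); weights: list of (d, d); att_vec: (d,)."""
    h, hops = x, []
    for W in weights:                             # simple GCN-style propagation per hop
        h = torch.relu(adj @ h @ W)
        hops.append(h)
    H = torch.stack(hops, dim=1)                  # (n, n_hops, d)
    alpha = F.softmax(H @ att_vec, dim=1)         # per-node importance of each hop
    return (alpha.unsqueeze(-1) * H).sum(dim=1)   # (n, d) hop-attended node features

n, d = 5, 16
adj = torch.eye(n)
x = torch.randn(n, d)
out = attend_over_hops(adj, x, [torch.randn(d, d) for _ in range(3)], torch.randn(d))
```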

Product-Aware Answer Generation in E-Commerce Question-Answering

Title Product-Aware Answer Generation in E-Commerce Question-Answering
Authors Shen Gao, Zhaochun Ren, Yihong Eric Zhao, Dongyan Zhao, Dawei Yin, Rui Yan
Abstract In e-commerce portals, generating answers for product-related questions has become a crucial task. In this paper, we propose the task of product-aware answer generation, which aims to generate an accurate and complete answer from large-scale unlabeled e-commerce reviews and product attributes. Unlike existing question-answering problems, answer generation in e-commerce confronts three main challenges: (1) reviews are informal and noisy; (2) joint modeling of reviews and key-value product attributes is challenging; (3) traditional methods easily generate meaningless answers. To tackle the above challenges, we propose an adversarial-learning-based model, named PAAG, which is composed of three components: a question-aware review representation module, a key-value memory network encoding attributes, and a recurrent neural network as a sequence generator. Specifically, we employ a convolutional discriminator to distinguish whether our generated answer matches the facts. To extract the salient parts of reviews, an attention-based review reader is proposed to capture the most relevant words given the question. Extensive experiments conducted on a large-scale real-world e-commerce dataset verify the effectiveness of each module in our proposed model. Moreover, our experiments show that our model achieves state-of-the-art performance in terms of both automatic metrics and human evaluations.
Tasks Question Answering
Published 2019-01-23
URL http://arxiv.org/abs/1901.07696v2
PDF http://arxiv.org/pdf/1901.07696v2.pdf
PWC https://paperswithcode.com/paper/product-aware-answer-generation-in-e-commerce
Repo https://github.com/gsh199449/productqa
Framework none
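
The key-value memory network used to encode product attributes can be illustrated with a single memory read: attend over attribute keys with the encoded question and return a weighted sum of the attribute values. This is a generic sketch, not the PAAG implementation.

```python
import torch
import torch.nn.functional as F

def kv_memory_read(query, keys, values):
    """query: (d,) encoded question; keys/values: (n_attrs, d) embedded attribute pairs."""
    alpha = F.softmax(keys @ query, dim=0)    # relevance of each attribute to the question
    return alpha @ values                     # (d,) attribute summary fed to the decoder

d = 32
query = torch.randn(d)
keys, values = torch.randn(6, d), torch.randn(6, d)   # e.g. key "brand" -> value "Acme"
attr_summary = kv_memory_read(query, keys, values)
```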

Neural Chinese Word Segmentation as Sequence to Sequence Translation

Title Neural Chinese Word Segmentation as Sequence to Sequence Translation
Authors Xuewen Shi, Heyan Huang, Ping Jian, Yuhang Guo, Xiaochi Wei, Yi-Kun Tang
Abstract Recently, Chinese word segmentation (CWS) methods using neural networks have made impressive progress. Most of them regard CWS as a sequence labeling problem and construct models based on local features rather than the global information of the input sequence. In this paper, we cast CWS as a sequence translation problem and propose a novel sequence-to-sequence CWS model with an attention-based encoder-decoder framework. The model captures the global information from the input and directly outputs the segmented sequence. It can also tackle other NLP tasks jointly with CWS in an end-to-end mode. Experiments on the Weibo, PKU and MSRA benchmark datasets show that our approach achieves competitive performance compared with state-of-the-art methods. Meanwhile, we successfully applied our proposed model to jointly learning CWS and Chinese spelling correction, which demonstrates its applicability to multi-task fusion.
Tasks Chinese Word Segmentation, Spelling Correction
Published 2019-11-29
URL https://arxiv.org/abs/1911.12982v1
PDF https://arxiv.org/pdf/1911.12982v1.pdf
PWC https://paperswithcode.com/paper/neural-chinese-word-segmentation-as-sequence
Repo https://github.com/SourcecodeSharing/CWSpostediting
Framework none
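
Casting CWS as translation amounts to pairing the raw character stream with the same characters plus explicit word-boundary tokens, so that any attention-based encoder-decoder can be trained on the pairs. A toy illustration of that data preparation (the boundary token is an assumption, not the paper's exact scheme):

```python
def to_seq2seq_pair(segmented_words, boundary="<s>"):
    """Source: raw characters; target: characters with a boundary token after each word."""
    source = list("".join(segmented_words))
    target = []
    for word in segmented_words:
        target.extend(list(word))
        target.append(boundary)
    return source, target

src, tgt = to_seq2seq_pair(["我们", "喜欢", "机器", "翻译"])
# src: ['我','们','喜','欢','机','器','翻','译']
# tgt: ['我','们','<s>','喜','欢','<s>','机','器','<s>','翻','译','<s>']
```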

Multimodal Machine Learning-based Knee Osteoarthritis Progression Prediction from Plain Radiographs and Clinical Data

Title Multimodal Machine Learning-based Knee Osteoarthritis Progression Prediction from Plain Radiographs and Clinical Data
Authors Aleksei Tiulpin, Stefan Klein, Sita M. A. Bierma-Zeinstra, Jérôme Thevenot, Esa Rahtu, Joyce van Meurs, Edwin H. G. Oei, Simo Saarakkala
Abstract Knee osteoarthritis (OA) is the most common musculoskeletal disease without a cure, and current treatment options are limited to symptomatic relief. Prediction of OA progression is a very challenging and timely issue, and it could, if resolved, accelerate the disease modifying drug development and ultimately help to prevent millions of total joint replacement surgeries performed annually. Here, we present a multi-modal machine learning-based OA progression prediction model that utilizes raw radiographic data, clinical examination results and previous medical history of the patient. We validated this approach on an independent test set of 3,918 knee images from 2,129 subjects. Our method yielded area under the ROC curve (AUC) of 0.79 (0.78-0.81) and Average Precision (AP) of 0.68 (0.66-0.70). In contrast, a reference approach, based on logistic regression, yielded AUC of 0.75 (0.74-0.77) and AP of 0.62 (0.60-0.64). The proposed method could significantly improve the subject selection process for OA drug-development trials and help the development of personalized therapeutic plans.
Tasks Knee Osteoarthritis Prediction
Published 2019-04-12
URL https://arxiv.org/abs/1904.06236v2
PDF https://arxiv.org/pdf/1904.06236v2.pdf
PWC https://paperswithcode.com/paper/multimodal-machine-learning-based-knee
Repo https://github.com/MIPT-Oulu/OAProgression
Framework none
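
The AUC and Average Precision figures above are reported with confidence intervals; a common way to obtain such intervals is bootstrap resampling of the test set. The sketch below uses synthetic labels and scores purely to show the mechanics, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                            # synthetic progressor labels
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.25, 1000), 0, 1)

def bootstrap_ci(metric, y, p, n_boot=2000, alpha=0.05):
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:            # resample must contain both classes
            continue
        stats.append(metric(y[idx], p[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return metric(y, p), lo, hi

print("AUC %.2f (%.2f-%.2f)" % bootstrap_ci(roc_auc_score, y_true, y_score))
print("AP  %.2f (%.2f-%.2f)" % bootstrap_ci(average_precision_score, y_true, y_score))
```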

MetH: A family of high-resolution and variable-shape image challenges

Title MetH: A family of high-resolution and variable-shape image challenges
Authors Ferran Parés, Dario Garcia-Gasulla, Harald Servat, Jesús Labarta, Eduard Ayguadé
Abstract High-resolution and variable-shape images have not yet been properly addressed by the AI community. The approach of down-sampling data often used with convolutional neural networks is sub-optimal for many tasks, and has too many drawbacks to be considered a sustainable alternative. In light of the increasing importance of problems that can benefit from exploiting high-resolution (HR) and variable-shape images, and with the goal of promoting research in that direction, we introduce a new family of datasets (MetH). The four proposed problems comprise two image classification tasks, one image regression task and one super-resolution task. Each of these datasets contains thousands of art pieces captured in HR, variable-shape images, labeled by experts at the Metropolitan Museum of Art. We perform an analysis which shows how the proposed tasks go well beyond current public alternatives in both pixel size and aspect-ratio variance. At the same time, the performance obtained by popular architectures on these tasks shows that there is ample room for improvement. To underline the relevance of the contribution, we review the fields, both in AI and high-performance computing, that could benefit from the proposed challenges.
Tasks Image Classification, Super-Resolution
Published 2019-11-20
URL https://arxiv.org/abs/1911.08953v2
PDF https://arxiv.org/pdf/1911.08953v2.pdf
PWC https://paperswithcode.com/paper/meth-a-family-of-high-resolution-and-variable
Repo https://github.com/HPAI-BSC/MetH-baselines
Framework none
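
One standard way to process variable-shape inputs without the down-sampling criticized above is a fully convolutional feature extractor followed by adaptive pooling, so the classifier head sees a fixed-size vector regardless of input size. The sketch below illustrates that general pattern only; it is not one of the MetH baselines.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                   # collapses any spatial size to 1x1
    nn.Flatten(),
    nn.Linear(64, 10),
)

for h, w in [(600, 400), (320, 960)]:          # different resolutions and aspect ratios
    logits = model(torch.randn(1, 3, h, w))
    print(logits.shape)                        # torch.Size([1, 10]) in both cases
```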

Multi-label Cloud Segmentation Using a Deep Network

Title Multi-label Cloud Segmentation Using a Deep Network
Authors Soumyabrata Dev, Shilpa Manandhar, Yee Hui Lee, Stefan Winkler
Abstract Different empirical models have been developed for cloud detection. There is a growing interest in using the ground-based sky/cloud images for this purpose. Several methods exist that perform binary segmentation of clouds. In this paper, we propose to use a deep learning architecture (U-Net) to perform multi-label sky/cloud image segmentation. The proposed approach outperforms recent literature by a large margin.
Tasks Cloud Detection, Semantic Segmentation
Published 2019-03-15
URL http://arxiv.org/abs/1903.06562v1
PDF http://arxiv.org/pdf/1903.06562v1.pdf
PWC https://paperswithcode.com/paper/multi-label-cloud-segmentation-using-a-deep
Repo https://github.com/Soumyabrata/multilabel-unet
Framework none
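
Multi-label sky/cloud segmentation here means predicting one of several classes per pixel rather than a binary cloud mask. A very small sketch of reading out a per-pixel label map from U-Net-style logits (the class set is illustrative, not the paper's exact labels):

```python
import torch

n_classes = 3                                   # e.g. sky / thin cloud / thick cloud (assumed)
logits = torch.randn(1, n_classes, 300, 300)    # output of a U-Net-style network
label_map = logits.argmax(dim=1)                # (1, 300, 300) per-pixel class index
print(label_map.shape)
```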

MVFST-RL: An Asynchronous RL Framework for Congestion Control with Delayed Actions

Title MVFST-RL: An Asynchronous RL Framework for Congestion Control with Delayed Actions
Authors Viswanath Sivakumar, Tim Rocktäschel, Alexander H. Miller, Heinrich Küttler, Nantas Nardelli, Mike Rabbat, Joelle Pineau, Sebastian Riedel
Abstract Effective network congestion control strategies are key to keeping the Internet (or any large computer network) operational. Network congestion control has been dominated by hand-crafted heuristics for decades. Recently, Reinforcement Learning (RL) has emerged as an alternative to automatically optimize such control strategies. Research so far has primarily considered RL interfaces which block the sender while an agent considers its next action. This is largely an artifact of building on top of frameworks designed for RL in games (e.g. OpenAI Gym). However, this does not translate to real-world networking environments, where a network sender waiting on a policy without sending data leads to under-utilization of bandwidth. We instead propose to formulate congestion control with an asynchronous RL agent that handles delayed actions. We present MVFST-RL, a scalable framework for congestion control in the QUIC transport protocol that leverages the state of the art in asynchronous RL training with off-policy correction. We analyze modeling improvements to mitigate the deviation from Markovian dynamics, and evaluate our method on emulated networks from the Pantheon benchmark platform. The source code is publicly available at https://github.com/facebookresearch/mvfst-rl.
Tasks Network Congestion Control
Published 2019-10-09
URL https://arxiv.org/abs/1910.04054v3
PDF https://arxiv.org/pdf/1910.04054v3.pdf
PWC https://paperswithcode.com/paper/mvfst-rl-an-asynchronous-rl-framework-for
Repo https://github.com/facebookresearch/mvfst-rl
Framework pytorch
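
The non-blocking interaction pattern described above can be illustrated with two threads: the sender keeps transmitting under its current congestion-control action while a policy worker computes updates asynchronously, and actions are applied whenever they arrive, i.e. delayed. This is a toy sketch, not MVFST-RL code.

```python
import queue, random, threading, time

action_q = queue.Queue(maxsize=1)

def policy_worker(obs_q):
    while True:
        obs = obs_q.get()
        if obs is None:
            return
        time.sleep(0.05)                               # simulated policy latency
        try:
            action_q.put_nowait({"cwnd_delta": random.choice([-1, 0, 1])})
        except queue.Full:
            pass                                       # sender has not consumed the last action yet

obs_q = queue.Queue()
threading.Thread(target=policy_worker, args=(obs_q,), daemon=True).start()

cwnd = 10
for step in range(20):                                 # the "sender" loop never blocks on the policy
    obs_q.put({"rtt_ms": random.uniform(20, 80), "cwnd": cwnd})
    try:
        cwnd += action_q.get_nowait()["cwnd_delta"]    # apply a delayed action when one is ready
    except queue.Empty:
        pass                                           # otherwise keep sending with the current cwnd
    time.sleep(0.01)
obs_q.put(None)                                        # shut the worker down
```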

Cloud-Net: An end-to-end Cloud Detection Algorithm for Landsat 8 Imagery

Title Cloud-Net: An end-to-end Cloud Detection Algorithm for Landsat 8 Imagery
Authors Sorour Mohajerani, Parvaneh Saeedi
Abstract Cloud detection in satellite images is an important first-step in many remote sensing applications. This problem is more challenging when only a limited number of spectral bands are available. To address this problem, a deep learning-based algorithm is proposed in this paper. This algorithm consists of a Fully Convolutional Network (FCN) that is trained by multiple patches of Landsat 8 images. This network, which is called Cloud-Net, is capable of capturing global and local cloud features in an image using its convolutional blocks. Since the proposed method is an end-to-end solution, no complicated pre-processing step is required. Our experimental results prove that the proposed method outperforms the state-of-the-art method over a benchmark dataset by 8.7% in Jaccard Index.
Tasks Cloud Detection
Published 2019-01-29
URL http://arxiv.org/abs/1901.10077v1
PDF http://arxiv.org/pdf/1901.10077v1.pdf
PWC https://paperswithcode.com/paper/cloud-net-an-end-to-end-cloud-detection
Repo https://github.com/SorourMo/Cloud-Net-A-semantic-segmentation-CNN-for-cloud-detection
Framework tf
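
The 8.7% improvement quoted above is measured in Jaccard index between predicted and ground-truth cloud masks. For reference, the mask-level Jaccard index on toy binary masks (not Landsat 8 data):

```python
import numpy as np

def mask_jaccard(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

pred = np.zeros((4, 4)); pred[:2, :] = 1       # predicted cloud pixels
gt = np.zeros((4, 4)); gt[:3, :] = 1           # ground-truth cloud pixels
print(mask_jaccard(pred, gt))                  # 8 / 12 ≈ 0.667
```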

Twitch Plays Pokemon, Machine Learns Twitch: Unsupervised Context-Aware Anomaly Detection for Identifying Trolls in Streaming Data

Title Twitch Plays Pokemon, Machine Learns Twitch: Unsupervised Context-Aware Anomaly Detection for Identifying Trolls in Streaming Data
Authors Albert Haque
Abstract With the increasing importance of online communities, discussion forums, and customer reviews, Internet “trolls” have proliferated, thereby making it difficult for information seekers to find relevant and correct information. In this paper, we consider the problem of detecting and identifying Internet trolls, almost all of which are human agents. Identifying a human agent among a human population presents significant challenges compared to detecting automated spam or computerized robots. To learn a troll’s behavior, we use contextual anomaly detection to profile each chat user. Using clustering and distance-based methods, we draw on contextual data such as the group’s current goal, the current time, and the username to classify each point as an anomaly. A user whose features significantly differ from the norm is classified as a troll. We collected 38 million data points from the viral Internet fad Twitch Plays Pokemon, and use clustering and distance-based methods to develop heuristics for identifying trolls. Using MapReduce techniques for preprocessing and user profiling, we are able to classify trolls based on 10 features extracted from a user’s lifetime history.
Tasks Anomaly Detection
Published 2019-02-17
URL http://arxiv.org/abs/1902.06208v1
PDF http://arxiv.org/pdf/1902.06208v1.pdf
PWC https://paperswithcode.com/paper/twitch-plays-pokemon-machine-learns-twitch
Repo https://github.com/ahaque/twitch-troll-detection
Framework none
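
A hedged sketch of the distance-based, contextual scoring idea: group users by context, then flag those whose feature vectors lie far from their group's typical behaviour. The feature dimensions and threshold here are assumptions for illustration, not the paper's 10 features.

```python
import numpy as np

def anomaly_scores(features, contexts):
    """features: (n_users, d); contexts: (n_users,) discrete context labels (e.g. current goal)."""
    scores = np.zeros(len(features))
    for c in np.unique(contexts):
        idx = np.where(contexts == c)[0]
        mu = features[idx].mean(axis=0)
        sigma = features[idx].std(axis=0) + 1e-9
        # per-context z-score distance from the group's typical behaviour
        scores[idx] = np.linalg.norm((features[idx] - mu) / sigma, axis=1)
    return scores

rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 4))
feats[0] += 8                                   # one user far from the norm
ctx = rng.integers(0, 3, size=100)
flagged = anomaly_scores(feats, ctx) > 5.0      # candidate trolls
print(np.where(flagged)[0])
```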

Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing

Title Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing
Authors Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong
Abstract It is well-known that classifiers are vulnerable to adversarial perturbations. To defend against adversarial perturbations, various certified robustness results have been derived. However, existing certified robustness results are limited to top-1 predictions. In many real-world applications, top-$k$ predictions are more relevant. In this work, we aim to derive certified robustness for top-$k$ predictions. In particular, our certified robustness is based on randomized smoothing, which turns any classifier into a new classifier by adding noise to an input example. We adopt randomized smoothing because it is scalable to large-scale neural networks and applicable to any classifier. We derive a tight robustness bound in the $\ell_2$ norm for top-$k$ predictions when using randomized smoothing with Gaussian noise. We find that generalizing the certified robustness from top-1 to top-$k$ predictions faces significant technical challenges. We also empirically evaluate our method on CIFAR10 and ImageNet. For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8% when the $\ell_2$-norms of the adversarial perturbations are less than 0.5 (=127/255). Our code is publicly available at: \url{https://github.com/jjy1994/Certify_Topk}.
Tasks
Published 2019-12-20
URL https://arxiv.org/abs/1912.09899v1
PDF https://arxiv.org/pdf/1912.09899v1.pdf
PWC https://paperswithcode.com/paper/certified-robustness-for-top-k-predictions-1
Repo https://github.com/jjy1994/Certify_Topk
Framework none
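
The smoothing step itself is simple to sketch: classify many Gaussian-noised copies of the input and count the votes; the smoothed top-k set comes from these counts (the paper's certification bound is not reproduced here). `base_classifier` below is a stand-in for any network returning class logits.

```python
import torch

def smoothed_counts(base_classifier, x, sigma=0.5, n_samples=1000, n_classes=10, batch=100):
    counts = torch.zeros(n_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(n_samples // batch):
            noisy = x.unsqueeze(0) + sigma * torch.randn(batch, *x.shape)  # Gaussian noise
            preds = base_classifier(noisy).argmax(dim=1)
            counts += torch.bincount(preds, minlength=n_classes)
    return counts

base = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
counts = smoothed_counts(base, torch.randn(3, 32, 32))
print(counts.argsort(descending=True)[:5])      # smoothed top-5 classes by vote count
```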

Sensor-Independent Illumination Estimation for DNN Models

Title Sensor-Independent Illumination Estimation for DNN Models
Authors Mahmoud Afifi, Michael S. Brown
Abstract While modern deep neural networks (DNNs) achieve state-of-the-art results for illuminant estimation, it is currently necessary to train a separate DNN for each type of camera sensor. This means when a camera manufacturer uses a new sensor, it is necessary to retrain an existing DNN model with training images captured by the new sensor. This paper addresses this problem by introducing a novel sensor-independent illuminant estimation framework. Our method learns a sensor-independent working space that can be used to canonicalize the RGB values of any arbitrary camera sensor. Our learned space retains the linear property of the original sensor raw-RGB space and allows unseen camera sensors to be used on a single DNN model trained on this working space. We demonstrate the effectiveness of this approach on several different camera sensors and show it provides performance on par with state-of-the-art methods that were trained per sensor.
Tasks
Published 2019-12-14
URL https://arxiv.org/abs/1912.06888v1
PDF https://arxiv.org/pdf/1912.06888v1.pdf
PWC https://paperswithcode.com/paper/sensor-independent-illumination-estimation
Repo https://github.com/mahmoudnafifi/SIIE
Framework tf
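
The working-space idea can be illustrated by mapping each sensor's linear raw-RGB through a 3x3 matrix into a shared space and running a single estimator there. The matrix below is hand-picked and the estimator is a simple gray-world baseline; the paper learns the mapping and uses a DNN estimator instead.

```python
import numpy as np

def to_working_space(raw_rgb, M):
    """raw_rgb: (n_pixels, 3) linear raw values; M: (3, 3) sensor-to-working-space matrix."""
    return raw_rgb @ M.T

def gray_world_illuminant(img):
    ill = img.mean(axis=0)
    return ill / np.linalg.norm(ill)             # unit-norm illuminant estimate

rng = np.random.default_rng(0)
raw = rng.uniform(0, 1, size=(10000, 3))         # toy raw-RGB pixels from one sensor
M = np.array([[0.90, 0.10, 0.00],                # assumed mapping for that sensor
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])
print(gray_world_illuminant(to_working_space(raw, M)))
```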

Maximum Entropy Generators for Energy-Based Models

Title Maximum Entropy Generators for Energy-Based Models
Authors Rithesh Kumar, Sherjil Ozair, Anirudh Goyal, Aaron Courville, Yoshua Bengio
Abstract Maximum likelihood estimation of energy-based models is a challenging problem due to the intractability of the log-likelihood gradient. In this work, we propose learning both the energy function and an amortized approximate sampling mechanism using a neural generator network, which provides an efficient approximation of the log-likelihood gradient. The resulting objective requires maximizing entropy of the generated samples, which we perform using recently proposed nonparametric mutual information estimators. Finally, to stabilize the resulting adversarial game, we use a zero-centered gradient penalty derived as a necessary condition from the score matching literature. The proposed technique can generate sharp images with Inception and FID scores competitive with recent GAN techniques, does not suffer from mode collapse, and is competitive with state-of-the-art anomaly detection techniques.
Tasks Anomaly Detection
Published 2019-01-24
URL https://arxiv.org/abs/1901.08508v2
PDF https://arxiv.org/pdf/1901.08508v2.pdf
PWC https://paperswithcode.com/paper/maximum-entropy-generators-for-energy-based
Repo https://github.com/ritheshkumar95/energy_based_generative_models
Framework pytorch
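
The zero-centered gradient penalty mentioned above penalizes the squared norm of the energy's gradient at data points. A short PyTorch sketch with a stand-in energy network:

```python
import torch
import torch.nn as nn

energy = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # toy energy function

def zero_centered_gp(energy_fn, x):
    x = x.clone().requires_grad_(True)
    e = energy_fn(x).sum()
    (grad,) = torch.autograd.grad(e, x, create_graph=True)
    return (grad.norm(2, dim=1) ** 2).mean()      # push ||∇_x E(x)|| toward zero at data points

x_data = torch.randn(128, 2)
penalty = zero_centered_gp(energy, x_data)
penalty.backward()                                # differentiable, so it can join the training loss
```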