April 2, 2020

3152 words 15 mins read

Paper Group ANR 174

Unsupervised Domain Adaptation Through Transferring both the Source-Knowledge and Target-Relatedness Simultaneously. Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation. Self-Supervised Fast Adaptation for Denoising via Meta-Learning. A Collaborative Learning Framework via Federated …

Unsupervised Domain Adaptation Through Transferring both the Source-Knowledge and Target-Relatedness Simultaneously

Title Unsupervised Domain Adaptation Through Transferring both the Source-Knowledge and Target-Relatedness Simultaneously
Authors Qing Tian, Chuang Ma, Meng Cao, Songcan Chen
Abstract Unsupervised domain adaptation (UDA) is an emerging research topic in the field of machine learning and pattern recognition, which aims to help learning on an unlabeled target domain by transferring knowledge from the source domain. To perform UDA, a variety of methods have been proposed, most of which concentrate on the scenario of a single source and a single target domain (1S1T). However, in real applications, a single source domain with multiple target domains is usually involved (1SmT), which cannot be handled directly by those 1S1T models. Unfortunately, although a few related works on 1SmT UDA have been proposed, nearly none of them model the source-domain knowledge and leverage the target-relatedness jointly. To overcome these shortcomings, we herein propose a more general 1SmT UDA model through transferring both the Source-Knowledge and Target-Relatedness, UDA-SKTR for short. In this way, not only the supervision knowledge from the source domain but also the potential relatedness among the target domains is simultaneously modeled for exploitation in the process of 1SmT UDA. In addition, we construct an alternating optimization algorithm to solve the variables of the proposed model with a convergence guarantee. Finally, through extensive experiments on both benchmark and real datasets, we validate the effectiveness and superiority of the proposed method.
Tasks Domain Adaptation, Unsupervised Domain Adaptation
Published 2020-03-18
URL https://arxiv.org/abs/2003.08051v2
PDF https://arxiv.org/pdf/2003.08051v2.pdf
PWC https://paperswithcode.com/paper/domain-adaptation-through-transferring-both
Repo
Framework
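
The UDA-SKTR objective itself is not reproduced here; as a rough, hedged illustration of the kind of quantity UDA methods typically reduce, the sketch below computes an RBF-kernel maximum mean discrepancy (MMD) between source features and each of several target domains (the 1SmT setting). MMD is a standard building block named here explicitly; it is not the paper's actual loss, and all names and data are illustrative.

```python
# Hypothetical illustration only: many UDA methods minimize some measure of
# distribution discrepancy between domains. This computes a simple RBF-kernel
# Maximum Mean Discrepancy (MMD); it is NOT the UDA-SKTR objective.
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise RBF kernel matrix between rows of X and rows of Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(source_feats, target_feats, sigma=1.0):
    # Squared-MMD estimate between two feature sets.
    Kss = rbf_kernel(source_feats, source_feats, sigma)
    Ktt = rbf_kernel(target_feats, target_feats, sigma)
    Kst = rbf_kernel(source_feats, target_feats, sigma)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()

# Toy usage: one source domain, two target domains (the 1SmT setting).
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 16))
targets = [rng.normal(0.5, 1.0, size=(150, 16)), rng.normal(1.0, 1.0, size=(150, 16))]
for i, t in enumerate(targets):
    print(f"MMD^2(source, target_{i}) = {mmd2(source, t):.4f}")
```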

Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation

Title Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation
Authors Zhonghao Wang, Mo Yu, Yunchao Wei, Rogerio Feris, Jinjun Xiong, Wen-mei Hwu, Thomas S. Huang, Honghui Shi
Abstract We consider the problem of unsupervised domain adaptation for semantic segmentation by easing the domain shift between the source domain (synthetic data) and the target domain (real data) in this work. State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue. Based on the observation that stuff categories usually share similar appearances across images of different domains while things (i.e. object instances) have much larger differences, we propose to improve the semantic-level alignment with different strategies for stuff regions and for things: 1) for the stuff categories, we generate feature representation for each class and conduct the alignment operation from the target domain to the source domain; 2) for the thing categories, we generate feature representation for each individual instance and encourage the instance in the target domain to align with the most similar one in the source domain. In this way, the individual differences within thing categories will also be considered to alleviate over-alignment. In addition to our proposed method, we further reveal the reason why the current adversarial loss is often unstable in minimizing the distribution discrepancy and show that our method can help ease this issue by minimizing the most similar stuff and instance features between the source and the target domains. We conduct extensive experiments in two unsupervised domain adaptation tasks, i.e. GTA5 to Cityscapes and SYNTHIA to Cityscapes, and achieve the new state-of-the-art segmentation accuracy.
Tasks Domain Adaptation, Semantic Segmentation, Unsupervised Domain Adaptation
Published 2020-03-18
URL https://arxiv.org/abs/2003.08040v1
PDF https://arxiv.org/pdf/2003.08040v1.pdf
PWC https://paperswithcode.com/paper/differential-treatment-for-stuff-and-things-a
Repo
Framework
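
The sketch below is a hedged PyTorch illustration of the two alignment strategies described in the abstract above: per-class mean features for stuff categories are pulled toward source prototypes, and each target instance feature is pulled toward its most similar source instance. The tensor layout, pseudo-label usage, and the L1/cosine choices are assumptions, not the authors' code.

```python
# Hedged sketch of the differential alignment idea: stuff categories aligned via
# per-class mean features, things via nearest source instances.
import torch
import torch.nn.functional as F

def stuff_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, stuff_ids):
    # src_feats/tgt_feats: [N, D] region features; src_labels/tgt_pseudo: [N] class ids.
    loss = src_feats.new_zeros(())
    count = 0
    for c in stuff_ids:
        s_mask, t_mask = src_labels == c, tgt_pseudo == c
        if s_mask.any() and t_mask.any():
            proto_s = src_feats[s_mask].mean(0)   # source class prototype
            proto_t = tgt_feats[t_mask].mean(0)   # target class mean feature
            loss = loss + F.l1_loss(proto_t, proto_s)
            count += 1
    return loss / max(count, 1)

def thing_alignment_loss(src_inst_feats, tgt_inst_feats):
    # Pull each target instance toward its most similar source instance.
    sim = F.normalize(tgt_inst_feats, dim=1) @ F.normalize(src_inst_feats, dim=1).t()
    nearest = sim.argmax(dim=1)                   # most similar source instance per target instance
    return F.l1_loss(tgt_inst_feats, src_inst_feats[nearest].detach())
```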

Self-Supervised Fast Adaptation for Denoising via Meta-Learning

Title Self-Supervised Fast Adaptation for Denoising via Meta-Learning
Authors Seunghwan Lee, Donghyeon Cho, Jiwon Kim, Tae Hyun Kim
Abstract Under certain statistical assumptions of noise, recent self-supervised approaches for denoising have been introduced to learn network parameters without true clean images, and these methods can restore an image by exploiting information available from the given input (i.e., internal statistics) at test time. However, self-supervised methods have not yet been combined with conventional supervised denoising methods, which train the denoising networks with a large number of external training samples. Thus, we propose a new denoising approach that can greatly outperform the state-of-the-art supervised denoising methods by adapting their network parameters to the given input through self-supervision without changing the network architectures. Moreover, we propose a meta-learning algorithm to enable quick adaptation of parameters to the specific input at test time. We demonstrate that the proposed method can be easily employed with state-of-the-art denoising networks without additional parameters, and achieves state-of-the-art performance on numerous benchmark datasets.
Tasks Denoising, Meta-Learning
Published 2020-01-09
URL https://arxiv.org/abs/2001.02899v1
PDF https://arxiv.org/pdf/2001.02899v1.pdf
PWC https://paperswithcode.com/paper/self-supervised-fast-adaptation-for-denoising
Repo
Framework
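
As a hedged sketch of the test-time adaptation idea above: a pretrained denoiser is fine-tuned on the single noisy input with a self-supervised loss before producing the final output. The masked-pixel loss used here is a common blind-spot-style surrogate and an assumption, as are the step count and learning rate; the paper additionally meta-learns the initialization for fast adaptation, which is omitted.

```python
# Hedged sketch: adapt a pretrained denoiser to one noisy test image via a
# self-supervised masked-pixel loss, then denoise. Details are assumptions.
import torch
import torch.nn.functional as F

def adapt_and_denoise(denoiser, noisy, steps=10, lr=1e-5, mask_ratio=0.02):
    # noisy: [1, C, H, W] test image; denoiser: pretrained torch.nn.Module.
    opt = torch.optim.Adam(denoiser.parameters(), lr=lr)
    for _ in range(steps):
        mask = (torch.rand_like(noisy[:, :1]) < mask_ratio).float()   # pixels to hide
        blurred = F.avg_pool2d(noisy, 3, stride=1, padding=1)         # crude replacement values
        masked_input = noisy * (1 - mask) + blurred * mask
        pred = denoiser(masked_input)
        loss = F.mse_loss(pred * mask, noisy * mask)   # predict hidden pixels from their context
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return denoiser(noisy)
```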

A Collaborative Learning Framework via Federated Meta-Learning

Title A Collaborative Learning Framework via Federated Meta-Learning
Authors Sen Lin, Guang Yang, Junshan Zhang
Abstract Many IoT applications at the network edge demand intelligent decisions in a real-time manner. The edge device alone, however, often cannot achieve real-time edge intelligence due to its constrained computing resources and limited local data. To tackle these challenges, we propose a platform-aided collaborative learning framework where a model is first trained across a set of source edge nodes by a federated meta-learning approach, and then it is rapidly adapted to learn a new task at the target edge node, using a few samples only. Further, we investigate the convergence of the proposed federated meta-learning algorithm under mild conditions on node similarity and the adaptation performance at the target edge. To combat the vulnerability of meta-learning algorithms to possible adversarial attacks, we further propose a robust version of the federated meta-learning algorithm based on distributionally robust optimization, and establish its convergence under mild conditions. Experiments on different datasets demonstrate the effectiveness of the proposed Federated Meta-Learning based framework.
Tasks Meta-Learning
Published 2020-01-09
URL https://arxiv.org/abs/2001.03229v1
PDF https://arxiv.org/pdf/2001.03229v1.pdf
PWC https://paperswithcode.com/paper/a-collaborative-learning-framework-via
Repo
Framework
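
The sketch below illustrates the overall flow only, under heavy assumptions: each source edge node adapts the global model locally, and the platform averages the adapted weights in a Reptile-style meta-update; the target node then adapts with a few samples. The paper's actual federated meta-learning algorithm and its distributionally robust variant differ in detail.

```python
# Hedged, simplified federated meta-learning round (Reptile-style averaging).
# The linear model, SGD steps, and meta step size are placeholders.
import numpy as np

def local_adapt(w, data, lr=0.1, steps=5):
    X, y = data
    for _ in range(steps):                        # plain SGD on a linear regression task
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_meta_round(w_global, node_datasets, meta_lr=0.5):
    adapted = [local_adapt(w_global.copy(), d) for d in node_datasets]
    mean_adapted = np.mean(adapted, axis=0)
    return w_global + meta_lr * (mean_adapted - w_global)   # move toward the averaged adaptation

# At the target edge node, fast adaptation uses only a few samples:
# w_target = local_adapt(w_global, few_shot_data, steps=3)
```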

Optical Non-Line-of-Sight Physics-based 3D Human Pose Estimation

Title Optical Non-Line-of-Sight Physics-based 3D Human Pose Estimation
Authors Mariko Isogawa, Ye Yuan, Matthew O’Toole, Kris Kitani
Abstract We describe a method for 3D human pose estimation from transient images (i.e., a 3D spatio-temporal histogram of photons) acquired by an optical non-line-of-sight (NLOS) imaging system. Our method can perceive 3D human pose by ‘looking around corners’ through the use of light indirectly reflected by the environment. We bring together a diverse set of technologies from NLOS imaging, human pose estimation and deep reinforcement learning to construct an end-to-end data processing pipeline that converts a raw stream of photon measurements into a full 3D human pose sequence estimate. Our contributions are the design of a data representation process, which includes (1) a learnable inverse point spread function (PSF) to convert raw transient images into a deep feature vector; (2) a neural humanoid control policy conditioned on the transient image feature and learned from interactions with a physics simulator; and (3) a data synthesis and augmentation strategy based on depth data that can be transferred to a real-world NLOS imaging system. Our preliminary experiments suggest that our method is able to generalize to real-world NLOS measurements to estimate physically-valid 3D human poses.
Tasks 3D Human Pose Estimation, Pose Estimation
Published 2020-03-31
URL https://arxiv.org/abs/2003.14414v1
PDF https://arxiv.org/pdf/2003.14414v1.pdf
PWC https://paperswithcode.com/paper/optical-non-line-of-sight-physics-based-3d
Repo
Framework

Real-Time Camera Pose Estimation for Sports Fields

Title Real-Time Camera Pose Estimation for Sports Fields
Authors Leonardo Citraro, Pablo Márquez-Neila, Stefano Savarè, Vivek Jayaram, Charles Dubout, Félix Renaut, Andrés Hasfura, Horesh Ben Shitrit, Pascal Fua
Abstract Given an image sequence featuring a portion of a sports field filmed by a moving and uncalibrated camera, such as that of a smartphone, our goal is to automatically compute, in real time, the focal length and extrinsic camera parameters for each image in the sequence without using a priori knowledge of the position and orientation of the camera. To this end, we propose a novel framework that combines accurate localization and robust identification of specific keypoints in the image by using a fully convolutional deep architecture. Our algorithm exploits both the field lines and the players’ image locations, assuming their ground plane positions to be given, to achieve accuracy and robustness that is beyond the current state of the art. We demonstrate its effectiveness on challenging soccer, basketball, and volleyball benchmark datasets.
Tasks Pose Estimation
Published 2020-03-31
URL https://arxiv.org/abs/2003.14109v1
PDF https://arxiv.org/pdf/2003.14109v1.pdf
PWC https://paperswithcode.com/paper/real-time-camera-pose-estimation-for-sports
Repo
Framework
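
Once keypoints on the field are localized in the image and matched to their known positions on the field model, the camera pose can be recovered with a standard PnP solver. The hedged OpenCV sketch below shows only that last step; the paper's keypoint-detection network is omitted, and the coordinates and intrinsics here are placeholders.

```python
# Hedged sketch: recover camera pose from 2D detections of known field keypoints
# (e.g. line intersections) and their 3D field-model positions via PnP.
import numpy as np
import cv2

object_points = np.array([[0, 0, 0], [52.5, 0, 0], [52.5, 34, 0], [0, 34, 0]], dtype=np.float32)  # field coords (m), placeholders
image_points = np.array([[120, 400], [900, 380], [880, 150], [140, 160]], dtype=np.float32)       # detected pixels, placeholders
K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], dtype=np.float32)                        # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix from the rotation vector
    print("camera rotation:\n", R, "\ncamera translation:\n", tvec.ravel())
```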

How to Train Your Event Camera Neural Network

Title How to Train Your Event Camera Neural Network
Authors Timo Stoffregen, Cedric Scheerlinck, Davide Scaramuzza, Tom Drummond, Nick Barnes, Lindsay Kleeman, Robert Mahony
Abstract Event cameras are paradigm-shifting novel sensors that report asynchronous, per-pixel brightness changes called ‘events’ with unparalleled low latency. This makes them ideal for high speed, high dynamic range scenes where conventional cameras would fail. Recent work has demonstrated impressive results using Convolutional Neural Networks (CNNs) for video reconstruction and optic flow with events. We present strategies for improving training data for event-based CNNs that result in a 25-40% boost in the performance of existing state-of-the-art (SOTA) video reconstruction networks retrained with our method, and up to 80% for optic flow networks. A challenge in evaluating event-based video reconstruction is the lack of quality ground-truth images in existing datasets. To address this, we present a new High Quality Frames (HQF) dataset, containing events and ground-truth frames from a DAVIS240C that are well-exposed and minimally motion-blurred. We evaluate our method on HQF and several existing major event camera datasets.
Tasks Video Reconstruction
Published 2020-03-20
URL https://arxiv.org/abs/2003.09078v1
PDF https://arxiv.org/pdf/2003.09078v1.pdf
PWC https://paperswithcode.com/paper/how-to-train-your-event-camera-neural-network
Repo
Framework

Unboxing MAC Protocol Design Optimization Using Deep Learning

Title Unboxing MAC Protocol Design Optimization Using Deep Learning
Authors Hannaneh Barahouei Pasandi, Tamer Nadeem
Abstract Evolving amendments of the 802.11 standards feature a large set of physical and MAC layer control parameters to support the increasing communication objectives spanning application requirements and network dynamics. The significant growth and penetration of various devices come along with a tremendous increase in the number of applications supporting various domains and services, which will impose a never-before-seen burden on wireless networks. The challenge, however, is that each scenario requires a different wireless protocol functionality and parameter setting, and it is difficult to determine how to optimally tune these functionalities and parameters to adapt to varying network scenarios. The traditional trial-and-error approach of manually tuning parameters is not just becoming difficult to repeat but is also sub-optimal for different networking scenarios. In this paper, we describe how a deep reinforcement learning framework can be trained to learn the relation between different parameters in the physical and MAC layers, and show how our learning-based approach could provide insights into the protocol design optimization task.
Tasks
Published 2020-02-06
URL https://arxiv.org/abs/2002.03795v1
PDF https://arxiv.org/pdf/2002.03795v1.pdf
PWC https://paperswithcode.com/paper/unboxing-mac-protocol-design-optimization
Repo
Framework
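
A hedged toy sketch of the idea in the entry above: treat discrete MAC/PHY parameter settings as actions and learn which configuration maximizes a reward such as measured throughput. The paper uses a deep reinforcement learning framework; the tabular bandit-style update, configuration grid, and reward function below are stand-ins for illustration only.

```python
# Hedged toy sketch: learn a good MAC/PHY configuration from reward feedback.
import random

configs = [(cw, rate) for cw in (15, 31, 63) for rate in (6, 24, 54)]   # contention window x PHY rate (placeholders)
Q = {c: 0.0 for c in configs}

def measured_reward(config):
    # Placeholder for a real throughput/latency measurement of the network.
    cw, rate = config
    return rate / 54 - cw / 200 + random.gauss(0, 0.05)

alpha, eps = 0.1, 0.2
for step in range(500):
    c = random.choice(configs) if random.random() < eps else max(Q, key=Q.get)
    r = measured_reward(c)
    Q[c] += alpha * (r - Q[c])          # bandit-style value update; no state transitions modeled
print("best configuration found:", max(Q, key=Q.get))
```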

An Intelligent and Time-Efficient DDoS Identification Framework for Real-Time Enterprise Networks SAD-F: Spark Based Anomaly Detection Framework

Title An Intelligent and Time-Efficient DDoS Identification Framework for Real-Time Enterprise Networks SAD-F: Spark Based Anomaly Detection Framework
Authors Awais Ahmed, Sufian Hameed, Muhammad Rafi, Qublai Khan Ali Mirza
Abstract Anomaly detection is a crucial step for preventing malicious activities in the network and keeping resources available at all times for legitimate users. It has been noticed across various studies that classical anomaly detectors work well with small and sampled data, but the chances of failure increase with real-time (non-sampled) traffic data. In this paper, we explore security analytics techniques for DDoS anomaly detection using different machine learning techniques, and we propose a novel approach that deals with real traffic as input to the system. Further, we study and compare the performance of our proposed framework on three different testbeds: normal commodity hardware, a low-end system, and a high-end system. Hardware details of the testbeds are discussed in the respective section. We also investigate the performance of the classifiers in (near) real-time detection of anomalies and attacks. This study also focuses on the feature selection process, which is as important for anomaly detection as it is for general modeling problems. Several techniques have been studied for feature selection, and we observe that proper feature selection can improve performance in terms of the model’s execution time, which depends heavily on the traffic file or traffic capturing process.
Tasks Anomaly Detection, Feature Selection
Published 2020-01-21
URL https://arxiv.org/abs/2001.08155v2
PDF https://arxiv.org/pdf/2001.08155v2.pdf
PWC https://paperswithcode.com/paper/live-anomaly-detection-based-on-machine
Repo
Framework
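
Below is a hedged sketch of the detection stage only: a classifier trained on flow-level features flags DDoS traffic. The feature names and synthetic data are purely illustrative; the paper evaluates several classifiers, studies feature selection, and runs the pipeline on Spark over real traffic.

```python
# Hedged sketch: flow-feature classifier for DDoS detection on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Toy stand-ins for flow features: packet rate, mean packet size, SYN ratio, distinct ports.
X_normal = rng.normal([100, 800, 0.1, 5], [30, 200, 0.05, 2], size=(1000, 4))
X_ddos = rng.normal([5000, 60, 0.9, 1], [1000, 20, 0.05, 1], size=(1000, 4))
X = np.vstack([X_normal, X_ddos])
y = np.array([0] * 1000 + [1] * 1000)   # 0 = benign, 1 = DDoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```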

Theoretically Expressive and Edge-aware Graph Learning

Title Theoretically Expressive and Edge-aware Graph Learning
Authors Federico Errica, Davide Bacciu, Alessio Micheli
Abstract We propose a new Graph Neural Network that combines recent advancements in the field. We give theoretical contributions by proving that the model is strictly more general than the Graph Isomorphism Network and the Gated Graph Neural Network, as it can approximate the same functions and deal with arbitrary edge values. Then, we show how single-node information can flow through the graph unchanged.
Tasks
Published 2020-01-24
URL https://arxiv.org/abs/2001.09005v1
PDF https://arxiv.org/pdf/2001.09005v1.pdf
PWC https://paperswithcode.com/paper/theoretically-expressive-and-edge-aware-graph
Repo
Framework
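
In the spirit of the model described above, the hedged sketch below shows a message-passing layer in which neighbor messages are weighted by arbitrary (possibly continuous) edge values before aggregation. This is an illustration of edge-aware message passing in general, not the authors' architecture or code.

```python
# Hedged sketch: edge-aware message passing where messages are scaled by edge values.
import torch
import torch.nn as nn

class EdgeAwareLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim))

    def forward(self, x, edge_index, edge_weight):
        # x: [N, D] node features; edge_index: [2, E] (src, dst); edge_weight: [E].
        src, dst = edge_index
        messages = x[src] * edge_weight.unsqueeze(1)              # scale each message by its edge value
        agg = torch.zeros_like(x).index_add_(0, dst, messages)    # sum-aggregate messages per destination node
        return self.mlp(x + agg)                                  # GIN-style update combining self and neighborhood
```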

Never Give Up: Learning Directed Exploration Strategies

Title Never Give Up: Learning Directed Exploration Strategies
Authors Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Daniel Guo, Bilal Piot, Steven Kapturowski, Olivier Tieleman, Martín Arjovsky, Alexander Pritzel, Andrew Bolt, Charles Blundell
Abstract We propose a reinforcement learning agent to solve hard exploration games by learning a range of directed exploratory policies. We construct an episodic memory-based intrinsic reward using k-nearest neighbors over the agent’s recent experience to train the directed exploratory policies, thereby encouraging the agent to repeatedly revisit all states in its environment. A self-supervised inverse dynamics model is used to train the embeddings of the nearest neighbour lookup, biasing the novelty signal towards what the agent can control. We employ the framework of Universal Value Function Approximators (UVFA) to simultaneously learn many directed exploration policies with the same neural network, with different trade-offs between exploration and exploitation. By using the same neural network for different degrees of exploration/exploitation, transfer is demonstrated from predominantly exploratory policies yielding effective exploitative policies. The proposed method can be incorporated into modern distributed RL agents that collect large amounts of experience from many actors running in parallel on separate environment instances. Our method doubles the performance of the base agent in all hard-exploration games in the Atari-57 suite while maintaining a very high score across the remaining games, obtaining a median human normalised score of 1344.0%. Notably, the proposed method is the first algorithm to achieve non-zero rewards (with a mean score of 8,400) in the game of Pitfall! without using demonstrations or hand-crafted features.
Tasks
Published 2020-02-14
URL https://arxiv.org/abs/2002.06038v1
PDF https://arxiv.org/pdf/2002.06038v1.pdf
PWC https://paperswithcode.com/paper/never-give-up-learning-directed-exploration-1
Repo
Framework
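
A hedged sketch of the episodic intrinsic reward described above: the novelty of the current state embedding is scored against its k nearest neighbours already stored in an episodic memory, using an inverse-kernel similarity. The constants and normalization details below are illustrative; the full agent also combines this with a long-term novelty signal, which is omitted.

```python
# Hedged sketch: episodic k-nearest-neighbour intrinsic reward over state embeddings.
import numpy as np

def episodic_intrinsic_reward(embedding, memory, k=10, eps=1e-3, c=1e-3):
    if len(memory) == 0:
        return 1.0
    mem = np.stack(memory)
    d2 = np.sum((mem - embedding) ** 2, axis=1)     # squared distances to stored embeddings
    knn = np.sort(d2)[: min(k, len(d2))]
    knn = knn / max(knn.mean(), eps)                # normalize distances by their mean
    kernel = eps / (knn + eps)                      # inverse kernel: close neighbours give large values
    return 1.0 / (np.sqrt(kernel.sum()) + c)        # many close neighbours -> small novelty reward

# Per step (illustrative): r_total = r_extrinsic + beta * episodic_intrinsic_reward(phi(s), memory)
#                          memory.append(phi(s))
```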

Attentive Group Equivariant Convolutional Networks

Title Attentive Group Equivariant Convolutional Networks
Authors David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn
Abstract Although group convolutional networks are able to learn powerful representations based on symmetry patterns, they lack explicit means to learn meaningful relationships among them (e.g., relative positions and poses). In this paper, we present attentive group equivariant convolutions, a generalization of the group convolution, in which attention is applied during the course of convolution to accentuate meaningful symmetry combinations and suppress non-plausible, misleading ones. We indicate that prior work on visual attention can be described as special cases of our proposed framework and show empirically that our attentive group equivariant convolutional networks consistently outperform conventional group convolutional networks on benchmark image datasets. Simultaneously, we provide interpretability to the learned concepts through the visualization of equivariant attention maps.
Tasks
Published 2020-02-07
URL https://arxiv.org/abs/2002.03830v2
PDF https://arxiv.org/pdf/2002.03830v2.pdf
PWC https://paperswithcode.com/paper/attentive-group-equivariant-convolutional
Repo
Framework
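
As a loose, hedged sketch of the idea above: given group-convolution features with an explicit group axis (e.g. four rotations), attention weights computed over that axis can accentuate some symmetry responses and suppress others. The real model defines attention so that equivariance is preserved; this toy version does not guarantee that, and the layer below is an assumption for illustration only.

```python
# Hedged toy sketch: attention over the group axis of group-convolution features.
import torch
import torch.nn as nn

class GroupAxisAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv3d(channels, channels, kernel_size=1)   # per-position attention scores

    def forward(self, x):
        # x: [B, C, G, H, W] output of a group convolution (G = group size, e.g. 4 rotations).
        attn = torch.softmax(self.score(x), dim=2)   # normalize scores over the group axis
        return x * attn                              # reweight symmetry responses
```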

Data Augmentation for Personal Knowledge Graph Population

Title Data Augmentation for Personal Knowledge Graph Population
Authors Lingraj S Vannur, Lokesh Nagalapatti, Balaji Ganesan, Hima Patel
Abstract A personal knowledge graph comprising people as nodes, their personal data as node attributes, and their relationships as edges has a number of applications in de-identification, master data management, and fraud prevention. While artificial neural networks have led to significant improvements in different tasks in cold start knowledge graph population, the overall F1 of the system remains quite low. This problem is more acute in personal knowledge graph population which presents additional challenges with regard to data protection, fairness and privacy. In this work, we present a system that uses rule based annotators to augment training data for neural models, and for slot filling to increase the diversity of the populated knowledge graph. We also propose a representative set sampling method to use the populated knowledge graph data for downstream applications. We introduce new resources and discuss our results.
Tasks Data Augmentation, Slot Filling
Published 2020-02-23
URL https://arxiv.org/abs/2002.10943v1
PDF https://arxiv.org/pdf/2002.10943v1.pdf
PWC https://paperswithcode.com/paper/data-augmentation-for-personal-knowledge
Repo
Framework
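
The sketch below illustrates the rule-based-annotator part of the pipeline above in its simplest form: regular-expression rules tag personal attributes in text, producing weak labels that can augment training data for a neural model. The patterns and label names are illustrative, not the paper's resources.

```python
# Hedged sketch: a tiny rule-based annotator producing weak labels for personal data.
import re

RULES = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def annotate(text):
    # Return (start, end, label) spans for every rule match in the text.
    spans = []
    for label, pattern in RULES.items():
        for m in pattern.finditer(text):
            spans.append((m.start(), m.end(), label))
    return sorted(spans)

print(annotate("Contact Jane at jane.doe@example.com or +1 555 010 2030."))
```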

Fine-Tuning BERT for Schema-Guided Zero-Shot Dialogue State Tracking

Title Fine-Tuning BERT for Schema-Guided Zero-Shot Dialogue State Tracking
Authors Yu-Ping Ruan, Zhen-Hua Ling, Jia-Chen Gu, Quan Liu
Abstract We present our work on Track 4 in the Dialogue System Technology Challenges 8 (DSTC8). The DSTC8-Track 4 aims to perform dialogue state tracking (DST) under the zero-shot settings, in which the model needs to generalize on unseen service APIs given a schema definition of these target APIs. Serving as the core for many virtual assistants such as Siri, Alexa, and Google Assistant, the DST keeps track of the user’s goal and what happened in the dialogue history, mainly including intent prediction, slot filling, and user state tracking, which tests models’ natural language understanding ability. Recently, the pretrained language models have achieved state-of-the-art results and shown impressive generalization ability on various NLP tasks, which provide a promising way to perform zero-shot learning for language understanding. Based on this, we propose a schema-guided paradigm for zero-shot dialogue state tracking (SGP-DST) by fine-tuning BERT, one of the most popular pretrained language models. The SGP-DST system contains four modules for intent prediction, slot prediction, slot transfer prediction, and user state summarizing respectively. According to the official evaluation results, our SGP-DST (team12) ranked 3rd on the joint goal accuracy (primary evaluation metric for ranking submissions) and 1st on the requested slots F1 among 25 participant teams.
Tasks Dialogue State Tracking, Slot Filling, Zero-Shot Learning
Published 2020-02-01
URL https://arxiv.org/abs/2002.00181v1
PDF https://arxiv.org/pdf/2002.00181v1.pdf
PWC https://paperswithcode.com/paper/fine-tuning-bert-for-schema-guided-zero-shot
Repo
Framework
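
A hedged sketch of one component of such a system: fine-tuning BERT as a classifier for intent prediction, where the utterance is paired with the schema's intent description so the model can generalize to unseen APIs. The example utterance, intent description, labels, and hyperparameters are placeholders; SGP-DST comprises four such modules and more elaborate data handling.

```python
# Hedged sketch: schema-guided intent prediction by fine-tuning BERT on
# (utterance, intent description) pairs. Data and labels are placeholders.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

utterance = "I want to book a flight to Boston next Friday."
intent_description = "SearchFlights: find flights matching the user's travel plans."
enc = tokenizer(utterance, intent_description, return_tensors="pt", truncation=True)

labels = torch.tensor([1])     # 1 = this intent is active for the utterance
out = model(**enc, labels=labels)
out.loss.backward()            # one fine-tuning step (optimizer and batching omitted)
print("loss:", out.loss.item())
```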

Balancing reconstruction error and Kullback-Leibler divergence in Variational Autoencoders

Title Balancing reconstruction error and Kullback-Leibler divergence in Variational Autoencoders
Authors Andrea Asperti, Matteo Trentin
Abstract In the loss function of Variational Autoencoders there is a well known tension between two components: the reconstruction loss, improving the quality of the resulting images, and the Kullback-Leibler divergence, acting as a regularizer of the latent space. Correctly balancing these two components is a delicate issue, easily resulting in poor generative behaviours. In a recent work, Dai and Wipf obtained a significant improvement by allowing the network to learn the balancing factor during training, according to a suitable loss function. In this article, we show that learning can be replaced by a simple deterministic computation, helping to understand the underlying mechanism, and resulting in a faster and more accurate behaviour. On typical datasets such as CIFAR and CelebA, our technique significantly outperforms all previous VAE architectures.
Tasks
Published 2020-02-18
URL https://arxiv.org/abs/2002.07514v1
PDF https://arxiv.org/pdf/2002.07514v1.pdf
PWC https://paperswithcode.com/paper/balancing-reconstruction-error-and-kullback
Repo
Framework
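
As a hedged sketch of the balancing idea above: instead of learning the weighting factor between the reconstruction term and the KL divergence, one keeps a running deterministic estimate of the reconstruction MSE and uses it to rescale the reconstruction term. The update rule and constants below are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: VAE loss with a deterministically computed balancing factor.
import torch

mse_estimate = 1.0   # running estimate of the per-pixel reconstruction error

def vae_loss(x, x_rec, mu, logvar, momentum=0.99):
    global mse_estimate
    recon_mse = torch.mean((x - x_rec) ** 2)
    # Deterministic update of the balancing factor (no learned parameter involved).
    mse_estimate = momentum * mse_estimate + (1 - momentum) * recon_mse.item()
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_mse / (2 * mse_estimate) + kl
```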