October 20, 2019

2895 words 14 mins read

Paper Group AWR 231

A Distributed Epigenetic Shape Formation and Regeneration Algorithm for a Swarm of Robots

Title A Distributed Epigenetic Shape Formation and Regeneration Algorithm for a Swarm of Robots
Authors Rahul Shivnarayan Mishra, Tushar Semwal, Shivashankar B. Nair
Abstract Living cells exhibit both growth and regeneration of body tissues. Epigenetic Tracking (ET) models these growth and regenerative qualities of living cells and has been used to generate complex 2D and 3D shapes. In this paper, we present an ET-based algorithm that aids a swarm of identically-programmed robots to form arbitrary shapes and regenerate them when cut. The algorithm works in a distributed manner, using only local interactions and computations without any central control, and guides the robots to form the shape in a triangular lattice structure. In case of damage or splitting of the shape, it helps each set of the remaining robots to regenerate and position themselves to build scaled-down versions of the original shape. The paper presents the shapes formed and regenerated by the algorithm using the Kilombo simulator.
Tasks
Published 2018-10-29
URL http://arxiv.org/abs/1810.11935v1
PDF http://arxiv.org/pdf/1810.11935v1.pdf
PWC https://paperswithcode.com/paper/a-distributed-epigenetic-shape-formation-and
Repo https://github.com/bgmichelsen/Swarm_Shape_Formation
Framework none
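
The entry above centers on arranging robots in a triangular lattice. The sketch below only generates target lattice positions inside a binary shape mask; it is a hypothetical helper for illustration, and the paper's distributed assignment, local-interaction, and regeneration rules are not shown.

```python
import numpy as np

def triangular_lattice_targets(shape_mask, spacing=1.0):
    """Target coordinates on a triangular lattice inside a binary mask.
    Alternate rows are offset by half the spacing, which is what makes
    the lattice triangular rather than square."""
    rows, cols = shape_mask.shape
    targets = []
    dy = spacing * np.sqrt(3) / 2            # vertical pitch of the lattice
    y, row = 0.0, 0
    while y < rows:
        x = 0.0 if row % 2 == 0 else spacing / 2
        while x < cols:
            if shape_mask[int(y), int(x)]:   # keep points inside the shape
                targets.append((x, y))
            x += spacing
        y += dy
        row += 1
    return np.array(targets)

# Toy usage: lattice points for a filled 10x10 square
mask = np.ones((10, 10), dtype=bool)
print(triangular_lattice_targets(mask, spacing=2.0).shape)
```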

Investigating Limit Order Book Characteristics for Short Term Price Prediction: a Machine Learning Approach

Title Investigating Limit Order Book Characteristics for Short Term Price Prediction: a Machine Learning Approach
Authors Faisal I Qureshi
Abstract With the proliferation of algorithmic high-frequency trading in financial markets, the Limit Order Book (LOB) has generated increased research interest. Research is still at an early stage, and there is much we do not understand about the dynamics of LOBs. In this paper, we employ a machine learning approach to investigate LOB features and their potential to predict short-term price movements. This initial broad-based investigation results in some novel observations about LOB dynamics and identifies several promising directions for further research. Furthermore, we obtain prediction results that are significantly superior to a baseline predictor.
Tasks
Published 2018-12-20
URL http://arxiv.org/abs/1901.10534v1
PDF http://arxiv.org/pdf/1901.10534v1.pdf
PWC https://paperswithcode.com/paper/investigating-limit-order-book
Repo https://github.com/radoslawkrolikowski/financial-market-data-analysis
Framework pytorch
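
Order-book imbalance is one of the most commonly studied LOB features for short-term prediction. The toy sketch below computes it and thresholds it into a direction signal; this illustrates the kind of feature the paper investigates, and is not the paper's feature set or model.

```python
import numpy as np

def book_imbalance(bid_sizes, ask_sizes, levels=5):
    """Volume imbalance over the top `levels` of the book, in [-1, 1]."""
    b = np.sum(bid_sizes[:levels])
    a = np.sum(ask_sizes[:levels])
    return (b - a) / (b + a)

def predict_direction(bid_sizes, ask_sizes, threshold=0.2):
    """Toy predictor: up if bids dominate, down if asks do, else flat.
    A real model would feed many such features to a trained classifier."""
    imb = book_imbalance(bid_sizes, ask_sizes)
    if imb > threshold:
        return +1
    if imb < -threshold:
        return -1
    return 0

print(predict_direction(np.array([120, 80, 60]), np.array([40, 30, 20])))  # +1
```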

A Differentially Private Wilcoxon Signed-Rank Test

Title A Differentially Private Wilcoxon Signed-Rank Test
Authors Simon Couch, Zeki Kazan, Kaiyan Shi, Andrew Bray, Adam Groce
Abstract Hypothesis tests are a crucial statistical tool for data mining and are the workhorse of scientific research in many fields. Here we present a differentially private analogue of the classic Wilcoxon signed-rank hypothesis test, which is used when comparing sets of paired (e.g., before-and-after) data values. We present not only a private estimate of the test statistic, but a method to accurately compute a p-value and assess statistical significance. We evaluate our test on both simulated and real data. Compared to the only existing private test for this situation, that of Task and Clifton, we find that our test requires less than half as much data to achieve the same statistical power.
Tasks
Published 2018-09-05
URL http://arxiv.org/abs/1809.01635v1
PDF http://arxiv.org/pdf/1809.01635v1.pdf
PWC https://paperswithcode.com/paper/a-differentially-private-wilcoxon-signed-rank
Repo https://github.com/simonpcouch/wilcoxon
Framework none
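
A minimal sketch of the general recipe: release a noisy signed-rank statistic under the Laplace mechanism. The sensitivity bound used here (2n for n pairs) and the naive tie handling are assumptions made for the illustration; the paper derives its own private statistic and the calibration needed for valid p-values.

```python
import numpy as np

def private_wilcoxon(x, y, epsilon, rng=None):
    """Illustrative differentially private signed-rank statistic:
    the classic statistic plus Laplace noise scaled to an assumed
    worst-case sensitivity. Not the paper's exact mechanism."""
    rng = rng or np.random.default_rng(0)
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    d = d[d != 0]                                 # drop zero differences
    n = len(d)
    ranks = np.argsort(np.argsort(np.abs(d))) + 1  # ranks of |d|, 1..n
    w = np.sum(np.sign(d) * ranks)                # signed-rank statistic
    sensitivity = 2 * n                           # assumed bound for the sketch
    return w + rng.laplace(scale=sensitivity / epsilon)

x = np.array([5.1, 4.8, 6.0, 5.5, 4.9])
y = np.array([4.9, 4.7, 5.2, 5.6, 4.5])
print(private_wilcoxon(x, y, epsilon=1.0))
```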

Semantic Aware Attention Based Deep Object Co-segmentation

Title Semantic Aware Attention Based Deep Object Co-segmentation
Authors Hong Chen, Yifei Huang, Hideki Nakayama
Abstract Object co-segmentation is the task of segmenting the same objects from multiple images. In this paper, we propose an attention-based deep network for object co-segmentation that uses a novel attention mechanism in the bottleneck layer of a deep neural network to select semantically related features. Furthermore, we take advantage of the attention learner and propose an algorithm that segments multiple input images in linear time complexity. Experimental results demonstrate that our model achieves state-of-the-art performance on multiple datasets, with a significant reduction in computational time.
Tasks
Published 2018-10-16
URL http://arxiv.org/abs/1810.06859v1
PDF http://arxiv.org/pdf/1810.06859v1.pdf
PWC https://paperswithcode.com/paper/semantic-aware-attention-based-deep-object-co
Repo https://github.com/dmsi-ods/test-repo
Framework pytorch
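
A minimal PyTorch sketch of channel attention at the bottleneck, conditioned on both images so that channels shared across the pair are emphasized. The layer sizes and gating design are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    """Re-weight each image's bottleneck features with a channel vector
    computed from the pair, emphasizing channels common to both."""
    def __init__(self, channels=512):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, fa, fb):                   # fa, fb: (B, C, H, W)
        ga = fa.mean(dim=(2, 3))                 # global average pool -> (B, C)
        gb = fb.mean(dim=(2, 3))
        w = self.fc(torch.cat([ga, gb], dim=1))  # shared channel weights
        w = w[:, :, None, None]
        return fa * w, fb * w                    # attended bottleneck features

fa, fb = torch.randn(2, 512, 8, 8), torch.randn(2, 512, 8, 8)
oa, ob = CoAttention()(fa, fb)
print(oa.shape)
```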

GSPN: Generative Shape Proposal Network for 3D Instance Segmentation in Point Cloud

Title GSPN: Generative Shape Proposal Network for 3D Instance Segmentation in Point Cloud
Authors Li Yi, Wang Zhao, He Wang, Minhyuk Sung, Leonidas Guibas
Abstract We introduce a novel 3D object proposal approach named Generative Shape Proposal Network (GSPN) for instance segmentation in point cloud data. Instead of treating object proposal as a direct bounding box regression problem, we take an analysis-by-synthesis strategy and generate proposals by reconstructing shapes from noisy observations in a scene. We incorporate GSPN into a novel 3D instance segmentation framework named Region-based PointNet (R-PointNet), which allows flexible proposal refinement and instance segmentation generation. We achieve state-of-the-art performance on several 3D instance segmentation tasks. The success of GSPN largely comes from its emphasis on geometric understanding during object proposal, which greatly reduces proposals with low objectness.
Tasks 3D Instance Segmentation, Instance Segmentation, Semantic Segmentation
Published 2018-12-08
URL http://arxiv.org/abs/1812.03320v1
PDF http://arxiv.org/pdf/1812.03320v1.pdf
PWC https://paperswithcode.com/paper/gspn-generative-shape-proposal-network-for-3d
Repo https://github.com/ericyi/GSPN
Framework tf
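
The analysis-by-synthesis idea in miniature: encode a noisy seed neighborhood of points and decode a reconstructed object point set that serves as the proposal. The toy PointNet-style network below is a stand-in sketch, not GSPN's full generative model or proposal refinement.

```python
import torch
import torch.nn as nn

class TinyShapeProposer(nn.Module):
    """Encode a point neighborhood to a set feature, then decode a
    reconstructed point set; its fit to the observation can serve as
    an objectness signal for the proposal."""
    def __init__(self, n_out=64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                    nn.Linear(64, 128))
        self.decode = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                                    nn.Linear(256, n_out * 3))
        self.n_out = n_out

    def forward(self, pts):                       # pts: (B, N, 3)
        feat = self.encode(pts).max(dim=1).values  # permutation-invariant pool
        return self.decode(feat).view(-1, self.n_out, 3)

proposal = TinyShapeProposer()(torch.randn(2, 128, 3))
print(proposal.shape)   # (2, 64, 3) reconstructed shape proposal
```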

Dynamic mode decomposition in vector-valued reproducing kernel Hilbert spaces for extracting dynamical structure among observables

Title Dynamic mode decomposition in vector-valued reproducing kernel Hilbert spaces for extracting dynamical structure among observables
Authors Keisuke Fujii, Yoshinobu Kawahara
Abstract Understanding nonlinear dynamical systems (NLDSs) is challenging in a variety of engineering and scientific fields. Dynamic mode decomposition (DMD), a numerical algorithm for the spectral analysis of Koopman operators, has been attracting attention as a way of obtaining global modal descriptions of NLDSs without requiring explicit prior knowledge. However, since existing DMD algorithms are in principle formulated based on the concatenation of scalar observables, they are not directly applicable to data with dependent structures among observables, such as a sequence of graphs. In this paper, we formulate Koopman spectral analysis for NLDSs with structures among observables and propose an estimation algorithm for this problem. The method can extract and visualize the underlying low-dimensional global dynamics of NLDSs with structures among observables from data, which can be useful in understanding the underlying dynamics of such NLDSs. To this end, we first formulate the problem of estimating spectra of the Koopman operator defined in vector-valued reproducing kernel Hilbert spaces, and then develop an estimation procedure for this problem by reformulating tensor-based DMD. As a special case of our method, we propose Graph DMD, a numerical algorithm for Koopman spectral analysis of graph dynamical systems that uses a sequence of adjacency matrices. We investigate the empirical performance of our method using synthetic and real-world data.
Tasks
Published 2018-08-30
URL https://arxiv.org/abs/1808.10551v4
PDF https://arxiv.org/pdf/1808.10551v4.pdf
PWC https://paperswithcode.com/paper/dynamic-mode-decomposition-in-vector-valued
Repo https://github.com/keisuke198619/GraphDMD
Framework none
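
For reference, here is the scalar-observable baseline the paper generalizes: exact DMD computed from snapshot pairs via a truncated SVD. The vector-valued RKHS and tensor-based reformulations are not shown.

```python
import numpy as np

def exact_dmd(X, Y, rank):
    """Exact DMD for snapshot pairs Y ~ A X: project A onto the leading
    POD modes of X, then lift the eigenvectors back to DMD modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W   # exact DMD modes
    return eigvals, modes

# Toy linear system x_{t+1} = A x_t; DMD should recover eig(A) = 0.9 +/- 0.2i
rng = np.random.default_rng(0)
A = np.array([[0.9, -0.2], [0.2, 0.9]])
X = rng.normal(size=(2, 50))
Y = A @ X
print(np.sort(exact_dmd(X, Y, rank=2)[0]))
```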

Label Refinery: Improving ImageNet Classification through Label Progression

Title Label Refinery: Improving ImageNet Classification through Label Progression
Authors Hessam Bagherinezhad, Maxwell Horton, Mohammad Rastegari, Ali Farhadi
Abstract Among the three main components (data, labels, and models) of any supervised learning system, data and models have been the main subjects of active research. However, studying labels and their properties has received very little attention. Current principles and paradigms of labeling impose several challenges to machine learning algorithms. Labels are often incomplete, ambiguous, and redundant. In this paper we study the effects of various properties of labels and introduce the Label Refinery: an iterative procedure that updates the ground truth labels after examining the entire dataset. We show significant gain using refined labels across a wide range of models. Using a Label Refinery improves the state-of-the-art top-1 accuracy of (1) AlexNet from 59.3 to 67.2, (2) MobileNet from 70.6 to 73.39, (3) MobileNet-0.25 from 50.6 to 55.59, (4) VGG19 from 72.7 to 75.46, and (5) Darknet19 from 72.9 to 74.47.
Tasks
Published 2018-05-07
URL http://arxiv.org/abs/1805.02641v1
PDF http://arxiv.org/pdf/1805.02641v1.pdf
PWC https://paperswithcode.com/paper/label-refinery-improving-imagenet
Repo https://github.com/hessamb/label-refinery
Framework pytorch
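
The core training signal is simple: the student is fit to the soft labels produced by the previous model in the refinement chain. Below is a hedged sketch of that soft cross-entropy step, not the authors' full training recipe (crops, iteration schedule, etc.).

```python
import torch
import torch.nn.functional as F

def refinery_loss(student_logits, refined_probs):
    """Soft cross-entropy against the refined (soft) labels produced by
    the previous model's predictions on the training images."""
    log_p = F.log_softmax(student_logits, dim=1)
    return -(refined_probs * log_p).sum(dim=1).mean()

# Toy usage: refined labels from a previous model replace one-hot targets
logits = torch.randn(4, 1000, requires_grad=True)
with torch.no_grad():
    refined = F.softmax(torch.randn(4, 1000), dim=1)  # stand-in teacher output
loss = refinery_loss(logits, refined)
loss.backward()
print(float(loss))
```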

ECO: Efficient Convolutional Network for Online Video Understanding

Title ECO: Efficient Convolutional Network for Online Video Understanding
Authors Mohammadreza Zolfaghari, Kamaljeet Singh, Thomas Brox
Abstract The state of the art in video understanding suffers from two problems: (1) The major part of reasoning is performed locally in the video and therefore misses important relationships within actions that span several seconds. (2) While there are local methods with fast per-frame processing, the processing of the whole video is not efficient and hampers fast video retrieval or online classification of long-term activities. In this paper, we introduce a network architecture that takes long-term content into account and enables fast per-video processing at the same time. The architecture is based on merging long-term content already in the network rather than in a post-hoc fusion. Together with a sampling strategy that exploits the fact that neighboring frames are largely redundant, this yields high-quality action classification and video captioning at up to 230 videos per second, where each video can consist of a few hundred frames. The approach achieves competitive performance across all datasets while being 10x to 80x faster than state-of-the-art methods.
Tasks Action Classification, Action Recognition In Videos, Video Captioning, Video Retrieval, Video Understanding
Published 2018-04-24
URL http://arxiv.org/abs/1804.09066v2
PDF http://arxiv.org/pdf/1804.09066v2.pdf
PWC https://paperswithcode.com/paper/eco-efficient-convolutional-network-for
Repo https://github.com/mzolfaghari/ECO-efficient-video-understanding
Framework pytorch
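
A compressed PyTorch sketch of the design: a shared 2D head extracts per-frame features from a handful of sampled frames, and a small 3D network fuses the stacked feature maps over time inside the network rather than post hoc. Channel counts and depths are illustrative, not the published architecture.

```python
import torch
import torch.nn as nn

class EcoLikeNet(nn.Module):
    """Cheap 2D convolutions per sampled frame, then 3D fusion over the
    stacked feature maps, then a classifier over the pooled clip feature."""
    def __init__(self, num_classes=400):
        super().__init__()
        self.frame_net = nn.Sequential(            # shared 2D head per frame
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 96, 3, stride=2, padding=1), nn.ReLU())
        self.temporal_net = nn.Sequential(          # 3D fusion over time
            nn.Conv3d(96, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.fc = nn.Linear(128, num_classes)

    def forward(self, clip):                        # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        feats = self.frame_net(clip.flatten(0, 1))  # (B*T, C, h, w)
        feats = feats.view(B, T, *feats.shape[1:]).permute(0, 2, 1, 3, 4)
        return self.fc(self.temporal_net(feats).flatten(1))

print(EcoLikeNet()(torch.randn(2, 16, 3, 64, 64)).shape)  # (2, 400)
```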

Universal Transformers

Title Universal Transformers
Authors Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, Łukasz Kaiser
Abstract Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and find that it improves accuracy on several tasks. In contrast to the standard Transformer, under certain assumptions, UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset.
Tasks Language Modelling, Learning to Execute, Machine Translation
Published 2018-07-10
URL http://arxiv.org/abs/1807.03819v3
PDF http://arxiv.org/pdf/1807.03819v3.pdf
PWC https://paperswithcode.com/paper/universal-transformers
Repo https://github.com/akikaaa/transformers
Framework pytorch
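
A minimal sketch of the UT recurrence: a single weight-shared Transformer layer applied repeatedly, with position and timestep embeddings added at each step. The dynamic per-position (ACT-style) halting mechanism is omitted here for brevity, so the depth is fixed.

```python
import torch
import torch.nn as nn

class UniversalTransformerEncoder(nn.Module):
    """One Transformer layer re-applied for a fixed number of steps;
    recurrence is in depth, so all positions are processed in parallel."""
    def __init__(self, d_model=128, nhead=4, steps=6, max_len=512):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, nhead,
                                                batch_first=True)
        self.step_emb = nn.Embedding(steps, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.steps = steps

    def forward(self, x):                        # x: (B, L, d_model)
        pos = torch.arange(x.size(1), device=x.device)
        for t in range(self.steps):              # same weights every step
            x = x + self.pos_emb(pos) + self.step_emb(
                torch.tensor(t, device=x.device))
            x = self.layer(x)
        return x

out = UniversalTransformerEncoder()(torch.randn(2, 10, 128))
print(out.shape)
```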

Learning latent representations for style control and transfer in end-to-end speech synthesis

Title Learning latent representations for style control and transfer in end-to-end speech synthesis
Authors Ya-Jie Zhang, Shifeng Pan, Lei He, Zhen-Hua Ling
Abstract In this paper, we introduce the Variational Autoencoder (VAE) to an end-to-end speech synthesis model to learn the latent representation of speaking styles in an unsupervised manner. The style representation learned through the VAE shows good properties such as disentangling, scaling, and combination, which makes style control easy. Style transfer can be achieved in this framework by first inferring the style representation through the recognition network of the VAE, then feeding it into the TTS network to guide the style of the synthesized speech. To avoid Kullback-Leibler (KL) divergence collapse during training, several techniques are adopted. Finally, the proposed model shows good style-control performance and outperforms the Global Style Token (GST) model in ABX preference tests on style transfer.
Tasks Speech Synthesis, Style Transfer
Published 2018-12-11
URL http://arxiv.org/abs/1812.04342v2
PDF http://arxiv.org/pdf/1812.04342v2.pdf
PWC https://paperswithcode.com/paper/learning-latent-representations-for-style
Repo https://github.com/yanggeng1995/vae_tacotron
Framework tf
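
A sketch of the recognition network: a reference mel-spectrogram is encoded to (mu, logvar), and the reparameterized sample is the style embedding fed to the TTS decoder, with the KL term regularizing training. Dimensions are illustrative; in practice the KL weight is typically annealed to avoid the collapse the abstract mentions.

```python
import torch
import torch.nn as nn

class StyleVAE(nn.Module):
    """Reference encoder -> (mu, logvar) -> reparameterized style embedding."""
    def __init__(self, n_mels=80, style_dim=16):
        super().__init__()
        self.encoder = nn.GRU(n_mels, 128, batch_first=True)
        self.to_mu = nn.Linear(128, style_dim)
        self.to_logvar = nn.Linear(128, style_dim)

    def forward(self, mel):                     # mel: (B, T, n_mels)
        _, h = self.encoder(mel)                # final hidden state (1, B, 128)
        h = h.squeeze(0)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        style = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return style, kl                        # style conditions the decoder

style, kl = StyleVAE()(torch.randn(4, 200, 80))
print(style.shape, float(kl))
```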

Instance Segmentation by Deep Coloring

Title Instance Segmentation by Deep Coloring
Authors Victor Kulikov, Victor Yurchenko, Victor Lempitsky
Abstract We propose a new and, arguably, very simple reduction of instance segmentation to semantic segmentation. This reduction allows training feed-forward, non-recurrent deep instance segmentation systems in an end-to-end fashion using architectures that have been proposed for semantic segmentation. Our approach proceeds by introducing a fixed number of labels (colors) and then dynamically assigning object instances to those labels during training (coloring). A standard semantic segmentation objective is then used to train a network that can color previously unseen images. At test time, individual object instances can be recovered from the output of the trained convolutional network using simple connected component analysis. In the experimental validation, the coloring approach is shown to be capable of solving diverse instance segmentation tasks arising in autonomous driving (the Cityscapes benchmark), plant phenotyping (the CVPPP leaf segmentation challenge), and high-throughput microscopy image analysis. The source code is publicly available: https://github.com/kulikovv/DeepColoring.
Tasks Autonomous Driving, Instance Segmentation, Semantic Segmentation
Published 2018-07-26
URL http://arxiv.org/abs/1807.10007v1
PDF http://arxiv.org/pdf/1807.10007v1.pdf
PWC https://paperswithcode.com/paper/instance-segmentation-by-deep-coloring
Repo https://github.com/kulikovv/DeepColoring
Framework pytorch
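
The test-time recovery step is exactly connected component analysis on the predicted color map, as sketched below (the color indexing, with 0 as background, is an assumption of the sketch).

```python
import numpy as np
from scipy import ndimage

def recover_instances(color_map, num_colors):
    """Instances are the connected components within each non-background
    color of the network's per-pixel color (class) map."""
    instances = []
    for c in range(1, num_colors):               # color 0 = background
        components, n = ndimage.label(color_map == c)
        for i in range(1, n + 1):
            instances.append(components == i)    # one boolean mask per instance
    return instances

# Toy map: two blobs sharing color 1 but spatially separate
m = np.zeros((6, 6), dtype=int)
m[0:2, 0:2] = 1
m[4:6, 4:6] = 1
print(len(recover_instances(m, num_colors=2)))   # 2 instances
```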

Learning deep structured active contours end-to-end

Title Learning deep structured active contours end-to-end
Authors Diego Marcos, Devis Tuia, Benjamin Kellenberger, Lisa Zhang, Min Bai, Renjie Liao, Raquel Urtasun
Abstract The world is covered with millions of buildings, and precisely knowing each instance’s position and extents is vital to a multitude of applications. Recently, automated building footprint segmentation models have shown superior detection accuracy thanks to the use of Convolutional Neural Networks (CNN). However, even the latest evolutions struggle to precisely delineate borders, which often leads to geometric distortions and inadvertent fusion of adjacent building instances. We propose to overcome this issue by exploiting the distinct geometric properties of buildings. To this end, we present Deep Structured Active Contours (DSAC), a novel framework that integrates priors and constraints into the segmentation process, such as continuous boundaries, smooth edges, and sharp corners. To do so, DSAC employs Active Contour Models (ACM), a family of constraint- and prior-based polygonal models. We learn ACM parameterizations per instance using a CNN, and show how to incorporate all components in a structured output model, making DSAC trainable end-to-end. We evaluate DSAC on three challenging building instance segmentation datasets, where it compares favorably against the state of the art. Code will be made available.
Tasks Instance Segmentation, Semantic Segmentation
Published 2018-03-16
URL http://arxiv.org/abs/1803.06329v1
PDF http://arxiv.org/pdf/1803.06329v1.pdf
PWC https://paperswithcode.com/paper/learning-deep-structured-active-contours-end
Repo https://github.com/dmarcosg/DSAC
Framework tf
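
For intuition, here is one explicit gradient step of a classic active contour: alpha penalizes length, beta penalizes curvature, and a data term pulls the contour toward image evidence. In DSAC these coefficients become per-instance maps predicted by the CNN and the whole pipeline is trained end-to-end; in this sketch they are scalars.

```python
import numpy as np

def snake_step(points, alpha, beta, data_grad, lr=0.1):
    """One explicit update of a closed contour (N x 2 array of points):
    descend the elasticity (length) and stiffness (curvature) energies
    plus an external data-term gradient."""
    prev, nxt = np.roll(points, 1, axis=0), np.roll(points, -1, axis=0)
    elastic = alpha * (prev + nxt - 2 * points)          # length term
    prev2, nxt2 = np.roll(points, 2, axis=0), np.roll(points, -2, axis=0)
    stiff = -beta * (prev2 - 4 * prev + 6 * points - 4 * nxt + nxt2)
    return points + lr * (elastic + stiff - data_grad(points))

# Toy usage: a circle shrinks slightly under zero image force
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)
contour = snake_step(contour, alpha=0.5, beta=0.1,
                     data_grad=lambda p: np.zeros_like(p))
print(contour.shape)
```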

A0C: Alpha Zero in Continuous Action Space

Title A0C: Alpha Zero in Continuous Action Space
Authors Thomas M. Moerland, Joost Broekens, Aske Plaat, Catholijn M. Jonker
Abstract A core novelty of Alpha Zero is the interleaving of tree search and deep learning, which has proven very successful in board games like Chess, Shogi and Go. These games have a discrete action space. However, many real-world reinforcement learning domains have continuous action spaces, for example in robotic control, navigation and self-driving cars. This paper presents the necessary theoretical extensions of Alpha Zero to deal with continuous action space. We also provide some preliminary experiments on the Pendulum swing-up task, empirically showing the feasibility of our approach. Thereby, this work provides a first step towards the application of iterated search and learning in domains with a continuous action space.
Tasks Board Games, Self-Driving Cars
Published 2018-05-24
URL http://arxiv.org/abs/1805.09613v1
PDF http://arxiv.org/pdf/1805.09613v1.pdf
PWC https://paperswithcode.com/paper/a0c-alpha-zero-in-continuous-action-space
Repo https://github.com/jeapostrophe/monaco
Framework none
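
A sketch of how continuous actions can be handled in tree search via progressive widening, a standard device that A0C-style methods pair with a learned policy: a new action is sampled from the policy only while the child count is below a visit-dependent budget, otherwise UCT selects among existing children. The node layout and constants are assumptions of this sketch.

```python
import numpy as np

def select_action(node, policy_sample, c_pw=1.0, kappa=0.5, c_uct=1.0):
    """Progressive widening + UCT over a continuous action space.
    `node` is a dict {'N': visits, 'children': [{'a', 'N', 'Q'}]}."""
    if len(node['children']) < c_pw * max(node['N'], 1) ** kappa:
        a = policy_sample()                      # widen: sample a new action
        node['children'].append({'a': a, 'N': 0, 'Q': 0.0})
        return node['children'][-1]
    scores = [ch['Q'] + c_uct * np.sqrt(np.log(node['N'] + 1) / (ch['N'] + 1))
              for ch in node['children']]
    return node['children'][int(np.argmax(scores))]

rng = np.random.default_rng(0)
root = {'N': 1, 'children': []}
child = select_action(root, policy_sample=lambda: rng.normal())  # e.g. a torque
print(child['a'])
```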

Unpaired Multi-Domain Image Generation via Regularized Conditional GANs

Title Unpaired Multi-Domain Image Generation via Regularized Conditional GANs
Authors Xudong Mao, Qing Li
Abstract In this paper, we study the problem of multi-domain image generation, the goal of which is to generate pairs of corresponding images from different domains. With recent developments in generative models, image generation has achieved great progress and has been applied to various computer vision tasks. However, multi-domain image generation may not achieve the desired performance, owing to the difficulty of learning the correspondence between images from different domains, especially when paired samples are not given. To tackle this problem, we propose the Regularized Conditional GAN (RegCGAN), which is capable of learning to generate corresponding images in the absence of paired training data. RegCGAN is based on the conditional GAN, and we introduce two regularizers to guide the model to learn the corresponding semantics of different domains. We evaluate the proposed model on several tasks for which paired training data is not given, including the generation of edges and photos, the generation of faces with different attributes, etc. The experimental results show that our model can successfully generate corresponding images for all these tasks, while outperforming the baseline methods. We also introduce an approach for applying RegCGAN to unsupervised domain adaptation.
Tasks Domain Adaptation, Image Generation, Unsupervised Domain Adaptation
Published 2018-05-07
URL http://arxiv.org/abs/1805.02456v1
PDF http://arxiv.org/pdf/1805.02456v1.pdf
PWC https://paperswithcode.com/paper/unpaired-multi-domain-image-generation-via
Repo https://github.com/xudonmao/RegCGAN
Framework tf
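
The conditional core in miniature: one generator maps a shared latent z plus a domain label to an image, so corresponding images across domains come from the same z. The two regularizers that align semantics across domains (the paper's contribution) are not shown; sizes below are illustrative.

```python
import torch
import torch.nn as nn

class DomainConditionedG(nn.Module):
    """Generate an image from a shared latent z and a domain embedding;
    feeding the same z with different labels yields a corresponding pair."""
    def __init__(self, z_dim=64, n_domains=2, out_dim=28 * 28):
        super().__init__()
        self.domain_emb = nn.Embedding(n_domains, z_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh())

    def forward(self, z, domain):                # same z, different domains
        return self.net(torch.cat([z, self.domain_emb(domain)], dim=1))

G = DomainConditionedG()
z = torch.randn(4, 64)
pair = [G(z, torch.full((4,), d, dtype=torch.long)) for d in (0, 1)]
print(pair[0].shape)
```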

LIDIOMS: A Multilingual Linked Idioms Data Set

Title LIDIOMS: A Multilingual Linked Idioms Data Set
Authors Diego Moussallem, Mohamed Ahmed Sherif, Diego Esteves, Marcos Zampieri, Axel-Cyrille Ngonga Ngomo
Abstract In this paper, we describe the LIDIOMS data set, a multilingual RDF representation of idioms currently containing five languages: English, German, Italian, Portuguese, and Russian. The data set is intended to support natural language processing applications by providing links between idioms across languages. The underlying data was crawled and integrated from various sources. To ensure the quality of the crawled data, all idioms were evaluated by at least two native speakers. Herein, we present the model devised for structuring the data. We also provide the details of linking LIDIOMS to well-known multilingual data sets such as BabelNet. The resulting data set complies with best practices according to the Linguistic Linked Open Data Community.
Tasks
Published 2018-02-22
URL http://arxiv.org/abs/1802.08148v1
PDF http://arxiv.org/pdf/1802.08148v1.pdf
PWC https://paperswithcode.com/paper/lidioms-a-multilingual-linked-idioms-data-set
Repo https://github.com/dice-group/LIdioms
Framework none
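
A hedged sketch of how such a linked data set can be consumed with rdflib, pairing an English idiom with a German counterpart via a cross-language link. The predicate names are hypothetical placeholders; consult the LIDIOMS vocabulary in the repository for the actual schema.

```python
from rdflib import Graph

g = Graph()
g.parse("lidioms.ttl", format="turtle")   # assumes a local dump of the data set

# NOTE: ex:label / ex:language / ex:equivalent are hypothetical predicate
# names used only for illustration.
QUERY = """
SELECT ?en ?de WHERE {
    ?x ex:label ?en ; ex:language "en" ; ex:equivalent ?y .
    ?y ex:label ?de ; ex:language "de" .
}
"""
for row in g.query(QUERY, initNs={"ex": "http://example.org/lidioms#"}):
    print(row.en, "<->", row.de)
```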