July 29, 2019

2944 words 14 mins read

Paper Group AWR 178



A simple but tough-to-beat baseline for the Fake News Challenge stance detection task

Title A simple but tough-to-beat baseline for the Fake News Challenge stance detection task
Authors Benjamin Riedel, Isabelle Augenstein, Georgios P. Spithourakis, Sebastian Riedel
Abstract Identifying public misinformation is a complicated and challenging task. An important part of checking the veracity of a specific claim is to evaluate the stance different news sources take towards the assertion. Automatic stance evaluation, i.e. stance detection, would arguably facilitate the process of fact checking. In this paper, we present our stance detection system which claimed third place in Stage 1 of the Fake News Challenge. Despite our straightforward approach, our system performs at a competitive level with the complex ensembles of the top two winning teams. We therefore propose our system as the ‘simple but tough-to-beat baseline’ for the Fake News Challenge stance detection task.
Tasks Stance Detection
Published 2017-07-11
URL http://arxiv.org/abs/1707.03264v2
PDF http://arxiv.org/pdf/1707.03264v2.pdf
PWC https://paperswithcode.com/paper/a-simple-but-tough-to-beat-baseline-for-the
Repo https://github.com/chimera-detector/experienceExtension
Framework none
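The system above pairs simple lexical features with a shallow classifier. A minimal sketch of the headline-body feature extraction, using raw term-frequency vectors and a cosine similarity (a simplified stand-in for the paper's fixed-vocabulary TF and TF-IDF features, which feed a one-hidden-layer MLP):

```python
import math
from collections import Counter

def tf_vector(text, vocab):
    """Raw term-frequency vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def stance_features(headline, body):
    # Simplified: the real system uses a fixed vocabulary and TF-IDF
    # weighting; here the vocabulary is built from the pair itself.
    vocab = sorted(set(headline.lower().split()) | set(body.lower().split()))
    h = tf_vector(headline, vocab)
    b = tf_vector(body, vocab)
    return h + b + [cosine(h, b)]
```

The final feature is the headline-body similarity; unrelated pairs score near zero, closely related pairs near one.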

Exploiting ConvNet Diversity for Flooding Identification

Title Exploiting ConvNet Diversity for Flooding Identification
Authors Keiller Nogueira, Samuel G. Fadel, Ícaro C. Dourado, Rafael de O. Werneck, Javier A. V. Muñoz, Otávio A. B. Penatti, Rodrigo T. Calumby, Lin Tzy Li, Jefersson A. dos Santos, Ricardo da S. Torres
Abstract Flooding is the world’s most costly type of natural disaster in terms of both economic losses and human casualties. A first and essential step towards flood monitoring is identifying the areas most vulnerable to flooding, which gives authorities relevant regions to focus on. In this work, we propose several methods to perform flooding identification in high-resolution remote sensing images using deep learning. Specifically, some proposed techniques are based upon unique networks, such as dilated and deconvolutional ones, while others were conceived to exploit the diversity of distinct networks in order to extract the maximum performance from each classifier. Evaluation of the proposed algorithms was conducted on a high-resolution remote sensing dataset. Results show that the proposed algorithms outperformed several state-of-the-art baselines, providing improvements ranging from 1 to 4% in terms of the Jaccard Index.
Tasks
Published 2017-11-09
URL http://arxiv.org/abs/1711.03564v2
PDF http://arxiv.org/pdf/1711.03564v2.pdf
PWC https://paperswithcode.com/paper/exploiting-convnet-diversity-for-flooding
Repo https://github.com/keillernogueira/FDSI
Framework tf
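Since results are reported in terms of the Jaccard Index, it may help to recall how that metric is computed for binary segmentation masks. A minimal sketch:

```python
def jaccard_index(pred, truth):
    """Intersection-over-union of two flat binary masks (sequences of 0/1)."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    # Convention: two empty masks agree perfectly.
    return inter / union if union else 1.0
```

A 1-4% improvement in this metric means the predicted flooded area overlaps the ground-truth flooded area by that much more, relative to their union.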

CyCADA: Cycle-Consistent Adversarial Domain Adaptation

Title CyCADA: Cycle-Consistent Adversarial Domain Adaptation
Authors Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, Trevor Darrell
Abstract Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes, demonstrating transfer from synthetic to real-world domains.
Tasks Domain Adaptation, Image-to-Image Translation, Semantic Segmentation, Synthetic-to-Real Translation, Unsupervised Image-To-Image Translation
Published 2017-11-08
URL http://arxiv.org/abs/1711.03213v3
PDF http://arxiv.org/pdf/1711.03213v3.pdf
PWC https://paperswithcode.com/paper/cycada-cycle-consistent-adversarial-domain
Repo https://github.com/jhoffman/cycada_release
Framework pytorch
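The cycle-consistency constraint mentioned above can be sketched as an L1 reconstruction penalty: mapping a source image to the target domain and back should return the original. A toy 1-D illustration (the mapping functions here are placeholders, not the paper's networks):

```python
def l1(a, b):
    """Mean absolute difference between two equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(x, source_to_target, target_to_source):
    # || G_t2s(G_s2t(x)) - x ||_1 : going to the target domain and back
    # must reconstruct the source input.
    return l1(target_to_source(source_to_target(x)), x)
```

In CyCADA this term is combined with adversarial losses in both directions and a task (e.g. classification) loss on the adapted images.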

Need for Speed: A Benchmark for Higher Frame Rate Object Tracking

Title Need for Speed: A Benchmark for Higher Frame Rate Object Tracking
Authors Hamed Kiani Galoogahi, Ashton Fagg, Chen Huang, Deva Ramanan, Simon Lucey
Abstract In this paper, we propose the first higher frame rate video dataset (called Need for Speed - NfS) and benchmark for visual object tracking. The dataset consists of 100 videos (380K frames) captured with now commonly available higher frame rate (240 FPS) cameras from real-world scenarios. All frames are annotated with axis-aligned bounding boxes and all sequences are manually labelled with nine visual attributes - such as occlusion, fast motion, background clutter, etc. Our benchmark provides an extensive evaluation of many recent and state-of-the-art trackers on higher frame rate sequences. We ranked each of these trackers according to their tracking accuracy and real-time performance. One of our surprising conclusions is that at higher frame rates, simple trackers such as correlation filters outperform complex methods based on deep networks. This suggests that for practical applications (such as in robotics or embedded vision), one needs to carefully trade off bandwidth constraints associated with higher frame rate acquisition, computational costs of real-time analysis, and the required application accuracy. Our dataset and benchmark allow, for the first time (to our knowledge), systematic exploration of such issues, and will be made available to allow for further research in this space.
Tasks Object Tracking, Visual Object Tracking
Published 2017-03-17
URL http://arxiv.org/abs/1703.05884v2
PDF http://arxiv.org/pdf/1703.05884v2.pdf
PWC https://paperswithcode.com/paper/need-for-speed-a-benchmark-for-higher-frame
Repo https://github.com/susomena/DeepSlowMotion
Framework tf
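A correlation filter tracker, in its simplest form, slides a template over the search region and takes the location of the peak response. A toy 1-D sketch of that matching step (real correlation filter trackers learn and apply the filter in the Fourier domain for speed, which is exactly why they remain real-time at 240 FPS):

```python
def cross_correlation(signal, template):
    """Response of a template at every valid offset in the signal."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def locate(signal, template):
    # The tracked object's new position is the offset of the peak response.
    scores = cross_correlation(signal, template)
    return max(range(len(scores)), key=scores.__getitem__)
```

At higher frame rates, inter-frame motion is small, so even this simple matching stays locked on, which is consistent with the benchmark's conclusion.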

Being Robust (in High Dimensions) Can Be Practical

Title Being Robust (in High Dimensions) Can Be Practical
Authors Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart
Abstract Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility.
Tasks
Published 2017-03-02
URL http://arxiv.org/abs/1703.00893v4
PDF http://arxiv.org/pdf/1703.00893v4.pdf
PWC https://paperswithcode.com/paper/being-robust-in-high-dimensions-can-be
Repo https://github.com/hoonose/robust-filter
Framework none
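A toy 1-D analogue of the filtering idea behind these robust estimators: repeatedly discard the sample farthest from the current mean. The actual algorithms work in high dimensions and use the top eigenvector of the empirical covariance to decide what to remove; this sketch only conveys the flavour.

```python
def robust_mean_1d(samples, eps=0.1):
    """Estimate the mean of `samples` assuming at most an eps-fraction
    of them are corrupted (toy 1-D filtering sketch)."""
    xs = list(samples)
    for _ in range(int(eps * len(xs))):
        mu = sum(xs) / len(xs)
        xs.remove(max(xs, key=lambda x: abs(x - mu)))  # drop the worst outlier
    return sum(xs) / len(xs)
```

A single gross outlier can move the naive empirical mean arbitrarily far, while the filtered estimate stays close to the clean data.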

Auto-painter: Cartoon Image Generation from Sketch by Using Conditional Generative Adversarial Networks

Title Auto-painter: Cartoon Image Generation from Sketch by Using Conditional Generative Adversarial Networks
Authors Yifan Liu, Zengchang Qin, Zhenbo Luo, Hua Wang
Abstract Recently, realistic image generation using deep neural networks has become a hot topic in machine learning and computer vision. Images can be generated at the pixel level by learning from a large collection of images. Learning to generate colorful cartoon images from black-and-white sketches is not only an interesting research problem, but also a potential application in digital entertainment. In this paper, we investigate the sketch-to-image synthesis problem by using conditional generative adversarial networks (cGAN). We propose the auto-painter model, which can automatically generate compatible colors for a sketch. The new model is not only capable of painting hand-drawn sketches with proper colors, but also allows users to indicate preferred colors. Experimental results on two sketch datasets show that the auto-painter performs better than existing image-to-image methods.
Tasks Image Generation
Published 2017-05-04
URL http://arxiv.org/abs/1705.01908v2
PDF http://arxiv.org/pdf/1705.01908v2.pdf
PWC https://paperswithcode.com/paper/auto-painter-cartoon-image-generation-from
Repo https://github.com/sanjay235/Sketch2Color-anime-translation
Framework tf
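A cGAN setup like this typically trains the generator with an adversarial term plus a pixel-wise reconstruction term tying the output to the ground-truth colour image. A hedged sketch of such a generator objective (the weight and exact loss composition are illustrative, pix2pix-style, not taken from the paper):

```python
import math

def generator_loss(d_score_on_fake, fake_pixels, target_pixels, w_pixel=100.0):
    # Adversarial term: push the discriminator's score on the fake towards 1.
    adversarial = -math.log(max(d_score_on_fake, 1e-12))
    # Reconstruction term: stay close to the ground-truth colorization (L1).
    pixel = sum(abs(f - t) for f, t in zip(fake_pixels, target_pixels)) / len(fake_pixels)
    return adversarial + w_pixel * pixel
```

The L1 term keeps the colorization faithful to the target, while the adversarial term pushes it towards the manifold of plausible cartoon images.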

A generalised framework for detailed classification of swimming paths inside the Morris Water Maze

Title A generalised framework for detailed classification of swimming paths inside the Morris Water Maze
Authors Avgoustinos Vouros, Tiago V. Gehring, Kinga Szydlowska, Artur Janusz, Mike Croucher, Katarzyna Lukasiuk, Witold Konopka, Carmen Sandi, Zehai Tu, Eleni Vasilaki
Abstract The Morris Water Maze is commonly used in behavioural neuroscience for the study of spatial learning with rodents. Over the years, various methods of analysing rodent data collected in this task have been proposed. These methods span from classical performance measurements (e.g. escape latency, rodent speed, quadrant preference) to more sophisticated methods of categorisation which classify the animal swimming path into behavioural classes known as strategies. Classification techniques provide additional insight in relation to the actual animal behaviours, but only a limited number of studies utilise them, mainly because they depend heavily on machine learning knowledge. We have previously demonstrated that the animals implement various strategies and that classifying whole trajectories can lead to the loss of important information. In this work, we developed a generalised and robust classification methodology which implements majority voting to boost the classification performance and successfully eliminate the need for manual tuning. Based on this framework, we built a complete software package, capable of performing the full analysis described in this paper. The software provides an easy-to-use graphical user interface (GUI) through which users can enter their trajectory data, segment and label them, and finally generate reports and figures of the results.
Tasks
Published 2017-11-20
URL http://arxiv.org/abs/1711.07446v2
PDF http://arxiv.org/pdf/1711.07446v2.pdf
PWC https://paperswithcode.com/paper/a-generalised-framework-for-detailed
Repo https://github.com/RodentDataAnalytics/mwm-ml-gen
Framework none
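The majority-voting step at the heart of the framework is simple to state: several classifiers label the same trajectory segment, and the most frequent label wins. A minimal sketch (the strategy names below are illustrative examples, not the paper's full class list):

```python
from collections import Counter

def majority_vote(labels):
    """Most frequent label among an ensemble's predictions for one segment."""
    return Counter(labels).most_common(1)[0][0]
```

Aggregating over an ensemble this way is what lets the framework avoid per-classifier manual tuning: no single classifier's parameters need to be perfect.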

Deep Rewiring: Training very sparse deep networks

Title Deep Rewiring: Training very sparse deep networks
Authors Guillaume Bellec, David Kappel, Wolfgang Maass, Robert Legenstein
Abstract Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on them. But generic hardware and software implementations of deep learning also run more efficiently for sparse networks. Several methods exist for pruning the connections of a neural network after it has been trained without connectivity constraints. We present an algorithm, DEEP R, that enables us to directly train a sparsely connected neural network. DEEP R automatically rewires the network during supervised training so that connections are placed where they are most needed for the task, while their total number remains strictly bounded at all times. We demonstrate that DEEP R can be used to train very sparse feedforward and recurrent neural networks on standard benchmark tasks with just a minor loss in performance. DEEP R is based on a rigorous theoretical foundation that views rewiring as stochastic sampling of network configurations from a posterior.
Tasks
Published 2017-11-14
URL http://arxiv.org/abs/1711.05136v5
PDF http://arxiv.org/pdf/1711.05136v5.pdf
PWC https://paperswithcode.com/paper/deep-rewiring-training-very-sparse-deep
Repo https://github.com/IGITUGraz/LSNN-official
Framework tf
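The rewiring rule can be sketched as follows: each active connection has a fixed sign and a non-negative magnitude; when a gradient step drives the magnitude below zero, the connection is pruned and a dormant one is activated at random, keeping the active count constant. This simplified sketch omits the gradient noise and L1 prior of the full DEEP R algorithm:

```python
import random

def deep_r_step(active, grads, lr, n_total, rng=random):
    """One simplified DEEP R update.

    `active` maps connection ids to (sign, magnitude) pairs; `n_total` is
    the number of potential connections (active plus dormant)."""
    pruned = 0
    for cid, (sign, theta) in list(active.items()):
        theta -= lr * sign * grads.get(cid, 0.0)
        if theta < 0:                 # parameter crossed zero: prune it
            del active[cid]
            pruned += 1
        else:
            active[cid] = (sign, theta)
    # Reactivate as many dormant connections as were pruned, so the number
    # of active connections stays strictly bounded throughout training.
    dormant = [c for c in range(n_total) if c not in active]
    for cid in rng.sample(dormant, pruned):
        active[cid] = (rng.choice([-1, 1]), 0.0)
    return active
```

The hard bound on active connections is what makes the method attractive for neuromorphic and memory-constrained hardware.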

Data-Free Knowledge Distillation for Deep Neural Networks

Title Data-Free Knowledge Distillation for Deep Neural Networks
Authors Raphael Gontijo Lopes, Stefano Fenu, Thad Starner
Abstract Recent advances in model compression have provided procedures for compressing large neural networks to a fraction of their original size while retaining most if not all of their accuracy. However, all of these approaches rely on access to the original training set, which might not always be possible if the network to be compressed was trained on a very large dataset, or on a dataset whose release poses privacy or safety concerns as may be the case for biometrics tasks. We present a method for data-free knowledge distillation, which is able to compress deep neural networks trained on large-scale datasets to a fraction of their size leveraging only some extra metadata to be provided with a pretrained model release. We also explore different kinds of metadata that can be used with our method, and discuss tradeoffs involved in using each of them.
Tasks Model Compression
Published 2017-10-19
URL http://arxiv.org/abs/1710.07535v2
PDF http://arxiv.org/pdf/1710.07535v2.pdf
PWC https://paperswithcode.com/paper/data-free-knowledge-distillation-for-deep
Repo https://github.com/huawei-noah/DAFL
Framework pytorch
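The core distillation objective is a cross-entropy between temperature-softened teacher and student outputs. In the data-free setting described above, the teacher side is reconstructed from recorded activation statistics (the "metadata") rather than computed on real training inputs; the sketch below shows only the standard soft-target loss, not the paper's metadata-based reconstruction:

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # Cross-entropy between softened teacher and student distributions;
    # a high temperature exposes the teacher's "dark knowledge" about
    # the relative similarity of wrong classes.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(t * math.log(max(s, 1e-12))
                for t, s in zip(p_teacher, p_student))
```

The loss is minimised when the student's softened outputs match the teacher's, which is what lets a much smaller network absorb the larger one's behaviour.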

Mining Point Cloud Local Structures by Kernel Correlation and Graph Pooling

Title Mining Point Cloud Local Structures by Kernel Correlation and Graph Pooling
Authors Yiru Shen, Chen Feng, Yaoqing Yang, Dong Tian
Abstract Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point’s local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performance on major datasets. Our code is available at http://www.merl.com/research/license#KCNet
Tasks Point Cloud Registration
Published 2017-12-19
URL http://arxiv.org/abs/1712.06760v2
PDF http://arxiv.org/pdf/1712.06760v2.pdf
PWC https://paperswithcode.com/paper/mining-point-cloud-local-structures-by-kernel
Repo https://github.com/ftdlyc/KCNet_Pytorch
Framework pytorch
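The kernel correlation response described above can be sketched directly: the sum of Gaussian affinities between every learnable kernel point and every point in the local neighbourhood. The bandwidth value below is illustrative, and the real operation backpropagates into the kernel point coordinates:

```python
import math

def kernel_correlation(kernel_points, neighbor_points, sigma=0.5):
    """Mean Gaussian affinity between a point-set kernel and a local
    neighbourhood of 3D points; higher responses mean the neighbourhood's
    geometry matches the kernel's shape."""
    total = 0.0
    for k in kernel_points:
        for x in neighbor_points:
            dist_sq = sum((a - b) ** 2 for a, b in zip(k, x))
            total += math.exp(-dist_sq / (2.0 * sigma ** 2))
    return total / (len(kernel_points) * len(neighbor_points))
```

Like an image convolution kernel responding to an edge or corner, a trained point-set kernel responds strongly to neighbourhoods whose geometry resembles its own point arrangement.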

Cellulyzer - Automated analysis and interactive visualization/simulation of select cellular processes

Title Cellulyzer - Automated analysis and interactive visualization/simulation of select cellular processes
Authors Aliakbar Jafarpour, Holger Lorenz
Abstract Here we report on a set of programs developed at the ZMBH Bio-Imaging Facility for tracking real-life images of cellular processes. These programs perform 1) automated tracking; 2) quantitative and comparative track analyses of different images in different groups; 3) different interactive visualization schemes; and 4) interactive realistic simulation of different cellular processes for validation and optimal problem-specific adjustment of image acquisition parameters (tradeoff between speed, resolution, and quality with feedback from the very final results). The collection of programs is primarily developed for the common bio-image analysis software ImageJ (as a single Java Plugin). Some programs are also available in other languages (C++ and Javascript) and may be run simply with a web-browser; even on a low-end Tablet or Smartphone. The programs are available at https://github.com/nurlicht/CellulyzerDemo
Tasks
Published 2017-03-06
URL http://arxiv.org/abs/1703.02611v1
PDF http://arxiv.org/pdf/1703.02611v1.pdf
PWC https://paperswithcode.com/paper/cellulyzer-automated-analysis-and-interactive
Repo https://github.com/nurlicht/CellulyzerDemo
Framework none

Variational Generative Stochastic Networks with Collaborative Shaping

Title Variational Generative Stochastic Networks with Collaborative Shaping
Authors Philip Bachman, Doina Precup
Abstract We develop an approach to training generative models based on unrolling a variational auto-encoder into a Markov chain, and shaping the chain’s trajectories using a technique inspired by recent work in Approximate Bayesian computation. We show that the global minimizer of the resulting objective is achieved when the generative model reproduces the target distribution. To allow finer control over the behavior of the models, we add a regularization term inspired by techniques used for regularizing certain types of policy search in reinforcement learning. We present empirical results on the MNIST and TFD datasets which show that our approach offers state-of-the-art performance, both quantitatively and from a qualitative point of view.
Tasks
Published 2017-08-02
URL http://arxiv.org/abs/1708.00805v1
PDF http://arxiv.org/pdf/1708.00805v1.pdf
PWC https://paperswithcode.com/paper/variational-generative-stochastic-networks
Repo https://github.com/Philip-Bachman/ICML-2015
Framework none

Training GANs with Optimism

Title Training GANs with Optimism
Authors Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, Haoyang Zeng
Abstract We address the issue of limit cycling behavior in training Generative Adversarial Networks and propose the use of Optimistic Mirror Descent (OMD) for training Wasserstein GANs. Recent theoretical results have shown that optimistic mirror descent (OMD) can enjoy faster regret rates in the context of zero-sum games. WGAN training is exactly such a context: solving a zero-sum game with simultaneous no-regret dynamics. Moreover, we show that optimistic mirror descent addresses the limit cycling problem in training WGANs. We formally show that in the case of bi-linear zero-sum games the last iterate of OMD dynamics converges to an equilibrium, in contrast to GD dynamics which are bound to cycle. We also portray the huge qualitative difference between GD and OMD dynamics with toy examples, even when GD is modified with many adaptations proposed in the recent literature, such as gradient penalty or momentum. We apply OMD WGAN training to a bioinformatics problem of generating DNA sequences. We observe that models trained with OMD achieve consistently smaller KL divergence with respect to the true underlying distribution, than models trained with GD variants. Finally, we introduce a new algorithm, Optimistic Adam, which is an optimistic variant of Adam. We apply it to WGAN training on CIFAR10 and observe improved performance in terms of inception score as compared to Adam.
Tasks
Published 2017-10-31
URL http://arxiv.org/abs/1711.00141v2
PDF http://arxiv.org/pdf/1711.00141v2.pdf
PWC https://paperswithcode.com/paper/training-gans-with-optimism
Repo https://github.com/vsyrgkanis/optimistic_GAN_training
Framework none
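The last-iterate result is easy to reproduce on the bilinear toy game min_x max_y xy: plain simultaneous gradient descent/ascent spirals away from the equilibrium at the origin, while the optimistic update w ← w − 2η·g_t + η·g_{t−1} converges to it. A small sketch (step size and iteration count chosen for illustration):

```python
def simultaneous_play(steps, lr, optimistic):
    """Run the bilinear game f(x, y) = x * y; x minimises, y maximises.
    Returns the final squared distance to the equilibrium (0, 0)."""
    x, y = 1.0, 1.0
    prev_gx, prev_gy = 0.0, 0.0
    for _ in range(steps):
        gx, gy = y, -x                      # descent directions for x and y
        if optimistic:
            # OMD/OGDA: step along the current gradient twice, then undo
            # the previous gradient once (a prediction of the next move).
            new_x = x - 2 * lr * gx + lr * prev_gx
            new_y = y - 2 * lr * gy + lr * prev_gy
        else:
            new_x, new_y = x - lr * gx, y - lr * gy
        prev_gx, prev_gy, x, y = gx, gy, new_x, new_y
    return x * x + y * y
```

The extra "optimism" term is what breaks the rotational dynamics that make plain GD cycle or diverge in this game.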

High-Quality Face Image SR Using Conditional Generative Adversarial Networks

Title High-Quality Face Image SR Using Conditional Generative Adversarial Networks
Authors Huang Bin, Chen Weihai, Wu Xingming, Lin Chun-Liang
Abstract We propose a novel single-image face super-resolution method, named Face Conditional Generative Adversarial Network (FCGAN), based on boundary equilibrium generative adversarial networks. Without using any facial prior information, our method can generate a high-resolution face image from a low-resolution one. Compared with existing studies, both our training and testing phases form an end-to-end pipeline with little pre-/post-processing. To enhance convergence speed and strengthen feature propagation, skip-layer connections are further employed in the generative and discriminative networks. Extensive experiments demonstrate that our model achieves competitive performance compared with state-of-the-art models.
Tasks Image Super-Resolution, Super-Resolution
Published 2017-07-04
URL http://arxiv.org/abs/1707.00737v1
PDF http://arxiv.org/pdf/1707.00737v1.pdf
PWC https://paperswithcode.com/paper/high-quality-face-image-sr-using-conditional
Repo https://github.com/nikhilsu/SuperPixel
Framework none

Evaluating 35 Methods to Generate Structural Connectomes Using Pairwise Classification

Title Evaluating 35 Methods to Generate Structural Connectomes Using Pairwise Classification
Authors Dmitry Petrov, Alexander Ivanov, Joshua Faskowitz, Boris Gutman, Daniel Moyer, Julio Villalon, Neda Jahanshad, Paul Thompson
Abstract There is no consensus on how to construct structural brain networks from diffusion MRI. How variations in pre-processing steps affect network reliability and its ability to distinguish subjects remains opaque. In this work, we address this issue by comparing 35 structural connectome-building pipelines. We vary diffusion reconstruction models, tractography algorithms and parcellations. Next, we classify structural connectome pairs as either belonging to the same individual or not. Connectome weights and eight topological derivative measures form our feature set. For experiments, we use three test-retest datasets from the Consortium for Reliability and Reproducibility (CoRR) comprised of a total of 105 individuals. We also compare pairwise classification results to a commonly used parametric test-retest measure, Intraclass Correlation Coefficient (ICC).
Tasks
Published 2017-06-19
URL http://arxiv.org/abs/1706.06031v1
PDF http://arxiv.org/pdf/1706.06031v1.pdf
PWC https://paperswithcode.com/paper/evaluating-35-methods-to-generate-structural
Repo https://github.com/lodurality/35_methods_MICCAI_2017
Framework none
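The ICC baseline mentioned above is, for one-way random effects, a simple ratio of mean squares over subjects and repeated measurements. A sketch of ICC(1,1) for test-retest data (this is the standard one-way formula; the abstract does not specify which ICC variant the paper uses):

```python
def icc_1_1(measurements):
    """One-way random-effects ICC(1,1).

    `measurements` is a list of per-subject lists, each holding k repeated
    measurements (e.g. test and retest values of one connectome feature)."""
    n = len(measurements)
    k = len(measurements[0])
    grand = sum(x for m in measurements for x in m) / (n * k)
    ms_between = k * sum((sum(m) / k - grand) ** 2 for m in measurements) / (n - 1)
    ms_within = sum((x - sum(m) / k) ** 2
                    for m in measurements for x in m) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Values near 1 indicate that between-subject variability dominates measurement noise, i.e. the pipeline produces reliable, subject-distinguishing connectomes; the paper contrasts this parametric measure with its pairwise-classification approach.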