Paper Group AWR 128
dyngraph2vec: Capturing Network Dynamics using Dynamic Graph Representation Learning
Title | dyngraph2vec: Capturing Network Dynamics using Dynamic Graph Representation Learning |
Authors | Palash Goyal, Sujit Rokka Chhetri, Arquimedes Canedo |
Abstract | Learning graph representations is a fundamental task aimed at capturing various properties of graphs in vector space. The most recent methods learn such representations for static networks. However, real-world networks evolve over time and have varying dynamics. Capturing such evolution is key to predicting the properties of unseen networks. To understand how the network dynamics affect the prediction performance, we propose an embedding approach which learns the structure of evolution in dynamic graphs and can predict unseen links with higher precision. Our model, dyngraph2vec, learns the temporal transitions in the network using a deep architecture composed of dense and recurrent layers. We motivate the need for capturing dynamics for prediction on a toy data set created using stochastic block models. We then demonstrate the efficacy of dyngraph2vec over existing state-of-the-art methods on two real-world data sets. We observe that learning dynamics can improve the quality of embedding and yield better performance in link prediction. |
Tasks | Graph Representation Learning, Link Prediction, Representation Learning |
Published | 2018-09-07 |
URL | https://arxiv.org/abs/1809.02657v2 |
PDF | https://arxiv.org/pdf/1809.02657v2.pdf |
PWC | https://paperswithcode.com/paper/dyngraph2vec-capturing-network-dynamics-using |
Repo | https://github.com/palash1992/DynamicGEM |
Framework | tf |
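To make the architecture concrete, here is a minimal PyTorch sketch of the dense-plus-recurrent idea described above (the linked repo is TensorFlow-based); the layer sizes, lookback window, and toy data are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class DynGraphAERNN(nn.Module):
    def __init__(self, n_nodes, hidden=256, embed=128):
        super().__init__()
        self.dense = nn.Sequential(nn.Linear(n_nodes, hidden), nn.ReLU())
        self.rnn = nn.LSTM(hidden, embed, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(embed, hidden), nn.ReLU(),
            nn.Linear(hidden, n_nodes), nn.Sigmoid())

    def forward(self, snapshots):             # (batch, lookback, n_nodes)
        h = self.dense(snapshots)             # dense encoding per snapshot
        _, (h_n, _) = self.rnn(h)             # temporal embedding
        return self.decoder(h_n[-1])          # predicted next-step neighbourhood

model = DynGraphAERNN(n_nodes=100)
window = torch.rand(8, 3, 100)                # 8 nodes, lookback of 3 snapshots
target = torch.bernoulli(torch.full((8, 100), 0.1))   # next-step adjacency rows
loss = nn.functional.binary_cross_entropy(model(window), target)
```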
VoxelMorph: A Learning Framework for Deformable Medical Image Registration
Title | VoxelMorph: A Learning Framework for Deformable Medical Image Registration |
Authors | Guha Balakrishnan, Amy Zhao, Mert R. Sabuncu, John Guttag, Adrian V. Dalca |
Abstract | We present VoxelMorph, a fast learning-based framework for deformable, pairwise medical image registration. Traditional registration methods optimize an objective function for each pair of images, which can be time-consuming for large datasets or rich deformation models. In contrast to this approach, and building on recent learning-based methods, we formulate registration as a function that maps an input image pair to a deformation field that aligns these images. We parameterize the function via a convolutional neural network (CNN), and optimize the parameters of the neural network on a set of images. Given a new pair of scans, VoxelMorph rapidly computes a deformation field by directly evaluating the function. In this work, we explore two different training strategies. In the first (unsupervised) setting, we train the model to maximize standard image matching objective functions that are based on the image intensities. In the second setting, we leverage auxiliary segmentations available in the training data. We demonstrate that the unsupervised model’s accuracy is comparable to state-of-the-art methods, while operating orders of magnitude faster. We also show that VoxelMorph trained with auxiliary data improves registration accuracy at test time, and evaluate the effect of training set size on registration. Our method promises to speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is freely available at voxelmorph.csail.mit.edu. |
Tasks | Deformable Medical Image Registration, Diffeomorphic Medical Image Registration, Image Registration, Medical Image Registration |
Published | 2018-09-14 |
URL | https://arxiv.org/abs/1809.05231v3 |
PDF | https://arxiv.org/pdf/1809.05231v3.pdf |
PWC | https://paperswithcode.com/paper/voxelmorph-a-learning-framework-for |
Repo | https://github.com/microsoft/Recursive-Cascaded-Networks |
Framework | tf |
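The unsupervised training setting lends itself to a short sketch: a toy convolutional network (standing in for the paper's U-Net-style architecture) predicts a flow field, a spatial transformer warps the moving image, and the loss combines image similarity with flow smoothness. The flow is treated as an offset in normalized grid coordinates, and the smoothness weight is illustrative:

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    # Sample the moving image at grid positions displaced by the flow,
    # which is expressed in normalised [-1, 1] grid coordinates.
    b, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).expand(b, -1, -1, -1)
    return F.grid_sample(moving, grid + flow.permute(0, 2, 3, 1),
                         align_corners=True)

net = torch.nn.Sequential(                    # toy registration network
    torch.nn.Conv2d(2, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 2, 3, padding=1))     # two-channel flow field

moving, fixed = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
flow = net(torch.cat([moving, fixed], dim=1))
warped = warp(moving, flow)
smooth = (flow[..., 1:, :] - flow[..., :-1, :]).pow(2).mean() + \
         (flow[..., 1:] - flow[..., :-1]).pow(2).mean()
loss = F.mse_loss(warped, fixed) + 0.01 * smooth   # similarity + smoothness
```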
Deep Depth Completion of a Single RGB-D Image
Title | Deep Depth Completion of a Single RGB-D Image |
Authors | Yinda Zhang, Thomas Funkhouser |
Abstract | The goal of our work is to complete the depth channel of an RGB-D image. Commodity-grade depth cameras often fail to sense depth for shiny, bright, transparent, and distant surfaces. To address this problem, we train a deep network that takes an RGB image as input and predicts dense surface normals and occlusion boundaries. Those predictions are then combined with raw depth observations provided by the RGB-D camera to solve for depths for all pixels, including those missing in the original observation. This method was chosen over others (e.g., inpainting depths directly) as the result of extensive experiments with a new depth completion benchmark dataset, where holes are filled in training data through the rendering of surface reconstructions created from multiview RGB-D scans. Experiments with different network inputs, depth representations, loss functions, optimization methods, inpainting methods, and deep depth estimation networks show that our proposed approach provides better depth completions than these alternatives. |
Tasks | Depth Completion, Depth Estimation |
Published | 2018-03-25 |
URL | http://arxiv.org/abs/1803.09326v2 |
PDF | http://arxiv.org/pdf/1803.09326v2.pdf |
PWC | https://paperswithcode.com/paper/deep-depth-completion-of-a-single-rgb-d-image |
Repo | https://github.com/yindaz/DeepCompletionRelease |
Framework | pytorch |
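The final optimization step described above can be illustrated with a small linear least-squares system: observed depths anchor the solution, and smoothness terms, down-weighted wherever the network predicts an occlusion boundary, propagate depth into the holes. The surface-normal term is omitted for brevity, and the tiny image and weights are synthetic:

```python
import numpy as np

h = w = 8
obs = np.full(h * w, np.nan)
obs[::7] = 1.0                                 # sparse raw depth observations
bweight = np.ones(h * w)                       # ~0 near predicted boundaries

def idx(y, x):
    return y * w + x

rows, b = [], []
for y in range(h):
    for x in range(w):
        if not np.isnan(obs[idx(y, x)]):       # data term: match raw depth
            r = np.zeros(h * w); r[idx(y, x)] = 1.0
            rows.append(r); b.append(obs[idx(y, x)])
        for dy, dx in ((0, 1), (1, 0)):        # smoothness between neighbours
            if y + dy < h and x + dx < w:
                r = np.zeros(h * w)
                r[idx(y, x)] = bweight[idx(y, x)]
                r[idx(y + dy, x + dx)] = -bweight[idx(y, x)]
                rows.append(r); b.append(0.0)

A = np.vstack(rows)
depth = np.linalg.lstsq(A, np.array(b), rcond=None)[0].reshape(h, w)
```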
Semi-crowdsourced Clustering with Deep Generative Models
Title | Semi-crowdsourced Clustering with Deep Generative Models |
Authors | Yucen Luo, Tian Tian, Jiaxin Shi, Jun Zhu, Bo Zhang |
Abstract | We consider the semi-supervised clustering problem where crowdsourcing provides noisy information about the pairwise comparisons on a small subset of data, i.e., whether a sample pair is in the same cluster. We propose a new approach that includes a deep generative model (DGM) to characterize low-level features of the data, and a statistical relational model for noisy pairwise annotations on its subset. The two parts share the latent variables. To make the model automatically trade off between its complexity and fitting the data, we also develop its fully Bayesian variant. The challenge of inference is addressed by fast (natural-gradient) stochastic variational inference algorithms, where we effectively combine variational message passing for the relational part and amortized learning of the DGM under a unified framework. Empirical results on synthetic and real-world datasets show that our model outperforms previous crowdsourced clustering methods. |
Tasks | |
Published | 2018-10-29 |
URL | http://arxiv.org/abs/1810.11971v1 |
PDF | http://arxiv.org/pdf/1810.11971v1.pdf |
PWC | https://paperswithcode.com/paper/semi-crowdsourced-clustering-with-deep |
Repo | https://github.com/xinmei9322/semicrowd |
Framework | tf |
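A toy sketch of the noisy pairwise-annotation likelihood at the heart of the relational part, under the common assumption that a worker reports "same cluster" with some sensitivity when the latent labels agree and some false-positive rate when they differ; the parameter values and pairs are illustrative, and the DGM over low-level features is not shown:

```python
import numpy as np

alpha, beta = 0.9, 0.2            # illustrative sensitivity / false-positive rate
z = np.array([0, 0, 1, 2])        # latent cluster assignments of four samples

def p_same(i, j):
    # Probability a worker reports "same cluster" for the pair (i, j).
    return alpha if z[i] == z[j] else beta

pairs = [(0, 1, 1), (0, 2, 0), (2, 3, 1)]      # (i, j, reported_same)
ll = 0.0
for i, j, reported_same in pairs:
    p = p_same(i, j)
    ll += np.log(p) if reported_same else np.log(1.0 - p)
```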
Mittens: An Extension of GloVe for Learning Domain-Specialized Representations
Title | Mittens: An Extension of GloVe for Learning Domain-Specialized Representations |
Authors | Nicholas Dingwall, Christopher Potts |
Abstract | We present a simple extension of the GloVe representation learning model that begins with general-purpose representations and updates them based on data from a specialized domain. We show that the resulting representations can lead to faster learning and better results on a variety of tasks. |
Tasks | Representation Learning |
Published | 2018-03-27 |
URL | http://arxiv.org/abs/1803.09901v1 |
PDF | http://arxiv.org/pdf/1803.09901v1.pdf |
PWC | https://paperswithcode.com/paper/mittens-an-extension-of-glove-for-learning |
Repo | https://github.com/roamanalytics/mittens |
Framework | tf |
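The extension can be stated compactly: Mittens adds to the GloVe weighted least-squares objective an L2 penalty pulling each vector toward its pretrained general-purpose embedding. A NumPy sketch, where every row is assumed to have a pretrained counterpart and mu is the retrofitting weight:

```python
import numpy as np

def mittens_loss(W, C, b_w, b_c, X, pretrained, mu=0.1, x_max=100, alpha=0.75):
    # GloVe weighting f(X); zero-count cells get zero weight, so the
    # guarded log below never contributes for them.
    weights = np.minimum((X / x_max) ** alpha, 1.0)
    log_X = np.log(np.maximum(X, 1e-12))
    err = W @ C.T + b_w[:, None] + b_c[None, :] - log_X
    glove = np.sum(weights * err ** 2)
    retro = mu * np.sum((W - pretrained) ** 2)   # pull toward pretrained vectors
    return glove + retro

rng = np.random.default_rng(0)
V, d = 50, 25
loss = mittens_loss(rng.normal(size=(V, d)), rng.normal(size=(V, d)),
                    np.zeros(V), np.zeros(V),
                    rng.poisson(2.0, size=(V, V)).astype(float),
                    rng.normal(size=(V, d)))
```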
A Globally Optimal Energy-Efficient Power Control Framework and its Efficient Implementation in Wireless Interference Networks
Title | A Globally Optimal Energy-Efficient Power Control Framework and its Efficient Implementation in Wireless Interference Networks |
Authors | Bho Matthiesen, Alessio Zappone, Karl-L. Besser, Eduard A. Jorswieck, Merouane Debbah |
Abstract | This work develops a novel framework for energy-efficient power control in wireless networks. The proposed method is a new branch-and-bound procedure based on problem-specific bounds for energy-efficiency maximization that allow for faster convergence. This makes it possible to find the global solution for all of the most common energy-efficient power control problems with a complexity that, although still exponential in the number of variables, is much lower than that of other available global optimization frameworks. Moreover, the reduced complexity of the proposed framework allows its practical implementation through the use of deep neural networks. Specifically, thanks to its reduced complexity, the proposed method can be used to train an artificial neural network to predict the optimal resource allocation. This is in contrast with other power control methods based on deep learning, which train the neural network on suboptimal power allocations because generating large training sets of optimal power allocations is prohibitively complex with available global optimization methods. As a benchmark, we also develop a novel first-order optimal power allocation algorithm. Numerical results show that a neural network can be trained to predict the optimal power allocation policy. |
Tasks | |
Published | 2018-12-17 |
URL | https://arxiv.org/abs/1812.06920v2 |
PDF | https://arxiv.org/pdf/1812.06920v2.pdf |
PWC | https://paperswithcode.com/paper/deep-learning-for-optimal-energy-efficient |
Repo | https://github.com/bmatthiesen/deep-EE-opt |
Framework | tf |
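The deep learning stage described above reduces to ordinary supervised regression once the branch-and-bound solver has produced training pairs. A hedged PyTorch sketch with random stand-ins for the channel gains and the B&B solutions; the architecture and sizes are illustrative:

```python
import torch
import torch.nn as nn

n_links = 4
net = nn.Sequential(
    nn.Linear(n_links * n_links, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n_links), nn.Sigmoid())     # powers normalised to [0, 1]

channels = torch.rand(256, n_links * n_links)  # stand-in for channel gain samples
p_opt = torch.rand(256, n_links)               # stand-in for B&B optimal powers
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(channels), p_opt)
    loss.backward()
    opt.step()
```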
Feature Learning for Fault Detection in High-Dimensional Condition-Monitoring Signals
Title | Feature Learning for Fault Detection in High-Dimensional Condition-Monitoring Signals |
Authors | Gabriel Michau, Yang Hu, Thomas Palmé, Olga Fink |
Abstract | Complex industrial systems are continuously monitored by a large number of heterogeneous sensors. The diversity of their operating conditions and the possible fault types make it impossible to collect enough data for learning all the possible fault patterns. The paper proposes an approach that integrates automatic unsupervised feature learning with one-class classification for fault detection, trained on data from healthy conditions only. The approach is based on stacked Extreme Learning Machines (namely Hierarchical, or HELM) and comprises an autoencoder, performing unsupervised feature learning, stacked with a one-class classifier monitoring the distance of the test data to the training healthy class, thereby assessing the health of the system. This study provides a comprehensive evaluation of HELM fault detection capability compared to other machine learning approaches, such as stand-alone one-class classifiers (ELM and SVM), these same one-class classifiers combined with traditional dimensionality reduction methods (PCA), and a Deep Belief Network. The performance is first evaluated on a synthetic dataset that encompasses typical characteristics of condition monitoring data. Subsequently, the approach is evaluated on a real case study of a power plant fault. The proposed algorithm for fault detection, combining feature learning with the one-class classifier, demonstrates better performance, particularly in cases where condition monitoring data contain several non-informative signals. |
Tasks | Dimensionality Reduction, Fault Detection, One-class classifier |
Published | 2018-10-12 |
URL | https://arxiv.org/abs/1810.05550v2 |
PDF | https://arxiv.org/pdf/1810.05550v2.pdf |
PWC | https://paperswithcode.com/paper/feature-learning-for-fault-detection-in-high |
Repo | https://github.com/MichauGabriel/HELM |
Framework | none |
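A minimal NumPy sketch of the HELM idea, assuming healthy data only at training time: an ELM autoencoder with random input weights and a closed-form ridge solution, with reconstruction error standing in for the paper's one-class ELM score; the threshold rule and data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
X_healthy = rng.normal(size=(500, 20))          # training data: healthy only

# ELM autoencoder: random input weights, ridge-regression output weights.
W = rng.normal(size=(20, 64))
H = np.tanh(X_healthy @ W)
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(64), H.T @ X_healthy)

def reconstruct(X):
    return np.tanh(X @ W) @ beta

# Health score: reconstruction error relative to the healthy baseline.
err = np.linalg.norm(reconstruct(X_healthy) - X_healthy, axis=1)
threshold = err.mean() + 3 * err.std()

X_test = rng.normal(loc=2.0, size=(10, 20))     # shifted data imitating a fault
is_fault = np.linalg.norm(reconstruct(X_test) - X_test, axis=1) > threshold
```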
Star Tracking using an Event Camera
Title | Star Tracking using an Event Camera |
Authors | Tat-Jun Chin, Samya Bagchi, Anders Eriksson, Andre van Schaik |
Abstract | Star trackers are primarily optical devices that are used to estimate the attitude of a spacecraft by recognising and tracking star patterns. Currently, most star trackers use conventional optical sensors. In this application paper, we propose the usage of event sensors for star tracking. There are potentially two benefits of using event sensors for star tracking: lower power consumption and higher operating speeds. Our main contribution is to formulate an algorithmic pipeline for star tracking from event data that includes novel formulations of rotation averaging and bundle adjustment. In addition, we also release with this paper a dataset for star tracking using event cameras. With this work, we introduce the problem of star tracking using event cameras to the computer vision community, whose expertise in SLAM and geometric optimisation can be brought to bear on this commercially important application. |
Tasks | |
Published | 2018-12-07 |
URL | http://arxiv.org/abs/1812.02895v2 |
PDF | http://arxiv.org/pdf/1812.02895v2.pdf |
PWC | https://paperswithcode.com/paper/star-tracking-using-an-event-camera |
Repo | https://github.com/Ryan-Faulkner/StarTrackingWithAnEventCamera |
Framework | none |
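As a toy illustration of the rotation-averaging building block (not the paper's full event-based pipeline), several noisy estimates of the same attitude can be fused by projecting their mean matrix back onto SO(3), the chordal L2 mean:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(0)
true = R.from_euler("xyz", [10, 20, 30], degrees=True)
noisy = [true * R.from_rotvec(rng.normal(scale=0.02, size=3)) for _ in range(50)]

M = sum(r.as_matrix() for r in noisy) / len(noisy)
U, _, Vt = np.linalg.svd(M)
avg = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt   # closest rotation to M
```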
Hidden Fluid Mechanics: A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data
Title | Hidden Fluid Mechanics: A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data |
Authors | Maziar Raissi, Alireza Yazdani, George Em Karniadakis |
Abstract | We present hidden fluid mechanics (HFM), a physics informed deep learning framework capable of encoding an important class of physical laws governing fluid motions, namely the Navier-Stokes equations. In particular, we seek to leverage the underlying conservation laws (i.e., for mass, momentum, and energy) to infer hidden quantities of interest such as velocity and pressure fields merely from spatio-temporal visualizations of a passive scalar (e.g., dye or smoke), transported in arbitrarily complex domains (e.g., in human arteries or brain aneurysms). Our approach towards solving the aforementioned data assimilation problem is unique as we design an algorithm that is agnostic to the geometry or the initial and boundary conditions. This makes HFM highly flexible in choosing the spatio-temporal domain of interest for data acquisition as well as subsequent training and predictions. Consequently, the predictions made by HFM are among those that a pure machine learning strategy or a mere scientific computing approach simply cannot reproduce. The proposed algorithm achieves accurate predictions of the pressure and velocity fields in both two and three dimensional flows for several benchmark problems motivated by real-world applications. Our results demonstrate that this relatively simple methodology can be used in physical and biomedical problems to extract valuable quantitative information (e.g., lift and drag forces or wall shear stresses in arteries) for which direct measurements may not be possible. |
Tasks | |
Published | 2018-08-13 |
URL | http://arxiv.org/abs/1808.04327v1 |
PDF | http://arxiv.org/pdf/1808.04327v1.pdf |
PWC | https://paperswithcode.com/paper/hidden-fluid-mechanics-a-navier-stokes |
Repo | https://github.com/maziarraissi/HFM |
Framework | none |
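The core of HFM is a physics-informed residual: a network maps $(t, x, y)$ to $(c, u, v, p)$ and automatic differentiation supplies the transport, momentum, and continuity residuals added to the data-fitting loss. A PyTorch sketch of the 2D case, with an illustrative toy network and Peclet/Reynolds numbers:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 4))                 # outputs (c, u, v, p)

def grads(f, pts):
    g = torch.autograd.grad(f.sum(), pts, create_graph=True)[0]
    return g[:, 0], g[:, 1], g[:, 2]                  # d/dt, d/dx, d/dy

txy = torch.rand(128, 3, requires_grad=True)          # (t, x, y) collocation points
c, u, v, p = net(txy).unbind(dim=1)

c_t, c_x, c_y = grads(c, txy)
u_t, u_x, u_y = grads(u, txy)
v_t, v_x, v_y = grads(v, txy)
_,   p_x, p_y = grads(p, txy)
_, c_xx, _ = grads(c_x, txy); _, _, c_yy = grads(c_y, txy)
_, u_xx, _ = grads(u_x, txy); _, _, u_yy = grads(u_y, txy)
_, v_xx, _ = grads(v_x, txy); _, _, v_yy = grads(v_y, txy)

Pec, Rey = 100.0, 100.0                               # illustrative constants
e1 = c_t + u * c_x + v * c_y - (c_xx + c_yy) / Pec    # passive-scalar transport
e2 = u_t + u * u_x + v * u_y + p_x - (u_xx + u_yy) / Rey   # x-momentum
e3 = v_t + u * v_x + v * v_y + p_y - (v_xx + v_yy) / Rey   # y-momentum
e4 = u_x + v_y                                        # incompressibility
physics_loss = (e1**2 + e2**2 + e3**2 + e4**2).mean()
```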
APES: a Python toolbox for simulating reinforcement learning environments
Title | APES: a Python toolbox for simulating reinforcement learning environments |
Authors | Aqeel Labash, Ardi Tampuu, Tambet Matiisen, Jaan Aru, Raul Vicente |
Abstract | Assisted by neural networks, reinforcement learning agents have been able to solve increasingly complex tasks over the last years. The simulation environment in which the agents interact is an essential component in any reinforcement learning problem. The environment simulates the dynamics of the agents’ world and hence provides feedback to their actions in terms of state observations and external rewards. To ease the design and simulation of such environments this work introduces $\texttt{APES}$, a highly customizable and open source package in Python to create 2D grid-world environments for reinforcement learning problems. $\texttt{APES}$ equips agents with algorithms to simulate any field of vision, it allows the creation and positioning of items and rewards according to user-defined rules, and supports the interaction of multiple agents. |
Tasks | |
Published | 2018-08-31 |
URL | http://arxiv.org/abs/1808.10692v1 |
PDF | http://arxiv.org/pdf/1808.10692v1.pdf |
PWC | https://paperswithcode.com/paper/apes-a-python-toolbox-for-simulating |
Repo | https://github.com/aqeel13932/APES |
Framework | none |
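For readers unfamiliar with the setting, the following is a generic, self-contained toy grid-world with the observation-action-reward loop such packages provide; it is emphatically not the APES API, just an illustration of the interaction an agent runs:

```python
import random

class ToyGrid:
    # NOT the APES API: a hypothetical minimal grid-world for illustration.
    def __init__(self, size=5, goal=(4, 4)):
        self.size, self.goal, self.pos = size, goal, (0, 0)

    def step(self, action):                      # 0..3 = up, down, left, right
        dy, dx = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        y = min(max(self.pos[0] + dy, 0), self.size - 1)
        x = min(max(self.pos[1] + dx, 0), self.size - 1)
        self.pos = (y, x)
        done = self.pos == self.goal
        return self.pos, (1.0 if done else -0.01), done

env = ToyGrid()
done, total = False, 0.0
while not done:                                  # random policy until the goal
    obs, reward, done = env.step(random.randrange(4))
    total += reward
```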
Formal Limitations on the Measurement of Mutual Information
Title | Formal Limitations on the Measurement of Mutual Information |
Authors | David McAllester, Karl Stratos |
Abstract | Motivated by applications to unsupervised learning, we consider the problem of measuring mutual information. Recent analysis has shown that naive kNN estimators of mutual information have serious statistical limitations motivating more refined methods. In this paper we prove that serious statistical limitations are inherent to any measurement method. More specifically, we show that any distribution-free high-confidence lower bound on mutual information cannot be larger than $O(\ln N)$ where $N$ is the size of the data sample. We also analyze the Donsker-Varadhan lower bound on KL divergence in particular and show that, when simple statistical considerations are taken into account, this bound can never produce a high-confidence value larger than $\ln N$. While large high-confidence lower bounds are impossible, in practice one can use estimators without formal guarantees. We suggest expressing mutual information as a difference of entropies and using cross-entropy as an entropy estimator. We observe that, although cross-entropy is only an upper bound on entropy, cross-entropy estimates converge to the true cross-entropy at the rate of $1/\sqrt{N}$. |
Tasks | |
Published | 2018-11-10 |
URL | http://arxiv.org/abs/1811.04251v3 |
PDF | http://arxiv.org/pdf/1811.04251v3.pdf |
PWC | https://paperswithcode.com/paper/formal-limitations-on-the-measurement-of |
Repo | https://github.com/createamind/keras-cpcgan |
Framework | tf |
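For reference, the Donsker-Varadhan variational form of the KL divergence analyzed in the paper, and the entropy-difference route it recommends, are

$$\mathrm{KL}(P\,\|\,Q)=\sup_{T}\;\mathbb{E}_{P}[T]-\ln\mathbb{E}_{Q}\!\left[e^{T}\right],\qquad I(X;Y)=H(Y)-H(Y\mid X),$$

where each entropy is replaced by a cross-entropy estimate of the form $-\frac{1}{N}\sum_i \ln \hat{Q}(y_i)$, an upper bound that converges to the true cross-entropy at rate $1/\sqrt{N}$.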
DeepLSR: a deep learning approach for laser speckle reduction
Title | DeepLSR: a deep learning approach for laser speckle reduction |
Authors | Taylor L. Bobrow, Faisal Mahmood, Miguel Inserni, Nicholas J. Durr |
Abstract | Speckle artifacts degrade image quality in virtually all modalities that utilize coherent energy, including optical coherence tomography, reflectance confocal microscopy, ultrasound, and widefield imaging with laser illumination. We present an adversarial deep learning framework for laser speckle reduction, called DeepLSR (https://durr.jhu.edu/DeepLSR), that transforms images from a source domain of coherent illumination to a target domain of speckle-free, incoherent illumination. We apply this method to widefield images of objects and tissues illuminated with a multi-wavelength laser, using light emitting diode-illuminated images as ground truth. In images of gastrointestinal tissues, DeepLSR reduces laser speckle noise by 6.4 dB, compared to a 2.9 dB reduction from optimized non-local means processing, a 3.0 dB reduction from BM3D, and a 3.7 dB reduction from an optical speckle reducer utilizing an oscillating diffuser. Further, DeepLSR can be combined with optical speckle reduction to reduce speckle noise by 9.4 dB. This dramatic reduction in speckle noise may enable the use of coherent light sources in applications that require small illumination sources and high-quality imaging, including medical endoscopy. |
Tasks | |
Published | 2018-10-23 |
URL | http://arxiv.org/abs/1810.10039v4 |
PDF | http://arxiv.org/pdf/1810.10039v4.pdf |
PWC | https://paperswithcode.com/paper/deeplsr-a-deep-learning-approach-for-laser |
Repo | https://github.com/mahmoodlab/DeepLSR |
Framework | pytorch |
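DeepLSR builds on adversarial image-to-image translation; here is a hedged PyTorch sketch of the generator-side objective in the pix2pix style, with toy stand-in networks and an illustrative L1 weight (the discriminator update is analogous and omitted):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1))            # toy generator
D = nn.Sequential(nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))            # toy patch discriminator

speckled = torch.rand(1, 3, 64, 64)    # coherent-illumination input
clean = torch.rand(1, 3, 64, 64)       # LED-illuminated ground truth
fake = G(speckled)
d_fake = D(torch.cat([speckled, fake], dim=1))
bce = nn.functional.binary_cross_entropy_with_logits
g_loss = bce(d_fake, torch.ones_like(d_fake)) \
         + 100.0 * nn.functional.l1_loss(fake, clean)        # illustrative L1 weight
```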
Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks
Title | Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks |
Authors | Abhijit Guha Roy, Nassir Navab, Christian Wachinger |
Abstract | Fully convolutional neural networks (F-CNNs) have set the state-of-the-art in image segmentation for a plethora of applications. Architectural innovations within F-CNNs have mainly focused on improving spatial encoding or network connectivity to aid gradient flow. In this paper, we explore an alternate direction of recalibrating the feature maps adaptively, to boost meaningful features, while suppressing weak ones. We draw inspiration from the recently proposed squeeze & excitation (SE) module for channel recalibration of feature maps for image classification. Towards this end, we introduce three variants of SE modules for image segmentation, (i) squeezing spatially and exciting channel-wise (cSE), (ii) squeezing channel-wise and exciting spatially (sSE) and (iii) concurrent spatial and channel squeeze & excitation (scSE). We effectively incorporate these SE modules within three different state-of-the-art F-CNNs (DenseNet, SD-Net, U-Net) and observe consistent improvement of performance across all architectures, while minimally affecting model complexity. Evaluations are performed on two challenging applications: whole brain segmentation on MRI scans (Multi-Atlas Labelling Challenge Dataset) and organ segmentation on whole body contrast enhanced CT scans (Visceral Dataset). |
Tasks | Brain Segmentation, Image Classification, Semantic Segmentation |
Published | 2018-03-07 |
URL | http://arxiv.org/abs/1803.02579v2 |
PDF | http://arxiv.org/pdf/1803.02579v2.pdf |
PWC | https://paperswithcode.com/paper/concurrent-spatial-and-channel-squeeze |
Repo | https://github.com/alexshuang/TGS_Salt |
Framework | none |
Recalibrating Fully Convolutional Networks with Spatial and Channel ‘Squeeze & Excitation’ Blocks
Title | Recalibrating Fully Convolutional Networks with Spatial and Channel ‘Squeeze & Excitation’ Blocks |
Authors | Abhijit Guha Roy, Nassir Navab, Christian Wachinger |
Abstract | In a wide range of semantic segmentation tasks, fully convolutional neural networks (F-CNNs) have been successfully leveraged to achieve state-of-the-art performance. Architectural innovations of F-CNNs have mainly been on improving spatial encoding or network connectivity to aid gradient flow. In this article, we aim towards an alternate direction of recalibrating the learned feature maps adaptively; boosting meaningful features while suppressing weak ones. The recalibration is achieved by simple computational blocks that can be easily integrated in F-CNNs architectures. We draw our inspiration from the recently proposed ‘squeeze & excitation’ (SE) modules for channel recalibration for image classification. Towards this end, we introduce three variants of SE modules for segmentation, (i) squeezing spatially and exciting channel-wise, (ii) squeezing channel-wise and exciting spatially and (iii) joint spatial and channel ‘squeeze & excitation’. We effectively incorporate the proposed SE blocks in three state-of-the-art F-CNNs and demonstrate a consistent improvement of segmentation accuracy on three challenging benchmark datasets. Importantly, SE blocks only lead to a minimal increase in model complexity of about 1.5%, while the Dice score increases by 4-9% in the case of U-Net. Hence, we believe that SE blocks can be an integral part of future F-CNN architectures. |
Tasks | Image Classification, Semantic Segmentation |
Published | 2018-08-23 |
URL | http://arxiv.org/abs/1808.08127v1 |
PDF | http://arxiv.org/pdf/1808.08127v1.pdf |
PWC | https://paperswithcode.com/paper/recalibrating-fully-convolutional-networks |
Repo | https://github.com/ai-med/squeeze_and_excitation |
Framework | pytorch |
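The two preceding entries describe the same recalibration blocks, so one compact PyTorch sketch covers both: channel squeeze & excitation (cSE), spatial squeeze & excitation (sSE), and their concurrent combination (scSE). Aggregation by addition is used here, the papers also discuss other combinations, and the reduction ratio is illustrative:

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        self.cse = nn.Sequential(               # squeeze space, excite channels
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.sse = nn.Sequential(               # squeeze channels, excite space
            nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)    # concurrent recalibration

feat = torch.rand(2, 32, 16, 16)
out = SCSE(32)(feat)                            # same shape, recalibrated
```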
Repetition Estimation
Title | Repetition Estimation |
Authors | Tom F. H. Runia, Cees G. M. Snoek, Arnold W. M. Smeulders |
Abstract | Visual repetition is ubiquitous in our world. It appears in human activity (sports, cooking), animal behavior (a bee’s waggle dance), natural phenomena (leaves in the wind) and in urban environments (flashing lights). Estimating visual repetition from realistic video is challenging as periodic motion is rarely perfectly static and stationary. To better deal with realistic video, we relax the static and stationary assumptions often made by existing work. Our spatiotemporal filtering approach, established on the theory of periodic motion, effectively handles a wide variety of appearances and requires no learning. Starting from motion in 3D we derive three periodic motion types by decomposition of the motion field into its fundamental components. In addition, three temporal motion continuities emerge from the field’s temporal dynamics. For the 2D perception of 3D motion we consider the viewpoint relative to the motion; what follows are 18 cases of recurrent motion perception. To estimate repetition under all circumstances, our theory implies constructing a mixture of differential motion maps: gradient, divergence and curl. We temporally convolve the motion maps with wavelet filters to estimate repetitive dynamics. Our method is able to spatially segment repetitive motion directly from the temporal filter responses densely computed over the motion maps. For experimental verification of our claims, we use our novel dataset for repetition estimation, better reflecting reality with non-static and non-stationary repetitive motion. On the task of repetition counting, we obtain favorable results compared to a deep learning alternative. |
Tasks | |
Published | 2018-06-18 |
URL | http://arxiv.org/abs/1806.06984v1 |
PDF | http://arxiv.org/pdf/1806.06984v1.pdf |
PWC | https://paperswithcode.com/paper/repetition-estimation |
Repo | https://github.com/tomrunia/PyTorchWavelets |
Framework | pytorch |
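To make the differential motion maps concrete: divergence and curl are computed from a dense 2D motion field and then temporally filtered to expose repetition. The sketch below uses a synthetic oscillating flow and an FFT power spectrum as a simple stand-in for the paper's wavelet filtering:

```python
import numpy as np

T, H, W = 64, 32, 32
t = np.arange(T)[:, None, None]
xs = np.linspace(0.0, 1.0, W)[None, None, :]
u = np.broadcast_to(np.sin(2 * np.pi * t / 16) * xs, (T, H, W))  # oscillating expansion
v = np.zeros((T, H, W))

du_dy, du_dx = np.gradient(u, axis=(1, 2))      # spatial derivatives of the flow
dv_dy, dv_dx = np.gradient(v, axis=(1, 2))
divergence = du_dx + dv_dy
curl = dv_dx - du_dy

signal = divergence.mean(axis=(1, 2))           # pool one motion map over space
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
dominant = np.fft.rfftfreq(T)[np.argmax(spectrum)]
# dominant is ~1/16: the repetition frequency of the synthetic motion
```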