Paper Group ANR 428
Towards Automatic Clustering Analysis using Traces of Information Gain: The InfoGuide Method. Disentangling Controllable Object through Video Prediction Improves Visual Reinforcement Learning. Improving Interaction Quality Estimation with BiLSTMs and the Impact on Dialogue Policy Learning. Local Rotation Invariance in 3D CNNs. On-the-fly Prediction …
Towards Automatic Clustering Analysis using Traces of Information Gain: The InfoGuide Method
Title | Towards Automatic Clustering Analysis using Traces of Information Gain: The InfoGuide Method |
Authors | Paulo Rocha, Diego Pinheiro, Martin Cadeiras, Carmelo Bastos-Filho |
Abstract | Clustering analysis has become a ubiquitous information retrieval tool in a wide range of domains, but a more automatic framework is still lacking. Though internal metrics are the key players in a successful retrieval of clusters, their effectiveness on real-world datasets remains not fully understood, mainly because of the unrealistic assumptions they make about the underlying datasets. We hypothesized that capturing traces of information gain between increasingly complex clustering retrievals (InfoGuide) enables an automatic clustering analysis with improved clustering retrievals. We validated the InfoGuide hypothesis by capturing the traces of information gain using the Kolmogorov-Smirnov statistic and comparing the clusters retrieved by InfoGuide against those retrieved by other commonly used internal metrics on artificially generated, benchmark, and real-world datasets. Our results suggest that InfoGuide can enable a more automatic clustering analysis and may be more suitable for retrieving clusters in real-world datasets displaying nontrivial statistical properties. |
Tasks | Information Retrieval |
Published | 2020-01-23 |
URL | https://arxiv.org/abs/2001.08677v1 |
https://arxiv.org/pdf/2001.08677v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-automatic-clustering-analysis-using |
Repo | |
Framework | |
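The abstract leaves the mechanics implicit, so here is a minimal sketch of the general idea of comparing increasingly complex clustering retrievals with a Kolmogorov-Smirnov test. The choice of k-means, the distance-to-centroid statistic, and the stopping rule are assumptions for illustration, not the paper's actual InfoGuide procedure.

```python
# Hypothetical sketch: track "information gain" between successive k-means
# retrievals with a two-sample Kolmogorov-Smirnov test (not the paper's code).
import numpy as np
from scipy.stats import ks_2samp
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

def dists(lab):
    # Distance of each point to the centroid of its assigned cluster.
    cents = np.array([X[lab == c].mean(axis=0) for c in np.unique(lab)])
    return np.linalg.norm(X - cents[lab], axis=1)

prev_labels = np.zeros(len(X), dtype=int)          # k = 1: a single cluster
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    # Compare the distribution of distances-to-centroid before and after
    # adding one more cluster; a large KS statistic suggests information gain.
    stat, pval = ks_2samp(dists(prev_labels), dists(labels))
    print(f"k={k}: KS={stat:.3f}, p={pval:.3g}")
    if pval > 0.05:                                 # no significant gain: stop
        break
    prev_labels = labels
```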
Disentangling Controllable Object through Video Prediction Improves Visual Reinforcement Learning
Title | Disentangling Controllable Object through Video Prediction Improves Visual Reinforcement Learning |
Authors | Yuanyi Zhong, Alexander Schwing, Jian Peng |
Abstract | In many vision-based reinforcement learning (RL) problems, the agent controls a movable object in its visual field, e.g., the player’s avatar in video games and the robotic arm in visual grasping and manipulation. Leveraging action-conditioned video prediction, we propose an end-to-end learning framework to disentangle the controllable object from the observation signal. The disentangled representation is shown to be useful for RL as additional observation channels to the agent. Experiments on a set of Atari games with the popular Double DQN algorithm demonstrate improved sample efficiency and game performance (from 222.8% to 261.4% measured in normalized game scores, with prediction bonus reward). |
Tasks | Atari Games, Video Prediction |
Published | 2020-02-21 |
URL | https://arxiv.org/abs/2002.09136v1 |
https://arxiv.org/pdf/2002.09136v1.pdf | |
PWC | https://paperswithcode.com/paper/disentangling-controllable-object-through |
Repo | |
Framework | |
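For context on the reported 222.8% and 261.4% figures, the sketch below shows the human-normalized Atari score as commonly computed in the DQN literature; the exact normalization used by the paper is an assumption here, and the game scores are made up.

```python
# Human-normalized Atari score, as commonly used in the DQN literature
# (assumed here; the paper may normalize differently).
def normalized_score(agent, random, human):
    return 100.0 * (agent - random) / (human - random)

# Hypothetical raw scores for one game, just to show the computation.
print(normalized_score(agent=8000.0, random=150.0, human=3500.0))  # ~234.3%
```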
Improving Interaction Quality Estimation with BiLSTMs and the Impact on Dialogue Policy Learning
Title | Improving Interaction Quality Estimation with BiLSTMs and the Impact on Dialogue Policy Learning |
Authors | Stefan Ultes |
Abstract | Learning suitable and well-performing dialogue behaviour in statistical spoken dialogue systems has been a focus of research for many years. While most reinforcement-learning-based work employs an objective measure such as task success to model the reward signal, we use a reward based on user satisfaction estimation. We propose a novel estimator and show that it outperforms all previous estimators while learning temporal dependencies implicitly. Furthermore, we apply this novel user satisfaction estimation model live in simulated experiments, where the satisfaction estimation model is trained on one domain and applied in many other domains that cover a similar task. We show that applying this model results in higher estimated satisfaction, similar task success rates, and a higher robustness to noise. |
Tasks | Spoken Dialogue Systems |
Published | 2020-01-21 |
URL | https://arxiv.org/abs/2001.07615v1 |
https://arxiv.org/pdf/2001.07615v1.pdf | |
PWC | https://paperswithcode.com/paper/improving-interaction-quality-estimation-with-1 |
Repo | |
Framework | |
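A minimal sketch of what a BiLSTM interaction quality estimator can look like in PyTorch, assuming per-turn feature vectors and a 1-5 satisfaction scale; the feature dimension, hidden size, and classification head are placeholders rather than the paper's configuration.

```python
# Minimal sketch of a BiLSTM turn-level satisfaction estimator (assumed
# feature dimensions and label set; not the paper's exact architecture).
import torch
import torch.nn as nn

class BiLSTMIQEstimator(nn.Module):
    def __init__(self, feat_dim=40, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)   # IQ on a 1-5 scale

    def forward(self, turns):                          # turns: (B, T, feat_dim)
        out, _ = self.lstm(turns)
        return self.head(out)                          # per-turn logits (B, T, 5)

model = BiLSTMIQEstimator()
logits = model(torch.randn(2, 10, 40))                 # 2 dialogues, 10 turns each
print(logits.shape)                                    # torch.Size([2, 10, 5])
```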
Local Rotation Invariance in 3D CNNs
Title | Local Rotation Invariance in 3D CNNs |
Authors | Vincent Andrearczyk, Julien Fageot, Valentin Oreiller, Xavier Montet, Adrien Depeursinge |
Abstract | Locally Rotation Invariant (LRI) image analysis has been shown to be fundamental in many applications, in particular in medical imaging, where local structures of tissues occur at arbitrary rotations. LRI constituted the cornerstone of several breakthroughs in texture analysis, including Local Binary Patterns (LBP), Maximum Response 8 (MR8) and steerable filterbanks. Whereas globally rotation invariant Convolutional Neural Networks (CNNs) were recently proposed, LRI has been little investigated in the context of deep learning. LRI designs allow learning filters that account for all orientations, which enables a drastic reduction of trainable parameters and training data compared to standard 3D CNNs. In this paper, we propose and compare several methods to obtain LRI CNNs with directional sensitivity. Two methods use orientation channels (responses to rotated kernels), either by explicitly rotating the kernels or by using steerable filters. These orientation channels constitute a locally rotation equivariant representation of the data, and local pooling across orientations yields LRI image analysis. Steerable filters are used to achieve a fine and efficient sampling of 3D rotations as well as a reduction of trainable parameters and operations, thanks to a parametric representation involving solid Spherical Harmonics (SH), which are products of SHs with associated learned radial profiles. Finally, we investigate a third strategy to obtain LRI based on rotational invariants calculated from responses to a learned set of solid SHs. The proposed methods are evaluated and compared to standard CNNs on 3D datasets including synthetic textured volumes composed of rotated patterns, and pulmonary nodule classification in CT. The results show the importance of LRI image analysis while yielding a drastic reduction of trainable parameters and outperforming standard 3D CNNs trained with data augmentation. |
Tasks | Data Augmentation, Texture Classification |
Published | 2020-03-19 |
URL | https://arxiv.org/abs/2003.08890v1 |
https://arxiv.org/pdf/2003.08890v1.pdf | |
PWC | https://paperswithcode.com/paper/local-rotation-invariance-in-3d-cnns |
Repo | |
Framework | |
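The orientation-channel idea from the abstract can be illustrated with explicitly rotated kernels followed by pooling across orientations. The sketch below uses only right-angle rotations and a random kernel for simplicity; the paper samples rotations much more finely via steerable filters.

```python
# Sketch of the "orientation channels" idea: convolve with rotated copies of a
# 3D kernel and pool over orientations to obtain a locally rotation invariant
# response (right-angle rotations only here, which keep the voxel grid exact).
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
volume = rng.standard_normal((32, 32, 32))
kernel = rng.standard_normal((5, 5, 5))

responses = []
for axes in [(0, 1), (0, 2), (1, 2)]:           # rotate in each axis plane
    for k in range(4):                          # 0, 90, 180, 270 degrees
        responses.append(convolve(volume, np.rot90(kernel, k=k, axes=axes)))

lri_response = np.max(np.stack(responses), axis=0)   # orientation pooling
print(lri_response.shape)                            # (32, 32, 32)
```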
On-the-fly Prediction of Protein Hydration Densities and Free Energies using Deep Learning
Title | On-the-fly Prediction of Protein Hydration Densities and Free Energies using Deep Learning |
Authors | Ahmadreza Ghanbarpour, Amr H. Mahmoud, Markus A. Lill |
Abstract | The calculation of thermodynamic properties of biochemical systems typically requires the use of resource-intensive molecular simulation methods. One example thereof is the thermodynamic profiling of hydration sites, i.e. high-probability locations for water molecules on the protein surface, which play an essential role in protein-ligand associations and must therefore be incorporated in the prediction of binding poses and affinities. To replace time-consuming simulations in hydration site predictions, we developed two different types of deep neural-network models aiming to predict hydration site data. In the first approach, meshed 3D images are generated that represent the interactions of molecular probes, placed on regular 3D grids encompassing the binding pocket, with the static protein. These molecular interaction fields are mapped to the corresponding 3D image of hydration occupancy using a neural network based on a U-Net architecture. In the second approach, hydration occupancy and thermodynamics are predicted point-wise using a neural network based on fully-connected layers. In addition to direct protein interaction fields, the environment of each grid point is represented using moments of a spherical harmonics expansion of the interaction properties of nearby grid points. Application to structure-activity relationship analysis and protein-ligand pose scoring demonstrates the utility of the predicted hydration information. |
Tasks | |
Published | 2020-01-07 |
URL | https://arxiv.org/abs/2001.02201v1 |
https://arxiv.org/pdf/2001.02201v1.pdf | |
PWC | https://paperswithcode.com/paper/on-the-fly-prediction-of-protein-hydration |
Repo | |
Framework | |
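A minimal sketch of the second, point-wise approach: a fully-connected network mapping per-grid-point interaction features to hydration occupancy and a thermodynamic value. The feature dimension and layer sizes are assumptions, not the paper's.

```python
# Minimal sketch of the point-wise variant: a fully-connected network mapping
# per-grid-point interaction features to hydration occupancy and a free-energy
# value (the feature dimension and layer widths are assumed placeholders).
import torch
import torch.nn as nn

pointwise_net = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 2),                        # [occupancy, free energy] per point
)

grid_points = torch.randn(4096, 128)          # one batch of grid points
print(pointwise_net(grid_points).shape)       # torch.Size([4096, 2])
```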
Real-time Out-of-distribution Detection in Learning-Enabled Cyber-Physical Systems
Title | Real-time Out-of-distribution Detection in Learning-Enabled Cyber-Physical Systems |
Authors | Feiyang Cai, Xenofon Koutsoukos |
Abstract | Cyber-physical systems (CPS) greatly benefit from using machine learning components that can handle the uncertainty and variability of the real world. Typical components such as deep neural networks, however, introduce new types of hazards that may impact system safety. The system behavior depends on data that are available only during runtime and may differ from the data used for training. Out-of-distribution data may lead to large errors and compromise safety. This paper considers the problem of efficiently detecting out-of-distribution data in CPS control systems. Detection must be robust and limit the number of false alarms while being computationally efficient for real-time monitoring. The proposed approach leverages inductive conformal prediction and anomaly detection to develop a method with a well-calibrated false alarm rate. We use variational autoencoders and deep support vector data description to learn models that can be used to efficiently compute the nonconformity of new inputs relative to the training set and enable real-time detection of out-of-distribution high-dimensional inputs. We demonstrate the method using an advanced emergency braking system and a self-driving end-to-end controller implemented in an open-source simulator for self-driving cars. The simulation results show a very small number of false positives and a short detection delay, while the execution time is comparable to that of the original machine learning components. |
Tasks | Anomaly Detection, Out-of-Distribution Detection, Self-Driving Cars |
Published | 2020-01-28 |
URL | https://arxiv.org/abs/2001.10494v1 |
https://arxiv.org/pdf/2001.10494v1.pdf | |
PWC | https://paperswithcode.com/paper/real-time-out-of-distribution-detection-in |
Repo | |
Framework | |
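The inductive conformal step described above reduces to computing a p-value for each new input from calibration nonconformity scores. A sketch follows, with the score function (VAE reconstruction error or SVDD distance in the paper) replaced by made-up numbers.

```python
# Sketch of the inductive conformal step: nonconformity scores on a held-out
# calibration set give a p-value for each new input; small p-values signal
# out-of-distribution data. The score function here is a stand-in.
import numpy as np

def conformal_p_value(calibration_scores, test_score):
    n = len(calibration_scores)
    return (np.sum(calibration_scores >= test_score) + 1) / (n + 1)

rng = np.random.default_rng(0)
calibration_scores = rng.normal(1.0, 0.2, size=1000)          # in-distribution errors
print(conformal_p_value(calibration_scores, test_score=1.1))  # unremarkable
print(conformal_p_value(calibration_scores, test_score=3.0))  # likely OOD
```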
Tracking Road Users using Constraint Programming
Title | Tracking Road Users using Constraint Programming |
Authors | Alexandre Pineault, Guillaume-Alexandre Bilodeau, Gilles Pesant |
Abstract | In this paper, we aim at improving the tracking of road users in urban scenes. We present a constraint programming (CP) approach for the data association phase found in the tracking-by-detection paradigm of the multiple object tracking (MOT) problem. Such an approach can solve the data association problem more efficiently than graph-based methods and can better handle the combinatorial explosion that occurs when multiple frames are analyzed. Because our focus is on the data association problem, our MOT method uses only simple image features, namely the center position and color of the detections in each frame. Constraints are defined on these two features and on the general MOT problem; for example, we enforce color appearance preservation over trajectories and constrain the extent of motion between frames. Filtering layers are used to eliminate detection candidates before applying CP and to remove dummy trajectories produced by the CP solver. Our proposed method was tested on a motorized vehicle tracking dataset and produces results that outperform the top methods of the UA-DETRAC benchmark. |
Tasks | Multiple Object Tracking, Object Tracking |
Published | 2020-03-10 |
URL | https://arxiv.org/abs/2003.04468v1 |
https://arxiv.org/pdf/2003.04468v1.pdf | |
PWC | https://paperswithcode.com/paper/tracking-road-users-using-constraint |
Repo | |
Framework | |
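As an illustration of CP-based data association, the sketch below solves a toy one-to-one assignment between the detections of two frames with OR-Tools CP-SAT; the costs and constraints are placeholders and not the paper's CP model.

```python
# Hypothetical data-association sketch with a CP solver (OR-Tools CP-SAT used
# here as a stand-in; the paper defines its own CP model and constraints).
from ortools.sat.python import cp_model

# Integer costs combining position and colour distance between detections in
# frame t (rows) and frame t+1 (columns); made-up numbers for illustration.
cost = [
    [2, 9, 8],
    [9, 3, 7],
    [7, 8, 1],
]

model = cp_model.CpModel()
x = [[model.NewBoolVar(f"x_{i}_{j}") for j in range(3)] for i in range(3)]
for i in range(3):
    model.Add(sum(x[i][j] for j in range(3)) == 1)   # each detection in t matched once
for j in range(3):
    model.Add(sum(x[i][j] for i in range(3)) == 1)   # each detection in t+1 matched once
model.Minimize(sum(cost[i][j] * x[i][j] for i in range(3) for j in range(3)))

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print([(i, j) for i in range(3) for j in range(3) if solver.Value(x[i][j])])
```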
Sensitive Data Detection and Classification in Spanish Clinical Text: Experiments with BERT
Title | Sensitive Data Detection and Classification in Spanish Clinical Text: Experiments with BERT |
Authors | Aitor García-Pablos, Naiara Perez, Montse Cuadros |
Abstract | Massive digital data processing provides a wide range of opportunities and benefits, but at the cost of endangering personal data privacy. Anonymisation consists in removing or replacing sensitive information in data, enabling its exploitation for different purposes while preserving the privacy of individuals. Over the years, many automatic anonymisation systems have been proposed; however, depending on the type of data, the target language, or the availability of training documents, the task still remains challenging. The emergence of novel deep-learning models during the last two years has brought large improvements to the state of the art in the field of Natural Language Processing. These advancements have been most noticeably led by BERT, a model proposed by Google in 2018, and by the sharing of language models pre-trained on millions of documents. In this paper, we use a BERT-based sequence labelling model to conduct a series of anonymisation experiments on several clinical datasets in Spanish, and we compare BERT to other algorithms. The experiments show that a simple BERT-based model with general-domain pre-training obtains highly competitive results without any domain-specific feature engineering. |
Tasks | Feature Engineering |
Published | 2020-03-06 |
URL | https://arxiv.org/abs/2003.03106v2 |
https://arxiv.org/pdf/2003.03106v2.pdf | |
PWC | https://paperswithcode.com/paper/sensitive-data-detection-and-classification |
Repo | |
Framework | |
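A BERT-based sequence-labelling setup of the kind described above can be assembled with the Hugging Face transformers API, as sketched below. The model path is a hypothetical placeholder; the paper's fine-tuned Spanish clinical checkpoints are not assumed to be public under any particular name.

```python
# Sketch of a BERT token-classification (sequence-labelling) setup for
# anonymisation. Replace the placeholder path with a real fine-tuned checkpoint.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "path/to/spanish-clinical-anonymisation-model"   # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("La paciente María García ingresó el 3 de marzo en Madrid."))
```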
Non-Determinism in TensorFlow ResNets
Title | Non-Determinism in TensorFlow ResNets |
Authors | Miguel Morin, Matthew Willetts |
Abstract | We show that the stochasticity in training ResNets for image classification on GPUs in TensorFlow is dominated by the non-determinism from GPUs rather than by the initialisation of the weights and biases of the network or by the sequence of minibatches. The standard deviation of test set accuracy is 0.02 with fixed seeds, compared to 0.027 with different seeds: nearly 74% of the standard deviation of a ResNet model therefore remains even with all seeds fixed and is attributable to GPU non-determinism. For test set loss the ratio of standard deviations is more than 80%. These results call for more robust evaluation strategies for deep learning models, as a significant amount of the variation in results across runs can arise simply from GPU randomness. |
Tasks | Image Classification |
Published | 2020-01-30 |
URL | https://arxiv.org/abs/2001.11396v1 |
https://arxiv.org/pdf/2001.11396v1.pdf | |
PWC | https://paperswithcode.com/paper/non-determinism-in-tensorflow-resnets |
Repo | |
Framework | |
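The 74% figure is simply the ratio 0.02/0.027 of the two standard deviations. The sketch below shows that ratio together with the seed-fixing and op-determinism switches the result motivates; note that the determinism API shown is only available in newer TensorFlow releases and post-dates the paper.

```python
# Fixing every seed in TensorFlow still leaves GPU-level non-determinism unless
# deterministic ops are also enforced (TF >= 2.8; newer than the paper's setup).
import tensorflow as tf

tf.keras.utils.set_random_seed(42)                 # fixes Python, NumPy and TF seeds
tf.config.experimental.enable_op_determinism()     # forces deterministic GPU kernels

print(0.02 / 0.027)   # ~0.74: share of run-to-run std. dev. left after fixing seeds
```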
Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness
Title | Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness |
Authors | Samuel Yeom, Matt Fredrikson |
Abstract | We turn the definition of individual fairness on its head—rather than ascertaining the fairness of a model given a predetermined metric, we find a metric for a given model that satisfies individual fairness. This can facilitate the discussion on the fairness of a model, addressing the issue that it may be difficult to specify a priori a suitable metric. Our contributions are twofold: First, we introduce the definition of a minimal metric and characterize the behavior of models in terms of minimal metrics. Second, for more complicated models, we apply the mechanism of randomized smoothing from adversarial robustness to make them individually fair under a given weighted $L^p$ metric. Our experiments show that adapting the minimal metrics of linear models to more complicated neural networks can lead to meaningful and interpretable fairness guarantees at little cost to utility. |
Tasks | |
Published | 2020-02-18 |
URL | https://arxiv.org/abs/2002.07738v2 |
https://arxiv.org/pdf/2002.07738v2.pdf | |
PWC | https://paperswithcode.com/paper/individual-fairness-revisited-transferring |
Repo | |
Framework | |
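Randomized smoothing, borrowed here from adversarial robustness, predicts the majority class over noise-perturbed copies of the input, which makes the output stable under small perturbations. The sketch below shows the mechanism with a stand-in base classifier and isotropic Gaussian noise; the paper works with weighted $L^p$ metrics rather than this plain case.

```python
# Minimal randomized-smoothing sketch: the smoothed model returns the majority
# vote over noisy copies of the input. The base classifier is a stand-in.
import numpy as np

def base_classifier(x):                      # hypothetical base model
    return int(x.sum() > 0)

def smoothed_classifier(x, sigma=0.5, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    noisy = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    votes = np.bincount([base_classifier(z) for z in noisy], minlength=2)
    return int(np.argmax(votes))

print(smoothed_classifier(np.array([0.2, -0.1, 0.3])))
```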
Attention-based Assisted Excitation for Salient Object Segmentation
Title | Attention-based Assisted Excitation for Salient Object Segmentation |
Authors | Saeed Masoudnia, Melika Kheirieh, Abdol-Hossein Vahabie, Babak Nadjar-Araabi |
Abstract | Visual attention has brought significant progress to Convolutional Neural Networks (CNNs) in various applications. In this paper, object-based attention in the human visual cortex inspires us to introduce a mechanism for modifying the activations in the feature maps of CNNs. In this mechanism, the activations at object locations are excited in the feature maps. It is specifically inspired by the additive gain modulation of object-based attention in the brain, which facilitates figure-ground segregation in the visual cortex. Similar to the brain, we use this idea to address two challenges in salient object segmentation: capturing object interior parts and producing concise boundaries. We implemented it on top of the U-Net model using different architectures in the encoder part, including AlexNet, VGG, and ResNet. The proposed method was examined on three benchmark datasets: HKU-IS, MSRB, and PASCAL-S. Experimental results showed that the inspired idea significantly improves the results in terms of mean absolute error and F-measure. The results also showed that our proposed method better captures not only the boundary but also the object interior, and can thus tackle the mentioned challenges. |
Tasks | Semantic Segmentation |
Published | 2020-03-31 |
URL | https://arxiv.org/abs/2003.14194v1 |
https://arxiv.org/pdf/2003.14194v1.pdf | |
PWC | https://paperswithcode.com/paper/attention-based-assisted-excitation-for |
Repo | |
Framework | |
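The excitation mechanism can be read as an additive boost of feature-map activations inside an object or attention mask. The sketch below is one plausible rendering of that idea with placeholder tensors and gain; the paper's exact formulation, mask source, and gain schedule may differ.

```python
# Sketch of an "assisted excitation" style operation: additively boost
# activations inside an attention/object mask (placeholder tensors and gain).
import torch

features = torch.randn(1, 64, 56, 56)            # (B, C, H, W) encoder features
mask = torch.zeros(1, 1, 56, 56)
mask[:, :, 20:40, 15:35] = 1.0                   # hypothetical object region

alpha = 0.5                                      # excitation strength
excitation = mask * features.mean(dim=(2, 3), keepdim=True)   # per-channel mean
excited = features + alpha * excitation
print(excited.shape)                             # torch.Size([1, 64, 56, 56])
```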
Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation
Title | Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation |
Authors | Dwarikanath Mahapatra, Behzad Bozorgtabar, Jean-Philippe Thiran, Ling Shao |
Abstract | Medical image segmentation is an important task for computer-aided diagnosis. Pixelwise manual annotation of large datasets requires high expertise and is time consuming. Conventional data augmentations have limited benefit because they do not fully represent the underlying distribution of the training set, which affects model robustness when models are tested on images captured from different sources. Prior work leverages synthetic images for data augmentation while ignoring the interleaved geometric relationship between different anatomical labels. We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape. Latent space variable sampling results in diverse images generated from a base image and improves robustness. Given the augmented images generated by our method, we train the segmentation network to enhance segmentation performance on retinal optical coherence tomography (OCT) images. The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures. Ablation studies and visual analysis also demonstrate the benefits of integrating geometry and diversity. |
Tasks | Data Augmentation, Image Generation, Medical Image Segmentation, Semantic Segmentation |
Published | 2020-03-31 |
URL | https://arxiv.org/abs/2003.14119v1 |
https://arxiv.org/pdf/2003.14119v1.pdf | |
PWC | https://paperswithcode.com/paper/pathological-retinal-region-segmentation-from |
Repo | |
Framework | |
Federated pretraining and fine tuning of BERT using clinical notes from multiple silos
Title | Federated pretraining and fine tuning of BERT using clinical notes from multiple silos |
Authors | Dianbo Liu, Tim Miller |
Abstract | Large-scale contextual representation models, such as BERT, have significantly advanced natural language processing (NLP) in recent years. However, in certain areas, such as healthcare, accessing diverse large-scale text data from multiple institutions is extremely challenging due to privacy and regulatory reasons. In this article, we show that it is possible to both pretrain and fine-tune BERT models in a federated manner using clinical texts from different silos without moving the data. |
Tasks | |
Published | 2020-02-20 |
URL | https://arxiv.org/abs/2002.08562v1 |
https://arxiv.org/pdf/2002.08562v1.pdf | |
PWC | https://paperswithcode.com/paper/federated-pretraining-and-fine-tuning-of-bert |
Repo | |
Framework | |
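Federated pretraining and fine-tuning of this kind typically relies on averaging model weights across silos. A plain FedAvg-style aggregation step is sketched below as an assumption of how the combination could look, with toy tensors standing in for real BERT state dicts; the paper's exact aggregation schedule may differ.

```python
# Sketch of a FedAvg-style aggregation step: combine per-silo model weights
# without moving the underlying clinical notes (toy tensors for illustration).
import torch

def federated_average(state_dicts, weights):
    total = sum(weights)
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(w * sd[key].float() for sd, w in zip(state_dicts, weights)) / total
    return avg

# Hypothetical silo models: in practice these would be full BERT state_dicts.
silo_a = {"layer.weight": torch.ones(2, 2)}
silo_b = {"layer.weight": torch.zeros(2, 2)}
print(federated_average([silo_a, silo_b], weights=[3, 1])["layer.weight"])
```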
Elastic Bulk Synchronous Parallel Model for Distributed Deep Learning
Title | Elastic Bulk Synchronous Parallel Model for Distributed Deep Learning |
Authors | Xing Zhao, Manos Papagelis, Aijun An, Bao Xin Chen, Junfeng Liu, Yonggang Hu |
Abstract | Bulk synchronous parallel (BSP) is a celebrated synchronization model for general-purpose parallel computing that has successfully been employed for distributed training of machine learning models. A prevalent shortcoming of BSP is that it requires workers to wait for the straggler at every iteration. To ameliorate this shortcoming of classic BSP, we propose ELASTICBSP, a model that aims to relax its strict synchronization requirement. The proposed model offers more flexibility and adaptability during the training phase without sacrificing the accuracy of the trained model. We also propose an efficient method, named ZIPLINE, that materializes the model. The algorithm is tunable and can effectively balance the trade-off between quality of convergence and iteration throughput in order to accommodate different environments or applications. A thorough experimental evaluation demonstrates that our proposed ELASTICBSP model converges faster and to a higher accuracy than classic BSP. It also achieves comparable (if not higher) accuracy than other sensible synchronization models. |
Tasks | |
Published | 2020-01-06 |
URL | https://arxiv.org/abs/2001.01347v1 |
https://arxiv.org/pdf/2001.01347v1.pdf | |
PWC | https://paperswithcode.com/paper/elastic-bulk-synchronous-parallel-model-for |
Repo | |
Framework | |
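To make the straggler problem concrete, the toy computation below compares the per-iteration cost of classic BSP (wait for the slowest worker) with the average worker time; it only illustrates the motivation and is not the ZIPLINE algorithm.

```python
# Toy illustration of the straggler cost that motivates ELASTICBSP: under
# classic BSP every iteration takes as long as the slowest worker.
import numpy as np

rng = np.random.default_rng(0)
worker_times = rng.lognormal(mean=0.0, sigma=0.5, size=(1000, 16))  # 16 workers

bsp_iteration_time = worker_times.max(axis=1).mean()   # wait for the straggler
average_worker_time = worker_times.mean()              # no synchronization cost
print(f"BSP: {bsp_iteration_time:.2f}s/iter vs. average worker: {average_worker_time:.2f}s")
```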
Compressive MRI quantification using convex spatiotemporal priors and deep auto-encoders
Title | Compressive MRI quantification using convex spatiotemporal priors and deep auto-encoders |
Authors | Mohammad Golbabaee, Guido Bounincontri, Carolin Pirkl, Marion Menzel, Bjoern Menze, Mike Davies, Pedro Gomez |
Abstract | We propose a dictionary-matching-free pipeline for multi-parametric quantitative MRI image computing. Our approach has two stages based on compressed sensing reconstruction and deep-learned quantitative inference. The reconstruction phase is convex and incorporates efficient spatiotemporal regularisations within an accelerated iterative shrinkage algorithm, which minimises the under-sampling (aliasing) artefacts arising from aggressively short scan times. The learned quantitative inference phase is trained purely on physical simulations (Bloch equations), which are flexible enough to produce rich training samples. We propose a deep and compact auto-encoder network with residual blocks in order to embed Bloch manifold projections through multiscale piecewise affine approximations, and to replace the non-scalable dictionary-matching baseline. Tested on a number of datasets, we demonstrate the effectiveness of the proposed scheme for recovering accurate and consistent quantitative information from novel and aggressively subsampled 2D/3D quantitative MRI acquisition protocols. |
Tasks | Physical Simulations |
Published | 2020-01-23 |
URL | https://arxiv.org/abs/2001.08746v1 |
https://arxiv.org/pdf/2001.08746v1.pdf | |
PWC | https://paperswithcode.com/paper/compressive-mri-quantification-using-convex |
Repo | |
Framework | |
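The reconstruction stage is an accelerated iterative shrinkage scheme; a generic FISTA sketch for an l1-regularised least-squares problem is given below as the algorithmic skeleton. The paper's actual regularisers are spatiotemporal and operate on undersampled MRI time-series, so the forward operator and penalty here are simplified assumptions.

```python
# Generic FISTA (accelerated iterative shrinkage) sketch for
# min_x 0.5*||Ax - y||^2 + lam*||x||_1, with a random Gaussian A.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, lam=0.5, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1]); t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120); x_true[:5] = 3.0            # sparse ground truth
x_hat = fista(A, A @ x_true + 0.01 * rng.standard_normal(60))
print(np.round(x_hat[:8], 2))                       # first few recovered coefficients
```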