Paper Group ANR 283
Virtual staining for mitosis detection in Breast Histopathology. Approximation Bounds for Random Neural Networks and Reservoir Systems. Generating Higher-Fidelity Synthetic Datasets with Privacy Guarantees. Learning Stochastic Behaviour of Aggregate Data. Supervised Phrase-boundary Embeddings. Learning the Loss Functions in a Discriminative Space f …
Virtual staining for mitosis detection in Breast Histopathology
Title | Virtual staining for mitosis detection in Breast Histopathology |
Authors | Caner Mercan, Germonda Reijnen-Mooij, David Tellez Martin, Johannes Lotz, Nick Weiss, Marcel van Gerven, Francesco Ciompi |
Abstract | We propose a virtual staining methodology based on Generative Adversarial Networks to map histopathology images of breast cancer tissue from the H&E stain to PHH3 and vice versa. We use the resulting synthetic images to build Convolutional Neural Networks (CNNs) for the automatic detection of mitotic figures, a strong prognostic biomarker used in routine breast cancer diagnosis and grading. We propose several scenarios in which CNNs trained with synthetically generated histopathology images perform on par with, or even better than, the same baseline model trained with real images. We discuss the potential of this application to scale the number of training samples without the need for manual annotations. |
Tasks | Mitosis Detection |
Published | 2020-03-17 |
URL | https://arxiv.org/abs/2003.07801v1 |
https://arxiv.org/pdf/2003.07801v1.pdf | |
PWC | https://paperswithcode.com/paper/virtual-staining-for-mitosis-detection-in |
Repo | |
Framework | |
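The abstract above describes GAN-based stain translation (H&E to PHH3 and back) without specifying the architecture. Below is a minimal, hypothetical sketch of one common choice for unpaired image-to-image mapping, a CycleGAN-style generator objective; the networks, loss form, and weights are toy stand-ins for illustration, not the paper's models.

```python
# Hypothetical CycleGAN-style objective for stain translation (H&E -> PHH3);
# the abstract only states that a GAN is used, so everything below is an assumption.
import torch
import torch.nn as nn

def tiny_cnn(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

G_he2phh3 = tiny_cnn(3, 3)      # H&E tile -> synthetic PHH3 tile
G_phh32he = tiny_cnn(3, 3)      # PHH3 tile -> synthetic H&E tile
D_phh3 = nn.Sequential(tiny_cnn(3, 1), nn.AdaptiveAvgPool2d(1))  # patch-style critic

he = torch.rand(4, 3, 64, 64)   # stand-in for a batch of real H&E tiles

fake_phh3 = G_he2phh3(he)                        # virtual staining
adv = ((D_phh3(fake_phh3) - 1) ** 2).mean()      # least-squares adversarial loss
cyc = (G_phh32he(fake_phh3) - he).abs().mean()   # cycle-consistency (L1)
gen_loss = adv + 10.0 * cyc
gen_loss.backward()
```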
Approximation Bounds for Random Neural Networks and Reservoir Systems
Title | Approximation Bounds for Random Neural Networks and Reservoir Systems |
Authors | Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega |
Abstract | This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights. These methods, in which only the last layer of weights and a few hyperparameters are optimized, have been successfully applied in a wide range of static and dynamic learning problems. Despite the popularity of this approach in empirical tasks, important theoretical questions regarding the relation between the unknown function, the weight distribution, and the approximation rate have remained open. In this work it is proved that, as long as the unknown function, functional, or dynamical system is sufficiently regular, it is possible to draw the internal weights of the random (recurrent) neural network from a generic distribution (not depending on the unknown object) and quantify the error in terms of the number of neurons and the hyperparameters. In particular, this proves that echo state networks with randomly generated weights are capable of approximating a wide class of dynamical systems arbitrarily well and thus provides the first mathematical explanation for their empirically observed success at learning dynamical systems. |
Tasks | |
Published | 2020-02-14 |
URL | https://arxiv.org/abs/2002.05933v1 |
https://arxiv.org/pdf/2002.05933v1.pdf | |
PWC | https://paperswithcode.com/paper/approximation-bounds-for-random-neural |
Repo | |
Framework | |
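As a concrete illustration of the setting analysed above, here is a minimal echo state network in which the internal weights are drawn at random and only the linear readout is fitted by ridge regression. The task, reservoir size, spectral radius, and ridge penalty are illustrative choices, not values from the paper.

```python
# Minimal echo state network sketch: random internal weights, trained linear readout only.
import numpy as np

rng = np.random.default_rng(0)
T, n_res = 500, 200
u = np.sin(0.1 * np.arange(T + 1))             # toy input signal
y_target = u[1:]                               # one-step-ahead prediction task

W_in = rng.normal(scale=0.5, size=(n_res, 1))  # random input weights, never trained
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius below 1

x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])     # reservoir update
    states[t] = x

# Ridge regression for the readout, the only trained parameters.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ y_target)
print("train MSE:", np.mean((states @ W_out - y_target) ** 2))
```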
Generating Higher-Fidelity Synthetic Datasets with Privacy Guarantees
Title | Generating Higher-Fidelity Synthetic Datasets with Privacy Guarantees |
Authors | Aleksei Triastcyn, Boi Faltings |
Abstract | This paper considers the problem of enhancing user privacy in common machine learning development tasks, such as data annotation and inspection, by substituting the real data with samples from a generative adversarial network. We propose employing Bayesian differential privacy as the means to achieve a rigorous theoretical guarantee while providing a better privacy-utility trade-off. We demonstrate experimentally that our approach produces higher-fidelity samples compared to prior work, allowing us to (1) detect more subtle data errors and biases, and (2) reduce the need for real data labelling by achieving high accuracy when training directly on artificial samples. |
Tasks | |
Published | 2020-03-02 |
URL | https://arxiv.org/abs/2003.00997v1 |
https://arxiv.org/pdf/2003.00997v1.pdf | |
PWC | https://paperswithcode.com/paper/generating-higher-fidelity-synthetic-datasets |
Repo | |
Framework | |
Learning Stochastic Behaviour of Aggregate Data
Title | Learning Stochastic Behaviour of Aggregate Data |
Authors | Shaojun Ma, Shu Liu, Hongyuan Zha, Haomin Zhou |
Abstract | Learning the nonlinear dynamics of aggregate data is a challenging problem because the full trajectory of each individual is not observable; an individual observed at one time point may not be observed at the next. One class of existing methods investigates such dynamics by requiring complete longitudinal individual-level trajectories. In most practical applications, however, this requirement is unrealistic due to technical limitations, experimental costs, and/or privacy issues. The other class of methods learns the dynamics by regarding aggregate behaviour as a stochastic process, with or without hidden variables. The performance of such methods may be limited by complex dynamics, high dimensionality, and computational costs. In this paper, we propose a new weak-form-based framework to study the hidden dynamics of aggregate data via a Wasserstein generative adversarial network (WGAN) and the Fokker-Planck equation (FPE). Our model falls into the second class of methods, with a simple structure and low computational cost. We demonstrate our approach on a series of synthetic and real-world datasets. |
Tasks | |
Published | 2020-02-10 |
URL | https://arxiv.org/abs/2002.03513v2 |
https://arxiv.org/pdf/2002.03513v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-stochastic-behaviour-of-aggregate |
Repo | |
Framework | |
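For reference, the Fokker-Planck equation named in the abstract, written in its standard form for an Itô diffusion; the paper's particular drift and diffusion parameterisation is not given in the abstract, so the general form is shown.

```latex
% Density evolution p(x,t) for dX_t = \mu(X_t,t)\,dt + \sigma(X_t,t)\,dW_t
\frac{\partial p(x,t)}{\partial t}
  = -\nabla \cdot \bigl(\mu(x,t)\,p(x,t)\bigr)
  + \frac{1}{2}\sum_{i,j}\frac{\partial^2}{\partial x_i \partial x_j}
    \Bigl[\bigl(\sigma\sigma^{\top}\bigr)_{ij}(x,t)\,p(x,t)\Bigr]
```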
Supervised Phrase-boundary Embeddings
Title | Supervised Phrase-boundary Embeddings |
Authors | Manni Singh, David Weston, Mark Levene |
Abstract | We propose a new word embedding model, called SPhrase, that incorporates supervised phrase information. Our method modifies traditional word embeddings by ensuring that all target words in a phrase have exactly the same context. We demonstrate that including this information within a context window produces superior embeddings for both intrinsic evaluation tasks and downstream extrinsic tasks. |
Tasks | Word Embeddings |
Published | 2020-02-15 |
URL | https://arxiv.org/abs/2002.06450v1 |
https://arxiv.org/pdf/2002.06450v1.pdf | |
PWC | https://paperswithcode.com/paper/supervised-phrase-boundary-embeddings |
Repo | |
Framework | |
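A hypothetical sketch of the pair-generation idea described above: every target word inside an annotated phrase is paired with exactly the same context, drawn from outside the phrase. The precise rules used by SPhrase may differ; the window size and helper name below are illustrative.

```python
# Hypothetical (target, context) pair generation for phrase-aware embeddings.
def phrase_training_pairs(tokens, phrases, window=2):
    """tokens: list of words; phrases: list of (start, end) index spans, end exclusive."""
    pairs = []
    for start, end in phrases:
        left = tokens[max(0, start - window):start]      # context left of the phrase
        right = tokens[end:end + window]                  # context right of the phrase
        context = left + right
        for target in tokens[start:end]:                  # every word in the phrase...
            pairs.extend((target, c) for c in context)    # ...shares exactly this context
    return pairs

print(phrase_training_pairs(
    ["the", "prime", "minister", "visited", "new", "york", "today"],
    [(1, 3), (4, 6)],
))
```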
Learning the Loss Functions in a Discriminative Space for Video Restoration
Title | Learning the Loss Functions in a Discriminative Space for Video Restoration |
Authors | Younghyun Jo, Jaeyeon Kang, Seoung Wug Oh, Seonghyeon Nam, Peter Vajda, Seon Joo Kim |
Abstract | With more advanced deep network architectures and learning schemes such as GANs, the performance of video restoration algorithms has improved greatly in recent years. Meanwhile, the loss functions used to optimize these deep networks have remained relatively unchanged. To address this, we propose a new framework for building effective loss functions by learning a discriminative space specific to a video restoration task. Our framework is similar to GANs in that we iteratively train two networks - a generator and a loss network. The generator learns to restore videos in a supervised fashion by following ground-truth features through feature matching in the discriminative space learned by the loss network. In addition, we introduce a new relation loss to maintain temporal consistency in the output videos. Experiments on video super-resolution and deblurring show that our method generates visually more pleasing videos, with better quantitative perceptual metric values, than other state-of-the-art methods. |
Tasks | Deblurring |
Published | 2020-03-20 |
URL | https://arxiv.org/abs/2003.09124v1 |
https://arxiv.org/pdf/2003.09124v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-the-loss-functions-in-a |
Repo | |
Framework | |
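A minimal sketch of the feature-matching idea in the abstract: the restored frame is pulled towards the ground truth in the feature space of a separately trained loss network. The networks below are toy stand-ins rather than the paper's generator or loss network, and the relation loss is omitted.

```python
# Feature-matching loss in a learned discriminative space (toy illustration).
import torch
import torch.nn as nn

loss_net = nn.Sequential(          # "discriminative space": a small conv feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
)
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))  # stand-in restoration network

degraded = torch.rand(2, 3, 32, 32)
ground_truth = torch.rand(2, 3, 32, 32)

restored = generator(degraded)
# Distance between features of the restored frame and the ground-truth frame.
fm_loss = (loss_net(restored) - loss_net(ground_truth).detach()).pow(2).mean()
fm_loss.backward()
```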
Word Embeddings Inherently Recover the Conceptual Organization of the Human Mind
Title | Word Embeddings Inherently Recover the Conceptual Organization of the Human Mind |
Authors | Victor Swift |
Abstract | Machine learning is a means to uncover deep patterns from rich sources of data. Here, we find that machine learning can recover the conceptual organization of the human mind when applied to the natural language use of millions of people. Utilizing text from billions of webpages, we recover most of the concepts contained in English, Dutch, and Japanese, as represented in large scale Word Association networks. Our results justify machine learning as a means to probe the human mind, at a depth and scale that has been unattainable using self-report and observational methods. Beyond direct psychological applications, our methods may prove useful for projects concerned with defining, assessing, relating, or uncovering concepts in any scientific field. |
Tasks | Word Embeddings |
Published | 2020-02-06 |
URL | https://arxiv.org/abs/2002.10284v1 |
https://arxiv.org/pdf/2002.10284v1.pdf | |
PWC | https://paperswithcode.com/paper/word-embeddings-inherently-recover-the |
Repo | |
Framework | |
Learning to Deblur and Generate High Frame Rate Video with an Event Camera
Title | Learning to Deblur and Generate High Frame Rate Video with an Event Camera |
Authors | Chen Haoyu, Teng Minggui, Shi Boxin, Wang YIzhou, Huang Tiejun |
Abstract | Event cameras are bio-inspired cameras that can measure changes in intensity asynchronously with high temporal resolution. One advantage of event cameras is that they do not suffer from motion blur when recording high-speed scenes. In this paper, we formulate the deblurring task on traditional cameras, guided by events, as a residual learning problem, and we propose corresponding network architectures for effective learning of deblurring and high frame rate video generation. We first train a modified U-Net to restore a sharp image from a blurry image using the corresponding events. We then train another similar network with different downsampling blocks to generate high frame rate video using the restored sharp image and the events. Experimental results show that our method restores sharper images and videos than state-of-the-art methods. |
Tasks | Deblurring, Video Generation |
Published | 2020-03-02 |
URL | https://arxiv.org/abs/2003.00847v2 |
https://arxiv.org/pdf/2003.00847v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-to-deblur-and-generate-high-frame |
Repo | |
Framework | |
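A hypothetical sketch of the residual formulation described above: the network predicts a correction that is added back to the blurry frame, conditioned on stacked event data. The paper uses a modified U-Net; a tiny CNN stands in here, and the binned event representation is an assumption.

```python
# Residual deblurring conditioned on events: sharp = blurry + network(blurry, events).
import torch
import torch.nn as nn

class ResidualDeblur(nn.Module):
    def __init__(self, event_channels=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + event_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, blurry, events):
        residual = self.net(torch.cat([blurry, events], dim=1))
        return blurry + residual           # residual learning: add a predicted correction

blurry = torch.rand(1, 3, 64, 64)
events = torch.rand(1, 5, 64, 64)          # e.g. a temporally binned event voxel grid
sharp_estimate = ResidualDeblur()(blurry, events)
```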
PPMC RL Training Algorithm: Rough Terrain Intelligent Robots through Reinforcement Learning
Title | PPMC RL Training Algorithm: Rough Terrain Intelligent Robots through Reinforcement Learning |
Authors | Tamir Blum, Kazuya Yoshida |
Abstract | Robots can now learn how to make decisions and control themselves, generalizing learned behaviors to unseen scenarios. In particular, AI-powered robots show promise in rough environments like the lunar surface due to the environmental uncertainties. We address this critical generalization aspect for robot locomotion in rough terrain through a training algorithm we have created, called the Path Planning and Motion Control (PPMC) Training Algorithm. This algorithm is coupled with any generic reinforcement learning algorithm to teach robots how to respond to user commands and to travel to designated locations using a single neural network. In this paper, we show that the algorithm works independently of the robot structure, demonstrating that it works on a wheeled rover in addition to past results on a quadruped walking robot. Further, we take several big steps towards real-world practicality by introducing rough, highly uneven terrain. Critically, we show through experiments that the robot learns to generalize to new rough terrain maps, retaining a 100% success rate. To the best of our knowledge, this is the first paper to introduce a generic training algorithm that teaches generalized PPMC in rough environments to any robot, using only reinforcement learning. |
Tasks | |
Published | 2020-03-02 |
URL | https://arxiv.org/abs/2003.02655v2 |
https://arxiv.org/pdf/2003.02655v2.pdf | |
PWC | https://paperswithcode.com/paper/ppmc-training-algorithm-a-robot-independent |
Repo | |
Framework | |
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
Title | Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model |
Authors | Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong |
Abstract | We study how to train a student deep neural network for visual recognition by distilling knowledge from a blackbox teacher model in a data-efficient manner. Progress on this problem can significantly reduce the dependence on large-scale datasets for learning high-performing visual recognition models. There are two major challenges. One is that the number of queries into the teacher model should be minimized to save computational and/or financial costs. The other is that the number of images used for the knowledge distillation should be small; otherwise, it violates our expectation of reducing the dependence on large-scale datasets. To tackle these challenges, we propose an approach that blends mixup and active learning. The former effectively augments the few unlabeled images with a big pool of synthetic images sampled from the convex hull of the original images, and the latter actively chooses hard examples from the pool for the student neural network and queries their labels from the teacher model. We validate our approach with extensive experiments. |
Tasks | Active Learning |
Published | 2020-03-31 |
URL | https://arxiv.org/abs/2003.13960v1 |
https://arxiv.org/pdf/2003.13960v1.pdf | |
PWC | https://paperswithcode.com/paper/neural-networks-are-more-productive-teachers |
Repo | |
Framework | |
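A minimal sketch of the active-mixup loop described above: synthesize a pool of convex combinations of the few unlabeled images, select examples that are hard for the student, and query the blackbox teacher only on those. Selecting by prediction entropy is an assumption, and the networks are toy stand-ins.

```python
# Hypothetical active-mixup distillation loop from a blackbox teacher.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mixup_pool(images, n_mix):
    i = torch.randint(len(images), (n_mix,))
    j = torch.randint(len(images), (n_mix,))
    lam = torch.rand(n_mix, 1, 1, 1)
    return lam * images[i] + (1 - lam) * images[j]        # samples from the convex hull

def select_hard(student, pool, k):
    with torch.no_grad():
        probs = F.softmax(student(pool), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    return pool[entropy.topk(k).indices]                   # most uncertain for the student

student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
blackbox_teacher = lambda x: F.softmax(torch.randn(len(x), 10), dim=1)  # stand-in API

unlabeled = torch.rand(100, 3, 32, 32)
pool = mixup_pool(unlabeled, n_mix=1000)
queries = select_hard(student, pool, k=64)
soft_labels = blackbox_teacher(queries)                    # the only teacher queries made
kd_loss = F.kl_div(F.log_softmax(student(queries), dim=1), soft_labels, reduction="batchmean")
kd_loss.backward()
```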
Classification of human activity recognition using smartphones
Title | Classification of human activity recognition using smartphones |
Authors | Hoda Sedighi |
Abstract | Smartphones have become the most popular and widely used means of communication. Human activity recognition is now possible on mobile devices through embedded sensors, which can be exploited to manage user behavior on mobile devices by predicting user activity. To this end, this research studied storing activity characteristics, classifying them, and mapping them to a learning algorithm. In this study, we applied classification with a deep belief network to the training and test data, which resulted in 98.25% correct classification on the training data and 93.01% on the test data. We therefore show that the deep belief network is a suitable method for this particular purpose. |
Tasks | Activity Recognition, Human Activity Recognition |
Published | 2020-01-06 |
URL | https://arxiv.org/abs/2001.09740v1 |
https://arxiv.org/pdf/2001.09740v1.pdf | |
PWC | https://paperswithcode.com/paper/classification-of-human-activity-recognition |
Repo | |
Framework | |
Gaussian Approximation of Quantization Error for Estimation from Compressed Data
Title | Gaussian Approximation of Quantization Error for Estimation from Compressed Data |
Authors | Alon Kipnis, Galen Reeves |
Abstract | We consider the distributional connection between the lossy compressed representation of a high-dimensional signal $X$ using a random spherical code and the observation of $X$ under an additive white Gaussian noise (AWGN). We show that the Wasserstein distance between a bitrate-$R$ compressed version of $X$ and its observation under an AWGN-channel of signal-to-noise ratio $2^{2R}-1$ is sub-linear in the problem dimension. We utilize this fact to connect the risk of an estimator based on an AWGN-corrupted version of $X$ to the risk attained by the same estimator when fed with its bitrate-$R$ quantized version. We demonstrate the usefulness of this connection by deriving various novel results for inference problems under compression constraints, including noisy source coding and limited-bitrate parameter estimation. |
Tasks | Quantization |
Published | 2020-01-09 |
URL | https://arxiv.org/abs/2001.03243v2 |
https://arxiv.org/pdf/2001.03243v2.pdf | |
PWC | https://paperswithcode.com/paper/gaussian-approximation-of-quantization-error |
Repo | |
Framework | |
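A symbolic restatement of the abstract's main relation; the notation and scaling conventions below are illustrative and may differ from the paper's.

```latex
% With $\hat{X}_R$ the bitrate-$R$ spherical-code reconstruction of $X \in \mathbb{R}^n$
% and $Y = X + Z$ an AWGN observation at signal-to-noise ratio $2^{2R}-1$:
W\!\left(P_{\hat{X}_R},\, P_{Y}\right) \;=\; o(n),
\qquad
Z \sim \mathcal{N}\!\left(0,\ \frac{\mathbb{E}\|X\|^2}{n\,(2^{2R}-1)}\, I_n\right)
```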
A survey of statistical learning techniques as applied to inexpensive pediatric Obstructive Sleep Apnea data
Title | A survey of statistical learning techniques as applied to inexpensive pediatric Obstructive Sleep Apnea data |
Authors | Emily T. Winn, Marilyn Vazquez, Prachi Loliencar, Kaisa Taipale, Xu Wang, Giseon Heo |
Abstract | Pediatric obstructive sleep apnea affects an estimated 1-5% of elementary-school aged children and can lead to other detrimental health problems. Swift diagnosis and treatment are critical to a child’s growth and development, but the variability of symptoms and the complexity of the available data make this a challenge. We take a first step in streamlining the process by focusing on inexpensive data from questionnaires and craniofacial measurements. We apply correlation networks, the Mapper algorithm from topological data analysis, and singular value decomposition in a process of exploratory data analysis. We then apply a variety of supervised and unsupervised learning techniques from statistics, machine learning, and topology, ranging from support vector machines to Bayesian classifiers and manifold learning. Finally, we analyze the results of each of these methods and discuss the implications for a multi-data-sourced algorithm moving forward. |
Tasks | Topological Data Analysis |
Published | 2020-02-17 |
URL | https://arxiv.org/abs/2002.07873v2 |
https://arxiv.org/pdf/2002.07873v2.pdf | |
PWC | https://paperswithcode.com/paper/a-survey-of-statistical-learning-techniques |
Repo | |
Framework | |
4D Semantic Cardiac Magnetic Resonance Image Synthesis on XCAT Anatomical Model
Title | 4D Semantic Cardiac Magnetic Resonance Image Synthesis on XCAT Anatomical Model |
Authors | Samaneh Abbasi-Sureshjani, Sina Amirrajab, Cristian Lorenz, Juergen Weese, Josien Pluim, Marcel Breeuwer |
Abstract | We propose a hybrid controllable image generation method to synthesize anatomically meaningful 3D+t labeled Cardiac Magnetic Resonance (CMR) images. Our hybrid method takes the mechanistic 4D eXtended CArdiac Torso (XCAT) heart model as the anatomical ground truth and synthesizes CMR images via a data-driven Generative Adversarial Network (GAN). We employ the state-of-the-art SPatially Adaptive De-normalization (SPADE) technique for conditional image synthesis to preserve the semantic spatial information of the ground truth anatomy. Using the parameterized motion model of the XCAT heart, we generate labels for 25 time frames of the heart for one cardiac cycle at 18 locations for the short-axis view. Subsequently, realistic images are generated from these labels, with modality-specific features learned from real CMR image data. We demonstrate that style transfer from another cardiac image can be accomplished by using a style encoder network. Due to the flexibility of XCAT in creating new heart models, this approach can result in a realistic virtual population to address different challenges that the medical image analysis research community is facing, such as expensive data collection. Our proposed method has great potential to synthesize 4D controllable CMR images with annotations and adaptable styles to be used in various supervised multi-site, multi-vendor applications in medical image analysis. |
Tasks | Image Generation, Style Transfer |
Published | 2020-02-17 |
URL | https://arxiv.org/abs/2002.07089v1 |
https://arxiv.org/pdf/2002.07089v1.pdf | |
PWC | https://paperswithcode.com/paper/4d-semantic-cardiac-magnetic-resonance-image |
Repo | |
Framework | |
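The abstract names SPADE as the conditioning mechanism. Below is a compact sketch of a generic SPADE block, in which normalized activations are modulated by scale and shift maps predicted from the anatomical label map; the channel sizes and layer choices are illustrative, not the paper's.

```python
# Generic SPADE block: label-map-conditioned modulation of normalized activations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    def __init__(self, feat_channels, label_channels, hidden=64):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_channels, affine=False)  # parameter-free normalization
        self.shared = nn.Sequential(nn.Conv2d(label_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, feat_channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, 3, padding=1)

    def forward(self, features, label_map):
        label_map = F.interpolate(label_map, size=features.shape[2:], mode="nearest")
        h = self.shared(label_map)
        return self.norm(features) * (1 + self.gamma(h)) + self.beta(h)

features = torch.rand(2, 128, 32, 32)          # generator activations
labels = torch.rand(2, 4, 128, 128)            # stand-in one-hot anatomical label map from XCAT
out = SPADE(128, 4)(features, labels)
```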
Learning Better Lossless Compression Using Lossy Compression
Title | Learning Better Lossless Compression Using Lossy Compression |
Authors | Fabian Mentzer, Luc Van Gool, Michael Tschannen |
Abstract | We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system. Specifically, the original image is first decomposed into the lossy reconstruction obtained after compressing it with BPG and the corresponding residual. We then model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction, and combine it with entropy coding to losslessly encode the residual. Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder. The resulting compression system achieves state-of-the-art performance in learned lossless full-resolution image compression, outperforming previous learned approaches as well as PNG, WebP, and JPEG2000. |
Tasks | Image Compression |
Published | 2020-03-23 |
URL | https://arxiv.org/abs/2003.10184v1 |
https://arxiv.org/pdf/2003.10184v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-better-lossless-compression-using |
Repo | |
Framework | |
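A schematic sketch of the pipeline in the abstract: store the BPG bitstream plus an entropy-coded residual whose distribution is predicted by a CNN conditioned on the BPG reconstruction. The `bpg_reconstruct` stand-in below is not real BPG, and the single-convolution probability model replaces the paper's conditional network; the cross-entropy gives the ideal code length that entropy coding would approach.

```python
# Schematic lossy-plus-residual lossless coding: model p(residual | lossy reconstruction).
import torch
import torch.nn as nn
import torch.nn.functional as F

def bpg_reconstruct(image):
    # Stand-in for BPG lossy compression + decompression (crude 8x downsample/upsample).
    small = F.avg_pool2d(image, 8)
    return F.interpolate(small, scale_factor=8, mode="bilinear", align_corners=False)

prob_model = nn.Conv2d(3, 3 * 256, 3, padding=1)   # predicts 256 logits per residual channel

image = (torch.rand(1, 3, 64, 64) * 255).round()
lossy = bpg_reconstruct(image)
residual = (image - lossy.round()).clamp(-128, 127) + 128   # shift residual into [0, 255]

logits = prob_model(lossy / 255.0).view(1, 3, 256, 64, 64)
# Ideal residual code length (in bits) under the learned conditional model.
nll_bits = F.cross_entropy(
    logits.permute(0, 2, 1, 3, 4), residual.long(), reduction="sum"
) / torch.log(torch.tensor(2.0))
print("estimated residual bits:", float(nll_bits))
```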