Paper Group AWR 212
Kernelized Capsule Networks. Lung and Colon Cancer Histopathological Image Dataset (LC25000). Learning to Seek: Deep Reinforcement Learning for Phototaxis of a Nano Drone in an Obstacle Field. Augmented Neural ODEs. Forced Spatial Attention for Driver Foot Activity Classification. A Benchmark for Edge-Preserving Image Smoothing. Adaptive Divergence …
Kernelized Capsule Networks
Title | Kernelized Capsule Networks |
Authors | Taylor Killian, Justin Goodwin, Olivia Brown, Sung-Hyun Son |
Abstract | Capsule Networks attempt to represent patterns in images in a way that preserves hierarchical spatial relationships. Additionally, research has demonstrated that these techniques may be robust against adversarial perturbations. We present an improvement to training capsule networks with added robustness via non-parametric kernel methods. The representations learned through the capsule network are used to construct covariance kernels for Gaussian processes (GPs). We demonstrate that this approach achieves comparable prediction performance to Capsule Networks while improving robustness to adversarial perturbations and providing a meaningful measure of uncertainty that may aid in the detection of adversarial inputs. |
Tasks | Gaussian Processes |
Published | 2019-06-07 |
URL | https://arxiv.org/abs/1906.03164v1 |
PDF | https://arxiv.org/pdf/1906.03164v1.pdf |
PWC | https://paperswithcode.com/paper/kernelized-capsule-networks |
Repo | https://github.com/bakirillov/capsules |
Framework | pytorch |
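A minimal sketch of the idea in the abstract above, not the authors' code: embeddings from a (here, stand-in) capsule network are used as inputs to a Gaussian process classifier, so the GP covariance kernel is effectively computed on the capsule representation. The random-projection "encoder" and the RBF kernel choice are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def capsule_embed(x):
    # Placeholder for a trained capsule network's embedding output;
    # a fixed random projection stands in for the learned encoder.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((x.shape[1], 16))
    return np.tanh(x @ W)

# Toy data standing in for image features and labels.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 64))
y = (X[:, 0] > 0).astype(int)

Z = capsule_embed(X)                          # capsule-style representation
gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gp.fit(Z[:150], y[:150])

probs = gp.predict_proba(Z[150:])             # predictive probabilities
print(probs[:3])                              # predictive uncertainty can help flag adversarial inputs
```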
Lung and Colon Cancer Histopathological Image Dataset (LC25000)
Title | Lung and Colon Cancer Histopathological Image Dataset (LC25000) |
Authors | Andrew A. Borkowski, Marilyn M. Bui, L. Brannon Thomas, Catherine P. Wilson, Lauren A. DeLand, Stephen M. Mastorides |
Abstract | The field of Machine Learning, a subset of Artificial Intelligence, has led to remarkable advancements in many areas, including medicine. Machine Learning algorithms require large datasets to train computer models successfully. Although there are medical image datasets available, more image datasets are needed from a variety of medical entities, especially cancer pathology. Even more scarce are ML-ready image datasets. To address this need, we created an image dataset (LC25000) with 25,000 color images in 5 classes. Each class contains 5,000 images of the following histologic entities: colon adenocarcinoma, benign colonic tissue, lung adenocarcinoma, lung squamous cell carcinoma, and benign lung tissue. All images are de-identified, HIPAA compliant, validated, and freely available for download to AI researchers. |
Tasks | |
Published | 2019-12-16 |
URL | https://arxiv.org/abs/1912.12142v1 |
PDF | https://arxiv.org/pdf/1912.12142v1.pdf |
PWC | https://paperswithcode.com/paper/lung-and-colon-cancer-histopathological-image |
Repo | https://github.com/tampapath/lung_colon_image_set |
Framework | none |
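A minimal loading sketch for a dataset of this shape, assuming the 25,000 images have been arranged into one folder per class; the folder name `lc25000` and the exact class directory names are assumptions, not the repository's documented layout.

```python
import torch
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed layout: lc25000/<class_name>/*.jpeg with 5,000 images per class.
dataset = datasets.ImageFolder("lc25000", transform=tfm)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

print(dataset.classes)               # should list the 5 histologic classes
images, labels = next(iter(loader))
print(images.shape, labels.shape)    # torch.Size([32, 3, 224, 224]) torch.Size([32])
```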
Learning to Seek: Deep Reinforcement Learning for Phototaxis of a Nano Drone in an Obstacle Field
Title | Learning to Seek: Deep Reinforcement Learning for Phototaxis of a Nano Drone in an Obstacle Field |
Authors | Bardienus P. Duisterhof, Srivatsan Krishnan, Jonathan J. Cruz, Colby R. Banbury, William Fu, Aleksandra Faust, Guido C. H. E. de Croon, Vijay Janapa Reddi |
Abstract | Nano drones are uniquely equipped for fully autonomous applications due to their agility, low cost, and small size. However, their constrained form factor limits flight time, sensor payload, and compute capability. While visual servoing of nano drones can achieve complex tasks, state-of-the-art solutions have a significant impact on endurance and cost. The primary goal of our work is to demonstrate phototaxis in an obstacle field, by adding only a lightweight and low-cost light sensor to a nano drone. We deploy a deep reinforcement learning model capable of flying direct paths even with noisy sensor readings. By carefully designing the network input, we feed features relevant to the agent in finding the source, while reducing computational cost and enabling inference up to 100 Hz onboard the nano drone. We verify our approach with simulation and in-field testing on a Bitcraze CrazyFlie, achieving a 94% success rate in cluttered and randomized test environments. The policy demonstrates efficient light seeking by reaching the goal in simulation in 65% fewer steps and with 60% shorter paths, compared to a baseline random walker algorithm. |
Tasks | Autonomous Navigation, Quantization |
Published | 2019-09-25 |
URL | https://arxiv.org/abs/1909.11236v4 |
PDF | https://arxiv.org/pdf/1909.11236v4.pdf |
PWC | https://paperswithcode.com/paper/learning-to-seek-autonomous-source-seeking |
Repo | https://github.com/harvard-edge/source-seeking |
Framework | tf |
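An illustrative sketch only: a tiny policy network of the kind that could map light-sensor and range readings to discrete steering commands, small enough to run at high rates onboard after quantization. The input layout, sizes, and action set here are assumptions; the paper's actual network design and training loop differ.

```python
import torch
import torch.nn as nn

class SeekerPolicy(nn.Module):
    def __init__(self, n_inputs=8, n_actions=4):
        super().__init__()
        # Deliberately small so inference stays cheap on a microcontroller.
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)            # action logits

policy = SeekerPolicy()
obs = torch.randn(1, 8)                 # e.g. light readings + range distances (assumed)
action = policy(obs).argmax(dim=-1)
print(action.item())
```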
Augmented Neural ODEs
Title | Augmented Neural ODEs |
Authors | Emilien Dupont, Arnaud Doucet, Yee Whye Teh |
Abstract | We show that Neural Ordinary Differential Equations (ODEs) learn representations that preserve the topology of the input space and prove that this implies the existence of functions Neural ODEs cannot represent. To address these limitations, we introduce Augmented Neural ODEs which, in addition to being more expressive models, are empirically more stable, generalize better and have a lower computational cost than Neural ODEs. |
Tasks | Image Classification |
Published | 2019-04-02 |
URL | https://arxiv.org/abs/1904.01681v3 |
PDF | https://arxiv.org/pdf/1904.01681v3.pdf |
PWC | https://paperswithcode.com/paper/augmented-neural-odes |
Repo | https://github.com/mandubian/pytorch-neural-ode |
Framework | pytorch |
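A sketch of the augmentation idea: append extra zero-valued dimensions to the state before integrating the ODE, giving the learned flow room to avoid trajectory crossings. A fixed-step Euler loop stands in here for a proper adaptive ODE solver (e.g. the one used in the linked repository).

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

def odeint_euler(func, h0, t0=0.0, t1=1.0, steps=20):
    # Simple fixed-step Euler integration, just for illustration.
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * func(t0 + i * dt, h)
    return h

x = torch.randn(16, 2)                       # original input
p = 3                                        # number of augmented dimensions
h0 = torch.cat([x, torch.zeros(16, p)], 1)   # augmented state [x, 0]
func = ODEFunc(dim=2 + p)
hT = odeint_euler(func, h0)                  # solve the augmented ODE
logits = nn.Linear(2 + p, 10)(hT)            # downstream classifier head
print(logits.shape)
```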
Forced Spatial Attention for Driver Foot Activity Classification
Title | Forced Spatial Attention for Driver Foot Activity Classification |
Authors | Akshay Rangesh, Mohan M. Trivedi |
Abstract | This paper provides a simple solution for reliably solving image classification tasks tied to spatial locations of salient objects in the scene. Unlike conventional image classification approaches that are designed to be invariant to translations of objects in the scene, we focus on tasks where the output classes vary with respect to where an object of interest is situated within an image. To handle this variant of the image classification task, we propose augmenting the standard cross-entropy (classification) loss with a domain dependent Forced Spatial Attention (FSA) loss, which in essence compels the network to attend to specific regions in the image associated with the desired output class. To demonstrate the utility of this loss function, we consider the task of driver foot activity classification - where each activity is strongly correlated with where the driver’s foot is in the scene. Training with our proposed loss function results in significantly improved accuracies, better generalization, and robustness against noise, while obviating the need for very large datasets. |
Tasks | Image Classification |
Published | 2019-07-27 |
URL | https://arxiv.org/abs/1907.11824v3 |
PDF | https://arxiv.org/pdf/1907.11824v3.pdf |
PWC | https://paperswithcode.com/paper/forced-spatial-attention-for-driver-foot |
Repo | https://github.com/arangesh/Forced-Spatial-Attention |
Framework | pytorch |
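A hedged sketch of the general idea: augment cross-entropy with a term that pushes an intermediate spatial attention map toward a class-dependent target mask. The exact FSA formulation, mask design, and weighting schedule are defined in the paper; the MSE form and `lam` below are assumptions.

```python
import torch
import torch.nn.functional as F

def fsa_style_loss(logits, attn_map, labels, target_masks, lam=1.0):
    """
    logits:       (B, C) class scores
    attn_map:     (B, H, W) predicted spatial attention, values in [0, 1]
    labels:       (B,) ground-truth class indices
    target_masks: (C, H, W) hand-defined region mask per class
    """
    ce = F.cross_entropy(logits, labels)
    forced = target_masks[labels]             # (B, H, W) mask for each sample's class
    attn = F.mse_loss(attn_map, forced)       # force attention onto the class region
    return ce + lam * attn

B, C, H, W = 4, 5, 7, 7
loss = fsa_style_loss(torch.randn(B, C),
                      torch.rand(B, H, W),
                      torch.randint(0, C, (B,)),
                      torch.rand(C, H, W))
print(loss.item())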
A Benchmark for Edge-Preserving Image Smoothing
Title | A Benchmark for Edge-Preserving Image Smoothing |
Authors | Feida Zhu, Zhetong Liang, Xixi Jia, Lei Zhang, Yizhou Yu |
Abstract | Edge-preserving image smoothing is an important step for many low-level vision problems. Though many algorithms have been proposed, there are several difficulties hindering its further development. First, most existing algorithms cannot perform well on a wide range of image contents using a single parameter setting. Second, the performance evaluation of edge-preserving image smoothing remains subjective, and there is a lack of widely accepted datasets for objectively comparing the different algorithms. To address these issues and further advance the state of the art, in this work we propose a benchmark for edge-preserving image smoothing. This benchmark includes an image dataset with ground-truth image smoothing results as well as baseline algorithms that can generate competitive edge-preserving smoothing results for a wide range of image contents. The established dataset contains 500 training and testing images with a number of representative visual object categories, while the baseline methods in our benchmark are built upon representative deep convolutional network architectures, on top of which we design novel loss functions well suited for edge-preserving image smoothing. The trained deep networks run faster than most state-of-the-art smoothing algorithms with leading smoothing results both qualitatively and quantitatively. The benchmark is publicly accessible via https://github.com/zhufeida/Benchmark_EPS. |
Tasks | |
Published | 2019-04-02 |
URL | http://arxiv.org/abs/1904.01579v1 |
PDF | http://arxiv.org/pdf/1904.01579v1.pdf |
PWC | https://paperswithcode.com/paper/a-benchmark-for-edge-preserving-image |
Repo | https://github.com/zhufeida/Benchmark_EPS |
Framework | tf |
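An evaluation-side sketch only: score a smoothing result against a ground-truth image with MSE-based PSNR, the kind of quantitative comparison a benchmark like this enables. The file names are hypothetical and the benchmark's own metrics may differ.

```python
import numpy as np
from PIL import Image

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

result = np.asarray(Image.open("result.png"))        # a method's smoothing output
target = np.asarray(Image.open("groundtruth.png"))   # benchmark ground truth
print("PSNR:", psnr(result, target))
```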
Adaptive Divergence for Rapid Adversarial Optimization
Title | Adaptive Divergence for Rapid Adversarial Optimization |
Authors | Maxim Borisyak, Tatiana Gaintseva, Andrey Ustyuzhanin |
Abstract | Adversarial Optimization (AO) provides a reliable, practical way to match two implicitly defined distributions, one of which is usually represented by a sample of real data, and the other is defined by a generator. Typically, AO involves training a high-capacity model at each step of the optimization. In this work, we consider computationally heavy generators, for which training of high-capacity models is associated with substantial computational costs. To address this problem, we introduce a novel family of divergences, which varies the capacity of the underlying model and allows for a significant acceleration with respect to the number of samples drawn from the generator. We demonstrate the performance of the proposed divergences on several tasks, including tuning parameters of a physics simulator, namely, the Pythia event generator. |
Tasks | |
Published | 2019-12-01 |
URL | https://arxiv.org/abs/1912.00520v1 |
PDF | https://arxiv.org/pdf/1912.00520v1.pdf |
PWC | https://paperswithcode.com/paper/adaptive-divergence-for-rapid-adversarial |
Repo | https://github.com/HSE-LAMBDA/rapid-ao |
Framework | pytorch |
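A generic sketch of the discriminator-based divergence estimation that adversarial optimization builds on: a classifier is trained to separate generator samples from data, and its cross-entropy bounds the Jensen-Shannon divergence. The paper's adaptive divergences go further by varying the capacity of this model; that mechanism is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def js_proxy(real, fake):
    X = np.vstack([real, fake])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    ce = log_loss(y, clf.predict_proba(X)[:, 1])   # mean cross-entropy in nats
    return np.log(2.0) - ce                        # larger => distributions differ more

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 2))         # "data" sample
fake = rng.normal(0.8, 1.0, size=(500, 2))         # "generator" sample
print(js_proxy(real, fake))
```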
Deep Set Prediction Networks
Title | Deep Set Prediction Networks |
Authors | Yan Zhang, Jonathon Hare, Adam Prügel-Bennett |
Abstract | Current approaches for predicting sets from feature vectors ignore the unordered nature of sets and suffer from discontinuity issues as a result. We propose a general model for predicting sets that properly respects the structure of sets and avoids this problem. With a single feature vector as input, we show that our model is able to auto-encode point sets, predict the set of bounding boxes of objects in an image, and predict the set of attributes of these objects. |
Tasks | |
Published | 2019-06-15 |
URL | https://arxiv.org/abs/1906.06565v5 |
PDF | https://arxiv.org/pdf/1906.06565v5.pdf |
PWC | https://paperswithcode.com/paper/deep-set-prediction-networks |
Repo | https://github.com/Cyanogenoid/dspn |
Framework | none |
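A minimal sketch of the core decoding idea, assuming a simple sum-pooled set encoder: predict a set by running gradient descent on the set elements themselves so that the encoding of the predicted set matches the input feature vector. Sizes and the inner-loop settings below are illustrative.

```python
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    def __init__(self, d_elem=2, d_feat=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_elem, 64), nn.ReLU(), nn.Linear(64, d_feat))

    def forward(self, s):               # s: (n, d_elem)
        return self.phi(s).sum(dim=0)   # sum pooling => order-invariant

encoder = SetEncoder()
target_feat = encoder(torch.rand(5, 2)).detach()   # feature vector to decode a set from

pred_set = torch.zeros(5, 2, requires_grad=True)   # initial guess for the set
opt = torch.optim.SGD([pred_set], lr=0.1)
for _ in range(50):                                # inner decoding loop
    opt.zero_grad()
    loss = (encoder(pred_set) - target_feat).pow(2).mean()
    loss.backward()
    opt.step()
print(pred_set.detach())
```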
SAR: Learning Cross-Language API Mappings with Little Knowledge
Title | SAR: Learning Cross-Language API Mappings with Little Knowledge |
Authors | Nghi D. Q. Bui, Yijun Yu, Lingxiao Jiang |
Abstract | To save manual effort, developers often translate programs from one programming language to another, instead of implementing them from scratch. Translating application program interfaces (APIs) used in one language to functionally equivalent ones available in another language is an important aspect of program translation. Existing approaches facilitate the translation by automatically identifying the API mappings across programming languages. However, all these approaches still require a large amount of manual effort in preparing parallel program corpora, ranging from pairs of APIs to manually identified code in different languages that is considered functionally equivalent. To minimize the manual effort in identifying parallel program corpora and API mappings, this paper aims at an automated approach to map APIs across languages with much less a priori knowledge than other existing approaches. The approach is based on a realization of the notion of domain adaptation combined with code embedding, which can better align two vector spaces: taking as input large sets of programs, our approach first generates numeric vector representations of the programs, especially the APIs used in each language, and it adapts generative adversarial networks (GAN) to align the vectors from the spaces of two languages. For a better alignment, we initialize the GAN with parameters derived from optional API mapping seeds that can be identified accurately with a simple automatic signature-based matching heuristic. Then the cross-language API mappings can be identified via nearest-neighbors queries in the aligned vector spaces. |
Tasks | Domain Adaptation |
Published | 2019-06-10 |
URL | https://arxiv.org/abs/1906.03835v1 |
PDF | https://arxiv.org/pdf/1906.03835v1.pdf |
PWC | https://paperswithcode.com/paper/sar-learning-cross-language-api-mappings-with |
Repo | https://github.com/djxvii/fse2019 |
Framework | pytorch |
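A simplified sketch of cross-space alignment plus nearest-neighbor lookup: here two toy API embedding spaces are aligned with orthogonal Procrustes from a handful of seed pairs. The paper instead adapts a GAN to align the spaces and uses such seeds only for initialization, so this is an illustration of the query step, not of SAR itself.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
emb_a = rng.standard_normal((100, 32))                       # toy API embeddings, language A
R_true = np.linalg.qr(rng.standard_normal((32, 32)))[0]      # hidden rotation between spaces
emb_b = emb_a @ R_true + 0.01 * rng.standard_normal((100, 32))  # language B embeddings

seeds = np.arange(10)                                        # indices of known API pairs
W, _ = orthogonal_procrustes(emb_a[seeds], emb_b[seeds])     # alignment from seeds

query = emb_a[42] @ W                                        # map an API of A into B's space
nn_idx = np.argmin(np.linalg.norm(emb_b - query, axis=1))    # nearest-neighbor mapping
print(nn_idx)                                                # expected: 42
```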
Are Labels Required for Improving Adversarial Robustness?
Title | Are Labels Required for Improving Adversarial Robustness? |
Authors | Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Robert Stanforth, Alhussein Fawzi, Pushmeet Kohli |
Abstract | Recent work has uncovered the interesting (and somewhat surprising) finding that training models to be invariant to adversarial perturbations requires substantially larger datasets than those required for standard classification. This result is a key hurdle in the deployment of robust machine learning models in many real world applications where labeled data is expensive. Our main insight is that unlabeled data can be a competitive alternative to labeled data for training adversarially robust models. Theoretically, we show that in a simple statistical setting, the sample complexity for learning an adversarially robust model from unlabeled data matches the fully supervised case up to constant factors. On standard datasets like CIFAR-10, a simple Unsupervised Adversarial Training (UAT) approach using unlabeled data improves robust accuracy by 21.7% over using 4K supervised examples alone, and captures over 95% of the improvement from the same number of labeled examples. Finally, we report an improvement of 4% over the previous state-of-the-art on CIFAR-10 against the strongest known attack by using additional unlabeled data from the uncurated 80 Million Tiny Images dataset. This demonstrates that our finding extends as well to the more realistic case where unlabeled data is also uncurated, therefore opening a new avenue for improving adversarial training. |
Tasks | |
Published | 2019-05-31 |
URL | https://arxiv.org/abs/1905.13725v4 |
PDF | https://arxiv.org/pdf/1905.13725v4.pdf |
PWC | https://paperswithcode.com/paper/are-labels-required-for-improving-adversarial |
Repo | https://github.com/deepmind/deepmind-research/tree/master/unsupervised_adversarial_training |
Framework | tf |
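A hedged sketch of the unsupervised adversarial training idea: fix pseudo-labels for unlabeled images using a standard classifier, then train the model adversarially against those pseudo-labels. A single FGSM step and a linear model stand in for the stronger attacks and architectures used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
eps = 8.0 / 255.0

x_unlabeled = torch.rand(16, 3, 32, 32)
with torch.no_grad():
    pseudo = model(x_unlabeled).argmax(dim=1)       # fixed pseudo-labels for unlabeled data

# FGSM perturbation maximizing the loss under the pseudo-label.
x_adv = x_unlabeled.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), pseudo)
grad, = torch.autograd.grad(loss, x_adv)
x_adv = (x_unlabeled + eps * grad.sign()).clamp(0, 1)

# Train the model to be robust at the perturbed points.
opt.zero_grad()
F.cross_entropy(model(x_adv), pseudo).backward()
opt.step()
```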
Automatic Health Problem Detection from Gait Videos Using Deep Neural Networks
Title | Automatic Health Problem Detection from Gait Videos Using Deep Neural Networks |
Authors | Rahil Mehrizi, Xi Peng, Shaoting Zhang, Ruisong Liao, Kang Li |
Abstract | The aim of this study is to develop an automatic system for the detection of gait-related health problems using Deep Neural Networks (DNNs). The proposed system takes a video of patients as the input and estimates their 3D body pose using a DNN based method. Our code is publicly available at https://github.com/rmehrizi/multi-view-pose-estimation. The resulting 3D body pose time series are then analyzed in a classifier, which classifies input gait videos into four different groups: healthy, Parkinson's disease, post-stroke, and orthopedic problems. The proposed system removes the requirement of complex and heavy equipment and large laboratory space, and makes the system practical for home use. Moreover, it does not need domain knowledge for feature engineering since it is capable of extracting semantic and high level features from the input data. The experimental results showed classification accuracies of 56% to 96% across the different groups. Furthermore, only 1 out of 25 healthy subjects was misclassified (false positive), and only 1 out of 70 patients was classified as a healthy subject (false negative). This study presents a starting point toward a powerful tool for automatic classification of gait disorders and can be used as a basis for future applications of Deep Learning in clinical gait analysis. Since the system uses digital cameras as the only required equipment, it can be employed in the domestic environment of patients and elderly people for consistent gait monitoring and early detection of gait alterations. |
Tasks | Feature Engineering, Pose Estimation, Time Series |
Published | 2019-06-04 |
URL | https://arxiv.org/abs/1906.01480v2 |
PDF | https://arxiv.org/pdf/1906.01480v2.pdf |
PWC | https://paperswithcode.com/paper/automatic-health-problem-detection-from-gait |
Repo | https://github.com/rmehrizi/multi-view-pose-estimation |
Framework | pytorch |
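A sketch of the second stage only: a small LSTM that classifies a 3D-pose time series into the four groups (healthy, Parkinson's, post-stroke, orthopedic). The joint count, sequence length, and architecture are assumptions; pose estimation itself comes from the linked multi-view pose repository.

```python
import torch
import torch.nn as nn

class GaitClassifier(nn.Module):
    def __init__(self, n_joints=17, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_joints * 3, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, poses):            # poses: (B, T, n_joints*3)
        _, (h, _) = self.lstm(poses)
        return self.head(h[-1])          # logits from the last hidden state

clf = GaitClassifier()
sequence = torch.randn(2, 120, 17 * 3)   # 120 frames of 17 3D joints (assumed layout)
print(clf(sequence).shape)               # torch.Size([2, 4])
```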
Knowledge-aware Graph Neural Networks with Label Smoothness Regularization for Recommender Systems
Title | Knowledge-aware Graph Neural Networks with Label Smoothness Regularization for Recommender Systems |
Authors | Hongwei Wang, Fuzheng Zhang, Mengdi Zhang, Jure Leskovec, Miao Zhao, Wenjie Li, Zhongyuan Wang |
Abstract | Knowledge graphs capture structured information and relations between a set of entities or items. As such knowledge graphs represent an attractive source of information that could help improve recommender systems. However, existing approaches in this domain rely on manual feature engineering and do not allow for an end-to-end training. Here we propose Knowledge-aware Graph Neural Networks with Label Smoothness regularization (KGNN-LS) to provide better recommendations. Conceptually, our approach computes user-specific item embeddings by first applying a trainable function that identifies important knowledge graph relationships for a given user. This way we transform the knowledge graph into a user-specific weighted graph and then apply a graph neural network to compute personalized item embeddings. To provide better inductive bias, we rely on a label smoothness assumption, which posits that adjacent items in the knowledge graph are likely to have similar user relevance labels/scores. Label smoothness provides regularization over the edge weights, and we prove that it is equivalent to a label propagation scheme on a graph. We also develop an efficient implementation that shows strong scalability with respect to the knowledge graph size. Experiments on four datasets show that our method outperforms state-of-the-art baselines. KGNN-LS also achieves strong performance in cold-start scenarios where user-item interactions are sparse. |
Tasks | Feature Engineering, Knowledge Graphs, Recommendation Systems |
Published | 2019-05-11 |
URL | https://arxiv.org/abs/1905.04413v3 |
PDF | https://arxiv.org/pdf/1905.04413v3.pdf |
PWC | https://paperswithcode.com/paper/knowledge-graph-convolutional-networks-for |
Repo | https://github.com/hwwang55/KGNN-LS |
Framework | tf |
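A heavily reduced sketch of the user-specific aggregation step: relation embeddings are scored against the user to weight edges, and neighbor item embeddings are averaged with those weights. The scoring and update functions below are placeholders, and the label-smoothness regularization is omitted entirely.

```python
import torch
import torch.nn.functional as F

n_items, n_relations, dim = 50, 6, 16
item_emb = torch.randn(n_items, dim)
rel_emb = torch.randn(n_relations, dim)
user = torch.randn(dim)

# Toy neighborhood of one item: (neighbor item id, relation id) pairs.
neighbors = torch.tensor([1, 7, 23])
relations = torch.tensor([0, 2, 5])

weights = F.softmax(rel_emb[relations] @ user, dim=0)       # user-specific edge weights
agg = (weights.unsqueeze(1) * item_emb[neighbors]).sum(0)   # weighted neighbor average
item_repr = torch.tanh(item_emb[10] + agg)                  # updated embedding for item 10
score = item_repr @ user                                    # user-item relevance score
print(score.item())
```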
The Implicit Metropolis-Hastings Algorithm
Title | The Implicit Metropolis-Hastings Algorithm |
Authors | Kirill Neklyudov, Evgenii Egorov, Dmitry Vetrov |
Abstract | Recent works propose using the discriminator of a GAN to filter out unrealistic samples of the generator. We generalize these ideas by introducing the implicit Metropolis-Hastings algorithm. For any implicit probabilistic model and a target distribution represented by a set of samples, implicit Metropolis-Hastings operates by learning a discriminator to estimate the density ratio and then generating a chain of samples. Since the approximation of the density ratio introduces an error at every step of the chain, it is crucial to analyze the stationary distribution of such a chain. For that purpose, we present a theoretical result stating that the discriminator loss upper bounds the total variation distance between the target distribution and the stationary distribution. Finally, we validate the proposed algorithm both for independent and Markov proposals on the CIFAR-10 and CelebA datasets. |
Tasks | Image Generation |
Published | 2019-06-09 |
URL | https://arxiv.org/abs/1906.03644v1 |
PDF | https://arxiv.org/pdf/1906.03644v1.pdf |
PWC | https://paperswithcode.com/paper/the-implicit-metropolis-hastings-algorithm |
Repo | https://github.com/necludov/implicit-MH |
Framework | none |
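A sketch of the accept/reject step for an independent proposal: a discriminator logit d(x) estimates the log density ratio of target to proposal, and a proposed sample is accepted with probability min(1, exp(d(x') - d(x))). The closed-form `logit_ratio` below is a stand-in; learning that ratio with a discriminator is the main work in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def logit_ratio(x):
    # Stand-in log density ratio log p(x)/q(x): target N(1,1) vs proposal N(0,1).
    return 0.5 * x ** 2 - 0.5 * (x - 1.0) ** 2

def implicit_mh(n_steps=5000):
    x = rng.standard_normal()            # initial sample from the proposal
    chain = []
    for _ in range(n_steps):
        x_new = rng.standard_normal()    # independent proposal ("generator" sample)
        accept = min(1.0, np.exp(logit_ratio(x_new) - logit_ratio(x)))
        if rng.uniform() < accept:
            x = x_new
        chain.append(x)
    return np.array(chain)

samples = implicit_mh()
print(samples.mean())                    # should approach the target mean of 1
```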
DEFT-FUNNEL: an open-source global optimization solver for constrained grey-box and black-box problems
Title | DEFT-FUNNEL: an open-source global optimization solver for constrained grey-box and black-box problems |
Authors | Phillipe R. Sampaio |
Abstract | The fast-growing need for grey-box and black-box optimization methods for constrained global optimization problems in fields such as medicine, chemistry, engineering and artificial intelligence has contributed to the design of new, efficient algorithms for finding the best possible solution. In this work, we present DEFT-FUNNEL, an open-source global optimization algorithm for general constrained grey-box and black-box problems that belongs to the class of trust-region sequential quadratic optimization algorithms. It extends the previous works by Sampaio and Toint (2015, 2016) to a global optimization solver that is able to exploit information from closed-form functions. Polynomial interpolation models are used as surrogates for the black-box functions and a clustering-based multistart strategy is applied to search for the global minima. Numerical experiments show that DEFT-FUNNEL compares favorably with other state-of-the-art methods on two sets of benchmark problems: one set containing problems where every function is a black box and another set with problems where some of the functions and their derivatives are known to the solver. The code as well as the test sets used for experiments are available at the GitHub repository http://github.com/phrsampaio/deft-funnel. |
Tasks | |
Published | 2019-12-29 |
URL | https://arxiv.org/abs/1912.12637v1 |
PDF | https://arxiv.org/pdf/1912.12637v1.pdf |
PWC | https://paperswithcode.com/paper/deft-funnel-an-open-source-global |
Repo | https://github.com/phrsampaio/deft-funnel |
Framework | none |
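An illustration of the multistart idea only: random starting points followed by a local solver on a black-box objective. DEFT-FUNNEL itself uses polynomial interpolation surrogates, a trust-region SQP method, and a clustering-based multistart, none of which are reproduced here; the objective below is a toy function.

```python
import numpy as np
from scipy.optimize import minimize

def black_box(x):
    # Toy multimodal objective standing in for an expensive black-box function.
    return (x[0] - 1.0) ** 2 + 10.0 * np.sin(3.0 * x[1]) ** 2 + x[1] ** 2

rng = np.random.default_rng(0)
starts = rng.uniform(-5.0, 5.0, size=(20, 2))                       # multistart candidates
results = [minimize(black_box, x0, method="Nelder-Mead") for x0 in starts]
best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)
```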
Adversarial Learning of Privacy-Preserving Text Representations for De-Identification of Medical Records
Title | Adversarial Learning of Privacy-Preserving Text Representations for De-Identification of Medical Records |
Authors | Max Friedrich, Arne Köhn, Gregor Wiedemann, Chris Biemann |
Abstract | De-identification is the task of detecting protected health information (PHI) in medical text. It is a critical step in sanitizing electronic health records (EHRs) to be shared for research. Automatic de-identification classifiers can significantly speed up the sanitization process. However, obtaining a large and diverse dataset to train such a classifier that works well across many types of medical text poses a challenge as privacy laws prohibit the sharing of raw medical records. We introduce a method to create privacy-preserving shareable representations of medical text (i.e. they contain no PHI) that does not require expensive manual pseudonymization. These representations can be shared between organizations to create unified datasets for training de-identification models. Our representation allows training a simple LSTM-CRF de-identification model to an F1 score of 97.4%, which is comparable to a strong baseline that exposes private information in its representation. A robust, widely available de-identification classifier based on our representation could potentially enable studies for which de-identification would otherwise be too costly. |
Tasks | |
Published | 2019-06-12 |
URL | https://arxiv.org/abs/1906.05000v1 |
PDF | https://arxiv.org/pdf/1906.05000v1.pdf |
PWC | https://paperswithcode.com/paper/adversarial-learning-of-privacy-preserving |
Repo | https://github.com/maxfriedrich/deid-training-data |
Framework | tf |
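A generic adversarial-representation sketch, not the paper's exact setup: a gradient-reversal layer trains the encoder so that a PHI tagger works on the shared representation while an adversary trying to recover the original tokens is hindered. All sizes, the vocabulary, and the adversary's objective here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, g):
        return -g                       # flip gradients flowing back into the encoder

encoder = nn.LSTM(input_size=50, hidden_size=64, batch_first=True)
tagger = nn.Linear(64, 2)               # PHI vs non-PHI tag per token
adversary = nn.Linear(64, 1000)         # tries to recover the original token id

tokens = torch.randn(8, 20, 50)         # toy embedded sentences
tags = torch.randint(0, 2, (8, 20))
token_ids = torch.randint(0, 1000, (8, 20))

reps, _ = encoder(tokens)
tag_loss = F.cross_entropy(tagger(reps).transpose(1, 2), tags)
adv_loss = F.cross_entropy(adversary(GradReverse.apply(reps)).transpose(1, 2), token_ids)
(tag_loss + adv_loss).backward()        # encoder helps tagging, hurts the adversary
```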