February 1, 2020

3339 words 16 mins read

Paper Group AWR 188

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Title One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
Authors Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
Abstract As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, present different requirements for explanations. Toward addressing these needs, we introduce AI Explainability 360 (http://aix360.mybluemix.net/), an open-source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics. Equally important, we provide a taxonomy to help entities requiring explanations to navigate the space of explanation methods, not only those in the toolkit but also in the broader literature on explainability. For data scientists and other users of the toolkit, we have implemented an extensible software architecture that organizes methods according to their place in the AI modeling pipeline. We also discuss enhancements to bring research innovations closer to consumers of explanations, ranging from simplified, more accessible versions of algorithms, to tutorials and an interactive web demo to introduce AI explainability to different audiences and application domains. Together, our toolkit and taxonomy can help identify gaps where more explainability methods are needed and provide a platform to incorporate them as they are developed.
Tasks
Published 2019-09-06
URL https://arxiv.org/abs/1909.03012v2
PDF https://arxiv.org/pdf/1909.03012v2.pdf
PWC https://paperswithcode.com/paper/one-explanation-does-not-fit-all-a-toolkit
Repo https://github.com/IBM/AIX360
Framework pytorch
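
A minimal sketch of the taxonomy idea behind AI Explainability 360: route a stakeholder's need to a family of explanation methods. The keys, categories, and method names below are illustrative assumptions, not the toolkit's actual API; see the AIX360 repo for the real interfaces.

```python
# Hypothetical taxonomy lookup (illustrative only; not the AIX360 API).
TAXONOMY = {
    # (what is explained, scope of explanation) -> candidate method families
    ("data", "global"): ["disentangled representations", "prototypes"],
    ("model", "local"): ["contrastive explanations", "feature attributions"],
    ("model", "global"): ["interpretable surrogates", "rule lists"],
}

def suggest_methods(target: str, scope: str) -> list[str]:
    """Return candidate explanation families for a stakeholder's need."""
    return TAXONOMY.get((target, scope), ["no match; consult the broader literature"])

print(suggest_methods("model", "local"))
```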

A CNN-RNN Framework for Crop Yield Prediction

Title A CNN-RNN Framework for Crop Yield Prediction
Authors Saeed Khaki, Lizhi Wang, Sotirios V. Archontoulis
Abstract Crop yield prediction is extremely challenging due to its dependence on multiple factors such as crop genotype, environmental factors, management practices, and their interactions. This paper presents a deep learning framework using convolutional neural networks (CNN) and recurrent neural networks (RNN) for crop yield prediction based on environmental data and management practices. The proposed CNN-RNN model, along with other popular methods such as random forest (RF), deep fully-connected neural networks (DFNN), and LASSO, was used to forecast corn and soybean yield across the entire Corn Belt (including 13 states) in the United States for years 2016, 2017, and 2018 using historical data. The new model achieved root-mean-square errors (RMSE) of 9% and 8% of the respective average yields, substantially outperforming all other methods that were tested. The CNN-RNN model has three salient features that make it a potentially useful method for other crop yield prediction studies. (1) The CNN-RNN model was designed to capture the time dependencies of environmental factors and the genetic improvement of seeds over time without having their genotype information. (2) The model demonstrated the capability to generalize yield prediction to untested environments without a significant drop in prediction accuracy. (3) Coupled with the backpropagation method, the model could reveal the extent to which weather conditions, the accuracy of weather predictions, soil conditions, and management practices were able to explain the variation in crop yields.
Tasks
Published 2019-11-20
URL https://arxiv.org/abs/1911.09045v2
PDF https://arxiv.org/pdf/1911.09045v2.pdf
PWC https://paperswithcode.com/paper/a-cnn-rnn-framework-for-crop-yield-prediction
Repo https://github.com/saeedkhaki92/Yield-Prediction-DNN
Framework tf
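
A hedged sketch of the CNN-RNN pattern the abstract describes: a 1-D CNN summarizes within-year weekly environmental sequences, and an LSTM models dependencies across years. All layer sizes, feature counts, and the pooling choice are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CNNRNNYield(nn.Module):
    """Toy CNN-RNN yield predictor: CNN per year, RNN across years."""
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # one summary vector per year
        )
        self.rnn = nn.LSTM(input_size=16, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, years, features, weeks)
        b, y, f, w = x.shape
        z = self.cnn(x.view(b * y, f, w)).squeeze(-1).view(b, y, -1)
        out, _ = self.rnn(z)
        return self.head(out[:, -1])            # predicted yield for final year

model = CNNRNNYield()
pred = model(torch.randn(4, 5, 6, 52))          # 4 locations, 5 years of weekly data
```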

Crop Lodging Prediction from UAV-Acquired Images of Wheat and Canola using a DCNN Augmented with Handcrafted Texture Features

Title Crop Lodging Prediction from UAV-Acquired Images of Wheat and Canola using a DCNN Augmented with Handcrafted Texture Features
Authors Sara Mardanisamani, Farhad Maleki, Sara Hosseinzadeh Kassani, Sajith Rajapaksa, Hema Duddu, Menglu Wang, Steve Shirtliffe, Seungbum Ryu, Anique Josuttes, Ti Zhang, Sally Vail, Curtis Pozniak, Isobel Parkin, Ian Stavness, Mark Eramian
Abstract Lodging, the permanent bending over of food crops, leads to poor plant growth and development. Consequently, lodging reduces crop quality, lowers crop yield, and makes harvesting difficult. Plant breeders routinely evaluate several thousand breeding lines, and therefore automatic lodging detection and prediction is of great value as an aid in selection. In this paper, we propose a deep convolutional neural network (DCNN) architecture for lodging classification using five-spectral-channel orthomosaic images from canola and wheat breeding trials. Also, using transfer learning, we trained 10 lodging detection models using well-established deep convolutional neural network architectures. Our proposed model outperforms the state-of-the-art lodging detection methods in the literature that use only handcrafted features. In comparison to 10 DCNN lodging detection models, our proposed model achieves comparable results while having a substantially lower number of parameters. This makes the proposed model suitable for applications such as real-time classification using inexpensive hardware for high-throughput phenotyping pipelines. The GitHub repository at https://github.com/FarhadMaleki/LodgedNet contains code and models.
Tasks Transfer Learning
Published 2019-06-18
URL https://arxiv.org/abs/1906.07771v1
PDF https://arxiv.org/pdf/1906.07771v1.pdf
PWC https://paperswithcode.com/paper/crop-lodging-prediction-from-uav-acquired
Repo https://github.com/FarhadMaleki/LodgedNet
Framework pytorch
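
A minimal sketch of the paper's central idea: concatenate a learned CNN embedding with a handcrafted texture descriptor (e.g., GLCM or LBP statistics computed offline) before the lodged/non-lodged classifier. Channel counts, feature sizes, and the backbone are assumptions, not LodgedNet's actual layers.

```python
import torch
import torch.nn as nn

class DCNNWithTexture(nn.Module):
    """Toy two-branch classifier: CNN features + handcrafted texture features."""
    def __init__(self, in_channels=5, n_texture=20):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64 + n_texture, 2)  # lodged vs. not lodged

    def forward(self, image, texture):
        # Fuse the learned embedding with the precomputed texture vector.
        return self.classifier(torch.cat([self.backbone(image), texture], dim=1))

logits = DCNNWithTexture()(torch.randn(2, 5, 128, 128), torch.randn(2, 20))
```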

Invertible Network for Classification and Biomarker Selection for ASD

Title Invertible Network for Classification and Biomarker Selection for ASD
Authors Juntang Zhuang, Nicha C. Dvornek, Xiaoxiao Li, Pamela Ventola, James S. Duncan
Abstract Determining biomarkers for autism spectrum disorder (ASD) is crucial to understanding its mechanisms. Recently, deep learning methods have achieved success in the classification task of ASD using fMRI data. However, due to the black-box nature of most deep learning models, it is hard to perform biomarker selection and interpret model decisions. The recently proposed invertible networks can accurately reconstruct the input from its output, and have the potential to unravel the black-box representation. Therefore, we propose a novel method to classify ASD and identify biomarkers for ASD using the connectivity matrix calculated from fMRI as the input. Specifically, with invertible networks, we explicitly determine the decision boundary and the projection of data points onto the boundary. As with linear classifiers, the difference between a point and its projection onto the decision boundary can be viewed as the explanation. We then define the importance as the explanation weighted by the gradient of the prediction with respect to the input, and identify biomarkers based on this importance measure. We perform a regression task to further validate our biomarker selection: compared to using all edges in the connectivity matrix, using the top 10% most important edges we achieve a lower regression error on 6 different severity scores. Our experiments show that the invertible network is both effective at ASD classification and interpretable, allowing for discovery of reliable biomarkers.
Tasks
Published 2019-07-23
URL https://arxiv.org/abs/1907.09729v1
PDF https://arxiv.org/pdf/1907.09729v1.pdf
PWC https://paperswithcode.com/paper/invertible-network-for-classification-and
Repo https://github.com/juntang-zhuang/explain_invertible
Framework pytorch
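
A sketch of the importance measure the abstract defines: the difference between a point and its projection onto the decision boundary, weighted elementwise by the gradient of the prediction with respect to the input. Here `project_to_boundary` is a stand-in for the invertible-network projection, which the paper computes exactly; the toy usage uses a linear classifier where the projection has a closed form.

```python
import torch

def importance(model, x, project_to_boundary):
    """Gradient-weighted explanation: (x - proj(x)) * d(model)/dx."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    with torch.no_grad():
        explanation = x - project_to_boundary(x)   # direction to the boundary
        return explanation * x.grad                # weight by input gradient

# Toy usage with a linear classifier, where the projection is closed-form.
w, b = torch.randn(10), torch.tensor(0.5)
model = lambda x: x @ w + b
proj = lambda x: x - ((x @ w + b) / (w @ w)) * w   # orthogonal projection
scores = importance(model, torch.randn(10), proj)
```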

Tree Recognition APP of Mount Tai Based on CNN

Title Tree Recognition APP of Mount Tai Based on CNN
Authors Zhihao Cao, Xinxin Zhang
Abstract Mount Tai enjoys abundant sunshine, plentiful rainfall, and favorable climatic conditions, which support dense vegetation with many kinds of trees. To make it easier for tourists to learn about each tree and experience the culture of Mount Tai, this paper develops an app for tree recognition on Mount Tai based on a convolutional neural network (CNN), taking advantage of the CNN's efficient image recognition ability and the portability of Android mobile phones. The app can accurately identify several common trees on Mount Tai and gives a brief introduction for tourists.
Tasks
Published 2019-01-23
URL http://arxiv.org/abs/1901.11388v1
PDF http://arxiv.org/pdf/1901.11388v1.pdf
PWC https://paperswithcode.com/paper/tree-recognition-app-of-mount-tai-based-on
Repo https://github.com/gg1036419175/Tree-Recognition-APP-of-MountTai
Framework none

Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations

Title Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations
Authors Fenglin Liu, Yuanxin Liu, Xuancheng Ren, Xiaodong He, Xu Sun
Abstract In vision-and-language grounding problems, fine-grained representations of the image are considered to be of paramount importance. Most of the current systems incorporate visual features and textual concepts as a sketch of an image. However, plainly inferred representations are usually undesirable in that they are composed of separate components, the relations of which are elusive. In this work, we aim at representing an image with a set of integrated visual regions and corresponding textual concepts, reflecting certain semantics. To this end, we build the Mutual Iterative Attention (MIA) module, which integrates correlated visual features and textual concepts, respectively, by aligning the two modalities. We evaluate the proposed approach on two representative vision-and-language grounding tasks, i.e., image captioning and visual question answering. In both tasks, the semantic-grounded image representations consistently boost the performance of the baseline models across all metrics. The results demonstrate that our approach is effective and generalizes well to a wide range of models for image-related applications. (The code is available at https://github.com/fenglinliu98/MIA)
Tasks Image Captioning, Question Answering, Text Generation, Visual Question Answering
Published 2019-05-15
URL https://arxiv.org/abs/1905.06139v3
PDF https://arxiv.org/pdf/1905.06139v3.pdf
PWC https://paperswithcode.com/paper/aligning-visual-regions-and-textual-concepts
Repo https://github.com/fenglinliu98/MIA
Framework pytorch
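
A schematic reading of mutual iterative attention: visual regions attend to textual concepts and vice versa for a few refinement rounds. The dimensions, head count, residual connections, and iteration count are assumptions, not the authors' exact MIA module.

```python
import torch
import torch.nn as nn

class MutualIterativeAttention(nn.Module):
    """Toy MIA: alternate cross-attention between regions and concepts."""
    def __init__(self, dim=256, heads=4, iters=2):
        super().__init__()
        self.v2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t2v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.iters = iters

    def forward(self, regions, concepts):
        for _ in range(self.iters):
            # Regions query concepts, then concepts query the updated regions.
            regions = regions + self.v2t(regions, concepts, concepts)[0]
            concepts = concepts + self.t2v(concepts, regions, regions)[0]
        return regions, concepts

r, c = MutualIterativeAttention()(torch.randn(2, 36, 256), torch.randn(2, 10, 256))
```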

Domain-agnostic Question-Answering with Adversarial Training

Title Domain-agnostic Question-Answering with Adversarial Training
Authors Seanie Lee, Donggyu Kim, Jangwon Park
Abstract Adapting models to a new domain without fine-tuning is a challenging problem in deep learning. In this paper, we utilize an adversarial training framework for domain generalization in the question answering (QA) task. Our model consists of a conventional QA model and a discriminator. Training is performed in an adversarial manner, where the two models constantly compete, so that the QA model can learn domain-invariant features. We apply this approach to the MRQA Shared Task 2019 and show better performance compared to the baseline model.
Tasks Domain Generalization, Question Answering
Published 2019-10-21
URL https://arxiv.org/abs/1910.09342v2
PDF https://arxiv.org/pdf/1910.09342v2.pdf
PWC https://paperswithcode.com/paper/domain-agnostic-question-answering-with
Repo https://github.com/seanie12/mrqa
Framework pytorch
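
A sketch of the adversarial competition the abstract describes, implemented here with a gradient reversal layer, which is one common way to make an encoder and a domain discriminator compete. The encoder, feature sizes, and domain count are placeholders, not the authors' exact setup.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad  # reversed gradients flow back into the encoder

encoder = nn.Linear(300, 128)                     # stand-in QA encoder
discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 6))

features = encoder(torch.randn(8, 300))
domain_logits = discriminator(GradReverse.apply(features))
loss = nn.CrossEntropyLoss()(domain_logits, torch.randint(0, 6, (8,)))
loss.backward()  # the encoder is pushed to hide domain information
```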

Orientation Aware Object Detection with Application to Firearms

Title Orientation Aware Object Detection with Application to Firearms
Authors Javed Iqbal, Muhammad Akhtar Munir, Arif Mahmood, Afsheen Rafaqat Ali, Mohsen Ali
Abstract Automatic detection of firearms is important for enhancing the security and safety of people; however, it is a challenging task owing to the wide variations in the shape, size, and appearance of firearms. To handle these challenges we propose an Orientation Aware Object Detector (OAOD), which achieves improved firearm detection and localization performance. The proposed detector has two phases. In Phase 1 it predicts the orientation of the object, which is used to rotate the object proposal. Maximum-area rectangles are cropped from the rotated object proposals, which are again classified and localized in Phase 2 of the algorithm. The oriented object proposals are mapped back to the original coordinates, resulting in oriented bounding boxes that localize the weapons much better than axis-aligned bounding boxes. Being orientation aware, our non-maximum suppression is able to avoid multiple detections of the same object and can better resolve objects that lie in close proximity to each other. This two-phase system enables OAOD to predict oriented bounding boxes while being trained only on the axis-aligned boxes in the ground truth. In order to train object detectors for firearm detection, a dataset consisting of around eleven thousand firearm images was collected from the internet and manually annotated. The proposed ITU Firearm (ITUF) dataset contains a wide range of guns and rifles. The OAOD algorithm is evaluated on the ITUF dataset and compared with current state-of-the-art object detectors. Our experiments demonstrate the excellent performance of the proposed detector for the task of firearm detection.
Tasks Object Detection
Published 2019-04-22
URL http://arxiv.org/abs/1904.10032v1
PDF http://arxiv.org/pdf/1904.10032v1.pdf
PWC https://paperswithcode.com/paper/orientation-aware-object-detection-with
Repo https://github.com/makhtar17004/orientation-aware-firearm-detection
Framework caffe2
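
A geometric sketch of the Phase 1 to Phase 2 handoff the abstract describes: rotate an axis-aligned proposal by the predicted orientation and map its corners back to image coordinates as an oriented box. The (cx, cy, w, h) box format and the angle convention are illustrative assumptions.

```python
import numpy as np

def oriented_box(cx, cy, w, h, angle_deg):
    """Rotate an axis-aligned box about its center; return 4 oriented corners."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    corners = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0
    return corners @ rot.T + np.array([cx, cy])   # back in image coordinates

print(oriented_box(100, 80, 60, 20, angle_deg=30))
```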

Machine learning-guided synthesis of advanced inorganic materials

Title Machine learning-guided synthesis of advanced inorganic materials
Authors Bijun Tang, Yuhao Lu, Jiadong Zhou, Han Wang, Prafful Golani, Manzhang Xu, Quan Xu, Cuntai Guan, Zheng Liu
Abstract Synthesis of advanced inorganic materials with a minimum number of trials is of paramount importance for accelerating inorganic materials development. The enormous complexity involved in existing multi-variable synthesis methods leads to high uncertainty, numerous trials, and exorbitant cost. Recently, machine learning (ML) has demonstrated tremendous potential for materials research. Here, we report the application of ML to optimize and accelerate the material synthesis process in two representative multi-variable systems. A classification ML model for chemical vapor deposition-grown MoS2 is established, capable of optimizing the synthesis conditions to achieve a higher success rate, while a regression model is constructed for hydrothermally synthesized carbon quantum dots to enhance process-related properties such as the photoluminescence quantum yield. A progressive adaptive model is further developed, aiming to involve ML at the beginning stage of new material synthesis. Optimization of the experimental outcome with a minimal number of trials can be achieved with effective feedback loops. This work serves as a proof of concept revealing the feasibility and remarkable capability of ML to facilitate the synthesis of inorganic materials, and opens a new window for accelerating materials development.
Tasks
Published 2019-05-10
URL https://arxiv.org/abs/1905.03938v1
PDF https://arxiv.org/pdf/1905.03938v1.pdf
PWC https://paperswithcode.com/paper/machine-learning-guided-synthesis-of-advanced
Repo https://github.com/MSwML/ML-guided-material-synthesis
Framework none
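
A hedged sketch of the classification setting in the abstract: predict the success of CVD growth from synthesis conditions. The feature names, the synthetic data, and the choice of a random forest are illustrative only; the paper's actual model and dataset may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: 4 hypothetical condition features
# (e.g., temperature, pressure, gas flow rate, growth time).
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)   # fabricated success labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict_proba([[0.7, 0.6, 0.2, 0.4]]))  # estimated success probability
```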

Recurrent Back-Projection Network for Video Super-Resolution

Title Recurrent Back-Projection Network for Video Super-Resolution
Authors Muhammad Haris, Greg Shakhnarovich, Norimichi Ukita
Abstract We propose a novel architecture for the problem of video super-resolution. We integrate spatial and temporal contexts from continuous video frames using a recurrent encoder-decoder module that fuses multi-frame information with the more traditional, single-frame super-resolution path for the target frame. In contrast to most prior work, where frames are pooled together by stacking or warping, our model, the Recurrent Back-Projection Network (RBPN), treats each context frame as a separate source of information. These sources are combined in an iterative refinement framework inspired by the idea of back-projection in multiple-image super-resolution. This is aided by explicitly representing estimated inter-frame motion with respect to the target, rather than explicitly aligning frames. We propose a new video super-resolution benchmark, allowing evaluation at a larger scale and considering videos in different motion regimes. Experimental results demonstrate that our RBPN is superior to existing methods on several datasets.
Tasks Image Super-Resolution, Super-Resolution, Video Super-Resolution
Published 2019-03-25
URL http://arxiv.org/abs/1903.10128v1
PDF http://arxiv.org/pdf/1903.10128v1.pdf
PWC https://paperswithcode.com/paper/recurrent-back-projection-network-for-video
Repo https://github.com/alterzero/RBPN-PyTorch
Framework pytorch
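
A schematic of the recurrent refinement idea: the target frame's features are refined once per context frame instead of stacking all frames at once. The tiny encoder and refiner below are placeholders; RBPN's actual modules (motion representation, back-projection blocks, upsampling head) are omitted.

```python
import torch
import torch.nn as nn

encode = nn.Conv2d(3, 16, 3, padding=1)           # stand-in single-frame SR path
refine = nn.Conv2d(16 + 3, 16, 3, padding=1)      # fuses one context frame at a time

target = torch.randn(1, 3, 64, 64)
context = [torch.randn(1, 3, 64, 64) for _ in range(4)]  # neighboring frames

state = encode(target)
for frame in context:                              # one refinement per source frame
    state = torch.relu(refine(torch.cat([state, frame], dim=1)))
# `state` would then feed an upsampling head to produce the high-resolution frame.
```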

Neural Graph Evolution: Towards Efficient Automatic Robot Design

Title Neural Graph Evolution: Towards Efficient Automatic Robot Design
Authors Tingwu Wang, Yuhao Zhou, Sanja Fidler, Jimmy Ba
Abstract Despite the recent successes in robotic locomotion control, the design of robots relies heavily on human engineering. Automatic robot design has been a long-studied subject, but recent progress has been slowed by the large combinatorial search space and the difficulty of evaluating the found candidates. To address these two challenges, we formulate automatic robot design as a graph search problem and perform evolutionary search in graph space. We propose Neural Graph Evolution (NGE), which performs selection on current candidates and evolves new ones iteratively. Different from previous approaches, NGE uses graph neural networks to parameterize the control policies, which reduces the evaluation cost of new candidates with the help of skill transfer from previously evaluated designs. In addition, NGE applies Graph Mutation with Uncertainty (GM-UC) by incorporating model uncertainty, which reduces the search space by balancing exploration and exploitation. We show that NGE significantly outperforms previous methods by an order of magnitude. As shown in experiments, NGE is the first algorithm that can automatically discover kinematically preferred robotic graph structures, such as a fish with two symmetrical flat side-fins and a tail, or a cheetah with athletic front and back legs. Instead of using thousands of cores for weeks, NGE efficiently solves the search problem within a day on a single 64-CPU-core Amazon EC2 machine.
Tasks
Published 2019-06-12
URL https://arxiv.org/abs/1906.05370v1
PDF https://arxiv.org/pdf/1906.05370v1.pdf
PWC https://paperswithcode.com/paper/neural-graph-evolution-towards-efficient
Repo https://github.com/WilsonWangTHU/neural_graph_evolution
Framework none
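
A skeleton of the evolutionary loop the abstract describes: keep a population of robot graphs, score each candidate, select the fittest, and mutate. Here `evaluate` and `mutate_graph` are stand-ins for NGE's GNN-policy evaluation and graph mutation operators; the selection ratio is an arbitrary assumption.

```python
import random

def evolve(population, evaluate, mutate_graph, generations=10, keep=0.5):
    """Generic (selection, mutation) loop over a population of candidate graphs."""
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[: max(1, int(len(scored) * keep))]   # survivors
        children = [mutate_graph(random.choice(parents))      # fill the population
                    for _ in range(len(population) - len(parents))]
        population = parents + children
    return max(population, key=evaluate)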

Tabula nearly rasa: Probing the Linguistic Knowledge of Character-Level Neural Language Models Trained on Unsegmented Text

Title Tabula nearly rasa: Probing the Linguistic Knowledge of Character-Level Neural Language Models Trained on Unsegmented Text
Authors Michael Hahn, Marco Baroni
Abstract Recurrent neural networks (RNNs) have reached striking performance in many natural language processing tasks. This has renewed interest in whether these generic sequence processing devices are inducing genuine linguistic knowledge. Nearly all current analytical studies, however, initialize the RNNs with a vocabulary of known words, and feed them tokenized input during training. We present a multi-lingual study of the linguistic knowledge encoded in RNNs trained as character-level language models, on input data with word boundaries removed. These networks face a tougher and more cognitively realistic task, having to discover any useful linguistic unit from scratch based on input statistics. The results show that our “near tabula rasa” RNNs are mostly able to solve morphological, syntactic and semantic tasks that intuitively presuppose word-level knowledge, and indeed they learned, to some extent, to track word boundaries. Our study opens the door to speculations about the necessity of an explicit, rigid word lexicon in language learning and usage.
Tasks
Published 2019-06-17
URL https://arxiv.org/abs/1906.07285v1
PDF https://arxiv.org/pdf/1906.07285v1.pdf
PWC https://paperswithcode.com/paper/tabula-nearly-rasa-probing-the-linguistic
Repo https://github.com/m-hahn/tabula-rasa-rnns
Framework pytorch
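
A sketch of the experimental setup in the abstract: strip word boundaries from the corpus and train a character-level LSTM language model on the unsegmented result. The model sizes are illustrative, and the paper's multi-lingual training and evaluation probes are omitted.

```python
import torch
import torch.nn as nn

text = "the cat sat on the mat".replace(" ", "")   # remove word boundaries
vocab = sorted(set(text))
idx = torch.tensor([vocab.index(c) for c in text])

emb = nn.Embedding(len(vocab), 16)
lstm = nn.LSTM(16, 64, batch_first=True)
head = nn.Linear(64, len(vocab))

# Next-character prediction over the unsegmented stream.
out, _ = lstm(emb(idx[:-1]).unsqueeze(0))
loss = nn.CrossEntropyLoss()(head(out).squeeze(0), idx[1:])
loss.backward()
```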

DPSOM: Deep Probabilistic Clustering with Self-Organizing Maps

Title DPSOM: Deep Probabilistic Clustering with Self-Organizing Maps
Authors Laura Manduchi, Matthias Hüser, Gunnar Rätsch, Vincent Fortuin
Abstract Generating visualizations and interpretations from high-dimensional data is a common problem in many applications. Two key approaches for tackling this problem are clustering and representation learning. On the one hand, there are very performant deep clustering models, such as DEC and IDEC. On the other hand, there are interpretable representation learning techniques, often relying on latent topological structures such as self-organizing maps. However, current methods do not yet successfully combine these two approaches. We present a novel way to fit self-organizing maps with probabilistic cluster assignments, PSOM, a new deep architecture for probabilistic clustering, DPSOM, and its extension to time series data, T-DPSOM. We show that they achieve superior clustering performance compared to current deep clustering methods on static MNIST/Fashion-MNIST data as well as medical time series, while also inducing an interpretable representation. Moreover, on medical time series, T-DPSOM successfully predicts future trajectories in the original data space.
Tasks Representation Learning, Time Series
Published 2019-10-03
URL https://arxiv.org/abs/1910.01590v2
PDF https://arxiv.org/pdf/1910.01590v2.pdf
PWC https://paperswithcode.com/paper/variational-psom-deep-probabilistic
Repo https://github.com/ratschlab/variational-psom
Framework none
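
A sketch of probabilistic cluster assignment over SOM nodes in the DEC style the abstract alludes to: a Student-t kernel turns distances from an embedding to each SOM centroid into soft assignment probabilities. The SOM neighborhood loss and the T-DPSOM time-series extension are omitted, and the kernel choice is an assumption about the method.

```python
import torch

def soft_assignments(z, centroids, alpha=1.0):
    """Student-t soft assignments of embeddings z to SOM centroids."""
    # z: (batch, dim), centroids: (k, dim)
    d2 = torch.cdist(z, centroids) ** 2
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)          # each row sums to 1

q = soft_assignments(torch.randn(8, 10), torch.randn(64, 10))  # e.g., an 8x8 SOM grid
```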

ILP-M Conv: Optimize Convolution Algorithm for Single-Image Convolution Neural Network Inference on Mobile GPUs

Title ILP-M Conv: Optimize Convolution Algorithm for Single-Image Convolution Neural Network Inference on Mobile GPUs
Authors Zhuoran Ji
Abstract Convolutional neural networks are widely used in mobile applications. However, GPU convolution algorithms are designed for mini-batch neural network training; the single-image convolutional neural network inference algorithm on mobile GPUs is not well studied. After discussing the usage differences and examining the existing convolution algorithms, we propose the HNTMP convolution algorithm. The HNTMP convolution algorithm achieves a $14.6\times$ speedup over the most popular \textit{im2col} convolution algorithm, and a $2.30\times$ speedup over the fastest existing convolution algorithm (direct convolution) as far as we know.
Tasks
Published 2019-09-06
URL https://arxiv.org/abs/1909.02765v2
PDF https://arxiv.org/pdf/1909.02765v2.pdf
PWC https://paperswithcode.com/paper/hnmtp-conv-optimize-convolution-algorithm-for
Repo https://github.com/jizhuoran/sj_convolution
Framework none
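
For context, a minimal im2col, the baseline the abstract compares against: each receptive field is unfolded into a column so that convolution becomes one matrix multiply. This naive version makes it easy to see why the layout is memory-heavy for a single image; it is not the paper's algorithm.

```python
import numpy as np

def im2col(x, k):
    """Unfold k x k receptive fields of x (C, H, W) into columns."""
    C, H, W = x.shape
    cols = [x[:, i:i + k, j:j + k].reshape(-1)
            for i in range(H - k + 1) for j in range(W - k + 1)]
    return np.stack(cols, axis=1)     # (C*k*k, number of output positions)

x = np.random.rand(3, 8, 8)
w = np.random.rand(4, 3 * 3 * 3)      # 4 filters, flattened to rows
out = w @ im2col(x, 3)                # (4, 36); reshape to (4, 6, 6) feature maps
```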

Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks

Title Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks
Authors Jason W. Wei, Laura J. Tafe, Yevgeniy A. Linnik, Louis J. Vaickus, Naofumi Tomita, Saeed Hassanpour
Abstract Classification of histologic patterns in lung adenocarcinoma is critical for determining tumor grade and treatment for patients. However, this task is often challenging due to the heterogeneous nature of lung adenocarcinoma and the subjective criteria for evaluation. In this study, we propose a deep learning model that automatically classifies the histologic patterns of lung adenocarcinoma on surgical resection slides. Our model uses a convolutional neural network to identify regions of neoplastic cells, then aggregates those classifications to infer predominant and minor histologic patterns for any given whole-slide image. We evaluated our model on an independent set of 143 whole-slide images. It achieved a kappa score of 0.525 and an agreement of 66.6% with three pathologists for classifying the predominant patterns, slightly higher than the inter-pathologist kappa score of 0.485 and agreement of 62.7% on this test set. All evaluation metrics for our model and the three pathologists were within 95% confidence intervals of agreement. If confirmed in clinical practice, our model can assist pathologists in improving classification of lung adenocarcinoma patterns by automatically pre-screening and highlighting cancerous regions prior to review. Our approach can be generalized to any whole-slide image classification task, and code is made publicly available at https://github.com/BMIRDS/deepslide.
Tasks Image Classification, Lung Cancer Diagnosis
Published 2019-01-31
URL http://arxiv.org/abs/1901.11489v1
PDF http://arxiv.org/pdf/1901.11489v1.pdf
PWC https://paperswithcode.com/paper/pathologist-level-classification-of
Repo https://github.com/BMIRDS/deepslide
Framework pytorch
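
A sketch of the slide-level aggregation step the abstract describes: classify patches, then infer predominant and minor histologic patterns from class frequencies. The minor-pattern threshold and the label names are illustrative assumptions, not the paper's tuned values; see the deepslide repo for the actual pipeline.

```python
from collections import Counter

def slide_patterns(patch_predictions, minor_threshold=0.1):
    """Aggregate per-patch labels into predominant and minor slide patterns."""
    counts = Counter(patch_predictions)
    total = sum(counts.values())
    ranked = counts.most_common()
    predominant = ranked[0][0]
    minor = [label for label, n in ranked[1:] if n / total >= minor_threshold]
    return predominant, minor

print(slide_patterns(["acinar"] * 60 + ["lepidic"] * 25 + ["solid"] * 15))
```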