Paper Group ANR 294
Information Condensing Active Learning. Site2Vec: a reference frame invariant algorithm for vector embedding of protein-ligand binding sites. Communication-Efficient Edge AI: Algorithms and Systems. The Synthesizability of Molecules Proposed by Generative Models. Optimizing Streaming Parallelism on Heterogeneous Many-Core Architectures: A Machine Learning Based Approach. …
Information Condensing Active Learning
Title | Information Condensing Active Learning |
Authors | Siddhartha Jain, Ge Liu, David Gifford |
Abstract | We introduce Information Condensing Active Learning (ICAL), a batch mode model agnostic Active Learning (AL) method targeted at Deep Bayesian Active Learning that focuses on acquiring labels for points which have as much information as possible about the still unacquired points. ICAL uses the Hilbert Schmidt Independence Criterion (HSIC) to measure the strength of the dependency between a candidate batch of points and the unlabeled set. We develop key optimizations that allow us to scale our method to large unlabeled sets. We show significant improvements in terms of model accuracy and negative log likelihood (NLL) on several image datasets compared to state of the art batch mode AL methods for deep learning. |
Tasks | Active Learning |
Published | 2020-02-18 |
URL | https://arxiv.org/abs/2002.07916v2 |
https://arxiv.org/pdf/2002.07916v2.pdf | |
PWC | https://paperswithcode.com/paper/information-condensing-active-learning |
Repo | |
Framework | |
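For readers who want a concrete handle on the HSIC criterion the abstract refers to, the following is a minimal sketch of scoring a candidate batch by its HSIC dependence with the unlabeled pool. The RBF kernels, the biased estimator, and the use of MC-dropout draws as paired samples are illustrative assumptions, not taken from the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, sigma=None):
    """Pairwise RBF kernel matrix; bandwidth defaults to the median heuristic."""
    sq = np.sum(X**2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * X @ X.T
    if sigma is None:
        sigma = np.sqrt(0.5 * np.median(d2[d2 > 0]))
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(X, Y):
    """Biased HSIC estimator: HSIC = tr(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_kernel(X), rbf_kernel(Y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# T MC-dropout draws of class probabilities for a candidate batch (b points)
# and for the unlabeled pool (m points); the draws act as paired samples.
rng = np.random.default_rng(0)
T, b, m, C = 50, 5, 200, 10
pred_batch = rng.dirichlet(np.ones(C), size=(T, b)).reshape(T, -1)
pred_pool  = rng.dirichlet(np.ones(C), size=(T, m)).reshape(T, -1)
score = hsic(pred_batch, pred_pool)   # higher => batch more informative about the pool
print(f"HSIC batch score: {score:.4f}")
```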
Site2Vec: a reference frame invariant algorithm for vector embedding of protein-ligand binding sites
Title | Site2Vec: a reference frame invariant algorithm for vector embedding of protein-ligand binding sites |
Authors | Arnab Bhadra, Kalidas Y |
Abstract | Protein-ligand interaction is one of the fundamental molecular interactions of living systems. Proteins are the building blocks of function in life at the molecular level. Ligands are small molecules that interact with proteins at specific regions on their surface called binding sites. Understanding the physicochemical properties of ligand-binding sites is very important in drug discovery as well as in understanding biological systems. The protein-ligand binding site plays an essential role in the interaction between protein and ligand that is necessary for any living system to survive. Comparing similarities between binding sites has been one of the main focus areas in bioinformatics and drug discovery over the last decade, and several computational methods have been developed to compare binding sites. Binding site comparison requires fast and efficient methods, as the amount of three-dimensional protein structural information is increasing rapidly. In this study, we report the development of Site2Vec, a novel machine learning-based method for reference frame invariant, ligand-independent vector embedding of the 3D structure of a protein-ligand binding site. Each binding site is represented as a $d$-dimensional vector. The 3D structures of binding sites are mapped to vector form such that similar binding sites hash into proximal localities and dissimilar sites fall across diverse regions. A sensitivity analysis of rotation and perturbation, together with a validation study, is performed to understand the behavior of the method. Benchmarking exercises have been carried out against state-of-the-art binding site comparison methods on state-of-the-art datasets. The exercises validate our proposed method and demonstrate that it is rotationally invariant and can handle natural perturbations expected in biological systems. |
Tasks | Drug Discovery |
Published | 2020-03-18 |
URL | https://arxiv.org/abs/2003.08149v1 |
https://arxiv.org/pdf/2003.08149v1.pdf | |
PWC | https://paperswithcode.com/paper/site2vec-a-reference-frame-invariant |
Repo | |
Framework | |
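The abstract's central property is reference-frame invariance of the binding-site vector. The sketch below illustrates that property with a generic descriptor (a histogram of pairwise atomic distances) that is unchanged by rotation and translation; it is not Site2Vec's actual featurization, which the paper describes in detail.

```python
import numpy as np

def pairwise_distance_histogram(coords, bins=32, r_max=20.0):
    """d-dimensional descriptor that is unchanged by rotation and translation."""
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.sqrt((diffs**2).sum(-1))
    iu = np.triu_indices(len(coords), k=1)
    hist, _ = np.histogram(dists[iu], bins=bins, range=(0.0, r_max), density=True)
    return hist

rng = np.random.default_rng(1)
site = rng.normal(size=(40, 3))                      # mock binding-site coordinates
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
rotated = site @ R.T + np.array([5.0, -2.0, 1.0])    # rotate + translate the site
v1 = pairwise_distance_histogram(site)
v2 = pairwise_distance_histogram(rotated)
print("max descriptor difference:", np.abs(v1 - v2).max())   # ~0 (invariant)
```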
Communication-Efficient Edge AI: Algorithms and Systems
Title | Communication-Efficient Edge AI: Algorithms and Systems |
Authors | Yuanming Shi, Kai Yang, Tao Jiang, Jun Zhang, Khaled B. Letaief |
Abstract | Artificial intelligence (AI) has achieved remarkable breakthroughs in a wide range of fields, from speech processing and image classification to drug discovery. This is driven by the explosive growth of data, advances in machine learning (especially deep learning), and easy access to vastly powerful computing resources. In particular, the wide-scale deployment of edge devices (e.g., IoT devices) generates an unprecedented scale of data, which provides the opportunity to derive accurate models and develop various intelligent applications at the network edge. However, such enormous data cannot all be sent from end devices to the cloud for processing, due to varying channel quality, traffic congestion and/or privacy concerns. By pushing the inference and training processes of AI models to edge nodes, edge AI has emerged as a promising alternative. AI at the edge requires close cooperation among edge devices, such as smart phones and smart vehicles, and edge servers at wireless access points and base stations, which, however, results in heavy communication overhead. In this paper, we present a comprehensive survey of recent developments in various techniques for overcoming these communication challenges. Specifically, we first identify the key communication challenges in edge AI systems. We then introduce communication-efficient techniques, from both algorithmic and system perspectives, for training and inference tasks at the network edge. Potential future research directions are also highlighted. |
Tasks | Drug Discovery, Image Classification |
Published | 2020-02-22 |
URL | https://arxiv.org/abs/2002.09668v1 |
https://arxiv.org/pdf/2002.09668v1.pdf | |
PWC | https://paperswithcode.com/paper/communication-efficient-edge-ai-algorithms |
Repo | |
Framework | |
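As a concrete example of the class of communication-efficient training techniques such a survey covers, the sketch below shows top-k gradient sparsification before transmission. This is a generic textbook technique used for illustration, not a method proposed in the paper.

```python
import numpy as np

def sparsify_topk(grad, k_ratio=0.01):
    """Keep only the largest-magnitude k% of gradient entries; send (indices, values)."""
    flat = grad.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx], grad.shape            # compressed payload

def densify(idx, vals, shape):
    """Reconstruct a dense gradient on the receiver side."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = vals
    return out.reshape(shape)

grad = np.random.default_rng(2).normal(size=(256, 128))
idx, vals, shape = sparsify_topk(grad, k_ratio=0.01)
approx = densify(idx, vals, shape)
print(f"sent {idx.size} of {grad.size} entries ({idx.size / grad.size:.1%}); "
      f"residual norm {np.linalg.norm(grad - approx):.2f}")
```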
The Synthesizability of Molecules Proposed by Generative Models
Title | The Synthesizability of Molecules Proposed by Generative Models |
Authors | Wenhao Gao, Connor W. Coley |
Abstract | The discovery of functional molecules is an expensive and time-consuming process, exemplified by the rising costs of small molecule therapeutic discovery. One class of techniques of growing interest for early-stage drug discovery is de novo molecular generation and optimization, catalyzed by the development of new deep learning approaches. These techniques can suggest novel molecular structures intended to maximize a multi-objective function, e.g., suitability as a therapeutic against a particular target, without relying on brute-force exploration of a chemical space. However, the utility of these approaches is stymied by ignorance of synthesizability. To highlight the severity of this issue, we use a data-driven computer-aided synthesis planning program to quantify how often molecules proposed by state-of-the-art generative models cannot be readily synthesized. Our analysis demonstrates that there are several tasks for which these models generate unrealistic molecular structures despite performing well on popular quantitative benchmarks. Synthetic complexity heuristics can successfully bias generation toward synthetically-tractable chemical space, although doing so necessarily detracts from the primary objective. This analysis suggests that to improve the utility of these models in real discovery workflows, new algorithm development is warranted. |
Tasks | Drug Discovery |
Published | 2020-02-17 |
URL | https://arxiv.org/abs/2002.07007v1 |
https://arxiv.org/pdf/2002.07007v1.pdf | |
PWC | https://paperswithcode.com/paper/the-synthesizability-of-molecules-proposed-by |
Repo | |
Framework | |
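The abstract notes that synthetic complexity heuristics can bias generation toward tractable chemical space at some cost to the primary objective. The sketch below shows the general shape of such a penalized objective; `property_score` and `sa_score` are hypothetical stand-ins (e.g., a property predictor and a heuristic such as the SA score), and the weighting scheme is illustrative only.

```python
def penalized_objective(smiles, property_score, sa_score, lam=0.5):
    """Reward = desired property minus a penalty for synthetic complexity.

    sa_score is assumed to lie in [1, 10] (1 = easy to make, 10 = hard),
    so it is rescaled to [0, 1] before being subtracted.
    """
    complexity = (sa_score(smiles) - 1.0) / 9.0
    return property_score(smiles) - lam * complexity

# Toy usage with dummy scorers (replace with a real predictor and heuristic).
score = penalized_objective(
    "CCO",
    property_score=lambda s: 0.8,   # e.g., predicted activity, normalized to [0, 1]
    sa_score=lambda s: 2.5,         # e.g., a synthetic-accessibility heuristic
)
print(f"penalized objective: {score:.3f}")
```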
Optimizing Streaming Parallelism on Heterogeneous Many-Core Architectures: A Machine Learning Based Approach
Title | Optimizing Streaming Parallelism on Heterogeneous Many-Core Architectures: A Machine Learning Based Approach |
Authors | Peng Zhang, Jianbin Fang, Canqun Yang, Chun Huang, Tao Tang, Zheng Wang |
Abstract | This article presents an automatic approach to quickly derive a good solution for hardware resource partition and task granularity for task-based parallel applications on heterogeneous many-core architectures. Our approach employs a performance model to estimate the resulting performance of the target application under a given resource partition and task granularity configuration. The model is used as a utility to quickly search for a good configuration at runtime. Instead of hand-crafting an analytical model that requires expert insights into low-level hardware details, we employ machine learning techniques to automatically learn it. We achieve this by first learning a predictive model offline using training programs. The learnt model can then be used to predict the performance of any unseen program at runtime. We apply our approach to 39 representative parallel applications and evaluate it on two representative heterogeneous many-core platforms: a CPU-XeonPhi platform and a CPU-GPU platform. Compared to the single-stream version, our approach achieves, on average, a 1.6x and 1.1x speedup on the XeonPhi and the GPU platform, respectively. These results translate to over 93% of the performance delivered by a theoretically perfect predictor. |
Tasks | |
Published | 2020-03-05 |
URL | https://arxiv.org/abs/2003.04294v1 |
https://arxiv.org/pdf/2003.04294v1.pdf | |
PWC | https://paperswithcode.com/paper/optimizing-streaming-parallelism-on |
Repo | |
Framework | |
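A hedged sketch of the offline-model / runtime-search recipe the abstract describes: learn a performance predictor from training programs, then at runtime enumerate candidate (resource partition, task granularity) configurations and keep the one predicted to run fastest. The regressor choice, feature layout, and synthetic data are assumptions, not the paper's design.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Offline: features = [program feature 1, program feature 2, resource partition, task granularity]
X_train = rng.uniform(size=(500, 4))
y_train = 1.0 / (0.2 + X_train[:, 2] * X_train[:, 3])        # synthetic runtimes
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

def best_configuration(program_features, partitions, granularities):
    """Runtime search: score every candidate configuration with the learnt model."""
    candidates = [(p, g) for p in partitions for g in granularities]
    feats = np.array([[*program_features, p, g] for p, g in candidates])
    predicted_runtime = model.predict(feats)
    return candidates[int(np.argmin(predicted_runtime))]

print(best_configuration(program_features=[0.7, 0.4],
                         partitions=[0.25, 0.5, 0.75],
                         granularities=[0.1, 0.3, 0.6]))
```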
Generating Major Types of Chinese Classical Poetry in a Uniformed Framework
Title | Generating Major Types of Chinese Classical Poetry in a Uniformed Framework |
Authors | Jinyi Hu, Maosong Sun |
Abstract | Poetry generation is an interesting research topic in the field of text generation. As one of the most valuable literary and cultural heritages of China, Chinese classical poetry is familiar to and loved by Chinese people from generation to generation. It has many particular characteristics in its language structure, ranging from form and sound to meaning, and is thus regarded as an ideal testing task for text generation. In this paper, we propose a GPT-2 based unified framework for generating major types of Chinese classical poems. We define a unified format for formulating all types of training samples by integrating detailed form information, then present a simple form-stressed weighting method in GPT-2 to strengthen control over the form of the generated poems, with special emphasis on those forms with longer body length. Preliminary experimental results show that this enhanced model can generate Chinese classical poems of major types with high quality in both form and content, validating the effectiveness of the proposed strategy. The model has been incorporated into Jiuge, the most influential Chinese classical poetry generation system, developed by Tsinghua University (Guo et al., 2019). |
Tasks | Text Generation |
Published | 2020-03-13 |
URL | https://arxiv.org/abs/2003.11528v1 |
https://arxiv.org/pdf/2003.11528v1.pdf | |
PWC | https://paperswithcode.com/paper/generating-major-types-of-chinese-classical |
Repo | |
Framework | |
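One plausible reading of a "form-stressed weighting method" is a token-level loss weighting that up-weights positions carrying form information during fine-tuning. The sketch below implements that reading; the masking rule and weight value are illustrative assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def form_stressed_loss(logits, targets, form_mask, form_weight=2.0):
    """Cross-entropy with a larger weight on positions flagged as form tokens.

    logits: (batch, seq, vocab), targets: (batch, seq), form_mask: (batch, seq) bool
    """
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1), reduction="none"
    ).reshape(targets.shape)
    weights = 1.0 + (form_weight - 1.0) * form_mask.float()
    return (weights * per_token).sum() / weights.sum()

# Toy usage
batch, seq, vocab = 2, 8, 100
logits = torch.randn(batch, seq, vocab)
targets = torch.randint(0, vocab, (batch, seq))
form_mask = torch.zeros(batch, seq, dtype=torch.bool)
form_mask[:, :2] = True                      # pretend the first two tokens encode form
print(form_stressed_loss(logits, targets, form_mask).item())
```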
Knot Selection in Sparse Gaussian Processes with a Variational Objective
Title | Knot Selection in Sparse Gaussian Processes with a Variational Objective |
Authors | Nathaniel Garton, Jarad Niemi, Alicia Carriquiry |
Abstract | Sparse, knot-based Gaussian processes have enjoyed considerable success as scalable approximations to full Gaussian processes. Certain sparse models can be derived through specific variational approximations to the true posterior, and knots can be selected to minimize the Kullback-Leibler divergence between the approximate and true posterior. While this has been a successful approach, simultaneous optimization of knots can be slow due to the number of parameters being optimized. Furthermore, there have been few proposed methods for selecting the number of knots, and no experimental results exist in the literature. We propose using a one-at-a-time knot selection algorithm based on Bayesian optimization to select the number and locations of knots. We showcase the competitive performance of this method relative to simultaneous optimization of knots on three benchmark data sets, but at a fraction of the computational cost. |
Tasks | Gaussian Processes |
Published | 2020-03-05 |
URL | https://arxiv.org/abs/2003.02729v1 |
https://arxiv.org/pdf/2003.02729v1.pdf | |
PWC | https://paperswithcode.com/paper/knot-selection-in-sparse-gaussian-processes-1 |
Repo | |
Framework | |
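The sketch below illustrates one-at-a-time knot selection against a variational objective (here the standard Titsias bound for a sparse GP). The paper selects knot locations with Bayesian optimization; this sketch substitutes a simple grid search over candidates to stay short, so treat it as the greedy skeleton only.

```python
import numpy as np

def rbf(X1, X2, ls=1.0, var=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

def titsias_elbo(X, y, Z, noise=0.1):
    """Variational lower bound for a sparse GP with inducing inputs (knots) Z."""
    n = len(X)
    Knn_diag = np.full(n, 1.0)                       # RBF variance fixed at 1
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
    Knm = rbf(X, Z)
    Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)
    cov = Qnn + noise**2 * np.eye(n)
    L = np.linalg.cholesky(cov)
    alpha = np.linalg.solve(L, y)
    logdet = 2 * np.log(np.diag(L)).sum()
    log_marg = -0.5 * (alpha @ alpha + logdet + n * np.log(2 * np.pi))
    trace_term = (Knn_diag - np.diag(Qnn)).sum() / (2 * noise**2)
    return log_marg - trace_term

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)
candidates = np.linspace(-3, 3, 50)[:, None]

knots = np.empty((0, 1))
for _ in range(5):                                   # add 5 knots, one at a time
    scores = [titsias_elbo(X, y, np.vstack([knots, c[None, :]])) for c in candidates]
    knots = np.vstack([knots, candidates[int(np.argmax(scores))][None, :]])
print("selected knots:", knots.ravel().round(2))
```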
AGATHA: Automatic Graph-mining And Transformer based Hypothesis generation Approach
Title | AGATHA: Automatic Graph-mining And Transformer based Hypothesis generation Approach |
Authors | Justin Sybrandt, Ilya Tyagin, Michael Shtutman, Ilya Safro |
Abstract | Medical research is risky and expensive. Drug discovery, as an example, requires that researchers efficiently winnow thousands of potential targets to a small candidate set for more thorough evaluation. However, research groups spend significant time and money to perform the experiments necessary to determine this candidate set long before seeing intermediate results. Hypothesis generation systems address this challenge by mining the wealth of publicly available scientific information to predict plausible research directions. We present AGATHA, a deep-learning hypothesis generation system that can introduce data-driven insights earlier in the discovery process. Through a learned ranking criterion, this system quickly prioritizes plausible term-pairs among entity sets, allowing us to recommend new research directions. We massively validate our system with a temporal holdout wherein we predict connections first introduced after 2015 using data published beforehand. We additionally explore biomedical sub-domains, and demonstrate AGATHA’s predictive capacity across the twenty most popular relationship types. This system achieves best-in-class performance on an established benchmark, and demonstrates high recommendation scores across subdomains. Reproducibility: All code, experimental data, and pre-trained models are available online: sybrandt.com/2020/agatha |
Tasks | Drug Discovery |
Published | 2020-02-13 |
URL | https://arxiv.org/abs/2002.05635v1 |
https://arxiv.org/pdf/2002.05635v1.pdf | |
PWC | https://paperswithcode.com/paper/agatha-automatic-graph-mining-and-transformer |
Repo | |
Framework | |
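To make the ranking step concrete, the sketch below scores candidate term pairs with a simple function of their embeddings and returns the top-ranked pairs. The cosine scoring over mock embeddings is a placeholder for AGATHA's learned transformer-based ranking criterion.

```python
import numpy as np

rng = np.random.default_rng(5)
terms = ["gene_a", "gene_b", "compound_x", "disease_y", "pathway_z"]
emb = {t: rng.normal(size=64) for t in terms}        # stand-in entity embeddings

def pair_score(a, b):
    """Placeholder scoring function: cosine similarity of term embeddings."""
    va, vb = emb[a], emb[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

candidates = [(a, b) for i, a in enumerate(terms) for b in terms[i + 1:]]
ranked = sorted(candidates, key=lambda p: pair_score(*p), reverse=True)
for a, b in ranked[:3]:                              # top recommended term pairs
    print(f"{a} -- {b}: {pair_score(a, b):+.3f}")
```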
Viewing the Progression of the Novel Corona Virus (COVID-19) with NewsStand
Title | Viewing the Progression of the Novel Corona Virus (COVID-19) with NewsStand |
Authors | John Kastner, Hong Wei, Hanan Samet |
Abstract | With the continuing spread of COVID-19, it is clearly important to be able to track the progress of the virus over time to be better prepared to anticipate its emergence in new regions. Officially released numbers of cases will likely be the most accurate means by which to track this, but they will not necessarily paint a complete picture. We have developed an application, usable on desktop and mobile devices, that allows users to explore the geographic spread of discussion about the virus through analysis of keyword prevalence in geotagged news articles. |
Tasks | |
Published | 2020-02-28 |
URL | https://arxiv.org/abs/2003.00107v2 |
https://arxiv.org/pdf/2003.00107v2.pdf | |
PWC | https://paperswithcode.com/paper/viewing-the-progression-of-the-novel-corona |
Repo | |
Framework | |
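A tiny sketch of the kind of keyword-prevalence aggregation the abstract mentions: the share of geotagged articles in each region that mention a keyword. The dataframe layout is assumed for illustration and is not NewsStand's schema.

```python
import pandas as pd

articles = pd.DataFrame({
    "region": ["Lombardy", "Lombardy", "Hubei", "Washington", "Hubei"],
    "text":   ["covid-19 cases rise", "local sports", "covid-19 lockdown",
               "covid-19 outbreak", "market reopens"],
})
keyword = "covid-19"
articles["mentions"] = articles["text"].str.contains(keyword, case=False)
prevalence = articles.groupby("region")["mentions"].mean().sort_values(ascending=False)
print(prevalence)        # fraction of geotagged articles per region mentioning the keyword
```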
Adaptive Graph Auto-Encoder for General Data Clustering
Title | Adaptive Graph Auto-Encoder for General Data Clustering |
Authors | Xuelong Li, Hongyuan Zhang, Rui Zhang |
Abstract | Graph-based clustering plays an important role in the clustering area. Recent studies on graph convolutional neural networks have achieved impressive success on graph-structured data. However, in traditional clustering tasks the graph structure of the data does not exist, so the strategy used to construct the graph is crucial for performance. In addition, existing graph auto-encoder based approaches perform poorly on weighted graphs, which are widely used in graph-based clustering. In this paper, we propose a graph auto-encoder with local structure preservation for general data clustering, which can update the constructed graph adaptively. The adaptive process is designed to exploit the non-Euclidean structure sufficiently. By combining a generative model for graph embedding with graph-based clustering, we develop a graph auto-encoder with a novel decoder that performs well in scenarios involving weighted graphs. Extensive experiments prove the superiority of our model. |
Tasks | Graph Embedding |
Published | 2020-02-20 |
URL | https://arxiv.org/abs/2002.08648v2 |
https://arxiv.org/pdf/2002.08648v2.pdf | |
PWC | https://paperswithcode.com/paper/adaptive-graph-auto-encoder-for-general-data |
Repo | |
Framework | |
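The sketch below illustrates the adaptive loop described in the abstract in its most generic form: build a weighted graph from the data, embed the nodes with normalized graph propagation, rebuild the graph from the embeddings, and repeat. The kNN construction rule, the single untrained propagation layer, and the update schedule are assumptions, not the paper's encoder/decoder design.

```python
import numpy as np

def knn_graph(X, k=10, sigma=1.0):
    """Weighted, symmetric k-nearest-neighbour adjacency matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.zeros_like(d2)
    for i, row in enumerate(d2):
        nbrs = np.argsort(row)[1:k + 1]              # skip self
        A[i, nbrs] = np.exp(-row[nbrs] / (2 * sigma**2))
    return np.maximum(A, A.T)

def propagate(A, X):
    """One symmetric-normalized propagation step: D^-1/2 (A + I) D^-1/2 X."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    return (A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]) @ X

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(4, 1, (50, 8))])
W = rng.normal(scale=0.3, size=(8, 2))               # untrained projection, for shape only
A = knn_graph(X)
for _ in range(3):                                   # alternate: embed, then rebuild graph
    Z = np.tanh(propagate(A, X) @ W)
    A = knn_graph(Z, k=10)
print("embedding shape:", Z.shape)
```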
Geometrically Mappable Image Features
Title | Geometrically Mappable Image Features |
Authors | Janine Thoma, Danda Pani Paudel, Ajad Chhatkuli, Luc Van Gool |
Abstract | Vision-based localization of an agent in a map is an important problem in robotics and computer vision. In that context, localization by learning matchable image features is gaining popularity due to recent advances in machine learning. Features that uniquely describe the visual contents of images have a wide range of applications, including image retrieval and understanding. In this work, we propose a method that learns image features targeted for image-retrieval-based localization. Retrieval-based localization has several benefits, such as easy maintenance and quick computation. However, the state-of-the-art features only provide visual similarity scores which do not explicitly reveal the geometric distance between query and retrieved images. Knowing this distance is highly desirable for accurate localization, especially when the reference images are sparsely distributed in the scene. Therefore, we propose a novel loss function for learning image features which are both visually representative and geometrically relatable. This is achieved by guiding the learning process such that the feature and geometric distances between images are directly proportional. In our experiments we show that our features not only offer significantly better localization accuracy, but also allow us to estimate the trajectory of a query sequence in the absence of the reference images. |
Tasks | Image Retrieval |
Published | 2020-03-21 |
URL | https://arxiv.org/abs/2003.09682v1 |
https://arxiv.org/pdf/2003.09682v1.pdf | |
PWC | https://paperswithcode.com/paper/geometrically-mappable-image-features |
Repo | |
Framework | |
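The key idea in the abstract is to make feature distances directly proportional to geometric distances. The sketch below expresses that proportionality as a squared-error penalty with a learnable scale; the exact loss in the paper may differ.

```python
import torch

def geometric_proportionality_loss(features, positions, log_scale):
    """features: (n, d) embeddings, positions: (n, 2 or 3) poses, log_scale: learnable scalar."""
    feat_d = torch.cdist(features, features)         # pairwise feature distances
    geo_d = torch.cdist(positions, positions)        # pairwise geometric distances
    return ((feat_d - torch.exp(log_scale) * geo_d) ** 2).mean()

# Toy usage
n, d = 16, 128
features = torch.randn(n, d, requires_grad=True)
positions = torch.rand(n, 2) * 100.0                 # e.g., metric coordinates
log_scale = torch.zeros((), requires_grad=True)
loss = geometric_proportionality_loss(features, positions, log_scale)
loss.backward()
print(float(loss))
```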
When Radiology Report Generation Meets Knowledge Graph
Title | When Radiology Report Generation Meets Knowledge Graph |
Authors | Yixiao Zhang, Xiaosong Wang, Ziyue Xu, Qihang Yu, Alan Yuille, Daguang Xu |
Abstract | Automatic radiology report generation has been an attractive research problem in computer-aided diagnosis in recent years, aiming to alleviate the workload of doctors. Deep learning techniques for natural image captioning have been successfully adapted to generating radiology reports. However, radiology image reporting differs from the natural image captioning task in two aspects: 1) the accuracy of positive disease keyword mentions is critical in radiology image reporting, in comparison to the equivalent importance of every single word in a natural image caption; 2) the evaluation of reporting quality should focus more on matching the disease keywords and their associated attributes instead of counting the occurrence of N-grams. Based on these concerns, we propose to utilize a pre-constructed graph embedding module (modeled with a graph convolutional neural network) on multiple disease findings to assist the generation of reports in this work. The incorporation of the knowledge graph allows for dedicated feature learning for each disease finding and the modeling of relationships between them. In addition, we propose a new evaluation metric for radiology image reporting with the assistance of the same composed graph. Experimental results demonstrate the superior performance of the methods integrated with the proposed graph embedding module on a publicly accessible dataset (IU-RR) of chest radiographs compared with previous approaches, using both the conventional evaluation metrics commonly adopted for image captioning and our proposed ones. |
Tasks | Graph Embedding, Image Captioning |
Published | 2020-02-19 |
URL | https://arxiv.org/abs/2002.08277v1 |
https://arxiv.org/pdf/2002.08277v1.pdf | |
PWC | https://paperswithcode.com/paper/when-radiology-report-generation-meets |
Repo | |
Framework | |
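A minimal sketch of a pre-constructed graph embedding module of the kind described: visual features attached to disease-finding nodes are propagated over a fixed graph with one normalized graph-convolution step. The toy graph, layer sizes, and single-layer design are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FindingGraphEmbedding(nn.Module):
    def __init__(self, adjacency, in_dim=1024, out_dim=256):
        super().__init__()
        A_hat = adjacency + torch.eye(adjacency.size(0))           # add self-loops
        d = A_hat.sum(1)
        self.register_buffer("A_norm", A_hat / torch.sqrt(d[:, None] * d[None, :]))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, finding_feats):                # (num_findings, in_dim)
        return torch.relu(self.A_norm @ self.proj(finding_feats))

num_findings = 8
adjacency = (torch.rand(num_findings, num_findings) > 0.7).float()
adjacency = ((adjacency + adjacency.T) > 0).float()          # symmetric toy finding graph
module = FindingGraphEmbedding(adjacency)
finding_feats = torch.randn(num_findings, 1024)              # e.g., pooled CNN features
print(module(finding_feats).shape)                           # -> (8, 256) per-finding features
```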
A CNN-based Patent Image Retrieval Method for Design Ideation
Title | A CNN-based Patent Image Retrieval Method for Design Ideation |
Authors | Shuo Jiang, Jianxi Luo, Guillermo Ruiz Pava, Jie Hu, Christopher L. Magee |
Abstract | The patent database is often used in searches for inspirational stimuli for innovative design opportunities because of its large size, extensive variety and rich design information in patent documents. However, most patent mining research focuses only on textual information and ignores visual information. Herein, we propose a convolutional neural network (CNN)-based patent image retrieval method. The core of this approach is a novel neural network architecture named Dual-VGG that is aimed at accomplishing two tasks: visual material type prediction and International Patent Classification (IPC) class label prediction. In turn, the trained neural network provides deep features in the image embedding vectors that can be utilized for patent image retrieval. The accuracy of both training tasks and the patent image embedding space are evaluated to show the performance of our model. This approach is also illustrated in a case study of robot arm design retrieval. Compared to traditional keyword-based searching and Google image searching, the proposed method discovers more useful visual information for engineering design. |
Tasks | Image Retrieval |
Published | 2020-03-10 |
URL | https://arxiv.org/abs/2003.08741v1 |
https://arxiv.org/pdf/2003.08741v1.pdf | |
PWC | https://paperswithcode.com/paper/a-cnn-based-patent-image-retrieval-method-for |
Repo | |
Framework | |
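To illustrate the dual-task design, the sketch below attaches two classification heads (visual material type and IPC class) to a shared VGG-16 trunk and exposes the pooled trunk output as the retrieval embedding. The head sizes, pooling, and embedding dimension are assumptions, not the published Dual-VGG configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class DualHeadRetrievalNet(nn.Module):
    """Shared VGG-16 trunk with two classification heads; the pooled trunk
    output doubles as the image embedding used for retrieval."""
    def __init__(self, num_material_types=5, num_ipc_classes=120, emb_dim=512):
        super().__init__()
        trunk = vgg16()                               # untrained VGG-16 backbone
        self.features = trunk.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.material_head = nn.Linear(emb_dim, num_material_types)
        self.ipc_head = nn.Linear(emb_dim, num_ipc_classes)

    def forward(self, x):
        emb = self.pool(self.features(x)).flatten(1)  # (batch, 512) retrieval embedding
        return emb, self.material_head(emb), self.ipc_head(emb)

model = DualHeadRetrievalNet()
images = torch.randn(2, 3, 224, 224)
emb, material_logits, ipc_logits = model(images)
print(emb.shape, material_logits.shape, ipc_logits.shape)
```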
Adaptive Semantic-Visual Tree for Hierarchical Embeddings
Title | Adaptive Semantic-Visual Tree for Hierarchical Embeddings |
Authors | Shuo Yang, Wei Yu, Ying Zheng, Hongxun Yao, Tao Mei |
Abstract | Merchandise categories inherently form a semantic hierarchy with different levels of concept abstraction, especially for fine-grained categories. This hierarchy encodes rich correlations among various categories across different levels, which can effectively regularize the semantic space and thus make predictions less ambiguous. However, previous studies of fine-grained image retrieval primarily focus on semantic similarities or visual similarities. In a real application, merely using visual similarity may not satisfy consumers’ need to search for merchandise with real-life images; e.g., given a red coat as the query image, we might get a red suit in the recall results based only on visual similarity, since the two are visually similar. But users actually want a coat rather than a suit, even if the coat has different color or texture attributes. We introduce this new problem, motivated by photo-based shopping in real practice. This is why semantic information is integrated to regularize the margins, giving “semantic” priority over “visual”. To solve this new problem, we propose a hierarchical adaptive semantic-visual tree (ASVT) to depict the architecture of merchandise categories, which evaluates semantic similarities between different semantic levels and visual similarities within the same semantic class simultaneously. The semantic information satisfies the demand of consumers for merchandise similar to the query, while the visual information optimizes the correlations within the semantic class. At each level, we set different margins based on the semantic hierarchy and incorporate them as prior information to learn a fine-grained feature embedding. To evaluate our framework, we propose a new dataset named JDProduct, with hierarchical labels collected from actual image queries and official merchandise images on an online shopping application. Extensive experimental results on the public CARS196 and CUB- |
Tasks | Image Retrieval |
Published | 2020-03-08 |
URL | https://arxiv.org/abs/2003.03707v1 |
https://arxiv.org/pdf/2003.03707v1.pdf | |
PWC | https://paperswithcode.com/paper/adaptive-semantic-visual-tree-for |
Repo | |
Framework | |
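The level-dependent margins can be made concrete with a triplet loss whose margin grows when the negative differs from the anchor at a coarser level of the hierarchy. The two-level setup and margin values below are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def hierarchical_triplet_loss(anchor, positive, negative, level_of_negative,
                              margins=(0.2, 0.4)):
    """level_of_negative: 0 if the negative shares the coarse class (fine-level negative),
    1 if it differs already at the coarse level (and so gets the larger margin)."""
    margin = torch.tensor(margins)[level_of_negative]
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage
a, p, n = (torch.randn(8, 64) for _ in range(3))
levels = torch.randint(0, 2, (8,))          # which hierarchy level each negative violates
print(hierarchical_triplet_loss(a, p, n, levels).item())
```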
When Do Drivers Concentrate? Attention-based Driver Behavior Modeling With Deep Reinforcement Learning
Title | When Do Drivers Concentrate? Attention-based Driver Behavior Modeling With Deep Reinforcement Learning |
Authors | Xingbo Fu, Xuan Di, Zhaobin Mo |
Abstract | Driver distraction is a significant risk to driving safety. Apart from the spatial domain, research on temporal inattention is also necessary. In this paper, we propose an actor-critic method, the Attention-based Twin Delayed Deep Deterministic policy gradient (ATD3) algorithm, to approximate a driver’s action from observations and to measure the driver’s attention allocation over consecutive time steps in a car-following model. Considering reaction time, we construct the attention mechanism in the actor network to capture temporal dependencies across consecutive observations. In the critic network, we employ the Twin Delayed Deep Deterministic policy gradient (TD3) algorithm to address the overestimated value estimates that persist in actor-critic algorithms. We conduct experiments on real-world vehicle trajectory datasets and show that our proposed approach outperforms seven baseline algorithms in accuracy. Moreover, the results reveal that the attention of drivers in smoothly moving vehicles is uniformly distributed over previous observations, while they focus on recent observations when sudden decreases in relative speed occur. This study is the first contribution to modeling drivers’ temporal attention. |
Tasks | |
Published | 2020-02-26 |
URL | https://arxiv.org/abs/2002.11385v1 |
https://arxiv.org/pdf/2002.11385v1.pdf | |
PWC | https://paperswithcode.com/paper/when-do-drivers-concentrate-attention-based |
Repo | |
Framework | |
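A minimal sketch of an attention-pooled actor in the spirit of the abstract: consecutive observations are weighted by learned attention before being mapped to an action. The attention form, layer sizes, and action bound are assumptions, not the exact ATD3 actor.

```python
import torch
import torch.nn as nn

class AttentionActor(nn.Module):
    def __init__(self, obs_dim=4, hidden=64, action_dim=1, max_action=3.0):
        super().__init__()
        self.score = nn.Linear(obs_dim, 1)            # attention score per time step
        self.policy = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )
        self.max_action = max_action

    def forward(self, obs_seq):                       # (batch, steps, obs_dim)
        attn = torch.softmax(self.score(obs_seq), dim=1)        # (batch, steps, 1)
        context = (attn * obs_seq).sum(dim=1)                   # attention-pooled state
        return self.max_action * self.policy(context), attn.squeeze(-1)

actor = AttentionActor()
obs_seq = torch.randn(2, 5, 4)        # e.g., 5 consecutive car-following observations
action, attention = actor(obs_seq)
print(action.shape, attention.shape)  # (2, 1) action, (2, 5) attention weights
```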