January 28, 2020

3553 words 17 mins read

Paper Group ANR 954

Affine Disentangled GAN for Interpretable and Robust AV Perception. Learning Scalable and Precise Representation of Program Semantics. Deep Ordinal Reinforcement Learning. Scalable Influence Estimation Without Sampling. Site-specific graph neural network for predicting protonation energy of oxygenate molecules. InfoGraph: Unsupervised and Semi-supe …

Affine Disentangled GAN for Interpretable and Robust AV Perception

Title Affine Disentangled GAN for Interpretable and Robust AV Perception
Authors Letao Liu, Martin Saerbeck, Justin Dauwels
Abstract Autonomous vehicles (AV) have progressed rapidly with the advancements in computer vision algorithms. The deep convolutional neural network, as the main contributor to this advancement, has boosted classification accuracy dramatically. However, the discovery of adversarial examples reveals a generalization gap between datasets and the real world. Furthermore, affine transformations may also confuse computer-vision-based object detectors. Such degradation of the perception system is undesirable for safety-critical systems like autonomous vehicles. In this paper, a deep learning system is proposed: Affine Disentangled GAN (ADIS-GAN), which is robust against affine transformations and adversarial attacks. It is demonstrated that conventional data augmentation for affine transformations and adversarial attacks are orthogonal, while ADIS-GAN can handle both at the same time. Useful information such as the image rotation angle and scaling factor is also generated by ADIS-GAN. On the MNIST dataset, ADIS-GAN achieves over 98 percent classification accuracy within 30 degrees of rotation, and over 90 percent classification accuracy against FGSM and PGD adversarial attacks.
Tasks Adversarial Attack, Autonomous Vehicles, Data Augmentation
Published 2019-07-06
URL https://arxiv.org/abs/1907.05274v1
PDF https://arxiv.org/pdf/1907.05274v1.pdf
PWC https://paperswithcode.com/paper/affine-disentangled-gan-for-interpretable-and
Repo
Framework
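The FGSM attack that ADIS-GAN is evaluated against perturbs an input by a small step in the direction of the sign of the loss gradient. As a rough illustration (not the paper's code), here is a one-step FGSM sketch on a simple logistic-regression classifier; the model, weights, and epsilon are all illustrative placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM: x_adv = x + eps * sign(dL/dx) for logistic loss."""
    p = sigmoid(x @ w + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # gradient of cross-entropy loss w.r.t. the input
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0  # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
# The perturbation increases the loss, so the true-class probability drops.
print(sigmoid(x_adv @ w + b) < sigmoid(x @ w + b))
```

For a linear model the effect is exact, not just first-order: moving against the weight vector always lowers the true-class score. Deep networks behave only approximately this way, which is why iterative attacks like PGD (also used in the paper) are stronger.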

Learning Scalable and Precise Representation of Program Semantics

Title Learning Scalable and Precise Representation of Program Semantics
Authors Ke Wang
Abstract Neural program embedding has shown potential in aiding the analysis of large-scale, complicated software. Newly proposed deep neural architectures pride themselves on learning program semantics rather than superficial syntactic features. However, by considering the source code only, the vast majority of neural networks do not capture a deep, precise representation of program semantics. In this paper, we present \dypro, a novel deep neural network that learns from program execution traces. Compared to prior dynamic models, not only is \dypro capable of generalizing across multiple executions for learning a program’s dynamic semantics in its entirety, but \dypro is also more efficient when dealing with programs yielding long execution traces. For evaluation, we task \dypro with semantic classification (i.e. categorizing programs based on their semantics) and compare it against two prominent static models: Gated Graph Neural Network and TreeLSTM. We find that \dypro achieves the highest prediction accuracy among all models. To further reveal the capacity of all aforementioned deep neural architectures, we examine if the models can learn to detect deeper semantic properties of a program. In particular, given the task of recognizing loop invariants, we show \dypro beats all static models by a wide margin.
Tasks
Published 2019-05-13
URL https://arxiv.org/abs/1905.05251v3
PDF https://arxiv.org/pdf/1905.05251v3.pdf
PWC https://paperswithcode.com/paper/learning-scalable-and-precise-representation
Repo
Framework
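\dypro consumes execution traces rather than source text. To make "execution trace" concrete, the sketch below uses Python's `sys.settrace` to record local variable states line by line while a toy function runs; the traced function and the (line number, locals) trace format are illustrative, not \dypro's actual input encoding:

```python
import sys

def capture_trace(fn, *args):
    """Run fn(*args), recording (line number, locals snapshot) at each step."""
    trace = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            trace.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer  # keep tracing inside this frame

    sys.settrace(tracer)
    try:
        result = fn(*args)
    finally:
        sys.settrace(None)
    return result, trace

def running_max(xs):
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

result, trace = capture_trace(running_max, [3, 1, 4, 1, 5])
print(result)  # 5
# Evolution of `best` across execution steps (None before first assignment):
print([snap.get("best") for _, snap in trace])
```

A dynamic model sees how `best` evolves across executions, which is exactly the kind of state information a purely static (source-only) model never observes.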

Deep Ordinal Reinforcement Learning

Title Deep Ordinal Reinforcement Learning
Authors Alexander Zap, Tobias Joppen, Johannes Fürnkranz
Abstract Reinforcement learning usually makes use of numerical rewards, which have nice properties but also come with drawbacks and difficulties. Using rewards on an ordinal scale (ordinal rewards) is an alternative to numerical rewards that has received more attention in recent years. In this paper, a general approach to adapting reinforcement learning problems to the use of ordinal rewards is presented and motivated. We show how to convert common reinforcement learning algorithms to an ordinal variation by the example of Q-learning and introduce Ordinal Deep Q-Networks, which adapt deep reinforcement learning to ordinal rewards. Additionally, we run evaluations on problems provided by the OpenAI Gym framework, showing that our ordinal variants exhibit a performance that is comparable to the numerical variations for a number of problems. We also give first evidence that our ordinal variant is able to produce better results for problems with less engineered and simpler-to-design reward signals.
Tasks Q-Learning
Published 2019-05-06
URL https://arxiv.org/abs/1905.02005v2
PDF https://arxiv.org/pdf/1905.02005v2.pdf
PWC https://paperswithcode.com/paper/deep-ordinal-reinforcement-learning
Repo
Framework
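The core ordinal idea can be shown in a simplified bandit setting: instead of averaging numerical rewards, track a distribution over reward ranks per action and pick the action whose distribution is statistically superior. This is a toy sketch of that comparison (the paper extends it to full Q-learning with bootstrapped rank distributions; all names and the update rule here are illustrative simplifications):

```python
import numpy as np

N_RANKS = 3  # ordinal reward scale: rank 0 < rank 1 < rank 2

def superiority(p, q):
    """Probability that a draw from p beats a draw from q (ties count half)."""
    s = 0.0
    for i in range(N_RANKS):
        for j in range(N_RANKS):
            if i > j:
                s += p[i] * q[j]
            elif i == j:
                s += 0.5 * p[i] * q[j]
    return s

def ordinal_policy(dists):
    """Pick the action whose rank distribution is statistically superior."""
    scores = [sum(superiority(dists[a], dists[b])
                  for b in range(len(dists)) if b != a)
              for a in range(len(dists))]
    return int(np.argmax(scores))

# One-state bandit: action 1 tends to yield higher ordinal rewards.
rng = np.random.default_rng(1)
counts = np.ones((2, N_RANKS))  # Laplace-smoothed rank counts per action
true_rank_probs = np.array([[0.7, 0.2, 0.1],   # action 0: mostly low ranks
                            [0.1, 0.2, 0.7]])  # action 1: mostly high ranks
for _ in range(500):
    a = rng.integers(2)  # explore uniformly
    r = rng.choice(N_RANKS, p=true_rank_probs[a])
    counts[a, r] += 1    # ordinal "value update": track rank frequencies

dists = counts / counts.sum(axis=1, keepdims=True)
print(ordinal_policy(dists))  # expected: 1
```

Note the policy only ever compares ranks, never adds or scales reward magnitudes, which is the property that makes ordinal rewards easier to design.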

Scalable Influence Estimation Without Sampling

Title Scalable Influence Estimation Without Sampling
Authors Andrey Y. Lokhov, David Saad
Abstract In a diffusion process on a network, how many nodes are expected to be influenced by a set of initial spreaders? This natural problem, often referred to as influence estimation, boils down to computing the marginal probability that a given node is active at a given time when the process starts from a specified initial condition. Among many other applications, this task is crucial for the well-studied problem of influence maximization: finding optimal spreaders in a social network that maximize the influence spread by a certain time horizon. Indeed, influence estimation needs to be called multiple times for comparing candidate seed sets. Unfortunately, in many models of interest an exact computation of marginals is #P-hard. In practice, influence is often estimated using Monte-Carlo sampling methods that require a large number of runs for obtaining a high-fidelity prediction, especially at large times. It is thus desirable to develop analytic techniques as an alternative to sampling methods. Here, we suggest an algorithm for estimating the influence function in the popular independent cascade model, based on a scalable dynamic message-passing approach. This method has the computational complexity of a single Monte-Carlo simulation and provides an upper bound on the expected spread on a general graph, yielding the exact answer for tree-like networks. We also provide dynamic message-passing equations for a stochastic version of the linear threshold model. The resulting saving of a potentially large sampling factor in the running time compared to simulation-based techniques hence makes it possible to address large-scale problem instances.
Tasks
Published 2019-12-29
URL https://arxiv.org/abs/1912.12749v1
PDF https://arxiv.org/pdf/1912.12749v1.pdf
PWC https://paperswithcode.com/paper/scalable-influence-estimation-without
Repo
Framework
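The flavor of message passing for the independent cascade model can be illustrated with its static fixed point: edge messages m[(v,u)] estimate the probability that u is ever activated through the edge v→u, which is exact on trees and an upper bound on loopy graphs. This sketch (illustrative names and a toy path graph, not the paper's code or its full time-dependent equations) shows the idea:

```python
def influence_messages(n, edges, probs, seeds, iters=50):
    """Fixed-point messages m[(v,u)] = P(u activated via edge v->u).
    Exact on trees; an upper bound on general graphs."""
    nbrs = {i: [] for i in range(n)}
    for (u, v) in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    m = {(v, u): 0.0 for u in range(n) for v in nbrs[u]}
    for _ in range(iters):
        new = {}
        for (v, u) in m:
            # Cavity probability that v is active, excluding u's influence:
            prod = 1.0
            for w in nbrs[v]:
                if w != u:
                    prod *= 1.0 - m[(w, v)]
            active_v = 1.0 if v in seeds else 1.0 - prod
            new[(v, u)] = probs[(v, u)] * active_v
        m = new
    # Marginal activation probability per node:
    marg = []
    for u in range(n):
        prod = 1.0
        for v in nbrs[u]:
            prod *= 1.0 - m[(v, u)]
        marg.append(1.0 if u in seeds else 1.0 - prod)
    return marg

# Path 0-1-2, seed {0}, transmission probability 0.5 per edge:
edges = [(0, 1), (1, 2)]
probs = {(0, 1): 0.5, (1, 0): 0.5, (1, 2): 0.5, (2, 1): 0.5}
print(influence_messages(3, edges, probs, seeds={0}))  # [1.0, 0.5, 0.25]
```

One pass of these updates over all edges costs about as much as one simulated cascade, which is the scaling argument the abstract makes against repeated Monte-Carlo runs.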

Site-specific graph neural network for predicting protonation energy of oxygenate molecules

Title Site-specific graph neural network for predicting protonation energy of oxygenate molecules
Authors Romit Maulik, Rajeev Surendran Assary, Prasanna Balaprakash
Abstract Bio-oil molecule assessment is essential for the sustainable development of chemicals and transportation fuels. These oxygenated molecules have adequate carbon, hydrogen, and oxygen atoms that can be used for developing new value-added molecules (chemicals or transportation fuels). One motivation for our study stems from the fact that liquid-phase upgrading using mineral acid is a cost-effective chemical transformation. In this chemical upgrading process, adding a proton (positively charged atomic hydrogen) to an oxygen atom is a central step. The protonation energies of oxygen atoms in a molecule determine the thermodynamic feasibility of the reaction and the likely chemical reaction pathway. A quantum chemical model based on coupled cluster theory is used to compute accurate thermochemical properties such as the protonation energies of oxygen atoms and the feasibility of protonation-based chemical transformations. However, this method is too computationally expensive to explore a large space of chemical transformations. We develop a graph neural network approach for predicting protonation energies of oxygen atoms of hundreds of bio-oxygenate molecules to predict the feasibility of aqueous acidic reactions. Our approach relies on an iterative local nonlinear embedding that gradually leads to global influence of distant atoms, and an output layer that predicts the protonation energy. Our approach is geared to site-specific predictions for individual oxygen atoms of a molecule, in comparison with commonly used graph convolutional networks that focus on predicting a single molecular property. We demonstrate that our approach is effective in learning the locations and magnitudes of protonation energies of oxygenated molecules.
Tasks Molecular Property Prediction
Published 2019-09-18
URL https://arxiv.org/abs/2001.03136v1
PDF https://arxiv.org/pdf/2001.03136v1.pdf
PWC https://paperswithcode.com/paper/site-specific-graph-neural-network-for
Repo
Framework
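The "iterative local embedding with gradually growing influence of distant atoms" is the standard message-passing pattern, with a per-atom (rather than whole-molecule) readout for site-specific predictions. A toy sketch of that pattern on a 4-atom chain (random placeholder weights, not the paper's trained model):

```python
import numpy as np

def site_gnn(features, adj, n_rounds=3):
    """Toy message passing: each round mixes a node's features with its
    neighbors', so distant atoms gradually influence every site. A per-node
    head then scores each candidate site (weights are random placeholders)."""
    rng = np.random.default_rng(0)  # fixed seed -> same "weights" every call
    d = features.shape[1]
    w_self = rng.normal(size=(d, d)) / np.sqrt(d)
    w_nbr = rng.normal(size=(d, d)) / np.sqrt(d)
    h = features
    for _ in range(n_rounds):
        h = np.tanh(h @ w_self + adj @ h @ w_nbr)
    w_out = rng.normal(size=d)
    return h @ w_out  # one prediction per atom: site-specific readout

# Tiny 4-atom chain 0-1-2-3:
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
features = np.eye(4)  # one-hot atom features
scores = site_gnn(features, adj)
print(scores.shape)   # (4,) -- a score per candidate site
```

After three rounds, atom 3's features have reached atom 0 (three hops away), which is the "global influence of distant atoms" the abstract describes; a whole-molecule property predictor would instead pool these per-atom scores into one number.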

InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization

Title InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization
Authors Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, Jian Tang
Abstract This paper studies learning the representations of whole graphs in both unsupervised and semi-supervised scenarios. Graph-level representations are critical in a variety of real-world applications such as predicting the properties of molecules and community analysis in social networks. Traditional graph kernel based methods are simple yet effective for obtaining fixed-length representations of graphs, but they suffer from poor generalization due to hand-crafted designs. There are also some recent methods based on language models (e.g. graph2vec), but they tend to only consider certain substructures (e.g. subtrees) as graph representatives. Inspired by recent progress in unsupervised representation learning, in this paper we propose a novel method called InfoGraph for learning graph-level representations. We maximize the mutual information between the graph-level representation and the representations of substructures of different scales (e.g., nodes, edges, triangles). By doing so, the graph-level representations encode aspects of the data that are shared across different scales of substructures. We further propose InfoGraph*, an extension of InfoGraph for semi-supervised scenarios. InfoGraph* maximizes the mutual information between unsupervised graph representations learned by InfoGraph and the representations learned by existing supervised methods. As a result, the supervised encoder learns from unlabeled data while preserving the latent semantic space favored by the current supervised task. Experimental results on the tasks of graph classification and molecular property prediction show that InfoGraph is superior to state-of-the-art baselines and InfoGraph* can achieve performance competitive with state-of-the-art semi-supervised models.
Tasks Graph Classification, Molecular Property Prediction, Representation Learning, Unsupervised Representation Learning
Published 2019-07-31
URL https://arxiv.org/abs/1908.01000v3
PDF https://arxiv.org/pdf/1908.01000v3.pdf
PWC https://paperswithcode.com/paper/infograph-unsupervised-and-semi-supervised
Repo
Framework
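The mutual-information objective scores (graph, substructure) pairs higher when the substructure comes from the same graph, typically via a Jensen-Shannon estimator. A toy numpy sketch of that estimator with a plain dot-product critic (the encoders and data below are random placeholders, and InfoGraph's actual critic is a learned network):

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)

def jsd_mi_lower_bound(graph_reps, node_reps, graph_ids):
    """Jensen-Shannon MI estimate: positives are (graph, own-node) pairs;
    negatives pair each node with every other graph's representation."""
    scores = node_reps @ graph_reps.T  # critic: dot product
    pos_mask = np.zeros_like(scores, dtype=bool)
    pos_mask[np.arange(len(node_reps)), graph_ids] = True
    e_pos = -softplus(-scores[pos_mask]).mean()
    e_neg = softplus(scores[~pos_mask]).mean()
    return e_pos - e_neg

rng = np.random.default_rng(0)
graph_reps = rng.normal(size=(2, 8))
# Aligned encoders: node representations cluster around their graph's rep.
node_reps = graph_reps[[0, 0, 1, 1]] + 0.1 * rng.normal(size=(4, 8))
graph_ids = np.array([0, 0, 1, 1])

aligned = jsd_mi_lower_bound(graph_reps, node_reps, graph_ids)
shuffled = jsd_mi_lower_bound(graph_reps, node_reps, graph_ids[::-1].copy())
print(aligned > shuffled)  # aligned pairs give a higher MI estimate
```

Training maximizes this quantity over the encoder, which pushes each graph-level vector to agree with the substructure vectors drawn from its own graph.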

Multi-FAN: Multi-Spectral Mosaic Super-Resolution Via Multi-Scale Feature Aggregation Network

Title Multi-FAN: Multi-Spectral Mosaic Super-Resolution Via Multi-Scale Feature Aggregation Network
Authors Mehrdad Shoeiby, Sadegh Aliakbarian, Saeed Anwar, Lars Petersson
Abstract This paper introduces a novel method to super-resolve multi-spectral images captured by modern real-time single-shot mosaic image sensors, also known as multi-spectral cameras. Our contribution is two-fold. Firstly, we super-resolve multi-spectral images from mosaic images rather than image cubes, which helps to take into account the spatial offset of each wavelength. Secondly, we introduce an external multi-scale feature aggregation network (Multi-FAN) which concatenates the feature maps with different levels of semantic information throughout a super-resolution (SR) network. A cascade of convolutional layers then implicitly selects the most valuable feature maps to generate a mosaic image. This mosaic image is then merged with the mosaic image generated by the SR network to produce a quantitatively superior image. We apply our Multi-FAN to RCAN (Residual Channel Attention Network), which is the state-of-the-art SR algorithm. We show that Multi-FAN improves both quantitative results and inference time.
Tasks Super-Resolution
Published 2019-09-17
URL https://arxiv.org/abs/1909.07577v3
PDF https://arxiv.org/pdf/1909.07577v3.pdf
PWC https://paperswithcode.com/paper/multi-fan-multi-spectral-mosaic-super
Repo
Framework

REMIND Your Neural Network to Prevent Catastrophic Forgetting

Title REMIND Your Neural Network to Prevent Catastrophic Forgetting
Authors Tyler L. Hayes, Kushal Kafle, Robik Shrestha, Manoj Acharya, Christopher Kanan
Abstract People learn throughout life. However, incrementally updating conventional neural networks leads to catastrophic forgetting. A common remedy is replay, which is inspired by how the brain consolidates memory. Replay involves fine-tuning a network on a mixture of new and old instances. While the brain replays compressed memories, existing methods for convolutional networks replay raw images. Here, we propose REMIND, a brain-inspired approach that enables efficient replay with compressed hidden representations. REMIND is trained in an online manner, meaning it learns one example at a time, which is closer to how humans perceive new information. Under the same constraints, REMIND outperforms other methods for incremental class learning on the ImageNet ILSVRC-2012 dataset. We probe REMIND’s robustness to data ordering schemes known to induce catastrophic forgetting. We demonstrate REMIND’s generality by pioneering online learning for Visual Question Answering (VQA), which cannot be readily done with comparison models.
Tasks Quantization, Question Answering, Visual Question Answering
Published 2019-10-06
URL https://arxiv.org/abs/1910.02509v2
PDF https://arxiv.org/pdf/1910.02509v2.pdf
PWC https://paperswithcode.com/paper/remind-your-neural-network-to-prevent
Repo
Framework
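REMIND's key move is replaying compressed hidden representations instead of raw images. The paper uses product quantization; the sketch below substitutes much simpler uniform 8-bit quantization just to show the store/replay loop, and all class and parameter names are illustrative:

```python
import numpy as np

class QuantizedReplayBuffer:
    """Store feature vectors as uint8 codes; dequantize when replaying."""

    def __init__(self, lo=-4.0, hi=4.0):
        self.lo, self.hi = lo, hi
        self.codes, self.labels = [], []

    def add(self, features, label):
        # Map [lo, hi] -> [0, 255] and keep only the 1-byte codes.
        scaled = (np.clip(features, self.lo, self.hi) - self.lo) / (self.hi - self.lo)
        self.codes.append(np.round(scaled * 255).astype(np.uint8))
        self.labels.append(label)

    def replay(self, k, rng):
        # Sample stored examples and reconstruct approximate float features.
        idx = rng.choice(len(self.codes), size=k, replace=False)
        feats = [self.codes[i].astype(np.float64) / 255 * (self.hi - self.lo) + self.lo
                 for i in idx]
        return np.stack(feats), [self.labels[i] for i in idx]

rng = np.random.default_rng(0)
buf = QuantizedReplayBuffer()
for label in range(10):
    buf.add(rng.normal(size=64), label)

feats, labels = buf.replay(4, rng)
# 1 byte per value instead of 8 for float64, at small reconstruction error.
print(feats.shape)
```

The training loop would then fine-tune only the top of the network on a mix of the current example's features and these dequantized replays, which is what makes one-example-at-a-time (online) learning feasible at ImageNet scale.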

Artificial Intelligence in Glioma Imaging: Challenges and Advances

Title Artificial Intelligence in Glioma Imaging: Challenges and Advances
Authors Weina Jin, Mostafa Fatehi, Kumar Abhishek, Mayur Mallya, Brian Toyota, Ghassan Hamarneh
Abstract Primary brain tumors including gliomas continue to pose significant management challenges to clinicians. While the presentation, the pathology, and the clinical course of these lesions are variable, the initial investigations are usually similar. Patients who are suspected to have a brain tumor will be assessed with computed tomography (CT) and magnetic resonance imaging (MRI). The imaging findings are used by neurosurgeons to determine the feasibility of surgical resection and plan such an undertaking. Imaging studies are also an indispensable tool in tracking tumor progression or its response to treatment. As these imaging studies are non-invasive, relatively cheap and accessible to patients, there have been many efforts over the past two decades to increase the amount of clinically-relevant information that can be extracted from brain imaging. Most recently, artificial intelligence (AI) techniques have been employed to segment and characterize brain tumors, as well as to detect progression or treatment-response. However, the clinical utility of such endeavours remains limited due to challenges in data collection and annotation, model training, and the reliability of AI-generated information. We provide a review of recent advances in addressing the above challenges. First, to overcome the challenge of data paucity, different image imputation and synthesis techniques along with annotation collection efforts are summarized. Next, various training strategies are presented to meet multiple desiderata, such as model performance, generalization ability, data privacy protection, and learning with sparse annotations. Finally, standardized performance evaluation and model interpretability methods have been reviewed. We believe that these technical approaches will facilitate the development of a fully-functional AI tool in the clinical care of patients with gliomas.
Tasks Computed Tomography (CT), Image Imputation, Imputation
Published 2019-11-28
URL https://arxiv.org/abs/1911.12886v2
PDF https://arxiv.org/pdf/1911.12886v2.pdf
PWC https://paperswithcode.com/paper/applying-artificial-intelligence-to-glioma
Repo
Framework

MSnet: A BERT-based Network for Gendered Pronoun Resolution

Title MSnet: A BERT-based Network for Gendered Pronoun Resolution
Authors Zili Wang
Abstract The pre-trained BERT model achieves remarkable state-of-the-art results across a wide range of tasks in natural language processing. To address gender bias in the gendered pronoun resolution task, I propose a novel neural network model based on the pre-trained BERT. This model is a type of mention score classifier: it uses an attention mechanism with no parameters to compute the contextual representation of an entity span, and a vector to represent the triple-wise semantic similarity among the pronoun and the entities. In stage 1 of the gendered pronoun resolution task, a variant of this model, trained with the fine-tuning approach, reduced the multi-class logarithmic loss to 0.3033 in 5-fold cross-validation on the training set and to 0.2795 on the test set. This variant also won 2nd place with a score of 0.17289 in stage 2 of the task. The code in this paper is available at: https://github.com/ziliwang/MSnet-for-Gendered-PronounResolution
Tasks Semantic Similarity, Semantic Textual Similarity
Published 2019-08-01
URL https://arxiv.org/abs/1908.00308v1
PDF https://arxiv.org/pdf/1908.00308v1.pdf
PWC https://paperswithcode.com/paper/msnet-a-bert-based-network-for-gendered-1
Repo
Framework
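A parameter-free attention mechanism derives its weights from the inputs alone, e.g. from dot products with a query vector, with no learned projection matrices. A minimal numpy sketch of that idea for pooling an entity span's token vectors around a pronoun vector (the exact mechanism in MSnet may differ; this is only the general pattern):

```python
import numpy as np

def span_representation(span_vecs, query_vec):
    """Parameter-free attention: weights come from dot products with the
    query (e.g. the pronoun's BERT vector), not from learned projections."""
    scores = span_vecs @ query_vec
    scores -= scores.max()  # numerical stability for the softmax
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ span_vecs  # attention-weighted pooling of the span

rng = np.random.default_rng(0)
span = rng.normal(size=(3, 16))  # three token vectors in an entity span
query = span[1] + 0.01 * rng.normal(size=16)  # query nearly parallel to token 1

rep = span_representation(span, query)
# The pooled vector is pulled toward the token most similar to the query.
dists = [np.linalg.norm(rep - t) for t in span]
print(int(np.argmin(dists)))  # expected: 1
```

Because there are no weights to fit, this pooling adds no parameters on top of BERT, which keeps the mention-scoring head small.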

Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces

Title Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces
Authors Philipp Becker, Harit Pandya, Gregor Gebhardt, Cheng Zhao, James Taylor, Gerhard Neumann
Abstract In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) (Kalman et al., 1960) have been integrated with deep learning models; however, such approaches typically rely on approximate inference techniques such as variational inference, which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations, and thus avoids hard-to-backpropagate, computationally heavy and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. The resulting network architecture, which we call Recurrent Kalman Network (RKN), can be used for any time-series data, similar to an LSTM (Hochreiter & Schmidhuber, 1997), but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014), while also showing slightly improved prediction performance, and outperforms various recent generative models on an image imputation task.
Tasks Image Imputation, Imputation, Time Series
Published 2019-05-17
URL https://arxiv.org/abs/1905.07357v1
PDF https://arxiv.org/pdf/1905.07357v1.pdf
PWC https://paperswithcode.com/paper/recurrent-kalman-networks-factorized
Repo
Framework
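The factorization claim is easy to see in code: with a diagonal transition and observation model, the Kalman predict/update equations decompose per dimension into scalar arithmetic, so no matrix inversion appears. This sketch shows only that mechanic with hand-picked constants; in the RKN these quantities come from a learned encoder and locally linear dynamics:

```python
import numpy as np

def scalar_kalman_step(mu, var, a, q, obs, r):
    """Factorized Kalman step: with a diagonal model every operation is
    elementwise, so the usual matrix inversion disappears."""
    mu_pred = a * mu                  # predict mean
    var_pred = a * a * var + q        # predict variance
    gain = var_pred / (var_pred + r)  # scalar Kalman gain per dimension
    mu_new = mu_pred + gain * (obs - mu_pred)
    var_new = (1.0 - gain) * var_pred
    return mu_new, var_new

rng = np.random.default_rng(0)
dim = 5
mu, var = np.zeros(dim), np.ones(dim)
a, q, r = 0.9, 0.01, 0.25             # illustrative model constants
target = np.ones(dim)                 # latent state we noisily observe
for _ in range(30):
    obs = target + 0.5 * rng.normal(size=dim)
    mu, var = scalar_kalman_step(mu, var, a, q, obs, r)

print(np.round(var, 3))  # posterior variance shrinks well below the prior
```

Every operation above is differentiable elementwise arithmetic, which is why the whole filter can sit inside a network and be trained by ordinary backpropagation.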

Does Gender Matter? Towards Fairness in Dialogue Systems

Title Does Gender Matter? Towards Fairness in Dialogue Systems
Authors Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, Jiliang Tang
Abstract Recently there have been increasing concerns about the fairness of Artificial Intelligence (AI) in real-world applications such as computer vision and recommendation. For example, recognition algorithms in computer vision have been unfair to black people, poorly detecting their faces and inappropriately identifying them as “gorillas”. As one crucial application of AI, dialogue systems have been extensively applied in our society. They are usually built from real human conversational data; thus they could inherit fairness issues that exist in the real world. However, the fairness of dialogue systems has not been investigated. In this paper, we perform an initial study of the fairness issues in dialogue systems. In particular, we construct the first dataset and propose quantitative measures to understand fairness in dialogue models. Our studies demonstrate that popular dialogue models show significant prejudice towards different genders and races. We will release the dataset and the measurement code later to foster fairness research in dialogue systems.
Tasks
Published 2019-10-16
URL https://arxiv.org/abs/1910.10486v1
PDF https://arxiv.org/pdf/1910.10486v1.pdf
PWC https://paperswithcode.com/paper/does-gender-matter-towards-fairness-in
Repo
Framework

A Comprehensive Study and Comparison of Core Technologies for MPEG 3D Point Cloud Compression

Title A Comprehensive Study and Comparison of Core Technologies for MPEG 3D Point Cloud Compression
Authors Hao Liu, Hui Yuan, Qi Liu, Junhui Hou, Ju Liu
Abstract Point cloud based 3D visual representation is becoming popular due to its ability to exhibit the real world in a more comprehensive and immersive way. However, under a limited network bandwidth, it is very challenging to communicate this kind of media due to its huge data volume. Therefore, MPEG has launched the standardization of point cloud compression (PCC) and proposed three model categories, i.e., TMC1, TMC2, and TMC3. Because the 3D geometry compression methods of TMC1 and TMC3 are similar, TMC1 and TMC3 were further merged into a new platform named TMC13. In this paper, we first introduce some basic technologies that are usually used in 3D point cloud compression, then review the encoder architectures of these test models in detail, and finally analyze their rate-distortion performance as well as complexity quantitatively for different cases (i.e., lossless geometry and lossless color, lossless geometry and lossy color, lossy geometry and lossy color) using 16 benchmark 3D point clouds recommended by MPEG. Experimental results demonstrate that the coding efficiency of TMC2 is the best on average (especially for lossy geometry and lossy color compression) for dense point clouds, while TMC13 achieves the optimal coding performance for sparse and noisy point clouds with lower time complexity.
Tasks
Published 2019-12-20
URL https://arxiv.org/abs/1912.09674v1
PDF https://arxiv.org/pdf/1912.09674v1.pdf
PWC https://paperswithcode.com/paper/a-comprehensive-study-and-comparison-of-core
Repo
Framework

Texture-Aware Superpixel Segmentation

Title Texture-Aware Superpixel Segmentation
Authors Remi Giraud, Vinh-Thong Ta, Nicolas Papadakis, Yannick Berthoumieu
Abstract Most superpixel algorithms compute a trade-off between spatial and color features at the pixel level. Hence, they may need fine parameter tuning to balance the two measures, and often fail to group pixels with similar local texture properties. In this paper, we address these issues with a new Texture-Aware SuperPixel (TASP) method. To accurately segment textured and smooth areas, TASP automatically adjusts its spatial constraint according to the local feature variance. Then, to ensure texture homogeneity within superpixels, a new pixel-to-superpixel patch-based distance is proposed. TASP outperforms state-of-the-art methods in segmentation accuracy on both texture and natural color image datasets.
Tasks
Published 2019-01-30
URL http://arxiv.org/abs/1901.11111v3
PDF http://arxiv.org/pdf/1901.11111v3.pdf
PWC https://paperswithcode.com/paper/texture-aware-superpixel-segmentation
Repo
Framework
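The adaptive spatial constraint can be illustrated with a SLIC-style pixel-to-superpixel distance whose spatial term is relaxed where the local feature variance is high (textured regions), so texture variation is not mistaken for a boundary. The weighting function below is a hypothetical stand-in, not TASP's actual formula:

```python
def adaptive_distance(color_diff, spatial_diff, local_var, base_weight=10.0):
    """SLIC-style pixel-to-superpixel distance whose spatial term weakens in
    textured (high-variance) regions, mimicking TASP's adaptive constraint.
    The 1/(1+var) weighting here is illustrative only."""
    spatial_weight = base_weight / (1.0 + local_var)
    return color_diff ** 2 + spatial_weight * spatial_diff ** 2

# Same color and spatial offsets; one pixel sits in a textured region:
smooth = adaptive_distance(color_diff=5.0, spatial_diff=3.0, local_var=0.1)
textured = adaptive_distance(color_diff=5.0, spatial_diff=3.0, local_var=4.0)
print(textured < smooth)  # texture relaxes the spatial constraint
```

In a fixed-weight method the two pixels would be treated identically, which is exactly the parameter-tuning problem the abstract points out.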

Automated Segmentation of Hip and Thigh Muscles in Metal Artifact-Contaminated CT using Convolutional Neural Network-Enhanced Normalized Metal Artifact Reduction

Title Automated Segmentation of Hip and Thigh Muscles in Metal Artifact-Contaminated CT using Convolutional Neural Network-Enhanced Normalized Metal Artifact Reduction
Authors Mitsuki Sakamoto, Yuta Hiasa, Yoshito Otake, Masaki Takao, Yuki Suzuki, Nobuhiko Sugano, Yoshinobu Sato
Abstract In total hip arthroplasty, analysis of postoperative medical images is important to evaluate surgical outcome. Since Computed Tomography (CT) is the most prevalent modality in orthopedic surgery, we aimed at the analysis of CT images. In this work, we focus on the metal artifact in postoperative CT caused by the metallic implant, which reduces the accuracy of segmentation, especially in the vicinity of the implant. Our goal was to develop an automated segmentation method for the bones and muscles in postoperative CT images. We propose a method that combines Normalized Metal Artifact Reduction (NMAR), one of the state-of-the-art metal artifact reduction methods, with Convolutional Neural Network-based segmentation using two U-net architectures. The first U-net refines the result of NMAR, and the muscle segmentation is performed by the second U-net. We conducted experiments using simulated images of 20 patients and real images of three patients to evaluate the segmentation accuracy of 19 muscles. In the simulation study, the proposed method showed statistically significant improvement (p<0.05) in the average symmetric surface distance (ASD) metric for 14 of the 19 muscles, and reduced the average ASD over all muscles from 1.17 +/- 0.543 mm (mean +/- std over all patients) to 1.10 +/- 0.509 mm compared with our previous method. The real image study, using manual traces of the gluteus maximus and medius muscles, showed an ASD of 1.32 +/- 0.25 mm. Our future work includes training a network in an end-to-end manner for both metal artifact reduction and muscle segmentation.
Tasks Computed Tomography (CT), Metal Artifact Reduction
Published 2019-06-27
URL https://arxiv.org/abs/1906.11484v1
PDF https://arxiv.org/pdf/1906.11484v1.pdf
PWC https://paperswithcode.com/paper/automated-segmentation-of-hip-and-thigh
Repo
Framework