October 18, 2019

3367 words 16 mins read

Paper Group ANR 503

Image Labeling with Markov Random Fields and Conditional Random Fields. Provably Robust Learning-Based Approach for High-Accuracy Tracking Control of Lagrangian Systems. Latent Molecular Optimization for Targeted Therapeutic Design. Altered Fingerprints: Detection and Localization. Learning Semantic Segmentation from Synthetic Data: A Geometrically …

Image Labeling with Markov Random Fields and Conditional Random Fields

Title Image Labeling with Markov Random Fields and Conditional Random Fields
Authors Shangxuan Wu, Xinshuo Weng
Abstract Most existing methods for object segmentation in computer vision are formulated as a labeling task. This, in general, can be cast as a pixel-wise label assignment problem, which closely resembles the structure of a hidden Markov random field: each pixel can be regarded as a state with a transition probability to its neighboring pixels, and the label behind each pixel is a latent variable with an emission probability from its corresponding state. In this paper, we review several modern image labeling methods based on Markov random fields and conditional random fields, and compare their results with those of some classical image labeling methods. The experiments demonstrate that introducing Markov random fields and conditional random fields makes a substantial difference in the segmentation results.
Tasks Semantic Segmentation
Published 2018-11-28
URL https://arxiv.org/abs/1811.11323v2
PDF https://arxiv.org/pdf/1811.11323v2.pdf
PWC https://paperswithcode.com/paper/image-labeling-with-markov-random-fields-and
Repo
Framework
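
As a rough illustration of the MRF formulation sketched in the abstract above, the following minimal Python example labels a noisy binary image by minimizing a unary data cost plus a Potts smoothness cost with iterated conditional modes (ICM). The toy image, class means, and smoothness weight `beta` are illustrative assumptions, not the models evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: a bright square on a dark background, corrupted by noise.
H, W, K = 32, 32, 2                      # image size and number of labels
truth = np.zeros((H, W), dtype=int)
truth[8:24, 8:24] = 1
image = truth + rng.normal(0.0, 0.4, size=(H, W))

means = np.array([0.0, 1.0])             # assumed class means for the unary term
beta = 1.5                               # Potts smoothness weight (assumption)

# Unary cost: squared distance of each pixel to each class mean.
unary = (image[..., None] - means[None, None, :]) ** 2   # shape (H, W, K)

labels = unary.argmin(axis=2)            # initialise with the unary-only labeling

# Iterated conditional modes: greedily re-label each pixel given its 4-neighbours.
for _ in range(5):
    for i in range(H):
        for j in range(W):
            costs = unary[i, j].copy()
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    # Potts pairwise term: penalise disagreeing with a neighbour.
                    costs += beta * (np.arange(K) != labels[ni, nj])
            labels[i, j] = costs.argmin()

print("pixel accuracy:", (labels == truth).mean())
```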

Provably Robust Learning-Based Approach for High-Accuracy Tracking Control of Lagrangian Systems

Title Provably Robust Learning-Based Approach for High-Accuracy Tracking Control of Lagrangian Systems
Authors Mohamed K. Helwa, Adam Heins, Angela P. Schoellig
Abstract Lagrangian systems represent a wide range of robotic systems, including manipulators, wheeled and legged robots, and quadrotors. Inverse dynamics control and feedforward linearization techniques are typically used to convert the complex nonlinear dynamics of Lagrangian systems to a set of decoupled double integrators, and then a standard, outer-loop controller can be used to calculate the commanded acceleration for the linearized system. However, these methods typically depend on having a very accurate system model, which is often not available in practice. While this challenge has been addressed in the literature using different learning approaches, most of these approaches do not provide safety guarantees in terms of stability of the learning-based control system. In this paper, we provide a novel, learning-based control approach based on Gaussian processes (GPs) that ensures both stability of the closed-loop system and high-accuracy tracking. We use GPs to approximate the error between the commanded acceleration and the actual acceleration of the system, and then use the predicted mean and variance of the GP to calculate an upper bound on the uncertainty of the linearized model. This uncertainty bound is then used in a robust, outer-loop controller to ensure stability of the overall system. Moreover, we show that the tracking error converges to a ball with a radius that can be made arbitrarily small. Furthermore, we verify the effectiveness of our approach via simulations on a 2 degree-of-freedom (DOF) planar manipulator and experimentally on a 6 DOF industrial manipulator.
Tasks Gaussian Processes, Legged Robots
Published 2018-04-03
URL http://arxiv.org/abs/1804.01031v2
PDF http://arxiv.org/pdf/1804.01031v2.pdf
PWC https://paperswithcode.com/paper/provably-robust-learning-based-approach-for
Repo
Framework
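
The key ingredient described above is using the GP's predicted mean and variance to bound the unmodelled acceleration. A minimal sketch of that step with scikit-learn's GaussianProcessRegressor on synthetic one-dimensional data; the toy error function and the two-sigma confidence multiplier are assumptions, and the robust outer-loop controller itself is not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic "acceleration error" samples as a function of one state variable.
q = rng.uniform(-3.0, 3.0, size=(40, 1))
err = np.sin(q).ravel() + 0.1 * rng.normal(size=40)   # hypothetical model error

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                              normalize_y=True)
gp.fit(q, err)

# Predict the error and its uncertainty on a grid of query states.
q_test = np.linspace(-3, 3, 200).reshape(-1, 1)
mean, std = gp.predict(q_test, return_std=True)

# High-probability bound on the unmodelled acceleration, e.g. |mean| + 2*std,
# which a robust outer-loop controller could use to size its robustness margin.
upper_bound = np.abs(mean) + 2.0 * std
print("max uncertainty bound over the grid:", upper_bound.max())
```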

Latent Molecular Optimization for Targeted Therapeutic Design

Title Latent Molecular Optimization for Targeted Therapeutic Design
Authors Tristan Aumentado-Armstrong
Abstract We devise an approach for targeted molecular design, a problem of interest in computational drug discovery: given a target protein site, we wish to generate a chemical with both high binding affinity to the target and satisfactory pharmacological properties. This problem is made difficult by the enormity and discreteness of the space of potential therapeutics, as well as the graph-structured nature of biomolecular surface sites. Using a dataset of protein-ligand complexes, we surmount these issues by extracting a signature of the target site with a graph convolutional network and by encoding the discrete chemical into a continuous latent vector space. The latter embedding permits gradient-based optimization in molecular space, which we perform using learned differentiable models of binding affinity and other pharmacological properties. We show that our approach is able to efficiently optimize these multiple objectives and discover new molecules with potentially useful binding properties, validated via docking methods.
Tasks Drug Discovery
Published 2018-09-05
URL http://arxiv.org/abs/1809.02032v1
PDF http://arxiv.org/pdf/1809.02032v1.pdf
PWC https://paperswithcode.com/paper/latent-molecular-optimization-for-targeted
Repo
Framework
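
The optimization step described above, gradient ascent in a continuous latent space against learned differentiable property predictors, can be sketched as follows in PyTorch. The predictors, latent dimension, and objective weights here are stand-in assumptions; in the paper they are trained on protein-ligand data, and a decoder maps the optimized vector back to a molecule.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

latent_dim = 64

# Stand-ins for learned, differentiable predictors of binding affinity and a
# pharmacological property (in the paper these are trained on real data).
affinity_model = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 1))
property_model = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 1))

# Start from the latent encoding of some seed molecule (here: random).
z = torch.randn(1, latent_dim, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    # Maximize predicted affinity while keeping the other property acceptable.
    score = affinity_model(z) + 0.5 * property_model(z)
    loss = -score.mean()          # gradient *ascent* on the combined objective
    loss.backward()
    optimizer.step()

print("optimized latent vector norm:", z.detach().norm().item())
# A trained decoder (not shown) would map `z` back to a candidate molecule.
```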

Altered Fingerprints: Detection and Localization

Title Altered Fingerprints: Detection and Localization
Authors Elham Tabassi, Tarang Chugh, Debayan Deb, Anil K. Jain
Abstract Fingerprint alteration, also referred to as an obfuscation presentation attack, is the intentional tampering with or damaging of real friction ridge patterns to avoid identification by an AFIS. This paper proposes a method for the detection and localization of fingerprint alterations. Our main contributions are: (i) designing and training CNN models on fingerprint images and minutiae-centered local patches to detect and localize regions of fingerprint alterations, and (ii) training a Generative Adversarial Network (GAN) to synthesize altered fingerprints whose characteristics are similar to true altered fingerprints. A successfully trained GAN can alleviate the limited availability of altered fingerprint images for research. A database of 4,815 altered fingerprints from 270 subjects, and an equal number of rolled fingerprint images, is used to train and test our models. The proposed approach achieves a True Detection Rate (TDR) of 99.24% at a False Detection Rate (FDR) of 2%, outperforming published results. The synthetically generated altered fingerprint dataset will be open-sourced.
Tasks
Published 2018-05-02
URL http://arxiv.org/abs/1805.00911v2
PDF http://arxiv.org/pdf/1805.00911v2.pdf
PWC https://paperswithcode.com/paper/altered-fingerprints-detection-and
Repo
Framework
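
A small sketch of the minutiae-centered local patches mentioned above, which the paper's CNN scores for alteration; the toy image, minutiae locations, and patch size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy grayscale fingerprint image and a few detected minutiae (row, col) locations.
fingerprint = rng.random((256, 256)).astype(np.float32)
minutiae = [(60, 80), (128, 140), (200, 45)]

def extract_patches(image, points, size=32):
    """Crop a size x size patch centered on each minutia (skipping ones too near the border)."""
    half = size // 2
    patches = []
    for r, c in points:
        if half <= r < image.shape[0] - half and half <= c < image.shape[1] - half:
            patches.append(image[r - half:r + half, c - half:c + half])
    return np.stack(patches)

patches = extract_patches(fingerprint, minutiae)
print(patches.shape)   # (3, 32, 32): each patch would be scored by a CNN as altered or not
```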

Learning Semantic Segmentation from Synthetic Data: A Geometrically Guided Input-Output Adaptation Approach

Title Learning Semantic Segmentation from Synthetic Data: A Geometrically Guided Input-Output Adaptation Approach
Authors Yuhua Chen, Wen Li, Xiaoran Chen, Luc Van Gool
Abstract Recently, increasing attention has been drawn to training semantic segmentation models using synthetic data and computer-generated annotations. However, the domain gap remains a major barrier and prevents models learned from synthetic data from generalizing well to real-world applications. In this work, we take advantage of additional geometric information from synthetic data, a powerful yet largely neglected cue, to bridge the domain gap. Such geometric information can be generated easily from synthetic data and has been shown to be closely coupled with semantic information. With the geometric information, we propose a model to reduce domain shift on two levels: on the input level, we augment the traditional image translation network with the additional geometric information to translate synthetic images into realistic styles; on the output level, we build a task network which simultaneously performs depth estimation and semantic segmentation on the synthetic data. Meanwhile, we encourage the network to preserve the correlation between depth and semantics through adversarial training on the output space. We then validate our method on two pairs of synthetic-to-real datasets: Virtual KITTI to KITTI, and SYNTHIA to Cityscapes, where we achieve a significant performance gain compared to the non-adapted baseline and methods using only semantic labels. This demonstrates the usefulness of geometric information from synthetic data for cross-domain semantic segmentation.
Tasks Depth Estimation, Semantic Segmentation
Published 2018-12-12
URL http://arxiv.org/abs/1812.05040v2
PDF http://arxiv.org/pdf/1812.05040v2.pdf
PWC https://paperswithcode.com/paper/learning-semantic-segmentation-from-synthetic
Repo
Framework
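
On the output level, the method above trains one network that predicts depth and semantic labels jointly. A minimal PyTorch sketch of a shared encoder with two task heads follows; the layer sizes and the plain sum of losses are assumptions, and the paper's image translation and adversarial components are omitted.

```python
import torch
import torch.nn as nn

class SegDepthNet(nn.Module):
    """Shared convolutional encoder with a segmentation head and a depth head."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)   # per-pixel class scores
        self.depth_head = nn.Conv2d(64, 1, 1)           # per-pixel depth

    def forward(self, x):
        feat = self.encoder(x)
        return self.seg_head(feat), self.depth_head(feat)

net = SegDepthNet()
images = torch.randn(2, 3, 64, 64)                      # synthetic RGB batch
seg_labels = torch.randint(0, 19, (2, 64, 64))          # synthetic semantic labels
depth_gt = torch.rand(2, 1, 64, 64)                     # synthetic depth maps

seg_pred, depth_pred = net(images)
loss = nn.CrossEntropyLoss()(seg_pred, seg_labels) + nn.L1Loss()(depth_pred, depth_gt)
loss.backward()
print("joint loss:", loss.item())
```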

Deep learning with asymmetric connections and Hebbian updates

Title Deep learning with asymmetric connections and Hebbian updates
Authors Yali Amit
Abstract We show that deep networks can be trained using Hebbian updates yielding similar performance to ordinary back-propagation on challenging image datasets. To overcome the unrealistic symmetry in connections between layers, implicit in back-propagation, the feedback weights are separate from the feedforward weights. The feedback weights are also updated with a local rule, the same one as the feedforward weights: a weight is updated solely based on the product of the activities of the units it connects. With fixed feedback weights, as proposed in Lillicrap et al. (2016), performance degrades quickly as the depth of the network increases. If the feedforward and feedback weights are initialized with the same values, as proposed in Zipser and Rumelhart (1990), they remain the same throughout training, thus precisely implementing back-propagation. We show that even when the weights are initialized differently and at random, and the algorithm is no longer performing back-propagation, performance is comparable on challenging datasets. We also propose a cost function whose derivative can be represented as a local Hebbian update on the last layer. Convolutional layers are updated with tied weights across space, which is not biologically plausible. We show that similar performance is achieved with untied layers, also known as locally connected layers, which have the connectivity of convolutional layers but with weights that are untied and updated separately. In the linear case we show theoretically that the convergence of the error to zero is accelerated by the update of the feedback weights.
Tasks
Published 2018-11-19
URL http://arxiv.org/abs/1812.07965v2
PDF http://arxiv.org/pdf/1812.07965v2.pdf
PWC https://paperswithcode.com/paper/deep-learning-with-asymmetric-connections-and
Repo
Framework
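
A minimal numpy sketch of the update scheme described above, on a one-hidden-layer linear network: separate feedback weights carry the error signal and receive the same local product-of-activities update as the feedforward output weights. The toy regression task, learning rate, and linear units are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out, lr = 20, 30, 5, 0.005

W1 = rng.normal(0, 0.1, (n_hid, n_in))      # feedforward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hid))     # feedforward weights, layer 2
B = rng.normal(0, 0.1, (n_out, n_hid))      # feedback weights, initialized differently from W2

W_true = rng.normal(0, 0.5, (n_out, n_in))  # hypothetical target linear map to learn

for step in range(5000):
    x = rng.normal(size=(n_in, 1))
    target = W_true @ x

    h = W1 @ x                  # forward pass (linear units for simplicity)
    y = W2 @ h
    e = target - y              # output error

    # Local (Hebbian-style) updates: every weight change is a product of the
    # activities of the two units it connects.  The feedback matrix B gets the
    # same local update as W2 rather than being copied from it (no weight transport).
    delta_h = B.T @ e           # error signal propagated through the feedback weights
    W2 += lr * e @ h.T
    B += lr * e @ h.T
    W1 += lr * delta_h @ x.T

# The learned composite map W2 @ W1 should approximate W_true.
X = rng.normal(size=(n_in, 500))
rel_err = np.linalg.norm(W_true @ X - W2 @ W1 @ X) / np.linalg.norm(W_true @ X)
print("relative error of the learned map:", rel_err)
```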

CDF Transform-Shift: An effective way to deal with inhomogeneous density datasets

Title CDF Transform-Shift: An effective way to deal with inhomogeneous density datasets
Authors Ye Zhu, Kai Ming Ting, Mark Carman, Maia Angelova
Abstract Many distance-based algorithms exhibit bias towards dense clusters in inhomogeneous datasets (i.e., those which contain clusters in both dense and sparse regions of the space). For example, density-based clustering algorithms tend to join neighbouring dense clusters together into a single group in the presence of a sparse cluster; while distance-based anomaly detectors exhibit difficulty in detecting local anomalies which are close to a dense cluster in datasets also containing sparse clusters. In this paper, we propose the CDF Transform-Shift (CDF-TS) algorithm which is based on a multi-dimensional Cumulative Distribution Function (CDF) transformation. It effectively converts a dataset with clusters of inhomogeneous density to one with clusters of homogeneous density, i.e., the data distribution is converted to one in which all locally low/high-density locations become globally low/high-density locations. Thus, after performing the proposed Transform-Shift, a single global density threshold can be used to separate the data into clusters and their surrounding noise points. Our empirical evaluations show that CDF-TS overcomes the shortcomings of existing density-based clustering and distance-based anomaly detection algorithms and significantly improves their performance.
Tasks Anomaly Detection
Published 2018-10-05
URL http://arxiv.org/abs/1810.02897v1
PDF http://arxiv.org/pdf/1810.02897v1.pdf
PWC https://paperswithcode.com/paper/cdf-transform-shift-an-effective-way-to-deal
Repo
Framework
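
To illustrate the general idea above, the following sketch applies a one-dimensional empirical CDF transform that makes a tight cluster and a spread-out cluster comparably dense; this is a simplification with synthetic data, not the paper's multi-dimensional CDF-TS procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D clusters with very different densities: one tight, one spread out.
dense = rng.normal(0.0, 0.1, size=300)
sparse = rng.normal(10.0, 3.0, size=300)
x = np.concatenate([dense, sparse])

def ecdf_transform(values):
    """Map each value to its empirical CDF, i.e. its rank scaled to (0, 1]."""
    order = values.argsort()
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(values) + 1)
    return ranks / len(values)

u = ecdf_transform(x)

# After the transform each cluster occupies an interval whose width is proportional
# to its sample count, so the two densities become much more homogeneous and a
# single global density threshold becomes meaningful.
print("spread of dense cluster before/after:", dense.std(), u[:300].std())
print("spread of sparse cluster before/after:", sparse.std(), u[300:].std())
```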

Variational Collaborative Learning for User Probabilistic Representation

Title Variational Collaborative Learning for User Probabilistic Representation
Authors Kenan Cui, Xu Chen, Jiangchao Yao, Ya Zhang
Abstract Collaborative filtering (CF) has been successfully employed by many modern recommender systems. Conventional CF-based methods use the user-item interaction data as the sole information source to recommend items to users. However, CF-based methods are known to suffer from cold start and data sparsity problems. Hybrid models that utilize auxiliary information on top of interaction data have increasingly gained attention. A few “collaborative learning”-based models, which tightly bridge two heterogeneous learners through mutual regularization, have recently been proposed for hybrid recommendation. However, the “collaboration” in the existing methods is actually asynchronous due to the alternating optimization of the two learners. Leveraging recent advances in variational autoencoders (VAEs), we propose a model consisting of two streams of mutually linked VAEs, named the variational collaborative model (VCM). Unlike the mutual regularization used in previous works, where the two learners are optimized asynchronously, VCM enables a synchronous collaborative learning mechanism. Besides, the two-stream VAE setup allows VCM to fully leverage Bayesian probabilistic representations in collaborative learning. Extensive experiments on three real-life datasets show that VCM outperforms several state-of-the-art methods.
Tasks Recommendation Systems
Published 2018-09-22
URL http://arxiv.org/abs/1809.08400v1
PDF http://arxiv.org/pdf/1809.08400v1.pdf
PWC https://paperswithcode.com/paper/variational-collaborative-learning-for-user
Repo
Framework

Detecting DGA domains with recurrent neural networks and side information

Title Detecting DGA domains with recurrent neural networks and side information
Authors Ryan R. Curtin, Andrew B. Gardner, Slawomir Grzonkowski, Alexey Kleymenov, Alejandro Mosquera
Abstract Modern malware typically makes use of a domain generation algorithm (DGA) to avoid command and control domains or IPs being seized or sinkholed. This means that an infected system may attempt to access many domains in an attempt to contact the command and control server. Therefore, the automatic detection of DGA domains is an important task, both for the sake of blocking malicious domains and identifying compromised hosts. However, many DGAs use English wordlists to generate plausibly clean-looking domain names; this makes automatic detection difficult. In this work, we devise a notion of difficulty for DGA families called the smashword score; this measures how much a DGA family looks like English words. We find that this measure accurately reflects how much a DGA family’s domains look like they are made from natural English words. We then describe our new modeling approach, which is a combination of a novel recurrent neural network architecture with domain registration side information. Our experiments show the model is capable of effectively identifying domains generated by difficult DGA families. Our experiments also show that our model outperforms existing approaches, and is able to reliably detect difficult DGA families such as matsnu, suppobox, rovnix, and others. The model’s performance compared to the state of the art is best for DGA families that resemble English words. We believe that this model could either be used in a standalone DGA domain detector—such as an endpoint security application—or alternately the model could be used as a part of a larger malware detection system.
Tasks Malware Detection
Published 2018-10-04
URL https://arxiv.org/abs/1810.02023v2
PDF https://arxiv.org/pdf/1810.02023v2.pdf
PWC https://paperswithcode.com/paper/detecting-dga-domains-with-recurrent-neural
Repo
Framework
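
A minimal PyTorch sketch of the combination described above: a character-level LSTM encodes the domain name and its final state is concatenated with registration side information before classification. The vocabulary size, feature dimensions, and layer sizes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DGAClassifier(nn.Module):
    """Character-level LSTM over the domain name, concatenated with side features."""
    def __init__(self, vocab_size=40, embed_dim=32, hidden_dim=64, side_dim=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim + side_dim, 2)  # benign vs. DGA

    def forward(self, char_ids, side_features):
        emb = self.embed(char_ids)                 # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(emb)               # final hidden state of the LSTM
        combined = torch.cat([h_n[-1], side_features], dim=1)
        return self.classifier(combined)

model = DGAClassifier()
char_ids = torch.randint(1, 40, (4, 20))           # 4 encoded domain names, length 20
side = torch.randn(4, 8)                           # e.g. registration-derived features
logits = model(char_ids, side)
print(logits.shape)                                # torch.Size([4, 2])
```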

Patch-based Face Recognition using a Hierarchical Multi-label Matcher

Title Patch-based Face Recognition using a Hierarchical Multi-label Matcher
Authors Lingfeng Zhang, Pengfei Dou, Ioannis A Kakadiaris
Abstract This paper proposes a hierarchical multi-label matcher for patch-based face recognition. In signature generation, a face image is iteratively divided into multi-level patches. Two different types of patch divisions and signatures are introduced for 2D facial image and texture-lifted image, respectively. The matcher training consists of three steps. First, local classifiers are built to learn the local matching of each patch. Second, the hierarchical relationships defined between local patches are used to learn the global matching of each patch. Three ways are introduced to learn the global matching: majority voting, l1-regularized weighting, and decision rule. Last, the global matchings of different levels are combined as the final matching. Experimental results on different face recognition tasks demonstrate the effectiveness of the proposed matcher at the cost of gallery generalization. Compared with the UR2D system, the proposed matcher improves the Rank-1 accuracy significantly by 3% and 0.18% on the UHDB31 dataset and IJB-A dataset, respectively.
Tasks Face Recognition
Published 2018-04-03
URL http://arxiv.org/abs/1804.01417v1
PDF http://arxiv.org/pdf/1804.01417v1.pdf
PWC https://paperswithcode.com/paper/patch-based-face-recognition-using-a
Repo
Framework
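
One of the three global-matching rules listed above is majority voting over the local patch decisions; a minimal sketch, with made-up patch predictions:

```python
from collections import Counter

def majority_vote(patch_predictions):
    """Return the gallery identity predicted by the most local patch matchers."""
    counts = Counter(patch_predictions)
    identity, votes = counts.most_common(1)[0]
    return identity, votes / len(patch_predictions)

# Hypothetical identities predicted by 9 patch-level matchers for one probe face.
patch_predictions = ["id_17", "id_17", "id_03", "id_17", "id_17",
                     "id_42", "id_17", "id_03", "id_17"]
print(majority_vote(patch_predictions))   # ('id_17', 0.666...)
```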

Local Saddle Point Optimization: A Curvature Exploitation Approach

Title Local Saddle Point Optimization: A Curvature Exploitation Approach
Authors Leonard Adolphs, Hadi Daneshmand, Aurelien Lucchi, Thomas Hofmann
Abstract Gradient-based optimization methods are the most popular choice for finding local optima in classical minimization and saddle point problems. Here, we highlight a systemic issue of gradient dynamics that arises for saddle point problems, namely the presence of undesired stable stationary points that are not local optima. We propose a novel optimization approach that exploits curvature information in order to escape from these undesired stationary points. We prove that different optimization methods, including the gradient method and Adagrad, can escape non-optimal stationary points when equipped with curvature exploitation. We also provide empirical results on common saddle point problems which confirm the advantage of using curvature exploitation.
Tasks
Published 2018-05-15
URL http://arxiv.org/abs/1805.05751v3
PDF http://arxiv.org/pdf/1805.05751v3.pdf
PWC https://paperswithcode.com/paper/local-saddle-point-optimization-a-curvature
Repo
Framework
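
A minimal numpy sketch of the mechanism described above, on a toy quadratic saddle-point problem where plain gradient descent-ascent converges to a stationary point with negative curvature for the min player, and an extra curvature step moves the iterate away. The objective, step sizes, and the one-dimensional Hessian handling are illustrative assumptions, not the paper's full algorithm.

```python
import numpy as np

# Toy saddle-point problem min_x max_y f(x, y) with f = -0.5x^2 + 2xy - y^2.
# The origin is a stable stationary point of gradient descent-ascent (GDA),
# but d2f/dx2 = -1 < 0 there, so it is not a local minimum for the x-player.
def grad_x(x, y): return -x + 2 * y          # df/dx
def grad_y(x, y): return 2 * x - 2 * y       # df/dy
hess_xx = -1.0                               # d2f/dx2 (constant for this toy f)

eta = 0.1
x, y = 0.5, 0.5
for _ in range(300):                         # plain GDA: descend in x, ascend in y
    x, y = x - eta * grad_x(x, y), y + eta * grad_y(x, y)
print("plain GDA converges to:", (round(x, 4), round(y, 4)))

# Curvature exploitation: if the min-player's Hessian has a negative eigenvalue,
# take an extra step along that eigenvector, scaled by the (negative) eigenvalue,
# so the net move pushes the iterate off the undesired stationary point.
if hess_xx < 0:
    v = 1.0 if grad_x(x, y) >= 0 else -1.0   # eigenvector sign aligned with the gradient
    x = x + 0.5 * hess_xx * v                # step of size |lambda|/rho with rho = 2
print("after one extreme-curvature step:", (round(x, 4), round(y, 4)))
```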

Logarithmic Regret for Online Gradient Descent Beyond Strong Convexity

Title Logarithmic Regret for Online Gradient Descent Beyond Strong Convexity
Authors Dan Garber
Abstract Hoffman’s classical result gives a bound on the distance of a point from a convex and compact polytope in terms of the magnitude of violation of the constraints. Recently, several results showed that Hoffman’s bound can be used to derive strongly-convex-like rates for first-order methods for \textit{offline} convex optimization of curved, though not strongly convex, functions, over polyhedral sets. In this work, we use this classical result for the first time to obtain faster rates for \textit{online convex optimization} over polyhedral sets with curved convex, though not strongly convex, loss functions. We show that under several reasonable assumptions on the data, the standard \textit{Online Gradient Descent} algorithm guarantees logarithmic regret. To the best of our knowledge, the only previous algorithm to achieve logarithmic regret in the considered settings is the \textit{Online Newton Step} algorithm which requires quadratic (in the dimension) memory and at least quadratic runtime per iteration, which greatly limits its applicability to large-scale problems. In particular, our results hold for \textit{semi-adversarial} settings in which the data is a combination of an arbitrary (adversarial) sequence and a stochastic sequence, which might provide reasonable approximation for many real-world sequences, or under a natural assumption that the data is low-rank. We demonstrate via experiments that the regret of OGD is indeed comparable to that of ONS (and even far better) on curved though not strongly-convex losses.
Tasks
Published 2018-02-13
URL http://arxiv.org/abs/1802.04623v2
PDF http://arxiv.org/pdf/1802.04623v2.pdf
PWC https://paperswithcode.com/paper/logarithmic-regret-for-online-gradient
Repo
Framework
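
The algorithm analysed above is standard Online Gradient Descent: step along the current loss gradient, then project back onto the feasible set. A minimal numpy sketch with box constraints standing in for a polytope and simple quadratic losses; the losses, step-size schedule, and constraint set are illustrative assumptions, since the paper's contribution is the regret analysis rather than the algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

d, T = 5, 1000
lower, upper = -np.ones(d), np.ones(d)         # box constraints (a simple polytope)
x = np.zeros(d)

targets = rng.uniform(-0.8, 0.8, size=(T, d))  # each round reveals a quadratic loss
total_loss = 0.0

for t in range(1, T + 1):
    # Round t's loss: f_t(x) = 0.5 * ||x - targets[t-1]||^2 (a curved toy loss).
    grad = x - targets[t - 1]
    total_loss += 0.5 * np.sum((x - targets[t - 1]) ** 2)

    eta = 1.0 / t                              # decaying step size
    x = x - eta * grad
    x = np.clip(x, lower, upper)               # Euclidean projection onto the box

# Comparator: the best fixed point in hindsight (the mean lies inside the box here).
best_fixed = targets.mean(axis=0)
comparator_loss = 0.5 * np.sum((targets - best_fixed) ** 2)
print("OGD regret:", total_loss - comparator_loss)
```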

Turbo Learning for Captionbot and Drawingbot

Title Turbo Learning for Captionbot and Drawingbot
Authors Qiuyuan Huang, Pengchuan Zhang, Dapeng Wu, Lei Zhang
Abstract We study in this paper the problems of both image captioning and text-to-image generation, and present a novel turbo learning approach to jointly training an image-to-text generator (a.k.a. CaptionBot) and a text-to-image generator (a.k.a. DrawingBot). The key idea behind the joint training is that image-to-text generation and text-to-image generation, as dual problems, can form a closed loop that provides informative feedback to each other. Based on such feedback, we introduce a new loss term that compares the original input with the output produced by the closed loop. Added to the existing losses used in CaptionBot and DrawingBot, this extra loss makes the jointly trained CaptionBot and DrawingBot better than their separately trained counterparts. Furthermore, the turbo-learning approach enables semi-supervised learning, since the closed loop can provide pseudo-labels for unlabeled samples. Experimental results on the COCO dataset demonstrate that the proposed turbo learning significantly improves the performance of both CaptionBot and DrawingBot.
Tasks Image Captioning, Image Generation, Text Generation, Text-to-Image Generation
Published 2018-05-21
URL http://arxiv.org/abs/1805.08170v2
PDF http://arxiv.org/pdf/1805.08170v2.pdf
PWC https://paperswithcode.com/paper/turbo-learning-for-captionbot-and-drawingbot
Repo
Framework
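
A minimal PyTorch sketch of the closed-loop idea described above, with small linear maps and random vectors standing in for the caption and drawing generators and their data; the architectures, losses, and data are toy assumptions, not the paper's models.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

img_dim, txt_dim = 32, 16    # toy vector spaces standing in for images and captions

caption_bot = nn.Linear(img_dim, txt_dim)    # image -> caption representation
drawing_bot = nn.Linear(txt_dim, img_dim)    # caption representation -> image

opt = torch.optim.Adam(list(caption_bot.parameters()) + list(drawing_bot.parameters()), lr=1e-2)
mse = nn.MSELoss()

# Paired data for the usual supervised losses, plus unpaired images for the loop.
images = torch.randn(64, img_dim)
captions = torch.randn(64, txt_dim)
unlabeled_images = torch.randn(64, img_dim)

for step in range(200):
    opt.zero_grad()
    sup_caption = mse(caption_bot(images), captions)       # CaptionBot's usual loss
    sup_image = mse(drawing_bot(captions), images)         # DrawingBot's usual loss
    # Closed-loop ("turbo") loss: an image that is captioned and then re-drawn
    # should come back close to the original, which also gives a training signal
    # on unlabeled images.
    loop = mse(drawing_bot(caption_bot(unlabeled_images)), unlabeled_images)
    (sup_caption + sup_image + loop).backward()
    opt.step()

print("closed-loop reconstruction loss:", loop.item())
```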

AutoSpearman: Automatically Mitigating Correlated Metrics for Interpreting Defect Models

Title AutoSpearman: Automatically Mitigating Correlated Metrics for Interpreting Defect Models
Authors Jirayus Jiarpakdee, Chakkrit Tantithamthavorn, Christoph Treude
Abstract The interpretation of defect models heavily relies on the software metrics that are used to construct them. However, such software metrics are often correlated with one another. Prior work often uses feature selection techniques to remove correlated metrics in order to improve the performance of defect models. Yet, the interpretation of defect models may be misleading if feature selection techniques produce subsets of inconsistent and correlated metrics. In this paper, we investigate the consistency and correlation of the subsets of metrics produced by nine commonly-used feature selection techniques. Through a case study of 13 publicly-available defect datasets, we find that feature selection techniques produce inconsistent subsets of metrics and do not mitigate correlated metrics, suggesting that feature selection techniques should not be used and that correlation analyses must be applied when the goal is model interpretation. Since correlation analyses often involve manual selection of metrics by a domain expert, we introduce AutoSpearman, an automated metric selection approach based on correlation analyses. Our evaluation indicates that AutoSpearman yields the highest consistency of metric subsets among training samples and mitigates correlated metrics, while impacting model performance by only 1-2 percentage points. Thus, to automatically mitigate correlated metrics when interpreting defect models, we recommend that future studies use AutoSpearman in lieu of commonly-used feature selection techniques.
Tasks Feature Selection
Published 2018-06-26
URL http://arxiv.org/abs/1806.09791v1
PDF http://arxiv.org/pdf/1806.09791v1.pdf
PWC https://paperswithcode.com/paper/autospearman-automatically-mitigating
Repo
Framework
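
A minimal pandas sketch of the Spearman-based part of the idea above: iteratively drop one metric from the most strongly correlated pair until no pair exceeds a threshold. The threshold, tie-breaking rule, and synthetic metrics are assumptions, and the full AutoSpearman procedure includes further analysis not shown here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic software metrics; 'loc' and 'statements' are made strongly correlated.
n = 200
loc = rng.gamma(2.0, 100.0, n)
metrics = pd.DataFrame({
    "loc": loc,
    "statements": loc * 0.8 + rng.normal(0, 5, n),
    "complexity": rng.gamma(2.0, 3.0, n),
    "churn": rng.gamma(1.5, 10.0, n),
})

def drop_correlated(df, threshold=0.7):
    """Iteratively remove one metric from the most correlated pair (|Spearman rho| > threshold)."""
    kept = df.copy()
    while True:
        corr = kept.corr(method="spearman").abs()
        cvals = corr.to_numpy()
        np.fill_diagonal(cvals, 0.0)                    # ignore self-correlation
        if cvals.max() <= threshold:
            return kept
        i, j = np.unravel_index(cvals.argmax(), cvals.shape)
        # Simple tie-break: drop the second metric of the offending pair.
        kept = kept.drop(columns=[corr.columns[j]])

selected = drop_correlated(metrics)
print("selected metrics:", list(selected.columns))
```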

Anatomical Data Augmentation For CNN based Pixel-wise Classification

Title Anatomical Data Augmentation For CNN based Pixel-wise Classification
Authors Avi Ben-Cohen, Eyal Klang, Michal Marianne Amitai, Jacob Goldberger, Hayit Greenspan
Abstract In this work we propose a method for anatomical data augmentation that is based on using slices of computed tomography (CT) examinations that are adjacent to labeled slices as another resource of labeled data for training the network. The extended labeled data is used to train a U-net network for a pixel-wise classification into different hepatic lesions and normal liver tissues. Our dataset contains CT examinations from 140 patients with 333 CT images annotated by an expert radiologist. We tested our approach and compared it to the conventional training process. Results indicate superiority of our method. Using the anatomical data augmentation we achieved an improvement of 3% in the success rate, 5% in the classification accuracy, and 4% in Dice.
Tasks Computed Tomography (CT), Data Augmentation
Published 2018-01-07
URL http://arxiv.org/abs/1801.02261v1
PDF http://arxiv.org/pdf/1801.02261v1.pdf
PWC https://paperswithcode.com/paper/anatomical-data-augmentation-for-cnn-based
Repo
Framework
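
A minimal numpy sketch of the augmentation described above: annotations from a labeled CT slice are reused for its immediately adjacent slices to enlarge the training set. The toy volume, labeled slice indices, and +/-1 offset are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy CT volume (depth, height, width) and a few expert-annotated slice indices.
volume = rng.normal(size=(40, 64, 64)).astype(np.float32)
labeled_idx = [10, 20, 30]
masks = {i: (rng.random((64, 64)) > 0.9).astype(np.uint8) for i in labeled_idx}

def augment_with_adjacent_slices(volume, masks, offsets=(-1, 1)):
    """Pair each labeled slice's mask with its neighbouring slices as extra training data."""
    images, labels = [], []
    for idx, mask in masks.items():
        images.append(volume[idx]); labels.append(mask)        # original labeled slice
        for off in offsets:
            j = idx + off
            if 0 <= j < volume.shape[0]:
                images.append(volume[j]); labels.append(mask)  # reuse the expert mask
    return np.stack(images), np.stack(labels)

X, Y = augment_with_adjacent_slices(volume, masks)
print("training pairs before/after augmentation:", len(masks), X.shape[0])  # 3 -> 9
```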