April 2, 2020

3032 words 15 mins read

Paper Group ANR 201

Face Anti-Spoofing by Learning Polarization Cues in a Real-World Scenario

Title Face Anti-Spoofing by Learning Polarization Cues in a Real-World Scenario
Authors Yu Tian, Kunbo Zhang, Leyuan Wang, Zhenan Sun
Abstract Face anti-spoofing is the key to preventing security breaches in biometric recognition applications. Existing software-based and hardware-based face liveness detection methods are effective only in constrained environments or on designated datasets. Deep learning methods using RGB and infrared images demand a large amount of training data for new attacks. In this paper, we present a face anti-spoofing method for real-world scenarios that automatically learns the physical characteristics in polarization images of a real face compared to a deceptive attack. A computational framework is developed to extract and classify the unique face features using convolutional neural networks and an SVM together. Our real-time polarized face anti-spoofing (PAAS) detection method uses an on-chip integrated polarization imaging sensor with optimized processing algorithms. Extensive experiments demonstrate the advantages of the PAAS technique in countering diverse face spoofing attacks (print, replay, mask) in uncontrolled indoor and outdoor conditions by learning polarized face images of 33 people. A four-directional polarized face image dataset is released to inspire future applications within the biometric anti-spoofing field.
Tasks Face Anti-Spoofing
Published 2020-03-18
URL https://arxiv.org/abs/2003.08024v2
PDF https://arxiv.org/pdf/2003.08024v2.pdf
PWC https://paperswithcode.com/paper/face-anti-spoofing-by-learning-polarization
Repo
Framework
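
The abstract describes a pipeline that pairs a CNN feature extractor with an SVM classifier on four-directional polarization images. Below is a minimal sketch of such a pipeline, assuming PyTorch and scikit-learn; the network layout, feature dimension, and the stand-in data are assumptions, not the authors' released PAAS implementation.

```python
# Minimal sketch of a CNN-feature + SVM pipeline over 4-channel polarization
# images. Architecture and data are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class PolarizationFeatureNet(nn.Module):
    """Small CNN over 4-directional polarization images (4 input channels)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        h = self.backbone(x).flatten(1)
        return self.fc(h)

def extract_features(net, images):
    net.eval()
    with torch.no_grad():
        return net(images).cpu().numpy()

# Hypothetical usage: `train_imgs` is an (N, 4, H, W) tensor of polarization
# images and `train_labels` marks live (1) vs. spoof (0) faces.
net = PolarizationFeatureNet()
train_imgs = torch.randn(16, 4, 64, 64)            # stand-in data
train_labels = torch.randint(0, 2, (16,)).numpy()
clf = SVC(kernel="rbf").fit(extract_features(net, train_imgs), train_labels)
pred = clf.predict(extract_features(net, torch.randn(4, 4, 64, 64)))
```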

Fourier Transform Approach to Machine Learning III: Fourier Classification

Title Fourier Transform Approach to Machine Learning III: Fourier Classification
Authors Soheil Mehrabkhani
Abstract We propose a Fourier-based learning algorithm for highly nonlinear multiclass classification. The algorithm is based on a smoothing technique to calculate the probability distribution of all classes. To obtain the probability distribution, the density distribution of each class is smoothed separately by a low-pass filter. The advantage of the Fourier representation is that it captures the nonlinearities of the data distribution without defining any kernel function. Furthermore, contrary to support vector machines, it makes a probabilistic explanation of the classification possible. Moreover, it can also handle overlapping classes. Compared to logistic regression, it does not require feature engineering. In general, its computational performance is also very good for large datasets and, in contrast to other algorithms, the typical overfitting problem does not occur. The capability of the algorithm is demonstrated for multiclass classification with overlapping classes and highly nonlinear class distributions.
Tasks Feature Engineering
Published 2020-01-03
URL https://arxiv.org/abs/2001.06081v2
PDF https://arxiv.org/pdf/2001.06081v2.pdf
PWC https://paperswithcode.com/paper/fourier-transform-approach-to-machine-1
Repo
Framework
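
A minimal sketch of the core idea as described in the abstract: histogram each class, low-pass filter the density in the Fourier domain, and classify by the largest smoothed density. The grid size, cutoff frequency, and restriction to 2-D data are my assumptions, not the paper's exact procedure.

```python
# Illustrative low-pass Fourier smoothing of per-class densities.
import numpy as np

def smoothed_class_densities(X, y, bins=64, cutoff=8):
    """Histogram each class on a 2-D grid and low-pass filter it via the FFT."""
    classes = np.unique(y)
    edges = [np.linspace(X[:, d].min(), X[:, d].max(), bins + 1) for d in range(2)]
    densities = {}
    for c in classes:
        hist, _, _ = np.histogram2d(X[y == c, 0], X[y == c, 1], bins=edges)
        F = np.fft.fftshift(np.fft.fft2(hist))
        ky, kx = np.ogrid[-bins // 2:bins // 2, -bins // 2:bins // 2]
        F[kx**2 + ky**2 > cutoff**2] = 0          # ideal low-pass mask
        densities[c] = np.abs(np.fft.ifft2(np.fft.ifftshift(F)))
    return densities, edges

def predict(x, densities, edges):
    ij = [np.clip(np.searchsorted(e, v) - 1, 0, len(e) - 2) for e, v in zip(edges, x)]
    return max(densities, key=lambda c: densities[c][ij[0], ij[1]])

# Toy usage on two overlapping Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(1.5, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
dens, edges = smoothed_class_densities(X, y)
print(predict(np.array([0.0, 0.0]), dens, edges))
```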

PathVQA: 30000+ Questions for Medical Visual Question Answering

Title PathVQA: 30000+ Questions for Medical Visual Question Answering
Authors Xuehai He, Yichen Zhang, Luntian Mou, Eric Xing, Pengtao Xie
Abstract Is it possible to develop an “AI Pathologist” to pass the board-certified examination of the American Board of Pathology? To achieve this goal, the first step is to create a visual question answering (VQA) dataset where the AI agent is presented with a pathology image together with a question and is asked to give the correct answer. Our work makes the first attempt to build such a dataset. Unlike general-domain VQA datasets, where images are widely accessible and many crowdsourcing workers are available and capable of generating question-answer pairs, developing a medical VQA dataset is much more challenging. First, due to privacy concerns, pathology images are usually not publicly available. Second, only well-trained pathologists can understand pathology images, but they barely have time to help create datasets for AI research. To address these challenges, we resort to pathology textbooks and online digital libraries. We develop a semi-automated pipeline to extract pathology images and captions from textbooks and generate question-answer pairs from captions using natural language processing. We collect 32,799 open-ended questions from 4,998 pathology images, where each question is manually checked to ensure correctness. To the best of our knowledge, this is the first dataset for pathology VQA. Our dataset will be released publicly to promote research in medical VQA.
Tasks Question Answering, Visual Question Answering
Published 2020-03-07
URL https://arxiv.org/abs/2003.10286v1
PDF https://arxiv.org/pdf/2003.10286v1.pdf
PWC https://paperswithcode.com/paper/pathvqa-30000-questions-for-medical-visual
Repo
Framework
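
The pipeline generates question-answer pairs from figure captions. The toy template below only illustrates that step; the caption pattern, example caption, and generated question forms are hypothetical and far simpler than the paper's semi-automated NLP pipeline.

```python
# Toy template-based caption-to-QA generation; purely illustrative.
import re

def caption_to_qa(caption):
    qa_pairs = []
    # Pattern such as "<figure> showing <finding>" -> open-ended question.
    m = re.match(r"(?i)(.+?)\s+showing\s+(.+)", caption.rstrip("."))
    if m:
        subject, finding = m.group(1), m.group(2)
        qa_pairs.append((f"What does the {subject.lower()} show?", finding))
        # A closed (yes/no) variant built from the same caption.
        qa_pairs.append((f"Does the {subject.lower()} show {finding}?", "yes"))
    return qa_pairs

print(caption_to_qa("Photomicrograph showing chronic inflammation of the portal tract."))
```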

QActor: On-line Active Learning for Noisy Labeled Stream Data

Title QActor: On-line Active Learning for Noisy Labeled Stream Data
Authors Taraneh Younesian, Zilong Zhao, Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen
Abstract Noisy labeled data is more the norm than a rarity for self-generated content that is continuously published on the web and social media. Due to privacy concerns and governmental regulations, such a data stream can only be stored and used for learning purposes for a limited duration. To overcome the noise in this on-line scenario, we propose QActor, which combines the selection of supposedly clean samples via quality models with actively querying an oracle for the most informative true labels. While the former can suffer from the low data volumes of on-line scenarios, the latter is constrained by the availability and costs of human experts. QActor swiftly combines the merits of quality models for data filtering and oracle queries for cleaning the most informative data. The objective of QActor is to leverage the stringent oracle budget to robustly maximize the learning accuracy. QActor explores various strategies combining different query allocations and uncertainty measures. A central feature of QActor is to dynamically adjust the query limit according to the learning loss for each data batch. We extensively evaluate on different image datasets fed into classifiers that can be standard machine learning (ML) models or deep neural networks (DNNs), with noisy label ratios ranging between 30% and 80%. Our results show that QActor can nearly match the optimal accuracy achieved using only clean data at the cost of at most an additional 6% of ground-truth data from the oracle.
Tasks Active Learning
Published 2020-01-28
URL https://arxiv.org/abs/2001.10399v1
PDF https://arxiv.org/pdf/2001.10399v1.pdf
PWC https://paperswithcode.com/paper/qactor-on-line-active-learning-for-noisy
Repo
Framework
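
A schematic sketch of the quality-filter plus oracle-query idea described in the abstract. The uncertainty measure, quality rule, and budget update below are my assumptions, not the exact QActor strategies; `model` is assumed to be an sklearn-style classifier (e.g. SGDClassifier with log loss) whose `partial_fit` has already been initialised with the class list, and `oracle` is a callable returning true labels for the given indices.

```python
# Per-batch step: keep "clean" samples via a quality rule, query the oracle
# for the most uncertain rest, then adapt the next query budget from the loss.
import numpy as np

def entropy(probs):
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def process_batch(model, X, noisy_y, oracle, query_limit):
    probs = model.predict_proba(X)
    pred = probs.argmax(axis=1)

    # Quality model (assumed): treat samples whose noisy label matches a
    # confident prediction as clean.
    confident = probs.max(axis=1) > 0.8
    clean_idx = np.where(confident & (pred == noisy_y))[0]

    # Actively query the oracle for the most uncertain remaining samples.
    rest = np.setdiff1d(np.arange(len(X)), clean_idx)
    queried = rest[np.argsort(-entropy(probs[rest]))[:query_limit]]
    true_y = oracle(queried)                     # assumed oracle interface

    X_train = np.vstack([X[clean_idx], X[queried]])
    y_train = np.concatenate([noisy_y[clean_idx], true_y])
    model.partial_fit(X_train, y_train)

    # Dynamically adjust the next budget from the batch loss (assumption).
    batch_loss = -np.log(probs[np.arange(len(X)), noisy_y] + 1e-12).mean()
    return model, max(1, int(query_limit * min(2.0, batch_loss)))
```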

Fairness Measures for Regression via Probabilistic Classification

Title Fairness Measures for Regression via Probabilistic Classification
Authors Daniel Steinberg, Alistair Reid, Simon O’Callaghan
Abstract Algorithmic fairness involves expressing notions such as equity, or reasonable treatment, as quantifiable measures that a machine learning algorithm can optimise. Most work in the literature to date has focused on classification problems where the prediction is categorical, such as accepting or rejecting a loan application. This is in part because classification fairness measures are easily computed by comparing the rates of outcomes, leading to behaviours such as ensuring that eligible men and eligible women are selected at the same rate. But such measures are computationally difficult to generalise to the continuous regression setting for problems such as pricing, or allocating payments. The difficulty arises from estimating conditional densities (such as the probability density that a system will over-charge by a certain amount). For the regression setting, we introduce tractable approximations of the independence, separation and sufficiency criteria by observing that they factorise as ratios of different conditional probabilities of the protected attributes. We introduce and train machine learning classifiers, distinct from the predictor, as a mechanism to estimate these probabilities from the data. This naturally leads to model agnostic, tractable approximations of the criteria, which we explore experimentally.
Tasks
Published 2020-01-16
URL https://arxiv.org/abs/2001.06089v2
PDF https://arxiv.org/pdf/2001.06089v2.pdf
PWC https://paperswithcode.com/paper/fairness-measures-for-regression-via
Repo
Framework
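
A sketch of the general idea for one criterion: estimate how far a regression output is from independence by training a probabilistic classifier to predict the protected attribute from the prediction, then comparing conditional and marginal probabilities. The specific classifier and the summary statistic below are illustrative choices, not the paper's measures.

```python
# Approximate the independence criterion via a probabilistic classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def independence_score(y_pred, protected):
    """Average log-ratio log P(A | f(X)) / P(A); ~0 means near-independence."""
    clf = LogisticRegression().fit(y_pred.reshape(-1, 1), protected)
    cond = clf.predict_proba(y_pred.reshape(-1, 1))        # P(A = a | f(X))
    marg = np.bincount(protected) / len(protected)          # P(A = a)
    n = len(protected)
    return np.mean(np.log(cond[np.arange(n), protected]) - np.log(marg[protected]))

# Toy usage: predictions that leak the protected attribute score higher.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 1000)
fair_pred = rng.normal(size=1000)
unfair_pred = fair_pred + 2.0 * a
print(independence_score(fair_pred, a), independence_score(unfair_pred, a))
```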

Efficient improper learning for online logistic regression

Title Efficient improper learning for online logistic regression
Authors Rémi Jézéquel, Pierre Gaillard, Alessandro Rudi
Abstract We consider the setting of online logistic regression and study the regret with respect to the ℓ2-ball of radius B. It is known (see [Hazan et al., 2014]) that any proper algorithm which has logarithmic regret in the number of samples (denoted n) necessarily suffers an exponential multiplicative constant in B. In this work, we design an efficient improper algorithm that avoids this exponential constant while preserving logarithmic regret. Indeed, [Foster et al., 2018] showed that the lower bound does not apply to improper algorithms and proposed a strategy based on exponential weights with prohibitive computational complexity. Our new algorithm, based on regularized empirical risk minimization with surrogate losses, satisfies a regret scaling as O(B log(Bn)) with a per-round time complexity of order O(d^2).
Tasks
Published 2020-03-18
URL https://arxiv.org/abs/2003.08109v2
PDF https://arxiv.org/pdf/2003.08109v2.pdf
PWC https://paperswithcode.com/paper/efficient-improper-learning-for-online
Repo
Framework
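
This is not the paper's improper algorithm. As context, the sketch below shows a classic *proper* online Newton-style update for logistic loss with O(d^2) work per round, the kind of baseline the paper's improper method improves on; the step size and the plain ℓ2 clipping used in place of the exact generalized projection are assumptions.

```python
# Online Newton-style update for logistic loss; O(d^2) per round via a
# Sherman-Morrison rank-one update of the inverse matrix.
import numpy as np

class OnlineNewtonLogistic:
    def __init__(self, d, B=1.0, gamma=0.5, eps=1.0):
        self.w = np.zeros(d)
        self.A_inv = np.eye(d) / eps     # inverse of the accumulated matrix
        self.B, self.gamma = B, gamma

    def predict(self, x):
        return 1.0 / (1.0 + np.exp(-self.w @ x))

    def update(self, x, y):              # y in {-1, +1}
        g = -y * x / (1.0 + np.exp(y * (self.w @ x)))      # logistic gradient
        Ag = self.A_inv @ g
        self.A_inv -= np.outer(Ag, Ag) / (1.0 + g @ Ag)     # rank-one update
        self.w -= (1.0 / self.gamma) * (self.A_inv @ g)
        norm = np.linalg.norm(self.w)
        if norm > self.B:                # crude projection back to the B-ball
            self.w *= self.B / norm

# Toy stream.
rng = np.random.default_rng(0)
learner = OnlineNewtonLogistic(d=5, B=2.0)
w_star = rng.normal(size=5)
for _ in range(1000):
    x = rng.normal(size=5)
    y = 1 if rng.random() < 1 / (1 + np.exp(-w_star @ x)) else -1
    learner.update(x, y)
```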

Tigrinya Neural Machine Translation with Transfer Learning for Humanitarian Response

Title Tigrinya Neural Machine Translation with Transfer Learning for Humanitarian Response
Authors Alp Öktem, Mirko Plitt, Grace Tang
Abstract We report our experiments in building a domain-specific Tigrinya-to-English neural machine translation system. We use transfer learning from other Ge’ez script languages and report an improvement of 1.3 BLEU points over a classic neural baseline. We publish our development pipeline as an open-source library and also provide a demonstration application.
Tasks Machine Translation, Transfer Learning
Published 2020-03-09
URL https://arxiv.org/abs/2003.11523v1
PDF https://arxiv.org/pdf/2003.11523v1.pdf
PWC https://paperswithcode.com/paper/tigrinya-neural-machine-translation-with
Repo
Framework

Proposal Learning for Semi-Supervised Object Detection

Title Proposal Learning for Semi-Supervised Object Detection
Authors Peng Tang, Chetan Ramaiah, Ran Xu, Caiming Xiong
Abstract In this paper, we focus on semi-supervised object detection to boost accuracies of proposal-based object detectors (a.k.a. two-stage object detectors) by training on both labeled and unlabeled data. However, it is non-trivial to train object detectors on unlabeled data due to the unavailability of ground truth labels. To address this problem, we present a proposal learning approach to learn proposal features and predictions from both labeled and unlabeled data. The approach consists of a self-supervised proposal learning module and a consistency-based proposal learning module. In the self-supervised proposal learning module, we present a proposal location loss and a contrastive loss to learn context-aware and noise-robust proposal features respectively. In the consistency-based proposal learning module, we apply consistency losses to both bounding box classification and regression predictions of proposals to learn noise-robust proposal features and predictions. Experiments are conducted on the COCO dataset with all available labeled and unlabeled data. Results show that our approach consistently improves the accuracies of fully-supervised baselines. In particular, after combining with data distillation, our approach improves AP by about 2.0% and 0.9% on average compared with fully-supervised baselines and data distillation baselines respectively.
Tasks Object Detection
Published 2020-01-15
URL https://arxiv.org/abs/2001.05086v1
PDF https://arxiv.org/pdf/2001.05086v1.pdf
PWC https://paperswithcode.com/paper/proposal-learning-for-semi-supervised-object
Repo
Framework
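
A rough sketch of consistency-style losses on proposal predictions, in the spirit of the consistency-based module described in the abstract; the exact loss forms, the weighting, and how perturbed proposals are produced are assumptions, not the paper's definitions.

```python
# Consistency loss: KL on class distributions + smooth-L1 on box regression.
import torch
import torch.nn.functional as F

def consistency_loss(cls_logits, cls_logits_noisy, box_deltas, box_deltas_noisy,
                     box_weight=1.0):
    p = F.log_softmax(cls_logits_noisy, dim=-1)     # predictions on noisy view
    q = F.softmax(cls_logits.detach(), dim=-1)      # target: clean-view probs
    cls_cons = F.kl_div(p, q, reduction="batchmean")
    box_cons = F.smooth_l1_loss(box_deltas_noisy, box_deltas.detach())
    return cls_cons + box_weight * box_cons

# Hypothetical usage with 8 proposals, 21 classes, class-agnostic boxes.
cls_a = torch.randn(8, 21)
cls_b = torch.randn(8, 21, requires_grad=True)
box_a = torch.randn(8, 4)
box_b = torch.randn(8, 4, requires_grad=True)
loss = consistency_loss(cls_a, cls_b, box_a, box_b)
loss.backward()
```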

Learning to Structure Long-term Dependence for Sequential Recommendation

Title Learning to Structure Long-term Dependence for Sequential Recommendation
Authors Renqin Cai, Qinglei Wang, Chong Wang, Xiaobing Liu
Abstract Sequential recommendation recommends items based on sequences of users’ historical actions. The key challenge is to effectively model the influence of distant actions on the action to be predicted, i.e., to recognize the long-term dependence structure; this remains an underexplored problem. To better model the long-term dependence structure, we propose a GatedLongRec solution in this work. To account for the long-term dependence, GatedLongRec extracts distant actions from the top-$k$ categories related to the user’s ongoing intent with a top-$k$ gating network, and utilizes a long-term encoder to encode the transition patterns among these identified actions. As user intent is not directly observable, we take advantage of available side-information about the actions, i.e., the category of their associated items, to infer the intents. End-to-end training is performed to estimate the intent representation and predict the next action for sequential recommendation. Extensive experiments on two large datasets show that the proposed solution can recognize the structure of long-term dependence, thus greatly improving the sequential recommendation.
Tasks
Published 2020-01-30
URL https://arxiv.org/abs/2001.11369v1
PDF https://arxiv.org/pdf/2001.11369v1.pdf
PWC https://paperswithcode.com/paper/learning-to-structure-long-term-dependence
Repo
Framework
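
A schematic sketch of a top-$k$ category gating step, loosely following the description above; the embedding sizes, the dot-product scoring function, and the use of a GRU to summarise the ongoing intent are assumptions rather than the GatedLongRec architecture.

```python
# Select distant actions whose category is among the top-k categories
# related to the user's ongoing intent.
import torch
import torch.nn as nn

class TopKCategoryGate(nn.Module):
    def __init__(self, n_categories, dim=32, k=2):
        super().__init__()
        self.cat_emb = nn.Embedding(n_categories, dim)
        self.intent_enc = nn.GRU(dim, dim, batch_first=True)
        self.k = k

    def forward(self, recent_item_emb, distant_categories):
        # Summarise the user's ongoing intent from recent actions.
        _, intent = self.intent_enc(recent_item_emb)        # (1, B, dim)
        intent = intent.squeeze(0)                          # (B, dim)
        # Score every category against the intent and keep the top-k.
        scores = intent @ self.cat_emb.weight.T             # (B, n_categories)
        topk = scores.topk(self.k, dim=-1).indices          # (B, k)
        # Mask distant actions whose category is among the selected ones.
        mask = (distant_categories.unsqueeze(-1) == topk.unsqueeze(1)).any(-1)
        return topk, mask                                   # mask: (B, n_distant)

# Toy usage: batch of 2 users, 5 recent actions, 7 distant actions.
gate = TopKCategoryGate(n_categories=10)
recent = torch.randn(2, 5, 32)
distant_cats = torch.randint(0, 10, (2, 7))
topk, mask = gate(recent, distant_cats)
```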

Optimization of Structural Similarity in Mathematical Imaging

Title Optimization of Structural Similarity in Mathematical Imaging
Authors D. Otero, D. La Torre, O. Michailovich, E. R. Vrscay
Abstract It is now generally accepted that Euclidean-based metrics may not always adequately represent the subjective judgement of a human observer. As a result, many image processing methodologies have recently been extended to take advantage of alternative visual quality measures, the most prominent of which is the Structural Similarity Index Measure (SSIM). The superiority of the latter over Euclidean-based metrics has been demonstrated in several studies. However, being focused on specific applications, the findings of such studies often lack generality which, if otherwise acknowledged, could have provided useful guidance for further development of SSIM-based image processing algorithms. Accordingly, instead of focusing on a particular image processing task, in this paper we introduce a general framework that encompasses a wide range of imaging applications in which the SSIM can be employed as a fidelity measure. Subsequently, we show how the framework can be used to cast some standard as well as original imaging tasks into optimization problems, followed by a discussion of a number of novel numerical strategies for their solution.
Tasks
Published 2020-02-07
URL https://arxiv.org/abs/2002.02657v1
PDF https://arxiv.org/pdf/2002.02657v1.pdf
PWC https://paperswithcode.com/paper/optimization-of-structural-similarity-in
Repo
Framework
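
To make the fidelity measure concrete, here is a compact SSIM implementation with uniform local windows. The window size and constants follow the common defaults (K1 = 0.01, K2 = 0.03); this is an illustration of the measure itself, not the paper's optimization framework.

```python
# Mean SSIM over local windows.
import numpy as np
from scipy.ndimage import uniform_filter

def ssim(x, y, data_range=1.0, win=7):
    C1, C2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = uniform_filter(x, win), uniform_filter(y, win)
    var_x = uniform_filter(x * x, win) - mu_x ** 2
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov = uniform_filter(x * y, win) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / \
               ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
    return ssim_map.mean()

# Toy check: an image is maximally similar to itself, less so to a noisy copy.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.1 * rng.normal(size=img.shape), 0, 1)
print(ssim(img, img), ssim(img, noisy))
```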

Convex Optimization on Functionals of Probability Densities

Title Convex Optimization on Functionals of Probability Densities
Authors Tomohiro Nishiyama
Abstract In information theory, some optimization problems result in convex optimization problems on strictly convex functionals of probability densities. In this note, we study these problems, derive conditions on minimizers, and show the uniqueness of the minimizer when one exists.
Tasks
Published 2020-02-16
URL https://arxiv.org/abs/2002.06488v2
PDF https://arxiv.org/pdf/2002.06488v2.pdf
PWC https://paperswithcode.com/paper/convex-optimization-on-functionals-of
Repo
Framework
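
As a standard illustration of the kind of problem the note studies (not an example taken from it): minimizing the strictly convex negative-entropy functional under linear constraints has a unique minimizer of exponential form.

```latex
% Negative entropy is strictly convex, so the constrained minimizer is unique.
\begin{aligned}
&\min_{p}\; F[p] = \int p(x)\log p(x)\,dx
\quad\text{s.t.}\quad \int p(x)\,dx = 1,\qquad \int f(x)\,p(x)\,dx = \mu,\\[4pt]
&\text{stationarity of the Lagrangian: }\;
\log p(x) + 1 + \lambda_0 + \lambda_1 f(x) = 0
\;\;\Longrightarrow\;\;
p^{*}(x) = \frac{e^{-\lambda_1 f(x)}}{Z(\lambda_1)}.
\end{aligned}
```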

CBAG: Conditional Biomedical Abstract Generation

Title CBAG: Conditional Biomedical Abstract Generation
Authors Justin Sybrandt, Ilya Safro
Abstract Biomedical research papers use significantly different language and jargon when compared to typical English text, which reduces the utility of pre-trained NLP models in this domain. Meanwhile, Medline, a database of biomedical abstracts, introduces nearly a million new documents per year. Applications that could benefit from understanding this wealth of publicly available information, such as scientific writing assistants, chatbots, or descriptive hypothesis generation systems, require new domain-centered approaches. A conditional language model, one that learns the probability of words given some a priori criteria, is a fundamental building block in many such applications. We propose a transformer-based conditional language model with a shallow encoder “condition” stack, and a deep “language model” stack of multi-headed attention blocks. The condition stack encodes metadata used to alter the output probability distribution of the language model stack. We sample this distribution in order to generate biomedical abstracts given only a proposed title, an intended publication year, and a set of keywords. Using typical natural language generation metrics, we demonstrate that this proposed approach is more capable of producing non-trivial relevant entities within the abstract body than the 1.5B parameter GPT-2 language model.
Tasks Language Modelling, Text Generation
Published 2020-02-13
URL https://arxiv.org/abs/2002.05637v1
PDF https://arxiv.org/pdf/2002.05637v1.pdf
PWC https://paperswithcode.com/paper/cbag-conditional-biomedical-abstract
Repo
Framework
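
A bare-bones sketch of the "shallow condition encoder + deep language-model decoder" layout described above, using stock PyTorch transformer modules. The layer counts, dimensions, and the metadata vocabulary are assumptions, not the CBAG configuration.

```python
# Shallow condition stack conditions a deep decoder via cross-attention.
import torch
import torch.nn as nn

class ConditionalAbstractLM(nn.Module):
    def __init__(self, vocab=30000, meta_vocab=500, d=256,
                 n_cond_layers=2, n_lm_layers=8, n_heads=8):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, d)
        self.meta_emb = nn.Embedding(meta_vocab, d)
        self.condition_stack = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, n_heads, batch_first=True), n_cond_layers)
        self.lm_stack = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d, n_heads, batch_first=True), n_lm_layers)
        self.out = nn.Linear(d, vocab)

    def forward(self, metadata_ids, token_ids):
        cond = self.condition_stack(self.meta_emb(metadata_ids))
        T = token_ids.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.lm_stack(self.tok_emb(token_ids), cond, tgt_mask=causal)
        return self.out(h)                      # next-token logits

# Toy usage: metadata = encoded title/year/keyword ids; text = abstract tokens.
model = ConditionalAbstractLM()
logits = model(torch.randint(0, 500, (2, 12)), torch.randint(0, 30000, (2, 40)))
```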

Self-Attentive Associative Memory

Title Self-Attentive Associative Memory
Authors Hung Le, Truyen Tran, Svetha Venkatesh
Abstract Heretofore, neural networks with external memory have been restricted to a single memory with lossy representations of memory interactions. A rich representation of relationships between memory pieces calls for a high-order and segregated relational memory. In this paper, we propose to separate the storage of individual experiences (item memory) and their occurring relationships (relational memory). The idea is implemented through a novel Self-attentive Associative Memory (SAM) operator. Founded upon the outer product, SAM forms a set of associative memories that represent the hypothetical high-order relationships between arbitrary pairs of memory elements, through which a relational memory is constructed from an item memory. The two memories are wired into a single sequential model capable of both memorization and relational reasoning. We achieve competitive results with our proposed two-memory model on a diversity of machine learning tasks, from challenging synthetic problems to practical testbeds such as geometry, graph, reinforcement learning, and question answering.
Tasks Question Answering, Relational Reasoning
Published 2020-02-10
URL https://arxiv.org/abs/2002.03519v2
PDF https://arxiv.org/pdf/2002.03519v2.pdf
PWC https://paperswithcode.com/paper/self-assttentive-associative-memory
Repo
Framework
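
A toy illustration of building a relational memory from an item memory through attention-weighted outer products, in the spirit of the SAM operator; the exact parameterisation in the paper differs, so treat the projections and tensor shapes below as assumptions only.

```python
# Third-order relational memory from outer products of attended reads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OuterProductRelationalMemory(nn.Module):
    def __init__(self, d_item, d_rel=16):
        super().__init__()
        self.q = nn.Linear(d_item, d_rel)
        self.k = nn.Linear(d_item, d_rel)
        self.v = nn.Linear(d_item, d_rel)

    def forward(self, item_memory):              # (n_slots, d_item)
        q, k, v = self.q(item_memory), self.k(item_memory), self.v(item_memory)
        attn = F.softmax(q @ k.T / k.size(-1) ** 0.5, dim=-1)   # (n, n)
        read = attn @ v                                         # (n, d_rel)
        # Outer product of each slot's attended read with its value vector
        # gives a third-order memory holding pairwise associations.
        rel = torch.einsum('ni,nj->nij', read, v)               # (n, d_rel, d_rel)
        return rel

mem = OuterProductRelationalMemory(d_item=32)
relational = mem(torch.randn(6, 32))    # 6 item-memory slots -> (6, 16, 16)
```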

Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective

Title Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective
Authors Luis Lamb, Artur Garcez, Marco Gori, Marcelo Prates, Pedro Avelar, Moshe Vardi
Abstract Neural-symbolic computing has now become the subject of interest of both academic and industry research laboratories. Graph Neural Networks (GNN) have been widely used in relational and symbolic domains, with widespread application of GNNs in combinatorial optimization, constraint satisfaction, relational reasoning and other scientific domains. The need for improved explainability, interpretability and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state-of-the-art on the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing.
Tasks Combinatorial Optimization, Relational Reasoning
Published 2020-02-29
URL https://arxiv.org/abs/2003.00330v4
PDF https://arxiv.org/pdf/2003.00330v4.pdf
PWC https://paperswithcode.com/paper/graph-neural-networks-meet-neural-symbolic
Repo
Framework

Symbolic Learning and Reasoning with Noisy Data for Probabilistic Anchoring

Title Symbolic Learning and Reasoning with Noisy Data for Probabilistic Anchoring
Authors Pedro Zuidberg Dos Martires, Nitesh Kumar, Andreas Persson, Amy Loutfi, Luc De Raedt
Abstract Robotic agents should be able to learn from sub-symbolic sensor data, and at the same time, be able to reason about objects and communicate with humans on a symbolic level. This raises the question of how to overcome the gap between symbolic and sub-symbolic artificial intelligence. We propose a semantic world modeling approach based on bottom-up object anchoring using an object-centered representation of the world. Perceptual anchoring processes continuous perceptual sensor data and maintains a correspondence to a symbolic representation. We extend the definitions of anchoring to handle multi-modal probability distributions and we couple the resulting symbol anchoring system to a probabilistic logic reasoner for performing inference. Furthermore, we use statistical relational learning to enable the anchoring framework to learn symbolic knowledge in the form of a set of probabilistic logic rules of the world from noisy and sub-symbolic sensor input. The resulting framework, which combines perceptual anchoring and statistical relational learning, is able to maintain a semantic world model of all the objects that have been perceived over time, while still exploiting the expressiveness of logical rules to reason about the state of objects which are not directly observed through sensory input data. To validate our approach we demonstrate, on the one hand, the ability of our system to perform probabilistic reasoning over multi-modal probability distributions, and on the other hand, the learning of probabilistic logical rules from anchored objects produced by perceptual observations. The learned logical rules are, subsequently, used to assess our proposed probabilistic anchoring procedure. We demonstrate our system in a setting involving object interactions where object occlusions arise and where probabilistic inference is needed to correctly anchor objects.
Tasks Relational Reasoning
Published 2020-02-24
URL https://arxiv.org/abs/2002.10373v1
PDF https://arxiv.org/pdf/2002.10373v1.pdf
PWC https://paperswithcode.com/paper/symbolic-learning-and-reasoning-with-noisy
Repo
Framework