October 19, 2019

3326 words 16 mins read

Paper Group ANR 105

Discrete symbolic optimization and Boltzmann sampling by continuous neural dynamics: Gradient Symbolic Computation. Non-submodular Function Maximization subject to a Matroid Constraint, with Applications. Face Presentation Attack Detection in Learned Color-liked Space. A Hilbert Space of Stationary Ergodic Processes. Parallel Tracking and Verifying …

Discrete symbolic optimization and Boltzmann sampling by continuous neural dynamics: Gradient Symbolic Computation

Title Discrete symbolic optimization and Boltzmann sampling by continuous neural dynamics: Gradient Symbolic Computation
Authors Paul Tupper, Paul Smolensky, Pyeong Whan Cho
Abstract Gradient Symbolic Computation (GSC) is proposed as a means of solving discrete global optimization problems using a neurally plausible continuous stochastic dynamical system. Gradient symbolic dynamics involves two free parameters that must be adjusted as a function of time to obtain the global maximizer at the end of the computation. We provide a summary of what is known about the GSC dynamics for special settings of the parameters, and also establish that there is a schedule for the two parameters under which convergence to the correct answer occurs with high probability. These results put the empirical results already obtained for GSC on a sound theoretical footing.
Tasks
Published 2018-01-04
URL http://arxiv.org/abs/1801.03562v1
PDF http://arxiv.org/pdf/1801.03562v1.pdf
PWC https://paperswithcode.com/paper/discrete-symbolic-optimization-and-boltzmann
Repo
Framework
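
The dynamics described above anneal two control parameters over time, much like simulated annealing. As a loose illustration only (not the paper's actual GSC equations), here is a minimal Python sketch that performs noisy gradient ascent on a small quadratic "harmony" function while cooling a temperature T and growing a discreteness pressure q; the harmony function, the quantization term, and both schedules are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-variable problem: maximize a quadratic harmony over [0, 1]^2,
# hoping to land on a discrete corner. W and b are assumed toy values.
W = np.array([[0.0, 1.0], [1.0, 0.0]])  # pairwise compatibilities (assumed)
b = np.array([0.2, -0.1])               # per-unit biases (assumed)

def grad_harmony(x):
    return W @ x + b

def grad_quantization(x):
    # Gradient of Q(x) = -sum x^2 (1 - x)^2, which peaks at discrete 0/1 states.
    return -2.0 * x * (1.0 - x) * (1.0 - 2.0 * x)

x = rng.uniform(0.2, 0.8, size=2)
eta = 1e-3
for t in range(20000):
    T = 1.0 / np.log(t + 2.0)    # assumed cooling schedule for the temperature
    q = 0.01 * t * eta           # assumed growing discreteness pressure
    drift = grad_harmony(x) + q * grad_quantization(x)
    x += eta * drift + np.sqrt(2.0 * eta * T) * rng.standard_normal(2)
    x = np.clip(x, 0.0, 1.0)

print("final (nearly discrete) state:", np.round(x, 3))
```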

Non-submodular Function Maximization subject to a Matroid Constraint, with Applications

Title Non-submodular Function Maximization subject to a Matroid Constraint, with Applications
Authors Khashayar Gatmiry, Manuel Gomez-Rodriguez
Abstract The standard greedy algorithm has recently been shown to enjoy approximation guarantees for constrained non-submodular nondecreasing set function maximization. While these recent results help to better characterize the empirical success of the greedy algorithm, they are only applicable to simple cardinality constraints. In this paper, we study the problem of maximizing a non-submodular nondecreasing set function subject to a general matroid constraint. We first show that the standard greedy algorithm offers an approximation factor of $\frac{0.4\gamma^{2}}{\sqrt{\gamma r} + 1}$, where $\gamma$ is the submodularity ratio of the function and $r$ is the rank of the matroid. Then, we show that the same greedy algorithm offers a constant approximation factor of $(1 + 1/(1-\alpha))^{-1}$, where $\alpha$ is the generalized curvature of the function. In addition, we demonstrate that these approximation guarantees are applicable to several real-world applications in which the submodularity ratio and the generalized curvature can be bounded. Finally, we show that our greedy algorithm achieves competitive performance in practice through a variety of experiments on synthetic and real-world data.
Tasks Point Processes
Published 2018-11-19
URL https://arxiv.org/abs/1811.07863v5
PDF https://arxiv.org/pdf/1811.07863v5.pdf
PWC https://paperswithcode.com/paper/on-the-network-visibility-problem
Repo
Framework
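
For reference, a minimal sketch of the standard greedy algorithm the paper analyzes, written against a generic matroid independence oracle; the toy coverage objective and the rank-2 uniform matroid in the demo are assumptions chosen purely for illustration.

```python
def greedy_matroid(ground_set, f, is_independent):
    """Standard greedy: repeatedly add the feasible element with the largest
    marginal gain f(S + e) - f(S). `f` is a (nondecreasing) set function and
    `is_independent` is a matroid independence oracle."""
    S = set()
    candidates = set(ground_set)
    while candidates:
        best, best_gain = None, 0.0
        for e in candidates:
            if not is_independent(S | {e}):
                continue
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break
        S.add(best)
        candidates.remove(best)
    return S

# Toy example: a coverage-style objective under a cardinality constraint
# (a uniform matroid of rank 2) -- both chosen for illustration only.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_matroid(sets, f, lambda S: len(S) <= 2))
```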

Face Presentation Attack Detection in Learned Color-liked Space

Title Face Presentation Attack Detection in Learned Color-liked Space
Authors Lei Li, Zhaoqiang Xia, Xiaoyue Jiang, Fabio Roli, Xiaoyi Feng
Abstract Face presentation attack detection (PAD) has become a thorny problem for biometric systems, and numerous countermeasures have been proposed to address it. However, the majority of them directly extract feature descriptors and distinguish fake faces from real ones in existing color spaces (e.g., RGB, HSV and YCbCr). Unfortunately, it is unknown which color space is best or how different spaces should be combined. To make matters worse, the real and fake faces overlap in existing color spaces. In this paper, a learned distinguishable color-liked space is therefore generated to deal with the problem of face PAD. More specifically, we present an end-to-end deep learning network that can map existing color spaces to a new learned color-liked space. Inspired by the generator of the generative adversarial network (GAN), the proposed network consists of a space generator and a feature extractor. When training the color-liked space, a new points-to-center triplet combination mechanism is explored to maximize interclass distance, minimize intraclass distance, and keep a safe margin between the real and presented fake faces. Extensive experiments on two standard face PAD databases, i.e., Replay-Attack and OULU-NPU, indicate that our proposed color-liked space analysis based countermeasure significantly outperforms state-of-the-art methods and shows excellent generalization capability.
Tasks Face Presentation Attack Detection
Published 2018-10-31
URL http://arxiv.org/abs/1810.13170v2
PDF http://arxiv.org/pdf/1810.13170v2.pdf
PWC https://paperswithcode.com/paper/face-presentation-attack-detection-in-learned
Repo
Framework
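
The points-to-center triplet idea above (pull embeddings toward their own class center, push them from the other class's center, keep a margin) can be sketched in a few lines of numpy. This is a hedged reconstruction from the abstract; the exact loss, margin value, and embedding dimensions in the paper may differ.

```python
import numpy as np

def points_to_center_triplet_loss(emb, labels, margin=1.0):
    """Hedged sketch: for each embedding, the distance to its own class
    center should be at least `margin` smaller than the distance to the
    other class's center (real vs. attack), hinge-style."""
    centers = {c: emb[labels == c].mean(axis=0) for c in (0, 1)}
    loss = 0.0
    for x, y in zip(emb, labels):
        d_pos = np.linalg.norm(x - centers[y])       # to own class center
        d_neg = np.linalg.norm(x - centers[1 - y])   # to the other center
        loss += max(0.0, d_pos - d_neg + margin)
    return loss / len(emb)

emb = np.random.default_rng(1).standard_normal((8, 16))
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # 0 = real, 1 = attack (toy)
print(points_to_center_triplet_loss(emb, labels))
```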

A Hilbert Space of Stationary Ergodic Processes

Title A Hilbert Space of Stationary Ergodic Processes
Authors Ishanu Chattopadhyay
Abstract Identifying meaningful signal buried in noise is a problem of interest arising in diverse scenarios of data-driven modeling. We present here a theoretical framework for exploiting intrinsic geometry in data that resists noise corruption, and might be identifiable under severe obfuscation. Our approach is based on uncovering a valid complete inner product on the space of ergodic stationary finite-valued processes, providing the latter with the structure of a Hilbert space over the real field. This rigorous construction, based on non-standard generalizations of the notions of sum and scalar multiplication of finite-dimensional probability vectors, allows us to meaningfully talk about “angles” between data streams and data sources, and to make precise the notion of orthogonal stochastic processes. In particular, the relative angles appear to be preserved, and identifiable, under severe noise, and will be developed in future work as the underlying principle for robust classification, clustering, and unsupervised featurization algorithms.
Tasks
Published 2018-01-25
URL http://arxiv.org/abs/1801.08256v1
PDF http://arxiv.org/pdf/1801.08256v1.pdf
PWC https://paperswithcode.com/paper/a-hilbert-space-of-stationary-ergodic
Repo
Framework

Parallel Tracking and Verifying

Title Parallel Tracking and Verifying
Authors Heng Fan, Haibin Ling
Abstract Being intensively studied, visual object tracking has witnessed great advances in either speed (e.g., with correlation filters) or accuracy (e.g., with deep features). Real-time, high-accuracy tracking algorithms, however, remain scarce. In this paper we study the problem from a new perspective and present a novel parallel tracking and verifying (PTAV) framework, taking advantage of the ubiquity of multi-thread techniques and borrowing ideas from the success of parallel tracking and mapping in visual SLAM. The proposed PTAV framework is composed of two components, a (base) tracker T and a verifier V, working in parallel on two separate threads. The tracker T aims to provide super real-time tracking inference and is expected to perform well most of the time; by contrast, the verifier V validates the tracking results and corrects T when needed. The key innovation is that V does not work on every frame but only upon requests from T; in turn, T may adjust its tracking according to the feedback from V. With such collaboration, PTAV enjoys both the high efficiency provided by T and the strong discriminative power of V. Meanwhile, to adapt V to object appearance changes over time, we maintain a dynamic target template pool for adaptive verification, resulting in further performance improvements. In our extensive experiments on popular benchmarks including OTB2015, TC128, UAV20L and VOT2016, PTAV achieves the best tracking accuracy among all real-time trackers, and in fact even outperforms many deep learning based algorithms. Moreover, as a general framework, PTAV is very flexible, with great potential for future improvement and generalization.
Tasks Object Tracking, Visual Object Tracking
Published 2018-01-30
URL http://arxiv.org/abs/1801.10496v1
PDF http://arxiv.org/pdf/1801.10496v1.pdf
PWC https://paperswithcode.com/paper/parallel-tracking-and-verifying
Repo
Framework
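
The tracker/verifier collaboration lends itself to a small two-thread sketch: V runs on its own thread and only works on frames T sends it, while T occasionally polls V's feedback. The tracking and verification bodies below are stubs (the paper's actual system uses a fast tracker and a stronger verifier network); everything else about the demo is an assumption.

```python
import queue
import threading
import time

requests, feedback = queue.Queue(), queue.Queue()

def verifier():
    # V runs on its own thread and only works when T asks (per the abstract).
    while True:
        frame_id, result = requests.get()
        if frame_id is None:
            break
        ok = (result % 2 == 0)          # stub verification rule (assumption)
        feedback.put((frame_id, ok))

def tracker(n_frames=20, verify_every=5):
    for frame_id in range(n_frames):
        result = frame_id               # stub per-frame tracking result
        if frame_id % verify_every == 0:
            requests.put((frame_id, result))   # ask V only occasionally
        while not feedback.empty():     # adjust tracking on V's feedback
            fid, ok = feedback.get()
            if not ok:
                print(f"frame {fid}: verifier rejected, re-detecting")
        time.sleep(0.01)                # pretend per-frame work
    requests.put((None, None))          # shut V down

t = threading.Thread(target=verifier)
t.start()
tracker()
t.join()
```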

Cycle-Consistent Speech Enhancement

Title Cycle-Consistent Speech Enhancement
Authors Zhong Meng, Jinyu Li, Yifan Gong, Biing-Hwang Juang
Abstract Feature mapping using deep neural networks is an effective approach for single-channel speech enhancement. Noisy features are transformed to enhanced ones through a mapping network, and the mean square error between the enhanced and clean features is minimized. In this paper, we propose cycle-consistent speech enhancement (CSE), in which an additional inverse mapping network is introduced to reconstruct the noisy features from the enhanced ones. A cycle-consistent constraint is enforced to minimize the reconstruction loss. Similarly, a backward cycle of mappings is performed in the opposite direction with the same networks and losses. With cycle-consistency, the speech structure is well preserved in the enhanced features while noise is effectively reduced, so that the feature-mapping network generalizes better to unseen data. In cases where only unpaired noisy and clean data are available for training, two discriminator networks are used to distinguish the enhanced and reconstructed noisy features from the clean and noisy ones. The discrimination losses are jointly optimized with the reconstruction losses through adversarial multi-task learning. Evaluated on the CHiME-3 dataset, the proposed CSE achieves 19.60% and 6.69% relative word error rate improvements when trained with and without parallel clean and noisy speech data, respectively.
Tasks Multi-Task Learning, Speech Enhancement
Published 2018-09-06
URL http://arxiv.org/abs/1809.02253v2
PDF http://arxiv.org/pdf/1809.02253v2.pdf
PWC https://paperswithcode.com/paper/cycle-consistent-speech-enhancement
Repo
Framework
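
The forward cycle in CSE can be summarized as an enhancement loss plus a reconstruction loss through the inverse mapping. Below is a hedged numpy sketch with linear maps standing in for the mapping networks G (noisy to clean) and F (clean to noisy); the real system uses deep networks and adds the backward cycle and adversarial terms.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in linear "networks" (assumptions): G enhances, F inverts.
G = np.eye(16) + 0.1 * rng.standard_normal((16, 16))
F = np.eye(16) + 0.1 * rng.standard_normal((16, 16))

def forward_cycle_losses(x_noisy, y_clean):
    """Forward cycle of CSE-style training (sketch): an enhancement loss on
    paired data plus a cycle-consistent reconstruction of the noisy input
    from its enhanced version."""
    enhanced = x_noisy @ G.T
    recon = enhanced @ F.T
    enhancement_loss = np.mean((enhanced - y_clean) ** 2)
    cycle_loss = np.mean((recon - x_noisy) ** 2)
    return enhancement_loss, cycle_loss

x = rng.standard_normal((4, 16))             # noisy feature frames (toy)
y = x + 0.1 * rng.standard_normal((4, 16))   # "clean" pairs (toy)
print(forward_cycle_losses(x, y))
```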

HAMLET: Interpretable Human And Machine co-LEarning Technique

Title HAMLET: Interpretable Human And Machine co-LEarning Technique
Authors Olivier Deiss, Siddharth Biswal, Jing Jin, Haoqi Sun, M. Brandon Westover, Jimeng Sun
Abstract Efficient label acquisition processes are key to obtaining robust classifiers. However, data labeling is often challenging and subject to high levels of label noise. This can arise even when classification targets are well defined, if instances to be labeled are more difficult than the prototypes used to define the class, leading to disagreements among the expert community. Here, we enable efficient training of deep neural networks from low-confidence labels, whose quality we iteratively improve through the simultaneous learning of machines and experts; we call this the Human And Machine co-LEarning Technique (HAMLET). Throughout the process, experts become more consistent, while the algorithm provides them with explainable feedback for confirmation. HAMLET uses a neural embedding function and a memory module filled with diverse reference embeddings from different classes. Its output includes classification labels and highly relevant reference embeddings as explanation. As an application of HAMLET, we studied brain monitoring in the intensive care unit (ICU) using continuous electroencephalography (cEEG) data. Although cEEG monitoring yields large volumes of data, labeling costs and difficulty make it hard to build a classifier. Additionally, while experts agree on the labels of clear-cut examples of cEEG patterns, labeling much real-world cEEG data can be extremely challenging, so a large minority of sequences might be mislabeled. HAMLET has shown significant performance gains against deep learning and other baselines, increasing accuracy from 7.03% to 68.75% on challenging inputs. Besides improved performance, clinical experts confirmed the interpretability of the reference embeddings in helping to explain the classification results of HAMLET.
Tasks
Published 2018-03-26
URL http://arxiv.org/abs/1803.09702v3
PDF http://arxiv.org/pdf/1803.09702v3.pdf
PWC https://paperswithcode.com/paper/hamlet-interpretable-human-and-machine-co
Repo
Framework
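
The memory module described above (reference embeddings whose nearest neighbors serve both as votes and as the explanation) can be sketched simply; the k-nearest-neighbor rule and the sizes here are assumptions, not the paper's exact architecture.

```python
import numpy as np

def classify_with_references(query, memory_emb, memory_labels, k=3):
    """Sketch of a memory-based classifier in HAMLET's spirit: return the
    majority label of the k closest reference embeddings, plus the indices
    of those references, which double as the explanation."""
    d = np.linalg.norm(memory_emb - query, axis=1)
    nearest = np.argsort(d)[:k]
    votes = memory_labels[nearest]
    label = np.bincount(votes).argmax()
    return label, nearest

rng = np.random.default_rng(2)
mem = rng.standard_normal((20, 8))        # reference embeddings (toy)
lab = rng.integers(0, 3, size=20)         # their class labels (toy)
print(classify_with_references(rng.standard_normal(8), mem, lab))
```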

Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals

Title Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals
Authors Andrew Cotter, Heinrich Jiang, Serena Wang, Taman Narayan, Maya Gupta, Seungil You, Karthik Sridharan
Abstract We show that many machine learning goals, such as improved fairness metrics, can be expressed as constraints on the model’s predictions, which we call rate constraints. We study the problem of training non-convex models subject to these rate constraints (or any non-convex and non-differentiable constraints). In the non-convex setting, the standard approach of Lagrange multipliers may fail. Furthermore, if the constraints are non-differentiable, then one cannot optimize the Lagrangian with gradient-based methods. To solve these issues, we introduce the proxy-Lagrangian formulation. This new formulation leads to an algorithm that produces a stochastic classifier by playing a two-player non-zero-sum game, solving for what we call a semi-coarse correlated equilibrium, which in turn corresponds to an approximately optimal and feasible solution to the constrained optimization problem. We then give a procedure that shrinks the randomized solution down to a mixture of at most $m+1$ deterministic solutions, given $m$ constraints. This culminates in algorithms that can solve non-convex constrained optimization problems, with possibly non-differentiable and non-convex constraints, with theoretical guarantees. We provide extensive experimental results enforcing a wide range of policy goals, including different fairness metrics as well as goals on accuracy, coverage, recall, and churn.
Tasks
Published 2018-09-11
URL http://arxiv.org/abs/1809.04198v1
PDF http://arxiv.org/pdf/1809.04198v1.pdf
PWC https://paperswithcode.com/paper/optimization-with-non-differentiable
Repo
Framework
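
A toy sketch of the proxy-Lagrangian idea: the model player descends a Lagrangian built from a differentiable proxy constraint, while the multiplier player ascends using the original non-differentiable constraint, which requires no gradient through the model. The one-dimensional problem, step sizes, and hinge proxy below are assumptions for illustration; the paper's algorithm and guarantees are considerably more general.

```python
import numpy as np

# Toy illustration (not the paper's algorithm verbatim): minimize (w - 2)^2
# subject to a piecewise-constant constraint step(w - 1) <= 0, whose gradient
# is zero almost everywhere, so plain Lagrangian descent cannot use it.
# The w-player descends a Lagrangian that swaps in the differentiable hinge
# *proxy* max(0, w - 1); the lambda-player ascends using the *original*
# non-differentiable constraint, which needs no gradient through w.
w, lam = 0.0, 0.0
eta_w, eta_lam = 0.05, 0.25
for _ in range(1000):
    proxy_grad = 1.0 if w > 1.0 else 0.0              # d/dw max(0, w - 1)
    w -= eta_w * (2.0 * (w - 2.0) + lam * proxy_grad)
    lam = max(0.0, lam + eta_lam * np.sign(w - 1.0))  # original constraint
print(f"w = {w:.2f}, lambda = {lam:.2f}")  # w hovers near the boundary 1
```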

Iterated Belief Revision Under Resource Constraints: Logic as Geometry

Title Iterated Belief Revision Under Resource Constraints: Logic as Geometry
Authors Dan P. Guralnik, Daniel E. Koditschek
Abstract We propose a variant of iterated belief revision designed for settings with limited computational resources, such as mobile autonomous robots. The proposed memory architecture—called the {\em universal memory architecture} (UMA)—maintains an epistemic state in the form of a system of default rules similar to those studied by Pearl and by Goldszmidt and Pearl (systems $Z$ and $Z^+$). A duality between the category of UMA representations and the category of the corresponding model spaces, extending the Sageev-Roller duality between discrete poc sets and discrete median algebras, provides a two-way dictionary from inference to geometry, leading to immense savings in computation at a cost in the quality of representation that can be quantified in terms of topological invariants. Moreover, the same framework naturally enables comparisons between different model spaces, making it possible to analyze the deficiencies of one model space in comparison to others. This paper develops the formalism underlying UMA, analyzes the complexity of maintenance and inference operations in UMA, and presents learning guarantees for different UMA-based learners. Finally, we present simulation results to illustrate the viability of the approach, and close with a discussion of the strengths, weaknesses, and potential development of UMA-based learners.
Tasks
Published 2018-12-20
URL http://arxiv.org/abs/1812.08313v1
PDF http://arxiv.org/pdf/1812.08313v1.pdf
PWC https://paperswithcode.com/paper/iterated-belief-revision-under-resource
Repo
Framework

Co-Arg: Cogent Argumentation with Crowd Elicitation

Title Co-Arg: Cogent Argumentation with Crowd Elicitation
Authors Mihai Boicu, Dorin Marcu, Gheorghe Tecuci, Lou Kaiser, Chirag Uttamsingh, Navya Kalale
Abstract This paper presents Co-Arg, a new type of cognitive assistant for an intelligence analyst that enables the synergistic integration of analyst imagination and expertise, computer knowledge and critical reasoning, and crowd wisdom, to draw defensible and persuasive conclusions from masses of evidence of all types, in a world that is changing all the time. Co-Arg’s goal is to improve the quality of the analytic results and enhance their understandability for both experts and novices. The performed analysis is based on a sound and transparent argumentation that links evidence to conclusions in a way that shows very clearly how the conclusions have been reached, what evidence was used and how, what is not known, and what assumptions have been made. The analytic results are presented in a report that describes the analytic conclusion and its probability, the main favoring and disfavoring arguments, the justification of the key judgments and assumptions, and the missing information that might increase the accuracy of the solution.
Tasks
Published 2018-10-02
URL http://arxiv.org/abs/1810.01541v1
PDF http://arxiv.org/pdf/1810.01541v1.pdf
PWC https://paperswithcode.com/paper/co-arg-cogent-argumentation-with-crowd
Repo
Framework

Discriminative Representation Combinations for Accurate Face Spoofing Detection

Title Discriminative Representation Combinations for Accurate Face Spoofing Detection
Authors Xiao Song, Xu Zhao, Liangji Fang, Tianwei Lin
Abstract Three discriminative representations for face presentation attack detection are introduced in this paper. Firstly we design a descriptor called spatial pyramid coding micro-texture (SPMT) feature to characterize local appearance information. Secondly we utilize the SSD, which is a deep learning framework for detection, to excavate context cues and conduct end-to-end face presentation attack detection. Finally we design a descriptor called template face matched binocular depth (TFBD) feature to characterize stereo structures of real and fake faces. For accurate presentation attack detection, we also design two kinds of representation combinations. Firstly, we propose a decision-level cascade strategy to combine SPMT with SSD. Secondly, we use a simple score fusion strategy to combine face structure cues (TFBD) with local micro-texture features (SPMT). To demonstrate the effectiveness of our design, we evaluate the representation combination of SPMT and SSD on three public datasets, which outperforms all other state-of-the-art methods. In addition, we evaluate the representation combination of SPMT and TFBD on our dataset and excellent performance is also achieved.
Tasks Face Presentation Attack Detection
Published 2018-08-27
URL http://arxiv.org/abs/1808.08802v2
PDF http://arxiv.org/pdf/1808.08802v2.pdf
PWC https://paperswithcode.com/paper/discriminative-representation-combinations
Repo
Framework
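
The two combination strategies described above reduce to a confidence-gated cascade and a weighted score average. A minimal sketch, with all thresholds and the fusion weight assumed for illustration:

```python
def cascade_decision(spmt_score, ssd_score, hi=0.9, lo=0.1):
    """Decision-level cascade (sketch): trust the SPMT classifier when it
    is confident, otherwise fall back to the SSD-based detector.
    Thresholds are placeholders, not the paper's tuned values."""
    if spmt_score >= hi:
        return "real"
    if spmt_score <= lo:
        return "attack"
    return "real" if ssd_score >= 0.5 else "attack"

def score_fusion(spmt_score, tfbd_score, w=0.5):
    """Simple weighted score fusion of texture (SPMT) and depth (TFBD)
    cues; the equal weight is an assumption."""
    return w * spmt_score + (1.0 - w) * tfbd_score

print(cascade_decision(0.55, 0.8), score_fusion(0.55, 0.9))
```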

Stakeholders in Explainable AI

Title Stakeholders in Explainable AI
Authors Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett, Supriyo Chakraborty
Abstract There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable. However, there is no general consensus over what is meant by ‘explainable’ and ‘interpretable’. In this paper, we argue that this lack of consensus is due to there being several distinct stakeholder communities. We note that, while the concerns of the individual communities are broadly compatible, they are not identical, which gives rise to different intents and requirements for explainability/interpretability. We use the software engineering distinction between validation and verification, and the epistemological distinctions between knowns/unknowns, to tease apart the concerns of the stakeholder communities and highlight the areas where their foci overlap or diverge. It is not the purpose of the authors of this paper to ‘take sides’ - we count ourselves as members, to varying degrees, of multiple communities - but rather to help disambiguate what stakeholders mean when they ask ‘Why?’ of an AI.
Tasks
Published 2018-09-29
URL http://arxiv.org/abs/1810.00184v1
PDF http://arxiv.org/pdf/1810.00184v1.pdf
PWC https://paperswithcode.com/paper/stakeholders-in-explainable-ai
Repo
Framework

Client-Specific Anomaly Detection for Face Presentation Attack Detection

Title Client-Specific Anomaly Detection for Face Presentation Attack Detection
Authors Shervin Rahimzadeh Arashloo, Josef Kittler
Abstract The one-class anomaly detection approach has previously been found to be effective in face presentation attack detection, especially in an \textit{unseen} attack scenario, where the system is exposed to novel types of attacks. This work follows the same anomaly-based formulation of the problem and analyses the merits of deploying \textit{client-specific} information for face spoofing detection. We propose training one-class client-specific classifiers (both generative and discriminative) using representations obtained from pre-trained deep convolutional neural networks. Next, based on subject-specific score distributions, a distinct threshold is set for each client, which is then used for decision making regarding a test query. Through extensive experiments using different one-class systems, it is shown that the use of client-specific information in a one-class anomaly detection formulation (both in model construction and in decision threshold tuning) improves the performance significantly. In addition, it is demonstrated that the same set of deep convolutional features used for recognition purposes is effective for face presentation attack detection in the client-specific one-class anomaly detection paradigm.
Tasks Anomaly Detection, Decision Making, Face Presentation Attack Detection
Published 2018-07-02
URL http://arxiv.org/abs/1807.00848v1
PDF http://arxiv.org/pdf/1807.00848v1.pdf
PWC https://paperswithcode.com/paper/client-specific-anomaly-detection-for-face
Repo
Framework
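
One concrete reading of the recipe above: fit a per-client one-class model (here a diagonal Gaussian, one of several generative choices the paper allows) on genuine-face embeddings and set a client-specific threshold from that client's own score distribution. The Gaussian model, the percentile, and the dimensions are assumptions.

```python
import numpy as np

def fit_client_models(genuine, percentile=5):
    """Sketch: per client, fit a diagonal Gaussian to genuine-face
    embeddings and set a client-specific threshold at a low percentile of
    that client's own genuine scores (percentile is an assumption)."""
    models = {}
    for client, X in genuine.items():
        mu, var = X.mean(axis=0), X.var(axis=0) + 1e-6
        scores = -0.5 * (((X - mu) ** 2) / var).sum(axis=1)
        models[client] = (mu, var, np.percentile(scores, percentile))
    return models

def is_genuine(models, client, x):
    mu, var, thr = models[client]
    score = -0.5 * (((x - mu) ** 2) / var).sum()
    return score >= thr

rng = np.random.default_rng(3)
models = fit_client_models({"alice": rng.standard_normal((50, 8))})
print(is_genuine(models, "alice", rng.standard_normal(8)))       # genuine-ish
print(is_genuine(models, "alice", 5 + rng.standard_normal(8)))   # attack-ish
```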

“How to rate a video game?” - A prediction system for video games based on multimodal information

Title “How to rate a video game?” - A prediction system for video games based on multimodal information
Authors Vishal Batchu, Varshit Battu, Murali Krishna Reddy, Radhika Mamidi
Abstract Video games have become an integral part of most people’s lives in recent times. This has led to an abundance of video game related data being shared online. However, such user-submitted data comes with issues such as incorrect ratings or reviews. Recommendation systems are powerful tools that help users by providing them with meaningful recommendations. A straightforward approach would be to predict the scores of video games based on other information related to the game; this could be used both to validate user-submitted ratings and to provide recommendations. This work provides a method to predict the G-Score, which defines how good a video game is, from its trailer (video) and summary (text). We first propose models to predict the G-Score based on the trailer alone (unimodal). Later on, we show that considering information from multiple modalities helps the models perform better than using information from videos alone. Since we couldn’t find any suitable multimodal video game dataset, we created our own dataset, named VGD (Video Game Dataset), and provide it along with this work. The approach mentioned here can be generalized to other multimodal datasets, such as movie trailers and summaries. Towards the end, we discuss the shortcomings of the work and some methods to overcome them.
Tasks Recommendation Systems
Published 2018-05-29
URL http://arxiv.org/abs/1805.11372v1
PDF http://arxiv.org/pdf/1805.11372v1.pdf
PWC https://paperswithcode.com/paper/how-to-rate-a-video-game-a-prediction-system
Repo
Framework
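
The multimodal step amounts to fusing trailer and summary features before regression. A minimal late-fusion sketch, where the feature extractors and the trained weights are assumed to be given (the paper's actual models are deep networks):

```python
import numpy as np

def fuse_and_predict(video_feat, text_feat, W, b):
    """Sketch of late fusion for G-Score prediction: concatenate trailer
    and summary features and apply a linear regressor. Extractors and
    weights are placeholders for trained components."""
    fused = np.concatenate([video_feat, text_feat], axis=1)
    return fused @ W + b

rng = np.random.default_rng(4)
video = rng.standard_normal((2, 32))   # e.g., pooled trailer CNN features
text = rng.standard_normal((2, 16))    # e.g., pooled summary embeddings
W, b = rng.standard_normal(48), 7.0    # toy regressor parameters
print(fuse_and_predict(video, text, W, b))   # one G-Score per game
```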

Critical Points to Determine Persistence Homology

Title Critical Points to Determine Persistence Homology
Authors Charmin Asirimath, Jayampathy Ratnayake, Chathuranga Weeraddana
Abstract Computation of the simplicial complexes of a large point cloud often relies on extracting a sample to reduce the associated computational burden. This study considers sampling the critical points of a Morse function associated with a point cloud in order to approximate the Vietoris-Rips complex or the witness complex and compute persistence homology. The effectiveness of the novel approach is compared with farthest point sampling, in the context of classifying human face images into ethnic groups using persistence homology.
Tasks
Published 2018-05-16
URL http://arxiv.org/abs/1805.06148v1
PDF http://arxiv.org/pdf/1805.06148v1.pdf
PWC https://paperswithcode.com/paper/critical-points-to-determine-persistence
Repo
Framework
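
Both samplers in the comparison are easy to sketch: farthest-point sampling is the standard greedy baseline, and a kernel-density score gives a crude Morse-like function whose lowest- and highest-valued points stand in for critical points (the paper's actual construction is more careful); the bandwidth and split are assumptions.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """The farthest-point baseline: greedily pick the point farthest
    from everything chosen so far."""
    rng = np.random.default_rng(seed)
    chosen = [rng.integers(len(points))]
    d = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(d.argmax())
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

def density_critical_sample(points, k, bandwidth=0.5):
    """Loose stand-in for critical-point sampling: score each point by a
    kernel-density estimate (a simple Morse-like function on the cloud)
    and keep the globally lowest- and highest-density points as a crude
    proxy for minima and maxima."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)
    order = np.argsort(density)
    return points[np.concatenate([order[: k // 2], order[-(k - k // 2):]])]

pts = np.random.default_rng(5).standard_normal((200, 2))
print(farthest_point_sampling(pts, 5).shape, density_critical_sample(pts, 5).shape)
```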