January 28, 2020

2971 words · 14 mins read

Paper Group ANR 1059

Towards Social Artificial Intelligence: Nonverbal Social Signal Prediction in A Triadic Interaction. Risk Convergence of Centered Kernel Ridge Regression with Large Dimensional Data. Effective writing style imitation via combinatorial paraphrasing. Multimodal Logical Inference System for Visual-Textual Entailment. GP-ALPS: Automatic Latent Process …

Towards Social Artificial Intelligence: Nonverbal Social Signal Prediction in A Triadic Interaction

Title Towards Social Artificial Intelligence: Nonverbal Social Signal Prediction in A Triadic Interaction
Authors Hanbyul Joo, Tomas Simon, Mina Cikara, Yaser Sheikh
Abstract We present a new research task and a dataset to understand human social interactions via computational methods, to ultimately endow machines with the ability to encode and decode a broad channel of social signals humans use. This research direction is essential to making a machine that genuinely communicates with humans, which we call Social Artificial Intelligence. We first formulate the “social signal prediction” problem as a way to model the dynamics of social signals exchanged among interacting individuals in a data-driven way. We then present a new 3D motion capture dataset to explore this problem, in which a broad spectrum of social signals (3D body, face, and hand motions) is captured in a triadic social interaction scenario. Baseline approaches to predict speaking status, social formation, and body gestures of interacting individuals are presented in the defined social prediction framework.
Tasks Motion Capture
Published 2019-06-10
URL https://arxiv.org/abs/1906.04158v1
PDF https://arxiv.org/pdf/1906.04158v1.pdf
PWC https://paperswithcode.com/paper/towards-social-artificial-intelligence-1
Repo
Framework
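
The speaking-status baseline lends itself to a compact illustration. Below is a minimal sketch, assuming purely synthetic motion features and a plain logistic-regression classifier; the feature dimensions and names are invented stand-ins for the dataset's 3D body, face, and hand signals, not the paper's actual baseline.

```python
# Hypothetical sketch: predict one participant's speaking status from the
# stacked motion features of the other two participants in a triad.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
T, D = 1000, 32                                 # time steps, per-person feature dim
partner_feats = rng.normal(size=(T, 2 * D))     # features of the two partners
speaking = rng.integers(0, 2, size=T)           # target person's speaking status

clf = LogisticRegression(max_iter=1000).fit(partner_feats, speaking)
print("train accuracy:", clf.score(partner_feats, speaking))
```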

Risk Convergence of Centered Kernel Ridge Regression with Large Dimensional Data

Title Risk Convergence of Centered Kernel Ridge Regression with Large Dimensional Data
Authors Khalil Elkhalil, Abla Kammoun, Xiangliang Zhang, Mohamed-Slim Alouini, Tareq Al-Naffouri
Abstract This paper carries out a large dimensional analysis of a variation of kernel ridge regression that we call \emph{centered kernel ridge regression} (CKRR), also known in the literature as kernel ridge regression with offset. This modified technique is obtained by accounting for the bias in the regression problem, which results in the usual kernel ridge regression but with \emph{centered} kernels. The analysis is carried out under the assumption that the data is drawn from a Gaussian distribution and heavily relies on tools from random matrix theory (RMT). Under the regime in which the data dimension and the training size grow infinitely large with fixed ratio, and under some mild assumptions controlling the data statistics, we show that both the empirical and the prediction risks converge to deterministic quantities that describe in closed form the performance of CKRR in terms of the data statistics and dimensions. Inspired by this theoretical result, we subsequently build a consistent estimator of the prediction risk based on the training data, which allows one to optimally tune the design parameters. A key insight of the proposed analysis is the fact that asymptotically a large class of kernels achieve the same minimum prediction risk. This insight is validated with both synthetic and real data.
Tasks
Published 2019-04-19
URL http://arxiv.org/abs/1904.09212v1
PDF http://arxiv.org/pdf/1904.09212v1.pdf
PWC https://paperswithcode.com/paper/risk-convergence-of-centered-kernel-ridge
Repo
Framework
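
The centered-kernel construction itself is easy to write down. A minimal NumPy/scikit-learn sketch follows, assuming an RBF kernel and standard double-centering via `KernelCenterer`; the paper's asymptotic risk analysis and its consistent risk estimator are not reproduced here.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.preprocessing import KernelCenterer

def ckrr_fit(Xtr, ytr, lam=1.0, gamma=0.1):
    K = rbf_kernel(Xtr, Xtr, gamma=gamma)
    centerer = KernelCenterer().fit(K)              # double-center the train kernel
    Kc = centerer.transform(K)
    alpha = np.linalg.solve(Kc + lam * np.eye(len(Xtr)), ytr - ytr.mean())
    return alpha, centerer, ytr.mean(), gamma

def ckrr_predict(model, Xtr, Xte):
    alpha, centerer, offset, gamma = model
    Kte = centerer.transform(rbf_kernel(Xte, Xtr, gamma=gamma))  # center test rows
    return Kte @ alpha + offset                     # offset = mean of training targets

rng = np.random.default_rng(0)
Xtr = rng.normal(size=(100, 5)); ytr = Xtr[:, 0] + 0.1 * rng.normal(size=100)
Xte = rng.normal(size=(10, 5))
print(ckrr_predict(ckrr_fit(Xtr, ytr), Xtr, Xte)[:3])
```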

Effective writing style imitation via combinatorial paraphrasing

Title Effective writing style imitation via combinatorial paraphrasing
Authors Tommi Gröndahl, N. Asokan
Abstract Stylometry can be used to profile authors based on their written text. Transforming text to imitate someone else’s writing style while retaining meaning constitutes a defence. A variety of deep learning methods for style imitation have been proposed in recent research literature. Via empirical evaluation of three state-of-the-art models on four datasets, we illustrate that none succeed in semantic retainment, often drastically changing the original meaning or removing important parts of the text. To mitigate this problem we present ParChoice: an alternative approach based on the combinatorial application of multiple paraphrasing techniques. ParChoice first produces a large number of possible candidate paraphrases, from which it then chooses the candidate that maximizes proximity to a target corpus. Through systematic automated and manual evaluation as well as a user study, we demonstrate that ParChoice significantly outperforms prior methods in its ability to retain semantic content. Using state-of-the-art deep learning author profiling tools, we additionally show that ParChoice accomplishes better imitation success than A$^4$NT, the state-of-the-art style imitation technique with the best semantic retainment.
Tasks
Published 2019-05-31
URL https://arxiv.org/abs/1905.13464v1
PDF https://arxiv.org/pdf/1905.13464v1.pdf
PWC https://paperswithcode.com/paper/effective-writing-style-imitation-via
Repo
Framework
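
The combinatorial selection step can be illustrated with a toy example. The sketch below assumes a hand-written synonym table and a character-n-gram overlap score as stand-ins for ParChoice's richer paraphrasing techniques and target-corpus proximity measure.

```python
# Toy combinatorial paraphrasing: enumerate all candidate sentences from
# per-word substitution options, then keep the one closest to a target corpus.
from itertools import product

SUBS = {"quick": ["quick", "fast", "rapid"],
        "answer": ["answer", "reply", "response"]}

def candidates(tokens):
    options = [SUBS.get(t, [t]) for t in tokens]
    for combo in product(*options):              # Cartesian product of choices
        yield " ".join(combo)

def ngram_score(text, target_corpus, n=3):
    grams = lambda s: {s[i:i + n] for i in range(len(s) - n + 1)}
    t, c = grams(text), grams(target_corpus)
    return len(t & c) / max(len(t), 1)           # crude proximity to target style

target = "a rapid reply is better than a slow one"
sentence = "the quick answer was wrong".split()
print(max(candidates(sentence), key=lambda s: ngram_score(s, target)))
```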

Multimodal Logical Inference System for Visual-Textual Entailment

Title Multimodal Logical Inference System for Visual-Textual Entailment
Authors Riko Suzuki, Hitomi Yanaka, Masashi Yoshikawa, Koji Mineshima, Daisuke Bekki
Abstract A large amount of research about multimodal inference across text and vision has been recently developed to obtain visually grounded word and sentence representations. In this paper, we use logic-based representations as unified meaning representations for texts and images and present an unsupervised multimodal logical inference system that can effectively prove entailment relations between them. We show that by combining semantic parsing and theorem proving, the system can handle semantically complex sentences for visual-textual inference.
Tasks Automated Theorem Proving, Natural Language Inference, Semantic Parsing
Published 2019-06-10
URL https://arxiv.org/abs/1906.03952v1
PDF https://arxiv.org/pdf/1906.03952v1.pdf
PWC https://paperswithcode.com/paper/multimodal-logical-inference-system-for
Repo
Framework
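
The final entailment-checking stage can be caricatured in a few lines. The sketch below assumes both modalities have already been parsed into flat sets of ground atoms and reduces "theorem proving" to subset inclusion; the paper's first-order representations and prover are far richer.

```python
# Toy visual-textual entailment: both the image and the sentence are assumed
# to have been mapped to logical forms (here, sets of ground atoms).
image_form = {("man", "x"), ("horse", "y"), ("ride", "x", "y")}
text_form = {("man", "x"), ("ride", "x", "y")}       # "a man is riding"

def entails(premise, hypothesis):
    # every atom of the hypothesis must be derivable from the premise
    return hypothesis <= premise

print(entails(image_form, text_form))                # True: image entails the text
```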

GP-ALPS: Automatic Latent Process Selection for Multi-Output Gaussian Process Models

Title GP-ALPS: Automatic Latent Process Selection for Multi-Output Gaussian Process Models
Authors Pavel Berkovich, Eric Perim, Wessel Bruinsma
Abstract A simple and widely adopted approach to extend Gaussian processes (GPs) to multiple outputs is to model each output as a linear combination of a collection of shared, unobserved latent GPs. An issue with this approach is choosing the number of latent processes and their kernels. These choices are typically done manually, which can be time consuming and prone to human biases. We propose Gaussian Process Automatic Latent Process Selection (GP-ALPS), which automatically chooses the latent processes by turning off those that do not meaningfully contribute to explaining the data. We develop a variational inference scheme, assess the quality of the variational posterior by comparing it against the gold standard MCMC, and demonstrate the suitability of GP-ALPS in a set of preliminary experiments.
Tasks Gaussian Processes
Published 2019-11-05
URL https://arxiv.org/abs/1911.01929v2
PDF https://arxiv.org/pdf/1911.01929v2.pdf
PWC https://paperswithcode.com/paper/gp-alps-automatic-latent-process-selection
Repo
Framework
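
The model family GP-ALPS searches over can be sketched generatively. Below, P outputs are a linear mix of Q latent GP draws, with hand-set binary gates standing in for the variationally learned on/off variables; the kernels, sizes, and gate values are arbitrary choices for illustration.

```python
import numpy as np

def rbf_cov(x, ls):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2) + 1e-8 * np.eye(len(x))  # jitter for PSD

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
lengthscales = (0.05, 0.1, 0.2, 0.4)                 # Q = 4 latent processes
latents = np.stack([rng.multivariate_normal(np.zeros(len(x)), rbf_cov(x, ls))
                    for ls in lengthscales])
W = rng.normal(size=(3, 4))                          # P x Q mixing weights
gates = np.array([1.0, 1.0, 0.0, 0.0])               # latents 3 and 4 switched off
outputs = (W * gates) @ latents                      # P = 3 observed channels
print(outputs.shape)                                 # (3, 100)
```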

Bayesian Optimization on Large Graphs via a Graph Convolutional Generative Model: Application in Cardiac Model Personalization

Title Bayesian Optimization on Large Graphs via a Graph Convolutional Generative Model: Application in Cardiac Model Personalization
Authors Jwala Dhamala, Sandesh Ghimire, John L. Sapp, B. Milan Horacek, Linwei Wang
Abstract Personalization of cardiac models involves the optimization of organ tissue properties that vary spatially over the non-Euclidean geometry model of the heart. To represent the high-dimensional (HD) unknown of tissue properties, most existing works rely on a low-dimensional (LD) partitioning of the geometrical model. While this exploits the geometry of the heart, a partitioning small enough for effective optimization offers only limited expressiveness. Recently, a variational auto-encoder (VAE) was utilized as a more expressive generative model to embed the HD optimization into the LD latent space. Its Euclidean nature, however, neglects the rich geometrical information in the heart. In this paper, we present a novel graph convolutional VAE to allow generative modeling of non-Euclidean data, and utilize it to embed Bayesian optimization of large graphs into a small latent space. This approach bridges the gap left by previous works by introducing an expressive generative model that is able to incorporate the knowledge of spatial proximity and hierarchical compositionality of the underlying geometry. It further allows the learned features to be transferred across different geometries, which was not possible with a regular VAE. We demonstrate these benefits of the presented method in synthetic and real data experiments of estimating tissue excitability in a cardiac electrophysiological model.
Tasks
Published 2019-07-01
URL https://arxiv.org/abs/1907.01406v1
PDF https://arxiv.org/pdf/1907.01406v1.pdf
PWC https://paperswithcode.com/paper/bayesian-optimization-on-large-graphs-via-a
Repo
Framework
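
The overall loop, Bayesian optimization in a learned low-dimensional latent space, can be sketched as follows, with a random linear map standing in for the trained graph-convolutional VAE decoder and a placeholder quadratic misfit for the cardiac-model objective.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
d_latent, d_graph = 2, 500
W_dec = rng.normal(size=(d_graph, d_latent))         # stand-in linear "decoder"
decode = lambda z: np.tanh(W_dec @ z)                # latent -> tissue-property field
objective = lambda z: np.sum((decode(z) - 0.5) ** 2) # placeholder model misfit

Z = rng.normal(size=(5, d_latent))
y = np.array([objective(z) for z in Z])
for _ in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(Z, y)   # surrogate over z
    cand = rng.normal(size=(256, d_latent))          # random candidate latents
    mu, sd = gp.predict(cand, return_std=True)
    imp = y.min() - mu                               # expected improvement (minimize)
    ei = imp * norm.cdf(imp / (sd + 1e-9)) + sd * norm.pdf(imp / (sd + 1e-9))
    z_next = cand[np.argmax(ei)]
    Z = np.vstack([Z, z_next]); y = np.append(y, objective(z_next))
print("best misfit found:", y.min())
```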

NIL: Learning Nonlinear Interpolants

Title NIL: Learning Nonlinear Interpolants
Authors Mingshuai Chen, Jian Wang, Jie An, Bohua Zhan, Deepak Kapur, Naijun Zhan
Abstract Nonlinear interpolants have been shown useful for the verification of programs and hybrid systems in contexts of theorem proving, model checking, abstract interpretation, etc. The underlying synthesis problem, however, is challenging and existing methods have limitations on the form of formulae to be interpolated. We leverage classification techniques with space transformations and kernel tricks as established in the realm of machine learning, and present a counterexample-guided method named NIL for synthesizing polynomial interpolants, thereby yielding a unified framework tackling the interpolation problem for the general quantifier-free theory of nonlinear arithmetic, possibly involving transcendental functions. We prove the soundness of NIL and propose sufficient conditions under which NIL is guaranteed to converge, i.e., the derived sequence of candidate interpolants converges to an actual interpolant, and is complete, namely the algorithm terminates by producing an interpolant if there exists one. The applicability and effectiveness of our technique are demonstrated experimentally on a collection of representative benchmarks from the literature, where in particular, our method suffices to address more interpolation tasks, including those with perturbations in parameters, and in many cases synthesizes simpler interpolants compared with existing approaches.
Tasks Automated Theorem Proving
Published 2019-05-28
URL https://arxiv.org/abs/1905.11625v5
PDF https://arxiv.org/pdf/1905.11625v5.pdf
PWC https://paperswithcode.com/paper/nil-learning-nonlinear-interpolants
Repo
Framework
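
The counterexample-guided skeleton can be illustrated with scikit-learn. The sketch below fits a degree-2 polynomial SVM separating samples of two formulae and mocks the solver's counterexample query with a grid scan; NIL's actual synthesis, soundness checks, and convergence machinery are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
A = rng.normal(loc=(-2.0, 0.0), scale=0.5, size=(50, 2))  # samples satisfying A
B = rng.normal(loc=(2.0, 0.0), scale=0.5, size=(50, 2))   # samples satisfying B
X = np.vstack([A, B]); y = np.array([1] * 50 + [-1] * 50)

# Candidate polynomial interpolant: the sign of the SVM decision function.
clf = SVC(kernel="poly", degree=2).fit(X, y)

# Mock counterexample search: in NIL this is a solver query; here we just
# scan a grid for points that should satisfy A but fall on the B side.
grid = np.mgrid[-4:4:0.1, -4:4:0.1].reshape(2, -1).T
cex = grid[(grid[:, 0] < -1.0) & (clf.predict(grid) < 0)]
print("counterexamples to refine with:", len(cex))
```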

Nonembeddability of Persistence Diagrams with $p>2$ Wasserstein Metric

Title Nonembeddability of Persistence Diagrams with $p>2$ Wasserstein Metric
Authors Alexander Wagner
Abstract Persistence diagrams do not admit an inner product structure compatible with any Wasserstein metric. Hence, when applying kernel methods to persistence diagrams, the underlying feature map necessarily causes distortion. We prove persistence diagrams with the p-Wasserstein metric do not admit a coarse embedding into a Hilbert space when p > 2.
Tasks
Published 2019-10-30
URL https://arxiv.org/abs/1910.13935v1
PDF https://arxiv.org/pdf/1910.13935v1.pdf
PWC https://paperswithcode.com/paper/nonembeddability-of-persistence-diagrams-with
Repo
Framework
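
For reference, the standard p-Wasserstein distance on persistence diagrams that the result concerns (with $\Delta$ the diagonal, to which unmatched points may be sent) is

```latex
\[
  W_p(D_1, D_2) \;=\;
  \inf_{\eta \,:\, D_1 \cup \Delta \,\to\, D_2 \cup \Delta}
  \Bigl( \sum_{x \in D_1 \cup \Delta} \lVert x - \eta(x) \rVert_\infty^{\,p} \Bigr)^{1/p},
\]
```

where the infimum ranges over bijections $\eta$. The paper shows that for $p > 2$ the space of diagrams with this metric admits no coarse embedding into a Hilbert space, so every kernel feature map necessarily distorts it.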

Deep Learning for Face Recognition: Pride or Prejudiced?

Title Deep Learning for Face Recognition: Pride or Prejudiced?
Authors Shruti Nagpal, Maneet Singh, Richa Singh, Mayank Vatsa
Abstract Do very high accuracies of deep networks suggest pride of effective AI or are deep networks prejudiced? Do they suffer from in-group biases (own-race-bias and own-age-bias), and mimic the human behavior? Is in-group specific information being encoded sub-consciously by the deep networks? This research attempts to answer these questions and presents an in-depth analysis of ‘bias’ in deep learning based face recognition systems. This is the first work which decodes if and where bias is encoded for face recognition. Taking cues from cognitive studies, we inspect if deep networks are also affected by social in- and out-group effect. Networks are analyzed for own-race and own-age bias, both of which have been well established in human beings. The sub-conscious behavior of face recognition models is examined to understand if they encode race or age specific features for face recognition. Analysis is performed based on 36 experiments conducted on multiple datasets. Four deep learning networks either trained from scratch or pre-trained on over 10M images are used. Variations across class activation maps and feature visualizations provide novel insights into the functioning of deep learning systems, suggesting behavior similar to humans. It is our belief that a better understanding of state-of-the-art deep learning networks would enable researchers to address the given challenge of bias in AI, and develop fairer systems.
Tasks Face Recognition
Published 2019-04-02
URL https://arxiv.org/abs/1904.01219v2
PDF https://arxiv.org/pdf/1904.01219v2.pdf
PWC https://paperswithcode.com/paper/deep-learning-for-face-recognition-pride-or
Repo
Framework
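
One concrete form such a bias analysis takes is disaggregated evaluation. The sketch below computes per-subgroup accuracy on entirely synthetic match scores; the group names, score model, and decision threshold are placeholders, not the paper's datasets or protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
groups = rng.choice(["group_a", "group_b"], size=2000)
labels = rng.integers(0, 2, size=2000)               # same-identity pair or not
scores = labels + rng.normal(0, 0.8, size=2000)      # mock verification scores
scores += (groups == "group_b") * rng.normal(0, 0.6, size=2000)  # noisier group

for g in ("group_a", "group_b"):
    m = groups == g
    acc = ((scores[m] > 0.5).astype(int) == labels[m]).mean()
    print(f"{g}: accuracy {acc:.3f}")                # gap suggests in-group bias
```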

Adaptive Context Encoding Module for Semantic Segmentation

Title Adaptive Context Encoding Module for Semantic Segmentation
Authors Congcong Wang, Faouzi Alaya Cheikh, Azeddine Beghdadi, Ole Jakob Elle
Abstract The object sizes in images are diverse; therefore, capturing multi-scale context information is essential for semantic segmentation. Existing context aggregation methods such as the pyramid pooling module (PPM) and atrous spatial pyramid pooling (ASPP) use different pooling sizes or atrous rates so that multi-scale information is captured. However, the pooling sizes and atrous rates are chosen manually and empirically. In order to capture object context information adaptively, in this paper we propose an adaptive context encoding (ACE) module based on the deformable convolution operation to augment multi-scale information. Our ACE module can be embedded into other convolutional neural networks (CNNs) easily for context aggregation. The effectiveness of the proposed module is demonstrated on the Pascal-Context and ADE20K datasets. Although our proposed ACE consists of only three deformable convolution blocks, it outperforms PPM and ASPP in terms of mean Intersection over Union (mIoU) on both datasets. All experimental studies confirm that our proposed module is effective compared to state-of-the-art methods.
Tasks Semantic Segmentation
Published 2019-07-13
URL https://arxiv.org/abs/1907.06082v1
PDF https://arxiv.org/pdf/1907.06082v1.pdf
PWC https://paperswithcode.com/paper/adaptive-context-encoding-module-for-semantic
Repo
Framework
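
A hedged PyTorch sketch of the idea, three stacked deformable-convolution blocks whose learned offsets replace hand-picked pooling sizes and atrous rates, is shown below. The channel widths, activation, and absence of normalization are guesses rather than the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, channels, k=3):
        super().__init__()
        # a regular conv predicts per-location sampling offsets (2 per tap)
        self.offset = nn.Conv2d(channels, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(channels, channels, k, padding=k // 2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.deform(x, self.offset(x)))

class ACEModule(nn.Module):
    """Three stacked deformable blocks; offsets let each block adapt its
    effective receptive field instead of using a fixed atrous rate."""
    def __init__(self, channels):
        super().__init__()
        self.blocks = nn.Sequential(*[DeformBlock(channels) for _ in range(3)])

    def forward(self, x):
        return self.blocks(x)

feat = torch.randn(1, 64, 32, 32)
print(ACEModule(64)(feat).shape)                     # torch.Size([1, 64, 32, 32])
```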

Coarse Graining of Data via Inhomogeneous Diffusion Condensation

Title Coarse Graining of Data via Inhomogeneous Diffusion Condensation
Authors Nathan Brugnone, Alex Gonopolskiy, Mark W. Moyle, Manik Kuchroo, David van Dijk, Kevin R. Moon, Daniel Colon-Ramos, Guy Wolf, Matthew J. Hirn, Smita Krishnaswamy
Abstract Big data often has emergent structure that exists at multiple levels of abstraction, which are useful for characterizing complex interactions and dynamics of the observations. Here, we consider multiple levels of abstraction via a multiresolution geometry of data points at different granularities. To construct this geometry we define a time-inhomogeneous diffusion process that effectively condenses data points together to uncover nested groupings at larger and larger granularities. This inhomogeneous process creates a deep cascade of intrinsic low pass filters on the data affinity graph that are applied in sequence to gradually eliminate local variability while adjusting the learned data geometry to increasingly coarser resolutions. We provide visualizations to exhibit our method as a continuously-hierarchical clustering with directions of eliminated variation highlighted at each step. The utility of our algorithm is demonstrated via neuronal data condensation, where the constructed multiresolution data geometry uncovers the organization, grouping, and connectivity between neurons.
Tasks
Published 2019-07-10
URL https://arxiv.org/abs/1907.04463v3
PDF https://arxiv.org/pdf/1907.04463v3.pdf
PWC https://paperswithcode.com/paper/coarse-graining-of-data-via-inhomogeneous
Repo
Framework
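
The core iteration is compact enough to sketch in NumPy: build a row-normalized Gaussian affinity on the current points, average each point with its neighbors, and slowly grow the bandwidth. The bandwidth schedule and stopping rule here are simplifications of the paper's.

```python
import numpy as np

def condense(X, steps=50, eps=0.05):
    traj = [X.copy()]
    for _ in range(steps):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / eps)
        P = K / K.sum(axis=1, keepdims=True)     # diffusion operator (low-pass filter)
        X = P @ X                                # condense points toward neighbors
        eps *= 1.05                              # slowly coarsen the resolution
        traj.append(X.copy())
    return traj

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.1, size=(30, 2)) for m in ((0, 0), (1, 1), (0, 1))])
traj = condense(X)
print("spread before/after:", X.std(), traj[-1].std())   # points collapse to groups
```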

The Bouncer Problem: Challenges to Remote Explainability

Title The Bouncer Problem: Challenges to Remote Explainability
Authors Erwan Le Merrer, Gilles Tredan
Abstract The concept of explainability is envisioned to satisfy society’s demands for transparency on machine learning decisions. The concept is simple: like humans, algorithms should explain the rationale behind their decisions so that their fairness can be assessed. While this approach is promising in a local context (e.g. to explain a model during debugging at training time), we argue that this reasoning cannot simply be transposed to a remote context, where a model trained by a service provider is only accessible through its API. This is problematic as it constitutes precisely the target use-case requiring transparency from a societal perspective. Through an analogy with a club bouncer (who may provide untruthful explanations upon rejecting a customer), we show that providing explanations cannot prevent a remote service from lying about the true reasons behind its decisions. More precisely, we prove the impossibility of remote explainability for single explanations, by constructing an attack on explanations that hides discriminatory features from the querying user. We provide an example implementation of this attack. We then show that the probability that an observer spots the attack, using several explanations in an attempt to find incoherences, is low in practical settings. This undermines the very concept of remote explainability in general.
Tasks
Published 2019-10-03
URL https://arxiv.org/abs/1910.01432v2
PDF https://arxiv.org/pdf/1910.01432v2.pdf
PWC https://paperswithcode.com/paper/the-bouncer-problem-challenges-to-remote
Repo
Framework
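
A toy version of the bouncer makes the point concrete: the service decides using a protected attribute but returns an explanation that is consistent with the outcome while never mentioning it. All feature names below are invented for illustration.

```python
def decide(applicant):
    # hidden, discriminatory policy actually used by the remote service
    return applicant["income"] > 50 and not applicant["protected_group"]

def explain(applicant, accepted):
    # explanation shown to the user: consistent with the outcome, silent on
    # the protected attribute that actually drove it
    if accepted:
        return "accepted: income above threshold"
    if applicant["income"] <= 50:
        return "rejected: income below threshold"
    return "rejected: internal risk score too high"  # fabricated innocuous reason

applicant = {"income": 80, "protected_group": True}
outcome = decide(applicant)
print(outcome, "->", explain(applicant, outcome))
```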

A Latent Variational Framework for Stochastic Optimization

Title A Latent Variational Framework for Stochastic Optimization
Authors Philippe Casgrain
Abstract This paper provides a unifying theoretical framework for stochastic optimization algorithms by means of a latent stochastic variational problem. Using techniques from stochastic control, the solution to the variational problem is shown to be equivalent to that of a Forward Backward Stochastic Differential Equation (FBSDE). By solving these equations, we recover a variety of existing adaptive stochastic gradient descent methods. This framework establishes a direct connection between stochastic optimization algorithms and a secondary Bayesian inference problem on gradients, where a prior measure on noisy gradient observations determines the resulting algorithm.
Tasks Bayesian Inference, Stochastic Optimization
Published 2019-05-05
URL https://arxiv.org/abs/1905.01707v5
PDF https://arxiv.org/pdf/1905.01707v5.pdf
PWC https://paperswithcode.com/paper/a-bayesian-variational-framework-for
Repo
Framework
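
For orientation, the generic forward-backward SDE system referred to above has the standard form (the paper's specific coefficients $b$, $\sigma$, $f$, $g$ encode which optimizer is recovered):

```latex
\begin{align*}
  dX_t &= b(t, X_t, Y_t, Z_t)\,dt + \sigma(t, X_t, Y_t, Z_t)\,dW_t, & X_0 &= x_0,\\
  dY_t &= -f(t, X_t, Y_t, Z_t)\,dt + Z_t\,dW_t, & Y_T &= g(X_T),
\end{align*}
```

with the forward state $X$, backward value $Y$, and control-like process $Z$ all adapted to the same Brownian motion $W$.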

The tension between openness and prudence in AI research

Title The tension between openness and prudence in AI research
Authors Jess Whittlestone, Aviv Ovadya
Abstract This paper explores the tension between openness and prudence in AI research, evident in two core principles of the Montréal Declaration for Responsible AI. While the AI community has strong norms around open sharing of research, concerns about the potential harms arising from misuse of research are growing, prompting some to consider whether the field of AI needs to reconsider publication norms. We discuss how different beliefs and values can lead to differing perspectives on how the AI community should manage this tension, and explore implications for what responsible publication norms in AI research might look like in practice.
Tasks
Published 2019-10-02
URL https://arxiv.org/abs/1910.01170v2
PDF https://arxiv.org/pdf/1910.01170v2.pdf
PWC https://paperswithcode.com/paper/the-tension-between-openness-and-prudence-in
Repo
Framework

Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness

Title Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
Authors Pengzhan Jin, Lu Lu, Yifa Tang, George Em Karniadakis
Abstract The accuracy of deep learning, i.e., deep neural networks, can be characterized by dividing the total error into three main types: approximation error, optimization error, and generalization error. Whereas there are some satisfactory answers to the problems of approximation and optimization, much less is known about the theory of generalization. Most existing theoretical works on generalization fail to explain the performance of neural networks in practice. To derive a meaningful bound, we study the generalization error of neural networks for classification problems in terms of data distribution and neural network smoothness. We introduce the cover complexity (CC) to measure the difficulty of learning a data set and the inverse of the modulus of continuity to quantify neural network smoothness. A quantitative bound for expected accuracy/error is derived by considering both the CC and neural network smoothness. We validate our theoretical results on several image data sets. The numerical results confirm that the expected error of trained networks, scaled by the square root of the number of classes, has a linear relationship with the CC. We also observe a clear consistency between test loss and neural network smoothness during the training process. In addition, we show that neural network smoothness decreases as the network size increases, while the smoothness is insensitive to training dataset size.
Tasks
Published 2019-05-27
URL https://arxiv.org/abs/1905.11427v2
PDF https://arxiv.org/pdf/1905.11427v2.pdf
PWC https://paperswithcode.com/paper/quantifying-the-generalization-error-in-deep
Repo
Framework
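
The smoothness side of the bound can be probed empirically. The sketch below Monte Carlo estimates a modulus of continuity, the largest output change over input pairs within distance δ, for a stand-in function; the paper's precise inverse-modulus quantity may be defined differently.

```python
import numpy as np

def empirical_modulus(f, X, delta, n_pairs=10000, seed=0):
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), n_pairs)
    # perturb each sampled point within an L2 ball of radius delta
    noise = rng.normal(size=(n_pairs, X.shape[1]))
    noise *= delta * rng.random((n_pairs, 1)) / np.linalg.norm(noise, axis=1,
                                                               keepdims=True)
    x, x2 = X[i], X[i] + noise
    return np.abs(f(x) - f(x2)).max()            # worst observed output change

f = lambda x: np.tanh(x @ np.ones(x.shape[1]))   # stand-in for a trained network
X = np.random.default_rng(1).normal(size=(500, 10))
for delta in (0.01, 0.1, 1.0):
    print(delta, empirical_modulus(f, X, delta))
```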