January 31, 2020

3236 words 16 mins read

Paper Group ANR 22

High dynamic range image forensics using CNN. Efficient Querying from Weighted Binary Codes. Robust Optimisation Monte Carlo. Adaptive Learning Material Recommendation in Online Language Education. Blessing in Disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness. Generative Teaching Networks: Accelerating Neural Architecture S …

High dynamic range image forensics using CNN

Title High dynamic range image forensics using CNN
Authors Yongqing Huo, Xiaofeng Zhu
Abstract High dynamic range (HDR) imaging has recently drawn much attention in the multimedia community. In this paper, we propose an HDR image forensics method based on a convolutional neural network (CNN). To the best of our knowledge, this is the first time a deep learning method has been applied to HDR image forensics. The proposed algorithm uses a CNN to distinguish HDR images generated from multiple low dynamic range (LDR) images from those expanded from a single LDR image using inverse tone mapping (iTM). To do this, we learn the change of statistical characteristics extracted by the proposed CNN architectures and classify the two kinds of HDR images. Comparison with some traditional statistical characteristics shows the efficiency of the proposed method in HDR image source identification.
Tasks
Published 2019-02-28
URL http://arxiv.org/abs/1902.10938v1
PDF http://arxiv.org/pdf/1902.10938v1.pdf
PWC https://paperswithcode.com/paper/high-dynamic-range-image-forensics-using-cnn
Repo
Framework
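
As a concrete illustration of the classification setup described above, here is a minimal PyTorch sketch of a small CNN that labels an HDR patch as either multi-exposure-fused or inverse-tone-mapped. The architecture, patch size, and preprocessing are assumptions made for illustration; the paper's actual network is not reproduced here.

```python
# Hypothetical sketch: a small CNN that classifies an HDR image patch as either
# multi-exposure-fused or single-image inverse-tone-mapped (iTM). Layer sizes and
# the 64x64 patch size are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class HDRSourceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 input patches

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Example: score a batch of 64x64 HDR patches (already normalised to [0, 1]).
logits = HDRSourceClassifier()(torch.rand(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```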

Efficient Querying from Weighted Binary Codes

Title Efficient Querying from Weighted Binary Codes
Authors Zhenyu Weng, Yuesheng Zhu
Abstract Binary codes are widely used to represent data because of their small storage footprint and efficient computation. However, there is an ambiguity problem: many binary codes share the same Hamming distance to a query. To alleviate this ambiguity, weighted binary codes assign a different weight to each bit and compare codes by the weighted Hamming distance. To date, querying weighted binary codes efficiently remains an open issue. In this paper, we propose a new method to rank weighted binary codes and return the nearest weighted binary codes of the query efficiently. In our method, based on multi-index hash tables, two algorithms, a table bucket finding algorithm and a table merging algorithm, are proposed to select the nearest weighted binary codes of the query in a non-exhaustive and accurate way. The proposed algorithms are justified by proving their theoretical properties. Experiments on three large-scale datasets validate both the search efficiency and the search accuracy of our method. In particular, with up to one billion weighted binary codes, our method is more than 1000 times faster than a linear scan.
Tasks
Published 2019-11-21
URL https://arxiv.org/abs/1912.05006v1
PDF https://arxiv.org/pdf/1912.05006v1.pdf
PWC https://paperswithcode.com/paper/efficient-querying-from-weighted-binary-codes
Repo
Framework
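
For readers unfamiliar with the weighted Hamming distance the paper builds on, the sketch below computes it by brute force over a small code database. The per-bit weights and codes are random placeholders, and the exhaustive scan is exactly what the paper's multi-index hash table algorithms are designed to avoid.

```python
# Brute-force weighted Hamming ranking (not the paper's non-exhaustive algorithms).
import numpy as np

def weighted_hamming(query, codes, weights):
    """query: (B,) 0/1 ints; codes: (N, B) 0/1 ints; weights: (B,) per-bit weights."""
    diff = (codes != query).astype(float)   # 1.0 where bits differ
    return diff @ weights                   # (N,) weighted Hamming distances

rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(1000, 32))
query = rng.integers(0, 2, size=32)
weights = rng.random(32)

dist = weighted_hamming(query, codes, weights)
top5 = np.argsort(dist)[:5]                 # exhaustive ranking the paper avoids
print(top5, dist[top5])
```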

Robust Optimisation Monte Carlo

Title Robust Optimisation Monte Carlo
Authors Borislav Ikonomov, Michael U. Gutmann
Abstract This paper is on Bayesian inference for parametric statistical models that are defined by a stochastic simulator which specifies how data is generated. Exact sampling is then possible but evaluating the likelihood function is typically prohibitively expensive. Approximate Bayesian Computation (ABC) is a framework to perform approximate inference in such situations. While basic ABC algorithms are widely applicable, they are notoriously slow and much research has focused on increasing their efficiency. Optimisation Monte Carlo (OMC) has recently been proposed as an efficient and embarrassingly parallel method that leverages optimisation to accelerate the inference. In this paper, we demonstrate an important previously unrecognised failure mode of OMC: It generates strongly overconfident approximations by collapsing regions of similar or near-constant likelihood into a single point. We propose an efficient, robust generalisation of OMC that corrects this. It makes fewer assumptions, retains the main benefits of OMC, and can be performed either as post-processing to OMC or as a stand-alone computation. We demonstrate the effectiveness of the proposed Robust OMC on toy examples and tasks in inverse-graphics where we perform Bayesian inference with a complex image renderer.
Tasks Bayesian Inference
Published 2019-04-01
URL https://arxiv.org/abs/1904.00670v3
PDF https://arxiv.org/pdf/1904.00670v3.pdf
PWC https://paperswithcode.com/paper/robust-optimisation-monte-carlo
Repo
Framework
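
As background for the ABC setting that OMC and Robust OMC accelerate, here is a toy rejection-ABC sampler on a one-dimensional problem. The simulator, prior, and tolerance are made up for illustration; the paper's optimisation-based machinery is not shown.

```python
# Minimal ABC rejection sampler: draw parameters from the prior, simulate data,
# and keep parameters whose simulated output is close to the observed one.
import numpy as np

rng = np.random.default_rng(0)
observed = 1.5                                      # observed summary statistic

def simulator(theta):                               # toy stochastic simulator
    return theta + rng.normal(scale=0.5)

def abc_rejection(n_samples=5000, eps=0.1):
    accepted = []
    for _ in range(n_samples):
        theta = rng.normal(0.0, 2.0)                # prior draw
        if abs(simulator(theta) - observed) < eps:  # accept if simulation is close
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection()
print(posterior.mean(), posterior.std())
```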

Adaptive Learning Material Recommendation in Online Language Education

Title Adaptive Learning Material Recommendation in Online Language Education
Authors Shuhan Wang, Hao Wu, Ji Hun Kim, Erik Andersen
Abstract Recommending personalized learning materials for online language learning is challenging because we typically lack data about the student’s ability and the relative difficulty of learning materials. This makes it hard to recommend appropriate content that matches the student’s prior knowledge. In this paper, we propose a refined hierarchical knowledge structure to model vocabulary knowledge, which enables us to automatically organize authentic and up-to-date learning materials collected from the internet. Based on this knowledge structure, we then introduce a hybrid approach to recommend learning materials that adapts to a student’s language level. We evaluate our work with an online Japanese learning tool, and the results suggest that adding adaptivity to material recommendation significantly increases student engagement.
Tasks
Published 2019-05-26
URL https://arxiv.org/abs/1905.10893v1
PDF https://arxiv.org/pdf/1905.10893v1.pdf
PWC https://paperswithcode.com/paper/adaptive-learning-material-recommendation-in
Repo
Framework

Blessing in Disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness

Title Blessing in Disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness
Authors Jiaming Zhang, Jitao Sang, Kaiyuan Xu, Shangxi Wu, Yongli Hu, Yanfeng Sun, Jian Yu
Abstract The Turing test was originally proposed to examine whether a machine’s behavior is indistinguishable from that of a human. The most popular and practical Turing test is CAPTCHA, which discriminates algorithms from humans by posing recognition-like questions. The recent development of deep learning has significantly advanced the capability of algorithms to solve CAPTCHA questions, forcing CAPTCHA designers to increase question complexity. Instead of designing questions difficult for both algorithms and humans, this study attempts to exploit the limitations of algorithms to design robust CAPTCHA questions that are easily solvable by humans. Specifically, our data analysis observes that humans and algorithms demonstrate different vulnerability to visual distortions: adversarial perturbation is significantly annoying to algorithms yet friendly to humans. We are motivated to employ adversarially perturbed images for robust CAPTCHA design in the context of character-based questions. Three modules, multi-target attack, ensemble adversarial training, and a differentiable approximation of image preprocessing, are proposed to address the characteristics of character-based CAPTCHA cracking. Qualitative and quantitative experimental results demonstrate the effectiveness of the proposed solution. We hope this study can lead to discussions around adversarial attack/defense in CAPTCHA design and also inspire future attempts to employ algorithm limitations for practical use.
Tasks Adversarial Attack
Published 2019-04-22
URL http://arxiv.org/abs/1904.09804v1
PDF http://arxiv.org/pdf/1904.09804v1.pdf
PWC https://paperswithcode.com/paper/blessing-in-disguise-designing-robust-turing
Repo
Framework
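
The adversarial perturbations the paper leverages can be illustrated with a single-step FGSM attack, sketched below in PyTorch. The toy model, input size, and epsilon are assumptions; the paper's multi-target attack and preprocessing-aware approximation are considerably more involved.

```python
# Single-step FGSM perturbation: move each pixel in the direction that increases
# the model's loss, then clip back to the valid image range.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=8 / 255):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

# Toy usage: a placeholder linear classifier standing in for a CAPTCHA solver.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # perturbation is bounded by eps
```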

Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data

Title Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data
Authors Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth O. Stanley, Jeff Clune
Abstract This paper investigates the intriguing question of whether we can create learning algorithms that automatically generate training data, learning environments, and curricula in order to help AI agents rapidly learn. We show that such algorithms are possible via Generative Teaching Networks (GTNs), a general approach that is, in theory, applicable to supervised, unsupervised, and reinforcement learning, although our experiments only focus on the supervised case. GTNs are deep neural networks that generate data and/or training environments that a learner (e.g., a freshly initialized neural network) trains on for a few SGD steps before being tested on a target task. We then differentiate through the entire learning process via meta-gradients to update the GTN parameters to improve performance on the target task. GTNs have the beneficial property that they can theoretically generate any type of data or training environment, making their potential impact large. This paper introduces GTNs, discusses their potential, and showcases that they can substantially accelerate learning. We also demonstrate a practical and exciting application of GTNs: accelerating the evaluation of candidate architectures for neural architecture search (NAS), which is rate-limited by such evaluations, enabling massive speed-ups in NAS. GTN-NAS improves the NAS state of the art, finding higher performing architectures when controlling for the search proposal mechanism. GTN-NAS is also competitive with the overall state of the art approaches, which achieve top performance while using orders of magnitude less computation than typical NAS methods. Speculating forward, GTNs may represent a first step toward the ambitious goal of algorithms that generate their own training data and, in doing so, open a variety of interesting new research questions and directions.
Tasks Neural Architecture Search
Published 2019-12-17
URL https://arxiv.org/abs/1912.07768v1
PDF https://arxiv.org/pdf/1912.07768v1.pdf
PWC https://paperswithcode.com/paper/generative-teaching-networks-accelerating-1
Repo
Framework
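
The core GTN idea of differentiating through a learner's training step can be shown on a toy problem. In the sketch below, a linear "generator" emits synthetic feature/label pairs, a fresh linear learner takes one functional SGD step on them, and the generator is updated by the learner's loss on stand-in target data. Everything here (dimensions, learner, data) is a simplification, not the paper's setup.

```python
# Toy GTN loop: generator -> synthetic batch -> one inner SGD step on a fresh
# learner -> meta-loss on real data -> meta-gradient back into the generator.
import torch

torch.manual_seed(0)
real_x, real_y = torch.randn(64, 4), torch.randn(64, 1)       # stand-in target task

generator = torch.nn.Linear(8, 5)        # maps noise -> [4 features | 1 label]
gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-2)
inner_lr = 0.1

for step in range(100):
    # Generator proposes a synthetic training batch.
    synth = generator(torch.randn(32, 8))
    syn_x, syn_y = synth[:, :4], synth[:, 4:]

    # Freshly initialised learner weights (a plain linear model, kept functional).
    w = torch.randn(4, 1, requires_grad=True)
    inner_loss = ((syn_x @ w - syn_y) ** 2).mean()
    (grad_w,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    w_updated = w - inner_lr * grad_w                          # one inner SGD step

    # Meta-loss: how well the updated learner does on the real data.
    meta_loss = ((real_x @ w_updated - real_y) ** 2).mean()
    gen_opt.zero_grad()
    meta_loss.backward()
    gen_opt.step()
```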

FT-SWRL: A Fuzzy-Temporal Extension of Semantic Web Rule Language

Title FT-SWRL: A Fuzzy-Temporal Extension of Semantic Web Rule Language
Authors Abba Lawan, Abdur Rakib
Abstract We present FT-SWRL, a fuzzy temporal extension to the Semantic Web Rule Language (SWRL), which combines fuzzy theories based on the valid-time temporal model to provide a standard approach for modeling imprecise temporal domain knowledge in OWL ontologies. The proposal introduces a fuzzy temporal model for the semantic web, which is syntactically defined as a fuzzy temporal SWRL ontology (SWRL-FTO) with a new set of fuzzy temporal SWRL built-ins that define its semantics. The SWRL-FTO hierarchically defines the necessary linguistic terminologies and variables for the fuzzy temporal model. An example model demonstrating the usefulness of the fuzzy temporal SWRL built-ins for modeling imprecise temporal information is also presented. The fuzzification process of interval-based temporal logic is further discussed as a reasoning paradigm for our FT-SWRL rules, with the aim of achieving complete OWL-based fuzzy temporal reasoning. A literature review of fuzzy temporal representation approaches, both with and without the use of ontologies, leads to the conclusion that the FT-SWRL model can authoritatively serve as a formal specification for handling imprecise temporal expressions on the semantic web.
Tasks
Published 2019-11-27
URL https://arxiv.org/abs/1911.12399v1
PDF https://arxiv.org/pdf/1911.12399v1.pdf
PWC https://paperswithcode.com/paper/ft-swrl-a-fuzzy-temporal-extension-of
Repo
Framework
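
The fuzzification idea behind fuzzy temporal linguistic terms such as those in SWRL-FTO can be illustrated with a standard trapezoidal membership function. The term name and breakpoints below are hypothetical; the actual built-ins and their semantics are defined in the paper.

```python
# Trapezoidal membership function for a hypothetical fuzzy temporal term "recently".
def trapezoidal(x, a, b, c, d):
    """Membership degree of x for a trapezoid with feet a, d and shoulders b, c."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Degree to which an event 3 days old counts as "recent": fully recent up to 2 days,
# fading out by day 7 (hypothetical parameters).
print(trapezoidal(3, 0, 0, 2, 7))  # 0.8
```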

Dialog System Technology Challenge 7

Title Dialog System Technology Challenge 7
Authors Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D’Haro, Lazaros Polymenakos, Chulaka Gunasekara, Walter S. Lasecki, Jonathan K. Kummerfeld, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Xiang Gao, Huda Alamari, Tim K. Marks, Devi Parikh, Dhruv Batra
Abstract This paper introduces the Seventh Dialog System Technology Challenges (DSTC), which use shared datasets to explore the problem of building dialog systems. Recently, end-to-end dialog modeling approaches have been applied to various dialog tasks. The seventh DSTC (DSTC7) focuses on developing technologies related to end-to-end dialog systems for (1) sentence selection, (2) sentence generation and (3) audio visual scene aware dialog. This paper summarizes the overall setup and results of DSTC7, including detailed descriptions of the different tracks and provided datasets. We also describe overall trends in the submitted systems and the key results. Each track introduced new datasets and participants achieved impressive results using state-of-the-art end-to-end technologies.
Tasks
Published 2019-01-11
URL http://arxiv.org/abs/1901.03461v1
PDF http://arxiv.org/pdf/1901.03461v1.pdf
PWC https://paperswithcode.com/paper/dialog-system-technology-challenge-7
Repo
Framework

A Joint Planning and Learning Framework for Human-Aided Decision-Making

Title A Joint Planning and Learning Framework for Human-Aided Decision-Making
Authors Daoming Lyu, Fangkai Yang, Bo Liu, Steven Gustafson
Abstract Conventional reinforcement learning (RL) allows an agent to learn policies via environmental rewards only, with a long and slow learning curve, especially at the beginning stage. In contrast, human learning is usually much faster because prior and general knowledge and multiple information resources are utilized. In this paper, we propose a Planner-Actor-Critic architecture for huMAN-centered planning and learning (PACMAN), where an agent uses prior, high-level, deterministic symbolic knowledge to plan for goal-directed actions. PACMAN integrates the actor-critic algorithm of RL to fine-tune its behavior towards both environmental rewards and human feedback. To the best of our knowledge, this is the first unified framework where knowledge-based planning, RL, and human teaching jointly contribute to the policy learning of an agent. Our experiments demonstrate that PACMAN leads to a significant jump-start at the early stage of learning, converges rapidly and with small variance, and is robust to inconsistent, infrequent, and misleading feedback.
Tasks Decision Making
Published 2019-06-17
URL https://arxiv.org/abs/1906.07268v3
PDF https://arxiv.org/pdf/1906.07268v3.pdf
PWC https://paperswithcode.com/paper/pacman-a-planner-actor-critic-architecture
Repo
Framework
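
For context on the RL component PACMAN fine-tunes with, here is a minimal one-step actor-critic update in PyTorch. The symbolic planner and human-feedback channel that distinguish PACMAN are not modelled; the toy state encoding and hyperparameters are assumptions.

```python
# One-step actor-critic update: the actor maximises advantage-weighted log-probability,
# the critic regresses toward the bootstrapped return.
import torch

n_states, n_actions, gamma, lr = 5, 3, 0.99, 1e-2
policy = torch.nn.Linear(n_states, n_actions)    # actor: state -> action logits
value = torch.nn.Linear(n_states, 1)             # critic: state -> value estimate
opt = torch.optim.Adam(list(policy.parameters()) + list(value.parameters()), lr=lr)

def update(state, action, reward, next_state, done):
    v, v_next = value(state), value(next_state).detach()
    target = reward + gamma * v_next * (1.0 - done)
    advantage = (target - v).detach()
    log_prob = torch.log_softmax(policy(state), dim=-1)[action]
    loss = (-log_prob * advantage + (target - v) ** 2).sum()  # actor + critic loss
    opt.zero_grad(); loss.backward(); opt.step()

s = torch.zeros(n_states); s[0] = 1.0            # one-hot toy states
s2 = torch.zeros(n_states); s2[1] = 1.0
update(s, action=2, reward=1.0, next_state=s2, done=0.0)
```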

Efficient Inverse-Free Algorithms for Extreme Learning Machine Based on the Recursive Matrix Inverse and the Inverse LDL’ Factorization

Title Efficient Inverse-Free Algorithms for Extreme Learning Machine Based on the Recursive Matrix Inverse and the Inverse LDL’ Factorization
Authors Hufei Zhu, Chenghao Wei
Abstract The inverse-free extreme learning machine (ELM) algorithm proposed in [4] was based on an inverse-free algorithm to compute the regularized pseudo-inverse, which was deduced from an inverse-free recursive algorithm to update the inverse of a Hermitian matrix. Before that recursive algorithm was applied in [4], its improved version had been utilized in previous literature [9], [10]. Accordingly, from the improved recursive algorithm [9], [10], we deduce a more efficient inverse-free algorithm to update the regularized pseudo-inverse, from which we develop the proposed inverse-free ELM algorithm 1. Moreover, the proposed ELM algorithm 2 further reduces the computational complexity by computing the output weights directly from the updated inverse, avoiding computation of the regularized pseudo-inverse. Lastly, instead of updating the inverse, the proposed ELM algorithm 3 updates the LDL^T factor of the inverse by the inverse LDL^T factorization [11], to avoid numerical instabilities after a very large number of iterations [12]. With respect to the existing ELM algorithm, the proposed ELM algorithms 1, 2 and 3 are expected to require only (8+3)/M, (8+1)/M and (8+1)/M of the complexity, respectively, where M is the number of output nodes. In the numerical experiments, the standard ELM, the existing inverse-free ELM algorithm and the proposed ELM algorithms 1, 2 and 3 achieve the same performance in regression and classification, while all three proposed algorithms significantly accelerate the existing inverse-free ELM algorithm.
Tasks
Published 2019-11-12
URL https://arxiv.org/abs/1911.04856v1
PDF https://arxiv.org/pdf/1911.04856v1.pdf
PWC https://paperswithcode.com/paper/efficient-inverse-free-algorithms-for-extreme
Repo
Framework
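
For reference, the baseline (non-incremental) ELM solution that the paper's inverse-free recursions update efficiently can be written in a few lines of NumPy: a random hidden layer followed by ridge-regularized output weights. The data and sizes below are placeholders, and the paper's recursive updates are not shown.

```python
# Baseline ELM fit: random hidden layer, then output weights from regularised
# least squares, beta = (H'H + reg*I)^(-1) H'T.
import numpy as np

rng = np.random.default_rng(0)
X, T = rng.standard_normal((200, 10)), rng.standard_normal((200, 3))  # inputs, targets

L, reg = 50, 1e-2                                   # hidden nodes, ridge parameter
W, b = rng.standard_normal((10, L)), rng.standard_normal(L)
H = np.tanh(X @ W + b)                              # hidden-layer output matrix

beta = np.linalg.solve(H.T @ H + reg * np.eye(L), H.T @ T)
print(((H @ beta - T) ** 2).mean())                 # training MSE
```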

An Introduction to Artificial Intelligence and Solutions to the Problems of Algorithmic Discrimination

Title An Introduction to Artificial Intelligence and Solutions to the Problems of Algorithmic Discrimination
Authors Nicholas Schmidt, Bryce Stephens
Abstract There is substantial evidence that Artificial Intelligence (AI) and Machine Learning (ML) algorithms can generate bias against minorities, women, and other protected classes. Federal and state laws have been enacted to protect consumers from discrimination in credit, housing, and employment, and regulators and agencies are tasked with enforcing these laws. Additionally, there are laws in place to ensure that consumers understand why they are denied access to services and products, such as consumer loans. In this article, we provide an overview of the potential benefits and risks associated with the use of algorithms and data, and focus specifically on fairness. While our observations generalize to many contexts, we focus on the fairness concerns raised in consumer credit and the legal requirements of the Equal Credit Opportunity Act. We propose a methodology for evaluating algorithmic fairness and minimizing algorithmic bias that aligns with the provisions of federal and state anti-discrimination statutes that outlaw overt disparate treatment and, specifically, disparate impact discrimination. We argue that while the use of AI and ML algorithms heightens potential discrimination risks, these risks can be evaluated and mitigated, but doing so requires a deep understanding of these algorithms and the contexts and domains in which they are being used.
Tasks
Published 2019-11-08
URL https://arxiv.org/abs/1911.05755v1
PDF https://arxiv.org/pdf/1911.05755v1.pdf
PWC https://paperswithcode.com/paper/an-introduction-to-artificial-intelligence
Repo
Framework
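
One widely used disparate-impact style statistic, the adverse-impact ratio with the four-fifths rule of thumb, gives a concrete flavour of the kind of fairness check discussed above. The sketch and its counts are illustrative only and are not the paper's proposed methodology.

```python
# Adverse-impact ratio: approval rate of the protected group divided by the approval
# rate of the reference group; values below 0.8 are commonly flagged for review.
def adverse_impact_ratio(approved_protected, total_protected,
                         approved_reference, total_reference):
    rate_protected = approved_protected / total_protected
    rate_reference = approved_reference / total_reference
    return rate_protected / rate_reference

ratio = adverse_impact_ratio(30, 100, 50, 100)   # hypothetical approval counts
print(ratio, "flag" if ratio < 0.8 else "ok")
```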

Precision annealing Monte Carlo methods for statistical data assimilation and machine learning

Title Precision annealing Monte Carlo methods for statistical data assimilation and machine learning
Authors Zheng Fang, Adrian S. Wong, Kangbo Hao, Alexander J. A. Ty, Henry D. I. Abarbanel
Abstract In statistical data assimilation (SDA) and supervised machine learning (ML), we wish to transfer information from observations to a model of the processes underlying those observations. For SDA, the model consists of a set of differential equations that describe the dynamics of a physical system. For ML, the model is usually constructed using other strategies. In this paper, we develop a systematic formulation based on Monte Carlo sampling to achieve such information transfer. Following the derivation of an appropriate target distribution, we present the formulation based on the standard Metropolis-Hastings (MH) procedure and the Hamiltonian Monte Carlo (HMC) method for performing the high-dimensional integrals that appear. To the extensive literature on MH and HMC, we add (1) an annealing method using a hyperparameter that governs the precision of the model to identify and explore the highest probability regions of phase space dominating those integrals, and (2) a strategy for initializing the state space search. The efficacy of the proposed formulation is demonstrated using a nonlinear dynamical model with chaotic solutions widely used in geophysics.
Tasks
Published 2019-07-06
URL https://arxiv.org/abs/1907.03137v2
PDF https://arxiv.org/pdf/1907.03137v2.pdf
PWC https://paperswithcode.com/paper/precision-annealing-monte-carlo-methods-for
Repo
Framework
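
As a minimal reference point for the MH machinery the paper extends, the sketch below runs a plain random-walk Metropolis-Hastings chain on a toy target whose model-error term is scaled by a fixed precision parameter. The annealing schedule over that precision, which is part of the paper's contribution, is omitted.

```python
# Random-walk Metropolis-Hastings on a toy negative log target with a fixed
# precision parameter Rf scaling the model-error term.
import numpy as np

rng = np.random.default_rng(0)
obs = 1.0

def neg_log_target(x, Rf=10.0):
    return 0.5 * Rf * (x - obs) ** 2 + 0.5 * x ** 2    # model-error term + prior term

def metropolis_hastings(n_steps=5000, step=0.5):
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + step * rng.standard_normal()
        # Accept with probability exp(neg_log(current) - neg_log(proposal)).
        if np.log(rng.random()) < neg_log_target(x) - neg_log_target(proposal):
            x = proposal
        samples.append(x)
    return np.array(samples)

chain = metropolis_hastings()
print(chain.mean(), chain.std())
```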

Towards Empathetic Planning

Title Towards Empathetic Planning
Authors Maayan Shvo, Sheila A. McIlraith
Abstract Critical to successful human interaction is a capacity for empathy - the ability to understand and share the thoughts and feelings of another. As Artificial Intelligence (AI) systems are increasingly required to interact with humans in a myriad of settings, it is important to enable AI to wield empathy as a tool to benefit those it interacts with. In this paper, we work towards this goal by bringing together a number of important concepts: empathy, AI planning, and reasoning in the presence of knowledge and belief. We formalize the notion of Empathetic Planning which is informed by the beliefs and affective state of the empathizee. We appeal to an epistemic logic framework to represent the beliefs of the empathizee and propose AI planning-based computational approaches to compute empathetic solutions. We illustrate the potential benefits of our approach by conducting a study where we evaluate participants’ perceptions of the agent’s empathetic abilities and assistive capabilities.
Tasks
Published 2019-06-14
URL https://arxiv.org/abs/1906.06436v1
PDF https://arxiv.org/pdf/1906.06436v1.pdf
PWC https://paperswithcode.com/paper/towards-empathetic-planning
Repo
Framework

A Probabilistic approach for Learning Embeddings without Supervision

Title A Probabilistic approach for Learning Embeddings without Supervision
Authors Ujjal Kr Dutta, Mehrtash Harandi, Chandra Sekhar Chellu
Abstract For challenging machine learning problems such as zero-shot learning and fine-grained categorization, embedding learning is the machinery of choice because of its ability to learn generic notions of similarity, as opposed to class-specific concepts in standard classification models. Embedding learning aims at learning discriminative representations of data such that similar examples are pulled closer, while dissimilar ones are pushed away. Despite their exemplary performance, supervised embedding learning approaches require a huge number of annotations for training. This restricts their applicability for large datasets in new applications where obtaining labels requires extensive manual effort and domain knowledge. In this paper, we propose to learn an embedding in a completely unsupervised manner without using any class labels. Using a graph-based clustering approach to obtain pseudo-labels, we form triplet-based constraints following a metric learning paradigm. Our novel embedding learning approach uses a probabilistic notion that intuitively minimizes the chances of each triplet violating a geometric constraint. Due to the nature of the search space, we learn the parameters of our approach using Riemannian geometry. Our proposed approach performs competitively with state-of-the-art approaches.
Tasks Metric Learning, Zero-Shot Learning
Published 2019-12-17
URL https://arxiv.org/abs/1912.08275v1
PDF https://arxiv.org/pdf/1912.08275v1.pdf
PWC https://paperswithcode.com/paper/a-probabilistic-approach-for-learning
Repo
Framework
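
A much-simplified version of the pseudo-label-then-triplet pipeline can be sketched as follows, using k-means in place of the paper's graph-based clustering and a standard triplet margin loss in place of its probabilistic objective and Riemannian optimisation. All data and hyperparameters are placeholders.

```python
# Pseudo-labels via clustering, then triplets (anchor, positive, negative) and a
# standard triplet margin loss.
import torch
from sklearn.cluster import KMeans

torch.manual_seed(0)
features = torch.randn(100, 16)                       # unlabelled embeddings
pseudo = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features.numpy())
pseudo = torch.as_tensor(pseudo)

# One triplet per anchor: a positive from the same pseudo-cluster, a negative from
# a different one.
anchors, positives, negatives = [], [], []
for i in range(len(features)):
    same = torch.where((pseudo == pseudo[i]) & (torch.arange(len(pseudo)) != i))[0]
    diff = torch.where(pseudo != pseudo[i])[0]
    if len(same) and len(diff):
        anchors.append(i)
        positives.append(same[0].item())
        negatives.append(diff[0].item())

loss = torch.nn.functional.triplet_margin_loss(
    features[anchors], features[positives], features[negatives], margin=1.0)
print(loss.item())
```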

Defending against Whitebox Adversarial Attacks via Randomized Discretization

Title Defending against Whitebox Adversarial Attacks via Randomized Discretization
Authors Yuchen Zhang, Percy Liang
Abstract Adversarial perturbations dramatically decrease the accuracy of state-of-the-art image classifiers. In this paper, we propose and analyze a simple and computationally efficient defense strategy: inject random Gaussian noise, discretize each pixel, and then feed the result into any pre-trained classifier. Theoretically, we show that our randomized discretization strategy reduces the KL divergence between original and adversarial inputs, leading to a lower bound on the classification accuracy of any classifier against any (potentially whitebox) $\ell_\infty$-bounded adversarial attack. Empirically, we evaluate our defense on adversarial examples generated by a strong iterative PGD attack. On ImageNet, our defense is more robust than adversarially-trained networks and the winning defenses of the NIPS 2017 Adversarial Attacks & Defenses competition.
Tasks Adversarial Attack
Published 2019-03-25
URL http://arxiv.org/abs/1903.10586v1
PDF http://arxiv.org/pdf/1903.10586v1.pdf
PWC https://paperswithcode.com/paper/defending-against-whitebox-adversarial
Repo
Framework
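
The defense lends itself to a very short sketch: add Gaussian noise, quantise each pixel to a few levels, and pass the result to an unmodified classifier. The noise scale and number of levels below are illustrative, and the paper's exact discretization scheme is not reproduced.

```python
# Randomized-discretization style preprocessing: Gaussian noise, then snap each
# pixel to a small set of evenly spaced levels in [0, 1].
import torch

def randomized_discretize(images, sigma=0.1, levels=8):
    noisy = (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)
    return torch.round(noisy * (levels - 1)) / (levels - 1)

x = torch.rand(2, 3, 32, 32)                              # a batch of images in [0, 1]
x_def = randomized_discretize(x)
print(x_def.unique().numel(), "distinct pixel values")    # at most `levels`
```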