May 6, 2019

3059 words 15 mins read

Paper Group ANR 181


Efficient Non-oblivious Randomized Reduction for Risk Minimization with Improved Excess Risk Guarantee. Artificial Persuasion in Pedagogical Games. Scalable Computation of Optimized Queries for Sequential Diagnosis. DeepCare: A Deep Dynamic Memory Model for Predictive Medicine. Dynamical optical flow of saliency maps for predicting visual attention …

Efficient Non-oblivious Randomized Reduction for Risk Minimization with Improved Excess Risk Guarantee

Title Efficient Non-oblivious Randomized Reduction for Risk Minimization with Improved Excess Risk Guarantee
Authors Yi Xu, Haiqin Yang, Lijun Zhang, Tianbao Yang
Abstract In this paper, we address learning problems for high dimensional data. Previously, oblivious random projection based approaches that project high dimensional features onto a random subspace have been used in practice to tackle the high-dimensionality challenge in machine learning. Recently, various non-oblivious randomized reduction methods have been developed and deployed for solving many numerical problems such as matrix product approximation, low-rank matrix approximation, etc. However, they are less explored for machine learning tasks, e.g., classification. More seriously, the theoretical analysis of excess risk bounds for risk minimization, an important measure of generalization performance, has not been established for non-oblivious randomized reduction methods. It therefore remains an open problem what benefit they offer over previous oblivious random projection based approaches. To tackle these challenges, we propose an algorithmic framework for employing non-oblivious randomized reduction methods in general empirical risk minimization for machine learning tasks, where the original high-dimensional features are projected onto a random subspace that is derived from the data with a small matrix approximation error. We then derive the first excess risk bound for the proposed non-oblivious randomized reduction approach without requiring strong assumptions on the training data. The established excess risk bound shows that the proposed approach provides much better generalization performance, and it also sheds more insight on the differences between randomized reduction approaches. Finally, we conduct extensive experiments on both synthetic and real-world benchmark datasets, whose dimension scales to $O(10^7)$, to demonstrate the efficacy of our proposed approach.
Tasks
Published 2016-12-06
URL http://arxiv.org/abs/1612.01663v1
PDF http://arxiv.org/pdf/1612.01663v1.pdf
PWC https://paperswithcode.com/paper/efficient-non-oblivious-randomized-reduction
Repo
Framework
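
As a rough illustration of the contrast this abstract draws, here is a minimal sketch, not the authors' algorithm: an oblivious Gaussian projection compared with a non-oblivious subspace derived from the data via a randomized range finder. The dimensions, synthetic data, and scikit-learn classifier are all illustrative assumptions.

```python
# Minimal sketch (not the paper's exact algorithm): oblivious vs.
# non-oblivious randomized feature reduction before risk minimization.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, k = 1000, 5000, 50                       # samples, features, target dim
X = rng.standard_normal((n, d))
y = (X[:, :5].sum(axis=1) > 0).astype(int)     # synthetic labels

# Oblivious: project onto a data-independent random subspace.
G = rng.standard_normal((d, k)) / np.sqrt(k)
X_obl = X @ G

# Non-oblivious: derive the subspace from the data itself (randomized
# range finder), giving a small matrix approximation error ||X - X Q Q^T||.
Omega = rng.standard_normal((n, k))
Q, _ = np.linalg.qr(X.T @ Omega)               # d x k orthonormal basis
X_nonobl = X @ Q

for name, Z in [("oblivious", X_obl), ("non-oblivious", X_nonobl)]:
    acc = LogisticRegression(max_iter=1000).fit(Z, y).score(Z, y)
    print(f"{name}: train accuracy {acc:.3f}")
```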

Artificial Persuasion in Pedagogical Games

Title Artificial Persuasion in Pedagogical Games
Authors Zhiwei Zeng
Abstract A Persuasive Teachable Agent (PTA) is a special type of Teachable Agent which incorporates a persuasion theory in order to provide persuasive and more personalized feedback to the student. By employing persuasion techniques, the PTA seeks to keep the student in a high-motivation, high-ability state in which he or she has higher cognitive ability and his or her changes in attitudes are more persistent. However, the existing model of the PTA still has a few limitations. Firstly, the existing PTA model focuses on modelling the PTA’s ability to persuade, but does not model its ability to be taught by the student and to practice the knowledge it has learnt. Secondly, the quantitative model for computational processes in the PTA has low reusability. Thirdly, there is still a gap between theoretical models and practical implementation of the PTA. To address these three limitations, this book proposes an improved agent model which follows a goal-oriented approach and models the PTA in its totality by integrating the Persuasion Reasoning of the PTA with the Teachability Reasoning and the Practicability Reasoning. The project also proposes a more abstract and generalized quantitative model for the computations in the PTA. With a higher level of abstraction, the reusability of the quantitative model is also improved. A new system architecture is introduced to bridge the gap between theoretical models and implementation of the PTA.
Tasks
Published 2016-01-23
URL http://arxiv.org/abs/1601.06245v1
PDF http://arxiv.org/pdf/1601.06245v1.pdf
PWC https://paperswithcode.com/paper/artificial-persuasion-in-pedagogical-games
Repo
Framework

Scalable Computation of Optimized Queries for Sequential Diagnosis

Title Scalable Computation of Optimized Queries for Sequential Diagnosis
Authors Patrick Rodler, Wolfgang Schmid, Kostyantyn Shchekotykhin
Abstract In many model-based diagnosis applications it is impossible to provide a set of observations and/or measurements that allows the real cause of a fault to be identified. Therefore, diagnosis systems often return many possible candidates, leaving the burden of selecting the correct diagnosis to a user. Sequential diagnosis techniques solve this problem by automatically generating a sequence of queries to some oracle. The answers to these queries provide additional information necessary to gradually restrict the search space by removing diagnosis candidates inconsistent with the answers. During query computation, existing sequential diagnosis methods often require the generation of many unnecessary query candidates and rely strongly on expensive logical reasoners. We tackle this issue by devising efficient heuristic query search methods. The proposed methods enable, for the first time, completely reasoner-free query generation while guaranteeing optimality conditions of the returned query, e.g. minimal cardinality or best understandability, that existing methods cannot realize. Hence, the performance of this approach is independent of the (complexity of the) diagnosed system. Experiments conducted on real-world problems show that the new approach is highly scalable and outperforms existing methods by orders of magnitude.
Tasks Sequential Diagnosis
Published 2016-12-14
URL http://arxiv.org/abs/1612.04791v3
PDF http://arxiv.org/pdf/1612.04791v3.pdf
PWC https://paperswithcode.com/paper/scalable-computation-of-optimized-queries-for
Repo
Framework
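
For intuition about reasoner-free query computation, here is a minimal sketch under strong assumptions: diagnosis candidates are given as plain sets of faulty components, and a split-in-half style heuristic picks the component whose answer prunes the candidate set most in the worst case. The toy diagnoses and the heuristic are illustrative, not the paper's method.

```python
# Toy sketch: query selection over precomputed diagnosis candidates,
# with no logical reasoner involved. Each diagnosis is a set of faulty
# components; asking about a component splits the candidate set.
diagnoses = [{"A", "B"}, {"A", "C"}, {"B", "C"}, {"C", "D"}]

def best_query(diagnoses):
    components = set().union(*diagnoses)
    def worst_case(c):
        yes = sum(1 for d in diagnoses if c in d)
        return max(yes, len(diagnoses) - yes)  # candidates left after answer
    return min(components, key=worst_case)     # most informative question

q = best_query(diagnoses)
print("ask about component:", q)
remaining = [d for d in diagnoses if q in d]   # simulate answer "faulty"
print("remaining candidates:", remaining)
```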

DeepCare: A Deep Dynamic Memory Model for Predictive Medicine

Title DeepCare: A Deep Dynamic Memory Model for Predictive Medicine
Authors Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh
Abstract Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, recorded in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories through explicit memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces time parameterizations to handle irregularly timed events by moderating the forgetting and consolidation of memory cells. DeepCare also incorporates medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden – diabetes and mental health – the results show improved modeling and risk prediction accuracy.
Tasks
Published 2016-02-01
URL http://arxiv.org/abs/1602.00357v2
PDF http://arxiv.org/pdf/1602.00357v2.pdf
PWC https://paperswithcode.com/paper/deepcare-a-deep-dynamic-memory-model-for
Repo
Framework
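
A minimal sketch of the time-parameterization idea, assuming one plausible form of it (exponential memory decay with elapsed time); DeepCare's actual gating is more elaborate. All weights and dimensions here are illustrative.

```python
# Sketch of an LSTM step whose memory is moderated by the elapsed time
# between care episodes (assumed decay form, not DeepCare's exact one).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def time_aware_lstm_step(x, h, c, dt, W, U, b, decay=0.5):
    """One LSTM step; dt is the gap (e.g., in days) since the last episode."""
    z = W @ x + U @ h + b                      # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    c = c * np.exp(-decay * dt)                # time-based memory decay (assumed)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

dim, hid = 8, 16
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * hid, dim)) * 0.1
U = rng.standard_normal((4 * hid, hid)) * 0.1
b = np.zeros(4 * hid)
h, c = np.zeros(hid), np.zeros(hid)
for dt in [1.0, 30.0, 2.0]:                   # irregular gaps between episodes
    x = rng.standard_normal(dim)
    h, c = time_aware_lstm_step(x, h, c, dt, W, U, b)
print(h[:4])
```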

Dynamical optical flow of saliency maps for predicting visual attention

Title Dynamical optical flow of saliency maps for predicting visual attention
Authors Aniello Raffaele Patrone, Christian Valuch, Ulrich Ansorge, Otmar Scherzer
Abstract Saliency maps are used to understand human attention and visual fixation. However, while very well established for static images, there is no general agreement on how to compute a saliency map of dynamic scenes. In this paper we propose a mathematically rigorous approach to this problem, including static saliency maps of each video frame for the calculation of the optical flow. Taking into account static saliency maps for calculating the optical flow allows for overcoming the aperture problem. Our approach is able to explain human fixation behavior in situations which pose challenges to standard approaches, such as when a fixated object disappears behind an occlusion and reappears after several frames. In addition, we quantitatively compare our model against alternative solutions using a large eye tracking data set. Together, our results suggest that assessing optical flow information across a series of saliency maps gives a highly accurate and useful account of human overt attention in dynamic scenes.
Tasks Eye Tracking, Optical Flow Estimation
Published 2016-06-23
URL http://arxiv.org/abs/1606.07324v1
PDF http://arxiv.org/pdf/1606.07324v1.pdf
PWC https://paperswithcode.com/paper/dynamical-optical-flow-of-saliency-maps-for
Repo
Framework
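
A minimal sketch of the pipeline's general shape, not the authors' variational formulation: compute a static saliency map per frame, then estimate optical flow between consecutive saliency maps instead of between raw frames. It assumes the opencv-contrib package (for cv2.saliency) and a hypothetical input file video.mp4.

```python
# Sketch: optical flow over saliency maps rather than raw frames.
# Requires opencv-contrib-python for the cv2.saliency module.
import cv2
import numpy as np

cap = cv2.VideoCapture("video.mp4")            # hypothetical input file
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()

prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    _, sal = saliency.computeSaliency(frame)   # float32 map in [0, 1]
    sal = (sal * 255).astype(np.uint8)
    if prev is not None:
        flow = cv2.calcOpticalFlowFarneback(prev, sal, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        print("mean flow magnitude:", np.linalg.norm(flow, axis=2).mean())
    prev = sal
cap.release()
```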

Package equivalence in complex software network

Title Package equivalence in complex software network
Authors Tomislav Slijepčević
Abstract The public package registry npm is one of the biggest software registries. With its 216,911 software packages, it forms a big network of software dependencies. In this paper we evaluate various methods for finding similar packages in the npm network, using only the structure of the graph. Namely, we want to find a way of categorizing similar packages, which would be useful for recommendation systems. The registry’s size enables us to compute meaningful results, as it softens the particularities of the graph. npm is also well known as the default package repository of Node.js, which we believe will make our results interesting to more people than a less used package repository would. This makes it a good subject for an analysis of software networks.
Tasks Recommendation Systems
Published 2016-02-11
URL http://arxiv.org/abs/1602.03681v1
PDF http://arxiv.org/pdf/1602.03681v1.pdf
PWC https://paperswithcode.com/paper/package-equivalence-in-complex-software
Repo
Framework
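
One simple graph-only similarity that such a categorization could build on, purely as an illustration (the paper evaluates several methods, not necessarily this one): the Jaccard index of two packages' dependency neighborhoods. The toy dependency map is invented.

```python
# Sketch: structural package similarity via Jaccard index of
# dependency neighborhoods, using only the graph structure.
deps = {                                       # package -> set of dependencies
    "left-pad": {"util"},
    "right-pad": {"util"},
    "express": {"body-parser", "cookie", "util"},
    "koa": {"cookie", "util"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

pkgs = list(deps)
for i, p in enumerate(pkgs):
    for q in pkgs[i + 1:]:
        print(f"{p:10s} ~ {q:10s}: {jaccard(deps[p], deps[q]):.2f}")
```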

Convergence Analysis of MAP based Blur Kernel Estimation

Title Convergence Analysis of MAP based Blur Kernel Estimation
Authors Sunghyun Cho, Seungyong Lee
Abstract One popular approach to blind deconvolution is to formulate a maximum a posteriori (MAP) problem with sparsity priors on the gradients of the latent image, and then alternately estimate the blur kernel and the latent image. While several successful MAP based methods have been proposed, there has been much controversy and confusion about their convergence, because sparsity priors have been shown to prefer blurry images to sharp natural images. In this paper, we revisit this problem and provide an analysis of the convergence of MAP based approaches. We first introduce a slight modification to a conventional joint energy function for blind deconvolution. The reformulated energy function yields the same alternating estimation process, but more clearly reveals how blind deconvolution works. We then show that the energy function can actually favor the right solution instead of the no-blur solution under certain conditions, which explains the success of previous MAP based approaches. The reformulated energy function and our conditions for convergence also provide a way to compare the qualities of different blur kernels, and we demonstrate its applicability to automatic blur kernel size selection, blur kernel estimation using light streaks, and defocus estimation.
Tasks Defocus Estimation
Published 2016-11-23
URL http://arxiv.org/abs/1611.07752v2
PDF http://arxiv.org/pdf/1611.07752v2.pdf
PWC https://paperswithcode.com/paper/convergence-analysis-of-map-based-blur-kernel
Repo
Framework
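
A minimal sketch of the generic alternating MAP loop this abstract analyzes, in 1-D to stay short; the energy, step sizes, and initialization are illustrative assumptions, not the paper's reformulated energy function.

```python
# Sketch: alternate between a latent-signal update under a gradient-
# sparsity prior and a least-squares blur-kernel update (1-D toy).
import numpy as np

def convolve(x, k):
    return np.convolve(x, k, mode="same")

def deblur_alternating(y, ksize=5, iters=10, lam=0.01, step=0.1):
    x = y.copy()                               # init latent signal with input
    k = np.ones(ksize) / ksize                 # init kernel as uniform blur
    for _ in range(iters):
        # x-step: gradient descent on ||k*x - y||^2 + lam*||grad x||_1
        for _ in range(20):
            r = convolve(x, k) - y
            grad = convolve(r, k[::-1]) + lam * np.sign(np.gradient(x))
            x -= step * grad
        # k-step: least-squares fit of the kernel given x (shift matrix)
        pad = ksize // 2
        X = np.column_stack([np.roll(x, s) for s in range(-pad, pad + 1)])
        k, *_ = np.linalg.lstsq(X, y, rcond=None)
        k = np.clip(k, 0, None); k /= k.sum()  # nonnegative, sums to 1
    return x, k

rng = np.random.default_rng(0)
x_true = (rng.random(128) > 0.95).astype(float)          # sparse spikes
y = convolve(x_true, np.array([0.1, 0.2, 0.4, 0.2, 0.1]))
x_est, k_est = deblur_alternating(y)
print("estimated kernel:", np.round(k_est, 2))
```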

SE3-Nets: Learning Rigid Body Motion using Deep Neural Networks

Title SE3-Nets: Learning Rigid Body Motion using Deep Neural Networks
Authors Arunkumar Byravan, Dieter Fox
Abstract We introduce SE3-Nets, which are deep neural networks designed to model and learn rigid body motion from raw point cloud data. Based only on sequences of depth images along with action vectors and point-wise data associations, SE3-Nets learn to segment the affected object parts and predict their motion resulting from the applied force. Rather than learning point-wise flow vectors, SE3-Nets predict SE3 transformations for different parts of the scene. Using simulated depth data of a tabletop scene and a robot manipulator, we show that the structure underlying SE3-Nets enables them to generate a far more consistent prediction of object motion than traditional flow-based networks. Additional experiments with a depth camera observing a Baxter robot pushing objects on a table show that SE3-Nets also work well on real data.
Tasks
Published 2016-06-08
URL http://arxiv.org/abs/1606.02378v3
PDF http://arxiv.org/pdf/1606.02378v3.pdf
PWC https://paperswithcode.com/paper/se3-nets-learning-rigid-body-motion-using
Repo
Framework
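
A minimal sketch of the output side of this idea, not the network itself: given per-point segment labels and one SE(3) transform per segment, the next point cloud is predicted by moving each part rigidly. The masks and transforms below are invented stand-ins for what SE3-Nets would predict.

```python
# Sketch: apply per-segment rigid SE(3) transforms to a point cloud.
import numpy as np

def se3_apply(points, R, t):
    """Apply a rigid transform x -> R x + t to an (N, 3) point array."""
    return points @ R.T + t

def predict_next_cloud(points, masks, transforms):
    out = np.zeros_like(points)
    for seg_id, (R, t) in transforms.items():
        sel = masks == seg_id
        out[sel] = se3_apply(points[sel], R, t)
    return out

rng = np.random.default_rng(0)
points = rng.random((100, 3))
masks = (points[:, 0] > 0.5).astype(int)       # two rigid "parts"
theta = np.deg2rad(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
transforms = {0: (np.eye(3), np.zeros(3)),     # background stays put
              1: (Rz, np.array([0.05, 0.0, 0.0]))}  # pushed part moves
print(predict_next_cloud(points, masks, transforms)[:3])
```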

The Algorithm of Islamic Jurisprudence (Fiqh) with Validation of an Entscheidungsproblem

Title The Algorithm of Islamic Jurisprudence (Fiqh) with Validation of an Entscheidungsproblem
Authors Elnaserledinellah Mahmood Abdelwahab, Karim Daghbouche, Nadra Ahmad Shannan
Abstract The historic background of algorithmic processing with regard to etymology and methodology is translated into terms of mathematical logic and Computer Science. A formal logic structure is introduced by exemplary questions posed to Fiqh-chapters to define a logic query language. As a foundation, a generic algorithm for deciding Fiqh-rulings is designed to enable and further leverage rule of law (vs. rule by law) with full transparency and complete algorithmic coverage of Islamic law, eventually providing legal security, legal equality, and full legal accountability. This is implemented by disentangling and reinstating classic Fiqh-methodology (usul al-Fiqh) with the expressive power of subsets of First Order Logic (FOL), sustainably substituting ad hoc reasoning with falsifiable rational argumentation. The results are discussed in formal terms of completeness, decidability and complexity of formal Fiqh-systems. An Entscheidungsproblem for formal Fiqh-systems is formulated and validated.
Tasks
Published 2016-03-10
URL http://arxiv.org/abs/1604.00266v1
PDF http://arxiv.org/pdf/1604.00266v1.pdf
PWC https://paperswithcode.com/paper/the-algorithm-of-islamic-jurisprudence-fiqh
Repo
Framework

DTM: Deformable Template Matching

Title DTM: Deformable Template Matching
Authors Hyungtae Lee, Heesung Kwon, Ryan M. Robinson, William D. Nothwang
Abstract A novel template matching algorithm that can incorporate the concept of deformable parts is presented in this paper. Unlike the deformable part model (DPM) employed in object recognition, the proposed template-matching approach, called Deformable Template Matching (DTM), does not require a training step. Instead, deformation is achieved by a set of predefined basic rules (e.g. the left sub-patch cannot pass across the right patch). Experimental evaluation of this new method using the PASCAL VOC 07 dataset demonstrated substantial performance improvement over conventional template matching algorithms. Additionally, to confirm the applicability of DTM, the concept is applied to the generation of a rotation-invariant SIFT descriptor. Experimental evaluation employing deformable matching of SIFT features shows an increased number of matching features compared to conventional SIFT matching.
Tasks Object Recognition
Published 2016-04-12
URL http://arxiv.org/abs/1604.03518v1
PDF http://arxiv.org/pdf/1604.03518v1.pdf
PWC https://paperswithcode.com/paper/dtm-deformable-template-matching
Repo
Framework
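
A toy sketch of the matching style this abstract describes, under simplifying assumptions (two horizontal sub-patches, exhaustive search, dot-product scores): each sub-patch slides independently, subject to the basic rule that the left sub-patch cannot pass across the right one.

```python
# Toy deformable matching: two sub-patches with an ordering constraint.
import numpy as np

def match_deformable(image, left, right, max_gap=4):
    h, wl = left.shape
    wr = right.shape[1]
    H, W = image.shape
    best = (-np.inf, None)
    for y in range(H - h + 1):
        for xl in range(W - wl - wr):
            # right patch may slide, but never left of the left patch
            for gap in range(max_gap + 1):
                xr = xl + wl + gap
                if xr + wr > W:
                    break
                s = (image[y:y+h, xl:xl+wl] * left).sum() + \
                    (image[y:y+h, xr:xr+wr] * right).sum()
                if s > best[0]:
                    best = (s, (y, xl, xr))
    return best

rng = np.random.default_rng(0)
img = rng.random((20, 30))
tmpl = img[5:9, 10:18].copy()                  # plant a known template
score, loc = match_deformable(img, tmpl[:, :4], tmpl[:, 4:])
print("best score", round(score, 2), "at", loc)
```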

Non-Gaussian Random Generators in Bacteria Foraging Algorithm for Multiobjective Optimization

Title Non-Gaussian Random Generators in Bacteria Foraging Algorithm for Multiobjective Optimization
Authors Timothy Ganesan, Pandian Vasant, Irraivan Elamvazuthi
Abstract Random generators or stochastic engines are a key component in the structure of metaheuristic algorithms. This work investigates the effects of non-Gaussian stochastic engines on the performance of metaheuristics when solving a real-world optimization problem. The bacteria foraging algorithm (BFA) was employed in tandem with four random generators (stochastic engines). The stochastic engines operate using the Weibull distribution, Gamma distribution, Gaussian distribution and a chaotic mechanism; the two non-Gaussian distributions are the Weibull and Gamma distributions. The approaches developed were implemented on the real-world multi-objective resin bonded sand mould problem. The Pareto frontiers obtained were benchmarked using two metrics: the hypervolume indicator (HVI) and the proposed Average Explorative Rate (AER) metric. Detailed discussions of the effects of non-Gaussian random generators in metaheuristics are provided from various perspectives.
Tasks Multiobjective Optimization
Published 2016-05-24
URL http://arxiv.org/abs/1605.07364v1
PDF http://arxiv.org/pdf/1605.07364v1.pdf
PWC https://paperswithcode.com/paper/non-gaussian-random-generators-in-bacteria
Repo
Framework
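
A minimal sketch of the plug-in idea, not the paper's BFA implementation: the random step direction in a bacterial-foraging-style search is drawn from interchangeable stochastic engines (Gaussian, Weibull, Gamma), so swapping distributions is a one-line change. The objective and parameters are toy choices.

```python
# Sketch: interchangeable stochastic engines driving the tumble step
# of a bacterial-foraging-style search on a toy objective.
import numpy as np

rng = np.random.default_rng(0)
engines = {
    "gaussian": lambda size: rng.normal(0.0, 1.0, size),
    "weibull":  lambda size: rng.weibull(1.5, size),
    "gamma":    lambda size: rng.gamma(2.0, 1.0, size),
}

def sphere(x):                                 # toy objective to minimize
    return float(np.sum(x ** 2))

def bfa_like_search(engine, dim=5, steps=200, step_size=0.1):
    x = rng.uniform(-3, 3, dim)
    best = sphere(x)
    for _ in range(steps):
        direction = engine(dim) * rng.choice([-1.0, 1.0], dim)
        direction /= np.linalg.norm(direction)
        cand = x + step_size * direction       # one chemotactic "tumble"
        if sphere(cand) < best:                # keep improving moves ("swim")
            x, best = cand, sphere(cand)
    return best

for name, eng in engines.items():
    print(f"{name:8s}: best objective {bfa_like_search(eng):.4f}")
```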

Using Well-Understood Single-Objective Functions in Multiobjective Black-Box Optimization Test Suites

Title Using Well-Understood Single-Objective Functions in Multiobjective Black-Box Optimization Test Suites
Authors Dimo Brockhoff, Tea Tusar, Anne Auger, Nikolaus Hansen
Abstract Several test function suites are being used for numerical benchmarking of multiobjective optimization algorithms. While they have some desirable properties, like well-understood Pareto sets and Pareto fronts of various shapes, most of the currently used functions possess characteristics that are arguably under-represented in real-world problems. They mainly stem from the easier construction of such functions and result in improbable properties such as separability, optima located exactly at the boundary constraints, and the existence of variables that solely control the distance between a solution and the Pareto front. Here, we propose an alternative way of constructing multiobjective problems: by combining existing single-objective problems from the literature. We describe in particular the bbob-biobj test suite with 55 bi-objective functions in the continuous domain, and its extended version with 92 bi-objective functions (bbob-biobj-ext). Both test suites have been implemented in the COCO platform for black-box optimization benchmarking. Finally, we recommend a general procedure for creating test suites for an arbitrary number of objectives. Besides providing the formal function definitions and presenting their (known) properties, this paper also aims at giving the rationale behind our approach in terms of groups of functions with similar properties, objective space normalization, and problem instances. The latter allows us to easily compare the performance of deterministic and stochastic solvers, which is an often overlooked issue in benchmarking.
Tasks Multiobjective Optimization
Published 2016-04-01
URL http://arxiv.org/abs/1604.00359v3
PDF http://arxiv.org/pdf/1604.00359v3.pdf
PWC https://paperswithcode.com/paper/using-well-understood-single-objective
Repo
Framework
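
A minimal sketch of the construction principle, not the official bbob-biobj/COCO code: pair two well-understood single-objective functions, each with its own known optimum, to obtain a bi-objective problem whose Pareto set can be reasoned about.

```python
# Sketch: a bi-objective problem built from two single-objective
# functions with known optima (illustrative instances).
import numpy as np

def sphere(x, x_opt):
    return float(np.sum((x - x_opt) ** 2))

def rosenbrock(x, x_opt):
    z = x - x_opt + 1.0                        # shift so the optimum moves
    return float(np.sum(100.0 * (z[:-1] ** 2 - z[1:]) ** 2
                        + (z[:-1] - 1.0) ** 2))

def biobjective(x, opt1, opt2):
    """Each objective keeps its own known optimum, so the Pareto set is
    understood from the single-objective components."""
    return sphere(x, opt1), rosenbrock(x, opt2)

rng = np.random.default_rng(0)
dim = 4
opt1, opt2 = rng.uniform(-2, 2, dim), rng.uniform(-2, 2, dim)
for alpha in np.linspace(0, 1, 5):             # walk between the two optima
    x = (1 - alpha) * opt1 + alpha * opt2
    print(np.round(biobjective(x, opt1, opt2), 3))
```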

Total Variation Applications in Computer Vision

Title Total Variation Applications in Computer Vision
Authors Vania V. Estrela, Hermes Aguiar Magalhaes, Osamu Saotome
Abstract The objectives of this chapter are: (i) to introduce a concise overview of regularization; (ii) to define and explain the role of a particular type of regularization called the total variation norm (TV-norm) in computer vision tasks; (iii) to set up a brief discussion of the mathematical background of TV methods; and (iv) to establish a relationship between models and a few existing methods to solve problems cast as TV-norm minimization. For the most part, image-processing algorithms blur the edges of the estimated images; TV regularization, however, preserves the edges with no prior information on the observed and original images. The regularization scalar parameter $\lambda$ controls the amount of regularization allowed and is essential for obtaining a high-quality regularized output. A wide-ranging review of several ways to put TV regularization into practice, as well as its advantages and limitations, is provided.
Tasks
Published 2016-03-31
URL http://arxiv.org/abs/1603.09599v1
PDF http://arxiv.org/pdf/1603.09599v1.pdf
PWC https://paperswithcode.com/paper/total-variation-applications-in-computer
Repo
Framework
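
A minimal sketch of TV regularization in practice, assuming scikit-image is installed: denoise a sharp-edged image at several values of the regularization weight, which plays the role of $\lambda$ in the chapter's discussion.

```python
# Sketch: edge-preserving TV denoising at several regularization weights.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                      # sharp-edged square
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

for weight in (0.05, 0.1, 0.3):                # the regularization parameter
    den = denoise_tv_chambolle(noisy, weight=weight)
    mse = float(np.mean((den - clean) ** 2))
    print(f"weight={weight}: MSE vs. clean = {mse:.4f}")
```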

A Formal Evaluation of PSNR as Quality Measurement Parameter for Image Segmentation Algorithms

Title A Formal Evaluation of PSNR as Quality Measurement Parameter for Image Segmentation Algorithms
Authors Fernando A. Fardo, Victor H. Conforto, Francisco C. de Oliveira, Paulo S. Rodrigues
Abstract Quality evaluation of image segmentation algorithms is still a subject of debate and research. Currently, there is no generic metric that can be applied to any algorithm reliably. This article contains an evaluation of the PSNR (Peak Signal-to-Noise Ratio) as a metric which has been used to evaluate threshold level selection as well as the number of thresholds in the case of multi-level segmentation. The results obtained in this study suggest that the PSNR is not an adequate quality measurement for segmentation algorithms.
Tasks Semantic Segmentation
Published 2016-05-23
URL http://arxiv.org/abs/1605.07116v1
PDF http://arxiv.org/pdf/1605.07116v1.pdf
PWC https://paperswithcode.com/paper/a-formal-evaluation-of-psnr-as-quality
Repo
Framework
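
For reference, PSNR computed directly from its definition, $\mathrm{PSNR} = 10 \log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$; the thresholding "segmentation" it scores here is a deliberately crude illustration.

```python
# Sketch: PSNR between an image and its thresholded (segmented) version.
import numpy as np

def psnr(reference, test, max_val=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
binarized = np.where(img > 127, 255, 0)        # crude 1-threshold "segmentation"
print(f"PSNR of thresholded vs. original: {psnr(img, binarized):.2f} dB")
```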

An Octree-Based Approach towards Efficient Variational Range Data Fusion

Title An Octree-Based Approach towards Efficient Variational Range Data Fusion
Authors Wadim Kehl, Tobias Holl, Federico Tombari, Slobodan Ilic, Nassir Navab
Abstract Volume-based reconstruction is usually expensive both in terms of memory consumption and runtime. Especially for sparse geometric structures, volumetric representations produce a huge computational overhead. We present an efficient way to fuse range data via a variational Octree-based minimization approach by taking the actual range data geometry into account. We transform the data into Octree-based truncated signed distance fields and show how the optimization can be conducted on the newly created structures. The main challenge is to uphold speed and a low memory footprint without sacrificing the solutions’ accuracy during optimization. We explain how to dynamically adjust the optimizer’s geometric structure via joining/splitting of Octree nodes and how to define the operators. We evaluate on various datasets and outline the suitability in terms of performance and geometric accuracy.
Tasks
Published 2016-08-26
URL http://arxiv.org/abs/1608.07411v1
PDF http://arxiv.org/pdf/1608.07411v1.pdf
PWC https://paperswithcode.com/paper/an-octree-based-approach-towards-efficient
Repo
Framework
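
A toy sketch of the memory-saving idea, not the paper's variational optimizer: an octree over a truncated signed distance field subdivides only cells the surface may cross, keeping coarse leaves elsewhere. The SDF, thresholds, and depth limit are illustrative assumptions.

```python
# Sketch: adaptive octree over a signed distance field; cells far from
# the surface stay coarse, cells near it are split.
import numpy as np

class OctreeNode:
    def __init__(self, center, size, sdf, depth=0, max_depth=5):
        self.center, self.size = center, size
        d = sdf(center)
        near_surface = abs(d) < size            # surface may cross this cell
        if near_surface and depth < max_depth:
            h = size / 2
            offsets = np.array([[sx, sy, sz] for sx in (-h, h)
                                for sy in (-h, h) for sz in (-h, h)]) / 2
            self.children = [OctreeNode(center + o, h, sdf, depth + 1, max_depth)
                             for o in offsets]
            self.value = None
        else:
            self.children, self.value = None, d  # leaf stores the distance

    def count(self):
        return 1 if self.children is None else sum(c.count() for c in self.children)

sphere_sdf = lambda p: np.linalg.norm(p) - 0.5   # unit cube, r=0.5 sphere
root = OctreeNode(np.zeros(3), 1.0, sphere_sdf)
print("leaf cells:", root.count())               # far fewer than a dense 32^3 grid
```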