May 6, 2019

3020 words 15 mins read

Paper Group ANR 305

On the Solvability of Inductive Problems: A Study in Epistemic Topology

Title On the Solvability of Inductive Problems: A Study in Epistemic Topology
Authors Alexandru Baltag, Nina Gierasimczuk, Sonja Smets
Abstract We investigate the issues of inductive problem-solving and learning by doxastic agents. We provide topological characterizations of solvability and learnability, and we use them to prove that AGM-style belief revision is “universal”, i.e., that every solvable problem is solvable by AGM conditioning.
Tasks
Published 2016-06-24
URL http://arxiv.org/abs/1606.07518v1
PDF http://arxiv.org/pdf/1606.07518v1.pdf
PWC https://paperswithcode.com/paper/on-the-solvability-of-inductive-problems-a
Repo
Framework

Semantics for probabilistic programming: higher-order functions, continuous distributions, and soft constraints

Title Semantics for probabilistic programming: higher-order functions, continuous distributions, and soft constraints
Authors Sam Staton, Hongseok Yang, Chris Heunen, Ohad Kammar, Frank Wood
Abstract We study the semantic foundation of expressive probabilistic programming languages that support higher-order functions, continuous distributions, and soft constraints (such as Anglican, Church, and Venture). We define a metalanguage (an idealised version of Anglican) for probabilistic computation with the above features, develop both operational and denotational semantics, and prove soundness, adequacy, and termination. The semantics involve measure theory, stochastic labelled transition systems, and functor categories, but admit intuitive computational readings, one of which views sampled random variables as dynamically allocated read-only variables. We apply our semantics to validate nontrivial equations underlying the correctness of certain compiler optimisations and inference algorithms such as sequential Monte Carlo simulation. The language enables defining probability distributions on higher-order functions, and we study their properties.
Tasks Probabilistic Programming
Published 2016-01-19
URL http://arxiv.org/abs/1601.04943v3
PDF http://arxiv.org/pdf/1601.04943v3.pdf
PWC https://paperswithcode.com/paper/semantics-for-probabilistic-programming
Repo
Framework
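The sample/observe style the abstract describes can be illustrated with a hand-rolled sketch (not Anglican or the paper's metalanguage): a soft constraint is realised by likelihood weighting, where each run samples from a continuous prior and is weighted by the density of the observation.

```python
import math
import random

def run_once(rng):
    # sample: draw a latent value from a continuous prior N(0, 1)
    x = rng.gauss(0.0, 1.0)
    # observe (soft constraint): weight the run by the density of the
    # observation 0.5 under N(x, 1)
    w = math.exp(-0.5 * (0.5 - x) ** 2) / math.sqrt(2.0 * math.pi)
    return x, w

def posterior_mean(n=200_000, seed=0):
    # importance-sampling estimate of E[x | observation]
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x, w = run_once(rng)
        num += w * x
        den += w
    return num / den
```

For this conjugate toy model the exact posterior is N(0.25, 0.5), so the weighted estimate should land near 0.25.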

Partial Face Detection for Continuous Authentication

Title Partial Face Detection for Continuous Authentication
Authors Upal Mahbub, Vishal M. Patel, Deepak Chandra, Brandon Barbello, Rama Chellappa
Abstract In this paper, a part-based technique for real-time detection of users’ faces on mobile devices is proposed. This method is specifically designed for detecting partially cropped and occluded faces captured using a smartphone’s front-facing camera for continuous authentication. The key idea is to detect facial segments in the frame and cluster the results to obtain the region which is most likely to contain a face. Extensive experimentation on a mobile dataset of 50 users shows that our method performs better than many state-of-the-art face detection methods in terms of accuracy and processing speed.
Tasks Face Detection
Published 2016-03-30
URL http://arxiv.org/abs/1603.09364v1
PDF http://arxiv.org/pdf/1603.09364v1.pdf
PWC https://paperswithcode.com/paper/partial-face-detection-for-continuous
Repo
Framework
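The clustering step described in the abstract can be sketched as follows (a toy illustration, not the authors' actual method): group overlapping facial-segment boxes and return the bounding region of the largest group.

```python
def iou(a, b):
    # intersection-over-union of two boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def face_region(boxes, thr=0.1):
    # greedily cluster boxes that overlap any member of a cluster,
    # then return the bounding region of the largest cluster
    clusters = []
    for b in boxes:
        for c in clusters:
            if any(iou(b, m) > thr for m in c):
                c.append(b)
                break
        else:
            clusters.append([b])
    best = max(clusters, key=len)
    return (min(b[0] for b in best), min(b[1] for b in best),
            max(b[2] for b in best), max(b[3] for b in best))
```

Isolated spurious detections end up in small clusters and are discarded, which is the point of clustering before committing to a face region.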

Probabilistic Inference Modulo Theories

Title Probabilistic Inference Modulo Theories
Authors Rodrigo de Salvo Braz, Ciaran O’Reilly, Vibhav Gogate, Rina Dechter
Abstract We present SGDPLL(T), an algorithm that solves (among many other problems) probabilistic inference modulo theories, that is, inference problems over probabilistic models defined via a logic theory provided as a parameter (currently, propositional, equalities on discrete sorts, and inequalities, more specifically difference arithmetic, on bounded integers). While many solutions to probabilistic inference over logic representations have been proposed, SGDPLL(T) is simultaneously (1) lifted, (2) exact and (3) modulo theories, that is, parameterized by a background logic theory. This offers a foundation for extending it to rich logic languages such as data structures and relational data. By lifted, we mean algorithms with constant complexity in the domain size (the number of values that variables can take). We also detail a solver for summations with difference arithmetic and show experimental results from a scenario in which SGDPLL(T) is much faster than a state-of-the-art probabilistic solver.
Tasks
Published 2016-05-26
URL http://arxiv.org/abs/1605.08367v2
PDF http://arxiv.org/pdf/1605.08367v2.pdf
PWC https://paperswithcode.com/paper/probabilistic-inference-modulo-theories
Repo
Framework

Understanding image motion with group representations

Title Understanding image motion with group representations
Authors Andrew Jaegle, Stephen Phillips, Daphne Ippolito, Kostas Daniilidis
Abstract Motion is an important signal for agents in dynamic environments, but learning to represent motion from unlabeled video is a difficult and underconstrained problem. We propose a model of motion based on elementary group properties of transformations and use it to train a representation of image motion. While most methods of estimating motion are based on pixel-level constraints, we use these group properties to constrain the abstract representation of motion itself. We demonstrate that a deep neural network trained using this method captures motion in both synthetic 2D sequences and real-world sequences of vehicle motion, without requiring any labels. Networks trained to respect these constraints implicitly identify the image characteristics of motion in different sequence types. In the context of vehicle motion, this method extracts information useful for localization, tracking, and odometry. Our results demonstrate that this representation is useful for learning motion in the general setting where explicit labels are difficult to obtain.
Tasks
Published 2016-12-01
URL http://arxiv.org/abs/1612.00472v2
PDF http://arxiv.org/pdf/1612.00472v2.pdf
PWC https://paperswithcode.com/paper/understanding-image-motion-with-group
Repo
Framework
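The composition property at the heart of the approach can be illustrated with an exact group representation (a toy stand-in for the learned network): 2D translations as homogeneous matrices, where the representation of a composed motion equals the product of the individual representations.

```python
def translation(dx, dy):
    # homogeneous 3x3 matrix representing a 2D translation
    return [[1, 0, dx], [0, 1, dy], [0, 0, 1]]

def matmul(A, B):
    # plain 3x3 matrix product; composing motions multiplies their matrices
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

A learned representation trained with the paper's constraints is pushed toward the same algebra: representing motion a followed by motion b should match the composition of the two representations.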

Pseudo-Marginal Hamiltonian Monte Carlo

Title Pseudo-Marginal Hamiltonian Monte Carlo
Authors Johan Alenlöv, Arnaud Doucet, Fredrik Lindsten
Abstract Bayesian inference in the presence of an intractable likelihood function is computationally challenging. When following a Markov chain Monte Carlo (MCMC) approach to approximate the posterior distribution in this context, one typically either uses MCMC schemes which target the joint posterior of the parameters and some auxiliary latent variables, or pseudo-marginal Metropolis–Hastings (MH) schemes. The latter mimic a MH algorithm targeting the marginal posterior of the parameters by unbiasedly approximating the intractable likelihood. However, in scenarios where the parameters and auxiliary variables are strongly correlated under the posterior and/or this posterior is multimodal, Gibbs sampling or Hamiltonian Monte Carlo (HMC) will perform poorly and the pseudo-marginal MH algorithm, as any other MH scheme, will be inefficient for high dimensional parameters. We propose here an original MCMC algorithm, termed pseudo-marginal HMC, which combines the advantages of both HMC and pseudo-marginal schemes. Specifically, the pseudo-marginal HMC method is controlled by a precision parameter N, which governs the accuracy of the likelihood approximation, and, for any N, it samples the marginal posterior of the parameters. Additionally, as N tends to infinity, its sample trajectories and acceptance probability converge to those of an ideal, but intractable, HMC algorithm which would have access to the marginal posterior of parameters and its gradient. We demonstrate through experiments that pseudo-marginal HMC can outperform significantly both standard HMC and pseudo-marginal MH schemes.
Tasks Bayesian Inference
Published 2016-07-08
URL https://arxiv.org/abs/1607.02516v2
PDF https://arxiv.org/pdf/1607.02516v2.pdf
PWC https://paperswithcode.com/paper/pseudo-marginal-hamiltonian-monte-carlo
Repo
Framework
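The pseudo-marginal MH baseline that the paper builds on can be sketched as follows (a toy latent-variable model of my own choosing, not the paper's pseudo-marginal HMC): the exact likelihood is replaced by an unbiased N-sample Monte Carlo estimate, yet the chain still targets the exact marginal posterior.

```python
import math
import random

def lik_hat(theta, N, rng):
    # Unbiased Monte Carlo estimate of p(y | theta) = E_z[p(y | theta, z)]
    # for a toy model: latent z ~ N(theta, 1), observation y = 0 with unit noise.
    total = 0.0
    for _ in range(N):
        z = rng.gauss(theta, 1.0)
        total += math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return total / N

def pm_mh(steps, N, seed=0):
    rng = random.Random(seed)
    theta, L = 0.0, lik_hat(0.0, N, rng)
    chain = []
    for _ in range(steps):
        prop = theta + rng.gauss(0.0, 0.5)
        Lp = lik_hat(prop, N, rng)
        # Flat prior: accept with the ratio of likelihood *estimates*;
        # recycling the estimate L of the current state is what makes
        # the marginal chain exact.
        if rng.random() < Lp / L:
            theta, L = prop, Lp
        chain.append(theta)
    return chain
```

For this symmetric toy posterior the chain mean should hover near zero; small N makes the chain noisier and stickier, which is exactly the trade-off the precision parameter controls.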

Tight Complexity Bounds for Optimizing Composite Objectives

Title Tight Complexity Bounds for Optimizing Composite Objectives
Authors Blake Woodworth, Nathan Srebro
Abstract We provide tight upper and lower bounds on the complexity of minimizing the average of $m$ convex functions using gradient and prox oracles of the component functions. We show a significant gap between the complexity of deterministic vs randomized optimization. For smooth functions, we show that accelerated gradient descent (AGD) and an accelerated variant of SVRG are optimal in the deterministic and randomized settings respectively, and that a gradient oracle is sufficient for the optimal rate. For non-smooth functions, having access to prox oracles reduces the complexity and we present optimal methods based on smoothing that improve over methods using just gradient accesses.
Tasks
Published 2016-05-25
URL http://arxiv.org/abs/1605.08003v3
PDF http://arxiv.org/pdf/1605.08003v3.pdf
PWC https://paperswithcode.com/paper/tight-complexity-bounds-for-optimizing
Repo
Framework
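Nesterov's accelerated gradient descent, the method shown optimal in the deterministic smooth setting, can be sketched as follows (a minimal single-function version, not the paper's finite-sum setting):

```python
def agd(grad, x0, L, steps):
    # Nesterov's accelerated gradient descent on an L-smooth convex function:
    # a gradient step at the lookahead point, then a momentum extrapolation.
    x = y = x0
    for t in range(1, steps + 1):
        x_new = y - grad(y) / L
        y = x_new + (t - 1.0) / (t + 2.0) * (x_new - x)
        x = x_new
    return x
```

On f(x) = x^2 / 2 (so grad(x) = x), even with a deliberately loose smoothness bound L = 10, the iterate converges to the minimiser 0 well faster than plain gradient descent's O(1/t) rate would suggest.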

Approachability of convex sets in generalized quitting games

Title Approachability of convex sets in generalized quitting games
Authors János Flesch, Rida Laraki, Vianney Perchet
Abstract We consider Blackwell approachability, a very powerful and geometric tool in game theory, used for example to design strategies of the uninformed player in repeated games with incomplete information. We extend this theory to “generalized quitting games”, a class of repeated stochastic games in which each player may have quitting actions, such as the Big-Match. We provide three simple geometric and strongly related conditions for the weak approachability of a convex target set. The first is sufficient: it guarantees that, for any fixed horizon, a player has a strategy ensuring that the expected time-average payoff vector converges to the target set as the horizon goes to infinity. The third is necessary: if it is not satisfied, the opponent can weakly exclude the target set. In the special case where only the approaching player can quit the game (Big-Match of type I), the three conditions are equivalent and coincide with Blackwell’s condition. Consequently, we obtain a full characterization and prove that the game is weakly determined: every convex set is either weakly approachable or weakly excludable. In games where only the opponent can quit (Big-Match of type II), none of our conditions is both sufficient and necessary for weak approachability. We provide a continuous time sufficient condition using techniques coming from differential games, and show its usefulness in practice, in the spirit of Vieille’s seminal work for weak approachability. Finally, we study uniform approachability, where the strategy should not depend on the horizon, and demonstrate that, in contrast with classical Blackwell approachability for convex sets, weak approachability does not imply uniform approachability.
Tasks
Published 2016-09-28
URL http://arxiv.org/abs/1609.08870v1
PDF http://arxiv.org/pdf/1609.08870v1.pdf
PWC https://paperswithcode.com/paper/approachability-of-convex-sets-in-generalized
Repo
Framework

A Richly Annotated Dataset for Pedestrian Attribute Recognition

Title A Richly Annotated Dataset for Pedestrian Attribute Recognition
Authors Dangwei Li, Zhang Zhang, Xiaotang Chen, Haibin Ling, Kaiqi Huang
Abstract In this paper, we aim to improve the dataset foundation for pedestrian attribute recognition in real surveillance scenarios. Recognition of human attributes, such as gender and clothes types, has great prospects in real applications. However, the development of suitable benchmark datasets for attribute recognition lags behind. Existing human attribute datasets are collected from various sources or by integrating pedestrian re-identification datasets. Such heterogeneous collection poses a big challenge to developing high-quality fine-grained attribute recognition algorithms. Furthermore, human attribute recognition is generally severely affected by environmental or contextual factors, such as viewpoints, occlusions and body parts, while existing attribute datasets barely account for them. To tackle these problems, we build a Richly Annotated Pedestrian (RAP) dataset from real multi-camera surveillance scenarios with long-term collection, where data samples are annotated with not only fine-grained human attributes but also environmental and contextual factors. RAP has in total 41,585 pedestrian samples, each of which is annotated with 72 attributes as well as viewpoint, occlusion and body-part information. To our knowledge, the RAP dataset is the largest pedestrian attribute dataset, and is expected to greatly promote the study of large-scale attribute recognition systems. Furthermore, we empirically analyze the effects of different environmental and contextual factors on pedestrian attribute recognition. Experimental results demonstrate that viewpoint, occlusion and body-part information can considerably assist attribute recognition in real applications.
Tasks Pedestrian Attribute Recognition
Published 2016-03-23
URL http://arxiv.org/abs/1603.07054v3
PDF http://arxiv.org/pdf/1603.07054v3.pdf
PWC https://paperswithcode.com/paper/a-richly-annotated-dataset-for-pedestrian
Repo
Framework

CRAFT Objects from Images

Title CRAFT Objects from Images
Authors Bin Yang, Junjie Yan, Zhen Lei, Stan Z. Li
Abstract Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework and its fast versions. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Although these two tasks are relatively easier, they are not solved perfectly and there is still room for improvement. In this paper, we push the “divide and conquer” solution even further by dividing each task into two sub-tasks. We call the proposed method “CRAFT” (Cascade Region-proposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals; in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter- and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of-the-art on object detection benchmarks like PASCAL VOC 07/12 and ILSVRC.
Tasks Object Classification, Object Detection
Published 2016-04-12
URL http://arxiv.org/abs/1604.03239v1
PDF http://arxiv.org/pdf/1604.03239v1.pdf
PWC https://paperswithcode.com/paper/craft-objects-from-images
Repo
Framework

A framework for redescription set construction

Title A framework for redescription set construction
Authors Matej Mihelčić, Sašo Džeroski, Nada Lavrač, Tomislav Šmuc
Abstract Redescription mining is a field of knowledge discovery that aims at finding different descriptions of similar subsets of instances in the data. These descriptions are represented as rules inferred from one or more disjoint sets of attributes, called views. As such, they support the knowledge discovery process and help domain experts in formulating new hypotheses or constructing new knowledge bases and decision support systems. In contrast to previous approaches that typically create one smaller set of redescriptions satisfying a pre-defined set of constraints, we introduce a framework that creates a large and heterogeneous redescription set from which a user or expert can extract compact sets of differing properties, according to their own preferences. Construction of the large and heterogeneous redescription set relies on the CLUS-RM algorithm and a novel, conjunctive refinement procedure that facilitates generation of larger and more accurate redescription sets. The work also introduces a variant of redescription accuracy for when missing values are present in the data, which significantly extends the applicability of the method. A crucial part of the framework is the redescription set extraction, based on a heuristic multi-objective optimization procedure that allows the user to define importance levels for one or more redescription quality criteria. We provide both theoretical and empirical comparison of the novel framework against current state-of-the-art redescription mining algorithms and show that it represents a more efficient and versatile approach for mining redescriptions from data.
Tasks
Published 2016-06-13
URL http://arxiv.org/abs/1606.03935v2
PDF http://arxiv.org/pdf/1606.03935v2.pdf
PWC https://paperswithcode.com/paper/a-framework-for-redescription-set
Repo
Framework
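The basic redescription quality measure can be sketched as follows (a minimal illustration; the framework's actual criteria are richer): two rules over different views re-describe similar subsets of instances, and their agreement is commonly scored by the Jaccard index of their supports.

```python
def support(rule, rows):
    # the support of a rule is the set of instance indices it covers
    return {i for i, row in enumerate(rows) if rule(row)}

def jaccard(rule1, rule2, rows):
    # redescription accuracy: |support1 ∩ support2| / |support1 ∪ support2|
    s1, s2 = support(rule1, rows), support(rule2, rows)
    union = s1 | s2
    return len(s1 & s2) / len(union) if union else 1.0
```

Two rules with identical supports score 1.0; disjoint supports score 0.0, so the measure directly rewards rules that describe the same instances from different views.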

A Bayesian baseline for belief in uncommon events

Title A Bayesian baseline for belief in uncommon events
Authors V. Palonen
Abstract The plausibility of uncommon events and miracles based on testimony of such an event has been much discussed. When analyzing the probabilities involved, it has mostly been assumed that the common events can be taken as data in the calculations. However, we usually have only testimonies for the common events. While this difference does not have a significant effect on the inductive part of the inference, it has a large influence on how one should view the reliability of testimonies. In this work, a full Bayesian solution is given for the more realistic case, where one has a large number of testimonies for a common event and one testimony for an uncommon event. It is seen that, in order for there to be a large number of testimonies for a common event, the testimonies will probably be quite reliable. Because the testimonies for the common events thus indicate high reliability, the probability for the uncommon event, given a testimony for it, is also higher. Hence, one should be more open-minded when considering the plausibility of uncommon events.
Tasks
Published 2016-02-25
URL http://arxiv.org/abs/1602.07836v2
PDF http://arxiv.org/pdf/1602.07836v2.pdf
PWC https://paperswithcode.com/paper/a-bayesian-baseline-for-belief-in-uncommon
Repo
Framework
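The qualitative point can be illustrated with a single-testimony Bayes update (a toy calculation, not the paper's full hierarchical model for many testimonies): the more reliable the witness, the more a testimony raises the probability of an uncommon event.

```python
def p_event_given_testimony(prior, r):
    # Bayes' rule for a binary event with a symmetric witness:
    # r = P(testimony for event | event) = P(no such testimony | no event)
    num = r * prior
    den = r * prior + (1 - r) * (1 - prior)
    return num / den
```

For a rare event with prior 1e-3, a 99%-reliable testimony lifts the posterior to about 9%, while a 90%-reliable one lifts it to under 1%, which is the abstract's point: inferred testimony reliability drives the plausibility of the uncommon event.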

Enhanced Directional Smoothing Algorithm for Edge-Preserving Smoothing of Synthetic-Aperture Radar Images

Title Enhanced Directional Smoothing Algorithm for Edge-Preserving Smoothing of Synthetic-Aperture Radar Images
Authors Mario Mastriani, Alberto E. Giraldez
Abstract Synthetic aperture radar (SAR) images are subject to prominent speckle noise, which is generally considered a purely multiplicative noise process. A consequence of this multiplicative model is that the ratio of the standard deviation to the signal value, the “coefficient of variation,” is theoretically constant at every point in a SAR image. Most filters for speckle reduction are based on this property. The proposed filter does not rely on it: building on directional smoothing (DS) theory, the enhanced directional smoothing (EDS) filter removes speckle noise from SAR images without blurring edges. We demonstrate the effectiveness of this new filtering method by comparing it to established speckle noise removal techniques on SAR images.
Tasks
Published 2016-08-05
URL http://arxiv.org/abs/1608.01993v1
PDF http://arxiv.org/pdf/1608.01993v1.pdf
PWC https://paperswithcode.com/paper/enhanced-directional-smoothing-algorithm-for
Repo
Framework
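The directional-smoothing idea can be sketched for a single pixel (a simplified four-direction version, not the full EDS filter): average the neighbours along each direction and keep the directional mean closest to the centre value, so an edge running along one direction is smoothed along it rather than across it.

```python
def ds_pixel(img, i, j):
    # four directions through the 3x3 neighbourhood of (i, j)
    dirs = [((0, -1), (0, 1)),    # horizontal
            ((-1, 0), (1, 0)),    # vertical
            ((-1, -1), (1, 1)),   # diagonal
            ((-1, 1), (1, -1))]   # anti-diagonal
    centre = img[i][j]
    means = []
    for (di1, dj1), (di2, dj2) in dirs:
        m = (img[i + di1][j + dj1] + centre + img[i + di2][j + dj2]) / 3.0
        means.append(m)
    # keep the directional mean closest to the centre pixel: smoothing
    # happens along the edge, not across it
    return min(means, key=lambda m: abs(m - centre))
```

On a vertical step edge the vertical mean matches the centre exactly, so the edge survives filtering while speckle in flat regions is averaged away.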

Estimation of respiratory pattern from video using selective ensemble aggregation

Title Estimation of respiratory pattern from video using selective ensemble aggregation
Authors A. P. Prathosh, Pragathi Praveena, Lalit K. Mestha, Sanjay Bharadwaj
Abstract Non-contact estimation of respiratory pattern (RP) and respiration rate (RR) has multiple applications. Existing methods for RP and RR measurement fall into one of three categories: (i) estimation through nasal air flow measurement, (ii) estimation from video-based remote photoplethysmography, and (iii) estimation by measurement of motion induced by respiration using motion detectors. These methods, however, require specialized sensors, are computationally expensive and/or critically depend on selection of a region of interest (ROI) for processing. In this paper a general framework is described for estimating a periodic signal driving noisy LTI channels connected in parallel with unknown dynamics. The method is then applied to derive a computationally inexpensive method for estimating RP using 2D cameras that does not critically depend on ROI. Specifically, RP is estimated by imaging the changes in the reflected light caused by respiration-induced motion. Each spatial location in the field of view of the camera is modeled as a noise-corrupted linear time-invariant (LTI) measurement channel with unknown system dynamics, driven by a single generating respiratory signal. Estimation of RP is cast as a blind deconvolution problem and is solved through a method comprising subspace projection and statistical aggregation. Experiments are carried out on 31 healthy human subjects by generating multiple RPs and comparing the proposed estimates with simultaneously acquired ground truth from an impedance pneumograph device. The proposed estimator agrees well with the ground truth device in terms of correlation measures, despite variability in clothing pattern, angle of view and ROI.
Tasks
Published 2016-11-21
URL http://arxiv.org/abs/1611.06674v1
PDF http://arxiv.org/pdf/1611.06674v1.pdf
PWC https://paperswithcode.com/paper/estimation-of-respiratory-pattern-from-video
Repo
Framework

Reconstructing parameters of spreading models from partial observations

Title Reconstructing parameters of spreading models from partial observations
Authors Andrey Y. Lokhov
Abstract Spreading processes are often modelled as a stochastic dynamics occurring on top of a given network with edge weights corresponding to the transmission probabilities. Knowledge of veracious transmission probabilities is essential for prediction, optimization, and control of diffusion dynamics. Unfortunately, in most cases the transmission rates are unknown and need to be reconstructed from the spreading data. Moreover, in realistic settings it is impossible to monitor the state of each node at every time, and thus the data is highly incomplete. We introduce an efficient dynamic message-passing algorithm, which is able to reconstruct parameters of the spreading model given only partial information on the activation times of nodes in the network. The method is generalizable to a large class of dynamic models, as well as to the case of temporal graphs.
Tasks
Published 2016-08-31
URL http://arxiv.org/abs/1608.08698v1
PDF http://arxiv.org/pdf/1608.08698v1.pdf
PWC https://paperswithcode.com/paper/reconstructing-parameters-of-spreading-models
Repo
Framework