July 28, 2019

3084 words · 15 min read

Paper Group ANR 390


Robust Causal Estimation in the Large-Sample Limit without Strict Faithfulness. Statistical Selection of CNN-Based Audiovisual Features for Instantaneous Estimation of Human Emotional States. Clingcon: The Next Generation. Deep Active Learning over the Long Tail. An Army of Me: Sockpuppets in Online Discussion Communities. Frequentist coverage and …

Robust Causal Estimation in the Large-Sample Limit without Strict Faithfulness

Title Robust Causal Estimation in the Large-Sample Limit without Strict Faithfulness
Authors Ioan Gabriel Bucur, Tom Claassen, Tom Heskes
Abstract Causal effect estimation from observational data is an important and much-studied research topic. The instrumental variable (IV) and local causal discovery (LCD) patterns are canonical examples of settings where a closed-form expression exists for the causal effect of one variable on another, given the presence of a third variable. Both rely on faithfulness to infer that the latter only influences the target effect via the cause variable. In reality, it is likely that this assumption only holds approximately and that there will be at least some form of weak interaction. This brings about the paradoxical situation that, in the large-sample limit, no predictions are made, as detecting the weak edge invalidates the setting. We introduce an alternative approach by replacing strict faithfulness with a prior that reflects the existence of many ‘weak’ (irrelevant) and ‘strong’ interactions. We obtain a posterior distribution over the target causal effect estimator which shows that, in many cases, we can still make good estimates. We demonstrate the approach in an application to a simple linear-Gaussian setting, using the MultiNest sampling algorithm, and compare it with established techniques to show our method is robust even when strict faithfulness is violated.
Tasks Causal Discovery
Published 2017-04-06
URL http://arxiv.org/abs/1704.01864v1
PDF http://arxiv.org/pdf/1704.01864v1.pdf
PWC https://paperswithcode.com/paper/robust-causal-estimation-in-the-large-sample
Repo
Framework
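As a toy illustration of the closed-form IV/LCD-style estimate that the abstract builds on, the sketch below computes the covariance-ratio estimator in a simple linear-Gaussian model with a weak unfaithful edge. The variable names and coefficients are hypothetical; the paper's actual method places a prior over such weak interactions and samples the posterior with MultiNest, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Linear-Gaussian toy model: Z -> X -> Y, with a weak direct edge Z -> Y
# that violates strict faithfulness.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.5 * x + 0.05 * z + rng.normal(size=n)   # true causal effect of X on Y is 1.5

# Closed-form ratio estimator used in the IV/LCD pattern:
# beta_hat = cov(Z, Y) / cov(Z, X).
beta_hat = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
print(f"estimated effect of X on Y: {beta_hat:.3f}")   # slightly biased by the weak edge
```

The small bias visible in the output is exactly the failure mode the paper addresses: the weak edge makes the naive ratio estimator over-shoot, and the Bayesian treatment quantifies this uncertainty instead of ignoring it.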

Statistical Selection of CNN-Based Audiovisual Features for Instantaneous Estimation of Human Emotional States

Title Statistical Selection of CNN-Based Audiovisual Features for Instantaneous Estimation of Human Emotional States
Authors Ramesh Basnet, Mohammad Tariqul Islam, Tamanna Howlader, S. M. Mahbubur Rahman, Dimitrios Hatzinakos
Abstract Automatic prediction of continuous-level emotional states requires selecting suitable affective features to develop a regression system based on supervised machine learning. This paper investigates the performance of features statistically learned using convolutional neural networks for instantaneously predicting the continuous dimensions of emotional states. Features with minimum redundancy and maximum relevance are chosen through a mutual information-based selection process. The performance of frame-by-frame prediction of emotional state using the moderate-length features proposed in this paper is evaluated on spontaneous and naturalistic human-human conversations from the RECOLA database. Experimental results show that the proposed model can be used for instantaneous prediction of emotional state with higher accuracy than traditional audio or video features used for affective computing.
Tasks
Published 2017-08-23
URL http://arxiv.org/abs/1708.07021v1
PDF http://arxiv.org/pdf/1708.07021v1.pdf
PWC https://paperswithcode.com/paper/statistical-selection-of-cnn-based
Repo
Framework
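A rough sketch of a minimum-redundancy maximum-relevance (mRMR) style selection step like the one described in the abstract, using scikit-learn's mutual information estimator. The toy feature matrix stands in for CNN-derived audiovisual features, and the greedy criterion below is an assumption, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_select(X, y, k):
    """Greedy mRMR: maximize relevance to y minus mean redundancy with already-chosen features."""
    relevance = mutual_info_regression(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        scores = []
        for j in remaining:
            if selected:
                redundancy = mutual_info_regression(X[:, selected], X[:, j], random_state=0).mean()
            else:
                redundancy = 0.0
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data standing in for CNN-derived features and an arousal/valence target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.1, size=200)
print(mrmr_select(X, y, k=5))   # the informative columns 3 and 7 should rank early
```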

Clingcon: The Next Generation

Title Clingcon: The Next Generation
Authors Mutsunori Banbara, Benjamin Kaufmann, Max Ostrowski, Torsten Schaub
Abstract We present the third generation of the constraint answer set system clingcon, combining Answer Set Programming (ASP) with finite domain constraint processing (CP). While its predecessors rely on a black-box approach to hybrid solving by integrating the CP solver gecode, the new clingcon system pursues a lazy approach using dedicated constraint propagators to extend propagation in the underlying ASP solver clasp. No extension is needed for parsing and grounding clingcon’s hybrid modeling language since both can be accommodated by the new generic theory handling capabilities of the ASP grounder gringo. As a whole, clingcon 3 is thus an extension of the ASP system clingo 5, which itself relies on the grounder gringo and the solver clasp. The new approach of clingcon offers a seamless integration of CP propagation into ASP solving that benefits from the whole spectrum of clasp’s reasoning modes, including for instance multi-shot solving and advanced optimization techniques. This is accomplished by a lazy approach that unfolds the representation of constraints and adds it to that of the logic program only when needed. Although the unfolding is usually dictated by the constraint propagators during solving, it can already be partially (or even totally) done during preprocessing. Moreover, clingcon’s constraint preprocessing and propagation incorporate several well established CP techniques that greatly improve its performance. We demonstrate this via an extensive empirical evaluation contrasting, first, the various techniques in the context of CSP solving and, second, the new clingcon system with other hybrid ASP systems. Under consideration in Theory and Practice of Logic Programming (TPLP)
Tasks
Published 2017-05-12
URL http://arxiv.org/abs/1705.04569v1
PDF http://arxiv.org/pdf/1705.04569v1.pdf
PWC https://paperswithcode.com/paper/clingcon-the-next-generation
Repo
Framework

Deep Active Learning over the Long Tail

Title Deep Active Learning over the Long Tail
Authors Yonatan Geifman, Ran El-Yaniv
Abstract This paper is concerned with pool-based active learning for deep neural networks. Motivated by coreset dataset compression ideas, we present a novel active learning algorithm that queries consecutive points from the pool using farthest-first traversals in the space of neural activation over a representation layer. We show consistent and overwhelming improvement in sample complexity over passive learning (random sampling) for three datasets: MNIST, CIFAR-10, and CIFAR-100. In addition, our algorithm outperforms the traditional uncertainty sampling technique (obtained using softmax activations), and we identify cases where uncertainty sampling is only slightly better than random sampling.
Tasks Active Learning
Published 2017-11-02
URL http://arxiv.org/abs/1711.00941v1
PDF http://arxiv.org/pdf/1711.00941v1.pdf
PWC https://paperswithcode.com/paper/deep-active-learning-over-the-long-tail
Repo
Framework
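A minimal numpy sketch of the farthest-first (coreset-style) query selection over a representation layer described above. The random activations stand in for embeddings from an already-trained network, and the function name is hypothetical.

```python
import numpy as np

def farthest_first_queries(embeddings, labeled_idx, budget):
    """Greedily query unlabeled points that are farthest from the current labeled/selected set."""
    labeled = embeddings[list(labeled_idx)]                          # (k, d)
    # Distance of every point to its nearest already-selected point.
    dists = np.linalg.norm(embeddings[:, None, :] - labeled[None, :, :], axis=-1).min(axis=1)
    queries = []
    for _ in range(budget):
        idx = int(np.argmax(dists))                                  # farthest-first choice
        queries.append(idx)
        # Update nearest-selected distances with the newly chosen point.
        dists = np.minimum(dists, np.linalg.norm(embeddings - embeddings[idx], axis=1))
    return queries

rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 64))      # toy stand-in for penultimate-layer activations
print(farthest_first_queries(acts, labeled_idx=[0, 1, 2], budget=10))
```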

An Army of Me: Sockpuppets in Online Discussion Communities

Title An Army of Me: Sockpuppets in Online Discussion Communities
Authors Srijan Kumar, Justin Cheng, Jure Leskovec, V. S. Subrahmanian
Abstract In online discussion communities, users can interact and share information and opinions on a wide variety of topics. However, some users may create multiple identities, or sockpuppets, and engage in undesired behavior by deceiving others or manipulating discussions. In this work, we study sockpuppetry across nine discussion communities, and show that sockpuppets differ from ordinary users in terms of their posting behavior, linguistic traits, as well as social network structure. Sockpuppets tend to start fewer discussions, write shorter posts, use more personal pronouns such as “I”, and have more clustered ego-networks. Further, pairs of sockpuppets controlled by the same individual are more likely to interact on the same discussion at the same time than pairs of ordinary users. Our analysis suggests a taxonomy of deceptive behavior in discussion communities. Pairs of sockpuppets can vary in their deceptiveness, i.e., whether they pretend to be different users, or their supportiveness, i.e., if they support arguments of other sockpuppets controlled by the same user. We apply these findings to a series of prediction tasks, notably, to identify whether a pair of accounts belongs to the same underlying user or not. Altogether, this work presents a data-driven view of deception in online discussion communities and paves the way towards the automatic detection of sockpuppets.
Tasks
Published 2017-03-21
URL http://arxiv.org/abs/1703.07355v1
PDF http://arxiv.org/pdf/1703.07355v1.pdf
PWC https://paperswithcode.com/paper/an-army-of-me-sockpuppets-in-online
Repo
Framework
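As a hedged illustration of the pair-level prediction task mentioned in the abstract (deciding whether two accounts belong to the same underlying user), the sketch below trains an off-the-shelf classifier on toy pairwise features. The features, labels, and model choice are all placeholders rather than the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def pair_features(a, b):
    """Symmetric features for an account pair: absolute differences plus a co-activity product."""
    return np.concatenate([np.abs(a[:3] - b[:3]), [a[3] * b[3]]])

rng = np.random.default_rng(0)
n_pairs = 500
# Toy per-account features: [discussions started, mean post length, pronoun rate, activity overlap].
accounts_a = rng.normal(size=(n_pairs, 4))
accounts_b = rng.normal(size=(n_pairs, 4))
labels = rng.integers(0, 2, n_pairs)          # 1 = same underlying user (random toy labels)

X = np.array([pair_features(a, b) for a, b in zip(accounts_a, accounts_b)])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())   # ~0.5 on random labels; real features do better
```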

Frequentist coverage and sup-norm convergence rate in Gaussian process regression

Title Frequentist coverage and sup-norm convergence rate in Gaussian process regression
Authors Yun Yang, Anirban Bhattacharya, Debdeep Pati
Abstract Gaussian process (GP) regression is a powerful interpolation technique due to its flexibility in capturing non-linearity. In this paper, we provide a general framework for understanding the frequentist coverage of point-wise and simultaneous Bayesian credible sets in GP regression. As an intermediate result, we develop a Bernstein-von Mises type result under the supremum norm in random-design GP regression. Identifying both the mean and covariance function of the posterior distribution of the Gaussian process as regularized $M$-estimators, we show that the sampling distribution of the posterior mean function and the centered posterior distribution can be respectively approximated by two population-level GPs. By developing a comparison inequality between two GPs, we provide an exact characterization of the frequentist coverage probabilities of Bayesian point-wise credible intervals and simultaneous credible bands of the regression function. Our results show that inference based on GP regression tends to be conservative; when the prior is under-smoothed, the resulting credible intervals and bands have minimax-optimal sizes, with their frequentist coverage converging to a non-degenerate value between their nominal level and one. As a byproduct of our theory, we show that GP regression also yields a minimax-optimal posterior contraction rate relative to the supremum norm, which provides positive evidence on the long-standing problem of the optimal supremum-norm contraction rate in GP regression.
Tasks
Published 2017-08-16
URL http://arxiv.org/abs/1708.04753v1
PDF http://arxiv.org/pdf/1708.04753v1.pdf
PWC https://paperswithcode.com/paper/frequentist-coverage-and-sup-norm-convergence
Repo
Framework
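A small simulation sketch of the phenomenon the abstract studies: the empirical frequentist coverage of nominal 95% pointwise credible intervals from a standard GP regression posterior under random design. The RBF kernel, noise level, and true function below are assumptions for illustration only.

```python
import numpy as np

def rbf(a, b, length=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=0.1):
    """Standard GP regression posterior mean and pointwise variance with an RBF kernel."""
    K = rbf(x_tr, x_tr) + noise ** 2 * np.eye(len(x_tr))
    Ks = rbf(x_te, x_tr)
    Kss = rbf(x_te, x_te)
    mean = Ks @ np.linalg.solve(K, y_tr)
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T)) + 1e-12
    return mean, var

f = lambda x: np.sin(2 * np.pi * x)            # true regression function
x_te = np.linspace(0, 1, 50)
rng = np.random.default_rng(0)

covered, reps = 0.0, 200
for _ in range(reps):
    x_tr = rng.uniform(0, 1, 80)               # random design
    y_tr = f(x_tr) + 0.1 * rng.normal(size=80)
    mean, var = gp_posterior(x_tr, y_tr, x_te)
    low, high = mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)
    covered += np.mean((f(x_te) >= low) & (f(x_te) <= high))
print(f"average pointwise coverage of nominal 95% credible intervals: {covered / reps:.3f}")
```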

3D Scanning System for Automatic High-Resolution Plant Phenotyping

Title 3D Scanning System for Automatic High-Resolution Plant Phenotyping
Authors Chuong V Nguyen, Jurgen Fripp, David R Lovell, Robert Furbank, Peter Kuffner, Helen Daily, Xavier Sirault
Abstract Thin leaves, fine stems, self-occlusion, and non-rigid, slowly changing structures make plants difficult for three-dimensional (3D) scanning and reconstruction – two critical steps in automated visual phenotyping. Many current solutions such as laser scanning, structured light, and multiview stereo can struggle to acquire usable 3D models because of limitations in scanning resolution and calibration accuracy. In response, we have developed a fast, low-cost 3D scanning platform that images plants on a rotating stage with two tilting DSLR cameras centred on the plant. It uses new methods of camera calibration and background removal to achieve high-accuracy 3D reconstruction. We assessed the system’s accuracy using a 3D visual hull reconstruction algorithm applied to 2 plastic models of dicotyledonous plants, 2 sorghum plants and 2 wheat plants across different sets of tilt angles. Scan times ranged from 3 minutes (to capture 72 images using 2 tilt angles) to 30 minutes (to capture 360 images using 10 tilt angles). The leaf lengths, widths, areas and perimeters of the plastic models were measured manually and compared to measurements from the scanning system: results were within 3-4% of each other. The 3D reconstructions obtained with the scanning system show excellent geometric agreement with all six plant specimens, even plants with thin leaves and fine stems.
Tasks 3D Reconstruction, Calibration
Published 2017-02-26
URL http://arxiv.org/abs/1702.08112v1
PDF http://arxiv.org/pdf/1702.08112v1.pdf
PWC https://paperswithcode.com/paper/3d-scanning-system-for-automatic-high
Repo
Framework
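A heavily simplified space-carving sketch of the visual hull reconstruction used to assess the system: keep only the voxels whose projection falls inside every silhouette. The 3x4 projection matrix and silhouette mask below are placeholders, not the platform's actual calibration output.

```python
import numpy as np

def visual_hull(silhouettes, projections, grid, threshold=0.5):
    """Keep voxels whose projection lands inside every calibrated silhouette (the visual hull)."""
    pts_h = np.concatenate([grid, np.ones((len(grid), 1))], axis=1)   # homogeneous voxel centres
    keep = np.ones(len(grid), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = pts_h @ P.T                                             # project: (N, 3)
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        keep &= inside
        keep[inside] &= mask[v[inside], u[inside]] > threshold
    return grid[keep]

# Toy usage with a single hypothetical view: a camera 4 units in front of the voxel grid
# and an all-ones silhouette, so every voxel survives.
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 20)] * 3), axis=-1).reshape(-1, 3)
P = np.hstack([np.eye(3), np.array([[0.0], [0.0], [4.0]])])
mask = np.ones((100, 100))
print(visual_hull([mask], [P], grid).shape)
```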

Shape Generation using Spatially Partitioned Point Clouds

Title Shape Generation using Spatially Partitioned Point Clouds
Authors Matheus Gadelha, Subhransu Maji, Rui Wang
Abstract We propose a method to generate 3D shapes using point clouds. Given a point-cloud representation of a 3D shape, our method builds a kd-tree to spatially partition the points. This orders them consistently across all shapes, resulting in reasonably good correspondences across all shapes. We then use principal component analysis (PCA) to derive a linear shape basis across the spatially partitioned points, and optimize the point ordering by iteratively minimizing the PCA reconstruction error. Even with the spatial sorting, the point clouds are inherently noisy and the resulting distribution over the shape coefficients can be highly multi-modal. We propose to use the expressive power of neural networks to learn a distribution over the shape coefficients in a generative-adversarial framework. Compared to 3D shape generative models trained on voxel representations, our point-based method is considerably more lightweight and scalable, with little loss of quality. It also outperforms simpler linear factor models such as Probabilistic PCA, both qualitatively and quantitatively, on a number of categories from the ShapeNet dataset. Furthermore, our method can easily incorporate other point attributes such as normal and color information, an additional advantage over voxel-based representations.
Tasks
Published 2017-07-19
URL http://arxiv.org/abs/1707.06267v1
PDF http://arxiv.org/pdf/1707.06267v1.pdf
PWC https://paperswithcode.com/paper/shape-generation-using-spatially-partitioned
Repo
Framework
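A minimal sketch, on toy data, of the two ideas the abstract describes first: ordering each point cloud consistently with recursive median (kd-tree style) splits, then deriving a linear shape basis with PCA over the ordered coordinates. The adversarial model over the coefficients is not included, and the random shapes merely stand in for a ShapeNet category.

```python
import numpy as np

def kd_order(points, depth=0):
    """Recursively split at the median of alternating axes to get a consistent point ordering."""
    if len(points) <= 1:
        return points
    axis = depth % points.shape[1]
    points = points[np.argsort(points[:, axis], kind="stable")]
    mid = len(points) // 2
    return np.concatenate([kd_order(points[:mid], depth + 1),
                           kd_order(points[mid:], depth + 1)])

rng = np.random.default_rng(0)
shapes = [kd_order(rng.normal(size=(256, 3))) for _ in range(32)]   # 32 toy shapes, 256 points each
X = np.stack([s.reshape(-1) for s in shapes])                       # (32, 768) ordered coordinates

# Linear shape basis via PCA over the spatially ordered coordinates.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
coeffs = (X - mean) @ Vt[:16].T          # per-shape coefficients in a 16-dimensional basis
recon = mean + coeffs @ Vt[:16]
print("PCA reconstruction error:", np.mean((recon - X) ** 2))
```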

Stylizing Face Images via Multiple Exemplars

Title Stylizing Face Images via Multiple Exemplars
Authors Yibing Song, Linchao Bao, Shengfeng He, Qingxiong Yang, Ming-Hsuan Yang
Abstract We address the problem of transferring the style of a headshot photo to face images. Existing methods using a single exemplar lead to inaccurate results when the exemplar does not contain sufficient stylized facial components for a given photo. In this work, we propose an algorithm to stylize face images using multiple exemplars containing different subjects in the same style. Patch correspondences between an input photo and multiple exemplars are established using a Markov Random Field (MRF), which enables accurate local energy transfer via Laplacian stacks. As image patches from multiple exemplars are used, the boundaries of facial components on the target image are inevitably inconsistent. The artifacts are removed by a post-processing step using an edge-preserving filter. Experimental results show that the proposed algorithm consistently produces visually pleasing results.
Tasks
Published 2017-08-28
URL http://arxiv.org/abs/1708.08288v1
PDF http://arxiv.org/pdf/1708.08288v1.pdf
PWC https://paperswithcode.com/paper/stylizing-face-images-via-multiple-exemplars
Repo
Framework
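A small sketch of a Laplacian stack of the kind used for local energy transfer in the abstract, built from differences of progressively blurred images. The blur schedule is an assumption, and the MRF patch correspondence and edge-preserving post-processing are not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_stack(image, levels=4, sigma=2.0):
    """Band-pass decomposition: differences of progressively blurred images, plus a residual."""
    stack, current = [], image.astype(float)
    for i in range(levels):
        blurred = gaussian_filter(current, sigma=sigma * (2 ** i))
        stack.append(current - blurred)   # detail band at this scale
        current = blurred
    stack.append(current)                 # low-frequency residual
    return stack

img = np.random.default_rng(0).random((128, 128))    # placeholder for an aligned face photo
bands = laplacian_stack(img)
print(len(bands), np.allclose(sum(bands), img))       # the bands sum back to the input image
```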

Safe Adaptive Importance Sampling

Title Safe Adaptive Importance Sampling
Authors Sebastian U. Stich, Anant Raj, Martin Jaggi
Abstract Importance sampling has become an indispensable strategy to speed up optimization algorithms for large-scale applications. Improved adaptive variants - using importance values defined by the complete gradient information, which changes during optimization - enjoy favorable theoretical properties, but are typically computationally infeasible. In this paper we propose an efficient approximation of gradient-based sampling, which is based on safe bounds on the gradient. The proposed sampling distribution is (i) provably the best sampling with respect to the given bounds, (ii) always better than uniform sampling and fixed importance sampling, and (iii) can be computed efficiently - in many applications at negligible extra cost. The proposed sampling scheme is generic and can easily be integrated into existing algorithms. In particular, we show that coordinate descent (CD) and stochastic gradient descent (SGD) can enjoy a significant speed-up under the novel scheme. The efficiency of the proposed sampling is verified by extensive numerical testing.
Tasks
Published 2017-11-07
URL http://arxiv.org/abs/1711.02637v1
PDF http://arxiv.org/pdf/1711.02637v1.pdf
PWC https://paperswithcode.com/paper/safe-adaptive-importance-sampling
Repo
Framework
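As a rough illustration only: the sketch below samples coordinates proportionally to safe upper bounds on their gradient magnitudes and compares a simple variance proxy against uniform sampling. This proportional rule is a simplification; the paper derives the provably best distribution with respect to the given bounds, which is not reproduced here.

```python
import numpy as np

def sampling_from_bounds(upper_bounds):
    """Simplified bound-based importance sampling: probabilities proportional to safe upper bounds.
    (The paper's scheme is the optimal distribution given both lower and upper bounds.)"""
    p = np.asarray(upper_bounds, dtype=float)
    return p / p.sum()

rng = np.random.default_rng(0)
true_grad = rng.gamma(shape=1.0, scale=1.0, size=1000)        # hypothetical per-coordinate gradient sizes
bounds = true_grad * rng.uniform(1.0, 1.5, size=1000)         # safe upper bounds on those sizes

p = sampling_from_bounds(bounds)
uniform = np.full(1000, 1 / 1000)

# Variance proxy of an unbiased importance-sampled estimate: sum_i g_i^2 / p_i (lower is better).
for name, q in [("uniform", uniform), ("bound-based", p)]:
    print(name, np.sum(true_grad ** 2 / q))
```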

Data Driven Coded Aperture Design for Depth Recovery

Title Data Driven Coded Aperture Design for Depth Recovery
Authors Prasan A Shedligeri, Sreyas Mohan, Kaushik Mitra
Abstract Inserting a patterned occluder at the aperture of a camera lens has been shown to improve the recovery of the depth map and all-focus image compared to a fully open aperture. However, the design of the aperture pattern plays a very critical role. Previous approaches for designing aperture codes make simple assumptions on image distributions to obtain metrics for evaluating aperture codes. However, real images may not follow those assumptions, and hence the designed code may not be optimal for them. To address this drawback, we propose a data-driven approach for learning the optimal aperture pattern to recover the depth map from a single coded image. We propose a two-stage architecture where, in the first stage, we simulate coded aperture images from a training dataset of all-focus images and depth maps and, in the second stage, we recover the depth map using a deep neural network. We demonstrate that our learned aperture code performs better than previously designed codes even on the code design metrics proposed by previous approaches.
Tasks
Published 2017-05-29
URL http://arxiv.org/abs/1705.10021v2
PDF http://arxiv.org/pdf/1705.10021v2.pdf
PWC https://paperswithcode.com/paper/data-driven-coded-aperture-design-for-depth
Repo
Framework
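A hedged sketch of the first stage described above: simulating a coded-aperture capture by blurring each depth layer of an all-focus image with the aperture code scaled to that depth's defocus size. The binary code, depth levels, and blur model are illustrative assumptions, not the paper's learned code or image formation model.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_coded_capture(all_focus, depth_map, code, depth_levels=(1, 2, 3)):
    """Blur each depth layer with the aperture code scaled to that depth's defocus size."""
    out = np.zeros_like(all_focus, dtype=float)
    for d in depth_levels:
        layer = all_focus * (depth_map == d)
        psf = np.kron(code, np.ones((d, d)))       # larger defocus -> larger scaled code
        psf = psf / psf.sum()
        out += fftconvolve(layer, psf, mode="same")
    return out

rng = np.random.default_rng(0)
code = (rng.random((7, 7)) > 0.5).astype(float)    # hypothetical binary aperture code
img = rng.random((64, 64))                         # toy all-focus image
depth = rng.choice([1, 2, 3], size=(64, 64))       # toy quantized depth map
print(simulate_coded_capture(img, depth, code).shape)
```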

Combining tabu search and graph reduction to solve the maximum balanced biclique problem

Title Combining tabu search and graph reduction to solve the maximum balanced biclique problem
Authors Yi Zhou, Jin-Kao Hao
Abstract The Maximum Balanced Biclique Problem is a well-known graph model with relevant applications in diverse domains. This paper introduces a novel algorithm, which combines an effective constraint-based tabu search procedure and two dedicated graph reduction techniques. We verify the effectiveness of the algorithm on 30 classical random benchmark graphs and 25 very large real-life sparse graphs from the popular Koblenz Network Collection (KONECT). The results show that the algorithm improves the best-known results (new lower bounds) for 10 classical benchmarks and obtains the optimal solutions for 14 KONECT instances.
Tasks
Published 2017-05-20
URL http://arxiv.org/abs/1705.07339v1
PDF http://arxiv.org/pdf/1705.07339v1.pdf
PWC https://paperswithcode.com/paper/combining-tabu-search-and-graph-reduction-to
Repo
Framework
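A minimal sketch of one common graph reduction rule of the kind the abstract refers to: given a known lower bound on the balanced biclique size, any vertex with degree below that bound cannot belong to the solution and can be deleted, and the rule is applied until a fixed point. The tabu search component is not shown, and the toy bipartite graph is hypothetical.

```python
def degree_reduce(adj, lb):
    """Iteratively delete vertices whose degree is below the lower bound lb:
    such vertices cannot appear in a balanced biclique with lb vertices per side."""
    adj = {v: set(ns) for v, ns in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) < lb:
                for u in adj.pop(v):
                    adj[u].discard(v)
                changed = True
    return adj

# Toy bipartite graph: left vertices a..c, right vertices x..z.
g = {
    "a": {"x", "y"}, "b": {"x", "y"}, "c": {"z"},
    "x": {"a", "b"}, "y": {"a", "b"}, "z": {"c"},
}
print(degree_reduce(g, lb=2))   # "c" and "z" are pruned; the 2x2 biclique {a,b}x{x,y} survives
```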

Rethinking Skip-thought: A Neighborhood based Approach

Title Rethinking Skip-thought: A Neighborhood based Approach
Authors Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia R. de Sa
Abstract We study the skip-thought model with neighborhood information as weak supervision. More specifically, we propose a skip-thought neighbor model that considers the adjacent sentences as a neighborhood. We train our skip-thought neighbor model on a large corpus with continuous sentences, and then evaluate the trained model on 7 tasks, which include semantic relatedness, paraphrase detection, and classification benchmarks. Both quantitative comparison and qualitative investigation are conducted. We empirically show that our skip-thought neighbor model performs as well as the skip-thought model on the evaluation tasks. In addition, we found that incorporating an autoencoder path did not help our model perform better, while it hurt the performance of the skip-thought model.
Tasks
Published 2017-06-09
URL http://arxiv.org/abs/1706.03146v1
PDF http://arxiv.org/pdf/1706.03146v1.pdf
PWC https://paperswithcode.com/paper/rethinking-skip-thought-a-neighborhood-based
Repo
Framework
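A small sketch of the weak supervision the abstract describes: treating the sentences adjacent to each sentence as its prediction targets. The window size and toy corpus are assumptions; the encoder-decoder training itself is not shown.

```python
def neighbor_pairs(sentences, window=1):
    """Build (sentence, neighbor) training pairs: each sentence's adjacent sentences are its targets."""
    pairs = []
    for i, sent in enumerate(sentences):
        for j in range(max(0, i - window), min(len(sentences), i + window + 1)):
            if j != i:
                pairs.append((sent, sentences[j]))
    return pairs

corpus = [
    "the model encodes each sentence .",
    "its neighbors serve as weak supervision .",
    "the decoder reconstructs those neighbors .",
]
for src, tgt in neighbor_pairs(corpus):
    print(f"{src!r} -> {tgt!r}")
```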

Identifying Genetic Risk Factors via Sparse Group Lasso with Group Graph Structure

Title Identifying Genetic Risk Factors via Sparse Group Lasso with Group Graph Structure
Authors Tao Yang, Paul Thompson, Sihai Zhao, Jieping Ye
Abstract Genome-wide association studies (GWA studies or GWAS) investigate the relationships between genetic variants such as single-nucleotide polymorphisms (SNPs) and individual traits. Recently, incorporating biological priors together with machine learning methods in GWA studies has attracted increasing attention. However, real-world nucleotide-level bio-priors have not been well studied to date. Alternatively, studies at the gene level, for example of protein-protein interactions and pathways, are more rigorous and well established, and it is potentially beneficial to utilize such gene-level priors in GWAS. In this paper, we propose a novel two-level structured sparse model, called Sparse Group Lasso with Group-level Graph structure (SGLGG), for GWAS. It can be considered a sparse group Lasso combined with a group-level graph Lasso. Essentially, SGLGG imposes nucleotide-level sparsity and takes advantage of gene-level priors (both gene groups and networks) to identify phenotype-associated risk SNPs. We employ the alternating direction method of multipliers algorithm to optimize the proposed model. Our experiments on the Alzheimer’s Disease Neuroimaging Initiative whole genome sequence data and neuroimaging data demonstrate the effectiveness of SGLGG. As a regression model, it is competitive with state-of-the-art sparse models; as a variable selection method, SGLGG is promising for identifying Alzheimer’s disease-related risk SNPs.
Tasks
Published 2017-09-12
URL http://arxiv.org/abs/1709.03645v1
PDF http://arxiv.org/pdf/1709.03645v1.pdf
PWC https://paperswithcode.com/paper/identifying-genetic-risk-factors-via-sparse
Repo
Framework
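A hedged sketch of evaluating a two-level penalty in the spirit of SGLGG: SNP-level L1 sparsity, gene-group L2 sparsity, and a graph-Laplacian smoothness term over the group norms standing in for the paper's group-level graph penalty. The grouping, gene network, and regularization weights are toy placeholders, and the actual fit would use ADMM as in the paper.

```python
import numpy as np

def sglgg_penalty(beta, groups, laplacian, lam1=0.1, lam2=0.1, lam3=0.1):
    """SNP-level sparsity + gene-group sparsity + smoothness of group norms over a gene network."""
    group_norms = np.array([np.linalg.norm(beta[idx]) for idx in groups])
    l1 = lam1 * np.abs(beta).sum()                           # nucleotide (SNP) level sparsity
    group = lam2 * group_norms.sum()                         # gene-group sparsity
    graph = lam3 * group_norms @ laplacian @ group_norms     # gene-network smoothness surrogate
    return l1 + group + graph

beta = np.array([0.5, 0.0, -0.3, 0.2, 0.0, 0.1])                    # toy SNP coefficients
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]     # hypothetical gene groups
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])                     # path graph over the three genes
L = np.diag(A.sum(axis=1)) - A                                      # graph Laplacian
print(sglgg_penalty(beta, groups, L))
```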

On Extending Neural Networks with Loss Ensembles for Text Classification

Title On Extending Neural Networks with Loss Ensembles for Text Classification
Authors Hamideh Hajiabadi, Diego Molla-Aliod, Reza Monsefi
Abstract Ensemble techniques are powerful approaches that combine several weak learners to build a stronger one. As a meta-learning framework, ensemble techniques can easily be applied to many machine learning methods. In this paper we propose a neural network extended with an ensemble loss function for text classification. The weight of each weak loss function is tuned during training through the network’s gradient-based optimization. The approach is evaluated on several text classification datasets. We also evaluate its performance in various environments with several degrees of label noise. Experimental results indicate improved performance and strong resilience against label noise in comparison with other methods.
Tasks Meta-Learning, Text Classification
Published 2017-11-14
URL http://arxiv.org/abs/1711.05170v1
PDF http://arxiv.org/pdf/1711.05170v1.pdf
PWC https://paperswithcode.com/paper/on-extending-neural-networks-with-loss
Repo
Framework
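A minimal PyTorch sketch of a loss ensemble whose member weights are tuned by gradient descent together with the network, as the abstract describes; the choice of member losses, the softmax weighting, and the linear toy classifier are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class EnsembleLoss(nn.Module):
    """Weighted combination of several loss functions; the weights are learned during training."""
    def __init__(self, losses):
        super().__init__()
        self.losses = losses
        self.raw_weights = nn.Parameter(torch.zeros(len(losses)))

    def forward(self, logits, targets):
        weights = torch.softmax(self.raw_weights, dim=0)   # keep weights positive and summing to 1
        return sum(w * loss(logits, targets) for w, loss in zip(weights, self.losses))

# Toy text classifier over fixed-size features (stand-in for the paper's network).
model = nn.Linear(300, 4)
criterion = EnsembleLoss([nn.CrossEntropyLoss(), nn.MultiMarginLoss()])
optimizer = torch.optim.Adam(list(model.parameters()) + list(criterion.parameters()), lr=1e-3)

x, y = torch.randn(32, 300), torch.randint(0, 4, (32,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()          # gradients flow into both the classifier and the loss weights
optimizer.step()
```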