May 6, 2019

2525 words 12 mins read

Paper Group ANR 325



Robust and Sparse Regression via $γ$-divergence

Title Robust and Sparse Regression via $γ$-divergence
Authors Takayuki Kawashima, Hironori Fujisawa
Abstract In high-dimensional data, many sparse regression methods have been proposed. However, they may not be robust against outliers. Recently, the use of density power weights has been studied for robust parameter estimation, and the corresponding divergences have been discussed. One such divergence is the $\gamma$-divergence, and the robust estimator based on it is known for its strong robustness. In this paper, we consider robust and sparse regression based on the $\gamma$-divergence. We extend the $\gamma$-divergence to the regression problem and show that it retains strong robustness under heavy contamination even when outliers are heterogeneous. The loss function is constructed from an empirical estimate of the $\gamma$-divergence with sparse regularization, and the parameter estimate is defined as the minimizer of this loss function. To obtain the robust and sparse estimate, we propose an efficient update algorithm with a monotone decreasing property of the loss function. In particular, we discuss a linear regression problem with $L_1$ regularization in detail. In numerical experiments and real-data analyses, the proposed method outperforms existing robust and sparse methods.
Tasks
Published 2016-04-22
URL http://arxiv.org/abs/1604.06637v3
PDF http://arxiv.org/pdf/1604.06637v3.pdf
PWC https://paperswithcode.com/paper/robust-and-sparse-regression-via-divergence
Repo
Framework
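
The robustness mechanism described in this abstract can be made concrete with a small sketch. Below is a minimal numpy illustration of density-power weighting under Gaussian errors with an L1 penalty; it is not the authors' update algorithm, and `gamma_weights`, `sigma`, and `lam` are illustrative names and defaults.

```python
import numpy as np

def gamma_weights(residuals, sigma=1.0, gamma=0.5):
    """Density-power weights exp(-gamma * r^2 / (2 sigma^2)): observations
    with large residuals are exponentially downweighted, which is the
    mechanism behind the gamma-divergence's robustness to outliers."""
    return np.exp(-gamma * residuals**2 / (2 * sigma**2))

def gamma_loss(y, X, beta, sigma=1.0, gamma=0.5, lam=0.1):
    """Empirical gamma-cross-entropy for Gaussian errors plus an L1 penalty
    (constants that do not depend on beta are dropped)."""
    r = y - X @ beta
    dens = np.exp(-r**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return -np.log(np.mean(dens**gamma)) / gamma + lam * np.abs(beta).sum()
```

An outlier with residual 10 receives a weight of roughly exp(-25), so it contributes essentially nothing to the fit, unlike in ordinary least squares.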

Genetic Architect: Discovering Genomic Structure with Learned Neural Architectures

Title Genetic Architect: Discovering Genomic Structure with Learned Neural Architectures
Authors Laura Deming, Sasha Targ, Nate Sauder, Diogo Almeida, Chun Jimmie Ye
Abstract Each human genome is a 3 billion base pair set of encoding instructions. Decoding the genome using deep learning fundamentally differs from most tasks, as we do not know the full structure of the data and therefore cannot design architectures to suit it. As such, architectures that fit the structure of genomics should be learned, not prescribed. Here, we develop a novel search algorithm, applicable across domains, that discovers an optimal architecture which simultaneously learns general genomic patterns and identifies the most important sequence motifs for predicting functional genomic outcomes. The architectures we find using this algorithm succeed at using only RNA expression data to predict gene regulatory structure, learn human-interpretable visualizations of key sequence motifs, and surpass state-of-the-art results on benchmark genomics challenges.
Tasks
Published 2016-05-23
URL http://arxiv.org/abs/1605.07156v1
PDF http://arxiv.org/pdf/1605.07156v1.pdf
PWC https://paperswithcode.com/paper/genetic-architect-discovering-genomic
Repo
Framework
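
The search procedure itself is this paper's contribution; as a hedged stand-in, plain random search over a discrete configuration space shows the interface such an architecture search optimizes. `score`, `space`, and the trial budget here are all illustrative.

```python
import random

def random_search(score, space, n_trials=20, rng=None):
    """Toy random search over a discrete configuration space: sample
    n_trials configurations uniformly and keep the best-scoring one.
    A stand-in for the paper's search algorithm, whose details are in
    the full text."""
    rng = rng or random.Random(0)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        s = score(cfg)
        if s > best_score:
            best, best_score = cfg, s
    return best, best_score
```

In practice the `score` callback would train and validate a candidate network, which is where virtually all of the cost lies.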

Improving Fully Convolution Network for Semantic Segmentation

Title Improving Fully Convolution Network for Semantic Segmentation
Authors Bing Shuai, Ting Liu, Gang Wang
Abstract Fully Convolution Networks (FCN) have achieved great success in dense prediction tasks, including semantic segmentation. In this paper, we begin by discussing the architectural limitations of FCN in building a strong segmentation network. We then present our Improved Fully Convolution Network (IFCN). In contrast to FCN, IFCN introduces a context network that progressively expands the receptive fields of feature maps. In addition, dense skip connections are added so that the context network can be effectively optimized. More importantly, these dense skip connections enable IFCN to fuse multi-scale context to make reliable predictions. Empirically, these architectural modifications prove significant in enhancing segmentation performance. Without any contextual post-processing, IFCN significantly advances the state of the art on the ADE20K (ImageNet scene parsing), Pascal Context, Pascal VOC 2012 and SUN-RGBD segmentation datasets.
Tasks Scene Parsing, Semantic Segmentation
Published 2016-11-28
URL http://arxiv.org/abs/1611.08986v1
PDF http://arxiv.org/pdf/1611.08986v1.pdf
PWC https://paperswithcode.com/paper/improving-fully-convolution-network-for
Repo
Framework
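
IFCN's context network works by progressively expanding receptive fields, and the standard recurrence for the receptive field of stacked convolutions makes that growth concrete. A minimal sketch (the layer specifications below are illustrative, not IFCN's actual configuration):

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers, each given as a
    (kernel_size, stride) pair, via the standard recurrence:
    rf += (k - 1) * jump;  jump *= stride."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf
```

Stacking 3x3 convolutions grows the receptive field by 2 per layer at stride 1, but any strided layer multiplies the growth of every subsequent layer, which is why a small context network can cover large regions.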

RGB-D-based Action Recognition Datasets: A Survey

Title RGB-D-based Action Recognition Datasets: A Survey
Authors Jing Zhang, Wanqing Li, Philip O. Ogunbona, Pichao Wang, Chang Tang
Abstract Human action recognition from RGB-D (Red, Green, Blue and Depth) data has attracted increasing attention since the first work reported in 2010. Over this period, many benchmark datasets have been created to facilitate the development and evaluation of new algorithms. This raises the question of which dataset to select, and how to use it to provide a fair and objective comparative evaluation against state-of-the-art methods. To address this issue, this paper provides a comprehensive review of the most commonly used action recognition related RGB-D video datasets, including 27 single-view datasets, 10 multi-view datasets, and 7 multi-person datasets. The detailed information and analysis of these datasets are a useful resource for guiding an insightful selection of datasets for future research. In addition, the issues with current algorithm evaluation vis-à-vis the limitations of the available datasets and evaluation protocols are highlighted, resulting in a number of recommendations for the collection of new datasets and the use of evaluation protocols.
Tasks Temporal Action Localization
Published 2016-01-21
URL http://arxiv.org/abs/1601.05511v1
PDF http://arxiv.org/pdf/1601.05511v1.pdf
PWC https://paperswithcode.com/paper/rgb-d-based-action-recognition-datasets-a
Repo
Framework

Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach

Title Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach
Authors John Duchi, Peter Glynn, Hongseok Namkoong
Abstract We study statistical inference and distributionally robust solution methods for stochastic optimization problems, focusing on confidence intervals for optimal values and solutions that achieve exact coverage asymptotically. We develop a generalized empirical likelihood framework—based on distributional uncertainty sets constructed from nonparametric $f$-divergence balls—for Hadamard differentiable functionals, and in particular, stochastic optimization problems. As consequences of this theory, we provide a principled method for choosing the size of distributional uncertainty regions to provide one- and two-sided confidence intervals that achieve exact coverage. We also give an asymptotic expansion for our distributionally robust formulation, showing how robustification regularizes problems by their variance. Finally, we show that optimizers of the distributionally robust formulations we study enjoy (essentially) the same consistency properties as those in classical sample average approximations. Our general approach applies to quickly mixing stationary sequences, including geometrically ergodic Harris recurrent Markov chains.
Tasks Stochastic Optimization
Published 2016-10-11
URL http://arxiv.org/abs/1610.03425v3
PDF http://arxiv.org/pdf/1610.03425v3.pdf
PWC https://paperswithcode.com/paper/statistics-of-robust-optimization-a
Repo
Framework
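
The uncertainty sets in this abstract are $f$-divergence balls; for the KL member of that family, the worst-case mean has a well-known one-dimensional dual, sketched below with a crude grid search standing in for a proper scalar solver. The grid bounds and `rho` are illustrative.

```python
import numpy as np

def robust_mean_kl(z, rho, lams=None):
    """Worst-case mean of z over a KL-divergence ball of radius rho around
    the empirical distribution, via the dual
        min_{lam > 0}  lam * rho + lam * log E[exp(z / lam)],
    evaluated on a log-spaced grid of lam values (shifted by max(z) for
    numerical stability)."""
    z = np.asarray(z, dtype=float)
    if lams is None:
        lams = np.geomspace(1e-2, 1e2, 400)
    m = z.max()
    log_mgf = np.log(np.mean(np.exp((z[None, :] - m) / lams[:, None]), axis=1))
    return (lams * rho + m + lams * log_mgf).min()
```

By weak duality every grid point upper-bounds the robust value, so the grid minimum always sits between the empirical mean (at rho = 0) and the sample maximum, matching the paper's picture of robustification as variance regularization.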

A Perspective on Sentiment Analysis

Title A Perspective on Sentiment Analysis
Authors K Paramesha, K C Ravishankar
Abstract Sentiment Analysis (SA) is a fascinating area of research that has captured the attention of researchers, as it has many facets and, more importantly, promises economic stakes in the corporate and governance sectors. SA stemmed out of text analytics and has established itself as a separate identity and domain of research. The wide-ranging results of SA have proved to influence the way some critical decisions are taken. Hence, a thorough understanding of the different dimensions of the input, output, processes and approaches of SA has become relevant.
Tasks Sentiment Analysis
Published 2016-07-21
URL http://arxiv.org/abs/1607.06221v2
PDF http://arxiv.org/pdf/1607.06221v2.pdf
PWC https://paperswithcode.com/paper/a-perspective-on-sentiment-analysis
Repo
Framework

Learning Boltzmann Machine with EM-like Method

Title Learning Boltzmann Machine with EM-like Method
Authors Jinmeng Song, Chun Yuan
Abstract We propose an expectation-maximization-like (EM-like) method to train Boltzmann machines with unconstrained connectivity. It adopts a Monte Carlo approximation in the E-step, and in the M-step either replaces the intractable likelihood objective with efficiently computed surrogates or directly approximates the gradient of the likelihood objective. The EM-like method is a modification of alternating minimization. We prove that the EM-like method is exactly the same as contrastive divergence in restricted Boltzmann machines if its M-step adopts a particular approximation. We also propose a new measure to assess the performance of Boltzmann machines as generative models of data, with computational complexity O(Rmn). Finally, we demonstrate the performance of the EM-like method through numerical experiments.
Tasks
Published 2016-09-07
URL http://arxiv.org/abs/1609.01840v1
PDF http://arxiv.org/pdf/1609.01840v1.pdf
PWC https://paperswithcode.com/paper/learning-boltzmann-machine-with-em-like
Repo
Framework
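
The paper relates its EM-like updates to contrastive divergence in the RBM special case; a minimal CD-1 update shows what that special case computes. Biases are omitted for brevity and the names and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM with
    visible batch v0 and weight matrix W. The positive phase uses the
    data; the negative phase uses a single Gibbs step v0 -> h0 -> v1 -> h1."""
    h0 = sigmoid(v0 @ W)
    h0_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h0_sample @ W.T)
    h1 = sigmoid(v1 @ W)
    return W + lr * (v0.T @ h0 - v1.T @ h1) / v0.shape[0]
```

The positive-minus-negative correlation difference is the quantity the paper's M-step approximation must reproduce for the equivalence to hold.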

A Neutrosophic Recommender System for Medical Diagnosis Based on Algebraic Neutrosophic Measures

Title A Neutrosophic Recommender System for Medical Diagnosis Based on Algebraic Neutrosophic Measures
Authors Mumtaz Ali, Nguyen Van Minh, Le Hoang Son
Abstract Neutrosophic sets can handle uncertain, incomplete, inconsistent and indeterminate information in a more accurate way. In this paper, we propose a neutrosophic recommender system to predict diseases, comprising a single-criterion neutrosophic recommender system (SC-NRS) and a multi-criterion neutrosophic recommender system (MC-NRS). Further, we investigate algebraic operations on neutrosophic recommender systems such as union, complement, intersection, probabilistic sum, bold sum, bold intersection, bounded difference, symmetric difference, convex linear sum of min and max operators, and Cartesian product, along with associativity, commutativity and distributivity. Based on these operations, we study algebraic structures such as lattices, Kleene algebras, de Morgan algebras, Brouwerian algebras, BCK algebras, Stone algebras and MV algebras. In addition, we introduce several types of similarity measures based on these algebraic operations and study some of their theoretical properties. Moreover, we derive a prediction formula using the proposed algebraic similarity measures, and propose a new algorithm for medical diagnosis based on the neutrosophic recommender system. To check the validity of the proposed methodology, we run experiments on the Heart, RHC, Breast cancer, Diabetes and DMD datasets. We report the MSE and computational time of the proposed algorithm against relevant methods such as ICSM, DSM, CARE and CFMD, as well as other variants, namely Variant 67, Variant 69 and Variant 71, in both tabular and graphical form, to analyze efficiency and accuracy. Finally, we analyze the strength of all 8 algorithms with the ANOVA statistical tool.
Tasks Medical Diagnosis, Recommendation Systems
Published 2016-02-25
URL http://arxiv.org/abs/1602.08447v1
PDF http://arxiv.org/pdf/1602.08447v1.pdf
PWC https://paperswithcode.com/paper/a-neutrosophic-recommender-system-for-medical
Repo
Framework
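
Conventions for neutrosophic operations vary across the literature; under one common convention for single-valued triples (T, I, F), union and a simple distance-based similarity look as follows. This is an illustration of the data type, not one of the paper's proposed algebraic measures.

```python
def n_union(a, b):
    """Union of single-valued neutrosophic triples (T, I, F) under one
    common convention: max on truth, min on indeterminacy and falsity."""
    return (max(a[0], b[0]), min(a[1], b[1]), min(a[2], b[2]))

def n_similarity(a, b):
    """A simple normalized-distance similarity in [0, 1] between two
    neutrosophic triples (illustrative only)."""
    return 1.0 - (abs(a[0] - b[0]) + abs(a[1] - b[1]) + abs(a[2] - b[2])) / 3.0
```

The three independent components are what distinguish the representation from ordinary fuzzy membership, where indeterminacy and falsity are not tracked separately.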

Learning relationships between data obtained independently

Title Learning relationships between data obtained independently
Authors Alexandra Carpentier, Teresa Schlueter
Abstract The aim of this paper is to provide a new method for learning the relationships between data that have been obtained independently. Unlike existing methods such as matching, the proposed technique does not require any contextual information, provided that the dependency between the variables of interest is monotone. It can therefore be easily combined with matching in order to exploit the advantages of both methods. The technique can be described as a mix of quantile matching and deconvolution. We provide both theoretical and empirical validation.
Tasks
Published 2016-01-04
URL http://arxiv.org/abs/1601.00504v1
PDF http://arxiv.org/pdf/1601.00504v1.pdf
PWC https://paperswithcode.com/paper/learning-relationships-between-data-obtained
Repo
Framework
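
The quantile-matching half of the idea can be sketched in a few lines: with a monotone dependency, sorting each sample independently pairs up corresponding quantiles. The deconvolution step that handles observation noise is omitted here.

```python
import numpy as np

def quantile_match(x, y):
    """Pair the empirical quantiles of two independently observed samples.
    If y was generated as g(x) for an increasing g, sorting both samples
    aligns corresponding quantiles, recovering the graph of g without any
    matched pairs."""
    return np.sort(x), np.sort(y)
```

The noiseless case makes the monotonicity requirement visible: a monotone transform commutes with sorting, so the paired order statistics lie exactly on the graph of g.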

A note on the statistical view of matrix completion

Title A note on the statistical view of matrix completion
Authors Tianxi Li
Abstract A very simple interpretation of the matrix completion problem is introduced, based on statistical models. Combined with well-known results from missing data analysis, this interpretation indicates that matrix completion is still a valid and principled estimation procedure even without the missing completely at random (MCAR) assumption, which almost all current theoretical studies of matrix completion assume.
Tasks Matrix Completion
Published 2016-05-10
URL http://arxiv.org/abs/1605.03040v1
PDF http://arxiv.org/pdf/1605.03040v1.pdf
PWC https://paperswithcode.com/paper/a-note-on-the-statistical-view-of-matrix
Repo
Framework
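
The note above concerns when completion is statistically principled; the mechanics of a completion itself can be sketched with hard-impute-style iterations. The rank and iteration count below are illustrative.

```python
import numpy as np

def hard_impute(M, mask, rank=1, n_iter=200):
    """Fill the missing entries of M (where mask is False) by repeatedly
    replacing them with the corresponding entries of a rank-truncated SVD
    of the current completion ('hard-impute' style iterations)."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M, low)
    return X
```

Observed entries are clamped to their data values on every iteration, so only the missing positions move; for a genuinely low-rank matrix with few missing entries the iteration converges to the exact completion.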

The TUM LapChole dataset for the M2CAI 2016 workflow challenge

Title The TUM LapChole dataset for the M2CAI 2016 workflow challenge
Authors Ralf Stauder, Daniel Ostler, Michael Kranzfelder, Sebastian Koller, Hubertus Feußner, Nassir Navab
Abstract In this technical report we present our collected dataset of laparoscopic cholecystectomies (LapChole). Laparoscopic videos of a total of 20 surgeries were recorded and annotated with surgical phase labels, of which 15 were randomly pre-determined as training data, while the remaining 5 videos were selected as test data. This dataset was later included as part of the M2CAI 2016 workflow detection challenge during MICCAI 2016 in Athens.
Tasks
Published 2016-10-28
URL http://arxiv.org/abs/1610.09278v2
PDF http://arxiv.org/pdf/1610.09278v2.pdf
PWC https://paperswithcode.com/paper/the-tum-lapchole-dataset-for-the-m2cai-2016
Repo
Framework

Virtual Worlds as Proxy for Multi-Object Tracking Analysis

Title Virtual Worlds as Proxy for Multi-Object Tracking Analysis
Authors Adrien Gaidon, Qiao Wang, Yohann Cabon, Eleonora Vig
Abstract Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called Virtual KITTI (see http://www.xrce.xerox.com/Research-Development/Computer-Vision/Proxy-Virtual-Worlds), automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show that these factors can drastically affect otherwise high-performing deep models for tracking.
Tasks Instance Segmentation, Multi-Object Tracking, Object Detection, Object Tracking, Optical Flow Estimation, Semantic Segmentation
Published 2016-05-20
URL http://arxiv.org/abs/1605.06457v1
PDF http://arxiv.org/pdf/1605.06457v1.pdf
PWC https://paperswithcode.com/paper/virtual-worlds-as-proxy-for-multi-object
Repo
Framework

Approximating Wisdom of Crowds using K-RBMs

Title Approximating Wisdom of Crowds using K-RBMs
Authors Abhay Gupta
Abstract An important way to build large training sets is to gather noisy labels from crowds of non-experts. We propose a method to aggregate noisy labels collected from a crowd of workers or annotators. Eliciting labels is important in tasks such as judging web search quality and rating products. Our method assumes that labels are generated by a probability distribution over items and labels. We formulate the method by drawing parallels between Gaussian Mixture Models (GMMs) and Restricted Boltzmann Machines (RBMs) and show that the problem of vote aggregation can be viewed as one of clustering. We use K-RBMs to perform the clustering. Finally, we present empirical evaluations on real datasets.
Tasks
Published 2016-11-16
URL http://arxiv.org/abs/1611.05340v2
PDF http://arxiv.org/pdf/1611.05340v2.pdf
PWC https://paperswithcode.com/paper/approximating-wisdom-of-crowds-using-k-rbms
Repo
Framework
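
The paper's aggregator clusters votes with K-RBMs; the baseline such methods improve on is as simple as a per-item majority vote, sketched here for contrast.

```python
from collections import Counter

def majority_vote(votes):
    """Aggregate one item's noisy labels by majority vote, the simplest
    baseline for the clustering-based aggregation described above."""
    return Counter(votes).most_common(1)[0][0]
```

Majority vote treats every annotator as equally reliable; model-based aggregation exists precisely to relax that assumption.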

Learning Influence Functions from Incomplete Observations

Title Learning Influence Functions from Incomplete Observations
Authors Xinran He, Ke Xu, David Kempe, Yan Liu
Abstract We study the problem of learning influence functions under incomplete observations of node activations. Incomplete observations are a major concern as most (online and real-world) social networks are not fully observable. We establish both proper and improper PAC learnability of influence functions under randomly missing observations. Proper PAC learnability under the Discrete-Time Linear Threshold (DLT) and Discrete-Time Independent Cascade (DIC) models is established by reducing incomplete observations to complete observations in a modified graph. Our improper PAC learnability result applies for the DLT and DIC models as well as the Continuous-Time Independent Cascade (CIC) model. It is based on a parametrization in terms of reachability features, and also gives rise to an efficient and practical heuristic. Experiments on synthetic and real-world datasets demonstrate the ability of our method to compensate even for a fairly large fraction of missing observations.
Tasks
Published 2016-11-07
URL http://arxiv.org/abs/1611.02305v1
PDF http://arxiv.org/pdf/1611.02305v1.pdf
PWC https://paperswithcode.com/paper/learning-influence-functions-from-incomplete
Repo
Framework
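
Of the diffusion models named in this abstract, the Discrete-Time Independent Cascade model is easy to state in code. A minimal simulator (the adjacency-dict interface and uniform probability are illustrative simplifications) clarifies what an influence function maps a seed set to.

```python
import random

def independent_cascade(edges, p, seeds, rng=None):
    """Discrete-Time Independent Cascade: each newly activated node gets
    exactly one chance to activate each of its out-neighbours, each with
    probability p. Returns the final set of active nodes."""
    rng = rng or random.Random(0)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in edges.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active
```

The influence function the paper learns is the expected size of this returned set as a function of the seed set, estimated without observing every activation.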

Joint Object-Material Category Segmentation from Audio-Visual Cues

Title Joint Object-Material Category Segmentation from Audio-Visual Cues
Authors Anurag Arnab, Michael Sapienza, Stuart Golodetz, Julien Valentin, Ondrej Miksik, Shahram Izadi, Philip Torr
Abstract It is not always possible to recognise objects and infer material properties for a scene from visual cues alone, since objects can look visually similar whilst being made of very different materials. In this paper, we therefore present an approach that augments the available dense visual cues with sparse auditory cues in order to estimate dense object and material labels. Since estimates of object class and material properties are mutually informative, we optimise our multi-output labelling jointly using a random-field framework. We evaluate our system on a new dataset with paired visual and auditory data that we make publicly available. We demonstrate that this joint estimation of object and material labels significantly outperforms the estimation of either category in isolation.
Tasks
Published 2016-01-10
URL http://arxiv.org/abs/1601.02220v1
PDF http://arxiv.org/pdf/1601.02220v1.pdf
PWC https://paperswithcode.com/paper/joint-object-material-category-segmentation
Repo
Framework