Paper Group ANR 5
Throwing fuel on the embers: Probability or Dichotomy, Cognitive or Linguistic?. Trainlets: Dictionary Learning in High Dimensions. On the Role and the Importance of Features for Background Modeling and Foreground Detection. A corpus of preposition supersenses in English web reviews. 2D SEM images turn into 3D object models. Lifted Region-Based Belief Propagation …
Throwing fuel on the embers: Probability or Dichotomy, Cognitive or Linguistic?
Title | Throwing fuel on the embers: Probability or Dichotomy, Cognitive or Linguistic? |
Authors | David M. W. Powers |
Abstract | Prof. Robert Berwick’s abstract for his forthcoming invited talk at the ACL2016 workshop on Cognitive Aspects of Computational Language Learning revives an ancient debate. Entitled “Why take a chance?”, Berwick seems to refer implicitly to Chomsky’s critique of the statistical approach of Harris as well as the currently dominant paradigms in CoNLL. Berwick avoids Chomsky’s use of “innate” but states that “the debate over the existence of sophisticated mental grammars was settled with Chomsky’s Logical Structure of Linguistic Theory (1957/1975)”, acknowledging that “this debate has often been revived”. This paper agrees with the view that this debate has long since been settled, but with the opposite outcome! Given the embers have not yet died away, and the questions remain fundamental, perhaps it is appropriate to refuel the debate, so I would like to join Bob in throwing fuel on this fire by reviewing the evidence against the Chomskian position! |
Tasks | |
Published | 2016-07-01 |
URL | http://arxiv.org/abs/1607.00186v1 |
http://arxiv.org/pdf/1607.00186v1.pdf | |
PWC | https://paperswithcode.com/paper/throwing-fuel-on-the-embers-probability-or |
Repo | |
Framework | |
Trainlets: Dictionary Learning in High Dimensions
Title | Trainlets: Dictionary Learning in High Dimensions |
Authors | Jeremias Sulam, Boaz Ophir, Michael Zibulevsky, Michael Elad |
Abstract | Sparse representations have been shown to be a very powerful model for real-world signals, and have enabled the development of applications with notable performance. Combined with the ability to learn a dictionary from signal examples, sparsity-inspired algorithms often achieve state-of-the-art results in a wide variety of tasks. Yet, these methods have traditionally been restricted to small dimensions, mainly due to the computational constraints that the dictionary learning problem entails. In the context of image processing, this implies handling small image patches. In this work we show how to efficiently handle bigger dimensions and go beyond the small patches in sparsity-based signal and image processing methods. We build our approach based on a new cropped wavelet decomposition, which enables a multi-scale analysis with virtually no border effects. We then employ this as the base dictionary within a double sparsity model to enable the training of adaptive dictionaries. To cope with the increase of training data, while at the same time improving the training performance, we present an Online Sparse Dictionary Learning (OSDL) algorithm to train this model effectively, enabling it to handle millions of examples. This work shows that dictionary learning can be scaled up to tackle a new level of signal dimensions, obtaining large adaptable atoms that we call trainlets. |
Tasks | Dictionary Learning |
Published | 2016-01-31 |
URL | http://arxiv.org/abs/1602.00212v4 |
http://arxiv.org/pdf/1602.00212v4.pdf | |
PWC | https://paperswithcode.com/paper/trainlets-dictionary-learning-in-high |
Repo | |
Framework | |
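The online dictionary update that the trainlets abstract describes can be illustrated with a minimal sketch. This is not the authors' OSDL algorithm or their cropped-wavelet double-sparsity model; it is a generic stochastic-gradient dictionary update with a crude hard-thresholding pursuit standing in for a proper pursuit step, and all names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(D, x, k):
    """Greedy hard-thresholding: keep the k atoms with the largest
    correlations, then least-squares fit on that support (a crude
    stand-in for OMP)."""
    c = D.T @ x
    a = np.zeros_like(c)
    idx = np.argsort(np.abs(c))[-k:]
    a[idx] = np.linalg.lstsq(D[:, idx], x, rcond=None)[0]
    return a

def online_dictionary_step(D, x, k, lr=0.1):
    """One stochastic-gradient update of the dictionary on one example."""
    a = sparse_code(D, x, k)
    r = x - D @ a                   # reconstruction residual
    D = D + lr * np.outer(r, a)     # gradient step on ||x - D a||^2
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # renormalize atoms
    return D

# toy run: 64-dimensional signals, 128 atoms, sparsity level 5
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
for _ in range(100):
    x = rng.standard_normal(64)
    D = online_dictionary_step(D, x, k=5)
```

Processing one example (or a mini-batch) at a time is what lets such a scheme stream through millions of training signals without holding them all in memory.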
On the Role and the Importance of Features for Background Modeling and Foreground Detection
Title | On the Role and the Importance of Features for Background Modeling and Foreground Detection |
Authors | Thierry Bouwmans, Caroline Silva, Cristina Marghes, Mohammed Sami Zitouni, Harish Bhaskar, Carl Frelicot |
Abstract | Background modeling has emerged as a popular foreground detection technique for various applications in video surveillance. Background modeling methods have become increasingly efficient at robustly modeling the background and hence detecting moving objects in any visual scene. Although several background subtraction and foreground detection methods have been proposed recently, no single algorithm today seems able to simultaneously address all the key challenges of illumination variation, dynamic camera motion, cluttered background and occlusion. This limitation can be attributed to the lack of systematic investigation concerning the role and importance of features within background modeling and foreground detection. With the availability of a rather large set of invariant features, the challenge is in determining the best combination of features that would improve accuracy and robustness in detection. The purpose of this study is to initiate a rigorous and comprehensive survey of features used within background modeling and foreground detection. Further, this paper presents a systematic experimental and statistical analysis of techniques that provides valuable insight into the trends in background modeling, and uses it to draw meaningful recommendations for practitioners. In this paper, a preliminary review of the key characteristics of features based on their types and sizes is provided, in addition to an investigation of their intrinsic spectral, spatial and temporal properties. Furthermore, improvements using statistical and fuzzy tools are examined, and techniques based on multiple features are benchmarked against reliability and selection criteria. Finally, a description of the different resources available, such as datasets and code, is provided. |
Tasks | |
Published | 2016-11-28 |
URL | http://arxiv.org/abs/1611.09099v1 |
http://arxiv.org/pdf/1611.09099v1.pdf | |
PWC | https://paperswithcode.com/paper/on-the-role-and-the-importance-of-features |
Repo | |
Framework | |
A corpus of preposition supersenses in English web reviews
Title | A corpus of preposition supersenses in English web reviews |
Authors | Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Meredith Green, Kathryn Conger, Tim O’Gorman, Martha Palmer |
Abstract | We present the first corpus annotated with preposition supersenses, unlexicalized categories for semantic functions that can be marked by English prepositions (Schneider et al., 2015). That scheme improves upon its predecessors to better facilitate comprehensive manual annotation. Moreover, unlike the previous schemes, the preposition supersenses are organized hierarchically. Our data will be publicly released on the web upon publication. |
Tasks | |
Published | 2016-05-08 |
URL | http://arxiv.org/abs/1605.02257v1 |
http://arxiv.org/pdf/1605.02257v1.pdf | |
PWC | https://paperswithcode.com/paper/a-corpus-of-preposition-supersenses-in |
Repo | |
Framework | |
2D SEM images turn into 3D object models
Title | 2D SEM images turn into 3D object models |
Authors | Wichai Shanklin |
Abstract | Scanning electron microscopy (SEM) is probably one of the most fascinating examination approaches, used for more than two decades for the detailed inspection of micro-scale objects. Most scanning electron microscopes can only produce 2D images, which cannot support operational analysis of microscopic surface properties. Computer vision algorithms combined with advanced geometric and mathematical approaches can turn any SEM into a full 3D measurement device. This work presents a methodical literature review of automatic 3D surface reconstruction from scanning electron microscope images. |
Tasks | |
Published | 2016-02-17 |
URL | http://arxiv.org/abs/1602.05256v1 |
http://arxiv.org/pdf/1602.05256v1.pdf | |
PWC | https://paperswithcode.com/paper/2d-sem-images-turn-into-3d-object-models |
Repo | |
Framework | |
Lifted Region-Based Belief Propagation
Title | Lifted Region-Based Belief Propagation |
Authors | David Smith, Parag Singla, Vibhav Gogate |
Abstract | Due to the intractable nature of exact lifted inference, research has recently focused on the discovery of accurate and efficient approximate inference algorithms in Statistical Relational Models (SRMs), such as Lifted First-Order Belief Propagation (FOBP). FOBP simulates propositional factor graph belief propagation without constructing the ground factor graph by identifying and lifting over redundant message computations. In this work, we propose a generalization of FOBP called Lifted Generalized Belief Propagation, in which both the region structure and the message structure can be lifted. This approach allows more of the inference to be performed intra-region (in the exact inference step of BP), thereby allowing simulation of propagation on a graph structure with larger region scopes and fewer edges, while still maintaining tractability. We demonstrate that the resulting algorithm converges in fewer iterations to more accurate results on a variety of SRMs. |
Tasks | |
Published | 2016-06-30 |
URL | http://arxiv.org/abs/1606.09637v1 |
http://arxiv.org/pdf/1606.09637v1.pdf | |
PWC | https://paperswithcode.com/paper/lifted-region-based-belief-propagation |
Repo | |
Framework | |
Photo Filter Recommendation by Category-Aware Aesthetic Learning
Title | Photo Filter Recommendation by Category-Aware Aesthetic Learning |
Authors | Wei-Tse Sun, Ting-Hsuan Chao, Yin-Hsi Kuo, Winston H. Hsu |
Abstract | Nowadays, social media has become a popular platform for the public to share photos. To make photos more visually appealing, users usually apply filters on their photos without domain knowledge. However, due to the growing number of filter types, it becomes a major issue for users to choose the best filter type. For this purpose, filter recommendation for photo aesthetics plays an important role in image quality ranking problems. In recent years, several works have shown that Convolutional Neural Networks (CNNs) outperform traditional methods in image aesthetic categorization, which classifies images into high or low quality. Most of them do not consider the effect on filtered images; hence, we propose a novel image aesthetic learning for filter recommendation. Instead of binarizing image quality, we adjust the state-of-the-art CNN architectures and design a pairwise loss function to learn the embedded aesthetic responses in hidden layers for filtered images. Based on our pilot study, we observe image categories (e.g., portrait, landscape, food) will affect user preference on filter selection. We further integrate category classification into our proposed aesthetic-oriented models. To the best of our knowledge, there is no public dataset for aesthetic judgment with filtered images. We create a new dataset called Filter Aesthetic Comparison Dataset (FACD). It contains 28,160 filtered images based on the AVA dataset and 42,240 reliable image pairs with aesthetic annotations using Amazon Mechanical Turk. It is the first dataset containing filtered images and user preference labels. We conduct experiments on the collected FACD for filter recommendation, and the results show that our proposed category-aware aesthetic learning outperforms aesthetic classification methods (e.g., 12% relative improvement). |
Tasks | |
Published | 2016-08-18 |
URL | http://arxiv.org/abs/1608.05339v2 |
http://arxiv.org/pdf/1608.05339v2.pdf | |
PWC | https://paperswithcode.com/paper/photo-filter-recommendation-by-category-aware |
Repo | |
Framework | |
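The pairwise loss mentioned in the abstract above can be sketched as a hinge-style ranking loss over the aesthetic scores of a filtered-image pair; the exact objective in the paper may differ, so treat this as an illustrative stand-in.

```python
import numpy as np

def pairwise_ranking_loss(score_pref, score_other, margin=1.0):
    """Hinge-style pairwise loss: the human-preferred filtered image
    should score higher than the alternative by at least `margin`."""
    return np.maximum(0.0, margin - (score_pref - score_other))

# toy (preferred, other) aesthetic score pairs
pairs = [(2.0, 0.5), (1.0, 1.2), (3.0, 2.5)]
losses = [pairwise_ranking_loss(p, o) for p, o in pairs]
```

Only the second and third pairs incur a loss here: the first pair already satisfies the margin, while the second is misordered and the third is correctly ordered but within the margin.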
Convex Histogram-Based Joint Image Segmentation with Regularized Optimal Transport Cost
Title | Convex Histogram-Based Joint Image Segmentation with Regularized Optimal Transport Cost |
Authors | Nicolas Papadakis, Julien Rabin |
Abstract | We investigate in this work a versatile convex framework for multiple image segmentation, relying on the regularized optimal mass transport theory. In this setting, several transport cost functions are considered and used to match statistical distributions of features. In practice, global multidimensional histograms are estimated from the segmented image regions, and are compared to referring models that are either fixed histograms given a priori, or directly inferred in the non-supervised case. The different convex problems studied are solved efficiently using primal-dual algorithms. The proposed approach is generic and enables multi-phase segmentation as well as co-segmentation of multiple images. |
Tasks | Semantic Segmentation |
Published | 2016-10-05 |
URL | http://arxiv.org/abs/1610.01400v1 |
http://arxiv.org/pdf/1610.01400v1.pdf | |
PWC | https://paperswithcode.com/paper/convex-histogram-based-joint-image |
Repo | |
Framework | |
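The regularized optimal transport cost between feature histograms that the abstract above builds on can be sketched with a generic Sinkhorn iteration. The paper solves the full segmentation problems with primal-dual algorithms; the snippet below only illustrates the entropic-regularized transport cost itself, with made-up histograms.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=200):
    """Entropy-regularized optimal transport between histograms a and b
    with ground cost matrix C, via alternating Sinkhorn scalings."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan
    return P, float(np.sum(P * C))    # plan and transport cost

bins = np.arange(8, dtype=float)
C = (bins[:, None] - bins[None, :]) ** 2
C = C / C.max()                       # normalize costs for stability
a = np.ones(8) / 8                    # uniform histogram
b = np.zeros(8); b[0] = 1.0           # all mass in the first bin
P, cost = sinkhorn(a, b, C)
```

Unlike a bin-wise distance, this cost accounts for how far mass must travel between bins, which is what makes it robust for comparing histograms of shifted or quantized features.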
Tracing metaphors in time through self-distance in vector spaces
Title | Tracing metaphors in time through self-distance in vector spaces |
Authors | Marco Del Tredici, Malvina Nissim, Andrea Zaninello |
Abstract | From a diachronic corpus of Italian, we build consecutive vector spaces in time and use them to compare a term’s cosine similarity to itself in different time spans. We assume that a drop in similarity might be related to the emergence of a metaphorical sense at a given time. Similarity-based observations are matched to the actual year when a figurative meaning was documented in a reference dictionary and through manual inspection of corpus occurrences. |
Tasks | |
Published | 2016-11-10 |
URL | http://arxiv.org/abs/1611.03279v1 |
http://arxiv.org/pdf/1611.03279v1.pdf | |
PWC | https://paperswithcode.com/paper/tracing-metaphors-in-time-through-self |
Repo | |
Framework | |
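The self-distance measure described in the abstract above can be sketched in a few lines: embed the same term in two consecutive time spans (after aligning the spaces, e.g. with orthogonal Procrustes) and track the drop in cosine similarity. The vectors below are made up purely for illustration.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical embeddings of the same term in two consecutive time spans
v_1990s = np.array([0.9, 0.1, 0.2])
v_2000s = np.array([0.2, 0.8, 0.4])

# a large drop in self-similarity may flag an emerging figurative sense
drop = 1.0 - cosine(v_1990s, v_2000s)
```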
Semi-Automatic Data Annotation, POS Tagging and Mildly Context-Sensitive Disambiguation: the eXtended Revised AraMorph (XRAM)
Title | Semi-Automatic Data Annotation, POS Tagging and Mildly Context-Sensitive Disambiguation: the eXtended Revised AraMorph (XRAM) |
Authors | Giuliano Lancioni, Valeria Pettinari, Laura Garofalo, Marta Campanelli, Ivana Pepe, Simona Olivieri, Ilaria Cicola |
Abstract | An extended, revised form of Tim Buckwalter’s Arabic lexical and morphological resource AraMorph, eXtended Revised AraMorph (henceforth XRAM), is presented which addresses a number of weaknesses and inconsistencies of the original model by allowing a wider coverage of real-world Classical and contemporary (both formal and informal) Arabic texts. Building upon previous research, XRAM enhancements include (i) flag-selectable usage markers, (ii) probabilistic mildly context-sensitive POS tagging, filtering, disambiguation and ranking of alternative morphological analyses, (iii) semi-automatic increment of lexical coverage through extraction of lexical and morphological information from existing lexical resources. Testing of XRAM through a front-end Python module showed a remarkable success level. |
Tasks | |
Published | 2016-03-06 |
URL | http://arxiv.org/abs/1603.01833v1 |
http://arxiv.org/pdf/1603.01833v1.pdf | |
PWC | https://paperswithcode.com/paper/semi-automatic-data-annotation-pos-tagging |
Repo | |
Framework | |
Natural-Parameter Networks: A Class of Probabilistic Neural Networks
Title | Natural-Parameter Networks: A Class of Probabilistic Neural Networks |
Authors | Hao Wang, Xingjian Shi, Dit-Yan Yeung |
Abstract | Neural networks (NN) have achieved state-of-the-art performance in various applications. Unfortunately, in applications where training data is insufficient, they are often prone to overfitting. One effective way to alleviate this problem is to exploit the Bayesian approach by using Bayesian neural networks (BNN). Another shortcoming of NN is the lack of flexibility to customize different distributions for the weights and neurons according to the data, as is often done in probabilistic graphical models. To address these problems, we propose a class of probabilistic neural networks, dubbed natural-parameter networks (NPN), as a novel and lightweight Bayesian treatment of NN. NPN allows the use of arbitrary exponential-family distributions to model the weights and neurons. Different from traditional NN and BNN, NPN takes distributions as input and goes through layers of transformation before producing distributions to match the target output distributions. As a Bayesian treatment, efficient backpropagation (BP) is performed to learn the natural parameters for the distributions over both the weights and neurons. The output distributions of each layer, as byproducts, may be used as second-order representations for the associated tasks such as link prediction. Experiments on real-world datasets show that NPN can achieve state-of-the-art performance. |
Tasks | Link Prediction |
Published | 2016-11-02 |
URL | http://arxiv.org/abs/1611.00448v1 |
http://arxiv.org/pdf/1611.00448v1.pdf | |
PWC | https://paperswithcode.com/paper/natural-parameter-networks-a-class-of |
Repo | |
Framework | |
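For the Gaussian member of the exponential family, the NPN idea of transforming distributions rather than point values can be sketched with a moment-matched linear layer: with independent Gaussian inputs and weights, the output mean and variance have closed forms. This is only a hedged sketch of one layer's forward pass under those independence assumptions, not the authors' full architecture or training scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_linear_forward(a_m, a_s, W_m, W_s):
    """Linear layer on Gaussian inputs with Gaussian weights:
    a_m/a_s are input means/variances, W_m/W_s weight means/variances.
    Returns the exact mean and variance of the output (assuming all
    inputs and weights are mutually independent)."""
    o_m = W_m @ a_m
    o_s = W_s @ a_s + W_s @ (a_m ** 2) + (W_m ** 2) @ a_s
    return o_m, o_s

# 4-dimensional input distribution, 3-dimensional output distribution
a_m, a_s = rng.standard_normal(4), np.abs(rng.standard_normal(4))
W_m, W_s = rng.standard_normal((3, 4)), np.abs(rng.standard_normal((3, 4)))
o_m, o_s = gaussian_linear_forward(a_m, a_s, W_m, W_s)
```

Because each layer outputs a variance alongside a mean, the network carries uncertainty forward for free, which is what makes the layer outputs usable as second-order representations.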
DARN: a Deep Adversial Residual Network for Intrinsic Image Decomposition
Title | DARN: a Deep Adversial Residual Network for Intrinsic Image Decomposition |
Authors | Louis Lettry, Kenneth Vanhoey, Luc Van Gool |
Abstract | We present a new deep supervised learning method for intrinsic decomposition of a single image into its albedo and shading components. Our contributions are based on a new fully convolutional neural network that estimates absolute albedo and shading jointly. Our solution relies on a single end-to-end deep sequence of residual blocks and a perceptually-motivated metric formed by two adversarially trained discriminators. As opposed to classical intrinsic image decomposition work, it is fully data-driven, hence does not require any physical priors like shading smoothness or albedo sparsity, nor does it rely on geometric information such as depth. Compared to recent deep learning techniques, we simplify the architecture, making it easier to build and train, and constrain it to generate a valid and reversible decomposition. We rediscuss and augment the set of quantitative metrics so as to account for the more challenging recovery of non-scale-invariant quantities. We train and demonstrate our architecture on the publicly available MPI Sintel dataset and its intrinsic image decomposition, show attenuated overfitting issues and discuss generalizability to other data. Results show that our work outperforms state-of-the-art deep algorithms both qualitatively and quantitatively. |
Tasks | Intrinsic Image Decomposition |
Published | 2016-12-23 |
URL | http://arxiv.org/abs/1612.07899v2 |
http://arxiv.org/pdf/1612.07899v2.pdf | |
PWC | https://paperswithcode.com/paper/darn-a-deep-adversial-residual-network-for |
Repo | |
Framework | |
Discussion on Mechanical Learning and Learning Machine
Title | Discussion on Mechanical Learning and Learning Machine |
Authors | Chuyu Xiong |
Abstract | Mechanical learning is a computing system that is based on a set of simple and fixed rules and can learn from incoming data. A learning machine is a system that realizes mechanical learning. Importantly, we emphasize that it is based on a set of simple and fixed rules, in contrast to what is often called machine learning, which is sophisticated software based on very complicated mathematical theory and often needs human intervention for fine-tuning and manual adjustments. Here, we discuss some basic facts and principles of such a system and try to lay down a framework for further study. We propose two directions for approaching mechanical learning, much like the Church-Turing pair: one tries to realize a learning machine, the other tries to describe mechanical learning well. |
Tasks | |
Published | 2016-01-31 |
URL | http://arxiv.org/abs/1602.00198v1 |
http://arxiv.org/pdf/1602.00198v1.pdf | |
PWC | https://paperswithcode.com/paper/discussion-on-mechanical-learning-and |
Repo | |
Framework | |
ABC random forests for Bayesian parameter inference
Title | ABC random forests for Bayesian parameter inference |
Authors | Louis Raynal, Jean-Michel Marin, Pierre Pudlo, Mathieu Ribatet, Christian P. Robert, Arnaud Estoup |
Abstract | This preprint has been reviewed and recommended by Peer Community In Evolutionary Biology (http://dx.doi.org/10.24072/pci.evolbiol.100036). Approximate Bayesian computation (ABC) has grown into a standard methodology that manages Bayesian inference for models associated with intractable likelihood functions. Most ABC implementations require the preliminary selection of a vector of informative statistics summarizing raw data. Furthermore, in almost all existing implementations, the tolerance level that separates acceptance from rejection of simulated parameter values needs to be calibrated. We propose to conduct likelihood-free Bayesian inferences about parameters with no prior selection of the relevant components of the summary statistics and bypassing the derivation of the associated tolerance level. The approach relies on the random forest methodology of Breiman (2001) applied in a (nonparametric) regression setting. We advocate the derivation of a new random forest for each component of the parameter vector of interest. When compared with earlier ABC solutions, this method offers significant gains in terms of robustness to the choice of the summary statistics, does not depend on any type of tolerance level, and is a good trade-off in terms of point estimator precision and credible interval estimation for a given computing time. We illustrate the performance of our methodological proposal and compare it with earlier ABC methods on a Normal toy example and a population genetics example dealing with human population evolution. All methods designed here have been incorporated in the R package abcrf (version 1.7) available on CRAN. |
Tasks | Bayesian Inference |
Published | 2016-05-18 |
URL | http://arxiv.org/abs/1605.05537v5 |
http://arxiv.org/pdf/1605.05537v5.pdf | |
PWC | https://paperswithcode.com/paper/abc-random-forests-for-bayesian-parameter |
Repo | |
Framework | |
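The reference implementation for the ABC random forest method above is the R package abcrf. A hypothetical Python analogue of the core idea, regressing each parameter component on simulated summary statistics with a random forest and then predicting at the observed summaries, might look like this sketch using scikit-learn (the toy model and all names here are illustrative, not from the paper).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def simulate(theta, n=20):
    """Toy model: n Gaussian draws with mean theta; summaries = (mean, var)."""
    x = rng.normal(theta, 1.0, size=n)
    return np.array([x.mean(), x.var()])

# reference table: draw parameters from the prior, simulate summaries
thetas = rng.uniform(0.0, 10.0, size=2000)
summaries = np.array([simulate(t) for t in thetas])

# one forest per parameter component, regressing theta on the summaries;
# no tolerance level and no manual selection of informative statistics
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(summaries, thetas)

obs = simulate(4.0)                       # stand-in for the observed data
theta_hat = rf.predict(obs[None, :])[0]   # likelihood-free point estimate
```

The forest implicitly down-weights uninformative summaries (here the sample variance carries no information about the mean), which is the robustness-to-summary-choice property the abstract highlights.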
Lattice Structure of Variable Precision Rough Sets
Title | Lattice Structure of Variable Precision Rough Sets |
Authors | Sumita Basu |
Abstract | The main purpose of this paper is to study the lattice structure of variable precision rough sets. The notion of variation in precision of rough sets has been further extended to a variable precision rough set with variable classification error, and its algebraic properties are also studied. |
Tasks | |
Published | 2016-07-06 |
URL | http://arxiv.org/abs/1607.01634v1 |
http://arxiv.org/pdf/1607.01634v1.pdf | |
PWC | https://paperswithcode.com/paper/lattice-structure-of-variable-precision-rough |
Repo | |
Framework | |
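The variable precision rough sets studied above can be sketched directly from Ziarko's majority-inclusion definitions; the convention below (a classification-error threshold beta in [0, 0.5)) is one common formulation and may differ from the paper's notation.

```python
def vprs_approximations(partition, X, beta):
    """Variable precision rough set sketch: an equivalence class E enters
    the beta-lower approximation when its relative classification error
    1 - |E∩X|/|E| is at most beta, and the beta-upper approximation when
    that error is strictly below 1 - beta."""
    lower, upper = set(), set()
    for E in partition:
        inclusion = len(E & X) / len(E)   # degree of inclusion of E in X
        if 1.0 - inclusion <= beta:
            lower |= E
        if inclusion > beta:
            upper |= E
    return lower, upper

# toy universe {1..6} partitioned into three equivalence classes
partition = [{1, 2}, {3, 4}, {5, 6}]
X = {1, 2, 3}
lower, upper = vprs_approximations(partition, X, beta=0.4)
```

At beta = 0 this reduces to classical rough sets (lower approximation requires E ⊆ X, upper requires E ∩ X nonempty); raising beta tolerates the stated fraction of misclassified elements, which is the "variable precision" in the name.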