October 19, 2019

3097 words 15 mins read

Paper Group ANR 109


Syntree2Vec - An algorithm to augment syntactic hierarchy into word embeddings

Title Syntree2Vec - An algorithm to augment syntactic hierarchy into word embeddings
Authors Shubham Bhardwaj
Abstract Word embeddings aim to map the sense of words into a lower-dimensional vector space in order to reason over them. Training embeddings on domain-specific data helps express concepts more relevant to their use case, but comes at a cost in accuracy when data is scarce. Our effort is to minimise this cost by infusing syntactic knowledge into the embeddings. We propose a graph-based embedding algorithm inspired by node2vec. Experimental results show that our algorithm improves syntactic strength and gives robust performance on meagre data.
Tasks Word Embeddings
Published 2018-08-14
URL http://arxiv.org/abs/1808.05907v1
PDF http://arxiv.org/pdf/1808.05907v1.pdf
PWC https://paperswithcode.com/paper/syntree2vec-an-algorithm-to-augment-syntactic
Repo
Framework
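
The entry above describes a node2vec-inspired graph embedding over syntactic structure. As a rough sketch of that general recipe (not the authors' Syntree2Vec implementation), one can generate random walks over a small dependency graph and train a skip-gram model on them; the toy sentence, graph construction, and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch: uniform random walks over a syntactic graph fed to skip-gram.
# Illustrates the node2vec-style recipe only; the toy graph is hypothetical.
import random
import networkx as nx
from gensim.models import Word2Vec

# Toy dependency edges (head, dependent) for "the cat sat on the mat".
edges = [("sat", "cat"), ("cat", "the"), ("sat", "on"), ("on", "mat"), ("mat", "the")]
G = nx.Graph(edges)

def random_walk(graph, start, length=10):
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return walk

# Walks act as "sentences" whose co-occurrence reflects syntactic proximity.
walks = [random_walk(G, node) for node in G.nodes for _ in range(20)]
model = Word2Vec(walks, vector_size=32, window=3, min_count=1, sg=1, epochs=10)
print(model.wv.most_similar("cat"))
```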

Feature Selection Approach with Missing Values Conducted for Statistical Learning: A Case Study of Entrepreneurship Survival Dataset

Title Feature Selection Approach with Missing Values Conducted for Statistical Learning: A Case Study of Entrepreneurship Survival Dataset
Authors Diego Nascimento, Anderson Ara, Francisco Louzada Neto
Abstract In this article, we investigate the features that best discriminate survival of micro and small enterprises (MSE) using a data mining approach with feature selection. Given the complexity of the data set, we propose a comparison of three data imputation methods, namely mean imputation (MI), k-nearest neighbours (KNN) and expectation maximisation (EM), each combined with variable selection via the t-test, followed by a data mining process using four classification methods, logistic regression, naive Bayes, linear discriminant analysis and support vector machines, and a comparison of their respective performances. The experimental results contribute to a model for predicting MSE survival, providing a better understanding of the topic, which matters since MSEs are a significant part of Brazilian GDP and the macroeconomy.
Tasks Feature Selection, Imputation
Published 2018-10-02
URL http://arxiv.org/abs/1810.01061v1
PDF http://arxiv.org/pdf/1810.01061v1.pdf
PWC https://paperswithcode.com/paper/feature-selection-approach-with-missing
Repo
Framework
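
As a minimal sketch of the pipeline the abstract outlines, the snippet below compares the three imputation strategies on synthetic data, with t-test-based selection and a logistic regression classifier. Two assumptions of mine: scikit-learn's IterativeImputer stands in for EM imputation, and f_classif (the ANOVA F-test, equivalent to a t-test for two classes) stands in for the t-test step.

```python
# Impute -> select features -> classify, for each imputation method.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # synthetic "survival" label
X[rng.random(X.shape) < 0.1] = np.nan        # inject 10% missing values

imputers = {
    "mean": SimpleImputer(strategy="mean"),
    "knn": KNNImputer(n_neighbors=5),
    "em-like": IterativeImputer(random_state=0),  # EM stand-in (assumption)
}
for name, imp in imputers.items():
    pipe = make_pipeline(imp, SelectKBest(f_classif, k=5), LogisticRegression())
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: accuracy = {score:.3f}")
```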

Translating MFM into FOL: towards plant operation planning

Title Translating MFM into FOL: towards plant operation planning
Authors Shota Motoura, Kazeto Yamamoto, Shumpei Kubosawa, Takashi Onishi
Abstract This paper proposes a method to translate multilevel flow modeling (MFM) into a first-order language (FOL), which enables the utilisation of logical techniques, such as inference engines and abductive reasoners. An example of this is a planning task for a toy plant that can be solved in FOL using abduction. In addition, owing to the expressivity of FOL, the language is capable of describing actions and their preconditions. This allows the derivation of procedures consisting of multiple actions.
Tasks
Published 2018-06-19
URL http://arxiv.org/abs/1806.07037v1
PDF http://arxiv.org/pdf/1806.07037v1.pdf
PWC https://paperswithcode.com/paper/translating-mfm-into-fol-towards-plant
Repo
Framework
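
To fix intuition for "planning as abduction in a first-order setting", here is a toy propositional abduction sketch. The plant model, predicate names, and rules are invented for illustration and bear no relation to the paper's actual MFM translation.

```python
# Toy abduction over Horn clauses: which assumable facts (actions) are
# sufficient to derive a goal? All rules and names are hypothetical.
RULES = {
    # head: body atoms ("head holds if all body atoms hold")
    "flow(tank, pipe)": ["open(valve1)", "pump_on(pump1)"],
    "pump_on(pump1)": ["power_on"],
}
ASSUMABLES = {"open(valve1)", "power_on"}  # candidate actions/hypotheses

def abduce(goal, assumed=frozenset()):
    """Return a set of assumable atoms sufficient to derive the goal."""
    if goal in assumed or goal in ASSUMABLES:
        return assumed | {goal}
    if goal not in RULES:
        return None  # neither derivable nor assumable
    for atom in RULES[goal]:
        assumed = abduce(atom, assumed)
        if assumed is None:
            return None
    return assumed

print(abduce("flow(tank, pipe)"))  # {'open(valve1)', 'power_on'}
```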

Evaluating Creativity in Computational Co-Creative Systems

Title Evaluating Creativity in Computational Co-Creative Systems
Authors Pegah Karimi, Kazjon Grace, Mary Lou Maher, Nicholas Davis
Abstract This paper provides a framework for evaluating creativity in co-creative systems: those that involve computer programs collaborating with human users on creative tasks. We situate co-creative systems within a broader context of computational creativity and explain the unique qualities of these systems. We present four main questions that can guide evaluation in co-creative systems: who is evaluating the creativity, what is being evaluated, when the evaluation occurs, and how the evaluation is performed. These questions provide a framework for comparing how existing co-creative systems evaluate creativity, and we apply them to examples of co-creative systems in art, humor, games and robotics. We conclude that existing co-creative systems tend to focus on evaluating the user experience. Adopting evaluation methods from autonomous creative systems may lead to co-creative systems that are self-aware and intentional.
Tasks
Published 2018-07-25
URL http://arxiv.org/abs/1807.09886v1
PDF http://arxiv.org/pdf/1807.09886v1.pdf
PWC https://paperswithcode.com/paper/evaluating-creativity-in-computational-co
Repo
Framework

The EcoLexicon Semantic Sketch Grammar: from Knowledge Patterns to Word Sketches

Title The EcoLexicon Semantic Sketch Grammar: from Knowledge Patterns to Word Sketches
Authors P. León-Araúz, A. San Martín
Abstract Many projects have applied knowledge patterns (KPs) to the retrieval of specialized information. Yet terminologists still rely on manual analysis of concordance lines to extract semantic information, since there are no user-friendly publicly available applications enabling them to find knowledge rich contexts (KRCs). To fill this void, we have created the KP-based EcoLexicon Semantic Sketch Grammar (ESSG) in the well-known corpus query system Sketch Engine. For the first time, the ESSG is now publicly available in Sketch Engine to query the EcoLexicon English Corpus. Additionally, reusing the ESSG in any English corpus uploaded by the user enables Sketch Engine to extract KRCs codifying generic-specific, part-whole, location, cause and function relations, because most of the KPs are domain-independent. The information is displayed in the form of summary lists (word sketches) containing the pairs of terms linked by a given semantic relation. This paper describes the process of building a KP-based sketch grammar with special focus on the last stage, namely, the evaluation with refinement purposes. We conducted an initial shallow precision and recall evaluation of the 64 English sketch grammar rules created so far for hyponymy, meronymy and causality. Precision was measured based on a random sample of concordances extracted from each word sketch type. Recall was assessed based on a random sample of concordances where known term pairs are found. The results are necessary for the improvement and refinement of the ESSG. The noise of false positives helped to further specify the rules, whereas the silence of false negatives allowed us to find useful new patterns.
Tasks
Published 2018-04-15
URL http://arxiv.org/abs/1804.05294v1
PDF http://arxiv.org/pdf/1804.05294v1.pdf
PWC https://paperswithcode.com/paper/the-ecolexicon-semantic-sketch-grammar-from
Repo
Framework
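
The ESSG itself is a set of Sketch Engine corpus-query (CQL) rules, which are not reproduced here. As a loose illustration of what a knowledge pattern is, the sketch below matches a classic "X such as Y" generic-specific pattern with a plain regular expression over raw text; the example sentence is invented.

```python
# Illustrative knowledge-pattern matching with a plain regex.
# Real KP rules operate over POS-tagged corpora in CQL, not raw strings.
import re

text = "Greenhouse gases such as carbon dioxide and methane trap heat."

# "X such as Y (and Z)" -> Y, Z are hyponyms (specifics) of X (generic)
such_as = re.compile(r"(\w+(?: \w+)?) such as (\w+(?: \w+)?)(?: and (\w+))?")
for m in such_as.finditer(text):
    generic = m.group(1)
    for specific in filter(None, m.groups()[1:]):
        print(f"{specific}  IS-A  {generic}")
```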

Fast Adaptive Bilateral Filtering

Title Fast Adaptive Bilateral Filtering
Authors Ruturaj G. Gavaskar, Kunal N. Chaudhury
Abstract In the classical bilateral filter, a fixed Gaussian range kernel is used along with a spatial kernel for edge-preserving smoothing. We consider a generalization of this filter, the so-called adaptive bilateral filter, where the center and width of the Gaussian range kernel is allowed to change from pixel to pixel. Though this variant was originally proposed for sharpening and noise removal, it can also be used for other applications such as artifact removal and texture filtering. Similar to the bilateral filter, the brute-force implementation of its adaptive counterpart requires intense computations. While several fast algorithms have been proposed in the literature for bilateral filtering, most of them work only with a fixed range kernel. In this paper, we propose a fast algorithm for adaptive bilateral filtering, whose complexity does not scale with the spatial filter width. This is based on the observation that the concerned filtering can be performed purely in range space using an appropriately defined local histogram. We show that by replacing the histogram with a polynomial and the finite range-space sum with an integral, we can approximate the filter using analytic functions. In particular, an efficient algorithm is derived using the following innovations: the polynomial is fitted by matching its moments to those of the target histogram (this is done using fast convolutions), and the analytic functions are recursively computed using integration-by-parts. Our algorithm can accelerate the brute-force implementation by at least $20 \times$, without perceptible distortions in the visual quality. We demonstrate the effectiveness of our algorithm for sharpening, JPEG deblocking, and texture filtering.
Tasks
Published 2018-11-06
URL http://arxiv.org/abs/1811.02308v1
PDF http://arxiv.org/pdf/1811.02308v1.pdf
PWC https://paperswithcode.com/paper/fast-adaptive-bilateral-filtering
Repo
Framework
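
For reference, here is a brute-force NumPy sketch of the filter being accelerated: a fixed Gaussian spatial kernel combined with a Gaussian range kernel whose center theta and width sigma_r vary per pixel. This is the slow baseline the abstract mentions, not the paper's fast histogram/polynomial approximation.

```python
# Brute-force adaptive bilateral filter (the baseline the paper speeds up).
import numpy as np

def adaptive_bilateral(img, theta, sigma_r, sigma_s=2.0, radius=5):
    h, w = img.shape
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))  # fixed spatial kernel
    pad = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel with per-pixel center theta and width sigma_r.
            rng_k = np.exp(-(patch - theta[i, j])**2 / (2 * sigma_r[i, j]**2))
            weights = spatial * rng_k
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

img = np.random.rand(32, 32)
# theta = img recovers the classical (non-sharpening) bilateral filter.
out = adaptive_bilateral(img, theta=img, sigma_r=np.full(img.shape, 0.1))
```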

SpaceNet: A Remote Sensing Dataset and Challenge Series

Title SpaceNet: A Remote Sensing Dataset and Challenge Series
Authors Adam Van Etten, Dave Lindenbaum, Todd M. Bacastow
Abstract Foundational mapping remains a challenge in many parts of the world, particularly in dynamic scenarios such as natural disasters when timely updates are critical. Updating maps is currently a highly manual process requiring a large number of human labelers to either create features or rigorously validate automated outputs. We propose that the frequent revisits of earth imaging satellite constellations may accelerate existing efforts to quickly update foundational maps when combined with advanced machine learning techniques. Accordingly, the SpaceNet partners (CosmiQ Works, Radiant Solutions, and NVIDIA) released a large corpus of labeled satellite imagery on Amazon Web Services (AWS) called SpaceNet. The SpaceNet partners also launched a series of public prize competitions to encourage improvement of remote sensing machine learning algorithms. The first two of these competitions focused on automated building footprint extraction, and the most recent challenge focused on road network extraction. In this paper we discuss the SpaceNet imagery, labels, evaluation metrics, prize challenge results to date, and future plans for the SpaceNet challenge series.
Tasks
Published 2018-07-03
URL https://arxiv.org/abs/1807.01232v3
PDF https://arxiv.org/pdf/1807.01232v3.pdf
PWC https://paperswithcode.com/paper/spacenet-a-remote-sensing-dataset-and
Repo
Framework
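
The building-footprint challenges score proposals with an F1 measure over IoU-matched polygons (matching at IoU >= 0.5, to the best of my understanding). The sketch below is a simplified greedy version of such a metric using shapely, not the official SpaceNet evaluation tooling.

```python
# Simplified F1-at-IoU>=0.5 scoring for footprint proposals (illustrative).
from shapely.geometry import Polygon

def iou(a, b):
    inter = a.intersection(b).area
    return inter / (a.area + b.area - inter)

def f1_score(proposals, truths, thresh=0.5):
    matched, used = 0, set()
    for p in proposals:  # greedy one-to-one matching against ground truth
        best = max(((i, iou(p, t)) for i, t in enumerate(truths) if i not in used),
                   key=lambda x: x[1], default=(None, 0.0))
        if best[1] >= thresh:
            used.add(best[0])
            matched += 1
    precision = matched / len(proposals) if proposals else 0.0
    recall = matched / len(truths) if truths else 0.0
    return 2 * precision * recall / (precision + recall) if matched else 0.0

gt = [Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])]
pred = [Polygon([(0.1, 0), (1, 0), (1, 1), (0.1, 1)])]
print(f1_score(pred, gt))  # IoU = 0.9 -> matched -> F1 = 1.0
```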

Towards Differentially Private Truth Discovery for Crowd Sensing Systems

Title Towards Differentially Private Truth Discovery for Crowd Sensing Systems
Authors Yaliang Li, Houping Xiao, Zhan Qin, Chenglin Miao, Lu Su, Jing Gao, Kui Ren, Bolin Ding
Abstract Nowadays, crowd sensing is becoming increasingly popular due to the ubiquitous usage of mobile devices. However, the quality of such human-generated sensory data varies significantly among different users. To better utilize sensory data, the problem of truth discovery, whose goal is to estimate user quality and infer reliable aggregated results through quality-aware data aggregation, has emerged as a hot topic. Although the existing truth discovery approaches can provide reliable aggregated results, they fail to protect the private information of individual users. Moreover, crowd sensing systems typically involve a large number of participants, making encryption or secure multi-party computation based solutions difficult to deploy. To address these challenges, in this paper, we propose an efficient privacy-preserving truth discovery mechanism with theoretical guarantees of both utility and privacy. The key idea of the proposed mechanism is to perturb data from each user independently and then conduct weighted aggregation among users’ perturbed data. The proposed approach is able to assign user weights based on information quality, and thus the aggregated results will not deviate much from the true results even when large noise is added. We adapt the definition of local differential privacy to this privacy-preserving task and demonstrate that the proposed mechanism can satisfy local differential privacy while preserving high aggregation accuracy. We formally quantify the utility-privacy trade-off and further verify the claim by experiments on both synthetic data and a real-world crowd sensing system.
Tasks
Published 2018-10-10
URL http://arxiv.org/abs/1810.04760v1
PDF http://arxiv.org/pdf/1810.04760v1.pdf
PWC https://paperswithcode.com/paper/towards-differentially-private-truth
Repo
Framework
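
A minimal sketch of the two ingredients the abstract combines, under assumptions of my own: each user perturbs a scalar reading with Laplace noise (the standard local-DP mechanism), and the server runs a CRH-style iterative truth discovery that up-weights users whose data sit close to the current estimate. The paper's exact mechanism and weight update may differ.

```python
# Locally private truth discovery: Laplace perturbation + weighted aggregation.
import numpy as np

rng = np.random.default_rng(1)
true_value = 25.0
quality = rng.uniform(0.5, 3.0, size=50)           # per-user noise scale
readings = true_value + rng.normal(0, quality)     # raw sensory data

epsilon, sensitivity = 1.0, 10.0                   # local DP parameters
perturbed = readings + rng.laplace(0, sensitivity / epsilon, size=50)

estimate = perturbed.mean()
for _ in range(10):
    # CRH-style update: users closer to the estimate get larger weights.
    dist = (perturbed - estimate) ** 2 + 1e-12
    weights = np.log(dist.sum() / dist)
    estimate = np.average(perturbed, weights=weights)
print(f"estimate = {estimate:.2f} (truth = {true_value})")
```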

Do Better ImageNet Models Transfer Better… for Image Recommendation?

Title Do Better ImageNet Models Transfer Better… for Image Recommendation?
Authors Felipe del Rio, Pablo Messina, Vicente Dominguez, Denis Parra
Abstract Visual embeddings from Convolutional Neural Networks (CNN) trained on the ImageNet dataset for the ILSVRC challenge have shown consistently good performance for transfer learning and are widely used in several tasks, including image recommendation. However, some important questions have not yet been answered in order to use these embeddings for a larger scope of recommendation domains: a) Are CNNs that perform better on ImageNet also better for transfer learning in content-based image recommendation? b) Does fine-tuning help to improve performance? and c) Which is the best way to perform the fine-tuning? In this paper we compare several CNN models pre-trained with ImageNet to evaluate their transfer learning performance on an artwork image recommendation task. Our results indicate that models with better performance in the ImageNet challenge do not always imply better transfer learning for recommendation tasks (e.g. NASNet vs. ResNet). Our results also show that fine-tuning can be helpful even with a small dataset, but not every fine-tuning works. Our results can inform other researchers and practitioners on how to train their CNNs for better transfer learning towards image recommendation systems.
Tasks Recommendation Systems, Transfer Learning
Published 2018-07-25
URL http://arxiv.org/abs/1807.09870v3
PDF http://arxiv.org/pdf/1807.09870v3.pdf
PWC https://paperswithcode.com/paper/do-better-imagenet-models-transfer-better-for
Repo
Framework
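
A hedged sketch of the basic transfer-learning setup being evaluated: take an ImageNet-pretrained CNN from torchvision, drop the classifier head, and use the pooled features as item embeddings for similarity-based recommendation. File paths are placeholders, and the fine-tuning variants the paper compares are not shown.

```python
# Pretrained CNN as a feature extractor for image recommendation.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()   # keep the 2048-d pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return torch.nn.functional.normalize(backbone(x), dim=1)

# Rank candidates for a query by cosine similarity (paths are placeholders):
# score = (embed("query.jpg") @ embed("candidate.jpg").T).item()
```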

A Bayesian Additive Model for Understanding Public Transport Usage in Special Events

Title A Bayesian Additive Model for Understanding Public Transport Usage in Special Events
Authors Filipe Rodrigues, Stanislav S. Borysov, Bernardete Ribeiro, Francisco C. Pereira
Abstract Public special events, like sports games, concerts and festivals are well known to create disruptions in transportation systems, often catching the operators by surprise. Although these are usually planned well in advance, their impact is difficult to predict, even when organisers and transportation operators coordinate. The problem becomes considerably harder when several events happen concurrently. To solve these problems, costly processes, heavily reliant on manual search and personal experience, are usual practice in large cities like Singapore, London or Tokyo. This paper presents a Bayesian additive model with Gaussian process components that combines smart card records from public transport with context information about events that is continuously mined from the Web. We develop an efficient approximate inference algorithm using expectation propagation, which allows us to predict the total number of public transportation trips to the special event areas, thereby contributing to a more adaptive transportation system. Furthermore, for multiple concurrent event scenarios, the proposed algorithm is able to disaggregate gross trip counts into their most likely components related to specific events and routine behavior. Using real data from Singapore, we show that the presented model outperforms the best baseline model by up to 26% in R^2 and also has explanatory power for its individual components.
Tasks
Published 2018-12-20
URL http://arxiv.org/abs/1812.08755v1
PDF http://arxiv.org/pdf/1812.08755v1.pdf
PWC https://paperswithcode.com/paper/a-bayesian-additive-model-for-understanding
Repo
Framework
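
A simplified stand-in for the additive idea: model total trips as a periodic "routine" GP component plus a smooth "event" component, fit them jointly, then disaggregate the posterior mean per component. The paper uses expectation propagation with web-mined event features; this scikit-learn sketch only mirrors the additive decomposition on synthetic data.

```python
# Additive GP on synthetic trip counts, then per-component disaggregation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

t = np.linspace(0, 14, 200)[:, None]                    # two weeks, in days
routine = 100 + 30 * np.sin(2 * np.pi * t.ravel())      # daily cycle
event = 80 * np.exp(-((t.ravel() - 10.0) ** 2))         # surge around day 10
y = routine + event + np.random.default_rng(0).normal(0, 5, t.shape[0])

kernel = ExpSineSquared(periodicity=1.0) + RBF(length_scale=0.5) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, random_state=0)
gp.fit(t, y - y.mean())                                 # center the counts

# Disaggregate the posterior mean per additive component: mean_c = K_c @ alpha.
k_routine, k_event = gp.kernel_.k1.k1, gp.kernel_.k1.k2
routine_hat = k_routine(t, gp.X_train_) @ gp.alpha_ + y.mean()
event_hat = k_event(t, gp.X_train_) @ gp.alpha_
print(event_hat.max())  # should sit near the injected surge of ~80
```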

A Resourceful Reframing of Behavior Trees

Title A Resourceful Reframing of Behavior Trees
Authors Chris Martens, Eric Butler, Joseph C. Osborn
Abstract Designers of autonomous agents, whether in physical or virtual environments, need to express nondeterminism, failure, and parallelism in behaviors, as well as account for synchronous coordination between agents. Behavior Trees are a semi-formalism deployed widely for this purpose in the games industry, but one that poses challenges for scalability, reasoning, and reuse of common sub-behaviors. We present an alternative formulation of behavior trees from a language design perspective, giving a formal operational semantics, type system, and corresponding implementation. We express specifications for atomic behaviors as linear logic formulas describing how they transform the environment, and our type system uses linear sequent calculus to derive a compositional type assignment to behavior tree expressions. These types expose the conditions required for behaviors to succeed and allow abstraction over parameters to behaviors, enabling the development of behavior “building blocks” amenable to compositional reasoning and reuse.
Tasks
Published 2018-03-24
URL http://arxiv.org/abs/1803.09099v1
PDF http://arxiv.org/pdf/1803.09099v1.pdf
PWC https://paperswithcode.com/paper/a-resourceful-reframing-of-behavior-trees
Repo
Framework
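
A minimal behavior-tree interpreter with the classic Sequence/Selector semantics being reformulated; the linear-logic type system described in the abstract is not modeled here, and the door-opening example is invented to fix intuition for the operational semantics.

```python
# Classic tick semantics: Sequence fails fast, Selector falls back.
SUCCESS, FAILURE = "success", "failure"

class Action:
    def __init__(self, name, effect):
        self.name, self.effect = name, effect
    def tick(self, state):
        return self.effect(state)  # returns SUCCESS or FAILURE

class Sequence:  # succeeds iff every child succeeds, in order
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:  # fallback: first succeeding child wins
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

state = {"has_key": False}
tree = Selector(
    Sequence(Action("use_key", lambda s: SUCCESS if s["has_key"] else FAILURE),
             Action("open_door", lambda s: SUCCESS)),
    Action("break_door", lambda s: SUCCESS),
)
print(tree.tick(state))  # falls back to break_door -> success
```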

Polar Decoding on Sparse Graphs with Deep Learning

Title Polar Decoding on Sparse Graphs with Deep Learning
Authors Weihong Xu, Xiaohu You, Chuan Zhang, Yair Be’ery
Abstract In this paper, we present a sparse neural network decoder (SNND) for polar codes based on belief propagation (BP) and deep learning. First, the conventional factor graph of polar BP decoding is converted to a bipartite Tanner graph similar to that of low-density parity-check (LDPC) codes. Then the Tanner graph is unfolded and translated into the graphical representation of a deep neural network (DNN). The complex sum-product algorithm (SPA) is modified to a low-complexity min-sum (MS) approximation. We dramatically reduce the number of weights by using a single weight to parameterize the network. Optimized with deep learning training techniques, the proposed SNND achieves decoding performance comparable to SPA and obtains about $0.5$ dB gain over MS decoding on ($128,64$) and ($256,128$) codes. Moreover, a 60% complexity reduction is achieved and the decoding latency is significantly lower than that of conventional polar BP.
Tasks
Published 2018-11-24
URL http://arxiv.org/abs/1811.09801v1
PDF http://arxiv.org/pdf/1811.09801v1.pdf
PWC https://paperswithcode.com/paper/polar-decoding-on-sparse-graphs-with-deep
Repo
Framework
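
The single-weight idea can be illustrated with the check-node update of weighted (normalized) min-sum decoding, where one shared scalar scales all outgoing messages. The full unfolded Tanner-graph neural network is not reproduced here, and the weight value below is an arbitrary placeholder for a trained one.

```python
# Weighted min-sum check-node update with a single shared scalar weight.
import numpy as np

def check_node_update(llrs, weight=0.8):
    """Min-sum messages out of one check node, given incoming LLRs."""
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)
        # Sign product times the minimum magnitude of the other edges,
        # scaled by the single weight that training would optimize.
        out[i] = weight * np.prod(np.sign(others)) * np.abs(others).min()
    return out

print(check_node_update([2.5, -1.0, 0.7]))  # [-0.56, 0.56, -0.8]
```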

Domain Generalization with Domain-Specific Aggregation Modules

Title Domain Generalization with Domain-Specific Aggregation Modules
Authors Antonio D’Innocente, Barbara Caputo
Abstract Visual recognition systems are meant to work in the real world. For this to happen, they must work robustly in any visual domain, and not only on the data used during training. Within this context, a very realistic scenario deals with domain generalization, i.e. the ability to build visual recognition algorithms able to work robustly in several visual domains, without having access to any information about target data statistics. This paper contributes to this research thread, proposing a deep architecture that keeps the information about the available source domain data separated while at the same time leveraging generic perceptual information. We achieve this by introducing domain-specific aggregation modules that, through an aggregation layer strategy, are able to merge generic and specific information in an effective manner. Experiments on two different benchmark databases show the power of our approach, reaching the new state of the art in domain generalization.
Tasks Domain Generalization
Published 2018-09-28
URL http://arxiv.org/abs/1809.10966v1
PDF http://arxiv.org/pdf/1809.10966v1.pdf
PWC https://paperswithcode.com/paper/domain-generalization-with-domain-specific
Repo
Framework
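
A hedged PyTorch sketch of the aggregation-module idea: domain-specific layers alongside a shared path, merged by learned aggregation weights. The layer shapes and the softmax merge rule are my own illustrative assumptions, not the paper's exact architecture.

```python
# Per-domain branches merged by learned aggregation weights (illustrative).
import torch
import torch.nn as nn

class DomainAggregation(nn.Module):
    def __init__(self, in_dim, out_dim, n_domains):
        super().__init__()
        self.shared = nn.Linear(in_dim, out_dim)          # generic information
        self.specific = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(n_domains))
        self.mix = nn.Parameter(torch.zeros(n_domains))   # aggregation weights

    def forward(self, x):
        # Merge domain-specific outputs with softmax weights, add generic path.
        w = torch.softmax(self.mix, dim=0)
        agg = sum(w[d] * m(x) for d, m in enumerate(self.specific))
        return self.shared(x) + agg

x = torch.randn(8, 512)                 # e.g. backbone features
head = DomainAggregation(512, 128, n_domains=3)
print(head(x).shape)                    # torch.Size([8, 128])
```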

A theory of the phenomenology of multipopulation genetic algorithm with an application to the Ising model

Title A theory of the phenomenology of multipopulation genetic algorithm with an application to the Ising model
Authors Bruno Messias, Bruno W. D. Morais
Abstract A genetic algorithm (GA) is a stochastic metaheuristic process consisting of the evolution of a population of candidate solutions for a given optimization problem. By extension, a multipopulation genetic algorithm (MPGA) aims for efficiency by evolving many populations, or islands, in parallel and performing migrations between them periodically. The connectivity between islands constrains the directions of migration and characterizes MPGA as a dynamic process over a network. As such, predicting the evolution of the quality of the solutions is a difficult challenge, and inadequate parameters result in wasted computational resources and energy. Using models derived from statistical mechanics, this work aims to derive equations for studying the dynamics of MPGA in relation to its connectivity. To illustrate the importance of understanding MPGA, we show its application as an efficient alternative to the thermalization phase of the Metropolis-Hastings algorithm applied to the Ising model.
Tasks
Published 2018-03-25
URL http://arxiv.org/abs/1803.09254v3
PDF http://arxiv.org/pdf/1803.09254v3.pdf
PWC https://paperswithcode.com/paper/a-theory-of-the-phenomenology-of
Repo
Framework
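
A minimal island-model sketch: several populations evolve spin configurations in parallel and periodically migrate their best individuals over a ring, with the energy of a 1D Ising chain as the fitness to minimize. Population sizes, rates, and topology are illustrative; the paper's statistical-mechanics analysis is not reproduced.

```python
# Island-model GA minimizing 1D Ising energy, with ring migration.
import numpy as np

rng = np.random.default_rng(0)
N_ISLANDS, POP, L = 4, 30, 64

def energy(s):  # periodic 1D Ising chain: E = -sum_i s_i * s_{i+1}
    return -np.sum(s * np.roll(s, 1))

islands = [rng.choice([-1, 1], size=(POP, L)) for _ in range(N_ISLANDS)]
for gen in range(200):
    for k, pop in enumerate(islands):
        fit = np.array([energy(ind) for ind in pop])
        parents = pop[np.argsort(fit)[:POP // 2]]         # truncation selection
        children = parents.copy()
        flip = rng.random(children.shape) < 0.02          # bit-flip mutation
        children[flip] *= -1
        islands[k] = np.vstack([parents, children])
    if gen % 25 == 0:  # ring migration: best individual moves to next island
        bests = [isl[np.argmin([energy(i) for i in isl])] for isl in islands]
        for k in range(N_ISLANDS):
            islands[k][-1] = bests[(k - 1) % N_ISLANDS]

print(min(energy(i) for isl in islands for i in isl))  # ground state = -64
```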

Measures of Tractography Convergence

Title Measures of Tractography Convergence
Authors Daniel Moyer, Paul M. Thompson, Greg Ver Steeg
Abstract In the present work, we use information theory to understand the empirical convergence rate of tractography, a widely-used approach to reconstruct anatomical fiber pathways in the living brain. Based on diffusion MRI data, tractography is the starting point for many methods to study brain connectivity. Of the available methods to perform tractography, most reconstruct a finite set of streamlines, or 3D curves, representing probable connections between anatomical regions, yet relatively little is known about how the sampling of this set of streamlines affects downstream results, and how exhaustive the sampling should be. Here we provide a method to measure the information theoretic surprise (self-cross entropy) for tract sampling schema. We then empirically assess four streamline methods. We demonstrate that the relative information gain is very low after a moderate number of streamlines have been generated for each tested method. The results give rise to several guidelines for optimal sampling in brain connectivity analyses.
Tasks
Published 2018-06-12
URL http://arxiv.org/abs/1806.04634v1
PDF http://arxiv.org/pdf/1806.04634v1.pdf
PWC https://paperswithcode.com/paper/measures-of-tractography-convergence
Repo
Framework
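
To make the convergence measure concrete, the sketch below discretizes streamlines into voxel-visitation histograms and tracks the cross-entropy of each fresh batch against the distribution accumulated so far; the gain flattens as sampling converges. Random 3D walks stand in for real tractography streamlines, and this is only loosely modeled on the paper's self-cross-entropy measure.

```python
# Cross-entropy of each new streamline batch vs. the accumulated distribution.
import numpy as np

rng = np.random.default_rng(0)

def fake_streamline(n=50):  # random 3D walk standing in for a streamline
    return np.cumsum(rng.normal(size=(n, 3)), axis=0)

def voxel_counts(lines, bins=16, lim=15):
    pts = np.vstack(lines)
    hist, _ = np.histogramdd(pts, bins=bins, range=[(-lim, lim)] * 3)
    return hist.ravel()

accum = np.zeros(16 ** 3)
for batch in range(5):
    counts = voxel_counts([fake_streamline() for _ in range(200)])
    p = (counts + 1e-9) / (counts + 1e-9).sum()          # new batch
    accum += counts
    q = (accum + 1e-9) / (accum + 1e-9).sum()            # accumulated so far
    cross_entropy = -(p * np.log(q)).sum()
    print(f"batch {batch}: cross-entropy = {cross_entropy:.3f}")
```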