Paper Group ANR 1349
Direct Mappings between RDF and Property Graph Databases. Simulating Evolution on Fitness Landscapes represented by Valued Constraint Satisfaction Problems. Experimental Evidence for Asymptotic Non-Optimality of Comb Adversary Strategy. Context is Key: Grammatical Error Detection with Contextual Word Representations. Inverse boosting pruning trees …
Direct Mappings between RDF and Property Graph Databases
Title | Direct Mappings between RDF and Property Graph Databases |
Authors | Harsh Thakkar, Renzo Angles, Dominik Tomaszuk, Jens Lehmann |
Abstract | Resource Description Framework (RDF) triplestores and Property Graph (PG) database systems are two approaches for data management that are based on modeling, storing and querying graph-like data. Given the heterogeneity between these systems, it becomes necessary to develop methods to allow interoperability among them. While there exist some approaches to exchange data and schema between RDF and PG databases, they lack compatibility and even a solid formal foundation. In this paper, we study the semantic interoperability between RDF and PG databases. Specifically, we present two direct mappings (schema-dependent and schema-independent) for transforming an RDF database into a PG database. We show that the proposed mappings possess the fundamental properties of semantics preservation and information preservation. The existence of both mappings allows us to conclude that the PG data model subsumes the expressiveness or information capacity of the RDF data model. |
Tasks | |
Published | 2019-12-04 |
URL | https://arxiv.org/abs/1912.02127v1 |
https://arxiv.org/pdf/1912.02127v1.pdf | |
PWC | https://paperswithcode.com/paper/direct-mappings-between-rdf-and-property |
Repo | |
Framework | |
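The abstract above describes two direct mappings from RDF to property graphs. Below is a minimal sketch of the schema-independent idea, assuming a toy triple list and illustrative names (`rdf_triples`, `node_id`); the paper's actual mappings are defined formally and also cover the schema-dependent case.

```python
# A minimal sketch (not the paper's exact construction) of a schema-independent
# direct mapping: every RDF resource becomes a PG node, every triple either an
# edge (IRI object) or a node property (literal object).
from collections import defaultdict

rdf_triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),   # literal object
]

nodes = {}                       # resource -> node id
edges = []                       # (src_id, label, dst_id)
properties = defaultdict(dict)   # node id -> {key: literal}

def node_id(resource):
    return nodes.setdefault(resource, len(nodes))

for s, p, o in rdf_triples:
    if o.startswith('"'):        # literal: store as a node property
        properties[node_id(s)][p] = o.strip('"')
    else:                        # IRI: materialize an edge
        edges.append((node_id(s), p, node_id(o)))

print(nodes)              # {'ex:alice': 0, 'ex:bob': 1}
print(edges)              # [(0, 'foaf:knows', 1)]
print(dict(properties))   # {0: {'foaf:name': 'Alice'}}
```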
Simulating Evolution on Fitness Landscapes represented by Valued Constraint Satisfaction Problems
Title | Simulating Evolution on Fitness Landscapes represented by Valued Constraint Satisfaction Problems |
Authors | Alexandru Strimbu |
Abstract | Recent theoretical research proposes that computational complexity can be seen as an ultimate constraint that allows for open-ended biological evolution on finite static fitness landscapes. Whereas on easy fitness landscapes evolution quickly converges to a local fitness peak, on hard fitness landscapes this computational constraint prevents evolution from reaching any local fitness peak in polynomial time. Valued constraint satisfaction problems (VCSPs) can be used to represent both easy and hard fitness landscapes. Thus, VCSPs can be seen as a natural way of linking the theory of evolution with notions from computer science to better understand the features that make landscapes hard. However, there are currently no simulators that study VCSP-structured fitness landscapes. This report describes the design and implementation of an evolution simulator for VCSP-structured fitness landscapes. The platform is used for simulating various instances of easy and hard fitness landscapes. In particular, we look at evolution under more realistic assumptions than fittest-mutant strong-selection weak-mutation dynamics on the winding semismooth fitness landscape. The results obtained match the theoretical expectations, while also providing new information about the limits of evolution. The last part of the report introduces a mathematical model for smooth fitness landscapes and uses it to better understand why these landscapes are easy. |
Tasks | |
Published | 2019-12-04 |
URL | https://arxiv.org/abs/1912.02134v1 |
https://arxiv.org/pdf/1912.02134v1.pdf | |
PWC | https://paperswithcode.com/paper/simulating-evolution-on-fitness-landscapes |
Repo | |
Framework | |
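As a rough illustration of a VCSP-structured landscape and the fittest-mutant strong-selection weak-mutation dynamics mentioned in the abstract, the sketch below sums valued constraints over a binary genotype and climbs to a local peak. The constraint instance is invented for illustration, not taken from the report.

```python
# Fitness of a binary genotype = sum of valued constraints over small
# variable subsets; each constraint is (tuple of loci, value function).
constraints = [
    ((0,), lambda a: 2 * a),
    ((0, 1), lambda a, b: 3 if a != b else 0),
    ((1, 2), lambda a, b: a * b),
]

def fitness(genotype):
    return sum(f(*(genotype[i] for i in loci)) for loci, f in constraints)

def fittest_mutant_ascent(genotype):
    """Strong-selection weak-mutation dynamics: repeatedly move to the
    fittest one-bit mutant until a local fitness peak is reached."""
    while True:
        neighbours = [genotype[:i] + (1 - genotype[i],) + genotype[i + 1:]
                      for i in range(len(genotype))]
        best = max(neighbours, key=fitness)
        if fitness(best) <= fitness(genotype):
            return genotype
        genotype = best

print(fittest_mutant_ascent((0, 0, 0)))  # converges to a local fitness peak
```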
Experimental Evidence for Asymptotic Non-Optimality of Comb Adversary Strategy
Title | Experimental Evidence for Asymptotic Non-Optimality of Comb Adversary Strategy |
Authors | Zachary Chase |
Abstract | For the problem of prediction with expert advice in the adversarial setting with finite stopping time, we give strong computer evidence that the comb strategy for $k=5$ experts is not asymptotically optimal, thereby giving strong evidence against a conjecture of Gravin, Peres, and Sivan. |
Tasks | |
Published | 2019-12-03 |
URL | https://arxiv.org/abs/1912.01548v1 |
https://arxiv.org/pdf/1912.01548v1.pdf | |
PWC | https://paperswithcode.com/paper/experimental-evidence-for-asymptotic-non |
Repo | |
Framework | |
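For readers unfamiliar with the conjecture in question, the sketch below simulates one common description of the comb adversary: experts are sorted by cumulative gain, alternating experts form the "comb", and a fair coin decides which side gains that round. This is a simplified rendering from the literature, not code or parameters from the paper.

```python
# A rough sketch of the comb adversary for prediction with expert advice.
import random

def comb_adversary(k=5, horizon=10_000, seed=0):
    rng = random.Random(seed)
    gains = [0] * k
    for _ in range(horizon):
        order = sorted(range(k), key=lambda i: -gains[i])  # best expert first
        comb = set(order[::2])                             # alternating "teeth"
        winners = comb if rng.random() < 0.5 else set(range(k)) - comb
        for i in winners:
            gains[i] += 1
    return gains

print(sorted(comb_adversary(), reverse=True))
```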
Context is Key: Grammatical Error Detection with Contextual Word Representations
Title | Context is Key: Grammatical Error Detection with Contextual Word Representations |
Authors | Samuel Bell, Helen Yannakoudakis, Marek Rei |
Abstract | Grammatical error detection (GED) in non-native writing requires systems to identify a wide range of errors in text written by language learners. Error detection as a purely supervised task can be challenging, as GED datasets are limited in size and the label distributions are highly imbalanced. Contextualized word representations offer a possible solution, as they can efficiently capture compositional information in language and can be optimized on large amounts of unsupervised data. In this paper, we perform a systematic comparison of ELMo, BERT and Flair embeddings (Peters et al., 2017; Devlin et al., 2018; Akbik et al., 2018) on a range of public GED datasets, and propose an approach to effectively integrate such representations in current methods, achieving a new state of the art on GED. We further analyze the strengths and weaknesses of different contextual embeddings for the task at hand, and present detailed analyses of their impact on different types of errors. |
Tasks | Grammatical Error Detection |
Published | 2019-06-15 |
URL | https://arxiv.org/abs/1906.06593v1 |
https://arxiv.org/pdf/1906.06593v1.pdf | |
PWC | https://paperswithcode.com/paper/context-is-key-grammatical-error-detection |
Repo | |
Framework | |
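As a hedged sketch of the general setup (not the paper's system), the snippet below feeds frozen BERT wordpiece embeddings into a binary token classifier. A real GED model would align wordpieces to tokens, train on labeled error-detection data, and possibly combine ELMo, BERT, and Flair representations as the paper does.

```python
# Token-level error detection on top of frozen contextual features:
# each wordpiece embedding feeds a linear correct/error classifier.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased")
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # correct / error

sentence = "He go to school yesterday ."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state   # (1, seq_len, hidden)
logits = classifier(hidden)                        # per-wordpiece scores
print(logits.argmax(-1))  # untrained here; training would use GED labels
```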
Inverse boosting pruning trees for depression detection on Twitter
Title | Inverse boosting pruning trees for depression detection on Twitter |
Authors | Lei Tong, Xiangrong, Qianni Zhang, Abdul Sadka, Ling Li, Huiyu Zhou |
Abstract | Depression is one of the most common mental health disorders, and a large number of people with depression commit suicide each year. Potential depression sufferers do not consult mental health professionals because they feel ashamed or are unaware of their depression, which may result in severe delays in diagnosis and treatment. In the meantime, evidence shows that social media data provides valuable clues about physical and mental health conditions. In this paper, we argue that it is feasible to identify depression at an early stage by mining online social behaviours. Our approach, which is innovative to the practice of depression detection, does not rely on the extraction of numerous or complicated features to achieve accurate depression detection. Instead, we propose a novel classifier, namely, Inverse Boosting Pruning Trees (IBPT), which demonstrates a strong classification ability on a publicly accessible dataset of 7862 Twitter users. To comprehensively evaluate the classification capability of the IBPT, we use three real datasets from the UCI machine learning repository, on which the IBPT still obtains the best classification results against several state-of-the-art techniques. The results show that our proposed framework is promising for identifying social network users with depression. |
Tasks | |
Published | 2019-06-02 |
URL | https://arxiv.org/abs/1906.00398v1 |
https://arxiv.org/pdf/1906.00398v1.pdf | |
PWC | https://paperswithcode.com/paper/190600398 |
Repo | |
Framework | |
Linear Convergence of Frank-Wolfe for Rank-One Matrix Recovery Without Strong Convexity
Title | Linear Convergence of Frank-Wolfe for Rank-One Matrix Recovery Without Strong Convexity |
Authors | Dan Garber |
Abstract | We consider convex optimization problems which are widely used as convex relaxations for low-rank matrix recovery problems. In particular, in several important problems, such as phase retrieval and robust PCA, the underlying assumption in many cases is that the optimal solution is rank-one. In this paper we consider a simple and natural sufficient condition on the objective so that the optimal solution to these relaxations is indeed unique and rank-one. Mainly, we show that under this condition, the standard Frank-Wolfe method with line-search (i.e., without any tuning of parameters whatsoever), which only requires a single rank-one SVD computation per iteration, finds an $\epsilon$-approximated solution in only $O(\log{1/\epsilon})$ iterations (as opposed to the previous best known bound of $O(1/\epsilon)$), despite the fact that the objective is not strongly convex. We consider several variants of the basic method with improved complexities, as well as an extension motivated by robust PCA, and finally, an extension to nonsmooth problems. |
Tasks | |
Published | 2019-12-03 |
URL | https://arxiv.org/abs/1912.01467v1 |
https://arxiv.org/pdf/1912.01467v1.pdf | |
PWC | https://paperswithcode.com/paper/linear-convergence-of-frank-wolfe-for-rank |
Repo | |
Framework | |
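The method itself is the standard Frank-Wolfe algorithm with line search over the unit-trace PSD cone (the spectrahedron); a minimal sketch on an illustrative rank-one least-squares instance (not an example from the paper) follows. Each iteration costs one leading-eigenvector computation, the symmetric analogue of the rank-one SVD the abstract refers to.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

n = 30
rng = np.random.default_rng(0)
v = rng.normal(size=n); v /= np.linalg.norm(v)
M = np.outer(v, v)                       # rank-one target

def f(X): return 0.5 * np.linalg.norm(X - M) ** 2
def grad(X): return X - M

X = np.eye(n) / n                        # feasible start: PSD, trace one
for t in range(200):
    # Linear minimization oracle: rank-one vertex from the top
    # eigenvector of -grad (i.e., the smallest eigenvector of grad).
    _, u = eigsh(-grad(X), k=1, which="LA")
    V = np.outer(u[:, 0], u[:, 0])
    D = V - X
    # Exact line search for this quadratic objective.
    denom = np.sum(D * D)
    eta = np.clip(-np.sum(grad(X) * D) / denom, 0.0, 1.0) if denom > 0 else 0.0
    X += eta * D

print(f(X))  # decays at a linear rate under the paper's condition
```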
Automated Coronary Artery Atherosclerosis Detection and Weakly Supervised Localization on Coronary CT Angiography with a Deep 3-Dimensional Convolutional Neural Network
Title | Automated Coronary Artery Atherosclerosis Detection and Weakly Supervised Localization on Coronary CT Angiography with a Deep 3-Dimensional Convolutional Neural Network |
Authors | Sema Candemir, Richard D. White, Mutlu Demirer, Vikash Gupta, Matthew T. Bigelow, Luciano M. Prevedello, Barbaros S. Erdal |
Abstract | We propose a fully automated algorithm based on a deep learning framework enabling screening of a Coronary Computed Tomography Angiography (CCTA) examination for confident detection of the presence or absence of coronary artery atherosclerosis. The system starts with extracting the coronary arteries and their branches from CCTA datasets and representing them with multi-planar reformatted volumes; pre-processing and augmentation techniques are then applied to increase the robustness and generalization ability of the system. A 3-Dimensional Convolutional Neural Network (3D CNN) is utilized to model pathological changes (e.g., atherosclerotic plaques) in coronary vessels. The system learns the discriminatory features between vessels with and without atherosclerosis. The discriminative features at the final convolutional layer are visualized with a saliency map approach to provide visual clues related to atherosclerosis likelihood and location. We have evaluated the system on a reference dataset representing 247 patients with atherosclerosis and 246 patients free of atherosclerosis. With 5-fold cross-validation, an Accuracy = 90.9%, Positive Predictive Value = 58.8%, Sensitivity = 68.9%, Specificity = 93.6%, and Negative Predictive Value (NPV) = 96.1% are achieved at the artery/branch level with a threshold of 0.5. The average area under the receiver operating characteristic curve is 0.91. The system indicates a high NPV, which may be potentially useful for assisting interpreting physicians in excluding coronary atherosclerosis in patients with acute chest pain. |
Tasks | |
Published | 2019-11-26 |
URL | https://arxiv.org/abs/1911.13219v2 |
https://arxiv.org/pdf/1911.13219v2.pdf | |
PWC | https://paperswithcode.com/paper/coronary-artery-classification-and-weakly |
Repo | |
Framework | |
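A hedged sketch of a small 3D CNN for per-vessel classification is shown below; the layer sizes, input patch shape, and pooling choices are illustrative stand-ins, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class Vessel3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 2)  # atherosclerosis: absent / present

    def forward(self, x):             # x: (batch, 1, depth, height, width)
        return self.head(self.features(x).flatten(1))

volume = torch.randn(1, 1, 32, 32, 32)   # a multi-planar reformatted patch
print(Vessel3DCNN()(volume).softmax(-1))
```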
Population Predictive Checks
Title | Population Predictive Checks |
Authors | Rajesh Ranganath, David M. Blei |
Abstract | Bayesian modeling has become a staple for researchers analyzing data. Thanks to recent developments in approximate posterior inference, modern researchers can easily build, use, and revise complicated Bayesian models for large and rich data. These new abilities, however, bring into focus the problem of model assessment. Researchers need tools to diagnose the fitness of their models, to understand where a model falls short, and to guide its revision. In this paper we develop a new method for Bayesian model checking, the population predictive check (Pop-PC). Pop-PCs are built on posterior predictive checks (PPC), a seminal method that checks a model by assessing the posterior predictive distribution on the observed data. Though powerful, PPCs use the data twice—both to calculate the posterior predictive and to evaluate it—which can lead to overconfident assessments. Pop-PCs, in contrast, compare the posterior predictive distribution to the population distribution of the data. This strategy blends Bayesian modeling with frequentist assessment, leading to a robust check that validates the model on its generalization. Of course the population distribution is not usually available; thus we use tools like the bootstrap and cross validation to estimate the Pop-PC. Further, we extend Pop-PCs to hierarchical models. We study Pop-PCs on classical regression and a hierarchical model of text. We show that Pop-PCs are robust to overfitting and can be easily deployed on a broad family of models. |
Tasks | |
Published | 2019-08-02 |
URL | https://arxiv.org/abs/1908.00882v2 |
https://arxiv.org/pdf/1908.00882v2.pdf | |
PWC | https://paperswithcode.com/paper/population-predictive-checks |
Repo | |
Framework | |
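A toy sketch of the contrast the abstract draws: a classic PPC evaluates the statistic on the same data used to fit, while a population-style check evaluates it against fresh (here, held-out) data. The Gaussian model and mean statistic are illustrative choices, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=200)
train, heldout = data[:100], data[100:]

# Posterior for a Gaussian mean with known unit variance and a flat prior.
post_mean, post_sd = train.mean(), 1.0 / np.sqrt(len(train))

def replicate(n):
    mu = rng.normal(post_mean, post_sd)   # draw from the posterior
    return rng.normal(mu, 1.0, size=n)    # posterior predictive replicate

stat = np.mean
reps = np.array([stat(replicate(100)) for _ in range(2000)])

ppc_pval = np.mean(reps >= stat(train))   # double use of `train`
pop_pval = np.mean(reps >= stat(heldout)) # checked against fresh data
print(ppc_pval, pop_pval)
```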
Testing Conditional Independence on Discrete Data using Stochastic Complexity
Title | Testing Conditional Independence on Discrete Data using Stochastic Complexity |
Authors | Alexander Marx, Jilles Vreeken |
Abstract | Testing for conditional independence is a core aspect of constraint-based causal discovery. Although commonly used tests are perfect in theory, they often fail to reject independence in practice, especially when conditioning on multiple variables. We focus on discrete data and propose a new test based on the notion of algorithmic independence that we instantiate using stochastic complexity. Amongst others, we show that our proposed test, SCI, is an asymptotically unbiased as well as $L_2$ consistent estimator for conditional mutual information (CMI). Further, we show that SCI can be reformulated to find a sensible threshold for CMI that works well on limited samples. Empirical evaluation shows that SCI has a lower type II error than commonly used tests. As a result, we obtain a higher recall when we use SCI in causal discovery algorithms, without compromising the precision. |
Tasks | Causal Discovery |
Published | 2019-03-12 |
URL | http://arxiv.org/abs/1903.04829v1 |
http://arxiv.org/pdf/1903.04829v1.pdf | |
PWC | https://paperswithcode.com/paper/testing-conditional-independence-on-discrete |
Repo | |
Framework | |
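SCI is built around conditional mutual information; as a reference point, a plug-in CMI estimator for discrete data looks like the sketch below. SCI replaces such plug-in estimates with stochastic-complexity terms, which this sketch does not implement.

```python
# Plug-in estimate of I(X; Y | Z) from discrete samples.
from collections import Counter
from math import log
import random

def cmi(xs, ys, zs):
    n = len(xs)
    cz, cxz, cyz, cxyz = Counter(), Counter(), Counter(), Counter()
    for x, y, z in zip(xs, ys, zs):
        cz[z] += 1; cxz[x, z] += 1; cyz[y, z] += 1; cxyz[x, y, z] += 1
    # I(X;Y|Z) = sum p(x,y,z) * log[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]
    return sum(c / n * log(c * cz[z] / (cxz[x, z] * cyz[y, z]))
               for (x, y, z), c in cxyz.items())

# X and Y are independent given Z: the estimate should be near zero.
rng = random.Random(0)
zs = [rng.randint(0, 1) for _ in range(5000)]
xs = [z ^ (rng.random() < 0.3) for z in zs]
ys = [z ^ (rng.random() < 0.3) for z in zs]
print(cmi(xs, ys, zs))
```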
Informative Path Planning with Local Penalization for Decentralized and Asynchronous Swarm Robotic Search
Title | Informative Path Planning with Local Penalization for Decentralized and Asynchronous Swarm Robotic Search |
Authors | Payam Ghassemi, Souma Chowdhury |
Abstract | Decentralized swarm robotic solutions to searching for targets that emit a spatially varying signal promise task parallelism, time efficiency, and fault tolerance. It is, however, challenging for swarm algorithms to offer scalability and efficiency, while preserving mathematical insights into the exhibited behavior. A new decentralized search method (called Bayes-Swarm), founded on batch Bayesian Optimization (BO) principles, is presented here to address these challenges. Unlike swarm heuristics approaches, Bayes-Swarm decouples the knowledge generation and task planning process, thus preserving insights into the emergent behavior. Key contributions lie in: 1) modeling knowledge extraction over trajectories, unlike in BO; 2) time-adaptively balancing exploration/exploitation and using an efficient local penalization approach to account for potential interactions among different robots’ planned samples; and 3) presenting an asynchronous implementation of the algorithm. This algorithm is tested on case studies with bimodal and highly multimodal signal distributions. Up to 76 times better efficiency is demonstrated compared to an exhaustive search baseline. The benefits of exploitation/exploration balancing, asynchronous planning, and local penalization, and scalability with swarm size, are also demonstrated. |
Tasks | |
Published | 2019-07-09 |
URL | https://arxiv.org/abs/1907.04396v1 |
https://arxiv.org/pdf/1907.04396v1.pdf | |
PWC | https://paperswithcode.com/paper/informative-path-planning-with-local |
Repo | |
Framework | |
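A hedged sketch of the local-penalization ingredient: each robot maximizes a GP-based acquisition function after down-weighting candidate locations near samples already planned by its peers. The UCB acquisition, Gaussian penalty, and radius below are illustrative stand-ins, not the exact Bayes-Swarm formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def penalized_acquisition(gp, candidates, peer_plans, radius=0.5):
    mu, sd = gp.predict(candidates, return_std=True)
    acq = mu + 2.0 * sd                                  # UCB: explore/exploit
    scale = np.ptp(acq) or 1.0
    for p in peer_plans:                                 # soft exclusion zones
        dist = np.linalg.norm(candidates - p, axis=1)
        acq = acq - scale * np.exp(-(dist / radius) ** 2)
    return candidates[np.argmax(acq)]

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(20, 2))                     # measurements so far
y = -np.linalg.norm(X - 5.0, axis=1)                     # toy signal, peak at (5, 5)
gp = GaussianProcessRegressor().fit(X, y)

cands = rng.uniform(0, 10, size=(500, 2))
peers = [np.array([5.0, 5.0])]                           # a peer already heads there
print(penalized_acquisition(gp, cands, peers))           # steered away from the peer
```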
Continual Learning Using Bayesian Neural Networks
Title | Continual Learning Using Bayesian Neural Networks |
Authors | HongLin Li, Payam Barnaghi, Shirin Enshaeifar, Frieder Ganz |
Abstract | Continual learning models allow systems to learn and adapt to new changes and tasks over time. However, in continual and sequential learning scenarios, in which the models are trained using different data with various distributions, neural networks tend to forget the previously learned knowledge. This phenomenon is often referred to as catastrophic forgetting. Catastrophic forgetting is an inevitable problem in continual learning models for dynamic environments. To address this issue, we propose a method, called Continual Bayesian Learning Networks (CBLN), which enables the networks to allocate additional resources to adapt to new tasks without forgetting the previously learned tasks. Using a Bayesian Neural Network, CBLN maintains a mixture of Gaussian posterior distributions that are associated with different tasks. The proposed method tries to optimise the number of resources that are needed to learn each task and avoids an exponential increase in the number of resources that are involved in learning multiple tasks. The proposed method does not need to access the past training data and can automatically choose suitable weights to classify the data points at test time based on an uncertainty criterion. We have evaluated our method on the MNIST and UCR time-series datasets. The evaluation results show that our method can address the catastrophic forgetting problem at a promising rate compared to the state-of-the-art models. |
Tasks | Continual Learning, Time Series |
Published | 2019-10-09 |
URL | https://arxiv.org/abs/1910.04112v1 |
https://arxiv.org/pdf/1910.04112v1.pdf | |
PWC | https://paperswithcode.com/paper/continual-learning-using-bayesian-neural |
Repo | |
Framework | |
Generative Adversarial Networks (GAN) for compact beam source modelling in Monte Carlo simulations
Title | Generative Adversarial Networks (GAN) for compact beam source modelling in Monte Carlo simulations |
Authors | David Sarrut, Nils Krah, Jean-Michel Létang |
Abstract | A method is proposed and evaluated to model large and inconvenient phase space files used in Monte Carlo simulations by a compact Generative Adversarial Network (GAN). The GAN is trained on a phase space dataset to create a neural network, called the Generator (G), allowing G to mimic the multidimensional data distribution of the phase space. At the end of the training process, G is stored with about 0.5 million weights, around 10 MB, instead of the few GB of the initial file. Particles are then generated with G to replace the phase space dataset. This concept is applied to beam models from linear accelerators (linacs) and to brachytherapy seed models. Simulations using particles from the reference phase space on the one hand and those generated by the GAN on the other hand were compared. 3D distributions of deposited energy obtained from source distributions generated by the GAN were close to the reference ones, with less than 1% voxel-by-voxel relative difference. Sharp features such as the brachytherapy emission lines in the energy spectra were not perfectly modeled by the GAN. Detailed statistical properties and limitations of the GAN-generated particles still require further investigation, but the proposed exploratory approach is already promising and paves the way for a wide range of applications. |
Tasks | |
Published | 2019-07-31 |
URL | https://arxiv.org/abs/1907.13324v2 |
https://arxiv.org/pdf/1907.13324v2.pdf | |
PWC | https://paperswithcode.com/paper/generative-adversarial-networks-gan-for |
Repo | |
Framework | |
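A hedged sketch of the deployment idea: once trained, a small generator network replaces the multi-GB phase space file as a particle source. The architecture and the 7-dimensional particle layout (energy, position, direction) are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=16, particle_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, particle_dim),
        )

    def forward(self, z):
        return self.net(z)

G = Generator()  # in practice: load the ~10 MB of trained weights
with torch.no_grad():
    particles = G(torch.randn(1000, 16))  # draw 1000 particles for the MC run
print(particles.shape)  # torch.Size([1000, 7])
```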
Biologically-Inspired Spatial Neural Networks
Title | Biologically-Inspired Spatial Neural Networks |
Authors | Maciej Wołczyk, Jacek Tabor, Marek Śmieja, Szymon Maszke |
Abstract | We introduce bio-inspired artificial neural networks consisting of neurons that are additionally characterized by spatial positions. To simulate properties of biological systems we add the costs penalizing long connections and the proximity of neurons in a two-dimensional space. Our experiments show that in the case where the network performs two different tasks, the neurons naturally split into clusters, where each cluster is responsible for processing a different task. This behavior not only corresponds to the biological systems, but also allows for further insight into interpretability or continual learning. |
Tasks | Continual Learning |
Published | 2019-10-07 |
URL | https://arxiv.org/abs/1910.02776v1 |
https://arxiv.org/pdf/1910.02776v1.pdf | |
PWC | https://paperswithcode.com/paper/biologically-inspired-spatial-neural-networks |
Repo | |
Framework | |
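A hedged sketch of the spatial cost described in the abstract: neurons get 2-D positions, and an extra loss term penalizes long connections (weight magnitude times distance) plus neuron proximity. The exact cost terms and coefficients here are illustrative, not the paper's.

```python
import torch

def spatial_penalty(weight, pos_in, pos_out, alpha=1e-3, beta=1e-3):
    # Wire-length cost: strong weights over long distances are expensive.
    dist = torch.cdist(pos_out, pos_in)          # (out, in) pairwise distances
    wire_cost = (weight.abs() * dist).sum()
    # Proximity cost: neurons should not crowd together in 2-D space.
    d_out = torch.cdist(pos_out, pos_out)
    eye = torch.eye(len(pos_out))
    crowd_cost = (1.0 / (d_out + eye) - eye).sum()   # ignore self-distance
    return alpha * wire_cost + beta * crowd_cost

layer = torch.nn.Linear(8, 4)
pos_in = torch.nn.Parameter(torch.rand(8, 2))    # learnable 2-D positions
pos_out = torch.nn.Parameter(torch.rand(4, 2))
loss = spatial_penalty(layer.weight, pos_in, pos_out)
loss.backward()                                  # added to the task loss in training
```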
Quantized Fisher Discriminant Analysis
Title | Quantized Fisher Discriminant Analysis |
Authors | Benyamin Ghojogh, Ali Saheb Pasand, Fakhri Karray, Mark Crowley |
Abstract | This paper proposes a new subspace learning method, named Quantized Fisher Discriminant Analysis (QFDA), which makes use of both machine learning and information theory. There is little literature on combining machine learning and information theory, and this paper attempts to fill this gap. QFDA finds a subspace which discriminates the uniformly quantized images in the Discrete Cosine Transform (DCT) domain at least as well as Fisher Discriminant Analysis (FDA) discriminates the non-quantized images, even though the images have been compressed. This allows the user to discard the original images and keep the compressed images instead, without a noticeable loss of classification accuracy. We propose a cost function whose minimization can be interpreted as rate-distortion optimization in information theory. We also propose quantized Fisherfaces for facial analysis in QFDA. Our experiments on the AT&T face dataset and the Fashion MNIST dataset show the effectiveness of this subspace learning method. |
Tasks | |
Published | 2019-09-06 |
URL | https://arxiv.org/abs/1909.03037v1 |
https://arxiv.org/pdf/1909.03037v1.pdf | |
PWC | https://paperswithcode.com/paper/quantized-fisher-discriminant-analysis |
Repo | |
Framework | |
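A hedged sketch of the pipeline the abstract describes: uniformly quantize images in the DCT domain, then learn a discriminant subspace on the quantized data. scikit-learn's LDA stands in for the paper's FDA-based formulation, and the quantization step size and toy data are illustrative.

```python
import numpy as np
from scipy.fftpack import dct, idct
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def quantize_dct(image, step=16.0):
    """Uniform quantization in the 2-D DCT domain (a stand-in for
    JPEG-style compression; the step size is an illustrative choice)."""
    coeffs = dct(dct(image, axis=0, norm="ortho"), axis=1, norm="ortho")
    coeffs = np.round(coeffs / step) * step
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

rng = np.random.default_rng(0)
images = rng.normal(size=(100, 8, 8))            # toy "faces"
labels = rng.integers(0, 2, size=100)
quantized = np.stack([quantize_dct(im) for im in images])

lda = LinearDiscriminantAnalysis(n_components=1)
projected = lda.fit_transform(quantized.reshape(100, -1), labels)
print(projected.shape)  # (100, 1)
```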
The non-tightness of the reconstruction threshold of a 4 states symmetric model with different in-block and out-block mutations
Title | The non-tightness of the reconstruction threshold of a 4 states symmetric model with different in-block and out-block mutations |
Authors | Wenjian Liu, Ning Ning |
Abstract | The tree reconstruction problem is to collect and analyze massive data at the $n$th level of the tree, to identify whether there is non-vanishing information of the root, as $n$ goes to infinity. Its connection to the clustering problem in the setting of the stochastic block model, which has wide applications in machine learning and data mining, has been well established. For the stochastic block model, an “information-theoretically-solvable-but-computationally-hard” region, or “hybrid-hard phase”, appears whenever the reconstruction bound of the corresponding reconstruction-on-the-tree problem is not tight. Although the problem has been studied in numerous contexts, the existing literature in which rigorous reconstruction thresholds are established is very limited, and the problem becomes extremely challenging when the model under investigation has $4$ states (the stochastic block model with $4$ communities). In this paper, inspired by the newly proposed $q_1+q_2$ stochastic block model, we study a $4$-state symmetric model with different in-block and out-block transition probabilities, and rigorously give the conditions for the non-tightness of the reconstruction threshold. |
Tasks | |
Published | 2019-06-22 |
URL | https://arxiv.org/abs/1906.09479v1 |
https://arxiv.org/pdf/1906.09479v1.pdf | |
PWC | https://paperswithcode.com/paper/the-non-tightness-of-the-reconstruction |
Repo | |
Framework | |
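A hedged sketch of the broadcast process behind the tree reconstruction problem: a 4-state root is propagated down a $d$-ary tree, mutating with different in-block and out-block probabilities at each edge. The transition probabilities below are illustrative, not the parametrization studied in the paper.

```python
import random

rng = random.Random(0)
# States 0,1 form one block; 2,3 the other.
P_STAY, P_IN = 0.7, 0.2   # remaining 0.1 splits across the two out-block states

def mutate(state):
    r = rng.random()
    if r < P_STAY:
        return state
    if r < P_STAY + P_IN:
        return state ^ 1                        # partner within the same block
    return rng.choice([s for s in range(4) if s // 2 != state // 2])

def broadcast(state, depth, arity=2):
    """Return the leaf states at the given depth below `state`."""
    if depth == 0:
        return [state]
    return [leaf for _ in range(arity)
            for leaf in broadcast(mutate(state), depth - 1, arity)]

print(broadcast(0, depth=10)[:16])  # the data a reconstruction algorithm sees
```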