Paper Group ANR 102
Markov models for ocular fixation locations in the presence and absence of colour
Title | Markov models for ocular fixation locations in the presence and absence of colour |
Authors | Adam B. Kashlak, Eoin Devane, Helge Dietert, Henry Jackson |
Abstract | We propose to model the fixation locations of the human eye when observing a still image by a Markovian point process in $\mathbb{R}^2$. Our approach is data driven, using k-means clustering of the fixation locations to identify distinct salient regions of the image, which in turn correspond to the states of our Markov chain. Bayes factors are computed as a model selection criterion to determine the number of clusters. Furthermore, we demonstrate that the behaviour of the human eye differs from this model when colour information is removed from the given image. |
Tasks | Model Selection |
Published | 2016-04-21 |
URL | http://arxiv.org/abs/1604.06335v1 |
http://arxiv.org/pdf/1604.06335v1.pdf | |
PWC | https://paperswithcode.com/paper/markov-models-for-ocular-fixation-locations |
Repo | |
Framework | |
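The entry above combines k-means clustering with an empirical Markov chain over the resulting clusters. A minimal sketch of that combination, with synthetic fixation data standing in for real eye-tracking recordings (the paper's Bayes-factor selection of the cluster count is not reproduced here):

```python
# Sketch: k-means states + empirical Markov transition matrix over fixations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
fixations = rng.normal(size=(500, 2))    # hypothetical (x, y) fixation sequence

k = 4                                    # cluster count (chosen via Bayes factors in the paper)
states = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(fixations)

# Count transitions between consecutive fixations' salient regions, then row-normalize.
T = np.zeros((k, k))
for s, t in zip(states[:-1], states[1:]):
    T[s, t] += 1
T /= np.maximum(T.sum(axis=1, keepdims=True), 1)
print(np.round(T, 2))
```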
PCG-Based Game Design Patterns
Title | PCG-Based Game Design Patterns |
Authors | Michael Cook, Mirjam Eladhari, Andy Nealen, Mike Treanor, Eddy Boxerman, Alex Jaffe, Paul Sottosanti, Steve Swink |
Abstract | People enjoy encounters with generative software, but rarely are they encouraged to interact with, understand or engage with it. In this paper we define the term ‘PCG-based game’, and explain how this concept follows on from the idea of an AI-based game. We look at existing examples of games which foreground their AI, put forward a methodology for designing PCG-based games, describe some example case study designs for PCG-based games, and describe lessons learned during this process of sketching and developing ideas. |
Tasks | |
Published | 2016-10-11 |
URL | http://arxiv.org/abs/1610.03138v1 |
http://arxiv.org/pdf/1610.03138v1.pdf | |
PWC | https://paperswithcode.com/paper/pcg-based-game-design-patterns |
Repo | |
Framework | |
Topic Sensitive Neural Headline Generation
Title | Topic Sensitive Neural Headline Generation |
Authors | Lei Xu, Ziyun Wang, Ayana, Zhiyuan Liu, Maosong Sun |
Abstract | Neural models have recently been used in text summarization, including headline generation. The model can be trained using a set of document-headline pairs. However, the model does not explicitly consider topical similarities and differences between documents. We suggest categorizing documents into various topics so that documents within the same topic are similar in content and share similar summarization patterns. Taking advantage of topic information of documents, we propose a topic-sensitive neural headline generation model. Our model can generate more accurate summaries guided by document topics. We test our model on the LCSTS dataset, and experiments show that our method outperforms other baselines on each topic and achieves state-of-the-art performance. |
Tasks | Text Summarization |
Published | 2016-08-20 |
URL | http://arxiv.org/abs/1608.05777v1 |
http://arxiv.org/pdf/1608.05777v1.pdf | |
PWC | https://paperswithcode.com/paper/topic-sensitive-neural-headline-generation |
Repo | |
Framework | |
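One lightweight way to make a headline generator topic-sensitive is to infer a topic per document and expose it to the model, for instance as a prepended tag. This is an illustrative preprocessing sketch under that assumption, not the paper's actual architecture:

```python
# Sketch: cluster documents into topics, then prepend a topic tag so a seq2seq
# headline model can condition on it. Toy corpus; illustrative only.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["stock markets fell sharply on weak earnings",
        "the home team won the cup final in extra time"]

tfidf = TfidfVectorizer().fit_transform(docs)
topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

tagged = [f"<topic_{t}> {d}" for t, d in zip(topics, docs)]
print(tagged)   # each source now carries its topic for the generator to use
```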
Machine Learning Techniques with Ontology for Subjective Answer Evaluation
Title | Machine Learning Techniques with Ontology for Subjective Answer Evaluation |
Authors | M. Syamala Devi, Himani Mittal |
Abstract | Computerized evaluation of English essays is performed using machine learning techniques like Latent Semantic Analysis (LSA), Generalized LSA, Bilingual Evaluation Understudy and Maximum Entropy. Ontology, a concept map of domain knowledge, can enhance the performance of these techniques. Use of Ontology makes the evaluation process holistic, as presence of keywords, synonyms, the right word combination and coverage of concepts can be checked. In this paper, the above-mentioned techniques are implemented both with and without Ontology and tested on common input data consisting of technical answers of Computer Science. A domain Ontology of Computer Graphics is designed and developed. The software used for implementation includes the Java programming language and tools such as MATLAB, Protégé, etc. Ten questions from Computer Graphics with sixty answers for each question are used for testing. The results are analyzed and it is concluded that the results are more accurate with the use of Ontology. |
Tasks | |
Published | 2016-05-09 |
URL | http://arxiv.org/abs/1605.02442v1 |
http://arxiv.org/pdf/1605.02442v1.pdf | |
PWC | https://paperswithcode.com/paper/machine-learning-techniques-with-ontology-for |
Repo | |
Framework | |
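A minimal sketch of the LSA-plus-ontology idea from the entry above: score a student answer against a model answer in a latent semantic space, then check coverage of ontology terms. The toy corpus and the `ontology_terms` set are hypothetical:

```python
# Sketch: LSA similarity between a student answer and a model answer, plus a
# simple ontology-term coverage check. Corpus and ontology terms are hypothetical.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "a raster display stores the image as a grid of pixels",   # model answer
    "raster screens represent pictures as pixel grids",        # student answer
    "vector displays draw images with lines and curves",       # distractor
]
X = TfidfVectorizer().fit_transform(corpus)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)  # latent space
lsa_score = cosine_similarity(Z[:1], Z[1:2])[0, 0]

ontology_terms = {"raster", "pixel"}             # hypothetical ontology keywords
coverage = len(ontology_terms & set(corpus[1].split())) / len(ontology_terms)
print(round(lsa_score, 3), coverage)
```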
e-Commerce product classification: our participation at cDiscount 2015 challenge
Title | e-Commerce product classification: our participation at cDiscount 2015 challenge |
Authors | Ioannis Partalas, Georgios Balikas |
Abstract | This report describes our participation in the cDiscount 2015 challenge where the goal was to classify product items in a predefined taxonomy of products. Our best submission yielded an accuracy score of 64.20% in the private part of the leaderboard and we were ranked 10th out of 175 participating teams. We followed a text classification approach employing mainly linear models. The final solution was a weighted voting system which combined a variety of trained models. |
Tasks | Text Classification |
Published | 2016-06-09 |
URL | http://arxiv.org/abs/1606.02854v1 |
http://arxiv.org/pdf/1606.02854v1.pdf | |
PWC | https://paperswithcode.com/paper/e-commerce-product-classification-our |
Repo | |
Framework | |
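A minimal sketch of the approach described above: TF-IDF features, linear classifiers, and a weighted soft-voting ensemble. The toy products, model choices and voting weights are illustrative, not the authors' tuned configuration:

```python
# Sketch: TF-IDF + linear models + weighted soft voting for product classification.
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.pipeline import make_pipeline

titles = ["usb charging cable", "cotton baby pyjamas", "hdmi cable 2m", "wool socks"]
labels = ["electronics", "clothing", "electronics", "clothing"]

ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("sgd", SGDClassifier(loss="log_loss"))],
        voting="soft",
        weights=[2, 1],                  # hypothetical validation-derived weights
    ),
)
ensemble.fit(titles, labels)
print(ensemble.predict(["leather baby shoes"]))
```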
Minimum cost polygon overlay with rectangular shape stock panels
Title | Minimum cost polygon overlay with rectangular shape stock panels |
Authors | Wilson S. Siringoringo, Andy M. Connor, Nick Clements, Nick Alexander |
Abstract | Minimum Cost Polygon Overlay (MCPO) is a unique two-dimensional optimization problem that involves the task of covering a polygon shaped area with a series of rectangular shaped panels. This has a number of applications in the construction industry. This work examines the MCPO problem in order to construct a model that captures essential parameters of the problem to be solved automatically using numerical optimization algorithms. Three algorithms have been implemented for the actual optimization task: greedy search, the Monte Carlo (MC) method, and the Genetic Algorithm (GA). Results are presented to show the relative effectiveness of the algorithms. This is followed by a critical analysis of various findings of this research. |
Tasks | |
Published | 2016-06-19 |
URL | http://arxiv.org/abs/1606.05927v1 |
http://arxiv.org/pdf/1606.05927v1.pdf | |
PWC | https://paperswithcode.com/paper/minimum-cost-polygon-overlay-with-rectangular |
Repo | |
Framework | |
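A toy version of a greedy baseline for MCPO on a rasterized polygon: repeatedly place the panel with the cheapest cost per newly covered cell. The grid, panel sizes and costs are hypothetical; the paper works with real polygon geometry and also evaluates MC and GA solvers:

```python
# Sketch: toy greedy cover of a rasterized polygon with rectangular stock panels.
import numpy as np

area = np.array([[1, 1, 1, 0],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1]])                          # 1 = cell inside the polygon
panels = [((2, 2), 3.0), ((1, 2), 1.8), ((1, 1), 1.0)]   # ((rows, cols), cost)

total_cost, uncovered = 0.0, area.astype(bool)
while uncovered.any():
    r, c = map(int, np.argwhere(uncovered)[0])           # first uncovered cell
    # Greedy rule: cheapest cost per newly covered cell when anchored at (r, c).
    (h, w), cost = min(
        panels,
        key=lambda p: p[1] / max(int(uncovered[r:r + p[0][0], c:c + p[0][1]].sum()), 1),
    )
    uncovered[r:r + h, c:c + w] = False
    total_cost += cost
print("greedy cost:", total_cost)
```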
Extracting Sub-Exposure Images from a Single Capture Through Fourier-based Optical Modulation
Title | Extracting Sub-Exposure Images from a Single Capture Through Fourier-based Optical Modulation |
Authors | Shah Rez Khan, Martin Feldman, Bahadir K. Gunturk |
Abstract | Through pixel-wise optical coding of images during exposure time, it is possible to extract sub-exposure images from a single capture. Such a capability can be used for different purposes, including high-speed imaging, high-dynamic-range imaging and compressed sensing. In this paper, we demonstrate a sub-exposure image extraction method, where the exposure coding pattern is inspired by the frequency-division multiplexing idea of communication systems. The coding masks modulate sub-exposure images in such a way that they are placed in non-overlapping regions in the Fourier domain. The sub-exposure image extraction process involves digital filtering of the captured signal with proper band-pass filters. The prototype imaging system incorporates a Liquid Crystal over Silicon (LCoS) based spatial light modulator synchronized with a camera for pixel-wise exposure coding. |
Tasks | |
Published | 2016-12-26 |
URL | http://arxiv.org/abs/1612.08359v2 |
http://arxiv.org/pdf/1612.08359v2.pdf | |
PWC | https://paperswithcode.com/paper/extracting-sub-exposure-images-from-a-single |
Repo | |
Framework | |
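A minimal numpy sketch of the frequency-division-multiplexing idea described above: a cosine carrier shifts the second sub-exposure image's spectrum away from the first, and demodulation plus low-pass filtering recovers it. Synthetic images and a one-dimensional carrier stand in for the paper's optical masks and LCoS hardware:

```python
# Sketch: two sub-exposure images multiplexed into one capture via a cosine
# carrier, then recovered by demodulation and Fourier-domain low-pass filtering.
import numpy as np

n, f = 256, 64                                   # image size, carrier frequency (cycles)
img1 = np.outer(np.hanning(n), np.hanning(n))    # smooth synthetic sub-exposure images
img2 = np.roll(img1, n // 4, axis=1)

carrier = np.cos(2 * np.pi * f * np.arange(n) / n)[None, :]
captured = img1 + img2 * carrier                 # single coded capture (toy model)

# Demodulate: 2 * captured * carrier = img2 + high-frequency terms around +-f, +-2f.
F = np.fft.fftshift(np.fft.fft2(2 * captured * carrier))
mask = np.zeros((n, n))
c = n // 2
mask[c - f // 2:c + f // 2, c - f // 2:c + f // 2] = 1.0   # crude low-pass
img2_rec = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
print("mean recovery error:", float(np.abs(img2_rec - img2).mean()))
```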
SANTIAGO: Spine Association for Neuron Topology Improvement and Graph Optimization
Title | SANTIAGO: Spine Association for Neuron Topology Improvement and Graph Optimization |
Authors | William Gray Roncal, Colin Lea, Akira Baruah, Gregory D. Hager |
Abstract | Developing automated and semi-automated solutions for reconstructing wiring diagrams of the brain from electron micrographs is important for advancing the field of connectomics. While the ultimate goal is to generate a graph of neuron connectivity, most prior automated methods have focused on volume segmentation rather than explicit graph estimation. In these approaches, one of the key, commonly occurring error modes is dendritic shaft-spine fragmentation. We posit that directly addressing this problem of connection identification may provide critical insight into estimating more accurate brain graphs. To this end, we develop a network-centric approach motivated by biological priors and image grammars. We build a computer vision pipeline to reconnect fragmented spines to their parent dendrites using both fully-automated and semi-automated approaches. Our experiments show we can learn valid connections despite uncertain segmentation paths. We curate the first known reference dataset for analyzing the performance of various spine-shaft algorithms and demonstrate promising results that recover many previously lost connections. Our automated approach improves the local subgraph score by more than four times and the full graph score by 60 percent. These data, results, and evaluation tools are all available to the broader scientific community. This reframing of the connectomics problem illustrates a semantic, biologically inspired solution to remedy a major problem with neuron tracking. |
Tasks | |
Published | 2016-08-08 |
URL | http://arxiv.org/abs/1608.02307v1 |
http://arxiv.org/pdf/1608.02307v1.pdf | |
PWC | https://paperswithcode.com/paper/santiago-spine-association-for-neuron |
Repo | |
Framework | |
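As a purely illustrative baseline for the connection-identification problem above, one could assign each detached spine fragment to its nearest dendritic shaft; the paper instead learns connections from image evidence. All coordinates here are hypothetical:

```python
# Sketch: naive proximity-based reconnection of spine fragments to shafts.
import numpy as np

rng = np.random.default_rng(0)
spines = rng.uniform(0, 100, size=(20, 3))       # hypothetical fragment centroids
shafts = rng.uniform(0, 100, size=(5, 3))        # hypothetical dendrite shaft centroids

d = np.linalg.norm(spines[:, None, :] - shafts[None, :, :], axis=2)
parent = d.argmin(axis=1)                        # nearest shaft index per spine
print(parent)
```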
Land Use Classification using Convolutional Neural Networks Applied to Ground-Level Images
Title | Land Use Classification using Convolutional Neural Networks Applied to Ground-Level Images |
Authors | Yi Zhu, Shawn Newsam |
Abstract | Land use mapping is a fundamental yet challenging task in geographic science. In contrast to land cover mapping, it is generally not possible using overhead imagery. The recent, explosive growth of online geo-referenced photo collections suggests an alternate approach to geographic knowledge discovery. In this work, we present a general framework that uses ground-level images from Flickr for land use mapping. Our approach benefits from several novel aspects. First, we address the noisiness of the online photo collections, such as imprecise geolocation and uneven spatial distribution, by performing location and indoor/outdoor filtering, and semi-supervised dataset augmentation. Our indoor/outdoor classifier achieves state-of-the-art performance on several benchmark datasets and approaches human-level accuracy. Second, we utilize high-level semantic image features extracted using deep learning, specifically convolutional neural networks, which allow us to achieve upwards of 76% accuracy on a challenging eight-class land use mapping problem. |
Tasks | |
Published | 2016-09-21 |
URL | http://arxiv.org/abs/1609.06653v1 |
http://arxiv.org/pdf/1609.06653v1.pdf | |
PWC | https://paperswithcode.com/paper/land-use-classification-using-convolutional |
Repo | |
Framework | |
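A minimal sketch of fine-tuning an ImageNet-pretrained CNN for an eight-class land-use problem, standing in for the deep features used in the entry above (the paper predates ResNet-style backbones, and the random minibatch replaces real Flickr photos):

```python
# Sketch: swap the classifier head of a pretrained CNN and take one training step.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 8
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new 8-way head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)             # stand-in minibatch of Flickr photos
labels = torch.randint(0, num_classes, (4,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```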
Large-scale Continuous Gesture Recognition Using Convolutional Neural Networks
Title | Large-scale Continuous Gesture Recognition Using Convolutional Neural Networks |
Authors | Pichao Wang, Wanqing Li, Song Liu, Yuyao Zhang, Zhimin Gao, Philip Ogunbona |
Abstract | This paper addresses the problem of continuous gesture recognition from sequences of depth maps using convolutional neural networks (ConvNets). The proposed method first segments individual gestures from a depth sequence based on quantity of movement (QOM). For each segmented gesture, an Improved Depth Motion Map (IDMM), which converts the depth sequence into one image, is constructed and fed to a ConvNet for recognition. The IDMM effectively encodes both spatial and temporal information and allows fine-tuning with existing ConvNet models for classification without introducing millions of parameters to learn. The proposed method is evaluated on the Large-scale Continuous Gesture Recognition track of the ChaLearn Looking at People (LAP) challenge 2016. It achieved a Mean Jaccard Index of 0.2655 and ranked $3^{rd}$ in this challenge. |
Tasks | Gesture Recognition |
Published | 2016-08-22 |
URL | http://arxiv.org/abs/1608.06338v2 |
http://arxiv.org/pdf/1608.06338v2.pdf | |
PWC | https://paperswithcode.com/paper/large-scale-continuous-gesture-recognition |
Repo | |
Framework | |
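A rough sketch of collapsing a depth sequence into a single motion image, loosely in the spirit of the IDMM described above. The exact IDMM construction differs; this simply accumulates absolute frame differences over a hypothetical gesture clip:

```python
# Sketch: accumulate absolute frame differences of a depth clip into one image.
import numpy as np

depth_seq = np.random.rand(30, 120, 160)         # hypothetical gesture clip (T, H, W)
motion_map = np.abs(np.diff(depth_seq, axis=0)).sum(axis=0)

# Normalize to 8-bit so the single image can be fed to a pretrained ConvNet.
motion_map = (255 * (motion_map - motion_map.min()) / np.ptp(motion_map)).astype(np.uint8)
print(motion_map.shape, motion_map.dtype)
```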
BMOBench: Black-Box Multi-Objective Optimization Benchmarking Platform
Title | BMOBench: Black-Box Multi-Objective Optimization Benchmarking Platform |
Authors | Abdullah Al-Dujaili, S. Suresh |
Abstract | This document briefly describes the Black-Box Multi-Objective Optimization Benchmarking (BMOBench) platform. It presents the test problems, evaluation procedure, and experimental setup. To this end, the BMOBench is demonstrated by comparing recent multi-objective solvers from the literature, namely SMS-EMOA, DMS, and MO-SOO. |
Tasks | |
Published | 2016-05-23 |
URL | http://arxiv.org/abs/1605.07009v2 |
http://arxiv.org/pdf/1605.07009v2.pdf | |
PWC | https://paperswithcode.com/paper/bmobench-black-box-multi-objective |
Repo | |
Framework | |
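A small sketch of the basic primitive behind multi-objective benchmarking indicators such as those BMOBench reports: filtering a set of objective vectors down to its Pareto non-dominated front (minimization sense). This is illustrative, not the platform's actual evaluation code:

```python
# Sketch: Pareto non-dominated filtering of objective vectors (minimization).
import numpy as np

def nondominated(points):
    """Return the rows of `points` not dominated by any other row."""
    keep = [i for i, p in enumerate(points)
            if not any(np.all(q <= p) and np.any(q < p)
                       for j, q in enumerate(points) if j != i)]
    return points[keep]

front = nondominated(np.random.rand(50, 2))
print(len(front), "non-dominated points out of 50")
```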
Symbolic Music Data Version 1.0
Title | Symbolic Music Data Version 1.0 |
Authors | Christian Walder |
Abstract | In this document, we introduce a new dataset designed for training machine learning models of symbolic music data. Five datasets are provided, one of which is from a newly collected corpus of 20K MIDI files. We describe our preprocessing and cleaning pipeline, which includes the exclusion of a number of files based on scores from a previously developed probabilistic machine learning model. We also define training, testing and validation splits for the new dataset, based on a clustering scheme which we also describe. Some simple histograms are included. |
Tasks | |
Published | 2016-06-08 |
URL | http://arxiv.org/abs/1606.02542v1 |
http://arxiv.org/pdf/1606.02542v1.pdf | |
PWC | https://paperswithcode.com/paper/symbolic-music-data-version-10 |
Repo | |
Framework | |
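A minimal sketch of cluster-based splitting as motivated above: assign whole clusters to train/validation/test so near-duplicate pieces never straddle a split. The per-piece features, cluster count and split proportions are hypothetical:

```python
# Sketch: split by cluster so similar pieces always share a split.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

features = np.random.rand(100, 16)               # stand-in per-piece feature vectors
clusters = AgglomerativeClustering(n_clusters=20).fit_predict(features)

rng = np.random.default_rng(0)
order = rng.permutation(20)
split_of = {c: ("train" if i < 14 else "valid" if i < 17 else "test")
            for i, c in enumerate(order)}
splits = [split_of[c] for c in clusters]
print(splits[:10])
```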
Fast rates with high probability in exp-concave statistical learning
Title | Fast rates with high probability in exp-concave statistical learning |
Authors | Nishant A. Mehta |
Abstract | We present an algorithm for the statistical learning setting with a bounded exp-concave loss in $d$ dimensions that obtains excess risk $O(d \log(1/\delta)/n)$ with probability at least $1 - \delta$. The core technique is to boost the confidence of recent in-expectation $O(d/n)$ excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-concavity. We also show that with probability $1 - \delta$ the standard ERM method obtains excess risk $O(d (\log(n) + \log(1/\delta))/n)$. We further show that a regret bound for any online learner in this setting translates to a high probability excess risk bound for the corresponding online-to-batch conversion of the online learner. Lastly, we present two high probability bounds for the exp-concave model selection aggregation problem that are quantile-adaptive in a certain sense. The first bound comes from a purely exponential weights type algorithm; it obtains a nearly optimal rate and has no explicit dependence on the Lipschitz continuity of the loss. The second bound requires Lipschitz continuity but obtains the optimal rate. |
Tasks | Model Selection |
Published | 2016-05-04 |
URL | http://arxiv.org/abs/1605.01288v4 |
http://arxiv.org/pdf/1605.01288v4.pdf | |
PWC | https://paperswithcode.com/paper/fast-rates-with-high-probability-in-exp |
Repo | |
Framework | |
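For reference, the two high-probability ERM guarantees stated in the abstract, written out as display equations (a restatement only; $R$ denotes the population risk, $\hat{f}$ the learned predictor, $n$ the sample size and $d$ the dimension):

```latex
% Both statements hold with probability at least 1 - \delta.
\[
  R(\hat{f}) - \inf_{f \in \mathcal{F}} R(f)
    = O\!\left(\frac{d \log(1/\delta)}{n}\right)
  \qquad \text{(confidence-boosted ERM)},
\]
\[
  R(\hat{f}_{\mathrm{ERM}}) - \inf_{f \in \mathcal{F}} R(f)
    = O\!\left(\frac{d\,(\log n + \log(1/\delta))}{n}\right)
  \qquad \text{(standard ERM)}.
\]
```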
Viewpoint and Topic Modeling of Current Events
Title | Viewpoint and Topic Modeling of Current Events |
Authors | Kerry Zhang, Jussi Karlgren, Cheng Zhang, Jens Lagergren |
Abstract | There are multiple sides to every story, and while statistical topic models have been highly successful at topically summarizing the stories in corpora of text documents, they do not explicitly address the issue of learning the different sides, the viewpoints, expressed in the documents. In this paper, we show how these viewpoints can be learned completely unsupervised and represented in a human interpretable form. We use a novel approach of applying CorrLDA2 for this purpose, which learns topic-viewpoint relations that can be used to form groups of topics, where each group represents a viewpoint. A corpus of documents about the Israeli-Palestinian conflict is then used to demonstrate how a Palestinian and an Israeli viewpoint can be learned. By leveraging the magnitudes and signs of the feature weights of a linear SVM, we introduce a principled method to evaluate associations between topics and viewpoints. With this, we demonstrate, both quantitatively and qualitatively, that the learned topic groups are contextually coherent, and form consistently correct topic-viewpoint associations. |
Tasks | Topic Models |
Published | 2016-08-14 |
URL | http://arxiv.org/abs/1608.04089v1 |
http://arxiv.org/pdf/1608.04089v1.pdf | |
PWC | https://paperswithcode.com/paper/viewpoint-and-topic-modeling-of-current |
Repo | |
Framework | |
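A minimal sketch of reading topic-viewpoint associations off a linear SVM, as the entry above proposes: represent documents by topic proportions and use coefficient signs and magnitudes to associate topics with viewpoints. The Dirichlet-sampled mixtures and label rule stand in for CorrLDA2 output and real viewpoint annotations:

```python
# Sketch: topic-proportion features + linear SVM; coefficient signs indicate
# which viewpoint each topic leans toward.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
topic_props = rng.dirichlet(np.ones(10), size=200)   # stand-in CorrLDA2 mixtures
viewpoint = (topic_props[:, :2].sum(1) > topic_props[:, 2:4].sum(1)).astype(int)

svm = LinearSVC(C=1.0).fit(topic_props, viewpoint)
w = svm.coef_.ravel()
for t in np.argsort(-np.abs(w))[:4]:
    print(f"topic {t}: weight {w[t]:+.2f} -> viewpoint {int(w[t] > 0)}")
```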
Redundancy-free Verbalization of Individuals for Ontology Validation
Title | Redundancy-free Verbalization of Individuals for Ontology Validation |
Authors | E. V. Vinu, P Sreenivasa Kumar |
Abstract | We investigate the problem of verbalizing Web Ontology Language (OWL) axioms of domain ontologies in this paper. The existing approaches address the problem of fidelity of verbalized OWL texts to OWL semantics by exploring different ways of expressing the same OWL axiom in various linguistic forms. They also perform grouping and aggregation of the natural language (NL) sentences that are generated corresponding to each OWL statement into a comprehensible structure. However, no efforts have been made to try out a semantic reduction at the logical level to remove redundancies and repetitions, so that the reduced set of axioms can be used for generating more meaningful and human-understandable (what we call redundancy-free) text. Our experiments show that formal semantic reduction at the logical level is very helpful for generating redundancy-free descriptions of ontology entities. In this paper, we particularly focus on generating descriptions of individuals of SHIQ-based ontologies. The details of a case study are provided to support the usefulness of the redundancy-free NL descriptions of individuals in a knowledge validation application. |
Tasks | |
Published | 2016-07-24 |
URL | http://arxiv.org/abs/1607.07027v1 |
http://arxiv.org/pdf/1607.07027v1.pdf | |
PWC | https://paperswithcode.com/paper/redundancy-free-verbalization-of-individuals |
Repo | |
Framework | |
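A tiny illustration of logical-level redundancy reduction: drop class assertions about an individual that are entailed by more specific ones, so a verbalizer describes only the most informative facts. The class hierarchy is hypothetical; the paper handles full SHIQ ontologies:

```python
# Sketch: remove class assertions entailed by more specific asserted classes.
subclass_of = {"GraduateStudent": "Student", "Student": "Person"}  # C -> direct superclass

def ancestors(cls):
    out = set()
    while cls in subclass_of:
        cls = subclass_of[cls]
        out.add(cls)
    return out

assertions = {("alice", "GraduateStudent"), ("alice", "Student"), ("alice", "Person")}
asserted = {}
for ind, cls in assertions:
    asserted.setdefault(ind, set()).add(cls)

# Keep only the most specific assertions; the rest are redundant for verbalization.
reduced = {(ind, cls) for ind, cls in assertions
           if not any(cls in ancestors(other) for other in asserted[ind])}
print(reduced)   # {('alice', 'GraduateStudent')}
```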