July 27, 2019

3004 words 15 mins read

Paper Group ANR 532

XES Tensorflow - Process Prediction using the Tensorflow Deep-Learning Framework

Title XES Tensorflow - Process Prediction using the Tensorflow Deep-Learning Framework
Authors Joerg Evermann, Jana-Rebecca Rehse, Peter Fettke
Abstract Predicting the next activity of a running process is an important aspect of process management. Recently, artificial neural networks, so called deep-learning approaches, have been proposed to address this challenge. This demo paper describes a software application that applies the Tensorflow deep-learning framework to process prediction. The software application reads industry-standard XES files for training and presents the user with an easy-to-use graphical user interface for both training and prediction. The system provides several improvements over earlier work. This demo paper focuses on the software implementation and describes the architecture and user interface.
Tasks
Published 2017-05-03
URL http://arxiv.org/abs/1705.01507v1
PDF http://arxiv.org/pdf/1705.01507v1.pdf
PWC https://paperswithcode.com/paper/xes-tensorflow-process-prediction-using-the
Repo
Framework
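
The paper's application wraps XES parsing, training and prediction in a GUI. As a rough, hypothetical sketch of the underlying idea only (assuming activity sequences have already been extracted from an XES log, which is not shown), an LSTM next-activity predictor in tf.keras might look like this:

```python
# Hypothetical sketch of next-activity prediction with an LSTM in tf.keras.
# `traces` stands in for activity sequences parsed from an XES log.
import numpy as np
import tensorflow as tf

traces = [["register", "check", "approve", "pay"],
          ["register", "check", "reject"]]

# Map activity names to integer ids (0 is reserved for padding).
vocab = {a: i + 1 for i, a in enumerate(sorted({a for t in traces for a in t}))}
max_len = max(len(t) for t in traces) - 1

# Build (prefix, next activity) training pairs from each trace.
X, y = [], []
for t in traces:
    ids = [vocab[a] for a in t]
    for i in range(1, len(ids)):
        X.append(ids[:i])
        y.append(ids[i])
X = np.array([[0] * (max_len - len(p)) + p for p in X])  # left-pad with zeros
y = np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab) + 1, 16, mask_zero=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(len(vocab) + 1, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=5, verbose=0)
```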

Introduction to Nonnegative Matrix Factorization

Title Introduction to Nonnegative Matrix Factorization
Authors Nicolas Gillis
Abstract In this paper, we introduce and provide a short overview of nonnegative matrix factorization (NMF). Several aspects of NMF are discussed, namely, the application in hyperspectral imaging, geometry and uniqueness of NMF solutions, complexity, algorithms, and its link with extended formulations of polyhedra. In order to put NMF into perspective, the more general problem class of constrained low-rank matrix approximation problems is first briefly introduced.
Tasks
Published 2017-03-02
URL http://arxiv.org/abs/1703.00663v1
PDF http://arxiv.org/pdf/1703.00663v1.pdf
PWC https://paperswithcode.com/paper/introduction-to-nonnegative-matrix
Repo
Framework
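
As a small illustration of the factorization model V ≈ WH with W, H ≥ 0, here is a minimal numpy sketch using the classic multiplicative updates for the Frobenius norm. It is not taken from the paper, which surveys algorithms rather than prescribing one; practical solvers (e.g. sklearn.decomposition.NMF) add initialization strategies, stopping rules and regularization.

```python
# Minimal sketch of NMF with Lee-Seung multiplicative updates (Frobenius norm).
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-10):
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf(V, r=4)
print(np.linalg.norm(V - W @ H))  # reconstruction error shrinks over iterations
```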

Ontology based Scene Creation for the Development of Automated Vehicles

Title Ontology based Scene Creation for the Development of Automated Vehicles
Authors Gerrit Bagschik, Till Menzel, Markus Maurer
Abstract The introduction of automated vehicles without permanent human supervision demands a functional system description, including functional system boundaries and a comprehensive safety analysis. These inputs to the technical development can be identified and analyzed by a scenario-based approach. Furthermore, to establish an economical test and release process, a large number of scenarios must be identified to obtain meaningful test results. Experts are good at identifying scenarios that are difficult to handle or unlikely to happen, but they are unlikely to identify all possible scenarios based on the knowledge they have at hand. Expert knowledge modeled for computer-aided processing can help to provide a wide range of scenarios. This contribution reviews ontologies as knowledge-based systems in the field of automated vehicles and proposes generating traffic scenes in natural language as a basis for scenario creation.
Tasks
Published 2017-03-29
URL http://arxiv.org/abs/1704.01006v5
PDF http://arxiv.org/pdf/1704.01006v5.pdf
PWC https://paperswithcode.com/paper/ontology-based-scene-creation-for-the
Repo
Framework
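
As a toy, invented illustration of the general idea of deriving scenes from modeled knowledge (the paper's ontology of road elements, actors and traffic rules is far richer than the three lists below), one can enumerate combinations of scene elements and render them as natural-language descriptions:

```python
# Toy illustration of generating traffic-scene descriptions from structured
# knowledge. The categories and vocabulary below are invented for the example.
from itertools import product

knowledge_base = {
    "road":    ["a two-lane rural road", "a four-way urban intersection"],
    "actor":   ["an oncoming truck", "a pedestrian at a crossing", "a cyclist"],
    "weather": ["in dense fog", "in clear daylight", "during heavy rain"],
}

def generate_scenes(kb):
    """Enumerate all element combinations and render them as scene sentences."""
    for road, actor, weather in product(kb["road"], kb["actor"], kb["weather"]):
        yield f"The ego vehicle drives on {road} with {actor} {weather}."

for scene in generate_scenes(knowledge_base):
    print(scene)
```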

Poverty Prediction with Public Landsat 7 Satellite Imagery and Machine Learning

Title Poverty Prediction with Public Landsat 7 Satellite Imagery and Machine Learning
Authors Anthony Perez, Christopher Yeh, George Azzari, Marshall Burke, David Lobell, Stefano Ermon
Abstract Obtaining detailed and reliable data about local economic livelihoods in developing countries is expensive, and data are consequently scarce. Previous work has shown that it is possible to measure local-level economic livelihoods using high-resolution satellite imagery. However, such imagery is relatively expensive to acquire, often not updated frequently, and is mainly available for recent years. We train CNN models on free and publicly available multispectral daytime satellite images of the African continent from the Landsat 7 satellite, which has collected imagery with global coverage for almost two decades. We show that despite these images’ lower resolution, we can achieve accuracies that exceed previous benchmarks.
Tasks
Published 2017-11-10
URL http://arxiv.org/abs/1711.03654v1
PDF http://arxiv.org/pdf/1711.03654v1.pdf
PWC https://paperswithcode.com/paper/poverty-prediction-with-public-landsat-7
Repo
Framework
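
A hedged sketch of the kind of model involved, with made-up patch and band sizes (64x64 patches, 6 Landsat 7 reflectance bands) and random stand-in data; the paper's actual architectures and training targets differ:

```python
# Small CNN regressing a wealth/poverty index from multispectral image patches.
import numpy as np
import tensorflow as tf

n_bands, patch = 6, 64
model = tf.keras.Sequential([
    tf.keras.Input(shape=(patch, patch, n_bands)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),            # predicted asset/consumption index
])
model.compile(optimizer="adam", loss="mse")

# Random stand-in data; real inputs would be Landsat 7 patches with survey labels.
X = np.random.rand(8, patch, patch, n_bands).astype("float32")
y = np.random.rand(8).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
```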

Applying advanced machine learning models to classify electro-physiological activity of human brain for use in biometric identification

Title Applying advanced machine learning models to classify electro-physiological activity of human brain for use in biometric identification
Authors Iaroslav Omelianenko
Abstract In this article we present the results of our research related to the study of correlations between specific visual stimulation and the elicited brain’s electro-physiological response collected by EEG sensors from a group of participants. We will look at how the various characteristics of visual stimulation affect the measured electro-physiological response of the brain and describe the optimal parameters found that elicit a steady-state visually evoked potential (SSVEP) in certain parts of the cerebral cortex where it can be reliably perceived by the electrode of the EEG device. After that, we continue with a description of the advanced machine learning pipeline model that can perform confident classification of the collected EEG data in order to (a) reliably distinguish signal from noise (about 85% validation score) and (b) reliably distinguish between EEG records collected from different human participants (about 80% validation score). Finally, we demonstrate that the proposed method works reliably even with an inexpensive (less than $100) consumer-grade EEG sensing device and with participants who do not have previous experience with EEG technology (EEG illiterate). All this in combination opens up broad prospects for the development of new types of consumer devices, e.g., based on virtual reality helmets or augmented reality glasses, where an EEG sensor can be easily integrated. The proposed method can be used to improve the online user experience by providing, e.g., password-less user identification for VR / AR applications. It can also find a more advanced application in intensive care units where collected EEG data can be used to classify the level of conscious awareness of patients during anesthesia or to automatically detect hardware failures by classifying the input signal as noise.
Tasks EEG
Published 2017-08-03
URL http://arxiv.org/abs/1708.01167v1
PDF http://arxiv.org/pdf/1708.01167v1.pdf
PWC https://paperswithcode.com/paper/applying-advanced-machine-learning-models-to
Repo
Framework
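
As a hedged illustration of such a pipeline (the band-power features and gradient-boosted classifier below are assumptions for this sketch, not the paper's exact feature set or model):

```python
# Illustrative pipeline for classifying EEG epochs (signal vs. noise, or by
# participant) from simple spectral features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def band_power(epochs, fs=128, band=(6.0, 30.0)):
    """Mean spectral power per channel in a frequency band (SSVEP range)."""
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectra[..., mask].mean(axis=-1)          # (n_epochs, n_channels)

# Random stand-in data: 60 epochs, 4 channels, 2 seconds at 128 Hz.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((60, 4, 256))
labels = rng.integers(0, 2, 60)                      # e.g. signal vs. noise

X = band_power(epochs)
print(cross_val_score(GradientBoostingClassifier(), X, labels, cv=5).mean())
```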

CommAI: Evaluating the first steps towards a useful general AI

Title CommAI: Evaluating the first steps towards a useful general AI
Authors Marco Baroni, Armand Joulin, Allan Jabri, Germàn Kruszewski, Angeliki Lazaridou, Klemen Simonic, Tomas Mikolov
Abstract With machine learning successfully applied to new daunting problems almost every day, general AI starts looking like an attainable goal. However, most current research focuses instead on important but narrow applications, such as image classification or machine translation. We believe this to be largely due to the lack of objective ways to measure progress towards broad machine intelligence. In order to fill this gap, we propose here a set of concrete desiderata for general AI, together with a platform to test machines on how well they satisfy such desiderata, while keeping all further complexities to a minimum.
Tasks Image Classification, Machine Translation
Published 2017-01-31
URL http://arxiv.org/abs/1701.08954v2
PDF http://arxiv.org/pdf/1701.08954v2.pdf
PWC https://paperswithcode.com/paper/commai-evaluating-the-first-steps-towards-a
Repo
Framework

Incorporating Intra-Class Variance to Fine-Grained Visual Recognition

Title Incorporating Intra-Class Variance to Fine-Grained Visual Recognition
Authors Yan Bai, Feng Gao, Yihang Lou, Shiqi Wang, Tiejun Huang, Ling-Yu Duan
Abstract Fine-grained visual recognition aims to capture discriminative characteristics amongst visually similar categories. The state-of-the-art research work has significantly improved the fine-grained recognition performance by deep metric learning using triplet network. However, the impact of intra-category variance on the performance of recognition and robust feature representation has not been well studied. In this paper, we propose to leverage intra-class variance in metric learning of triplet network to improve the performance of fine-grained recognition. Through partitioning training images within each category into a few groups, we form the triplet samples across different categories as well as different groups, which is called Group Sensitive TRiplet Sampling (GS-TRS). Accordingly, the triplet loss function is strengthened by incorporating intra-class variance with GS-TRS, which may contribute to the optimization objective of triplet network. Extensive experiments over benchmark datasets CompCar and VehicleID show that the proposed GS-TRS has significantly outperformed state-of-the-art approaches in both classification and retrieval tasks.
Tasks Fine-Grained Visual Recognition, Metric Learning
Published 2017-03-01
URL http://arxiv.org/abs/1703.00196v1
PDF http://arxiv.org/pdf/1703.00196v1.pdf
PWC https://paperswithcode.com/paper/incorporating-intra-class-variance-to-fine
Repo
Framework
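
A loose sketch of the two ingredients, a margin-based triplet loss and group-aware triplet sampling, on randomly generated features; the paper's grouping procedure and exact loss weighting are not reproduced here:

```python
# Sketch of a margin-based triplet loss with group-aware sampling, loosely
# following the GS-TRS idea of forming triplets across categories and across
# intra-category groups.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)

def sample_triplet(features, categories, groups, rng):
    """Anchor/positive share a category but differ in group; negative differs in category."""
    a = rng.integers(len(features))
    pos_pool = [i for i in range(len(features))
                if categories[i] == categories[a] and groups[i] != groups[a]]
    neg_pool = [i for i in range(len(features)) if categories[i] != categories[a]]
    p, n = rng.choice(pos_pool), rng.choice(neg_pool)
    return features[a], features[p], features[n]

rng = np.random.default_rng(0)
features = rng.standard_normal((100, 64))
categories = rng.integers(0, 5, 100)     # e.g. vehicle models
groups = rng.integers(0, 3, 100)         # intra-category groups (e.g. viewpoints)
a, p, n = sample_triplet(features, categories, groups, rng)
print(triplet_loss(a, p, n))
```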

Household poverty classification in data-scarce environments: a machine learning approach

Title Household poverty classification in data-scarce environments: a machine learning approach
Authors Varun Kshirsagar, Jerzy Wieczorek, Sharada Ramanathan, Rachel Wells
Abstract We describe a method to identify poor households in data-scarce countries by leveraging information contained in nationally representative household surveys. It employs standard statistical learning techniques (cross-validation and parameter regularization) which together reduce the extent to which the model is over-fitted to match the idiosyncrasies of observed survey data. The automated framework satisfies three important constraints of this development setting: i) The prediction model uses at most ten questions, which limits the costs of data collection; ii) No computation beyond simple arithmetic is needed to calculate the probability that a given household is poor, immediately after data on the ten indicators is collected; and iii) One specification of the model (i.e. one scorecard) is used to predict poverty throughout a country that may be characterized by significant sub-national differences. Using survey data from Zambia, the model’s out-of-sample predictions distinguish poor households from non-poor households using information contained in ten questions.
Tasks
Published 2017-11-18
URL http://arxiv.org/abs/1711.06813v1
PDF http://arxiv.org/pdf/1711.06813v1.pdf
PWC https://paperswithcode.com/paper/household-poverty-classification-in-data
Repo
Framework
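
A rough sketch under stated assumptions: select at most ten questions and fit a cross-validated, L1-regularized logistic regression on synthetic survey answers. The paper's actual question-selection procedure, survey weights and scorecard construction are more involved.

```python
# Regularized, cross-validated poverty classifier limited to ten survey questions.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 40)).astype(float)   # 40 yes/no survey answers
y = (X[:, :3].sum(axis=1) + rng.normal(0, 0.5, 500) > 1.5).astype(int)  # poor / non-poor

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),             # keep at most ten questions
    LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="saga", max_iter=5000),
)
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```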

Polynomial Time and Sample Complexity for Non-Gaussian Component Analysis: Spectral Methods

Title Polynomial Time and Sample Complexity for Non-Gaussian Component Analysis: Spectral Methods
Authors Yan Shuo Tan, Roman Vershynin
Abstract The problem of Non-Gaussian Component Analysis (NGCA) is about finding a maximal low-dimensional subspace $E$ in $\mathbb{R}^n$ so that data points projected onto $E$ follow a non-gaussian distribution. Although this is an appropriate model for some real world data analysis problems, there has been little progress on this problem over the last decade. In this paper, we attempt to address this state of affairs in two ways. First, we give a new characterization of standard gaussian distributions in high dimensions, which leads to effective tests for non-gaussianness. Second, we propose a simple algorithm, \emph{Reweighted PCA}, as a method for solving the NGCA problem. We prove that for a general unknown non-gaussian distribution, this algorithm recovers at least one direction in $E$, with sample and time complexity depending polynomially on the dimension of the ambient space. We conjecture that the algorithm actually recovers the entire $E$.
Tasks
Published 2017-04-04
URL http://arxiv.org/abs/1704.01041v1
PDF http://arxiv.org/pdf/1704.01041v1.pdf
PWC https://paperswithcode.com/paper/polynomial-time-and-sample-complexity-for-non
Repo
Framework
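
A loose numpy illustration of the reweighting idea, not the paper's exact estimator or guarantees: weight each sample by a Gaussian factor, form the weighted second-moment matrix, and look for an eigenvalue that deviates from the value an isotropic Gaussian would give.

```python
# Rough sketch of Gaussian reweighting followed by an eigendecomposition.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20000, 6
X = rng.standard_normal((n, d))
X[:, 0] = rng.uniform(-np.sqrt(3), np.sqrt(3), n)   # non-gaussian direction (unit variance)

alpha = 1.0
w = np.exp(-alpha * np.sum(X ** 2, axis=1) / 2)     # Gaussian reweighting
M = (X * w[:, None]).T @ X / w.sum()                # weighted second-moment matrix

eigvals, eigvecs = np.linalg.eigh(M)
# For a standard Gaussian every eigenvalue concentrates around 1/(1+alpha);
# the outlying eigenvalue flags the (approximate) non-gaussian direction.
print(eigvals)
print(eigvecs[:, np.argmax(np.abs(eigvals - 1 / (1 + alpha)))])
```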

Deep Reinforcement Learning framework for Autonomous Driving

Title Deep Reinforcement Learning framework for Autonomous Driving
Authors Ahmad El Sallab, Mohammed Abdou, Etienne Perot, Senthil Yogamani
Abstract Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning of Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment including other vehicles, pedestrians and roadworks. As it is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates the recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open source 3D car racing simulator called TORCS. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction of other vehicles.
Tasks Atari Games, Autonomous Driving, Car Racing
Published 2017-04-08
URL http://arxiv.org/abs/1704.02532v1
PDF http://arxiv.org/pdf/1704.02532v1.pdf
PWC https://paperswithcode.com/paper/deep-reinforcement-learning-framework-for
Repo
Framework
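
As a minimal, illustrative sketch of one piece of such a framework, here is a recurrent Q-network that integrates a short history of observations before estimating action values. The sizes are invented, and the TORCS interface, attention module and training loop are all omitted; only the network shape is shown.

```python
# Recurrent Q-network: an LSTM over past observations, one Q-value per action.
import tensorflow as tf

obs_dim, n_actions, history = 29, 9, 10   # illustrative sizes, not from the paper

q_network = tf.keras.Sequential([
    tf.keras.Input(shape=(history, obs_dim)),    # sequence of past observations
    tf.keras.layers.LSTM(64),                    # integrates partial observations
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_actions),            # one Q-value per discrete action
])
q_network.compile(optimizer="adam", loss="mse")  # fitted against TD targets
```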

Stage 4 validation of the Satellite Image Automatic Mapper lightweight computer program for Earth observation Level 2 product generation, Part 2 Validation

Title Stage 4 validation of the Satellite Image Automatic Mapper lightweight computer program for Earth observation Level 2 product generation, Part 2 Validation
Authors Andrea Baraldi, Michael Laurence Humber, Dirk Tiede, Stefan Lang
Abstract The European Space Agency (ESA) defines an Earth Observation (EO) Level 2 product as a multispectral (MS) image corrected for geometric, atmospheric, adjacency and topographic effects, stacked with its scene classification map (SCM) whose legend includes quality layers such as cloud and cloud-shadow. No ESA EO Level 2 product has ever been systematically generated at the ground segment. To contribute toward filling an information gap from EO big sensory data to the ESA EO Level 2 product, a Stage 4 validation (Val) of an off-the-shelf Satellite Image Automatic Mapper (SIAM) lightweight computer program for prior-knowledge-based MS color naming was conducted by independent means. A time-series of annual Web Enabled Landsat Data (WELD) image composites of the conterminous U.S. (CONUS) was selected as input dataset. The annual SIAM WELD maps of the CONUS were validated in comparison with the U.S. National Land Cover Data (NLCD) 2006 map. These test and reference maps share the same spatial resolution and spatial extent, but their map legends are not the same and must be harmonized. For the sake of readability, this paper is split into two parts. The previous Part 1 Theory provided the multidisciplinary background of a priori color naming. The present Part 2 Validation presents and discusses Stage 4 Val results collected from the test SIAM WELD map time series and the reference NLCD map by an original protocol for wall-to-wall thematic map quality assessment without sampling, where the test and reference map legends can differ, in agreement with Part 1. Conclusions are that the SIAM-WELD maps instantiate a Level 2 SCM product whose legend is the FAO Land Cover Classification System (LCCS) taxonomy at the Dichotomous Phase (DP) Level 1 vegetation/nonvegetation, Level 2 terrestrial/aquatic or superior LCCS level.
Tasks Scene Classification, Time Series
Published 2017-01-08
URL http://arxiv.org/abs/1701.01932v1
PDF http://arxiv.org/pdf/1701.01932v1.pdf
PWC https://paperswithcode.com/paper/stage-4-validation-of-the-satellite-image-1
Repo
Framework
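
The legend harmonization is the substantive contribution of the paper; the mechanical part of a wall-to-wall comparison, a per-pixel confusion matrix between two maps that already share a harmonized legend, can be sketched as follows (with random stand-in maps):

```python
# Per-pixel (no sampling) agreement between a test map and a reference map.
import numpy as np

def wall_to_wall_agreement(test_map, reference_map, n_classes):
    """Confusion matrix and overall agreement computed over every pixel."""
    confusion = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(confusion, (reference_map.ravel(), test_map.ravel()), 1)
    overall = np.trace(confusion) / confusion.sum()
    return confusion, overall

rng = np.random.default_rng(0)
reference = rng.integers(0, 3, (500, 500))     # e.g. vegetation / terrestrial / aquatic
test = np.where(rng.random((500, 500)) < 0.9, reference, rng.integers(0, 3, (500, 500)))
confusion, overall = wall_to_wall_agreement(test, reference, n_classes=3)
print(confusion, overall)
```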

Recurrent Environment Simulators

Title Recurrent Environment Simulators
Authors Silvia Chiappa, Sébastien Racaniere, Daan Wierstra, Shakir Mohamed
Abstract Models that can simulate how environments change in response to actions can be used by agents to plan and act efficiently. We improve on previous environment simulators from high-dimensional pixel observations by introducing recurrent neural networks that are able to make temporally and spatially coherent predictions for hundreds of time-steps into the future. We present an in-depth analysis of the factors affecting performance, providing the most extensive attempt to advance the understanding of the properties of these models. We address the issue of computational inefficiency with a model that does not need to generate a high-dimensional image at each time-step. We show that our approach can be used to improve exploration and is adaptable to many diverse environments, namely 10 Atari games, a 3D car racing environment, and complex 3D mazes.
Tasks Atari Games, Car Racing
Published 2017-04-07
URL http://arxiv.org/abs/1704.02254v2
PDF http://arxiv.org/pdf/1704.02254v2.pdf
PWC https://paperswithcode.com/paper/recurrent-environment-simulators
Repo
Framework
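
A hedged sketch of the general shape of such a model: an LSTM consumes the current state embedding concatenated with the action and predicts the next embedding, so long rollouts can stay in a low-dimensional state space without rendering a frame at every step. Sizes and loss are illustrative, not the paper's.

```python
# Action-conditioned recurrent state-prediction model (shape only).
import tensorflow as tf

state_dim, action_dim, horizon = 128, 18, 50

inputs = tf.keras.layers.Input((horizon, state_dim + action_dim))
hidden = tf.keras.layers.LSTM(256, return_sequences=True)(inputs)
next_states = tf.keras.layers.Dense(state_dim)(hidden)   # predicted next embeddings
simulator = tf.keras.Model(inputs, next_states)
simulator.compile(optimizer="adam", loss="mse")
```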

SIM-CE: An Advanced Simulink Platform for Studying the Brain of Caenorhabditis elegans

Title SIM-CE: An Advanced Simulink Platform for Studying the Brain of Caenorhabditis elegans
Authors Ramin M. Hasani, Victoria Beneder, Magdalena Fuchs, David Lung, Radu Grosu
Abstract We introduce SIM-CE, an advanced, user-friendly modeling and simulation environment in Simulink for performing multi-scale behavioral analysis of the nervous system of Caenorhabditis elegans (C. elegans). SIM-CE contains an implementation of the mathematical models of C. elegans’s neurons and synapses, in Simulink, which can be easily extended and particularized by the user. The Simulink model is able to capture both complex dynamics of ion channels and additional biophysical detail such as intracellular calcium concentration. We demonstrate the performance of SIM-CE by carrying out neuronal, synaptic and neural-circuit-level behavioral simulations. Such an environment enables the user to capture unknown properties of the neural circuits, test hypotheses and determine the origin of many behavioral plasticities exhibited by the worm.
Tasks
Published 2017-03-18
URL http://arxiv.org/abs/1703.06270v3
PDF http://arxiv.org/pdf/1703.06270v3.pdf
PWC https://paperswithcode.com/paper/sim-ce-an-advanced-simulink-platform-for
Repo
Framework

Align and Copy: UZH at SIGMORPHON 2017 Shared Task for Morphological Reinflection

Title Align and Copy: UZH at SIGMORPHON 2017 Shared Task for Morphological Reinflection
Authors Peter Makarov, Tatiana Ruzsics, Simon Clematide
Abstract This paper presents the submissions by the University of Zurich to the SIGMORPHON 2017 shared task on morphological reinflection. The task is to predict the inflected form given a lemma and a set of morpho-syntactic features. We focus on neural network approaches that can tackle the task in a limited-resource setting. As the transduction of the lemma into the inflected form is dominated by copying over lemma characters, we propose two recurrent neural network architectures with hard monotonic attention that are strong at copying and, yet, substantially different in how they achieve this. The first approach is an encoder-decoder model with a copy mechanism. The second approach is a neural state-transition system over a set of explicit edit actions, including a designated COPY action. We experiment with character alignment and find that naive, greedy alignment consistently produces strong results for some languages. Our best system combination is the overall winner of the SIGMORPHON 2017 Shared Task 1 without external resources. At a setting with 100 training samples, both our approaches, as ensembles of models, outperform the next best competitor.
Tasks
Published 2017-07-05
URL http://arxiv.org/abs/1707.01355v2
PDF http://arxiv.org/pdf/1707.01355v2.pdf
PWC https://paperswithcode.com/paper/align-and-copy-uzh-at-sigmorphon-2017-shared
Repo
Framework
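
As a toy illustration of the kind of edit-action supervision such a state-transition system learns from (the paper's character aligners are more careful than the generic matcher used below):

```python
# Derive a COPY / DELETE / INSERT action sequence that rewrites a lemma into
# its inflected form, using a generic sequence aligner for illustration.
from difflib import SequenceMatcher

def edit_actions(lemma, inflected):
    actions = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, lemma, inflected).get_opcodes():
        if op == "equal":
            actions += ["COPY"] * (i2 - i1)
        else:                                   # 'replace', 'delete' or 'insert'
            actions += ["DELETE"] * (i2 - i1)
            actions += [f"INSERT({c})" for c in inflected[j1:j2]]
    return actions

# Prints a mix of COPY, DELETE and INSERT(...) actions for this German example.
print(edit_actions("fliegen", "geflogen"))
```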

Click Through Rate Prediction for Contextual Advertisment Using Linear Regression

Title Click Through Rate Prediction for Contextual Advertisment Using Linear Regression
Authors Muhammad Junaid Effendi, Syed Abbas Ali
Abstract This research presents an innovative way of solving the advertisement prediction problem, which has been treated as a learning problem over the past several years. Online advertising is a multi-billion-dollar industry that is growing rapidly every year. The goal of this research is to enhance the click through rate of contextual advertisements using Linear Regression. To address this problem, a new technique is proposed in this paper to predict the CTR, which increases the overall revenue of the system by serving advertisements more suitable to viewers, with the help of feature extraction and by displaying advertisements based on the context of the publishers. The important steps include data collection, feature extraction, CTR prediction and advertisement serving. The statistical results obtained from this technique show an efficient outcome, with the LR model fitting the data closely when optimized feature selection is used.
Tasks Click-Through Rate Prediction, Feature Selection
Published 2017-01-30
URL http://arxiv.org/abs/1701.08744v1
PDF http://arxiv.org/pdf/1701.08744v1.pdf
PWC https://paperswithcode.com/paper/click-through-rate-prediction-for-contextual
Repo
Framework
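
A minimal sketch in that spirit, fitting ordinary least squares to invented contextual features and a synthetic click-through rate; the feature set below is made up for the example:

```python
# CTR prediction with plain linear regression on simple contextual features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 24, n),          # hour of day the ad is shown
    rng.integers(0, 2, n),           # topical match between ad and page (0/1)
    rng.random(n),                   # ad position score
])
ctr = 0.01 + 0.03 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(0, 0.005, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, ctr, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print(mean_absolute_error(y_te, model.predict(X_te)))
```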