July 28, 2019

3198 words 16 mins read

Paper Group ANR 331

Automatic Curation of Golf Highlights using Multimodal Excitement Features

Title Automatic Curation of Golf Highlights using Multimodal Excitement Features
Authors Michele Merler, Dhiraj Joshi, Quoc-Bao Nguyen, Stephen Hammer, John Kent, John R. Smith, Rogerio S. Feris
Abstract The production of sports highlight packages summarizing a game’s most exciting moments is an essential task for broadcast media. Yet, it requires labor-intensive video editing. We propose a novel approach for auto-curating sports highlights, and use it to create a real-world system for the editorial aid of golf highlight reels. Our method fuses information from the players’ reactions (action recognition such as high-fives and fist pumps), spectators (crowd cheering), and commentator (tone of the voice and word analysis) to determine the most interesting moments of a game. We accurately identify the start and end frames of key shot highlights with additional metadata, such as the player’s name and the hole number, allowing personalized content summarization and retrieval. In addition, we introduce new techniques for learning our classifiers with reduced manual training data annotation by exploiting the correlation of different modalities. Our work has been demonstrated at a major golf tournament, successfully extracting highlights from live video streams over four consecutive days.
Tasks Temporal Action Localization
Published 2017-07-22
URL http://arxiv.org/abs/1707.07075v1
PDF http://arxiv.org/pdf/1707.07075v1.pdf
PWC https://paperswithcode.com/paper/automatic-curation-of-golf-highlights-using
Repo
Framework
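
The late-fusion step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' system: the segment boundaries, per-modality scores, and fusion weights below are all hypothetical.

```python
# Hypothetical sketch of late fusion of per-segment excitement scores
# (crowd cheering, player reaction, commentator analysis), each in [0, 1].
def fuse_excitement(segments, weights=(0.4, 0.3, 0.3)):
    """segments: (start, end, cheer, reaction, commentary) tuples."""
    w_cheer, w_react, w_comm = weights
    scored = []
    for start, end, cheer, react, comm in segments:
        score = w_cheer * cheer + w_react * react + w_comm * comm
        scored.append((score, start, end))
    return sorted(scored, reverse=True)  # most exciting first

def top_highlights(segments, k=2):
    """Keep the k highest-scoring segments as the highlight reel."""
    return [(s, e) for _, s, e in fuse_excitement(segments)[:k]]

segments = [
    (0, 10, 0.9, 0.8, 0.7),   # big putt: loud cheer, fist pump
    (10, 20, 0.1, 0.0, 0.2),  # walk between holes
    (20, 30, 0.6, 0.9, 0.5),  # chip-in reaction
]
print(top_highlights(segments, k=2))  # → [(0, 10), (20, 30)]
```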

A study of existing Ontologies in the IoT-domain

Title A study of existing Ontologies in the IoT-domain
Authors Garvita Bajaj, Rachit Agarwal, Pushpendra Singh, Nikolaos Georgantas, Valerie Issarny
Abstract Several domains have adopted the increasing use of IoT-based devices to collect sensor data for generating abstractions and perceptions of the real world. This sensor data is multi-modal and heterogeneous in nature. This heterogeneity induces interoperability issues while developing cross-domain applications, thereby restricting the possibility of reusing sensor data to develop new applications. As a solution to this, semantic approaches have been proposed in the literature to tackle problems related to interoperability of sensor data. Several ontologies have been proposed to handle different aspects of IoT-based sensor data collection, ranging from discovering the IoT sensors for data collection to applying reasoning on the collected sensor data for drawing inferences. In this paper, we survey these existing semantic ontologies to provide an overview of the recent developments in this field. We highlight the fundamental ontological concepts (e.g., sensor-capabilities and context-awareness) required for an IoT-based application, and survey the existing ontologies which include these concepts. Based on our study, we also identify the shortcomings of currently available ontologies, which serves as a stepping stone to state the need for a common unified ontology for the IoT domain.
Tasks
Published 2017-07-01
URL http://arxiv.org/abs/1707.00112v1
PDF http://arxiv.org/pdf/1707.00112v1.pdf
PWC https://paperswithcode.com/paper/a-study-of-existing-ontologies-in-the-iot
Repo
Framework

A Bimodal Network Approach to Model Topic Dynamics

Title A Bimodal Network Approach to Model Topic Dynamics
Authors Luigi Di Caro, Marco Guerzoni, Massimiliano Nuccio, Giovanni Siragusa
Abstract This paper presents an intertemporal bimodal network to analyze the evolution of the semantic content of a scientific field within the framework of topic modeling, namely using Latent Dirichlet Allocation (LDA). The main contribution is the conceptualization of topic dynamics and their formalization and codification into an algorithm. To benchmark the effectiveness of this approach, we propose three indexes that track the transformation of topics over time, their rates of birth and death, and the novelty of their content. Applying LDA, we test the algorithm both on a controlled experiment and on a corpus of several thousand scientific papers spanning more than 100 years of the history of economic thought.
Tasks
Published 2017-09-27
URL http://arxiv.org/abs/1709.09373v1
PDF http://arxiv.org/pdf/1709.09373v1.pdf
PWC https://paperswithcode.com/paper/a-bimodal-network-approach-to-model-topic
Repo
Framework
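
One way to make the "birth and death of topics" concrete is to compare topic-word distributions across time slices. The thresholded cosine matching below is an illustrative stand-in for the paper's indexes; the threshold and the toy topic vectors are assumptions, not taken from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two topic-word probability vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def match_topics(old_topics, new_topics, threshold=0.5):
    """A new topic with no old match above threshold is 'born';
    one with a close predecessor 'survived'."""
    born, survived = [], []
    for j, new in enumerate(new_topics):
        sims = [cosine(old, new) for old in old_topics]
        best = max(sims) if sims else 0.0
        (survived if best >= threshold else born).append(j)
    return born, survived
```

For example, a topic concentrated on the same words in both periods survives, while one with a disjoint vocabulary counts as newly born.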

Traffic Light Control Using Deep Policy-Gradient and Value-Function Based Reinforcement Learning

Title Traffic Light Control Using Deep Policy-Gradient and Value-Function Based Reinforcement Learning
Authors Seyed Sajad Mousavi, Michael Schukat, Enda Howley
Abstract Recent advances in combining deep neural network architectures with reinforcement learning techniques have shown promising results in solving complex control problems with high-dimensional state and action spaces. Inspired by these successes, we build two kinds of reinforcement learning agents, deep policy-gradient and value-function based, which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The policy-gradient based agent maps its observation directly to the control signal, whereas the value-function based agent first estimates values for all legal control signals and then selects the action with the highest value. Our methods show promising results in a traffic network simulated in the SUMO traffic simulator, without suffering from instability issues during training.
Tasks
Published 2017-04-28
URL http://arxiv.org/abs/1704.08883v2
PDF http://arxiv.org/pdf/1704.08883v2.pdf
PWC https://paperswithcode.com/paper/traffic-light-control-using-deep-policy
Repo
Framework
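
The value-function agent's action rule ("estimate values for all legal control signals, then pick the highest") can be illustrated with a tabular stand-in for the deep network. The state encoding, action names, and learning-rate settings here are hypothetical, not the paper's.

```python
from collections import defaultdict

class ValueAgent:
    """Tabular Q-learning stand-in for the paper's deep value-function agent:
    estimate a value for every legal signal phase, then act greedily."""
    def __init__(self, actions, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def act(self, state):
        # select the legal control signal with the highest estimated value
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # one-step temporal-difference update toward reward + discounted value
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

After observing a reward for extending the north-south green phase, the agent's greedy choice shifts toward that signal.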

Achieving non-discrimination in prediction

Title Achieving non-discrimination in prediction
Authors Lu Zhang, Yongkai Wu, Xintao Wu
Abstract Discrimination-aware classification is receiving increasing attention in data science. Pre-processing methods for constructing a discrimination-free classifier first remove discrimination from the training data and then learn the classifier from the cleaned data. However, they lack a theoretical guarantee against potential discrimination when the classifier is deployed for prediction. In this paper, we fill this gap by mathematically bounding the probability that the discrimination in prediction falls within a given interval, in terms of the training data and the classifier. We adopt a causal model of the data generation mechanism and formally define discrimination in the population, in a dataset, and in prediction. We obtain two important theoretical results: (1) discrimination in prediction can still exist even if the discrimination in the training data is completely removed; and (2) not all pre-processing methods can ensure non-discrimination in prediction, even if they achieve non-discrimination in the modified training data. Based on these results, we develop a two-phase framework for constructing a discrimination-free classifier with a theoretical guarantee. Our experiments demonstrate the theoretical results and show the effectiveness of the two-phase framework.
Tasks
Published 2017-02-28
URL http://arxiv.org/abs/1703.00060v2
PDF http://arxiv.org/pdf/1703.00060v2.pdf
PWC https://paperswithcode.com/paper/achieving-non-discrimination-in-prediction
Repo
Framework
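
The paper defines discrimination causally; as a simpler, purely illustrative proxy, one can measure the gap in positive prediction rates between two groups (the demographic parity gap), which the causal definitions refine. The group labels below are made up.

```python
def parity_gap(records):
    """records: (group, predicted_label) pairs with labels in {0, 1}.
    Returns |P(pred=1 | group=a) - P(pred=1 | group=b)| for the two groups."""
    counts = {}
    for group, pred in records:
        total, pos = counts.get(group, (0, 0))
        counts[group] = (total + 1, pos + (1 if pred == 1 else 0))
    rates = [pos / total for total, pos in counts.values()]
    assert len(rates) == 2, "expects exactly two groups"
    return abs(rates[0] - rates[1])
```

A gap of zero would satisfy demographic parity; the paper's point is that a classifier trained on cleaned data can still exhibit a nonzero gap at prediction time.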

Comparison of Modified Kneser-Ney and Witten-Bell Smoothing Techniques in Statistical Language Model of Bahasa Indonesia

Title Comparison of Modified Kneser-Ney and Witten-Bell Smoothing Techniques in Statistical Language Model of Bahasa Indonesia
Authors Ismail Rusli
Abstract Smoothing is one technique for overcoming data sparsity in statistical language models. Although its mathematical definition has no explicit dependency on a specific natural language, the differing natures of natural languages result in different effects of smoothing techniques, as shown for Russian by Whittaker (1998). In this paper, we compare the Modified Kneser-Ney and Witten-Bell smoothing techniques in a statistical language model of Bahasa Indonesia. We used training sets totaling 22M words extracted from the Indonesian version of Wikipedia; as far as we know, this is the largest training set used to build a statistical language model for Bahasa Indonesia. Experiments with 3-gram, 5-gram, and 7-gram models showed that Modified Kneser-Ney consistently outperforms Witten-Bell smoothing in terms of perplexity. Interestingly, the 5-gram model with Modified Kneser-Ney smoothing outperforms the 7-gram model, while Witten-Bell smoothing improves consistently as the n-gram order increases.
Tasks Language Modelling
Published 2017-06-23
URL http://arxiv.org/abs/1706.07786v1
PDF http://arxiv.org/pdf/1706.07786v1.pdf
PWC https://paperswithcode.com/paper/comparison-of-modified-kneser-ney-and-witten
Repo
Framework
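
Witten-Bell interpolation has a compact closed form: the bigram count is interpolated with a unigram backoff, weighted by the number of distinct word types that follow the history. The bigram-only sketch below is illustrative; the paper works with 3- to 7-gram models trained on a 22M-word corpus using standard toolkits.

```python
from collections import Counter, defaultdict

def witten_bell_bigram(tokens):
    """Minimal Witten-Bell interpolated bigram estimator:
    P(w | h) = (c(h, w) + T(h) * P_uni(w)) / (c(h) + T(h)),
    where T(h) is the number of distinct types observed after h."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    followers = defaultdict(set)
    for h, w in bigrams:
        followers[h].add(w)
    n = len(tokens)

    def prob(w, h):
        p_uni = unigrams[w] / n                    # unigram backoff estimate
        t = len(followers[h])                      # distinct continuations of h
        c_h = sum(c for (a, _), c in bigrams.items() if a == h)
        if c_h + t == 0:                           # history never observed
            return p_uni
        return (bigrams[(h, w)] + t * p_uni) / (c_h + t)

    return prob
```

On the toy corpus "a b a b a c", the estimates after history "a" sum to 1 over the vocabulary, with unseen continuations receiving mass proportional to their unigram probability.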

Parameter Selection Algorithm For Continuous Variables

Title Parameter Selection Algorithm For Continuous Variables
Authors Peyman Tavallali, Marianne Razavi, Sean Brady
Abstract In this article, we propose a new algorithm for supervised learning methods that can both capture non-linearity in the data and find the best subset model. To produce an enhanced subset of the original variables, an ideal selection method should add a supplementary level of regression analysis that captures complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method we present here has the potential to produce an optimal subset of variables, making the overall model selection process more efficient. The core objective of this paper is to introduce a new estimation technique for the classical least-squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subset selection with variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables.
Tasks Model Selection
Published 2017-01-19
URL http://arxiv.org/abs/1701.05593v1
PDF http://arxiv.org/pdf/1701.05593v1.pdf
PWC https://paperswithcode.com/paper/parameter-selection-algorithm-for-continuous
Repo
Framework

Adaptive Bayesian Sampling with Monte Carlo EM

Title Adaptive Bayesian Sampling with Monte Carlo EM
Authors Anirban Roychowdhury, Srinivasan Parthasarathy
Abstract We present a novel technique for learning the mass matrices in samplers obtained from discretized dynamics that preserve some energy function. Existing adaptive samplers use Riemannian preconditioning techniques, where the mass matrices are functions of the parameters being sampled. This leads to significant complexities in the energy reformulations and resultant dynamics, often leading to implicit systems of equations and requiring inversion of high-dimensional matrices in the leapfrog steps. Our approach provides a simpler alternative, by using existing dynamics in the sampling step of a Monte Carlo EM framework, and learning the mass matrices in the M step with a novel online technique. We also propose a way to adaptively set the number of samples gathered in the E step, using sampling error estimates from the leapfrog dynamics. Along with a novel stochastic sampler based on Nosé-Poincaré dynamics, we use this framework with standard Hamiltonian Monte Carlo (HMC) as well as newer stochastic algorithms such as SGHMC and SGNHT, and show strong performance on synthetic and real high-dimensional sampling scenarios; we achieve sampling accuracies comparable to Riemannian samplers while being significantly faster.
Tasks
Published 2017-11-06
URL http://arxiv.org/abs/1711.02159v1
PDF http://arxiv.org/pdf/1711.02159v1.pdf
PWC https://paperswithcode.com/paper/adaptive-bayesian-sampling-with-monte-carlo
Repo
Framework
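
The leapfrog dynamics mentioned in the abstract are the standard HMC integrator. The sketch below uses a fixed scalar mass on a one-dimensional target; the paper's contribution is precisely to learn full mass matrices online, which this minimal version omits.

```python
def leapfrog(grad_u, theta, p, step, n_steps, mass=1.0):
    """Standard leapfrog integrator for HMC with a scalar mass.
    grad_u: gradient of the potential energy U(theta)."""
    p = p - 0.5 * step * grad_u(theta)       # initial half step for momentum
    for i in range(n_steps):
        theta = theta + step * p / mass      # full step for position
        if i < n_steps - 1:
            p = p - step * grad_u(theta)     # full step for momentum
    p = p - 0.5 * step * grad_u(theta)       # final half step for momentum
    return theta, -p                         # momentum flip for reversibility
```

For a standard normal target, U(theta) = theta^2 / 2, the total energy U + p^2 / (2 * mass) is approximately conserved along the trajectory, with error of order step^2.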

Parent Oriented Teacher Selection Causes Language Diversity

Title Parent Oriented Teacher Selection Causes Language Diversity
Authors Ibrahim Cimentepe, Haluk O. Bingol
Abstract An evolutionary model for the emergence of diversity in language is developed. We investigate the effects of two real-life observations: people prefer people with whom they communicate well, and people interact with people who are physically close to them. Such groups are clearly small relative to the entire population. We restrict the selection of teachers to small groups, called imitation sets, around parents: a child learns language from a teacher selected within the imitation set of her parent. As a result, subcommunities with their own languages emerge, and comprehension within a subcommunity is found to be high. The number of languages is related to the relative size of the imitation set by a power law.
Tasks
Published 2017-02-20
URL http://arxiv.org/abs/1702.06027v2
PDF http://arxiv.org/pdf/1702.06027v2.pdf
PWC https://paperswithcode.com/paper/parent-oriented-teacher-selection-causes
Repo
Framework

Slim-DP: A Light Communication Data Parallelism for DNN

Title Slim-DP: A Light Communication Data Parallelism for DNN
Authors Shizhao Sun, Wei Chen, Jiang Bian, Xiaoguang Liu, Tie-Yan Liu
Abstract Data parallelism has emerged as a necessary technique for accelerating the training of deep neural networks (DNNs). In a typical data parallelism approach, local workers push the latest updates of all parameters to the parameter server and periodically pull all merged parameters back. However, with the increasing size of DNN models and the large number of workers in practice, this approach cannot achieve satisfactory training acceleration, since it suffers from the heavy communication cost of transferring huge amounts of information between the workers and the parameter server. In-depth study of DNNs has revealed that they are usually highly redundant: deleting a considerable proportion of the parameters does not significantly degrade model performance. This redundancy offers a great opportunity to reduce the communication cost by transferring only the information of significant parameters during parallel training. However, if we transfer only the parameters that are significant in the latest snapshot, we may miss parameters that are insignificant now but could become significant as training goes on. To this end, we design an Explore-Exploit framework that dynamically chooses the subset to be communicated, comprising the significant parameters of the latest snapshot together with a randomly explored set of other parameters, where the significance of a parameter is measured by the combination of its magnitude and gradient. Our experimental results demonstrate that the proposed Slim-DP achieves better training acceleration than standard data parallelism and its communication-efficient variant, saving communication time without loss of accuracy.
Tasks
Published 2017-09-27
URL http://arxiv.org/abs/1709.09393v1
PDF http://arxiv.org/pdf/1709.09393v1.pdf
PWC https://paperswithcode.com/paper/slim-dp-a-light-communication-data
Repo
Framework
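
The Explore-Exploit selection can be sketched directly: keep the parameters with the largest magnitude-plus-gradient scores, and add a random exploration set. The subset sizes and the unweighted sum used as the significance score are illustrative assumptions, not the paper's exact recipe.

```python
import random

def choose_subset(params, grads, k_exploit, k_explore, seed=0):
    """Pick parameter indices to communicate: the k_exploit indices with the
    largest |weight| + |gradient| scores (exploit), plus k_explore random
    others (explore), so currently insignificant parameters still get a
    chance to be synchronized."""
    scores = {i: abs(w) + abs(g) for i, (w, g) in enumerate(zip(params, grads))}
    exploit = sorted(scores, key=scores.get, reverse=True)[:k_exploit]
    rest = [i for i in range(len(params)) if i not in exploit]
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    explore = rng.sample(rest, min(k_explore, len(rest)))
    return sorted(exploit + explore)
```

Only the chosen indices would be pushed to and pulled from the parameter server in a given round, which is where the communication saving comes from.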

A Preliminary Study for Building an Arabic Corpus of Pair Questions-Texts from the Web: AQA-Webcorp

Title A Preliminary Study for Building an Arabic Corpus of Pair Questions-Texts from the Web: AQA-Webcorp
Authors Wided Bakari, Patrice Bellot, Mahmoud Neji
Abstract With the development of electronic media and the heterogeneity of Arabic data on the Web, the idea of building a clean corpus for applications of natural language processing, including machine translation, information retrieval, and question answering, becomes more and more pressing. In this manuscript, we seek to create and develop our own corpus of question-text pairs, which will then provide a better basis for our experimentation. We model its construction as a method for Arabic that retrieves texts from the Web which could serve as answers to our factual questions. To do this, we developed a Java script that extracts a list of HTML pages from a given query, then cleans these pages to obtain a database of texts and a corpus of question-text pairs. We also report preliminary results for the proposed method and present some investigations into the construction of Arabic corpora.
Tasks Information Retrieval, Machine Translation
Published 2017-09-27
URL http://arxiv.org/abs/1709.09404v1
PDF http://arxiv.org/pdf/1709.09404v1.pdf
PWC https://paperswithcode.com/paper/a-preliminary-study-for-building-an-arabic
Repo
Framework

Supervised Adversarial Networks for Image Saliency Detection

Title Supervised Adversarial Networks for Image Saliency Detection
Authors Hengyue Pan, Hui Jiang
Abstract In the past few years, the Generative Adversarial Network (GAN) has become a prevalent research topic. By defining two convolutional neural networks (the G-Network and the D-Network) and introducing an adversarial procedure between them during training, a GAN can generate good-quality images from a random vector that look like natural images. Beyond image generation, GANs may have the potential to address a wide range of real-world problems. In this paper, we follow the basic idea of the GAN and propose a novel model for image saliency detection, called Supervised Adversarial Networks (SAN). Specifically, SAN also trains two models simultaneously: the G-Network takes natural images as inputs and generates corresponding saliency maps (synthetic saliency maps), while the D-Network is trained to determine whether a sample is a synthetic or a ground-truth saliency map. Unlike a standard GAN, however, the proposed method trains both the G-Network and the D-Network with full supervision, using the class labels of the training set. Moreover, a novel layer called the conv-comparison layer is introduced into the D-Network to further improve saliency performance by forcing the high-level features of synthetic and ground-truth saliency maps to be as similar as possible. Experimental results on the Pascal VOC 2012 database show that the SAN model can generate high-quality saliency maps for many complicated natural images.
Tasks Image Generation, Saliency Detection
Published 2017-04-24
URL http://arxiv.org/abs/1704.07242v2
PDF http://arxiv.org/pdf/1704.07242v2.pdf
PWC https://paperswithcode.com/paper/supervised-adversarial-networks-for-image
Repo
Framework

Constraint programming for planning test campaigns of communications satellites

Title Constraint programming for planning test campaigns of communications satellites
Authors Emmanuel Hébrard, Marie-José Huguet, Daniel Veysseire, Ludivine Sauvan, Bertrand Cabon
Abstract The payload of communications satellites must go through a series of tests to assert their ability to survive in space. Each test involves some equipment of the payload to be active, which has an impact on the temperature of the payload. Sequencing these tests in a way that ensures the thermal stability of the payload and minimizes the overall duration of the test campaign is a very important objective for satellite manufacturers. The problem can be decomposed in two sub-problems corresponding to two objectives: First, the number of distinct configurations necessary to run the tests must be minimized. This can be modeled as packing the tests into configurations, and we introduce a set of implied constraints to improve the lower bound of the model. Second, tests must be sequenced so that the number of times an equipment unit has to be switched on or off is minimized. We model this aspect using the constraint Switch, where a buffer with limited capacity represents the currently active equipment units, and we introduce an improvement of the propagation algorithm for this constraint. We then introduce a search strategy in which we sequentially solve the sub-problems (packing and sequencing). Experiments conducted on real and random instances show the respective interest of our contributions.
Tasks
Published 2017-01-23
URL http://arxiv.org/abs/1701.06388v1
PDF http://arxiv.org/pdf/1701.06388v1.pdf
PWC https://paperswithcode.com/paper/constraint-programming-for-planning-test
Repo
Framework
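
The first sub-problem, packing tests into configurations, can be illustrated with a greedy first-fit pass. The paper solves it exactly with constraint programming and implied constraints that tighten the lower bound; this sketch only shows the problem's shape, and the equipment names and capacity limit are made up.

```python
def pack_tests(tests, capacity):
    """First-fit packing: each test needs a set of active equipment units,
    and a configuration may keep at most `capacity` units active at once.
    Tests are merged into an existing configuration when the union of their
    equipment still fits; otherwise a new configuration is opened."""
    configs = []  # each configuration: the set of equipment it keeps active
    for units in tests:
        for cfg in configs:
            if len(cfg | units) <= capacity:
                cfg |= units          # reuse this configuration
                break
        else:
            configs.append(set(units))  # open a new configuration
    return configs
```

Minimizing the number of configurations matters because each configuration change means switching equipment on or off, which is exactly what the second (sequencing) sub-problem then minimizes.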

Counterfactual Conditionals in Quantified Modal Logic

Title Counterfactual Conditionals in Quantified Modal Logic
Authors Naveen Sundar Govindarajulu, Selmer Bringsjord
Abstract We present a novel formalization of counterfactual conditionals in a quantified modal logic. Counterfactual conditionals play a vital role in ethical and moral reasoning. Prior work has shown that moral reasoning systems (and more generally, theory-of-mind reasoning systems) should be at least as expressive as first-order (quantified) modal logic (QML) to be well-behaved. While existing work on moral reasoning has focused on counterfactual-free QML moral reasoning, we present a fully specified and implemented formal system that includes counterfactual conditionals. We validate our model with two projects. In the first project, we demonstrate that our system can be used to model a complex moral principle, the doctrine of double effect. In the second project, we use the system to build a data-set with true and false counterfactuals as licensed by our theory, which we believe can be useful for other researchers. This project also shows that our model can be computationally feasible.
Tasks
Published 2017-10-11
URL http://arxiv.org/abs/1710.04161v2
PDF http://arxiv.org/pdf/1710.04161v2.pdf
PWC https://paperswithcode.com/paper/counterfactual-conditionals-in-quantified
Repo
Framework

Recognizing Activities of Daily Living from Egocentric Images

Title Recognizing Activities of Daily Living from Egocentric Images
Authors Alejandro Cartas, Juan Marín, Petia Radeva, Mariella Dimiccoli
Abstract Recognizing Activities of Daily Living (ADLs) has a large number of health applications, such as characterizing lifestyle for habit improvement, nursing, and rehabilitation services. Wearable cameras can gather large amounts of image data daily, providing richer visual information about ADLs than other wearable sensors. In this paper, we explore the classification of ADLs from images captured by a low-temporal-resolution wearable camera (2 fpm) using a Convolutional Neural Network (CNN) approach. We show that the classification accuracy of a CNN improves considerably when its output is combined, through a random decision forest, with contextual information from a fully connected layer. The proposed method was tested on a subset of the NTCIR-12 egocentric dataset, consisting of 18,674 images, and achieved an overall activity recognition accuracy of 86% on 21 classes.
Tasks Activity Recognition
Published 2017-04-13
URL http://arxiv.org/abs/1704.04097v1
PDF http://arxiv.org/pdf/1704.04097v1.pdf
PWC https://paperswithcode.com/paper/recognizing-activities-of-daily-living-from
Repo
Framework