May 6, 2019

3211 words 16 mins read

Paper Group ANR 396



Construction Inspection through Spatial Database

Title Construction Inspection through Spatial Database
Authors Ahmad Hasan, Ashraf Qadir, Ian Nordeng, Jeremiah Neubert
Abstract This paper presents a novel pipeline for developing an efficient set of tools for extracting information from video of a structure, captured by an Unmanned Aircraft System (UAS), to produce as-built documentation that aids inspection of large multi-storied buildings during construction. Our system uses the output of a Simultaneous Localization and Mapping system together with a 3D CAD model of the structure to construct a spatial database that registers images in the 3D CAD model space. This allows the user to perform spatial queries for images through spatial indexing into the 3D CAD model space. The images returned by a spatial query are used to extract metric information. The spatial database is also used to generate a 3D textured model that provides visual as-built documentation.
Tasks Simultaneous Localization and Mapping
Published 2016-11-11
URL http://arxiv.org/abs/1611.03566v3
PDF http://arxiv.org/pdf/1611.03566v3.pdf
PWC https://paperswithcode.com/paper/construction-inspection-through-spatial
Repo
Framework

ZaliQL: A SQL-Based Framework for Drawing Causal Inference from Big Data

Title ZaliQL: A SQL-Based Framework for Drawing Causal Inference from Big Data
Authors Babak Salimi, Dan Suciu
Abstract Causal inference from observational data is a subject of active research and development in statistics and computer science. Many toolkits that depend on statistical software have been developed for this purpose; however, these toolkits do not scale to large datasets. In this paper we describe a suite of techniques for expressing causal inference tasks over observational data in SQL. The suite supports state-of-the-art methods for causal inference and runs at scale within a database engine. In addition, we introduce several optimization techniques that significantly speed up causal inference in both the online and offline settings. We evaluate the quality and performance of our techniques through experiments on real datasets.
Tasks Causal Inference
Published 2016-09-12
URL http://arxiv.org/abs/1609.03540v2
PDF http://arxiv.org/pdf/1609.03540v2.pdf
PWC https://paperswithcode.com/paper/zaliql-a-sql-based-framework-for-drawing
Repo
Framework
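The core claim above is that causal-inference computations can run entirely inside a database engine. A minimal sketch of that idea, using Python's standard-library sqlite3 with a hypothetical table and column names (this is not the ZaliQL code itself), estimates an average treatment effect by stratifying on a single confounder in pure SQL:

```python
import sqlite3

# Hypothetical observational table: one row per unit, with a binary
# confounder, a binary treatment indicator, and an outcome.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE obs (unit INTEGER, confounder INTEGER, treated INTEGER, outcome REAL)")
rows = [
    # (unit, confounder, treated, outcome)
    (1, 0, 1, 3.0), (2, 0, 0, 1.0), (3, 0, 1, 3.5), (4, 0, 0, 1.5),
    (5, 1, 1, 6.0), (6, 1, 0, 4.0), (7, 1, 1, 6.5), (8, 1, 0, 4.5),
]
cur.executemany("INSERT INTO obs VALUES (?, ?, ?, ?)", rows)

# Within each confounder stratum, take the treated-minus-control mean
# difference; then weight the strata by their sizes. AVG over a CASE
# expression ignores the NULLs produced for the other group.
cur.execute("""
    SELECT SUM(n * diff) / SUM(n) FROM (
        SELECT COUNT(*) AS n,
               AVG(CASE WHEN treated = 1 THEN outcome END)
             - AVG(CASE WHEN treated = 0 THEN outcome END) AS diff
        FROM obs GROUP BY confounder
    )
""")
ate = cur.fetchone()[0]
print(ate)  # stratified average-treatment-effect estimate (2.0 on this data)
```

The same query runs unchanged on a server-grade engine over millions of rows, which is the scaling argument the abstract makes against in-memory statistical toolkits.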

PixelSNE: Visualizing Fast with Just Enough Precision via Pixel-Aligned Stochastic Neighbor Embedding

Title PixelSNE: Visualizing Fast with Just Enough Precision via Pixel-Aligned Stochastic Neighbor Embedding
Authors Minjeong Kim, Minsuk Choi, Sunwoong Lee, Jian Tang, Haesun Park, Jaegul Choo
Abstract Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem, since such visualization can reveal deep insights from complex data. Most existing embedding approaches, however, run at excessively high precision, ignoring the fact that, in the end, embedding outputs are converted into coarse-grained discrete pixel coordinates in screen space. Motivated by this observation, and by directly considering pixel coordinates in the embedding optimization process, we accelerate Barnes-Hut tree-based t-distributed stochastic neighbor embedding (BH-SNE), a state-of-the-art 2D embedding method, and propose a novel method called PixelSNE, a highly efficient, screen-resolution-driven 2D embedding method with linear computational complexity in the number of data items. Our experimental results show that PixelSNE runs significantly faster than BH-SNE while incurring only minimal degradation in embedding quality. Finally, the source code of our method is publicly available at https://github.com/awesome-davian/PixelSNE
Tasks
Published 2016-11-08
URL http://arxiv.org/abs/1611.02568v3
PDF http://arxiv.org/pdf/1611.02568v3.pdf
PWC https://paperswithcode.com/paper/pixelsne-visualizing-fast-with-just-enough
Repo
Framework
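The "just enough precision" idea above can be illustrated with a few lines of NumPy (the real implementation is the linked GitHub repository; this sketch only shows the pixel-grid quantization step, with a made-up embedding and resolution):

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 2))   # a continuous 2D embedding (placeholder data)
P = 512                         # target screen resolution in pixels (assumed)

def to_pixel_grid(Y, P):
    """Affinely map embedding coordinates into [0, P) and snap to pixel centers."""
    lo, hi = Y.min(axis=0), Y.max(axis=0)
    scaled = (Y - lo) / (hi - lo) * (P - 1)
    return np.rint(scaled)

Yp = to_pixel_grid(Y, P)
# Every coordinate is now an integer pixel index; any optimization effort
# spent on precision finer than half a pixel would be invisible on screen.
assert Yp.min() >= 0 and Yp.max() <= P - 1
```

PixelSNE exploits exactly this observation during optimization rather than after it, which is where its speedup over BH-SNE comes from.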

Intelligence in Artificial Intelligence

Title Intelligence in Artificial Intelligence
Authors Shoumen Palit Austin Datta
Abstract The elusive quest for intelligence in artificial intelligence prompts us to consider that instituting human-level intelligence in systems may be (still) in the realm of utopia. In about a quarter century, we have witnessed the winter of AI (1990) being transformed and transported to the zenith of tabloid fodder about AI (2015). The discussion at hand is about the elements that constitute the canonical idea of intelligence. The delivery of intelligence as a pay-per-use-service, popping out of an app or from a shrink-wrapped software defined point solution, is in contrast to the bio-inspired view of intelligence as an outcome, perhaps formed from a tapestry of events, cross-pollinated by instances, each with its own microcosm of experiences and learning, which may not be discrete all-or-none functions but continuous, over space and time. The enterprise world may not require, aspire or desire such an engaged solution to improve its services for enabling digital transformation through the deployment of digital twins, for example. One might ask whether the “work-flow on steroids” version of decision support may suffice for intelligence? Are we harking back to the era of rule based expert systems? The image conjured by the publicity machines offers deep solutions with human-level AI and preposterous claims about capturing the “brain in a box” by 2020. Even emulating insects may be difficult in terms of real progress. Perhaps we can try to focus on worms (Caenorhabditis elegans) which may be better suited for what business needs to quench its thirst for so-called intelligence in AI.
Tasks
Published 2016-10-24
URL http://arxiv.org/abs/1610.07862v2
PDF http://arxiv.org/pdf/1610.07862v2.pdf
PWC https://paperswithcode.com/paper/intelligence-in-artificial-intelligence
Repo
Framework

Non-Negative Matrix Factorization Test Cases

Title Non-Negative Matrix Factorization Test Cases
Authors Connor Sell, Jeremy Kepner
Abstract Non-negative matrix factorization (NMF) is a problem with many applications, ranging from facial recognition to document clustering. However, due to the variety of algorithms that solve NMF, the randomness involved in these algorithms, and the somewhat subjective nature of the problem, there is no clear “correct answer” to any particular NMF problem, and as a result it can be hard to test new algorithms. This paper suggests test cases for NMF algorithms derived from matrices with enumerable exact non-negative factorizations and from perturbations of these matrices. Three algorithms using widely divergent approaches to NMF all give similar solutions over these test cases, suggesting that the cases could be used to test implementations of existing NMF algorithms as well as potentially new ones. This paper also describes how the proposed test cases could be used in practice.
Tasks
Published 2016-12-30
URL http://arxiv.org/abs/1701.00016v1
PDF http://arxiv.org/pdf/1701.00016v1.pdf
PWC https://paperswithcode.com/paper/non-negative-matrix-factorization-test-cases
Repo
Framework
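The paper's premise is that a matrix built from a known exact non-negative factorization gives an NMF solver a verifiable target. A small sketch of that test pattern, using standard Lee-Seung multiplicative updates as the solver under test (the paper evaluates other, unspecified algorithms):

```python
import numpy as np

# Build a test matrix with a known exact rank-2 non-negative factorization.
rng = np.random.default_rng(1)
W_true = rng.random((6, 2))
H_true = rng.random((2, 5))
V = W_true @ H_true

def nmf(V, r, iters=2000, eps=1e-12):
    """Lee-Seung multiplicative updates; preserves non-negativity by construction."""
    rng2 = np.random.default_rng(2)
    W = rng2.random((V.shape[0], r))
    H = rng2.random((r, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(V, 2)
# Because an exact factorization exists, a correct solver should drive the
# relative residual close to zero; a buggy one will plateau far from it.
residual = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(residual)
```

Note that W and H need not match W_true and H_true (NMF solutions are non-unique, which is the paper's point); only the residual is checkable.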

Blur Robust Optical Flow using Motion Channel

Title Blur Robust Optical Flow using Motion Channel
Authors Wenbin Li, Yang Chen, JeeHang Lee, Gang Ren, Darren Cosker
Abstract It is hard to estimate optical flow given a real-world video sequence with camera shake and other motion blur. In this paper, we first investigate blur parameterization for video footage using near-linear motion elements. We then combine a commercial 3D pose sensor with an RGB camera in order to film video footage of interest together with the camera motion. We illustrate that this additional camera motion/trajectory channel can be embedded into a hybrid framework by interleaving an iterative blind deconvolution and warping-based optical flow scheme. Our method yields improved accuracy over three state-of-the-art baselines on our proposed ground-truth blurry sequences and on several other real-world sequences filmed by our imaging system.
Tasks Optical Flow Estimation
Published 2016-03-07
URL http://arxiv.org/abs/1603.02253v1
PDF http://arxiv.org/pdf/1603.02253v1.pdf
PWC https://paperswithcode.com/paper/blur-robust-optical-flow-using-motion-channel
Repo
Framework

Minimizing Regret on Reflexive Banach Spaces and Learning Nash Equilibria in Continuous Zero-Sum Games

Title Minimizing Regret on Reflexive Banach Spaces and Learning Nash Equilibria in Continuous Zero-Sum Games
Authors Maximilian Balandat, Walid Krichene, Claire Tomlin, Alexandre Bayen
Abstract We study a general version of the adversarial online learning problem. We are given a decision set $\mathcal{X}$ in a reflexive Banach space $X$ and a sequence of reward vectors in the dual space of $X$. At each iteration, we choose an action from $\mathcal{X}$, based on the observed sequence of previous rewards. Our goal is to minimize regret, defined as the gap between the realized reward and the reward of the best fixed action in hindsight. Using results from infinite dimensional convex analysis, we generalize the method of Dual Averaging (or Follow the Regularized Leader) to our setting and obtain general upper bounds on the worst-case regret that subsume a wide range of results from the literature. Under the assumption of uniformly continuous rewards, we obtain explicit anytime regret bounds in a setting where the decision set is the set of probability distributions on a compact metric space $S$ whose Radon-Nikodym derivatives are elements of $L^p(S)$ for some $p > 1$. Importantly, we make no convexity assumptions on either the set $S$ or the reward functions. We also prove a general lower bound on the worst-case regret for any online algorithm. We then apply these results to the problem of learning in repeated continuous two-player zero-sum games, in which players’ strategy sets are compact metric spaces. In doing so, we first prove that if both players play a Hannan-consistent strategy, then with probability 1 the empirical distributions of play weakly converge to the set of Nash equilibria of the game. We then show that, under mild assumptions, Dual Averaging on the (infinite-dimensional) space of probability distributions indeed achieves Hannan-consistency. Finally, we illustrate our results through numerical examples.
Tasks
Published 2016-06-03
URL http://arxiv.org/abs/1606.01261v1
PDF http://arxiv.org/pdf/1606.01261v1.pdf
PWC https://paperswithcode.com/paper/minimizing-regret-on-reflexive-banach-spaces
Repo
Framework
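The Dual Averaging scheme the abstract generalizes can be seen in its simplest finite-dimensional instance: with an entropic regularizer on the probability simplex it reduces to exponential weights, and average regret against the best fixed action vanishes (Hannan consistency). A sketch with synthetic rewards (the paper's setting is infinite-dimensional; this toy uses 3 actions):

```python
import numpy as np

rng = np.random.default_rng(3)
T, K = 2000, 3
rewards = rng.random((T, K))      # reward vectors in [0, 1]; adversarial in general

eta = np.sqrt(np.log(K) / T)      # standard step-size scaling
cum = np.zeros(K)                 # cumulative reward vector (the "dual average")
realized = 0.0
for t in range(T):
    # Play the entropic-regularized leader: softmax of the dual average.
    w = np.exp(eta * cum)
    w /= w.sum()
    realized += w @ rewards[t]
    cum += rewards[t]

# Regret: gap to the best fixed action in hindsight; average regret -> 0.
regret = cum.max() - realized
print(regret / T)
```

The paper's contribution is carrying this argument to reflexive Banach spaces, where the "actions" are probability distributions on a compact metric space and the softmax step becomes a general regularized leader.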

Real-Time Community Detection in Large Social Networks on a Laptop

Title Real-Time Community Detection in Large Social Networks on a Laptop
Authors Benjamin Paul Chamberlain, Josh Levy-Kramer, Clive Humby, Marc Peter Deisenroth
Abstract For a broad range of research, governmental and commercial applications it is important to understand the allegiances, communities and structure of key players in society. One promising direction towards extracting this information is to exploit the rich relational data in digital social networks (the social graph). As social media data sets are very large, most approaches make use of distributed computing systems for this purpose. Distributing graph processing requires solving many difficult engineering problems, which has led some researchers to look at single-machine solutions that are faster and easier to maintain. In this article, we present a single-machine real-time system for large-scale graph processing that allows analysts to interactively explore graph structures. The key idea is that the aggregate actions of large numbers of users can be compressed into a data structure that encapsulates user similarities while being robust to noise and queryable in real-time. We achieve single machine real-time performance by compressing the neighbourhood of each vertex using minhash signatures and facilitate rapid queries through Locality Sensitive Hashing. These techniques reduce query times from hours using industrial desktop machines operating on the full graph to milliseconds on standard laptops. Our method allows exploration of strongly associated regions (i.e. communities) of large graphs in real-time on a laptop. It has been deployed in software that is actively used by social network analysts and offers another channel for media owners to monetise their data, helping them to continue to provide free services that are valued by billions of people globally.
Tasks Community Detection
Published 2016-01-15
URL http://arxiv.org/abs/1601.03958v2
PDF http://arxiv.org/pdf/1601.03958v2.pdf
PWC https://paperswithcode.com/paper/real-time-community-detection-in-large-social
Repo
Framework
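The compression step described above rests on a classical property of minhash: the fraction of agreeing signature slots is an unbiased estimate of Jaccard similarity between neighbourhood sets. A self-contained sketch (illustrative only, not the deployed system; the neighbourhoods are made up):

```python
def minhash_signature(neighbours, seeds):
    # One hash function per seed; keep the minimum hash value over the set.
    return [min(hash((seed, v)) for v in neighbours) for seed in seeds]

seeds = range(128)                     # 128 hash functions -> 128-slot signature
a = set(range(0, 60))                  # neighbourhood of vertex a
b = set(range(30, 90))                 # neighbourhood of vertex b (overlap = 30)

sig_a = minhash_signature(a, seeds)
sig_b = minhash_signature(b, seeds)

# Signature agreement estimates Jaccard similarity without touching the sets.
est = sum(x == y for x, y in zip(sig_a, sig_b)) / 128
true_jaccard = len(a & b) / len(a | b)  # = 30 / 90 = 1/3
print(est, true_jaccard)
```

Storing only these short signatures per vertex, and bucketing them with Locality Sensitive Hashing, is what turns full-graph community queries into millisecond lookups on a laptop.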

Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks

Title Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks
Authors Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, Shimon Whiteson
Abstract We propose deep distributed recurrent Q-networks (DDRQN), which enable teams of agents to learn to solve communication-based coordination tasks. In these tasks, the agents are not given any pre-designed communication protocol. Therefore, in order to successfully communicate, they must first automatically develop and agree upon their own communication protocol. We present empirical results on two multi-agent learning problems based on well-known riddles, demonstrating that DDRQN can successfully solve such tasks and discover elegant communication protocols to do so. To our knowledge, this is the first time deep reinforcement learning has succeeded in learning communication protocols. In addition, we present ablation experiments that confirm that each of the main components of the DDRQN architecture are critical to its success.
Tasks
Published 2016-02-08
URL http://arxiv.org/abs/1602.02672v1
PDF http://arxiv.org/pdf/1602.02672v1.pdf
PWC https://paperswithcode.com/paper/learning-to-communicate-to-solve-riddles-with
Repo
Framework

Agenda Separability in Judgment Aggregation

Title Agenda Separability in Judgment Aggregation
Authors Jérôme Lang, Marija Slavkovik, Srdjan Vesic
Abstract One of the better studied properties for operators in judgment aggregation is independence, which essentially dictates that the collective judgment on one issue should not depend on the individual judgments given on some other issue(s) in the same agenda. Independence, although considered a desirable property, is too strong, because together with mild additional conditions it implies dictatorship. We propose here a weakening of independence, named agenda separability: a judgment aggregation rule satisfies it if, whenever the agenda is composed of several independent sub-agendas, the resulting collective judgment sets can be computed separately for each sub-agenda and then put together. We show that this property is discriminant, in the sense that among judgment aggregation rules so far studied in the literature, some satisfy it and some do not. We briefly discuss the implications of agenda separability on the computation of judgment aggregation rules.
Tasks
Published 2016-04-22
URL http://arxiv.org/abs/1604.06614v1
PDF http://arxiv.org/pdf/1604.06614v1.pdf
PWC https://paperswithcode.com/paper/agenda-separability-in-judgment-aggregation
Repo
Framework
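Agenda separability is easy to see on a concrete rule. The issue-wise majority rule satisfies it trivially: aggregating two independent sub-agendas separately and concatenating the results equals aggregating the whole agenda at once. A worked example with a hypothetical three-issue agenda (for illustration only):

```python
def majority(profile):
    """Issue-wise majority over a list of judgment sets (dicts issue -> bool)."""
    issues = profile[0].keys()
    return {i: sum(j[i] for j in profile) * 2 > len(profile) for i in issues}

# Agenda composed of two independent sub-agendas: {p, q} and {r}.
profile = [
    {"p": True,  "q": False, "r": True},
    {"p": True,  "q": True,  "r": False},
    {"p": False, "q": True,  "r": True},
]

whole = majority(profile)
sub1 = majority([{k: j[k] for k in ("p", "q")} for j in profile])
sub2 = majority([{k: j[k] for k in ("r",)} for j in profile])

# Separability: per-sub-agenda aggregation, put together, matches the whole.
assert whole == {**sub1, **sub2}
print(whole)  # {'p': True, 'q': True, 'r': True}
```

The paper's result is that this property is discriminant: unlike majority here, some rules studied in the literature fail it.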

Recovering Structured Probability Matrices

Title Recovering Structured Probability Matrices
Authors Qingqing Huang, Sham M. Kakade, Weihao Kong, Gregory Valiant
Abstract We consider the problem of accurately recovering a matrix B of size M by M, which represents a probability distribution over M^2 outcomes, given access to an observed matrix of “counts” generated by taking independent samples from the distribution B. How can structural properties of the underlying matrix B be leveraged to yield computationally efficient and information theoretically optimal reconstruction algorithms? When can accurate reconstruction be accomplished in the sparse data regime? This basic problem lies at the core of a number of questions that are currently being considered by different communities, including building recommendation systems and collaborative filtering in the sparse data regime, community detection in sparse random graphs, learning structured models such as topic models or hidden Markov models, and the efforts from the natural language processing community to compute “word embeddings”. Our results apply to the setting where B has a low rank structure. For this setting, we propose an efficient algorithm that accurately recovers the underlying M by M matrix using Theta(M) samples. This result easily translates to Theta(M) sample algorithms for learning topic models and learning hidden Markov Models. These linear sample complexities are optimal, up to constant factors, in an extremely strong sense: even testing basic properties of the underlying matrix (such as whether it has rank 1 or 2) requires Omega(M) samples. We provide an even stronger lower bound where distinguishing whether a sequence of observations were drawn from the uniform distribution over M observations versus being generated by an HMM with two hidden states requires Omega(M) observations. This precludes sublinear-sample hypothesis tests for basic properties, such as identity or uniformity, as well as sublinear sample estimators for quantities such as the entropy rate of HMMs.
Tasks Community Detection, Recommendation Systems, Topic Models, Word Embeddings
Published 2016-02-21
URL http://arxiv.org/abs/1602.06586v6
PDF http://arxiv.org/pdf/1602.06586v6.pdf
PWC https://paperswithcode.com/paper/recovering-structured-probability-matrices
Repo
Framework
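A toy version of the setting makes the role of low-rank structure concrete (this is not the paper's optimal algorithm, just a plain SVD truncation on synthetic data): with B of rank 1 and only a linear number of samples, the rank-1 projection of the empirical count matrix denoises the raw estimate.

```python
import numpy as np

rng = np.random.default_rng(4)
M = 50
p = rng.random(M); p /= p.sum()
B = np.outer(p, p)                          # rank-1 distribution over M^2 outcomes

n = 50 * M                                  # linear (in M) number of samples
idx = rng.choice(M * M, size=n, p=B.ravel())
counts = np.bincount(idx, minlength=M * M).reshape(M, M)
emp = counts / n                            # raw empirical estimate of B

U, s, Vt = np.linalg.svd(emp)
low_rank = s[0] * np.outer(U[:, 0], Vt[0])  # keep only the top singular component

err_emp = np.abs(emp - B).sum()             # L1 error of the raw counts
err_lr = np.abs(low_rank - B).sum()         # L1 error after rank-1 projection
print(err_emp, err_lr)
```

With this few samples most cells of `emp` are empty, so the raw estimate is poor, while projecting onto the known low-rank structure discards most of the sampling noise; the paper's algorithm achieves this with optimal Theta(M) sample complexity.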

Forecasting wind power - Modeling periodic and non-linear effects under conditional heteroscedasticity

Title Forecasting wind power - Modeling periodic and non-linear effects under conditional heteroscedasticity
Authors Florian Ziel, Carsten Croonenbroeck, Daniel Ambach
Abstract In this article we present an approach that enables joint wind speed and wind power forecasts for a wind park. We combine a multivariate seasonal time varying threshold autoregressive moving average (TVARMA) model with a power threshold generalized autoregressive conditional heteroscedastic (power-TGARCH) model. The modeling framework incorporates diurnal and annual periodicity modeling by periodic B-splines, conditional heteroscedasticity and a complex autoregressive structure with non-linear impacts. In contrast to typically time-consuming estimation approaches such as likelihood estimation, we apply a high-dimensional shrinkage technique. We utilize an iteratively re-weighted least absolute shrinkage and selection operator (lasso) technique. It allows for conditional heteroscedasticity, provides fast computing times and guarantees a parsimonious and regularized specification, even though the parameter space may be vast. We are able to show that our approach provides accurate forecasts of wind power at a turbine-specific level for forecasting horizons of up to 48 h (short- to medium-term forecasts).
Tasks
Published 2016-06-02
URL http://arxiv.org/abs/1606.00546v1
PDF http://arxiv.org/pdf/1606.00546v1.pdf
PWC https://paperswithcode.com/paper/forecasting-wind-power-modeling-periodic-and
Repo
Framework
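The estimation idea above is that shrinkage replaces likelihood maximization: fit a large regression and let the lasso penalty zero out most coefficients. A minimal sketch with plain ISTA (proximal gradient) on synthetic data; the paper uses an iteratively re-weighted lasso on a far richer TVARMA-TGARCH specification, so everything below is a simplified stand-in:

```python
import numpy as np

# Synthetic design: 20 candidate regressors, only the first 3 truly active.
rng = np.random.default_rng(5)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [1.5, -1.0, 0.5]
y = X @ beta_true + 0.1 * rng.normal(size=n)

# ISTA for (1/2n)||y - X b||^2 + lam * ||b||_1.
lam = 0.1
L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
step = 1.0 / L
beta = np.zeros(p)
for _ in range(1000):
    beta = beta - step * (X.T @ (X @ beta - y)) / n   # gradient step
    beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)  # soft-threshold

print(np.round(beta, 2))  # most coefficients are driven exactly to zero
```

The exact zeros are the point: even when the candidate parameter space is vast, the fitted specification stays parsimonious, and no iterative likelihood evaluation is needed.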

Accelerating Science: A Computing Research Agenda

Title Accelerating Science: A Computing Research Agenda
Authors Vasant G. Honavar, Mark D. Hill, Katherine Yelick
Abstract The emergence of “big data” offers unprecedented opportunities for not only accelerating scientific advances but also enabling new modes of discovery. Scientific progress in many disciplines is increasingly enabled by our ability to examine natural phenomena through the computational lens, i.e., using algorithmic or information processing abstractions of the underlying processes; and our ability to acquire, share, integrate and analyze disparate types of data. However, there is a huge gap between our ability to acquire, store, and process data and our ability to make effective use of the data to advance discovery. Despite successful automation of routine aspects of data management and analytics, most elements of the scientific process currently require considerable human expertise and effort. Accelerating science to keep pace with the rate of data acquisition and data processing calls for the development of algorithmic or information processing abstractions, coupled with formal methods and tools for modeling and simulation of natural processes, as well as major innovations in cognitive tools for scientists, i.e., computational tools that leverage and extend the reach of human intellect and partner with humans on a broad range of tasks in scientific discovery (e.g., identifying, prioritizing, and formulating questions; designing, prioritizing, and executing experiments to answer a chosen question; drawing inferences and evaluating the results; and formulating new questions, in a closed-loop fashion). This calls for a concerted research agenda aimed at: development, analysis, integration, sharing, and simulation of algorithmic or information processing abstractions of natural processes, coupled with formal methods and tools for their analyses and simulation; and innovations in cognitive tools that augment and extend human intellect and partner with humans in all aspects of science.
Tasks
Published 2016-04-06
URL http://arxiv.org/abs/1604.02006v1
PDF http://arxiv.org/pdf/1604.02006v1.pdf
PWC https://paperswithcode.com/paper/accelerating-science-a-computing-research
Repo
Framework

Learning to Drive using Inverse Reinforcement Learning and Deep Q-Networks

Title Learning to Drive using Inverse Reinforcement Learning and Deep Q-Networks
Authors Sahand Sharifzadeh, Ioannis Chiotellis, Rudolph Triebel, Daniel Cremers
Abstract We propose an inverse reinforcement learning (IRL) approach using Deep Q-Networks to extract the rewards in problems with large state spaces. We evaluate the performance of this approach in a simulation-based autonomous driving scenario. Our results resemble the intuitive relation between the reward function and readings of distance sensors mounted at different poses on the car. We also show that, after a few learning rounds, our simulated agent generates collision-free motions and performs human-like lane change behaviour.
Tasks Autonomous Driving
Published 2016-12-12
URL http://arxiv.org/abs/1612.03653v2
PDF http://arxiv.org/pdf/1612.03653v2.pdf
PWC https://paperswithcode.com/paper/learning-to-drive-using-inverse-reinforcement
Repo
Framework

Information Extraction with Character-level Neural Networks and Free Noisy Supervision

Title Information Extraction with Character-level Neural Networks and Free Noisy Supervision
Authors Philipp Meerkamp, Zhengyi Zhou
Abstract We present an architecture for information extraction from text that augments an existing parser with a character-level neural network. The network is trained using a measure of consistency of extracted data with existing databases as a form of noisy supervision. Our architecture combines the ability of constraint-based information extraction systems to easily incorporate domain knowledge and constraints with the ability of deep neural networks to leverage large amounts of data to learn complex features. Boosting the existing parser’s precision, the system led to large improvements over a mature and highly tuned constraint-based production information extraction system used at Bloomberg for financial language text.
Tasks
Published 2016-12-13
URL http://arxiv.org/abs/1612.04118v2
PDF http://arxiv.org/pdf/1612.04118v2.pdf
PWC https://paperswithcode.com/paper/information-extraction-with-character-level
Repo
Framework