May 7, 2019

3454 words 17 mins read

Paper Group ANR 99


A Bayesian Network approach to County-Level Corn Yield Prediction using historical data and expert knowledge. Weakly Supervised Object Localization Using Size Estimates. Creating a Real-Time, Reproducible Event Dataset. A Maximum A Posteriori Estimation Framework for Robust High Dynamic Range Video Synthesis. Empirical Evaluation of A New Approach …

A Bayesian Network approach to County-Level Corn Yield Prediction using historical data and expert knowledge

Title A Bayesian Network approach to County-Level Corn Yield Prediction using historical data and expert knowledge
Authors Vikas Chawla, Hsiang Sing Naik, Adedotun Akintayo, Dermot Hayes, Patrick Schnable, Baskar Ganapathysubramanian, Soumik Sarkar
Abstract Crop yield forecasting is the methodology of predicting crop yields prior to harvest. The availability of accurate yield prediction frameworks has enormous implications from multiple standpoints, including impact on the crop commodity futures markets, formulation of agricultural policy, as well as crop insurance rating. The focus of this work is to construct a corn yield predictor at the county scale. Corn yield forecasting depends on a complex, interconnected set of variables that include economic, agricultural, management and meteorological factors. Conventional forecasting relies either on knowledge-based computer programs (that simulate plant-weather-soil-management interactions) coupled with targeted surveys, or on statistical models. The former is limited by the need for painstaking calibration, while the latter is limited to univariate analysis or similar simplifying assumptions that fail to capture the complex interdependencies affecting yield. In this paper, we propose a data-driven approach that is “gray box”, i.e. one that seamlessly utilizes expert knowledge in constructing a statistical network model for corn yield forecasting. Our multivariate gray box model is based on Bayesian network analysis to build a Directed Acyclic Graph (DAG) between predictors and yield. Starting from a complete graph connecting various carefully chosen variables and yield, expert knowledge is used to prune or strengthen edges connecting variables. Subsequently, the structure (connectivity and edge weights) of the DAG that maximizes the likelihood of observing the training data is identified via optimization. We curated an extensive set of historical data (1948-2012) for each of the 99 counties in Iowa to train the model.
Tasks Calibration
Published 2016-08-17
URL http://arxiv.org/abs/1608.05127v1
PDF http://arxiv.org/pdf/1608.05127v1.pdf
PWC https://paperswithcode.com/paper/a-bayesian-network-approach-to-county-level
Repo
Framework
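
The core computational step described above is scoring candidate DAG structures against data and keeping the one that fits best. Below is a minimal, hedged sketch of that idea using a decomposable linear-Gaussian BIC score on toy data; the variables, the expert-pruned structure, and the scoring choice are illustrative stand-ins, not the paper's actual model or dataset.

```python
# Hypothetical sketch: score candidate DAG structures (an expert-pruned one vs.
# a denser one) with a node-wise linear-Gaussian BIC, standing in for the
# paper's likelihood-maximizing structure search. All names are illustrative.
import numpy as np

def node_bic(y, parents):
    """BIC contribution of one node given its parent columns (linear-Gaussian)."""
    n = len(y)
    X = np.column_stack([np.ones(n)] + list(parents))    # intercept + parent columns
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return loglik - 0.5 * X.shape[1] * np.log(n)          # BIC complexity penalty

def dag_bic(data, dag):
    """Sum node-wise BIC over a DAG given as {child: [parent, ...]}."""
    return sum(node_bic(data[c], [data[p] for p in dag[c]]) for c in dag)

# Toy data standing in for county-level predictors and yield.
rng = np.random.default_rng(0)
n = 500
data = {"precip": rng.normal(size=n), "gdd": rng.normal(size=n)}
data["yield"] = 0.8 * data["precip"] + 0.5 * data["gdd"] + rng.normal(scale=0.3, size=n)

expert_dag = {"precip": [], "gdd": [], "yield": ["precip", "gdd"]}   # expert-pruned
dense_dag  = {"precip": ["gdd"], "gdd": [], "yield": ["precip", "gdd"]}
print(dag_bic(data, expert_dag), dag_bic(data, dense_dag))
```

In a full search, expert knowledge would constrain which edges may be added or removed before an optimizer (e.g. hill climbing) explores structures under this score.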

Weakly Supervised Object Localization Using Size Estimates

Title Weakly Supervised Object Localization Using Size Estimates
Authors Miaojing Shi, Vittorio Ferrari
Abstract We present a technique for weakly supervised object localization (WSOL), building on the observation that WSOL algorithms usually work better on images with bigger objects. Instead of training the object detector on the entire training set at the same time, we propose a curriculum learning strategy to feed training images into the WSOL learning loop in an order from images containing bigger objects down to smaller ones. To automatically determine the order, we train a regressor to estimate the size of the object given the whole image as input. Furthermore, we use these size estimates to further improve the re-localization step of WSOL by assigning weights to object proposals according to how close their size matches the estimated object size. We demonstrate the effectiveness of using size order and size weighting on the challenging PASCAL VOC 2007 dataset, where we achieve a significant improvement over existing state-of-the-art WSOL techniques.
Tasks Object Localization, Weakly-Supervised Object Localization
Published 2016-08-15
URL http://arxiv.org/abs/1608.04314v2
PDF http://arxiv.org/pdf/1608.04314v2.pdf
PWC https://paperswithcode.com/paper/weakly-supervised-object-localization-using
Repo
Framework
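
The two mechanisms in the abstract above, ordering training images by an estimated object size and weighting proposals by how well their size matches that estimate, are simple to sketch. The features, regressor, and Gaussian weighting below are illustrative assumptions, not the paper's exact components.

```python
# Hypothetical sketch of the size-order curriculum and size-based proposal
# weighting for weakly supervised object localization (WSOL).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_images, feat_dim = 200, 64
img_feats = rng.normal(size=(n_images, feat_dim))        # whole-image features
true_sizes = rng.uniform(0.05, 0.9, size=n_images)       # object area fractions
img_feats[:, 0] = true_sizes + rng.normal(scale=0.05, size=n_images)  # make toy features informative

# 1) Train a regressor to estimate object size from the whole image.
size_reg = Ridge(alpha=1.0).fit(img_feats, true_sizes)
est_sizes = size_reg.predict(img_feats)

# 2) Curriculum: feed images into the WSOL loop from bigger objects down to smaller ones.
curriculum_order = np.argsort(-est_sizes)

# 3) Re-localization: weight each object proposal by how close its size is to the estimate.
def proposal_weights(proposal_sizes, est_size, bandwidth=0.1):
    return np.exp(-((proposal_sizes - est_size) ** 2) / (2 * bandwidth ** 2))

proposals = rng.uniform(0.01, 1.0, size=50)               # candidate box areas
w = proposal_weights(proposals, est_sizes[curriculum_order[0]])
print(curriculum_order[:5], w.max())
```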

Creating a Real-Time, Reproducible Event Dataset

Title Creating a Real-Time, Reproducible Event Dataset
Authors John Beieler
Abstract The generation of political event data has remained much the same since the mid-1990s, both in terms of data acquisition and the process of coding text into data. Since the 1990s, however, there have been significant improvements in open-source natural language processing software and in the availability of digitized news content. This paper presents a new, next-generation event dataset, named Phoenix, that builds on these and other advances. This dataset includes improvements in the underlying news collection process and event coding software, along with the creation of a general processing pipeline necessary to produce daily-updated data. This paper provides a face-validity check by briefly examining the data for the conflict in Syria, along with a comparison between Phoenix and the Integrated Crisis Early Warning System data.
Tasks
Published 2016-12-02
URL http://arxiv.org/abs/1612.00866v1
PDF http://arxiv.org/pdf/1612.00866v1.pdf
PWC https://paperswithcode.com/paper/creating-a-real-time-reproducible-event
Repo
Framework
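
The entry above describes a daily processing pipeline: collect news, code events, deduplicate, and publish updated data. The skeleton below sketches that flow only; every function and file name is a placeholder, not the Phoenix project's actual API or output format.

```python
# Hypothetical skeleton of a daily event-data pipeline of the kind described
# above. All functions are placeholders standing in for real components
# (news ingestion, an NLP-based event coder, deduplication, daily output).
import datetime

def collect_news(date):            # placeholder: ingest digitized news content for one day
    return [{"date": str(date), "text": "Protesters clashed with police in ..."}]

def code_events(story):            # placeholder: run an event coder over one story
    return [{"source": "protesters", "target": "police", "code": "CLASH"}]

def deduplicate(events):           # placeholder: drop repeated source/target/code triples
    seen, unique = set(), []
    for e in events:
        key = (e["source"], e["target"], e["code"])
        if key not in seen:
            seen.add(key)
            unique.append(e)
    return unique

def run_daily(date=None):
    date = date or datetime.date.today()
    events = [e for story in collect_news(date) for e in code_events(story)]
    events = deduplicate(events)
    with open(f"events_{date}.csv", "w") as f:          # daily-updated output file
        for e in events:
            f.write(f"{date},{e['source']},{e['target']},{e['code']}\n")
    return events

print(run_daily())
```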

A Maximum A Posteriori Estimation Framework for Robust High Dynamic Range Video Synthesis

Title A Maximum A Posteriori Estimation Framework for Robust High Dynamic Range Video Synthesis
Authors Yuelong Li, Chul Lee, Vishal Monga
Abstract High dynamic range (HDR) image synthesis from multiple low dynamic range (LDR) exposures continues to be actively researched. The extension to HDR video synthesis is a topic of significant current interest due to potential cost benefits. For HDR video, a stiff practical challenge presents itself in the form of accurate correspondence estimation of objects between video frames. In particular, loss of data resulting from poor exposures and varying intensity makes conventional optical flow methods highly inaccurate. We avoid exact correspondence estimation by proposing a statistical approach via maximum a posteriori (MAP) estimation, and under appropriate statistical assumptions and choice of priors and models, we reduce it to an optimization problem of solving for the foreground and background of the target frame. We obtain the background through rank minimization and estimate the foreground via a novel multiscale adaptive kernel regression technique, which implicitly captures local structure and temporal motion by solving an unconstrained optimization problem. Extensive experimental results on both real and synthetic datasets demonstrate that our algorithm is more capable of delivering high-quality HDR videos than current state-of-the-art methods, under both subjective and objective assessments. Furthermore, a thorough complexity analysis reveals that our algorithm achieves better complexity-performance trade-off than conventional methods.
Tasks Image Generation, Optical Flow Estimation
Published 2016-12-08
URL http://arxiv.org/abs/1612.02761v1
PDF http://arxiv.org/pdf/1612.02761v1.pdf
PWC https://paperswithcode.com/paper/a-maximum-a-posteriori-estimation-framework
Repo
Framework
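
The background step above is a rank-minimization problem over stacked frames. A standard way to encourage low rank is singular value thresholding; the snippet below uses one such step as a generic stand-in for the paper's background estimation, and omits the multiscale kernel regression for the foreground entirely.

```python
# Hypothetical sketch: one singular-value-thresholding step as a generic
# low-rank (background) estimate across stacked frames. This is a stand-in,
# not the paper's full MAP formulation.
import numpy as np

def svt(M, tau):
    """Soft-threshold singular values of M by tau (nuclear-norm proximal step)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 64 * 64))          # 5 vectorized LDR frames
background = svt(frames, tau=10.0)              # low-rank background estimate
foreground = frames - background                # residual left for a foreground model
print(np.linalg.matrix_rank(background))
```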

Empirical Evaluation of A New Approach to Simplifying Long Short-term Memory (LSTM)

Title Empirical Evaluation of A New Approach to Simplifying Long Short-term Memory (LSTM)
Authors Yuzhen Lu
Abstract The standard LSTM, although it succeeds in modeling long-range dependencies, has a highly complex structure that can be simplified through modifications to its gate units. This paper performs an empirical comparison between the standard LSTM and three new simplified variants, obtained by eliminating the input signal, bias, and hidden unit signal from individual gates, on the task of modeling two sequence datasets. The experiments show that the three variants, with reduced parameters, can achieve performance comparable to the standard LSTM. Due attention should be paid to tuning the learning rate to achieve high accuracies.
Tasks
Published 2016-12-12
URL http://arxiv.org/abs/1612.03707v1
PDF http://arxiv.org/pdf/1612.03707v1.pdf
PWC https://paperswithcode.com/paper/empirical-evaluation-of-a-new-approach-to
Repo
Framework
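
A small numpy cell makes the kind of gate simplification discussed above concrete: the flags below drop the input signal, the bias, or the hidden-state signal from the gate pre-activations while leaving the candidate untouched. The exact variants and parameterization in the paper may differ; this is a sketch of the idea only.

```python
# Hypothetical numpy LSTM cell whose gates (i, f, o) can drop the input signal,
# the bias, or the hidden-state signal, illustrating the gate simplifications
# compared in the paper. The candidate (g) path is left intact.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, Wg, Ug, bg, Wc, Uc, bc,
              use_input=True, use_bias=True, use_hidden=True):
    """One step. (Wg, Ug, bg) stack the i, f, o gates; (Wc, Uc, bc) form the candidate."""
    z = np.zeros(3 * h.shape[0])
    if use_input:
        z += Wg @ x              # drop for the "no input signal" gate variant
    if use_hidden:
        z += Ug @ h              # drop for the "no hidden signal" gate variant
    if use_bias:
        z += bg                  # drop for the "no bias" gate variant
    i, f, o = np.split(sigmoid(z), 3)
    g = np.tanh(Wc @ x + Uc @ h + bc)       # candidate path unchanged
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_h = 8, 16
Wg, Ug, bg = rng.normal(size=(3 * d_h, d_in)), rng.normal(size=(3 * d_h, d_h)), np.zeros(3 * d_h)
Wc, Uc, bc = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
h, c = lstm_step(rng.normal(size=d_in), h, c, Wg, Ug, bg, Wc, Uc, bc, use_input=False)
```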

Message-passing algorithms for synchronization problems over compact groups

Title Message-passing algorithms for synchronization problems over compact groups
Authors Amelia Perry, Alexander S. Wein, Afonso S. Bandeira, Ankur Moitra
Abstract Various alignment problems arising in cryo-electron microscopy, community detection, time synchronization, computer vision, and other fields fall into a common framework of synchronization problems over compact groups such as Z/L, U(1), or SO(3). The goal of such problems is to estimate an unknown vector of group elements given noisy relative observations. We present an efficient iterative algorithm to solve a large class of these problems, allowing for any compact group, with measurements on multiple ‘frequency channels’ (Fourier modes, or more generally, irreducible representations of the group). Our algorithm is a highly efficient iterative method following the blueprint of approximate message passing (AMP), which has recently arisen as a central technique for inference problems such as structured low-rank estimation and compressed sensing. We augment the standard ideas of AMP with ideas from representation theory so that the algorithm can work with distributions over compact groups. Using standard but non-rigorous methods from statistical physics we analyze the behavior of our algorithm on a Gaussian noise model, identifying phases where the problem is easy, (computationally) hard, and (statistically) impossible. In particular, such evidence predicts that our algorithm is information-theoretically optimal in many cases, and that the remaining cases show evidence of statistical-to-computational gaps.
Tasks Community Detection
Published 2016-10-14
URL http://arxiv.org/abs/1610.04583v1
PDF http://arxiv.org/pdf/1610.04583v1.pdf
PWC https://paperswithcode.com/paper/message-passing-algorithms-for
Repo
Framework
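
To make the problem setting above concrete for U(1) (angular synchronization), the snippet below runs a plain projected power iteration on noisy relative observations. This is a simple spectral-style baseline, not the paper's AMP algorithm, and the noise model is a toy Gaussian one.

```python
# Hypothetical sketch: projected power iteration for U(1) synchronization from
# noisy relative measurements Y ≈ g g* + noise. A simple baseline for the
# setting, not the paper's approximate message passing method.
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 60, 0.5
theta = rng.uniform(0, 2 * np.pi, n)
g = np.exp(1j * theta)                                   # ground-truth group elements
noise = sigma * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
noise = (noise + noise.conj().T) / 2                     # Hermitian noise
Y = np.outer(g, g.conj()) + noise                        # noisy relative observations

x = np.exp(1j * rng.uniform(0, 2 * np.pi, n))            # random init on U(1)
for _ in range(50):
    x = Y @ x
    x = x / np.abs(x)                                    # project entries back onto U(1)

corr = np.abs(np.vdot(x, g)) / n                         # alignment with ground truth
print(f"correlation with truth: {corr:.2f}")
```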

On the Consistency of the Likelihood Maximization Vertex Nomination Scheme: Bridging the Gap Between Maximum Likelihood Estimation and Graph Matching

Title On the Consistency of the Likelihood Maximization Vertex Nomination Scheme: Bridging the Gap Between Maximum Likelihood Estimation and Graph Matching
Authors Vince Lyzinski, Keith Levin, Donniell E. Fishkind, Carey E. Priebe
Abstract Given a graph in which a few vertices are deemed interesting a priori, the vertex nomination task is to order the remaining vertices into a nomination list such that there is a concentration of interesting vertices at the top of the list. Previous work has yielded several approaches to this problem, with theoretical results in the setting where the graph is drawn from a stochastic block model (SBM), including a vertex nomination analogue of the Bayes optimal classifier. In this paper, we prove that maximum likelihood (ML)-based vertex nomination is consistent, in the sense that the performance of the ML-based scheme asymptotically matches that of the Bayes optimal scheme. We prove theorems of this form both when model parameters are known and unknown. Additionally, we introduce and prove consistency of a related, more scalable restricted-focus ML vertex nomination scheme. Finally, we incorporate vertex and edge features into ML-based vertex nomination and briefly explore the empirical effectiveness of this approach.
Tasks Graph Matching
Published 2016-07-05
URL http://arxiv.org/abs/1607.01369v3
PDF http://arxiv.org/pdf/1607.01369v3.pdf
PWC https://paperswithcode.com/paper/on-the-consistency-of-the-likelihood
Repo
Framework
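
To illustrate the vertex nomination task described above, the snippet below draws a toy stochastic block model and ranks the non-seed vertices by how many edges they have into the seed set. This naive connectivity baseline only makes the task concrete; it is not the paper's maximum-likelihood or restricted-focus scheme.

```python
# Hypothetical sketch: a naive vertex-nomination baseline on an SBM that ranks
# remaining vertices by their number of edges into the seed set of known
# interesting vertices.
import numpy as np

rng = np.random.default_rng(0)
n_per_block = 50
B = np.array([[0.30, 0.05],       # block connection probabilities
              [0.05, 0.10]])
blocks = np.repeat([0, 1], n_per_block)
P = B[np.ix_(blocks, blocks)]
A = (rng.uniform(size=P.shape) < P).astype(int)
A = np.triu(A, 1); A = A + A.T                            # undirected SBM adjacency

seeds = np.arange(5)                                      # known interesting vertices (block 0)
rest = np.arange(5, 2 * n_per_block)
scores = A[np.ix_(rest, seeds)].sum(axis=1)               # edges into the seed set
nomination_list = rest[np.argsort(-scores)]               # most promising vertices first
print(nomination_list[:10])
```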

Fusing Audio, Textual and Visual Features for Sentiment Analysis of News Videos

Title Fusing Audio, Textual and Visual Features for Sentiment Analysis of News Videos
Authors Moisés H. R. Pereira, Flávio L. C. Pádua, Adriano C. M. Pereira, Fabrício Benevenuto, Daniel H. Dalip
Abstract This paper presents a novel approach to perform sentiment analysis of news videos, based on the fusion of audio, textual and visual clues extracted from their contents. The proposed approach aims at contributing to the semiodiscoursive study regarding the construction of the ethos (identity) of this media universe, which has become a central part of the modern-day lives of millions of people. To achieve this goal, we apply state-of-the-art computational methods for (1) automatic emotion recognition from facial expressions, (2) extraction of modulations in the participants’ speeches and (3) sentiment analysis from the closed caption associated with the videos of interest. More specifically, we compute features such as visual intensities of recognized emotions, field sizes of participants, voicing probability, sound loudness, speech fundamental frequencies and the sentiment scores (polarities) from text sentences in the closed caption. Experimental results with a dataset containing 520 annotated news videos from three Brazilian and one American popular TV newscasts show that our approach achieves an accuracy of up to 84% in the sentiments (tension levels) classification task, thus demonstrating its high potential to be used by media analysts in several applications, especially in the journalistic domain.
Tasks 3D Human Pose Estimation, Emotion Recognition, Sentiment Analysis
Published 2016-04-09
URL http://arxiv.org/abs/1604.02612v1
PDF http://arxiv.org/pdf/1604.02612v1.pdf
PWC https://paperswithcode.com/paper/fusing-audio-textual-and-visual-features-for
Repo
Framework
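
The fusion step above reduces to combining per-modality features into one representation and classifying tension levels. The sketch below shows early (feature-level) fusion with random placeholder features; the upstream extraction (emotion recognition, speech modulation, caption sentiment) is assumed to have been done already and is not reproduced here.

```python
# Hypothetical sketch of feature-level fusion: concatenate audio, textual and
# visual features per news segment and train one classifier on tension levels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 520                                           # one row per annotated news segment
audio = rng.normal(size=(n, 4))                   # e.g. loudness, voicing prob., F0 stats
text = rng.normal(size=(n, 2))                    # e.g. closed-caption polarity scores
visual = rng.normal(size=(n, 6))                  # e.g. emotion intensities, field sizes
labels = rng.integers(0, 2, size=n)               # tension level (binary for the sketch)

X = np.hstack([audio, text, visual])              # feature-level fusion
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=5).mean())
```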

Unsupervised Learning of Predictors from Unpaired Input-Output Samples

Title Unsupervised Learning of Predictors from Unpaired Input-Output Samples
Authors Jianshu Chen, Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng
Abstract Unsupervised learning is the most challenging problem in machine learning and especially in deep learning. Among many scenarios, we study an unsupervised learning problem of high economic value — learning to predict without costly pairing of input data and corresponding labels. Part of the difficulty in this problem is a lack of solid evaluation measures. In this paper, we take a practical approach to grounding unsupervised learning by using the same success criterion as for supervised learning in prediction tasks but we do not require the presence of paired input-output training data. In particular, we propose an objective function that aims to make the predicted outputs fit well the structure of the output while preserving the correlation between the input and the predicted output. We experiment with a synthetic structural prediction problem and show that even with simple linear classifiers, the objective function is already highly non-convex. We further demonstrate the nature of this non-convex optimization problem as well as potential solutions. In particular, we show that with regularization via a generative model, learning with the proposed unsupervised objective function converges to an optimal solution.
Tasks
Published 2016-06-15
URL http://arxiv.org/abs/1606.04646v1
PDF http://arxiv.org/pdf/1606.04646v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-learning-of-predictors-from
Repo
Framework

Cascaded Neural Networks with Selective Classifiers and its evaluation using Lung X-ray CT Images

Title Cascaded Neural Networks with Selective Classifiers and its evaluation using Lung X-ray CT Images
Authors Masaharu Sakamoto, Hiroki Nakano
Abstract Lung nodule detection is a class-imbalanced problem because nodules are found with much lower frequency than non-nodules. In class-imbalanced problems, conventional classifiers tend to be overwhelmed by the majority class and ignore the minority class. We therefore propose cascaded convolutional neural networks to cope with the class imbalance. In the proposed approach, cascaded convolutional neural networks that perform as selective classifiers filter out obvious non-nodules. Subsequently, a convolutional neural network trained with a balanced data set calculates nodule probabilities. The proposed method achieved detection sensitivities of 85.3% and 90.7% at 1 and 4 false positives per scan on the FROC curve, respectively.
Tasks Lung Nodule Detection
Published 2016-11-22
URL http://arxiv.org/abs/1611.07136v1
PDF http://arxiv.org/pdf/1611.07136v1.pdf
PWC https://paperswithcode.com/paper/cascaded-neural-networks-with-selective
Repo
Framework
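
The cascade idea above, a high-sensitivity first stage that discards obvious negatives followed by a second stage trained on balanced data, can be sketched with simple classifiers. The logistic models below stand in for the paper's CNNs, and the threshold and data are illustrative.

```python
# Hypothetical sketch of a two-stage cascade for class imbalance: stage 1 is a
# selective classifier with a low threshold (keep nearly all positives), and
# stage 2 is trained on a balanced subset of the survivors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pos, n_neg = 200, 10000                              # heavy class imbalance
X = np.vstack([rng.normal(1.0, 1.0, size=(n_pos, 16)),
               rng.normal(0.0, 1.0, size=(n_neg, 16))])
y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])

# Stage 1: filter out obvious non-nodules while keeping sensitivity high.
stage1 = LogisticRegression(max_iter=1000).fit(X, y)
keep = stage1.predict_proba(X)[:, 1] > 0.02

# Stage 2: train on a balanced sample of the survivors, then score them.
Xk, yk = X[keep], y[keep]
pos, neg = np.where(yk == 1)[0], np.where(yk == 0)[0]
neg_bal = rng.choice(neg, size=min(len(pos), len(neg)), replace=False)
bal = np.concatenate([pos, neg_bal])
stage2 = LogisticRegression(max_iter=1000).fit(Xk[bal], yk[bal])
nodule_prob = stage2.predict_proba(Xk)[:, 1]
print(keep.mean(), nodule_prob[:5])
```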

$\ell_p$-Box ADMM: A Versatile Framework for Integer Programming

Title $\ell_p$-Box ADMM: A Versatile Framework for Integer Programming
Authors Baoyuan Wu, Bernard Ghanem
Abstract This paper revisits the integer programming (IP) problem, which plays a fundamental role in many computer vision and machine learning applications. The literature abounds with many seminal works that address this problem, some focusing on continuous approaches (e.g., linear program relaxation) while others focus on discrete ones (e.g., min-cut). However, a limited number of them are designed to handle the general IP form, and even these methods cannot adequately satisfy the simultaneous requirements of accuracy, feasibility, and scalability. To this end, we propose a novel and versatile framework called $\ell_p$-box ADMM, which is based on two parts. (1) The discrete constraint is equivalently replaced by the intersection of a box and an $(n-1)$-dimensional sphere (defined through the $\ell_p$ norm). (2) We infuse this equivalence into the ADMM (Alternating Direction Method of Multipliers) framework to handle these continuous constraints separately and to harness its attractive properties. More importantly, the ADMM update steps can lead to manageable sub-problems in the continuous domain. To demonstrate its efficacy, we consider an instance of the framework, namely $\ell_2$-box ADMM applied to binary quadratic programming (BQP). Here, the ADMM steps are simple, computationally efficient, and theoretically guaranteed to converge to a KKT point. We demonstrate the applicability of $\ell_2$-box ADMM on three important applications: MRF energy minimization, graph matching, and clustering. Results clearly show that it significantly outperforms existing generic IP solvers both in runtime and objective. It also achieves very competitive performance vs. state-of-the-art methods specific to these applications.
Tasks Graph Matching
Published 2016-04-26
URL http://arxiv.org/abs/1604.07666v3
PDF http://arxiv.org/pdf/1604.07666v3.pdf
PWC https://paperswithcode.com/paper/ell_p-box-admm-a-versatile-framework-for
Repo
Framework
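
The box-and-sphere splitting described above lends itself to a compact sketch: $\{0,1\}^n$ equals the intersection of the box $[0,1]^n$ and the sphere of radius $\sqrt{n}/2$ around $\tfrac{1}{2}\mathbf{1}$, and ADMM alternates a quadratic update with projections onto each set. The penalty, iteration count, and rounding below are simplified assumptions, not the paper's tuned settings.

```python
# Hypothetical sketch of l2-box ADMM for a small binary quadratic program
# min_x x^T A x + b^T x, x in {0,1}^n, via the box-and-sphere splitting.
import numpy as np

rng = np.random.default_rng(0)
n, rho = 20, 5.0
A = rng.normal(size=(n, n)); A = (A + A.T) / 2
b = rng.normal(size=n)

c = 0.5 * np.ones(n)                     # sphere center
r = np.sqrt(n) / 2.0                     # sphere radius: {0,1}^n = box ∩ sphere

x = z1 = z2 = 0.5 * np.ones(n)
u1 = u2 = np.zeros(n)
lhs = 2 * A + 2 * rho * np.eye(n)        # constant left-hand side of the x-update

for _ in range(300):
    rhs = -b + rho * (z1 - u1) + rho * (z2 - u2)
    x = np.linalg.solve(lhs, rhs)                     # quadratic x-update
    z1 = np.clip(x + u1, 0.0, 1.0)                    # projection onto the box
    v = x + u2 - c
    z2 = c + r * v / max(np.linalg.norm(v), 1e-12)    # projection onto the sphere
    u1 = u1 + (x - z1)                                # dual updates
    u2 = u2 + (x - z2)

x_bin = (x > 0.5).astype(int)                         # round to a binary solution
print(x_bin, x_bin @ A @ x_bin + b @ x_bin)
```

The appeal of the splitting is visible here: every sub-problem is either a linear solve or a closed-form projection.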

Poisson Noise Reduction with Higher-order Natural Image Prior Model

Title Poisson Noise Reduction with Higher-order Natural Image Prior Model
Authors Wensen Feng, Hong Qiao, Yunjin Chen
Abstract Poisson denoising is an essential issue for various imaging applications, such as night vision, medical imaging and microscopy. In recent years, state-of-the-art approaches have been clearly dominated by patch-based non-local methods. In this paper, we aim to propose a local Poisson denoising model with both structural simplicity and good performance. To this end, we consider a variational model that integrates the so-called Fields of Experts (FoE) image prior, which has proven to be an effective higher-order Markov Random Field (MRF) model for many classic image restoration problems. We exploit several feasible variational variants for this task. We start with direct modeling in the original image domain, taking into account the Poisson noise statistics, which performs generally well in cases of high SNR. However, this strategy encounters problems in cases of low SNR. We then turn to an alternative modeling strategy that uses the Anscombe transform and a data term derived from Gaussian statistics, and retrain the FoE prior model directly in the transform domain. With the newly trained FoE model, we end up with a local variational model providing strongly competitive results against state-of-the-art non-local approaches, while retaining a simple structure. Furthermore, our proposed model comes with an additional advantage: inference is very efficient, as it is well-suited for parallel computation on GPUs. For images of size $512 \times 512$, our GPU implementation takes less than 1 second to produce state-of-the-art Poisson denoising performance.
Tasks Denoising, Image Restoration
Published 2016-09-19
URL http://arxiv.org/abs/1609.05722v1
PDF http://arxiv.org/pdf/1609.05722v1.pdf
PWC https://paperswithcode.com/paper/poisson-noise-reduction-with-higher-order
Repo
Framework
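
The transform-domain strategy above rests on the Anscombe transform, $f(x) = 2\sqrt{x + 3/8}$, which approximately converts Poisson noise into unit-variance Gaussian noise so that a Gaussian-statistics data term applies. The sketch below shows that pipeline with a plain Gaussian filter as a crude stand-in for the retrained FoE prior, and a simple algebraic inverse rather than an unbiased one.

```python
# Hypothetical sketch: Anscombe transform -> Gaussian-domain denoising
# (a Gaussian filter standing in for the FoE prior) -> algebraic inverse.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(128), np.hanning(128)) * 20   # nonnegative toy image
noisy = rng.poisson(clean).astype(float)                  # Poisson-corrupted image

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0                     # simple algebraic inverse

transformed = anscombe(noisy)                 # noise now roughly unit-variance Gaussian
denoised_t = gaussian_filter(transformed, sigma=1.5)
denoised = inverse_anscombe(denoised_t)
print(np.mean((denoised - clean) ** 2), np.mean((noisy - clean) ** 2))
```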

Online Algorithms For Parameter Mean And Variance Estimation In Dynamic Regression Models

Title Online Algorithms For Parameter Mean And Variance Estimation In Dynamic Regression Models
Authors Carlos Alberto Gomez-Uribe
Abstract We study the problem of estimating the parameters of a regression model from a set of observations, each consisting of a response and a predictor. The response is assumed to be related to the predictor via a regression model of unknown parameters. Often, in such models the parameters to be estimated are assumed to be constant. Here we consider the more general scenario where the parameters are allowed to evolve over time, a more natural assumption for many applications. We model these dynamics via a linear update equation with additive noise that is often used in a wide range of engineering applications, particularly in the well-known and widely used Kalman filter (where the system state it seeks to estimate maps to the parameter values here). We derive an approximate algorithm to estimate both the mean and the variance of the parameter estimates in an online fashion for a generic regression model. This algorithm turns out to be equivalent to the extended Kalman filter. We specialize our algorithm to the multivariate exponential family distribution to obtain a generalization of the generalized linear model (GLM). Because the common regression models encountered in practice such as logistic, exponential and multinomial all have observations modeled through an exponential family distribution, our results are used to easily obtain algorithms for online mean and variance parameter estimation for all these regression models in the context of time-dependent parameters. Lastly, we propose to use these algorithms in the contextual multi-armed bandit scenario, where so far model parameters are assumed static and observations univariate and Gaussian or Bernoulli. Both of these restrictions can be relaxed using the algorithms described here, which we combine with Thompson sampling to show the resulting performance on a simulation.
Tasks
Published 2016-05-18
URL http://arxiv.org/abs/1605.05697v1
PDF http://arxiv.org/pdf/1605.05697v1.pdf
PWC https://paperswithcode.com/paper/online-algorithms-for-parameter-mean-and
Repo
Framework
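
The abstract above notes that the online mean/variance update for time-varying parameters turns out to be the extended Kalman filter. The sketch below shows that style of update for a logistic (Bernoulli exponential-family) regression with random-walk parameter drift, followed by a Thompson-sampling draw; the drift covariance, prior, and data are illustrative choices.

```python
# Hypothetical sketch of an EKF-style online update for logistic regression
# with parameters that drift over time via a random walk.
import numpy as np

rng = np.random.default_rng(0)
d = 3
theta_hat = np.zeros(d)                # posterior mean of the parameters
P = np.eye(d)                          # posterior covariance of the parameters
Q = 1e-4 * np.eye(d)                   # random-walk (parameter drift) covariance

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta_true = np.array([1.0, -0.5, 0.25])
for t in range(2000):
    x = rng.normal(size=d)                                # predictor
    y = float(rng.uniform() < sigmoid(x @ theta_true))    # Bernoulli response

    P = P + Q                                   # predict: parameters drift
    mu = sigmoid(x @ theta_hat)                 # predicted mean response
    H = mu * (1 - mu) * x                       # Jacobian of the observation model
    S = H @ P @ H + mu * (1 - mu)               # innovation variance (obs. noise = mu(1-mu))
    K = P @ H / S                               # Kalman gain
    theta_hat = theta_hat + K * (y - mu)        # update mean
    P = P - np.outer(K, H @ P)                  # update covariance

print(theta_hat)                                # should track theta_true

# For a contextual bandit, Thompson sampling draws a parameter sample per round:
theta_sample = rng.multivariate_normal(theta_hat, P)
```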

Redefining part-of-speech classes with distributional semantic models

Title Redefining part-of-speech classes with distributional semantic models
Authors Andrey Kutuzov, Erik Velldal, Lilja Øvrelid
Abstract This paper studies how word embeddings trained on the British National Corpus interact with part of speech boundaries. Our work targets the Universal PoS tag set, which is currently actively being used for annotation of a range of languages. We experiment with training classifiers for predicting PoS tags for words based on their embeddings. The results show that the information about PoS affiliation contained in the distributional vectors allows us to discover groups of words with distributional patterns that differ from other words of the same part of speech. This data often reveals hidden inconsistencies of the annotation process or guidelines. At the same time, it supports the notion of ‘soft’ or ‘graded’ part of speech affiliations. Finally, we show that information about PoS is distributed among dozens of vector components, not limited to only one or two features.
Tasks Word Embeddings
Published 2016-08-12
URL http://arxiv.org/abs/1608.03803v1
PDF http://arxiv.org/pdf/1608.03803v1.pdf
PWC https://paperswithcode.com/paper/redefining-part-of-speech-classes-with
Repo
Framework
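
The experimental setup above is essentially a supervised probe: predict a word's PoS tag from its embedding and inspect where the classifier and the annotation disagree. The sketch below uses random placeholder vectors and tags in place of BNC-trained embeddings and Universal PoS labels.

```python
# Hypothetical sketch: train a classifier to predict a word's PoS tag from its
# distributional vector and collect disagreements with the annotation, which
# are candidates for 'soft' or 'graded' PoS membership.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, dim = 2000, 100
embeddings = rng.normal(size=(n_words, dim))          # stand-in word vectors
pos_tags = rng.integers(0, 4, size=n_words)           # stand-in Universal PoS labels

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, pos_tags, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)

pred = clf.predict(X_te)
disagreements = np.where(pred != y_te)[0]             # words the probe classifies differently
print(clf.score(X_te, y_te), len(disagreements))
```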

Sim-to-Real Robot Learning from Pixels with Progressive Nets

Title Sim-to-Real Robot Learning from Pixels with Progressive Nets
Authors Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
Abstract Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
Tasks
Published 2016-10-13
URL http://arxiv.org/abs/1610.04286v2
PDF http://arxiv.org/pdf/1610.04286v2.pdf
PWC https://paperswithcode.com/paper/sim-to-real-robot-learning-from-pixels-with
Repo
Framework
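
The transfer mechanism described above is the progressive-network column: a column trained in simulation is frozen, and a new column for the real robot receives lateral connections from it. The PyTorch sketch below shows that wiring for a two-column, single-hidden-layer case; layer sizes and the single lateral adapter are illustrative, not the paper's architecture.

```python
# Hypothetical PyTorch sketch of a two-column progressive network: column 1
# (simulation) is frozen, column 2 (real robot) reuses its features through a
# lateral connection.
import torch
import torch.nn as nn

class ProgressiveColumns(nn.Module):
    def __init__(self, obs_dim=32, hidden=64, n_actions=4):
        super().__init__()
        # Column 1: trained in simulation, then frozen.
        self.col1_h = nn.Linear(obs_dim, hidden)
        self.col1_out = nn.Linear(hidden, n_actions)
        # Column 2: trained on the real robot, reusing column 1's features.
        self.col2_h = nn.Linear(obs_dim, hidden)
        self.lateral = nn.Linear(hidden, hidden)      # lateral adapter from column 1
        self.col2_out = nn.Linear(hidden, n_actions)

    def freeze_column1(self):
        for p in [*self.col1_h.parameters(), *self.col1_out.parameters()]:
            p.requires_grad_(False)

    def forward(self, obs):
        h1 = torch.relu(self.col1_h(obs))             # frozen simulation features
        h2 = torch.relu(self.col2_h(obs) + self.lateral(h1))
        return self.col2_out(h2)                      # real-robot policy logits

net = ProgressiveColumns()
net.freeze_column1()
logits = net(torch.randn(8, 32))                      # batch of observations
print(logits.shape)
```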