October 16, 2019

2856 words 14 mins read

Paper Group ANR 1100



First-order Newton-type Estimator for Distributed Estimation and Inference

Title First-order Newton-type Estimator for Distributed Estimation and Inference
Authors Xi Chen, Weidong Liu, Yichen Zhang
Abstract This paper studies distributed estimation and inference for a general statistical problem with a convex loss that could be non-differentiable. For the purpose of efficient computation, we restrict ourselves to stochastic first-order optimization, which enjoys low per-iteration complexity. To motivate the proposed method, we first investigate the theoretical properties of a straightforward Divide-and-Conquer Stochastic Gradient Descent (DC-SGD) approach. Our theory shows that there is a restriction on the number of machines and this restriction becomes more stringent when the dimension $p$ is large. To overcome this limitation, this paper proposes a new multi-round distributed estimation procedure that approximates the Newton step only using stochastic subgradient. The key component in our method is the proposal of a computationally efficient estimator of $\Sigma^{-1} w$, where $\Sigma$ is the population Hessian matrix and $w$ is any given vector. Instead of estimating $\Sigma$ (or $\Sigma^{-1}$) that usually requires the second-order differentiability of the loss, the proposed First-Order Newton-type Estimator (FONE) directly estimates the vector of interest $\Sigma^{-1} w$ as a whole and is applicable to non-differentiable losses. Our estimator also facilitates the inference for the empirical risk minimizer. It turns out that the key term in the limiting covariance has the form of $\Sigma^{-1} w$, which can be estimated by FONE.
Tasks
Published 2018-11-28
URL http://arxiv.org/abs/1811.11368v1
PDF http://arxiv.org/pdf/1811.11368v1.pdf
PWC https://paperswithcode.com/paper/first-order-newton-type-estimator-for
Repo
Framework
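The core idea can be sketched numerically. The following toy (not the paper's FONE, just the underlying intuition for a squared loss, where $\Sigma = E[xx^T]$) estimates $d = \Sigma^{-1} w$ with a stochastic fixed-point iteration that only touches per-sample matrix-vector products, never forming or inverting $\Sigma$; the dimensions, step size, and use of Polyak averaging are illustrative choices:

```python
import numpy as np

# Toy sketch (not the paper's FONE): estimate d = Sigma^{-1} w without
# forming Sigma, via the stochastic fixed-point iteration
#   d <- d - eta * (x (x^T d) - w),  where E[x x^T] = Sigma,
# which needs only per-sample matrix-vector products.
rng = np.random.default_rng(0)
p, n = 5, 40000
X = rng.normal(size=(n, p))          # stream of samples; Sigma is close to I
w = rng.normal(size=p)

d = np.zeros(p)
d_bar = np.zeros(p)                  # Polyak averaging stabilizes the iterate
eta = 0.02
for t, x in enumerate(X, start=1):
    d -= eta * (x * (x @ d) - w)
    d_bar += (d - d_bar) / t

d_star = np.linalg.solve(X.T @ X / n, w)   # direct solve, for comparison only
err = np.linalg.norm(d_bar - d_star) / np.linalg.norm(d_star)
```

The averaged iterate `d_bar` tracks `d_star` closely even though the loop never sees more than one sample at a time, which is what makes this style of estimator attractive in a distributed, first-order setting.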

An Online Algorithm for Learning Buyer Behavior under Realistic Pricing Restrictions

Title An Online Algorithm for Learning Buyer Behavior under Realistic Pricing Restrictions
Authors Debjyoti Saharoy, Theja Tulabandhula
Abstract We propose a new efficient online algorithm to learn the parameters governing the purchasing behavior of a utility maximizing buyer, who responds to prices, in a repeated interaction setting. The key feature of our algorithm is that it can learn even non-linear buyer utility while working with arbitrary price constraints that the seller may impose. This overcomes a major shortcoming of previous approaches, which use unrealistic prices to learn these parameters, making them unsuitable in practice.
Tasks
Published 2018-03-06
URL http://arxiv.org/abs/1803.01968v1
PDF http://arxiv.org/pdf/1803.01968v1.pdf
PWC https://paperswithcode.com/paper/an-online-algorithm-for-learning-buyer
Repo
Framework
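The repeated-interaction learning loop can be illustrated with a deliberately simplified toy (far simpler than the paper's non-linear-utility setting): a buyer with an unknown scalar valuation purchases whenever the valuation exceeds the posted price, so buy/no-buy feedback lets the seller bisect on the valuation:

```python
# Toy sketch, far simpler than the paper's setting: a buyer with an
# unknown scalar valuation v purchases whenever v >= posted price.
# Repeated buy/no-buy feedback lets the seller bisect on v.
def learn_valuation(buyer_buys, lo=0.0, hi=1.0, rounds=30):
    for _ in range(rounds):
        price = (lo + hi) / 2
        if buyer_buys(price):   # buyer responds to the posted price
            lo = price          # valuation is at least this price
        else:
            hi = price          # valuation is below this price
    return (lo + hi) / 2

v_true = 0.62                   # hidden from the seller
estimate = learn_valuation(lambda p: v_true >= p)
```

Note that this bisection uses arbitrary probe prices; the paper's contribution is precisely to learn under realistic price restrictions, where such free probing is not allowed.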

Plan-And-Write: Towards Better Automatic Storytelling

Title Plan-And-Write: Towards Better Automatic Storytelling
Authors Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, Rui Yan
Abstract Automatic storytelling is challenging since it requires generating long, coherent natural language to describe a sensible sequence of events. Despite considerable efforts on automatic story generation in the past, prior work is either restricted in plot planning or can only generate stories in a narrow domain. In this paper, we explore open-domain story generation that writes stories given a title (topic) as input. We propose a plan-and-write hierarchical generation framework that first plans a storyline and then generates a story based on the storyline. We compare two planning strategies: the dynamic schema interweaves story planning and its surface realization in text, while the static schema plans out the entire storyline before generating stories. Experiments show that with explicit storyline planning, the generated stories are more diverse, coherent, and on topic than those generated without creating a full plan, according to both automatic and human evaluations.
Tasks
Published 2018-11-14
URL http://arxiv.org/abs/1811.05701v3
PDF http://arxiv.org/pdf/1811.05701v3.pdf
PWC https://paperswithcode.com/paper/plan-and-write-towards-better-automatic
Repo
Framework
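The static schema's two-stage structure can be mocked up with a trivial keyword planner and template realizer (the paper uses neural models for both stages; the transitions and template below are made up):

```python
# Toy illustration of the static schema (the paper uses neural models):
# first plan the entire storyline as keywords, then realize each keyword
# into a sentence. Keyword transitions and the template are made up here.
PLAN = {"camping": "forest", "forest": "storm", "storm": "shelter", "shelter": "morning"}
TEMPLATE = "The story continues with {kw}."

def plan_storyline(title, length=5):
    line, kw = [], title
    for _ in range(length):      # plan the whole storyline up front
        line.append(kw)
        kw = PLAN.get(kw, kw)
    return line

def write_story(storyline):
    # surface realization happens only after planning is complete
    return [TEMPLATE.format(kw=kw) for kw in storyline]

storyline = plan_storyline("camping")
story = write_story(storyline)
```

The dynamic schema would instead interleave the two loops, choosing the next keyword conditioned on the sentences generated so far.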

Approximate Bayesian Computation via Population Monte Carlo and Classification

Title Approximate Bayesian Computation via Population Monte Carlo and Classification
Authors Charlie Rogers-Smith, Henri Pesonen, Samuel Kaski
Abstract Approximate Bayesian computation (ABC) methods can be used to sample from posterior distributions when the likelihood function is unavailable or intractable, as is often the case in biological systems. ABC methods suffer from inefficient particle proposals in high dimensions, and subjectivity in the choice of summary statistics, discrepancy measure, and error tolerance. Sequential Monte Carlo (SMC) methods have been combined with ABC to improve the efficiency of particle proposals, but suffer from subjectivity and require many simulations from the likelihood function. Likelihood-Free Inference by Ratio Estimation (LFIRE) leverages classification to estimate the posterior density directly but does not explore the parameter space efficiently. This work proposes a classification approach that approximates population Monte Carlo (PMC), where model class probabilities from classification are used to update particle weights. This approach, called Classification-PMC, blends adaptive proposals and classification, efficiently producing samples from the posterior without subjectivity. We show through a simulation study that Classification-PMC outperforms two state-of-the-art methods, ratio estimation and SMC ABC, when it is computationally difficult to simulate from the likelihood.
Tasks
Published 2018-10-29
URL https://arxiv.org/abs/1810.12233v2
PDF https://arxiv.org/pdf/1810.12233v2.pdf
PWC https://paperswithcode.com/paper/approximate-bayesian-computation-via
Repo
Framework
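A standard ABC-PMC loop (the Beaumont-style baseline that Classification-PMC builds on) can be sketched for inferring the mean of a Gaussian; the classifier-based weighting that is the paper's contribution is omitted here, using a plain distance threshold instead, and the prior, tolerance schedule, and particle count are illustrative:

```python
import numpy as np

# Standard ABC-PMC sketch for the mean of a Gaussian. The paper replaces
# the accept/reject distance step with classifier probabilities; here a
# plain distance threshold with a shrinking tolerance is used instead.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=100)
s_obs = data.mean()                      # summary statistic of the data

N = 300
theta = rng.uniform(-5, 5, N)            # initial particles from the prior
w = np.full(N, 1.0 / N)
for eps in [1.0, 0.5, 0.25]:             # shrinking tolerance schedule
    mu = np.average(theta, weights=w)
    var = np.average((theta - mu) ** 2, weights=w)
    tau = np.sqrt(2 * var)               # perturbation kernel scale
    new_theta, new_w = np.empty(N), np.empty(N)
    for i in range(N):
        while True:                      # propose until one is accepted
            cand = rng.choice(theta, p=w) + rng.normal(0, tau)
            sim = rng.normal(cand, 1.0, 100).mean()   # simulate from the model
            if abs(sim - s_obs) < eps and -5 <= cand <= 5:
                break
        kern = np.exp(-(cand - theta) ** 2 / (2 * tau ** 2))
        new_w[i] = 1.0 / np.dot(w, kern)  # flat prior: weight = 1 / proposal
        new_theta[i] = cand
    theta, w = new_theta, new_w / new_w.sum()

posterior_mean = np.dot(w, theta)
```

Each round reweights particles by prior over mixture-proposal density, which is exactly the slot where Classification-PMC substitutes model class probabilities from a classifier.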

Automated segmentaiton and classification of arterioles and venules using Cascading Dilated Convolutional Neural Networks

Title Automated segmentaiton and classification of arterioles and venules using Cascading Dilated Convolutional Neural Networks
Authors Meng Li, Yan Zhang, Haicheng She, Jinqiong Zhou, Jia Jia, Danmei He, Li Zhang
Abstract The change of retinal vasculature is an early sign of many vascular and systemic diseases, such as diabetes and hypertension. The differing behaviors of retinal arterioles and venules form an important metric for measuring disease severity; accurate classification of arterioles and venules is therefore essential. In this work, we propose a novel deep convolutional neural network architecture for segmenting and classifying arterioles and venules on retinal fundus images. This network takes the original color fundus image as input and produces multi-class labels as output. We adopt the encoding-decoding structure (U-Net) as the backbone network of our proposed model. To improve the classification accuracy, we develop a special encoding path that couples InceptionV4 modules and Cascading Dilated Convolutions (CDCs) on top of the backbone network. The model is thus able to extract and fuse high-level semantic features from multi-scale receptive fields. The proposed method outperforms the previous state-of-the-art method on the DRIVE dataset with an accuracy of 0.955 $\pm$ 0.002.
Tasks
Published 2018-12-01
URL http://arxiv.org/abs/1812.00137v1
PDF http://arxiv.org/pdf/1812.00137v1.pdf
PWC https://paperswithcode.com/paper/automated-segmentaiton-and-classification-of
Repo
Framework
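The reason cascading dilated convolutions capture multi-scale context is a matter of receptive-field arithmetic: each layer widens the field by (kernel − 1) × dilation without adding parameters. A 1-D illustration (the paper's CDC modules operate on 2-D fundus images):

```python
import numpy as np

# Receptive-field arithmetic for a stack of dilated convolutions
# (1-D illustration; the paper's CDC modules work on 2-D images).
def receptive_field(kernel_size, dilations):
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d   # each layer widens the field
    return rf

def dilated_conv1d(x, kernel, dilation):
    # 'valid' 1-D convolution with holes of size `dilation` between taps
    k, span = len(kernel), (len(kernel) - 1) * dilation
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span)
    ])
```

Stacking 3-tap layers with dilations 1, 2, 4 already yields a 15-pixel receptive field, versus 7 for three ordinary 3-tap layers.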

Best Arm Identification in Linked Bandits

Title Best Arm Identification in Linked Bandits
Authors Anant Gupta
Abstract We consider the problem of best arm identification in a variant of multi-armed bandits called linked bandits. In a single interaction with linked bandits, multiple arms are played sequentially until one of them receives a positive reward. Since each interaction provides feedback about more than one arm, the sample complexity can be much lower than in the regular bandit setting. We propose an algorithm for linked bandits that combines a novel subroutine for uniform sampling with a known optimal algorithm for regular bandits. We prove almost matching upper and lower bounds on the sample complexity of best arm identification in linked bandits. These bounds have an interesting structure, with an explicit dependence on the mean rewards of the arms, not just the gaps. We also corroborate our theoretical results with experiments.
Tasks Multi-Armed Bandits
Published 2018-11-19
URL http://arxiv.org/abs/1811.07476v2
PDF http://arxiv.org/pdf/1811.07476v2.pdf
PWC https://paperswithcode.com/paper/best-arm-identification-in-linked-bandits
Repo
Framework
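The feedback model is easy to simulate. In each interaction, arms are played in a fixed order until one pays off, so a single interaction yields a Bernoulli observation for every arm reached; the arm means below are made up for illustration:

```python
import numpy as np

# Sketch of the linked-bandit feedback model: arms are played in a fixed
# order until one returns a positive reward, so a single interaction
# yields a Bernoulli observation for every arm reached.
rng = np.random.default_rng(0)
means = np.array([0.2, 0.5, 0.8])   # illustrative arm means
reached = np.zeros(3)
successes = np.zeros(3)
for _ in range(20000):
    for i, m in enumerate(means):
        reached[i] += 1
        if rng.random() < m:        # positive reward ends the interaction
            successes[i] += 1
            break

estimates = successes / reached      # per-arm empirical means
best_arm = int(np.argmax(estimates))
```

Note how later arms are reached less often (here arm 2 only when arms 0 and 1 both fail), which is why the paper's sample-complexity bounds depend on the mean rewards themselves and not just the gaps.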

The Algorithm Selection Competitions 2015 and 2017

Title The Algorithm Selection Competitions 2015 and 2017
Authors Marius Lindauer, Jan N. van Rijn, Lars Kotthoff
Abstract The algorithm selection problem is to choose the most suitable algorithm for solving a given problem instance. It leverages the complementarity between different approaches that is present in many areas of AI. We report on the state of the art in algorithm selection, as defined by the Algorithm Selection competitions in 2015 and 2017. The results of these competitions show how the state of the art improved over the years. We show that although performance in some cases is very good, there is still room for improvement in other cases. Finally, we provide insights into why some scenarios are hard, and pose challenges to the community on how to advance the current state of the art.
Tasks
Published 2018-05-03
URL http://arxiv.org/abs/1805.01214v2
PDF http://arxiv.org/pdf/1805.01214v2.pdf
PWC https://paperswithcode.com/paper/the-algorithm-selection-competitions-2015-and
Repo
Framework
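The per-instance selection setup that the competitions evaluate can be sketched with a deliberately tiny model: predict each algorithm's runtime from an instance feature with 1-nearest-neighbour and pick the predicted-fastest. The two synthetic "algorithms" below are made up to be complementary:

```python
import numpy as np

# Toy per-instance algorithm selection: predict each algorithm's runtime
# from an instance feature with 1-nearest-neighbour and pick the
# predicted-fastest. The two synthetic algorithms are complementary.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 200)
runtimes_train = np.stack([x_train, 1.0 - x_train], axis=1)  # algo A, algo B

def select(x_new):
    j = np.argmin(np.abs(x_train - x_new))      # nearest training instance
    return int(np.argmin(runtimes_train[j]))    # its fastest algorithm

x_test = rng.uniform(0, 1, 500)
runtimes_test = np.stack([x_test, 1.0 - x_test], axis=1)
selected = np.array([runtimes_test[i, select(x)] for i, x in enumerate(x_test)])
single_best = runtimes_test.mean(axis=0).min()  # best single fixed algorithm
```

Because the algorithms are complementary, the selector's mean runtime beats the single-best baseline, which is exactly the gap (virtual best solver vs. single best) that the competition scenarios measure.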

A deep learning framework for segmentation of retinal layers from OCT images

Title A deep learning framework for segmentation of retinal layers from OCT images
Authors Karthik Gopinath, Samrudhdhi B Rangrej, Jayanthi Sivaswamy
Abstract Segmentation of retinal layers from Optical Coherence Tomography (OCT) volumes is a fundamental problem for any computer-aided diagnostic algorithm development. It requires preprocessing steps such as denoising, region-of-interest extraction, flattening, and edge detection, all of which involve separate parameter tuning. In this paper, we explore deep learning techniques to automate all these steps and handle the presence/absence of pathologies. The proposed model combines a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network: the CNN extracts the layers of interest and their edges, while the LSTM traces the layer boundary. This model is trained on a mixture of normal and AMD cases using minimal data. Validation results on three public datasets show that the pixel-wise mean absolute error obtained with our system is 1.30 $\pm$ 0.48, lower than the inter-marker error of 1.79 $\pm$ 0.76. Our model’s performance is also on par with existing methods.
Tasks Denoising, Edge Detection
Published 2018-06-22
URL http://arxiv.org/abs/1806.08859v1
PDF http://arxiv.org/pdf/1806.08859v1.pdf
PWC https://paperswithcode.com/paper/a-deep-learning-framework-for-segmentation-of
Repo
Framework
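Boundary tracing itself can be done classically as a shortest path through an edge-cost image, which makes a useful mental model for what the LSTM tracer learns (this dynamic-programming stand-in is not the paper's method):

```python
import numpy as np

# Classical stand-in for the paper's LSTM tracer: trace a layer boundary
# through an edge-cost image with dynamic programming, allowing the row
# to move by at most one pixel between neighbouring columns.
def trace_boundary(cost):
    rows, cols = cost.shape
    dp = cost.copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)   # smoothness window
            prev = lo + np.argmin(dp[lo:hi, c - 1])
            dp[r, c] = cost[r, c] + dp[prev, c - 1]
            back[r, c] = prev
    path = [int(np.argmin(dp[:, -1]))]                 # cheapest endpoint
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1], c])                 # walk back per column
    return path[::-1]

# Synthetic cost image whose cheapest smooth path sits on row 2
cost = np.ones((5, 6))
cost[2, :] = 0.0
```

The learned tracer plays the same role but can exploit context beyond a fixed smoothness window, which matters around pathology.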

To Find Where You Talk: Temporal Sentence Localization in Video with Attention Based Location Regression

Title To Find Where You Talk: Temporal Sentence Localization in Video with Attention Based Location Regression
Authors Yitian Yuan, Tao Mei, Wenwu Zhu
Abstract Given an untrimmed video and a sentence description, temporal sentence localization aims to automatically determine the start and end points of the described sentence within the video. The problem is challenging as it requires understanding of both video and sentence. Existing research predominantly employs a costly “scan and localize” framework, neglecting the global video context and the specific details within sentences, both of which are critical for this problem. In this paper, we propose a novel Attention Based Location Regression (ABLR) approach that solves temporal sentence localization from a global perspective. Specifically, to preserve the context information, ABLR first encodes both video and sentence via Bidirectional LSTM networks. Then, a multi-modal co-attention mechanism is introduced to generate not only video attention, which reflects the global video structure, but also sentence attention, which highlights the crucial details for temporal localization. Finally, a novel attention-based location regression network is designed to predict the temporal coordinates of the sentence query from the preceding attention. ABLR is jointly trained in an end-to-end manner. Comprehensive experiments on the ActivityNet Captions and TACoS datasets demonstrate both the effectiveness and the efficiency of the proposed ABLR approach.
Tasks Temporal Localization
Published 2018-04-19
URL http://arxiv.org/abs/1804.07014v4
PDF http://arxiv.org/pdf/1804.07014v4.pdf
PWC https://paperswithcode.com/paper/to-find-where-you-talk-temporal-sentence
Repo
Framework
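The regression-from-attention idea can be sketched with made-up, untrained weights: attend over frame features with a sentence query, then read a normalized temporal location directly off the attention distribution (ABLR's actual head is a learned network on top of both attention streams):

```python
import numpy as np

# Minimal sketch with made-up, untrained features: attend over video
# frames with a sentence query, then read a normalized temporal location
# off the attention weights, instead of scanning candidate windows.
rng = np.random.default_rng(0)
T, d = 20, 8
frames = rng.normal(size=(T, d))                 # video frame features
query = frames[12] + 0.1 * rng.normal(size=d)    # query resembling frame 12

scores = frames @ query
attn = np.exp(scores - scores.max())
attn /= attn.sum()                               # softmax attention over frames

timestamps = np.linspace(0.0, 1.0, T)            # normalized frame times
center = float(attn @ timestamps)                # attention-weighted location
```

Because the prediction is a single regression over all frames, one forward pass replaces the sliding-window "scan and localize" enumeration.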

Entropy-Assisted Multi-Modal Emotion Recognition Framework Based on Physiological Signals

Title Entropy-Assisted Multi-Modal Emotion Recognition Framework Based on Physiological Signals
Authors Kuan Tung, Po-Kang Liu, Yu-Chuan Chuang, Sheng-Hui Wang, An-Yeu Wu
Abstract With the growing importance of human-computer interaction, understanding human emotional states has become an essential capability for computers. This paper aims to improve the performance of emotion recognition by conducting a complexity analysis of physiological signals. Based on the AMIGOS dataset, we extracted several entropy-domain features, such as Refined Composite Multi-Scale Entropy (RCMSE) and Refined Composite Multi-Scale Permutation Entropy (RCMPE) from ECG and GSR signals, and Multivariate Multi-Scale Entropy (MMSE) and Multivariate Multi-Scale Permutation Entropy (MMPE) from EEG. The statistical results show that RCMSE in GSR has a dominating performance for arousal, while RCMPE in GSR is an excellent feature for valence. Furthermore, we selected an XGBoost model to predict emotion, achieving 68% accuracy for arousal and 84% for valence.
Tasks EEG, Emotion Recognition
Published 2018-09-22
URL http://arxiv.org/abs/1809.08410v1
PDF http://arxiv.org/pdf/1809.08410v1.pdf
PWC https://paperswithcode.com/paper/entropy-assisted-multi-modal-emotion
Repo
Framework
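The building block behind the permutation-entropy features (RCMPE, MMPE) is single-scale permutation entropy, which counts ordinal patterns in the signal; normalized to [0, 1], it is low for regular signals and near 1 for white noise. A minimal implementation:

```python
import math
import numpy as np

# Single-scale permutation entropy, the building block behind RCMPE/MMPE,
# normalized to [0, 1]: low for regular signals, near 1 for white noise.
def permutation_entropy(x, m=3, tau=1):
    counts = {}
    for i in range(len(x) - (m - 1) * tau):
        pattern = tuple(np.argsort(x[i:i + m * tau:tau]))  # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(math.factorial(m)))

rng = np.random.default_rng(0)
pe_ramp = permutation_entropy(np.arange(1000.0))        # perfectly regular
pe_noise = permutation_entropy(rng.normal(size=5000))   # white noise
```

The multi-scale variants apply this after coarse-graining the signal at several scales, and the "refined composite" versions average pattern counts across coarse-graining offsets for stability.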

iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making

Title iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making
Authors Preethi Lahoti, Krishna P. Gummadi, Gerhard Weikum
Abstract People are rated and ranked for algorithmic decision making in an increasing number of applications, typically based on machine learning. Research on how to incorporate fairness into such tasks has prevalently pursued the paradigm of group fairness: giving adequate success rates to specifically protected groups. In contrast, the alternative paradigm of individual fairness has received relatively little attention, and this paper advances this less explored direction. The paper introduces a method for probabilistically mapping user records into a low-rank representation that reconciles individual fairness and the utility of classifiers and rankings in downstream applications. Our notion of individual fairness requires that users who are similar in all task-relevant attributes (such as job qualification), disregarding all potentially discriminating attributes (such as gender), should have similar outcomes. We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on a variety of real-world datasets. Our experiments show substantial improvements over the best prior work for this setting.
Tasks Decision Making, Learning-To-Rank
Published 2018-06-04
URL http://arxiv.org/abs/1806.01059v2
PDF http://arxiv.org/pdf/1806.01059v2.pdf
PWC https://paperswithcode.com/paper/ifair-learning-individually-fair-data
Repo
Framework
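The underlying intuition (this is not iFair itself, just the simplest linear version of "representations that ignore protected attributes") can be shown by residualizing a feature against a protected attribute, so similar individuals get similar representations regardless of it:

```python
import numpy as np

# Not iFair itself, just the underlying intuition: remove the component
# of a feature linearly explained by a protected attribute, so similar
# individuals get similar representations regardless of that attribute.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=500).astype(float)   # protected attribute
x = 2.0 * s + rng.normal(size=500)               # feature leaking s

s_centered = s - s.mean()
x_fair = x - (x @ s_centered) / (s_centered @ s_centered) * s_centered

leak_before = abs(np.corrcoef(x, s)[0, 1])
leak_after = abs(np.corrcoef(x_fair, s)[0, 1])
```

iFair generalizes this idea to a learned probabilistic low-rank mapping, trading off such fairness against downstream classifier and ranking utility rather than enforcing it exactly.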

Finding the way from ä to a: Sub-character morphological inflection for the SIGMORPHON 2018 Shared Task

Title Finding the way from ä to a: Sub-character morphological inflection for the SIGMORPHON 2018 Shared Task
Authors Fynn Schröder, Marcel Kamlot, Gregor Billing, Arne Köhn
Abstract In this paper we describe the system submitted by UHH to the CoNLL–SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection. We propose a neural architecture based on the concepts of UZH (Makarov et al., 2017), adding new ideas and techniques to their key concept and evaluating different combinations of parameters. The resulting system is a language-agnostic network model that aims to reduce the number of learned edit operations by introducing equivalence classes over graphical features of individual characters. We try to pinpoint advantages and drawbacks of this approach by comparing different network configurations and evaluating our results over a wide range of languages.
Tasks Morphological Inflection
Published 2018-09-15
URL http://arxiv.org/abs/1809.05742v1
PDF http://arxiv.org/pdf/1809.05742v1.pdf
PWC https://paperswithcode.com/paper/finding-the-way-from-a-to-a-sub-character-1
Repo
Framework
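The character-level equivalence-class idea can be illustrated with Unicode NFD decomposition, which splits "ä" into a base letter plus a combining diaeresis so that edit operations can target the shared base character (a minimal sketch, not the submitted system's feature set):

```python
import unicodedata

# The equivalence-class idea at the character level: NFD decomposition
# splits a character like "ä" into its base letter plus combining marks,
# so edit operations can target the shared base character "a".
def graphical_features(ch):
    decomposed = unicodedata.normalize("NFD", ch)
    return decomposed[0], decomposed[1:]   # (base char, combining marks)

base, marks = graphical_features("ä")
```

Grouping characters by their base letter in this way is what reduces the number of distinct edit operations the inflection model has to learn across languages.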

Image denoising and restoration with CNN-LSTM Encoder Decoder with Direct Attention

Title Image denoising and restoration with CNN-LSTM Encoder Decoder with Direct Attention
Authors Kazi Nazmul Haque, Mohammad Abu Yousuf, Rajib Rana
Abstract Image denoising is a long-standing challenge in computer vision and image processing. In this paper, we propose an encoder-decoder model with direct attention that is capable of denoising and reconstructing highly corrupted images. Our model consists of an encoder and a decoder, where the encoder is a convolutional neural network and the decoder is a multilayer Long Short-Term Memory network. The encoder reads an image and captures its abstraction in a vector; the decoder takes that vector, together with the corrupted image, to reconstruct a clean image. We trained our model on the MNIST handwritten digit database after blacking out the lower half of every image and adding noise on top of that. Even after massive destruction of the images, where it is hard for a human to understand their content, our model can retrieve the image with minimal error. Compared with a convolutional encoder-decoder, our model performs better at generating the missing parts of the images than the convolutional autoencoder.
Tasks Denoising, Image Denoising
Published 2018-01-16
URL http://arxiv.org/abs/1801.05141v1
PDF http://arxiv.org/pdf/1801.05141v1.pdf
PWC https://paperswithcode.com/paper/image-denoising-and-restoration-with-cnn-lstm
Repo
Framework
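The corruption used for training is straightforward to reproduce; a sketch on random stand-in images (the paper uses MNIST, and the noise level here is an assumption):

```python
import numpy as np

# Sketch of the training corruption on random stand-in images (the paper
# uses MNIST; the noise level is an assumption): black out the lower
# half of each image, then add noise on top.
def corrupt(images, noise_std=0.3, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    corrupted = images.copy()
    h = images.shape[1]
    corrupted[:, h // 2:, :] = 0.0                          # black lower half
    corrupted += noise_std * rng.normal(size=images.shape)  # add noise on top
    return np.clip(corrupted, 0.0, 1.0)

rng = np.random.default_rng(0)
clean = rng.uniform(0.5, 1.0, size=(4, 28, 28))
noisy = corrupt(clean)
```

The model is then trained to map `noisy` back to `clean`, i.e., to simultaneously denoise the upper half and inpaint the missing lower half.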

Compressing Deep Neural Networks: A New Hashing Pipeline Using Kac’s Random Walk Matrices

Title Compressing Deep Neural Networks: A New Hashing Pipeline Using Kac’s Random Walk Matrices
Authors Jack Parker-Holder, Sam Gass
Abstract The popularity of deep learning is increasing by the day. However, despite the recent advancements in hardware, deep neural networks remain computationally intensive. Recent work has shown that by preserving the angular distance between vectors, random feature maps are able to reduce dimensionality without introducing bias to the estimator. We test a variety of established hashing pipelines as well as a new approach using Kac’s random walk matrices. We demonstrate that this method achieves similar accuracy to existing pipelines.
Tasks
Published 2018-01-09
URL http://arxiv.org/abs/1801.02764v3
PDF http://arxiv.org/pdf/1801.02764v3.pdf
PWC https://paperswithcode.com/paper/compressing-deep-neural-networks-a-new
Repo
Framework
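Kac's random walk is a product of random two-coordinate (Givens) rotations. Applying the same rotation sequence to every vector preserves norms and angles exactly, which is the property the hashing pipeline relies on; a minimal sketch:

```python
import numpy as np

# Kac's random walk: a product of random 2-D (Givens) rotations. The
# same rotation sequence applied to several vectors preserves norms and
# angles, which is the property the hashing pipeline relies on.
def kac_walk(vectors, steps, seed=0):
    rng = np.random.default_rng(seed)
    out = [v.astype(float).copy() for v in vectors]
    d = len(out[0])
    for _ in range(steps):
        i, j = rng.choice(d, size=2, replace=False)  # random coordinate pair
        theta = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        for v in out:                                # same rotation for all
            vi, vj = v[i], v[j]
            v[i], v[j] = c * vi - s * vj, s * vi + c * vj
    return out

rng = np.random.default_rng(1)
a, b = rng.normal(size=64), rng.normal(size=64)
a2, b2 = kac_walk([a, b], steps=500)
```

Each step touches only two coordinates, so mixing a d-dimensional vector costs O(1) per step instead of the O(d) of a dense rotation, which is what makes these matrices attractive for fast feature maps.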

Scene-Adapted Plug-and-Play Algorithm with Guaranteed Convergence: Applications to Data Fusion in Imaging

Title Scene-Adapted Plug-and-Play Algorithm with Guaranteed Convergence: Applications to Data Fusion in Imaging
Authors Afonso M. Teodoro, José M. Bioucas-Dias, Mário A. T. Figueiredo
Abstract The recently proposed plug-and-play (PnP) framework allows leveraging recent developments in image denoising to tackle other, more involved, imaging inverse problems. In a PnP method, a black-box denoiser is plugged into an iterative algorithm, taking the place of a formal denoising step that corresponds to the proximity operator of some convex regularizer. While this approach offers flexibility and excellent performance, convergence of the resulting algorithm may be hard to analyze, as most state-of-the-art denoisers lack an explicit underlying objective function. In this paper, we propose a PnP approach where a scene-adapted prior (i.e., where the denoiser is targeted to the specific scene being imaged) is plugged into ADMM (alternating direction method of multipliers), and prove convergence of the resulting algorithm. Finally, we apply the proposed framework in two different imaging inverse problems: hyperspectral sharpening/fusion and image deblurring from blurred/noisy image pairs.
Tasks Deblurring, Denoising, Image Denoising
Published 2018-01-02
URL http://arxiv.org/abs/1801.00605v1
PDF http://arxiv.org/pdf/1801.00605v1.pdf
PWC https://paperswithcode.com/paper/scene-adapted-plug-and-play-algorithm-with
Repo
Framework
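The PnP-ADMM skeleton the paper analyzes can be sketched on a 1-D denoising toy, with a simple moving-average filter standing in for the scene-adapted denoiser (the filter, penalty ρ, and iteration count are illustrative):

```python
import numpy as np

# PnP-ADMM skeleton on a 1-D denoising toy, with a moving-average filter
# standing in for the paper's scene-adapted denoiser. rho and the
# iteration count are illustrative choices.
def moving_average(v, width=5):
    kernel = np.ones(width) / width
    return np.convolve(v, kernel, mode="same")

def pnp_admm(y, denoiser, rho=1.0, iters=30):
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)  # proximal step, data term
        z = denoiser(x + u)                    # plug-in denoiser replaces prox
        u = u + x - z                          # dual (scaled multiplier) update
    return z

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(t)
noisy = clean + 0.3 * rng.normal(size=t.size)
restored = pnp_admm(noisy, moving_average)
```

The z-update is where an arbitrary black-box denoiser is plugged in; the paper's contribution is proving that the overall iteration still converges when that denoiser is a scene-adapted prior rather than the proximity operator of an explicit convex regularizer.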