October 19, 2019

3353 words 16 mins read

Paper Group ANR 169

Acquiring Annotated Data with Cross-lingual Explicitation for Implicit Discourse Relation Classification. Counting the uncountable: deep semantic density estimation from Space. Scanner: Efficient Video Analysis at Scale. Optimizing Prediction Intervals by Tuning Random Forest via Meta-Validation. LISA: Explaining Recurrent Neural Network Judgments …

Acquiring Annotated Data with Cross-lingual Explicitation for Implicit Discourse Relation Classification

Title Acquiring Annotated Data with Cross-lingual Explicitation for Implicit Discourse Relation Classification
Authors Wei Shi, Frances Yung, Vera Demberg
Abstract Implicit discourse relation classification is one of the most challenging and important tasks in discourse parsing, due to the lack of connectives as strong linguistic cues. A principal bottleneck to further improvement is the shortage of training data (ca. 16k instances in the PDTB). Shi et al. (2017) proposed to acquire additional data by exploiting connectives in translation: human translators often explicitly mark, in the translation, discourse relations that are implicit in the source language. Using back-translations of such explicitated connectives improves discourse relation parsing performance. This paper addresses the open question of whether the choice of the translation language matters, and whether multiple translations into different languages can be effectively used to improve the quality of the additional data.
Tasks Implicit Discourse Relation Classification, Relation Classification
Published 2018-08-30
URL http://arxiv.org/abs/1808.10290v2
PDF http://arxiv.org/pdf/1808.10290v2.pdf
PWC https://paperswithcode.com/paper/acquiring-annotated-data-with-cross-lingual
Repo
Framework
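
As a concrete illustration of the data-acquisition idea above, here is a hedged Python sketch: when a human translation makes an implicit relation explicit, back-translating the inserted connective yields a distant label for the original argument pair. The connective-to-sense mapping is a tiny illustrative subset, not the paper's actual lexicon, and the helper name is hypothetical.

```python
# Hypothetical sketch of distant labeling via back-translated connectives.
# The mapping covers only a few PDTB-style senses for illustration.
CONNECTIVE_TO_RELATION = {
    "because": "Contingency.Cause",
    "however": "Comparison.Contrast",
    "then": "Temporal.Asynchronous",
}

def label_from_backtranslation(arg1, arg2, backtranslated_connective):
    """Return an (arg1, arg2, relation) training instance, or None."""
    sense = CONNECTIVE_TO_RELATION.get(backtranslated_connective.lower())
    return (arg1, arg2, sense) if sense else None

print(label_from_backtranslation(
    "He was late for work.", "He missed the bus.", "because"))
# -> ('He was late for work.', 'He missed the bus.', 'Contingency.Cause')
```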

Counting the uncountable: deep semantic density estimation from Space

Title Counting the uncountable: deep semantic density estimation from Space
Authors Andres C. Rodriguez, Jan D. Wegner
Abstract We propose a new method to count objects of specific categories that are significantly smaller than the ground sampling distance of a satellite image. This task is hard due to the cluttered nature of scenes where different object categories occur. Target objects can be partially occluded, vary in appearance within the same class, and resemble objects of other categories. Since traditional object detection is infeasible due to the small size of objects with respect to the pixel size, we cast object counting as a density estimation problem. To distinguish objects of different classes, our approach combines density estimation with semantic segmentation in an end-to-end learnable convolutional neural network (CNN). Experiments show that deep semantic density estimation can robustly count objects of various classes in cluttered scenes. Experiments also suggest that we need specific CNN architectures for remote sensing instead of blindly applying existing ones from computer vision.
Tasks Density Estimation, Object Counting, Object Detection, Semantic Segmentation
Published 2018-09-19
URL http://arxiv.org/abs/1809.07091v2
PDF http://arxiv.org/pdf/1809.07091v2.pdf
PWC https://paperswithcode.com/paper/counting-the-uncountable-deep-semantic
Repo
Framework
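
As a rough illustration of combining density estimation with semantic segmentation in one CNN, the following PyTorch sketch uses a shared encoder with two heads; the architecture is a toy placeholder, not the paper's actual network.

```python
# Minimal sketch: shared encoder, one density head (counts by summation)
# and one segmentation head. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class DensitySegNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # One density map per class; counts are recovered by summing.
        self.density_head = nn.Conv2d(64, num_classes, 1)
        # Per-pixel class logits for semantic segmentation (+1 background).
        self.seg_head = nn.Conv2d(64, num_classes + 1, 1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.density_head(feats), self.seg_head(feats)

model = DensitySegNet(num_classes=4)
image = torch.randn(2, 3, 128, 128)
density, seg_logits = model(image)
counts = density.sum(dim=(2, 3))  # estimated object count per class
print(counts.shape)  # torch.Size([2, 4])
```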

Scanner: Efficient Video Analysis at Scale

Title Scanner: Efficient Video Analysis at Scale
Authors Alex Poms, Will Crichton, Pat Hanrahan, Kayvon Fatahalian
Abstract A growing number of visual computing applications depend on the analysis of large video collections. The challenge is that scaling applications to operate on these datasets requires efficient systems for pixel data access and parallel processing across large numbers of machines. Few programmers have the capability to operate efficiently at these scales, limiting the field’s ability to explore new applications that leverage big video data. In response, we have created Scanner, a system for productive and efficient video analysis at scale. Scanner organizes video collections as tables in a data store optimized for sampling frames from compressed video, and executes pixel processing computations, expressed as dataflow graphs, on these frames. Scanner schedules video analysis applications expressed using these abstractions onto heterogeneous throughput computing hardware, such as multi-core CPUs, GPUs, and media processing ASICs, for high-throughput pixel processing. We demonstrate the productivity of Scanner by authoring a variety of video processing applications including the synthesis of stereo VR video streams from multi-camera rigs, markerless 3D human pose reconstruction from video, and data-mining big video datasets such as hundreds of feature-length films or over 70,000 hours of TV news. These applications achieve near-expert performance on a single machine and scale efficiently to hundreds of machines, enabling formerly long-running big video data analysis tasks to be carried out in minutes to hours.
Tasks
Published 2018-05-18
URL http://arxiv.org/abs/1805.07339v1
PDF http://arxiv.org/pdf/1805.07339v1.pdf
PWC https://paperswithcode.com/paper/scanner-efficient-video-analysis-at-scale
Repo
Framework
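
Scanner itself exposes a richer Python API that is not reproduced here; the generator pipeline below is only a toy stand-in showing the core abstraction: a video treated as a sampled table of frames flowing through composable per-frame operations. The video filename is a placeholder.

```python
# Toy illustration (not Scanner's actual API): sample frame rows from a
# video "table" and compose per-frame operations into a dataflow.
import cv2  # pip install opencv-python

def frame_table(path, stride=30):
    """Yield (frame_index, frame) rows, sampling every `stride` frames."""
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            yield idx, frame
        idx += 1
    cap.release()

def resize_op(rows, size=(224, 224)):
    for idx, frame in rows:
        yield idx, cv2.resize(frame, size)

def grayscale_op(rows):
    for idx, frame in rows:
        yield idx, cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Compose the dataflow graph: sample -> resize -> grayscale.
pipeline = grayscale_op(resize_op(frame_table("news.mp4")))
for idx, frame in pipeline:
    pass  # downstream analysis, e.g. pose estimation, goes here
```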

Optimizing Prediction Intervals by Tuning Random Forest via Meta-Validation

Title Optimizing Prediction Intervals by Tuning Random Forest via Meta-Validation
Authors Sean Bayley, Davide Falessi
Abstract Recent studies have shown that tuning prediction models increases prediction accuracy and that Random Forest can be used to construct prediction intervals. However, to the best of our knowledge, no study has investigated the need to, and the manner in which one can, tune Random Forest for optimizing prediction intervals; this paper aims to fill that gap. We explore a tuning approach that combines an effectively exhaustive search with a validation technique on a single Random Forest parameter. This paper investigates which, out of eight validation techniques, are beneficial for tuning, i.e., which automatically choose a Random Forest configuration constructing prediction intervals that are reliable and narrower than those of the default configuration. Additionally, we present and validate three meta-validation techniques to determine which are beneficial, i.e., which automatically choose a beneficial validation technique. This study uses data from our industrial partner (Keymind Inc.) and the Tukutuku Research Project, related to post-release defect prediction and Web application effort estimation, respectively. Results from our study indicate that: i) the default configuration is frequently unreliable; ii) most of the validation techniques, including previously successfully adopted ones such as 50/50 holdout and bootstrap, are counterproductive in most cases; and iii) the 75/25 holdout meta-validation technique is always beneficial, i.e., it avoids the likely counterproductive effects of validation techniques.
Tasks
Published 2018-01-22
URL http://arxiv.org/abs/1801.07194v1
PDF http://arxiv.org/pdf/1801.07194v1.pdf
PWC https://paperswithcode.com/paper/optimizing-prediction-intervals-by-tuning
Repo
Framework
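
To make the tuning loop concrete, here is a hedged sketch on synthetic data: intervals are built from the spread of per-tree predictions (a common heuristic, not necessarily the paper's exact construction), and a single parameter is tuned on a 75/25 holdout by penalizing undercoverage first and width second.

```python
# Hedged sketch: prediction intervals from per-tree quantiles, one
# Random Forest parameter tuned on a 75/25 holdout. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def tree_intervals(forest, X, alpha=0.1):
    """Empirical (alpha/2, 1-alpha/2) quantiles over per-tree predictions."""
    preds = np.stack([t.predict(X) for t in forest.estimators_])
    return (np.quantile(preds, alpha / 2, axis=0),
            np.quantile(preds, 1 - alpha / 2, axis=0))

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(1000, 4))
y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=1000)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25,
                                            random_state=0)
best = None
for leaf in (1, 5, 10, 20):  # effectively exhaustive over one parameter
    rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=leaf,
                               random_state=0).fit(X_tr, y_tr)
    lo, hi = tree_intervals(rf, X_val)
    coverage = np.mean((y_val >= lo) & (y_val <= hi))
    width = np.mean(hi - lo)
    # Reliability first (penalize undercoverage), then narrowness.
    score = 10 * max(0.0, 0.9 - coverage) + width
    if best is None or score < best[-1]:
        best = (leaf, coverage, width, score)
print("chosen min_samples_leaf=%d coverage=%.2f width=%.2f" % best[:3])
```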

LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation

Title LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation
Authors Pankaj Gupta, Hinrich Schütze
Abstract Recurrent neural networks (RNNs) are temporal and cumulative in nature, and have shown promising results in various natural language processing tasks. Despite their success, it still remains a challenge to understand their hidden behavior. In this work, we analyze and interpret the cumulative nature of RNNs via a proposed technique named Layer-wIse Semantic Accumulation (LISA), which explains decisions and detects the most salient patterns that the network relies on while making decisions. We demonstrate (1) LISA: “How an RNN accumulates or builds semantics during its sequential processing for a given text example and expected response” (2) Example2pattern: “What the saliency patterns look like for each category in the data according to the network in decision making”. We analyze the sensitivity of RNNs to different inputs to check the increase or decrease in prediction scores, and further extract the saliency patterns learned by the network. We employ two relation classification datasets, SemEval 10 Task 8 and TAC KBP Slot Filling, to explain RNN predictions via LISA and example2pattern.
Tasks Decision Making, Relation Classification, Slot Filling
Published 2018-08-05
URL http://arxiv.org/abs/1808.01591v1
PDF http://arxiv.org/pdf/1808.01591v1.pdf
PWC https://paperswithcode.com/paper/lisa-explaining-recurrent-neural-network
Repo
Framework
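
The accumulation analysis can be approximated in a few lines: feed the RNN growing prefixes of a sentence and record how the class distribution shifts token by token. The untrained toy model and vocabulary below are placeholders for a trained relation classifier.

```python
# Sketch of the LISA idea: watch class probabilities evolve as the RNN
# consumes longer and longer prefixes of a sentence.
import torch
import torch.nn as nn

vocab = {"<pad>": 0, "the": 1, "virus": 2, "was": 3,
         "caused": 4, "by": 5, "bacteria": 6}
tokens = ["the", "virus", "was", "caused", "by", "bacteria"]

class RNNClassifier(nn.Module):
    def __init__(self, vocab_size, num_classes, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, ids):
        _, h = self.rnn(self.emb(ids))
        return self.out(h[-1])  # logits from the final hidden state

model = RNNClassifier(len(vocab), num_classes=3)
model.eval()
with torch.no_grad():
    for t in range(1, len(tokens) + 1):
        prefix = torch.tensor([[vocab[w] for w in tokens[:t]]])
        probs = torch.softmax(model(prefix), dim=-1)
        print(" ".join(tokens[:t]), "->", probs.numpy().round(3))
```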

Image-based model parameter optimization using Model-Assisted Generative Adversarial Networks

Title Image-based model parameter optimization using Model-Assisted Generative Adversarial Networks
Authors Saúl Alonso-Monsalve, Leigh H. Whitehead
Abstract We propose and demonstrate the use of a model-assisted generative adversarial network (GAN) to produce fake images that accurately match true images through the variation of the parameters of the model that describes the features of the images. The generator learns the model parameter values that produce fake images that best match the true images. Two case studies show excellent agreement between the generated best match parameters and the true parameters. The best match model parameter values can be used to retune the default simulation to minimize any bias when applying image recognition techniques to fake and true images. In the case of a real-world experiment, the true images are experimental data with unknown true model parameter values, and the fake images are produced by a simulation that takes the model parameters as input. The model-assisted GAN uses a convolutional neural network to emulate the simulation for all parameter values that, when trained, can be used as a conditional generator for fast fake-image production.
Tasks Image Generation
Published 2018-11-30
URL https://arxiv.org/abs/1812.00879v2
PDF https://arxiv.org/pdf/1812.00879v2.pdf
PWC https://paperswithcode.com/paper/image-based-model-parameter-optimisation
Repo
Framework
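
A compact, hedged sketch of the loop described above: a generator proposes simulation parameters, a differentiable emulator (here a toy MLP standing in for the CNN simulation surrogate) renders them into images, and a discriminator compares those to "true" images. All dimensions and networks are illustrative.

```python
# Toy model-assisted GAN: generator -> parameters -> emulator -> images.
import torch
import torch.nn as nn

emulator = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 16 * 16))
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(16 * 16, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

true_params = torch.tensor([0.7, -0.3])  # unknown in a real experiment
true_images = emulator(true_params).detach() + 0.01 * torch.randn(64, 16 * 16)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
for step in range(200):
    noise = torch.randn(64, 8)
    fake_images = emulator(generator(noise))
    # Discriminator: separate true images from emulated ones.
    opt_d.zero_grad()
    d_loss = (bce(discriminator(true_images), torch.ones(64, 1)) +
              bce(discriminator(fake_images.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()
    # Generator: push proposed parameters toward matching images.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake_images), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
print("recovered parameters:", generator(torch.randn(64, 8)).mean(0).detach())
```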

Counting Motifs with Graph Sampling

Title Counting Motifs with Graph Sampling
Authors Jason M. Klusowski, Yihong Wu
Abstract Applied researchers often construct a network from a random sample of nodes in order to infer properties of the parent network. Two of the most widely used sampling schemes are subgraph sampling, where we sample each vertex independently with probability $p$ and observe the subgraph induced by the sampled vertices, and neighborhood sampling, where we additionally observe the edges between the sampled vertices and their neighbors. In this paper, we study the problem of estimating the number of motifs as induced subgraphs under both models from a statistical perspective. We show that: for any connected $h$ on $k$ vertices, to estimate $s=\mathsf{s}(h,G)$, the number of copies of $h$ in the parent graph $G$ of maximum degree $d$, with a multiplicative error of $\epsilon$, (a) For subgraph sampling, the optimal sampling ratio $p$ is $\Theta_{k}\left(\max\left\{(s\epsilon^2)^{-\frac{1}{k}},\; \frac{d^{k-1}}{s\epsilon^{2}}\right\}\right)$, achieved by Horvitz-Thompson type estimators. (b) For neighborhood sampling, we propose a family of estimators, encompassing and outperforming the Horvitz-Thompson estimator and achieving the sampling ratio $O_{k}\left(\min\left\{\left(\frac{d}{s\epsilon^2}\right)^{\frac{1}{k-1}},\; \sqrt{\frac{d^{k-2}}{s\epsilon^2}}\right\}\right)$. This is shown to be optimal for all motifs with at most $4$ vertices and cliques of all sizes. The matching minimax lower bounds are established using certain algebraic properties of subgraph counts. These results quantify how much more informative neighborhood sampling is than subgraph sampling, as empirically verified by experiments on both synthetic and real-world data. We also address the issue of adaptation to the unknown maximum degree, and study specific problems for parent graphs with additional structures, e.g., trees or planar graphs.
Tasks
Published 2018-02-21
URL http://arxiv.org/abs/1802.07773v1
PDF http://arxiv.org/pdf/1802.07773v1.pdf
PWC https://paperswithcode.com/paper/counting-motifs-with-graph-sampling
Repo
Framework
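
Result (a) is easy to exercise for triangles ($k=3$): under subgraph sampling, each copy of the motif survives with probability $p^3$, so the Horvitz-Thompson estimator divides the observed count by $p^3$. A small sketch using networkx:

```python
# Horvitz-Thompson triangle estimate under subgraph sampling.
import random
import networkx as nx

def ht_triangle_estimate(G, p, seed=0):
    rng = random.Random(seed)
    # Sample each vertex independently with probability p,
    # then observe the induced subgraph.
    kept = [v for v in G.nodes if rng.random() < p]
    H = G.subgraph(kept)
    observed = sum(nx.triangles(H).values()) // 3
    # Each triangle survives with probability p^3.
    return observed / p ** 3

G = nx.erdos_renyi_graph(2000, 0.01, seed=1)
true_count = sum(nx.triangles(G).values()) // 3
est = ht_triangle_estimate(G, p=0.3)
print(f"true: {true_count}, HT estimate: {est:.0f}")
```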

Learning Recommendations While Influencing Interests

Title Learning Recommendations While Influencing Interests
Authors Rahul Meshram, D. Manjunath, Nikhil Karamchandani
Abstract Personalized recommendation systems (RS) are extensively used in many services. Many of these are based on learning algorithms where the RS uses the recommendation history and the user response to learn an optimal strategy. Further, these algorithms are based on the assumption that user interests are rigid. Specifically, they do not account for the effect of the learning strategy on the evolution of user interests. In this paper we develop influence models for a learning algorithm that is used to optimally recommend websites to web users. We adapt the model of Ioannidis et al. (2010) to include an item-dependent reward to the RS from the suggestions that are accepted by the user. For this we first develop a static optimisation scheme for when all the parameters are known. Next we develop a stochastic approximation based learning scheme for the RS to learn the optimal strategy when the user profiles are not known. Finally, we describe several user-influence models for the learning algorithm and analyze their effect on the steady-state user interests and on the steady-state optimal strategy, as compared to when users are not influenced.
Tasks Recommendation Systems
Published 2018-03-23
URL http://arxiv.org/abs/1803.08651v1
PDF http://arxiv.org/pdf/1803.08651v1.pdf
PWC https://paperswithcode.com/paper/learning-recommendations-while-influencing
Repo
Framework
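
A stripped-down sketch of the stochastic-approximation component: the RS tracks estimated acceptance probabilities, updates them from accept/reject feedback with a decaying gain, and recommends by estimated expected reward. The paper's full model also captures how recommendations influence the interests themselves, which is omitted here.

```python
# Toy stochastic-approximation loop for learning acceptance probabilities.
import numpy as np

rng = np.random.default_rng(0)
true_accept = np.array([0.2, 0.6, 0.4])   # hidden user interests
reward = np.array([3.0, 1.0, 2.0])        # item-dependent reward
est = np.full(3, 0.5)                      # initial estimates

for n in range(1, 5001):
    # Epsilon-greedy recommendation on estimated expected reward.
    if rng.random() < 0.1:
        item = int(rng.integers(3))
    else:
        item = int(np.argmax(est * reward))
    accepted = float(rng.random() < true_accept[item])
    est[item] += (accepted - est[item]) / n ** 0.6  # SA step, decaying gain

print("estimated acceptance probabilities:", est.round(2))
```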

Single-Shot Bidirectional Pyramid Networks for High-Quality Object Detection

Title Single-Shot Bidirectional Pyramid Networks for High-Quality Object Detection
Authors Xiongwei Wu, Daoxin Zhang, Jianke Zhu, Steven C. H. Hoi
Abstract Recent years have witnessed many exciting achievements for object detection using deep learning techniques. Despite this significant progress, most existing detectors are designed to detect objects with relatively low-quality localization, i.e., they are often trained with the Intersection over Union (IoU) threshold set to 0.5 by default, which can yield low-quality or even noisy detections. It remains an open challenge to devise and train a high-quality detector that achieves more precise localization (i.e., IoU$>$0.5) without sacrificing detection performance. In this paper, we propose a novel single-shot detection framework of Bidirectional Pyramid Networks (BPN) towards high-quality object detection, which consists of two novel components: (i) a Bidirectional Feature Pyramid structure for more effective and robust feature representations; and (ii) a Cascade Anchor Refinement to gradually refine the quality of predesigned anchors for more effective training. Our experiments show that the proposed BPN achieves the best performance among single-stage object detectors on both the PASCAL VOC and MS COCO datasets, especially for high-quality detections.
Tasks Object Detection
Published 2018-03-22
URL http://arxiv.org/abs/1803.08208v1
PDF http://arxiv.org/pdf/1803.08208v1.pdf
PWC https://paperswithcode.com/paper/single-shot-bidirectional-pyramid-networks
Repo
Framework
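
The Bidirectional Feature Pyramid can be sketched as a top-down pass followed by a bottom-up pass over multi-scale features; channel counts and fusion rules below are illustrative guesses, not the paper's exact design.

```python
# Schematic bidirectional feature-pyramid fusion (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiPyramid(nn.Module):
    def __init__(self, channels=64, levels=3):
        super().__init__()
        self.smooth = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(levels))

    def forward(self, feats):  # feats: fine-to-coarse list of (N,C,H,W)
        # Top-down: propagate coarse semantics to finer levels.
        for i in range(len(feats) - 2, -1, -1):
            up = F.interpolate(feats[i + 1], size=feats[i].shape[-2:])
            feats[i] = feats[i] + up
        # Bottom-up: propagate fine localization cues back to coarse levels.
        for i in range(1, len(feats)):
            down = F.max_pool2d(feats[i - 1], kernel_size=2)
            feats[i] = feats[i] + down
        return [conv(f) for conv, f in zip(self.smooth, feats)]

feats = [torch.randn(1, 64, s, s) for s in (64, 32, 16)]
outs = BiPyramid()(feats)
print([tuple(o.shape) for o in outs])
```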

Latent Dirichlet Allocation (LDA) for Topic Modeling of the CFPB Consumer Complaints

Title Latent Dirichlet Allocation (LDA) for Topic Modeling of the CFPB Consumer Complaints
Authors Kaveh Bastani, Hamed Namavari, Jeffry Shaffer
Abstract A text mining approach is proposed based on latent Dirichlet allocation (LDA) to analyze the Consumer Financial Protection Bureau (CFPB) consumer complaints. The proposed approach aims to extract latent topics in the CFPB complaint narratives, and explores their associated trends over time. The time trends will then be used to evaluate the effectiveness of the CFPB regulations and expectations on financial institutions in creating a consumer-oriented culture that treats consumers fairly and prioritizes consumer protection in their decision-making processes. The proposed approach can be easily operationalized as a decision support system to automate detection of emerging topics in consumer complaints. Hence, the technology-human partnership between the proposed approach and the CFPB team could certainly improve consumer protections from unfair, deceptive, or abusive practices in the financial markets by providing more efficient and effective investigations of consumer complaint narratives.
Tasks Decision Making
Published 2018-07-18
URL http://arxiv.org/abs/1807.07468v1
PDF http://arxiv.org/pdf/1807.07468v1.pdf
PWC https://paperswithcode.com/paper/latent-dirichlet-allocation-lda-for-topic
Repo
Framework
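
The pipeline is standard enough to sketch end to end with scikit-learn; the toy narratives below stand in for the public CFPB complaint database.

```python
# Minimal LDA topic-modeling sketch on placeholder complaint narratives.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

complaints = [
    "bank charged overdraft fees without notice",
    "mortgage servicer lost my payment records",
    "debt collector called repeatedly about paid loan",
    "credit report contains accounts that are not mine",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(complaints)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```

In the study's setting, document-topic weights per month would then give the time trends used to assess regulatory effectiveness.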

Explicit Utilization of General Knowledge in Machine Reading Comprehension

Title Explicit Utilization of General Knowledge in Machine Reading Comprehension
Authors Chao Wang, Hui Jiang
Abstract To bridge the gap between Machine Reading Comprehension (MRC) models and human beings, which is mainly reflected in the hunger for data and the robustness to noise, in this paper we explore how to integrate the neural networks of MRC models with the general knowledge of human beings. On the one hand, we propose a data enrichment method, which uses WordNet to extract inter-word semantic connections as general knowledge from each given passage-question pair. On the other hand, we propose an end-to-end MRC model named Knowledge Aided Reader (KAR), which explicitly uses the above extracted general knowledge to assist its attention mechanisms. Based on the data enrichment method, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise. When only a subset (20%-80%) of the training examples is available, KAR outperforms the state-of-the-art MRC models by a large margin, and is still reasonably robust to noise.
Tasks Machine Reading Comprehension, Question Answering, Reading Comprehension
Published 2018-09-10
URL https://arxiv.org/abs/1809.03449v3
PDF https://arxiv.org/pdf/1809.03449v3.pdf
PWC https://paperswithcode.com/paper/exploring-machine-reading-comprehension-with
Repo
Framework
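
The data-enrichment step can be approximated with NLTK's WordNet interface: mark a passage word as connected to a question word when their synsets overlap or one is a direct hypernym of the other. The paper's exact extraction rules may differ from this sketch.

```python
# Hedged sketch of extracting inter-word semantic connections via WordNet.
# Requires: import nltk; nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def synset_keys(word):
    """Synsets of a word plus their direct hypernyms, as name strings."""
    keys = set()
    for s in wn.synsets(word):
        keys.add(s.name())
        keys.update(h.name() for h in s.hypernyms())
    return keys

def connected(w1, w2):
    return bool(synset_keys(w1) & synset_keys(w2))

passage = ["the", "automobile", "stopped", "near", "the", "bank"]
question = ["where", "did", "the", "car", "stop"]
links = [(p, q) for p in passage for q in question if connected(p, q)]
print(links)  # e.g. ('automobile', 'car'), ('stopped', 'stop'), ...
```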

Rare Event Detection using Disentangled Representation Learning

Title Rare Event Detection using Disentangled Representation Learning
Authors Ryuhei Hamaguchi, Ken Sakurada, Ryosuke Nakamura
Abstract This paper presents a novel method for rare event detection from an image pair with class-imbalanced datasets. A straightforward approach for event detection tasks is to train a detection network on a large-scale dataset in an end-to-end manner. However, in many applications such as building change detection on satellite images, few positive samples are available for training. Moreover, scene image pairs contain many trivial events, such as illumination changes or background motion. These trivial events and the class imbalance problem lead to false alarms in rare event detection. In order to overcome these difficulties, we propose a novel method to learn disentangled representations from only low-cost negative samples. The proposed method disentangles different aspects in a pair of observations: variant and invariant factors that represent trivial events and image contents, respectively. The effectiveness of the proposed approach is verified by quantitative evaluations on four change detection datasets, and the qualitative analysis shows that the proposed method can acquire representations that disentangle rare events from trivial ones.
Tasks Representation Learning
Published 2018-12-04
URL http://arxiv.org/abs/1812.01285v1
PDF http://arxiv.org/pdf/1812.01285v1.pdf
PWC https://paperswithcode.com/paper/rare-event-detection-using-disentangled
Repo
Framework

Distance preserving model order reduction of graph-Laplacians and cluster analysis

Title Distance preserving model order reduction of graph-Laplacians and cluster analysis
Authors Vladimir Druskin, Alexander V. Mamonov, Mikhail Zaslavsky
Abstract Graph-Laplacians and their spectral embeddings play an important role in multiple areas of machine learning. This paper focuses on graph-Laplacian dimension reduction for the spectral clustering of data as a primary application. Spectral embedding provides a low-dimensional parametrization of the data manifold which makes the subsequent task (e.g., clustering) much easier. However, despite reducing the dimensionality of data, the overall computational cost may still be prohibitive for large data sets due to two factors. First, computing the partial eigendecomposition of the graph-Laplacian typically requires a large Krylov subspace. Second, after the spectral embedding is complete, one still has to operate with the same number of data points. For example, clustering of the embedded data is typically performed with various relaxations of k-means, whose computational cost scales poorly with the size of the data set. In this work, we switch the focus from the entire data set to a subset of graph vertices (the target subset). We develop two novel algorithms for a low-dimensional representation of the original graph that preserves important global distances between the nodes of the target subset. In particular, this ensures that clustering of the target subset is consistent with the spectral clustering of the full data set, were one to perform it. That is achieved by a properly parametrized reduced-order model (ROM) of the graph-Laplacian that accurately approximates the diffusion transfer function of the original graph for inputs and outputs restricted to the target subset. Working with a small target subset greatly reduces the required Krylov subspace dimension and allows the use of conventional algorithms (like approximations of k-means) in the regimes where they are most robust and efficient.
Tasks Dimensionality Reduction, Graph Clustering
Published 2018-09-09
URL https://arxiv.org/abs/1809.03048v2
PDF https://arxiv.org/pdf/1809.03048v2.pdf
PWC https://paperswithcode.com/paper/clustering-of-graph-vertex-subset-via-krylov
Repo
Framework
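
For orientation, here is the plain spectral-clustering baseline that the proposed ROM accelerates: embed all nodes with a few Laplacian eigenvectors (a dense eigendecomposition here; at scale this is the Krylov-subspace step the paper avoids), then cluster only a target subset of vertices.

```python
# Baseline sketch: full spectral embedding + k-means on a target subset.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

G = nx.planted_partition_graph(3, 60, p_in=0.2, p_out=0.01, seed=0)
L = nx.normalized_laplacian_matrix(G).toarray()

# Eigenvectors of the smallest eigenvalues give the spectral embedding.
# (At scale one would use a Krylov method such as scipy's eigsh.)
vals, vecs = np.linalg.eigh(L)
embedding = vecs[:, :3]

target = np.arange(0, G.number_of_nodes(), 5)  # the target subset
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    embedding[target])
print(labels)
```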

FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces

Title FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces
Authors Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, Matthias Nießner
Abstract With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.
Tasks
Published 2018-03-24
URL http://arxiv.org/abs/1803.09179v1
PDF http://arxiv.org/pdf/1803.09179v1.pdf
PWC https://paperswithcode.com/paper/faceforensics-a-large-scale-video-dataset-for
Repo
Framework

Traversing Latent Space using Decision Ferns

Title Traversing Latent Space using Decision Ferns
Authors Yan Zuo, Gil Avraham, Tom Drummond
Abstract The practice of transforming raw data to a feature space so that inference can be performed in that space has been popular for many years. Recently, rapid progress in deep neural networks has given both researchers and practitioners enhanced methods that increase the richness of feature representations, be it from images, text or speech. In this work, we show how a constructed latent space can be explored in a controlled manner and argue that this complements well-founded inference methods. For constructing the latent space a Variational Autoencoder is used. We present a novel controller module that allows for smooth traversal in the latent space and construct an end-to-end trainable framework. We explore the applicability of our method for performing spatial transformations as well as kinematics for predicting future latent vectors of a video sequence.
Tasks
Published 2018-12-06
URL http://arxiv.org/abs/1812.02636v1
PDF http://arxiv.org/pdf/1812.02636v1.pdf
PWC https://paperswithcode.com/paper/traversing-latent-space-using-decision-ferns
Repo
Framework
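
The baseline the controller module improves on is easy to show: decode evenly spaced points on the line between two latent codes. The decoder below is an untrained stand-in for a trained VAE decoder.

```python
# Naive linear traversal between two latent codes (baseline only).
import torch
import torch.nn as nn

latent_dim = 8
decoder = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, 28 * 28), nn.Sigmoid(),
)

z_start, z_end = torch.randn(latent_dim), torch.randn(latent_dim)
with torch.no_grad():
    for t in torch.linspace(0, 1, steps=8):
        z = (1 - t) * z_start + t * z_end
        frame = decoder(z).reshape(28, 28)  # one frame of the traversal
        print(f"t={t.item():.2f} mean intensity={frame.mean().item():.3f}")
```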