January 30, 2020

3079 words 15 mins read

Paper Group ANR 249

Stochastic Blockmodels meet Graph Neural Networks. More Powerful Selective Kernel Tests for Feature Selection. SpaceNet MVOI: a Multi-View Overhead Imagery Dataset. An Integrated Autoencoder-Based Filter for Sparse Big Data. Availability-Based Production Predicts Speakers’ Real-time Choices of Mandarin Classifiers. Learning to Generate Unambiguous …

Stochastic Blockmodels meet Graph Neural Networks

Title Stochastic Blockmodels meet Graph Neural Networks
Authors Nikhil Mehta, Lawrence Carin, Piyush Rai
Abstract Stochastic blockmodels (SBM) and their variants, e.g., mixed-membership and overlapping stochastic blockmodels, are latent variable based generative models for graphs. They have proven to be successful for various tasks, such as discovering the community structure and link prediction on graph-structured data. Recently, graph neural networks, e.g., graph convolutional networks, have also emerged as a promising approach to learn powerful representations (embeddings) for the nodes in the graph, by exploiting graph properties such as locality and invariance. In this work, we unify these two directions by developing a sparse variational autoencoder for graphs that retains the interpretability of SBMs while also enjoying the excellent predictive performance of graph neural nets. Moreover, our framework is accompanied by a recognition model that enables fast inference of the node embeddings (which are of independent interest for inference in SBM and its variants). Although we develop this framework for a particular type of SBM, namely the overlapping stochastic blockmodel, the proposed framework can be adapted readily for other types of SBMs. Experimental results on several benchmarks demonstrate encouraging results on link prediction while learning an interpretable latent structure that can be used for community discovery.
Tasks Link Prediction
Published 2019-05-14
URL https://arxiv.org/abs/1905.05738v1
PDF https://arxiv.org/pdf/1905.05738v1.pdf
PWC https://paperswithcode.com/paper/stochastic-blockmodels-meet-graph-neural
Repo
Framework
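
As a rough illustration of the graph-VAE idea described in the abstract above (not the authors' released code, and omitting the sparsity-inducing prior that gives the model its SBM-like interpretability), a minimal PyTorch sketch might look like this; the class name, layer sizes, and inner-product decoder are illustrative assumptions:

```python
# Hypothetical sketch of a variational autoencoder for graphs (not the authors' model):
# a GCN-style encoder produces node embeddings, an inner-product decoder scores edges.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphVAE(nn.Module):
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.gc1 = nn.Linear(in_dim, hid_dim)       # weights of a graph-convolution layer
        self.gc_mu = nn.Linear(hid_dim, lat_dim)
        self.gc_logvar = nn.Linear(hid_dim, lat_dim)

    def encode(self, adj_norm, x):
        h = F.relu(adj_norm @ self.gc1(x))           # one graph-convolution step
        return adj_norm @ self.gc_mu(h), adj_norm @ self.gc_logvar(h)

    def decode(self, z):
        return torch.sigmoid(z @ z.t())              # inner-product edge probabilities

    def forward(self, adj_norm, x):
        mu, logvar = self.encode(adj_norm, x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decode(z), mu, logvar
```

Link prediction would then score candidate edges by the reconstructed probabilities returned by decode(z).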

More Powerful Selective Kernel Tests for Feature Selection

Title More Powerful Selective Kernel Tests for Feature Selection
Authors Jen Ning Lim, Makoto Yamada, Wittawat Jitkrittum, Yoshikazu Terada, Shigeyuki Matsui, Hidetoshi Shimodaira
Abstract Refining one’s hypotheses in the light of data is a common scientific practice; however, the dependency on the data introduces selection bias and can lead to specious statistical analysis. One approach to addressing this is to condition on the selection procedure, accounting for how the data were used to generate the hypotheses and preventing information from being used again after selection. Many selective inference (a.k.a. post-selection inference) algorithms take this approach but “over-condition” for the sake of tractability. While this practice yields well-calibrated statistical tests with controlled false positive rates (FPR), it can incur a major loss in power. In our work, we extend two recent proposals for selecting features using the Maximum Mean Discrepancy and the Hilbert-Schmidt Independence Criterion to condition on the minimal conditioning event. We show how recent advances in the multiscale bootstrap make conditioning on the minimal selection event possible, and we demonstrate our proposal over a range of synthetic and real-world experiments. Our results show that our proposed test is indeed more powerful in most scenarios.
Tasks Feature Selection
Published 2019-10-14
URL https://arxiv.org/abs/1910.06134v2
PDF https://arxiv.org/pdf/1910.06134v2.pdf
PWC https://paperswithcode.com/paper/more-powerful-selective-kernel-tests-for
Repo
Framework
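
For intuition about the kernel-based feature scoring the paper builds its selective test on, here is a hedged NumPy sketch that ranks features by a biased MMD^2 estimate between two samples; the function names and bandwidth are illustrative assumptions, and the paper's actual contribution, conditioning on the minimal selection event, is not shown:

```python
# Illustrative sketch (not the paper's method): score each feature by a biased
# Gaussian-kernel MMD^2 between two samples, then keep the top-k features.
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD between 1-D samples x and y."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def select_features(X, Y, k=5):
    """Rank the columns of X (sample 1) vs Y (sample 2) by MMD^2 and keep the top k."""
    scores = np.array([mmd2(X[:, j], Y[:, j]) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k], scores
```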

SpaceNet MVOI: a Multi-View Overhead Imagery Dataset

Title SpaceNet MVOI: a Multi-View Overhead Imagery Dataset
Authors Nicholas Weir, David Lindenbaum, Alexei Bastidas, Adam Van Etten, Sean McPherson, Jacob Shermeyer, Varun Kumar, Hanlin Tang
Abstract Detection and segmentation of objects in overhead imagery is a challenging task. The variable density, random orientation, small size, and instance-to-instance heterogeneity of objects in overhead imagery call for approaches distinct from existing models designed for natural scene datasets. Though new overhead imagery datasets are being developed, they almost universally comprise a single view taken from directly overhead (“at nadir”), failing to address a critical variable: look angle. By contrast, views vary in real-world overhead imagery, particularly in dynamic scenarios such as natural disasters where first looks are often over 40 degrees off-nadir. This represents an important challenge to computer vision methods, as changing the view angle adds distortions, alters resolution, and changes lighting. At present, the impact of these perturbations on algorithmic detection and segmentation of objects is untested. To address this problem, we present an open-source Multi-View Overhead Imagery dataset, termed SpaceNet MVOI, with 27 unique looks from a broad range of viewing angles (-32.5 degrees to 54.0 degrees). Each of these images covers the same 665 square km geographic extent and is annotated with 126,747 building footprint labels, enabling direct assessment of the impact of viewpoint perturbation on model performance. We benchmark multiple leading segmentation and object detection models on: (1) building detection, (2) generalization to unseen viewing angles and resolutions, and (3) sensitivity of building footprint extraction to changes in resolution. We find that state-of-the-art segmentation and object detection models struggle to identify buildings in off-nadir imagery and generalize poorly to unseen views, presenting an important benchmark for exploring the broadly relevant challenge of detecting small, heterogeneous target objects in visually dynamic contexts.
Tasks Object Detection
Published 2019-03-28
URL https://arxiv.org/abs/1903.12239v2
PDF https://arxiv.org/pdf/1903.12239v2.pdf
PWC https://paperswithcode.com/paper/spacenet-mvoi-a-multi-view-overhead-imagery
Repo
Framework

An Integrated Autoencoder-Based Filter for Sparse Big Data

Title An Integrated Autoencoder-Based Filter for Sparse Big Data
Authors Baogui Xin, Wei Peng
Abstract We propose a novel filter for sparse big data, called an integrated autoencoder (IAE), which utilizes auxiliary information to mitigate data sparsity. The proposed model achieves an appropriate balance between prediction accuracy, convergence speed, and complexity. We implement experiments on a GPS trajectory dataset, and the results demonstrate that the IAE is more accurate and robust than some state-of-the-art methods.
Tasks
Published 2019-04-13
URL https://arxiv.org/abs/1904.06513v2
PDF https://arxiv.org/pdf/1904.06513v2.pdf
PWC https://paperswithcode.com/paper/a-joint-autoencoder-for-prediction-and-its
Repo
Framework
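
A minimal sketch of the general idea of mitigating sparsity with auxiliary information, assuming the side information is simply concatenated to the sparse input before encoding; this is an illustrative stand-in, not the authors' IAE architecture:

```python
# Hypothetical sketch: an autoencoder that concatenates auxiliary side information
# with a sparse input vector before encoding (general idea only, not the IAE itself).
import torch
import torch.nn as nn

class AuxAutoencoder(nn.Module):
    def __init__(self, sparse_dim, aux_dim, hid_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(sparse_dim + aux_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, sparse_dim)  # reconstruct only the sparse part

    def forward(self, x_sparse, x_aux):
        h = self.encoder(torch.cat([x_sparse, x_aux], dim=-1))
        return self.decoder(h)

# Training would minimize reconstruction error on the observed (non-missing) entries only.
```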

Availability-Based Production Predicts Speakers’ Real-time Choices of Mandarin Classifiers

Title Availability-Based Production Predicts Speakers’ Real-time Choices of Mandarin Classifiers
Authors Meilin Zhan, Roger Levy
Abstract Speakers often face choices as to how to structure their intended message into an utterance. Here we investigate the influence of contextual predictability on the encoding of linguistic content manifested by speaker choice in a classifier language. In English, a numeral modifies a noun directly (e.g., three computers). In classifier languages such as Mandarin Chinese, it is obligatory to use a classifier (CL) with the numeral and the noun (e.g., three CL.machinery computer, three CL.general computer). While different nouns are compatible with different specific classifiers, there is a general classifier “ge” (CL.general) that can be used with most nouns. When the upcoming noun is less predictable, using a more specific classifier would reduce surprisal at the noun and thus potentially facilitate comprehension (predicted by Uniform Information Density; Levy & Jaeger, 2007), but that more specific classifier may be dispreferred from a production standpoint if the general classifier is always readily accessible (predicted by Availability-Based Production; Bock, 1987; Ferreira & Dell, 2000). Here we use a picture-naming experiment to show that Availability-Based Production predicts speakers’ real-time choices of Mandarin classifiers.
Tasks
Published 2019-05-17
URL https://arxiv.org/abs/1905.07321v1
PDF https://arxiv.org/pdf/1905.07321v1.pdf
PWC https://paperswithcode.com/paper/availability-based-production-predicts
Repo
Framework
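
To make the Uniform Information Density argument in the abstract concrete, here is a toy calculation with made-up probabilities showing how a more specific classifier can lower the noun's surprisal relative to the general classifier “ge”; the numbers are purely illustrative:

```python
# Toy illustration of the surprisal argument (all probabilities are hypothetical):
# a specific classifier narrows the set of likely upcoming nouns, lowering the noun's
# surprisal -log2 P(noun | context) compared with the general classifier "ge".
import math

p_noun_given_general = 0.02    # hypothetical P("computer" | "three ge ...")
p_noun_given_specific = 0.20   # hypothetical P("computer" | "three CL.machinery ...")

surprisal_general = -math.log2(p_noun_given_general)    # about 5.6 bits
surprisal_specific = -math.log2(p_noun_given_specific)  # about 2.3 bits
print(surprisal_general, surprisal_specific)
```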

Learning to Generate Unambiguous Spatial Referring Expressions for Real-World Environments

Title Learning to Generate Unambiguous Spatial Referring Expressions for Real-World Environments
Authors Fethiye Irmak Doğan, Sinan Kalkan, Iolanda Leite
Abstract Referring to objects in a natural and unambiguous manner is crucial for effective human-robot interaction. Previous research on learning-based referring expressions has focused primarily on comprehension tasks, while generating referring expressions is still mostly limited to rule-based methods. In this work, we propose a two-stage approach that relies on deep learning for estimating spatial relations to describe an object naturally and unambiguously with a referring expression. We compare our method to the state-of-the-art algorithm in ambiguous environments (e.g., environments that include very similar objects with similar relationships). We show that our method generates referring expressions that people find to be more accurate (~30% better) and would prefer to use (~32% more often).
Tasks
Published 2019-04-15
URL https://arxiv.org/abs/1904.07165v4
PDF https://arxiv.org/pdf/1904.07165v4.pdf
PWC https://paperswithcode.com/paper/learning-to-generate-unambiguous-spatial
Repo
Framework

Dynamic Neural Network Channel Execution for Efficient Training

Title Dynamic Neural Network Channel Execution for Efficient Training
Authors Simeon E. Spasov, Pietro Lio
Abstract Existing methods for reducing the computational burden of neural networks at run-time, such as parameter pruning or dynamic computational path selection, focus solely on improving computational efficiency during inference. In this work, by contrast, we propose a novel method which reduces both the memory footprint and the number of computing operations required for training and inference. Our framework efficiently integrates pruning as part of the training procedure by exploring and tracking the relative importance of convolutional channels. At each training step, we select only a subset of highly salient channels to execute according to the combinatorial upper confidence bound algorithm, and run a forward and backward pass only on these activated channels, hence learning their parameters. Consequently, we enable the efficient discovery of compact models. We validate our approach empirically on state-of-the-art CNNs (VGGNet, ResNet and DenseNet) and on several image classification datasets. Results demonstrate that our framework for dynamic channel execution reduces computational cost by up to 4x and parameter count by up to 9x, thus reducing the memory and computational demands for discovering and training compact neural network models.
Tasks Image Classification
Published 2019-05-15
URL https://arxiv.org/abs/1905.06435v1
PDF https://arxiv.org/pdf/1905.06435v1.pdf
PWC https://paperswithcode.com/paper/dynamic-neural-network-channel-execution-for
Repo
Framework
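
A hedged sketch of the bandit-style channel selection the abstract describes, assuming the per-channel reward is an observed saliency signal from the forward/backward pass; the exploration constant and update rule are illustrative, not the authors' exact combinatorial UCB implementation:

```python
# Illustrative sketch of UCB-style channel selection (reward assumed to be an observed
# per-channel saliency); not the authors' implementation.
import numpy as np

def ucb_select(mean_reward, counts, step, k, c=2.0):
    """Pick the k channels maximizing empirical saliency plus an exploration bonus."""
    bonus = c * np.sqrt(np.log(step + 1) / (counts + 1e-8))
    return np.argsort(mean_reward + bonus)[::-1][:k]

def update(mean_reward, counts, chosen, observed_saliency):
    """Incrementally update the running mean saliency of the executed channels."""
    counts[chosen] += 1
    mean_reward[chosen] += (observed_saliency - mean_reward[chosen]) / counts[chosen]
```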

Topological Autoencoders

Title Topological Autoencoders
Authors Michael Moor, Max Horn, Bastian Rieck, Karsten Borgwardt
Abstract We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information. We show that our approach is theoretically well-founded and that it exhibits favourable latent representations on a synthetic manifold as well as on real-world image data sets, while preserving low reconstruction errors.
Tasks Topological Data Analysis
Published 2019-06-03
URL https://arxiv.org/abs/1906.00722v3
PDF https://arxiv.org/pdf/1906.00722v3.pdf
PWC https://paperswithcode.com/paper/190600722
Repo
Framework
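
As a crude stand-in for the paper's topological loss (assumption: this sketch aligns all pairwise distances between a mini-batch and its latent codes, whereas the paper uses persistent homology to restrict the comparison to topologically relevant distances), a PyTorch sketch might be:

```python
# Crude stand-in for a topology-preserving regularizer: penalize discrepancy between
# input-space and latent-space pairwise-distance structure of a mini-batch.
import torch

def pairwise_dist(x):
    return torch.cdist(x, x, p=2)

def topo_like_loss(x_batch, z_batch):
    """Compare distance matrices of inputs (flattened) and their latent codes."""
    dx = pairwise_dist(x_batch.flatten(1))
    dz = pairwise_dist(z_batch)
    return ((dx - dz) ** 2).mean()

# In training, this term would be added to the usual autoencoder reconstruction loss.
```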

Findings of the NLP4IF-2019 Shared Task on Fine-Grained Propaganda Detection

Title Findings of the NLP4IF-2019 Shared Task on Fine-Grained Propaganda Detection
Authors Giovanni Da San Martino, Alberto Barrón-Cedeño, Preslav Nakov
Abstract We present the shared task on Fine-Grained Propaganda Detection, which was organized as part of the NLP4IF workshop at EMNLP-IJCNLP 2019. There were two subtasks. FLC is a fragment-level task that asks for the identification of propagandist text fragments in a news article and also for the prediction of the specific propaganda technique used in each such fragment (18-way classification task). SLC is a sentence-level binary classification task asking to detect the sentences that contain propaganda. A total of 12 teams submitted systems for the FLC task, 25 teams did so for the SLC task, and 14 teams eventually submitted a system description paper. For both subtasks, most systems managed to beat the baseline by a sizable margin. The leaderboard and the data from the competition are available at http://propaganda.qcri.org/nlp4if-shared-task/.
Tasks
Published 2019-10-20
URL https://arxiv.org/abs/1910.09982v1
PDF https://arxiv.org/pdf/1910.09982v1.pdf
PWC https://paperswithcode.com/paper/findings-of-the-nlp4if-2019-shared-task-on
Repo
Framework

An Introduction to a New Text Classification and Visualization for Natural Language Processing Using Topological Data Analysis

Title An Introduction to a New Text Classification and Visualization for Natural Language Processing Using Topological Data Analysis
Authors Naiereh Elyasi, Mehdi Hosseini Moghadam
Abstract Topological Data Analysis (TDA) is a new and fast-growing field of data science providing a set of topological and geometric tools to derive relevant features out of complex high-dimensional data. In this paper we apply two of the best methods in topological data analysis, “Persistent Homology” and “Mapper”, in order to classify Persian poems composed by two of the best Iranian poets, namely “Ferdowsi” and “Hafez”. This article has two main parts: in the first part we explain the mathematics behind these two methods in a way that is easy for a general audience to understand, and in the second part we describe our models and the results of applying TDA tools to NLP.
Tasks Text Classification, Topological Data Analysis
Published 2019-06-03
URL https://arxiv.org/abs/1906.01726v1
PDF https://arxiv.org/pdf/1906.01726v1.pdf
PWC https://paperswithcode.com/paper/an-introduction-to-a-new-text-classification
Repo
Framework

Self-Adapting Goals Allow Transfer of Predictive Models to New Tasks

Title Self-Adapting Goals Allow Transfer of Predictive Models to New Tasks
Authors Kai Olav Ellefsen, Jim Torresen
Abstract A long-standing challenge in Reinforcement Learning is enabling agents to learn a model of their environment which can be transferred to solve other problems in a world with the same underlying rules. One reason this is difficult is the challenge of learning accurate models of an environment. If such a model is inaccurate, the agent’s plans and actions will likely be sub-optimal and lead to the wrong outcomes. Recent progress in model-based reinforcement learning has improved the ability of agents to learn and use predictive models. In this paper, we extend a recent deep learning architecture which learns a predictive model of the environment that aims to predict only the values of a few key measurements, which are indicative of an agent’s performance. Predicting only a few measurements rather than the entire future state of an environment makes it more feasible to learn a valuable predictive model. We extend this predictive model with a small, evolving neural network that suggests the best goals to pursue in the current state. We demonstrate that this allows the predictive model to transfer to new scenarios where goals are different, and that the adaptive goals can even adjust agent behavior on-line, changing its strategy to fit the current context.
Tasks
Published 2019-04-04
URL https://arxiv.org/abs/1904.02435v2
PDF https://arxiv.org/pdf/1904.02435v2.pdf
PWC https://paperswithcode.com/paper/self-adapting-goals-allow-transfer-of
Repo
Framework

Asynchronous Delay-Aware Accelerated Proximal Coordinate Descent for Nonconvex Nonsmooth Problems

Title Asynchronous Delay-Aware Accelerated Proximal Coordinate Descent for Nonconvex Nonsmooth Problems
Authors Ehsan Kazemi, Liqiang Wang
Abstract Nonconvex and nonsmooth problems have recently attracted considerable attention in machine learning. However, developing efficient methods for nonconvex and nonsmooth optimization problems with certain performance guarantees remains a challenge. Proximal coordinate descent (PCD) has been widely used for solving optimization problems, but knowledge of PCD methods in the nonconvex setting is very limited. On the other hand, asynchronous proximal coordinate descent (APCD) has recently received much attention for solving large-scale problems. However, accelerated variants of APCD algorithms are rarely studied. In this paper, we extend the APCD method to an accelerated algorithm (AAPCD) for nonsmooth and nonconvex problems that satisfy the sufficient descent property, by comparing the function values at the proximal update and at a linearly extrapolated point using a delay-aware momentum value. To the best of our knowledge, we are the first to provide stochastic and deterministic accelerated extensions of APCD algorithms for general nonconvex and nonsmooth problems, ensuring that for both bounded and unbounded delays every limit point is a critical point. By leveraging the Kurdyka-Lojasiewicz property, we show linear and sublinear convergence rates for the deterministic AAPCD with bounded delays. Numerical results demonstrate the practical speed and efficiency of our algorithm.
Tasks
Published 2019-02-05
URL http://arxiv.org/abs/1902.01856v1
PDF http://arxiv.org/pdf/1902.01856v1.pdf
PWC https://paperswithcode.com/paper/asynchronous-delay-aware-accelerated-proximal
Repo
Framework
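
To illustrate the basic accelerated proximal coordinate update the abstract builds on, here is a simplified, synchronous NumPy sketch with an L1 proximal step; the asynchrony, delay-awareness, and convergence machinery of AAPCD are not represented:

```python
# Simplified, synchronous sketch of one accelerated proximal coordinate-descent step
# with soft-thresholding for an L1 term (illustrative only, not the AAPCD algorithm).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pcd_step(x, x_prev, grad_fn, j, lr=0.1, lam=0.01, momentum=0.5):
    """Update coordinate j of x using an extrapolated point and a proximal (L1) step."""
    y = x + momentum * (x - x_prev)        # linear extrapolation (acceleration)
    g = grad_fn(y)[j]                      # partial gradient at the extrapolated point
    x_new = x.copy()
    x_new[j] = soft_threshold(y[j] - lr * g, lr * lam)
    return x_new
```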

Persistent homology detects curvature

Title Persistent homology detects curvature
Authors Peter Bubenik, Michael Hull, Dhruv Patel, Benjamin Whittle
Abstract In topological data analysis, persistent homology is used to study the “shape of data”. Persistent homology computations are completely characterized by a set of intervals called a bar code. It is often said that the long intervals represent the “topological signal” and the short intervals represent “noise”. We give evidence to dispute this thesis, showing that the short intervals encode geometric information. Specifically, we prove that persistent homology detects the curvature of disks from which points have been sampled. We describe a general computational framework for solving inverse problems using the average persistence landscape, a continuous mapping from metric spaces with a probability measure to a Hilbert space. In the present application, the average persistence landscapes of points sampled from disks of constant curvature result in a path in this Hilbert space which may be learned using standard tools from statistics and machine learning.
Tasks Topological Data Analysis
Published 2019-05-30
URL https://arxiv.org/abs/1905.13196v3
PDF https://arxiv.org/pdf/1905.13196v3.pdf
PWC https://paperswithcode.com/paper/persistent-homology-detects-curvature
Repo
Framework
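
A hedged sketch of only the final learning step mentioned in the abstract, assuming persistence-landscape feature vectors have already been computed with a TDA library and that a simple ridge regression stands in for the unspecified learner:

```python
# Sketch of the final regression step only (assumption: `landscape_vectors` are averaged
# persistence-landscape features computed elsewhere, and `curvatures` are the known
# curvatures of the sampled disks). Ridge regression is an illustrative choice.
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def fit_curvature_regressor(landscape_vectors, curvatures):
    model = Ridge(alpha=1.0)
    scores = cross_val_score(model, landscape_vectors, curvatures, cv=5)
    model.fit(landscape_vectors, curvatures)
    return model, scores.mean()
```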

Lift-the-flap: what, where and when for context reasoning

Title Lift-the-flap: what, where and when for context reasoning
Authors Mengmi Zhang, Claire Tseng, Karla Montejo, Joseph Kwon, Gabriel Kreiman
Abstract Context reasoning is critical in a wide variety of applications where current inputs need to be interpreted in the light of previous experience and knowledge. Both spatial and temporal contextual information play a critical role in the domain of visual recognition. Here we investigate spatial constraints (what image features provide contextual information and where they are located) and temporal constraints (when different contextual cues matter) for visual recognition. The task is to reason about the scene context and infer the identity of a target object hidden behind a flap in a natural image. To tackle this problem, we first describe an online human psychophysics experiment recording active sampling via mouse clicks in lift-the-flap games, and identify clicking patterns and features which are diagnostic of high contextual reasoning accuracy. As a proof of the usefulness of these clicking patterns and visual features, we extend a state-of-the-art recurrent model capable of attending to salient context regions, dynamically integrating useful information, making inferences, and predicting the class label of the target object over multiple clicks. The proposed model achieves human-level contextual reasoning accuracy, shares human-like sampling behavior, and learns interpretable features for contextual reasoning.
Tasks Object Recognition, Semantic Segmentation
Published 2019-02-01
URL https://arxiv.org/abs/1902.00163v2
PDF https://arxiv.org/pdf/1902.00163v2.pdf
PWC https://paperswithcode.com/paper/lift-the-flap-context-reasoning-using-object
Repo
Framework

Supervised Learning for Multi-Block Incomplete Data

Title Supervised Learning for Multi-Block Incomplete Data
Authors Hadrien Lorenzo, Jérôme Saracco, Rodolphe Thiébaut
Abstract In supervised high-dimensional settings with a large number of variables and a low number of individuals, one objective is to select the relevant variables and thus to reduce the dimension. That subspace selection is often managed with supervised tools. However, some data can be missing, compromising the validity of the subspace selection. We propose a Partial Least Squares (PLS) based method, called Multi-block Data-Driven sparse PLS (mdd-sPLS), allowing joint variable selection and subspace estimation while handling missing-data imputation at training and test time through a new algorithm called Koh-Lanta. This method was challenged through simulations against existing methods such as mean imputation, nipals, softImpute and imputeMFA. In the context of supervised analysis of high-dimensional data, the proposed method shows the lowest prediction error for the response variables. So far this is the only method combining data imputation and response variable prediction. The superiority of the supervised multi-block mdd-sPLS method increases with the intra-block and inter-block correlations. The application to a real data set from a rVSV-ZEBOV Ebola vaccine trial revealed interesting and biologically relevant results. The method is implemented in an R package available on CRAN and a Python package available on PyPI.
Tasks Imputation
Published 2019-01-14
URL http://arxiv.org/abs/1901.04380v1
PDF http://arxiv.org/pdf/1901.04380v1.pdf
PWC https://paperswithcode.com/paper/supervised-learning-for-multi-block
Repo
Framework
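
For orientation, here is a scikit-learn sketch of one of the baselines the paper compares against (mean imputation followed by standard PLS regression); mdd-sPLS itself and the Koh-Lanta imputation algorithm are not reproduced here:

```python
# Baseline sketch only (not mdd-sPLS): mean-impute missing entries in a block of
# predictors, then fit a standard PLS regression to predict the response block.
from sklearn.impute import SimpleImputer
from sklearn.cross_decomposition import PLSRegression

def mean_impute_then_pls(X_missing, Y, n_components=2):
    X = SimpleImputer(strategy="mean").fit_transform(X_missing)
    pls = PLSRegression(n_components=n_components)
    pls.fit(X, Y)
    return pls
```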