October 18, 2019

3049 words 15 mins read

Paper Group ANR 613

Computer Analysis of Architecture Using Automatic Image Understanding. Sign-Full Random Projections. Fairly Allocating Many Goods with Few Queries. Deep Person Detection in 2D Range Data. Bioinformatics and Medicine in the Era of Deep Learning. Embedding-reparameterization procedure for manifold-valued latent variables in generative models. Cluster …

Computer Analysis of Architecture Using Automatic Image Understanding

Title Computer Analysis of Architecture Using Automatic Image Understanding
Authors Fan Wei, Yuan Li, Lior Shamir
Abstract In the past few years, computer vision and pattern recognition systems have become increasingly powerful, expanding the range of automatic tasks enabled by machine vision. Here we show that computer analysis of building images can quantify architecture and measure similarities between city architectural styles. Images of buildings from 18 cities and three countries were acquired using Google StreetView, and were used to train a machine vision system to automatically identify the location of an imaged building from its visual content. Experimental results show that the system can automatically identify the geographical location of a StreetView image. More importantly, the algorithm was able to group the cities and countries and provide a phylogeny of the similarities between architectural styles as captured by StreetView images. These results demonstrate that computer vision and pattern recognition algorithms can perform the complex cognitive task of analyzing images of buildings, and can be used to measure and quantify visual similarities and differences between architectural styles. This experiment provides a new paradigm for studying architecture, based on a quantitative approach that can enhance traditional manual observation and analysis. The source code used for the analysis is open and publicly available.
Tasks
Published 2018-07-13
URL http://arxiv.org/abs/1807.04892v3
PDF http://arxiv.org/pdf/1807.04892v3.pdf
PWC https://paperswithcode.com/paper/computer-analysis-of-architecture-using
Repo
Framework
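
The phylogeny step described above rests on turning classifier behavior into a pairwise similarity between cities. A minimal sketch of one way to do that, assuming the similarity is read off a symmetrized confusion matrix; the released code may derive it differently:

```python
import numpy as np

def style_similarity(conf):
    """Symmetric similarity between classes from a confusion matrix.

    conf[i, j] = fraction of class-i images predicted as class j.
    Cities whose images the classifier confuses more often are taken
    to be architecturally more similar -- the intuition behind building
    a phylogeny from StreetView classification results.
    """
    conf = np.asarray(conf, float)
    return (conf + conf.T) / 2.0
```

Feeding the resulting matrix to any hierarchical clustering routine yields a dendrogram of styles.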

Sign-Full Random Projections

Title Sign-Full Random Projections
Authors Ping Li
Abstract The method of 1-bit (“sign-sign”) random projections has been a popular tool for efficient search and machine learning on large datasets. Given two $D$-dim data vectors $u$, $v\in\mathbb{R}^D$, one can generate $x = \sum_{i=1}^D u_i r_i$, and $y = \sum_{i=1}^D v_i r_i$, where $r_i\sim N(0,1)$ iid. The “collision probability” is ${Pr}\left(sgn(x)=sgn(y)\right) = 1-\frac{\cos^{-1}\rho}{\pi}$, where $\rho = \rho(u,v)$ is the cosine similarity. We develop “sign-full” random projections by estimating $\rho$ from, e.g., the expectation $E(sgn(x)y)=\sqrt{\frac{2}{\pi}} \rho$, which can be further substantially improved by normalizing $y$. For nonnegative data, we recommend an interesting estimator based on $E\left(y_- 1_{x\geq 0} + y_+ 1_{x<0}\right)$ and its normalized version. The recommended estimator almost matches the accuracy of the (computationally expensive) maximum likelihood estimator. At high similarity ($\rho\rightarrow1$), the asymptotic variance of the recommended estimator is only $\frac{4}{3\pi} \approx 0.4$ of that of the sign-sign estimator. At small $k$ and high similarity, the improvement is even more substantial.
Tasks
Published 2018-04-26
URL http://arxiv.org/abs/1805.00533v1
PDF http://arxiv.org/pdf/1805.00533v1.pdf
PWC https://paperswithcode.com/paper/sign-full-random-projections
Repo
Framework
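
Both estimators in the abstract can be checked numerically. The sketch below is a toy simulation, not the paper's code: it compares the 1-bit (sign-sign) estimator recovered from the collision probability with the sign-full estimator based on E(sgn(x)y) = sqrt(2/π)·ρ, using k independent Gaussian projections:

```python
import numpy as np

rng = np.random.default_rng(0)
D, k = 200, 20000  # data dimension, number of projections

u = rng.normal(size=D)
v = 0.8 * u + 0.6 * rng.normal(size=D)  # correlated second vector
rho_true = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# work with unit vectors so E(sgn(x) y) = sqrt(2/pi) * rho holds exactly
u_n, v_n = u / np.linalg.norm(u), v / np.linalg.norm(v)

R = rng.normal(size=(k, D))  # k independent Gaussian projections
x, y = R @ u_n, R @ v_n      # projected values

# sign-sign (1-bit) estimator: invert the collision probability
p_hat = np.mean(np.sign(x) == np.sign(y))
rho_signsign = np.cos(np.pi * (1 - p_hat))

# sign-full estimator from E(sgn(x) y) = sqrt(2/pi) * rho
rho_signfull = np.sqrt(np.pi / 2) * np.mean(np.sign(x) * y)
```

Both estimates land close to the true cosine similarity; the variance comparison at high ρ is the paper's contribution and is not reproduced here.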

Fairly Allocating Many Goods with Few Queries

Title Fairly Allocating Many Goods with Few Queries
Authors Hoon Oh, Ariel D. Procaccia, Warut Suksompong
Abstract We investigate the query complexity of the fair allocation of indivisible goods. For two agents with arbitrary monotonic valuations, we design an algorithm that computes an allocation satisfying envy-freeness up to one good (EF1), a relaxation of envy-freeness, using a logarithmic number of queries. We show that the logarithmic query complexity bound also holds for three agents with additive valuations. These results suggest that it is possible to fairly allocate goods in practice even when the number of goods is extremely large. By contrast, we prove that computing an allocation satisfying envy-freeness and another of its relaxations, envy-freeness up to any good (EFX), requires a linear number of queries even when there are only two agents with identical additive valuations.
Tasks
Published 2018-07-30
URL http://arxiv.org/abs/1807.11367v1
PDF http://arxiv.org/pdf/1807.11367v1.pdf
PWC https://paperswithcode.com/paper/fairly-allocating-many-goods-with-few-queries
Repo
Framework
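
To make the EF1 notion concrete, the sketch below checks the EF1 condition for additive valuations and produces an EF1 allocation via round-robin picking. Note that round-robin uses linearly many value queries; the paper's contribution is a protocol achieving EF1 with only logarithmically many queries, which this sketch does not reproduce.

```python
def round_robin(valuations):
    """Allocate indivisible goods in round-robin order.

    valuations[i][g] = agent i's (additive) value for good g.
    Round-robin picking is known to produce an EF1 allocation
    for additive valuations.
    """
    n, m = len(valuations), len(valuations[0])
    bundles = [[] for _ in range(n)]
    remaining = set(range(m))
    turn = 0
    while remaining:
        i = turn % n
        # agent i takes their favorite remaining good
        g = max(remaining, key=lambda g: valuations[i][g])
        bundles[i].append(g)
        remaining.remove(g)
        turn += 1
    return bundles

def is_ef1(valuations, bundles):
    """Check envy-freeness up to one good for additive valuations."""
    n = len(valuations)
    for i in range(n):
        mine = sum(valuations[i][g] for g in bundles[i])
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            theirs = sum(valuations[i][g] for g in bundles[j])
            best = max(valuations[i][g] for g in bundles[j])
            # EF1: i's envy toward j must vanish after removing
            # some single good from j's bundle
            if mine < theirs - best - 1e-9:
                return False
    return True
```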

Deep Person Detection in 2D Range Data

Title Deep Person Detection in 2D Range Data
Authors Lucas Beyer, Alexander Hermans, Timm Linder, Kai O. Arras, Bastian Leibe
Abstract Detecting humans is a key skill for mobile robots and intelligent vehicles in a large variety of applications. While the problem is well studied for certain sensory modalities such as image data, few works address this detection task using 2D range data. However, a widespread sensory setup for many mobile robots in service and domestic applications includes a horizontally mounted 2D laser scanner. Detecting people in 2D range data is challenging due to the speed and dynamics of human leg motion and the high levels of occlusion and self-occlusion, particularly in crowds. While previous approaches mostly relied on handcrafted features, we recently developed the deep learning based wheelchair and walker detector DROW. In this paper, we show that DROW generalizes to people, including small modifications that significantly boost its performance. Additionally, by providing a small, fully online temporal window in our network, we further boost our score. We extend the DROW dataset with person annotations, making it the largest dataset of person annotations in 2D range data, recorded over several days in a real-world environment with high diversity. Extensive experiments with three current baseline methods show that it is a challenging dataset, on which our improved DROW detector beats the current state of the art.
Tasks Human Detection
Published 2018-04-06
URL http://arxiv.org/abs/1804.02463v1
PDF http://arxiv.org/pdf/1804.02463v1.pdf
PWC https://paperswithcode.com/paper/deep-person-detection-in-2d-range-data
Repo
Framework
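
For context on what 2D range data looks like to a detector, here is a classical pre-processing step of the kind that handcrafted pipelines (which DROW replaces with learned features) build on: splitting a scan into candidate segments at range discontinuities. This is an illustrative sketch, not part of the DROW pipeline.

```python
def segment_scan(ranges, jump=0.3):
    """Split a 2D laser scan into segments at range discontinuities.

    Consecutive beams whose ranges differ by more than `jump` metres
    start a new segment; each segment is a list of beam indices and
    serves as a candidate (e.g. a pair of legs) for a person detector.
    """
    segments, cur = [], [0]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump:
            segments.append(cur)
            cur = []
        cur.append(i)
    segments.append(cur)
    return segments
```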

Bioinformatics and Medicine in the Era of Deep Learning

Title Bioinformatics and Medicine in the Era of Deep Learning
Authors Davide Bacciu, Paulo J. G. Lisboa, José D. Martín, Ruxandra Stoean, Alfredo Vellido
Abstract Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery. Nowhere is this clearer than in bioinformatics, driven by technological breakthroughs in data acquisition. It has been argued that bioinformatics could quickly become the field of research generating the largest data repositories, surpassing other data-intensive areas such as high-energy physics or astroinformatics. Over the last decade, deep learning has become a disruptive advance in machine learning, giving new life to the long-standing connectionist paradigm in artificial intelligence. Deep learning methods are ideally suited to large-scale data and should therefore be ideally suited to knowledge discovery in bioinformatics and biomedicine at large. In this brief paper, we review key aspects of the application of deep learning in bioinformatics and medicine, drawing on the themes covered by the contributions to an ESANN 2018 special session devoted to this topic.
Tasks
Published 2018-02-27
URL http://arxiv.org/abs/1802.09791v1
PDF http://arxiv.org/pdf/1802.09791v1.pdf
PWC https://paperswithcode.com/paper/bioinformatics-and-medicine-in-the-era-of
Repo
Framework

Embedding-reparameterization procedure for manifold-valued latent variables in generative models

Title Embedding-reparameterization procedure for manifold-valued latent variables in generative models
Authors Eugene Golikov, Maksim Kretov
Abstract The conventional prior for a Variational Auto-Encoder (VAE) is a Gaussian distribution. Recent works have demonstrated that the choice of prior distribution affects the learning capacity of VAE models. We propose a general technique (embedding-reparameterization procedure, or ER) for introducing arbitrary manifold-valued latent variables into a VAE model. We compare our technique with a conventional VAE on a toy benchmark problem. This is work in progress.
Tasks
Published 2018-12-06
URL http://arxiv.org/abs/1812.02769v1
PDF http://arxiv.org/pdf/1812.02769v1.pdf
PWC https://paperswithcode.com/paper/embedding-reparameterization-procedure-for
Repo
Framework
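
As a rough illustration of reparameterizing a manifold-valued latent, the sketch below samples with the usual Gaussian reparameterization in the ambient embedding space R^2 and projects onto the unit circle S^1. This is a minimal stand-in for the idea, not the authors' ER procedure, which handles arbitrary manifolds:

```python
import numpy as np

rng = np.random.default_rng(0)

def circle_reparameterize(mu, log_sigma, eps):
    """Sample a latent constrained to the unit circle S^1.

    Draws in the ambient space R^2 with the standard VAE
    reparameterization trick, then projects onto the manifold,
    so gradients can flow through mu and log_sigma.
    """
    z_ambient = mu + np.exp(log_sigma) * eps  # Gaussian sample in R^2
    return z_ambient / np.linalg.norm(z_ambient, axis=-1, keepdims=True)

z = circle_reparameterize(np.array([1.0, 0.0]), -1.0, rng.normal(size=2))
```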

Clustering With Pairwise Relationships: A Generative Approach

Title Clustering With Pairwise Relationships: A Generative Approach
Authors Yen-Yun Yu, Shireen Y. Elhabian, Ross T. Whitaker
Abstract Semi-supervised learning (SSL) has become important in current data analysis applications, where the amount of unlabeled data is growing exponentially and user input remains limited by logistics and expense. Constrained clustering, a subclass of SSL, makes use of user input in the form of relationships between data points (e.g., pairs of data points belonging to the same class or to different classes) and can markedly improve the performance of unsupervised clustering by reflecting user-defined knowledge of the relationships between particular data points. Existing algorithms incorporate such user input heuristically, as either hard constraints or soft penalties that are separate from any generative or statistical aspect of the clustering model; this results in formulations that are suboptimal and insufficiently general. In this paper, we propose a principled, generative approach to probabilistically model, without ad hoc penalties, the joint distribution given by user-defined pairwise relations. The proposed model accounts for general underlying distributions without assuming a specific form and relies on expectation-maximization for model fitting. For distributions in a standard form, the proposed approach yields a closed-form solution for the updated parameters.
Tasks
Published 2018-05-06
URL http://arxiv.org/abs/1805.02285v1
PDF http://arxiv.org/pdf/1805.02285v1.pdf
PWC https://paperswithcode.com/paper/clustering-with-pairwise-relationships-a
Repo
Framework
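
To see what hard must-link constraints look like in a simple setting (the kind of heuristic incorporation the paper argues against, shown here only for contrast with its generative treatment), the sketch below runs 1-D k-means with each must-link group collapsed into a single weighted point, so linked points always share a cluster:

```python
import numpy as np

def cluster_with_must_links(x, pairs, k=2, iters=25):
    """Toy 1-D k-means with must-link pairs merged into weighted points."""
    x = np.asarray(x, float)
    # union linked indices into shared groups
    groups = {i: [i] for i in range(len(x))}
    for a, b in pairs:
        ga, gb = groups[a], groups[b]
        if ga is not gb:
            ga.extend(gb)
            for idx in gb:
                groups[idx] = ga
    uniq = {id(g): g for g in groups.values()}.values()
    centers_pts = np.array([x[g].mean() for g in uniq])  # one point per group
    w = np.array([len(g) for g in uniq], float)          # group sizes as weights
    centroids = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        assign = np.abs(centers_pts[:, None] - centroids[None, :]).argmin(1)
        for c in range(k):
            m = assign == c
            if m.any():
                centroids[c] = np.average(centers_pts[m], weights=w[m])
    labels = np.empty(len(x), int)
    for g, a in zip(uniq, assign):
        labels[list(g)] = a
    return labels, centroids
```

The hard merge guarantees constraint satisfaction but, unlike the paper's model, cannot express uncertain or soft relationships.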

Associative Compression Networks for Representation Learning

Title Associative Compression Networks for Representation Learning
Authors Alex Graves, Jacob Menick, Aaron van den Oord
Abstract This paper introduces Associative Compression Networks (ACNs), a new framework for variational autoencoding with neural networks. The system differs from existing variational autoencoders (VAEs) in that the prior distribution used to model each code is conditioned on a similar code from the dataset. In compression terms this equates to sequentially transmitting the dataset using an ordering determined by proximity in latent space. Since the prior need only account for local, rather than global variations in the latent space, the coding cost is greatly reduced, leading to rich, informative codes. Crucially, the codes remain informative when powerful, autoregressive decoders are used, which we argue is fundamentally difficult with normal VAEs. Experimental results on MNIST, CIFAR-10, ImageNet and CelebA show that ACNs discover high-level latent features such as object class, writing style, pose and facial expression, which can be used to cluster and classify the data, as well as to generate diverse and convincing samples. We conclude that ACNs are a promising new direction for representation learning: one that steps away from IID modelling, and towards learning a structured description of the dataset as a whole.
Tasks Representation Learning
Published 2018-04-06
URL http://arxiv.org/abs/1804.02476v2
PDF http://arxiv.org/pdf/1804.02476v2.pdf
PWC https://paperswithcode.com/paper/associative-compression-networks-for
Repo
Framework
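
The coding-cost argument can be illustrated numerically: when codes cluster, a prior centered on each code's nearest neighbour is far cheaper than one global prior. The toy computation below illustrates the intuition only, it is not the ACN model:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy latent codes forming two clusters
codes = np.concatenate([rng.normal(-3, 0.3, (50, 2)),
                        rng.normal(3, 0.3, (50, 2))])

def gauss_nats(x, mu, sigma):
    """Coding cost (negative log density, in nats) under an isotropic Gaussian."""
    d = x.shape[-1]
    return 0.5 * np.sum((x - mu) ** 2 / sigma ** 2, axis=-1) \
        + d * np.log(sigma * np.sqrt(2 * np.pi))

# global (unconditional) prior: one Gaussian fit to all codes
global_cost = gauss_nats(codes, codes.mean(0), codes.std()).mean()

# associative prior: condition on each code's nearest neighbour
dists = np.linalg.norm(codes[:, None] - codes[None, :], axis=-1)
np.fill_diagonal(dists, np.inf)
nn = dists.argmin(axis=1)
local_cost = gauss_nats(codes, codes[nn], 0.5).mean()
```

The nearest-neighbour prior only has to account for local variation, so its average coding cost is much lower, mirroring the abstract's argument.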

Detecting cognitive impairments by agreeing on interpretations of linguistic features

Title Detecting cognitive impairments by agreeing on interpretations of linguistic features
Authors Zining Zhu, Jekaterina Novikova, Frank Rudzicz
Abstract Linguistic features have shown promise for detecting various cognitive impairments. To improve detection accuracy, the two usual approaches are to increase the amount of data or the number of linguistic features. However, acquiring additional clinical data can be expensive, and hand-crafting features is burdensome. In this paper, we take a third approach, proposing Consensus Networks (CNs), a framework that classifies after reaching agreement between modalities. We divide linguistic features into non-overlapping subsets according to their modalities, and let neural networks learn low-dimensional representations that agree with each other. These representations are passed into a classifier network. All neural networks are optimized iteratively. We also present two methods that improve the performance of CNs, along with ablation studies illustrating the effectiveness of the modality division. To understand further what happens in CNs, we visualize the representations during training. Overall, using all 413 linguistic features, our models significantly outperform the traditional classifiers used in state-of-the-art work.
Tasks
Published 2018-08-20
URL http://arxiv.org/abs/1808.06570v3
PDF http://arxiv.org/pdf/1808.06570v3.pdf
PWC https://paperswithcode.com/paper/detecting-cognitive-impairments-by-agreeing
Repo
Framework
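
A hypothetical form of the agreement term: given one low-dimensional representation per modality subset, penalize pairwise disagreement. The actual CN training objective differs (it is optimized iteratively together with a classifier network); this is only a sketch of the consensus idea:

```python
import numpy as np

def agreement_loss(reps):
    """Mean pairwise squared distance between modality representations.

    reps: list of arrays (n_samples x d), one low-dimensional
    representation per modality subset. Minimizing this pushes the
    modality-specific encoders toward consensus.
    """
    total, pairs = 0.0, 0
    for i in range(len(reps)):
        for j in range(i + 1, len(reps)):
            total += np.mean((reps[i] - reps[j]) ** 2)
            pairs += 1
    return total / pairs
```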

An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos

Title An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos
Authors B Ravi Kiran, Dilip Mathew Thomas, Ranjith Parakkal
Abstract Videos represent the primary source of information for surveillance applications and are available in large amounts but in most cases contain little or no annotation for supervised learning. This article reviews the state-of-the-art deep learning based methods for video anomaly detection and categorizes them based on the type of model and criteria of detection. We also perform simple studies to understand the different approaches and provide the criteria of evaluation for spatio-temporal anomaly detection.
Tasks Anomaly Detection
Published 2018-01-09
URL http://arxiv.org/abs/1801.03149v2
PDF http://arxiv.org/pdf/1801.03149v2.pdf
PWC https://paperswithcode.com/paper/an-overview-of-deep-learning-based-methods
Repo
Framework
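
One family of methods covered by such surveys scores frames by reconstruction error: a model trained only on normal video reconstructs anomalous frames poorly. A minimal sketch, with `reconstruct` standing in for a hypothetical trained autoencoder:

```python
import numpy as np

def reconstruction_anomaly_scores(frames, reconstruct):
    """Per-frame anomaly score = mean squared reconstruction error.

    frames: iterable of frame arrays; reconstruct: callable mapping a
    frame to its reconstruction. Higher scores flag likely anomalies.
    """
    return np.array([np.mean((f - reconstruct(f)) ** 2) for f in frames])
```

In practice the scores are thresholded, or smoothed over time before thresholding, to produce spatio-temporal detections.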

Failure Prediction for Autonomous Driving

Title Failure Prediction for Autonomous Driving
Authors Simon Hecker, Dengxin Dai, Luc Van Gool
Abstract The primary focus of autonomous driving research is to improve driving accuracy. While great progress has been made, state-of-the-art algorithms still fail at times. Such failures may have catastrophic consequences. It is therefore important that automated cars foresee problems ahead as early as possible. This is also of paramount importance if the driver will be asked to take over. We conjecture that failures do not occur randomly. For instance, driving models may be more likely to fail at places with heavy traffic, at complex intersections, and/or under adverse weather or illumination conditions. This work presents a method to learn to predict the occurrence of these failures, i.e. to assess how difficult a scene is for a given driving model and to possibly give the human driver an early heads-up. A camera-based driving model is developed and trained on real driving datasets. The discrepancies between the model’s predictions and the human “ground-truth” maneuvers were then recorded to yield the “failure” scores. Experimental results show that the failure score can indeed be learned and predicted. Thus, our prediction method is able to improve the overall safety of an automated driving model by alerting the human driver in a timely manner, leading to better human-vehicle collaborative driving.
Tasks Autonomous Driving
Published 2018-05-04
URL http://arxiv.org/abs/1805.01811v1
PDF http://arxiv.org/pdf/1805.01811v1.pdf
PWC https://paperswithcode.com/paper/failure-prediction-for-autonomous-driving
Repo
Framework
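
In its simplest scalar form, the failure labels described above come from thresholding the discrepancy between the model's maneuver and the human's. A toy sketch; the paper works with full driving maneuvers, not single scalars:

```python
def failure_scores(predicted, ground_truth, tol):
    """Binary failure labels from prediction/ground-truth discrepancy.

    A scene counts as a failure when the driving model's predicted
    maneuver deviates from the human ground truth by more than `tol`.
    These labels are what a failure predictor is then trained to
    anticipate from the camera input alone.
    """
    return [int(abs(p - t) > tol) for p, t in zip(predicted, ground_truth)]
```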

The Price of Governance: A Middle Ground Solution to Coordination in Organizational Control

Title The Price of Governance: A Middle Ground Solution to Coordination in Organizational Control
Authors Chao Yu
Abstract Achieving coordination is crucial in organizational control. This paper investigates a middle ground between decentralized interactions and centralized administration for coordinating agents away from inefficient behavior. We first propose the price of governance (PoG) to evaluate how such a middle ground solution performs in terms of effectiveness and cost. We then propose a hierarchical supervision framework to explicitly model the PoG, and define step by step how to realize the core principle of the framework and compute the optimal PoG for a control problem. Two illustrative case studies exemplify the application of the proposed framework and its methodology. Results show that, by properly formulating and implementing each step, the hierarchical supervision framework is capable of promoting coordination among agents while keeping administrative cost to a minimum in different kinds of organizational control problems.
Tasks
Published 2018-11-09
URL http://arxiv.org/abs/1811.03819v1
PDF http://arxiv.org/pdf/1811.03819v1.pdf
PWC https://paperswithcode.com/paper/the-price-of-governance-a-middle-ground
Repo
Framework

Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks

Title Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
Authors Kang Liu, Brendan Dolan-Gavitt, Siddharth Garg
Abstract Deep neural networks (DNNs) provide excellent performance across a wide range of classification tasks, but their training requires high computational resources and is often outsourced to third parties. Recent work has shown that outsourced training introduces the risk that a malicious trainer will return a backdoored DNN that behaves normally on most inputs but causes targeted misclassifications or degrades the accuracy of the network when a trigger known only to the attacker is present. In this paper, we provide the first effective defenses against backdoor attacks on DNNs. We implement three backdoor attacks from prior work and use them to investigate two promising defenses, pruning and fine-tuning. We show that neither, by itself, is sufficient to defend against sophisticated attackers. We then evaluate fine-pruning, a combination of pruning and fine-tuning, and show that it successfully weakens or even eliminates the backdoors, i.e., in some cases reducing the attack success rate to 0% with only a 0.4% drop in accuracy for clean (non-triggering) inputs. Our work provides the first step toward defenses against backdoor attacks in deep neural networks.
Tasks
Published 2018-05-30
URL http://arxiv.org/abs/1805.12185v1
PDF http://arxiv.org/pdf/1805.12185v1.pdf
PWC https://paperswithcode.com/paper/fine-pruning-defending-against-backdooring
Repo
Framework
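
The pruning half of the defense removes neurons that stay dormant on clean inputs, since backdoor behavior tends to hide in exactly those neurons. Below is a numpy sketch for one fully connected layer, an illustration of the idea rather than the authors' implementation (which prunes convolutional channels and then fine-tunes the pruned network):

```python
import numpy as np

def prune_dormant_neurons(W, b, clean_activations, frac=0.2):
    """Zero out the neurons least activated by clean inputs.

    W, b: weights (out_dim x in_dim) and bias of one layer.
    clean_activations: the layer's post-ReLU outputs on clean
    validation data (n_samples x out_dim). Neurons with the lowest
    mean clean activation are pruned, on the hypothesis that a
    backdoor trigger relies on them.
    """
    mean_act = clean_activations.mean(axis=0)
    n_prune = int(frac * len(mean_act))
    prune_idx = np.argsort(mean_act)[:n_prune]  # least-active first
    W, b = W.copy(), b.copy()
    W[prune_idx, :] = 0.0
    b[prune_idx] = 0.0
    return W, b, prune_idx
```

Fine-pruning then fine-tunes the pruned network on clean data, which the paper shows is needed against pruning-aware attackers.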

Pointly-Supervised Action Localization

Title Pointly-Supervised Action Localization
Authors Pascal Mettes, Cees G. M. Snoek
Abstract This paper strives for spatio-temporal localization of human actions in videos. In the literature, the consensus is to achieve localization by training on bounding box annotations provided for each frame of each training video. As annotating boxes in video is expensive, cumbersome and error-prone, we propose to bypass box-supervision. Instead, we introduce action localization based on point-supervision. We start from unsupervised spatio-temporal proposals, which provide a set of candidate regions in videos. While normally used exclusively for inference, we show spatio-temporal proposals can also be leveraged during training when guided by a sparse set of point annotations. We introduce an overlap measure between points and spatio-temporal proposals and incorporate them all into a new objective of a Multiple Instance Learning optimization. During inference, we introduce pseudo-points, visual cues from videos, that automatically guide the selection of spatio-temporal proposals. We outline five spatial and one temporal pseudo-point, as well as a measure to best leverage pseudo-points at test time. Experimental evaluation on three action localization datasets shows our pointly-supervised approach (i) is as effective as traditional box-supervision at a fraction of the annotation cost, (ii) is robust to sparse and noisy point annotations, (iii) benefits from pseudo-points during inference, and (iv) outperforms recent weakly-supervised alternatives. This leads us to conclude that points provide a viable alternative to boxes for action localization.
Tasks Action Localization, Multiple Instance Learning, Temporal Localization
Published 2018-05-29
URL http://arxiv.org/abs/1805.11333v2
PDF http://arxiv.org/pdf/1805.11333v2.pdf
PWC https://paperswithcode.com/paper/pointly-supervised-action-localization
Repo
Framework
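
The core measurement is an overlap between point annotations and a proposal. A simplified single-frame version is sketched below; the paper's measure also accounts for proposal size and the temporal dimension:

```python
def point_overlap(points, box):
    """Fraction of annotated points falling inside a proposal box.

    points: iterable of (x, y) annotations for one frame;
    box: (x1, y1, x2, y2) of a spatio-temporal proposal's box in
    that frame. Used to score proposals against sparse point labels.
    """
    x1, y1, x2, y2 = box
    inside = sum(x1 <= px <= x2 and y1 <= py <= y2 for px, py in points)
    return inside / len(points)
```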

Knowledge Integration for Disease Characterization: A Breast Cancer Example

Title Knowledge Integration for Disease Characterization: A Breast Cancer Example
Authors Oshani Seneviratne, Sabbir M. Rashid, Shruthi Chari, James P. McCusker, Kristin P. Bennett, James A. Hendler, Deborah L. McGuinness
Abstract With the rapid advancements in cancer research, the information that is useful for characterizing disease, staging tumors, and creating treatment and survivorship plans has been changing at a pace that creates challenges when physicians try to remain current. One example involves increasing usage of biomarkers when characterizing the pathologic prognostic stage of a breast tumor. We present our semantic technology approach to support cancer characterization and demonstrate it in our end-to-end prototype system that collects the newest breast cancer staging criteria from authoritative oncology manuals to construct an ontology for breast cancer. Using a tool we developed that utilizes this ontology, physician-facing applications can be used to quickly stage a new patient to support identifying risks, treatment options, and monitoring plans based on authoritative and best practice guidelines. Physicians can also re-stage existing patients or patient populations, allowing them to find patients whose stage has changed in a given patient cohort. As new guidelines emerge, using our proposed mechanism, which is grounded by semantic technologies for ingesting new data from staging manuals, we have created an enriched cancer staging ontology that integrates relevant data from several sources with very little human intervention.
Tasks
Published 2018-07-20
URL http://arxiv.org/abs/1807.07991v1
PDF http://arxiv.org/pdf/1807.07991v1.pdf
PWC https://paperswithcode.com/paper/knowledge-integration-for-disease
Repo
Framework