May 5, 2019

3217 words 16 mins read

Paper Group ANR 470

Deep Blur Mapping: Exploiting High-Level Semantics by Deep Neural Networks. Practical Secure Aggregation for Federated Learning on User-Held Data. Full-Time Supervision based Bidirectional RNN for Factoid Question Answering. Online Feature Selection with Group Structure Analysis. Scalable Pooled Time Series of Big Video Data from the Deep Web. Deep …

Deep Blur Mapping: Exploiting High-Level Semantics by Deep Neural Networks

Title Deep Blur Mapping: Exploiting High-Level Semantics by Deep Neural Networks
Authors Kede Ma, Huan Fu, Tongliang Liu, Zhou Wang, Dacheng Tao
Abstract The human visual system excels at detecting local blur of visual images, but the underlying mechanism is not well understood. Traditional views of blur, such as reduction in energy at high frequencies and loss of phase coherence at localized features, have fundamental limitations. For example, they cannot reliably discriminate flat regions from blurred ones. Here we propose that high-level semantic information is critical in successfully identifying local blur. Therefore, we resort to deep neural networks that are proficient at learning high-level features and propose the first end-to-end local blur mapping algorithm based on a fully convolutional network. By analyzing various architectures with different depths and design philosophies, we empirically show that high-level features of deeper layers play a more important role than low-level features of shallower layers in resolving challenging ambiguities for this task. We test the proposed method on a standard blur detection benchmark and demonstrate that it significantly advances the state of the art (ODS F-score of 0.853). Furthermore, we explore the use of the generated blur maps in three applications: blur region segmentation, blur degree estimation, and blur magnification.
Tasks
Published 2016-12-05
URL http://arxiv.org/abs/1612.01227v2
PDF http://arxiv.org/pdf/1612.01227v2.pdf
PWC https://paperswithcode.com/paper/deep-blur-mapping-exploiting-high-level
Repo
Framework
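
The abstract does not spell out the architecture, but the end-to-end idea is easy to sketch. Below is a minimal, illustrative fully convolutional network in PyTorch that maps an RGB image to a dense blur-probability map; all layer sizes are assumptions, not the authors' design.

```python
# A minimal sketch (not the authors' architecture) of a fully convolutional
# network that maps an RGB image to a per-pixel blur probability map.
import torch
import torch.nn as nn

class BlurMapFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: strided convolutions capture increasingly high-level
        # semantics, which the paper argues are key to resolving
        # blur-vs-flat-region ambiguities.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to input resolution for a dense blur map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))

model = BlurMapFCN()
blur_map = model(torch.randn(1, 3, 256, 256))  # -> (1, 1, 256, 256)
```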

Practical Secure Aggregation for Federated Learning on User-Held Data

Title Practical Secure Aggregation for Federated Learning on User-Held Data
Authors Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, Karn Seth
Abstract Secure Aggregation protocols allow a collection of mutually distrustful parties, each holding a private value, to collaboratively compute the sum of those values without revealing the values themselves. We consider training a deep neural network in the Federated Learning model, using distributed stochastic gradient descent across user-held training data on mobile devices, wherein Secure Aggregation protects each user’s model gradient. We design a novel, communication-efficient Secure Aggregation protocol for high-dimensional data that tolerates up to 1/3 of users failing to complete the protocol. For 16-bit input values, our protocol offers 1.73x communication expansion for $2^{10}$ users and $2^{20}$-dimensional vectors, and 1.98x expansion for $2^{14}$ users and $2^{24}$-dimensional vectors.
Tasks
Published 2016-11-14
URL http://arxiv.org/abs/1611.04482v1
PDF http://arxiv.org/pdf/1611.04482v1.pdf
PWC https://paperswithcode.com/paper/practical-secure-aggregation-for-federated
Repo
Framework
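
The core trick behind Secure Aggregation is worth a sketch: each pair of users derives a shared seed and adds opposite pseudorandom masks to their inputs, so the masks cancel in the sum and the server learns only the aggregate. The toy below illustrates just this cancellation; the real protocol layers secret sharing on top to tolerate the dropped users mentioned in the abstract.

```python
# Toy illustration (not the paper's full protocol) of pairwise masking:
# users u < v agree on a seed and add +PRG(seed) / -PRG(seed) to their
# vectors, so the masks cancel exactly in the aggregate.
import numpy as np

DIM, MOD = 8, 2**16          # vector length; arithmetic over 16-bit integers
users = [0, 1, 2]
x = {u: np.random.randint(0, MOD, DIM) for u in users}   # private inputs

# Pairwise seeds (in practice derived via Diffie-Hellman key agreement).
seed = {(u, v): np.random.randint(2**31) for u in users for v in users if u < v}

def masked(u):
    y = x[u].copy()
    for v in users:
        if v == u:
            continue
        s = seed[(min(u, v), max(u, v))]
        m = np.random.default_rng(s).integers(0, MOD, DIM)
        y = (y + m) % MOD if u < v else (y - m) % MOD
    return y

# The server only ever sees masked vectors, yet recovers the true sum.
total = sum(masked(u) for u in users) % MOD
assert np.array_equal(total, sum(x.values()) % MOD)  # masks cancel exactly
```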

Full-Time Supervision based Bidirectional RNN for Factoid Question Answering

Title Full-Time Supervision based Bidirectional RNN for Factoid Question Answering
Authors Dong Xu, Wu-Jun Li
Abstract Recently, the bidirectional recurrent neural network (BRNN) has been widely used for question answering (QA) tasks with promising performance. However, most existing BRNN models extract the information of questions and answers by directly applying a pooling operation to generate the representation for loss or similarity calculation. Hence, these models do not apply supervision (loss or similarity calculation) at every time step, which discards useful information. In this paper, we propose a novel BRNN model called full-time supervision based BRNN (FTS-BRNN), which applies supervision at every time step. Experiments on the factoid QA task show that our FTS-BRNN outperforms other baselines and achieves state-of-the-art accuracy.
Tasks Question Answering
Published 2016-06-19
URL http://arxiv.org/abs/1606.05854v2
PDF http://arxiv.org/pdf/1606.05854v2.pdf
PWC https://paperswithcode.com/paper/full-time-supervision-based-bidirectional-rnn
Repo
Framework
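
A minimal sketch of the full-time supervision idea, assuming a margin-based similarity loss (the abstract does not specify one): instead of pooling BiRNN outputs once before the loss, compute the loss at every time step and average. The encoder sizes and the cosine similarity below are illustrative assumptions, not the authors' exact model.

```python
# Margin ranking loss supervised at every time step of the answer BiRNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

rnn = nn.GRU(input_size=100, hidden_size=64, bidirectional=True, batch_first=True)

def full_time_loss(q_emb, a_pos_emb, a_neg_emb, margin=0.2):
    q_vec = rnn(q_emb)[0].mean(dim=1)          # question representation
    pos_steps, _ = rnn(a_pos_emb)              # (B, T, 2H): all time steps kept
    neg_steps, _ = rnn(a_neg_emb)
    q_exp = q_vec.unsqueeze(1).expand_as(pos_steps)
    sim_pos = F.cosine_similarity(pos_steps, q_exp, dim=-1)   # (B, T)
    sim_neg = F.cosine_similarity(neg_steps, q_exp, dim=-1)
    # Hinge loss applied per time step, then averaged over time and batch.
    return F.relu(margin - sim_pos + sim_neg).mean()

loss = full_time_loss(torch.randn(4, 20, 100), torch.randn(4, 30, 100),
                      torch.randn(4, 30, 100))
```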

Online Feature Selection with Group Structure Analysis

Title Online Feature Selection with Group Structure Analysis
Authors Jing Wang, Meng Wang, Peipei Li, Luoqi Liu, Zhongqiu Zhao, Xuegang Hu, Xindong Wu
Abstract Online selection of dynamic features has attracted intensive interest in recent years. However, existing online feature selection methods evaluate features individually and ignore the underlying structure of the feature stream. For instance, in image analysis, features are generated in groups which represent color, texture, and other visual information. Simply breaking the group structure in feature selection may degrade performance. Motivated by this fact, we formulate the problem as online group feature selection. The problem assumes that features are generated individually but that there is group structure in the feature stream. To the best of our knowledge, this is the first time that the correlation among the feature stream has been considered in the online feature selection process. To solve this problem, we develop a novel online group feature selection method named OGFS. Our proposed approach consists of two stages: online intra-group selection and online inter-group selection. In the intra-group selection, we design a criterion based on spectral analysis to select discriminative features in each group. In the inter-group selection, we utilize a linear regression model to select an optimal subset. This two-stage procedure continues until there are no more features arriving or some predefined stopping conditions are met. Finally, we apply our method to multiple tasks including image classification and face verification. Extensive empirical studies performed on real-world and benchmark data sets demonstrate that our method outperforms other state-of-the-art online feature selection methods.
Tasks Face Verification, Feature Selection, Image Classification
Published 2016-08-21
URL http://arxiv.org/abs/1608.05889v1
PDF http://arxiv.org/pdf/1608.05889v1.pdf
PWC https://paperswithcode.com/paper/online-feature-selection-with-group-structure
Repo
Framework
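
The two-stage loop is straightforward to sketch. In the stand-in below, a Fisher score approximates the paper's spectral intra-group criterion and Lasso approximates its linear-regression inter-group selection; both substitutions are assumptions made for illustration.

```python
# Schematic OGFS loop: intra-group filtering, then inter-group sparse
# re-selection over everything kept so far. Criteria are stand-ins.
import numpy as np
from sklearn.linear_model import Lasso

def fisher_score(f, y):
    """Between-class vs within-class separation for one feature column."""
    classes = np.unique(y)
    means = np.array([f[y == c].mean() for c in classes])
    within = sum(f[y == c].var() + 1e-12 for c in classes)
    return means.var() / within

selected = []            # indices of features kept so far

def process_group(X, y, group_idx, tau=0.05):
    global selected
    # Stage 1: intra-group selection keeps discriminative features.
    kept = [j for j in group_idx if fisher_score(X[:, j], y) > tau]
    candidates = selected + kept
    if not candidates:
        return
    # Stage 2: inter-group selection via sparse regression over all
    # previously selected features plus the new group's survivors.
    coef = Lasso(alpha=0.01).fit(X[:, candidates], y).coef_
    selected = [f for f, c in zip(candidates, coef) if abs(c) > 1e-6]

# Feature groups arrive as a stream, e.g. color then texture descriptors.
X, y = np.random.randn(200, 30), np.random.randint(0, 2, 200)
X[:, ::7] += y[:, None]                     # plant a few informative features
for group in [range(0, 10), range(10, 20), range(20, 30)]:
    process_group(X, y, list(group))
print(selected)
```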

Scalable Pooled Time Series of Big Video Data from the Deep Web

Title Scalable Pooled Time Series of Big Video Data from the Deep Web
Authors Chris Mattmann, Madhav Sharan
Abstract We contribute a scalable implementation of Ryoo et al.'s Pooled Time Series algorithm from CVPR 2015. The updated algorithm has been evaluated on a large and diverse dataset of approximately 6800 videos collected from a crawl of the deep web related to human trafficking on DARPA’s MEMEX effort. We describe the properties of Pooled Time Series and the motivation for using it to relate videos collected from the deep web. We highlight issues that we found while running Pooled Time Series on larger datasets and discuss solutions for those issues. Our solution centers on re-imagining Pooled Time Series as a Hadoop-based algorithm in which we compute portions of the eventual solution in parallel on large commodity clusters. We demonstrate that our new Hadoop-based algorithm works well on the 6800-video dataset and shares all of the properties described in the CVPR 2015 paper. We suggest avenues of future work in the project.
Tasks Time Series
Published 2016-10-21
URL http://arxiv.org/abs/1610.06669v1
PDF http://arxiv.org/pdf/1610.06669v1.pdf
PWC https://paperswithcode.com/paper/scalable-pooled-time-series-of-big-video-data
Repo
Framework
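
The re-imagining described in the abstract is essentially a map phase over independent video pairs followed by a reduce over similarities. The sketch below uses Python multiprocessing as a stand-in for Hadoop and stubs out the feature extraction; it shows the data-parallel shape of the job, not the production pipeline.

```python
# Data-parallel pairwise video similarity, in the spirit of the paper's
# Hadoop re-formulation of Pooled Time Series.
from itertools import combinations
from multiprocessing import Pool
import numpy as np

def pot_features(video_id):
    # Stub: the real pipeline pools per-frame optical-flow / gradient
    # histograms over time into one descriptor per video.
    rng = np.random.default_rng(video_id)
    return rng.random(256)

def similarity(pair):
    a, b = pair
    fa, fb = pot_features(a), pot_features(b)
    return (a, b, float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb))))

if __name__ == "__main__":
    videos = list(range(100))            # stand-in for ~6800 video IDs
    with Pool() as pool:                 # "map" phase: embarrassingly parallel
        sims = pool.map(similarity, combinations(videos, 2))
    # "reduce" phase: rank the most related video pairs.
    print(sorted(sims, key=lambda t: -t[2])[:5])
```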

Deep Contrast Learning for Salient Object Detection

Title Deep Contrast Learning for Salient Object Detection
Authors Guanbin Li, Yizhou Yu
Abstract Salient object detection has recently witnessed substantial progress due to powerful features extracted using deep convolutional neural networks (CNNs). However, existing CNN-based methods operate at the patch level instead of the pixel level. Resulting saliency maps are typically blurry, especially near the boundary of salient objects. Furthermore, image patches are treated as independent samples even when they are overlapping, giving rise to significant redundancy in computation and storage. In this CVPR 2016 paper, we propose an end-to-end deep contrast network to overcome the aforementioned limitations. Our deep network consists of two complementary components, a pixel-level fully convolutional stream and a segment-wise spatial pooling stream. The first stream directly produces a saliency map with pixel-level accuracy from an input image. The second stream extracts segment-wise features very efficiently, and better models saliency discontinuities along object boundaries. Finally, a fully connected CRF model can be optionally incorporated to improve spatial coherence and contour localization in the fused result from these two streams. Experimental results demonstrate that our deep model significantly improves the state of the art.
Tasks Object Detection, Salient Object Detection
Published 2016-03-07
URL http://arxiv.org/abs/1603.01976v1
PDF http://arxiv.org/pdf/1603.01976v1.pdf
PWC https://paperswithcode.com/paper/deep-contrast-learning-for-salient-object
Repo
Framework
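
A hedged sketch of the two-stream fusion: the pixel-level stream produces a dense map, and a segment-wise stream constrains scores to be uniform within superpixels, which models saliency discontinuities along object boundaries. Here the segment stream simply pools the pixel stream; in the paper it extracts its own features, and the optional CRF is omitted.

```python
# Illustrative fusion of a pixel-level saliency stream with segment-wise
# pooling over superpixels. All inputs below are stand-ins.
import torch

def segment_pool(feat, segments):
    """Average a (H, W) saliency map within each superpixel label."""
    out = torch.zeros_like(feat)
    for s in segments.unique():
        mask = segments == s
        out[mask] = feat[mask].mean()      # one score per segment
    return out

pixel_stream = torch.rand(64, 64)          # stand-in for FCN stream output
segments = torch.randint(0, 50, (64, 64))  # stand-in superpixel labels
fused = 0.5 * pixel_stream + 0.5 * segment_pool(pixel_stream, segments)
```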

An improved analysis of the ER-SpUD dictionary learning algorithm

Title An improved analysis of the ER-SpUD dictionary learning algorithm
Authors Jarosław Błasiok, Jelani Nelson
Abstract In “dictionary learning” we observe $Y = AX + E$ for some $Y\in\mathbb{R}^{n\times p}$, $A \in\mathbb{R}^{m\times n}$, and $X\in\mathbb{R}^{m\times p}$. The matrix $Y$ is observed, and $A, X, E$ are unknown. Here $E$ is “noise” of small norm, and $X$ is column-wise sparse. The matrix $A$ is referred to as a {\em dictionary}, and its columns as {\em atoms}. Then, given some small number $p$ of samples, i.e.\ columns of $Y$, the goal is to learn the dictionary $A$ up to small error, as well as $X$. The motivation is that in many applications data is expected to be sparse when represented by atoms in the “right” dictionary $A$ (e.g.\ images in the Haar wavelet basis), and the goal is to learn $A$ from the data to then use it for other applications. Recently, [SWW12] proposed the dictionary learning algorithm ER-SpUD with provable guarantees when $E = 0$ and $m = n$. They showed that if $X$ has independent entries with an expected $s$ non-zeroes per column for $1 \lesssim s \lesssim \sqrt{n}$, and with non-zero entries being subgaussian, then for $p\gtrsim n^2\log^2 n$, with high probability ER-SpUD outputs matrices $A', X'$ which equal $A, X$ up to permuting and scaling columns (resp.\ rows) of $A$ (resp.\ $X$). They conjectured that $p\gtrsim n\log n$ suffices, which they showed was information-theoretically necessary for {\em any} algorithm to succeed when $s \simeq 1$. Significant progress was later obtained in [LV15]. We show that for a slight variant of ER-SpUD, $p\gtrsim n\log(n/\delta)$ samples suffice for successful recovery with probability $1-\delta$. We also show that for the unmodified ER-SpUD, $p\gtrsim n^{1.99}$ samples are required even to learn $A, X$ with polynomially small success probability. This resolves the main conjecture of [SWW12], and contradicts the main result of [LV15], which claimed that $p\gtrsim n\log^4 n$ guarantees success with high probability.
Tasks Dictionary Learning
Published 2016-02-18
URL http://arxiv.org/abs/1602.05719v1
PDF http://arxiv.org/pdf/1602.05719v1.pdf
PWC https://paperswithcode.com/paper/an-improved-analysis-of-the-er-spud
Repo
Framework
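
For context, the core ER-SpUD step from [SWW12] can be posed as a linear program: given a constraint vector $b$ built from observed samples, solve $\min_w ||w^T Y||_1$ subject to $b^T w = 1$; the minimizer's $w^T Y$ is then a candidate sparse row of $X$. The sketch below encodes that LP with scipy in the noiseless, $m = n$ setting. It is a didactic reconstruction under these assumptions, not the paper's implementation.

```python
# One ER-SpUD-style LP step: recover a candidate sparse row of X from Y = AX.
import numpy as np
from scipy.optimize import linprog

def er_spud_row(Y, b):
    n, p = Y.shape
    # Variables z = [w (n), t (p)]; minimize sum(t) s.t. -t <= Y^T w <= t
    # (the standard LP encoding of the L1 objective) and b^T w = 1.
    c = np.concatenate([np.zeros(n), np.ones(p)])
    A_ub = np.block([[Y.T, -np.eye(p)], [-Y.T, -np.eye(p)]])
    b_ub = np.zeros(2 * p)
    A_eq = np.concatenate([b, np.zeros(p)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] * n + [(0, None)] * p)
    w = res.x[:n]
    return w @ Y           # candidate sparse row of X (up to scale)

n, p, s = 8, 400, 2
A = np.random.randn(n, n)
X = np.random.randn(n, p) * (np.random.rand(n, p) < s / n)   # sparse rows
Y = A @ X
row = er_spud_row(Y, Y[:, 0])   # constraint vector from one observed sample
```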

A Convex Program for Mixed Linear Regression with a Recovery Guarantee for Well-Separated Data

Title A Convex Program for Mixed Linear Regression with a Recovery Guarantee for Well-Separated Data
Authors Paul Hand, Babhru Joshi
Abstract We introduce a convex approach for mixed linear regression over $d$ features. This approach is a second-order cone program, based on L1 minimization, which assigns an estimated regression coefficient in $\mathbb{R}^{d}$ to each data point. These estimates can then be clustered using, for example, $k$-means. For problems with two or more mixture classes, we prove that the convex program exactly recovers all of the mixture components in the noiseless setting under technical conditions that include a well-separation assumption on the data. Under these assumptions, recovery is possible if each class has at least $d$ independent measurements. We also explore an iteratively reweighted least squares implementation of this method on real and synthetic data.
Tasks
Published 2016-12-19
URL http://arxiv.org/abs/1612.06067v2
PDF http://arxiv.org/pdf/1612.06067v2.pdf
PWC https://paperswithcode.com/paper/a-convex-program-for-mixed-linear-regression
Repo
Framework
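
One plausible instantiation of the recipe, sketched with cvxpy: give every data point its own coefficient vector, penalize pairwise differences (a second-order cone objective) subject to exact interpolation, and cluster the resulting per-point coefficients with $k$-means. The exact objective and constraints below are assumptions, not the paper's program.

```python
# Per-point coefficients, a convex pairwise-difference objective, then k-means.
from itertools import combinations
import cvxpy as cp
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
d, n = 2, 30
true = [np.array([1.0, -1.0]), np.array([-2.0, 0.5])]        # two mixtures
A = rng.normal(size=(n, d))
y = np.array([A[i] @ true[i % 2] for i in range(n)])         # noiseless data

B = cp.Variable((n, d))                    # one coefficient vector per point
# Pairwise L2 differences encourage the beta_i to collapse onto few values.
obj = sum(cp.norm(B[i] - B[j], 2) for i, j in combinations(range(n), 2))
prob = cp.Problem(cp.Minimize(obj),
                  [cp.sum(cp.multiply(A, B), axis=1) == y])  # <a_i, b_i> = y_i
prob.solve()

labels = KMeans(n_clusters=2, n_init=10).fit(B.value).labels_
```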

Why is Compiling Lifted Inference into a Low-Level Language so Effective?

Title Why is Compiling Lifted Inference into a Low-Level Language so Effective?
Authors Seyed Mehran Kazemi, David Poole
Abstract First-order knowledge compilation techniques have proven efficient for lifted inference. They compile a relational probability model into a target circuit on which many inference queries can be answered efficiently. Early methods used data structures as their target circuit. In our KR-2016 paper, we showed that compiling to a low-level program instead of a data structure offers orders of magnitude speedup, resulting in the state-of-the-art lifted inference technique. In this paper, we conduct experiments to address two questions regarding our KR-2016 results: (1) does the speedup come from more efficient compilation or from more efficient reasoning with the target circuit? and (2) why are low-level programs more efficient target circuits than data structures?
Tasks
Published 2016-06-14
URL http://arxiv.org/abs/1606.04512v1
PDF http://arxiv.org/pdf/1606.04512v1.pdf
PWC https://paperswithcode.com/paper/why-is-compiling-lifted-inference-into-a-low
Repo
Framework
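
A toy way to see the question the paper asks: evaluating a circuit by walking a data structure pays interpretation overhead at every node, while compiling it to flat code does not. The example below compiles a tiny arithmetic circuit to a Python lambda; the paper targets lifted-inference circuits compiled to low-level programs, so this is only an analogy.

```python
# Interpreting a circuit data structure vs. compiling it to flat code.
import timeit

# Target circuit as a data structure: ("*", left, right) / ("+", ...) / leaf.
circuit = ("+", ("*", 0.3, 0.7), ("*", 0.7, 0.2))

def interpret(node):
    if not isinstance(node, tuple):
        return node
    op, l, r = node
    lv, rv = interpret(l), interpret(r)
    return lv * rv if op == "*" else lv + rv

def compile_circuit(circ):
    def emit(node):
        if not isinstance(node, tuple):
            return repr(node)
        op, l, r = node
        return f"({emit(l)} {op} {emit(r)})"
    return eval(f"lambda: {emit(circ)}")    # flat expression, no tree walk

compiled = compile_circuit(circuit)
assert abs(interpret(circuit) - compiled()) < 1e-12
print(timeit.timeit(lambda: interpret(circuit), number=100_000),
      timeit.timeit(compiled, number=100_000))  # compiled avoids dispatch
```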

Identification and classification of TCM syndrome types among patients with vascular mild cognitive impairment using latent tree analysis

Title Identification and classification of TCM syndrome types among patients with vascular mild cognitive impairment using latent tree analysis
Authors Chen Fu, Nevin L. Zhang, Bao Xin Chen, Zhou Rong Chen, Xiang Lan Jin, Rong Juan Guo, Zhi Gang Chen, Yun Ling Zhang
Abstract Objective: To treat patients with vascular mild cognitive impairment (VMCI) using TCM, it is necessary to classify the patients into TCM syndrome types and to apply different treatments to different types. We investigate how to properly carry out the classification using a novel data-driven method known as latent tree analysis. Method: A cross-sectional survey on VMCI was carried out in several regions in northern China from 2008 to 2011, which resulted in a data set that involves 803 patients and 93 symptoms. Latent tree analysis was performed on the data to reveal symptom co-occurrence patterns, and the patients were partitioned into clusters in multiple ways based on the patterns. The patient clusters were matched up with syndrome types, and population statistics of the clusters were used to quantify the syndrome types and to establish classification rules. Results: Eight syndrome types are identified: Qi Deficiency, Qi Stagnation, Blood Deficiency, Blood Stasis, Phlegm-Dampness, Fire-Heat, Yang Deficiency, and Yin Deficiency. The prevalence and symptom occurrence characteristics of each syndrome type are determined. Quantitative classification rules are established for determining whether a patient belongs to each of the syndrome types. Conclusions: A solution for the TCM syndrome classification problem associated with VMCI is established based on the latent tree analysis of unlabeled symptom survey data. The results can be used as a reference in clinical practice to improve the quality of syndrome differentiation and to reduce diagnostic variance across physicians. They can also be used for patient selection in research projects aimed at finding biomarkers for the syndrome types and in randomized controlled trials aimed at determining the efficacy of TCM treatments of VMCI.
Tasks
Published 2016-01-26
URL http://arxiv.org/abs/1601.06923v2
PDF http://arxiv.org/pdf/1601.06923v2.pdf
PWC https://paperswithcode.com/paper/identification-and-classification-of-tcm
Repo
Framework

Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network

Title Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network
Authors Diederik Paul Moeys, Federico Corradi, Emmett Kerr, Philip Vance, Gautham Das, Daniel Neil, Dermot Kerr, Tobi Delbruck
Abstract This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor “frames” that consist of a constant number of DAVIS ON and OFF events. The network is thus “data driven” at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center, and non-visible. After off-line training on labeled data, the network is deployed on board the Summit XL robot, which runs jAER and receives steering directions in real time. Successful results on closed-loop trials, with accuracies up to 87% or 92% depending on the evaluation criteria, are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing.
Tasks
Published 2016-06-30
URL http://arxiv.org/abs/1606.09433v1
PDF http://arxiv.org/pdf/1606.09433v1.pdf
PWC https://paperswithcode.com/paper/steering-a-predator-robot-using-a-mixed
Repo
Framework
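
A small sketch of the constant-event-count "DVS frame" idea from the abstract: accumulate a fixed budget of ON/OFF events into a 2D histogram, so the effective frame rate rises and falls with scene activity. The event tuple format and the 5000-event budget are assumptions; the resolution matches the DAVIS (240x180).

```python
# Build "frames" from a fixed number of DVS events rather than a fixed clock.
import numpy as np

W, H, EVENTS_PER_FRAME = 240, 180, 5000

def event_frames(events):
    """Yield normalized frames, each built from a fixed event budget."""
    frame, count = np.zeros((H, W), dtype=np.float32), 0
    for x, y, pol in events:
        frame[y, x] += 1.0 if pol else -1.0   # ON adds, OFF subtracts
        count += 1
        if count == EVENTS_PER_FRAME:         # constant event count per frame
            yield frame / max(1.0, np.abs(frame).max())
            frame, count = np.zeros((H, W), dtype=np.float32), 0

# Synthetic event stream stand-in: (x, y, polarity) tuples.
rng = np.random.default_rng(0)
stream = zip(rng.integers(0, W, 20000), rng.integers(0, H, 20000),
             rng.integers(0, 2, 20000))
frames = list(event_frames(stream))           # 4 frames from 20k events
```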

Traversing Environments Using Possibility Graphs for Humanoid Robots

Title Traversing Environments Using Possibility Graphs for Humanoid Robots
Authors Michael X. Grey, Aaron D. Ames, C. Karen Liu
Abstract Locomotion for legged robots poses considerable challenges when confronted by obstacles and adverse environments. Footstep planners are typically only designed for one mode of locomotion, but traversing unfavorable environments may require several forms of locomotion to be sequenced together, such as walking, crawling, and jumping. Multi-modal motion planners can be used to address some of these problems, but existing implementations tend to be time-consuming and are limited to quasi-static actions. This paper presents a motion planning method to traverse complex environments using multiple categories of actions. We introduce the concept of the “Possibility Graph”, which uses high-level approximations of constraint manifolds to rapidly explore the “possibility” of actions, thereby allowing lower-level single-action motion planners to be utilized more efficiently. We show that the Possibility Graph can quickly find paths through several different challenging environments which require various combinations of actions in order to traverse.
Tasks Legged Robots, Motion Planning
Published 2016-08-12
URL http://arxiv.org/abs/1608.03845v1
PDF http://arxiv.org/pdf/1608.03845v1.pdf
PWC https://paperswithcode.com/paper/traversing-environments-using-possibility
Repo
Framework
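
A schematic reading of the Possibility Graph, under heavy assumptions: explore with cheap, optimistic feasibility checks that approximate each action's constraint manifold, and reserve the expensive single-action planner for edges on a candidate path. The 1-D world and the checks below are purely illustrative.

```python
# Cheap "possibility" checks guide search; expensive planning is deferred.
from collections import deque

def cheap_check(a, b, mode):
    # High-level approximation of a constraint manifold, e.g. "jumping can
    # cross a one-cell gap"; fast but optimistic.
    return abs(a - b) <= (2 if mode == "jump" else 1)

def expensive_plan(a, b, mode):
    # Stand-in for a full single-action motion planner.
    return cheap_check(a, b, mode)

def possibility_search(start, goal, nodes, modes=("walk", "crawl", "jump")):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        node, path = frontier.popleft()
        if node == goal:
            # Validate only the edges on the candidate path.
            return path if all(expensive_plan(a, b, m) for a, b, m in path) else None
        for nxt in nodes:
            for m in modes:
                if nxt not in seen and cheap_check(node, nxt, m):
                    seen.add(nxt)
                    frontier.append((nxt, path + [(node, nxt, m)]))
    return None

print(possibility_search(0, 6, nodes=range(7)))
```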

Tensor clustering with algebraic constraints gives interpretable groups of crosstalk mechanisms in breast cancer

Title Tensor clustering with algebraic constraints gives interpretable groups of crosstalk mechanisms in breast cancer
Authors Anna Seigal, Mariano Beguerisse-Díaz, Birgit Schoeberl, Mario Niepel, Heather A. Harrington
Abstract We introduce a tensor-based clustering method to extract sparse, low-dimensional structure from high-dimensional, multi-indexed datasets. This framework is designed to enable detection of clusters of data in the presence of structural requirements which we encode as algebraic constraints in a linear program. Our clustering method is general and can be tailored to a variety of applications in science and industry. We illustrate our method on a collection of experiments measuring the response of genetically diverse breast cancer cell lines to an array of ligands. Each experiment consists of a cell line-ligand combination, and contains time-course measurements of the early-signalling kinases MAPK and AKT at two different ligand dose levels. By imposing appropriate structural constraints and respecting the multi-indexed structure of the data, the analysis of clusters can be optimized for biological interpretation and therapeutic understanding. We then perform a systematic, large-scale exploration of mechanistic models of MAPK-AKT crosstalk for each cluster. This analysis allows us to quantify the heterogeneity of breast cancer cell subtypes, and leads to hypotheses about the signalling mechanisms that mediate the response of the cell lines to ligands.
Tasks
Published 2016-12-24
URL http://arxiv.org/abs/1612.08116v3
PDF http://arxiv.org/pdf/1612.08116v3.pdf
PWC https://paperswithcode.com/paper/tensor-clustering-with-algebraic-constraints
Repo
Framework

COCO: The Experimental Procedure

Title COCO: The Experimental Procedure
Authors Nikolaus Hansen, Tea Tusar, Olaf Mersmann, Anne Auger, Dimo Brockhoff
Abstract We present a budget-free experimental setup and procedure for benchmarking numerical optimization algorithms in a black-box scenario. This procedure can be applied with the COCO benchmarking platform. We describe initialization of and input to the algorithm and touch upon the relevance of termination and restarts.
Tasks
Published 2016-03-29
URL http://arxiv.org/abs/1603.08776v2
PDF http://arxiv.org/pdf/1603.08776v2.pdf
PWC https://paperswithcode.com/paper/coco-the-experimental-procedure
Repo
Framework

Personality Traits and Echo Chambers on Facebook

Title Personality Traits and Echo Chambers on Facebook
Authors Alessandro Bessi
Abstract In online social networks, users tend to select information that adheres to their system of beliefs and to form polarized groups of like-minded people. Polarization, as well as its effects on online social interactions, has been extensively investigated. Still, the relation between group formation and personality traits remains unclear. A better understanding of the cognitive and psychological determinants of online social dynamics might help to design more efficient communication strategies and to counter the digital misinformation threat. In this work, we focus on users commenting on posts published by US Facebook pages supporting scientific and conspiracy-like narratives, and we classify the personality traits of those users according to their online behavior. We show that different and conflicting communities are populated by users with similar psychological profiles, and that the dominant personality model is the same in both scientific and conspiracy echo chambers. Moreover, we observe that permanence within echo chambers slightly shapes users’ psychological profiles. Our results suggest that the presence of specific personality traits in individuals leads to their considerable involvement in supporting narratives inside virtual echo chambers.
Tasks
Published 2016-06-15
URL http://arxiv.org/abs/1606.04721v1
PDF http://arxiv.org/pdf/1606.04721v1.pdf
PWC https://paperswithcode.com/paper/personality-traits-and-echo-chambers-on
Repo
Framework