Paper Group ANR 501
Machine Learning with Membership Privacy using Adversarial Regularization. Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication. A novel approach for venue recommendation using cross-domain techniques. On Strong NP-Completeness of Rational Problems. DeepEM: Deep 3D ConvNets With EM For Weakly Supervised Pulmonary …
Machine Learning with Membership Privacy using Adversarial Regularization
Title | Machine Learning with Membership Privacy using Adversarial Regularization |
Authors | Milad Nasr, Reza Shokri, Amir Houmansadr |
Abstract | Machine learning models leak information about the datasets on which they are trained. An adversary can build an algorithm to trace the individual members of a model’s training dataset. As a fundamental inference attack, the adversary aims to distinguish between data points that were part of the model’s training set and any other data points from the same distribution. This is known as the tracing (and also membership inference) attack. In this paper, we focus on such attacks against black-box models, where the adversary can only observe the output of the model, but not its parameters. This is the current setting of machine learning as a service on the Internet. We introduce a privacy mechanism to train machine learning models that provably achieve membership privacy: the model’s predictions on its training data are indistinguishable from its predictions on other data points from the same distribution. We design a strategic mechanism in which the privacy defense anticipates the membership inference attack. The objective is to train a model such that not only does it have the minimum prediction error (high utility), but it is also the most robust model against its corresponding strongest inference attack (high privacy). We formalize this as a min-max game optimization problem, and design an adversarial training algorithm that minimizes the classification loss of the model as well as the maximum gain of the membership inference attack against it. This strategy, which guarantees membership privacy (as prediction indistinguishability), also acts as a strong regularizer and significantly improves the generalization of the model. We evaluate our privacy mechanism on deep neural networks using different benchmark datasets. We show that our min-max strategy can mitigate the risk of membership inference attacks (bringing them close to random guessing) with a negligible cost in terms of the classification error. |
Tasks | Inference Attack |
Published | 2018-07-16 |
URL | http://arxiv.org/abs/1807.05852v1 |
http://arxiv.org/pdf/1807.05852v1.pdf | |
PWC | https://paperswithcode.com/paper/machine-learning-with-membership-privacy |
Repo | |
Framework | |
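The min-max game described above alternates between strengthening the inference attack and training the classifier against it. Below is a minimal sketch of that loop; the synthetic data, toy network sizes and the trade-off weight `attack_lambda` are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of min-max adversarial regularization for membership privacy.
# Synthetic data, network sizes and `attack_lambda` are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n, d, k = 512, 20, 5
x_train, y_train = torch.randn(n, d), torch.randint(0, k, (n,))
x_ref, y_ref = torch.randn(n, d), torch.randint(0, k, (n,))   # non-member reference data

classifier = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, k))
# The attack sees the model's prediction vector and the one-hot label,
# and outputs a membership logit.
attack = nn.Sequential(nn.Linear(2 * k, 64), nn.ReLU(), nn.Linear(64, 1))

opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(attack.parameters(), lr=1e-3)
attack_lambda = 1.0   # privacy/utility trade-off (assumed hyper-parameter)

def attack_logits(x, y):
    probs = F.softmax(classifier(x), dim=1)
    onehot = F.one_hot(y, k).float()
    return attack(torch.cat([probs, onehot], dim=1)).squeeze(1)

def attack_gain():
    # Log-likelihood of correct membership decisions on members vs. non-members.
    member, non_member = attack_logits(x_train, y_train), attack_logits(x_ref, y_ref)
    return F.logsigmoid(member).mean() + F.logsigmoid(-non_member).mean()

for step in range(200):
    # Inner maximization: improve the attack against the current classifier.
    opt_a.zero_grad()
    (-attack_gain()).backward()
    opt_a.step()
    # Outer minimization: classification loss plus lambda times the attack's gain.
    opt_c.zero_grad()
    loss = F.cross_entropy(classifier(x_train), y_train) + attack_lambda * attack_gain()
    loss.backward()
    opt_c.step()
```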
Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication
Title | Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication |
Authors | Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek |
Abstract | Currently, progressively larger deep neural networks are trained on ever growing data corpora. As this trend is only going to increase in the future, distributed training schemes are becoming increasingly relevant. A major issue in distributed training is the limited communication bandwidth between contributing nodes or prohibitive communication cost in general. These challenges become even more pressing as the number of computation nodes increases. To counteract this development, we propose sparse binary compression (SBC), a compression framework that allows for a drastic reduction of communication cost for distributed training. SBC combines existing techniques of communication delay and gradient sparsification with a novel binarization method and optimal weight update encoding to push compression gains to new limits. By doing so, our method also allows us to smoothly trade off gradient sparsity and temporal sparsity to adapt to the requirements of the learning task. Our experiments show that SBC can reduce the upstream communication on a variety of convolutional and recurrent neural network architectures by more than four orders of magnitude without significantly harming the convergence speed in terms of forward-backward passes. For instance, we can train ResNet50 on ImageNet to the baseline accuracy in the same number of iterations, using $\times 3531$ fewer bits, or train it to a $1\%$ lower accuracy using $\times 37208$ fewer bits. In the latter case, the total upstream communication required is cut from 125 terabytes to 3.35 gigabytes for every participating client. |
Tasks | |
Published | 2018-05-22 |
URL | http://arxiv.org/abs/1805.08768v1 |
http://arxiv.org/pdf/1805.08768v1.pdf | |
PWC | https://paperswithcode.com/paper/sparse-binary-compression-towards-distributed |
Repo | |
Framework | |
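The core sparsify-then-binarize step of SBC can be pictured in a few lines. The sketch below is a simplified reconstruction under stated assumptions (a fixed sparsity rate, no communication delay, and the optimal position encoding omitted); it is not the authors' code.

```python
# Simplified sketch of the sparsify-then-binarize step described in the abstract.
import numpy as np

def sparse_binary_compress(residual, sparsity=0.01):
    """Return a compressed update (all kept values share one magnitude) and the new residual."""
    flat = residual.ravel()
    k = max(1, int(sparsity * flat.size))
    top_idx = np.argpartition(np.abs(flat), -k)[-k:]          # top-k entries by magnitude
    top_vals = flat[top_idx]

    pos, neg = top_vals[top_vals > 0], top_vals[top_vals < 0]
    mean_pos = pos.mean() if pos.size else 0.0
    mean_neg = -neg.mean() if neg.size else 0.0

    update = np.zeros_like(flat)
    if mean_pos >= mean_neg:            # keep only the positive part, binarized to its mean
        keep = top_idx[top_vals > 0]
        update[keep] = mean_pos
    else:                               # keep only the negative part
        keep = top_idx[top_vals < 0]
        update[keep] = -mean_neg
    new_residual = flat - update        # whatever was not sent stays in the local residual
    return update.reshape(residual.shape), new_residual.reshape(residual.shape)

grad = np.random.randn(1000)
update, residual = sparse_binary_compress(grad, sparsity=0.01)
print(np.count_nonzero(update), np.unique(update[update != 0]))
```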
A novel approach for venue recommendation using cross-domain techniques
Title | A novel approach for venue recommendation using cross-domain techniques |
Authors | Pablo Sánchez, Alejandro Bellogín |
Abstract | Finding the next venue to be visited by a user in a specific city is an interesting, but challenging, problem. Different techniques have been proposed, combining collaborative, content, social, and geographical signals; however, it is not trivial to decide which technique works best, since this may depend on the data density or the amount of activity logged for each user or item. At the same time, cross-domain strategies have been exploited in the recommender systems literature when dealing with (very) sparse situations, such as those inherently arising when recommendations are produced based on information from a single city. In this paper, we address the problem of venue recommendation from a novel perspective: applying cross-domain recommendation techniques considering each city as a different domain. We perform an experimental comparison of several recommendation techniques in a temporal split under two conditions: single-domain (only information from the target city is considered) and cross-domain (information from many other cities is incorporated into the recommendation algorithm). For the latter, we have explored two strategies to transfer knowledge from one domain to another: testing on the target city while training a model with information from the k cities with the most ratings, or using only the k closest cities. Our results show that, in general, applying cross-domain by proximity increases the performance of the majority of the recommenders in terms of relevance. This is the first work, to the best of our knowledge, where so many domains (eight) are combined in the tourism context with a temporal split, and thus we expect these results to provide readers with an overall picture of what can be achieved in a real-world environment. |
Tasks | Recommendation Systems |
Published | 2018-09-26 |
URL | http://arxiv.org/abs/1809.09864v1 |
http://arxiv.org/pdf/1809.09864v1.pdf | |
PWC | https://paperswithcode.com/paper/a-novel-approach-for-venue-recommendation |
Repo | |
Framework | |
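One of the two transfer strategies in the abstract, using only the k closest cities, amounts to pooling check-ins from nearby domains before training any recommender. A minimal sketch with illustrative city coordinates and toy check-in tuples (the actual recommender is left out):

```python
# Minimal sketch of the "k closest cities" cross-domain strategy.
import math

def haversine(a, b):
    """Great-circle distance (km) between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = math.sin((lat2 - lat1) / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def cross_domain_training_set(target_city, city_coords, checkins_by_city, k=3):
    """Pool the target city's check-ins with those of its k geographically closest cities."""
    others = [c for c in city_coords if c != target_city]
    closest = sorted(others, key=lambda c: haversine(city_coords[target_city], city_coords[c]))[:k]
    pooled = list(checkins_by_city[target_city])
    for city in closest:
        pooled.extend(checkins_by_city[city])
    return pooled, closest

coords = {"NewYork": (40.71, -74.01), "Boston": (42.36, -71.06),
          "Philadelphia": (39.95, -75.17), "Chicago": (41.88, -87.63)}
checkins = {c: [(f"user_{c}", f"venue_{c}", 5.0)] for c in coords}   # toy (user, venue, rating) tuples
pooled, used = cross_domain_training_set("NewYork", coords, checkins, k=2)
print(used)   # the two closest cities pooled with the target city's data
```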
On Strong NP-Completeness of Rational Problems
Title | On Strong NP-Completeness of Rational Problems |
Authors | Dominik Wojtczak |
Abstract | The computational complexity of the partition, 0-1 subset sum, unbounded subset sum, 0-1 knapsack and unbounded knapsack problems and their multiple variants was studied in numerous papers in the past, in all of which the weights and profits were assumed to be integers. We re-examine here the computational complexity of all these problems in the setting where the weights and profits are allowed to be any rational numbers. We show that all of these problems in this setting become strongly NP-complete and, as a result, no pseudo-polynomial algorithm can exist for solving them unless P=NP. Despite this result, we show that they all still admit a fully polynomial-time approximation scheme. |
Tasks | |
Published | 2018-02-26 |
URL | http://arxiv.org/abs/1802.09465v1 |
http://arxiv.org/pdf/1802.09465v1.pdf | |
PWC | https://paperswithcode.com/paper/on-strong-np-completeness-of-rational |
Repo | |
Framework | |
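The positive side of the result, that these problems still admit an FPTAS, can be illustrated with the textbook profit-scaling scheme for 0-1 knapsack, which works unchanged with rational profits and weights because only the scaled profits need to be integers. This is a generic sketch, not the paper's specific construction.

```python
# Profit-scaling FPTAS for 0-1 knapsack with rational profits and weights.
from fractions import Fraction

def knapsack_fptas(profits, weights, capacity, eps=Fraction(1, 10)):
    """Return item indices whose total profit is >= (1 - eps) * OPT."""
    n = len(profits)
    p_max = max(profits)
    scale = eps * p_max / n                       # scaling factor
    scaled = [int(p / scale) for p in profits]    # integer profits after rounding down

    # dp[q] = (minimal weight achieving scaled profit exactly q, chosen items)
    dp = {0: (Fraction(0), [])}
    for i in range(n):
        new_dp = dict(dp)
        for q, (w, items) in dp.items():
            nq, nw = q + scaled[i], w + weights[i]
            if nw <= capacity and (nq not in new_dp or nw < new_dp[nq][0]):
                new_dp[nq] = (nw, items + [i])
        dp = new_dp
    return dp[max(dp)][1]

profits = [Fraction(3, 7), Fraction(5, 11), Fraction(2, 3)]
weights = [Fraction(1, 2), Fraction(2, 3), Fraction(3, 4)]
print(knapsack_fptas(profits, weights, capacity=Fraction(5, 4)))   # -> [0, 2]
```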
DeepEM: Deep 3D ConvNets With EM For Weakly Supervised Pulmonary Nodule Detection
Title | DeepEM: Deep 3D ConvNets With EM For Weakly Supervised Pulmonary Nodule Detection |
Authors | Wentao Zhu, Yeeleng S. Vang, Yufang Huang, Xiaohui Xie |
Abstract | Recently, deep learning has been witnessing widespread adoption in various medical image applications. However, training complex deep neural nets requires large-scale datasets labeled with ground truth, which are often unavailable in many medical image domains. For instance, to train a deep neural net to detect pulmonary nodules in lung computed tomography (CT) images, current practice is to manually label nodule locations and sizes in many CT images to construct a sufficiently large training dataset, which is costly and difficult to scale. On the other hand, electronic medical records (EMR) contain plenty of partial information on the content of each medical image. In this work, we explore how to tap this vast but currently unexplored data source to improve pulmonary nodule detection. We propose DeepEM, a novel deep 3D ConvNet framework augmented with expectation-maximization (EM), to mine weakly supervised labels in EMRs for pulmonary nodule detection. Experimental results show that DeepEM can lead to 1.5% and 3.9% average improvements in free-response receiver operating characteristic (FROC) scores on the LUNA16 and Tianchi datasets, respectively, demonstrating the utility of incomplete information in EMRs for improving deep learning algorithms. (Code: https://github.com/uci-cbcl/DeepEM-for-Weakly-Supervised-Detection.git) |
Tasks | Computed Tomography (CT) |
Published | 2018-05-14 |
URL | http://arxiv.org/abs/1805.05373v3 |
http://arxiv.org/pdf/1805.05373v3.pdf | |
PWC | https://paperswithcode.com/paper/deepem-deep-3d-convnets-with-em-for-weakly |
Repo | |
Framework | |
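The EM idea can be sketched with a toy stand-in for the detector: the EMR provides only a weak label ("a nodule exists somewhere in this scan"), the latent variable is which candidate it is, the E-step forms a posterior over candidates from the current detector scores, and the M-step updates the detector with posterior-weighted pseudo-labels. Everything below (the logistic "detector", synthetic candidate features, and a warm start standing in for a detector pretrained on fully annotated data such as LUNA16) is an illustrative assumption.

```python
# Toy EM sketch for weakly supervised candidate localization.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_cand, d = 60, 8, 5
X = rng.normal(size=(n_scans, n_cand, d))          # candidate features per scan
true_idx = rng.integers(0, n_cand, size=n_scans)   # latent: which candidate is the nodule
for s, j in enumerate(true_idx):
    X[s, j] += 1.5                                  # make the true candidate separable

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w = np.zeros(d)                                     # "detector" parameters

# Warm start on a few fully annotated scans.
n_sup = 10
labels = np.zeros((n_sup, n_cand))
labels[np.arange(n_sup), true_idx[:n_sup]] = 1.0
for _ in range(200):
    s = sigmoid(X[:n_sup] @ w)
    w += 0.5 * ((labels - s)[..., None] * X[:n_sup]).sum(axis=(0, 1)) / n_sup

# EM over the weakly labeled scans.
X_weak, idx_weak = X[n_sup:], true_idx[n_sup:]
for _ in range(20):
    scores = sigmoid(X_weak @ w)
    post = scores / scores.sum(axis=1, keepdims=True)     # E-step: posterior over locations
    grad = ((post - scores)[..., None] * X_weak).sum(axis=(0, 1)) / len(X_weak)
    w += 0.5 * grad                                       # M-step: posterior-weighted update

pred = sigmoid(X_weak @ w).argmax(axis=1)
print("weak-scan localization accuracy:", (pred == idx_weak).mean())
```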
An Empirical Analysis of Proximal Policy Optimization with Kronecker-factored Natural Gradients
Title | An Empirical Analysis of Proximal Policy Optimization with Kronecker-factored Natural Gradients |
Authors | Jiaming Song, Yuhuai Wu |
Abstract | In this technical report, we consider an approach that combines the PPO objective and K-FAC natural gradient optimization, which we call PPOKFAC. We perform a range of empirical analyses on various aspects of the algorithm, such as sample complexity, training speed, and sensitivity to batch size and training epochs. We observe that PPOKFAC is able to outperform PPO in terms of sample complexity and speed in a range of MuJoCo environments, while being scalable in terms of batch size. In spite of this, it seems that adding more epochs is not necessarily helpful for sample efficiency, and PPOKFAC seems to be worse than its A2C counterpart, ACKTR. |
Tasks | |
Published | 2018-01-17 |
URL | http://arxiv.org/abs/1801.05566v1 |
http://arxiv.org/pdf/1801.05566v1.pdf | |
PWC | https://paperswithcode.com/paper/an-empirical-analysis-of-proximal-policy |
Repo | |
Framework | |
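For reference, the PPO clipped surrogate objective that PPOKFAC optimizes looks roughly as follows; the K-FAC preconditioning itself is omitted, and random tensors stand in for a batch of MuJoCo transitions.

```python
# Minimal sketch of the PPO clipped surrogate objective.
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Negative clipped surrogate objective (to be minimized)."""
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

new_lp = torch.randn(64, requires_grad=True)   # log pi_new(a|s) for a batch
old_lp, adv = torch.randn(64), torch.randn(64)
loss = ppo_clip_loss(new_lp, old_lp, adv)
loss.backward()                                # gradients that K-FAC would precondition
print(float(loss))
```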
A Visual Distance for WordNet
Title | A Visual Distance for WordNet |
Authors | Raquel Pérez-Arnal, Armand Vilalta, Dario Garcia-Gasulla, Ulises Cortés, Eduard Ayguadé, Jesus Labarta |
Abstract | Measuring the distance between concepts is an important field of study of Natural Language Processing, as it can be used to improve tasks related to the interpretation of those same concepts. WordNet, which includes a wide variety of concepts associated with words (i.e., synsets), is often used as a source for computing those distances. In this paper, we explore a distance for WordNet synsets based on visual features, instead of lexical ones. For this purpose, we extract the graphic features generated within a deep convolutional neural network trained on ImageNet and use those features to generate a representative of each synset. Based on those representatives, we define a distance measure between synsets, which complements the traditional lexical distances. Finally, we propose some experiments to evaluate its performance and compare it with the current state-of-the-art. |
Tasks | |
Published | 2018-04-24 |
URL | http://arxiv.org/abs/1804.09558v2 |
http://arxiv.org/pdf/1804.09558v2.pdf | |
PWC | https://paperswithcode.com/paper/a-visual-distance-for-wordnet |
Repo | |
Framework | |
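The representative-plus-distance construction can be sketched directly: average the CNN features of a synset's images and compare representatives with a cosine distance. Random vectors stand in here for the ImageNet-trained CNN features.

```python
# Minimal sketch of a visual synset distance from averaged CNN features.
import numpy as np

def synset_representative(image_features):
    """Mean feature vector over all images associated with a synset."""
    return np.mean(image_features, axis=0)

def visual_distance(rep_a, rep_b):
    """Cosine distance between two synset representatives."""
    cos = rep_a @ rep_b / (np.linalg.norm(rep_a) * np.linalg.norm(rep_b))
    return 1.0 - cos

feats_dog = np.random.rand(100, 2048)   # stand-in for pooled deep CNN features
feats_cat = np.random.rand(80, 2048)
print(visual_distance(synset_representative(feats_dog), synset_representative(feats_cat)))
```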
Learning the progression and clinical subtypes of Alzheimer’s disease from longitudinal clinical data
Title | Learning the progression and clinical subtypes of Alzheimer’s disease from longitudinal clinical data |
Authors | Vipul Satone, Rachneet Kaur, Faraz Faghri, Mike A Nalls, Andrew B Singleton, Roy H Campbell |
Abstract | Alzheimer’s disease (AD) is a degenerative brain disease impairing a person’s ability to perform day-to-day activities. The clinical manifestations of Alzheimer’s disease are characterized by heterogeneity in age, disease span, progression rate, and impairment of memory and cognitive abilities. Due to these variabilities, personalized care and treatment planning, as well as patient counseling about individual progression, are limited. Recent developments in machine learning to detect hidden patterns in complex, multi-dimensional datasets provide significant opportunities to address this critical need. In this work, we use unsupervised and supervised machine learning approaches for subtype identification and prediction. We apply machine learning methods to the extensive clinical observations available in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset to identify patient subtypes and to predict disease progression. Our analysis partitions the progression space of Alzheimer’s disease into low, moderate and high disease progression zones. The proposed work will enable early detection and characterization of distinct disease subtypes based on clinical heterogeneity. We anticipate that our models will enable patient counseling, clinical trial design, and ultimately individualized clinical care. |
Tasks | |
Published | 2018-12-03 |
URL | http://arxiv.org/abs/1812.00546v3 |
http://arxiv.org/pdf/1812.00546v3.pdf | |
PWC | https://paperswithcode.com/paper/learning-the-progression-and-clinical |
Repo | |
Framework | |
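The unsupervised part of such an analysis can be sketched with a generic clustering pipeline; synthetic features stand in for ADNI clinical variables, and KMeans stands in for whichever clustering model the authors used.

```python
# Minimal sketch: cluster longitudinal clinical features into three progression groups.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows = patients; columns = e.g. slopes of cognitive scores over visits (synthetic here).
features = np.vstack([rng.normal(loc=m, scale=0.5, size=(100, 4)) for m in (0.0, 1.0, 2.0)])

z = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)
print(np.bincount(labels))   # sizes of the low / moderate / high progression clusters
```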
Symbolic Tensor Neural Networks for Digital Media - from Tensor Processing via BNF Graph Rules to CREAMS Applications
Title | Symbolic Tensor Neural Networks for Digital Media - from Tensor Processing via BNF Graph Rules to CREAMS Applications |
Authors | Wladyslaw Skarbek |
Abstract | This tutorial material on Convolutional Neural Networks (CNN) and their applications in digital media research is based on the concept of Symbolic Tensor Neural Networks. The set of STNN expressions is specified in Backus-Naur Form (BNF), annotated with constraints typical for labeled directed acyclic graphs (DAG). The BNF induction begins from a collection of neural unit symbols with extra (up to five) decoration fields (including tensor depth and sharing fields). The inductive rules provide not only the general graph structure but also specific shortcuts for residual blocks of units. A syntactic mechanism for modularizing network fragments is introduced via user-defined units and their instances. Moreover, dual BNF rules are specified in order to generate the Dual Symbolic Tensor Neural Network (DSTNN). The joint interpretation of STNN and DSTNN provides the correct flow of gradient tensors back-propagated at the training stage. The proposed symbolic representation of CNNs is illustrated for six generic digital media applications (CREAMS): Compression, Recognition, Embedding, Annotation, 3D Modeling for human-computer interfacing, and data Security based on digital media objects. In order to make the CNN description and its gradient flow complete, for all presented applications the symbolic representations of mathematically defined loss/gain functions and gradient flow equations for all core units used are given. The tutorial aims to convince the reader that STNN is not only a convenient symbolic notation for public presentations of CNN-based solutions to CREAMS problems but also a design blueprint with a potential for automatic generation of application source code. |
Tasks | |
Published | 2018-09-18 |
URL | http://arxiv.org/abs/1809.06582v2 |
http://arxiv.org/pdf/1809.06582v2.pdf | |
PWC | https://paperswithcode.com/paper/symbolic-tensor-neural-networks-for-digital |
Repo | |
Framework | |
Jointly Deep Multi-View Learning for Clustering Analysis
Title | Jointly Deep Multi-View Learning for Clustering Analysis |
Authors | Bingqian Lin, Yuan Xie, Yanyun Qu, Cuihua Li, Xiaodan Liang |
Abstract | In this paper, we propose a novel Joint framework for Deep Multi-view Clustering (DMJC), where multiple deep embedded features, a multi-view fusion mechanism and clustering assignments can be learned simultaneously. Our key idea is that the joint learning strategy can sufficiently exploit clustering-friendly multi-view features and useful multi-view complementary information to improve the clustering performance. How to realize the multi-view fusion in such a joint framework is the primary challenge. To do so, we design two variants of deep multi-view joint clustering models under the proposed framework, where multi-view fusion is implemented by two different schemes. The first model, called DMJC-S, performs multi-view fusion in an implicit way via a novel multi-view soft assignment distribution. The second model, termed DMJC-T, defines a novel multi-view auxiliary target distribution to conduct the multi-view fusion explicitly. Both DMJC-S and DMJC-T are optimized under a KL-divergence-based clustering objective. Experiments on six challenging image datasets demonstrate the superiority of both DMJC-S and DMJC-T over single/multi-view baselines and state-of-the-art multi-view clustering methods, which proves the effectiveness of the proposed DMJC framework. To the best of our knowledge, this is the first work to model multi-view clustering in a deep joint framework, and we expect it to provide meaningful insight into unsupervised multi-view learning. |
Tasks | Multi-View Learning |
Published | 2018-08-19 |
URL | http://arxiv.org/abs/1808.06220v2 |
http://arxiv.org/pdf/1808.06220v2.pdf | |
PWC | https://paperswithcode.com/paper/jointly-deep-multi-view-learning-for |
Repo | |
Framework | |
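A KL-divergence clustering objective with an implicit, soft-assignment multi-view fusion, in the spirit of DMJC-S, can be sketched as below. This is a numpy toy with fixed embeddings and centroids; the paper learns embeddings, fusion and cluster centroids jointly.

```python
# Sketch of a KL clustering objective with soft multi-view fusion.
import numpy as np

def soft_assignment(z, centroids, alpha=1.0):
    """Student-t soft assignment (as in deep embedded clustering)."""
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened auxiliary target distribution."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
views = [rng.normal(size=(200, 10)), rng.normal(size=(200, 8))]   # two views of 200 samples
centroids = [rng.normal(size=(3, 10)), rng.normal(size=(3, 8))]

# Implicit fusion: average the per-view soft assignments into one distribution.
q_fused = np.mean([soft_assignment(v, c) for v, c in zip(views, centroids)], axis=0)
p = target_distribution(q_fused)
kl = np.sum(p * np.log(p / q_fused))          # clustering objective KL(P || Q)
print("KL(P||Q) =", kl)
```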
Conceptual Collectives
Title | Conceptual Collectives |
Authors | Robert E. Kent |
Abstract | The notions of formal contexts and concept lattices, although introduced by Wille only ten years ago, already have proven to be of great utility in various applications such as data analysis and knowledge representation. In this paper we give arguments that Wille’s original notion of formal context, although quite appealing in its simplicity, now should be replaced by a more semantic notion. This new notion of formal context entails a modified approach to concept construction. We base our arguments for these new versions of formal context and concept construction upon Wille’s philosophical attitude with reference to the intensional aspect of concepts. We give a brief development of the relational theory of formal contexts and concept construction, demonstrating the equivalence of “concept-lattice construction” of Wille with the well-known “completion by cuts” of MacNeille. Generalization and abstraction of these formal contexts offers a powerful approach to knowledge representation. |
Tasks | |
Published | 2018-10-14 |
URL | http://arxiv.org/abs/1810.07632v1 |
http://arxiv.org/pdf/1810.07632v1.pdf | |
PWC | https://paperswithcode.com/paper/conceptual-collectives |
Repo | |
Framework | |
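For readers unfamiliar with Wille's construction, the following sketch shows the two derivation operators and a brute-force enumeration of the formal concepts of a tiny context, in the classical formulation (not the modified notion of context argued for in the paper).

```python
# Formal concepts (extent, intent) of a small formal context.
from itertools import combinations

objects = ["duck", "eagle", "trout"]
attributes = ["flies", "swims", "has_feathers"]
incidence = {("duck", "flies"), ("duck", "swims"), ("duck", "has_feathers"),
             ("eagle", "flies"), ("eagle", "has_feathers"),
             ("trout", "swims")}

def intent(objs):   # attributes shared by all objects in objs
    return {a for a in attributes if all((o, a) in incidence for o in objs)}

def extent(attrs):  # objects having all attributes in attrs
    return {o for o in objects if all((o, a) in incidence for a in attrs)}

concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        a = intent(set(objs))
        o = extent(a)        # closure: extent of the common intent
        concepts.add((frozenset(o), frozenset(a)))

for o, a in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(o), sorted(a))
```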
SoPhie: An Attentive GAN for Predicting Paths Compliant to Social and Physical Constraints
Title | SoPhie: An Attentive GAN for Predicting Paths Compliant to Social and Physical Constraints |
Authors | Amir Sadeghian, Vineet Kosaraju, Ali Sadeghian, Noriaki Hirose, S. Hamid Rezatofighi, Silvio Savarese |
Abstract | This paper addresses the problem of path prediction for multiple interacting agents in a scene, which is a crucial step for many autonomous platforms such as self-driving cars and social robots. We present SoPhie, an interpretable framework based on a Generative Adversarial Network (GAN), which leverages two sources of information: the path history of all the agents in a scene, and the scene context information, using images of the scene. To predict a future path for an agent, both physical and social information must be leveraged. Previous work has not been successful in jointly modeling physical and social interactions. Our approach blends a social attention mechanism with physical attention that helps the model learn where to look in a large scene and extract the most salient parts of the image relevant to the path. The social attention component, in turn, aggregates information across the different agent interactions and extracts the most important trajectory information from the surrounding neighbors. SoPhie also takes advantage of the GAN to generate more realistic samples and to capture the uncertain nature of the future paths by modeling their distribution. All these mechanisms enable our approach to predict socially and physically plausible paths for the agents and to achieve state-of-the-art performance on several different trajectory forecasting benchmarks. |
Tasks | Self-Driving Cars |
Published | 2018-06-05 |
URL | http://arxiv.org/abs/1806.01482v2 |
http://arxiv.org/pdf/1806.01482v2.pdf | |
PWC | https://paperswithcode.com/paper/sophie-an-attentive-gan-for-predicting-paths |
Repo | |
Framework | |
iMapper: Interaction-guided Joint Scene and Human Motion Mapping from Monocular Videos
Title | iMapper: Interaction-guided Joint Scene and Human Motion Mapping from Monocular Videos |
Authors | Aron Monszpart, Paul Guerrero, Duygu Ceylan, Ersin Yumer, Niloy J. Mitra |
Abstract | A long-standing challenge in scene analysis is the recovery of scene arrangements under moderate to heavy occlusion, directly from monocular video. While the problem remains a subject of active research, concurrent advances have been made in the context of human pose reconstruction from monocular video, including image-space feature point detection and 3D pose recovery. These methods, however, start to fail under moderate to heavy occlusion as the problem becomes severely under-constrained. We approach the problems differently. We observe that people interact similarly in similar scenes. Hence, we exploit the correlation between scene object arrangement and motions performed in that scene in both directions: first, typical motions performed when interacting with objects inform us about possible object arrangements; and second, object arrangements, in turn, constrain the possible motions. We present iMapper, a data-driven method that focuses on identifying human-object interactions, and jointly reasons about objects and human movement over space-time to recover both a plausible scene arrangement and consistent human interactions. We first introduce the notion of characteristic interactions as regions in space-time when an informative human-object interaction happens. This is followed by a novel occlusion-aware matching procedure that searches and aligns such characteristic snapshots from an interaction database to best explain the input monocular video. Through extensive evaluations, both quantitative and qualitative, we demonstrate that iMapper significantly improves performance over both dedicated state-of-the-art scene analysis and 3D human pose recovery approaches, especially under medium to heavy occlusion. |
Tasks | Human-Object Interaction Detection |
Published | 2018-06-20 |
URL | http://arxiv.org/abs/1806.07889v1 |
http://arxiv.org/pdf/1806.07889v1.pdf | |
PWC | https://paperswithcode.com/paper/imapper-interaction-guided-joint-scene-and |
Repo | |
Framework | |
Scale Optimization for Full-Image-CNN Vehicle Detection
Title | Scale Optimization for Full-Image-CNN Vehicle Detection |
Authors | Yang Gao, Shouyan Guo, Kaimin Huang, Jiaxin Chen, Qian Gong, Yang Zou, Tong Bai, Gary Overett |
Abstract | Many state-of-the-art general object detection methods make use of shared full-image convolutional features (as in Faster R-CNN). This achieves a reasonable test-phase computation time while enjoying the discriminative power provided by large Convolutional Neural Network (CNN) models. Such designs excel on benchmarks which contain natural images but which have very unnatural distributions, i.e. they have an unnaturally high frequency of the target classes and a bias towards a “friendly” or “dominant” object scale. In this paper we present a further study of the use and adaptation of the Faster R-CNN object detection method for datasets presenting a natural scale distribution and unbiased real-world object frequency. In particular, we show that better alignment of the detector scale sensitivity to the extant distribution improves vehicle detection performance. We do this both by modifying the selection of Region Proposals and by using more scale-appropriate full-image convolutional features within the CNN model. By selecting better scales in the region proposal input and by combining feature maps through careful design of the convolutional neural network, we improve performance on smaller objects. We significantly increase detection AP for the KITTI dataset car class from 76.3% with our baseline Faster R-CNN detector to 83.6% with our improved detector. |
Tasks | Object Detection |
Published | 2018-02-20 |
URL | http://arxiv.org/abs/1802.06926v1 |
http://arxiv.org/pdf/1802.06926v1.pdf | |
PWC | https://paperswithcode.com/paper/scale-optimization-for-full-image-cnn-vehicle |
Repo | |
Framework | |
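One way to read "better alignment of the detector scale sensitivity to the extant distribution" is to derive the region-proposal anchor scales from the dataset's own object-size statistics rather than hand-picked defaults. A minimal sketch under that interpretation follows; the box sizes are illustrative, not the paper's exact anchor configuration.

```python
# Choose anchor scales from the dataset's ground-truth box-size distribution.
import numpy as np

def anchor_scales_from_dataset(gt_box_heights, percentiles=(10, 30, 50, 70, 90)):
    """Pick anchor scales at chosen percentiles of ground-truth box height."""
    return np.percentile(gt_box_heights, percentiles)

def generate_anchors(scales, aspect_ratios=(0.5, 1.0, 2.0)):
    """Centered (w, h) anchor shapes for every scale/aspect-ratio pair."""
    anchors = []
    for s in scales:
        for r in aspect_ratios:
            h = s * np.sqrt(r)          # h / w = r while h * w = s**2
            w = s / np.sqrt(r)
            anchors.append((w, h))
    return np.array(anchors)

heights = np.random.lognormal(mean=3.2, sigma=0.6, size=5000)  # stand-in for car box heights
print(generate_anchors(anchor_scales_from_dataset(heights)).round(1))
```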
A Transferable Pedestrian Motion Prediction Model for Intersections with Different Geometries
Title | A Transferable Pedestrian Motion Prediction Model for Intersections with Different Geometries |
Authors | Nikita Jaipuria, Golnaz Habibi, Jonathan P. How |
Abstract | This paper presents a novel framework for accurate pedestrian intent prediction at intersections. Given some prior knowledge of the curbside geometry, the presented framework can accurately predict pedestrian trajectories, even in new intersections that it has not been trained on. This is achieved by making use of the contravariant components of trajectories in the curbside coordinate system, which ensures that the transformation of trajectories across intersections is affine, regardless of the curbside geometry. Our method is based on the Augmented Semi Nonnegative Sparse Coding (ASNSC) formulation, which we use as a baseline to show improved prediction performance on real pedestrian datasets collected at two intersections in Cambridge with distinctly different curbside and crosswalk geometries. We demonstrate a 7.2% improvement in prediction accuracy when training and testing on the same intersection. Furthermore, TASNSC trained and tested on different intersections achieves prediction performance comparable to the baseline trained and tested on the same intersection. |
Tasks | Motion Prediction |
Published | 2018-06-25 |
URL | http://arxiv.org/abs/1806.09444v1 |
http://arxiv.org/pdf/1806.09444v1.pdf | |
PWC | https://paperswithcode.com/paper/a-transferable-pedestrian-motion-prediction |
Repo | |
Framework | |
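The curbside-coordinate idea can be sketched as re-expressing world-frame trajectory points as components along the two curb edge directions, so that the same motion pattern lines up across intersections with different geometries. The frame below is a simple oblique linear basis built from a curb corner and two edge directions, an illustrative simplification of the paper's contravariant-component construction.

```python
# Express a trajectory in a curbside coordinate frame.
import numpy as np

def to_curbside_frame(trajectory, corner, edge_u, edge_v):
    """World-frame (x, y) points as components along the curb edge directions."""
    basis = np.column_stack([edge_u, edge_v])                  # columns = curb edge directions
    return np.linalg.solve(basis, (trajectory - corner).T).T   # contravariant components

# Trajectory near an intersection whose curbs meet at a non-right angle.
traj = np.array([[2.0, 1.0], [3.0, 1.5], [4.0, 2.5]])
corner = np.array([1.0, 0.5])
edge_u = np.array([1.0, 0.0])          # direction of one curb edge
edge_v = np.array([0.3, 1.0])          # direction of the other (not orthogonal)
print(to_curbside_frame(traj, corner, edge_u, edge_v))
```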