Paper Group ANR 470
Multivariate Dependency Measure based on Copula and Gaussian Kernel
Title | Multivariate Dependency Measure based on Copula and Gaussian Kernel |
Authors | Angshuman Roy, Alok Goswami, C. A. Murthy |
Abstract | We propose a new multivariate dependency measure. It is obtained by considering a Gaussian-kernel-based distance between the copula transform of the given d-dimensional distribution and the uniform copula and then appropriately normalizing it. The resulting measure is shown to satisfy a number of desirable properties. A nonparametric estimate is proposed for this dependency measure and its properties (finite sample as well as asymptotic) are derived. Some comparative studies of the proposed dependency measure estimate with some widely used dependency measure estimates on artificial datasets are included. A nonparametric test of independence between two or more random variables based on this measure is proposed. A comparison of the proposed test with some existing nonparametric multivariate tests for independence is presented. |
Tasks | |
Published | 2017-08-24 |
URL | https://arxiv.org/abs/1708.07485v3 |
https://arxiv.org/pdf/1708.07485v3.pdf | |
PWC | https://paperswithcode.com/paper/multivariate-dependency-measure-based-on |
Repo | |
Framework | |
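The abstract describes the measure only in words; as a rough illustration, the sketch below (plain NumPy/SciPy) reads the "Gaussian kernel based distance" as a kernel MMD-style discrepancy between the empirical copula of a sample and the uniform (independence) copula. The bandwidth, the Monte Carlo estimate of the uniform-copula terms, and the absence of the paper's normalization are all assumptions, not the authors' estimator.

```python
import numpy as np
from scipy.stats import rankdata

def empirical_copula(x):
    """Map each column of an (n, d) sample to pseudo-observations (ranks / n)."""
    n = x.shape[0]
    return np.apply_along_axis(rankdata, 0, x) / n

def gaussian_kernel(a, b, sigma=0.3):
    """Gaussian kernel matrix between the rows of a and the rows of b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def copula_kernel_dependency(x, sigma=0.3, n_uniform=1000, seed=0):
    """Kernel (MMD-style) discrepancy between the empirical copula of x and the
    independence copula, with the uniform-copula terms estimated by Monte Carlo.
    sigma and n_uniform are illustrative choices; the paper's normalization is
    not applied here."""
    rng = np.random.default_rng(seed)
    u = empirical_copula(x)                          # pseudo-observations in (0, 1]^d
    v = rng.uniform(size=(n_uniform, x.shape[1]))    # draws from the uniform copula
    return (gaussian_kernel(u, u, sigma).mean()
            - 2.0 * gaussian_kernel(u, v, sigma).mean()
            + gaussian_kernel(v, v, sigma).mean())
```

Under independence this discrepancy is close to zero, which is what a test of independence built on such a measure exploits after proper normalization.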
Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models
Title | Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models |
Authors | Jiuxiang Gu, Jianfei Cai, Shafiq Joty, Li Niu, Gang Wang |
Abstract | Textual-visual cross-modal retrieval has been a hot research topic in both the computer vision and natural language processing communities. Learning appropriate representations for multi-modal data is crucial for cross-modal retrieval performance. Unlike existing image-text retrieval approaches that embed image-text pairs as single feature vectors in a common representational space, we propose to incorporate generative processes into the cross-modal feature embedding, through which we are able to learn not only the global abstract features but also the local grounded features. Extensive experiments show that our framework matches images and sentences with complex content well, and achieves state-of-the-art cross-modal retrieval results on the MSCOCO dataset. |
Tasks | Cross-Modal Retrieval |
Published | 2017-11-17 |
URL | http://arxiv.org/abs/1711.06420v2 |
http://arxiv.org/pdf/1711.06420v2.pdf | |
PWC | https://paperswithcode.com/paper/look-imagine-and-match-improving-textual |
Repo | |
Framework | |
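As context for the retrieval objective, below is a minimal PyTorch sketch of the hinge-based bidirectional ranking loss that is standard in image-sentence matching. It covers only the cross-modal matching part; the paper's generative ("imagine") branches and its exact loss are not reproduced, and the margin and sum-over-violations reduction are assumptions.

```python
import torch

def bidirectional_ranking_loss(img_emb, txt_emb, margin=0.2):
    """Standard hinge-based triplet ranking loss over a batch of matched
    image/sentence embedding pairs (both (batch, dim), L2-normalized).
    A generic matching objective only, not the paper's full training loss."""
    scores = img_emb @ txt_emb.t()            # cosine similarity matrix
    pos = scores.diag().view(-1, 1)           # similarities of the true pairs
    # image -> sentence violations: a wrong sentence scores within the margin
    cost_s = (margin + scores - pos).clamp(min=0)
    # sentence -> image violations: a wrong image scores within the margin
    cost_im = (margin + scores - pos.t()).clamp(min=0)
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    return cost_s.masked_fill(mask, 0).sum() + cost_im.masked_fill(mask, 0).sum()
```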
Group-based Sparse Representation for Image Compressive Sensing Reconstruction with Non-Convex Regularization
Title | Group-based Sparse Representation for Image Compressive Sensing Reconstruction with Non-Convex Regularization |
Authors | Zhiyuan Zha, Xinggan Zhang, Qiong Wang, Lan Tang, Xin Liu |
Abstract | Patch-based sparse representation modeling has shown great potential in image compressive sensing (CS) reconstruction. However, this model usually suffers from certain limitations, such as the high computational complexity of dictionary learning and the neglect of relationships among similar patches. In this paper, a group-based sparse representation method with non-convex regularization (GSR-NCR) for image CS reconstruction is proposed. In GSR-NCR, the local sparsity and nonlocal self-similarity of images are considered simultaneously in a unified framework. Different from previous methods based on sparsity-promoting convex regularization, we apply a non-convex weighted Lp (0 < p < 1) penalty to the group sparse coefficients of the data matrix, rather than the conventional L1-based regularization. To reduce the computational complexity, instead of learning a dictionary from natural images at high computational cost, we learn a principal component analysis (PCA) based dictionary for each group. Moreover, to make the proposed scheme tractable and robust, we develop an efficient iterative shrinkage/thresholding algorithm to solve the non-convex optimization problem. Experimental results demonstrate that the proposed method outperforms many state-of-the-art techniques for image CS reconstruction. |
Tasks | Compressive Sensing, Dictionary Learning |
Published | 2017-04-24 |
URL | http://arxiv.org/abs/1704.07023v2 |
http://arxiv.org/pdf/1704.07023v2.pdf | |
PWC | https://paperswithcode.com/paper/group-based-sparse-representation-for-image-1 |
Repo | |
Framework | |
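The abstract points to a weighted Lp (0 < p < 1) penalty handled by iterative shrinkage/thresholding. As a rough element-wise illustration (not the paper's exact weighted, group-wise operator), the sketch below solves the scalar Lp proximal problem with a common generalized-shrinkage fixed-point heuristic; inside an ISTA-style loop, z would be the coefficient estimate after a gradient step on the data-fidelity term.

```python
import numpy as np

def lp_shrinkage(z, lam, p=0.5, n_iter=10):
    """Approximately solve min_x 0.5*(x - z)**2 + lam*|x|**p element-wise for
    0 < p < 1 via the fixed-point iteration x <- max(|z| - lam*p*x**(p-1), 0),
    started at x = |z|. lam, p, and n_iter are illustrative values."""
    z = np.asarray(z, dtype=float)
    x = np.abs(z)
    for _ in range(n_iter):
        x = np.maximum(np.abs(z) - lam * p * np.power(x + 1e-12, p - 1.0), 0.0)
    return np.sign(z) * x
```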
Modular Learning Component Attacks: Today’s Reality, Tomorrow’s Challenge
Title | Modular Learning Component Attacks: Today’s Reality, Tomorrow’s Challenge |
Authors | Xinyang Zhang, Yujie Ji, Ting Wang |
Abstract | Many of today’s machine learning (ML) systems are not built from scratch, but are compositions of an array of *modular learning components* (MLCs). The increasing use of MLCs significantly simplifies the ML system development cycles. However, as most MLCs are contributed and maintained by third parties, their lack of standardization and regulation entails profound security implications. In this paper, for the first time, we demonstrate that potentially harmful MLCs pose immense threats to the security of ML systems. We present a broad class of *logic-bomb* attacks in which maliciously crafted MLCs trigger host systems to malfunction in a predictable manner. By empirically studying two state-of-the-art ML systems in the healthcare domain, we explore the feasibility of such attacks. For example, we show that, without prior knowledge about the host ML system, by modifying only 3.3‰ of the MLC’s parameters, each with distortion below $10^{-3}$, the adversary is able to force the misdiagnosis of target victims’ skin cancers with 100% success rate. We provide analytical justification for the success of such attacks, which points to the fundamental characteristics of today’s ML models: high dimensionality, non-linearity, and non-convexity. The issue thus seems fundamental to many ML systems. We further discuss potential countermeasures to mitigate MLC-based attacks and their potential technical challenges. |
Tasks | |
Published | 2017-08-25 |
URL | http://arxiv.org/abs/1708.07807v1 |
http://arxiv.org/pdf/1708.07807v1.pdf | |
PWC | https://paperswithcode.com/paper/modular-learning-component-attacks-todays |
Repo | |
Framework | |
Towards Data Quality Assessment in Online Advertising
Title | Towards Data Quality Assessment in Online Advertising |
Authors | Sahin Cem Geyik, Jianqiang Shen, Shahriar Shariat, Ali Dasdan, Santanu Kolay |
Abstract | In online advertising, our aim is to match the advertisers with the most relevant users to optimize the campaign performance. In the pursuit of achieving this goal, multiple data sources provided by the advertisers or third-party data providers are utilized to choose the set of users according to the advertisers’ targeting criteria. In this paper, we present a framework that can be applied to assess the quality of such data sources at large scale. This framework efficiently evaluates the similarity of a specific data source categorization to that of the ground truth, especially for those cases when the ground truth is accessible only in aggregate, and the user-level information is anonymized or unavailable due to privacy reasons. We propose multiple methodologies within this framework, present some preliminary assessment results, and evaluate how the methodologies compare to each other. We also present two use cases where we can utilize the data quality assessment results: the first use case is targeting specific user categories, and the second one is forecasting the desirable audiences we can reach for an online advertising campaign with pre-set targeting criteria. |
Tasks | |
Published | 2017-11-30 |
URL | http://arxiv.org/abs/1711.11175v1 |
http://arxiv.org/pdf/1711.11175v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-data-quality-assessment-in-online |
Repo | |
Framework | |
Automatic Color Image Segmentation Using a Square Elemental Region-Based Seeded Region Growing and Merging Method
Title | Automatic Color Image Segmentation Using a Square Elemental Region-Based Seeded Region Growing and Merging Method |
Authors | Hisashi Shimodaira |
Abstract | This paper presents an efficient automatic color image segmentation method using a seeded region growing and merging method based on square elemental regions. Our segmentation method consists of three steps: generating seed regions, merging the regions, and applying a pixel-wise boundary determination algorithm to the resultant polygonal regions. The major features of our method are as follows: the use of square elemental regions instead of pixels as the processing unit, a seed generation method based on enhanced gradient values, a seed region growing method exploiting local gradient values, a region merging method using a similarity measure including a homogeneity distance based on Tsallis entropy, and a termination condition of region merging using an estimated desired number of regions. Using square regions as the processing unit substantially reduces the time complexity of the algorithm and makes the performance stable. The experimental results show that our method exhibits stable performance for a variety of natural images, including heavily textured areas, and produces good segmentation results using the same parameter values. The results of our method are fairly comparable to, and in some respects better than, those of existing algorithms. |
Tasks | Semantic Segmentation |
Published | 2017-11-26 |
URL | http://arxiv.org/abs/1711.09352v1 |
http://arxiv.org/pdf/1711.09352v1.pdf | |
PWC | https://paperswithcode.com/paper/automatic-color-image-segmentation-using-a |
Repo | |
Framework | |
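The region-merging similarity is said to include a homogeneity distance based on Tsallis entropy. The snippet below only computes standard Tsallis entropy for a region's normalized color histogram; the entropic index q and the way the value enters the similarity measure are assumptions, as the abstract does not specify them.

```python
import numpy as np

def tsallis_entropy(hist, q=0.8):
    """Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1) of a histogram,
    which recovers Shannon entropy in the limit q -> 1. q=0.8 is illustrative."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)
```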
Outlier Cluster Formation in Spectral Clustering
Title | Outlier Cluster Formation in Spectral Clustering |
Authors | Takuro Ina, Atsushi Hashimoto, Masaaki Iiyama, Hidekazu Kasahara, Mikihiko Mori, Michihiko Minoh |
Abstract | Outlier detection and cluster number estimation are important issues for clustering real data. This paper focuses on spectral clustering, a time-tested clustering method, and reveals its important properties related to outliers. The highlights of this paper are the following two mathematical observations: first, spectral clustering’s intrinsic property of outlier cluster formation, and second, the singularity of an outlier cluster with a valid cluster number. Based on these observations, we designed a function that evaluates clustering and outlier detection results. In experiments, we prepared two scenarios: face clustering in a photo album and person re-identification in a camera network. We confirmed that the proposed method detects outliers and estimates the number of clusters properly in both problems. Our method outperforms state-of-the-art methods in both the 128-dimensional sparse space for face clustering and the 4,096-dimensional non-sparse space for person re-identification. |
Tasks | Outlier Detection, Person Re-Identification |
Published | 2017-03-03 |
URL | http://arxiv.org/abs/1703.01028v1 |
http://arxiv.org/pdf/1703.01028v1.pdf | |
PWC | https://paperswithcode.com/paper/outlier-cluster-formation-in-spectral |
Repo | |
Framework | |
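For reference, here is a compact sketch of the standard normalized spectral clustering pipeline whose outlier-cluster behavior the paper analyzes; the paper's evaluation function for scoring clusterings and choosing the cluster number is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def normalized_spectral_clustering(W, k, seed=0):
    """Ng-Jordan-Weiss-style spectral clustering on a symmetric affinity matrix
    W (n, n): embed points with the k smallest eigenvectors of the normalized
    Laplacian, row-normalize, then run k-means on the embedding."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = np.eye(W.shape[0]) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L_sym)          # eigenvalues in ascending order
    U = vecs[:, :k]
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(U)
```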
Information Theoretic Limits for Linear Prediction with Graph-Structured Sparsity
Title | Information Theoretic Limits for Linear Prediction with Graph-Structured Sparsity |
Authors | Adarsh Barik, Jean Honorio, Mohit Tawarmalani |
Abstract | We analyze the necessary number of samples for sparse vector recovery in a noisy linear prediction setup. This model includes problems such as linear regression and classification. We focus on structured graph models. In particular, we prove that the sufficient number of samples for the weighted graph model proposed by Hegde and others is also necessary. We use Fano’s inequality on well-constructed ensembles as our main tool in establishing information theoretic lower bounds. |
Tasks | |
Published | 2017-01-26 |
URL | http://arxiv.org/abs/1701.07895v2 |
http://arxiv.org/pdf/1701.07895v2.pdf | |
PWC | https://paperswithcode.com/paper/information-theoretic-limits-for-linear |
Repo | |
Framework | |
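For reference, such lower bounds rest on Fano's inequality applied to a carefully constructed ensemble; in its standard form for $M$ hypotheses it reads (the paper's specific ensemble and constants are not reproduced here):

$$
P_e \;\ge\; 1 - \frac{I(\theta; Y) + \log 2}{\log M},
$$

where $\theta$ is drawn uniformly from an ensemble of $M$ candidate sparse vectors, $Y$ is the observed data, and $I(\theta; Y)$ is their mutual information: if the sample size is too small for $I(\theta; Y)$ to grow like $\log M$, every estimator errs with constant probability.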
Convergence Analysis of Optimization Algorithms
Title | Convergence Analysis of Optimization Algorithms |
Authors | HyoungSeok Kim, JiHoon Kang, WooMyoung Park, SukHyun Ko, YoonHo Cho, DaeSung Yu, YoungSook Song, JungWon Choi |
Abstract | The regret bound of an optimization algorithm is one of the basic criteria for evaluating the performance of the given algorithm. By inspecting the differences between the regret bounds of traditional algorithms and adaptive ones, we provide a guide for choosing an optimizer with respect to the given data set and the loss function. For the analysis, we assume that the loss function is convex and its gradient is Lipschitz continuous. |
Tasks | |
Published | 2017-07-06 |
URL | http://arxiv.org/abs/1707.01647v1 |
http://arxiv.org/pdf/1707.01647v1.pdf | |
PWC | https://paperswithcode.com/paper/convergence-analysis-of-optimization |
Repo | |
Framework | |
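For reference, the regret the abstract refers to takes its standard online-optimization form (with $f_t$ the loss at round $t$, $x_t$ the iterate, and $\mathcal{X}$ the feasible set); traditional and adaptive optimizers are then compared through how this quantity grows with $T$:

$$
R(T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x).
$$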
BAM! The Behance Artistic Media Dataset for Recognition Beyond Photography
Title | BAM! The Behance Artistic Media Dataset for Recognition Beyond Photography |
Authors | Michael J. Wilber, Chen Fang, Hailin Jin, Aaron Hertzmann, John Collomosse, Serge Belongie |
Abstract | Computer vision systems are designed to work well within the context of everyday photography. However, artists often render the world around them in ways that do not resemble photographs. Artwork produced by people is not constrained to mimic the physical world, making it more challenging for machines to recognize. This work is a step toward teaching machines how to categorize images in ways that are valuable to humans. First, we collect a large-scale dataset of contemporary artwork from Behance, a website containing millions of portfolios from professional and commercial artists. We annotate Behance imagery with rich attribute labels for content, emotions, and artistic media. Furthermore, we carry out baseline experiments to show the value of this dataset for artistic style prediction, for improving the generality of existing object classifiers, and for the study of visual domain adaptation. We believe our Behance Artistic Media dataset will be a good starting point for researchers wishing to study artistic imagery and relevant problems. |
Tasks | Domain Adaptation |
Published | 2017-04-27 |
URL | http://arxiv.org/abs/1704.08614v2 |
http://arxiv.org/pdf/1704.08614v2.pdf | |
PWC | https://paperswithcode.com/paper/bam-the-behance-artistic-media-dataset-for |
Repo | |
Framework | |
Stacked Deconvolutional Network for Semantic Segmentation
Title | Stacked Deconvolutional Network for Semantic Segmentation |
Authors | Jun Fu, Jing Liu, Yuhang Wang, Hanqing Lu |
Abstract | Recent progress in semantic segmentation has been driven by improving the spatial resolution under Fully Convolutional Networks (FCNs). To address this problem, we propose a Stacked Deconvolutional Network (SDN) for semantic segmentation. In SDN, multiple shallow deconvolutional networks, called SDN units, are stacked one by one to integrate contextual information and guarantee the fine recovery of localization information. Meanwhile, inter-unit and intra-unit connections are designed to assist network training and enhance feature fusion, since the connections improve the flow of information and gradient propagation throughout the network. Besides, hierarchical supervision is applied during the upsampling process of each SDN unit, which guarantees the discrimination of feature representations and benefits the network optimization. We carry out comprehensive experiments and achieve new state-of-the-art results on three datasets: PASCAL VOC 2012, CamVid, and GATECH. In particular, our best model without CRF post-processing achieves an intersection-over-union score of 86.6% on the test set. |
Tasks | Semantic Segmentation |
Published | 2017-08-16 |
URL | http://arxiv.org/abs/1708.04943v1 |
http://arxiv.org/pdf/1708.04943v1.pdf | |
PWC | https://paperswithcode.com/paper/stacked-deconvolutional-network-for-semantic |
Repo | |
Framework | |
Accelerated Reconstruction of Perfusion-Weighted MRI Enforcing Jointly Local and Nonlocal Spatio-temporal Constraints
Title | Accelerated Reconstruction of Perfusion-Weighted MRI Enforcing Jointly Local and Nonlocal Spatio-temporal Constraints |
Authors | Cagdas Ulas, Christine Preibisch, Jonathan Sperl, Thomas Pyka, Jayashree Kalpathy-Cramer, Bjoern Menze |
Abstract | Perfusion-weighted magnetic resonance imaging (MRI) is an imaging technique that allows one to measure tissue perfusion in an organ of interest through the injection of an intravascular paramagnetic contrast agent (CA). Due to a preference for high temporal and spatial resolution in many applications, this modality could significantly benefit from accelerated data acquisitions. In this paper, we specifically address the problem of reconstructing perfusion MR image series from a subset of k-space data. Our proposed approach is motivated by the observation that temporal variations (dynamics) in perfusion imaging often exhibit correlation across different spatial scales. Hence, we propose a model that jointly penalizes the voxel-wise deviations in temporal gradient images obtained based on a baseline, and the patch-wise dissimilarities between the spatio-temporal neighborhoods of the entire image sequence. We validate our method on dynamic susceptibility contrast (DSC)-MRI and dynamic contrast-enhanced (DCE)-MRI brain perfusion datasets acquired from 10 tumor patients in total. We provide extensive analysis of reconstruction performance and perfusion parameter estimation in comparison to state-of-the-art reconstruction methods. Experimental results on clinical datasets demonstrate that our reconstruction model can potentially achieve up to 8-fold acceleration by enabling accurate estimation of perfusion parameters while preserving spatial image details and reconstructing the complete perfusion time-intensity curves (TICs). |
Tasks | |
Published | 2017-08-25 |
URL | http://arxiv.org/abs/1708.07808v1 |
http://arxiv.org/pdf/1708.07808v1.pdf | |
PWC | https://paperswithcode.com/paper/accelerated-reconstruction-of-perfusion |
Repo | |
Framework | |
Stability Analysis of Optimal Adaptive Control using Value Iteration with Approximation Errors
Title | Stability Analysis of Optimal Adaptive Control using Value Iteration with Approximation Errors |
Authors | Ali Heydari |
Abstract | Adaptive optimal control using value iteration initiated from a stabilizing control policy is theoretically analyzed in terms of the stability of the system during the learning stage, without ignoring the effects of approximation errors. This analysis covers the system operated under any single/constant resulting control policy as well as under an evolving/time-varying control policy. A feature of the presented results is providing estimates of the *region of attraction*, so that if the initial condition is within this region, the whole trajectory will remain inside it and hence the function approximation results remain valid. |
Tasks | |
Published | 2017-10-23 |
URL | http://arxiv.org/abs/1710.08530v1 |
http://arxiv.org/pdf/1710.08530v1.pdf | |
PWC | https://paperswithcode.com/paper/stability-analysis-of-optimal-adaptive |
Repo | |
Framework | |
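For context, the value iteration the analysis concerns is, in its standard discrete-time optimal-control form (the function-approximation and error terms studied in the paper are omitted):

$$
V_{k+1}(x) \;=\; \min_{u}\,\big[\,U(x, u) + V_k\big(f(x, u)\big)\,\big],
$$

where $f$ is the system dynamics, $U$ the stage cost, and $V_0$ is obtained from the initial stabilizing policy; the paper asks how the iterates and the resulting controls behave when each $V_k$ is represented only up to an approximation error.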
Taming Wild High Dimensional Text Data with a Fuzzy Lash
Title | Taming Wild High Dimensional Text Data with a Fuzzy Lash |
Authors | Amir Karami |
Abstract | The bag of words (BOW) represents a corpus in a matrix whose elements are the frequencies of words. However, each row in the matrix is a very high-dimensional sparse vector. Dimension reduction (DR) is a popular method to address sparsity and high-dimensionality issues. Among the different strategies for developing DR methods, Unsupervised Feature Transformation (UFT) is a popular strategy that maps all words onto a new basis to represent the BOW. The recent increase of text data and its challenges imply that the DR area still needs new perspectives. Although a wide range of methods based on the UFT strategy has been developed, the fuzzy approach has not been considered for DR based on this strategy. This research investigates the application of fuzzy clustering as a DR method based on the UFT strategy to collapse the BOW matrix and provide a lower-dimensional representation of documents instead of the words in a corpus. The quantitative evaluation shows that fuzzy clustering produces performance and features superior to those of Principal Component Analysis (PCA) and Singular Value Decomposition (SVD), two popular DR methods based on the UFT strategy. |
Tasks | Dimensionality Reduction |
Published | 2017-12-16 |
URL | http://arxiv.org/abs/1712.05997v1 |
http://arxiv.org/pdf/1712.05997v1.pdf | |
PWC | https://paperswithcode.com/paper/taming-wild-high-dimensional-text-data-with-a |
Repo | |
Framework | |
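A minimal sketch of how fuzzy clustering could serve as a UFT-style dimension reduction of a bag-of-words matrix, assuming the words (columns) are fuzzily clustered and each document is re-expressed by its weight on each word cluster. The plain-NumPy fuzzy c-means below, the cluster count, the fuzzifier m, and the final projection are all assumptions, not the paper's exact procedure.

```python
import numpy as np

def fuzzy_cmeans(x, c, m=2.0, n_iter=100, seed=0):
    """Plain-NumPy fuzzy c-means. x: (n_points, n_features). Returns the
    (c, n_points) membership matrix and the (c, n_features) cluster centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.shape[0]))
    u /= u.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)
    return u, centers

# Hypothetical pipeline: bow is an (n_docs, vocab_size) term-frequency matrix.
# u, _ = fuzzy_cmeans(bow.T, c=50)   # membership of each word in each word cluster
# docs_lowdim = bow @ u.T            # (n_docs, 50) reduced document representation
```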
Common Knowledge in a Logic of Gossips
Title | Common Knowledge in a Logic of Gossips |
Authors | Krzysztof R. Apt, Dominik Wojtczak |
Abstract | Gossip protocols aim at arriving, by means of point-to-point or group communications, at a situation in which all the agents know each other’s secrets. Recently a number of authors have studied distributed epistemic gossip protocols. These protocols use as guards formulas from a simple epistemic logic, which makes their analysis and verification substantially easier. We study here common knowledge in the context of such a logic. First, we analyze when it can be reduced to iterated knowledge. Then we show that the semantics and truth for formulas without nested common knowledge operators are decidable. This implies that implementability, partial correctness and termination of distributed epistemic gossip protocols that use non-nested common knowledge operators are decidable as well. Given that common knowledge is equivalent to an infinite conjunction of nested knowledge, these results are non-trivial generalizations of the corresponding decidability results for the original epistemic logic, established in (Apt & Wojtczak, 2016). K. R. Apt & D. Wojtczak (2016): On Decidability of a Logic of Gossips. In Proc. of JELIA 2016, pp. 18-33, doi:10.1007/978-3-319-48758-8_2. |
Tasks | |
Published | 2017-07-27 |
URL | http://arxiv.org/abs/1707.08734v1 |
http://arxiv.org/pdf/1707.08734v1.pdf | |
PWC | https://paperswithcode.com/paper/common-knowledge-in-a-logic-of-gossips |
Repo | |
Framework | |
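For reference, the equivalence the abstract appeals to is the standard epistemic-logic one: common knowledge of $\varphi$ in a group $G$ is the infinite conjunction of all iterations of "everybody in $G$ knows",

$$
C_G\,\varphi \;\equiv\; \bigwedge_{k \ge 1} E_G^{k}\,\varphi, \qquad E_G\,\varphi \;\equiv\; \bigwedge_{a \in G} K_a\,\varphi,
$$

which is why decidability for the fragment with non-nested common knowledge operators is a genuine extension of the earlier results for plain (iterated) knowledge.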