October 17, 2019

3094 words 15 mins read

Paper Group ANR 837


Bucket Renormalization for Approximate Inference. $\ell_0$-Motivated Low-Rank Sparse Subspace Clustering. CPMetric: Deep Siamese Networks for Learning Distances Between Structured Preferences. Kernel Recursive ABC: Point Estimation with Intractable Likelihood. Supervised Fuzzy Partitioning. Deep Barcodes for Fast Retrieval of Histopathology Scans. …

Bucket Renormalization for Approximate Inference

Title Bucket Renormalization for Approximate Inference
Authors Sungsoo Ahn, Michael Chertkov, Adrian Weller, Jinwoo Shin
Abstract Probabilistic graphical models are a key tool in machine learning applications. Computing the partition function, i.e., normalizing constant, is a fundamental task of statistical inference but it is generally computationally intractable, leading to extensive study of approximation methods. Iterative variational methods are a popular and successful family of approaches. However, even state of the art variational methods can return poor results or fail to converge on difficult instances. In this paper, we instead consider computing the partition function via sequential summation over variables. We develop robust approximate algorithms by combining ideas from mini-bucket elimination with tensor network and renormalization group methods from statistical physics. The resulting “convergence-free” methods show good empirical performance on both synthetic and real-world benchmark models, even for difficult instances.
Tasks
Published 2018-03-14
URL http://arxiv.org/abs/1803.05104v3
PDF http://arxiv.org/pdf/1803.05104v3.pdf
PWC https://paperswithcode.com/paper/bucket-renormalization-for-approximate
Repo
Framework
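
The sequential summation the abstract refers to can be made concrete on a toy model. Below is a minimal Python sketch (not the authors' code) that computes the exact partition function of a small Ising chain by eliminating one spin at a time; bucket renormalization approximates exactly this kind of elimination when the intermediate messages grow too large to store:

```python
import numpy as np

def partition_chain(J, h):
    """Partition function of a 1-D Ising chain, computed by summing out
    spins sequentially (exact variable elimination). J[i] couples spin i
    to spin i+1; h[i] is the field on spin i."""
    spins = np.array([-1.0, 1.0])
    msg = np.ones(2)  # message over the current spin
    for i in range(len(h) - 1):
        # absorb the field on spin i and its coupling to spin i+1,
        # then sum spin i out
        msg = np.array([
            np.sum(msg * np.exp(h[i] * spins + J[i] * spins * s_next))
            for s_next in spins
        ])
    # absorb the field on the last spin and sum it out
    return float(np.sum(msg * np.exp(h[-1] * spins)))
```

For a chain this small the elimination is exact and agrees with brute-force enumeration over all spin configurations.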

$\ell_0$-Motivated Low-Rank Sparse Subspace Clustering

Title $\ell_0$-Motivated Low-Rank Sparse Subspace Clustering
Authors Maria Brbić, Ivica Kopriva
Abstract In many applications, high-dimensional data points can be well represented by low-dimensional subspaces. To identify the subspaces, it is important to capture the global and local structure of the data, which is achieved by imposing low-rank and sparsity constraints on the data representation matrix. In low-rank sparse subspace clustering (LRSSC), the nuclear and $\ell_1$ norms are used to measure rank and sparsity. However, the use of nuclear and $\ell_1$ norms leads to an overpenalized problem that only approximates the original one. In this paper, we propose two $\ell_0$ quasi-norm based regularizations. First, the paper presents a regularization based on the multivariate generalization of the minimax-concave penalty (GMC-LRSSC), which contains the global minimizers of the $\ell_0$ quasi-norm regularized objective. Afterward, we introduce the Schatten-0 ($S_0$) and $\ell_0$ regularized objective and approximate the proximal map of the joint solution using a proximal average method ($S_0/\ell_0$-LRSSC). The resulting nonconvex optimization problems are solved using the alternating direction method of multipliers, with convergence conditions established for both algorithms. Results obtained on synthetic and four real-world datasets show the effectiveness of GMC-LRSSC and $S_0/\ell_0$-LRSSC compared to state-of-the-art methods.
Tasks
Published 2018-12-17
URL http://arxiv.org/abs/1812.06580v1
PDF http://arxiv.org/pdf/1812.06580v1.pdf
PWC https://paperswithcode.com/paper/ell_0-motivated-low-rank-sparse-subspace
Repo
Framework
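
The building blocks of the $S_0/\ell_0$ regularization can be sketched directly: the proximal map of the $\ell_0$ quasi-norm is entrywise hard thresholding, the Schatten-0 prox hard-thresholds singular values, and the proximal average blends the two. The threshold form and weight `w` below are illustrative, not the paper's exact parameterization:

```python
import numpy as np

def prox_l0(C, lam):
    # Hard thresholding: the proximal map of the l0 quasi-norm keeps an
    # entry only when keeping it beats the penalty for a nonzero.
    out = C.copy()
    out[C**2 <= 2 * lam] = 0.0
    return out

def prox_s0(C, lam):
    # Schatten-0 analogue: hard-threshold the singular values instead.
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    s[s**2 <= 2 * lam] = 0.0
    return (U * s) @ Vt

def proximal_average(C, lam, w=0.5):
    # Approximate the prox of the joint S0 + l0 penalty by a weighted
    # average of the two individual proximal maps.
    return w * prox_s0(C, lam) + (1 - w) * prox_l0(C, lam)
```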

CPMetric: Deep Siamese Networks for Learning Distances Between Structured Preferences

Title CPMetric: Deep Siamese Networks for Learning Distances Between Structured Preferences
Authors Andrea Loreggia, Nicholas Mattei, Francesca Rossi, K. Brent Venable
Abstract Preferences are central to decision making by both machines and humans. Representing, learning, and reasoning with preferences is an important area of study both within computer science and across the sciences. When working with preferences it is necessary to understand and compute the distance between sets of objects, e.g., the preferences of a user and the descriptions of objects to be recommended. We present CPDist, a novel neural network to address the problem of learning to measure the distance between structured preference representations. We use the popular CP-net formalism to represent preferences and then leverage deep neural networks to learn a recently proposed metric function that is computationally hard to compute directly. CPDist is a novel metric learning approach based on deep siamese networks which learn the Kendall tau distance between the partial orders induced by compact preference representations. We find that CPDist is able to learn the distance function with high accuracy and outperforms existing approximation algorithms on both the regression and classification tasks while using less computation time. Performance remains good even when CPDist is trained with only a small number of samples compared to the dimension of the solution space, indicating that the network generalizes well.
Tasks Decision Making, Metric Learning
Published 2018-09-21
URL https://arxiv.org/abs/1809.08350v2
PDF https://arxiv.org/pdf/1809.08350v2.pdf
PWC https://paperswithcode.com/paper/cpdist-deep-siamese-networks-for-learning
Repo
Framework
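
The metric the siamese network learns to approximate is the Kendall tau distance between orders. For reference, here is the exact pairwise-disagreement computation in Python (item names are hypothetical; for the partial orders induced by CP-nets this exact computation is what becomes expensive at scale):

```python
from itertools import combinations

def kendall_tau_distance(a, b):
    """Count pairwise disagreements between two rankings.
    `a` and `b` map each item to its rank position."""
    discordant = 0
    for x, y in combinations(list(a), 2):
        # A pair is discordant if the two orders disagree on it.
        if (a[x] - a[y]) * (b[x] - b[y]) < 0:
            discordant += 1
    return discordant

# e.g. two opposite rankings over three items
r1 = {"p": 0, "q": 1, "r": 2}
r2 = {"p": 2, "q": 1, "r": 0}
print(kendall_tau_distance(r1, r2))  # → 3
```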

Kernel Recursive ABC: Point Estimation with Intractable Likelihood

Title Kernel Recursive ABC: Point Estimation with Intractable Likelihood
Authors Takafumi Kajihara, Motonobu Kanagawa, Keisuke Yamazaki, Kenji Fukumizu
Abstract We propose a novel approach to parameter estimation for simulator-based statistical models with intractable likelihood. Our proposed method involves recursive application of kernel ABC and kernel herding to the same observed data. We provide a theoretical explanation regarding why the approach works, showing (for the population setting) that, under a certain assumption, point estimates obtained with this method converge to the true parameter, as recursion proceeds. We have conducted a variety of numerical experiments, including parameter estimation for a real-world pedestrian flow simulator, and show that in most cases our method outperforms existing approaches.
Tasks
Published 2018-02-23
URL http://arxiv.org/abs/1802.08404v2
PDF http://arxiv.org/pdf/1802.08404v2.pdf
PWC https://paperswithcode.com/paper/kernel-recursive-abc-point-estimation-with
Repo
Framework
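
As a rough illustration of the recursion, the sketch below reweights parameter particles with a Gaussian kernel on summary statistics and then resamples with jitter. The resampling step is a simple stand-in for kernel herding, which the paper uses to select deterministic samples, and all constants (bandwidth, jitter, particle count) are illustrative:

```python
import numpy as np

def kernel_abc_weights(simulated_stats, observed_stat, bandwidth=1.0):
    # Gaussian kernel comparing each simulated summary statistic to the
    # observed one; normalized weights form a pseudo-posterior.
    d2 = np.sum((simulated_stats - observed_stat) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth**2))
    return w / w.sum()

def recursive_abc(simulator, prior_sample, observed_stat, n_iters=5, rng=None):
    """Recursively apply kernel-ABC-style reweighting to the same
    observed data, resampling particles between iterations."""
    rng = np.random.default_rng() if rng is None else rng
    theta = prior_sample
    for _ in range(n_iters):
        stats = np.array([simulator(t, rng) for t in theta])
        w = kernel_abc_weights(stats, observed_stat)
        idx = rng.choice(len(theta), size=len(theta), p=w)
        theta = theta[idx] + 0.05 * rng.standard_normal(theta.shape)
    return theta.mean(axis=0)

# toy example: recover the mean of a noisy simulator with "intractable" likelihood
rng = np.random.default_rng(0)
sim = lambda t, r: np.array([t[0] + 0.1 * r.standard_normal()])
prior = rng.uniform(-5, 5, size=(200, 1))
est = recursive_abc(sim, prior, np.array([2.0]), rng=rng)
```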

Supervised Fuzzy Partitioning

Title Supervised Fuzzy Partitioning
Authors Pooya Ashtari, Fateme Nateghi Haredasht, Hamid Beigy
Abstract Centroid-based methods, including k-means and fuzzy c-means, are known as effective and easy-to-implement approaches to clustering in many applications. However, these algorithms cannot be directly applied to supervised tasks. This paper thus presents a generative model extending the centroid-based clustering approach to classification and regression tasks. Given an arbitrary loss function, the proposed approach, termed Supervised Fuzzy Partitioning (SFP), incorporates label information into its objective function through a surrogate term penalizing the empirical risk. Entropy-based regularization is also employed to fuzzify the partition and to weight features, enabling the method to capture more complex patterns, identify significant features, and yield better performance on high-dimensional data. An iterative algorithm based on a block coordinate descent scheme is formulated to efficiently find a local optimum. Extensive classification experiments on synthetic, real-world, and high-dimensional datasets demonstrate that the predictive performance of SFP is competitive with state-of-the-art algorithms such as SVM and random forest. SFP has a major advantage over such methods in that it not only leads to a flexible, nonlinear model but can also exploit any convex loss function in the training phase without compromising computational efficiency.
Tasks
Published 2018-06-15
URL https://arxiv.org/abs/1806.06124v5
PDF https://arxiv.org/pdf/1806.06124v5.pdf
PWC https://paperswithcode.com/paper/supervised-fuzzy-partitioning
Repo
Framework
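
The fuzzy part of SFP builds on the standard fuzzy c-means membership update, which fits in a few lines; SFP's objective adds a label-penalty term on top of an update of this kind (the sketch below is plain fuzzy c-means, not the supervised variant):

```python
import numpy as np

def fuzzy_memberships(X, centroids, m=2.0):
    # Standard fuzzy c-means update: each point belongs to every
    # centroid with a degree inversely tied to distance, controlled by
    # the fuzzifier m; memberships sum to 1 per point.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)          # avoid division by zero
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)
```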

Deep Barcodes for Fast Retrieval of Histopathology Scans

Title Deep Barcodes for Fast Retrieval of Histopathology Scans
Authors Meghana Dinesh Kumar, Morteza Babaie, Hamid Tizhoosh
Abstract We investigate the concept of deep barcodes and propose two methods to generate them in order to expedite the classification and retrieval of histopathology images. Since binary search is computationally less expensive, in terms of both speed and storage, deep barcodes could be useful when dealing with big-data retrieval. Our experiments use the Kimia Path24 dataset to test three pre-trained networks for image retrieval. The dataset consists of 27,055 training images in 24 different classes with large variability, and 1,325 test images. Apart from high speed and efficiency, the results show a surprising retrieval accuracy of 71.62% for deep barcodes, compared to 68.91% for deep features and 68.53% for compressed deep features.
Tasks Image Retrieval
Published 2018-04-30
URL http://arxiv.org/abs/1805.08833v1
PDF http://arxiv.org/pdf/1805.08833v1.pdf
PWC https://paperswithcode.com/paper/deep-barcodes-for-fast-retrieval-of
Repo
Framework
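
A deep barcode reduces retrieval to bitwise comparisons. One simple barcoding rule, thresholding each deep-feature dimension at the vector's own mean, together with a Hamming-distance search, can be sketched as follows (the thresholding rule is one plausible choice; the paper evaluates two generation methods):

```python
import numpy as np

def barcode(features):
    # Binarize a deep-feature vector by thresholding each dimension at
    # the vector's own mean, yielding a compact binary code.
    return (features > features.mean()).astype(np.uint8)

def hamming_search(query_code, database_codes):
    # Retrieval reduces to cheap bitwise comparisons: return the index
    # of the database code closest in Hamming distance.
    dists = np.count_nonzero(database_codes != query_code, axis=1)
    return int(np.argmin(dists))
```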

Text-to-image Synthesis via Symmetrical Distillation Networks

Title Text-to-image Synthesis via Symmetrical Distillation Networks
Authors Mingkuan Yuan, Yuxin Peng
Abstract Text-to-image synthesis aims to automatically generate images according to text descriptions given by users, which is a highly challenging task. The main issues of text-to-image synthesis lie in two gaps: the heterogeneous and homogeneous gaps. The heterogeneous gap lies between the high-level concepts of text descriptions and the pixel-level contents of images, while the homogeneous gap exists between synthetic image distributions and real image distributions. To address these problems, we exploit the excellent capability of generic discriminative models (e.g., VGG19), which can guide the training of a new generative model on multiple levels to bridge the two gaps. The high-level representations can teach the generative model to extract necessary visual information from text descriptions, bridging the heterogeneous gap. The mid-level and low-level representations can lead it to learn the structures and details of images respectively, which narrows the homogeneous gap. Therefore, we propose Symmetrical Distillation Networks (SDN), composed of a source discriminative model as “teacher” and a target generative model as “student”. The target generative model has a structure symmetrical to that of the source discriminative model, in order to transfer hierarchical knowledge accessibly. Moreover, we decompose the training process into two stages with different distillation paradigms to promote the performance of the target generative model. Experiments on two widely used datasets verify the effectiveness of the proposed SDN.
Tasks Image Generation
Published 2018-08-21
URL http://arxiv.org/abs/1808.06801v1
PDF http://arxiv.org/pdf/1808.06801v1.pdf
PWC https://paperswithcode.com/paper/text-to-image-synthesis-via-symmetrical
Repo
Framework
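
The distillation signal described above is essentially a multi-level feature-matching loss between the frozen teacher and the student generator. A minimal sketch, assuming simple L2 matching and hypothetical per-level weights:

```python
import numpy as np

def distillation_loss(student_feats, teacher_feats, weights):
    # Multi-level feature matching: penalize the squared distance
    # between the student's intermediate features and the frozen
    # teacher's, one weighted term per level.
    return sum(w * np.mean((s - t) ** 2)
               for w, s, t in zip(weights, student_feats, teacher_feats))
```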

Context-Dependent Upper-Confidence Bounds for Directed Exploration

Title Context-Dependent Upper-Confidence Bounds for Directed Exploration
Authors Raksha Kumaraswamy, Matthew Schlegel, Adam White, Martha White
Abstract Directed exploration strategies for reinforcement learning are critical for learning an optimal policy in a minimal number of interactions with the environment. Many algorithms use optimism to direct exploration, either through visitation estimates or upper confidence bounds, as opposed to data-inefficient strategies like $\epsilon$-greedy that use random, undirected exploration. Most data-efficient exploration methods require significant computation, typically relying on a learned model to guide exploration. Least-squares methods, because they summarize past interactions, have the potential to provide some of the data-efficiency benefits of model-based approaches with computation closer to that of model-free approaches. In this work, we provide a novel, computationally efficient, incremental exploration strategy, leveraging this property of least-squares temporal difference learning (LSTD). We derive upper confidence bounds on the action-values learned by LSTD, with context-dependent (or state-dependent) noise variance. Such context-dependent noise focuses exploration on a subset of variable states, and allows for reduced exploration in other states. We empirically demonstrate that our algorithm can converge more quickly than other incremental exploration strategies using confidence estimates on action-values.
Tasks Efficient Exploration
Published 2018-11-15
URL http://arxiv.org/abs/1811.06629v1
PDF http://arxiv.org/pdf/1811.06629v1.pdf
PWC https://paperswithcode.com/paper/context-dependent-upper-confidence-bounds-for
Repo
Framework
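
The context-dependent bonus can be illustrated with a LinUCB-style rule: the exploration bonus $\sqrt{x^\top A^{-1} x}$ shrinks in directions of feature space that have been visited often. This is a generic sketch, not the paper's exact LSTD-derived bound:

```python
import numpy as np

def ucb_action(features_per_action, theta, A_inv, beta=1.0):
    """Choose the action with the largest optimistic value estimate.
    theta: learned value weights; A_inv: inverse of the accumulated
    feature covariance, so the bonus is small for well-visited
    directions and large for novel ones."""
    scores = [x @ theta + beta * np.sqrt(x @ A_inv @ x)
              for x in features_per_action]
    return int(np.argmax(scores))
```

With a large exploration coefficient, an action whose features point in an under-visited direction wins even when its value estimate is lower.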

Different but Equal: Comparing User Collaboration with Digital Personal Assistants vs. Teams of Expert Agents

Title Different but Equal: Comparing User Collaboration with Digital Personal Assistants vs. Teams of Expert Agents
Authors Claudio S. Pinhanez, Heloisa Candello, Mauro C. Pichiliani, Marisa Vasconcelos, Melina Guerra, Maíra G. de Bayser, Paulo Cavalin
Abstract This work compares user collaboration with conversational personal assistants vs. teams of expert chatbots. Two studies were performed to investigate whether each approach affects task accomplishment and collaboration costs. Participants interacted with two equivalent financial advice chatbot systems, one composed of a single conversational adviser and the other based on a team of four expert chatbots. Results indicated that users had different forms of experiences but were equally able to achieve their goals. Contrary to expectations, there was evidence that in the teamwork situation users were better able to predict agent behavior and did not incur overhead to maintain common ground, indicating similar collaboration costs. The results point towards the feasibility of either approach for user collaboration with conversational agents.
Tasks Chatbot
Published 2018-08-24
URL http://arxiv.org/abs/1808.08157v1
PDF http://arxiv.org/pdf/1808.08157v1.pdf
PWC https://paperswithcode.com/paper/different-but-equal-comparing-user
Repo
Framework

Modelling and Analysis of Temporal Preference Drifts Using A Component-Based Factorised Latent Approach

Title Modelling and Analysis of Temporal Preference Drifts Using A Component-Based Factorised Latent Approach
Authors F. Zafari, I. Moser, T. Baarslag
Abstract Changes in user preferences can originate from substantial reasons, like personality shifts, or from transient and circumstantial ones, like seasonal changes in item popularity. Disregarding these temporal drifts when modelling user preferences can result in unhelpful recommendations. Moreover, different temporal patterns can be associated with various preference domains, preference components, and their combinations. These components comprise preferences over features, preferences over feature values, conditional dependencies between features, socially influenced preferences, and bias. For example, in the movies domain, a user can change their rating behaviour (bias shift), their preference for genre over language (feature preference shift), or start favouring drama over comedy (feature value preference shift). In this paper, we first propose a novel latent factor model to capture the domain-dependent, component-specific temporal patterns in preferences. The component-based approach followed in modelling the aspects of preferences and their temporal effects enables us to arbitrarily switch components on and off. We evaluate the proposed method on three popular recommendation datasets and show that it significantly outperforms the most accurate state-of-the-art static models. The experiments also demonstrate the greater robustness and stability of the proposed dynamic model in comparison with the most successful models to date. We also analyse the temporal behaviour of different preference components and their combinations and show that the dynamic behaviour of preference components is highly dependent on the preference dataset and domain. The results therefore not only highlight the importance of modelling temporal effects but also underline the advantages of a component-based architecture that is better suited to capture domain-specific balances in the contributions of the aspects.
Tasks
Published 2018-02-27
URL http://arxiv.org/abs/1802.09728v2
PDF http://arxiv.org/pdf/1802.09728v2.pdf
PWC https://paperswithcode.com/paper/modelling-and-analysis-of-temporal-preference
Repo
Framework

A Retinex-based Image Enhancement Scheme with Noise Aware Shadow-up Function

Title A Retinex-based Image Enhancement Scheme with Noise Aware Shadow-up Function
Authors Chien Cheng Chien, Yuma Kinoshita, Sayaka Shiota, Hitoshi Kiya
Abstract This paper proposes a novel image contrast enhancement method based on both a noise-aware shadow-up function and Retinex (retina and cortex) decomposition. Under low-light conditions, images taken by digital cameras have low contrast in dark or bright regions. This is due to the limited dynamic range of imaging sensors. For this reason, various contrast enhancement methods have been proposed. Our proposed method can enhance the contrast of images without either over-enhancement or noise amplification. In the proposed method, an image is decomposed into an illumination layer and a reflectance layer based on Retinex theory, and the lightness information of the illumination layer is adjusted. A shadow-up function is used to prevent over-enhancement. The proposed mapping function, designed using a noise-aware histogram, allows us not only to enhance the contrast of dark regions but also to avoid amplifying noise, even in strongly noisy environments.
Tasks Image Enhancement
Published 2018-11-08
URL http://arxiv.org/abs/1811.03280v1
PDF http://arxiv.org/pdf/1811.03280v1.pdf
PWC https://paperswithcode.com/paper/a-retinex-based-image-enhancement-scheme-with
Repo
Framework
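
The pipeline in the abstract (decompose, adjust the illumination with a shadow-up curve, recombine) can be sketched as follows. The gamma-style curve and the generic `blur` argument are illustrative stand-ins; the paper designs its mapping function from a noise-aware histogram:

```python
import numpy as np

def shadow_up(lightness, strength=2.2):
    # A gamma-style curve that lifts dark values much more than bright
    # ones, so values near white are barely changed (preventing
    # over-enhancement). Curve shape and `strength` are illustrative.
    return np.clip(lightness, 0.0, 1.0) ** (1.0 / strength)

def retinex_enhance(img, blur):
    # Retinex-style decomposition: illumination is a smoothed copy of
    # the image, reflectance is the ratio; only the illumination is
    # adjusted before recombining.
    illumination = np.maximum(blur(img), 1e-6)
    reflectance = img / illumination
    return reflectance * shadow_up(illumination)
```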

Reward-estimation variance elimination in sequential decision processes

Title Reward-estimation variance elimination in sequential decision processes
Authors Sergey Pankov
Abstract Policy gradient methods are very attractive in reinforcement learning due to their model-free nature and convergence guarantees. These methods, however, suffer from high variance in gradient estimation, resulting in poor sample efficiency. To mitigate this issue, a number of variance-reduction approaches have been proposed. Unfortunately, in challenging problems with delayed rewards, these approaches either bring a relatively modest improvement or reduce variance at the expense of introducing a bias and undermining convergence. Unbiased methods of gradient estimation, in general, only partially reduce variance, without eliminating it completely even in the limit of exact knowledge of the value functions and problem dynamics, as one might have wished. In this work we propose an unbiased method that completely eliminates variance under some commonly encountered conditions. Of practical interest is the limit of deterministic dynamics and small policy stochasticity. In the case of a quadratic value function, as in linear quadratic Gaussian models, the policy randomness need not be small. We use such a model to analyze the performance of the proposed variance-elimination approach and compare it with standard variance-reduction methods. The core idea behind the approach is to use control variates at all future times down the trajectory. We present both model-based and model-free formulations.
Tasks Policy Gradient Methods
Published 2018-11-15
URL http://arxiv.org/abs/1811.06225v1
PDF http://arxiv.org/pdf/1811.06225v1.pdf
PWC https://paperswithcode.com/paper/reward-estimation-variance-elimination-in
Repo
Framework
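
The variance-reduction mechanism at the heart of the method, control variates, is easy to demonstrate on a one-dimensional toy estimate. The paper applies control variates at all future times along a trajectory; this only shows why subtracting a correlated zero-mean term shrinks variance without biasing the estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy setting: estimate E[f(X)] for f(x) = x**2 + x with X ~ N(0, 1),
# using g(x) = x (whose mean is known to be 0) as a control variate.
x = rng.standard_normal(100_000)
f = x**2 + x
g = x
c = np.cov(f, g)[0, 1] / np.var(g)   # near-optimal coefficient
estimate = np.mean(f - c * g)        # still unbiased, since E[g] = 0
```

Here the optimal coefficient is close to 1, so the corrected samples reduce to roughly `x**2`, whose variance is strictly smaller than that of `f`.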

Large scale classification in deep neural network with Label Mapping

Title Large scale classification in deep neural network with Label Mapping
Authors Qizhi Zhang, Kuang-Chih Lee, Hongying Bao, Yuan You, Wenjie Li, Dongbai Guo
Abstract In recent years, deep neural networks have been widely used in machine learning. The multi-class classification problem is an important class of problems in machine learning. However, to solve such multi-class classification problems effectively, the required network size grows hyper-linearly with respect to the number of classes. Therefore, it is infeasible to solve the multi-class classification problem using a deep neural network when the number of classes is huge. This paper presents a method, called Label Mapping (LM), to solve this problem by decomposing the original classification problem into several smaller sub-problems which are solvable theoretically. Our method is an ensemble method like error-correcting output codes (ECOC), but it allows base learners to be multi-class classifiers with different numbers of class labels. We propose two design principles for LM: one is to maximize the number of base classifiers that can separate two different classes, and the other is to keep the base learners as independent as possible in order to reduce redundant information. Based on these principles, two different LM algorithms are derived using number theory and information theory. Since each base learner can be trained independently, it is easy to scale our method to a large-scale training system. Experiments show that our proposed method outperforms standard one-hot encoding and ECOC significantly in terms of accuracy and model complexity.
Tasks
Published 2018-06-07
URL http://arxiv.org/abs/1806.02507v1
PDF http://arxiv.org/pdf/1806.02507v1.pdf
PWC https://paperswithcode.com/paper/large-scale-classification-in-deep-neural
Repo
Framework
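
The number-theoretic flavor of Label Mapping can be illustrated with a residue decomposition: map each label to its remainders modulo pairwise-coprime bases, so each base learner sees far fewer classes while the Chinese remainder theorem keeps the mapping injective. This is an illustrative construction, not necessarily the paper's exact algorithm:

```python
from math import prod

def label_mapping(label, bases):
    # Map one class label to a tuple of smaller sub-labels, one per
    # base learner; base learner i only needs to predict among
    # bases[i] classes.
    return tuple(label % b for b in bases)

def recover_label(residues, bases):
    # Invert via brute-force CRT search (fine for illustration).
    for candidate in range(prod(bases)):
        if all(candidate % b == r for b, r in zip(bases, residues)):
            return candidate
    raise ValueError("inconsistent residues")

bases = (7, 11, 13)   # product 1001 >= number of classes
code = label_mapping(942, bases)
```

Three classifiers with at most 13 outputs each jointly distinguish up to 1001 classes.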

Adaptive Minimax Regret against Smooth Logarithmic Losses over High-Dimensional $\ell_1$-Balls via Envelope Complexity

Title Adaptive Minimax Regret against Smooth Logarithmic Losses over High-Dimensional $\ell_1$-Balls via Envelope Complexity
Authors Kohei Miyaguchi, Kenji Yamanishi
Abstract We develop a new theoretical framework, the \emph{envelope complexity}, to analyze the minimax regret with logarithmic loss functions, and derive a Bayesian predictor that adaptively achieves the minimax regret over high-dimensional $\ell_1$-balls within a factor of two. The prior, newly derived for achieving the minimax regret, is called the \emph{spike-and-tails~(ST) prior} after its shape. The resulting regret bound is simple: up to logarithmic factors, it is completely determined by the smoothness of the loss function and the radius of the balls, and it takes a generalized form of existing regret/risk bounds. In a preliminary experiment, we confirm that the ST prior outperforms the conventional minimax-regret prior under non-high-dimensional asymptotics.
Tasks
Published 2018-10-09
URL http://arxiv.org/abs/1810.03825v2
PDF http://arxiv.org/pdf/1810.03825v2.pdf
PWC https://paperswithcode.com/paper/adaptive-minimax-regret-against-smooth
Repo
Framework

Detection and Analysis of Content Creator Collaborations in YouTube Videos using Face- and Speaker-Recognition

Title Detection and Analysis of Content Creator Collaborations in YouTube Videos using Face- and Speaker-Recognition
Authors Moritz Lode, Michael Örtl, Christian Koch, Amr Rizk, Ralf Steinmetz
Abstract This work discusses and implements the application of speaker recognition to the detection of collaborations in YouTube videos. CATANA, an existing framework for the detection and analysis of YouTube collaborations, utilizes face recognition to detect collaborators, which naturally performs poorly on video content without visible faces. This work proposes an extension of CATANA using active speaker detection and speaker recognition to improve detection accuracy.
Tasks Face Recognition, Speaker Recognition
Published 2018-07-05
URL http://arxiv.org/abs/1807.02020v1
PDF http://arxiv.org/pdf/1807.02020v1.pdf
PWC https://paperswithcode.com/paper/detection-and-analysis-of-content-creator
Repo
Framework