July 28, 2019

2789 words 14 mins read

Paper Group ANR 172

Dynamic Pricing in Competitive Markets. Learning and Transferring IDs Representation in E-commerce. Gradient Normalization & Depth Based Decay For Deep Learning. Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks. Learning Affinity via Spatial Propagation Networks. TasselNet: Counting maize tassels in the wild via local counts …

Dynamic Pricing in Competitive Markets

Title Dynamic Pricing in Competitive Markets
Authors Paresh Nakhe
Abstract Dynamic pricing of goods in a competitive environment to maximize revenue is a natural objective and has been a subject of research over the years. In this paper, we focus on a class of markets exhibiting the substitutes property with sellers having divisible and replenishable goods. Depending on the prices chosen, each seller observes a certain demand which is satisfied subject to the supply constraint. The goal of the seller is to price her good dynamically so as to maximize her revenue. For the static market case, when the consumer utility satisfies the Constant Elasticity of Substitution (CES) property, we give an $O(\sqrt{T})$ regret bound on the maximum loss in revenue of a seller using a modified version of the celebrated Online Gradient Descent Algorithm by Zinkevich. For a more specialized set of consumer utilities satisfying the iso-elasticity condition, we show that when each seller uses a regret-minimizing algorithm satisfying a certain technical property, the regret with respect to $(1-\alpha)$ times the optimal revenue is bounded as $O(T^{1/4} / \sqrt{\alpha})$. We extend this result to markets with dynamic supplies and prove a corresponding dynamic regret bound, whose guarantee deteriorates smoothly with the inherent instability of the market. As a side result, we also extend the previously known convergence results of these algorithms in a general game to the dynamic setting.
Tasks
Published 2017-09-14
URL http://arxiv.org/abs/1709.04960v1
PDF http://arxiv.org/pdf/1709.04960v1.pdf
PWC https://paperswithcode.com/paper/dynamic-pricing-in-competitive-markets
Repo
Framework
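
The abstract's first result is easy to picture as projected online gradient ascent on a single seller's price. Below is a minimal sketch of that loop, assuming a hypothetical `demand_fn(p, t)` oracle that can be probed for a finite-difference gradient estimate; the paper's actual algorithm (a modified OGD under CES utilities) and its feedback model may differ.

```python
import numpy as np

def projected_ogd_prices(demand_fn, price_lo, price_hi, T, eta0=0.1):
    """Sketch of Zinkevich-style projected online gradient ascent on a
    seller's price. demand_fn(p, t) returns the demand observed at price p
    in round t (hypothetical interface, not from the paper)."""
    p = (price_lo + price_hi) / 2.0            # initial price
    prices, revenues = [], []
    for t in range(1, T + 1):
        d = demand_fn(p, t)                    # observed demand at current price
        revenue = p * d
        # Finite-difference estimate of the revenue gradient w.r.t. price;
        # an illustrative stand-in for the paper's gradient information.
        eps = 1e-3
        grad = ((p + eps) * demand_fn(p + eps, t) - revenue) / eps
        # Gradient ascent step with O(1/sqrt(t)) step size, then project
        # back onto the feasible price interval.
        p = p + (eta0 / np.sqrt(t)) * grad
        p = min(max(p, price_lo), price_hi)
        prices.append(p)
        revenues.append(revenue)
    return prices, revenues

# Usage with a toy linear demand curve (purely illustrative):
# prices, revs = projected_ogd_prices(lambda p, t: max(0.0, 10.0 - 2.0 * p), 0.5, 4.0, T=1000)
```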

Learning and Transferring IDs Representation in E-commerce

Title Learning and Transferring IDs Representation in E-commerce
Authors Kui Zhao, Yuechuan Li, Zhaoqian Shuai, Cheng Yang
Abstract Many machine intelligence techniques are developed in E-commerce, and one of the most essential components is the representation of IDs, including user ID, item ID, product ID, store ID, brand ID, category ID, etc. Classical encoding-based methods (like one-hot encoding) are inefficient in that they suffer from sparsity problems due to their high dimensionality, and they cannot reflect the relationships among IDs, whether homogeneous or heterogeneous. In this paper, we propose an embedding-based framework to learn and transfer the representation of IDs. As implicit feedback from users, a tremendous amount of item ID sequences can be easily collected from interactive sessions. By jointly using these informative sequences and the structural connections among IDs, all types of IDs can be embedded into one low-dimensional semantic space. Subsequently, the learned representations are utilized and transferred in four scenarios: (i) measuring the similarity between items, (ii) transferring from seen items to unseen items, (iii) transferring across different domains, and (iv) transferring across different tasks. We deploy and evaluate the proposed approach in the Hema App, and the results validate its effectiveness.
Tasks
Published 2017-12-22
URL http://arxiv.org/abs/1712.08289v4
PDF http://arxiv.org/pdf/1712.08289v4.pdf
PWC https://paperswithcode.com/paper/learning-and-transferring-ids-representation
Repo
Framework
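
The core of the approach is embedding item IDs from session sequences into a low-dimensional space, much like word embeddings over sentences. A minimal stand-in using gensim's skip-gram Word2Vec over item-ID "sentences" is sketched below; the paper's joint model additionally embeds the other ID types (store, brand, category and so on) through their structural connections, which this sketch only approximates by averaging item vectors.

```python
import numpy as np
from gensim.models import Word2Vec  # gensim >= 4.0 API assumed

# Each user session is treated as a "sentence" of item IDs (strings),
# collected from interactive sessions.
sessions = [
    ["item_17", "item_4", "item_88", "item_4"],
    ["item_88", "item_23", "item_17"],
]

# Skip-gram embedding of item IDs into a low-dimensional semantic space.
model = Word2Vec(sessions, vector_size=64, window=5, min_count=1, sg=1, epochs=20)

# (i) Item-to-item similarity in the learned space.
print(model.wv.most_similar("item_17", topn=3))

# Crude stand-in for a coarser ID (e.g. a store or brand): average the
# embeddings of the items attached to it.
store_vec = np.mean([model.wv[i] for i in ["item_17", "item_4"]], axis=0)
```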

Gradient Normalization & Depth Based Decay For Deep Learning

Title Gradient Normalization & Depth Based Decay For Deep Learning
Authors Robert Kwiatkowski, Oscar Chang
Abstract In this paper we introduce a novel method of gradient normalization and decay with respect to depth. Our method leverages the simple concept of normalizing all gradients in a deep neural network and then decaying those gradients with respect to their depth in the network. The proposed normalization and decay techniques can be used in conjunction with most current state-of-the-art optimizers and are a very simple addition to any network. This method, although simple, showed improvements in convergence time on state-of-the-art networks such as DenseNet and ResNet on image classification tasks, as well as on an LSTM for natural language processing tasks.
Tasks Image Classification
Published 2017-12-10
URL http://arxiv.org/abs/1712.03607v2
PDF http://arxiv.org/pdf/1712.03607v2.pdf
PWC https://paperswithcode.com/paper/gradient-normalization-depth-based-decay-for
Repo
Framework
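
Since the abstract describes a transformation applied to gradients before the optimizer step, here is a hedged PyTorch sketch of that idea: normalize each parameter's gradient, then scale it by a depth-dependent factor. The normalization granularity and the exact decay schedule used in the paper may differ.

```python
import torch

def normalize_and_decay_grads(model, decay=0.9):
    """Illustrative version of the abstract's idea: rescale each parameter's
    gradient to unit norm, then multiply by decay**depth, where depth counts
    parameters back from the output. This is a sketch, not the paper's exact
    procedure."""
    params = [p for p in model.parameters() if p.grad is not None]
    for depth, p in enumerate(reversed(params)):  # depth 0 = closest to output
        norm = p.grad.norm()
        if norm > 0:
            p.grad = (p.grad / norm) * (decay ** depth)

# Typical placement inside a training loop (any torch.optim optimizer):
# loss.backward()
# normalize_and_decay_grads(model, decay=0.9)
# optimizer.step()
```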

Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks

Title Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks
Authors Vahid Behzadan, Arslan Munir
Abstract Deep learning classifiers are known to be inherently vulnerable to manipulation by intentionally perturbed inputs, named adversarial examples. In this work, we establish that reinforcement learning techniques based on Deep Q-Networks (DQNs) are also vulnerable to adversarial input perturbations, and verify the transferability of adversarial examples across different DQN models. Furthermore, we present a novel class of attacks based on this vulnerability that enable policy manipulation and induction in the learning process of DQNs. We propose an attack mechanism that exploits the transferability of adversarial examples to implement policy induction attacks on DQNs, and demonstrate its efficacy and impact through experimental study of a game-learning scenario.
Tasks
Published 2017-01-16
URL http://arxiv.org/abs/1701.04143v1
PDF http://arxiv.org/pdf/1701.04143v1.pdf
PWC https://paperswithcode.com/paper/vulnerability-of-deep-reinforcement-learning
Repo
Framework
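
The attack builds on adversarial input perturbations against a Q-network. As a rough illustration (not the paper's full policy-induction procedure), the following PyTorch sketch crafts a single FGSM-style perturbation that nudges the greedy action toward an adversary-chosen target action; `q_net`, the batched state shape, and the pixel range are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_state_perturbation(q_net, state, target_action, eps=0.01):
    """Perturb an input state (shape (1, ...), values in [0, 1]) so that the
    DQN's greedy action is pushed toward an adversary-chosen action. This
    shows the kind of perturbation the attack builds on; the paper's full
    attack additionally exploits transferability across DQN models."""
    state = state.clone().detach().requires_grad_(True)
    q_values = q_net(state)                                  # (1, num_actions)
    # Decrease the loss toward the adversarial target action by stepping
    # against its gradient with respect to the input.
    loss = F.cross_entropy(q_values, torch.tensor([target_action]))
    loss.backward()
    adv_state = state - eps * state.grad.sign()
    return adv_state.clamp(0.0, 1.0).detach()                # keep valid range
```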

Learning Affinity via Spatial Propagation Networks

Title Learning Affinity via Spatial Propagation Networks
Authors Sifei Liu, Shalini De Mello, Jinwei Gu, Guangyu Zhong, Ming-Hsuan Yang, Jan Kautz
Abstract In this paper, we propose spatial propagation networks for learning the affinity matrix for vision tasks. We show that by constructing a row/column linear propagation model, the spatially varying transformation matrix exactly constitutes an affinity matrix that models dense, global pairwise relationships of an image. Specifically, we develop a three-way connection for the linear propagation model, which (a) formulates a sparse transformation matrix, where all elements can be the output from a deep CNN, but (b) results in a dense affinity matrix that effectively models any task-specific pairwise similarity matrix. Instead of designing the similarity kernels according to image features of two points, we can directly output all the similarities in a purely data-driven manner. The spatial propagation network is a generic framework that can be applied to many affinity-related tasks, including but not limited to image matting, segmentation and colorization, to name a few. Essentially, the model can learn semantically-aware affinity values for high-level vision tasks due to the powerful learning capability of the deep neural network classifier. We validate the framework on the task of refinement for image segmentation boundaries. Experiments on the HELEN face parsing and PASCAL VOC-2012 semantic segmentation tasks show that the spatial propagation network provides a general, effective and efficient solution for generating high-quality segmentation results.
Tasks Colorization, Image Matting, Semantic Segmentation
Published 2017-10-03
URL http://arxiv.org/abs/1710.01020v1
PDF http://arxiv.org/pdf/1710.01020v1.pdf
PWC https://paperswithcode.com/paper/learning-affinity-via-spatial-propagation
Repo
Framework
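
The propagation model itself is a simple per-row (or per-column) linear recurrence with learned gates. A NumPy sketch of one left-to-right sweep is shown below; in the paper the gates are predicted per pixel by a deep CNN and each pixel connects to three neighbours in the previous column, whereas this sketch uses a single scalar gate per pixel for clarity.

```python
import numpy as np

def propagate_left_to_right(x, w):
    """One directional sweep of a row-wise linear propagation model:
    h[:, j] = (1 - w[:, j]) * x[:, j] + w[:, j] * h[:, j-1].
    x is the map to refine (e.g. a coarse segmentation), w holds per-pixel
    gates in [0, 1)."""
    h = np.zeros_like(x)
    h[:, 0] = x[:, 0]
    for j in range(1, x.shape[1]):
        h[:, j] = (1.0 - w[:, j]) * x[:, j] + w[:, j] * h[:, j - 1]
    return h

# Toy usage: refine a random map with a uniform gate of 0.5.
x = np.random.rand(4, 6)
w = np.full((4, 6), 0.5)
refined = propagate_left_to_right(x, w)
```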

TasselNet: Counting maize tassels in the wild via local counts regression network

Title TasselNet: Counting maize tassels in the wild via local counts regression network
Authors Hao Lu, Zhiguo Cao, Yang Xiao, Bohan Zhuang, Chunhua Shen
Abstract Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual effort. In the context of modern plant phenotyping, automating this task is required to meet the needs of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally, image-based approaches have also received much attention in plant-related studies. Yet most image-based systems for plant phenotyping are deployed under controlled laboratory environments. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for more robust computer vision approaches that address in-field variations. This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies in an unconstrained field-based environment.
Tasks
Published 2017-07-07
URL http://arxiv.org/abs/1707.02290v1
PDF http://arxiv.org/pdf/1707.02290v1.pdf
PWC https://paperswithcode.com/paper/tasselnet-counting-maize-tassels-in-the-wild
Repo
Framework
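
The "local counts" idea in the title is to regress a count per image window and merge the overlapping local predictions into a global estimate. The sketch below shows only a hypothetical aggregation step of that kind; the CNN regressor and TasselNet's actual normalization are not reproduced here.

```python
import numpy as np

def merge_local_counts(local_counts, image_shape, win, stride):
    """Spread each window's predicted count uniformly over its footprint and
    normalize overlaps, yielding a density map whose sum approximates the
    global count. local_counts must follow the same row-major window order
    as the loops below. Illustrative only."""
    H, W = image_shape
    count_map = np.zeros((H, W))
    norm_map = np.zeros((H, W))
    idx = 0
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            count_map[y:y + win, x:x + win] += local_counts[idx] / (win * win)
            norm_map[y:y + win, x:x + win] += 1.0
            idx += 1
    count_map /= np.maximum(norm_map, 1.0)
    return count_map, count_map.sum()   # density map and estimated total count
```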

Spatially Aware Melanoma Segmentation Using Hybrid Deep Learning Techniques

Title Spatially Aware Melanoma Segmentation Using Hybrid Deep Learning Techniques
Authors M. Attia, M. Hossny, S. Nahavandi, A. Yazdabadi
Abstract In this paper, we propose a hybrid method that utilises deep convolutional and recurrent neural networks for accurate delineation of skin lesions in images supplied with the ISBI 2017 lesion segmentation challenge. The proposed method was trained using 1800 images and tested on 150 images from the ISBI 2017 challenge.
Tasks Lesion Segmentation
Published 2017-02-26
URL http://arxiv.org/abs/1702.07963v1
PDF http://arxiv.org/pdf/1702.07963v1.pdf
PWC https://paperswithcode.com/paper/spatially-aware-melanoma-segmentation-using
Repo
Framework

k-Means Clustering and Ensemble of Regressions: An Algorithm for the ISIC 2017 Skin Lesion Segmentation Challenge

Title k-Means Clustering and Ensemble of Regressions: An Algorithm for the ISIC 2017 Skin Lesion Segmentation Challenge
Authors David Alvarez, Monica Iglesias
Abstract This abstract briefly describes a segmentation algorithm developed for the ISIC 2017 Skin Lesion Detection Competition hosted at [ref]. The objective of the competition is to perform a segmentation (in the form of a binary mask image) of skin lesions in dermoscopic images as close as possible to a segmentation performed by trained clinicians, which is taken as ground truth. This project only takes part in the segmentation phase of the challenge. The other phases of the competition (feature extraction and lesion identification) are not considered. The proposed algorithm consists of 4 steps: (1) lesion image preprocessing, (2) image segmentation using k-means clustering of pixel colors, (3) calculation of a set of features describing the properties of each segmented region, and (4) calculation of a final score for each region, representing the likelihood of corresponding to a suitable lesion segmentation. The scores in step (4) are obtained by averaging the results of two different regression models using the features of each region as input. Before using the algorithm, these regression models must be trained using the training set of images and ground truth masks provided by the competition. Steps 2 to 4 are repeated with an increasing number of clusters (and therefore the image is segmented into more regions) until there is no further improvement of the calculated scores.
Tasks Lesion Segmentation, Semantic Segmentation
Published 2017-02-23
URL http://arxiv.org/abs/1702.07333v1
PDF http://arxiv.org/pdf/1702.07333v1.pdf
PWC https://paperswithcode.com/paper/k-means-clustering-and-ensemble-of
Repo
Framework
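
Step (2) of the described pipeline, segmenting the image by k-means clustering of pixel colors, is easy to sketch with scikit-learn; the region features and the ensemble of regressions from steps (3) and (4) are omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans

def candidate_regions(image_rgb, k):
    """Cluster pixel colors with k-means and return one binary candidate mask
    per cluster (step 2 of the pipeline described above). Scoring the masks
    with region features and regression models is not shown."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    labels = labels.reshape(h, w)
    return [(labels == c) for c in range(k)]

# The full algorithm repeats this for k = 2, 3, ... and keeps the best-scoring
# mask, stopping once the score no longer improves.
```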

Streaming Algorithm for Euler Characteristic Curves of Multidimensional Images

Title Streaming Algorithm for Euler Characteristic Curves of Multidimensional Images
Authors Teresa Heiss, Hubert Wagner
Abstract We present an efficient algorithm to compute Euler characteristic curves of grayscale images of arbitrary dimension. In various applications the Euler characteristic curve is used as a descriptor of an image. Our algorithm is the first streaming algorithm for Euler characteristic curves. The usage of streaming removes the necessity to store the entire image in RAM. Experiments show that our implementation handles terabyte scale images on commodity hardware. Due to lock-free parallelism, it scales well with the number of processor cores. Our software—CHUNKYEuler—is available as open source on Bitbucket. Additionally, we put the concept of the Euler characteristic curve in the wider context of computational topology. In particular, we explain the connection with persistence diagrams.
Tasks
Published 2017-05-04
URL http://arxiv.org/abs/1705.02045v3
PDF http://arxiv.org/pdf/1705.02045v3.pdf
PWC https://paperswithcode.com/paper/streaming-algorithm-for-euler-characteristic
Repo
Framework
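
For intuition about what the curve measures, here is a small non-streaming reference computation for 2D grayscale images, using one common convention (pixels of the sublevel set as vertices, 4-adjacent pairs as edges, 2x2 foreground blocks as faces). The paper's contribution, a streaming, lock-free algorithm for arbitrary dimension, is not reflected in this sketch.

```python
import numpy as np

def euler_characteristic_curve(img, thresholds):
    """Euler characteristic chi = V - E + F of each sublevel set img <= t,
    for a 2D image, under the pixel-as-vertex convention described above."""
    curve = []
    for t in thresholds:
        b = (img <= t)                                         # sublevel set
        V = b.sum()
        E = (b[:, :-1] & b[:, 1:]).sum() + (b[:-1, :] & b[1:, :]).sum()
        F = (b[:-1, :-1] & b[:-1, 1:] & b[1:, :-1] & b[1:, 1:]).sum()
        curve.append(int(V) - int(E) + int(F))
    return curve

# Example on a random 8-bit image:
img = np.random.randint(0, 256, size=(128, 128))
print(euler_characteristic_curve(img, thresholds=range(0, 256, 32)))
```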

A Closer Look at Memorization in Deep Networks

Title A Closer Look at Memorization in Deep Networks
Authors Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien
Abstract We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that the notions of effective capacity which are dataset independent are unlikely to explain the generalization performance of deep networks when trained with gradient based methods because training data itself plays an important role in determining the degree of memorization.
Tasks
Published 2017-06-16
URL http://arxiv.org/abs/1706.05394v2
PDF http://arxiv.org/pdf/1706.05394v2.pdf
PWC https://paperswithcode.com/paper/a-closer-look-at-memorization-in-deep
Repo
Framework
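
The noise-vs-real-data comparisons in the abstract follow the usual label-randomization protocol: corrupt a fraction of the training labels and compare learning curves against the clean run. A minimal helper for that setup is sketched below; the noise fractions, architectures and regularizers used in the paper are not reproduced.

```python
import numpy as np

def randomize_labels(y, fraction, num_classes, seed=0):
    """Replace a given fraction of integer class labels with uniformly random
    classes. Training the same architecture on clean vs. corrupted labels is
    the basic memorization experiment; details in the paper may differ."""
    rng = np.random.default_rng(seed)
    y_noisy = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_noisy[idx] = rng.integers(0, num_classes, size=len(idx))
    return y_noisy

# Train on (x, y) and on (x, randomize_labels(y, 1.0, 10)); the paper's
# observation is that simple patterns in real data are fit first, while
# fully random labels can only be memorized.
```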

Variational Methods for Normal Integration

Title Variational Methods for Normal Integration
Authors Yvain Quéau, Jean-Denis Durou, Jean-François Aujol
Abstract The need for an efficient method of integration of a dense normal field is inspired by several computer vision tasks, such as shape-from-shading, photometric stereo, deflectometry, etc. Inspired by edge-preserving methods from image processing, we study in this paper several variational approaches for normal integration, with a focus on non-rectangular domains, free boundary and depth discontinuities. We first introduce a new discretization for quadratic integration, which is designed to ensure both fast recovery and the ability to handle non-rectangular domains with a free boundary. Yet, with this solver, discontinuous surfaces can be handled only if the scene is first segmented into pieces without discontinuity. Hence, we then discuss several discontinuity-preserving strategies. Those inspired, respectively, by the Mumford-Shah segmentation method and by anisotropic diffusion, are shown to be the most effective for recovering discontinuities.
Tasks
Published 2017-09-18
URL http://arxiv.org/abs/1709.05965v1
PDF http://arxiv.org/pdf/1709.05965v1.pdf
PWC https://paperswithcode.com/paper/variational-methods-for-normal-integration
Repo
Framework
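
The baseline the paper builds on is quadratic (least-squares) integration of a gradient field derived from the normals. Below is a minimal sparse least-squares sketch on a full rectangular grid with forward differences; handling non-rectangular domains, free boundaries and the discontinuity-preserving terms is precisely what the paper adds on top of this.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def integrate_normals_quadratic(p, q):
    """Find a depth map z minimizing ||d_x z - p||^2 + ||d_y z - q||^2 with
    forward differences, where p and q are the horizontal and vertical depth
    gradients derived from the normal field (e.g. p = -n_x/n_z, q = -n_y/n_z).
    Sketch only: full rectangular domain, no discontinuity handling."""
    H, W = p.shape

    def diff_matrix(n):
        # (n-1) x n forward-difference operator
        return sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))

    Dx = sp.kron(sp.identity(H), diff_matrix(W))   # differences along rows
    Dy = sp.kron(diff_matrix(H), sp.identity(W))   # differences along columns
    A = sp.vstack([Dx, Dy]).tocsr()
    b = np.concatenate([p[:, :-1].ravel(), q[:-1, :].ravel()])
    z = lsqr(A, b)[0]
    return z.reshape(H, W)                          # depth, up to a constant
```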

Double Q($σ$) and Q($σ, λ$): Unifying Reinforcement Learning Control Algorithms

Title Double Q($σ$) and Q($σ, λ$): Unifying Reinforcement Learning Control Algorithms
Authors Markus Dumke
Abstract Temporal-difference (TD) learning is an important field in reinforcement learning. Sarsa and Q-Learning are among the most used TD algorithms. The Q($\sigma$) algorithm (Sutton and Barto, 2017) unifies both. This paper extends the Q($\sigma$) algorithm to an online multi-step algorithm Q($\sigma, \lambda$) using eligibility traces and introduces Double Q($\sigma$) as the extension of Q($\sigma$) to double learning. Experiments suggest that the new Q($\sigma, \lambda$) algorithm can outperform the classical TD control methods Sarsa($\lambda$), Q($\lambda$) and Q($\sigma$).
Tasks Q-Learning
Published 2017-11-05
URL http://arxiv.org/abs/1711.01569v1
PDF http://arxiv.org/pdf/1711.01569v1.pdf
PWC https://paperswithcode.com/paper/double-q-and-q-unifying-reinforcement
Repo
Framework
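
The one-step Q($\sigma$) backup that the paper extends can be written in a few lines of tabular code: the target interpolates between Sarsa's sampled bootstrap and Expected Sarsa's expected bootstrap. The sketch below shows only that single-step update; the eligibility-trace extension Q($\sigma, \lambda$) and double learning are omitted.

```python
import numpy as np

def q_sigma_update(Q, s, a, r, s_next, a_next, policy_probs, alpha, gamma, sigma):
    """One-step tabular Q(sigma) update. sigma = 1 recovers Sarsa's sampled
    backup; sigma = 0 recovers Expected Sarsa's expected backup (Q-learning
    when the target policy is greedy). Q is a (num_states, num_actions)
    array; policy_probs holds pi(.|s_next)."""
    expected = np.dot(policy_probs, Q[s_next])     # sum_a pi(a|s') Q(s', a)
    sampled = Q[s_next, a_next]                    # Q(s', a')
    target = r + gamma * (sigma * sampled + (1.0 - sigma) * expected)
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```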

DCCO: Towards Deformable Continuous Convolution Operators

Title DCCO: Towards Deformable Continuous Convolution Operators
Authors Joakim Johnander, Martin Danelljan, Fahad Shahbaz Khan, Michael Felsberg
Abstract Discriminative Correlation Filter (DCF) based methods have shown competitive performance on tracking benchmarks in recent years. Generally, DCF based trackers learn a rigid appearance model of the target. However, this reliance on a single rigid appearance model is insufficient in situations where the target undergoes non-rigid transformations. In this paper, we propose a unified formulation for learning a deformable convolution filter. In our framework, the deformable filter is represented as a linear combination of sub-filters. Both the sub-filter coefficients and their relative locations are inferred jointly in our formulation. Experiments are performed on three challenging tracking benchmarks: OTB-2015, TempleColor and VOT2016. Our approach improves the baseline method, leading to performance comparable to state-of-the-art.
Tasks
Published 2017-06-09
URL http://arxiv.org/abs/1706.02888v1
PDF http://arxiv.org/pdf/1706.02888v1.pdf
PWC https://paperswithcode.com/paper/dcco-towards-deformable-continuous
Repo
Framework

Portable Trust: biometric-based authentication and blockchain storage for self-sovereign identity systems

Title Portable Trust: biometric-based authentication and blockchain storage for self-sovereign identity systems
Authors J. S. Hammudoglu, J. Sparreboom, J. I. Rauhamaa, J. K. Faber, L. C. Guerchi, I. P. Samiotis, S. P. Rao, J. A. Pouwelse
Abstract We devised a mobile biometric-based authentication system relying only on local processing. Our Android open-source solution explores the capability of current smartphones to acquire, process and match fingerprints using only their built-in hardware. Our architecture is specifically designed to run completely locally and autonomously, not requiring any cloud service, server, or permissioned access to fingerprint reader hardware. It involves three main stages, starting with fingerprint acquisition using the smartphone camera, followed by a processing pipeline to obtain minutiae features, and a final step for matching against other locally stored fingerprints, based on Oriented FAST and Rotated BRIEF (ORB) descriptors. We obtained a mean matching accuracy of 55%, with the highest value of 67% for thumb fingers. Our ability to capture and process a fingerprint in mere seconds using a smartphone makes this work usable in a wide range of scenarios, for instance offline use in remote regions. This work is specifically designed to be a key building block for a self-sovereign identity solution and to integrate with our permissionless blockchain for identity and key attestation.
Tasks
Published 2017-06-12
URL http://arxiv.org/abs/1706.03744v1
PDF http://arxiv.org/pdf/1706.03744v1.pdf
PWC https://paperswithcode.com/paper/portable-trust-biometric-based-authentication
Repo
Framework
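
The final matching stage is based on ORB descriptors, which OpenCV exposes directly. A hedged sketch of such a matcher is given below; the camera capture and minutiae-oriented preprocessing stages, and the exact score reported in the paper, are not reproduced.

```python
import cv2

def orb_match_score(img_a, img_b, max_features=500, ratio=0.75):
    """ORB keypoints/descriptors on two preprocessed grayscale fingerprint
    images, brute-force Hamming matching with a ratio test, and the number of
    surviving matches as a crude similarity score. Illustrative only."""
    orb = cv2.ORB_create(nfeatures=max_features)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return len(good)
```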

Putting Self-Supervised Token Embedding on the Tables

Title Putting Self-Supervised Token Embedding on the Tables
Authors Marc Szafraniec, Gautier Marti, Philippe Donnat
Abstract Information distribution by electronic messages is a privileged means of transmission for many businesses and individuals, often in the form of plain-text tables. As their number grows, it becomes necessary to use an algorithm to extract the text and numbers rather than relying on a human. Usual methods focus on regular expressions or on a strict structure in the data, but are not efficient when there are many variations, fuzzy structure, or implicit labels. In this paper we introduce SC2T, a totally self-supervised model for constructing vector representations of tokens in semi-structured messages, using character and context levels to address these issues. It can then be used for unsupervised labeling of tokens, or serve as the basis for a semi-supervised information extraction system.
Tasks
Published 2017-07-28
URL http://arxiv.org/abs/1708.04120v2
PDF http://arxiv.org/pdf/1708.04120v2.pdf
PWC https://paperswithcode.com/paper/putting-self-supervised-token-embedding-on
Repo
Framework