Paper Group ANR 728
Personalized Attraction Enhanced Sponsored Search with Multi-task Learning
Title | Personalized Attraction Enhanced Sponsored Search with Multi-task Learning |
Authors | Wei Zhao, Boxuan Zhang, Beidou Wang, Ziyu Guan, Wanxian Guan, Guang Qiu, Wei Ning, Jiming Chen, Hongmin Liu |
Abstract | We study a novel problem of sponsored search (SS) for E-Commerce platforms: how to attract query users to click product advertisements (ads) by presenting them with product features that appeal to them. This not only benefits merchants and the platform, but also improves user experience. The problem is challenging for the following reasons: (1) We need to carefully manipulate the ad content without harming the user search experience. (2) It is difficult to obtain users' explicit feedback on their preferences for product features. (3) Nowadays, a large portion of the search traffic on E-Commerce platforms comes from mobile apps (e.g., nearly 90% in Taobao), where limited screen space makes the problem harder. We focus on the mobile setting and propose to manipulate ad titles by adding a few selling point keywords (SPs) to attract query users. We model this as a personalized attractive SP prediction problem and carry out both large-scale offline evaluation and online A/B tests in Taobao. Our contributions include: (1) We explore various exhibition schemes for SPs. (2) We propose a surrogate for users' explicit feedback on SP preference. (3) We explore multi-task learning and various additional features to boost performance. A variant of our best model has already been deployed in Taobao, leading to a 2% increase in revenue per thousand impressions and a merchant opt-out rate below 4%. |
Tasks | Multi-Task Learning |
Published | 2019-07-24 |
URL | https://arxiv.org/abs/1907.12375v1 |
PDF | https://arxiv.org/pdf/1907.12375v1.pdf |
PWC | https://paperswithcode.com/paper/personalized-attraction-enhanced-sponsored |
Repo | |
Framework | |
Fully Distributed Bayesian Optimization with Stochastic Policies
Title | Fully Distributed Bayesian Optimization with Stochastic Policies |
Authors | Javier Garcia-Barcos, Ruben Martinez-Cantin |
Abstract | Bayesian optimization has become a popular method for high-throughput computing, such as the design of computer experiments or hyperparameter tuning of expensive models, where sample efficiency is mandatory. In these applications, distributed and scalable architectures are a necessity. However, Bayesian optimization is mostly sequential; even parallel variants require certain computations between samples, limiting the parallelization bandwidth. Thompson sampling has previously been applied to distributed Bayesian optimization, but it is known to perform suboptimally compared with other acquisition functions in the sequential setting. In this paper, we present a new method for fully distributed Bayesian optimization which can be combined with any acquisition function. Our approach treats Bayesian optimization as a partially observable Markov decision process. In this context, stochastic policies, such as the Boltzmann policy, have interesting properties which can also be studied for Bayesian optimization. Furthermore, the Boltzmann policy trivially allows a distributed Bayesian optimization implementation with a high level of parallelism and scalability. We present results on several benchmarks and applications that show the performance of our method. |
Tasks | |
Published | 2019-02-26 |
URL | https://arxiv.org/abs/1902.09992v2 |
PDF | https://arxiv.org/pdf/1902.09992v2.pdf |
PWC | https://paperswithcode.com/paper/fully-distributed-bayesian-optimization-with |
Repo | |
Framework | |
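
The distributed trick described in the abstract hinges on replacing the usual argmax over the acquisition function with a Boltzmann (softmax) policy, so each worker can draw its next query point independently. A minimal sketch of that selection step (function name and the temperature parameter are illustrative, not taken from the paper):

```python
import numpy as np

def boltzmann_select(candidates, acq_values, temperature=0.1, rng=None):
    """Pick the next point to evaluate with probability proportional to
    exp(acquisition / temperature), instead of deterministically taking
    the argmax. Independent draws need no coordination between samples,
    so any number of workers can sample in parallel."""
    rng = np.random.default_rng() if rng is None else rng
    logits = np.asarray(acq_values, dtype=float) / temperature
    logits -= logits.max()              # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]
```

A low temperature approaches the greedy sequential behavior, while a higher one spreads the workers across promising regions.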
TonY: An Orchestrator for Distributed Machine Learning Jobs
Title | TonY: An Orchestrator for Distributed Machine Learning Jobs |
Authors | Anthony Hsu, Keqiu Hu, Jonathan Hung, Arun Suresh, Zhe Zhang |
Abstract | Training machine learning (ML) models on large datasets requires considerable computing power. To speed up training, it is typical to distribute training across several machines, often with specialized hardware like GPUs or TPUs. Managing a distributed training job is complex and requires dealing with resource contention, distributed configurations, monitoring, and fault tolerance. In this paper, we describe TonY, an open-source orchestrator for distributed ML jobs built at LinkedIn to address these challenges. |
Tasks | |
Published | 2019-03-24 |
URL | http://arxiv.org/abs/1904.01631v1 |
PDF | http://arxiv.org/pdf/1904.01631v1.pdf |
PWC | https://paperswithcode.com/paper/tony-an-orchestrator-for-distributed-machine |
Repo | |
Framework | |
Rolling Shutter Camera Synchronization with Sub-millisecond Accuracy
Title | Rolling Shutter Camera Synchronization with Sub-millisecond Accuracy |
Authors | Matej Smid, Jiri Matas |
Abstract | A simple method for synchronization of video streams with a precision better than one millisecond is proposed. The method is applicable to any number of rolling shutter cameras whenever a few photographic flashes or other abrupt lighting changes are present in the video. The approach exploits the rolling shutter sensor property that every sensor row starts its exposure with a small delay after the onset of the previous row. The cameras may have different frame rates and resolutions, and need not have overlapping fields of view. The method was validated on five minutes of four streams from an ice hockey match. The recovered transformation maps events visible in all cameras to a reference time with a standard deviation of the temporal error in the range of 0.3 to 0.5 milliseconds. The quality of the synchronization is demonstrated on temporally and spatially overlapping images of a fast-moving puck observed by two cameras. |
Tasks | |
Published | 2019-02-28 |
URL | http://arxiv.org/abs/1902.11084v1 |
PDF | http://arxiv.org/pdf/1902.11084v1.pdf |
PWC | https://paperswithcode.com/paper/rolling-shutter-camera-synchronization-with |
Repo | |
Framework | |
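
The timing model behind the abstract is simple: in a rolling-shutter sensor, each row starts exposing a fixed delay after the row above it, so the row at which a flash first appears timestamps the event with sub-frame precision. A toy version of that mapping (parameter names are ours; the paper additionally estimates the per-stream offsets and delays from the flashes themselves):

```python
def event_time(frame_idx, row_idx, frame_period, row_delay, stream_offset=0.0):
    """Map an abrupt lighting change first visible at a given sensor row
    to a time on a shared reference clock (all quantities in seconds).
    Applied per camera, so frame rates and resolutions may differ."""
    return stream_offset + frame_idx * frame_period + row_idx * row_delay

# e.g. a flash first visible at row 540 of frame 1200 in a 25 fps stream:
t = event_time(1200, 540, frame_period=0.04, row_delay=3.2e-5)
```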
Central Server Free Federated Learning over Single-sided Trust Social Networks
Title | Central Server Free Federated Learning over Single-sided Trust Social Networks |
Authors | Chaoyang He, Conghui Tan, Hanlin Tang, Shuang Qiu, Ji Liu |
Abstract | Federated learning has become increasingly important for modern machine learning, especially in data privacy-sensitive scenarios. Existing federated learning mostly adopts a central-server-based or otherwise centralized architecture. However, in many social network scenarios, centralized federated learning is not applicable (e.g., a central agent or server connecting all users may not exist, or the communication cost to the central server may not be affordable). In this paper, we consider a generic setting: 1) the central server may not exist, and 2) the social network is unidirectional, i.e., of single-sided trust (user A trusts user B, but user B may not trust user A). We propose a central-server-free federated learning algorithm, named Online Push-Sum (OPS), to handle this challenging but generic scenario. A rigorous regret analysis is also provided, which shows very interesting results on how users can benefit from communication with trusted users in the federated learning scenario. This work lays the algorithmic foundation and theoretical guarantees for federated learning in the generic social network scenario. |
Tasks | |
Published | 2019-10-11 |
URL | https://arxiv.org/abs/1910.04956v1 |
PDF | https://arxiv.org/pdf/1910.04956v1.pdf |
PWC | https://paperswithcode.com/paper/central-server-free-federated-learning-over |
Repo | |
Framework | |
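
For intuition, the classical push-sum primitive that OPS builds on works over a directed graph: each node pushes weighted shares of its parameters only along edges that exist (i.e., only to users who trust it), and a scalar weight corrects the bias introduced by asymmetric links. A generic sketch (this is textbook push-sum, not the paper's exact OPS update):

```python
import numpy as np

def push_sum_round(x, w, out_neighbors):
    """One synchronous push-sum round on a directed (single-sided trust) graph.
    x: (n, d) local parameters, w: (n,) push-sum weights (initialized to 1),
    out_neighbors[i]: nodes that accept messages from node i.
    Node i's current estimate is x[i] / w[i]; the ratio converges to the
    network average even though individual links are one-directional."""
    new_x, new_w = np.zeros_like(x), np.zeros_like(w)
    for i in range(len(w)):
        targets = list(out_neighbors[i]) + [i]   # always keep a self-share
        share = 1.0 / len(targets)
        for j in targets:
            new_x[j] += share * x[i]
            new_w[j] += share * w[i]
    return new_x, new_w
```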
Cross-product Penalized Component Analysis (XCAN)
Title | Cross-product Penalized Component Analysis (XCAN) |
Authors | José Camacho, Evrim Acar, Morten A. Rasmussen, Rasmus Bro |
Abstract | Matrix factorization methods are extensively employed to understand complex data. In this paper, we introduce the cross-product penalized component analysis (XCAN), a sparse matrix factorization based on the optimization of a loss function that allows a trade-off between variance maximization and structural preservation. The approach is based on previous developments, notably (i) the Sparse Principal Component Analysis (SPCA) framework based on the LASSO, (ii) extensions of SPCA to constrain both modes of the factorization, like co-clustering or the Penalized Matrix Decomposition (PMD), and (iii) the Group-wise Principal Component Analysis (GPCA) method. The result is a flexible modeling approach that can be used for data exploration in a large variety of problems. We demonstrate its use with applications from different disciplines. |
Tasks | |
Published | 2019-06-28 |
URL | https://arxiv.org/abs/1907.00032v1 |
PDF | https://arxiv.org/pdf/1907.00032v1.pdf |
PWC | https://paperswithcode.com/paper/cross-product-penalized-component-analysis |
Repo | |
Framework | |
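
For reference, the PMD criterion cited in the abstract constrains both modes of a rank-one factorization with LASSO penalties, which is the kind of two-sided sparsity XCAN generalizes (XCAN's own cross-product penalized loss differs and is defined in the paper):

```latex
\max_{u,\,v}\; u^{\top} X v
\quad \text{s.t.} \quad
\|u\|_2 \le 1,\;\; \|v\|_2 \le 1,\;\;
\|u\|_1 \le c_1,\;\; \|v\|_1 \le c_2 .
```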
Direct Image to Point Cloud Descriptors Matching for 6-DOF Camera Localization in Dense 3D Point Cloud
Title | Direct Image to Point Cloud Descriptors Matching for 6-DOF Camera Localization in Dense 3D Point Cloud |
Authors | Uzair Nadeem, Mohammad A. A. K. Jalwana, Mohammed Bennamoun, Roberto Togneri, Ferdous Sohel |
Abstract | We propose a novel concept to directly match feature descriptors extracted from RGB images with feature descriptors extracted from 3D point clouds. We use this concept to localize the position and orientation (pose) of the camera of a query image in dense point clouds. We generate a dataset of matching 2D and 3D descriptors, and use it to train a proposed Descriptor-Matcher algorithm. To localize a query image in a point cloud, we extract 2D keypoints and descriptors from the query image. The Descriptor-Matcher is then used to find corresponding pairs of 2D and 3D keypoints by matching the 2D descriptors with the pre-extracted 3D descriptors of the point cloud. This information is used in a robust pose estimation algorithm to localize the query image in the 3D point cloud. Experiments demonstrate that directly matching 2D and 3D descriptors is not only a viable idea but also achieves competitive accuracy compared to other state-of-the-art approaches for camera pose localization. |
Tasks | Camera Localization, Pose Estimation |
Published | 2019-06-14 |
URL | https://arxiv.org/abs/1906.06064v1 |
PDF | https://arxiv.org/pdf/1906.06064v1.pdf |
PWC | https://paperswithcode.com/paper/direct-image-to-point-cloud-descriptors |
Repo | |
Framework | |
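
Once the Descriptor-Matcher has paired 2D keypoints with 3D points, the final step the abstract mentions is robust pose estimation. A hedged sketch using OpenCV's RANSAC + PnP as a stand-in (the abstract does not name this exact solver):

```python
import numpy as np
import cv2  # OpenCV

def localize_camera(points_3d, points_2d, camera_matrix):
    """Recover the 6-DOF pose of the query camera from matched 2D-3D
    correspondences, rejecting bad matches with RANSAC."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32), points_2d.astype(np.float32),
        camera_matrix, distCoeffs=None, reprojectionError=4.0)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)      # rotation vector -> 3x3 matrix
    return R, tvec, inliers         # world-to-camera rotation/translation
```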
VSSA-NET: Vertical Spatial Sequence Attention Network for Traffic Sign Detection
Title | VSSA-NET: Vertical Spatial Sequence Attention Network for Traffic Sign Detection |
Authors | Yuan Yuan, Zhitong Xiong, Qi Wang |
Abstract | Although traffic sign detection has been studied for years and great progress has been made with the rise of deep learning techniques, many problems remain to be addressed. Complicated real-world traffic scenes pose two main challenges. First, traffic signs are usually small objects, which makes them more difficult to detect than large ones. Second, it is hard to distinguish false targets that resemble real traffic signs in complex street scenes without context information. To handle these problems, we propose a novel end-to-end deep learning method for traffic sign detection in complex environments. Our contributions are as follows: 1) We propose a multi-resolution feature fusion network architecture which exploits densely connected deconvolution layers with skip connections, and can learn more effective features for small objects; 2) We frame traffic sign detection as a spatial sequence classification and regression task, and propose a vertical spatial sequence attention (VSSA) module to gain more context information for better detection performance. To comprehensively evaluate the proposed method, we conduct experiments on several traffic sign datasets as well as a general object detection dataset, and the results show the effectiveness of our proposed method. |
Tasks | Object Detection |
Published | 2019-05-05 |
URL | https://arxiv.org/abs/1905.01583v1 |
PDF | https://arxiv.org/pdf/1905.01583v1.pdf |
PWC | https://paperswithcode.com/paper/vssa-net-vertical-spatial-sequence-attention |
Repo | |
Framework | |
All-Pay Bidding Games on Graphs
Title | All-Pay Bidding Games on Graphs |
Authors | Guy Avni, Rasmus Ibsen-Jensen, Josef Tkadlec |
Abstract | In this paper we introduce and study all-pay bidding games, a class of two-player, zero-sum games on graphs. The game proceeds as follows. We place a token on some vertex in the graph and assign budgets to the two players. In each turn, each player submits a sealed legal bid (non-negative and below their remaining budget), which is deducted from their budget, and the highest bidder moves the token to an adjacent vertex. The game ends once a sink is reached, and Player 1 pays Player 2 the outcome associated with that sink. The players attempt to maximize their expected outcome. Our games model settings where effort (of no inherent value) needs to be invested in an ongoing and stateful manner. On the negative side, we show that even in simple games on DAGs, optimal strategies may require a distribution over bids with infinite support. A central quantity in bidding games is the ratio of the players' budgets. On the positive side, we show a simple FPTAS for DAGs that, for each budget ratio, outputs an approximation of the optimal strategy for that ratio. We also implement it, show that it performs well, and use it to suggest interesting properties of these games. Then, given an outcome $c$, we show an algorithm for finding the necessary and sufficient initial ratio for guaranteeing outcome $c$ with probability $1$, together with a strategy that ensures it. Finally, while the general case has not previously been studied, solving the specific game in which Player 1 wins iff he wins the first two auctions has long been stated as an open question, which we solve. |
Tasks | |
Published | 2019-11-19 |
URL | https://arxiv.org/abs/1911.08360v1 |
PDF | https://arxiv.org/pdf/1911.08360v1.pdf |
PWC | https://paperswithcode.com/paper/all-pay-bidding-games-on-graphs |
Repo | |
Framework | |
Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks
Title | Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks |
Authors | Jörg Wagner, Jan Mathias Köhler, Tobias Gindele, Leon Hetzel, Jakob Thaddäus Wiedemer, Sven Behnke |
Abstract | To verify and validate networks, it is essential to gain insight into their decisions and limitations, as well as possible shortcomings of the training data. In this work, we propose a post-hoc, optimization-based visual explanation method which highlights the evidence in the input image for a specific prediction. Our approach is based on a novel technique to defend against adversarial evidence (i.e., faulty evidence due to artefacts) by filtering gradients during optimization. The defense does not depend on human-tuned parameters. It enables explanations which are both fine-grained and preserve the characteristics of images, such as edges and colors. The explanations are interpretable, suited for visualizing detailed evidence, and can be tested as they are valid model inputs. We qualitatively and quantitatively evaluate our approach on a multitude of models and datasets. |
Tasks | |
Published | 2019-08-07 |
URL | https://arxiv.org/abs/1908.02686v1 |
PDF | https://arxiv.org/pdf/1908.02686v1.pdf |
PWC | https://paperswithcode.com/paper/interpretable-and-fine-grained-visual-1 |
Repo | |
Framework | |
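
The abstract describes a post-hoc, optimization-based explanation: optimize a perturbation of the input so that only the evidence needed for the prediction remains. A generic sketch of that outer loop in PyTorch (the paper's actual contribution, the parameter-free gradient-filtering defense, is deliberately omitted here; all names are illustrative):

```python
import torch

def explain(model, image, target_class, steps=200, lr=0.1, sparsity=1e-3):
    """Learn a per-pixel soft mask that preserves the model's evidence for
    `target_class` while suppressing everything else. `image` is a (C, H, W)
    tensor; the returned mask highlights the supporting pixels."""
    logit_mask = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([logit_mask], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mask = torch.sigmoid(logit_mask)
        score = model((image * mask).unsqueeze(0))[0, target_class]
        loss = -score + sparsity * mask.sum()   # keep the mask sparse
        loss.backward()
        opt.step()
    return torch.sigmoid(logit_mask).detach()
```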
Text-to-Image Synthesis Based on Machine Generated Captions
Title | Text-to-Image Synthesis Based on Machine Generated Captions |
Authors | Marco Menardi, Alex Falcon, Saida S. Mohamed, Lorenzo Seidenari, Giuseppe Serra, Alberto Del Bimbo, Carlo Tasso |
Abstract | Text-to-image synthesis refers to the process of automatically generating a photo-realistic image from a given text, and is revolutionizing many real-world applications. Performing this process requires datasets of captioned images, in which each image is associated with one (or more) captions describing it. Despite the abundance of uncaptioned image datasets, the number of captioned datasets is limited. To address this issue, in this paper we propose an approach capable of generating images from a given text using conditional GANs trained on uncaptioned image datasets. In particular, uncaptioned images are fed to an Image Captioning Module to generate the descriptions. Then, the GAN Module is trained on both the input image and the machine-generated caption. To evaluate the results, the performance of our solution is compared with that of an unconditional GAN. For the experiments, we chose the uncaptioned LSUN bedroom dataset. The results obtained in our study are preliminary but promising. |
Tasks | Image Captioning, Image Generation |
Published | 2019-10-09 |
URL | https://arxiv.org/abs/1910.04056v1 |
PDF | https://arxiv.org/pdf/1910.04056v1.pdf |
PWC | https://paperswithcode.com/paper/text-to-image-synthesis-based-on-machine |
Repo | |
Framework | |
DIST: Rendering Deep Implicit Signed Distance Function with Differentiable Sphere Tracing
Title | DIST: Rendering Deep Implicit Signed Distance Function with Differentiable Sphere Tracing |
Authors | Shaohui Liu, Yinda Zhang, Songyou Peng, Boxin Shi, Marc Pollefeys, Zhaopeng Cui |
Abstract | We propose a differentiable sphere tracing algorithm to bridge the gap between inverse graphics methods and the recently proposed deep-learning-based implicit signed distance function. Due to the nature of the implicit function, the rendering process requires a tremendous number of function queries, which is particularly problematic when the function is represented as a neural network. We optimize both the forward and backward passes of our rendering layer so that it runs efficiently with affordable memory consumption on a commodity graphics card. Our rendering method is fully differentiable, so losses can be computed directly on the rendered 2D observations and the gradients propagated backward to optimize the 3D geometry. We show that our rendering method can effectively reconstruct accurate 3D shapes from various inputs, such as sparse depth and multi-view images, through inverse optimization. With geometry-based reasoning, our 3D shape prediction methods show excellent generalization capability and robustness against various types of noise. |
Tasks | |
Published | 2019-11-29 |
URL | https://arxiv.org/abs/1911.13225v1 |
PDF | https://arxiv.org/pdf/1911.13225v1.pdf |
PWC | https://paperswithcode.com/paper/dist-rendering-deep-implicit-signed-distance |
Repo | |
Framework | |
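
The core loop is classical sphere tracing: because an SDF value is the distance to the nearest surface, marching each ray forward by exactly that value can never overshoot. A bare-bones differentiable version (the paper's forward/backward memory and speed optimizations are omitted; names are illustrative):

```python
import torch

def sphere_trace(sdf, origins, directions, n_steps=50, eps=1e-4):
    """March rays through a signed distance field. `sdf` maps (N, 3) points
    to (N,) distances, e.g. a neural network; `origins`/`directions` are
    (N, 3) with unit-norm directions. Plain tensor ops, so gradients flow
    from the returned surface points back into the SDF parameters."""
    t = torch.zeros(origins.shape[0], device=origins.device)
    for _ in range(n_steps):
        points = origins + t.unsqueeze(-1) * directions
        d = sdf(points)                  # safe step size for every ray
        t = t + d
        if bool((d.abs() < eps).all()):  # all rays have converged
            break
    return origins + t.unsqueeze(-1) * directions
```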
Tight Regret Bounds for Infinite-armed Linear Contextual Bandits
Title | Tight Regret Bounds for Infinite-armed Linear Contextual Bandits |
Authors | Yingkai Li, Yining Wang, Yuan Zhou |
Abstract | The linear contextual bandit is a class of sequential decision-making problems with important applications in recommendation systems, online advertising, healthcare, and other machine learning tasks. While there is much prior research, tight regret bounds for linear contextual bandits with infinite action sets remain open. In this paper, we prove a regret upper bound of $O(\sqrt{d^2T\log T})\times \mathrm{poly}(\log\log T)$, where $d$ is the domain dimension and $T$ is the time horizon. Our upper bound matches the previous lower bound of $\Omega(\sqrt{d^2 T\log T})$ up to iterated-logarithmic terms. |
Tasks | Decision Making, Multi-Armed Bandits, Recommendation Systems |
Published | 2019-05-04 |
URL | https://arxiv.org/abs/1905.01435v1 |
PDF | https://arxiv.org/pdf/1905.01435v1.pdf |
PWC | https://paperswithcode.com/paper/tight-regret-bounds-for-infinite-armed-linear |
Repo | |
Framework | |
A Measurement of Social Capital in an Open Source Software Project
Title | A Measurement of Social Capital in an Open Source Software Project |
Authors | Saad Alqithami, Musaad Alzahrani, Fahad Alghamdi, Rahmat Budiarto, Henry Hexmoor |
Abstract | The paper develops an understanding of social capital in organizations that are open-membership multi-agent systems, with an emphasis in our formulation on the dynamic network of social interactions that, in part, elucidates evolving structures and impromptu network topologies. The paper therefore models an open source project as an organizational network, provides definitions of social capital for this organizational network, and formulates a mechanism to optimize social capital toward achieving its goal, namely optimized productivity. A case study of the open source Apache Hadoop project is considered and empirically evaluated. We analyze how social capital can be created within this type of organization and derive a measurement of its value. Finally, we examine whether the social capital of the organizational network is proportional to its productivity. |
Tasks | |
Published | 2019-11-22 |
URL | https://arxiv.org/abs/1911.10283v1 |
PDF | https://arxiv.org/pdf/1911.10283v1.pdf |
PWC | https://paperswithcode.com/paper/a-measurement-of-social-capital-in-an-open |
Repo | |
Framework | |
dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs
Title | dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs |
Authors | Kede Ma, Wentao Liu, Tongliang Liu, Zhou Wang, Dacheng Tao |
Abstract | Objective assessment of image quality is fundamentally important in many image processing tasks. In this work, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (whose dimensionality is the number of image pixels) and the extremely limited reliable ground-truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved, as confirmed by the group MAximum Differentiation (gMAD) competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DILs). The resulting DIL inferred quality (dilIQ) index achieves an additional performance gain. |
Tasks | Blind Image Quality Assessment, Image Quality Assessment, Learning-To-Rank |
Published | 2019-04-13 |
URL | http://arxiv.org/abs/1904.06505v1 |
PDF | http://arxiv.org/pdf/1904.06505v1.pdf |
PWC | https://paperswithcode.com/paper/dipiq-blind-image-quality-assessment-by |
Repo | |
Framework | |
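
The training signal described above is pairwise: for each quality-discriminable image pair, the model only needs to score the better image above the worse one. RankNet does this with a logistic loss on the score difference; a sketch where the pair's perceptual uncertainty enters as a soft target (the exact weighting in dipIQ may differ):

```python
import torch
import torch.nn.functional as F

def ranknet_loss(score_better, score_worse, certainty=1.0):
    """RankNet pairwise loss: model P(better > worse) = sigmoid(s_b - s_w)
    and apply cross-entropy against `certainty`, the probability that the
    pair is ordered correctly (1.0 for a fully reliable DIP)."""
    diff = score_better - score_worse
    target = torch.full_like(diff, certainty)
    return F.binary_cross_entropy_with_logits(diff, target)

# usage: scores come from the quality model f(image) -> scalar
# loss = ranknet_loss(f(img_high_quality), f(img_low_quality), certainty=0.9)
```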