January 27, 2020

3179 words 15 mins read

Paper Group ANR 1297

Network Offloading Policies for Cloud Robotics: a Learning-based Approach. Learning Instance Occlusion for Panoptic Segmentation. BitcoinHeist: Topological Data Analysis for Ransomware Detection on the Bitcoin Blockchain. Learning to Advertise for Organic Traffic Maximization in E-Commerce Product Feeds. A Study on Agreement in PICO Span Annotation …

Network Offloading Policies for Cloud Robotics: a Learning-based Approach

Title Network Offloading Policies for Cloud Robotics: a Learning-based Approach
Authors Sandeep Chinchali, Apoorva Sharma, James Harrison, Amine Elhafsi, Daniel Kang, Evgenya Pergament, Eyal Cidon, Sachin Katti, Marco Pavone
Abstract Today’s robotic systems are increasingly turning to computationally expensive models such as deep neural networks (DNNs) for tasks like localization, perception, planning, and object detection. However, resource-constrained robots, like low-power drones, often have insufficient on-board compute resources or power reserves to scalably run the most accurate, state-of-the-art neural network compute models. Cloud robotics allows mobile robots the benefit of offloading compute to centralized servers if they are uncertain locally or want to run more accurate, compute-intensive models. However, cloud robotics comes with a key, often understated cost: communicating with the cloud over congested wireless networks may result in latency or loss of data. In fact, sending high data-rate video or LIDAR from multiple robots over congested networks can lead to prohibitive delay for real-time applications, which we measure experimentally. In this paper, we formulate a novel Robot Offloading Problem — how and when should robots offload sensing tasks, especially if they are uncertain, to improve accuracy while minimizing the cost of cloud communication? We formulate offloading as a sequential decision making problem for robots, and propose a solution using deep reinforcement learning. In both simulations and hardware experiments using state-of-the-art vision DNNs, our offloading strategy improves vision task performance by 1.3-2.6x over benchmark offloading strategies, allowing robots the potential to significantly transcend their on-board sensing accuracy with only a limited cost of cloud communication.
Tasks Decision Making, Object Detection
Published 2019-02-15
URL http://arxiv.org/abs/1902.05703v1
PDF http://arxiv.org/pdf/1902.05703v1.pdf
PWC https://paperswithcode.com/paper/network-offloading-policies-for-cloud
Repo
Framework
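The abstract above frames offloading as a sequential decision problem solved with deep reinforcement learning. As a toy illustration only (not the paper's actual formulation), even a tabular learner on a hypothetical confidence-bucketed state space recovers the intuitive policy of offloading when on-board confidence is low; the reward shape and `OFFLOAD_COST` below are invented for the sketch:

```python
import random

# Toy offloading problem, for illustration only (not the paper's model):
# state = a bucket of the robot's on-board confidence (0 = low .. 4 = high);
# action 0 = trust the on-board model, action 1 = offload to the cloud.
ACTIONS = (0, 1)
OFFLOAD_COST = 0.3  # assumed communication penalty

def reward(conf_bucket, action):
    # On-board accuracy improves with confidence; the cloud model is
    # accurate regardless, but pays the communication cost.
    onboard_acc = 0.5 + 0.1 * conf_bucket
    return 1.0 - OFFLOAD_COST if action == 1 else onboard_acc

def q_learning(episodes=5000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randrange(5)
        if rng.random() < eps:
            a = rng.choice(ACTIONS)                    # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])  # exploit
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])  # one-step update
    return q

q = q_learning()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(5)}
# Learned policy: offload in low-confidence states, stay on-board otherwise.
```

The real problem is of course richer (video/LIDAR streams, network congestion, deep function approximation), but the trade-off the policy learns is the same.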

Learning Instance Occlusion for Panoptic Segmentation

Title Learning Instance Occlusion for Panoptic Segmentation
Authors Justin Lazarow, Kwonjoon Lee, Kunyu Shi, Zhuowen Tu
Abstract Panoptic segmentation requires segments of both “things” (countable object instances) and “stuff” (uncountable and amorphous regions) within a single output. A common approach involves the fusion of instance segmentation (for “things”) and semantic segmentation (for “stuff”) into a non-overlapping placement of segments, and resolves occlusions (or overlaps). However, ordering instances by detection confidence does not correlate well with natural occlusion relationships. To resolve this issue, we propose a branch that is tasked with modeling how two instance masks should overlap one another as a binary relation. Our method, named OCFusion, is lightweight but particularly effective on the “things” portion of the standard panoptic segmentation benchmarks, bringing significant gains (up to +3.2 PQ^Th and +2.0 overall PQ) on the COCO dataset — only requiring a short amount of fine-tuning. OCFusion is trained with the ground truth relation derived automatically from the existing dataset annotations. We obtain state-of-the-art results on COCO and show competitive results on the Cityscapes panoptic segmentation benchmark.
Tasks Instance Segmentation, Panoptic Segmentation, Semantic Segmentation
Published 2019-06-13
URL https://arxiv.org/abs/1906.05896v3
PDF https://arxiv.org/pdf/1906.05896v3.pdf
PWC https://paperswithcode.com/paper/learning-instance-occlusion-for-panoptic
Repo
Framework
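The fusion step described above can be sketched in a few lines: instances are rasterized in detection-score order, but contested pixels are settled by querying a pairwise occlusion relation rather than always favoring the higher-scoring mask. Here `occludes` is a stand-in for OCFusion's learned binary relation, and masks as pixel sets are a simplification:

```python
def fuse(masks, occludes):
    """Fuse instance masks into a non-overlapping layout.

    masks: dict of instance id -> set of pixels, iterated in
    detection-score order; occludes(a, b) stands in for the learned
    relation "instance a should appear in front of instance b".
    """
    canvas = {}  # pixel -> owning instance id
    for inst_id, mask in masks.items():
        for p in mask:
            # A lower-scoring instance may still win a contested pixel
            # if the occlusion relation says it sits in front.
            if p not in canvas or occludes(inst_id, canvas[p]):
                canvas[p] = inst_id
    return canvas
```

With confidence-only fusion, a tie detected at lower score than the person wearing it would lose all of its overlapping pixels; the occlusion query restores them.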

BitcoinHeist: Topological Data Analysis for Ransomware Detection on the Bitcoin Blockchain

Title BitcoinHeist: Topological Data Analysis for Ransomware Detection on the Bitcoin Blockchain
Authors Cuneyt Gurcan Akcora, Yitao Li, Yulia R. Gel, Murat Kantarcioglu
Abstract The proliferation of cryptocurrencies (e.g., Bitcoin) that allow pseudo-anonymous transactions has made it easier for ransomware developers to demand ransom by encrypting sensitive user data. The recently revealed ransomware attacks have already resulted in significant economic losses and societal harm across different sectors, ranging from local governments to health care. Most modern ransomware use Bitcoin for payments. However, although Bitcoin transactions are permanently recorded and publicly available, current approaches for detecting ransomware depend only on a couple of heuristics and/or tedious information gathering steps (e.g., running ransomware to collect ransomware-related Bitcoin addresses). To our knowledge, none of the previous approaches have employed advanced data analytics techniques to automatically detect ransomware-related transactions and malicious Bitcoin addresses. By capitalizing on recent advances in topological data analysis, we propose an efficient and tractable data analytics framework to automatically detect new malicious addresses in a ransomware family, given only limited records of previous transactions. Furthermore, our proposed techniques exhibit high utility in detecting the emergence of new ransomware families, that is, ransomware with no previous records of transactions. Using existing known ransomware data sets, we show that our proposed methodology provides significant improvements in precision and recall for ransomware transaction detection compared to existing heuristic-based approaches, and can be utilized to automate ransomware detection.
Tasks Topological Data Analysis
Published 2019-06-19
URL https://arxiv.org/abs/1906.07852v1
PDF https://arxiv.org/pdf/1906.07852v1.pdf
PWC https://paperswithcode.com/paper/bitcoinheist-topological-data-analysis-for
Repo
Framework

Learning to Advertise for Organic Traffic Maximization in E-Commerce Product Feeds

Title Learning to Advertise for Organic Traffic Maximization in E-Commerce Product Feeds
Authors Dagui Chen, Junqi Jin, Weinan Zhang, Fei Pan, Lvyin Niu, Chuan Yu, Jun Wang, Han Li, Jian Xu, Kun Gai
Abstract Most e-commerce product feeds provide blended results of advertised products and recommended products to consumers. The underlying advertising and recommendation platforms share a similar, if not exactly the same, set of candidate products. Consumers’ behaviors on the advertised results constitute part of the recommendation model’s training data and therefore can influence the recommended results. We refer to this process as Leverage. Considering this mechanism, we propose a novel perspective that advertisers can strategically bid through the advertising platform to optimize their recommended organic traffic. By analyzing real-world data, we first explain the principles of the Leverage mechanism, i.e., its dynamic models. Then we introduce a novel Leverage optimization problem and formulate it as a Markov Decision Process. To deal with the sample complexity challenge in model-free reinforcement learning, we propose a novel Hybrid Training Leverage Bidding (HTLB) algorithm which combines real-world samples and emulator-generated samples to boost the learning speed and stability. Our offline experiments as well as the results from the online deployment demonstrate the superior performance of our approach.
Tasks
Published 2019-08-19
URL https://arxiv.org/abs/1908.06698v1
PDF https://arxiv.org/pdf/1908.06698v1.pdf
PWC https://paperswithcode.com/paper/learning-to-advertise-for-organic-traffic
Repo
Framework
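The hybrid-sample idea in HTLB (mixing scarce real-world transitions with cheap emulator rollouts to tame sample complexity) can be sketched as a batch sampler. The `emulator` callable and the 25% real fraction below are assumptions for illustration, not the paper's actual interface or ratio:

```python
import random

def hybrid_batch(real_samples, emulator, batch_size=8, real_frac=0.25, seed=0):
    """Mix scarce real transitions with emulator-generated ones.

    `emulator` is an assumed zero-argument callable returning one
    synthetic transition; `real_frac` is an illustrative mixing ratio.
    """
    rng = random.Random(seed)
    n_real = max(1, int(batch_size * real_frac))
    batch = rng.choices(real_samples, k=n_real)  # sample with replacement
    batch += [emulator() for _ in range(batch_size - n_real)]
    rng.shuffle(batch)
    return batch
```

Training the bidding policy on such mixed batches lets the emulator supply volume while the real samples keep the learner anchored to actual platform dynamics.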

A Study on Agreement in PICO Span Annotations

Title A Study on Agreement in PICO Span Annotations
Authors Grace E. Lee, Aixin Sun
Abstract In evidence-based medicine, relevance of medical literature is determined by predefined relevance conditions. The conditions are defined based on PICO elements, namely, Patient, Intervention, Comparator, and Outcome. Hence, PICO annotations in medical literature are essential for automatic relevant document filtering. However, defining boundaries of text spans for PICO elements is not straightforward. In this paper, we study the agreement of PICO annotations made by multiple human annotators, including both experts and non-experts. Agreements are estimated by a standard span agreement (i.e., matching both labels and boundaries of text spans), and two types of relaxed span agreement (i.e., matching labels without guaranteeing matching boundaries of spans). Based on the analysis, we report two observations: (i) Boundaries of PICO span annotations by individual human annotators are very diverse. (ii) Despite the disagreement in span boundaries, general areas of the span annotations are broadly agreed upon by annotators. Our results suggest that applying a standard agreement alone may underestimate the agreement of PICO spans, and that adopting both standard and relaxed agreements is more suitable for PICO span evaluation.
Tasks
Published 2019-04-21
URL http://arxiv.org/abs/1904.09557v1
PDF http://arxiv.org/pdf/1904.09557v1.pdf
PWC https://paperswithcode.com/paper/a-study-on-agreement-in-pico-span-annotations
Repo
Framework
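The two agreement notions compared above can be made concrete. Below, a span is a (label, start, end) tuple; `standard_agreement` requires exact label and boundary matches, while `relaxed_agreement` counts a span as matched if any same-label span in the other annotation overlaps it. This is a simplified reading of the metrics, not the paper's exact definitions:

```python
def standard_agreement(spans_a, spans_b):
    """Exact span agreement: label and both boundaries must match."""
    a, b = set(spans_a), set(spans_b)
    return len(a & b) / max(len(a | b), 1)

def relaxed_agreement(spans_a, spans_b):
    """Relaxed agreement: a span counts as matched if any same-label
    span in the other annotation overlaps it at all."""
    def overlaps(s, t):  # span = (label, start, end), end exclusive
        return s[0] == t[0] and s[1] < t[2] and t[1] < s[2]
    matched = sum(1 for s in spans_a if any(overlaps(s, t) for t in spans_b))
    matched += sum(1 for t in spans_b if any(overlaps(t, s) for s in spans_a))
    return matched / max(len(spans_a) + len(spans_b), 1)
```

Two annotators who mark overlapping but differently bounded Patient spans score 0 under the standard metric and 1 under the relaxed one, which is exactly the gap the paper's two observations describe.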

AutoCross: Automatic Feature Crossing for Tabular Data in Real-World Applications

Title AutoCross: Automatic Feature Crossing for Tabular Data in Real-World Applications
Authors Yuanfei Luo, Mengshuo Wang, Hao Zhou, Quanming Yao, WeiWei Tu, Yuqiang Chen, Qiang Yang, Wenyuan Dai
Abstract Feature crossing captures interactions among categorical features and is useful to enhance learning from tabular data in real-world businesses. In this paper, we present AutoCross, an automatic feature crossing tool provided by 4Paradigm to its customers, ranging from banks and hospitals to Internet corporations. By performing beam search in a tree-structured space, AutoCross enables efficient generation of high-order cross features, which has not yet been explored by existing works. Additionally, we propose successive mini-batch gradient descent and multi-granularity discretization to further improve efficiency and effectiveness, while ensuring simplicity so that no machine learning expertise or tedious hyper-parameter tuning is required. Furthermore, the algorithms are designed to reduce the computational, transmission, and storage costs involved in distributed computing. Experimental results on both benchmark and real-world business datasets demonstrate the effectiveness and efficiency of AutoCross. It is shown that AutoCross can significantly enhance the performance of both linear and deep models.
Tasks
Published 2019-04-29
URL https://arxiv.org/abs/1904.12857v2
PDF https://arxiv.org/pdf/1904.12857v2.pdf
PWC https://paperswithcode.com/paper/autocross-automatic-feature-crossing-for
Repo
Framework
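The beam search over a tree-structured space of cross features can be sketched as follows; `score` stands in for AutoCross's black-box evaluation of a candidate feature set (the actual system uses successive mini-batch gradient descent for that step, which this sketch omits):

```python
def beam_search_crosses(base_features, score, max_order=3, beam=2):
    """Grow cross features (frozensets of base features) order by order,
    keeping only the `beam` best candidates at each level."""
    selected = list(base_features)
    frontier = [frozenset([f]) for f in base_features]
    for _ in range(max_order - 1):
        candidates = set()
        for node in frontier:  # extend each surviving node by one feature
            for f in base_features:
                if f not in node:
                    candidates.add(node | {f})
        ranked = sorted(candidates, key=lambda c: score(selected + [c]),
                        reverse=True)
        frontier = ranked[:beam]
        if frontier:
            selected.append(frontier[0])  # keep the best cross of this order
    return selected
```

Because only `beam` nodes survive per level, the search visits a tiny slice of the exponential cross-feature space while still reaching high-order crosses.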

A multi-label classification method using a hierarchical and transparent representation for paper-reviewer recommendation

Title A multi-label classification method using a hierarchical and transparent representation for paper-reviewer recommendation
Authors Dong Zhang, Shu Zhao, Zhen Duan, Jie Chen, Yangping Zhang, Jie Tang
Abstract The paper-reviewer recommendation task is of significant academic importance for conference chairs and journal editors, yet recommending reviewers for submitted papers effectively and accurately remains a challenging task. In this paper, we propose a Multi-Label Classification method using a hierarchical and transparent Representation, named Hiepar-MLC. Further, we propose a simple multi-label-based reviewer assignment strategy, MLBRA, to select appropriate reviewers. We also explore paper-reviewer recommendation at a coarser granularity.
Tasks Multi-Label Classification
Published 2019-12-19
URL https://arxiv.org/abs/1912.08976v1
PDF https://arxiv.org/pdf/1912.08976v1.pdf
PWC https://paperswithcode.com/paper/a-multi-label-classification-method-using-a
Repo
Framework

Analyzing ASR pretraining for low-resource speech-to-text translation

Title Analyzing ASR pretraining for low-resource speech-to-text translation
Authors Mihaela C. Stoian, Sameer Bansal, Sharon Goldwater
Abstract Previous work has shown that for low-resource source languages, automatic speech-to-text translation (AST) can be improved by pretraining an end-to-end model on automatic speech recognition (ASR) data from a high-resource language. However, it is not clear what factors (e.g., language relatedness or size of the pretraining data) yield the biggest improvements, or whether pretraining can be effectively combined with other methods such as data augmentation. Here, we experiment with pretraining on datasets of varying sizes, including languages related and unrelated to the AST source language. We find that the best predictor of final AST performance is the word error rate of the pretrained ASR model, and that differences in ASR/AST performance correlate with how phonetic information is encoded in the later RNN layers of our model. We also show that pretraining and data augmentation yield complementary benefits for AST.
Tasks Data Augmentation, Speech Recognition
Published 2019-10-23
URL https://arxiv.org/abs/1910.10762v2
PDF https://arxiv.org/pdf/1910.10762v2.pdf
PWC https://paperswithcode.com/paper/analyzing-asr-pretraining-for-low-resource
Repo
Framework
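Since the paper's headline finding is that the pretrained ASR model's word error rate best predicts downstream AST performance, the metric itself is worth recalling: word-level edit distance divided by reference length.

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,              # deletion
                         cur[j - 1] + 1,           # insertion
                         prev[j - 1] + (r != h))   # substitution
        prev = cur
    return prev[-1] / max(len(ref), 1)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why it is an error rate rather than an accuracy.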

Retrosynthesis with Attention-Based NMT Model and Chemical Analysis of the “Wrong” Predictions

Title Retrosynthesis with Attention-Based NMT Model and Chemical Analysis of the “Wrong” Predictions
Authors Hongliang Duan, Ling Wang, Chengyun Zhang, Jianjun Li
Abstract We cast retrosynthesis as a machine translation problem using Tensor2Tensor, an entirely attention-based and fully data-driven model. Given a data set comprising about 50,000 diverse reactions extracted from USPTO patents, the model significantly outperforms a seq2seq model, achieving 54.1% top-1 accuracy versus 34.7%. To yield better results, parameters such as batch size and training time are thoroughly investigated to train the model. Additionally, we offer a novel insight into the causes of grammatically invalid SMILES, and conduct a test in which experienced chemists pick out and analyze the “wrong” predictions that may be chemically plausible but differ from the ground truth. In fact, the effectiveness of our model is underestimated, and the “true” top-1 accuracy can reach 64.6%.
Tasks Machine Translation
Published 2019-08-02
URL https://arxiv.org/abs/1908.00727v1
PDF https://arxiv.org/pdf/1908.00727v1.pdf
PWC https://paperswithcode.com/paper/retrosynthesis-with-attention-based-nmt-model
Repo
Framework
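On the question of grammatically invalid SMILES, a very rough syntactic screen can catch some malformed model outputs. This is not chemical validation (a real check needs a toolkit such as RDKit, and two-digit `%nn` ring closures are ignored here); it is only an illustrative filter:

```python
def plausible_smiles(s):
    """Rough syntactic screen for a SMILES string: balanced parentheses
    and brackets, and every single-digit ring-closure label opened and
    closed. Not chemical validation; %nn ring closures are ignored."""
    depth = square = 0
    rings = {}
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
        elif ch == "[":
            square += 1
        elif ch == "]":
            square -= 1
            if square < 0:
                return False
        elif ch.isdigit() and square == 0:  # ring-bond label outside brackets
            rings[ch] = rings.get(ch, 0) + 1
    return depth == 0 and square == 0 and all(n % 2 == 0 for n in rings.values())
```

A string can pass this screen and still be chemically nonsense, which is exactly why the paper has chemists inspect the "wrong" predictions by hand.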

Sliced generative models

Title Sliced generative models
Authors Szymon Knop, Marcin Mazur, Jacek Tabor, Igor Podolak, Przemysław Spurek
Abstract In this paper we discuss a class of AutoEncoder-based generative models built on a one-dimensional sliced approach. The idea is to reduce the discrimination between samples to the one-dimensional case. Our experiments show that the methods can be divided into two groups. The first consists of methods that modify standard normality tests, while the second is based on classical distances between samples. It turns out that both groups yield valid generative models, but the second one gives a slightly faster decrease rate of the Fréchet Inception Distance (FID).
Tasks
Published 2019-01-29
URL http://arxiv.org/abs/1901.10417v1
PDF http://arxiv.org/pdf/1901.10417v1.pdf
PWC https://paperswithcode.com/paper/sliced-generative-models
Repo
Framework
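The one-dimensional reduction can be sketched with random projections: project both samples onto random directions and compare the sorted projections, in the "classical distances" flavor of the second group (a sliced Wasserstein-1 style comparison). Equal sample sizes are assumed for simplicity:

```python
import math
import random

def sliced_distance(xs, ys, n_projections=50, seed=0):
    """Average 1-D transport cost between two equal-size samples over
    random projection directions (a sliced Wasserstein-1 style sketch)."""
    rng = random.Random(seed)
    dim = len(xs[0])
    total = 0.0
    for _ in range(n_projections):
        v = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(c * c for c in v)) or 1.0
        v = [c / norm for c in v]  # random unit direction
        px = sorted(sum(vi * xi for vi, xi in zip(v, p)) for p in xs)
        py = sorted(sum(vi * yi for vi, yi in zip(v, p)) for p in ys)
        # In one dimension, optimal transport simply pairs sorted samples.
        total += sum(abs(a - b) for a, b in zip(px, py)) / len(px)
    return total / n_projections
```

The sorting trick is the whole point of slicing: the otherwise hard multivariate comparison becomes a sequence of trivial one-dimensional ones.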

Privacy-Preserving Inference in Machine Learning Services Using Trusted Execution Environments

Title Privacy-Preserving Inference in Machine Learning Services Using Trusted Execution Environments
Authors Krishna Giri Narra, Zhifeng Lin, Yongqin Wang, Keshav Balasubramaniam, Murali Annavaram
Abstract This work presents Origami, which provides privacy-preserving inference for large deep neural network (DNN) models through a combination of enclave execution and cryptographic blinding, interspersed with accelerator-based computation. Origami partitions the ML model into multiple partitions. The first partition receives the encrypted user input within an SGX enclave. The enclave decrypts the input and then applies cryptographic blinding to the input data and the model parameters. Cryptographic blinding is a technique that adds noise to obfuscate data. Origami sends the obfuscated data for computation to an untrusted GPU/CPU. The blinding and de-blinding factors are kept private by the SGX enclave, thereby preventing any adversary from denoising the data when the computation is offloaded to a GPU/CPU. The computed output is returned to the enclave, which decodes the computation on noisy data using the unblinding factors privately stored within SGX. This process may be repeated for each DNN layer, as has been done in the prior work Slalom. However, the overhead of blinding and unblinding the data is a limiting factor to scalability. Origami relies on the empirical observation that the feature maps after the first several layers cannot be used, even by a powerful conditional GAN adversary, to reconstruct the input. Hence, Origami dynamically switches to executing the rest of the DNN layers directly on an accelerator without needing any further cryptographic blinding intervention to preserve privacy. We empirically demonstrate that using Origami, a conditional GAN adversary, even with an unlimited inference budget, cannot reconstruct the input. We implement and demonstrate the performance gains of Origami using the VGG-16 and VGG-19 models. Compared to running the entire VGG-19 model within SGX, Origami improves the speedup of private inference from the 11x achieved by Slalom to 15.1x.
Tasks Denoising
Published 2019-12-07
URL https://arxiv.org/abs/1912.03485v1
PDF https://arxiv.org/pdf/1912.03485v1.pdf
PWC https://paperswithcode.com/paper/privacy-preserving-inference-in-machine
Repo
Framework
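The blinding round-trip is easiest to see for a single linear layer, where additive blinding commutes with the computation: the enclave ships W(x + r) to the untrusted device and privately subtracts W r. This sketch blinds only the input, whereas Origami also blinds model parameters and handles non-linear layers:

```python
import random

def matvec(W, v):
    return [sum(w * c for w, c in zip(row, v)) for row in W]

def blind_linear(W, x, rng):
    """Compute y = W @ x with the true input hidden from the untrusted
    device. Sketch of the blinding round-trip for one linear layer only;
    the real system also blinds W and covers non-linear layers."""
    r = [rng.gauss(0, 1) for _ in x]             # blinding noise (enclave-private)
    blinded = [xi + ri for xi, ri in zip(x, r)]  # what leaves the enclave
    untrusted = matvec(W, blinded)               # offloaded to GPU/CPU
    unblind = matvec(W, r)                       # computed privately in the enclave
    return [u - b for u, b in zip(untrusted, unblind)]
```

The per-layer cost of generating `r` and computing `W r` inside the enclave is exactly the overhead that motivates Origami's switch to unblinded accelerator execution after the first few layers.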

3D Magic Mirror: Automatic Video to 3D Caricature Translation

Title 3D Magic Mirror: Automatic Video to 3D Caricature Translation
Authors Yudong Guo, Luo Jiang, Lin Cai, Juyong Zhang
Abstract Caricature is an abstraction of a real person which distorts or exaggerates certain features, but still retains a likeness. While most existing works focus on 3D caricature reconstruction from 2D caricatures or translating 2D photos to 2D caricatures, this paper presents a real-time and automatic algorithm for creating expressive 3D caricatures with caricature style texture map from 2D photos or videos. To solve this challenging ill-posed reconstruction problem and cross-domain translation problem, we first reconstruct the 3D face shape for each frame, and then translate 3D face shape from normal style to caricature style by a novel identity and expression preserving VAE-CycleGAN. Based on a labeling formulation, the caricature texture map is constructed from a set of multi-view caricature images generated by CariGANs. The effectiveness and efficiency of our method are demonstrated by comparison with baseline implementations. The perceptual study shows that the 3D caricatures generated by our method meet people’s expectations of 3D caricature style.
Tasks Caricature
Published 2019-06-03
URL https://arxiv.org/abs/1906.00544v1
PDF https://arxiv.org/pdf/1906.00544v1.pdf
PWC https://paperswithcode.com/paper/190600544
Repo
Framework

Tails of Triangular Flows

Title Tails of Triangular Flows
Authors Priyank Jaini, Ivan Kobyzev, Marcus Brubaker, Yaoliang Yu
Abstract Triangular maps are a construct in probability theory that allows the transformation of any source density to any target density. We consider flow based models that learn these triangular transformations, which we call triangular flows, and study the properties of these triangular flows with the goal of capturing heavy-tailed target distributions. In one dimension, we prove that the density quantile functions of the source and target density can characterize properties of the increasing push-forward transformation and show that no Lipschitz continuous increasing map can transform a light-tailed source to a heavy-tailed target density. We further precisely relate the asymptotic behavior of these density quantile functions with the existence of certain moments of the distributions. These results allow us to give a precise asymptotic rate at which an increasing transformation must grow to capture the tail properties of a target given the source distribution. In the multivariate case, we show that any increasing triangular map transforming a light-tailed source density to a heavy-tailed target density must have all eigenvalues of its Jacobian unbounded. Our analysis suggests the importance of the source distribution in capturing heavy-tailed distributions, and we discuss the implications for flow based models.
Tasks
Published 2019-07-10
URL https://arxiv.org/abs/1907.04481v1
PDF https://arxiv.org/pdf/1907.04481v1.pdf
PWC https://paperswithcode.com/paper/tails-of-triangular-flows
Repo
Framework
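The one-dimensional claim can be checked numerically: the increasing transport map from source CDF F to target CDF G is T = G⁻¹ ∘ F, and for a Gaussian source with a Cauchy (heavy-tailed) target, T(x)/x blows up in the tail, so T cannot be Lipschitz:

```python
import math
from statistics import NormalDist

# In one dimension the increasing transport map from source F to
# target G is T = G^{-1} o F. Take a standard normal source (light
# tails) and a standard Cauchy target (heavy tails).
source = NormalDist()

def cauchy_quantile(p):
    return math.tan(math.pi * (p - 0.5))  # G^{-1} for the standard Cauchy

def transport(x):
    return cauchy_quantile(source.cdf(x))

# T(x)/x grows without bound in the tail, so no Lipschitz increasing
# map can push the light-tailed source onto the heavy-tailed target.
ratios = [transport(x) / x for x in (2.0, 4.0, 6.0)]
```

Evaluating `ratios` shows the growth rate is explosive even at modest x, which is the numerical face of the paper's unbounded-Jacobian result.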

Unsupervised automatic classification of Scanning Electron Microscopy (SEM) images of CD4+ cells with varying extent of HIV virion infection

Title Unsupervised automatic classification of Scanning Electron Microscopy (SEM) images of CD4+ cells with varying extent of HIV virion infection
Authors John M. Wandeto, Birgitta Dresp-Langley
Abstract Archiving large sets of medical or cell images in digital libraries may require ordering randomly scattered sets of image data according to specific criteria, such as the spatial extent of a specific local color or contrast content that reveals different meaningful states of a physiological structure, tissue, or cell in a certain order, indicating progression or recession of a pathology, or the progressive response of a cell structure to treatment. Here we used a Self-Organizing Map (SOM)-based, fully automatic and unsupervised classification procedure described in our earlier work and applied it to sets of minimally processed grayscale and/or color-processed Scanning Electron Microscopy (SEM) images of CD4+ T-lymphocytes (so-called helper cells) with varying extent of HIV virion infection. It is shown that the quantization error in the SOM output after training makes it possible to scale the spatial magnitude and the direction of change (+ or -) in local pixel contrast or color across images of a series with a reliability that exceeds that of any human expert. The procedure is easily implemented and fast, and represents a promising step towards low-cost automatic digital image archiving with minimal intervention of a human operator.
Tasks Quantization
Published 2019-04-30
URL http://arxiv.org/abs/1905.03700v1
PDF http://arxiv.org/pdf/1905.03700v1.pdf
PWC https://paperswithcode.com/paper/190503700
Repo
Framework
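The ranking signal used above is the SOM quantization error, i.e. the mean distance from each input vector to its best-matching unit. A minimal version, on an already-trained weight set (the SOM training loop itself is omitted):

```python
def quantization_error(vectors, som_weights):
    """Mean Euclidean distance from each input vector to its
    best-matching unit of an (already trained) SOM weight set."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(min(dist(v, w) for w in som_weights) for v in vectors) / len(vectors)
```

Because the trained weights stay fixed, comparing this scalar across images of a series measures how far each image drifts from the reference content, which is what lets the procedure order the series automatically.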

Magnetic Resonance Fingerprinting Reconstruction Using Recurrent Neural Networks

Title Magnetic Resonance Fingerprinting Reconstruction Using Recurrent Neural Networks
Authors Elisabeth Hoppe, Florian Thamm, Gregor Körzdörfer, Christopher Syben, Franziska Schirrmacher, Mathias Nittka, Josef Pfeuffer, Heiko Meyer, Andreas Maier
Abstract Magnetic Resonance Fingerprinting (MRF) is an imaging technique acquiring unique time signals for different tissues. Although the acquisition is highly accelerated, the reconstruction time remains a problem, as the state-of-the-art template matching compares every signal with a set of possible signals. To overcome this limitation, deep learning-based approaches, e.g., Convolutional Neural Networks (CNNs), have been proposed. In this work, we investigate the applicability of Recurrent Neural Networks (RNNs) for this reconstruction problem, as the signals are correlated in time. Compared to previous methods based on CNNs, RNN models yield significantly improved results using in-vivo data.
Tasks Magnetic Resonance Fingerprinting
Published 2019-09-13
URL https://arxiv.org/abs/1909.06395v1
PDF https://arxiv.org/pdf/1909.06395v1.pdf
PWC https://paperswithcode.com/paper/magnetic-resonance-fingerprinting-1
Repo
Framework
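For contrast with the RNN approach, the template-matching baseline described above amounts to a nearest-neighbor search by normalized dot product over a precomputed signal dictionary; the tissue-parameter labels below are invented for illustration:

```python
def match_fingerprint(signal, dictionary):
    """Return the parameter label of the dictionary entry whose simulated
    signal has the highest normalized dot product (cosine similarity)
    with the measured signal."""
    def unit(v):
        n = sum(x * x for x in v) ** 0.5 or 1.0
        return [x / n for x in v]
    s = unit(signal)
    best_label, _ = max(
        dictionary.items(),
        key=lambda kv: sum(a * b for a, b in zip(s, unit(kv[1]))))
    return best_label
```

The cost that motivates the learned alternatives is visible here: every voxel's signal is compared against every dictionary entry, so reconstruction time scales with the dictionary size.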