January 27, 2020

2920 words 14 mins read

Paper Group ANR 1115

Sequential Learning of Convolutional Features for Effective Text Classification

Title Sequential Learning of Convolutional Features for Effective Text Classification
Authors Avinash Madasu, Vijjini Anvesh Rao
Abstract Text classification has been one of the major problems in natural language processing. With the advent of deep learning, the convolutional neural network (CNN) has become a popular solution to this task. However, CNNs, which were first proposed for images, face crucial challenges in the context of text processing, namely in their elementary blocks: convolution filters and max pooling. These challenges have largely been overlooked by most existing CNN models proposed for text classification. In this paper, we present an experimental study on the fundamental blocks of CNNs in text categorization. Based on this critique, we propose the Sequential Convolutional Attentive Recurrent Network (SCARN). The proposed SCARN model exploits the advantages of both recurrent and convolutional structures more efficiently than previously proposed recurrent convolutional models. We test our model on different text classification datasets across tasks like sentiment analysis and question classification. Extensive experiments establish that SCARN outperforms other recurrent convolutional architectures with significantly fewer parameters. Furthermore, SCARN achieves better performance than various equally large deep CNN and LSTM architectures. (An illustrative sketch follows this entry.)
Tasks Sentiment Analysis, Text Categorization, Text Classification
Published 2019-08-30
URL https://arxiv.org/abs/1909.00080v2
PDF https://arxiv.org/pdf/1909.00080v2.pdf
PWC https://paperswithcode.com/paper/sequential-learning-of-convolutional-features
Repo
Framework
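
The abstract contrasts convolution/max-pooling blocks with recurrent aggregation. Below is a minimal, hypothetical sketch of a recurrent-convolutional text classifier in that spirit; the layer sizes, the attention pooling, and the overall wiring are my assumptions for illustration, not the published SCARN architecture.

```python
# Hypothetical recurrent-convolutional classifier in the spirit of SCARN.
# All sizes and the exact attention wiring are assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentConvClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 num_filters=64, kernel_size=3, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Convolution extracts local n-gram features from the embeddings.
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size, padding=1)
        # A recurrent branch consumes the convolutional feature sequence,
        # replacing max pooling with sequential aggregation.
        self.rnn = nn.LSTM(num_filters, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        x = self.embed(tokens)                      # (batch, seq, embed)
        x = F.relu(self.conv(x.transpose(1, 2)))    # (batch, filters, seq)
        h, _ = self.rnn(x.transpose(1, 2))          # (batch, seq, hidden)
        # Attention pooling over time instead of max pooling.
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)   # (batch, seq)
        pooled = (w.unsqueeze(-1) * h).sum(dim=1)   # (batch, hidden)
        return self.out(pooled)

logits = RecurrentConvClassifier(vocab_size=10000)(
    torch.randint(0, 10000, (4, 32)))               # 4 sentences, 32 tokens
```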

Local Distribution Obfuscation via Probability Coupling

Title Local Distribution Obfuscation via Probability Coupling
Authors Yusuke Kawamoto, Takao Murakami
Abstract We introduce a general model for the local obfuscation of probability distributions by probabilistic perturbation, e.g., by adding differentially private noise, and investigate its theoretical properties. Specifically, we relax a notion of distribution privacy (DistP) by generalizing it to divergence, and propose local obfuscation mechanisms that provide divergence distribution privacy. To provide f-divergence distribution privacy, we prove that probabilistic perturbation noise should be added proportionally to the Earth mover’s distance between the probability distributions that we want to make indistinguishable. Furthermore, we introduce a local obfuscation mechanism, which we call a coupling mechanism, that provides divergence distribution privacy while optimizing the utility of obfuscated data by using exact/approximate auxiliary information on the input distributions we want to protect. (An illustrative sketch follows this entry.)
Tasks
Published 2019-07-13
URL https://arxiv.org/abs/1907.05991v2
PDF https://arxiv.org/pdf/1907.05991v2.pdf
PWC https://paperswithcode.com/paper/local-distribution-obfuscation-via
Repo
Framework
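
The key quantitative claim above is that perturbation noise should scale with the Earth mover's distance between the distributions to be made indistinguishable. Here is a toy sketch under that reading, using a plain Laplace mechanism; the mechanism and proportionality constant are my assumptions, and the paper's coupling mechanism is more refined.

```python
# Toy illustration: noise scale proportional to the Earth mover's distance
# between two sensitive distributions. The Laplace mechanism is an assumption.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
samples_a = rng.normal(0.0, 1.0, 1000)    # sensitive distribution A
samples_b = rng.normal(0.5, 1.0, 1000)    # sensitive distribution B

epsilon = 1.0                              # privacy budget
emd = wasserstein_distance(samples_a, samples_b)

def obfuscate(x):
    # Laplace noise scaled by the Earth mover's distance, so outputs from
    # A and B become statistically close under this perturbation.
    return x + rng.laplace(0.0, emd / epsilon, size=x.shape)

print(emd, obfuscate(samples_a).mean(), obfuscate(samples_b).mean())
```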

Visual Backpropagation

Title Visual Backpropagation
Authors Roy S. Freedman
Abstract We show how a declarative functional programming specification of backpropagation yields a visual and transparent implementation within spreadsheets. We call our method Visual Backpropagation. This backpropagation implementation exploits array worksheet formulas and manual calculation, and follows a sequential order of computation similar to the processing of a systolic array. The implementation uses no hidden macros or user-defined functions; there are no loops, assignment statements, or links to any procedural programs written in conventional languages. As an illustration, we compare a Visual Backpropagation solution to a TensorFlow (Python) solution on a standard regression problem. (An illustrative sketch follows this entry.)
Tasks
Published 2019-06-06
URL https://arxiv.org/abs/1906.04011v1
PDF https://arxiv.org/pdf/1906.04011v1.pdf
PWC https://paperswithcode.com/paper/visual-backpropagation
Repo
Framework
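
As a rough numpy analogue of the spreadsheet style described above, the sketch below writes a two-layer regression network's forward and backward passes as pure array expressions evaluated in a fixed sequential order; the outer loop merely stands in for repeated manual recalculation, and the network size and learning rate are arbitrary choices.

```python
# Forward and backward passes as "worksheets" of array formulas, in the
# spirit of Visual Backpropagation. Sizes and rates are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 1))
y = np.sin(3 * X)                                 # toy regression target

W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.1

for _ in range(2000):                             # "manual recalculation"
    # Forward pass: each line is one worksheet of array formulas.
    H = np.tanh(X @ W1 + b1)                      # hidden activations
    Yhat = H @ W2 + b2                            # network output
    # Backward pass: gradients as array formulas, in reverse order.
    dY = (Yhat - y) / len(X)                      # gradient of mean sq. error
    dW2, db2 = H.T @ dY, dY.sum(0)
    dH = (dY @ W2.T) * (1 - H**2)                 # tanh derivative
    dW1, db1 = X.T @ dH, dH.sum(0)
    W1, b1, W2, b2 = W1 - lr*dW1, b1 - lr*db1, W2 - lr*dW2, b2 - lr*db2

print(float(((Yhat - y) ** 2).mean()))            # final training MSE
```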

Federated Learning for Coalition Operations

Title Federated Learning for Coalition Operations
Authors D. Verma, S. Calo, S. Witherspoon, E. Bertino, A. Abu Jabal, A. Swami, G. Cirincione, S. Julier, G. White, G. de Mel, G. Pearson
Abstract Machine Learning in coalition settings requires combining insights available from data assets and knowledge repositories distributed across multiple coalition partners. In tactical environments, this requires sharing the assets, knowledge and models in a bandwidth-constrained environment, while staying in conformance with the privacy, security and other applicable policies for each coalition member. Federated Machine Learning provides an approach for such sharing. In its simplest version, federated machine learning could exchange training data available among the different coalition members, with each partner deciding which part of the training data from other partners to accept based on the quality and value of the offered data. In a more sophisticated version, coalition partners may exchange models learnt locally, which need to be transformed, accepted in their entirety or in part based on the quality and value offered by each model, and fused together into an integrated model. In this paper, we examine the challenges present in creating federated learning solutions in coalition settings, and present the different flavors of federated learning that we have created as part of our research in the DAIS ITA. The challenges addressed include dealing with varying quality of data and models, determining the value offered by the data/model of each coalition partner, addressing the heterogeneity in data representation, labeling and AI model architecture selected by different coalition members, and handling the varying levels of trust present among members of the coalition. We also identify some open problems that remain to be addressed to create a viable solution for federated learning in coalition environments. (An illustrative sketch follows this entry.)
Tasks
Published 2019-10-14
URL https://arxiv.org/abs/1910.06799v1
PDF https://arxiv.org/pdf/1910.06799v1.pdf
PWC https://paperswithcode.com/paper/federated-learning-for-coalition-operations
Repo
Framework
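
One concrete flavor of the model-fusion variant described above is a trust-weighted parameter average. The weighting scheme below is an illustrative assumption, not the DAIS ITA implementation.

```python
# Minimal sketch of value/trust-weighted model fusion in a coalition:
# each partner contributes locally trained parameters, fused by a convex
# combination whose weights encode trust. Weights here are illustrative.
import numpy as np

def fuse_models(partner_weights, trust_scores):
    """partner_weights: list of parameter vectors, one per coalition member.
    trust_scores: non-negative value/trust assessment for each member."""
    w = np.asarray(trust_scores, dtype=float)
    w /= w.sum()                                   # normalize to a convex sum
    stacked = np.stack(partner_weights)            # (members, n_params)
    return (w[:, None] * stacked).sum(axis=0)      # trust-weighted average

local_models = [np.array([0.9, 1.1]), np.array([1.2, 0.8]), np.array([5.0, 5.0])]
trust = [1.0, 1.0, 0.1]                            # low trust in the outlier
print(fuse_models(local_models, trust))
```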

A Review of Visual Trackers and Analysis of its Application to Mobile Robot

Title A Review of Visual Trackers and Analysis of its Application to Mobile Robot
Authors Shaoze You, Hua Zhu, Menggang Li, Yutan Li
Abstract Computer vision has received significant attention in recent years and is one of the key means by which robots obtain information about the external environment. Visual trackers can provide the necessary physical and environmental parameters for a mobile robot, and their performance determines the robot's effectiveness in actual applications. This study provides a comprehensive survey of visual trackers. Following a brief introduction, we first analyze the basic framework and difficulties of visual trackers. We then introduce the structure of generative and discriminative methods and summarize the feature descriptors, modeling methods, and learning methods used in trackers. Next, we review and evaluate state-of-the-art progress on discriminative trackers along three directions: correlation filters, deep learning, and convolutional features. Finally, we analyze the research directions for visual trackers used on mobile robots and outline future trends. (An illustrative sketch follows this entry.)
Tasks
Published 2019-10-22
URL https://arxiv.org/abs/1910.09761v1
PDF https://arxiv.org/pdf/1910.09761v1.pdf
PWC https://paperswithcode.com/paper/a-review-of-visual-trackers-and-analysis-of
Repo
Framework
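
To make the correlation-filter direction of the survey concrete, here is a toy, single-frame filter in the MOSSE spirit; the window size, regularizer, and desired-response width are illustrative assumptions.

```python
# Toy MOSSE-style correlation filter: learn a filter whose correlation
# response peaks at the target, then locate the peak on a new patch.
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-2):
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Desired response: a Gaussian peaked at the patch center.
    g = np.exp(-((yy - h // 2) ** 2 + (xx - w // 2) ** 2) / (2 * sigma ** 2))
    F, G = np.fft.fft2(patch), np.fft.fft2(g)
    return G * np.conj(F) / (F * np.conj(F) + lam)  # filter, Fourier domain

def respond(filt, patch):
    # Correlation response; the target sits at the response peak.
    return np.real(np.fft.ifft2(np.fft.fft2(patch) * filt))

rng = np.random.default_rng(0)
frame = rng.normal(size=(64, 64))
filt = train_filter(frame)
peak = np.unravel_index(respond(filt, frame).argmax(), (64, 64))
print(peak)                                         # ~ (32, 32), the center
```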

Adaptive Variance for Changing Sparse-Reward Environments

Title Adaptive Variance for Changing Sparse-Reward Environments
Authors Xingyu Lin, Pengsheng Guo, Carlos Florensa, David Held
Abstract Robots that are trained to perform a task in a fixed environment often fail when facing unexpected changes to the environment due to a lack of exploration. We propose a principled way to adapt the policy for better exploration in changing sparse-reward environments. Unlike previous works which explicitly model environmental changes, we analyze the relationship between the value function and the optimal exploration for a Gaussian-parameterized policy and show that our theory leads to an effective strategy for adjusting the variance of the policy, enabling fast adaptation to changes in a variety of sparse-reward environments. (An illustrative sketch follows this entry.)
Tasks
Published 2019-03-15
URL https://arxiv.org/abs/1903.06309v2
PDF https://arxiv.org/pdf/1903.06309v2.pdf
PWC https://paperswithcode.com/paper/adaptive-variance-for-changing-sparse-reward
Repo
Framework
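
An illustrative take on the core idea: adapt the variance of a Gaussian policy from the value function, widening exploration when the learned value stops predicting returns (i.e., the environment changed). The specific update rule below is my assumption, not the authors' formula.

```python
# Gaussian policy whose variance grows with the value-prediction error.
import numpy as np

class AdaptiveGaussianPolicy:
    def __init__(self, mu=0.0, sigma=0.1, lr=0.05):
        self.mu, self.sigma, self.lr = mu, sigma, lr

    def act(self, rng):
        return rng.normal(self.mu, self.sigma)

    def adapt(self, predicted_value, observed_return):
        # Large value-prediction error -> the old policy is stale -> widen
        # the variance to explore; small error -> narrow it to exploit.
        error = abs(observed_return - predicted_value)
        self.sigma = np.clip(self.sigma + self.lr * (error - self.sigma),
                             1e-3, 2.0)

rng = np.random.default_rng(0)
policy = AdaptiveGaussianPolicy()
policy.adapt(predicted_value=1.0, observed_return=0.0)  # environment changed
print(policy.sigma)                                     # variance grew
```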

Deep Reinforced Self-Attention Masks for Abstractive Summarization (DR.SAS)

Title Deep Reinforced Self-Attention Masks for Abstractive Summarization (DR.SAS)
Authors Ankit Chadha, Mohamed Masoud
Abstract We present a novel architectural scheme to tackle the abstractive summarization problem on the CNN/DM dataset, fusing Reinforcement Learning (RL) with UniLM, a pre-trained deep learning model for various natural language tasks. We test the limits of learning fine-grained attention in Transformers to improve summarization quality. UniLM applies attention to the entire token space in a global fashion. We propose DR.SAS, which applies the Actor-Critic (AC) algorithm to learn a dynamic self-attention distribution over the tokens, reducing redundancy and generating factual and coherent summaries. After hyperparameter tuning, we achieved better ROUGE results than the baseline. Our model tends to be more extractive/factual yet coherent in detail because of optimization over ROUGE rewards. We present a detailed error analysis with examples of the strengths and limitations of our model. Our codebase will be publicly available on our GitHub. (An illustrative sketch follows this entry.)
Tasks Abstractive Text Summarization
Published 2019-12-30
URL https://arxiv.org/abs/2001.00009v1
PDF https://arxiv.org/pdf/2001.00009v1.pdf
PWC https://paperswithcode.com/paper/deep-reinforced-self-attention-masks-for
Repo
Framework
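
A minimal sketch of the mechanical part of the idea, dynamically masking self-attention scores; here the mask is simply passed in, and the actor-critic machinery that DR.SAS uses to produce it is omitted.

```python
# Soft-masked self-attention: a per-token mask (which DR.SAS would learn
# with an actor-critic) down-weights redundant tokens before the softmax.
import torch

def masked_self_attention(x, mask_logits):
    """x: (batch, seq, dim); mask_logits: (batch, seq) scores from a policy."""
    d = x.size(-1)
    scores = x @ x.transpose(1, 2) / d ** 0.5        # (batch, seq, seq)
    # Broadcast the mask over queries; large negative logits drop a token.
    scores = scores + mask_logits.unsqueeze(1)
    return torch.softmax(scores, dim=-1) @ x

x = torch.randn(2, 5, 16)
mask = torch.tensor([[0., 0., -1e9, 0., 0.]]).repeat(2, 1)  # drop token 2
print(masked_self_attention(x, mask).shape)          # torch.Size([2, 5, 16])
```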

Visually-aware Recommendation with Aesthetic Features

Title Visually-aware Recommendation with Aesthetic Features
Authors Wenhui Yu, Xiangnan He, Jian Pei, Xu Chen, Li Xiong, Jinfei Liu, Zheng Qin
Abstract Visual information plays a critical role in the human decision-making process. While recent developments in visually-aware recommender systems have taken the product image into account, none of them has considered the aesthetic aspect. We argue that the aesthetic factor is very important in modeling and predicting users’ preferences, especially for some fashion-related domains like clothing and jewelry. This work addresses the need for modeling aesthetic information in visually-aware recommender systems. Technically speaking, we make three key contributions in leveraging deep aesthetic features: (1) To describe the aesthetics of products, we introduce aesthetic features extracted from product images by a deep aesthetic network. We incorporate these features into a recommender system to model users’ preferences in the aesthetic aspect. (2) Since time is very important to users’ clothing-related decisions, we design a new tensor decomposition model for implicit feedback data. The aesthetic features are then injected into the basic tensor model to capture the temporal dynamics of aesthetic preferences (e.g., seasonal patterns). (3) We also use the aesthetic features to optimize the learning strategy on implicit feedback data. We enrich the pairwise training samples by considering the similarity among items in the visual space and graph space; the key idea is that a user is likely to have similar perceptions of similar items. We perform extensive experiments on several real-world datasets and demonstrate the usefulness of aesthetic features and the effectiveness of our proposed methods. (An illustrative sketch follows this entry.)
Tasks Decision Making, Recommendation Systems
Published 2019-05-02
URL https://arxiv.org/abs/1905.02009v1
PDF https://arxiv.org/pdf/1905.02009v1.pdf
PWC https://paperswithcode.com/paper/visually-aware-recommendation-with-aesthetic
Repo
Framework
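
A sketch of the enriched pairwise (BPR-style) sampling idea from contribution (3): negatives are drawn only from items that are both unobserved and visually dissimilar to the positive item. The similarity threshold and feature source are illustrative assumptions.

```python
# Similarity-aware negative sampling for pairwise training triples.
import numpy as np

def sample_triple(user, positives, aesthetic_feats, n_items, rng,
                  sim_thresh=0.9):
    pos = rng.choice(list(positives[user]))
    f = aesthetic_feats / np.linalg.norm(aesthetic_feats, axis=1, keepdims=True)
    sim = f @ f[pos]                                  # cosine similarity to pos
    # Negatives must be unobserved AND not look like the positive item,
    # since the user likely perceives similar items similarly.
    candidates = [j for j in range(n_items)
                  if j not in positives[user] and sim[j] < sim_thresh]
    return user, pos, rng.choice(candidates)

rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 8))                      # deep aesthetic features
print(sample_triple(0, {0: {1, 4}}, feats, 20, rng))  # (user, pos, neg)
```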

The Normalization Method for Alleviating Pathological Sharpness in Wide Neural Networks

Title The Normalization Method for Alleviating Pathological Sharpness in Wide Neural Networks
Authors Ryo Karakida, Shotaro Akaho, Shun-ichi Amari
Abstract Normalization methods play an important role in enhancing the performance of deep learning while their theoretical understanding has been limited. To theoretically elucidate the effectiveness of normalization, we quantify the geometry of the parameter space determined by the Fisher information matrix (FIM), which also corresponds to the local shape of the loss landscape under certain conditions. We analyze deep neural networks with random initialization, which are known to suffer from a pathologically sharp shape of the landscape when the network becomes sufficiently wide. We reveal that batch normalization in the last layer contributes to drastically decreasing such pathological sharpness if the width and sample number satisfy a specific condition. In contrast, it is hard for batch normalization in the middle hidden layers to alleviate pathological sharpness in many settings. We also find that layer normalization cannot alleviate pathological sharpness either. Thus, we conclude that batch normalization in the last layer significantly contributes to decreasing the sharpness induced by the FIM. (The defining formula follows this entry.)
Tasks
Published 2019-06-07
URL https://arxiv.org/abs/1906.02926v2
PDF https://arxiv.org/pdf/1906.02926v2.pdf
PWC https://paperswithcode.com/paper/the-normalization-method-for-alleviating
Repo
Framework
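
For reference, the object the paper analyzes is the Fisher information matrix of the network's predictive distribution; "pathological sharpness" refers to its largest eigenvalue growing with network width. The formula below is the standard definition, not a result specific to this paper.

```latex
% Fisher information matrix of the predictive distribution p(y|x; theta);
% sharpness is governed by its largest eigenvalue.
F(\theta) \;=\; \mathbb{E}_{x,\, y \sim p(y \mid x;\theta)}
  \big[\, \nabla_\theta \log p(y \mid x;\theta)\,
          \nabla_\theta \log p(y \mid x;\theta)^{\top} \big],
\qquad \text{sharpness} \;\sim\; \lambda_{\max}\!\big(F(\theta)\big).
```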

Resurrecting Submodularity in Neural Abstractive Summarization

Title Resurrecting Submodularity in Neural Abstractive Summarization
Authors Simeng Han, Xiang Lin, Shafiq Joty
Abstract Submodularity is a desirable property for a variety of objectives in summarization in terms of content selection, where the encoder-decoder framework is deficient. We propose ‘diminishing attentions’, a class of novel attention mechanisms that are architecturally simple yet empirically effective in improving the coverage of neural abstractive summarization by exploiting the properties of submodular functions. Without adding any extra parameters to the Pointer-Generator baseline, our attention mechanism yields significant improvements in ROUGE scores and generates summaries of better quality. Our method within the Pointer-Generator framework outperforms the recently proposed Transformer model for summarization while using 5 times fewer parameters. Our method also achieves state-of-the-art results in abstractive summarization when applied to the encoder-decoder attention in the Transformer model initialized with BERT. (An illustrative sketch follows this entry.)
Tasks Abstractive Text Summarization
Published 2019-11-08
URL https://arxiv.org/abs/1911.03014v1
PDF https://arxiv.org/pdf/1911.03014v1.pdf
PWC https://paperswithcode.com/paper/resurrecting-submodularity-in-neural
Repo
Framework
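
A guess at what a diminishing attention looks like mechanically: pass cumulative attention through a concave function, so repeatedly attending to the same source token yields diminishing returns (the submodularity property). The square-root choice below is my illustrative assumption, not the paper's exact mechanism.

```python
# Concave shaping of cumulative attention discourages repetition:
# marginal attention gain shrinks as a token accumulates coverage.
import torch

def diminishing_attention(scores, cum_attn):
    """scores: (batch, src_len) raw attention logits for the current step;
    cum_attn: (batch, src_len) attention mass already spent per token."""
    attn = torch.softmax(scores, dim=-1)
    # Marginal gain of a concave coverage function sqrt(.).
    shaped = torch.sqrt(cum_attn + attn) - torch.sqrt(cum_attn)
    attn = shaped / shaped.sum(dim=-1, keepdim=True)
    return attn, cum_attn + attn

scores = torch.zeros(1, 4)
cum = torch.tensor([[2.0, 0.0, 0.0, 0.0]])        # token 0 already covered
attn, cum = diminishing_attention(scores, cum)
print(attn)                                       # token 0 gets least weight
```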

Few-Shot Generalization for Single-Image 3D Reconstruction via Priors

Title Few-Shot Generalization for Single-Image 3D Reconstruction via Priors
Authors Bram Wallace, Bharath Hariharan
Abstract Recent work on single-view 3D reconstruction shows impressive results but has been restricted to a few fixed categories where extensive training data is available. The problem of generalizing these models to new classes with limited training data is largely open. To address this problem, we present a new model architecture that reframes single-view 3D reconstruction as learnt, category-agnostic refinement of a provided, category-specific prior. The provided prior shape for a novel class can be obtained from as few as one 3D shape from this class. Our model can start reconstructing objects from the novel class using this prior without seeing any training image for this class and without any retraining. Our model outperforms category-agnostic baselines and remains competitive with more sophisticated baselines that finetune on the novel categories. Additionally, our network is capable of improving the reconstruction given multiple views, despite not being trained on the task of multi-view reconstruction. (An illustrative sketch follows this entry.)
Tasks 3D Reconstruction, Single-View 3D Reconstruction
Published 2019-09-03
URL https://arxiv.org/abs/1909.01205v1
PDF https://arxiv.org/pdf/1909.01205v1.pdf
PWC https://paperswithcode.com/paper/few-shot-generalization-for-single-image-3d
Repo
Framework
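
A schematic of the "prior + refinement" decomposition: a category-agnostic network refines a category-specific prior shape conditioned on the input image. All layer choices and the crude feature fusion below are illustrative assumptions, not the paper's architecture.

```python
# Category-agnostic refinement of a category-specific occupancy prior.
import torch
import torch.nn as nn

class PriorRefiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Refines a 32^3 prior occupancy grid given image evidence.
        self.refine = nn.Sequential(nn.Conv3d(2, 8, 3, padding=1), nn.ReLU(),
                                    nn.Conv3d(8, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, image, prior_voxels):
        feat = self.img_enc(image)                      # (batch, 16)
        # Broadcast a scalar image code over the voxel grid (crude fusion).
        code = feat.mean(dim=1)[:, None, None, None, None]
        cond = code.expand_as(prior_voxels)
        return self.refine(torch.cat([prior_voxels, cond], dim=1))

img = torch.randn(1, 3, 64, 64)
prior = torch.rand(1, 1, 32, 32, 32)                    # one shape from class
print(PriorRefiner()(img, prior).shape)                 # (1, 1, 32, 32, 32)
```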

KAS-term: Extracting Slovene Terms from Doctoral Theses via Supervised Machine Learning

Title KAS-term: Extracting Slovene Terms from Doctoral Theses via Supervised Machine Learning
Authors Nikola Ljubešić, Darja Fišer, Tomaž Erjavec
Abstract This paper presents a dataset and supervised learning experiments for term extraction from Slovene academic texts. Term candidates in the dataset were extracted via morphosyntactic patterns and annotated for their termness by four annotators. Experiments on the dataset show that most co-occurrence statistics, applied after morphosyntactic patterns and a frequency threshold, perform close to random, and that the results can be significantly improved by combining, with supervised machine learning, all seven statistical measures included in the dataset. On multi-word terms the model using all statistics obtains an AUC of 0.736, while the best single statistic produces an AUC of only 0.590. Among many additional candidate features, only adding multi-word morphosyntactic pattern information and the length of single-word term candidates achieves further improvements. (An illustrative sketch follows this entry.)
Tasks
Published 2019-06-05
URL https://arxiv.org/abs/1906.02053v1
PDF https://arxiv.org/pdf/1906.02053v1.pdf
PWC https://paperswithcode.com/paper/kas-term-extracting-slovene-terms-from
Repo
Framework
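
A sketch of the supervised setup: each term candidate becomes a vector of its seven termhood statistics, and a classifier learns to combine them, which is what lifts AUC over any single statistic. The classifier choice and the synthetic data are assumptions for illustration.

```python
# Combining several weak termhood statistics with a supervised classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))          # 7 statistics per term candidate
y = (X @ rng.normal(size=7) + rng.normal(0, 1, 500) > 0).astype(int)  # termness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Combining all statistics should beat any single one, as in the paper.
print("combined AUC:   ", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("single-stat AUC:", roc_auc_score(y_te, X_te[:, 0]))
```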

FAB: A Robust Facial Landmark Detection Framework for Motion-Blurred Videos

Title FAB: A Robust Facial Landmark Detection Framework for Motion-Blurred Videos
Authors Keqiang Sun, Wayne Wu, Tinghao Liu, Shuo Yang, Quan Wang, Qiang Zhou, Zuochang Ye, Chen Qian
Abstract Recently, facial landmark detection algorithms have achieved remarkable performance on static images. However, these algorithms are neither accurate nor stable in motion-blurred videos. The loss of structure information makes it difficult for state-of-the-art facial landmark detection algorithms to yield good results. In this paper, we propose a framework named FAB that takes advantage of structure consistency in the temporal dimension for facial landmark detection in motion-blurred videos. A structure predictor is proposed to predict the missing face structural information temporally, which serves as a geometry prior. This allows our framework to work as a virtuous circle. On one hand, the geometry prior helps our structure-aware deblurring network generate high-quality deblurred images, which lead to better landmark detection results. On the other hand, better landmark detection results help the structure predictor generate a better geometry prior for the next frame. Moreover, FAB is a flexible video-based framework that can incorporate any static image-based method to provide a performance boost on video datasets. Extensive experiments on Blurred-300VW, the proposed Real-world Motion Blur (RWMB) dataset and 300VW demonstrate performance superior to the state-of-the-art methods. Datasets and models will be publicly available at https://keqiangsun.github.io/projects/FAB/FAB.html. (An illustrative sketch follows this entry.)
Tasks Deblurring, Facial Landmark Detection
Published 2019-10-26
URL https://arxiv.org/abs/1910.12100v1
PDF https://arxiv.org/pdf/1910.12100v1.pdf
PWC https://paperswithcode.com/paper/fab-a-robust-facial-landmark-detection-1
Repo
Framework
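
The "virtuous circle" above can be written as a short schematic loop: the structure predictor supplies a geometry prior for deblurring, and detection on the deblurred frame feeds the predictor for the next frame. The three component functions below are placeholders for the released models, not reimplementations.

```python
# Schematic of FAB's virtuous circle; components are injected placeholders.
def track_landmarks(frames, structure_predictor, deblurrer, detector):
    history, results = [], []
    for frame in frames:
        prior = structure_predictor(history)        # temporal geometry prior
        sharp = deblurrer(frame, prior)             # structure-aware deblurring
        landmarks = detector(sharp)                 # static-image detector
        history.append(landmarks)                   # closes the loop
        results.append(landmarks)
    return results

# Trivial stand-ins just to show the data flow end to end.
out = track_landmarks([0, 1, 2],
                      structure_predictor=lambda h: h[-1] if h else None,
                      deblurrer=lambda f, p: f,
                      detector=lambda f: f)
print(out)
```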

Hyperparameter-Free Losses for Model-Based Monocular Reconstruction

Title Hyperparameter-Free Losses for Model-Based Monocular Reconstruction
Authors Eduard Ramon, Guillermo Ruiz, Thomas Batard, Xavier Giró-i-Nieto
Abstract This work proposes novel hyperparameter-free losses for single-view 3D reconstruction with morphable models (3DMM). We dispense with the hyperparameters used in other works by exploiting geometry, so that the shape of the object and the camera pose are jointly optimized in a single-term expression. This simplification reduces the optimization time and its complexity. Moreover, we propose a novel implicit regularization technique based on random virtual projections that does not require additional 2D or 3D annotations. Our experiments suggest that minimizing a shape reprojection error together with the proposed implicit regularization is especially suitable for applications that require precise alignment between geometry and image spaces, such as augmented reality. We evaluate our losses on a large-scale dataset with 3D ground truth and publish our implementations to facilitate reproducibility and public benchmarking in this field. (An illustrative sketch follows this entry.)
Tasks 3D Reconstruction, Single-View 3D Reconstruction
Published 2019-08-16
URL https://arxiv.org/abs/1908.09001v1
PDF https://arxiv.org/pdf/1908.09001v1.pdf
PWC https://paperswithcode.com/paper/hyperparameter-free-losses-for-model-based
Repo
Framework
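
A sketch of the random-virtual-projection idea: penalize a shape through reprojection errors under randomly drawn virtual cameras, so no balancing hyperparameter between terms is needed. The orthographic camera model and sampling scheme below are my assumptions; the paper uses its own projection setup.

```python
# Reprojection loss averaged over random virtual (orthographic) cameras.
import numpy as np

def virtual_projection_loss(pred_pts, gt_pts, n_views=8, rng=None):
    rng = rng or np.random.default_rng(0)
    loss = 0.0
    for _ in range(n_views):
        Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal
        P = Q[:2]                                      # orthographic camera
        # 2D reprojection error under this virtual view.
        loss += np.mean((pred_pts @ P.T - gt_pts @ P.T) ** 2)
    return loss / n_views

pts = np.random.default_rng(1).normal(size=(100, 3))
print(virtual_projection_loss(pts + 0.01, pts))        # small for close shapes
```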

Learning multimodal representations for sample-efficient recognition of human actions

Title Learning multimodal representations for sample-efficient recognition of human actions
Authors Miguel Vasco, Francisco S. Melo, David Martins de Matos, Ana Paiva, Tetsunari Inamura
Abstract Humans interact in rich and diverse ways with the environment. However, the representation of such behavior by artificial agents is often limited. In this work we present *motion concepts*, a novel multimodal representation of human actions in a household environment. A motion concept encompasses a probabilistic description of the kinematics of the action along with its contextual background, namely the location and the objects held during the performance. Furthermore, we present Online Motion Concept Learning (OMCL), a new algorithm which learns novel motion concepts from action demonstrations and recognizes previously learned motion concepts. The algorithm is evaluated in a virtual-reality household environment in the presence of a human avatar. OMCL outperforms standard motion recognition algorithms on a one-shot recognition task, attesting to its potential for sample-efficient recognition of human actions. (An illustrative sketch follows this entry.)
Tasks
Published 2019-03-06
URL http://arxiv.org/abs/1903.02511v1
PDF http://arxiv.org/pdf/1903.02511v1.pdf
PWC https://paperswithcode.com/paper/learning-multimodal-representations-for
Repo
Framework
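
A toy rendering of a "motion concept" as described above: a Gaussian model of the action kinematics plus a categorical context (location, held object), with recognition scoring a new demonstration under each stored concept. This structure is my reading of the abstract, not the OMCL algorithm itself.

```python
# Toy motion concept: Gaussian kinematics gated by a contextual match.
import numpy as np

class MotionConcept:
    def __init__(self, trajectories, location, held_object):
        stacked = np.stack(trajectories)             # (demos, T, dims)
        self.mean = stacked.mean(axis=0)
        self.std = stacked.std(axis=0) + 1e-3
        self.context = (location, held_object)

    def score(self, traj, location, held_object):
        # Gaussian log-likelihood of the kinematics...
        ll = -0.5 * np.sum(((traj - self.mean) / self.std) ** 2)
        # ...gated by a hard match on the contextual background.
        return ll if (location, held_object) == self.context else -np.inf

rng = np.random.default_rng(0)
demos = [rng.normal(0, 0.1, (20, 3)) for _ in range(5)]  # one-shot would use 1
concept = MotionConcept(demos, "kitchen", "cup")
print(concept.score(demos[0], "kitchen", "cup"))
```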