January 31, 2020

3349 words 16 mins read

Paper Group ANR 46

Joint Learning of Pre-Trained and Random Units for Domain Adaptation in Part-of-Speech Tagging. Diversified Hidden Markov Models for Sequential Labeling. Deep Residual Reinforcement Learning. Histopathologic Image Processing: A Review. Pointing Novel Objects in Image Captioning. Learning Non-Markovian Quantum Noise from Moiré-Enhanced Swap Spectros …

Joint Learning of Pre-Trained and Random Units for Domain Adaptation in Part-of-Speech Tagging

Title Joint Learning of Pre-Trained and Random Units for Domain Adaptation in Part-of-Speech Tagging
Authors Sara Meftah, Youssef Tamaazousti, Nasredine Semmar, Hassane Essafi, Fatiha Sadat
Abstract Fine-tuning neural networks is widely used to transfer valuable knowledge from high-resource to low-resource domains. In a standard fine-tuning scheme, source and target problems are trained using the same architecture. Although capable of adapting to new domains, pre-trained units struggle with learning uncommon target-specific patterns. In this paper, we propose to augment the target network with normalised, weighted and randomly initialised units that enable better adaptation while maintaining the valuable source knowledge. Our experiments on POS tagging of social media texts (Tweets domain) demonstrate that our method achieves state-of-the-art performance on 3 commonly used datasets.
Tasks Domain Adaptation, Part-Of-Speech Tagging
Published 2019-04-07
URL http://arxiv.org/abs/1904.03595v1
PDF http://arxiv.org/pdf/1904.03595v1.pdf
PWC https://paperswithcode.com/paper/joint-learning-of-pre-trained-and-random
Repo
Framework
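
As a rough illustration of the augmentation idea from the abstract above, here is a minimal PyTorch sketch of a layer that runs a pre-trained recurrent branch alongside freshly initialised units, normalises and weights both, and concatenates them for a downstream tagger. The module name, dimensions, and the choice of LayerNorm and LSTM are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AugmentedRecurrentLayer(nn.Module):
    """Runs a pre-trained recurrent branch alongside randomly initialised units.

    Both branches are layer-normalised and scaled by learnable weight vectors
    before concatenation, so the random units can learn target-specific patterns
    (e.g. Tweet-specific tags) without drowning out the transferred knowledge.
    """

    def __init__(self, input_dim, pretrained_rnn, extra_dim):
        super().__init__()
        self.pretrained = pretrained_rnn                     # e.g. an LSTM from the source-domain tagger
        self.random = nn.LSTM(input_dim, extra_dim, batch_first=True)
        self.norm_pre = nn.LayerNorm(pretrained_rnn.hidden_size)
        self.norm_rand = nn.LayerNorm(extra_dim)
        self.w_pre = nn.Parameter(torch.ones(pretrained_rnn.hidden_size))
        self.w_rand = nn.Parameter(torch.ones(extra_dim))

    def forward(self, x):                                    # x: (batch, seq_len, input_dim)
        h_pre, _ = self.pretrained(x)
        h_rand, _ = self.random(x)
        h_pre = self.w_pre * self.norm_pre(h_pre)
        h_rand = self.w_rand * self.norm_rand(h_rand)
        return torch.cat([h_pre, h_rand], dim=-1)            # fed to a POS-tagging classifier head
```

A linear tagging head over the concatenated features would then be trained jointly with both branches.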

Diversified Hidden Markov Models for Sequential Labeling

Title Diversified Hidden Markov Models for Sequential Labeling
Authors Maoying Qiao, Wei Bian, Richard Yida Xu, Dacheng Tao
Abstract Labeling of sequential data is a prevalent meta-problem for a wide range of real-world applications. While the first-order Hidden Markov Model (HMM) provides a fundamental approach for unsupervised sequential labeling, the basic model does not perform satisfactorily when directly applied to real-world problems such as part-of-speech tagging (PoS tagging) and optical character recognition (OCR). Aiming at improving performance, important extensions of HMM have been proposed in the literature. One of the common key features in these extensions is the incorporation of proper prior information. In this paper, we propose a new extension of HMM, termed diversified Hidden Markov Models (dHMM), which utilizes a diversity-encouraging prior over the state-transition probabilities and thus facilitates more dynamic sequential labelings. Specifically, the diversity is modeled by a continuous determinantal point process prior, which we apply to both unsupervised and supervised scenarios. Learning and inference algorithms for dHMM are derived. Empirical evaluations on benchmark datasets for unsupervised PoS tagging and supervised OCR confirm the effectiveness of dHMM, with performance competitive with the state of the art.
Tasks Optical Character Recognition, Part-Of-Speech Tagging
Published 2019-04-05
URL http://arxiv.org/abs/1904.03170v1
PDF http://arxiv.org/pdf/1904.03170v1.pdf
PWC https://paperswithcode.com/paper/diversified-hidden-markov-models-for
Repo
Framework
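
The diversity-encouraging DPP prior is specific to the paper, but the first-order HMM decoding it builds on can be sketched in a few lines of NumPy. This is only the standard Viterbi algorithm over the transition matrix that the dHMM prior would act on; the function name and log-domain formulation are illustrative assumptions.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state sequence of a first-order HMM (log domain).

    obs : sequence of observation indices
    pi  : (S,) initial state probabilities
    A   : (S, S) state-transition probabilities (the matrix a dHMM prior would act on)
    B   : (S, V) emission probabilities
    """
    S, T = len(pi), len(obs)
    log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    delta = np.zeros((T, S))
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A          # (S, S): previous state -> current state
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    states = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                       # backtrack the best predecessors
        states.append(int(back[t, states[-1]]))
    return states[::-1]
```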

Deep Residual Reinforcement Learning

Title Deep Residual Reinforcement Learning
Authors Shangtong Zhang, Wendelin Boehmer, Shimon Whiteson
Abstract We revisit residual algorithms in both model-free and model-based reinforcement learning settings. We propose the bidirectional target network technique to stabilize residual algorithms, yielding a residual version of DDPG that significantly outperforms vanilla DDPG in the DeepMind Control Suite benchmark. Moreover, we find the residual algorithm an effective approach to the distribution mismatch problem in model-based planning. Compared with the existing TD($k$) method, our residual-based method makes weaker assumptions about the model and yields a greater performance boost.
Tasks
Published 2019-05-03
URL https://arxiv.org/abs/1905.01072v3
PDF https://arxiv.org/pdf/1905.01072v3.pdf
PWC https://paperswithcode.com/paper/deep-residual-reinforcement-learning
Repo
Framework
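
A hedged sketch of what a residual TD loss stabilised by a bidirectional target network might look like, written here for a state-value function to keep it short. The mixing weight `eta`, the function names, and the exact placement of the frozen copies are assumptions; the paper's DDPG variant differs in detail.

```python
import torch

def residual_td_loss(v, v_frozen, s, r, s_next, gamma=0.99, eta=0.5):
    """Residual-style TD loss with a bidirectional target network (sketch).

    v        : online value network
    v_frozen : periodically synchronised frozen copy of v
    eta      : mixes the two bootstrap directions; eta = 0 recovers the usual
               semi-gradient TD loss, eta > 0 adds the residual direction.
    """
    # forward direction: gradient flows through v(s); the bootstrap target is frozen
    delta_fwd = v(s) - (r + gamma * v_frozen(s_next).detach())
    # backward direction: gradient flows through v(s_next); the current value is frozen
    delta_bwd = v_frozen(s).detach() - (r + gamma * v(s_next))
    return ((1 - eta) * delta_fwd.pow(2) + eta * delta_bwd.pow(2)).mean()
```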

Histopathologic Image Processing: A Review

Title Histopathologic Image Processing: A Review
Authors Jonathan de Matos, Alceu de Souza Britto Jr., Luiz E. S. Oliveira, Alessandro L. Koerich
Abstract Histopathologic Images (HI) are the gold standard for the evaluation of some tumors. However, the analysis of such images is challenging even for experienced pathologists, leading to inter- and intra-observer variability. Besides that, the analysis is time- and resource-consuming. One way to accelerate such an analysis is to use Computer-Aided Diagnosis systems. In this work we present a literature review of the computing techniques used to process HI, including both shallow and deep methods. We cover the most common tasks for processing HI, such as segmentation, feature extraction, unsupervised learning and supervised learning. A dataset section presents the datasets found during the literature review. We also include a case study of breast cancer classification using a mix of deep and shallow machine learning methods. The proposed method obtained an accuracy of 91% in the best case, outperforming the baseline reported for the dataset.
Tasks
Published 2019-04-16
URL http://arxiv.org/abs/1904.07900v1
PDF http://arxiv.org/pdf/1904.07900v1.pdf
PWC https://paperswithcode.com/paper/histopathologic-image-processing-a-review
Repo
Framework
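
A minimal sketch of the kind of deep-plus-shallow pipeline the review's case study describes: a frozen CNN used as a feature extractor with a shallow classifier on top. The backbone choice, input size, and dummy data are assumptions for illustration; they are not the review's actual experimental setup.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression

# Frozen CNN as the "deep" feature extractor, shallow classifier on top.
backbone = models.resnet18()              # ImageNet weights would normally be loaded here
backbone.fc = torch.nn.Identity()         # drop the classification head, keep 512-d features
backbone.eval()

def extract_features(batch):              # batch: (N, 3, 224, 224) float tensor of patches
    with torch.no_grad():
        return backbone(batch).numpy()

# toy stand-ins for histopathology patches and binary labels
patches = torch.rand(16, 3, 224, 224)
labels = np.random.randint(0, 2, size=16)

clf = LogisticRegression(max_iter=1000).fit(extract_features(patches), labels)
print(clf.score(extract_features(patches), labels))
```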

Pointing Novel Objects in Image Captioning

Title Pointing Novel Objects in Image Captioning
Authors Yehao Li, Ting Yao, Yingwei Pan, Hongyang Chao, Tao Mei
Abstract Image captioning has received significant attention, with remarkable improvements from recent advances. Nevertheless, images in the wild encapsulate rich knowledge and cannot be sufficiently described with models built on image-caption pairs containing only in-domain objects. In this paper, we propose to address the problem by augmenting standard deep captioning architectures with object learners. Specifically, we present Long Short-Term Memory with Pointing (LSTM-P), a new architecture that facilitates vocabulary expansion and produces novel objects via a pointing mechanism. Technically, object learners are first pre-trained on available object recognition data. Pointing in LSTM-P then balances, at each time step of the decoding stage, the probability of generating a word through the LSTM against copying a word from the recognized objects. Furthermore, our captioning encourages global coverage of objects in the sentence. Extensive experiments are conducted on both the held-out COCO image captioning and ImageNet datasets for describing novel objects, and superior results are reported compared to state-of-the-art approaches. More remarkably, we obtain an average F1 score of 60.9% on the held-out COCO dataset.
Tasks Image Captioning, Object Recognition
Published 2019-04-25
URL http://arxiv.org/abs/1904.11251v1
PDF http://arxiv.org/pdf/1904.11251v1.pdf
PWC https://paperswithcode.com/paper/pointing-novel-objects-in-image-captioning
Repo
Framework
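
A small sketch of a generic pointing/copy mechanism of the kind the abstract describes, mixing a generation distribution over the vocabulary with a copy distribution over recognised objects via a learned gate. Tensor names, shapes and the sigmoid gate are assumptions; the exact LSTM-P formulation is given in the paper.

```python
import torch
import torch.nn.functional as F

def pointed_word_distribution(decoder_logits, copy_logits, copy_word_ids, vocab_size, gate_logit):
    """One decoding step of a generic pointing/copy mechanism (sketch).

    decoder_logits : (V,) scores over the captioning vocabulary
    copy_logits    : (K,) scores over K recognised objects in the image
    copy_word_ids  : (K,) long tensor with the vocabulary index of each object word
    gate_logit     : scalar tensor deciding generate-vs-copy at this time step
    """
    p_gen = torch.sigmoid(gate_logit)                      # probability of generating
    vocab_dist = F.softmax(decoder_logits, dim=-1)
    copy_dist = torch.zeros(vocab_size)
    copy_dist.scatter_add_(0, copy_word_ids, F.softmax(copy_logits, dim=-1))
    return p_gen * vocab_dist + (1 - p_gen) * copy_dist    # final word distribution
```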

Learning Non-Markovian Quantum Noise from Moiré-Enhanced Swap Spectroscopy with Deep Evolutionary Algorithm

Title Learning Non-Markovian Quantum Noise from Moiré-Enhanced Swap Spectroscopy with Deep Evolutionary Algorithm
Authors Murphy Yuezhen Niu, Vadim Smelyanskyi, Paul Klimov, Sergio Boixo, Rami Barends, Julian Kelly, Yu Chen, Kunal Arya, Brian Burkett, Dave Bacon, Zijun Chen, Ben Chiaro, Roberto Collins, Andrew Dunsworth, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa Giustina, Rob Graff, Trent Huang, Evan Jeffrey, David Landhuis, Erik Lucero, Anthony Megrant, Josh Mutus, Xiao Mi, Ofer Naaman, Matthew Neeley, Charles Neill, Chris Quintana, Pedram Roushan, John M. Martinis, Hartmut Neven
Abstract Two-level-system (TLS) defects in amorphous dielectrics are a major source of noise and decoherence in solid-state qubits. Gate-dependent non-Markovian errors caused by TLS-qubit coupling are detrimental to fault-tolerant quantum computation and have not been rigorously treated in the existing literature. In this work, we derive the non-Markovian dynamics between TLS and qubits during a SWAP-like two-qubit gate and the associated average gate fidelity for frequency-tunable transmon qubits. This gate-dependent error model facilitates using qubits as sensors to simultaneously learn practical imperfections in both the qubit's environment and control waveforms. We combine a state-of-the-art machine learning algorithm with Moiré-enhanced swap spectroscopy to achieve robust learning from noisy experimental data. Deep neural networks are used to represent the functional map from experimental data to TLS parameters and are trained through an evolutionary algorithm. Our method achieves the highest learning efficiency and robustness against experimental imperfections to date, representing an important step towards in-situ quantum control optimization over environmental and control defects.
Tasks
Published 2019-12-09
URL https://arxiv.org/abs/1912.04368v1
PDF https://arxiv.org/pdf/1912.04368v1.pdf
PWC https://paperswithcode.com/paper/learning-non-markovian-quantum-noise-from
Repo
Framework
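
The learning setup pairs a neural network with an evolutionary optimiser; the toy NumPy sketch below trains a tiny MLP regressor with a simple (mu, lambda) evolution strategy on synthetic data standing in for (measurement, TLS-parameter) pairs. Network sizes, population settings, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def n_params(in_dim, sizes=(32, 16, 3)):
    total = 0
    for out_dim in sizes:
        total += in_dim * out_dim + out_dim
        in_dim = out_dim
    return total

def mlp_forward(params, x, sizes=(32, 16, 3)):
    """Tiny MLP mapping a spectroscopy-like feature vector to 3 defect parameters."""
    i, h, in_dim = 0, x, x.shape[-1]
    for out_dim in sizes:
        W = params[i:i + in_dim * out_dim].reshape(in_dim, out_dim)
        i += in_dim * out_dim
        b = params[i:i + out_dim]
        i += out_dim
        h = np.tanh(h @ W + b)
        in_dim = out_dim
    return h

# synthetic stand-in for (noisy measurement, TLS parameter) pairs
X = rng.normal(size=(256, 8))
Y = np.tanh(X @ rng.normal(size=(8, 3)))

def fitness(params):
    return -np.mean((mlp_forward(params, X) - Y) ** 2)

# simple (mu, lambda) evolution strategy over the flattened network weights
pop, mu, sigma = 64, 16, 0.1
theta = rng.normal(size=n_params(8), scale=0.1)
for generation in range(200):
    offspring = theta + sigma * rng.normal(size=(pop, theta.size))
    scores = np.array([fitness(o) for o in offspring])
    theta = offspring[np.argsort(scores)[-mu:]].mean(axis=0)   # recombine the best mu
print(fitness(theta))
```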

IntersectGAN: Learning Domain Intersection for Generating Images with Multiple Attributes

Title IntersectGAN: Learning Domain Intersection for Generating Images with Multiple Attributes
Authors Zehui Yao, Boyan Zhang, Zhiyong Wang, Wanli Ouyang, Dong Xu, Dagan Feng
Abstract Generative adversarial networks (GANs) have demonstrated great success in generating various visual content. However, images generated by existing GANs typically exhibit attributes (e.g., a smiling expression) learned from a single image domain. As a result, generating images with multiple attributes requires many real samples possessing all of those attributes, which are very expensive to collect. In this paper, we propose a novel GAN, namely IntersectGAN, to learn multiple attributes from different image domains through an intersecting architecture. For example, given two image domains $X_1$ and $X_2$ with certain attributes, the intersection $X_1 \cap X_2$ denotes a new domain where images possess the attributes of both $X_1$ and $X_2$. The proposed IntersectGAN consists of two discriminators $D_1$ and $D_2$ that distinguish between generated and real samples of the two domains, and three generators, of which the intersection generator is trained against both discriminators; an overall adversarial loss function is defined over the three generators. As a result, IntersectGAN can be trained on multiple domains, each presenting one specific attribute, and eventually eliminates the need for real sample images that simultaneously possess multiple attributes. Using the CelebFaces Attributes dataset, IntersectGAN is able to produce high-quality face images possessing multiple attributes (e.g., a face with black hair and a smiling expression). Both qualitative and quantitative evaluations are conducted to compare IntersectGAN with baseline methods, and several different applications of IntersectGAN are explored with promising results.
Tasks
Published 2019-09-21
URL https://arxiv.org/abs/1909.09767v2
PDF https://arxiv.org/pdf/1909.09767v2.pdf
PWC https://paperswithcode.com/paper/190909767
Repo
Framework
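
A schematic sketch of the adversarial objective structure described in the abstract: two discriminators, three generators, with the intersection generator trained against both discriminators. A standard minimax GAN loss and discriminators that output probabilities are assumed; the paper's exact loss weighting may differ.

```python
import torch

def intersection_adversarial_losses(d1, d2, g1, g2, g_cap, z, real1, real2, eps=1e-8):
    """Discriminator and generator losses for an intersecting GAN (sketch).

    g1 / g2 generate for domains X1 / X2; g_cap generates for the intersection
    X1 ∩ X2 and must fool both discriminators d1 and d2 (outputs in (0, 1)).
    """
    fake1, fake2, fake_cap = g1(z), g2(z), g_cap(z)
    # each discriminator separates its real domain from its own fakes and the intersection fakes
    loss_d1 = -(torch.log(d1(real1) + eps).mean()
                + torch.log(1 - d1(fake1) + eps).mean()
                + torch.log(1 - d1(fake_cap) + eps).mean())
    loss_d2 = -(torch.log(d2(real2) + eps).mean()
                + torch.log(1 - d2(fake2) + eps).mean()
                + torch.log(1 - d2(fake_cap) + eps).mean())
    # generators fool their own discriminator; the intersection generator fools both
    loss_g = -(torch.log(d1(fake1) + eps).mean() + torch.log(d2(fake2) + eps).mean()
               + torch.log(d1(fake_cap) + eps).mean() + torch.log(d2(fake_cap) + eps).mean())
    return loss_d1, loss_d2, loss_g
```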

Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning

Title Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning
Authors Nathan Kallus, Masatoshi Uehara
Abstract Off-policy evaluation (OPE) in reinforcement learning is notoriously difficult in long- and infinite-horizon settings due to diminishing overlap between behavior and target policies. In this paper, we study the role of Markovian, time-invariant, and ergodic structure in efficient OPE. We first derive the efficiency limits for OPE when one assumes each of these structures. This precisely characterizes the curse of horizon: in time-variant processes, OPE is only feasible in the near-on-policy setting, where behavior and target policies are sufficiently similar. In ergodic time-invariant Markov decision processes, however, our bounds show that truly-off-policy evaluation is feasible, even with just one dependent trajectory, and provide the limits of how well we could hope to do. We develop a new estimator based on Double Reinforcement Learning (DRL) that leverages this structure for OPE. Our DRL estimator simultaneously uses estimated stationary density ratios and $q$-functions; it remains efficient when both are estimated at slow, nonparametric rates and remains consistent when either is estimated consistently. We investigate these properties and the performance benefits of leveraging the problem structure for more efficient OPE.
Tasks
Published 2019-09-12
URL https://arxiv.org/abs/1909.05850v2
PDF https://arxiv.org/pdf/1909.05850v2.pdf
PWC https://paperswithcode.com/paper/efficiently-breaking-the-curse-of-horizon
Repo
Framework
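
A schematic NumPy sketch of the kind of estimator the abstract describes, combining an estimated stationary density ratio with an estimated q-function in a doubly-robust-looking correction term. The function signature and the initial-state value term `v0` are assumptions; the paper's efficient DRL estimator is derived more carefully.

```python
import numpy as np

def drl_ope_estimate(w, q, v0, s, a, r, s_next, a_next, gamma=1.0):
    """Schematic doubly-robust style OPE estimate from logged transitions.

    w      : estimated stationary density ratio w(s, a) = d_pi(s, a) / d_b(s, a)
    q      : estimated q-function of the target policy
    v0     : estimate of the initial-state value E_{s ~ d0}[ V^pi(s) ]
    arrays : transitions (s, a, r, s', a' ~ pi) collected under the behaviour policy
    """
    td_residual = r + gamma * q(s_next, a_next) - q(s, a)   # correction term
    return v0 + np.mean(w(s, a) * td_residual)
```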

A Saliency Dataset of Head and Eye Movements for Augmented Reality

Title A Saliency Dataset of Head and Eye Movements for Augmented Reality
Authors Yucheng Zhu, Dandan Zhu, Yiwei Yang, Huiyu Duan, Qiangqiang Zhou, Xiongkuo Min, Jiantao Zhou, Guangtao Zhai, Xiaokang Yang
Abstract In augmented reality (AR), correct and precise estimation of users' visual fixations and head movements can enhance the quality of experience by allocating more computation resources to the analysis, rendering and 3D registration of the areas of interest. However, there is no research on understanding the visual exploration of users of an AR system or on modeling AR visual attention. To bridge the gap between the real-world scene and the scene augmented by virtual information, we construct the ARVR saliency dataset with 100 diverse videos evaluated by 20 people. The virtual reality (VR) technique is employed to simulate the real world, and annotations of object recognition and tracking as augmented contents are blended into the omnidirectional videos. Users can get the sense of experiencing AR when watching the augmented videos. The saliency annotations of head and eye movements for both the original and augmented videos are collected, and together they constitute the ARVR dataset.
Tasks Object Recognition
Published 2019-12-12
URL https://arxiv.org/abs/1912.05971v1
PDF https://arxiv.org/pdf/1912.05971v1.pdf
PWC https://paperswithcode.com/paper/a-saliency-dataset-of-head-and-eye-movements
Repo
Framework

A Computational Framework for Motor Skill Acquisition

Title A Computational Framework for Motor Skill Acquisition
Authors Krishn Bera, Tejas Savalia, Bapi Raju
Abstract There have been numerous attempts to explain general learning behaviours with various cognitive models. Multiple hypotheses have been put forth to qualitatively argue for the best-fit model of the motor skill acquisition task and its variations. In this context, for a discrete sequence production (DSP) task, one of the most insightful models is Verwey's Dual Processor Model (DPM). It largely explains the learning and behavioural phenomena of skilled discrete key-press sequences without providing any concrete computational basis of reinforcement. We therefore propose a quantitative explanation for Verwey's DPM hypothesis by experimentally establishing a general computational framework for motor skill learning. We attempt to combine the qualitative and quantitative theories based on a best-fit model of experimental simulations of variations of the dual processor model. The fundamental premise of sequential decision making for skill learning is based on interacting model-based (MB) and model-free (MF) reinforcement learning (RL) processes. Our unifying framework shows that the proposed idea agrees well with Verwey's DPM and Fitts' three phases of skill learning. The accuracy of our model can further be validated by its statistical fit with human-generated data on simple environment tasks like grid-world.
Tasks Decision Making
Published 2019-01-03
URL http://arxiv.org/abs/1901.01856v1
PDF http://arxiv.org/pdf/1901.01856v1.pdf
PWC https://paperswithcode.com/paper/a-computational-framework-for-motor-skill
Repo
Framework
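
A compact sketch of the interacting model-based and model-free components the framework builds on: a Q-learning update for the habitual (MF) system, value iteration on a learned model for the goal-directed (MB) system, and a softmax arbitration between the two. Parameter values and the arbitration rule are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def q_learning_update(q_mf, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Model-free (habitual) component: one-step Q-learning."""
    q_mf[s, a] += alpha * (r + gamma * q_mf[s_next].max() - q_mf[s, a])

def model_based_q(T, R, gamma=0.95, iters=50):
    """Model-based (goal-directed) component: value iteration on a learned model.

    T : (S, A, S) transition probabilities, R : (S, A) expected rewards.
    """
    S, A, _ = T.shape
    q = np.zeros((S, A))
    for _ in range(iters):
        v = q.max(axis=1)
        q = R + gamma * T @ v
    return q

def act(q_mb, q_mf, s, w=0.5, beta=3.0):
    """Arbitrated choice: softmax over a weighted mix of MB and MF action values."""
    q = w * q_mb[s] + (1 - w) * q_mf[s]
    p = np.exp(beta * (q - q.max()))
    return np.random.choice(len(q), p=p / p.sum())
```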

Associative Alignment for Few-shot Image Classification

Title Associative Alignment for Few-shot Image Classification
Authors Arman Afrasiyabi, Jean-François Lalonde, Christian Gagné
Abstract Few-shot image classification aims at training a model from only a few examples of each novel class. This paper proposes the idea of associative alignment for leveraging part of the base data by aligning the novel training instances with closely related ones in the base training set. This expands the size of the effective novel training set by adding extra related base instances to the few novel ones, thereby allowing constructive fine-tuning. We propose two associative alignment strategies: 1) a metric-learning loss for minimizing the distance between related base samples and the centroid of novel instances in the feature space, and 2) a conditional adversarial alignment loss based on the Wasserstein distance. Experiments on four standard datasets and three popular backbones demonstrate that our centroid-based alignment loss results in absolute accuracy improvements of 4.4%, 1.2%, and 6.2% in 5-shot learning over the state of the art for object recognition, fine-grained classification, and cross-domain adaptation, respectively.
Tasks Domain Adaptation, Few-Shot Image Classification, Image Classification, Meta-Learning, Metric Learning, Object Recognition
Published 2019-12-11
URL https://arxiv.org/abs/1912.05094v2
PDF https://arxiv.org/pdf/1912.05094v2.pdf
PWC https://paperswithcode.com/paper/associative-alignment-for-few-shot-image
Repo
Framework
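
A minimal PyTorch sketch of the first alignment strategy, the centroid-based metric-learning loss: each related base sample is pulled toward the centroid of the novel instances of the class it was associated with. Argument names and the squared-distance form are assumptions for illustration.

```python
import torch

def centroid_alignment_loss(base_feats, base_assoc, novel_feats, novel_labels, num_classes):
    """Centroid-based alignment loss (sketch).

    base_feats   : (N, D) features of related base samples
    base_assoc   : (N,)   novel class each base sample is associated with
    novel_feats  : (M, D) features of the few novel support samples
    novel_labels : (M,)   their class labels (every class assumed present)
    """
    centroids = torch.stack([novel_feats[novel_labels == c].mean(dim=0)
                             for c in range(num_classes)])          # (C, D) novel-class centroids
    diffs = base_feats - centroids[base_assoc]                      # distance of each base sample
    return diffs.pow(2).sum(dim=1).mean()                           # to its assigned centroid
```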

How to iron out rough landscapes and get optimal performances: Averaged Gradient Descent and its application to tensor PCA

Title How to iron out rough landscapes and get optimal performances: Averaged Gradient Descent and its application to tensor PCA
Authors Giulio Biroli, Chiara Cammarota, Federico Ricci-Tersenghi
Abstract In many high-dimensional estimation problems the main task consists in minimizing a cost function, which is often strongly non-convex when scanned in the space of parameters to be estimated. A standard solution to flatten the corresponding rough landscape consists in summing the losses associated with different data points to obtain a smoother empirical risk. Here we propose a complementary method that works for a single data point. The main idea is that a large amount of the roughness is uncorrelated across different parts of the landscape. One can then substantially reduce the noise by evaluating an empirical average of the gradient obtained as a sum over many random independent positions in the space of parameters to be optimized. We present an algorithm based on this idea, called Averaged Gradient Descent, and apply it to tensor PCA, which is a very hard estimation problem. We show that Averaged Gradient Descent outperforms physical algorithms such as gradient descent and approximate message passing and matches the best algorithmic thresholds known so far, obtained by tensor unfolding and methods based on sum-of-squares.
Tasks
Published 2019-05-29
URL https://arxiv.org/abs/1905.12294v3
PDF https://arxiv.org/pdf/1905.12294v3.pdf
PWC https://paperswithcode.com/paper/how-to-iron-out-rough-landscapes-and-get
Repo
Framework
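
A schematic NumPy reading of the abstract's idea for order-3 tensor PCA: the gradient of the tensor score is averaged over many random independent positions on the sphere, so the uncorrelated roughness largely cancels while the signal direction survives. Step size, number of probes, and the update rule are assumptions; the paper specifies the actual algorithm and its thresholds.

```python
import numpy as np

rng = np.random.default_rng(1)

def tensor_score_grad(T, x):
    """Gradient of the tensor-PCA score <T, x ⊗ x ⊗ x> for a symmetric tensor T."""
    return 3 * np.einsum('ijk,j,k->i', T, x, x)

def averaged_gradient_step(T, x, n_probes=100, step=0.1):
    """One Averaged Gradient Descent step (schematic): average the score gradient
    over many independent random positions on the sphere so the uncorrelated
    roughness cancels, then move along the averaged direction and re-normalise."""
    probes = rng.normal(size=(n_probes, x.size))
    probes /= np.linalg.norm(probes, axis=1, keepdims=True)
    g = np.mean([tensor_score_grad(T, p) for p in probes], axis=0)
    x = x + step * g
    return x / np.linalg.norm(x)
```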

Evaluation of Greek Word Embeddings

Title Evaluation of Greek Word Embeddings
Authors Stamatis Outsios, Christos Karatsalos, Konstantinos Skianis, Michalis Vazirgiannis
Abstract Since word embeddings have become the most popular input for many NLP tasks, evaluating their quality is of critical importance. Most research efforts focus on English word embeddings. This paper addresses the problem of constructing and evaluating such models for the Greek language. We created a new word analogy corpus based on the original English Word2vec word analogy corpus as well as some linguistic aspects specific to the Greek language. Moreover, we created a Greek version of the WordSim353 corpus for a basic evaluation of word similarities. We tested seven word vector models, and our evaluation showed that we are able to create meaningful representations. Finally, we found that the morphological complexity of the Greek language and polysemy can influence the quality of the resulting word embeddings.
Tasks Word Embeddings
Published 2019-04-08
URL https://arxiv.org/abs/1904.04032v2
PDF https://arxiv.org/pdf/1904.04032v2.pdf
PWC https://paperswithcode.com/paper/evaluation-of-greek-word-embeddings
Repo
Framework
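
Word analogy evaluation of the Word2vec kind mentioned in the abstract reduces to a nearest-neighbour query under cosine similarity; a small NumPy sketch follows. The toy Greek embedding table is random and purely illustrative; a real evaluation would load the trained Greek vectors.

```python
import numpy as np

def solve_analogy(emb, a, b, c):
    """3CosAdd analogy: word closest to vec(b) - vec(a) + vec(c),
    excluding the three query words (standard Word2vec-style evaluation)."""
    words = list(emb)
    M = np.stack([emb[w] for w in words])
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    target = emb[b] - emb[a] + emb[c]
    target = target / np.linalg.norm(target)
    scores = M @ target
    for w in (a, b, c):
        scores[words.index(w)] = -np.inf
    return words[int(np.argmax(scores))]

# toy embedding table; real vectors from the paper would be loaded instead
emb = {w: np.random.randn(50) for w in ['βασιλιάς', 'βασίλισσα', 'άντρας', 'γυναίκα']}
print(solve_analogy(emb, 'άντρας', 'γυναίκα', 'βασιλιάς'))
```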

Automated Quality Control in Image Segmentation: Application to the UK Biobank Cardiac MR Imaging Study

Title Automated Quality Control in Image Segmentation: Application to the UK Biobank Cardiac MR Imaging Study
Authors Robert Robinson, Vanya V. Valindria, Wenjia Bai, Ozan Oktay, Bernhard Kainz, Hideaki Suzuki, Mihir M. Sanghvi, Nay Aung, José Miguel Paiva, Filip Zemrak, Kenneth Fung, Elena Lukaschuk, Aaron M. Lee, Valentina Carapella, Young Jin Kim, Stefan K. Piechnik, Stefan Neubauer, Steffen E. Petersen, Chris Page, Paul M. Matthews, Daniel Rueckert, Ben Glocker
Abstract Background: The trend towards large-scale studies including population imaging poses new challenges in terms of quality control (QC). This is a particular issue when automatic processing tools, e.g. image segmentation methods, are employed to derive quantitative measures or biomarkers for later analyses. Manual inspection and visual QC of each segmentation is not feasible at large scale. However, it is important to be able to automatically detect when a segmentation method fails, so as to avoid the inclusion of wrong measurements in subsequent analyses, which could lead to incorrect conclusions. Methods: To overcome this challenge, we explore an approach for predicting segmentation quality based on Reverse Classification Accuracy (RCA), which enables us to discriminate between successful and failed segmentations on a per-case basis. We validate this approach on a new, large-scale, manually annotated set of 4,800 cardiac magnetic resonance scans. We then apply our method to a large cohort of 7,250 cardiac MRI scans on which we have performed manual QC. Results: We report results for predicting segmentation quality metrics including the Dice Similarity Coefficient (DSC) and surface-distance measures. As initial validation, we present data for 400 scans demonstrating 99% accuracy in classifying low- and high-quality segmentations using predicted DSC scores. As further validation, we show high correlation between real and predicted scores and 95% classification accuracy on 4,800 scans for which manual segmentations were available. We mimic real-world application of the method on 7,250 cardiac MRI scans, where we show good agreement between predicted quality metrics and manual visual QC scores. Conclusions: We show that RCA has the potential for accurate and fully automatic segmentation QC on a per-case basis in the context of large-scale population imaging, as in the UK Biobank Imaging Study.
Tasks Semantic Segmentation
Published 2019-01-27
URL http://arxiv.org/abs/1901.09351v1
PDF http://arxiv.org/pdf/1901.09351v1.pdf
PWC https://paperswithcode.com/paper/automated-quality-control-in-image
Repo
Framework
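
A high-level sketch of the Reverse Classification Accuracy idea used for quality prediction: a segmenter is trained with the test segmentation as pseudo ground truth, evaluated on a reference database with known segmentations, and the best Dice it achieves there serves as a proxy for the unknown quality of the test segmentation. The `train_and_segment` callable is a placeholder assumption, not the paper's code.

```python
import numpy as np

def dice(seg_a, seg_b, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * inter / (seg_a.sum() + seg_b.sum() + eps)

def rca_predicted_dsc(test_image, test_seg, reference_images, reference_segs, train_and_segment):
    """Predict the quality of test_seg via Reverse Classification Accuracy (sketch).

    train_and_segment(image, mask) must return a callable that segments new images,
    having been trained only on the single (image, mask) pair supplied.
    """
    segmenter = train_and_segment(test_image, test_seg)          # test case as pseudo ground truth
    return max(dice(segmenter(img), gt)                          # best DSC on the reference database
               for img, gt in zip(reference_images, reference_segs))
```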

Deep Learning Methods for Event Verification and Image Repurposing Detection

Title Deep Learning Methods for Event Verification and Image Repurposing Detection
Authors M. Goebel, A. Flenner, L. Nataraj, B. S. Manjunath
Abstract The authenticity of images posted on social media is an issue of growing concern. Many algorithms have been developed to detect manipulated images, but few have investigated the ability of deep-neural-network-based approaches to verify the authenticity of image labels, such as event names. In this paper, we propose several novel methods to predict whether an image was captured at one of several noteworthy events. We use a set of images from several recorded events such as storms, marathons, protests, and other large public gatherings. Two strategies for applying a pre-trained ImageNet network to event verification are presented, with two modifications for each strategy. The first method uses the features from the last convolutional layer of a pre-trained network as input to a classifier. We also consider the effect of tuning the convolutional weights of the pre-trained network to improve classification. The second method combines features extracted at smaller scales and uses the output of a pre-trained network as the input to a second classifier. For both methods, we investigated several different classifiers and tested many different pre-trained networks. Our experiments demonstrate that both approaches are effective for event verification and image repurposing detection. Classification at the global scale tends to marginally outperform our tested local methods, and fine-tuning the network further improves the results.
Tasks
Published 2019-02-11
URL http://arxiv.org/abs/1902.04038v1
PDF http://arxiv.org/pdf/1902.04038v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-methods-for-event-verification
Repo
Framework
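
A minimal sketch of the first strategy from the abstract: features from a pre-trained ImageNet backbone feed a new event-classification head, with a flag for the variant that also fine-tunes the convolutional weights. The backbone choice and hyperparameters are assumptions; in practice the ImageNet weights would be loaded rather than left random.

```python
import torch
from torchvision import models

def build_event_classifier(num_events, tune_backbone=False):
    """Pre-trained-style backbone with a new event-verification head (sketch)."""
    net = models.resnet50()                                     # ImageNet weights would be loaded here
    for p in net.parameters():
        p.requires_grad = tune_backbone                         # freeze or fine-tune the convolutional stack
    net.fc = torch.nn.Linear(net.fc.in_features, num_events)    # new head is always trained
    return net

model = build_event_classifier(num_events=5, tune_backbone=False)
optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
```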