April 3, 2020

3290 words 16 mins read

Paper Group AWR 28

Using Reinforcement Learning in the Algorithmic Trading Problem

Title Using Reinforcement Learning in the Algorithmic Trading Problem
Authors Evgeny Ponomarev, Ivan Oseledets, Andrzej Cichocki
Abstract The development of reinforcement learning methods has extended their application to many areas, including algorithmic trading. In this paper, trading on the stock exchange is interpreted as a game with a Markov property consisting of states, actions, and rewards. A system for trading a fixed volume of a financial instrument is proposed and experimentally tested; it is based on the asynchronous advantage actor-critic method and uses several neural network architectures. The application of recurrent layers in this approach is investigated. The experiments were performed on real anonymized data. The best architecture demonstrated a trading strategy for the RTS Index futures (MOEX:RTSI) with a profitability of 66% per annum, accounting for commission. The project source code is available via the following link: http://github.com/evgps/a3c_trading.
Tasks
Published 2020-02-26
URL https://arxiv.org/abs/2002.11523v1
PDF https://arxiv.org/pdf/2002.11523v1.pdf
PWC https://paperswithcode.com/paper/using-reinforcement-learning-in-the
Repo https://github.com/evgps/a3c_trading
Framework tf
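
As a toy illustration of the Markov game formulation above (states, actions, rewards with commission), here is a minimal sketch. The environment interface, the fixed-volume position logic, and the commission value are assumptions for illustration; the A3C agent itself is omitted.

```python
# Toy Markov trading game: the agent holds or exits a fixed-volume long
# position, and the reward is the position's return minus a commission
# charged whenever the position changes.
class TradingEnv:
    ACTIONS = ("flat", "long")  # fixed volume: out of or in the market

    def __init__(self, returns, commission=0.0005):
        self.returns = returns          # per-step returns of the instrument
        self.commission = commission
        self.t = 0
        self.position = 0               # 0 = flat, 1 = long

    def step(self, action):
        new_position = self.ACTIONS.index(action)
        cost = self.commission if new_position != self.position else 0.0
        reward = new_position * self.returns[self.t] - cost
        self.position = new_position
        self.t += 1
        done = self.t >= len(self.returns)
        return reward, done

env = TradingEnv([0.01, -0.02, 0.03])
r1, _ = env.step("long")    # pay commission, earn the +1% move
r2, _ = env.step("flat")    # pay commission, sit out the -2% move
r3, done = env.step("long")
```

An actor-critic agent would then learn a policy over these actions from the stream of (state, action, reward) tuples.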

Hyperspectral Classification Based on 3D Asymmetric Inception Network with Data Fusion Transfer Learning

Title Hyperspectral Classification Based on 3D Asymmetric Inception Network with Data Fusion Transfer Learning
Authors Haokui Zhang, Yu Liu, Bei Fang, Ying Li, Lingqiao Liu, Ian Reid
Abstract Hyperspectral image (HSI) classification has improved with convolutional neural networks (CNNs) in recent years. Unlike RGB datasets, HSI datasets are generally captured by various remote sensors and have different spectral configurations. Moreover, each HSI dataset contains only very limited training samples, so deep CNNs are prone to overfitting. In this paper, we first deliver a 3D asymmetric inception network, AINet, to overcome the overfitting problem. With its emphasis on spectral signatures over the spatial contexts of HSI data, AINet can convey and classify features effectively. In addition, the proposed data fusion transfer learning strategy is beneficial in boosting classification performance. Extensive experiments show that the proposed approach beats the state-of-the-art methods on several HSI benchmarks, including Pavia University, Indian Pines and Kennedy Space Center (KSC). Code can be found at: https://github.com/UniLauX/AINet.
Tasks Transfer Learning
Published 2020-02-11
URL https://arxiv.org/abs/2002.04227v1
PDF https://arxiv.org/pdf/2002.04227v1.pdf
PWC https://paperswithcode.com/paper/hyperspectral-classification-based-on-3d
Repo https://github.com/UniLauX/AINet
Framework pytorch
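
One way an asymmetric 3D block fights overfitting is by factorizing a full 3D kernel into a spectral convolution plus a spatial convolution, cutting parameters. A rough sketch of the arithmetic (the kernel sizes and channel widths here are assumptions, not AINet's exact configuration):

```python
def conv3d_params(c_in, c_out, kd, kh, kw):
    # Weight-tensor parameter count of a 3D convolution (bias omitted).
    return c_in * c_out * kd * kh * kw

c_in, c_out, k = 64, 64, 3
full = conv3d_params(c_in, c_out, k, k, k)        # one 3x3x3 kernel
asym = (conv3d_params(c_in, c_out, k, 1, 1)       # spectral 3x1x1
        + conv3d_params(c_out, c_out, 1, k, k))   # followed by spatial 1x3x3
print(full, asym)  # the factorized pair uses well under half the weights
```

Fewer weights means less capacity to memorize the handful of labelled pixels each HSI dataset provides.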

Variational Inference with Vine Copulas: An efficient Approach for Bayesian Computer Model Calibration

Title Variational Inference with Vine Copulas: An efficient Approach for Bayesian Computer Model Calibration
Authors Vojtech Kejzlar, Tapabrata Maiti
Abstract With the advancements of computer architectures, the use of computational models proliferates to solve complex problems in many scientific applications such as nuclear physics and climate research. However, the potential of such models is often hindered because they tend to be computationally expensive and consequently ill-suited to uncertainty quantification. Furthermore, they are usually not calibrated with real-time observations. We develop a computationally efficient algorithm based on variational Bayes inference (VBI) for the calibration of computer models with Gaussian processes. Unfortunately, the speed and scalability of VBI diminish when applied to the calibration framework with dependent data. To preserve the efficiency of VBI, we adopt a pairwise decomposition of the data likelihood using vine copulas that separate the information on the dependence structure in the data from their marginal distributions. We provide both theoretical and empirical evidence for the computational scalability of our methodology and describe all the necessary details for an efficient implementation of the proposed algorithm. We also demonstrate the opportunities our method offers to practitioners on a real data example, through calibration of the Liquid Drop Model of nuclear binding energies.
Tasks Calibration, Gaussian Processes
Published 2020-03-28
URL https://arxiv.org/abs/2003.12890v1
PDF https://arxiv.org/pdf/2003.12890v1.pdf
PWC https://paperswithcode.com/paper/variational-inference-with-vine-copulas-an
Repo https://github.com/kejzlarv/VBI_Calibration
Framework none
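
The key ingredient, separating marginal distributions from the dependence structure (Sklar's theorem), can be checked numerically for a single Gaussian pair. Vine copulas build multivariate dependence out of many such bivariate pair-copulas; the snippet below only verifies the decomposition for one pair and is not the paper's vine construction.

```python
import math

def std_normal_logpdf(x):
    return -0.5 * (x * x + math.log(2 * math.pi))

def bivariate_normal_logpdf(x, y, rho):
    # Standard bivariate normal with correlation rho.
    q = (x * x - 2 * rho * x * y + y * y) / (1 - rho**2)
    return -0.5 * q - math.log(2 * math.pi) - 0.5 * math.log(1 - rho**2)

def gaussian_copula_logdensity(x, y, rho):
    # Gaussian copula log-density written in terms of the normal
    # scores x, y (i.e. c(u, v) with u = Phi(x), v = Phi(y)).
    return (-0.5 * math.log(1 - rho**2)
            - (rho**2 * (x * x + y * y) - 2 * rho * x * y)
            / (2 * (1 - rho**2)))

rho, x, y = 0.6, 0.3, -0.8
# Sklar's theorem: joint log-density = copula log-density
#                  + the marginal log-densities.
lhs = bivariate_normal_logpdf(x, y, rho)
rhs = (gaussian_copula_logdensity(x, y, rho)
       + std_normal_logpdf(x) + std_normal_logpdf(y))
```

The paper exploits exactly this split: the copula terms carry the dependence, so the VBI objective factorizes over pairs.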

Ensemble neural network forecasts with singular value decomposition

Title Ensemble neural network forecasts with singular value decomposition
Authors Sebastian Scher, Gabriele Messori
Abstract Ensemble weather forecasts enable a measure of uncertainty to be attached to each forecast by computing the ensemble’s spread. However, generating an ensemble with a good error-spread relationship is far from trivial, and a wide range of approaches to achieve this have been explored. Random perturbations of the initial model state typically provide unsatisfactory results when applied to numerical weather prediction models. Singular value decomposition has proved more successful in this context, and as a result has been widely used for creating perturbed initial states of weather prediction models. We demonstrate how to apply the technique of singular value decomposition to purely neural-network-based forecasts. Additionally, we explore the use of random initial perturbations for neural network ensembles, and the creation of neural network ensembles via retraining the network. We find that the singular value decomposition results in ensemble forecasts that have some probabilistic skill, but are inferior to the ensemble created by retraining the neural network several times. Compared to random initial perturbations, the singular value technique performs better when forecasting a simple general circulation model, comparably when forecasting atmospheric reanalysis data, and worse when forecasting the Lorenz95 system, a highly idealized model designed to mimic certain aspects of the mid-latitude atmosphere.
Tasks
Published 2020-02-13
URL https://arxiv.org/abs/2002.05398v1
PDF https://arxiv.org/pdf/2002.05398v1.pdf
PWC https://paperswithcode.com/paper/ensemble-neural-network-forecasts-with
Repo https://github.com/sipposip/nn-svd-weather
Framework tf
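
A minimal numpy sketch of SVD-based initial perturbations, using a random linear map as a stand-in for the forecast model's Jacobian (the ensemble size and perturbation amplitude are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))   # stand-in for the network's Jacobian
x0 = np.ones(3)                   # unperturbed initial state

# Right singular vectors of M, ordered by singular value: the leading
# ones are the initial directions the forecast amplifies the most.
_, s, vt = np.linalg.svd(M)
eps, n_vecs = 0.01, 2
members = [x0] + [x0 + sign * eps * vt[i]
                  for i in range(n_vecs) for sign in (+1, -1)]

# Growth of each perturbation under one application of the model.
growth = [np.linalg.norm(M @ (m - x0)) for m in members[1:]]
```

Perturbing along leading singular vectors, rather than randomly, maximizes how fast the ensemble spreads, which is why the technique tends to produce better error-spread relationships than random noise.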

Error detection in Knowledge Graphs: Path Ranking, Embeddings or both?

Title Error detection in Knowledge Graphs: Path Ranking, Embeddings or both?
Authors R. Fasoulis, K. Bougiatiotis, F. Aisopos, A. Nentidis, G. Paliouras
Abstract This paper attempts to compare and combine different approaches for detecting errors in Knowledge Graphs. Knowledge Graphs constitute a mainstream approach for the representation of relational information on big heterogeneous data; however, they may contain a large amount of imputed noise when constructed automatically. To address this problem, different error detection methodologies have been proposed, mainly focusing on path ranking and representation learning. This work presents various mainstream approaches and proposes a novel hybrid and modular methodology for the task. We compare these methods on two benchmarks and one real-world biomedical publications dataset, showcasing the potential of our approach and drawing insights regarding the state of the art in error detection in Knowledge Graphs.
Tasks Knowledge Graphs, Representation Learning
Published 2020-02-19
URL https://arxiv.org/abs/2002.08762v1
PDF https://arxiv.org/pdf/2002.08762v1.pdf
PWC https://paperswithcode.com/paper/error-detection-in-knowledge-graphs-path
Repo https://github.com/RomFas/PRGE
Framework none
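
To make the hybrid idea concrete, here is a toy combination of a TransE-style embedding score with a path-based feature. The entities, the linear mixing, and the path feature are illustrative assumptions, not the paper's PRGE model.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
ent = {e: rng.standard_normal(dim) for e in ("paris", "france", "berlin")}
rel = {"capital_of": rng.standard_normal(dim)}

def transe_score(h, r, t):
    # TransE plausibility of triple (h, r, t): higher is more plausible.
    return -np.linalg.norm(ent[h] + rel[r] - ent[t])

def path_score(h, t, paths):
    # Stand-in for a path-ranking feature, e.g. fraction of supporting
    # paths between h and t found in the graph.
    return paths.get((h, t), 0.0)

def hybrid_score(h, r, t, paths, alpha=0.5):
    # Hybrid: interpolate the embedding and path-based evidence.
    return (alpha * transe_score(h, r, t)
            + (1 - alpha) * path_score(h, t, paths))

paths = {("paris", "france"): 1.0}
score = hybrid_score("paris", "capital_of", "france", paths)
```

Triples whose hybrid score falls below a threshold would be flagged as likely errors.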

ML-SIM: A deep neural network for reconstruction of structured illumination microscopy images

Title ML-SIM: A deep neural network for reconstruction of structured illumination microscopy images
Authors Charles N. Christensen, Edward N. Ward, Pietro Lio, Clemens F. Kaminski
Abstract Structured illumination microscopy (SIM) has become an important technique for optical super-resolution imaging because it allows a doubling of image resolution at speeds compatible with live-cell imaging. However, the reconstruction of SIM images is often slow and prone to artefacts. Here we propose a versatile reconstruction method, ML-SIM, which makes use of machine learning. The model is an end-to-end deep residual neural network that is trained on a simulated data set to be free of common SIM artefacts. ML-SIM is thus robust to noise and irregularities in the illumination patterns of the raw SIM input frames. The reconstruction method is widely applicable and does not require the acquisition of experimental training data. Since the training data are generated from simulations of the SIM process on images from generic libraries, the method can be efficiently adapted to specific experimental SIM implementations. The reconstruction quality enabled by our method is compared with traditional SIM reconstruction methods, and we demonstrate advantages in terms of noise, reconstruction fidelity and contrast for both simulated and experimental inputs. In addition, reconstruction of one SIM frame typically takes only ~100 ms on PCs with modern Nvidia graphics cards, making the technique compatible with real-time imaging. The full implementation and the trained networks are available at http://ML-SIM.com.
Tasks Super-Resolution
Published 2020-03-24
URL https://arxiv.org/abs/2003.11064v1
PDF https://arxiv.org/pdf/2003.11064v1.pdf
PWC https://paperswithcode.com/paper/ml-sim-a-deep-neural-network-for
Repo https://github.com/charlesnchr/ML-SIM
Framework none
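
The training-data trick, simulating the SIM acquisition on generic images, can be sketched as multiplying a sample with shifted sinusoidal stripe patterns. The pattern frequency and the 3 angles x 3 phases grid below are conventional SIM choices, not necessarily ML-SIM's exact simulation.

```python
import numpy as np

def simulate_sim_stack(image, k=0.2, angles=(0, np.pi / 3, 2 * np.pi / 3),
                       phases=(0, 2 * np.pi / 3, 4 * np.pi / 3)):
    # Multiply the sample with shifted sinusoidal stripe patterns:
    # 3 angles x 3 phases = the 9 raw frames of a classic SIM acquisition.
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    stack = []
    for a in angles:
        for p in phases:
            stripes = np.cos(k * (xx * np.cos(a) + yy * np.sin(a)) + p)
            stack.append(image * 0.5 * (1 + stripes))
    return np.stack(stack)

sample = np.random.default_rng(0).random((32, 32))
stack = simulate_sim_stack(sample)   # shape (9, 32, 32)
```

A network trained to map such simulated stacks back to the clean sample never needs experimentally acquired training pairs.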

Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification

Title Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification
Authors Yixiao Ge, Dapeng Chen, Hongsheng Li
Abstract Person re-identification (re-ID) aims at identifying the same persons’ images across different cameras. However, domain diversities between different datasets pose an evident challenge for adapting the re-ID model trained on one dataset to another one. State-of-the-art unsupervised domain adaptation methods for person re-ID transfer the learned knowledge from the source domain by optimizing with pseudo labels created by clustering algorithms on the target domain. Although they achieve state-of-the-art performances, the inevitable label noise caused by the clustering procedure is ignored. Such noisy pseudo labels substantially hinder the model’s capability to further improve feature representations on the target domain. In order to mitigate the effects of noisy pseudo labels, we propose an unsupervised framework, Mutual Mean-Teaching (MMT), which softly refines the pseudo labels in the target domain and learns better features via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternating training manner. In addition, the common practice is to adopt both the classification loss and the triplet loss jointly for achieving optimal performance in person re-ID models. However, the conventional triplet loss cannot work with softly refined labels. To solve this problem, a novel soft softmax-triplet loss is proposed to support learning with soft pseudo triplet labels for achieving the optimal domain adaptation performance. The proposed MMT framework achieves considerable improvements of 14.4%, 18.2%, 13.1% and 16.4% mAP on the Market-to-Duke, Duke-to-Market, Market-to-MSMT and Duke-to-MSMT unsupervised domain adaptation tasks. Code is available at https://github.com/yxgeee/MMT.
Tasks Person Re-Identification, Unsupervised Domain Adaptation, Unsupervised Person Re-Identification
Published 2020-01-06
URL https://arxiv.org/abs/2001.01526v2
PDF https://arxiv.org/pdf/2001.01526v2.pdf
PWC https://paperswithcode.com/paper/mutual-mean-teaching-pseudo-label-refinery-1
Repo https://github.com/yxgeee/MMT
Framework pytorch
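
Two building blocks of the scheme can be sketched quickly: the temporally averaged mean teacher (an exponential moving average of the student's weights) and soft pseudo labels from a peer network's logits. This is a simplified stand-in for MMT's full two-network setup.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.999):
    # Mean teacher: its weights trail the student's as a slow EMA,
    # giving more stable targets than the raw student.
    return {k: alpha * teacher[k] + (1 - alpha) * student[k]
            for k in teacher}

def soft_pseudo_labels(logits, temperature=1.0):
    # On-line refined soft pseudo labels: the peer's softmax outputs.
    z = logits / temperature
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

teacher = {"w": np.array([1.0])}
student = {"w": np.array([0.0])}
teacher = ema_update(teacher, student)
labels = soft_pseudo_labels(np.array([[2.0, 0.0]]))
```

In MMT each network learns from the soft labels produced by the other network's mean teacher, so clustering mistakes are smoothed rather than memorized.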

Morfessor EM+Prune: Improved Subword Segmentation with Expectation Maximization and Pruning

Title Morfessor EM+Prune: Improved Subword Segmentation with Expectation Maximization and Pruning
Authors Stig-Arne Grönroos, Sami Virpioja, Mikko Kurimo
Abstract Data-driven segmentation of words into subword units has been used in various natural language processing applications such as automatic speech recognition and statistical machine translation for almost 20 years. Recently it has become more widely adopted, as models based on deep neural networks often benefit from subword units even for morphologically simpler languages. In this paper, we discuss and compare training algorithms for a unigram subword model, based on the Expectation Maximization algorithm and lexicon pruning. Using English, Finnish, North Sami, and Turkish data sets, we show that this approach is able to find better solutions to the optimization problem defined by the Morfessor Baseline model than its original recursive training algorithm. The improved optimization also leads to higher morphological segmentation accuracy when compared to a linguistic gold standard. We publish implementations of the new algorithms in the widely-used Morfessor software package.
Tasks Machine Translation, Speech Recognition
Published 2020-03-06
URL https://arxiv.org/abs/2003.03131v1
PDF https://arxiv.org/pdf/2003.03131v1.pdf
PWC https://paperswithcode.com/paper/morfessor-emprune-improved-subword
Repo https://github.com/Waino/OpenNMT-py
Framework pytorch
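
The unigram subword model at the core of this line of work scores a segmentation by the sum of its subword log-probabilities. Here is a minimal Viterbi decoder over a toy lexicon; the probabilities are made up, and the EM training and pruning steps are omitted.

```python
import math

def viterbi_segment(word, logprob):
    # Best segmentation under a unigram subword model, by dynamic
    # programming over split points:
    #   best[i] = max over j < i of best[j] + logP(word[j:i]).
    best = [0.0] + [-math.inf] * len(word)
    back = [0] * (len(word) + 1)
    for i in range(1, len(word) + 1):
        for j in range(i):
            piece = word[j:i]
            if piece in logprob and best[j] + logprob[piece] > best[i]:
                best[i] = best[j] + logprob[piece]
                back[i] = j
    segs, i = [], len(word)
    while i > 0:               # walk the backpointers to recover pieces
        segs.append(word[back[i]:i])
        i = back[i]
    return segs[::-1]

lex = {"un": -2.0, "do": -2.5, "ing": -2.2, "undo": -5.5,
       "u": -6.0, "n": -6.0, "d": -6.0, "o": -6.0, "i": -6.0, "g": -6.0}
print(viterbi_segment("undoing", lex))
```

EM re-estimates the subword probabilities from such decodes, and pruning drops low-value lexicon entries between iterations.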

Collaborative Video Object Segmentation by Foreground-Background Integration

Title Collaborative Video Object Segmentation by Foreground-Background Integration
Authors Zongxin Yang, Yunchao Wei, Yi Yang
Abstract In this paper, we investigate the principles of embedding learning between the given reference and the predicted sequence to tackle challenging semi-supervised video object segmentation. Different from previous practices that only explore embedding learning using pixels from the foreground object(s), we consider that the background should be equally treated and thus propose the Collaborative video object segmentation by Foreground-Background Integration (CFBI) approach. Our CFBI implicitly imposes the feature embeddings of the target foreground object and its corresponding background to be contrastive, promoting the segmentation results accordingly. With feature embeddings from both foreground and background, our CFBI performs the matching process between the reference and the predicted sequence at both pixel and instance levels, making CFBI robust to various object scales. We conduct extensive experiments on three popular benchmarks, i.e., DAVIS 2016, DAVIS 2017, and YouTube-VOS. Our CFBI achieves performance (J&F) of 89.4%, 81.9%, and 81.0%, respectively, outperforming all other state-of-the-art methods. Code will be available at https://github.com/z-x-yang/CFBI.
Tasks Semantic Segmentation, Semi-supervised Video Object Segmentation, Video Object Segmentation, Video Semantic Segmentation
Published 2020-03-18
URL https://arxiv.org/abs/2003.08333v1
PDF https://arxiv.org/pdf/2003.08333v1.pdf
PWC https://paperswithcode.com/paper/collaborative-video-object-segmentation-by
Repo https://github.com/z-x-yang/CFBI
Framework none
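
At the pixel level, the foreground-background matching idea reduces to comparing each pixel embedding against both sets of reference embeddings. A toy numpy sketch follows; the real CFBI adds instance-level matching and learned, contrastive embeddings.

```python
import numpy as np

def nearest_similarity(pixels, refs):
    # Cosine similarity of each pixel embedding to its nearest reference.
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    return (p @ r.T).max(axis=1)

def segment(pixels, fg_refs, bg_refs):
    # A pixel is labelled foreground if it matches the foreground
    # reference embeddings better than the background ones.
    return (nearest_similarity(pixels, fg_refs)
            > nearest_similarity(pixels, bg_refs))

fg = np.array([[1.0, 0.0]])                  # reference-frame FG embeddings
bg = np.array([[0.0, 1.0]])                  # reference-frame BG embeddings
pixels = np.array([[0.9, 0.1], [0.2, 0.8]])  # current-frame pixel embeddings
mask = segment(pixels, fg, bg)
```

Treating the background symmetrically gives every pixel a competing hypothesis, which is the collaborative part of the method's name.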

Statistical stability indices for LIME: obtaining reliable explanations for Machine Learning models

Title Statistical stability indices for LIME: obtaining reliable explanations for Machine Learning models
Authors Giorgio Visani, Enrico Bagli, Federico Chesani, Alessandro Poluzzi, Davide Capuzzo
Abstract Nowadays we are witnessing a transformation of business processes towards a more computation-driven approach. The ever-increasing usage of Machine Learning techniques is the clearest example of such a trend. This sort of revolution often provides advantages, such as an increase in prediction accuracy and a reduced time to obtain results. However, these methods present a major drawback: it is very difficult to understand on what grounds the algorithm made its decision. To address this issue we consider the LIME method. We give a general background on LIME and then focus on the stability issue: employing the method repeatedly, under the same conditions, may yield different explanations. Two complementary indices are proposed to measure LIME stability. It is important for the practitioner to be aware of the issue, as well as to have a tool for spotting it. Stability guarantees that LIME explanations are reliable, therefore a stability assessment, made through the proposed indices, is crucial. As a case study, we apply both Machine Learning and classical statistical techniques to Credit Risk data. We test LIME on the Machine Learning algorithm and check its stability. Eventually, we examine the goodness of the explanations returned.
Tasks
Published 2020-01-31
URL https://arxiv.org/abs/2001.11757v1
PDF https://arxiv.org/pdf/2001.11757v1.pdf
PWC https://paperswithcode.com/paper/statistical-stability-indices-for-lime
Repo https://github.com/giorgiovisani/lime_stability
Framework none
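
The instability being measured is easy to reproduce: fit a weighted local surrogate repeatedly and compare the explanations. The sketch below uses a simple Jaccard agreement of top features as a stand-in for the paper's two statistical indices, and a hand-rolled local linear fit in place of the lime library.

```python
import numpy as np

def lime_like_run(f, x0, rng, n=500, sigma=0.5):
    # One LIME-style run: sample around x0, fit a locally weighted
    # linear surrogate, return its feature coefficients.
    X = x0 + sigma * rng.standard_normal((n, x0.size))
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma**2))
    A = np.c_[np.ones(n), X]                       # intercept + features
    coef, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None],
                               f(X) * np.sqrt(w), rcond=None)
    return coef[1:]

def top_feature_stability(coefs, k=2):
    # Mean pairwise Jaccard agreement of the top-k features across runs.
    tops = [set(np.argsort(-np.abs(c))[:k]) for c in coefs]
    pairs = [(a, b) for i, a in enumerate(tops) for b in tops[i + 1:]]
    return np.mean([len(a & b) / len(a | b) for a, b in pairs])

f = lambda X: 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * X[:, 2]
rng = np.random.default_rng(0)
coefs = [lime_like_run(f, np.zeros(3), rng) for _ in range(5)]
```

For this exactly linear model the runs agree perfectly; with a nonlinear black box and fewer samples, the same procedure exposes the instability the paper's indices quantify.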

Transductive Few-shot Learning with Meta-Learned Confidence

Title Transductive Few-shot Learning with Meta-Learned Confidence
Authors Seong Min Kye, Hae Beom Lee, Hoirin Kim, Sung Ju Hwang
Abstract We propose a novel transductive inference framework for metric-based meta-learning models, which updates the prototype of each class with the confidence-weighted average of all the support and query samples. However, a caveat here is that the model confidence may be unreliable, which could lead to incorrect prediction in the transductive setting. To tackle this issue, we further propose to meta-learn to assign correct confidence scores to unlabeled queries. Specifically, we meta-learn the parameters of the distance metric, such that the model can improve its transductive inference performance on unseen tasks with the generated confidence scores. We also consider various types of uncertainties to further enhance the reliability of the meta-learned confidence. We combine our transductive meta-learning scheme, Meta-Confidence Transduction (MCT), with a novel dense classifier, Dense Feature Matching Network (DFMN), which performs both instance-level and feature-level classification without global average pooling, and validate it on four benchmark datasets. Our model achieves state-of-the-art results on all datasets, outperforming existing state-of-the-art models by 11.11% and 7.68% on the miniImageNet and tieredImageNet datasets, respectively. Further qualitative analysis confirms that this impressive performance gain is indeed due to its ability to assign high confidence to instances with the correct labels.
Tasks Few-Shot Image Classification, Few-Shot Learning, Meta-Learning
Published 2020-02-27
URL https://arxiv.org/abs/2002.12017v1
PDF https://arxiv.org/pdf/2002.12017v1.pdf
PWC https://paperswithcode.com/paper/transductive-few-shot-learning-with-meta
Repo https://github.com/seongmin-kye/MCT_DFMN
Framework pytorch
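
The confidence-weighted prototype update can be sketched with plain softmax confidences over Euclidean distances. The meta-learned distance metric and the uncertainty modelling are omitted; the cluster data and temperature below are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transductive_prototypes(protos, queries, steps=3, tau=1.0):
    # Refine class prototypes with confidence-weighted query embeddings:
    # each unlabeled query contributes to every prototype in proportion
    # to its softmax confidence for that class.
    for _ in range(steps):
        dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :],
                               axis=-1)
        conf = softmax(-tau * dists, axis=1)        # (n_queries, n_classes)
        protos = (conf.T @ queries) / conf.sum(axis=0, keepdims=True).T
    return protos

support_means = np.array([[1.0, 1.0], [4.0, 4.0]])   # slightly-off prototypes
queries = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2],
                    [5.0, 5.0], [5.2, 5.0], [5.0, 5.2]])
protos = transductive_prototypes(support_means, queries, tau=5.0)
```

With confident queries the prototypes migrate from the few-shot support means toward the true cluster centers, which is the point of transduction.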

Off-Policy Evaluation and Learning for External Validity under a Covariate Shift

Title Off-Policy Evaluation and Learning for External Validity under a Covariate Shift
Authors Masahiro Kato, Masatoshi Uehara, Shota Yasui
Abstract We consider the evaluation and training of a new policy for the evaluation data by using historical data obtained from a different policy. The goal of off-policy evaluation (OPE) is to estimate the expected reward of a new policy over the evaluation data, and that of off-policy learning (OPL) is to find a new policy that maximizes the expected reward over the evaluation data. Although standard OPE and OPL assume the same covariate distribution between the historical and evaluation data, a covariate shift often exists, i.e., the covariate distribution of the historical data differs from that of the evaluation data. In this paper, we derive the efficiency bound of OPE under a covariate shift. Then, we propose doubly robust and efficient estimators for OPE and OPL under a covariate shift by using an estimator of the density ratio between the distributions of the historical and evaluation data. We also discuss other possible estimators and compare their theoretical properties. Finally, we confirm the effectiveness of the proposed estimators through experiments.
Tasks
Published 2020-02-26
URL https://arxiv.org/abs/2002.11642v1
PDF https://arxiv.org/pdf/2002.11642v1.pdf
PWC https://paperswithcode.com/paper/off-policy-evaluation-and-learning-for
Repo https://github.com/MasaKat0/OPE_CS
Framework none
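
A sketch of the importance-weighting idea: the plain inverse-propensity variant with a known density ratio. The paper's doubly robust estimator adds a regression model on top, which is omitted here.

```python
import numpy as np

def ope_covariate_shift(x, a, r, behavior_prob, target_policy, density_ratio):
    # Importance weighting corrects for both the policy mismatch
    # (target pi over behavior probabilities) and the covariate shift
    # (evaluation over historical covariate densities).
    pi = np.array([target_policy(xi, ai) for xi, ai in zip(x, a)])
    w = np.array([density_ratio(xi) for xi in x])
    return np.mean(w * pi / behavior_prob * r)

rng = np.random.default_rng(0)
n = 20000
x = rng.integers(0, 2, n)            # historical covariates, P(x=1) = 0.5
a = rng.integers(0, 2, n)            # behavior policy: uniform over 2 actions
r = (a == x).astype(float)           # reward 1 when the action matches x
ratio = lambda xi: 1.6 if xi == 1 else 0.4   # evaluation P(x=1) = 0.8
pi = lambda xi, ai: 1.0 if ai == xi else 0.0 # target policy: always a = x

est = ope_covariate_shift(x, a, r, 0.5, pi, ratio)
```

In this constructed example the target policy always earns reward 1 under the evaluation distribution, so the estimate should land near 1.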

Cascaded Human-Object Interaction Recognition

Title Cascaded Human-Object Interaction Recognition
Authors Tianfei Zhou, Wenguan Wang, Siyuan Qi, Haibin Ling, Jianbing Shen
Abstract Rapid progress has been witnessed for human-object interaction (HOI) recognition, but most existing models are confined to single-stage reasoning pipelines. Considering the intrinsic complexity of the task, we introduce a cascade architecture for a multi-stage, coarse-to-fine HOI understanding. At each stage, an instance localization network progressively refines HOI proposals and feeds them into an interaction recognition network. Each of the two networks is also connected to its predecessor at the previous stage, enabling cross-stage information propagation. The interaction recognition network has two crucial parts: a relation ranking module for high-quality HOI proposal selection and a triple-stream classifier for relation prediction. With our carefully-designed human-centric relation features, these two modules work collaboratively towards effective interaction understanding. Further beyond relation detection on a bounding-box level, we make our framework flexible to perform fine-grained pixel-wise relation segmentation; this provides a new glimpse into better relation modeling. Our approach reached the $1^{st}$ place in the ICCV2019 Person in Context Challenge, on both relation detection and segmentation tasks. It also shows promising results on V-COCO.
Tasks Human-Object Interaction Detection
Published 2020-03-09
URL https://arxiv.org/abs/2003.04262v2
PDF https://arxiv.org/pdf/2003.04262v2.pdf
PWC https://paperswithcode.com/paper/cascaded-human-object-interaction-recognition
Repo https://github.com/tfzhou/C-HOI
Framework pytorch
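
The control flow of the cascade, rank proposals, refine the survivors, repeat, can be caricatured in a few lines. The scoring, the refinement step, and the stage count here are placeholders, not the paper's localization and interaction networks.

```python
def rank_proposals(proposals, k=2):
    # Relation ranking: keep only the k most promising human-object pairs.
    return sorted(proposals, key=lambda p: p["score"], reverse=True)[:k]

def cascade(proposals, refine, stages=3, k=2):
    # Coarse-to-fine: every stage refines the proposals kept by ranking,
    # feeding its output to the next stage.
    for _ in range(stages):
        proposals = [refine(p) for p in rank_proposals(proposals, k)]
    return proposals

# Placeholder refinement: nudge each surviving proposal's score upward.
refine = lambda p: {**p, "score": min(1.0, p["score"] + 0.1)}
out = cascade([{"score": 0.3}, {"score": 0.7}, {"score": 0.5}], refine)
```

In the actual model, each stage's localization network also tightens the boxes, and cross-stage connections let later stages reuse earlier features.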

Supervised Domain Adaptation using Graph Embedding

Title Supervised Domain Adaptation using Graph Embedding
Authors Lukas Hedegaard, Omar Ali Sheikh-Omar, Alexandros Iosifidis
Abstract Getting deep convolutional neural networks to perform well requires a large amount of training data. When the available labelled data is small, it is often beneficial to use transfer learning to leverage a related larger dataset (source) in order to improve the performance on the small dataset (target). Among the transfer learning approaches, domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them. In this paper, we consider the domain adaptation problem from the perspective of dimensionality reduction and propose a generic framework based on graph embedding. Instead of solving the generalised eigenvalue problem, we formulate the graph-preserving criterion as a loss in the neural network and learn a domain-invariant feature transformation in an end-to-end fashion. We show that the proposed approach leads to a powerful Domain Adaptation framework; a simple LDA-inspired instantiation of the framework leads to state-of-the-art performance on two of the most widely used Domain Adaptation benchmarks, Office31 and MNIST to USPS datasets.
Tasks Dimensionality Reduction, Domain Adaptation, Graph Embedding, Transfer Learning
Published 2020-03-09
URL https://arxiv.org/abs/2003.04063v1
PDF https://arxiv.org/pdf/2003.04063v1.pdf
PWC https://paperswithcode.com/paper/supervised-domain-adaptation-using-graph
Repo https://github.com/lukashedegaard/dage
Framework tf
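
The graph-preserving criterion used as a loss has a compact matrix form: tr(F L F^T) equals half the weighted sum of squared distances along graph edges. A numpy sketch with hand-built attraction and penalty graphs follows; the actual DAGE graphs encode class and domain structure, and F would come from the network.

```python
import numpy as np

def laplacian(W):
    # Graph Laplacian L = D - W for a symmetric adjacency matrix W.
    return np.diag(W.sum(axis=1)) - W

def dage_like_loss(F, W_attract, W_repel):
    # Ratio-trace graph-embedding criterion as a loss: pull together
    # embeddings connected in the attraction graph, push apart those
    # connected in the penalty graph.
    num = np.trace(F @ laplacian(W_attract) @ F.T)
    den = np.trace(F @ laplacian(W_repel) @ F.T)
    return num / den

F = np.array([[0.0, 0.1, 5.0]])   # 1-D embeddings of three samples
W_attract = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0.0]])  # same class
W_repel = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0.0]])    # different class
loss = dage_like_loss(F, W_attract, W_repel)
```

Minimizing this loss end-to-end, instead of solving the generalized eigenvalue problem, is the paper's central move.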

A Nonparametric Off-Policy Policy Gradient

Title A Nonparametric Off-Policy Policy Gradient
Authors Samuele Tosatto, Joao Carvalho, Hany Abdulsamad, Jan Peters
Abstract Reinforcement learning (RL) algorithms still suffer from high sample complexity despite outstanding recent successes. The need for intensive interactions with the environment is especially observed in many widely popular policy gradient algorithms that perform updates using on-policy samples. The price of such inefficiency becomes evident in real-world scenarios such as interaction-driven robot learning, where the success of RL has been rather limited. We address this issue by building on the general sample efficiency of off-policy algorithms. With nonparametric regression and density estimation methods we construct a nonparametric Bellman equation in a principled manner, which allows us to obtain closed-form estimates of the value function, and to analytically express the full policy gradient. We provide a theoretical analysis of our estimate to show that it is consistent under mild smoothness assumptions and empirically show that our approach has better sample efficiency than state-of-the-art policy gradient methods.
Tasks Density Estimation, Policy Gradient Methods
Published 2020-01-08
URL https://arxiv.org/abs/2001.02435v2
PDF https://arxiv.org/pdf/2001.02435v2.pdf
PWC https://paperswithcode.com/paper/a-nonparametric-offpolicy-policy-gradient
Repo https://github.com/jacarvalho/nopg
Framework pytorch
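
The closed-form value estimate is the appealing part: with a kernel-smoothed transition model, the Bellman equation becomes a linear system. A tiny sketch on a three-state chain (the kernel choice and bandwidth are arbitrary, and the paper's estimator additionally handles actions and the policy gradient):

```python
import numpy as np

def kernel_transition(S, S_next, bw=0.5):
    # Nonparametric transition model: smooth each observed next-state
    # back onto the support states with a Gaussian kernel; rows of the
    # returned matrix are normalized transition distributions.
    d2 = (S[:, None] - S_next[None, :]) ** 2
    K = np.exp(-d2 / (2 * bw**2))
    return (K / K.sum(axis=0, keepdims=True)).T

S = np.array([0.0, 1.0, 2.0])        # support states
S_next = np.array([1.0, 2.0, 2.0])   # observed next states under the policy
r = np.array([0.0, 0.0, 1.0])        # rewards at the support states
gamma = 0.9

P = kernel_transition(S, S_next)
# Closed-form solution of the nonparametric Bellman equation
# V = r + gamma * P V, i.e. V = (I - gamma P)^{-1} r.
V = np.linalg.solve(np.eye(3) - gamma * P, r)
```

States closer to the rewarding end of the chain get higher values, and because V is available in closed form, the policy gradient can be expressed analytically rather than estimated from on-policy rollouts.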