January 25, 2020

3324 words 16 mins read

Paper Group ANR 1637


Computational Socioeconomics

Title Computational Socioeconomics
Authors Jian Gao, Yi-Cheng Zhang, Tao Zhou
Abstract Uncovering the structure of socioeconomic systems and timely estimation of socioeconomic status are significant for economic development. The understanding of socioeconomic processes provides foundations to quantify global economic development, to map regional industrial structure, and to infer individual socioeconomic status. In this review, we present a brief manifesto for a new interdisciplinary research field named Computational Socioeconomics, followed by a detailed introduction to data resources, computational tools, data-driven methods, theoretical models and novel applications at multiple resolutions, including the quantification of global economic inequality and complexity, the mapping of regional industrial structure and urban perception, the estimation of individual socioeconomic status and demographics, and the real-time monitoring of emergent events. This review, together with the pioneering works we highlight, will draw increasing interdisciplinary attention and induce a methodological shift in future socioeconomic studies.
Tasks
Published 2019-05-15
URL http://arxiv.org/abs/1905.06166v1
PDF http://arxiv.org/pdf/1905.06166v1.pdf
PWC https://paperswithcode.com/paper/computational-socioeconomics
Repo
Framework

Listening while Speaking and Visualizing: Improving ASR through Multimodal Chain

Title Listening while Speaking and Visualizing: Improving ASR through Multimodal Chain
Authors Johanes Effendi, Andros Tjandra, Sakriani Sakti, Satoshi Nakamura
Abstract Previously, a machine speech chain, which is based on sequence-to-sequence deep learning, was proposed to mimic speech perception and production behavior. Such chains separately processed listening and speaking via automatic speech recognition (ASR) and text-to-speech synthesis (TTS), and simultaneously enabled them to teach each other in semi-supervised learning when they received unpaired data. Unfortunately, this speech chain study is limited to speech and textual modalities. In fact, natural communication is actually multimodal and involves both auditory and visual sensory systems. Although the said speech chain reduces the requirement of having a full amount of paired data, in this case we still need a large amount of unpaired data. In this research, we take a further step and construct a multimodal chain, designing a closely knit chain architecture that combines ASR, TTS, image captioning, and image production models into a single framework. The framework allows the training of each component without requiring a large amount of parallel multimodal data. Our experimental results also show that ASR can be further trained without speech and text data, and that cross-modal data augmentation remains possible through our proposed chain, which improves ASR performance.
Tasks Data Augmentation, Image Captioning, Image Retrieval, Speech Recognition, Speech Synthesis, Text-To-Speech Synthesis
Published 2019-06-03
URL https://arxiv.org/abs/1906.00579v3
PDF https://arxiv.org/pdf/1906.00579v3.pdf
PWC https://paperswithcode.com/paper/190600579
Repo
Framework
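
The closed-loop behaviour described above is easiest to see as a dispatch on which modalities a batch contains. Below is a minimal, hypothetical sketch of one such step: the Stub models, their generate()/loss() interface, and the specific pseudo-pairing rules are assumptions for illustration, not the authors' architecture.

```python
class Stub:
    """Placeholder model; generate() and loss() stand in for real components."""
    def __init__(self, name):
        self.name = name

    def generate(self, x):
        return f"{self.name}({x})"   # stand-in for a model's output

    def loss(self, inp, target):
        return 1.0                   # stand-in for a differentiable loss


def chain_step(asr, tts, captioner, batch):
    """One semi-supervised step of a (simplified) multimodal chain."""
    losses = []
    if "speech" in batch and "text" in batch:      # paired data: ordinary supervised updates
        losses += [asr.loss(batch["speech"], batch["text"]),
                   tts.loss(batch["text"], batch["speech"])]
    elif "speech" in batch:                        # speech only: ASR output supervises TTS
        pseudo_text = asr.generate(batch["speech"])
        losses.append(tts.loss(pseudo_text, batch["speech"]))
    elif "image" in batch:                         # image only: caption -> synthetic speech -> ASR
        caption = captioner.generate(batch["image"])
        speech = tts.generate(caption)
        losses.append(asr.loss(speech, caption))
    return sum(losses)


asr, tts, cap = Stub("asr"), Stub("tts"), Stub("caption")
print(chain_step(asr, tts, cap, {"image": "img_001"}))   # cross-modal augmentation path
```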

Fitted Q-Learning in Mean-field Games

Title Fitted Q-Learning in Mean-field Games
Authors Berkay Anahtarcı, Can Deha Karıksız, Naci Saldi
Abstract In the literature, existence of equilibria for discrete-time mean-field games has in general been established via Kakutani’s Fixed Point Theorem. However, this fixed point theorem does not entail any iterative scheme for computing equilibria. In this paper, we first propose a Q-iteration algorithm to compute equilibria for mean-field games with a known model, using the Banach Fixed Point Theorem. Then, we generalize this algorithm to the model-free setting using a fitted Q-iteration algorithm and establish the probabilistic convergence of the proposed iteration. Finally, using the output of this learning algorithm, we construct an approximate Nash equilibrium for a finite-agent stochastic game with mean-field interaction between the agents.
Tasks Q-Learning
Published 2019-12-31
URL https://arxiv.org/abs/1912.13309v1
PDF https://arxiv.org/pdf/1912.13309v1.pdf
PWC https://paperswithcode.com/paper/fitted-q-learning-in-mean-field-games
Repo
Framework
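
As a rough illustration of the model-free half of the abstract above, here is a generic fitted Q-iteration sketch on a toy batch of transitions. In the mean-field setting the Q-function additionally depends on the population distribution; that argument is omitted here, and the per-(state, action) averaging simply stands in for the regression step. Everything below (environment, rewards, hyperparameters) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 2, 2, 0.9

# Batch of transitions (s, a, r, s'), e.g. collected by a behaviour policy.
batch = [(s, a, float(s == a), rng.integers(n_states))
         for s in range(n_states) for a in range(n_actions) for _ in range(50)]

Q = np.zeros((n_states, n_actions))
for _ in range(100):                       # fitted Q-iteration sweeps
    targets = np.zeros_like(Q)
    counts = np.zeros_like(Q)
    for s, a, r, s_next in batch:
        targets[s, a] += r + gamma * Q[s_next].max()
        counts[s, a] += 1
    # "Fitting" step: a per-(s, a) average plays the role of the regression
    # used by the model-free algorithm.
    Q = targets / np.maximum(counts, 1)

print(np.round(Q, 2))                      # greedy policy: argmax over actions per state
```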

Automatic detection of lesion load change in Multiple Sclerosis using convolutional neural networks with segmentation confidence

Title Automatic detection of lesion load change in Multiple Sclerosis using convolutional neural networks with segmentation confidence
Authors Richard McKinley, Lorenz Grunder, Rik Wepfer, Fabian Aschwanden, Tim Fischer, Christoph Friedli, Raphaela Muri, Christian Rummel, Rajeev Verma, Christian Weisstanner, Mauricio Reyes, Anke Salmen, Andrew Chan, Roland Wiest, Franca Wagner
Abstract The detection of new or enlarged white-matter lesions in multiple sclerosis is a vital task in the monitoring of patients undergoing disease-modifying treatment. However, the definition of ‘new or enlarged’ is not fixed, and it is known that lesion-counting is highly subjective, with a high degree of inter- and intra-rater variability. Automated methods for lesion quantification hold the potential to make the detection of new and enlarged lesions consistent and repeatable. However, the majority of lesion segmentation algorithms are not evaluated for their ability to separate progressive from stable patients, despite this being a pressing clinical use-case. In this paper we show that change in volumetric measurements of lesion load alone is not a good method for performing this separation, even for highly performing segmentation methods. Instead, we propose a method for identifying lesion changes of high certainty, and establish on a dataset of longitudinal multiple sclerosis cases that this method is able to separate progressive from stable timepoints with a very high level of discrimination (AUC = 0.99), while changes in lesion volume are much less able to perform this separation (AUC = 0.71). Validation of the method on a second external dataset confirms that the method is able to generalize beyond the setting in which it was trained, achieving an accuracy of 83% in separating stable and progressive timepoints. Both lesion volume and count have previously been shown to be strong predictors of disease course across a population. However, we demonstrate that for individual patients, changes in these measures are not an adequate means of establishing no evidence of disease activity. Meanwhile, directly detecting tissue which changes, with high confidence, from non-lesion to lesion is a feasible methodology for identifying radiologically active patients.
Tasks Lesion Segmentation
Published 2019-04-05
URL http://arxiv.org/abs/1904.03041v1
PDF http://arxiv.org/pdf/1904.03041v1.pdf
PWC https://paperswithcode.com/paper/automatic-detection-of-lesion-load-change-in
Repo
Framework
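
The contrast drawn in the abstract, volume change versus high-confidence voxel-wise change, can be sketched in a few lines. The synthetic probability maps and the 0.1/0.9 confidence thresholds below are assumptions for illustration only, not the thresholds used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
p_baseline = rng.random((64, 64, 16)) * 0.05          # confidently non-lesion tissue
p_followup = p_baseline.copy()
p_followup[30:33, 30:33, 8] = 0.95                    # a small simulated new lesion

voxel_volume_ml = 0.001

# (a) Volume change between thresholded segmentations at the two timepoints.
vol_change = (np.sum(p_followup > 0.5) - np.sum(p_baseline > 0.5)) * voxel_volume_ml

# (b) Voxels switching from confidently non-lesion (<0.1) to confidently lesion (>0.9).
confident_new = int(np.sum((p_baseline < 0.1) & (p_followup > 0.9)))

print(f"volume change: {vol_change:.3f} ml, confident new-lesion voxels: {confident_new}")
```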

Generalised learning of time-series: Ornstein-Uhlenbeck processes

Title Generalised learning of time-series: Ornstein-Uhlenbeck processes
Authors Mehmet Süzen, Alper Yegenoglu
Abstract In machine learning, statistics, econometrics and statistical physics, cross-validation (CV) is used as a standard approach for quantifying the generalization performance of a statistical model. In practice, direct usage of CV is avoided for time-series due to several issues: a direct application of CV to a time-series loses serial correlations, violates the requirement of preserving any non-stationarity, and predicts past data using future data. In this work, we propose a meta-algorithm called reconstructive cross-validation (rCV) that avoids all these issues. First, k folds are formed from non-overlapping, randomly selected subsets of the original time-series. Then, we generate k new partial time-series by removing data points from a given fold: each new partial time-series has points missing at random from a different fold. A suitable imputation or smoothing technique is used to reconstruct the k time-series; we call these reconstructions secondary models. Thereafter, we build the k primary time-series models using the new time-series coming from the secondary models. The performance of the primary models is evaluated simultaneously by computing the deviations from the originally removed data points and from out-of-sample (OOS) data. These amount to reconstruction and prediction errors. If the secondary models use a technique that retains the data points exactly, such as Gaussian process regression, there will be no errors on the data points that were not removed. By this procedure, serial correlations are retained, any non-stationarity is preserved within the models, and past data are never predicted from future data points. Cross-validation of time-series models can thus be practised with rCV. Moreover, we can build time-series learning curves by repeating the rCV procedure with different values of k.
Tasks Imputation, Time Series
Published 2019-10-21
URL https://arxiv.org/abs/1910.09394v2
PDF https://arxiv.org/pdf/1910.09394v2.pdf
PWC https://paperswithcode.com/paper/generalised-learning-of-time-series-ornstein
Repo
Framework
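
A compact sketch of the rCV recipe as described above: remove one fold of randomly chosen points, reconstruct them with a secondary model, fit a primary model on the reconstruction, and score the deviation on the removed points. Here linear interpolation stands in for the secondary model (the paper suggests, e.g., Gaussian process regression) and an AR(1) fit stands in for the primary Ornstein-Uhlenbeck model; both substitutions, and all parameters, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a discretized Ornstein-Uhlenbeck process: dx = -theta*x*dt + sigma*dW.
theta, sigma, dt, n = 1.0, 0.3, 0.01, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t-1] - theta * x[t-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

k = 5
folds = np.array_split(rng.permutation(n), k)       # non-overlapping random folds

scores = []
for fold in folds:
    held_out = np.zeros(n, dtype=bool)
    held_out[fold] = True
    # Secondary model: reconstruct held-out points from the kept points.
    kept_idx = np.flatnonzero(~held_out)
    recon = np.interp(np.arange(n), kept_idx, x[kept_idx])
    # Primary model: AR(1) coefficient fitted on the reconstructed series.
    phi = np.dot(recon[:-1], recon[1:]) / np.dot(recon[:-1], recon[:-1])
    pred = phi * recon[:-1]
    # Reconstruction error evaluated only on the originally removed points.
    mask = held_out[1:]
    scores.append(np.mean((pred[mask] - x[1:][mask]) ** 2))

print(f"rCV reconstruction error: {np.mean(scores):.5f} (AR(1) phi ~ {phi:.3f})")
```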

Visual Reasoning of Feature Attribution with Deep Recurrent Neural Networks

Title Visual Reasoning of Feature Attribution with Deep Recurrent Neural Networks
Authors Chuan Wang, Takeshi Onishi, Keiichi Nemoto, Kwan-Liu Ma
Abstract Deep Recurrent Neural Networks (RNNs) have gained popularity in many sequence classification tasks. Beyond predicting a correct class for each data instance, data scientists also want to understand which differentiating factors in the data have contributed to the classification during the learning process. We present a visual analytics approach that facilitates this task by revealing the RNN attention for all data instances, their temporal positions in the sequences, and the attribution of variables at each value level. We demonstrate with real-world datasets that our approach can help data scientists understand such dynamics in deep RNNs from the training results, hence guiding their modeling process.
Tasks Visual Reasoning
Published 2019-01-17
URL http://arxiv.org/abs/1901.05574v1
PDF http://arxiv.org/pdf/1901.05574v1.pdf
PWC https://paperswithcode.com/paper/visual-reasoning-of-feature-attribution-with
Repo
Framework
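
For readers unfamiliar with what is being visualized, the following is a generic sketch of per-timestep attention weights over RNN hidden states, the kind of quantity such a view aggregates. The additive-scoring form, dimensions, and random parameters are assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H = 10, 8                                  # sequence length, hidden size
hidden_states = rng.standard_normal((T, H))   # stand-in for RNN outputs over a sequence

w = rng.standard_normal(H)                    # scoring vector (learned in a real model)
scores = hidden_states @ w                    # one relevance score per timestep
attn = np.exp(scores - scores.max())
attn /= attn.sum()                            # softmax -> attention distribution over timesteps

# These per-instance, per-timestep weights are what a visual analytics view
# would aggregate to show which positions drove the classification.
print(np.round(attn, 3), "peak timestep:", int(attn.argmax()))
```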

A Novel Task-Oriented Text Corpus in Silent Speech Recognition and its Natural Language Generation Construction Method

Title A Novel Task-Oriented Text Corpus in Silent Speech Recognition and its Natural Language Generation Construction Method
Authors Dong Cao, Dongdong Zhang, HaiBo Chen
Abstract Millions of people with severe speech disorders around the world may regain their communication capabilities through techniques of silent speech recognition (SSR). Using electroencephalography (EEG) as a biomarker for speech decoding has become popular in SSR. However, the lack of an SSR text corpus has impeded the development of this technique. Here, we construct a novel task-oriented text corpus for use in SSR. In the construction process, we propose a task-oriented hybrid construction method based on a natural language generation algorithm. The algorithm focuses on a data-to-text generation strategy and offers two advantages, linguistic quality and high diversity, achieved with a template-based method and deep neural networks, respectively. In an SSR experiment with the generated text corpus, analysis shows that our hybrid construction method outperforms pure methods such as template-based natural language generation or neural natural language generation models.
Tasks Data-to-Text Generation, EEG, Speech Recognition, Text Generation
Published 2019-04-19
URL http://arxiv.org/abs/1905.01974v1
PDF http://arxiv.org/pdf/1905.01974v1.pdf
PWC https://paperswithcode.com/paper/190501974
Repo
Framework
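
The template-based half of the hybrid pipeline can be illustrated with plain slot filling. The records, slots, and templates below are invented for illustration; the neural half, which the abstract credits with diversity, is omitted.

```python
import random

# Toy slot-filling templates and structured task records (all hypothetical).
templates = [
    "please {action} the {device} in the {room}",
    "could you {action} the {room} {device}",
    "{action} the {device}",
]

records = [
    {"action": "turn on", "device": "light", "room": "kitchen"},
    {"action": "close", "device": "curtain", "room": "bedroom"},
]

random.seed(0)
# Render each record through a randomly chosen template to grow the corpus.
corpus = [random.choice(templates).format(**rec) for rec in records]
print("\n".join(corpus))
```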

Learning Blended, Precise Semantic Program Embeddings

Title Learning Blended, Precise Semantic Program Embeddings
Authors Ke Wang, Zhendong Su
Abstract Learning neural program embeddings is key to utilizing deep neural networks in programming languages research — precise and efficient program representations enable the application of deep models to a wide range of program analysis tasks. Existing approaches predominantly learn to embed programs from their source code, and, as a result, they do not capture deep, precise program semantics. On the other hand, models learned from runtime information critically depend on the quality of program executions, thus leading to trained models of highly variable quality. This paper tackles these inherent weaknesses of prior approaches by introducing a new deep neural network, LiGer, which learns program representations from a mixture of symbolic and concrete execution traces. We have evaluated LiGer on COSET, a recently proposed benchmark suite for evaluating neural program embeddings. Results show that LiGer (1) is significantly more accurate than the state-of-the-art syntax-based models Gated Graph Neural Network and code2vec in classifying program semantics, and (2) requires on average 10x fewer executions covering 74% fewer paths than the state-of-the-art dynamic model DyPro. Furthermore, we extend LiGer to predict the name of a method from its body’s vector representation. Learning on the same set of functions (more than 170K in total), LiGer significantly outperforms code2seq, the previous state-of-the-art for method name prediction.
Tasks Representation Learning
Published 2019-07-03
URL https://arxiv.org/abs/1907.02136v2
PDF https://arxiv.org/pdf/1907.02136v2.pdf
PWC https://paperswithcode.com/paper/a-hybrid-approach-for-learning-program
Repo
Framework

Constellation Loss: Improving the efficiency of deep metric learning loss functions for optimal embedding

Title Constellation Loss: Improving the efficiency of deep metric learning loss functions for optimal embedding
Authors Alfonso Medela, Artzai Picon
Abstract Metric learning has become an attractive field of research in recent years. Loss functions like contrastive loss, triplet loss or multi-class N-pair loss have made it possible to build models capable of tackling complex scenarios with many classes and a scarcity of images per class, not only for building classifiers but also for many other applications where measuring similarity is key. Deep neural networks trained via metric learning also offer the possibility of solving few-shot learning problems. However, currently used state-of-the-art loss functions such as triplet and contrastive loss still suffer from slow convergence due to the difficulty of selecting effective training samples, which the multi-class N-pair loss partially addresses by simultaneously adding samples from the different classes. In this work, we extend the triplet and multi-class N-pair loss functions by proposing the constellation loss, in which the distances among all class combinations are learned simultaneously. We compare our constellation loss for visual class embedding and show that it outperforms the other methods by obtaining more compact clusters while achieving better classification results.
Tasks Few-Shot Learning, Metric Learning
Published 2019-05-25
URL https://arxiv.org/abs/1905.10675v1
PDF https://arxiv.org/pdf/1905.10675v1.pdf
PWC https://paperswithcode.com/paper/constellation-loss-improving-the-efficiency
Repo
Framework
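
As a rough sketch of the kind of objective the abstract describes, the function below scores one anchor against a positive and negatives drawn simultaneously from several other classes in a single log-sum-exp term, in the spirit of multi-class N-pair losses. The exact constellation loss in the paper may differ; embeddings and dimensions here are random placeholders.

```python
import numpy as np

def constellation_style_loss(anchor, positive, negatives):
    """anchor, positive: (d,) embeddings; negatives: (K, d) from other classes."""
    pos_sim = anchor @ positive                 # similarity to the positive
    neg_sims = negatives @ anchor               # similarities to all negatives at once
    return np.log1p(np.sum(np.exp(neg_sims - pos_sim)))

rng = np.random.default_rng(0)
d, K = 16, 5
a = rng.standard_normal(d)
p = a + 0.1 * rng.standard_normal(d)            # a nearby positive
negs = rng.standard_normal((K, d))              # negatives from K other classes
print(f"loss: {constellation_style_loss(a, p, negs):.3f}")
```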

Robust Mahalanobis Metric Learning via Geometric Approximation Algorithms

Title Robust Mahalanobis Metric Learning via Geometric Approximation Algorithms
Authors Diego Ihara, Neshat Mohammadi, Francesco Sgherzi, Anastasios Sidiropoulos
Abstract Learning Mahalanobis metric spaces is an important problem that has found numerous applications. Several algorithms have been designed for this problem, including Information Theoretic Metric Learning (ITML) [Davis et al. 2007] and Large Margin Nearest Neighbor (LMNN) classification [Weinberger and Saul 2009]. We study the problem of learning a Mahalanobis metric space in the presence of adversarial label noise. To that end, we consider a formulation of Mahalanobis metric learning as an optimization problem, where the objective is to minimize the number of violated similarity/dissimilarity constraints. We show that for any fixed ambient dimension, there exists a fully polynomial-time approximation scheme (FPTAS) with nearly-linear running time. This result is obtained using tools from the theory of linear programming in low dimensions. As a consequence, we obtain a fully-parallelizable algorithm that recovers a nearly-optimal metric space, even when a small fraction of the labels is corrupted adversarially. We also discuss improvements of the algorithm in practice, and present experimental results on real-world, synthetic, and poisoned data sets.
Tasks Metric Learning
Published 2019-05-24
URL https://arxiv.org/abs/1905.09989v3
PDF https://arxiv.org/pdf/1905.09989v3.pdf
PWC https://paperswithcode.com/paper/learning-mahalanobis-metric-spaces-via
Repo
Framework
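
The combinatorial objective described above, minimizing the number of violated similarity/dissimilarity constraints for a candidate Mahalanobis matrix, is easy to state in code. The thresholds and synthetic pairs below are illustrative assumptions.

```python
import numpy as np

def mahalanobis(x, y, A):
    """Distance under the metric induced by the PSD matrix A."""
    d = x - y
    return float(np.sqrt(d @ A @ d))

def violated_constraints(A, similar_pairs, dissimilar_pairs, upper=1.0, lower=2.0):
    v = sum(mahalanobis(x, y, A) > upper for x, y in similar_pairs)      # similar pairs should be close
    v += sum(mahalanobis(x, y, A) < lower for x, y in dissimilar_pairs)  # dissimilar pairs should be far
    return v

rng = np.random.default_rng(0)
similar, dissimilar = [], []
for _ in range(20):
    x = rng.standard_normal(3)
    similar.append((x, x + 0.05 * rng.standard_normal(3)))
    dissimilar.append((x, x + 5.0 + rng.standard_normal(3)))

A = np.eye(3)                        # candidate metric (identity = plain Euclidean)
print("violated constraints:", violated_constraints(A, similar, dissimilar))
```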

Hierarchical Annotation of Images with Two-Alternative-Forced-Choice Metric Learning

Title Hierarchical Annotation of Images with Two-Alternative-Forced-Choice Metric Learning
Authors Niels Hellinga, Vlado Menkovski
Abstract Many tasks such as retrieval and recommendation can significantly benefit from structuring the data, commonly in a hierarchical way. Achieving this through annotation of high-dimensional data such as images or natural text can be highly labor-intensive. We propose an approach for uncovering the hierarchical structure of data based on efficient discriminative testing rather than annotations of individual datapoints. Using two-alternative forced-choice (2AFC) testing and deep metric learning, we embed the data in a semantic space where we are able to cluster it hierarchically. We actively select triplets for the 2AFC test so that the modeling process is highly efficient with respect to the number of tests presented to the annotator. We empirically demonstrate the feasibility of the method by confirming the shape bias on synthetic data and by extracting hierarchical structure on the Fashion-MNIST dataset at a finer granularity than the original labels.
Tasks Metric Learning
Published 2019-05-23
URL https://arxiv.org/abs/1905.09523v2
PDF https://arxiv.org/pdf/1905.09523v2.pdf
PWC https://paperswithcode.com/paper/hierarchical-annotation-of-images-with-two
Repo
Framework
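
One way to see how a 2AFC answer feeds metric learning: the annotator's choice between two alternatives fixes the positive and negative of a triplet, which then enters a standard triplet margin loss. The embeddings, dimension, and margin below are arbitrary illustrative choices, not the authors' settings.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard squared-distance triplet margin loss."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
emb = {name: rng.standard_normal(8) for name in ("A", "B", "C")}

# Simulated 2AFC answer: the annotator judged B more similar to anchor A than C is.
chosen, rejected = "B", "C"
loss = triplet_loss(emb["A"], emb[chosen], emb[rejected])
print(f"triplet loss from one 2AFC test: {loss:.3f}")
```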

Generative Adversarial Networks for Failure Prediction

Title Generative Adversarial Networks for Failure Prediction
Authors Shuai Zheng, Ahmed Farahat, Chetan Gupta
Abstract Prognostics and Health Management (PHM) is an emerging engineering discipline which is concerned with the analysis and prediction of equipment health and performance. One of the key challenges in PHM is to accurately predict impending failures in the equipment. In recent years, solutions for failure prediction have evolved from building complex physical models to the use of machine learning algorithms that leverage the data generated by the equipment. However, failure prediction problems pose a set of unique challenges that make direct application of traditional classification and prediction algorithms impractical. These challenges include the highly imbalanced training data, the extremely high cost of collecting more failure samples, and the complexity of the failure patterns. Traditional oversampling techniques will not be able to capture such complexity and accordingly result in overfitting the training data. This paper addresses these challenges by proposing a novel algorithm for failure prediction using Generative Adversarial Networks (GAN-FP). GAN-FP first utilizes two GAN networks to simultaneously generate training samples and build an inference network that can be used to predict failures for new samples. GAN-FP first adopts an infoGAN to generate realistic failure and non-failure samples, and initialize the weights of the first few layers of the inference network. The inference network is then tuned by optimizing a weighted loss objective using only real failure and non-failure samples. The inference network is further tuned using a second GAN whose purpose is to guarantee the consistency between the generated samples and corresponding labels. GAN-FP can be used for other imbalanced classification problems as well.
Tasks
Published 2019-10-04
URL https://arxiv.org/abs/1910.02034v1
PDF https://arxiv.org/pdf/1910.02034v1.pdf
PWC https://paperswithcode.com/paper/generative-adversarial-networks-for-failure
Repo
Framework
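
The abstract mentions tuning the inference network with a weighted loss over real failure and non-failure samples. A generic class-weighted binary cross-entropy of that flavour is sketched below; the 10:1 weighting and toy labels are made-up illustrations of up-weighting the rare failure class, not values from the paper.

```python
import numpy as np

def weighted_bce(y_true, p_pred, w_fail=10.0, w_ok=1.0, eps=1e-7):
    """Binary cross-entropy with a higher weight on the (rare) failure class."""
    p = np.clip(p_pred, eps, 1 - eps)
    w = np.where(y_true == 1, w_fail, w_ok)
    return float(np.mean(-w * (y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1])     # highly imbalanced labels
p = np.array([0.1] * 9 + [0.3])                   # model under-predicts the failure case
print(f"weighted: {weighted_bce(y, p):.3f}  unweighted: {weighted_bce(y, p, 1.0, 1.0):.3f}")
```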

Embeddings and Representation Learning for Structured Data

Title Embeddings and Representation Learning for Structured Data
Authors Benjamin Paaßen, Claudio Gallicchio, Alessio Micheli, Alessandro Sperduti
Abstract Performing machine learning on structured data is complicated by the fact that such data does not have vectorial form. Therefore, multiple approaches have emerged to construct vectorial representations of structured data, from kernel and distance approaches to recurrent, recursive, and convolutional neural networks. Recent years have seen heightened attention in this demanding field of research and several new approaches have emerged, such as metric learning on structured data, graph convolutional neural networks, and recurrent decoder networks for structured data. In this contribution, we provide a high-level overview of the state of the art in representation learning and embeddings for structured data across a wide range of machine learning fields.
Tasks Metric Learning, Representation Learning
Published 2019-05-15
URL https://arxiv.org/abs/1905.06147v1
PDF https://arxiv.org/pdf/1905.06147v1.pdf
PWC https://paperswithcode.com/paper/embeddings-and-representation-learning-for
Repo
Framework

An intelligent financial portfolio trading strategy using deep Q-learning

Title An intelligent financial portfolio trading strategy using deep Q-learning
Authors Hyungjun Park, Min Kyu Sim, Dong Gu Choi
Abstract Portfolio traders strive to identify dynamic portfolio allocation schemes so that their total budgets are efficiently allocated through the investment horizon. This study proposes a novel portfolio trading strategy in which an intelligent agent is trained to identify an optimal trading action by using deep Q-learning. We formulate a Markov decision process model for the portfolio trading process, and the model adopts a discrete combinatorial action space, determining the trading direction at a prespecified trading size for each asset, to ensure practical applicability. Our novel portfolio trading strategy takes advantage of three features to outperform in real-world trading. First, a mapping function is devised to handle and transform an initially found but infeasible action into a feasible action closest to the originally proposed ideal action. Second, by overcoming the dimensionality problem, this study establishes models of the agent and the Q-network for deriving a multi-asset trading strategy in the predefined action space. Last, this study introduces a technique that has the advantage of deriving a well-fitted multi-asset trading strategy by designing an agent to simulate all feasible actions in each state. To validate our approach, we conduct backtests for two representative portfolios and demonstrate superior results over the benchmark strategies.
Tasks Q-Learning
Published 2019-07-08
URL https://arxiv.org/abs/1907.03665v4
PDF https://arxiv.org/pdf/1907.03665v4.pdf
PWC https://paperswithcode.com/paper/an-intelligent-financial-portfolio-trading
Repo
Framework
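
The first feature above, mapping an infeasible action to a nearby feasible one, can be sketched with a toy rule: encode the action as buy/hold/sell per asset at a fixed trade size and, while the buys exceed available cash, drop the most expensive remaining buy. The encoding, prices, and cash level are assumptions for illustration; the paper's mapping function is more general.

```python
import numpy as np

def to_feasible(action, prices, cash, trade_size=1.0):
    """Map an infeasible buy/hold/sell vector to a nearby feasible one."""
    action = np.array(action, dtype=int)

    def net_cost(a):
        return (np.sum(prices[a == 1]) - np.sum(prices[a == -1])) * trade_size

    # While the buys cost more cash than is available, drop the most expensive buy,
    # keeping as much of the originally proposed action as possible.
    while net_cost(action) > cash and np.any(action == 1):
        buys = np.flatnonzero(action == 1)
        action[buys[np.argmax(prices[buys])]] = 0
    return action

prices = np.array([50.0, 120.0, 10.0])
ideal = [1, 1, 1]                                  # Q-network's preferred action: buy all three
print(to_feasible(ideal, prices, cash=100.0))      # -> [1 0 1]: the costliest buy is dropped
```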

Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment

Title Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment
Authors Chen Huang, Shuangfei Zhai, Walter Talbott, Miguel Angel Bautista, Shih-Yu Sun, Carlos Guestrin, Josh Susskind
Abstract In most machine learning training paradigms a fixed, often handcrafted, loss function is assumed to be a good proxy for an underlying evaluation metric. In this work we assess this assumption by meta-learning an adaptive loss function to directly optimize the evaluation metric. We propose a sample efficient reinforcement learning approach for adapting the loss dynamically during training. We empirically show how this formulation improves performance by simultaneously optimizing the evaluation metric and smoothing the loss landscape. We verify our method in metric learning and classification scenarios, showing considerable improvements over the state-of-the-art on a diverse set of tasks. Importantly, our method is applicable to a wide range of loss functions and evaluation metrics. Furthermore, the learned policies are transferable across tasks and data, demonstrating the versatility of the method.
Tasks Meta-Learning, Metric Learning
Published 2019-05-15
URL https://arxiv.org/abs/1905.05895v1
PDF https://arxiv.org/pdf/1905.05895v1.pdf
PWC https://paperswithcode.com/paper/addressing-the-loss-metric-mismatch-with
Repo
Framework
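
To make the loss-metric feedback loop concrete, the sketch below adapts a single loss hyperparameter (a positive-class weight) by keeping perturbations that improve validation balanced accuracy. The paper uses a sample-efficient reinforcement learning controller over richer loss parameterizations; this hill-climbing toy, its data, and its metric are stand-in assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced toy data: a single feature, roughly 10% positives.
n = 1000
y = (rng.random(n) < 0.1).astype(float)
x = y + 0.8 * rng.standard_normal(n)
x_tr, y_tr, x_va, y_va = x[:800], y[:800], x[800:], y[800:]

def train(pos_weight, steps=200, lr=0.1):
    """Weighted logistic regression trained by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x_tr + b)))
        sw = np.where(y_tr == 1, pos_weight, 1.0)          # loss hyperparameter being adapted
        grad = sw * (p - y_tr)
        w -= lr * np.mean(grad * x_tr)
        b -= lr * np.mean(grad)
    return w, b

def balanced_accuracy(w, b):
    pred = (w * x_va + b) > 0
    tpr = pred[y_va == 1].mean() if np.any(y_va == 1) else 0.0
    tnr = (~pred[y_va == 0]).mean()
    return (tpr + tnr) / 2

pos_weight, best = 1.0, balanced_accuracy(*train(1.0))
for _ in range(10):                                        # stand-in for the RL controller
    candidate = pos_weight * np.exp(0.5 * rng.standard_normal())
    score = balanced_accuracy(*train(candidate))
    if score > best:                                       # keep changes that improve the metric
        pos_weight, best = candidate, score
print(f"adapted pos_weight ~ {pos_weight:.2f}, balanced accuracy {best:.3f}")
```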