Paper Group ANR 1569
Regression and Singular Value Decomposition in Dynamic Graphs
Title | Regression and Singular Value Decomposition in Dynamic Graphs |
Authors | Mostafa Haghir Chehreghani |
Abstract | Most real-world graphs are dynamic, i.e., they change over time. However, while problems such as regression and Singular Value Decomposition (SVD) have been studied for static graphs, they have not yet been investigated for dynamic graphs. In this paper, we study regression and SVD over dynamic graphs. First, we present the notion of update-efficient matrix embedding, which defines the conditions sufficient for a matrix embedding to be used for the dynamic graph regression problem (under the $l_2$ norm). We prove that given an $n \times m$ update-efficient matrix embedding (e.g., the adjacency matrix), after an update operation in the graph, the optimal solution of the graph regression problem for the revised graph can be computed in $O(nm)$ time. We also study dynamic graph regression under least absolute deviation. Then, we characterize a class of matrix embeddings that can be used to efficiently update the SVD of a dynamic graph. For the adjacency matrix and the Laplacian matrix, we study those graph update operations for which the SVD (and low-rank approximation) can be updated efficiently. We show, for example, that if the matrix embedding of the graph is defined as its adjacency matrix, then after an edge insertion, an edge deletion, or an edge weight change in the graph, the SVD of the graph can be updated in $O(n^2 \log^2 n)$ time. Moreover, after a node insertion in the graph, the rank-$r$ approximation of the graph can be updated in $O(rn^2)$ time. |
Tasks | Graph Regression |
Published | 2019-03-26 |
URL | https://arxiv.org/abs/1903.10699v3 |
https://arxiv.org/pdf/1903.10699v3.pdf | |
PWC | https://paperswithcode.com/paper/on-the-theory-of-dynamic-graph-regression |
Repo | |
Framework | |
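A minimal numpy sketch of the problem setting in the abstract above: the graph is embedded as its adjacency matrix, the $l_2$ graph regression problem is solved, and the solution is recomputed after an edge update. The sketch recomputes from scratch for illustration only and does not reproduce the paper's $O(nm)$ update algorithm; all sizes and values are made up.

```python
# Illustrative sketch (not the paper's O(nm) update algorithm): solve the
# l2 graph regression problem on an adjacency-matrix embedding, then
# re-solve after a single edge update.
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # number of nodes
A = rng.integers(0, 2, size=(n, n)).astype(float)
np.fill_diagonal(A, 0.0)
y = rng.normal(size=n)                  # target value per node

# Optimal least-squares solution w* = argmin_w ||A w - y||_2
w_star, *_ = np.linalg.lstsq(A, y, rcond=None)

# Edge update: insert edge (0, 3) with weight 1 and re-solve from scratch.
A[0, 3] = 1.0
w_updated, *_ = np.linalg.lstsq(A, y, rcond=None)
print(w_star, w_updated)
```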
Image retrieval method based on CNN and dimension reduction
Title | Image retrieval method based on CNN and dimension reduction |
Authors | Zhihao Cao, Shaomin Mu, Yongyu Xu, Mengping Dong |
Abstract | An image retrieval method based on a convolutional neural network and dimension reduction is proposed in this paper. The convolutional neural network is used to extract high-level features of images; to address the problems that the extracted feature dimensionality is too high and the features are strongly correlated, multilinear principal component analysis is used to reduce the dimensionality of the features. The reduced features are then binary hash coded for fast image retrieval. Experiments show that the proposed method achieves better retrieval performance than a retrieval method based on principal component analysis on e-commerce image datasets. |
Tasks | Dimensionality Reduction, Image Retrieval |
Published | 2019-01-13 |
URL | http://arxiv.org/abs/1901.03924v1 |
http://arxiv.org/pdf/1901.03924v1.pdf | |
PWC | https://paperswithcode.com/paper/image-retrieval-method-based-on-cnn-and |
Repo | |
Framework | |
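A hedged sketch of the retrieval pipeline described above, with two substitutions: random vectors stand in for CNN features, and ordinary PCA stands in for the paper's multilinear PCA. The binary hashing and Hamming-distance ranking follow the abstract's description.

```python
# Sketch of the retrieval pipeline: high-dimensional "CNN" features are
# reduced (ordinary PCA stands in for the paper's multilinear PCA), then
# binarized into hash codes and compared by Hamming distance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 4096))     # stand-in for CNN features
query = rng.normal(size=(1, 4096))

pca = PCA(n_components=64).fit(gallery)
g_low = pca.transform(gallery)
q_low = pca.transform(query)

# Binary hash codes: sign of each reduced dimension.
g_codes = (g_low > 0)
q_code = (q_low > 0)

# Retrieve the 5 gallery images with the smallest Hamming distance.
hamming = (g_codes != q_code).sum(axis=1)
top5 = np.argsort(hamming)[:5]
print(top5)
```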
Object Detection in Optical Remote Sensing Images: A Survey and A New Benchmark
Title | Object Detection in Optical Remote Sensing Images: A Survey and A New Benchmark |
Authors | Ke Li, Gang Wan, Gong Cheng, Liqiu Meng, Junwei Han |
Abstract | Substantial efforts have recently been devoted to presenting various methods for object detection in optical remote sensing images. However, the current survey of datasets and deep learning based methods for object detection in optical remote sensing images is not adequate. Moreover, most of the existing datasets have some shortcomings; for example, the numbers of images and object categories are small, and the image diversity and variations are insufficient. These limitations greatly affect the development of deep learning based object detection methods. In this paper, we provide a comprehensive review of recent deep learning based object detection progress in both the computer vision and earth observation communities. Then, we propose a large-scale, publicly available benchmark for object DetectIon in Optical Remote sensing images, which we name DIOR. The dataset contains 23463 images and 192472 instances, covering 20 object classes. The proposed DIOR dataset 1) is large-scale in the number of object categories, object instances, and total images; 2) has a large range of object size variations, not only in terms of spatial resolutions, but also in the aspect of inter- and intra-class size variability across objects; 3) holds big variations as the images are obtained under different imaging conditions, weather, seasons, and image quality; and 4) has high inter-class similarity and intra-class diversity. The proposed benchmark can help researchers to develop and validate their data-driven methods. Finally, we evaluate several state-of-the-art approaches on our DIOR dataset to establish a baseline for future research. |
Tasks | Object Detection |
Published | 2019-08-31 |
URL | https://arxiv.org/abs/1909.00133v2 |
https://arxiv.org/pdf/1909.00133v2.pdf | |
PWC | https://paperswithcode.com/paper/object-detection-in-optical-remote-sensing |
Repo | |
Framework | |
Towards Robust and Stable Deep Learning Algorithms for Forward Backward Stochastic Differential Equations
Title | Towards Robust and Stable Deep Learning Algorithms for Forward Backward Stochastic Differential Equations |
Authors | Batuhan Güler, Alexis Laignelet, Panos Parpas |
Abstract | Applications in quantitative finance such as optimal trade execution, risk management of options, and optimal asset allocation involve the solution of high dimensional and nonlinear Partial Differential Equations (PDEs). The connection between PDEs and systems of Forward-Backward Stochastic Differential Equations (FBSDEs) enables advanced simulation techniques to be applied even in the high dimensional setting. Unfortunately, when the underlying application contains nonlinear terms, classical simulation methods and numerical methods for PDEs both suffer from the curse of dimensionality. Inspired by the success of deep learning, several researchers have recently proposed to address the solution of FBSDEs using deep learning. We discuss the dynamical systems point of view of deep learning and compare several architectures in terms of stability, generalization, and robustness. In order to speed up the computations, we propose to use a multilevel discretization technique. Our preliminary results suggest that the multilevel discretization method improves solution times by an order of magnitude compared to existing methods without sacrificing stability or robustness. |
Tasks | |
Published | 2019-10-25 |
URL | https://arxiv.org/abs/1910.11623v1 |
https://arxiv.org/pdf/1910.11623v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-robust-and-stable-deep-learning |
Repo | |
Framework | |
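A small sketch of the multilevel discretization idea on the forward SDE only: a fine and a coarse Euler-Maruyama grid share the same Brownian increments, so the correction term between levels has low variance. The drift and diffusion here are illustrative assumptions, and the paper's deep FBSDE solver itself is not reproduced.

```python
# Sketch of the multilevel idea on the *forward* SDE only: a fine and a
# coarse Euler-Maruyama discretization share the same Brownian increments,
# so their difference has small variance. The full deep FBSDE solver in the
# paper is not reproduced here.
import numpy as np

def euler_paths(n_paths, n_steps, T, x0, mu, sigma, dW):
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for k in range(n_steps):
        x += mu(x) * dt + sigma(x) * dW[:, k]
    return x

rng = np.random.default_rng(0)
T, x0 = 1.0, 1.0
mu = lambda x: 0.05 * x          # drift (illustrative parameters)
sigma = lambda x: 0.2 * x        # diffusion
n_paths, n_fine = 10_000, 64
dt_fine = T / n_fine

dW_fine = rng.normal(scale=np.sqrt(dt_fine), size=(n_paths, n_fine))
dW_coarse = dW_fine.reshape(n_paths, n_fine // 2, 2).sum(axis=2)  # pair up increments

x_fine = euler_paths(n_paths, n_fine, T, x0, mu, sigma, dW_fine)
x_coarse = euler_paths(n_paths, n_fine // 2, T, x0, mu, sigma, dW_coarse)

# Multilevel correction term: low-variance because the paths are coupled.
print(np.mean(x_fine - x_coarse), np.std(x_fine - x_coarse))
```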
Resolving challenges in deep learning-based analyses of histopathological images using explanation methods
Title | Resolving challenges in deep learning-based analyses of histopathological images using explanation methods |
Authors | Miriam Hägele, Philipp Seegerer, Sebastian Lapuschkin, Michael Bockmayr, Wojciech Samek, Frederick Klauschen, Klaus-Robert Müller, Alexander Binder |
Abstract | Deep learning has recently gained popularity in digital pathology due to its high prediction quality. However, the medical domain requires explanation and insight for a better understanding beyond standard quantitative performance evaluation. Recently, explanation methods have emerged, which are so far still rarely used in medicine. This work shows their application to generate heatmaps that help resolve common challenges encountered in deep learning-based digital histopathology analyses. These challenges comprise biases typically inherent to histopathology data. We study binary classification tasks of tumor tissue discrimination in publicly available haematoxylin and eosin slides of various tumor entities and investigate three types of biases: (1) biases which affect the entire dataset, (2) biases which are by chance correlated with class labels, and (3) sampling biases. While standard analyses focus on patch-level evaluation, we advocate pixel-wise heatmaps, which offer a more precise and versatile diagnostic instrument and furthermore help to reveal biases in the data. This insight is shown not only to detect but also to help remove the effects of common hidden biases, which improves generalization within and across datasets. For example, we observed a trend of a 5% improvement in the area under the receiver operating characteristic curve when reducing a labeling bias. Explanation techniques are thus demonstrated to be a helpful and highly relevant tool for the development and the deployment phases within the life cycle of real-world applications in digital pathology. |
Tasks | |
Published | 2019-08-15 |
URL | https://arxiv.org/abs/1908.06943v1 |
https://arxiv.org/pdf/1908.06943v1.pdf | |
PWC | https://paperswithcode.com/paper/resolving-challenges-in-deep-learning-based |
Repo | |
Framework | |
Extreme events evaluation using CRPS distributions
Title | Extreme events evaluation using CRPS distributions |
Authors | Maxime Taillardat, Anne-Laure Fougères, Philippe Naveau, Raphaël de Fondeville |
Abstract | Verification of ensemble forecasts for extreme events remains a challenging question. The general public as well as the media naturally pay particular attention to extreme events and draw conclusions about the overall predictive performance of ensembles, which are often unskillful when they are needed. Asking classical verification tools to focus on such events can lead to unexpected behaviors. To counter these effects, thresholded and weighted scoring rules have been developed. Most of them use derivations of the Continuous Ranked Probability Score (CRPS). However, some properties of the CRPS for extreme events generate undesirable effects on the quality of verification. Using theoretical arguments and simulation examples, we illustrate some pitfalls of conventional verification tools and propose a different direction to assess ensemble forecasts using extreme value theory, considering proper scores as random variables. |
Tasks | |
Published | 2019-05-10 |
URL | https://arxiv.org/abs/1905.04022v1 |
https://arxiv.org/pdf/1905.04022v1.pdf | |
PWC | https://paperswithcode.com/paper/extreme-events-evaluation-using-crps |
Repo | |
Framework | |
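For reference, a minimal estimator of the CRPS that the abstract builds on, in its standard energy form $\mathrm{CRPS}(F, y) = \mathbb{E}|X - y| - \tfrac{1}{2}\mathbb{E}|X - X'|$; the paper's extreme-value-based verification approach is not reproduced here, and the ensemble values are synthetic.

```python
# Minimal estimator of the Continuous Ranked Probability Score for an
# ensemble forecast, using the standard "energy" form
#   CRPS(F, y) = E|X - y| - 0.5 E|X - X'|.
# This illustrates the score the paper builds on, not its extreme-value
# verification framework.
import numpy as np

def crps_ensemble(members, obs):
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
ens = rng.normal(loc=0.0, scale=1.0, size=50)   # 50-member ensemble
print(crps_ensemble(ens, obs=2.5))              # poor score for an "extreme" obs
```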
Latent Variable Session-Based Recommendation
Title | Latent Variable Session-Based Recommendation |
Authors | David Rohde, Stephen Bonner |
Abstract | Session-based recommendation provides an attractive alternative to the traditional feature engineering approach to recommendation. Feature engineering approaches require hand-tuned features of the user's history to be created to produce a context vector. In contrast, a session-based approach is able to dynamically model the user's state as they act. We present a probabilistic framework for session-based recommendation. A latent variable for the user state is updated as the user views more items and we learn more about their interests. The latent variable model is conceptually simple and elegant, yet it requires sophisticated computational techniques to approximate the integral over the latent variable. We provide computational solutions using both the re-parameterization trick and the Bouchard bound for the softmax function, and we further explore employing a variational auto-encoder and a variational Expectation-Maximization algorithm for tightening the variational bound. The model performs well against a number of baselines. The intuitive nature of the model allows an elegant formulation that combines correlations between items with their popularity and sheds light on other popular recommendation methods. An attractive feature of the latent variable approach is that, as the user continues to act, the posterior on the user’s state tightens, reflecting the recommender system’s increased knowledge about that user. |
Tasks | Feature Engineering, Session-Based Recommendations |
Published | 2019-04-24 |
URL | https://arxiv.org/abs/1904.10784v3 |
https://arxiv.org/pdf/1904.10784v3.pdf | |
PWC | https://paperswithcode.com/paper/latent-variable-session-based-recommendation |
Repo | |
Framework | |
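A toy numpy sketch of the re-parameterization trick mentioned in the abstract: the latent user state is sampled as $z = \mu + \sigma \epsilon$ and the expected next-item probabilities are estimated by Monte Carlo. Dimensions, priors, and the dot-product softmax scoring are illustrative assumptions, not the paper's exact model.

```python
# Sketch of the re-parameterization trick: the latent user state z is
# sampled as z = mu + sigma * eps so that a Monte Carlo estimate of the
# expected next-item probabilities stays differentiable w.r.t. (mu, log_sigma).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d, n_items, n_samples = 16, 100, 256
item_emb = rng.normal(size=(n_items, d))        # item embedding matrix

# Approximate posterior over the user state after some observed clicks.
mu = rng.normal(size=d)
log_sigma = np.full(d, -1.0)

eps = rng.normal(size=(n_samples, d))
z = mu + np.exp(log_sigma) * eps                # re-parameterized samples

# Expected recommendation scores, averaged over the latent samples.
probs = softmax(z @ item_emb.T).mean(axis=0)
print(probs.argsort()[-5:])                     # top-5 recommended items
```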
HHHFL: Hierarchical Heterogeneous Horizontal Federated Learning for Electroencephalography
Title | HHHFL: Hierarchical Heterogeneous Horizontal Federated Learning for Electroencephalography |
Authors | Dashan Gao, Ce Ju, Xiguang Wei, Yang Liu, Tianjian Chen, Qiang Yang |
Abstract | Electroencephalography (EEG) classification techniques have been widely studied for human behavior and emotion recognition tasks. But it is still a challenging issue, since the data may vary from subject to subject, may change over time for the same subject, and may be heterogeneous. In recent years, increasing privacy-preserving demands have posed new challenges to this task. Data heterogeneity, as well as the privacy constraints of EEG data, has not been addressed in previous studies. To fill this gap, in this paper we propose a heterogeneous federated learning approach to train machine learning models over heterogeneous EEG data, while preserving the data privacy of each party. To verify the effectiveness of our approach, we conduct experiments on a real-world EEG dataset, consisting of heterogeneous data collected from diverse devices. Our approach achieves consistent performance improvement on every task. |
Tasks | EEG, Emotion Recognition |
Published | 2019-09-11 |
URL | https://arxiv.org/abs/1909.05784v1 |
https://arxiv.org/pdf/1909.05784v1.pdf | |
PWC | https://paperswithcode.com/paper/hhhfl-hierarchical-heterogeneous-horizontal |
Repo | |
Framework | |
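A hedged sketch of the horizontal federated averaging step implied by the abstract: each party trains locally and a server aggregates parameters weighted by local sample counts. The hierarchical and heterogeneity-handling parts of HHHFL are not reproduced; all counts and shapes are made up.

```python
# Sketch of a horizontal federated averaging step: each party trains
# locally and the server aggregates model weights, weighted by local
# sample counts.
import numpy as np

def federated_average(local_weights, sample_counts):
    """Weighted average of per-party parameter vectors."""
    counts = np.asarray(sample_counts, dtype=float)
    weights = counts / counts.sum()
    stacked = np.stack(local_weights)           # (n_parties, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

rng = np.random.default_rng(0)
party_models = [rng.normal(size=10) for _ in range(3)]   # toy parameter vectors
global_model = federated_average(party_models, sample_counts=[120, 300, 80])
print(global_model)
```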
Design and Challenges of Cloze-Style Reading Comprehension Tasks on Multiparty Dialogue
Title | Design and Challenges of Cloze-Style Reading Comprehension Tasks on Multiparty Dialogue |
Authors | Changmao Li, Tianhao Liu, Jinho Choi |
Abstract | This paper analyzes challenges in cloze-style reading comprehension on multiparty dialogue and suggests two new tasks for more comprehensive predictions of personal entities in daily conversations. We first demonstrate that there are substantial limitations to the evaluation methods of previous work, namely that randomized assignment of samples to training and test data substantially decreases the complexity of cloze-style reading comprehension. According to our analysis, replacing the random data split with a chronological data split reduces test accuracy on the previous single-variable passage completion task from 72% to 34%, leaving much more room for improvement. Our proposed tasks extend the previous single-variable passage completion task by replacing more character mentions with variables. Several deep learning models are developed to validate these three tasks. A thorough error analysis is provided to understand the challenges and guide the future direction of this research. |
Tasks | Reading Comprehension |
Published | 2019-11-02 |
URL | https://arxiv.org/abs/1911.00773v1 |
https://arxiv.org/pdf/1911.00773v1.pdf | |
PWC | https://paperswithcode.com/paper/design-and-challenges-of-cloze-style-reading |
Repo | |
Framework | |
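A short sketch of the evaluation change described above: replacing a random train/test split with a chronological one, so later dialogues never inform predictions about earlier ones. The dialogue records and date field are hypothetical placeholders.

```python
# Random vs. chronological train/test split, as contrasted in the abstract.
import random

dialogues = [{"id": i, "date": f"2019-{(i % 12) + 1:02d}-01"} for i in range(100)]

# Random split (the previous protocol): leaks future context into training.
shuffled = dialogues[:]
random.Random(0).shuffle(shuffled)
rand_train, rand_test = shuffled[:80], shuffled[80:]

# Chronological split (the proposed protocol): train on earlier dialogues only.
by_date = sorted(dialogues, key=lambda d: d["date"])
chron_train, chron_test = by_date[:80], by_date[80:]

print(len(rand_train), len(chron_test))
```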
Predicting Social Perception from Faces: A Deep Learning Approach
Title | Predicting Social Perception from Faces: A Deep Learning Approach |
Authors | U. Messer, S. Fausser |
Abstract | Warmth and competence represent the fundamental traits in social judgment that determine emotional reactions and behavioral intentions towards social targets. This research investigates whether an algorithm can learn visual representations of social categorization and accurately predict human perceivers’ impressions of warmth and competence in face images. In addition, this research unravels which areas of a face are important for the classification of warmth and competence. We use Deep Convolutional Neural Networks to extract features from face images and the Gradient-weighted Class Activation Mapping (Grad-CAM) method to understand the importance of face regions for the classification. Given a single face image, the trained algorithm could correctly predict warmth impressions with an accuracy of about 90% and competence impressions with an accuracy of about 80%. The findings have implications for the automated processing of faces and the design of artificial characters. |
Tasks | |
Published | 2019-06-29 |
URL | https://arxiv.org/abs/1907.00217v1 |
https://arxiv.org/pdf/1907.00217v1.pdf | |
PWC | https://paperswithcode.com/paper/predicting-social-perception-from-faces-a |
Repo | |
Framework | |
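A minimal Grad-CAM-style sketch of the heatmap computation referenced in the abstract: channel activations of the last convolutional layer are weighted by their average gradients and the positive part is kept. The tiny CNN and random input are stand-ins, not the network or face data used in the paper.

```python
# Grad-CAM sketch: weight each channel of the last conv feature map by the
# mean gradient of the predicted class score, sum, and keep the positive part.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):        # e.g. low/high warmth (illustrative)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)                      # (B, 32, H, W)
        logits = self.head(fmap.mean(dim=(2, 3)))    # global average pooling
        return logits, fmap

model = TinyCNN().eval()
img = torch.randn(1, 3, 64, 64, requires_grad=True)

logits, fmap = model(img)
fmap.retain_grad()                                   # keep gradients of the feature map
logits[0, logits.argmax()].backward()                # gradient of the predicted class

weights = fmap.grad.mean(dim=(2, 3), keepdim=True)   # channel importance
cam = F.relu((weights * fmap).sum(dim=1)).squeeze(0).detach()
cam = cam / (cam.max() + 1e-8)                       # normalize to [0, 1]
print(cam.shape)                                     # coarse face-region heatmap
```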
Robust Subspace Discovery by Block-diagonal Adaptive Locality-constrained Representation
Title | Robust Subspace Discovery by Block-diagonal Adaptive Locality-constrained Representation |
Authors | Zhao Zhang, Jiahuan Ren, Sheng Li, Richang Hong, Zhengjun Zha, Meng Wang |
Abstract | We propose a novel and unsupervised representation learning model, i.e., Robust Block-Diagonal Adaptive Locality-constrained Latent Representation (rBDLR). rBDLR is able to recover multi-subspace structures and extract the adaptive locality-preserving salient features jointly. Building on the Frobenius-norm-based latent low-rank representation model, rBDLR jointly learns the coding coefficients and salient features, and improves the results by enhancing the robustness to outliers and errors in given data, preserving local information of salient features adaptively, and ensuring the block-diagonal structures of the coefficients. To improve the robustness, we perform the latent representation and adaptive weighting in a recovered clean data space. To force the coefficients to be block-diagonal, we perform auto-weighting by minimizing the reconstruction error based on salient features, constrained using a block-diagonal regularizer. This ensures that a strict block-diagonal weight matrix can be obtained, and salient features will possess the adaptive locality-preserving ability. By minimizing the difference between the coefficient and weight matrices, we can obtain a block-diagonal coefficient matrix, which can also propagate and exchange useful information between salient features and coefficients. Extensive results demonstrate the superiority of rBDLR over other state-of-the-art methods. |
Tasks | Representation Learning, Unsupervised Representation Learning |
Published | 2019-08-04 |
URL | https://arxiv.org/abs/1908.01266v1 |
https://arxiv.org/pdf/1908.01266v1.pdf | |
PWC | https://paperswithcode.com/paper/robust-subspace-discovery-by-block-diagonal |
Repo | |
Framework | |
Q-Learning for Continuous Actions with Cross-Entropy Guided Policies
Title | Q-Learning for Continuous Actions with Cross-Entropy Guided Policies |
Authors | Riley Simmons-Edler, Ben Eisner, Eric Mitchell, Sebastian Seung, Daniel Lee |
Abstract | Off-policy reinforcement learning (RL) is an important class of methods for many problem domains, such as robotics, where the cost of collecting data is high and on-policy methods are consequently intractable. Standard methods for applying Q-learning to continuous-valued action domains involve iteratively sampling the Q-function to find a good action (e.g. via hill-climbing), or learning a policy network at the same time as the Q-function (e.g. DDPG). Both approaches make tradeoffs between stability, speed, and accuracy. We propose a novel approach, called Cross-Entropy Guided Policies, or CGP, that draws inspiration from both classes of techniques. CGP aims to combine the stability and performance of iterative sampling policies with the low computational cost of a policy network. Our approach trains the Q-function using iterative sampling with the Cross-Entropy Method (CEM), while training a policy network to imitate CEM’s sampling behavior. We demonstrate that our method is more stable to train than state-of-the-art policy network methods, while preserving equivalent inference-time compute costs, and achieving competitive total reward on standard benchmarks. |
Tasks | Q-Learning |
Published | 2019-03-25 |
URL | https://arxiv.org/abs/1903.10605v3 |
https://arxiv.org/pdf/1903.10605v3.pdf | |
PWC | https://paperswithcode.com/paper/q-learning-for-continuous-actions-with-cross |
Repo | |
Framework | |
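A compact sketch of the Cross-Entropy Method used as a Q-function maximizer, i.e., the kind of sampling policy that CGP's policy network is trained to imitate. The quadratic `q_fn` is a toy stand-in for a learned Q(s, a), and the hyperparameters are illustrative.

```python
# Cross-Entropy Method as an action selector: iteratively fit a Gaussian to
# the elite (highest-Q) sampled actions.
import numpy as np

def cem_action(q_fn, action_dim, n_iters=5, pop=64, elite_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(action_dim), np.ones(action_dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(pop, action_dim))
        elites = samples[np.argsort(q_fn(samples))[-n_elite:]]   # best Q-values
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# Toy Q-function with a known maximizer at a = (0.5, -0.3).
q_fn = lambda a: -np.sum((a - np.array([0.5, -0.3])) ** 2, axis=1)
print(cem_action(q_fn, action_dim=2))
```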
Deep User Identification Model with Multiple Biometrics
Title | Deep User Identification Model with Multiple Biometrics |
Authors | Hyoung-Kyu Song, Ebrahim AlAlkeem, Jaewoong Yun, Tae-Ho Kim, Tae-Ho Kim, Hyerin Yoo, Dasom Heo, Chan Yeob Yeun, Myungsu Chae |
Abstract | Identification using biometrics is an important yet challenging task. Abundant research has been conducted on identifying personal identity or gender using given signals. Various types of biometrics such as electrocardiogram (ECG), electroencephalogram (EEG), face, fingerprint, and voice have been used for these tasks. Most research has focused only on a single modality or a single task, while the combination of input modalities or tasks is yet to be investigated. In this paper, we propose deep identification and gender classification using multimodal biometrics. Our model uses ECG, fingerprint, and facial data. It then performs two tasks: identification and gender classification. By engaging multi-modality, a single model can handle various input domains without training each modality independently, and the correlation between domains can increase its generalization performance on the tasks. |
Tasks | EEG |
Published | 2019-09-03 |
URL | https://arxiv.org/abs/1909.05417v1 |
https://arxiv.org/pdf/1909.05417v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-user-identification-model-with-multiple |
Repo | |
Framework | |
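A hedged PyTorch sketch of the multimodal architecture implied by the abstract: one encoder per modality (ECG, fingerprint, face), concatenated embeddings, and two heads for identification and gender classification. All layer sizes, input dimensions, and the number of subjects are assumptions.

```python
# Multimodal fusion by concatenation with two task heads (identity, gender).
import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    def __init__(self, n_subjects=20):
        super().__init__()
        self.ecg_enc = nn.Sequential(nn.Linear(128, 32), nn.ReLU())
        self.finger_enc = nn.Sequential(nn.Linear(256, 32), nn.ReLU())
        self.face_enc = nn.Sequential(nn.Linear(512, 32), nn.ReLU())
        self.id_head = nn.Linear(96, n_subjects)    # identification
        self.gender_head = nn.Linear(96, 2)         # gender classification

    def forward(self, ecg, fingerprint, face):
        fused = torch.cat([self.ecg_enc(ecg),
                           self.finger_enc(fingerprint),
                           self.face_enc(face)], dim=1)
        return self.id_head(fused), self.gender_head(fused)

model = MultiModalNet()
id_logits, gender_logits = model(torch.randn(4, 128), torch.randn(4, 256), torch.randn(4, 512))
print(id_logits.shape, gender_logits.shape)
```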
Learning to Optimize Variational Quantum Circuits to Solve Combinatorial Problems
Title | Learning to Optimize Variational Quantum Circuits to Solve Combinatorial Problems |
Authors | Sami Khairy, Ruslan Shaydulin, Lukasz Cincio, Yuri Alexeev, Prasanna Balaprakash |
Abstract | Quantum computing is a computational paradigm with the potential to outperform classical methods for a variety of problems. Proposed recently, the Quantum Approximate Optimization Algorithm (QAOA) is considered one of the leading candidates for demonstrating quantum advantage in the near term. QAOA is a variational hybrid quantum-classical algorithm for approximately solving combinatorial optimization problems. The quality of the solution obtained by QAOA for a given problem instance depends on the performance of the classical optimizer used to optimize the variational parameters. In this paper, we formulate the problem of finding optimal QAOA parameters as a learning task in which the knowledge gained from solving training instances can be leveraged to find high-quality solutions for unseen test instances. To this end, we develop two machine-learning-based approaches. Our first approach adopts a reinforcement learning (RL) framework to learn a policy network to optimize QAOA circuits. Our second approach adopts a kernel density estimation (KDE) technique to learn a generative model of optimal QAOA parameters. In both approaches, the training procedure is performed on small-sized problem instances that can be simulated on a classical computer; yet the learned RL policy and the generative model can be used to efficiently solve larger problems. Extensive simulations using the IBM Qiskit Aer quantum circuit simulator demonstrate that our proposed RL- and KDE-based approaches reduce the optimality gap by factors up to 30.15 when compared with other commonly used off-the-shelf optimizers. |
Tasks | Combinatorial Optimization, Density Estimation |
Published | 2019-11-25 |
URL | https://arxiv.org/abs/1911.11071v1 |
https://arxiv.org/pdf/1911.11071v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-to-optimize-variational-quantum |
Repo | |
Framework | |
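A small sketch of the KDE-based approach described above: fit a kernel density estimate on optimal $(\gamma, \beta)$ parameters collected from training instances, then sample candidate parameters for unseen instances. The "optimal" parameters here are synthetic placeholders, not outputs of actual QAOA runs.

```python
# KDE over "optimal" QAOA parameters, sampled as starting points for new instances.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
# Pretend these are optimal (gamma, beta) pairs from solved training instances.
train_params = rng.normal(loc=[0.8, 0.4], scale=0.1, size=(200, 2))

kde = KernelDensity(kernel="gaussian", bandwidth=0.05).fit(train_params)

# For an unseen instance, draw candidate starting parameters from the model.
candidates = kde.sample(10, random_state=0)
print(candidates)
```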
A Deep Evolutionary Approach to Bioinspired Classifier Optimisation for Brain-Machine Interaction
Title | A Deep Evolutionary Approach to Bioinspired Classifier Optimisation for Brain-Machine Interaction |
Authors | Jordan J. Bird, Diego R. Faria, Luis J. Manso, Anikó Ekárt, Christopher D. Buckingham |
Abstract | This study suggests a new approach to EEG data classification by exploring the idea of using evolutionary computation to both select useful discriminative EEG features and optimise the topology of Artificial Neural Networks. An evolutionary algorithm is applied to select the most informative features from an initial set of 2550 EEG statistical features. Optimisation of a Multilayer Perceptron (MLP) is performed with an evolutionary approach before classification to estimate the best hyperparameters of the network. Deep learning and tuning with Long Short-Term Memory (LSTM) are also explored, and Adaptive Boosting of the two types of models is tested for each problem. Three experiments are provided for comparison using different classifiers: one for attention state classification, one for emotional sentiment classification, and a third experiment in which the goal is to guess the number a subject is thinking of. The obtained results show that an Adaptive Boosted LSTM can achieve an accuracy of 84.44%, 97.06%, and 9.94% on the attentional, emotional, and number datasets, respectively. An evolutionary-optimised MLP achieves results close to the Adaptive Boosted LSTM for the first two experiments and significantly higher for the number-guessing experiment, with an Adaptive Boosted DEvo MLP reaching 31.35%, while being significantly quicker to train and classify. In particular, the accuracy of the non-boosted DEvo MLP was 79.81%, 96.11%, and 27.07% on the same benchmarks. Two datasets for the experiments were gathered using a Muse EEG headband with four electrodes corresponding to the TP9, AF7, AF8, and TP10 locations of the international EEG placement standard. The EEG MindBigData digits dataset was gathered from the TP9, FP1, FP2, and TP10 locations. |
Tasks | EEG, Sentiment Analysis |
Published | 2019-08-13 |
URL | https://arxiv.org/abs/1908.04784v1 |
https://arxiv.org/pdf/1908.04784v1.pdf | |
PWC | https://paperswithcode.com/paper/a-deep-evolutionary-approach-to-bioinspired |
Repo | |
Framework | |
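A toy sketch of evolutionary hyperparameter optimisation in the spirit of the abstract above: MLP hidden-layer sizes are mutated and selected by validation accuracy on a synthetic dataset. The paper's joint EEG feature selection, real data, and boosting are not reproduced.

```python
# Evolve MLP hidden-layer sizes by mutation and selection on validation accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

def fitness(hidden):
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=300, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf.score(X_va, y_va)

population = [tuple(rng.integers(5, 50, size=2)) for _ in range(6)]
for generation in range(3):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:3]                                   # keep the best topologies
    children = [tuple(max(2, h + int(rng.integers(-10, 11))) for h in p)
                for p in parents]                          # mutate layer sizes
    population = parents + children

print(sorted(population, key=fitness, reverse=True)[0])
```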