Paper Group ANR 204
Variance-Based Risk Estimations in Markov Processes via Transformation with State Lumping. Resilient Supplier Selection in Logistics 4.0 with Heterogeneous Information. Merging Deterministic Policy Gradient Estimations with Varied Bias-Variance Tradeoff for Effective Deep Reinforcement Learning. Training Provably Robust Models by Polyhedral Envelope …
Variance-Based Risk Estimations in Markov Processes via Transformation with State Lumping
Title | Variance-Based Risk Estimations in Markov Processes via Transformation with State Lumping |
Authors | Shuai Ma, Jia Yuan Yu |
Abstract | Variance plays a crucial role in risk-sensitive reinforcement learning, and most risk measures can be analyzed via variance. In this paper, we consider two law-invariant risks as examples: mean-variance risk and exponential utility risk. With the aid of the state-augmentation transformation (SAT), we show that the two risks can be estimated in Markov decision processes (MDPs) with a stochastic transition-based reward and a randomized policy. To cope with the enlarged state space, a novel definition of isotopic states is proposed for state lumping, which exploits the special structure of the transformed transition probability. In the numerical experiment, we illustrate state lumping in the SAT, the errors introduced by a naive reward simplification, and the validity of the SAT for the two risk estimations. |
Tasks | |
Published | 2019-07-09 |
URL | https://arxiv.org/abs/1907.05231v1 |
PDF | https://arxiv.org/pdf/1907.05231v1.pdf |
PWC | https://paperswithcode.com/paper/variance-based-risk-estimations-in-markov |
Repo | |
Framework | |
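A minimal Monte Carlo sketch of the two law-invariant risks named in the abstract, evaluated on simulated returns; the toy two-state MDP, the uniform randomized policy, and the risk parameters `lam` and `beta` are illustrative assumptions, not the paper's state-augmentation construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(horizon=20, gamma=0.95):
    """Simulate one episode of a toy 2-state MDP with a stochastic transition-based reward."""
    state, total, discount = 0, 0.0, 1.0
    for _ in range(horizon):
        action = rng.integers(2)                      # uniform randomized policy
        next_state = rng.integers(2)                  # toy transition kernel
        reward = rng.normal(loc=state + action - next_state, scale=0.5)
        total += discount * reward
        discount *= gamma
        state = next_state
    return total

returns = np.array([rollout_return() for _ in range(5000)])

lam, beta = 0.5, 0.2                                  # illustrative risk parameters
mean_variance_risk = returns.mean() - lam * returns.var()
exp_utility_risk = -np.log(np.mean(np.exp(-beta * returns))) / beta
print(f"mean-variance: {mean_variance_risk:.3f}, exponential utility: {exp_utility_risk:.3f}")
```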
Resilient Supplier Selection in Logistics 4.0 with Heterogeneous Information
Title | Resilient Supplier Selection in Logistics 4.0 with Heterogeneous Information |
Authors | Md Mahmudul Hassan, Dizuo Jiang, A. M. M. Sharif Ullah, Md. Noor-E-Alam |
Abstract | The supplier selection problem has gained extensive attention in prior studies. However, research based on the Fuzzy Multi-Attribute Decision Making (F-MADM) approach to ranking resilient suppliers in Logistics 4.0 is still in its infancy. Traditional MADM approaches fail to address the resilient supplier selection problem in Logistics 4.0 primarily because of the large amount of data concerning some attributes that are quantitative, yet difficult to process while making decisions. Besides, some qualitative attributes prevalent in Logistics 4.0 entail imprecise perceptual or judgmental decision-relevant information, and are substantially different from those considered in traditional supplier selection problems. This study develops a Decision Support System (DSS) that helps the decision maker incorporate and process such imprecise heterogeneous data in a unified framework to rank a set of resilient suppliers in the Logistics 4.0 environment. The proposed framework induces a triangular fuzzy number from large-scale temporal data using the probability-possibility consistency principle. Large amounts of non-temporal data presented graphically are processed by extracting granular information that is imprecise in nature. Fuzzy linguistic variables are used to map the qualitative attributes. A fuzzy-based TOPSIS method is then adopted to generate ranking scores for the alternative suppliers. These ranking scores are used as input in a Multi-Choice Goal Programming (MCGP) model to determine the optimal order allocation for the respective suppliers. Finally, a sensitivity analysis assesses how the Suppliers Cost versus Resilience Index (SCRI) changes when differential priorities are set for the respective cost and resilience attributes. |
Tasks | Decision Making |
Published | 2019-04-10 |
URL | https://arxiv.org/abs/1904.09837v3 |
PDF | https://arxiv.org/pdf/1904.09837v3.pdf |
PWC | https://paperswithcode.com/paper/190409837 |
Repo | |
Framework | |
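A minimal crisp TOPSIS sketch of the ranking step described above; the fuzzy arithmetic and the MCGP order-allocation model are omitted, and the decision matrix, weights, and criterion directions below are illustrative assumptions.

```python
import numpy as np

X = np.array([[7.0, 0.80, 120.0],        # supplier A: quality, resilience, cost
              [9.0, 0.60, 100.0],        # supplier B
              [8.0, 0.75,  90.0]])       # supplier C
weights = np.array([0.4, 0.4, 0.2])
benefit = np.array([True, True, False])  # cost is a "smaller is better" criterion

R = X / np.linalg.norm(X, axis=0)        # vector-normalize each criterion column
V = R * weights
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)      # TOPSIS ranking score per supplier
print(dict(zip("ABC", closeness.round(3))))
```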
Merging Deterministic Policy Gradient Estimations with Varied Bias-Variance Tradeoff for Effective Deep Reinforcement Learning
Title | Merging Deterministic Policy Gradient Estimations with Varied Bias-Variance Tradeoff for Effective Deep Reinforcement Learning |
Authors | Gang Chen |
Abstract | Deep reinforcement learning (DRL) on Markov decision processes (MDPs) with continuous action spaces is often approached by directly updating parametric policies along the direction of estimated policy gradients (PGs). Previous research revealed that the performance of these PG algorithms depends heavily on the bias-variance tradeoff involved in estimating and using PGs. A notable approach towards balancing this tradeoff is to merge both on-policy and off-policy gradient estimations for the purpose of training stochastic policies. However, this method cannot be used directly by sample-efficient off-policy PG algorithms such as Deep Deterministic Policy Gradient (DDPG) and twin-delayed DDPG (TD3), which are designed to train deterministic policies. It is hence important to develop new techniques for merging multiple off-policy estimations of the deterministic PG (DPG). Driven by this research question, this paper introduces elite DPG, which is estimated differently from conventional DPG so as to emphasize the variance reduction effect at the expense of increased learning bias. To mitigate the extra bias, policy consolidation techniques are developed to distill policy behavioral knowledge from elite trajectories and use the distilled generative model to further regularize policy training. Moreover, we study, both theoretically and experimentally, two different DPG merging methods, i.e., interpolation merging and two-step merging, with the aim of inducing a varied bias-variance tradeoff through the combined use of conventional DPG and elite DPG. Experiments on six benchmark control tasks confirm that these two merging methods noticeably improve the learning performance of TD3, significantly outperforming several state-of-the-art DRL algorithms. |
Tasks | |
Published | 2019-11-24 |
URL | https://arxiv.org/abs/1911.10527v1 |
PDF | https://arxiv.org/pdf/1911.10527v1.pdf |
PWC | https://paperswithcode.com/paper/merging-deterministic-policy-gradient |
Repo | |
Framework | |
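A minimal sketch of the "interpolation merging" idea from the abstract, i.e. a convex combination of a conventional DPG estimate and an elite DPG estimate; the gradient vectors and the mixing coefficient `nu` are illustrative assumptions.

```python
import numpy as np

def interpolation_merge(conventional_dpg, elite_dpg, nu=0.3):
    """Trade bias for variance by mixing the low-bias and the low-variance estimates."""
    return (1.0 - nu) * conventional_dpg + nu * elite_dpg

conventional_dpg = np.array([0.12, -0.40, 0.05])   # higher variance, lower bias
elite_dpg = np.array([0.10, -0.35, 0.02])          # lower variance, extra bias
print(interpolation_merge(conventional_dpg, elite_dpg))
```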
Training Provably Robust Models by Polyhedral Envelope Regularization
Title | Training Provably Robust Models by Polyhedral Envelope Regularization |
Authors | Chen Liu, Mathieu Salzmann, Sabine Süsstrunk |
Abstract | Training certifiable neural networks enables one to obtain models with robustness guarantees against adversarial attacks. In this work, we introduce a framework to bound the adversary-free region in the neighborhood of the input data by a polyhedral envelope, which yields finer-grained certified robustness. We further introduce polyhedral envelope regularization (PER) to encourage larger polyhedral envelopes and thus improve the provable robustness of the models. We demonstrate the flexibility and effectiveness of our framework on standard benchmarks; it applies to networks of different architectures and general activation functions. Compared with the state-of-the-art methods, PER has very little computational overhead and better robustness guarantees without over-regularizing the model. |
Tasks | |
Published | 2019-12-10 |
URL | https://arxiv.org/abs/1912.04792v2 |
PDF | https://arxiv.org/pdf/1912.04792v2.pdf |
PWC | https://paperswithcode.com/paper/on-certifying-robust-models-by-polyhedral |
Repo | |
Framework | |
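A minimal sketch of the geometry behind a polyhedral envelope: for an affine (or linearly relaxed) classifier, the adversary-free region around an input is an intersection of half-spaces, and the l2 distance to each face has a closed form. The linear model below is an illustrative assumption, not the paper's network relaxation or its regularizer.

```python
import numpy as np

W = np.array([[1.0, 2.0], [0.5, -1.0], [-1.0, 0.3]])  # 3 classes, 2 features
b = np.array([0.1, -0.2, 0.0])
x = np.array([0.8, 0.4])

logits = W @ x + b
y = int(np.argmax(logits))
margins = []
for j in range(len(b)):
    if j == y:
        continue
    w_diff, b_diff = W[y] - W[j], b[y] - b[j]
    margins.append((w_diff @ x + b_diff) / np.linalg.norm(w_diff))  # distance to each face
certified_radius = min(margins)   # largest l2 ball inside the polyhedral envelope
print(f"certified l2 radius around x: {certified_radius:.3f}")
```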
An Empirical Comparison of FAISS and FENSHSES for Nearest Neighbor Search in Hamming Space
Title | An Empirical Comparison of FAISS and FENSHSES for Nearest Neighbor Search in Hamming Space |
Authors | Cun Mu, Binwei Yang, Zheng Yan |
Abstract | In this paper, we compare the performance of FAISS and FENSHSES on nearest neighbor search in Hamming space, a fundamental task with ubiquitous applications in today's eCommerce. Comprehensive evaluations are made in terms of indexing speed, search latency and RAM consumption. This comparison is conducted towards a better understanding of the trade-offs between nearest neighbor search systems implemented in main memory and those implemented in secondary memory, which is largely unaddressed in the literature. |
Tasks | |
Published | 2019-06-24 |
URL | https://arxiv.org/abs/1906.10095v2 |
PDF | https://arxiv.org/pdf/1906.10095v2.pdf |
PWC | https://paperswithcode.com/paper/an-empirical-comparison-of-faiss-and-fenshses |
Repo | |
Framework | |
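A minimal sketch of the FAISS side of the comparison, i.e. exhaustive Hamming-distance search over binary codes held in RAM; it assumes the faiss Python package is installed, and the random binary codes are placeholders. FENSHSES runs inside an Elasticsearch cluster and is not reproduced here.

```python
import numpy as np
import faiss

d = 64                                   # code length in bits (must be a multiple of 8)
rng = np.random.default_rng(0)
database = rng.integers(0, 256, size=(10000, d // 8), dtype=np.uint8)
queries = rng.integers(0, 256, size=(5, d // 8), dtype=np.uint8)

index = faiss.IndexBinaryFlat(d)         # exhaustive Hamming-distance index in main memory
index.add(database)
distances, neighbors = index.search(queries, 10)
print(neighbors[0], distances[0])        # ids and Hamming distances of the top-10 for query 0
```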
Rolling-Shutter-Aware Differential SfM and Image Rectification
Title | Rolling-Shutter-Aware Differential SfM and Image Rectification |
Authors | Bingbing Zhuang, Loong-Fah Cheong, Gim Hee Lee |
Abstract | In this paper, we develop a modified differential Structure from Motion (SfM) algorithm that can estimate relative pose from two consecutive frames despite Rolling Shutter (RS) artifacts. In particular, we show that under a constant velocity assumption, the errors induced by the rolling shutter effect can be easily rectified by a linear scaling operation on each optical flow. We further propose a 9-point algorithm to recover the relative pose of a rolling shutter camera that undergoes constant acceleration motion. We demonstrate that the dense depth maps recovered from the relative pose of the RS camera can be used in an RS-aware warping for image rectification to recover high-quality Global Shutter (GS) images. Experiments on both synthetic and real RS images show that our RS-aware differential SfM algorithm produces more accurate results on relative pose estimation and 3D reconstruction from images distorted by the RS effect compared to standard SfM algorithms that assume a GS camera model. We also demonstrate that our RS-aware warping for image rectification outperforms state-of-the-art commercial software products, i.e., Adobe After Effects and Apple iMovie, at removing RS artifacts. |
Tasks | 3D Reconstruction, Optical Flow Estimation, Pose Estimation |
Published | 2019-03-10 |
URL | https://arxiv.org/abs/1903.03943v2 |
PDF | https://arxiv.org/pdf/1903.03943v2.pdf |
PWC | https://paperswithcode.com/paper/rolling-shutter-aware-differential-sfm-and-1 |
Repo | |
Framework | |
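A heavily hedged sketch of the constant-velocity intuition: each image row is exposed at a slightly different time, so the effective time between the two observations of a pixel deviates from the frame interval, and the measured optical flow can be corrected by a per-pixel linear scaling. The exact scaling used in the paper may differ; `gamma` (readout time over frame interval) and the random flow field are assumptions.

```python
import numpy as np

def rectify_flow(flow, rows1, image_height, gamma=0.8):
    """Scale each flow vector back toward a global-shutter-equivalent displacement."""
    rows2 = rows1 + flow[..., 1]                        # row of the pixel in frame 2
    dt_ratio = 1.0 + gamma * (rows2 - rows1) / image_height
    return flow / dt_ratio[..., None]

H, W = 480, 640
flow = np.random.default_rng(0).normal(size=(H, W, 2))  # toy (u, v) flow field
rows1 = np.arange(H, dtype=float)[:, None] * np.ones((1, W))
flow_gs = rectify_flow(flow, rows1, H)
print(flow_gs.shape)
```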
Geometric Considerations of a Good Dictionary for Koopman Analysis of Dynamical Systems
Title | Geometric Considerations of a Good Dictionary for Koopman Analysis of Dynamical Systems |
Authors | Erik Bollt |
Abstract | Representation of a dynamical system in terms of simplifying modes is a central premise of reduced order modelling and a primary concern of the increasingly popular DMD (dynamic mode decomposition) empirical interpretation of Koopman operator analysis of complex systems. In the spirit of optimal approximation and reduced order modelling, the goal of DMD methods and their variants is to describe the dynamical evolution as a linear evolution in an appropriately transformed lower-rank space, as well as possible. However, as far as we know, there has been no in-depth study of the underlying geometry as it relates to an efficient representation. To this end, we show that for a good dictionary, quite different from other constructions, we need only construct optimal initial data functions on a transverse codimension-one set; the eigenfunctions on a subdomain then follow by the method of characteristics. The underlying geometry of Koopman eigenfunctions involves an extreme multiplicity, whereby infinitely many eigenfunctions correspond to each eigenvalue; we resolve this with a new concept, a quotient set of functions defined in terms of matched level sets. We call this equivalence class of functions a "primary eigenfunction," which helps clarify the relationship between the large number of eigenfunctions in a perhaps otherwise low-dimensional phase space. This construction allows us to understand the geometric relationships between the numerous eigenfunctions in a useful way. We also discuss how the underlying spectral decomposition, including the point spectrum and the continuous spectrum, fundamentally depends on the domain. |
Tasks | |
Published | 2019-12-18 |
URL | https://arxiv.org/abs/1912.09570v1 |
PDF | https://arxiv.org/pdf/1912.09570v1.pdf |
PWC | https://paperswithcode.com/paper/geometric-considerations-of-a-good-dictionary |
Repo | |
Framework | |
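A minimal sketch of standard (exact) DMD, the empirical Koopman analysis the abstract builds on; the toy linear system used to generate the snapshots is an assumption, and none of the paper's dictionary or level-set constructions appear here.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])               # toy linear dynamics
states = [rng.normal(size=2)]
for _ in range(50):
    states.append(A_true @ states[-1])
Z = np.array(states).T                                      # states as columns
X, Y = Z[:, :-1], Z[:, 1:]                                  # paired snapshot matrices

U, s, Vh = np.linalg.svd(X, full_matrices=False)
A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)   # reduced linear operator
eigvals, W = np.linalg.eig(A_tilde)
modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W              # exact DMD modes
print(np.sort_complex(eigvals))                             # approximates eig(A_true)
```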
High dimensional precision medicine from patient-derived xenografts
Title | High dimensional precision medicine from patient-derived xenografts |
Authors | Naim U. Rashid, Daniel J. Luckett, Jingxiang Chen, Michael T. Lawson, Longshaokan Wang, Yunshu Zhang, Eric B. Laber, Yufeng Liu, Jen Jen Yeh, Donglin Zeng, Michael R. Kosorok |
Abstract | The complexity of human cancer often results in significant heterogeneity in response to treatment. Precision medicine offers potential to improve patient outcomes by leveraging this heterogeneity. Individualized treatment rules (ITRs) formalize precision medicine as maps from the patient covariate space into the space of allowable treatments. The optimal ITR is that which maximizes the mean of a clinical outcome in a population of interest. Patient-derived xenograft (PDX) studies permit the evaluation of multiple treatments within a single tumor and thus are ideally suited for estimating optimal ITRs. PDX data are characterized by correlated outcomes, a high-dimensional feature space, and a large number of treatments. Existing methods for estimating optimal ITRs do not take advantage of the unique structure of PDX data or handle the associated challenges well. In this paper, we explore machine learning methods for estimating optimal ITRs from PDX data. We analyze data from a large PDX study to identify biomarkers that are informative for developing personalized treatment recommendations in multiple cancers. We estimate optimal ITRs using regression-based approaches such as Q-learning and direct search methods such as outcome weighted learning. Finally, we implement a superlearner approach to combine a set of estimated ITRs and show that the resulting ITR performs better than any of the input ITRs, mitigating uncertainty regarding user choice of any particular ITR estimation methodology. Our results indicate that PDX data are a valuable resource for developing individualized treatment strategies in oncology. |
Tasks | Q-Learning |
Published | 2019-12-13 |
URL | https://arxiv.org/abs/1912.06667v1 |
PDF | https://arxiv.org/pdf/1912.06667v1.pdf |
PWC | https://paperswithcode.com/paper/high-dimensional-precision-medicine-from |
Repo | |
Framework | |
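A minimal sketch of a regression-based ("Q-learning style") individualized treatment rule: fit one outcome model per treatment arm and recommend the arm with the best predicted outcome. The synthetic covariates, treatments, and outcome model are assumptions standing in for the PDX data.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p, n_treatments = 300, 10, 3
X = rng.normal(size=(n, p))                       # tumor-line covariates (e.g. genomic features)
A = rng.integers(n_treatments, size=n)            # randomly assigned treatment
true_effect = X[:, 0] * (A == 1) - X[:, 1] * (A == 2)
y = X[:, :3].sum(axis=1) + true_effect + rng.normal(scale=0.5, size=n)

# One outcome model per treatment arm, then recommend the arm with the best prediction.
models = [Ridge().fit(X[A == a], y[A == a]) for a in range(n_treatments)]

def itr(x_new):
    return int(np.argmax([m.predict(x_new[None, :])[0] for m in models]))

print(itr(X[0]))
```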
Mean-field Langevin System, Optimal Control and Deep Neural Networks
Title | Mean-field Langevin System, Optimal Control and Deep Neural Networks |
Authors | Kaitong Hu, Anna Kazeykina, Zhenjie Ren |
Abstract | In this paper, we study a regularised relaxed optimal control problem and, in particular, we are concerned with the case where the control variable is of large dimension. We introduce a system of mean-field Langevin equations, the invariant measure of which is shown to be the optimal control of the initial problem under mild conditions. Therefore, this system of processes can be viewed as a continuous-time numerical algorithm for computing the optimal control. As an application, this result endorses the solvability of the stochastic gradient descent algorithm for a wide class of deep neural networks. |
Tasks | |
Published | 2019-09-16 |
URL | https://arxiv.org/abs/1909.07278v2 |
PDF | https://arxiv.org/pdf/1909.07278v2.pdf |
PWC | https://paperswithcode.com/paper/mean-field-langevin-system-optimal-control |
Repo | |
Framework | |
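A minimal single-particle Langevin iteration, the discrete-time analogue of the mean-field Langevin system the abstract connects to stochastic gradient descent; the quadratic objective and the temperature `sigma` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(theta):                       # gradient of a toy regularized objective
    return theta - np.array([1.0, -2.0])

theta, lr, sigma = np.zeros(2), 0.05, 0.1
for _ in range(2000):
    noise = rng.normal(size=theta.shape)
    theta = theta - lr * grad(theta) + np.sqrt(2.0 * lr) * sigma * noise

print(theta)                           # iterates concentrate near the minimizer [1, -2]
```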
Persian Signature Verification using Fully Convolutional Networks
Title | Persian Signature Verification using Fully Convolutional Networks |
Authors | Mohammad Rezaei, Nader Naderi |
Abstract | Fully convolutional networks (FCNs) have recently been used for feature extraction and classification in image and speech recognition, where their inputs have been raw signals or other complicated features. To date, Persian signature verification has been done using conventional convolutional neural networks (CNNs). In this paper, we propose to use an FCN to learn robust feature extraction from raw signature images. An FCN can be considered a variant of a CNN in which the fully connected layers are replaced with a global pooling layer. In the proposed method, the FCN inputs are raw signature images and the convolution filter size is fixed. Recognition accuracy on the UTSig database shows that an FCN with global average pooling outperforms a CNN. |
Tasks | Speech Recognition |
Published | 2019-09-20 |
URL | https://arxiv.org/abs/1909.09720v1 |
PDF | https://arxiv.org/pdf/1909.09720v1.pdf |
PWC | https://paperswithcode.com/paper/190909720 |
Repo | |
Framework | |
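A minimal PyTorch sketch of the FCN-with-global-average-pooling idea from the abstract; the layer sizes, channel counts, input resolution, and two-class output are assumptions, not the architecture evaluated on UTSig.

```python
import torch
import torch.nn as nn

class SignatureFCN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, num_classes, kernel_size=3, padding=1),  # per-class score maps
        )
        self.gap = nn.AdaptiveAvgPool2d(1)      # global average pooling replaces FC layers

    def forward(self, x):
        return self.gap(self.features(x)).flatten(1)    # (batch, num_classes) logits

logits = SignatureFCN()(torch.randn(4, 1, 128, 128))     # raw grayscale signature images
print(logits.shape)
```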
A model for predicting price polarity of real estate properties using information of real estate market websites
Title | A model for predicting price polarity of real estate properties using information of real estate market websites |
Authors | Vladimir Vargas-Calderón, Jorge E. Camargo |
Abstract | This paper presents a model that uses the information that sellers publish on real estate market websites to predict whether a property has a higher or lower price than the average price of similar properties. The model learns the correlation between price and information (text descriptions and features) of real estate properties through automatic identification of latent semantic content given by a machine learning model based on doc2vec and xgboost. The proposed model was evaluated on a data set of 57,516 publications of real estate properties collected from 2016 to 2018 in Bogotá. Results show that the accuracy of a classifier that involves text descriptions is slightly higher than that of a classifier that only uses features of the real estate properties, as text descriptions tend to contain detailed information about the property. |
Tasks | |
Published | 2019-11-19 |
URL | https://arxiv.org/abs/1911.08382v1 |
PDF | https://arxiv.org/pdf/1911.08382v1.pdf |
PWC | https://paperswithcode.com/paper/a-model-for-predicting-price-polarity-of-real |
Repo | |
Framework | |
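A minimal sketch of a doc2vec-plus-xgboost pipeline like the one outlined in the abstract; it assumes gensim and xgboost are installed, and the tiny toy corpus and binary labels (priced above or below similar properties) are illustrative assumptions.

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from xgboost import XGBClassifier

descriptions = [
    "bright apartment with two bedrooms near the park",
    "spacious house with garage and large garden",
    "small studio close to public transport",
    "renovated apartment with balcony and parking",
]
labels = np.array([1, 1, 0, 0])   # 1 = priced above similar properties, 0 = below

corpus = [TaggedDocument(words=d.split(), tags=[i]) for i, d in enumerate(descriptions)]
d2v = Doc2Vec(corpus, vector_size=32, min_count=1, epochs=50)   # learn document embeddings

X = np.array([d2v.infer_vector(d.split()) for d in descriptions])
clf = XGBClassifier(n_estimators=50, max_depth=3).fit(X, labels)
print(clf.predict(X))
```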
Conformal Symplectic and Relativistic Optimization
Title | Conformal Symplectic and Relativistic Optimization |
Authors | Guilherme França, Jeremias Sulam, Daniel P. Robinson, René Vidal |
Abstract | Recent work in machine learning has shown that optimization algorithms such as Nesterov’s accelerated gradient can be obtained as the discretization of a continuous dynamical system. Since different discretizations can lead to different algorithms, it is important to choose the ones that preserve certain structural properties of the dynamical system, such as critical points, stability and convergence rates. In this paper we study structure-preserving discretizations for certain classes of dissipative systems, which allow us to analyze properties of existing accelerated algorithms as well as introduce new ones. In particular, we consider two classes of conformal Hamiltonian systems whose trajectories lie on a symplectic manifold, namely a classical mechanical system with linear dissipation and its relativistic extension, and propose discretizations based on conformal symplectic integrators which preserve this underlying symplectic geometry. We argue that conformal symplectic integrators can preserve convergence rates of the continuous system up to a negligible error. As a surprising consequence of our construction, we show that the well-known and widely used classical momentum method is a symplectic integrator, while the popular Nesterov’s accelerated gradient is not. Moreover, we introduce a relativistic generalization of classical momentum, called relativistic gradient descent, which is symplectic, includes normalization of the momentum, and may result in more stable/faster optimization for some problems. |
Tasks | |
Published | 2019-03-11 |
URL | https://arxiv.org/abs/1903.04100v3 |
PDF | https://arxiv.org/pdf/1903.04100v3.pdf |
PWC | https://paperswithcode.com/paper/conformal-symplectic-and-relativistic |
Repo | |
Framework | |
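A minimal sketch contrasting the classical momentum update (which the paper identifies as a conformal symplectic integrator) with a relativistic-style step that normalizes the momentum; the exact relativistic gradient descent update in the paper may differ, and the quadratic objective, step sizes, and the constant `c` are illustrative assumptions.

```python
import numpy as np

def grad(x):                                  # gradient of a toy ill-conditioned quadratic
    return np.array([2.0, 20.0]) * x

def classical_momentum(x, v, lr=0.01, mu=0.9):
    v = mu * v - lr * grad(x)
    return x + v, v

def relativistic_step(x, v, lr=0.01, mu=0.9, c=1.0):
    v = mu * v - lr * grad(x)
    return x + v / np.sqrt(1.0 + np.dot(v, v) / c**2), v   # momentum normalization

x1 = x2 = np.array([1.0, 1.0])
v1 = v2 = np.zeros(2)
for _ in range(200):
    x1, v1 = classical_momentum(x1, v1)
    x2, v2 = relativistic_step(x2, v2)
print(x1, x2)
```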
Thompson Sampling on Symmetric $α$-Stable Bandits
Title | Thompson Sampling on Symmetric $α$-Stable Bandits |
Authors | Abhimanyu Dubey, Alex Pentland |
Abstract | Thompson Sampling provides an efficient technique to introduce prior knowledge in the multi-armed bandit problem, along with providing remarkable empirical performance. In this paper, we revisit the Thompson Sampling algorithm under rewards drawn from symmetric $\alpha$-stable distributions, which are a class of heavy-tailed probability distributions utilized in finance and economics, in problems such as modeling stock prices and human behavior. We present an efficient framework for posterior inference, which leads to two algorithms for Thompson Sampling in this setting. We prove finite-time regret bounds for both algorithms, and demonstrate through a series of experiments the stronger performance of Thompson Sampling in this setting. With our results, we provide an exposition of symmetric $\alpha$-stable distributions in sequential decision-making, and enable sequential Bayesian inference in applications from diverse fields in finance and complex systems that operate on heavy-tailed features. |
Tasks | Bayesian Inference, Decision Making |
Published | 2019-07-08 |
URL | https://arxiv.org/abs/1907.03821v2 |
PDF | https://arxiv.org/pdf/1907.03821v2.pdf |
PWC | https://paperswithcode.com/paper/thompson-sampling-on-symmetric-stable-bandits |
Repo | |
Framework | |
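A minimal Thompson Sampling loop on arms with symmetric alpha-stable rewards (sampled via scipy); the Gaussian posterior used here is a simplified stand-in for the paper's posterior-inference framework, and the arm locations and `alpha` are assumptions.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
true_means, alpha = np.array([0.0, 0.5, 1.0]), 1.8        # symmetric case: skew beta = 0
counts, sums = np.zeros(3), np.zeros(3)

for _ in range(500):
    # Sample a mean estimate per arm from a simple Gaussian posterior and play the best arm.
    sampled = rng.normal(sums / np.maximum(counts, 1), 1.0 / np.sqrt(counts + 1))
    arm = int(np.argmax(sampled))
    reward = levy_stable.rvs(alpha, 0.0, loc=true_means[arm], scale=1.0, random_state=rng)
    counts[arm] += 1
    sums[arm] += reward

print(counts)      # pulls should concentrate on the best arm (index 2)
```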
Team EP at TAC 2018: Automating data extraction in systematic reviews of environmental agents
Title | Team EP at TAC 2018: Automating data extraction in systematic reviews of environmental agents |
Authors | Artur Nowak, Paweł Kunstman |
Abstract | We describe our entry for the Systematic Review Information Extraction track of the 2018 Text Analysis Conference. Our solution is an end-to-end, deep learning, sequence tagging model based on the BI-LSTM-CRF architecture. However, we use interleaved, alternating LSTM layers with highway connections instead of the more traditional approach, where the last hidden states of both directions are concatenated to create the input to the next layer. We also make extensive use of pre-trained word embeddings, namely GloVe and ELMo. Thanks to a number of regularization techniques, we were able to achieve a relatively large model capacity (31.3M+ trainable parameters) for the size of the training set (100 documents, less than 200K tokens). The system's official score was 60.9% (micro-F1) and it ranked first in Task 1. Additionally, after rectifying an obvious mistake in the submission format, the system scored 67.35%. |
Tasks | Word Embeddings |
Published | 2019-01-07 |
URL | http://arxiv.org/abs/1901.02081v1 |
PDF | http://arxiv.org/pdf/1901.02081v1.pdf |
PWC | https://paperswithcode.com/paper/team-ep-at-tac-2018-automating-data |
Repo | |
Framework | |
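A minimal PyTorch sketch of a BiLSTM token tagger in the spirit of the submission; the CRF layer, highway connections, interleaved layer scheme, and pre-trained GloVe/ELMo embeddings described in the abstract are omitted, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, hidden=128, num_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.tag_scores = nn.Linear(2 * hidden, num_tags)    # per-token emission scores

    def forward(self, token_ids):
        hidden_states, _ = self.lstm(self.embed(token_ids))
        return self.tag_scores(hidden_states)                # (batch, seq_len, num_tags)

tokens = torch.randint(0, 5000, (2, 40))    # a batch of two 40-token documents
print(BiLSTMTagger()(tokens).shape)
```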
Bayesian deep learning with hierarchical prior: Predictions from limited and noisy data
Title | Bayesian deep learning with hierarchical prior: Predictions from limited and noisy data |
Authors | Xihaier Luo, Ahsan Kareem |
Abstract | Datasets in engineering applications are often limited and contaminated, mainly due to unavoidable measurement noise and signal distortion. Thus, using conventional data-driven approaches to build a reliable discriminative model, and further applying this identified surrogate to uncertainty analysis, remains very challenging. A deep learning (DL) approach is presented to provide predictions based on limited and noisy data. To address noise perturbation, a Bayesian learning method that naturally facilitates an automatic updating mechanism is considered to quantify and propagate model uncertainties into predictive quantities. Specifically, hierarchical Bayesian modeling (HBM) is first adopted to describe model uncertainties, which allows the prior assumption to be less subjective while also making the proposed surrogate more robust. Next, Bayesian inference is seamlessly integrated into the DL framework, which in turn supports probabilistic programming by yielding a probability distribution of the quantities of interest rather than their point estimates. Variational inference (VI) is implemented for the posterior distribution analysis, where the intractable marginalization of the likelihood function over the parameter space is framed as an optimization problem, and a stochastic gradient descent method is applied to solve it. Finally, Monte Carlo simulation is used to obtain an unbiased estimator in the predictive phase of Bayesian inference, where the proposed Bayesian deep learning (BDL) scheme is able to offer confidence bounds for the output estimation by analyzing propagated uncertainties. The effectiveness of Bayesian shrinkage is demonstrated in improving predictive performance using contaminated data, and various examples are provided to illustrate the concepts, methodologies, and algorithms of the proposed BDL modeling technique. |
Tasks | Bayesian Inference, Probabilistic Programming |
Published | 2019-07-08 |
URL | https://arxiv.org/abs/1907.04240v1 |
PDF | https://arxiv.org/pdf/1907.04240v1.pdf |
PWC | https://paperswithcode.com/paper/bayesian-deep-learning-with-hierarchical |
Repo | |
Framework | |
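A minimal sketch of mean-field variational inference for a Bayesian linear model plus Monte Carlo averaging in the predictive phase, mirroring the VI-then-MC recipe in the abstract; the toy data, the simple Gaussian prior (no hierarchy), and all hyperparameters are assumptions.

```python
import torch

torch.manual_seed(0)
X = torch.randn(200, 3)
y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(200)

mu = torch.zeros(3, requires_grad=True)           # variational mean
log_sigma = torch.zeros(3, requires_grad=True)    # variational log std
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(2000):
    eps = torch.randn(3)
    w = mu + log_sigma.exp() * eps                # reparameterization trick
    log_lik = -0.5 * ((y - X @ w) ** 2 / 0.01).sum()
    kl = 0.5 * ((mu ** 2 + log_sigma.exp() ** 2) - 2 * log_sigma - 1).sum()  # KL to N(0, I)
    loss = -log_lik + kl                          # negative ELBO
    opt.zero_grad()
    loss.backward()
    opt.step()

# Predictive phase: average predictions over posterior samples and report spread as a bound.
samples = mu + log_sigma.exp() * torch.randn(500, 3)
preds = X @ samples.T                             # (200, 500) Monte Carlo predictions
print(preds.mean(dim=1)[:3], preds.std(dim=1)[:3])
```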