Paper Group ANR 626
Towards Accurate Generative Models of Video: A New Metric & Challenges
Title | Towards Accurate Generative Models of Video: A New Metric & Challenges |
Authors | Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, Sylvain Gelly |
Abstract | Recent advances in deep generative models have led to remarkable progress in synthesizing high quality images. Following their successful application in image processing and representation learning, an important next step is to consider videos. Learning generative models of video is a much harder task, requiring a model to capture the temporal dynamics of a scene, in addition to the visual presentation of objects. While recent attempts at formulating generative models of video have had some success, current progress is hampered by (1) the lack of quantitative metrics that consider visual quality, temporal coherence, and diversity of samples, and (2) the wide gap between purely synthetic video data sets and challenging real-world data sets in terms of complexity. To this end we propose Fréchet Video Distance (FVD), a new metric for generative models of video, and StarCraft 2 Videos (SCV), a benchmark of game play from custom StarCraft 2 scenarios that challenge the current capabilities of generative models of video. We contribute a large-scale human study, which confirms that FVD correlates well with qualitative human judgment of generated videos, and provide initial benchmark results on SCV. |
Tasks | Representation Learning, Starcraft |
Published | 2018-12-03 |
URL | http://arxiv.org/abs/1812.01717v2 |
http://arxiv.org/pdf/1812.01717v2.pdf | |
PWC | https://paperswithcode.com/paper/towards-accurate-generative-models-of-video-a |
Repo | |
Framework | |
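FVD is a Fréchet distance computed between Gaussian fits of video embeddings (the paper derives these from a pretrained action-recognition network). A minimal sketch of the underlying Gaussian Fréchet distance, assuming `real_feats` and `gen_feats` are embedding matrices of shape `(n_videos, dim)` produced by such a network:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_feats, gen_feats):
    """Frechet distance between Gaussians fit to two sets of embeddings."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    # sqrtm may return a complex matrix with negligible imaginary parts.
    covmean = sqrtm(cov_r @ cov_g).real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```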
XBART: Accelerated Bayesian Additive Regression Trees
Title | XBART: Accelerated Bayesian Additive Regression Trees |
Authors | Jingyu He, Saar Yalov, P. Richard Hahn |
Abstract | Bayesian additive regression trees (BART) (Chipman et al., 2010) is a powerful predictive model that often outperforms alternative models at out-of-sample prediction. BART is especially well-suited to settings with unstructured predictor variables and substantial sources of unmeasured variation, as is typical in the social, behavioral and health sciences. This paper develops a modified version of BART that is amenable to fast posterior estimation. We present a stochastic hill climbing algorithm that matches the remarkable predictive accuracy of previous BART implementations, but is many times faster and less memory intensive. Simulation studies show that the new method is comparable in computation time to, and more accurate at function estimation than, both random forests and gradient boosting. |
Tasks | |
Published | 2018-10-04 |
URL | http://arxiv.org/abs/1810.02215v3 |
http://arxiv.org/pdf/1810.02215v3.pdf | |
PWC | https://paperswithcode.com/paper/xbart-accelerated-bayesian-additive |
Repo | |
Framework | |
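For reference, the model underlying BART (Chipman et al., 2010), for which XBART accelerates posterior estimation, is a regularized sum of regression trees:

```latex
y_i = \sum_{l=1}^{L} g(x_i;\, T_l, M_l) + \epsilon_i,
\qquad \epsilon_i \sim \mathcal{N}(0, \sigma^2),
```

where each $g(x; T_l, M_l)$ is a regression tree with topology $T_l$ and leaf parameters $M_l$, and the prior keeps each tree a weak learner.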
The QLBS Q-Learner Goes NuQLear: Fitted Q Iteration, Inverse RL, and Option Portfolios
Title | The QLBS Q-Learner Goes NuQLear: Fitted Q Iteration, Inverse RL, and Option Portfolios |
Authors | Igor Halperin |
Abstract | The QLBS model is a discrete-time option hedging and pricing model based on Dynamic Programming (DP) and Reinforcement Learning (RL). It combines the famous Q-Learning method for RL with the Black-Scholes (-Merton) model’s idea of reducing the problem of option pricing and hedging to the problem of optimal rebalancing of a dynamic replicating portfolio for the option, which is made of a stock and cash. Here we expand on several NuQLear (Numerical Q-Learning) topics with the QLBS model. First, we investigate the performance of Fitted Q Iteration for an RL (data-driven) solution to the model, and benchmark it versus a DP (model-based) solution, as well as versus the BSM model. Second, we develop an Inverse Reinforcement Learning (IRL) setting for the model, where we only observe prices and actions (re-hedges) taken by a trader, but not rewards. Third, we outline how the QLBS model can be used for pricing portfolios of options, rather than a single option in isolation, thus providing its own data-driven and model-independent solution to the (in)famous volatility smile problem of the Black-Scholes model. |
Tasks | Q-Learning |
Published | 2018-01-17 |
URL | http://arxiv.org/abs/1801.06077v1 |
http://arxiv.org/pdf/1801.06077v1.pdf | |
PWC | https://paperswithcode.com/paper/the-qlbs-q-learner-goes-nuqlear-fitted-q |
Repo | |
Framework | |
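Fitted Q Iteration, which the paper benchmarks as the data-driven solution, repeatedly regresses Q-values against bootstrapped Bellman targets on a fixed batch of transitions. A generic sketch (the QLBS-specific state, action, and reward construction is not reproduced here), assuming a small discrete action set:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fitted_q_iteration(S, A, R, S_next, actions, gamma=0.99, n_iters=20):
    """Generic fitted Q iteration on a batch of (s, a, r, s') transitions.

    S, S_next: (n, d) state arrays; A: (n, 1) actions; R: (n,) rewards;
    actions: small, discrete set of candidate actions.
    """
    X = np.hstack([S, A])
    targets = R.copy()  # Q_0 targets: immediate rewards
    model = None
    for _ in range(n_iters):
        model = RandomForestRegressor(n_estimators=50).fit(X, targets)
        # Bellman backup: r + gamma * max_a' Q_k(s', a')
        q_next = np.column_stack([
            model.predict(np.hstack([S_next, np.full((len(S_next), 1), a)]))
            for a in actions
        ])
        targets = R + gamma * q_next.max(axis=1)
    return model
```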
Learning from Positive and Unlabeled Data under the Selected At Random Assumption
Title | Learning from Positive and Unlabeled Data under the Selected At Random Assumption |
Authors | Jessa Bekker, Jesse Davis |
Abstract | For many interesting tasks, such as medical diagnosis and web page classification, a learner only has access to some positively labeled examples and many unlabeled examples. Learning from this type of data requires making assumptions about the true distribution of the classes and/or the mechanism that was used to select the positive examples to be labeled. The commonly made assumptions, separability of the classes and positive examples being selected completely at random, are very strong. This paper proposes a weaker assumption: that the positive examples are selected at random, conditioned on some of the attributes. To learn under this assumption, an EM method is proposed. Experiments show that our method is not only capable of learning under this assumption, but also outperforms the state of the art for learning under the selected completely at random assumption. |
Tasks | Medical Diagnosis |
Published | 2018-08-27 |
URL | http://arxiv.org/abs/1808.08755v1 |
http://arxiv.org/pdf/1808.08755v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-from-positive-and-unlabeled-data |
Repo | |
Framework | |
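The gap between the SCAR and SAR assumptions can be stated through the labeling propensity. Since only positive examples can be labeled, a sketch of the standard PU-learning identity (notation assumed here):

```latex
\Pr(s = 1 \mid x) \;=\; \Pr(s = 1 \mid y = 1, x)\,\Pr(y = 1 \mid x) \;=\; e(x)\,\Pr(y = 1 \mid x),
```

where $s$ indicates that an example is labeled. SCAR assumes the propensity is constant, $e(x) \equiv c$, while SAR only assumes $e(x)$ depends on (a subset of) the attributes, which the proposed EM method must account for.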
A Possibility Distribution Based Multi-Criteria Decision Algorithm for Resilient Supplier Selection Problems
Title | A Possibility Distribution Based Multi-Criteria Decision Algorithm for Resilient Supplier Selection Problems |
Authors | Dizuo Jiang, Md Mahmudul Hassan, Tasnim Ibn Faiz, Md. Noor-E-Alam |
Abstract | Thus far, limited research has been performed on resilient supplier selection - a problem that requires simultaneous consideration of a set of numerical and linguistic evaluation criteria, which are substantially different from those of the traditional supplier selection problem. Essentially, resilient supplier selection entails a key sourcing decision for an organization to gain competitive advantage. In the presence of multiple conflicting evaluation criteria, contradicting decision makers, and imprecise decision relevant information (DRI), this problem becomes even more difficult to solve with classical optimization approaches. However, prior research on MCDA-based supplier selection problems has been lacking in the ability to provide a seamless integration of numerical and linguistic evaluation criteria along with the consideration of multiple decision makers. To address these challenges, we present a comprehensive decision-making framework for ranking a set of suppliers from a resiliency perspective. The proposed algorithm is capable of leveraging imprecise and aggregated DRI obtained from crisp numerical assessments and reliability-adjusted linguistic appraisals from a group of decision makers. We adapt two popular tools - Single Valued Neutrosophic Sets (SVNS) and Interval-Valued Fuzzy Sets (IVFS) - and for the first time extend them to incorporate both crisp and linguistic evaluations in a group decision making platform to obtain aggregated SVNS and IVFS decision matrices. This information is then used to rank the resilient suppliers using the TOPSIS method. We present a case study to illustrate the mechanism of the proposed algorithm. |
Tasks | Decision Making |
Published | 2018-06-04 |
URL | http://arxiv.org/abs/1806.01650v2 |
http://arxiv.org/pdf/1806.01650v2.pdf | |
PWC | https://paperswithcode.com/paper/a-possibility-distribution-based-multi |
Repo | |
Framework | |
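The final ranking step uses TOPSIS. A minimal crisp-valued sketch of that step (the SVNS/IVFS aggregation that produces the decision matrix is beyond this sketch), assuming rows are suppliers and all columns are benefit criteria:

```python
import numpy as np

def topsis_rank(decision_matrix, weights):
    """Rank alternatives by relative closeness to the ideal solution."""
    X = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each criterion column, then apply criterion weights.
    V = X / np.linalg.norm(X, axis=0) * np.asarray(weights)
    ideal, anti_ideal = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to ideal point
    d_neg = np.linalg.norm(V - anti_ideal, axis=1)  # distance to anti-ideal
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness), closeness        # best supplier first
```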
BlockPuzzle - A Challenge in Physical Reasoning and Generalization for Robot Learning
Title | BlockPuzzle - A Challenge in Physical Reasoning and Generalization for Robot Learning |
Authors | Yixiu Zhao, Ziyin Liu |
Abstract | In this work we propose a novel task framework under which a variety of physical reasoning puzzles can be constructed using very simple rules. Under sparse reward settings, most of these tasks can be very challenging for a reinforcement learning agent to learn. We build several simple environments with this task framework in MuJoCo and OpenAI Gym and attempt to solve them. We are able to solve the environments by designing curricula to guide the agent in learning and by using imitation learning methods to transfer knowledge from a simpler environment. This is only a first step for the task framework, and further research on how to solve the harder tasks and transfer knowledge between tasks is needed. |
Tasks | Imitation Learning |
Published | 2018-11-30 |
URL | http://arxiv.org/abs/1812.00091v1 |
http://arxiv.org/pdf/1812.00091v1.pdf | |
PWC | https://paperswithcode.com/paper/blockpuzzle-a-challenge-in-physical-reasoning |
Repo | |
Framework | |
Recursive Neural Network Based Preordering for English-to-Japanese Machine Translation
Title | Recursive Neural Network Based Preordering for English-to-Japanese Machine Translation |
Authors | Yuki Kawara, Chenhui Chu, Yuki Arase |
Abstract | The word order between source and target languages significantly influences translation quality in machine translation. Preordering can effectively address this problem. Previous preordering methods require manual feature design, making language-dependent design costly. In this paper, we propose a preordering method with a recursive neural network that learns features from raw inputs. Experiments show that the proposed method achieves a gain in translation quality comparable to the state-of-the-art method, but without manual feature design. |
Tasks | Machine Translation |
Published | 2018-05-25 |
URL | http://arxiv.org/abs/1805.10187v1 |
http://arxiv.org/pdf/1805.10187v1.pdf | |
PWC | https://paperswithcode.com/paper/recursive-neural-network-based-preordering |
Repo | |
Framework | |
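A toy sketch of the idea behind a recursive preordering network: compose the two child vectors of each binary parse node and classify whether to swap the children. The composition function, classifier, and (random, untrained) parameters below are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                         # assumed embedding size
W = rng.normal(scale=0.1, size=(d, 2 * d))     # composition weights (untrained)
b = np.zeros(d)
w_cls = rng.normal(scale=0.1, size=d)          # swap/keep classifier (untrained)

def encode_and_order(node):
    """Recursively compose child vectors and decide whether to swap children.

    node: a leaf is (word_vector,); an internal node is (left, right).
    Returns (vector, reordered_tree).
    """
    if len(node) == 1:                         # leaf: return its embedding
        return node[0], node
    (v_l, t_l), (v_r, t_r) = encode_and_order(node[0]), encode_and_order(node[1])
    h = np.tanh(W @ np.concatenate([v_l, v_r]) + b)
    p_swap = 1.0 / (1.0 + np.exp(-w_cls @ h))  # sigmoid swap probability
    return h, ((t_r, t_l) if p_swap > 0.5 else (t_l, t_r))
```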
TV-regularized CT Reconstruction and Metal Artifact Reduction Using Inequality Constraints with Preconditioning
Title | TV-regularized CT Reconstruction and Metal Artifact Reduction Using Inequality Constraints with Preconditioning |
Authors | Clemens Schiffer |
Abstract | Total variation (TV) regularization is applied to X-ray computed tomography (CT) in an effort to reduce metal artifacts. In this novel model, Tikhonov regularization with an $L^2$ data fidelity term and total variation regularization is augmented by inequality constraints on sinogram data affected by metal, to model the errors caused by metal. The formulated problem is discretized and solved using the Chambolle-Pock algorithm. Faster convergence is achieved using preconditioning in a Douglas-Rachford splitting method as well as the Alternating Direction Method of Multipliers (ADMM). The methods are applied to real and synthetic data, demonstrating the feasibility of the model for reducing metal artifacts. Technical details of the CT data used and its processing are given in the appendix. |
Tasks | Computed Tomography (CT), Metal Artifact Reduction |
Published | 2018-10-08 |
URL | http://arxiv.org/abs/1810.03275v1 |
http://arxiv.org/pdf/1810.03275v1.pdf | |
PWC | https://paperswithcode.com/paper/tv-regularized-ct-reconstruction-and-metal |
Repo | |
Framework | |
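A sketch of the kind of constrained problem described, with notation assumed here rather than taken from the paper: let $A$ be the system matrix, $f$ the measured sinogram, and $\mathcal{M}$ the set of metal-affected sinogram entries; then

```latex
\min_{u}\; \tfrac{1}{2}\,\lVert Au - f \rVert_2^2 \;+\; \lambda\,\mathrm{TV}(u)
\quad \text{subject to} \quad \ell_i \le (Au)_i \le b_i \;\;\text{for } i \in \mathcal{M},
```

i.e., an $L^2$ data fit with TV regularization, augmented by inequality constraints that model the unreliability of metal-affected rays. Chambolle-Pock (and the preconditioned Douglas-Rachford and ADMM variants) handles the resulting nonsmooth saddle-point problem.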
Learning Short-Cut Connections for Object Counting
Title | Learning Short-Cut Connections for Object Counting |
Authors | Daniel Oñoro-Rubio, Mathias Niepert, Roberto J. López-Sastre |
Abstract | Object counting is an important task in computer vision due to its growing demand in applications such as traffic monitoring or surveillance. In this paper, we consider object counting as the joint learning problem of feature extraction and pixel-wise object density estimation with convolutional-deconvolutional networks. We introduce a novel counting model, named Gated U-Net (GU-Net). Specifically, we propose to enrich the U-Net architecture with the concept of learnable short-cut connections. Standard short-cut connections are connections between layers in deep neural networks that skip at least one intermediate layer. Instead of simply setting short-cut connections, we propose to learn these connections from data. Learned this way, the short-cuts work as gating units that optimize the flow of information between convolutional and deconvolutional layers in the U-Net architecture. We evaluate the introduced GU-Net architecture on three commonly used benchmark data sets for object counting. GU-Nets consistently outperform the base U-Net architecture and achieve state-of-the-art performance. |
Tasks | Density Estimation, Object Counting |
Published | 2018-05-08 |
URL | http://arxiv.org/abs/1805.02919v2 |
http://arxiv.org/pdf/1805.02919v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-short-cut-connections-for-object |
Repo | |
Framework | |
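A minimal PyTorch sketch of a learnable (gated) short-cut of the kind described; the 1x1-convolution gate below is an illustrative parameterization, not necessarily the exact GU-Net design:

```python
import torch
import torch.nn as nn

class GatedSkip(nn.Module):
    """Skip connection whose contribution is modulated by a learned gate."""
    def __init__(self, channels):
        super().__init__()
        # A 1x1 conv + sigmoid yields a per-pixel, per-channel gate in [0, 1].
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, encoder_feat, decoder_feat):
        gated = self.gate(encoder_feat) * encoder_feat  # pass or suppress features
        return torch.cat([gated, decoder_feat], dim=1)  # input to the decoder block
```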
Fully Convolutional Multi-scale Residual DenseNets for Cardiac Segmentation and Automated Cardiac Diagnosis using Ensemble of Classifiers
Title | Fully Convolutional Multi-scale Residual DenseNets for Cardiac Segmentation and Automated Cardiac Diagnosis using Ensemble of Classifiers |
Authors | Mahendra Khened, Varghese Alex Kollerathu, Ganapathy Krishnamurthi |
Abstract | Deep fully convolutional neural network (FCN) based architectures have shown great potential in medical image segmentation. However, such architectures usually have millions of parameters and an inadequate number of training samples, leading to over-fitting and poor generalization. In this paper, we present a novel, highly parameter- and memory-efficient FCN based architecture for medical image analysis. We propose a novel up-sampling path which incorporates long skip and short-cut connections to overcome the feature map explosion in FCN like architectures. In order to process the input images at multiple scales and viewpoints simultaneously, we propose to incorporate the Inception module’s parallel structures. We also propose a novel dual loss function whose weighting scheme allows us to combine the advantages of cross-entropy and Dice loss. We have validated our proposed network architecture on two publicly available datasets, namely: (i) Automated Cardiac Disease Diagnosis Challenge (ACDC-2017), (ii) Left Ventricular Segmentation Challenge (LV-2011). In the ACDC-2017 challenge our approach placed second in segmentation and first in the automated cardiac disease diagnosis task, with an accuracy of 100%. In the LV-2011 challenge our approach attained a 0.74 Jaccard index, which is so far the highest published result among fully automated algorithms. From the segmentation we extracted clinically relevant cardiac parameters and hand-crafted features reflecting clinical diagnostic analysis, and used them to train an ensemble system for cardiac disease classification. Our approach combines cardiac segmentation and disease diagnosis into a fully automated framework which is computationally efficient and hence has the potential to be incorporated into computer-aided diagnosis (CAD) tools for clinical application. |
Tasks | Cardiac Segmentation, Medical Image Segmentation, Semantic Segmentation |
Published | 2018-01-16 |
URL | http://arxiv.org/abs/1801.05173v1 |
http://arxiv.org/pdf/1801.05173v1.pdf | |
PWC | https://paperswithcode.com/paper/fully-convolutional-multi-scale-residual |
Repo | |
Framework | |
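A hedged sketch of a combined cross-entropy + soft Dice objective of the kind described (the paper's exact weighting scheme is not reproduced; `ce_weight` here is a plain mixing coefficient):

```python
import torch
import torch.nn.functional as F

def ce_dice_loss(logits, target, ce_weight=0.5, eps=1e-6):
    """Weighted sum of cross-entropy and soft Dice loss for segmentation.

    logits: (N, C, H, W) raw scores; target: (N, H, W) integer class labels.
    """
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1])
    one_hot = one_hot.permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
    dice = (2 * inter + eps) / (denom + eps)  # per-class soft Dice score
    return ce_weight * ce + (1 - ce_weight) * (1 - dice.mean())
```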
Adaptively Pruning Features for Boosted Decision Trees
Title | Adaptively Pruning Features for Boosted Decision Trees |
Authors | Maryam Aziz, Jesse Anderton, Javed Aslam |
Abstract | Boosted decision trees enjoy popularity in a variety of applications; however, for large-scale datasets, the cost of training a decision tree in each round can be prohibitively expensive. Inspired by ideas from the multi-armed bandit literature, we develop a highly efficient algorithm for computing exact greedy-optimal decision trees, outperforming the state-of-the-art Quick Boost method. We further develop a framework for deriving lower bounds on the problem that applies to a wide family of conceivable algorithms for the task (including our algorithm and Quick Boost), and we demonstrate empirically on a wide variety of data sets that our algorithm is near-optimal within this family of algorithms. We also derive a lower bound applicable to any algorithm solving the task, and we demonstrate that our algorithm empirically achieves performance close to this best-achievable lower bound. |
Tasks | |
Published | 2018-05-19 |
URL | http://arxiv.org/abs/1805.07592v1 |
http://arxiv.org/pdf/1805.07592v1.pdf | |
PWC | https://paperswithcode.com/paper/adaptively-pruning-features-for-boosted |
Repo | |
Framework | |
Evaluating Fairness Metrics in the Presence of Dataset Bias
Title | Evaluating Fairness Metrics in the Presence of Dataset Bias |
Authors | J. Henry Hinnefeld, Peter Cooman, Nat Mammo, Rupert Deese |
Abstract | Data-driven algorithms play a large role in decision making across a variety of industries. Increasingly, these algorithms are being used to make decisions that have significant ramifications for people’s social and economic well-being, e.g. in sentencing, loan approval, and policing. Amid the proliferation of such systems there is a growing concern about their potential discriminatory impact. In particular, machine learning systems which are trained on biased data have the potential to learn and perpetuate those biases. A central challenge for practitioners is thus to determine whether their models display discriminatory bias. Here we present a case study in which we frame the issue of bias detection as a causal inference problem with observational data. We enumerate two main causes of bias, sampling bias and label bias, and we investigate the abilities of six different fairness metrics to detect each bias type. Based on these investigations, we propose a set of best practice guidelines to select the fairness metric that is most likely to detect bias if it is present. Additionally, we aim to identify the conditions in which certain fairness metrics may fail to detect bias and instead give practitioners a false belief that their biased model is making fair decisions. |
Tasks | Causal Inference, Decision Making |
Published | 2018-09-24 |
URL | http://arxiv.org/abs/1809.09245v1 |
http://arxiv.org/pdf/1809.09245v1.pdf | |
PWC | https://paperswithcode.com/paper/evaluating-fairness-metrics-in-the-presence |
Repo | |
Framework | |
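For concreteness, a sketch of two common observational fairness metrics of the kind such a study compares (the abstract does not enumerate the paper's six metrics, so these are illustrative):

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between two groups (0/1 arrays)."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates between two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(1) - tpr(0))
```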
Predicting Tactical Solutions to Operational Planning Problems under Imperfect Information
Title | Predicting Tactical Solutions to Operational Planning Problems under Imperfect Information |
Authors | Eric Larsen, Sébastien Lachapelle, Yoshua Bengio, Emma Frejinger, Simon Lacoste-Julien, Andrea Lodi |
Abstract | This paper offers a methodological contribution at the intersection of machine learning and operations research. Namely, we propose a methodology to quickly predict tactical solutions to a given operational problem. In this context, the tactical solution is less detailed than the operational one but it has to be computed in very short time and under imperfect information. The problem is of importance in various applications where tactical and operational planning problems are interrelated and information about the operational problem is revealed over time. This is for instance the case in certain capacity planning and demand management systems. We formulate the problem as a two-stage optimal prediction stochastic program whose solution we predict with a supervised machine learning algorithm. The training data set consists of a large number of deterministic (second stage) problems generated by controlled probabilistic sampling. The labels are computed based on solutions to the deterministic problems (solved independently and offline) employing appropriate aggregation and subselection methods to address uncertainty. Results on our motivating application in load planning for rail transportation show that deep learning algorithms produce highly accurate predictions in very short computing time (milliseconds or less). The prediction accuracy is comparable to solutions computed by sample average approximation of the stochastic program. |
Tasks | Stochastic Optimization |
Published | 2018-07-31 |
URL | http://arxiv.org/abs/1807.11876v3 |
http://arxiv.org/pdf/1807.11876v3.pdf | |
PWC | https://paperswithcode.com/paper/predicting-solution-summaries-to-integer |
Repo | |
Framework | |
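Schematically, and in generic notation assumed here rather than taken from the paper, the target object is the first-stage solution of a two-stage stochastic program, which the supervised model learns to predict directly from the information available at tactical-planning time:

```latex
\min_{x}\; c^{\top} x \;+\; \mathbb{E}_{\xi}\big[\, Q(x, \xi) \,\big],
\qquad
Q(x, \xi) \;=\; \min_{y \in Y(x,\, \xi)}\; q(\xi)^{\top} y .
```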
Unsupervised Despeckling
Title | Unsupervised Despeckling |
Authors | Deepak Mishra, Santanu Chaudhury, Mukul Sarkar, Arvinder Singh Soin |
Abstract | The contrast and quality of ultrasound images are adversely affected by the excessive presence of speckle. However, being an inherent imaging property, speckle helps in tissue characterization and tracking. Thus, despeckling of ultrasound images requires reducing the extent of speckle without any oversmoothing. In this letter, we aim to address the despeckling problem using an unsupervised deep adversarial approach. A despeckling residual neural network (DRNN) is trained with an adversarial loss imposed by a discriminator. The discriminator tries to differentiate between the despeckled images generated by the DRNN and a set of high-quality images. Further, to prevent the DRNN from oversmoothing, a structural loss term is used along with the adversarial loss. Experimental evaluations show that the proposed DRNN outperforms the state-of-the-art despeckling approaches. |
Tasks | |
Published | 2018-01-10 |
URL | http://arxiv.org/abs/1801.03318v1 |
http://arxiv.org/pdf/1801.03318v1.pdf | |
PWC | https://paperswithcode.com/paper/unsupervised-despeckling |
Repo | |
Framework | |
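In generic form (the coefficient and the exact structural term are assumptions, not taken from the paper), the training objective described combines an adversarial term with a structural one:

```latex
\mathcal{L}_{G} \;=\; \mathcal{L}_{\mathrm{adv}}(G, D) \;+\; \lambda\, \mathcal{L}_{\mathrm{struct}}\big(x, G(x)\big),
```

where $G$ is the despeckling residual network, $D$ is the discriminator trained against the set of high-quality images, and $\mathcal{L}_{\mathrm{struct}}$ penalizes structural deviation of the output from the input to prevent oversmoothing.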
A Critical Investigation of Deep Reinforcement Learning for Navigation
Title | A Critical Investigation of Deep Reinforcement Learning for Navigation |
Authors | Vikas Dhiman, Shurjo Banerjee, Brent Griffin, Jeffrey M Siskind, Jason J Corso |
Abstract | The navigation problem is classically approached in two steps: an exploration step, where map-information about the environment is gathered; and an exploitation step, where this information is used to navigate efficiently. Deep reinforcement learning (DRL) algorithms, alternatively, approach the problem of navigation in an end-to-end fashion. Inspired by the classical approach, we ask whether DRL algorithms are able to inherently explore, gather and exploit map-information over the course of navigation. We build upon the work of Mirowski et al. [2017] and introduce a systematic suite of experiments that vary three parameters: the agent’s starting location, the agent’s target location, and the maze structure. We choose evaluation metrics that explicitly measure the algorithm’s ability to gather and exploit map-information. Our experiments show that when trained and tested on the same maps, the algorithm successfully gathers and exploits map-information. However, when trained and tested on different sets of maps, the algorithm fails to transfer the ability to gather and exploit map-information to unseen maps. Furthermore, we find that when the goal location is randomized and the map is kept static, the algorithm is able to gather and exploit map-information but the exploitation is far from optimal. We open-source our experimental suite in the hopes that it serves as a framework for the comparison of future algorithms and leads to the discovery of robust alternatives to classical navigation methods. |
Tasks | |
Published | 2018-02-07 |
URL | http://arxiv.org/abs/1802.02274v2 |
http://arxiv.org/pdf/1802.02274v2.pdf | |
PWC | https://paperswithcode.com/paper/a-critical-investigation-of-deep |
Repo | |
Framework | |