Paper Group ANR 1702
Robust Regression for Safe Exploration in Control. The Six Fronts of the Generative Adversarial Networks. Tools for Mathematical Ludology. General Probabilistic Surface Optimization and Log Density Estimation. Tensor Recovery from Noisy and Multi-Level Quantized Measurements. Predicting Discourse Structure using Distant Supervision from Sentiment. …
Robust Regression for Safe Exploration in Control
Title | Robust Regression for Safe Exploration in Control |
Authors | Anqi Liu, Guanya Shi, Soon-Jo Chung, Anima Anandkumar, Yisong Yue |
Abstract | We study the problem of safe learning and exploration in sequential control problems. The goal is to safely collect data samples from an operating environment to learn an optimal controller. A central challenge in this setting is how to quantify uncertainty in order to choose provably-safe actions that allow us to collect useful data and reduce uncertainty, thereby achieving both improved safety and optimality. To address this challenge, we present a deep robust regression model that is trained to directly predict the uncertainty bounds for safe exploration. We then show how to integrate our robust regression approach with model-based control methods by learning a dynamic model with robustness bounds. We derive generalization bounds under domain shifts for learning and connect them with safety and stability bounds in control. We demonstrate empirically that our robust regression approach can outperform conventional Gaussian process (GP) based safe exploration in settings where it is difficult to specify a good GP prior. |
Tasks | Safe Exploration |
Published | 2019-06-13 |
URL | https://arxiv.org/abs/1906.05819v1 |
https://arxiv.org/pdf/1906.05819v1.pdf | |
PWC | https://paperswithcode.com/paper/robust-regression-for-safe-exploration-in |
Repo | |
Framework | |
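As a concrete illustration of the idea in the abstract, here is a minimal sketch of bound-aware safe action selection. It is not the authors' method; `model`, the `predict_with_bound` interface, and the interval safe set are assumptions for illustration:

```python
# Hedged sketch: certify an action only if the *worst-case* prediction under
# the learned uncertainty bound stays inside the safe set, then explore where
# the bound (i.e., what remains to be learned) is largest.
import numpy as np

def predict_with_bound(model, state, action):
    """Assumed interface: the robust regressor returns a mean prediction and
    a bound such that |true - mean| <= bound with high probability."""
    mean, bound = model(state, action)
    return mean, bound

def choose_safe_action(model, state, candidate_actions, is_safe):
    certified = []
    for a in candidate_actions:
        mean, bound = predict_with_bound(model, state, a)
        # For an interval safe set, checking both corners of the
        # uncertainty box certifies every state within the bound.
        if is_safe(mean - bound) and is_safe(mean + bound):
            certified.append((float(np.sum(bound)), a))
    if not certified:
        return None  # fall back to a known-safe backup policy
    return max(certified, key=lambda t: t[0])[1]  # most informative safe action
```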
The Six Fronts of the Generative Adversarial Networks
Title | The Six Fronts of the Generative Adversarial Networks |
Authors | Alceu Bissoto, Eduardo Valle, Sandra Avila |
Abstract | Generative Adversarial Networks fostered a newfound interest in generative models, resulting in a swelling wave of new works that new-coming researchers may find formidable to surf. In this paper, we intend to help those researchers, by splitting that incoming wave into six “fronts”: Architectural Contributions, Conditional Techniques, Normalization and Constraint Contributions, Loss Functions, Image-to-image Translations, and Validation Metrics. The division into fronts organizes the literature into approachable blocks, ultimately communicating to the reader how the area is evolving. Previous surveys in the area, which this work also tabulates, focus on a few of those fronts, leaving a gap that we propose to fill with a more integrated, comprehensive overview. Here, instead of an exhaustive survey, we opt for a straightforward review: our target is to be an entry point to this vast literature, and also to update experienced researchers on the newest techniques. |
Tasks | |
Published | 2019-10-29 |
URL | https://arxiv.org/abs/1910.13076v1 |
https://arxiv.org/pdf/1910.13076v1.pdf | |
PWC | https://paperswithcode.com/paper/the-six-fronts-of-the-generative-adversarial |
Repo | |
Framework | |
Tools for Mathematical Ludology
Title | Tools for Mathematical Ludology |
Authors | Paul Riggins, David McPherson |
Abstract | We propose the study of mathematical ludology, which aims to formally interrogate questions of interest to game studies and game design in particular. The goal is to extend our mathematical understanding of complex games beyond decision-making—the typical focus of game theory and artificial intelligence efforts—to explore other aspects such as game mechanics, structure, relationships between games, and connections between game rules and user-interfaces, as well as exploring related gameplay phenomena and typical player behavior. In this paper, we build a basic foundation for this line of study by developing a hierarchy of game descriptions, mathematical formalism to compactly describe complex discrete games, and equivalence relations on the space of game systems. |
Tasks | Decision Making |
Published | 2019-12-06 |
URL | https://arxiv.org/abs/1912.03295v2 |
https://arxiv.org/pdf/1912.03295v2.pdf | |
PWC | https://paperswithcode.com/paper/tools-for-mathematical-ludology |
Repo | |
Framework | |
General Probabilistic Surface Optimization and Log Density Estimation
Title | General Probabilistic Surface Optimization and Log Density Estimation |
Authors | Dmitry Kopitkov, Vadim Indelman |
Abstract | In this paper we contribute a novel algorithm family, which generalizes many unsupervised techniques, including unnormalized and energy models, and allows one to infer different statistical modalities (e.g. data likelihood and the ratio between densities) from data samples. The proposed unsupervised technique, named Probabilistic Surface Optimization (PSO), views a neural network (NN) as a flexible surface which can be pushed according to loss-specific virtual stochastic forces, where a dynamical equilibrium is achieved when the point-wise forces on the surface become equal. Concretely, the surface is pushed up and down at points sampled from two different distributions, with the overall up and down forces becoming functions of these two distribution densities and of the force intensity magnitudes defined by the loss of a particular PSO instance. The eventual force equilibrium upon convergence forces the NN to equal various statistical functions, such as the data density, depending on the magnitude functions used. Furthermore, this dynamical-statistical equilibrium is extremely intuitive and useful, providing many implications and possible usages in probabilistic inference. Further, we connect PSO to numerous existing statistical works which are also PSO instances, and derive new PSO-based inference methods as a demonstration of PSO's exceptional usability. Likewise, based on insights from the virtual-force perspective, we analyse PSO stability and propose new ways to improve it. Finally, we present new instances of PSO, termed PSO-LDE, for data density estimation on a logarithmic scale, and also provide a new NN block-diagonal architecture for increased surface flexibility, which significantly improves estimation accuracy. Both PSO-LDE and the new architecture are combined together as a new density estimation technique. We demonstrate this technique to be superior to state-of-the-art baselines. |
Tasks | Density Estimation |
Published | 2019-03-25 |
URL | https://arxiv.org/abs/1903.10567v2 |
https://arxiv.org/pdf/1903.10567v2.pdf | |
PWC | https://paperswithcode.com/paper/general-probabilistic-surface-optimization |
Repo | |
Framework | |
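To make the force picture concrete, below is a minimal sketch of one classical PSO instance (the logistic loss), assuming the up/down force view from the abstract: the surface f is pushed up at samples from P and down at samples from Q, and at equilibrium f(x) = log p(x)/q(x), the log density ratio. The toy architecture and training loop are illustrative assumptions:

```python
# Hedged sketch of a PSO instance: logistic loss pushes f up at P-samples
# and down at Q-samples; the equilibrium surface is the log density ratio.
import torch
import torch.nn as nn
import torch.nn.functional as F

f = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # the surface
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

def pso_step(x_up, x_down):
    """x_up ~ P (pushed up), x_down ~ Q (pushed down)."""
    # log(1 - sigmoid(z)) == logsigmoid(-z), so both terms are "forces" on f.
    loss = (-F.logsigmoid(f(x_up)).mean()
            - F.logsigmoid(-f(x_down)).mean())
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage: P = N(0, I), Q = N(1, I) in 2-D; f converges toward log p/q.
for _ in range(2000):
    pso_step(torch.randn(256, 2), torch.randn(256, 2) + 1.0)
```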
Tensor Recovery from Noisy and Multi-Level Quantized Measurements
Title | Tensor Recovery from Noisy and Multi-Level Quantized Measurements |
Authors | Ren Wang, Meng Wang, Jinjun Xiong |
Abstract | Higher-order tensors can represent scores in a rating system, frames in a video, and images of the same subject. In practice, the measurements are often highly quantized due to the sampling strategies or the quality of devices. Existing works on tensor recovery have focused on data losses and random noises. Only a few works consider tensor recovery from quantized measurements, and those are restricted to binary measurements. This paper, for the first time, addresses the problem of tensor recovery from multi-level quantized measurements. Leveraging the low-rank property of the tensor, this paper proposes a nonconvex optimization problem for tensor recovery. We provide a theoretical upper bound of the recovery error, which diminishes to zero when the sizes of dimensions increase to infinity. Our error bound significantly improves over the existing results in one-bit tensor recovery and quantized matrix recovery. A tensor-based alternating proximal gradient descent algorithm with a convergence guarantee is proposed to solve the nonconvex problem. Our recovery method can handle data losses and does not need knowledge of the quantization rule. The method is validated on synthetic data, image datasets, and music recommender datasets. |
Tasks | Quantization |
Published | 2019-12-05 |
URL | https://arxiv.org/abs/1912.02588v1 |
https://arxiv.org/pdf/1912.02588v1.pdf | |
PWC | https://paperswithcode.com/paper/tensor-recovery-from-noisy-and-multi-level |
Repo | |
Framework | |
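As a sketch of the measurement model only (the paper's alternating proximal gradient algorithm is not reproduced here), the following assumes Gaussian noise and a fixed 4-level quantizer: each entry of the low-rank tensor is noisily observed and mapped to a level, and recovery would minimize the negative log-likelihood of the observed levels. The bin edges and noise scale are illustrative assumptions:

```python
# Hedged sketch of multi-level quantized measurements and their likelihood.
import numpy as np
from scipy.stats import norm

bins = np.array([-np.inf, -0.5, 0.0, 0.5, np.inf])   # assumed 4-level quantizer

def quantize(x, sigma=0.1, rng=np.random.default_rng(0)):
    """Noisy observation of x mapped to a quantization level in {0,1,2,3}."""
    return np.digitize(x + sigma * rng.standard_normal(x.shape), bins[1:-1])

def neg_log_likelihood(x_hat, levels, sigma=0.1):
    """-log P(observed levels | underlying values x_hat); recovery would
    minimize this over a low-rank tensor factorization of x_hat."""
    upper = norm.cdf((bins[levels + 1] - x_hat) / sigma)
    lower = norm.cdf((bins[levels] - x_hat) / sigma)
    return -np.log(np.clip(upper - lower, 1e-12, None)).sum()
```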
Predicting Discourse Structure using Distant Supervision from Sentiment
Title | Predicting Discourse Structure using Distant Supervision from Sentiment |
Authors | Patrick Huber, Giuseppe Carenini |
Abstract | Discourse parsing has not yet been able to take full advantage of the neural NLP revolution, mostly due to the lack of annotated datasets. We propose a novel approach that uses distant supervision on an auxiliary task (sentiment classification) to generate abundant data for RST-style discourse structure prediction. Our approach combines a neural variant of multiple-instance learning, using document-level supervision, with an optimal CKY-style tree generation algorithm. In a series of experiments, we train a discourse parser (for structure prediction only) on our automatically generated dataset and compare it with parsers trained on human-annotated corpora (the news-domain RST-DT and an instructional-domain corpus). Results indicate that while our parser does not yet match the performance of a parser trained and tested on the same dataset (intra-domain), it performs remarkably well on the much more difficult and arguably more useful task of inter-domain discourse structure prediction, where the parser is trained on one domain and tested on another. |
Tasks | Multiple Instance Learning, Sentiment Analysis |
Published | 2019-10-30 |
URL | https://arxiv.org/abs/1910.14176v1 |
https://arxiv.org/pdf/1910.14176v1.pdf | |
PWC | https://paperswithcode.com/paper/predicting-discourse-structure-using-distant |
Repo | |
Framework | |
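The tree-building step can be illustrated with a generic CKY-style dynamic program (a sketch, not the authors' exact algorithm): given a model score for every candidate span, it returns the binary tree over n leaves that maximizes the total score of its spans.

```python
# Hedged sketch: CKY-style search for the best-scoring binary tree.
def best_tree(n, span_score):
    """span_score(i, j): model score for grouping leaves i..j-1 together."""
    best, split = {}, {}
    for i in range(n):
        best[(i, i + 1)] = 0.0                     # single leaves carry no score
    for width in range(2, n + 1):
        for i in range(0, n - width + 1):
            j = i + width
            k_best = max(range(i + 1, j),          # best split point
                         key=lambda k: best[(i, k)] + best[(k, j)])
            best[(i, j)] = span_score(i, j) + best[(i, k_best)] + best[(k_best, j)]
            split[(i, j)] = k_best

    def build(i, j):                               # backtrace into a nested tuple
        if j - i == 1:
            return i                               # leaf index
        k = split[(i, j)]
        return (build(i, k), build(k, j))
    return build(0, n)

# e.g. best_tree(4, lambda i, j: 1.0 if (i, j) == (0, 2) else 0.5)
```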
Towards Oracle Knowledge Distillation with Neural Architecture Search
Title | Towards Oracle Knowledge Distillation with Neural Architecture Search |
Authors | Minsoo Kang, Jonghwan Mun, Bohyung Han |
Abstract | We present a novel framework of knowledge distillation that is capable of learning powerful and efficient student models from ensemble teacher networks. Our approach addresses the inherent model capacity issue between teacher and student and aims to maximize benefit from teacher models during distillation by reducing their capacity gap. Specifically, we employ a neural architecture search technique to augment useful structures and operations, where the searched network is appropriate for knowledge distillation towards student models and free from sacrificing its performance by fixing the network capacity. We also introduce an oracle knowledge distillation loss to facilitate model search and distillation using an ensemble-based teacher model, where a student network is learned to imitate oracle performance of the teacher. We perform extensive experiments on the image classification datasets—CIFAR-100 and TinyImageNet—using various networks. We also show that searching for a new student model is effective in both accuracy and memory size and that the searched models often outperform their teacher models thanks to neural architecture search with oracle knowledge distillation. |
Tasks | Image Classification, Neural Architecture Search |
Published | 2019-11-29 |
URL | https://arxiv.org/abs/1911.13019v1 |
https://arxiv.org/pdf/1911.13019v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-oracle-knowledge-distillation-with |
Repo | |
Framework | |
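A minimal sketch of an oracle-style distillation loss follows, under the assumption that the "oracle" picks, per example, the ensemble member most confident in the true label, and the student matches that teacher's softened distribution; the paper's exact loss may differ:

```python
# Hedged sketch: per-example oracle teacher selection + softened KL matching.
import torch
import torch.nn.functional as F

def oracle_kd_loss(student_logits, teacher_logits_list, labels, T=4.0):
    """teacher_logits_list: list of [B, C] logit tensors, one per teacher."""
    teachers = torch.stack(teacher_logits_list)          # [M, B, C]
    probs = teachers.softmax(dim=-1)
    # Confidence each teacher assigns to the true label, shape [M, B].
    idx = labels.view(1, -1, 1).expand(teachers.size(0), -1, 1)
    conf_true = probs.gather(-1, idx).squeeze(-1)
    # Oracle: for each sample, the logits of the most confident teacher.
    oracle = teachers[conf_true.argmax(0), torch.arange(labels.size(0))]
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(oracle / T, dim=-1),
                    reduction="batchmean") * T * T
```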
EDAS: Efficient and Differentiable Architecture Search
Title | EDAS: Efficient and Differentiable Architecture Search |
Authors | Hyeong Gwon Hong, Pyunghwan Ahn, Junmo Kim |
Abstract | Transferrable neural architecture search can be viewed as a binary optimization problem where a single optimal path should be selected among the candidate paths in each edge within the repeated cell block of the directed acyclic graph form. Recently, the field of differentiable architecture search has attempted to relax the search problem continuously using a one-shot network that combines all the candidate paths in the search space. However, when the one-shot network is pruned to a model in the discrete architecture space by the derivation algorithm, performance degrades significantly, to almost that of a random estimator. To reduce the quantization error from the heavy use of relaxation, we sample only a single edge to relax the corresponding variable and clamp the variables in the other edges to zero or one. With this method there is no performance drop after pruning the one-shot network by the derivation algorithm, because the discrete nature of the optimization variables is preserved during the search. Furthermore, minimizing the degree of relaxation allows searching in a deeper network to discover better-performing architectures with a remarkable search-cost reduction (0.125 GPU days) compared to previous methods. By adding several regularization methods that help explore the search space, we obtain networks with notable performance on CIFAR-10, CIFAR-100, and ImageNet. |
Tasks | Neural Architecture Search, Quantization |
Published | 2019-12-03 |
URL | https://arxiv.org/abs/1912.01237v2 |
https://arxiv.org/pdf/1912.01237v2.pdf | |
PWC | https://paperswithcode.com/paper/edas-efficient-and-differentiable |
Repo | |
Framework | |
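The single-edge relaxation can be sketched as follows (names, shapes, and the edge/op counts are assumptions): at each search step one randomly chosen edge mixes its candidate operations with softmax weights, while every other edge is clamped to a one-hot at its current argmax, preserving the discrete nature of those variables during the search.

```python
# Hedged sketch: relax one sampled edge, clamp the rest to one-hot choices.
import random
import torch

def edge_weights(alphas, relaxed_edge):
    """alphas: dict edge -> [num_ops] architecture parameters."""
    weights = {}
    for edge, a in alphas.items():
        if edge == relaxed_edge:
            weights[edge] = torch.softmax(a, dim=0)   # continuous, trainable
        else:
            hard = torch.zeros_like(a)
            hard[a.argmax()] = 1.0                     # clamped one-hot
            weights[edge] = hard.detach()
    return weights

alphas = {e: torch.zeros(8, requires_grad=True) for e in range(14)}
relaxed = random.choice(list(alphas))
w = edge_weights(alphas, relaxed)   # mix each edge's candidate ops with these
```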
Imitating by generating: deep generative models for imitation of interactive tasks
Title | Imitating by generating: deep generative models for imitation of interactive tasks |
Authors | Judith Bütepage, Ali Ghadirzadeh, Özge Öztimur Karadag, Mårten Björkman, Danica Kragic |
Abstract | To coordinate actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood mostly by imitation learning and active engagement with a skilled partner. These skills require the ability to predict and adapt to one's partner during an interaction. In this work we explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of a number of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. To test these ideas, we collect human-human and human-robot interaction data for four interactive tasks: “hand-shake”, “hand-wave”, “parachute fist-bump”, and “rocket fist-bump”. We demonstrate experimentally the importance of predictive and adaptive components, as well as of low-level abstractions, for successfully learning to imitate human behavior in interactive social tasks. |
Tasks | Imitation Learning, motion prediction |
Published | 2019-10-14 |
URL | https://arxiv.org/abs/1910.06031v1 |
https://arxiv.org/pdf/1910.06031v1.pdf | |
PWC | https://paperswithcode.com/paper/imitating-by-generating-deep-generative |
Repo | |
Framework | |
Search Algorithms for Mastermind
Title | Search Algorithms for Mastermind |
Authors | Anthony D. Rhodes |
Abstract | This paper presents two novel approaches to solving the classic board game Mastermind: a variant of simulated annealing (SA) and a technique we term maximum expected reduction in consistency (MERC). In addition, we compare search results for these algorithms against two baseline search methods: a random, uninformed search and the method of minimizing maximum query partition sets as originally developed by Donald Knuth and Peter Norvig. |
Tasks | |
Published | 2019-08-16 |
URL | https://arxiv.org/abs/1908.06183v1 |
https://arxiv.org/pdf/1908.06183v1.pdf | |
PWC | https://paperswithcode.com/paper/search-algorithms-for-mastermind |
Repo | |
Framework | |
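The baseline the abstract cites is easy to sketch: Knuth-style minimax scores each candidate guess by the size of its largest feedback partition over the remaining consistent codes and plays the minimizer. The sketch below restricts guesses to consistent codes for brevity (Knuth's full method considers all codes):

```python
# Hedged sketch of minimax partition-set guessing for Mastermind.
from collections import Counter
from itertools import product

COLORS, PEGS = 6, 4

def feedback(guess, code):
    """Return (black, white): exact matches and color-only matches."""
    black = sum(g == c for g, c in zip(guess, code))
    common = sum(min(guess.count(v), code.count(v)) for v in range(COLORS))
    return black, common - black

def knuth_guess(consistent):
    """Pick the guess whose worst-case feedback partition is smallest."""
    def worst(guess):
        return max(Counter(feedback(guess, code) for code in consistent).values())
    return min(consistent, key=worst)

codes = list(product(range(COLORS), repeat=PEGS))
# After receiving feedback fb for guess g, filter the candidate set:
#   codes = [c for c in codes if feedback(g, c) == fb]
```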
Word Sense Disambiguation using Diffusion Kernel PCA
Title | Word Sense Disambiguation using Diffusion Kernel PCA |
Authors | Bilge Sipal, Ozcan Sari, Asena Teke, Nurullah Demirci |
Abstract | One of the major problems in natural language processing (NLP) is word sense disambiguation (WSD): the task of computationally identifying the right sense of a polysemous word based on its context. Resolving the WSD problem boosts the accuracy of many NLP algorithms, such as text classification and machine translation. In this paper, we introduce a new supervised algorithm for WSD, called Diffusion Kernel PCA (DKPCA), that is based on Kernel PCA and the Semantic Diffusion Kernel. DKPCA captures the semantic similarities between terms, which enables us to perform feature extraction and dimensionality reduction guided by those similarities within the algorithm. Our empirical results on SensEval data demonstrate that DKPCA achieves accuracy higher than or very close to SVM and KPCA with various well-known kernels when the labeled data ratio is meager. Considering the scarcity of labeled data, whereas large quantities of unlabeled textual data are easily accessible, these are highly encouraging first results for developing DKPCA further. |
Tasks | Dimensionality Reduction, Machine Translation, Text Classification, Word Sense Disambiguation |
Published | 2019-07-21 |
URL | https://arxiv.org/abs/1908.01832v1 |
https://arxiv.org/pdf/1908.01832v1.pdf | |
PWC | https://paperswithcode.com/paper/word-sense-disambiguation-using-diffusion |
Repo | |
Framework | |
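A minimal sketch of the two ingredients named in the abstract, under assumed notation: a semantic diffusion kernel built as a matrix exponential over term relations (the standard construction K = X exp(λ XᵀX) Xᵀ), followed by ordinary kernel PCA on the doubly centered Gram matrix. The λ value and the count-matrix setup are illustrative assumptions:

```python
# Hedged sketch: semantic diffusion kernel + kernel PCA.
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(X, lam=0.5):
    """X: [docs, terms] count matrix; diffuse the linear kernel over terms."""
    S = expm(lam * (X.T @ X))           # semantic smoothing of term relations
    return X @ S @ X.T                  # doc-doc kernel under diffused terms

def kernel_pca(K, k=2):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                      # double centering in feature space
    vals, vecs = np.linalg.eigh(Kc)     # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:k]
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))  # projections
```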
Korean-to-Chinese Machine Translation using Chinese Character as Pivot Clue
Title | Korean-to-Chinese Machine Translation using Chinese Character as Pivot Clue |
Authors | Jeonghyeok Park, Hai Zhao |
Abstract | Korean-Chinese is a low-resource language pair, but Korean and Chinese have a lot in common in terms of vocabulary. Sino-Korean words, which can be converted into corresponding Chinese characters, account for more than fifty percent of the entire Korean vocabulary. Motivated by this, we propose a simple, linguistically motivated solution to improve the performance of Korean-to-Chinese neural machine translation by using this common vocabulary. We adopt Chinese characters as a translation pivot by converting Sino-Korean words in Korean sentences to Chinese characters, and then train the machine translation model with the converted Korean sentences as source sentences. The experimental results on Korean-to-Chinese translation demonstrate that models with the proposed method improve translation quality by up to 1.5 BLEU points over the baseline models. |
Tasks | Machine Translation |
Published | 2019-11-25 |
URL | https://arxiv.org/abs/1911.11008v1 |
https://arxiv.org/pdf/1911.11008v1.pdf | |
PWC | https://paperswithcode.com/paper/korean-to-chinese-machine-translation-using |
Repo | |
Framework | |
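The pivoting preprocessing is easy to illustrate. The dictionary entries below are hypothetical toy examples; the actual system would use a full Sino-Korean lexicon and proper tokenization:

```python
# Hedged sketch: replace Sino-Korean words with their Chinese-character
# (Hanja) forms before feeding the sentence to the Korean-to-Chinese NMT model.
SINO_KOREAN_TO_HANJA = {      # hypothetical entries for illustration
    "학교": "學校",            # "school"
    "도서관": "圖書館",        # "library"
}

def pivot_source(sentence):
    tokens = sentence.split()  # toy whitespace tokenization
    return " ".join(SINO_KOREAN_TO_HANJA.get(t, t) for t in tokens)

# pivot_source("나는 학교 에 간다")  ->  "나는 學校 에 간다"
```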
AdaFair: Cumulative Fairness Adaptive Boosting
Title | AdaFair: Cumulative Fairness Adaptive Boosting |
Authors | Vasileios Iosifidis, Eirini Ntoutsi |
Abstract | The widespread use of ML-based decision making in domains with high societal impact, such as recidivism prediction, job hiring, and loan credit, has raised a lot of concerns regarding potential discrimination. In particular, in certain cases it has been observed that ML algorithms can provide different decisions based on sensitive attributes such as gender or race, and can therefore lead to discrimination. Although several fairness-aware ML approaches have been proposed, their focus has been largely on preserving overall classification accuracy while improving fairness in predictions for both protected and non-protected groups (defined based on the sensitive attribute(s)). Overall accuracy, however, is not a good indicator of performance in the presence of class imbalance, as it is biased towards the majority class. As we show in our experiments, many of the fairness-related datasets suffer from class imbalance, and therefore tackling fairness requires also tackling the imbalance problem. To this end, we propose AdaFair, a fairness-aware classifier based on AdaBoost that further updates the weights of the instances in each boosting round, taking into account a cumulative notion of fairness based upon all current ensemble members, while explicitly tackling class imbalance by optimizing the number of ensemble members for balanced classification error. Our experiments show that our approach can achieve parity in true positive and true negative rates for both protected and non-protected groups, while it significantly outperforms existing fairness-aware methods by up to 25% in terms of balanced error. |
Tasks | Decision Making |
Published | 2019-09-17 |
URL | https://arxiv.org/abs/1909.08982v1 |
https://arxiv.org/pdf/1909.08982v1.pdf | |
PWC | https://paperswithcode.com/paper/adafair-cumulative-fairness-adaptive-boosting |
Repo | |
Framework | |
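A heavily simplified sketch of one fairness-aware boosting round follows. It is not the full AdaFair update: the gap measure (TPR only) and the one-sided reweighting of the protected group are simplifications for illustration. After the usual AdaBoost step, instances of the protected group that the cumulative ensemble still misclassifies get their weights boosted in proportion to the observed TPR gap:

```python
# Hedged sketch: AdaBoost round + cumulative-fairness reweighting.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fair_boost_round(X, y, protected, w, ensemble_pred):
    """One round. y in {-1,+1}; protected: boolean group mask;
    ensemble_pred: current cumulative ensemble predictions in {-1,+1}."""
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.clip(np.sum(w[pred != y]) / np.sum(w), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)
    w = w * np.exp(-alpha * y * pred)               # standard AdaBoost step

    def tpr(mask):                                  # group true-positive rate
        pos = mask & (y == 1)
        return np.mean(ensemble_pred[pos] == 1) if pos.any() else 0.0

    gap = abs(tpr(~protected) - tpr(protected))     # cumulative fairness signal
    w[(ensemble_pred != y) & protected] *= 1 + gap  # extra boost (one-sided here)
    return stump, alpha, w / np.sum(w)
```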
A Bayesian Solution to the M-Bias Problem
Title | A Bayesian Solution to the M-Bias Problem |
Authors | David Rohde |
Abstract | It is common practice, when using regression-type models to infer causal effects, that inferring the correct causal relationship requires that extra covariates be included or “adjusted for”. Without performing this adjustment, erroneous causal effects can be inferred. Given this phenomenon, it is common practice to include as many covariates as possible; however, such advice comes unstuck in the presence of M-bias. M-bias is a problem in causal inference where the correct estimation of treatment effects requires that certain variables are not adjusted for, i.e. are simply omitted from the model. This issue caused a storm of controversy in 2009, when Rubin, Pearl, and others disagreed about whether it could be problematic to include additional variables in models when inferring causal effects. This paper makes two contributions to this issue. First, we provide a Bayesian solution to the M-bias problem. The solution replicates Pearl’s solution but, consistent with Rubin’s advice, we condition on all variables. Second, the fact that we are able to offer a solution to this problem in Bayesian terms shows that it is indeed possible to represent causal relationships within the Bayesian paradigm, albeit in an extended space. We make several remarks on the similarities and differences between causal graphical models, which implement the do-calculus, and probabilistic graphical models, which enable Bayesian statistics. We hope this work will stimulate more research on unifying Pearl’s causal calculus, using causal graphical models, with traditional Bayesian statistics and probabilistic graphical models. |
Tasks | Causal Inference |
Published | 2019-06-17 |
URL | https://arxiv.org/abs/1906.07136v1 |
https://arxiv.org/pdf/1906.07136v1.pdf | |
PWC | https://paperswithcode.com/paper/a-bayesian-solution-to-the-m-bias-problem |
Repo | |
Framework | |
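The M-bias phenomenon itself is easy to reproduce in simulation. In the M-structure X ← U1 → M ← U2 → Y, with a direct effect X → Y of 1.0 and U1, U2 unobserved, regressing Y on X alone recovers the effect, while additionally "adjusting" for the collider M opens the back-door path and biases the estimate (the coefficients below are illustrative choices):

```python
# Hedged sketch: adjusting for the collider M in an M-structure induces bias.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u1, u2 = rng.standard_normal(n), rng.standard_normal(n)
x = u1 + rng.standard_normal(n)
m = u1 + u2 + rng.standard_normal(n)       # collider between U1 and U2
y = 1.0 * x + u2 + rng.standard_normal(n)  # true causal effect of X is 1.0

def ols(y, *covs):
    Z = np.column_stack(covs + (np.ones(len(y)),))
    return np.linalg.lstsq(Z, y, rcond=None)[0][0]  # coeff. on first covariate

print(ols(y, x))      # ~1.0  : unadjusted estimate is unbiased
print(ols(y, x, m))   # !=1.0 : conditioning on M biases the estimate
```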
On Selecting Stable Predictors in Time Series Models
Title | On Selecting Stable Predictors in Time Series Models |
Authors | Avleen S. Bijral |
Abstract | We extend feature selection methodology to dependent data and propose a novel time series predictor selection scheme that accommodates statistical dependence within a more typical i.i.d. sub-sampling based framework. Furthermore, the machinery of mixing stationary processes allows us to quantify the improvements of our approach over any base predictor selection method (such as the lasso), even in a finite-sample setting. Using the lasso as a base procedure, we demonstrate the applicability of our methods on simulated data and several real time series datasets. |
Tasks | Feature Selection, Time Series |
Published | 2019-05-18 |
URL | https://arxiv.org/abs/1905.07659v1 |
https://arxiv.org/pdf/1905.07659v1.pdf | |
PWC | https://paperswithcode.com/paper/on-selecting-stable-predictors-in-time-series |
Repo | |
Framework | |
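A minimal sketch of dependence-aware selection in the spirit of the abstract (the parameters and the single-block scheme are assumptions, not the paper's exact procedure): run the base selector on many contiguous-block subsamples, which respect serial dependence better than i.i.d. row subsampling, and keep predictors selected in a large fraction of runs.

```python
# Hedged sketch: block-subsampled stability selection with a lasso base.
import numpy as np
from sklearn.linear_model import Lasso

def stable_predictors(X, y, block_len=100, runs=200, alpha=0.1, thresh=0.7,
                      rng=np.random.default_rng(0)):
    """Assumes n > block_len; returns indices selected in >= thresh of runs."""
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(runs):
        start = rng.integers(0, n - block_len)     # one contiguous block
        sl = slice(start, start + block_len)
        coef = Lasso(alpha=alpha).fit(X[sl], y[sl]).coef_
        counts += coef != 0                         # tally selections
    return np.flatnonzero(counts / runs >= thresh)  # stably selected indices
```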