Paper Group ANR 833
Mathematical Theory of Evidence Versus Evidence. Delegating via Quitting Games. Natural Gradients in Practice: Non-Conjugate Variational Inference in Gaussian Process Models. UNIQ: Uniform Noise Injection for Non-Uniform Quantization of Neural Networks. Residual Unfairness in Fair Machine Learning from Prejudiced Data. The Evolution of Gene Dominan …
Mathematical Theory of Evidence Versus Evidence
Title | Mathematical Theory of Evidence Versus Evidence |
Authors | Mieczysław Kłopotek |
Abstract | This paper is concerned with the apparent greatest weakness of the Mathematical Theory of Evidence (MTE) of Shafer \cite{Shafer:76}, which has been strongly criticized by Wasserman \cite{Wasserman:92ijar}. Weaknesses of Shafer’s proposal \cite{Shafer:90b} of a probabilistic interpretation of MTE belief functions are demonstrated. Thereafter a new probabilistic interpretation of MTE is presented, conforming both to the definition of the belief function and to Dempster’s rule of combination of independent evidence. It is shown that Shaferian conditioning of belief functions on observations \cite{Shafer:90b} may be treated as selection combined with modification of data; that is, the data is not viewed as it is but is cast into one’s beliefs about what it should be like. |
Tasks | |
Published | 2018-11-09 |
URL | http://arxiv.org/abs/1811.04787v1 |
http://arxiv.org/pdf/1811.04787v1.pdf | |
PWC | https://paperswithcode.com/paper/mathematical-theory-of-evidence-versus |
Repo | |
Framework | |
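The combination rule referenced throughout this entry is Dempster's rule for pooling independent evidence. A minimal sketch, assuming focal elements are represented as frozensets over a small frame of discernment (illustrative only, not code from the paper):

```python
# Dempster's rule of combination for two mass functions m1, m2.
# Focal elements are frozensets; masses on each side sum to 1.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass falling on the empty set
    # Normalise the non-conflicting mass (the defining step of Dempster's rule).
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.3, frozenset({"a", "b"}): 0.7}
print(dempster_combine(m1, m2))   # {'a'}: ~0.512, {'b'}: ~0.146, {'a','b'}: ~0.341
```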
Delegating via Quitting Games
Title | Delegating via Quitting Games |
Authors | Juan Afanador, Nir Oren, Murilo S. Baptista |
Abstract | Delegation allows an agent to request that another agent completes a task. In many situations the task may be delegated onwards, and this process can repeat until it is eventually, successfully or unsuccessfully, performed. We consider policies to guide an agent in choosing who to delegate to when such recursive interactions are possible. These policies, based on quitting games and multi-armed bandits, were empirically tested for effectiveness. Our results indicate that the quitting game based policies outperform those which do not explicitly account for the recursive nature of delegation. |
Tasks | Multi-Armed Bandits |
Published | 2018-04-20 |
URL | http://arxiv.org/abs/1804.07464v1 |
http://arxiv.org/pdf/1804.07464v1.pdf | |
PWC | https://paperswithcode.com/paper/delegating-via-quitting-games |
Repo | |
Framework | |
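One of the baseline policy families named in the abstract is multi-armed bandits. A minimal sketch of a UCB1 policy for choosing whom to delegate to, with success of the (possibly recursive) delegation as a binary reward; the delegatee qualities and loop are purely illustrative:

```python
import math
import random

def ucb1_choose(counts, rewards, t):
    """Pick the delegatee with the highest upper confidence bound."""
    for i, n in enumerate(counts):
        if n == 0:
            return i                              # try each delegatee once first
    return max(range(len(counts)),
               key=lambda i: rewards[i] / counts[i]
                             + math.sqrt(2 * math.log(t) / counts[i]))

success_prob = [0.3, 0.6, 0.5]                    # hidden quality of each delegatee
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 501):
    i = ucb1_choose(counts, rewards, t)
    r = 1.0 if random.random() < success_prob[i] else 0.0
    counts[i] += 1
    rewards[i] += r
```

The quitting-game policies studied in the paper additionally model each delegatee's option to stop or pass the task on, which this bandit baseline ignores.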
Natural Gradients in Practice: Non-Conjugate Variational Inference in Gaussian Process Models
Title | Natural Gradients in Practice: Non-Conjugate Variational Inference in Gaussian Process Models |
Authors | Hugh Salimbeni, Stefanos Eleftheriadis, James Hensman |
Abstract | The natural gradient method has been used effectively in conjugate Gaussian process models, but the non-conjugate case has been largely unexplored. We examine how natural gradients can be used in non-conjugate stochastic settings, together with hyperparameter learning. We conclude that the natural gradient can significantly improve performance in terms of wall-clock time. For ill-conditioned posteriors the benefit of the natural gradient method is especially pronounced, and we demonstrate a practical setting where ordinary gradients are unusable. We show how natural gradients can be computed efficiently and automatically in any parameterization, using automatic differentiation. Our code is integrated into the GPflow package. |
Tasks | |
Published | 2018-03-24 |
URL | http://arxiv.org/abs/1803.09151v1 |
http://arxiv.org/pdf/1803.09151v1.pdf | |
PWC | https://paperswithcode.com/paper/natural-gradients-in-practice-non-conjugate |
Repo | |
Framework | |
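Since the abstract notes that the code is integrated into GPflow, here is a hedged sketch of the usual pattern there: natural-gradient steps for the Gaussian variational parameters of a non-conjugate sparse GP, interleaved with Adam steps for hyperparameters. Exact API details may differ across GPflow versions; the data and kernel choices are placeholders:

```python
import numpy as np
import tensorflow as tf
import gpflow

# Toy non-conjugate problem: Bernoulli likelihood (GP classification).
X = np.random.rand(100, 1)
Y = (X > 0.5).astype(float)
Z = np.linspace(0.0, 1.0, 10)[:, None]           # inducing inputs

model = gpflow.models.SVGP(
    kernel=gpflow.kernels.Matern32(),
    likelihood=gpflow.likelihoods.Bernoulli(),
    inducing_variable=Z,
)

# Keep q(u) out of the Adam step; it is updated with natural gradients instead.
gpflow.set_trainable(model.q_mu, False)
gpflow.set_trainable(model.q_sqrt, False)

loss = model.training_loss_closure((X, Y))
natgrad = gpflow.optimizers.NaturalGradient(gamma=0.1)
adam = tf.optimizers.Adam(0.01)

for _ in range(100):
    natgrad.minimize(loss, var_list=[(model.q_mu, model.q_sqrt)])   # variational step
    adam.minimize(loss, var_list=model.trainable_variables)         # hyperparameter step
```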
UNIQ: Uniform Noise Injection for Non-Uniform Quantization of Neural Networks
Title | UNIQ: Uniform Noise Injection for Non-Uniform Quantization of Neural Networks |
Authors | Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, Alex M. Bronstein, Avi Mendelson |
Abstract | We present a novel method for neural network quantization that emulates a non-uniform $k$-quantile quantizer, which adapts to the distribution of the quantized parameters. Our approach provides a novel alternative to the existing uniform quantization techniques for neural networks. We suggest comparing results as a function of the bit operations (BOPS) performed, assuming the availability of a look-up table for the non-uniform case. In this setup, we show the advantages of our strategy in the low computational budget regime. While the proposed solution is harder to implement in hardware, we believe it sets a basis for new alternatives to neural network quantization. |
Tasks | Quantization |
Published | 2018-04-29 |
URL | http://arxiv.org/abs/1804.10969v3 |
http://arxiv.org/pdf/1804.10969v3.pdf | |
PWC | https://paperswithcode.com/paper/uniq-uniform-noise-injection-for-non-uniform |
Repo | |
Framework | |
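A minimal sketch of the two ingredients named in the title, under my reading of the abstract: quantization levels placed at the $k$-quantiles of the weight distribution, and uniform noise of roughly one bin width injected at training time as a differentiable surrogate for the quantization error. Names and details are illustrative, not the authors' implementation:

```python
import numpy as np

def quantile_levels(w, num_bits):
    """Non-uniform levels at the k-quantiles of the weight distribution."""
    k = 2 ** num_bits
    qs = (np.arange(k) + 0.5) / k
    return np.quantile(w, qs)

def quantize(w, levels):
    """Inference-time path: map each weight to its nearest level."""
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    return levels[idx]

def inject_uniform_noise(w, levels, rng):
    """Training-time path: add uniform noise of about one bin width instead
    of hard quantization, so the operation stays smooth for backprop."""
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    half = np.gradient(levels)[idx] / 2.0        # per-weight half bin width
    return w + rng.uniform(-half, half, size=w.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
levels = quantile_levels(w, num_bits=4)
w_train = inject_uniform_noise(w, levels, rng)   # used during training
w_infer = quantize(w, levels)                    # used at inference
```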
Residual Unfairness in Fair Machine Learning from Prejudiced Data
Title | Residual Unfairness in Fair Machine Learning from Prejudiced Data |
Authors | Nathan Kallus, Angela Zhou |
Abstract | Recent work in fairness in machine learning has proposed adjusting for fairness by equalizing accuracy metrics across groups and has also studied how datasets affected by historical prejudices may lead to unfair decision policies. We connect these lines of work and study the residual unfairness that arises when a fairness-adjusted predictor is not actually fair on the target population due to systematic censoring of training data by existing biased policies. This scenario is particularly common in the same applications where fairness is a concern. We characterize theoretically the impact of such censoring on standard fairness metrics for binary classifiers and provide criteria for when residual unfairness may or may not appear. We prove that, under certain conditions, fairness-adjusted classifiers will in fact induce residual unfairness that perpetuates the same injustices, against the same groups, that biased the data to begin with, thus showing that even state-of-the-art fair machine learning can have a “bias in, bias out” property. When certain benchmark data is available, we show how sample reweighting can estimate and adjust fairness metrics while accounting for censoring. We use this to study the case of Stop, Question, and Frisk (SQF) and demonstrate that attempting to adjust for fairness perpetuates the same injustices that the policy is infamous for. |
Tasks | Accuracy Metrics |
Published | 2018-06-07 |
URL | http://arxiv.org/abs/1806.02887v1 |
http://arxiv.org/pdf/1806.02887v1.pdf | |
PWC | https://paperswithcode.com/paper/residual-unfairness-in-fair-machine-learning |
Repo | |
Framework | |
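A minimal sketch of the sample-reweighting idea in the abstract: estimating a per-group fairness metric (here, the true-positive rate) on censored data by weighting each observed example with the inverse of its inclusion probability. The propensity scores are assumed to be given; the function and data are illustrative:

```python
import numpy as np

def reweighted_tpr(y_true, y_pred, group, propensity, g):
    """True-positive rate for group g, inverse-propensity-weighted to
    compensate for systematic censoring of the training data."""
    mask = (group == g) & (y_true == 1)
    w = 1.0 / propensity[mask]
    return np.sum(w * (y_pred[mask] == 1)) / np.sum(w)

y_true = np.array([1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1])
group = np.array([0, 0, 1, 1])
prop = np.array([0.8, 0.5, 0.9, 0.4])   # P(observed), assumed known here
print(reweighted_tpr(y_true, y_pred, group, prop, g=0))
```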
The Evolution of Gene Dominance through the Baldwin Effect
Title | The Evolution of Gene Dominance through the Baldwin Effect |
Authors | Larry Bull |
Abstract | It has recently been suggested that the fundamental haploid-diploid cycle of eukaryotic sex exploits a rudimentary form of the Baldwin effect. Thereafter the other associated phenomena can be explained as evolution tuning the amount and frequency of learning experienced by an organism. Using the well-known NK model of fitness landscapes it is here shown that the emergence of dominance can also be explained under this view of eukaryotic evolution. |
Tasks | |
Published | 2018-11-08 |
URL | http://arxiv.org/abs/1811.04073v1 |
http://arxiv.org/pdf/1811.04073v1.pdf | |
PWC | https://paperswithcode.com/paper/the-evolution-of-gene-dominance-through-the |
Repo | |
Framework | |
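For context on the fitness landscapes used in this entry, a minimal sketch of the NK model: each of N genes contributes a fitness value that depends on its own state and the states of K other genes, drawn from a random table. This is the standard construction, not the paper's haploid-diploid setup:

```python
import itertools
import numpy as np

def make_nk_landscape(N, K, seed=0):
    rng = np.random.default_rng(seed)
    neighbours = [rng.choice([j for j in range(N) if j != i], K, replace=False)
                  for i in range(N)]
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]                  # random contribution table per gene
    return neighbours, tables

def nk_fitness(genome, neighbours, tables):
    """Mean of the per-gene contributions, each keyed by the gene and its K neighbours."""
    N = len(genome)
    return sum(tables[i][tuple(genome[[i, *neighbours[i]]])] for i in range(N)) / N

neighbours, tables = make_nk_landscape(N=10, K=2)
genome = np.random.randint(0, 2, size=10)
print(nk_fitness(genome, neighbours, tables))
```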
A Parameter Estimation of Fractional Order Grey Model Based on Adaptive Dynamic Cat Swarm Algorithm
Title | A Parameter Estimation of Fractional Order Grey Model Based on Adaptive Dynamic Cat Swarm Algorithm |
Authors | Binyan Lin, Fei Gao, Meng Wang, Yuyao Xiong, Ansheng Li |
Abstract | In this paper, we utilize ADCSO (Adaptive Dynamic Cat Swarm Optimization) to estimate the parameters of the Fractional Order Grey Model. The parameters of the Fractional Order Grey Model affect the prediction accuracy of the model. In order to solve the problem that general swarm intelligence algorithms easily fall into local optima, and to optimize the accuracy of the model, ADCSO is utilized to reduce the error of the model. Experimental results on container throughput data of Wuhan Port and marine capture production of Zhejiang Province show that different parameter values affect the prediction results. The parameters estimated by ADCSO yield a smaller prediction error and faster convergence, and the method is less prone to local convergence than PSO (Particle Swarm Optimization) and LSM (Least Squares Method). The feasibility and advantage of ADCSO for the parameter estimation of the Fractional Order Grey Model are verified. |
Tasks | |
Published | 2018-05-22 |
URL | http://arxiv.org/abs/1805.08680v2 |
http://arxiv.org/pdf/1805.08680v2.pdf | |
PWC | https://paperswithcode.com/paper/a-parameter-estimation-of-fractional-order |
Repo | |
Framework | |
Practical Batch Bayesian Optimization for Less Expensive Functions
Title | Practical Batch Bayesian Optimization for Less Expensive Functions |
Authors | Vu Nguyen, Sunil Gupta, Santu Rana, Cheng Li, Svetha Venkatesh |
Abstract | Bayesian optimization (BO) and its batch extensions are successful for optimizing expensive black-box functions. However, these traditional BO approaches are not yet ideal for optimizing less expensive functions, where the computational cost of BO can dominate the cost of evaluating the black-box function. Examples of such less expensive functions are cheap machine learning models, inexpensive physical experiments run through simulators, and acquisition function optimization in Bayesian optimization. In this paper, we consider a batch BO setting for situations where function evaluations are less expensive. Our model is based on a new exploration strategy using geometric distance, selecting a point far from the observed locations. Using that intuition, we propose to use a Sobol sequence to guide exploration, which avoids running the multiple global optimization steps used in previous works. Based on the proposed distance exploration, we present an efficient batch BO approach. We demonstrate that our approach outperforms other baselines and global optimization methods when function evaluations are less expensive. |
Tasks | |
Published | 2018-11-05 |
URL | http://arxiv.org/abs/1811.01466v1 |
http://arxiv.org/pdf/1811.01466v1.pdf | |
PWC | https://paperswithcode.com/paper/practical-batch-bayesian-optimization-for |
Repo | |
Framework | |
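A minimal sketch of the distance-based exploration step described in the abstract: draw quasi-random candidates from a Sobol sequence and pick the one farthest from all observed locations, avoiding a separate global optimization run. Function names and the surrounding BO loop are not from the paper:

```python
import numpy as np
from scipy.stats import qmc

def farthest_sobol_point(X_observed, bounds, n_candidates=256, seed=0):
    """Return the Sobol candidate with the largest distance to its nearest observation."""
    dim = len(bounds)
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(n_candidates)                  # points in [0, 1]^dim
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    cand = lo + unit * (hi - lo)
    dists = np.linalg.norm(cand[:, None, :] - X_observed[None, :, :], axis=-1)
    return cand[dists.min(axis=1).argmax()]

X_obs = np.random.rand(5, 2)                             # previously evaluated points
x_explore = farthest_sobol_point(X_obs, bounds=[(0, 1), (0, 1)])
```

In a batch setting, such an exploration point would be added alongside the points chosen by the acquisition function.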
Image Captioning
Title | Image Captioning |
Authors | Vikram Mullachery, Vishal Motwani |
Abstract | This paper discusses and demonstrates the outcomes from our experimentation on Image Captioning. Image captioning is a much more involved task than image recognition or classification, because of the additional challenge of recognizing the interdependence between the objects/concepts in the image and the creation of a succinct sentential narration. Experiments on several labeled datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. As a toy application, we apply image captioning to create video captions, and we advance a few hypotheses on the challenges we encountered. |
Tasks | Image Captioning |
Published | 2018-05-13 |
URL | http://arxiv.org/abs/1805.09137v1 |
http://arxiv.org/pdf/1805.09137v1.pdf | |
PWC | https://paperswithcode.com/paper/image-captioning |
Repo | |
Framework | |
Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits
Title | Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits |
Authors | Zeyuan Allen-Zhu, Sébastien Bubeck, Yuanzhi Li |
Abstract | Regret bounds in online learning compare the player’s performance to $L^*$, the optimal performance in hindsight with a fixed strategy. Typically such bounds scale with the square root of the time horizon $T$. The more refined concept of first-order regret bound replaces this with a scaling $\sqrt{L^*}$, which may be much smaller than $\sqrt{T}$. It is well known that minor variants of standard algorithms satisfy first-order regret bounds in the full information and multi-armed bandit settings. In a COLT 2017 open problem, Agarwal, Krishnamurthy, Langford, Luo, and Schapire raised the issue that existing techniques do not seem sufficient to obtain first-order regret bounds for the contextual bandit problem. In the present paper, we resolve this open problem by presenting a new strategy based on augmenting the policy space. |
Tasks | Multi-Armed Bandits |
Published | 2018-02-09 |
URL | http://arxiv.org/abs/1802.03386v1 |
http://arxiv.org/pdf/1802.03386v1.pdf | |
PWC | https://paperswithcode.com/paper/make-the-minority-great-again-first-order |
Repo | |
Framework | |
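For reference, a schematic statement of the quantities in this abstract, with losses $\ell_t \in [0,1]$, contexts $x_t$, actions $a_t$, and policy class $\Pi$ (constants and logarithmic factors omitted):

```latex
R_T \;=\; \sum_{t=1}^{T} \ell_t(a_t) \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} \ell_t(\pi(x_t)),
\qquad
L^{*} \;=\; \min_{\pi \in \Pi} \sum_{t=1}^{T} \ell_t(\pi(x_t)).
```

A standard bound scales as $\sqrt{T}$, whereas a first-order bound scales as $\sqrt{L^{*}}$, which is never worse (since $L^{*} \le T$ for bounded losses) and is much smaller when the best policy suffers little loss.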
Machine learning electron correlation in a disordered medium
Title | Machine learning electron correlation in a disordered medium |
Authors | Jianhua Ma, Puhan Zhang, Yaohua Tan, Avik W. Ghosh, Gia-Wei Chern |
Abstract | Learning from data has led to a paradigm shift in computational materials science. In particular, it has been shown that neural networks can learn the potential energy surface and interatomic forces through examples, thus bypassing the computationally expensive density functional theory calculations. Combining many-body techniques with a deep learning approach, we demonstrate that a fully-connected neural network is able to learn the complex collective behavior of electrons in strongly correlated systems. Specifically, we consider the Anderson-Hubbard (AH) model, which is a canonical system for studying the interplay between electron correlation and strong localization. The ground states of the AH model on a square lattice are obtained using the real-space Gutzwiller method. The obtained solutions are used to train a multi-task multi-layer neural network, which subsequently can accurately predict quantities such as the local probability of double occupation and the quasiparticle weight, given the disorder potential in the neighborhood as the input. |
Tasks | |
Published | 2018-10-04 |
URL | http://arxiv.org/abs/1810.02323v1 |
http://arxiv.org/pdf/1810.02323v1.pdf | |
PWC | https://paperswithcode.com/paper/machine-learning-electron-correlation-in-a |
Repo | |
Framework | |
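A minimal sketch of the kind of multi-task, fully connected network the abstract describes: the input is the disorder potential in a local neighbourhood of a lattice site, and the two outputs are the local double-occupation probability and the quasiparticle weight. Sizes and the window shape are assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

window = 5 * 5                       # assumed local neighbourhood on the square lattice
net = nn.Sequential(
    nn.Linear(window, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 2),               # outputs: [double occupation, quasiparticle weight]
)

disorder_patch = torch.randn(32, window)   # batch of local disorder potentials
prediction = net(disorder_patch)           # shape (32, 2)
```

Training targets would come from the real-space Gutzwiller solutions mentioned in the abstract.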
Script Identification in Natural Scene Image and Video Frame using Attention based Convolutional-LSTM Network
Title | Script Identification in Natural Scene Image and Video Frame using Attention based Convolutional-LSTM Network |
Authors | Ankan Kumar Bhunia, Aishik Konwer, Ayan Kumar Bhunia, Abir Bhowmick, Partha P. Roy, Umapada Pal |
Abstract | Script identification plays a significant role in analysing documents and videos. In this paper, we focus on the problem of script identification in scene text images and video scripts. Because of low image quality, complex backgrounds and the similar layout of characters shared by some scripts such as Greek and Latin, text recognition in these cases becomes challenging. In this paper, we propose a novel method that involves extraction of local and global features using a CNN-LSTM framework and weighting them dynamically for script identification. First, we convert the images into patches and feed them into a CNN-LSTM framework. Attention-based patch weights are calculated by applying a softmax layer after the LSTM. Next, these weights are multiplied patch-wise with the corresponding CNN features to yield local features. Global features are also extracted from the last cell state of the LSTM. We employ a fusion technique which dynamically weights the local and global features for an individual patch. Experiments have been conducted on four public script identification datasets: SIW-13, CVSI2015, ICDAR-17 and MLe2e. The proposed framework achieves superior results in comparison to conventional methods. |
Tasks | |
Published | 2018-01-01 |
URL | http://arxiv.org/abs/1801.00470v4 |
http://arxiv.org/pdf/1801.00470v4.pdf | |
PWC | https://paperswithcode.com/paper/script-identification-in-natural-scene-image |
Repo | |
Framework | |
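A hedged sketch of the fusion described in the abstract: patch-level CNN features are weighted by an attention softmax computed from the LSTM outputs, a global feature is taken from the last LSTM cell state, and the two are dynamically weighted before classification. Layer sizes and the exact fusion are illustrative, not the authors' architecture:

```python
import torch
import torch.nn as nn

class PatchAttentionFusion(nn.Module):
    def __init__(self, feat_dim=256, hidden=128, num_scripts=13):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)        # attention score per patch
        self.fuse = nn.Linear(hidden, 2)        # dynamic local/global weights
        self.cls = nn.Linear(feat_dim + hidden, num_scripts)

    def forward(self, patch_feats):             # (batch, n_patches, feat_dim) from a CNN
        out, (h, c) = self.lstm(patch_feats)
        alpha = torch.softmax(self.attn(out), dim=1)       # patch attention weights
        local = (alpha * patch_feats).sum(dim=1)           # weighted CNN features
        global_feat = c[-1]                                # last LSTM cell state
        w = torch.softmax(self.fuse(global_feat), dim=-1)  # dynamic fusion weights
        fused = torch.cat([w[:, :1] * local, w[:, 1:] * global_feat], dim=-1)
        return self.cls(fused)

logits = PatchAttentionFusion()(torch.randn(4, 16, 256))   # 4 images, 16 patches each
```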
Privacy Amplification by Iteration
Title | Privacy Amplification by Iteration |
Authors | Vitaly Feldman, Ilya Mironov, Kunal Talwar, Abhradeep Thakurta |
Abstract | Many commonly used learning algorithms work by iteratively updating an intermediate solution using one or a few data points in each iteration. Analysis of differential privacy for such algorithms often involves ensuring privacy of each step and then reasoning about the cumulative privacy cost of the algorithm. This is enabled by composition theorems for differential privacy that allow releasing of all the intermediate results. In this work, we demonstrate that for contractive iterations, not releasing the intermediate results strongly amplifies the privacy guarantees. We describe several applications of this new analysis technique to solving convex optimization problems via noisy stochastic gradient descent. For example, we demonstrate that a relatively small number of non-private data points from the same distribution can be used to close the gap between private and non-private convex optimization. In addition, we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied. |
Tasks | |
Published | 2018-08-20 |
URL | http://arxiv.org/abs/1808.06651v2 |
http://arxiv.org/pdf/1808.06651v2.pdf | |
PWC | https://paperswithcode.com/paper/privacy-amplification-by-iteration |
Repo | |
Framework | |
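A minimal sketch of the type of contractive iteration analysed in the abstract: projected noisy SGD over a stream of data points, where only the final iterate is released. The loss, step size, and noise scale are illustrative, and no formal privacy accounting is shown:

```python
import numpy as np

def noisy_projected_sgd(X, y, grad_fn, radius=1.0, eta=0.05, sigma=0.5, seed=0):
    """One pass of noisy SGD, projecting onto a ball of the given radius;
    only the final iterate is released."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for x_i, y_i in zip(X, y):                       # one data point per iteration
        g = grad_fn(w, x_i, y_i) + sigma * rng.normal(size=w.shape)
        w = w - eta * g
        norm = np.linalg.norm(w)
        if norm > radius:                            # the projection keeps each
            w = w * (radius / norm)                  # update step contractive
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + 0.01 * rng.normal(size=200)
squared_loss_grad = lambda w, x, t: (x @ w - t) * x
w_released = noisy_projected_sgd(X, y, squared_loss_grad)
```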
Stepwise Acquisition of Dialogue Act Through Human-Robot Interaction
Title | Stepwise Acquisition of Dialogue Act Through Human-Robot Interaction |
Authors | Akane Matsushima, Ryosuke Kanajiri, Yusuke Hattori, Chie Fukada, Natsuki Oka |
Abstract | A dialogue act (DA) represents the meaning of an utterance at the illocutionary force level (Austin 1962), such as a question, a request, or a greeting. Since DAs take charge of the most fundamental part of communication, we believe that elucidating the DA learning mechanism is important for cognitive science and artificial intelligence. The purpose of this study is to verify that scaffolding takes place when a human teaches a robot, and to let a robot learn, step by step, to estimate DAs and to respond based on them, utilizing the scaffolding provided by a human. To realize this, it is necessary for the robot to detect changes in the utterances and rewards given by the partner and to continue learning accordingly. Experimental results demonstrated that participants who continued the interaction for a sufficiently long time often provided scaffolding for the robot. Although the number of experiments is still insufficient to obtain a definite conclusion, we observed that 1) the robot quickly learned to respond to DAs in most cases if the participants only spoke utterances that matched the situation; 2) for participants who built scaffolding differently from what we assumed, learning did not proceed quickly; and 3) the robot could learn to estimate DAs almost exactly if the participants kept interacting for a sufficiently long time, even if the scaffolding was unexpected. |
Tasks | |
Published | 2018-10-23 |
URL | http://arxiv.org/abs/1810.09949v2 |
http://arxiv.org/pdf/1810.09949v2.pdf | |
PWC | https://paperswithcode.com/paper/stepwise-acquisition-of-dialogue-act-through |
Repo | |
Framework | |
Learning to Design Circuits
Title | Learning to Design Circuits |
Authors | Hanrui Wang, Jiacheng Yang, Hae-Seung Lee, Song Han |
Abstract | Analog IC design relies on human experts to search for parameters that satisfy circuit specifications with their experience and intuitions, which is highly labor intensive, time consuming and suboptimal. Machine learning is a promising tool to automate this process. However, supervised learning is difficult for this task due to the low availability of training data: 1) circuit simulation is slow, so generating a large-scale dataset is time-consuming; 2) most circuit designs are proprietary IPs of individual IC companies, making it expensive to collect large-scale datasets. We propose Learning to Design Circuits (L2DC), which leverages reinforcement learning to efficiently generate new circuit data and to optimize circuits. We fix the schematic, and optimize the parameters of the transistors automatically by training an RL agent with no prior knowledge about optimizing circuits. After iteratively getting observations, generating a new set of transistor parameters, getting a reward, and adjusting the model, L2DC is able to optimize circuits. We evaluate L2DC on two transimpedance amplifiers. Trained for a day, our RL agent can achieve comparable or better performance than human experts trained for a quarter. It first learns to meet hard constraints (e.g., gain, bandwidth), and then learns to optimize good-to-have targets (e.g., area, power). Compared with grid search-aided human design, L2DC can achieve $\mathbf{250}\boldsymbol{\times}$ higher sample efficiency with comparable performance. Under the same runtime constraint, the performance of L2DC is also better than that of Bayesian Optimization. |
Tasks | |
Published | 2018-12-05 |
URL | https://arxiv.org/abs/1812.02734v4 |
https://arxiv.org/pdf/1812.02734v4.pdf | |
PWC | https://paperswithcode.com/paper/learning-to-design-circuits |
Repo | |
Framework | |
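A hedged sketch of the optimization loop the abstract describes, observe → propose transistor parameters → simulate → reward, with hard constraints checked before the good-to-have targets. The "simulator" is a stand-in and the agent is reduced to random search purely to show the loop structure; the paper trains an RL policy:

```python
import numpy as np

SPEC = {"gain": 4.0, "bandwidth": 2.0}               # hypothetical hard constraints

def simulate(params):
    """Placeholder for a circuit simulator returning performance metrics."""
    return {"gain": 10 * params[0], "bandwidth": 5 * params[1], "power": params.sum()}

def reward(m):
    """Penalise constraint violations first; once met, minimise power."""
    slack = max(0.0, SPEC["gain"] - m["gain"]) + max(0.0, SPEC["bandwidth"] - m["bandwidth"])
    return -slack if slack > 0 else 10.0 - m["power"]

rng = np.random.default_rng(0)
best_reward, best_params = -np.inf, None
for step in range(200):
    params = rng.uniform(0.0, 1.0, size=3)           # candidate transistor parameters
    r = reward(simulate(params))
    if r > best_reward:
        best_reward, best_params = r, params
```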