Paper Group ANR 874
Automatic extrinsic calibration between a camera and a 3D Lidar using 3D point and plane correspondences. Principled Frameworks for Evaluating Ethics in NLP Systems. Countering the Effects of Lead Bias in News Summarization via Multi-Stage Training and Auxiliary Losses. Non-asymptotic error bounds for scaled underdamped Langevin MCMC. Complementary …
Automatic extrinsic calibration between a camera and a 3D Lidar using 3D point and plane correspondences
Title | Automatic extrinsic calibration between a camera and a 3D Lidar using 3D point and plane correspondences |
Authors | Surabhi Verma, Julie Stephany Berrio, Stewart Worrall, Eduardo Nebot |
Abstract | This paper proposes an automated method to obtain the extrinsic calibration parameters between a camera and a 3D lidar with as few as 16 beams. We use a checkerboard as a reference to obtain features of interest in both sensor frames. The calibration board centre point and normal vector are automatically extracted from the lidar point cloud by exploiting the geometry of the board. The corresponding features in the camera image are obtained from the camera’s extrinsic matrix. We explain the reasons behind selecting these features, and why they are more robust than other possibilities. To obtain the optimal extrinsic parameters, we choose a genetic algorithm to address the highly non-linear state space. The process is automated after defining the bounds of the 3D experimental region relative to the lidar, and the true board dimensions. In addition, the camera is assumed to be intrinsically calibrated. Our method requires a minimum of 3 checkerboard poses, and the calibration accuracy is demonstrated by evaluating our algorithm using real-world and simulated features. |
Tasks | Calibration |
Published | 2019-04-29 |
URL | http://arxiv.org/abs/1904.12433v1 |
PDF | http://arxiv.org/pdf/1904.12433v1.pdf |
PWC | https://paperswithcode.com/paper/automatic-extrinsic-calibration-between-a |
Repo | |
Framework | |
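The abstract describes extracting the board’s centre point and normal vector from the lidar cloud by exploiting the board’s planar geometry. Below is a minimal sketch of that plane-fitting step, assuming the board points have already been segmented out of the full cloud; the function and variable names are illustrative, not the paper’s code:

```python
import numpy as np

def fit_board_plane(points):
    """Fit a plane to segmented checkerboard lidar points via SVD.

    points: (N, 3) array of xyz returns lying on the board.
    Returns the board centre and unit normal vector.
    """
    centre = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centred points is the direction of least variance: the plane normal.
    _, _, vt = np.linalg.svd(points - centre)
    normal = vt[-1]
    # Orient the normal towards the sensor at the origin.
    if np.dot(normal, -centre) < 0:
        normal = -normal
    return centre, normal
```

With centre/normal pairs from at least three board poses, candidate extrinsics can then be scored against the camera-frame features, which the paper proposes to optimize with a genetic algorithm.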
Principled Frameworks for Evaluating Ethics in NLP Systems
Title | Principled Frameworks for Evaluating Ethics in NLP Systems |
Authors | Shrimai Prabhumoye, Elijah Mayfield, Alan W Black |
Abstract | We critique recent work on ethics in natural language processing. Those discussions have focused on data collection, experimental design, and interventions in modeling. But we argue that we ought to first understand the frameworks of ethics that are being used to evaluate the fairness and justice of algorithmic systems. Here, we begin that discussion by outlining deontological ethics, and envision a research agenda prioritized by it. |
Tasks | |
Published | 2019-06-14 |
URL | https://arxiv.org/abs/1906.06425v1 |
PDF | https://arxiv.org/pdf/1906.06425v1.pdf |
PWC | https://paperswithcode.com/paper/principled-frameworks-for-evaluating-ethics |
Repo | |
Framework | |
Countering the Effects of Lead Bias in News Summarization via Multi-Stage Training and Auxiliary Losses
Title | Countering the Effects of Lead Bias in News Summarization via Multi-Stage Training and Auxiliary Losses |
Authors | Matt Grenander, Yue Dong, Jackie Chi Kit Cheung, Annie Louis |
Abstract | Sentence position is a strong feature for news summarization, since the lead often (but not always) summarizes the key points of the article. In this paper, we show that recent neural systems excessively exploit this trend, which, although powerful for many inputs, is detrimental when summarizing documents where important content should be extracted from later parts of the article. We propose two techniques to make systems sensitive to the importance of content in different parts of the article. The first technique employs ‘unbiased’ data, i.e., randomly shuffled sentences of the source document, to pretrain the model. The second technique uses an auxiliary ROUGE-based loss that encourages the model to distribute importance scores throughout a document by mimicking sentence-level ROUGE scores on the training data. We show that these techniques significantly improve the performance of a competitive reinforcement learning based extractive system, with the auxiliary loss being more powerful than pretraining. |
Tasks | |
Published | 2019-09-08 |
URL | https://arxiv.org/abs/1909.04028v1 |
PDF | https://arxiv.org/pdf/1909.04028v1.pdf |
PWC | https://paperswithcode.com/paper/countering-the-effects-of-lead-bias-in-news |
Repo | |
Framework | |
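The first technique, pretraining on ‘unbiased’ data, amounts to destroying the position signal by shuffling source sentences while the extractive labels travel with their sentences. A minimal sketch of that data transformation, under the assumption that each training example is a list of (sentence, label) pairs; the names are illustrative:

```python
import random

def shuffle_source(example, seed=None):
    """Shuffle source sentences so position carries no information.

    example: list of (sentence, is_summary_sentence) pairs.
    Labels stay attached to their sentences; only position is destroyed.
    """
    rng = random.Random(seed)
    shuffled = list(example)
    rng.shuffle(shuffled)
    return shuffled
```

A model pretrained on such examples cannot rely on the lead and must score sentences on content, which is the behaviour the paper then refines on the original, ordered data.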
Non-asymptotic error bounds for scaled underdamped Langevin MCMC
Title | Non-asymptotic error bounds for scaled underdamped Langevin MCMC |
Authors | Tim Zajic |
Abstract | Recent works have derived non-asymptotic upper bounds on the convergence of underdamped Langevin MCMC. We revisit these bounds and consider introducing scaling terms in the underlying underdamped Langevin equation. In particular, we provide conditions under which an appropriate scaling improves the error bounds in terms of the condition number of the underlying density of interest. |
Tasks | |
Published | 2019-12-06 |
URL | https://arxiv.org/abs/1912.03154v1 |
PDF | https://arxiv.org/pdf/1912.03154v1.pdf |
PWC | https://paperswithcode.com/paper/non-asymptotic-error-bounds-for-scaled |
Repo | |
Framework | |
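For context, the underdamped Langevin diffusion whose discretizations these bounds concern is typically written as follows; this is the standard form from prior non-asymptotic analyses, not the paper’s scaled variant:

```latex
\mathrm{d}x_t = v_t\,\mathrm{d}t, \qquad
\mathrm{d}v_t = -\gamma v_t\,\mathrm{d}t - u\,\nabla f(x_t)\,\mathrm{d}t
  + \sqrt{2\gamma u}\,\mathrm{d}B_t,
```

where $f$ is the negative log-density of interest, $\gamma$ the friction coefficient, $u$ an inverse-mass parameter, and $B_t$ standard Brownian motion. The paper studies how introducing scaling terms into this equation affects the error bounds’ dependence on the condition number of the target density.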
Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay
Title | Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay |
Authors | Mohammad Rostami, Soheil Kolouri, Praveen K. Pilly |
Abstract | Despite huge success, deep networks are unable to learn effectively in sequential multitask learning settings, as they forget past learned tasks after learning new ones. Inspired by complementary learning systems theory, we address this challenge by learning a generative model that couples the current task to past learned tasks through a discriminative embedding space. We learn an abstract-level generative distribution in the embedding that allows the generation of data points to represent the experience. We sample from this distribution and utilize experience replay to avoid forgetting and simultaneously accumulate new knowledge into the abstract distribution in order to couple the current task with past experience. We demonstrate theoretically and empirically that our framework learns a distribution in the embedding that is shared across all tasks and as a result tackles catastrophic forgetting. |
Tasks | |
Published | 2019-03-11 |
URL | https://arxiv.org/abs/1903.04566v2 |
PDF | https://arxiv.org/pdf/1903.04566v2.pdf |
PWC | https://paperswithcode.com/paper/complementary-learning-for-overcoming |
Repo | |
Framework | |
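A minimal sketch of the experience-replay pattern the abstract describes: pseudo-samples drawn from a learned generative model of past tasks are mixed into each new-task batch. The generator interface is a placeholder assumption, not the paper’s architecture:

```python
import numpy as np

def replay_batch(new_x, new_y, generator, batch_size):
    """Mix current-task data with generated pseudo-data of past tasks.

    generator.sample(n) is assumed to return (x, y) pairs drawn from
    the generative model of previously learned tasks.
    """
    n_replay = batch_size // 2
    old_x, old_y = generator.sample(n_replay)
    x = np.concatenate([new_x[:batch_size - n_replay], old_x])
    y = np.concatenate([new_y[:batch_size - n_replay], old_y])
    # Training on this mixed batch updates the network on the new task
    # while rehearsing the old ones, which counters forgetting.
    return x, y
```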
The sameAs Problem: A Survey on Identity Management in the Web of Data
Title | The sameAs Problem: A Survey on Identity Management in the Web of Data |
Authors | Joe Raad, Nathalie Pernelle, Fatiha Saïs, Wouter Beek, Frank van Harmelen |
Abstract | In a decentralised knowledge representation system such as the Web of Data, it is common and indeed desirable for different knowledge graphs to overlap. Whenever multiple names are used to denote the same thing, owl:sameAs statements are needed in order to link the data and foster reuse. Whilst the deductive value of such identity statements can be extremely useful in enhancing various knowledge-based systems, incorrect use of identity can have wide-ranging effects in a global knowledge space like the Web of Data. With several works having already shown that identity in the Web is broken, this survey investigates the current state of this “sameAs problem”. An open discussion highlights the main weaknesses of the solutions in the literature and outlines open challenges to be faced in the future. |
Tasks | Knowledge Graphs |
Published | 2019-07-24 |
URL | https://arxiv.org/abs/1907.10528v1 |
PDF | https://arxiv.org/pdf/1907.10528v1.pdf |
PWC | https://paperswithcode.com/paper/the-sameas-problem-a-survey-on-identity |
Repo | |
Framework | |
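For readers unfamiliar with the identity statements the survey concerns, here is a minimal example of asserting owl:sameAs with rdflib; the two IRIs are illustrative:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()
dbpedia = URIRef("http://dbpedia.org/resource/Barcelona")
wikidata = URIRef("http://www.wikidata.org/entity/Q1492")

# Assert that the two IRIs denote the same real-world entity.
g.add((dbpedia, OWL.sameAs, wikidata))

# Downstream reasoners may now merge all facts about either IRI --
# which is exactly why an *incorrect* sameAs link is so damaging.
print(g.serialize(format="turtle"))
```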
Scalable Explanation of Inferences on Large Graphs
Title | Scalable Explanation of Inferences on Large Graphs |
Authors | Chao Chen, Yifei Liu, Xi Zhang, Sihong Xie |
Abstract | Probabilistic inferences distill knowledge from graphs to help humans make important decisions. Due to the inherent uncertainty in the model and the complexity of the knowledge, it is desirable to help end-users understand the inference outcomes. Different from deep or high-dimensional parametric models, the lack of interpretability in graphical models is due to the cyclic and long-range dependencies and the byzantine inference procedures. Prior works neither tackled cycles nor made the inferences interpretable. To close the gap, we formulate the problem of explaining probabilistic inferences as a constrained cross-entropy minimization problem to find simple subgraphs that faithfully approximate the inferences to be explained. We prove that the optimization is NP-hard, and that the objective is neither monotonic nor submodular, so efficient greedy approximation is not guaranteed. We propose a general beam search algorithm to find simple trees that enhance the interpretability and diversity of the explanations, with parallelization and a pruning strategy to allow efficient search on large and dense graphs without hurting faithfulness. We demonstrate superior performance on 10 networks from 4 distinct applications, comparing favorably to other explanation methods. Regarding the usability of the explanation, we visualize the explanations in an interface that allows end-users to explore the diverse search results and find more personalized and sensible explanations. |
Tasks | |
Published | 2019-08-13 |
URL | https://arxiv.org/abs/1908.06482v2 |
PDF | https://arxiv.org/pdf/1908.06482v2.pdf |
PWC | https://paperswithcode.com/paper/scalable-explanation-of-inferences-on-large |
Repo | |
Framework | |
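The proposed algorithm is a beam search over candidate subgraphs. A minimal generic beam-search skeleton of that kind is sketched below, with the expansion and scoring callbacks left as assumptions standing in for the paper’s subgraph growth and cross-entropy objective:

```python
def beam_search(initial, expand, score, beam_width, steps):
    """Generic beam search.

    expand(candidate) yields successor candidates (e.g. a subgraph
    grown by one edge); score(candidate) is the faithfulness
    objective to maximize.
    """
    beam = [initial]
    for _ in range(steps):
        successors = [s for c in beam for s in expand(c)]
        if not successors:
            break
        # Keep only the best `beam_width` candidates for the next round.
        beam = sorted(successors, key=score, reverse=True)[:beam_width]
    return max(beam, key=score)
```

Keeping several diverse candidates in the beam, rather than the single greedy best, is what lets the interface surface multiple plausible explanations to the end-user.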
Trust Region Value Optimization using Kalman Filtering
Title | Trust Region Value Optimization using Kalman Filtering |
Authors | Shirli Di-Castro Shashua, Shie Mannor |
Abstract | Policy evaluation is a key process in reinforcement learning. It assesses a given policy using estimation of the corresponding value function. When using a parameterized function to approximate the value, it is common to optimize the set of parameters by minimizing the sum of squared Bellman temporal difference errors. However, this approach ignores certain distributional properties of both the errors and value parameters. Taking these distributions into account in the optimization process can provide useful information on the amount of confidence in value estimation. In this work we propose to optimize the value by minimizing a regularized objective function which forms a trust region over its parameters. We present a novel optimization method, the Kalman Optimization for Value Approximation (KOVA), based on the Extended Kalman Filter. KOVA minimizes the regularized objective function by adopting a Bayesian perspective over both the value parameters and noisy observed returns. This distributional property provides information on parameter uncertainty in addition to value estimates. We provide theoretical results of our approach and analyze the performance of our proposed optimizer on domains with large state and action spaces. |
Tasks | |
Published | 2019-01-23 |
URL | http://arxiv.org/abs/1901.07860v1 |
PDF | http://arxiv.org/pdf/1901.07860v1.pdf |
PWC | https://paperswithcode.com/paper/trust-region-value-optimization-using-kalman |
Repo | |
Framework | |
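KOVA is built on the Extended Kalman Filter. As background, here is a minimal sketch of a single EKF measurement update over value parameters; this is the textbook update under a scalar observation, not the paper’s full regularized objective:

```python
import numpy as np

def ekf_update(theta, P, y, h, H, R):
    """One Extended Kalman Filter measurement update.

    theta: (d,) parameter mean;  P: (d, d) parameter covariance.
    y: observed return;  h(theta): predicted value (scalar);
    H: (1, d) Jacobian of h at theta;  R: observation-noise variance.
    """
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain, shape (d, 1)
    theta = theta + (K * (y - h(theta))).ravel()
    P = (np.eye(len(theta)) - K @ H) @ P
    return theta, P
```

Maintaining the covariance P alongside the mean is what gives the method the parameter-uncertainty information that plain squared-error minimization discards.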
Estimation of Monge Matrices
Title | Estimation of Monge Matrices |
Authors | Jan-Christian Hütter, Cheng Mao, Philippe Rigollet, Elina Robeva |
Abstract | Monge matrices and their permuted versions known as pre-Monge matrices naturally appear in many domains across science and engineering. While the rich structural properties of such matrices have long been leveraged for algorithmic purposes, little is known about their impact on statistical estimation. In this work, we propose to view this structure as a shape constraint and study the problem of estimating a Monge matrix subject to additive random noise. More specifically, we establish the minimax rates of estimation of Monge and pre-Monge matrices. In the case of pre-Monge matrices, the minimax-optimal least-squares estimator is not efficiently computable, and we propose two efficient estimators and establish their rates of convergence. Our theoretical findings are supported by numerical experiments. |
Tasks | |
Published | 2019-04-05 |
URL | http://arxiv.org/abs/1904.03136v1 |
PDF | http://arxiv.org/pdf/1904.03136v1.pdf |
PWC | https://paperswithcode.com/paper/estimation-of-monge-matrices |
Repo | |
Framework | |
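As background, a matrix $M \in \mathbb{R}^{n \times m}$ is Monge if it satisfies the classical inverse quadrangle inequality (this is the standard definition, not specific to the paper):

```latex
M_{i,j} + M_{i',j'} \;\le\; M_{i,j'} + M_{i',j}
\qquad \text{for all } i < i' \text{ and } j < j'.
```

A pre-Monge matrix is one that becomes Monge after permuting its rows and columns, which is why its least-squares estimation is computationally harder.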
Globally optimal registration of noisy point clouds
Title | Globally optimal registration of noisy point clouds |
Authors | Rangaprasad Arun Srivatsan, Tejas Zodage, Howie Choset |
Abstract | Registration of 3D point clouds is a fundamental task in several applications of robotics and computer vision. While registration methods such as iterative closest point and variants are very popular, they are only locally optimal. There has been some recent work on globally optimal registration, but they perform poorly in the presence of noise in the measurements. In this work we develop a mixed integer programming-based approach for globally optimal registration that explicitly considers uncertainty in its optimization, and hence produces more accurate estimates. Furthermore, from a practical implementation perspective we develop a multi-step optimization that combines fast local methods with our accurate global formulation. Through extensive simulation and real world experiments we demonstrate improved performance over state-of-the-art methods for various levels of noise and outliers in the data, as well as for partial geometric overlap. |
Tasks | |
Published | 2019-08-22 |
URL | https://arxiv.org/abs/1908.08162v1 |
PDF | https://arxiv.org/pdf/1908.08162v1.pdf |
PWC | https://paperswithcode.com/paper/globally-optimal-registration-of-noisy-point |
Repo | |
Framework | |
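The “fast local methods” combined with the global formulation are ICP variants, whose inner step is closest-point matching followed by a closed-form rigid alignment (the Kabsch algorithm). A minimal sketch of that local step; the paper’s mixed-integer global formulation is not reproduced here:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration: match closest points, then solve the
    optimal rigid alignment in closed form (Kabsch algorithm).

    source, target: (N, 3) and (M, 3) point arrays.
    """
    _, idx = cKDTree(target).query(source)   # closest-point matches
    matched = target[idx]
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t, R, t
```

Iterating this step converges only to a local optimum, which is exactly the limitation the paper’s global approach is designed to overcome.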
Safe Exploration for Interactive Machine Learning
Title | Safe Exploration for Interactive Machine Learning |
Authors | Matteo Turchetta, Felix Berkenkamp, Andreas Krause |
Abstract | In Interactive Machine Learning (IML), we iteratively make decisions and obtain noisy observations of an unknown function. While IML methods, e.g., Bayesian optimization and active learning, have been successful in applications, on real-world systems they must provably avoid unsafe decisions. To this end, safe IML algorithms must carefully learn about a priori unknown constraints without making unsafe decisions. Existing algorithms for this problem learn about the safety of all decisions to ensure convergence. This is sample-inefficient, as it explores decisions that are not relevant for the original IML objective. In this paper, we introduce a novel framework that renders any existing unsafe IML algorithm safe. Our method works as an add-on that takes suggested decisions as input and exploits regularity assumptions in terms of a Gaussian process prior in order to efficiently learn about their safety. As a result, we only explore the safe set when necessary for the IML problem. We apply our framework to safe Bayesian optimization and to safe exploration in deterministic Markov Decision Processes (MDP), which have been analyzed separately before. Our method outperforms other algorithms empirically. |
Tasks | Active Learning, Safe Exploration |
Published | 2019-10-30 |
URL | https://arxiv.org/abs/1910.13726v1 |
PDF | https://arxiv.org/pdf/1910.13726v1.pdf |
PWC | https://paperswithcode.com/paper/safe-exploration-for-interactive-machine |
Repo | |
Framework | |
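The add-on the abstract describes learns a Gaussian process model of the safety constraint and only certifies suggested decisions whose pessimistic confidence bound clears a threshold. A minimal sketch of that pattern using scikit-learn; the threshold, bound width, and interface are assumptions, not the paper’s exact algorithm:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

class SafetyFilter:
    """Certify suggested decisions with a GP lower confidence bound."""

    def __init__(self, safety_threshold=0.0, beta=2.0):
        self.gp = GaussianProcessRegressor()
        self.threshold = safety_threshold
        self.beta = beta  # confidence-bound width

    def fit(self, decisions, safety_values):
        # decisions: (n, d) evaluated decisions; safety_values: (n,)
        self.gp.fit(decisions, safety_values)
        return self

    def is_safe(self, decision):
        mean, std = self.gp.predict(np.atleast_2d(decision),
                                    return_std=True)
        # Certify only if even the pessimistic estimate of the
        # constraint stays above the threshold.
        return bool(mean[0] - self.beta * std[0] >= self.threshold)
```

The IML algorithm itself stays unchanged: its suggestions simply pass through a filter like this one, and only decisions whose safety the GP can vouch for are executed.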
Mutual Information and the Edge of Chaos in Reservoir Computers
Title | Mutual Information and the Edge of Chaos in Reservoir Computers |
Authors | Thomas L. Carroll |
Abstract | A reservoir computer is a dynamical system that may be used to perform computations. A reservoir computer usually consists of a set of nonlinear nodes coupled together in a network so that there are feedback paths. Training the reservoir computer consists of inputting a signal of interest and fitting the time series signals of the reservoir computer nodes to a training signal that is related to the input signal. It is believed that dynamical systems function most efficiently as computers at the “edge of chaos”, the point at which the largest Lyapunov exponent of the dynamical system transitions from negative to positive. In this work I simulate several different reservoir computers and ask if the best performance really does come at this edge of chaos. I find that while it is possible to get optimum performance at the edge of chaos, there may also be parameter values where the edge of chaos regime produces poor performance. This ambiguous parameter dependence has implications for building reservoir computers from analog physical systems, where the parameter range is restricted. |
Tasks | Time Series |
Published | 2019-06-06 |
URL | https://arxiv.org/abs/1906.03186v2 |
PDF | https://arxiv.org/pdf/1906.03186v2.pdf |
PWC | https://paperswithcode.com/paper/mutual-information-and-the-edge-of-chaos-in |
Repo | |
Framework | |
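A minimal echo-state-network reservoir of the general kind simulated in such studies is sketched below. The spectral radius `rho` is the usual knob that moves the reservoir towards or past the edge of chaos; this is a generic reservoir, not the author’s specific node dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 200, 0.95                # reservoir size; spectral-radius knob

W = rng.normal(size=(n, n))
W *= rho / np.abs(np.linalg.eigvals(W)).max()   # rescale spectral radius
W_in = rng.normal(size=n)

def run_reservoir(u):
    """Drive the reservoir with input signal u, collect node states."""
    x, states = np.zeros(n), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x)
    return np.array(states)

# Training fits only a linear readout from the collected states to the
# target signal, e.g. by ridge regression; the reservoir stays fixed.
```

Sweeping `rho` from below 1 to above it is the standard way to scan across the edge of chaos, and the paper’s point is that the best-performing value does not always sit at that transition.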
COBRA: Context-aware Bernoulli Neural Networks for Reputation Assessment
Title | COBRA: Context-aware Bernoulli Neural Networks for Reputation Assessment |
Authors | Leonit Zeynalvand, Tie Luo, Jie Zhang |
Abstract | Trust and reputation management (TRM) plays an increasingly important role in large-scale online environments such as multi-agent systems (MAS) and the Internet of Things (IoT). One main objective of TRM is to achieve accurate trust assessment of entities such as agents or IoT service providers. However, this encounters an accuracy-privacy dilemma as we identify in this paper, and we propose a framework called Context-aware Bernoulli Neural Network based Reputation Assessment (COBRA) to address this challenge. COBRA encapsulates agent interactions or transactions, which are prone to privacy leakage, in machine learning models, and aggregates multiple such models using a Bernoulli neural network to predict a trust score for an agent. COBRA preserves agent privacy and retains interaction contexts via the machine learning models, and achieves more accurate trust prediction than a fully-connected neural network alternative. COBRA is also robust to security attacks by agents who inject fake machine learning models; notably, it is resistant to the 51-percent attack. The performance of COBRA is validated by our experiments using a real dataset, and by our simulations, where we also show that COBRA outperforms other state-of-the-art TRM systems. |
Tasks | |
Published | 2019-12-18 |
URL | https://arxiv.org/abs/1912.08446v2 |
PDF | https://arxiv.org/pdf/1912.08446v2.pdf |
PWC | https://paperswithcode.com/paper/cobra-context-aware-bernoulli-neural-networks |
Repo | |
Framework | |
DataSist: A Python-based library for easy data analysis, visualization and modeling
Title | DataSist: A Python-based library for easy data analysis, visualization and modeling |
Authors | Rising Odegua, Festus Ikpotokin |
Abstract | A large amount of data is produced every second by modern information systems such as mobile devices, the world wide web, the Internet of Things, social media, etc. Analysis and mining of this massive data requires many advanced tools and techniques. Therefore, big data analytics and mining is currently an active and trending area of research because of the enormous benefits businesses and organizations derive from it. Numerous tools like Pandas, Numpy, STATA, and SPSS have been created to help analyze and mine this huge outburst of data, and some have become popular and widely used in the field. This paper presents a new Python-based library, DataSist, which offers high-level, intuitive, and easy-to-use functions and methods that help data scientists/analysts to quickly analyze, mine, and visualize big data sets. The objectives of this project were to (i) design a Python library to aid the data analysis process by abstracting low-level syntax, and (ii) increase the productivity of data scientists by making them focus on what to do rather than how to do it. This project shows that data analysis can be automated and made much faster when we abstract certain functions, and it will serve as an important tool in the workflow of data scientists. |
Tasks | |
Published | 2019-11-09 |
URL | https://arxiv.org/abs/1911.03655v2 |
PDF | https://arxiv.org/pdf/1911.03655v2.pdf |
PWC | https://paperswithcode.com/paper/datasist-a-python-based-library-for-easy-data |
Repo | |
Framework | |
NeuralSampler: Euclidean Point Cloud Auto-Encoder and Sampler
Title | NeuralSampler: Euclidean Point Cloud Auto-Encoder and Sampler |
Authors | Edoardo Remelli, Pierre Baque, Pascal Fua |
Abstract | Most algorithms that rely on deep learning-based approaches to generate 3D point sets can only produce clouds containing a fixed number of points. Furthermore, they typically require large networks parameterized by many weights, which makes them hard to train. In this paper, we propose an auto-encoder architecture that can both encode and decode clouds of arbitrary size and demonstrate its effectiveness at upsampling sparse point clouds. Interestingly, we can do so using less than half as many parameters as state-of-the-art architectures while still delivering better performance. We will make our code base fully available. |
Tasks | |
Published | 2019-01-27 |
URL | http://arxiv.org/abs/1901.09394v1 |
PDF | http://arxiv.org/pdf/1901.09394v1.pdf |
PWC | https://paperswithcode.com/paper/neuralsampler-euclidean-point-cloud-auto |
Repo | |
Framework | |
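The standard trick for encoding clouds of arbitrary size is a symmetric pooling over per-point features, as popularized by PointNet. A minimal PyTorch sketch of that idea follows; it illustrates size-agnostic encoding in general, not the paper’s architecture:

```python
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Encode a point cloud of any size into a fixed-length code."""

    def __init__(self, latent_dim=128):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, points):            # points: (N, 3), any N
        feats = self.point_mlp(points)    # per-point features (N, latent_dim)
        # Max-pooling is permutation- and size-invariant, so the same
        # encoder handles clouds of arbitrary cardinality.
        return feats.max(dim=0).values    # (latent_dim,)

code = SetEncoder()(torch.randn(1024, 3))  # works for any N
```

Because the pooled code has fixed length regardless of N, a decoder or sampler on top of it can be asked to produce however many output points are desired.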