Paper Group ANR 1700
Autonomous Driving using Safe Reinforcement Learning by Incorporating a Regret-based Human Lane-Changing Decision Model
Title | Autonomous Driving using Safe Reinforcement Learning by Incorporating a Regret-based Human Lane-Changing Decision Model |
Authors | Dong Chen, Longsheng Jiang, Yue Wang, Zhaojian Li |
Abstract | It is expected that many human drivers will still prefer to drive themselves even when self-driving technologies are ready. Human-driven vehicles and autonomous vehicles (AVs) will therefore coexist in mixed traffic for a long time. To enable AVs to maneuver safely and efficiently in this mixed traffic, it is critical that they understand how humans cope with risks and make driving-related decisions. On the other hand, the driving environment is highly dynamic and ever-changing, so it is difficult to enumerate all scenarios and hard-code the controllers. To address these challenges, in this work we incorporate a human decision-making model into reinforcement learning to control AVs for safe and efficient operation. Specifically, we adapt regret theory to describe a human driver’s lane-changing behavior, and fit personalized models to individual drivers to predict their lane-changing decisions. The predicted decisions are incorporated into the safety constraints for reinforcement learning, both during training and in deployment. We then use an extended version of the double deep Q-network (DDQN) to train our AV controller within the safety set. By doing so, the number of collisions during training is reduced to zero, while training accuracy is not compromised. |
Tasks | Autonomous Driving, Autonomous Vehicles, Decision Making |
Published | 2019-10-10 |
URL | https://arxiv.org/abs/1910.04803v1 |
https://arxiv.org/pdf/1910.04803v1.pdf | |
PWC | https://paperswithcode.com/paper/autonomous-driving-using-safe-reinforcement |
Repo | |
Framework | |
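The safety-constrained action selection described in the abstract can be illustrated in a few lines. This is a minimal sketch only, assuming the safe action set has already been derived from the predicted human lane-changing decisions; the function and variable names are ours, not the paper's:

```python
import random

def masked_greedy_action(q_values, safe_actions, epsilon=0.1):
    """Pick an epsilon-greedy action restricted to the safe set.

    q_values: dict mapping action -> estimated Q-value.
    safe_actions: actions permitted by the safety constraints
    (e.g. derived from a predicted human lane-changing decision).
    """
    if not safe_actions:
        raise ValueError("safety set is empty; fall back to a recovery policy")
    if random.random() < epsilon:
        # Exploration is also confined to the safe subset.
        return random.choice(list(safe_actions))
    # Greedy choice restricted to the safe subset of actions.
    return max(safe_actions, key=lambda a: q_values[a])
```

Restricting both the exploratory and the greedy branch to the safe set is what allows training to proceed without collisions while the DDQN update itself is left unchanged.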
Model Adaptation via Model Interpolation and Boosting for Web Search Ranking
Title | Model Adaptation via Model Interpolation and Boosting for Web Search Ranking |
Authors | Jianfeng Gao, Qiang Wu, Chris Burges, Krysta Svore, Yi Su, Nazan Khan, Shalin Shah, Hongyan Zhou |
Abstract | This paper explores two classes of model adaptation methods for Web search ranking: Model Interpolation and error-driven learning approaches based on a boosting algorithm. The results show that model interpolation, though simple, achieves the best results on all the open test sets where the test data is very different from the training data. The tree-based boosting algorithm achieves the best performance on most of the closed test sets where the test data and the training data are similar, but its performance drops significantly on the open test sets due to the instability of trees. Several methods are explored to improve the robustness of the algorithm, with limited success. |
Tasks | |
Published | 2019-07-22 |
URL | https://arxiv.org/abs/1907.09471v1 |
https://arxiv.org/pdf/1907.09471v1.pdf | |
PWC | https://paperswithcode.com/paper/model-adaptation-via-model-interpolation-and |
Repo | |
Framework | |
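The model interpolation baseline is simple enough to state directly. A hedged sketch (the weight `lam` and the ranking helper are our own naming; in the paper such an interpolation weight would be tuned on held-out adaptation data):

```python
def interpolated_score(base_score, adapted_score, lam):
    """Linear interpolation of a generic base ranker and a
    domain-adapted ranker; lam in [0, 1] trades off the two."""
    return lam * base_score + (1.0 - lam) * adapted_score

def rank(docs, base, adapted, lam=0.5):
    """Sort documents by the interpolated relevance score."""
    return sorted(
        docs,
        key=lambda d: interpolated_score(base[d], adapted[d], lam),
        reverse=True,
    )
```

The robustness the abstract reports on open test sets is plausible from this form: the interpolated score degrades gracefully toward the base ranker as `lam` grows, whereas a boosted tree ensemble can change its output discontinuously on out-of-distribution inputs.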
Mean-field Behaviour of Neural Tangent Kernel for Deep Neural Networks
Title | Mean-field Behaviour of Neural Tangent Kernel for Deep Neural Networks |
Authors | Soufiane Hayou, Arnaud Doucet, Judith Rousseau |
Abstract | Recent influential work by Jacot et al. (2018) has shown that training a neural network of any kind with gradient descent in parameter space is strongly related to kernel gradient descent in function space with respect to the Neural Tangent Kernel (NTK). Lee et al. (2019) built on this result by establishing that the output of a neural network trained using gradient descent can be approximated by a linear model for wide networks. In parallel, a recent line of studies (Schoenholz et al. (2017), Hayou et al. (2019)) has suggested that a special initialization known as the Edge of Chaos improves training. In this paper, we bridge the gap between these two concepts by quantifying the impact of the initialization and the activation function on the NTK when the network depth becomes large. We provide experiments illustrating our theoretical results. |
Tasks | |
Published | 2019-05-31 |
URL | https://arxiv.org/abs/1905.13654v5 |
https://arxiv.org/pdf/1905.13654v5.pdf | |
PWC | https://paperswithcode.com/paper/training-dynamics-of-deep-networks-using |
Repo | |
Framework | |
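For reference, the NTK at the center of this analysis is the Gram matrix of network gradients, following the definition of Jacot et al. (2018):

```latex
\Theta(x, x') \;=\; \Big\langle \frac{\partial f(x;\theta)}{\partial \theta},\; \frac{\partial f(x';\theta)}{\partial \theta} \Big\rangle \;=\; \sum_{p} \partial_{\theta_p} f(x;\theta)\,\partial_{\theta_p} f(x';\theta)
```

The paper's contribution is to quantify how this kernel behaves as the depth grows, depending on the initialization (e.g. the Edge of Chaos) and the activation function.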
Robust Cochlear Modiolar Axis Detection in CT
Title | Robust Cochlear Modiolar Axis Detection in CT |
Authors | Wilhelm Wimmer, Clair Vandersteen, Nicolas Guevara, Marco Caversaccio, Hervé Delingette |
Abstract | The cochlea, the auditory part of the inner ear, is a spiral-shaped organ with large morphological variability. An individualized assessment of its shape is essential for clinical applications related to tonotopy and cochlear implantation. To unambiguously reference morphological parameters, reliable recognition of the cochlear modiolar axis in computed tomography (CT) images is required. The conventional method introduces measurement uncertainties, as it is based on manually selected and difficult-to-identify landmarks. Herein, we present an algorithm for robust modiolar axis detection in clinical CT images. We define the modiolar axis as the rotation component of the kinematic spiral motion inherent in the cochlear shape. For surface fitting, we use a compact shape representation in a 7-dimensional kinematic parameter space based on extended Plücker coordinates. This is the first time such a kinematic representation has been used for shape analysis in medical images. Robust surface fitting is achieved with an adapted approximate maximum likelihood method assuming a Student-t distribution, enabling axis detection even in partially available surface data. We verify the algorithm performance on a synthetic data set with cochlear surface subsets. In addition, we perform an experimental study with four experts on 23 human cochlea CT data sets to compare the automated detection with the manually found axes. Axes found from co-registered high-resolution micro-CT scans are used as reference. Our experiments show that the algorithm reduces the alignment error, providing more reliable modiolar axis detection for clinical and research applications. |
Tasks | Computed Tomography (CT) |
Published | 2019-07-03 |
URL | https://arxiv.org/abs/1907.01870v1 |
https://arxiv.org/pdf/1907.01870v1.pdf | |
PWC | https://paperswithcode.com/paper/robust-cochlear-modiolar-axis-detection-in-ct |
Repo | |
Framework | |
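The Student-t reweighting idea behind the robust fit can be illustrated on a toy location-estimation problem. This is a simplified EM-style sketch under our own assumptions (a plain point cloud rather than the paper's 7-dimensional kinematic parameter space):

```python
import numpy as np

def student_t_mean(x, nu=3.0, iters=50):
    """Robust location estimate assuming Student-t noise: points far
    from the current estimate receive small weights, so outliers
    (e.g. spurious surface patches) barely influence the fit."""
    n, d = x.shape
    mu = np.median(x, axis=0)                 # robust starting point
    sigma2 = float(np.var(x)) + 1e-12
    for _ in range(iters):
        r2 = np.sum((x - mu) ** 2, axis=1) / sigma2
        w = (nu + d) / (nu + r2)              # Student-t weights (EM E-step)
        mu = (w[:, None] * x).sum(axis=0) / w.sum()
        sigma2 = float((w * np.sum((x - mu) ** 2, axis=1)).sum()) / (n * d) + 1e-12
    return mu
```

The heavy tails of the Student-t distribution are what make the fit tolerate partially available or corrupted surface data: a Gaussian model would weight all residuals equally.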
On modelling the emergence of logical thinking
Title | On modelling the emergence of logical thinking |
Authors | Cristian Ivan, Bipin Indurkhya |
Abstract | Recent progress in machine learning techniques has revived interest in building artificial general intelligence using these particular tools. They have achieved tremendous success on narrow intellectual tasks such as pattern recognition, natural language processing and playing Go; in the latter, machines have outperformed the strongest human players in recent years. However, these tasks are formalized by people in such a way that it has become “easy” for automated recipes to find better solutions than humans do. In the sense of John Searle’s Chinese Room Argument, the computer playing Go does not actually understand anything of the game. Thinking like a human mind requires going beyond the curve-fitting paradigm of current systems. There is a fundamental limit to what they can currently achieve, as only very specific problem formalizations can increase their performance on particular tasks. In this paper, we argue that one of the most important aspects of the human mind is its capacity for logical thinking, which gives rise to many intellectual expressions that differentiate us from animal brains. We propose to model the emergence of logical thinking based on Piaget’s theory of cognitive development. |
Tasks | |
Published | 2019-05-23 |
URL | https://arxiv.org/abs/1905.09730v1 |
https://arxiv.org/pdf/1905.09730v1.pdf | |
PWC | https://paperswithcode.com/paper/on-modelling-the-emergence-of-logical |
Repo | |
Framework | |
Optimal Transport Relaxations with Application to Wasserstein GANs
Title | Optimal Transport Relaxations with Application to Wasserstein GANs |
Authors | Saied Mahdian, Jose Blanchet, Peter Glynn |
Abstract | We propose a family of relaxations of the optimal transport problem which regularize the problem by introducing an additional minimization step over a small region around one of the underlying transporting measures. The type of regularization that we obtain is related to smoothing techniques studied in the optimization literature. When using our approach to estimate optimal transport costs based on empirical measures, we obtain statistical learning bounds which are useful to guide the amount of regularization, while maintaining good generalization properties. To illustrate the computational advantages of our regularization approach, we apply our method to training Wasserstein GANs. We obtain running time improvements, relative to current benchmarks, with no deterioration in testing performance (via FID). The running time improvement occurs because our new optimality-based threshold criterion reduces the number of expensive iterates of the generating networks, while increasing the number of actor-critic iterations. |
Tasks | |
Published | 2019-06-07 |
URL | https://arxiv.org/abs/1906.03317v1 |
https://arxiv.org/pdf/1906.03317v1.pdf | |
PWC | https://paperswithcode.com/paper/optimal-transport-relaxations-with |
Repo | |
Framework | |
Best Practices for Scientific Research on Neural Architecture Search
Title | Best Practices for Scientific Research on Neural Architecture Search |
Authors | Marius Lindauer, Frank Hutter |
Abstract | Finding a well-performing architecture is often tedious for both DL practitioners and researchers, leading to tremendous interest in the automation of this task by means of neural architecture search (NAS). Although the community has made major strides in developing better NAS methods, the quality of scientific empirical evaluations in the young field of NAS still lags behind that of other areas of machine learning. To address this issue, we describe a set of possible issues and ways to avoid them, leading to the NAS best practices checklist available at http://automl.org/nas_checklist.pdf. |
Tasks | Neural Architecture Search |
Published | 2019-09-05 |
URL | https://arxiv.org/abs/1909.02453v2 |
https://arxiv.org/pdf/1909.02453v2.pdf | |
PWC | https://paperswithcode.com/paper/best-practices-for-scientific-research-on |
Repo | |
Framework | |
Self-labelling via simultaneous clustering and representation learning
Title | Self-labelling via simultaneous clustering and representation learning |
Authors | Yuki Markus Asano, Christian Rupprecht, Andrea Vedaldi |
Abstract | Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks. However, doing so naively leads to ill-posed learning problems with degenerate solutions. In this paper, we propose a novel and principled learning formulation that addresses these issues. The method is obtained by maximizing the information between labels and input data indices. We show that this criterion extends standard cross-entropy minimization to an optimal transport problem, which we solve efficiently for millions of input images and thousands of labels using a fast variant of the Sinkhorn-Knopp algorithm. The resulting method is able to self-label visual data so as to train highly competitive image representations without manual labels. Our method achieves state-of-the-art representation learning performance for AlexNet and ResNet-50 on SVHN, CIFAR-10, CIFAR-100 and ImageNet, and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline. Code and models are available. |
Tasks | Representation Learning |
Published | 2019-11-13 |
URL | https://arxiv.org/abs/1911.05371v3 |
https://arxiv.org/pdf/1911.05371v3.pdf | |
PWC | https://paperswithcode.com/paper/self-labelling-via-simultaneous-clustering-1 |
Repo | |
Framework | |
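The balanced pseudo-label assignment at the heart of the method can be sketched as alternating row/column normalization in the style of Sinkhorn-Knopp. This is a simplified NumPy sketch under our own assumptions (uniform marginals, naming of variables); the paper's released implementation may differ in details:

```python
import numpy as np

def sinkhorn_labels(log_p, n_iters=100):
    """Balanced soft pseudo-labels via Sinkhorn-Knopp iteration.

    log_p: (N, K) array of model log-probabilities over K clusters.
    Returns Q of shape (N, K): each row sums to 1, and each cluster
    is used about N/K times, which rules out the degenerate solution
    of assigning every image to a single label.
    """
    Q = np.exp(log_p - log_p.max()).T       # (K, N); shift for stability
    Q /= Q.sum()
    K, N = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=1, keepdims=True)   # equalize cluster marginals
        Q /= K
        Q /= Q.sum(axis=0, keepdims=True)   # renormalize per sample
        Q /= N
    return (Q * N).T
```

Taking `argmax` over the rows of `Q` yields hard self-labels for the next round of representation learning.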
Object Parsing in Sequences Using CoordConv Gated Recurrent Networks
Title | Object Parsing in Sequences Using CoordConv Gated Recurrent Networks |
Authors | Ayush Gaud, Y V S Harish, K Madhava Krishna |
Abstract | We present a monocular object parsing framework for consistent keypoint localization that captures temporal correlation in sequential data. In this paper, we propose a novel recurrent-network-based architecture to model long-range dependencies between intermediate features, which are highly useful in tasks like keypoint localization and tracking. We leverage the expressiveness of the popular stacked hourglass architecture and augment it by adding memory units between intermediate layers of the network, with weights shared across stages for video frames. We observe that this weight-sharing scheme not only enables us to frame the hourglass architecture as a recurrent network, but also proves to be highly effective in producing increasingly refined estimates for sequential tasks. Furthermore, we propose a new memory cell, which we call CoordConvGRU, that learns to selectively preserve spatio-temporal correlation, and we showcase our results on the keypoint localization task. The experiments show that our approach is able to model the motion dynamics between frames and significantly outperforms the baseline hourglass network. Even though our network is trained on a synthetically rendered dataset, we observe that with minimal fine-tuning on 300 real images we are able to achieve performance on par with various state-of-the-art methods trained with the same level of supervisory inputs. Using a simpler architecture than other methods also enables us to run it in real time on a standard GPU, which is desirable for such applications. Finally, we make our architectures and 524 annotated sequences of cars from the KITTI dataset publicly available. |
Tasks | |
Published | 2019-10-02 |
URL | https://arxiv.org/abs/1910.00895v1 |
https://arxiv.org/pdf/1910.00895v1.pdf | |
PWC | https://paperswithcode.com/paper/object-parsing-in-sequences-using-coordconv |
Repo | |
Framework | |
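The CoordConv ingredient of the proposed CoordConvGRU cell amounts to appending normalized coordinate channels to a feature map before convolution, so the recurrent cell can reason about absolute position. A minimal sketch, assuming a `(C, H, W)` channel-first layout (this illustrates the generic CoordConv idea, not the paper's exact cell):

```python
import numpy as np

def add_coord_channels(feat):
    """Append normalized y/x coordinate channels to a feature map.

    feat: array of shape (C, H, W). Returns shape (C + 2, H, W),
    with the extra channels ranging linearly from -1 to 1.
    """
    c, h, w = feat.shape
    ys = np.linspace(-1.0, 1.0, h)[:, None].repeat(w, axis=1)  # row index
    xs = np.linspace(-1.0, 1.0, w)[None, :].repeat(h, axis=0)  # column index
    return np.concatenate([feat, ys[None], xs[None]], axis=0)
```

Any convolution applied after this augmentation can condition on location, which plain convolutions, being translation-equivariant, cannot do.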
Sparse data interpolation using the geodesic distance affinity space
Title | Sparse data interpolation using the geodesic distance affinity space |
Authors | Mikhail G. Mozerov, Fei Yang, Joost van de Weijer |
Abstract | In this paper, we adapt the geodesic distance-based recursive filter to the sparse data interpolation problem. The proposed technique is general and can be easily applied to any kind of sparse data. We demonstrate the superiority over other interpolation techniques in three experiments for qualitative and quantitative evaluation. In addition, we compare our method with the popular interpolation algorithm presented in the EpicFlow optical flow paper that is intuitively motivated by a similar geodesic distance principle. The comparison shows that our algorithm is more accurate and considerably faster than the EpicFlow interpolation technique. |
Tasks | Optical Flow Estimation |
Published | 2019-05-06 |
URL | https://arxiv.org/abs/1905.02229v1 |
https://arxiv.org/pdf/1905.02229v1.pdf | |
PWC | https://paperswithcode.com/paper/sparse-data-interpolation-using-the-geodesic |
Repo | |
Framework | |
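The geodesic-distance recursive filter can be illustrated in 1D: sparse samples are propagated by a forward and a backward pass whose transmission decays with jumps in a guide signal, so values do not leak across edges. A toy sketch under our own simplifications (normalized convolution with a numerator/denominator pair; the paper operates on images, not 1D signals):

```python
import math

def geodesic_interp_1d(guide, samples, sigma=1.0):
    """Densify sparse samples along a 1D guide signal.

    guide: list of intensities; samples: dict index -> sample value.
    Transmission between neighbors drops exponentially with the
    intensity jump, a 1D proxy for geodesic distance.
    """
    n = len(guide)
    num = [samples.get(i, 0.0) for i in range(n)]
    den = [1.0 if i in samples else 0.0 for i in range(n)]
    for rng in (range(1, n), range(n - 2, -1, -1)):  # forward, then backward
        for i in rng:
            j = i - rng.step                          # previously visited index
            a = math.exp(-abs(guide[i] - guide[j]) / sigma)
            num[i] += a * num[j]
            den[i] += a * den[j]
    return [num[i] / den[i] if den[i] > 0 else None for i in range(n)]
```

Because both passes are simple recursions, the cost is linear in the signal length, which is the source of the speed advantage the abstract claims over EpicFlow-style interpolation.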
Polynomial Rewritings from Expressive Description Logics with Closed Predicates to Variants of Datalog
Title | Polynomial Rewritings from Expressive Description Logics with Closed Predicates to Variants of Datalog |
Authors | Shqiponja Ahmetaj, Magdalena Ortiz, Mantas Simkus |
Abstract | In many scenarios, complete and incomplete information coexist. For this reason, the knowledge representation and database communities have long shown interest in simultaneously supporting the closed- and the open-world views when reasoning about logic theories. Here we consider the setting of querying possibly incomplete data using logic theories, formalized as the evaluation of an ontology-mediated query (OMQ) that pairs a query with a theory, sometimes called an ontology, expressing background knowledge. This can be further enriched by specifying a set of closed predicates from the theory that are to be interpreted under the closed-world assumption, while the rest are interpreted with the open-world view. In this way we can retrieve more precise answers to queries by leveraging the partial completeness of the data. The central goal of this paper is to understand the relative expressiveness of OMQ languages in which the ontology is written in the expressive Description Logic (DL) ALCHOI and includes a set of closed predicates. We consider a restricted class of conjunctive queries. Our main result is to show that every query in this non-monotonic query language can be translated in polynomial time into Datalog with negation under the stable model semantics. To overcome the challenge that Datalog has no direct means to express the existential quantification present in ALCHOI, we define a two-player game that characterizes the satisfaction of the ontology, and design a Datalog query that can decide the existence of a winning strategy for the game. If there are no closed predicates, that is, in the case of querying a plain ALCHOI knowledge base, our translation yields a positive disjunctive Datalog program of polynomial size. To the best of our knowledge, unlike previous translations for related fragments with expressive (non-Horn) DLs, these are the first polynomial time translations. |
Tasks | |
Published | 2019-12-16 |
URL | https://arxiv.org/abs/1912.07475v1 |
https://arxiv.org/pdf/1912.07475v1.pdf | |
PWC | https://paperswithcode.com/paper/polynomial-rewritings-from-expressive |
Repo | |
Framework | |
ALGAMES: A Fast Solver for Constrained Dynamic Games
Title | ALGAMES: A Fast Solver for Constrained Dynamic Games |
Authors | Simon Le Cleac’h, Mac Schwager, Zachary Manchester |
Abstract | Dynamic games are an effective paradigm for dealing with the control of multiple interacting actors. This paper introduces ALGAMES (Augmented Lagrangian GAME-theoretic Solver), a solver that handles trajectory optimization problems with multiple actors and general nonlinear state and input constraints. Its novelty resides in satisfying the first-order optimality conditions with a quasi-Newton root-finding algorithm and rigorously enforcing constraints using an augmented Lagrangian formulation. We evaluate our solver in the context of autonomous driving on scenarios with a high level of interaction between the vehicles. We assess the robustness of the solver using Monte Carlo simulations. It is able to reliably solve complex problems like ramp merging with three vehicles, three times faster than a state-of-the-art DDP-based approach. A model predictive control (MPC) implementation of the algorithm demonstrates real-time performance on complex autonomous driving scenarios with an update frequency higher than 60 Hz. |
Tasks | Autonomous Driving |
Published | 2019-10-22 |
URL | https://arxiv.org/abs/1910.09713v2 |
https://arxiv.org/pdf/1910.09713v2.pdf | |
PWC | https://paperswithcode.com/paper/algames-a-fast-solver-for-constrained-dynamic |
Repo | |
Framework | |
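The augmented Lagrangian mechanism for enforcing constraints can be shown on a toy problem. Note this sketch substitutes plain gradient descent for the paper's quasi-Newton root-finding, handles a single equality constraint, and uses our own naming throughout:

```python
def augmented_lagrangian(f_grad, c, c_grad, x, rho=10.0,
                         outer=20, inner=200, lr=0.01):
    """Toy augmented Lagrangian loop for min f(x) s.t. c(x) = 0.

    f_grad, c_grad: gradients of the objective and the constraint;
    c: scalar constraint value. The inner loop approximately minimizes
    L(x, lam) = f(x) + lam*c(x) + (rho/2)*c(x)^2; the outer loop
    performs the dual (multiplier) update.
    """
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):                   # inner: primal descent
            scale = lam + rho * c(x)
            x = [xi - lr * (fg + scale * cg)
                 for xi, fg, cg in zip(x, f_grad(x), c_grad(x))]
        lam += rho * c(x)                        # outer: dual update
    return x
```

The multiplier update drives the constraint violation to zero without sending `rho` to infinity, which is what makes augmented Lagrangian methods numerically better behaved than pure penalty methods.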
An Improved Historical Embedding without Alignment
Title | An Improved Historical Embedding without Alignment |
Authors | Xiaofei Xu, Ke Deng, Fei Hu, Li Li |
Abstract | Many words have evolved in meaning as a result of cultural and social change. Understanding such changes is crucial for modelling language and cultural evolution. Low-dimensional embedding methods have shown promise in detecting changes in word meaning by encoding words into dense vectors. However, when exploring the semantic change of words over time, these methods require the alignment of word embeddings across different time periods. This process is computationally expensive, prohibitively time-consuming, and suffers from contextual variability. In this paper, we propose a new and scalable method for encoding words from different time periods into one dense vector space. This can greatly improve performance when it comes to identifying words that have changed in meaning over time. We evaluated our method on a dataset from the Google Books N-gram corpus. Our method outperformed three other popular methods in terms of the number of words correctly identified as having changed in meaning. Additionally, we provide an intuitive visualization of the semantic evolution of some words extracted by our method. |
Tasks | Word Embeddings |
Published | 2019-10-19 |
URL | https://arxiv.org/abs/1910.08692v1 |
https://arxiv.org/pdf/1910.08692v1.pdf | |
PWC | https://paperswithcode.com/paper/an-improved-historical-embedding-without |
Repo | |
Framework | |
Estimator Vectors: OOV Word Embeddings based on Subword and Context Clue Estimates
Title | Estimator Vectors: OOV Word Embeddings based on Subword and Context Clue Estimates |
Authors | Raj Patel, Carlotta Domeniconi |
Abstract | Semantic representations of words have been successfully extracted from unlabeled corpora using neural network models like word2vec. These representations are generally of high quality and are computationally inexpensive to train, making them popular. However, these approaches generally fail to approximate out-of-vocabulary (OOV) words, a task humans can do quite easily using word roots and context clues. This paper proposes a neural network model that learns high-quality word representations, subword representations, and context clue representations jointly. Learning all three types of representations together enhances the learning of each, leading to enriched word vectors, along with strong estimates for OOV words via the combination of the corresponding context clue and subword embeddings. Our model, called Estimator Vectors (EV), learns strong word embeddings and is competitive with state-of-the-art methods for OOV estimation. |
Tasks | Word Embeddings |
Published | 2019-10-18 |
URL | https://arxiv.org/abs/1910.10491v1 |
https://arxiv.org/pdf/1910.10491v1.pdf | |
PWC | https://paperswithcode.com/paper/estimator-vectors-oov-word-embeddings-based |
Repo | |
Framework | |
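The subword half of the OOV estimate can be sketched with fastText-style character n-grams. Note this toy omits the context-clue embeddings that EV combines with subwords, and all names here are our own:

```python
def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams of a word, with boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def oov_vector(word, subword_vecs, dim=2):
    """Estimate an embedding for an out-of-vocabulary word by
    averaging the embeddings of its known character n-grams.

    subword_vecs: dict n-gram -> list of floats (trained elsewhere).
    Returns the zero vector if no n-gram of the word is known.
    """
    grams = [g for g in char_ngrams(word) if g in subword_vecs]
    if not grams:
        return [0.0] * dim
    summed = [sum(v) for v in zip(*(subword_vecs[g] for g in grams))]
    return [s / len(grams) for s in summed]
```

In the full model described by the abstract, such a subword estimate would be combined with a context-clue estimate to produce the final OOV vector.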
A Survey of Deep Learning Techniques for Autonomous Driving
Title | A Survey of Deep Learning Techniques for Autonomous Driving |
Authors | Sorin Grigorescu, Bogdan Trasnea, Tiberiu Cocias, Gigel Macesanu |
Abstract | The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state of the art in deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving-scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, and End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and to assist with design choices. |
Tasks | Autonomous Driving |
Published | 2019-10-17 |
URL | https://arxiv.org/abs/1910.07738v2 |
https://arxiv.org/pdf/1910.07738v2.pdf | |
PWC | https://paperswithcode.com/paper/a-survey-of-deep-learning-techniques-for-1 |
Repo | |
Framework | |