Paper Group ANR 359
The Neighbours’ Similar Fitness Property for Local Search. FormulaZero: Distributionally Robust Online Adaptation via Offline Population Synthesis. Computing rank-revealing factorizations of matrices stored out-of-core. 3DSSD: Point-based 3D Single Stage Object Detector. Multi-object Monocular SLAM for Dynamic Environments. PSF–NET: A Non-parametr …
The Neighbours’ Similar Fitness Property for Local Search
Title | The Neighbours’ Similar Fitness Property for Local Search |
Authors | Mark Wallace, Aldeida Aleti |
Abstract | For most practical optimisation problems local search outperforms random sampling - despite the “No Free Lunch Theorem”. This paper introduces a property of search landscapes termed Neighbours’ Similar Fitness (NSF) that underlies the good performance of neighbourhood search in terms of local improvement. Though necessary, NSF is not sufficient to ensure that searching for improvement among the neighbours of a good solution is better than random search. The paper introduces an additional (natural) property which supports a general proof that, for NSF landscapes, neighbourhood search beats random search. |
Tasks | |
Published | 2020-01-09 |
URL | https://arxiv.org/abs/2001.02872v1 |
https://arxiv.org/pdf/2001.02872v1.pdf | |
PWC | https://paperswithcode.com/paper/the-neighbours-similar-fitness-property-for |
Repo | |
Framework | |
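For the local-search entry above, the following is a minimal sketch (not from the paper) of its central claim: on a landscape where neighbours have similar fitness, searching for an improvement among the neighbours of an already-good solution beats drawing uniformly random candidates. The landscape, neighbourhood, and incumbent below are illustrative assumptions.

```python
# Toy illustration only: a smooth 1-D landscape in which neighbours have similar
# fitness (an NSF-like property), an already-good incumbent, and the empirical
# probability that one random neighbour vs. one uniform random sample improves it.
import random

def fitness(x):
    return -abs(x - 700_000)          # smooth landscape on 0..999_999

def neighbours(x):
    return [n for n in (x - 1, x + 1) if 0 <= n <= 999_999]

random.seed(0)
good = 700_500                        # incumbent solution, already close to the optimum
trials = 100_000
nbr_hits = sum(fitness(random.choice(neighbours(good))) > fitness(good)
               for _ in range(trials))
rand_hits = sum(fitness(random.randrange(1_000_000)) > fitness(good)
                for _ in range(trials))
print(nbr_hits / trials, rand_hits / trials)   # ~0.5 vs ~0.001: neighbours win
```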
FormulaZero: Distributionally Robust Online Adaptation via Offline Population Synthesis
Title | FormulaZero: Distributionally Robust Online Adaptation via Offline Population Synthesis |
Authors | Aman Sinha, Matthew O’Kelly, Hongrui Zheng, Rahul Mangharam, John Duchi, Russ Tedrake |
Abstract | Balancing performance and safety is crucial to deploying autonomous vehicles in multi-agent environments. In particular, autonomous racing is a domain that penalizes safe but conservative policies, highlighting the need for robust, adaptive strategies. Current approaches either make simplifying assumptions about other agents or lack robust mechanisms for online adaptation. This work makes algorithmic contributions to both challenges. First, to generate a realistic, diverse set of opponents, we develop a novel method for self-play based on replica-exchange Markov chain Monte Carlo. Second, we propose a distributionally robust bandit optimization procedure that adaptively adjusts risk aversion relative to uncertainty in beliefs about opponents’ behaviors. We rigorously quantify the tradeoffs in performance and robustness when approximating these computations in real-time motion-planning, and we demonstrate our methods experimentally on autonomous vehicles that achieve scaled speeds comparable to Formula One racecars. |
Tasks | Autonomous Vehicles, Motion Planning |
Published | 2020-03-09 |
URL | https://arxiv.org/abs/2003.03900v1 |
https://arxiv.org/pdf/2003.03900v1.pdf | |
PWC | https://paperswithcode.com/paper/formulazero-distributionally-robust-online |
Repo | |
Framework | |
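The offline population synthesis above is built on replica-exchange Markov chain Monte Carlo. The sketch below is a generic parallel-tempering loop on a toy bimodal density, with a made-up target, temperature ladder, and proposal scale; it is not FormulaZero's sampler or opponent model, only an illustration of the swap mechanism.

```python
# Generic replica-exchange (parallel tempering) MCMC on a toy bimodal density.
import numpy as np

rng = np.random.default_rng(0)
log_p = lambda x: np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)  # two modes
temps = np.array([1.0, 2.0, 4.0, 8.0])        # one chain per temperature
x = rng.normal(size=len(temps))

for _ in range(5000):
    # Within-chain Metropolis move for every temperature.
    prop = x + rng.normal(scale=1.0, size=len(temps))
    accept = np.log(rng.random(len(temps))) < (log_p(prop) - log_p(x)) / temps
    x = np.where(accept, prop, x)
    # Exchange step: propose swapping the states of a random adjacent pair of chains.
    i = rng.integers(len(temps) - 1)
    log_ratio = (log_p(x[i + 1]) - log_p(x[i])) * (1 / temps[i] - 1 / temps[i + 1])
    if np.log(rng.random()) < log_ratio:
        x[i], x[i + 1] = x[i + 1], x[i]

print("cold-chain state:", x[0])               # samples from the untempered target
```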
Computing rank-revealing factorizations of matrices stored out-of-core
Title | Computing rank-revealing factorizations of matrices stored out-of-core |
Authors | Nathan Heavner, Per-Gunnar Martinsson, Gregorio Quintana-Ortí |
Abstract | This paper describes efficient algorithms for computing rank-revealing factorizations of matrices that are too large to fit in RAM, and must instead be stored on slow external memory devices such as solid-state or spinning disk hard drives (out-of-core or out-of-memory). Traditional algorithms for computing rank-revealing factorizations, such as the column pivoted QR factorization, or techniques for computing a full singular value decomposition of a matrix, are very communication intensive. They are naturally expressed as a sequence of matrix-vector operations, which become prohibitively expensive when data is not available in main memory. Randomization allows these methods to be reformulated so that large contiguous blocks of the matrix can be processed in bulk. The paper describes two distinct methods. The first is a blocked version of column pivoted Householder QR, organized as a “left-looking” method to minimize the number of write operations (which are more expensive than read operations on a spinning disk drive). The second method results in a so-called UTV factorization, which expresses a matrix $A$ as $A = U T V^*$ where $U$ and $V$ are unitary, and $T$ is triangular. This method is organized as an algorithm-by-blocks, in which floating point operations overlap read and write operations. The second method incorporates power iterations, and is exceptionally good at revealing the numerical rank; it can often be used as a substitute for a full singular value decomposition. Numerical experiments demonstrate that the new algorithms are almost as fast when processing data stored on a hard drive as traditional algorithms are for data stored in main memory. To be precise, the computational time for fully factorizing an $n\times n$ matrix scales as $cn^{3}$, with a scaling constant $c$ that is only marginally larger when the matrix is stored out of core. |
Tasks | |
Published | 2020-02-17 |
URL | https://arxiv.org/abs/2002.06960v2 |
https://arxiv.org/pdf/2002.06960v2.pdf | |
PWC | https://paperswithcode.com/paper/computing-rank-revealing-factorizations-of |
Repo | |
Framework | |
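As a rough, in-memory illustration of the ingredients of the second method above (randomized sketching, power iteration, and the UTV form), here is a small NumPy sketch; the actual algorithm additionally organises the work into blocks so that $A$ never has to reside in RAM, which this toy version does not attempt.

```python
# Simplified randomized UTV-style factorization: A ~= U @ T @ V.T with orthonormal
# U, V and upper-triangular T; power iteration sharpens how well T reveals the rank.
import numpy as np

def rand_utv(A, q=1, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Y = A.T @ rng.standard_normal((m, n))      # sketch of the row space of A
    for _ in range(q):
        Y = A.T @ (A @ Y)                      # power iteration
    V, _ = np.linalg.qr(Y)                     # n x n orthonormal basis
    U, T = np.linalg.qr(A @ V)                 # A @ V = U @ T, with T upper triangular
    return U, T, V

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 80)) @ rng.standard_normal((80, 120))   # rank <= 80
U, T, V = rand_utv(A)
print(np.allclose(U @ T @ V.T, A))             # exact factorization
print(np.abs(np.diag(T))[78:82].round(3))      # diagonal collapses at the numerical rank
```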
3DSSD: Point-based 3D Single Stage Object Detector
Title | 3DSSD: Point-based 3D Single Stage Object Detector |
Authors | Zetong Yang, Yanan Sun, Shu Liu, Jiaya Jia |
Abstract | Currently, there are many kinds of voxel-based 3D single-stage detectors, while point-based single-stage methods are still underexplored. In this paper, we first present a lightweight and effective point-based 3D single-stage object detector, named 3DSSD, which achieves a good balance between accuracy and efficiency. In this paradigm, all upsampling layers and the refinement stage, which are indispensable in existing point-based methods, are abandoned to reduce the large computation cost. We propose a novel fusion sampling strategy in the downsampling process to make detection on less representative points feasible. A delicate box prediction network, including a candidate generation layer and an anchor-free regression head with a 3D center-ness assignment strategy, is designed to meet our demands for accuracy and speed. Our paradigm is an elegant single-stage anchor-free framework, showing great superiority over other existing methods. We evaluate 3DSSD on the widely used KITTI dataset and the more challenging nuScenes dataset. Our method outperforms all state-of-the-art voxel-based single-stage methods by a large margin, and performs comparably to two-stage point-based methods as well, with an inference speed of more than 25 FPS, 2x faster than former state-of-the-art point-based methods. |
Tasks | |
Published | 2020-02-24 |
URL | https://arxiv.org/abs/2002.10187v1 |
https://arxiv.org/pdf/2002.10187v1.pdf | |
PWC | https://paperswithcode.com/paper/3dssd-point-based-3d-single-stage-object |
Repo | |
Framework | |
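The fusion sampling strategy mentioned above keeps some points chosen by farthest-point sampling in Euclidean space (D-FPS) and some chosen by FPS in feature space (F-FPS). The sketch below illustrates that split with a plain FPS routine; the shapes, the random feature tensor, and the 50/50 split are assumptions, and the paper's F-FPS also mixes spatial distance into the feature metric.

```python
# Illustrative fusion sampling: half the kept points come from FPS on coordinates,
# half from FPS on per-point features.
import numpy as np

def fps(points, k, rng):
    """Greedy farthest-point sampling: return k indices spread out under L2 distance."""
    chosen = [int(rng.integers(len(points)))]
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        chosen.append(int(dist.argmax()))
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
    return np.array(chosen)

rng = np.random.default_rng(0)
xyz = rng.standard_normal((1024, 3))        # raw point coordinates
feats = rng.standard_normal((1024, 64))     # per-point features from earlier layers
k = 256
idx = np.unique(np.concatenate([fps(xyz, k // 2, rng), fps(feats, k // 2, rng)]))
print(len(idx), "points kept by the fused sample")
```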
Multi-object Monocular SLAM for Dynamic Environments
Title | Multi-object Monocular SLAM for Dynamic Environments |
Authors | Gokul B. Nair, Swapnil Daga, Rahul Sajnani, Anirudha Ramesh, Junaid Ahmed Ansari, K. Madhava Krishna |
Abstract | Multibody monocular SLAM in dynamic environments remains a long-standing challenge in terms of perception and state estimation. Although theoretical solutions exist, practice lags behind, predominantly due to the lack of robust perceptual and predictive models of dynamic participants. The quintessential challenge in multibody monocular SLAM in dynamic scenes stems from the problem of unobservability, as it is not possible to triangulate a moving object from a moving monocular camera. Under restrictions on object motion the problem can be solved; however, even then one must solve for the single-family solution to the relative scale problem. The relative scale problem exists because the dynamic objects reconstructed with the monocular camera have a different scale vis-à-vis the scale space in which the stationary scene is reconstructed. We solve this rather intractable problem by reconstructing dynamic vehicles/participants from single views at metric scale through an object SLAM pipeline. Further, we lift the ego-vehicle trajectory obtained from monocular ORB-SLAM into metric scale as well, making use of ground-plane features, thereby resolving the relative scale problem. We present a multi pose-graph optimization formulation to estimate the pose and track dynamic objects in the environment. This optimization helps us reduce the average trajectory error for multiple bodies in KITTI Tracking sequences. To the best of our knowledge, our method is the first practical monocular multibody SLAM system to perform dynamic multi-object and ego localization in a unified framework at metric scale. |
Tasks | |
Published | 2020-02-10 |
URL | https://arxiv.org/abs/2002.03528v1 |
https://arxiv.org/pdf/2002.03528v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-object-monocular-slam-for-dynamic |
Repo | |
Framework | |
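The multi pose-graph optimization mentioned above can be pictured, in a drastically reduced form, as nonlinear least squares over SE(2) poses with relative-pose (odometry and loop-closure) measurements. The toy problem below, with three poses and one slightly inconsistent loop closure, is my own illustration and not the paper's formulation.

```python
# Minimal 2-D pose-graph optimisation with SciPy: poses are (x, y, theta) and every
# edge stores a measured relative pose between two of them.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    return (a + np.pi) % (2 * np.pi) - np.pi            # wrap angle to (-pi, pi]

def relative_pose(pa, pb):
    """Pose pb expressed in the frame of pose pa."""
    dx, dy = pb[0] - pa[0], pb[1] - pa[1]
    c, s = np.cos(pa[2]), np.sin(pa[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(pb[2] - pa[2])])

def residuals(flat, edges):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]                                     # anchor the first pose at the origin
    for i, j, meas in edges:
        diff = relative_pose(poses[i], poses[j]) - meas
        diff[2] = wrap(diff[2])
        res.append(diff)
    return np.concatenate(res)

edges = [(0, 1, np.array([1.0, 0.0, 0.0])),              # odometry
         (1, 2, np.array([1.0, 0.0, 0.0])),              # odometry
         (0, 2, np.array([2.1, 0.1, 0.0]))]              # slightly inconsistent loop closure
sol = least_squares(residuals, np.zeros(9), args=(edges,))
print(sol.x.reshape(-1, 3).round(3))                     # optimised poses spread the error
```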
PSF–NET: A Non-parametric Point Spread Function Model for Ground Based Optical Telescopes
Title | PSF–NET: A Non-parametric Point Spread Function Model for Ground Based Optical Telescopes |
Authors | Peng Jia, Xuebo Wu, Yi Huang, Bojun Cai, Dongmei Cai |
Abstract | Ground-based optical telescopes are seriously affected by aberrations induced by atmospheric turbulence. Understanding the properties of these aberrations is important both for instrument design and for the development of image restoration methods. Because the point spread function reflects the performance of the whole optical system, it is appropriate to use the point spread function to describe atmospheric-turbulence-induced aberrations. Assuming that point spread functions induced by atmospheric turbulence with the same profile belong to the same manifold space, we propose a non-parametric point spread function model, the PSF-NET. The PSF-NET has a cycle convolutional neural network structure and is a statistical representation of the manifold space of PSFs induced by atmospheric turbulence with the same profile. Testing the PSF-NET with simulated and real observation data, we find that a well-trained PSF-NET can restore any short-exposure image blurred by atmospheric turbulence with the same profile. Furthermore, we use the impulse response of the PSF-NET, which can be viewed as the statistical mean PSF, to analyze the interpretability of the PSF-NET. We find that variations of statistical mean PSFs are caused by variations of the atmospheric turbulence profile: as the difference between atmospheric turbulence profiles increases, the difference between statistical mean PSFs also increases. The PSF-NET proposed in this paper provides a new way to analyze atmospheric-turbulence-induced aberrations, which would benefit the development of new observation methods for ground-based optical telescopes. |
Tasks | Image Restoration |
Published | 2020-03-02 |
URL | https://arxiv.org/abs/2003.00615v1 |
https://arxiv.org/pdf/2003.00615v1.pdf | |
PWC | https://paperswithcode.com/paper/psf-net-a-non-parametric-point-spread |
Repo | |
Framework | |
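A heavily simplified PyTorch sketch of the cycle structure described above: one small network maps sharp frames to turbulence-blurred ones, a second maps them back, and a cycle/reconstruction term ties the two together. The layer sizes, losses, and random stand-in tensors are assumptions, not the PSF-NET architecture or its training data.

```python
# Illustrative cycle-structured training loop (paired sharp/blurred stand-in data).
import torch
import torch.nn as nn
import torch.nn.functional as F

def small_cnn():
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

blur_net, restore_net = small_cnn(), small_cnn()        # "apply PSF" and "undo PSF"
opt = torch.optim.Adam(list(blur_net.parameters()) + list(restore_net.parameters()),
                       lr=1e-3)

sharp = torch.rand(8, 1, 32, 32)      # stand-in for simulated sharp frames
blurred = torch.rand(8, 1, 32, 32)    # stand-in for the turbulence-degraded frames

for step in range(50):
    loss = (F.mse_loss(blur_net(sharp), blurred)             # forward direction
            + F.mse_loss(restore_net(blurred), sharp)        # restoration direction
            + F.mse_loss(restore_net(blur_net(sharp)), sharp))  # cycle consistency
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```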
H-OWAN: Multi-distorted Image Restoration with Tensor 1x1 Convolution
Title | H-OWAN: Multi-distorted Image Restoration with Tensor 1x1 Convolution |
Authors | Zihao Huang, Chao Li, Feng Duan, Qibin Zhao |
Abstract | It is a challenging task to restore images degraded by combined distortions. In existing work, a promising strategy is to apply parallel “operations” to handle different types of distortion. However, in the feature fusion phase, a small number of operations can dominate the restoration result because of the heterogeneity of the features produced by different operations. To this end, we introduce the tensor 1x1 convolutional layer by imposing a high-order tensor (outer) product, by which we not only harmonize the heterogeneous features but also take additional non-linearity into account. To avoid the unacceptable kernel size resulting from the tensor product, we construct the kernels with tensor network decomposition, which converts the exponential growth of the dimension into linear growth. Armed with the new layer, we propose High-order OWAN (H-OWAN) for multi-distorted image restoration. In numerical experiments, the proposed network outperforms the previous state of the art and shows promising performance even on more difficult tasks. |
Tasks | Image Restoration |
Published | 2020-01-29 |
URL | https://arxiv.org/abs/2001.10853v1 |
https://arxiv.org/pdf/2001.10853v1.pdf | |
PWC | https://paperswithcode.com/paper/h-owan-multi-distorted-image-restoration-with |
Repo | |
Framework | |
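One way to read the tensor 1x1 convolution described above is as an outer-product (bilinear) interaction between parallel feature maps whose enormous kernel is replaced by a low-rank factorisation. The sketch below uses a CP-style factorisation for that role, which is an assumption for illustration (the paper builds the kernel from tensor network decompositions); all names and sizes are made up.

```python
# Rank-R factorised outer-product fusion of two feature branches via 1x1 convolutions.
import torch
import torch.nn as nn

class FactorisedTensorFusion(nn.Module):
    def __init__(self, c_in, c_out, rank=16):
        super().__init__()
        self.p1 = nn.Conv2d(c_in, rank, 1, bias=False)   # factor for branch 1
        self.p2 = nn.Conv2d(c_in, rank, 1, bias=False)   # factor for branch 2
        self.out = nn.Conv2d(rank, c_out, 1)             # recombination factor

    def forward(self, f1, f2):
        # Elementwise product of the two projections is the rank-R CP form of the full
        # c_in x c_in x c_out bilinear kernel, so cost grows linearly in the rank.
        return self.out(self.p1(f1) * self.p2(f2))

f1 = torch.rand(2, 64, 32, 32)        # features from one restoration "operation"
f2 = torch.rand(2, 64, 32, 32)        # features from another
print(FactorisedTensorFusion(64, 64)(f1, f2).shape)      # torch.Size([2, 64, 32, 32])
```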
Routing-Led Placement of VNFs in Arbitrary Networks
Title | Routing-Led Placement of VNFs in Arbitrary Networks |
Authors | Joseph Billingsley, Ke Li, Wang Miao, Geyong Min, Nektarios Georgalas |
Abstract | The ever increasing demand for computing resources has led to the creation of hyperscale datacentres with tens of thousands of servers. As demand continues to rise, new technologies must be incorporated to ensure high quality services can be provided without the damaging environmental impact of high energy consumption. Virtualisation technology such as network function virtualisation (NFV) allows for the creation of services by connecting component parts known as virtual network functions (VNFs). Available datacentre resources can be utilised maximally by optimising the placement of VNFs and the routes between them, maintaining a high quality of service whilst minimising energy costs. Current research on this problem has focussed on placing VNFs, treating routing as a secondary concern. In this work we argue that the opposite, routing-led approach is preferable. We propose a novel routing-led algorithm and analyse each of its component parts over a range of different topologies on problems with up to 16,000 variables, and compare its performance against a traditional placement-based algorithm. Empirical results show that our routing-led algorithm can produce significantly better, faster solutions to large problem instances on a range of datacentre topologies. |
Tasks | |
Published | 2020-01-30 |
URL | https://arxiv.org/abs/2001.11565v1 |
https://arxiv.org/pdf/2001.11565v1.pdf | |
PWC | https://paperswithcode.com/paper/routing-led-placement-of-vnfs-in-arbitrary |
Repo | |
Framework | |
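To make the routing-led idea concrete, here is a toy heuristic (my own illustration, not the paper's algorithm): decide the route for a service first, then greedily place its chain of VNFs on servers along that route. The topology, capacities, and service chain are invented.

```python
# Toy routing-led placement: shortest route first, then greedy placement along it.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([("in", "s1", 1), ("s1", "s2", 1), ("s2", "out", 1),
                           ("s1", "s3", 2), ("s3", "out", 2)])
capacity = {"s1": 2, "s2": 1, "s3": 2}              # free VNF slots per server

def place_service(chain, src="in", dst="out"):
    route = nx.shortest_path(g, src, dst, weight="weight")   # routing decided first
    placement = []
    for vnf in chain:
        host = next((n for n in route if capacity.get(n, 0) > 0), None)
        if host is None:
            return None                              # this route cannot host the chain
        capacity[host] -= 1
        placement.append((vnf, host))
    return route, placement

print(place_service(["firewall", "nat", "ids"]))
```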
Stochastic Subspace Cubic Newton Method
Title | Stochastic Subspace Cubic Newton Method |
Authors | Filip Hanzely, Nikita Doikov, Peter Richtárik, Yurii Nesterov |
Abstract | In this paper, we propose a new randomized second-order optimization algorithm—Stochastic Subspace Cubic Newton (SSCN)—for minimizing a high dimensional convex function $f$. Our method can be seen both as a {\em stochastic} extension of the cubically-regularized Newton method of Nesterov and Polyak (2006), and a {\em second-order} enhancement of stochastic subspace descent of Kozak et al. (2019). We prove that as we vary the minibatch size, the global convergence rate of SSCN interpolates between the rate of stochastic coordinate descent (CD) and the rate of cubic regularized Newton, thus giving new insights into the connection between first and second-order methods. Remarkably, the local convergence rate of SSCN matches the rate of stochastic subspace descent applied to the problem of minimizing the quadratic function $\frac12 (x-x^*)^\top \nabla^2f(x^*)(x-x^*)$, where $x^*$ is the minimizer of $f$, and hence depends on the properties of $f$ at the optimum only. Our numerical experiments show that SSCN outperforms non-accelerated first-order CD algorithms while being competitive to their accelerated variants. |
Tasks | |
Published | 2020-02-21 |
URL | https://arxiv.org/abs/2002.09526v1 |
https://arxiv.org/pdf/2002.09526v1.pdf | |
PWC | https://paperswithcode.com/paper/stochastic-subspace-cubic-newton-method |
Repo | |
Framework | |
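A naive sketch of one SSCN iteration follows: restrict the gradient and Hessian to a random subspace, minimise the cubically regularised model there, and step. The toy objective, the Gaussian sketch, and the use of scipy.optimize.minimize for the subproblem are my simplifications; the paper solves the subspace subproblem with dedicated machinery.

```python
# One Stochastic Subspace Cubic Newton step with a generic subproblem solver.
import numpy as np
from scipy.optimize import minimize

def sscn_step(x, grad, hess, M=1.0, s=5, rng=np.random.default_rng(0)):
    n = len(x)
    S = rng.standard_normal((n, s)) / np.sqrt(s)          # random subspace basis
    g_s, H_s = S.T @ grad(x), S.T @ hess(x) @ S           # restricted gradient / Hessian
    model = lambda h: (g_s @ h + 0.5 * h @ H_s @ h
                       + (M / 6.0) * np.linalg.norm(S @ h) ** 3)
    h = minimize(model, np.zeros(s)).x                    # cubic model minimiser in the subspace
    return x + S @ h

# Toy convex objective f(x) = 0.25 * ||x||^4 with closed-form gradient and Hessian.
grad = lambda x: np.linalg.norm(x) ** 2 * x
hess = lambda x: np.linalg.norm(x) ** 2 * np.eye(len(x)) + 2.0 * np.outer(x, x)

x = np.ones(50)
for _ in range(200):
    x = sscn_step(x, grad, hess)
print(np.linalg.norm(x))          # shrinks toward the minimiser at the origin
```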
COVID-19 and Computer Audition: An Overview on What Speech & Sound Analysis Could Contribute in the SARS-CoV-2 Corona Crisis
Title | COVID-19 and Computer Audition: An Overview on What Speech & Sound Analysis Could Contribute in the SARS-CoV-2 Corona Crisis |
Authors | Björn W. Schuller, Dagmar M. Schuller, Kun Qian, Juan Liu, Huaiyuan Zheng, Xiao Li |
Abstract | At the time of writing, the world population is suffering from more than 10,000 registered deaths caused by the COVID-19 disease since the outbreak of the coronavirus, now officially known as SARS-CoV-2, more than three months ago. Since then, tremendous efforts have been made worldwide to counter and control the epidemic, by now labelled a pandemic. In this contribution, we provide an overview of the potential for computer audition (CA), i.e., the use of speech and sound analysis by artificial intelligence, to help in this scenario. We first survey which types of related or contextually significant phenomena can be automatically assessed from speech or sound. These include the automatic recognition and monitoring of breathing, dry and wet coughing or sneezing sounds, speech under cold, eating behaviour, sleepiness, or pain, to name but a few. Then, we consider potential use-cases for exploitation. These include risk assessment and diagnosis based on symptom histograms and their development over time, as well as monitoring of spread, social distancing and its effects, treatment and recovery, and patient wellbeing. We then briefly outline the challenges that need to be faced for real-life usage. We conclude that CA appears ready for the implementation of (pre-)diagnosis and monitoring tools, and more generally provides rich and significant, yet so far untapped, potential in the fight against the spread of COVID-19. |
Tasks | |
Published | 2020-03-24 |
URL | https://arxiv.org/abs/2003.11117v1 |
https://arxiv.org/pdf/2003.11117v1.pdf | |
PWC | https://paperswithcode.com/paper/covid-19-and-computer-audition-an-overview-on |
Repo | |
Framework | |
Optimal rates for independence testing via $U$-statistic permutation tests
Title | Optimal rates for independence testing via $U$-statistic permutation tests |
Authors | Thomas B. Berrett, Ioannis Kontoyiannis, Richard J. Samworth |
Abstract | We study the problem of independence testing given independent and identically distributed pairs taking values in a $\sigma$-finite, separable measure space. Defining a natural measure of dependence $D(f)$ as the squared $L_2$-distance between a joint density $f$ and the product of its marginals, we first show that there is no valid test of independence that is uniformly consistent against alternatives of the form $\{f: D(f) \geq \rho^2\}$. We therefore restrict attention to alternatives that impose additional Sobolev-type smoothness constraints, and define a permutation test based on a basis expansion and a $U$-statistic estimator of $D(f)$ that we prove is minimax optimal in terms of its separation rates in many instances. Finally, for the case of a Fourier basis on $[0,1]^2$, we provide an approximation to the power function that offers several additional insights. |
Tasks | |
Published | 2020-01-15 |
URL | https://arxiv.org/abs/2001.05513v1 |
https://arxiv.org/pdf/2001.05513v1.pdf | |
PWC | https://paperswithcode.com/paper/optimal-rates-for-independence-testing-via-u |
Repo | |
Framework | |
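The flavour of the test can be sketched as follows: estimate D(f) in a small cosine basis on [0,1]^2 and calibrate the statistic by permuting one sample. Note that this uses a simple plug-in estimate of D(f) rather than the paper's U-statistic, and the basis size and toy data are illustrative.

```python
# Permutation independence test with a plug-in basis-expansion estimate of D(f).
import numpy as np

def phi(j, u):
    """Cosine basis on [0,1]: phi_0 = 1, phi_j(u) = sqrt(2) cos(pi j u)."""
    return np.ones_like(u) if j == 0 else np.sqrt(2) * np.cos(np.pi * j * u)

def d_hat(x, y, J=5):
    # Plug-in estimate of sum_{j,k} (E[phi_j(X) phi_k(Y)] - E[phi_j(X)] E[phi_k(Y)])^2;
    # the (0,0) term is automatically zero.
    px = np.array([phi(j, x).mean() for j in range(J)])
    py = np.array([phi(k, y).mean() for k in range(J)])
    pxy = np.array([[(phi(j, x) * phi(k, y)).mean() for k in range(J)] for j in range(J)])
    return ((pxy - np.outer(px, py)) ** 2).sum()

def perm_test(x, y, n_perm=500, rng=np.random.default_rng(0)):
    obs = d_hat(x, y)
    perm = np.array([d_hat(x, rng.permutation(y)) for _ in range(n_perm)])
    return (1 + (perm >= obs).sum()) / (n_perm + 1)       # valid permutation p-value

rng = np.random.default_rng(1)
x = rng.random(300)
y = (x + 0.1 * rng.random(300)) % 1.0                     # dependent pair on [0,1]^2
print(perm_test(x, y))                                    # small p-value: dependence detected
```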
Transformers as Soft Reasoners over Language
Title | Transformers as Soft Reasoners over Language |
Authors | Peter Clark, Oyvind Tafjord, Kyle Richardson |
Abstract | AI has long pursued the goal of having systems reason over explicitly provided knowledge, but building suitable representations has proved challenging. Here we explore whether transformers can similarly learn to reason (or emulate reasoning), but using rules expressed in language, thus bypassing a formal representation. We provide the first demonstration that this is possible, and characterize the extent of this capability. To do this, we use a collection of synthetic datasets that test increasing levels of reasoning complexity (number of rules, presence of negation, and depth of chaining). We find transformers appear to learn rule-based reasoning with high (99%) accuracy on these datasets, and in a way that generalizes to test data requiring substantially deeper chaining than in the training data (95%+ scores). We also demonstrate that the models transfer well to two hand-authored rulebases, and to rulebases paraphrased into more natural language. These findings are significant as they suggest a new role for transformers, namely as a limited “soft theorem prover” operating over explicit theories in language. This in turn suggests new possibilities for explainability, correctability, and counterfactual reasoning in question answering. All datasets and a live demo are available at http://rule-reasoning.apps.allenai.org/ |
Tasks | Question Answering |
Published | 2020-02-14 |
URL | https://arxiv.org/abs/2002.05867v1 |
https://arxiv.org/pdf/2002.05867v1.pdf | |
PWC | https://paperswithcode.com/paper/transformers-as-soft-reasoners-over-language |
Repo | |
Framework | |
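The setup can be pictured as true/false sequence classification over a rulebase-plus-question pair. The facts and rules below are made up in the style the abstract describes, and the choice of roberta-base with an untrained classification head is a placeholder for a model fine-tuned on the released datasets (see the URL above).

```python
# Casting a small rulebase and question as input to a binary sequence classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

context = ("Harry is young. Harry is not kind. "
           "If someone is young and not kind then they are rude.")   # hypothetical theory
question = "Harry is rude."                                          # hypothetical query

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

inputs = tok(context, question, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits        # after fine-tuning: scores for [false, true]
print(logits.softmax(-1))
```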
Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems
Title | Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems |
Authors | Dimitrios Boursinos, Xenofon Koutsoukos |
Abstract | Cyber-physical systems (CPS) can benefit from the use of learning-enabled components (LECs) such as deep neural networks (DNNs) for perception and decision-making tasks. However, DNNs are typically non-transparent, making reasoning about their predictions very difficult, and hence their application to safety-critical systems is very challenging. LECs could be integrated into CPS more easily if their predictions could be complemented with a confidence measure that quantifies how much we trust their output. This paper presents an approach for computing confidence bounds based on Inductive Conformal Prediction (ICP). We train a triplet network architecture to learn representations of the input data that can be used to estimate the similarity between test examples and examples in the training data set. These representations are then used, via ICP, to estimate the confidence of a set predictor built on the DNN architecture used in the triplet network. The approach is evaluated using a robotic navigation benchmark, and the results show that we can compute trusted confidence bounds efficiently in real time. |
Tasks | |
Published | 2020-03-11 |
URL | https://arxiv.org/abs/2003.05107v1 |
https://arxiv.org/pdf/2003.05107v1.pdf | |
PWC | https://paperswithcode.com/paper/trusted-confidence-bounds-for-learning |
Repo | |
Framework | |
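A compact sketch of the ICP step described above: the nonconformity of an example under a candidate label is its distance to the nearest calibration embedding of that label, and the prediction set keeps every label whose conformal p-value exceeds the significance level. The random vectors stand in for the triplet-network embeddings, and the nearest-neighbour score is an assumed nonconformity function rather than the paper's exact choice.

```python
# Inductive conformal prediction with a nearest-neighbour nonconformity score.
import numpy as np

rng = np.random.default_rng(0)
centers = 4.0 * rng.standard_normal((3, 16))                     # one centre per class
cal_lab = rng.integers(0, 3, 200)
cal_emb = centers[cal_lab] + rng.standard_normal((200, 16))      # calibration embeddings

def score(z, label):
    """Nonconformity: distance to the nearest calibration embedding with that label."""
    return np.linalg.norm(cal_emb[cal_lab == label] - z, axis=1).min()

# Calibration scores, skipping each point's zero distance to itself.
cal_scores = np.array([np.sort(np.linalg.norm(cal_emb[cal_lab == l] - z, axis=1))[1]
                       for z, l in zip(cal_emb, cal_lab)])

def prediction_set(z, eps=0.1):
    keep = []
    for label in range(3):
        p = ((cal_scores >= score(z, label)).sum() + 1) / (len(cal_scores) + 1)
        if p > eps:                     # keep labels the calibration data do not reject
            keep.append(label)
    return keep

test = centers[1] + rng.standard_normal(16)
print(prediction_set(test))            # typically [1] at significance level eps = 0.1
```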
Representations, Metrics and Statistics For Shape Analysis of Elastic Graphs
Title | Representations, Metrics and Statistics For Shape Analysis of Elastic Graphs |
Authors | Xiaoyang Guo, Anuj Srivastava |
Abstract | Past approaches for statistical shape analysis of objects have focused mainly on objects within the same topological classes, e.g., scalar functions, Euclidean curves, or surfaces. For objects that differ in more complex ways, the current literature offers only topological methods. This paper introduces a far-reaching geometric approach for analyzing shapes of graphical objects, such as road networks, blood vessels, and brain fiber tracts. It represents such objects, exhibiting differences in both geometry and topology, as graphs made of curves with arbitrary shapes (edges) connected at arbitrary junctions (nodes). To perform statistical analyses, one needs mathematical representations, metrics, and other geometrical tools, such as geodesics, means, and covariances. This paper utilizes a quotient structure to develop efficient algorithms for computing these quantities, leading to useful statistical tools, including principal component analysis and analytical statistical testing and modeling of graphical shapes. The efficacy of this framework is demonstrated using various simulated data as well as real data from neurons and brain arterial networks. |
Tasks | |
Published | 2020-02-29 |
URL | https://arxiv.org/abs/2003.00287v1 |
https://arxiv.org/pdf/2003.00287v1.pdf | |
PWC | https://paperswithcode.com/paper/representations-metrics-and-statistics-for |
Repo | |
Framework | |
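A toy illustration of the quotient idea above (comparing graphs modulo node relabelling): match nodes with the Hungarian algorithm, then compare matched node positions and adjacencies. This is a drastic simplification of my own and not the paper's elastic metric, geodesics, or statistical tools.

```python
# Registration-then-compare distance between two small spatial graphs.
import numpy as np
from scipy.optimize import linear_sum_assignment

def registered_distance(nodes_a, adj_a, nodes_b, adj_b):
    cost = np.linalg.norm(nodes_a[:, None, :] - nodes_b[None, :, :], axis=-1)
    r, c = linear_sum_assignment(cost)                      # best node correspondence
    node_term = cost[r, c].sum()
    edge_term = np.abs(adj_a[np.ix_(r, r)] - adj_b[np.ix_(c, c)]).sum()
    return node_term + edge_term

rng = np.random.default_rng(0)
nodes = rng.random((5, 2))
adj = (rng.random((5, 5)) < 0.4).astype(float)
perm = rng.permutation(5)
# Distance to a relabelled copy of the same graph is (numerically) zero.
print(registered_distance(nodes, adj, nodes[perm], adj[np.ix_(perm, perm)]))
```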
Dataset of Segmented Nuclei in Hematoxylin and Eosin Stained Histopathology Images of 10 Cancer Types
Title | Dataset of Segmented Nuclei in Hematoxylin and Eosin Stained Histopathology Images of 10 Cancer Types |
Authors | Le Hou, Rajarsi Gupta, John S. Van Arnam, Yuwei Zhang, Kaustubh Sivalenka, Dimitris Samaras, Tahsin M. Kurc, Joel H. Saltz |
Abstract | The distribution and appearance of nuclei are essential markers for the diagnosis and study of cancer. Despite the importance of nuclear morphology, there is a lack of large-scale, accurate, publicly accessible nucleus segmentation data. To address this, we developed an analysis pipeline that segments nuclei in whole slide tissue images from multiple cancer types with a quality control process. We have generated nucleus segmentation results for 5,060 whole slide tissue images from 10 cancer types in The Cancer Genome Atlas. One key component of our work is a multi-level quality control process (WSI-level and image patch-level) to evaluate the quality of our segmentation results. The image patch-level quality control used manual segmentation ground truth data from 1,356 sampled image patches. The datasets we publish in this work consist of roughly 5 billion quality-controlled nuclei from more than 5,060 TCGA WSIs from 10 different TCGA cancer types, and 1,356 manually segmented TCGA image patches from the same 10 cancer types plus 4 additional cancer types. |
Tasks | |
Published | 2020-02-18 |
URL | https://arxiv.org/abs/2002.07913v1 |
https://arxiv.org/pdf/2002.07913v1.pdf | |
PWC | https://paperswithcode.com/paper/dataset-of-segmented-nuclei-in-hematoxylin |
Repo | |
Framework | |
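The patch-level quality control mentioned above amounts to comparing automated masks against manual ones; a minimal Dice-overlap check of that kind is sketched below with synthetic stand-in masks (the actual QC protocol and acceptance thresholds are not specified here).

```python
# Dice overlap between an automated nucleus mask and a manual ground-truth mask.
import numpy as np

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
manual = rng.random((256, 256)) > 0.7          # stand-in manual ground-truth mask
auto = manual.copy()
auto[:10] = False                              # automated mask with a small error
print(round(dice(auto, manual), 3))
```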