Paper Group ANR 757
Analysing Errors of Open Information Extraction Systems
Title | Analysing Errors of Open Information Extraction Systems |
Authors | Rudolf Schneider, Tom Oberhauser, Tobias Klatt, Felix A. Gers, Alexander Löser |
Abstract | We report results on benchmarking Open Information Extraction (OIE) systems using RelVis, a toolkit for benchmarking OIE systems. Our comprehensive benchmark contains three data sets from the news domain and one data set from Wikipedia, with overall 4522 labeled sentences and 11243 binary or n-ary OIE relations. On these data sets we compared the performance of four popular OIE systems: ClausIE, OpenIE 4.2, Stanford OpenIE and PredPatt. In addition, we evaluated the impact of five common error classes on a subset of 749 n-ary tuples. From our deep analysis we unveil important research directions for the next generation of OIE systems. |
Tasks | Open Information Extraction |
Published | 2017-07-24 |
URL | http://arxiv.org/abs/1707.07499v1 |
PDF | http://arxiv.org/pdf/1707.07499v1.pdf |
PWC | https://paperswithcode.com/paper/analysing-errors-of-open-information |
Repo | |
Framework | |
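The quality numbers above ultimately depend on how predicted tuples are matched against gold annotations. Below is a deliberately simple, hypothetical scorer (case-insensitive exact match; RelVis itself uses richer matching and explicit error classes) showing how per-sentence precision and recall over binary or n-ary tuples might be computed:

```python
from typing import List, Tuple

OIETuple = Tuple[str, ...]  # (arg1, relation, arg2, ...); n-ary tuples allowed

def tuples_match(pred: OIETuple, gold: OIETuple) -> bool:
    """Toy criterion: case-insensitive exact match of all slots.
    Real OIE benchmarks typically use token-level containment or overlap."""
    return tuple(s.lower().strip() for s in pred) == tuple(s.lower().strip() for s in gold)

def precision_recall(predicted: List[OIETuple], gold: List[OIETuple]) -> Tuple[float, float]:
    correct = sum(any(tuples_match(p, g) for g in gold) for p in predicted)
    covered = sum(any(tuples_match(p, g) for p in predicted) for g in gold)
    precision = correct / len(predicted) if predicted else 0.0
    recall = covered / len(gold) if gold else 0.0
    return precision, recall

pred = [("Barack Obama", "was born in", "Hawaii")]
gold = [("Barack Obama", "was born in", "Hawaii"),
        ("Barack Obama", "served as", "president of the United States")]
print(precision_recall(pred, gold))  # (1.0, 0.5)
```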
Hierarchical State Abstractions for Decision-Making Problems with Computational Constraints
Title | Hierarchical State Abstractions for Decision-Making Problems with Computational Constraints |
Authors | Daniel T. Larsson, Daniel Braun, Panagiotis Tsiotras |
Abstract | In this semi-tutorial paper, we first review the information-theoretic approach to account for the computational costs incurred during the search for optimal actions in a sequential decision-making problem. The traditional Markov decision process (MDP) framework ignores computational limitations while searching for optimal policies, essentially assuming that the acting agent is perfectly rational and aims for exact optimality. Using the free energy, a variational principle is introduced that accounts not only for the value of a policy, but also for the cost of finding this optimal policy. The solution of the variational equations arising from this formulation can be obtained using familiar Bellman-like value iterations from dynamic programming (DP) and the Blahut-Arimoto (BA) algorithm from rate distortion theory. Finally, we demonstrate the utility of the approach for generating hierarchies of state abstractions that can be used to best exploit the available computational resources. A numerical example showcases these concepts for a path-planning problem in a grid world environment. |
Tasks | Decision Making |
Published | 2017-10-22 |
URL | http://arxiv.org/abs/1710.07990v1 |
PDF | http://arxiv.org/pdf/1710.07990v1.pdf |
PWC | https://paperswithcode.com/paper/hierarchical-state-abstractions-for-decision |
Repo | |
Framework | |
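The free-energy formulation sketched in the abstract leads to a soft, Bellman-like value recursion whose fixed point yields a Boltzmann reweighting of a prior policy. A minimal sketch for a finite MDP, assuming hypothetical arrays `P` (transitions), `R` (rewards), `prior` (prior policy) and an inverse temperature `beta` controlling the computational constraint:

```python
import numpy as np

def free_energy_iteration(P, R, prior, beta=5.0, gamma=0.95, iters=200):
    """Soft value iteration for an information-constrained MDP.

    P: (S, A, S) transition probabilities, R: (S, A) rewards,
    prior: (S, A) prior policy, beta: inverse temperature (rationality).
    Returns the free-energy value F and the induced policy."""
    S, A = R.shape
    F = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P @ F                      # (S, A) soft state-action values
        # log-sum-exp over actions, weighted by the prior policy
        F = (1.0 / beta) * np.log(np.sum(prior * np.exp(beta * Q), axis=1))
    policy = prior * np.exp(beta * (Q - F[:, None]))
    policy /= policy.sum(axis=1, keepdims=True)    # numerical safety
    return F, policy
```

As beta grows the recursion approaches standard value iteration; small beta keeps the policy close to the prior, trading value for information (computational) cost, which is the trade-off the state-abstraction hierarchy exploits.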
Exploiting Points and Lines in Regression Forests for RGB-D Camera Relocalization
Title | Exploiting Points and Lines in Regression Forests for RGB-D Camera Relocalization |
Authors | Lili Meng, Frederick Tung, James J. Little, Julien Valentin, Clarence de Silva |
Abstract | Camera relocalization plays a vital role in many robotics and computer vision tasks, such as global localization, recovery from tracking failure and loop closure detection. Recent random forests based methods exploit randomly sampled pixel comparison features to predict 3D world locations for 2D image locations to guide the camera pose optimization. However, these image features are only sampled randomly in the images, without considering the spatial structures or geometric information, leading to large errors or outright failures in the presence of poorly textured areas or motion blur. Line segment features are more robust in these environments. In this work, we propose to jointly exploit points and lines within the framework of uncertainty-driven regression forests. The proposed approach is thoroughly evaluated on three publicly available datasets against several strong state-of-the-art baselines in terms of several different error metrics. Experimental results prove the efficacy of our method, showing performance superior to or on par with the state of the art. |
Tasks | Camera Relocalization, Loop Closure Detection |
Published | 2017-10-28 |
URL | http://arxiv.org/abs/1710.10519v3 |
PDF | http://arxiv.org/pdf/1710.10519v3.pdf |
PWC | https://paperswithcode.com/paper/exploiting-points-and-lines-in-regression |
Repo | |
Framework | |
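For context, regression-forest relocalizers of this family typically split on cheap pixel-comparison features evaluated on the RGB-D frame, and the paper's contribution is to add line-segment cues on top of such point features. A hedged sketch of a generic depth-adaptive pixel-comparison feature (a common choice in scene-coordinate regression forests, not necessarily the authors' exact definition):

```python
import numpy as np

def pixel_comparison_feature(depth, p, delta1, delta2):
    """Two-pixel depth comparison with depth-adaptive offsets: the offsets are
    scaled by 1/depth so the feature is roughly invariant to camera distance.
    `p` is (row, col); a per-node threshold on the returned value forms the split."""
    h, w = depth.shape
    d = max(float(depth[p]), 1e-3)                    # avoid division by zero
    q1 = np.clip(np.round(np.array(p) + np.array(delta1) / d), 0, [h - 1, w - 1]).astype(int)
    q2 = np.clip(np.round(np.array(p) + np.array(delta2) / d), 0, [h - 1, w - 1]).astype(int)
    return float(depth[tuple(q1)] - depth[tuple(q2)])

# Example on a synthetic, constant-depth map (feature is 0 by construction)
depth = np.full((480, 640), 2.0)
print(pixel_comparison_feature(depth, (240, 320), (50.0, 0.0), (0.0, 50.0)))
```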
Unsupervised Classification of Intrusive Igneous Rock Thin Section Images using Edge Detection and Colour Analysis
Title | Unsupervised Classification of Intrusive Igneous Rock Thin Section Images using Edge Detection and Colour Analysis |
Authors | S. Joseph, H. Ujir, I. Hipiny |
Abstract | Classification of rocks is one of the fundamental tasks in a geological study. The process requires a human expert to examine sampled thin section images under a microscope. In this study, we propose a method that uses microscope automation, digital image acquisition, edge detection and colour analysis (histogram). We collected 60 digital images from 20 standard thin sections using a digital camera mounted on a conventional microscope. Each image is partitioned into a finite number of cells that form a grid structure. The edge and colour profiles of the pixels inside each cell determine its classification. The individual cells then determine the thin section image classification via a majority voting scheme. Our method yielded precision as high as 90% to 100%. |
Tasks | Edge Detection, Image Classification |
Published | 2017-09-30 |
URL | http://arxiv.org/abs/1710.00189v1 |
PDF | http://arxiv.org/pdf/1710.00189v1.pdf |
PWC | https://paperswithcode.com/paper/unsupervised-classification-of-intrusive |
Repo | |
Framework | |
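The pipeline is concrete enough to sketch end to end: partition the image into a grid, compute an edge density and a colour statistic per cell, label each cell with a simple rule, and take a majority vote over cells. A minimal, hypothetical version (the thresholds and cell labels below are placeholders, not the authors' calibrated values):

```python
import cv2
import numpy as np
from collections import Counter

def classify_cell(cell_bgr, cell_edges, edge_thresh=0.05, dark_thresh=90):
    """Toy per-cell rule: edge density separates coarse from fine texture,
    mean intensity separates dark from light colour (placeholder labels)."""
    edge_density = cell_edges.mean() / 255.0           # Canny output is 0/255
    texture = "coarse" if edge_density > edge_thresh else "fine"
    colour = "dark" if cell_bgr.mean() < dark_thresh else "light"
    return f"{texture}-{colour}"

def classify_thin_section(image_bgr, grid=(8, 8)):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    h, w = gray.shape
    ch, cw = h // grid[0], w // grid[1]
    votes = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys, xs = slice(i * ch, (i + 1) * ch), slice(j * cw, (j + 1) * cw)
            votes.append(classify_cell(image_bgr[ys, xs], edges[ys, xs]))
    return Counter(votes).most_common(1)[0][0]          # majority vote over cells
```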
Personalized Pancreatic Tumor Growth Prediction via Group Learning
Title | Personalized Pancreatic Tumor Growth Prediction via Group Learning |
Authors | Ling Zhang, Le Lu, Ronald M. Summers, Electron Kebebew, Jianhua Yao |
Abstract | Tumor growth prediction, a highly challenging task, has long been viewed as a mathematical modeling problem, where the tumor growth pattern is personalized based on imaging and clinical data of a target patient. Though mathematical models yield promising results, their prediction accuracy may be limited by the absence of population trend data and personalized clinical characteristics. In this paper, we propose a statistical group learning approach to predict the tumor growth pattern that incorporates both the population trend and personalized data, in order to discover high-level features from multimodal imaging data. A deep convolutional neural network approach is developed to model the voxel-wise spatio-temporal tumor progression. The deep features are combined with the time intervals and the clinical factors to feed a process of feature selection. Our predictive model is pretrained on a group data set and personalized on the target patient data to estimate the future spatio-temporal progression of the patient’s tumor. Multimodal imaging data at multiple time points are used in the learning, personalization and inference stages. Our method achieves a Dice coefficient of 86.8% ± 3.6% and an RVD of 7.9% ± 5.4% on a pancreatic tumor data set, outperforming the Dice coefficient of 84.4% ± 4.0% and RVD of 13.9% ± 9.8% obtained by a previous state-of-the-art model-based method. |
Tasks | Feature Selection |
Published | 2017-06-01 |
URL | http://arxiv.org/abs/1706.00493v1 |
PDF | http://arxiv.org/pdf/1706.00493v1.pdf |
PWC | https://paperswithcode.com/paper/personalized-pancreatic-tumor-growth |
Repo | |
Framework | |
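The two reported metrics are standard and easy to reproduce from binary masks; a short sketch (the RVD sign and normalisation convention may differ from the one used in the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) over binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def relative_volume_difference(pred: np.ndarray, gt: np.ndarray) -> float:
    """One common convention: RVD = |V_pred - V_gt| / V_gt."""
    v_pred, v_gt = pred.astype(bool).sum(), gt.astype(bool).sum()
    return abs(float(v_pred) - float(v_gt)) / float(v_gt)

# Toy example on 3D masks
gt = np.zeros((10, 10, 10), dtype=bool); gt[2:8, 2:8, 2:8] = True
pred = np.zeros_like(gt); pred[3:8, 2:8, 2:8] = True
print(dice_coefficient(pred, gt), relative_volume_difference(pred, gt))
```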
Learning-based Image Enhancement for Visual Odometry in Challenging HDR Environments
Title | Learning-based Image Enhancement for Visual Odometry in Challenging HDR Environments |
Authors | Ruben Gomez-Ojeda, Zichao Zhang, Javier Gonzalez-Jimenez, Davide Scaramuzza |
Abstract | One of the main open challenges in visual odometry (VO) is the robustness to difficult illumination conditions or high dynamic range (HDR) environments. The main difficulties in these situations come from both the limitations of the sensors and the inability to perform a successful tracking of interest points because of the bold assumptions in VO, such as brightness constancy. We address this problem from a deep learning perspective, for which we first fine-tune a Deep Neural Network (DNN) with the purpose of obtaining enhanced representations of the sequences for VO. Then, we demonstrate how the insertion of Long Short Term Memory (LSTM) allows us to obtain temporally consistent sequences, as the estimation depends on previous states. However, the use of very deep networks precludes their insertion into a real-time VO framework; therefore, we also propose a Convolutional Neural Network (CNN) of reduced size capable of running faster. Finally, we validate the enhanced representations by evaluating the sequences produced by the two architectures in several state-of-the-art VO algorithms, such as ORB-SLAM and DSO. |
Tasks | Image Enhancement, Visual Odometry |
Published | 2017-07-05 |
URL | http://arxiv.org/abs/1707.01274v3 |
PDF | http://arxiv.org/pdf/1707.01274v3.pdf |
PWC | https://paperswithcode.com/paper/learning-based-image-enhancement-for-visual |
Repo | |
Framework | |
Simultaneously Color-Depth Super-Resolution with Conditional Generative Adversarial Network
Title | Simultaneously Color-Depth Super-Resolution with Conditional Generative Adversarial Network |
Authors | Lijun Zhao, Huihui Bai, Jie Liang, Bing Zeng, Anhong Wang, Yao Zhao |
Abstract | Recently, the Generative Adversarial Network (GAN) has found wide applications in style transfer, image-to-image translation and image super-resolution. In this paper, a color-depth conditional GAN is proposed to concurrently resolve the problems of depth super-resolution and color super-resolution in 3D videos. Firstly, given the low-resolution depth image and low-resolution color image, a generative network is proposed to leverage the mutual information of the color image and depth image to enhance each other, in consideration of the geometric structural dependency of color and depth in the same scene. Secondly, three loss functions, including data loss, total variation loss, and 8-connected gradient difference loss, are introduced to train this generative network, in addition to the adversarial loss, in order to keep generated images close to the real ones. Experimental results demonstrate that the proposed approach produces high-quality color and depth images from a low-quality image pair, and that it is superior to several other leading methods. Besides, we use the same neural network framework to resolve the problems of image smoothing and edge detection at the same time. |
Tasks | Edge Detection, Image Super-Resolution, Image-to-Image Translation, Style Transfer, Super-Resolution |
Published | 2017-08-30 |
URL | http://arxiv.org/abs/1708.09105v3 |
PDF | http://arxiv.org/pdf/1708.09105v3.pdf |
PWC | https://paperswithcode.com/paper/simultaneously-color-depth-super-resolution |
Repo | |
Framework | |
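Of the losses listed in the abstract, the total-variation and 8-connected gradient-difference terms are the least standard. A hedged PyTorch-style sketch of what such terms could look like (formulation and weights are guesses for illustration, not the paper's exact definitions):

```python
import torch
import torch.nn.functional as F

def total_variation_loss(x):
    """Anisotropic TV: mean absolute difference between neighbouring pixels."""
    return (x[..., :, 1:] - x[..., :, :-1]).abs().mean() + \
           (x[..., 1:, :] - x[..., :-1, :]).abs().mean()

def gradient_difference_loss_8(pred, target):
    """Penalise mismatch of image gradients along the 8 neighbour directions.
    torch.roll wraps at the borders; padding would be more faithful."""
    shifts = [(0, 1), (1, 0), (0, -1), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
    loss = 0.0
    for dy, dx in shifts:
        gp = pred - torch.roll(pred, shifts=(dy, dx), dims=(-2, -1))
        gt = target - torch.roll(target, shifts=(dy, dx), dims=(-2, -1))
        loss = loss + (gp - gt).abs().mean()
    return loss / len(shifts)

def generator_loss(pred, target, adversarial_term):
    """Data + TV + gradient-difference + adversarial terms (placeholder weights)."""
    return F.l1_loss(pred, target) + 0.1 * total_variation_loss(pred) + \
           0.1 * gradient_difference_loss_8(pred, target) + 0.01 * adversarial_term
```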
Learning task structure via sparsity grouped multitask learning
Title | Learning task structure via sparsity grouped multitask learning |
Authors | Meghana Kshirsagar, Eunho Yang, Aurélie C. Lozano |
Abstract | Sparse mapping has been a key methodology in many high-dimensional scientific problems. When multiple tasks share the set of relevant features, learning them jointly in a group drastically improves the quality of relevant feature selection. However, in practice this technique is of limited use, since such grouping information is usually hidden. In this paper, our goal is to recover the group structure on the sparsity patterns and leverage that information in sparse learning. To this end, we formulate a joint optimization problem over the task parameters and the group membership, by constructing an appropriate regularizer to encourage sparse learning as well as correct recovery of task groups. We further demonstrate through extensive experiments that our proposed method recovers the groups and the sparsity patterns in the task parameters accurately. |
Tasks | Feature Selection, Sparse Learning |
Published | 2017-05-13 |
URL | http://arxiv.org/abs/1705.04886v2 |
PDF | http://arxiv.org/pdf/1705.04886v2.pdf |
PWC | https://paperswithcode.com/paper/learning-task-structure-via-sparsity-grouped |
Repo | |
Framework | |
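The abstract does not spell out the regularizer, but a schematic of the kind of objective it describes, jointly over task parameters $W = [w_1, \dots, w_T] \in \mathbb{R}^{d \times T}$ and binary group memberships $z_{tg}$, could look like the following (a hedged sketch, not the paper's exact formulation):

```latex
\min_{W,\,Z}\;
  \sum_{t=1}^{T} \mathcal{L}\!\left(w_t;\, X_t, y_t\right)
  \;+\; \lambda \sum_{g=1}^{G} \sum_{j=1}^{d}
        \sqrt{\sum_{t=1}^{T} z_{tg}\, w_{tj}^{2}}
\qquad \text{s.t.}\quad \sum_{g=1}^{G} z_{tg} = 1,\;\; z_{tg} \in \{0,1\},
```

i.e. an $\ell_{2,1}$-type penalty applied within each group, so tasks assigned to the same group are pushed toward a shared sparsity pattern; optimization then alternates between updating $W$ for fixed assignments and reassigning tasks to groups.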
Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness
Title | Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness |
Authors | Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, David P. Woodruff |
Abstract | Understanding the singular value spectrum of a matrix $A \in \mathbb{R}^{n \times n}$ is a fundamental task in countless applications. In matrix multiplication time, it is possible to perform a full SVD and directly compute the singular values $\sigma_1, \ldots, \sigma_n$. However, little is known about algorithms that break this runtime barrier. Using tools from stochastic trace estimation, polynomial approximation, and fast system solvers, we show how to efficiently isolate different ranges of $A$'s spectrum and approximate the number of singular values in these ranges. We thus effectively compute a histogram of the spectrum, which can stand in for the true singular values in many applications. We use this primitive to give the first algorithms for approximating a wide class of symmetric matrix norms in faster than matrix multiplication time. For example, we give a $(1 + \epsilon)$ approximation algorithm for the Schatten-$1$ norm (the nuclear norm) running in just $\tilde O((\mathrm{nnz}(A)n^{1/3} + n^2)\epsilon^{-3})$ time for $A$ with uniform row sparsity or $\tilde O(n^{2.18} \epsilon^{-3})$ time for dense matrices. The runtime scales smoothly for general Schatten-$p$ norms, notably becoming $\tilde O (p \cdot \mathrm{nnz}(A) \epsilon^{-3})$ for any $p \ge 2$. At the same time, we show that the complexity of spectrum approximation is inherently tied to fast matrix multiplication in the small $\epsilon$ regime. We prove that achieving milder $\epsilon$ dependencies in our algorithms would imply faster than matrix multiplication time triangle detection for general graphs. This further implies that highly accurate algorithms running in subcubic time yield subcubic time matrix multiplication. As an application of our bounds, we show that precisely computing all effective resistances in a graph in less than matrix multiplication time is likely difficult, barring a major algorithmic breakthrough. |
Tasks | |
Published | 2017-04-13 |
URL | http://arxiv.org/abs/1704.04163v3 |
PDF | http://arxiv.org/pdf/1704.04163v3.pdf |
PWC | https://paperswithcode.com/paper/spectrum-approximation-beyond-fast-matrix |
Repo | |
Framework | |
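The core primitive behind these results is cheap estimation of spectral sums: quantities such as $\sum_i \sigma_i^{2k} = \mathrm{tr}\big((A^\top A)^k\big)$ can be estimated with a handful of matrix-vector products, and polynomial filters in $A^\top A$ then isolate ranges of the spectrum. A minimal Hutchinson-estimator sketch for such spectral moments (illustration only, not the paper's full algorithm):

```python
import numpy as np

def hutchinson_spectral_moment(A, k, num_probes=50, seed=None):
    """Estimate tr((A^T A)^k) = sum_i sigma_i^(2k) using only mat-vecs.

    Each probe draws a random +/-1 vector z and computes z^T (A^T A)^k z
    with 2k matrix-vector products, so A is never factorised."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    estimates = []
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        v = z
        for _ in range(k):
            v = A.T @ (A @ v)                 # one application of A^T A
        estimates.append(z @ v)
    return float(np.mean(estimates))

# sanity check against the exact value on a small matrix (k = 2)
A = np.random.default_rng(0).standard_normal((200, 100))
exact = (np.linalg.svd(A, compute_uv=False) ** 4).sum()
print(exact, hutchinson_spectral_moment(A, k=2, num_probes=200, seed=0))
```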
Robust Tuning Datasets for Statistical Machine Translation
Title | Robust Tuning Datasets for Statistical Machine Translation |
Authors | Preslav Nakov, Stephan Vogel |
Abstract | We explore the idea of automatically crafting a tuning dataset for Statistical Machine Translation (SMT) that makes the hyper-parameters of the SMT system more robust with respect to some specific deficiencies of the parameter tuning algorithms. This is an under-explored research direction, which can allow better parameter tuning. In this paper, we achieve this goal by selecting a subset of the available sentence pairs, which are more suitable for specific combinations of optimizers, objective functions, and evaluation measures. We demonstrate the potential of the idea with the pairwise ranking optimization (PRO) optimizer, which is known to yield too short translations. We show that the learning problem can be alleviated by tuning on a subset of the development set, selected based on sentence length. In particular, using the longest 50% of the tuning sentences, we achieve a two-fold tuning speedup and improvements in BLEU score that rival those of alternatives, which fix BLEU+1’s smoothing instead. |
Tasks | Machine Translation |
Published | 2017-10-01 |
URL | http://arxiv.org/abs/1710.00346v1 |
PDF | http://arxiv.org/pdf/1710.00346v1.pdf |
PWC | https://paperswithcode.com/paper/robust-tuning-datasets-for-statistical |
Repo | |
Framework | |
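The selection rule itself is a one-liner: keep the longest half of the tuning sentence pairs before running PRO. A hypothetical helper (the abstract does not say whether length is measured on the source or the reference side; source side is assumed here):

```python
def longest_half(sentence_pairs):
    """Keep the longest 50% of (source, reference) tuning pairs,
    ranked by source-side token count."""
    ranked = sorted(sentence_pairs,
                    key=lambda pair: len(pair[0].split()),
                    reverse=True)
    return ranked[: max(1, len(ranked) // 2)]

dev = [("a very long source sentence with many tokens in it .", "ref a"),
       ("short one .", "ref b"),
       ("another reasonably long source sentence for tuning .", "ref c"),
       ("tiny .", "ref d")]
print([src for src, _ in longest_half(dev)])
```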
Symbolic Analysis-based Reduced Order Markov Modeling of Time Series Data
Title | Symbolic Analysis-based Reduced Order Markov Modeling of Time Series Data |
Authors | Devesh K Jha, Nurali Virani, Jan Reimann, Abhishek Srivastav, Asok Ray |
Abstract | This paper presents a technique for reduced-order Markov modeling for compact representation of time-series data. In this work, symbolic dynamics-based tools have been used to infer an approximate generative Markov model. The time-series data are first symbolized by partitioning the continuous measurement space of the signal and then, the discrete sequential data are modeled using symbolic dynamics. In the proposed approach, the size of temporal memory of the symbol sequence is estimated from spectral properties of the resulting stochastic matrix corresponding to a first-order Markov model of the symbol sequence. Then, hierarchical clustering is used to represent the states of the corresponding full-state Markov model to construct a reduced-order (or reduced-size) Markov model with a non-deterministic algebraic structure. Subsequently, the parameters of the reduced-order Markov model are identified from the original model by making use of a Bayesian inference rule. The final model is selected using information-theoretic criteria. The proposed concept is elucidated and validated on two different data sets as examples. The first example analyzes a set of pressure data from a swirl-stabilized combustor, where controlled protocols are used to induce flame instabilities. Variations in the complexity of the derived Markov model represent how the system operating condition changes from a stable to an unstable combustion regime. In the second example, the data set is taken from NASA’s data repository for prognostics of bearings on rotating shafts. We show that, even with a very small state-space, the reduced-order models are able to achieve comparable performance and that the proposed approach provides flexibility in the selection of a final model for representation and learning. |
Tasks | Bayesian Inference, Time Series |
Published | 2017-09-26 |
URL | http://arxiv.org/abs/1709.09274v1 |
PDF | http://arxiv.org/pdf/1709.09274v1.pdf |
PWC | https://paperswithcode.com/paper/symbolic-analysis-based-reduced-order-markov |
Repo | |
Framework | |
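The first two steps of the pipeline, symbolising the continuous signal by partitioning its range and then estimating the stochastic matrix of a first-order Markov model over the symbols, can be sketched directly (uniform-width partitioning here; the paper may use a different partitioning scheme):

```python
import numpy as np

def symbolize(signal, num_symbols=8):
    """Partition the signal range into equal-width bins and map samples to symbols."""
    edges = np.linspace(signal.min(), signal.max(), num_symbols + 1)[1:-1]
    return np.digitize(signal, edges)                 # symbols in {0, ..., num_symbols-1}

def transition_matrix(symbols, num_symbols):
    """Maximum-likelihood estimate of the first-order stochastic matrix."""
    counts = np.zeros((num_symbols, num_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.full_like(counts, 1.0 / num_symbols),
                     where=rows > 0)

t = np.linspace(0, 20 * np.pi, 5000)
noisy = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
P = transition_matrix(symbolize(noisy), 8)
print(P.round(2))
# Spectral properties of P (e.g. how quickly its powers mix) are then used,
# per the abstract, to estimate the temporal memory of the symbol sequence.
```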
Scalable Support Vector Clustering Using Budget
Title | Scalable Support Vector Clustering Using Budget |
Authors | Tung Pham, Trung Le, Hang Dang |
Abstract | Owing to its application in solving difficult and diverse clustering and outlier detection problems, support-based clustering has recently drawn plenty of attention. Support-based clustering methods always undergo two phases: finding the domain of novelty and performing clustering assignment. To find the domain of novelty, the training time given by the current solvers is typically over-quadratic in the training size, hence precluding the use of support-based clustering for large-scale datasets. In this paper, we propose applying the Stochastic Gradient Descent (SGD) framework to the first phase of support-based clustering for finding the domain of novelty, together with a new strategy to perform the clustering assignment. However, the direct application of SGD to the first phase of support-based clustering is vulnerable to the curse of kernelization, that is, the model size grows linearly with the data size accumulated over time. To address this issue, we invoke the budget approach, which allows us to restrict the model size to a small budget. Our new strategy for clustering assignment enables a fast computation by reducing the task of clustering assignment on the full training set to the same task on a significantly smaller set. We also provide a rigorous theoretical analysis of the convergence rate for the proposed method. Finally, we validate our proposed method on well-known clustering datasets to show that it offers comparable clustering quality while achieving significant speedup in comparison with the baselines. |
Tasks | Outlier Detection |
Published | 2017-09-19 |
URL | http://arxiv.org/abs/1709.06444v1 |
PDF | http://arxiv.org/pdf/1709.06444v1.pdf |
PWC | https://paperswithcode.com/paper/scalable-support-vector-clustering-using |
Repo | |
Framework | |
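The first phase, finding the domain of novelty with SGD while keeping the kernel expansion on a budget, can be sketched with a kernelised one-class-style objective and a simple smallest-coefficient removal rule when the budget is exceeded (a generic sketch; the authors' exact update and budget-maintenance strategy may differ):

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def budgeted_sgd_novelty(X, budget=50, lam=0.01, epochs=5, seed=0):
    """Pegasos-style SGD on a kernelised one-class objective with a hard budget
    on the number of support vectors (smallest-|alpha| removal when exceeded)."""
    rng = np.random.default_rng(seed)
    support, alpha, t = [], [], 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            alpha = [(1.0 - eta * lam) * a for a in alpha]            # shrink step
            score = sum(a * rbf(s, X[i]) for s, a in zip(support, alpha))
            if score < 1.0:                                           # margin violation
                support.append(X[i]); alpha.append(eta)
                if len(support) > budget:                             # enforce budget
                    j = int(np.argmin(np.abs(alpha)))
                    support.pop(j); alpha.pop(j)
    return lambda x: sum(a * rbf(s, x) for s, a in zip(support, alpha)) - 1.0

X = np.random.default_rng(1).standard_normal((300, 2))
f = budgeted_sgd_novelty(X)
print(f(np.array([0.0, 0.0])), f(np.array([5.0, 5.0])))   # inside vs. far outside the data
```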
Topometric Localization with Deep Learning
Title | Topometric Localization with Deep Learning |
Authors | Gabriel L. Oliveira, Noha Radwan, Wolfram Burgard, Thomas Brox |
Abstract | Compared to LiDAR-based localization methods, which provide high accuracy but rely on expensive sensors, visual localization approaches only require a camera and thus are more cost-effective, while their accuracy and reliability are typically inferior to those of LiDAR-based methods. In this work, we propose a vision-based localization approach that learns from LiDAR-based localization methods by using their output as training data, thus combining a cheap, passive sensor with an accuracy that is on par with LiDAR-based localization. The approach consists of two deep networks trained on visual odometry and topological localization, respectively, and a successive optimization to combine the predictions of these two networks. We evaluate the approach on a new challenging pedestrian-based dataset captured over the course of six months in varying weather conditions with a high degree of noise. The experiments demonstrate that the localization errors are up to 10 times smaller than with traditional vision-based localization methods. |
Tasks | Visual Localization, Visual Odometry |
Published | 2017-06-27 |
URL | http://arxiv.org/abs/1706.08775v1 |
PDF | http://arxiv.org/pdf/1706.08775v1.pdf |
PWC | https://paperswithcode.com/paper/topometric-localization-with-deep-learning |
Repo | |
Framework | |
Challenges in Monocular Visual Odometry: Photometric Calibration, Motion Bias and Rolling Shutter Effect
Title | Challenges in Monocular Visual Odometry: Photometric Calibration, Motion Bias and Rolling Shutter Effect |
Authors | Nan Yang, Rui Wang, Xiang Gao, Daniel Cremers |
Abstract | Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness and efficiency, and have gained increasing popularity over recent years. Nevertheless, relatively little discussion has been devoted to three very influential yet easily overlooked aspects: photometric calibration, motion bias and the rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and for developing new algorithms for VO and SLAM. Conclusions (some of which are counter-intuitive) are drawn from both technical and empirical analyses of all our experiments. Possible improvements to existing methods are suggested or proposed, such as a sub-pixel accuracy refinement of ORB-SLAM which boosts its performance. |
Tasks | Calibration, Monocular Visual Odometry, Simultaneous Localization and Mapping, Visual Odometry |
Published | 2017-05-11 |
URL | http://arxiv.org/abs/1705.04300v4 |
PDF | http://arxiv.org/pdf/1705.04300v4.pdf |
PWC | https://paperswithcode.com/paper/challenges-in-monocular-visual-odometry |
Repo | |
Framework | |
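Of the three aspects studied, photometric calibration is the easiest to state concretely: direct methods relate the recorded intensity to scene irradiance through a response function, a vignetting map and the exposure time, and evaluating them fairly requires undoing that chain. The commonly used photometric image-formation model (as in photometrically calibrated VO benchmarks; the paper's exact experimental protocol is described in the text) is

```latex
I(\mathbf{x}) \;=\; G\big(\, t \, V(\mathbf{x}) \, B(\mathbf{x}) \,\big),
\qquad
B(\mathbf{x}) \;=\; \frac{G^{-1}\big(I(\mathbf{x})\big)}{t \, V(\mathbf{x})},
```

where $G$ is the camera response function, $V$ the vignetting map, $t$ the exposure time and $B$ the irradiance; photometric calibration recovers $G$ and $V$ so that $B$, rather than the raw intensity, can be assumed constant across frames.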
Monocular Visual Odometry with a Rolling Shutter Camera
Title | Monocular Visual Odometry with a Rolling Shutter Camera |
Authors | Chang-Ryeol Lee, Kuk-Jin Yoon |
Abstract | Rolling Shutter (RS) cameras have become popular because of their low-cost imaging capability. However, RS cameras suffer from undesirable artifacts when the camera or the subject is moving, or when the illumination condition changes. For that reason, Monocular Visual Odometry (MVO) with RS cameras produces inaccurate ego-motion estimates. Previous works solve this RS distortion problem with motion prediction from images and/or inertial sensors. However, MVO still has trouble handling the RS distortion when the camera motion changes abruptly (e.g. vibration of mobile cameras causes extremely fast motion instantaneously). To address the problem, we propose a novel MVO algorithm that takes the geometric characteristics of RS cameras into account. The key idea of the proposed algorithm is a new RS essential matrix which incorporates the instantaneous angular and linear velocities at each frame. Our algorithm produces accurate and robust ego-motion estimates in an online manner, and is applicable to various mobile applications with RS cameras. The superiority of the proposed algorithm is validated through quantitative and qualitative comparisons on both synthetic and real datasets. |
Tasks | Monocular Visual Odometry, motion prediction, Visual Odometry |
Published | 2017-04-24 |
URL | http://arxiv.org/abs/1704.07163v1 |
PDF | http://arxiv.org/pdf/1704.07163v1.pdf |
PWC | https://paperswithcode.com/paper/monocular-visual-odometry-with-a-rolling |
Repo | |
Framework | |
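The key construction, an essential matrix that incorporates the instantaneous angular velocity $\boldsymbol{\omega}$ and linear velocity $\mathbf{v}$, builds on the standard constant-velocity rolling-shutter assumption that each image row $r$ is exposed at its own time. A schematic of that per-row pose parameterisation (the derivation of the resulting RS essential matrix is in the paper):

```latex
t_r = t_0 + r\,\tau, \qquad
\mathbf{R}(t_r) \approx \exp\!\big([\boldsymbol{\omega}]_{\times}\, r\,\tau\big)\,\mathbf{R}(t_0), \qquad
\mathbf{p}(t_r) \approx \mathbf{p}(t_0) + r\,\tau\,\mathbf{v},
```

where $\tau$ is the row readout time; substituting these row-dependent poses into the epipolar constraint makes the two-view relation depend on the velocities, instead of the single relative pose behind the global-shutter essential matrix $\mathbf{E} = [\mathbf{t}]_{\times}\mathbf{R}$.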