Paper Group ANR 154
Evidential Label Propagation Algorithm for Graphs. SHAPE: Linear-Time Camera Pose Estimation With Quadratic Error-Decay. Probabilistic graphical model based approach for water mapping using GaoFen-2 (GF-2) high resolution imagery and Landsat 8 time series. Towards Semantic Integration of Heterogeneous Sensor Data with Indigenous Knowledge for Droug …
Evidential Label Propagation Algorithm for Graphs
Title | Evidential Label Propagation Algorithm for Graphs |
Authors | Kuang Zhou, Arnaud Martin, Quan Pan, Zhun-Ga Liu |
Abstract | Community detection has attracted considerable attention across many areas, as it can be used to discover the structure and features of complex networks. With the increasing size of real-world social networks, community detection approaches should be both fast and accurate. The Label Propagation Algorithm (LPA) is known to be one of the near-linear solutions and benefits from easy implementation, so it forms a good basis for efficient community detection methods. In this paper, we extend the update rule and propagation criterion of LPA within the framework of belief functions. A new community detection approach, called Evidential Label Propagation (ELP), is proposed as an enhanced version of conventional LPA. Node influence is first defined to guide the propagation process. Plausibility is used to determine the domain label of each node. The update order of nodes is discussed to improve the robustness of the method. The ELP algorithm converges once the domain labels of all nodes stop changing. Finally, the mass assignments are calculated as node memberships. Overlapping nodes and outliers can be detected simultaneously with the proposed method. The experimental results demonstrate the effectiveness of ELP. |
Tasks | Community Detection |
Published | 2016-06-13 |
URL | http://arxiv.org/abs/1606.03832v1 |
http://arxiv.org/pdf/1606.03832v1.pdf | |
PWC | https://paperswithcode.com/paper/evidential-label-propagation-algorithm-for |
Repo | |
Framework | |
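A minimal sketch of the label-propagation core that ELP extends, assuming an undirected `networkx` graph. The evidential machinery from the paper (node influence, plausibility, mass assignments) is not reproduced here; this is only the conventional LPA baseline with asynchronous majority-vote updates.

```python
# Minimal label propagation baseline (conventional LPA), not the full
# evidential ELP from the paper; assumes an undirected networkx graph.
import random
from collections import Counter

import networkx as nx


def label_propagation(graph: nx.Graph, max_iter: int = 100, seed: int = 0) -> dict:
    """Assign a community label to every node by iterative majority voting."""
    rng = random.Random(seed)
    labels = {node: node for node in graph.nodes()}  # each node starts in its own community

    for _ in range(max_iter):
        order = list(graph.nodes())
        rng.shuffle(order)  # asynchronous updates in random order
        changed = False
        for node in order:
            neighbours = list(graph.neighbors(node))
            if not neighbours:
                continue
            votes = Counter(labels[n] for n in neighbours)
            best = max(votes.values())
            # break ties randomly among the most frequent neighbour labels
            new_label = rng.choice([lab for lab, c in votes.items() if c == best])
            if new_label != labels[node]:
                labels[node] = new_label
                changed = True
        if not changed:  # converged: no label changed during a full sweep
            break
    return labels


if __name__ == "__main__":
    g = nx.karate_club_graph()
    communities = label_propagation(g)
    print(len(set(communities.values())), "communities found")
```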
SHAPE: Linear-Time Camera Pose Estimation With Quadratic Error-Decay
Title | SHAPE: Linear-Time Camera Pose Estimation With Quadratic Error-Decay |
Authors | Alireza Ghasemi, Adam Scholefield, Martin Vetterli |
Abstract | We propose a novel camera pose estimation, or perspective-n-point (PnP), algorithm based on the idea of consistency regions and half-space intersections. Our algorithm has linear time complexity and a squared reconstruction error that decreases at least quadratically as the number of feature point correspondences increases. Inspired by ideas from triangulation and frame quantisation theory, we define consistent reconstruction and then present SHAPE, our proposed consistent pose estimation algorithm. We compare this algorithm with state-of-the-art pose estimation techniques in terms of accuracy and error decay rate. The experimental results verify our hypothesis on the optimal worst-case quadratic decay and demonstrate promising performance compared to other approaches. |
Tasks | Pose Estimation |
Published | 2016-02-24 |
URL | http://arxiv.org/abs/1602.07535v1 |
http://arxiv.org/pdf/1602.07535v1.pdf | |
PWC | https://paperswithcode.com/paper/shape-linear-time-camera-pose-estimation-with |
Repo | |
Framework | |
Probabilistic graphical model based approach for water mapping using GaoFen-2 (GF-2) high resolution imagery and Landsat 8 time series
Title | Probabilistic graphical model based approach for water mapping using GaoFen-2 (GF-2) high resolution imagery and Landsat 8 time series |
Authors | Luyan Ji, Jie Wang, Xiurui Geng, Peng Gong |
Abstract | The objective of this paper is to evaluate the potential of Gaofen-2 (GF-2) high resolution multispectral (MS) and panchromatic (PAN) imagery for water mapping. The difficulties of water mapping on high resolution data include: 1) misclassification between water and shadows or other low-reflectance ground objects, mostly caused by spectral similarity within the given band range; and 2) small water bodies whose size is below the spatial resolution of the MS image. To resolve the confusion between water and low-reflectance objects, the Landsat 8 time series with its two shortwave infrared (SWIR) bands is added, because water absorbs extremely strongly in the SWIR. To integrate the three multi-sensor, multi-resolution data sets, a probabilistic graphical model (PGM) is used, with the conditional probability distributions defined mainly by the size of each object. For comparison, results from an SVM classifier on the PCA-fused and MS data, a thresholding method on the PAN image, and a water index method on the Landsat data are computed. Confusion matrices are calculated for all methods. The results demonstrate that the PGM method achieves the best performance, with the highest overall accuracy. Moreover, small rivers can also be extracted by weighting the PAN result more heavily in the PGM. Finally, a post-classification procedure is applied to the PGM result to further exclude misclassification in shadow and water-land boundary regions. Accordingly, the producer’s, user’s and overall accuracies all increase, indicating the effectiveness of our method. |
Tasks | Time Series |
Published | 2016-12-22 |
URL | http://arxiv.org/abs/1612.07801v1 |
http://arxiv.org/pdf/1612.07801v1.pdf | |
PWC | https://paperswithcode.com/paper/probabilistic-graphical-model-based-approach |
Repo | |
Framework | |
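As a rough illustration of the water-index baseline mentioned in the abstract, the sketch below computes the normalized difference water index (NDWI, green vs. NIR) on Landsat 8 bands and thresholds it. The band arrays and the zero threshold are assumptions for illustration, not the paper's settings.

```python
# NDWI water-index baseline sketched with NumPy; the toy band arrays and the
# 0.0 threshold are placeholders, not the settings used in the paper.
import numpy as np


def ndwi_water_mask(green: np.ndarray, nir: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Return a boolean water mask from Landsat 8 green (B3) and NIR (B5) reflectance."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    ndwi = (green - nir) / np.maximum(green + nir, 1e-12)  # avoid division by zero
    return ndwi > threshold  # water pixels tend to have positive NDWI


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    green = rng.uniform(0.0, 0.4, size=(100, 100))  # stand-in reflectance values
    nir = rng.uniform(0.0, 0.4, size=(100, 100))
    mask = ndwi_water_mask(green, nir)
    print("water fraction:", mask.mean())
```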
Towards Semantic Integration of Heterogeneous Sensor Data with Indigenous Knowledge for Drought Forecasting
Title | Towards Semantic Integration of Heterogeneous Sensor Data with Indigenous Knowledge for Drought Forecasting |
Authors | Adeyinka K. Akanbi, Muthoni Masinde |
Abstract | In the Internet of Things (IoT) domain, various heterogeneous ubiquitous devices can connect and communicate with each other seamlessly, irrespective of the domain. Semantic representation of data through detailed standardized annotation has been shown to improve the integration of interconnected heterogeneous devices. However, the semantic representation of these heterogeneous data sources for environmental monitoring systems is not yet well supported. To achieve the maximum benefit of IoT for drought forecasting, a dedicated semantic middleware solution is required. This research proposes a middleware that semantically represents and integrates heterogeneous data sources with indigenous knowledge, based on a unified ontology, for an accurate IoT-based drought early warning system (DEWS). |
Tasks | |
Published | 2016-01-08 |
URL | http://arxiv.org/abs/1601.01920v1 |
http://arxiv.org/pdf/1601.01920v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-semantic-integration-of-heterogeneous |
Repo | |
Framework | |
Optimal control for a robotic exploration, pick-up and delivery problem
Title | Optimal control for a robotic exploration, pick-up and delivery problem |
Authors | Vladislav Nenchev, Christos G. Cassandras, Jörg Raisch |
Abstract | This paper addresses an optimal control problem for a robot that has to find and collect a finite number of objects and move them to a depot in minimum time. The robot has fourth-order dynamics that change instantaneously at any pick-up or drop-off of an object. The objects are modeled as point masses with a priori unknown locations in a bounded two-dimensional space that may contain unknown obstacles. For this hybrid system, an Optimal Control Problem (OCP) is approximately solved by a receding horizon scheme, where the derived lower bound for the cost-to-go is evaluated for the worst case and for a probabilistic case, assuming a uniform distribution of the objects. First, a time-driven approximate solution based on time and position space discretization and mixed integer programming is presented. Due to the high computational cost of this solution, an alternative event-driven approximate approach based on a suitable motion parameterization and gradient-based optimization is proposed. The solutions are compared in a numerical example, suggesting that the latter approach offers a significant computational advantage while yielding qualitatively similar results. The methods are particularly relevant for robotic applications such as automated cleaning, search and rescue, harvesting, and manufacturing. |
Tasks | |
Published | 2016-07-05 |
URL | http://arxiv.org/abs/1607.01202v1 |
http://arxiv.org/pdf/1607.01202v1.pdf | |
PWC | https://paperswithcode.com/paper/optimal-control-for-a-robotic-exploration |
Repo | |
Framework | |
PRIIME: A Generic Framework for Interactive Personalized Interesting Pattern Discovery
Title | PRIIME: A Generic Framework for Interactive Personalized Interesting Pattern Discovery |
Authors | Mansurul Bhuiyan, Mohammad Al Hasan |
Abstract | Traditional frequent pattern mining algorithms generate an exponentially large number of patterns, a substantial proportion of which are of little significance for many data analysis tasks. Discovering a small number of personalized interesting patterns from the large output set, according to a particular user’s interest, is an important and challenging task. Existing works on pattern summarization do not solve this problem from the personalization viewpoint. In this work, we propose an interactive pattern discovery framework named PRIIME, which identifies a set of interesting patterns for a specific user without requiring any prior input on the interestingness measure of patterns from the user. The proposed framework is generic and supports discovery of interesting set, sequence and graph patterns. We develop a softmax-classification-based iterative learning algorithm that uses a limited amount of interactive feedback from the user to learn her interestingness profile, and uses this profile for pattern recommendation. To handle sequence and graph patterns, PRIIME adopts a neural network (NN) based unsupervised feature construction approach. We also develop a strategy that combines exploration and exploitation to select patterns for feedback. We show experimental results on several real-life datasets to validate the performance of the proposed method, and we compare with existing methods of interactive pattern discovery to show that our method is substantially superior in performance. To demonstrate the applicability of the framework, we present a case study from the real-estate domain. |
Tasks | |
Published | 2016-07-19 |
URL | http://arxiv.org/abs/1607.05749v1 |
http://arxiv.org/pdf/1607.05749v1.pdf | |
PWC | https://paperswithcode.com/paper/priime-a-generic-framework-for-interactive |
Repo | |
Framework | |
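A toy sketch of the interactive loop the abstract describes: a model of the user's interestingness profile is refit after each feedback round, and the next batch of patterns is chosen by mixing exploitation (high predicted interest) with exploration (high uncertainty). A binary logistic classifier stands in for the paper's softmax classifier, and the pattern features, simulated user, and all hyperparameters are made up for illustration.

```python
# Toy interactive pattern-scoring loop in the spirit of PRIIME; the pattern
# features, simulated user, and selection heuristic are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patterns, n_features = 500, 16
X = rng.normal(size=(n_patterns, n_features))          # pattern feature vectors (stand-in)
true_w = rng.normal(size=n_features)                   # hidden user preference (simulated)
user_labels = (X @ true_w > 0).astype(int)             # simulated "interesting?" feedback

labelled = list(rng.choice(n_patterns, size=20, replace=False))
if len(set(user_labels[labelled])) < 2:                # make sure both classes are present
    labelled += [int(np.argmax(user_labels)), int(np.argmin(user_labels))]

model = LogisticRegression()
for round_ in range(5):
    model.fit(X[labelled], user_labels[labelled])      # refit the profile on feedback so far
    proba = model.predict_proba(X)[:, 1]
    unlabelled = [i for i in range(n_patterns) if i not in labelled]
    # exploitation: highest predicted interest; exploration: closest to the decision boundary
    exploit = sorted(unlabelled, key=lambda i: -proba[i])[:5]
    explore = sorted(unlabelled, key=lambda i: abs(proba[i] - 0.5))[:5]
    batch = list(dict.fromkeys(exploit + explore))     # deduplicate, keep order
    labelled.extend(batch)                             # "ask the user" about this batch
    acc = (model.predict(X) == user_labels).mean()
    print(f"round {round_}: {len(labelled)} labelled patterns, accuracy {acc:.2f}")
```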
Probe-based Rapid Hybrid Hyperspectral and Tissue Surface Imaging Aided by Fully Convolutional Networks
Title | Probe-based Rapid Hybrid Hyperspectral and Tissue Surface Imaging Aided by Fully Convolutional Networks |
Authors | Jianyu Lin, Neil T. Clancy, Xueqing Sun, Ji Qi, Mirek Janatka, Danail Stoyanov, Daniel S. Elson |
Abstract | Tissue surface shape and reflectance spectra provide rich intra-operative information useful for surgical guidance. We propose a hybrid system which displays an endoscopic image alongside a fast joint inspection of tissue surface shape, using structured light (SL) and hyperspectral imaging (HSI). For SL, a miniature fibre probe is used to project a coloured spot pattern onto the tissue surface. In HSI mode, standard endoscopic illumination is used, with the fibre probe collecting reflected light and encoding the spatial information into a linear format that can be imaged onto the slit of a spectrograph. Correspondence between the arrangement of fibres at the distal and proximal ends of the bundle was found using spectral encoding. During pattern decoding, a fully convolutional network (FCN) was used for spot detection, followed by a matching propagation algorithm for spot identification. This method enables fast reconstruction (12 frames per second) on a GPU. The hyperspectral image was combined with the white light image and the reconstructed surface, showing the spectral information of different areas. The system has been validated in phantom and ex vivo experiments. |
Tasks | |
Published | 2016-06-15 |
URL | http://arxiv.org/abs/1606.04766v1 |
http://arxiv.org/pdf/1606.04766v1.pdf | |
PWC | https://paperswithcode.com/paper/probe-based-rapid-hybrid-hyperspectral-and |
Repo | |
Framework | |
Learning binary or real-valued time-series via spike-timing dependent plasticity
Title | Learning binary or real-valued time-series via spike-timing dependent plasticity |
Authors | Takayuki Osogami |
Abstract | The dynamic Boltzmann machine (DyBM) has been proposed as a model of a spiking neural network, and its learning rule of maximizing the log-likelihood of given time-series data has been shown to exhibit key properties of spike-timing dependent plasticity (STDP), a refinement of the Hebbian rule that had been postulated and experimentally confirmed in neuroscience. Here, we relax some of the constraints in the DyBM so that it becomes more suitable for computation and learning. We show that learning the DyBM can be considered as logistic regression for binary-valued time series. We also show how the DyBM can learn real-valued data in the form of a Gaussian DyBM and discuss its relation to the vector autoregressive (VAR) model. The Gaussian DyBM extends the VAR model with additional explanatory variables, which correspond to the eligibility traces of the DyBM and capture long-term dependencies in the time series. Numerical experiments show that the Gaussian DyBM significantly improves predictive accuracy over the VAR model. |
Tasks | Time Series |
Published | 2016-12-15 |
URL | http://arxiv.org/abs/1612.04897v1 |
http://arxiv.org/pdf/1612.04897v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-binary-or-real-valued-time-series |
Repo | |
Framework | |
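A rough illustration of the idea that a Gaussian DyBM extends an autoregressive model with eligibility-trace features: the sketch below predicts a univariate series from a short window of lags plus one exponentially decaying trace of the whole past, fitted by least squares. The decay rate, lag order, and toy data are assumptions, and this is not the paper's actual model or learning rule.

```python
# Toy univariate "AR + eligibility trace" predictor, illustrating the
# Gaussian-DyBM idea in the abstract; all settings here are illustrative.
import numpy as np


def build_features(x: np.ndarray, order: int = 2, decay: float = 0.8):
    """Stack AR lags and an exponentially decaying eligibility trace of the past."""
    n = len(x)
    trace = np.zeros(n)
    for t in range(1, n):
        trace[t] = decay * trace[t - 1] + x[t - 1]     # eligibility trace of past values
    rows, targets = [], []
    for t in range(order, n):
        rows.append(np.concatenate([x[t - order:t], [trace[t]], [1.0]]))  # lags, trace, bias
        targets.append(x[t])
    return np.array(rows), np.array(targets)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(400)
    x = np.sin(0.1 * t) + 0.1 * rng.standard_normal(400)          # toy series with slow structure
    features, targets = build_features(x)
    weights, *_ = np.linalg.lstsq(features, targets, rcond=None)  # least-squares fit
    mse = np.mean((features @ weights - targets) ** 2)
    print("one-step-ahead MSE:", round(float(mse), 4))
```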
How to advance general game playing artificial intelligence by player modelling
Title | How to advance general game playing artificial intelligence by player modelling |
Authors | Benjamin Ultan Cowley |
Abstract | General game playing artificial intelligence has recently seen important advances due to the various techniques known as ‘deep learning’. However, the advances conceal equally important limitations in their reliance on massive data sets, fortuitously constructed problems, and the absence of any human-level complexity, including other human opponents. On the other hand, deep learning systems which do beat human champions, such as in Go, do not generalise well. The power of deep learning simultaneously exposes its weakness. Given that deep learning is mostly clever reconfigurations of well-established methods, moving beyond the state of the art calls for forward-thinking, visionary solutions, not just more of the same. I present the argument that general game playing artificial intelligence will require a generalised player model. This is because games are inherently human artefacts which, as a class of problems, therefore contain cases that require a human-style problem-solving approach. I relate this argument to the performance of state-of-the-art general game playing agents. I then describe a concept for a formal, category-theoretic basis for a generalised player model. This formal approach integrates my existing ‘Behavlets’ method for psychologically-derived player modelling: Cowley, B., Charles, D. (2016). Behavlets: a Method for Practical Player Modelling using Psychology-Based Player Traits and Domain Specific Features. User Modeling and User-Adapted Interaction, 26(2), 257-306. |
Tasks | |
Published | 2016-06-01 |
URL | http://arxiv.org/abs/1606.00401v3 |
http://arxiv.org/pdf/1606.00401v3.pdf | |
PWC | https://paperswithcode.com/paper/how-to-advance-general-game-playing |
Repo | |
Framework | |
Multiplex visibility graphs to investigate recurrent neural networks dynamics
Title | Multiplex visibility graphs to investigate recurrent neural networks dynamics |
Authors | Filippo Maria Bianchi, Lorenzo Livi, Cesare Alippi, Robert Jenssen |
Abstract | A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning such hyperparameters may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal RNN dynamics. Through this insight, we are able to design a principled unsupervised method to derive configurations with maximized performance, in terms of prediction error and memory capacity. In particular, we propose to model the time series of neuron activations with the recently introduced horizontal visibility graphs, whose topological properties reflect important dynamical features of the underlying system. Each graph then becomes a layer of a larger structure, called a multiplex. We show that topological properties of such a multiplex reflect important features of the RNN dynamics and can be used to guide the tuning procedure. To validate the proposed method, we consider a class of RNNs called echo state networks. We perform experiments and discuss results on several benchmarks and a real-world dataset of call data records. |
Tasks | Time Series |
Published | 2016-09-10 |
URL | http://arxiv.org/abs/1609.03068v3 |
http://arxiv.org/pdf/1609.03068v3.pdf | |
PWC | https://paperswithcode.com/paper/multiplex-visibility-graphs-to-investigate |
Repo | |
Framework | |
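The horizontal visibility graph construction used as the building block of the multiplex is simple enough to sketch directly: two time points are connected whenever every sample strictly between them lies below both. The naive O(n^2) loop below is a minimal sketch; the toy activation series is an assumption, and in the paper one such graph would be built per neuron and stacked into a multiplex.

```python
# Naive O(n^2) horizontal visibility graph (HVG) construction from a series;
# each neuron's activation series would yield one graph, i.e. one multiplex layer.
import numpy as np
import networkx as nx


def horizontal_visibility_graph(series: np.ndarray) -> nx.Graph:
    """Connect i and j whenever all samples strictly between them are lower than both."""
    g = nx.Graph()
    n = len(series)
    g.add_nodes_from(range(n))
    for i in range(n - 1):
        for j in range(i + 1, n):
            between = series[i + 1:j]
            if between.size == 0 or between.max() < min(series[i], series[j]):
                g.add_edge(i, j)
    return g


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    activations = rng.standard_normal(200)              # stand-in for one neuron's activations
    hvg = horizontal_visibility_graph(activations)
    degrees = [d for _, d in hvg.degree()]
    print("mean degree:", sum(degrees) / len(degrees))  # topological statistics guide tuning
```

Degree statistics of these graphs are among the topological properties that reflect the dynamics; the paper aggregates them across the multiplex layers rather than per graph.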
Zero-Resource Translation with Multi-Lingual Neural Machine Translation
Title | Zero-Resource Translation with Multi-Lingual Neural Machine Translation |
Authors | Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, Kyunghyun Cho |
Abstract | In this paper, we propose a novel finetuning algorithm for the recently introduced multi-way, multilingual neural machine translation model that enables zero-resource machine translation. When used together with novel many-to-one translation strategies, we empirically show that this finetuning algorithm allows the multi-way, multilingual model to translate a zero-resource language pair (1) as well as a single-pair neural translation model trained with up to 1M direct parallel sentences of the same language pair and (2) better than pivot-based translation strategies, while keeping only one additional copy of the attention-related parameters. |
Tasks | Machine Translation |
Published | 2016-06-13 |
URL | http://arxiv.org/abs/1606.04164v1 |
http://arxiv.org/pdf/1606.04164v1.pdf | |
PWC | https://paperswithcode.com/paper/zero-resource-translation-with-multi-lingual |
Repo | |
Framework | |
Variance Reduction for Faster Non-Convex Optimization
Title | Variance Reduction for Faster Non-Convex Optimization |
Authors | Zeyuan Allen-Zhu, Elad Hazan |
Abstract | We consider the fundamental problem in non-convex optimization of efficiently reaching a stationary point. In contrast to the convex case, despite the long history of this basic problem, the only known theoretical results on first-order non-convex optimization remain full gradient descent, which converges in $O(1/\varepsilon)$ iterations for smooth objectives, and stochastic gradient descent, which converges in $O(1/\varepsilon^2)$ iterations for objectives that are sums of smooth functions. We provide the first improvement in this line of research. Our result is based on the variance reduction trick recently introduced for convex optimization, as well as a brand new analysis of variance reduction that is suitable for non-convex optimization. For objectives that are sums of smooth functions, our first-order minibatch stochastic method converges at an $O(1/\varepsilon)$ rate and is faster than full gradient descent by $\Omega(n^{1/3})$. We demonstrate the effectiveness of our methods on empirical risk minimization with non-convex loss functions and on training neural nets. |
Tasks | |
Published | 2016-03-17 |
URL | http://arxiv.org/abs/1603.05643v2 |
http://arxiv.org/pdf/1603.05643v2.pdf | |
PWC | https://paperswithcode.com/paper/variance-reduction-for-faster-non-convex |
Repo | |
Framework | |
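A compact sketch of the variance-reduction trick (SVRG-style) that the abstract builds on, applied to a finite sum of smooth, possibly non-convex component functions. The step size, epoch length, and the toy tanh-regression objective are assumptions for illustration; this is the generic trick, not the paper's exact method or analysis.

```python
# SVRG-style variance-reduced gradient sketch for a finite-sum objective;
# hyperparameters and the toy objective are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)


def grad_i(w: np.ndarray, i: int) -> np.ndarray:
    """Gradient of f_i(w) = 0.5 * (tanh(a_i . w) - b_i)^2 (smooth, non-convex)."""
    z = A[i] @ w
    return (np.tanh(z) - b[i]) * (1.0 - np.tanh(z) ** 2) * A[i]


def full_grad(w: np.ndarray) -> np.ndarray:
    return np.mean([grad_i(w, i) for i in range(n)], axis=0)


w = np.zeros(d)
step, epochs, m = 0.1, 20, n          # m inner steps per snapshot
for _ in range(epochs):
    w_snap = w.copy()
    g_snap = full_grad(w_snap)        # full gradient computed once per snapshot
    for _ in range(m):
        i = rng.integers(n)
        # unbiased variance-reduced stochastic gradient: its variance shrinks
        # as the iterate stays close to the snapshot point
        v = grad_i(w, i) - grad_i(w_snap, i) + g_snap
        w -= step * v
print("final gradient norm:", np.linalg.norm(full_grad(w)))
```

The key design choice is that each inner step costs two component gradients plus a stored full gradient, so the per-iteration cost stays stochastic while the gradient estimate's variance vanishes near the snapshot.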
Unbiased split variable selection for random survival forests using maximally selected rank statistics
Title | Unbiased split variable selection for random survival forests using maximally selected rank statistics |
Authors | Marvin N. Wright, Theresa Dankowski, Andreas Ziegler |
Abstract | The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are used by default in conditional inference forests to select the optimal splitting variable, and these cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible, although there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories, and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives if a simple p-value approximation is used. |
Tasks | |
Published | 2016-05-11 |
URL | http://arxiv.org/abs/1605.03391v2 |
http://arxiv.org/pdf/1605.03391v2.pdf | |
PWC | https://paperswithcode.com/paper/unbiased-split-variable-selection-for-random |
Repo | |
Framework | |
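To illustrate the split criterion in its simplest form, the sketch below computes a maximally selected standardized rank statistic for one covariate: every candidate cutpoint induces a two-group comparison of outcome ranks, and the cutpoint with the largest standardized statistic is selected. Censoring and the p-value approximations discussed in the paper are omitted, so treat this only as the uncensored core idea, with toy data as an assumption.

```python
# Maximally selected (Wilcoxon-type) rank statistic for one covariate;
# censoring and p-value approximations are omitted for brevity.
import numpy as np
from scipy.stats import rankdata


def maximally_selected_rank_statistic(x: np.ndarray, y: np.ndarray):
    """Return the best cutpoint of x and the largest standardized rank statistic for y."""
    n = len(y)
    ranks = rankdata(y)                          # outcome ranks (ties get average ranks)
    best_cut, best_z = None, -np.inf
    for cut in np.unique(x)[:-1]:                # candidate cutpoints: observed values of x
        left = x <= cut
        n_left = left.sum()
        s = ranks[left].sum()                    # rank sum of the left group
        mean = n_left * (n + 1) / 2.0            # permutation mean of the rank sum
        var = n_left * (n - n_left) * (n + 1) / 12.0
        z = abs(s - mean) / np.sqrt(var)         # standardized statistic at this cutpoint
        if z > best_z:
            best_cut, best_z = float(cut), float(z)
    return best_cut, best_z


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(size=300)
    y = np.where(x > 0.6, rng.exponential(2.0, 300), rng.exponential(1.0, 300))  # toy survival times
    print(maximally_selected_rank_statistic(x, y))
```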
Convolutional Network for Attribute-driven and Identity-preserving Human Face Generation
Title | Convolutional Network for Attribute-driven and Identity-preserving Human Face Generation |
Authors | Mu Li, Wangmeng Zuo, David Zhang |
Abstract | This paper focuses on the problem of generating human face pictures from specific attributes. Existing CNN-based face generation models, however, either ignore the identity of the generated face or fail to preserve the identity of the reference face image. Here we address this problem from the viewpoint of optimization and propose an optimization model that generates a human face with the given attributes while preserving the identity of the reference image. The attributes can be obtained from an attribute-guided image or by tuning the attribute features of the reference image. Using the deep convolutional network “VGG-Face”, the loss is defined on the convolutional feature maps. We then apply a gradient descent algorithm to solve this optimization problem. The results validate the effectiveness of our method for attribute-driven and identity-preserving face generation. |
Tasks | Face Generation |
Published | 2016-08-23 |
URL | http://arxiv.org/abs/1608.06434v1 |
http://arxiv.org/pdf/1608.06434v1.pdf | |
PWC | https://paperswithcode.com/paper/convolutional-network-for-attribute-driven |
Repo | |
Framework | |
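A rough PyTorch sketch of the optimization view in the abstract: the generated image is treated as the free variable and updated by gradient descent to match attribute features from one image and identity features from the reference. A torchvision VGG-16 stands in for the paper's VGG-Face, and the layer choices, loss weights, and placeholder images are assumptions.

```python
# Feature-matching optimization sketch; torchvision's VGG-16 is a stand-in
# for VGG-Face, and the layer indices / loss weights are illustrative guesses.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)


def feats(img: torch.Tensor, upto: int) -> torch.Tensor:
    """Run the image through the first `upto` layers and return the feature map."""
    out = img
    for layer in vgg[:upto]:
        out = layer(out)
    return out


reference = torch.rand(1, 3, 224, 224)      # identity reference (placeholder image)
attribute = torch.rand(1, 3, 224, 224)      # attribute-guided image (placeholder)
x = reference.clone().requires_grad_(True)  # image being optimized, initialised at the reference

with torch.no_grad():
    id_target = feats(reference, 23)        # deeper features, used here as an identity proxy
    attr_target = feats(attribute, 9)       # shallower features, used here as an attribute proxy

optimizer = torch.optim.Adam([x], lr=0.01)
for step in range(50):
    optimizer.zero_grad()
    id_loss = torch.nn.functional.mse_loss(feats(x, 23), id_target)
    attr_loss = torch.nn.functional.mse_loss(feats(x, 9), attr_target)
    loss = id_loss + 0.5 * attr_loss        # weighting is an arbitrary illustrative choice
    loss.backward()
    optimizer.step()
    x.data.clamp_(0.0, 1.0)                 # keep pixel values in a valid range
print("final loss:", float(loss))
```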
Did Evolution get it right? An evaluation of Near-Infrared imaging in semantic scene segmentation using deep learning
Title | Did Evolution get it right? An evaluation of Near-Infrared imaging in semantic scene segmentation using deep learning |
Authors | J. Rafid Siddiqui |
Abstract | Animals have evolved to restrict their sensing capabilities to a certain region of the electromagnetic spectrum. This is, surprisingly, a very narrow band on a vast scale, which makes one wonder whether a systematic bias underlies such selective filtration. The situation becomes even more intriguing when we find a sharp cutoff at the near-infrared, where almost all animal vision systems seem to have a lower bound. This raises an interesting question: did evolution “intentionally” perform this restriction in order to evolve higher visual cognition? In this work, the question is addressed by experimenting with near-infrared images and their potential applicability to higher visual processing such as semantic segmentation. A modified version of a fully convolutional network is trained on NIR images and RGB images respectively, and the two are compared for their effectiveness in semantic segmentation. The results show that the visible part of the spectrum alone is sufficient for robust semantic segmentation of indoor as well as outdoor scenes. |
Tasks | Scene Segmentation, Semantic Segmentation |
Published | 2016-11-27 |
URL | http://arxiv.org/abs/1611.08815v1 |
http://arxiv.org/pdf/1611.08815v1.pdf | |
PWC | https://paperswithcode.com/paper/did-evolution-get-it-right-an-evaluation-of |
Repo | |
Framework | |