October 16, 2019

3069 words 15 mins read

Paper Group ANR 1027


EA-CG: An Approximate Second-Order Method for Training Fully-Connected Neural Networks

Title EA-CG: An Approximate Second-Order Method for Training Fully-Connected Neural Networks
Authors Sheng-Wei Chen, Chun-Nan Chou, Edward Y. Chang
Abstract For training fully-connected neural networks (FCNNs), we propose a practical approximate second-order method including: 1) an approximation of the Hessian matrix and 2) a conjugate gradient (CG) based method. Our proposed approximate Hessian matrix is memory-efficient and can be applied to any FCNNs where the activation and criterion functions are twice differentiable. We devise a CG-based method incorporating a rank-one approximation to derive Newton directions for training FCNNs, which significantly reduces both space and time complexity. This CG-based method can be employed to solve any linear equation where the coefficient matrix is Kronecker-factored, symmetric and positive definite. Empirical studies show the efficacy and efficiency of our proposed method.
Tasks
Published 2018-02-19
URL http://arxiv.org/abs/1802.06502v3
PDF http://arxiv.org/pdf/1802.06502v3.pdf
PWC https://paperswithcode.com/paper/ea-cg-an-approximate-second-order-method-for
Repo
Framework
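
The second half of the abstract describes a CG-based routine for linear systems whose coefficient matrix is Kronecker-factored, symmetric and positive definite. Below is a minimal sketch of plain conjugate gradient driven only by a matrix-vector product, exercised on a small Kronecker-factored example; it illustrates the standard machinery, not the authors' EA-CG implementation or their Hessian approximation.

```python
import numpy as np

def conjugate_gradient(A_mv, b, max_iter=50, tol=1e-8):
    """Solve A x = b for a symmetric positive-definite A given only the
    matrix-vector product A_mv(v); this is the role CG plays when deriving
    approximate Newton directions without ever forming A explicitly."""
    x = np.zeros_like(b)
    r = b - A_mv(x)                 # residual
    p = r.copy()                    # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A_mv(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Kronecker-factored SPD coefficient matrix A = kron(B, C): with row-major
# vectorization, A @ vec(X) equals vec(B X C^T), so A is never materialized.
B = np.array([[2.0, 0.5], [0.5, 1.0]])
C = np.array([[3.0, 1.0], [1.0, 2.0]])
A_mv = lambda v: (B @ v.reshape(2, 2) @ C.T).ravel()
b = np.ones(4)
x = conjugate_gradient(A_mv, b)
print(np.allclose(np.kron(B, C) @ x, b))    # True
```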

Regularized Fourier Ptychography using an Online Plug-and-Play Algorithm

Title Regularized Fourier Ptychography using an Online Plug-and-Play Algorithm
Authors Yu Sun, Shiqi Xu, Yunzhe Li, Lei Tian, Brendt Wohlberg, Ulugbek S. Kamilov
Abstract The plug-and-play priors (PnP) framework has recently been shown to achieve state-of-the-art results in regularized image reconstruction by leveraging a sophisticated denoiser within an iterative algorithm. In this paper, we propose a new online PnP algorithm for Fourier ptychographic microscopy (FPM) based on the fast iterative shrinkage/thresholding algorithm (FISTA). Specifically, the proposed algorithm uses only a subset of measurements, which makes it scalable to a large set of measurements. We validate the algorithm by showing that it can lead to significant performance gains on both simulated and experimental data.
Tasks Image Reconstruction
Published 2018-10-31
URL http://arxiv.org/abs/1811.00120v2
PDF http://arxiv.org/pdf/1811.00120v2.pdf
PWC https://paperswithcode.com/paper/regularized-fourier-ptychography-using-an
Repo
Framework
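
For orientation, here is a schematic of what an online PnP iteration built on FISTA can look like: a data-fidelity gradient step on a random subset of measurements, followed by a plug-in denoiser in place of the proximal operator, with FISTA momentum on top. The forward operators, step size and soft-threshold "denoiser" below are illustrative stand-ins, not the paper's FPM forward model or learned denoiser.

```python
import numpy as np

def online_pnp_fista(y_list, A_list, denoise, x0, step=1e-2, n_iter=300, batch=4, seed=0):
    """Toy online PnP-FISTA loop: each iteration takes a gradient step on a
    random minibatch of measurements, then applies a plug-in denoiser in
    place of the proximal operator."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    z = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        idx = rng.choice(len(y_list), size=batch, replace=False)
        grad = np.zeros_like(x)
        for i in idx:                            # data-fidelity gradient on the minibatch
            grad += A_list[i].T @ (A_list[i] @ z - y_list[i])
        x_new = denoise(z - step * grad)         # denoiser replaces the prox
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # FISTA momentum
        x, t = x_new, t_new
    return x

# Toy usage: random linear measurements of a sparse signal, with a
# soft-threshold "denoiser" standing in for a learned one.
rng = np.random.default_rng(1)
n, m = 32, 8
A_list = [rng.normal(size=(m, n)) / np.sqrt(m) for _ in range(16)]
x_true = np.zeros(n); x_true[:4] = 1.0
y_list = [A @ x_true for A in A_list]
soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 1e-3, 0.0)
x_hat = online_pnp_fista(y_list, A_list, soft, np.zeros(n))
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative error
```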

Tractable Learning and Inference for Large-Scale Probabilistic Boolean Networks

Title Tractable Learning and Inference for Large-Scale Probabilistic Boolean Networks
Authors Ifigeneia Apostolopoulou, Diana Marculescu
Abstract Probabilistic Boolean Networks (PBNs) have been previously proposed so as to gain insights into complex dynamical systems. However, identification of large networks, and of the underlying discrete Markov chain which describes their temporal evolution, still remains a challenge. In this paper, we introduce an equivalent representation for the PBN, the Stochastic Conjunctive Normal Form (SCNF), which paves the way to a scalable learning algorithm and helps predict the long-run dynamic behavior of large-scale systems. Moreover, SCNF allows efficient sampling so as to statistically infer multi-step transition probabilities, which can provide knowledge of the activity levels of individual nodes in the long run.
Tasks
Published 2018-01-23
URL http://arxiv.org/abs/1801.07693v1
PDF http://arxiv.org/pdf/1801.07693v1.pdf
PWC https://paperswithcode.com/paper/tractable-learning-and-inference-for-large
Repo
Framework
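
To make the idea of multi-step inference by sampling concrete, here is a brute-force sketch that simulates a tiny, made-up PBN and estimates long-run node activity with Monte-Carlo rollouts. It does not use the SCNF representation; the paper's point is precisely that SCNF makes this kind of sampling and learning tractable for large networks.

```python
import numpy as np

def simulate_pbn(update_rules, probs, x0, steps, rng):
    """Simulate one trajectory of a toy probabilistic Boolean network: at each
    step, every node picks one of its candidate Boolean update functions
    according to its selection probabilities."""
    x = np.array(x0, dtype=bool)
    for _ in range(steps):
        x_new = x.copy()
        for i, (rules, p) in enumerate(zip(update_rules, probs)):
            f = rules[rng.choice(len(rules), p=p)]
            x_new[i] = f(x)
        x = x_new
    return x

# Two-node toy network; estimate node activity after k steps by sampling.
rng = np.random.default_rng(0)
rules = [
    [lambda x: x[1], lambda x: not x[0]],       # node 0: two candidate functions
    [lambda x: x[0] and x[1], lambda x: True],  # node 1
]
probs = [[0.7, 0.3], [0.6, 0.4]]
samples = np.array([simulate_pbn(rules, probs, [True, False], steps=5, rng=rng)
                    for _ in range(2000)])
print(samples.mean(axis=0))   # Monte-Carlo estimate of multi-step activity levels
```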

Polygonal approximation of digital planar curve using novel significant measure

Title Polygonal approximation of digital planar curve using novel significant measure
Authors Mangayarkarasi Ramaiah, Dilip K. Prasad
Abstract This paper presents an iterative smoothing technique for polygonal approximation of a digital image boundary. The technique starts with the finest initial segmentation points of a curve. The contribution of each initial segmentation point towards preserving the original shape of the boundary is determined by computing a significance measure that is sensitive to sharp turns, which are easily missed when conventional significance measures are used to detect dominant points. The proposed method distinguishes between the cases where a point on the curve projects directly onto the line segment joining its two neighbouring points and where it projects beyond that segment; it not only identifies these cases but also computes the point's significant contribution differently in each. This case-specific treatment preserves points of high curvature even as the revised set of dominant points is derived. Experimental results show that the proposed technique competes well with state-of-the-art techniques.
Tasks
Published 2018-12-21
URL http://arxiv.org/abs/1812.09271v1
PDF http://arxiv.org/pdf/1812.09271v1.pdf
PWC https://paperswithcode.com/paper/polygonal-approximation-of-digital-planar
Repo
Framework
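
One plausible reading of the case-specific measure is sketched below: a point's significance is its perpendicular distance to the chord joining its neighbouring segmentation points when it projects onto that chord, and the distance to the nearer endpoint when the projection falls beyond it. The exact formula is hypothetical; the paper defines its own measure.

```python
import numpy as np

def significance(p_prev, p, p_next):
    """Illustrative significance measure: perpendicular distance when the point
    projects onto the chord p_prev -> p_next, distance to the nearer endpoint
    when the projection falls outside the chord. A hypothetical reading, not
    the paper's exact measure."""
    a, b, q = map(np.asarray, (p_prev, p_next, p))
    ab = b - a
    t = np.dot(q - a, ab) / np.dot(ab, ab)        # projection parameter along the chord
    if 0.0 <= t <= 1.0:
        return np.linalg.norm(q - (a + t * ab))   # projects onto the chord
    return min(np.linalg.norm(q - a), np.linalg.norm(q - b))

print(significance((0, 0), (1, 2), (2, 0)))   # projection lands inside the chord
print(significance((0, 0), (3, 1), (2, 0)))   # projection falls beyond the chord
```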

Deep Gaussian Processes with Decoupled Inducing Inputs

Title Deep Gaussian Processes with Decoupled Inducing Inputs
Authors Marton Havasi, José Miguel Hernández-Lobato, Juan José Murillo-Fuentes
Abstract Deep Gaussian Processes (DGP) are hierarchical generalizations of Gaussian Processes (GP) that have proven to work effectively on multiple supervised regression tasks. They combine the well-calibrated uncertainty estimates of GPs with the great flexibility of multilayer models. In DGPs, given the inputs, the outputs of the layers are Gaussian distributions parameterized by their means and covariances. These layers are realized as Sparse GPs, where the training data is approximated using a small set of pseudo points. In this work, we show that the computational cost of DGPs can be reduced with no loss in performance by using a separate, smaller set of pseudo points when calculating the layerwise variance, while using a larger set of pseudo points when calculating the layerwise mean. This enables us to train larger models with lower cost and better predictive performance.
Tasks Gaussian Processes
Published 2018-01-09
URL http://arxiv.org/abs/1801.02939v1
PDF http://arxiv.org/pdf/1801.02939v1.pdf
PWC https://paperswithcode.com/paper/deep-gaussian-processes-with-decoupled
Repo
Framework
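
A conceptual sketch of the decoupling idea follows: the predictive mean is expanded over a larger set of pseudo inputs, while the more expensive covariance terms use a smaller one. The kernel, pseudo-input locations, weights `alpha` and variational covariance `S` below are placeholders rather than trained DGP quantities, and the sketch covers a single GP layer, not a deep model.

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    """Squared-exponential kernel."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

rng = np.random.default_rng(0)
Z_mean = rng.uniform(-3, 3, size=(50, 1))     # larger pseudo set for the mean
Z_var = Z_mean[:10]                           # smaller pseudo set for the variance
alpha = rng.normal(size=50)                   # mean weights (learned in practice)
K_vv = rbf(Z_var, Z_var) + 1e-6 * np.eye(10)
S = 0.5 * K_vv                                # variational covariance (placeholder)

def predict(X_star):
    mean = rbf(X_star, Z_mean) @ alpha        # mean uses all 50 pseudo inputs
    K_sv = rbf(X_star, Z_var)                 # variance only touches the 10 pseudo inputs
    A = np.linalg.solve(K_vv, K_sv.T)
    var = (np.ones(len(X_star))               # k(x, x) = 1 for this RBF kernel
           - np.einsum('is,si->i', K_sv, A)
           + np.einsum('si,st,ti->i', A, S, A))
    return mean, var

m, v = predict(np.linspace(-3, 3, 5).reshape(-1, 1))
print(m.round(2), v.round(2))
```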

Spiking memristor logic gates are a type of time-variant perceptron

Title Spiking memristor logic gates are a type of time-variant perceptron
Authors Ella M. Gale
Abstract Memristors are low-power memory-holding resistors thought to be useful for neuromorphic computing, which can compute via spike interactions mediated through the device's short-term memory. Using interacting spikes, it is possible to build an AND gate that computes OR at the same time; similarly, a full adder can be built that computes the arithmetical sum of its inputs. Here we show how these gates can be understood by modelling the memristors as a novel type of perceptron: one which is sensitive to input order. The memristor's memory can change the input weights for later inputs, and thus the memristor gates cannot be accurately described by a single perceptron, requiring either a network of time-invariant perceptrons or a complex time-varying self-reprogrammable perceptron. This work demonstrates the high functionality of memristor logic gates, and also that the addition of thresholding could enable the creation of a standard perceptron in hardware, which may have use in building neural net chips.
Tasks
Published 2018-01-08
URL http://arxiv.org/abs/1801.02508v1
PDF http://arxiv.org/pdf/1801.02508v1.pdf
PWC https://paperswithcode.com/paper/spiking-memristor-logic-gates-are-a-type-of
Repo
Framework
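
To illustrate what a perceptron that is "sensitive to input order" means, here is a toy unit whose effective weights drift with a decaying memory of past inputs, so the same spike can produce different outputs depending on what preceded it. The dynamics (decay and gain constants, threshold) are invented for illustration and are not a physical memristor model.

```python
import numpy as np

class TimeVariantPerceptron:
    """Toy perceptron whose effective weights drift with each input it sees,
    mimicking a short-term-memory device. Purely illustrative dynamics."""
    def __init__(self, w, decay=0.5, gain=0.3):
        self.w = np.asarray(w, dtype=float)
        self.state = np.zeros_like(self.w)       # device short-term memory
        self.decay, self.gain = decay, gain

    def step(self, x):
        x = np.asarray(x, dtype=float)
        out = float((self.w + self.state) @ x > 0.5)        # threshold with drifted weights
        self.state = self.decay * self.state + self.gain * x  # memory update
        return out

p = TimeVariantPerceptron(w=[0.4, 0.4])
# Same input twice: the second spike fires because the memory has charged.
print(p.step([1, 0]), p.step([1, 0]))   # 0.0 1.0
```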

Vectorization of hypotheses and speech for faster beam search in encoder decoder-based speech recognition

Title Vectorization of hypotheses and speech for faster beam search in encoder decoder-based speech recognition
Authors Hiroshi Seki, Takaaki Hori, Shinji Watanabe
Abstract Attention-based encoder-decoder networks use a left-to-right beam search algorithm in the inference step. The current beam search expands hypotheses and traverses the expanded hypotheses at the next time step. This traversal is generally implemented with a for-loop, which slows down the recognition process. In this paper, we propose a parallelism technique for beam search, which accelerates the search process by vectorizing multiple hypotheses to eliminate the for-loop. We also propose a technique to batch multiple speech utterances for offline recognition, which removes the for-loop over multiple utterances. Unlike during training, this extension is not trivial during beam search due to the pruning and thresholding techniques used for efficient decoding. In addition, our method can combine the scores of external modules, RNNLM and CTC, in a batch as shallow fusion. We achieved a 3.7x speedup compared with the original beam search algorithm by vectorizing hypotheses, and a 10.5x speedup by further moving the processing to a GPU.
Tasks Speech Recognition
Published 2018-11-12
URL http://arxiv.org/abs/1811.04568v1
PDF http://arxiv.org/pdf/1811.04568v1.pdf
PWC https://paperswithcode.com/paper/vectorization-of-hypotheses-and-speech-for
Repo
Framework
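
The core vectorization trick can be pictured as follows: instead of looping over live hypotheses, form the full (hypothesis x token) score matrix in one shot and prune it with a single top-k. The sketch below shows one such expansion step in NumPy with made-up scores; the paper's method additionally batches utterances and folds in RNNLM and CTC scores.

```python
import numpy as np

def vectorized_beam_step(hyp_scores, log_probs, beam):
    """One beam-search expansion without a per-hypothesis for-loop: scores of
    all (hypothesis, token) continuations form a single matrix that is pruned
    with one top-k. Illustrative only."""
    total = hyp_scores[:, None] + log_probs           # (beam, vocab) joint scores
    flat = total.ravel()
    top = np.argpartition(-flat, beam)[:beam]         # unordered top-k
    top = top[np.argsort(-flat[top])]                 # sort the k survivors
    src_hyp, token = np.unravel_index(top, total.shape)
    return flat[top], src_hyp, token

# Toy step: 3 live hypotheses, vocabulary of 6 tokens.
rng = np.random.default_rng(0)
scores, parents, tokens = vectorized_beam_step(
    hyp_scores=np.array([-1.0, -1.2, -2.0]),
    log_probs=np.log(rng.dirichlet(np.ones(6), size=3)),
    beam=3)
print(parents, tokens, scores.round(2))
```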

Differentially Private False Discovery Rate Control

Title Differentially Private False Discovery Rate Control
Authors Cynthia Dwork, Weijie J. Su, Li Zhang
Abstract Differential privacy provides a rigorous framework for privacy-preserving data analysis. This paper proposes the first differentially private procedure for controlling the false discovery rate (FDR) in multiple hypothesis testing. Inspired by the Benjamini-Hochberg procedure (BHq), our approach is to first repeatedly add noise to the logarithms of the p-values to ensure differential privacy and to select an approximately smallest p-value serving as a promising candidate at each iteration; the selected p-values are further supplied to the BHq and our private procedure releases only the rejected ones. Apart from the privacy considerations, we develop a new technique that is based on a backward submartingale for proving FDR control of a broad class of multiple testing procedures, including our private procedure, and both the BHq step-up and step-down procedures. As a novel aspect, the proof works for arbitrary dependence between the true null and false null test statistics, while FDR control is maintained up to a small multiplicative factor. This theoretical guarantee is the first in the FDR literature to explain the empirical validity of the BHq procedure in three simulation studies.
Tasks
Published 2018-07-11
URL http://arxiv.org/abs/1807.04209v1
PDF http://arxiv.org/pdf/1807.04209v1.pdf
PWC https://paperswithcode.com/paper/differentially-private-false-discovery-rate
Repo
Framework
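
For reference, the non-private Benjamini-Hochberg step-up procedure that the private method builds on can be written in a few lines. The private variant additionally perturbs the log p-values with calibrated noise and releases only the rejections; that part is not reproduced here.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.1):
    """Standard (non-private) BHq step-up procedure: reject the hypotheses with
    the k smallest p-values, where k is the largest index such that
    p_(k) <= q * k / m."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    thresh = q * (np.arange(1, m + 1) / m)
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

p_vals = [0.001, 0.008, 0.039, 0.041, 0.09, 0.21, 0.6]
print(benjamini_hochberg(p_vals, q=0.1))   # rejects the four smallest p-values
```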

Infinite Curriculum Learning for Efficiently Detecting Gastric Ulcers in WCE Images

Title Infinite Curriculum Learning for Efficiently Detecting Gastric Ulcers in WCE Images
Authors Xiaolu Zhang, Shiwan Zhao, Lingxi Xie
Abstract The Wireless Capsule Endoscopy (WCE) is becoming a popular way of screening gastrointestinal system diseases and cancer. However, the time-consuming process of inspecting WCE data limits its applications and increases the cost of examinations. This paper considers WCE-based gastric ulcer detection, in which the major challenge is to detect the lesions in a local region. We propose an approach named infinite curriculum learning, which generalizes curriculum learning to an infinite sampling space by approximately measuring the difficulty of each patch by its scale. This allows us to adapt our model from local patches to global images gradually, leading to a consistent accuracy gain. Experiments are performed on a large dataset with more than 3 million WCE images. Our approach achieves a binary classification accuracy of 87%, and is able to detect some lesions mis-annotated by the physicians. In a real-world application, our approach can reduce the workload of a physician by 90%-98% in gastric ulcer screening.
Tasks
Published 2018-09-07
URL http://arxiv.org/abs/1809.02371v1
PDF http://arxiv.org/pdf/1809.02371v1.pdf
PWC https://paperswithcode.com/paper/infinite-curriculum-learning-for-efficiently
Repo
Framework
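
The "infinite curriculum" can be read as a scheduler over a continuous space of patch scales, with difficulty proxied by scale. The sketch below draws patch scales from a range that widens as training progresses; the linear schedule and the scale bounds are hypothetical choices, not the paper's exact scheme.

```python
import numpy as np

def sample_patch_scale(progress, s_min=0.1, s_max=1.0, rng=None):
    """Curriculum over a continuous space of patch scales: early in training,
    draw small (local, easier) patches; later, draw scales up to the full
    image. The linear schedule is a hypothetical choice."""
    rng = rng or np.random.default_rng()
    upper = s_min + progress * (s_max - s_min)   # allowed range widens over time
    return rng.uniform(s_min, upper)

rng = np.random.default_rng(0)
for progress in (0.0, 0.5, 1.0):                 # fraction of training completed
    scales = [sample_patch_scale(progress, rng=rng) for _ in range(4)]
    print(progress, np.round(scales, 2))
```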

Dealing with Limited Backhaul Capacity in Millimeter Wave Systems: A Deep Reinforcement Learning Approach

Title Dealing with Limited Backhaul Capacity in Millimeter Wave Systems: A Deep Reinforcement Learning Approach
Authors Mingjie Feng, Shiwen Mao
Abstract Millimeter wave (mmWave) communication is one of the key technologies of fifth-generation (5G) wireless systems for achieving the expected 1000x data rate. With the large bandwidth at the mmWave band, the link capacity between users and base stations (BS) can be much higher than in sub-6 GHz wireless systems. Meanwhile, due to the high cost of infrastructure upgrades, it would be difficult for operators to drastically enhance the capacity of the backhaul links between mmWave BSs and the core network. As a result, the data rate provided by the backhaul may not be sufficient to support all mmWave links, and the backhaul connection becomes the new bottleneck that limits system performance. On the other hand, as mmWave channels are subject to random blockage, the data rates of mmWave users vary significantly over time. With limited backhaul capacity and highly dynamic user data rates, how to allocate backhaul resources to each user remains a challenge for mmWave systems. In this article, we present a deep reinforcement learning (DRL) approach to address this challenge. By learning the blockage pattern, the system dynamics can be captured and predicted, resulting in efficient utilization of backhaul resources. We begin with a discussion of DRL and its application in wireless systems. We then investigate the problem of backhaul resource allocation and present the DRL-based solution. Finally, we discuss open problems for future research and conclude the article.
Tasks
Published 2018-12-27
URL http://arxiv.org/abs/1901.01119v1
PDF http://arxiv.org/pdf/1901.01119v1.pdf
PWC https://paperswithcode.com/paper/dealing-with-limited-backhaul-capacity-in
Repo
Framework

Collaborative Dense SLAM

Title Collaborative Dense SLAM
Authors Louis Gallagher, John B. McDonald
Abstract In this paper, we present a new system for live collaborative dense surface reconstruction. Cooperative robotics, multi-participant augmented reality and human-robot interaction are all examples of situations where collaborative mapping can be leveraged for greater agent autonomy. Our system builds on ElasticFusion to allow a number of cameras starting with unknown initial relative positions to maintain local maps using the original algorithm. By carrying out visual place recognition across these local maps, the system can identify when two maps overlap in space, providing an inter-map constraint from which the system can derive the relative poses of the two maps. Using these resulting pose constraints, our system performs map merging, allowing multiple cameras to fuse their measurements into a single shared reconstruction. The advantage of this approach is that it avoids replication of structures subsequent to loop closures, where multiple cameras traverse the same regions of the environment. Furthermore, it allows cameras to directly exploit and update regions of the environment previously mapped by other cameras within the system. We provide both quantitative and qualitative analyses using the synthetic ICL-NUIM dataset and the real-world Freiburg dataset, including the impact of multi-camera mapping on surface reconstruction accuracy, camera pose estimation accuracy and overall processing time. We also include qualitative results in the form of sample reconstructions of room-sized environments with up to 3 cameras undergoing intersecting and loopy trajectories.
Tasks Pose Estimation, Visual Place Recognition
Published 2018-11-19
URL http://arxiv.org/abs/1811.07632v2
PDF http://arxiv.org/pdf/1811.07632v2.pdf
PWC https://paperswithcode.com/paper/collaborative-dense-slam
Repo
Framework

Performance Metrics (Error Measures) in Machine Learning Regression, Forecasting and Prognostics: Properties and Typology

Title Performance Metrics (Error Measures) in Machine Learning Regression, Forecasting and Prognostics: Properties and Typology
Authors Alexei Botchkarev
Abstract Performance metrics (error measures) are vital components of the evaluation frameworks in various fields. The intention of this study was to provide an overview of a variety of performance metrics and of approaches to their classification. The main goal of the study was to develop a typology that will help to improve our knowledge and understanding of metrics and facilitate their selection in machine learning regression, forecasting and prognostics. Based on the analysis of the structure of numerous performance metrics, we propose a framework of metrics which includes four (4) categories: primary metrics, extended metrics, composite metrics, and hybrid sets of metrics. The paper identified three (3) key components (dimensions) that determine the structure and properties of primary metrics: the method of determining point distance, the method of normalization, and the method of aggregating point distances over a data set.
Tasks
Published 2018-09-09
URL http://arxiv.org/abs/1809.03006v1
PDF http://arxiv.org/pdf/1809.03006v1.pdf
PWC https://paperswithcode.com/paper/performance-metrics-error-measures-in-machine
Repo
Framework
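
The three dimensions of a primary metric map naturally onto three function slots: a point-distance, a normalization, and an aggregation. The sketch below composes RMSE and MAPE from those slots; the decomposition follows the paper's framing, while the helper names are ours.

```python
import numpy as np

def primary_metric(y_true, y_pred, distance, normalize, aggregate):
    """Compose a primary error metric from the three dimensions identified
    above: a point-distance function, a normalization, and an aggregation
    over the data set."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return aggregate(normalize(distance(y_true, y_pred), y_true))

y, yhat = [3.0, 5.0, 2.5, 7.0], [2.5, 5.0, 4.0, 8.0]

# RMSE: squared point distance, no normalization, root-mean aggregation.
rmse = primary_metric(y, yhat,
                      distance=lambda t, p: (t - p) ** 2,
                      normalize=lambda d, t: d,
                      aggregate=lambda d: np.sqrt(d.mean()))

# MAPE: absolute point distance, normalized by the true value, mean aggregation (in %).
mape = primary_metric(y, yhat,
                      distance=lambda t, p: np.abs(t - p),
                      normalize=lambda d, t: d / np.abs(t),
                      aggregate=lambda d: 100 * d.mean())

print(round(rmse, 3), round(mape, 2))
```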

A Deep Learning Framework for Automatic Diagnosis in Lung Cancer

Title A Deep Learning Framework for Automatic Diagnosis in Lung Cancer
Authors Nikolay Burlutskiy, Feng Gu, Lena Kajland Wilen, Max Backman, Patrick Micke
Abstract We developed a deep learning framework that helps to automatically identify and segment lung cancer areas in patients' tissue specimens. The study was based on a cohort of lung cancer patients operated on at the Uppsala University Hospital. The tissues were reviewed by lung pathologists and the cores were then compiled into tissue micro-arrays (TMAs). For the experiments, hematoxylin-eosin-stained slides from 712 patients were scanned and then manually annotated. These scans and annotations were used to train the segmentation models of the developed framework. The performance of the framework was evaluated on fully annotated TMA cores from 178 patients, reaching a pixel-wise precision of 0.80 and recall of 0.86. Finally, publicly available Stanford TMA cores were used to qualitatively demonstrate the high performance of the framework.
Tasks
Published 2018-07-27
URL http://arxiv.org/abs/1807.10466v1
PDF http://arxiv.org/pdf/1807.10466v1.pdf
PWC https://paperswithcode.com/paper/a-deep-learning-framework-for-automatic
Repo
Framework
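
The reported evaluation is pixel-wise precision and recall over annotated TMA cores; for clarity, here is how those two numbers are computed for a pair of binary masks (toy arrays, not the study's data).

```python
import numpy as np

def pixelwise_precision_recall(pred_mask, true_mask):
    """Pixel-wise precision and recall for binary segmentation masks, the
    evaluation quoted above (0.80 precision, 0.86 recall on TMA cores)."""
    pred, true = np.asarray(pred_mask, bool), np.asarray(true_mask, bool)
    tp = np.logical_and(pred, true).sum()
    precision = tp / max(pred.sum(), 1)   # fraction of predicted pixels that are correct
    recall = tp / max(true.sum(), 1)      # fraction of annotated pixels that are found
    return precision, recall

pred = np.array([[1, 1, 0], [0, 1, 0]])
true = np.array([[1, 0, 0], [0, 1, 1]])
print(pixelwise_precision_recall(pred, true))   # (0.666..., 0.666...)
```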

Learning Dynamic Embeddings from Temporal Interactions

Title Learning Dynamic Embeddings from Temporal Interactions
Authors Srijan Kumar, Xikun Zhang, Jure Leskovec
Abstract Modeling a sequence of interactions between users and items (e.g., products, posts, or courses) is crucial in domains such as e-commerce, social networking, and education to predict future interactions. Representation learning presents an attractive solution to model the dynamic evolution of user and item properties, where each user/item can be embedded in a Euclidean space and its evolution can be modeled by dynamic changes in embedding. However, existing embedding methods either generate static embeddings, treat users and items independently, or are not scalable. Here we present JODIE, a coupled recurrent model to jointly learn the dynamic embeddings of users and items from a sequence of user-item interactions. JODIE has three components. First, the update component updates the user and item embeddings from each interaction using their previous embeddings with two mutually recursive Recurrent Neural Networks. Second, a novel projection component is trained to forecast the embedding of users at any future time. Finally, the prediction component directly predicts the embedding of the item in a future interaction. For models that learn from a sequence of interactions, traditional training-data batching cannot be done due to complex user-user dependencies. Therefore, we present a novel batching algorithm called t-Batch that generates time-consistent batches of training data that can run in parallel, giving a massive speed-up. We conduct six experiments on two prediction tasks (future interaction prediction and state change prediction) using four real-world datasets. We show that JODIE outperforms six state-of-the-art algorithms in these tasks by up to 22.4%. Moreover, we show that JODIE is highly scalable and up to 9.2x faster than comparable models. As an additional experiment, we illustrate that JODIE can predict student drop-out from courses five interactions in advance.
Tasks Representation Learning
Published 2018-12-06
URL http://arxiv.org/abs/1812.02289v1
PDF http://arxiv.org/pdf/1812.02289v1.pdf
PWC https://paperswithcode.com/paper/learning-dynamic-embeddings-from-temporal
Repo
Framework
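
A highly simplified sketch of the mutually recursive update component: the user embedding is updated from the item's previous embedding and vice versa, with the elapsed time as an extra input. The weights are random placeholders, and the projection and prediction components are omitted; this illustrates the coupling, not the JODIE architecture itself.

```python
import numpy as np

def rnn_cell(W_self, W_other, emb, other_emb, dt):
    """Toy mutually recursive update: the new embedding depends on the entity's
    previous embedding, the other party's embedding, and the elapsed time.
    Randomly initialised weights stand in for trained parameters."""
    return np.tanh(W_self @ emb + W_other @ other_emb + dt)

rng = np.random.default_rng(0)
d = 8
W_uu, W_ui = rng.normal(scale=0.1, size=(d, d)), rng.normal(scale=0.1, size=(d, d))
W_ii, W_iu = rng.normal(scale=0.1, size=(d, d)), rng.normal(scale=0.1, size=(d, d))
user, item = np.zeros(d), np.zeros(d)

# Process a short interaction sequence (elapsed times in arbitrary units).
for dt in [0.1, 0.5, 0.2]:
    user_new = rnn_cell(W_uu, W_ui, user, item, dt)   # user update reads the item
    item_new = rnn_cell(W_ii, W_iu, item, user, dt)   # item update reads the user
    user, item = user_new, item_new
print(user.round(3))
```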

FANet: Quality-Aware Feature Aggregation Network for Robust RGB-T Tracking

Title FANet: Quality-Aware Feature Aggregation Network for Robust RGB-T Tracking
Authors Yabin Zhu, Chenglong Li, Bin Luo, Jin Tang
Abstract This paper investigates how to perform robust visual tracking in adverse and challenging conditions using complementary visual and thermal infrared data (RGBT tracking). We propose a novel deep network architecture called quality-aware Feature Aggregation Network (FANet) for robust RGBT tracking. Unlike existing RGBT trackers, our FANet aggregates hierarchical deep features within each modality to handle the challenge of significant appearance changes caused by deformation, low illumination, background clutter and occlusion. In particular, we employ max pooling to transform these hierarchical and multi-resolution features into a uniform space at the same resolution, and use 1x1 convolutions to compress feature dimensions for more effective hierarchical feature aggregation. To model the interactions between the RGB and thermal modalities, we elaborately design an adaptive aggregation subnetwork that integrates features from different modalities based on their reliabilities, and is thus able to alleviate noise effects introduced by low-quality sources. The whole FANet is trained in an end-to-end manner. Extensive experiments on large-scale benchmark datasets demonstrate highly accurate performance against other state-of-the-art RGBT tracking methods.
Tasks Rgb-T Tracking, Visual Tracking
Published 2018-11-24
URL https://arxiv.org/abs/1811.09855v2
PDF https://arxiv.org/pdf/1811.09855v2.pdf
PWC https://paperswithcode.com/paper/fanet-quality-aware-feature-aggregation
Repo
Framework
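
As a rough picture of the aggregation step, the sketch below (PyTorch, with made-up channel counts and a 7x7 target resolution) pools hierarchical features to a common spatial size with adaptive max pooling, compresses each level with a 1x1 convolution, and concatenates them. It is a generic feature-aggregation block, not FANet's exact architecture or its adaptive modality-weighting subnetwork.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAggregation(nn.Module):
    """Sketch of the aggregation idea: pool hierarchical features to a common
    spatial resolution, compress each with a 1x1 convolution, and concatenate.
    Channel sizes and the target resolution are illustrative."""
    def __init__(self, in_channels=(64, 128, 256), out_channels=32, size=7):
        super().__init__()
        self.size = size
        self.compress = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels])

    def forward(self, features):
        pooled = [F.adaptive_max_pool2d(f, self.size) for f in features]   # uniform space
        compressed = [conv(p) for conv, p in zip(self.compress, pooled)]   # 1x1 compression
        return torch.cat(compressed, dim=1)                                # aggregate

feats = [torch.randn(1, c, s, s) for c, s in [(64, 28), (128, 14), (256, 7)]]
print(HierarchicalAggregation()(feats).shape)   # torch.Size([1, 96, 7, 7])
```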