January 27, 2020

3092 words 15 mins read

Paper Group ANR 1136

Cascaded Parallel Filtering for Memory-Efficient Image-Based Localization

Title Cascaded Parallel Filtering for Memory-Efficient Image-Based Localization
Authors Wentao Cheng, Weisi Lin, Kan Chen, Xinfeng Zhang
Abstract Image-based localization (IBL) aims to estimate the 6DOF camera pose for a given query image. The camera pose can be computed from 2D-3D matches between the query image and Structure-from-Motion (SfM) models. Despite recent advances in IBL, it remains difficult to simultaneously resolve the memory consumption and match ambiguity problems of large SfM models. In this work, we propose a cascaded parallel filtering method that leverages feature, visibility and geometry information to filter wrong matches under a binary feature representation. The core idea is that we divide the challenging filtering task into two parallel tasks before deriving an auxiliary camera pose for final filtering. One task focuses on preserving potentially correct matches, while the other focuses on obtaining high-quality matches to facilitate subsequent, more powerful filtering. Moreover, our proposed method improves localization accuracy by introducing a quality-aware spatial reconfiguration method and a principal-focal-length-enhanced pose estimation method. Experimental results on real-world datasets demonstrate that our method achieves very competitive localization performance in a memory-efficient manner.
Tasks Image-Based Localization, Pose Estimation
Published 2019-08-16
URL https://arxiv.org/abs/1908.06141v1
PDF https://arxiv.org/pdf/1908.06141v1.pdf
PWC https://paperswithcode.com/paper/cascaded-parallel-filtering-for-memory
Repo
Framework
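The filtering stage described in the abstract operates on binary feature descriptors. As a minimal, illustrative sketch (not the paper's cascaded pipeline), a Lowe-style ratio test over Hamming distances shows how ambiguous query-to-model matches can be rejected cheaply; the function names and the ratio value are assumptions for illustration:

```python
import numpy as np

def hamming(a, b):
    # Hamming distance between packed binary descriptors (uint8 arrays),
    # e.g. ORB-style descriptors where each byte holds 8 comparison bits.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def filter_matches(query_desc, model_desc, ratio=0.8):
    # Keep a query feature only if its best model match is clearly better
    # than the second best; ties and near-ties are treated as ambiguous.
    matches = []
    for qi, q in enumerate(query_desc):
        d = np.array([hamming(q, m) for m in model_desc])
        best, second = np.argsort(d)[:2]
        if d[best] < ratio * d[second]:
            matches.append((qi, int(best)))
    return matches
```

In practice such a test is one of several cues; the paper additionally uses visibility and geometry information before the final pose-based filtering.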

Evolving the pulmonary nodules diagnosis from classical approaches to deep learning aided decision support: three decades development course and future prospect

Title Evolving the pulmonary nodules diagnosis from classical approaches to deep learning aided decision support: three decades development course and future prospect
Authors Bo Liu, Wenhao Chi, Xinran Li, Peng Li, Wenhua Liang, Haiping Liu, Wei Wang, Jianxing He
Abstract Lung cancer is the commonest cause of cancer deaths worldwide, and its mortality can be reduced significantly by performing early diagnosis and screening. Since the 1960s, driven by the pressing need to accurately and effectively interpret the massive volume of chest images generated daily, computer-assisted diagnosis of pulmonary nodules has opened up new opportunities to relax the limitations imposed by physicians’ subjectivity, experience and fatigue. Significant advances have been achieved since the 1980s, and consistent endeavors have been exerted to deal with the grand challenges of accurately detecting pulmonary nodules with high sensitivity at low false-positive rates, and of precisely differentiating between benign and malignant nodules. The main goal of this investigation is to provide a comprehensive state-of-the-art review of the computer-assisted nodule detection and benign-malignant classification techniques developed over three decades, which have evolved from the complicated ad hoc analysis pipelines of conventional approaches to simplified, seamlessly integrated deep learning techniques. This review also identifies challenges and highlights opportunities for future work in learning models, learning algorithms and enhancement schemes for bridging the current state to future prospects and satisfying future demand. To the authors’ knowledge, this is the first review covering the past thirty years of development in computer-assisted diagnosis of lung nodules. We also acknowledge the value of multidisciplinary research that could bring computer-assisted diagnosis of pulmonary nodules into mainstream clinical medicine, advance state-of-the-art clinical applications, and benefit both physicians and patients.
Tasks
Published 2019-01-23
URL http://arxiv.org/abs/1901.07858v2
PDF http://arxiv.org/pdf/1901.07858v2.pdf
PWC https://paperswithcode.com/paper/evolving-the-pulmonary-nodules-diagnosis-from
Repo
Framework

Self-enhancement of automatic tunnel accident detection (TAD) on CCTV by AI deep-learning

Title Self-enhancement of automatic tunnel accident detection (TAD) on CCTV by AI deep-learning
Authors Kyu-Beom Lee, Hyu-Soung Shin
Abstract The deep-learning-based tunnel accident detection (TAD) system (Lee 2019) was installed to monitor 9 CCTVs at the XX site in November 2018. Initial deep-learning training was performed on 70,914 labeled images and their label data. However, sunlight, vehicle tail lights, and the warning lights of working vehicles were recognized as fire, and pedestrians were falsely detected in the tunnel lane, for instance from elongated black objects. To solve these problems, as shown in Fig. 1, the false detections collected in the field were labeled, used for retraining, and the retrained model was reapplied in the field. As a result, false detections of pedestrians and fire were significantly reduced.
Tasks
Published 2019-10-11
URL https://arxiv.org/abs/1910.11072v1
PDF https://arxiv.org/pdf/1910.11072v1.pdf
PWC https://paperswithcode.com/paper/self-enhancement-of-automatic-tunnel-accident
Repo
Framework
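The self-enhancement cycle the abstract describes — collect false detections in the field, label them, fold them back into the training set, retrain — is essentially hard-example mining. A toy sketch of that loop, where every name (the stand-in detector, `review`, etc.) is hypothetical and not from the paper:

```python
def retrain_with_false_detections(train_set, detector_fit, detect, field_images, review):
    """One self-enhancement round.

    detector_fit: trains a detector from (image, label) pairs.
    detect:       runs the detector on an image, returns a predicted label.
    review:       human (or ground-truth) check, returns the corrected label.
    """
    model = detector_fit(train_set)
    for img in field_images:
        pred = detect(model, img)
        truth = review(img)
        if pred != truth:                   # false detection found in the field
            train_set.append((img, truth))  # fold it back in as a hard example
    return detector_fit(train_set)          # retrain on the augmented set
```

The key property is that each field deployment generates exactly the failure cases the next training round needs.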

Restless dependent bandits with fading memory

Title Restless dependent bandits with fading memory
Authors Oleksandr Zadorozhnyi, Gilles Blanchard, Alexandra Carpentier
Abstract We study the stochastic multi-armed bandit problem in the case when the arm samples are dependent over time and generated from so-called weak $\mathcal{C}$-mixing processes. We establish a $\mathcal{C}$-Mix Improved UCB algorithm and provide both problem-dependent and problem-independent regret analyses in two different scenarios. In the first, so-called fast-mixing scenario, we show that the pseudo-regret enjoys the same upper bound (up to a factor) as for i.i.d. observations; whereas in the second, slow-mixing scenario, we discover a surprising effect: the regret upper bound is similar to the independent case, with an incremental {\em additive} term which does not depend on the number of arms. The analysis of the slow-mixing scenario is supported by a minimax lower bound, which (up to a $\log(T)$ factor) matches the obtained upper bound.
Tasks
Published 2019-06-25
URL https://arxiv.org/abs/1906.10454v1
PDF https://arxiv.org/pdf/1906.10454v1.pdf
PWC https://paperswithcode.com/paper/restless-dependent-bandits-with-fading-memory
Repo
Framework
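For orientation, here is a sketch of the classical UCB1 baseline that the paper's C-Mix Improved UCB builds on. The paper modifies the confidence width to account for mixing in the arm samples; this sketch assumes i.i.d. arms and standard UCB1 bonuses:

```python
import numpy as np

def ucb1(pull, n_arms, horizon, rng):
    # Classical UCB1: play each arm once, then pick the arm maximizing
    # empirical mean + sqrt(2 log t / n_i) confidence bonus.
    counts = np.zeros(n_arms, dtype=int)
    sums = np.zeros(n_arms)
    for t in range(horizon):
        if t < n_arms:                       # initialization: pull each arm once
            arm = t
        else:
            means = sums / counts
            bonus = np.sqrt(2 * np.log(t + 1) / counts)
            arm = int(np.argmax(means + bonus))
        sums[arm] += pull(arm, rng)
        counts[arm] += 1
    return counts
```

Under dependence, the empirical means concentrate more slowly, which is exactly why the paper's variant widens the bonus term.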

Secure Architectures Implementing Trusted Coalitions for Blockchained Distributed Learning (TCLearn)

Title Secure Architectures Implementing Trusted Coalitions for Blockchained Distributed Learning (TCLearn)
Authors Sebastien Lugan, Paul Desbordes, Luis Xavier Ramos Tormo, Axel Legay, Benoit Macq
Abstract Distributed learning across a coalition of organizations allows the members of the coalition to train and share a model without sharing the data used to optimize this model. In this paper, we propose new secure architectures that guarantee the preservation of data privacy, a trustworthy sequence of iterative learning, and equitable sharing of the learned model among the members of the coalition, by using adequate encryption and blockchain mechanisms. We exemplify their deployment in the case of the distributed optimization of a deep learning convolutional neural network trained on medical images.
Tasks Distributed Optimization
Published 2019-06-18
URL https://arxiv.org/abs/1906.07690v1
PDF https://arxiv.org/pdf/1906.07690v1.pdf
PWC https://paperswithcode.com/paper/secure-architectures-implementing-trusted
Repo
Framework

Importance Sampling via Local Sensitivity

Title Importance Sampling via Local Sensitivity
Authors Anant Raj, Cameron Musco, Lester Mackey
Abstract Given a loss function $F:\mathcal{X} \rightarrow \mathbb{R}^+$ that can be written as the sum of losses over a large set of inputs $a_1,\ldots, a_n$, it is often desirable to approximate $F$ by subsampling the input points. Strong theoretical guarantees require taking into account the importance of each point, measured by how much its individual loss contributes to $F(x)$. Maximizing this importance over all $x \in \mathcal{X}$ yields the \emph{sensitivity score} of $a_i$. Sampling with probabilities proportional to these scores gives strong guarantees, allowing one to approximately minimize $F$ using just the subsampled points. Unfortunately, sensitivity sampling is difficult to apply since (1) it is unclear how to efficiently compute the sensitivity scores and (2) the sample size required is often impractically large. To overcome both obstacles we introduce \emph{local sensitivity}, which measures data point importance in a ball around some center $x_0$. We show that the local sensitivity can be efficiently estimated using the \emph{leverage scores} of a quadratic approximation to $F$ and that the sample size required to approximate $F$ around $x_0$ can be bounded. We propose employing local sensitivity sampling in an iterative optimization method and analyze its convergence when $F$ is smooth and convex.
Tasks
Published 2019-11-04
URL https://arxiv.org/abs/1911.01575v2
PDF https://arxiv.org/pdf/1911.01575v2.pdf
PWC https://paperswithcode.com/paper/importance-sampling-via-local-sensitivity
Repo
Framework
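The leverage-score connection can be illustrated on a plain least-squares design matrix: the leverage score of each row is the squared row norm of the orthonormal factor from a thin SVD, and normalizing those scores gives importance-sampling probabilities. This is a generic leverage-score sketch, not the paper's local-sensitivity estimator (which applies the idea to a quadratic approximation of F near a center point):

```python
import numpy as np

def leverage_sampling_probs(A):
    # Leverage scores of the rows of A: squared row norms of U from a
    # thin SVD. They are each at most 1 and sum to rank(A); normalizing
    # them yields importance-sampling probabilities over the rows.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    scores = (U ** 2).sum(axis=1)
    return scores / scores.sum()

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))
p = leverage_sampling_probs(A)
idx = rng.choice(len(A), size=20, p=p)   # subsample rows by importance
```

Rows with high leverage (directions the rest of the data cannot reconstruct) are kept with high probability, which is what makes the subsample preserve the objective.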

Coverage Testing of Deep Learning Models using Dataset Characterization

Title Coverage Testing of Deep Learning Models using Dataset Characterization
Authors Senthil Mani, Anush Sankaran, Srikanth Tamilselvam, Akshay Sethi
Abstract Deep Neural Networks (DNNs), with their promising performance, are being increasingly used in safety-critical applications such as autonomous driving, cancer detection, and secure authentication. With the growing importance of deep learning, there is a need for a more standardized framework to evaluate and test deep learning models. The primary challenges in the automated generation of extensive test cases are: (i) neural networks are difficult to interpret and debug, and (ii) the limited availability of human annotators to generate specialized test points. In this research, we explain the necessity of measuring the quality of a dataset and propose a test case generation system guided by the dataset properties. From a testing perspective, four different dataset quality dimensions are proposed: (i) equivalence partitioning, (ii) centroid positioning, (iii) boundary conditioning, and (iv) pair-wise boundary conditioning. The proposed system is evaluated on well-known image classification datasets such as MNIST, Fashion-MNIST, CIFAR10, CIFAR100, and SVHN against popular deep learning models such as LeNet, ResNet-20, and VGG-19. Further, we conduct various experiments to demonstrate the effectiveness of the systematic test case generation system for evaluating deep learning models.
Tasks Autonomous Driving, Image Classification
Published 2019-11-17
URL https://arxiv.org/abs/1911.07309v1
PDF https://arxiv.org/pdf/1911.07309v1.pdf
PWC https://paperswithcode.com/paper/coverage-testing-of-deep-learning-models
Repo
Framework
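One illustrative reading of the "centroid positioning" and "boundary conditioning" dimensions above is to score each test point by how close it lies to its own class centroid relative to the nearest other-class centroid. This is a hypothetical metric for intuition, not the paper's exact formulation:

```python
import numpy as np

def boundary_scores(X, y):
    # For each point: distance to its own class centroid divided by the
    # distance to the nearest other-class centroid. Scores near 0 are
    # central ("centroid positioning"); scores near or above 1 indicate
    # boundary cases ("boundary conditioning").
    X, y = np.asarray(X), np.asarray(y)
    classes = np.unique(y)
    cents = np.stack([X[y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    own_idx = np.searchsorted(classes, y)
    own = d[np.arange(len(X)), own_idx]
    d_other = d.copy()
    d_other[np.arange(len(X)), own_idx] = np.inf
    return own / d_other.min(axis=1)
```

Ranking test candidates by such a score lets a generator deliberately cover both central and boundary regions of each class.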

Pro-Cam SSfM: Projector-Camera System for Structure and Spectral Reflectance from Motion

Title Pro-Cam SSfM: Projector-Camera System for Structure and Spectral Reflectance from Motion
Authors Chunyu Li, Yusuke Monno, Hironori Hidaka, Masatoshi Okutomi
Abstract In this paper, we propose a novel projector-camera system for practical and low-cost acquisition of a dense object 3D model with the spectral reflectance property. In our system, we use a standard RGB camera and leverage an off-the-shelf projector as active illumination for both the 3D reconstruction and the spectral reflectance estimation. We first reconstruct the 3D points while estimating the poses of the camera and the projector, which are alternately moved around the object, by combining multi-view structured light and structure-from-motion (SfM) techniques. We then exploit the projector for multispectral imaging and estimate the spectral reflectance of each 3D point based on a novel spectral reflectance estimation model considering the geometric relationship between the reconstructed 3D points and the estimated projector positions. Experimental results on several real objects demonstrate that our system can precisely acquire a dense 3D model with the full spectral reflectance property using off-the-shelf devices.
Tasks 3D Reconstruction
Published 2019-08-22
URL https://arxiv.org/abs/1908.08185v1
PDF https://arxiv.org/pdf/1908.08185v1.pdf
PWC https://paperswithcode.com/paper/pro-cam-ssfm-projector-camera-system-for
Repo
Framework

Un systeme de lemmatisation pour les applications de TALN

Title Un systeme de lemmatisation pour les applications de TALN
Authors Sadik Bessou, Mohamed Louail, Allaoua Refoufi, Zehour Kadem, Mohamed Touahria
Abstract This paper presents a method of stemming for Arabic texts based on linguistic techniques from natural language processing. The method relies on the notion of scheme (one of the strong points of the morphology of the Arabic language). The advantage of this approach is that it does not use a dictionary of inflections but a smart dynamic recognition of the different words of the language.
Tasks
Published 2019-11-10
URL https://arxiv.org/abs/1911.03922v2
PDF https://arxiv.org/pdf/1911.03922v2.pdf
PWC https://paperswithcode.com/paper/un-systeme-de-lemmatisation-pour-les
Repo
Framework

Coin.AI: A Proof-of-Useful-Work Scheme for Blockchain-based Distributed Deep Learning

Title Coin.AI: A Proof-of-Useful-Work Scheme for Blockchain-based Distributed Deep Learning
Authors Alejandro Baldominos, Yago Saez
Abstract One decade ago, Bitcoin was introduced, becoming the first cryptocurrency and establishing the concept of “blockchain” as a distributed ledger. As of today, there are many different implementations of cryptocurrencies working over a blockchain, with different approaches and philosophies. However, many of them share one common feature: they require proof-of-work to support the generation of blocks (mining) and, eventually, the generation of money. This proof-of-work scheme often consists of solving a cryptographic problem, most commonly inverting a hash value, which can only be achieved through brute force. The main drawback of proof-of-work is that it requires enormous amounts of energy with no useful outcome beyond supporting the currency. In this paper, we present a theoretical proposal that introduces a proof-of-useful-work scheme to support a cryptocurrency running over a blockchain, which we named Coin.AI. In this system, the mining scheme requires training deep learning models, and a block is only mined when the performance of such a model exceeds a threshold. The distributed system allows nodes to verify the models delivered by miners in an easy way (certainly much more efficiently than the mining process itself), determining when a block is to be generated. Additionally, this paper presents a proof-of-storage scheme for rewarding users that provide storage for the deep learning models, as well as a theoretical discussion of how the mechanics of the system could be articulated with the ultimate goal of democratizing access to artificial intelligence.
Tasks
Published 2019-03-23
URL https://arxiv.org/abs/1903.09800v2
PDF https://arxiv.org/pdf/1903.09800v2.pdf
PWC https://paperswithcode.com/paper/coinai-a-proof-of-useful-work-scheme-for
Repo
Framework
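The asymmetry the abstract relies on — training a model is expensive, but verifying its accuracy is cheap — can be sketched as a block-acceptance check. This is a simplified toy, and the field layout and threshold are illustrative, not Coin.AI's actual specification:

```python
import hashlib

def validate_block(prev_hash, model_bytes, accuracy, threshold=0.9):
    # A block is acceptable only if the submitted model beats the target
    # accuracy. A verifying node would obtain `accuracy` by re-running
    # the model on a held-out test set, which is far cheaper than the
    # training that produced the model.
    if accuracy < threshold:
        return None  # "mining" not yet successful; keep training
    payload = prev_hash + model_bytes + f"{accuracy:.4f}".encode()
    return hashlib.sha256(payload).hexdigest()
```

Raising the threshold over time plays the role that difficulty adjustment plays in hash-based proof-of-work.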

Structured Compression by Weight Encryption for Unstructured Pruning and Quantization

Title Structured Compression by Weight Encryption for Unstructured Pruning and Quantization
Authors Se Jung Kwon, Dongsoo Lee, Byeongwook Kim, Parichay Kapoor, Baeseong Park, Gu-Yeon Wei
Abstract Model compression techniques, such as pruning and quantization, are becoming increasingly important to reduce memory footprints and the amount of computation. Despite model size reduction, achieving performance enhancement on devices is, however, still challenging, mainly due to the irregular representations of sparse matrix formats. This paper proposes a new weight representation scheme for Sparse Quantized Neural Networks, specifically those obtained by a fine-grained, unstructured pruning method. The representation is encrypted into a structured, regular format, which can be efficiently decoded through an XOR-gate network during inference in a parallel manner. We demonstrate various deep learning models that can be compressed and represented by our proposed format at a fixed, high compression ratio. For example, for the fully-connected layers of AlexNet on the ImageNet dataset, we can represent the sparse weights with only 0.28 bits/weight for 1-bit quantization and a 91% pruning rate, with a fixed decoding rate and full memory bandwidth usage. Decoding through the XOR-gate network can be performed without any model accuracy degradation, with additional patch data associated with a small overhead.
Tasks Model Compression, Quantization
Published 2019-05-24
URL https://arxiv.org/abs/1905.10138v2
PDF https://arxiv.org/pdf/1905.10138v2.pdf
PWC https://paperswithcode.com/paper/structured-compression-by-unstructured
Repo
Framework
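The starting point of the scheme — fine-grained magnitude pruning followed by 1-bit quantization — can be sketched directly. The encryption into a regular XOR-decodable format is the paper's contribution and is not reproduced here; this sketch stops at the sparse binary weights, with an illustrative per-tensor scale:

```python
import numpy as np

def prune_and_binarize(W, sparsity=0.91):
    # Unstructured magnitude pruning: zero out the smallest-magnitude
    # weights until the target sparsity is reached, then quantize the
    # survivors to 1 bit (their sign), scaled by the mean surviving
    # magnitude so the tensor keeps roughly the right scale.
    k = int(sparsity * W.size)
    thresh = np.sort(np.abs(W), axis=None)[k]
    mask = np.abs(W) >= thresh
    alpha = np.abs(W[mask]).mean() if mask.any() else 0.0
    return np.sign(W) * alpha * mask, mask
```

The result needs only one bit per surviving weight plus the mask, which is the irregular structure the paper's encrypted format then regularizes.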

Option Compatible Reward Inverse Reinforcement Learning

Title Option Compatible Reward Inverse Reinforcement Learning
Authors Rakhoon Hwang, Hanjin Lee, Hyung Ju Hwang
Abstract Reinforcement learning with complex tasks is a challenging problem. Often, expert demonstrations of complex multitasking operations are required to train agents. However, it is difficult to design a reward function for given complex tasks. In this paper, we solve a hierarchical inverse reinforcement learning (IRL) problem within the framework of options. A gradient method for parametrized options is used to deduce a defining equation for the Q-feature space, which leads to a reward feature space. Using a second-order optimality condition for option parameters, an optimal reward function is selected. Experimental results in both discrete and continuous domains confirm that our segmented rewards provide a solution to the IRL problem for multitasking operations and show good performance and robustness against the noise created by expert demonstrations.
Tasks
Published 2019-11-07
URL https://arxiv.org/abs/1911.02723v1
PDF https://arxiv.org/pdf/1911.02723v1.pdf
PWC https://paperswithcode.com/paper/option-compatible-reward-inverse
Repo
Framework

Peak Alignment of Gas Chromatography-Mass Spectrometry Data with Deep Learning

Title Peak Alignment of Gas Chromatography-Mass Spectrometry Data with Deep Learning
Authors Mike Li, X. Rosalind Wang
Abstract We present ChromAlignNet, a deep learning model for alignment of peaks in Gas Chromatography-Mass Spectrometry (GC-MS) data. In GC-MS data, a compound’s retention time (RT) may not stay fixed across multiple chromatograms. To use GC-MS data for biomarker discovery requires alignment of identical analyte’s RT from different samples. Current methods of alignment are all based on a set of formal, mathematical rules. We present a solution to GC-MS alignment using deep learning neural networks, which are more adept at complex, fuzzy data sets. We tested our model on several GC-MS data sets of various complexities and analysed the alignment results quantitatively. We show the model has very good performance (AUC $\sim 1$ for simple data sets and AUC $\sim 0.85$ for very complex data sets). Further, our model easily outperforms existing algorithms on complex data sets. Compared with existing methods, ChromAlignNet is very easy to use as it requires no user input of reference chromatograms and parameters. This method can easily be adapted to other similar data such as those from liquid chromatography. The source code is written in Python and available online.
Tasks
Published 2019-04-02
URL https://arxiv.org/abs/1904.01205v3
PDF https://arxiv.org/pdf/1904.01205v3.pdf
PWC https://paperswithcode.com/paper/peak-alignment-of-gc-ms-data-with-deep
Repo
Framework

Computer Science and Metaphysics: A Cross-Fertilization

Title Computer Science and Metaphysics: A Cross-Fertilization
Authors Daniel Kirchner, Christoph Benzmüller, Edward N. Zalta
Abstract Computational philosophy is the use of mechanized computational techniques to unearth philosophical insights that are either difficult or impossible to find using traditional philosophical methods. Computational metaphysics is computational philosophy with a focus on metaphysics. In this paper, we (a) develop results in modal metaphysics whose discovery was computer assisted, and (b) conclude that these results work not only to the obvious benefit of philosophy but also, less obviously, to the benefit of computer science, since the new computational techniques that led to these results may be more broadly applicable within computer science. The paper includes a description of our background methodology and how it evolved, and a discussion of our new results.
Tasks
Published 2019-05-01
URL https://arxiv.org/abs/1905.00787v4
PDF https://arxiv.org/pdf/1905.00787v4.pdf
PWC https://paperswithcode.com/paper/computer-science-and-metaphysics-a-cross
Repo
Framework

Aggregating E-commerce Search Results from Heterogeneous Sources via Hierarchical Reinforcement Learning

Title Aggregating E-commerce Search Results from Heterogeneous Sources via Hierarchical Reinforcement Learning
Authors Ryuichi Takanobu, Tao Zhuang, Minlie Huang, Jun Feng, Haihong Tang, Bo Zheng
Abstract In this paper, we investigate the task of aggregating search results from heterogeneous sources in an E-commerce environment. First, unlike traditional aggregated web search, which merely presents multi-sourced results on the first page, this new task may present aggregated results on all pages and has to dynamically decide which source should be presented on the current page. Second, as pointed out by many existing studies, it is not trivial to rank items from heterogeneous sources because the relevance scores from different source systems are not directly comparable. To address these two issues, we decompose the task into two subtasks in a hierarchical structure: a high-level task for source selection, where we model the sequential patterns of user behavior on aggregated results across different pages so as to understand user intents and select the relevant sources properly; and a low-level task for item presentation, where we formulate a slot-filling process to sequentially present the items instead of giving each item a relevance score when deciding the presentation order of heterogeneous items. Since both subtasks can be naturally formulated as sequential decision problems and learn from future user feedback on search results, we build our model with hierarchical reinforcement learning. Extensive experiments demonstrate that our model obtains remarkable improvements in search performance metrics and achieves higher user satisfaction.
Tasks Hierarchical Reinforcement Learning, Slot Filling
Published 2019-02-24
URL http://arxiv.org/abs/1902.08882v1
PDF http://arxiv.org/pdf/1902.08882v1.pdf
PWC https://paperswithcode.com/paper/aggregating-e-commerce-search-results-from
Repo
Framework