Paper Group ANR 1069
Automatic Construction of Parallel Portfolios via Explicit Instance Grouping
Title | Automatic Construction of Parallel Portfolios via Explicit Instance Grouping |
Authors | Shengcai Liu, Ke Tang, Xin Yao |
Abstract | Simultaneously utilizing several complementary solvers is a simple yet effective strategy for solving computationally hard problems. However, manually building such solver portfolios typically requires considerable domain knowledge and plenty of human effort. As an alternative, automatic construction of parallel portfolios (ACPP) aims at automatically building effective parallel portfolios based on a given problem instance set and a given rich design space. One promising way to solve the ACPP problem is to explicitly group the instances into different subsets and promote a component solver to handle each of them. This paper investigates solving ACPP from this perspective, and in particular studies how to obtain a good instance grouping. The experimental results showed that the parallel portfolios constructed by the proposed method consistently outperformed those constructed by state-of-the-art ACPP methods, and could even rival sophisticated hand-designed parallel solvers. |
Tasks | |
Published | 2018-04-17 |
URL | http://arxiv.org/abs/1804.06088v1 |
http://arxiv.org/pdf/1804.06088v1.pdf | |
PWC | https://paperswithcode.com/paper/automatic-construction-of-parallel-portfolios |
Repo | |
Framework | |
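The grouping-based construction described in the abstract can be sketched in a few lines: given an instance-by-solver runtime matrix, partition the instances into subsets and promote, for each subset, the best candidate solver. Everything here (the runtime matrix, the random stand-in grouping, and `k`) is an illustrative assumption, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# runtimes[i, j]: hypothetical runtime of candidate solver j on instance i
runtimes = rng.uniform(1, 100, size=(12, 5))

# Explicit instance grouping: partition the instances into k subsets
# (a random stand-in for a learned grouping) and promote, for each
# subset, the candidate solver with the lowest total runtime on it.
k = 3
groups = rng.integers(0, k, size=12)          # stand-in for a real grouping
portfolio = [
    int(runtimes[groups == g].sum(axis=0).argmin()) for g in range(k)
]
# `portfolio` now holds one component solver index per instance group;
# the k promoted solvers would then run in parallel on a new instance.
```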
CNNs for Surveillance Footage Scene Classification
Title | CNNs for Surveillance Footage Scene Classification |
Authors | Utkarsh Contractor, Chinmayi Dixit, Deepti Mahajan |
Abstract | In this project, we adapt high-performing CNN architectures to differentiate between scenes with and without abandoned luggage. Using frames from two video datasets, we compare the results of training different architectures on each dataset as well as on combining the datasets. We additionally use network visualization techniques to gain insight into what the neural network sees, and the basis of the classification decision. We intend that our results benefit further work in applying CNNs in surveillance and security-related tasks. |
Tasks | Scene Classification |
Published | 2018-09-08 |
URL | http://arxiv.org/abs/1809.02766v1 |
http://arxiv.org/pdf/1809.02766v1.pdf | |
PWC | https://paperswithcode.com/paper/cnns-for-surveillance-footage-scene |
Repo | |
Framework | |
Missing Slice Recovery for Tensors Using a Low-rank Model in Embedded Space
Title | Missing Slice Recovery for Tensors Using a Low-rank Model in Embedded Space |
Authors | Tatsuya Yokota, Burak Erem, Seyhmus Guler, Simon K. Warfield, Hidekata Hontani |
Abstract | Let us consider a case where all of the elements in some continuous slices are missing in tensor data. In this case, the nuclear-norm and total variation regularization methods usually fail to recover the missing elements. The key problem is capturing some delay/shift-invariant structure. In this study, we consider a low-rank model in an embedded space of a tensor. For this purpose, we extend a delay embedding for a time series to a “multi-way delay-embedding transform” for a tensor, which takes a given incomplete tensor as the input and outputs a higher-order incomplete Hankel tensor. The higher-order tensor is then recovered by Tucker-based low-rank tensor factorization. Finally, an estimated tensor can be obtained by using the inverse multi-way delay embedding transform of the recovered higher-order tensor. Our experiments showed that the proposed method successfully recovered missing slices for some color images and functional magnetic resonance images. |
Tasks | Time Series |
Published | 2018-04-05 |
URL | http://arxiv.org/abs/1804.01736v1 |
http://arxiv.org/pdf/1804.01736v1.pdf | |
PWC | https://paperswithcode.com/paper/missing-slice-recovery-for-tensors-using-a |
Repo | |
Framework | |
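The "multi-way delay-embedding transform" above generalizes the classical delay embedding of a time series, which maps a 1-D signal to a Hankel matrix; the paper applies this idea along each mode of a tensor. A minimal 1-D sketch (the window length `tau` is an illustrative parameter):

```python
import numpy as np

def delay_embed(x, tau):
    """Build a Hankel (delay-embedding) matrix from a 1-D series x.

    Each column is a length-`tau` window of x shifted by one step,
    so the result has shape (tau, len(x) - tau + 1).
    """
    n = len(x) - tau + 1
    return np.stack([x[i:i + tau] for i in range(n)], axis=1)

x = np.arange(6.0)      # [0, 1, 2, 3, 4, 5]
H = delay_embed(x, 3)
# H is the Hankel matrix
# [[0. 1. 2. 3.]
#  [1. 2. 3. 4.]
#  [2. 3. 4. 5.]]
```

A missing sample of `x` becomes several missing entries of `H` that share rows and columns with observed ones, which is what lets a low-rank model in the embedded space interpolate it.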
Particle Filtering Methods for Stochastic Optimization with Application to Large-Scale Empirical Risk Minimization
Title | Particle Filtering Methods for Stochastic Optimization with Application to Large-Scale Empirical Risk Minimization |
Authors | Bin Liu |
Abstract | There is recent interest in developing statistical filtering methods for stochastic optimization (FSO) by leveraging a probabilistic perspective of incremental proximity methods (IPMs). Existing FSO methods are derived from the Kalman filter (KF) and the extended KF (EKF). Unlike classical stochastic optimization methods such as stochastic gradient descent (SGD) and typical IPMs, such KF-type algorithms possess a desirable property: they do not require pre-scheduling of the learning rate for convergence. On the other hand, they have inherent limitations inherited from the nature of the KF mechanism. It is a consensus that the class of particle filters (PFs) remarkably outperforms the KF and its variants for nonlinear and/or non-Gaussian statistical filtering tasks. Hence, it is natural to ask whether FSO methods can benefit from PF theory to get around the limitations of KF-type stochastic optimization methods. We provide an affirmative answer to this question by developing two PF-based stochastic optimizers (PFSOs). For performance evaluation, we apply them to nonlinear least-squares fitting using simulated datasets and to empirical risk minimization for binary classification using real datasets. Experimental results demonstrate that PFSOs remarkably outperform existing methods in terms of numerical stability, convergence speed, and flexibility in handling different types of loss functions. |
Tasks | Stochastic Optimization |
Published | 2018-07-23 |
URL | https://arxiv.org/abs/1807.08534v10 |
https://arxiv.org/pdf/1807.08534v10.pdf | |
PWC | https://paperswithcode.com/paper/particle-filtering-methods-for-stochastic |
Repo | |
Framework | |
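The core loop of a particle-filter-style optimizer can be sketched generically: weight candidate solutions by a likelihood built from the loss, resample, then jitter the survivors (the "prediction" step). This is a minimal sketch of that idea under assumed settings, not the paper's actual PFSO algorithms:

```python
import numpy as np

def pf_optimize(loss, dim, n_particles=200, n_iters=50, noise=0.3, seed=0):
    """Minimize `loss` with a particle-filter-style loop: weight
    particles by exp(-loss), resample, then jitter. A generic
    sampling-based sketch, not the paper's exact PFSO algorithms."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(size=(n_particles, dim))
    for _ in range(n_iters):
        losses = np.array([loss(p) for p in particles])
        w = np.exp(-(losses - losses.min()))          # stabilized weights
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)   # resample
        particles = particles[idx] + rng.normal(scale=noise,
                                                size=(n_particles, dim))
        noise *= 0.9                                  # anneal the jitter
    losses = np.array([loss(p) for p in particles])
    return particles[losses.argmin()]

# Quadratic test problem with minimum at (1, -2)
best = pf_optimize(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, dim=2)
```

Note that, as with the KF-type methods the paper discusses, no learning-rate schedule appears anywhere; the annealed jitter plays that role.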
Structured Neural Network Dynamics for Model-based Control
Title | Structured Neural Network Dynamics for Model-based Control |
Authors | Alexander Broad, Ian Abraham, Todd Murphey, Brenna Argall |
Abstract | We present a structured neural network architecture that is inspired by linear time-varying dynamical systems. The network is designed to mimic the properties of linear dynamical systems which makes analysis and control simple. The architecture facilitates the integration of learned system models with gradient-based model predictive control algorithms, and removes the requirement of computing potentially costly derivatives online. We demonstrate the efficacy of this modeling technique in computing autonomous control policies through evaluation in a variety of standard continuous control domains. |
Tasks | Continuous Control |
Published | 2018-08-03 |
URL | http://arxiv.org/abs/1808.01184v1 |
http://arxiv.org/pdf/1808.01184v1.pdf | |
PWC | https://paperswithcode.com/paper/structured-neural-network-dynamics-for-model |
Repo | |
Framework | |
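The structured model in the abstract mimics a linear time-varying system: the network emits state-dependent matrices A(x) and B(x), and the next state is a linear function of the current state and control. A toy numpy sketch with fixed random maps standing in for the learned network heads (dimensions and maps are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 1   # state and control dimensions

# Stand-ins for network heads that emit state-dependent A(x), B(x);
# in the paper these are learned, here they are fixed random maps.
WA = 0.1 * rng.normal(size=(n, n, n))
WB = 0.1 * rng.normal(size=(n, m, n))
A = lambda x: np.eye(n) + WA @ x      # A(x): (n, n)
B = lambda x: WB @ x                  # B(x): (n, m)

def rollout(x0, controls):
    """Simulate x_{t+1} = A(x_t) x_t + B(x_t) u_t."""
    xs = [x0]
    for u in controls:
        x = xs[-1]
        xs.append(A(x) @ x + B(x) @ u)
    return np.stack(xs)

traj = rollout(np.ones(n), [np.zeros(m)] * 5)   # 5-step passive rollout
```

Because each step is linear in `x` and `u` once A(x) and B(x) are evaluated, gradients of the trajectory with respect to the controls are cheap, which is what makes the structure convenient for gradient-based MPC.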
Bone marrow cells detection: A technique for the microscopic image analysis
Title | Bone marrow cells detection: A technique for the microscopic image analysis |
Authors | Haichao Cao, Hong Liu, Enmin Song |
Abstract | In the detection of myeloproliferative disorders, the number of cells of each type of bone marrow cell (BMC) is an important evaluation parameter. In this study, we propose a new counting method consisting of three modules: localization, segmentation, and classification. BMC localization is achieved from a color-transformation-enhanced BMC sample image using a stepwise averaging method (SAM). In nucleus segmentation, both SAM and Otsu's method are applied to obtain a weighted threshold for segmenting each patch into nucleus and non-nucleus regions. In cytoplasm segmentation, a color-weakening transformation, an improved region-growing method, and the K-means algorithm are used. Touching BMCs are separated by the marker-controlled watershed algorithm. After segmentation, features are extracted for classification. In this study, the BMCs are classified using SVM, random forest, artificial neural networks, AdaBoost, and Bayesian networks into five classes including one outlier, namely: neutrophilic split granulocyte, neutrophilic stab granulocyte, metarubricyte, mature lymphocytes, and the outlier (all other cells not listed). Our experimental results show that the best average recognition rate is 87.49%, achieved by the SVM. |
Tasks | |
Published | 2018-05-05 |
URL | http://arxiv.org/abs/1805.02058v1 |
http://arxiv.org/pdf/1805.02058v1.pdf | |
PWC | https://paperswithcode.com/paper/bone-marrow-cells-detection-a-technique-for |
Repo | |
Framework | |
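One of the two thresholding ingredients named above, Otsu's method, picks the gray level that maximizes the between-class variance of the histogram. A minimal sketch on synthetic bimodal data (the paper combines this with SAM into a weighted threshold; this shows plain Otsu only):

```python
import numpy as np

def otsu_threshold(pixels, n_bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the gray-level histogram."""
    hist, edges = np.histogram(pixels, bins=n_bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(n_bins))     # cumulative mean (bin index)
    mu_t = mu[-1]
    denom = omega * (1 - omega)
    denom[denom == 0] = np.nan                # guard empty classes
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    k = np.nanargmax(sigma_b2)
    return edges[k + 1]                       # threshold in data units

# Bimodal sample: dark "nucleus" pixels vs. bright background
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 500), rng.normal(180, 10, 500)])
t = otsu_threshold(pixels)                    # lands between the two modes
```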
Policy Search in Continuous Action Domains: an Overview
Title | Policy Search in Continuous Action Domains: an Overview |
Authors | Olivier Sigaud, Freek Stulp |
Abstract | Continuous action policy search is currently the focus of intensive research, driven both by the recent success of deep reinforcement learning algorithms and by the emergence of competitors based on evolutionary algorithms. In this paper, we present a broad survey of policy search methods, providing a unified perspective on very different approaches, including Bayesian optimization and directed exploration methods. The main message of this overview concerns the relationships between the families of methods, but we also outline some factors underlying the sample-efficiency properties of the various approaches. |
Tasks | |
Published | 2018-03-13 |
URL | https://arxiv.org/abs/1803.04706v5 |
https://arxiv.org/pdf/1803.04706v5.pdf | |
PWC | https://paperswithcode.com/paper/policy-search-in-continuous-action-domains-an |
Repo | |
Framework | |
Attributes in Multiple Facial Images
Title | Attributes in Multiple Facial Images |
Authors | Xudong Liu, Guodong Guo |
Abstract | Facial attribute recognition is conventionally computed from a single image. In practice, however, each subject may have multiple face images. Taking eye size as an example: it should not change across images, yet it may receive different estimates in different images, which can negatively impact face recognition. Thus, computing these attributes per subject rather than per single image is an important problem. To address it, we train deep models for facial attribute prediction, and we explore the inconsistency among attributes computed from individual images. We then develop two approaches to address this inconsistency. Experimental results show that the proposed methods can handle facial attribute estimation on either multiple still images or video frames, and can correct incorrectly annotated labels. The experiments are conducted on two large public databases with facial attribute annotations. |
Tasks | Face Recognition |
Published | 2018-05-23 |
URL | http://arxiv.org/abs/1805.09203v1 |
http://arxiv.org/pdf/1805.09203v1.pdf | |
PWC | https://paperswithcode.com/paper/attributes-in-multiple-facial-images |
Repo | |
Framework | |
Fooling OCR Systems with Adversarial Text Images
Title | Fooling OCR Systems with Adversarial Text Images |
Authors | Congzheng Song, Vitaly Shmatikov |
Abstract | We demonstrate that state-of-the-art optical character recognition (OCR) based on deep learning is vulnerable to adversarial images. Minor modifications to images of printed text, which do not change the meaning of the text to a human reader, cause the OCR system to “recognize” a different text where certain words chosen by the adversary are replaced by their semantic opposites. This completely changes the meaning of the output produced by the OCR system and by the NLP applications that use OCR for preprocessing their inputs. |
Tasks | Adversarial Text, Optical Character Recognition |
Published | 2018-02-15 |
URL | http://arxiv.org/abs/1802.05385v1 |
http://arxiv.org/pdf/1802.05385v1.pdf | |
PWC | https://paperswithcode.com/paper/fooling-ocr-systems-with-adversarial-text |
Repo | |
Framework | |
Redefining Ultrasound Compounding: Computational Sonography
Title | Redefining Ultrasound Compounding: Computational Sonography |
Authors | Rüdiger Göbl, Diana Mateus, Christoph Hennersperger, Maximilian Baust, Nassir Navab |
Abstract | Freehand three-dimensional ultrasound (3D-US) has gained considerable interest in research, but even today suffers from high inter-operator variability in clinical practice. This variability mainly arises from tracking inaccuracies as well as from the directionality of the ultrasound data, which is neglected by most of today's reconstruction methods. By providing a novel paradigm for the acquisition and reconstruction of tracked freehand 3D ultrasound, this work presents the concept of Computational Sonography (CS) to model the directionality of ultrasound information. CS preserves the directionality of the acquired data and allows for its exploitation by computational algorithms. In this regard, we propose a set of mathematical models to represent 3D-US data, inspired by the physics of ultrasound imaging. We compare different models of Computational Sonography to classical scalar compounding for freehand acquisitions, showing both improved preservation of US directionality and improved image quality in 3D. The novel concept is evaluated on a set of phantom datasets, as well as on in-vivo acquisitions from musculoskeletal and vascular applications. |
Tasks | |
Published | 2018-11-05 |
URL | http://arxiv.org/abs/1811.01534v1 |
http://arxiv.org/pdf/1811.01534v1.pdf | |
PWC | https://paperswithcode.com/paper/redefining-ultrasound-compounding |
Repo | |
Framework | |
Incremental Scene Synthesis
Title | Incremental Scene Synthesis |
Authors | Benjamin Planche, Xuejian Rong, Ziyan Wu, Srikrishna Karanam, Harald Kosch, YingLi Tian, Jan Ernst, Andreas Hutter |
Abstract | We present a method to incrementally generate complete 2D or 3D scenes with the following properties: (a) it is globally consistent at each step according to a learned scene prior, (b) real observations of a scene can be incorporated while preserving global consistency, (c) unobserved regions can be hallucinated locally in a manner consistent with previous observations, hallucinations, and global priors, and (d) hallucinations are statistical in nature, i.e., different scenes can be generated from the same observations. To achieve this, we model the virtual scene, where an active agent at each step can either perceive an observed part of the scene or generate a local hallucination. The latter can be interpreted as the agent's expectation at this step of its traversal of the scene, and can be applied to autonomous navigation. In the limit of observing real data at each point, our method converges to solving the SLAM problem. It can otherwise sample entirely imagined scenes from prior distributions. Besides autonomous agents, applications include problems where large amounts of data are required to build robust real-world applications, but few samples are available. We demonstrate efficacy on various 2D as well as 3D data. |
Tasks | Autonomous Navigation |
Published | 2018-11-29 |
URL | https://arxiv.org/abs/1811.12297v4 |
https://arxiv.org/pdf/1811.12297v4.pdf | |
PWC | https://paperswithcode.com/paper/incremental-scene-synthesis |
Repo | |
Framework | |
Are object detection assessment criteria ready for maritime computer vision?
Title | Are object detection assessment criteria ready for maritime computer vision? |
Authors | Dilip K. Prasad, Huixu Dong, Deepu Rajan, Chai Quek |
Abstract | Maritime vessels equipped with visible and infrared cameras can complement other conventional sensors for object detection. However, the application of computer vision techniques in the maritime domain has received attention only recently. The maritime environment poses its own unique requirements and challenges. Assessing the quality of detections is a fundamental need in computer vision, yet the conventional assessment metrics suited to general object detection are deficient in the maritime setting. Thus, a large body of related work in computer vision appears inapplicable to the maritime setting at first sight. We discuss the problem of defining assessment metrics suitable for maritime computer vision. We consider new bottom-edge proximity metrics as assessment metrics for maritime computer vision. These metrics indicate that existing computer vision approaches are indeed promising for maritime computer vision and can play a foundational role in this emerging field. |
Tasks | Object Detection |
Published | 2018-09-12 |
URL | https://arxiv.org/abs/1809.04659v2 |
https://arxiv.org/pdf/1809.04659v2.pdf | |
PWC | https://paperswithcode.com/paper/are-object-detection-assessment-criteria |
Repo | |
Framework | |
BigSR: an empirical study of real-time expressive RDF stream reasoning on modern Big Data platforms
Title | BigSR: an empirical study of real-time expressive RDF stream reasoning on modern Big Data platforms |
Authors | Xiangnan Ren, Olivier Curé, Hubert Naacke, Guohui Xiao |
Abstract | The trade-off between language expressiveness and system scalability (E&S) is a well-known problem in RDF stream reasoning. Higher expressiveness supports more complex reasoning logic; however, it may also hinder system scalability. Current research mainly focuses on logical frameworks suitable for stream reasoning, as well as the implementation and evaluation of prototype systems. These systems are normally developed in a centralized setting, which suffers from inherently limited scalability, while an in-depth study of applying distributed solutions to cover E&S is still missing. In this paper, we aim to explore the feasibility of applying modern distributed computing frameworks to address E&S altogether. To do so, we first propose BigSR, a technical demonstrator that supports a positive fragment of the LARS framework. For the sake of generality and to cover a wide variety of use cases, BigSR relies on the two main execution models adopted by major distributed execution frameworks: Bulk Synchronous Processing (BSP) and Record-at-A-Time (RAT). Accordingly, we implement BigSR on top of Apache Spark Streaming (BSP model) and Apache Flink (RAT model). To draw conclusions on the impact of BSP and RAT on E&S, we analyze the ability of the two models to support distributed stream reasoning and identify several types of use cases characterized by their levels of support. This classification allows for quantifying the E&S trade-off by assessing the scalability of each type of use case with respect to its level of expressiveness. We then conduct a series of experiments with 15 queries from 4 different datasets. Our experiments show that BigSR, over both BSP and RAT, generally scales up to throughputs beyond a million triples per second (with or without recursion), and that RAT attains sub-millisecond delays for stateless query operators. |
Tasks | |
Published | 2018-04-12 |
URL | http://arxiv.org/abs/1804.04367v1 |
http://arxiv.org/pdf/1804.04367v1.pdf | |
PWC | https://paperswithcode.com/paper/bigsr-an-empirical-study-of-real-time |
Repo | |
Framework | |
Contour location via entropy reduction leveraging multiple information sources
Title | Contour location via entropy reduction leveraging multiple information sources |
Authors | Alexandre N. Marques, Remi R. Lam, Karen E. Willcox |
Abstract | We introduce an algorithm to locate contours of functions that are expensive to evaluate. The problem of locating contours arises in many applications, including classification, constrained optimization, and performance analysis of mechanical and dynamical systems (reliability, probability of failure, stability, etc.). Our algorithm locates contours using information from multiple sources, which are available in the form of relatively inexpensive, biased, and possibly noisy approximations to the original function. Considering multiple information sources can lead to significant cost savings. We also introduce the concept of contour entropy, a formal measure of uncertainty about the location of the zero contour of a function approximated by a statistical surrogate model. Our algorithm locates contours efficiently by maximizing the reduction of contour entropy per unit cost. |
Tasks | |
Published | 2018-05-19 |
URL | http://arxiv.org/abs/1805.07489v3 |
http://arxiv.org/pdf/1805.07489v3.pdf | |
PWC | https://paperswithcode.com/paper/contour-location-via-entropy-reduction |
Repo | |
Framework | |
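The "contour entropy" idea above can be made concrete for a Gaussian surrogate: if the surrogate predicts f(x) ~ N(mu, sigma^2), the event f(x) < 0 is a Bernoulli variable whose binary entropy is largest exactly where the surrogate is unsure of the sign, i.e. near the zero contour. A sketch of that quantity (illustrative of the concept, not the paper's exact multi-fidelity acquisition function):

```python
import numpy as np
from math import erf, sqrt

def contour_entropy(mu, sigma):
    """Binary entropy of the event f(x) < 0 under a Gaussian
    surrogate N(mu, sigma^2): high where the surrogate is unsure
    of the sign of f, i.e. near the predicted zero contour."""
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
    p = np.clip(phi(-mu / sigma), 1e-12, 1 - 1e-12)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

# Entropy peaks where the predicted mean crosses zero
h_on = contour_entropy(0.0, 1.0)    # mean sits on the contour
h_off = contour_entropy(3.0, 1.0)   # mean is far from the contour
```

The algorithm in the paper would then query the information source that most reduces this entropy per unit evaluation cost.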
Improved Classification Based on Deep Belief Networks
Title | Improved Classification Based on Deep Belief Networks |
Authors | Jaehoon Koo, Diego Klabjan |
Abstract | For better classification, generative models can be used to initialize the model and its features before training a classifier. Typically, this requires solving separate unsupervised and supervised learning problems. Generative restricted Boltzmann machines and deep belief networks (DBNs) are widely used for unsupervised learning. We developed several supervised models based on DBNs in order to improve this two-phase strategy. Modifying the loss function to account for expectation with respect to the underlying generative model, introducing weight bounds, and multi-level programming are applied in model development. The proposed models capture both unsupervised and supervised objectives effectively. A computational study verifies that our models perform better than the two-phase training approach. |
Tasks | |
Published | 2018-04-25 |
URL | https://arxiv.org/abs/1804.09812v2 |
https://arxiv.org/pdf/1804.09812v2.pdf | |
PWC | https://paperswithcode.com/paper/improved-classification-based-on-deep-belief |
Repo | |
Framework | |
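The unsupervised phase mentioned above is usually carried out layer-wise with restricted Boltzmann machines trained by contrastive divergence. A tiny CD-1 sketch on two repeated binary prototypes (a generic illustration of that standard phase, with toy sizes; it is not one of the paper's proposed supervised models):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Tiny binary RBM trained with one step of contrastive divergence
# (CD-1), the standard unsupervised phase for DBN pre-training.
n_vis, n_hid, lr = 6, 4, 0.1
W = 0.01 * rng.normal(size=(n_vis, n_hid))
b, c = np.zeros(n_vis), np.zeros(n_hid)

# Two repeated binary prototypes the RBM should learn to reconstruct
data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 10, dtype=float)

for _ in range(300):
    v0 = data
    ph0 = sigmoid(v0 @ W + c)                   # hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0) * 1.0    # sampled hidden states
    v1 = sigmoid(h0 @ W.T + b)                  # mean-field reconstruction
    ph1 = sigmoid(v1 @ W + c)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

recon_err = ((data - v1) ** 2).mean()           # drops as the RBM learns
```

In the two-phase strategy the paper improves on, weights learned this way would initialize a classifier that is then fine-tuned with a supervised objective.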