July 27, 2019

3106 words 15 mins read

Paper Group ANR 681

APPD: Adaptive and Precise Pupil Boundary Detection using Entropy of Contour Gradients. A Machine Learning Framework for Resource Allocation Assisted by Cloud Computing. Super-resolution Using Constrained Deep Texture Synthesis. Cloud-based Deep Learning of Big EEG Data for Epileptic Seizure Prediction. Tosca: Operationalizing Commitments Over Info …

APPD: Adaptive and Precise Pupil Boundary Detection using Entropy of Contour Gradients

Title APPD: Adaptive and Precise Pupil Boundary Detection using Entropy of Contour Gradients
Authors Cihan Topal, Halil Ibrahim Cakir, Cuneyt Akinlar
Abstract Eye tracking spans a vast range of applications, from ophthalmology and assistive technologies to gaming and virtual reality. Precisely detecting the pupil’s contour and center is the very first step in many of these tasks, hence it must be performed accurately. Although pupil detection is a simple problem when the pupil is entirely visible, occlusions and oblique view angles complicate the solution. In this study, we propose APPD, an adaptive and precise pupil boundary detection method that can infer whether the entire pupil is clearly visible using a heuristic that estimates the shape of a contour in a computationally efficient way. Thus, a faster detection is performed under the assumption of no occlusions. If the heuristic fails, a more comprehensive search among extracted image features is executed to maintain accuracy. Furthermore, the algorithm can determine when no pupil is present, which is helpful information for many applications. We provide a dataset containing 3904 high-resolution eye images collected from 12 subjects and perform an extensive set of experiments to obtain quantitative results in terms of accuracy, localization and timing. The proposed method outperforms three other state-of-the-art algorithms and has an average execution time of ~5 ms single-threaded on a standard laptop computer for 720p images.
Tasks Boundary Detection, Eye Tracking
Published 2017-09-19
URL http://arxiv.org/abs/1709.06366v2
PDF http://arxiv.org/pdf/1709.06366v2.pdf
PWC https://paperswithcode.com/paper/appd-adaptive-and-precise-pupil-boundary
Repo
Framework
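
The paper’s key heuristic scores how ellipse-like a contour is from the entropy of its gradient directions, so the expensive feature search can be skipped when the pupil is clearly visible. Below is a minimal numpy sketch of that idea; the bin count, the function name, and the thresholding advice are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

def contour_gradient_entropy(points, n_bins=16):
    """Shannon entropy of a contour's tangent-direction histogram.

    A closed, roughly elliptical contour sweeps through all gradient
    directions, giving a near-uniform histogram (high entropy); a short
    or occluded arc concentrates in few bins (low entropy). Simplified
    reading of the paper's heuristic, not the authors' code.
    """
    d = np.diff(points, axis=0)                    # tangent vectors along the contour
    angles = np.arctan2(d[:, 1], d[:, 0])          # directions in (-pi, pi]
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))                 # in [0, log2(n_bins)]

# Usage idea: if the entropy is close to log2(n_bins), assume the pupil is
# fully visible and fit an ellipse directly; otherwise fall back to the
# more exhaustive search over extracted arc features.
```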

A Machine Learning Framework for Resource Allocation Assisted by Cloud Computing

Title A Machine Learning Framework for Resource Allocation Assisted by Cloud Computing
Authors Jun-Bo Wang, Junyuan Wang, Yongpeng Wu, Jin-Yuan Wang, Huiling Zhu, Min Lin, Jiangzhou Wang
Abstract Conventionally, resource allocation is formulated as an optimization problem and solved online with instantaneous scenario information. Since most resource allocation problems are not convex, optimal solutions are very difficult to obtain in real time. Lagrangian relaxation or greedy methods are then often employed, which results in performance loss. Therefore, conventional methods of resource allocation face great challenges in meeting the ever-increasing QoS requirements of users with scarce radio resources. Assisted by cloud computing, a huge amount of historical data on scenarios can be collected for extracting similarities among scenarios using machine learning. Moreover, optimal or near-optimal solutions of historical scenarios can be searched offline and stored in advance. When the measured data of the current scenario arrives, the current scenario is compared with historical scenarios to find the most similar one. Then, the optimal or near-optimal solution of the most similar historical scenario is adopted to allocate the radio resources for the current scenario. To facilitate the application of this new design philosophy, a machine learning framework is proposed for resource allocation assisted by cloud computing. An example of beam allocation in multi-user massive multiple-input multiple-output (MIMO) systems shows that the proposed machine-learning-based resource allocation outperforms conventional methods.
Tasks
Published 2017-12-16
URL http://arxiv.org/abs/1712.05929v1
PDF http://arxiv.org/pdf/1712.05929v1.pdf
PWC https://paperswithcode.com/paper/a-machine-learning-framework-for-resource
Repo
Framework
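
The framework’s online stage reduces to a similarity search over stored scenarios. Here is a toy sketch of that lookup using a plain nearest-neighbor rule; the feature dimension, array names, and the Euclidean similarity measure are placeholders, since the paper leaves those design choices to the system designer.

```python
import numpy as np

# Offline: the cloud stores feature vectors of historical scenarios together
# with their (near-)optimal allocations, found by offline search without
# real-time constraints. Shapes and contents below are illustrative.
historical_features = np.random.rand(10000, 8)                    # e.g. channel statistics
historical_solutions = np.random.randint(0, 64, size=(10000, 4))  # e.g. beam indices

def allocate(current_features):
    """Reuse the allocation of the most similar historical scenario."""
    dists = np.linalg.norm(historical_features - current_features, axis=1)
    nearest = np.argmin(dists)                     # most similar past scenario
    return historical_solutions[nearest]

print(allocate(np.random.rand(8)))
```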

Super-resolution Using Constrained Deep Texture Synthesis

Title Super-resolution Using Constrained Deep Texture Synthesis
Authors Libin Sun, James Hays
Abstract Hallucinating high-frequency image details in single-image super-resolution is a challenging task. Traditional super-resolution methods tend to produce oversmoothed output images due to the ambiguity in the mapping between low- and high-resolution patches. We build on recent success in deep-learning-based texture synthesis and show that this rich feature space can facilitate successful transfer and synthesis of high-frequency image details to improve the visual quality of super-resolution results on a wide variety of natural textures and images.
Tasks Image Super-Resolution, Super-Resolution, Texture Synthesis
Published 2017-01-26
URL http://arxiv.org/abs/1701.07604v1
PDF http://arxiv.org/pdf/1701.07604v1.pdf
PWC https://paperswithcode.com/paper/super-resolution-using-constrained-deep
Repo
Framework
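
The texture-synthesis machinery this paper builds on matches second-order statistics of deep features via Gram matrices. The sketch below computes that statistic in numpy for a single feature map; the paper’s full constrained objective additionally enforces fidelity to the low-resolution input, which is omitted here.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a C x H x W feature map: the texture statistic of
    Gatys-style neural texture synthesis, which this work builds on."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def texture_loss(feat_sr, feat_ref):
    # Match second-order feature statistics of the super-resolved image
    # to those of a high-resolution texture reference.
    g_sr, g_ref = gram_matrix(feat_sr), gram_matrix(feat_ref)
    return np.sum((g_sr - g_ref) ** 2)
```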

Cloud-based Deep Learning of Big EEG Data for Epileptic Seizure Prediction

Title Cloud-based Deep Learning of Big EEG Data for Epileptic Seizure Prediction
Authors Mohammad-Parsa Hosseini, Hamid Soltanian-Zadeh, Kost Elisevich, Dario Pompili
Abstract Developing a Brain-Computer Interface (BCI) for seizure prediction can help epileptic patients have a better quality of life. However, there are many difficulties and challenges in developing such a system as a real-life support for patients. Because of the nonstationary nature of EEG signals, normal and seizure patterns vary across different patients. Thus, finding a group of manually extracted features for the prediction task is not practical. Moreover, when implanted electrodes are used for brain recording, massive amounts of data are produced. This big data creates a need for safe storage and high computational resources for real-time processing. To address these challenges, a cloud-based BCI system for the analysis of this big EEG data is presented. First, a dimensionality-reduction technique is developed to increase classification accuracy as well as to decrease the communication bandwidth and computation time. Second, following a deep-learning approach, a stacked autoencoder is trained in two steps for unsupervised feature extraction and classification. Third, a cloud-computing solution is proposed for real-time analysis of big EEG data. The results on a benchmark clinical dataset illustrate the superiority of the proposed patient-specific BCI as an alternative method and its expected usefulness in real-life support of epilepsy patients.
Tasks Dimensionality Reduction, EEG, Seizure prediction
Published 2017-02-17
URL http://arxiv.org/abs/1702.05192v1
PDF http://arxiv.org/pdf/1702.05192v1.pdf
PWC https://paperswithcode.com/paper/cloud-based-deep-learning-of-big-eeg-data-for
Repo
Framework
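
The second step, an autoencoder trained in two phases, looks roughly like the PyTorch sketch below: unsupervised pretraining on reconstruction, then supervised fine-tuning of the encoder plus a classifier. Layer sizes, the 256-dimensional input, and optimizer settings are placeholders, not the paper’s configuration.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(256, 64), nn.Sigmoid(), nn.Linear(64, 16), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(16, 64), nn.Sigmoid(), nn.Linear(64, 256))
classifier = nn.Linear(16, 2)                  # e.g. preictal vs. interictal

x = torch.randn(32, 256)                       # a batch of EEG feature vectors
y = torch.randint(0, 2, (32,))

# Step 1: unsupervised pretraining (reconstruction loss).
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    loss.backward()
    opt.step()

# Step 2: supervised fine-tuning of encoder + classifier.
opt = torch.optim.Adam([*encoder.parameters(), *classifier.parameters()], lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(classifier(encoder(x)), y)
    loss.backward()
    opt.step()
```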

Tosca: Operationalizing Commitments Over Information Protocols

Title Tosca: Operationalizing Commitments Over Information Protocols
Authors Thomas C. King, Akın Günay, Amit K. Chopra, Munindar P. Singh
Abstract The notion of commitment is widely studied as a high-level abstraction for modeling multiagent interaction. An important challenge is supporting flexible decentralized enactments of commitment specifications. In this paper, we combine recent advances on specifying commitments and information protocols. Specifically, we contribute Tosca, a technique for automatically synthesizing information protocols from commitment specifications. Our main result is that the synthesized protocols support commitment alignment, which is the idea that agents must make compatible inferences about their commitments despite decentralization.
Tasks
Published 2017-08-10
URL http://arxiv.org/abs/1708.03209v1
PDF http://arxiv.org/pdf/1708.03209v1.pdf
PWC https://paperswithcode.com/paper/tosca-operationalizing-commitments-over
Repo
Framework
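
For readers unfamiliar with the abstraction Tosca operationalizes: a commitment C(debtor, creditor, antecedent, consequent) becomes detached once the antecedent holds and discharged once the consequent holds. The sketch below encodes that standard lifecycle from the commitments literature; it is not Tosca’s protocol-synthesis algorithm, and the state names are the conventional ones, not necessarily the paper’s.

```python
class Commitment:
    """Minimal commitment lifecycle: conditional -> detached -> discharged."""

    def __init__(self, debtor, creditor, antecedent, consequent):
        self.debtor, self.creditor = debtor, creditor
        self.antecedent, self.consequent = antecedent, consequent
        self.state = "conditional"

    def observe(self, fact):
        if fact == self.antecedent and self.state == "conditional":
            self.state = "detached"      # debtor is now unconditionally bound
        if fact == self.consequent and self.state in ("conditional", "detached"):
            self.state = "discharged"    # commitment fulfilled

c = Commitment("seller", "buyer", "paid", "delivered")
c.observe("paid"); print(c.state)        # detached
c.observe("delivered"); print(c.state)   # discharged
```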

Early stopping for kernel boosting algorithms: A general analysis with localized complexities

Title Early stopping for kernel boosting algorithms: A general analysis with localized complexities
Authors Yuting Wei, Fanny Yang, Martin J. Wainwright
Abstract Early stopping of iterative algorithms is a widely used form of regularization in statistics, commonly applied in conjunction with boosting and related gradient-type algorithms. Although consistency results have been established in some settings, such estimators are less well-understood than their analogues based on penalized regularization. In this paper, for a relatively broad class of loss functions and boosting algorithms (including L2-boost, LogitBoost and AdaBoost, among others), we exhibit a direct connection between the performance of a stopped iterate and the localized Gaussian complexity of the associated function class. This connection allows us to show that local fixed point analysis of Gaussian or Rademacher complexities, now standard in the analysis of penalized estimators, can be used to derive optimal stopping rules. We derive such stopping rules in detail for various kernel classes, and illustrate the correspondence of our theory with practice for Sobolev kernel classes.
Tasks
Published 2017-07-05
URL http://arxiv.org/abs/1707.01543v2
PDF http://arxiv.org/pdf/1707.01543v2.pdf
PWC https://paperswithcode.com/paper/early-stopping-for-kernel-boosting-algorithms
Repo
Framework
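
In a kernel class, L2-boosting is gradient descent on the squared loss over functions of the form f = Kθ, and early stopping means cutting those iterations short. The sketch below runs that recursion on synthetic data with a holdout-based stop; note that the paper’s contribution is an analytic stopping rule derived from localized Gaussian complexity, for which the validation check here is only a practical stand-in.

```python
import numpy as np

def rbf_kernel(X, Z, bw=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 1))
y = np.sin(3 * X[:, 0]) + 0.3 * rng.standard_normal(100)
Xv = rng.uniform(-1, 1, (50, 1))
yv = np.sin(3 * Xv[:, 0]) + 0.3 * rng.standard_normal(50)

K, Kv = rbf_kernel(X, X), rbf_kernel(Xv, X)
theta = np.zeros(100)
step, best = 0.5 / np.linalg.eigvalsh(K).max(), np.inf
for t in range(500):
    theta += step * (y - K @ theta) / 100      # one L2-boosting / gradient step
    val = np.mean((yv - Kv @ theta) ** 2)
    if val > best:                             # stop when held-out error rises
        print("stopped at iteration", t)
        break
    best = val
```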

Neural Distributed Autoassociative Memories: A Survey

Title Neural Distributed Autoassociative Memories: A Survey
Authors V. I. Gritsenko, D. A. Rachkovskij, A. A. Frolov, R. Gayler, D. Kleyko, E. Osipov
Abstract Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors) where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear time search (in the number of stored items) for approximate nearest neighbors among vectors of high dimension. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey is focused mainly on the networks of Hopfield, Willshaw and Potts, that have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory, but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion we discuss the relations to similarity search, advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for the case of very high dimensional vectors.
Tasks
Published 2017-09-04
URL http://arxiv.org/abs/1709.00848v1
PDF http://arxiv.org/pdf/1709.00848v1.pdf
PWC https://paperswithcode.com/paper/neural-distributed-autoassociative-memories-a
Repo
Framework
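
As a concrete instance of the models surveyed, here is a minimal Hopfield network: one-shot Hebbian storage of bipolar patterns and iterative recall from a corrupted cue. The network size, pattern count, and corruption level are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 10                        # neurons, stored patterns (m << n here)
P = rng.choice([-1, 1], size=(m, n))

W = (P.T @ P) / n                     # Hebbian local learning rule
np.fill_diagonal(W, 0)                # no self-connections

cue = P[0].copy()
cue[:40] *= -1                        # corrupt 20% of the bits
x = cue
for _ in range(10):                   # synchronous recall dynamics
    x = np.sign(W @ x)
    x[x == 0] = 1

print("overlap with stored pattern:", (x @ P[0]) / n)   # ~1.0 on success
```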

Visual Based Navigation of Mobile Robots

Title Visual Based Navigation of Mobile Robots
Authors Shailja, Soumabh Bhowmick, Jayanta Mukhopadhyay
Abstract We have developed an algorithm to generate a complete map of the traversable region for a personal assistant robot using monocular vision only. Using multiple images taken by a simple webcam, obstacle detection and avoidance algorithms have been developed. Simple Linear Iterative Clustering (SLIC) has been used for segmentation to reduce the memory and computation cost. A simple mapping technique using inverse perspective mapping and occupancy grids, which is robust and supports very fast updates, has been used to create the map for indoor navigation.
Tasks
Published 2017-12-15
URL http://arxiv.org/abs/1712.05482v1
PDF http://arxiv.org/pdf/1712.05482v1.pdf
PWC https://paperswithcode.com/paper/visual-based-navigation-of-mobile-robots
Repo
Framework
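
A hedged sketch of the described pipeline using OpenCV and scikit-image: SLIC superpixels to cut per-pixel cost, a floor/obstacle test per superpixel, inverse perspective mapping to a bird’s-eye view, and an occupancy grid. The input file name, homography corner points, and the mean-intensity floor test are placeholders; a real system would calibrate the camera and use a trained classifier.

```python
import numpy as np
import cv2
from skimage.segmentation import slic

frame = cv2.imread("frame.png")                    # hypothetical webcam frame
segments = slic(frame, n_segments=200, compactness=10)

# Label each superpixel as floor/obstacle (stub: by mean intensity).
floor_mask = np.zeros(frame.shape[:2], dtype=np.uint8)
for label in np.unique(segments):
    region = segments == label
    if frame[region].mean() > 100:                 # placeholder floor test
        floor_mask[region] = 255

# Inverse perspective mapping: ground-plane trapezoid -> metric grid.
src = np.float32([[200, 480], [440, 480], [380, 300], [260, 300]])
dst = np.float32([[0, 400], [200, 400], [200, 0], [0, 0]])
H = cv2.getPerspectiveTransform(src, dst)
birdseye = cv2.warpPerspective(floor_mask, H, (200, 400))

occupancy = (birdseye < 128).astype(np.uint8)      # 1 = obstacle cell
```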

Development of An Android Application for Object Detection Based on Color, Shape, or Local Features

Title Development of An Android Application for Object Detection Based on Color, Shape, or Local Features
Authors Lamiaa A. Elrefaei, Mona Omar Al-musawa, Norah Abdullah Al-gohany
Abstract Object detection and recognition is an important task in many computer vision applications. In this paper, an Android application is developed using the Eclipse IDE and the OpenCV3 library. This application is able to detect objects in an image that is loaded from the mobile gallery, based on color, shape, or local features. The image is processed in the HSV color domain for better color detection. Circular shapes are detected using the Circular Hough Transform, and other shapes are detected using the Douglas-Peucker algorithm. BRISK (Binary Robust Invariant Scalable Keypoints) local features are applied in the developed Android application for matching an object image in another scene image. The steps of the proposed detection algorithms are described, and the interfaces of the application are illustrated. The application is ported to and tested on Galaxy S3, S6, and Note 1 smartphones. Based on the experimental results, the application is capable of detecting eleven different colors, detecting two-dimensional geometrical shapes including circles, rectangles, triangles, and squares, and correctly matching local features of object and scene images under different conditions. The application could be used as a standalone application or as a part of another application, such as robot systems, traffic systems, e-learning applications, information retrieval, and many others.
Tasks Information Retrieval, Object Detection
Published 2017-03-10
URL http://arxiv.org/abs/1703.03848v1
PDF http://arxiv.org/pdf/1703.03848v1.pdf
PWC https://paperswithcode.com/paper/development-of-an-android-application-for
Repo
Framework
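
The application is built on OpenCV’s Java bindings for Android; the same three detectors it describes look like this in OpenCV-Python. Threshold values, Hough parameters, and the input file name are illustrative, not those of the app.

```python
import cv2

img = cv2.imread("scene.jpg")                      # hypothetical gallery image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# 1. Color detection in HSV (here: a red hue band).
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))

# 2. Circles via the Circular Hough Transform.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                           minDist=50, param1=100, param2=40)

# 3. Polygonal shapes via contour simplification (Douglas-Peucker).
edges = cv2.Canny(gray, 80, 160)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 3:
        print("triangle")
    elif len(approx) == 4:
        print("rectangle or square")
```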

Multi-Label Zero-Shot Learning with Structured Knowledge Graphs

Title Multi-Label Zero-Shot Learning with Structured Knowledge Graphs
Authors Chung-Wei Lee, Wei Fang, Chih-Kuan Yeh, Yu-Chiang Frank Wang
Abstract In this paper, we propose a novel deep learning architecture for multi-label zero-shot learning (ML-ZSL), which is able to predict multiple unseen class labels for each input instance. Inspired by the way humans utilize semantic knowledge between objects of interest, we propose a framework that incorporates knowledge graphs for describing the relationships between multiple labels. Our model learns an information propagation mechanism from the semantic label space, which can be applied to model the interdependencies between seen and unseen class labels. With this investigation of structured knowledge graphs for visual reasoning, we show that our model can be applied to solving multi-label classification and ML-ZSL tasks. Compared to state-of-the-art approaches, comparable or improved performance can be achieved by our method.
Tasks Knowledge Graphs, Multi-Label Classification, Visual Reasoning, Zero-Shot Learning
Published 2017-11-17
URL http://arxiv.org/abs/1711.06526v2
PDF http://arxiv.org/pdf/1711.06526v2.pdf
PWC https://paperswithcode.com/paper/multi-label-zero-shot-learning-with
Repo
Framework
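
The core mechanism is propagating belief between label nodes over knowledge-graph edges, letting unseen labels accumulate evidence from related seen labels. The numpy sketch below shows a plain linear propagation step on a tiny hand-made label graph; the actual model learns gated, recurrent propagation weights rather than this fixed averaging.

```python
import numpy as np

labels = ["dog", "cat", "pet", "leash"]            # illustrative label graph
A = np.array([[0, 1, 1, 1],                        # dog--cat, dog--pet, dog--leash
              [1, 0, 1, 0],                        # cat--pet
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
A_hat = A / A.sum(1, keepdims=True)                # row-normalized adjacency

belief = np.array([0.9, 0.1, 0.0, 0.0])            # initial scores on seen labels
for _ in range(3):
    belief = 0.5 * belief + 0.5 * A_hat @ belief   # propagate along graph edges

print(dict(zip(labels, belief.round(2))))          # "pet", "leash" gain mass
```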

NetSpam: a Network-based Spam Detection Framework for Reviews in Online Social Media

Title NetSpam: a Network-based Spam Detection Framework for Reviews in Online Social Media
Authors Saeedreza Shehnepoor, Mostafa Salehi, Reza Farahbakhsh, Noel Crespi
Abstract Nowadays, many people rely on the content available in social media for their decisions (e.g., reviews and feedback on a topic or product). The possibility that anybody can leave a review provides a golden opportunity for spammers to write spam reviews about products and services for various interests. Identifying these spammers and the spam content is a hot topic of research, and although a considerable number of studies have recently been conducted toward this end, the methodologies put forth so far still barely detect spam reviews, and none of them show the importance of each extracted feature type. In this study, we propose a novel framework, named NetSpam, which utilizes spam features to model review datasets as heterogeneous information networks and to map the spam detection procedure into a classification problem on such networks. Using the importance of spam features helps us obtain better results in terms of different metrics on real-world review datasets from the Yelp and Amazon websites. The results show that NetSpam outperforms the existing methods, and that among the four categories of features (review-behavioral, user-behavioral, review-linguistic, and user-linguistic), the first type performs better than the other categories.
Tasks
Published 2017-03-10
URL http://arxiv.org/abs/1703.03609v1
PDF http://arxiv.org/pdf/1703.03609v1.pdf
PWC https://paperswithcode.com/paper/netspam-a-network-based-spam-detection
Repo
Framework
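
Stripped of the network machinery, the feature-importance idea reduces to weighting per-feature spam scores when combining them. The sketch below shows only that reduction, with invented scores and weights; NetSpam itself estimates the importances and propagates labels through a heterogeneous information network built from the weighted features.

```python
import numpy as np

feature_scores = np.array([                 # rows: reviews; cols: feature types
    [0.9, 0.2, 0.8, 0.1],                   # review-behavioral, user-behavioral,
    [0.1, 0.3, 0.2, 0.4],                   # review-linguistic, user-linguistic
])
importance = np.array([0.5, 0.2, 0.2, 0.1]) # e.g. estimated from labeled data

spam_probability = feature_scores @ importance
print(spam_probability)                     # weighted per-review spam score
```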

Squeeze-SegNet: A new fast Deep Convolutional Neural Network for Semantic Segmentation

Title Squeeze-SegNet: A new fast Deep Convolutional Neural Network for Semantic Segmentation
Authors Geraldin Nanfack, Azeddine Elhassouny, Rachid Oulad Haj Thami
Abstract Recent research on deep convolutional neural networks has focused on improving accuracy and has provided significant advances. Although once limited to classification tasks, these networks have become very useful in higher-level tasks such as object detection and pixel-wise semantic segmentation, thanks to contributions from the scientific communities entering this field. Brilliant ideas in deep-learning-based semantic segmentation have advanced the state of the art in accuracy, but these architectures are very difficult to deploy in embedded systems, as is the case for autonomous driving. We present a new deep fully convolutional neural network for pixel-wise semantic segmentation, which we call Squeeze-SegNet. The architecture follows an encoder-decoder style: we use a SqueezeNet-like encoder and a decoder formed by our proposed squeeze-decoder module and an upsampling layer that uses downsampling indices as in SegNet, and we add a deconvolution layer to produce the final multi-channel feature map. On datasets like CamVid or Cityscapes, our network achieves SegNet-level accuracy with about 10 times fewer parameters than SegNet.
Tasks Autonomous Driving, Object Detection, Semantic Segmentation
Published 2017-11-15
URL http://arxiv.org/abs/1711.05491v1
PDF http://arxiv.org/pdf/1711.05491v1.pdf
PWC https://paperswithcode.com/paper/squeeze-segnet-a-new-fast-deep-convolutional
Repo
Framework
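
The SegNet-style decoding mentioned in the abstract reuses the encoder’s max-pooling indices, so the decoder places activations back at their original spatial positions instead of learning the upsampling. In PyTorch that mechanism is as below; the channel and spatial sizes are placeholders, not the actual Squeeze-SegNet configuration.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)

x = torch.randn(1, 64, 32, 32)          # an encoder feature map
pooled, indices = pool(x)                # encoder: downsample, keep pooling indices
upsampled = unpool(pooled, indices)      # decoder: sparse upsampling at saved positions
print(upsampled.shape)                   # torch.Size([1, 64, 32, 32])
```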

Communication-Efficient Algorithms for Decentralized and Stochastic Optimization

Title Communication-Efficient Algorithms for Decentralized and Stochastic Optimization
Authors Guanghui Lan, Soomin Lee, Yi Zhou
Abstract We present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems defined over multiagent networks. Considering that communication is a major bottleneck in decentralized optimization, our main goal in this paper is to develop algorithmic frameworks which can significantly reduce the number of inter-node communications. We first propose a decentralized primal-dual method which can find an $\epsilon$-solution both in terms of functional optimality gap and feasibility residual in $O(1/\epsilon)$ inter-node communication rounds when the objective functions are convex and the local primal subproblems are solved exactly. Our major contribution is to present a new class of decentralized primal-dual type algorithms, namely the decentralized communication sliding (DCS) methods, which can skip the inter-node communications while agents solve the primal subproblems iteratively through linearizations of their local objective functions. By employing DCS, agents can still find an $\epsilon$-solution in $O(1/\epsilon)$ (resp., $O(1/\sqrt{\epsilon})$) communication rounds for general convex functions (resp., strongly convex functions), while maintaining the $O(1/\epsilon^2)$ (resp., $O(1/\epsilon)$) bound on the total number of intra-node subgradient evaluations. We also present a stochastic counterpart for these algorithms, denoted by SDCS, for solving stochastic optimization problems whose objective function cannot be evaluated exactly. In comparison with existing results for decentralized nonsmooth and stochastic optimization, we can reduce the total number of inter-node communication rounds by orders of magnitude while still maintaining the optimal complexity bounds on intra-node stochastic subgradient evaluations. The bounds on the subgradient evaluations are actually comparable to those required for centralized nonsmooth and stochastic optimization.
Tasks Stochastic Optimization
Published 2017-01-14
URL http://arxiv.org/abs/1701.03961v2
PDF http://arxiv.org/pdf/1701.03961v2.pdf
PWC https://paperswithcode.com/paper/communication-efficient-algorithms-for-1
Repo
Framework
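
A toy illustration of the communication-skipping idea, with the caveat that this is not the DCS method itself (DCS is a primal-dual scheme with provable communication complexity): each agent takes several local gradient steps on its own objective between averaging exchanges with the network. All problem sizes and step sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3
targets = rng.standard_normal((n_agents, dim))   # agent i minimizes ||x - t_i||^2
x = np.zeros((n_agents, dim))
W = np.full((n_agents, n_agents), 1 / n_agents)  # mixing matrix (complete graph)

for round_ in range(50):                         # communication rounds
    for _ in range(10):                          # local steps, no communication
        x -= 0.1 * 2 * (x - targets)             # local gradient of each agent's loss
    x = W @ x                                    # one gossip/averaging exchange

print(x[0], targets.mean(0))                     # agents approach the consensus optimum
```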

Improving Gibbs Sampler Scan Quality with DoGS

Title Improving Gibbs Sampler Scan Quality with DoGS
Authors Ioannis Mitliagkas, Lester Mackey
Abstract The pairwise influence matrix of Dobrushin has long been used as an analytical tool to bound the rate of convergence of Gibbs sampling. In this work, we use Dobrushin influence as the basis of a practical tool to certify and efficiently improve the quality of a discrete Gibbs sampler. Our Dobrushin-optimized Gibbs samplers (DoGS) offer customized variable selection orders for a given sampling budget and variable subset of interest, explicit bounds on total variation distance to stationarity, and certifiable improvements over the standard systematic and uniform random scan Gibbs samplers. In our experiments with joint image segmentation and object recognition, Markov chain Monte Carlo maximum likelihood estimation, and Ising model inference, DoGS consistently deliver higher-quality inferences with significantly smaller sampling budgets than standard Gibbs samplers.
Tasks Object Recognition, Semantic Segmentation
Published 2017-07-18
URL http://arxiv.org/abs/1707.05807v1
PDF http://arxiv.org/pdf/1707.05807v1.pdf
PWC https://paperswithcode.com/paper/improving-gibbs-sampler-scan-quality-with
Repo
Framework
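
What DoGS optimizes is the scan sequence fed to an otherwise standard Gibbs sampler. The sketch below runs Gibbs on a small random-coupling Ising model with a pluggable scan order, which is the interface a DoGS-chosen order would slot into; computing the Dobrushin-influence-optimized order and the total-variation certificates is the paper’s contribution and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
J = 0.2 * np.triu(rng.standard_normal((n, n)), 1)   # random couplings, zero diagonal
J = J + J.T
x = rng.choice([-1, 1], size=n)

def gibbs_step(x, i):
    # Conditional of spin i given the rest: logistic in the local field.
    field = J[i] @ x
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
    x[i] = 1 if rng.random() < p_plus else -1

scan = list(range(n)) * 50                          # systematic scan; DoGS would
for i in scan:                                      # replace this with an optimized,
    gibbs_step(x, i)                                # possibly non-uniform sequence
print(x)
```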

Swift Linked Data Miner: Mining OWL 2 EL class expressions directly from online RDF datasets

Title Swift Linked Data Miner: Mining OWL 2 EL class expressions directly from online RDF datasets
Authors Jedrzej Potoniec, Piotr Jakubowski, Agnieszka Ławrynowicz
Abstract In this study, we present Swift Linked Data Miner, an interruptible algorithm that can directly mine an online Linked Data source (e.g., a SPARQL endpoint) for OWL 2 EL class expressions to extend an ontology with new SubClassOf: axioms. The algorithm works by downloading only a small part of the Linked Data source at a time, building a smart index in memory and swiftly iterating over the index to mine axioms. We propose a transformation function from mined axioms to RDF Data Shapes. We show, by means of a crowdsourcing experiment, that most of the axioms mined by Swift Linked Data Miner are correct and can be added to an ontology. We provide a ready-to-use Protégé plugin implementing the algorithm, to support ontology engineers in their daily modeling work.
Tasks
Published 2017-10-19
URL http://arxiv.org/abs/1710.07114v1
PDF http://arxiv.org/pdf/1710.07114v1.pdf
PWC https://paperswithcode.com/paper/swift-linked-data-miner-mining-owl-2-el-class
Repo
Framework
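
A drastically simplified rendering of the mining idea using rdflib on a local dump: if nearly every instance of class A also carries type B, propose A SubClassOf B. The file name and confidence threshold are assumptions; the real miner works incrementally against a live SPARQL endpoint, builds a smart in-memory index, and mines full OWL 2 EL class expressions rather than only named classes.

```python
from collections import defaultdict
from rdflib import Graph
from rdflib.namespace import RDF

g = Graph()
g.parse("data.ttl")                                 # hypothetical local RDF dump

types = defaultdict(set)                            # instance -> set of classes
for s, _, o in g.triples((None, RDF.type, None)):
    types[s].add(o)

support = defaultdict(int)                          # co-typing counts
count = defaultdict(int)                            # instances per class
for classes in types.values():
    for a in classes:
        count[a] += 1
        for b in classes:
            if a != b:
                support[(a, b)] += 1

for (a, b), n in support.items():
    if n / count[a] >= 0.95:                        # confidence threshold (assumed)
        print(f"{a} SubClassOf {b}")
```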