May 7, 2019

3121 words 15 mins read

Paper Group ANR 93

Lifted Convex Quadratic Programming. Caffeinated FPGAs: FPGA Framework For Convolutional Neural Networks. Pattern-Based Approach to the Workflow Satisfiability Problem with User-Independent Constraints. Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network Feature …

Lifted Convex Quadratic Programming

Title Lifted Convex Quadratic Programming
Authors Martin Mladenov, Leonard Kleinhans, Kristian Kersting
Abstract Symmetry is the essential element of lifted inference that has recently demonstrated the possibility to perform very efficient inference in highly-connected, but symmetric probabilistic models. This raises the question of whether this holds for optimisation problems in general. Here we show that for a large class of optimisation methods this is actually the case. More precisely, we introduce the concept of fractional symmetries of convex quadratic programs (QPs), which lie at the heart of many machine learning approaches, and exploit it to lift, i.e., to compress, QPs. These lifted QPs can then be tackled with the usual optimisation toolbox (off-the-shelf solvers, cutting plane algorithms, stochastic gradients, etc.). If the original QP exhibits symmetry, then the lifted one will generally be more compact, and hence its optimisation is likely to be more efficient.
Tasks
Published 2016-06-14
URL http://arxiv.org/abs/1606.04486v1
PDF http://arxiv.org/pdf/1606.04486v1.pdf
PWC https://paperswithcode.com/paper/lifted-convex-quadratic-programming
Repo
Framework
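
To make the "lift, then solve" idea concrete, here is a minimal NumPy sketch for the simplest special case: an unconstrained convex QP that is invariant under permutations within known variable orbits. Restricting to the orbit-averaged (fixed-point) subspace gives a smaller QP whose solution expands back to a solution of the original. The function name, the orbit input format and the toy numbers are made up for illustration; the paper's fractional symmetries and colour-refinement machinery are much more general.

```python
# Hypothetical sketch: "lifting" a symmetric convex QP by restricting it to the
# orbit (fixed-point) subspace, then recovering a full-size solution.
import numpy as np

def lift_and_solve(Q, c, orbits):
    """Solve min 0.5 x^T Q x + c^T x, assuming the QP is invariant under
    permutations that only shuffle variables inside each orbit."""
    n = Q.shape[0]
    B = np.zeros((n, len(orbits)))          # orbit indicator matrix
    for j, orbit in enumerate(orbits):
        B[list(orbit), j] = 1.0
    Q_lift = B.T @ Q @ B                    # compressed quadratic term
    c_lift = B.T @ c                        # compressed linear term
    y = np.linalg.solve(Q_lift, -c_lift)    # stationary point of the lifted QP
    return B @ y                            # ground (expand) the solution

# Toy example: variables {0, 1, 2} are interchangeable, {3} is on its own.
Q = np.array([[4.0, 1.0, 1.0, 0.5],
              [1.0, 4.0, 1.0, 0.5],
              [1.0, 1.0, 4.0, 0.5],
              [0.5, 0.5, 0.5, 2.0]])
c = np.array([-1.0, -1.0, -1.0, -3.0])
x = lift_and_solve(Q, c, orbits=[{0, 1, 2}, {3}])
print(x)  # components 0..2 coincide, as the symmetry predicts
```

By convexity, averaging any optimal solution over the symmetry group stays optimal, so an optimum always exists in the lifted subspace; that is why solving the compressed QP loses nothing here.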

Caffeinated FPGAs: FPGA Framework For Convolutional Neural Networks

Title Caffeinated FPGAs: FPGA Framework For Convolutional Neural Networks
Authors Roberto DiCecco, Griffin Lacey, Jasmina Vasiljevic, Paul Chow, Graham Taylor, Shawki Areibi
Abstract Convolutional Neural Networks (CNNs) have gained significant traction in the field of machine learning, particularly due to their high accuracy in visual recognition. Recent works have pushed the performance of GPU implementations of CNNs to significantly improve their classification and training times. With these improvements, many frameworks have become available for implementing CNNs on both CPUs and GPUs, with no support for FPGA implementations. In this work we present a modified version of the popular CNN framework Caffe, with FPGA support. This allows for classification using CNN models and specialized FPGA implementations with the flexibility of reprogramming the device when necessary, seamless memory transactions between host and device, simple-to-use test benches, and the ability to create pipelined layer implementations. To validate the framework, we use the Xilinx SDAccel environment to implement an FPGA-based Winograd convolution engine and show that the FPGA layer can be used alongside other layers running on a host processor to run several popular CNNs (AlexNet, GoogleNet, VGG A, Overfeat). The results show that our framework achieves 50 GFLOPS across 3x3 convolutions in the benchmarks. This is achieved within a practical framework, which will aid in future development of FPGA-based CNNs.
Tasks
Published 2016-09-30
URL http://arxiv.org/abs/1609.09671v1
PDF http://arxiv.org/pdf/1609.09671v1.pdf
PWC https://paperswithcode.com/paper/caffeinated-fpgas-fpga-framework-for
Repo
Framework
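
Since the framework's validation hinges on a Winograd convolution engine, the following NumPy sketch shows the textbook F(2x2, 3x3) Winograd transform for a single tile. This is the standard algorithm, not the authors' FPGA implementation, and the tile size is assumed for illustration.

```python
# Winograd F(2x2, 3x3): compute a 2x2 output tile of a 'valid' 3x3 convolution
# on a 4x4 input tile with only 16 elementwise multiplications.
import numpy as np

B_T = np.array([[1, 0, -1, 0],
                [0, 1,  1, 0],
                [0, -1, 1, 0],
                [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def winograd_f2x2_3x3(tile, kernel):
    U = G @ kernel @ G.T          # transform the 3x3 filter
    V = B_T @ tile @ B_T.T        # transform the 4x4 input tile
    return A_T @ (U * V) @ A_T.T  # elementwise product, then inverse transform

# Quick check against direct sliding-window correlation on one tile.
rng = np.random.default_rng(0)
d, g = rng.standard_normal((4, 4)), rng.standard_normal((3, 3))
direct = np.array([[np.sum(d[i:i + 3, j:j + 3] * g) for j in range(2)] for i in range(2)])
print(np.allclose(winograd_f2x2_3x3(d, g), direct))  # True
```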

Pattern-Based Approach to the Workflow Satisfiability Problem with User-Independent Constraints

Title Pattern-Based Approach to the Workflow Satisfiability Problem with User-Independent Constraints
Authors Daniel Karapetyan, Andrew J. Parkes, Gregory Gutin, Andrei Gagarin
Abstract The fixed parameter tractable (FPT) approach is a powerful tool in tackling computationally hard problems. In this paper, we link FPT results to classic artificial intelligence (AI) techniques to show how they complement each other. Specifically, we consider the workflow satisfiability problem (WSP) which asks whether there exists an assignment of authorised users to the steps in a workflow specification, subject to certain constraints on the assignment. It was shown by Cohen et al. (JAIR 2014) that WSP restricted to the class of user-independent constraints (UI), covering many practical cases, admits FPT algorithms, i.e. can be solved in time exponential only in the number of steps $k$ and polynomial in the number of users $n$. Since usually $k \ll n$ in WSP, such FPT algorithms are of great practical interest. We present a new interpretation of the FPT nature of the WSP with UI constraints giving a decomposition of the problem into two levels. Exploiting this two-level split, we develop a new FPT algorithm that is by many orders of magnitude faster than the previous state-of-the-art WSP algorithm and also has only polynomial-space complexity. We also introduce new pseudo-Boolean (PB) and Constraint Satisfaction (CSP) formulations of the WSP with UI constraints which efficiently exploit this new decomposition of the problem and raise the novel issue of how to use general-purpose solvers to tackle FPT problems in a fashion that meets FPT efficiency expectations. In our computational study, we investigate, for the first time, the phase transition (PT) properties of the WSP, under a model for generation of random instances. We show how PT studies can be extended, in a novel fashion, to support empirical evaluation of scaling of FPT algorithms.
Tasks
Published 2016-04-19
URL https://arxiv.org/abs/1604.05636v4
PDF https://arxiv.org/pdf/1604.05636v4.pdf
PWC https://paperswithcode.com/paper/pattern-based-approach-to-the-workflow
Repo
Framework
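
The two-level decomposition is easiest to see on a toy instance: first enumerate "patterns" (partitions of the steps into blocks performed by the same user), check the user-independent constraints on the pattern alone, and only then look for distinct authorised users for the blocks. The brute-force sketch below, with made-up instance data and a separation-of-duty constraint, only illustrates that split; it is nothing like the engineered algorithm or the PB/CSP encodings in the paper.

```python
# Toy two-level WSP check: patterns over k steps (FPT level), then user
# assignment over n users (polynomial level). Only practical for tiny k and n.
from itertools import permutations

def partitions(steps):
    """Yield all set partitions of the step list (feasible for small k)."""
    if not steps:
        yield []
        return
    first, rest = steps[0], steps[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block | {first}] + smaller[i + 1:]
        yield [{first}] + smaller

def satisfiable(k, auth, not_equal):
    """auth[u] = steps user u may perform; not_equal = separation-of-duty pairs."""
    users = list(auth)
    for pattern in partitions(list(range(k))):
        # UI constraints depend only on the pattern, not on user identities.
        if any(any(a in b and c in b for b in pattern) for a, c in not_equal):
            continue
        # Try to assign a distinct authorised user to every block.
        for assignment in permutations(users, len(pattern)):
            if all(block <= auth[u] for u, block in zip(assignment, pattern)):
                return True
    return False

auth = {"alice": {0, 1, 2}, "bob": {1, 2, 3}, "carol": {0, 3}}
print(satisfiable(4, auth, not_equal=[(0, 3), (1, 2)]))  # True for this toy instance
```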

Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network Features

Title Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network Features
Authors JT Turner, Kalyan Gupta, Brendan Morris, David W. Aha
Abstract Although recent advances in regional Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their response time is still slow for real-time performance. To address this issue, we propose a method for region proposal as an alternative to selective search, which is used in current state-of-the art object detection algorithms. We evaluate our Keypoint Density-based Region Proposal (KDRP) approach and show that it speeds up detection and classification on fine-grained tasks by 100% versus the existing selective search region proposal technique without compromising classification accuracy. KDRP makes the application of CNNs to real-time detection and classification feasible.
Tasks Object Detection
Published 2016-03-01
URL http://arxiv.org/abs/1603.00502v1
PDF http://arxiv.org/pdf/1603.00502v1.pdf
PWC https://paperswithcode.com/paper/keypoint-density-based-region-proposal-for
Repo
Framework
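
As a loose illustration of the region-proposal idea (cheap keypoints first, CNN only on the densest regions), here is an OpenCV sketch that scores randomly sampled windows by keypoint density and keeps the top ones. The detector choice, window sampling scheme, thresholds and file name are all assumptions; the paper's KDRP procedure differs in its details.

```python
# Rough keypoint-density region proposal: detect cheap keypoints, rank candidate
# windows by keypoint density, and pass only the best windows to the CNN.
import cv2
import numpy as np

def propose_regions(image, num_windows=2000, keep=300, rng=np.random.default_rng(0)):
    kps = cv2.ORB_create(nfeatures=2000).detect(image, None)
    pts = np.array([kp.pt for kp in kps]) if kps else np.zeros((0, 2))
    h, w = image.shape[:2]
    boxes, scores = [], []
    for _ in range(num_windows):
        bw, bh = rng.integers(w // 8, w // 2), rng.integers(h // 8, h // 2)
        x, y = rng.integers(0, w - bw), rng.integers(0, h - bh)
        inside = ((pts[:, 0] >= x) & (pts[:, 0] < x + bw) &
                  (pts[:, 1] >= y) & (pts[:, 1] < y + bh)).sum()
        boxes.append((int(x), int(y), int(bw), int(bh)))
        scores.append(inside / (bw * bh))          # keypoint density, not raw count
    order = np.argsort(scores)[::-1][:keep]
    return [boxes[i] for i in order]

img = cv2.imread("bird.jpg", cv2.IMREAD_GRAYSCALE)   # any test image (placeholder path)
print(propose_regions(img)[:5])
```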

Stochastic Function Norm Regularization of Deep Networks

Title Stochastic Function Norm Regularization of Deep Networks
Authors Amal Rannen Triki, Matthew B. Blaschko
Abstract Deep neural networks have had an enormous impact on image analysis. State-of-the-art training methods, based on weight decay and DropOut, result in impressive performance when a very large training set is available. However, they tend to overfit badly on small data sets. Indeed, the available regularization methods deal with the complexity of the network function only indirectly. In this paper, we study the feasibility of directly using the $L_2$ function norm for regularization. Two methods to integrate this new regularization into stochastic backpropagation are proposed. Moreover, the convergence of these new algorithms is studied. We finally show that they outperform the state-of-the-art methods in the low sample regime on benchmark datasets (MNIST and CIFAR10). The obtained results demonstrate very clear improvement, especially in the context of small sample regimes with data lying in a low-dimensional manifold. Source code of the method can be found at \url{https://github.com/AmalRT/DNN_Reg}.
Tasks
Published 2016-05-30
URL https://arxiv.org/abs/1605.09085v3
PDF https://arxiv.org/pdf/1605.09085v3.pdf
PWC https://paperswithcode.com/paper/stochastic-function-norm-regularization-of
Repo
Framework
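
The core idea is easy to prototype: estimate the squared $L_2$ function norm of the network stochastically from a batch of probe inputs and add it to the task loss. The PyTorch sketch below is a generic Monte Carlo variant under that assumption, not necessarily the exact estimator or sampling distribution used in the paper (see its released code for that); architecture and $\lambda$ are placeholders.

```python
# Stochastic function-norm regularization: penalize E[||f(X)||^2] estimated
# from a batch of (possibly unlabelled) probe inputs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
ce = nn.CrossEntropyLoss()
lam = 1e-3                       # regularization strength (made-up value)

def train_step(x, y, x_probe):
    """x, y: labelled batch; x_probe: inputs used to probe the function norm."""
    opt.zero_grad()
    task_loss = ce(model(x), y)
    fn_norm_sq = model(x_probe).pow(2).sum(dim=1).mean()   # ~ E[||f(X)||^2]
    (task_loss + lam * fn_norm_sq).backward()
    opt.step()
    return task_loss.item()

# Dummy data just to show the call pattern.
x = torch.randn(32, 784); y = torch.randint(0, 10, (32,))
print(train_step(x, y, x_probe=torch.randn(64, 784)))
```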

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference

Title Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference
Authors Minh Dao, Xiang Xiang, Bulent Ayhan, Chiman Kwan, Trac D. Tran
Abstract In this paper, we propose a burnscar detection model for hyperspectral imaging (HSI) data. The proposed model consists of two processing steps: the first separates and then suppresses the cloud information present in the data set using an RPCA algorithm, and the second detects the burnscar area in the low-rank component output of the first step. Experiments are conducted on the public MODIS dataset available on the official NASA website.
Tasks
Published 2016-05-01
URL http://arxiv.org/abs/1605.00287v2
PDF http://arxiv.org/pdf/1605.00287v2.pdf
PWC https://paperswithcode.com/paper/detecting-burnscar-from-hyperspectral-imagery
Repo
Framework
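
The first stage relies on Robust PCA to split the data matrix into a low-rank scene component and a sparse component capturing clouds and outliers. Below is a generic NumPy principal-component-pursuit solver (inexact ALM style) as a sketch of that stage only; it is not the authors' code, and the default $\lambda$ and $\mu$ follow the usual textbook choices rather than anything stated in the abstract.

```python
# Minimal Robust PCA (principal component pursuit): D = L (low rank) + S (sparse).
import numpy as np

def rpca(D, lam=None, mu=None, iters=200, tol=1e-7):
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(D).sum()
    S, Y = np.zeros_like(D), np.zeros_like(D)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt          # singular value thresholding
        S = shrink(D - L + Y / mu, lam / mu)        # soft-threshold the residual
        Y += mu * (D - L - S)                       # dual update
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
    return L, S

# Synthetic check: low-rank scene plus sparse "cloud" corruption.
rng = np.random.default_rng(1)
L0 = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 50))
S0 = (rng.random((100, 50)) < 0.05) * 10.0
L, S = rpca(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # relative recovery error (should be small)
```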

Modeling Ambiguity, Subjectivity, and Diverging Viewpoints in Opinion Question Answering Systems

Title Modeling Ambiguity, Subjectivity, and Diverging Viewpoints in Opinion Question Answering Systems
Authors Mengting Wan, Julian McAuley
Abstract Product review websites provide an incredible lens into the wide variety of opinions and experiences of different people, and play a critical role in helping users discover products that match their personal needs and preferences. To help address questions that can’t easily be answered by reading others’ reviews, some review websites also allow users to pose questions to the community via a question-answering (QA) system. As one would expect, just as opinions diverge among different reviewers, answers to such questions may also be subjective, opinionated, and divergent. This means that answering such questions automatically is quite different from traditional QA tasks, where it is assumed that a single ‘correct’ answer is available. While recent work introduced the idea of question-answering using product reviews, it did not account for two aspects that we consider in this paper: (1) Questions have multiple, often divergent, answers, and this full spectrum of answers should somehow be used to train the system; and (2) What makes a ‘good’ answer depends on the asker and the answerer, and these factors should be incorporated in order for the system to be more personalized. Here we build a new QA dataset with 800 thousand questions—and over 3.1 million answers—and show that explicitly accounting for personalization and ambiguity leads not only to quantitatively better answers, but also to a more nuanced view of the range of supporting, but subjective, opinions.
Tasks Question Answering
Published 2016-10-25
URL http://arxiv.org/abs/1610.08095v1
PDF http://arxiv.org/pdf/1610.08095v1.pdf
PWC https://paperswithcode.com/paper/modeling-ambiguity-subjectivity-and-diverging
Repo
Framework
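
To make the setting concrete, the deliberately simplified sketch below ranks candidate (possibly divergent) answers to a product question by how well each is supported by that product's reviews, using TF-IDF relevance. The paper's actual model adds personalization and handles the full spectrum of divergent answers; this baseline, with made-up review and answer strings, only illustrates the data flow.

```python
# Rank candidate answers by their best-supporting review (a crude relevance baseline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviews = ["battery easily lasts two days of light use",
           "screen is sharp but the battery drains fast with GPS on",
           "great phone, charges quickly"]
question = "how long does the battery last?"
candidate_answers = ["about two days unless you keep GPS running",
                     "it comes in three colours",
                     "the battery is terrible, barely half a day"]

vec = TfidfVectorizer().fit(reviews + [question] + candidate_answers)
support = cosine_similarity(vec.transform(candidate_answers), vec.transform(reviews))
scores = support.max(axis=1)          # best supporting review per answer
for answer, score in sorted(zip(candidate_answers, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {answer}")
```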

Multi-Objective Design of State Feedback Controllers Using Reinforced Quantum-Behaved Particle Swarm Optimization

Title Multi-Objective Design of State Feedback Controllers Using Reinforced Quantum-Behaved Particle Swarm Optimization
Authors Kaveh Hassani, Won-Sook Lee
Abstract In this paper, a novel and generic multi-objective design paradigm is proposed which utilizes quantum-behaved PSO (QPSO) for deciding the optimal configuration of the LQR controller for a given problem, considering a set of competing objectives. There are three main contributions introduced in this paper, as follows. (1) The standard QPSO algorithm is reinforced with an informed initialization scheme based on the simulated annealing algorithm and a Gaussian neighborhood selection mechanism. (2) It is also augmented with a local search strategy which integrates the advantages of memetic algorithms into conventional QPSO. (3) An aggregated dynamic weighting criterion is introduced that dynamically combines the soft and hard constraints with control objectives to provide the designer with a set of Pareto optimal solutions and lets the designer decide on the target solution based on practical preferences. The proposed method is compared against a gradient-based method, seven meta-heuristics, and the trial-and-error method on two control benchmarks using sensitivity analysis and full factorial parameter selection, and the results are validated using a one-tailed t-test. The experimental results suggest that the proposed method outperforms opponent methods in terms of controller effort, measures associated with transient response, and criteria related to steady-state behavior.
Tasks
Published 2016-07-04
URL http://arxiv.org/abs/1607.00765v1
PDF http://arxiv.org/pdf/1607.00765v1.pdf
PWC https://paperswithcode.com/paper/multi-objective-design-of-state-feedback
Repo
Framework
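
For readers unfamiliar with quantum-behaved PSO, here is a bare-bones QPSO loop, the search engine the paper builds on. The reinforcement steps described above (simulated-annealing initialization, Gaussian neighbourhood selection, memetic local search, dynamic weighting of objectives) are deliberately omitted, and the quadratic test objective stands in for an LQR cost.

```python
# Minimal quantum-behaved PSO (Sun et al.-style update with a mean-best attractor).
import numpy as np

def qpso(f, dim, bounds, n_particles=30, iters=200, rng=np.random.default_rng(0)):
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    for t in range(iters):
        gbest = pbest[pbest_val.argmin()]
        mbest = pbest.mean(axis=0)                      # mean best position
        beta = 1.0 - 0.5 * t / iters                    # contraction-expansion coefficient
        phi = rng.random((n_particles, dim))
        attractor = phi * pbest + (1 - phi) * gbest     # local attractor per particle
        u = 1.0 - rng.random((n_particles, dim))        # u in (0, 1]
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        x = np.clip(attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u), lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    return pbest[pbest_val.argmin()], pbest_val.min()

# Sanity check on a simple quadratic objective (stand-in for an LQR cost function).
best_x, best_val = qpso(lambda z: np.sum((z - 1.5) ** 2), dim=4, bounds=(-5.0, 5.0))
print(best_x, best_val)
```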

Proposal for Automatic License and Number Plate Recognition System for Vehicle Identification

Title Proposal for Automatic License and Number Plate Recognition System for Vehicle Identification
Authors Hamed Saghaei
Abstract In this paper, we propose an automatic and mechanized license and number plate recognition (LNPR) system which can extract the license plate number of vehicles passing through a given location using image processing algorithms. No additional devices such as GPS or radio frequency identification (RFID) need to be installed for implementing the proposed system. Using special cameras, the system takes pictures of each passing vehicle and forwards the image to the computer to be processed by the LNPR software. The plate recognition software uses different algorithms such as localization, orientation, normalization, segmentation and finally optical character recognition (OCR). The resulting data is then compared with the records in a database. Experimental results reveal that the presented system successfully detects and recognizes the vehicle number plate on real images. This system can also be used for security and traffic control.
Tasks Optical Character Recognition
Published 2016-10-09
URL http://arxiv.org/abs/1610.03341v1
PDF http://arxiv.org/pdf/1610.03341v1.pdf
PWC https://paperswithcode.com/paper/proposal-for-automatic-license-and-number
Repo
Framework
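
A hedged sketch of the classic pipeline described above (localisation, segmentation, OCR) using OpenCV and Tesseract; the paper's own implementation details, thresholds and aspect-ratio limits may well differ, and the image path is a placeholder.

```python
# Plate reading sketch: edge-based localisation, Otsu binarisation, Tesseract OCR.
import cv2
import pytesseract

def read_plate(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)          # smooth while keeping edges
    edges = cv2.Canny(gray, 30, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,   # OpenCV 4 return signature
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:20]:
        x, y, w, h = cv2.boundingRect(c)
        if 2.0 < w / float(h) < 6.0:                       # plate-like aspect ratio
            roi = cv2.threshold(gray[y:y + h, x:x + w], 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
            text = pytesseract.image_to_string(roi, config="--psm 7").strip()
            if text:
                return text
    return None

print(read_plate("car.jpg"))   # path to a test image (placeholder)
```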

Joint System and Algorithm Design for Computationally Efficient Fan Beam Coded Aperture X-ray Coherent Scatter Imaging

Title Joint System and Algorithm Design for Computationally Efficient Fan Beam Coded Aperture X-ray Coherent Scatter Imaging
Authors Ikenna Odinaka, Joseph A. O’Sullivan, David G. Politte, Kenneth P. MacCabe, Yan Kaganovsky, Joel A. Greenberg, Manu Lakshmanan, Kalyani Krishnamurthy, Anuj Kapadia, Lawrence Carin, David J. Brady
Abstract In x-ray coherent scatter tomography, tomographic measurements of the forward scatter distribution are used to infer scatter densities within a volume. A radiopaque 2D pattern placed between the object and the detector array enables the disambiguation between different scatter events. The use of a fan beam source illumination to speed up data acquisition relative to a pencil beam presents computational challenges. To facilitate the use of iterative algorithms based on a penalized Poisson log-likelihood function, efficient computational implementation of the forward and backward models are needed. Our proposed implementation exploits physical symmetries and structural properties of the system and suggests a joint system-algorithm design, where the system design choices are influenced by computational considerations, and in turn lead to reduced reconstruction time. Computational-time speedups of approximately 146 and 32 are achieved in the computation of the forward and backward models, respectively. Results validating the forward model and reconstruction algorithm are presented on simulated analytic and Monte Carlo data.
Tasks
Published 2016-01-29
URL http://arxiv.org/abs/1603.06400v1
PDF http://arxiv.org/pdf/1603.06400v1.pdf
PWC https://paperswithcode.com/paper/joint-system-and-algorithm-design-for
Repo
Framework
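
To make the "forward model, backward model, Poisson log-likelihood" framing concrete, here is a generic (unpenalized) ML-EM update for a measurement model $y \sim \mathrm{Poisson}(Ax)$. This is not the authors' reconstruction algorithm or their structured system matrix; the random dense matrix below merely stands in for the forward operator whose efficient implementation the paper is about.

```python
# Generic ML-EM for Poisson data: multiplicative update using one forward and
# one backward projection per iteration.
import numpy as np

def mlem(A, y, iters=100):
    x = np.ones(A.shape[1])                     # nonnegative start
    sens = A.T @ np.ones(A.shape[0])            # backprojection of ones (sensitivity)
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)    # forward model, then data ratio
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # backward model, multiplicative step
    return x

rng = np.random.default_rng(0)
A = rng.random((200, 50))                       # made-up system matrix
x_true = rng.random(50)
y = rng.poisson(A @ x_true)
print(np.linalg.norm(mlem(A, y) - x_true) / np.linalg.norm(x_true))  # relative error
```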

Combining Answer Set Programming and Domain Heuristics for Solving Hard Industrial Problems (Application Paper)

Title Combining Answer Set Programming and Domain Heuristics for Solving Hard Industrial Problems (Application Paper)
Authors Carmine Dodaro, Philip Gasteiger, Nicola Leone, Benjamin Musitsch, Francesco Ricca, Kostyantyn Shchekotykhin
Abstract Answer Set Programming (ASP) is a popular logic programming paradigm that has been applied for solving a variety of complex problems. Among the most challenging real-world applications of ASP are two industrial problems defined by Siemens: the Partner Units Problem (PUP) and the Combined Configuration Problem (CCP). The hardest instances of PUP and CCP are out of reach for state-of-the-art ASP solvers. Experiments show that the performance of ASP solvers could be significantly improved by embedding domain-specific heuristics, but a proper effective integration of such criteria in off-the-shelf ASP implementations is not obvious. In this paper the combination of ASP and domain-specific heuristics is studied with the goal of effectively solving real-world problem instances of PUP and CCP. As a byproduct of this activity, the ASP solver WASP was extended with an interface that eases embedding new external heuristics in the solver. The evaluation shows that our domain-heuristic-driven ASP solver finds solutions for all the real-world instances of PUP and CCP ever provided by Siemens. This paper is under consideration for acceptance in TPLP.
Tasks
Published 2016-08-02
URL http://arxiv.org/abs/1608.00730v1
PDF http://arxiv.org/pdf/1608.00730v1.pdf
PWC https://paperswithcode.com/paper/combining-answer-set-programming-and-domain
Repo
Framework
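
The paper extends the WASP solver with its own external-heuristic interface, which is not reproduced here. As a stand-in for the general idea of injecting a domain heuristic into an ASP solver, the sketch below uses clingo's Python API and its #heuristic directive on a toy graph-colouring program; the program, the heuristic and the option choice are illustrative assumptions.

```python
# Domain heuristics in ASP via clingo (illustrative stand-in for WASP's interface).
import clingo

program = """
node(1..4).  edge(1,2). edge(2,3). edge(3,4). edge(4,1).
col(r). col(g). col(b).
1 { color(N,C) : col(C) } 1 :- node(N).
:- edge(X,Y), color(X,C), color(Y,C).
% Domain heuristic: try colour r first for every node.
#heuristic color(N,r) : node(N). [1, true]
"""

ctl = clingo.Control(["--heuristic=Domain"])   # enable domain (#heuristic) heuristics
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Answer set:", m))
```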

Many Languages, One Parser

Title Many Languages, One Parser
Authors Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, Noah A. Smith
Abstract We train one multilingual model for dependency parsing and use it to parse sentences in several languages. The parsing model uses (i) multilingual word clusters and embeddings; (ii) token-level language information; and (iii) language-specific features (fine-grained POS tags). This input representation enables the parser not only to parse effectively in multiple languages, but also to generalize across languages based on linguistic universals and typological similarities, making it more effective to learn from limited annotations. Our parser’s performance compares favorably to strong baselines in a range of data scenarios, including when the target language has a large treebank, a small treebank, or no treebank for training.
Tasks Dependency Parsing
Published 2016-02-04
URL http://arxiv.org/abs/1602.01595v4
PDF http://arxiv.org/pdf/1602.01595v4.pdf
PWC https://paperswithcode.com/paper/many-languages-one-parser
Repo
Framework
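
The input representation described above is straightforward to sketch: each token is encoded by concatenating a (multilingual) word embedding, a language-ID embedding and a fine-grained POS embedding before a BiLSTM. Vocabulary sizes and dimensions below are illustrative, and this is not the authors' implementation (their parser is transition-based with its own architecture).

```python
# Token encoder combining multilingual word, language-ID and POS embeddings.
import torch
import torch.nn as nn

class MultilingualTokenEncoder(nn.Module):
    def __init__(self, n_words=50000, n_langs=7, n_pos=40,
                 d_word=100, d_lang=12, d_pos=20, d_out=128):
        super().__init__()
        self.word = nn.Embedding(n_words, d_word)
        self.lang = nn.Embedding(n_langs, d_lang)
        self.pos = nn.Embedding(n_pos, d_pos)
        self.encoder = nn.LSTM(d_word + d_lang + d_pos, d_out // 2,
                               batch_first=True, bidirectional=True)

    def forward(self, word_ids, lang_ids, pos_ids):
        x = torch.cat([self.word(word_ids), self.lang(lang_ids), self.pos(pos_ids)], dim=-1)
        out, _ = self.encoder(x)
        return out          # contextual token states for a parser on top

enc = MultilingualTokenEncoder()
words = torch.randint(0, 50000, (2, 9))      # batch of 2 sentences, 9 tokens each
langs = torch.full((2, 9), 3)                # one language ID per sentence, repeated per token
pos = torch.randint(0, 40, (2, 9))
print(enc(words, langs, pos).shape)          # torch.Size([2, 9, 128])
```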

Enforcing Template Representability and Temporal Consistency for Adaptive Sparse Tracking

Title Enforcing Template Representability and Temporal Consistency for Adaptive Sparse Tracking
Authors Xue Yang, Fei Han, Hua Wang, Hao Zhang
Abstract Sparse representation has been widely studied in visual tracking, which has shown promising tracking performance. Despite a lot of progress, the visual tracking problem is still a challenging task due to appearance variations over time. In this paper, we propose a novel sparse tracking algorithm that well addresses temporal appearance changes, by enforcing template representability and temporal consistency (TRAC). By modeling temporal consistency, our algorithm addresses the issue of drifting away from a tracking target. By exploring the templates’ long-term-short-term representability, the proposed method adaptively updates the dictionary using the most descriptive templates, which significantly improves the robustness to target appearance changes. We compare our TRAC algorithm against the state-of-the-art approaches on 12 challenging benchmark image sequences. Both qualitative and quantitative results demonstrate that our algorithm significantly outperforms previous state-of-the-art trackers.
Tasks Visual Tracking
Published 2016-04-30
URL http://arxiv.org/abs/1605.00170v1
PDF http://arxiv.org/pdf/1605.00170v1.pdf
PWC https://paperswithcode.com/paper/enforcing-template-representability-and
Repo
Framework
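
The optimisation flavour behind such trackers can be sketched generically: represent the target patch over a template dictionary with a sparse code that is also pulled toward the previous frame's code (temporal consistency), solved here with a few proximal-gradient (ISTA) steps. This is not the exact TRAC formulation or its adaptive dictionary update; sizes and weights are made up.

```python
# Sparse coding with an added temporal-consistency term, solved by ISTA.
import numpy as np

def sparse_code(D, y, a_prev, lam1=0.05, lam2=0.5, iters=200):
    """min_a 0.5||y - D a||^2 + lam2/2 ||a - a_prev||^2 + lam1 ||a||_1"""
    a = a_prev.copy()
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 + lam2)     # Lipschitz constant of the smooth part
    for _ in range(iters):
        grad = D.T @ (D @ a - y) + lam2 * (a - a_prev)
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)   # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 20))          # 20 template patches as columns
D /= np.linalg.norm(D, axis=0)
a_prev = np.zeros(20); a_prev[3] = 1.0      # code from the previous frame
y = D @ a_prev + 0.01 * rng.standard_normal(256)   # current candidate patch
print(np.round(sparse_code(D, y, a_prev), 2))      # stays concentrated on template 3
```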

PicHunt: Social Media Image Retrieval for Improved Law Enforcement

Title PicHunt: Social Media Image Retrieval for Improved Law Enforcement
Authors Sonal Goel, Niharika Sachdeva, Ponnurangam Kumaraguru, A V Subramanyam, Divam Gupta
Abstract First responders are increasingly using social media to identify and reduce crime for the well-being and safety of society. Images shared on social media that hurt religious, political, communal and other sentiments of people often instigate violence and create law-and-order situations in society. This results in the need for first responders to inspect the spread of such images and the users propagating them on social media. In this paper, we present a comparison between different hand-crafted features and a Convolutional Neural Network (CNN) model for retrieving similar images, in which the CNN outperforms the state-of-the-art hand-crafted features. We propose an Open Source Intelligence (OSINT) real-time image search system, robust to modified images, that allows first responders to analyze the current spread of images, the sentiments they carry, and details of the users propagating such content. The system also helps officials save the time spent manually analyzing content by reducing the search space on average by 67%.
Tasks Image Retrieval
Published 2016-08-02
URL http://arxiv.org/abs/1608.00905v2
PDF http://arxiv.org/pdf/1608.00905v2.pdf
PWC https://paperswithcode.com/paper/pichunt-social-media-image-retrieval-for
Repo
Framework
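
A hedged sketch of the CNN-feature retrieval component: embed images with an off-the-shelf backbone and rank the database by cosine similarity to the query. The paper's full OSINT system (indexing, robustness to modifications, user analysis) is not shown, the backbone choice is an assumption, and the file paths are placeholders. Note the `weights=` argument requires torchvision >= 0.13 (older versions use `pretrained=True`).

```python
# CNN-feature image retrieval by cosine similarity over pooled backbone features.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet18(weights="IMAGENET1K_V1")   # pretrained weights assumed available
backbone.fc = torch.nn.Identity()                     # keep the 512-d pooled feature
backbone.eval()

prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                  T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def embed(path):
    x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    f = backbone(x).squeeze(0)
    return f / f.norm()                               # unit-normalize for cosine similarity

db_paths = ["img1.jpg", "img2.jpg", "img3.jpg"]       # placeholder database
db = torch.stack([embed(p) for p in db_paths])
query = embed("query.jpg")
scores = db @ query                                   # cosine similarity (unit vectors)
print(sorted(zip(db_paths, scores.tolist()), key=lambda t: -t[1]))
```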

Parsimonious modeling with Information Filtering Networks

Title Parsimonious modeling with Information Filtering Networks
Authors Wolfram Barfuss, Guido Previde Massara, T. Di Matteo, Tomaso Aste
Abstract We introduce a methodology to construct parsimonious probabilistic models. This method makes use of Information Filtering Networks to produce a robust estimate of the global sparse inverse covariance from a simple sum of local inverse covariances computed on small sub-parts of the network. Being based on local and low-dimensional inversions, this method is computationally very efficient and statistically robust even for the estimation of inverse covariances of high-dimensional, noisy and short time series. Applied to financial data, our method is computationally more efficient than state-of-the-art methodologies such as Glasso, producing, in a fraction of the computation time, models that can have equivalent or better performance but with a sparser inference structure. We also discuss performance with sparse factor models, where we notice that relative performance decreases with the number of factors. The local nature of this approach allows us to perform computations in parallel and provides a tool for dynamic adaptation by partial updating when the properties of some variables change, without the need to recompute the whole model. This makes this approach particularly suitable for handling big datasets with large numbers of variables. Examples of practical applications for forecasting, stress testing and risk allocation in financial systems are also provided.
Tasks Time Series
Published 2016-02-23
URL http://arxiv.org/abs/1602.07349v3
PDF http://arxiv.org/pdf/1602.07349v3.pdf
PWC https://paperswithcode.com/paper/parsimonious-modeling-with-information
Repo
Framework
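
The local-inversion idea can be sketched on the simplest possible filtering network: build a maximum spanning tree of absolute correlations (a simpler filter than the paper's information filtering networks such as the TMFG), then assemble a global sparse precision matrix from small local inverses over the tree's cliques (edges) minus its separators (shared vertices). The function below is a rough illustration under those assumptions, not the authors' LoGo implementation.

```python
# Sparse precision estimate from local inversions on a maximum spanning tree.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def local_inversion_precision(X):
    C = np.cov(X, rowvar=False)
    A = -np.abs(np.corrcoef(X, rowvar=False))          # negate to get a *maximum* spanning tree
    np.fill_diagonal(A, 0.0)
    edges = np.transpose(np.nonzero(minimum_spanning_tree(A).toarray()))
    p = C.shape[0]
    J = np.zeros((p, p))
    deg = np.zeros(p)
    for i, j in edges:
        idx = [i, j]
        J[np.ix_(idx, idx)] += np.linalg.inv(C[np.ix_(idx, idx)])   # local clique inverse
        deg[[i, j]] += 1
    for v in range(p):                                  # subtract separator (vertex) inverses
        J[v, v] -= (deg[v] - 1) / C[v, v]
    return J

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 8))    # correlated toy data
J = local_inversion_precision(X)
print(np.count_nonzero(np.round(J, 10)))    # far fewer nonzeros than the dense inverse (64)
```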