May 6, 2019


Paper Group ANR 288

SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing

Title SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing
Authors Ao Ren, Ji Li, Zhe Li, Caiwen Ding, Xuehai Qian, Qinru Qiu, Bo Yuan, Yanzhi Wang
Abstract With the recent advances in the Internet of Things (IoT), it has become very attractive to implement deep convolutional neural networks (DCNNs) on embedded/portable systems. Presently, executing software-based DCNNs requires high-performance server clusters in practice, restricting their widespread deployment on mobile devices. To overcome this issue, considerable research effort has been devoted to developing highly-parallel, application-specific DCNN hardware using GPGPUs, FPGAs, and ASICs. Stochastic Computing (SC), which uses a bit-stream to represent a number within [-1, 1] by counting the number of ones in the bit-stream, has high potential for implementing DCNNs with high scalability and an ultra-low hardware footprint. Since multiplications and additions can be calculated using AND gates and multiplexers in SC, significant reductions in power/energy and hardware footprint can be achieved compared to conventional binary arithmetic implementations. The tremendous savings in power (energy) and hardware resources open an immense design space for enhancing the scalability and robustness of hardware DCNNs. This paper presents the first comprehensive design and optimization framework for SC-based DCNNs (SC-DCNNs). We first present optimal designs of the function blocks that perform the basic operations, i.e., inner product, pooling, and activation. Then we propose optimal designs of four types of combinations of basic function blocks, named feature extraction blocks, which are in charge of extracting features from input feature maps. In addition, weight storage methods are investigated to reduce the area and power/energy consumption of storing weights. Finally, the whole SC-DCNN implementation is optimized, with feature extraction blocks carefully selected, to minimize area and power/energy consumption while maintaining a high level of network accuracy.
Tasks
Published 2016-11-18
URL http://arxiv.org/abs/1611.05939v2
PDF http://arxiv.org/pdf/1611.05939v2.pdf
PWC https://paperswithcode.com/paper/sc-dcnn-highly-scalable-deep-convolutional
Repo
Framework
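
The arithmetic the abstract describes can be simulated in a few lines. The sketch below is a toy software model, not the paper's hardware design: it uses the unipolar encoding (values in [0, 1]), where multiplication is a bitwise AND and scaled addition is a 2-to-1 multiplexer; the bipolar [-1, 1] encoding mentioned in the abstract works analogously, with XNOR playing the role of the AND gate.

```python
import random

def to_stream(p, n, rng):
    """Encode a probability p in [0, 1] as a unipolar bit-stream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a, b):
    """Unipolar SC multiplication: a bitwise AND of two independent streams."""
    return [x & y for x, y in zip(a, b)]

def sc_scaled_add(a, b, rng):
    """Scaled addition via a multiplexer: output (x + y) / 2 in expectation."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

def decode(stream):
    """Estimate the encoded value by counting the ones in the stream."""
    return sum(stream) / len(stream)

rng = random.Random(0)
n = 100_000
a, b = to_stream(0.6, n, rng), to_stream(0.5, n, rng)
print(decode(sc_multiply(a, b)))         # close to 0.6 * 0.5 = 0.3
print(decode(sc_scaled_add(a, b, rng)))  # close to (0.6 + 0.5) / 2 = 0.55
```

Longer streams give lower variance but higher latency, which is exactly the accuracy/footprint trade-off the hardware design space explores.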

Towards a new quantum cognition model

Title Towards a new quantum cognition model
Authors Riccardo Franco
Abstract This article presents a new quantum-like model for cognition explicitly based on knowledge. It is shown that this model, called QKT (quantum knowledge-based theory), can coherently describe several experimental results that are problematic for prior quantum-like decision models. In particular, I consider the experimental results concerning post-decision cognitive dissonance, the question order effect and response replicability, and the grand-reciprocity equations. A new set of postulates is proposed, which makes evident the different meanings given to the projectors and to the quantum states. In the final part, I show that the use of quantum gates can help to better describe and understand the evolution of quantum-like models.
Tasks
Published 2016-11-23
URL http://arxiv.org/abs/1611.09212v1
PDF http://arxiv.org/pdf/1611.09212v1.pdf
PWC https://paperswithcode.com/paper/towards-a-new-quantum-cognition-model
Repo
Framework
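
As a purely illustrative picture of the machinery such models share (not the QKT postulates themselves): a question is a projector applied to a unit "belief state", the answer probability follows the Born rule, and the state collapses after answering. Repeating the same question on the collapsed state then yields the same answer with certainty, which is the response-replicability behaviour at stake.

```python
import numpy as np

# A projector-valued "yes" answer to a question, applied to a cognitive state.
state = np.array([0.6, 0.8])                 # unit vector in a toy 2-D belief space
P_yes = np.outer([1.0, 0.0], [1.0, 0.0])     # projector onto the "yes" axis

p = float(state @ P_yes @ state)             # Born-rule probability of "yes" (0.6**2)
post = (P_yes @ state) / np.sqrt(p)          # collapsed state after answering "yes"

# Asking the same question again now returns "yes" with probability 1.
p_repeat = float(post @ P_yes @ post)
print(p, p_repeat)
```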

Exact Exponent in Optimal Rates for Crowdsourcing

Title Exact Exponent in Optimal Rates for Crowdsourcing
Authors Chao Gao, Yu Lu, Dengyong Zhou
Abstract In many machine learning applications, crowdsourcing has become the primary means for label collection. In this paper, we study the optimal error rate for aggregating labels provided by a set of non-expert workers. Under the classic Dawid-Skene model, we establish matching upper and lower bounds with an exact exponent $mI(\pi)$ in which $m$ is the number of workers and $I(\pi)$ the average Chernoff information that characterizes the workers’ collective ability. Such an exact characterization of the error exponent allows us to state a precise sample size requirement $m>\frac{1}{I(\pi)}\log\frac{1}{\epsilon}$ in order to achieve an $\epsilon$ misclassification error. In addition, our results imply the optimality of various EM algorithms for crowdsourcing initialized by consistent estimators.
Tasks
Published 2016-05-25
URL http://arxiv.org/abs/1605.07696v2
PDF http://arxiv.org/pdf/1605.07696v2.pdf
PWC https://paperswithcode.com/paper/exact-exponent-in-optimal-rates-for
Repo
Framework
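
The sample-size statement can be turned into arithmetic directly. The sketch below assumes the simplest symmetric binary Dawid-Skene setting, where worker $i$ answers correctly with probability $p_i$, so each worker's Chernoff information has the closed form $-\log(2\sqrt{p(1-p)})$; the paper's $I(\pi)$ is defined more generally, so treat this as an illustration.

```python
import math

def chernoff_info(p):
    """Chernoff information between Bernoulli(p) and Bernoulli(1-p);
    for a symmetric binary worker this is -log(2*sqrt(p*(1-p)))."""
    return -math.log(2 * math.sqrt(p * (1 - p)))

def required_workers(accuracies, eps):
    """The bound m > (1/I(pi)) * log(1/eps), with I(pi) taken as the
    average Chernoff information over the worker pool."""
    avg_info = sum(chernoff_info(p) for p in accuracies) / len(accuracies)
    return math.ceil(math.log(1 / eps) / avg_info)

# A random-guessing worker (p = 0.5) contributes zero information.
print(chernoff_info(0.5))
print(required_workers([0.7, 0.8, 0.9], eps=0.01))   # workers needed for 1% error
```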

Multiview RGB-D Dataset for Object Instance Detection

Title Multiview RGB-D Dataset for Object Instance Detection
Authors Georgios Georgakis, Md Alimoor Reza, Arsalan Mousavian, Phi-Hung Le, Jana Kosecka
Abstract This paper presents a new multi-view RGB-D dataset of nine kitchen scenes, each containing several objects in realistic cluttered environments, including a subset of objects from the BigBird dataset. The viewpoints of the scenes are densely sampled and objects in the scenes are annotated with bounding boxes and in the 3D point cloud. Also, an approach for detection and recognition is presented, which comprises two parts: i) a new multi-view 3D proposal generation method and ii) the development of several recognition baselines using AlexNet to score our proposals, which is trained either on crops of the dataset or on synthetically composited training images. Finally, we compare the performance of the object proposals and a detection baseline to the Washington RGB-D Scenes (WRGB-D) dataset and demonstrate that our Kitchen scenes dataset is more challenging for object detection and recognition. The dataset is available at: http://cs.gmu.edu/~robot/gmu-kitchens.html.
Tasks Object Detection
Published 2016-09-26
URL http://arxiv.org/abs/1609.07826v1
PDF http://arxiv.org/pdf/1609.07826v1.pdf
PWC https://paperswithcode.com/paper/multiview-rgb-d-dataset-for-object-instance
Repo
Framework

Linear Regression with an Unknown Permutation: Statistical and Computational Limits

Title Linear Regression with an Unknown Permutation: Statistical and Computational Limits
Authors Ashwin Pananjady, Martin J. Wainwright, Thomas A. Courtade
Abstract Consider a noisy linear observation model with an unknown permutation, based on observing $y = \Pi^* A x^* + w$, where $x^* \in \mathbb{R}^d$ is an unknown vector, $\Pi^*$ is an unknown $n \times n$ permutation matrix, and $w \in \mathbb{R}^n$ is additive Gaussian noise. We analyze the problem of permutation recovery in a random design setting in which the entries of the matrix $A$ are drawn i.i.d. from a standard Gaussian distribution, and establish sharp conditions on the SNR, sample size $n$, and dimension $d$ under which $\Pi^*$ is exactly and approximately recoverable. On the computational front, we show that the maximum likelihood estimate of $\Pi^*$ is NP-hard to compute, while also providing a polynomial time algorithm when $d =1$.
Tasks
Published 2016-08-09
URL http://arxiv.org/abs/1608.02902v1
PDF http://arxiv.org/pdf/1608.02902v1.pdf
PWC https://paperswithcode.com/paper/linear-regression-with-an-unknown-permutation
Repo
Framework
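
For intuition in the $d = 1$ regime, where the paper gives a polynomial-time algorithm: if $x^* > 0$, the noiseless observations inherit the ordering of the entries of the design vector, so the permutation can be estimated by matching ranks. The toy sketch below (illustrative only, and assuming the sign of $x^*$ is known) recovers most of the permutation under small noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
a = rng.standard_normal(n)            # i.i.d. Gaussian design, d = 1
x_star = 2.5                          # unknown scalar, assumed positive here
perm = rng.permutation(n)             # unknown permutation of the rows
y = a[perm] * x_star + 0.001 * rng.standard_normal(n)

# Rank matching: when x* > 0, observation i should pair with the entry of a
# that shares its rank, so align the order statistics of y and a.
ranks_y = np.empty(n, dtype=int)
ranks_y[np.argsort(y)] = np.arange(n)
perm_hat = np.argsort(a)[ranks_y]

print("fraction of indices recovered:", (perm_hat == perm).mean())
```

At higher noise levels adjacent order statistics start to swap, which is the SNR-threshold phenomenon the paper characterizes sharply.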

Meat adulteration detection through digital image analysis of histological cuts using LBP

Title Meat adulteration detection through digital image analysis of histological cuts using LBP
Authors João J. de Macedo Neto, Jefersson A. dos Santos, William Robson Schwartz
Abstract Food fraud has been an area of great concern due to its risk to public health, its reduction of food quality or nutritional value, and its economic consequences. For this reason, it has been the object of regulation in many countries (e.g. [1], [2]). One type of food that has frequently been the object of fraud through the addition of water or an aqueous solution is bovine meat. The traditional methods used to detect this kind of fraud are expensive, time-consuming, and depend on physicochemical analyses that require complex laboratory techniques, specific to each added substance. In this paper, based on digital images of histological cuts of adulterated and non-adulterated (normal) bovine meat, we evaluate the use of digital image analysis methods to identify the aforementioned kind of fraud, with a focus on the Local Binary Pattern (LBP) algorithm.
Tasks
Published 2016-11-07
URL http://arxiv.org/abs/1611.02260v1
PDF http://arxiv.org/pdf/1611.02260v1.pdf
PWC https://paperswithcode.com/paper/meat-adulteration-detection-through-digital
Repo
Framework
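
The texture descriptor at the heart of this pipeline is compact enough to sketch. Below is a plain-numpy version of the basic 3x3 LBP operator and the normalised histogram typically used as the per-image feature vector; details such as neighbourhood radius, uniform patterns, and the downstream classifier are choices of the paper that this sketch does not reproduce.

```python
import numpy as np

def lbp_8(image):
    """Basic 3x3 Local Binary Pattern: threshold each pixel's 8 neighbours
    against the centre and pack the comparison bits into a code in [0, 255]."""
    img = np.asarray(image, dtype=float)
    center = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= center).astype(np.uint8) << bit
    return code

def lbp_histogram(image):
    """256-bin normalised LBP histogram used as a texture descriptor."""
    codes = lbp_8(image)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# A flat image maps every interior pixel to code 255 (all neighbours >= centre).
print(lbp_histogram(np.ones((5, 5)))[255])
```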

Reducing Runtime by Recycling Samples

Title Reducing Runtime by Recycling Samples
Authors Jialei Wang, Hai Wang, Nathan Srebro
Abstract Contrary to the situation with stochastic gradient descent, we argue that when using stochastic methods with variance reduction, such as SDCA, SAG or SVRG, as well as their variants, it could be beneficial to reuse previously used samples instead of fresh samples, even when fresh samples are available. We demonstrate this empirically for SDCA, SAG and SVRG, studying the optimal sample size one should use, and also uncover behavior that suggests running SDCA for an integer number of epochs could be wasteful.
Tasks
Published 2016-02-05
URL http://arxiv.org/abs/1602.02136v1
PDF http://arxiv.org/pdf/1602.02136v1.pdf
PWC https://paperswithcode.com/paper/reducing-runtime-by-recycling-samples
Repo
Framework
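
For readers unfamiliar with the variance-reduced methods being compared, here is a minimal SVRG loop on a least-squares toy problem. The point of contact with the paper is structural: every inner step reuses samples drawn from the same fixed training set (recycled samples) rather than fresh draws. The step size, epoch counts, and the problem itself are arbitrary choices for illustration.

```python
import numpy as np

def svrg_least_squares(A, b, epochs=30, inner=None, lr=0.02, seed=0):
    """Minimal SVRG on (1/2n) * ||A w - b||^2: each epoch computes a full
    gradient at a snapshot, then takes variance-reduced steps on samples
    drawn with replacement from the same fixed training set."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    inner = inner or 2 * n
    w = np.zeros(d)
    for _ in range(epochs):
        snapshot = w.copy()
        full_grad = A.T @ (A @ snapshot - b) / n
        for _ in range(inner):
            i = rng.integers(n)                       # recycled training sample
            g_w = A[i] * (A[i] @ w - b[i])            # stochastic gradient at w
            g_snap = A[i] * (A[i] @ snapshot - b[i])  # same sample at snapshot
            w = w - lr * (g_w - g_snap + full_grad)   # variance-reduced step
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))
w_true = rng.standard_normal(5)
b = A @ w_true
w_hat = svrg_least_squares(A, b)
print(np.linalg.norm(w_hat - w_true))   # small: the iterate approaches w_true
```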

Dependence and Relevance: A probabilistic view

Title Dependence and Relevance: A probabilistic view
Authors Dan Geiger, David Heckerman
Abstract We examine three probabilistic concepts related to the sentence “two variables have no bearing on each other”. We explore the relationships between these three concepts and establish their relevance to the process of constructing similarity networks—a tool for acquiring probabilistic knowledge from human experts. We also establish a precise relationship between connectedness in Bayesian networks and relevance in probability.
Tasks
Published 2016-10-27
URL http://arxiv.org/abs/1611.02126v1
PDF http://arxiv.org/pdf/1611.02126v1.pdf
PWC https://paperswithcode.com/paper/dependence-and-relevance-a-probabilistic-view
Repo
Framework
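
The most basic of these concepts, plain stochastic independence, is easy to make concrete. A minimal check on a discrete joint distribution (an illustration, not a construction from the paper):

```python
import numpy as np

def is_independent(joint, tol=1e-9):
    """Check whether a discrete joint distribution P(X, Y), given as a 2-D
    array, factorises as P(X)P(Y) -- the basic sense in which two variables
    'have no bearing on each other'."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal of X
    py = joint.sum(axis=0, keepdims=True)   # marginal of Y
    return np.allclose(joint, px * py, atol=tol)

# Independent: the product of two uniform marginals.
print(is_independent([[0.25, 0.25], [0.25, 0.25]]))   # True
# Dependent: two perfectly correlated coins.
print(is_independent([[0.5, 0.0], [0.0, 0.5]]))       # False
```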

Composite Kernel Local Angular Discriminant Analysis for Multi-Sensor Geospatial Image Analysis

Title Composite Kernel Local Angular Discriminant Analysis for Multi-Sensor Geospatial Image Analysis
Authors Saurabh Prasad, Minshan Cui, Lifeng Yan
Abstract With the emergence of passive and active optical sensors available for geospatial imaging, information fusion across sensors is becoming ever more important. An important aspect of single- (or multiple-) sensor geospatial image analysis is feature extraction - the process of finding “optimal” lower-dimensional subspaces that adequately characterize class-specific information for subsequent analysis tasks, such as classification, change detection, and anomaly detection. In recent work, we proposed and developed an angle-based discriminant analysis approach that projected data onto subspaces with maximal “angular” separability in the input (raw) feature space and in a Reproducing Kernel Hilbert Space (RKHS). We also developed an angular locality-preserving variant of this algorithm. In this letter, we advance this work and make it suitable for information fusion - we propose and validate a composite kernel local angular discriminant analysis projection that can operate on an ensemble of feature sources (e.g. from different sensors) and project the data onto a unified space through composite kernels where the data are maximally separated in an angular sense. We validate this method on the multi-sensor University of Houston hyperspectral and LiDAR dataset, and demonstrate that the proposed method significantly outperforms other composite kernel approaches to sensor (information) fusion.
Tasks Anomaly Detection
Published 2016-07-18
URL http://arxiv.org/abs/1607.04939v1
PDF http://arxiv.org/pdf/1607.04939v1.pdf
PWC https://paperswithcode.com/paper/composite-kernel-local-angular-discriminant
Repo
Framework
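
The fusion mechanism itself, combining per-sensor kernels into one composite kernel, can be sketched independently of the angular discriminant projection. Below, two toy feature sets standing in for hyperspectral and LiDAR sources are fused as a convex combination of RBF kernels; the weights and bandwidths are hypothetical, and the paper's angular kernel and the projection that follows are not reproduced here.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between the rows of X and the rows of Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def composite_kernel(sources, gammas, weights):
    """Weighted-summation composite kernel over per-sensor feature sets:
    K = sum_s w_s * K_s, one base kernel per source."""
    return sum(w * rbf_kernel(X, X, g)
               for X, g, w in zip(sources, gammas, weights))

rng = np.random.default_rng(0)
hsi = rng.standard_normal((50, 10))    # toy stand-in for hyperspectral features
lidar = rng.standard_normal((50, 3))   # toy stand-in for LiDAR features
K = composite_kernel([hsi, lidar], gammas=[0.1, 0.5], weights=[0.7, 0.3])
print(K.shape)   # one unified 50 x 50 kernel over both sources
```

Because the weights are nonnegative and each base kernel is positive semi-definite, the composite kernel is a valid kernel as well.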

Modeling Photographic Composition via Triangles

Title Modeling Photographic Composition via Triangles
Authors Zihan Zhou, Siqiong He, Jia Li, James Z. Wang
Abstract The capacity of automatically modeling photographic composition is valuable for many real-world machine vision applications such as digital photography, image retrieval, image understanding, and image aesthetics assessment. The triangle technique is among those indispensable composition methods on which professional photographers often rely. This paper proposes a system that can identify prominent triangle arrangements in two major categories of photographs: natural or urban scenes, and portraits. For the natural or urban scene pictures, the focus is on the effect of linear perspective. For portraits, we carefully examine the positioning of human subjects in a photo. We show that line analysis is highly advantageous for modeling composition in both categories. Based on the detected triangles, new mathematical descriptors for composition are formulated and used to retrieve similar images. Leveraging the rich source of high aesthetics photos online, similar approaches can potentially be incorporated in future smart cameras to enhance a person’s photo composition skills.
Tasks Image Retrieval
Published 2016-05-31
URL http://arxiv.org/abs/1605.09559v1
PDF http://arxiv.org/pdf/1605.09559v1.pdf
PWC https://paperswithcode.com/paper/modeling-photographic-composition-via
Repo
Framework
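
The geometric primitive underlying the triangle technique, turning detected lines into candidate triangle vertices, is a one-liner in homogeneous coordinates: the intersection of two lines $ax + by + c = 0$ is the cross product of their coefficient vectors. A toy example follows (this is not the paper's detection pipeline, just the geometry it rests on):

```python
import numpy as np

def line_intersection(l1, l2):
    """Intersect two lines given in homogeneous form (a, b, c) for
    a*x + b*y + c = 0, via the cross product of their coefficient vectors."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# Three detected lines (toy values) define a triangle by pairwise intersection.
lines = np.array([[1.0, 0.0, -1.0],   # x = 1
                  [0.0, 1.0, -1.0],   # y = 1
                  [1.0, 1.0, -3.0]])  # x + y = 3
vertices = [line_intersection(lines[i], lines[j])
            for i, j in [(0, 1), (0, 2), (1, 2)]]
print(vertices)   # triangle with vertices (1, 1), (1, 2), (2, 1)
```

A parallel pair gives a point at infinity (third homogeneous coordinate zero), which a real detector must filter out before forming triangles.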

The Case for Temporal Transparency: Detecting Policy Change Events in Black-Box Decision Making Systems

Title The Case for Temporal Transparency: Detecting Policy Change Events in Black-Box Decision Making Systems
Authors Miguel Ferreira, Muhammad Bilal Zafar, Krishna P. Gummadi
Abstract Bringing transparency to black-box decision making systems (DMS) has been a topic of increasing research interest in recent years. Traditional active and passive approaches to making these systems transparent are often limited by scalability and/or feasibility issues. In this paper, we propose a new notion of black-box DMS transparency, named temporal transparency, whose goal is to detect if and when the DMS policy changes over time, and which is largely invariant to the drawbacks of traditional approaches. We map our notion of temporal transparency to time series changepoint detection methods, and develop a framework to detect policy changes in real-world DMSs. Experiments on the New York stop-question-and-frisk dataset reveal a number of publicly announced and unannounced policy changes, highlighting the utility of our framework.
Tasks Decision Making, Time Series
Published 2016-10-31
URL http://arxiv.org/abs/1610.10064v1
PDF http://arxiv.org/pdf/1610.10064v1.pdf
PWC https://paperswithcode.com/paper/the-case-for-temporal-transparency-detecting
Repo
Framework
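
Changepoint detection for a single shift in the mean can be illustrated with a classical CUSUM-style scan. The paper maps temporal transparency onto methods of this family; the sketch below is a generic textbook version, not the framework used on the stop-and-frisk data.

```python
import numpy as np

def cusum_changepoint(x):
    """Locate a single changepoint in the mean: scan all split points and
    return the one maximising the scaled difference of segment means."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_k, best_stat = None, -np.inf
    for k in range(1, n):
        left, right = x[:k], x[k:]
        stat = abs(left.mean() - right.mean()) * np.sqrt(k * (n - k) / n)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# A synthetic "policy" whose mean shifts at time 100.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
k, stat = cusum_changepoint(series)
print(k)   # near the true change at index 100
```

In practice the statistic is compared against a threshold (calibrated for a false-alarm rate) before declaring that a policy change occurred.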

Relation extraction from clinical texts using domain invariant convolutional neural network

Title Relation extraction from clinical texts using domain invariant convolutional neural network
Authors Sunil Kumar Sahu, Ashish Anand, Krishnadev Oruganty, Mahanandeeshwar Gattu
Abstract In recent years, extracting relevant information from biomedical and clinical texts such as research articles, discharge summaries, or electronic health records has been the subject of many research efforts and shared challenges. Relation extraction is the process of detecting and classifying the semantic relations among entities in a given piece of text. Existing models for this task in the biomedical domain use either manually engineered features or kernel methods to create feature vectors. These features are then fed to a classifier for the prediction of the correct class. The results of these methods are highly dependent on the quality of user-designed features and also suffer from the curse of dimensionality. In this work we focus on extracting relations from clinical discharge summaries. Our main objective is to exploit the power of convolutional neural networks (CNNs) to learn features automatically and thus reduce the dependency on manual feature engineering. We evaluate the performance of the proposed model on the i2b2-2010 clinical relation extraction challenge dataset. Our results indicate that a convolutional neural network can be a good model for relation extraction in clinical text without depending on expert knowledge for defining quality features.
Tasks Feature Engineering, Relation Extraction
Published 2016-06-30
URL http://arxiv.org/abs/1606.09370v1
PDF http://arxiv.org/pdf/1606.09370v1.pdf
PWC https://paperswithcode.com/paper/relation-extraction-from-clinical-texts-using
Repo
Framework
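
The CNN ingredient the paper relies on, convolution over token windows followed by max-over-time pooling to produce a fixed-length representation for the classifier, can be written out in plain numpy. The filter shapes and dimensions here are hypothetical, and the full model (learned embeddings, position features, softmax classifier, training) is omitted.

```python
import numpy as np

def conv1d_maxpool(embeddings, filters, bias):
    """One CNN block for text: slide each filter over consecutive token
    windows, apply ReLU, then max-pool over positions so the sentence
    feature has a fixed size regardless of sentence length."""
    n_tokens, _ = embeddings.shape
    n_filters, width, _ = filters.shape
    feats = np.full(n_filters, -np.inf)
    for f in range(n_filters):
        for start in range(n_tokens - width + 1):
            window = embeddings[start:start + width]
            act = max(0.0, float(np.sum(window * filters[f]) + bias[f]))
            feats[f] = max(feats[f], act)   # max-over-time pooling
    return feats

rng = np.random.default_rng(0)
sentence = rng.standard_normal((12, 50))         # 12 tokens, 50-dim embeddings
filters = rng.standard_normal((8, 3, 50)) * 0.1  # 8 trigram filters
bias = np.zeros(8)
features = conv1d_maxpool(sentence, filters, bias)
print(features.shape)   # (8,) -- one pooled activation per filter
```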

Persistent Homology of Attractors For Action Recognition

Title Persistent Homology of Attractors For Action Recognition
Authors Vinay Venkataraman, Karthikeyan Natesan Ramamurthy, Pavan Turaga
Abstract In this paper, we propose a novel framework for dynamical analysis of human actions from 3D motion capture data using topological data analysis. We model human actions using the topological features of the attractor of the dynamical system. We reconstruct the phase space of the time series corresponding to actions using time-delay embedding, and compute the persistent homology of the phase-space reconstruction. In order to better represent the topological properties of the phase space, we incorporate temporal adjacency information when computing the homology groups. The persistence of these homology groups, encoded using persistence diagrams, is used as the feature representation for the actions. Our experiments on action recognition using these features demonstrate that the proposed approach outperforms other baseline methods.
Tasks Motion Capture, Temporal Action Localization, Time Series, Topological Data Analysis
Published 2016-03-16
URL http://arxiv.org/abs/1603.05310v1
PDF http://arxiv.org/pdf/1603.05310v1.pdf
PWC https://paperswithcode.com/paper/persistent-homology-of-attractors-for-action
Repo
Framework
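
The first step of the pipeline, phase-space reconstruction by time-delay embedding, is simple to state precisely: a scalar series is mapped to the points $(x_t, x_{t+\tau}, \ldots, x_{t+(m-1)\tau})$. A minimal sketch follows; the embedding dimension and delay are illustrative, and the persistent-homology computation the paper performs on the resulting point cloud requires a TDA library and is not shown.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens-style time-delay embedding: map a scalar time series to
    points (x[t], x[t+tau], ..., x[t+(dim-1)*tau]) in R^dim."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau:i * tau + n] for i in range(dim)], axis=1)

# A periodic signal embeds as a closed loop, whose 1-dimensional homology
# (a single persistent cycle) reflects the periodicity of the dynamics.
t = np.linspace(0, 20 * np.pi, 2000)
cloud = delay_embed(np.sin(t), dim=3, tau=25)
print(cloud.shape)   # (1950, 3)
```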

A Corpus-based Toy Model for DisCoCat

Title A Corpus-based Toy Model for DisCoCat
Authors Stefano Gogioso
Abstract The categorical compositional distributional (DisCoCat) model of meaning rigorously connects distributional semantics and pregroup grammars, and has found a variety of applications in computational linguistics. From a more abstract standpoint, the DisCoCat paradigm predicates the construction of a mapping from syntax to categorical semantics. In this work we present a concrete construction of one such mapping, from a toy model of syntax for corpora annotated with constituent structure trees, to categorical semantics taking place in a category of free R-semimodules over an involutive commutative semiring R.
Tasks
Published 2016-05-13
URL http://arxiv.org/abs/1605.04013v2
PDF http://arxiv.org/pdf/1605.04013v2.pdf
PWC https://paperswithcode.com/paper/a-corpus-based-toy-model-for-discocat
Repo
Framework
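
The syntax-to-semantics mapping can be made concrete in the standard vector-space instance of DisCoCat, where a transitive verb is an order-3 tensor contracted against the subject and object vectors. The toy below uses plain real vectors, whereas the paper works in free R-semimodules over an involutive commutative semiring; the dimensions and values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                           # toy noun-space dimension
subject = rng.random(d)         # distributional vector for the subject noun
obj = rng.random(d)             # distributional vector for the object noun
verb = rng.random((d, d, d))    # transitive verb: subject x sentence x object

# DisCoCat composition for "subject verb object": contract the verb tensor
# with the noun vectors along its subject and object wires, leaving a
# vector in the sentence space.
sentence = np.einsum('i,isj,j->s', subject, verb, obj)
print(sentence.shape)   # (4,)
```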

Sparse Filtered SIRT for Electron Tomography

Title Sparse Filtered SIRT for Electron Tomography
Authors Chen Mu, Chiwoo Park
Abstract Electron tomographic reconstruction is a method for obtaining a three-dimensional image of a specimen from a series of two-dimensional microscope images taken from different viewing angles. Filtered backprojection, one of the most popular tomographic reconstruction methods, does not work well in the presence of image noise and missing wedges. This paper presents a new approach that largely mitigates the effects of noise and missing wedges. We propose a novel filtered backprojection that optimizes the filter of the backprojection operator in terms of the reconstruction error. This data-dependent filter adaptively chooses the spectral domains of signals and noise, suppressing the noise frequency bands, so it is very effective for denoising. We also embed the new filtered backprojection within the simultaneous iterative reconstruction technique (SIRT) iteration to mitigate the effect of missing wedges. A numerical study is presented to show the performance gain of the proposed approach over the state of the art.
Tasks Denoising, Electron Tomography
Published 2016-08-04
URL http://arxiv.org/abs/1608.01686v2
PDF http://arxiv.org/pdf/1608.01686v2.pdf
PWC https://paperswithcode.com/paper/sparse-filtered-sirt-for-electron-tomography
Repo
Framework
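
The SIRT iteration into which the paper embeds its optimized filter has a compact algebraic form, x &lt;- x + C A^T R (b - A x), where R and C are the inverse row-sum and column-sum diagonal scalings of the projection matrix A. The sketch below runs it on a small dense nonnegative system standing in for real projection geometry; the paper's filtered variant and the actual tomography setup are not reproduced.

```python
import numpy as np

def sirt(A, b, iters=5000):
    """Simultaneous Iterative Reconstruction Technique:
    x <- x + C A^T R (b - A x), with R and C the inverse row- and
    column-sum diagonal scalings of the (nonnegative) system matrix A."""
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    R = 1.0 / np.where(row_sums > 0, row_sums, 1.0)
    C = 1.0 / np.where(col_sums > 0, col_sums, 1.0)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

# Toy consistent system standing in for projection data.
rng = np.random.default_rng(0)
A = rng.random((30, 5))
x_true = rng.random(5)
b = A @ x_true
x_hat = sirt(A, b)
print(np.linalg.norm(x_hat - x_true))   # small: the iterates approach x_true
```

The row/column scalings keep the iteration stable for nonnegative systems, which is why SIRT degrades more gracefully than plain filtered backprojection under noise and missing data.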