October 19, 2019

2970 words 14 mins read

Paper Group ANR 181

Eliciting Worker Preference for Task Completion. Continuously Constructive Deep Neural Networks. Seed-Point Based Geometric Partitioning of Nuclei Clumps. The Sparse Manifold Transform. On Polynomial time Constructions of Minimum Height Decision Tree. Knowledge-driven generative subspaces for modeling multi-view dependencies in medical data. Self-S …

Eliciting Worker Preference for Task Completion

Title Eliciting Worker Preference for Task Completion
Authors Mohammadreza Esfandiari, Senjuti Basu Roy, Sihem Amer-Yahia
Abstract Current crowdsourcing platforms provide little support for worker feedback. Workers are sometimes invited to post free text describing their experience and preferences in completing tasks. They can also use forums such as Turker Nation to exchange preferences on tasks and requesters. In fact, crowdsourcing platforms rely heavily on observing workers and inferring their preferences implicitly. In this work, we believe that asking workers to indicate their preferences explicitly improves their experience in task completion and hence, the quality of their contributions. Explicit elicitation can indeed help to build more accurate worker models for task completion that capture the evolving nature of worker preferences. We design a worker model whose accuracy is improved iteratively by requesting preferences for task factors such as required skills, task payment, and task relevance. We propose a generic framework, develop efficient solutions in realistic scenarios, and run extensive experiments that show the benefit of explicit preference elicitation over implicit observation with statistical significance.
Tasks
Published 2018-01-10
URL http://arxiv.org/abs/1801.03233v1
PDF http://arxiv.org/pdf/1801.03233v1.pdf
PWC https://paperswithcode.com/paper/eliciting-worker-preference-for-task
Repo
Framework
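
The elicitation loop described in the abstract can be illustrated with a small worker model over task factors that is refined each time the worker answers an explicit preference question. This is a minimal sketch for illustration only: the factor names, the rating scale in [-1, 1], and the update rule are assumptions, not the paper's framework.

```python
import numpy as np

# Hypothetical task factors the worker can be asked about explicitly.
FACTORS = ["required_skill", "payment", "relevance"]

class WorkerModel:
    """Linear preference model over task factors, refined iteratively."""
    def __init__(self, n_factors, lr=0.1):
        self.w = np.zeros(n_factors)   # preference weights
        self.lr = lr

    def score(self, task):
        """Predicted preference for a task described by factor values in [0, 1]."""
        return float(self.w @ task)

    def update(self, factor_idx, stated_pref):
        """Fold an explicit rating in [-1, 1] for one factor into the model."""
        self.w[factor_idx] += self.lr * (stated_pref - self.w[factor_idx])

def elicit(model, ask_worker, rounds=5):
    """Iteratively pick the least-informed factor and ask for an explicit rating."""
    for _ in range(rounds):
        idx = int(np.argmin(np.abs(model.w)))   # heuristic choice of next question
        model.update(idx, ask_worker(FACTORS[idx]))
    return model

# e.g. model = elicit(WorkerModel(len(FACTORS)), ask_worker=lambda f: 0.5)
```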

Continuously Constructive Deep Neural Networks

Title Continuously Constructive Deep Neural Networks
Authors Ozan İrsoy, Ethem Alpaydın
Abstract Traditionally, deep learning algorithms update the network weights whereas the network architecture is chosen manually, using a process of trial and error. In this work, we propose two novel approaches that automatically update the network structure while also learning its weights. The novelty of our approach lies in our parameterization, where the depth, or additional complexity, is encapsulated continuously in the parameter space through control parameters that add complexity. We propose two methods: in tunnel networks, this selection is done at the level of a hidden unit, and in budding perceptrons, this is done at the level of a network layer; updating this control parameter introduces either another hidden unit or another hidden layer. We show the effectiveness of our methods on the synthetic two-spirals data and on two real data sets, MNIST and MIRFLICKR, where we see that our proposed methods, with the same set of hyperparameters, can correctly adjust the network complexity to the task complexity.
Tasks
Published 2018-04-07
URL http://arxiv.org/abs/1804.02491v1
PDF http://arxiv.org/pdf/1804.02491v1.pdf
PWC https://paperswithcode.com/paper/continuously-constructive-deep-neural
Repo
Framework
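
One way to read the control-parameter idea is as a learnable gate that blends an identity path with a candidate hidden layer, so that growing the gate continuously "buds" extra depth. The PyTorch sketch below is an assumption-laden illustration of that reading, not the authors' exact parameterization of tunnel networks or budding perceptrons.

```python
import torch
import torch.nn as nn

class BuddingLayer(nn.Module):
    """A candidate hidden layer gated by a scalar control parameter.

    With the gate near 0 the block is close to the identity; as the gate
    grows, the extra layer's contribution is introduced continuously.
    """
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.gate = nn.Parameter(torch.zeros(1))   # control parameter, learned

    def forward(self, x):
        g = torch.sigmoid(self.gate)               # in (0, 1)
        return (1 - g) * x + g * torch.tanh(self.linear(x))
```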

Seed-Point Based Geometric Partitioning of Nuclei Clumps

Title Seed-Point Based Geometric Partitioning of Nuclei Clumps
Authors James Kapaldo
Abstract When applying automatic analysis to fluorescence or histopathological images of cells, it is necessary to partition, or de-clump, partially overlapping cell nuclei. In this work, I describe a method of partitioning partially overlapping cell nuclei using a seed-point based geometric partitioning. The geometric partitioning creates two different types of cuts: cuts between two boundary vertices and cuts between one boundary vertex and a new vertex introduced to the boundary interior. The cuts are then ranked according to a scoring metric, and the highest scoring cuts are used. This method was tested on a set of 2420 clumps of nuclei and was found to produce better results than current popular analysis software.
Tasks
Published 2018-04-12
URL http://arxiv.org/abs/1804.04549v1
PDF http://arxiv.org/pdf/1804.04549v1.pdf
PWC https://paperswithcode.com/paper/seed-point-based-geometric-partitioning-of
Repo
Framework
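
The cut-and-rank step for the first cut type (boundary vertex to boundary vertex) can be mocked up as candidate enumeration followed by scoring; the score used below (contour separation over chord length) is only a placeholder for the paper's metric.

```python
import numpy as np
from itertools import combinations

def candidate_cuts(boundary, min_sep=5):
    """Candidate cuts between pairs of boundary vertices.

    boundary: (N, 2) array of clump boundary points, ordered along the contour.
    """
    n = len(boundary)
    for i, j in combinations(range(n), 2):
        sep = min((j - i) % n, (i - j) % n)        # distance along the contour
        if sep >= min_sep:
            yield i, j

def score_cut(boundary, i, j):
    """Placeholder score: prefer short cuts between contour-distant points."""
    chord = np.linalg.norm(boundary[i] - boundary[j])
    sep = min((j - i) % len(boundary), (i - j) % len(boundary))
    return sep / (chord + 1e-9)

def best_cuts(boundary, k=2):
    """Return the k highest-scoring candidate cuts."""
    cuts = list(candidate_cuts(boundary))
    cuts.sort(key=lambda c: score_cut(boundary, *c), reverse=True)
    return cuts[:k]
```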

The Sparse Manifold Transform

Title The Sparse Manifold Transform
Authors Yubei Chen, Dylan M. Paiton, Bruno A. Olshausen
Abstract We present a signal representation framework called the sparse manifold transform that combines key ideas from sparse coding, manifold learning, and slow feature analysis. It turns non-linear transformations in the primary sensory signal space into linear interpolations in a representational embedding space while maintaining approximate invertibility. The sparse manifold transform is an unsupervised and generative framework that explicitly and simultaneously models the sparse discreteness and low-dimensional manifold structure found in natural scenes. When stacked, it also models hierarchical composition. We provide a theoretical description of the transform and demonstrate properties of the learned representation on both synthetic data and natural videos.
Tasks
Published 2018-06-23
URL http://arxiv.org/abs/1806.08887v2
PDF http://arxiv.org/pdf/1806.08887v2.pdf
PWC https://paperswithcode.com/paper/the-sparse-manifold-transform
Repo
Framework
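
The abstract's two ingredients, sparse codes per frame and a linear embedding in which temporal trajectories become straight, suggest a sketch like the one below: given sparse codes for consecutive frames, solve for a projection that minimizes second-order temporal differences under a whitening-style constraint. This is one plausible reading of the objective, assuming the codes are already computed; it is not the paper's exact algorithm.

```python
import numpy as np
from scipy.linalg import eigh

def sparse_manifold_embedding(A, d):
    """Linear map P such that P @ A varies as linearly as possible in time.

    A : (k, T) matrix of sparse codes for T consecutive frames (assumed given).
    d : embedding dimension.
    Minimizes ||P A D||_F^2, with D the second-difference operator, subject to
    P (A A^T / T) P^T = I (a whitening-style constraint).
    """
    k, T = A.shape
    # second-order temporal difference of the code trajectory
    D2 = A[:, 2:] - 2 * A[:, 1:-1] + A[:, :-2]          # (k, T-2)
    S = D2 @ D2.T                                        # roughness term
    C = A @ A.T / T + 1e-6 * np.eye(k)                   # regularized covariance
    # generalized eigenvectors with the smallest eigenvalues span the embedding
    vals, vecs = eigh(S, C, subset_by_index=[0, d - 1])
    return vecs.T                                        # P, shape (d, k)
```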

On Polynomial time Constructions of Minimum Height Decision Tree

Title On Polynomial time Constructions of Minimum Height Decision Tree
Authors Nader H. Bshouty, Waseem Makhoul
Abstract In this paper we study polynomial time algorithms that, for an input $A\subseteq B_m$, output a decision tree for $A$ of minimum depth. This problem has many applications that include, to name a few, computer vision, group testing, exact learning from membership queries and game theory. Arkin et al. and Moshkov gave a polynomial time $(\ln |A|)$-approximation algorithm (for the depth). The result of Dinur and Steurer for set cover implies that this problem cannot be approximated with ratio $(1-o(1))\cdot \ln |A|$, unless P=NP. Moshkov studied the combinatorial measure of extended teaching dimension of $A$, $ETD(A)$. He showed that $ETD(A)$ is a lower bound for the depth of the decision tree for $A$ and then gave an {\it exponential time} $ETD(A)/\log(ETD(A))$-approximation algorithm. In this paper we further study the $ETD(A)$ measure and define a new combinatorial measure, $DEN(A)$, that we call the density of the set $A$. We show that $DEN(A)\le ETD(A)+1$. We then give two results. The first result is that the lower bound $ETD(A)$ of Moshkov for the depth of the decision tree for $A$ is greater than the bounds that are obtained by the classical technique used in the literature. The second result is a polynomial time $(\ln 2) DEN(A)$-approximation (and therefore $(\ln 2) ETD(A)$-approximation) algorithm for the depth of the decision tree of $A$. We also show that a better approximation ratio implies P=NP. We then apply the above results to learning the class of disjunctions of predicates from membership queries. We show that the $ETD$ of this class is bounded from above by the degree $d$ of its Hasse diagram. We then show that Moshkov's algorithm can be run in polynomial time and is a $(d/\log d)$-approximation algorithm. This gives optimal algorithms when the degree is constant, for example, when learning axis-parallel rays over a constant-dimensional space.
Tasks
Published 2018-02-01
URL http://arxiv.org/abs/1802.00233v1
PDF http://arxiv.org/pdf/1802.00233v1.pdf
PWC https://paperswithcode.com/paper/on-polynomial-time-constructions-of-minimum
Repo
Framework
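
The classical $(\ln |A|)$-approximation of Arkin et al. and Moshkov referenced above is greedy: at every node, query the coordinate whose worst branch leaves the fewest surviving strings. A minimal sketch over binary strings (this illustrates the classical baseline, not the paper's new DEN-based algorithm):

```python
def greedy_decision_tree_depth(A, m):
    """Depth of a greedily built decision tree that identifies each string in A.

    A : list of distinct binary strings of length m.
    At every node, query the coordinate whose larger branch is smallest;
    this is the classical greedy (ln |A|)-approximation.
    """
    if len(A) <= 1:
        return 0
    best = None
    for i in range(m):
        zeros = [a for a in A if a[i] == "0"]
        ones = [a for a in A if a[i] == "1"]
        if zeros and ones:                       # only informative queries
            cand = (max(len(zeros), len(ones)), i, zeros, ones)
            if best is None or cand[0] < best[0]:
                best = cand
    _, _, zeros, ones = best
    return 1 + max(greedy_decision_tree_depth(zeros, m),
                   greedy_decision_tree_depth(ones, m))

# e.g. greedy_decision_tree_depth(["000", "011", "101", "110"], 3) -> 2
```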

Knowledge-driven generative subspaces for modeling multi-view dependencies in medical data

Title Knowledge-driven generative subspaces for modeling multi-view dependencies in medical data
Authors Parvathy Sudhir Pillai, Tze-Yun Leong
Abstract Early detection of Alzheimer’s disease (AD) and identification of potential risk/beneficial factors are important for planning and administering timely interventions or preventive measures. In this paper, we learn a disease model for AD that combines genotypic and phenotypic profiles, and cognitive health metrics of patients. We propose a probabilistic generative subspace that describes the correlative, complementary and domain-specific semantics of the dependencies in multi-view, multi-modality medical data. Guided by domain knowledge and using the latent consensus between abstractions of multi-view data, we model the fusion as a data generating process. We show that our approach can potentially lead to i) explainable clinical predictions and ii) improved AD diagnoses.
Tasks
Published 2018-12-03
URL http://arxiv.org/abs/1812.00509v1
PDF http://arxiv.org/pdf/1812.00509v1.pdf
PWC https://paperswithcode.com/paper/knowledge-driven-generative-subspaces-for
Repo
Framework

Self-Supervised GAN to Counter Forgetting

Title Self-Supervised GAN to Counter Forgetting
Authors Ting Chen, Xiaohua Zhai, Neil Houlsby
Abstract GANs involve training two networks in an adversarial game, where each network’s task depends on its adversary. Recently, several works have framed GAN training as an online or continual learning problem. We focus on the discriminator, which must perform classification under an (adversarially) shifting data distribution. When trained on sequential tasks, neural networks exhibit \emph{forgetting}. For GANs, discriminator forgetting leads to training instability. To counter forgetting, we encourage the discriminator to maintain useful representations by adding a self-supervision task. Conditional GANs have a similar effect using labels. However, our self-supervised GAN does not require labels, and closes the performance gap between conditional and unconditional models. We show that, in doing so, the self-supervised discriminator learns better representations than regular GANs.
Tasks Continual Learning
Published 2018-10-27
URL http://arxiv.org/abs/1810.11598v2
PDF http://arxiv.org/pdf/1810.11598v2.pdf
PWC https://paperswithcode.com/paper/self-supervised-gan-to-counter-forgetting
Repo
Framework
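
The abstract does not name the pretext task; in the published self-supervised GAN line of work it is rotation prediction, and the sketch below assumes that choice: the discriminator gains an auxiliary head that classifies which of four rotations was applied, and that loss is added alongside the adversarial one.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSDiscriminator(nn.Module):
    """Discriminator with an auxiliary self-supervision (rotation) head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(128, 1)    # real / fake
        self.rot_head = nn.Linear(128, 4)    # 0, 90, 180, 270 degrees

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.rot_head(h)

def rotation_loss(disc, images, alpha=1.0):
    """Auxiliary loss: predict the multiple of 90 degrees applied to each (square) image."""
    k = torch.randint(0, 4, (images.size(0),), device=images.device)
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(images, k)])
    _, rot_logits = disc(rotated)
    return alpha * F.cross_entropy(rot_logits, k)
```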

Stochastic subgradient method converges at the rate $O(k^{-1/4})$ on weakly convex functions

Title Stochastic subgradient method converges at the rate $O(k^{-1/4})$ on weakly convex functions
Authors Damek Davis, Dmitriy Drusvyatskiy
Abstract We prove that the proximal stochastic subgradient method, applied to a weakly convex problem, drives the gradient of the Moreau envelope to zero at the rate $O(k^{-1/4})$. As a consequence, we resolve an open question on the convergence rate of the proximal stochastic gradient method for minimizing the sum of a smooth nonconvex function and a convex proximable function.
Tasks
Published 2018-02-08
URL http://arxiv.org/abs/1802.02988v3
PDF http://arxiv.org/pdf/1802.02988v3.pdf
PWC https://paperswithcode.com/paper/stochastic-subgradient-method-converges-at
Repo
Framework
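
For context, the stationarity measure behind this rate is the gradient of the Moreau envelope. With $\varphi$ being $\rho$-weakly convex and $\lambda < 1/\rho$, the envelope and the guarantee (restated informally from the abstract) read:

$$\varphi_{\lambda}(x) = \min_{y}\Big\{\varphi(y) + \tfrac{1}{2\lambda}\lVert y - x\rVert^{2}\Big\}, \qquad \min_{j\le k}\ \mathbb{E}\,\lVert\nabla\varphi_{\lambda}(x_j)\rVert = O\!\big(k^{-1/4}\big).$$

A small gradient of $\varphi_{\lambda}$ at $x$ certifies that $x$ is close to a point that is nearly stationary for $\varphi$, which is why this serves as the convergence measure for nonsmooth, weakly convex problems.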

Continual Classification Learning Using Generative Models

Title Continual Classification Learning Using Generative Models
Authors Frantzeska Lavda, Jason Ramapuram, Magda Gregorova, Alexandros Kalousis
Abstract Continual learning is the ability to sequentially learn over time by accommodating knowledge while retaining previously learned experiences. Neural networks can learn multiple tasks when trained on them jointly, but cannot maintain performance on previously learned tasks when tasks are presented one at a time. This problem is called catastrophic forgetting. In this work, we propose a classification model that learns continuously from sequentially observed tasks, while preventing catastrophic forgetting. We build on the lifelong generative capabilities of [10] and extend them to the classification setting by deriving a new variational bound on the joint log likelihood, $\log p(x, y)$.
Tasks Continual Learning
Published 2018-10-24
URL http://arxiv.org/abs/1810.10612v1
PDF http://arxiv.org/pdf/1810.10612v1.pdf
PWC https://paperswithcode.com/paper/continual-classification-learning-using
Repo
Framework
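
The abstract does not reproduce the new bound; under the usual assumption that inputs $x$ and labels $y$ are conditionally independent given a latent code $z$, a bound of the stated shape on the joint log likelihood is (the paper's actual bound may differ):

$$\log p(x, y) \;\ge\; \mathbb{E}_{q_{\phi}(z\mid x)}\big[\log p_{\theta}(x\mid z) + \log p_{\theta}(y\mid z)\big] - \mathrm{KL}\big(q_{\phi}(z\mid x)\,\|\,p(z)\big).$$

Maximizing an objective of this form trains the generative path (reconstruction term) and the classifier (label likelihood term) jointly, so earlier tasks can be rehearsed from the generator rather than from stored data.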

Systimator: A Design Space Exploration Methodology for Systolic Array based CNNs Acceleration on the FPGA-based Edge Nodes

Title Systimator: A Design Space Exploration Methodology for Systolic Array based CNNs Acceleration on the FPGA-based Edge Nodes
Authors Hazoor Ahmad, Muhammad Tanvir, Muhammad Abdullah Hanif, Muhammad Usama Javed, Rehan Hafiz, Muhammad Shafique
Abstract The evolution of IoT based smart applications demands porting of artificial intelligence algorithms to edge computing devices. CNNs form a large part of these AI algorithms. Systolic array based CNN acceleration is being widely advocated due to its ability to allow scalable architectures. However, CNNs are inherently memory and compute intensive algorithms, and hence pose significant challenges to be implemented on resource-constrained edge computing devices. Memory-constrained low-cost FPGA based devices form a substantial fraction of these edge computing devices. Thus, when porting to such edge-computing devices, the designer is left unguided as to how to select a suitable systolic array configuration that could fit in the available hardware resources. In this paper we propose Systimator, a design space exploration based methodology that provides a set of design points that can be mapped within the memory bounds of the target FPGA device. The methodology is based upon an analytical model that is formulated to estimate the required resources for systolic arrays, assuming multiple data reuse patterns. The methodology further provides the performance estimates for each of the candidate design points. We show that Systimator provides an in-depth analysis of the resource requirements of systolic array based CNNs. We provide our resource estimation results for porting the convolutional layers of TINY YOLO, a CNN based object detector, to a Xilinx ARTIX-7 FPGA.
Tasks
Published 2018-12-15
URL http://arxiv.org/abs/1901.04986v2
PDF http://arxiv.org/pdf/1901.04986v2.pdf
PWC https://paperswithcode.com/paper/systimator-a-design-space-exploration
Repo
Framework
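
A design-space exploration of this kind reduces to enumerating array shapes and discarding those whose estimated resources exceed the device budget. The feasibility check below is purely illustrative (one MAC per DSP, on-chip buffers sized to the chosen tiles); it is not the analytical model from the paper, and the budgets in the example are hypothetical.

```python
def systolic_array_fits(rows, cols, word_bits, ifmap_tile, weight_tile,
                        dsp_budget, bram_kbits_budget):
    """Very rough feasibility check for one systolic-array design point."""
    dsp_needed = rows * cols                                  # one DSP per PE
    buffer_bits = (ifmap_tile + weight_tile + rows * cols) * word_bits
    return dsp_needed <= dsp_budget and buffer_bits <= bram_kbits_budget * 1024

def explore(dsp_budget, bram_kbits_budget, word_bits=16):
    """Enumerate array shapes and keep those that fit the FPGA budgets."""
    feasible = []
    for rows in (8, 16, 32, 64):
        for cols in (8, 16, 32, 64):
            if systolic_array_fits(rows, cols, word_bits,
                                   ifmap_tile=rows * 1024, weight_tile=cols * 1024,
                                   dsp_budget=dsp_budget,
                                   bram_kbits_budget=bram_kbits_budget):
                feasible.append((rows, cols))
    return feasible

# e.g. explore(dsp_budget=700, bram_kbits_budget=4000)   # hypothetical budgets
```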

An image representation based convolutional network for DNA classification

Title An image representation based convolutional network for DNA classification
Authors Bojian Yin, Marleen Balvert, Davide Zambrano, Alexander Schönhuth, Sander Bohte
Abstract The folding structure of the DNA molecule combined with helper molecules, also referred to as the chromatin, is highly relevant for the functional properties of DNA. The chromatin structure is largely determined by the underlying primary DNA sequence, though the interaction is not yet fully understood. In this paper we develop a convolutional neural network that takes an image-representation of primary DNA sequence as its input, and predicts key determinants of chromatin structure. The method is developed such that it is capable of detecting interactions between distal elements in the DNA sequence, which are known to be highly relevant. Our experiments show that the method outperforms several existing methods both in terms of prediction accuracy and training time.
Tasks
Published 2018-06-13
URL http://arxiv.org/abs/1806.04931v1
PDF http://arxiv.org/pdf/1806.04931v1.pdf
PWC https://paperswithcode.com/paper/an-image-representation-based-convolutional
Repo
Framework
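
A minimal version of the pipeline: one-hot encode the sequence over {A, C, G, T}, lay it out as a 2D image, and feed it to a small CNN. The row-major layout used here is a simplification; a locality-preserving mapping such as a space-filling curve is a natural alternative, and the abstract does not specify which mapping the authors use.

```python
import numpy as np
import torch
import torch.nn as nn

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def sequence_to_image(seq, side=32):
    """One-hot encode a DNA sequence as a (4, side, side) image (row-major layout)."""
    img = np.zeros((4, side * side), dtype=np.float32)
    for pos, base in enumerate(seq[: side * side]):
        img[BASES[base], pos] = 1.0
    return torch.from_numpy(img.reshape(4, side, side))

cnn = nn.Sequential(                      # small CNN over the image representation
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 1),       # e.g. one chromatin determinant, as a logit
)

x = sequence_to_image("ACGT" * 256).unsqueeze(0)   # toy 1024-bp input
logit = cnn(x)
```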

Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations

Title Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations
Authors Alexander Ororbia, Ankur Mali, C. Lee Giles, Daniel Kifer
Abstract Temporal models based on recurrent neural networks have proven to be quite powerful in a wide variety of applications. However, training these models often relies on back-propagation through time, which entails unfolding the network over many time steps, making the process of conducting credit assignment considerably more challenging. Furthermore, the nature of back-propagation itself does not permit the use of non-differentiable activation functions and is inherently sequential, making parallelization of the underlying training process difficult. Here, we propose the Parallel Temporal Neural Coding Network (P-TNCN), a biologically inspired model trained by the learning algorithm we call Local Representation Alignment. It aims to resolve the difficulties and problems that plague recurrent networks trained by back-propagation through time. The architecture requires neither unrolling in time nor the derivatives of its internal activation functions. We compare our model and learning procedure to other back-propagation through time alternatives (which also tend to be computationally expensive), including real-time recurrent learning, echo state networks, and unbiased online recurrent optimization. We show that it outperforms these on sequence modeling benchmarks such as Bouncing MNIST, a new benchmark we denote as Bouncing NotMNIST, and Penn Treebank. Notably, our approach can in some instances outperform full back-propagation through time as well as variants such as sparse attentive back-tracking. Significantly, the hidden unit correction phase of P-TNCN allows it to adapt to new datasets even if its synaptic weights are held fixed (zero-shot adaptation) and facilitates retention of prior generative knowledge when faced with a task sequence. We present results that show the P-TNCN’s ability to conduct zero-shot adaptation and online continual sequence modeling.
Tasks Continual Learning, Language Modelling
Published 2018-10-17
URL https://arxiv.org/abs/1810.07411v4
PDF https://arxiv.org/pdf/1810.07411v4.pdf
PWC https://paperswithcode.com/paper/continual-learning-of-recurrent-neural
Repo
Framework

A Deep Autoencoder System for Differentiation of Cancer Types Based on DNA Methylation State

Title A Deep Autoencoder System for Differentiation of Cancer Types Based on DNA Methylation State
Authors Mohammed Khwaja, Melpomeni Kalofonou, Chris Toumazou
Abstract A Deep Autoencoder based content retrieval algorithm is proposed for prediction and differentiation of cancer types based on the presence of epigenetic patterns of DNA methylation identified in genetic regions known as CpG islands. The developed deep learning system uses a CpG island state classification sub-system to complete sets of missing/incomplete island data in given human cell lines, and is then pipelined with an intricate set of statistical and signal processing methods to accurately predict the presence of cancer and further differentiate the type and cell of origin in the event of a positive result. The proposed system was trained with previously reported data derived from four case groups of cancer cell lines, achieving overall Sensitivity of 88.24%, Specificity of 83.33%, Accuracy of 84.75% and Matthews Correlation Coefficient of 0.687. The ability to predict and differentiate cancer types using epigenetic events as the identifying patterns was demonstrated in previously reported data sets from breast, lung, lymphoblastic leukemia and urological cancer cell lines, allowing the pipelined system to be robust and adjustable to other cancer cell lines or epigenetic events.
Tasks
Published 2018-10-02
URL http://arxiv.org/abs/1810.01243v2
PDF http://arxiv.org/pdf/1810.01243v2.pdf
PWC https://paperswithcode.com/paper/a-deep-autoencoder-system-for-differentiation
Repo
Framework
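
A stripped-down stand-in for the autoencoder component: encode per-CpG-island methylation levels (beta values in [0, 1]) into a low-dimensional code whose reconstruction can also fill in missing island states. Layer sizes and the use of beta values as input are assumptions; the paper's full pipeline adds a CpG island state classification sub-system and downstream statistical processing.

```python
import torch
import torch.nn as nn

class MethylationAutoencoder(nn.Module):
    """Illustrative autoencoder over per-island methylation levels."""
    def __init__(self, n_islands, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_islands, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_islands), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z      # reconstruction can complete missing islands

# Cancer-type differentiation could then operate on the latent codes z
# rather than on the raw island profiles.
```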

Pilot Comparative Study of Different Deep Features for Palmprint Identification in Low-Quality Images

Title Pilot Comparative Study of Different Deep Features for Palmprint Identification in Low-Quality Images
Authors A. S. Tarawneh, D. Chetverikov, A. B. Hassanat
Abstract Deep Convolutional Neural Networks (CNNs) are widespread, efficient tools of visual recognition. In this paper, we present a comparative study of three popular pre-trained CNN models: AlexNet, VGG-16 and VGG-19. We address the problem of palmprint identification in low-quality imagery and apply Support Vector Machines (SVMs) with all of the compared models. For the comparison, we use the MOHI palmprint image database whose images are characterized by low contrast, shadows, and varying illumination, scale, translation and rotation. Another, high-quality database called COEP is also considered to study the recognition gap between high-quality and low-quality imagery. Our experiments show that the deeper pre-trained CNN models, e.g., VGG-16 and VGG-19, tend to extract highly distinguishable features that recognize low-quality palmprints more efficiently than the less deep networks such as AlexNet. Furthermore, our experiments on the two databases using various models demonstrate that the features extracted from lower-level fully connected layers provide higher recognition rates than higher-layer features. Our results indicate that different pre-trained models can be efficiently used in touchless identification systems with low-quality palmprint images.
Tasks
Published 2018-04-10
URL http://arxiv.org/abs/1804.04602v1
PDF http://arxiv.org/pdf/1804.04602v1.pdf
PWC https://paperswithcode.com/paper/pilot-comparative-study-of-different-deep
Repo
Framework
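
The workflow compared in the paper, features from a fully connected layer of a pre-trained CNN fed to an SVM, takes only a few lines with torchvision and scikit-learn. The choice of VGG-16's first FC layer below follows the abstract's finding that lower-level FC features recognize better; the preprocessing constants are the standard ImageNet ones.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import LinearSVC

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
# keep everything up to (and including) the first fully connected layer, fc6
fc6_extractor = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten(),
                              *list(vgg.classifier.children())[:2])   # Linear + ReLU

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract(pil_images):
    """4096-d fc6 features for a list of PIL palmprint images."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    with torch.no_grad():
        return fc6_extractor(batch).numpy()

# svm = LinearSVC().fit(extract(train_images), train_labels)
# preds = svm.predict(extract(test_images))
```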

Deep factorization for speech signal

Title Deep factorization for speech signal
Authors Lantian Li, Dong Wang, Yixiang Chen, Ying Shi, Zhiyuan Tang, Thomas Fang Zheng
Abstract Various informative factors are mixed in speech signals, leading to great difficulty when decoding any of them. An intuitive idea is to factorize each speech frame into individual informative factors, though this turns out to be highly difficult. Recently, we found that speaker traits, which were assumed to be long-term distributional properties, are actually short-time patterns, and can be learned by a carefully designed deep neural network (DNN). This discovery motivated a cascade deep factorization (CDF) framework that will be presented in this paper. The proposed framework infers speech factors in a sequential way, where factors previously inferred are used as conditional variables when inferring other factors. We will show that this approach can effectively factorize speech signals, and using these factors, the original speech spectrum can be recovered with high accuracy. This factorization and reconstruction approach provides potential value for many speech processing tasks, e.g., speaker recognition and emotion recognition, as will be demonstrated in the paper.
Tasks Emotion Recognition, Speaker Recognition
Published 2018-02-27
URL http://arxiv.org/abs/1803.00886v1
PDF http://arxiv.org/pdf/1803.00886v1.pdf
PWC https://paperswithcode.com/paper/deep-factorization-for-speech-signal
Repo
Framework
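
The cascade idea, infer one factor and then condition the next extractor on it, can be sketched as two small networks chained on frame-level features. The layer sizes, the 40-dimensional filterbank input, and the speaker-then-emotion ordering are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FactorNet(nn.Module):
    """Extracts one factor embedding from a speech frame plus prior factors."""
    def __init__(self, in_dim, cond_dim, factor_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, factor_dim),
        )

    def forward(self, frame, cond=None):
        if cond is not None:
            frame = torch.cat([frame, cond], dim=-1)
        return self.net(frame)

# Cascade: speaker factor first, then emotion factor conditioned on it.
speaker_net = FactorNet(in_dim=40, cond_dim=0, factor_dim=64)   # 40-d filterbank frame
emotion_net = FactorNet(in_dim=40, cond_dim=64, factor_dim=32)

frame = torch.randn(1, 40)
speaker_factor = speaker_net(frame)
emotion_factor = emotion_net(frame, cond=speaker_factor)
```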