Paper Group ANR 81
Face Recognition Machine Vision System Using Eigenfaces. Question Answering and Question Generation as Dual Tasks. Bandit-Based Model Selection for Deformable Object Manipulation. On (Anti)Conditional Independence in Dempster-Shafer Theory. DeepTingle. Anti-spoofing Methods for Automatic Speaker Verification System. Inferring the parameters of a Markov process from snapshots of the steady state. Learning Gating ConvNet for Two-Stream based Methods in Action Recognition. Self-organized Hierarchical Softmax. Clinical Intervention Prediction and Understanding using Deep Networks. A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning. Efficiently Summarising Event Sequences with Rich Interleaving Patterns. MMCR4NLP: Multilingual Multiway Corpora Repository for Natural Language Processing. Regularization of Deep Neural Networks with Spectral Dropout. Boundary-sensitive Network for Portrait Segmentation.
Face Recognition Machine Vision System Using Eigenfaces
Title | Face Recognition Machine Vision System Using Eigenfaces |
Authors | Fares Jalled |
Abstract | Face recognition is a common problem in machine learning, and the technology is already widely used in everyday life: for example, Facebook automatically tags people’s faces in images, and some mobile devices use face recognition to protect private data. Face images come with different backgrounds, varying illumination, different facial expressions and occlusion. There is a large number of approaches to face recognition, but most have been evaluated on specific databases consisting of a single type, format and composition of images, so they do not transfer well to other face databases. One of the basic face recognition techniques is the eigenface method, which is simple, efficient, and yields generally good results in controlled circumstances. This paper therefore presents an experimental performance comparison of face recognition using Principal Component Analysis (PCA) and Normalized Principal Component Analysis (NPCA). The experiments are carried out on the ORL (AT&T) and Indian face databases (IFD), which contain variability in expression, pose, and facial details. The results of the two methods are compared while varying the number of training images. MATLAB is used to implement the algorithms. |
Tasks | Face Recognition |
Published | 2017-05-08 |
URL | http://arxiv.org/abs/1705.02782v1 |
http://arxiv.org/pdf/1705.02782v1.pdf | |
PWC | https://paperswithcode.com/paper/face-recognition-machine-vision-system-using |
Repo | |
Framework | |
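As a rough illustration of the eigenface technique the abstract describes, below is a minimal PCA-based recognition sketch in NumPy. It assumes flattened, equally sized grayscale training images and nearest-neighbour matching in eigenface space; the NPCA normalization and the paper's MATLAB implementation are not reproduced here.

```python
import numpy as np

def fit_eigenfaces(train_images, n_components=20):
    """train_images: (n_samples, n_pixels) array of flattened faces."""
    mean_face = train_images.mean(axis=0)
    centered = train_images - mean_face
    # SVD of the centered data yields the principal components (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]           # (n_components, n_pixels)
    train_weights = centered @ eigenfaces.T  # project training faces
    return mean_face, eigenfaces, train_weights

def recognize(face, mean_face, eigenfaces, train_weights, train_labels):
    """Nearest-neighbour match in eigenface space."""
    w = (face - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(train_weights - w, axis=1)
    return train_labels[np.argmin(distances)]
```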
Question Answering and Question Generation as Dual Tasks
Title | Question Answering and Question Generation as Dual Tasks |
Authors | Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, Ming Zhou |
Abstract | We study the problem of joint question answering (QA) and question generation (QG) in this paper. Our intuition is that QA and QG have intrinsic connections and these two tasks could improve each other. On one side, the QA model judges whether the generated question of a QG model is relevant to the answer. On the other side, the QG model provides the probability of generating a question given the answer, which is useful evidence that in turn facilitates QA. In this paper we regard QA and QG as dual tasks. We propose a training framework that trains the QA and QG models simultaneously, and explicitly leverages their probabilistic correlation to guide the training process of both models. We implement a QG model based on sequence-to-sequence learning, and a QA model based on a recurrent neural network. As all the components of the QA and QG models are differentiable, all the parameters involved in these two models can be conveniently learned with backpropagation. We conduct experiments on three datasets. Empirical results show that our training framework improves both QA and QG tasks. The improved QA model performs comparably with strong baseline approaches on all three datasets. |
Tasks | Question Answering, Question Generation |
Published | 2017-06-07 |
URL | http://arxiv.org/abs/1706.02027v2 |
http://arxiv.org/pdf/1706.02027v2.pdf | |
PWC | https://paperswithcode.com/paper/question-answering-and-question-generation-as |
Repo | |
Framework | |
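The probabilistic correlation the abstract mentions can be written as P(q)P(a|q) = P(a)P(q|a). Below is a hedged sketch of a duality regularizer built on that identity; the model and marginal estimates are hypothetical placeholders, not the paper's actual QA/QG architectures.

```python
import torch

def duality_regularizer(log_p_q, log_p_a, log_p_a_given_q, log_p_q_given_a):
    """All arguments are log-probabilities (tensors) for a batch of (q, a) pairs.
    log_p_q / log_p_a would come from language models over questions/answers;
    log_p_a_given_q from the QA model, log_p_q_given_a from the QG model."""
    gap = (log_p_q + log_p_a_given_q) - (log_p_a + log_p_q_given_a)
    return (gap ** 2).mean()

# Hypothetical usage:
# total_loss = qa_loss + qg_loss + lambda_dual * duality_regularizer(...)
```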
Bandit-Based Model Selection for Deformable Object Manipulation
Title | Bandit-Based Model Selection for Deformable Object Manipulation |
Authors | Dale McConachie, Dmitry Berenson |
Abstract | We present a novel approach to deformable object manipulation that does not rely on highly-accurate modeling. The key contribution of this paper is to formulate the task as a Multi-Armed Bandit problem, with each arm representing a model of the deformable object. To “pull” an arm and evaluate its utility, we use the arm’s model to generate a velocity command for the gripper(s) holding the object and execute it. As the task proceeds and the object deforms, the utility of each model can change. Our framework estimates these changes and balances exploration of the model set with exploitation of high-utility models. We also propose an approach based on Kalman Filtering for Non-stationary Multi-armed Normal Bandits (KF-MANB) to leverage the coupling between models to learn more from each arm pull. We demonstrate that our method outperforms previous methods on synthetic trials, and performs competitively on several manipulation tasks in simulation. |
Tasks | Deformable Object Manipulation, Model Selection |
Published | 2017-03-29 |
URL | http://arxiv.org/abs/1703.10254v1 |
http://arxiv.org/pdf/1703.10254v1.pdf | |
PWC | https://paperswithcode.com/paper/bandit-based-model-selection-for-deformable |
Repo | |
Framework | |
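A minimal sketch in the spirit of KF-MANB is given below: each arm (deformable-object model) is tracked by its own Kalman filter and selected by Thompson sampling. The coupling between models that the paper exploits is not modeled here, and the noise parameters are illustrative.

```python
import numpy as np

class KalmanNormalBandit:
    """Each arm's utility follows a random walk; a per-arm Kalman filter tracks
    its mean and variance, and arms are chosen by Thompson sampling."""
    def __init__(self, n_arms, obs_var=1.0, drift_var=0.1):
        self.mean = np.zeros(n_arms)
        self.var = np.ones(n_arms) * 10.0   # broad initial uncertainty
        self.obs_var = obs_var              # observation noise
        self.drift_var = drift_var          # process noise (non-stationarity)

    def select_arm(self, rng):
        samples = rng.normal(self.mean, np.sqrt(self.var))
        return int(np.argmax(samples))

    def update(self, arm, reward):
        # Predict step: every arm drifts, so uncertainty grows for all of them.
        self.var += self.drift_var
        # Correct step for the pulled arm only.
        k = self.var[arm] / (self.var[arm] + self.obs_var)
        self.mean[arm] += k * (reward - self.mean[arm])
        self.var[arm] *= (1.0 - k)
```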
On (Anti)Conditional Independence in Dempster-Shafer Theory
Title | On (Anti)Conditional Independence in Dempster-Shafer Theory |
Authors | Mieczysław A. Kłopotek |
Abstract | This paper verifies a result of Shenoy (1994) concerning the graphoidal structure of Shenoy’s notion of independence for the Dempster-Shafer theory of belief functions. Shenoy proved that his notion of independence has graphoidal properties for positive normal valuations. The requirement of strictly positive normal valuations as a prerequisite for applying the graphoidal properties excludes a wide class of DS belief functions, in particular so-called probabilistic belief functions. It is demonstrated that the positiveness requirement may be weakened: it suffices to require that the commonality function is non-zero for singleton sets, and the graphoidal properties for independence of belief-function variables are then preserved. This means, in particular, that probabilistic belief functions with all singleton sets as focal points possess graphoidal properties for independence. |
Tasks | |
Published | 2017-07-13 |
URL | http://arxiv.org/abs/1707.04277v1 |
http://arxiv.org/pdf/1707.04277v1.pdf | |
PWC | https://paperswithcode.com/paper/on-anticonditional-independence-in-dempster |
Repo | |
Framework | |
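The weakened condition hinges on the commonality function being non-zero on singleton sets. A small sketch for computing commonality values from a mass assignment, assuming a finite frame of discernment represented as Python sets, is shown below.

```python
from itertools import combinations

def commonality(masses, frame):
    """masses: dict mapping frozenset focal elements to mass values.
    Returns Q(A) = sum of m(B) over all focal elements B that contain A."""
    def subsets(s):
        s = list(s)
        return (frozenset(c) for r in range(len(s) + 1)
                for c in combinations(s, r))
    return {a: sum(m for b, m in masses.items() if a <= b)
            for a in subsets(frame)}

def singleton_commonalities_positive(masses, frame):
    """The weakened prerequisite discussed in the abstract."""
    q = commonality(masses, frame)
    return all(q[frozenset({x})] > 0 for x in frame)
```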
DeepTingle
Title | DeepTingle |
Authors | Ahmed Khalifa, Gabriella A. B. Barros, Julian Togelius |
Abstract | DeepTingle is a text prediction and classification system trained on the collected works of the renowned fantastic gay erotica author Chuck Tingle. Whereas the writing assistance tools you use every day (in the form of predictive text, translation, grammar checking and so on) are trained on generic, purportedly “neutral” datasets, DeepTingle is trained on a very specific, internally consistent but externally arguably eccentric dataset. This allows us to foreground and confront the norms embedded in data-driven creativity and productivity assistance tools. As such tools effectively function as extensions of our cognition into technology, it is important to identify the norms they embed within themselves and, by extension, us. DeepTingle is realized as a web application based on LSTM networks and the GloVe word embedding, implemented in JavaScript with Keras-JS. |
Tasks | |
Published | 2017-05-09 |
URL | http://arxiv.org/abs/1705.03557v2 |
http://arxiv.org/pdf/1705.03557v2.pdf | |
PWC | https://paperswithcode.com/paper/deeptingle |
Repo | |
Framework | |
Anti-spoofing Methods for Automatic Speaker Verification System
Title | Anti-spoofing Methods for Automatic Speaker Verification System |
Authors | Galina Lavrentyeva, Sergey Novoselov, Konstantin Simonchik |
Abstract | Growing interest in automatic speaker verification (ASV) systems has led to significant quality improvement of spoofing attacks on them. Many research works confirm that, despite a low equal error rate (EER), ASV systems are still vulnerable to spoofing attacks. In this work we overview different acoustic feature spaces and classifiers to determine reliable and robust countermeasures against spoofing attacks. We compared several spoofing detection systems, presented so far, on the development and evaluation datasets of the Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge 2015. Experimental results presented in this paper demonstrate that combining magnitude and phase information provides a substantial contribution to the efficiency of spoofing detection systems. Wavelet-based features also show impressive results in terms of equal error rate. In our overview we compare spoofing detection performance for systems based on different classifiers. Comparison results demonstrate that the linear SVM classifier outperforms the conventional GMM approach. However, many researchers, inspired by the great success of deep neural network (DNN) approaches in automatic speech recognition, have applied DNNs to the spoofing detection task and obtained quite low EER for known and unknown types of spoofing attacks. |
Tasks | Speaker Verification, Speech Recognition |
Published | 2017-05-24 |
URL | http://arxiv.org/abs/1705.08865v1 |
http://arxiv.org/pdf/1705.08865v1.pdf | |
PWC | https://paperswithcode.com/paper/anti-spoofing-methods-for-automatic |
Repo | |
Framework | |
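A minimal sketch of the SVM-based countermeasure direction the abstract discusses, assuming crude utterance-level statistics over magnitude and phase spectrograms as stand-in features (not the paper's exact front-ends):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def utterance_features(spectrogram_mag, spectrogram_phase):
    """Per-frequency mean/std over time for magnitude and (unwrapped) phase
    spectrograms; crude stand-ins for the feature spaces the paper compares."""
    feats = []
    for spec in (spectrogram_mag, spectrogram_phase):
        feats.extend([spec.mean(axis=1), spec.std(axis=1)])
    return np.concatenate(feats)

def train_detector(X, y):
    """X: (n_utterances, n_features); y: 1 = genuine speech, 0 = spoofed."""
    clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
    clf.fit(X, y)
    return clf
```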
Inferring the parameters of a Markov process from snapshots of the steady state
Title | Inferring the parameters of a Markov process from snapshots of the steady state |
Authors | Simon Lee Dettmer, Johannes Berg |
Abstract | We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics. |
Tasks | |
Published | 2017-07-13 |
URL | http://arxiv.org/abs/1707.04114v3 |
http://arxiv.org/pdf/1707.04114v3.pdf | |
PWC | https://paperswithcode.com/paper/inferring-the-parameters-of-a-markov-process |
Repo | |
Framework | |
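One reading of the propagator likelihood for a system with discrete configurations: propagate the empirical distribution one fictitious step forward under the parameterized transition kernel and evaluate the observed samples under the result. The sketch below follows that reading; `transition_matrix` is a hypothetical user-supplied function mapping parameters to a row-stochastic matrix.

```python
import numpy as np
from scipy.optimize import minimize

def neg_propagator_likelihood(theta, samples, n_states, transition_matrix):
    """samples: array of observed steady-state configurations (integer states)."""
    counts = np.bincount(samples, minlength=n_states)
    p_emp = counts / counts.sum()
    # Propagate the empirical distribution one step forward in fictitious time.
    p_prop = p_emp @ transition_matrix(theta)
    # Log-likelihood of the samples under the propagated distribution.
    return -np.sum(counts * np.log(p_prop + 1e-12))

# Hypothetical usage:
# theta_hat = minimize(neg_propagator_likelihood, theta0,
#                      args=(samples, n_states, my_transition_matrix)).x
```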
Learning Gating ConvNet for Two-Stream based Methods in Action Recognition
Title | Learning Gating ConvNet for Two-Stream based Methods in Action Recognition |
Authors | Jiagang Zhu, Wei Zou, Zheng Zhu |
Abstract | For two-stream methods in action recognition, the two streams’ predictions are usually fused by weighted averaging. This fusion scheme with fixed weights is not tailored to individual action videos and typically requires trial and error on the validation set. In order to enhance the adaptability of two-stream ConvNets and improve their performance, an end-to-end trainable gated fusion method, namely the gating ConvNet, is proposed in this paper based on Mixture of Experts (MoE) theory. The gating ConvNet takes the combination of feature maps from the same layer of the spatial and temporal nets as input and adopts ReLU (Rectified Linear Unit) as the gating output activation function. To reduce the overfitting of the gating ConvNet caused by parameter redundancy, a new multi-task learning method is designed, which jointly learns the gating fusion weights for the two streams and learns the gating ConvNet for action classification. With our gated fusion method and multi-task learning approach, a high accuracy of 94.5% is achieved on the UCF101 dataset. |
Tasks | Action Classification, Multi-Task Learning, Temporal Action Localization |
Published | 2017-09-12 |
URL | http://arxiv.org/abs/1709.03655v2 |
http://arxiv.org/pdf/1709.03655v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-gating-convnet-for-two-stream-based |
Repo | |
Framework | |
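A hedged PyTorch sketch of the gated fusion idea: a small gating network takes same-layer feature maps from both streams and emits ReLU-activated per-stream weights, which then combine the two streams' class scores. Layer sizes are illustrative and the multi-task training scheme is omitted.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuses spatial- and temporal-stream logits with learned per-stream
    weights computed from their same-layer feature maps."""
    def __init__(self, in_channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * in_channels, 64, kernel_size=3, padding=1),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 2),
            nn.ReLU(),   # gating output activation named in the abstract
        )

    def forward(self, feat_spatial, feat_temporal, logits_spatial, logits_temporal):
        w = self.gate(torch.cat([feat_spatial, feat_temporal], dim=1))  # (B, 2)
        return w[:, :1] * logits_spatial + w[:, 1:] * logits_temporal
```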
Self-organized Hierarchical Softmax
Title | Self-organized Hierarchical Softmax |
Authors | Yikang Shen, Shawn Tan, Christopher Pal, Aaron Courville |
Abstract | We propose a new self-organizing hierarchical softmax formulation for neural-network-based language models over large vocabularies. Instead of using a predefined hierarchical structure, our approach is capable of learning word clusters with clear syntactical and semantic meaning during the language model training process. We provide experiments on standard benchmarks for language modeling and sentence compression tasks. We find that this approach is as fast as other efficient softmax approximations, while achieving comparable or even better performance relative to similar full softmax models. |
Tasks | Language Modelling, Sentence Compression |
Published | 2017-07-26 |
URL | http://arxiv.org/abs/1707.08588v1 |
http://arxiv.org/pdf/1707.08588v1.pdf | |
PWC | https://paperswithcode.com/paper/self-organized-hierarchical-softmax |
Repo | |
Framework | |
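A minimal two-level (cluster, word-within-cluster) softmax sketch in PyTorch is shown below. The paper learns the word-to-cluster assignment during training; here the assignment is assumed to be given, and equal cluster sizes are assumed for simplicity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelSoftmax(nn.Module):
    """Hierarchical softmax: first pick a word cluster, then a word inside it."""
    def __init__(self, hidden_size, n_clusters, cluster_size):
        super().__init__()
        self.cluster_logits = nn.Linear(hidden_size, n_clusters)
        self.word_logits = nn.Linear(hidden_size, n_clusters * cluster_size)
        self.n_clusters, self.cluster_size = n_clusters, cluster_size

    def log_prob(self, h, cluster_id, within_id):
        """log P(word | h) = log P(cluster | h) + log P(word | cluster, h)."""
        log_p_cluster = F.log_softmax(self.cluster_logits(h), dim=-1)
        word = self.word_logits(h).view(-1, self.n_clusters, self.cluster_size)
        log_p_word = F.log_softmax(word, dim=-1)
        idx = torch.arange(h.size(0))
        return log_p_cluster[idx, cluster_id] + log_p_word[idx, cluster_id, within_id]
```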
Clinical Intervention Prediction and Understanding using Deep Networks
Title | Clinical Intervention Prediction and Understanding using Deep Networks |
Authors | Harini Suresh, Nathan Hunt, Alistair Johnson, Leo Anthony Celi, Peter Szolovits, Marzyeh Ghassemi |
Abstract | Real-time prediction of clinical interventions remains a challenge within intensive care units (ICUs). This task is complicated by data sources that are noisy, sparse, and heterogeneous, and by outcomes that are imbalanced. In this paper, we integrate data from all available ICU sources (vitals, labs, notes, demographics) and focus on learning rich representations of this data to predict onset and weaning of multiple invasive interventions. In particular, we compare both long short-term memory networks (LSTM) and convolutional neural networks (CNN) for prediction of five intervention tasks: invasive ventilation, non-invasive ventilation, vasopressors, colloid boluses, and crystalloid boluses. Our predictions are done in a forward-facing manner to enable “real-time” performance, and predictions are made with a six-hour gap time to support clinically actionable planning. We achieve state-of-the-art results on our predictive tasks using deep architectures. We explore the use of feature occlusion to interpret LSTM models, and compare this to the interpretability gained from examining inputs that maximally activate CNN outputs. We show that our models are able to significantly outperform baselines in intervention prediction, and provide insight into model learning, which is crucial for the adoption of such models in practice. |
Tasks | |
Published | 2017-05-23 |
URL | http://arxiv.org/abs/1705.08498v1 |
http://arxiv.org/pdf/1705.08498v1.pdf | |
PWC | https://paperswithcode.com/paper/clinical-intervention-prediction-and |
Repo | |
Framework | |
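The feature-occlusion interpretation mentioned in the abstract can be sketched model-agnostically: occlude one input feature at a time and measure the shift in predicted intervention probability. The `predict` callable and the zero baseline below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def feature_occlusion_importance(predict, X, baseline=0.0):
    """predict: callable mapping (n_samples, n_timesteps, n_features) to
    predicted intervention probabilities. Returns one importance per feature."""
    base_pred = predict(X)
    importance = np.zeros(X.shape[-1])
    for j in range(X.shape[-1]):
        occluded = X.copy()
        occluded[..., j] = baseline   # replace the feature at every timestep
        importance[j] = np.abs(predict(occluded) - base_pred).mean()
    return importance
```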
A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning
Title | A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning |
Authors | Ning Liu, Zhe Li, Zhiyuan Xu, Jielong Xu, Sheng Lin, Qinru Qiu, Jian Tang, Yanzhi Wang |
Abstract | Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems. However, a complete cloud resource allocation framework exhibits high dimensionality in state and action spaces, which limits the usefulness of traditional RL techniques. In addition, high power consumption has become one of the critical concerns in the design and control of cloud computing systems, as it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while keeping performance degradation within an acceptable level. Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system, and a novel solution framework is necessary to address the even higher dimensionality in state and action spaces. In this paper, we propose a novel hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems. The proposed framework comprises a global tier for VM resource allocation to the servers and a local tier for distributed power management of local servers. The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state spaces, is adopted to solve the global-tier problem. Furthermore, an autoencoder and a novel weight-sharing structure are adopted to handle the high-dimensional state space and accelerate convergence. The local tier of distributed server power management comprises an LSTM-based workload predictor and a model-free RL-based power manager, operating in a distributed manner. |
Tasks | Decision Making |
Published | 2017-03-13 |
URL | http://arxiv.org/abs/1703.04221v2 |
http://arxiv.org/pdf/1703.04221v2.pdf | |
PWC | https://paperswithcode.com/paper/a-hierarchical-framework-of-cloud-resource |
Repo | |
Framework | |
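As a rough sketch of one local-tier component, below is a generic LSTM workload predictor in PyTorch (hidden size and output head are illustrative assumptions); the global-tier DRL allocator and the RL power manager are not shown.

```python
import torch
import torch.nn as nn

class WorkloadPredictor(nn.Module):
    """Maps a window of past per-server workload measurements to a prediction
    of the next value, which a power manager could then act upon."""
    def __init__(self, n_features=1, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):            # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]) # predict the next workload value
```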
Efficiently Summarising Event Sequences with Rich Interleaving Patterns
Title | Efficiently Summarising Event Sequences with Rich Interleaving Patterns |
Authors | Apratim Bhattacharyya, Jilles Vreeken |
Abstract | Discovering the key structure of a database is one of the main goals of data mining. In pattern set mining we do so by discovering a small set of patterns that together describe the data well. The richer the class of patterns we consider, and the more powerful our description language, the better we will be able to summarise the data. In this paper we propose a novel greedy MDL-based method for summarising sequential data using rich patterns that are allowed to interleave. Experiments show our method is orders of magnitude faster than the state of the art, results in better models, and discovers meaningful semantics in the form of patterns that identify multiple choices of values. |
Tasks | |
Published | 2017-01-27 |
URL | http://arxiv.org/abs/1701.08096v1 |
http://arxiv.org/pdf/1701.08096v1.pdf | |
PWC | https://paperswithcode.com/paper/efficiently-summarising-event-sequences-with |
Repo | |
Framework | |
MMCR4NLP: Multilingual Multiway Corpora Repository for Natural Language Processing
Title | MMCR4NLP: Multilingual Multiway Corpora Repository for Natural Language Processing |
Authors | Raj Dabre, Sadao Kurohashi |
Abstract | Multilinguality is gradually becoming ubiquitous in the sense that more and more researchers have successfully shown that using additional languages helps improve the results in many Natural Language Processing tasks. Multilingual Multiway Corpora (MMC) contain the same sentence in multiple languages. Such corpora have been primarily used for Multi-Source and Pivot Language Machine Translation, but are also useful for developing multilingual sequence taggers by transfer learning. While these corpora are available, they are not organized for multilingual experiments, and researchers need to write boilerplate code every time they want to use them. Moreover, because there is no official MMC collection, it is difficult to compare against existing approaches. We therefore present our work on creating a unified and systematically organized repository of MMC spanning a large number of languages. We also provide training, development and test splits for corpora where official splits are unavailable. We hope that this will help speed up the pace of multilingual NLP research and ensure that NLP researchers obtain more trustworthy results that can be compared easily. We indicate corpus sources, extraction procedures (if any) and relevant statistics, and we make our collection public for research purposes. |
Tasks | Machine Translation, Transfer Learning |
Published | 2017-10-03 |
URL | http://arxiv.org/abs/1710.01025v3 |
http://arxiv.org/pdf/1710.01025v3.pdf | |
PWC | https://paperswithcode.com/paper/mmcr4nlp-multilingual-multiway-corpora |
Repo | |
Framework | |
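A small sketch of producing aligned train/dev/test splits for a line-aligned multiway corpus, in the spirit of the splits the repository provides; the split sizes and the in-memory `corpus` dictionary are illustrative assumptions.

```python
import random

def split_multiway(corpus, dev_size=1000, test_size=1000, seed=0):
    """corpus: dict mapping language code to a list of sentences, with all
    lists line-aligned. Returns aligned train/dev/test splits per language."""
    n = len(next(iter(corpus.values())))
    assert all(len(v) == n for v in corpus.values()), "corpora must be aligned"
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cuts = {"test": idx[:test_size],
            "dev": idx[test_size:test_size + dev_size],
            "train": idx[test_size + dev_size:]}
    return {split: {lang: [sents[i] for i in ids]
                    for lang, sents in corpus.items()}
            for split, ids in cuts.items()}
```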
Regularization of Deep Neural Networks with Spectral Dropout
Title | Regularization of Deep Neural Networks with Spectral Dropout |
Authors | Salman Khan, Munawar Hayat, Fatih Porikli |
Abstract | The big breakthrough on the ImageNet challenge in 2012 was partially due to the ‘dropout’ technique used to avoid overfitting. Here, we introduce a new approach called ‘Spectral Dropout’ to improve the generalization ability of deep neural networks. We cast the proposed approach in the form of regular Convolutional Neural Network (CNN) weight layers using a decorrelation transform with fixed basis functions. Our spectral dropout method prevents overfitting by eliminating weak and ‘noisy’ Fourier domain coefficients of the neural network activations, leading to remarkably better results than the current regularization methods. Furthermore, the proposed approach is very efficient due to the fixed basis functions used for spectral transformation. In particular, compared to Dropout and Drop-Connect, our method significantly speeds up the network convergence rate during the training process (roughly 2x), with considerably higher neuron pruning rates (an increase of ~30%). We demonstrate that spectral dropout can also be used in conjunction with other regularization approaches, resulting in additional performance gains. |
Tasks | |
Published | 2017-11-23 |
URL | http://arxiv.org/abs/1711.08591v1 |
http://arxiv.org/pdf/1711.08591v1.pdf | |
PWC | https://paperswithcode.com/paper/regularization-of-deep-neural-networks-with |
Repo | |
Framework | |
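A crude NumPy illustration of the spectral dropout idea: transform activations to the frequency domain, keep only the strongest coefficients, and transform back. The paper realizes the transform as fixed-basis CNN weight layers; the FFT, keep ratio, and thresholding rule below are simplifying assumptions.

```python
import numpy as np

def spectral_dropout(activations, keep_ratio=0.7):
    """Zero out the weakest Fourier coefficients of the activations along the
    last axis and reconstruct the activations from what remains."""
    coeffs = np.fft.rfft(activations, axis=-1)
    magnitude = np.abs(coeffs)
    threshold = np.quantile(magnitude, 1.0 - keep_ratio, axis=-1, keepdims=True)
    mask = magnitude >= threshold   # keep only the strong coefficients
    return np.fft.irfft(coeffs * mask, n=activations.shape[-1], axis=-1)
```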
Boundary-sensitive Network for Portrait Segmentation
Title | Boundary-sensitive Network for Portrait Segmentation |
Authors | Xianzhi Du, Xiaolong Wang, Dawei Li, Jingwen Zhu, Serafettin Tasci, Cameron Upright, Stephen Walsh, Larry Davis |
Abstract | Compared to the general semantic segmentation problem, portrait segmentation has a higher precision requirement in the boundary area. However, this problem has not been well studied in previous works. In this paper, we propose a boundary-sensitive deep neural network (BSN) for portrait segmentation. BSN introduces three novel techniques. First, an individual boundary-sensitive kernel is proposed by dilating the contour line and assigning the boundary pixels multi-class labels. Second, a global boundary-sensitive kernel is employed as a position-sensitive prior to further constrain the overall shape of the segmentation map. Third, we train a boundary-sensitive attribute classifier jointly with the segmentation network to reinforce the network with semantic boundary shape information. We have evaluated BSN on the current largest public portrait segmentation dataset, i.e., the PFCN dataset, as well as on portrait images collected from three other popular image segmentation datasets: COCO, COCO-Stuff, and PASCAL VOC. Our method achieves superior quantitative and qualitative performance over the state of the art on all the datasets, especially in the boundary area. |
Tasks | Semantic Segmentation |
Published | 2017-12-22 |
URL | http://arxiv.org/abs/1712.08675v2 |
http://arxiv.org/pdf/1712.08675v2.pdf | |
PWC | https://paperswithcode.com/paper/boundary-sensitive-network-for-portrait |
Repo | |
Framework | |
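A simplified sketch of the individual boundary-sensitive kernel idea: dilate the contour line of a binary portrait mask into a boundary band and give it its own class. The paper assigns boundary pixels soft multi-class labels; the hard three-class map and band width below are simplifications.

```python
import numpy as np
from scipy import ndimage

def boundary_sensitive_labels(mask, band=5):
    """mask: binary portrait mask (H, W). Returns a 3-class label map:
    0 = background, 1 = foreground, 2 = boundary band around the contour."""
    mask = mask.astype(bool)
    eroded = ndimage.binary_erosion(mask, iterations=1)
    contour = mask & ~eroded                         # one-pixel contour line
    boundary = ndimage.binary_dilation(contour, iterations=band)
    labels = mask.astype(np.int64)                   # 0 / 1
    labels[boundary] = 2                             # boundary overrides both
    return labels
```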