July 27, 2019

2784 words 14 mins read

Paper Group ANR 709

Quantum Privacy-Preserving Perceptron. Newton-type Methods for Inference in Higher-Order Markov Random Fields. Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular Depth Estimation. Sound Event Detection in Synthetic Audio: Analysis of the DCASE 2016 Task Results. Recognizing Textures with Mobile Cameras for Pedestrian Safety Appli …

Quantum Privacy-Preserving Perceptron

Title Quantum Privacy-Preserving Perceptron
Authors Shenggang Ying, Mingsheng Ying, Yuan Feng
Abstract With the extensive application of machine learning, the issue of private or sensitive data in training examples is becoming increasingly serious: during the training process, personal information or habits may be disclosed to unexpected persons or organisations, which can cause serious privacy problems or even financial loss. In this paper, we present a quantum privacy-preserving algorithm for perceptron learning. There are two main steps to protect the original training examples. First, when checking the current classifier, quantum tests are employed to detect possible dishonesty of the data user. Second, when updating the current classifier, private random noise is used to protect the original data. The advantages of our algorithm are: (1) it protects training examples better than the known classical methods; (2) it requires no quantum database and is thus easy to implement.
Tasks
Published 2017-07-31
URL http://arxiv.org/abs/1707.09893v1
PDF http://arxiv.org/pdf/1707.09893v1.pdf
PWC https://paperswithcode.com/paper/quantum-privacy-preserving-perceptron
Repo
Framework
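
The quantum tests described above have no classical counterpart, but the second step, masking each training example with private random noise before the classifier update, can be sketched classically. The snippet below is a minimal illustration of that idea only; the function name and the Gaussian noise scale are assumptions, not details from the paper.

```python
import numpy as np

# Illustrative sketch only: the paper's quantum tests for detecting a dishonest
# data user are not modeled here. This shows just the second step from the
# abstract: perturbing each training example with private random noise before
# it is used to update the perceptron. The noise scale is an assumed parameter.

def noisy_perceptron_update(w, x, y, noise_scale=0.1, rng=None):
    """One perceptron step on (x, y); the update only sees a noise-masked example."""
    rng = rng or np.random.default_rng()
    if y * np.dot(w, x) <= 0:                      # current classifier gets x wrong
        x_private = x + rng.normal(scale=noise_scale, size=x.shape)
        w = w + y * x_private                      # update uses the noisy example
    return w

# toy usage on linearly separable data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
labels = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]))
w = np.zeros(5)
for xi, yi in zip(X, labels):
    w = noisy_perceptron_update(w, xi, yi, rng=rng)
```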

Newton-type Methods for Inference in Higher-Order Markov Random Fields

Title Newton-type Methods for Inference in Higher-Order Markov Random Fields
Authors Hariprasad Kannan, Nikos Komodakis, Nikos Paragios
Abstract Linear programming relaxations are central to MAP inference in discrete Markov Random Fields. The ability to properly solve the Lagrangian dual is a critical component of such methods. In this paper, we study the benefit of using Newton-type methods to solve the Lagrangian dual of a smooth version of the problem. We investigate their ability to achieve superior convergence behavior and to better handle the ill-conditioned nature of the formulation, as compared to first order methods. We show that it is indeed possible to efficiently apply a trust region Newton method for a broad range of MAP inference problems. In this paper we propose a provably convergent and efficient framework that includes (i) excellent compromise between computational complexity and precision concerning the Hessian matrix construction, (ii) a damping strategy that aids efficient optimization, (iii) a truncation strategy coupled with a generic pre-conditioner for Conjugate Gradients, (iv) efficient sum-product computation for sparse clique potentials. Results for higher-order Markov Random Fields demonstrate the potential of this approach.
Tasks
Published 2017-09-05
URL http://arxiv.org/abs/1709.01237v1
PDF http://arxiv.org/pdf/1709.01237v1.pdf
PWC https://paperswithcode.com/paper/newton-type-methods-for-inference-in-higher
Repo
Framework
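
As a rough illustration of the ingredients listed in the abstract (a damped Newton step whose linear system is solved by truncated Conjugate Gradients), here is a generic sketch. The damping value, tolerances, and the stand-in quadratic are assumptions; the paper's Hessian construction and pre-conditioning for MAP inference are not reproduced here.

```python
import numpy as np

# Generic sketch, not the paper's algorithm: one damped Newton step whose linear
# system (H + lam*I) d = -g is solved with (truncated) conjugate gradients,
# mirroring ingredients named in the abstract (damping, truncation, CG). The
# toy quadratic at the bottom stands in for a smoothed Lagrangian dual.

def conjugate_gradients(A_mv, b, tol=1e-6, max_iter=100):
    """Solve A x = b given only the matrix-vector product A_mv."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Ap = A_mv(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def damped_newton_step(grad, hess_mv, lam=1e-2):
    """Return d solving (H + lam*I) d = -grad; lam is a damping parameter."""
    return conjugate_gradients(lambda v: hess_mv(v) + lam * v, -grad)

# toy usage on an ill-conditioned quadratic
H = np.diag([1.0, 100.0])
g = np.array([1.0, 1.0])
step = damped_newton_step(g, lambda v: H @ v)
```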

Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular Depth Estimation

Title Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular Depth Estimation
Authors Dan Xu, Elisa Ricci, Wanli Ouyang, Xiaogang Wang, Nicu Sebe
Abstract This paper addresses the problem of depth estimation from a single still image. Inspired by recent works on multi-scale convolutional neural networks (CNN), we propose a deep model which fuses complementary information derived from multiple CNN side outputs. Different from previous methods, the integration is obtained by means of continuous Conditional Random Fields (CRFs). In particular, we propose two different variations, one based on a cascade of multiple CRFs, the other on a unified graphical model. By designing a novel CNN implementation of mean-field updates for continuous CRFs, we show that both proposed models can be regarded as sequential deep networks and that training can be performed end-to-end. Through extensive experimental evaluation we demonstrate the effectiveness of the proposed approach and establish new state of the art results on publicly available datasets.
Tasks Depth Estimation, Monocular Depth Estimation
Published 2017-04-07
URL http://arxiv.org/abs/1704.02157v1
PDF http://arxiv.org/pdf/1704.02157v1.pdf
PWC https://paperswithcode.com/paper/multi-scale-continuous-crfs-as-sequential
Repo
Framework
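
For intuition about what a mean-field update for a continuous (Gaussian) CRF over a depth map looks like, the sketch below iterates the closed-form update for a unary-plus-smoothness energy. The fixed 4-neighbour weight is an assumption for illustration; in the paper these updates are implemented as CNN layers that fuse multi-scale side outputs with learned potentials.

```python
import numpy as np

# Minimal sketch of a mean-field style update for a continuous (Gaussian) CRF
# over a depth map. The fixed 4-neighbour smoothing weight is an assumption;
# the paper learns the potentials and trains the whole model end-to-end.

def meanfield_depth(unary, w=0.5, n_iters=10):
    """Iterate d_i = (z_i + w * sum_j d_j) / (1 + 4w) over the 4-neighbourhood."""
    d = unary.copy()
    for _ in range(n_iters):
        pad = np.pad(d, 1, mode='edge')            # replicate borders
        neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1]    # up + down
                 + pad[1:-1, :-2] + pad[1:-1, 2:]) # left + right
        d = (unary + w * neigh) / (1.0 + 4.0 * w)
    return d

# toy usage: smooth a noisy single-scale depth prediction
noisy = np.random.default_rng(0).normal(loc=2.0, scale=0.2, size=(32, 32))
refined = meanfield_depth(noisy)
```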

Sound Event Detection in Synthetic Audio: Analysis of the DCASE 2016 Task Results

Title Sound Event Detection in Synthetic Audio: Analysis of the DCASE 2016 Task Results
Authors Grégoire Lafay, Emmanouil Benetos, Mathieu Lagrange
Abstract As part of the 2016 public evaluation challenge on Detection and Classification of Acoustic Scenes and Events (DCASE 2016), the second task focused on evaluating sound event detection systems using synthetic mixtures of office sounds. This task, which follows the 'Event Detection - Office Synthetic' task of DCASE 2013, studies the behaviour of tested algorithms when facing controlled levels of audio complexity with respect to background noise and polyphony/density, with the added benefit of a very accurate ground truth. This paper presents the task formulation, evaluation metrics, submitted systems, and provides a statistical analysis of the results achieved, with respect to various aspects of the evaluation dataset.
Tasks Sound Event Detection
Published 2017-11-15
URL http://arxiv.org/abs/1711.05551v1
PDF http://arxiv.org/pdf/1711.05551v1.pdf
PWC https://paperswithcode.com/paper/sound-event-detection-in-synthetic-audio
Repo
Framework
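
The evaluation relies on segment-based metrics; as a simplified illustration (not the official evaluation toolkit), the function below computes a segment-based F-score from binary activity matrices. The input format and function name are assumptions.

```python
import numpy as np

# Simplified illustration of a segment-based F-score for sound event detection
# (the DCASE evaluation also reports a segment-based error rate). Inputs are
# assumed to be binary activity matrices of shape (n_segments, n_classes).

def segment_f_score(ref, est):
    tp = np.logical_and(ref == 1, est == 1).sum()
    fp = np.logical_and(ref == 0, est == 1).sum()
    fn = np.logical_and(ref == 1, est == 0).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# toy usage: 3 one-second segments, 2 event classes
ref = np.array([[1, 0], [1, 1], [0, 0]])
est = np.array([[1, 0], [0, 1], [0, 1]])
f1 = segment_f_score(ref, est)
```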

Recognizing Textures with Mobile Cameras for Pedestrian Safety Applications

Title Recognizing Textures with Mobile Cameras for Pedestrian Safety Applications
Authors Shubham Jain, Marco Gruteser
Abstract As smartphone-rooted distractions become commonplace, the lack of compelling safety measures has led to a rise in the number of injuries to distracted walkers. Various solutions address this problem by sensing a pedestrian's walking environment. Existing camera-based approaches have been largely limited to obstacle detection and other forms of object detection. Instead, we present TerraFirma, an approach that performs material recognition on the pedestrian's walking surface. We explore, first, how well commercial off-the-shelf smartphone cameras can learn texture to distinguish among paving materials in uncontrolled outdoor urban settings. Second, we aim to identify when a distracted user is about to enter the street, which can be used to support safety functions such as warning the user to be cautious. To this end, we gather a unique dataset of street/sidewalk imagery from a pedestrian's perspective, spanning major cities like New York, Paris, and London. We demonstrate that modern phone cameras can be enabled to distinguish materials of walking surfaces in urban areas with more than 90% accuracy, and accurately identify when pedestrians transition from sidewalk to street.
Tasks Material Recognition, Object Detection
Published 2017-11-01
URL http://arxiv.org/abs/1711.00558v1
PDF http://arxiv.org/pdf/1711.00558v1.pdf
PWC https://paperswithcode.com/paper/recognizing-textures-with-mobile-cameras-for
Repo
Framework

Learning Social Image Embedding with Deep Multimodal Attention Networks

Title Learning Social Image Embedding with Deep Multimodal Attention Networks
Authors Feiran Huang, Xiaoming Zhang, Zhoujun Li, Tao Mei, Yueying He, Zhonghua Zhao
Abstract Learning social media data embedding by deep models has attracted extensive research interest and spurred many applications, such as link prediction, classification, and cross-modal search. However, for social images which contain both link information and multimodal contents (e.g., text description and visual content), simply employing the embedding learnt from network structure or data content results in sub-optimal social image representation. In this paper, we propose a novel social image embedding approach called Deep Multimodal Attention Networks (DMAN), which employs a deep model to jointly embed multimodal contents and link information. Specifically, to effectively capture the correlations between multimodal contents, we propose a multimodal attention network to encode the fine-grained relations between image regions and textual words. To leverage the network structure for embedding learning, a novel Siamese-Triplet neural network is proposed to model the links among images. With the joint deep model, the learnt embedding can capture both the multimodal contents and the nonlinear network information. Extensive experiments are conducted to investigate the effectiveness of our approach in the applications of multi-label classification and cross-modal search. Compared to state-of-the-art image embeddings, our proposed DMAN achieves significant improvement in the tasks of multi-label classification and cross-modal search.
Tasks Link Prediction, Multi-Label Classification
Published 2017-10-18
URL http://arxiv.org/abs/1710.06582v1
PDF http://arxiv.org/pdf/1710.06582v1.pdf
PWC https://paperswithcode.com/paper/learning-social-image-embedding-with-deep
Repo
Framework
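
The Siamese-Triplet component models links by pulling linked images closer in the embedding space than unlinked ones. A minimal numpy version of the underlying triplet hinge objective is sketched below; the margin value and function signature are assumptions, and the deep multimodal attention network that produces the embeddings is not modeled.

```python
import numpy as np

# Sketch of the triplet hinge objective used to model link structure: a linked
# (anchor, positive) pair should embed closer than an unlinked (anchor,
# negative) pair by a margin. Embeddings here are plain numpy vectors.

def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```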

Data-driven Feature Sampling for Deep Hyperspectral Classification and Segmentation

Title Data-driven Feature Sampling for Deep Hyperspectral Classification and Segmentation
Authors William M. Severa, Jerilyn A. Timlin, Suraj Kholwadwala, Conrad D. James, James B. Aimone
Abstract The high dimensionality of hyperspectral imaging poses unique challenges in scope, size and processing requirements. Motivated by the potential for an in-the-field cell sorting detector, we examine a Synechocystis sp. PCC 6803 dataset wherein cells are grown alternatively in nitrogen-rich or nitrogen-deplete cultures. We use deep learning techniques to both successfully classify cells and generate a mask segmenting the cells/condition from the background. Further, we use the classification accuracy to guide a data-driven, iterative feature selection method, allowing the design of neural networks requiring 90% fewer input features with little accuracy degradation.
Tasks Feature Selection
Published 2017-10-26
URL http://arxiv.org/abs/1710.09934v1
PDF http://arxiv.org/pdf/1710.09934v1.pdf
PWC https://paperswithcode.com/paper/data-driven-feature-sampling-for-deep
Repo
Framework
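
The accuracy-guided, iterative feature selection can be pictured as a greedy elimination loop over spectral bands. The sketch below is an assumed formulation: `evaluate` stands in for training the network on a band subset and returning validation accuracy, and the 90% reduction target appears as a `target_fraction` parameter.

```python
# Assumed formulation of the accuracy-guided band selection: greedily drop the
# spectral band whose removal hurts accuracy the least until only a target
# fraction remains. `evaluate` stands in for "train the classifier on these
# bands and return validation accuracy".

def greedy_band_elimination(all_bands, evaluate, target_fraction=0.1):
    selected = list(all_bands)
    target = max(1, int(len(selected) * target_fraction))
    while len(selected) > target:
        scores = {b: evaluate([s for s in selected if s != b]) for b in selected}
        least_useful = max(scores, key=scores.get)   # its removal costs the least
        selected.remove(least_useful)
    return selected

# toy usage: pretend only bands 2, 5 and 7 carry signal
keep = greedy_band_elimination(
    range(20), evaluate=lambda bands: sum(b in (2, 5, 7) for b in bands))
```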

Stable Recovery Of Sparse Vectors From Random Sinusoidal Feature Maps

Title Stable Recovery Of Sparse Vectors From Random Sinusoidal Feature Maps
Authors Mohammadreza Soltani, Chinmay Hegde
Abstract Random sinusoidal features are a popular approach for speeding up kernel-based inference in large datasets. Prior to the inference stage, the approach suggests performing dimensionality reduction by first multiplying each data vector by a random Gaussian matrix, and then computing an element-wise sinusoid. Theoretical analysis shows that collecting a sufficient number of such features can be reliably used for subsequent inference in kernel classification and regression. In this work, we demonstrate that with a mild increase in the dimension of the embedding, it is also possible to reconstruct the data vector from such random sinusoidal features, provided that the underlying data is sparse enough. In particular, we propose a numerically stable algorithm for reconstructing the data vector given the nonlinear features, and analyze its sample complexity. Our algorithm can be extended to other types of structured inverse problems, such as demixing a pair of sparse (but incoherent) vectors. We support the efficacy of our approach via numerical experiments.
Tasks Dimensionality Reduction
Published 2017-01-23
URL http://arxiv.org/abs/1701.06607v2
PDF http://arxiv.org/pdf/1701.06607v2.pdf
PWC https://paperswithcode.com/paper/stable-recovery-of-sparse-vectors-from-random
Repo
Framework
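
The forward model in the abstract, multiplying the data vector by a random Gaussian matrix and applying an element-wise sinusoid, is easy to write down; the paper's contribution is the stable recovery of a sparse vector from these features, which is not sketched here. Function and variable names below are assumptions.

```python
import numpy as np

# Forward model from the abstract: multiply the data vector by a random
# Gaussian matrix, then apply an element-wise sinusoid. The recovery algorithm
# for reconstructing a sparse x from z is not sketched; sizes are illustrative.

def random_sinusoidal_features(x, m, rng=None):
    rng = rng or np.random.default_rng()
    A = rng.normal(size=(m, x.shape[0]))   # random Gaussian projection
    return np.sin(A @ x), A

# toy setup: a sparse vector and its nonlinear features
rng = np.random.default_rng(0)
x = np.zeros(100)
x[[3, 17, 42]] = [1.0, -0.5, 2.0]
z, A = random_sinusoidal_features(x, m=300, rng=rng)
```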

Material Recognition CNNs and Hierarchical Planning for Biped Robot Locomotion on Slippery Terrain

Title Material Recognition CNNs and Hierarchical Planning for Biped Robot Locomotion on Slippery Terrain
Authors Martim Brandao, Yukitoshi Minami Shiguematsu, Kenji Hashimoto, Atsuo Takanishi
Abstract In this paper we tackle the problem of visually predicting surface friction for environments with diverse surfaces, and integrating this knowledge into biped robot locomotion planning. The problem is essential for autonomous robot locomotion since diverse surfaces with varying friction abound in the real world, from wood to ceramic tiles, grass or ice, which may cause difficulties or huge energy costs for robot locomotion if not considered. We propose to estimate friction and its uncertainty from visual estimation of material classes using convolutional neural networks, together with probability distribution functions of friction associated with each material. We then robustly integrate the friction predictions into a hierarchical (footstep and full-body) planning method using chance constraints, and optimize the same trajectory costs at both levels of the planning method for consistency. Our solution achieves fully autonomous perception and locomotion on slippery terrain, which considers not only friction and its uncertainty, but also collision, stability and trajectory cost. We show promising friction prediction results in real pictures of outdoor scenarios, and planning experiments on a real robot facing surfaces with different friction.
Tasks Material Recognition
Published 2017-06-27
URL http://arxiv.org/abs/1706.08685v1
PDF http://arxiv.org/pdf/1706.08685v1.pdf
PWC https://paperswithcode.com/paper/material-recognition-cnns-and-hierarchical
Repo
Framework
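
One way to picture the friction pipeline is: visual class probabilities per surface patch, an assumed friction distribution per material, and a chance constraint on the friction coefficient. The sketch below follows that reading; the material list, distribution parameters, and the Monte Carlo check are all illustrative assumptions, not the paper's values or optimization method.

```python
import numpy as np

# Illustrative reading of the friction pipeline: the visual classifier gives
# material probabilities for a surface patch, each material has an assumed
# friction distribution, and a footstep is accepted only if a chance constraint
# on the friction coefficient holds. All numbers below are made up.

MATERIAL_FRICTION = {           # (mean, std) of the friction coefficient, assumed
    "wood":    (0.45, 0.05),
    "ceramic": (0.35, 0.08),
    "grass":   (0.40, 0.10),
    "ice":     (0.10, 0.03),
}

def friction_chance_constraint_ok(class_probs, mu_required,
                                  confidence=0.95, n_samples=10000, rng=None):
    rng = rng or np.random.default_rng()
    materials = list(MATERIAL_FRICTION)
    idx = rng.choice(len(materials), size=n_samples,
                     p=[class_probs[m] for m in materials])
    mus = np.array([MATERIAL_FRICTION[m][0] for m in materials])[idx]
    sds = np.array([MATERIAL_FRICTION[m][1] for m in materials])[idx]
    friction = rng.normal(mus, sds)
    return np.mean(friction >= mu_required) >= confidence

# usage: the classifier is fairly sure the patch is wood
ok = friction_chance_constraint_ok(
    {"wood": 0.7, "ceramic": 0.2, "grass": 0.08, "ice": 0.02}, mu_required=0.3)
```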

A Challenge Set Approach to Evaluating Machine Translation

Title A Challenge Set Approach to Evaluating Machine Translation
Authors Pierre Isabelle, Colin Cherry, George Foster
Abstract Neural machine translation represents an exciting leap forward in translation quality. But what longstanding weaknesses does it resolve, and which remain? We address these questions with a challenge set approach to translation evaluation and error analysis. A challenge set consists of a small set of sentences, each hand-designed to probe a system’s capacity to bridge a particular structural divergence between languages. To exemplify this approach, we present an English-French challenge set, and use it to analyze phrase-based and neural systems. The resulting analysis provides not only a more fine-grained picture of the strengths of neural systems, but also insight into which linguistic phenomena remain out of reach.
Tasks Machine Translation
Published 2017-04-24
URL http://arxiv.org/abs/1704.07431v5
PDF http://arxiv.org/pdf/1704.07431v5.pdf
PWC https://paperswithcode.com/paper/a-challenge-set-approach-to-evaluating
Repo
Framework

Lat-Net: Compressing Lattice Boltzmann Flow Simulations using Deep Neural Networks

Title Lat-Net: Compressing Lattice Boltzmann Flow Simulations using Deep Neural Networks
Authors Oliver Hennigh
Abstract Computational Fluid Dynamics (CFD) is a hugely important subject with applications in almost every engineering field, however, fluid simulations are extremely computationally and memory demanding. Towards this end, we present Lat-Net, a method for compressing both the computation time and memory usage of Lattice Boltzmann flow simulations using deep neural networks. Lat-Net employs convolutional autoencoders and residual connections in a fully differentiable scheme to compress the state size of a simulation and learn the dynamics on this compressed form. The result is a computationally and memory efficient neural network that can be iterated and queried to reproduce a fluid simulation. We show that once Lat-Net is trained, it can generalize to large grid sizes and complex geometries while maintaining accuracy. We also show that Lat-Net is a general method for compressing other Lattice Boltzmann based simulations such as Electromagnetism.
Tasks
Published 2017-05-25
URL http://arxiv.org/abs/1705.09036v1
PDF http://arxiv.org/pdf/1705.09036v1.pdf
PWC https://paperswithcode.com/paper/lat-net-compressing-lattice-boltzmann-flow
Repo
Framework
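
The compressed-simulation loop, encode once, iterate the learned dynamics in the compressed space, and decode when the full field is queried, can be outlined as below. The encoder, dynamics and decoder callables are stand-ins for the paper's convolutional autoencoder and residual network; the toy lambdas are purely illustrative.

```python
import numpy as np

# Outline of the compressed-simulation loop: encode the full lattice state once,
# step the learned dynamics in the compressed space, and decode only when the
# full flow field is queried. encoder/dynamics/decoder are stand-ins.

def rollout(encoder, dynamics, decoder, initial_state, n_steps, query_every=10):
    z = encoder(initial_state)                 # compress the state once
    outputs = []
    for t in range(n_steps):
        z = dynamics(z)                        # advance in compressed space
        if t % query_every == 0:
            outputs.append(decoder(z))         # decode a full field on demand
    return outputs

# toy usage with trivial stand-ins (a D2Q9-like lattice of shape H x W x 9)
fields = rollout(encoder=lambda s: s.mean(axis=-1),
                 dynamics=lambda z: 0.99 * z,
                 decoder=lambda z: np.repeat(z[..., None], 9, axis=-1),
                 initial_state=np.ones((64, 64, 9)), n_steps=50)
```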

The Active Atlas: Combining 3D Anatomical Models with Texture Detectors

Title The Active Atlas: Combining 3D Anatomical Models with Texture Detectors
Authors Yuncong Chen, Lauren McElvain, Alex Tolpygo, Daniel Ferrante, Harvey Karten, Partha Mitra, David Kleinfeld, Yoav Freund
Abstract While modern imaging technologies such as fMRI have opened exciting new possibilities for studying the brain in vivo, histological sections remain the best way to study the anatomy of the brain at the level of single neurons. The histological atlas has changed little since 1909, and localizing brain regions is still a labor-intensive process performed only by experienced neuroanatomists. Existing digital atlases such as the Allen Brain Atlas are limited to low-resolution images which cannot identify the detailed structure of the neurons. We have developed a digital atlas methodology that combines information about the 3D organization of the brain and the detailed texture of neurons in different structures. Using this methodology, we developed an atlas for the mouse brainstem and midbrain, two regions for which there are currently no good atlases. Our atlas is “active” in that it can be used to automatically align a histological stack to the atlas, thus reducing the work of the neuroanatomist.
Tasks
Published 2017-02-28
URL http://arxiv.org/abs/1702.08606v3
PDF http://arxiv.org/pdf/1702.08606v3.pdf
PWC https://paperswithcode.com/paper/the-active-atlas-combining-3d-anatomical
Repo
Framework

Show, Attend and Interact: Perceivable Human-Robot Social Interaction through Neural Attention Q-Network

Title Show, Attend and Interact: Perceivable Human-Robot Social Interaction through Neural Attention Q-Network
Authors Ahmed Hussain Qureshi, Yutaka Nakamura, Yuichiro Yoshikawa, Hiroshi Ishiguro
Abstract For safe, natural and effective human-robot social interaction, it is essential to develop a system that allows a robot to demonstrate perceivable responsive behaviors to complex human behaviors. We introduce the Multimodal Deep Attention Recurrent Q-Network (MDARQN), with which the robot exhibits human-like social interaction skills after 14 days of interacting with people in an uncontrolled real-world setting. On each of the 14 days, the system gathered the robot's interaction experiences with people through a trial-and-error method and then trained the MDARQN on these experiences using an end-to-end reinforcement learning approach. The results of interaction-based learning indicate that the robot has learned to respond to complex human behaviors in a perceivable and socially acceptable manner.
Tasks Deep Attention
Published 2017-02-28
URL http://arxiv.org/abs/1702.08626v1
PDF http://arxiv.org/pdf/1702.08626v1.pdf
PWC https://paperswithcode.com/paper/show-attend-and-interact-perceivable-human
Repo
Framework
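
The training signal is standard Q-learning on gathered interaction experiences. Purely for illustration, the sketch below performs the temporal-difference update with a linear Q-function over feature vectors; the paper's MDARQN is a deep recurrent attention network over camera images, and all names and hyperparameters here are assumptions.

```python
import numpy as np

# Minimal Q-learning update over gathered interaction experiences, purely for
# illustration: a linear Q-function over feature vectors replaces the paper's
# deep recurrent attention network over camera images.

def q_update(W, s, a, r, s_next, gamma=0.99, lr=0.01):
    """W: (n_actions, n_features) weights of a linear Q-function Q(s, a) = W[a] @ s."""
    target = r + gamma * np.max(W @ s_next)    # bootstrapped one-step target
    td_error = target - W[a] @ s
    W[a] += lr * td_error * s                  # gradient step on the squared TD error
    return W

# toy usage: replay one (state, action, reward, next_state) experience
W = np.zeros((4, 8))                           # 4 robot actions, 8 state features
s, s_next = np.random.default_rng(0).normal(size=(2, 8))
W = q_update(W, s, a=2, r=1.0, s_next=s_next)
```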

Geometry Guided Adversarial Facial Expression Synthesis

Title Geometry Guided Adversarial Facial Expression Synthesis
Authors Lingxiao Song, Zhihe Lu, Ran He, Zhenan Sun, Tieniu Tan
Abstract Facial expression synthesis has drawn much attention in the fields of computer graphics and pattern recognition. It has been widely used in face animation and recognition. However, it is still challenging due to the high-level semantic presence of large and non-linear face geometry variations. This paper proposes a Geometry-Guided Generative Adversarial Network (G2-GAN) for photo-realistic and identity-preserving facial expression synthesis. We employ facial geometry (fiducial points) as a controllable condition to guide facial texture synthesis with a specific expression. A pair of generative adversarial subnetworks are jointly trained towards opposite tasks: expression removal and expression synthesis. The paired networks form a mapping cycle between the neutral expression and arbitrary expressions, which also facilitates other applications such as face transfer and expression-invariant face recognition. Experimental results show that our method can generate compelling perceptual results on various facial expression synthesis databases. An expression-invariant face recognition experiment is also performed to further show the advantages of our proposed method.
Tasks Face Recognition, Face Transfer, Texture Synthesis
Published 2017-12-10
URL http://arxiv.org/abs/1712.03474v1
PDF http://arxiv.org/pdf/1712.03474v1.pdf
PWC https://paperswithcode.com/paper/geometry-guided-adversarial-facial-expression
Repo
Framework
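
The two subnetworks form a mapping cycle: one removes the expression, the other re-synthesizes it conditioned on facial geometry. A sketch of just the cycle-consistency term is below; remove_expr and synth_expr are hypothetical stand-ins for the paired generators, and the adversarial and identity losses are omitted.

```python
import numpy as np

# Sketch of the mapping cycle only: one generator removes the expression, the
# other re-synthesizes it conditioned on facial geometry (fiducial points).
# remove_expr and synth_expr are hypothetical stand-ins for the paired
# adversarial subnetworks; the adversarial and identity losses are omitted.

def cycle_consistency_loss(face, landmarks, remove_expr, synth_expr):
    neutral = remove_expr(face)                      # expression removal
    reconstructed = synth_expr(neutral, landmarks)   # expression synthesis
    return np.mean(np.abs(face - reconstructed))     # L1 cycle penalty
```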

Global Normalization of Convolutional Neural Networks for Joint Entity and Relation Classification

Title Global Normalization of Convolutional Neural Networks for Joint Entity and Relation Classification
Authors Heike Adel, Hinrich Schütze
Abstract We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset.
Tasks Relation Classification, Relation Extraction
Published 2017-07-24
URL http://arxiv.org/abs/1707.07719v3
PDF http://arxiv.org/pdf/1707.07719v3.pdf
PWC https://paperswithcode.com/paper/global-normalization-of-convolutional-neural
Repo
Framework
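
The contrast between a locally normalized softmax and a globally normalized linear-chain output layer comes down to where the normalization happens: per position versus over entire label sequences via the partition function. The sketch below computes a sequence log-probability with the forward algorithm; emission and transition scores are toy values, and the CNN that produces them in the paper is not modeled.

```python
import numpy as np

# Sketch of global normalization for a linear-chain output layer: a whole label
# sequence is scored and normalized by the partition function (forward
# algorithm), instead of applying a per-position softmax.

def log_partition(emissions, transitions):
    """emissions: (T, K) per-position label scores; transitions: (K, K)."""
    alpha = emissions[0]
    for t in range(1, len(emissions)):
        scores = alpha[:, None] + transitions + emissions[t][None, :]
        m = scores.max(axis=0)                             # log-sum-exp over the
        alpha = m + np.log(np.exp(scores - m).sum(axis=0)) # previous label
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

def sequence_log_prob(labels, emissions, transitions):
    score = emissions[0, labels[0]]
    for t in range(1, len(labels)):
        score += transitions[labels[t - 1], labels[t]] + emissions[t, labels[t]]
    return score - log_partition(emissions, transitions)

# toy usage: 5 tokens, 3 entity/relation labels
rng = np.random.default_rng(0)
em = rng.normal(size=(5, 3))
tr = rng.normal(size=(3, 3))
lp = sequence_log_prob([0, 1, 1, 2, 0], em, tr)
```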