July 26, 2019

3097 words 15 mins read

Paper Group ANR 798


Identification of Strong Edges in AMP Chain Graphs. What your Facebook Profile Picture Reveals about your Personality. Adversarial Image Alignment and Interpolation. Continuous-Time Visual-Inertial Odometry for Event Cameras. End-to-End Relation Extraction using Markov Logic Networks. A Kinematic Chain Space for Monocular Motion Capture. Effective …

Identification of Strong Edges in AMP Chain Graphs

Title Identification of Strong Edges in AMP Chain Graphs
Authors Jose M. Peña
Abstract The essential graph is a distinguished member of a Markov equivalence class of AMP chain graphs. However, the directed edges in the essential graph are not necessarily strong or invariant, i.e. they may not be shared by every member of the equivalence class. Likewise for the undirected edges. In this paper, we develop a procedure for identifying which edges in an essential graph are strong. We also show how this makes it possible to bound some causal effects when the true chain graph is unknown.
Tasks
Published 2017-11-23
URL http://arxiv.org/abs/1711.09990v3
PDF http://arxiv.org/pdf/1711.09990v3.pdf
PWC https://paperswithcode.com/paper/identification-of-strong-edges-in-amp-chain
Repo
Framework

What your Facebook Profile Picture Reveals about your Personality

Title What your Facebook Profile Picture Reveals about your Personality
Authors Cristina Segalin, Fabio Celli, Luca Polonio, Michal Kosinski, David Stillwell, Nicu Sebe, Marco Cristani, Bruno Lepri
Abstract People spend considerable effort managing the impressions they give others. Social psychologists have shown that people manage these impressions differently depending upon their personality. Facebook and other social media provide a new forum for this fundamental process; hence, understanding people’s behaviour on social media could provide interesting insights into their personality. In this paper we investigate automatic personality recognition from Facebook profile pictures. We analyze the effectiveness of four families of visual features and discuss some human-interpretable patterns that explain the personality traits of the individuals. For example, extroverts and agreeable individuals tend to have warm-colored pictures and to exhibit many faces in their portraits, mirroring their inclination to socialize, while neurotic individuals show a prevalence of pictures of indoor places. We then propose a classification approach to automatically recognize personality traits from these visual features. Finally, we compare the performance of our classification approach with that of human raters and show that computer-based classifications are significantly more accurate than averaged human-based classifications for Extraversion and Neuroticism.
Tasks
Published 2017-08-03
URL http://arxiv.org/abs/1708.01292v2
PDF http://arxiv.org/pdf/1708.01292v2.pdf
PWC https://paperswithcode.com/paper/what-your-facebook-profile-picture-reveals
Repo
Framework
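
As a rough illustration of the final classification step, here is a minimal, hedged sketch: a standard SVM trained on pre-extracted visual feature vectors to predict a high/low split for one trait. The features and labels are random stand-ins; the actual feature families, trait scoring, and classifier choices are described in the paper, not reproduced here.

```python
# Minimal sketch: predict a binary personality trait (e.g. high/low Extraversion)
# from pre-extracted visual features of profile pictures. Feature extraction
# (color statistics, face counts, scene descriptors) is assumed already done.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # hypothetical visual feature vectors
y = rng.integers(0, 2, size=200)    # hypothetical high/low trait labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("Accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```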

Adversarial Image Alignment and Interpolation

Title Adversarial Image Alignment and Interpolation
Authors Viren Jain
Abstract Volumetric (3d) images are acquired for many scientific and biomedical purposes using imaging methods such as serial section microscopy, CT scans, and MRI. A frequent step in the analysis and reconstruction of such data is the alignment and registration of images that were acquired in succession along a spatial or temporal dimension. For example, in serial section electron microscopy, individual 2d sections are imaged via electron microscopy and then must be aligned to one another in order to produce a coherent 3d volume. State-of-the-art approaches find image correspondences derived from patch matching and invariant feature detectors, and then solve optimization problems that rigidly or elastically deform series of images into an aligned volume. Here we show how fully convolutional neural networks trained with an adversarial loss function can be used for two tasks: (1) synthesis of missing or damaged image data from adjacent sections, and (2) fine-scale alignment of block-face electron microscopy data. Finally, we show how these two capabilities can be combined in order to produce artificial isotropic volumes from anisotropic image volumes using a super-resolution adversarial alignment and interpolation approach.
Tasks Super-Resolution
Published 2017-06-30
URL http://arxiv.org/abs/1707.00067v1
PDF http://arxiv.org/pdf/1707.00067v1.pdf
PWC https://paperswithcode.com/paper/adversarial-image-alignment-and-interpolation
Repo
Framework
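
Below is a hedged, toy-scale sketch of the adversarial interpolation idea: a small fully convolutional generator synthesizes a missing section from its two neighbours, and a discriminator supplies the adversarial loss, combined here with an L1 reconstruction term. Architectures, sizes, and the added L1 term are illustrative assumptions, not the paper's networks.

```python
# Toy conditional-GAN sketch: synthesize a missing 2D section from its neighbours.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, neighbours):          # (B, 2, H, W) adjacent sections
        return self.net(neighbours)         # (B, 1, H, W) synthesized section

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, section):
        return self.net(section)            # real/fake logit

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

neighbours = torch.randn(4, 2, 64, 64)      # stand-in for adjacent EM sections
real_mid = torch.randn(4, 1, 64, 64)        # stand-in for the true middle section

fake_mid = G(neighbours)
# Discriminator step: distinguish real sections from synthesized ones.
loss_d = bce(D(real_mid), torch.ones(4, 1)) + bce(D(fake_mid.detach()), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
# Generator step: adversarial loss plus an L1 reconstruction term.
loss_g = bce(D(fake_mid), torch.ones(4, 1)) + F.l1_loss(fake_mid, real_mid)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```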

Continuous-Time Visual-Inertial Odometry for Event Cameras

Title Continuous-Time Visual-Inertial Odometry for Event Cameras
Authors Elias Mueggler, Guillermo Gallego, Henri Rebecq, Davide Scaramuzza
Abstract Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency on the order of microseconds. However, due to the fundamentally different structure of the sensor’s output, new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. Recent work has shown that a continuous-time representation of the event camera pose can deal with the high temporal resolution and asynchronous nature of this sensor in a principled way. In this paper, we leverage such a continuous-time representation to perform visual-inertial odometry with an event camera. This representation allows direct integration of the asynchronous events with microsecond accuracy and the inertial measurements at high frequency. The event camera trajectory is approximated by a smooth curve in the space of rigid-body motions using cubic splines. This formulation significantly reduces the number of variables in trajectory estimation problems. We evaluate our method on real data from several scenes and compare the results against ground truth from a motion-capture system. We show that our method provides improved accuracy over the result of a state-of-the-art visual odometry method for event cameras. We also show that both the map orientation and scale can be recovered accurately by fusing events and inertial data. To the best of our knowledge, this is the first work on visual-inertial fusion with event cameras using a continuous-time framework.
Tasks Motion Capture, Visual Odometry
Published 2017-02-23
URL http://arxiv.org/abs/1702.07389v2
PDF http://arxiv.org/pdf/1702.07389v2.pdf
PWC https://paperswithcode.com/paper/continuous-time-visual-inertial-odometry-for
Repo
Framework
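
The continuous-time idea can be illustrated with a toy example: interpolate control poses with a cubic spline for translation and SLERP for rotation, and query the trajectory at arbitrary timestamps (per event or per IMU sample). The paper's formulation uses cumulative cubic splines in the space of rigid-body motions; this sketch is only a simplified analogue with made-up control poses.

```python
# Toy continuous-time trajectory: cubic spline for position, SLERP for orientation.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.spatial.transform import Rotation, Slerp

t_ctrl = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                # control-pose times (s)
p_ctrl = np.array([[0, 0, 0], [0.4, 0.1, 0], [1.0, 0.3, 0.1],
                   [1.5, 0.2, 0.2], [2.0, 0.0, 0.3]])        # positions (m)
r_ctrl = Rotation.from_euler("z", [0, 10, 25, 30, 45], degrees=True)

pos_spline = CubicSpline(t_ctrl, p_ctrl, axis=0)
rot_slerp = Slerp(t_ctrl, r_ctrl)

t_query = np.linspace(0.0, 2.0, 9)           # in practice: event or IMU timestamps
positions = pos_spline(t_query)              # (9, 3)
orientations = rot_slerp(t_query).as_quat()  # (9, 4) quaternions
velocities = pos_spline(t_query, 1)          # analytic first derivative
print(positions.shape, orientations.shape, velocities.shape)
```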

End-to-End Relation Extraction using Markov Logic Networks

Title End-to-End Relation Extraction using Markov Logic Networks
Authors Sachin Pawar, Pushpak Bhattacharya, Girish K. Palshikar
Abstract The task of end-to-end relation extraction consists of two sub-tasks: i) identifying entity mentions along with their types and ii) recognizing semantic relations among the entity mention pairs. It has been shown that for better performance, it is necessary to address these two sub-tasks jointly. We propose an approach for simultaneous extraction of entity mentions and relations in a sentence by using inference in Markov Logic Networks (MLN). We learn three different classifiers: i) a local entity classifier, ii) a local relation classifier, and iii) a “pipeline” relation classifier which uses predictions of the local entity classifier. Predictions of these classifiers may be inconsistent with each other. We represent these predictions along with some domain knowledge using weighted first-order logic rules in an MLN and perform joint inference over the MLN to obtain a global output with minimum inconsistencies. Experiments on the ACE (Automatic Content Extraction) 2004 dataset demonstrate that our approach of joint extraction using MLNs outperforms the baselines of individual classifiers. Our end-to-end relation extraction performance is better than two of the three previous results reported on the ACE 2004 dataset.
Tasks Relation Extraction
Published 2017-12-04
URL http://arxiv.org/abs/1712.00988v1
PDF http://arxiv.org/pdf/1712.00988v1.pdf
PWC https://paperswithcode.com/paper/end-to-end-relation-extraction-using-markov
Repo
Framework
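
A toy analogue of the joint-inference step, with made-up scores and rule weights: local classifier scores for entity types and a relation are combined with weighted consistency rules, and the jointly best assignment is selected by enumeration. Real MLN inference handles much larger, weighted first-order theories.

```python
# Toy joint inference: combine local scores with weighted consistency rules.
import itertools

entity_types = ["PER", "ORG", "LOC"]
relations = ["works_for", "located_in", "none"]

# Local (possibly mutually inconsistent) classifier scores for two mentions.
score_e1 = {"PER": 1.2, "ORG": 0.1, "LOC": 0.0}
score_e2 = {"PER": 0.2, "ORG": 0.9, "LOC": 0.8}
score_rel = {"works_for": 1.0, "located_in": 0.7, "none": 0.1}

def rule_score(t1, t2, rel):
    """Weighted first-order-style constraints on relation argument types."""
    s = 0.0
    if rel == "works_for":
        s += 2.0 if (t1 == "PER" and t2 == "ORG") else -2.0
    if rel == "located_in":
        s += 2.0 if t2 == "LOC" else -2.0
    return s

best = max(
    itertools.product(entity_types, entity_types, relations),
    key=lambda a: score_e1[a[0]] + score_e2[a[1]] + score_rel[a[2]] + rule_score(*a),
)
print("joint assignment:", best)
```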

A Kinematic Chain Space for Monocular Motion Capture

Title A Kinematic Chain Space for Monocular Motion Capture
Authors Bastian Wandt, Hanno Ackermann, Bodo Rosenhahn
Abstract This paper deals with motion capture of kinematic chains (e.g. human skeletons) from monocular image sequences taken by uncalibrated cameras. We present a method based on projecting an observation into a kinematic chain space (KCS). An optimization of the nuclear norm is proposed that implicitly enforces structural properties of the kinematic chain. Unlike other approaches, our method does not require specific camera or object motion and does not rely on training data or previously determined constraints such as particular body lengths. The proposed algorithm is able to reconstruct scenes with limited camera motion and previously unseen motions. It is not only applicable to human skeletons but also to other kinematic chains, for instance animals or industrial robots. We achieve state-of-the-art results on different benchmark databases and real-world scenes.
Tasks Motion Capture
Published 2017-02-01
URL http://arxiv.org/abs/1702.00186v1
PDF http://arxiv.org/pdf/1702.00186v1.pdf
PWC https://paperswithcode.com/paper/a-kinematic-chain-space-for-monocular-motion
Repo
Framework
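
The workhorse of nuclear-norm optimization is singular value thresholding, the proximal operator of the nuclear norm. A minimal sketch follows; it shows the operator in isolation and says nothing about the paper's specific KCS parametrization or camera model.

```python
# Singular value thresholding: the proximal operator of the nuclear norm.
import numpy as np

def nuclear_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

def svt(M, tau):
    """Shrink singular values by tau; encourages a low-rank (structured) matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
M = rng.normal(size=(8, 6))
M_low = svt(M, tau=1.0)
print(nuclear_norm(M), "->", nuclear_norm(M_low))
```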

Effective sketching methods for value function approximation

Title Effective sketching methods for value function approximation
Authors Yangchen Pan, Erfan Sadeqi Azer, Martha White
Abstract High-dimensional representations, such as radial basis function networks or tile coding, are common choices for policy evaluation in reinforcement learning. Learning with such high-dimensional representations, however, can be expensive, particularly for matrix methods, such as least-squares temporal difference learning or quasi-Newton methods that approximate matrix step-sizes. In this work, we explore the utility of sketching for these two classes of algorithms. We highlight issues with sketching the high-dimensional features directly, which can incur significant bias. As a remedy, we demonstrate how to use sketching more sparingly, with only a left-sided sketch, which can still enable significant computational gains and the use of these matrix-based learning algorithms that are less sensitive to parameters. We empirically investigate these algorithms in four domains with a variety of representations. Our aim is to provide insights into effective use of sketching in practice.
Tasks
Published 2017-08-03
URL http://arxiv.org/abs/1708.01298v1
PDF http://arxiv.org/pdf/1708.01298v1.pdf
PWC https://paperswithcode.com/paper/effective-sketching-methods-for-value
Repo
Framework
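
One way to picture a left-sided sketch is to build the full LSTD system and compress only its rows with a random projection, keeping the feature dimension intact. The sketch below illustrates that construction on random data; it is not the paper's algorithm or its bias analysis.

```python
# Illustration: LSTD system A w = b, with only the left side compressed by a sketch S.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, gamma = 5000, 200, 50, 0.99

Phi = rng.normal(size=(n, d))            # features of visited states
Phi_next = rng.normal(size=(n, d))       # features of successor states
r = rng.normal(size=n)                   # rewards

A = Phi.T @ (Phi - gamma * Phi_next)     # (d, d) LSTD matrix
b = Phi.T @ r                            # (d,)

S = rng.normal(size=(m, d)) / np.sqrt(m)             # left-side random projection
w_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
print(w_sketch.shape, float(np.linalg.norm(A @ w_sketch - b)))
```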

A novel metaheuristic method for solving constrained engineering optimization problems: Drone Squadron Optimization

Title A novel metaheuristic method for solving constrained engineering optimization problems: Drone Squadron Optimization
Authors Vinícius Veloso de Melo
Abstract Several constrained optimization problems have been adequately solved over the years thanks to advances in metaheuristics. In this paper, we evaluate a novel self-adaptive and auto-constructive metaheuristic called Drone Squadron Optimization (DSO) in solving constrained engineering design problems. DSO is evaluated with a death-penalty constraint-handling scheme on three widely tested engineering design problems. Results show that the proposed approach is competitive with some very popular metaheuristics.
Tasks
Published 2017-08-04
URL http://arxiv.org/abs/1708.01368v1
PDF http://arxiv.org/pdf/1708.01368v1.pdf
PWC https://paperswithcode.com/paper/a-novel-metaheuristic-method-for-solving
Repo
Framework
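
The death-penalty rule itself is simple: any candidate that violates a constraint receives the worst possible fitness and is effectively discarded by the search. A minimal sketch, with a made-up objective and constraint rather than one of the paper's engineering problems:

```python
# Death-penalty constraint handling: infeasible candidates get infinite cost.
import math

def death_penalty(objective, constraints):
    def wrapped(x):
        if any(g(x) > 0 for g in constraints):   # convention: g(x) <= 0 means feasible
            return math.inf
        return objective(x)
    return wrapped

objective = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
constraints = [lambda x: x[0] + x[1] - 2]        # x0 + x1 <= 2

f = death_penalty(objective, constraints)
print(f([0.5, 0.5]), f([3.0, 3.0]))              # feasible vs. rejected candidate
```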

Complex-valued image denoising based on group-wise complex-domain sparsity

Title Complex-valued image denoising based on group-wise complex-domain sparsity
Authors Vladimir Katkovnik, Mykola Ponomarenko, Karen Egiazarian
Abstract Phase imaging and wavefront reconstruction from noisy observations of the complex exponent are the topic of this paper. It is a highly non-linear problem because the exponent is a 2π-periodic function of phase. The reconstruction of phase and amplitude is difficult: even with additive Gaussian noise in the observations, the distributions of the noise components in phase and amplitude are signal-dependent and non-Gaussian. Additional difficulties follow from the a priori unknown correlation of phase and amplitude in real-life scenarios. In this paper, we propose a new class of non-iterative and iterative complex-domain filters based on group-wise sparsity in the complex domain. This sparsity is based on the techniques implemented in Block-Matching 3D filtering (BM3D) and 3D/4D High-Order Singular Value Decomposition (HOSVD), exploited for spectrum design, analysis and filtering. The introduced algorithms are a generalization of the ideas used in the CD-BM3D algorithms presented in our previous publications. The algorithms are implemented as a MATLAB toolbox. The efficiency of the algorithms is demonstrated by simulation tests.
Tasks
Published 2017-11-01
URL http://arxiv.org/abs/1711.00362v1
PDF http://arxiv.org/pdf/1711.00362v1.pdf
PWC https://paperswithcode.com/paper/complex-valued-image-denosing-based-on-group
Repo
Framework
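
The claim that additive Gaussian noise on the complex exponent produces signal-dependent, non-Gaussian noise in phase and amplitude can be checked numerically; the short sketch below does exactly that on synthetic data (the noise level is arbitrary, and nothing here reflects the proposed filters).

```python
# Numerical check: Gaussian noise on exp(j*phi) gives non-Gaussian phase/amplitude noise.
import numpy as np

rng = np.random.default_rng(0)
phi = np.linspace(-np.pi, np.pi, 100000)           # true phase
u = np.exp(1j * phi)                                # noise-free complex exponent
sigma = 0.3
noise = sigma * (rng.normal(size=phi.shape) + 1j * rng.normal(size=phi.shape)) / np.sqrt(2)
z = u + noise                                       # noisy observation

phase_err = np.angle(z * np.conj(u))                # wrapped phase error
amp = np.abs(z)                                     # noisy amplitude (true value is 1)
print("phase error: mean %.4f, std %.4f" % (phase_err.mean(), phase_err.std()))
print("amplitude:   mean %.4f, std %.4f (Rician, not Gaussian)" % (amp.mean(), amp.std()))
```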

MonoCap: Monocular Human Motion Capture using a CNN Coupled with a Geometric Prior

Title MonoCap: Monocular Human Motion Capture using a CNN Coupled with a Geometric Prior
Authors Xiaowei Zhou, Menglong Zhu, Georgios Pavlakos, Spyridon Leonardos, Kostantinos G. Derpanis, Kostas Daniilidis
Abstract Recovering 3D full-body human pose is a challenging problem with many applications. It has been successfully addressed by motion capture systems with body-worn markers and multiple cameras. In this paper, we address the more challenging case of not only using a single camera but also not leveraging markers: going directly from 2D appearance to 3D geometry. Deep learning approaches have shown remarkable abilities to discriminatively learn 2D appearance features. The missing piece is how to integrate 2D, 3D and temporal information to recover 3D geometry and account for the uncertainties arising from the discriminative model. We introduce a novel approach that treats 2D joint locations as latent variables whose uncertainty distributions are given by a deep fully convolutional neural network. The unknown 3D poses are modeled by a sparse representation and the 3D parameter estimates are realized via an Expectation-Maximization algorithm, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Extensive evaluation on benchmark datasets shows that the proposed approach achieves greater accuracy than state-of-the-art baselines. Notably, the proposed approach does not require synchronized 2D-3D data for training and is applicable to “in-the-wild” images, which is demonstrated with the MPII dataset.
Tasks Motion Capture
Published 2017-01-09
URL http://arxiv.org/abs/1701.02354v2
PDF http://arxiv.org/pdf/1701.02354v2.pdf
PWC https://paperswithcode.com/paper/monocap-monocular-human-motion-capture-using
Repo
Framework

An Evaluation Framework and Database for MoCap-Based Gait Recognition Methods

Title An Evaluation Framework and Database for MoCap-Based Gait Recognition Methods
Authors Michal Balazia, Petr Sojka
Abstract As a contribution to reproducible research, this paper presents a framework and a database to improve the development, evaluation and comparison of methods for gait recognition from motion capture (MoCap) data. The evaluation framework provides implementation details and source code of state-of-the-art human-interpretable geometric features as well as our own approaches where gait features are learned by a modification of Fisher’s Linear Discriminant Analysis with the Maximum Margin Criterion, and by a combination of Principal Component Analysis and Linear Discriminant Analysis. It includes a description and source code of a mechanism for evaluating four class separability coefficients of feature space and four rank-based classifier performance metrics. This framework also contains a tool for learning a custom classifier and for classifying a custom query on a custom gallery. We provide an experimental database along with source code for its extraction from the general CMU MoCap database.
Tasks Gait Recognition, Motion Capture
Published 2017-01-04
URL http://arxiv.org/abs/1701.00995v3
PDF http://arxiv.org/pdf/1701.00995v3.pdf
PWC https://paperswithcode.com/paper/an-evaluation-framework-and-database-for
Repo
Framework
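
One of the learned-feature baselines mentioned above combines Principal Component Analysis with Linear Discriminant Analysis. A minimal scikit-learn sketch on random stand-in gait features follows; the real features are extracted from MoCap joint trajectories as described in the framework.

```python
# PCA + LDA pipeline on stand-in gait feature vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 120))          # hypothetical gait feature vectors
y = rng.integers(0, 10, size=300)        # hypothetical walker identities

pipe = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())
print(cross_val_score(pipe, X, y, cv=5).mean())
```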

Dropping Activation Outputs with Localized First-layer Deep Network for Enhancing User Privacy and Data Security

Title Dropping Activation Outputs with Localized First-layer Deep Network for Enhancing User Privacy and Data Security
Authors Hao Dong, Chao Wu, Zhen Wei, Yike Guo
Abstract Deep learning methods can play a crucial role in anomaly detection, prediction, and decision support for applications like personal healthcare and pervasive body sensing. However, the current architecture of deep networks suffers from a privacy issue: users need to hand their data to the model (typically hosted on a server or a cluster in the cloud) for training or prediction. This problem is more severe for sensitive healthcare or medical data (e.g., fMRI or body-sensor measurements such as EEG signals). In addition, there is a security risk of leaking these data during transmission from the user to the model (especially over the Internet). Targeting these issues, in this paper we propose a new architecture for deep networks in which users do not reveal their original data to the model. In our method, feed-forward propagation and data encryption are combined into one process: we migrate the first layer of the deep network to users’ local devices, apply the activation functions locally, and then use a “dropping activation output” method to make the output non-invertible. The resulting approach is able to make model predictions without accessing users’ sensitive raw data. The experiments conducted in this paper show that our approach achieves the desired privacy protection and demonstrates several advantages over the traditional approach with encryption/decryption.
Tasks Anomaly Detection, Decision Making, EEG
Published 2017-11-20
URL http://arxiv.org/abs/1711.07520v1
PDF http://arxiv.org/pdf/1711.07520v1.pdf
PWC https://paperswithcode.com/paper/dropping-activation-outputs-with-localized
Repo
Framework
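
A rough sketch of the split-computation idea: the first layer and its activation run on the user's device, a random subset of the activation outputs is zeroed ("dropped") to make the mapping non-invertible, and only the transformed vector leaves the device. Layer sizes, the drop rate, and the particular random-drop scheme here are illustrative assumptions, not the paper's exact design.

```python
# On-device first layer + activation dropping; only the encoded vector is sent out.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, drop_rate = 256, 512, 0.3

W1 = rng.normal(size=(d_in, d_hidden)) * 0.05   # first-layer weights kept on-device
b1 = np.zeros(d_hidden)

def local_encode(x):
    h = np.maximum(x @ W1 + b1, 0.0)             # first layer + ReLU, on-device
    keep = rng.random(d_hidden) >= drop_rate     # drop a fraction of the outputs
    return h * keep                              # only this vector leaves the device

x_raw = rng.normal(size=d_in)                    # sensitive raw signal (e.g. an EEG window)
h_sent = local_encode(x_raw)
print(h_sent.shape, (h_sent == 0).mean())        # the server never sees x_raw
```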

Tensor Completion Algorithms in Big Data Analytics

Title Tensor Completion Algorithms in Big Data Analytics
Authors Qingquan Song, Hancheng Ge, James Caverlee, Xia Hu
Abstract Tensor completion is the problem of filling in the missing or unobserved entries of partially observed tensors. Due to the multidimensional character of tensors in describing complex datasets, tensor completion algorithms and their applications have received wide attention and achieved notable success in areas such as data mining, computer vision, signal processing, and neuroscience. In this survey, we provide a modern overview of recent advances in tensor completion algorithms from the perspective of big data analytics characterized by diverse variety, large volume, and high velocity. We characterize these advances from four perspectives: general tensor completion algorithms, tensor completion with auxiliary information (variety), scalable tensor completion algorithms (volume), and dynamic tensor completion algorithms (velocity). Further, we identify several tensor completion applications on real-world data-driven problems and present some common experimental frameworks popularized in the literature. Our goal is to summarize these popular methods and introduce them to researchers and practitioners for promoting future research and applications. We conclude with a discussion of key challenges and promising research directions in this community for future exploration.
Tasks
Published 2017-11-28
URL http://arxiv.org/abs/1711.10105v2
PDF http://arxiv.org/pdf/1711.10105v2.pdf
PWC https://paperswithcode.com/paper/tensor-completion-algorithms-in-big-data
Repo
Framework
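
As a concrete example of the "general tensor completion algorithms" category, here is a hedged sketch that fits a rank-R CP model to the observed entries by plain gradient descent and uses it to fill in the missing ones. Dimensions, rank, step size, and iteration count are illustrative.

```python
# Masked CP factorization by gradient descent, used to impute missing tensor entries.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 10, 10, 10, 3
A0, B0, C0 = (rng.normal(size=(n, R)) for n in (I, J, K))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)        # ground-truth low-rank tensor
mask = rng.random(T.shape) < 0.3                   # ~30% of entries observed

A, B, C = (rng.normal(size=(n, R)) for n in (I, J, K))
lr = 0.005
for _ in range(5000):
    E = mask * (np.einsum("ir,jr,kr->ijk", A, B, C) - T)   # residual on observed entries
    gA = np.einsum("ijk,jr,kr->ir", E, B, C)
    gB = np.einsum("ijk,ir,kr->jr", E, A, C)
    gC = np.einsum("ijk,ir,jr->kr", E, A, B)
    A -= lr * gA; B -= lr * gB; C -= lr * gC

T_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
print("RMSE on missing entries:", np.sqrt(np.mean((T_hat - T)[~mask] ** 2)))
```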

Similarity-based Multi-label Learning

Title Similarity-based Multi-label Learning
Authors Ryan A. Rossi, Nesreen K. Ahmed, Hoda Eldardiry, Rong Zhou
Abstract Multi-label classification is an important learning problem with many applications. In this work, we propose a principled similarity-based approach for multi-label learning called SML. We also introduce a similarity-based approach for predicting the label set size. The experimental results demonstrate the effectiveness of SML for multi-label classification, where it is shown to compare favorably with a wide variety of existing algorithms across a range of evaluation criteria.
Tasks Multi-Label Classification, Multi-Label Learning
Published 2017-10-27
URL http://arxiv.org/abs/1710.10335v1
PDF http://arxiv.org/pdf/1710.10335v1.pdf
PWC https://paperswithcode.com/paper/similarity-based-multi-label-learning
Repo
Framework
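
A hedged sketch in the spirit of a similarity-based multi-label predictor: aggregate the label vectors of the most similar training examples and use the same neighbours to estimate how many labels to output. This is an illustration of the idea, not the paper's SML algorithm.

```python
# Similarity-based multi-label prediction with a neighbour-based label-set-size estimate.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_labels, d, k = 200, 6, 20, 10
X_train = rng.normal(size=(n_train, d))
Y_train = (rng.random((n_train, n_labels)) < 0.25).astype(float)   # toy label matrix

def predict(x):
    sims = X_train @ x / (np.linalg.norm(X_train, axis=1) * np.linalg.norm(x) + 1e-12)
    nn = np.argsort(-sims)[:k]                              # k most similar examples
    label_scores = Y_train[nn].mean(axis=0)                 # similarity-based label scores
    set_size = int(round(Y_train[nn].sum(axis=1).mean()))   # predicted label-set size
    return np.argsort(-label_scores)[:max(set_size, 1)]

print(predict(rng.normal(size=d)))
```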

Sleep Stage Classification Based on Multi-level Feature Learning and Recurrent Neural Networks via Wearable Device

Title Sleep Stage Classification Based on Multi-level Feature Learning and Recurrent Neural Networks via Wearable Device
Authors Xin Zhang, Weixuan Kou, Eric I-Chao Chang, He Gao, Yubo Fan, Yan Xu
Abstract This paper proposes a practical approach for automatic sleep stage classification based on a multi-level feature learning framework and Recurrent Neural Network (RNN) classifier using heart rate and wrist actigraphy derived from a wearable device. The feature learning framework is designed to extract low- and mid-level features. Low-level features capture temporal and frequency domain properties and mid-level features learn compositions and structural information of signals. Since sleep staging is a sequential problem with long-term dependencies, we take advantage of RNNs with Bidirectional Long Short-Term Memory (BLSTM) architectures for sequence data learning. To simulate the actual situation of daily sleep, experiments are conducted with a resting group in which sleep is recorded in resting state, and a comprehensive group in which both resting sleep and non-resting sleep are included. We evaluate the algorithm based on eight-fold cross-validation to classify five sleep stages (W, N1, N2, N3, and REM). The proposed algorithm achieves weighted precision, recall and F1 score of 58.0%, 60.3%, and 58.2% in the resting group and 58.5%, 61.1%, and 58.5% in the comprehensive group, respectively. Various comparison experiments demonstrate the effectiveness of feature learning and BLSTM. We further explore the influence of depth and width of RNNs on performance. Our method is designed specifically for wearable devices and is expected to be applicable for long-term sleep monitoring at home. Without using too much prior domain knowledge, our method has the potential to generalize to sleep-disorder detection.
Tasks Automatic Sleep Stage Classification
Published 2017-11-02
URL http://arxiv.org/abs/1711.00629v1
PDF http://arxiv.org/pdf/1711.00629v1.pdf
PWC https://paperswithcode.com/paper/sleep-stage-classification-based-on-multi
Repo
Framework
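
A minimal sketch of the sequence classifier described above: a bidirectional LSTM over per-epoch feature vectors (stand-ins for the heart-rate and actigraphy features) emitting one of five sleep stages per time step. Feature dimensions and hyperparameters are illustrative.

```python
# BLSTM sleep-stage classifier over sequences of per-epoch feature vectors.
import torch
import torch.nn as nn

class SleepBLSTM(nn.Module):
    def __init__(self, n_features=32, hidden=64, n_stages=5):
        super().__init__()
        self.blstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_stages)
    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.blstm(x)
        return self.head(out)              # (batch, time, n_stages) stage logits

model = SleepBLSTM()
x = torch.randn(8, 120, 32)                # 8 nights, 120 epochs, 32 features each
logits = model(x)
labels = torch.randint(0, 5, (8, 120))     # W, N1, N2, N3, REM as class ids 0..4
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5), labels.reshape(-1))
loss.backward()
print(logits.shape, float(loss))
```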