January 27, 2020

3045 words 15 mins read

Paper Group ANR 1080

Leveraging Contextual Embeddings for Detecting Diachronic Semantic Shift. How to improve CNN-based 6-DoF camera pose estimation. Linking Art through Human Poses. Solving The Exam Scheduling Problems in Central Exams With Genetic Algorithms. Training Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text. M …

Leveraging Contextual Embeddings for Detecting Diachronic Semantic Shift

Title Leveraging Contextual Embeddings for Detecting Diachronic Semantic Shift
Authors Matej Martinc, Petra Kralj Novak, Senja Pollak
Abstract We propose a new method that leverages contextual embeddings for the task of diachronic semantic shift detection by generating time-specific word representations from BERT embeddings. The results of our experiments on the domain-specific LiverpoolFC corpus suggest that the proposed method has performance comparable to the current state-of-the-art without requiring any time-consuming domain adaptation on large corpora. The results on the newly created Brexit news corpus suggest that the method can be successfully used for the detection of short-term yearly semantic shift. Lastly, the model also shows promising results in a multilingual setting, where the task was to detect differences and similarities between diachronic semantic shifts in different languages.
Tasks Domain Adaptation
Published 2019-12-02
URL https://arxiv.org/abs/1912.01072v2
PDF https://arxiv.org/pdf/1912.01072v2.pdf
PWC https://paperswithcode.com/paper/leveraging-contextual-embeddings-for
Repo
Framework
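
As a rough illustration of the idea summarized above (not the authors' released code), the sketch below builds a time-specific representation of a target word by averaging its contextual embeddings within each corpus slice and scores shift as the cosine distance between the per-period averages; the array shapes and the plain-numpy formulation are assumptions.

```python
import numpy as np

def semantic_shift(period_a_embeddings, period_b_embeddings):
    """Score diachronic shift of one word between two time periods.

    Each argument is an (n_occurrences, dim) array of contextual
    (e.g. BERT) embeddings of the word, collected from the corpus
    slice for that period. Returns the cosine distance between the
    per-period mean representations (higher = larger shift)."""
    a = np.asarray(period_a_embeddings).mean(axis=0)
    b = np.asarray(period_b_embeddings).mean(axis=0)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos

# Toy usage with random vectors standing in for real BERT embeddings.
rng = np.random.default_rng(0)
print(semantic_shift(rng.normal(size=(50, 768)), rng.normal(size=(40, 768))))
```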

How to improve CNN-based 6-DoF camera pose estimation

Title How to improve CNN-based 6-DoF camera pose estimation
Authors Soroush Seifi, Tinne Tuytelaars
Abstract Convolutional neural networks (CNNs) and transfer learning have recently been used for 6 degrees of freedom (6-DoF) camera pose estimation. While they do not reach the same accuracy as visual SLAM-based approaches and are restricted to a specific environment, they excel in robustness and can be applied even to a single image. In this paper, we study PoseNet [1] and investigate modifications based on datasets’ characteristics to improve the accuracy of the pose estimates. In particular, we emphasize the importance of field-of-view over image resolution; we present a data augmentation scheme to reduce overfitting; and we study the effect of Long Short-Term Memory (LSTM) cells. Lastly, we combine these modifications and improve PoseNet’s performance for monocular CNN-based camera pose regression.
Tasks Data Augmentation, Pose Estimation, Transfer Learning
Published 2019-09-23
URL https://arxiv.org/abs/1909.10312v2
PDF https://arxiv.org/pdf/1909.10312v2.pdf
PWC https://paperswithcode.com/paper/190910312
Repo
Framework
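
The abstract's focus on pose-regression accuracy can be made concrete with a PoseNet-style loss that combines translation error with a weighted quaternion orientation error. The function below is a minimal hedged sketch; the beta weight and the numpy formulation are assumptions, not the paper's exact training setup.

```python
import numpy as np

def pose_loss(t_pred, q_pred, t_true, q_true, beta=500.0):
    """PoseNet-style 6-DoF loss: translation error plus beta-weighted
    orientation error, with the predicted quaternion normalized to
    unit length before comparison."""
    q_pred = q_pred / np.linalg.norm(q_pred)
    q_true = q_true / np.linalg.norm(q_true)
    return np.linalg.norm(t_true - t_pred) + beta * np.linalg.norm(q_true - q_pred)

# Toy usage with made-up predicted and ground-truth poses.
print(pose_loss(np.array([1.0, 2.0, 0.5]), np.array([0.9, 0.1, 0.1, 0.1]),
                np.array([1.1, 2.1, 0.4]), np.array([1.0, 0.0, 0.0, 0.0])))
```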

Linking Art through Human Poses

Title Linking Art through Human Poses
Authors Tomas Jenicek, Ondřej Chum
Abstract We address the discovery of composition transfer in artworks based on their visual content. Automated analysis of large art collections, which are growing as a result of art digitization among museums and galleries, is an important tool for art history and assists cultural heritage preservation. Modern image retrieval systems offer good performance on visually similar artworks, but fail in the cases of more abstract composition transfer. The proposed approach links artworks through the pose similarity of human figures depicted in images. Human figures are the subject of a large fraction of visual art from the Middle Ages to modernity, and their distinctive poses were often a source of inspiration among artists. The method consists of two steps – fast pose matching and robust spatial verification. We experimentally show that explicit human pose matching is superior to standard content-based image retrieval methods on a manually annotated art composition transfer dataset.
Tasks Content-Based Image Retrieval, Image Retrieval
Published 2019-07-08
URL https://arxiv.org/abs/1907.03537v1
PDF https://arxiv.org/pdf/1907.03537v1.pdf
PWC https://paperswithcode.com/paper/linking-art-through-human-poses
Repo
Framework
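
A minimal sketch of the kind of pose comparison the first step (fast pose matching) relies on: normalize detected keypoints for position and scale, then measure mean joint-to-joint distance. This is an illustrative simplification, not the paper's matching and spatial-verification pipeline.

```python
import numpy as np

def normalize_pose(keypoints):
    """Center a (n_joints, 2) array of keypoints and scale to unit size,
    so poses can be compared regardless of position and scale."""
    pts = np.asarray(keypoints, dtype=float)
    pts = pts - pts.mean(axis=0)
    scale = np.linalg.norm(pts)
    return pts / scale if scale > 0 else pts

def pose_distance(kp_a, kp_b):
    """Mean joint-to-joint distance between two normalized poses
    (lower = more similar composition)."""
    return float(np.linalg.norm(normalize_pose(kp_a) - normalize_pose(kp_b), axis=1).mean())

# Toy usage: the same pose shifted and scaled should score near zero.
pose = np.array([[0, 0], [1, 2], [2, 0], [1, -2]], dtype=float)
print(pose_distance(pose, 3.0 * pose + 10.0))
```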

Solving The Exam Scheduling Problems in Central Exams With Genetic Algorithms

Title Solving The Exam Scheduling Problems in Central Exams With Genetic Algorithms
Authors Murat Dener, M. Hanefi Calp
Abstract An exam scheduling application is expected to make efficient use of resources. There are various criteria for using resources efficiently and for carrying out all exams at minimum cost in the shortest possible time, and educational institutions aim to run central examination organizations successfully under such criteria. In this study, a two-stage genetic algorithm was developed. In the first stage, courses were assigned to sessions. In the second stage, the students taking each exam session were assigned to examination rooms. The goals of the study are to increase the number of joint students participating in sessions, to use the minimum number of buildings in the same session, and to reduce the number of supervisors by using the minimum possible number of classrooms. The study presents a general-purpose exam scheduling solution for educational institutions, and the developed system can be used in different central examinations. The results of the sample application show that the proposed genetic algorithm gives successful results.
Tasks
Published 2019-02-04
URL http://arxiv.org/abs/1902.01360v1
PDF http://arxiv.org/pdf/1902.01360v1.pdf
PWC https://paperswithcode.com/paper/solving-the-exam-scheduling-problems-in
Repo
Framework
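
For concreteness, here is a minimal sketch of the first stage only (assigning courses to sessions) with a toy fitness that penalizes student conflicts; the enrollment data, genetic operators, and parameters are illustrative assumptions rather than the paper's implementation.

```python
import random

# Toy data (assumptions, not the paper's dataset): which courses each student takes.
ENROLLMENTS = {"s1": {"math", "physics"}, "s2": {"math", "chem"}, "s3": {"physics", "chem"}}
COURSES = sorted({c for cs in ENROLLMENTS.values() for c in cs})
N_SESSIONS = 2

def fitness(assignment):
    """Penalty = number of student conflicts (two of a student's exams in one session)."""
    conflicts = 0
    for courses in ENROLLMENTS.values():
        sessions = [assignment[c] for c in courses]
        conflicts += len(sessions) - len(set(sessions))
    return -conflicts  # higher is better

def random_individual():
    return {c: random.randrange(N_SESSIONS) for c in COURSES}

def crossover(a, b):
    return {c: random.choice((a[c], b[c])) for c in COURSES}

def mutate(ind, rate=0.1):
    return {c: (random.randrange(N_SESSIONS) if random.random() < rate else s)
            for c, s in ind.items()}

def evolve(pop_size=30, generations=50):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
print(best, "conflicts:", -fitness(best))
```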

Training Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text

Title Training Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Authors Toms Bergmanis, Sharon Goldwater
Abstract Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in low-resource languages. In addition (as shown here), in a low-resource setting, a lemmatizer can learn more from $n$ labeled examples of distinct words (types) than from $n$ (contiguous) labeled tokens, since the latter contain far fewer distinct types. To combine the efficiency of type-based learning with the benefits of context, we propose a way to train a context-sensitive lemmatizer with little or no labeled corpus data, using inflection tables from the UniMorph project and raw text examples from Wikipedia that provide sentence contexts for the unambiguous UniMorph examples. Despite these being unambiguous examples, the model successfully generalizes from them, leading to improved results (both overall, and especially on unseen words) in comparison to a baseline that does not use context.
Tasks Data Augmentation, Lemmatization
Published 2019-04-02
URL https://arxiv.org/abs/1904.01464v3
PDF https://arxiv.org/pdf/1904.01464v3.pdf
PWC https://paperswithcode.com/paper/data-augmentation-for-context-sensitive
Repo
Framework
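
A minimal sketch, under simplified assumptions, of the augmentation idea described above: keep only inflected forms that map to a single lemma in a UniMorph-style table, then harvest raw-text sentences containing those forms as context-sensitive training examples. The triple format and the pre-tokenized input are simplifications.

```python
from collections import defaultdict

def unambiguous_forms(unimorph_rows):
    """Keep only inflected forms that map to a single lemma.

    unimorph_rows: iterable of (lemma, form, tags) triples, as in UniMorph tables."""
    lemmas = defaultdict(set)
    for lemma, form, _tags in unimorph_rows:
        lemmas[form.lower()].add(lemma)
    return {form: next(iter(ls)) for form, ls in lemmas.items() if len(ls) == 1}

def augment(sentences, form_to_lemma):
    """Turn raw-text sentences into (tokens, position, lemma) training examples
    wherever an unambiguous inflected form occurs in context."""
    for tokens in sentences:
        for i, tok in enumerate(tokens):
            lemma = form_to_lemma.get(tok.lower())
            if lemma is not None:
                yield tokens, i, lemma

# Toy usage with made-up table rows and sentences.
table = [("walk", "walked", "V;PST"), ("dog", "dogs", "N;PL"), ("bark", "barked", "V;PST")]
raw = [["The", "dogs", "barked"], ["She", "walked", "home"]]
for example in augment(raw, unambiguous_forms(table)):
    print(example)
```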

Multivariate mathematical morphology for DCE-MRI image analysis in angiogenesis studies

Title Multivariate mathematical morphology for DCE-MRI image analysis in angiogenesis studies
Authors Guillaume Noyel, Jesus Angulo, Dominique Jeulin, Daniel Balvay, Charles-André Cuenod
Abstract We propose a new computer-aided detection framework for tumours acquired on DCE-MRI (Dynamic Contrast Enhanced Magnetic Resonance Imaging) series on small animals. In this approach we consider DCE-MRI series as multivariate images. A full multivariate segmentation method based on dimensionality reduction, noise filtering, supervised classification and stochastic watershed is explained and tested on several data sets. The two main key points introduced in this paper are noise reduction that preserves contours and spatio-temporal segmentation by stochastic watershed. Noise reduction is performed in a special way that selects factorial axes of Factor Correspondence Analysis in order to preserve contours. Then a spatio-temporal approach based on stochastic watershed is used to segment tumours. The results obtained are in accordance with the diagnosis of the medical doctors.
Tasks Dimensionality Reduction
Published 2019-10-28
URL https://arxiv.org/abs/1910.12704v1
PDF https://arxiv.org/pdf/1910.12704v1.pdf
PWC https://paperswithcode.com/paper/multivariate-mathematical-morphology-for-dce
Repo
Framework
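
The contour-preserving noise filtering step can be pictured with a simple axis-selection sketch: project each pixel's time curve onto a few leading factorial axes and reconstruct. The code below uses a PCA/SVD stand-in for the Factor Correspondence Analysis described in the abstract, so it is only an approximation of the actual procedure.

```python
import numpy as np

def filter_leading_axes(series_image, n_axes=3):
    """Denoise a multivariate (temporal) image by projecting each pixel's
    time series onto its leading axes and reconstructing.

    series_image: (n_pixels, n_timepoints) array, one DCE-MRI time curve per
    pixel. PCA/SVD is used here as a stand-in for the FCA axis selection."""
    X = np.asarray(series_image, dtype=float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (U[:, :n_axes] * S[:n_axes]) @ Vt[:n_axes] + mean

# Toy usage: 100 "pixels", each with a 20-point time curve.
rng = np.random.default_rng(0)
noisy = rng.normal(size=(100, 20))
print(filter_leading_axes(noisy, n_axes=3).shape)
```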

Beyond Adaptive Submodularity: Adaptive Influence Maximization with Intermediary Constraints

Title Beyond Adaptive Submodularity: Adaptive Influence Maximization with Intermediary Constraints
Authors Shatian Wang, Zhen Xu, Van-Anh Truong
Abstract We consider a brand with a given budget that wants to promote a product over multiple rounds of influencer marketing. In each round, it commissions an influencer to promote the product over a social network, and then observes the subsequent diffusion of the product before adaptively choosing the next influencer to commission. This process terminates when the budget is exhausted. We assume that the diffusion process follows the popular Independent Cascade model. We also consider an online learning setting, where the brand initially does not know the diffusion parameters associated with the model, and has to gradually learn the parameters over time. Unlike in existing models, the rounds in our model are correlated through an intermediary constraint: each user can be commissioned an unlimited number of times. However, each user will spread influence without commission at most once. Due to this added constraint, the order in which the influencers are chosen can change the influence spread, making obsolete existing analysis techniques that are based on the notion of adaptive submodularity. We devise a sample path analysis to prove that a greedy policy that knows the diffusion parameters achieves at least $1-1/e - \epsilon$ times the expected reward of the optimal policy. In the online-learning setting, we are the first to consider a truly adaptive decision making framework, rather than assuming independent epochs, and adaptivity only within epochs. Under mild assumptions, we derive a regret bound for our algorithm. In our numerical experiments, we simulate information diffusions on four Twitter sub-networks, and compare our UCB-based learning algorithms with several baseline adaptive seeding strategies. Our learning algorithm consistently outperforms the baselines and achieves rewards close to the greedy policy that knows the true diffusion parameters.
Tasks Decision Making
Published 2019-11-08
URL https://arxiv.org/abs/1911.02986v1
PDF https://arxiv.org/pdf/1911.02986v1.pdf
PWC https://paperswithcode.com/paper/beyond-adaptive-submodularity-adaptive
Repo
Framework
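
To make the greedy policy concrete, here is a small hedged sketch: each round it commissions the influencer with the highest simulated marginal reach under the Independent Cascade model, while tracking which users have already spent their one free round of spreading (one plausible reading of the intermediary constraint). The graph format, probabilities, and simulation counts are assumptions.

```python
import random

def simulate_cascade(graph, seed, already_spread):
    """One Independent Cascade simulation from `seed`.

    graph: {node: [(neighbor, activation_prob), ...]}.
    already_spread: nodes that have already spread influence for free once
    and so stay silent now (the intermediary constraint, as interpreted here).
    Returns the set of nodes reached in this round."""
    active, frontier = {seed}, [seed]
    while frontier:
        node, frontier = frontier[0], frontier[1:]
        if node != seed and node in already_spread:
            continue  # free spreading is used up for this user
        for nbr, p in graph.get(node, []):
            if nbr not in active and random.random() < p:
                active.add(nbr)
                frontier.append(nbr)
    return active

def greedy_adaptive(graph, budget, n_sims=200):
    """Each round, commission the influencer with the highest simulated marginal reach."""
    spread, influenced = set(), set()
    for _ in range(budget):
        def estimate(seed):
            return sum(len(simulate_cascade(graph, seed, spread) - influenced)
                       for _ in range(n_sims)) / n_sims
        best = max(graph, key=estimate)
        outcome = simulate_cascade(graph, best, spread)
        influenced |= outcome
        spread |= outcome  # these users have now spread once
    return influenced

# Toy usage on a tiny network.
g = {1: [(2, 0.5), (3, 0.5)], 2: [(4, 0.5)], 3: [(4, 0.5)], 4: []}
random.seed(0)
print(sorted(greedy_adaptive(g, budget=2)))
```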

Pose Graph Optimization for Unsupervised Monocular Visual Odometry

Title Pose Graph Optimization for Unsupervised Monocular Visual Odometry
Authors Yang Li, Yoshitaka Ushiku, Tatsuya Harada
Abstract Unsupervised learning based monocular visual odometry (VO) has lately drawn significant attention for its potential for label-free learning and its robustness to camera parameters and environmental variations. However, partially due to the lack of drift correction techniques, these methods are still by far less accurate than geometric approaches for large-scale odometry estimation. In this paper, we propose to leverage graph optimization and loop closure detection to overcome the limitations of unsupervised learning based monocular visual odometry. To this end, we propose a hybrid VO system which combines an unsupervised monocular VO called NeuralBundler with a pose graph optimization back-end. NeuralBundler is a neural network architecture that uses temporal and spatial photometric loss as its main supervision and generates a windowed pose graph consisting of multi-view 6-DoF constraints. We propose a novel pose cycle consistency loss to relieve the tensions in the windowed pose graph, leading to improved performance and robustness. In the back-end, a global pose graph is built from local and loop 6-DoF constraints estimated by NeuralBundler and is optimized over SE(3). Empirical evaluation on the KITTI odometry dataset demonstrates that 1) NeuralBundler achieves state-of-the-art performance on unsupervised monocular VO estimation, and 2) our whole approach achieves efficient loop closing and shows favorable overall translational accuracy compared to established monocular SLAM systems.
Tasks Loop Closure Detection, Monocular Visual Odometry, Visual Odometry
Published 2019-03-15
URL http://arxiv.org/abs/1903.06315v1
PDF http://arxiv.org/pdf/1903.06315v1.pdf
PWC https://paperswithcode.com/paper/pose-graph-optimization-for-unsupervised
Repo
Framework
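
The pose cycle consistency idea can be illustrated with a small numpy sketch: compose relative SE(3) transforms around a closed cycle and penalize the deviation from the identity. This is a generic formulation for illustration, not the exact loss used to train NeuralBundler.

```python
import numpy as np

def se3(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def cycle_consistency_error(relative_poses):
    """Compose relative 6-DoF transforms around a closed cycle; a perfectly
    consistent windowed pose graph composes to the identity, so the Frobenius
    deviation from identity can serve as a cycle-consistency penalty."""
    composed = np.eye(4)
    for T in relative_poses:
        composed = composed @ T
    return float(np.linalg.norm(composed - np.eye(4)))

def rot_z(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])

# Toy usage: a triangle of relative poses chosen to close exactly (error ~ 0).
cycle = [se3(rot_z(90), [1, 0, 0]), se3(rot_z(90), [1, 0, 0]), se3(rot_z(180), [1, 1, 0])]
print(cycle_consistency_error(cycle))
```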

Deep least-squares methods: an unsupervised learning-based numerical method for solving elliptic PDEs

Title Deep least-squares methods: an unsupervised learning-based numerical method for solving elliptic PDEs
Authors Zhiqiang Cai, Jingshuang Chen, Min Liu, Xinyu Liu
Abstract This paper studies an unsupervised deep learning-based numerical approach for solving partial differential equations (PDEs). The approach makes use of the deep neural network to approximate solutions of PDEs through the compositional construction and employs least-squares functionals as loss functions to determine parameters of the deep neural network. There are various least-squares functionals for a partial differential equation. This paper focuses on the so-called first-order system least-squares (FOSLS) functional studied in [3], which is based on a first-order system of scalar second-order elliptic PDEs. Numerical results for second-order elliptic PDEs in one dimension are presented.
Tasks
Published 2019-11-05
URL https://arxiv.org/abs/1911.02109v1
PDF https://arxiv.org/pdf/1911.02109v1.pdf
PWC https://paperswithcode.com/paper/deep-least-squares-methods-an-unsupervised
Repo
Framework
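
For concreteness, the first-order system least-squares (FOSLS) loss the abstract refers to can be written for the model Poisson problem roughly as follows (a generic statement under standard assumptions, not a quotation of the paper's exact functional). For $-\Delta u = f$ in $\Omega$ with $u = 0$ on $\partial\Omega$, introduce the flux $\boldsymbol{\sigma} = \nabla u$ to obtain the first-order system $\boldsymbol{\sigma} - \nabla u = 0$, $-\nabla\cdot\boldsymbol{\sigma} = f$. With network outputs $u_\theta$ and $\boldsymbol{\sigma}_\theta$, the training loss is

$$\mathcal{L}(\theta) = \|\nabla\cdot\boldsymbol{\sigma}_\theta + f\|_{L^2(\Omega)}^2 + \|\boldsymbol{\sigma}_\theta - \nabla u_\theta\|_{L^2(\Omega)}^2 + \lambda\,\|u_\theta\|_{L^2(\partial\Omega)}^2,$$

where the norms are approximated by quadrature or Monte Carlo sampling of collocation points, and the boundary term with weight $\lambda$ is an assumption added here for completeness.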

Method of diagnosing heart disease based on deep learning ECG signal

Title Method of diagnosing heart disease based on deep learning ECG signal
Authors Jie Zhang, Bohao Li, Kexin Xiang, Xuegang Shi
Abstract The traditional method of diagnosing heart disease from an ECG signal is manual observation. Some have tried to combine expertise and signal processing to classify ECG signals by heart disease type; however, these approaches are not yet sufficient for medical applications. We develop an algorithm that combines signal processing and deep learning to classify ECG signals into normal, atrial fibrillation (AF), other rhythm, and noise, which helps address this problem. We demonstrate that the time-frequency diagram of an ECG signal can be obtained by wavelet transform, and a DNN can then classify the time-frequency diagram to identify the heart condition that the person from whom the signal was collected may have. Overall, an accuracy of 94 percent is achieved on the validation set. According to the evaluation criteria of the PhysioNet/Computing in Cardiology (CinC) Challenge 2017, the F1 score of this method is 0.957, which is higher than that of the first place in the 2017 competition.
Tasks
Published 2019-06-25
URL https://arxiv.org/abs/1907.01514v2
PDF https://arxiv.org/pdf/1907.01514v2.pdf
PWC https://paperswithcode.com/paper/method-of-diagnosing-heart-disease-based-on
Repo
Framework
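
A hedged sketch of the time-frequency front end described above, using PyWavelets' continuous wavelet transform to turn a 1-D ECG signal into a scalogram image; the sampling rate, scales, and mother wavelet are assumptions, and the downstream DNN classifier is only indicated in a comment.

```python
import numpy as np
import pywt  # PyWavelets

def ecg_scalogram(signal, fs=300, scales=np.arange(1, 64)):
    """Continuous wavelet transform of a 1-D ECG signal into a
    time-frequency image (scalogram) that a CNN/DNN could classify
    into classes such as normal, AF, other rhythm, and noise."""
    coef, _freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    return np.abs(coef)  # shape: (n_scales, n_samples)

# Toy usage: a synthetic 1-second signal standing in for a real ECG record.
t = np.linspace(0, 1, 300, endpoint=False)
sig = np.sin(2 * np.pi * 7 * t) + 0.1 * np.random.randn(t.size)
print(ecg_scalogram(sig).shape)  # the resulting image is what the classifier would consume
```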

Geometry-aware Generation of Adversarial and Cooperative Point Clouds

Title Geometry-aware Generation of Adversarial and Cooperative Point Clouds
Authors Yuxin Wen, Jiehong Lin, Ke Chen, Kui Jia
Abstract Recent studies show that machine learning models are vulnerable to adversarial examples. In the 2D image domain, these examples are obtained by adding imperceptible noise to natural images. This paper studies adversarial generation of point clouds by learning to deform those approximating object surfaces of certain categories. As 2D manifolds embedded in the 3D Euclidean space, object surfaces enjoy the general properties of smoothness and fairness. We thus argue that, in order to achieve imperceptible surface shape deformations, adversarial point clouds should have the same properties, with degrees of smoothness/fairness similar to those of the benign ones, while also being close to the benign ones when measured under certain distance metrics of point clouds. To this end, we propose a novel loss function to account for imperceptible, geometry-aware deformations of point clouds, and use the proposed loss in an adversarial objective to attack representative models of point set classifiers. Experiments show that our proposed method achieves stronger attacks than existing methods, without introducing noticeable outliers or surface irregularities. In this work, we also investigate the opposite direction, which learns to deform point clouds of object surfaces in the same geometry-aware, but cooperative manner. Cooperatively generated point clouds are more favored by machine learning models in terms of improved classification confidence or accuracy. We present experiments verifying that our proposed objective succeeds in learning cooperative shape deformations.
Tasks
Published 2019-12-24
URL https://arxiv.org/abs/1912.11171v1
PDF https://arxiv.org/pdf/1912.11171v1.pdf
PWC https://paperswithcode.com/paper/geometry-aware-generation-of-adversarial-and-1
Repo
Framework
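
One of the "distance metrics of point clouds" mentioned above can be illustrated with a symmetric Chamfer distance; the sketch below is a generic numpy version for small clouds, not the paper's full geometry-aware loss (which additionally constrains smoothness/fairness).

```python
import numpy as np

def chamfer_distance(points_a, points_b):
    """Symmetric Chamfer distance between two (n, 3) point clouds: for each
    point, the squared distance to its nearest neighbour in the other cloud,
    averaged in both directions."""
    a = np.asarray(points_a)[:, None, :]   # (n, 1, 3)
    b = np.asarray(points_b)[None, :, :]   # (1, m, 3)
    d2 = np.sum((a - b) ** 2, axis=-1)     # (n, m) pairwise squared distances
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Toy usage: a cloud versus a slightly jittered copy of itself.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(256, 3))
print(chamfer_distance(cloud, cloud + 0.01 * rng.normal(size=cloud.shape)))
```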

Regression from Dependent Observations

Title Regression from Dependent Observations
Authors Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas
Abstract The standard linear and logistic regression models assume that the response variables are independent, but share the same linear relationship to their corresponding vectors of covariates. The assumption that the response variables are independent is, however, too strong. In many applications, these responses are collected on nodes of a network, or some spatial or temporal domain, and are dependent. Examples abound in financial and meteorological applications, and dependencies naturally arise in social networks through peer effects. Regression with dependent responses has thus received a lot of attention in the Statistics and Economics literature, but there are no strong consistency results unless multiple independent samples of the vectors of dependent responses can be collected from these models. We present computationally and statistically efficient methods for linear and logistic regression models when the response variables are dependent on a network. Given one sample from a networked linear or logistic regression model and under mild assumptions, we prove strong consistency results for recovering the vector of coefficients and the strength of the dependencies, recovering the rates of standard regression under independent observations. We use projected gradient descent on the negative log-likelihood, or negative log-pseudolikelihood, and establish their strong convexity and consistency using concentration of measure for dependent random variables.
Tasks
Published 2019-05-08
URL https://arxiv.org/abs/1905.03353v2
PDF https://arxiv.org/pdf/1905.03353v2.pdf
PWC https://paperswithcode.com/paper/190503353
Repo
Framework
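
The optimization routine named in the abstract, projected gradient descent, can be sketched generically: take a gradient step on the negative log-(pseudo)likelihood and project back onto the feasible parameter set. The L2-ball projection and the toy quadratic objective below are illustrative assumptions, not the paper's exact likelihood.

```python
import numpy as np

def project_l2_ball(theta, radius):
    """Project a parameter vector onto the L2 ball of the given radius."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def projected_gradient_descent(grad, theta0, radius, lr=0.1, steps=500):
    """Generic projected gradient descent: only a gradient oracle for the
    objective (e.g. a negative log-pseudolikelihood) is required."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta = project_l2_ball(theta - lr * grad(theta), radius)
    return theta

# Toy usage: minimize a strongly convex quadratic inside a unit ball.
target = np.array([3.0, -4.0])
grad = lambda th: th - target                 # gradient of 0.5 * ||th - target||^2
print(projected_gradient_descent(grad, np.zeros(2), radius=1.0))  # ~ (0.6, -0.8)
```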

Activitynet 2019 Task 3: Exploring Contexts for Dense Captioning Events in Videos

Title Activitynet 2019 Task 3: Exploring Contexts for Dense Captioning Events in Videos
Authors Shizhe Chen, Yuqing Song, Yida Zhao, Qin Jin, Zhaoyang Zeng, Bei Liu, Jianlong Fu, Alexander Hauptmann
Abstract Contextual reasoning is essential to understand events in long untrimmed videos. In this work, we systematically explore different captioning models with various contexts for the dense-captioning events in video task, which aims to generate captions for different events in the untrimmed video. We propose five types of contexts as well as two categories of event captioning models, and evaluate their contributions for event captioning from both accuracy and diversity aspects. The proposed captioning models are plugged into our pipeline system for the dense video captioning challenge. The overall system achieves the state-of-the-art performance on the dense-captioning events in video task with 9.91 METEOR score on the challenge testing set.
Tasks Dense Video Captioning, Video Captioning
Published 2019-07-11
URL https://arxiv.org/abs/1907.05092v1
PDF https://arxiv.org/pdf/1907.05092v1.pdf
PWC https://paperswithcode.com/paper/activitynet-2019-task-3-exploring-contexts
Repo
Framework

Contributed Discussion of “A Bayesian Conjugate Gradient Method”

Title Contributed Discussion of “A Bayesian Conjugate Gradient Method”
Authors Francois-Xavier Briol, Francisco A. Diaz De la O, Peter O. Hristov
Abstract We would like to congratulate the authors of “A Bayesian Conjugate Gradient Method” on their insightful paper, and welcome this publication which we firmly believe will become a fundamental contribution to the growing field of probabilistic numerical methods and in particular the sub-field of Bayesian numerical methods. In this short piece, which will be published as a comment alongside the main paper, we first initiate a discussion on the choice of priors for solving linear systems, then propose an extension of the Bayesian conjugate gradient (BayesCG) algorithm for solving several related linear systems simultaneously.
Tasks
Published 2019-08-08
URL https://arxiv.org/abs/1908.02964v1
PDF https://arxiv.org/pdf/1908.02964v1.pdf
PWC https://paperswithcode.com/paper/contributed-discussion-of-a-bayesian
Repo
Framework

A Neural Network for Semi-Supervised Learning on Manifolds

Title A Neural Network for Semi-Supervised Learning on Manifolds
Authors Alexander Genkin, Anirvan M. Sengupta, Dmitri Chklovskii
Abstract Semi-supervised learning algorithms typically construct a weighted graph of data points to represent a manifold. However, an explicit graph representation is problematic for neural networks operating in the online setting. Here, we propose a feed-forward neural network capable of semi-supervised learning on manifolds without using an explicit graph representation. Our algorithm uses channels that represent localities on the manifold such that correlations between channels represent manifold structure. The proposed neural network has two layers. The first layer learns to build a representation of low-dimensional manifolds in the input data as proposed recently in [8]. The second learns to classify data using both occasional supervision and similarity of the manifold representation of the data. The channel carrying label information for the second layer is assumed to be “silent” most of the time. Learning in both layers is Hebbian, making our network design biologically plausible. We experimentally demonstrate the effect of semi-supervised learning on non-trivial manifolds.
Tasks
Published 2019-08-21
URL https://arxiv.org/abs/1908.08145v1
PDF https://arxiv.org/pdf/1908.08145v1.pdf
PWC https://paperswithcode.com/paper/a-neural-network-for-semi-supervised-learning
Repo
Framework
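
The Hebbian learning mentioned above can be illustrated with a generic, biologically plausible update in which weights grow with the correlation between pre- and post-synaptic activity, stabilized by an Oja-style decay term. This is a standard single-layer sketch for intuition, not the paper's two-layer network or its supervision channel.

```python
import numpy as np

def hebbian_step(W, x, lr=0.01):
    """One Hebbian update: strengthen weights in proportion to the correlation
    of pre-synaptic input x and post-synaptic output y = W x, with an Oja-style
    decay term so the weights stay bounded."""
    y = W @ x
    return W + lr * (np.outer(y, x) - (y ** 2)[:, None] * W)

# Toy usage: a single output unit drifting toward the dominant input direction.
rng = np.random.default_rng(0)
W = rng.normal(size=(1, 5)) * 0.1
data = rng.normal(size=(2000, 5)) @ np.diag([3, 1, 1, 1, 1])  # first axis dominates
for x in data:
    W = hebbian_step(W, x)
print(np.round(W / np.linalg.norm(W), 2))  # roughly aligned with the first axis
```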