July 28, 2019

3521 words 17 mins read

Paper Group ANR 270



A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays

Title A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays
Authors Jens Grubert, Yuta Itoh, Kenneth Moser, J. Edward Swan II
Abstract Optical see-through head-mounted displays (OST HMDs) are a major output medium for Augmented Reality and have seen significant growth in popularity and usage among the general public due to the growing release of consumer-oriented models, such as the Microsoft HoloLens. Unlike Virtual Reality headsets, OST HMDs inherently support the addition of computer-generated graphics directly into the light path between a user’s eyes and their view of the physical world. As with most Augmented and Virtual Reality systems, the physical position of an OST HMD is typically determined by an external or embedded 6-Degree-of-Freedom tracking system. However, in order to properly render virtual objects that are perceived as spatially aligned with the physical environment, it is also necessary to accurately measure the position of the user’s eyes within the tracking system’s coordinate frame. For over 20 years, researchers have proposed various calibration methods to determine this needed eye position. However, to date, there has not been a comprehensive overview of these procedures and their requirements. Hence, this paper surveys the field of calibration methods for OST HMDs. Specifically, it provides insights into the fundamentals of calibration techniques, and presents an overview of both manual and automatic approaches, as well as evaluation methods and metrics. Finally, it also identifies opportunities for future research.
Tasks Calibration
Published 2017-09-13
URL http://arxiv.org/abs/1709.04299v1
PDF http://arxiv.org/pdf/1709.04299v1.pdf
PWC https://paperswithcode.com/paper/a-survey-of-calibration-methods-for-optical
Repo
Framework

Linking Tweets with Monolingual and Cross-Lingual News using Transformed Word Embeddings

Title Linking Tweets with Monolingual and Cross-Lingual News using Transformed Word Embeddings
Authors Aditya Mogadala, Dominik Jung, Achim Rettinger
Abstract Social media platforms have grown into an important medium for spreading information about events published by the traditional media, such as news articles. Grouping such diverse sources of information that discuss the same topic from varied perspectives provides new insights. But the gap in word usage between informal social media content such as tweets and diligently written content (e.g. news articles) makes such assembling difficult. In this paper, we propose a transformation framework to bridge the word usage gap between tweets and online news articles across languages by leveraging their word embeddings. Using our framework, word embeddings extracted from tweets and news articles are aligned closer to each other across languages, thus facilitating the identification of similarity between news articles and tweets. Experimental results show a notable improvement over baselines for monolingual tweet and news article comparison, while new findings are reported for cross-lingual comparison.
Tasks Word Embeddings
Published 2017-10-25
URL http://arxiv.org/abs/1710.09137v1
PDF http://arxiv.org/pdf/1710.09137v1.pdf
PWC https://paperswithcode.com/paper/linking-tweets-with-monolingual-and-cross
Repo
Framework
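The core idea of aligning two embedding spaces can be sketched as a learned linear transformation fitted on anchor word pairs. This is a minimal least-squares sketch, not the paper's exact transformation framework; the toy data (a random rotation standing in for the tweet-vs-news gap) is an assumption for illustration.

```python
import numpy as np

def fit_linear_map(src, tgt):
    """Least-squares linear map W such that src @ W ≈ tgt.

    src, tgt: (n_pairs, dim) arrays of embeddings for anchor
    word pairs shared between the two corpora."""
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return W

# Toy demo: the target space is a rotation of the source space.
rng = np.random.default_rng(0)
dim = 4
src = rng.normal(size=(50, dim))
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # random rotation
tgt = src @ Q

W = fit_linear_map(src, tgt)
# A held-out vector maps close to its counterpart in the target space.
x = rng.normal(size=dim)
err = float(np.linalg.norm(x @ W - x @ Q))
print(round(err, 6))
```

Once the map is fitted, cosine similarity between a transformed tweet embedding and a news embedding becomes a meaningful linking score.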

Automatic Estimation of Fetal Abdominal Circumference from Ultrasound Images

Title Automatic Estimation of Fetal Abdominal Circumference from Ultrasound Images
Authors Jaeseong Jang, Yejin Park, Bukweon Kim, Sung Min Lee, Ja-Young Kwon, Jin Keun Seo
Abstract Ultrasound diagnosis is routinely used in obstetrics and gynecology for fetal biometry, and owing to its time-consuming process, there has been a great demand for automatic estimation. However, the automated analysis of ultrasound images is complicated because they are patient-specific, operator-dependent, and machine-specific. Among various types of fetal biometry, the accurate estimation of abdominal circumference (AC) is especially difficult to perform automatically because the abdomen has low contrast against its surroundings, non-uniform contrast, and an irregular shape compared to other parameters. We propose a method for the automatic estimation of the fetal AC from 2D ultrasound data through a specially designed convolutional neural network (CNN), which takes into account doctors’ decision process, anatomical structure, and the characteristics of the ultrasound image. The proposed method uses a CNN to classify ultrasound images (stomach bubble, amniotic fluid, and umbilical vein) and a Hough transformation for measuring AC. We test the proposed method using clinical ultrasound data acquired from 56 pregnant women. Experimental results show that, with relatively small training samples, the proposed CNN provides sufficient classification results for AC estimation through the Hough transformation. The proposed method automatically estimates AC from ultrasound images. The method is quantitatively evaluated, and shows stable performance in most cases, even for ultrasound images deteriorated by shadowing artifacts. As a result of experiments for our acceptance check, the accuracies are 0.809 and 0.771 with respect to expert 1 and expert 2, respectively, while the accuracy between the two experts is 0.905. However, for cases of an oversized fetus, when the amniotic fluid is not observed, or when the abdominal area is distorted, the method could not correctly estimate AC.
Tasks
Published 2017-02-09
URL http://arxiv.org/abs/1702.02741v2
PDF http://arxiv.org/pdf/1702.02741v2.pdf
PWC https://paperswithcode.com/paper/automatic-estimation-of-fetal-abdominal
Repo
Framework
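The Hough step in the pipeline above amounts to voting for the circle that best explains the classified boundary pixels. This is a deliberately crude sketch over a small grid of candidate centers and radii (the grid, tolerance, and toy boundary points are assumptions, not the paper's implementation):

```python
import numpy as np

def hough_circle(points, centers, radii, tol=1.0):
    """Crude Hough vote: count points within tol of each candidate
    circle, return the (cx, cy, r) with the most votes."""
    best, best_votes = None, -1
    for cx, cy in centers:
        d = np.hypot(points[:, 0] - cx, points[:, 1] - cy)
        for r in radii:
            votes = int(np.sum(np.abs(d - r) < tol))
            if votes > best_votes:
                best, best_votes = (cx, cy, r), votes
    return best

# Toy "abdomen boundary": points on a circle of radius 20 at (5, -5).
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.stack([5 + 20 * np.cos(theta), -5 + 20 * np.sin(theta)], axis=1)

centers = [(x, y) for x in range(-10, 11, 5) for y in range(-10, 11, 5)]
radii = range(10, 31, 5)
cx, cy, r = hough_circle(pts, centers, radii)
circumference = 2 * np.pi * r       # AC estimate follows from the radius
print(cx, cy, r, round(circumference, 1))
```

In practice the accumulator would be much finer and the points would come from the CNN's per-region classification, but the vote-then-argmax structure is the same.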

Adaptive Regularization of Some Inverse Problems in Image Analysis

Title Adaptive Regularization of Some Inverse Problems in Image Analysis
Authors Byung-Woo Hong, Ja-Keoung Koo, Martin Burger, Stefano Soatto
Abstract We present an adaptive regularization scheme for optimizing composite energy functionals arising in image analysis problems. The scheme automatically trades off data fidelity and regularization depending on the current data fit during the iterative optimization, so that regularization is strongest initially, and wanes as data fidelity improves, with the weight of the regularizer being minimized at convergence. We also introduce the use of a Huber loss function in both data fidelity and regularization terms, and present an efficient convex optimization algorithm based on the alternating direction method of multipliers (ADMM) using the equivalent relation between the Huber function and the proximal operator of the one-norm. We illustrate and validate our adaptive Huber-Huber model on synthetic and real images in segmentation, motion estimation, and denoising problems.
Tasks Denoising, Motion Estimation
Published 2017-05-09
URL http://arxiv.org/abs/1705.03350v1
PDF http://arxiv.org/pdf/1705.03350v1.pdf
PWC https://paperswithcode.com/paper/adaptive-regularization-of-some-inverse
Repo
Framework
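The equivalence the abstract relies on is that the Huber function is the Moreau envelope of the one-norm, whose proximal operator is soft-thresholding; that is what makes the ADMM subproblems cheap. A small numerical check of this identity (the threshold value is an arbitrary choice for illustration):

```python
import numpy as np

def huber(x, mu):
    """Huber function with threshold mu."""
    ax = np.abs(x)
    return np.where(ax <= mu, x**2 / (2 * mu), ax - mu / 2)

def soft_threshold(x, mu):
    """Prox of mu*|.|: minimizes |z| + (x - z)^2 / (2*mu) over z."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

# Identity: huber_mu(x) = |z*| + (x - z*)^2 / (2*mu),
# where z* = soft_threshold(x, mu) is the prox point.
mu = 0.5
x = np.linspace(-3, 3, 13)
z = soft_threshold(x, mu)
envelope = np.abs(z) + (x - z) ** 2 / (2 * mu)
print(bool(np.allclose(envelope, huber(x, mu))))  # True
```

The adaptive part of the paper then reweights the data-fidelity and regularization terms during the iterations; the prox computation itself stays as above.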

Subset Selection for Multiple Linear Regression via Optimization

Title Subset Selection for Multiple Linear Regression via Optimization
Authors Young Woong Park, Diego Klabjan
Abstract Subset selection in multiple linear regression aims to choose a subset of candidate explanatory variables that trades off fitting error (explanatory power) against model complexity (number of variables selected). We build mathematical programming models for regression subset selection based on mean square and absolute errors, and minimal-redundancy-maximal-relevance criteria. The proposed models are tested using a linear-program-based branch-and-bound algorithm with tailored valid inequalities and big M values, and are compared against the algorithms in the literature. For high-dimensional cases, an iterative heuristic algorithm is proposed based on the mathematical programming models and a core set concept, and a randomized version of the algorithm is derived to guarantee convergence to the global optimum. From the computational experiments, we find that our models quickly find a quality solution while the rest of the time is spent proving optimality; the iterative algorithms find solutions in a relatively short time and are competitive with state-of-the-art algorithms; and using ad-hoc big M values is not recommended.
Tasks
Published 2017-01-27
URL https://arxiv.org/abs/1701.07920v2
PDF https://arxiv.org/pdf/1701.07920v2.pdf
PWC https://paperswithcode.com/paper/subset-selection-for-multiple-linear
Repo
Framework
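The objective being optimized can be seen in miniature with brute-force enumeration: for a fixed subset size, pick the columns minimizing the squared error. The paper solves this via branch-and-bound MIP models rather than enumeration; this exhaustive sketch (with invented toy data) only demonstrates the objective itself.

```python
import itertools
import numpy as np

def best_subset(X, y, k):
    """Exhaustively pick the k-variable subset minimizing squared error."""
    best_sse, best_cols = np.inf, None
    for cols in itertools.combinations(range(X.shape[1]), k):
        Xs = X[:, cols]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        sse = float(np.sum((y - Xs @ beta) ** 2))
        if sse < best_sse:
            best_sse, best_cols = sse, cols
    return best_cols, best_sse

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))
y = 3 * X[:, 1] - 2 * X[:, 4]          # only columns 1 and 4 matter
cols, sse = best_subset(X, y, k=2)
print(cols, round(sse, 6))             # the true support, near-zero error
```

Enumeration is exponential in the number of candidates, which is exactly why the paper's branch-and-bound formulation and core-set heuristic matter at high dimension.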

Goal Conflict in Designing an Autonomous Artificial System

Title Goal Conflict in Designing an Autonomous Artificial System
Authors Mark Muraven
Abstract Research on human self-regulation has shown that people hold many goals simultaneously and have complex self-regulation mechanisms to deal with this goal conflict. Artificial autonomous systems may also need to find ways to cope with conflicting goals. Indeed, the intricate interplay among different goals may be critical to the design as well as the long-term safety and stability of artificial autonomous systems. I discuss some of the critical features of the human self-regulation system and how it might be applied to an artificial system. Furthermore, the implications of goal conflict for the reliability and stability of artificial autonomous systems, and for ensuring their alignment with human goals and ethics, are examined.
Tasks
Published 2017-03-18
URL http://arxiv.org/abs/1703.06354v1
PDF http://arxiv.org/pdf/1703.06354v1.pdf
PWC https://paperswithcode.com/paper/goal-conflict-in-designing-an-autonomous
Repo
Framework

Automating Carotid Intima-Media Thickness Video Interpretation with Convolutional Neural Networks

Title Automating Carotid Intima-Media Thickness Video Interpretation with Convolutional Neural Networks
Authors Jae Y. Shin, Nima Tajbakhsh, R. Todd Hurst, Christopher B. Kendall, Jianming Liang
Abstract Cardiovascular disease (CVD) is the leading cause of mortality yet is largely preventable; the key to prevention is to identify at-risk individuals before adverse events occur. For predicting individual CVD risk, carotid intima-media thickness (CIMT), a noninvasive ultrasound method, has proven valuable, offering several advantages over the CT coronary artery calcium score. However, each CIMT examination includes several ultrasound videos, and interpreting each of these CIMT videos involves three operations: (1) select three end-diastolic ultrasound frames (EUF) in the video, (2) localize a region of interest (ROI) in each selected frame, and (3) trace the lumen-intima interface and the media-adventitia interface in each ROI to measure CIMT. These operations are tedious, laborious, and time-consuming, a serious limitation that hinders the widespread utilization of CIMT in clinical practice. To overcome this limitation, this paper presents a new system to automate CIMT video interpretation. Our extensive experiments demonstrate that the suggested system significantly outperforms the state-of-the-art methods. The superior performance is attributable to our unified framework based on convolutional neural networks (CNNs) coupled with our informative image representation and effective post-processing of the CNN outputs, which are uniquely designed for each of the above three operations.
Tasks
Published 2017-06-02
URL http://arxiv.org/abs/1706.00719v1
PDF http://arxiv.org/pdf/1706.00719v1.pdf
PWC https://paperswithcode.com/paper/automating-carotid-intima-media-thickness
Repo
Framework

Weakly-supervised learning of visual relations

Title Weakly-supervised learning of visual relations
Authors Julia Peyre, Ivan Laptev, Cordelia Schmid, Josef Sivic
Abstract This paper introduces a novel approach for modeling visual relations between pairs of objects. We call a relation a triplet of the form (subject, predicate, object) where the predicate is typically a preposition (e.g. ‘under’, ‘in front of’) or a verb (‘hold’, ‘ride’) that links a pair of objects (subject, object). Learning such relations is challenging as the objects have different spatial configurations and appearances depending on the relation in which they occur. Another major challenge comes from the difficulty of obtaining annotations, especially at box-level, for all possible triplets, which makes both learning and evaluation difficult. The contributions of this paper are threefold. First, we design strong yet flexible visual features that encode the appearance and spatial configuration for pairs of objects. Second, we propose a weakly-supervised discriminative clustering model to learn relations from image-level labels only. Third, we introduce a new challenging dataset of unusual relations (UnRel) together with an exhaustive annotation, which enables accurate evaluation of visual relation retrieval. We show experimentally that our model achieves state-of-the-art results on the visual relationship dataset, significantly improving performance on previously unseen relations (zero-shot learning), and confirm this observation on our newly introduced UnRel dataset.
Tasks Zero-Shot Learning
Published 2017-07-29
URL http://arxiv.org/abs/1707.09472v1
PDF http://arxiv.org/pdf/1707.09472v1.pdf
PWC https://paperswithcode.com/paper/weakly-supervised-learning-of-visual
Repo
Framework

Learning mutational graphs of individual tumour evolution from single-cell and multi-region sequencing data

Title Learning mutational graphs of individual tumour evolution from single-cell and multi-region sequencing data
Authors Daniele Ramazzotti, Alex Graudenzi, Luca De Sano, Marco Antoniotti, Giulio Caravagna
Abstract Background. A large number of algorithms are being developed to reconstruct evolutionary models of individual tumours from genome sequencing data. Most methods can analyze multiple samples collected either through bulk multi-region sequencing experiments or the sequencing of individual cancer cells. However, the same method can rarely support both data types. Results. We introduce TRaIT, a computational framework to infer mutational graphs that model the accumulation of multiple types of somatic alterations driving tumour evolution. Compared to other tools, TRaIT supports multi-region and single-cell sequencing data within the same statistical framework, and delivers expressive models that capture many complex evolutionary phenomena. TRaIT improves accuracy, robustness to data-specific errors, and computational complexity compared to competing methods. Conclusions. We show that the application of TRaIT to single-cell and multi-region cancer datasets can produce accurate and reliable models of single-tumour evolution, quantify the extent of intra-tumour heterogeneity, and generate new testable experimental hypotheses.
Tasks
Published 2017-09-04
URL http://arxiv.org/abs/1709.01076v2
PDF http://arxiv.org/pdf/1709.01076v2.pdf
PWC https://paperswithcode.com/paper/learning-mutational-graphs-of-individual
Repo
Framework

Towards Recovery of Conditional Vectors from Conditional Generative Adversarial Networks

Title Towards Recovery of Conditional Vectors from Conditional Generative Adversarial Networks
Authors Sihao Ding, Andreas Wallin
Abstract A conditional Generative Adversarial Network allows for generating samples conditioned on certain external information. Being able to recover latent and conditional vectors from a conditional GAN can be potentially valuable in various applications, ranging from image manipulation for entertainment purposes to diagnosis of the neural networks for security purposes. In this work, we show that it is possible to recover both latent and conditional vectors from generated images given the generator of a conditional generative adversarial network. Such a recovery is not trivial due to the often multi-layered non-linearity of deep neural networks. Furthermore, the effect of such recovery applied to real natural images is investigated. We discovered that there exists a gap between the recovery performance on generated and real images, which we believe comes from the difference between the generated data distribution and the real data distribution. Experiments are conducted to evaluate the recovered conditional vectors and the images reconstructed from these recovered vectors quantitatively and qualitatively, showing promising results.
Tasks
Published 2017-12-06
URL http://arxiv.org/abs/1712.01833v1
PDF http://arxiv.org/pdf/1712.01833v1.pdf
PWC https://paperswithcode.com/paper/towards-recovery-of-conditional-vectors-from
Repo
Framework
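Optimization-based inversion of a generator can be sketched as gradient descent on the reconstruction error between the generated image and the target. Real conditional generators are nonlinear multi-layer networks, so this linear toy "generator" only shows the shape of the procedure, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(16, 4))            # toy linear "generator": img = A @ z
z_true = rng.normal(size=4)
img = A @ z_true

# Recover z by minimizing ||A z - img||^2 with gradient descent.
lmax = np.linalg.norm(A.T @ A, 2)       # Lipschitz constant of the gradient
z = np.zeros(4)
for _ in range(2000):
    z -= (0.5 / lmax) * 2 * A.T @ (A @ z - img)

err = float(np.linalg.norm(z - z_true))
print(round(err, 6))
```

For a conditional GAN one would optimize the latent vector and the conditional vector jointly (or project the conditional part onto valid one-hot codes); the gap the authors observe on real images corresponds to targets that lie outside the generator's range, where this residual cannot be driven to zero.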

Unassisted Quantitative Evaluation Of Despeckling Filters

Title Unassisted Quantitative Evaluation Of Despeckling Filters
Authors Luis Gomez, Raydonal Ospina, Alejandro C. Frery
Abstract SAR (Synthetic Aperture Radar) imaging plays a central role in Remote Sensing due to, among other important features, its ability to provide high-resolution, day-and-night and almost weather-independent images. SAR images are affected by a granular contamination, speckle, which can be described by a multiplicative model. Many despeckling techniques have been proposed in the literature, as well as measures of the quality of the results they provide. Assuming the multiplicative model, the observed image $Z$ is the product of two independent fields: the backscatter $X$ and the speckle $Y$. The result of any speckle filter is $\widehat X$, an estimator of the backscatter $X$, based solely on the observed data $Z$. An ideal estimator would be one for which the ratio of the observed image to the filtered one, $I=Z/\widehat X$, is only speckle: a collection of independent identically distributed samples from Gamma variates. We then assess the quality of a filter by the closeness of $I$ to the hypothesis that it adheres to the statistical properties of pure speckle. We analyze filters through the ratio image they produce with regard to first- and second-order statistics: the former checks marginal properties, while the latter verifies lack of structure. A new quantitative image-quality index is then defined, and applied to state-of-the-art despeckling filters. This new measure provides results consistent with commonly used quality measures (equivalent number of looks, PSNR, MSSIM, $\beta$ edge correlation, and preservation of the mean), and ranks the filter results in agreement with their visual analysis. We conclude our study by showing that the proposed measure can be successfully used to optimize the (often many) parameters that define a speckle filter.
Tasks
Published 2017-04-19
URL http://arxiv.org/abs/1704.05952v1
PDF http://arxiv.org/pdf/1704.05952v1.pdf
PWC https://paperswithcode.com/paper/unassisted-quantitative-evaluation-of
Repo
Framework

Look into Person: Self-supervised Structure-sensitive Learning and A New Benchmark for Human Parsing

Title Look into Person: Self-supervised Structure-sensitive Learning and A New Benchmark for Human Parsing
Authors Ke Gong, Xiaodan Liang, Dongyu Zhang, Xiaohui Shen, Liang Lin
Abstract Human parsing has recently attracted a great deal of research interest due to its huge application potential. However, existing datasets have a limited number of images and annotations, and lack variety in human appearances and coverage of challenging cases in unconstrained environments. In this paper, we introduce a new benchmark, “Look into Person (LIP)”, that makes a significant advance in terms of scalability, diversity and difficulty, a contribution that we feel is crucial for future developments in human-centric analysis. This comprehensive dataset contains over 50,000 elaborately annotated images with 19 semantic part labels, captured from a wider range of viewpoints, occlusions and background complexity. Given these rich annotations, we perform detailed analyses of the leading human parsing approaches, gaining insights into the successes and failures of these methods. Furthermore, in contrast to the existing efforts on improving feature discriminative capability, we solve human parsing by exploring a novel self-supervised structure-sensitive learning approach, which imposes human pose structures on parsing results without resorting to extra supervision (i.e., no need for specifically labeling human joints in model training). Our self-supervised learning framework can be injected into any advanced neural network to help incorporate rich high-level knowledge regarding human joints from a global perspective and improve the parsing results. Extensive evaluations on our LIP and the public PASCAL-Person-Part dataset demonstrate the superiority of our method.
Tasks Human Parsing, Semantic Segmentation
Published 2017-03-16
URL http://arxiv.org/abs/1703.05446v2
PDF http://arxiv.org/pdf/1703.05446v2.pdf
PWC https://paperswithcode.com/paper/look-into-person-self-supervised-structure
Repo
Framework

A probabilistic approach to emission-line galaxy classification

Title A probabilistic approach to emission-line galaxy classification
Authors R. S. de Souza, M. L. L. Dantas, M. V. Costa-Duarte, E. D. Feigelson, M. Killedar, P. -Y. Lablanche, R. Vilalta, A. Krone-Martins, R. Beck, F. Gieseke
Abstract We invoke a Gaussian mixture model (GMM) to jointly analyse two traditional emission-line classification schemes of galaxy ionization sources: the Baldwin-Phillips-Terlevich (BPT) and $\rm W_{H\alpha}$ vs. [NII]/H$\alpha$ (WHAN) diagrams, using spectroscopic data from the Sloan Digital Sky Survey Data Release 7 and SEAGal/STARLIGHT datasets. We apply a GMM to empirically define classes of galaxies in a three-dimensional space spanned by the optical parameters $\log$ [OIII]/H$\beta$, $\log$ [NII]/H$\alpha$, and $\log$ EW(H${\alpha}$). The best-fit GMM, based on several statistical criteria, suggests a solution around four Gaussian components (GCs), which are capable of explaining up to 97 per cent of the data variance. Using elements of information theory, we compare each GC to its respective astronomical counterpart. GC1 and GC4 are associated with star-forming galaxies, suggesting the need to define a new starburst subgroup. GC2 is associated with BPT’s Active Galactic Nuclei (AGN) class and WHAN’s weak AGN class. GC3 is associated with BPT’s composite class and WHAN’s strong AGN class. Conversely, there is no statistical evidence – based on four GCs – for the existence of a Seyfert/LINER dichotomy in our sample. Notwithstanding, the inclusion of an additional GC5 unravels it. GC5 appears associated with the LINER and passive galaxies on the BPT and WHAN diagrams, respectively. Subtleties aside, we demonstrate the potential of our methodology to recover and unravel different objects inside the wilderness of astronomical datasets, without sacrificing the ability to convey physically interpretable results. The probabilistic classifications from the GMM analysis are publicly available within the COINtoolbox (https://cointoolbox.github.io/GMM_Catalogue/).
Tasks
Published 2017-03-22
URL http://arxiv.org/abs/1703.07607v2
PDF http://arxiv.org/pdf/1703.07607v2.pdf
PWC https://paperswithcode.com/paper/a-probabilistic-approach-to-emission-line
Repo
Framework
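The fit-then-select pattern the abstract describes — GMMs of varying component counts compared under statistical criteria — can be sketched with scikit-learn's `GaussianMixture` and BIC. The two synthetic clusters below stand in for the 3-D line-ratio space; the paper's actual data and criteria are richer than this toy.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in for the 3-D line-ratio space: two well-separated classes.
rng = np.random.default_rng(0)
a = rng.normal(loc=[0, 0, 0], scale=0.3, size=(300, 3))
b = rng.normal(loc=[2, 2, 2], scale=0.3, size=(300, 3))
X = np.vstack([a, b])

# Fit GMMs with 1..4 components and keep the BIC-minimizing model.
fits = [GaussianMixture(n_components=k, random_state=0).fit(X)
        for k in range(1, 5)]
best = min(fits, key=lambda m: m.bic(X))
print(best.n_components)   # BIC should recover the 2 generating components
```

`best.predict_proba(X)` then gives exactly the kind of probabilistic class membership the COINtoolbox catalogue publishes, rather than hard diagram-based cuts.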

High-dimensional robust regression and outliers detection with SLOPE

Title High-dimensional robust regression and outliers detection with SLOPE
Authors Alain Virouleau, Agathe Guilloux, Stéphane Gaïffas, Malgorzata Bogdan
Abstract The problems of outlier detection and robust regression in a high-dimensional setting are fundamental in statistics, and have numerous applications. Following a recent set of works providing methods for simultaneous robust regression and outlier detection, we consider in this paper a model of linear regression with individual intercepts, in a high-dimensional setting. We introduce a new procedure for simultaneous estimation of the linear regression coefficients and intercepts, using two dedicated sorted-$\ell_1$ penalizations, also called SLOPE. We develop a complete theory for this problem: first, we provide sharp upper bounds on the statistical estimation error of both the vector of individual intercepts and the regression coefficients. Second, we give an asymptotic control on the False Discovery Rate (FDR) and statistical power for support selection of the individual intercepts. As a consequence, this paper is the first to introduce a procedure with guaranteed FDR and statistical power control for outlier detection under the mean-shift model. Numerical illustrations, with a comparison to recent alternative approaches, are provided on both simulated and several real-world datasets. Experiments are conducted using open-source software written in Python and C++.
Tasks
Published 2017-12-07
URL http://arxiv.org/abs/1712.02640v1
PDF http://arxiv.org/pdf/1712.02640v1.pdf
PWC https://paperswithcode.com/paper/high-dimensional-robust-regression-and
Repo
Framework
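The sorted-$\ell_1$ (SLOPE) penalty the procedure uses pairs the largest coefficient magnitudes with the largest regularization weights. A minimal sketch of the penalty itself (the coefficient and weight values are arbitrary examples, not from the paper):

```python
import numpy as np

def slope_penalty(beta, lam):
    """Sorted-l1 (SLOPE) norm: sum_i lam_i * |beta|_(i),
    where lam is nonincreasing and |beta|_(i) is the i-th largest
    magnitude of beta."""
    lam = np.asarray(lam, dtype=float)
    assert np.all(np.diff(lam) <= 0), "lambda must be nonincreasing"
    mags = np.sort(np.abs(beta))[::-1]
    return float(np.sum(lam * mags))

beta = np.array([0.5, -3.0, 1.0])
lam = np.array([3.0, 2.0, 1.0])
# Largest |beta| gets the largest weight: 3*3 + 2*1 + 1*0.5 = 11.5
print(slope_penalty(beta, lam))
```

Because larger entries are penalized more, SLOPE adapts the shrinkage to the ranking of the coefficients, which is what underlies the FDR control for the individual intercepts (and hence the outlier flags) in the mean-shift model.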

Recruiting from the network: discovering Twitter users who can help combat Zika epidemics

Title Recruiting from the network: discovering Twitter users who can help combat Zika epidemics
Authors Paolo Missier, Callum McClean, Jonathan Carlton, Diego Cedrim, Leonardo Silva, Alessandro Garcia, Alexandre Plastino, Alexander Romanovsky
Abstract Tropical diseases like \textit{Chikungunya} and \textit{Zika} have come to prominence in recent years as the cause of serious, long-lasting, population-wide health problems. In large countries like Brazil, traditional disease prevention programs led by health authorities have not been particularly effective. We explore the hypothesis that monitoring and analysis of social media content streams may effectively complement such efforts. Specifically, we aim to identify selected members of the public who are likely to be receptive to virus combat initiatives organised in local communities. Focusing on Twitter and on the topic of Zika, our approach involves (i) training a classifier to select topic-relevant tweets from the Twitter feed, and (ii) discovering the top users who are actively posting relevant content about the topic. We may then recommend these users as the prime candidates for direct engagement within their community. In this short paper we describe our analytical approach and prototype architecture, discuss the challenges of dealing with a noisy and sparse signal, and present encouraging preliminary results.
Tasks
Published 2017-03-11
URL http://arxiv.org/abs/1703.03928v1
PDF http://arxiv.org/pdf/1703.03928v1.pdf
PWC https://paperswithcode.com/paper/recruiting-from-the-network-discovering
Repo
Framework