Paper Group ANR 93
SGP: Spotting Groups Polluting the Online Political Discourse. Sequential Neural Processes. Atrial Scar Quantification via Multi-scale CNN in the Graph-cuts Framework. Toward Automatic Threat Recognition for Airport X-ray Baggage Screening with Deep Convolutional Object Detection. Black-Box Inference for Non-Linear Latent Force Models. A Natural-language-based Visual Query Approach of Uncertain Human Trajectories. Learning Directed Graphical Models from Gaussian Data. Multi-resolution Multi-task Gaussian Processes. CAiRE_HKUST at SemEval-2019 Task 3: Hierarchical Attention for Dialogue Emotion Classification. Dispersion Characterization and Pulse Prediction with Machine Learning. Deep Compositional Spatial Models. SDM-NET: Deep Generative Network for Structured Deformable Mesh. Standardized Assessment of Automatic Segmentation of White Matter Hyperintensities and Results of the WMH Segmentation Challenge. Aggregated Deep Local Features for Remote Sensing Image Retrieval. Identifying collaborators in large codebases.
SGP: Spotting Groups Polluting the Online Political Discourse
Title | SGP: Spotting Groups Polluting the Online Political Discourse |
Authors | Junhao Wang, Sacha Levy, Ren Wang, Aayushi Kulshrestha, Reihaneh Rabbany |
Abstract | Social media sites are becoming a key factor in politics. These platforms are easy to manipulate for the purpose of distorting the information space to confuse and distract voters. It is of paramount importance for social media platforms, users engaged in online political discussions, and government agencies to understand the dynamics on social media, and to identify malicious groups engaging in misinformation campaigns and thus polluting the general discourse around a topic of interest. Past work on identifying such disruptive patterns has mostly focused on analyzing user-generated content such as tweets. In this study, we take a holistic approach and propose SGP to provide an informative bird's-eye view of all the activities in these social media sites around a broad topic and to detect coordinated groups suspected of engaging in misinformation campaigns. To show the effectiveness of SGP, we deploy it to provide a concise overview of polluting activity on Twitter around the upcoming 2019 Canadian Federal Elections, by analyzing over 60 thousand user accounts connected through 3.4 million connections and 1.3 million hashtags. Users in the polluting groups detected by SGP-flag are over 4x more likely to be suspended, while the majority of these highly suspicious users detected by SGP-flag escaped Twitter’s suspension algorithm. Moreover, while a few of the polluting hashtags detected are linked to misinformation campaigns, SGP-sig also flags others that have not been picked up on. More importantly, we also show that a large coordinated set of right-wing conservative groups based in the US is heavily engaged in Canadian politics. |
Tasks | |
Published | 2019-10-16 |
URL | https://arxiv.org/abs/1910.07130v4 |
https://arxiv.org/pdf/1910.07130v4.pdf | |
PWC | https://paperswithcode.com/paper/sgp-spotting-groups-polluting-the-online |
Repo | |
Framework | |
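A rough illustration of the kind of signal such systems look for: accounts that post nearly identical sets of hashtags can be grouped by clustering their hashtag-usage vectors. This is not the SGP algorithm from the paper, only a minimal stand-in; the account names and hashtags below are invented.

```python
# Illustrative sketch only: clustering accounts by shared hashtag usage to surface
# suspiciously coordinated groups. This is NOT the SGP method; data is synthetic.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Each "document" is the space-joined list of hashtags one account used.
accounts = {
    "user_a": "cdnpoli elxn43 trudeaumustgo",
    "user_b": "cdnpoli elxn43 trudeaumustgo",
    "user_c": "cdnpoli elxn43 trudeaumustgo",
    "user_d": "hockey weather toronto",
    "user_e": "recipes gardening cats",
}
names = list(accounts)
X = TfidfVectorizer().fit_transform(accounts[n] for n in names)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for label in set(kmeans.labels_):
    idx = np.where(kmeans.labels_ == label)[0]
    # Mean pairwise cosine similarity within the cluster: values near 1.0 mean
    # near-identical hashtag usage, a weak signal of possible coordination.
    sim = cosine_similarity(X[idx])
    score = (sim.sum() - len(idx)) / max(len(idx) * (len(idx) - 1), 1)
    print(label, [names[i] for i in idx], round(float(score), 2))
```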
Sequential Neural Processes
Title | Sequential Neural Processes |
Authors | Gautam Singh, Jaesik Yoon, Youngsung Son, Sungjin Ahn |
Abstract | Neural Processes combine the strengths of neural networks and Gaussian processes to achieve both flexible learning and fast prediction in stochastic processes. However, a large class of problems comprises underlying temporal dependency structures in a sequence of stochastic processes that Neural Processes (NP) do not explicitly consider. In this paper, we propose Sequential Neural Processes (SNP), which incorporate a temporal state-transition model of stochastic processes and thus extend NP modeling capabilities to dynamic stochastic processes. In applying SNP to dynamic 3D scene modeling, we introduce Temporal Generative Query Networks. To our knowledge, this is the first 4D model that can deal with the temporal dynamics of 3D scenes. In experiments, we evaluate the proposed methods on dynamic (non-stationary) regression and 4D scene inference and rendering. |
Tasks | Gaussian Processes |
Published | 2019-06-24 |
URL | https://arxiv.org/abs/1906.10264v4 |
https://arxiv.org/pdf/1906.10264v4.pdf | |
PWC | https://paperswithcode.com/paper/sequential-neural-processes |
Repo | |
Framework | |
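A heavily simplified sketch of the core idea: a Neural-Process-style context encoder combined with a recurrent latent-state transition, so information from earlier stochastic processes carries forward in time. The real SNP is a latent-variable model trained with variational inference; this deterministic toy (with made-up dimensions and data) only illustrates the encode-transition-decode structure.

```python
# Minimal sketch, not the authors' model: context encoder + GRU state transition + decoder.
import torch
import torch.nn as nn

class TinySNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=32, z_dim=32):
        super().__init__()
        # Encodes each (x, y) context pair into a representation r.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(), nn.Linear(r_dim, r_dim))
        # Temporal transition of the latent state, driven by the aggregated context.
        self.transition = nn.GRUCell(r_dim, z_dim)
        # Decodes (x_target, z_t) into a predicted y.
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + z_dim, r_dim), nn.ReLU(), nn.Linear(r_dim, y_dim))

    def forward(self, context_seq, x_target_seq):
        # context_seq: list over time of (x_ctx [N, x_dim], y_ctx [N, y_dim])
        # x_target_seq: list over time of x_target [M, x_dim]
        z = torch.zeros(1, self.transition.hidden_size)
        preds = []
        for (x_ctx, y_ctx), x_tgt in zip(context_seq, x_target_seq):
            r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(0, keepdim=True)
            z = self.transition(r, z)                 # state carries information forward
            z_rep = z.expand(x_tgt.size(0), -1)
            preds.append(self.decoder(torch.cat([x_tgt, z_rep], dim=-1)))
        return preds

# Toy usage on a drifting sine wave.
model = TinySNP()
ctx = []
for t in range(3):
    x = torch.rand(5, 1)
    ctx.append((x, torch.sin(6.0 * x + 0.1 * t)))
tgt = [torch.linspace(0, 1, 7).unsqueeze(-1) for _ in range(3)]
print([p.shape for p in model(ctx, tgt)])
```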
Atrial Scar Quantification via Multi-scale CNN in the Graph-cuts Framework
Title | Atrial Scar Quantification via Multi-scale CNN in the Graph-cuts Framework |
Authors | Lei Li, Fuping Wu, Guang Yang, Lingchao Xu, Tom Wong, Raad Mohiaddin, David Firmin, Jennifer Keegan, Xiahai Zhuang |
Abstract | Late gadolinium enhancement magnetic resonance imaging (LGE MRI) appears to be a promising alternative for scar assessment in patients with atrial fibrillation (AF). Automating the quantification and analysis of atrial scars can be challenging due to the low image quality. In this work, we propose a fully automated method based on the graph-cuts framework, where the potentials of the graph are learned on a surface mesh of the left atrium (LA) using a multi-scale convolutional neural network (MS-CNN). For validation, we have employed fifty-eight images with manual delineations. MS-CNN, which can efficiently incorporate both the local and global texture information of the images, has been shown to clearly improve the segmentation accuracy of the proposed graph-cuts based method. The segmentation can be further improved when the contributions of the t-link and n-link weights of the graph are balanced. The proposed method achieves a mean accuracy of 0.856 ± 0.033 and a mean Dice score of 0.702 ± 0.071 for LA scar quantification. Compared with conventional methods, which are based on manual delineation of the LA for initialization, our method is fully automatic and demonstrates significantly better Dice score and accuracy (p < 0.01). The method is promising and can be useful in the diagnosis and prognosis of AF. |
Tasks | |
Published | 2019-02-21 |
URL | http://arxiv.org/abs/1902.07877v1 |
http://arxiv.org/pdf/1902.07877v1.pdf | |
PWC | https://paperswithcode.com/paper/atrial-scar-quantification-via-multi-scale |
Repo | |
Framework | |
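A minimal sketch of the graph-cuts construction mentioned in the abstract: per-node probabilities (hard-coded here, standing in for MS-CNN outputs on the LA mesh) become t-link capacities, neighbouring nodes get n-links, and the max-flow/min-cut partition gives a binary labelling. It uses networkx on a 1-D toy strip and is not the authors' code; `lam_t` and `lam_n` play the role of the t-link/n-link balance discussed above.

```python
import networkx as nx

probs = [0.9, 0.8, 0.4, 0.2, 0.1]   # pseudo "scar probability" along a 1-D strip
lam_t, lam_n = 1.0, 0.3             # relative weight of t-links vs n-links

G = nx.DiGraph()
for i, p in enumerate(probs):
    # t-links: connect each node to the source ("scar") and sink ("normal").
    G.add_edge("S", i, capacity=lam_t * p)
    G.add_edge(i, "T", capacity=lam_t * (1.0 - p))
for i in range(len(probs) - 1):
    # n-links: encourage neighbouring nodes to take the same label.
    G.add_edge(i, i + 1, capacity=lam_n)
    G.add_edge(i + 1, i, capacity=lam_n)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "S", "T")
labels = [1 if i in source_side else 0 for i in range(len(probs))]
print("cut cost:", round(cut_value, 3), "labels:", labels)
```

Raising `lam_n` relative to `lam_t` smooths the labelling across neighbours, which is the balance the abstract refers to.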
Toward Automatic Threat Recognition for Airport X-ray Baggage Screening with Deep Convolutional Object Detection
Title | Toward Automatic Threat Recognition for Airport X-ray Baggage Screening with Deep Convolutional Object Detection |
Authors | Kevin J Liang, John B. Sigman, Gregory P. Spell, Dan Strellis, William Chang, Felix Liu, Tejas Mehta, Lawrence Carin |
Abstract | For the safety of the traveling public, the Transportation Security Administration (TSA) operates security checkpoints at airports in the United States, seeking to keep dangerous items off airplanes. At these checkpoints, the TSA employs a fleet of X-ray scanners, such as the Rapiscan 620DV, so Transportation Security Officers (TSOs) can inspect the contents of carry-on possessions. However, identifying and locating all potential threats can be a challenging task. As a result, the TSA has taken a recent interest in deep learning-based automated detection algorithms that can assist TSOs. In a collaboration funded by the TSA, we collected a sizable new dataset of X-ray scans with a diverse set of threats in a wide array of contexts, trained several deep convolutional object detection models, and integrated such models into the Rapiscan 620DV, resulting in functional prototypes capable of operating in real time. We show performance of our models on held-out evaluation sets, analyze several design parameters, and demonstrate the potential of such systems for automated detection of threats that can be found in airports. |
Tasks | Object Detection |
Published | 2019-12-13 |
URL | https://arxiv.org/abs/1912.06329v1 |
https://arxiv.org/pdf/1912.06329v1.pdf | |
PWC | https://paperswithcode.com/paper/toward-automatic-threat-recognition-for |
Repo | |
Framework | |
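For context on the detector side, here is a hedged sketch of how a deep convolutional detector of the kind discussed above can be set up with torchvision's Faster R-CNN and retrained for a custom set of classes. The class count, image sizes and boxes are placeholders; the paper's X-ray data is not public, and this is the standard torchvision fine-tuning recipe rather than the authors' pipeline.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 5  # e.g. background + 4 hypothetical threat categories
# weights="DEFAULT" needs torchvision >= 0.13 and downloads COCO-pretrained weights.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the classification head so it predicts our classes instead of COCO's.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.train()
# One fake training step on random "X-ray" images and boxes, just to show the API.
images = [torch.rand(3, 512, 512) for _ in range(2)]
targets = [{"boxes": torch.tensor([[30.0, 40.0, 200.0, 220.0]]),
            "labels": torch.tensor([1])} for _ in images]
losses = model(images, targets)          # dict of classification/regression losses
loss = sum(losses.values())
loss.backward()
print({k: round(float(v), 3) for k, v in losses.items()})
```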
Black-Box Inference for Non-Linear Latent Force Models
Title | Black-Box Inference for Non-Linear Latent Force Models |
Authors | Wil O. C. Ward, Tom Ryder, Dennis Prangle, Mauricio A. Álvarez |
Abstract | Latent force models are systems whereby there is a mechanistic model describing the dynamics of the system state, with some unknown forcing term that is approximated with a Gaussian process. If such dynamics are non-linear, it can be difficult to estimate the posterior state and forcing term jointly, particularly when there are system parameters that also need estimating. This paper uses black-box variational inference to jointly estimate the posterior, designing a multivariate extension to local inverse autoregressive flows as a flexible approximator of the system. We compare estimates on systems where the posterior is known, demonstrating the effectiveness of the approximation, and apply the approach to problems with non-linear dynamics, multi-output systems, and models with non-Gaussian likelihoods. |
Tasks | Gaussian Processes |
Published | 2019-06-21 |
URL | https://arxiv.org/abs/1906.09199v2 |
https://arxiv.org/pdf/1906.09199v2.pdf | |
PWC | https://paperswithcode.com/paper/variational-bridge-constructs-for-grey-box |
Repo | |
Framework | |
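As background, the following sketch only shows what a latent force model is: a known ODE for the state driven by an unknown forcing term drawn from a Gaussian process. It simulates such a system with numpy and does not implement the paper's black-box variational inference; the kernel, lengthscale and decay rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)

# Draw a smooth forcing term u(t) ~ GP(0, RBF) via a Cholesky factor.
lengthscale = 0.5
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / lengthscale**2)
u = np.linalg.cholesky(K + 1e-6 * np.eye(len(t))) @ rng.standard_normal(len(t))

# Mechanistic part: first-order dynamics dx/dt = -gamma * x + u(t), Euler-integrated.
gamma, dt = 1.5, t[1] - t[0]
x = np.zeros_like(t)
for i in range(1, len(t)):
    x[i] = x[i - 1] + dt * (-gamma * x[i - 1] + u[i - 1])

print("forcing std: %.2f, state std: %.2f" % (u.std(), x.std()))
```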
A Natural-language-based Visual Query Approach of Uncertain Human Trajectories
Title | A Natural-language-based Visual Query Approach of Uncertain Human Trajectories |
Authors | Zhaosong Huang, Ye Zhao, Wei Chen, Shengjie Gao, Kejie Yu, Weixia Xu, Mingjie Tang, Minfeng Zhu, Mingliang Xu |
Abstract | Visual querying is essential for interactively exploring massive trajectory data. However, data uncertainty poses profound challenges for fulfilling advanced analytics requirements. On the one hand, much of the underlying data does not contain accurate geographic coordinates, e.g., positions of a mobile phone only refer to the regions (i.e., mobile cell stations) in which it resides, instead of accurate GPS coordinates. On the other hand, domain experts and general users prefer a natural way, such as using a natural language sentence, to access and analyze massive movement data. In this paper, we propose a visual analytics approach that can extract spatial-temporal constraints from a textual sentence and support an effective query method over uncertain mobile trajectory data. It is built on encoding massive, spatially uncertain trajectories by the semantic information of the POIs and regions covered by them, and then storing the trajectory documents in a text database with an effective indexing scheme. The visual interface facilitates query condition specification, situation-aware visualization, and semantic exploration of large trajectory data. Usage scenarios on real-world human mobility datasets demonstrate the effectiveness of our approach. |
Tasks | |
Published | 2019-08-01 |
URL | https://arxiv.org/abs/1908.00277v2 |
https://arxiv.org/pdf/1908.00277v2.pdf | |
PWC | https://paperswithcode.com/paper/a-natural-language-based-visual-query |
Repo | |
Framework | |
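A toy sketch of the underlying idea: turn a natural-language query into spatio-temporal constraints and match them against trajectory "documents" that record visited POI/region tags and hours. The names, regex patterns and data are invented; the paper's actual pipeline (semantic encoding of uncertain trajectories plus a text-database index) is far richer.

```python
import re

trajectories = {
    "t1": {"hours": range(8, 12), "tags": {"cafe", "office", "downtown"}},
    "t2": {"hours": range(20, 24), "tags": {"stadium", "suburb"}},
}

def parse_query(q):
    # Very naive extraction: "... near <place> ... between <h1> and <h2>"
    place = re.search(r"near (\w+)", q)
    times = re.search(r"between (\d+) and (\d+)", q)
    return (place.group(1) if place else None,
            (int(times.group(1)), int(times.group(2))) if times else None)

def matches(traj, place, window):
    ok_place = place is None or place in traj["tags"]
    ok_time = window is None or any(window[0] <= h < window[1] for h in traj["hours"])
    return ok_place and ok_time

place, window = parse_query("people near downtown between 9 and 11")
print([tid for tid, tr in trajectories.items() if matches(tr, place, window)])
```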
Learning Directed Graphical Models from Gaussian Data
Title | Learning Directed Graphical Models from Gaussian Data |
Authors | Katherine Fitch |
Abstract | In this paper, we introduce two new directed graphical models from Gaussian data: the Gaussian graphical interaction model (GGIM) and the Gaussian graphical conditional expectation model (GGCEM). The development of these models comes from considering stationary Gaussian processes on graphs, and leveraging the equations relating the resulting steady-state covariance matrix to the Laplacian matrix representing the interaction graph. Through the presentation of conceptually straightforward theory, we develop the new models and provide interpretations of the edges in each graphical model in terms of statistical measures. We show that when restricted to undirected graphs, the Laplacian matrix representing a GGIM is equivalent to the standard inverse covariance matrix that encodes conditional dependence relationships. We demonstrate that the problem of learning sparse GGIMs and GGCEMs for a given observation set can be framed as a LASSO problem. By comparison with the problem of inverse covariance estimation, we prove a bound on the difference between the covariance matrix corresponding to a sparse GGIM and the covariance matrix corresponding to the $l_1$-norm penalized maximum log-likelihood estimate. In all, the new models present a novel perspective on directed relationships between variables and significantly expand on the state of the art in Gaussian graphical modeling. |
Tasks | Gaussian Processes |
Published | 2019-06-19 |
URL | https://arxiv.org/abs/1906.08050v2 |
https://arxiv.org/pdf/1906.08050v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-directed-graphical-models-from |
Repo | |
Framework | |
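For context on the comparison made above: sparse inverse covariance estimation, the undirected baseline the paper relates its GGIM to, can be computed with the l1-penalised graphical lasso. The sketch below uses scikit-learn on synthetic data and does not implement the GGIM/GGCEM estimators themselves.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Three variables with a simple chain dependence: x0 -> x1 -> x2.
x0 = rng.standard_normal(500)
x1 = 0.8 * x0 + 0.3 * rng.standard_normal(500)
x2 = 0.8 * x1 + 0.3 * rng.standard_normal(500)
X = np.column_stack([x0, x1, x2])

model = GraphicalLasso(alpha=0.05).fit(X)
# A near-zero (0, 2) entry in the precision matrix reflects the conditional
# independence of x0 and x2 given x1.
print(np.round(model.precision_, 2))
```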
Multi-resolution Multi-task Gaussian Processes
Title | Multi-resolution Multi-task Gaussian Processes |
Authors | Oliver Hamelijnck, Theodoros Damoulas, Kangrui Wang, Mark Girolami |
Abstract | We consider evidence integration from potentially dependent observation processes under varying spatio-temporal sampling resolutions and noise levels. We develop a multi-resolution multi-task (MRGP) framework while allowing for both inter-task and intra-task multi-resolution and multi-fidelity. We develop shallow Gaussian Process (GP) mixtures that approximate the difficult-to-estimate joint likelihood with a composite one, and deep GP constructions that naturally handle biases in the mean. By doing so, we generalize and outperform state-of-the-art GP compositions and offer information-theoretic corrections and efficient variational approximations. We demonstrate the competitiveness of MRGPs on synthetic settings and on the challenging problem of hyper-local estimation of air pollution levels across London from multiple sensing modalities operating at disparate spatio-temporal resolutions. |
Tasks | Gaussian Processes |
Published | 2019-06-19 |
URL | https://arxiv.org/abs/1906.08344v2 |
https://arxiv.org/pdf/1906.08344v2.pdf | |
PWC | https://paperswithcode.com/paper/multi-resolution-multi-task-gaussian |
Repo | |
Framework | |
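A very rough illustration of mixing observation resolutions: coarse readings are treated as noisy point observations at their interval centres by giving them a larger per-sample noise (`alpha`) in a single scikit-learn GP. This is a naive baseline for intuition only, not the multi-resolution multi-task construction of the paper; the sensors, intervals and noise levels are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)

x_hi = rng.uniform(0, 2, 15)                      # accurate point sensor
y_hi = f(x_hi) + 0.05 * rng.standard_normal(15)
edges = np.linspace(2, 6, 5)                      # coarse sensor: interval averages
x_lo = 0.5 * (edges[:-1] + edges[1:])
y_lo = np.array([f(np.linspace(a, b, 50)).mean() for a, b in zip(edges[:-1], edges[1:])])

X = np.concatenate([x_hi, x_lo]).reshape(-1, 1)
y = np.concatenate([y_hi, y_lo])
# Looser fit (larger noise) on the coarse, averaged observations.
alpha = np.concatenate([np.full(15, 0.05**2), np.full(4, 0.2**2)])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=alpha).fit(X, y)
mean, std = gp.predict(np.linspace(0, 6, 7).reshape(-1, 1), return_std=True)
print(np.round(mean, 2), np.round(std, 2))
```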
CAiRE_HKUST at SemEval-2019 Task 3: Hierarchical Attention for Dialogue Emotion Classification
Title | CAiRE_HKUST at SemEval-2019 Task 3: Hierarchical Attention for Dialogue Emotion Classification |
Authors | Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Jamin Shin, Yan Xu, Peng Xu, Pascale Fung |
Abstract | Detecting emotion from dialogue is a challenge that has not yet been extensively surveyed. One could consider the emotion of each dialogue turn to be independent, but in this paper, we introduce a hierarchical approach to classify emotion, hypothesizing that the current emotional state depends on previous latent emotions. We benchmark several feature-based classifiers using pre-trained word and emotion embeddings, state-of-the-art end-to-end neural network models, and Gaussian processes for automatic hyper-parameter search. In our experiments, hierarchical architectures consistently give significant improvements, and our best model achieves a 76.77% F1-score on the test set. |
Tasks | Emotion Classification, Gaussian Processes |
Published | 2019-06-10 |
URL | https://arxiv.org/abs/1906.04041v1 |
https://arxiv.org/pdf/1906.04041v1.pdf | |
PWC | https://paperswithcode.com/paper/caire_hkust-at-semeval-2019-task-3-1 |
Repo | |
Framework | |
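A minimal sketch of a hierarchical architecture in the spirit described above: a word-level GRU encodes each dialogue turn, attention pools the turn vectors, and a linear layer classifies the emotion. Dimensions, vocabulary and data are placeholders, not the authors' configuration, and the pre-trained embeddings and GP-based hyper-parameter search from the paper are omitted.

```python
import torch
import torch.nn as nn

class HierEmotion(nn.Module):
    def __init__(self, vocab=1000, emb=32, hid=64, n_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.word_rnn = nn.GRU(emb, hid, batch_first=True)       # per-turn encoder
        self.attn = nn.Linear(hid, 1)                            # attention over turns
        self.cls = nn.Linear(hid, n_classes)

    def forward(self, turns):                 # turns: [batch, n_turns, n_words]
        b, t, w = turns.shape
        _, h = self.word_rnn(self.emb(turns.view(b * t, w)))     # h: [1, b*t, hid]
        turn_vecs = h.squeeze(0).view(b, t, -1)                  # [b, t, hid]
        weights = torch.softmax(self.attn(turn_vecs), dim=1)     # [b, t, 1]
        context = (weights * turn_vecs).sum(dim=1)               # attention-pooled
        return self.cls(context)

model = HierEmotion()
dialogue = torch.randint(0, 1000, (2, 3, 10))   # 2 dialogues, 3 turns, 10 tokens each
print(model(dialogue).shape)                    # -> torch.Size([2, 4])
```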
Dispersion Characterization and Pulse Prediction with Machine Learning
Title | Dispersion Characterization and Pulse Prediction with Machine Learning |
Authors | Sanjaya Lohani, Erin M. Knutson, Wenlei Zhang, Ryan T. Glasser |
Abstract | In this work we demonstrate the efficacy of neural networks in the characterization of dispersive media. We also develop a neural network to make predictions for input probe pulses which propagate through a nonlinear dispersive medium, which may be applied to predicting optimal pulse shapes for a desired output. The setup requires only a single pulse for the probe, providing considerable simplification of the current method of dispersion characterization that requires frequency scanning across the entirety of the gain and absorption features. We show that the trained networks are able to predict pulse profiles as well as dispersive features that are nearly identical to their experimental counterparts. We anticipate that the use of machine learning in conjunction with optical communication and sensing methods, both classical and quantum, can provide signal enhancement and experimental simplifications even in the face of highly complex, layered nonlinear light-matter interactions. |
Tasks | |
Published | 2019-09-05 |
URL | https://arxiv.org/abs/1909.02526v1 |
https://arxiv.org/pdf/1909.02526v1.pdf | |
PWC | https://paperswithcode.com/paper/dispersion-characterization-and-pulse |
Repo | |
Framework | |
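An illustration of the regression setting described above: a small neural network maps a sampled input pulse to the pulse after propagation. The "propagation" here is a toy smoothing-and-delay operation on synthetic Gaussian pulses, standing in for the real dispersive medium; nothing below reproduces the paper's experimental setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(-5, 5, 64)

def toy_propagate(pulse):
    # Fake "dispersion": broaden by convolution with a Gaussian and shift in time.
    kernel = np.exp(-0.5 * np.linspace(-2, 2, 15) ** 2)
    out = np.convolve(pulse, kernel / kernel.sum(), mode="same")
    return np.roll(out, 3)

# Training set: Gaussian input pulses with random centres/widths and their outputs.
X = np.array([np.exp(-0.5 * ((t - c) / w) ** 2)
              for c, w in zip(rng.uniform(-2, 2, 300), rng.uniform(0.3, 1.0, 300))])
Y = np.array([toy_propagate(p) for p in X])

net = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0).fit(X, Y)
pred = net.predict(X[:1])
print("per-sample prediction error:", float(np.abs(pred - Y[:1]).mean()))
```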
Deep Compositional Spatial Models
Title | Deep Compositional Spatial Models |
Authors | Andrew Zammit-Mangion, Tin Lok James Ng, Quan Vu, Maurizio Filippone |
Abstract | Nonstationary, anisotropic spatial processes are often used when modelling, analysing and predicting complex environmental phenomena. One such class of processes considers a stationary, isotropic process on a warped spatial domain. The warping function is generally difficult to fit and not constrained to be bijective, often resulting in ‘space-folding.’ Here, we propose modelling a bijective warping function through a composition of multiple elemental bijective functions in a deep-learning framework. We consider two cases: first, when these functions are known up to some weights that need to be estimated, and, second, when the weights in each layer are random. Inspired by recent methodological and technological advances in deep learning and deep Gaussian processes, we employ approximate Bayesian methods to make inference with these models using graphics processing units. Through simulation studies in one and two dimensions we show that the deep compositional spatial models are quick to fit, and are able to provide better predictions and uncertainty quantification than other deep stochastic models of similar complexity. We also show their remarkable capacity to model highly nonstationary, anisotropic spatial data using radiances from the MODIS instrument aboard the Aqua satellite. |
Tasks | Gaussian Processes |
Published | 2019-06-06 |
URL | https://arxiv.org/abs/1906.02840v1 |
https://arxiv.org/pdf/1906.02840v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-compositional-spatial-models |
Repo | |
Framework | |
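A one-dimensional toy version of the warping idea: compose a few simple monotone (hence bijective) warping units, then fit a stationary GP on the warped inputs. The warp parameters here are fixed by hand rather than learned, and scikit-learn stands in for the paper's approximate Bayesian machinery.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def warp(x):
    # Composition of elemental bijections: affine, then tanh squashing, then affine.
    # Each unit is monotone, so the composition is bijective (no "space-folding").
    x = 2.0 * x - 1.0
    x = x + 0.8 * np.tanh(4.0 * x)
    return 0.5 * x + 0.5

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 60))
# A nonstationary signal: slow near the edges, fast near the middle.
y = np.sin(4 * np.pi * warp(x)) + 0.05 * rng.standard_normal(60)

gp_plain = GaussianProcessRegressor(RBF(0.1)).fit(x[:, None], y)
gp_warp = GaussianProcessRegressor(RBF(0.1)).fit(warp(x)[:, None], y)
xt = np.linspace(0, 1, 200)
truth = np.sin(4 * np.pi * warp(xt))
for name, gp, xin in [("plain", gp_plain, xt), ("warped", gp_warp, warp(xt))]:
    err = np.abs(gp.predict(xin[:, None]) - truth).mean()
    print(name, round(float(err), 3))
```

Fitting the stationary kernel on warped inputs typically tracks the fast-varying region better than fitting it on the raw inputs, which is the intuition behind the warped-domain construction.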
SDM-NET: Deep Generative Network for Structured Deformable Mesh
Title | SDM-NET: Deep Generative Network for Structured Deformable Mesh |
Authors | Lin Gao, Jie Yang, Tong Wu, Yu-Jie Yuan, Hongbo Fu, Yu-Kun Lai, Hao Zhang |
Abstract | We introduce SDM-NET, a deep generative neural network which produces structured deformable meshes. Specifically, the network is trained to generate a spatial arrangement of closed, deformable mesh parts, which respect the global part structure of a shape collection, e.g., chairs, airplanes, etc. Our key observation is that while the overall structure of a 3D shape can be complex, the shape can usually be decomposed into a set of parts, each homeomorphic to a box, and the finer-scale geometry of the part can be recovered by deforming the box. The architecture of SDM-NET is that of a two-level variational autoencoder (VAE). At the part level, a PartVAE learns a deformable model of part geometries. At the structural level, we train a Structured Parts VAE (SP-VAE), which jointly learns the part structure of a shape collection and the part geometries, ensuring coherence between global shape structure and surface details. Through extensive experiments and comparisons with state-of-the-art deep generative models of shapes, we demonstrate the superiority of SDM-NET in generating meshes with visual quality, flexible topology, and meaningful structures, which benefit shape interpolation and other subsequent modeling tasks. |
Tasks | |
Published | 2019-08-13 |
URL | https://arxiv.org/abs/1908.04520v2 |
https://arxiv.org/pdf/1908.04520v2.pdf | |
PWC | https://paperswithcode.com/paper/sdm-net-deep-generative-network-for |
Repo | |
Framework | |
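A sketch of the part-level idea only: a small VAE over flattened per-part vertex offsets (random stand-ins here for box-deformation offsets). The real SDM-NET couples such a PartVAE with a structural SP-VAE and mesh-specific representations, all of which this omits; dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class PartVAE(nn.Module):
    def __init__(self, n_verts=100, latent=16):
        super().__init__()
        d = n_verts * 3
        self.enc = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, 2 * latent))
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, d))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        return self.dec(z), mu, logvar

vae = PartVAE()
offsets = torch.randn(8, 300)           # batch of 8 parts, 100 vertices x 3 offsets
recon, mu, logvar = vae(offsets)
rec_loss = ((recon - offsets) ** 2).mean()
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
print(float(rec_loss + kl))             # the usual VAE objective: reconstruction + KL
```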
Standardized Assessment of Automatic Segmentation of White Matter Hyperintensities and Results of the WMH Segmentation Challenge
Title | Standardized Assessment of Automatic Segmentation of White Matter Hyperintensities and Results of the WMH Segmentation Challenge |
Authors | Hugo J. Kuijf, J. Matthijs Biesbroek, Jeroen de Bresser, Rutger Heinen, Simon Andermatt, Mariana Bento, Matt Berseth, Mikhail Belyaev, M. Jorge Cardoso, Adrià Casamitjana, D. Louis Collins, Mahsa Dadar, Achilleas Georgiou, Mohsen Ghafoorian, Dakai Jin, April Khademi, Jesse Knight, Hongwei Li, Xavier Lladó, Miguel Luna, Qaiser Mahmood, Richard McKinley, Alireza Mehrtash, Sébastien Ourselin, Bo-yong Park, Hyunjin Park, Sang Hyun Park, Simon Pezold, Elodie Puybareau, Leticia Rittner, Carole H. Sudre, Sergi Valverde, Verónica Vilaplana, Roland Wiest, Yongchao Xu, Ziyue Xu, Guodong Zeng, Jianguo Zhang, Guoyan Zheng, Christopher Chen, Wiesje van der Flier, Frederik Barkhof, Max A. Viergever, Geert Jan Biessels |
Abstract | Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Currently, measurements are often still obtained from manual segmentations on brain MR images, which is a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of the performance of such methods is lacking. We organized a scientific challenge, in which developers could evaluate their method on a standardized multi-center/-scanner image dataset, giving an objective comparison: the WMH Segmentation Challenge (https://wmh.isi.uu.nl/). Sixty T1+FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. Segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: (1) Dice similarity coefficient, (2) modified Hausdorff distance (95th percentile), (3) absolute log-transformed volume difference, (4) sensitivity for detecting individual lesions, and (5) F1-score for individual lesions. Additionally, methods were ranked on their inter-scanner robustness. Twenty participants submitted their method for evaluation. This paper provides a detailed analysis of the results. In brief, there is a cluster of four methods that rank significantly better than the other methods, with one clear winner. The inter-scanner robustness ranking shows that not all methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation. |
Tasks | |
Published | 2019-04-01 |
URL | http://arxiv.org/abs/1904.00682v1 |
http://arxiv.org/pdf/1904.00682v1.pdf | |
PWC | https://paperswithcode.com/paper/standardized-assessment-of-automatic |
Repo | |
Framework | |
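Sketch implementations of two of the five challenge metrics listed above (Dice similarity and lesion-wise F1), evaluated on tiny synthetic masks. The official evaluation code differs in details (connectivity, the 95th-percentile Hausdorff distance and volume difference are not shown).

```python
import numpy as np
from scipy import ndimage

def dice(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def lesion_f1(pred, truth):
    # Connected components stand in for individual lesions.
    p_lab, n_pred = ndimage.label(pred)
    t_lab, n_true = ndimage.label(truth)
    # A true lesion counts as detected if any predicted voxel overlaps it.
    tp_true = sum(1 for i in range(1, n_true + 1) if pred[t_lab == i].any())
    # A predicted lesion is a true positive if it touches any true lesion.
    tp_pred = sum(1 for j in range(1, n_pred + 1) if truth[p_lab == j].any())
    recall = tp_true / max(n_true, 1)
    precision = tp_pred / max(n_pred, 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)

truth = np.zeros((20, 20), dtype=bool)
truth[2:5, 2:5] = truth[10:14, 10:14] = True
pred = np.zeros_like(truth)
pred[3:6, 3:6] = pred[15:17, 15:17] = True        # hits one lesion, one false positive
print("Dice:", round(dice(pred, truth), 3), "lesion F1:", round(lesion_f1(pred, truth), 3))
```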
Aggregated Deep Local Features for Remote Sensing Image Retrieval
Title | Aggregated Deep Local Features for Remote Sensing Image Retrieval |
Authors | Raffaele Imbriaco, Clint Sebastian, Egor Bondarev, Peter H. N. de With |
Abstract | Remote Sensing Image Retrieval remains a challenging topic due to the special nature of Remote Sensing Imagery. Such images contain various semantic objects, which clearly complicates the retrieval task. In this paper, we present an image retrieval pipeline that uses attentive, local convolutional features and aggregates them using the Vector of Locally Aggregated Descriptors (VLAD) to produce a global descriptor. We study various system parameters such as the multiplicative and additive attention mechanisms and descriptor dimensionality. We propose a query expansion method that requires no external inputs. Experiments demonstrate that even without training, the local convolutional features and global representation outperform other systems. After system tuning, we can achieve state-of-the-art or competitive results. Furthermore, we observe that our query expansion method increases overall system performance by about 3%, using only the top-three retrieved images. Finally, we show how dimensionality reduction produces compact descriptors with increased retrieval performance and fast retrieval computation times, e.g., 50% faster than the current systems. |
Tasks | Dimensionality Reduction, Image Retrieval |
Published | 2019-03-22 |
URL | http://arxiv.org/abs/1903.09469v1 |
http://arxiv.org/pdf/1903.09469v1.pdf | |
PWC | https://paperswithcode.com/paper/aggregated-deep-local-features-for-remote |
Repo | |
Framework | |
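A compact sketch of VLAD aggregation as referenced above: local descriptors are assigned to visual words, the residuals to each centroid are summed, and the result is power- and L2-normalised into a single global descriptor. The data is random; in the paper the local descriptors come from an attentive CNN rather than the synthetic corpus below.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
k, d = 8, 32                              # visual words, local descriptor dimension
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(
    rng.standard_normal((1000, d)))        # trained on a descriptor "corpus"

def vlad(local_descriptors):
    assign = codebook.predict(local_descriptors)
    v = np.zeros((k, d))
    for i in range(k):
        members = local_descriptors[assign == i]
        if len(members):
            v[i] = (members - codebook.cluster_centers_[i]).sum(axis=0)
    v = np.sign(v) * np.sqrt(np.abs(v))    # power (signed square-root) normalisation
    v = v.flatten()
    return v / (np.linalg.norm(v) + 1e-12) # L2 normalisation

image_descriptors = rng.standard_normal((200, d))   # e.g. one descriptor per keypoint
print(vlad(image_descriptors).shape)                 # -> (256,)
```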
Identifying collaborators in large codebases
Title | Identifying collaborators in large codebases |
Authors | Waren Long, Vadim Markovtsev, Hugo Mougard, Egor Bulychev, Jan Hula |
Abstract | The way developers collaborate inside and particularly across teams often escapes management’s attention, despite a formal organization with designated teams being in place. Observability of the actual, organically formed engineering structure gives decision makers invaluable additional tools to manage their talent pool. To identify existing inter- and intra-team interactions - and suggest relevant opportunities for suitable collaborations - this paper studies contributors’ commit activity, usage of programming languages, and code identifier topics by embedding and clustering them. We evaluate our findings in collaboration with the GitLab organization, analyzing 117 of their open source projects. We show that we are able to recover their engineering organization in broad strokes, and also reveal hidden coding collaborations as well as justify in-house technical decisions. |
Tasks | |
Published | 2019-05-07 |
URL | https://arxiv.org/abs/1905.06782v1 |
https://arxiv.org/pdf/1905.06782v1.pdf | |
PWC | https://paperswithcode.com/paper/190506782 |
Repo | |
Framework | |
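A toy version of the embedding-and-clustering step described above: each developer is represented by the languages they commit in, the vectors are TF-IDF-weighted, and clustering surfaces organically formed groups. The commit data is invented, and the paper additionally embeds identifier topics and commit activity, which this sketch omits.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Each "document" lists the file languages touched by a developer's commits.
commits = {
    "alice": "go go go yaml dockerfile",
    "bob":   "go go yaml",
    "carol": "ruby ruby javascript haml",
    "dave":  "ruby javascript javascript css",
    "erin":  "python python notebook",
}
devs = list(commits)
X = TfidfVectorizer().fit_transform(commits[d] for d in devs).toarray()

labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
for c in sorted(set(labels)):
    print(c, [d for d, l in zip(devs, labels) if l == c])
```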