Paper Group ANR 726
An Argumentation-Based Reasoner to Assist Digital Investigation and Attribution of Cyber-Attacks. CNN-Based PET Sinogram Repair to Mitigate Defective Block Detectors. On Learning from Ghost Imaging without Imaging. Enhanced Seismic Imaging with Predictive Neural Networks for Geophysics. Summarization and Visualization of Large Volumes of Broadcast …
An Argumentation-Based Reasoner to Assist Digital Investigation and Attribution of Cyber-Attacks
Title | An Argumentation-Based Reasoner to Assist Digital Investigation and Attribution of Cyber-Attacks |
Authors | Erisa Karafili, Linna Wang, Emil C. Lupu |
Abstract | We expect an increase in the frequency and severity of cyber-attacks, which brings with it the need for efficient security countermeasures. The process of attributing a cyber-attack helps to construct efficient and targeted mitigating and preventive security measures. In this work, we propose an argumentation-based reasoner (ABR) as a proof-of-concept tool that can help a forensics analyst during the analysis of forensic evidence and the attribution process. Given the evidence collected from a cyber-attack, our reasoner can assist the analyst during the investigation process by helping him/her to analyze the evidence and identify who performed the attack. Furthermore, it suggests where the analyst should focus further analyses by giving hints about missing evidence or new investigation paths to follow. ABR is the first automatic reasoner that can combine both technical and social evidence in the analysis of a cyber-attack, and that can also cope with incomplete and conflicting information. To illustrate how ABR can assist in the analysis and attribution of cyber-attacks, we have used examples of cyber-attacks and their analyses as reported in publicly available reports and online literature. We do not mean either to agree or disagree with the analyses presented therein, nor to reach attribution conclusions. |
Tasks | |
Published | 2019-04-30 |
URL | https://arxiv.org/abs/1904.13173v2 |
https://arxiv.org/pdf/1904.13173v2.pdf | |
PWC | https://paperswithcode.com/paper/an-argumentation-based-approach-to-assist-in |
Repo | |
Framework | |
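The abstract above centers on reasoning over incomplete and conflicting evidence. As a rough illustration of that underlying idea (not the ABR tool itself), the following minimal Python sketch computes the grounded extension of a Dung-style argumentation framework over a handful of hypothetical technical and social arguments; the argument names and the attack relation are made up.

```python
# Minimal sketch of argumentation-based reasoning over conflicting evidence
# (generic Dung-style grounded semantics, NOT the ABR tool itself; the
# example arguments and attack relation are hypothetical).

def grounded_extension(arguments, attacks):
    """Iteratively collect arguments defended by the current set."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(candidate, current):
        # every attacker of `candidate` must itself be attacked by `current`
        return all(any((c, att) in attacks for c in current)
                   for att in attackers[candidate])

    extension = set()
    while True:
        new = {a for a in arguments if a not in extension and defended(a, extension)}
        if not new:
            return extension
        extension |= new

# Hypothetical mix of technical and social evidence about an attack.
arguments = {"malware_matches_group_X", "ip_traces_to_country_Y",
             "group_X_claims_attack", "ip_likely_spoofed"}
attacks = {("ip_likely_spoofed", "ip_traces_to_country_Y")}

print(grounded_extension(arguments, attacks))
# {'malware_matches_group_X', 'group_X_claims_attack', 'ip_likely_spoofed'}
```

Unattacked arguments survive, while the argument undercut by the spoofing claim drops out, mirroring how conflicting evidence can be weighed before drawing attribution hints.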
CNN-Based PET Sinogram Repair to Mitigate Defective Block Detectors
Title | CNN-Based PET Sinogram Repair to Mitigate Defective Block Detectors |
Authors | William Whiteley, Jens Gregor |
Abstract | Positron emission tomography (PET) scanners continue to increase sensitivity and axial coverage by adding an ever-expanding array of block detectors. As they age, one or more block detectors may lose sensitivity due to a malfunction or component failure. The sinogram data missing as a result can lead to artifacts and other image degradations. We propose to mitigate the effects of malfunctioning block detectors by carrying out sinogram repair using a deep convolutional neural network. Experiments using whole-body patient studies with varying amounts of raw data removed show that the neural network significantly outperforms previously published methods with respect to normalized mean squared error for raw sinograms, a multi-scale structural similarity measure for reconstructed images, and quantitative accuracy. |
Tasks | |
Published | 2019-08-27 |
URL | https://arxiv.org/abs/1908.10252v1 |
https://arxiv.org/pdf/1908.10252v1.pdf | |
PWC | https://paperswithcode.com/paper/cnn-based-pet-sinogram-repair-to-mitigate |
Repo | |
Framework | |
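As a hedged illustration of the sinogram-repair idea in the abstract above, here is a minimal PyTorch sketch in which a small residual CNN learns to fill in sinogram bins zeroed out by a simulated dead block detector. The layer sizes, masking scheme, and toy training loop are assumptions for illustration, not the authors' architecture or data pipeline.

```python
# Minimal sketch of CNN-based sinogram repair (illustrative only; the layer
# sizes, masking scheme, and training data are assumptions, not the paper's
# actual architecture or patient data).
import torch
import torch.nn as nn

class SinogramRepairNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual so intact regions pass through largely unchanged.
        return x + self.net(x)

def mask_dead_block(sino, start, width):
    """Zero out radial bins covered by a malfunctioning block detector."""
    corrupted = sino.clone()
    corrupted[..., start:start + width] = 0.0
    return corrupted

model = SinogramRepairNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):                       # toy training loop
    clean = torch.rand(8, 1, 180, 192)        # fake sinograms (angles x radial bins)
    corrupted = mask_dead_block(clean, start=60, width=16)
    loss = loss_fn(model(corrupted), clean)
    opt.zero_grad(); loss.backward(); opt.step()
```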
On Learning from Ghost Imaging without Imaging
Title | On Learning from Ghost Imaging without Imaging |
Authors | Issei Sato |
Abstract | Computational ghost imaging is an imaging technique in which an object is imaged from light collected using a single-pixel detector with no spatial resolution. Recently, ghost cytometry has been proposed as a high-speed cell-classification method that involves ghost imaging and machine learning in flow cytometry. Ghost cytometry skips the reconstruction of cell images from the signals and uses the signals directly for cell classification, because this reconstruction is what creates the bottleneck in high-speed analysis. In this paper, we provide a theoretical analysis of learning from ghost imaging without imaging. |
Tasks | |
Published | 2019-03-14 |
URL | https://arxiv.org/abs/1903.06009v5 |
https://arxiv.org/pdf/1903.06009v5.pdf | |
PWC | https://paperswithcode.com/paper/on-learning-from-ghost-imaging-without |
Repo | |
Framework | |
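The core idea above, classifying directly from single-pixel measurements rather than reconstructed images, can be illustrated with a tiny NumPy sketch: random illumination patterns compress synthetic "objects" into short signal vectors, and a plain ridge classifier is trained on those signals alone. The data, pattern matrix, and classifier are assumptions, not the ghost-cytometry setup.

```python
# Minimal sketch of "learning without imaging": classify objects directly from
# single-pixel (bucket) measurements s = A @ x instead of reconstructing x.
# (Illustrative only: random data, a plain ridge classifier, and the pattern
# matrix A are assumptions.)
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 32 * 32, 64, 500          # image size, number of patterns, number of samples

# Two fake object classes differing in their mean intensity pattern.
prototypes = rng.random((2, d))
labels = rng.integers(0, 2, size=n)
objects = prototypes[labels] + 0.3 * rng.standard_normal((n, d))

A = rng.integers(0, 2, size=(m, d)).astype(float)   # random illumination patterns
signals = objects @ A.T                              # compressed measurements only

# Ridge-regularized least squares on the signals (no image reconstruction).
X = np.hstack([signals, np.ones((n, 1))])
y = 2.0 * labels - 1.0
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(m + 1), X.T @ y)

accuracy = np.mean((X @ w > 0) == (y > 0))
print(f"training accuracy from signals alone: {accuracy:.2f}")
```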
Enhanced Seismic Imaging with Predictive Neural Networks for Geophysics
Title | Enhanced Seismic Imaging with Predictive Neural Networks for Geophysics |
Authors | Ping Lu, Yanyan Zhang, Jianxiong Chen, Yuan Xiao, George Zhao |
Abstract | We propose a predictive neural network architecture that can be utilized to update reference velocity models as inputs to full waveform inversion. Deep learning models are explored to augment velocity model building workflows when processing 3D seismic volumes in salt-prone environments. Specifically, a neural network architecture with 3D convolutional and de-convolutional layers and 3D max-pooling is designed to take standard-amplitude 3D seismic volumes as input. Enhanced data augmentation through generative adversarial networks and a weighted loss function enable the network to train with a few sparsely annotated slices. Batch normalization is also applied for faster convergence. A 3D probability cube for salt bodies and inclusions is generated through ensembles of predictions from multiple models in order to reduce variance. Velocity models inferred from the proposed networks provide opportunities for FWI forward models to converge faster with an initial condition closer to the true model. In addition, in each iteration step, the probability cubes of salt bodies and inclusions inferred from the proposed networks can be used as a regularization term within the FWI forward modelling, which may result in an improved velocity model estimate, while the output of seismic migration can be utilized as an input to the 3D neural network for subsequent iterations. |
Tasks | |
Published | 2019-08-11 |
URL | https://arxiv.org/abs/1908.03973v2 |
https://arxiv.org/pdf/1908.03973v2.pdf | |
PWC | https://paperswithcode.com/paper/enhanced-seismic-imaging-with-predictive |
Repo | |
Framework | |
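To make the architecture description above concrete, here is a minimal PyTorch 3D encoder-decoder that maps a seismic amplitude volume to a salt-probability cube and averages an ensemble of models. It is a hedged illustration only; the layer sizes and ensembling details are assumptions rather than the authors' network.

```python
# Minimal sketch of a 3D conv/deconv network that maps a seismic amplitude
# volume to a salt-probability cube, with a toy ensemble average (layer sizes
# and the ensembling are assumptions, not the authors' exact architecture).
import torch
import torch.nn as nn

class SaltNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.decode(self.encode(x)))  # salt probability per voxel

# Ensemble of independently initialized models -> averaged probability cube,
# which could then serve as a prior/regularizer for FWI velocity updates.
volume = torch.rand(1, 1, 32, 64, 64)                   # fake seismic amplitudes
models = [SaltNet3D() for _ in range(3)]
prob_cube = torch.stack([m(volume) for m in models]).mean(dim=0)
print(prob_cube.shape)   # torch.Size([1, 1, 32, 64, 64])
```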
Summarization and Visualization of Large Volumes of Broadcast Video Data
Title | Summarization and Visualization of Large Volumes of Broadcast Video Data |
Authors | Kumar Abhishek, Ashok Yogi |
Abstract | Over the past few years, there has been an astounding growth in the number of news channels as well as the amount of broadcast news video data. As a result, it is imperative that automated methods be developed to effectively summarize and store this voluminous data. Format detection of news videos plays an important role in news video analysis. Our problem involves building a robust and versatile news format detector, which identifies the different band elements in a news frame. The probabilistic progressive Hough transform has been used for the detection of band edges. The detected bands are classified as natural images, computer-generated graphics (non-text) and text bands. A contrast-based text detector has been used to identify the text regions in news frames. Two classifiers have been trained and evaluated for labeling the detected bands as natural or artificial: a Support Vector Machine (SVM) classifier with an RBF kernel and an Extreme Learning Machine (ELM) classifier. The classifiers have been trained on a dataset of 6000 images (3000 images of each class). The ELM classifier reports a balanced accuracy of 77.38%, while the SVM classifier outperforms it with a balanced accuracy of 96.5% using 10-fold cross-validation. The detected bands which have been fragmented due to the presence of gradients in the image have been merged using a three-tier hierarchical reasoning model. The bands were detected with a Jaccard index of 0.8138 when compared to manually marked ground-truth data. We also present an extensive literature review of previous work on news video format detection, element band classification, and associative reasoning. |
Tasks | |
Published | 2019-01-12 |
URL | http://arxiv.org/abs/1901.03842v1 |
http://arxiv.org/pdf/1901.03842v1.pdf | |
PWC | https://paperswithcode.com/paper/summarization-and-visualization-of-large |
Repo | |
Framework | |
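The band-edge detection step described above can be illustrated with a short OpenCV sketch: Canny edges followed by the probabilistic Hough transform, keeping near-horizontal lines as candidate band boundaries. The thresholds and the synthetic frame are assumptions; the paper's band merging and text/graphics classification are not shown.

```python
# Minimal sketch of band-edge detection in a news frame with the probabilistic
# Hough transform (thresholds and the synthetic frame are assumptions).
import cv2
import numpy as np

frame = np.zeros((360, 640), dtype=np.uint8)
frame[280:320, :] = 200                      # fake "ticker" band

edges = cv2.Canny(frame, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                        minLineLength=frame.shape[1] // 2, maxLineGap=10)

band_edges = []
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    if abs(y2 - y1) <= 2:                    # keep near-horizontal lines only
        band_edges.append((y1 + y2) // 2)

print(sorted(set(band_edges)))               # candidate horizontal band boundaries
```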
Can You Really Backdoor Federated Learning?
Title | Can You Really Backdoor Federated Learning? |
Authors | Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh, H. Brendan McMahan |
Abstract | The decentralized nature of federated learning makes detecting and defending against adversarial attacks a challenging task. This paper focuses on backdoor attacks in the federated learning setting, where the goal of the adversary is to reduce the performance of the model on targeted tasks while maintaining good performance on the main task. Unlike existing works, we allow non-malicious clients to have correctly labeled samples from the targeted tasks. We conduct a comprehensive study of backdoor attacks and defenses for the EMNIST dataset, a real-life, user-partitioned, and non-iid dataset. We observe that in the absence of defenses, the performance of the attack largely depends on the fraction of adversaries present and the "complexity" of the targeted task. Moreover, we show that norm clipping and "weak" differential privacy mitigate the attacks without hurting the overall performance. We have implemented the attacks and defenses in TensorFlow Federated (TFF), a TensorFlow framework for federated learning. In open-sourcing our code, our goal is to encourage researchers to contribute new attacks and defenses and evaluate them on standard federated datasets. |
Tasks | |
Published | 2019-11-18 |
URL | https://arxiv.org/abs/1911.07963v2 |
https://arxiv.org/pdf/1911.07963v2.pdf | |
PWC | https://paperswithcode.com/paper/can-you-really-backdoor-federated-learning |
Repo | |
Framework | |
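A minimal NumPy sketch of the server-side defense discussed above, clipping each client update to a fixed L2 norm and adding Gaussian noise before averaging, is given below. It illustrates the general mechanism only, not the TensorFlow Federated implementation; the clip norm and noise scale are arbitrary.

```python
# Minimal sketch of norm clipping plus "weak" differential-privacy noise at the
# server (a NumPy illustration, not the TensorFlow Federated implementation).
import numpy as np

def aggregate(client_updates, clip_norm=1.0, noise_std=0.01, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    mean = np.mean(clipped, axis=0)
    return mean + rng.normal(0.0, noise_std, size=mean.shape)

# Nine benign updates plus one boosted (malicious) update: clipping bounds the
# attacker's influence on the aggregate.
benign = [np.random.randn(100) * 0.1 for _ in range(9)]
malicious = [np.random.randn(100) * 10.0]
print(np.linalg.norm(aggregate(benign + malicious)))
```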
AHA! an ‘Artificial Hippocampal Algorithm’ for Episodic Machine Learning
Title | AHA! an ‘Artificial Hippocampal Algorithm’ for Episodic Machine Learning |
Authors | Gideon Kowadlo, Abdelrahman Ahmed, David Rawlinson |
Abstract | The majority of ML research concerns slow, statistical learning of i.i.d. samples from large, labelled datasets. Animals do not learn this way. An enviable characteristic of animal learning is 'episodic' learning - the ability to memorise a specific experience as a composition of existing concepts, after just one experience, without provided labels. The new knowledge can then be used to distinguish between similar experiences, to generalise between classes, and to selectively consolidate to long-term memory. The Hippocampus is known to be vital to these abilities. AHA is a biologically-plausible computational model of the Hippocampus. Unlike most machine learning models, AHA is trained without external labels and uses only local credit assignment. We demonstrate AHA in a superset of the Omniglot one-shot classification benchmark. The extended benchmark covers a wider range of known hippocampal functions by testing pattern separation, completion, and recall of original input. These functions are all performed within a single configuration of the computational model. Despite these constraints, image classification results are comparable to conventional deep convolutional ANNs. |
Tasks | Image Classification, Omniglot |
Published | 2019-09-23 |
URL | https://arxiv.org/abs/1909.10340v5 |
https://arxiv.org/pdf/1909.10340v5.pdf | |
PWC | https://paperswithcode.com/paper/190910340 |
Repo | |
Framework | |
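AHA itself is a multi-component hippocampal model; as a hedged, classical stand-in for the one-shot memorisation and pattern-completion functions mentioned above, the sketch below stores a few random patterns in a Hopfield-style associative memory and recalls one from a corrupted cue. This is explicitly not the AHA architecture, just a familiar illustration of the completion/recall behaviour.

```python
# One-shot memorisation and pattern completion with a Hopfield-style
# associative memory -- a classical stand-in used only to illustrate the
# "completion/recall" functions mentioned above, not the AHA model itself.
import numpy as np

def store(patterns):
    """Hebbian one-shot storage of +/-1 patterns."""
    d = patterns.shape[1]
    W = patterns.T @ patterns / d
    np.fill_diagonal(W, 0.0)
    return W

def complete(W, cue, steps=10):
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

rng = np.random.default_rng(1)
patterns = rng.choice([-1.0, 1.0], size=(3, 200))     # three "episodes"
W = store(patterns)

cue = patterns[0].copy()
cue[:80] = rng.choice([-1.0, 1.0], size=80)           # corrupt 40% of the cue
recalled = complete(W, cue)
print("overlap with stored episode:", float(recalled @ patterns[0]) / 200)
```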
SurfelWarp: Efficient Non-Volumetric Single View Dynamic Reconstruction
Title | SurfelWarp: Efficient Non-Volumetric Single View Dynamic Reconstruction |
Authors | Wei Gao, Russ Tedrake |
Abstract | We contribute a dense SLAM system that takes a live stream of depth images as input and reconstructs non-rigid deforming scenes in real time, without templates or prior models. In contrast to existing approaches, we do not maintain any volumetric data structures, such as truncated signed distance function (TSDF) fields or deformation fields, which are performance and memory intensive. Our system works with a flat point (surfel) based representation of geometry, which can be directly acquired from commodity depth sensors. Standard graphics pipelines and general purpose GPU (GPGPU) computing are leveraged for all central operations: i.e., nearest neighbor maintenance, non-rigid deformation field estimation and fusion of depth measurements. Our pipeline inherently avoids expensive volumetric operations such as marching cubes, volumetric fusion and dense deformation field update, leading to significantly improved performance. Furthermore, the explicit and flexible surfel based geometry representation enables efficient tackling of topology changes and tracking failures, which makes our reconstructions consistent with updated depth observations. Our system allows robots to maintain a scene description with non-rigidly deformed objects that potentially enables interactions with dynamic working environments. |
Tasks | |
Published | 2019-04-30 |
URL | http://arxiv.org/abs/1904.13073v1 |
http://arxiv.org/pdf/1904.13073v1.pdf | |
PWC | https://paperswithcode.com/paper/surfelwarp-efficient-non-volumetric-single |
Repo | |
Framework | |
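As a small illustration of the surfel representation mentioned above, the sketch below keeps a position, normal, and confidence weight per surfel and merges a new depth observation by confidence-weighted averaging. It is a generic fusion step under assumed conventions, not SurfelWarp's non-rigid deformation estimation or GPU pipeline.

```python
# Minimal sketch of a surfel-based depth fusion step: each surfel keeps a
# position, normal and confidence weight, and a new measurement is merged by
# confidence-weighted averaging (illustrative only).
from dataclasses import dataclass
import numpy as np

@dataclass
class Surfel:
    position: np.ndarray    # 3D point
    normal: np.ndarray      # unit normal
    weight: float           # accumulated confidence

def fuse(surfel, obs_position, obs_normal, obs_weight=1.0):
    total = surfel.weight + obs_weight
    surfel.position = (surfel.weight * surfel.position + obs_weight * obs_position) / total
    normal = surfel.weight * surfel.normal + obs_weight * obs_normal
    surfel.normal = normal / (np.linalg.norm(normal) + 1e-12)
    surfel.weight = min(total, 100.0)        # cap confidence growth
    return surfel

s = Surfel(np.array([0.0, 0.0, 1.00]), np.array([0.0, 0.0, 1.0]), weight=4.0)
s = fuse(s, np.array([0.0, 0.0, 1.02]), np.array([0.0, 0.1, 0.995]))
print(s.position, s.weight)
```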
Physics-based Simulation of Continuous-Wave LIDAR for Localization, Calibration and Tracking
Title | Physics-based Simulation of Continuous-Wave LIDAR for Localization, Calibration and Tracking |
Authors | Eric Heiden, Ziang Liu, Ragesh K. Ramachandran, Gaurav S. Sukhatme |
Abstract | Light Detection and Ranging (LIDAR) sensors play an important role in the perception stack of autonomous robots, supplying mapping and localization pipelines with depth measurements of the environment. While their accuracy outperforms other types of depth sensors, such as stereo or time-of-flight cameras, the accurate modeling of LIDAR sensors requires laborious manual calibration that typically does not take into account the interaction of laser light with different surface types, incidence angles and other phenomena that significantly influence measurements. In this work, we introduce a physically plausible model of a 2D continuous-wave LIDAR that accounts for the surface-light interactions and simulates the measurement process in the Hokuyo URG-04LX LIDAR. Through automatic differentiation, we employ gradient-based optimization to estimate model parameters from real sensor measurements. |
Tasks | Calibration |
Published | 2019-12-03 |
URL | https://arxiv.org/abs/1912.01652v2 |
https://arxiv.org/pdf/1912.01652v2.pdf | |
PWC | https://paperswithcode.com/paper/physics-based-simulation-of-continuous-wave |
Repo | |
Framework | |
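The gradient-based parameter estimation described above can be illustrated with a toy differentiable range model for a continuous-wave sensor, d = c * phase / (4 * pi * f_mod) plus a bias, fitted to synthetic measurements with PyTorch autograd. The phase-gain and bias parameters, the modulation frequency, and the data are assumptions; the paper's surface-light interaction model is far richer.

```python
# Minimal sketch of gradient-based calibration of a continuous-wave range
# model via autodiff (illustrative only; parameters and data are synthetic).
import math
import torch

C, F_MOD = 3e8, 10e6                                  # speed of light, modulation frequency (Hz)

def predict_range(phase, gain, bias):
    # continuous-wave range from phase shift: d = c * phase / (4*pi*f), plus a bias
    return C * (gain * phase) / (4 * math.pi * F_MOD) + bias

# Synthetic "measurements" generated with a gain/bias we pretend not to know.
true_gain, true_bias = 0.97, 0.05
phase = torch.linspace(0.1, 3.0, 200)
measured = predict_range(phase, true_gain, true_bias) + 0.01 * torch.randn(200)

gain = torch.tensor(1.0, requires_grad=True)
bias = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.SGD([gain, bias], lr=0.02)
for _ in range(2000):
    loss = torch.mean((predict_range(phase, gain, bias) - measured) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()

print(float(gain), float(bias))                       # should recover roughly 0.97 and 0.05
```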
Line Drawings of Natural Scenes Guide Visual Attention
Title | Line Drawings of Natural Scenes Guide Visual Attention |
Authors | Kai-Fu Yang, Wen-Wen Jiang, Teng-Fei Zhan, Yong-Jie Li |
Abstract | Visual search is an important strategy of the human visual system for fast scene perception. The guided search theory suggests that the global layout or other top-down sources of scenes play a crucial role in guiding object searching. In order to verify the specific roles of scene layout and regional cues in guiding visual attention, in this work we conducted a psychophysical experiment to record human fixations on line drawings of natural scenes with an eye-tracking system. We collected the fixations of ten subjects on 498 natural images and of another ten subjects on the corresponding 996 human-marked line drawings of boundaries (two boundary maps per image) under free-viewing conditions. The experimental results show that, even in the absence of basic features such as color and luminance, the distribution of fixations on the line drawings is highly correlated with that on the natural images. Moreover, compared to other regional cues, subjects pay more attention to the closed regions of line drawings, which are usually related to the dominant objects of the scenes. Finally, we built a computational model to demonstrate that the fixation information on the line drawings can be used to significantly improve the performance of classical bottom-up models for fixation prediction in natural scenes. These results suggest that Gestalt features and scene layout are important cues for guiding fast visual object searching. |
Tasks | Eye Tracking |
Published | 2019-12-19 |
URL | https://arxiv.org/abs/1912.09581v1 |
https://arxiv.org/pdf/1912.09581v1.pdf | |
PWC | https://paperswithcode.com/paper/line-drawings-of-natural-scenes-guide-visual |
Repo | |
Framework | |
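As a small sketch of the kind of comparison reported above, the code below turns two sets of fixation points into Gaussian-smoothed density maps and computes their Pearson correlation (the common CC saliency metric). The fixation data here are random stand-ins, not the collected eye-tracking data.

```python
# Minimal sketch: fixation points -> smoothed density maps -> Pearson
# correlation between the photo and line-drawing conditions (random stand-ins).
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(points, shape=(120, 160), sigma=8.0):
    m = np.zeros(shape)
    for y, x in points:
        m[y, x] += 1.0
    return gaussian_filter(m, sigma)

def correlation_coefficient(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

rng = np.random.default_rng(0)
fix_photo = [(rng.integers(0, 120), rng.integers(0, 160)) for _ in range(50)]
fix_lines = [(rng.integers(0, 120), rng.integers(0, 160)) for _ in range(50)]

cc = correlation_coefficient(fixation_map(fix_photo), fixation_map(fix_lines))
print(f"CC between photo and line-drawing fixation maps: {cc:.3f}")
```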
Anomaly Detection in Road Traffic Using Visual Surveillance: A Survey
Title | Anomaly Detection in Road Traffic Using Visual Surveillance: A Survey |
Authors | Santhosh Kelathodi Kumaran, Debi Prosad Dogra, Partha Pratim Roy |
Abstract | Computer vision has evolved in the last decade as a key technology for numerous applications replacing human supervision. In this paper, we present a survey of visual-surveillance research relevant to anomaly detection in public places, focusing primarily on roads. Firstly, we revisit the surveys done in the last 10 years in this field. Since learning is the underlying building block of a typical anomaly detection system, we place particular emphasis on learning methods applied to video scenes. We then summarize the important contributions made during the last six years on anomaly detection, focusing primarily on features, underlying techniques, applied scenarios and types of anomalies using a single static camera. Finally, we discuss the challenges in computer-vision-based anomaly detection techniques and some of the important future possibilities. |
Tasks | Anomaly Detection |
Published | 2019-01-24 |
URL | http://arxiv.org/abs/1901.08292v1 |
http://arxiv.org/pdf/1901.08292v1.pdf | |
PWC | https://paperswithcode.com/paper/anomaly-detection-in-road-traffic-using |
Repo | |
Framework | |
Formulating Manipulable Argumentation with Intra-/Inter-Agent Preferences
Title | Formulating Manipulable Argumentation with Intra-/Inter-Agent Preferences |
Authors | Ryuta Arisaka, Makoto Hagiwara, Takayuki Ito |
Abstract | From marketing to politics, exploitation of incomplete information through selective communication of arguments is ubiquitous. In this work, we focus on the development of an argumentation-theoretic model for manipulable multi-agent argumentation, where each agent may transmit deceptive information to others for tactical motives. In particular, we study the characterisation of epistemic states and their roles in deception/honesty detection and (mis)trust-building. To this end, we propose the use of intra-agent preferences to handle deception/honesty detection and inter-agent preferences to determine which agent(s) to believe in more. We show how deception/honesty in an agent’s argumentation, if detected, would alter the agent’s perceived trustworthiness, and how that may affect their judgement as to which arguments should be acceptable. |
Tasks | |
Published | 2019-09-09 |
URL | https://arxiv.org/abs/1909.03616v2 |
https://arxiv.org/pdf/1909.03616v2.pdf | |
PWC | https://paperswithcode.com/paper/formulating-manipulable-argumentation-with |
Repo | |
Framework | |
Joint Learning of Named Entity Recognition and Entity Linking
Title | Joint Learning of Named Entity Recognition and Entity Linking |
Authors | Pedro Henrique Martins, Zita Marinho, André F. T. Martins |
Abstract | Named entity recognition (NER) and entity linking (EL) are two fundamentally related tasks, since in order to perform EL, mentions of entities first have to be detected. However, most entity linking approaches disregard the mention detection part, assuming that the correct mentions have been previously detected. In this paper, we perform joint learning of NER and EL to leverage their relatedness and obtain a more robust and generalisable system. For that, we introduce a model inspired by the Stack-LSTM approach (Dyer et al., 2015). We observe that, in fact, multi-task learning of NER and EL improves the performance on both tasks compared with models trained with individual objectives. Furthermore, we achieve results competitive with the state-of-the-art in both NER and EL. |
Tasks | Entity Linking, Multi-Task Learning, Named Entity Recognition |
Published | 2019-07-18 |
URL | https://arxiv.org/abs/1907.08243v1 |
https://arxiv.org/pdf/1907.08243v1.pdf | |
PWC | https://paperswithcode.com/paper/joint-learning-of-named-entity-recognition |
Repo | |
Framework | |
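To illustrate the joint-learning setup above, here is a minimal PyTorch sketch with a shared encoder and two heads, one for NER tags and one for entity-linking candidates, trained with a summed cross-entropy loss. It uses a generic BiLSTM rather than the paper's Stack-LSTM-based model, and the vocabulary, tag set, and entity inventory sizes are made up.

```python
# Minimal sketch of multi-task NER + entity linking with a shared encoder and
# two heads (a generic BiLSTM baseline for illustration, not the paper's model).
import torch
import torch.nn as nn

class JointNerEl(nn.Module):
    def __init__(self, vocab=5000, dim=64, n_tags=9, n_entities=300):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.ner_head = nn.Linear(2 * dim, n_tags)        # BIO tag per token
        self.el_head = nn.Linear(2 * dim, n_entities)     # KB entity per token

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))
        return self.ner_head(h), self.el_head(h)

model = JointNerEl()
tokens = torch.randint(0, 5000, (2, 12))                  # fake batch of sentences
ner_tags = torch.randint(0, 9, (2, 12))
entity_ids = torch.randint(0, 300, (2, 12))

ner_logits, el_logits = model(tokens)
loss = (nn.functional.cross_entropy(ner_logits.reshape(-1, 9), ner_tags.reshape(-1))
        + nn.functional.cross_entropy(el_logits.reshape(-1, 300), entity_ids.reshape(-1)))
loss.backward()                                            # joint multi-task objective
```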
Information Scrambling in Quantum Neural Networks
Title | Information Scrambling in Quantum Neural Networks |
Authors | Huitao Shen, Pengfei Zhang, Yi-Zhuang You, Hui Zhai |
Abstract | Quantum neural networks are one of the promising applications for near-term noisy intermediate-scale quantum computers. A quantum neural network distills the information from the input wavefunction into the output qubits. In this Letter, we show that this process can also be viewed from the opposite direction: the quantum information in the output qubits is scrambled into the input. This observation motivates us to use the tripartite information, a quantity recently developed to characterize information scrambling, to diagnose the training dynamics of quantum neural networks. We empirically find a strong correlation between the dynamical behavior of the tripartite information and the loss function in the training process, from which we identify that the training process has two stages for randomly initialized networks. In the early stage, the network performance improves rapidly and the tripartite information increases linearly with a universal slope, meaning that the neural network becomes less scrambled than the random unitary. In the later stage, the network performance improves slowly while the tripartite information decreases. We present evidence that the network constructs local correlations in the early stage and learns large-scale structures in the later stage. We believe this two-stage training dynamic is universal and applicable to a wide range of problems. Our work builds a bridge between two research subjects, quantum neural networks and information scrambling, which opens up a new perspective for understanding quantum neural networks. |
Tasks | |
Published | 2019-09-26 |
URL | https://arxiv.org/abs/1909.11887v2 |
https://arxiv.org/pdf/1909.11887v2.pdf | |
PWC | https://paperswithcode.com/paper/information-scrambling-in-quantum-neural |
Repo | |
Framework | |
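The diagnostic quantity discussed above, the tripartite information of a unitary, can be computed for toy circuits from the channel-state dual |U> = (1/sqrt(d)) sum_i |i> (x) U|i>. The NumPy sketch below does this for a 2-qubit unitary (identity versus Haar-random); the partitioning into input/output qubits follows the usual scrambling convention and is not tied to the paper's network ansatz.

```python
# Minimal sketch of the tripartite information I3 of a unitary, computed from
# its channel-state dual on a toy 2-qubit example (illustrative only; not the
# paper's quantum-neural-network ansatz).
import numpy as np
from scipy.stats import unitary_group

def entropy(state, keep):
    """Von Neumann entropy (bits) of the reduced state on qubit indices `keep`."""
    n = int(np.log2(state.size))
    psi = state.reshape((2,) * n)
    traced = [q for q in range(n) if q not in keep]
    psi = np.transpose(psi, list(keep) + traced).reshape(2 ** len(keep), -1)
    rho = psi @ psi.conj().T
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

def tripartite_information(U):
    d = U.shape[0]
    # dual state: qubits 0..1 are the input copy, qubits 2..3 are the output
    state = (U.T / np.sqrt(d)).reshape(-1)
    A, C, D = [0], [2], [3]       # A = one input qubit, C/D = the two output qubits
    I_AC = entropy(state, A) + entropy(state, C) - entropy(state, A + C)
    I_AD = entropy(state, A) + entropy(state, D) - entropy(state, A + D)
    I_ACD = entropy(state, A) + entropy(state, C + D) - entropy(state, A + C + D)
    return I_AC + I_AD - I_ACD

print("I3(identity):", tripartite_information(np.eye(4)))                    # ~ 0 (no scrambling)
print("I3(random U):", tripartite_information(unitary_group.rvs(4, random_state=0)))
# typically negative for a scrambling unitary
```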
HAXMLNet: Hierarchical Attention Network for Extreme Multi-Label Text Classification
Title | HAXMLNet: Hierarchical Attention Network for Extreme Multi-Label Text Classification |
Authors | Ronghui You, Zihan Zhang, Suyang Dai, Shanfeng Zhu |
Abstract | Extreme multi-label text classification (XMTC) addresses the problem of tagging each text with the most relevant labels from an extreme-scale label set. Traditional methods use bag-of-words (BOW) representations without context information as their features. The state-of-the-art deep learning-based method, AttentionXML, which uses a recurrent neural network (RNN) with multi-label attention, can hardly deal with extreme-scale problems (hundreds of thousands of labels). To address this, we propose HAXMLNet, which uses an efficient and effective hierarchical structure with multi-label attention. Experimental results show that HAXMLNet reaches performance competitive with other state-of-the-art methods. |
Tasks | Multi-Label Text Classification, Text Classification |
Published | 2019-03-24 |
URL | http://arxiv.org/abs/1904.12578v1 |
http://arxiv.org/pdf/1904.12578v1.pdf | |
PWC | https://paperswithcode.com/paper/190412578 |
Repo | |
Framework | |
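As a hedged sketch of the multi-label attention building block referenced above, the PyTorch snippet below lets each label attend over token representations and score its own pooled feature. It is an AttentionXML-style layer for illustration, not the full HAXMLNet hierarchy; all dimensions are made up.

```python
# Minimal sketch of a multi-label attention layer: each label attends over the
# token representations and gets its own pooled feature, which is scored for
# that label (illustrative only; dimensions are made up).
import torch
import torch.nn as nn

class MultiLabelAttention(nn.Module):
    def __init__(self, hidden=128, n_labels=1000):
        super().__init__()
        self.label_queries = nn.Parameter(torch.randn(n_labels, hidden))
        self.score = nn.Linear(hidden, 1)

    def forward(self, h):                    # h: (batch, seq_len, hidden)
        # attention weights: one distribution over tokens per label
        attn = torch.softmax(self.label_queries @ h.transpose(1, 2), dim=-1)
        label_feats = attn @ h               # (batch, n_labels, hidden)
        return self.score(label_feats).squeeze(-1)   # (batch, n_labels) logits

h = torch.randn(4, 50, 128)                  # e.g. RNN outputs for 4 documents
logits = MultiLabelAttention()(h)
print(logits.shape)                          # torch.Size([4, 1000])
```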