Paper Group ANR 301
Removing Dynamic Objects for Static Scene Reconstruction using Light Fields
Title | Removing Dynamic Objects for Static Scene Reconstruction using Light Fields |
Authors | Pushyami Kaveti, Sammie Katt, Hanumant Singh |
Abstract | Robots are generally expected to operate in environments that contain both static and dynamic entities, including people, furniture, and automobiles. These dynamic environments pose challenges to visual simultaneous localization and mapping (SLAM) algorithms by introducing errors into the front-end. Light fields provide one possible way to address such problems by capturing more complete visual information about a scene. In contrast to a single ray from a perspective camera, light fields capture a bundle of light rays emerging from a single point in space, allowing us to see through dynamic objects by refocusing past them. In this paper we present a method to synthesize a refocused image of the static background in the presence of dynamic objects, using a light field acquired with a linear camera array. We simultaneously estimate both the depth and the refocused image of the static scene using semantic segmentation to detect dynamic objects in a single time step, which eliminates the need for initializing a static map. The algorithm is parallelizable and is implemented on a GPU, allowing us to execute it at close to real-time speeds. We demonstrate the effectiveness of our method on real-world data acquired using a small robot with a five-camera array. (An illustrative refocusing sketch follows this entry.) |
Tasks | Semantic Segmentation, Simultaneous Localization and Mapping |
Published | 2020-03-24 |
URL | https://arxiv.org/abs/2003.11076v1 |
https://arxiv.org/pdf/2003.11076v1.pdf | |
PWC | https://paperswithcode.com/paper/removing-dynamic-objects-for-static-scene |
Repo | |
Framework | |
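The refocusing idea in this abstract can be sketched as a masked shift-and-average over the views of a linear camera array: each view is shifted by a disparity proportional to its baseline, and pixels flagged as dynamic by semantic segmentation are excluded from the average. This is a minimal sketch under assumed inputs (per-view dynamic-object masks, integer pixel shifts), not the authors' exact pipeline.

```python
import numpy as np

def refocus_static(images, masks, baselines, disparity):
    """Synthetic refocus onto a chosen depth plane from a linear camera array.

    images    : list of HxWx3 float arrays (views from the array)
    masks     : list of HxW bool arrays, True where a dynamic object was segmented
                (those pixels are excluded from the average)
    baselines : per-camera horizontal offsets, in units of camera spacing
    disparity : pixel shift per unit baseline for the target depth plane
    """
    h, w, c = images[0].shape
    accum = np.zeros((h, w, c))
    weight = np.zeros((h, w, 1))
    for img, msk, b in zip(images, masks, baselines):
        shift = int(round(b * disparity))          # horizontal shift for this view
        shifted = np.roll(img, shift, axis=1)      # align view to the reference (wraps at borders)
        valid = ~np.roll(msk, shift, axis=1)       # keep only static pixels
        accum += shifted * valid[..., None]
        weight += valid[..., None]
    return accum / np.maximum(weight, 1)           # average of the contributing static views
```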
Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop 2020
Title | Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop 2020 |
Authors | Dennis Ross, Arunesh Sinha, Diane Staheli, Bill Streilein |
Abstract | The workshop will focus on the application of artificial intelligence to problems in cyber security. The emphasis of AICS 2020 will be on human-machine teaming within the context of cyber security problems, specifically exploring collaboration between human operators and AI technologies. The workshop will address applicable areas of AI, such as machine learning, game theory, natural language processing, knowledge representation, automated and assistive reasoning, and human-machine interaction. Cyber security application areas, with a particular emphasis on the characterization and deployment of human-machine teaming, will also be a focus. |
Tasks | |
Published | 2020-02-07 |
URL | https://arxiv.org/abs/2002.08320v1 |
https://arxiv.org/pdf/2002.08320v1.pdf | |
PWC | https://paperswithcode.com/paper/proceedings-of-the-artificial-intelligence-1 |
Repo | |
Framework | |
Improving Reliability of Latent Dirichlet Allocation by Assessing Its Stability Using Clustering Techniques on Replicated Runs
Title | Improving Reliability of Latent Dirichlet Allocation by Assessing Its Stability Using Clustering Techniques on Replicated Runs |
Authors | Jonas Rieger, Lars Koppers, Carsten Jentsch, Jörg Rahnenführer |
Abstract | Topic modeling provides useful tools for organizing large text corpora. A widely used method is Latent Dirichlet Allocation (LDA), a generative probabilistic model that represents each text in a collection as a mixture of latent topics. The assignments of words to topics depend on initial values, so in general the outcome of LDA is not fully reproducible. In addition, the reassignment via Gibbs sampling is based on conditional distributions, leading to different results in replicated runs on the same text data. This fact is often neglected in everyday practice. We aim to improve the reliability of LDA results. Therefore, we study the stability of LDA by comparing assignments from replicated runs. We propose to quantify the similarity of two generated topics by a modified Jaccard coefficient. Using such similarities, topics can be clustered. We propose a new pruning algorithm for hierarchical clustering results based on the idea that two LDA runs create pairs of similar topics. This approach leads to the new measure S-CLOP (Similarity of multiple sets by Clustering with LOcal Pruning) for quantifying the stability of LDA models. We discuss some characteristics of this measure and illustrate it with an application to real data consisting of newspaper articles from USA Today. Our results show that S-CLOP is useful for assessing the stability of LDA models or of any other topic modeling procedure that characterizes its topics by word distributions. Based on this new measure of LDA stability, we propose a method to increase the reliability, and hence the reproducibility, of empirical findings based on topic modeling. The increase in reliability is obtained by running LDA several times and taking as prototype the most representative run, that is, the run with the highest average similarity to all other runs. (A similarity-computation sketch follows this entry.) |
Tasks | |
Published | 2020-02-14 |
URL | https://arxiv.org/abs/2003.04980v1 |
https://arxiv.org/pdf/2003.04980v1.pdf | |
PWC | https://paperswithcode.com/paper/improving-reliability-of-latent-dirichlet |
Repo | |
Framework | |
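The comparison of topics across replicated runs can be illustrated with a Jaccard-style similarity over each topic's top words. The paper uses a *modified* Jaccard coefficient on word assignments; the plain top-N variant below is only an assumed stand-in to show where such similarities enter the cross-run comparison.

```python
import numpy as np

def top_words(topic_word_counts, vocab, n=20):
    """Return the set of the n most probable words of one topic."""
    idx = np.argsort(topic_word_counts)[::-1][:n]
    return {vocab[i] for i in idx}

def topic_similarity(counts_a, counts_b, vocab, n=20):
    """Jaccard-style similarity of two topics via their top-n word sets.
    (The paper's modified Jaccard coefficient works on word counts; this
    plain top-n version is only an illustrative stand-in.)"""
    a, b = top_words(counts_a, vocab, n), top_words(counts_b, vocab, n)
    return len(a & b) / len(a | b)

def cross_run_similarity(run1, run2, vocab, n=20):
    """Pairwise topic similarities between two replicated LDA runs
    (rows: topics of run1, columns: topics of run2)."""
    return np.array([[topic_similarity(t1, t2, vocab, n) for t2 in run2]
                     for t1 in run1])
```

Clustering the topics of many runs with such a similarity matrix, and pruning the resulting dendrogram, is what the S-CLOP measure is built on.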
Adversarially Robust Frame Sampling with Bounded Irregularities
Title | Adversarially Robust Frame Sampling with Bounded Irregularities |
Authors | Hanhan Li, Pin Wang |
Abstract | In recent years, video analysis tools that automatically extract meaningful information from videos have been widely studied and deployed. Because most of them use deep neural networks, which are computationally expensive, it is desirable to feed only a subset of video frames into such algorithms. Sampling frames at a fixed rate is attractive for its simplicity, representativeness, and interpretability. For example, a popular cloud video API generated video and shot labels by processing only the first frame of every second of a video. However, one can easily attack such strategies by placing chosen frames at the sampled locations. In this paper, we present an elegant solution to this sampling problem that is provably robust against adversarial attacks and introduces only bounded irregularities. (An illustrative sampling sketch follows this entry.) |
Tasks | |
Published | 2020-02-04 |
URL | https://arxiv.org/abs/2002.01147v1 |
https://arxiv.org/pdf/2002.01147v1.pdf | |
PWC | https://paperswithcode.com/paper/adversarially-robust-frame-sampling-with |
Repo | |
Framework | |
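One simple scheme with both properties mentioned in the abstract is to pick one frame uniformly at random inside each fixed-length window: gaps between sampled frames stay bounded, yet an adversary who does not know the random seed cannot predict the sampled positions. This is an illustrative scheme under those assumptions, not necessarily the paper's algorithm.

```python
import random

def windowed_random_sampling(num_frames, window=30, seed=None):
    """Pick one frame uniformly at random from every `window` consecutive
    frames. Gaps between consecutive samples are bounded by 2*window - 1,
    and the positions are unpredictable without the seed.
    (Illustrative scheme, not the paper's exact algorithm.)"""
    rng = random.Random(seed)
    samples = []
    for start in range(0, num_frames, window):
        end = min(start + window, num_frames)
        samples.append(rng.randrange(start, end))
    return samples

# Example: a 300-frame clip shot at 30 fps, one sampled frame per second on average.
print(windowed_random_sampling(300, window=30, seed=0))
```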
Mask Encoding for Single Shot Instance Segmentation
Title | Mask Encoding for Single Shot Instance Segmentation |
Authors | Rufeng Zhang, Zhi Tian, Chunhua Shen, Mingyu You, Youliang Yan |
Abstract | To date, instance segmentation has been dominated by two-stage methods, as pioneered by Mask R-CNN. In contrast, one-stage alternatives cannot compete with Mask R-CNN in mask AP, mainly due to the difficulty of compactly representing masks, which makes the design of one-stage methods very challenging. In this work, we propose a simple single-shot instance segmentation framework, termed mask encoding based instance segmentation (MEInst). Instead of predicting the two-dimensional mask directly, MEInst distills it into a compact, fixed-dimensional representation vector, which allows the instance segmentation task to be incorporated into one-stage bounding-box detectors and results in a simple yet efficient instance segmentation framework. The proposed one-stage MEInst achieves 36.4% mask AP with a single model (ResNeXt-101-FPN backbone) and single-scale testing on the MS-COCO benchmark. We show that this much simpler and more flexible one-stage instance segmentation method can also achieve competitive performance. The framework can be easily adapted to other instance-level recognition tasks. Code is available at: https://git.io/AdelaiDet (A mask-encoding sketch follows this entry.) |
Tasks | Instance Segmentation, Semantic Segmentation |
Published | 2020-03-26 |
URL | https://arxiv.org/abs/2003.11712v1 |
https://arxiv.org/pdf/2003.11712v1.pdf | |
PWC | https://paperswithcode.com/paper/mask-encoding-for-single-shot-instance |
Repo | |
Framework | |
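The key idea of distilling a 2D mask into a compact, fixed-dimensional vector can be illustrated with a linear (PCA-style) encoder fitted on flattened ground-truth masks. Whether this matches MEInst's exact encoder is left to the paper and code; the mask size (28x28) and vector dimension (60) below are assumptions for illustration.

```python
import numpy as np

class LinearMaskCodec:
    """PCA-style encoder/decoder that maps a 28x28 binary mask to a short
    vector and back. A linear codec like this is one way to realize the
    compact mask representation described in the abstract (sketch only)."""

    def __init__(self, dim=60):
        self.dim = dim

    def fit(self, masks):                       # masks: N x 28 x 28 array of {0,1}
        X = masks.reshape(len(masks), -1).astype(np.float32)
        self.mean_ = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.dim]       # principal directions, dim x 784
        return self

    def encode(self, mask):                     # 28x28 mask -> dim-vector
        return (mask.reshape(-1) - self.mean_) @ self.components_.T

    def decode(self, code, threshold=0.5):      # dim-vector -> 28x28 binary mask
        recon = code @ self.components_ + self.mean_
        return (recon.reshape(28, 28) > threshold).astype(np.uint8)
```

In a one-stage detector, each box prediction would regress such a code, which is then decoded back into a mask at inference time.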
Robust saliency maps with decoy-enhanced saliency score
Title | Robust saliency maps with decoy-enhanced saliency score |
Authors | Yang Lu, Wenbo Guo, Xinyu Xing, William Stafford Noble |
Abstract | Saliency methods help make deep neural network predictions more interpretable by identifying the particular features, such as pixels in an image, that contribute most strongly to the network's prediction. Unfortunately, recent evidence suggests that many saliency methods perform poorly when gradients are saturated or in the presence of strong inter-feature dependence or noise injected by an adversarial attack. In this work, we propose to infer robust saliency scores by integrating the saliency scores of a set of decoys into a novel decoy-enhanced saliency score, where the decoys are generated either by solving an optimization problem or by blurring the original input. We show theoretically that our method compensates for gradient saturation and accounts for the joint activation patterns of pixels. We also apply our method to three different CNNs (VGGNet, AlexNet, and ResNet) trained on the ImageNet dataset. The empirical results show, both qualitatively and quantitatively, that our method outperforms the raw scores produced by three existing saliency methods, even in the presence of adversarial attacks. (A decoy-aggregation sketch follows this entry.) |
Tasks | Adversarial Attack |
Published | 2020-02-03 |
URL | https://arxiv.org/abs/2002.00526v1 |
https://arxiv.org/pdf/2002.00526v1.pdf | |
PWC | https://paperswithcode.com/paper/robust-saliency-maps-with-decoy-enhanced |
Repo | |
Framework | |
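The decoy idea can be sketched in PyTorch by aggregating plain gradient saliency over several blurred copies of the input. The blur strengths, the max-over-channels reduction, and the mean aggregation are assumptions for illustration; the paper also constructs decoys by solving an optimization problem and defines its own aggregation rule.

```python
import torch
import torchvision.transforms.functional as TF

def decoy_saliency(model, image, target, sigmas=(0.5, 1.0, 1.5, 2.0)):
    """Aggregate plain gradient saliency over blurred decoys of the input.
    `image` is a 1x3xHxW tensor, `target` the class index.
    (Blurring is the simpler decoy variant; the paper's exact construction
    and aggregation may differ.)"""
    model.eval()
    maps = []
    for sigma in sigmas:
        k = int(2 * round(3 * sigma) + 1)                     # odd Gaussian kernel size
        decoy = TF.gaussian_blur(image, kernel_size=k, sigma=sigma)
        decoy = decoy.clone().requires_grad_(True)            # make the decoy a leaf tensor
        score = model(decoy)[0, target]
        score.backward()
        maps.append(decoy.grad.abs().amax(dim=1))             # max over colour channels
    stacked = torch.stack(maps)                               # num_decoys x 1 x H x W
    return stacked.mean(dim=0)                                # aggregated saliency map
```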
Cooperative Initialization based Deep Neural Network Training
Title | Cooperative Initialization based Deep Neural Network Training |
Authors | Pravendra Singh, Munender Varshney, Vinay P. Namboodiri |
Abstract | Researchers have proposed various activation functions. These activation functions help a deep network learn non-linear behavior and have a significant effect on training dynamics and task performance. The performance of these activations also depends on the initial state of the weight parameters, i.e., different initial states lead to differences in the performance of a network. In this paper, we propose a cooperative initialization for training deep networks with the ReLU activation function to improve network performance. Our approach uses multiple activation functions during the first few epochs to update all sets of weight parameters while training the network. These activation functions cooperate to overcome each other's drawbacks during the weight updates, which in effect yields better feature representations and boosts network performance later. Cooperative-initialization-based training also helps reduce overfitting and does not increase the number of parameters or the inference (test) time of the final model while improving performance. Experiments show that our approach outperforms various baselines and, at the same time, performs well across tasks such as classification and detection. The Top-1 classification accuracy of models trained with our approach improves by 2.8% for VGG-16 and 2.1% for ResNet-56 on the CIFAR-100 dataset. (A training-step sketch follows this entry.) |
Tasks | |
Published | 2020-01-05 |
URL | https://arxiv.org/abs/2001.01240v1 |
https://arxiv.org/pdf/2001.01240v1.pdf | |
PWC | https://paperswithcode.com/paper/cooperative-initialization-based-deep-neural |
Repo | |
Framework | |
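One way to read "multiple activation functions in the initial few epochs" is that, during the cooperative phase, the loss is computed under several activations applied to the same shared weights and the resulting updates are blended. The sketch below is an interpretation of the abstract, not the authors' exact procedure; the simple averaging rule, the body/head split, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

def cooperative_step(net_body, head, x, y, activations, optimizer, loss_fn):
    """One training step in the 'cooperative' phase: compute the loss once per
    activation function applied to the shared features and average them, so a
    single optimizer step blends the updates (interpretation of the abstract)."""
    optimizer.zero_grad()
    feats = net_body(x)                              # shared weight parameters
    loss = sum(loss_fn(head(act(feats)), y) for act in activations) / len(activations)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example wiring: a tiny body/head and three candidate activations.
body = nn.Linear(32, 64)
head = nn.Linear(64, 10)
opt = torch.optim.SGD(list(body.parameters()) + list(head.parameters()), lr=0.1)
acts = [torch.relu, torch.tanh, nn.functional.leaky_relu]
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
cooperative_step(body, head, x, y, acts, opt, nn.CrossEntropyLoss())
```

After the initial epochs, training would switch back to ReLU only, so the final model keeps its original parameter count and inference cost.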
End-to-End Trainable One-Stage Parking Slot Detection Integrating Global and Local Information
Title | End-to-End Trainable One-Stage Parking Slot Detection Integrating Global and Local Information |
Authors | Jae Kyu Suhr, Ho Gi Jung |
Abstract | This paper proposes an end-to-end trainable one-stage parking slot detection method for around view monitor (AVM) images. The proposed method simultaneously acquires global information (entrance, type, and occupancy of a parking slot) and local information (location and orientation of junctions) using a convolutional neural network (CNN), and integrates them to detect parking slots with their properties. The method divides an AVM image into a grid and performs CNN-based feature extraction. For each cell of the grid, the global and local information of the parking slot is obtained by applying convolution filters to the extracted feature map. Final detection results are produced by integrating the global and local information of the parking slot through non-maximum suppression (NMS). Since the proposed method obtains most of the information about the parking slot using a fully convolutional network without a region proposal stage, it is an end-to-end trainable one-stage detector. In experiments, the method was quantitatively evaluated on a public dataset and outperformed previous methods, achieving recall and precision of 99.77%, type classification accuracy of 100%, and occupancy classification accuracy of 99.31% while processing 60 frames per second. (A prediction-head sketch follows this entry.) |
Tasks | |
Published | 2020-03-05 |
URL | https://arxiv.org/abs/2003.02445v1 |
https://arxiv.org/pdf/2003.02445v1.pdf | |
PWC | https://paperswithcode.com/paper/end-to-end-trainable-one-stage-parking-slot |
Repo | |
Framework | |
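The per-cell prediction described in the abstract can be sketched as a small fully convolutional head that emits, for every grid cell, a slot confidence, type and occupancy outputs (global information), and a junction offset plus orientation (local information). The channel layout, sizes, and number of slot types below are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GridSlotHead(nn.Module):
    """Per-cell prediction head in the spirit of the abstract: each grid cell
    outputs confidence, type/occupancy logits (global) and a junction offset
    plus orientation (local). Sketch only; sizes are assumptions."""

    def __init__(self, in_ch=256, num_types=3):
        super().__init__()
        # confidence(1) + type logits + occupancy(1) + junction dx,dy(2) + orientation cos,sin(2)
        out_ch = 1 + num_types + 1 + 2 + 2
        self.head = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, out_ch, 1))

    def forward(self, feature_map):               # B x in_ch x H x W feature map
        return self.head(feature_map)             # B x out_ch x H x W (one vector per cell)

# Example: a 16x16 grid over the AVM image.
preds = GridSlotHead()(torch.randn(1, 256, 16, 16))
```

Per-cell predictions above a confidence threshold would then be merged by NMS to produce the final slot detections.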
Hypergraph Spectral Analysis and Processing in 3D Point Cloud
Title | Hypergraph Spectral Analysis and Processing in 3D Point Cloud |
Authors | Songyang Zhang, Shuguang Cui, Zhi Ding |
Abstract | Along with increasingly popular virtual reality applications, the three-dimensional (3D) point cloud has become a fundamental data structure for characterizing 3D objects and surroundings. To process 3D point clouds efficiently, a suitable model of the underlying structure and of outlier noise is critical. In this work, we propose a new hypergraph-based point cloud model that is amenable to efficient analysis and processing. We introduce tensor-based methods to estimate hypergraph spectrum components and frequency coefficients of point clouds in both ideal and noisy settings. We establish an analytical connection between hypergraph frequencies and structural features. We further evaluate the efficacy of hypergraph spectrum estimation in two common point cloud applications, sampling and denoising, for which we also elaborate specific hypergraph filter designs and spectral properties. The empirical performance demonstrates the strength of hypergraph signal processing as a tool for 3D point clouds and their underlying properties. (A simplified spectrum-construction sketch follows this entry.) |
Tasks | Denoising |
Published | 2020-01-08 |
URL | https://arxiv.org/abs/2001.02384v1 |
https://arxiv.org/pdf/2001.02384v1.pdf | |
PWC | https://paperswithcode.com/paper/hypergraph-spectral-analysis-and-processing |
Repo | |
Framework | |
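To make the notion of a hypergraph spectrum for a point cloud concrete, the sketch below builds one hyperedge per point from its k nearest neighbours and takes the eigendecomposition of the standard normalized (clique-expansion style) hypergraph Laplacian. This matrix-based construction is only a simplified stand-in for the tensor-based estimation used in the paper; the k-NN hyperedge rule is also an assumption.

```python
import numpy as np

def knn_hypergraph_spectrum(points, k=5):
    """Hyperedges = each point plus its k nearest neighbours; return the
    normalized hypergraph Laplacian and its eigendecomposition as a simple
    stand-in 'spectrum' (not the paper's tensor-based estimator)."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    H = np.zeros((n, n))                            # incidence matrix: vertices x hyperedges
    for e in range(n):
        H[np.argsort(d2[e])[: k + 1], e] = 1.0      # point e and its k neighbours
    Dv = H.sum(axis=1)                              # vertex degrees
    De = H.sum(axis=0)                              # hyperedge sizes
    HD = H / np.sqrt(Dv)[:, None]                   # Dv^{-1/2} H
    L = np.eye(n) - HD @ np.diag(1.0 / De) @ HD.T   # normalized hypergraph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    return L, eigvals, eigvecs

# Example: spectrum of a small random point cloud.
L, freqs, basis = knn_hypergraph_spectrum(np.random.rand(50, 3), k=5)
```

Low-frequency components of such a spectrum capture smooth surface structure, which is the property that sampling and denoising filters exploit.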
On-line non-overlapping camera calibration net
Title | On-line non-overlapping camera calibration net |
Authors | Zhao Fangda, Toru Tamaki, Takio Kurita, Bisser Raytchev, Kazufumi Kaneda |
Abstract | We propose an easy-to-use non-overlapping camera calibration method. First, successive images are fed to a PoseNet-based network to obtain the ego-motion of each camera between frames. Next, the poses between cameras are estimated. Instead of using a batch method, we propose an on-line method for inter-camera pose estimation. Furthermore, we implement the entire procedure on a computation graph. Experiments with simulations and the KITTI dataset show that the proposed method is effective in simulation. (A batch rotation-estimation sketch follows this entry.) |
Tasks | Calibration, Pose Estimation |
Published | 2020-02-19 |
URL | https://arxiv.org/abs/2002.08005v1 |
https://arxiv.org/pdf/2002.08005v1.pdf | |
PWC | https://paperswithcode.com/paper/on-line-non-overlapping-camera-calibration |
Repo | |
Framework | |
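The inter-camera pose estimation step rests on the classic hand-eye relation: if camera 1 undergoes ego-motion rotation A_i while rigidly attached camera 2 undergoes B_i, the fixed inter-camera rotation X satisfies A_i X = X B_i, so the rotation vectors of A_i and B_i are related by that same X. The batch, rotation-only solver below (Kabsch alignment of rotation vectors, needing at least two motions with non-parallel axes) is a stand-in for the paper's on-line, computation-graph formulation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def inter_camera_rotation(As, Bs):
    """Estimate the fixed rotation X between two rigidly mounted cameras from
    paired ego-motion rotations A_i, B_i with A_i X = X B_i. Rotation vectors
    of A_i and B_i are related by X, so Kabsch alignment of the two sets of
    rotation vectors recovers it. Batch stand-in for the paper's on-line method;
    requires at least two motions with non-parallel rotation axes."""
    P = np.stack([Rotation.from_matrix(A).as_rotvec() for A in As])  # camera-1 frame
    Q = np.stack([Rotation.from_matrix(B).as_rotvec() for B in Bs])  # camera-2 frame
    U, _, Vt = np.linalg.svd(P.T @ Q)                                # Kabsch: align Q onto P
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])          # keep a proper rotation
    return U @ D @ Vt
```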
Multi-step Online Unsupervised Domain Adaptation
Title | Multi-step Online Unsupervised Domain Adaptation |
Authors | J. H. Moon, Debasmit Das, C. S. George Lee |
Abstract | In this paper, we address the Online Unsupervised Domain Adaptation (OUDA) problem, where the target data are unlabelled and arrive sequentially. Traditional methods for the OUDA problem mainly focus on transforming each arriving target batch to the source domain, and they do not sufficiently consider the temporal coherency and accumulated statistics among the arriving target data. We propose a multi-step framework for the OUDA problem that introduces a novel method for computing the mean-target subspace, inspired by a geometrical interpretation in Euclidean space. This mean-target subspace accumulates temporal information from the target data that have arrived so far. Moreover, the transformation matrix computed from the mean-target subspace is applied to the next target data as a preprocessing step, aligning the target data closer to the source domain. Experiments on four datasets demonstrate the contribution of each step in our proposed multi-step OUDA framework and its improved performance over previous approaches. (A mean-subspace sketch follows this entry.) |
Tasks | Domain Adaptation, Unsupervised Domain Adaptation |
Published | 2020-02-20 |
URL | https://arxiv.org/abs/2002.08930v1 |
https://arxiv.org/pdf/2002.08930v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-step-online-unsupervised-domain |
Repo | |
Framework | |
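The mean-target-subspace idea can be sketched with a running extrinsic mean of subspace projection matrices: each arriving target batch contributes a PCA subspace, the projection matrices are averaged, and the top eigenvectors of the average give the current mean subspace used to align the next batch to the source. The extrinsic-mean update and the subspace-alignment style transformation below are simplified stand-ins for the paper's geometric construction.

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions (columns) of a data batch X (n x p)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:d].T                                   # p x d orthonormal basis

class MeanTargetSubspace:
    """Running 'mean-target' subspace over the target batches seen so far,
    using the extrinsic mean of projection matrices as a simple stand-in
    for the paper's geometric construction."""

    def __init__(self, d):
        self.d, self.n, self.P = d, 0, None

    def update(self, Bt):                             # Bt: p x d basis of the newest batch
        P_new = Bt @ Bt.T
        self.P = P_new if self.P is None else (self.n * self.P + P_new) / (self.n + 1)
        self.n += 1
        w, V = np.linalg.eigh(self.P)
        return V[:, np.argsort(w)[::-1][: self.d]]    # basis of the current mean subspace

def align_target_to_source(Xt, B_mean, Bs):
    """Preprocess a target batch: project onto the mean-target subspace,
    then rotate into source-subspace coordinates (subspace-alignment style)."""
    return (Xt - Xt.mean(axis=0)) @ B_mean @ (B_mean.T @ Bs)
```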
A Resolution in Algorithmic Fairness: Calibrated Scores for Fair Classifications
Title | A Resolution in Algorithmic Fairness: Calibrated Scores for Fair Classifications |
Authors | Claire Lazar, Suhas Vijaykumar |
Abstract | Calibration and equal error rates are fundamental conditions for algorithmic fairness that have been shown to conflict with each other, suggesting that they cannot be satisfied simultaneously. This paper shows that the two are in fact compatible and presents a method for reconciling them. In particular, we derive necessary and sufficient conditions for the existence of calibrated scores that yield classifications achieving equal error rates. We then present an algorithm that searches for the most informative score subject to both calibration and minimal error rate disparity. Applied empirically to credit lending, our algorithm provides a solution that is both fairer and more profitable than a common alternative that omits sensitive features. (A per-group diagnostic sketch follows this entry.) |
Tasks | Calibration |
Published | 2020-02-18 |
URL | https://arxiv.org/abs/2002.07676v1 |
https://arxiv.org/pdf/2002.07676v1.pdf | |
PWC | https://paperswithcode.com/paper/a-resolution-in-algorithmic-fairness |
Repo | |
Framework | |
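The two fairness conditions discussed here can be checked directly on a given score: per-group calibration error and per-group false positive/negative rates after thresholding. The diagnostic below is only a helper for inspecting those quantities under an assumed score in [0, 1]; it does not reproduce the paper's search for the most informative calibrated score.

```python
import numpy as np

def group_metrics(scores, labels, groups, threshold=0.5, n_bins=10):
    """Per-group expected calibration error and false positive/negative rates
    for a thresholded score in [0, 1]. Diagnostic helper only."""
    out = {}
    for g in np.unique(groups):
        s, y = scores[groups == g], labels[groups == g]
        bins = np.clip((s * n_bins).astype(int), 0, n_bins - 1)
        ece = sum(abs(s[bins == b].mean() - y[bins == b].mean()) * (bins == b).mean()
                  for b in range(n_bins) if (bins == b).any())
        pred = s >= threshold
        fpr = pred[y == 0].mean() if (y == 0).any() else np.nan
        fnr = (~pred[y == 1]).mean() if (y == 1).any() else np.nan
        out[g] = {"calibration_error": ece, "fpr": fpr, "fnr": fnr}
    return out
```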
Topological Sweep for Multi-Target Detection of Geostationary Space Objects
Title | Topological Sweep for Multi-Target Detection of Geostationary Space Objects |
Authors | Daqi Liu, Bo Chen, Tat-Jun Chin, Mark Rutten |
Abstract | Conducting surveillance of the Earth's orbit is a key task towards achieving space situational awareness (SSA). Our work focuses on the optical detection of man-made objects (e.g., satellites, space debris) in geostationary orbit (GEO), which is home to major space assets such as telecommunications and navigational satellites. GEO object detection is challenging due to the distance of the targets, which appear as small, dim points among a clutter of bright stars. In this paper, we propose a novel multi-target detection technique based on topological sweep to find GEO objects in a short sequence of optical images. Our topological sweep technique exploits the geometric duality that underpins the approximately linear trajectory of target objects across the input sequence to extract the targets from significant clutter and noise. Unlike standard multi-target methods, our algorithm deterministically solves a combinatorial problem to ensure high recall rates without requiring accurate initializations. The use of geometric duality also yields an algorithm that is computationally efficient and suitable for online processing. (A linear-trajectory check follows this entry.) |
Tasks | Object Detection |
Published | 2020-03-21 |
URL | https://arxiv.org/abs/2003.09583v1 |
https://arxiv.org/pdf/2003.09583v1.pdf | |
PWC | https://paperswithcode.com/paper/topological-sweep-for-multi-target-detection |
Repo | |
Framework | |
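The cue the method exploits is that a GEO target traces an approximately linear trajectory across a short image sequence. The least-squares check below illustrates only that cue for a single candidate track; it is not the duality-based topological sweep itself, and the residual threshold a user would apply is an assumption.

```python
import numpy as np

def linear_trajectory_residual(times, xs, ys):
    """Fit x(t) and y(t) as straight lines and return the worst residual.
    Small residuals indicate detections consistent with the approximately
    linear motion of a GEO object (illustration of the cue only)."""
    A = np.column_stack([times, np.ones_like(times)])
    cx, _, _, _ = np.linalg.lstsq(A, xs, rcond=None)
    cy, _, _, _ = np.linalg.lstsq(A, ys, rcond=None)
    res = np.hypot(A @ cx - xs, A @ cy - ys)
    return res.max()

# Candidate detections at 5 time steps: a small residual suggests a consistent track.
t = np.arange(5.0)
print(linear_trajectory_residual(t, 10 + 2.0 * t, 5 + 0.5 * t))
```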
Learning Preference-Based Similarities from Face Images using Siamese Multi-Task CNNs
Title | Learning Preference-Based Similarities from Face Images using Siamese Multi-Task CNNs |
Authors | Nils Gessert, Alexander Schlaefer |
Abstract | Online dating has become commonplace over the last few decades. A key challenge for online dating platforms is to determine suitable matches for their users. Many dating services rely on self-reported user traits and preferences for matching, while others rely largely on user images and thus on initial visual preference. Especially for the latter approach, previous research has attempted to capture users' visual preferences for automatic match recommendation. These approaches are mostly based on the assumption that physical attraction is the key factor in relationship formation, while personal preferences, interests, and attitude are largely neglected. Deep learning approaches have shown that a variety of properties can be predicted from human faces to some degree, including age, health, and even personality traits. Therefore, we investigate the feasibility of bridging image-based matching and matching based on personal interests, preferences, and attitude. We approach the problem in a supervised manner by predicting similarity scores between two users based on images of their faces alone. The ground truth for the similarity scores is determined by a test designed to capture users' preferences, interests, and attitude that are relevant for forming romantic relationships. The images are processed by a Siamese multi-task deep learning architecture. We find a statistically significant correlation between predicted and target similarity scores. Thus, our results indicate that learning similarities in terms of interests, preferences, and attitude from face images appears feasible to some degree. (An architecture sketch follows this entry.) |
Tasks | |
Published | 2020-01-25 |
URL | https://arxiv.org/abs/2001.09371v1 |
https://arxiv.org/pdf/2001.09371v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-preference-based-similarities-from |
Repo | |
Framework | |
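A Siamese similarity regressor of the kind described here applies one shared backbone to both face images and regresses a score from the pair of embeddings. The backbone choice (ResNet-18), embedding size, and single regression head below are illustrative assumptions; the paper's architecture is additionally multi-task.

```python
import torch
import torch.nn as nn
from torchvision import models

class SiameseSimilarity(nn.Module):
    """Shared-weight backbone applied to both face images, with a small head
    regressing the preference-similarity score from the concatenated
    embeddings (sketch; the paper also uses multi-task outputs)."""

    def __init__(self, emb_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, emb_dim)
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(2 * emb_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, img_a, img_b):
        za, zb = self.backbone(img_a), self.backbone(img_b)
        return self.head(torch.cat([za, zb], dim=1)).squeeze(1)   # similarity score per pair

# Example forward pass on two dummy batches of face crops.
model = SiameseSimilarity()
score = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```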
Two-dimensional Multi-fiber Spectrum Image Correction Based on Machine Learning Techniques
Title | Two-dimensional Multi-fiber Spectrum Image Correction Based on Machine Learning Techniques |
Authors | Jiali Xu, Qian Yin, Ping Guo, Xin Zheng |
Abstract | Due to the limited size and imperfections of the optical components in a spectrometer, aberration is inevitably introduced into the two-dimensional multi-fiber spectrum images of LAMOST, which leads to obvious spatial variation of the point spread functions (PSFs). Consequently, if spatially variant PSFs are estimated directly, the huge storage and intensive computation requirements make deconvolution-based spectral extraction intractable. In this paper, we propose a novel method that addresses the problem of spatially varying PSFs through image aberration correction. Once the CCD image aberration is corrected, the PSF, i.e., the convolution kernel, can be approximated by a single spatially invariant PSF. Specifically, machine learning techniques are adopted to calibrate the distorted spectral images, including a Total Least Squares (TLS) algorithm, an intelligent sampling method, and multi-layer feed-forward neural networks. Calibration experiments on LAMOST CCD images show that the proposed method is effective. A comparison of the spectrum extraction results before and after calibration shows that the characteristics of the extracted one-dimensional waveform are closer to those of an ideal optical system, and that the PSF of the corrected object spectrum image, estimated by blind deconvolution, is nearly centrally symmetric, which indicates that our proposed method can significantly reduce the complexity of spectrum extraction and improve extraction accuracy. (A coordinate-correction sketch follows this entry.) |
Tasks | Calibration |
Published | 2020-02-16 |
URL | https://arxiv.org/abs/2002.06600v1 |
https://arxiv.org/pdf/2002.06600v1.pdf | |
PWC | https://paperswithcode.com/paper/two-dimensional-multi-fiber-spectrum-image |
Repo | |
Framework | |
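The aberration-correction step amounts to learning a mapping from distorted CCD coordinates to ideal coordinates from control-point pairs, in the spirit of the multi-layer feed-forward networks mentioned in the abstract. The sketch below fits such a mapping with a small MLP on synthetic control points; the TLS algorithm and intelligent sampling used in the paper are omitted, and the toy distortion is an assumption for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_distortion_model(distorted_xy, ideal_xy):
    """Learn a mapping from distorted CCD coordinates to ideal coordinates
    from control-point pairs (sketch of the neural-network calibration step;
    the paper's TLS and sampling components are not reproduced here)."""
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    model.fit(distorted_xy, ideal_xy)
    return model

# Toy control points: a mild quadratic distortion of a coordinate grid.
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
ideal = np.column_stack([gx.ravel(), gy.ravel()])
distorted = ideal + 0.01 * (ideal - 0.5) ** 2
model = fit_distortion_model(distorted, ideal)
corrected = model.predict(distorted)        # approximately recovers the ideal grid
```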