January 25, 2020

3117 words 15 mins read

Paper Group ANR 1635


Adjusting Rate of Spread Factors through Derivative-Free Optimization: A New Methodology to Improve the Performance of Forest Fire Simulators

Title Adjusting Rate of Spread Factors through Derivative-Free Optimization: A New Methodology to Improve the Performance of Forest Fire Simulators
Authors Jaime Carrasco, Cristobal Pais, Zuo-Jun Max Shen, Andres Weintraub
Abstract In practical applications, it is common that wildfire simulators do not correctly predict the evolution of the fire scar. Usually, this is caused by multiple factors, including inaccuracy in input data such as land cover classification and moisture, improperly represented local winds, cumulative errors in the fire growth simulation model, and a high level of discontinuity/heterogeneity within the landscape, among many others. Therefore, in practice, it is necessary to adjust the propagation of the fire to obtain better results, either to support suppression activities or to improve the performance of the simulator by adopting new default parameters for future events that best represent the current fire spread growth phenomenon. In this article, we address this problem through a new methodology using Derivative-Free Optimization (DFO) algorithms for adjusting the Rate of Spread (ROS) factors in a fire growth simulation model called Cell2Fire. To achieve this, we solve an error minimization problem that captures the difference between the simulated and observed fire, evaluating the simulator output in each iteration as part of a DFO framework; this allows us to find the best possible factors for each fuel present on the landscape. Numerical results for different objective functions are shown and discussed, including a performance comparison of alternative DFO algorithms.
Tasks
Published 2019-09-11
URL https://arxiv.org/abs/1909.05949v1
PDF https://arxiv.org/pdf/1909.05949v1.pdf
PWC https://paperswithcode.com/paper/adjusting-rate-of-spread-factors-through
Repo
Framework
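
As a rough illustration of the calibration loop described in the abstract, the sketch below wraps a black-box fire simulator in a derivative-free optimizer. Everything here is assumed for illustration: `simulate_fire_scar` is a hypothetical stand-in for Cell2Fire, the objective is a plain cell-wise scar mismatch, and Nelder-Mead is just one of several DFO methods the paper compares.

```python
# Hedged sketch of a DFO calibration loop for ROS factors.
import numpy as np
from scipy.optimize import minimize

observed_scar = np.random.rand(50, 50) > 0.7  # placeholder observed burn map

def simulate_fire_scar(ros_factors):
    """Hypothetical black-box simulator: maps per-fuel ROS factors to a scar."""
    rng = np.random.default_rng(42)
    base = rng.random((50, 50))
    return base < np.asarray(ros_factors).mean() * 0.3  # toy dynamics only

def scar_mismatch(ros_factors):
    """Objective: fraction of cells where simulated and observed scars differ."""
    return np.mean(simulate_fire_scar(ros_factors) != observed_scar)

# Nelder-Mead is one derivative-free choice; each evaluation runs the simulator.
result = minimize(scar_mismatch, x0=np.ones(4), method="Nelder-Mead")
print("calibrated ROS factors:", result.x)
```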

Wireless Federated Distillation for Distributed Edge Learning with Heterogeneous Data

Title Wireless Federated Distillation for Distributed Edge Learning with Heterogeneous Data
Authors Jin-Hyun Ahn, Osvaldo Simeone, Joonhyuk Kang
Abstract Cooperative training methods for distributed machine learning typically assume noiseless and ideal communication channels. This work studies some of the opportunities and challenges arising from the presence of wireless communication links. We specifically consider wireless implementations of Federated Learning (FL) and Federated Distillation (FD), as well as of a novel Hybrid Federated Distillation (HFD) scheme. Both digital implementations based on separate source-channel coding and over-the-air computing implementations based on joint source-channel coding are proposed and evaluated over Gaussian multiple-access channels.
Tasks
Published 2019-07-05
URL https://arxiv.org/abs/1907.02745v1
PDF https://arxiv.org/pdf/1907.02745v1.pdf
PWC https://paperswithcode.com/paper/wireless-federated-distillation-for
Repo
Framework
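
The core saving in federated distillation, as described above, is that clients exchange compact output statistics rather than full model weights. Below is a minimal sketch under that reading; all shapes and the local statistic `local_class_logits` are invented for illustration, and the paper's digital and over-the-air channel implementations are not modeled.

```python
# Hedged sketch: clients share averaged per-class logits, not parameters.
import numpy as np

NUM_CLASSES, NUM_CLIENTS = 10, 5

def local_class_logits(client_id):
    """Hypothetical local statistic: mean logit vector per class."""
    rng = np.random.default_rng(client_id)
    return rng.standard_normal((NUM_CLASSES, NUM_CLASSES))

# The server aggregates small (classes x classes) logit tables -- a far
# smaller payload than exchanging full model parameters.
global_logits = np.mean(
    [local_class_logits(c) for c in range(NUM_CLIENTS)], axis=0
)

def distillation_loss(student_logits, labels):
    """Pull each client's per-class outputs toward the global average."""
    teacher = global_logits[labels]          # (batch, num_classes)
    return np.mean((student_logits - teacher) ** 2)

batch_labels = np.array([0, 3, 7])
print(distillation_loss(np.zeros((3, NUM_CLASSES)), batch_labels))
```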

Learning to Align Multi-Camera Domains using Part-Aware Clustering for Unsupervised Video Person Re-Identification

Title Learning to Align Multi-Camera Domains using Part-Aware Clustering for Unsupervised Video Person Re-Identification
Authors Youngeun Kim, Seokeon Choi, Taekyung Kim, Sumin Lee, Changick Kim
Abstract Most video person re-identification (re-ID) methods are mainly based on supervised learning, which requires cross-camera ID labeling. Since the cost of labeling increases dramatically as the number of cameras increases, it is difficult to apply the re-identification algorithm to a large camera network. In this paper, we address the scalability issue by presenting deep representation learning without ID information across multiple cameras. Technically, we train neural networks to generate both ID-discriminative and camera-invariant features. To achieve the ID discrimination ability of the embedding features, we maximize feature distances between different person IDs within a camera by using a metric learning approach. At the same time, considering each camera as a different domain, we apply adversarial learning across multiple camera domains for generating camera-invariant features. We also propose a part-aware adaptation module, which effectively performs multi-camera domain invariant feature learning in different spatial regions. We carry out comprehensive experiments on three public re-ID datasets (i.e., PRID-2011, iLIDS-VID, and MARS). Our method outperforms state-of-the-art methods by a large margin of about 20% in terms of rank-1 accuracy on the large-scale MARS dataset.
Tasks Metric Learning, Person Re-Identification, Representation Learning, Video-Based Person Re-Identification
Published 2019-09-29
URL https://arxiv.org/abs/1909.13248v3
PDF https://arxiv.org/pdf/1909.13248v3.pdf
PWC https://paperswithcode.com/paper/learning-to-align-multi-camera-domain-for
Repo
Framework
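
One hedged reading of the camera-adversarial component above is a gradient-reversal setup: a camera classifier trains on the embedding while reversed gradients push the backbone toward camera-invariant features. The toy layers below are placeholders, not the paper's part-aware adaptation module.

```python
# Sketch of adversarial camera-domain alignment via gradient reversal.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output  # flip gradients flowing into the backbone

features = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # toy backbone
camera_head = nn.Linear(64, 6)                            # 6 camera domains

x = torch.randn(32, 128)
cam_labels = torch.randint(0, 6, (32,))

feat = features(x)
cam_logits = camera_head(GradReverse.apply(feat))
adv_loss = nn.functional.cross_entropy(cam_logits, cam_labels)
adv_loss.backward()  # backbone receives reversed (adversarial) gradients
```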

Video-Based Convolutional Attention for Person Re-Identification

Title Video-Based Convolutional Attention for Person Re-Identification
Authors Marco Zamprogno, Marco Passon, Niki Martinel, Giuseppe Serra, Giuseppe Lancioni, Christian Micheloni, Carlo Tasso, Gian Luca Foresti
Abstract In this paper we consider the problem of video-based person re-identification, which is the task of associating videos of the same person captured by different and non-overlapping cameras. We propose a Siamese framework in which video frames of the person to re-identify and of the candidate one are processed by two identical networks which produce a similarity score. We introduce an attention mechanism to capture the relevant information both at the frame level (spatial information) and at the video level (temporal information, given by the importance of a specific frame within the sequence). One of the novelties of our approach is the joint, concurrent processing of both the frame and video levels, yielding a very simple architecture. Despite this simplicity, our approach achieves better performance than the state of the art on the challenging iLIDS-VID dataset.
Tasks Person Re-Identification, Video-Based Person Re-Identification
Published 2019-09-26
URL https://arxiv.org/abs/1910.04856v1
PDF https://arxiv.org/pdf/1910.04856v1.pdf
PWC https://paperswithcode.com/paper/video-based-convolutional-attention-for
Repo
Framework
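
A minimal sketch of the Siamese comparison with temporal attention described above: each video is pooled by learned frame weights, and the two pooled features are scored for similarity. The per-frame encoder and all dimensions are placeholder assumptions, not the paper's joint spatial/temporal design.

```python
# Sketch: attention-weighted temporal pooling inside a Siamese comparison.
import torch
import torch.nn as nn

frame_encoder = nn.Linear(512, 128)      # stand-in for a per-frame CNN
attn = nn.Linear(128, 1)                 # scores each frame's importance

def encode_video(frames):                # frames: (T, 512)
    feats = frame_encoder(frames)        # (T, 128)
    weights = torch.softmax(attn(feats), dim=0)
    return (weights * feats).sum(dim=0)  # attention-weighted video feature

query, candidate = torch.randn(16, 512), torch.randn(16, 512)
similarity = nn.functional.cosine_similarity(
    encode_video(query), encode_video(candidate), dim=0
)
print(float(similarity))
```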

Almost Group Envy-free Allocation of Indivisible Goods and Chores

Title Almost Group Envy-free Allocation of Indivisible Goods and Chores
Authors Haris Aziz, Simon Rey
Abstract We consider a multi-agent resource allocation setting in which an agent’s utility may decrease or increase when an item is allocated. We take the group envy-freeness concept that is well-established in the literature and present stronger and relaxed versions that are especially suitable for the allocation of indivisible items. Of particular interest is a concept called group envy-freeness up to one item (GEF1). We then present a clear taxonomy of the fairness concepts. We study which fairness concepts guarantee the existence of a fair allocation under which preference domain. For two natural classes of additive utilities, we design polynomial-time algorithms to compute a GEF1 allocation. We also prove that checking whether a given allocation satisfies GEF1 is coNP-complete when there are either only goods, only chores or both.
Tasks
Published 2019-07-16
URL https://arxiv.org/abs/1907.09279v1
PDF https://arxiv.org/pdf/1907.09279v1.pdf
PWC https://paperswithcode.com/paper/almost-group-envy-free-allocation-of
Repo
Framework
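
For grounding, GEF1 generalizes the standard envy-freeness up to one item (EF1) from individual agents to groups. The sketch below checks plain pairwise EF1 for additive utilities that may be positive (goods) or negative (chores); it is not the paper's group-level check, which quantifies over coalitions rather than pairs.

```python
# EF1 check for additive utilities over goods and chores.
def is_ef1(utilities, allocation):
    """utilities[i][g]: agent i's additive utility for item g.
    allocation[i]: list of items held by agent i."""
    for i, u in enumerate(utilities):
        own = sum(u[g] for g in allocation[i])
        for j, bundle in enumerate(allocation):
            if i == j:
                continue
            other = sum(u[g] for g in bundle)
            if own >= other:
                continue
            # EF1: envy must vanish after removing one item, either a good
            # from j's bundle or a chore from i's own bundle.
            fixable = any(own >= other - u[g] for g in bundle) or \
                      any(own - u[g] >= other for g in allocation[i])
            if not fixable:
                return False
    return True

utils = [[2, 1], [2, 1]]
print(is_ef1(utils, [[0], [1]]))    # True: one removal kills the envy
print(is_ef1(utils, [[0, 1], []]))  # False: agent 1's envy survives any removal
```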

From Data Quality to Model Quality: an Exploratory Study on Deep Learning

Title From Data Quality to Model Quality: an Exploratory Study on Deep Learning
Authors Tianxing He, Shengcheng Yu, Ziyuan Wang, Jieqiong Li, Zhenyu Chen
Abstract Nowadays, people strive to improve the accuracy of deep learning models. However, very little work has focused on the quality of data sets. In fact, data quality determines model quality, so it is important to study how data quality affects model quality. In this paper, we consider four aspects of data quality: Dataset Equilibrium, Dataset Size, Quality of Label, and Dataset Contamination. We design experiments on MNIST and CIFAR-10 to find out how these four aspects influence model quality. Experimental results show that all four aspects have a decisive impact on model quality: a decrease in data quality along any of these aspects reduces the accuracy of the model.
Tasks
Published 2019-06-10
URL https://arxiv.org/abs/1906.11882v1
PDF https://arxiv.org/pdf/1906.11882v1.pdf
PWC https://paperswithcode.com/paper/from-data-quality-to-model-quality-an
Repo
Framework
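
A minimal sketch of one of the four experiments, the label-quality one: inject label noise at increasing rates and measure test accuracy. The classifier and data are toy stand-ins (logistic regression on synthetic blobs), not the paper's MNIST/CIFAR-10 setups.

```python
# Sketch: measuring how label noise degrades test accuracy.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_blobs(n_samples=2000, centers=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise_rate in (0.0, 0.1, 0.3, 0.5):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise_rate
    y_noisy[flip] = rng.integers(0, 4, flip.sum())  # random replacement labels
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"label noise {noise_rate:.0%}: test accuracy {acc:.3f}")
```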

Programmable Spectrometry – Per-pixel Classification of Materials using Learned Spectral Filters

Title Programmable Spectrometry – Per-pixel Classification of Materials using Learned Spectral Filters
Authors Vishwanath Saragadam, Aswin C. Sankaranarayanan
Abstract Many materials have distinct spectral profiles. This facilitates estimation of the material composition of a scene at each pixel by first acquiring its hyperspectral image, and subsequently filtering it using a bank of spectral profiles. This process is inherently wasteful since only a set of linear projections of the acquired measurements contribute to the classification task. We propose a novel programmable camera that is capable of producing images of a scene with an arbitrary spectral filter. We use this camera to optically implement the spectral filtering of the scene’s hyperspectral image with the bank of spectral profiles needed to perform per-pixel material classification. This provides gains both in terms of acquisition speed — since only the relevant measurements are acquired — and in signal-to-noise ratio — since we invariably avoid narrowband filters that are light inefficient. Given training data, we use a range of classical and modern techniques including SVMs and neural networks to identify the bank of spectral profiles that facilitate material classification. We verify the method in simulations on standard datasets as well as real data using a lab prototype of the camera.
Tasks Material Classification
Published 2019-05-13
URL https://arxiv.org/abs/1905.04815v1
PDF https://arxiv.org/pdf/1905.04815v1.pdf
PWC https://paperswithcode.com/paper/programmable-spectrometry-per-pixel
Repo
Framework
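
The key observation above, that classification only consumes a few linear projections of each spectrum, can be sketched as follows: a linear SVM's weight vectors serve as the learned spectral filter bank, and per-pixel classification reduces to one projection per filter. Spectra and materials here are synthetic placeholders.

```python
# Sketch: learned spectral filters as linear SVM weights, applied per pixel.
import numpy as np
from sklearn.svm import LinearSVC

NUM_BANDS, NUM_MATERIALS = 31, 3
rng = np.random.default_rng(1)

# Training spectra: noisy copies of each material's spectral signature.
signatures = rng.random((NUM_MATERIALS, NUM_BANDS))
X = np.vstack([signatures[m] + 0.05 * rng.standard_normal((200, NUM_BANDS))
               for m in range(NUM_MATERIALS)])
y = np.repeat(np.arange(NUM_MATERIALS), 200)

svm = LinearSVC().fit(X, y)
filter_bank = svm.coef_            # (materials, bands): the "spectral filters"

# Per-pixel classification of a hyperspectral cube = one projection per filter.
cube = rng.random((64, 64, NUM_BANDS))
scores = cube @ filter_bank.T + svm.intercept_
labels = scores.argmax(axis=-1)    # (64, 64) material map
print(labels.shape)
```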

A Splitting-Based Iterative Algorithm for GPU-Accelerated Statistical Dual-Energy X-Ray CT Reconstruction

Title A Splitting-Based Iterative Algorithm for GPU-Accelerated Statistical Dual-Energy X-Ray CT Reconstruction
Authors Fangda Li, Ankit Manerikar, Tanmay Prakash, Avinash Kak
Abstract When dealing with material classification in baggage at airports, Dual-Energy Computed Tomography (DECT) allows characterization of any given material with coefficients based on two attenuative effects: Compton scattering and photoelectric absorption. However, straightforward projection-domain decomposition methods for this characterization often yield poor reconstructions due to the high dynamic range of material properties encountered in an actual luggage scan. Hence, for better reconstruction quality under a timing constraint, we propose a splitting-based, GPU-accelerated, statistical DECT reconstruction algorithm. Compared to prior art, our main contribution lies in the significant acceleration made possible by separating reconstruction and decomposition within an ADMM framework. Experimental results, on both synthetic and real-world baggage phantoms, demonstrate a significant reduction in time required for convergence.
Tasks Material Classification
Published 2019-05-02
URL https://arxiv.org/abs/1905.00934v1
PDF https://arxiv.org/pdf/1905.00934v1.pdf
PWC https://paperswithcode.com/paper/a-splitting-based-iterative-algorithm-for-gpu
Repo
Framework
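
The splitting idea, alternating a reconstruction-type update and a proximal update coupled by a dual variable, is the standard ADMM pattern. The toy instance below solves a small lasso problem just to show that skeleton; the actual DECT operators (CT projection, Compton/photoelectric decomposition) are far more involved and are not reproduced here.

```python
# Sketch of the ADMM variable-splitting skeleton on a toy lasso problem.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = (np.arange(20) < 3).astype(float)
b = A @ x_true + 0.01 * rng.standard_normal(40)
lam, rho = 0.1, 1.0

x = z = u = np.zeros(20)
AtA_inv = np.linalg.inv(A.T @ A + rho * np.eye(20))
for _ in range(100):
    x = AtA_inv @ (A.T @ b + rho * (z - u))                        # "reconstruction"
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # proximal step
    u = u + x - z                                                  # dual update
print(np.round(z, 2))  # sparse estimate, close to x_true
```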

Boosting Object Recognition in Point Clouds by Saliency Detection

Title Boosting Object Recognition in Point Clouds by Saliency Detection
Authors Marlon Marcon, Riccardo Spezialetti, Samuele Salti, Luciano Silva, Luigi Di Stefano
Abstract Object recognition in 3D point clouds is a challenging task, mainly when time is an important factor to deal with, such as in industrial applications. Local descriptors are an amenable choice whenever the 6 DoF pose of recognized objects should also be estimated. However, the pipeline for this kind of descriptor is highly time-consuming. In this work, we propose an update to the traditional pipeline, adding a preliminary filtering stage referred to as saliency boost. We perform tests on a standard object recognition benchmark, considering four keypoint detectors and four local descriptors, in order to compare time and recognition performance between the traditional pipeline and the boosted one. Timing results show that the boosted pipeline can be up to 5 times faster, with the recognition rate improving in most cases and exhibiting only a slight decrease in the others. These results suggest that the boosted pipeline can speed up processing substantially, with limited impact on, or even benefits to, recognition accuracy.
Tasks Object Recognition, Saliency Detection
Published 2019-11-06
URL https://arxiv.org/abs/1911.02286v1
PDF https://arxiv.org/pdf/1911.02286v1.pdf
PWC https://paperswithcode.com/paper/boosting-object-recognition-in-point-clouds
Repo
Framework
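
The saliency-boost stage amounts to a cheap filter in front of the expensive detector/descriptor pipeline. A trivial sketch follows, with the per-point saliency score left as a placeholder for any real saliency detector.

```python
# Sketch: keep only high-saliency points before keypoint detection.
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.random((100_000, 3))                 # toy point cloud

saliency = rng.random(len(cloud))                # stand-in saliency scores
keep = saliency > np.quantile(saliency, 0.8)     # retain the top 20% of points

boosted_cloud = cloud[keep]                      # much smaller input for the
print(len(cloud), "->", len(boosted_cloud))      # detector/descriptor stages
```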

Aurora Guard: Real-Time Face Anti-Spoofing via Light Reflection

Title Aurora Guard: Real-Time Face Anti-Spoofing via Light Reflection
Authors Yao Liu, Ying Tai, Jilin Li, Shouhong Ding, Chengjie Wang, Feiyue Huang, Dongyang Li, Wenshuai Qi, Rongrong Ji
Abstract In this paper, we propose a light-reflection-based face anti-spoofing method named Aurora Guard (AG), which is fast and simple yet effective, and has already been deployed in real-world systems serving millions of users. Specifically, our method first extracts normal cues via light reflection analysis, and then uses an end-to-end trainable multi-task Convolutional Neural Network (CNN) to not only recover subjects' depth maps to assist liveness classification, but also provide a light CAPTCHA checking mechanism in the regression branch to further improve system reliability. Moreover, we collect a large-scale dataset containing 12,000 live and spoofing samples, which covers abundant imaging qualities and Presentation Attack Instruments (PAI). Extensive experiments on both public datasets and ours demonstrate the superiority of the proposed method over the state of the art.
Tasks Face Anti-Spoofing
Published 2019-02-27
URL http://arxiv.org/abs/1902.10311v1
PDF http://arxiv.org/pdf/1902.10311v1.pdf
PWC https://paperswithcode.com/paper/aurora-guard-real-time-face-anti-spoofing-via
Repo
Framework
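
One way to read the multi-task design above is a shared encoder with three heads: depth recovery, liveness classification, and the light-CAPTCHA regression branch. The sketch below shows only that loss structure; layer sizes are arbitrary assumptions, and the normal-cue extraction from reflection frames is not reproduced.

```python
# Sketch of a three-head multi-task loss over a shared encoder.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(8), nn.Flatten())
depth_head = nn.Linear(16 * 64, 32 * 32)   # coarse depth map recovery
live_head = nn.Linear(16 * 64, 2)          # live vs. spoof classification
captcha_head = nn.Linear(16 * 64, 4)       # regresses the light sequence code

x = torch.randn(8, 3, 64, 64)              # toy batch of face crops
feat = encoder(x)
loss = (nn.functional.mse_loss(depth_head(feat), torch.randn(8, 32 * 32))
        + nn.functional.cross_entropy(live_head(feat), torch.randint(0, 2, (8,)))
        + nn.functional.mse_loss(captcha_head(feat), torch.randn(8, 4)))
loss.backward()
```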

3D Point Cloud Denoising via Deep Neural Network based Local Surface Estimation

Title 3D Point Cloud Denoising via Deep Neural Network based Local Surface Estimation
Authors Chaojing Duan, Siheng Chen, Jelena Kovacevic
Abstract We present a neural-network-based architecture for 3D point cloud denoising called neural projection denoising (NPD). In our previous work, we proposed a two-stage denoising algorithm which first estimates reference planes and then projects noisy points onto the estimated reference planes. Since the estimated reference planes are inevitably noisy, multi-projection is applied to stabilize the denoising performance. The NPD algorithm uses a neural network to estimate reference planes for points in noisy point clouds. With more accurate estimates of the reference planes, we are able to achieve better denoising performance with only a one-time projection. To the best of our knowledge, NPD is the first work to denoise 3D point clouds with deep learning techniques. To conduct the experiments, we sample 40,000 point clouds from the 3D data in ShapeNet for training and 350 point clouds from the 3D data in ModelNet10 for testing. Experimental results show that our algorithm can estimate normal vectors of points in noisy point clouds. Compared with five competing methods, the proposed algorithm achieves better denoising performance and produces much smaller variances.
Tasks Denoising
Published 2019-04-09
URL http://arxiv.org/abs/1904.04427v1
PDF http://arxiv.org/pdf/1904.04427v1.pdf
PWC https://paperswithcode.com/paper/3d-point-cloud-denoising-via-deep-neural
Repo
Framework
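
A minimal sketch of the projection step described above: given an estimated reference plane (normal and point on the plane), denoise by projecting each noisy point onto it. Here a PCA plane fit stands in for the paper's neural estimator, and a single global plane replaces per-point local planes.

```python
# Sketch: denoise points by projecting them onto an estimated reference plane.
import numpy as np

def project_to_plane(point, normal, p0):
    normal = normal / np.linalg.norm(normal)
    return point - np.dot(point - p0, normal) * normal

def fit_plane(points):
    """PCA plane fit: centroid + smallest-variance direction as the normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid  # (normal, point on plane)

rng = np.random.default_rng(0)
noisy = np.c_[rng.random((500, 2)), 0.05 * rng.standard_normal(500)]  # z ~ 0 plane
normal, p0 = fit_plane(noisy)
denoised = np.array([project_to_plane(p, normal, p0) for p in noisy])
print(np.abs(denoised[:, 2]).mean())  # z-spread collapses toward the plane
```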

Autonomous Human Activity Classification from Ego-vision Camera and Accelerometer Data

Title Autonomous Human Activity Classification from Ego-vision Camera and Accelerometer Data
Authors Yantao Lu, Senem Velipasalar
Abstract There has been a significant amount of research on human activity classification relying either on Inertial Measurement Unit (IMU) data or on data from static cameras providing a third-person view. Using only IMU data limits the variety and complexity of the activities that can be detected. For instance, the sitting activity can be detected from IMU data, but it cannot be determined whether the subject has sat on a chair or a sofa, or where the subject is. To perform fine-grained activity classification from egocentric videos, and to distinguish between activities that cannot be differentiated by IMU data alone, we present an autonomous and robust method using data from both ego-vision cameras and IMUs. In contrast to convolutional neural network-based approaches, we propose to employ capsule networks to obtain features from egocentric video data. Moreover, a Convolutional Long Short-Term Memory framework is employed on both egocentric videos and IMU data to capture the temporal aspect of actions. We also propose a genetic algorithm-based approach to autonomously and systematically set various network parameters, rather than using manual settings. Experiments on 9- and 26-label activity classification show that the proposed method, using autonomously set network parameters, provides very promising results, achieving overall accuracies of 86.6% and 77.2%, respectively. The proposed approach combining both modalities also provides increased accuracy compared to using only ego-vision data or only IMU data.
Tasks Multimodal Activity Recognition
Published 2019-05-28
URL https://arxiv.org/abs/1905.13533v1
PDF https://arxiv.org/pdf/1905.13533v1.pdf
PWC https://paperswithcode.com/paper/190513533
Repo
Framework
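
A toy version of the genetic-algorithm parameter search mentioned above: score candidate parameter vectors, keep the fittest, and breed children by crossover and mutation. The fitness function is a hypothetical stand-in for "validation accuracy of the trained network".

```python
# Sketch of a genetic algorithm for setting network parameters.
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):                      # hypothetical validation score
    return -np.sum((params - 0.3) ** 2)   # toy objective, peaks at 0.3

pop = rng.random((20, 5))                 # 20 candidates, 5 parameters each
for generation in range(30):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the top half
    cut = rng.integers(1, 5, size=10)
    kids = np.array([np.r_[parents[i, :c], parents[(i + 1) % 10, c:]]
                     for i, c in enumerate(cut)])    # one-point crossover
    kids += 0.05 * rng.standard_normal(kids.shape)   # mutation
    pop = np.vstack([parents, kids])
print(pop[np.argmax([fitness(p) for p in pop])])     # best parameter vector
```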

An End-to-End Network for Co-Saliency Detection in One Single Image

Title An End-to-End Network for Co-Saliency Detection in One Single Image
Authors Yuanhao Yue, Qin Zou, Hongkai Yu, Qian Wang, Song Wang
Abstract As a common visual problem, co-saliency detection within a single image has not attracted enough attention and has yet to be well addressed. Existing methods often follow a bottom-up strategy to infer co-saliency in an image, where salient regions are first detected using visual primitives such as color and shape and then grouped and merged into a co-saliency map. However, co-saliency is intrinsically perceived in a complex manner, with bottom-up and top-down strategies combined in human vision. To deal with this problem, a novel end-to-end trainable network is proposed in this paper, which includes a backbone net and two branch nets. The backbone net uses ground-truth masks as top-down guidance for saliency prediction, while the two branch nets construct triplet proposals for feature organization and clustering, which drives the network to be sensitive to co-salient regions in a bottom-up way. To evaluate the proposed method, we construct a new dataset of 2,019 natural images with co-saliency in each image. Experimental results show that the proposed method achieves state-of-the-art accuracy at a running speed of 28 fps.
Tasks Co-Saliency Detection, Saliency Detection, Saliency Prediction
Published 2019-10-25
URL https://arxiv.org/abs/1910.11819v1
PDF https://arxiv.org/pdf/1910.11819v1.pdf
PWC https://paperswithcode.com/paper/an-end-to-end-network-for-co-saliency
Repo
Framework
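
The branch nets' triplet proposals suggest a standard triplet objective: pull co-salient proposal features together and push background proposals away. A bare sketch with random placeholder features in place of real proposal embeddings:

```python
# Sketch: triplet loss over co-salient vs. background proposal features.
import torch
import torch.nn.functional as F

anchor = torch.randn(16, 128, requires_grad=True)         # co-salient proposal
positive = anchor.detach() + 0.1 * torch.randn(16, 128)   # another co-salient one
negative = torch.randn(16, 128)                           # background proposal

loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0)
loss.backward()
```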

Bottleneck detection by slope difference distribution: a robust approach for separating overlapped cells

Title Bottleneck detection by slope difference distribution: a robust approach for separating overlapped cells
Authors ZhenZhou Wang
Abstract To separate overlapped cells, a bottleneck detection approach is proposed in this paper. The cell image is segmented by slope difference distribution (SDD) threshold selection. For each segmented binary clump, its one-dimensional boundary is computed as the distance distribution between its centroid and each point on the two-dimensional boundary. The bottleneck points of the one-dimensional boundary are detected by SDD and then transformed back into two-dimensional bottleneck points. The two largest concave parts of the binary clump are used to select the valid bottleneck points. The two bottleneck points from different concave parts with the minimum Euclidean distance are connected to separate the binary clump with a minimum cut. The binary clumps are separated iteratively until the number of computed concave parts is smaller than two. We use four types of openly accessible cell datasets to verify the effectiveness of the proposed approach, and experimental results show that it is significantly more robust than state-of-the-art methods.
Tasks
Published 2019-12-11
URL https://arxiv.org/abs/1912.05096v1
PDF https://arxiv.org/pdf/1912.05096v1.pdf
PWC https://paperswithcode.com/paper/bottleneck-detection-by-slope-difference
Repo
Framework
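
A sketch of the boundary parameterization above: reduce the 2D contour to a 1D signal of centroid-to-boundary distances, in which necks between overlapped cells show up as local minima. A plain local-minimum search stands in for the SDD detection step, which is not reproduced here.

```python
# Sketch: 1D centroid-to-boundary distance signal with bottleneck candidates.
import numpy as np

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
# Toy "two overlapped cells" contour: the radius dips at the necks.
radius = 1.0 - 0.4 * np.abs(np.cos(theta))
boundary = np.c_[radius * np.cos(theta), radius * np.sin(theta)]

centroid = boundary.mean(axis=0)
dist = np.linalg.norm(boundary - centroid, axis=1)   # the 1D boundary signal

is_min = (dist < np.roll(dist, 1)) & (dist < np.roll(dist, -1))
necks = boundary[is_min]          # candidate bottleneck points back in 2D
print(len(necks), "bottleneck candidates")
```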

Lossy Image Compression with Recurrent Neural Networks: from Human Perceived Visual Quality to Classification Accuracy

Title Lossy Image Compression with Recurrent Neural Networks: from Human Perceived Visual Quality to Classification Accuracy
Authors Maurice Weber, Cedric Renggli, Helmut Grabner, Ce Zhang
Abstract Deep neural networks have recently advanced the state of the art in image compression and surpassed many traditional compression algorithms. The training of such networks involves carefully trading off the entropy of the latent representation against reconstruction quality. The term quality crucially depends on the observer of the images, which, in the vast majority of the literature, is assumed to be human. In this paper, we go beyond this notion of quality and look at human visual perception and machine perception simultaneously. To that end, we propose a family of loss functions that allows us to optimize deep image compression depending on the observer and to interpolate between human-perceived visual quality and classification accuracy. Our experiments show that our proposed training objectives result in compression systems that, when trained with the machine-friendly loss, preserve accuracy much better than the traditional codecs BPG, WebP, and JPEG, without requiring fine-tuning of inference algorithms on decoded images and independent of the classifier architecture. At the same time, when using the human-friendly loss, we achieve competitive performance in terms of MS-SSIM.
Tasks Image Compression
Published 2019-10-08
URL https://arxiv.org/abs/1910.03472v1
PDF https://arxiv.org/pdf/1910.03472v1.pdf
PWC https://paperswithcode.com/paper/lossy-image-compression-with-recurrent-neural
Repo
Framework
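
The loss family described above can be sketched as a single interpolation scalar trading off a human-oriented distortion term against a machine-oriented classification term on the reconstruction. Plain MSE stands in for MS-SSIM, and both networks are toy placeholders for the paper's compression system and observer classifier.

```python
# Sketch: interpolating between human- and machine-friendly compression losses.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 784))
classifier = nn.Sequential(nn.Linear(784, 10))   # frozen "observer" network
for p in classifier.parameters():
    p.requires_grad_(False)

x = torch.rand(32, 784)
labels = torch.randint(0, 10, (32,))
recon = autoencoder(x)

beta = 0.5   # 1.0 = purely human-friendly, 0.0 = purely machine-friendly
loss = (beta * nn.functional.mse_loss(recon, x)
        + (1 - beta) * nn.functional.cross_entropy(classifier(recon), labels))
loss.backward()
```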