Paper Group ANR 1177
DeepFault: Fault Localization for Deep Neural Networks
Title | DeepFault: Fault Localization for Deep Neural Networks |
Authors | Hasan Ferit Eniser, Simos Gerasimou, Alper Sen |
Abstract | Deep Neural Networks (DNNs) are increasingly deployed in safety-critical applications including autonomous vehicles and medical diagnostics. To reduce the residual risk for unexpected DNN behaviour and provide evidence for their trustworthy operation, DNNs should be thoroughly tested. The DeepFault whitebox DNN testing approach presented in our paper addresses this challenge by employing suspiciousness measures inspired by fault localization to establish the hit spectrum of neurons and identify suspicious neurons whose weights have not been calibrated correctly and thus are considered responsible for inadequate DNN performance. DeepFault also uses a suspiciousness-guided algorithm to synthesize new inputs, from correctly classified inputs, that increase the activation values of suspicious neurons. Our empirical evaluation on several DNN instances trained on MNIST and CIFAR-10 datasets shows that DeepFault is effective in identifying suspicious neurons. Also, the inputs synthesized by DeepFault closely resemble the original inputs, exercise the identified suspicious neurons and are highly adversarial. |
Tasks | Autonomous Vehicles |
Published | 2019-02-15 |
URL | http://arxiv.org/abs/1902.05974v1 |
http://arxiv.org/pdf/1902.05974v1.pdf | |
PWC | https://paperswithcode.com/paper/deepfault-fault-localization-for-deep-neural |
Repo | |
Framework | |
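As context for the suspiciousness idea above: spectrum-based fault localization scores a unit by how often it is covered in failing versus passing executions. Below is a minimal sketch applying the classic Ochiai formula to a neuron hit spectrum; the matrix layout, the activation-threshold notion of "coverage", and the helper name are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def ochiai_suspiciousness(hits, passed):
    """Rank neurons by an Ochiai-style suspiciousness score.

    hits:   (num_inputs, num_neurons) boolean matrix; hits[i, j] is True if
            neuron j was "covered" (activation above a threshold) on input i.
    passed: (num_inputs,) boolean vector; True if the DNN classified input i correctly.
    """
    failed = ~passed
    ef = hits[failed].sum(axis=0)        # activated on misclassified inputs
    ep = hits[passed].sum(axis=0)        # activated on correctly classified inputs
    nf = failed.sum() - ef               # not activated on misclassified inputs
    denom = np.sqrt((ef + nf) * (ef + ep))
    return ef / np.maximum(denom, 1e-12)

# Toy example: 5 inputs, 3 neurons; neuron 2 fires on both misclassified inputs,
# so it receives the highest suspiciousness score.
hits = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1], [1, 0, 1], [0, 1, 1]], dtype=bool)
passed = np.array([True, True, False, False, True])
print(ochiai_suspiciousness(hits, passed))
```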
Big-Data Clustering: K-Means or K-Indicators?
Title | Big-Data Clustering: K-Means or K-Indicators? |
Authors | Feiyu Chen, Yuchen Yang, Liwei Xu, Taiping Zhang, Yin Zhang |
Abstract | The K-means algorithm is arguably the most popular data clustering method, commonly applied to processed datasets in some “feature spaces”, as in spectral clustering. Highly sensitive to initializations, however, K-means encounters a scalability bottleneck with respect to the number of clusters K as this number grows in big data applications. In this work, we promote a closely related model called the K-indicators model and construct an efficient, semi-convex-relaxation algorithm that requires no randomized initializations. We present extensive empirical results to show the advantages of the new algorithm when K is large. In particular, using the new algorithm to start the K-means algorithm, without any replication, can significantly outperform the standard K-means with a large number of state-of-the-art random replications. |
Tasks | |
Published | 2019-06-03 |
URL | https://arxiv.org/abs/1906.00938v1 |
https://arxiv.org/pdf/1906.00938v1.pdf | |
PWC | https://paperswithcode.com/paper/190600938 |
Repo | |
Framework | |
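The last claim in the abstract — using the new algorithm to start K-means without replications — can be pictured with scikit-learn, which accepts an explicit array of initial centers. The sketch below only illustrates that seeding pattern; the `seed_centers` stand-in is hypothetical, since the K-indicators solver itself is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
K = 50

# Baseline: standard K-means with several random replications (n_init restarts).
km_random = KMeans(n_clusters=K, init="k-means++", n_init=10, random_state=0).fit(X)

# The paper's recipe: seed K-means once with centers produced by a deterministic
# procedure (here a placeholder subsample; not the actual K-indicators output).
seed_centers = X[np.linspace(0, len(X) - 1, K, dtype=int)]
km_seeded = KMeans(n_clusters=K, init=seed_centers, n_init=1).fit(X)

print(km_random.inertia_, km_seeded.inertia_)   # compare clustering objectives
```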
Handwritten Chinese Character Recognition by Convolutional Neural Network and Similarity Ranking
Title | Handwritten Chinese Character Recognition by Convolutional Neural Network and Similarity Ranking |
Authors | Junyi Zou, Jinliang Zhang, Ludi Wang |
Abstract | Convolutional Neural Networks (CNNs) have recently achieved state-of-the-art performance on handwritten Chinese character recognition (HCCR). However, most CNN models employ the SoftMax activation function and minimize cross-entropy loss, which may cause loss of inter-class information. To cope with this problem, we propose to combine cross entropy with a similarity ranking function and use the combination as the loss function. The experimental results show that the combined loss functions produce higher accuracy in HCCR. This report briefly reviews the cross-entropy loss function and a typical similarity ranking function, Euclidean distance, and also proposes a new similarity ranking function: average variance similarity. Experiments are done to compare the performance of a CNN model with three different loss functions. In the end, SoftMax cross entropy with average variance similarity produces the highest accuracy on handwritten Chinese character recognition. |
Tasks | |
Published | 2019-08-30 |
URL | https://arxiv.org/abs/1908.11550v1 |
https://arxiv.org/pdf/1908.11550v1.pdf | |
PWC | https://paperswithcode.com/paper/handwritten-chinese-character-recognition-by |
Repo | |
Framework | |
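The proposed loss combines cross entropy with a similarity ranking term. Below is a minimal PyTorch sketch of one plausible combination — cross entropy plus a Euclidean pull toward per-class feature centers; the exact form of the paper's average variance similarity and the weight `lam` are assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, features, centers, labels, lam=0.01):
    """Cross entropy plus a Euclidean similarity term (a rough sketch).
    logits:   (B, num_classes) classifier outputs
    features: (B, D) penultimate-layer features
    centers:  (num_classes, D) learnable per-class feature centers
    labels:   (B,) ground-truth class indices
    """
    ce = F.cross_entropy(logits, labels)
    # Pull each feature toward the center of its own class (squared Euclidean distance).
    sim = ((features - centers[labels]) ** 2).sum(dim=1).mean()
    return ce + lam * sim
```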
Autonomous Air Traffic Controller: A Deep Multi-Agent Reinforcement Learning Approach
Title | Autonomous Air Traffic Controller: A Deep Multi-Agent Reinforcement Learning Approach |
Authors | Marc Brittain, Peng Wei |
Abstract | Air traffic control is a real-time, safety-critical decision-making process in highly dynamic and stochastic environments. In today’s aviation practice, a human air traffic controller monitors and directs many aircraft flying through a designated airspace sector. With the fast-growing air traffic complexity in traditional (commercial airliners) and low-altitude (drones and eVTOL aircraft) airspace, an autonomous air traffic control system is needed to accommodate high-density air traffic and ensure safe separation between aircraft. We propose a deep multi-agent reinforcement learning framework that is able to identify and resolve conflicts between aircraft in a high-density, stochastic, and dynamic en-route sector with multiple intersections and merging points. The proposed framework utilizes an actor-critic model (A2C) that incorporates the loss function from Proximal Policy Optimization (PPO) to help stabilize the learning process. In addition, we use a centralized-learning, decentralized-execution scheme in which one neural network is learned and shared by all agents in the environment. We show that our framework is both scalable and efficient for a large number of incoming aircraft, achieving extremely high traffic throughput with safety guarantees. We evaluate our model via extensive simulations in the BlueSky environment. Results show that our framework is able to resolve 99.97% and 100% of all conflicts at intersections and merging points, respectively, in extremely high-density air traffic scenarios. |
Tasks | Decision Making, Multi-agent Reinforcement Learning |
Published | 2019-05-02 |
URL | https://arxiv.org/abs/1905.01303v1 |
https://arxiv.org/pdf/1905.01303v1.pdf | |
PWC | https://paperswithcode.com/paper/autonomous-air-traffic-controller-a-deep |
Repo | |
Framework | |
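The framework reportedly uses an actor-critic (A2C) model with the PPO loss to stabilize learning. For reference, a minimal sketch of the standard PPO clipped surrogate loss is given below; this is the generic formula, not the authors' multi-agent training code.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective from PPO, written as a loss to minimize.
    logp_new / logp_old: log-probabilities of the taken actions under the
    current and behaviour policies; advantages: estimated advantages.
    """
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```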
vqSGD: Vector Quantized Stochastic Gradient Descent
Title | vqSGD: Vector Quantized Stochastic Gradient Descent |
Authors | Venkata Gandikota, Daniel Kane, Raj Kumar Maity, Arya Mazumdar |
Abstract | In this work, we present a family of vector quantization schemes vqSGD (Vector-Quantized Stochastic Gradient Descent), that provide an asymptotic reduction in the communication cost with convergence guarantees in distributed optimization. In particular, we consider a randomized scheme based on the convex hull of a point set, that returns an unbiased estimator of a d-dimensional gradient vector with bounded variance. We provide multiple efficient instances of our scheme that require only o(d) bits of communication at the expense of a reasonable increase in variance. The instances of our quantization scheme are obtained using the properties of binary error-correcting codes and provide a smooth tradeoff between the communication and the variance of quantization. Furthermore, we show that vqSGD also offers strong privacy guarantees. |
Tasks | Distributed Optimization, Quantization |
Published | 2019-11-18 |
URL | https://arxiv.org/abs/1911.07971v2 |
https://arxiv.org/pdf/1911.07971v2.pdf | |
PWC | https://paperswithcode.com/paper/vqsgd-vector-quantized-stochastic-gradient |
Repo | |
Framework | |
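To make the convex-hull idea concrete, here is a simplified cross-polytope-style quantizer: the gradient is replaced by a single scaled, signed basis vector chosen so that the estimator is unbiased, so only a coordinate index, a sign, and one scalar need to be communicated. This is an illustration in the spirit of vqSGD, not necessarily the exact construction or variance bound analysed in the paper.

```python
import numpy as np

def quantize_to_vertex(g, rng):
    """Unbiased one-point quantizer: E[output] = g."""
    l1 = np.abs(g).sum()
    if l1 == 0:
        return np.zeros_like(g)
    i = rng.choice(len(g), p=np.abs(g) / l1)   # pick coordinate i w.p. |g_i| / ||g||_1
    v = np.zeros_like(g)
    v[i] = np.sign(g[i]) * l1                  # E[v] = sum_i (|g_i|/l1) * sign(g_i) * l1 * e_i = g
    return v

rng = np.random.default_rng(0)
g = np.array([0.5, -1.0, 0.25])
est = np.mean([quantize_to_vertex(g, rng) for _ in range(20000)], axis=0)
print(est)   # close to g, confirming unbiasedness
```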
Is Fast Adaptation All You Need?
Title | Is Fast Adaptation All You Need? |
Authors | Khurram Javed, Hengshuai Yao, Martha White |
Abstract | Gradient-based meta-learning has proven to be highly effective at learning model initializations, representations, and update rules that allow fast adaptation from a few samples. The core idea behind these approaches is to use fast adaptation and generalization – two second-order metrics – as training signals on a meta-training dataset. However, little attention has been given to other possible second-order metrics. In this paper, we investigate a different training signal – robustness to catastrophic interference – and demonstrate that representations learned by directly minimizing interference are more conducive to incremental learning than those learned by just maximizing fast adaptation. |
Tasks | Meta-Learning |
Published | 2019-10-03 |
URL | https://arxiv.org/abs/1910.01705v1 |
https://arxiv.org/pdf/1910.01705v1.pdf | |
PWC | https://paperswithcode.com/paper/is-fast-adaptation-all-you-need |
Repo | |
Framework | |
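The abstract contrasts fast adaptation with robustness to interference as a training signal. One common proxy for interference between two samples is the negative inner product of their loss gradients on shared parameters; the sketch below computes that proxy and is only an illustration of the concept, not the authors' meta-objective.

```python
import torch

def interference(params, loss_a, loss_b):
    """Negative inner product of the two loss gradients w.r.t. shared parameters:
    aligned gradients suggest positive transfer, anti-aligned gradients suggest
    interference. Assumes both losses depend on every tensor in `params`.
    """
    ga = torch.autograd.grad(loss_a, params, retain_graph=True)
    gb = torch.autograd.grad(loss_b, params, retain_graph=True)
    return -sum((a * b).sum() for a, b in zip(ga, gb))

# Typical call: interference(list(model.parameters()), loss_a, loss_b)
```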
Learned-SBL: A Deep Learning Architecture for Sparse Signal Recovery
Title | Learned-SBL: A Deep Learning Architecture for Sparse Signal Recovery |
Authors | Rubin Jose Peter, Chandra R. Murthy |
Abstract | In this paper, we present a computationally efficient sparse signal recovery scheme using Deep Neural Networks (DNNs). The architecture of the introduced neural network is inspired by sparse Bayesian learning (SBL) and is named Learned-SBL (L-SBL). We design a common architecture to recover sparse as well as block-sparse vectors from a single measurement vector (SMV) or multiple measurement vectors (MMV), depending on the nature of the training data. In the MMV model, the L-SBL network can be trained to learn any underlying sparsity pattern among the vectors, including joint sparsity, block sparsity, etc. In particular, for block-sparse recovery, Learned-SBL does not require any prior knowledge of block boundaries. In each layer of the L-SBL, an estimate of the signal covariance matrix is obtained as the output of a neural network. Then a maximum a posteriori (MAP) estimator of the unknown sparse vector is implemented with non-trainable parameters. In many applications, the measurement matrix may be time-varying. Existing DNN-based sparse signal recovery schemes demand retraining of the neural network using the current measurement matrix. The architecture of L-SBL allows it to accept the measurement matrix as an input to the network, thereby avoiding the need for retraining. We also evaluate the performance of Learned-SBL in the detection of an extended target using a multiple-input multiple-output (MIMO) radar. Simulation results illustrate that the proposed approach offers superior sparse recovery performance compared to the state-of-the-art methods. |
Tasks | |
Published | 2019-09-17 |
URL | https://arxiv.org/abs/1909.08185v1 |
https://arxiv.org/pdf/1909.08185v1.pdf | |
PWC | https://paperswithcode.com/paper/learned-sbl-a-deep-learning-architecture-for |
Repo | |
Framework | |
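Each L-SBL layer is described as producing a covariance estimate and then applying a MAP estimator with non-trainable parameters. The sketch below shows the standard SBL MAP/posterior-mean step that such a layer could wrap, with the measurement matrix passed in as an input so it can change without retraining; dimensions and the noise level are assumptions, not the released architecture.

```python
import torch

def sbl_map_estimate(Phi, Y, gamma, sigma2=1e-2):
    """Standard SBL posterior-mean / MAP step:
        X_hat = Gamma Phi^T (Phi Gamma Phi^T + sigma^2 I)^{-1} Y
    Phi:   (m, n) measurement matrix, given as an input rather than baked in
    Y:     (m, L) single (L = 1) or multiple measurement vectors
    gamma: (n,) nonnegative prior variances, e.g. the output of a small network
           inside one L-SBL layer (here simply an argument)
    """
    Gamma = torch.diag(gamma)
    m = Phi.shape[0]
    S = Phi @ Gamma @ Phi.T + sigma2 * torch.eye(m)
    return Gamma @ Phi.T @ torch.linalg.solve(S, Y)
```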
Convex Hierarchical Clustering for Graph-Structured Data
Title | Convex Hierarchical Clustering for Graph-Structured Data |
Authors | Claire Donnat, Susan Holmes |
Abstract | Convex clustering is a recent stable alternative to hierarchical clustering. It formulates the recovery of progressively coalescing clusters as a regularized convex problem. While convex clustering was originally designed for handling Euclidean distances between data points, in a growing number of applications the data is directly characterized by a similarity matrix or weighted graph. In this paper, we extend the robust hierarchical clustering approach to these broader classes of similarities. Having defined an appropriate convex objective, the crux of this adaptation lies in our ability to provide: (a) an efficient recovery of the regularization path and (b) an empirical demonstration of the use of our method. We address the first challenge through a proximal dual algorithm, for which we characterize both the theoretical efficiency and the empirical performance on a set of experiments. Finally, we highlight the potential of our method by showing its application to several real-life datasets, thus providing a natural extension to the current scope of applications of convex clustering. |
Tasks | |
Published | 2019-11-08 |
URL | https://arxiv.org/abs/1911.03417v2 |
https://arxiv.org/pdf/1911.03417v2.pdf | |
PWC | https://paperswithcode.com/paper/convex-hierarchical-clustering-for-graph |
Repo | |
Framework | |
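For orientation, the standard weighted convex clustering objective that this line of work builds on is

$$\min_{U} \;\frac{1}{2}\sum_{i=1}^{n}\lVert u_i - x_i\rVert_2^2 \;+\; \lambda \sum_{(i,j)\in E} w_{ij}\,\lVert u_i - u_j\rVert_2,$$

where increasing $\lambda$ progressively fuses the centroids $u_i$ and traces out the hierarchy. The paper's contribution is an extension of this setup to data given directly as a similarity matrix or weighted graph, together with a proximal dual solver; that formulation is not reproduced here.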
Phrase Localization Without Paired Training Examples
Title | Phrase Localization Without Paired Training Examples |
Authors | Josiah Wang, Lucia Specia |
Abstract | Localizing phrases in images is an important part of image understanding and can be useful in many applications that require mappings between textual and visual information. Existing work attempts to learn these mappings from examples of phrase-image region correspondences (strong supervision) or from phrase-image pairs (weak supervision). We postulate that such paired annotations are unnecessary, and propose the first method for the phrase localization problem where neither a training procedure nor paired, task-specific data is required. Our method is simple but effective: we use off-the-shelf approaches to detect objects, scenes and colours in images, and explore different approaches to measure semantic similarity between the categories of detected visual elements and words in phrases. Experiments on two well-known phrase localization datasets show that this approach surpasses all weakly supervised methods by a large margin and performs very competitively with strongly supervised methods, and can thus be considered a strong baseline for the task. The non-paired nature of our method makes it applicable to any domain where no paired phrase localization annotation is available. |
Tasks | Semantic Similarity, Semantic Textual Similarity |
Published | 2019-08-20 |
URL | https://arxiv.org/abs/1908.07553v1 |
https://arxiv.org/pdf/1908.07553v1.pdf | |
PWC | https://paperswithcode.com/paper/190807553 |
Repo | |
Framework | |
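The method pairs off-the-shelf detectors with a word-level semantic similarity measure. Below is a minimal sketch of that matching step; the `embed` lookup, the use of cosine similarity, and averaging over tokens are assumptions standing in for the several similarity measures the paper actually explores.

```python
import numpy as np

def localize_phrase(phrase_tokens, detections, embed):
    """Pick the detected region whose category label is semantically closest to
    the phrase. `detections` is a list of (label, box) pairs from any off-the-shelf
    detector; `embed` maps a word to a vector (e.g. a GloVe lookup).
    """
    def avg(words):
        return np.mean([embed(w) for w in words], axis=0)

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    p = avg(phrase_tokens)
    scores = [cos(p, avg(label.split())) for label, _ in detections]
    return detections[int(np.argmax(scores))]   # (label, box) of the best match
```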
Gated Recurrent Units Learning for Optimal Deployment of Visible Light Communications Enabled UAVs
Title | Gated Recurrent Units Learning for Optimal Deployment of Visible Light Communications Enabled UAVs |
Authors | Yining Wang, Mingzhe Chen, Zhaohui Yang, Xue Hao, Tao Luo, Walid Saad |
Abstract | In this paper, the problem of optimizing the deployment of unmanned aerial vehicles (UAVs) equipped with visible light communication (VLC) capabilities is studied. In the studied model, the UAVs can simultaneously provide communications and illumination to service ground users. Ambient illumination increases the interference over VLC links while reducing the illumination threshold of the UAVs. Therefore, it is necessary to consider the illumination distribution of the target area for UAV deployment optimization. This problem is formulated as an optimization problem whose goal is to minimize the total transmit power while meeting the illumination and communication requirements of users. To solve this problem, an algorithm based on the machine learning framework of gated recurrent units (GRUs) is proposed. Using GRUs, the UAVs can model the long-term historical illumination distribution and predict the future illumination distribution. In order to reduce the complexity of the prediction algorithm while accurately predicting the illumination distribution, a Gaussian mixture model (GMM) is used to fit the illumination distribution of the target area at each time slot. Based on the predicted illumination distribution, the optimization problem is proved to be a convex optimization problem that can be solved by using duality. Simulations using real data from the Earth observations group (EOG) at NOAA/NCEI show that the proposed approach can achieve up to 22.1% reduction in transmit power compared to a conventional optimal UAV deployment that does not consider the illumination distribution. The results also show that UAVs must hover at areas having strong illumination, thus providing useful guidelines on the deployment of VLC-enabled UAVs. |
Tasks | |
Published | 2019-09-17 |
URL | https://arxiv.org/abs/1909.07554v1 |
https://arxiv.org/pdf/1909.07554v1.pdf | |
PWC | https://paperswithcode.com/paper/gated-recurrent-units-learning-for-optimal |
Repo | |
Framework | |
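The prediction stage couples a GRU with a GMM fit of each time slot's illumination distribution. A toy PyTorch module of that coupling is sketched below; the flattened GMM-parameter representation, hidden size, and one-step-ahead head are illustrative assumptions rather than the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class IlluminationPredictor(nn.Module):
    """GRU that maps a history of per-slot GMM parameters (weights, means,
    covariances flattened into one vector per slot) to the parameters of the
    next slot, as a stand-in for the prediction step described in the abstract.
    """
    def __init__(self, gmm_dim, hidden=64):
        super().__init__()
        self.gru = nn.GRU(gmm_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, gmm_dim)

    def forward(self, history):            # history: (batch, T, gmm_dim)
        out, _ = self.gru(history)
        return self.head(out[:, -1])       # predicted GMM parameters for slot T+1
```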
On the Evaluation and Real-World Usage Scenarios of Deep Vessel Segmentation for Retinography
Title | On the Evaluation and Real-World Usage Scenarios of Deep Vessel Segmentation for Retinography |
Authors | Tim Laibacher, André Anjos |
Abstract | We identify and address three research gaps in the field of vessel segmentation for funduscopy. The first focuses on the task of inference on high-resolution fundus images for which only a limited set of ground-truth data is publicly available. Notably, we highlight that simple rescaling and padding or cropping of lower-resolution datasets is surprisingly effective. Additionally, we explore the effectiveness of semi-supervised learning for better domain adaptation. Our results show competitive performance on a set of common public retinal vessel datasets using a small and light-weight neural network. For HRF, the only very high-resolution dataset currently available, we reach new state-of-the-art performance by solely relying on training images from lower-resolution datasets. The second topic concerns evaluation metrics. We investigate the variability of the F1-score on the existing datasets and report results for recent SOTA architectures. Our evaluation shows that most SOTA results are actually comparable to each other in performance. Last, we address the issue of reproducibility by open-sourcing our complete pipeline. |
Tasks | Domain Adaptation |
Published | 2019-09-09 |
URL | https://arxiv.org/abs/1909.03856v3 |
https://arxiv.org/pdf/1909.03856v3.pdf | |
PWC | https://paperswithcode.com/paper/on-the-evaluation-and-real-world-usage |
Repo | |
Framework | |
IvaNet: Learning to jointly detect and segment objects with the help of Local Top-Down Modules
Title | IvaNet: Learning to jointly detect and segment objects with the help of Local Top-Down Modules |
Authors | Shihua Huang, Lu Wang |
Abstract | Driven by Convolutional Neural Networks, object detection and semantic segmentation have gained significant improvements. However, existing methods on the basis of a full top-down module have limited robustness in handling those two tasks simultaneously. To this end, we present a joint multi-task framework, termed IvaNet. Different from existing methods, our IvaNet passes abstract semantic information backwards from higher layers to augment lower layers using local top-down modules. The comparisons against some counterparts on the PASCAL VOC and MS COCO datasets demonstrate the functionality of IvaNet. |
Tasks | Object Detection, Semantic Segmentation |
Published | 2019-03-18 |
URL | http://arxiv.org/abs/1903.07360v1 |
http://arxiv.org/pdf/1903.07360v1.pdf | |
PWC | https://paperswithcode.com/paper/ivanet-learning-to-jointly-detect-and-segment |
Repo | |
Framework | |
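The distinguishing component is a local top-down module that feeds semantic information from higher layers back into lower ones. Below is a minimal guess at such a module — fusing only adjacent levels rather than a full top-down pathway; the actual IvaNet design may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalTopDown(nn.Module):
    """Upsample features from the layer directly above, project them to the
    lower layer's channel width, and fuse them in, restricted to adjacent levels.
    """
    def __init__(self, low_ch, high_ch):
        super().__init__()
        self.project = nn.Conv2d(high_ch, low_ch, kernel_size=1)

    def forward(self, low_feat, high_feat):
        high = F.interpolate(high_feat, size=low_feat.shape[-2:], mode="nearest")
        return low_feat + self.project(high)
```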
A Character-Level Approach to the Text Normalization Problem Based on a New Causal Encoder
Title | A Character-Level Approach to the Text Normalization Problem Based on a New Causal Encoder |
Authors | Adrián Javaloy Bornás, Ginés García Mateos |
Abstract | Text normalization is a ubiquitous process that appears as the first step of many Natural Language Processing problems. However, previous Deep Learning approaches have suffered from so-called silly errors, which are undetectable in unsupervised frameworks, making those models unsuitable for deployment. In this work, we make use of an attention-based encoder-decoder architecture that overcomes these undetectable errors by using a fine-grained character-level approach rather than a word-level one. Furthermore, our new general-purpose encoder based on causal convolutions, called Causal Feature Extractor (CFE), is introduced and compared to other common encoders. The experimental results show the feasibility of this encoder, which leverages the attention mechanisms the most and obtains better results in terms of accuracy, number of parameters and convergence time. While our method results in a slightly worse initial accuracy (92.74%), errors can be automatically detected and, thus, more readily solved, obtaining a more robust model for deployment. Furthermore, there is still plenty of room for future improvements that will push these advantages even further. |
Tasks | |
Published | 2019-03-06 |
URL | http://arxiv.org/abs/1903.02642v1 |
http://arxiv.org/pdf/1903.02642v1.pdf | |
PWC | https://paperswithcode.com/paper/a-character-level-approach-to-the-text |
Repo | |
Framework | |
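The proposed Causal Feature Extractor is built on causal convolutions. The block below shows the standard way to make a 1-D convolution causal (left padding only), which is presumably the kind of layer such an encoder stacks; it is a generic sketch, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Character-level causal convolution: each output position only sees the
    current and previous characters, obtained by left-padding before Conv1d.
    """
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                     # x: (batch, channels, seq_len)
        return self.conv(F.pad(x, (self.pad, 0)))   # output keeps seq_len
```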
GPRInvNet: Deep Learning-Based Ground Penetrating Radar Data Inversion for Tunnel Lining
Title | GPRInvNet: Deep Learning-Based Ground Penetrating Radar Data Inversion for Tunnel Lining |
Authors | Bin Liu, Yuxiao Ren, Hanchi Liu, Hui Xu, Zhengfang Wang, Anthony G. Cohn, Peng Jiang |
Abstract | A DNN architecture called GPRInvNet is proposed to tackle the challenge of mapping Ground Penetrating Radar (GPR) B-Scan data to complex permittivity maps of subsurface structure. GPRInvNet consists of a trace-to-trace encoder and a decoder. It is specially designed to take account of the characteristics of GPR inversion when faced with complex GPR B-Scan data, as well as to address the spatial alignment issue between time-series B-Scan data and spatial permittivity maps. It fuses features from several adjacent traces on the B-Scan data to enhance each trace, and then further condenses the features of each trace separately. The sensitive zone on the permittivity map spatially aligned to the enhanced trace is reconstructed accurately. GPRInvNet has been utilized to reconstruct the permittivity maps of tunnel linings. A diverse range of dielectric models of tunnel lining containing complex defects has been reconstructed using GPRInvNet, and the results demonstrate that GPRInvNet is capable of effectively reconstructing complex tunnel lining defects with clear boundaries. Comparative results with existing baseline methods also demonstrate the superiority of GPRInvNet. To generalize GPRInvNet to real GPR data, we integrated background noise patches recorded from a practical model test into synthetic GPR data to train GPRInvNet. Model testing has been conducted for validation, and experimental results show that GPRInvNet achieves satisfactory results on real data. |
Tasks | Time Series |
Published | 2019-12-12 |
URL | https://arxiv.org/abs/1912.05759v2 |
https://arxiv.org/pdf/1912.05759v2.pdf | |
PWC | https://paperswithcode.com/paper/gprinvnet-deep-learning-based-ground |
Repo | |
Framework | |
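The encoder is described as fusing features from several adjacent traces to enhance each trace and then condensing each trace separately. The block below sketches one way to realize that with two convolutions over a (traces × time) B-scan; channel counts and kernel sizes are illustrative guesses, not the published architecture.

```python
import torch
import torch.nn as nn

class TraceEncoderBlock(nn.Module):
    """First convolution spans a few neighbouring traces (fusion step); the
    second uses a kernel of height 1 so each trace is condensed separately.
    """
    def __init__(self, in_ch=1, fuse_ch=16, out_ch=16, neighbours=5):
        super().__init__()
        self.fuse = nn.Conv2d(in_ch, fuse_ch, kernel_size=(neighbours, 7),
                              padding=(neighbours // 2, 3))
        self.condense = nn.Conv2d(fuse_ch, out_ch, kernel_size=(1, 7), padding=(0, 3))

    def forward(self, bscan):                 # bscan: (batch, 1, traces, time)
        return self.condense(torch.relu(self.fuse(bscan)))
```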
Inferring Super-Resolution Depth from a Moving Light-Source Enhanced RGB-D Sensor: A Variational Approach
Title | Inferring Super-Resolution Depth from a Moving Light-Source Enhanced RGB-D Sensor: A Variational Approach |
Authors | Lu Sang, Bjoern Haefner, Daniel Cremers |
Abstract | A novel approach towards depth map super-resolution using multi-view uncalibrated photometric stereo is presented. Practically, an LED light source is attached to a commodity RGB-D sensor and is used to capture objects from multiple viewpoints with unknown motion. This non-static camera-to-object setup is described with a nonconvex variational approach such that no calibration on lighting or camera motion is required due to the formulation of an end-to-end joint optimization problem. Solving the proposed variational model results in high resolution depth, reflectance and camera pose estimates, as we show on challenging synthetic and real-world datasets. |
Tasks | Calibration, Depth Map Super-Resolution, Super-Resolution |
Published | 2019-12-13 |
URL | https://arxiv.org/abs/1912.06501v1 |
https://arxiv.org/pdf/1912.06501v1.pdf | |
PWC | https://paperswithcode.com/paper/inferring-super-resolution-depth-from-a |
Repo | |
Framework | |