Paper Group ANR 731
Study of Robust Two-Stage Reduced-Dimension Sparsity-Aware STAP with Coprime Arrays. Conditional Adversarial Generative Flow for Controllable Image Synthesis. Siamese Neural Networks for Wireless Positioning and Channel Charting. Estimating covariance and precision matrices along subspaces. Decoupling feature propagation from the design of graph auto-encoders. Artificial Intelligence: Powering Human Exploration of the Moon and Mars. Seq2seq Translation Model for Sequential Recommendation. Leveraging Just a Few Keywords for Fine-Grained Aspect Detection Through Weakly Supervised Co-Training. Reverse-Engineering Deep ReLU Networks. Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling. Going deep in clustering high-dimensional data: deep mixtures of unigrams for uncovering topics in textual data. REFIT: a Unified Watermark Removal Framework for Deep Learning Systems with Limited Data. Making Better Mistakes: Leveraging Class Hierarchies with Deep Networks. Signed Input Regularization. Deep Learning for Detecting Building Defects Using Convolutional Neural Networks.
Study of Robust Two-Stage Reduced-Dimension Sparsity-Aware STAP with Coprime Arrays
Title | Study of Robust Two-Stage Reduced-Dimension Sparsity-Aware STAP with Coprime Arrays |
Authors | X. Wang, Z. Yang, J. Huang, R. C. de Lamare |
Abstract | Space-time adaptive processing (STAP) algorithms with coprime arrays can provide good clutter suppression potential at low cost in airborne radar systems, as compared with their uniform linear array counterparts. However, the performance of these algorithms is limited by the available training sample support in practical applications. To address this issue, a robust two-stage reduced-dimension (RD) sparsity-aware STAP algorithm is proposed in this work. In the first stage, an RD virtual snapshot is constructed using all spatial channels but only $m$ adjacent Doppler channels around the target Doppler frequency to reduce the slow-time dimension of the signal. In the second stage, an RD sparse measurement model is formulated based on the constructed RD virtual snapshot, where the sparsity of clutter and prior knowledge of the clutter ridge are exploited to formulate an RD overcomplete dictionary. Moreover, an orthogonal matching pursuit (OMP)-like method is proposed to recover the clutter subspace. In order to set the stopping parameter of the OMP-like method, a robust clutter rank estimation approach is developed. Compared with recently developed sparsity-aware STAP algorithms, the size of the proposed sparse representation dictionary is much smaller, resulting in low complexity. Simulation results show that the proposed algorithm is robust to prior knowledge errors and can provide good clutter suppression performance at low sample support. (An illustrative code sketch follows this entry.) |
Tasks | |
Published | 2019-12-23 |
URL | https://arxiv.org/abs/2001.01560v1 |
https://arxiv.org/pdf/2001.01560v1.pdf | |
PWC | https://paperswithcode.com/paper/study-of-robust-two-stage-reduced-dimension |
Repo | |
Framework | |
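To make the sparse-recovery step concrete, here is a minimal NumPy sketch of generic orthogonal matching pursuit, the family of method the paper builds on. This is not the authors' RD-STAP variant: the random dictionary, the snapshot, and the fixed sparsity level below are toy assumptions.

```python
import numpy as np

def omp(D, y, k_max, tol=1e-6):
    """Generic orthogonal matching pursuit: greedily pick the dictionary
    column most correlated with the residual, then re-fit by least squares.
    A textbook sketch, not the paper's exact RD-STAP algorithm."""
    residual = y.copy()
    support = []
    for _ in range(k_max):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares re-fit on the selected support.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    return support, coef

# Toy example: recover a 3-sparse complex signal from a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256)) + 1j * rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
true_support = [10, 50, 200]
y = D[:, true_support] @ (rng.standard_normal(3) + 1j * rng.standard_normal(3))
support, coef = omp(D, y, k_max=3)                 # k_max plays the role of the
print(sorted(support))                             # estimated clutter rank
```

In the paper, the stopping parameter (`k_max` here) is set by the proposed robust clutter rank estimation rather than assumed known.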
Conditional Adversarial Generative Flow for Controllable Image Synthesis
Title | Conditional Adversarial Generative Flow for Controllable Image Synthesis |
Authors | Rui Liu, Yu Liu, Xinyu Gong, Xiaogang Wang, Hongsheng Li |
Abstract | Flow-based generative models show great potential in image synthesis due to their reversible pipelines and exact log-likelihood objectives, yet they suffer from a weak ability to perform conditional image synthesis, especially for multi-label or unaware conditions. This is because the underlying distribution of image conditions is hard to measure precisely from the latent variable $z$. In this paper, based on modeling a joint probabilistic density of an image and its conditions, we propose a novel flow-based generative model named conditional adversarial generative flow (CAGlow). Instead of disentangling attributes from the latent space, we blaze a new trail by learning an encoder that estimates the mapping from condition space to latent space in an adversarial manner. Given a specific condition $c$, CAGlow can encode it to a sampled $z$ and then enable robust conditional image synthesis in complex situations, such as combining person identity with multiple attributes. The proposed CAGlow can be trained in both supervised and unsupervised manners, and can thus synthesize images with conditional information such as categories, attributes, and even some unknown properties. Extensive experiments show that CAGlow ensures the independence of different conditions and outperforms regular Glow to a significant extent. (An illustrative code sketch follows this entry.) |
Tasks | Image Generation |
Published | 2019-04-03 |
URL | http://arxiv.org/abs/1904.01782v1 |
http://arxiv.org/pdf/1904.01782v1.pdf | |
PWC | https://paperswithcode.com/paper/conditional-adversarial-generative-flow-for |
Repo | |
Framework | |
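The core idea, an encoder mapped adversarially from condition space to latent space, can be sketched in isolation. The full model requires a pretrained Glow to supply "real" latents; the sketch below mocks those with Gaussian samples and uses illustrative MLP architectures and dimensions, so it shows only the adversarial mapping, not CAGlow itself.

```python
import torch
import torch.nn as nn

# Sketch of the adversarial condition-to-latent mapping only: an encoder maps
# a condition vector c to a latent z, and a discriminator learns to tell
# encoder outputs from "real" latents. In CAGlow the real latents come from a
# pretrained Glow applied to images; here they are mocked with Gaussians.
COND_DIM, LATENT_DIM = 10, 64

encoder = nn.Sequential(nn.Linear(COND_DIM, 128), nn.ReLU(),
                        nn.Linear(128, LATENT_DIM))
disc = nn.Sequential(nn.Linear(LATENT_DIM + COND_DIM, 128), nn.ReLU(),
                     nn.Linear(128, 1))

opt_e = torch.optim.Adam(encoder.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    c = torch.randint(0, 2, (32, COND_DIM)).float()   # toy multi-label conditions
    z_real = torch.randn(32, LATENT_DIM)              # stand-in for Glow latents
    z_fake = encoder(c)

    # Discriminator: real (z, c) pairs vs. encoder outputs.
    d_loss = bce(disc(torch.cat([z_real, c], 1)), torch.ones(32, 1)) + \
             bce(disc(torch.cat([z_fake.detach(), c], 1)), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Encoder: fool the discriminator so encoded latents match the flow prior.
    e_loss = bce(disc(torch.cat([z_fake, c], 1)), torch.ones(32, 1))
    opt_e.zero_grad(); e_loss.backward(); opt_e.step()
```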
Siamese Neural Networks for Wireless Positioning and Channel Charting
Title | Siamese Neural Networks for Wireless Positioning and Channel Charting |
Authors | Eric Lei, Oscar Castañeda, Olav Tirkkonen, Tom Goldstein, Christoph Studer |
Abstract | Neural networks have been proposed recently for positioning and channel charting of user equipments (UEs) in wireless systems. Both of these approaches process channel state information (CSI) that is acquired at a multi-antenna base-station in order to learn a function that maps CSI to location information. CSI-based positioning using deep neural networks requires a dataset that contains both CSI and associated location information. Channel charting (CC) only requires CSI information to extract relative position information. Since CC builds on dimensionality reduction, it can be implemented using autoencoders. In this paper, we propose a unified architecture based on Siamese networks that can be used for supervised UE positioning and unsupervised channel charting. In addition, our framework enables semisupervised positioning, where only a small set of location information is available during training. We use simulations to demonstrate that Siamese networks achieve performance similar to or better than existing positioning and CC approaches with a single, unified neural network architecture. (An illustrative code sketch follows this entry.) |
Tasks | Dimensionality Reduction |
Published | 2019-09-29 |
URL | https://arxiv.org/abs/1909.13355v1 |
https://arxiv.org/pdf/1909.13355v1.pdf | |
PWC | https://paperswithcode.com/paper/siamese-neural-networks-for-wireless |
Repo | |
Framework | |
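A minimal PyTorch sketch of the Siamese idea under stated assumptions: one shared network processes both CSI inputs of a pair, a supervised term anchors labeled samples to known positions, and an unsupervised channel-charting term asks embedding distances to track a CSI dissimilarity. The architecture, dimensions, and the particular dissimilarity are illustrative, not the paper's exact choices.

```python
import torch
import torch.nn as nn

CSI_DIM, EMB_DIM = 128, 2                           # toy CSI feature and map sizes
net = nn.Sequential(nn.Linear(CSI_DIM, 256), nn.ReLU(),
                    nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, EMB_DIM))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def semisupervised_loss(x1, x2, pos1=None, alpha=1.0):
    y1, y2 = net(x1), net(x2)                       # shared weights: Siamese
    d_emb = (y1 - y2).norm(dim=1)
    d_csi = (x1 - x2).norm(dim=1)                   # toy CSI dissimilarity
    loss = ((d_emb - d_csi) ** 2).mean()            # preserve pairwise geometry
    if pos1 is not None:                            # labeled anchors, if any
        loss = loss + alpha * ((y1 - pos1) ** 2).sum(dim=1).mean()
    return loss

# Toy training step on random data; pos1 is known only for a small subset
# in the semisupervised setting.
x1, x2 = torch.randn(32, CSI_DIM), torch.randn(32, CSI_DIM)
pos1 = torch.randn(32, 2)                           # known UE positions for x1
loss = semisupervised_loss(x1, x2, pos1)
opt.zero_grad(); loss.backward(); opt.step()
```

Dropping the supervised term recovers pure channel charting; using it on all pairs recovers fully supervised positioning, which is what makes the architecture unified.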
Estimating covariance and precision matrices along subspaces
Title | Estimating covariance and precision matrices along subspaces |
Authors | Zeljko Kereta, Timo Klock |
Abstract | We study the accuracy of estimating the covariance and the precision matrix of a $D$-variate sub-Gaussian distribution along a prescribed subspace or direction using the finite sample covariance. Our results show that the estimation accuracy depends almost exclusively on the components of the distribution that correspond to the desired subspaces or directions. This is relevant and important for problems where the behavior of data along a lower-dimensional space is of specific interest, such as dimension reduction or structured regression problems. We also show that estimation of precision matrices is almost independent of the condition number of the covariance matrix. The presented applications include direction-sensitive eigenspace perturbation bounds, relative bounds for the smallest eigenvalue, and the estimation of the single-index model. For the latter, we propose a new estimator derived from the analysis, with strong theoretical guarantees and superior numerical performance. (An illustrative code sketch follows this entry.) |
Tasks | Dimensionality Reduction |
Published | 2019-09-26 |
URL | https://arxiv.org/abs/1909.12218v2 |
https://arxiv.org/pdf/1909.12218v2.pdf | |
PWC | https://paperswithcode.com/paper/estimating-covariance-and-precision-matrices |
Repo | |
Framework | |
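A small NumPy experiment illustrating the paper's message: the relative error of the sample covariance along a fixed direction $v$ depends on the variance along $v$, not on the full $D$-dimensional spectrum or its condition number. The dimensions and the spectrum below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
D, n = 200, 500
eigvals = np.logspace(0, -4, D)                   # ill-conditioned covariance
Q, _ = np.linalg.qr(rng.standard_normal((D, D)))  # random orthonormal basis
Sigma = Q @ np.diag(eigvals) @ Q.T

X = rng.multivariate_normal(np.zeros(D), Sigma, size=n)
Sigma_hat = X.T @ X / n                           # sample covariance (zero mean)

# Probe directions of large, medium, and tiny variance: the relative error of
# v^T Sigma_hat v stays comparable across all of them.
for j in [0, D // 2, D - 1]:
    v = Q[:, j]
    true_val = v @ Sigma @ v
    est_val = v @ Sigma_hat @ v
    print(f"direction {j}: v^T Sigma v = {true_val:.2e}, "
          f"relative error = {abs(est_val - true_val) / true_val:.3f}")
```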
Decoupling feature propagation from the design of graph auto-encoders
Title | Decoupling feature propagation from the design of graph auto-encoders |
Authors | Paul Scherer, Helena Andres-Terre, Pietro Lio, Mateja Jamnik |
Abstract | We present two instances, L-GAE and L-VGAE, of the (variational) graph auto-encoder family (VGAE), based on separating the feature propagation operations from the graph convolution layers typically found in graph learning methods, and collapsing them into a single linear matrix computation performed prior to input into standard auto-encoder architectures. This decoupling enables an independent and fixed design of the auto-encoder, without requiring additional GCN layers for every desired increase in the size of a node’s local receptive field. Fixing the auto-encoder enables a fairer assessment of the effect of the size of a node’s receptive field on the learned representations. Furthermore, a by-product of fixing the auto-encoder design is that the resulting networks are often substantially smaller than their VGAE counterparts, especially as the number of feature propagations increases. A comparative downstream evaluation on link prediction tasks shows performance comparable to the state of the art achieved by similar VGAE arrangements, despite the considerable simplification. We also show a simple application of our methodology to more challenging representation-learning scenarios, such as spatio-temporal graph representation learning. (An illustrative code sketch follows this entry.) |
Tasks | Graph Representation Learning, Link Prediction, Representation Learning |
Published | 2019-10-18 |
URL | https://arxiv.org/abs/1910.08589v1 |
https://arxiv.org/pdf/1910.08589v1.pdf | |
PWC | https://paperswithcode.com/paper/decoupling-feature-propagation-from-the |
Repo | |
Framework | |
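The decoupling itself is easy to show in NumPy: apply $k$ steps of the usual symmetrically normalized GCN propagation as one precomputed linear operation, then feed the result to any fixed auto-encoder. The graph, feature sizes, and choice of $k$ below are illustrative.

```python
import numpy as np

def propagate_features(A, X, k):
    """Return A_hat^k @ X with A_hat = D^{-1/2} (A + I) D^{-1/2},
    the standard GCN propagation operator with self-loops."""
    A_tilde = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    for _ in range(k):                            # k-hop receptive field
        X = A_hat @ X
    return X

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                    # random symmetric adjacency
X = rng.standard_normal((50, 16))                 # node features

X_prop = propagate_features(A, X, k=3)            # single linear precomputation
# X_prop can now be fed to a plain (variational) auto-encoder; increasing k
# changes only this preprocessing step, never the model architecture.
```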
Artificial Intelligence: Powering Human Exploration of the Moon and Mars
Title | Artificial Intelligence: Powering Human Exploration of the Moon and Mars |
Authors | Jeremy D. Frank |
Abstract | Over the past decade, the NASA Autonomous Systems and Operations (ASO) project has developed and demonstrated numerous autonomy-enabling technologies employing AI techniques. Our work has employed AI in three distinct ways to enable autonomous mission operations capabilities. Crew Autonomy gives astronauts tools to assist in the performance of each of these mission operations functions. Vehicle System Management uses AI techniques to turn the astronaut’s spacecraft into a robot, allowing it to operate when astronauts are not present or to reduce astronaut workload. AI technology also enables Autonomous Robots to act as crew assistants or proxies when the crew are not present. We first describe human spaceflight mission operations capabilities. We then describe the ASO project and the development and demonstrations performed by ASO since 2011. We describe the AI techniques behind each of these demonstrations, which include a variety of symbolic automated reasoning and machine-learning-based approaches. Finally, we conclude with an assessment of future development needs for AI to enable NASA’s future Exploration missions. |
Tasks | |
Published | 2019-10-07 |
URL | https://arxiv.org/abs/1910.03014v1 |
https://arxiv.org/pdf/1910.03014v1.pdf | |
PWC | https://paperswithcode.com/paper/artificial-intelligence-powering-human |
Repo | |
Framework | |
Seq2seq Translation Model for Sequential Recommendation
Title | Seq2seq Translation Model for Sequential Recommendation |
Authors | Ke Sun, Tieyun Qian |
Abstract | Context information such as product category plays a critical role in sequential recommendation. Recent years have witnessed a growing interest in context-aware sequential recommender systems. Existing studies often treat contexts as auxiliary feature vectors without considering the sequential dependency within the contexts themselves. However, such a dependency provides valuable clues for predicting a user’s future behavior. For example, a user might buy electronic accessories after he/she buys an electronic product. In this paper, we propose a novel seq2seq translation architecture to highlight the importance of sequential dependency in contexts for sequential recommendation. Specifically, we first construct a collateral context sequence in addition to the main interaction sequence. We then generalize recent advancements in translation models from sequences of words in two languages to sequences of items and contexts in recommender systems. Taking the category information as an item’s context, we develop a basic coupled seq2seq translation model and an extended tripled one to encode the category-item and item-category-item relations between the item and context sequences. We conduct extensive experiments on three real-world datasets. The results demonstrate the superior performance of the proposed model compared with state-of-the-art baselines. (An illustrative code sketch follows this entry.) |
Tasks | Recommendation Systems |
Published | 2019-12-16 |
URL | https://arxiv.org/abs/1912.07274v2 |
https://arxiv.org/pdf/1912.07274v2.pdf | |
PWC | https://paperswithcode.com/paper/seq2seq-translation-model-for-sequential |
Repo | |
Framework | |
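A minimal PyTorch sketch of the "coupled" translation flavour: encode the collateral context (category) sequence with a GRU, then decode the item sequence conditioned on it, exactly as in machine translation. Vocabulary sizes, dimensions, and this particular coupling are illustrative stand-ins, not the paper's full coupled or tripled models.

```python
import torch
import torch.nn as nn

N_ITEMS, N_CATS, EMB, HID = 1000, 50, 32, 64      # toy vocabulary/model sizes

cat_emb = nn.Embedding(N_CATS, EMB)
item_emb = nn.Embedding(N_ITEMS, EMB)
encoder = nn.GRU(EMB, HID, batch_first=True)      # reads the category sequence
decoder = nn.GRU(EMB, HID, batch_first=True)      # generates the item sequence
out_proj = nn.Linear(HID, N_ITEMS)

def forward(cat_seq, item_seq):
    # Encode the category sequence; its final state initializes the decoder.
    _, h = encoder(cat_emb(cat_seq))
    # Decode items, teacher-forced on the observed item sequence.
    dec_out, _ = decoder(item_emb(item_seq[:, :-1]), h)
    return out_proj(dec_out)                      # scores for the next items

cats = torch.randint(0, N_CATS, (8, 10))          # toy batch: 8 users, length 10
items = torch.randint(0, N_ITEMS, (8, 10))
logits = forward(cats, items)
loss = nn.functional.cross_entropy(logits.reshape(-1, N_ITEMS),
                                   items[:, 1:].reshape(-1))
loss.backward()
```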
Leveraging Just a Few Keywords for Fine-Grained Aspect Detection Through Weakly Supervised Co-Training
Title | Leveraging Just a Few Keywords for Fine-Grained Aspect Detection Through Weakly Supervised Co-Training |
Authors | Giannis Karamanolakis, Daniel Hsu, Luis Gravano |
Abstract | User-generated reviews can be decomposed into fine-grained segments (e.g., sentences, clauses), each evaluating a different aspect of the principal entity (e.g., price, quality, appearance). Automatically detecting these aspects can be useful for both users and downstream opinion mining applications. Current supervised approaches for learning aspect classifiers require many fine-grained aspect labels, which are labor-intensive to obtain. And, unfortunately, unsupervised topic models often fail to capture the aspects of interest. In this work, we consider weakly supervised approaches for training aspect classifiers that only require the user to provide a small set of seed words (i.e., weakly positive indicators) for the aspects of interest. First, we show that current weakly supervised approaches do not effectively leverage the predictive power of seed words for aspect detection. Next, we propose a student-teacher approach that effectively leverages seed words in a bag-of-words classifier (teacher); in turn, we use the teacher to train a second model (student) that is potentially more powerful (e.g., a neural network that uses pre-trained word embeddings). Finally, we show that iterative co-training can be used to cope with noisy seed words, leading to both improved teacher and student models. Our proposed approach consistently outperforms previous weakly supervised approaches (by 14.1 absolute F1 points on average) in six different domains of product reviews and six multilingual datasets of restaurant reviews. (An illustrative code sketch follows this entry.) |
Tasks | Opinion Mining, Topic Models, Word Embeddings |
Published | 2019-09-01 |
URL | https://arxiv.org/abs/1909.00415v1 |
https://arxiv.org/pdf/1909.00415v1.pdf | |
PWC | https://paperswithcode.com/paper/leveraging-just-a-few-keywords-for-fine |
Repo | |
Framework | |
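The teacher-student idea in miniature, under toy assumptions: a bag-of-words "teacher" labels each segment by counting seed-word hits per aspect, and a more expressive "student" is trained on those pseudo-labels. The seed words and documents are invented, the student here is a logistic regression rather than the paper's neural model, and the iterative co-training loop is only indicated in a comment.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

seed_words = {0: ["price", "cheap", "expensive"],      # aspect 0: price
              1: ["taste", "delicious", "flavor"]}     # aspect 1: quality
docs = ["very cheap for the price", "delicious flavor and great taste",
        "expensive but the taste is great", "the price was fair"]

vec = CountVectorizer()
X = vec.fit_transform(docs)
vocab = vec.vocabulary_

# Teacher: count seed-word occurrences per aspect, predict the argmax.
scores = np.zeros((len(docs), len(seed_words)))
for aspect, words in seed_words.items():
    idx = [vocab[w] for w in words if w in vocab]
    scores[:, aspect] = X[:, idx].sum(axis=1).A.ravel()
pseudo_labels = scores.argmax(axis=1)

# Student: any stronger model trained on the teacher's pseudo-labels. Its
# predictions could then refine the teacher, and so on (the co-training loop).
student = LogisticRegression().fit(X, pseudo_labels)
print(student.predict(vec.transform(["the flavor was amazing"])))
```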
Reverse-Engineering Deep ReLU Networks
Title | Reverse-Engineering Deep ReLU Networks |
Authors | David Rolnick, Konrad P. Kording |
Abstract | It has been widely assumed that a neural network cannot be recovered from its outputs, as the network depends on its parameters in a highly nonlinear way. Here, we prove that in fact it is often possible to identify the architecture, weights, and biases of an unknown deep ReLU network by observing only its output. Every ReLU network defines a piecewise linear function, where the boundaries between linear regions correspond to inputs for which some neuron in the network switches between inactive and active ReLU states. By dissecting the set of region boundaries into components associated with particular neurons, we show both theoretically and empirically that it is possible to recover the weights of neurons and their arrangement within the network, up to isomorphism. (An illustrative code sketch follows this entry.) |
Tasks | |
Published | 2019-10-02 |
URL | https://arxiv.org/abs/1910.00744v2 |
https://arxiv.org/pdf/1910.00744v2.pdf | |
PWC | https://paperswithcode.com/paper/identifying-weights-and-architectures-of-1 |
Repo | |
Framework | |
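The geometric fact the paper exploits is easy to observe numerically: along any line through input space, a ReLU network's slope is constant within a linear region and jumps where some neuron flips state. The sketch below scans a tiny random 2-8-1 network for those jumps; the network, line, and scan resolution are illustrative, and this is only the observation, not the paper's full recovery procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
w2 = rng.standard_normal(8)

def f(x):                                         # tiny 2-8-1 ReLU network
    return w2 @ np.maximum(W1 @ x + b1, 0.0)

# Scan f along a line x(t) = p + t*d and detect slope changes.
p, d = np.array([0.0, 0.0]), np.array([1.0, 0.5])
ts = np.linspace(-3, 3, 20001)
vals = np.array([f(p + t * d) for t in ts])
slopes = np.diff(vals) / np.diff(ts)
jumps = np.where(np.abs(np.diff(slopes)) > 1e-6)[0]
jumps = jumps[np.diff(jumps, prepend=-10) > 1]    # merge adjacent detections

for j in jumps:
    t = ts[j + 1]
    # Which neuron flipped? The one whose pre-activation crosses zero here.
    pre = W1 @ (p + t * d) + b1
    print(f"boundary near t={t:.3f}: neuron {np.argmin(np.abs(pre))} flips")
```

Each detected boundary point constrains the flipping neuron's weights and bias; aggregating many such points across many lines is what allows recovery up to isomorphism.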
Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling
Title | Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling |
Authors | Tsu-Jui Fu, Xin Wang, Matthew Peterson, Scott Grafton, Miguel Eckstein, William Yang Wang |
Abstract | Vision-and-Language Navigation (VLN) is a task where agents must decide how to move through a 3D environment to reach a goal by grounding natural language instructions to the visual surroundings. One of the problems of the VLN task is data scarcity, since it is difficult to collect enough navigation paths with human-annotated instructions for interactive environments. In this paper, we explore the use of counterfactual thinking as a human-inspired data augmentation method that results in robust models. Counterfactual thinking is a concept that describes the human propensity to create possible alternatives to life events that have already occurred. We propose an adversarial-driven counterfactual reasoning model that can consider effective conditions instead of low-quality augmented data. In particular, we present a model-agnostic adversarial path sampler (APS) that learns to sample challenging paths that force the navigator to improve based on the navigation performance. APS also serves to do pre-exploration of unseen environments to strengthen the model’s ability to generalize. We evaluate the influence of APS on the performance of different VLN baseline models using the room-to-room dataset (R2R). The results show that the adversarial training process with our proposed APS benefits VLN models under both seen and unseen environments, and the pre-exploration process can yield further improvements under unseen environments. |
Tasks | Data Augmentation |
Published | 2019-11-17 |
URL | https://arxiv.org/abs/1911.07308v1 |
https://arxiv.org/pdf/1911.07308v1.pdf | |
PWC | https://paperswithcode.com/paper/counterfactual-vision-and-language-navigation |
Repo | |
Framework | |
Going deep in clustering high-dimensional data: deep mixtures of unigrams for uncovering topics in textual data
Title | Going deep in clustering high-dimensional data: deep mixtures of unigrams for uncovering topics in textual data |
Authors | Laura Anderlucci, Cinzia Viroli |
Abstract | Mixtures of Unigrams (Nigam et al., 2000) are one of the simplest and most efficient tools for clustering textual data, as they assume that documents related to the same topic have similar distributions of terms, naturally described by Multinomials. When the classification task is particularly challenging, such as when the document-term matrix is high-dimensional and extremely sparse, a more composite representation can provide better insight into the grouping structure. In this work, we develop a deep version of mixtures of Unigrams for the unsupervised classification of very short documents with a large number of terms, by allowing for models with deeper latent layers; the proposal is derived in a Bayesian framework. Simulation studies and real data analysis show that going deep in clustering such data substantially improves classification accuracy with respect to more 'shallow' methods. (An illustrative code sketch follows this entry.) |
Tasks | |
Published | 2019-02-18 |
URL | http://arxiv.org/abs/1902.06615v1 |
http://arxiv.org/pdf/1902.06615v1.pdf | |
PWC | https://paperswithcode.com/paper/going-deep-in-clustering-high-dimensional |
Repo | |
Framework | |
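As a baseline reference, here is the shallow model the paper deepens: a mixture of unigrams fit by EM, where each cluster is a multinomial over the vocabulary and each document gets soft responsibilities. The toy counts, vocabulary size, and number of clusters are illustrative; the paper's contribution is the additional latent layers stacked on top of this, which are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs, vocab, K = 100, 30, 3
X = rng.poisson(1.0, size=(n_docs, vocab))        # toy document-term counts

pi = np.full(K, 1.0 / K)                          # mixture weights
theta = rng.dirichlet(np.ones(vocab), size=K)     # per-cluster term distributions

for _ in range(50):
    # E-step: responsibilities from each document's log-likelihood per cluster
    # (multinomial log-likelihood up to a constant: sum_w x_w log theta_kw).
    log_r = np.log(pi)[None, :] + X @ np.log(theta).T
    log_r -= log_r.max(axis=1, keepdims=True)     # stabilize the softmax
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and multinomial parameters.
    pi = r.mean(axis=0)
    theta = (r.T @ X) + 1e-10                     # smooth to avoid zeros
    theta /= theta.sum(axis=1, keepdims=True)

clusters = r.argmax(axis=1)
print(np.bincount(clusters, minlength=K))         # cluster sizes
```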
REFIT: a Unified Watermark Removal Framework for Deep Learning Systems with Limited Data
Title | REFIT: a Unified Watermark Removal Framework for Deep Learning Systems with Limited Data |
Authors | Xinyun Chen, Wenxiao Wang, Chris Bender, Yiming Ding, Ruoxi Jia, Bo Li, Dawn Song |
Abstract | Deep neural networks (DNNs) have achieved tremendous success in various fields; however, training these models from scratch can be computationally expensive and requires a lot of training data. Recent work has explored different watermarking techniques to protect pre-trained deep neural networks from potential copyright infringements; however, these techniques can be vulnerable to adversaries who aim at removing the watermarks. In this work, we propose REFIT, a unified watermark removal framework based on fine-tuning, which does not rely on knowledge of the watermarks or even of the watermarking schemes. Firstly, we demonstrate that, with a properly designed learning rate schedule, fine-tuning-based approaches can in fact be effective. Furthermore, we conduct a comprehensive study of a realistic attack scenario where the adversary has limited training data. To effectively remove the watermarks without compromising model functionality under this weak threat model, we propose to incorporate two techniques: (1) an adaptation of the elastic weight consolidation (EWC) algorithm, originally proposed for mitigating the catastrophic forgetting phenomenon; and (2) unlabeled data augmentation (AU), where we leverage auxiliary unlabeled data from other sources. Our extensive evaluation shows the effectiveness of REFIT against diverse watermark embedding schemes. In particular, both EWC and AU significantly decrease the amount of labeled training data needed for effective watermark removal, and the unlabeled data samples used for AU need not be drawn from the same distribution as the benign data used for model evaluation. The experimental results demonstrate that our fine-tuning-based watermark removal attacks pose real threats to the copyright of pre-trained models, and thus highlight the importance of further investigation of the watermarking problem. (An illustrative code sketch follows this entry.) |
Tasks | Data Augmentation |
Published | 2019-11-17 |
URL | https://arxiv.org/abs/1911.07205v2 |
https://arxiv.org/pdf/1911.07205v2.pdf | |
PWC | https://paperswithcode.com/paper/refit-a-unified-watermark-removal-framework |
Repo | |
Framework | |
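The EWC ingredient can be sketched in isolation: penalize movement of parameters that a Fisher-information estimate marks as important, so fine-tuning can change the model (here, hypothetically, removing a watermark) without destroying benign accuracy. The model, data, single-batch Fisher estimate, and penalty strength below are toy stand-ins, not REFIT's actual setup.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))   # stand-in benign data

# Crude diagonal Fisher estimate: squared gradients of the loss on one batch.
model.zero_grad()
nn.functional.cross_entropy(model(x), y).backward()
fisher = {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 10.0                                        # strength of the EWC penalty
for step in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    # EWC term: quadratic pull toward the anchor, weighted by importance.
    ewc = sum((fisher[n] * (p - anchor[n]) ** 2).sum()
              for n, p in model.named_parameters())
    (loss + lam * ewc).backward()
    opt.step()
```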
Making Better Mistakes: Leveraging Class Hierarchies with Deep Networks
Title | Making Better Mistakes: Leveraging Class Hierarchies with Deep Networks |
Authors | Luca Bertinetto, Romain Mueller, Konstantinos Tertikas, Sina Samangooei, Nicholas A. Lord |
Abstract | Deep neural networks have improved image classification dramatically over the past decade, but have done so by focusing on performance measures that treat all classes other than the ground truth as equally wrong. This has led to a situation in which mistakes are less likely to be made than before, but are equally likely to be absurd or catastrophic when they do occur. Past works have recognised and tried to address this issue of mistake severity, often by using graph distances in class hierarchies, but this has largely been neglected since the advent of the current deep learning era in computer vision. In this paper, we aim to renew interest in this problem by reviewing past approaches and proposing two simple modifications of the cross-entropy loss which outperform the prior art under several metrics on two large datasets with complex class hierarchies: tieredImageNet and iNaturalist19. (An illustrative code sketch follows this entry.) |
Tasks | Image Classification |
Published | 2019-12-19 |
URL | https://arxiv.org/abs/1912.09393v1 |
https://arxiv.org/pdf/1912.09393v1.pdf | |
PWC | https://paperswithcode.com/paper/making-better-mistakes-leveraging-class |
Repo | |
Framework | |
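One plausible minimal form of a hierarchy-aware cross-entropy modification, sketched in PyTorch: replace the one-hot target with a soft label whose mass decays with tree distance from the true class. The 4-class toy hierarchy, distance matrix, and decay parameter `beta` are assumptions for illustration, not the paper's exact losses or datasets.

```python
import torch
import torch.nn.functional as F

# Toy classes: 0=tabby cat, 1=siamese cat, 2=beagle, 3=truck, with pairwise
# tree distances in a made-up hierarchy (animals are closer to each other
# than to vehicles).
dist = torch.tensor([[0., 1., 2., 4.],
                     [1., 0., 2., 4.],
                     [2., 2., 0., 4.],
                     [4., 4., 4., 0.]])

def soft_labels(targets, beta=1.0):
    # Row c of the soft-label matrix is softmax(-beta * d(c, :)): mass decays
    # with hierarchy distance from the true class c.
    return F.softmax(-beta * dist[targets], dim=1)

def soft_cross_entropy(logits, targets, beta=1.0):
    log_probs = F.log_softmax(logits, dim=1)
    return -(soft_labels(targets, beta) * log_probs).sum(dim=1).mean()

logits = torch.randn(8, 4, requires_grad=True)
targets = torch.randint(0, 4, (8,))
loss = soft_cross_entropy(logits, targets)
loss.backward()
# Mistaking a tabby for a siamese is now penalized less than mistaking it for
# a truck, because the soft target reserves more mass for nearby classes.
```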
Signed Input Regularization
Title | Signed Input Regularization |
Authors | Saeid Asgari Taghanaki, Kumar Abhishek, Ghassan Hamarneh |
Abstract | Over-parameterized deep models usually over-fit to a given training distribution, which makes them sensitive to small changes and out-of-distribution samples at inference time, leading to low generalization performance. To this end, several model-based and randomized data-dependent regularization methods are applied, such as data augmentation, which prevents a model from memorizing the training distribution. Instead of the random transformation of the input images, we propose SIGN, a new regularization method, which modifies the input variables using a linear transformation by estimating each variable’s contribution to the final prediction. Our proposed technique maps the input data to a new manifold where the less important variables are de-emphasized. To test the effectiveness of the proposed idea and compare it with other competing methods, we design several test scenarios, such as classification performance, uncertainty, out-of-distribution, and robustness analyses. We compare the methods using three different datasets and four models. We find that SIGN encourages more compact class representations, which results in the model’s robustness to random corruptions and out-of-distribution samples while simultaneously achieving superior performance on normal data compared to other competing methods. Our experiments also demonstrate the successful transferability of the SIGN samples from one model to another. (An illustrative code sketch follows this entry.) |
Tasks | Data Augmentation |
Published | 2019-11-16 |
URL | https://arxiv.org/abs/1911.07086v3 |
https://arxiv.org/pdf/1911.07086v3.pdf | |
PWC | https://paperswithcode.com/paper/signed-input-regularization |
Repo | |
Framework | |
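A loose, assumption-laden reading of the idea in PyTorch: estimate each input variable's contribution to the prediction (here a plain gradient-times-input saliency, an assumption on our part) and linearly re-scale the input so low-contribution variables are de-emphasized. This is not the authors' exact transformation, only a sketch of "modify inputs by estimated contribution"; the model and dimensions are toy.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(16, 10, requires_grad=True)

# Contribution estimate: gradient of the top logit w.r.t. each input variable,
# times the input itself (a standard saliency heuristic, assumed here).
top = model(x).max(dim=1).values.sum()
top.backward()
saliency = (x.grad * x).abs()
weights = saliency / (saliency.max(dim=1, keepdim=True).values + 1e-8)

# Linear re-scaling: less important variables are shrunk toward zero.
x_sign = (x * weights).detach()
# x_sign would then replace (or supplement) x during training, playing a role
# analogous to a data-dependent augmentation/regularization step.
print(x_sign.shape)
```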
Deep Learning for Detecting Building Defects Using Convolutional Neural Networks
Title | Deep Learning for Detecting Building Defects Using Convolutional Neural Networks |
Authors | Husein Perez, Joseph H. M. Tah, Amir Mosavi |
Abstract | Clients are increasingly looking for fast and effective means to quickly and frequently survey and communicate the condition of their buildings so that essential repair and maintenance work can be done proactively and in a timely manner, before it becomes too dangerous and expensive. Traditional methods for this type of work commonly involve engaging building surveyors to undertake a condition assessment: a lengthy site inspection that produces a systematic record of the physical condition of the building elements, including estimates of immediate and projected long-term costs of renewal, repair, and maintenance. Current asset condition assessment procedures are extremely time-consuming, laborious, and expensive, and pose health and safety threats to surveyors, particularly at height and roof levels, which are difficult to access. This paper aims at evaluating the application of convolutional neural networks (CNNs) towards the automated detection and localisation of key building defects, e.g., mould, deterioration, and stain, from images. The proposed model is based on a pre-trained VGG-16 CNN classifier (later compared with ResNet-50 and Inception models), with class activation mapping (CAM) for object localisation. The challenges and limitations of the model in real-life applications have been identified. The proposed model has proven to be robust and able to accurately detect and localise building defects. The approach is being developed with the potential to scale up and further advance to support automated detection of defects and deterioration of buildings in real time using mobile devices and drones. (An illustrative code sketch follows this entry.) |
Tasks | |
Published | 2019-08-06 |
URL | https://arxiv.org/abs/1908.04392v1 |
https://arxiv.org/pdf/1908.04392v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-learning-for-detecting-building-defects |
Repo | |
Framework | |
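Transfer learning plus CAM in miniature, sketched in PyTorch under stated assumptions: reuse a VGG-16 convolutional backbone, add global average pooling and a linear head for the defect classes, and form a coarse localisation map by weighting the last conv feature maps with the classifier weights. The three classes and the untrained head are illustrative, and the backbone below is instantiated without downloading weights; in practice you would load pretrained ImageNet weights and fine-tune. This is not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 3                                     # e.g., mould, deterioration, stain

# Conv layers of VGG-16 only; pass pretrained ImageNet weights here in practice
# (e.g. weights="IMAGENET1K_V1" on recent torchvision versions).
backbone = models.vgg16().features
head = nn.Linear(512, N_CLASSES)                  # classifier after global pooling

x = torch.randn(1, 3, 224, 224)                   # stand-in for a building image
feat = backbone(x)                                # (1, 512, 7, 7) feature maps
logits = head(feat.mean(dim=(2, 3)))              # global average pool -> scores
cls = logits.argmax(dim=1).item()

# CAM: weight each of the 512 feature maps by its weight for the predicted
# class, giving a coarse 7x7 heat map of where the evidence lies.
cam = torch.einsum("c,chw->hw", head.weight[cls], feat[0])
cam = torch.relu(cam)
cam = cam / (cam.max() + 1e-8)                    # normalize to [0, 1]
print(cls, cam.shape)                             # upsample to overlay on the image
```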