Paper Group ANR 958
Input Redundancy for Parameterized Quantum Circuits
Title | Input Redundancy for Parameterized Quantum Circuits |
Authors | Javier Gil Vidal, Dirk Oliver Theis |
Abstract | The topic area of this paper is parameterized quantum circuits (quantum neural networks) which are trained to estimate a given function, specifically the type of circuits proposed by Mitarai et al. (Phys. Rev. A, 2018). The input is encoded into amplitudes of states of qubits. The no-cloning principle of quantum mechanics suggests that there is an advantage in redundantly encoding the input value several times. We follow this suggestion and prove lower bounds on the number of redundant copies for two types of input encoding. We draw conclusions for the architecture design of QNNs. |
Tasks | |
Published | 2019-01-31 |
URL | http://arxiv.org/abs/1901.11434v1 |
http://arxiv.org/pdf/1901.11434v1.pdf | |
PWC | https://paperswithcode.com/paper/input-redundancy-for-parameterized-quantum |
Repo | |
Framework | |
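The role of redundant encoding can be seen in a toy single-qubit simulation (an illustrative sketch, not the exact circuit family from the paper; the gate layout and parameter names are assumptions):

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)  # measurement observable

def rz(a):
    """Rotation about the Z axis by angle a (used to encode the input)."""
    return np.array([[np.exp(-0.5j * a), 0], [0, np.exp(0.5j * a)]])

def ry(a):
    """Rotation about the Y axis by angle a (trainable layer)."""
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def qnn_output(x, thetas):
    """Expectation <Z> of a single-qubit circuit that re-encodes the
    input x once per trainable layer (redundant input encoding)."""
    psi = np.array([1, 0], dtype=complex)   # start in |0>
    for theta in thetas:                    # one RZ(x) encoding per layer
        psi = ry(theta) @ rz(x) @ psi
    return float(np.real(psi.conj() @ (Z @ psi)))
```

With a single encoding, the initial RZ(x) acts on |0⟩ only as a global phase, so the output cannot depend on x at all; re-encoding the input between trainable layers is what makes the circuit's output a nontrivial function of x, which is the intuition behind counting redundant copies.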
Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment
Title | Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment |
Authors | Ziqi Yang, Ee-Chien Chang, Zhenkai Liang |
Abstract | The rise of deep learning techniques has raised new privacy concerns about the training data and test data. In this work, we investigate the model inversion problem in adversarial settings, where the adversary aims at inferring information about the target model’s training data and test data from the model’s prediction values. We develop a solution to train a second neural network that acts as the inverse of the target model to perform the inversion. The inversion model can be trained with black-box accesses to the target model. We propose two main techniques towards training the inversion model in adversarial settings. First, we leverage the adversary’s background knowledge to compose an auxiliary set to train the inversion model, which does not require access to the original training data. Second, we design a truncation-based technique to align the inversion model to enable effective inversion of the target model from partial predictions that the adversary obtains on a victim user’s data. We systematically evaluate our inversion approach in various machine learning tasks and model architectures on multiple image datasets. Our experimental results show that even with no full knowledge about the target model’s training data, and with only partial prediction values, our inversion approach is still able to perform accurate inversion of the target model, and to outperform previous approaches. |
Tasks | |
Published | 2019-02-22 |
URL | http://arxiv.org/abs/1902.08552v1 |
http://arxiv.org/pdf/1902.08552v1.pdf | |
PWC | https://paperswithcode.com/paper/adversarial-neural-network-inversion-via |
Repo | |
Framework | |
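The truncation-based alignment can be sketched as follows (a minimal reading of the technique; the exact procedure in the paper may differ):

```python
import numpy as np

def truncate_prediction(probs, k):
    """Keep only the k largest prediction values, zero the rest, and
    renormalize -- applied both to the auxiliary predictions used to
    train the inversion model and to the partial predictions the
    adversary obtains, so the two distributions are aligned."""
    probs = np.asarray(probs, dtype=float)
    out = np.zeros_like(probs)
    top = np.argsort(probs)[-k:]      # indices of the k largest entries
    out[top] = probs[top]
    return out / out.sum()
```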
Mumford-Shah Loss Functional for Image Segmentation with Deep Learning
Title | Mumford-Shah Loss Functional for Image Segmentation with Deep Learning |
Authors | Boah Kim, Jong Chul Ye |
Abstract | Recent state-of-the-art image segmentation algorithms are mostly based on deep neural networks, thanks to their high performance and fast computation time. However, these methods are usually trained in a supervised manner, which requires a large number of high quality ground-truth segmentation masks. On the other hand, classical image segmentation approaches such as level-set methods are formulated in a self-supervised manner by minimizing energy functions such as the Mumford-Shah functional, so they are still useful for generating segmentation masks without labels. Unfortunately, these algorithms are usually computationally expensive and often have limitations in semantic segmentation. In this paper, we propose a novel loss function based on the Mumford-Shah functional that can be used in deep-learning based image segmentation with little or no labeled data. This loss function is based on the observation that the softmax layer of deep neural networks has a striking similarity to the characteristic function in the Mumford-Shah functional. We show that the new loss function enables semi-supervised and unsupervised segmentation. In addition, our loss function can also be used as a regularization term to enhance supervised semantic segmentation algorithms. Experimental results on multiple datasets demonstrate the effectiveness of the proposed method. |
Tasks | Semantic Segmentation, Unsupervised Semantic Segmentation |
Published | 2019-04-05 |
URL | https://arxiv.org/abs/1904.02872v2 |
https://arxiv.org/pdf/1904.02872v2.pdf | |
PWC | https://paperswithcode.com/paper/multiphase-level-set-loss-for-semi-supervised |
Repo | |
Framework | |
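The observation that the softmax map can stand in for the characteristic function suggests a loss of roughly this shape (a NumPy sketch with an assumed total-variation length term; the paper's exact functional may differ):

```python
import numpy as np

def mumford_shah_loss(image, softmax, lam=1e-3):
    """Mumford-Shah-style loss: the softmax channel y_k plays the role
    of the characteristic function of region k.
    image: (H, W); softmax: (K, H, W), channels summing to 1 per pixel."""
    loss = 0.0
    for y_k in softmax:
        c_k = (image * y_k).sum() / (y_k.sum() + 1e-8)   # region mean
        loss += ((image - c_k) ** 2 * y_k).sum()          # data fidelity
        # total-variation term approximating the contour-length penalty
        loss += lam * (np.abs(np.diff(y_k, axis=0)).sum()
                       + np.abs(np.diff(y_k, axis=1)).sum())
    return float(loss)
```

A piecewise-constant image segmented exactly along its true regions incurs only the small boundary penalty, while a non-informative uniform softmax pays the full within-region variance, which is what makes the loss usable without labels.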
A Retina-inspired Sampling Method for Visual Texture Reconstruction
Title | A Retina-inspired Sampling Method for Visual Texture Reconstruction |
Authors | Lin Zhu, Siwei Dong, Tiejun Huang, Yonghong Tian |
Abstract | Conventional frame-based cameras are not able to meet the demand for rapid reaction in real-time applications, while the emerging dynamic vision sensor (DVS) can realize high-speed capture of moving objects. However, to achieve visual texture reconstruction, a DVS needs extra information apart from the output spikes. This paper introduces a fovea-like sampling method inspired by the neuron signal processing in the retina, which aims at visual texture reconstruction using only the properties of spikes. In the proposed method, the pixels independently respond to the luminance changes with temporally asynchronous spikes. Analyzing the arrivals of spikes makes it possible to restore the luminance information, enabling reconstruction of the natural scene for visualization. Three decoding methods of the spike stream for texture reconstruction are proposed for high-speed motion and stationary scenes. Compared to conventional frame-based cameras and DVS, our model can achieve better image quality and higher flexibility, which is capable of changing the way that demanding machine vision applications are built. |
Tasks | |
Published | 2019-07-20 |
URL | https://arxiv.org/abs/1907.08769v1 |
https://arxiv.org/pdf/1907.08769v1.pdf | |
PWC | https://paperswithcode.com/paper/a-retina-inspired-sampling-method-for-visual |
Repo | |
Framework | |
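The restore-luminance-from-spike-arrivals idea can be sketched with an integrate-and-fire pixel (an illustrative model; the paper's fovea-like sampling and its three decoding methods are more elaborate):

```python
import numpy as np

def encode(luminance, steps, theta=255.0):
    """Integrate-and-fire pixel: accumulate luminance each time step and
    emit a spike (with reset) whenever the accumulator crosses theta."""
    acc, spikes = 0.0, []
    for t in range(steps):
        acc += luminance
        if acc >= theta:
            spikes.append(t)
            acc -= theta
    return spikes

def decode(spikes, theta=255.0):
    """Estimate the luminance from the mean inter-spike interval:
    brighter pixels fire more often, so luminance ~ theta / ISI."""
    isi = np.diff(spikes)
    return theta / isi.mean()
```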
Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems
Title | Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems |
Authors | Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych |
Abstract | Visual modifications to text are often used to obfuscate offensive comments in social media (e.g., “!d10t”) or as a writing style (“1337” in “leet speak”), among other scenarios. We consider this as a new type of adversarial attack in NLP, a setting to which humans are very robust, as our experiments with both simple and more difficult visual input perturbations demonstrate. We then investigate the impact of visual adversarial attacks on current NLP systems on character-, word-, and sentence-level tasks, showing that both neural and non-neural models are, in contrast to humans, extremely sensitive to such attacks, suffering performance decreases of up to 82%. We then explore three shielding methods—visual character embeddings, adversarial training, and rule-based recovery—which substantially improve the robustness of the models. However, the shielding methods still fall behind performances achieved in non-attack scenarios, which demonstrates the difficulty of dealing with visual attacks. |
Tasks | Adversarial Attack |
Published | 2019-03-27 |
URL | http://arxiv.org/abs/1903.11508v1 |
http://arxiv.org/pdf/1903.11508v1.pdf | |
PWC | https://paperswithcode.com/paper/text-processing-like-humans-do-visually |
Repo | |
Framework | |
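A minimal version of such a visual perturbation might look like this (the confusability table here is hand-written for illustration; the paper derives visually similar characters from rendered character images, not a fixed map):

```python
import random

# Tiny illustrative visual-confusability table (an assumption, not the
# paper's embedding-derived neighbour set).
HOMOGLYPHS = {"a": "@àá", "e": "3èé", "i": "1!í", "o": "0ò", "l": "1|", "s": "$5"}

def visual_attack(text, p=0.5, seed=0):
    """Replace each attackable character with a visually similar one
    with probability p, leaving other characters untouched."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch in HOMOGLYPHS and rng.random() < p:
            out.append(rng.choice(HOMOGLYPHS[ch]))
        else:
            out.append(ch)
    return "".join(out)
```

Humans read the perturbed string effortlessly, while a model whose embeddings treat "í" and "i" as unrelated tokens sees entirely different input, which is the asymmetry the paper exploits and then shields against.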
Interactive Search and Exploration in Online Discussion Forums Using Multimodal Embeddings
Title | Interactive Search and Exploration in Online Discussion Forums Using Multimodal Embeddings |
Authors | Iva Gornishka, Stevan Rudinac, Marcel Worring |
Abstract | In this paper we present a novel interactive multimodal learning system, which facilitates search and exploration in large networks of social multimedia users. It allows the analyst to identify and select users of interest, and to find similar users in an interactive learning setting. Our approach is based on novel multimodal representations of users, words and concepts, which we simultaneously learn by deploying a general-purpose neural embedding model. We show these representations to be useful not only for categorizing users, but also for automatically generating user and community profiles. Inspired by traditional summarization approaches, we create the profiles by selecting diverse and representative content from all available modalities, i.e. the text, image and user modality. The usefulness of the approach is evaluated using artificial actors, which simulate user behavior in a relevance feedback scenario. Multiple experiments were conducted in order to evaluate the quality of our multimodal representations, to compare different embedding strategies, and to determine the importance of different modalities. We demonstrate the capabilities of the proposed approach on two different multimedia collections originating from the violent online extremism forum Stormfront and the microblogging platform Twitter, which are particularly interesting due to the high semantic level of the discussions they feature. |
Tasks | |
Published | 2019-05-07 |
URL | https://arxiv.org/abs/1905.02430v1 |
https://arxiv.org/pdf/1905.02430v1.pdf | |
PWC | https://paperswithcode.com/paper/interactive-search-and-exploration-in-online |
Repo | |
Framework | |
Dealing with Label Scarcity in Computational Pathology: A Use Case in Prostate Cancer Classification
Title | Dealing with Label Scarcity in Computational Pathology: A Use Case in Prostate Cancer Classification |
Authors | Koen Dercksen, Wouter Bulten, Geert Litjens |
Abstract | Large amounts of unlabelled data are commonplace for many applications in computational pathology, whereas labelled data is often expensive, both in time and cost, to acquire. We investigate the performance of unsupervised and supervised deep learning methods when few labelled data are available. Three methods are compared: clustering autoencoder latent vectors (unsupervised), a single layer classifier combined with a pre-trained autoencoder (semi-supervised), and a supervised CNN. We apply these methods on hematoxylin and eosin (H&E) stained prostatectomy images to classify tumour versus non-tumour tissue. Results show that semi-/unsupervised methods have an advantage over supervised learning when few labels are available. Additionally, we show that incorporating immunohistochemistry (IHC) stained data provides an increase in performance over only using H&E. |
Tasks | |
Published | 2019-05-16 |
URL | https://arxiv.org/abs/1905.06820v1 |
https://arxiv.org/pdf/1905.06820v1.pdf | |
PWC | https://paperswithcode.com/paper/dealing-with-label-scarcity-in-computational |
Repo | |
Framework | |
Iterative Peptide Modeling With Active Learning And Meta-Learning
Title | Iterative Peptide Modeling With Active Learning And Meta-Learning |
Authors | Rainier Barrett, Andrew D. White |
Abstract | Often the development of novel materials is not amenable to high-throughput or purely computational screening methods. Instead, materials must be synthesized one at a time in a process that does not generate significant amounts of data. One way this method can be improved is by ensuring that each experiment provides the best improvement in both material properties and predictive modeling accuracy. In this work, we study the effectiveness of active learning, which optimizes the order of experiments, and meta-learning, which transfers knowledge from one context to another, to reduce the number of experiments necessary to build a predictive model. We present a novel multi-task benchmark database of peptides designed to advance active, few-shot, and meta-learning methods for experimental design. Each task is binary classification of peptides represented as a sequence string. We show results of standard active learning and meta-learning methods across these datasets to assess their ability to improve predictive models with the fewest experiments. We find the ensemble query-by-committee active learning method to be effective. The meta-learning method Reptile was found to improve accuracy. The robustness of these conclusions was tested across multiple model choices. We find that combining meta-learning with active learning methods offers inconsistent benefits. |
Tasks | Active Learning, Meta-Learning |
Published | 2019-11-20 |
URL | https://arxiv.org/abs/1911.09103v2 |
https://arxiv.org/pdf/1911.09103v2.pdf | |
PWC | https://paperswithcode.com/paper/iterative-peptide-modeling-with-active |
Repo | |
Framework | |
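The query-by-committee strategy found effective above can be sketched with a vote-entropy acquisition function (a standard formulation assumed here, not code from the paper):

```python
import numpy as np

def qbc_vote_entropy(committee_votes):
    """Query-by-committee disagreement: pick the candidate peptide the
    committee disagrees on most.  committee_votes: (n_models,
    n_candidates) array of binary predictions.  Returns the index of
    the candidate with maximal vote entropy."""
    votes = np.asarray(committee_votes)
    p = votes.mean(axis=0)                      # fraction voting class 1
    eps = 1e-12                                 # avoid log(0)
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    return int(np.argmax(entropy))
```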
A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks
Title | A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks |
Authors | Rajeev Sahay, Rehana Mahfuz, Aly El Gamal |
Abstract | The reliance on deep learning algorithms has grown significantly in recent years. Yet, these models are highly vulnerable to adversarial attacks, which introduce visually imperceptible perturbations into testing data to induce misclassifications. The literature has proposed several methods to combat such adversarial attacks, but each method either fails at high perturbation values, requires excessive computing power, or both. This letter proposes a computationally efficient method for defending against the Fast Gradient Sign (FGS) adversarial attack by simultaneously denoising and compressing data. Specifically, our proposed defense relies on training a fully connected multi-layer Denoising Autoencoder (DAE) and using its encoder as a defense against the adversarial attack. Our results show that using this dimensionality reduction scheme is not only highly effective in mitigating the effect of the FGS attack in multiple threat models, but it also provides a 2.43x speedup in comparison to defense strategies providing similar robustness against the same attack. |
Tasks | Adversarial Attack, Denoising, Dimensionality Reduction |
Published | 2019-06-13 |
URL | https://arxiv.org/abs/1906.05599v1 |
https://arxiv.org/pdf/1906.05599v1.pdf | |
PWC | https://paperswithcode.com/paper/a-computationally-efficient-method-for |
Repo | |
Framework | |
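The attack and the encoder-as-defense pipeline can be sketched as follows (illustrative; `encoder` and `classifier` stand in for the trained DAE encoder and the downstream model, which are assumptions here):

```python
import numpy as np

def fgs_attack(x, grad, eps):
    """Fast Gradient Sign perturbation: a step of size eps along the
    sign of the loss gradient, clipped back to the valid pixel range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def defended_predict(x_adv, encoder, classifier):
    """The proposed defense: pass inputs through the DAE's encoder
    (denoise + compress) before the classifier sees them."""
    return classifier(encoder(x_adv))
```

Because the encoder both removes the adversarial noise and shrinks the input dimension, the classifier downstream also runs on smaller inputs, which is where the reported speedup over comparably robust defenses comes from.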
Should Adversarial Attacks Use Pixel p-Norm?
Title | Should Adversarial Attacks Use Pixel p-Norm? |
Authors | Ayon Sen, Xiaojin Zhu, Liam Marshall, Robert Nowak |
Abstract | Adversarial attacks aim to confound machine learning systems, while remaining virtually imperceptible to humans. Attacks on image classification systems are typically gauged in terms of $p$-norm distortions in the pixel feature space. We perform a behavioral study, demonstrating that the pixel $p$-norm for any $0\le p \le \infty$, and several alternative measures including earth mover’s distance, structural similarity index, and deep net embedding, do not fit human perception. Our result has the potential to improve the understanding of adversarial attack and defense strategies. |
Tasks | Adversarial Attack, Image Classification |
Published | 2019-06-06 |
URL | https://arxiv.org/abs/1906.02439v1 |
https://arxiv.org/pdf/1906.02439v1.pdf | |
PWC | https://paperswithcode.com/paper/should-adversarial-attacks-use-pixel-p-norm |
Repo | |
Framework | |
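The pixel p-norm distortions under study, including the degenerate cases p = 0 (count of changed pixels) and p = ∞ (largest per-pixel change), can be computed as:

```python
import numpy as np

def pixel_p_norm(x, x_adv, p):
    """Distortion of an adversarial image in pixel feature space."""
    d = (np.asarray(x_adv, float) - np.asarray(x, float)).ravel()
    if p == 0:
        return float(np.count_nonzero(d))     # "norm" counting changes
    if p == np.inf:
        return float(np.abs(d).max())          # worst single pixel
    return float((np.abs(d) ** p).sum() ** (1.0 / p))
```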
Evaluating and Boosting Uncertainty Quantification in Classification
Title | Evaluating and Boosting Uncertainty Quantification in Classification |
Authors | Xiaoyang Huang, Jiancheng Yang, Linguo Li, Haoran Deng, Bingbing Ni, Yi Xu |
Abstract | The emergence of artificial intelligence techniques in biomedical applications urges researchers to pay more attention to uncertainty quantification (UQ) in machine-assisted medical decision making. For classification tasks, prior studies on UQ are difficult to compare with each other, due to the lack of a unified quantitative evaluation metric. Considering that well-performing UQ models ought to know when the classification models act incorrectly, we design a new evaluation metric, area under Confidence-Classification Characteristic curves (AUCCC), to quantitatively evaluate the performance of the UQ models. AUCCC is threshold-free, robust to perturbation, and insensitive to the classification performance. We evaluate several UQ methods (e.g., max softmax output) with AUCCC to validate its effectiveness. Furthermore, a simple scheme, named Uncertainty Distillation (UDist), is developed to boost the UQ performance, where a confidence model distills the confidence estimated by deep ensembles. The proposed method is easy to implement; it consistently outperforms strong baselines on natural and medical image datasets in our experiments. |
Tasks | Decision Making |
Published | 2019-09-13 |
URL | https://arxiv.org/abs/1909.06030v2 |
https://arxiv.org/pdf/1909.06030v2.pdf | |
PWC | https://paperswithcode.com/paper/evaluating-and-boosting-uncertainty |
Repo | |
Framework | |
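One plausible reading of a threshold-free confidence-classification statistic is a rank measure over correct versus incorrect predictions (an assumed formulation akin to AUROC; the paper's exact curve definition may differ):

```python
import numpy as np

def auccc(confidence, correct):
    """Area-under-curve style score for uncertainty quantification:
    the probability that a randomly chosen correctly classified sample
    receives higher confidence than a randomly chosen misclassified
    one (ties count 1/2).  Threshold-free by construction."""
    conf = np.asarray(confidence, float)
    mask = np.asarray(correct, bool)
    pos, neg = conf[mask], conf[~mask]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return float(wins / (len(pos) * len(neg)))
```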
Speaker Recognition with Random Digit Strings Using Uncertainty Normalized HMM-based i-vectors
Title | Speaker Recognition with Random Digit Strings Using Uncertainty Normalized HMM-based i-vectors |
Authors | Nooshin Maghsoodi, Hossein Sameti, Hossein Zeinali, Themos Stafylakis |
Abstract | In this paper, we combine Hidden Markov Models (HMMs) with i-vector extractors to address the problem of text-dependent speaker recognition with random digit strings. We employ digit-specific HMMs to segment the utterances into digits, to perform frame alignment to HMM states and to extract Baum-Welch statistics. By making use of the natural partition of input features into digits, we train digit-specific i-vector extractors on top of each HMM and we extract well-localized i-vectors, each modelling merely the phonetic content corresponding to a single digit. We then examine ways to perform channel and uncertainty compensation, and we propose a novel method for using the uncertainty in the i-vector estimates. The experiments on RSR2015 part III show that the proposed method attains 1.52% and 1.77% Equal Error Rate (EER) for male and female respectively, outperforming state-of-the-art methods such as x-vectors, trained on vast amounts of data. Furthermore, these results are attained by a single system trained entirely on RSR2015, and by a simple score-normalized cosine distance. Moreover, we show that the omission of channel compensation yields only a minor degradation in performance, meaning that the system attains state-of-the-art results even without recordings from multiple handsets per speaker for training or enrolment. Similar conclusions are drawn from our experiments on the RedDots corpus, where the same method is evaluated on phrases. Finally, we report results with bottleneck features and show that further improvement is attained when fusing them with spectral features. |
Tasks | Speaker Recognition, Speaker Verification |
Published | 2019-07-13 |
URL | https://arxiv.org/abs/1907.06111v1 |
https://arxiv.org/pdf/1907.06111v1.pdf | |
PWC | https://paperswithcode.com/paper/speaker-recognition-with-random-digit-strings |
Repo | |
Framework | |
Architecture Selection via the Trade-off Between Accuracy and Robustness
Title | Architecture Selection via the Trade-off Between Accuracy and Robustness |
Authors | Zhun Deng, Cynthia Dwork, Jialiang Wang, Yao Zhao |
Abstract | We provide a general framework for characterizing the trade-off between accuracy and robustness in supervised learning. We propose a method and define quantities to characterize the trade-off between accuracy and robustness for a given architecture, and provide theoretical insight into the trade-off. Specifically, we introduce a simple trade-off curve, and define and study an influence function that captures the sensitivity, under adversarial attack, of the optima of a given loss function. We further show how adversarial training regularizes the parameters in an over-parameterized linear model, recovering the LASSO and ridge regression as special cases, which also allows us to theoretically analyze the behavior of the trade-off curve. In experiments, we demonstrate the corresponding trade-off curves of neural networks and how they vary with respect to factors such as the number of layers, neurons, and across different network structures. Such information provides a useful guideline for architecture selection. |
Tasks | Adversarial Attack |
Published | 2019-06-04 |
URL | https://arxiv.org/abs/1906.01354v1 |
https://arxiv.org/pdf/1906.01354v1.pdf | |
PWC | https://paperswithcode.com/paper/architecture-selection-via-the-trade-off |
Repo | |
Framework | |
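The claim that adversarial training recovers LASSO and ridge can be illustrated for a linear model (a standard derivation under an assumed $\ell_\infty$-bounded adversary; the paper's exact setting may differ):

```latex
\max_{\|\delta\|_\infty \le \varepsilon}
  \bigl(y - w^\top (x+\delta)\bigr)^2
  = \bigl(\,|y - w^\top x| + \varepsilon \|w\|_1\bigr)^2 ,
```

since $w^\top\delta$ ranges over $[-\varepsilon\|w\|_1,\ \varepsilon\|w\|_1]$ and the worst case pushes the residual away from zero. The induced $\varepsilon\|w\|_1$ term is an $\ell_1$ (LASSO-type) penalty; an $\ell_2$-bounded adversary yields $\varepsilon\|w\|_2$ instead, giving a ridge-like penalty.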
Grounding Natural Language Commands to StarCraft II Game States for Narration-Guided Reinforcement Learning
Title | Grounding Natural Language Commands to StarCraft II Game States for Narration-Guided Reinforcement Learning |
Authors | Nicholas Waytowich, Sean L. Barton, Vernon Lawhern, Ethan Stump, Garrett Warnell |
Abstract | While deep reinforcement learning techniques have led to agents that are successfully able to learn to perform a number of tasks that had been previously unlearnable, these techniques are still susceptible to the longstanding problem of reward sparsity. This is especially true for tasks such as training an agent to play StarCraft II, a real-time strategy game where reward is only given at the end of a game, which is usually very long. While this problem can be addressed through reward shaping, such approaches typically require a human expert with specialized knowledge. Inspired by the vision of enabling reward shaping through the more-accessible paradigm of natural-language narration, we investigate to what extent we can contextualize these narrations by grounding them to the goal-specific states. We present a mutual-embedding model using a multi-input deep neural network that projects a sequence of natural language commands into the same high-dimensional representation space as corresponding goal states. We show that using this model we can learn an embedding space with separable and distinct clusters that accurately maps natural-language commands to corresponding game states. We also discuss how this model can allow for the use of narrations as a robust form of reward shaping to improve RL performance and efficiency. |
Tasks | Starcraft, Starcraft II |
Published | 2019-04-24 |
URL | http://arxiv.org/abs/1906.02671v1 |
http://arxiv.org/pdf/1906.02671v1.pdf | |
PWC | https://paperswithcode.com/paper/190602671 |
Repo | |
Framework | |
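The mutual-embedding objective can be sketched as a hinge-based contrastive loss over paired command and state embeddings (the loss form and margin are assumptions, not the paper's exact objective):

```python
import numpy as np

def mutual_embedding_loss(cmds, states, margin=0.2):
    """Contrastive loss over a batch: row i of cmds (command embeddings)
    should be closest in cosine similarity to row i of states (game-state
    embeddings); mismatched pairs are pushed at least `margin` below the
    matched similarity."""
    c = cmds / np.linalg.norm(cmds, axis=1, keepdims=True)
    s = states / np.linalg.norm(states, axis=1, keepdims=True)
    sim = c @ s.T                          # pairwise cosine similarities
    pos = np.diag(sim)                     # similarity of matched pairs
    loss = np.maximum(0.0, margin + sim - pos[:, None])
    np.fill_diagonal(loss, 0.0)            # no penalty on matched pairs
    return float(loss.mean())
```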
Loss aversion fosters coordination among independent reinforcement learners
Title | Loss aversion fosters coordination among independent reinforcement learners |
Authors | Marco Jerome Gasparrini, Martí Sánchez-Fibla |
Abstract | We study which factors can accelerate the emergence of collaborative behaviours among independent selfish learning agents. We depart from the “Battle of the Exes” (BoE), a spatial repeated game for which human behavioral data has been obtained (by Hawkins and Goldstone, 2016), which we find interesting because it considers two cases: a classic game theory version, called ballistic, in which agents can only make one action/decision (equivalent to the Battle of the Sexes), and a spatial version, called dynamic, in which agents can change their decision (a spatially continuous version). We model both versions of the game with independent reinforcement learning agents, and we manipulate the reward function, transforming it into a utility by introducing “loss aversion”: the reward that an agent obtains can be perceived as less valuable when compared to what the other agent got. We show experimentally that introducing loss aversion fosters cooperation by accelerating its appearance, and by making it possible in some cases, such as the dynamic condition. We suggest that this may be an important factor in explaining the rapid convergence of human behaviour towards collaboration reported in the experiment of Hawkins and Goldstone. |
Tasks | |
Published | 2019-12-29 |
URL | https://arxiv.org/abs/1912.12633v1 |
https://arxiv.org/pdf/1912.12633v1.pdf | |
PWC | https://paperswithcode.com/paper/loss-aversion-fosters-coordination-among |
Repo | |
Framework | |
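The reward-to-utility transformation can be sketched as follows (the functional form and the aversion coefficient are assumptions for illustration; the paper specifies its own comparison rule):

```python
def loss_averse_utility(own_reward, other_reward, aversion=2.0):
    """Transform a raw reward into a utility in which falling behind
    the other agent is penalised more strongly than an equal gain is
    valued: the agent's reward is discounted by a multiple of the
    shortfall relative to what the other agent obtained."""
    shortfall = max(0.0, other_reward - own_reward)
    return own_reward - aversion * shortfall
```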