Paper Group ANR 1623
Sequential Adaptive Design for Jump Regression Estimation. Private Two-Party Cluster Analysis Made Formal & Scalable. All you need is a good representation: A multi-level and classifier-centric representation for few-shot learning. Generating Geological Facies Models with Fidelity to Diversity and Statistics of Training Images using Improved Generative Adversarial Networks. …
Sequential Adaptive Design for Jump Regression Estimation
Title | Sequential Adaptive Design for Jump Regression Estimation |
Authors | Chiwoo Park, Peihua Qiu |
Abstract | Selecting input data or design points for statistical models has been of great interest in sequential design and active learning. In this paper, we present a new strategy for selecting the design points for a regression model when the underlying regression function is discontinuous. Two main motivating examples are (1) compressed material imaging with the purpose of accelerating the imaging speed and (2) design for regression analysis over a phase diagram in chemistry. In both examples, the underlying regression functions have discontinuities, so many existing design optimization approaches cannot be applied to these two examples because they mostly assume a continuous regression function. There are some studies on estimating a discontinuous regression function from its noisy observations, but in these studies all noisy observations are typically provided in advance. In this paper, we develop a strategy for selecting the design points for regression analysis with discontinuities. We first review the existing approaches relevant to design optimization and active learning for regression analysis and discuss their limitations in handling a discontinuous regression function. We then present our novel design strategy for regression analysis with discontinuities: some statistical properties with a fixed design are presented first, and these properties are then used to propose a new criterion for selecting the design points. Sequential design of experiments with the new criterion is presented with numerical examples. |
Tasks | Active Learning |
Published | 2019-04-02 |
URL | http://arxiv.org/abs/1904.01648v1 |
http://arxiv.org/pdf/1904.01648v1.pdf | |
PWC | https://paperswithcode.com/paper/sequential-adaptive-design-for-jump |
Repo | |
Framework | |
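The abstract does not spell out the paper's selection criterion, but the outer loop is a standard sequential design. Below is a minimal, hypothetical Python sketch: a 1D target with one jump, and a stand-in acquisition rule (ours, not the authors') that places the next design point where one-sided local averages disagree most, i.e., near a suspected discontinuity.

```python
# Hedged sketch of a sequential design loop for jump regression. The
# acquisition rule here is an illustrative stand-in, NOT the paper's
# criterion. Assumptions: 1D design space, piecewise-smooth target f.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * x) + (x > 0.6)        # smooth part + a jump at 0.6
noise = 0.05

def one_sided_gap(x, y, x0, h):
    """Absolute gap between mean responses in left/right windows around x0."""
    left = y[(x >= x0 - h) & (x < x0)]
    right = y[(x > x0) & (x <= x0 + h)]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    return abs(left.mean() - right.mean())      # large gap => suspected jump

# seed design: a coarse uniform grid
X = np.linspace(0, 1, 8)
Y = f(X) + noise * rng.standard_normal(X.shape)

candidates = np.linspace(0, 1, 201)
for _ in range(30):                              # sequential budget
    scores = np.array([one_sided_gap(X, Y, c, h=0.1) for c in candidates])
    x_next = candidates[np.argmax(scores + 0.01 * rng.random(len(scores)))]
    X = np.append(X, x_next)
    Y = np.append(Y, f(x_next) + noise * rng.standard_normal())

print("design points near the jump:", np.sort(X[np.abs(X - 0.6) < 0.05]))
```

Under this rule, later design points cluster around x = 0.6, which is the qualitative behavior one would want from a jump-aware criterion.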
Private Two-Party Cluster Analysis Made Formal & Scalable
Title | Private Two-Party Cluster Analysis Made Formal & Scalable |
Authors | Xianrui Meng, Dimitrios Papadopoulos, Alina Oprea, Nikos Triandopoulos |
Abstract | Machine Learning (ML) is widely used for predictive tasks in numerous important applications—most successfully, in the context of collaborative learning, where a plurality of entities contribute their own datasets to jointly deduce global ML models. Despite its efficacy, this new learning paradigm fails to encompass critical application domains, such as healthcare and security analytics, that involve learning over highly sensitive data, wherein privacy risks limit entities to individually deducing local models using solely their own datasets. In this work, we present the first comprehensive study of privacy-preserving collaborative hierarchical clustering, featuring scalable cryptographic protocols that allow two parties to safely perform cluster analysis over their combined sensitive datasets. For the problem at hand, we introduce a formal security notion that achieves the required balance between intended accuracy and privacy, and we present a class of two-party hierarchical clustering protocols that guarantee strong privacy protection, provable in our new security model. Crucially, our solution employs modular design and judicious use of cryptography to achieve high degrees of efficiency and extensibility. Specifically, we extend our core protocol to obtain two secure variants that significantly improve performance: an optimized variant for single-linkage clustering and a scalable approximate variant. Finally, we provide a prototype implementation of our approach and experimentally evaluate its feasibility and efficiency on synthetic and real datasets, obtaining encouraging results. For example, end-to-end execution of our secure approximate protocol, over 1M 10-dimensional records, completes in 35 sec, transferring only 896KB and achieving 97.09% accuracy. |
Tasks | |
Published | 2019-04-09 |
URL | https://arxiv.org/abs/1904.04475v2 |
https://arxiv.org/pdf/1904.04475v2.pdf | |
PWC | https://paperswithcode.com/paper/privacy-preserving-hierarchical-clustering |
Repo | |
Framework | |
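For orientation, here is the plaintext functionality the protocols evaluate: single-linkage hierarchical clustering over the two parties' pooled records. This sketch deliberately omits all cryptography; the paper's contribution is computing exactly this without either party revealing its dataset.

```python
# Plaintext single-linkage clustering over pooled records (a hedged sketch
# of the clear-text functionality only; no privacy machinery is shown).
import numpy as np

def single_linkage(points, n_clusters):
    clusters = [[i] for i in range(len(points))]
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    while len(clusters) > n_clusters:
        # single linkage: merge the two clusters with the closest members
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                m = min(d[i, j] for i in clusters[a] for j in clusters[b])
                if m < best:
                    best, pair = m, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)
    return clusters

party_A = np.random.default_rng(1).normal(0, 1, (20, 10))   # 10-dim records
party_B = np.random.default_rng(2).normal(3, 1, (20, 10))
print(single_linkage(np.vstack([party_A, party_B]), n_clusters=2))
```

The naive merge loop above is cubic; the paper's optimized single-linkage variant and approximate variant exist precisely because this computation must stay tractable once wrapped in cryptography.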
All you need is a good representation: A multi-level and classifier-centric representation for few-shot learning
Title | All you need is a good representation: A multi-level and classifier-centric representation for few-shot learning |
Authors | Shaoli Huang, Dacheng Tao |
Abstract | The main problems of few-shot learning are how to learn a generalized representation and how to construct discriminant classifiers from few-shot samples. We tackle both issues by learning a multi-level representation with a classifier-centric constraint. We first build the multi-level representation by combining three different levels of information: local, global, and higher-level. The resulting representation can characterize new concepts from different aspects and exhibits greater universality. To overcome the difficulty of generating classifiers from the features of only a few shots, we also propose a classifier-centric loss for learning the representation at each level, which forces samples to be centered on their respective classifier weights in the feature space. Therefore, the multi-level representation learned with the classifier-centric constraint not only enhances generalization ability but can also be used to construct a discriminant classifier from a small number of samples. Experiments show that our proposed method, without training or fine-tuning on novel examples, can outperform the current state-of-the-art methods on two low-shot learning datasets. We further show that our approach achieves a significant improvement over the baseline method in cross-task validation, and demonstrate its superiority in alleviating the domain shift problem. |
Tasks | Few-Shot Learning |
Published | 2019-11-28 |
URL | https://arxiv.org/abs/1911.12476v1 |
https://arxiv.org/pdf/1911.12476v1.pdf | |
PWC | https://paperswithcode.com/paper/all-you-need-is-a-good-representation-a-multi |
Repo | |
Framework | |
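The abstract does not give the exact form of the classifier-centric loss, but one plausible instantiation is cross-entropy plus a penalty pulling each (normalized) feature toward its class's classifier weight. A hedged PyTorch sketch, with the weighting `lam` and the cosine-distance form being our assumptions:

```python
# Hedged sketch of a "classifier-centric" constraint as the abstract
# describes it: alongside cross-entropy, pull each sample's feature toward
# the weight vector of its class. This is one plausible instantiation, not
# necessarily the paper's exact loss.
import torch
import torch.nn.functional as F

def classifier_centric_loss(features, labels, classifier_weight, lam=0.1):
    """features: (B, D); classifier_weight: (C, D), rows are class weights."""
    logits = features @ classifier_weight.t()
    ce = F.cross_entropy(logits, labels)
    # center each feature on its own classifier weight (both L2-normalized)
    w = F.normalize(classifier_weight[labels], dim=1)
    z = F.normalize(features, dim=1)
    center = (1 - (w * z).sum(dim=1)).mean()     # cosine distance to weight
    return ce + lam * center

feats = torch.randn(32, 512, requires_grad=True)
labels = torch.randint(0, 64, (32,))
W = torch.randn(64, 512, requires_grad=True)
loss = classifier_centric_loss(feats, labels, W)
loss.backward()
```

The intuition matches the abstract: if features of a class cluster around that class's weight vector, then averaging a handful of novel-class features yields a usable classifier weight without fine-tuning.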
Generating Geological Facies Models with Fidelity to Diversity and Statistics of Training Images using Improved Generative Adversarial Networks
Title | Generating Geological Facies Models with Fidelity to Diversity and Statistics of Training Images using Improved Generative Adversarial Networks |
Authors | Lingchen Zhu, Tuanfeng Zhang |
Abstract | This paper presents a methodology and workflow that overcome the limitations of conventional Generative Adversarial Networks (GANs) for geological facies modeling. It aims to improve training stability and to guarantee the diversity of the generated geology through interpretable latent vectors. The resulting samples are ensured to have equal probability (i.e., an unbiased distribution) with respect to the training dataset. This is critical when applying GANs to generate unbiased and representative geological models that can be further used to facilitate objective uncertainty evaluation and optimal decision-making in oil field exploration and development. We propose and implement a new variant of GANs called Info-WGAN for geological facies modeling that combines the Information Maximizing Generative Adversarial Network (InfoGAN) with the Wasserstein distance and Gradient Penalty (GP) for learning interpretable latent codes as well as generating a stable and unbiased distribution from the training data. Different from the original GAN design, InfoGAN can use training images with full, partial, or no labels to disentangle the complex sedimentary types exhibited in the training dataset and achieve variety and diversity in the generated samples. This is accomplished by adding categorical variables that provide disentangled semantic representations beyond the purely random latent vector used in the original GANs. A regularization term in the loss function maximizes the mutual information between these latent categorical codes and the generated geological facies. Furthermore, the resulting unbiased sampling by Info-WGAN makes data conditioning much easier than with conventional GANs in geological modeling because of the variety, diversity, and equal probability of the generator's unconditional sampling. |
Tasks | Decision Making |
Published | 2019-09-23 |
URL | https://arxiv.org/abs/1909.10652v1 |
https://arxiv.org/pdf/1909.10652v1.pdf | |
PWC | https://paperswithcode.com/paper/generating-geological-facies-models-with |
Repo | |
Framework | |
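The ingredients named in the abstract (InfoGAN's mutual-information term, a Wasserstein critic, and a gradient penalty) combine as in the sketch below. The toy `G`, `D`, and `Q` modules are placeholder stubs; the paper's actual architectures and data pipeline are not specified in the abstract.

```python
# Hedged sketch of an Info-WGAN objective: WGAN-GP critic/generator losses
# plus an InfoGAN-style mutual-information surrogate that requires a head Q
# to recover the categorical code from the generated sample. Toy stubs only.
import torch
import torch.nn as nn
import torch.nn.functional as F

Z, C, IMG = 32, 4, 64                      # noise dim, #codes, flattened image

class G(nn.Module):                        # toy generator stub
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(Z + C, IMG)
    def forward(self, z, code):
        return self.net(torch.cat([z, F.one_hot(code, C).float()], dim=1))

D = nn.Linear(IMG, 1)                      # toy Wasserstein critic stub
Q = nn.Linear(IMG, C)                      # toy code-recovery head stub

def gradient_penalty(real, fake):
    eps = torch.rand(real.size(0), 1)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(D(mix).sum(), mix, create_graph=True)
    return ((grad.norm(dim=1) - 1) ** 2).mean()

g = G()
z = torch.randn(16, Z)
code = torch.randint(0, C, (16,))
real = torch.randn(16, IMG)                # stand-in for training facies patches
fake = g(z, code)

# critic loss: Wasserstein estimate + gradient penalty (weight 10 is standard)
d_loss = D(fake.detach()).mean() - D(real).mean() \
         + 10 * gradient_penalty(real, fake.detach())
# generator loss: fool the critic + maximize MI via code recovery
mi = F.cross_entropy(Q(fake), code)        # InfoGAN mutual-information surrogate
g_loss = -D(fake).mean() + 1.0 * mi
```

In practice the categorical codes end up indexing sedimentary types, which is what makes the latent space interpretable and the sampling controllable.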
Learning Orthogonal Projections in Linear Bandits
Title | Learning Orthogonal Projections in Linear Bandits |
Authors | Qiyu Kang, Wee Peng Tay |
Abstract | In a linear stochastic bandit model, each arm is a vector in a Euclidean space, and the observed return at each time step is an unknown linear function of the chosen arm at that time step. In this paper, we investigate the problem of learning the best arm in a linear stochastic bandit model, where each arm's expected reward is an unknown linear function of the projection of the arm onto a subspace. We call this the projection reward. Unlike the classical linear bandit problem, in which the observed return corresponds to the reward, the projection reward at each time step is unobservable. Such a model is useful in recommendation applications where the observed return includes corruption by each individual's biases, which we wish to exclude from the learned model. In the case where there are finitely many arms, we develop a strategy to achieve $O(\mathbb{D}\log n)$ regret, where $n$ is the number of time steps and $\mathbb{D}$ is the number of arms. In the case where each arm is chosen from an infinite compact set, our strategy achieves $O(n^{2/3}(\log{n})^{1/2})$ regret. Experiments verify the efficiency of our strategy. |
Tasks | |
Published | 2019-06-26 |
URL | https://arxiv.org/abs/1906.10981v3 |
https://arxiv.org/pdf/1906.10981v3.pdf | |
PWC | https://paperswithcode.com/paper/orthogonal-projection-in-linear-bandits |
Repo | |
Framework | |
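The model itself is easy to state concretely: observed returns are linear in the chosen arm, while the quantity to optimize is linear in the arm's projection onto a subspace. The numpy sketch below illustrates how the two rankings can disagree; the learning strategy and regret analysis are the paper's and are not reproduced here.

```python
# Hedged sketch of the reward model in the abstract: the learner observes a
# noisy linear return theta^T x but must optimize the unobserved projection
# reward theta^T (P x). Dimensions and arm set are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
d, k, n_arms = 6, 3, 10
theta = rng.normal(size=d)                    # unknown parameter
B = np.linalg.qr(rng.normal(size=(d, k)))[0]  # orthonormal basis of the subspace
P = B @ B.T                                   # orthogonal projector onto it

arms = rng.normal(size=(n_arms, d))
observed = arms @ theta                       # mean of what the learner observes
projection_reward = (arms @ P) @ theta        # unobserved quantity to optimize
print("best by observed return:   arm", observed.argmax())
print("best by projection reward: arm", projection_reward.argmax())
```

The component of each arm in the orthogonal complement of the subspace plays the role of the "individual bias" corruption the abstract mentions: it moves the observed return without affecting the projection reward.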
A database for face presentation attack using wax figure faces
Title | A database for face presentation attack using wax figure faces |
Authors | Shan Jia, Chuanbo Hu, Guodong Guo, Zhengquan Xu |
Abstract | Compared to 2D face presentation attacks (e.g. printed photos and video replays), 3D-type attacks are more challenging to face recognition systems (FRS) because they present 3D characteristics or materials similar to real faces. Existing 3D face spoofing databases, however, are mostly based on 3D masks and are restricted to small data sizes or poor authenticity owing to production difficulty and high cost. In this work, we introduce the first wax figure face database, WFFD, as one type of super-realistic 3D presentation attack to spoof FRS. This database consists of 2200 images with both real and wax figure faces (4400 faces in total) with high diversity, collected online. Experiments on this database first investigate the vulnerability of three popular FRS to this new kind of attack. Further, we evaluate the performance of several face presentation attack detection methods to show the attack ability of this super-realistic face spoofing database. |
Tasks | Face Presentation Attack Detection, Face Recognition |
Published | 2019-06-06 |
URL | https://arxiv.org/abs/1906.11900v1 |
https://arxiv.org/pdf/1906.11900v1.pdf | |
PWC | https://paperswithcode.com/paper/a-database-for-face-presentation-attack-using |
Repo | |
Framework | |
End to end collision avoidance based on optical flow and neural networks
Title | End to end collision avoidance based on optical flow and neural networks |
Authors | Jan Blumenkamp |
Abstract | Optical flow is believed to play an important role in the agile flight of birds and insects. Even though it is a very simple concept, it is rarely used in computer vision for collision avoidance. This work implements neural-network-based collision avoidance, deployed and evaluated on a car refitted solely for this purpose. |
Tasks | Optical Flow Estimation |
Published | 2019-11-06 |
URL | https://arxiv.org/abs/1911.08582v1 |
https://arxiv.org/pdf/1911.08582v1.pdf | |
PWC | https://paperswithcode.com/paper/end-to-end-collision-avoidance-based-on |
Repo | |
Framework | |
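The abstract gives no network details, but the classic optical-flow cue for collision is looming: an approaching obstacle makes flow vectors expand radially from the focus of expansion. A hedged OpenCV sketch of such a front end follows; the scoring rule is our illustration, not the paper's method.

```python
# Hedged sketch of an optical-flow looming cue (illustrative front end only;
# the paper's network and training setup are not specified in the abstract).
import cv2
import numpy as np

def looming_score(prev_gray, next_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # radial component of flow w.r.t. the image center: positive => expansion
    rx, ry = xs - w / 2, ys - h / 2
    r = np.hypot(rx, ry) + 1e-6
    radial = (flow[..., 0] * rx + flow[..., 1] * ry) / r
    return radial.mean()                      # large => obstacle approaching

a = np.zeros((120, 160), np.uint8); cv2.circle(a, (80, 60), 20, 255, -1)
b = np.zeros((120, 160), np.uint8); cv2.circle(b, (80, 60), 26, 255, -1)
print("looming score:", looming_score(a, b))  # expanding disc => positive
```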
High Resolution Millimeter Wave Imaging For Self-Driving Cars
Title | High Resolution Millimeter Wave Imaging For Self-Driving Cars |
Authors | Junfeng Guan, Sohrab Madani, Suraj Jog, Haitham Hassanieh |
Abstract | Recent years have witnessed much interest in expanding the use of networking signals beyond communication to sensing, localization, robotics, and autonomous systems. This paper explores how we can leverage recent advances in 5G millimeter wave (mmWave) technology for imaging in self-driving cars. Specifically, the use of mmWave in 5G has led to the creation of compact phased arrays with hundreds of antenna elements that can be electronically steered. Such phased arrays can expand the use of mmWave beyond vehicular communications and simple ranging sensors to a full-fledged imaging system that enables self-driving cars to see through fog, smog, snow, etc. Unfortunately, using mmWave signals for imaging in self-driving cars is challenging due to the very low resolution, the presence of fake artifacts resulting from multipath reflections, and the absence of portions of the car due to specularity. This paper presents HawkEye, a system that enables high-resolution mmWave imaging in self-driving cars. HawkEye addresses the above challenges by leveraging recent advances in deep learning known as Generative Adversarial Networks (GANs). HawkEye introduces a GAN architecture that is customized to mmWave imaging and builds a system that can significantly enhance the quality of mmWave images for self-driving cars. |
Tasks | Self-Driving Cars |
Published | 2019-12-19 |
URL | https://arxiv.org/abs/1912.09579v2 |
https://arxiv.org/pdf/1912.09579v2.pdf | |
PWC | https://paperswithcode.com/paper/high-resolution-millimeter-wave-imaging-for |
Repo | |
Framework | |
Deep Flow Collaborative Network for Online Visual Tracking
Title | Deep Flow Collaborative Network for Online Visual Tracking |
Authors | Peidong Liu, Xiyu Yan, Yong Jiang, Shu-Tao Xia |
Abstract | Deep learning-based visual tracking algorithms such as MDNet achieve high performance by leveraging the feature extraction ability of a deep neural network. However, the tracking efficiency of these trackers is not very high due to slow feature extraction for each frame in a video. In this paper, we propose an effective tracking algorithm to alleviate this efficiency problem. Specifically, we design a deep flow collaborative network, which executes the expensive feature network only on sparse keyframes and transfers the feature maps to other frames via optical flow. Moreover, we introduce an adaptive keyframe scheduling mechanism to select the most appropriate keyframe. We evaluate the proposed approach on large-scale datasets: OTB2013 and OTB2015. The experimental results show that our algorithm achieves a considerable speedup while maintaining high precision. |
Tasks | Optical Flow Estimation, Visual Tracking |
Published | 2019-11-05 |
URL | https://arxiv.org/abs/1911.01786v1 |
https://arxiv.org/pdf/1911.01786v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-flow-collaborative-network-for-online |
Repo | |
Framework | |
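The key mechanism, warping keyframe features to other frames via optical flow, can be sketched with bilinear sampling. The constant flow below is a placeholder; the paper's adaptive keyframe scheduler and flow network are not reproduced.

```python
# Hedged sketch of flow-based feature propagation: run the heavy feature
# network only on keyframes and warp its feature map to non-key frames via
# optical flow (bilinear warping with grid_sample).
import torch
import torch.nn.functional as F

def warp_features(feat, flow):
    """feat: (1, C, H, W) keyframe features; flow: (1, 2, H, W) in pixels."""
    _, _, H, W = feat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid_x = (xs + flow[:, 0]) / (W - 1) * 2 - 1   # normalize to [-1, 1]
    grid_y = (ys + flow[:, 1]) / (H - 1) * 2 - 1
    grid = torch.stack([grid_x, grid_y], dim=-1)   # (1, H, W, 2), x-then-y
    return F.grid_sample(feat, grid, align_corners=True)

feat = torch.randn(1, 64, 32, 32)                  # expensive net, keyframe only
flow = torch.zeros(1, 2, 32, 32)
flow[:, 0] = 2.0                                   # e.g. 2-px horizontal motion
propagated = warp_features(feat, flow)             # cheap, for non-key frames
```

The speedup comes from replacing a full backbone pass on every frame with one flow estimate plus this cheap warp.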
Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian
Title | Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian |
Authors | Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi |
Abstract | Modern neural network architectures often generalize well despite containing many more parameters than the size of the training dataset. This paper explores the generalization capabilities of neural networks trained via gradient descent. We develop a data-dependent optimization and generalization theory which leverages the low-rank structure of the Jacobian matrix associated with the network. Our results help demystify why training and generalization are easier on clean and structured datasets and harder on noisy and unstructured datasets, as well as how the network size affects the evolution of the train and test errors during training. Specifically, we use a control knob to split the Jacobian spectrum into “information” and “nuisance” spaces associated with the large and small singular values. We show that over the information space learning is fast and one can quickly train a model with zero training loss that can also generalize well. Over the nuisance space training is slower and early stopping can help with generalization at the expense of some bias. We also show that the overall generalization capability of the network is controlled by how well the label vector is aligned with the information space. A key feature of our results is that even constant-width neural nets can provably generalize for sufficiently nice datasets. We conduct various numerical experiments on deep networks that corroborate our theoretical findings and demonstrate that: (i) the Jacobian of typical neural networks exhibits low-rank structure with a few large singular values and many small ones, leading to a low-dimensional information space, (ii) over the information space learning is fast and most of the label vector falls on this space, and (iii) label noise falls on the nuisance space and impedes optimization/generalization. |
Tasks | |
Published | 2019-06-12 |
URL | https://arxiv.org/abs/1906.05392v2 |
https://arxiv.org/pdf/1906.05392v2.pdf | |
PWC | https://paperswithcode.com/paper/generalization-guarantees-for-neural-networks |
Repo | |
Framework | |
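The diagnostic behind the theory is checkable on a toy model: stack one Jacobian row (the gradient of the output with respect to all parameters) per sample and inspect the singular values. The 10% threshold below is an arbitrary illustration of the information/nuisance split, not the paper's control knob.

```python
# Hedged sketch: compute the output-vs-parameters Jacobian of a small net
# and look at its spectrum. A few large singular values span the
# "information" space; the tail of small ones spans the "nuisance" space.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(10, 50), torch.nn.ReLU(),
                          torch.nn.Linear(50, 1))
X = torch.randn(30, 10)

rows = []
for x in X:                                   # one Jacobian row per sample
    net.zero_grad()
    net(x).backward()
    rows.append(torch.cat([p.grad.flatten() for p in net.parameters()]))
J = torch.stack(rows)                         # (n_samples, n_params)

s = torch.linalg.svdvals(J)
print("top-5 singular values:", s[:5])
print("large (illustrative s > 0.1 * s.max()):",
      (s > 0.1 * s.max()).sum().item(), "of", len(s))
```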
Mining Hidden Populations through Attributed Search
Title | Mining Hidden Populations through Attributed Search |
Authors | Suhansanu Kumar, Heting Gao, Changyu Wang, Hari Sundaram, Kevin Chen-Chuan Chang |
Abstract | Researchers often query online social platforms through their application programming interfaces (API) to find target populations such as people with mental illness (De Choudhury, 2017) and jazz musicians (Heckathorn, 2001). Entities of such a target population satisfy a property that is typically identified using an oracle (a human or a pre-trained classifier). When the property of the target entities is not directly queryable via the API, we refer to the property as “hidden” and the population as a hidden population. Finding individuals who belong to these populations on social networks is hard because they are non-queryable, and the sampler has to explore a combinatorial query space within a finite budget. By exploiting the correlation between queryable attributes and the population of interest and by hierarchically ordering the query space, we propose a Decision tree-based Thompson sampler (DT-TMP) that efficiently discovers the right combination of attributes to query. Our proposed sampler outperforms the state-of-the-art samplers in online experiments, for example by 54% on Twitter. When the number of matching entities to a query is known in offline experiments, DT-TMP performs exceedingly well, by a factor of 0.9-1.5× over the baseline samplers. In the future, we wish to explore the option of finding hidden populations by formulating more complex queries. |
Tasks | |
Published | 2019-05-11 |
URL | https://arxiv.org/abs/1905.04505v1 |
https://arxiv.org/pdf/1905.04505v1.pdf | |
PWC | https://paperswithcode.com/paper/mining-hidden-populations-through-attributed |
Repo | |
Framework | |
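A heavily simplified sketch of the Thompson-sampling core: each attribute combination is an arm with a Beta posterior over its hit rate, and the sampler queries the arm with the best posterior draw. The decision-tree ordering of the query space, the part that makes DT-TMP scale, is omitted here, and the attribute values and hit rates are invented for illustration.

```python
# Hedged sketch of the Thompson-sampling core that DT-TMP builds on (flat
# arms only; the paper's hierarchical query ordering is not reproduced).
import numpy as np

rng = np.random.default_rng(0)
hit_rate = {("jazz", "NY"): 0.30, ("jazz", "LA"): 0.10,
            ("rock", "NY"): 0.05, ("rock", "LA"): 0.02}   # unknown to sampler
alpha = {q: 1.0 for q in hit_rate}                        # Beta(1, 1) priors
beta = {q: 1.0 for q in hit_rate}

budget, found = 200, 0
for _ in range(budget):
    # Thompson step: sample each arm's posterior, issue the best-draw query
    q = max(hit_rate, key=lambda a: rng.beta(alpha[a], beta[a]))
    hit = rng.random() < hit_rate[q]          # oracle labels a returned entity
    alpha[q] += hit
    beta[q] += 1 - hit
    found += hit
print("entities found:", found)
```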
Representing Closed Transformation Paths in Encoded Network Latent Space
Title | Representing Closed Transformation Paths in Encoded Network Latent Space |
Authors | Marissa Connor, Christopher Rozell |
Abstract | Deep generative networks have been widely used for learning mappings from a low-dimensional latent space to a high-dimensional data space. In many cases, data transformations are defined by linear paths in this latent space. However, the Euclidean structure of the latent space may be a poor match for the underlying latent structure in the data. In this work, we incorporate a generative manifold model into the latent space of an autoencoder in order to learn the low-dimensional manifold structure from the data and adapt the latent space to accommodate this structure. In particular, we focus on applications in which the data has closed transformation paths which extend from a starting point and return to nearly the same point. Through experiments on data with natural closed transformation paths, we show that this model introduces the ability to learn the latent dynamics of complex systems, generate transformation paths, and classify samples that belong on the same transformation path. |
Tasks | |
Published | 2019-12-05 |
URL | https://arxiv.org/abs/1912.02644v1 |
https://arxiv.org/pdf/1912.02644v1.pdf | |
PWC | https://paperswithcode.com/paper/representing-closed-transformation-paths-in |
Repo | |
Framework | |
Inverse Halftoning Through Structure-Aware Deep Convolutional Neural Networks
Title | Inverse Halftoning Through Structure-Aware Deep Convolutional Neural Networks |
Authors | Chang-Hwan Son |
Abstract | The primary issue in inverse halftoning is removing noisy dots on flat areas and restoring image structures (e.g., lines, patterns) on textured areas. Hence, a new structure-aware deep convolutional neural network that incorporates two subnetworks is proposed in this paper. One subnetwork is for image structure prediction while the other is for continuous-tone image reconstruction. First, to predict image structures, patch pairs comprising continuous-tone patches and the corresponding halftoned patches generated through digital halftoning are prepared for training. Subsequently, gradient patches are generated by convolving gradient filters with the continuous-tone patches. The subnetwork for image structure prediction is trained using the mini-batch gradient descent algorithm, with the halftoned patches and gradient patches fed into its input and loss layers, respectively. Next, the predicted map including the image structures is stacked on top of the input halftoned image through a fusion layer and fed into the image reconstruction subnetwork, such that the entire network is trained adaptively to the image structures. The experimental results confirm that the proposed structure-aware network can remove noisy dot patterns well on flat areas and restore details clearly on textured areas. Furthermore, it is demonstrated that the proposed method surpasses conventional state-of-the-art methods based on deep convolutional neural networks and locally learned dictionaries. |
Tasks | Image Reconstruction |
Published | 2019-05-02 |
URL | https://arxiv.org/abs/1905.00637v2 |
https://arxiv.org/pdf/1905.00637v2.pdf | |
PWC | https://paperswithcode.com/paper/inverse-halftoning-through-structure-aware |
Repo | |
Framework | |
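The two-subnetwork layout described in the abstract, a structure-prediction branch whose gradient maps are stacked onto the input halftone before reconstruction, can be sketched as below. Layer widths and depths are invented; the paper's exact architecture is not given in the abstract.

```python
# Hedged sketch of the structure-aware two-branch layout: one branch
# predicts gradient (structure) maps from the halftone; the fusion layer
# stacks them onto the input; a second branch reconstructs continuous tone.
import torch
import torch.nn as nn

class StructureAwareNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.structure = nn.Sequential(            # halftone -> gradient maps
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1))        # d/dx and d/dy channels
        self.reconstruct = nn.Sequential(          # fused stack -> continuous tone
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, halftone):
        grad = self.structure(halftone)
        fused = torch.cat([halftone, grad], dim=1) # "fusion layer": stacking
        return self.reconstruct(fused)

net = StructureAwareNet()
print(net(torch.rand(1, 1, 64, 64)).shape)        # -> torch.Size([1, 1, 64, 64])
```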
Differentiable Disentanglement Filter: an Application Agnostic Core Concept Discovery Probe
Title | Differentiable Disentanglement Filter: an Application Agnostic Core Concept Discovery Probe |
Authors | Guntis Barzdins, Eduards Sidorovics |
Abstract | It has long been speculated that deep neural networks function by discovering a hierarchical set of domain-specific core concepts or patterns, which are further combined to recognize even more elaborate concepts for classification and other machine learning tasks. Meanwhile, disentangling the actual core concepts ingrained in word embeddings (like word2vec or BERT) or deep convolutional image recognition networks (like PG-GAN) is difficult, and some success there has been achieved only recently. In this paper we propose a novel neural network nonlinearity named the Differentiable Disentanglement Filter (DDF), which can be transparently inserted into any existing neural network layer to automatically disentangle the core concepts used by that layer. The DDF probe is inspired by the lesser-known properties of hyper-dimensional computing theory. The DDF proof-of-concept implementation is shown to disentangle concepts within a neural 3D scene representation, a task vital for visual grounding of natural language narratives. |
Tasks | Word Embeddings |
Published | 2019-07-17 |
URL | https://arxiv.org/abs/1907.07507v2 |
https://arxiv.org/pdf/1907.07507v2.pdf | |
PWC | https://paperswithcode.com/paper/differentiable-disentanglement-filter-an |
Repo | |
Framework | |
Language Models with Transformers
Title | Language Models with Transformers |
Authors | Chenguang Wang, Mu Li, Alexander J. Smola |
Abstract | The Transformer architecture is superior to RNN-based models in computational efficiency. Recently, GPT and BERT have demonstrated the efficacy of Transformer models on various NLP tasks using pre-trained language models on large-scale corpora. Surprisingly, these Transformer architectures are suboptimal for language modeling itself: neither self-attention nor the positional encoding in the Transformer is able to efficiently incorporate the word-level sequential context crucial to language modeling. In this paper, we explore effective Transformer architectures for language modeling, including adding LSTM layers to better capture sequential context while keeping computation efficient. We propose Coordinate Architecture Search (CAS) to find an effective architecture through iterative refinement of the model. Experimental results on PTB, WikiText-2, and WikiText-103 show that CAS achieves perplexities between 20.42 and 34.11 on all three datasets, an average improvement of 12.0 perplexity units over state-of-the-art LSTMs. The source code is publicly available. |
Tasks | Language Modelling, Neural Architecture Search |
Published | 2019-04-20 |
URL | https://arxiv.org/abs/1904.09408v2 |
https://arxiv.org/pdf/1904.09408v2.pdf | |
PWC | https://paperswithcode.com/paper/190409408 |
Repo | |
Framework | |
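The architectural idea is concrete even without the search: insert an LSTM after Transformer blocks to reinject word-level sequential context. The sketch below fixes one placement by hand (and omits the causal mask for brevity); CAS's contribution is searching over such placements and over which pre-trained weights to keep.

```python
# Hedged sketch of the Transformer-plus-LSTM idea from the abstract, with an
# arbitrary fixed placement (not the architecture CAS would find). Note: a
# real language model would also apply a causal attention mask, omitted here.
import torch
import torch.nn as nn

class TransformerPlusLSTM(nn.Module):
    def __init__(self, vocab=1000, d=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.block = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.lstm = nn.LSTM(d, d, batch_first=True)   # sequential context
        self.head = nn.Linear(d, vocab)

    def forward(self, tokens):
        h = self.block(self.emb(tokens))
        h, _ = self.lstm(h)                           # re-impose order info
        return self.head(h)                           # next-token logits

model = TransformerPlusLSTM()
print(model(torch.randint(0, 1000, (2, 16))).shape)  # -> (2, 16, 1000)
```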