Paper Group ANR 1436
Simple Question Answering with Subgraph Ranking and Joint-Scoring
Title | Simple Question Answering with Subgraph Ranking and Joint-Scoring |
Authors | Wenbo Zhao, Tagyoung Chung, Anuj Goyal, Angeliki Metallinou |
Abstract | Knowledge graph based simple question answering (KBSQA) is a major area of research within question answering. Although only dealing with simple questions, i.e., questions that can be answered through a single knowledge base (KB) fact, this task is neither simple nor close to being solved. Targeting the two main steps, subgraph selection and fact selection, the research community has developed sophisticated approaches. However, the importance of subgraph ranking and of leveraging the subject–relation dependency of a KB fact has not been sufficiently explored. Motivated by this, we present a unified framework to describe and analyze existing approaches. Using this framework as a starting point, we focus on two aspects: improving subgraph selection through a novel ranking method and leveraging the subject–relation dependency by proposing a joint scoring CNN model with a novel loss function that enforces the well-order of scores. Our methods achieve a new state of the art (85.44% in accuracy) on the SimpleQuestions dataset. |
Tasks | Question Answering |
Published | 2019-04-04 |
URL | http://arxiv.org/abs/1904.04049v1 |
http://arxiv.org/pdf/1904.04049v1.pdf | |
PWC | https://paperswithcode.com/paper/simple-question-answering-with-subgraph |
Repo | |
Framework | |
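The abstract above describes a loss that "enforces the well-order of scores", i.e., the correct fact should score above all negatives. A minimal sketch of one way to encode that constraint as a hinge-style margin loss is below; the function name and margin value are illustrative, not the paper's actual formulation.

```python
import numpy as np

def well_order_loss(pos_score, neg_scores, margin=0.5):
    """Hinge-style loss: the positive fact's score should exceed every
    negative score by at least `margin` (a simplified stand-in for the
    paper's well-order constraint). Zero loss iff the ordering holds
    with the required margin."""
    neg_scores = np.asarray(neg_scores, dtype=float)
    return float(np.maximum(0.0, margin - (pos_score - neg_scores)).sum())
```

For example, `well_order_loss(1.0, [0.2, 0.6])` penalizes only the second negative, whose gap of 0.4 falls short of the 0.5 margin.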
ColorFool: Semantic Adversarial Colorization
Title | ColorFool: Semantic Adversarial Colorization |
Authors | Ali Shahin Shamsabadi, Ricardo Sanchez-Matilla, Andrea Cavallaro |
Abstract | Adversarial attacks that generate small L_p-norm perturbations to mislead classifiers have limited success in black-box settings and with unseen classifiers. These attacks are also fragile against defenses that use denoising filters and against adversarial training procedures. Instead, adversarial attacks that generate unrestricted perturbations are more robust to defenses, are generally more successful in black-box settings and are more transferable to unseen classifiers. However, unrestricted perturbations may be noticeable to humans. In this paper, we propose a content-based black-box adversarial attack that generates unrestricted perturbations by exploiting image semantics to selectively modify colors within chosen ranges that are perceived as natural by humans. We show that the proposed approach, ColorFool, outperforms five state-of-the-art adversarial attacks in terms of success rate, robustness to defense frameworks and transferability, on two different tasks, scene and object classification, when attacking three state-of-the-art deep neural networks using three standard datasets. We will make the code of the proposed approach and the whole evaluation framework publicly available. |
Tasks | Adversarial Attack, Colorization, Denoising, Object Classification |
Published | 2019-11-25 |
URL | https://arxiv.org/abs/1911.10891v1 |
https://arxiv.org/pdf/1911.10891v1.pdf | |
PWC | https://paperswithcode.com/paper/colorfool-semantic-adversarial-colorization |
Repo | |
Framework | |
Statistical bounds for entropic optimal transport: sample complexity and the central limit theorem
Title | Statistical bounds for entropic optimal transport: sample complexity and the central limit theorem |
Authors | Gonzalo Mena, Jonathan Weed |
Abstract | We prove several fundamental statistical bounds for entropic optimal transport (OT) with the squared Euclidean cost between subgaussian probability measures in arbitrary dimension. First, through a new sample complexity result we establish the rate of convergence of entropic OT for empirical measures. Our analysis improves exponentially on the bound of Genevay et al. (2019) and extends their work to unbounded measures. Second, we establish a central limit theorem for entropic OT, based on techniques developed by Del Barrio and Loubes (2019). Previously, such a result was only known for finite metric spaces. As an application of our results, we develop and analyze a new technique for estimating the entropy of a random variable corrupted by Gaussian noise. |
Tasks | |
Published | 2019-05-28 |
URL | https://arxiv.org/abs/1905.11882v2 |
https://arxiv.org/pdf/1905.11882v2.pdf | |
PWC | https://paperswithcode.com/paper/statistical-bounds-for-entropic-optimal |
Repo | |
Framework | |
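Entropic OT between empirical measures, as studied in the paper above, is typically computed with Sinkhorn iterations. A minimal sketch (uniform weights, squared Euclidean cost; the regularization strength and iteration count are illustrative):

```python
import numpy as np

def entropic_ot(x, y, eps=0.5, n_iter=500):
    """Sinkhorn iterations for entropic OT between two empirical measures
    (rows of x and y) with squared Euclidean cost and uniform weights.
    Returns the transport plan P and the transport cost <P, C>."""
    n, m = len(x), len(y)
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=2)  # cost matrix
    K = np.exp(-C / eps)                                      # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):  # alternating marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]
    return P, float(np.sum(P * C))
```

After convergence the plan's marginals match the uniform weights, which is the constraint the paper's sample complexity results are stated over.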
GBCNs: Genetic Binary Convolutional Networks for Enhancing the Performance of 1-bit DCNNs
Title | GBCNs: Genetic Binary Convolutional Networks for Enhancing the Performance of 1-bit DCNNs |
Authors | Chunlei Liu, Wenrui Ding, Yuan Hu, Baochang Zhang, Jianzhuang Liu, Guodong Guo |
Abstract | Training 1-bit deep convolutional neural networks (DCNNs) is one of the most challenging problems in computer vision, because it is much easier to get trapped into local minima than conventional DCNNs. The reason lies in that the binarized kernels and activations of 1-bit DCNNs cause a significant accuracy loss and training inefficiency. To address this problem, we propose Genetic Binary Convolutional Networks (GBCNs) to optimize 1-bit DCNNs, by introducing a new balanced Genetic Algorithm (BGA) to improve the representational ability in an end-to-end framework. The BGA method is proposed to modify the binary process of GBCNs to alleviate the local minima problem, which can significantly improve the performance of 1-bit DCNNs. We develop a new BGA module that is generic and flexible, and can be easily incorporated into existing DCNNs, such as WideResNets and ResNets. Extensive experiments on the object classification tasks (CIFAR, ImageNet) validate the effectiveness of the proposed method. Notably, our method shows strong generalization on object recognition tasks, i.e., face recognition and person re-identification. |
Tasks | Face Recognition, Object Classification, Object Recognition, Person Re-Identification |
Published | 2019-11-25 |
URL | https://arxiv.org/abs/1911.11634v2 |
https://arxiv.org/pdf/1911.11634v2.pdf | |
PWC | https://paperswithcode.com/paper/gbcns-genetic-binary-convolutional-networks |
Repo | |
Framework | |
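The BGA specifics are not detailed in the abstract above, but the basic 1-bit kernel binarization that GBCNs build on and refine can be sketched (this is the standard XNOR-Net-style scaled-sign approximation, not the paper's method itself):

```python
import numpy as np

def binarize_kernel(w):
    """XNOR-Net-style kernel binarization: approximate a real-valued
    kernel w by alpha * sign(w), where the scalar alpha is the mean
    absolute value of w. This is the generic 1-bit step whose accuracy
    loss methods like GBCNs aim to recover."""
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w)
```

For example, the kernel `[1.0, -2.0, 3.0]` has `alpha = 2.0` and binarizes to `[2.0, -2.0, 2.0]`: only the sign pattern and one scalar per kernel survive quantization.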
Identifying Model Weakness with Adversarial Examiner
Title | Identifying Model Weakness with Adversarial Examiner |
Authors | Michelle Shu, Chenxi Liu, Weichao Qiu, Alan Yuille |
Abstract | Machine learning models are usually evaluated according to the average case performance on the test set. However, this is not always ideal, because in some sensitive domains (e.g. autonomous driving), it is the worst case performance that matters more. In this paper, we are interested in systematic exploration of the input data space to identify the weakness of the model to be evaluated. We propose to use an adversarial examiner in the testing stage. Unlike the existing strategy of always giving the same (distribution of) test data, the adversarial examiner will dynamically select the next test data to hand out based on the testing history so far, with the goal being to undermine the model’s performance. This sequence of test data not only helps us understand the current model, but also serves as constructive feedback to help improve the model in the next iteration. We conduct experiments on ShapeNet object classification. We show that our adversarial examiner can successfully put more emphasis on the weakness of the model, preventing performance estimates from being overly optimistic. |
Tasks | Autonomous Driving, Object Classification |
Published | 2019-11-25 |
URL | https://arxiv.org/abs/1911.11230v1 |
https://arxiv.org/pdf/1911.11230v1.pdf | |
PWC | https://paperswithcode.com/paper/identifying-model-weakness-with-adversarial |
Repo | |
Framework | |
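The examiner described above selects each next test based on the testing history so far. A toy sketch of that loop, assuming the test pool is pre-grouped by condition and using the simplest possible policy (always probe the condition with the worst observed accuracy); the grouping, default score, and tie-breaking are all illustrative simplifications of the paper's approach:

```python
import random

def adversarial_examine(model, pool, n_tests, seed=0):
    """Toy adversarial examiner: `pool` maps a condition name to a list of
    (x, label) test cases. At each step, hand out a test from the condition
    with the lowest observed accuracy so far (ties broken at random),
    steering evaluation toward the model's weaknesses."""
    rng = random.Random(seed)
    stats = {c: [0, 0] for c in pool}  # condition -> [n_correct, n_tested]
    history = []
    for _ in range(n_tests):
        # accuracy estimate per condition; untested conditions default to 0.5
        acc = {c: (s[0] / s[1] if s[1] else 0.5) for c, s in stats.items()}
        worst = min(acc, key=lambda c: (acc[c], rng.random()))
        x, label = rng.choice(pool[worst])
        correct = model(x) == label
        stats[worst][0] += correct
        stats[worst][1] += 1
        history.append((worst, correct))
    return history, stats
```

With a model that fails on one condition, the examiner quickly concentrates nearly all of its test budget there, so the resulting accuracy estimate reflects the worst case rather than the average case.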
DeepSat V2: Feature Augmented Convolutional Neural Nets for Satellite Image Classification
Title | DeepSat V2: Feature Augmented Convolutional Neural Nets for Satellite Image Classification |
Authors | Qun Liu, Saikat Basu, Sangram Ganguly, Supratik Mukhopadhyay, Robert DiBiano, Manohar Karki, Ramakrishna Nemani |
Abstract | Satellite image classification is a challenging problem that lies at the crossroads of remote sensing, computer vision, and machine learning. Due to the high variability inherent in satellite data, most of the current object classification approaches are not suitable for handling satellite datasets. The progress of satellite image analytics has also been inhibited by the lack of a single labeled high-resolution dataset with multiple class labels. In a preliminary version of this work, we introduced two new high-resolution satellite imagery datasets (SAT-4 and SAT-6) and proposed the DeepSat framework for classification based on “handcrafted” features and a deep belief network (DBN). The present paper is an extended version in which we present an end-to-end framework leveraging an improved architecture that augments a convolutional neural network (CNN) with handcrafted features (instead of using a DBN-based architecture) for classification. Our framework, having access to fused spatial information obtained from handcrafted features as well as CNN feature maps, achieves accuracies of 99.90% and 99.84% on SAT-4 and SAT-6, respectively, surpassing all other state-of-the-art results. A statistical analysis based on Distribution Separability Criterion substantiates the robustness of our approach in learning better representations for satellite imagery. |
Tasks | Image Classification, Object Classification |
Published | 2019-11-15 |
URL | https://arxiv.org/abs/1911.07747v1 |
https://arxiv.org/pdf/1911.07747v1.pdf | |
PWC | https://paperswithcode.com/paper/deepsat-v2-feature-augmented-convolutional |
Repo | |
Framework | |
Unsupervised particle sorting for high-resolution single-particle cryo-EM
Title | Unsupervised particle sorting for high-resolution single-particle cryo-EM |
Authors | Ye Zhou, Amit Moscovich, Tamir Bendory, Alberto Bartesaghi |
Abstract | Single-particle cryo-Electron Microscopy (EM) has become a popular technique for determining the structure of challenging biomolecules that are inaccessible to other technologies. Recent advances in automation, both in data collection and data processing, have significantly lowered the barrier for non-expert users to successfully execute the structure determination workflow. Many critical data processing steps, however, still require expert user intervention in order to converge to the correct high-resolution structure. In particular, strategies to identify homogeneous populations of particles rely heavily on subjective criteria that are not always consistent or reproducible among different users. Here, we explore the use of unsupervised strategies for particle sorting that are compatible with the autonomous operation of the image processing pipeline. More specifically, we show that particles can be successfully sorted based on a simple statistical model for the distribution of scores assigned during refinement. This represents an important step towards the development of automated workflows for protein structure determination using single-particle cryo-EM. |
Tasks | |
Published | 2019-10-22 |
URL | https://arxiv.org/abs/1910.10051v1 |
https://arxiv.org/pdf/1910.10051v1.pdf | |
PWC | https://paperswithcode.com/paper/unsupervised-particle-sorting-for-high |
Repo | |
Framework | |
An Information Theory Approach on Deciding Spectroscopic Follow Ups
Title | An Information Theory Approach on Deciding Spectroscopic Follow Ups |
Authors | Javiera Astudillo, Pavlos Protopapas, Karim Pichara, Pablo Huijse |
Abstract | Classification and characterization of variable and transient phenomena are critical for astrophysics and cosmology. These objects are commonly studied using photometric time series or spectroscopic data. Given that many ongoing and future surveys operate in the time domain, and that adding spectra provides further insight but requires more observational resources, it would be valuable to know which objects we should prioritize for spectroscopy in addition to time series. We propose a methodology in a probabilistic setting that determines a priori which objects are worth observing spectroscopically to obtain better insight, where we define ‘insight’ as the type of the object (classification). Objects whose spectra we query are reclassified using their full spectral information. We first train two classifiers, one that uses photometric data and another that uses photometric and spectroscopic data together. Then, for each photometric object, we estimate the probability of each possible spectrum outcome. We combine these models in various probabilistic frameworks (strategies) which are used to guide the selection of follow-up observations. The best strategy depends on the intended use, whether it is getting more confidence or accuracy. For a given number of candidate objects (127, equal to 5% of the dataset) for taking spectra, we improve class prediction accuracy by 37%, compared to 20% for the best non-naive (non-random) baseline strategy. Our approach provides a general framework for follow-up strategies and can be extended beyond classification to include other forms of follow-up beyond spectroscopy. |
Tasks | Object Classification, Time Series |
Published | 2019-11-06 |
URL | https://arxiv.org/abs/1911.02444v1 |
https://arxiv.org/pdf/1911.02444v1.pdf | |
PWC | https://paperswithcode.com/paper/an-information-theory-approach-on-deciding |
Repo | |
Framework | |
Deep Learning for 2D and 3D Rotatable Data: An Overview of Methods
Title | Deep Learning for 2D and 3D Rotatable Data: An Overview of Methods |
Authors | Luca Della Libera, Vladimir Golkov, Yue Zhu, Arman Mielke, Daniel Cremers |
Abstract | One of the reasons for the success of convolutional networks is their equivariance/invariance under translations. However, rotatable data such as molecules, living cells, everyday objects, or galaxies require processing with equivariance/invariance under rotations in cases where the rotation of the coordinate system does not affect the meaning of the data (e.g. object classification). On the other hand, estimation/processing of rotations is necessary in cases where rotations are important (e.g. motion estimation). There has been recent progress in methods and theory in all these regards. Here we provide an overview of existing methods, both for 2D and 3D rotations (and translations), and identify commonalities and links between them, in the hope that our insights will be useful for choosing and perfecting the methods. |
Tasks | Motion Estimation, Object Classification |
Published | 2019-10-31 |
URL | https://arxiv.org/abs/1910.14594v1 |
https://arxiv.org/pdf/1910.14594v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-learning-for-2d-and-3d-rotatable-data-an |
Repo | |
Framework | |
Energy Storage Management via Deep Q-Networks
Title | Energy Storage Management via Deep Q-Networks |
Authors | Ahmed S. Zamzam, Bo Yang, Nicholas D. Sidiropoulos |
Abstract | Energy storage devices represent environmentally friendly candidates to cope with volatile renewable energy generation. Motivated by the increase in privately owned storage systems, this paper studies the problem of real-time control of a storage unit co-located with a renewable energy generator and an inelastic load. Unlike many approaches in the literature, no distributional assumptions are made on the renewable energy generation or the real-time prices. Building on the deep Q-networks algorithm, a reinforcement learning approach utilizing a neural network is devised where the storage unit operational constraints are respected. The neural network approximates the action-value function which dictates what action (charging, discharging, etc.) to take. Simulations indicate that near-optimal performance can be attained with the proposed learning-based control policy for the storage units. |
Tasks | |
Published | 2019-03-26 |
URL | http://arxiv.org/abs/1903.11107v1 |
http://arxiv.org/pdf/1903.11107v1.pdf | |
PWC | https://paperswithcode.com/paper/energy-storage-management-via-deep-q-networks |
Repo | |
Framework | |
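The paper above learns an action-value function for charge/discharge decisions with a deep Q-network. As a hedged stand-in that runs without a deep-learning framework, the sketch below uses tabular Q-learning on a toy storage problem with a cyclic price signal; the state encoding, reward, and hyperparameters are all illustrative, not the paper's setup:

```python
import random

def train_storage_policy(prices, capacity=2, episodes=2000, alpha=0.1,
                         gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning stand-in for the paper's deep Q-network: learn
    when to charge (+1), idle (0), or discharge (-1) a small storage unit
    under a cyclic price signal, respecting the capacity constraint.
    State = (time step, state of charge); reward = trading revenue."""
    rng = random.Random(seed)
    T = len(prices)
    Q = {}
    q = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(episodes):
        soc = 0
        for t in range(T):
            s = (t, soc)
            valid = [a for a in (-1, 0, 1) if 0 <= soc + a <= capacity]
            # epsilon-greedy action selection
            a = rng.choice(valid) if rng.random() < eps else max(valid, key=lambda x: q(s, x))
            reward = -a * prices[t]  # pay when charging, earn when discharging
            soc2 = soc + a
            s2 = ((t + 1) % T, soc2)
            best_next = max(q(s2, a2) for a2 in (-1, 0, 1) if 0 <= soc2 + a2 <= capacity)
            Q[(s, a)] = q(s, a) + alpha * (reward + gamma * best_next - q(s, a))
            soc = soc2
    return Q
```

On a price cycle that is cheap early and expensive late, the greedy policy learned from `Q` charges at the low prices and discharges at the high ones, yielding positive revenue per cycle; the paper replaces the table with a neural network so the same idea scales to continuous states.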
V-NAS: Neural Architecture Search for Volumetric Medical Image Segmentation
Title | V-NAS: Neural Architecture Search for Volumetric Medical Image Segmentation |
Authors | Zhuotun Zhu, Chenxi Liu, Dong Yang, Alan Yuille, Daguang Xu |
Abstract | Deep learning algorithms, in particular 2D and 3D fully convolutional neural networks (FCNs), have rapidly become the mainstream methodology for volumetric medical image segmentation. However, 2D convolutions cannot fully leverage the rich spatial information along the third axis, while 3D convolutions suffer from demanding computation and high GPU memory consumption. In this paper, we propose to automatically search for a network architecture tailored to the volumetric medical image segmentation problem. Concretely, we formulate the structure learning as differentiable neural architecture search, and let the network itself choose between 2D, 3D or Pseudo-3D (P3D) convolutions at each layer. We evaluate our method on three public datasets: the NIH Pancreas dataset and the Lung and Pancreas datasets from the Medical Segmentation Decathlon (MSD) Challenge. Our method, named V-NAS, consistently outperforms other state-of-the-art methods on the segmentation of both a normal organ (NIH Pancreas) and abnormal organs (MSD Lung tumors and MSD Pancreas tumors), which shows the power of the chosen architecture. Moreover, the architecture searched on one dataset generalizes well to other datasets, which demonstrates the robustness and practical use of our proposed method. |
Tasks | Medical Image Segmentation, Neural Architecture Search, Semantic Segmentation, Volumetric Medical Image Segmentation |
Published | 2019-06-06 |
URL | https://arxiv.org/abs/1906.02817v2 |
https://arxiv.org/pdf/1906.02817v2.pdf | |
PWC | https://paperswithcode.com/paper/v-nas-neural-architecture-search-for |
Repo | |
Framework | |
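The differentiable search described above relaxes the discrete choice between 2D, 3D, and P3D convolutions into a softmax-weighted mixture, in the style of DARTS. A minimal sketch of that relaxation (the candidate ops here are placeholder functions, not actual convolutions):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def mixed_op(x, alphas, ops):
    """DARTS-style relaxation: the layer output is a softmax-weighted sum
    of candidate operations (standing in for 2D/3D/P3D convolutions).
    The architecture weights `alphas` are learned jointly with the network;
    the op with the largest weight is kept in the final architecture."""
    w = softmax(np.asarray(alphas, dtype=float))
    return sum(wi * op(x) for wi, op in zip(w, ops))
```

With equal `alphas` every candidate contributes equally; as training pushes one weight up, the mixture converges toward a single op, which is the one selected for the discretized architecture.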
On Linear Convergence of Weighted Kernel Herding
Title | On Linear Convergence of Weighted Kernel Herding |
Authors | Rajiv Khanna, Michael W. Mahoney |
Abstract | We provide a novel convergence analysis of two popular sampling algorithms, Weighted Kernel Herding and Sequential Bayesian Quadrature, that are used to approximate the expectation of a function under a distribution. Existing theoretical analysis was insufficient to explain the empirical successes of these algorithms. We improve upon existing convergence rates to show that, under mild assumptions, these algorithms converge linearly. To this end, we also suggest a simplifying assumption that is true for most cases in finite dimensions, and that acts as a sufficient condition for linear convergence to hold in the much harder case of infinite dimensions. When this condition is not satisfied, we provide a weaker convergence guarantee. Our analysis also yields a new distributed algorithm for large-scale computation that we prove converges linearly under the same assumptions. Finally, we provide an empirical evaluation to test the proposed algorithm for a real world application. |
Tasks | |
Published | 2019-07-19 |
URL | https://arxiv.org/abs/1907.08410v2 |
https://arxiv.org/pdf/1907.08410v2.pdf | |
PWC | https://paperswithcode.com/paper/on-linear-convergence-of-weighted-kernel |
Repo | |
Framework | |
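The herding algorithms analyzed above greedily select points to approximate the expectation of a function under a distribution. A sketch of the plain (unweighted) variant over a finite candidate pool with an RBF kernel; the weighted variant and Sequential Bayesian Quadrature additionally re-solve for optimal weights after each selection, which this sketch omits:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of a and b."""
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * d)

def kernel_herding(X, n_select, gamma=1.0):
    """Greedy kernel herding over candidate points X: at each step pick
    the point that scores high against the empirical kernel mean embedding
    but low against the points already chosen, so the selected set spreads
    out to match the distribution."""
    K = rbf(X, X, gamma)
    mu = K.mean(axis=1)  # empirical kernel mean embedding evaluated at X
    chosen = []
    for t in range(n_select):
        score = mu.copy() if not chosen else mu - K[:, chosen].sum(axis=1) / (t + 1)
        score[chosen] = -np.inf  # do not re-select in this simple sketch
        chosen.append(int(np.argmax(score)))
    return chosen
```

On data with two well-separated clusters, the first two selections land in different clusters, illustrating how herding covers the distribution rather than oversampling one mode.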
A Model-driven and Data-driven Fusion Framework for Accurate Air Quality Prediction
Title | A Model-driven and Data-driven Fusion Framework for Accurate Air Quality Prediction |
Authors | Haolin Fei, Xiaofeng Wu, Chunbo Luo |
Abstract | Air quality is closely related to public health. Health issues such as cardiovascular and respiratory diseases may be connected with long exposure to highly polluted environments. Therefore, accurate air quality forecasts are extremely important to those who are vulnerable. To estimate the variation of several air pollutant concentrations, previous researchers used various approaches, such as the Community Multiscale Air Quality model (CMAQ) or neural networks. Although the CMAQ model covers historic air pollution data and meteorological variables, extra bias is introduced due to additional adjustment. In this paper, a combination of a model-based strategy and a data-driven method, namely the physical-temporal collection (PTC) model, is proposed, aiming to fix the systematic error that traditional models deliver. In the data-driven part, the first components are the temporal pattern and the weather pattern, which measure important features that contribute to the prediction performance. The less relevant input variables are removed to eliminate negative weights in network training. Then, we deploy a long short-term memory (LSTM) network to fetch the preliminary results, which are further corrected by a neural network (NN) involving the meteorological index as well as other pollutant concentrations. The dataset used for forecasting spans January 1st, 2016 to December 31st, 2016. According to the results, our PTC achieves an excellent performance compared with the baseline models (CMAQ prediction, GRU, DNN, etc.). This joint model-based, data-driven method for air quality prediction can be easily deployed on stations without extra adjustment, providing high-time-resolution results that help vulnerable people avoid heavy air pollution ahead of time. |
Tasks | |
Published | 2019-12-06 |
URL | https://arxiv.org/abs/1912.07367v1 |
https://arxiv.org/pdf/1912.07367v1.pdf | |
PWC | https://paperswithcode.com/paper/a-model-driven-and-data-driven-fusion |
Repo | |
Framework | |
Regularized Weighted Chebyshev Approximations for Support Estimation
Title | Regularized Weighted Chebyshev Approximations for Support Estimation |
Authors | I. Chien, Olgica Milenkovic |
Abstract | We introduce a new method for estimating the support size of an unknown distribution which provably matches the performance bounds of the state-of-the-art techniques in the area and outperforms them in practice. In particular, we present both theoretical and computer simulation results that illustrate the utility and performance improvements of our method. The theoretical analysis relies on introducing a new weighted Chebyshev polynomial approximation method, jointly optimizing the bias and variance components of the risk, and combining the weighted minmax polynomial approximation method with discretized semi-infinite programming solvers. Such a setting allows for casting the estimation problem as a linear program (LP) with a small number of variables and constraints that may be solved as efficiently as the original Chebyshev approximation problem. Our technique is tested on synthetic data and used to address an important problem in computational biology: estimating the number of bacterial genera in the human gut. On synthetic datasets, for practically relevant sample sizes, we observe significant improvements in the value of the worst-case risk compared to existing methods. For the bioinformatics application, using metagenomic data from the NIH Human Gut and the American Gut Microbiome Projects, we generate a list of frequencies of bacterial taxa that allows us to estimate the number of bacterial genera at approximately 2300. |
Tasks | |
Published | 2019-01-22 |
URL | https://arxiv.org/abs/1901.07506v5 |
https://arxiv.org/pdf/1901.07506v5.pdf | |
PWC | https://paperswithcode.com/paper/support-estimation-via-regularized-and |
Repo | |
Framework | |
Generalization of Dempster-Shafer theory: A complex belief function
Title | Generalization of Dempster-Shafer theory: A complex belief function |
Authors | Fuyuan Xiao |
Abstract | Dempster-Shafer evidence theory has been widely used in various fields of application because of its flexibility and effectiveness in modeling uncertainty without prior information. However, the existing evidence theory cannot express the fluctuations of data at a given phase of time during their execution, nor the uncertainty and imprecision that inevitably occur concurrently with changes to the phase or periodicity of the data. In this paper, therefore, a generalized Dempster-Shafer evidence theory is proposed. To be specific, a mass function in the generalized theory is modeled by a complex number, called a complex basic belief assignment, which has a more powerful ability to express uncertain information. Based on that, a generalized Dempster’s combination rule is developed. In contrast to the classical Dempster’s combination rule, the condition that the conflict coefficient between the pieces of evidence satisfy K<1 is relaxed in the generalized rule, making it more general and applicable than the classical rule. When the complex mass function degenerates from complex numbers to real numbers, the generalized combination rule degenerates to the classical evidence theory under the condition that the conflict coefficient K is less than 1. In a word, this generalized Dempster-Shafer evidence theory provides a promising way to model and handle more uncertain information. |
Tasks | |
Published | 2019-06-27 |
URL | https://arxiv.org/abs/1906.11409v1 |
https://arxiv.org/pdf/1906.11409v1.pdf | |
PWC | https://paperswithcode.com/paper/generalization-of-dempster-shafer-theory-a |
Repo | |
Framework | |
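The classical Dempster's combination rule that the paper above generalizes is easy to state concretely: multiply masses over pairs of focal sets, accumulate mass on the intersections, and renormalize by 1 - K, where K is the conflict (mass on empty intersections). A sketch over real-valued masses (the paper's complex-valued variant relaxes the K < 1 requirement):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster's combination rule. m1, m2 map focal sets
    (frozensets) to masses. Returns the combined mass function and the
    conflict coefficient K; the classical rule is undefined when K >= 1."""
    joint = {}
    conflict = 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            joint[inter] = joint.get(inter, 0.0) + a * b
        else:
            conflict += a * b  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: K >= 1, classical rule undefined")
    # renormalize by 1 - K so the combined masses sum to 1
    return {A: v / (1.0 - conflict) for A, v in joint.items()}, conflict
```

For instance, combining m1 = {A: 0.6, {A,B}: 0.4} with m2 = {A: 0.5, B: 0.3, {A,B}: 0.2} gives K = 0.18 and a combined mass on A of 0.62/0.82.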