January 29, 2020

3126 words 15 mins read

Paper Group ANR 700

Diversity Regularized Adversarial Learning. Analysis of overfitting in the regularized Cox model. Deep Learning for Power System Security Assessment. A security steganography scheme based on hdr image. PixelSteganalysis: Destroying Hidden Information with a Low Degree of Visual Degradation. A Probabilistic Representation of Deep Learning. Factored …

Diversity Regularized Adversarial Learning

Title Diversity Regularized Adversarial Learning
Authors Babajide O. Ayinde, Keishin Nishihama, Jacek M. Zurada
Abstract The two key players in Generative Adversarial Networks (GANs), the discriminator and generator, are usually parameterized as deep neural networks (DNNs). On many generative tasks, GANs achieve state-of-the-art performance but are often unstable to train and sometimes miss modes. A typical failure mode is the collapse of the generator to a single parameter configuration where its outputs are identical. When this collapse occurs, the gradient of the discriminator may point in similar directions for many similar points. We hypothesize that some of these shortcomings are due in part to primitive and redundant features extracted by the discriminator, which can easily cause training to become stuck. We present a novel approach for regularizing adversarial models by enforcing diverse feature learning. To this end, both the generator and the discriminator are regularized by penalizing negatively and positively correlated features according to their relative cosine distances. In addition to the gradient information from the adversarial loss made available by the discriminator, diversity regularization also ensures that a more stable gradient is provided to update both the generator and the discriminator. Results indicate that our regularizer enforces diverse features, stabilizes training, and improves image synthesis.
Tasks Image Generation
Published 2019-01-30
URL http://arxiv.org/abs/1901.10824v1
PDF http://arxiv.org/pdf/1901.10824v1.pdf
PWC https://paperswithcode.com/paper/diversity-regularized-adversarial-learning
Repo
Framework
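The cosine-distance idea in the abstract can be sketched in a few lines: normalize each extracted feature vector, compute all pairwise cosine similarities, and penalize pairs whose similarity magnitude exceeds a threshold. This is a minimal illustration, not the authors' exact formulation; the function name and the 0.5 threshold are assumptions.

```python
import numpy as np

def diversity_penalty(features, threshold=0.5):
    # Normalize each feature vector to unit length.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T            # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)     # ignore each feature's self-similarity
    excess = np.maximum(np.abs(sim) - threshold, 0.0)
    return float(np.sum(excess ** 2) / 2.0)  # halve: each pair appears twice
```

Orthogonal (fully diverse) features incur zero penalty, while duplicated features are penalized, which is the behavior a diversity regularizer needs.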

Analysis of overfitting in the regularized Cox model

Title Analysis of overfitting in the regularized Cox model
Authors M Sheikh, A. C. C. Coolen
Abstract The Cox proportional hazards model is ubiquitous in the analysis of time-to-event data. However, when the data dimension p is comparable to the sample size N, maximum likelihood estimates for its regression parameters are known to be biased or break down entirely due to overfitting. This prompted the introduction of the so-called regularized Cox model. In this paper we use the replica method from statistical physics to investigate the relationship between the true and inferred regression parameters in regularized multivariate Cox regression with L2 regularization, in the regime where both p and N are large but with p/N ~ O(1). We thereby generalize a recent study from maximum likelihood to maximum a posteriori inference. We also establish a relationship between the optimal regularization parameter and p/N, allowing for straightforward overfitting corrections in time-to-event analysis.
Tasks L2 Regularization
Published 2019-04-14
URL https://arxiv.org/abs/1904.06632v2
PDF https://arxiv.org/pdf/1904.06632v2.pdf
PWC https://paperswithcode.com/paper/analysis-of-overfitting-in-the-regularized
Repo
Framework
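The objective being regularized is the Cox negative log partial likelihood plus a ridge term. A minimal sketch (the function name and signature are illustrative; the paper's contribution is the replica analysis relating the optimal penalty strength to p/N, not this routine):

```python
import numpy as np

def cox_ridge_loss(beta, X, times, events, eta):
    # Negative log partial likelihood: each observed event contributes its
    # linear predictor minus the log of the summed hazards over the risk set
    # (subjects still under observation at that event time).
    nll = 0.0
    for i in range(len(times)):
        if events[i]:
            at_risk = times >= times[i]
            nll -= X[i] @ beta - np.log(np.sum(np.exp(X[at_risk] @ beta)))
    # Ridge (L2) penalty; the paper links the optimal eta to the ratio p/N.
    return nll + 0.5 * eta * np.dot(beta, beta)
```

With all covariates zero, each of the three events contributes the log of its risk-set size (3, 2, 1), which gives log 6 and makes the function easy to sanity-check.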

Deep Learning for Power System Security Assessment

Title Deep Learning for Power System Security Assessment
Authors José-María Hidalgo-Arteaga, Fiodar Hancharou, Florian Thams, Spyros Chatzivasileiadis
Abstract Security assessment is among the most fundamental functions of a power system operator. The sheer complexity of power systems exceeding a few buses, however, makes it an extremely computationally demanding task. The emergence of deep learning methods able to handle immense amounts of data and infer valuable information offers a promising alternative. This paper has two main contributions. First, inspired by the remarkable performance of convolutional neural networks for image processing, we represent for the first time power system snapshots as 2-dimensional images, thus taking advantage of the wide range of deep learning methods available for image processing. Second, we train deep neural networks on a large database for the NESTA 162-bus system to assess both N-1 security and small-signal stability. We find that our approach is over 255 times faster than a standard small-signal stability assessment, and it can correctly determine unsafe points with over 99% accuracy.
Tasks
Published 2019-03-31
URL http://arxiv.org/abs/1904.09029v1
PDF http://arxiv.org/pdf/1904.09029v1.pdf
PWC https://paperswithcode.com/paper/190409029
Repo
Framework
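Representing a snapshot as an image can be as simple as reshaping per-bus quantities onto a square grid, one channel per quantity, so a CNN can consume it. The paper does not specify this exact encoding; the channel choice (voltage, angle, injection) and padding scheme below are assumptions for illustration.

```python
import numpy as np

def snapshot_to_image(voltage, angle, injection):
    # Pad each per-bus quantity to the next square size and reshape it into
    # one channel of a 2-D "image" of the power system snapshot.
    n = len(voltage)
    side = int(np.ceil(np.sqrt(n)))
    img = np.zeros((side, side, 3))
    for c, q in enumerate((voltage, angle, injection)):
        flat = np.zeros(side * side)
        flat[:n] = q
        img[:, :, c] = flat.reshape(side, side)
    return img
```

A 5-bus snapshot becomes a 3x3x3 array, with unused grid cells zero-padded.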

A security steganography scheme based on hdr image

Title A security steganography scheme based on hdr image
Authors Wei Gao, Yongqing Huo, Yan Qiao
Abstract It is widely recognized that the image format is crucial to steganography, because each format has its own unique properties. Nowadays, the most popular approach to digital image steganography is to combine a well-defined distortion function with efficient practical codes such as STC. Numerous studies concentrate on the spatial domain and the JPEG domain. However, in both domains, high payloads (e.g., 0.5 bit per pixel) are not secure enough. In this paper, we propose a novel adaptive steganography scheme based on the 32-bit HDR (high dynamic range) format and the IEEE 754 standard. Experiments show that the steganographic method achieves satisfactory security at payloads from 0.3 bpp to 0.5 bpp.
Tasks Image Steganography
Published 2019-02-28
URL http://arxiv.org/abs/1902.10943v1
PDF http://arxiv.org/pdf/1902.10943v1.pdf
PWC https://paperswithcode.com/paper/a-security-steganography-scheme-based-on-hdr
Repo
Framework
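The hook of the HDR format is that pixels are IEEE 754 32-bit floats, so payload bits can ride in the mantissa with negligible visual change. A toy illustration of that mechanism (the paper's actual scheme is adaptive and uses a distortion function with STC coding; the function names here are made up):

```python
import numpy as np

def embed_bits(pixels, bits):
    # Reinterpret the 32-bit floats as uint32 words and overwrite the least
    # significant mantissa bit of each pixel with one payload bit.
    words = pixels.astype(np.float32).copy().view(np.uint32).ravel()
    for k, b in enumerate(bits):
        words[k] = (words[k] & np.uint32(0xFFFFFFFE)) | np.uint32(b)
    return words.view(np.float32).reshape(pixels.shape)

def extract_bits(stego, n):
    # Read back the mantissa LSBs from the stego image.
    words = stego.astype(np.float32).view(np.uint32).ravel()
    return [int(w & 1) for w in words[:n]]
```

Flipping the mantissa LSB changes a pixel by at most one unit in the last place, far below any visible difference.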

PixelSteganalysis: Destroying Hidden Information with a Low Degree of Visual Degradation

Title PixelSteganalysis: Destroying Hidden Information with a Low Degree of Visual Degradation
Authors Dahuin Jung, Ho Bae, Hyun-Soo Choi, Sungroh Yoon
Abstract Steganography is the science of unnoticeably concealing a secret message within a certain image, called a cover image. The cover image with the secret message is called a stego image. Steganography is commonly used for illegal purposes such as terrorist activities and pornography. To thwart covert communications and transactions, attacking algorithms against steganography, called steganalysis, exist. Currently, there are many studies applying deep learning to steganography algorithms. However, conventional steganalysis is no longer effective against deep learning-based steganography algorithms. Our framework is the first to disturb covert communications and transactions carried out via recent deep learning-based steganography algorithms. We first extract a sophisticated pixel distribution of the potential stego image from an auto-regressive model induced by deep learning. Using the extracted pixel distributions, we detect, at the pixel level, whether an image is a stego image. Each pixel value is adjusted as required, and the adjustment induces an effective removal of the secret image. Because the decoding method of deep learning-based steganography algorithms is approximate (lossy), unlike conventional steganography, we propose a new quantitative metric that is more suitable for measuring this effect accurately. We evaluate our method on three public benchmarks in comparison with a conventional steganalysis method and show up to a 20% improvement in decoding rate.
Tasks Image Steganography
Published 2019-01-30
URL http://arxiv.org/abs/1902.11113v2
PDF http://arxiv.org/pdf/1902.11113v2.pdf
PWC https://paperswithcode.com/paper/pixelsteganalysis-destroying-hidden
Repo
Framework

A Probabilistic Representation of Deep Learning

Title A Probabilistic Representation of Deep Learning
Authors Xinjie Lan, Kenneth E. Barner
Abstract In this work, we introduce a novel probabilistic representation of deep learning, which provides an explicit explanation for the Deep Neural Networks (DNNs) in three aspects: (i) neurons define the energy of a Gibbs distribution; (ii) the hidden layers of DNNs formulate Gibbs distributions; and (iii) the whole architecture of DNNs can be interpreted as a Bayesian neural network. Based on the proposed probabilistic representation, we investigate two fundamental properties of deep learning: hierarchy and generalization. First, we explicitly formulate the hierarchy property from the Bayesian perspective, namely that some hidden layers formulate a prior distribution and the remaining layers formulate a likelihood distribution. Second, we demonstrate that DNNs have an explicit regularization by learning a prior distribution and the learning algorithm is one reason for decreasing the generalization ability of DNNs. Moreover, we clarify two empirical phenomena of DNNs that cannot be explained by traditional theories of generalization. Simulation results validate the proposed probabilistic representation and the insights into these properties of deep learning based on a synthetic dataset.
Tasks
Published 2019-08-26
URL https://arxiv.org/abs/1908.09772v1
PDF https://arxiv.org/pdf/1908.09772v1.pdf
PWC https://paperswithcode.com/paper/a-probabilistic-representation-of-deep
Repo
Framework
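The core reading, neurons define the energy of a Gibbs distribution, is easy to make concrete: a layer's activations, treated as energies E_i, induce p(i) proportional to exp(-E_i / T). This is only a sketch of the abstract's claim (i), not the authors' full construction; the temperature parameter is an illustrative addition.

```python
import numpy as np

def gibbs_distribution(energies, temperature=1.0):
    # p(i) ∝ exp(-E_i / T): lower-energy states are more probable.
    z = np.exp(-np.asarray(energies, dtype=float) / temperature)
    return z / z.sum()
```

Note this is exactly a softmax over negated energies, which is why a softmax output layer fits the Gibbs interpretation directly.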

Factored Probabilistic Belief Tracking

Title Factored Probabilistic Belief Tracking
Authors Blai Bonet, Hector Geffner
Abstract The problem of belief tracking in the presence of stochastic actions and observations is pervasive and yet computationally intractable. In this work we show however that probabilistic beliefs can be maintained in factored form exactly and efficiently across a number of causally closed beams, when the state variables that appear in more than one beam obey a form of backward determinism. Since computing marginals from the factors is still computationally intractable in general, and variables appearing in several beams are not always backward-deterministic, the basic formulation is extended with two approximations: forms of belief propagation for computing marginals from factors, and sampling of non-backward-deterministic variables for making such variables backward-deterministic given their sampled history. Unlike Rao-Blackwellized particle filtering, the sampling is not used for making inference tractable but for making the factorization sound. The resulting algorithm involves sampling and belief propagation or just one of them as determined by the structure of the model.
Tasks
Published 2019-09-26
URL https://arxiv.org/abs/1909.13779v1
PDF https://arxiv.org/pdf/1909.13779v1.pdf
PWC https://paperswithcode.com/paper/factored-probabilistic-belief-tracking
Repo
Framework

Robot Affect: the Amygdala as Bloch Sphere

Title Robot Affect: the Amygdala as Bloch Sphere
Authors Johan F. Hoorn, Johnny K. W. Ho
Abstract In the design of artificially sentient robots, an obstacle always has been that conventional computers cannot really process information in parallel, whereas the human affective system is capable of producing experiences of emotional concurrency (e.g., happy and sad). Another schism that has been in the way is the persistent Cartesian divide between cognition and affect, whereas people easily can reflect on their emotions or have feelings about a thought. As an essentially theoretical exercise, we posit that quantum physics at the basis of neurology explains observations in cognitive emotion psychology from the belief that the construct of reality is partially imagined (Im) in the complex coordinate space C^3. We propose a quantum computational account of mixed states of reflection and affect, while transforming known psychological dimensions into the actual quantum dynamics of electromotive forces. As a precursor to actual simulations, we show examples of possible robot behaviors, using Einstein-Podolsky-Rosen circuits. Keywords: emotion, reflection, modelling, quantum computing
Tasks
Published 2019-11-22
URL https://arxiv.org/abs/1911.12128v2
PDF https://arxiv.org/pdf/1911.12128v2.pdf
PWC https://paperswithcode.com/paper/robot-affect-the-amygdala-as-bloch-sphere
Repo
Framework

Exponential Convergence Rates of Classification Errors on Learning with SGD and Random Features

Title Exponential Convergence Rates of Classification Errors on Learning with SGD and Random Features
Authors Shingo Yashima, Atsushi Nitanda, Taiji Suzuki
Abstract Although kernel methods are widely used in many learning problems, they have poor scalability to large datasets. To address this problem, sketching and stochastic gradient methods are the most commonly used techniques to derive efficient large-scale learning algorithms. In this study, we consider solving a binary classification problem using random features and stochastic gradient descent. In recent research, an exponential convergence rate of the expected classification error under the strong low-noise condition has been shown. We extend these analyses to a random features setting, analyzing the error induced by the random features approximation in terms of the distance between the generated hypothesis and the population and empirical risk minimizers under general Lipschitz loss functions. We show that an exponential convergence rate of the expected classification error is achieved even when the random features approximation is applied. Additionally, we demonstrate that the convergence rate does not depend on the number of features, and that there is a significant computational benefit to using random features in classification problems under the strong low-noise condition.
Tasks
Published 2019-11-13
URL https://arxiv.org/abs/1911.05350v1
PDF https://arxiv.org/pdf/1911.05350v1.pdf
PWC https://paperswithcode.com/paper/exponential-convergence-rates-of
Repo
Framework
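The setting the paper analyzes, random features plus SGD on a binary classification problem, can be sketched end to end on toy data. The Rahimi-Recht-style cosine features, the feature count, the step size, and the synthetic blobs below are all illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

# Random Fourier features approximating an RBF kernel (Rahimi & Recht).
D = 64
W = rng.normal(scale=1.0, size=(2, D))
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Plain SGD on the logistic loss over the random-feature map.
w = np.zeros(D)
for epoch in range(20):
    for i in rng.permutation(len(y)):
        margin = y[i] * (Z[i] @ w)
        grad = -y[i] * Z[i] / (1.0 + np.exp(margin))
        w -= 0.5 * grad

accuracy = np.mean(np.sign(Z @ w) == y)
```

Because the blobs are well separated (a strong low-noise situation), the linear model over random features fits the data easily, which is the regime where the paper's exponential rate applies.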

Texture Selection for Automatic Music Genre Classification

Title Texture Selection for Automatic Music Genre Classification
Authors Juliano H. Foleiss, Tiago F. Tavares
Abstract Music Genre Classification is the problem of associating genre-related labels to digitized music tracks. It has applications in the organization of commercial and personal music collections. Often, music tracks are described as a set of timbre-inspired sound textures. In shallow-learning systems, the total number of sound textures per track is usually too high, and texture downsampling is necessary to make training tractable. Although previous work has solved this by linear downsampling, no extensive work has been done to evaluate how texture selection benefits genre classification in the context of the bag of frames track descriptions. In this paper, we evaluate the impact of frame selection on automatic music genre classification in a bag of frames scenario. We also present a novel texture selector based on K-Means aimed to identify diverse sound textures within each track. We evaluated texture selection in diverse datasets, four different feature sets, as well as its relationship to a univariate feature selection strategy. The results show that frame selection leads to significant improvement over the single vector baseline on datasets consisting of full-length tracks, regardless of the feature set. Results also indicate that the K-Means texture selector achieves significant improvements over the baseline, using fewer textures per track than the commonly used linear downsampling. The results also suggest that texture selection is complementary to the feature selection strategy evaluated. Our qualitative analysis indicates that texture variety within classes benefits model generalization. Our analysis shows that selecting specific audio excerpts can improve classification performance, and it can be done automatically.
Tasks Feature Selection
Published 2019-05-28
URL https://arxiv.org/abs/1905.11959v1
PDF https://arxiv.org/pdf/1905.11959v1.pdf
PWC https://paperswithcode.com/paper/texture-selection-for-automatic-music-genre
Repo
Framework
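The K-Means texture selector can be sketched as: cluster the track's frame features, then keep the frame closest to each centroid as that track's diverse texture set. The deterministic initialization and iteration count below are simplifications for illustration (k-means++ would be a stronger choice).

```python
import numpy as np

def kmeans_texture_select(frames, k, iters=20):
    # Deterministic init on the first k frames, then standard Lloyd steps.
    frames = np.asarray(frames, dtype=float)
    centroids = frames[:k].copy()
    for _ in range(iters):
        dists = np.linalg.norm(frames[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = frames[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    # Keep the frame closest to each centroid as the track's texture set.
    return sorted({int(np.argmin(np.linalg.norm(frames - c, axis=1)))
                   for c in centroids})
```

On frames drawn from three distinct sound-texture clusters, the selector returns one representative per cluster, unlike linear downsampling, which can pick several near-duplicate frames.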

Rolling-Shutter Modelling for Direct Visual-Inertial Odometry

Title Rolling-Shutter Modelling for Direct Visual-Inertial Odometry
Authors David Schubert, Nikolaus Demmel, Lukas von Stumberg, Vladyslav Usenko, Daniel Cremers
Abstract We present a direct visual-inertial odometry (VIO) method which estimates the motion of the sensor setup and sparse 3D geometry of the environment based on measurements from a rolling-shutter camera and an inertial measurement unit (IMU). The visual part of the system performs a photometric bundle adjustment on a sparse set of points. This direct approach does not extract feature points and is able to track not only corners, but any pixels with sufficient gradient magnitude. Neglecting rolling-shutter effects in the visual part severely degrades accuracy and robustness of the system. In this paper, we incorporate a rolling-shutter model into the photometric bundle adjustment that estimates a set of recent keyframe poses and the inverse depth of a sparse set of points. IMU information is accumulated between several frames using measurement preintegration, and is inserted into the optimization as an additional constraint between selected keyframes. For every keyframe we estimate not only the pose but also velocity and biases to correct the IMU measurements. Unlike systems with global-shutter cameras, we use both IMU measurements and rolling-shutter effects of the camera to estimate velocity and biases for every state. Last, we evaluate our system on a novel dataset that contains global-shutter and rolling-shutter images, IMU data and ground-truth poses for ten different sequences, which we make publicly available. Evaluation shows that the proposed method outperforms a system where rolling shutter is not modelled and achieves similar accuracy to the global-shutter method on global-shutter data.
Tasks
Published 2019-11-04
URL https://arxiv.org/abs/1911.01015v1
PDF https://arxiv.org/pdf/1911.01015v1.pdf
PWC https://paperswithcode.com/paper/rolling-shutter-modelling-for-direct-visual
Repo
Framework

New methods for SVM feature selection

Title New methods for SVM feature selection
Authors Tangui Aladjidi, François Pasqualini
Abstract Support Vector Machines have been a popular topic for quite some time now, and as they develop, a need for new methods of feature selection arises. This work presents various approaches to SVM feature selection developed using new tools such as entropy measurement and K-medoid clustering. The work focuses on the use of one-class SVMs for wafer testing, with a numerical implementation in R.
Tasks Feature Selection
Published 2019-05-23
URL https://arxiv.org/abs/1905.09653v2
PDF https://arxiv.org/pdf/1905.09653v2.pdf
PWC https://paperswithcode.com/paper/new-methods-for-svm-feature-selection
Repo
Framework

Naive Feature Selection: Sparsity in Naive Bayes

Title Naive Feature Selection: Sparsity in Naive Bayes
Authors Armin Askari, Alexandre d’Aspremont, Laurent El Ghaoui
Abstract Due to its linear complexity, naive Bayes classification remains an attractive supervised learning method, especially in very large-scale settings. We propose a sparse version of naive Bayes, which can be used for feature selection. This leads to a combinatorial maximum-likelihood problem, for which we provide an exact solution in the case of binary data, or a bound in the multinomial case. We prove that our bound becomes tight as the marginal contribution of additional features decreases. Both binary and multinomial sparse models are solvable in time almost linear in problem size, representing a very small extra relative cost compared to the classical naive Bayes. Numerical experiments on text data show that the naive Bayes feature selection method is as statistically effective as state-of-the-art feature selection methods such as recursive feature elimination, l1-penalized logistic regression and LASSO, while being orders of magnitude faster. For a large data set with more than 1.6 million training points and about 12 million features, and with a non-optimized CPU implementation, our sparse naive Bayes model can be trained in less than 15 seconds.
Tasks Feature Selection
Published 2019-05-23
URL https://arxiv.org/abs/1905.09884v2
PDF https://arxiv.org/pdf/1905.09884v2.pdf
PWC https://paperswithcode.com/paper/naive-feature-selection-sparsity-in-naive
Repo
Framework
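The flavor of naive-Bayes-driven feature selection can be shown with a simplified stand-in for the paper's exact combinatorial solution: rank binary features by how far apart their smoothed class-conditional log-odds are, and keep the top k. The scoring rule below is an illustrative approximation, not the authors' bound.

```python
import numpy as np

def naive_bayes_select(X, y, k, alpha=1.0):
    # Laplace-smoothed class-conditional probabilities for each binary feature.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    p1 = (X[y == 1].sum(0) + alpha) / (np.sum(y == 1) + 2 * alpha)
    p0 = (X[y == 0].sum(0) + alpha) / (np.sum(y == 0) + 2 * alpha)
    # Features whose log-odds differ most between classes are most informative.
    score = np.abs(np.log(p1 / p0)) + np.abs(np.log((1 - p1) / (1 - p0)))
    return np.argsort(score)[::-1][:k]
```

On a toy dataset where one feature perfectly tracks the label and another is pure noise, the informative feature is selected first.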

Unpaired image denoising using a generative adversarial network in X-ray CT

Title Unpaired image denoising using a generative adversarial network in X-ray CT
Authors Hyoung Suk Park, Jineon Baek, Sun Kyoung You, Jae Kyu Choi, Jin Keun Seo
Abstract This paper proposes a deep learning-based denoising method for noisy low-dose computerized tomography (CT) images in the absence of paired training data. The proposed method uses a fidelity-embedded generative adversarial network (GAN) to learn a denoising function from unpaired training data of low-dose CT (LDCT) and standard-dose CT (SDCT) images, where the denoising function is the optimal generator in the GAN framework. This paper analyzes the f-GAN objective to derive a suitable generator that is optimized by minimizing a weighted sum of two losses: the Kullback-Leibler divergence between an SDCT data distribution and a generated distribution, and the $\ell_2$ loss between the LDCT image and the corresponding generated images (or denoised image). The computed generator reflects the prior belief about SDCT data distribution through training. We observed that the proposed method allows the preservation of fine anomalous features while eliminating noise. The experimental results show that the proposed deep-learning method with unpaired datasets performs comparably to a method using paired datasets. A clinical experiment was also performed to show the validity of the proposed method for noise arising in the low-dose X-ray CT.
Tasks Denoising, Image Denoising
Published 2019-03-04
URL https://arxiv.org/abs/1903.06257v2
PDF https://arxiv.org/pdf/1903.06257v2.pdf
PWC https://paperswithcode.com/paper/unpaired-image-denoising-using-a-generative
Repo
Framework
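The generator objective described in the abstract is a weighted sum of an adversarial term and an l2 fidelity term tying the denoised output to its LDCT input. A minimal sketch, with the non-saturating log surrogate standing in for the KL-based f-GAN term and an assumed weighting `lam` (the paper's actual weighting and divergence derivation differ):

```python
import numpy as np

def generator_loss(disc_out_fake, denoised, ldct, lam=0.1):
    # Adversarial term: push the discriminator's output on generated
    # (denoised) images toward 1.
    adv = -np.mean(np.log(disc_out_fake + 1e-12))
    # Fidelity term: keep the denoised image close to its LDCT input.
    fid = np.mean((denoised - ldct) ** 2)
    return adv + lam * fid
```

The fidelity term is what lets training work without paired SDCT targets: the output must look like SDCT data to the discriminator while staying anchored to the specific LDCT scan.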

Threshold-Based Retrieval and Textual Entailment Detection on Legal Bar Exam Questions

Title Threshold-Based Retrieval and Textual Entailment Detection on Legal Bar Exam Questions
Authors Sabine Wehnert, Sayed Anisul Hoque, Wolfram Fenske, Gunter Saake
Abstract Getting an overview over the legal domain has become challenging, especially in a broad, international context. Legal question answering systems have the potential to alleviate this task by automatically retrieving relevant legal texts for a specific statement and checking whether the meaning of the statement can be inferred from the found documents. We investigate a combination of the BM25 scoring method of Elasticsearch with word embeddings trained on English translations of the German and Japanese civil law. For this, we define criteria which select a dynamic number of relevant documents according to threshold scores. Exploiting two deep learning classifiers and their respective prediction bias with a threshold-based answer inclusion criterion has shown to be beneficial for the textual entailment task, when compared to the baseline.
Tasks Natural Language Inference, Question Answering, Word Embeddings
Published 2019-05-30
URL https://arxiv.org/abs/1905.13350v1
PDF https://arxiv.org/pdf/1905.13350v1.pdf
PWC https://paperswithcode.com/paper/threshold-based-retrieval-and-textual
Repo
Framework
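The "dynamic number of relevant documents according to threshold scores" idea can be sketched as a relative cutoff on retrieval scores: keep every document whose score is within a fixed ratio of the top hit. The 0.8 ratio and function name are assumptions; the paper defines its own criteria over BM25 and embedding scores.

```python
def select_relevant(scored_docs, ratio=0.8):
    # scored_docs: list of (doc_id, score) pairs from a retrieval backend.
    if not scored_docs:
        return []
    ranked = sorted(scored_docs, key=lambda d: d[1], reverse=True)
    top_score = ranked[0][1]
    # Dynamic cutoff: keep documents scoring within `ratio` of the top hit.
    return [doc for doc, score in ranked if score >= ratio * top_score]
```

A statement with one dominant match returns a single document, while a statement touching several statutes returns all of them, which is the point of a dynamic rather than fixed top-k cutoff.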