Paper Group ANR 157
Non-Iterative Knowledge Fusion in Deep Convolutional Neural Networks
Title | Non-Iterative Knowledge Fusion in Deep Convolutional Neural Networks |
Authors | Mikhail Iu. Leontev, Viktoriia Islenteva, Sergey V. Sukhov |
Abstract | Incorporating new knowledge into a neural network while preserving the previously acquired knowledge is known to be a nontrivial problem. The problem becomes even more complex when the new knowledge is contained not in new training examples, but in the parameters (connection weights) of another neural network. Here we propose and test two methods for combining the knowledge contained in separate networks. One method is based on a simple summation of the weights of the constituent neural networks. The other incorporates new knowledge by modifying weights that are nonessential for preserving the already stored information. We show that with these methods the knowledge of one network can be transferred into another non-iteratively, without requiring training sessions. The fused network operates efficiently, performing classification far better than chance level. The efficiency of the methods is quantified on several publicly available data sets in classification tasks for both shallow and deep neural networks. |
Tasks | |
Published | 2018-09-25 |
URL | http://arxiv.org/abs/1809.09399v1 |
PDF | http://arxiv.org/pdf/1809.09399v1.pdf |
PWC | https://paperswithcode.com/paper/non-iterative-knowledge-fusion-in-deep |
Repo | |
Framework | |
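The weight-summation method lends itself to a short illustration. The sketch below assumes two networks with identical architectures and a simple 50/50 combination rule; the paper's exact weighting and any post-fusion adjustments are not reproduced here.

```python
import torch
import torch.nn as nn

def make_net():
    # identical architecture for both constituent networks (toy example)
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

def fuse_by_weight_summation(net_a, net_b, alpha=0.5):
    """Combine two same-architecture networks layer by layer.
    alpha = 0.5 is plain averaging; the exact weighting is an assumption."""
    fused = make_net()
    state_a, state_b = net_a.state_dict(), net_b.state_dict()
    fused.load_state_dict({k: alpha * state_a[k] + (1 - alpha) * state_b[k]
                           for k in state_a})
    return fused

net_a, net_b = make_net(), make_net()        # in practice, trained on different data
fused = fuse_by_weight_summation(net_a, net_b)
with torch.no_grad():
    print(fused(torch.randn(4, 32)).shape)   # torch.Size([4, 10])
```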
Optimal locally private estimation under $\ell_p$ loss for $1\le p\le 2$
Title | Optimal locally private estimation under $\ell_p$ loss for $1\le p\le 2$ |
Authors | Min Ye, Alexander Barg |
Abstract | We consider the minimax estimation problem of a discrete distribution with support size $k$ under local differential privacy constraints. A privatization scheme is applied to each raw sample independently, and we need to estimate the distribution of the raw samples from the privatized samples. A positive number $\epsilon$ measures the privacy level of a privatization scheme. In our previous work (IEEE Trans. Inform. Theory, 2018), we proposed a family of new privatization schemes and the corresponding estimator. We also proved that our scheme and estimator are order optimal in the regime $e^{\epsilon} \ll k$ under both $\ell_2^2$ (mean square) and $\ell_1$ loss. In this paper, we sharpen this result by showing asymptotic optimality of the proposed scheme under the $\ell_p^p$ loss for all $1\le p\le 2.$ More precisely, we show that for any $p\in[1,2]$ and any $k$ and $\epsilon,$ the ratio between the worst-case $\ell_p^p$ estimation loss of our scheme and the optimal value approaches $1$ as the number of samples tends to infinity. The lower bound on the minimax risk of private estimation that we establish as a part of the proof is valid for any loss function $\ell_p^p, p\ge 1.$ |
Tasks | |
Published | 2018-10-16 |
URL | http://arxiv.org/abs/1810.07283v1 |
PDF | http://arxiv.org/pdf/1810.07283v1.pdf |
PWC | https://paperswithcode.com/paper/optimal-locally-private-estimation-under |
Repo | |
Framework | |
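For readers unfamiliar with the setting, the snippet below shows the classical k-ary randomized response mechanism and its unbiased estimator, a baseline for locally private discrete distribution estimation rather than the paper's order-optimal scheme. All parameter values are illustrative.

```python
import numpy as np

def k_rr_privatize(samples, k, eps, rng):
    """k-ary randomized response: keep the true symbol with probability
    p = e^eps / (e^eps + k - 1), otherwise report a uniform other symbol."""
    p_true = np.exp(eps) / (np.exp(eps) + k - 1)
    keep = rng.random(len(samples)) < p_true
    noise = rng.integers(0, k - 1, size=len(samples))
    other = noise + (noise >= samples)        # uniform over the k-1 other symbols
    return np.where(keep, samples, other)

def k_rr_estimate(private, k, eps):
    """Unbiased estimate of the raw-sample distribution from privatized samples."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = 1.0 / (np.exp(eps) + k - 1)
    freq = np.bincount(private, minlength=k) / len(private)
    return (freq - q) / (p - q)

rng = np.random.default_rng(0)
k, eps, n = 10, 1.0, 200_000
true_p = rng.dirichlet(np.ones(k))
raw = rng.choice(k, size=n, p=true_p)
est = k_rr_estimate(k_rr_privatize(raw, k, eps, rng), k, eps)
print(np.sum(np.abs(est - true_p)))           # empirical l1 loss on this draw
```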
A game-theoretic approach to timeline-based planning with uncertainty
Title | A game-theoretic approach to timeline-based planning with uncertainty |
Authors | Nicola Gigante, Angelo Montanari, Marta Cialdea Mayer, Andrea Orlandini, Mark Reynolds |
Abstract | In timeline-based planning, domains are described as sets of independent, but interacting, components, whose behaviour over time (the set of timelines) is governed by a set of temporal constraints. A distinguishing feature of timeline-based planning systems is the ability to integrate planning with execution by synthesising control strategies for flexible plans. However, flexible plans can only represent temporal uncertainty, while more complex forms of nondeterminism are needed to deal with a wider range of realistic problems. In this paper, we propose a novel game-theoretic approach to timeline-based planning problems, generalising the state of the art while uniformly handling temporal uncertainty and nondeterminism. We define a general concept of timeline-based game and we show that the notion of winning strategy for these games is strictly more general than that of control strategy for dynamically controllable flexible plans. Moreover, we show that the problem of establishing the existence of such winning strategies is decidable using a doubly exponential amount of space. |
Tasks | |
Published | 2018-07-12 |
URL | https://arxiv.org/abs/1807.04837v2 |
https://arxiv.org/pdf/1807.04837v2.pdf | |
PWC | https://paperswithcode.com/paper/a-game-theoretic-approach-to-timeline-based |
Repo | |
Framework | |
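The winning-strategy question at the heart of the paper is posed over a much richer game model, but the basic flavour of "does the controller have a winning strategy?" can be conveyed with a toy reachability game and the standard attractor (backward fixpoint) computation. The game graph and its node names below are invented for illustration.

```python
def controller_wins_from(nodes, owner, succ, target):
    """Attractor (backward fixpoint) on a finite two-player game graph.
    owner[v] is 'ctrl' or 'env'; succ[v] lists successors. Returns the set of
    nodes from which the controller can force a visit to `target`."""
    win = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v in win:
                continue
            if owner[v] == 'ctrl' and any(s in win for s in succ[v]):
                win.add(v); changed = True        # controller can pick a good move
            elif owner[v] == 'env' and succ[v] and all(s in win for s in succ[v]):
                win.add(v); changed = True        # every environment move is good
    return win

# toy game: controller owns a and g, environment owns b and trap, goal node is g
owner = {'a': 'ctrl', 'b': 'env', 'g': 'ctrl', 'trap': 'env'}
succ = {'a': ['b', 'trap'], 'b': ['g'], 'g': ['g'], 'trap': ['trap']}
print(controller_wins_from(list(owner), owner, succ, {'g'}))  # a, b, g win; trap does not
```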
Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection
Title | Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection |
Authors | Lev Konstantinovskiy, Oliver Price, Mevan Babakar, Arkaitz Zubiaga |
Abstract | In an effort to assist factcheckers in the process of factchecking, we tackle the claim detection task, one of the necessary stages prior to determining the veracity of a claim. It consists of identifying, out of a long text, the set of sentences deemed capable of being factchecked. This paper is a collaborative work between Full Fact, an independent factchecking charity, and academic partners. Leveraging the expertise of professional factcheckers, we develop an annotation schema and a benchmark for automated claim detection that is more consistent across time, topics and annotators than previous approaches. Our annotation schema has been used to crowdsource the annotation of a dataset with sentences from UK political TV shows. We introduce an approach based on universal sentence representations to perform the classification, achieving an F1 score of 0.83, with over 5% relative improvement over the state-of-the-art methods ClaimBuster and ClaimRank. The system was deployed in production and received positive user feedback. |
Tasks | |
Published | 2018-09-21 |
URL | http://arxiv.org/abs/1809.08193v1 |
PDF | http://arxiv.org/pdf/1809.08193v1.pdf |
PWC | https://paperswithcode.com/paper/towards-automated-factchecking-developing-an |
Repo | |
Framework | |
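A minimal sketch of the claim-detection classification step follows. The paper builds on universal sentence representations; TF-IDF features stand in for those here, and the example sentences and labels are invented toy data, so this only illustrates the shape of the pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# invented toy examples: 1 = checkable claim, 0 = not a claim
sentences = [
    "Unemployment fell by 3 percent last year.",               # claim
    "We spend more on the health service than ever before.",   # claim
    "Thank you very much for having me tonight.",               # not a claim
    "I think we should all be kinder to each other.",           # not a claim (opinion)
]
labels = [1, 1, 0, 0]

# TF-IDF is only a stand-in for the universal sentence representations used in the paper
claim_detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
claim_detector.fit(sentences, labels)

print(claim_detector.predict(["Crime has doubled since 2010.",
                              "Good evening and welcome to the show."]))
```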
An efficient density-based clustering algorithm using reverse nearest neighbour
Title | An efficient density-based clustering algorithm using reverse nearest neighbour |
Authors | Stiphen Chowdhury, Renato Cordeiro de Amorim |
Abstract | Density-based clustering is the task of discovering high-density regions of entities (clusters) that are separated from each other by contiguous regions of low density. DBSCAN is, arguably, the most popular density-based clustering algorithm. However, its cluster recovery capabilities depend on the combination of its two parameters. In this paper we present a new density-based clustering algorithm which uses reverse nearest neighbour (RNN) queries and has a single parameter. We also show that it is possible to estimate a good value for this parameter using a clustering validity index. The RNN queries enable our algorithm to estimate densities taking more than a single entity into account, and to recover clusters that are not well separated or have different densities. Our experiments on synthetic and real-world data sets show that our proposed algorithm outperforms DBSCAN and its recent variant ISDBSCAN. |
Tasks | |
Published | 2018-11-19 |
URL | http://arxiv.org/abs/1811.07615v1 |
PDF | http://arxiv.org/pdf/1811.07615v1.pdf |
PWC | https://paperswithcode.com/paper/an-efficient-density-based-clustering |
Repo | |
Framework | |
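The reverse-nearest-neighbour signal the algorithm relies on is easy to compute. The sketch below shows only that density estimate (how many points list a given point among their k nearest neighbours), not the full clustering procedure or its single-parameter estimation; k = 10 and the blob data are arbitrary choices.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=[0.4, 1.0, 2.0],
                  random_state=0)

k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1 because each point is its own NN
_, idx = nn.kneighbors(X)
neigh = idx[:, 1:]                                # drop the self-neighbour column

# reverse nearest neighbours of i: points that list i among their k nearest
rnn_count = np.bincount(neigh.ravel(), minlength=len(X))

# points with many reverse neighbours sit in dense regions; outliers have few
print(rnn_count.min(), rnn_count.max(), rnn_count.mean())
```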
Deep Residual Network for Joint Demosaicing and Super-Resolution
Title | Deep Residual Network for Joint Demosaicing and Super-Resolution |
Authors | Ruofan Zhou, Radhakrishna Achanta, Sabine Süsstrunk |
Abstract | In digital photography, two image restoration tasks have been studied extensively and resolved independently: demosaicing and super-resolution. Both these tasks are related to resolution limitations of the camera. Performing super-resolution on demosaiced images simply exacerbates the artifacts introduced by demosaicing. In this paper, we show that such accumulation of errors can be easily averted by jointly performing demosaicing and super-resolution. To this end, we propose a deep residual network for learning an end-to-end mapping between Bayer images and high-resolution images. By training on high-quality samples, our deep residual demosaicing and super-resolution network is able to recover high-quality super-resolved images from low-resolution Bayer mosaics in a single step without producing the artifacts common to such processing when the two operations are done separately. We perform extensive experiments to show that our deep residual network produces demosaiced and super-resolved images that are superior to the state-of-the-art both qualitatively and in terms of PSNR and SSIM metrics. |
Tasks | Demosaicking, Image Restoration, Super-Resolution |
Published | 2018-02-19 |
URL | http://arxiv.org/abs/1802.06573v1 |
PDF | http://arxiv.org/pdf/1802.06573v1.pdf |
PWC | https://paperswithcode.com/paper/deep-residual-network-for-joint-demosaicing |
Repo | |
Framework | |
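A compact sketch of the end-to-end idea: a residual CNN mapping a packed Bayer mosaic directly to a super-resolved RGB image. The packing, depth, width and upsampling factor below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class JointDemosaicSR(nn.Module):
    """Minimal sketch: packed 4-channel Bayer input -> 2x super-resolved RGB.
    Depth, width and the residual structure are illustrative assumptions."""
    def __init__(self, width=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(4, width, 3, padding=1)
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(width, width, 3, padding=1))
            for _ in range(n_blocks)])
        # Bayer packing halves the resolution, so 4x upsampling gives 2x SR overall
        self.tail = nn.Sequential(nn.Conv2d(width, 3 * 16, 3, padding=1),
                                  nn.PixelShuffle(4))

    def forward(self, bayer_packed):              # (B, 4, H/2, W/2)
        x = self.head(bayer_packed)
        for block in self.blocks:
            x = x + block(x)                      # residual connections
        return self.tail(x)                       # (B, 3, 2H, 2W)

net = JointDemosaicSR()
print(net(torch.randn(1, 4, 32, 32)).shape)       # torch.Size([1, 3, 128, 128])
```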
Alpha-Integration Pooling for Convolutional Neural Networks
Title | Alpha-Integration Pooling for Convolutional Neural Networks |
Authors | Hayoung Eom, Heeyoul Choi |
Abstract | Convolutional neural networks (CNNs) have achieved remarkable performance in many applications, especially in image recognition tasks. As a crucial component of CNNs, sub-sampling plays an important role in efficient training and in invariance properties, and max-pooling and arithmetic average-pooling are commonly used sub-sampling methods. In addition to these two pooling methods, however, there could be many other pooling types, such as the geometric average, harmonic average, and so on. Since it is not easy for algorithms to find the best pooling method, the pooling type is usually assumed a priori, which might not be optimal for different tasks. In line with the deep learning philosophy, the type of pooling can be driven by data for a given task. In this paper, we propose *$\alpha$-integration pooling* ($\alpha$I-pooling), which has a trainable parameter $\alpha$ to find the type of pooling. $\alpha$I-pooling is a general pooling method that includes max-pooling and arithmetic average-pooling as special cases, depending on the parameter $\alpha$. Experiments show that $\alpha$I-pooling outperforms other pooling methods, including max-pooling, in image recognition tasks. Also, it turns out that each layer has a different optimal pooling type. |
Tasks | |
Published | 2018-11-08 |
URL | https://arxiv.org/abs/1811.03436v4 |
https://arxiv.org/pdf/1811.03436v4.pdf | |
PWC | https://paperswithcode.com/paper/alpha-pooling-for-convolutional-neural |
Repo | |
Framework | |
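The sketch below implements a trainable generalized-mean pooling layer, a simpler relative of $\alpha$I-pooling that likewise recovers arithmetic average pooling (p = 1) and approaches max-pooling as p grows, with the pooling "type" learned from data. It is not the paper's exact $\alpha$-integration formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneralizedMeanPool2d(nn.Module):
    """Trainable pooling: ((1/n) * sum x_i^p)^(1/p) over each window.
    p = 1 recovers average pooling; large p approaches max pooling.
    A simpler relative of alpha-integration pooling, not the paper's formula."""
    def __init__(self, kernel_size=2, p_init=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p_init))   # learned alongside the CNN
        self.kernel_size = kernel_size
        self.eps = eps

    def forward(self, x):
        x = x.clamp(min=self.eps)                     # requires non-negative inputs
        return F.avg_pool2d(x.pow(self.p), self.kernel_size).pow(1.0 / self.p)

pool = GeneralizedMeanPool2d()
feat = torch.relu(torch.randn(8, 16, 32, 32))
print(pool(feat).shape, pool.p.item())                # torch.Size([8, 16, 16, 16]) 3.0
```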
Improved Dynamic Memory Network for Dialogue Act Classification with Adversarial Training
Title | Improved Dynamic Memory Network for Dialogue Act Classification with Adversarial Training |
Authors | Yao Wan, Wenqiang Yan, Jianwei Gao, Zhou Zhao, Jian Wu, Philip S. Yu |
Abstract | Dialogue Act (DA) classification is a challenging problem in dialogue interpretation, which aims to attach semantic labels to utterances and characterize the speaker's intention. Existing approaches formulate the DA classification problem as anything from multi-class classification to structured prediction, and suffer from two limitations: (a) they are either based on handcrafted features or have limited memory; (b) adversarial examples cannot be correctly classified by traditional training methods. To address these issues, in this paper we first cast the problem as a question answering problem and propose an improved dynamic memory network with a hierarchical pyramidal utterance encoder. Moreover, we apply adversarial training to train our proposed model. We evaluate our model on two public datasets, i.e., the Switchboard Dialogue Act corpus and the MapTask corpus. Extensive experiments show that our proposed model is not only robust, but also achieves better performance when compared with some state-of-the-art baselines. |
Tasks | Dialogue Act Classification, Dialogue Interpretation, Structured Prediction |
Published | 2018-11-12 |
URL | http://arxiv.org/abs/1811.05021v1 |
PDF | http://arxiv.org/pdf/1811.05021v1.pdf |
PWC | https://paperswithcode.com/paper/improved-dynamic-memory-network-for-dialogue |
Repo | |
Framework | |
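The dynamic memory network itself is not reproduced here; the sketch only illustrates the adversarial-training ingredient, i.e., a fast-gradient perturbation applied to the utterance embeddings and added to the training objective. The toy encoder, vocabulary and label space are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

emb = nn.Embedding(1000, 64)                               # toy vocabulary
clf = nn.Sequential(nn.Flatten(), nn.Linear(10 * 64, 5))   # toy stand-in for the DMN
opt = torch.optim.Adam(list(emb.parameters()) + list(clf.parameters()), lr=1e-3)

def adversarial_step(tokens, labels, epsilon=1.0):
    e = emb(tokens)                                        # (B, T, D) embedded utterance
    clean_loss = F.cross_entropy(clf(e), labels)
    # gradient of the clean loss w.r.t. the (continuous) embeddings
    g, = torch.autograd.grad(clean_loss, e, retain_graph=True)
    delta = epsilon * g / (g.norm(dim=-1, keepdim=True) + 1e-12)
    adv_loss = F.cross_entropy(clf(e + delta), labels)     # loss on perturbed embeddings
    opt.zero_grad()
    (clean_loss + adv_loss).backward()
    opt.step()
    return clean_loss.item(), adv_loss.item()

tokens = torch.randint(0, 1000, (32, 10))                  # 32 utterances of 10 tokens
labels = torch.randint(0, 5, (32,))
print(adversarial_step(tokens, labels))
```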
ContextNet: Exploring Context and Detail for Semantic Segmentation in Real-time
Title | ContextNet: Exploring Context and Detail for Semantic Segmentation in Real-time |
Authors | Rudra P K Poudel, Ujwal Bonde, Stephan Liwicki, Christopher Zach |
Abstract | Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naive adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representation to produce competitive semantic segmentation in real-time with low memory requirement. ContextNet combines a deep network branch at low resolution that captures global context information efficiently with a shallow branch that focuses on high-resolution segmentation details. We analyse our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024x2048) resolution (41.9 fps with pipelined computations for streamed data). |
Tasks | Semantic Segmentation |
Published | 2018-05-11 |
URL | http://arxiv.org/abs/1805.04554v4 |
PDF | http://arxiv.org/pdf/1805.04554v4.pdf |
PWC | https://paperswithcode.com/paper/contextnet-exploring-context-and-detail-for |
Repo | |
Framework | |
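The two-branch design is the key architectural idea. The sketch below wires up a deeper low-resolution context branch and a shallow full-resolution detail branch and fuses them; channel counts, depths and the additive fusion are illustrative assumptions rather than the published ContextNet configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSegNet(nn.Module):
    """Minimal sketch of the two-branch idea: a deeper branch on a 4x-downsampled
    input captures context, a shallow branch keeps full-resolution detail, and
    the two are fused before the classifier."""
    def __init__(self, n_classes=19):
        super().__init__()
        self.detail = nn.Sequential(                      # shallow, full resolution
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.context = nn.Sequential(                     # deeper, low resolution
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.classifier = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        detail = self.detail(x)
        ctx = self.context(F.interpolate(x, scale_factor=0.25, mode='bilinear',
                                         align_corners=False))
        ctx = F.interpolate(ctx, size=detail.shape[-2:], mode='bilinear',
                            align_corners=False)
        return self.classifier(detail + ctx)              # fuse and classify

net = TwoBranchSegNet()
print(net(torch.randn(1, 3, 128, 256)).shape)             # torch.Size([1, 19, 128, 256])
```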
Variational Neural Networks: Every Layer and Neuron Can Be Unique
Title | Variational Neural Networks: Every Layer and Neuron Can Be Unique |
Authors | Yiwei Li, Enzhi Li |
Abstract | The choice of activation function can significantly influence the performance of neural networks. The lack of guiding principles for the selection of an activation function is lamentable. We try to address this issue by introducing variational neural networks, where the activation function is represented as a linear combination of possible candidate functions and an optimal activation is obtained by minimizing a loss function with the gradient descent method. The gradient formulae for the loss function with respect to these expansion coefficients are central to the implementation of the gradient descent algorithm, and here we derive these formulae. |
Tasks | |
Published | 2018-10-14 |
URL | http://arxiv.org/abs/1810.06120v1 |
PDF | http://arxiv.org/pdf/1810.06120v1.pdf |
PWC | https://paperswithcode.com/paper/variational-neural-networks-every-layer-and |
Repo | |
Framework | |
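The core construction, an activation that is a trainable linear combination of candidate functions, is straightforward to prototype; in the sketch below autograd supplies the coefficient gradients that the paper derives in closed form. The candidate set and the layer-wide sharing of coefficients are assumptions.

```python
import torch
import torch.nn as nn

class VariationalActivation(nn.Module):
    """Activation as a trainable linear combination of candidate functions.
    The candidate set and coefficient sharing are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        self.candidates = [torch.relu, torch.tanh, torch.sigmoid]
        self.coeffs = nn.Parameter(torch.ones(len(self.candidates)) / 3)

    def forward(self, x):
        return sum(c * f(x) for c, f in zip(self.coeffs, self.candidates))

net = nn.Sequential(nn.Linear(8, 16), VariationalActivation(), nn.Linear(16, 1))
loss = net(torch.randn(4, 8)).pow(2).mean()
loss.backward()
print(net[1].coeffs.grad)          # gradients w.r.t. the expansion coefficients
```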
Wasserstein GAN and Waveform Loss-based Acoustic Model Training for Multi-speaker Text-to-Speech Synthesis Systems Using a WaveNet Vocoder
Title | Wasserstein GAN and Waveform Loss-based Acoustic Model Training for Multi-speaker Text-to-Speech Synthesis Systems Using a WaveNet Vocoder |
Authors | Yi Zhao, Shinji Takaki, Hieu-Thi Luong, Junichi Yamagishi, Daisuke Saito, Nobuaki Minematsu |
Abstract | Recent neural networks such as WaveNet and sampleRNN that learn directly from speech waveform samples have achieved very high-quality synthetic speech in terms of both naturalness and speaker similarity even in multi-speaker text-to-speech synthesis systems. Such neural networks are being used as an alternative to vocoders and hence they are often called neural vocoders. The neural vocoder uses acoustic features as local condition parameters, and these parameters need to be accurately predicted by another acoustic model. However, it is not yet clear how to train this acoustic model, which is problematic because the final quality of synthetic speech is significantly affected by the performance of the acoustic model. Significant degradation happens, especially when predicted acoustic features have mismatched characteristics compared to natural ones. In order to reduce the mismatched characteristics between natural and generated acoustic features, we propose frameworks that incorporate either a conditional generative adversarial network (GAN) or its variant, Wasserstein GAN with gradient penalty (WGAN-GP), into multi-speaker speech synthesis that uses the WaveNet vocoder. We also extend the GAN frameworks and use the discretized mixture logistic loss of a well-trained WaveNet in addition to mean squared error and adversarial losses as parts of objective functions. Experimental results show that acoustic models trained in the WGAN-GP framework with the back-propagated discretized-mixture-of-logistics (DML) loss achieve the highest subjective evaluation scores in terms of both quality and speaker similarity. |
Tasks | Speech Synthesis, Text-To-Speech Synthesis |
Published | 2018-07-31 |
URL | http://arxiv.org/abs/1807.11679v1 |
PDF | http://arxiv.org/pdf/1807.11679v1.pdf |
PWC | https://paperswithcode.com/paper/wasserstein-gan-and-waveform-loss-based |
Repo | |
Framework | |
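The acoustic models and WaveNet vocoder are out of scope here, but the WGAN-GP ingredient can be shown in isolation: the gradient penalty that keeps the critic approximately 1-Lipschitz. The toy critic and the 80-dimensional acoustic feature vectors are placeholders, not the paper's models.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP penalty: push the critic's gradient norm towards 1 on random
    interpolates between real and generated acoustic feature frames."""
    eps = torch.rand(real.size(0), 1)                     # one mixing weight per sample
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(interp).sum()
    grad, = torch.autograd.grad(score, interp, create_graph=True)
    return lambda_gp * ((grad.norm(2, dim=1) - 1) ** 2).mean()

# toy critic over 80-dim acoustic feature vectors; placeholder architecture
critic = torch.nn.Sequential(torch.nn.Linear(80, 128), torch.nn.LeakyReLU(0.2),
                             torch.nn.Linear(128, 1))
real, fake = torch.randn(16, 80), torch.randn(16, 80)
print(gradient_penalty(critic, real, fake).item())
# the full critic loss would be: critic(fake).mean() - critic(real).mean() + penalty
```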
Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes
Title | Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes |
Authors | Ronan Fruit, Matteo Pirotta, Alessandro Lazaric |
Abstract | While designing the state space of an MDP, it is common to include states that are transient or not reachable by any policy (e.g., in mountain car, the product space of speed and position contains configurations that are not physically reachable). This leads to defining weakly-communicating or multi-chain MDPs. In this paper, we introduce TUCRL, the first algorithm able to perform efficient exploration-exploitation in any finite Markov Decision Process (MDP) without requiring any form of prior knowledge. In particular, for any MDP with $S^{\texttt{C}}$ communicating states, $A$ actions and $\Gamma^{\texttt{C}} \leq S^{\texttt{C}}$ possible communicating next states, we derive a $\widetilde{O}(D^{\texttt{C}} \sqrt{\Gamma^{\texttt{C}} S^{\texttt{C}} AT})$ regret bound, where $D^{\texttt{C}}$ is the diameter (i.e., the longest shortest path) of the communicating part of the MDP. This is in contrast with optimistic algorithms (e.g., UCRL, Optimistic PSRL) that suffer linear regret in weakly-communicating MDPs, as well as posterior sampling or regularised algorithms (e.g., REGAL), which require prior knowledge on the bias span of the optimal policy to bias the exploration to achieve sub-linear regret. We also prove that in weakly-communicating MDPs, no algorithm can ever achieve a logarithmic growth of the regret without first suffering a linear regret for a number of steps that is exponential in the parameters of the MDP. Finally, we report numerical simulations supporting our theoretical findings and showing how TUCRL overcomes the limitations of the state-of-the-art. |
Tasks | Efficient Exploration |
Published | 2018-07-06 |
URL | http://arxiv.org/abs/1807.02373v2 |
PDF | http://arxiv.org/pdf/1807.02373v2.pdf |
PWC | https://paperswithcode.com/paper/near-optimal-exploration-exploitation-in-non |
Repo | |
Framework | |
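The regret bound is stated in terms of the diameter of the communicating part of the MDP, i.e., the longest shortest path between communicating states. For a deterministic MDP this reduces to a plain graph computation, sketched below on an invented toy transition graph; it illustrates the quantity only, not TUCRL.

```python
from collections import deque

def diameter(states, succ):
    """Longest shortest path between any ordered pair of states, assuming a
    deterministic MDP (so expected hitting times reduce to graph distances)
    whose states all communicate. succ[s] lists the states reachable in one action."""
    def bfs(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            s = queue.popleft()
            for t in succ[s]:
                if t not in dist:
                    dist[t] = dist[s] + 1
                    queue.append(t)
        return dist
    return max(bfs(s)[t] for s in states for t in states)

# communicating part of a toy MDP: a 4-cycle plus a shortcut
succ = {0: [1, 2], 1: [2], 2: [3], 3: [0]}
print(diameter(list(succ), succ))        # 3 (e.g. reaching state 0 from state 1)
```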
Killing Four Birds with one Gaussian Process: Analyzing Test-Time Attack Vectors on Classification
Title | Killing Four Birds with one Gaussian Process: Analyzing Test-Time Attack Vectors on Classification |
Authors | Kathrin Grosse, Michael T. Smith, Michael Backes |
Abstract | The wide usage of Machine Learning (ML) leads to direct security threats, as ML algorithms are vulnerable to a plethora of attacks themselves. Different attack vectors are known, targeting for example the training phase using manipulated data. Alternatively, they take place at test time and aim for misclassification, the leakage of the training data, or extraction of the model. Previous works studied different test-time attacks individually. We show that, using an ML model that enables formal analysis and allows control over the decision surface curvature, interesting insights can be gained when attack vectors are studied not in isolation but in relation to each other. We show, for example, how we can secure Gaussian Process Classification against empirical membership inference by properly configuring the algorithm. In this configuration, however, the model's parameters are leaked. This allows an analytic computation of the training data, which is thus leaked, against the original intention of protecting the data. We extend our study to evasion attacks, and find that, analogously, hardening the model against one attack boils down to enabling a different attacker. |
Tasks | |
Published | 2018-06-06 |
URL | http://arxiv.org/abs/1806.02032v2 |
PDF | http://arxiv.org/pdf/1806.02032v2.pdf |
PWC | https://paperswithcode.com/paper/killing-four-birds-with-one-gaussian-process |
Repo | |
Framework | |
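The paper's analysis is analytic; as a purely empirical counterpart, the sketch below runs the simplest membership-inference baseline (thresholding predictive confidence) against a Gaussian process classifier. Dataset, kernel and threshold are arbitrary choices for illustration and do not reproduce the paper's study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

gpc = GaussianProcessClassifier(random_state=0).fit(X_train, y_train)

# simplest membership-inference baseline: members tend to receive more
# confident (higher max-probability) predictions than non-members
conf_members = gpc.predict_proba(X_train).max(axis=1)
conf_outside = gpc.predict_proba(X_out).max(axis=1)
threshold = 0.9                                   # arbitrary illustrative threshold
print("flagged as members (train):   ", np.mean(conf_members > threshold))
print("flagged as members (held out):", np.mean(conf_outside > threshold))
```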
Model-Based Action Exploration for Learning Dynamic Motion Skills
Title | Model-Based Action Exploration for Learning Dynamic Motion Skills |
Authors | Glen Berseth, Michiel van de Panne |
Abstract | Deep reinforcement learning has achieved great strides in solving challenging motion control tasks. Recently, there has been significant work on methods for exploiting the data gathered during training, but there has been less work on how to best generate the data to learn from. For continuous action domains, the most common method for generating exploratory actions involves sampling from a Gaussian distribution centred around the mean action output by a policy. Although these methods can be quite capable, they do not scale well with the dimensionality of the action space, and can be dangerous to apply on hardware. We consider learning a forward dynamics model to predict the result, ($x_{t+1}$), of taking a particular action, ($u$), given a specific observation of the state, ($x_{t}$). With this model we perform internal look-ahead predictions of outcomes and seek actions we believe have a reasonable chance of success. This method alters the exploratory action space, thereby increasing learning speed and enabling higher-quality solutions to difficult problems, such as robotic locomotion and juggling. |
Tasks | |
Published | 2018-01-11 |
URL | http://arxiv.org/abs/1801.03954v2 |
PDF | http://arxiv.org/pdf/1801.03954v2.pdf |
PWC | https://paperswithcode.com/paper/model-based-action-exploration-for-learning |
Repo | |
Framework | |
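A minimal sketch of the exploration mechanism described in the abstract: sample Gaussian candidate actions around the policy mean, push each through a learned forward dynamics model, and keep the action whose predicted next state scores best. The dynamics and scoring networks, candidate count and noise scale are illustrative assumptions.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2
dynamics = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                         nn.Linear(64, state_dim))        # predicts x_{t+1} from (x_t, u)
value = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, 1))                    # scores predicted next states

def explore_action(x_t, mean_action, n_candidates=64, sigma=0.3):
    """Pick the candidate action whose predicted outcome looks best under the
    learned models (both untrained here, for illustration only)."""
    with torch.no_grad():
        u = mean_action + sigma * torch.randn(n_candidates, action_dim)
        x_rep = x_t.expand(n_candidates, state_dim)
        x_next = dynamics(torch.cat([x_rep, u], dim=1))    # internal look-ahead
        scores = value(x_next).squeeze(1)
        return u[scores.argmax()]

x_t = torch.randn(1, state_dim)
print(explore_action(x_t, mean_action=torch.zeros(action_dim)))
```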
Point-to-Pose Voting based Hand Pose Estimation using Residual Permutation Equivariant Layer
Title | Point-to-Pose Voting based Hand Pose Estimation using Residual Permutation Equivariant Layer |
Authors | Shile Li, Dongheui Lee |
Abstract | Recently, 3D input data based hand pose estimation methods have shown state-of-the-art performance, because 3D data capture more spatial information than the depth image. Whereas 3D voxel-based methods need a large amount of memory, PointNet based methods need tedious preprocessing steps such as K-nearest neighbour search for each point. In this paper, we present a novel deep learning hand pose estimation method for an unordered point cloud. Our method takes 1024 3D points as input and does not require additional information. We use the Permutation Equivariant Layer (PEL) as the basic element, and propose a residual network version of PEL for the hand pose estimation task. Furthermore, we propose a voting-based scheme to merge information from individual points into the final pose output. In addition to the pose estimation task, the voting-based scheme can also provide a point cloud segmentation result without requiring segmentation ground truth. We evaluate our method on both the NYU dataset and the Hands2017Challenge dataset. Our method outperforms recent state-of-the-art methods, and our pose accuracy is currently the best for the Hands2017Challenge dataset. |
Tasks | Hand Pose Estimation, Pose Estimation |
Published | 2018-12-05 |
URL | http://arxiv.org/abs/1812.02050v1 |
PDF | http://arxiv.org/pdf/1812.02050v1.pdf |
PWC | https://paperswithcode.com/paper/point-to-pose-voting-based-hand-pose |
Repo | |
Framework | |
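The residual PEL blocks build on the standard permutation equivariant layer; the sketch below implements that basic layer in the Deep Sets style and checks equivariance on a 1024-point cloud. Feature sizes and the max aggregation are assumptions, and the residual and voting parts of the paper are not reproduced.

```python
import torch
import torch.nn as nn

class PermutationEquivariantLayer(nn.Module):
    """Deep-Sets-style PE layer: each point is transformed individually and
    combined with a permutation-invariant summary (the max over the point set),
    so permuting the input points permutes the output rows the same way."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.point_wise = nn.Linear(in_dim, out_dim)
        self.set_wise = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, points):                      # (B, N, in_dim)
        pooled = points.max(dim=1, keepdim=True).values
        return torch.relu(self.point_wise(points) + self.set_wise(pooled))

layer = PermutationEquivariantLayer(3, 32)
cloud = torch.randn(2, 1024, 3)                     # 1024 3D points, as in the paper
out = layer(cloud)
perm = torch.randperm(1024)
# equivariance check: permuting the points permutes the outputs identically
print(torch.allclose(layer(cloud[:, perm]), out[:, perm], atol=1e-6))
```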