Paper Group ANR 7
A New Modal Framework for Epistemic Logic. Classification non supervisée des données hétérogènes à large échelle. Robust Sparse Coding via Self-Paced Learning. DNA-GAN: Learning Disentangled Representations from Multi-Attribute Images. Extending the small-ball method. Manifold learning with bi-stochastic kernels. Autoencoder with recurrent neural n …
A New Modal Framework for Epistemic Logic
Title | A New Modal Framework for Epistemic Logic |
Authors | Yanjing Wang |
Abstract | Recent years have witnessed a growing interest in non-standard epistemic logics of knowing whether, knowing how, knowing what, knowing why, and so on. The new epistemic modalities introduced in those logics all share, in their semantics, the general schema of $\exists x \Box \phi$; e.g., knowing how to achieve $\phi$ roughly means that there exists a way such that you know that it is a way to ensure that $\phi$. Moreover, the resulting logics are decidable. Inspired by those particular logics, in this work we propose a very general and powerful framework based on a quantifier-free predicate language extended by a new modality $\Box^x$, which packs exactly $\exists x \Box$ together. We show that the resulting language, though much more expressive, shares many good properties of the basic propositional modal logic over arbitrary models, such as the finite-tree-model property and a van Benthem-like characterization w.r.t.\ first-order modal logic. We axiomatize the logic over S5 frames with intuitive axioms that capture the interaction between $\Box^x$ and the know-that operator in an epistemic setting. (A worked semantics clause follows this entry.) |
Tasks | |
Published | 2017-07-27 |
URL | http://arxiv.org/abs/1707.08764v1 |
http://arxiv.org/pdf/1707.08764v1.pdf | |
PWC | https://paperswithcode.com/paper/a-new-modal-framework-for-epistemic-logic |
Repo | |
Framework | |
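To make the $\exists x \Box$ packing concrete, here is the satisfaction clause one would expect for $\Box^x$ on first-order Kripke models with a constant domain $D$ — a reconstruction from the schema quoted in the abstract, not necessarily the paper's exact definition.

```latex
% Satisfaction clause for the packed modality (reconstructed):
% \Box^x \phi holds at w iff a single witness a works in every
% accessible world -- "there exists an x you know to satisfy phi".
\[
  \mathcal{M}, w \vDash \Box^x \phi
  \quad\iff\quad
  \exists a \in D \;\; \forall v \,\bigl( wRv \;\Rightarrow\; \mathcal{M}, v \vDash \phi[x := a] \bigr).
\]
% Example (knowing how): with \phi(x) read as "x is a way to ensure \psi",
% \Box^x \phi says there exists a way that you know ensures \psi.
```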
Classification non supervisée des données hétérogènes à large échelle (Unsupervised classification of large-scale heterogeneous data)
Title | Classification non supervisée des données hétérogènes à large échelle |
Authors | Mohamed Ali Zoghlami, Olfa Arfaoui, Minyar Sassi Hidri, Rahma Ben Ayed |
Abstract | When it comes to clustering massive data, response time, disk access, and the quality of the formed classes become major issues for companies. It is in this context that we define a clustering framework for large-scale heterogeneous data that contributes to resolving these issues. The proposed framework is based, firstly, on descriptive analysis via Multiple Correspondence Analysis (MCA) and, secondly, on the MapReduce paradigm in a large-scale environment. The results are encouraging and demonstrate the efficiency of the hybrid deployment with respect to both response quality and response time, on qualitative as well as quantitative data. (A map/reduce sketch follows this entry.) |
Tasks | |
Published | 2017-07-02 |
URL | http://arxiv.org/abs/1707.00297v1 |
http://arxiv.org/pdf/1707.00297v1.pdf | |
PWC | https://paperswithcode.com/paper/classification-non-supervisee-des-donnees |
Repo | |
Framework | |
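A minimal sketch of the two-stage pipeline described above, assuming a k-means-style clustering step: the MCA step is replaced by a generic linear projection, and `map_assign`/`reduce_update` are illustrative stand-ins for a real MapReduce job, not the paper's implementation.

```python
import numpy as np

def map_assign(chunk, centroids):
    """Map step: assign each point of a data chunk to its nearest centroid,
    emitting per-cluster partial sums and counts."""
    d = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    out = {}
    for k in range(len(centroids)):
        pts = chunk[labels == k]
        if len(pts):
            out[k] = (pts.sum(0), len(pts))
    return out

def reduce_update(partials, centroids):
    """Reduce step: merge the mappers' partial sums into new centroids."""
    new = centroids.copy()
    for k in range(len(centroids)):
        parts = [p[k] for p in partials if k in p]
        if parts:
            total = sum(s for s, _ in parts)
            count = sum(n for _, n in parts)
            new[k] = total / count
    return new

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 40))
X_red = X @ rng.normal(size=(40, 5))     # stand-in for the MCA projection
chunks = np.array_split(X_red, 8)        # 8 "mappers"
centroids = X_red[rng.choice(len(X_red), 4, replace=False)]
for _ in range(10):
    centroids = reduce_update([map_assign(c, centroids) for c in chunks],
                              centroids)
```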
Robust Sparse Coding via Self-Paced Learning
Title | Robust Sparse Coding via Self-Paced Learning |
Authors | Xiaodong Feng, Zhiwei Tang, Sen Wu |
Abstract | Sparse coding (SC) is attracting more and more attention due to its comprehensive theoretical studies and its excellent performance in many signal processing applications. However, most existing sparse coding algorithms are nonconvex and are thus prone to getting stuck in bad local minima, especially in the presence of outliers and noisy data. To enhance learning robustness, in this paper we propose a unified framework named Self-Paced Sparse Coding (SPSC), which gradually includes matrix elements into SC learning from easy to complex. We also generalize the self-paced learning scheme to different levels of dynamic selection, on samples, features, and elements respectively. Experimental results on real-world data demonstrate the efficacy of the proposed algorithms. (See the sketch after this entry.) |
Tasks | |
Published | 2017-09-10 |
URL | http://arxiv.org/abs/1709.03030v1 |
http://arxiv.org/pdf/1709.03030v1.pdf | |
PWC | https://paperswithcode.com/paper/robust-sparse-coding-via-self-paced-learning |
Repo | |
Framework | |
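A toy sketch of element-level self-paced selection for sparse coding, under assumptions of my own: hard 0/1 element weights and plain gradient/ISTA updates, with a pace parameter that grows so harder matrix entries are admitted over time. The paper's actual solvers and pace function may differ.

```python
import numpy as np

def soft_threshold(Z, t):
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def spsc(X, n_atoms=20, lam_sparse=0.1, age=1.0, growth=1.3,
         iters=30, lr=0.01):
    """Element-wise self-paced sparse coding sketch: 'easy' matrix entries
    (small residual) enter the weighted objective first; `age` grows so
    harder entries are included in later rounds."""
    rng = np.random.default_rng(0)
    D = rng.normal(size=(X.shape[0], n_atoms))   # dictionary
    A = np.zeros((n_atoms, X.shape[1]))          # sparse codes
    for _ in range(iters):
        R = X - D @ A
        V = (R ** 2 < age).astype(float)   # hard self-paced weights per element
        G = -(V * R)                       # gradient of the weighted loss
        D -= lr * G @ A.T
        A = soft_threshold(A - lr * D.T @ G, lr * lam_sparse)  # ISTA step
        age *= growth                      # admit harder elements next round
    return D, A

X = np.random.default_rng(1).normal(size=(50, 200))
D, A = spsc(X)
```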
DNA-GAN: Learning Disentangled Representations from Multi-Attribute Images
Title | DNA-GAN: Learning Disentangled Representations from Multi-Attribute Images |
Authors | Taihong Xiao, Jiapeng Hong, Jinwen Ma |
Abstract | Disentangling factors of variation has become a very challenging problem in representation learning. Existing algorithms suffer from many limitations, such as unpredictable disentangling factors, poor quality of images generated from encodings, lack of identity information, etc. In this paper, we propose a supervised learning model called DNA-GAN which tries to disentangle different factors or attributes of images. The latent representations of images are DNA-like, in that each individual piece of the encoding represents an independent factor of variation. By annihilating the recessive piece and swapping a certain piece of one latent representation with that of another, we obtain two different representations which can be decoded into two images in which the presence of the corresponding attribute is changed. In order to obtain realistic images as well as disentangled representations, we further introduce a discriminator for adversarial training. Experiments on the Multi-PIE and CelebA datasets demonstrate that our proposed method is effective at disentangling factors and even overcomes certain limitations of existing methods. (A latent-swap sketch follows the entry.) |
Tasks | Representation Learning |
Published | 2017-11-15 |
URL | http://arxiv.org/abs/1711.05415v2 |
http://arxiv.org/pdf/1711.05415v2.pdf | |
PWC | https://paperswithcode.com/paper/dna-gan-learning-disentangled-representations |
Repo | |
Framework | |
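A sketch of just the DNA-like crossover operation on the latent codes (the encoder, decoder, and discriminator are omitted). The piece count, layout, and the exact annihilation rule here are illustrative assumptions, not the paper's precise construction.

```python
import numpy as np

def dna_swap(z_with, z_without, i, pieces=5):
    """Crossover of DNA-like latent codes: swap the i-th piece (the
    attribute-bearing segment) from the code of an image that has the
    attribute into the code of one that lacks it, zeroing ("annihilating")
    the recessive copy left behind."""
    L = len(z_with) // pieces
    seg = slice(i * L, (i + 1) * L)
    a, b = z_with.copy(), z_without.copy()
    b[seg] = z_with[seg]       # b's decoding should now show the attribute
    a[seg] = np.zeros(L)       # a's decoding should now lack the attribute
    return a, b

z_with = np.ones(100)          # toy encoding of an image with the attribute
z_without = np.zeros(100)      # toy encoding of an image without it
z_removed, z_added = dna_swap(z_with, z_without, i=2)
```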
Extending the small-ball method
Title | Extending the small-ball method |
Authors | Shahar Mendelson |
Abstract | The small-ball method was introduced as a way of obtaining a high-probability, isomorphic lower bound on the quadratic empirical process, under weak assumptions on the indexing class. The key assumption was that class members satisfy a uniform small-ball estimate, that is, $Pr(|f| \geq \kappa \|f\|_{L_2}) \geq \delta$ for given constants $\kappa$ and $\delta$. Here we extend the small-ball method and obtain a high-probability, almost-isometric (rather than isomorphic) lower bound on the quadratic empirical process. The scope of the result is considerably wider than the small-ball method: there is no need for class members to satisfy a uniform small-ball condition, and moreover, motivated by the notion of tournament learning procedures, the result is stable under a `majority vote'. As applications, we study the performance of empirical risk minimization in learning problems involving bounded subsets of $L_p$ that satisfy a Bernstein condition, and of the tournament procedure in problems involving bounded subsets of $L_\infty$. (The two bounds are restated after this entry.) |
Tasks | |
Published | 2017-09-04 |
URL | http://arxiv.org/abs/1709.00843v1 |
http://arxiv.org/pdf/1709.00843v1.pdf | |
PWC | https://paperswithcode.com/paper/extending-the-small-ball-method |
Repo | |
Framework | |
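For orientation, the two statements contrasted in the abstract, reconstructed from its wording; the exact constants and conditions are in the paper.

```latex
% Uniform small-ball condition on the class F: there exist constants
% \kappa, \delta > 0 such that for every f in F,
\[
  \Pr\bigl( |f| \ge \kappa \, \|f\|_{L_2} \bigr) \ge \delta .
\]
% Under this condition, the original method gives an isomorphic lower
% bound on the quadratic empirical process: with high probability, for
% every f in F with \|f\|_{L_2} large enough,
\[
  \frac{1}{N} \sum_{i=1}^{N} f^2(X_i) \;\ge\; c(\kappa, \delta)\, \|f\|_{L_2}^2 .
\]
% The extension replaces the constant c(\kappa, \delta) by a factor close
% to 1 (almost isometry) and drops the uniform small-ball assumption.
```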
Manifold learning with bi-stochastic kernels
Title | Manifold learning with bi-stochastic kernels |
Authors | Nicholas F. Marshall, Ronald R. Coifman |
Abstract | In this paper we answer the following question: what is the infinitesimal generator of the diffusion process defined by a kernel that is normalized such that it is bi-stochastic with respect to a specified measure? More precisely, under the assumption that the data are sampled from a Riemannian manifold, we determine how the resulting infinitesimal generator depends on the potentially nonuniform distribution of the sample points and on the measure specified for the bi-stochastic normalization. In a special case, we demonstrate a connection to the heat kernel. We consider both the case where only a single data set is given and the case where a data set and a reference set are given. The spectral theory of the constructed operators is studied, and Nyström extension formulas for the gradients of the eigenfunctions are computed. Applications to discrete point sets and manifold learning are discussed. (A normalization sketch follows this entry.) |
Tasks | |
Published | 2017-11-17 |
URL | http://arxiv.org/abs/1711.06711v2 |
http://arxiv.org/pdf/1711.06711v2.pdf | |
PWC | https://paperswithcode.com/paper/manifold-learning-with-bi-stochastic-kernels |
Repo | |
Framework | |
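A minimal sketch of the kind of normalization the paper analyzes: symmetric Sinkhorn-type scaling of a Gaussian kernel so the result is (approximately) bi-stochastic with respect to the counting measure. The bandwidth, measure, and iteration count here are illustrative choices, not the paper's.

```python
import numpy as np

def bistochastic_kernel(X, eps=0.5, iters=200):
    """Find d such that diag(d) K diag(d) has (approximately) unit row sums,
    where K is a Gaussian kernel; symmetry then gives unit column sums too."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    d = np.ones(len(X))
    for _ in range(iters):
        d = np.sqrt(d / (K @ d))    # fixed point: d_i * (K d)_i = 1
    return d[:, None] * K * d[None, :]

X = np.random.default_rng(0).normal(size=(300, 2))
K_bs = bistochastic_kernel(X)
print(K_bs.sum(axis=1)[:3])   # rows sum to ~1
```

The eigendecomposition of `K_bs` is then the discrete object whose infinitesimal generator the paper characterizes.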
Autoencoder with recurrent neural networks for video forgery detection
Title | Autoencoder with recurrent neural networks for video forgery detection |
Authors | Dario D’Avino, Davide Cozzolino, Giovanni Poggi, Luisa Verdoliva |
Abstract | Video forgery detection has become an important issue in recent years, because modern editing software provides powerful and easy-to-use tools to manipulate videos. In this paper we propose to perform detection by means of deep learning, with an architecture based on autoencoders and recurrent neural networks. A training phase on a few pristine frames allows the autoencoder to learn an intrinsic model of the source. Forged material is then singled out as anomalous, since it does not fit the learned model and is encoded with a large reconstruction error. Recurrent networks, implemented with the long short-term memory model, are used to exploit temporal dependencies. Preliminary results on forged videos show the potential of this approach. (See the sketch after this entry.) |
Tasks | |
Published | 2017-08-29 |
URL | http://arxiv.org/abs/1708.08754v1 |
http://arxiv.org/pdf/1708.08754v1.pdf | |
PWC | https://paperswithcode.com/paper/autoencoder-with-recurrent-neural-networks |
Repo | |
Framework | |
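A minimal sketch of the detection idea only: fit an autoencoder to pristine material, then flag test material with large reconstruction error. The network shape and the feature vectors are stand-ins, and the recurrent (LSTM) part that models temporal dependencies is omitted.

```python
import torch
import torch.nn as nn

ae = nn.Sequential(                    # toy fully-connected autoencoder
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 16), nn.ReLU(),      # bottleneck
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 256),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
pristine = torch.randn(500, 256)       # stand-in for pristine-frame features

for _ in range(200):                   # learn an intrinsic model of the source
    opt.zero_grad()
    loss = ((ae(pristine) - pristine) ** 2).mean()
    loss.backward()
    opt.step()

test = torch.randn(20, 256)
err = ((ae(test) - test) ** 2).mean(dim=1)     # per-frame reconstruction error
flagged = err > err.median() + 3 * err.std()   # anomalous => likely forged
```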
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
Title | Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines |
Authors | Andrew Moore, Paul Rayson |
Abstract | This paper describes our participation in Task 5, track 2, of SemEval 2017: predicting the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising Support Vector Regression (SVR) and a Bidirectional Long Short-Term Memory (BLSTM) network. We found an improvement of 4-6% using the BLSTM model over the SVR and came fourth in the track. We report a number of different evaluations using a finance-specific word-embedding model and reflect on the effects of using different evaluation metrics. (An SVR sketch follows the entry.) |
Tasks | |
Published | 2017-05-01 |
URL | http://arxiv.org/abs/1705.00571v1 |
http://arxiv.org/pdf/1705.00571v1.pdf | |
PWC | https://paperswithcode.com/paper/lancaster-a-at-semeval-2017-task-5-evaluation |
Repo | |
Framework | |
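A sketch of an SVR baseline of the kind described, on toy data; the BLSTM and the finance-specific embeddings are not reproduced here, and a TF-IDF representation is my own stand-in for the features the authors used.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

headlines = ["Shares of AstraZeneca plunge on trial failure",
             "Barclays beats profit forecasts",
             "Tesco announces store closures"]
scores = [-0.7, 0.6, -0.3]   # gold sentiment in [-1, 1], per the task

# TF-IDF unigrams/bigrams feeding a support vector regressor
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), SVR(C=1.0))
model.fit(headlines, scores)
print(model.predict(["BP raises dividend"]))
```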
Entropic Determinants
Title | Entropic Determinants |
Authors | Diego Granziol, Stephen Roberts |
Abstract | The ability of many powerful machine learning algorithms to deal with large data sets without compromise is often hampered by computationally expensive linear algebra tasks, of which calculating the log-determinant is a canonical example. In this paper we demonstrate the optimality of maximum entropy methods for approximating such calculations. We prove the equivalence between mean-value constraints and sample expectations in the big-data limit, show that covariance matrix eigenvalue distributions can be completely defined by moment information, and show that reducing the self-entropy of a maximum entropy proposal distribution, achieved by adding more moments, reduces the KL divergence between the proposal and the true eigenvalue distribution. We empirically verify our results on a variety of SuiteSparse matrices and establish best practices. (A moment-estimation sketch follows this entry.) |
Tasks | |
Published | 2017-09-08 |
URL | http://arxiv.org/abs/1709.02702v1 |
http://arxiv.org/pdf/1709.02702v1.pdf | |
PWC | https://paperswithcode.com/paper/entropic-determinants |
Repo | |
Framework | |
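A sketch of the moment-information side of this pipeline: Hutchinson-style stochastic estimates of the spectral moments $\mathrm{tr}(A^k)/n$, which are the constraints a maximum-entropy model of the eigenvalue density would be fitted to. The maximum-entropy solver itself is not reproduced; `spectral_moments` and the probe count are my own choices.

```python
import numpy as np

def spectral_moments(A, order=4, probes=30, rng=None):
    """Estimate tr(A^k)/n for k = 1..order with Rademacher probe vectors;
    note log det A = n * E_p[log(lambda)] for the spectral density p, so a
    maximum-entropy fit to these moments yields a log-det approximation."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    moments = np.zeros(order)
    for _ in range(probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        v = z
        for k in range(order):
            v = A @ v                          # v = A^{k+1} z
            moments[k] += z @ v / (probes * n)
    return moments

B = np.random.default_rng(1).normal(size=(200, 200))
A = B @ B.T / 200 + np.eye(200)               # small SPD test matrix
print(spectral_moments(A))
print([np.trace(np.linalg.matrix_power(A, k)) / 200 for k in range(1, 5)])
```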
Basic protocols in quantum reinforcement learning with superconducting circuits
Title | Basic protocols in quantum reinforcement learning with superconducting circuits |
Authors | Lucas Lamata |
Abstract | Superconducting circuit technologies have recently achieved quantum protocols involving closed feedback loops. Quantum artificial intelligence and quantum machine learning are emerging fields inside quantum technologies which may enable quantum devices to acquire information from the outer world and improve themselves via a learning process. Here we propose the implementation of basic protocols in quantum reinforcement learning, with superconducting circuits employing feedback-loop control. We introduce diverse scenarios for proof-of-principle experiments with state-of-the-art superconducting circuit technologies and analyze their feasibility in presence of imperfections. The field of quantum artificial intelligence implemented with superconducting circuits paves the way for enhanced quantum control and quantum computation protocols. |
Tasks | Quantum Machine Learning |
Published | 2017-01-18 |
URL | http://arxiv.org/abs/1701.05131v3 |
http://arxiv.org/pdf/1701.05131v3.pdf | |
PWC | https://paperswithcode.com/paper/basic-protocols-in-quantum-reinforcement |
Repo | |
Framework | |
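A purely classical toy simulation of the measurement-feedback loop flavor of such protocols, under assumptions entirely my own (single qubit, reward probability equal to state overlap, multiplicative exploration-range update); it is not the paper's circuit-level protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

theta_env = 1.1           # unknown target angle encoded by the "environment"
theta, span = 0.0, np.pi  # agent's current angle and exploration range
for step in range(200):
    trial = theta + rng.uniform(-span, span)
    fidelity = np.cos((trial - theta_env) / 2) ** 2  # overlap of the two states
    if rng.random() < fidelity:        # measurement rewards with prob = fidelity
        theta, span = trial, span * 0.9     # reinforce and narrow the search
    else:
        span = min(span * 1.05, np.pi)      # penalize: widen exploration
print(theta, theta_env)                # theta should approach theta_env
```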
Multi-appearance Segmentation and Extended 0-1 Program for Dense Small Object Tracking
Title | Multi-appearance Segmentation and Extended 0-1 Program for Dense Small Object Tracking |
Authors | Longtao Chen, Jing Lou, Wei Zhu, Qingyuan Xia, Mingwu Ren |
Abstract | Aiming to address fast multi-object tracking of dense small objects against cluttered backgrounds, we review track-oriented multi-hypothesis tracking (TOMHT) with consideration of batch optimization. Employing an autocorrelation-based motion-score test and a staged hypothesis-merging approach, we build our homologous hypothesis generation and management method. A new one-to-many constraint is proposed and applied to handle track exclusions during complex occlusions. Besides, to achieve better results, we develop a multi-appearance segmentation for detection, which exploits tree-like topological information and realizes one threshold per object. Experimental results verify the strength of our methods, indicating the speed and performance advantages of our tracker. (An association sketch follows this entry.) |
Tasks | Multi-Object Tracking, Object Tracking |
Published | 2017-12-14 |
URL | http://arxiv.org/abs/1712.05116v1 |
http://arxiv.org/pdf/1712.05116v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-appearance-segmentation-and-extended-0 |
Repo | |
Framework | |
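The extended 0-1 program itself is not reproduced here; this sketch shows the basic one-to-one track-detection association step that such programs generalize (the paper's contribution includes relaxing it to one-to-many during occlusions), with a toy distance cost standing in for the motion score.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

tracks = np.array([[10.0, 12.0], [40.0, 41.0], [70.0, 69.0]])  # predicted positions
detections = np.array([[11.0, 12.5], [69.5, 70.0],
                       [41.0, 40.0], [90.0, 5.0]])              # new measurements

# cost[i, j] = distance from track i's prediction to detection j
cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)        # optimal 0-1 assignment
for t, d in zip(rows, cols):
    if cost[t, d] < 5.0:                        # gate: reject implausible matches
        print(f"track {t} -> detection {d}")
```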
Causal Regularization
Title | Causal Regularization |
Authors | Mohammad Taha Bahadori, Krzysztof Chalupka, Edward Choi, Robert Chen, Walter F. Stewart, Jimeng Sun |
Abstract | In application domains such as healthcare, we want accurate predictive models that are also causally interpretable. In pursuit of such models, we propose a causal regularizer to steer predictive models towards causally interpretable solutions, and we theoretically study its properties. In a large-scale analysis of Electronic Health Records (EHR), our causally regularized model outperforms its L1-regularized counterpart in causal accuracy and is competitive in predictive performance. We perform non-linear causality analysis by causally regularizing a special neural network architecture. We also show that the proposed causal regularizer can be used together with neural representation learning algorithms to yield up to a 20% improvement over a multilayer perceptron in detecting multivariate causation, a situation common in healthcare, where many causal factors must occur simultaneously to have an effect on the target variable. (A sketch of the regularizer follows the entry.) |
Tasks | Representation Learning |
Published | 2017-02-08 |
URL | http://arxiv.org/abs/1702.02604v2 |
http://arxiv.org/pdf/1702.02604v2.pdf | |
PWC | https://paperswithcode.com/paper/causal-regularization |
Repo | |
Framework | |
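A sketch of the causal-regularization idea: an L1-style penalty whose per-feature weights are small for features judged causal by some upstream causality score and large otherwise, steering the fit toward causal features. The weighting form, scores, and data here are my own assumptions, not the paper's exact regularizer.

```python
import torch

n, d = 500, 20
X = torch.randn(n, d)
y = (X[:, 0] - X[:, 1] + 0.1 * torch.randn(n) > 0).float()
causal_score = torch.rand(d)     # toy scores: 1 = likely causal, 0 = not

w = torch.zeros(d, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.05)
lam = 0.05
for _ in range(300):
    opt.zero_grad()
    logits = X @ w
    pred_loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    # penalty is cheap for causal features, expensive for non-causal ones
    penalty = ((1.0 - causal_score) * w.abs()).sum()
    (pred_loss + lam * penalty).backward()
    opt.step()
```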
Double-sided probing by map of Asplund’s distances using Logarithmic Image Processing in the framework of Mathematical Morphology
Title | Double-sided probing by map of Asplund’s distances using Logarithmic Image Processing in the framework of Mathematical Morphology |
Authors | Guillaume Noyel, Michel Jourlin |
Abstract | We establish the link between Mathematical Morphology and the map of Asplund’s distances between a probe and a grey-scale function, using the Logarithmic Image Processing (LIP) scalar multiplication. We demonstrate that the map is the logarithm of the ratio between a dilation and an erosion of the function by a structuring function: the probe. The dilations and erosions are mappings from the lattice of images into the lattice of positive functions. Using a flat structuring element, the expression of the map of Asplund’s distances can be simplified to a dilation and an erosion of the image; these mappings stay in the lattice of images. We illustrate our approach with an example of pattern matching with a non-flat structuring function. (A flat-structuring-element sketch follows this entry.) |
Tasks | |
Published | 2017-01-27 |
URL | http://arxiv.org/abs/1701.08092v5 |
http://arxiv.org/pdf/1701.08092v5.pdf | |
PWC | https://paperswithcode.com/paper/double-sided-probing-by-map-of-asplunds |
Repo | |
Framework | |
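A sketch of the simplified flat-structuring-element case described above: the map reduces to the log-ratio of a flat dilation (local maximum) and a flat erosion (local minimum) of the image. The window size, epsilon, and LIP grey-scale handling here are illustrative simplifications.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def asplund_map_flat(f, size=5, eps=1e-6):
    """Map of Asplund's distances with a flat structuring element:
    log of (flat dilation / flat erosion) over a sliding window,
    assuming a positive-valued image."""
    dil = maximum_filter(f, size=size)   # flat dilation: local max
    ero = minimum_filter(f, size=size)   # flat erosion: local min
    return np.log((dil + eps) / (ero + eps))

img = np.random.default_rng(0).uniform(1.0, 255.0, size=(64, 64))
amap = asplund_map_flat(img)   # small values = locally homogeneous regions
```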
Weight Sharing is Crucial to Succesful Optimization
Title | Weight Sharing is Crucial to Succesful Optimization |
Authors | Shai Shalev-Shwartz, Ohad Shamir, Shaked Shammah |
Abstract | Exploiting the great expressive power of deep neural network architectures relies on the ability to train them. While current theoretical work mostly provides results showing the hardness of this task, empirical evidence usually differs from this line, with success stories in abundance. A strong position among empirically successful architectures is held by networks with extensive weight sharing, through either convolutional or recurrent layers. Additionally, characterizing the specific aspects of different tasks that make them “harder” or “easier” is an interesting direction explored both theoretically and empirically. We consider a family of ConvNet architectures and prove that weight sharing can be crucial from an optimization point of view. We explore different notions of the frequency of the target function, proving that the target function must have some low-frequency components. This necessary condition is not sufficient: only with weight sharing can it be exploited, thus theoretically separating architectures that use it from those that do not. Our theoretical results are aligned with empirical experiments in an even more general setting, suggesting the viability of examining the role played by these aspects across broader families of tasks. (A parameter-count illustration follows the entry.) |
Tasks | |
Published | 2017-06-02 |
URL | http://arxiv.org/abs/1706.00687v1 |
http://arxiv.org/pdf/1706.00687v1.pdf | |
PWC | https://paperswithcode.com/paper/weight-sharing-is-crucial-to-succesful |
Repo | |
Framework | |
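A back-of-the-envelope illustration of one reason weight sharing changes the optimization problem: parameter counts for a shared 1-D convolution versus its unshared ("locally connected") counterpart on the same input. The sizes are arbitrary, and this counting argument is an intuition aid, not the paper's separation proof.

```python
# input length, filter width, number of output positions (valid convolution)
n_in, kernel = 1024, 9
n_pos = n_in - kernel + 1

shared_params = kernel              # one filter reused at every position
unshared_params = kernel * n_pos    # a separate filter per position

print(shared_params, unshared_params)   # 9 vs 9144: the shared model searches
# a vastly smaller space, which the optimization analysis can exploit
```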
Stochastic Sequential Neural Networks with Structured Inference
Title | Stochastic Sequential Neural Networks with Structured Inference |
Authors | Hao Liu, Haoli Bai, Lirong He, Zenglin Xu |
Abstract | Unsupervised structure learning in high-dimensional time series data has attracted a lot of research interest. For example, segmenting and labelling high-dimensional time series can be helpful in behavior understanding and medical diagnosis. Recent advances in generative sequential modeling have suggested combining recurrent neural networks with state space models (e.g., Hidden Markov Models). This combination can model not only the long-term dependencies in sequential data, but also the uncertainty included in the hidden states. Inheriting these advantages of stochastic neural sequential models, we propose a structured and stochastic sequential neural network, which models both the long-term dependencies, via recurrent neural networks, and the uncertainty in the segmentation and labels, via discrete random variables. For accurate and efficient inference, we present a bi-directional inference network, reparameterizing the categorical segmentation and labels with the recently proposed Gumbel-Softmax approximation, and resort to Stochastic Gradient Variational Bayes. We evaluate the proposed model on a number of tasks, including speech modeling, automatic segmentation and labeling in behavior understanding, and sequential multi-object recognition. Experimental results demonstrate that our proposed model achieves significant improvement over state-of-the-art methods. (A Gumbel-Softmax sketch follows this entry.) |
Tasks | Medical Diagnosis, Time Series |
Published | 2017-05-24 |
URL | http://arxiv.org/abs/1705.08695v1 |
http://arxiv.org/pdf/1705.08695v1.pdf | |
PWC | https://paperswithcode.com/paper/stochastic-sequential-neural-networks-with |
Repo | |
Framework | |
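A minimal sketch of the Gumbel-Softmax reparameterization the inference network relies on: perturb the categorical logits with Gumbel(0,1) noise and relax the argmax into a temperature-controlled softmax so gradients can flow to the logits. Shapes and temperature are illustrative.

```python
import torch

def gumbel_softmax_sample(logits, tau=0.5):
    """Differentiable relaxed sample from a categorical distribution:
    softmax((logits + Gumbel noise) / tau); lower tau => closer to one-hot."""
    u = torch.rand_like(logits).clamp_min(1e-9)
    g = -torch.log(-torch.log(u))            # Gumbel(0,1) samples
    return torch.softmax((logits + g) / tau, dim=-1)

logits = torch.randn(4, 10, requires_grad=True)  # e.g., 10 candidate labels/step
y = gumbel_softmax_sample(logits)                # near-one-hot, yet differentiable
y.sum().backward()                               # gradients reach the logits
```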