Paper Group ANR 533
Robust Image Registration via Empirical Mode Decomposition
Title | Robust Image Registration via Empirical Mode Decomposition |
Authors | Reza Abbasi-Asl, Aboozar Ghaffari, Emad Fatemizadeh |
Abstract | Spatially varying intensity noise is a common source of distortion in images. Bias field noise is one example of such distortion that is often present in the magnetic resonance (MR) images. In this paper, we first show that empirical mode decomposition (EMD) can considerably reduce the bias field noise in the MR images. Then, we propose two hierarchical multi-resolution EMD-based algorithms for robust registration of images in the presence of spatially varying noise. One algorithm (LR-EMD) is based on registering EMD feature-maps of both floating and reference images in various resolution levels. In the second algorithm (AFR-EMD), we first extract an average feature-map based on EMD from both floating and reference images. Then, we use a simple hierarchical multi-resolution algorithm based on downsampling to register the average feature-maps. Both algorithms achieve lower error rate and higher convergence percentage compared to the intensity-based hierarchical registration. Specifically, using mutual information as the similarity measure, AFR-EMD achieves 42% lower error rate in intensity and 52% lower error rate in transformation compared to intensity-based hierarchical registration. For LR-EMD, the error rate is 32% lower for the intensity and 41% lower for the transformation. |
Tasks | Image Registration |
Published | 2017-11-12 |
URL | http://arxiv.org/abs/1711.04247v1 |
PDF | http://arxiv.org/pdf/1711.04247v1.pdf |
PWC | https://paperswithcode.com/paper/robust-image-registration-via-empirical-mode |
Repo | |
Framework | |
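The abstract describes a two-stage pipeline: extract EMD feature maps from the floating and reference images, then register them hierarchically. The following NumPy/SciPy sketch illustrates only that structure; the difference-of-Gaussians `band_pass_feature` stand-in, the translation-only search, and all parameter values are assumptions rather than the authors' AFR-EMD implementation.

```python
# Minimal sketch of the AFR-EMD idea (not the authors' implementation):
# 1) build an "average feature map" from band-pass residuals standing in for EMD IMFs,
# 2) register the feature maps coarse-to-fine with a translation-only search.
import numpy as np
from scipy import ndimage

def band_pass_feature(img, sigmas=(1, 2, 4)):
    """Stand-in for an EMD feature map: mean of difference-of-Gaussian bands.
    A true 2-D EMD would extract intrinsic mode functions instead (assumption)."""
    bands = [ndimage.gaussian_filter(img, s1) - ndimage.gaussian_filter(img, s2)
             for s1, s2 in zip(sigmas[:-1], sigmas[1:])]
    return np.mean(bands, axis=0)

def register_translation(ref, flo, max_shift):
    """Exhaustive translation search minimizing the mean squared difference."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(flo, dy, axis=0), dx, axis=1)
            err = np.mean((ref - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def afr_emd_register(reference, floating, levels=3):
    """Coarse-to-fine registration of the averaged feature maps."""
    ref_f, flo_f = band_pass_feature(reference), band_pass_feature(floating)
    dy, dx = 0, 0
    for level in reversed(range(levels)):          # coarsest level first
        scale = 2 ** level
        r = ndimage.zoom(ref_f, 1.0 / scale, order=1)
        f = ndimage.zoom(flo_f, 1.0 / scale, order=1)
        f = np.roll(np.roll(f, dy // scale, axis=0), dx // scale, axis=1)
        sdy, sdx = register_translation(r, f, max_shift=4)
        dy, dx = dy + sdy * scale, dx + sdx * scale
    return dy, dx
```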
Independence, Conditionality and Structure of Dempster-Shafer Belief Functions
Title | Independence, Conditionality and Structure of Dempster-Shafer Belief Functions |
Authors | Mieczysław A. Kłopotek |
Abstract | Several approaches to structuring (factorization, decomposition) of Dempster-Shafer joint belief functions from the literature are reviewed, with special emphasis on their capability to capture independence, from the point of view of the claim that belief functions generalize Bayes' notion of probability. It is demonstrated that Zhu and Lee's {Zhu:93} logical networks and Smets' {Smets:93} directed acyclic graphs are unable to capture the statistical dependence/independence of Bayesian networks {Pearl:88}. On the other hand, though Shenoy and Shafer's hypergraphs can explicitly represent the Bayesian network factorization of Bayesian belief functions, they disclaim any need for representing independence of variables in belief functions. Cano et al. {Cano:93} reject the hypergraph representation of Shenoy and Shafer precisely on the grounds that it lacks a representation of variable independence, yet in their framework some belief functions that are factorizable in the Shenoy/Shafer framework cannot be factored. The approach in {Klopotek:93f}, on the other hand, combines the merits of both the Cano et al. and the Shenoy/Shafer approaches: no factorization simpler than that of {Klopotek:93f} exists in the Shenoy/Shafer framework, while all independences among variables captured in the Cano et al. framework, and many more, are captured in the {Klopotek:93f} approach. |
Tasks | |
Published | 2017-07-12 |
URL | http://arxiv.org/abs/1707.03872v1 |
PDF | http://arxiv.org/pdf/1707.03872v1.pdf |
PWC | https://paperswithcode.com/paper/independence-conditionality-and-structure-of |
Repo | |
Framework | |
A Type II Fuzzy Entropy Based Multi-Level Image Thresholding Using Adaptive Plant Propagation Algorithm
Title | A Type II Fuzzy Entropy Based Multi-Level Image Thresholding Using Adaptive Plant Propagation Algorithm |
Authors | Sayan Nag |
Abstract | One of the most straightforward, direct, and efficient approaches to image segmentation is image thresholding. Multi-level image thresholding is essential in many image processing and pattern recognition based real-time applications, since it can effectively and efficiently classify pixels into groups denoting multiple regions of an image. Thresholding-based image segmentation using fuzzy entropy combined with intelligent optimization approaches is a commonly used direct method to identify thresholds that segment an image accurately. In this paper, a novel approach to multi-level image thresholding is proposed using Type II fuzzy sets combined with the Adaptive Plant Propagation Algorithm (APPA). Obtaining the optimal thresholds for an image by maximizing the entropy is extremely tedious and time consuming as the number of thresholds increases. Hence, APPA, a memetic algorithm based on plant intelligence, is used for fast and efficient selection of optimal thresholds. This choice is justified by comparing the accuracy of the outcomes and the computational time against modern state-of-the-art algorithms such as Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Genetic Algorithm (GA). |
Tasks | Semantic Segmentation |
Published | 2017-08-23 |
URL | http://arxiv.org/abs/1708.09461v1 |
PDF | http://arxiv.org/pdf/1708.09461v1.pdf |
PWC | https://paperswithcode.com/paper/a-type-ii-fuzzy-entropy-based-multi-level |
Repo | |
Framework | |
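A rough Python sketch of the optimization loop the abstract describes: candidate thresholds are scored by the entropy of the regions they induce in the histogram and refined by a propagate-and-select search. Ordinary Shannon entropy stands in for the paper's Type II fuzzy entropy, and the loop is only loosely modeled on APPA; both are assumptions for illustration.

```python
# Sketch of multi-level thresholding by entropy maximization (a stand-in for the
# Type II fuzzy entropy + APPA combination described in the paper).
import numpy as np

def region_entropy(hist, thresholds):
    """Sum of Shannon entropies of the histogram regions cut at `thresholds`."""
    bounds = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        p = hist[lo:hi].astype(float)
        if p.sum() == 0:
            continue
        p = p / p.sum()
        p = p[p > 0]
        total += -(p * np.log(p)).sum()
    return total

def propagate_and_select(hist, n_thresholds, pop=20, iters=100, seed=0):
    """Plant-propagation-style search: fitter solutions spawn more, shorter runners."""
    rng = np.random.default_rng(seed)
    population = rng.integers(1, len(hist) - 1, size=(pop, n_thresholds))
    for _ in range(iters):
        fitness = np.array([region_entropy(hist, t) for t in population])
        population = population[np.argsort(-fitness)]
        children = []
        for rank, parent in enumerate(population[: pop // 2]):
            n_kids = max(1, pop // 2 - rank)       # fitter -> more runners
            spread = 1 + rank                      # fitter -> shorter runners
            for _ in range(n_kids):
                child = parent + rng.integers(-spread, spread + 1, size=n_thresholds)
                children.append(np.clip(child, 1, len(hist) - 2))
        population = np.vstack([population[: pop // 2]] + children)[:pop]
    best = max(population, key=lambda t: region_entropy(hist, t))
    return sorted(best.tolist())

# Usage: hist = np.bincount(image.ravel(), minlength=256); thresholds = propagate_and_select(hist, 3)
```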
Stochastic Bandit Models for Delayed Conversions
Title | Stochastic Bandit Models for Delayed Conversions |
Authors | Claire Vernade, Olivier Cappé, Vianney Perchet |
Abstract | Online advertising and product recommendation are important domains of applications for multi-armed bandit methods. In these fields, the reward that is immediately available is most often only a proxy for the actual outcome of interest, which we refer to as a conversion. For instance, in web advertising, clicks can be observed within a few seconds after an ad display, but the corresponding sale, if any, will take hours if not days to happen. This paper proposes and investigates a new stochastic multi-armed bandit model in the framework proposed by Chapelle (2014), based on empirical studies in the field of web advertising, in which each action may trigger a future reward that will then happen with a stochastic delay. We assume that the probability of conversion associated with each action is unknown while the distribution of the conversion delay is known, distinguishing between the (idealized) case where the conversion events may be observed whatever their delay and the more realistic setting in which late conversions are censored. We provide performance lower bounds as well as two simple but efficient algorithms based on the UCB and KLUCB frameworks. The latter algorithm, which is preferable when conversion rates are low, is based on a Poissonization argument, of independent interest in other settings where aggregation of Bernoulli observations with different success probabilities is required. |
Tasks | Product Recommendation |
Published | 2017-06-28 |
URL | http://arxiv.org/abs/1706.09186v3 |
PDF | http://arxiv.org/pdf/1706.09186v3.pdf |
PWC | https://paperswithcode.com/paper/stochastic-bandit-models-for-delayed |
Repo | |
Framework | |
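A toy simulation of the censored delayed-conversion setting with a delay-corrected UCB-style index: each pull of an arm may convert with an unknown probability, but the conversion only becomes visible after a random delay drawn from a known distribution. The geometric delay, the index constants, and the estimator below are illustrative assumptions, not the paper's exact UCB/KL-UCB algorithms.

```python
# Toy simulation of the censored delayed-conversion bandit: each pull may convert
# with unknown probability theta_a, and a conversion is only observed after a
# random delay drawn from a *known* distribution (geometric here, an assumption).
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.05, 0.08, 0.12])      # unknown conversion probabilities
delay_p = 0.2                             # known geometric delay parameter
horizon, n_arms = 5000, len(theta)

pull_times = [[] for _ in range(n_arms)]  # when each arm was pulled
pending = []                              # (arm, time the conversion becomes visible)
observed = np.zeros(n_arms)               # conversions seen so far

def delay_cdf(d):
    """P(delay <= d) for the known geometric delay distribution."""
    return 1.0 - (1.0 - delay_p) ** np.maximum(d, 0)

for t in range(1, horizon + 1):
    # reveal conversions whose delay has elapsed
    ready = [a for a, tv in pending if tv <= t]
    if ready:
        observed += np.bincount(ready, minlength=n_arms)
    pending = [(a, tv) for a, tv in pending if tv > t]

    # delay-corrected index: recent pulls contribute less to the effective sample size
    index = np.empty(n_arms)
    for a in range(n_arms):
        eff = sum(delay_cdf(t - s) for s in pull_times[a])
        index[a] = np.inf if eff == 0 else observed[a] / eff + np.sqrt(2 * np.log(t) / eff)
    arm = int(np.argmax(index))

    pull_times[arm].append(t)
    if rng.random() < theta[arm]:                              # latent conversion
        pending.append((arm, t + rng.geometric(delay_p)))      # visible only later

print("pulls per arm:", [len(p) for p in pull_times])
```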
Real-time Distracted Driver Posture Classification
Title | Real-time Distracted Driver Posture Classification |
Authors | Yehya Abouelnaga, Hesham M. Eraqi, Mohamed N. Moustafa |
Abstract | In this paper, we present a new dataset for “distracted driver” posture estimation. In addition, we propose a novel system that achieves 95.98% driving posture estimation classification accuracy. The system consists of a genetically-weighted ensemble of Convolutional Neural Networks (CNNs). We show that weighting an ensemble of classifiers using a genetic algorithm yields better classification confidence. We also study the effect of different visual elements (i.e., hands and face) in distraction detection and classification by means of face and hand localizations. Finally, we present a thinned version of our ensemble that could achieve a 94.29% classification accuracy and operate in a real-time environment. |
Tasks | |
Published | 2017-06-28 |
URL | http://arxiv.org/abs/1706.09498v3 |
PDF | http://arxiv.org/pdf/1706.09498v3.pdf |
PWC | https://paperswithcode.com/paper/real-time-distracted-driver-posture |
Repo | |
Framework | |
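A minimal sketch of the genetically weighted ensemble idea: given each model's class probabilities on a validation set, search for fusion weights that maximize accuracy. The bare-bones mutation/selection loop and the hypothetical `cnn*_val_probs` arrays are assumptions; the paper's system uses trained CNNs and a full genetic algorithm.

```python
# Sketch of a genetically weighted ensemble: search for per-model weights that
# maximize validation accuracy of the weighted average of class probabilities.
import numpy as np

def ensemble_accuracy(weights, probs, labels):
    """probs: (n_models, n_samples, n_classes) predicted probabilities."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = np.tensordot(w, probs, axes=1)           # (n_samples, n_classes)
    return float((fused.argmax(axis=1) == labels).mean())

def genetic_weights(probs, labels, pop=30, gens=50, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.uniform(0.0, 1.0, size=(pop, probs.shape[0]))
    for _ in range(gens):
        fitness = np.array([ensemble_accuracy(w, probs, labels) for w in population])
        parents = population[np.argsort(-fitness)[: pop // 2]]          # selection
        children = parents + rng.normal(0.0, 0.1, size=parents.shape)   # mutation
        population = np.clip(np.vstack([parents, children]), 1e-6, None)
    fitness = np.array([ensemble_accuracy(w, probs, labels) for w in population])
    best = population[int(np.argmax(fitness))]
    return best / best.sum()

# Usage with three hypothetical CNNs' validation outputs:
# probs = np.stack([cnn1_val_probs, cnn2_val_probs, cnn3_val_probs])
# weights = genetic_weights(probs, val_labels)
```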
Incremental Maintenance Of Association Rules Under Support Threshold Change
Title | Incremental Maintenance Of Association Rules Under Support Threshold Change |
Authors | Mohamed Anis Bach Tobji, Mohamed Salah Gouider |
Abstract | The maintenance of association rules is an interesting problem. Several incremental maintenance algorithms have been proposed since the work of (Cheung et al., 1996). The majority of these algorithms maintain rule bases assuming that the support threshold does not change. In this paper, we present an incremental maintenance algorithm that handles support threshold change. This solution allows the user to maintain a rule base under any support threshold. |
Tasks | |
Published | 2017-01-27 |
URL | http://arxiv.org/abs/1701.08191v1 |
PDF | http://arxiv.org/pdf/1701.08191v1.pdf |
PWC | https://paperswithcode.com/paper/incremental-maintenance-of-association-rules |
Repo | |
Framework | |
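A toy illustration of the maintenance problem the abstract addresses: once itemset counts are cached, a change of the support threshold only requires re-filtering the cache rather than rescanning the database. The Apriori-style exhaustive counting and the two-item limit are simplifying assumptions, not the paper's algorithm, which also handles incremental database updates.

```python
# Toy illustration: support counts are cached once, and a threshold change only
# re-filters the cache instead of rescanning the transaction database.
from itertools import combinations
from collections import Counter

def count_itemsets(transactions, max_size=2):
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for size in range(1, max_size + 1):
            counts.update(combinations(items, size))
    return counts

def frequent(counts, n_transactions, min_support):
    return {iset: c for iset, c in counts.items() if c / n_transactions >= min_support}

transactions = [("a", "b", "c"), ("a", "b"), ("a", "c"), ("b", "c"), ("a",)]
cache = count_itemsets(transactions)

print(frequent(cache, len(transactions), min_support=0.6))   # strict threshold
print(frequent(cache, len(transactions), min_support=0.4))   # threshold lowered: no rescan needed
```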
Learning Local Feature Aggregation Functions with Backpropagation
Title | Learning Local Feature Aggregation Functions with Backpropagation |
Authors | Angelos Katharopoulos, Despoina Paschalidou, Christos Diou, Anastasios Delopoulos |
Abstract | This paper introduces a family of local feature aggregation functions and a novel method to estimate their parameters, such that they generate optimal representations for classification (or any task that can be expressed as a cost function minimization problem). To achieve that, we compose the local feature aggregation function with the classifier cost function and we backpropagate the gradient of this cost function in order to update the local feature aggregation function parameters. Experiments on synthetic datasets indicate that our method discovers parameters that model the class-relevant information in addition to the local feature space. Further experiments on a variety of motion and visual descriptors, both on image and video datasets, show that our method outperforms other state-of-the-art local feature aggregation functions, such as Bag of Words, Fisher Vectors and VLAD, by a large margin. |
Tasks | |
Published | 2017-06-26 |
URL | http://arxiv.org/abs/1706.08580v1 |
PDF | http://arxiv.org/pdf/1706.08580v1.pdf |
PWC | https://paperswithcode.com/paper/learning-local-feature-aggregation-functions |
Repo | |
Framework | |
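A PyTorch sketch of the core idea: a parametric local-feature aggregation layer is composed with a classifier and trained end to end, so the gradient of the classification loss also updates the aggregation parameters. The soft-assignment (VLAD-like) pooling used here is an illustrative stand-in for the paper's family of aggregation functions, and all sizes are arbitrary.

```python
# PyTorch sketch: a learnable aggregation of local descriptors composed with a
# classifier and trained end to end (soft-assignment pooling is an illustrative
# stand-in for the aggregation family proposed in the paper).
import torch
import torch.nn as nn

class SoftAssignmentPooling(nn.Module):
    """Aggregate a set of local descriptors into a fixed-length vector."""
    def __init__(self, dim, n_clusters):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_clusters, dim) * 0.1)

    def forward(self, x):                                     # x: (batch, n_local, dim)
        residuals = x.unsqueeze(2) - self.centers             # (batch, n_local, n_clusters, dim)
        assign = torch.softmax(-(residuals ** 2).sum(-1), dim=-1)
        agg = (assign.unsqueeze(-1) * residuals).sum(dim=1)   # (batch, n_clusters, dim)
        return agg.flatten(1)                                 # (batch, n_clusters * dim)

dim, n_clusters, n_classes = 16, 8, 5
model = nn.Sequential(SoftAssignmentPooling(dim, n_clusters),
                      nn.Linear(n_clusters * dim, n_classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 50, dim)                   # 32 samples, 50 local descriptors each
y = torch.randint(0, n_classes, (32,))
for _ in range(100):                           # backprop updates the aggregation parameters too
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```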
SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms
Title | SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms |
Authors | Yifei Jin, Lingxiao Huang, Jian Li |
Abstract | We study two important SVM variants: hard-margin SVM (for linearly separable cases) and $\nu$-SVM (for linearly non-separable cases). We propose new algorithms from the perspective of saddle point optimization. Our algorithms achieve $(1-\epsilon)$-approximations with running time $\tilde{O}(nd+n\sqrt{d / \epsilon})$ for both variants, where $n$ is the number of points and $d$ is the dimensionality. To the best of our knowledge, the current best algorithm for $\nu$-SVM is based on a quadratic programming approach, which requires $\Omega(n^2 d)$ time in the worst case~\cite{joachims1998making,platt199912}. In this paper, we provide the first nearly linear time algorithm for $\nu$-SVM. The current best algorithm for hard-margin SVM, the Gilbert algorithm~\cite{gartner2009coresets}, requires $O(nd / \epsilon )$ time. Our algorithm improves the running time by a factor of $\sqrt{d}/\sqrt{\epsilon}$. Moreover, our algorithms can be implemented naturally in distributed settings. We prove that our algorithms require $\tilde{O}(k(d +\sqrt{d/\epsilon}))$ communication cost, where $k$ is the number of clients, which almost matches the theoretical lower bound. Numerical experiments support our theory and show that our algorithms converge faster on high-dimensional, large, and dense data sets, as compared to previous methods. |
Tasks | |
Published | 2017-05-20 |
URL | http://arxiv.org/abs/1705.07252v4 |
PDF | http://arxiv.org/pdf/1705.07252v4.pdf |
PWC | https://paperswithcode.com/paper/svm-via-saddle-point-optimization-new-bounds |
Repo | |
Framework | |
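A small sketch of the saddle-point view of hard-margin SVM that motivates the paper: the margin objective is written as a max-min problem over the weight vector and a distribution on the training points, solved here by alternating multiplicative-weights and projected-gradient updates. This is an illustrative scheme under those assumptions, not the authors' approximation algorithm or its distributed variant.

```python
# Sketch of the saddle-point view of hard-margin SVM:
#   max_{||w|| <= 1}  min_{p in simplex}  sum_i p_i * y_i * <w, x_i>
# solved by alternating multiplicative-weights updates on p and projected
# gradient steps on w (an illustrative scheme, not the paper's algorithm).
import numpy as np

def saddle_point_svm(X, y, iters=2000, eta_w=0.1, eta_p=0.1):
    n, d = X.shape
    w, p, w_avg = np.zeros(d), np.full(n, 1.0 / n), np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)
        p *= np.exp(-eta_p * margins)        # weight concentrates on small-margin points
        p /= p.sum()
        w += eta_w * (X.T @ (p * y))         # increase the weighted margin
        norm = np.linalg.norm(w)
        if norm > 1.0:                       # project back onto the unit ball
            w /= norm
        w_avg += w
    return w_avg / iters

# Toy separable data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.concatenate([np.ones(50), -np.ones(50)])
w = saddle_point_svm(X, y)
print("training accuracy:", (np.sign(X @ w) == y).mean())
```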
DNA Steganalysis Using Deep Recurrent Neural Networks
Title | DNA Steganalysis Using Deep Recurrent Neural Networks |
Authors | Ho Bae, Byunghan Lee, Sunyoung Kwon, Sungroh Yoon |
Abstract | Recent advances in next-generation sequencing technologies have facilitated the use of deoxyribonucleic acid (DNA) as a novel covert channel in steganography. Various methods exist in other domains to detect hidden messages in conventional covert channels; however, they have not been applied to DNA steganography. The most common current detection approaches, namely frequency analysis-based methods, often overlook important signals when directly applied to DNA steganography because those methods depend on the distribution of the number of sequence characters. To address this limitation, we propose a general sequence learning-based DNA steganalysis framework. The proposed approach learns the intrinsic distribution of coding and non-coding sequences and detects hidden messages by exploiting distribution variations after hiding these messages. Using deep recurrent neural networks (RNNs), our framework identifies the distribution variations by using the classification score to predict whether a sequence is a coding or non-coding sequence. We compare our proposed method to various existing methods and biological sequence analysis methods implemented on top of our framework. According to our experimental results, our approach delivers robust detection performance compared to other tools. |
Tasks | |
Published | 2017-04-27 |
URL | http://arxiv.org/abs/1704.08443v3 |
PDF | http://arxiv.org/pdf/1704.08443v3.pdf |
PWC | https://paperswithcode.com/paper/dna-steganalysis-using-deep-recurrent-neural |
Repo | |
Framework | |
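A compact PyTorch sketch of the classification backbone described in the abstract: one-hot encoded DNA sequences are fed to an LSTM that scores coding vs. non-coding, and that score is the signal a steganalysis step would compare before and after message embedding. The encoding, architecture, and hyperparameters are assumptions.

```python
# PyTorch sketch of a sequence classifier over DNA (coding vs. non-coding);
# the steganalysis step compares such classification scores before and after
# message embedding (architecture details here are assumptions).
import torch
import torch.nn as nn

NUCLEOTIDES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    x = torch.zeros(len(seq), 4)
    for i, base in enumerate(seq):
        x[i, NUCLEOTIDES[base]] = 1.0
    return x

class DnaClassifier(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=4, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)     # coding vs. non-coding logits

    def forward(self, x):                    # x: (batch, seq_len, 4)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])              # logits from the final hidden state

model = DnaClassifier()
batch = torch.stack([one_hot("ACGTACGTAC"), one_hot("TTGACCGTTA")])
score = torch.softmax(model(batch), dim=-1)[:, 0]   # "coding" probability used as the signal
print(score.shape)
```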
Generalizing Distance Covariance to Measure and Test Multivariate Mutual Dependence
Title | Generalizing Distance Covariance to Measure and Test Multivariate Mutual Dependence |
Authors | Ze Jin, David S. Matteson |
Abstract | We propose three measures of mutual dependence between multiple random vectors. All the measures are zero if and only if the random vectors are mutually independent. The first measure generalizes distance covariance from pairwise dependence to mutual dependence, while the other two measures are sums of squared distance covariance. All the measures share similar properties and asymptotic distributions to distance covariance, and capture non-linear and non-monotone mutual dependence between the random vectors. Inspired by complete and incomplete V-statistics, we define the empirical measures and simplified empirical measures as a trade-off between the complexity and power when testing mutual independence. Implementation of the tests is demonstrated by both simulation results and real data examples. |
Tasks | |
Published | 2017-09-08 |
URL | http://arxiv.org/abs/1709.02532v5 |
PDF | http://arxiv.org/pdf/1709.02532v5.pdf |
PWC | https://paperswithcode.com/paper/generalizing-distance-covariance-to-measure |
Repo | |
Framework | |
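For reference, a NumPy sketch of the plain pairwise quantity the paper generalizes: the V-statistic estimate of squared distance covariance, computed by double-centering the two pairwise distance matrices and averaging their elementwise product. The paper's mutual-dependence measures extend or sum terms of this form over multiple random vectors.

```python
# Empirical (squared) distance covariance between two samples, the pairwise
# quantity that the paper's mutual-dependence measures generalize.
import numpy as np

def double_center(D):
    """Double-center a pairwise distance matrix."""
    return D - D.mean(axis=0) - D.mean(axis=1, keepdims=True) + D.mean()

def distance_covariance_sq(X, Y):
    """V-statistic estimate of squared distance covariance (X, Y: (n, d) arrays)."""
    A = double_center(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    B = double_center(np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1))
    return (A * B).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
print(distance_covariance_sq(x, x ** 2))                      # non-linear dependence: clearly > 0
print(distance_covariance_sq(x, rng.normal(size=(500, 1))))   # independence: close to 0
```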
Extrapolating Expected Accuracies for Large Multi-Class Problems
Title | Extrapolating Expected Accuracies for Large Multi-Class Problems |
Authors | Charles Zheng, Rakesh Achanta, Yuval Benjamini |
Abstract | The difficulty of multi-class classification generally increases with the number of classes. Using data from a subset of the classes, can we predict how well a classifier will scale with an increased number of classes? Under the assumptions that the classes are sampled identically and independently from a population, and that the classifier is based on independently learned scoring functions, we show that the expected accuracy when the classifier is trained on k classes is the (k-1)st moment of a certain distribution that can be estimated from data. We present an unbiased estimation method based on the theory, and demonstrate its application on a facial recognition example. |
Tasks | |
Published | 2017-12-27 |
URL | http://arxiv.org/abs/1712.09713v1 |
PDF | http://arxiv.org/pdf/1712.09713v1.pdf |
PWC | https://paperswithcode.com/paper/extrapolating-expected-accuracies-for-large |
Repo | |
Framework | |
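A Monte-Carlo sketch of the moment identity stated in the abstract: when wrong-class scores are i.i.d. and independent of the correct-class score, the accuracy with k classes equals the (k-1)st moment of U = F_imp(S), where S is the correct-class score and F_imp is the impostor-score CDF. The Gaussian score model below is purely an illustrative assumption.

```python
# Monte-Carlo check of the moment identity behind the paper: with independent
# per-class scores, accuracy on k classes equals E[U^(k-1)], where U = F_imp(S)
# is the probability that the correct-class score beats one impostor score
# (the Gaussian score model is an assumption for illustration).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu_gen, mu_imp, n = 1.0, 0.0, 100_000

S = rng.normal(mu_gen, 1.0, n)             # correct-class scores
U = norm.cdf(S, loc=mu_imp, scale=1.0)     # U = F_imp(S)

for k in (2, 5, 20, 50):
    moment_estimate = np.mean(U ** (k - 1))                  # (k-1)st moment of U
    impostors = rng.normal(mu_imp, 1.0, (n, k - 1))
    simulated = np.mean(S > impostors.max(axis=1))           # direct argmax accuracy
    print(f"k={k:3d}  moment={moment_estimate:.4f}  simulated={simulated:.4f}")
```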
Generalized Concomitant Multi-Task Lasso for sparse multimodal regression
Title | Generalized Concomitant Multi-Task Lasso for sparse multimodal regression |
Authors | Mathurin Massias, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon |
Abstract | In high dimension, it is customary to consider Lasso-type estimators to enforce sparsity. For standard Lasso theory to hold, the regularization parameter should be proportional to the noise level, yet the latter is generally unknown in practice. A possible remedy is to consider estimators, such as the Concomitant/Scaled Lasso, which jointly optimize over the regression coefficients as well as over the noise level, making the choice of the regularization independent of the noise level. However, when data from different sources are pooled to increase sample size, or when dealing with multimodal datasets, noise levels typically differ and new dedicated estimators are needed. In this work we provide new statistical and computational solutions to deal with such heteroscedastic regression models, with an emphasis on functional brain imaging with combined magneto- and electroencephalographic (M/EEG) signals. Adopting the formulation of Concomitant Lasso-type estimators, we propose a jointly convex formulation to estimate both the regression coefficients and the (square root of the) noise covariance. When our framework is instantiated to de-correlated noise, it leads to an efficient algorithm whose computational cost is not higher than for the Lasso and Concomitant Lasso, while addressing more complex noise structures. Numerical experiments demonstrate that our estimator yields improved prediction and support identification while correctly estimating the noise (square root) covariance. Results on multimodal neuroimaging problems with M/EEG data are also reported. |
Tasks | EEG |
Published | 2017-05-27 |
URL | http://arxiv.org/abs/1705.09778v2 |
PDF | http://arxiv.org/pdf/1705.09778v2.pdf |
PWC | https://paperswithcode.com/paper/generalized-concomitant-multi-task-lasso-for |
Repo | |
Framework | |
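A small sketch of the single-task Concomitant/Scaled Lasso that the paper builds on: alternate between refitting the coefficients with a penalty proportional to the current noise estimate and re-estimating the noise level from the residuals. The homoscedastic, single-output setting and the use of sklearn's Lasso for the coefficient step are simplifications of the paper's multi-task, heteroscedastic estimator.

```python
# Sketch of the single-task Concomitant/Scaled Lasso that the paper extends:
# alternate (i) a Lasso fit whose penalty scales with the current noise estimate
# and (ii) a closed-form noise update from the residuals.
import numpy as np
from sklearn.linear_model import Lasso

def scaled_lasso(X, y, lam, n_iter=20):
    n = X.shape[0]
    sigma = np.std(y)                         # initial noise-level guess
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        model = Lasso(alpha=lam * sigma, fit_intercept=False, max_iter=10_000)
        beta = model.fit(X, y).coef_          # coefficient step
        sigma = np.linalg.norm(y - X @ beta) / np.sqrt(n)   # noise step
    return beta, sigma

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
true_beta = np.zeros(50)
true_beta[:5] = 2.0
y = X @ true_beta + 0.5 * rng.normal(size=200)
beta, sigma = scaled_lasso(X, y, lam=np.sqrt(2 * np.log(50) / 200))
print("estimated noise level:", round(float(sigma), 3))
```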
Towards Moral Autonomous Systems
Title | Towards Moral Autonomous Systems |
Authors | Vicky Charisi, Louise Dennis, Michael Fisher, Robert Lieck, Andreas Matthias, Marija Slavkovik, Janina Sombetzki, Alan F. T. Winfield, Roman Yampolskiy |
Abstract | Both the ethics of autonomous systems and the problems of their technical implementation have by now been studied in some detail. Less attention has been given to the areas in which these two separate concerns meet. This paper, written by both philosophers and engineers of autonomous systems, addresses a number of issues in machine ethics that are located precisely at the intersection between ethics and engineering. We first discuss the main challenges which, in our view, machine ethics poses to moral philosophy. We then consider different approaches towards the conceptual design of autonomous systems and their implications for the implementation of ethics in such systems. Then we examine problematic areas regarding the specification and verification of ethical behavior in autonomous systems, particularly with a view towards the requirements of future legislation. We discuss transparency and accountability issues that will be crucial for any future wide deployment of autonomous systems in society. Finally, we consider the often overlooked possibility of intentional misuse of AI systems and the possible dangers arising out of deliberately unethical design, implementation, and use of autonomous robots. |
Tasks | |
Published | 2017-03-14 |
URL | http://arxiv.org/abs/1703.04741v3 |
PDF | http://arxiv.org/pdf/1703.04741v3.pdf |
PWC | https://paperswithcode.com/paper/towards-moral-autonomous-systems |
Repo | |
Framework | |
word representation or word embedding in Persian text
Title | word representation or word embedding in Persian text |
Authors | Siamak Sarmady, Erfan Rahmani |
Abstract | Text processing is one of the sub-branches of natural language processing. Recently, the use of machine learning and neural network methods has been given greater consideration. For this reason, the representation of words has become very important. This article is about word representation, i.e., converting words into vectors, in Persian text. In this research, the GloVe, CBOW and skip-gram methods are updated to produce embedded vectors for Persian words. In order to train the neural networks, the Bijankhan corpus, Hamshahri corpus and UPEC corpus have been combined and used. Finally, we obtained vectors in all three models for 342,362 words. These vectors have many uses in Persian natural language processing. |
Tasks | |
Published | 2017-12-18 |
URL | http://arxiv.org/abs/1712.06674v1 |
PDF | http://arxiv.org/pdf/1712.06674v1.pdf |
PWC | https://paperswithcode.com/paper/word-representation-or-word-embedding-in |
Repo | |
Framework | |
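A minimal gensim (4.x) sketch for the skip-gram and CBOW parts of the pipeline the abstract describes; the corpus file path and the whitespace tokenization are placeholders (the paper combines the Bijankhan, Hamshahri and UPEC corpora), and GloVe training would require a separate toolkit.

```python
# Minimal gensim sketch: train skip-gram and CBOW vectors on a tokenized
# Persian corpus (file path and tokenization are placeholders).
from gensim.models import Word2Vec

def read_corpus(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = line.strip().split()      # placeholder tokenization
            if tokens:
                yield tokens

sentences = list(read_corpus("persian_corpus.txt"))   # hypothetical combined corpus file
skipgram = Word2Vec(sentences, vector_size=300, window=5, min_count=5, sg=1, workers=4)
cbow = Word2Vec(sentences, vector_size=300, window=5, min_count=5, sg=0, workers=4)

skipgram.save("persian_skipgram.model")
print(skipgram.wv.most_similar(skipgram.wv.index_to_key[0], topn=5))
```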
Raw Waveform-based Audio Classification Using Sample-level CNN Architectures
Title | Raw Waveform-based Audio Classification Using Sample-level CNN Architectures |
Authors | Jongpil Lee, Taejun Kim, Jiyoung Park, Juhan Nam |
Abstract | Music, speech, and acoustic scene sound are often handled separately in the audio domain because of their different signal characteristics. However, as the image domain grows rapidly through versatile image classification models, it is necessary to study extensible classification models in the audio domain as well. In this study, we approach this problem using two types of sample-level deep convolutional neural networks that take raw waveforms as input and use filters with small granularity. One is a basic model that consists of convolution and pooling layers. The other is an improved model that additionally has residual connections, squeeze-and-excitation modules and multi-level concatenation. We show that the sample-level models reach state-of-the-art performance levels for the three different categories of sound. Also, we visualize the filters along layers and compare the characteristics of learned filters. |
Tasks | Audio Classification, Image Classification |
Published | 2017-12-04 |
URL | http://arxiv.org/abs/1712.00866v1 |
PDF | http://arxiv.org/pdf/1712.00866v1.pdf |
PWC | https://paperswithcode.com/paper/raw-waveform-based-audio-classification-using |
Repo | |
Framework | |
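A PyTorch sketch of a sample-level 1-D CNN over raw waveforms in the spirit of the paper's basic model: small (size-3) filters with a stride/pool factor of 3, stacked until the time axis collapses. The channel widths, depth, and input length are assumptions, and the improved model's residual connections, squeeze-and-excitation modules, and multi-level concatenation are omitted.

```python
# PyTorch sketch of a sample-level 1-D CNN over raw waveforms (basic model only;
# widths, depth and input length are assumptions).
import torch
import torch.nn as nn

def sample_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm1d(out_ch),
        nn.ReLU(),
        nn.MaxPool1d(3),          # shrink the time axis by 3x per block
    )

class SampleLevelCNN(nn.Module):
    def __init__(self, n_classes=10, width=64, n_blocks=9):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv1d(1, width, kernel_size=3, stride=3),
                                  nn.BatchNorm1d(width), nn.ReLU())
        self.blocks = nn.Sequential(*[sample_block(width, width) for _ in range(n_blocks)])
        self.head = nn.Linear(width, n_classes)

    def forward(self, x):                       # x: (batch, 1, n_samples)
        h = self.blocks(self.stem(x))
        return self.head(h.mean(dim=-1))        # global pooling over remaining frames

model = SampleLevelCNN()
waveform = torch.randn(4, 1, 3 ** 10)           # ~59k samples, about 2.7 s at 22.05 kHz
print(model(waveform).shape)                    # -> torch.Size([4, 10])
```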