July 29, 2019

2958 words 14 mins read

Paper Group ANR 123

Disruptive Event Classification using PMU Data in Distribution Networks. Emotional Metaheuristics For in-situ Foraging Using Sensor Constrained Robot Swarms. A fast ILP-based Heuristic for the robust design of Body Wireless Sensor Networks. Morphology-based Entity and Relational Entity Extraction Framework for Arabic. Private Incremental Regression …

Disruptive Event Classification using PMU Data in Distribution Networks

Title Disruptive Event Classification using PMU Data in Distribution Networks
Authors Iman Niazazari, Hanif Livani
Abstract Proliferation of advanced metering devices with high sampling rates in distribution grids, e.g., micro-phasor measurement units (μPMUs), provides unprecedented potential for wide-area monitoring and diagnostic applications, e.g., situational awareness and health monitoring of distribution assets. Unexpected disruptive events interrupting the normal operation of assets in distribution grids can eventually lead to permanent failure with expensive replacement cost over time. Therefore, disruptive event classification provides useful information for preventive maintenance of the assets in distribution networks. Preventive maintenance provides a wide range of benefits in terms of time, avoiding unexpected outages, maintenance crew utilization, and equipment replacement cost. In this paper, a PMU-data-driven framework is proposed for the classification of disruptive events in distribution networks. Two disruptive events, i.e., malfunctioning capacitor bank switching and malfunctioning regulator on-load tap changer (OLTC) switching, are considered and distinguished from normal abrupt load changes in distribution grids. The performance of the proposed framework is verified using simulations of the events in the IEEE 13-bus distribution network. The event classification is formulated using two different algorithms: i) principal component analysis (PCA) together with a multi-class support vector machine (SVM), and ii) an autoencoder along with a softmax classifier. The results demonstrate the effectiveness of the proposed algorithms and satisfactory classification accuracies.
Tasks
Published 2017-03-20
URL http://arxiv.org/abs/1703.09800v1
PDF http://arxiv.org/pdf/1703.09800v1.pdf
PWC https://paperswithcode.com/paper/disruptive-event-classification-using-pmu
Repo
Framework
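
A minimal sketch of the first pipeline named in the abstract (PCA followed by a multi-class SVM), assuming event windows have already been extracted from the IEEE 13-bus simulations; the random arrays below are placeholders for those features, not the paper's data.

```python
# Hypothetical sketch: PCA for dimensionality reduction of uPMU event windows,
# followed by a multi-class SVM over the three event classes described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 120))        # placeholder voltage/current windows
y = rng.integers(0, 3, size=300)       # capacitor switching / OLTC switching / load change

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```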

Emotional Metaheuristics For in-situ Foraging Using Sensor Constrained Robot Swarms

Title Emotional Metaheuristics For in-situ Foraging Using Sensor Constrained Robot Swarms
Authors Esh Vckay, Debasish Ghose
Abstract We present a new social-animal-inspired emotional swarm intelligence technique. This technique is used to solve a variant of the popular collective robotics problem called foraging. We show with a simulation study how simple interaction rules based on sensations like hunger and loneliness can lead to globally coherent emergent behavior that allows sensor-constrained robots to solve the given problem.
Tasks
Published 2017-05-09
URL https://arxiv.org/abs/1705.03175v2
PDF https://arxiv.org/pdf/1705.03175v2.pdf
PWC https://paperswithcode.com/paper/emotional-metaheuristics-for-in-situ-foraging
Repo
Framework
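
A toy simulation (not the authors' simulator) of the sensation-driven rules the abstract describes: each robot carries scalar "hunger" and "loneliness" levels, and whichever sensation dominates decides whether it steers toward the nearest food item or back toward the swarm. All parameters are illustrative.

```python
# Assumed toy dynamics: hunger grows with time, loneliness grows with distance
# from the swarm centroid, and reaching food resets hunger.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0, 10, size=(20, 2))        # robot positions
food = rng.uniform(0, 10, size=(5, 2))        # food locations
hunger = np.zeros(20)

for step in range(100):
    hunger += 0.05                             # hunger grows with time
    centroid = pos.mean(axis=0)
    loneliness = np.linalg.norm(pos - centroid, axis=1)
    for i in range(len(pos)):
        if hunger[i] > loneliness[i]:          # dominant sensation decides the move
            target = food[np.argmin(np.linalg.norm(food - pos[i], axis=1))]
        else:
            target = centroid
        d = target - pos[i]
        pos[i] += 0.1 * d / (np.linalg.norm(d) + 1e-9)
        if np.linalg.norm(food - pos[i], axis=1).min() < 0.2:
            hunger[i] = 0.0                    # "eating" resets hunger
```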

A fast ILP-based Heuristic for the robust design of Body Wireless Sensor Networks

Title A fast ILP-based Heuristic for the robust design of Body Wireless Sensor Networks
Authors Fabio D’Andreagiovanni, Antonella Nardin, Enrico Natalizio
Abstract We consider the problem of optimally designing a body wireless sensor network, while taking into account the uncertainty of data generation of biosensors. Since the related min-max robustness Integer Linear Programming (ILP) problem can be difficult to solve even for state-of-the-art commercial optimization solvers, we propose an original heuristic for its solution. The heuristic combines deterministic and probabilistic variable fixing strategies, guided by the information coming from strengthened linear relaxations of the ILP robust model, and includes a very large neighborhood search for reparation and improvement of generated solutions, formulated as an ILP problem solved exactly. Computational tests on realistic instances show that our heuristic finds solutions of much higher quality than a state-of-the-art solver and than an effective benchmark heuristic.
Tasks
Published 2017-04-15
URL http://arxiv.org/abs/1704.04640v1
PDF http://arxiv.org/pdf/1704.04640v1.pdf
PWC https://paperswithcode.com/paper/a-fast-ilp-based-heuristic-for-the-robust
Repo
Framework
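
A toy illustration of the LP-guided variable-fixing idea on a generic binary knapsack, not the body-sensor-network model: solve the continuous relaxation, fix the variables whose relaxed values are already near 0 or 1, and hand the reduced problem to an exact ILP solver (PuLP with CBC is an assumed substitute for the commercial solver mentioned above).

```python
# Deterministic variable fixing guided by the LP relaxation (sketch only).
import pulp

values = [10, 7, 5, 9, 3, 6]
weights = [4, 3, 2, 5, 1, 4]
capacity = 10

def build(cat):
    prob = pulp.LpProblem("knapsack", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", lowBound=0, upBound=1, cat=cat)
         for i in range(len(values))]
    prob += pulp.lpSum(values[i] * x[i] for i in range(len(values)))
    prob += pulp.lpSum(weights[i] * x[i] for i in range(len(values))) <= capacity
    return prob, x

# Step 1: the linear relaxation tells us which variables to fix.
relax, xr = build("Continuous")
relax.solve(pulp.PULP_CBC_CMD(msg=0))
fixed = {i: round(v.value()) for i, v in enumerate(xr)
         if v.value() is not None and (v.value() < 0.01 or v.value() > 0.99)}

# Step 2: exact ILP on the remaining free variables.
ilp, xi = build("Binary")
for i, val in fixed.items():
    ilp += xi[i] == val
ilp.solve(pulp.PULP_CBC_CMD(msg=0))
print("objective:", pulp.value(ilp.objective))
```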

Morphology-based Entity and Relational Entity Extraction Framework for Arabic

Title Morphology-based Entity and Relational Entity Extraction Framework for Arabic
Authors Amin Jaber, Fadi A. Zaraket
Abstract Rule-based techniques to extract relational entities from documents allow users to specify desired entities with natural language questions, finite state automata, regular expressions and structured query language. They require linguistic and programming expertise and lack support for Arabic morphological analysis. We present a morphology-based entity and relational entity extraction framework for Arabic (MERF). MERF requires basic knowledge of linguistic features and regular expressions, and provides the ability to interactively specify Arabic morphological and synonymity features, tag types associated with regular expressions, and relations and code actions defined over matches of subexpressions. MERF constructs entities and relational entities from matches of the specifications. We evaluated MERF with several case studies. The results show that MERF requires shorter development time and effort compared to existing application specific techniques and produces reasonably accurate results within a reasonable overhead in run time.
Tasks Entity Extraction, Morphological Analysis
Published 2017-09-17
URL http://arxiv.org/abs/1709.05700v2
PDF http://arxiv.org/pdf/1709.05700v2.pdf
PWC https://paperswithcode.com/paper/morphology-based-entity-and-relational-entity
Repo
Framework
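
An illustrative-only sketch of the regex-over-tags idea, not MERF's actual Arabic morphological pipeline: tag types are user-supplied regular expressions, and a relational entity is formed whenever two differently tagged matches co-occur in a sentence. The tag types and example text are hypothetical.

```python
# Sketch: tag types as regexes; relational entities from co-occurring matches.
import re

tag_types = {
    "NUMBER": r"\b\d+\b",
    "YEAR":   r"\b(19|20)\d{2}\b",
}

def extract(text):
    entities = []
    for sent in re.split(r"[.!?]", text):
        matches = [(tag, m.group()) for tag, rx in tag_types.items()
                   for m in re.finditer(rx, sent)]
        entities.extend(matches)
        # relational entity: pair every two differently tagged matches in a sentence
        for i in range(len(matches)):
            for j in range(i + 1, len(matches)):
                if matches[i][0] != matches[j][0]:
                    entities.append(("REL", matches[i], matches[j]))
    return entities

print(extract("The treaty of 1856 covered 12 provinces."))
```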

Private Incremental Regression

Title Private Incremental Regression
Authors Shiva Prasad Kasiviswanathan, Kobbi Nissim, Hongxia Jin
Abstract Data is continuously generated by modern data sources, and a recent challenge in machine learning has been to develop techniques that perform well in an incremental (streaming) setting. In this paper, we investigate the problem of private machine learning where, as is common in practice, the data is not given at once but rather arrives incrementally over time. We introduce the problems of private incremental ERM and private incremental regression, where the general goal is to always maintain a good empirical risk minimizer for the history observed under differential privacy. Our first contribution is a generic transformation of private batch ERM mechanisms into private incremental ERM mechanisms, based on the simple idea of invoking the private batch ERM procedure at regular time intervals. We take this construction as a baseline for comparison. We then provide two mechanisms for the private incremental regression problem. Our first mechanism is based on privately constructing a noisy incremental gradient function, which is then used in a modified projected gradient procedure at every timestep. This mechanism has an excess empirical risk of $\approx\sqrt{d}$, where $d$ is the dimensionality of the data. While from the results of [Bassily et al. 2014] this bound is tight in the worst case, we show that certain geometric properties of the input and constraint set can be used to derive significantly better results for certain interesting regression problems.
Tasks
Published 2017-01-04
URL http://arxiv.org/abs/1701.01093v1
PDF http://arxiv.org/pdf/1701.01093v1.pdf
PWC https://paperswithcode.com/paper/private-incremental-regression
Repo
Framework
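
A minimal sketch of the noisy-gradient idea for private incremental regression: at each timestep, the squared-loss gradient on the history is perturbed with Gaussian noise and a projected gradient step keeps the iterate in a bounded constraint set. The noise scale, step size, and projection radius below are illustrative and not calibrated to a specific (ε, δ) guarantee.

```python
# Noisy incremental gradient + projection onto an L2 ball (assumed constraint set).
import numpy as np

rng = np.random.default_rng(0)
d, T, radius, sigma, eta = 5, 200, 1.0, 0.5, 0.05
w_true = rng.normal(size=d)
w = np.zeros(d)
X_hist, y_hist = [], []

def project(v, r):
    n = np.linalg.norm(v)
    return v if n <= r else v * (r / n)

for t in range(1, T + 1):
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()
    X_hist.append(x); y_hist.append(y)
    X, Y = np.array(X_hist), np.array(y_hist)
    grad = X.T @ (X @ w - Y) / t               # squared-loss gradient on the history
    noisy_grad = grad + sigma * rng.normal(size=d)
    w = project(w - eta * noisy_grad, radius)  # projected gradient step
```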

A Comparison of deep learning methods for environmental sound

Title A Comparison of deep learning methods for environmental sound
Authors Juncheng Li, Wei Dai, Florian Metze, Shuhui Qu, Samarjit Das
Abstract Environmental sound detection is a challenging application of machine learning because of the noisy nature of the signal, and the small amount of (labeled) data that is typically available. This work thus presents a comparison of several state-of-the-art Deep Learning models on the IEEE challenge on Detection and Classification of Acoustic Scenes and Events (DCASE) 2016 challenge task and data, classifying sounds into one of fifteen common indoor and outdoor acoustic scenes, such as bus, cafe, car, city center, forest path, library, train, etc. In total, 13 hours of stereo audio recordings are available, making this one of the largest datasets available. We perform experiments on six sets of features, including standard Mel-frequency cepstral coefficients (MFCC), Binaural MFCC, log Mel-spectrum and two different large-scale temporal pooling features extracted using OpenSMILE. On these features, we apply five models: Gaussian Mixture Model (GMM), Deep Neural Network (DNN), Recurrent Neural Network (RNN), Convolutional Deep Neural Network (CNN) and i-vector. Using the late-fusion approach, we improve the performance of the baseline 72.5% by 15.6% in 4-fold Cross Validation (CV) avg. accuracy and 11% in test accuracy, which matches the best result of the DCASE 2016 challenge. With large feature sets, deep neural network models outperform traditional methods and achieve the best performance among all the studied methods. Consistent with other work, the best performing single model is the non-temporal DNN model, which we take as evidence that sounds in the DCASE challenge do not exhibit strong temporal dynamics.
Tasks
Published 2017-03-20
URL http://arxiv.org/abs/1703.06902v1
PDF http://arxiv.org/pdf/1703.06902v1.pdf
PWC https://paperswithcode.com/paper/a-comparison-of-deep-learning-methods-for
Repo
Framework
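
A hedged sketch of one feature/model pair from the comparison above: MFCC features averaged over time and fed to a small fully connected network, in the spirit of the "non-temporal DNN". The file paths and labels are placeholders, librosa is an assumed feature extractor, and scikit-learn's MLP stands in for the paper's DNN.

```python
# MFCCs pooled over time + a small fully connected classifier (illustrative only).
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(path, sr=44100, n_mfcc=20):
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                   # non-temporal pooling over frames

# hypothetical placeholders for DCASE 2016 recordings and scene labels
paths = ["scene_bus_001.wav", "scene_cafe_001.wav"]
labels = ["bus", "cafe"]
X = np.stack([mfcc_features(p) for p in paths])
clf = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=500).fit(X, labels)
```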

Learning Similarity Functions for Pronunciation Variations

Title Learning Similarity Functions for Pronunciation Variations
Authors Einat Naaman, Yossi Adi, Joseph Keshet
Abstract A significant source of errors in Automatic Speech Recognition (ASR) systems is the pronunciation variation that occurs in spontaneous and conversational speech. Usually, ASR systems use a finite lexicon that provides one or more pronunciations for each word. In this paper, we focus on learning a similarity function between two pronunciations. The pronunciations can be the canonical and the surface pronunciations of the same word, or they can be two surface pronunciations of different words. This task generalizes problems such as lexical access (the problem of learning the mapping between words and their possible pronunciations) and defining word neighborhoods. It can also be used to dynamically increase the size of the pronunciation lexicon, or to predict ASR errors. We propose two methods, both based on recurrent neural networks, to learn the similarity function. The first is based on binary classification, and the second on learning a ranking of the pronunciations. We demonstrate the efficiency of our approach on the task of lexical access using a subset of the Switchboard conversational speech corpus. Results suggest that, on this task, our methods are superior to previous approaches based on graphical Bayesian methods.
Tasks Speech Recognition
Published 2017-03-28
URL http://arxiv.org/abs/1703.09817v3
PDF http://arxiv.org/pdf/1703.09817v3.pdf
PWC https://paperswithcode.com/paper/learning-similarity-functions-for
Repo
Framework
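
A rough sketch of the binary-classification variant, under one assumed architecture rather than the paper's exact one: two phone sequences are encoded by a shared GRU, and a classifier scores whether they are pronunciations of the same word. Vocabulary size, dimensions, and the random inputs are placeholders.

```python
# Shared-encoder ("siamese"-style) GRU scoring a pair of pronunciations.
import torch
import torch.nn as nn

class PronunciationSimilarity(nn.Module):
    def __init__(self, n_phones=50, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(n_phones, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden, 1)

    def encode(self, seq):
        _, h = self.rnn(self.emb(seq))         # final hidden state as sequence embedding
        return h[-1]

    def forward(self, seq_a, seq_b):
        z = torch.cat([self.encode(seq_a), self.encode(seq_b)], dim=-1)
        return torch.sigmoid(self.out(z))      # probability that both realize the same word

model = PronunciationSimilarity()
a = torch.randint(0, 50, (8, 12))              # batch of canonical pronunciations (phone IDs)
b = torch.randint(0, 50, (8, 12))              # batch of surface pronunciations
p = model(a, b)
```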

Structural Compression of Convolutional Neural Networks

Title Structural Compression of Convolutional Neural Networks
Authors Reza Abbasi-Asl, Bin Yu
Abstract Deep convolutional neural networks (CNNs) have been successful in many machine vision tasks; however, the millions of weights in the form of thousands of convolutional filters make CNNs difficult for humans to interpret or understand in science. In this article, we introduce CAR, a greedy structural compression scheme that obtains smaller and more interpretable CNNs while achieving accuracy close to the original. The compression is based on pruning filters with the least contribution to the classification accuracy. We demonstrate the interpretability of CAR-compressed CNNs by showing that our algorithm prunes filters with visually redundant functionalities, such as color filters. These compressed networks are easier to interpret because they retain the filter diversity of uncompressed networks with an order of magnitude fewer filters. Finally, a variant of CAR is introduced to quantify the importance of each image category to each CNN filter. Specifically, the most and the least important class labels are shown to be meaningful interpretations of each filter.
Tasks
Published 2017-05-20
URL https://arxiv.org/abs/1705.07356v4
PDF https://arxiv.org/pdf/1705.07356v4.pdf
PWC https://paperswithcode.com/paper/structural-compression-of-convolutional
Repo
Framework
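
A sketch of the greedy pruning criterion described in the abstract: zero out each filter of a convolutional layer in turn, re-measure classification accuracy, and permanently prune the filter whose removal costs the least. The `evaluate` callback and the surrounding model/dataset are assumed to be supplied by the reader; this is not the authors' released implementation.

```python
# One greedy CAR-style pruning step (sketch), assuming an accuracy-evaluation callback.
import torch

def prune_one_filter(model, conv_layer, evaluate):
    """Zero the single filter of `conv_layer` whose removal least hurts accuracy."""
    weights = conv_layer.weight.data
    best_idx, best_acc = None, -1.0
    for i in range(weights.shape[0]):          # iterate over output filters
        saved = weights[i].clone()
        weights[i].zero_()
        acc = evaluate(model)                  # accuracy with filter i temporarily removed
        if acc > best_acc:
            best_idx, best_acc = i, acc
        weights[i].copy_(saved)                # restore before trying the next filter
    weights[best_idx].zero_()                  # permanently prune the chosen filter
    return best_idx, best_acc
```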

Cross-lingual RST Discourse Parsing

Title Cross-lingual RST Discourse Parsing
Authors Chloé Braud, Maximin Coavoux, Anders Søgaard
Abstract Discourse parsing is an integral part of understanding information flow and argumentative structure in documents. Most previous research has focused on inducing and evaluating models from the English RST Discourse Treebank. However, discourse treebanks for other languages exist, including Spanish, German, Basque, Dutch and Brazilian Portuguese. The treebanks share the same underlying linguistic theory, but differ slightly in the way documents are annotated. In this paper, we present (a) a new discourse parser which is simpler than, yet competitive with (and significantly better on 2 of 3 metrics than), the state of the art for English, (b) a harmonization of discourse treebanks across languages, enabling us to present (c) what are, to the best of our knowledge, the first experiments on cross-lingual discourse parsing.
Tasks
Published 2017-01-11
URL http://arxiv.org/abs/1701.02946v1
PDF http://arxiv.org/pdf/1701.02946v1.pdf
PWC https://paperswithcode.com/paper/cross-lingual-rst-discourse-parsing
Repo
Framework

Online linear optimization with the log-determinant regularizer

Title Online linear optimization with the log-determinant regularizer
Authors Ken-ichiro Moridomi, Kohei Hatano, Eiji Takimoto
Abstract We consider online linear optimization over symmetric positive semi-definite matrices, which has various applications including online collaborative filtering. The problem is formulated as a repeated game between the algorithm and the adversary, where in each round t the algorithm and the adversary choose matrices X_t and L_t, respectively, and then the algorithm suffers a loss given by the Frobenius inner product of X_t and L_t. The goal of the algorithm is to minimize the cumulative loss. We can employ a standard framework called Follow the Regularized Leader (FTRL) for designing algorithms, where we need to choose an appropriate regularization function to obtain a good performance guarantee. We show that the log-determinant regularization works better than other popular regularization functions in the case where the loss matrices L_t are all sparse. Using this property, we show that our algorithm achieves an optimal performance guarantee for online collaborative filtering. The technical contribution of the paper is to develop a new technique for deriving performance bounds by exploiting the strong convexity of the log-determinant with respect to the loss matrices, whereas in previous analyses the strong convexity is defined with respect to a norm. Intuitively, skipping the norm analysis results in the improved bound. Moreover, we apply our method to online linear optimization over vectors and show that FTRL with the Burg entropy regularizer, the analogue of the log-determinant regularizer in the vector case, works well.
Tasks
Published 2017-10-27
URL http://arxiv.org/abs/1710.10002v1
PDF http://arxiv.org/pdf/1710.10002v1.pdf
PWC https://paperswithcode.com/paper/online-linear-optimization-with-the-log
Repo
Framework
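
A numerical sketch of the FTRL update with the log-determinant regularizer, ignoring the paper's constraint set for simplicity: minimizing <X, Σ_s L_s> − (1/η) log det(X) over PSD matrices gives X = (η Σ_s L_s)^{-1}. The small ridge and the PSD stand-ins for the adversary's sparse loss matrices are assumptions of this sketch.

```python
# Unconstrained FTRL iterate with the log-determinant regularizer (illustration only).
import numpy as np

rng = np.random.default_rng(0)
d, T, eta = 4, 50, 0.5
cum_loss = 1e-3 * np.eye(d)                    # ridge keeps the cumulative loss invertible
total = 0.0

for t in range(T):
    X = np.linalg.inv(eta * cum_loss)          # closed-form FTRL iterate (no constraint set)
    A = rng.normal(size=(d, d))
    L = A @ A.T / d                            # stand-in for the adversary's loss matrix
    total += np.trace(X @ L)                   # Frobenius inner product <X_t, L_t>
    cum_loss += L
```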

Found in Translation: Reconstructing Phylogenetic Language Trees from Translations

Title Found in Translation: Reconstructing Phylogenetic Language Trees from Translations
Authors Ella Rabinovich, Noam Ordan, Shuly Wintner
Abstract Translation has played an important role in trade, law, commerce, politics, and literature for thousands of years. Translators have always tried to be invisible; ideal translations should look as if they were written originally in the target language. We show that traces of the source language remain in the translation product to the extent that it is possible to uncover the history of the source language by looking only at the translation. Specifically, we automatically reconstruct phylogenetic language trees from monolingual texts (translated from several source languages). The signal of the source language is so powerful that it is retained even after two phases of translation. This strongly indicates that source language interference is the most dominant characteristic of translated texts, overshadowing the more subtle signals of universal properties of translation.
Tasks
Published 2017-04-24
URL http://arxiv.org/abs/1704.07146v1
PDF http://arxiv.org/pdf/1704.07146v1.pdf
PWC https://paperswithcode.com/paper/found-in-translation-reconstructing
Repo
Framework
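
A sketch of the reconstruction step described above, with made-up features: each source language is represented by a feature vector extracted from its translations into the target language (e.g. function-word frequencies, which are an assumption here), and agglomerative clustering yields the recovered tree.

```python
# Hierarchical clustering of per-language feature vectors into a phylogenetic-style tree.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage

languages = ["French", "Italian", "Spanish", "German", "Dutch"]
rng = np.random.default_rng(0)
features = rng.normal(size=(len(languages), 30))   # placeholder for translation-derived features

tree = linkage(features, method="average", metric="cosine")
dendrogram(tree, labels=languages, no_plot=True)    # the recovered language tree
```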

Low-Rank Tensor Completion by Truncated Nuclear Norm Regularization

Title Low-Rank Tensor Completion by Truncated Nuclear Norm Regularization
Authors Shengke Xue, Wenyuan Qiu, Fan Liu, Xinyu Jin
Abstract Low-rank tensor completion has recently gained increasing attention for recovering incomplete visual data whose partial elements are missing. By treating a color image or video as a three-dimensional (3D) tensor, previous studies have suggested several definitions of the tensor nuclear norm. However, these definitions have limitations and may not properly approximate the real rank of a tensor. Moreover, they do not explicitly use the low-rank property in optimization. It has been shown that the recently proposed truncated nuclear norm (TNN) can replace the traditional nuclear norm as a better approximation of the rank of a matrix. This paper therefore presents a new method, the tensor truncated nuclear norm (T-TNN), which proposes a new definition of the tensor nuclear norm and extends the truncated nuclear norm from the matrix case to the tensor case. Benefiting from the low-rankness promoted by TNN, our approach improves the efficacy of tensor completion. We exploit the previously proposed tensor singular value decomposition and the alternating direction method of multipliers in the optimization. Extensive experiments on real-world videos and images demonstrate that our approach outperforms existing methods.
Tasks
Published 2017-12-03
URL http://arxiv.org/abs/1712.00704v5
PDF http://arxiv.org/pdf/1712.00704v5.pdf
PWC https://paperswithcode.com/paper/low-rank-tensor-completion-by-truncated
Repo
Framework
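
A matrix-case illustration of the truncated nuclear norm idea, not the full tensor t-SVD/ADMM algorithm of the paper: the r largest singular values are left untouched and only the tail is soft-thresholded, so the dominant low-rank structure is not penalized while missing entries are filled in. Parameters and the synthetic data are assumptions.

```python
# Truncated singular value thresholding for matrix completion (sketch only).
import numpy as np

def truncated_svt(M, mask, r=2, tau=1.0, iters=100):
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[r:] = np.maximum(s[r:] - tau, 0.0)   # shrink only the tail singular values
        X = (U * s) @ Vt
        X = np.where(mask, M, X)               # keep observed entries fixed
    return X

rng = np.random.default_rng(0)
M = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))  # rank-3 ground truth
mask = rng.random(M.shape) < 0.6                         # observed entries
M_hat = truncated_svt(M, mask, r=3, tau=0.5)
print("relative error:", np.linalg.norm(M_hat - M) / np.linalg.norm(M))
```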

Real-Time Visual Localisation in a Tagged Environment

Title Real-Time Visual Localisation in a Tagged Environment
Authors Jérémy Taquet, Gaël Écorchard, Libor Přeučil
Abstract In a robotised warehouse, a major issue is the safety of human operators in case of intervention in the work area of the robots. The current solution is to shut down every robot, but this causes a loss of productivity, especially in large robotised warehouses. To avoid this loss, we need to ensure the operator's safety during his/her intervention in the warehouse without powering off the robots. The human operator needs to be localised in the warehouse, and the trajectories of the robots have to be modified so that they do not interfere with the human. The purpose of this paper is to demonstrate a visual localisation method that uses visual elements already available in the current warehouse setup.
Tasks
Published 2017-07-31
URL http://arxiv.org/abs/1708.02283v1
PDF http://arxiv.org/pdf/1708.02283v1.pdf
PWC https://paperswithcode.com/paper/real-time-visual-localisation-in-a-tagged
Repo
Framework
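
A hedged sketch of the pose-recovery step behind tag-based localisation: given the four image corners of a tag whose physical size is known, a perspective-n-point solve yields the tag pose in the camera frame, from which the operator's position in the warehouse can be derived if the tag's warehouse coordinates are known. Tag detection itself (e.g. an ArUco/AprilTag detector) is assumed and represented by placeholder corners; the camera intrinsics below are invented.

```python
# PnP pose recovery from one detected fiducial tag (illustration only).
import numpy as np
import cv2

tag_size = 0.20                                    # tag edge length in metres (assumed)
object_pts = np.array([[0, 0, 0], [tag_size, 0, 0],
                       [tag_size, tag_size, 0], [0, tag_size, 0]], dtype=np.float32)
image_pts = np.array([[320, 240], [380, 242],
                      [378, 300], [318, 298]], dtype=np.float32)  # placeholder detected corners
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)  # assumed intrinsics
dist = np.zeros(5, dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
print("tag position in the camera frame:", tvec.ravel())
```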

Guided Labeling using Convolutional Neural Networks

Title Guided Labeling using Convolutional Neural Networks
Authors Sebastian Stabinger, Antonio Rodriguez-Sanchez
Abstract Over the last couple of years, deep learning, and especially convolutional neural networks, have become one of the workhorses of computer vision. One factor limiting the applicability of supervised deep learning to more areas is the need for large, manually labeled datasets. In this paper, we propose an easy-to-implement method we call guided labeling, which automatically determines which samples from an unlabeled dataset should be labeled. We show that using this procedure, the number of samples that need to be labeled is reduced considerably in comparison to labeling images arbitrarily.
Tasks
Published 2017-12-06
URL http://arxiv.org/abs/1712.02154v1
PDF http://arxiv.org/pdf/1712.02154v1.pdf
PWC https://paperswithcode.com/paper/guided-labeling-using-convolutional-neural
Repo
Framework
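
A small sketch of the "which samples to label next" loop, using uncertainty sampling as one plausible selection criterion; the paper's exact rule may differ, and a CNN would replace the logistic regression model in practice. The synthetic data and batch sizes are placeholders.

```python
# Iterative labeling loop: train on the labeled pool, then request labels for
# the unlabeled samples the model is least confident about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)

labeled = list(rng.choice(len(X), size=20, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)          # least-confident samples first
    pick = np.argsort(-uncertainty)[:20]           # indices (into `unlabeled`) to label next
    newly = [unlabeled[i] for i in pick]
    labeled.extend(newly)
    unlabeled = [i for i in unlabeled if i not in newly]
```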

Uniform Deviation Bounds for Unbounded Loss Functions like k-Means

Title Uniform Deviation Bounds for Unbounded Loss Functions like k-Means
Authors Olivier Bachem, Mario Lucic, S. Hamed Hassani, Andreas Krause
Abstract Uniform deviation bounds limit the difference between a model’s expected loss and its loss on an empirical sample uniformly for all models in a learning problem. As such, they are a critical component to empirical risk minimization. In this paper, we provide a novel framework to obtain uniform deviation bounds for loss functions which are unbounded. In our main application, this allows us to obtain bounds for $k$-Means clustering under weak assumptions on the underlying distribution. If the fourth moment is bounded, we prove a rate of $\mathcal{O}\left(m^{-\frac12}\right)$ compared to the previously known $\mathcal{O}\left(m^{-\frac14}\right)$ rate. Furthermore, we show that the rate also depends on the kurtosis - the normalized fourth moment which measures the “tailedness” of a distribution. We further provide improved rates under progressively stronger assumptions, namely, bounded higher moments, subgaussianity and bounded support.
Tasks
Published 2017-02-27
URL http://arxiv.org/abs/1702.08249v1
PDF http://arxiv.org/pdf/1702.08249v1.pdf
PWC https://paperswithcode.com/paper/uniform-deviation-bounds-for-unbounded-loss
Repo
Framework
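
Restating the abstract's opening definition and headline rate in one display (a paraphrase of the abstract, not a new result): a uniform deviation bound controls, simultaneously for all models in the class, the gap between empirical and expected loss, and under a bounded fourth moment the paper obtains the faster rate.

$$
\sup_{f \in \mathcal{F}} \left| \frac{1}{m}\sum_{i=1}^{m} \ell(f, x_i) - \mathbb{E}_{x \sim P}\bigl[\ell(f, x)\bigr] \right| \le \varepsilon(m),
\qquad \varepsilon(m) = \mathcal{O}\!\left(m^{-\frac{1}{2}}\right) \text{ under a bounded fourth moment.}
$$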