January 29, 2020


Paper Group ANR 530


Modularity as a Means for Complexity Management in Neural Networks Learning

Title Modularity as a Means for Complexity Management in Neural Networks Learning
Authors David Castillo-Bolado, Cayetano Guerra-Artal, Mario Hernandez-Tejera
Abstract Training a Neural Network (NN) with many parameters or an intricate architecture creates undesired phenomena that complicate the optimization process. To address this issue, we propose a first modular approach to NN design, wherein the NN is decomposed into a control module and several functional modules implementing primitive operations. We illustrate the modular concept by comparing the performance of a monolithic and a modular NN on a list sorting problem and show the benefits in terms of training speed, training stability and maintainability. We also discuss some questions that arise in modular NNs.
Tasks
Published 2019-02-25
URL http://arxiv.org/abs/1902.09240v1
PDF http://arxiv.org/pdf/1902.09240v1.pdf
PWC https://paperswithcode.com/paper/modularity-as-a-means-for-complexity
Repo
Framework
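
The control-plus-functional-modules idea above can be illustrated with a minimal numpy sketch. This is not the authors' architecture: the three primitive modules, the linear controller, and the mixture-of-experts-style soft blending are all illustrative assumptions, meant only to show how a control module can route an input through functional modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical functional modules implementing primitive operations.
def module_sort(x):      return np.sort(x)
def module_reverse(x):   return x[::-1]
def module_identity(x):  return x

MODULES = [module_sort, module_reverse, module_identity]

def control_module(x, W):
    # A linear "controller" scores each functional module for input x.
    return softmax(W @ x)

def modular_forward(x, W):
    # Blend the module outputs by the controller's soft weights
    # (mixture-of-experts style; in the paper the control module is learned).
    weights = control_module(x, W)            # (n_modules,)
    outputs = np.stack([m(x) for m in MODULES])  # (n_modules, d)
    return weights @ outputs                  # (d,)

x = rng.standard_normal(5)
W = rng.standard_normal((len(MODULES), 5))
y = modular_forward(x, W)
```

Training would then update both the controller weights and any parameters inside the functional modules, which is where the paper's claimed gains in training speed and stability come from.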

An Approach to Characterize Graded Entailment of Arguments through a Label-based Framework

Title An Approach to Characterize Graded Entailment of Arguments through a Label-based Framework
Authors Maximiliano C. D. Budán, Gerardo I. Simari, Ignacio Viglizzo, Guillermo R. Simari
Abstract Argumentation theory is a powerful paradigm that formalizes a type of commonsense reasoning that aims to simulate the human ability to resolve a specific problem in an intelligent manner. A classical argumentation process takes into account only the properties related to the intrinsic logical soundness of an argument in order to determine its acceptability status. However, these properties are not always the only ones that matter to establish the argument’s acceptability—there exist other qualities, such as strength, weight, social votes, trust degree, relevance level, and certainty degree, among others.
Tasks
Published 2019-03-05
URL http://arxiv.org/abs/1903.01865v1
PDF http://arxiv.org/pdf/1903.01865v1.pdf
PWC https://paperswithcode.com/paper/an-approach-to-characterize-graded-entailment
Repo
Framework

QDNN: DNN with Quantum Neural Network Layers

Title QDNN: DNN with Quantum Neural Network Layers
Authors Chen Zhao, Xiao-Shan Gao
Abstract The deep neural network (DNN) has become one of the most important and powerful machine learning methods in recent years. In this paper, we introduce a general quantum DNN, which consists of fully quantum structured layers with better representation power than the classical DNN while retaining its advantages, such as non-linear activation, the multi-layer structure, and the efficient backpropagation training algorithm. We prove that the quantum structured layer cannot be simulated efficiently by classical computers unless universal quantum computing can be classically simulated efficiently, and hence our quantum DNN has more representation power than the classical DNN. Moreover, our quantum DNN can be used on near-term noisy intermediate-scale quantum (NISQ) processors. A numerical experiment for image classification based on the quantum DNN is given, where high accuracy is achieved.
Tasks Image Classification
Published 2019-12-29
URL https://arxiv.org/abs/1912.12660v1
PDF https://arxiv.org/pdf/1912.12660v1.pdf
PWC https://paperswithcode.com/paper/qdnn-dnn-with-quantum-neural-network-layers
Repo
Framework

Training capsules as a routing-weighted product of expert neurons

Title Training capsules as a routing-weighted product of expert neurons
Authors Michael Hauser
Abstract Capsules are the multidimensional analogue of scalar neurons in neural networks, and because they are multidimensional, much more complex routing schemes can be used to pass information forward through the network than in traditional neural networks. This work treats capsules as collections of neurons in a fully connected neural network, where sub-networks connecting capsules are weighted according to the routing coefficients determined by routing-by-agreement. An energy function is designed to reflect this model, and it follows that capsule networks with dynamic routing can be formulated as a product of expert neurons. By alternating between dynamic routing, which acts both to find sub-networks within the overall network and to mix the model distribution, and updating the parameters by the gradient of the contrastive divergence, a bottom-up, unsupervised learning algorithm is constructed for capsule networks with dynamic routing. The model and its training algorithm are qualitatively tested in the generative sense, and the model is able to produce realistic-looking images from standard vision datasets.
Tasks
Published 2019-07-26
URL https://arxiv.org/abs/1907.11639v1
PDF https://arxiv.org/pdf/1907.11639v1.pdf
PWC https://paperswithcode.com/paper/training-capsules-as-a-routing-weighted
Repo
Framework
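
The routing-by-agreement step the abstract builds on can be sketched in a few lines of numpy. This is the standard dynamic-routing iteration (Sabour et al.-style), not this paper's energy-based training; the capsule counts and dimensions are arbitrary toy values.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    # Capsule non-linearity: shrinks short vectors toward 0, long ones toward unit length.
    n2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, n_iters=3):
    # u_hat: predictions from lower capsules for each output capsule,
    # shape (n_in, n_out, dim).
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # routing coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum per output capsule
        v = squash(s)                                         # output capsules
        b = b + (u_hat * v[None]).sum(axis=-1)                # agreement update
    return v, c

rng = np.random.default_rng(1)
u_hat = rng.standard_normal((8, 3, 4))   # 8 input capsules, 3 outputs, dim 4
v, c = dynamic_routing(u_hat)
```

The routing coefficients `c` are exactly the sub-network weights the abstract refers to: they decide how strongly each lower capsule's sub-network contributes to each expert.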

Impressive computational acceleration by using machine learning for 2-dimensional super-lubricant materials discovery

Title Impressive computational acceleration by using machine learning for 2-dimensional super-lubricant materials discovery
Authors Marco Fronzi, Mutaz Abu Ghazaleh, Olexandr Isayev, David A. Winkler, Joe Shapter, Michael J. Ford
Abstract The screening of novel materials is an important topic in the field of materials science. Although traditional computational modeling, especially first-principles approaches, is a very useful and accurate tool for predicting the properties of novel materials, it still demands extensive and expensive state-of-the-art computational resources and can often be extremely time consuming. We describe a time- and resource-efficient machine learning approach to create a large dataset of structural properties of van der Waals layered structures. In particular, we focus on the interlayer energy and the elastic constant of layered materials composed of two different 2-dimensional (2D) structures, which are important for novel solid lubricant and super-lubricant materials. We show that machine learning models can recapitulate the results of computationally expensive approaches (i.e., density functional theory) with high accuracy.
Tasks
Published 2019-11-20
URL https://arxiv.org/abs/1911.11559v1
PDF https://arxiv.org/pdf/1911.11559v1.pdf
PWC https://paperswithcode.com/paper/impressive-computational-acceleration-by
Repo
Framework

Signal Demodulation with Machine Learning Methods for Physical Layer Visible Light Communications: Prototype Platform, Open Dataset and Algorithms

Title Signal Demodulation with Machine Learning Methods for Physical Layer Visible Light Communications: Prototype Platform, Open Dataset and Algorithms
Authors Shuai Ma, Jiahui Dai, Songtao Lu, Hang Li, Han Zhang, Chun Du, Shiyin Li
Abstract In this paper, we investigate the design and implementation of machine learning (ML) based demodulation methods in the physical layer of visible light communication (VLC) systems. We build a flexible hardware prototype of an end-to-end VLC system, from which the received signals are collected as real data. The dataset, which contains eight types of modulated signals, is available online. Then, we propose three ML demodulators based on convolutional neural networks (CNN), deep belief networks (DBN), and adaptive boosting (AdaBoost), respectively. Specifically, the CNN-based demodulator converts the modulated signals to images and recognizes the signals via image classification. The proposed DBN-based demodulator contains three restricted Boltzmann machines (RBMs) to extract the modulation features. The AdaBoost method uses a strong classifier constructed from weak classifiers based on the k-nearest neighbor (KNN) algorithm. These three demodulators are trained and tested on our online open dataset. Experimental results show that the demodulation accuracy of the three data-driven demodulators drops as the transmission distance increases, and a higher modulation order negatively influences the accuracy for a given transmission distance. Among the three ML methods, the AdaBoost demodulator achieves the best performance.
Tasks Image Classification
Published 2019-03-13
URL http://arxiv.org/abs/1903.11385v1
PDF http://arxiv.org/pdf/1903.11385v1.pdf
PWC https://paperswithcode.com/paper/signal-demodulation-with-machine-learning
Repo
Framework
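
The data-driven demodulation idea can be illustrated with a self-contained sketch: treat each received waveform as a feature vector and classify the transmitted symbol with a nearest-neighbour vote, the weak learner underlying the paper's AdaBoost demodulator. Everything here is synthetic and illustrative (the 4-PAM levels, pulse length, and noise level are assumptions, not the paper's dataset).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a VLC dataset: 4-PAM symbols, each observed
# as a short noisy 16-sample waveform.
levels = np.array([-3.0, -1.0, 1.0, 3.0])

def make_signals(n, noise_sigma=0.4):
    labels = rng.integers(0, 4, size=n)
    clean = levels[labels][:, None] * np.ones((n, 16))
    return clean + noise_sigma * rng.standard_normal((n, 16)), labels

X_train, y_train = make_signals(400)
X_test, y_test = make_signals(100)

def knn_predict(X, k=5):
    # Brute-force k-nearest-neighbour vote over raw waveforms; the paper's
    # AdaBoost demodulator boosts many such weak KNN classifiers.
    d = ((X[:, None, :] - X_train[None]) ** 2).sum(-1)   # pairwise squared distances
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = y_train[nearest]
    return np.array([np.bincount(v, minlength=4).argmax() for v in votes])

acc = (knn_predict(X_test) == y_test).mean()
```

On real measurements, accuracy degrades with transmission distance and modulation order, which is exactly the effect the paper quantifies.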

When is it right and good for an intelligent autonomous vehicle to take over control (and hand it back)?

Title When is it right and good for an intelligent autonomous vehicle to take over control (and hand it back)?
Authors Ajit Narayanan
Abstract There is much debate in machine ethics about the most appropriate way to introduce ethical reasoning capabilities into intelligent autonomous machines. Recent incidents involving autonomous vehicles in which humans have been killed or injured have raised questions about how we ensure that such vehicles have an ethical dimension to their behaviour and are therefore trustworthy. The main problem is that hardwiring such machines with rules not to cause harm or damage is not consistent with the notion of autonomy and intelligence. Also, such ethical hardwiring leaves intelligent autonomous machines without any course of action if they encounter situations or dilemmas for which they are not programmed, or where some harm is caused no matter what course of action is taken. Teaching machines so that they learn ethics may also be problematic, given recent findings in machine learning that machines pick up the prejudices and biases embedded in their learning algorithms or data. This paper describes a fuzzy reasoning approach to machine ethics. It shows how an ethics architecture can reason about when taking over from a human driver is morally justified. The design behind this ethical reasoner is also applied to an ethical dilemma resolution case. One major advantage of the approach is that the ethical reasoner can generate its own data for learning moral rules (hence, autometric) and thereby reduce the possibility of picking up human biases and prejudices. The results show that a new type of metric-based ethics appropriate for autonomous intelligent machines is feasible, and that our current concept of ethical reasoning as largely qualitative in nature may need revising if we want to construct future autonomous machines that have an ethical dimension to their reasoning, making them moral machines.
Tasks Autonomous Vehicles
Published 2019-01-24
URL http://arxiv.org/abs/1901.08221v1
PDF http://arxiv.org/pdf/1901.08221v1.pdf
PWC https://paperswithcode.com/paper/when-is-it-right-and-good-for-an-intelligent
Repo
Framework

Private Center Points and Learning of Halfspaces

Title Private Center Points and Learning of Halfspaces
Authors Amos Beimel, Shay Moran, Kobbi Nissim, Uri Stemmer
Abstract We present a private learner for halfspaces over an arbitrary finite domain $X\subset \mathbb{R}^d$ with sample complexity $\mathrm{poly}(d,2^{\log^*|X|})$. The building block for this learner is a differentially private algorithm for locating an approximate center point of $m>\mathrm{poly}(d,2^{\log^*|X|})$ points – a high-dimensional generalization of the median function. Our construction establishes a relationship between these two problems that is reminiscent of the relation between the median and learning one-dimensional thresholds [Bun et al.\ FOCS ‘15]. This relationship suggests that the problem of privately locating a center point may have further applications in the design of differentially private algorithms. We also provide a lower bound on the sample complexity for privately finding a point in the convex hull. For approximate differential privacy, we show a lower bound of $m=\Omega(d+\log^*|X|)$, whereas for pure differential privacy $m=\Omega(d\log|X|)$.
Tasks
Published 2019-02-27
URL http://arxiv.org/abs/1902.10731v1
PDF http://arxiv.org/pdf/1902.10731v1.pdf
PWC https://paperswithcode.com/paper/private-center-points-and-learning-of
Repo
Framework
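
The one-dimensional special case the abstract alludes to, privately locating a median, can be sketched with the exponential mechanism. This is a textbook construction, not the paper's high-dimensional algorithm; the domain, data distribution, and epsilon below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_median(data, domain, epsilon):
    # Exponential mechanism for the median over a finite 1-D domain.
    # Utility: minus the rank distance of candidate r from being a median;
    # adding/removing one data point changes it by at most 1 (sensitivity 1).
    data = np.sort(data)
    n = len(data)
    left = np.searchsorted(data, domain, side='left')    # points strictly below r
    right = np.searchsorted(data, domain, side='right')  # points at or below r
    u = -np.maximum(0, np.maximum(n / 2 - right, left - n / 2))
    probs = np.exp(epsilon * u / 2.0)
    probs /= probs.sum()
    return rng.choice(domain, p=probs)

domain = np.arange(0, 101)
data = rng.integers(40, 61, size=500)   # mass concentrated around 50
m = private_median(data, domain, epsilon=2.0)
```

Candidates far from the true median get exponentially small probability, so the released value is almost always near 50 while each individual's presence barely changes the output distribution.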

A Comparative Analysis of XGBoost

Title A Comparative Analysis of XGBoost
Authors Candice Bentéjac, Anna Csörgő, Gonzalo Martínez-Muñoz
Abstract XGBoost is a scalable ensemble technique based on gradient boosting that has been shown to be a reliable and efficient machine learning challenge solver. This work presents a practical analysis of how this technique works in terms of training speed, generalization performance and parameter setup. In addition, a comprehensive comparison between XGBoost, random forests and gradient boosting has been performed using carefully tuned models as well as the default settings. The results of this comparison indicate that XGBoost is not necessarily the best choice under all circumstances. Finally, an extensive analysis of the XGBoost parameter tuning process is carried out.
Tasks
Published 2019-11-05
URL https://arxiv.org/abs/1911.01914v1
PDF https://arxiv.org/pdf/1911.01914v1.pdf
PWC https://paperswithcode.com/paper/a-comparative-analysis-of-xgboost
Repo
Framework
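
A comparison in the spirit of the paper can be set up in a few lines of scikit-learn. This sketch compares random forests and classical gradient boosting at default-like settings on a synthetic task; the paper additionally benchmarks the separate `xgboost` package and performs careful hyperparameter tuning, neither of which is shown here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Small synthetic benchmark (illustrative; the paper uses many real datasets).
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(n_estimators=100, random_state=0),
}
# Fit each model and record test accuracy.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

The paper's point is precisely that which of these wins depends on the dataset and on how much tuning each method receives.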

Weighted Automata Extraction from Recurrent Neural Networks via Regression on State Spaces

Title Weighted Automata Extraction from Recurrent Neural Networks via Regression on State Spaces
Authors Takamasa Okudono, Masaki Waga, Taro Sekiyama, Ichiro Hasuo
Abstract We present a method to extract a weighted finite automaton (WFA) from a recurrent neural network (RNN). Our algorithm is based on the WFA learning algorithm by Balle and Mohri, which is in turn an extension of Angluin’s classic L* algorithm. Our technical novelty is in the use of regression methods for the so-called equivalence queries, thus exploiting the internal state space of an RNN to prioritize counterexample candidates. In this way we achieve a quantitative/weighted extension of the recent work by Weiss, Goldberg and Yahav that extracts DFAs. We experimentally evaluate the accuracy, expressivity and efficiency of the extracted WFAs.
Tasks
Published 2019-04-05
URL https://arxiv.org/abs/1904.02931v3
PDF https://arxiv.org/pdf/1904.02931v3.pdf
PWC https://paperswithcode.com/paper/weighted-automata-extraction-from-recurrent
Repo
Framework
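
For readers unfamiliar with the target model, a WFA assigns each word a weight via an initial vector, one transition matrix per symbol, and a final vector. The toy automaton below (my example, not one extracted from an RNN) counts occurrences of 'a':

```python
import numpy as np

# WFA over alphabet {a, b}: weight(w) = alpha^T · A_{w1} · ... · A_{wn} · beta.
alpha = np.array([1.0, 0.0])          # initial weights
beta  = np.array([0.0, 1.0])          # final weights
A = {
    "a": np.array([[1.0, 1.0],
                   [0.0, 1.0]]),      # reading 'a' adds 1 to the running count
    "b": np.array([[1.0, 0.0],
                   [0.0, 1.0]]),      # reading 'b' changes nothing
}

def wfa_weight(word):
    v = alpha
    for ch in word:
        v = v @ A[ch]                 # advance the state vector by one symbol
    return float(v @ beta)
```

Extraction then amounts to finding such matrices whose word weights match the RNN's outputs, with regression on the RNN's hidden states guiding the search for counterexamples.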

Inspecting adversarial examples using the Fisher information

Title Inspecting adversarial examples using the Fisher information
Authors Jörg Martin, Clemens Elster
Abstract Adversarial examples are slight perturbations that are designed to fool artificial neural networks when fed as an input. In this work the usability of the Fisher information for the detection of such adversarial attacks is studied. We discuss various quantities whose computation scales well with the network size, study their behavior on adversarial examples and show how they can highlight the importance of single input neurons, thereby providing a visual tool for further analyzing (un-)reasonable behavior of a neural network. The potential of our methods is demonstrated by applications to the MNIST, CIFAR10 and Fruits-360 datasets.
Tasks
Published 2019-09-12
URL https://arxiv.org/abs/1909.05527v1
PDF https://arxiv.org/pdf/1909.05527v1.pdf
PWC https://paperswithcode.com/paper/inspecting-adversarial-examples-using-the
Repo
Framework
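
One of the cheap quantities the abstract mentions can be made concrete for a toy model: the trace of the Fisher information of the output distribution with respect to the input. The linear-softmax model below is a stand-in, not the networks from the paper, and using this trace as an adversarial-detection score is the paper's idea applied to a minimal setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fisher_trace(W, x):
    # For p(y|x) = softmax(W x):
    #   tr F(x) = sum_y p_y * ||grad_x log p_y||^2,
    # where grad_x log p_y = W_y - sum_c p_c W_c.
    # This scales linearly with model size, hence "computation scales well".
    p = softmax(W @ x)
    mean_row = p @ W                  # sum_c p_c W_c
    grads = W - mean_row              # one gradient row per class
    return float((p * (grads ** 2).sum(axis=1)).sum())

W = rng.standard_normal((3, 8))      # 3 classes, 8 input features
x = rng.standard_normal(8)
score = fisher_trace(W, x)
```

Adversarial inputs tend to sit where the output distribution is unusually sensitive to the input, so elevated scores of this kind flag suspicious examples; per-feature terms of the sum give the visual attribution tool the abstract describes.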

Demand forecasting techniques for build-to-order lean manufacturing supply chains

Title Demand forecasting techniques for build-to-order lean manufacturing supply chains
Authors Rodrigo Rivera-Castro, Ivan Nazarov, Yuke Xiang, Alexander Pletneev, Ivan Maksimov, Evgeny Burnaev
Abstract Build-to-order (BTO) supply chains have become commonplace in industries such as electronics, automotive and fashion. They enable building products based on individual requirements with a short lead time and minimum inventory and production costs. Due to their nature, they differ significantly from traditional supply chains. However, there have not been studies dedicated to demand forecasting methods for this type of setting. This work makes two contributions. First, it presents a new and unique data set from a manufacturer in the BTO sector. Second, it proposes a novel data transformation technique for demand forecasting of BTO products. Results from thirteen forecasting methods show that the approach compares well to the state-of-the-art while being easy to implement and to explain to decision-makers.
Tasks
Published 2019-05-20
URL https://arxiv.org/abs/1905.07902v1
PDF https://arxiv.org/pdf/1905.07902v1.pdf
PWC https://paperswithcode.com/paper/demand-forecasting-techniques-for-build-to
Repo
Framework

Hierarchical Demand Forecasting Benchmark for the Distribution Grid

Title Hierarchical Demand Forecasting Benchmark for the Distribution Grid
Authors Lorenzo Nespoli, Vasco Medici, Kristijan Lopatichki, Fabrizio Sossan
Abstract We present a comparative study of different probabilistic forecasting techniques on the task of predicting the electrical load of secondary substations and cabinets located in a low voltage distribution grid, as well as their aggregated power profile. The methods are evaluated using standard KPIs for deterministic and probabilistic forecasts. We also compare the ability of different hierarchical techniques in improving the bottom level forecasters’ performances. Both the raw and cleaned datasets, including meteorological data, are made publicly available to provide a standard benchmark for evaluating forecasting algorithms for demand-side management applications.
Tasks
Published 2019-10-03
URL https://arxiv.org/abs/1910.03976v1
PDF https://arxiv.org/pdf/1910.03976v1.pdf
PWC https://paperswithcode.com/paper/hierarchical-demand-forecasting-benchmark-for
Repo
Framework
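
The simplest hierarchical technique in such comparisons, bottom-up reconciliation, fits in a few lines: forecast each substation or cabinet separately and obtain the aggregate by summation, so the hierarchy is consistent by construction. The summing matrix and the forecast values below are illustrative, not from the paper's benchmark.

```python
import numpy as np

# Two-level hierarchy: one aggregate on top of n_bottom substations/cabinets.
# S maps the bottom-level series to every level of the hierarchy.
n_bottom = 4
S = np.vstack([np.ones((1, n_bottom)),   # top level = sum of all bottom series
               np.eye(n_bottom)])        # bottom levels pass through unchanged

bottom_forecasts = np.array([1.2, 0.8, 2.0, 1.0])   # illustrative kW values
all_levels = S @ bottom_forecasts
# all_levels[0] is the aggregate forecast; rows 1..n_bottom are the bottom series.
```

More sophisticated reconciliation methods replace this pure bottom-up mapping with a weighted combination of forecasts from all levels, which is what the benchmark evaluates.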

Approximation power of random neural networks

Title Approximation power of random neural networks
Authors Bolton Bailey, Ziwei Ji, Matus Telgarsky, Ruicheng Xian
Abstract This paper investigates the approximation power of three types of random neural networks: (a) infinite width networks, with weights following an arbitrary distribution; (b) finite width networks obtained by subsampling the preceding infinite width networks; (c) finite width networks obtained by starting with standard Gaussian initialization, and then adding a vanishingly small correction to the weights. The primary result is a fully quantified bound on the rate of approximation of general continuous functions: in all three cases, a function $f$ can be approximated with complexity $\|f\|_1 (d/\delta)^{\mathcal{O}(d)}$, where $\delta$ depends on continuity properties of $f$ and the complexity measure depends on the weight magnitudes and/or cardinalities. Along the way, a variety of ancillary results are developed: an exact construction of Gaussian densities with infinite width networks, an elementary stand-alone proof scheme for approximation via convolutions of radial basis functions, subsampling rates for infinite width networks, and depth separation for corrected networks.
Tasks
Published 2019-06-18
URL https://arxiv.org/abs/1906.07709v2
PDF https://arxiv.org/pdf/1906.07709v2.pdf
PWC https://paperswithcode.com/paper/approximation-power-of-random-neural-networks
Repo
Framework

Semantic Granularity Metric Learning for Visual Search

Title Semantic Granularity Metric Learning for Visual Search
Authors Dipu Manandhar, Muhammet Bastan, Kim-Hui Yap
Abstract Deep metric learning applied to various applications has shown promising results in identification, retrieval and recognition. Existing methods often do not consider different granularities of visual similarity. However, in many domains images exhibit similarity at multiple granularities of visual semantic concepts; e.g., fashion demonstrates similarity ranging from clothing of the exact same instance to a similar look/design or a common category. Therefore, the image triplets/pairs used for metric learning inherently possess different degrees of information, yet existing methods often treat them with equal importance during training. This hinders capturing the underlying granularities in feature similarity required for effective visual search. In view of this, we propose a new deep semantic granularity metric learning (SGML) method that develops a novel idea of leveraging an attribute semantic space to capture different granularities of similarity, and then integrates this information into deep metric learning. The proposed method simultaneously learns image attributes and embeddings using multitask CNNs. The two tasks are not only jointly optimized but are further linked by the semantic granularity similarity mappings to leverage the correlations between them. To this end, we propose a new soft-binomial deviance loss that effectively integrates the degree of information in training samples, which helps to capture visual similarity at multiple granularities. Compared to recent ensemble-based methods, our framework is conceptually elegant, computationally simple and provides better performance. We perform extensive experiments on benchmark metric learning datasets and demonstrate that our method outperforms recent state-of-the-art methods, e.g., a 1-4.5% improvement in Recall@1 over the previous state-of-the-art [1],[2] on the DeepFashion In-Shop dataset.
Tasks Metric Learning
Published 2019-11-14
URL https://arxiv.org/abs/1911.06047v1
PDF https://arxiv.org/pdf/1911.06047v1.pdf
PWC https://paperswithcode.com/paper/semantic-granularity-metric-learning-for
Repo
Framework