October 18, 2019

2906 words 14 mins read

Paper Group ANR 429

French Word Recognition through a Quick Survey on Recurrent Neural Networks Using Long-Short Term Memory RNN-LSTM. Learning sparse relational transition models. Learning One-hidden-layer ReLU Networks via Gradient Descent. Memetic Graph Clustering. Effective Quantization Approaches for Recurrent Neural Networks. Online learning with feedback graphs …

French Word Recognition through a Quick Survey on Recurrent Neural Networks Using Long-Short Term Memory RNN-LSTM

Title French Word Recognition through a Quick Survey on Recurrent Neural Networks Using Long-Short Term Memory RNN-LSTM
Authors Saman Sarraf
Abstract Optical character recognition (OCR) is a fundamental problem in computer vision. Research studies have shown significant progress in classifying printed characters using deep learning-based methods and topologies. Among current algorithms, recurrent neural networks with long short-term memory blocks, called RNN-LSTM, have provided the highest performance in terms of accuracy rate. Using the top 5,000 French words collected from the internet, including all signs and accents, RNN-LSTM models were trained and tested for several cases. Six fonts were used to generate OCR samples, and an additional dataset that included all samples from these six fonts was prepared for training and testing purposes. The trained RNN-LSTM models were tested and achieved accuracy rates of 99.98798% and 99.91889% for edit distance and sequence error, respectively. Accurate preprocessing followed by height normalization (a standardization method in deep learning) enabled the RNN-LSTM model to be trained efficiently. This machine learning work also revealed the robustness of the RNN-LSTM topology for recognizing printed characters.
Tasks Optical Character Recognition
Published 2018-04-10
URL http://arxiv.org/abs/1804.03683v1
PDF http://arxiv.org/pdf/1804.03683v1.pdf
PWC https://paperswithcode.com/paper/french-word-recognition-through-a-quick
Repo
Framework
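
No repository is listed for this entry, so the following is a purely illustrative sketch of the kind of bidirectional LSTM word recognizer with CTC loss that the abstract describes. It is not the authors' implementation; the layer sizes, alphabet size, and the column-per-timestep input convention are assumptions.

```python
# Minimal, illustrative RNN-LSTM word recognizer trained with CTC loss (PyTorch).
# Not the paper's code: sizes, names and the CTC setup are assumptions.
import torch
import torch.nn as nn

class LSTMRecognizer(nn.Module):
    def __init__(self, img_height=32, hidden=256, num_classes=100):
        super().__init__()
        # Each column of the height-normalized word image is one time step.
        self.lstm = nn.LSTM(input_size=img_height, hidden_size=hidden,
                            num_layers=2, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)  # num_classes includes the CTC blank

    def forward(self, images):            # images: (batch, width, height)
        features, _ = self.lstm(images)   # (batch, width, 2 * hidden)
        logits = self.fc(features)        # (batch, width, num_classes)
        return logits.log_softmax(dim=-1)

model = LSTMRecognizer()
ctc = nn.CTCLoss(blank=0)

batch, width, height = 4, 128, 32
images = torch.randn(batch, width, height)
targets = torch.randint(1, 100, (batch, 10))           # dummy character labels
input_lengths = torch.full((batch,), width, dtype=torch.long)
target_lengths = torch.full((batch,), 10, dtype=torch.long)

log_probs = model(images).permute(1, 0, 2)             # CTCLoss expects (T, N, C)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```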

Learning sparse relational transition models

Title Learning sparse relational transition models
Authors Victoria Xia, Zi Wang, Leslie Pack Kaelbling
Abstract We present a representation for describing transition models in complex uncertain domains using relational rules. For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state given their properties in the previous state. An iterative greedy algorithm is used to construct a set of deictic references that determine which objects are relevant in any given state. Feed-forward neural networks are used to learn the transition distribution on the relevant objects’ properties. This strategy is demonstrated to be both more versatile and more sample efficient than learning a monolithic transition model in a simulated domain in which a robot pushes stacks of objects on a cluttered table.
Tasks
Published 2018-10-26
URL http://arxiv.org/abs/1810.11177v1
PDF http://arxiv.org/pdf/1810.11177v1.pdf
PWC https://paperswithcode.com/paper/learning-sparse-relational-transition-models
Repo
Framework
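
As an illustration of the representation described above, the sketch below encodes a single relational rule: deictic references select the relevant objects, and a small feed-forward network maps their current properties to (the mean of) their next-state properties. The class names, the ReLU hidden layer, and the toy state layout are assumptions, not the authors' implementation.

```python
# Illustrative sketch of a relational transition rule: deictic references pick the
# relevant objects and a feed-forward net predicts their next-state properties.
# Class names, the output head and the feature layout are assumptions.
import numpy as np

class Rule:
    def __init__(self, references, in_dim, out_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.references = references                  # functions: state -> object id
        self.W1 = rng.normal(size=(in_dim, hidden))
        self.W2 = rng.normal(size=(hidden, out_dim))

    def relevant_objects(self, state):
        # Apply the deictic references to select the relevant objects.
        return [ref(state) for ref in self.references]

    def predict(self, state):
        objs = self.relevant_objects(state)
        x = np.concatenate([state[o] for o in objs])  # current properties of those objects
        h = np.maximum(0.0, x @ self.W1)              # ReLU hidden layer
        mean = h @ self.W2                            # predicted mean of next-state properties
        return objs, mean

# Toy usage: the state maps object ids to property vectors; one reference picks the
# pushed object, another the object resting on top of it.
state = {"a": np.array([0.0, 1.0]), "b": np.array([1.0, 0.5])}
rule = Rule(references=[lambda s: "a", lambda s: "b"], in_dim=4, out_dim=4)
objs, mean = rule.predict(state)
```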

Learning One-hidden-layer ReLU Networks via Gradient Descent

Title Learning One-hidden-layer ReLU Networks via Gradient Descent
Authors Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu
Abstract We study the problem of learning one-hidden-layer neural networks with the Rectified Linear Unit (ReLU) activation function, where the inputs are sampled from the standard Gaussian distribution and the outputs are generated by a noisy teacher network. We analyze the performance of gradient descent for training such neural networks based on empirical risk minimization, and provide algorithm-dependent guarantees. In particular, we prove that tensor initialization followed by gradient descent can converge to the ground-truth parameters at a linear rate up to some statistical error. To the best of our knowledge, this is the first work characterizing the recovery guarantee for practical learning of one-hidden-layer ReLU networks with multiple neurons. Numerical experiments verify our theoretical findings.
Tasks
Published 2018-06-20
URL http://arxiv.org/abs/1806.07808v1
PDF http://arxiv.org/pdf/1806.07808v1.pdf
PWC https://paperswithcode.com/paper/learning-one-hidden-layer-relu-networks-via
Repo
Framework
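
The setting analyzed above is easy to reproduce as a toy experiment: Gaussian inputs, a noisy one-hidden-layer ReLU teacher, and plain gradient descent on the empirical risk. The sketch below substitutes an initialization near the ground truth for the paper's tensor initialization; the dimensions and learning rate are arbitrary.

```python
# Toy teacher-student experiment: Gaussian inputs, a noisy one-hidden-layer ReLU
# teacher, and gradient descent on the empirical (squared) risk.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 5, 2000                       # input dim, hidden width, samples

W_true = rng.normal(size=(k, d))            # teacher weights
X = rng.normal(size=(n, d))                 # standard Gaussian inputs
y = np.maximum(X @ W_true.T, 0).sum(axis=1) + 0.01 * rng.normal(size=n)

W = W_true + 0.1 * rng.normal(size=(k, d))  # start near the ground truth (assumption)
lr = 0.05
for step in range(500):
    pre = X @ W.T                           # (n, k) pre-activations
    resid = np.maximum(pre, 0).sum(axis=1) - y
    # Gradient of 0.5 * mean squared error with respect to W.
    grad = ((resid[:, None] * (pre > 0)).T @ X) / n
    W -= lr * grad

print("parameter error:", np.linalg.norm(W - W_true))
```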

Memetic Graph Clustering

Title Memetic Graph Clustering
Authors Sonja Biedermann, Monika Henzinger, Christian Schulz, Bernhard Schuster
Abstract It is common knowledge that there is no single best strategy for graph clustering, which justifies the plethora of existing approaches. In this paper, we present a general memetic algorithm, VieClus, to tackle the graph clustering problem. The algorithm can be adapted to optimize different objective functions. A key component of our contribution is a set of natural recombination operators that employ ensemble clusterings as well as multi-level techniques. Lastly, we combine these techniques with a scalable communication protocol, producing a system that is able to compute high-quality solutions in a short amount of time. We instantiate our scheme with local search for modularity and show that our algorithm successfully improves or reproduces all entries of the 10th DIMACS implementation challenge under consideration in a small amount of time.
Tasks Graph Clustering
Published 2018-02-20
URL http://arxiv.org/abs/1802.07034v1
PDF http://arxiv.org/pdf/1802.07034v1.pdf
PWC https://paperswithcode.com/paper/memetic-graph-clustering
Repo
Framework
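
As a rough illustration of the memetic scheme, the sketch below keeps a small population of clusterings, recombines two parents by intersecting their partitions, and applies a greedy vertex-move local search with modularity as the objective. It is far simpler than VieClus (no ensemble clusterings, no multi-level coarsening, no parallel communication); all function names and parameters are assumptions.

```python
# Toy memetic modularity clustering: population, overlap recombination, greedy moves.
import random
import networkx as nx
from networkx.algorithms import community

def recombine(G, part_a, part_b):
    """Keep vertices together only if both parent clusterings agree."""
    la = {v: i for i, b in enumerate(part_a) for v in b}
    lb = {v: i for i, b in enumerate(part_b) for v in b}
    blocks = {}
    for v in G:
        blocks.setdefault((la[v], lb[v]), set()).add(v)
    return list(blocks.values())

def local_search(G, partition, passes=2):
    """Greedily move single vertices to a neighbour's block if modularity improves."""
    part = [set(b) for b in partition]
    label = {v: i for i, b in enumerate(part) for v in b}
    for _ in range(passes):
        for v in G:
            cur = label[v]
            for u in G[v]:
                tgt = label[u]
                if tgt == cur:
                    continue
                before = community.modularity(G, [b for b in part if b])
                part[cur].discard(v); part[tgt].add(v)
                after = community.modularity(G, [b for b in part if b])
                if after > before:
                    cur = label[v] = tgt
                else:
                    part[tgt].discard(v); part[cur].add(v)
    return [b for b in part if b]

rng = random.Random(0)
G = nx.karate_club_graph()

def random_clustering(k=4):
    blocks = [set() for _ in range(k)]
    for v in G:
        blocks[rng.randrange(k)].add(v)
    return local_search(G, [b for b in blocks if b])

population = [random_clustering() for _ in range(4)]
for _ in range(10):                                   # memetic generations
    a, b = rng.sample(population, 2)
    child = local_search(G, recombine(G, a, b))
    worst = min(range(len(population)),
                key=lambda i: community.modularity(G, population[i]))
    if community.modularity(G, child) > community.modularity(G, population[worst]):
        population[worst] = child

best = max(population, key=lambda p: community.modularity(G, p))
print("best modularity:", round(community.modularity(G, best), 3))
```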

Effective Quantization Approaches for Recurrent Neural Networks

Title Effective Quantization Approaches for Recurrent Neural Networks
Authors Md Zahangir Alom, Adam T Moody, Naoya Maruyama, Brian C Van Essen, Tarek M. Taha
Abstract Deep learning, and in particular Recurrent Neural Networks (RNN), has shown superior accuracy in a large variety of tasks including machine translation, language understanding, and movie frame generation. However, these deep learning approaches are very expensive in terms of computation. In most cases, Graphics Processing Units (GPUs) are used for large-scale implementations. Meanwhile, energy-efficient RNN approaches have been proposed for deploying solutions on special-purpose hardware including Field-Programmable Gate Arrays (FPGAs) and mobile platforms. In this paper, we propose effective quantization approaches for RNN techniques including Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Convolutional Long Short-Term Memory (ConvLSTM). We have implemented different quantization methods including Binary Connect {-1, 1}, Ternary Connect {-1, 0, 1}, and Quaternary Connect {-1, -0.5, 0.5, 1}. The proposed approaches are evaluated on sentiment analysis with the IMDB dataset and on video frame prediction with the moving MNIST dataset. The experimental results are compared against the full-precision versions of LSTM, GRU, and ConvLSTM, and show promising results for both sentiment analysis and video frame prediction.
Tasks Machine Translation, Quantization, Sentiment Analysis
Published 2018-02-07
URL http://arxiv.org/abs/1802.02615v1
PDF http://arxiv.org/pdf/1802.02615v1.pdf
PWC https://paperswithcode.com/paper/effective-quantization-approaches-for
Repo
Framework
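
The three weight codebooks named in the abstract are straightforward to write down. The sketch below implements them as plain NumPy quantizers; the deterministic thresholds are illustrative assumptions, and the paper applies these codebooks inside LSTM/GRU/ConvLSTM training rather than as stand-alone functions.

```python
# Binary, Ternary and Quaternary Connect quantizers as simple NumPy functions.
import numpy as np

def binary_connect(w):
    """Quantize to {-1, 1}."""
    return np.where(w >= 0, 1.0, -1.0)

def ternary_connect(w, threshold=0.5):
    """Quantize to {-1, 0, 1}: small weights snap to zero."""
    q = np.zeros_like(w)
    q[w > threshold] = 1.0
    q[w < -threshold] = -1.0
    return q

def quaternary_connect(w, threshold=0.5):
    """Quantize to {-1, -0.5, 0.5, 1}."""
    q = np.where(w >= 0, 0.5, -0.5)
    q[w > threshold] = 1.0
    q[w < -threshold] = -1.0
    return q

w = np.array([-1.2, -0.3, 0.1, 0.4, 0.9])
print(binary_connect(w))      # [-1. -1.  1.  1.  1.]
print(ternary_connect(w))     # [-1.  0.  0.  0.  1.]
print(quaternary_connect(w))  # [-1.  -0.5  0.5  0.5  1. ]
```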

Online learning with feedback graphs and switching costs

Title Online learning with feedback graphs and switching costs
Authors Anshuka Rangi, Massimo Franceschetti
Abstract We study online learning when partial feedback information is provided following every action of the learning process, and the learner incurs switching costs for changing his actions. In this setting, the feedback information system can be represented by a graph, and previous works studied the expected regret of the learner in the case of a clique (Expert setup) or disconnected single loops (Multi-Armed Bandits (MAB)). This work provides a lower bound on the expected regret in the Partial Information (PI) setting, namely for general feedback graphs excluding the clique. Additionally, it shows that all algorithms that are optimal without switching costs are necessarily sub-optimal in the presence of switching costs, which motivates the need to design new algorithms. We propose two new algorithms: Threshold Based EXP3 and EXP3.SC. For the two special cases of the symmetric PI setting and MAB, the expected regret of both algorithms is order optimal in the duration of the learning process. Additionally, Threshold Based EXP3 is order optimal in the switching cost, whereas EXP3.SC is not. Finally, empirical evaluations show that Threshold Based EXP3 outperforms the previously proposed order-optimal algorithm EXP3 SET in the presence of switching costs, and Batch EXP3 in the MAB setting with switching costs.
Tasks Multi-Armed Bandits
Published 2018-10-23
URL https://arxiv.org/abs/1810.09666v2
PDF https://arxiv.org/pdf/1810.09666v2.pdf
PWC https://paperswithcode.com/paper/online-learning-with-feedback-graphs-and
Repo
Framework
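
For orientation, the sketch below shows a generic batched EXP3 learner in which the arm is re-drawn only at block boundaries, which caps the number of switches. This is in the spirit of the Batch EXP3 baseline mentioned in the abstract, not the authors' Threshold Based EXP3 or EXP3.SC; the exploration rate, block length, and reward model are assumptions.

```python
# Generic batched EXP3: re-draw the arm only every `block` rounds to limit switching.
import numpy as np

def batched_exp3(reward_fn, n_arms, horizon, gamma=0.1, block=10, seed=0):
    rng = np.random.default_rng(seed)
    weights = np.ones(n_arms)
    probs = np.full(n_arms, 1.0 / n_arms)
    arm, switches, total = None, 0, 0.0
    for t in range(horizon):
        if t % block == 0:                                # only switch at block boundaries
            probs = (1 - gamma) * weights / weights.sum() + gamma / n_arms
            new_arm = int(rng.choice(n_arms, p=probs))
            if arm is not None and new_arm != arm:
                switches += 1
            arm = new_arm
        reward = reward_fn(arm)                           # bandit feedback for the played arm
        total += reward
        # Importance-weighted exponential update of the played arm only.
        weights[arm] *= np.exp(gamma * reward / (probs[arm] * n_arms))
        weights /= weights.max()                          # keep weights numerically bounded
    return total, switches

# Toy usage: three Bernoulli arms; arm 2 is the best.
means = [0.2, 0.5, 0.8]
arm_rng = np.random.default_rng(1)
reward, switches = batched_exp3(lambda a: float(arm_rng.random() < means[a]),
                                n_arms=3, horizon=2000)
print(f"total reward {reward:.0f}, switches {switches}")
```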

A Bi-layered Parallel Training Architecture for Large-scale Convolutional Neural Networks

Title A Bi-layered Parallel Training Architecture for Large-scale Convolutional Neural Networks
Authors Jianguo Chen, Kenli Li, Kashif Bilal, Xu Zhou, Keqin Li, Philip S. Yu
Abstract Benefiting from large-scale training datasets and complex training networks, Convolutional Neural Networks (CNNs) are widely applied in various fields with high accuracy. However, the training process of CNNs is very time-consuming, where large amounts of training samples and iterative operations are required to obtain high-quality weight parameters. In this paper, we focus on the time-consuming training process of large-scale CNNs and propose a Bi-layered Parallel Training (BPT-CNN) architecture for distributed computing environments. BPT-CNN consists of two main components: (a) an outer-layer parallel training for multiple CNN subnetworks on separate data subsets, and (b) an inner-layer parallel training for each subnetwork. In the outer-layer parallelism, we address critical issues of distributed and parallel computing, including data communication, synchronization, and workload balance. A heterogeneity-aware Incremental Data Partitioning and Allocation (IDPA) strategy is proposed, where large-scale training datasets are partitioned and allocated to the computing nodes in batches according to their computing power. To minimize synchronization waiting during the global weight update process, an Asynchronous Global Weight Update (AGWU) strategy is proposed. In the inner-layer parallelism, we further accelerate the training process of each CNN subnetwork on each computer, where the computation steps of the convolutional layers and the local weight training are parallelized based on task-parallelism. We introduce task decomposition and scheduling strategies with the objectives of thread-level load balancing and minimum waiting time for critical paths. Extensive experimental results indicate that the proposed BPT-CNN effectively improves the training performance of CNNs while maintaining accuracy.
Tasks
Published 2018-10-17
URL http://arxiv.org/abs/1810.07742v1
PDF http://arxiv.org/pdf/1810.07742v1.pdf
PWC https://paperswithcode.com/paper/a-bi-layered-parallel-training-architecture
Repo
Framework
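
The heterogeneity-aware partitioning idea behind IDPA can be illustrated in a few lines: each incoming batch is split across nodes in proportion to their measured computing power. The function and node names below are assumptions, not the BPT-CNN implementation.

```python
# Minimal sketch of heterogeneity-aware batch partitioning across worker nodes.
def partition_batch(num_samples, compute_power):
    """Split `num_samples` across nodes proportionally to `compute_power`."""
    total = sum(compute_power.values())
    shares = {node: int(num_samples * p / total) for node, p in compute_power.items()}
    # Hand any rounding leftovers to the fastest node.
    leftover = num_samples - sum(shares.values())
    fastest = max(compute_power, key=compute_power.get)
    shares[fastest] += leftover
    return shares

# Three heterogeneous workers (relative throughput measured on a probe batch).
print(partition_batch(10_000, {"node-a": 1.0, "node-b": 2.0, "node-c": 4.0}))
# -> {'node-a': 1428, 'node-b': 2857, 'node-c': 5715}
```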

Graph Convolutional Neural Networks for Polymers Property Prediction

Title Graph Convolutional Neural Networks for Polymers Property Prediction
Authors Minggang Zeng, Jatin Nitin Kumar, Zeng Zeng, Ramasamy Savitha, Vijay Ramaseshan Chandrasekhar, Kedar Hippalgaonkar
Abstract A fast and accurate predictive tool for polymer properties is in high demand and will pave the way to iterative inverse design. In this work, we apply graph convolutional neural networks (GCNN) to predict the dielectric constant and energy bandgap of polymers. Using properties calculated with density functional theory (DFT) as the ground truth, GCNN can achieve remarkable agreement with the DFT results. Moreover, we show that GCNN outperforms other machine learning algorithms. Our work shows that GCNN relies only on morphological data of polymers, removing the requirement for complicated hand-crafted descriptors while still offering accurate and fast predictions.
Tasks
Published 2018-11-15
URL http://arxiv.org/abs/1811.06231v1
PDF http://arxiv.org/pdf/1811.06231v1.pdf
PWC https://paperswithcode.com/paper/graph-convolutional-neural-networks-for
Repo
Framework
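
One common way to realize the GCNN building block the abstract refers to is the normalized graph convolution H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), followed by pooling over atoms to obtain a polymer-level representation. The sketch below assumes this formulation and toy feature sizes; it is not the authors' architecture.

```python
# A single graph-convolution layer plus mean pooling, as an illustrative GCNN block.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],                            # a 3-atom toy "monomer" graph
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))                         # 4 input features per atom
W = rng.normal(size=(4, 8))                         # learnable weights

node_embeddings = gcn_layer(A, H, W)
graph_embedding = node_embeddings.mean(axis=0)      # pool atoms -> polymer representation
prediction = graph_embedding @ rng.normal(size=(8,))  # e.g. a regressed dielectric constant
```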

Combining Convolution and Recursive Neural Networks for Sentiment Analysis

Title Combining Convolution and Recursive Neural Networks for Sentiment Analysis
Authors Vinh D. Van, Thien Thai, Minh-Quoc Nghiem
Abstract This paper addresses the problem of sentence-level sentiment analysis. In recent years, Convolutional and Recursive Neural Networks have proven to be effective network architectures for sentence-level sentiment analysis. Nevertheless, each has its own potential drawbacks. To alleviate their weaknesses, we combined Convolutional and Recursive Neural Networks into a new network architecture. In addition, we employed transfer learning from a large document-level labeled sentiment dataset to improve the word embeddings in our models. The resulting models outperform all recent Convolutional and Recursive Neural Networks and achieve performance comparable to state-of-the-art systems on the Stanford Sentiment Treebank.
Tasks Sentiment Analysis, Transfer Learning
Published 2018-01-27
URL http://arxiv.org/abs/1801.09053v1
PDF http://arxiv.org/pdf/1801.09053v1.pdf
PWC https://paperswithcode.com/paper/combining-convolution-and-recursive-neural
Repo
Framework
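
A rough sketch of how a convolutional encoder and a recursive composition cell might be combined for sentence-level sentiment is given below. The exact fusion used in the paper is not reproduced: the layer sizes, the binary-tree input format, and the concatenation of the two feature vectors are assumptions.

```python
# Illustrative combination of a 1D convolution over word embeddings with a
# recursive composition cell applied over a binary parse tree (PyTorch).
import torch
import torch.nn as nn

class ConvRecursiveSentiment(nn.Module):
    def __init__(self, vocab=1000, emb=50, conv_ch=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, conv_ch, kernel_size=3, padding=1)
        self.compose = nn.Linear(2 * emb, emb)      # recursive cell: merge two children
        self.out = nn.Linear(conv_ch + emb, classes)

    def recurse(self, tree, embeddings):
        # Leaves are word positions; internal nodes are (left, right) pairs.
        if isinstance(tree, int):
            return embeddings[tree]
        left, right = tree
        l, r = self.recurse(left, embeddings), self.recurse(right, embeddings)
        return torch.tanh(self.compose(torch.cat([l, r], dim=-1)))

    def forward(self, token_ids, tree):
        e = self.emb(token_ids)                     # (seq, emb)
        conv_feat = torch.relu(self.conv(e.t().unsqueeze(0))).max(dim=2).values.squeeze(0)
        tree_feat = self.recurse(tree, e)           # root vector of the parse tree
        return self.out(torch.cat([conv_feat, tree_feat], dim=-1))

model = ConvRecursiveSentiment()
tokens = torch.tensor([4, 17, 9, 2])                # toy token ids
tree = ((0, 1), (2, 3))                             # toy binary parse
logits = model(tokens, tree)
```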

Effective Unsupervised Author Disambiguation with Relative Frequencies

Title Effective Unsupervised Author Disambiguation with Relative Frequencies
Authors Tobias Backes
Abstract This work addresses the problem of author name homonymy in the Web of Science. Aiming for an efficient, simple and straightforward solution, we introduce a novel probabilistic similarity measure for author name disambiguation based on feature overlap. Using the researcher-ID available for a subset of the Web of Science, we evaluate the application of this measure in the context of agglomeratively clustering author mentions. We focus on a concise evaluation that shows clearly for which problem setups and at which time during the clustering process our approach works best. In contrast to most other works in this field, we are sceptical towards the performance of author name disambiguation methods in general and compare our approach to the trivial single-cluster baseline. Our results are presented separately for each correct clustering size as we can explain that, when treating all cases together, the trivial baseline and more sophisticated approaches are hardly distinguishable in terms of evaluation results. Our model shows state-of-the-art performance for all correct clustering sizes without any discriminative training and with tuning only one convergence parameter.
Tasks
Published 2018-08-10
URL http://arxiv.org/abs/1808.04216v1
PDF http://arxiv.org/pdf/1808.04216v1.pdf
PWC https://paperswithcode.com/paper/effective-unsupervised-author-disambiguation
Repo
Framework
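
To make the clustering setup concrete, the sketch below pairs a simple feature-overlap similarity with agglomerative merging of author mentions. The paper's probabilistic relative-frequency measure is not reproduced; the Jaccard-style overlap and the merge threshold are assumptions.

```python
# Illustrative overlap similarity plus agglomerative clustering of author mentions.
def overlap(a, b):
    """Feature overlap between two mention clusters (sets of feature tokens)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def agglomerate(mentions, threshold=0.3):
    clusters = [set(feats) for feats in mentions]       # start: one cluster per mention
    while len(clusters) > 1:
        i, j = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: overlap(clusters[ij[0]], clusters[ij[1]]))
        if overlap(clusters[i], clusters[j]) < threshold:
            break                                       # stop once clusters are too dissimilar
        clusters[i] |= clusters[j]
        del clusters[j]
    return clusters

# Toy mentions of "J. Smith": features are coauthor names, venues and keywords.
mentions = [
    {"coauthor:lee", "venue:icml", "kw:bandits"},
    {"coauthor:lee", "venue:neurips", "kw:bandits"},
    {"coauthor:garcia", "venue:bioinformatics", "kw:proteomics"},
]
print(agglomerate(mentions))     # two clusters: the bandits mentions merge
```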

Euphrates: Algorithm-SoC Co-Design for Low-Power Mobile Continuous Vision

Title Euphrates: Algorithm-SoC Co-Design for Low-Power Mobile Continuous Vision
Authors Yuhao Zhu, Anand Samajdar, Matthew Mattina, Paul Whatmough
Abstract Continuous computer vision (CV) tasks increasingly rely on convolutional neural networks (CNNs). However, CNNs have massive compute demands that far exceed the performance and energy constraints of mobile devices. In this paper, we propose and develop an algorithm-architecture co-designed system, Euphrates, that simultaneously improves the energy efficiency and performance of continuous vision tasks. Our key observation is that changes in pixel data between consecutive frames represent visual motion. We first propose an algorithm that leverages this motion information to relax the number of expensive CNN inferences required by continuous vision applications. We then co-design a mobile System-on-a-Chip (SoC) architecture to maximize the efficiency of the new algorithm. The key to our architectural augmentation is to co-optimize the different SoC IP blocks in the vision pipeline collectively. Specifically, we propose to expose the motion data that is naturally generated by the Image Signal Processor (ISP) early in the vision pipeline to the CNN engine. Measurement and synthesis results show that Euphrates achieves up to 66% SoC-level energy savings (4 times for the vision computations) with only 1% accuracy loss.
Tasks
Published 2018-03-29
URL http://arxiv.org/abs/1803.11232v1
PDF http://arxiv.org/pdf/1803.11232v1.pdf
PWC https://paperswithcode.com/paper/euphrates-algorithm-soc-co-design-for-low
Repo
Framework
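
The core algorithmic idea, running the expensive CNN only every few frames and extrapolating bounding boxes with ISP motion vectors in between, can be sketched as below. The detector stub, the averaging of motion vectors inside the box, and the inference interval are assumptions; the hardware co-design is not modeled.

```python
# Sketch: full CNN detection every N frames, motion-vector extrapolation in between.
import numpy as np

def track(frames, motion_vectors, cnn_detect, n=4):
    """frames: list of images; motion_vectors[i]: (H, W, 2) ISP motion field for frame i."""
    boxes = []
    box = None
    for i, frame in enumerate(frames):
        if i % n == 0 or box is None:
            box = cnn_detect(frame)                      # expensive CNN inference
        else:
            x0, y0, x1, y1 = box
            region = motion_vectors[i][int(y0):int(y1), int(x0):int(x1)]
            dx, dy = region.reshape(-1, 2).mean(axis=0)  # average motion inside the box
            box = (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
        boxes.append(box)
    return boxes

# Toy usage with a fake detector and a uniform rightward motion field.
frames = [np.zeros((64, 64)) for _ in range(8)]
mv = [np.tile(np.array([2.0, 0.0]), (64, 64, 1)) for _ in frames]
detector = lambda f: (10.0, 10.0, 30.0, 30.0)
print(track(frames, mv, detector)[:3])
```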

Style Memory: Making a Classifier Network Generative

Title Style Memory: Making a Classifier Network Generative
Authors Rey Wiyatno, Jeff Orchard
Abstract Deep networks have shown great performance in classification tasks. However, the parameters learned by classifier networks usually discard stylistic information of the input in favour of information strictly relevant to classification. We introduce a network that has the capacity to do both classification and reconstruction by adding a “style memory” to the output layer of the network. We also show how to train such a neural network as a deep multi-layer autoencoder, jointly minimizing both classification and reconstruction losses. The generative capacity of our network demonstrates that the combination of style-memory neurons with the classifier neurons yields good reconstructions of the inputs when the classification is correct. We further investigate the nature of the style memory, and how it relates to composing digits and letters. Finally, we propose that this architecture enables the bidirectional flow of information used in predictive coding, and that such bidirectional networks can help mitigate being fooled by ambiguous or adversarial input.
Tasks
Published 2018-03-05
URL http://arxiv.org/abs/1803.01900v1
PDF http://arxiv.org/pdf/1803.01900v1.pdf
PWC https://paperswithcode.com/paper/style-memory-making-a-classifier-network
Repo
Framework
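
A minimal reading of the architecture is sketched below: the top layer is split into class logits and a style vector, a decoder reconstructs the input from both, and training minimizes the sum of a classification and a reconstruction loss. Layer sizes and loss weighting are assumptions, not the authors' exact network.

```python
# Sketch of a classifier with a "style memory": joint classification + reconstruction.
import torch
import torch.nn as nn

class StyleMemoryNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes=10, style=32):
        super().__init__()
        self.classes = classes
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, classes + style))
        self.decoder = nn.Sequential(nn.Linear(classes + style, hidden), nn.ReLU(),
                                     nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        top = self.encoder(x)
        logits, style = top[:, :self.classes], top[:, self.classes:]
        recon = self.decoder(torch.cat([torch.softmax(logits, dim=1), style], dim=1))
        return logits, recon

model = StyleMemoryNet()
x = torch.rand(16, 784)                       # a dummy batch of flattened digit images
labels = torch.randint(0, 10, (16,))
logits, recon = model(x)
loss = nn.functional.cross_entropy(logits, labels) + nn.functional.mse_loss(recon, x)
loss.backward()
```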

Towards Visible and Thermal Drone Monitoring with Convolutional Neural Networks

Title Towards Visible and Thermal Drone Monitoring with Convolutional Neural Networks
Authors Ye Wang, Yueru Chen, Jongmoo Choi, C. -C. Jay Kuo
Abstract This paper reports a visible and thermal drone monitoring system that integrates deep-learning-based detection and tracking modules. The biggest challenge in adopting deep learning methods for drone detection is the paucity of training drone images, especially thermal drone images. To address this issue, we develop two data augmentation techniques. One is a model-based drone augmentation technique that automatically generates visible drone images with a bounding-box label on the drone’s location. The other exploits an adversarial data augmentation methodology to create thermal drone images. To track a small flying drone, we utilize the residual information between consecutive image frames. Finally, we present an integrated detection and tracking system that outperforms each individual detection or tracking module. The experiments show that, even when trained on synthetic data, the proposed system performs well on real-world drone images with complex backgrounds. The USC drone detection and tracking dataset, with user-labeled bounding boxes, is available to the public.
Tasks Data Augmentation
Published 2018-12-19
URL http://arxiv.org/abs/1812.08333v1
PDF http://arxiv.org/pdf/1812.08333v1.pdf
PWC https://paperswithcode.com/paper/towards-visible-and-thermal-drone-monitoring
Repo
Framework
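
The model-based augmentation idea can be illustrated with a simple paste operation: composite a drone crop onto a background image at a random location and emit the matching bounding-box label. The sketch below makes that concrete; the adversarial generation of thermal images is not reproduced, and the silhouette-mask heuristic is an assumption.

```python
# Toy model-based augmentation: paste a drone crop onto a background with a bbox label.
import numpy as np

def paste_drone(background, drone, rng):
    """Return (augmented image, (x, y, w, h) bounding box) for one synthetic sample."""
    H, W = background.shape[:2]
    h, w = drone.shape[:2]
    x = rng.integers(0, W - w)
    y = rng.integers(0, H - h)
    out = background.copy()
    mask = drone > 0                           # treat non-zero pixels as the drone silhouette
    out[y:y + h, x:x + w][mask] = drone[mask]
    return out, (int(x), int(y), w, h)

rng = np.random.default_rng(0)
background = rng.integers(0, 255, size=(240, 320), dtype=np.uint8)   # fake scene
drone = np.zeros((20, 30), dtype=np.uint8)
drone[5:15, 5:25] = 200                                              # fake drone blob

image, bbox = paste_drone(background, drone, rng)
print("bounding box (x, y, w, h):", bbox)
```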

TBI Contusion Segmentation from MRI using Convolutional Neural Networks

Title TBI Contusion Segmentation from MRI using Convolutional Neural Networks
Authors Snehashis Roy, John A. Butman, Leighton Chan, Dzung L. Pham
Abstract Traumatic brain injury (TBI) is caused by a sudden trauma to the head that may result in hematomas and contusions and can lead to stroke or chronic disability. An accurate quantification of lesion volumes and their locations is essential to understand the pathophysiology of TBI and its progression. In this paper, we propose a fully convolutional neural network (CNN) model to segment contusions and lesions from brain magnetic resonance (MR) images of patients with TBI. The CNN architecture proposed here is based on a state-of-the-art CNN architecture from Google called Inception. Using a 3-layer Inception network, lesions are segmented from multi-contrast MR images. When compared with two recent TBI lesion segmentation methods, one based on CNNs (DeepMedic) and another based on random forests, the proposed algorithm showed improved segmentation accuracy on images of 18 patients with mild to severe TBI. Using leave-one-out cross-validation, the proposed model achieved a median Dice of 0.75, which was significantly better (p<0.01) than the two competing methods.
Tasks Lesion Segmentation
Published 2018-07-27
URL http://arxiv.org/abs/1807.10839v1
PDF http://arxiv.org/pdf/1807.10839v1.pdf
PWC https://paperswithcode.com/paper/tbi-contusion-segmentation-from-mri-using
Repo
Framework
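
For reference, the evaluation metric quoted above (median Dice of 0.75) is the Dice overlap between a predicted and a reference lesion mask, computed as in the sketch below; the segmentation network itself is not sketched here.

```python
# Dice coefficient between two binary lesion masks.
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

truth = np.zeros((64, 64))
truth[20:40, 20:40] = 1            # reference contusion mask
pred = np.zeros((64, 64))
pred[25:45, 25:45] = 1             # an imperfect prediction
print(round(dice(pred, truth), 3)) # -> 0.562
```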

The Role of Normware in Trustworthy and Explainable AI

Title The Role of Normware in Trustworthy and Explainable AI
Authors Giovanni Sileno, Alexander Boer, Tom van Engers
Abstract Being potentially destructive, in practice incomprehensible, and for the most part unintelligible, contemporary technology is setting high challenges for our society. New conception methods are urgently required. Reorganizing ideas and discussions presented in AI and related fields, this position paper aims to highlight the importance of normware, that is, computational artifacts specifying norms, with respect to these issues, and argues for its irreducibility to software by making explicit its neglected ecological dimension in the decision-making cycle.
Tasks Decision Making
Published 2018-12-06
URL http://arxiv.org/abs/1812.02471v1
PDF http://arxiv.org/pdf/1812.02471v1.pdf
PWC https://paperswithcode.com/paper/the-role-of-normware-in-trustworthy-and
Repo
Framework