October 16, 2019

2906 words 14 mins read

Paper Group ANR 1009

A novel extension of Generalized Low-Rank Approximation of Matrices based on multiple-pairs of transformations. Sentiment Analysis of Code-Mixed Indian Languages: An Overview of SAIL_Code-Mixed Shared Task @ICON-2017. Reservoir Computing Hardware with Cellular Automata. Visual Attention Network for Low Dose CT. HONE: Higher-Order Network Embeddings …

A novel extension of Generalized Low-Rank Approximation of Matrices based on multiple-pairs of transformations

Title A novel extension of Generalized Low-Rank Approximation of Matrices based on multiple-pairs of transformations
Authors Soheil Ahmadi, Mansoor Rezghi
Abstract Dimensionality reduction is a main step in the learning process and plays an essential role in many applications. The most popular methods in this field, such as SVD, PCA, and LDA, can only be applied to data in vector format. This means that higher-order data such as matrices, or more generally tensors, must first be folded into vectors. In this approach the spatial relations among features are not considered, and the probability of over-fitting increases. Due to these issues, methods such as Generalized Low-Rank Approximation of Matrices (GLRAM) and Multilinear PCA (MPCA) have recently been proposed that deal with the data in its own format. These methods preserve the spatial relationships among features, reduce the probability of overfitting, and have lower time and space complexity than vector-based ones. However, because of the smaller number of parameters, the search space of a multilinear approach is much smaller than that of a vector-based approach. To overcome this drawback of multilinear methods such as GLRAM, we propose a new method that generalizes GLRAM and, while preserving its merits, has a larger search space. Experimental results confirm the quality of the proposed method. Applying this approach to other multilinear dimensionality reduction methods such as MPCA and MLDA is also straightforward.
Tasks Dimensionality Reduction
Published 2018-08-31
URL http://arxiv.org/abs/1808.10632v3
PDF http://arxiv.org/pdf/1808.10632v3.pdf
PWC https://paperswithcode.com/paper/a-novel-extension-of-generalized-low-rank
Repo
Framework
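
For readers unfamiliar with the baseline this paper builds on, here is a minimal sketch of classical GLRAM with a single pair of transformations (L, R), updated alternately. It is for intuition only and does not reproduce the multi-pair extension proposed in the paper.

```python
# Minimal sketch of classical single-pair GLRAM (alternating updates), for
# intuition only; the paper above extends this with multiple transformation
# pairs, which is NOT reproduced here.
import numpy as np

def glram(As, r1, r2, n_iter=20, seed=0):
    """As: list of (m, n) matrices; returns L (m, r1), R (n, r2), cores M_i."""
    m, n = As[0].shape
    rng = np.random.default_rng(seed)
    R, _ = np.linalg.qr(rng.standard_normal((n, r2)))   # random orthonormal init
    for _ in range(n_iter):
        # Fix R, update L from the top eigenvectors of sum_i A_i R R^T A_i^T
        SL = sum(A @ R @ R.T @ A.T for A in As)
        _, L = np.linalg.eigh(SL)
        L = L[:, -r1:]                                   # leading r1 eigenvectors
        # Fix L, update R from the top eigenvectors of sum_i A_i^T L L^T A_i
        SR = sum(A.T @ L @ L.T @ A for A in As)
        _, R = np.linalg.eigh(SR)
        R = R[:, -r2:]
    Ms = [L.T @ A @ R for A in As]                       # reduced core matrices
    return L, R, Ms

# Example: compress 50 random 32x24 "images" to 5x4 cores.
As = [np.random.rand(32, 24) for _ in range(50)]
L, R, Ms = glram(As, r1=5, r2=4)
print(Ms[0].shape)  # (5, 4)
```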

Sentiment Analysis of Code-Mixed Indian Languages: An Overview of SAIL_Code-Mixed Shared Task @ICON-2017

Title Sentiment Analysis of Code-Mixed Indian Languages: An Overview of SAIL_Code-Mixed Shared Task @ICON-2017
Authors Braja Gopal Patra, Dipankar Das, Amitava Das
Abstract Sentiment analysis is essential in many real-world applications such as stance detection, review analysis, and recommendation systems. Sentiment analysis becomes more difficult when the data is noisy and collected from social media. India is a multilingual country; people use more than one language to communicate among themselves. Switching between languages is called code-switching or code-mixing, depending upon the type of mixing. This paper presents an overview of the shared task on sentiment analysis of code-mixed data pairs of Hindi-English and Bengali-English collected from different social media platforms. The paper describes the task, dataset, evaluation, baseline and participants' systems.
Tasks Sentiment Analysis, Stance Detection
Published 2018-03-18
URL http://arxiv.org/abs/1803.06745v1
PDF http://arxiv.org/pdf/1803.06745v1.pdf
PWC https://paperswithcode.com/paper/sentiment-analysis-of-code-mixed-indian
Repo
Framework
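
The abstract mentions a baseline system but does not describe it, so the snippet below is only a generic character n-gram + logistic regression baseline of the kind commonly used for code-mixed sentiment; the texts and labels are hypothetical, not the shared-task data.

```python
# A generic code-mixed sentiment baseline (character n-grams + logistic
# regression). This is NOT the official shared-task baseline, whose details
# are not given in the abstract; texts/labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["movie bahut accha tha, loved it", "ki baje khela, very boring"]
labels = ["positive", "negative"]

# Character n-grams are robust to the spelling variation typical of
# romanized Hindi-English / Bengali-English code-mixed text.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["kya mast movie hai"]))
```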

Reservoir Computing Hardware with Cellular Automata

Title Reservoir Computing Hardware with Cellular Automata
Authors Alejandro Morán, Christiam F. Frasser, Josep L. Rosselló
Abstract Elementary cellular automata (ECA) are a widely studied one-dimensional processing methodology in which successive iterations of the automaton can give rise to rich pattern dynamics. Recently, cellular automata have been proposed as a feasible way to implement Reservoir Computing (RC) systems in which the automaton rule is fixed and the training is performed using linear regression. In this work we perform an exhaustive study of the performance of the different ECA rules when applied to pattern recognition of time-independent input signals using an RC scheme. Once the different ECA rules have been tested, the most accurate one (rule 90) is selected to implement a digital circuit. Rule 90 is easily reproduced using a reduced set of XOR gates and shift registers, thus representing a high-performance alternative for RC hardware implementation in terms of processing time, circuit area, power dissipation and system accuracy. The model (both in software and in its hardware implementation) has been tested on a pattern recognition task of handwritten digits (the MNIST database), for which we obtained competitive results in terms of accuracy, speed and power dissipation. The proposed model can be considered a low-cost method for implementing fast pattern recognition digital circuits.
Tasks
Published 2018-06-13
URL http://arxiv.org/abs/1806.04932v2
PDF http://arxiv.org/pdf/1806.04932v2.pdf
PWC https://paperswithcode.com/paper/reservoir-computing-hardware-with-cellular
Repo
Framework
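
The rule-90 reservoir idea is easy to illustrate in software: the binary input is iterated with rule 90 (each cell becomes the XOR of its two neighbours), the concatenated states form the feature vector, and only a linear readout is trained. The sketch below assumes periodic boundaries and an arbitrary number of iterations; it is not the paper's hardware design.

```python
# Minimal software sketch of an ECA rule-90 reservoir. Input encoding, the
# number of iterations and the periodic boundary are assumptions, not the
# paper's exact setup.
import numpy as np

def rule90_reservoir(x_bits, n_steps=8):
    """x_bits: 1-D binary array. Returns concatenated CA states as features."""
    state = x_bits.astype(np.uint8)
    states = [state]
    for _ in range(n_steps):
        state = np.roll(state, 1) ^ np.roll(state, -1)   # rule 90: left XOR right
        states.append(state)
    return np.concatenate(states)

# Toy usage: random binary inputs, linear readout fitted by least squares.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 64))
y = X.sum(axis=1) % 2                                    # a parity-style toy target
F = np.stack([rule90_reservoir(x) for x in X]).astype(float)
w, *_ = np.linalg.lstsq(F, y.astype(float), rcond=None)
print(((F @ w) > 0.5).astype(int)[:5], y[:5])
```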

Visual Attention Network for Low Dose CT

Title Visual Attention Network for Low Dose CT
Authors Wenchao Du, Hu Chen, Peixi Liao, Hongyu Yang, Ge Wang, Yi Zhang
Abstract Noise and artifacts are intrinsic to low dose CT (LDCT) data acquisition and significantly affect the imaging performance. Perfect noise removal and image restoration are intractable in the context of LDCT due to statistical and technical uncertainties. In this paper, we apply the generative adversarial network (GAN) framework with a visual attention mechanism to deal with this problem in a data-driven/machine learning fashion. Our main idea is to inject visual attention knowledge into the learning process of the GAN to provide a powerful prior on the noise distribution. By doing this, both the generator and discriminator networks are empowered with visual attention information so that they not only pay special attention to noisy regions and surrounding structures but also explicitly assess the local consistency of the recovered regions. Our experiments qualitatively and quantitatively demonstrate the effectiveness of the proposed method on clinical CT images.
Tasks Image Restoration
Published 2018-10-31
URL https://arxiv.org/abs/1810.13059v2
PDF https://arxiv.org/pdf/1810.13059v2.pdf
PWC https://paperswithcode.com/paper/visual-attention-network-for-low-dose-ct
Repo
Framework
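
The abstract does not specify the attention architecture, so the block below only sketches the general mechanism it alludes to: a spatial attention map that re-weights generator features so that noisy regions receive emphasis. It is illustrative, not the authors' network.

```python
# Sketch of a generic spatial attention gate of the kind the abstract
# describes (an attention map re-weighting features). The paper's exact
# architecture is not given in the abstract; this block is illustrative only.
import torch
import torch.nn as nn

class SpatialAttentionGate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # 1x1 conv collapses channels to a single attention map in [0, 1].
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, feat):
        a = self.attn(feat)          # (B, 1, H, W) attention map
        return feat * a, a           # re-weighted features + the map itself

# Usage on a dummy LDCT feature map.
gate = SpatialAttentionGate(channels=32)
feats = torch.randn(2, 32, 64, 64)
weighted, attn_map = gate(feats)
print(weighted.shape, attn_map.shape)
```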

HONE: Higher-Order Network Embeddings

Title HONE: Higher-Order Network Embeddings
Authors Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, Sungchul Kim, Anup Rao, Yasin Abbasi Yadkori
Abstract This paper describes a general framework for learning Higher-Order Network Embeddings (HONE) from graph data based on network motifs. The HONE framework is highly expressive and flexible with many interchangeable components. The experimental results demonstrate the effectiveness of learning higher-order network representations. In all cases, HONE outperforms recent embedding methods that are unable to capture higher-order structures, with a mean relative gain in AUC of 19% (and up to a 75% gain) across a wide variety of networks and embedding methods.
Tasks
Published 2018-01-28
URL http://arxiv.org/abs/1801.09303v2
PDF http://arxiv.org/pdf/1801.09303v2.pdf
PWC https://paperswithcode.com/paper/hone-higher-order-network-embeddings
Repo
Framework
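
The core idea of motif-based embeddings can be sketched with a single hand-picked motif: weight each edge by the number of triangles it participates in, then factorize that motif adjacency matrix. The actual HONE framework supports many motifs and interchangeable components; the snippet below is only one illustrative instance, not the authors' implementation.

```python
# Rough sketch of a motif-weighted embedding: edges weighted by triangle
# counts, followed by a spectral factorization. One hand-picked instance of
# the general idea, not the HONE framework itself.
import numpy as np

def triangle_motif_embedding(A, dim=8):
    """A: (n, n) symmetric 0/1 adjacency matrix. Returns (n, dim) embeddings."""
    # (A @ A)[i, j] counts common neighbours; masking by A keeps only edges,
    # so W[i, j] = number of triangles the edge (i, j) closes.
    W = A * (A @ A)
    U, S, _ = np.linalg.svd(W)
    return U[:, :dim] * np.sqrt(S[:dim])                 # scaled spectral embedding

# Toy graph: two cliques joined by a single bridge edge.
n = 10
A = np.zeros((n, n))
A[:5, :5] = 1; A[5:, 5:] = 1; A[4, 5] = A[5, 4] = 1
np.fill_diagonal(A, 0)
print(triangle_motif_embedding(A, dim=2).round(2))
```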

Discovering the Elite Hypervolume by Leveraging Interspecies Correlation

Title Discovering the Elite Hypervolume by Leveraging Interspecies Correlation
Authors Vassilis Vassiliades, Jean-Baptiste Mouret
Abstract Evolution has produced an astonishing diversity of species, each filling a different niche. Algorithms like MAP-Elites mimic this divergent evolutionary process to find a set of behaviorally diverse but high-performing solutions, called the elites. Our key insight is that species in nature often share a surprisingly large part of their genome, in spite of occupying very different niches; similarly, the elites are likely to be concentrated in a specific “elite hypervolume” whose shape is defined by their common features. In this paper, we first introduce the elite hypervolume concept and propose two metrics to characterize it: the genotypic spread and the genotypic similarity. We then introduce a new variation operator, called “directional variation”, that exploits interspecies (or inter-elites) correlations to accelerate the MAP-Elites algorithm. We demonstrate the effectiveness of this operator in three problems (a toy function, a redundant robotic arm, and a hexapod robot).
Tasks
Published 2018-04-11
URL http://arxiv.org/abs/1804.03906v1
PDF http://arxiv.org/pdf/1804.03906v1.pdf
PWC https://paperswithcode.com/paper/discovering-the-elite-hypervolume-by
Repo
Framework
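
A minimal sketch of the "directional variation" operator described above: an offspring is produced from two randomly chosen elites by combining an isotropic Gaussian perturbation with a perturbation along the line joining the two elites, biasing variation toward the elite hypervolume. The sigma values below are illustrative, not the paper's settings.

```python
# Sketch of directional variation between two elites; sigma values are
# illustrative assumptions, not the published hyperparameters.
import numpy as np

def directional_variation(x, y, sigma_iso=0.01, sigma_line=0.2, rng=None):
    """x, y: two elite genotypes (1-D arrays). Returns one offspring."""
    rng = rng or np.random.default_rng()
    iso = sigma_iso * rng.standard_normal(x.shape)        # isotropic exploration
    line = sigma_line * rng.standard_normal() * (y - x)   # move along the elite-elite direction
    return x + iso + line

# Usage: vary one elite toward another in a 6-D genotype space.
rng = np.random.default_rng(0)
x, y = rng.random(6), rng.random(6)
print(directional_variation(x, y, rng=rng))
```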

Cryo-CARE: Content-Aware Image Restoration for Cryo-Transmission Electron Microscopy Data

Title Cryo-CARE: Content-Aware Image Restoration for Cryo-Transmission Electron Microscopy Data
Authors Tim-Oliver Buchholz, Mareike Jordan, Gaia Pigino, Florian Jug
Abstract Multiple approaches to use deep learning for image restoration have recently been proposed. Training such approaches requires well-registered pairs of high- and low-quality images. While this is easily achievable for many imaging modalities, e.g. fluorescence light microscopy, for others it is not. Cryo-transmission electron microscopy (cryo-TEM) could profoundly benefit from improved denoising methods; unfortunately, it is one of the latter. Here we show how recent advances in network training for image restoration tasks, i.e. denoising, can be applied to cryo-TEM data. We describe our proposed method and show how it can be applied to single cryo-TEM projections and whole cryo-tomographic image volumes. Our proposed restoration method dramatically increases contrast in cryo-TEM images, which improves the interpretability of the acquired data. Furthermore, we show that automated downstream processing on restored image data, demonstrated on a dense segmentation task, leads to improved results.
Tasks Denoising, Image Restoration
Published 2018-10-12
URL http://arxiv.org/abs/1810.05420v2
PDF http://arxiv.org/pdf/1810.05420v2.pdf
PWC https://paperswithcode.com/paper/cryo-care-content-aware-image-restoration-for
Repo
Framework
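
When no clean ground truth exists, a common workaround in this line of work is Noise2Noise-style training on two independently noisy observations of the same field of view. The sketch below shows that training step with a trivial placeholder network; whether this matches the paper's exact pipeline is an assumption, and the data is dummy.

```python
# Sketch of a Noise2Noise-style denoising training step: two independently
# noisy observations of the same field of view serve as input and target.
# The placeholder network and the even/odd-frame pairing are assumptions in
# the spirit of this line of work, not a verbatim cryo-CARE description.
import torch
import torch.nn as nn

net = nn.Sequential(                        # placeholder for a real U-Net
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

noisy_a = torch.randn(4, 1, 64, 64)         # e.g. sum of even frames (dummy data)
noisy_b = torch.randn(4, 1, 64, 64)         # e.g. sum of odd frames (dummy data)

opt.zero_grad()
loss = nn.functional.mse_loss(net(noisy_a), noisy_b)   # predict one noisy half from the other
loss.backward()
opt.step()
print(float(loss))
```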

Differential and integral invariants under Mobius transformation

Title Differential and integral invariants under Mobius transformation
Authors He Zhang, Hanlin Mo, You Hao, Qi Li, Hua Li
Abstract One of the most challenging problems in the domain of 2-D images and 3-D shapes is handling non-rigid deformation. From the perspective of transformation groups, the conformal transformations are a key part of the diffeomorphisms. According to the Liouville theorem, an important part of the conformal group is the Mobius transformations, so we focus on Mobius transformations and propose two differential expressions that are invariant under 2-D and 3-D Mobius transformations respectively. Next, we analyze the absoluteness and relativity of invariance of these expressions and their components. After that, we propose integral invariants under Mobius transformations based on the two differential expressions. Finally, we propose a conjecture about the structure of differential invariants under conformal transformations, based on our observation of the composition of the above two differential invariants.
Tasks
Published 2018-08-30
URL http://arxiv.org/abs/1808.10083v1
PDF http://arxiv.org/pdf/1808.10083v1.pdf
PWC https://paperswithcode.com/paper/differential-and-integral-invariants-under
Repo
Framework
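
For reference, the 2-D Mobius transformations discussed above are the linear fractional maps of the extended complex plane shown below, together with the classical cross-ratio they leave unchanged; the paper's own differential and integral invariants are not reproduced here.

```latex
% 2-D Mobius (linear fractional) transformations act on the extended complex
% plane; the cross-ratio of four points is a classical Mobius invariant.
\[
  f(z) \;=\; \frac{a z + b}{c z + d}, \qquad a,b,c,d \in \mathbb{C},\; ad - bc \neq 0,
\]
\[
  (z_1, z_2; z_3, z_4) \;=\; \frac{(z_1 - z_3)(z_2 - z_4)}{(z_1 - z_4)(z_2 - z_3)}
  \;=\; \bigl(f(z_1), f(z_2); f(z_3), f(z_4)\bigr).
\]
```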

Signal Alignment for Humanoid Skeletons via the Globally Optimal Reparameterization Algorithm

Title Signal Alignment for Humanoid Skeletons via the Globally Optimal Reparameterization Algorithm
Authors Thomas W. Mitchel, Sipu Ruan, Gregory S. Chirikjian
Abstract The general ability to analyze and classify the 3D kinematics of the human form is an essential step in the development of socially adept humanoid robots. A variety of different types of signals can be used by machines to represent and characterize actions such as RGB videos, infrared maps, and optical flow. In particular, skeleton sequences provide a natural 3D kinematic description of human motions and can be acquired in real time using RGB+D cameras. Moreover, skeleton sequences are generalizable to characterize the motions of both humans and humanoid robots. The Globally Optimal Reparameterization Algorithm (GORA) is a novel, recently proposed algorithm for signal alignment in which signals are reparameterized to a globally optimal universal standard timescale (UST). Here, we introduce a variant of GORA for humanoid action recognition with skeleton sequences, which we call GORA-S. We briefly review the algorithm’s mathematical foundations and contextualize them in the problem of action recognition with skeleton sequences. Subsequently, we introduce GORA-S and discuss parameters and numerical techniques for its effective implementation. We then compare its performance with that of the DTW and FastDTW algorithms, in terms of computational efficiency and accuracy in matching skeletons. Our results show that GORA-S attains a complexity that is significantly less than that of any tested DTW method. In addition, it displays a favorable balance between speed and accuracy that remains invariant under changes in skeleton sampling frequency, lending it a degree of versatility that could make it well-suited for a variety of action recognition tasks.
Tasks Optical Flow Estimation, Temporal Action Localization
Published 2018-07-18
URL http://arxiv.org/abs/1807.07432v2
PDF http://arxiv.org/pdf/1807.07432v2.pdf
PWC https://paperswithcode.com/paper/signal-alignment-for-humanoid-skeletons-via
Repo
Framework
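
GORA itself cannot be reproduced from the abstract, so the sketch below only shows the standard dynamic time warping (DTW) baseline it is compared against, applied to skeleton sequences represented as per-frame joint-coordinate vectors; the sequences are dummy data.

```python
# Standard DTW, the baseline GORA-S is compared against above. GORA
# (reparameterization to a universal standard timescale) is not shown here.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """seq_a: (Ta, d), seq_b: (Tb, d) skeleton sequences. Returns the DTW cost."""
    Ta, Tb = len(seq_a), len(seq_b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])    # per-frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb]

# Two dummy skeleton sequences (20 and 25 frames, 15 joints x 3 coordinates).
rng = np.random.default_rng(0)
a, b = rng.random((20, 45)), rng.random((25, 45))
print(dtw_distance(a, b))
```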

Decision-making processes in the Cognitive Theory of True Conditions

Title Decision-making processes in the Cognitive Theory of True Conditions
Authors Sergio Miguel-Tomé
Abstract The Cognitive Theory of True Conditions (CTTC) is a proposal for designing the implementation of cognitive abilities and for describing the model-theoretic semantics of symbolic cognitive architectures. The CTTC is formulated mathematically using multi-optional many-sorted past-present-future (MMPPF) structures. This article discusses how decision-making processes are described in the CTTC.
Tasks Decision Making
Published 2018-03-06
URL http://arxiv.org/abs/1803.02476v1
PDF http://arxiv.org/pdf/1803.02476v1.pdf
PWC https://paperswithcode.com/paper/decision-making-processes-in-the-cognitive
Repo
Framework

Prostate Segmentation using 2D Bridged U-net

Title Prostate Segmentation using 2D Bridged U-net
Authors Wanli Chen, Yue Zhang, Junjun He, Yu Qiao, Yifan Chen, Hongjian Shi, Xiaoying Tang
Abstract In this paper, we focus on three problems in deep learning based medical image segmentation. Firstly, U-net, a popular model for medical image segmentation, becomes difficult to train as the number of convolutional layers increases, even though a deeper network usually has better generalization ability because of its larger number of learnable parameters. Secondly, the exponential linear unit (ELU), as an alternative to ReLU, makes little difference once the network of interest gets deep. Thirdly, the Dice loss, one of the pervasive loss functions for medical image segmentation, becomes ineffective when the prediction is close to the ground truth and causes oscillation during training. To address these three problems, we propose and validate a deeper network that can fit the small sample sizes typical of medical image datasets. Meanwhile, we propose a new loss function to accelerate the learning process and a combination of different activation functions to improve the network performance. Our experimental results suggest that our network is comparable or superior to state-of-the-art methods.
Tasks Medical Image Segmentation, Semantic Segmentation
Published 2018-07-12
URL http://arxiv.org/abs/1807.04459v2
PDF http://arxiv.org/pdf/1807.04459v2.pdf
PWC https://paperswithcode.com/paper/prostate-segmentation-using-2d-bridged-u-net
Repo
Framework
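
For context, the snippet below shows the standard soft Dice loss the abstract criticizes: once predictions are close to the ground truth its gradients become small and unstable, which is the oscillation problem the paper targets. The authors' replacement loss is not given in the abstract and is not reproduced here.

```python
# Standard soft Dice loss (the loss criticized above); the paper's new loss
# is not specified in the abstract and is not shown here.
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """pred: probabilities in [0, 1], target: binary mask, both (B, 1, H, W)."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * inter + eps) / (union + eps)
    return 1 - dice.mean()

pred = torch.rand(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(float(soft_dice_loss(pred, target)))
```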

Efficient Tree Solver for Hines Matrices on the GPU

Title Efficient Tree Solver for Hines Matrices on the GPU
Authors Felix Huber
Abstract The human brain consists of a large number of interconnected neurons communicating via the exchange of electrical spikes. Simulations play an important role in better understanding electrical activity in the brain and offer a way to compare measured data with simulated data, so that experimental data can be interpreted better. A key component in such simulations is an efficient solver for the Hines matrices used in computing inter-neuron signal propagation. In order to achieve high-performance simulations, it is crucial to have an efficient solver algorithm. In this report we explain a new parallel GPU solver for these matrices which offers fine-grained parallelization and allows for work balancing during the simulation setup.
Tasks
Published 2018-10-30
URL http://arxiv.org/abs/1810.12742v2
PDF http://arxiv.org/pdf/1810.12742v2.pdf
PWC https://paperswithcode.com/paper/efficient-tree-solver-for-hines-matrices-on
Repo
Framework
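
A Hines matrix is a tree-structured symmetric matrix (nonzeros only on the diagonal and at each node's coupling with its parent), and the classical serial solve is a Thomas-like elimination from the leaves to the root followed by back-substitution. The sketch below shows that serial baseline only; the paper's contribution, a fine-grained GPU parallelization of this solve, is not shown.

```python
# Serial sketch of the classical Hines elimination for a tree-structured
# matrix with parent(i) < i. The GPU parallelization described in the paper
# is not reproduced here.
import numpy as np

def hines_solve(d, u, l, parent, b):
    """d: diagonal, b: right-hand side (length n);
    u[i] = A[parent[i], i], l[i] = A[i, parent[i]] for i >= 1; parent[0] = -1."""
    d, b = d.copy(), b.copy()
    n = len(d)
    for i in range(n - 1, 0, -1):            # eliminate from the leaves toward the root
        p = parent[i]
        f = u[i] / d[i]
        d[p] -= f * l[i]
        b[p] -= f * b[i]
    x = np.zeros(n)
    x[0] = b[0] / d[0]
    for i in range(1, n):                    # back-substitute from the root outward
        x[i] = (b[i] - l[i] * x[parent[i]]) / d[i]
    return x

# Tiny 4-node tree: node 0 is the root, node 1 its child, nodes 2 and 3 branch off node 1.
parent = np.array([-1, 0, 1, 1])
d = np.array([4.0, 4.0, 4.0, 4.0])
u = np.array([0.0, -1.0, -1.0, -1.0])        # couplings from the parent row
l = np.array([0.0, -1.0, -1.0, -1.0])        # couplings to the parent column
b = np.array([1.0, 2.0, 3.0, 4.0])
print(hines_solve(d, u, l, parent, b))
```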

Training Deeper Neural Machine Translation Models with Transparent Attention

Title Training Deeper Neural Machine Translation Models with Transparent Attention
Authors Ankur Bapna, Mia Xu Chen, Orhan Firat, Yuan Cao, Yonghui Wu
Abstract While current state-of-the-art NMT models, such as RNN seq2seq and Transformers, possess a large number of parameters, they are still shallow in comparison to convolutional models used for both text and vision applications. In this work we attempt to train significantly (2-3x) deeper Transformer and Bi-RNN encoders for machine translation. We propose a simple modification to the attention mechanism that eases the optimization of deeper models, and results in consistent gains of 0.7-1.1 BLEU on the benchmark WMT’14 English-German and WMT’15 Czech-English tasks for both architectures.
Tasks Machine Translation
Published 2018-08-22
URL http://arxiv.org/abs/1808.07561v2
PDF http://arxiv.org/pdf/1808.07561v2.pdf
PWC https://paperswithcode.com/paper/training-deeper-neural-machine-translation
Repo
Framework
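
The transparent attention modification can be sketched as follows: rather than attending only to the top encoder layer, each decoder layer attends to a learned softmax-weighted combination of all encoder layer outputs, which shortens gradient paths and eases the training of deeper encoders. Shapes and details below are illustrative, not the exact published implementation.

```python
# Sketch of transparent attention: per-decoder-layer softmax weights over all
# encoder layer outputs (including the embeddings). Illustrative only.
import torch
import torch.nn as nn

class TransparentAttentionCombiner(nn.Module):
    def __init__(self, n_enc_layers, n_dec_layers):
        super().__init__()
        # One learnable weight per (encoder layer, decoder layer) pair.
        self.logits = nn.Parameter(torch.zeros(n_dec_layers, n_enc_layers + 1))

    def forward(self, enc_layer_outputs):
        """enc_layer_outputs: list of (B, T, d) tensors (embeddings + each layer).
        Returns one combined (B, T, d) attention memory per decoder layer."""
        stacked = torch.stack(enc_layer_outputs, dim=0)            # (L+1, B, T, d)
        weights = torch.softmax(self.logits, dim=-1)               # (n_dec, L+1)
        return torch.einsum("kl,lbtd->kbtd", weights, stacked)     # (n_dec, B, T, d)

# Usage with dummy encoder activations (6 layers + embeddings, 2 decoder layers).
outs = [torch.randn(2, 7, 16) for _ in range(7)]
combined = TransparentAttentionCombiner(n_enc_layers=6, n_dec_layers=2)(outs)
print(combined.shape)  # torch.Size([2, 2, 7, 16])
```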

Training Recurrent Neural Networks against Noisy Computations during Inference

Title Training Recurrent Neural Networks against Noisy Computations during Inference
Authors Minghai Qin, Dejan Vucinic
Abstract We explore the robustness of recurrent neural networks when the computations within the network are noisy. One of the motivations for looking into this problem is to reduce the high power cost of conventional computing of neural network operations through the use of analog neuromorphic circuits. Traditional GPU/CPU-centered deep learning architectures exhibit bottlenecks in power-restricted applications, such as speech recognition in embedded systems. The use of specialized neuromorphic circuits, where analog signals passed through memory-cell arrays are sensed to accomplish matrix-vector multiplications, promises large power savings and speed gains but brings with it the problems of limited precision of computations and unavoidable analog noise. In this paper we propose a method, called Deep Noise Injection training, to train RNNs to obtain a set of weights/biases that is much more robust against noisy computation during inference. We explore several RNN architectures, such as vanilla RNNs and long short-term memory (LSTM) networks, and show that after convergence of Deep Noise Injection training the set of trained weights/biases has more consistent performance over a wide range of noise powers entering the network during inference. Surprisingly, we find that Deep Noise Injection training improves the overall performance of some networks even for numerically accurate inference.
Tasks Speech Recognition
Published 2018-07-17
URL http://arxiv.org/abs/1807.06555v1
PDF http://arxiv.org/pdf/1807.06555v1.pdf
PWC https://paperswithcode.com/paper/training-recurrent-neural-networks-against
Repo
Framework
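
The training idea can be sketched simply: during the forward pass, noise is added to the matrix-vector products so that the learned weights remain robust when inference later runs on imprecise analog hardware. The additive Gaussian noise model and its scale below are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of noise-injection training: Gaussian noise added to pre-activations
# during training. Noise model and scale are illustrative assumptions.
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    def __init__(self, in_features, out_features, noise_std=0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        y = self.linear(x)
        if self.training:                        # inject noise only while training
            y = y + self.noise_std * torch.randn_like(y)
        return y

# Drop-in use inside a toy recurrent update (dummy shapes and inputs).
layer = NoisyLinear(32, 32)
h = torch.zeros(4, 32)
for _ in range(10):
    h = torch.tanh(layer(h) + torch.randn(4, 32))   # noisy recurrent step
print(h.shape)
```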

Learning to Skip Ineffectual Recurrent Computations in LSTMs

Title Learning to Skip Ineffectual Recurrent Computations in LSTMs
Authors Arash Ardakani, Zhengyun Ji, Warren J. Gross
Abstract Long Short-Term Memory (LSTM) is a special class of recurrent neural network, which has shown remarkable successes in processing sequential data. The typical architecture of an LSTM involves a set of states and gates: the states retain information over arbitrary time intervals and the gates regulate the flow of information. Due to the recursive nature of LSTMs, they are computationally intensive to deploy on edge devices with limited hardware resources. To reduce the computational complexity of LSTMs, we first introduce a method that learns to retain only the important information in the states by pruning redundant information. We then show that our method can prune over 90% of information in the states without incurring any accuracy degradation over a set of temporal tasks. This observation suggests that a large fraction of the recurrent computations are ineffectual and can be avoided to speed up the process during the inference as they involve noncontributory multiplications/accumulations with zero-valued states. Finally, we introduce a custom hardware accelerator that can perform the recurrent computations using both sparse and dense states. Experimental measurements show that performing the computations using the sparse states speeds up the process and improves energy efficiency by up to 5.2x when compared to implementation results of the accelerator performing the computations using dense states.
Tasks
Published 2018-11-09
URL http://arxiv.org/abs/1811.10396v2
PDF http://arxiv.org/pdf/1811.10396v2.pdf
PWC https://paperswithcode.com/paper/learning-to-skip-ineffectual-recurrent
Repo
Framework
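
The mechanical point behind the speed-up is easy to illustrate: once most state entries are zero, the corresponding recurrent multiply-accumulates contribute nothing and can be skipped. The learned pruning criterion itself is not given in the abstract; a simple magnitude threshold stands in for it in the sketch below.

```python
# Illustration of skipping ineffectual recurrent computations: only nonzero
# state entries contribute to the recurrent product. A magnitude threshold
# stands in for the paper's learned pruning criterion.
import numpy as np

def sparse_recurrent_matvec(W_h, h, threshold=0.05):
    """Compute W_h @ h using only the state entries that survive pruning."""
    keep = np.flatnonzero(np.abs(h) > threshold)     # indices of retained state entries
    # Equivalent to W_h @ h when the pruned entries are treated as exact zeros.
    return W_h[:, keep] @ h[keep], keep.size / h.size

rng = np.random.default_rng(0)
W_h = rng.standard_normal((128, 128))
h = rng.standard_normal(128) * (rng.random(128) < 0.1)   # ~90% of the state already zero
y, density = sparse_recurrent_matvec(W_h, h)
print(y.shape, f"retained {density:.0%} of state entries")
```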