May 7, 2019

3165 words 15 mins read

Paper Group ANR 116

Distributed Private Online Learning for Social Big Data Computing over Data Center Networks. Adaptive Neuron Apoptosis for Accelerating Deep Learning on Large Scale Systems. Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations. Desiderata for Vector-Space Word Representations. Fully Convolutional Recurrent …

Distributed Private Online Learning for Social Big Data Computing over Data Center Networks

Title Distributed Private Online Learning for Social Big Data Computing over Data Center Networks
Authors Chencheng Li, Pan Zhou, Yingxue Zhou, Kaigui Bian, Tao Jiang, Susanto Rahardja
Abstract With the rapid growth of Internet technologies, cloud computing and social networks have become ubiquitous. An increasing number of people participate in social networks, and massive amounts of online social data are generated. To exploit knowledge from these data and predict the social behavior of users, data mining must be carried out over social networks. Almost all online websites use cloud services to process the large volumes of social data effectively, which are gathered from distributed data centers. Because these data are large-scale, high-dimensional, and widely distributed, we propose a distributed sparse online algorithm to handle them. Additionally, privacy protection is an important concern in social networks: the privacy of individuals should not be compromised while their social data are being mined. Thus we also consider the privacy problem in this article. Our simulations show that an appropriate level of data sparsity enhances the performance of our algorithm and that the privacy-preserving method does not significantly hurt its performance.
Tasks
Published 2016-02-21
URL http://arxiv.org/abs/1602.06489v1
PDF http://arxiv.org/pdf/1602.06489v1.pdf
PWC https://paperswithcode.com/paper/distributed-private-online-learning-for
Repo
Framework
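
The abstract above gives no pseudocode, so here is a minimal, hedged sketch of the kind of update it describes: a single node performing sparse online learning with additive noise standing in for the privacy mechanism. The logistic loss, the Laplace noise, the soft-thresholding step, and all constants are illustrative assumptions, not the authors' algorithm.

```python
# A minimal sketch (NOT the paper's exact algorithm): one node's private sparse
# online update, combining a subgradient step, soft-thresholding for sparsity,
# and additive Laplace noise as a stand-in for the privacy mechanism.
import numpy as np

def soft_threshold(w, lam):
    """Proximal step for the L1 penalty: shrinks small coordinates to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def private_sparse_online_step(w, x, y, lr=0.1, lam=0.01, noise_scale=0.05, rng=None):
    """One round of private sparse online logistic regression on a single node."""
    rng = rng or np.random.default_rng(0)
    margin = np.clip(y * x.dot(w), -30.0, 30.0)
    grad = -y * x / (1.0 + np.exp(margin))                 # logistic-loss gradient
    grad += rng.laplace(scale=noise_scale, size=w.shape)   # privacy noise (assumed Laplace)
    w = w - lr * grad                                      # gradient step
    return soft_threshold(w, lr * lam)                     # sparsity-inducing proximal step

# Toy usage: a stream of random samples with a sparse ground-truth separator.
rng = np.random.default_rng(42)
d, w_true = 50, np.zeros(50)
w_true[:5] = 1.0
w = np.zeros(d)
for t in range(500):
    x = rng.normal(size=d)
    y = 1.0 if x.dot(w_true) > 0 else -1.0
    w = private_sparse_online_step(w, x, y, rng=rng)
print("non-zeros in learned w:", np.count_nonzero(w))
```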

Adaptive Neuron Apoptosis for Accelerating Deep Learning on Large Scale Systems

Title Adaptive Neuron Apoptosis for Accelerating Deep Learning on Large Scale Systems
Authors Charles Siegel, Jeff Daily, Abhinav Vishnu
Abstract We present novel techniques to accelerate the convergence of Deep Learning algorithms by conducting low overhead removal of redundant neurons – apoptosis of neurons – which do not contribute to model learning, during the training phase itself. We provide in-depth theoretical underpinnings of our heuristics (bounding accuracy loss and handling apoptosis of several neuron types), and present the methods to conduct adaptive neuron apoptosis. Specifically, we are able to improve the training time for several datasets by 2-3x, while reducing the number of parameters by up to 30x (4-5x on average) on datasets such as ImageNet classification. For the Higgs Boson dataset, our implementation improves the accuracy (measured by Area Under Curve (AUC)) for classification from 0.88/1 to 0.94/1, while reducing the number of parameters by 3x in comparison to existing literature. The proposed methods achieve a 2.44x speedup in comparison to the default (no apoptosis) algorithm.
Tasks
Published 2016-10-03
URL http://arxiv.org/abs/1610.00790v1
PDF http://arxiv.org/pdf/1610.00790v1.pdf
PWC https://paperswithcode.com/paper/adaptive-neuron-apoptosis-for-accelerating
Repo
Framework
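
As a rough illustration of the neuron-apoptosis idea above (not the paper's heuristic, and without its accuracy bounds), the sketch below removes hidden units whose outgoing weights contribute almost nothing. The norm-based criterion, the threshold, and the layer sizes are all assumptions.

```python
# A minimal sketch of the general idea: periodically drop hidden neurons whose
# outgoing-weight norm is negligible, shrinking the layer during training.
import numpy as np

def apoptosis(W_in, W_out, threshold=1e-2):
    """Drop hidden units whose outgoing-weight norm falls below `threshold`.

    W_in:  (d_in, h)  input-to-hidden weights
    W_out: (h, d_out) hidden-to-output weights
    Returns pruned W_in, pruned W_out, and the surviving-unit mask.
    """
    norms = np.linalg.norm(W_out, axis=1)     # contribution proxy per hidden unit
    keep = norms > threshold
    return W_in[:, keep], W_out[keep, :], keep

# Toy usage: a 100-unit hidden layer where half the units are near-dead.
rng = np.random.default_rng(0)
W_in = rng.normal(size=(20, 100))
W_out = rng.normal(size=(100, 5))
W_out[50:] *= 1e-4                            # simulate redundant neurons
W_in, W_out, keep = apoptosis(W_in, W_out)
print("hidden units kept:", keep.sum())       # roughly 50
```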

Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations

Title Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations
Authors Michael Laskey, Caleb Chuck, Jonathan Lee, Jeffrey Mahler, Sanjay Krishnan, Kevin Jamieson, Anca Dragan, Ken Goldberg
Abstract Motivated by recent advances in Deep Learning for robot control, this paper considers two learning algorithms in terms of how they acquire demonstrations. “Human-Centric” (HC) sampling is the standard supervised learning algorithm, where a human supervisor demonstrates the task by teleoperating the robot to provide trajectories consisting of state-control pairs. “Robot-Centric” (RC) sampling is an increasingly popular alternative used in algorithms such as DAgger, where a human supervisor observes the robot executing a learned policy and provides corrective control labels for each state visited. RC sampling can be challenging for human supervisors and prone to mislabeling. RC sampling can also induce error in policy performance because it repeatedly visits areas of the state space that are harder to learn. Although policies learned with RC sampling can be superior to HC sampling for standard learning models such as linear SVMs, policies learned with HC sampling may be comparable with highly-expressive learning models such as deep learning and hyper-parametric decision trees, which have little model error. We compare HC and RC using a grid world and a physical robot singulation task, where in the latter the input is a binary image of a connected set of objects on a planar worksurface and the policy generates a motion of the gripper to separate one object from the rest. We observe in simulation that for linear SVMs, policies learned with RC outperformed those learned with HC but that with deep models this advantage disappears. We also find that with RC, the corrective control labels provided by humans can be highly inconsistent. We prove there exists a class of examples where in the limit, HC is guaranteed to converge to an optimal policy while RC may fail to converge.
Tasks
Published 2016-10-04
URL http://arxiv.org/abs/1610.00850v3
PDF http://arxiv.org/pdf/1610.00850v3.pdf
PWC https://paperswithcode.com/paper/comparing-human-centric-and-robot-centric
Repo
Framework
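
To make the HC/RC distinction above concrete, here is a toy Python sketch contrasting the two data-collection loops on a 1-D control task. The environment, supervisor, and linear policy are invented stand-ins for the paper's grid world and robot singulation experiments.

```python
# A minimal sketch contrasting Human-Centric (HC) and Robot-Centric (RC) sampling
# on a toy 1-D task (illustrative only; not the paper's setup).
import numpy as np

def supervisor(state):
    """Expert control: drive the state toward zero."""
    return -np.clip(state, -1.0, 1.0)

def rollout(policy, steps=20, rng=None):
    """Simulate s_{t+1} = s_t + a_t + noise, returning the visited states."""
    rng = rng or np.random.default_rng(0)
    s, states = 2.0, []
    for _ in range(steps):
        states.append(s)
        s = s + policy(s) + rng.normal(scale=0.05)
    return states

def hc_data(episodes=10):
    """HC: record (state, action) along the supervisor's own trajectories."""
    return [(s, supervisor(s)) for ep in range(episodes)
            for s in rollout(supervisor, rng=np.random.default_rng(ep))]

def rc_data(learned_policy, episodes=10):
    """RC (DAgger-style): roll out the learner, label visited states with the supervisor."""
    return [(s, supervisor(s)) for ep in range(episodes)
            for s in rollout(learned_policy, rng=np.random.default_rng(ep))]

def fit_linear(data):
    """Least-squares fit a = k * s, returning the learned policy."""
    S = np.array([s for s, _ in data]); A = np.array([a for _, a in data])
    k = float(S.dot(A) / S.dot(S))
    return lambda s: k * s

pi = fit_linear(hc_data())            # HC: train on supervisor-visited states
pi = fit_linear(rc_data(pi))          # RC: one DAgger-style refinement round
print("final state after rollout:", rollout(pi)[-1])
```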

Desiderata for Vector-Space Word Representations

Title Desiderata for Vector-Space Word Representations
Authors Leon Derczynski
Abstract A growing plethora of vector-space representations for words is currently available. These consist of fixed-length vectors of real values that represent a word. The result is a representation upon which the power of many conventional information processing and data mining techniques can be brought to bear, as long as the representations are designed with some forethought and fit certain constraints. This paper details desiderata for the design of vector-space representations of words.
Tasks
Published 2016-08-06
URL http://arxiv.org/abs/1608.02094v1
PDF http://arxiv.org/pdf/1608.02094v1.pdf
PWC https://paperswithcode.com/paper/desiderata-for-vector-space-word
Repo
Framework

Fully Convolutional Recurrent Network for Handwritten Chinese Text Recognition

Title Fully Convolutional Recurrent Network for Handwritten Chinese Text Recognition
Authors Zecheng Xie, Zenghui Sun, Lianwen Jin, Ziyong Feng, Shuye Zhang
Abstract This paper proposes an end-to-end framework, namely fully convolutional recurrent network (FCRN) for handwritten Chinese text recognition (HCTR). Unlike traditional methods that rely heavily on segmentation, our FCRN is trained with online text data directly and learns to associate the pen-tip trajectory with a sequence of characters. FCRN consists of four parts: a path-signature layer to extract signature features from the input pen-tip trajectory, a fully convolutional network to learn informative representation, a sequence modeling layer to make per-frame predictions on the input sequence and a transcription layer to translate the predictions into a label sequence. The FCRN is end-to-end trainable in contrast to conventional methods whose components are separately trained and tuned. We also present a refined beam search method that efficiently integrates the language model to decode the FCRN and significantly improve the recognition results. We evaluate the performance of the proposed method on the test sets from the databases CASIA-OLHWDB and ICDAR 2013 Chinese handwriting recognition competition, and both achieve state-of-the-art performance with correct rates of 96.40% and 95.00%, respectively.
Tasks Handwritten Chinese Text Recognition, Language Modelling
Published 2016-04-18
URL http://arxiv.org/abs/1604.04953v1
PDF http://arxiv.org/pdf/1604.04953v1.pdf
PWC https://paperswithcode.com/paper/fully-convolutional-recurrent-network-for
Repo
Framework
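
The four-part pipeline described above maps naturally onto a small PyTorch model; the sketch below shows the overall shape (convolutional front end, bidirectional LSTM sequence model, CTC transcription), with the path-signature layer replaced by assumed precomputed per-frame features. Layer sizes and the alphabet size are illustrative, not the paper's.

```python
# A minimal PyTorch sketch of the FCRN-style pipeline shape. The path-signature
# layer is stubbed out as precomputed per-frame features; sizes are illustrative.
import torch
import torch.nn as nn

class TinyFCRN(nn.Module):
    def __init__(self, in_feats=8, hidden=64, num_classes=100):
        super().__init__()
        # "Fully convolutional" front end over the time axis.
        self.conv = nn.Sequential(
            nn.Conv1d(in_feats, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Sequence-modelling layer (bidirectional LSTM).
        self.rnn = nn.LSTM(hidden, hidden, bidirectional=True, batch_first=True)
        # Transcription layer: per-frame class scores for CTC (class 0 = blank).
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                    # x: (batch, time, in_feats)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.rnn(h)
        return self.head(h).log_softmax(-1)  # (batch, time, classes)

# Toy forward/backward pass with CTC loss on random "signature features".
model = TinyFCRN()
x = torch.randn(2, 50, 8)                                # 2 sequences, 50 frames
logp = model(x).transpose(0, 1)                          # CTC wants (time, batch, classes)
targets = torch.randint(1, 100, (2, 10))
loss = nn.CTCLoss(blank=0)(logp, targets,
                           input_lengths=torch.full((2,), 50, dtype=torch.long),
                           target_lengths=torch.full((2,), 10, dtype=torch.long))
loss.backward()
print("CTC loss:", float(loss))
```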

Recoding Color Transfer as a Color Homography

Title Recoding Color Transfer as a Color Homography
Authors Han Gong, Graham D. Finlayson, Robert B. Fisher
Abstract Color transfer is an image editing process that adjusts the colors of a picture to match a target picture’s color theme. A natural color transfer not only matches the color styles but also prevents after-transfer artifacts due to image compression, noise, and gradient smoothness change. The recently discovered color homography theorem proves that colors across a change in photometric viewing condition are related by a homography. In this paper, we propose a color-homography-based color transfer decomposition which encodes color transfer as a combination of chromaticity shift and shading adjustment. A powerful form of shading adjustment is shown to be a global shading curve by which the same shading homography can be applied elsewhere. Our experiments show that the proposed color transfer decomposition provides a very close approximation to many popular color transfer methods. The advantage of our approach is that the learned color transfer can be applied to many other images (e.g. other frames in a video), instead of a frame-to-frame basis. We demonstrate two applications for color transfer enhancement and video color grading re-application. This simple model of color transfer is also important for future color transfer algorithm design.
Tasks Image Compression
Published 2016-08-04
URL http://arxiv.org/abs/1608.01505v1
PDF http://arxiv.org/pdf/1608.01505v1.pdf
PWC https://paperswithcode.com/paper/recoding-color-transfer-as-a-color-homography
Repo
Framework
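
As a concrete, hedged reading of the decomposition above, the sketch below factors a colour transfer between corresponding RGB values into a per-pixel shading scale and a single 3x3 linear map, fitted by alternating least squares. The solver and the toy data are assumptions, not the authors' implementation.

```python
# A minimal sketch of the core idea: approximate a colour transfer as
# diag(d) @ A @ H ~= B, with per-pixel shading d and one 3x3 matrix H,
# estimated by simple alternating least squares (illustrative only).
import numpy as np

def decompose_color_transfer(A, B, iters=20):
    """A, B: (N, 3) corresponding RGB values (source and target)."""
    d = np.ones(A.shape[0])
    for _ in range(iters):
        # Fix shading, solve for H in least squares.
        H, *_ = np.linalg.lstsq(d[:, None] * A, B, rcond=None)
        # Fix H, solve each pixel's scalar shading in closed form.
        P = A @ H
        d = np.einsum('ij,ij->i', P, B) / np.maximum(np.einsum('ij,ij->i', P, P), 1e-12)
    return d, H

# Toy usage: recover a known chromaticity shift plus random shading.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(1000, 3))
H_true = np.array([[0.9, 0.05, 0.0], [0.1, 0.8, 0.1], [0.0, 0.15, 0.95]])
d_true = rng.uniform(0.5, 1.5, size=1000)
B = d_true[:, None] * (A @ H_true)
d, H = decompose_color_transfer(A, B)
recon = d[:, None] * (A @ H)
print("relative reconstruction error:", np.linalg.norm(recon - B) / np.linalg.norm(B))
```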

Entailment Relations on Distributions

Title Entailment Relations on Distributions
Authors John van de Wetering
Abstract In this paper we give an overview of partial orders on the space of probability distributions that carry a notion of information content and serve as a generalisation of the Bayesian order given in (Coecke and Martin, 2011). We investigate what constraints are necessary in order to get a unique notion of information content. These partial orders can be used to give an ordering on words in vector space models of natural language meaning relating to the contexts in which words are used, which is useful for a notion of entailment and word disambiguation. The construction used also points towards a way to create orderings on the space of density operators which allow a more fine-grained study of entailment. The partial orders in this paper are directed complete and form domains in the sense of domain theory.
Tasks
Published 2016-08-04
URL http://arxiv.org/abs/1608.01405v1
PDF http://arxiv.org/pdf/1608.01405v1.pdf
PWC https://paperswithcode.com/paper/entailment-relations-on-distributions
Repo
Framework

SSP: Semantic Space Projection for Knowledge Graph Embedding with Text Descriptions

Title SSP: Semantic Space Projection for Knowledge Graph Embedding with Text Descriptions
Authors Han Xiao, Minlie Huang, Xiaoyan Zhu
Abstract Knowledge representation is an important, long-standing topic in AI, and there has been a large body of work on knowledge graph embedding, which projects symbolic entities and relations into a low-dimensional, real-valued vector space. However, most embedding methods merely concentrate on data fitting and ignore explicit semantic expression, leading to uninterpretable representations. Thus, traditional embedding methods have limited potential for many applications such as question answering and entity classification. To this end, this paper proposes a semantic representation method for knowledge graphs (KSR), which imposes a two-level hierarchical generative process that globally extracts many aspects and then locally assigns a specific category in each aspect for every triple. Since both aspects and categories are semantics-relevant, the collection of categories in each aspect is treated as the semantic representation of the triple. Extensive experiments show that our model outperforms other state-of-the-art baselines substantially.
Tasks Graph Embedding, Knowledge Graph Embedding, Question Answering
Published 2016-04-17
URL http://arxiv.org/abs/1604.04835v3
PDF http://arxiv.org/pdf/1604.04835v3.pdf
PWC https://paperswithcode.com/paper/ssp-semantic-space-projection-for-knowledge
Repo
Framework
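
The abstract does not specify the two-level generative model in enough detail to reproduce, so the sketch below only illustrates the background it builds on: a TransE-style score for entities and relations projected into low-dimensional vectors. This is explicitly not the paper's SSP/KSR method; all sizes are illustrative.

```python
# Background illustration only: a TransE-style scoring function, the kind of
# "entities and relations as low-dimensional vectors" baseline the abstract
# refers to. NOT the paper's KSR/SSP model.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 100, 10, 16
E = rng.normal(scale=0.1, size=(n_entities, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_relations, dim))  # relation embeddings

def transe_score(h, r, t):
    """Lower score = more plausible triple under h + r ~= t."""
    return np.linalg.norm(E[h] + R[r] - E[t], ord=1)

print(transe_score(0, 3, 7))
```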

Learning a Deep Model for Human Action Recognition from Novel Viewpoints

Title Learning a Deep Model for Human Action Recognition from Novel Viewpoints
Authors Hossein Rahmani, Ajmal Mian, Mubarak Shah
Abstract Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a non-linear virtual path that connects the views. The R-NKTM is learned from dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms existing state-of-the-art.
Tasks Motion Capture, Temporal Action Localization, Transfer Learning
Published 2016-02-02
URL http://arxiv.org/abs/1602.00828v1
PDF http://arxiv.org/pdf/1602.00828v1.pdf
PWC https://paperswithcode.com/paper/learning-a-deep-model-for-human-action
Repo
Framework
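
A minimal PyTorch sketch of the overall shape of the idea follows: a fully-connected network trained once with dummy labels on synthetic multi-view features, then frozen and reused as a view-invariant feature extractor. The sizes, the optimizer, and the random stand-in data are assumptions, not the authors' R-NKTM.

```python
# A minimal sketch (sizes and training details are illustrative): a fully-connected
# network maps view-dependent trajectory features to a shared representation, is
# trained with dummy labels on synthetic multi-view data, then frozen and reused.
import torch
import torch.nn as nn

nktm = nn.Sequential(                 # the knowledge-transfer network
    nn.Linear(200, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),    # shared "virtual view" representation
)
dummy_head = nn.Linear(64, 50)        # 50 dummy labels (e.g. synthetic sequence IDs)

opt = torch.optim.Adam(list(nktm.parameters()) + list(dummy_head.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(200):               # train on synthetic multi-view features
    x = torch.randn(64, 200)          # stand-in for view-dependent trajectory features
    y = torch.randint(0, 50, (64,))   # dummy labels
    opt.zero_grad()
    loss = loss_fn(dummy_head(nktm(x)), y)
    loss.backward()
    opt.step()

# At test time the frozen network maps features from any (unknown) view to the
# shared space; a separate action classifier would be trained on these codes.
nktm.eval()
with torch.no_grad():
    codes = nktm(torch.randn(4, 200))
print(codes.shape)                    # torch.Size([4, 64])
```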

Regret Bounds for Non-decomposable Metrics with Missing Labels

Title Regret Bounds for Non-decomposable Metrics with Missing Labels
Authors Prateek Jain, Nagarajan Natarajan
Abstract We consider the problem of recommending relevant labels (items) for a given data point (user). In particular, we are interested in the practically important setting where the evaluation is with respect to non-decomposable (over labels) performance metrics like the $F_1$ measure, and the training data has missing labels. To this end, we propose a generic framework that, given a performance metric $\Psi$, can devise a regularized objective function and a threshold such that all and only the values in the predicted score vector above the threshold are selected to be positive. We show that the regret or generalization error in the given metric $\Psi$ is ultimately bounded by the estimation error of certain underlying parameters. In particular, we derive regret bounds under three popular settings: a) collaborative filtering, b) multilabel classification, and c) PU (positive-unlabeled) learning. For each of these problems, we obtain precise non-asymptotic regret bounds that are small even when a large fraction of labels is missing. Our empirical results on synthetic and benchmark datasets demonstrate that by explicitly modeling missing labels and optimizing the desired performance metric, our algorithm achieves significantly better performance (e.g., $F_1$ score) than methods that do not model missing label information carefully.
Tasks
Published 2016-06-07
URL http://arxiv.org/abs/1606.02077v1
PDF http://arxiv.org/pdf/1606.02077v1.pdf
PWC https://paperswithcode.com/paper/regret-bounds-for-non-decomposable-metrics
Repo
Framework
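
The thresholding step described above (all and only the scores above a threshold are predicted positive) can be illustrated in a few lines: scan candidate thresholds and keep the one that maximizes F1 on held-out data. The scanning procedure and toy scores below are assumptions, not the paper's algorithm or its regret analysis.

```python
# A minimal sketch of score thresholding for a non-decomposable metric (F1).
import numpy as np

def f1(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / max(2 * tp + fp + fn, 1)

def best_threshold(scores, y_true):
    """Scan candidate thresholds (the scores themselves) and keep the best-F1 one."""
    best_t, best_f1 = -np.inf, -1.0
    for t in np.unique(scores):
        score = f1(y_true, (scores >= t).astype(int))
        if score > best_f1:
            best_t, best_f1 = t, score
    return best_t, best_f1

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
scores = y + rng.normal(scale=0.7, size=200)   # noisy scores correlated with labels
t, f = best_threshold(scores, y)
print(f"threshold={t:.2f}  F1={f:.3f}")
```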

Reservoir computing for spatiotemporal signal classification without trained output weights

Title Reservoir computing for spatiotemporal signal classification without trained output weights
Authors Ashley Prater
Abstract Reservoir computing is a recently introduced machine learning paradigm that has been shown to be well-suited for the processing of spatiotemporal data. Rather than training the network node connections and weights via backpropagation as in traditional recurrent neural networks, reservoirs instead have fixed connections and weights among the ‘hidden layer’ nodes, and traditionally only the weights to the output layer of neurons are trained using linear regression. We claim that for signal classification tasks one may forgo the weight training step entirely and instead use a simple supervised clustering method based upon principal components of norms of reservoir states. The proposed method is mathematically analyzed and explored through numerical experiments on real-world data. The examples demonstrate that the proposed method may outperform the traditional trained-output-weight approach in terms of classification accuracy and sensitivity to reservoir parameters.
Tasks
Published 2016-04-11
URL http://arxiv.org/abs/1604.03073v2
PDF http://arxiv.org/pdf/1604.03073v2.pdf
PWC https://paperswithcode.com/paper/reservoir-computing-for-spatiotemporal-signal
Repo
Framework
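
One plausible reading of the method above, sketched with invented details: drive a fixed random reservoir with each signal, use the time-series of reservoir-state norms as features, project onto principal components, and classify by nearest class centroid. Reservoir size, spectral radius, and the toy sine/square task are assumptions, not the author's setup.

```python
# A minimal sketch of classification from untrained reservoir-state norms.
import numpy as np

rng = np.random.default_rng(0)
N_RES = 100
W_res = rng.normal(size=(N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # spectral radius < 1
W_in = rng.normal(scale=0.5, size=N_RES)

def reservoir_norms(signal):
    """Run the fixed (untrained) reservoir and record the state norm at each step."""
    x = np.zeros(N_RES)
    norms = []
    for u in signal:
        x = np.tanh(W_res @ x + W_in * u)
        norms.append(np.linalg.norm(x))
    return np.array(norms)

# Two toy classes: sine vs. square waves, 20 examples each, 100 time steps.
t = np.linspace(0, 4 * np.pi, 100)
signals = [np.sin(t + rng.normal()) for _ in range(20)] + \
          [np.sign(np.sin(t + rng.normal())) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)
F = np.array([reservoir_norms(s) for s in signals])

# Principal components of the norm features, then nearest-centroid classification.
F_c = F - F.mean(axis=0)
_, _, Vt = np.linalg.svd(F_c, full_matrices=False)
Z = F_c @ Vt[:5].T
centroids = np.array([Z[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(Z[:, None, :] - centroids[None], axis=2), axis=1)
print("training accuracy:", float((pred == labels).mean()))
```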

Visual Concept Recognition and Localization via Iterative Introspection

Title Visual Concept Recognition and Localization via Iterative Introspection
Authors Amir Rosenfeld, Shimon Ullman
Abstract Convolutional neural networks have been shown to develop internal representations, which correspond closely to semantically meaningful objects and parts, although trained solely on class labels. Class Activation Mapping (CAM) is a recent method that makes it possible to easily highlight the image regions contributing to a network’s classification decision. We build upon these two developments to enable a network to re-examine informative image regions, which we term introspection. We propose a weakly-supervised iterative scheme, which shifts its center of attention to increasingly discriminative regions as it progresses, by alternating stages of classification and introspection. We evaluate our method and show its effectiveness over a range of several datasets, where we obtain competitive or state-of-the-art results: on Stanford-40 Actions, we set a new state of the art of 81.74%. On FGVC-Aircraft and the Stanford Dogs dataset, we show consistent improvements over baselines, some of which include significantly more supervision.
Tasks
Published 2016-03-14
URL http://arxiv.org/abs/1603.04186v2
PDF http://arxiv.org/pdf/1603.04186v2.pdf
PWC https://paperswithcode.com/paper/visual-concept-recognition-and-localization
Repo
Framework
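
Since the method builds on Class Activation Mapping, here is a minimal sketch of CAM itself: the map for a class is the classifier-weighted sum of the final convolutional feature maps. The toy tensors and class count below stand in for a real network and are not the paper's introspection scheme.

```python
# A minimal sketch of Class Activation Mapping (CAM) on toy tensors.
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """features: (C, H, W) final conv feature maps (global-average-pooled network).
    fc_weights: (num_classes, C) weights of the linear classifier.
    Returns an (H, W) activation map for `class_idx`, normalized to [0, 1]."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(0)
features = rng.random((256, 7, 7))        # pretend final conv block output
fc_weights = rng.normal(size=(40, 256))   # e.g. 40 action classes (Stanford-40)
cam = class_activation_map(features, fc_weights, class_idx=3)
print("most informative cell:", np.unravel_index(cam.argmax(), cam.shape))
```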

Feature Extraction and Soft Computing Methods for Aerospace Structure Defect Classification

Title Feature Extraction and Soft Computing Methods for Aerospace Structure Defect Classification
Authors Gianni D’Angelo, Salvatore Rampone
Abstract This study concerns the effectiveness of several signal-processing and data-interpretation techniques for the diagnosis of aerospace structure defects. This is done by applying different known feature extraction methods, in addition to a new CBIR-based one, and several soft computing techniques, including a recent HPC parallel implementation of the U-BRAIN learning algorithm, to Non-Destructive Testing data. The performance of the resulting detection systems is measured in terms of Accuracy, Sensitivity, Specificity, and Precision. Their effectiveness is evaluated by the Matthews correlation coefficient, the Area Under Curve (AUC), and the F-Measure. Several experiments are performed on a standard dataset of eddy current signal samples for aircraft structures. Our experimental results show that the key to a successful defect classifier is the feature extraction method (the novel CBIR-based one outperforms all the competitors) and illustrate the greater effectiveness of the U-BRAIN algorithm and the MLP neural network among the soft computing methods in this kind of application. Keywords: Non-Destructive Testing (NDT); Soft Computing; Feature Extraction; Classification Algorithms; Content-Based Image Retrieval (CBIR); Eddy Currents (EC).
Tasks Content-Based Image Retrieval, Image Retrieval
Published 2016-11-15
URL http://arxiv.org/abs/1611.04782v1
PDF http://arxiv.org/pdf/1611.04782v1.pdf
PWC https://paperswithcode.com/paper/feature-extraction-and-soft-computing-methods
Repo
Framework
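
The evaluation metrics listed above (Accuracy, Sensitivity, Specificity, Precision, F-Measure, Matthews correlation) are standard quantities derived from a binary confusion matrix; the sketch below computes them on toy predictions, independently of the paper's classifiers and NDT data.

```python
# A minimal sketch of the evaluation metrics named in the abstract.
import numpy as np

def classification_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    acc = (tp + tn) / len(y_true)
    sens = tp / max(tp + fn, 1)                       # a.k.a. recall
    spec = tn / max(tn + fp, 1)
    prec = tp / max(tp + fp, 1)
    f1 = 2 * prec * sens / max(prec + sens, 1e-12)
    mcc_den = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0
    return dict(accuracy=acc, sensitivity=sens, specificity=spec,
                precision=prec, f_measure=f1, matthews=mcc)

print(classification_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
```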

Content-Based Image Retrieval Using Multiresolution Analysis Of Shape-Based Classified Images

Title Content-Based Image Retrieval Using Multiresolution Analysis Of Shape-Based Classified Images
Authors I. M. El-Henawy, Kareem Ahmed
Abstract Content-Based Image Retrieval (CBIR) systems have been widely used for a wide range of applications such as art collections, crime prevention, and intellectual property. In this paper, a novel CBIR system, which utilizes the visual contents (color, texture, and shape) of an image to retrieve images, is proposed. The proposed system builds three feature vectors and stores them in a MySQL database. The first feature vector uses descriptive statistics to describe the distribution of data in each of the RGB channels of the image. The second feature vector describes the texture using eigenvalues of the 39 sub-bands that are generated after applying a four-level 2D DWT to each channel (red, green, and blue) of the image. These wavelet sub-bands describe the horizontal, vertical, and diagonal edges that exist in the multi-resolution analysis of the image. The third feature vector describes the basic shapes that exist in the skeletonized version of the black-and-white representation of the image. Experimental results on a private MySQL database of 10,000 images, using color, texture, shape, and stored relevance feedback, showed a 96.4% average correct retrieval rate with efficient retrieval time.
Tasks Content-Based Image Retrieval, Image Retrieval
Published 2016-10-08
URL http://arxiv.org/abs/1610.02509v1
PDF http://arxiv.org/pdf/1610.02509v1.pdf
PWC https://paperswithcode.com/paper/content-based-image-retrieval-using
Repo
Framework
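
As a hedged illustration of the first of the three feature vectors above (per-channel descriptive statistics of the RGB distribution) and of nearest-neighbour retrieval over such features, the sketch below uses random images as a stand-in database; the wavelet and shape features are omitted, and the exact statistics chosen here are assumptions.

```python
# A minimal sketch: colour-statistics features plus nearest-neighbour retrieval.
import numpy as np

def color_stats_features(image):
    """image: (H, W, 3) RGB array -> 12-D vector of per-channel descriptive statistics."""
    feats = []
    for c in range(3):
        ch = image[..., c].astype(float).ravel()
        feats += [ch.mean(), ch.std(), float(np.median(ch)),
                  float(((ch - ch.mean()) ** 3).mean() / (ch.std() ** 3 + 1e-12))]  # skewness
    return np.array(feats)

def retrieve(query_feat, db_feats, top_k=5):
    """Rank database images by Euclidean distance in feature space."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)[:top_k]

rng = np.random.default_rng(0)
database = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(100)]
db_feats = np.stack([color_stats_features(im) for im in database])
print("top matches:", retrieve(color_stats_features(database[7]), db_feats))
```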

Causal Network Learning from Multiple Interventions of Unknown Manipulated Targets

Title Causal Network Learning from Multiple Interventions of Unknown Manipulated Targets
Authors Yango He, Zhi Geng
Abstract In this paper, we discuss structure learning of causal networks from multiple data sets obtained by external intervention experiments in which we do not know which variables are manipulated. For example, the conditions in these experiments are changed by changing temperature or using drugs, but we do not know which target variables are manipulated by the external interventions. Learning structure from such data sets is more difficult. For this case, we first discuss the identifiability of causal structures. Next, we present a graph-merging method for learning causal networks for the case where the sample sizes of these interventions are large. Then, for the case where the sample sizes are relatively small, we propose a data-pooling method for learning causal networks in which we pool all data sets of these interventions together for the learning. Furthermore, we propose a re-sampling approach to evaluate the edges of the causal network learned by the data-pooling method. Finally, we illustrate the proposed learning methods by simulations.
Tasks
Published 2016-10-27
URL http://arxiv.org/abs/1610.08611v1
PDF http://arxiv.org/pdf/1610.08611v1.pdf
PWC https://paperswithcode.com/paper/causal-network-learning-from-multiple
Repo
Framework
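
The data-pooling and re-sampling ideas above can be sketched with a deliberately crude placeholder learner (a partial-correlation threshold, which is not the paper's method): pool the interventional data sets, bootstrap them, and report how often each edge is recovered. The simulated chain and all constants are assumptions.

```python
# A minimal sketch of pooling interventional data and bootstrapping edge confidence.
import numpy as np

def placeholder_learner(X, thresh=0.2):
    """Return a symmetric adjacency matrix from thresholded partial correlations
    (a stand-in for a real structure-learning algorithm)."""
    prec = np.linalg.pinv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 0.0)
    return (np.abs(pcorr) > thresh).astype(int)

def edge_confidence(datasets, n_boot=100, rng=None):
    """Pool all intervention data sets, bootstrap, and average the learned adjacency."""
    rng = rng or np.random.default_rng(0)
    pooled = np.vstack(datasets)
    counts = np.zeros((pooled.shape[1], pooled.shape[1]))
    for _ in range(n_boot):
        idx = rng.integers(0, len(pooled), size=len(pooled))
        counts += placeholder_learner(pooled[idx])
    return counts / n_boot

# Toy data: x0 -> x1 -> x2 under two "interventional" regimes with shifted x0.
rng = np.random.default_rng(1)
def simulate(shift, n=200):
    x0 = rng.normal(loc=shift, size=n)
    x1 = 0.8 * x0 + rng.normal(scale=0.5, size=n)
    x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)
    return np.column_stack([x0, x1, x2])

conf = edge_confidence([simulate(0.0), simulate(2.0)])
print(np.round(conf, 2))
```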