May 5, 2019

3349 words 16 mins read

Paper Group ANR 558

Semantic Video Segmentation by Gated Recurrent Flow Propagation. COREALMLIB: An ALM Library Translated from the Component Library. Gland Instance Segmentation Using Deep Multichannel Neural Networks. Strongly-Typed Recurrent Neural Networks. RandomOut: Using a convolutional gradient norm to rescue convolutional filters. Neuro-Symbolic Program Synth …

Semantic Video Segmentation by Gated Recurrent Flow Propagation

Title Semantic Video Segmentation by Gated Recurrent Flow Propagation
Authors David Nilsson, Cristian Sminchisescu
Abstract Semantic video segmentation is challenging due to the sheer amount of data that needs to be processed and labeled in order to construct accurate models. In this paper we present a deep, end-to-end trainable methodology for video segmentation that is capable of leveraging information present in unlabeled data in order to improve semantic estimates. Our model combines a convolutional architecture and a spatio-temporal transformer recurrent layer that are able to temporally propagate labeling information by means of optical flow, adaptively gated based on its locally estimated uncertainty. The flow, the recognition and the gated temporal propagation modules can be trained jointly, end-to-end. The temporal, gated recurrent flow propagation component of our model can be plugged into any static semantic segmentation architecture and turn it into a weakly supervised video processing one. Our extensive experiments on the challenging Cityscapes and CamVid datasets, based on multiple deep architectures, indicate that the resulting model can leverage unlabeled temporal frames, alongside a labeled one, to improve both video segmentation accuracy and the consistency of its temporal labeling, at no additional annotation cost and with little extra computation.
Tasks Optical Flow Estimation, Semantic Segmentation, Video Semantic Segmentation
Published 2016-12-28
URL http://arxiv.org/abs/1612.08871v2
PDF http://arxiv.org/pdf/1612.08871v2.pdf
PWC https://paperswithcode.com/paper/semantic-video-segmentation-by-gated
Repo
Framework
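
A minimal sketch of the gating idea, not the authors' implementation: warp the previous frame's class probabilities backward along an optical-flow field and fuse them with the current static prediction through a per-pixel gate. The nearest-neighbor warping and the constant gate below stand in for the learned flow and uncertainty modules.

```python
# Hedged sketch: illustrates gated flow propagation, not the paper's code.
import numpy as np

def warp_labels(prev_probs, flow):
    """Backward-warp per-pixel class probabilities (H, W, C) along a
    flow field (H, W, 2) using nearest-neighbor sampling."""
    H, W, _ = prev_probs.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, W - 1)
    return prev_probs[src_y, src_x]

def gated_fuse(static_probs, warped_probs, gate):
    """Per-pixel convex combination; gate in [0, 1] downweights the
    warped estimate where the flow is locally unreliable."""
    g = gate[..., None]
    return g * warped_probs + (1.0 - g) * static_probs

# Toy usage: 4x4 frame, 3 classes, constant flow of one pixel right.
H, W, C = 4, 4, 3
prev_probs = np.random.dirichlet(np.ones(C), size=(H, W))
static_probs = np.random.dirichlet(np.ones(C), size=(H, W))
flow = np.zeros((H, W, 2)); flow[..., 0] = 1.0
gate = np.full((H, W), 0.7)  # would be derived from flow uncertainty
fused = gated_fuse(static_probs, warp_labels(prev_probs, flow), gate)
print(fused.shape)  # (4, 4, 3)
```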

COREALMLIB: An ALM Library Translated from the Component Library

Title COREALMLIB: An ALM Library Translated from the Component Library
Authors Daniela Inclezan
Abstract This paper presents COREALMLIB, an ALM library of commonsense knowledge about dynamic domains. The library was obtained by translating part of the COMPONENT LIBRARY (CLIB) into the modular action language ALM. CLIB consists of general, reusable, and composable commonsense concepts, selected based on a thorough study of ontological and lexical resources. Our translation targets CLIB states (i.e., fluents) and actions. The resulting ALM library contains the descriptions of 123 action classes grouped into 43 reusable modules that are organized into a hierarchy. It is made available online and is of interest to researchers in the action language, answer-set programming, and natural language understanding communities. We believe that our translation has two main advantages over its CLIB counterpart: (i) it specifies axioms about actions in a more elaboration-tolerant and readable way, and (ii) it can be seamlessly integrated with ASP reasoning algorithms (e.g., for planning and postdiction). In contrast, axioms are described in CLIB using STRIPS-like operators, and CLIB’s inference engine can handle neither planning nor postdiction. Under consideration for publication in TPLP.
Tasks
Published 2016-08-06
URL http://arxiv.org/abs/1608.02082v2
PDF http://arxiv.org/pdf/1608.02082v2.pdf
PWC https://paperswithcode.com/paper/corealmlib-an-alm-library-translated-from-the
Repo
Framework

Gland Instance Segmentation Using Deep Multichannel Neural Networks

Title Gland Instance Segmentation Using Deep Multichannel Neural Networks
Authors Yan Xu, Yang Li, Yipei Wang, Mingyuan Liu, Yubo Fan, Maode Lai, Eric I-Chao Chang
Abstract Objective: A new image instance segmentation method is proposed to segment individual glands (instances) in colon histology images. This process is challenging since the glands not only need to be segmented from a complex background, they must also be individually identified. Methods: We leverage the idea of image-to-image prediction in recent deep learning by designing an algorithm that automatically exploits and fuses complex multichannel information - regional, location, and boundary cues - in gland histology images. Our proposed algorithm, a deep multichannel framework, alleviates heavy feature design through its use of convolutional neural networks and is able to meet multifarious requirements by altering channels. Results: Compared with methods reported in the 2015 MICCAI Gland Segmentation Challenge and other currently prevalent instance segmentation methods, we observe state-of-the-art results based on the evaluation metrics. Conclusion: The proposed deep multichannel algorithm is an effective method for gland instance segmentation. Significance: The generalization ability of our model not only enables the algorithm to solve gland instance segmentation problems, but also allows individual channels to be replaced to suit a specific task.
Tasks Instance Segmentation, Semantic Segmentation
Published 2016-11-21
URL http://arxiv.org/abs/1611.06661v3
PDF http://arxiv.org/pdf/1611.06661v3.pdf
PWC https://paperswithcode.com/paper/gland-instance-segmentation-using-deep
Repo
Framework
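
A hedged sketch of how multichannel cues can be fused at inference time: a foreground (region) channel is cut along a boundary channel so that touching glands separate, and connected components become instances. The thresholds and the use of SciPy's connected-component labeling are illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: fuse region and boundary channels, then label instances.
import numpy as np
from scipy import ndimage

def fuse_channels(region_prob, boundary_prob, t_region=0.5, t_boundary=0.5):
    """region_prob, boundary_prob: (H, W) maps from two CNN channels."""
    fg = region_prob > t_region
    interior = fg & (boundary_prob < t_boundary)  # cut along boundaries
    instances, n = ndimage.label(interior)        # one id per gland
    return instances, n

region = np.random.rand(64, 64)
boundary = np.random.rand(64, 64)
labels, n_glands = fuse_channels(region, boundary)
print(n_glands)
```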

Strongly-Typed Recurrent Neural Networks

Title Strongly-Typed Recurrent Neural Networks
Authors David Balduzzi, Muhammad Ghifary
Abstract Recurrent neural networks are increasingly popular models for sequential learning. Unfortunately, although the most effective RNN architectures are perhaps excessively complicated, extensive searches have not found simpler alternatives. This paper imports ideas from physics and functional programming into RNN design to provide guiding principles. From physics, we introduce type constraints, analogous to the constraints that forbid adding meters to seconds. From functional programming, we require that strongly-typed architectures factorize into stateless learnware and state-dependent firmware, reducing the impact of side effects. The features learned by strongly-typed nets have a simple semantic interpretation via dynamic average-pooling on one-dimensional convolutions. We also show that strongly-typed gradients are better behaved than in classical architectures, and characterize the representational power of strongly-typed nets. Finally, experiments show that, despite being more constrained, strongly-typed architectures achieve lower training error and comparable generalization error relative to classical architectures.
Tasks
Published 2016-02-06
URL http://arxiv.org/abs/1602.02218v2
PDF http://arxiv.org/pdf/1602.02218v2.pdf
PWC https://paperswithcode.com/paper/strongly-typed-recurrent-neural-networks
Repo
Framework
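
A small NumPy sketch in the spirit of a strongly-typed recurrent update; the exact formulation is our assumption, hedged accordingly. The candidate state and the gate are functions of the current input only (stateless "learnware"), while the recurrence is a convex combination (state-dependent "firmware"), which gives the dynamic-average-pooling reading mentioned in the abstract.

```python
# Hedged sketch of a strongly-typed recurrent update, not the paper's code.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def t_rnn(xs, W, V, b):
    """xs: (T, d_in). Returns hidden states (T, d_h)."""
    h = np.zeros(W.shape[0])
    hs = []
    for x in xs:
        z = W @ x                 # candidate: a function of x_t only
        f = sigmoid(V @ x + b)    # gate: also a function of x_t only
        h = f * z + (1 - f) * h   # running, input-gated average of z's
        hs.append(h.copy())
    return np.stack(hs)

T, d_in, d_h = 5, 3, 4
rng = np.random.default_rng(0)
hs = t_rnn(rng.normal(size=(T, d_in)),
           rng.normal(size=(d_h, d_in)),
           rng.normal(size=(d_h, d_in)),
           np.zeros(d_h))
print(hs.shape)  # (5, 4)
```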

RandomOut: Using a convolutional gradient norm to rescue convolutional filters

Title RandomOut: Using a convolutional gradient norm to rescue convolutional filters
Authors Joseph Paul Cohen, Henry Z. Lo, Wei Ding
Abstract Filters in convolutional neural networks are sensitive to their initialization. The random numbers used to initialize filters bias training and determine whether it will “win” and converge to a satisfactory local minimum, so we call this The Filter Lottery. We observe that the 28x28 Inception-V3 model without Batch Normalization fails to train 26% of the time when varying the random seed alone. This is a problem that affects the trial-and-error process of designing a network. Because random seeds have such a large impact, it is hard to evaluate a network design without trying many different random starting weights. This work aims to reduce the bias imposed by the initial weights so that a network converges more consistently. We propose to evaluate and replace specific convolutional filters that have little impact on the prediction. We use the gradient norm to evaluate the impact of a filter on error, and re-initialize filters when the gradient norm of their weights falls below a specific threshold. This consistently improves accuracy on the 28x28 Inception-V3, with a median increase of +3.3%. In effect, our method RandomOut increases the number of filters explored without increasing the size of the network. We observe that RandomOut yields more consistent generalization performance, with a standard deviation of 1.3% instead of 2% when varying random seeds, and does so faster and with fewer parameters.
Tasks
Published 2016-02-18
URL http://arxiv.org/abs/1602.05931v3
PDF http://arxiv.org/pdf/1602.05931v3.pdf
PWC https://paperswithcode.com/paper/randomout-using-a-convolutional-gradient-norm
Repo
Framework
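
The re-initialization rule is simple enough to state in code. Below is a hedged PyTorch sketch, not the authors' code: after a backward pass, compute each filter's gradient norm and re-randomize filters whose norm falls below a threshold. The threshold value and the Kaiming initialization are illustrative choices.

```python
# Hedged sketch of the RandomOut rule: rescue filters with tiny gradients.
import torch
import torch.nn as nn

def randomout(conv: nn.Conv2d, threshold: float = 1e-6):
    with torch.no_grad():
        grad = conv.weight.grad            # (out_ch, in_ch, kH, kW)
        if grad is None:
            return 0
        norms = grad.flatten(1).norm(dim=1)  # one norm per filter
        dead = norms < threshold
        if dead.any():
            fresh = torch.empty_like(conv.weight[dead])
            nn.init.kaiming_normal_(fresh)   # a second draw in the lottery
            conv.weight[dead] = fresh
        return int(dead.sum())

# Toy usage: one training step, then rescue low-gradient filters.
conv = nn.Conv2d(3, 8, 3, padding=1)
x = torch.randn(2, 3, 16, 16)
loss = conv(x).pow(2).mean()
loss.backward()
print(randomout(conv), "filters re-initialized")
```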

Neuro-Symbolic Program Synthesis

Title Neuro-Symbolic Program Synthesis
Authors Emilio Parisotto, Abdel-rahman Mohamed, Rishabh Singh, Lihong Li, Dengyong Zhou, Pushmeet Kohli
Abstract Recent years have seen the proposal of a number of neural architectures for the problem of Program Induction. Given a set of input-output examples, these architectures are able to learn mappings that generalize to new test inputs. While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis, to overcome the above-mentioned problems. Once trained, our approach can automatically construct computer programs in a domain-specific language that are consistent with a set of input-output examples provided at test time. Our method is based on two novel neural modules. The first module, called the cross correlation I/O network, given a set of input-output examples, produces a continuous representation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representation of the examples, synthesizes a program by incrementally expanding partial programs. We demonstrate the effectiveness of our approach by applying it to the rich and complex domain of regular expression based string transformations. Experiments show that the R3NN model is not only able to construct programs from new input-output examples, but it is also able to construct new programs for tasks that it had never observed before during training.
Tasks Program Synthesis
Published 2016-11-06
URL http://arxiv.org/abs/1611.01855v1
PDF http://arxiv.org/pdf/1611.01855v1.pdf
PWC https://paperswithcode.com/paper/neuro-symbolic-program-synthesis
Repo
Framework
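
To make the synthesis setting concrete, here is a toy sketch of the outer loop only: enumerate programs in a small string-transformation DSL and keep the first one consistent with the I/O examples. The paper instead grows a single partial program, with the R3NN scoring which expansion to apply; that learned scorer is replaced here by brute-force enumeration, and the DSL is invented for illustration.

```python
# Hedged sketch: enumerative stand-in for learned neuro-symbolic search.
import itertools

# Toy DSL: programs are pipelines of primitive string transforms.
PRIMS = {
    "upper": str.upper,
    "lower": str.lower,
    "strip": str.strip,
    "rev": lambda s: s[::-1],
}

def run(prog, s):
    for name in prog:
        s = PRIMS[name](s)
    return s

def consistent(prog, examples):
    return all(run(prog, i) == o for i, o in examples)

def synthesize(examples, max_len=3):
    """Enumerate programs shortest-first; the R3NN instead expands one
    partial program at the most promising nonterminal."""
    for n in range(1, max_len + 1):
        for prog in itertools.product(PRIMS, repeat=n):
            if consistent(prog, examples):
                return prog
    return None

print(synthesize([("  ab ", "BA"), (" xy", "YX")]))  # e.g. ('upper', 'strip', 'rev')
```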

Optimizing the Learning Order of Chinese Characters Using a Novel Topological Sort Algorithm

Title Optimizing the Learning Order of Chinese Characters Using a Novel Topological Sort Algorithm
Authors James C. Loach, Jinzhao Wang
Abstract We present a novel algorithm for optimizing the order in which Chinese characters are learned, one that incorporates the benefits of learning them in order of usage frequency and in order of their hierarchical structural relationships. We show that our work outperforms previously published orders and algorithms. Our algorithm is applicable to any scheduling task where nodes have intrinsic differences in importance and must be visited in topological order.
Tasks
Published 2016-02-28
URL http://arxiv.org/abs/1602.08742v3
PDF http://arxiv.org/pdf/1602.08742v3.pdf
PWC https://paperswithcode.com/paper/optimizing-the-learning-order-of-chinese
Repo
Framework
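
The "importance plus prerequisites" idea corresponds closely to a priority-driven topological sort. A hedged sketch with usage frequency as the priority; the paper's actual algorithm is more refined:

```python
# Hedged sketch: Kahn's algorithm with a max-heap on usage frequency.
import heapq

def learning_order(freq, components):
    """freq: {char: usage frequency}; components: {char: [parts]} -
    a character's structural components must be learned before it."""
    indeg = {c: 0 for c in freq}
    dependents = {c: [] for c in freq}
    for c, parts in components.items():
        for p in parts:
            indeg[c] += 1
            dependents[p].append(c)
    heap = [(-freq[c], c) for c in freq if indeg[c] == 0]
    heapq.heapify(heap)
    order = []
    while heap:
        _, c = heapq.heappop(heap)   # most frequent learnable character
        order.append(c)
        for d in dependents[c]:
            indeg[d] -= 1
            if indeg[d] == 0:
                heapq.heappush(heap, (-freq[d], d))
    return order

# Toy example: 好 is composed of 女 and 子.
freq = {"好": 90, "女": 40, "子": 60}
components = {"好": ["女", "子"]}
print(learning_order(freq, components))  # ['子', '女', '好']
```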

HUME: Human UCCA-Based Evaluation of Machine Translation

Title HUME: Human UCCA-Based Evaluation of Machine Translation
Authors Alexandra Birch, Omri Abend, Ondrej Bojar, Barry Haddow
Abstract Human evaluation of machine translation normally uses sentence-level measures such as relative ranking or adequacy scales. However, these provide no insight into possible errors, and do not scale well with sentence length. We argue for a semantics-based evaluation, which captures what meaning components are retained in the MT output, thus providing a more fine-grained analysis of translation quality, and enabling the construction and tuning of semantics-based MT. We present a novel human semantic evaluation measure, Human UCCA-based MT Evaluation (HUME), building on the UCCA semantic representation scheme. HUME covers a wider range of semantic phenomena than previous methods and does not rely on semantic annotation of the potentially garbled MT output. We experiment with four language pairs, demonstrating HUME’s broad applicability, and report good inter-annotator agreement rates and correlation with human adequacy scores.
Tasks Machine Translation
Published 2016-06-30
URL http://arxiv.org/abs/1607.00030v2
PDF http://arxiv.org/pdf/1607.00030v2.pdf
PWC https://paperswithcode.com/paper/hume-human-ucca-based-evaluation-of-machine
Repo
Framework

Using Empirical Covariance Matrix in Enhancing Prediction Accuracy of Linear Models with Missing Information

Title Using Empirical Covariance Matrix in Enhancing Prediction Accuracy of Linear Models with Missing Information
Authors Ahmadreza Moradipari, Sina Shahsavari, Ashkan Esmaeili, Farokh Marvasti
Abstract Inference and estimation in Missing Information (MI) scenarios are important topics in Statistical Learning Theory and Machine Learning (ML). In the ML literature, attempts have been made to enhance prediction through precise feature selection methods. In sparse linear models, LASSO is well known for extracting the desired support of the signal and for its robustness to noise. When sparse models also suffer from MI, sparse recovery and inference over the missing information must be handled simultaneously. In this paper, we introduce an approach that combines sparse regression with covariance matrix estimation to improve matrix completion accuracy, thereby making feature selection more precise and reducing the prediction Mean Squared Error (MSE). We compare the effect of employing the covariance matrix on estimation accuracy against the case where it is not used in feature selection. Simulations show improved performance compared to the case where the covariance matrix estimate is not used.
Tasks Feature Selection, Matrix Completion
Published 2016-11-21
URL http://arxiv.org/abs/1611.07093v3
PDF http://arxiv.org/pdf/1611.07093v3.pdf
PWC https://paperswithcode.com/paper/using-empirical-covariance-matrix-in
Repo
Framework
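
One plausible reading of the covariance step, sketched below under Gaussian assumptions that are ours rather than the paper's: estimate the mean and covariance from complete rows, then fill each missing block with the conditional mean E[x_m | x_o] = mu_m + S_mo S_oo^{-1} (x_o - mu_o) before any downstream sparse regression.

```python
# Hedged sketch: covariance-based conditional-mean imputation.
import numpy as np

def impute_with_covariance(X):
    """X: (n, d) with np.nan for missing entries. Assumes enough
    complete rows to estimate the mean and covariance."""
    complete = X[~np.isnan(X).any(axis=1)]
    mu = complete.mean(axis=0)
    S = np.cov(complete, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    Xi = X.copy()
    for row in Xi:
        m = np.isnan(row)
        if m.any() and not m.all():
            o = ~m
            # E[x_m | x_o] = mu_m + S_mo S_oo^{-1} (x_o - mu_o)
            row[m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(
                S[np.ix_(o, o)], row[o] - mu[o])
    return Xi

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 3))
X[5, 0] = X[7, 2] = np.nan
print(np.isnan(impute_with_covariance(X)).sum())  # 0
```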

Managing Overstaying Electric Vehicles in Park-and-Charge Facilities

Title Managing Overstaying Electric Vehicles in Park-and-Charge Facilities
Authors Arpita Biswas, Ragavendran Gopalakrishnan, Partha Dutta
Abstract With the increase in adoption of Electric Vehicles (EVs), proper utilization of the charging infrastructure is an emerging challenge for service providers. Overstaying of an EV after a charging event is a key contributor to low utilization. Since overstaying is easily detectable by monitoring the power drawn from the charger, managing this problem primarily involves designing an appropriate “penalty” during the overstaying period. Higher penalties do discourage overstaying; however, due to uncertainty in parking duration, fewer people would find such penalties acceptable, leading to decreased utilization (and revenue). To analyze this central trade-off, we develop a novel framework that integrates models for realistic user behavior into queueing dynamics to locate the optimal penalty from the points of view of utilization and revenue, for different values of the external charging demand. Next, when the model parameters are unknown, we show how an online learning algorithm, such as UCB, can be adapted to learn the optimal penalty. Our experimental validation, based on charging data from London, shows that an appropriate penalty can increase both utilization and revenue while significantly reducing overstaying.
Tasks
Published 2016-04-19
URL http://arxiv.org/abs/1604.05471v2
PDF http://arxiv.org/pdf/1604.05471v2.pdf
PWC https://paperswithcode.com/paper/managing-overstaying-electric-vehicles-in
Repo
Framework
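
The learning step maps naturally onto a standard bandit loop. Below is a hedged UCB1 sketch over a discretized set of penalty rates, with a made-up revenue function standing in for the paper's queueing model; the adaptation details in the paper differ.

```python
# Hedged sketch: UCB1 over candidate penalties, simulated revenue reward.
import math, random

def ucb_penalty(penalties, revenue, rounds=5000):
    n = [0] * len(penalties)
    mean = [0.0] * len(penalties)
    for t in range(1, rounds + 1):
        if t <= len(penalties):
            a = t - 1                       # play each arm once
        else:
            a = max(range(len(penalties)),
                    key=lambda i: mean[i] + math.sqrt(2 * math.log(t) / n[i]))
        r = revenue(penalties[a])
        n[a] += 1
        mean[a] += (r - mean[a]) / n[a]     # incremental mean update
    return penalties[max(range(len(penalties)), key=lambda i: mean[i])]

# Toy revenue model: acceptance falls as the penalty rises, so expected
# revenue peaks at an interior penalty (here 2.0 by construction).
def revenue(p):
    accept_prob = max(0.0, 1 - p / 4)
    return p * (random.random() < accept_prob)

random.seed(0)
print(ucb_penalty([0.5, 1.0, 2.0, 3.0, 3.5], revenue))
```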

Clipper: A Low-Latency Online Prediction Serving System

Title Clipper: A Low-Latency Online Prediction Serving System
Authors Daniel Crankshaw, Xin Wang, Giulio Zhou, Michael J. Franklin, Joseph E. Gonzalez, Ion Stoica
Abstract Machine learning is being deployed in a growing number of applications which demand real-time, accurate, and robust predictions under heavy query load. However, most machine learning frameworks and systems only address model training and not deployment. In this paper, we introduce Clipper, a general-purpose low-latency prediction serving system. Interposing between end-user applications and a wide range of machine learning frameworks, Clipper introduces a modular architecture to simplify model deployment across frameworks and applications. Furthermore, by introducing caching, batching, and adaptive model selection techniques, Clipper reduces prediction latency and improves prediction throughput, accuracy, and robustness without modifying the underlying machine learning frameworks. We evaluate Clipper on four common machine learning benchmark datasets and demonstrate its ability to meet the latency, accuracy, and throughput demands of online serving applications. Finally, we compare Clipper to the TensorFlow Serving system and demonstrate that we are able to achieve comparable throughput and latency while enabling model composition and online learning to improve accuracy and render more robust predictions.
Tasks Model Selection
Published 2016-12-09
URL http://arxiv.org/abs/1612.03079v2
PDF http://arxiv.org/pdf/1612.03079v2.pdf
PWC https://paperswithcode.com/paper/clipper-a-low-latency-online-prediction
Repo
Framework
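
Two of Clipper's techniques, caching and batching, can be shown in a few lines. This is a schematic Python sketch, not Clipper's actual API: requests are queued and sent to the model in batches, and an LRU cache short-circuits repeated queries. Adaptive model selection is omitted.

```python
# Hedged sketch of caching + batching in a prediction-serving frontend.
from functools import lru_cache

class BatchingFrontend:
    def __init__(self, model_batch_fn, max_batch=8):
        self.model_batch_fn = model_batch_fn  # framework-agnostic callable
        self.max_batch = max_batch
        self.queue = []

    def submit(self, x):
        self.queue.append(x)
        if len(self.queue) >= self.max_batch:
            return self.flush()               # amortize per-call overhead
        return []

    def flush(self):
        batch, self.queue = self.queue, []
        return list(zip(batch, self.model_batch_fn(batch)))

@lru_cache(maxsize=4096)           # prediction cache for repeated queries
def cached_predict(x):
    return model_batch_fn([x])[0]

def model_batch_fn(batch):         # stand-in model: batched squaring
    return [b * b for b in batch]

fe = BatchingFrontend(model_batch_fn)
for i in range(8):
    out = fe.submit(i)
print(out)                 # [(0, 0), (1, 1), ..., (7, 49)]
print(cached_predict(3.0)) # 9.0, served from cache on repeat calls
```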

Adaptive Algorithm and Platform Selection for Visual Detection and Tracking

Title Adaptive Algorithm and Platform Selection for Visual Detection and Tracking
Authors Shu Zhang, Qi Zhu, Amit Roy-Chowdhury
Abstract Computer vision algorithms are known to be extremely sensitive to the environmental conditions in which the data is captured, e.g., lighting conditions and target density. Tuning of parameters or choosing a completely new algorithm is often needed to achieve a certain performance level, especially when computational resources are limited. In this paper, we focus on this problem and propose a framework to adaptively select the “best” algorithm-parameter combination and the computation platform under performance and cost constraints at design time, and adapt the algorithms at runtime based on real-time inputs. This necessitates developing a mechanism to switch between different algorithms as the nature of the input video changes. Our proposed algorithm calculates a similarity function between a test video scenario and each training scenario, where the similarity calculation is based on learning a manifold of image features that is shared by both the training and test datasets. Similarity between the training and test datasets indicates that the same algorithm can be applied to both with similar performance. We design a cost function with this similarity measure to find the training scenario most similar to the test data. The “best” algorithm under a given platform is obtained by selecting the algorithm with a specific parameter combination that performs the best on the corresponding training data. The proposed framework can be used first offline to choose the platform based on performance and cost constraints, and then online, whereby the “best” algorithm is selected for each new incoming video segment for a given platform. In the experiments, we apply our algorithm to the problems of pedestrian detection and tracking. We show how to adaptively select platforms and algorithm-parameter combinations. Our results provide optimal performance on 3 publicly available datasets.
Tasks Pedestrian Detection
Published 2016-05-21
URL http://arxiv.org/abs/1605.06597v1
PDF http://arxiv.org/pdf/1605.06597v1.pdf
PWC https://paperswithcode.com/paper/adaptive-algorithm-and-platform-selection-for
Repo
Framework
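
The runtime selection step reduces to a nearest-scenario lookup. A hedged sketch, with plain cosine similarity standing in for the paper's learned shared manifold, and invented feature vectors and algorithm configurations:

```python
# Hedged sketch: pick the algorithm config from the most similar scenario.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_algorithm(test_feat, scenarios):
    """scenarios: list of (feature_vector, best_algorithm_config)."""
    best = max(scenarios, key=lambda s: cosine(test_feat, s[0]))
    return best[1]

scenarios = [
    (np.array([0.9, 0.1, 0.2]), ("HOG+SVM", {"stride": 8})),
    (np.array([0.1, 0.8, 0.7]), ("ACF", {"stride": 4})),
]
print(select_algorithm(np.array([0.2, 0.7, 0.9]), scenarios))
```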

Integrated Sequence Tagging for Medieval Latin Using Deep Representation Learning

Title Integrated Sequence Tagging for Medieval Latin Using Deep Representation Learning
Authors Mike Kestemont, Jeroen De Gussem
Abstract In this paper we consider two sequence tagging tasks for medieval Latin: part-of-speech tagging and lemmatization. These are basic yet foundational preprocessing steps in applications such as text reuse detection. Nevertheless, they are generally complicated by the considerable orthographic variation typical of medieval Latin. In Digital Classics, these tasks are traditionally solved in a (i) cascaded and (ii) lexicon-dependent fashion. For example, a lexicon is used to generate all the potential lemma-tag pairs for a token, and next, a context-aware PoS-tagger is used to select the most appropriate tag-lemma pair. Apart from the problems with out-of-lexicon items, error percolation is a major downside of such approaches. In this paper we explore the possibility of solving these tasks elegantly with a single, integrated approach. For this, we make use of a layered neural network architecture from the field of deep representation learning.
Tasks Lemmatization, Part-Of-Speech Tagging, Representation Learning
Published 2016-03-04
URL http://arxiv.org/abs/1603.01597v2
PDF http://arxiv.org/pdf/1603.01597v2.pdf
PWC https://paperswithcode.com/paper/integrated-sequence-tagging-for-medieval
Repo
Framework

Logical Induction

Title Logical Induction
Authors Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, Jessica Taylor
Abstract We present a computable algorithm that assigns probabilities to every logical statement in a given formal language, and refines those probabilities over time. For instance, if the language is Peano arithmetic, it assigns probabilities to all arithmetical statements, including claims about the twin prime conjecture, the outputs of long-running computations, and its own probabilities. We show that our algorithm, an instance of what we call a logical inductor, satisfies a number of intuitive desiderata, including: (1) it learns to predict patterns of truth and falsehood in logical statements, often long before having the resources to evaluate the statements, so long as the patterns can be written down in polynomial time; (2) it learns to use appropriate statistical summaries to predict sequences of statements whose truth values appear pseudorandom; and (3) it learns to have accurate beliefs about its own current beliefs, in a manner that avoids the standard paradoxes of self-reference. For example, if a given computer program only ever produces outputs in a certain range, a logical inductor learns this fact in a timely manner; and if late digits in the decimal expansion of $\pi$ are difficult to predict, then a logical inductor learns to assign $\approx 10\%$ probability to “the $n$th digit of $\pi$ is a 7” for large $n$. Logical inductors also learn to trust their future beliefs more than their current beliefs, and their beliefs are coherent in the limit (whenever $\phi \implies \psi$, $\mathbb{P}_\infty(\phi) \le \mathbb{P}_\infty(\psi)$, and so on); and logical inductors strictly dominate the universal semimeasure in the limit. These properties and many others all follow from a single logical induction criterion, which is motivated by a series of stock trading analogies. Roughly speaking, each logical sentence $\phi$ is associated with a stock that is worth $1 per share if […]
Tasks
Published 2016-09-12
URL http://arxiv.org/abs/1609.03543v4
PDF http://arxiv.org/pdf/1609.03543v4.pdf
PWC https://paperswithcode.com/paper/logical-induction
Repo
Framework

A Feature-Enriched Neural Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging

Title A Feature-Enriched Neural Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging
Authors Xinchi Chen, Xipeng Qiu, Xuanjing Huang
Abstract Recently, neural network models for natural language processing tasks have attracted increasing attention for their ability to alleviate the burden of manual feature engineering. However, previous neural models cannot extract the complicated feature compositions that traditional methods capture with discrete features. In this work, we propose a feature-enriched neural model for the joint Chinese word segmentation and part-of-speech tagging task. Specifically, to simulate the feature templates of traditional discrete-feature models, we use different filters to model complex compositional features with convolutional and pooling layers, and then capture long-distance dependency information with a recurrent layer. Experimental results on five different datasets show the effectiveness of our proposed model.
Tasks Chinese Word Segmentation, Feature Engineering, Part-Of-Speech Tagging
Published 2016-11-16
URL http://arxiv.org/abs/1611.05384v2
PDF http://arxiv.org/pdf/1611.05384v2.pdf
PWC https://paperswithcode.com/paper/a-feature-enriched-neural-model-for-joint
Repo
Framework
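
A hedged PyTorch sketch of the architecture shape the abstract describes: convolutional filters of several widths play the role of discrete feature templates, their outputs are composed and passed through a bidirectional recurrent layer, and each character receives a joint segmentation+POS tag. All sizes and the tag inventory are illustrative assumptions.

```python
# Hedged sketch of a conv + recurrent joint CWS/POS tagger, not the paper's code.
import torch
import torch.nn as nn

class ConvRNNTagger(nn.Module):
    def __init__(self, vocab=5000, emb=64, n_tags=120, widths=(1, 3, 5)):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        # One conv per width, mimicking feature templates of that span.
        self.convs = nn.ModuleList(
            nn.Conv1d(emb, 32, w, padding=w // 2) for w in widths)
        self.rnn = nn.GRU(32 * len(widths), 64, batch_first=True,
                          bidirectional=True)
        self.out = nn.Linear(128, n_tags)    # joint seg+POS tag per char

    def forward(self, chars):                # chars: (B, T) int ids
        x = self.emb(chars).transpose(1, 2)  # (B, emb, T)
        feats = torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)
        h, _ = self.rnn(feats.transpose(1, 2))
        return self.out(h)                   # (B, T, n_tags)

model = ConvRNNTagger()
print(model(torch.randint(0, 5000, (2, 10))).shape)  # torch.Size([2, 10, 120])
```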