July 29, 2019

3056 words 15 mins read

Paper Group ANR 93

Deep Reinforcement Learning for Multi-Resource Multi-Machine Job Scheduling. Comprehension-guided referring expressions. Deep Learning in the Automotive Industry: Applications and Tools. Anatomically Constrained Neural Networks (ACNN): Application to Cardiac Image Enhancement and Segmentation. Predictive Coding-based Deep Dynamic Neural Network for …

Deep Reinforcement Learning for Multi-Resource Multi-Machine Job Scheduling

Title Deep Reinforcement Learning for Multi-Resource Multi-Machine Job Scheduling
Authors Weijia Chen, Yuedong Xu, Xiaofeng Wu
Abstract Minimizing job scheduling time is a fundamental issue in data center networks that has been extensively studied in recent years. The incoming jobs require different CPU and memory units and span different numbers of time slots. The traditional solution is to design efficient heuristic algorithms with performance guarantees under certain assumptions. In this paper, we improve a recently proposed job scheduling algorithm using deep reinforcement learning and extend it to multiple server clusters. Our study reveals that the deep reinforcement learning method has the potential to outperform traditional resource allocation algorithms in a variety of complicated environments.
Tasks
Published 2017-11-20
URL http://arxiv.org/abs/1711.07440v1
PDF http://arxiv.org/pdf/1711.07440v1.pdf
PWC https://paperswithcode.com/paper/deep-reinforcement-learning-for-multi
Repo
Framework
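
A minimal REINFORCE-style sketch in the spirit of the entry above: a softmax policy picks which job to place on a single toy machine and is updated from placement rewards. The state encoding, reward, sizes, and single-machine setup are assumptions for illustration, not the paper's model (which extends the approach to multiple server clusters).

```python
# Toy policy-gradient scheduler: not the paper's architecture, just the flavour.
import numpy as np

rng = np.random.default_rng(0)
N_JOBS, N_RES = 5, 2
N_FEATURES = N_JOBS * N_RES + N_RES              # job demands + free machine capacity

def features(jobs, free):
    return np.concatenate([jobs.ravel(), free])  # flat state vector

def policy(theta, state):
    logits = theta @ state                       # one logit per job slot
    p = np.exp(logits - logits.max())
    return p / p.sum()

theta = rng.normal(scale=0.1, size=(N_JOBS, N_FEATURES))
alpha = 0.01
for episode in range(200):
    jobs = rng.uniform(0.1, 0.5, size=(N_JOBS, N_RES))    # CPU / memory demands
    free = np.ones(N_RES)                                  # machine capacity
    grads, rewards = [], []
    for _ in range(N_JOBS):
        s = features(jobs, free)
        p = policy(theta, s)
        a = rng.choice(N_JOBS, p=p)
        fits = bool(np.all(jobs[a] <= free))
        rewards.append(1.0 if fits else -1.0)              # reward valid placements
        if fits:
            free -= jobs[a]
            jobs[a] = 0.0
        onehot = np.zeros(N_JOBS)
        onehot[a] = 1.0
        grads.append(np.outer(onehot - p, s))              # grad of log softmax policy
    returns = np.cumsum(rewards[::-1])[::-1]               # return-to-go
    for g, ret in zip(grads, returns):
        theta += alpha * ret * g                           # REINFORCE update
```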

Comprehension-guided referring expressions

Title Comprehension-guided referring expressions
Authors Ruotian Luo, Gregory Shakhnarovich
Abstract We consider generation and comprehension of natural language referring expressions for objects in an image. Unlike generic “image captioning”, which lacks natural standard evaluation criteria, the quality of a referring expression may be measured by the receiver’s ability to correctly infer which object is being described. Following this intuition, we propose two approaches that utilize models trained for the comprehension task to generate better expressions. First, we use a comprehension module trained on human-generated expressions as a “critic” of the referring expression generator. The comprehension module serves as a differentiable proxy of human evaluation, providing a training signal to the generation module. Second, we use the comprehension module in a generate-and-rerank pipeline, which chooses from candidate expressions generated by a model according to their performance on the comprehension task. We show that both approaches lead to improved referring expression generation on multiple benchmark datasets.
Tasks Image Captioning
Published 2017-01-12
URL http://arxiv.org/abs/1701.03439v1
PDF http://arxiv.org/pdf/1701.03439v1.pdf
PWC https://paperswithcode.com/paper/comprehension-guided-referring-expressions
Repo
Framework
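
A hedged sketch of the generate-and-rerank pipeline described in the entry above: candidate expressions from a generator are re-scored by a comprehension model and the best-grounded one is kept. `generator`, `comprehender`, and the IoU-based grounding score are stand-ins, not the paper's actual modules.

```python
# Generate-and-rerank sketch with stub models; only the control flow is illustrated.
from typing import Callable, List

def generate_and_rerank(image, target_box,
                        generator: Callable[[object, object, int], List[str]],
                        comprehender: Callable[[object, str], object],
                        n_candidates: int = 10) -> str:
    candidates = generator(image, target_box, n_candidates)
    def grounding_score(expr: str) -> float:
        predicted_box = comprehender(image, expr)     # box the "listener" picks
        return iou(predicted_box, target_box)         # how well it identifies the target
    return max(candidates, key=grounding_score)

def iou(a, b) -> float:
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union > 0 else 0.0
```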

Deep Learning in the Automotive Industry: Applications and Tools

Title Deep Learning in the Automotive Industry: Applications and Tools
Authors Andre Luckow, Matthew Cook, Nathan Ashcraft, Edwin Weill, Emil Djerekarov, Bennie Vorster
Abstract Deep Learning refers to a set of machine learning techniques that utilize neural networks with many hidden layers for tasks such as image classification, speech recognition, and language understanding. Deep learning has proven to be very effective in these domains and is pervasively used by many Internet services. In this paper, we describe different automotive use cases for deep learning, in particular in the domain of computer vision. We survey the current state of the art in libraries, tools and infrastructures (e.g., GPUs and clouds) for implementing, training and deploying deep neural networks. We particularly focus on convolutional neural networks and computer vision use cases, such as the visual inspection process in manufacturing plants and the analysis of social media data. To train neural networks, curated and labeled datasets are essential; however, both the availability and scope of such datasets are typically very limited. A main contribution of this paper is the creation of an automotive dataset that allows us to learn and automatically recognize different vehicle properties. We describe an end-to-end deep learning application utilizing a mobile app for data collection and process support, and an Amazon-based cloud backend for storage and training. For training, we evaluate the use of cloud and on-premises infrastructures (including multiple GPUs) in conjunction with different neural network architectures and frameworks. We assess both the training times and the accuracy of the classifier. Finally, we demonstrate the effectiveness of the trained classifier in a real-world setting during the manufacturing process.
Tasks Image Classification, Speech Recognition
Published 2017-04-30
URL http://arxiv.org/abs/1705.00346v1
PDF http://arxiv.org/pdf/1705.00346v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-in-the-automotive-industry
Repo
Framework
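
For the visual-inspection style use cases above, the core building block is a CNN image classifier. The sketch below is a minimal PyTorch stand-in for training one on labeled vehicle images; the architecture, input size, class count, and random data are assumptions, not the networks or dataset evaluated in the paper.

```python
# Minimal CNN classifier sketch for vehicle-property recognition (illustrative only).
import torch
import torch.nn as nn

class VehiclePropertyNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 56 * 56, n_classes)  # for 224x224 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = VehiclePropertyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random tensors standing in for labeled vehicle images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
opt.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
opt.step()
```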

Anatomically Constrained Neural Networks (ACNN): Application to Cardiac Image Enhancement and Segmentation

Title Anatomically Constrained Neural Networks (ACNN): Application to Cardiac Image Enhancement and Segmentation
Authors Ozan Oktay, Enzo Ferrante, Konstantinos Kamnitsas, Mattias Heinrich, Wenjia Bai, Jose Caballero, Stuart Cook, Antonio de Marvao, Timothy Dawes, Declan O’Regan, Bernhard Kainz, Ben Glocker, Daniel Rueckert
Abstract Incorporation of prior knowledge about organ shape and location is key to improving the performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in most recent and promising techniques, such as CNN-based segmentation, it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learned non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improves the prediction accuracy of state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac datasets and public benchmarks. Additionally, we demonstrate how the learned deep models of 3D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.
Tasks Image Enhancement
Published 2017-05-22
URL http://arxiv.org/abs/1705.08302v4
PDF http://arxiv.org/pdf/1705.08302v4.pdf
PWC https://paperswithcode.com/paper/anatomically-constrained-neural-networks-acnn
Repo
Framework
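
A hedged sketch of the anatomically constrained training idea: the usual pixel-wise segmentation loss is augmented with a penalty in the latent space of a shape autoencoder, pushing predictions toward plausible anatomy. `shape_encoder` (assumed pretrained on ground-truth label maps), the cross-entropy base loss, and the weighting are illustrative choices rather than the paper's exact formulation.

```python
# Segmentation loss + latent-space shape penalty (illustrative sketch).
import torch
import torch.nn.functional as F

def acnn_loss(seg_logits, target, shape_encoder, weight: float = 0.01):
    # seg_logits: (B, C, H, W) network output; target: (B, H, W) integer label map.
    ce = F.cross_entropy(seg_logits, target)               # standard pixel-wise term
    pred_probs = F.softmax(seg_logits, dim=1)
    target_onehot = F.one_hot(target, seg_logits.shape[1]).permute(0, 3, 1, 2).float()
    with torch.no_grad():
        z_target = shape_encoder(target_onehot)            # latent code of ground truth
    z_pred = shape_encoder(pred_probs)                      # latent code of prediction
    shape_penalty = F.mse_loss(z_pred, z_target)            # pull prediction toward anatomy
    return ce + weight * shape_penalty
```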

Predictive Coding-based Deep Dynamic Neural Network for Visuomotor Learning

Title Predictive Coding-based Deep Dynamic Neural Network for Visuomotor Learning
Authors Jungsik Hwang, Jinhyung Kim, Ahmadreza Ahmadi, Minkyu Choi, Jun Tani
Abstract This study presents a dynamic neural network model based on the predictive coding framework for perceiving and predicting dynamic visuo-proprioceptive patterns. In our previous study [1], we showed that a deep dynamic neural network model was able to coordinate visual perception and action generation in a seamless manner. In the current study, we extended the previous model under the predictive coding framework to endow it with the capability of perceiving and predicting dynamic visuo-proprioceptive patterns, as well as of inferring the intention behind the perceived visuomotor information by minimizing prediction error. A set of synthetic experiments was conducted in which a robot learned to imitate the gestures of another robot in a simulation environment. The experimental results showed that, with given intention states, the model was able to mentally simulate the possible incoming dynamic visuo-proprioceptive patterns in a top-down process without inputs from the external environment. Moreover, the results highlighted the role of minimizing prediction error in inferring the underlying intention of the perceived visuo-proprioceptive patterns, supporting the predictive coding account of the mirror neuron system. The results also revealed that minimizing prediction error in one modality induced the recall of the corresponding representation of the other modality acquired during the consolidative learning of raw-level visuo-proprioceptive patterns.
Tasks
Published 2017-06-08
URL http://arxiv.org/abs/1706.02444v1
PDF http://arxiv.org/pdf/1706.02444v1.pdf
PWC https://paperswithcode.com/paper/predictive-coding-based-deep-dynamic-neural
Repo
Framework
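
A hedged sketch of intention inference by prediction-error minimization, the mechanism emphasized in the entry above: the network weights stay fixed and only a latent intention vector is optimized so that top-down predictions match the observed sequence. The `model` interface, dimensionality, and optimizer are assumptions.

```python
# Infer a latent intention by minimizing prediction error (illustrative sketch).
import torch

def infer_intention(model, observations, dim: int = 8, steps: int = 100, lr: float = 0.1):
    # model(intention, t) is assumed to return the top-down prediction for step t.
    intention = torch.zeros(dim, requires_grad=True)
    opt = torch.optim.Adam([intention], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        error = 0.0
        for t, obs in enumerate(observations):
            pred = model(intention, t)              # top-down prediction
            error = error + torch.sum((pred - obs) ** 2)
        error.backward()                            # prediction error drives inference
        opt.step()
    return intention.detach()
```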

Actor-Critic Sequence Training for Image Captioning

Title Actor-Critic Sequence Training for Image Captioning
Authors Li Zhang, Flood Sung, Feng Liu, Tao Xiang, Shaogang Gong, Yongxin Yang, Timothy M. Hospedales
Abstract Generating natural language descriptions of images is an important capability for a robot or other visual-intelligence-driven AI agent that may need to communicate with human users about what it is seeing. Such image captioning methods are typically trained by maximising the likelihood of the ground-truth annotated caption given the image. While simple and easy to implement, this approach does not directly maximise the language quality metrics we care about, such as CIDEr. In this paper we investigate training image captioning methods based on actor-critic reinforcement learning in order to directly optimise non-differentiable quality metrics of interest. By formulating a per-token advantage and value computation strategy in this novel reinforcement-learning-based captioning model, we show that it is possible to achieve state-of-the-art performance on the widely used MSCOCO benchmark.
Tasks Image Captioning
Published 2017-06-29
URL http://arxiv.org/abs/1706.09601v2
PDF http://arxiv.org/pdf/1706.09601v2.pdf
PWC https://paperswithcode.com/paper/actor-critic-sequence-training-for-image
Repo
Framework
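
A hedged sketch of the actor-critic training signal described above: the actor samples a caption, a sentence-level reward (e.g. CIDEr) is computed, and the critic's per-token value estimates turn it into per-token advantages. `actor`, `critic`, and `cider_reward` are stand-ins for the paper's models, and the loss shapes are simplified.

```python
# One actor-critic captioning step with stand-in components (illustrative only).
import torch

def actor_critic_step(actor, critic, image_feats, references, cider_reward):
    tokens, log_probs = actor.sample(image_feats)       # sampled caption + log pi per token
    values = critic(image_feats, tokens)                # value estimate per token
    reward = cider_reward(tokens, references)           # sentence-level reward
    advantages = reward - values                        # per-token advantage
    actor_loss = -(advantages.detach() * log_probs).sum()
    critic_loss = advantages.pow(2).sum()               # regress values onto the reward
    return actor_loss, critic_loss
```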

Location Dependent Dirichlet Processes

Title Location Dependent Dirichlet Processes
Authors Shiliang Sun, John Paisley, Qiuyang Liu
Abstract Dirichlet processes (DP) are widely applied in Bayesian nonparametric modeling. However, in their basic form they do not directly integrate dependency information among data arising from space and time. In this paper, we propose location dependent Dirichlet processes (LDDP), which incorporate nonparametric Gaussian processes in the DP modeling framework to model such dependencies. We develop the LDDP in the context of mixture modeling and derive a mean field variational inference algorithm for this mixture model. The effectiveness of the proposed modeling framework is shown on an image segmentation task.
Tasks Gaussian Processes, Semantic Segmentation
Published 2017-07-02
URL http://arxiv.org/abs/1707.00260v1
PDF http://arxiv.org/pdf/1707.00260v1.pdf
PWC https://paperswithcode.com/paper/location-dependent-dirichlet-processes
Repo
Framework
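
A hedged generative sketch of location-dependent mixing: per-component Gaussian-process draws over pixel locations are passed through a sigmoid link and stick-broken into local cluster weights, so nearby pixels favour the same component. The kernel, link function, and truncation level are illustrative assumptions, not the paper's exact LDDP construction or its variational inference.

```python
# Spatially dependent mixture weights via GP draws + stick-breaking (toy sketch).
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import expit

rng = np.random.default_rng(0)
K, side = 5, 20                                   # truncation level, grid size
xy = np.stack(np.meshgrid(np.arange(side), np.arange(side)), -1).reshape(-1, 2)

# Squared-exponential GP covariance over pixel locations (plus jitter).
cov = np.exp(-cdist(xy, xy, "sqeuclidean") / (2 * 5.0 ** 2)) + 1e-6 * np.eye(len(xy))
L = np.linalg.cholesky(cov)
g = L @ rng.standard_normal((len(xy), K))         # K latent GP draws

# Stick-breaking through a sigmoid link gives location-dependent weights.
v = expit(g)
weights = v * np.cumprod(np.hstack([np.ones((len(xy), 1)), 1 - v[:, :-1]]), axis=1)
assignments = np.array([rng.choice(K, p=w / w.sum()) for w in weights])
labels = assignments.reshape(side, side)          # spatially coherent cluster map
```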

Improved underwater image enhancement algorithms based on partial differential equations (PDEs)

Title Improved underwater image enhancement algorithms based on partial differential equations (PDEs)
Authors U. A. Nnolim
Abstract The experimental results of improved underwater image enhancement algorithms based on partial differential equations (PDEs) are presented in this report. This second work extends the previous study and incorporates several improvements into the revised algorithm. Experiments show evidence of the improvements when compared with previously proposed approaches and other conventional algorithms found in the literature.
Tasks Image Enhancement
Published 2017-04-29
URL http://arxiv.org/abs/1705.04272v1
PDF http://arxiv.org/pdf/1705.04272v1.pdf
PWC https://paperswithcode.com/paper/improved-underwater-image-enhancement
Repo
Framework
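
A hedged sketch of a PDE-style enhancement step: the image is blurred by a few explicit heat-equation iterations and the lost detail is added back, a generic diffusion-based detail boost rather than the report's actual algorithm.

```python
# Generic PDE-based enhancement sketch: heat-equation blur + detail restoration.
import numpy as np

def laplacian(u):
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def heat_smooth(u, steps: int = 10, dt: float = 0.1):
    for _ in range(steps):
        u = u + dt * laplacian(u)                 # explicit heat-equation step
    return u

def pde_enhance(img, amount: float = 1.0):
    u = img.astype(float)
    blurred = heat_smooth(u.copy())
    detail = u - blurred                          # high-frequency detail lost to diffusion
    return np.clip(u + amount * detail, 0.0, 1.0)

enhanced = pde_enhance(np.random.rand(64, 64))    # random stand-in for an underwater frame
```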

Active learning in annotating micro-blogs dealing with e-reputation

Title Active learning in annotating micro-blogs dealing with e-reputation
Authors Jean-Valère Cossu, Alejandro Molina-Villegas, Mariana Tello-Signoret
Abstract Elections unleash strong political views on Twitter, but what do people really think about politics? Opinion and trend mining on micro-blogs dealing with politics has recently attracted researchers in several fields, including Information Retrieval and Machine Learning (ML). Since the performance of ML and Natural Language Processing (NLP) approaches is limited by the amount and quality of data available, one promising alternative for some tasks is the automatic propagation of expert annotations. This paper develops a so-called active learning process for automatically annotating French-language tweets that deal with the image (i.e., representation, web reputation) of politicians. Our main focus is on the methodology followed to build an original annotated dataset expressing opinion about two French politicians over time. We therefore review state-of-the-art NLP-based ML algorithms to automatically annotate tweets, using a manual initiation step as a bootstrap. The paper focuses on key issues in active learning when building a large annotated dataset in the presence of noise, which is introduced by human annotators, the abundance of data, and the label distribution across data and entities. In turn, we show that Twitter characteristics such as the author’s name or hashtags can serve as a bearing point not only to improve automatic systems for Opinion Mining (OM) and Topic Classification but also to reduce noise in human annotations. However, a later thorough analysis shows that reducing noise might induce the loss of crucial information.
Tasks Active Learning, Information Retrieval, Opinion Mining
Published 2017-06-16
URL http://arxiv.org/abs/1706.05349v4
PDF http://arxiv.org/pdf/1706.05349v4.pdf
PWC https://paperswithcode.com/paper/active-learning-in-annotating-micro-blogs
Repo
Framework
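
A hedged sketch of the active-learning loop behind the annotation methodology above: start from a small manually labeled seed, train a classifier, and repeatedly query the annotator for the tweets the model is least certain about. The toy tweets, the logistic-regression model, and the `true_labels` oracle are stand-ins for the French political tweets and human annotators.

```python
# Uncertainty-sampling active learning loop on toy tweet data (illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["great speech tonight", "what a disaster of a policy", "no comment",
          "brilliant debate performance", "worst interview ever", "meeting at noon"]
true_labels = [1, 0, 0, 1, 0, 0]                 # hidden opinions (toy annotator oracle)
labeled = {0: 1, 1: 0}                           # manually annotated seed set

vec = TfidfVectorizer()
X = vec.fit_transform(tweets)

for _ in range(3):                               # a few active-learning rounds
    idx = sorted(labeled)
    clf = LogisticRegression().fit(X[idx], [labeled[i] for i in idx])
    unlabeled = [i for i in range(len(tweets)) if i not in labeled]
    if not unlabeled:
        break
    probs = clf.predict_proba(X[unlabeled])
    margins = np.abs(probs[:, 1] - 0.5)          # uncertainty = closeness to 0.5
    query = unlabeled[int(np.argmin(margins))]
    labeled[query] = true_labels[query]          # ask the human annotator for this tweet
```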

Agglomerative Info-Clustering

Title Agglomerative Info-Clustering
Authors Chung Chan, Ali Al-Bashabsheh, Qiaoqiao Zhou
Abstract An agglomerative clustering of random variables is proposed, where clusters of random variables sharing the maximum amount of multivariate mutual information are merged successively to form larger clusters. Compared to the previous info-clustering algorithms, the agglomerative approach allows the computation to stop earlier when clusters of desired size and accuracy are obtained. An efficient algorithm is also derived based on the submodularity of entropy and the duality between the principal sequence of partitions and the principal sequence for submodular functions.
Tasks
Published 2017-01-18
URL http://arxiv.org/abs/1701.04926v3
PDF http://arxiv.org/pdf/1701.04926v3.pdf
PWC https://paperswithcode.com/paper/agglomerative-info-clustering
Repo
Framework
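
A toy sketch of the agglomerative control flow: clusters of random variables are merged greedily by the largest shared information, estimated here as average pairwise Gaussian mutual information from sample correlations. The paper merges by multivariate mutual information with submodular machinery; this pairwise surrogate only illustrates the agglomeration loop.

```python
# Greedy agglomeration of random variables by a pairwise MI surrogate (toy sketch).
import numpy as np

def gaussian_mi(x, y):
    r = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1 - min(r ** 2, 0.999))  # MI under a Gaussian assumption

def agglomerate(samples, target_clusters: int = 2):
    # samples: (n_variables, n_observations)
    clusters = [[i] for i in range(samples.shape[0])]
    while len(clusters) > target_clusters:
        best, pair = -np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                mi = np.mean([gaussian_mi(samples[i], samples[j])
                              for i in clusters[a] for j in clusters[b]])
                if mi > best:
                    best, pair = mi, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)            # merge the most informative pair
    return clusters

rng = np.random.default_rng(0)
z = rng.standard_normal(500)
data = np.vstack([z + 0.1 * rng.standard_normal(500), z + 0.1 * rng.standard_normal(500),
                  rng.standard_normal(500), rng.standard_normal(500)])
print(agglomerate(data))                          # the correlated pair should merge first
```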

Locally-adapted convolution-based super-resolution of irregularly-sampled ocean remote sensing data

Title Locally-adapted convolution-based super-resolution of irregularly-sampled ocean remote sensing data
Authors Manuel López-Radcenco, Ronan Fablet, Abdeldjalil Aïssa-El-Bey, Pierre Ailliot
Abstract Super-resolution is a classical problem in image processing, with numerous applications to remote sensing image enhancement. Here, we address the super-resolution of irregularly-sampled remote sensing images. Using an optimal interpolation as the low-resolution reconstruction, we explore locally-adapted multimodal convolutional models and investigate different dictionary-based decompositions, namely based on principal component analysis (PCA), sparse priors and non-negativity constraints. We consider an application to the reconstruction of sea surface height (SSH) fields from two information sources, along-track altimeter data and sea surface temperature (SST) data. The reported experiments demonstrate the relevance of the proposed model, especially locally-adapted parametrizations with non-negativity constraints, to outperform optimally-interpolated reconstructions.
Tasks Image Enhancement, Super-Resolution
Published 2017-04-07
URL http://arxiv.org/abs/1704.02162v2
PDF http://arxiv.org/pdf/1704.02162v2.pdf
PWC https://paperswithcode.com/paper/locally-adapted-convolution-based-super
Repo
Framework
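
A hedged toy sketch of the dictionary-based residual regression with non-negativity constraints mentioned above: features from a second modality (a stand-in for SST patches) are combined with non-negative weights to predict the residual between the optimally interpolated field and the high-resolution target. The data, sizes, and the use of scipy's `nnls` solver are illustrative assumptions.

```python
# Non-negative fit of a residual from second-modality features (toy sketch).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_samples, n_atoms = 200, 10
D = rng.random((n_samples, n_atoms))              # features from the second modality
w_true = rng.random(n_atoms)
residual = D @ w_true + 0.01 * rng.standard_normal(n_samples)  # target minus interpolation

w, _ = nnls(D, residual)                          # non-negative combination weights
print(np.round(w - w_true, 2))                    # recovered weights close to the truth
```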

A Structured Approach to Predicting Image Enhancement Parameters

Title A Structured Approach to Predicting Image Enhancement Parameters
Authors Parag S. Chandakkar, Baoxin Li
Abstract Social networking on mobile devices has become a commonplace part of everyday life. In addition, the photo-capturing process has become trivial due to advances in mobile imaging. Hence, people capture many photos every day and want them to be visually attractive. This has given rise to automated, one-touch enhancement tools. However, the inability of those tools to provide personalized and content-adaptive enhancement has paved the way for machine-learned methods to do the same. Typical existing machine-learned methods heuristically (e.g., via kNN search) predict the enhancement parameters for a new image by relating the image to a set of similar training images. These heuristic methods need constant interaction with the training images, which makes the parameter prediction sub-optimal and computationally expensive at test time. This paper presents a novel approach to predicting the enhancement parameters for a new image using only its features, without using any training images. We propose to model the interaction between the image features and the corresponding enhancement parameters using matrix factorization (MF) principles. We also propose a way to integrate the image features into the MF formulation. We show that our approach outperforms heuristic approaches as well as recent approaches in MF and structured prediction on synthetic as well as real-world image enhancement data.
Tasks Image Enhancement, Structured Prediction
Published 2017-04-05
URL http://arxiv.org/abs/1704.01249v1
PDF http://arxiv.org/pdf/1704.01249v1.pdf
PWC https://paperswithcode.com/paper/a-structured-approach-to-predicting-image
Repo
Framework
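
A hedged sketch of the core idea: model the interaction between image features and enhancement parameters with a low-rank, matrix-factorization-style mapping, so a new image needs only its features at test time. The alternating-least-squares fit, rank, and synthetic data below are illustrative assumptions, not the paper's exact formulation.

```python
# Low-rank feature-to-parameter mapping fitted by alternating least squares (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
n_images, n_feat, n_params, rank = 300, 20, 5, 3
F = rng.standard_normal((n_images, n_feat))                  # image features
P = F @ rng.standard_normal((n_feat, n_params))              # "ground truth" parameters

U = rng.standard_normal((n_feat, rank))                      # mapping = U @ V (low rank)
V = rng.standard_normal((rank, n_params))
for _ in range(50):                                          # alternating least squares
    V = np.linalg.lstsq(F @ U, P, rcond=None)[0]             # solve for V with U fixed
    U = np.linalg.lstsq(F, P @ np.linalg.pinv(V), rcond=None)[0]  # then for U with V fixed

new_feat = rng.standard_normal(n_feat)
predicted_params = new_feat @ U @ V                          # no training images needed
```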

Dense Associative Memory is Robust to Adversarial Inputs

Title Dense Associative Memory is Robust to Adversarial Inputs
Authors Dmitry Krotov, John J Hopfield
Abstract Deep neural networks (DNN) trained in a supervised way suffer from two known problems. First, the minima of the objective function used in learning correspond to data points (also known as rubbish examples or fooling images) that lack semantic similarity with the training data. Second, a clean input can be changed by a small, and often imperceptible to human vision, perturbation, so that the resulting deformed input is misclassified by the network. These findings emphasize the differences between the ways DNN and humans classify patterns, and raise the question of designing learning algorithms that more accurately mimic human perception compared to existing methods. Our paper examines these questions within the framework of Dense Associative Memory (DAM) models. These models are defined by an energy function with higher-order (higher than quadratic) interactions between the neurons. We show that in the limit when the power of the interaction vertex in the energy function is sufficiently large, these models have the following three properties. First, the minima of the objective function are free from rubbish images, so that each minimum is a semantically meaningful pattern. Second, artificial patterns poised precisely at the decision boundary look ambiguous to human subjects and share aspects of both classes that are separated by that decision boundary. Third, adversarial images constructed by models with a small power of the interaction vertex, which are equivalent to DNN with rectified linear units (ReLU), fail to transfer to and fool the models with higher-order interactions. This opens up the possibility of using higher-order models for detecting and stopping malicious adversarial attacks. The presented results suggest that DAM with higher-order energy functions are closer to human visual perception than DNN with ReLUs.
Tasks Semantic Similarity, Semantic Textual Similarity
Published 2017-01-04
URL http://arxiv.org/abs/1701.00939v1
PDF http://arxiv.org/pdf/1701.00939v1.pdf
PWC https://paperswithcode.com/paper/dense-associative-memory-is-robust-to
Repo
Framework
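
A hedged sketch of a Dense Associative Memory with a higher-order interaction vertex F(x) = x^n: for larger n the stored patterns become sharper, better-separated minima of the energy. The greedy sign-flipping recall, pattern count, and n = 3 below are illustrative choices, not the exact setup of the paper.

```python
# Dense Associative Memory energy with a higher-order vertex, plus greedy recall.
import numpy as np

rng = np.random.default_rng(0)
N, n_patterns, power = 100, 10, 3
xi = rng.choice([-1.0, 1.0], size=(n_patterns, N))            # stored memories

def energy(sigma, n=power):
    return -np.sum((xi @ sigma) ** n)                          # E = -sum_mu F(xi_mu . sigma)

def recall(sigma, sweeps: int = 5):
    sigma = sigma.copy()
    for _ in range(sweeps):
        for i in range(N):
            for s in (-1.0, 1.0):                              # keep the lower-energy sign
                trial = sigma.copy()
                trial[i] = s
                if energy(trial) < energy(sigma):
                    sigma = trial
    return sigma

probe = xi[0] * np.where(rng.random(N) < 0.2, -1.0, 1.0)       # corrupt ~20% of the bits
print(np.mean(recall(probe) == xi[0]))                          # should be close to 1.0
```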

Joint Regression and Ranking for Image Enhancement

Title Joint Regression and Ranking for Image Enhancement
Authors Parag S. Chandakkar, Baoxin Li
Abstract Research on automated image enhancement has gained momentum in recent years, partially due to the need for easy-to-use tools for enhancing pictures captured by ubiquitous cameras on mobile devices. Many of the existing leading methods employ machine-learning-based techniques, by which the enhancement parameters for a given image are found by relating the image to training images with known enhancement parameters. While knowing the structure of the parameter space can facilitate the search for the optimal solution, none of the existing methods has explicitly modeled and learned that structure. This paper presents an end-to-end, novel joint regression and ranking approach to model the interaction between desired enhancement parameters and the images to be processed, employing a Gaussian process (GP). The GP allows searching for ideal parameters using only the image features. The model naturally leads to a ranking technique for comparing images in the induced feature space. Comparative evaluation using ground truth based on the MIT-Adobe FiveK dataset, plus subjective tests on an additional dataset, was used to demonstrate the effectiveness of the proposed approach.
Tasks Image Enhancement
Published 2017-04-05
URL http://arxiv.org/abs/1704.01235v1
PDF http://arxiv.org/pdf/1704.01235v1.pdf
PWC https://paperswithcode.com/paper/joint-regression-and-ranking-for-image
Repo
Framework
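
A hedged sketch of the Gaussian-process component: a GP maps image features to enhancement parameters, so ideal parameters for a new image can be searched for from its features alone, and the predictive uncertainty can support comparisons in the induced feature space. The toy features, kernel, and single-parameter target are assumptions, and scikit-learn's `GaussianProcessRegressor` stands in for the paper's GP model.

```python
# GP regression from image features to an enhancement parameter (toy sketch).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
features = rng.random((100, 6))                       # e.g. colour/contrast statistics
params = np.sin(features @ rng.random(6))[:, None]    # one synthetic enhancement parameter

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
gp.fit(features, params)

new_image_features = rng.random((1, 6))
predicted_param, std = gp.predict(new_image_features, return_std=True)
# `std` could drive the ranking/comparison of candidate enhancements in feature space.
```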

Understanding food inflation in India: A Machine Learning approach

Title Understanding food inflation in India: A Machine Learning approach
Authors Akash Malhotra, Mayank Maloo
Abstract Over the past decade, the stellar growth of the Indian economy has been challenged by persistently high levels of inflation, particularly in food prices. The primary reason behind this stubborn food inflation is a mismatch between supply and demand, as domestic agricultural production has failed to keep up with rising demand owing to a number of proximate factors. The relative significance of these factors in determining the change in food prices has been analysed using gradient boosted regression trees (BRT), a machine learning technique. The results from BRT indicate that all predictor variables are fairly significant in explaining the change in food prices, with MSP and farm wages being relatively more important than others. International food prices were found to have limited relevance in explaining the variation in domestic food prices. The challenge of ensuring food and nutritional security for the growing Indian population with rising incomes needs to be addressed through resolute policy reforms.
Tasks
Published 2017-01-30
URL http://arxiv.org/abs/1701.08789v1
PDF http://arxiv.org/pdf/1701.08789v1.pdf
PWC https://paperswithcode.com/paper/understanding-food-inflation-in-india-a
Repo
Framework
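
A hedged sketch of the BRT analysis: a gradient-boosted regression tree model is fit to candidate drivers of food-price changes and relative variable importance is read off. The variable names and synthetic monthly data below are stand-ins, not the paper's actual series or results.

```python
# Gradient-boosted regression trees + feature importances on toy data (illustrative only).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 120                                              # e.g. monthly observations
X = pd.DataFrame({
    "msp": rng.random(n),                            # minimum support price (stand-in)
    "farm_wages": rng.random(n),
    "intl_food_prices": rng.random(n),
    "fuel_prices": rng.random(n),
})
y = 2.0 * X["msp"] + 1.5 * X["farm_wages"] + 0.1 * rng.standard_normal(n)

brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
brt.fit(X, y)
for name, imp in sorted(zip(X.columns, brt.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:18s} {imp:.2f}")                   # relative importance of each driver
```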