April 2, 2020

Paper Group ANR 237

An interpretable neural network model through piecewise linear approximation. A Maximum Likelihood Approach to Speed Estimation of Foreground Objects in Video Signals. Learning to Optimize Non-Rigid Tracking. Designing for the Long Tail of Machine Learning. Solving Area Coverage Problem with UAVs: A Vehicle Routing with Time Windows Variation. Deep …

An interpretable neural network model through piecewise linear approximation

Title An interpretable neural network model through piecewise linear approximation
Authors Mengzhuo Guo, Qingpeng Zhang, Xiuwu Liao, Daniel Dajun Zeng
Abstract Most existing interpretable methods explain a black-box model in a post-hoc manner, using simpler models or data analysis techniques to interpret the predictions after the model is learned. However, they (a) may derive contradictory explanations for the same predictions given different methods and data samples, and (b) focus on using simpler models to provide higher descriptive accuracy at the expense of prediction accuracy. To address these issues, we propose a hybrid interpretable model that combines a piecewise linear component and a nonlinear component. The first component describes the explicit feature contributions by piecewise linear approximation to increase the expressiveness of the model. The other component uses a multi-layer perceptron to capture feature interactions and implicit nonlinearity, and increases the prediction performance. Unlike post-hoc approaches, interpretability is obtained, in the form of feature shapes, as soon as the model is learned. We also provide a variant to explore higher-order interactions among features to demonstrate that the proposed model is flexible for adaptation. Experiments demonstrate that the proposed model can achieve good interpretability by describing feature shapes while maintaining state-of-the-art accuracy. (An illustrative architectural sketch follows this entry.)
Tasks
Published 2020-01-20
URL https://arxiv.org/abs/2001.07119v1
PDF https://arxiv.org/pdf/2001.07119v1.pdf
PWC https://paperswithcode.com/paper/an-interpretable-neural-network-model-through
Repo
Framework
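
To make the architecture above concrete, here is a minimal PyTorch sketch of a hybrid predictor that sums explicit per-feature piecewise linear contributions with the output of a small multi-layer perceptron. The class names, the uniformly spaced knots, and the assumption of min-max-scaled inputs are illustrative choices of this post, not the authors' implementation.

```python
# Minimal PyTorch sketch of a hybrid "piecewise linear + MLP" predictor.
# Hypothetical layer/variable names; not the authors' implementation.
import torch
import torch.nn as nn

class PiecewiseLinear(nn.Module):
    """Per-feature piecewise linear function: a linear term plus ReLU hinges at fixed knots."""
    def __init__(self, n_features: int, n_knots: int = 8):
        super().__init__()
        # Knots placed uniformly in [0, 1]; assumes inputs are min-max scaled.
        self.register_buffer("knots", torch.linspace(0.0, 1.0, n_knots).view(1, 1, n_knots))
        self.linear = nn.Parameter(torch.zeros(n_features))
        self.hinge = nn.Parameter(torch.zeros(n_features, n_knots))

    def forward(self, x):                                     # x: (batch, n_features)
        hinges = torch.relu(x.unsqueeze(-1) - self.knots)     # (batch, n_features, n_knots)
        return x * self.linear + (hinges * self.hinge).sum(-1)  # per-feature contributions ("feature shapes")

class HybridModel(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.pwl = PiecewiseLinear(n_features)
        self.mlp = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # Interpretable part: sum of explicit per-feature contributions.
        # Black-box part: an MLP that captures feature interactions.
        return self.pwl(x).sum(-1, keepdim=True) + self.mlp(x) + self.bias

pred = HybridModel(n_features=10)(torch.rand(32, 10))   # (32, 1)
```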

A Maximum Likelihood Approach to Speed Estimation of Foreground Objects in Video Signals

Title A Maximum Likelihood Approach to Speed Estimation of Foreground Objects in Video Signals
Authors Veronica Mattioli, Davide Alinovi, Riccardo Raheli
Abstract Motion and speed estimation play a key role in computer vision and video processing for various application scenarios. Existing algorithms are mainly based on projected and apparent motion models and are currently used in many contexts, such as automotive security and driver assistance, industrial automation and inspection systems, video surveillance, human activity tracking techniques and biomedical solutions, including monitoring of vital signs. In this paper, a general Maximum Likelihood (ML) approach to speed estimation of foreground objects in video streams is proposed. Application examples are presented and the performance of the proposed algorithms is discussed and compared with more conventional solutions. (A toy sketch of maximum-likelihood displacement estimation follows this entry.)
Tasks
Published 2020-03-10
URL https://arxiv.org/abs/2003.04883v1
PDF https://arxiv.org/pdf/2003.04883v1.pdf
PWC https://paperswithcode.com/paper/a-maximum-likelihood-approach-to-speed
Repo
Framework
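
As a toy illustration of maximum-likelihood motion estimation, the NumPy sketch below searches for the inter-frame displacement that maximizes the likelihood under an assumed additive white Gaussian noise model (equivalent to minimizing the sum of squared differences). The function names and the exhaustive shift search are illustrative; the paper's observation models and estimators may differ, and converting pixel speed to physical speed requires camera calibration.

```python
# Toy NumPy sketch: maximum-likelihood displacement search between two frames,
# assuming additive white Gaussian noise (so ML == minimum sum of squared differences).
# Illustrative only; the paper's models and estimators may differ.
import numpy as np

def ml_displacement(prev_frame, next_frame, max_shift=10):
    """Return the (dy, dx) shift of next_frame w.r.t. prev_frame that maximizes the likelihood."""
    best, best_ll = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(next_frame, -dy, axis=0), -dx, axis=1)
            ll = -np.sum((shifted - prev_frame) ** 2)   # log-likelihood up to constants
            if ll > best_ll:
                best, best_ll = (dy, dx), ll
    return best

def speed_px_per_s(displacement, frame_rate_hz):
    """Convert a per-frame pixel displacement into a speed in pixels/second."""
    return float(np.hypot(*displacement)) * frame_rate_hz

rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (2, 3), axis=(0, 1)) + 0.01 * rng.standard_normal((64, 64))
print(ml_displacement(frame_a, frame_b))   # close to (2, 3)
```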

Learning to Optimize Non-Rigid Tracking

Title Learning to Optimize Non-Rigid Tracking
Authors Yang Li, Aljaž Božič, Tianwei Zhang, Yanli Ji, Tatsuya Harada, Matthias Nießner
Abstract One of the widespread solutions for non-rigid tracking has a nested-loop structure: Gauss-Newton minimizes a tracking objective in the outer loop, and Preconditioned Conjugate Gradient (PCG) solves a sparse linear system in the inner loop. In this paper, we employ learnable optimizations to improve tracking robustness and speed up solver convergence. First, we upgrade the tracking objective by integrating an alignment data term on deep features which are learned end-to-end through a CNN. The new tracking objective can capture the global deformation, which helps Gauss-Newton jump over local minima and leads to robust tracking of large non-rigid motions. Second, we bridge the gap between the preconditioning technique and the learning method by introducing a ConditionNet which is trained to generate a preconditioner such that PCG can converge within a small number of steps. Experimental results indicate that the proposed learning method converges faster than the original PCG by a large margin. (A minimal PCG sketch with a stand-in preconditioner follows this entry.)
Tasks
Published 2020-03-27
URL https://arxiv.org/abs/2003.12230v1
PDF https://arxiv.org/pdf/2003.12230v1.pdf
PWC https://paperswithcode.com/paper/learning-to-optimize-non-rigid-tracking
Repo
Framework
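
The inner-loop solver can be summarized with a short NumPy sketch of Preconditioned Conjugate Gradient. In the paper the preconditioner is produced by the learned ConditionNet; here a plain Jacobi (inverse-diagonal) preconditioner stands in for it, and the matrix, tolerance, and function names are illustrative.

```python
# Minimal NumPy sketch of Preconditioned Conjugate Gradient (PCG).
# The preconditioner M_inv would be produced by a learned model (e.g. a ConditionNet);
# here a Jacobi (inverse-diagonal) preconditioner stands in for it.
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=100):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# Example with a stand-in Jacobi preconditioner:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = np.diag(1.0 / np.diag(A))
print(pcg(A, b, M_inv))
```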

Designing for the Long Tail of Machine Learning

Title Designing for the Long Tail of Machine Learning
Authors Martin Lindvall, Jesper Molin
Abstract Recent technical advances have made machine learning (ML) a promising component to include in end-user-facing systems. However, user experience (UX) practitioners face challenges in relating ML to existing user-centered design processes and in navigating the possibilities and constraints of this design space. Drawing on our own experience, we characterize designing within this space as navigating trade-offs between data gathering, model development, and designing valuable interactions for a given model performance. We suggest that the theoretical description of how machine learning performance scales with training data can guide designers in these trade-offs, and that it also has implications for prototyping. We exemplify the use of the learning curve by arguing that a useful pattern is to design an initial system in a bootstrap phase that aims to exploit the training effect of data collected at increasing orders of magnitude. (A sketch of fitting such a learning curve follows this entry.)
Tasks
Published 2020-01-21
URL https://arxiv.org/abs/2001.07455v1
PDF https://arxiv.org/pdf/2001.07455v1.pdf
PWC https://paperswithcode.com/paper/designing-for-the-long-tail-of-machine
Repo
Framework
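
A common way to make the scaling argument concrete is to fit an empirical power-law learning curve, error(n) ≈ a + b * n^(-c), and extrapolate it; the sketch below does this with SciPy. The functional form, the example measurements, and the initial guesses are assumptions of this post, not something prescribed by the paper.

```python
# Sketch: fit an empirical power-law learning curve error(n) ~ a + b * n**(-c)
# and extrapolate how the error might evolve with more data.
# The functional form and the example numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, a, b, c):
    return a + b * np.power(n, -c)

# Hypothetical measurements: (training set size, validation error).
sizes = np.array([100, 300, 1000, 3000, 10000], dtype=float)
errors = np.array([0.42, 0.31, 0.22, 0.17, 0.14])

params, _ = curve_fit(learning_curve, sizes, errors, p0=[0.05, 2.0, 0.4], maxfev=10000)
a, b, c = params
print("predicted error at 100k samples:", learning_curve(1e5, a, b, c))
```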

Solving Area Coverage Problem with UAVs: A Vehicle Routing with Time Windows Variation

Title Solving Area Coverage Problem with UAVs: A Vehicle Routing with Time Windows Variation
Authors Fatih Semiz, Faruk Polat
Abstract In real life, providing security for a set of large areas by covering them with Unmanned Aerial Vehicles (UAVs) is a difficult problem that consists of multiple objectives. These difficulties are even greater if the area coverage must continue throughout a specific time window. We address this by considering a Vehicle Routing Problem with Time Windows (VRPTW) variation in which the capacity of each agent is one and each customer (target area) must be supplied by more than one vehicle simultaneously without violating the time windows. In this problem, our aim is to find a way to cover all areas with the necessary number of UAVs during the time windows, minimize the total distance traveled, and provide a fast solution while satisfying the additional constraint that each agent has limited fuel. We present a novel algorithm that relies on clustering the target areas according to their time windows and then incrementally generating transportation problems from each cluster and the ready UAVs. We then solve the transportation problems with the simplex algorithm to generate the solution. The performance of the proposed algorithm is evaluated on example scenarios of practical size and compared, in terms of solution quality, with other implemented algorithms. (A small transportation-problem sketch follows this entry.)
Tasks
Published 2020-03-16
URL https://arxiv.org/abs/2003.07124v1
PDF https://arxiv.org/pdf/2003.07124v1.pdf
PWC https://paperswithcode.com/paper/solving-area-coverage-problem-with-uavs-a
Repo
Framework
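
The transportation subproblem mentioned above can be written as a small linear program; the sketch below solves one with SciPy's linprog, with depots supplying UAVs and target areas demanding them. The cost matrix, supplies, and demands are made-up illustrative numbers, and the paper's clustering step and exact formulation are not reproduced.

```python
# Sketch: a small balanced transportation problem solved as a linear program with SciPy.
# Rows = UAV depots (supply), columns = target areas (demand), costs = travel distances.
# Illustrative only; the paper's clustering step and exact formulation are not reproduced.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],     # depot 0 -> areas 0..2
                 [5.0, 3.0, 7.0]])    # depot 1 -> areas 0..2
supply = np.array([3, 2])             # UAVs available per depot
demand = np.array([2, 2, 1])          # UAVs required per area (sums match the supply)

m, n = cost.shape
# Equality constraints: each depot ships exactly its supply,
# and each area receives exactly its demand.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([supply, demand])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
assignment = res.x.reshape(m, n)   # vertex solutions of a balanced transportation LP are integral
print(assignment)
```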

Deep Learning and Statistical Models for Time-Critical Pedestrian Behaviour Prediction

Title Deep Learning and Statistical Models for Time-Critical Pedestrian Behaviour Prediction
Authors Joel Janek Dabrowski, Johan Pieter de Villiers, Ashfaqur Rahman, Conrad Beyers
Abstract The time it takes for a classifier to make an accurate prediction can be crucial in many behaviour recognition problems. For example, an autonomous vehicle should detect hazardous pedestrian behaviour early enough for it to take appropriate measures. In this context, we compare the switching linear dynamical system (SLDS) and a three-layered bi-directional long short-term memory (LSTM) neural network, which are applied to infer pedestrian behaviour from motion tracks. We show that, though the neural network model achieves an accuracy of 80%, it requires long sequences to achieve this (100 samples or more). The SLDS has a lower accuracy of 74%, but it achieves this result with short sequences (10 samples). To our knowledge, such a comparison on sequence length has not been considered in the literature before. The results provide a key intuition about the suitability of the models in time-critical problems. (A generic bi-directional LSTM classifier sketch follows this entry.)
Tasks
Published 2020-02-26
URL https://arxiv.org/abs/2002.11226v1
PDF https://arxiv.org/pdf/2002.11226v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-and-statistical-models-for-time
Repo
Framework
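
For reference, a three-layered bi-directional LSTM sequence classifier of the kind compared above can be sketched in a few lines of PyTorch. The feature dimension, hidden size, and classification from the last time step are hypothetical choices, not the authors' exact configuration.

```python
# Minimal PyTorch sketch of a three-layer bi-directional LSTM sequence classifier,
# roughly matching the model class described above. Hyperparameters are hypothetical.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=3,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, tracks):                 # tracks: (batch, time, n_features)
        out, _ = self.lstm(tracks)             # (batch, time, 2*hidden)
        return self.head(out[:, -1])           # classify from the last time step

logits = BiLSTMClassifier()(torch.randn(8, 100, 4))   # e.g. 100-sample motion tracks
```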

JS-son – A Lean, Extensible JavaScript Agent Programming Library

Title JS-son – A Lean, Extensible JavaScript Agent Programming Library
Authors Timotheus Kampik, Juan Carlos Nieves
Abstract A multitude of agent-oriented software engineering frameworks exist, most of which are developed by the academic multi-agent systems community. However, these frameworks often impose programming paradigms on their users that are challenging to learn for engineers who are used to modern high-level programming languages such as JavaScript and Python. To show how the adoption of agent-oriented programming by the software engineering mainstream can be facilitated, we provide a lean JavaScript library prototype for implementing reasoning-loop agents. The library focuses on core agent programming concepts and refrains from imposing further restrictions on the programming approach. To illustrate its usefulness, we show how the library can be applied to multi-agent systems simulations on the web, deployed to cloud-hosted function-as-a-service environments, and embedded in Python-based data science tools. (A language-agnostic reasoning-loop sketch, in Python rather than JS-son's JavaScript API, follows this entry.)
Tasks
Published 2020-03-10
URL https://arxiv.org/abs/2003.04690v1
PDF https://arxiv.org/pdf/2003.04690v1.pdf
PWC https://paperswithcode.com/paper/js-son-a-lean-extensible-javascript-agent
Repo
Framework
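
JS-son itself is a JavaScript library, so the sketch below does not use its API; it only illustrates, in Python for consistency with the other sketches in this post, the general belief-plan reasoning loop that such agents implement. All names are hypothetical.

```python
# A language-agnostic sketch of a reasoning-loop (belief -> plan -> action) agent,
# written in Python for consistency with the other sketches in this post.
# It does NOT use JS-son's actual API, which is JavaScript.
def make_agent(beliefs, plans):
    """plans: list of (condition(beliefs) -> bool, body(beliefs) -> action) pairs."""
    def step(percepts):
        beliefs.update(percepts)                                       # belief revision
        return [body(beliefs) for cond, body in plans if cond(beliefs)]  # activated plans -> actions
    return step

agent = make_agent(
    beliefs={"door_open": False},
    plans=[(lambda b: not b["door_open"], lambda b: "open_door")],
)
print(agent({"door_open": False}))   # -> ['open_door']
```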

The Unreasonable Effectiveness of Deep Learning in Artificial Intelligence

Title The Unreasonable Effectiveness of Deep Learning in Artificial Intelligence
Authors Terrence J. Sejnowski
Abstract Deep learning networks have been trained to recognize speech, caption photographs and translate text between languages at high levels of performance. Although applications of deep learning networks to real world problems have become ubiquitous, our understanding of why they are so effective is lacking. These empirical results should not be possible according to sample complexity in statistics and non-convex optimization theory. However, paradoxes in the training and effectiveness of deep learning networks are being investigated and insights are being found in the geometry of high-dimensional spaces. A mathematical theory of deep learning would illuminate how they function, allow us to assess the strengths and weaknesses of different network architectures and lead to major improvements. Deep learning has provided natural ways for humans to communicate with digital devices and is foundational for building artificial general intelligence. Deep learning was inspired by the architecture of the cerebral cortex and insights into autonomy and general intelligence may be found in other brain regions that are essential for planning and survival, but major breakthroughs will be needed to achieve these goals.
Tasks
Published 2020-02-12
URL https://arxiv.org/abs/2002.04806v1
PDF https://arxiv.org/pdf/2002.04806v1.pdf
PWC https://paperswithcode.com/paper/the-unreasonable-effectiveness-of-deep-2
Repo
Framework

Label-Driven Reconstruction for Domain Adaptation in Semantic Segmentation

Title Label-Driven Reconstruction for Domain Adaptation in Semantic Segmentation
Authors Jinyu Yang, Weizhi An, Sheng Wang, Xinliang Zhu, Chaochao Yan, Junzhou Huang
Abstract Unsupervised domain adaptation alleviates the need for pixel-wise annotation in semantic segmentation. One of the most common strategies is to translate images from the source domain to the target domain and then align their marginal distributions in the feature space using adversarial learning. However, source-to-target translation enlarges the bias in translated images, owing to the dominant data size of the source domain. Furthermore, consistency of the joint distribution in the source and target domains cannot be guaranteed through global feature alignment. Here, we present an innovative framework designed to mitigate the image translation bias and align cross-domain features of the same category. This is achieved by 1) performing target-to-source translation and 2) reconstructing both source and target images from their predicted labels. Extensive experiments on adapting from synthetic to real urban scene understanding demonstrate that our framework competes favorably against existing state-of-the-art methods. (A toy sketch of the label-driven reconstruction term follows this entry.)
Tasks Domain Adaptation, Scene Understanding, Semantic Segmentation, Unsupervised Domain Adaptation
Published 2020-03-10
URL https://arxiv.org/abs/2003.04614v1
PDF https://arxiv.org/pdf/2003.04614v1.pdf
PWC https://paperswithcode.com/paper/label-driven-reconstruction-for-domain
Repo
Framework
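
The second ingredient, reconstructing images from their predicted labels, can be illustrated with a toy PyTorch loss: a small stand-in decoder maps soft label maps back to image space and an L1 penalty compares the result with the input image. The decoder, class count, and loss form are assumptions; the paper's translation module and full objective are not reproduced.

```python
# Toy PyTorch sketch of the "reconstruct the image from its predicted labels" idea:
# a small decoder maps soft label maps back to image space and an L1 loss is applied.
# The paper's translation module, architectures and full loss are not reproduced here.
import torch
import torch.nn as nn

n_classes, n_channels = 19, 3
decoder = nn.Sequential(                     # stand-in decoder: label map -> image
    nn.Conv2d(n_classes, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, n_channels, 3, padding=1),
)

def label_driven_reconstruction_loss(seg_logits, image):
    soft_labels = torch.softmax(seg_logits, dim=1)     # (B, n_classes, H, W)
    recon = decoder(soft_labels)                       # (B, 3, H, W)
    return torch.nn.functional.l1_loss(recon, image)

loss = label_driven_reconstruction_loss(torch.randn(2, n_classes, 64, 64),
                                        torch.randn(2, n_channels, 64, 64))
```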

K-NN active learning under local smoothness assumption

Title K-NN active learning under local smoothness assumption
Authors Boris Ndjia Njike, Xavier Siebert
Abstract There is a large body of work on convergence rates either in passive or active learning. Here we first outline some of the main results that have been obtained, more specifically in a nonparametric setting under assumptions about the smoothness of the regression function (or the boundary between classes) and the margin noise. We discuss the relative merits of these underlying assumptions by putting active learning in perspective with recent work on passive learning. We design an active learning algorithm with a rate of convergence better than in passive learning, using a particular smoothness assumption customized for k-nearest neighbors. Unlike previous active learning algorithms, we use a smoothness assumption that provides a dependence on the marginal distribution of the instance space. Additionally, our algorithm avoids the strong density assumption that supposes the existence of the density function of the marginal distribution of the instance space and is therefore more generally applicable. (A generic k-NN disagreement-based query sketch follows this entry.)
Tasks Active Learning
Published 2020-01-17
URL https://arxiv.org/abs/2001.06485v1
PDF https://arxiv.org/pdf/2001.06485v1.pdf
PWC https://paperswithcode.com/paper/k-nn-active-learning-under-local-smoothness-1
Repo
Framework
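
The paper's algorithm and its smoothness-dependent guarantees are not reproduced here; the sketch below only shows a generic ingredient of k-NN-based active learning, namely querying the unlabeled point whose k nearest labeled neighbours disagree most. The query rule and all names are illustrative.

```python
# Generic sketch of a k-NN-based active-learning query rule: ask for the label of the
# unlabeled point whose k nearest labeled neighbours disagree the most.
# This is only a generic ingredient; the paper's algorithm and guarantees differ.
import numpy as np

def query_index(X_labeled, y_labeled, X_pool, k=5):
    scores = []
    for x in X_pool:
        d = np.linalg.norm(X_labeled - x, axis=1)
        nn_labels = y_labeled[np.argsort(d)[:k]]
        p = np.mean(nn_labels)                 # fraction of positive neighbours
        scores.append(1.0 - abs(2 * p - 1.0))  # 0 = unanimous, 1 = maximal disagreement
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(20, 2)), rng.integers(0, 2, 20)
print(query_index(X_lab, y_lab, rng.normal(size=(50, 2))))
```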

Learning regularization and intensity-gradient-based fidelity for single image super resolution

Title Learning regularization and intensity-gradient-based fidelity for single image super resolution
Authors Hu Liang, Shengrong Zhao
Abstract How to extract more useful information for single image super resolution is an imperative and difficult problem. Learning-based methods are representative for this task. However, their results are not very stable, as there may be a large difference between the training data and the test data. Regularization-based methods can effectively utilize the self-information of the observation. However, the degradation model used in regularization-based methods only considers degradation in intensity space. It may not reconstruct images well, as degradation in other feature spaces is not considered. In this paper, we first study the image degradation process and establish a degradation model in both intensity and gradient space. Thus, a comprehensive data consistency constraint is established for the reconstruction, and more useful information can be extracted from the observed data. Second, the regularization term is learned by a designed symmetric residual deep neural network. It can search for similar external information from a predefined dataset, avoiding an artificial tendency. Finally, the proposed fidelity term and the designed regularization term are embedded into a regularization framework, and an optimization method is developed based on the half-quadratic splitting method and the pseudo conjugate method. Experimental results indicate that both the subjective quality and the objective metrics of the proposed method are better than those obtained by the comparison methods. (A sketch of an intensity-plus-gradient fidelity term follows this entry.)
Tasks Image Super-Resolution, Super-Resolution
Published 2020-03-24
URL https://arxiv.org/abs/2003.10689v1
PDF https://arxiv.org/pdf/2003.10689v1.pdf
PWC https://paperswithcode.com/paper/learning-regularization-and-intensity
Repo
Framework
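
The intensity-plus-gradient data fidelity can be sketched as follows, assuming a stand-in degradation operator (average-pooling blur and down-sampling) and finite-difference gradients. The learned regularizer and the half-quadratic splitting solver are not included, and the weight on the gradient term is hypothetical.

```python
# Sketch of a data-fidelity term defined in both intensity and gradient space for
# a down-sampling degradation y ~ D(H(x)). The paper's learned regularizer and its
# half-quadratic splitting solver are not included; weights are hypothetical.
import torch
import torch.nn.functional as F

def degrade(x, scale=2):
    # Stand-in degradation: blur via average pooling, then down-sample.
    return F.avg_pool2d(x, kernel_size=scale)

def gradients(img):
    gx = img[..., :, 1:] - img[..., :, :-1]
    gy = img[..., 1:, :] - img[..., :-1, :]
    return gx, gy

def fidelity(x_hat, y, w_grad=0.5):
    y_hat = degrade(x_hat)
    loss = F.mse_loss(y_hat, y)                       # intensity-space fidelity
    for g_hat, g in zip(gradients(y_hat), gradients(y)):
        loss = loss + w_grad * F.mse_loss(g_hat, g)   # gradient-space fidelity
    return loss

loss = fidelity(torch.rand(1, 1, 64, 64, requires_grad=True), torch.rand(1, 1, 32, 32))
```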

Unsupervised Domain Adaptation for Mammogram Image Classification: A Promising Tool for Model Generalization

Title Unsupervised Domain Adaptation for Mammogram Image Classification: A Promising Tool for Model Generalization
Authors Yu Zhang, Gongbo Liang, Nathan Jacobs, Xiaoqin Wang
Abstract Generalization is one of the key challenges in the clinical validation and application of deep learning models to medical images. Studies have shown that such models trained on publicly available datasets often do not work well on real-world clinical data due to the differences in patient population and image device configurations. Also, manually annotating clinical images is expensive. In this work, we propose an unsupervised domain adaptation (UDA) method using Cycle-GAN to improve the generalization ability of the model without using any additional manual annotations. (A sketch of the Cycle-GAN cycle-consistency term follows this entry.)
Tasks Domain Adaptation, Image Classification, Unsupervised Domain Adaptation
Published 2020-03-02
URL https://arxiv.org/abs/2003.01111v1
PDF https://arxiv.org/pdf/2003.01111v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-domain-adaptation-for-mammogram
Repo
Framework
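
The Cycle-GAN machinery underlying the proposed UDA method rests on cycle consistency between two generators mapping between the source (public) and target (clinical) image domains. The sketch below shows only that term, with single convolutions standing in for the generators; discriminators and adversarial losses are omitted.

```python
# Sketch of the cycle-consistency term at the heart of a Cycle-GAN-based UDA setup:
# two generators map between domains and images must survive a round trip.
# Generator architectures, discriminators and adversarial losses are omitted.
import torch
import torch.nn as nn

G_st = nn.Conv2d(1, 1, 3, padding=1)   # stand-in generator: source -> target
G_ts = nn.Conv2d(1, 1, 3, padding=1)   # stand-in generator: target -> source

def cycle_loss(x_source, x_target):
    l1 = nn.functional.l1_loss
    return (l1(G_ts(G_st(x_source)), x_source) +
            l1(G_st(G_ts(x_target)), x_target))

loss = cycle_loss(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))
```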

On the Performance of Metaheuristics: A Different Perspective

Title On the Performance of Metaheuristics: A Different Perspective
Authors Hamid Reza Boveiri, Raouf Khayami
Abstract Nowadays, we are immersed in tens of newly proposed evolutionary and swarm-intelligence metaheuristics, which makes it very difficult to choose a proper one to apply to a specific optimization problem at hand. On the other hand, most of these metaheuristics are nothing but slightly modified variants of the basic metaheuristics. For example, Differential Evolution (DE) and Shuffled Frog Leaping (SFL) are just Genetic Algorithms (GA) with a specialized operator or an extra local search, respectively. Therefore, what comes to mind is whether the behavior of such newly proposed metaheuristics can be investigated by studying the specifications and characteristics of their ancestors. In this paper, a comprehensive evaluation of some basic metaheuristics, i.e. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Teaching-Learning-Based Optimization (TLBO), and Cuckoo Optimization Algorithm (COA), is conducted, which gives us a deeper insight into their performance so that we can better estimate the performance and applicability of all other variations originating from them. A large number of experiments have been conducted on 20 different combinatorial optimization benchmark functions with different characteristics, and the results reveal some fundamental conclusions besides the following ranking order among these metaheuristics: {ABC, PSO, TLBO, GA, COA}, i.e. ABC and COA are the best and the worst methods from the performance point of view, respectively. In addition, from the convergence perspective, PSO and ABC show significantly better convergence for unimodal and multimodal functions, respectively, while GA and COA suffer from premature convergence to local optima in many cases and need alternative mutation mechanisms to enhance diversification and global search. (A compact PSO sketch follows this entry.)
Tasks Combinatorial Optimization
Published 2020-01-24
URL https://arxiv.org/abs/2001.08928v1
PDF https://arxiv.org/pdf/2001.08928v1.pdf
PWC https://paperswithcode.com/paper/on-the-performance-of-metaheuristics-a
Repo
Framework
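
To show what one of these basic metaheuristics looks like in code, here is a compact particle swarm optimization run on a toy sphere objective. The inertia and acceleration coefficients, bounds, and objective are illustrative defaults, not the settings used in the paper's experiments.

```python
# Compact particle swarm optimization (PSO) on a toy sphere objective, to illustrate
# one of the basic metaheuristics compared above. The paper's benchmark functions
# and parameter settings differ.
import numpy as np

def pso(objective, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(lambda z: float(np.sum(z ** 2)))
print(best_val)
```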

MixTConv: Mixed Temporal Convolutional Kernels for Efficient Action Recognition

Title MixTConv: Mixed Temporal Convolutional Kernels for Efficient Action Recognition
Authors Kaiyu Shan, Yongtao Wang, Zhuoying Wang, Tingting Liang, Zhi Tang, Ying Chen, Yangyan Li
Abstract To efficiently extract spatiotemporal features of video for action recognition, most state-of-the-art methods integrate 1D temporal convolution into a conventional 2D CNN backbone. However, they all exploit 1D temporal convolution of a fixed kernel size (i.e., 3) in the network building block, and thus have suboptimal temporal modeling capability to handle both long-term and short-term actions. To address this problem, we first investigate the impacts of different kernel sizes for the 1D temporal convolutional filters. Then, we propose a simple yet efficient operation called Mixed Temporal Convolution (MixTConv), which consists of multiple depthwise 1D convolutional filters with different kernel sizes. By plugging MixTConv into the conventional 2D CNN backbone ResNet-50, we further propose an efficient and effective network architecture named MSTNet for action recognition, and achieve state-of-the-art results on multiple benchmarks. (A sketch of a mixed depthwise temporal convolution block follows this entry.)
Tasks
Published 2020-01-19
URL https://arxiv.org/abs/2001.06769v3
PDF https://arxiv.org/pdf/2001.06769v3.pdf
PWC https://paperswithcode.com/paper/mixtconv-mixed-temporal-convolutional-kernels
Repo
Framework
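
The MixTConv operation can be sketched in PyTorch as a set of depthwise 1D convolutions with different kernel sizes applied to channel groups and concatenated again. The particular channel split and the kernel sizes (1, 3, 5, 7) are illustrative assumptions, not necessarily the paper's exact settings.

```python
# PyTorch sketch of a mixed depthwise temporal convolution: channel groups are processed
# with depthwise 1D convolutions of different kernel sizes and concatenated again.
# The split and kernel sizes are illustrative choices, not the paper's exact settings.
import torch
import torch.nn as nn

class MixedTemporalConv(nn.Module):
    def __init__(self, channels, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        self.splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        self.splits[-1] += channels - sum(self.splits)            # absorb any remainder
        self.convs = nn.ModuleList(
            nn.Conv1d(c, c, k, padding=k // 2, groups=c)          # depthwise: groups == channels
            for c, k in zip(self.splits, kernel_sizes)
        )

    def forward(self, x):                       # x: (batch, channels, time)
        chunks = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(chunk) for conv, chunk in zip(self.convs, chunks)], dim=1)

out = MixedTemporalConv(64)(torch.randn(2, 64, 8))   # shape preserved: (2, 64, 8)
```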

Graph Ordering: Towards the Optimal by Learning

Title Graph Ordering: Towards the Optimal by Learning
Authors Kangfei Zhao, Yu Rong, Jeffrey Xu Yu, Junzhou Huang, Hao Zhang
Abstract Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection. These models are usually designed to preserve vertex information at different granularities and to reduce problems in discrete space to machine learning tasks in continuous space. However, regardless of this fruitful progress, some graph applications, such as graph compression and edge partitioning, are very hard to reduce to graph representation learning tasks. Moreover, these problems are closely related to reformulating a global layout for a specific graph, which is an important NP-hard combinatorial optimization problem: graph ordering. In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach. In contrast to greedy algorithms based on predefined heuristics, we propose a neural network model, Deep Order Network (DON), to capture the hidden locality structure from partial vertex order sets. Supervised by sampled partial orders, DON has the ability to infer unseen combinations. Furthermore, to alleviate the combinatorial explosion in the training space of DON and enable efficient sampling of partial vertex orders, we employ a reinforcement learning model, the Policy Network, to adjust the partial order sampling probabilities during the training phase of DON automatically. To this end, the Policy Network improves the training efficiency and guides DON to evolve towards a more effective model automatically. Comprehensive experiments on both synthetic and real data validate that DON-RL outperforms the current state-of-the-art heuristic algorithm consistently. Two case studies on graph compression and edge partitioning demonstrate the potential power of DON-RL in real applications. (A simple greedy locality-ordering sketch follows this entry.)
Tasks Combinatorial Optimization, Community Detection, Graph Representation Learning, Link Prediction, Node Classification, Representation Learning
Published 2020-01-18
URL https://arxiv.org/abs/2001.06631v1
PDF https://arxiv.org/pdf/2001.06631v1.pdf
PWC https://paperswithcode.com/paper/graph-ordering-towards-the-optimal-by
Repo
Framework
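
Graph ordering seeks a vertex permutation that keeps closely related vertices near each other in the order. The sketch below shows a simplified locality score and a greedy baseline that places next the vertex with the most edges into the last few placed vertices; it is a stand-in for the kind of predefined heuristics that the learned DON-RL model is compared against, not the paper's method.

```python
# Sketch of a simplified locality objective behind graph ordering and a greedy baseline:
# place next the vertex with the most edges to the last w already-placed vertices.
# A simplified stand-in for the heuristics the learned DON-RL model is compared against.
import numpy as np

def locality_score(adj, order, w=3):
    """Number of adjacent pairs that end up within w positions of each other."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(1 for u in range(len(adj)) for v in range(u + 1, len(adj))
               if adj[u][v] and abs(pos[u] - pos[v]) <= w)

def greedy_order(adj, w=3):
    n = len(adj)
    order, remaining = [0], set(range(1, n))
    while remaining:
        window = order[-w:]
        nxt = max(remaining, key=lambda v: sum(adj[v][u] for u in window))
        order.append(nxt)
        remaining.remove(nxt)
    return order

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]])
order = greedy_order(adj)
print(order, locality_score(adj, order))
```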