October 19, 2019

2924 words 14 mins read

Paper Group ANR 292

Layout Design for Intelligent Warehouse by Evolution with Fitness Approximation. To the problem of “The Instrumental complex for ontological engineering purpose” software system design. Analysing the Robustness of Evolutionary Algorithms to Noise: Refined Runtime Bounds and an Example Where Noise is Beneficial. Spatially adaptive image compression …

Layout Design for Intelligent Warehouse by Evolution with Fitness Approximation

Title Layout Design for Intelligent Warehouse by Evolution with Fitness Approximation
Authors Haifeng Zhang, Zilong Guo, Han Cai, Chris Wang, Weinan Zhang, Yong Yu, Wenxin Li, Jun Wang
Abstract With the rapid growth of the express industry, intelligent warehouses that employ autonomous robots for carrying parcels have been widely used to handle the vast express volume. For such warehouses, the warehouse layout design plays a key role in improving the transportation efficiency. However, this work is still done by human experts, which is expensive and leads to suboptimal results. In this paper, we aim to automate the warehouse layout designing process. We propose a two-layer evolutionary algorithm to efficiently explore the warehouse layout space, where an auxiliary objective fitness approximation model is introduced to predict the outcome of the designed warehouse layout and a two-layer population structure is proposed to incorporate the approximation model into the ordinary evolution framework. Empirical experiments show that our method can efficiently design effective warehouse layouts that outperform both heuristic-designed and vanilla evolution-designed warehouse layouts.
Tasks
Published 2018-11-14
URL http://arxiv.org/abs/1811.05685v1
PDF http://arxiv.org/pdf/1811.05685v1.pdf
PWC https://paperswithcode.com/paper/layout-design-for-intelligent-warehouse-by
Repo
Framework
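A minimal sketch of the general surrogate-assisted evolution idea (not the paper's exact two-layer algorithm): a cheap regression model screens mutated layouts and only the most promising ones receive the expensive true evaluation. The layout encoding, the `simulate_warehouse` fitness, and the population sizes below are placeholders.

```python
# Surrogate-assisted evolution sketch; `simulate_warehouse` is a hypothetical
# stand-in for the expensive warehouse-routing simulation used as true fitness.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
LAYOUT_DIM = 64                              # assumed encoding: one gene per layout cell

def simulate_warehouse(layout):
    # placeholder: a real fitness would run a robot-transportation simulation
    return -np.sum((layout - 0.5) ** 2)

def mutate(layout, rate=0.05):
    mask = rng.random(layout.shape) < rate
    return np.where(mask, rng.random(layout.shape), layout)

pop = rng.random((20, LAYOUT_DIM))
fitness = np.array([simulate_warehouse(x) for x in pop])
surrogate = RandomForestRegressor(n_estimators=50, random_state=0)

for gen in range(30):
    surrogate.fit(pop, fitness)                         # learn a cheap fitness approximation
    offspring = np.array([mutate(p) for p in pop for _ in range(5)])
    approx = surrogate.predict(offspring)               # screen offspring with the surrogate
    best = offspring[np.argsort(approx)[-20:]]          # promote the most promising layouts
    true_fit = np.array([simulate_warehouse(x) for x in best])   # expensive check
    pool = np.vstack([pop, best])
    pool_fit = np.concatenate([fitness, true_fit])
    keep = np.argsort(pool_fit)[-20:]
    pop, fitness = pool[keep], pool_fit[keep]

print("best fitness found:", fitness.max())
```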

To the problem of “The Instrumental complex for ontological engineering purpose” software system design

Title To the problem of “The Instrumental complex for ontological engineering purpose” software system design
Authors A. V. Palagin, N. G. Petrenko, V. Yu. Velychko, K. S. Malakhov, Yu. L. Tikhonov
Abstract This work describes the methodological principles behind the design of an instrumental complex for ontological engineering. The instrumental complex is intended to implement integrated information technologies for the automated construction of domain ontologies. The results focus on improving the effectiveness of automatic analysis and understanding of natural-language texts, on building knowledge descriptions of subject areas (primarily in science and technology), and on supporting interdisciplinary research in conjunction with the solution of complex problems.
Tasks
Published 2018-02-10
URL http://arxiv.org/abs/1802.03544v2
PDF http://arxiv.org/pdf/1802.03544v2.pdf
PWC https://paperswithcode.com/paper/to-the-problem-of-the-instrumental-complex
Repo
Framework

Analysing the Robustness of Evolutionary Algorithms to Noise: Refined Runtime Bounds and an Example Where Noise is Beneficial

Title Analysing the Robustness of Evolutionary Algorithms to Noise: Refined Runtime Bounds and an Example Where Noise is Beneficial
Authors Dirk Sudholt
Abstract We analyse the performance of well-known evolutionary algorithms (1+1)EA and (1+$\lambda$)EA in the prior noise model, where in each fitness evaluation the search point is altered before evaluation with probability $p$. We present refined results for the expected optimisation time of the (1+1)EA and the (1+$\lambda$)EA on the function LeadingOnes, where bits have to be optimised in sequence. Previous work showed that the (1+1)EA on LeadingOnes runs in polynomial expected time if $p = O((\log n)/n^2)$ and needs superpolynomial expected time if $p = \omega((\log n)/n)$, leaving a huge gap for which no results were known. We close this gap by showing that the expected optimisation time is $\Theta(n^2) \cdot \exp(\Theta(\min\{pn^2, n\}))$ for all $p \le 1/2$, allowing for the first time to locate the threshold between polynomial and superpolynomial expected times at $p = \Theta((\log n)/n^2)$. Hence the (1+1)EA on LeadingOnes is much more sensitive to noise than previously thought. We also show that offspring populations of size $\lambda \ge 3.42\log n$ can effectively deal with much higher noise than known before. Finally, we present an example of a rugged landscape where prior noise can help to escape from local optima by blurring the landscape and allowing a hill climber to see the underlying gradient. We prove that in this particular setting noise can have a highly beneficial effect on performance.
Tasks
Published 2018-12-03
URL http://arxiv.org/abs/1812.00966v1
PDF http://arxiv.org/pdf/1812.00966v1.pdf
PWC https://paperswithcode.com/paper/analysing-the-robustness-of-evolutionary
Repo
Framework
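To make the noise model concrete, here is a toy simulation of a (1+1) EA on LeadingOnes under prior noise, in which each evaluation flips one uniformly random bit with probability p before the fitness is computed. It illustrates the setting only and reproduces none of the proofs; the parameters are arbitrary.

```python
# Toy (1+1) EA on LeadingOnes with one-bit prior noise.
import random

def leading_ones(x):
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def noisy_eval(x, p):
    # prior noise: with probability p, flip one uniformly random bit
    # of a copy of x before evaluating the true fitness
    y = list(x)
    if random.random() < p:
        y[random.randrange(len(y))] ^= 1
    return leading_ones(y)

def one_plus_one_ea(n=50, p=0.001, max_evals=200_000):
    x = [random.randint(0, 1) for _ in range(n)]
    evals = 0
    while leading_ones(x) < n and evals < max_evals:
        y = [b ^ (random.random() < 1 / n) for b in x]   # standard bit-flip mutation
        evals += 2
        if noisy_eval(y, p) >= noisy_eval(x, p):          # noisy comparison
            x = y
    return evals

print("evaluations until optimum (or cap):", one_plus_one_ea())
```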

Spatially adaptive image compression using a tiled deep network

Title Spatially adaptive image compression using a tiled deep network
Authors David Minnen, George Toderici, Michele Covell, Troy Chinen, Nick Johnston, Joel Shor, Sung Jin Hwang, Damien Vincent, Saurabh Singh
Abstract Deep neural networks represent a powerful class of function approximators that can learn to compress and reconstruct images. Existing image compression algorithms based on neural networks learn quantized representations with a constant spatial bit rate across each image. While entropy coding introduces some spatial variation, traditional codecs have benefited significantly by explicitly adapting the bit rate based on local image complexity and visual saliency. This paper introduces an algorithm that combines deep neural networks with quality-sensitive bit rate adaptation using a tiled network. We demonstrate the importance of spatial context prediction and show improved quantitative (PSNR) and qualitative (subjective rater assessment) results compared to a non-adaptive baseline and a recently published image compression model based on fully-convolutional neural networks.
Tasks Image Compression
Published 2018-02-07
URL http://arxiv.org/abs/1802.02629v1
PDF http://arxiv.org/pdf/1802.02629v1.pdf
PWC https://paperswithcode.com/paper/spatially-adaptive-image-compression-using-a
Repo
Framework
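The core adaptation idea can be sketched without any network: measure local complexity per tile and give busier tiles a larger share of the bit budget. The gradient-energy proxy and tile size below are assumptions standing in for the paper's learned, tiled model.

```python
# Spatially varying bit allocation sketch using a gradient-energy complexity proxy.
import numpy as np

def allocate_bits(image, tile=32, total_bits=100_000):
    h, w = image.shape
    gy, gx = np.gradient(image.astype(np.float64))
    energy = np.sqrt(gx ** 2 + gy ** 2)
    scores = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            scores.append(energy[r:r + tile, c:c + tile].mean())
    scores = np.array(scores) + 1e-8
    # bit budget per tile, proportional to local complexity
    return total_bits * scores / scores.sum()

image = (np.random.rand(256, 256) * 255).astype(np.uint8)
bits_per_tile = allocate_bits(image)
print("min/max bits per tile:", bits_per_tile.min(), bits_per_tile.max())
```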

Beef Cattle Instance Segmentation Using Fully Convolutional Neural Network

Title Beef Cattle Instance Segmentation Using Fully Convolutional Neural Network
Authors Aram Ter-Sarkisov, Robert Ross, John Kelleher, Bernadette Earley, Michael Keane
Abstract We present an instance segmentation algorithm trained and applied to a CCTV recording of beef cattle during a winter finishing period. A fully convolutional network was transformed into an instance segmentation network that learns to label each instance of an animal separately. We introduce a conceptually simple framework that the network uses to output a single prediction for every animal. These results are a contribution towards behaviour analysis in winter finishing beef cattle for early detection of animal welfare-related problems.
Tasks Instance Segmentation, Semantic Segmentation
Published 2018-07-05
URL http://arxiv.org/abs/1807.01972v2
PDF http://arxiv.org/pdf/1807.01972v2.pdf
PWC https://paperswithcode.com/paper/beef-cattle-instance-segmentation-using-fully
Repo
Framework
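As a rough illustration of the per-instance output format (not the authors' FCN-based architecture), an off-the-shelf torchvision Mask R-CNN returns one soft mask per detected object, which is the kind of per-animal prediction the paper targets.

```python
# Generic instance-segmentation baseline; stands in for, but is not, the paper's model.
import torch
import torchvision

# pretrained weights; newer torchvision versions use the `weights=` argument instead
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

frame = torch.rand(3, 480, 640)          # stand-in for one CCTV frame in [0, 1]
with torch.no_grad():
    out = model([frame])[0]              # one dict per input image

keep = out["scores"] > 0.5               # confident detections only
masks = out["masks"][keep]               # (N, 1, H, W) soft masks, one per instance
print("instances detected:", masks.shape[0])
```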

Tropical Modeling of Weighted Transducer Algorithms on Graphs

Title Tropical Modeling of Weighted Transducer Algorithms on Graphs
Authors Emmanouil Theodosis, Petros Maragos
Abstract Weighted Finite State Transducers (WFSTs) are versatile data structures that can model a great number of problems, ranging from Automatic Speech Recognition to DNA sequencing. Traditional computer science algorithms are employed when working with these structures in order to optimise their size, but also the runtime of decoding algorithms. However, these algorithms are not unified under a common framework that would allow for their treatment as a whole. Moreover, the inherent geometrical representation of WFSTs, coupled with the topology-preserving algorithms that operate on them make the structures ideal for tropical analysis. The benefits of such analysis have a twofold nature; first, matrix operations offer a connection to nonlinear vector space and spectral theory, and, second, tropical algebra offers a connection to tropical geometry. In this work we model some of the most frequently used algorithms in WFSTs by using tropical algebra; this provides a theoretical unification and allows us to also analyse aspects of their tropical geometry. Further, we provide insights via numerical examples.
Tasks Speech Recognition
Published 2018-11-01
URL http://arxiv.org/abs/1811.00573v1
PDF http://arxiv.org/pdf/1811.00573v1.pdf
PWC https://paperswithcode.com/paper/tropical-modeling-of-weighted-transducer
Repo
Framework
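The tropical connection is easy to see in code: over the (min, +) semiring, the WFST shortest-distance computation becomes repeated min-plus products of the weighted adjacency matrix. A small example with an assumed three-state graph:

```python
# Min-plus (tropical) matrix product and all-pairs shortest distances.
import numpy as np

INF = np.inf

def tropical_matmul(a, b):
    # (A (x) B)[i, j] = min_k (A[i, k] + B[k, j])
    return np.min(a[:, :, None] + b[None, :, :], axis=1)

# weights[i, j] = arc weight from state i to state j (INF = no arc)
weights = np.array([[0.0, 1.0, INF],
                    [INF, 0.0, 2.5],
                    [INF, INF, 0.0]])

dist = weights.copy()
for _ in range(len(weights) - 1):        # (n-1) tropical powers suffice
    dist = tropical_matmul(dist, weights)

print(dist)    # all-pairs shortest distances, e.g. dist[0, 2] == 3.5
```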

SplineNets: Continuous Neural Decision Graphs

Title SplineNets: Continuous Neural Decision Graphs
Authors Cem Keskin, Shahram Izadi
Abstract We present SplineNets, a practical and novel approach for using conditioning in convolutional neural networks (CNNs). SplineNets are continuous generalizations of neural decision graphs, and they can dramatically reduce runtime complexity and computation costs of CNNs, while maintaining or even increasing accuracy. Functions of SplineNets are both dynamic (i.e., conditioned on the input) and hierarchical (i.e., conditioned on the computational path). SplineNets employ a unified loss function with a desired level of smoothness over both the network and decision parameters, while allowing for sparse activation of a subset of nodes for individual samples. In particular, we embed infinitely many function weights (e.g. filters) on smooth, low dimensional manifolds parameterized by compact B-splines, which are indexed by a position parameter. Instead of sampling from a categorical distribution to pick a branch, samples choose a continuous position to pick a function weight. We further show that by maximizing the mutual information between spline positions and class labels, the network can be optimally utilized and specialized for classification tasks. Experiments show that our approach can significantly increase the accuracy of ResNets with negligible cost in speed, matching the precision of a 110 level ResNet with a 32 level SplineNet.
Tasks
Published 2018-10-31
URL http://arxiv.org/abs/1810.13118v1
PDF http://arxiv.org/pdf/1810.13118v1.pdf
PWC https://paperswithcode.com/paper/splinenets-continuous-neural-decision-graphs
Repo
Framework
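A sketch of the central mechanism, with made-up shapes: control filters sit on a clamped B-spline, and a continuous position parameter (predicted per sample in the paper) selects a smoothly blended filter instead of a discrete branch.

```python
# Filter weights on a B-spline manifold, indexed by a continuous position.
import numpy as np
from scipy.interpolate import BSpline

n_ctrl, degree = 8, 3
ctrl_filters = np.random.randn(n_ctrl, 3, 3, 3)   # 8 control filters (assumed 3x3, 3 input channels)

# clamped knot vector so positions in [0, 1] span the whole spline
knots = np.r_[np.zeros(degree + 1),
              np.linspace(0, 1, n_ctrl - degree + 1)[1:-1],
              np.ones(degree + 1)]
spline = BSpline(knots, ctrl_filters.reshape(n_ctrl, -1), degree)

position = 0.37                                    # would be predicted per sample
filt = spline(position).reshape(3, 3, 3)           # smooth blend of control filters
print("selected filter shape:", filt.shape)
```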

Deep Shape-from-Template: Wide-Baseline, Dense and Fast Registration and Deformable Reconstruction from a Single Image

Title Deep Shape-from-Template: Wide-Baseline, Dense and Fast Registration and Deformable Reconstruction from a Single Image
Authors David Fuentes-Jimenez, David Casillas-Perez, Daniel Pizarro, Toby Collins, Adrien Bartoli
Abstract We present Deep Shape-from-Template (DeepSfT), a novel Deep Neural Network (DNN) method for solving real-time automatic registration and 3D reconstruction of a deformable object viewed in a single monocular image. DeepSfT advances the state-of-the-art in various aspects. Compared to existing DNN SfT methods, it is the first fully convolutional real-time approach that handles an arbitrary object geometry, topology and surface representation. It also does not require ground truth registration with real data and scales well to very complex object models with large numbers of elements. Compared to previous non-DNN SfT methods, it does not involve numerical optimization at run-time, and is a dense, wide-baseline solution that does not demand, and does not suffer from, feature-based matching. It is able to process a single image with significant deformation and viewpoint changes, and handles well the core challenges of occlusions, weak texture and blur. DeepSfT is based on residual encoder-decoder structures and refining blocks. It is trained end-to-end with a novel combination of supervised learning from simulated renderings of the object model and semi-supervised automatic fine-tuning using real data captured with a standard RGB-D camera. The cameras used for fine-tuning and run-time can be different, making DeepSfT practical for real-world use. We show that DeepSfT significantly outperforms state-of-the-art wide-baseline approaches for non-trivial templates, with quantitative and qualitative evaluation.
Tasks 3D Reconstruction
Published 2018-11-19
URL http://arxiv.org/abs/1811.07791v2
PDF http://arxiv.org/pdf/1811.07791v2.pdf
PWC https://paperswithcode.com/paper/deep-shape-from-template-wide-baseline-dense
Repo
Framework
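For readers unfamiliar with the building blocks, a minimal residual encoder-decoder in PyTorch is sketched below; the real DeepSfT network (channel counts, refining blocks, output heads, training scheme) is considerably more elaborate.

```python
# Minimal residual encoder-decoder sketch.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))      # residual connection

class EncoderDecoder(nn.Module):
    def __init__(self, out_ch=3):                # e.g. 3 channels for dense coordinates
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1),
                                 ResBlock(32),
                                 nn.Conv2d(32, 64, 3, stride=2, padding=1),
                                 ResBlock(64))
        self.dec = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
                                 ResBlock(32),
                                 nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1))
    def forward(self, x):
        return self.dec(self.enc(x))

net = EncoderDecoder()
print(net(torch.rand(1, 3, 128, 128)).shape)     # torch.Size([1, 3, 128, 128])
```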

Reconciling Irrational Human Behavior with AI based Decision Making: A Quantum Probabilistic Approach

Title Reconciling Irrational Human Behavior with AI based Decision Making: A Quantum Probabilistic Approach
Authors Sagar Uprety, Dawei Song
Abstract There are many examples of human decision making which cannot be modeled by classical probabilistic and logic models, on which the current AI systems are based. Hence the need for a modeling framework which can enable intelligent systems to detect and predict cognitive biases in human decisions to facilitate better human-agent interaction. We give a few examples of irrational behavior and use a generalized probabilistic model inspired by the mathematical framework of Quantum Theory to model and explain such behavior.
Tasks Decision Making
Published 2018-08-14
URL http://arxiv.org/abs/1808.04600v1
PDF http://arxiv.org/pdf/1808.04600v1.pdf
PWC https://paperswithcode.com/paper/reconciling-irrational-human-behavior-with-ai
Repo
Framework
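A toy example of the quantum-probabilistic ingredient: adding amplitudes rather than probabilities introduces an interference term that can violate the classical law of total probability, which is how such models accommodate seemingly irrational judgements. The numbers below are made up for illustration.

```python
# Interference in a two-path decision: classical vs. quantum probability.
import numpy as np

# amplitudes for reaching decision A via two unresolved paths
# (e.g. two beliefs about an unknown condition)
amp1 = np.sqrt(0.3) * np.exp(1j * 0.0)
amp2 = np.sqrt(0.2) * np.exp(1j * 2.1)     # relative phase between the paths

classical = abs(amp1) ** 2 + abs(amp2) ** 2    # 0.5, law of total probability
quantum = abs(amp1 + amp2) ** 2                # includes 2|a1||a2|cos(phase) interference

print(f"classical: {classical:.3f}  quantum: {quantum:.3f}")
```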

Motion Segmentation by Exploiting Complementary Geometric Models

Title Motion Segmentation by Exploiting Complementary Geometric Models
Authors Xun Xu, Loong-Fah Cheong, Zhuwen Li
Abstract Many real-world sequences cannot be conveniently categorized as general or degenerate; in such cases, imposing a false dichotomy in using the fundamental matrix or homography model for motion segmentation would lead to difficulty. Even when we are confronted with a general scene-motion, the fundamental matrix approach as a model for motion segmentation still suffers from several defects, which we discuss in this paper. The full potential of the fundamental matrix approach could only be realized if we judiciously harness information from the simpler homography model. From these considerations, we propose a multi-view spectral clustering framework that synergistically combines multiple models together. We show that the performance can be substantially improved in this way. We perform extensive testing on existing motion segmentation datasets, achieving state-of-the-art performance on all of them; we also put forth a more realistic and challenging dataset adapted from the KITTI benchmark, containing real-world effects such as strong perspectives and strong forward translations not seen in the traditional datasets.
Tasks Motion Segmentation
Published 2018-04-06
URL http://arxiv.org/abs/1804.02142v1
PDF http://arxiv.org/pdf/1804.02142v1.pdf
PWC https://paperswithcode.com/paper/motion-segmentation-by-exploiting
Repo
Framework
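The plumbing of model fusion plus spectral clustering can be sketched as follows. The affinities are random stand-ins and the simple average is not the paper's multi-view combination scheme, but the structure (per-model affinity matrices, one clustering on the fused matrix) is the same.

```python
# Fusing homography- and fundamental-matrix-based affinities before spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

n_tracks = 40
rng = np.random.default_rng(1)

def random_affinity(n):
    a = rng.random((n, n))
    a = (a + a.T) / 2                     # symmetric affinity
    np.fill_diagonal(a, 1.0)
    return a

aff_homography = random_affinity(n_tracks)    # would come from homography fits
aff_fundamental = random_affinity(n_tracks)   # would come from fundamental-matrix fits

fused = 0.5 * (aff_homography + aff_fundamental)   # simple complementary fusion
labels = SpectralClustering(n_clusters=2,
                            affinity="precomputed",
                            random_state=0).fit_predict(fused)
print("motion labels:", labels)
```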

Varying k-Lipschitz Constraint for Generative Adversarial Networks

Title Varying k-Lipschitz Constraint for Generative Adversarial Networks
Authors Kanglin Liu
Abstract Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN with gradient penalty (WGAN-GP) makes progress toward stable training. The gradient penalty plays the role of enforcing a Lipschitz constraint. Further investigation shows that the gradient penalty may restrict the capacity of the discriminator. As a replacement, we introduce a varying k-Lipschitz constraint. The proposed varying k-Lipschitz constraint yields better image quality and significantly improved training speed when tested on GAN architectures. Besides, we introduce an effective convergence measure, which correlates well with image quality.
Tasks
Published 2018-03-16
URL http://arxiv.org/abs/1803.06107v2
PDF http://arxiv.org/pdf/1803.06107v2.pdf
PWC https://paperswithcode.com/paper/varying-k-lipschitz-constraint-for-generative
Repo
Framework
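A sketch of how a varying Lipschitz target plugs into the familiar WGAN-GP penalty: the critic's gradient norm is pushed towards k rather than 1. The schedule for k and the surrounding training loop are not reproduced here; the penalty weight and nets in the usage note are hypothetical.

```python
# Gradient penalty with an adjustable Lipschitz target k (image batches assumed NCHW).
import torch

def gradient_penalty(discriminator, real, fake, k=1.0):
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = discriminator(mix)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=mix,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return ((grad_norm - k) ** 2).mean()     # penalise deviation from ||grad|| = k

# usage inside the critic update (hypothetical nets / data):
# d_loss = fake_scores.mean() - real_scores.mean() \
#          + 10.0 * gradient_penalty(D, real_batch, fake_batch, k=current_k)
```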

Modular Sensor Fusion for Semantic Segmentation

Title Modular Sensor Fusion for Semantic Segmentation
Authors Hermann Blum, Abel Gawel, Roland Siegwart, Cesar Cadena
Abstract Sensor fusion is a fundamental process in robotic systems as it extends the perceptual range and increases robustness in real-world operations. Current multi-sensor deep learning based semantic segmentation approaches do not provide robustness to under-performing classes in one modality, or require a specific architecture with access to the full aligned multi-sensor training data. In this work, we analyze statistical fusion approaches for semantic segmentation that overcome these drawbacks while keeping a competitive performance. The studied approaches are modular by construction, allowing different training sets per modality, and only a much smaller subset is needed to calibrate the statistical models. We evaluate a range of statistical fusion approaches and report their performance against state-of-the-art baselines on both real-world and simulated data. In our experiments, the approach improves IoU over the best single-modality segmentation results by up to 5%. We make all implementations and configurations publicly available.
Tasks Semantic Segmentation, Sensor Fusion
Published 2018-07-30
URL http://arxiv.org/abs/1807.11249v1
PDF http://arxiv.org/pdf/1807.11249v1.pdf
PWC https://paperswithcode.com/paper/modular-sensor-fusion-for-semantic
Repo
Framework
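A minimal sketch of modality-wise statistical fusion, assuming each branch outputs per-pixel class posteriors; the product rule below (a naive independence assumption) is only one possible statistical fusion rule and is not necessarily the one the paper recommends.

```python
# Fusing per-modality class posteriors with a simple product rule.
import numpy as np

H, W, C = 4, 4, 3
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rgb_posteriors = softmax(rng.normal(size=(H, W, C)))      # from the RGB branch
depth_posteriors = softmax(rng.normal(size=(H, W, C)))    # from the depth branch

fused = rgb_posteriors * depth_posteriors                 # product rule
fused /= fused.sum(axis=-1, keepdims=True)                # renormalise per pixel

labels = fused.argmax(axis=-1)                            # fused semantic map
print(labels)
```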

Exploring Novel Game Spaces with Fluidic Games

Title Exploring Novel Game Spaces with Fluidic Games
Authors Swen E. Gaudl, Mark J. Nelson, Simon Colton, Rob Saunders, Edward J. Powley, Peter Ivey, Blanca Perez Ferrer, Michael Cook
Abstract With the growing integration of smartphones into our daily lives, and their increased ease of use, mobile games have become highly popular across all demographics. People listen to music, play games or read the news while in transit or to fill short gaps in their day. While mobile gaming is gaining popularity, mobile expression of creativity is still in its early stages. We present here a new type of mobile app – fluidic games – and illustrate our iterative approach to their design. This new type of app seamlessly integrates exploration of the design space into the actual user experience of playing the game, and aims to enrich the user experience. To better illustrate the game domain and our approach, we discuss one specific fluidic game, which is available as a commercial product. We also briefly discuss open challenges such as player support and how generative techniques can aid the exploration of the game space further.
Tasks
Published 2018-03-04
URL http://arxiv.org/abs/1803.01403v1
PDF http://arxiv.org/pdf/1803.01403v1.pdf
PWC https://paperswithcode.com/paper/exploring-novel-game-spaces-with-fluidic
Repo
Framework

Transfer learning to model inertial confinement fusion experiments

Title Transfer learning to model inertial confinement fusion experiments
Authors K. D. Humbird, J. L. Peterson, R. G. McClarren
Abstract Inertial confinement fusion (ICF) experiments are designed using computer simulations that are approximations of reality, and therefore must be calibrated to accurately predict experimental observations. In this work, we propose a novel nonlinear technique for calibrating from simulations to experiments, or from low fidelity simulations to high fidelity simulations, via “transfer learning”. Transfer learning is a commonly used technique in the machine learning community, in which models trained on one task are partially retrained to solve a separate, but related task, for which there is a limited quantity of data. We introduce the idea of hierarchical transfer learning, in which neural networks trained on low fidelity models are calibrated to high fidelity models, then to experimental data. This technique essentially bootstraps the calibration process, enabling the creation of models which predict high fidelity simulations or experiments with minimal computational cost. We apply this technique to a database of ICF simulations and experiments carried out at the Omega laser facility. Transfer learning with deep neural networks enables the creation of models that are more predictive of Omega experiments than simulations alone. The calibrated models accurately predict future Omega experiments, and are used to search for new, optimal implosion designs.
Tasks Calibration, Transfer Learning
Published 2018-12-14
URL http://arxiv.org/abs/1812.06055v1
PDF http://arxiv.org/pdf/1812.06055v1.pdf
PWC https://paperswithcode.com/paper/transfer-learning-to-model-inertial
Repo
Framework
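The calibration step can be sketched as a standard freeze-and-retrain: a network assumed to be already fitted to low-fidelity simulations keeps its early layers fixed while the head is re-fit on a small high-fidelity or experimental set. The data shapes, sizes, and hyperparameters below are invented for illustration.

```python
# Freeze-and-retrain sketch of one transfer-learning calibration step.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(9, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

# --- stage 1: assume net was already trained on low-fidelity simulations ---

# --- stage 2: calibrate to a small high-fidelity / experimental set ---
for p in net[:-1].parameters():          # freeze everything but the head
    p.requires_grad = False

x_hi = torch.rand(20, 9)                 # e.g. 20 expensive design points
y_hi = torch.rand(20, 1)
opt = torch.optim.Adam(net[-1].parameters(), lr=1e-3)

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x_hi), y_hi)
    loss.backward()
    opt.step()
print("calibration loss:", loss.item())
```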

FaceOff: Anonymizing Videos in the Operating Rooms

Title FaceOff: Anonymizing Videos in the Operating Rooms
Authors Evangello Flouty, Odysseas Zisimopoulos, Danail Stoyanov
Abstract Video capture in the surgical operating room (OR) is increasingly possible and has potential for use with computer assisted interventions (CAI), surgical data science and within smart OR integration. Captured video innately carries sensitive information that should not be completely visible in order to preserve the patient’s and the clinical teams’ identities. When surgical video streams are stored on a server, the videos must be anonymized prior to storage if taken outside of the hospital. In this article, we describe how a deep learning model, Faster R-CNN, can be used for this purpose and help to anonymize video data captured in the OR. The model detects and blurs faces in an effort to preserve anonymity. After testing an existing face detection trained model, a new dataset tailored to the surgical environment, with faces obstructed by surgical masks and caps, was collected for fine-tuning to achieve higher face-detection rates in the OR. We also propose a temporal regularisation kernel to improve recall rates. The fine-tuned model achieves a face-detection recall of 88.05% and 93.45% before and after applying temporal smoothing, respectively.
Tasks Face Detection
Published 2018-08-06
URL http://arxiv.org/abs/1808.04440v1
PDF http://arxiv.org/pdf/1808.04440v1.pdf
PWC https://paperswithcode.com/paper/faceoff-anonymizing-videos-in-the-operating
Repo
Framework
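A sketch of the anonymisation step with assumed detector output: detected face boxes are blurred, and a simple moving average over recent boxes stands in for the paper's temporal regularisation kernel. Boxes are assumed to come from a separately trained face detector and, for simplicity, from a single face track.

```python
# Blur detected face regions with naive temporal smoothing of boxes.
import cv2
import numpy as np
from collections import deque

history = deque(maxlen=5)                      # recent boxes for one face track

def smooth_box(box):
    history.append(np.asarray(box, dtype=float))
    return np.mean(history, axis=0).astype(int)

def blur_faces(frame, boxes):
    for box in boxes:                          # box format assumed (x1, y1, x2, y2)
        x1, y1, x2, y2 = smooth_box(box)
        roi = frame[y1:y2, x1:x2]
        if roi.size:
            frame[y1:y2, x1:x2] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(blur_faces(frame, [(100, 120, 180, 200)]).shape)
```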