April 2, 2020


Paper Group ANR 255


Chaotic Phase Synchronization and Desynchronization in an Oscillator Network for Object Selection

Title Chaotic Phase Synchronization and Desynchronization in an Oscillator Network for Object Selection
Authors Fabricio A Breve, Marcos G Quiles, Liang Zhao, Elbert E. N. Macau
Abstract Object selection refers to the mechanism of extracting objects of interest while ignoring other objects and background in a given visual scene. It is a fundamental issue for many computer vision and image analysis techniques, and it remains a challenging task for artificial visual systems. Chaotic phase synchronization takes place in cases involving almost identical dynamical systems; it means that the phase difference between the systems is kept bounded over time, while their amplitudes remain chaotic and may be uncorrelated. Instead of complete synchronization, phase synchronization is believed to be a mechanism for neural integration in the brain. In this paper, an object selection model is proposed. Oscillators in the network representing the salient object in a given scene are phase synchronized, while no phase synchronization occurs for background objects. In this way, the salient object can be extracted. In this model, a shift mechanism is also introduced to change attention from one object to another. Computer simulations show that the model produces some results similar to those observed in natural vision systems.
Tasks
Published 2020-02-13
URL https://arxiv.org/abs/2002.05493v1
PDF https://arxiv.org/pdf/2002.05493v1.pdf
PWC https://paperswithcode.com/paper/chaotic-phase-synchronization-and
Repo
Framework
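The bounded-phase-difference criterion described in the abstract can be illustrated with a minimal sketch. This uses generic Kuramoto-style phase oscillators rather than the chaotic oscillators of the paper's network; the `simulate` function, frequencies, and coupling strength are illustrative assumptions:

```python
import math

def simulate(coupling, steps=20000, dt=0.001):
    """Integrate two coupled phase oscillators with slightly different
    natural frequencies and return their final (unwrapped) phase
    difference. A bounded difference indicates phase synchronization."""
    w1, w2 = 10.0, 10.5          # natural frequencies (rad/s)
    th1, th2 = 0.0, 1.0          # initial phases
    for _ in range(steps):
        d1 = w1 + coupling * math.sin(th2 - th1)
        d2 = w2 + coupling * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    return th1 - th2

locked = simulate(coupling=2.0)   # difference stays bounded and small
free = simulate(coupling=0.0)     # difference drifts without bound
```

In the model's terms, oscillators representing the salient object behave like the coupled (locked) pair, while background oscillators drift like the uncoupled one.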

A Hybrid Residual Dilated LSTM and Exponential Smoothing Model for Mid-Term Electric Load Forecasting

Title A Hybrid Residual Dilated LSTM and Exponential Smoothing Model for Mid-Term Electric Load Forecasting
Authors Grzegorz Dudek, Paweł Pełka, Slawek Smyl
Abstract This work presents a hybrid and hierarchical deep learning model for mid-term load forecasting. The model combines exponential smoothing (ETS), advanced Long Short-Term Memory (LSTM) and ensembling. ETS dynamically extracts the main components of each individual time series and enables the model to learn their representation. Multi-layer LSTM is equipped with dilated recurrent skip connections and a spatial shortcut path from lower layers to allow the model to better capture long-term seasonal relationships and ensure more efficient training. A common learning procedure for LSTM and ETS, with a penalized pinball loss, leads to simultaneous optimization of data representation and forecasting performance. In addition, ensembling at three levels provides powerful regularization. A simulation study performed on the monthly electricity demand time series for 35 European countries confirmed the high performance of the proposed model and its competitiveness with classical models such as ARIMA and ETS as well as state-of-the-art models based on machine learning.
Tasks Load Forecasting, Time Series
Published 2020-03-29
URL https://arxiv.org/abs/2004.00508v1
PDF https://arxiv.org/pdf/2004.00508v1.pdf
PWC https://paperswithcode.com/paper/a-hybrid-residual-dilated-lstm-end
Repo
Framework
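The penalized pinball loss used for joint training is a quantile (pinball) loss plus a penalty term. A sketch of the asymmetric base loss, with the penalty term omitted and the function name and example values chosen for illustration:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau=0.5):
    """Pinball (quantile) loss: an asymmetric penalty controlled by tau.
    tau > 0.5 penalizes under-prediction more than over-prediction,
    nudging the model toward higher forecasts."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(tau * diff, (tau - 1) * diff)))

y = np.array([100.0, 110.0, 120.0])       # observed monthly demand
under = pinball_loss(y, y - 10, tau=0.6)  # forecasts 10 units too low
over = pinball_loss(y, y + 10, tau=0.6)   # forecasts 10 units too high
```

With tau = 0.6, the same absolute error costs more when the forecast is below the observation than above it.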

Predicting Rate of Cognitive Decline at Baseline Using a Deep Neural Network with Multidata Analysis

Title Predicting Rate of Cognitive Decline at Baseline Using a Deep Neural Network with Multidata Analysis
Authors Sema Candemir, Xuan V. Nguyen, Luciano M. Prevedello, Matthew T. Bigelow, Richard D. White, Barbaros S. Erdal
Abstract This study investigates whether a machine-learning-based system can predict the rate of cognitive decline in mildly cognitively impaired (MCI) patients by processing only the clinical and imaging data collected at the initial visit. We build a predictive model based on a supervised hybrid neural network that uses a 3-dimensional convolutional neural network to perform volume analysis of magnetic resonance imaging (MRI) and integrates non-imaging clinical data at the fully connected layer of the architecture. The analysis is performed on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. Experimental results confirm that there is a correlation between cognitive decline and the data obtained at the first visit. The system achieved an area under the receiver operating characteristic curve (AUC) of 66.6% for cognitive-decline class prediction.
Tasks
Published 2020-02-24
URL https://arxiv.org/abs/2002.10034v1
PDF https://arxiv.org/pdf/2002.10034v1.pdf
PWC https://paperswithcode.com/paper/predicting-rate-of-cognitive-decline-at
Repo
Framework
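The fusion step the abstract describes, joining imaging features with clinical data at the fully connected layer, can be sketched as follows. The feature dimensions, clinical variables, and untrained random weights are all illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two modalities (shapes are illustrative):
image_features = rng.standard_normal(128)  # flattened 3D-CNN output
clinical = np.array([72.0, 1.0, 27.5])     # e.g. age, sex, a cognitive score

def fused_prediction(img_feat, clin, W, b):
    """Fuse the modalities by concatenation at the fully connected
    layer, then apply a sigmoid for the decline-class probability."""
    x = np.concatenate([img_feat, clin])
    logit = W @ x + b
    return 1.0 / (1.0 + np.exp(-logit))

# Untrained weights, purely to show the shapes involved.
W = rng.standard_normal(image_features.size + clinical.size) * 0.01
p = fused_prediction(image_features, clinical, W, b=0.0)
```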

Cybersecurity for Industrial Control Systems: A Survey

Title Cybersecurity for Industrial Control Systems: A Survey
Authors Deval Bhamare, Maede Zolanvari, Aiman Erbad, Raj Jain, Khaled Khan, Nader Meskin
Abstract Industrial Control System (ICS) is a general term that includes supervisory control and data acquisition (SCADA) systems, distributed control systems (DCS), and other control system configurations such as programmable logic controllers (PLC). ICSs are often found in industrial sectors and critical infrastructures, such as nuclear and thermal plants, water treatment facilities, power generation, heavy industries, and distribution systems. Though ICSs were long kept isolated from the Internet, significant achievable business benefits are driving a convergence between ICSs and the Internet, as well as information technology (IT) environments such as cloud computing. As a result, ICSs have been exposed to the attack vectors used in the majority of cyber-attacks. However, ICS devices are inherently much less secure against such advanced attack scenarios. A compromise of an ICS can lead to enormous physical damage and danger to human lives. In this work, we take a close look at the shift of ICSs from stand-alone systems to cloud-based environments. We then discuss the major works, from both industry and academia, toward the development of secure ICSs, especially the applicability of machine learning techniques to ICS cybersecurity. The work may help address the challenges of securing industrial processes, particularly while migrating them to cloud environments.
Tasks
Published 2020-02-10
URL https://arxiv.org/abs/2002.04124v1
PDF https://arxiv.org/pdf/2002.04124v1.pdf
PWC https://paperswithcode.com/paper/cybersecurity-for-industrial-control-systems
Repo
Framework

Statistical Context-Dependent Units Boundary Correction for Corpus-based Unit-Selection Text-to-Speech

Title Statistical Context-Dependent Units Boundary Correction for Corpus-based Unit-Selection Text-to-Speech
Authors Claudio Zito, Fabio Tesser, Mauro Nicolao, Piero Cosi
Abstract In this study, we present an innovative technique for speaker adaptation in order to improve the accuracy of segmentation with application to unit-selection Text-To-Speech (TTS) systems. Unlike conventional techniques for speaker adaptation, which attempt to improve the accuracy of the segmentation using acoustic models that are more robust in the face of the speaker’s characteristics, we aim to use only context-dependent characteristics extrapolated with linguistic analysis techniques. In simple terms, we use the intuitive idea that context-dependent information is tightly correlated with the related acoustic waveform. We propose a statistical model that predicts correcting values to reduce the systematic error produced by a state-of-the-art Hidden Markov Model (HMM) based speech segmentation. Our approach consists of two phases: (1) identifying context-dependent phonetic unit classes (for instance, the class which identifies vowels as being the nucleus of monosyllabic words); and (2) building a regression model that associates the mean error value made by the ASR during the segmentation of a single-speaker corpus to each class. The success of the approach is evaluated by comparing the corrected unit boundaries and the state-of-the-art HMM segmentation against a reference alignment, which is assumed to be the optimal solution. In conclusion, our work supplies a first analysis of a model sensitive to speaker-dependent characteristics, robust to defective and noisy information, and with a very simple implementation which could be utilized as an alternative to either more expensive speaker-adaptation systems or numerous manual correction sessions.
Tasks
Published 2020-03-05
URL https://arxiv.org/abs/2003.02837v1
PDF https://arxiv.org/pdf/2003.02837v1.pdf
PWC https://paperswithcode.com/paper/tatistical-context-dependent-units-boundary
Repo
Framework
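The simplest instance of the regression model in phase (2) is a class-wise mean-error correction: learn the mean signed error per context-dependent unit class, then shift each boundary by it. The class names and boundary values below are invented for illustration:

```python
from collections import defaultdict

def fit_corrections(boundaries, reference, classes):
    """For each context-dependent unit class, learn the mean signed
    error (e.g. in ms) the HMM segmenter makes against the reference."""
    errs = defaultdict(list)
    for b, r, c in zip(boundaries, reference, classes):
        errs[c].append(b - r)
    return {c: sum(v) / len(v) for c, v in errs.items()}

def correct(boundaries, classes, corrections):
    """Shift each boundary by the learned class-wise mean error."""
    return [b - corrections.get(c, 0.0) for b, c in zip(boundaries, classes)]

hmm = [100.0, 210.0, 330.0, 415.0]   # HMM-predicted boundaries
ref = [ 95.0, 205.0, 335.0, 420.0]   # reference alignment
cls = ["vowel", "vowel", "nasal", "nasal"]
corr = fit_corrections(hmm, ref, cls)
fixed = correct(hmm, cls, corr)
```

In this toy case the segmenter is systematically 5 ms late on vowels and 5 ms early on nasals, so the correction recovers the reference exactly; real data would only reduce, not eliminate, the error.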

Machine learning assisted quantum state estimation

Title Machine learning assisted quantum state estimation
Authors Sanjaya Lohani, Brian T. Kirby, Michael Brodsky, Onur Danaci, Ryan T. Glasser
Abstract We build a general quantum state tomography framework that makes use of machine learning techniques to reconstruct quantum states from a given set of coincidence measurements. For a wide range of pure and mixed input states we demonstrate via simulations that our method produces reconstructed states functionally equivalent to those of traditional methods, with the added benefit that expensive computations are front-loaded in our system. Further, by training our system with measurement results that include simulated noise sources, we are able to demonstrate a significantly enhanced average fidelity when compared to typical reconstruction methods. These enhancements in average fidelity are also shown to persist when we consider state reconstruction from partial tomography data where several measurements are missing. We anticipate that the present results, combining the fields of machine intelligence and quantum state estimation, will greatly improve and speed up tomography-based quantum experiments.
Tasks Quantum State Tomography
Published 2020-03-06
URL https://arxiv.org/abs/2003.03441v1
PDF https://arxiv.org/pdf/2003.03441v1.pdf
PWC https://paperswithcode.com/paper/machine-learning-assisted-quantum-state
Repo
Framework
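The fidelity metric used to compare reconstructions can be sketched for the pure-state case (the general mixed-state fidelity involves matrix square roots and is omitted here; the state vectors below are illustrative):

```python
import numpy as np

def fidelity(psi, phi):
    """Fidelity |<psi|phi>|^2 between two normalized pure states;
    1.0 means a perfect reconstruction."""
    return abs(np.vdot(psi, phi)) ** 2

# |H> and the diagonal polarization state (|H> + |V>)/sqrt(2)
h = np.array([1.0, 0.0])
d = np.array([1.0, 1.0]) / np.sqrt(2.0)
f_same = fidelity(h, h)   # identical states
f_diag = fidelity(h, d)   # overlap of one half
```

Averaging this quantity over many reconstructed test states gives the average fidelity reported in the abstract.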

Real-time Fruit Recognition and Grasp Estimation for Autonomous Apple Harvesting

Title Real-time Fruit Recognition and Grasp Estimation for Autonomous Apple Harvesting
Authors Hanwen Kang, Chao Chen
Abstract In this research, a fully neural-network-based visual perception framework for autonomous apple harvesting is proposed. The proposed framework includes a multi-function neural network for fruit recognition and a PointNet-based grasp estimation network to determine the proper grasp pose to guide the robotic execution. Fruit recognition takes raw RGB images from the RGB-D camera as input to perform fruit detection and instance segmentation, while the PointNet grasp estimation takes the point cloud of each fruit as input and outputs the predicted grasp pose for each fruit. The proposed framework is validated using RGB-D images collected from laboratory and orchard environments; a robotic grasping test in a controlled environment is also included in the experiments. Experiments show that the proposed framework can accurately localise fruits and estimate the grasp pose for robotic grasping.
Tasks Instance Segmentation, Robotic Grasping, Semantic Segmentation
Published 2020-03-30
URL https://arxiv.org/abs/2003.13298v1
PDF https://arxiv.org/pdf/2003.13298v1.pdf
PWC https://paperswithcode.com/paper/real-time-fruit-recognition-and-grasp
Repo
Framework
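A geometric stand-in for the learned grasp-estimation step: take the fruit point cloud's centroid as the grasp position and its least-variance principal axis as the approach direction. This PCA sketch is an assumption for illustration, not the paper's PointNet network:

```python
import numpy as np

def grasp_pose(points):
    """Estimate a grasp pose for a fruit point cloud: centroid as the
    grasp position, smallest-variance principal axis as the approach
    direction (a PCA stand-in for a learned grasp estimator)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
    approach = eigvecs[:, 0]                # smallest-variance axis
    return centroid, approach

rng = np.random.default_rng(1)
# Synthetic blob that is thin along z, so the approach axis should be ~z.
pts = rng.standard_normal((500, 3)) * np.array([1.0, 1.0, 0.05])
center, axis = grasp_pose(pts)
```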

Q-Learning in Regularized Mean-field Games

Title Q-Learning in Regularized Mean-field Games
Authors Berkay Anahtarci, Can Deha Kariksiz, Naci Saldi
Abstract In this paper, we introduce a regularized mean-field game and study learning of this game under an infinite-horizon discounted reward function. The game is defined by adding a regularization function to the one-stage reward function in the classical mean-field game model. We establish a value-iteration-based learning algorithm for this regularized mean-field game using fitted Q-learning. The regularization term in general makes the reinforcement learning algorithm more robust, with improved exploration. Moreover, it enables us to establish an error analysis of the learning algorithm without imposing restrictive convexity assumptions on the system components, which are needed in the absence of a regularization term.
Tasks Q-Learning
Published 2020-03-24
URL https://arxiv.org/abs/2003.12151v1
PDF https://arxiv.org/pdf/2003.12151v1.pdf
PWC https://paperswithcode.com/paper/q-learning-in-regularized-mean-field-games
Repo
Framework
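To make the role of the regularizer concrete, here is a tabular sketch of a Q-learning update with entropy regularization, where the usual max over actions becomes a softened log-sum-exp. The paper considers general regularization functions and fitted (function-approximation) Q-learning; the entropy choice, learning rates, and two-state example below are illustrative assumptions:

```python
import numpy as np

def soft_value(q_row, tau):
    """Entropy-regularized (soft) state value: tau * log-sum-exp(Q/tau).
    As tau -> 0 this recovers the usual max over actions."""
    return tau * np.log(np.sum(np.exp(q_row / tau)))

def soft_q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9, tau=0.1):
    """One regularized Q-learning step on a tabular Q function."""
    target = r + gamma * soft_value(Q[s_next], tau)
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = np.zeros((2, 2))                                  # 2 states, 2 actions
Q = soft_q_update(Q, s=0, a=1, r=1.0, s_next=1)       # one observed transition
```

The regularizer keeps the backed-up value strictly above the hard max, which encourages exploration of all actions.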

Substituting Gadolinium in Brain MRI Using DeepContrast

Title Substituting Gadolinium in Brain MRI Using DeepContrast
Authors Haoran Sun, Xueqing Liu, Xinyang Feng, Chen Liu, Nanyan Zhu, Sabrina J. Gjerswold-Selleck, Hong-Jian Wei, Pavan S. Upadhyayula, Angeliki Mela, Cheng-Chia Wu, Peter D. Canoll, Andrew F. Laine, J. Thomas Vaughan, Scott A. Small, Jia Guo
Abstract Cerebral blood volume (CBV) is a hemodynamic correlate of oxygen metabolism and reflects brain activity and function. High-resolution CBV maps can be generated using the steady-state gadolinium-enhanced MRI technique. Such a technique requires an intravenous injection of an exogenous gadolinium-based contrast agent (GBCA), and recent studies suggest that the GBCA can accumulate in the brain after frequent use. We hypothesize that endogenous sources of contrast might exist within the most conventional and commonly acquired structural MRI, potentially obviating the need for exogenous contrast. Here, we test this hypothesis by developing and optimizing a deep learning algorithm, which we call DeepContrast, in mice. We find that DeepContrast performs equally well as exogenous GBCA in mapping CBV of the normal brain tissue and enhancing glioblastoma. Together, these studies validate our hypothesis that a deep learning approach can potentially replace the need for GBCAs in brain MRI.
Tasks
Published 2020-01-15
URL https://arxiv.org/abs/2001.05551v1
PDF https://arxiv.org/pdf/2001.05551v1.pdf
PWC https://paperswithcode.com/paper/substituting-gadolinium-in-brain-mri-using
Repo
Framework

Robust Medical Instrument Segmentation Challenge 2019

Title Robust Medical Instrument Segmentation Challenge 2019
Authors Tobias Ross, Annika Reinke, Peter M. Full, Martin Wagner, Hannes Kenngott, Martin Apitz, Hellena Hempe, Diana Mindroc Filimon, Patrick Scholz, Thuy Nuong Tran, Pierangela Bruno, Pablo Arbeláez, Gui-Bin Bian, Sebastian Bodenstedt, Jon Lindström Bolmgren, Laura Bravo-Sánchez, Hua-Bin Chen, Cristina González, Dong Guo, Pål Halvorsen, Pheng-Ann Heng, Enes Hosgor, Zeng-Guang Hou, Fabian Isensee, Debesh Jha, Tingting Jiang, Yueming Jin, Kadir Kirtac, Sabrina Kletz, Stefan Leger, Zhixuan Li, Klaus H. Maier-Hein, Zhen-Liang Ni, Michael A. Riegler, Klaus Schoeffmann, Ruohua Shi, Stefanie Speidel, Michael Stenzel, Isabell Twick, Gutai Wang, Jiacheng Wang, Liansheng Wang, Lu Wang, Yujie Zhang, Yan-Jie Zhou, Lei Zhu, Manuel Wiesenfarth, Annette Kopp-Schneider, Beat P. Müller-Stich, Lena Maier-Hein
Abstract Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions. While numerous methods for detecting, segmenting and tracking of medical instruments based on endoscopic video images have been proposed in the literature, key limitations remain to be addressed: Firstly, robustness, that is, the reliable performance of state-of-the-art methods when run on challenging images (e.g. in the presence of blood, smoke or motion artifacts). Secondly, generalization; algorithms trained for a specific intervention in a specific hospital should generalize to other interventions or institutions. In an effort to promote solutions for these limitations, we organized the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge as an international benchmarking competition with a specific focus on the robustness and generalization capabilities of algorithms. For the first time in the field of endoscopic image processing, our challenge included a task on binary segmentation and also addressed multi-instance detection and segmentation. The challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures from three different types of surgery. The validation of the competing methods for the three tasks (binary segmentation, multi-instance detection and multi-instance segmentation) was performed in three different stages with an increasing domain gap between the training and the test data. The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap. While the average detection and segmentation quality of the best-performing algorithms is high, future research should concentrate on detection and segmentation of small, crossing, moving and transparent instrument(s) (parts).
Tasks Instance Segmentation, Semantic Segmentation
Published 2020-03-23
URL https://arxiv.org/abs/2003.10299v1
PDF https://arxiv.org/pdf/2003.10299v1.pdf
PWC https://paperswithcode.com/paper/robust-medical-instrument-segmentation
Repo
Framework
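Segmentation quality in challenges like this is commonly scored per image with the Dice similarity coefficient. A minimal sketch of the basic binary DSC (the challenge additionally used multi-instance variants, which are not shown here; the toy masks are invented):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4-pixel square
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # overlapping 6-pixel region
score = dice(a, b)
```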

Greedy Policy Search: A Simple Baseline for Learnable Test-Time Augmentation

Title Greedy Policy Search: A Simple Baseline for Learnable Test-Time Augmentation
Authors Dmitry Molchanov, Alexander Lyzhov, Yuliya Molchanova, Arsenii Ashukha, Dmitry Vetrov
Abstract Test-time data augmentation (averaging the predictions of a machine learning model across multiple augmented samples of data) is a widely used technique that improves predictive performance. While many advanced learnable data augmentation techniques have emerged in recent years, they are focused on the training phase. Such techniques are not necessarily optimal for test-time augmentation and can be outperformed by a policy consisting of simple crops and flips. The primary goal of this paper is to demonstrate that test-time augmentation policies can be successfully learned too. We introduce greedy policy search (GPS), a simple but high-performing method for learning a policy of test-time augmentation. We demonstrate that augmentation policies learned with GPS achieve superior predictive performance on image classification problems, provide better in-domain uncertainty estimation, and improve robustness to domain shift.
Tasks Data Augmentation, Image Classification
Published 2020-02-21
URL https://arxiv.org/abs/2002.09103v1
PDF https://arxiv.org/pdf/2002.09103v1.pdf
PWC https://paperswithcode.com/paper/greedy-policy-search-a-simple-baseline-for
Repo
Framework
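The greedy selection idea can be sketched as follows: given cached validation predictions for each candidate augmentation, repeatedly add (with replacement) the candidate whose inclusion in the prediction average most improves validation log-likelihood. The candidate names and toy probabilities are invented; GPS as published operates on real model outputs:

```python
import numpy as np

def log_likelihood(probs, labels):
    """Mean log-probability assigned to the true labels."""
    return np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def greedy_policy_search(candidate_preds, labels, policy_len=3):
    """Greedily build a test-time-augmentation policy of fixed length."""
    policy, total, avg = [], 0, None
    for _ in range(policy_len):
        best, best_ll, best_avg = None, -np.inf, None
        for name, p in candidate_preds.items():
            trial = p if avg is None else (avg * total + p) / (total + 1)
            ll = log_likelihood(trial, labels)
            if ll > best_ll:
                best, best_ll, best_avg = name, ll, trial
        policy.append(best)
        avg, total = best_avg, total + 1
    return policy, avg

labels = np.array([0, 1])   # true classes for two validation images
candidate_preds = {         # cached class probabilities per augmentation
    "identity": np.array([[0.9, 0.1], [0.4, 0.6]]),
    "flip":     np.array([[0.6, 0.4], [0.1, 0.9]]),
    "crop":     np.array([[0.5, 0.5], [0.5, 0.5]]),
}
policy, avg_pred = greedy_policy_search(candidate_preds, labels)
```

The uninformative "crop" candidate never improves the averaged log-likelihood, so the greedy search leaves it out of the policy.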

Buchi automata augmented with spatial constraints: simulating an alternating with a nondeterministic and deciding the emptiness problem for the latter

Title Buchi automata augmented with spatial constraints: simulating an alternating with a nondeterministic and deciding the emptiness problem for the latter
Authors Amar Isli
Abstract The aim of this work is to thoroughly investigate Buchi automata augmented with spatial constraints. The input trees of such an automaton are infinite k-ary Sigma-trees, with the nodes standing for time points, and Sigma including, in addition to its uses in classical k-ary Sigma-trees, the description of the snapshot of an n-object spatial scene of interest. The constraints, from an RCC8-like spatial Relation Algebra (RA) x, are used to impose spatial constraints on objects of the spatial scene, possibly at different nodes of the input trees. We show that a Buchi alternating automaton augmented with spatial constraints can be simulated with a classical Buchi nondeterministic automaton of the same type, augmented with spatial constraints. We then provide a nondeterministic doubly depth-first polynomial-space algorithm for the emptiness problem of the latter automaton. Our main motivation came from another work, also submitted to this conference, which defines a spatio-temporalisation of the well-known family ALC(D) of description logics with a concrete domain: together, the two works provide an effective solution to the satisfiability problem of a concept of the spatio-temporalisation with respect to a weakly cyclic TBox.
Tasks
Published 2020-02-22
URL https://arxiv.org/abs/2002.11510v1
PDF https://arxiv.org/pdf/2002.11510v1.pdf
PWC https://paperswithcode.com/paper/buchi-automata-augmented-with-spatial
Repo
Framework
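The "doubly depth-first" idea has a classical core: Buchi nonemptiness holds iff some accepting state is reachable and lies on a cycle, which a nested DFS detects. The sketch below shows that classical core only, with the paper's spatial constraints omitted; it restarts the inner search per accepting state for simplicity, sacrificing the linear-time bookkeeping of the standard algorithm:

```python
def buchi_nonempty(states, init, accepting, delta):
    """Nested ('doubly') depth-first search for Buchi nonemptiness:
    an outer DFS finds reachable states; in post-order, an inner DFS
    checks whether each accepting state lies on a cycle."""
    visited = set()
    flagged = set()

    def inner(seed, s):
        # Search for a cycle returning to the accepting seed state.
        for t in delta.get(s, []):
            if t == seed:
                return True
            if t not in flagged:
                flagged.add(t)
                if inner(seed, t):
                    return True
        return False

    def outer(s):
        visited.add(s)
        for t in delta.get(s, []):
            if t not in visited and outer(t):
                return True
        if s in accepting:          # post-order: launch the inner search
            flagged.clear()
            if inner(s, s):
                return True
        return False

    return outer(init)

delta = {"a": ["b"], "b": ["c"], "c": ["b"]}                  # cycle b -> c -> b
has_run = buchi_nonempty({"a", "b", "c"}, "a", {"b"}, delta)  # accepting 'b' on cycle
no_run = buchi_nonempty({"a", "b", "c"}, "a", {"a"}, delta)   # 'a' not on any cycle
```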

A Hybrid-Order Distributed SGD Method for Non-Convex Optimization to Balance Communication Overhead, Computational Complexity, and Convergence Rate

Title A Hybrid-Order Distributed SGD Method for Non-Convex Optimization to Balance Communication Overhead, Computational Complexity, and Convergence Rate
Authors Naeimeh Omidvar, Mohammad Ali Maddah-Ali, Hamed Mahdavi
Abstract In this paper, we propose a method of distributed stochastic gradient descent (SGD) with low communication load and computational complexity, and still fast convergence. To reduce the communication load, at each iteration of the algorithm the worker nodes calculate and communicate some scalars, which are the directional derivatives of the sample functions in some pre-shared directions. However, to maintain accuracy, after every specific number of iterations they communicate the vectors of stochastic gradients. To reduce the computational complexity of each iteration, the worker nodes approximate the directional derivatives with zeroth-order stochastic gradient estimation, performing just two function evaluations rather than computing a first-order gradient vector. The proposed method highly improves the convergence rate of zeroth-order methods, guaranteeing order-wise faster convergence. Moreover, compared to the well-known communication-efficient methods based on model averaging (which perform local model updates and periodic communication of the gradients to synchronize the local models), we prove that for the general class of non-convex stochastic problems, and with a reasonable choice of parameters, the proposed method guarantees the same orders of communication load and convergence rate while having order-wise lower computational complexity. Experimental results on various learning problems in neural network applications demonstrate the effectiveness of the proposed approach compared to various state-of-the-art distributed SGD methods.
Tasks
Published 2020-03-27
URL https://arxiv.org/abs/2003.12423v1
PDF https://arxiv.org/pdf/2003.12423v1.pdf
PWC https://paperswithcode.com/paper/a-hybrid-order-distributed-sgd-method-for-non
Repo
Framework
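The two-function-evaluation estimate at the heart of the method is a central-difference approximation of a directional derivative. A minimal sketch with an illustrative quadratic objective (the distributed machinery and pre-shared direction generation are omitted):

```python
import numpy as np

def directional_derivative_zo(f, x, u, delta=1e-4):
    """Zeroth-order estimate of the directional derivative of f at x
    along direction u, using just two function evaluations
    (a central difference) instead of a full gradient vector."""
    return (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta)

def f(x):
    return float(np.sum(x ** 2))   # gradient of this objective is 2x

x = np.array([1.0, 2.0, 3.0])
u = np.array([1.0, 0.0, 0.0])      # a pre-shared unit direction
est = directional_derivative_zo(f, x, u)   # true value is 2 * x[0] = 2
```

Each worker would communicate only scalars like `est` at most iterations, sending full gradient vectors only periodically.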

Locality-Sensitive Hashing for Efficient Web Application Security Testing

Title Locality-Sensitive Hashing for Efficient Web Application Security Testing
Authors Ilan Ben-Bassat, Erez Rokah
Abstract Web application security has become a major concern in recent years, as more and more content and services are available online. A useful method for identifying security vulnerabilities is black-box testing, which relies on an automated crawling of web applications. However, crawling Rich Internet Applications (RIAs) is a very challenging task. One of the key obstacles crawlers face is the state similarity problem: how to determine if two client-side states are equivalent. As current methods do not completely solve this problem, a successful scan of many real-world RIAs is still not possible. We present a novel approach to detect redundant content for security testing purposes. The algorithm applies locality-sensitive hashing using MinHash sketches in order to analyze the Document Object Model (DOM) structure of web pages, and to efficiently estimate similarity between them. Our experimental results show that this approach allows a successful scan of RIAs that cannot be crawled otherwise.
Tasks
Published 2020-01-04
URL https://arxiv.org/abs/2001.01128v1
PDF https://arxiv.org/pdf/2001.01128v1.pdf
PWC https://paperswithcode.com/paper/locality-sensitive-hashing-for-efficient-web
Repo
Framework
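The MinHash sketching the abstract describes can be illustrated in a few lines: hash a page's token set under many salted hash functions, keep each minimum, and estimate Jaccard similarity as the fraction of matching sketch positions. The DOM-like token sets below are invented stand-ins for real page features:

```python
import random

def minhash_signature(tokens, num_hashes=128, seed=42):
    """MinHash sketch of a token set: for each salted hash function,
    keep the minimum hash value over the set."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash((salt, t)) for t in tokens) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching positions estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Token sets standing in for DOM-derived features of two client-side states.
page_a = {"div#main", "ul.nav", "a.item", "footer", "form#login"}
page_b = {"div#main", "ul.nav", "a.item", "footer", "table#data"}
sim = estimated_jaccard(minhash_signature(page_a), minhash_signature(page_b))
```

The true Jaccard similarity here is 4/6 ≈ 0.67, and the 128-position sketch estimates it without comparing the full sets, which is what makes near-duplicate state detection cheap during crawling.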

Seeing The Whole Patient: Using Multi-Label Medical Text Classification Techniques to Enhance Predictions of Medical Codes

Title Seeing The Whole Patient: Using Multi-Label Medical Text Classification Techniques to Enhance Predictions of Medical Codes
Authors Vithya Yogarajan, Jacob Montiel, Tony Smith, Bernhard Pfahringer
Abstract Machine-learning-based multi-label medical text classification can be used to enhance the understanding of the human body and support patient care. We present a broad study on clinical natural language processing techniques for maximising the information captured by text features when predicting medical codes for patients with multi-morbidity. We present results for multi-label medical text classification problems with 18, 50 and 155 labels. We compare several variations of embeddings, text tagging, and pre-processing. For imbalanced data we show that labels which occur infrequently benefit the most from additional features incorporated in embeddings. We also show that high-dimensional embeddings pre-trained using health-related data yield a significant improvement in a multi-label setting, similarly to the way they improve performance for binary classification. The high-dimensional embeddings from this research are made available for public use.
Tasks Text Classification
Published 2020-03-29
URL https://arxiv.org/abs/2004.00430v1
PDF https://arxiv.org/pdf/2004.00430v1.pdf
PWC https://paperswithcode.com/paper/seeing-the-whole-patient-using-multi-label
Repo
Framework
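Multi-label code prediction on imbalanced data is typically scored with both micro- and macro-averaged F1, since rare labels drag the macro score down while barely moving the micro score. A sketch with an invented toy label matrix (the paper's exact metrics may differ):

```python
import numpy as np

def f1_scores(y_true, y_pred):
    """Micro- and macro-averaged F1 for multi-label predictions
    (rows = documents, columns = medical-code labels)."""
    t, p = y_true.astype(bool), y_pred.astype(bool)
    tp = np.logical_and(t, p).sum(axis=0)
    fp = np.logical_and(~t, p).sum(axis=0)
    fn = np.logical_and(t, ~p).sum(axis=0)
    per_label = 2 * tp / np.maximum(2 * tp + fp + fn, 1)
    micro = 2 * tp.sum() / max(2 * tp.sum() + fp.sum() + fn.sum(), 1)
    return float(micro), float(per_label.mean())

# 3 documents, 4 labels; the last label is rare and gets missed.
y_true = np.array([[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 0, 1]])
y_pred = np.array([[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 0, 0]])
micro, macro = f1_scores(y_true, y_pred)
```

One missed occurrence of the rare fourth label zeroes out its per-label F1, pulling macro-F1 to 0.75 while micro-F1 stays above 0.9: exactly the imbalance effect the abstract targets with richer embeddings.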