Paper Group ANR 1053
Modeling and Soft-fault Diagnosis of Underwater Thrusters with Recurrent Neural Networks
Title | Modeling and Soft-fault Diagnosis of Underwater Thrusters with Recurrent Neural Networks |
Authors | Samy Nascimento, Matias Valdenegro-Toro |
Abstract | Noncritical soft-faults and model deviations are a challenge for Fault Detection and Diagnosis (FDD) of resident Autonomous Underwater Vehicles (AUVs). Such systems may suffer faster performance degradation due to permanent exposure to the marine environment, and constant monitoring of component conditions is required to ensure their reliability. This work presents an evaluation of Recurrent Neural Networks (RNNs) for a data-driven fault detection and diagnosis scheme for underwater thrusters with empirical data. The nominal behavior of the thruster was modeled using the measured control input, voltage, rotational speed and current signals. We evaluated the performance of fault classification using all the measured signals compared to using the computed residuals from the nominal model as features. |
Tasks | Fault Detection |
Published | 2018-07-11 |
URL | http://arxiv.org/abs/1807.04109v1 |
http://arxiv.org/pdf/1807.04109v1.pdf | |
PWC | https://paperswithcode.com/paper/modeling-and-soft-fault-diagnosis-of |
Repo | |
Framework | |
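The residual-based scheme the abstract describes can be sketched in a few lines: fit a nominal model on fault-free signals, then use the gap between measured and predicted current as the fault feature. The linear model below is an illustrative stand-in for the paper's RNN, and all signals are synthetic.

```python
import numpy as np

# Sketch of residual-based fault features: a nominal model of the thruster
# (here a stand-in linear map instead of the paper's RNN) predicts current
# from control input and rotational speed; residuals between measured and
# predicted signals then serve as classifier features.

rng = np.random.default_rng(0)

# Synthetic signals: control input u, rotational speed w, measured current.
u = rng.uniform(0.0, 1.0, size=200)
w = 50.0 * u + rng.normal(0.0, 0.5, size=200)
i_measured = 2.0 * u + 0.01 * w + rng.normal(0.0, 0.05, size=200)

# Fit the nominal model on (assumed) fault-free data via least squares.
X = np.column_stack([u, w, np.ones_like(u)])
coef, *_ = np.linalg.lstsq(X, i_measured, rcond=None)

def residual_features(u_new, w_new, i_new):
    """Residual = measured current minus nominal-model prediction."""
    X_new = np.column_stack([u_new, w_new, np.ones_like(u_new)])
    return i_new - X_new @ coef

# Nominal data yields near-zero residuals; a soft fault (a constant current
# offset, injected here) shifts their mean by the offset amount.
r_nominal = residual_features(u, w, i_measured)
r_faulty = residual_features(u, w, i_measured + 0.5)
```

A classifier fed `r_faulty`-style residuals only has to detect deviations from zero, rather than model the full signal dynamics.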
Fully Automated Segmentation of Hyperreflective Foci in Optical Coherence Tomography Images
Title | Fully Automated Segmentation of Hyperreflective Foci in Optical Coherence Tomography Images |
Authors | Thomas Schlegl, Hrvoje Bogunovic, Sophie Klimscha, Philipp Seeböck, Amir Sadeghipour, Bianca Gerendas, Sebastian M. Waldstein, Georg Langs, Ursula Schmidt-Erfurth |
Abstract | The automatic detection of disease-related entities in retinal imaging data is relevant for disease and treatment monitoring. It enables the quantitative assessment of large amounts of data and the corresponding study of disease characteristics. The presence of hyperreflective foci (HRF) is related to disease progression in various retinal diseases. Manual identification of HRF in spectral-domain optical coherence tomography (SD-OCT) scans is error-prone and tedious. We present a fully automated machine learning approach for segmenting HRF in SD-OCT scans. Evaluation on annotated OCT images of the retina demonstrates that a residual U-Net segments HRF with high accuracy. As our dataset comprised data from different retinal diseases, including age-related macular degeneration, diabetic macular edema and retinal vein occlusion, the algorithm can safely be applied to all of them, even though their pathophysiological origins differ. |
Tasks | |
Published | 2018-05-08 |
URL | http://arxiv.org/abs/1805.03278v1 |
http://arxiv.org/pdf/1805.03278v1.pdf | |
PWC | https://paperswithcode.com/paper/fully-automated-segmentation-of |
Repo | |
Framework | |
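The abstract does not name its evaluation metric; a common choice for scoring lesion segmentations such as HRF masks is the Dice coefficient, sketched here on toy masks (illustrative, not the paper's protocol).

```python
import numpy as np

# Dice coefficient between a predicted binary mask and a manual annotation:
# twice the overlap divided by the total foreground of both masks.

def dice(pred, target, eps=1e-8):
    """Dice coefficient for two binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy masks: the prediction recovers 2 of the 3 annotated pixels.
annotation = np.zeros((4, 4), dtype=np.uint8)
annotation[1, 1:4] = 1          # 3 annotated pixels
prediction = np.zeros((4, 4), dtype=np.uint8)
prediction[1, 1:3] = 1          # 2 predicted pixels, both correct

score = dice(prediction, annotation)  # 2*2 / (2+3) = 0.8
```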
Generalizing multistain immunohistochemistry tissue segmentation using one-shot color deconvolution deep neural networks
Title | Generalizing multistain immunohistochemistry tissue segmentation using one-shot color deconvolution deep neural networks |
Authors | Amal Lahiani, Jacob Gildenblat, Irina Klaman, Nassir Navab, Eldad Klaiman |
Abstract | A key challenge in cancer immunotherapy biomarker research is quantification of pattern changes in microscopic whole slide images of tumor biopsies. Different cell types tend to migrate into various tissue compartments and form variable distribution patterns. Drug development requires correlative analysis of various biomarkers in and between the tissue compartments. To enable that, tissue slides are manually annotated by expert pathologists. Manual annotation of tissue slides is a labor-intensive, tedious and error-prone task. Automation of this annotation process can improve accuracy and consistency while reducing workload and cost in a way that will positively influence drug development efforts. In this paper we present a novel one-shot color deconvolution deep learning method to automatically segment and annotate digitized slide images with multiple stains into compartments of tumor, healthy tissue, and necrosis. We address the task in the context of drug development, where multiple stains, tissue types, and tumor types exist, and we investigate solutions that generalize over these image populations. |
Tasks | |
Published | 2018-05-17 |
URL | http://arxiv.org/abs/1805.06958v3 |
http://arxiv.org/pdf/1805.06958v3.pdf | |
PWC | https://paperswithcode.com/paper/generalizing-multistain-immunohistochemistry |
Repo | |
Framework | |
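The paper learns its color deconvolution inside the network; the classical fixed-matrix version (Ruifrok-Johnston) it builds on can be sketched as follows: convert RGB to optical density and unmix with the inverse of a stain matrix. The H&E stain vectors below are commonly used defaults, not values from the paper.

```python
import numpy as np

# Classical color deconvolution: each stain contributes linearly in optical
# density (OD) space, so unmixing is a matrix inversion.

stain_matrix = np.array([
    [0.650, 0.704, 0.286],   # hematoxylin OD vector (R, G, B)
    [0.072, 0.990, 0.105],   # eosin
    [0.268, 0.570, 0.776],   # residual channel
])
stain_matrix /= np.linalg.norm(stain_matrix, axis=1, keepdims=True)

def deconvolve(rgb):
    """Unmix an (H, W, 3) uint8 RGB image into per-stain concentrations."""
    od = -np.log10((rgb.astype(np.float64) + 1.0) / 256.0)  # optical density
    return od.reshape(-1, 3) @ np.linalg.inv(stain_matrix)  # (pixels, 3)

# A pure-hematoxylin pixel should load almost entirely on the first stain.
h_pixel = (256.0 * 10.0 ** (-1.0 * stain_matrix[0]) - 1.0).astype(np.uint8)
conc = deconvolve(h_pixel.reshape(1, 1, 3))
```

The "one-shot" method in the paper effectively replaces the fixed `stain_matrix` with a learned, stain-agnostic transformation.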
A Compressed Sensing Approach for Distribution Matching
Title | A Compressed Sensing Approach for Distribution Matching |
Authors | Mohamad Dia, Vahid Aref, Laurent Schmalen |
Abstract | In this work, we formulate fixed-length distribution matching as a Bayesian inference problem. Our proposed solution is inspired by the compressed sensing paradigm and sparse superposition (SS) codes. First, we introduce sparsity in the binary source via position modulation (PM). We then present a simple and exact matcher based on Gaussian signal quantization. At the receiver, the dematcher exploits the sparsity in the source and performs low-complexity dematching based on generalized approximate message-passing (GAMP). We show that the GAMP dematcher and spatial coupling lead to asymptotically optimal performance, in the sense that the rate tends to the entropy of the target distribution with vanishing reconstruction error in a proper limit. Furthermore, we assess the performance of the dematcher on practical Hadamard-based operators. A remarkable feature of our proposed solution is the possibility to: i) perform matching at the symbol level (nonbinary); ii) perform joint channel coding and matching. |
Tasks | Bayesian Inference, Quantization |
Published | 2018-04-02 |
URL | http://arxiv.org/abs/1804.00602v2 |
http://arxiv.org/pdf/1804.00602v2.pdf | |
PWC | https://paperswithcode.com/paper/a-compressed-sensing-approach-for |
Repo | |
Framework | |
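The position modulation (PM) step the abstract mentions can be illustrated with a minimal sketch: each block of B source bits is re-encoded as a one-hot section of length 2^B, turning a dense binary source into a sparse signal that a compressed-sensing-style dematcher can exploit. The details below are illustrative, not the paper's exact construction.

```python
# Position modulation: B-bit blocks become one-hot sections of length 2**B,
# so a length-n source maps to a sparse vector with exactly n/B ones.

def pm_encode(bits, B):
    """Map each B-bit block to a one-hot section of length 2**B."""
    assert len(bits) % B == 0
    out = []
    for i in range(0, len(bits), B):
        index = int("".join(map(str, bits[i:i + B])), 2)
        section = [0] * (2 ** B)
        section[index] = 1
        out.extend(section)
    return out

def pm_decode(signal, B):
    """Invert pm_encode by reading off the hot position in each section."""
    size = 2 ** B
    bits = []
    for i in range(0, len(signal), size):
        index = signal[i:i + size].index(1)
        bits.extend(int(b) for b in format(index, f"0{B}b"))
    return bits

source = [1, 0, 1, 1, 0, 0]      # 6 bits, B = 2 -> 3 one-hot sections
sparse = pm_encode(source, 2)    # length 12, exactly 3 ones
```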
A Comprehensive Comparison between Neural Style Transfer and Universal Style Transfer
Title | A Comprehensive Comparison between Neural Style Transfer and Universal Style Transfer |
Authors | Somshubra Majumdar, Amlaan Bhoi, Ganesh Jagadeesan |
Abstract | Style transfer aims to transfer arbitrary visual styles to content images. We explore algorithms adapted from two papers that attempt to solve the style transfer problem while generalizing to unseen styles or coping with compromised visual quality. The majority of the improvements focus on optimizing the algorithm for real-time style transfer while adapting to new styles with considerably fewer resources and constraints. We compare these strategies and assess how they measure up in producing visually appealing images. We explore two approaches to style transfer: neural style transfer with improvements and universal style transfer. We also compare the different images produced and how they can be qualitatively measured. |
Tasks | Style Transfer |
Published | 2018-06-03 |
URL | http://arxiv.org/abs/1806.00868v1 |
http://arxiv.org/pdf/1806.00868v1.pdf | |
PWC | https://paperswithcode.com/paper/a-comprehensive-comparison-between-neural |
Repo | |
Framework | |
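Both approaches compared in the paper build on style representations derived from feature-map statistics; the classic one in neural style transfer is the Gram matrix. A minimal numpy sketch, with random arrays standing in for CNN activations:

```python
import numpy as np

# The Gram matrix captures channel co-activation statistics of a feature
# map; style loss is the squared difference between Gram matrices of the
# generated and style images' features.

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature tensor."""
    C, H, W = features.shape
    flat = features.reshape(C, H * W)
    return flat @ flat.T / (H * W)     # (C, C), normalized by spatial size

def style_loss(f_generated, f_style):
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(f_generated) - gram_matrix(f_style)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
f_style = rng.normal(size=(8, 16, 16))
loss_same = style_loss(f_style, f_style)              # identical -> 0
loss_diff = style_loss(rng.normal(size=(8, 16, 16)), f_style)
```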
Echo: Compiler-based GPU Memory Footprint Reduction for LSTM RNN Training
Title | Echo: Compiler-based GPU Memory Footprint Reduction for LSTM RNN Training |
Authors | Bojian Zheng, Abhishek Tiwari, Nandita Vijaykumar, Gennady Pekhimenko |
Abstract | Long Short-Term Memory Recurrent Neural Networks (LSTM RNNs) are a popular class of machine learning models for analyzing sequential data. Their training on modern GPUs, however, is limited by the GPU memory capacity. Our profiling results of the LSTM RNN-based Neural Machine Translation (NMT) model reveal that feature maps of the attention and RNN layers form the memory bottleneck, and runtime is unevenly distributed across different layers when training on GPUs. Based on these two observations, we propose to recompute the feature maps rather than stashing them persistently in the GPU memory. While the idea of feature map recomputation has been considered before, existing solutions fail to deliver satisfactory footprint reduction, as they do not address two key challenges. For each feature map recomputation to be effective and efficient, its effect on (1) the total memory footprint, and (2) the total execution time has to be carefully estimated. To this end, we propose Echo, a new compiler-based optimization scheme that addresses the first challenge with a practical mechanism that estimates the memory benefits of recomputation over the entire computation graph, and the second challenge by non-conservatively estimating the recomputation overhead leveraging layer specifics. Echo reduces the GPU memory footprint automatically and transparently without any changes required to the training source code, and is effective for models beyond LSTM RNNs. We evaluate Echo on numerous state-of-the-art machine learning workloads on real systems with modern GPUs and observe footprint reduction ratios of 1.89X on average and 3.13X maximum. Such reduction can be converted into faster training with a larger batch size, savings in GPU energy consumption (e.g., training with one GPU as fast as with four), and/or an increase in the maximum number of layers under the same GPU memory budget. |
Tasks | Machine Translation |
Published | 2018-05-22 |
URL | https://arxiv.org/abs/1805.08899v5 |
https://arxiv.org/pdf/1805.08899v5.pdf | |
PWC | https://paperswithcode.com/paper/ecornn-efficient-computing-of-lstm-rnn |
Repo | |
Framework | |
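A toy sketch of the memory/compute trade-off Echo automates: instead of stashing every layer's feature map for the backward pass, keep only every k-th one and recompute the rest from the nearest checkpoint when needed. Echo itself is a compiler pass over the computation graph; this only illustrates the underlying idea.

```python
import numpy as np

# Checkpointed forward pass: store activations only at multiples of k,
# then recompute intermediate ones on demand from the nearest checkpoint.

def layer(x):
    return 2.0 * x          # stand-in for one RNN/attention layer

def forward_with_checkpoints(x, n_layers, k):
    """Run n_layers, storing only activations at multiples of k."""
    checkpoints = {0: x}
    for i in range(n_layers):
        x = layer(x)
        if (i + 1) % k == 0:
            checkpoints[i + 1] = x
    return x, checkpoints

def recompute_activation(checkpoints, i, k):
    """Recover the activation after layer i from the nearest checkpoint."""
    base = (i // k) * k
    x = checkpoints[base]
    for _ in range(i - base):
        x = layer(x)
    return x

out, ckpts = forward_with_checkpoints(np.array([1.0]), n_layers=8, k=4)
# Only 3 tensors are stored (after layers 0, 4, 8) instead of 9; the
# activation after layer 6 is recomputed from the layer-4 checkpoint.
act6 = recompute_activation(ckpts, 6, 4)
```

Echo's contribution is deciding automatically, per feature map, whether this recomputation is worth its runtime cost.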
The Historical Significance of Textual Distances
Title | The Historical Significance of Textual Distances |
Authors | Ted Underwood |
Abstract | Measuring similarity is a basic task in information retrieval, and now often a building-block for more complex arguments about cultural change. But do measures of textual similarity and distance really correspond to evidence about cultural proximity and differentiation? To explore that question empirically, this paper compares textual and social measures of the similarities between genres of English-language fiction. Existing measures of textual similarity (cosine similarity on tf-idf vectors or topic vectors) are also compared to new strategies that use supervised learning to anchor textual measurement in a social context. |
Tasks | Information Retrieval |
Published | 2018-06-30 |
URL | http://arxiv.org/abs/1807.00181v1 |
http://arxiv.org/pdf/1807.00181v1.pdf | |
PWC | https://paperswithcode.com/paper/the-historical-significance-of-textual |
Repo | |
Framework | |
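The baseline textual measure the abstract names (cosine similarity over tf-idf vectors) fits in a short pure-Python sketch; the toy "genre" documents below are illustrative, not the paper's corpus.

```python
import math
from collections import Counter

# tf-idf weighting with cosine similarity: terms shared by all documents
# (like "the") get zero idf and drop out of the comparison.

def tfidf_vectors(docs):
    """Build tf-idf vectors (as dicts) for a list of tokenized documents."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

docs = [
    "the detective found the hidden clue".split(),
    "the detective solved the strange case".split(),
    "the lovers met beneath the moon".split(),
]
v = tfidf_vectors(docs)
sim_close = cosine(v[0], v[1])   # two mystery-like texts share "detective"
sim_far = cosine(v[0], v[2])     # mystery vs. romance: no distinctive overlap
```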
Structured Differential Learning for Automatic Threshold Setting
Title | Structured Differential Learning for Automatic Threshold Setting |
Authors | Jonathan Connell, Benjamin Herta |
Abstract | We introduce a technique that can automatically tune the parameters of a rule-based computer vision system comprised of thresholds, combinational logic, and time constants. This lets us retain the flexibility and perspicacity of a conventionally structured system while allowing us to perform approximate gradient descent using labeled data. While this is only a heuristic procedure, as far as we are aware there is no other efficient technique for tuning such systems. We describe the components of the system and the associated supervised learning mechanism. We also demonstrate the utility of the algorithm by comparing its performance versus hand tuning for an automotive headlight controller. Despite having over 100 parameters, the method is able to profitably adjust the system values given just the desired output for a number of videos. |
Tasks | |
Published | 2018-08-01 |
URL | http://arxiv.org/abs/1808.00361v1 |
http://arxiv.org/pdf/1808.00361v1.pdf | |
PWC | https://paperswithcode.com/paper/structured-differential-learning-for |
Repo | |
Framework | |
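The core idea can be sketched as follows: a hard threshold has no useful analytic gradient, but its effect on a labeled-data loss can still be probed by perturbing the parameter and measuring the change, a finite-difference "approximate gradient". The paper's structured-differential machinery is more elaborate; this is a one-parameter toy.

```python
import numpy as np

# Approximate gradient descent on a single threshold of a rule-based
# classifier, using labeled data and finite-difference probing.

rng = np.random.default_rng(1)
brightness = rng.uniform(0.0, 1.0, size=500)
labels = (brightness > 0.6).astype(float)   # ground truth: lights on above 0.6

def loss(threshold):
    """Fraction of samples the rule (brightness > threshold) misclassifies."""
    pred = (brightness > threshold).astype(float)
    return float(np.mean(pred != labels))

def tune(threshold, step=0.05, iters=50):
    for _ in range(iters):
        # Probe both directions and move toward the lower loss, if any.
        down, up = loss(threshold - step), loss(threshold + step)
        if min(down, up) < loss(threshold):
            threshold += step if up < down else -step
    return threshold

tuned = tune(0.2)   # walks up toward the true threshold of 0.6
```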
Graph Autoencoder-Based Unsupervised Feature Selection with Broad and Local Data Structure Preservation
Title | Graph Autoencoder-Based Unsupervised Feature Selection with Broad and Local Data Structure Preservation |
Authors | Siwei Feng, Marco F. Duarte |
Abstract | Feature selection is a dimensionality reduction technique that selects a subset of representative features from high dimensional data by eliminating irrelevant and redundant features. Recently, feature selection combined with sparse learning has attracted significant attention due to its outstanding performance compared with traditional feature selection methods that ignore correlations between features. These works first map data onto a low-dimensional subspace and then select features by posing a sparsity constraint on the transformation matrix. However, they are restricted by design to linear data transformation, a potential drawback given that the underlying correlation structures of data are often non-linear. To leverage a more sophisticated embedding, we propose an unsupervised feature selection approach that uses a single-layer autoencoder for a joint framework of feature selection and manifold learning. More specifically, we enforce column sparsity on the weight matrix connecting the input layer and the hidden layer, as in previous work. Additionally, we include spectral graph analysis on the projected data into the learning process to achieve local data geometry preservation from the original data space to the low-dimensional feature space. Extensive experiments are conducted on image, audio, text, and biological data. The promising experimental results validate the superiority of the proposed method. |
Tasks | Dimensionality Reduction, Feature Selection, Sparse Learning |
Published | 2018-01-07 |
URL | http://arxiv.org/abs/1801.02251v2 |
http://arxiv.org/pdf/1801.02251v2.pdf | |
PWC | https://paperswithcode.com/paper/graph-autoencoder-based-unsupervised-feature |
Repo | |
Framework | |
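The feature-scoring step behind column-sparsity regularization can be sketched briefly: under an l2,1 penalty on the input-to-hidden weight matrix, each input feature's importance is the Euclidean norm of its row of weights, and features are ranked by that norm. The matrix below is a stand-in, not a trained autoencoder's weights.

```python
import numpy as np

# l2,1-style feature scoring: the penalty drives whole rows of W toward
# zero, so surviving row norms identify the informative input features.

W = np.array([
    [0.9, -0.8, 0.7],    # feature 0: large weights -> informative
    [0.0,  0.1, 0.0],    # feature 1: nearly zeroed out by the penalty
    [0.6,  0.5, -0.4],   # feature 2: informative
    [0.05, 0.0, 0.02],   # feature 3: nearly zeroed out
])

row_norms = np.linalg.norm(W, axis=1)        # per-feature importance scores
l21_norm = row_norms.sum()                   # the regularizer's value
selected = np.argsort(row_norms)[::-1][:2]   # keep the 2 strongest features
```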
Calcium Removal From Cardiac CT Images Using Deep Convolutional Neural Network
Title | Calcium Removal From Cardiac CT Images Using Deep Convolutional Neural Network |
Authors | Siming Yan, Feng Shi, Yuhua Chen, Damini Dey, Sang-Eun Lee, Hyuk-Jae Chang, Debiao Li, Yibin Xie |
Abstract | Coronary calcium causes beam hardening and blooming artifacts on cardiac computed tomography angiography (CTA) images, which lead to overestimation of lumen stenosis and reduction of diagnostic specificity. To properly remove coronary calcification and restore arterial lumen precisely, we propose a machine learning-based method with a multi-step inpainting process. We developed a new network configuration, Dense-Unet, to achieve optimal performance with low computational cost. Results after the calcium removal process were validated by comparing with gold-standard X-ray angiography. Our results demonstrated that removing coronary calcification from images with the proposed approach was feasible, and may potentially improve the diagnostic accuracy of CTA. |
Tasks | |
Published | 2018-02-20 |
URL | http://arxiv.org/abs/1803.00399v1 |
http://arxiv.org/pdf/1803.00399v1.pdf | |
PWC | https://paperswithcode.com/paper/calcium-removal-from-cardiac-ct-images-using |
Repo | |
Framework | |
Learning Influence-Receptivity Network Structure with Guarantee
Title | Learning Influence-Receptivity Network Structure with Guarantee |
Authors | Ming Yu, Varun Gupta, Mladen Kolar |
Abstract | Traditional works on community detection from observations of information cascades assume that a single adjacency matrix parametrizes all the observed cascades. However, in reality the connection structure usually does not stay the same across cascades. For example, different people have different topics of interest, and therefore the connection structure depends on the information/topic content of the cascade. In this paper we consider the case where we observe a sequence of noisy adjacency matrices triggered by information/events with different topic distributions. We propose a novel latent model using the intuition that a connection is more likely to exist between two nodes if they are interested in similar topics, which they share with the information/event. Specifically, we endow each node with two node-topic vectors: an influence vector that measures how influential/authoritative it is on each topic, and a receptivity vector that measures how receptive/susceptible it is to each topic. We show how these two node-topic structures can be estimated from observed adjacency matrices with theoretical guarantees on estimation error, both when the topic distributions of the information/events are known and when they are unknown. Experiments on synthetic and real data demonstrate the effectiveness of our model and superior performance compared to state-of-the-art methods. |
Tasks | Community Detection |
Published | 2018-06-14 |
URL | http://arxiv.org/abs/1806.05730v2 |
http://arxiv.org/pdf/1806.05730v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-influence-receptivity-network |
Repo | |
Framework | |
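The model's generative intuition can be sketched directly: with per-node influence vectors B (node x topic) and receptivity vectors Theta, the expected adjacency for a cascade with topic distribution w is B diag(w) Theta^T, so an edge i -> j is strong when i is influential and j receptive on the cascade's topics. The matrices below are illustrative, not the paper's parameterization.

```python
import numpy as np

# Influence-receptivity sketch: expected adjacency = B diag(w) Theta^T.

B = np.array([      # influence: rows = nodes, cols = topics
    [1.0, 0.0],     # node 0 influential on topic 0
    [0.0, 1.0],     # node 1 influential on topic 1
    [0.0, 0.0],     # node 2 not influential
])
Theta = np.array([  # receptivity: rows = nodes, cols = topics
    [0.0, 1.0],     # node 0 receptive to topic 1
    [0.0, 0.0],     # node 1 not receptive
    [1.0, 0.0],     # node 2 receptive to topic 0
])

def expected_adjacency(w):
    """Expected cascade adjacency for topic distribution w."""
    return B @ np.diag(w) @ Theta.T

A_topic0 = expected_adjacency(np.array([1.0, 0.0]))
# Under a pure topic-0 cascade, only the edge 0 -> 2 is active:
# node 0 influences topic 0 and node 2 is receptive to it.
```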
Deep SNP: An End-to-end Deep Neural Network with Attention-based Localization for Break-point Detection in SNP Array Genomic data
Title | Deep SNP: An End-to-end Deep Neural Network with Attention-based Localization for Break-point Detection in SNP Array Genomic data |
Authors | Hamid Eghbal-zadeh, Lukas Fischer, Niko Popitsch, Florian Kromp, Sabine Taschner-Mandl, Khaled Koutini, Teresa Gerber, Eva Bozsaky, Peter F. Ambros, Inge M. Ambros, Gerhard Widmer, Bernhard A. Moser |
Abstract | Diagnosis and risk stratification of cancer and many other diseases require the detection of genomic breakpoints as a prerequisite for calling copy number alterations (CNA). This, however, is still challenging and requires time-consuming manual curation. As deep-learning methods have outperformed classical state-of-the-art algorithms in various domains and have also been successfully applied to life science problems including medicine and biology, we here propose Deep SNP, a novel Deep Neural Network to learn from genomic data. Specifically, we used a manually curated dataset from 12 genomic single nucleotide polymorphism array (SNPa) profiles as a truth set and aimed at predicting the presence or absence of genomic breakpoints, an indicator of structural chromosomal variations, in windows of 40,000 probes. We compare our results with well-known neural network models as well as with Rawcopy, although that tool is designed to predict not only breakpoints but also genomic segments with high sensitivity. We show that Deep SNP is capable of successfully predicting the presence or absence of a breakpoint in large genomic windows and outperforms state-of-the-art neural network models. Qualitative examples suggest that integration of a localization unit may enable breakpoint detection and prediction of genomic segments, even if the breakpoint coordinates were not provided for network training. These results warrant further evaluation of Deep SNP for breakpoint localization and subsequent calling of genomic segments. |
Tasks | |
Published | 2018-06-22 |
URL | http://arxiv.org/abs/1806.08840v1 |
http://arxiv.org/pdf/1806.08840v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-snp-an-end-to-end-deep-neural-network |
Repo | |
Framework | |
Have You Stolen My Model? Evasion Attacks Against Deep Neural Network Watermarking Techniques
Title | Have You Stolen My Model? Evasion Attacks Against Deep Neural Network Watermarking Techniques |
Authors | Dorjan Hitaj, Luigi V. Mancini |
Abstract | Deep neural networks have had enormous impact on various domains of computer science, considerably outperforming previous state-of-the-art machine learning techniques. To achieve this performance, neural networks need large quantities of data and huge computational resources, which heavily increases their construction costs. The increased cost of building a good deep neural network model gives rise to a need for protecting this investment from potential copyright infringements. Legitimate owners of a machine learning model want to be able to reliably track and detect a malicious adversary that tries to steal the intellectual property related to the model. Recently, this problem was tackled by introducing into deep neural networks the concept of watermarking, which allows a legitimate owner to embed secret information (a watermark) in a given model. The watermark allows the legitimate owner to detect copyright infringements of their model. This paper focuses on verifying the robustness and reliability of state-of-the-art deep neural network watermarking schemes. We show that a malicious adversary, even in scenarios where the watermark is difficult to remove, can still evade verification by the legitimate owners, thus avoiding the detection of model theft. |
Tasks | |
Published | 2018-09-03 |
URL | http://arxiv.org/abs/1809.00615v1 |
http://arxiv.org/pdf/1809.00615v1.pdf | |
PWC | https://paperswithcode.com/paper/have-you-stolen-my-model-evasion-attacks |
Repo | |
Framework | |
Multiplex Communities and the Emergence of International Conflict
Title | Multiplex Communities and the Emergence of International Conflict |
Authors | Caleb Pomeroy, Niheer Dasandi, Slava Jankin Mikhaylov |
Abstract | Advances in community detection reveal new insights into multiplex and multilayer networks. Less work, however, investigates the relationship between these communities and outcomes in social systems. We leverage these advances to shed light on the relationship between the cooperative mesostructure of the international system and the onset of interstate conflict. We detect communities based upon weaker signals of affinity expressed in United Nations votes and speeches, as well as stronger signals observed across multiple layers of bilateral cooperation. Communities of diplomatic affinity display an expected negative relationship with conflict onset. Ties in communities based upon observed cooperation, however, display no effect under a standard model specification and a positive relationship with conflict under an alternative specification. These results align with some extant hypotheses but also point to a paucity in our understanding of the relationship between community structure and behavioral outcomes in networks. |
Tasks | Community Detection |
Published | 2018-06-02 |
URL | https://arxiv.org/abs/1806.00615v2 |
https://arxiv.org/pdf/1806.00615v2.pdf | |
PWC | https://paperswithcode.com/paper/multiplex-communities-and-the-emergence-of |
Repo | |
Framework | |
From Satellite Imagery to Disaster Insights
Title | From Satellite Imagery to Disaster Insights |
Authors | Jigar Doshi, Saikat Basu, Guan Pang |
Abstract | The use of satellite imagery has become increasingly popular for disaster monitoring and response. After a disaster, it is important to prioritize rescue operations, disaster response and coordinated relief efforts. These have to be carried out in a fast and efficient manner, since resources are often limited in disaster-affected areas, and it is extremely important to identify the areas of maximum damage. However, most existing disaster mapping efforts are manual, which is time-consuming and often leads to erroneous results. In order to address these issues, we propose a framework for change detection using Convolutional Neural Networks (CNN) on satellite images, which can then be thresholded and clustered together into grids to find areas which have been most severely affected by a disaster. We also present a novel metric called Disaster Impact Index (DII) and use it to quantify the impact of two natural disasters - the Hurricane Harvey flood and the Santa Rosa fire. Our framework achieves a top F1 score of 81.2% on the gridded flood dataset and 83.5% on the gridded fire dataset. |
Tasks | |
Published | 2018-12-17 |
URL | http://arxiv.org/abs/1812.07033v1 |
http://arxiv.org/pdf/1812.07033v1.pdf | |
PWC | https://paperswithcode.com/paper/from-satellite-imagery-to-disaster-insights |
Repo | |
Framework | |
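The post-CNN steps the abstract describes (threshold a per-pixel change map, then aggregate into grid cells) can be sketched briefly. The per-cell changed-pixel fraction below is an illustrative stand-in for the paper's Disaster Impact Index, whose exact definition is not given here.

```python
import numpy as np

# Threshold a change map into a binary mask, then score each grid cell by
# the fraction of its pixels flagged as changed.

def gridded_impact(change_map, threshold, cell):
    """Binarize a change map and return per-cell changed-pixel fractions."""
    binary = (change_map > threshold).astype(float)
    h, w = binary.shape
    cells = binary.reshape(h // cell, cell, w // cell, cell)
    return cells.mean(axis=(1, 3))      # (grid_h, grid_w) impact scores

change_map = np.zeros((4, 4))
change_map[:2, :2] = 0.9                # heavy change in the top-left quadrant
change_map[2, 2] = 0.7                  # one changed pixel, bottom-right

impact = gridded_impact(change_map, threshold=0.5, cell=2)
# impact: top-left cell fully changed (1.0), bottom-right one of four (0.25).
```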