Paper Group ANR 969
Probabilistic design of a molybdenum-base alloy using a neural network
Title | Probabilistic design of a molybdenum-base alloy using a neural network |
Authors | B. D. Conduit, N. G. Jones, H. J. Stone, G. J. Conduit |
Abstract | An artificial intelligence tool is exploited to discover and characterize a new molybdenum-base alloy that is the most likely to simultaneously satisfy targets of cost, phase stability, precipitate content, yield stress, and hardness. Experimental testing demonstrates that the proposed alloy fulfils the computational predictions, and furthermore the physical properties exceed those of other commercially available Mo-base alloys for forging-die applications. |
Tasks | |
Published | 2018-03-02 |
URL | http://arxiv.org/abs/1803.00879v1 |
http://arxiv.org/pdf/1803.00879v1.pdf | |
PWC | https://paperswithcode.com/paper/probabilistic-design-of-a-molybdenum-base |
Repo | |
Framework | |
Out-distribution training confers robustness to deep neural networks
Title | Out-distribution training confers robustness to deep neural networks |
Authors | Mahdieh Abbasi, Christian Gagné |
Abstract | The ease with which adversarial instances can be generated for deep neural networks raises fundamental questions about their functioning and concerns about their use in critical systems. In this paper, we draw a connection between over-generalization and adversaries: a possible cause of adversaries lies in models designed to make decisions over the entire input space, leading to inappropriately high-confidence decisions in parts of the input space not represented in the training set. We empirically show that an augmented neural network, which is not trained on any type of adversary, can increase robustness by detecting black-box one-step adversaries, i.e. treating them as out-distribution samples, and by making the generation of white-box one-step adversaries harder. |
Tasks | |
Published | 2018-02-20 |
URL | http://arxiv.org/abs/1802.07124v3 |
http://arxiv.org/pdf/1802.07124v3.pdf | |
PWC | https://paperswithcode.com/paper/out-distribution-training-confers-robustness |
Repo | |
Framework | |
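The augmented-network idea above amounts to a K+1-way classifier whose extra class absorbs out-distribution inputs. A minimal sketch of the resulting decision rule (the class count, index, and threshold here are hypothetical illustrations, not values from the paper):

```python
def predict_with_reject(class_probs, dustbin_index, threshold=0.5):
    """Decision rule for a classifier augmented with an extra 'dustbin' class.

    If the dustbin class (trained on out-distribution samples) wins, or no
    in-distribution class is confident enough, the input is rejected as a
    likely out-distribution or adversarial sample.
    """
    best = max(range(len(class_probs)), key=class_probs.__getitem__)
    if best == dustbin_index or class_probs[best] < threshold:
        return None  # reject: treat as out-distribution
    return best

# 3 in-distribution classes plus a dustbin class at index 3
print(predict_with_reject([0.1, 0.7, 0.1, 0.1], dustbin_index=3))  # accepts class 1
print(predict_with_reject([0.1, 0.1, 0.1, 0.7], dustbin_index=3))  # rejects
```

The reject decision costs nothing at training time beyond the extra output unit; the hard part, which the paper addresses, is choosing out-distribution training data so the dustbin class generalizes to adversaries.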
Resampling Forgery Detection Using Deep Learning and A-Contrario Analysis
Title | Resampling Forgery Detection Using Deep Learning and A-Contrario Analysis |
Authors | Arjuna Flenner, Lawrence Peterson, Jason Bunk, Tajuddin Manhar Mohammed, Lakshmanan Nataraj, B. S. Manjunath |
Abstract | The amount of digital imagery recorded has recently grown exponentially, and with the advancement of software such as Photoshop or GIMP, it has become easier to manipulate images. However, most images on the internet have not been manipulated, and any automated manipulation detection algorithm must carefully control the false alarm rate. In this paper we discuss a method to automatically detect local resampling using deep learning while controlling the false alarm rate using a-contrario analysis. The automated procedure consists of three primary steps. First, resampling features are calculated for image blocks. A deep learning classifier is then used to generate a heatmap that indicates whether each image block has been resampled. We expect some of these blocks to be falsely identified as resampled. We use a-contrario hypothesis testing both to determine, from the pattern of flagged blocks, whether the image has been tampered with and to localize the manipulation. We demonstrate that this strategy is effective in indicating whether an image has been manipulated and in localizing the manipulations. |
Tasks | |
Published | 2018-03-01 |
URL | http://arxiv.org/abs/1803.01711v1 |
http://arxiv.org/pdf/1803.01711v1.pdf | |
PWC | https://paperswithcode.com/paper/resampling-forgery-detection-using-deep |
Repo | |
Framework | |
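The a-contrario step above follows the standard number-of-false-alarms (NFA) recipe: count detector flags in a candidate region and bound how surprising that count would be if the image were untampered. A minimal sketch (the per-block false-alarm probability, block counts, and number of tests are assumed for illustration, not taken from the paper):

```python
import math

def binomial_tail(n, k, p):
    # P[B(n, p) >= k]: chance of at least k flagged blocks among n,
    # if each block were independently (falsely) flagged with probability p.
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(num_tests, n, k, p):
    # Number of False Alarms: declaring a detection whenever NFA < epsilon
    # bounds the expected number of false alarms over all tests by epsilon.
    return num_tests * binomial_tail(n, k, p)

# 20 of 64 blocks flagged, 5% per-block false-alarm rate, 1000 regions tested:
# far more flags than the background model explains, so NFA is far below 1.
print(nfa(num_tests=1000, n=64, k=20, p=0.05))
```

The appeal of the a-contrario formulation is that the threshold epsilon is the false-alarm budget itself, rather than a tuning parameter that must be recalibrated per image.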
Temporal and volumetric denoising via quantile sparse image prior
Title | Temporal and volumetric denoising via quantile sparse image prior |
Authors | Franziska Schirrmacher, Thomas Köhler, Tobias Lindenberger, Lennart Husvogt, Jürgen Endres, James G. Fujimoto, Joachim Hornegger, Arnd Dörfler, Philip Hoelter, Andreas K. Maier |
Abstract | This paper introduces a universal and structure-preserving regularization term, called the quantile sparse image (QuaSI) prior. The prior is suitable for denoising images from various medical imaging modalities. We demonstrate its effectiveness on volumetric optical coherence tomography (OCT) and computed tomography (CT) data, which show different noise and image characteristics. OCT offers high-resolution scans of the human retina but is inherently impaired by speckle noise. CT, on the other hand, has a lower resolution and shows high-frequency noise. For the purpose of denoising, we propose a variational framework based on the QuaSI prior and a Huber data fidelity model that can handle 3-D and 3-D+t data. Efficient optimization is facilitated through the use of an alternating direction method of multipliers (ADMM) scheme and the linearization of the quantile filter. Experiments on multiple datasets emphasize the excellent performance of the proposed method. |
Tasks | Computed Tomography (CT), Denoising |
Published | 2018-02-12 |
URL | https://arxiv.org/abs/1802.03943v3 |
https://arxiv.org/pdf/1802.03943v3.pdf | |
PWC | https://paperswithcode.com/paper/temporal-and-volumetric-denoising-via |
Repo | |
Framework | |
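The quantile filter at the heart of the QuaSI prior reduces, for the 0.5-quantile, to a sliding-window median — the classical edge-preserving remedy for impulsive, speckle-like noise. A plain 1-D sketch of that building block (for intuition only; the paper's method is a 3-D/3-D+t ADMM-based variational solver, not this filter alone):

```python
from statistics import median

def median_filter_1d(signal, radius=1):
    # Sliding-window median: the 0.5-quantile special case of a quantile
    # filter. Outliers inside the window do not shift the median much,
    # which is why it suppresses impulsive noise while keeping edges.
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(median(signal[lo:hi]))
    return out

noisy = [1.0, 1.1, 9.0, 1.2, 1.0]  # impulsive outlier at index 2
print(median_filter_1d(noisy))     # the spike at index 2 is removed
```

Linearizing this non-smooth filter, as the abstract mentions, is what makes it usable inside a gradient-based ADMM scheme.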
Highly accurate model for prediction of lung nodule malignancy with CT scans
Title | Highly accurate model for prediction of lung nodule malignancy with CT scans |
Authors | Jason Causey, Junyu Zhang, Shiqian Ma, Bo Jiang, Jake Qualls, David G. Politte, Fred Prior, Shuzhong Zhang, Xiuzhen Huang |
Abstract | Computed tomography (CT) examinations are commonly used to predict lung nodule malignancy in patients and have been shown to improve noninvasive early diagnosis of lung cancer. It remains challenging for computational approaches to achieve performance comparable to experienced radiologists. Here we present NoduleX, a systematic approach to predict lung nodule malignancy from CT data, based on deep learning convolutional neural networks (CNN). For training and validation, we analyze >1000 lung nodules in images from the LIDC/IDRI cohort. All nodules were identified and classified by four experienced thoracic radiologists who participated in the LIDC project. NoduleX achieves high accuracy for nodule malignancy classification, with an AUC of ~0.99, commensurate with the analysis of the dataset by experienced radiologists. Our approach, NoduleX, provides an effective framework for highly accurate nodule malignancy prediction with the model trained on a large patient population. Our results are replicable with software available at http://bioinformatics.astate.edu/NoduleX. |
Tasks | Computed Tomography (CT) |
Published | 2018-02-06 |
URL | http://arxiv.org/abs/1802.01756v1 |
http://arxiv.org/pdf/1802.01756v1.pdf | |
PWC | https://paperswithcode.com/paper/highly-accurate-model-for-prediction-of-lung |
Repo | |
Framework | |
On Multi-resident Activity Recognition in Ambient Smart-Homes
Title | On Multi-resident Activity Recognition in Ambient Smart-Homes |
Authors | Son N. Tran, Qing Zhang, Mohan Karunanithi |
Abstract | Increasing attention to research on activity monitoring in smart homes has motivated the use of ambient intelligence to reduce deployment cost and address the privacy issue. Several approaches have been proposed for multi-resident activity recognition; however, a comprehensive benchmark for future research and practical model selection is still lacking. In this paper we study different methods for multi-resident activity recognition and evaluate them on the same sets of data. The experimental results show that a recurrent neural network with gated recurrent units outperforms the other models while remaining considerably efficient, and that using combined activities as single labels is more effective than representing them as separate labels. |
Tasks | Activity Recognition |
Published | 2018-06-18 |
URL | http://arxiv.org/abs/1806.06611v1 |
http://arxiv.org/pdf/1806.06611v1.pdf | |
PWC | https://paperswithcode.com/paper/on-multi-resident-activity-recognition-in |
Repo | |
Framework | |
Large Margin Structured Convolution Operator for Thermal Infrared Object Tracking
Title | Large Margin Structured Convolution Operator for Thermal Infrared Object Tracking |
Authors | Peng Gao, Yipeng Ma, Ke Song, Chao Li, Fei Wang, Liyi Xiao |
Abstract | Compared with visible object tracking, thermal infrared (TIR) object tracking can track an arbitrary target in total darkness, since it is not affected by illumination variations. However, there are many unwanted attributes that constrain the potential of TIR tracking, such as the absence of visual color patterns and low resolution. Recently, the structured output support vector machine (SOSVM) and the discriminative correlation filter (DCF) have each been successfully applied to visible object tracking. Motivated by these, in this paper we propose a large margin structured convolution operator (LMSCO) to achieve efficient TIR object tracking. To improve the tracking performance, we employ spatial regularization and implicit interpolation to obtain continuous deep feature maps of the TIR targets, including deep appearance features and deep motion features. Finally, a collaborative optimization strategy is exploited to efficiently update the operators. Our approach not only inherits the strong discriminative capability of SOSVM but also achieves accurate and robust tracking with higher-dimensional features and denser samples. To the best of our knowledge, we are the first to combine the advantages of DCF and SOSVM for TIR object tracking. Comprehensive evaluations on two thermal infrared tracking benchmarks, i.e. VOT-TIR2015 and VOT-TIR2016, clearly demonstrate that our LMSCO tracker achieves impressive results and outperforms most state-of-the-art trackers in terms of accuracy and robustness at a sufficient frame rate. |
Tasks | Object Tracking, Thermal Infrared Object Tracking |
Published | 2018-04-19 |
URL | http://arxiv.org/abs/1804.07006v2 |
http://arxiv.org/pdf/1804.07006v2.pdf | |
PWC | https://paperswithcode.com/paper/large-margin-structured-convolution-operator |
Repo | |
Framework | |
Qiniu Submission to ActivityNet Challenge 2018
Title | Qiniu Submission to ActivityNet Challenge 2018 |
Authors | Xiaoteng Zhang, Yixin Bao, Feiyun Zhang, Kai Hu, Yicheng Wang, Liang Zhu, Qinzhu He, Yining Lin, Jie Shao, Yao Peng |
Abstract | In this paper, we introduce our submissions for the tasks of trimmed activity recognition (Kinetics) and trimmed event recognition (Moments in Time) for Activitynet Challenge 2018. In the two tasks, non-local neural networks and temporal segment networks are implemented as our base models. Multi-modal cues such as RGB image, optical flow and acoustic signal have also been used in our method. We also propose new non-local-based models for further improvement on the recognition accuracy. The final submissions after ensembling the models achieve 83.5% top-1 accuracy and 96.8% top-5 accuracy on the Kinetics validation set, 35.81% top-1 accuracy and 62.59% top-5 accuracy on the MIT validation set. |
Tasks | Activity Recognition, Optical Flow Estimation |
Published | 2018-06-12 |
URL | http://arxiv.org/abs/1806.04391v1 |
http://arxiv.org/pdf/1806.04391v1.pdf | |
PWC | https://paperswithcode.com/paper/qiniu-submission-to-activitynet-challenge |
Repo | |
Framework | |
Satellite imagery analysis for operational damage assessment in Emergency situations
Title | Satellite imagery analysis for operational damage assessment in Emergency situations |
Authors | Alexey Trekin, German Novikov, Georgy Potapov, Vladimir Ignatiev, Evgeny Burnaev |
Abstract | When a major disaster occurs, questions arise about how to estimate the damage in time to support the decision-making process and relief efforts of local authorities or humanitarian teams. In this paper we consider the use of machine learning and computer vision on remote sensing imagery to improve the time efficiency of assessing damaged buildings in a disaster-affected area. We propose a general workflow that can be useful in various disaster management applications, and demonstrate its use for the assessment of the damage caused by the wildfires in California in 2017. |
Tasks | Decision Making |
Published | 2018-02-19 |
URL | http://arxiv.org/abs/1803.00397v1 |
http://arxiv.org/pdf/1803.00397v1.pdf | |
PWC | https://paperswithcode.com/paper/satellite-imagery-analysis-for-operational |
Repo | |
Framework | |
Building Computational Models to Predict One-Year Mortality in ICU Patients with Acute Myocardial Infarction and Post Myocardial Infarction Syndrome
Title | Building Computational Models to Predict One-Year Mortality in ICU Patients with Acute Myocardial Infarction and Post Myocardial Infarction Syndrome |
Authors | Laura A. Barrett, Seyedeh Neelufar Payrovnaziri, Jiang Bian, Zhe He |
Abstract | Heart disease remains the leading cause of death in the United States. Compared with risk assessment guidelines that require manual calculation of scores, machine learning-based prediction of disease outcomes such as mortality can save time and improve prediction accuracy. This study built and evaluated various machine learning models to predict one-year mortality in patients diagnosed with acute myocardial infarction or post myocardial infarction syndrome in the MIMIC-III database. The results of the best-performing shallow prediction models were compared to a deep feedforward neural network (Deep FNN) with backpropagation. We included a cohort of 5436 admissions. Six datasets were developed and compared. The models applying the Logistic Model Trees (LMT) and Simple Logistic algorithms to the combined dataset achieved the highest prediction accuracy of 85.12% and the highest AUC of 0.901. Other factors were also observed to affect outcomes. |
Tasks | |
Published | 2018-12-12 |
URL | http://arxiv.org/abs/1812.05072v1 |
http://arxiv.org/pdf/1812.05072v1.pdf | |
PWC | https://paperswithcode.com/paper/building-computational-models-to-predict-one |
Repo | |
Framework | |
Pretraining by Backtranslation for End-to-end ASR in Low-Resource Settings
Title | Pretraining by Backtranslation for End-to-end ASR in Low-Resource Settings |
Authors | Matthew Wiesner, Adithya Renduchintala, Shinji Watanabe, Chunxi Liu, Najim Dehak, Sanjeev Khudanpur |
Abstract | We explore training attention-based encoder-decoder ASR in low-resource settings. These models perform poorly when trained on small amounts of transcribed speech, in part because they depend on having sufficient target-side text to train the attention and decoder networks. In this paper we address this shortcoming by pretraining our network parameters using only text-based data and transcribed speech from other languages. We analyze the relative contributions of both sources of data. Across 3 test languages, our text-based approach resulted in a 20% average relative improvement over a text-based augmentation technique without pretraining. Using transcribed speech from nearby languages gives a further 20-30% relative reduction in character error rate. |
Tasks | Data Augmentation, End-To-End Speech Recognition |
Published | 2018-12-10 |
URL | https://arxiv.org/abs/1812.03919v2 |
https://arxiv.org/pdf/1812.03919v2.pdf | |
PWC | https://paperswithcode.com/paper/low-resource-multi-modal-data-augmentation |
Repo | |
Framework | |
Transfer learning of language-independent end-to-end ASR with language model fusion
Title | Transfer learning of language-independent end-to-end ASR with language model fusion |
Authors | Hirofumi Inaguma, Jaejin Cho, Murali Karthick Baskar, Tatsuya Kawahara, Shinji Watanabe |
Abstract | This work explores better adaptation methods for low-resource languages using an external language model (LM) under the framework of transfer learning. We first build a language-independent ASR system in a unified sequence-to-sequence (S2S) architecture with a shared vocabulary among all languages. During adaptation, we perform LM fusion transfer, where an external LM is integrated into the decoder network of the attention-based S2S model throughout the adaptation stage, to effectively incorporate linguistic context of the target language. We also investigate various seed models for transfer learning. Experimental evaluations using the IARPA BABEL data set show that LM fusion transfer improves performance on all five target languages compared with simple transfer learning when external text data is available. Our final system drastically reduces the performance gap relative to hybrid systems. |
Tasks | End-To-End Speech Recognition, Language Modelling, Transfer Learning |
Published | 2018-11-06 |
URL | https://arxiv.org/abs/1811.02134v2 |
https://arxiv.org/pdf/1811.02134v2.pdf | |
PWC | https://paperswithcode.com/paper/transfer-learning-of-language-independent-end |
Repo | |
Framework | |
Forecasting Cardiology Admissions from Catheterization Laboratory
Title | Forecasting Cardiology Admissions from Catheterization Laboratory |
Authors | Avishek Choudhury, Sunanda Perumalla |
Abstract | Emergent and unscheduled cardiology admissions from the cardiac catheterization laboratory add complexity to the management of the cardiology and in-patient departments. In this article, we sought to study the behavior of cardiology admissions from the catheterization laboratory using time series models. Our research involves retrospective cardiology admission data from March 1, 2012, to November 3, 2016, retrieved from a hospital in Iowa. Autoregressive integrated moving average (ARIMA), Holt's method, the mean method, the naïve method, the seasonal naïve method, exponential smoothing, and the drift method were implemented to forecast weekly cardiology admissions from the catheterization laboratory. ARIMA(2,0,2)(1,1,1) was selected as the best-fit model with the minimum sum of errors, Akaike information criterion, and Schwarz Bayesian criterion. The model failed to reject the null hypothesis of stationarity; it lacked evidence of independence and rejected the null hypothesis of normality. The findings of this study will not only improve catheterization laboratory staff scheduling and advocate efficient use of imaging equipment and inpatient telemetry beds, but also equip management to proactively tackle inpatient overcrowding, plan for physical capacity expansion, and so forth. |
Tasks | Time Series, Time Series Forecasting |
Published | 2018-12-28 |
URL | https://arxiv.org/abs/1812.10486v2 |
https://arxiv.org/pdf/1812.10486v2.pdf | |
PWC | https://paperswithcode.com/paper/cardiology-admissions-from-catheterization |
Repo | |
Framework | |
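Three of the baseline methods named in the abstract (naïve, seasonal naïve, and drift) have closed-form forecasts and can be sketched in a few lines; the weekly-admissions series below is made up for illustration, not data from the study:

```python
def naive_forecast(y, h):
    # Naive method: every forecast equals the last observation.
    return [y[-1]] * h

def seasonal_naive_forecast(y, h, m):
    # Seasonal naive: repeat the values from the last full season (period m).
    return [y[len(y) - m + (i % m)] for i in range(h)]

def drift_forecast(y, h):
    # Drift method: extrapolate the straight line from the first
    # to the last observation.
    slope = (y[-1] - y[0]) / (len(y) - 1)
    return [y[-1] + (i + 1) * slope for i in range(h)]

weekly_admissions = [12, 15, 14, 18, 16, 20, 19, 23]  # illustrative series
print(naive_forecast(weekly_admissions, h=2))          # [23, 23]
print(drift_forecast(weekly_admissions, h=2))
```

These simple methods serve as the benchmarks an ARIMA model must beat; the study's model-selection step (minimum AIC/BIC) is what a library such as statsmodels automates.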
Learning to Reason with HOL4 tactics
Title | Learning to Reason with HOL4 tactics |
Authors | Thibault Gauthier, Cezary Kaliszyk, Josef Urban |
Abstract | Techniques combining machine learning with translation to automated reasoning have recently become an important component of formal proof assistants. Such “hammer” techniques complement traditional proof assistant automation as implemented by tactics and decision procedures. In this paper we present a unified proof assistant automation approach which attempts to automate the selection of appropriate tactics and tactic-sequences combined with an optimized small-scale hammering approach. We implement the technique as a tactic-level automation for HOL4: TacticToe. It implements a modified A*-algorithm directly in HOL4 that explores different tactic-level proof paths, guiding their selection by learning from a large number of previous tactic-level proofs. Unlike the existing hammer methods, TacticToe avoids translation to FOL, working directly on the HOL level. By combining tactic prediction and premise selection, TacticToe is able to re-prove 39 percent of 7902 HOL4 theorems in 5 seconds whereas the best single HOL(y)Hammer strategy solves 32 percent in the same amount of time. |
Tasks | |
Published | 2018-04-02 |
URL | http://arxiv.org/abs/1804.00595v1 |
http://arxiv.org/pdf/1804.00595v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-to-reason-with-hol4-tactics |
Repo | |
Framework | |
Approximate Bayesian inference in spatial environments
Title | Approximate Bayesian inference in spatial environments |
Authors | Atanas Mirchev, Baris Kayalibay, Maximilian Soelch, Patrick van der Smagt, Justin Bayer |
Abstract | Model-based approaches bear great promise for decision making of agents interacting with the physical world. In the context of spatial environments, different types of problems such as localisation, mapping, navigation or autonomous exploration are typically addressed with specialised methods, often relying on detailed knowledge of the system at hand. We express these tasks as probabilistic inference and planning under the umbrella of deep sequential generative models. Using the frameworks of variational inference and neural networks, our method inherits favourable properties such as flexibility, scalability and the ability to learn from data. The method performs comparably to specialised state-of-the-art methodology in two distinct simulated environments. |
Tasks | Bayesian Inference, Decision Making |
Published | 2018-05-18 |
URL | https://arxiv.org/abs/1805.07206v3 |
https://arxiv.org/pdf/1805.07206v3.pdf | |
PWC | https://paperswithcode.com/paper/approximate-bayesian-inference-in-spatial |
Repo | |
Framework | |