October 20, 2019

3434 words 17 mins read

Paper Group ANR 51


The CoNLL–SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection. Vibration-Based Damage Detection in Wind Turbine Blades using Phase-Based Motion Estimation and Motion Magnification. MORF: A Framework for Predictive Modeling and Replication At Scale With Privacy-Restricted MOOC Data. Revisiting Small Batch Training for Deep Neural Netw …

The CoNLL–SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection

Title The CoNLL–SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection
Authors Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Arya D. McCarthy, Katharina Kann, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, Mans Hulden
Abstract The CoNLL–SIGMORPHON 2018 shared task on supervised learning of morphological generation featured data sets from 103 typologically diverse languages. Apart from extending the number of languages involved in earlier supervised tasks of generating inflected forms, this year the shared task also featured a new second task which asked participants to inflect words in sentential context, similar to a cloze task. This second task featured seven languages. Task 1 received 27 submissions and task 2 received 6 submissions. Both tasks featured a low, medium, and high data condition. Nearly all submissions featured a neural component and built on highly-ranked systems from the earlier 2017 shared task. In the inflection task (task 1), 41 of the 52 languages present in last year’s inflection task showed improvement by the best systems in the low-resource setting. The cloze task (task 2) proved to be difficult, and few submissions managed to consistently improve upon both a simple neural baseline system and a lemma-repeating baseline.
Tasks
Published 2018-10-16
URL https://arxiv.org/abs/1810.07125v3
PDF https://arxiv.org/pdf/1810.07125v3.pdf
PWC https://paperswithcode.com/paper/the-conll-sigmorphon-2018-shared-task
Repo
Framework
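
For readers unfamiliar with the task setup, the lemma-repeating baseline mentioned for the cloze task is simple enough to sketch in a few lines. This is only an illustration (it ignores the sentential context of task 2); the function names and example triples are made up, not taken from the shared-task release.

```python
def lemma_copy_baseline(lemma: str, msd: str) -> str:
    """Trivial baseline: predict the lemma itself as the inflected form,
    ignoring the morphosyntactic description (msd)."""
    return lemma

def exact_match_accuracy(examples):
    """examples: list of (lemma, msd, gold_form) triples."""
    hits = sum(lemma_copy_baseline(lemma, msd) == gold
               for lemma, msd, gold in examples)
    return hits / max(1, len(examples))

# Illustrative triples (not from the shared-task data):
print(exact_match_accuracy([("cut", "V;PST", "cut"),
                            ("walk", "V;PST", "walked")]))   # 0.5
```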

Vibration-Based Damage Detection in Wind Turbine Blades using Phase-Based Motion Estimation and Motion Magnification

Title Vibration-Based Damage Detection in Wind Turbine Blades using Phase-Based Motion Estimation and Motion Magnification
Authors Aral Sarrafi, Zhu Mao, Christopher Niezrecki, Peyman Poozesh
Abstract Vibration-based Structural Health Monitoring (SHM) techniques are among the most common approaches for structural damage identification. The presence of damage in structures may be identified by monitoring the changes in dynamic behavior subject to external loading, and is typically performed by using experimental modal analysis (EMA) or operational modal analysis (OMA). These tools for SHM normally require a limited number of physically attached transducers (e.g. accelerometers) in order to record the response of the structure for further analysis. Signal conditioners, wires, wireless receivers and a data acquisition system (DAQ) are also typical components of traditional sensing systems used in vibration-based SHM. However, instrumentation of lightweight structures with contact sensors such as accelerometers may induce mass-loading effects, and for large-scale structures, the instrumentation is labor intensive and time consuming. Achieving high spatial measurement resolution for a large-scale structure is not always feasible while working with traditional contact sensors, and there is also the potential for a lack of reliability associated with fixed contact sensors in outliving the life-span of the host structure. Among the state-of-the-art non-contact measurements, digital video cameras are able to rapidly collect high-density spatial information from structures remotely. In this paper, the subtle motions from recorded video (i.e. a sequence of images) are extracted by means of Phase-based Motion Estimation (PME) and the extracted information is used to conduct damage identification on a 2.3-meter long Skystream wind turbine blade (WTB). The PME and phase-based motion magnification approach estimates the structural motion from the captured sequence of images for both baseline and damaged test cases on a wind turbine blade.
Tasks Motion Estimation
Published 2018-03-30
URL http://arxiv.org/abs/1804.00558v1
PDF http://arxiv.org/pdf/1804.00558v1.pdf
PWC https://paperswithcode.com/paper/vibration-based-damage-detection-in-wind
Repo
Framework
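
As a rough illustration of phase-based motion estimation (not the authors' pipeline), local phase can be extracted with a complex Gabor filter and the frame-to-frame phase difference converted to displacement. The kernel parameters and function names below are arbitrary assumptions for a minimal sketch.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, wavelength=8.0, sigma=4.0):
    # Complex horizontal Gabor filter; its argument gives the local phase.
    xs = np.arange(size) - size // 2
    x, y = np.meshgrid(xs, xs)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * x / wavelength)

def phase_based_displacement(frame_a, frame_b, wavelength=8.0):
    """Estimate sub-pixel horizontal motion between two grayscale frames from
    the temporal change in local phase (small-motion assumption)."""
    g = gabor_kernel(wavelength=wavelength)
    ra = convolve2d(frame_a, g, mode="same", boundary="symm")
    rb = convolve2d(frame_b, g, mode="same", boundary="symm")
    dphi = np.angle(rb * np.conj(ra))        # temporal phase difference
    omega = 2 * np.pi / wavelength           # spatial frequency of the filter
    weight = np.abs(ra) * np.abs(rb)         # trust high-amplitude responses
    return float(np.sum(weight * dphi / omega) / (np.sum(weight) + 1e-12))
```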

MORF: A Framework for Predictive Modeling and Replication At Scale With Privacy-Restricted MOOC Data

Title MORF: A Framework for Predictive Modeling and Replication At Scale With Privacy-Restricted MOOC Data
Authors Josh Gardner, Christopher Brooks, Juan Miguel L. Andres, Ryan Baker
Abstract Big data repositories from online learning platforms such as Massive Open Online Courses (MOOCs) represent an unprecedented opportunity to advance research on education at scale and impact a global population of learners. To date, such research has been hindered by poor reproducibility and a lack of replication, largely due to three types of barriers: experimental, inferential, and data. We present a novel system for large-scale computational research, the MOOC Replication Framework (MORF), to jointly address these barriers. We discuss MORF’s architecture, an open-source platform-as-a-service (PaaS) which includes a simple, flexible software API providing for multiple modes of research (predictive modeling or production rule analysis) integrated with a high-performance computing environment. All experiments conducted on MORF use executable Docker containers, which ensure complete reproducibility while allowing for the use of any software or language that can be installed in the Linux-based Docker container. Each experimental artifact is assigned a DOI and made publicly available. MORF has the potential to accelerate and democratize research on its massive data repository, which currently includes over 200 MOOCs, as demonstrated by initial research conducted on the platform. We also highlight ways in which MORF represents a solution template to a more general class of problems faced by computational researchers in other domains.
Tasks
Published 2018-01-16
URL http://arxiv.org/abs/1801.05236v3
PDF http://arxiv.org/pdf/1801.05236v3.pdf
PWC https://paperswithcode.com/paper/morf-a-framework-for-predictive-modeling-and
Repo
Framework

Revisiting Small Batch Training for Deep Neural Networks

Title Revisiting Small Batch Training for Deep Neural Networks
Authors Dominic Masters, Carlo Luschi
Abstract Modern deep neural network training is typically based on mini-batch stochastic gradient optimization. While the use of large mini-batches increases the available computational parallelism, small batch training has been shown to provide improved generalization performance and allows a significantly smaller memory footprint, which might also be exploited to improve machine throughput. In this paper, we review common assumptions on learning rate scaling and training duration, as a basis for an experimental comparison of test performance for different mini-batch sizes. We adopt a learning rate that corresponds to a constant average weight update per gradient calculation (i.e., per unit cost of computation), and point out that this results in a variance of the weight updates that increases linearly with the mini-batch size $m$. The collected experimental results for the CIFAR-10, CIFAR-100 and ImageNet datasets show that increasing the mini-batch size progressively reduces the range of learning rates that provide stable convergence and acceptable test performance. On the other hand, small mini-batch sizes provide more up-to-date gradient calculations, which yields more stable and reliable training. The best performance has been consistently obtained for mini-batch sizes between $m = 2$ and $m = 32$, which contrasts with recent work advocating the use of mini-batch sizes in the thousands.
Tasks
Published 2018-04-20
URL http://arxiv.org/abs/1804.07612v1
PDF http://arxiv.org/pdf/1804.07612v1.pdf
PWC https://paperswithcode.com/paper/revisiting-small-batch-training-for-deep
Repo
Framework
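
A short way to see the linear-variance claim in the abstract (a sketch under the usual assumption of i.i.d. per-example gradients with covariance $\Sigma_\nabla$; $\tilde\eta$ denotes a fixed base learning rate per example): plain SGD with mini-batch size $m$ updates the weights as

$$\Delta\theta = -\frac{\eta}{m}\sum_{i=1}^{m}\nabla L_i(\theta).$$

Keeping the average weight update per gradient calculation constant corresponds to the linear scaling $\eta = m\,\tilde\eta$, which gives

$$\Delta\theta = -\tilde\eta\sum_{i=1}^{m}\nabla L_i(\theta), \qquad \mathbb{E}[\Delta\theta] = -m\,\tilde\eta\,\mathbb{E}[\nabla L], \qquad \operatorname{Var}(\Delta\theta) = m\,\tilde\eta^{2}\,\Sigma_\nabla,$$

so the expected progress per training example processed is independent of $m$, while the variance of each individual weight update grows linearly with $m$.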

Five-point Fundamental Matrix Estimation for Uncalibrated Cameras

Title Five-point Fundamental Matrix Estimation for Uncalibrated Cameras
Authors Daniel Barath
Abstract We aim at estimating the fundamental matrix in two views from five correspondences of rotation-invariant features obtained by, e.g., the SIFT detector. The proposed minimal solver first estimates a homography from three correspondences, assuming that they are co-planar and exploiting their rotational components. Then the fundamental matrix is obtained from the homography and two additional point pairs in general position. The proposed approach, combined with robust estimators like Graph-Cut RANSAC, is superior to other state-of-the-art algorithms both in terms of accuracy and the number of iterations required. This is validated on synthesized data and $561$ real image pairs. Moreover, the tests show that requiring three points on a plane is not too restrictive in urban environments, and locally optimized robust estimators lead to accurate estimates even if the points are not entirely co-planar. As a potential application, we show that using the proposed method makes two-view multi-motion estimation more accurate.
Tasks Motion Estimation
Published 2018-03-01
URL http://arxiv.org/abs/1803.00260v1
PDF http://arxiv.org/pdf/1803.00260v1.pdf
PWC https://paperswithcode.com/paper/five-point-fundamental-matrix-estimation-for
Repo
Framework
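
For context, the second stage described in the abstract is the standard plane-plus-parallax construction (sketched here; how the rotational components of the three SIFT correspondences determine the homography is the paper's contribution and is not reproduced). Given the plane-induced homography $\mathbf{H}$ and two further correspondences $(\mathbf{x}_i, \mathbf{x}'_i)$, $i \in \{4, 5\}$, in general position, each pair constrains the epipole $\mathbf{e}'$ to the line

$$\mathbf{l}_i = \mathbf{x}'_i \times (\mathbf{H}\mathbf{x}_i),$$

so $\mathbf{e}' \simeq \mathbf{l}_4 \times \mathbf{l}_5$, and the fundamental matrix follows as $\mathbf{F} = [\mathbf{e}']_{\times}\,\mathbf{H}$.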

Analyzing Roles of Classifiers and Code-Mixed factors for Sentiment Identification

Title Analyzing Roles of Classifiers and Code-Mixed factors for Sentiment Identification
Authors Soumil Mandal, Dipankar Das
Abstract Multilingual speakers often switch between languages to express themselves on social communication platforms. Sometimes the original script of each language is preserved, but using a common script for all the languages is also quite popular due to convenience. On such occasions, multiple languages with different rules of grammar are mixed in the same script, which makes even accurate sentiment identification a challenging task for natural language processing. In this paper, we report the results of various experiments carried out on a movie review dataset having this code-mixing property of two languages, English and Bengali, both typed in Roman script. We tested various machine learning algorithms trained only on English features on our code-mixed data and achieved a maximum accuracy of 59.00% using a Naive Bayes (NB) model. We also tested various models trained on code-mixed data as well as English features, and the highest accuracy of 72.50% was obtained by a Support Vector Machine (SVM) model. Finally, we analyzed the misclassified snippets and discuss the challenges that need to be resolved for better accuracy.
Tasks
Published 2018-01-08
URL http://arxiv.org/abs/1801.02581v2
PDF http://arxiv.org/pdf/1801.02581v2.pdf
PWC https://paperswithcode.com/paper/analyzing-roles-of-classifiers-and-code-mixed
Repo
Framework
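
The abstract does not specify the exact feature set, so the following is only a generic baseline in the same spirit: a character n-gram TF-IDF representation with a linear SVM in scikit-learn. The example snippets and labels are made up for illustration.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical romanized English-Bengali snippets with binary sentiment labels.
texts = ["movie ta khub bhalo chilo, loved it",
         "worst acting ever, ekdom baje chobi"]
labels = [1, 0]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("svm", LinearSVC()),
])
clf.fit(texts, labels)
print(clf.predict(["darun movie, must watch"]))
```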

Robust training of recurrent neural networks to handle missing data for disease progression modeling

Title Robust training of recurrent neural networks to handle missing data for disease progression modeling
Authors Mostafa Mehdipour Ghazi, Mads Nielsen, Akshay Pai, M. Jorge Cardoso, Marc Modat, Sebastien Ourselin, Lauge Sørensen
Abstract Disease progression modeling (DPM) using longitudinal data is a challenging task in machine learning for healthcare that can provide clinicians with better tools for diagnosis and monitoring of disease. Existing DPM algorithms neglect temporal dependencies among measurements and make parametric assumptions about biomarker trajectories. In addition, they do not model multiple biomarkers jointly and need to align subjects’ trajectories. In this paper, recurrent neural networks (RNNs) are utilized to address these issues. However, in many cases, longitudinal cohorts contain incomplete data, which hinders the application of standard RNNs and requires a pre-processing step such as imputation of the missing values. We, therefore, propose a generalized training rule for the most widely used RNN architecture, long short-term memory (LSTM) networks, that can handle missing values in both target and predictor variables. This algorithm is applied for modeling the progression of Alzheimer’s disease (AD) using magnetic resonance imaging (MRI) biomarkers. The results show that the proposed LSTM algorithm achieves a lower mean absolute error for prediction of measurements across all considered MRI biomarkers compared to using standard LSTM networks with data imputation or using a regression-based DPM method. Moreover, applying linear discriminant analysis to the biomarkers’ values predicted by the proposed algorithm results in a larger area under the receiver operating characteristic curve (AUC) for clinical diagnosis of AD compared to the same alternatives, and the AUC is comparable to state-of-the-art AUCs from a recent cross-sectional medical image classification challenge. This paper shows that built-in handling of missing values in LSTM network training paves the way for application of RNNs in disease progression modeling.
Tasks Image Classification, Imputation
Published 2018-08-16
URL http://arxiv.org/abs/1808.05500v1
PDF http://arxiv.org/pdf/1808.05500v1.pdf
PWC https://paperswithcode.com/paper/robust-training-of-recurrent-neural-networks
Repo
Framework
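
The authors' generalized LSTM training rule is not reproduced here; the sketch below only illustrates the general idea of training with missing values by zero-filling inputs, appending an observation mask, and averaging the loss over observed targets only (PyTorch, illustrative names and sizes).

```python
import torch
import torch.nn as nn

class BiomarkerLSTM(nn.Module):
    """Predict next-visit biomarker values from past visits; missing inputs
    are zero-filled and flagged through a concatenated observation mask."""
    def __init__(self, n_biomarkers, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(2 * n_biomarkers, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_biomarkers)

    def forward(self, x, observed):             # both (batch, time, biomarkers)
        inp = torch.cat([torch.nan_to_num(x), observed], dim=-1)
        h, _ = self.lstm(inp)
        return self.head(h)

def masked_mae(pred, target, target_observed):
    # Mean absolute error computed over observed target entries only.
    err = (pred - torch.nan_to_num(target)).abs() * target_observed
    return err.sum() / target_observed.sum().clamp(min=1)
```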

v-SVR Polynomial Kernel for Predicting the Defect Density in New Software Projects

Title v-SVR Polynomial Kernel for Predicting the Defect Density in New Software Projects
Authors Cuauhtemoc Lopez-Martin, Mohammad Azzeh, Ali Bou Nassif, Shadi Banitaan
Abstract An important product measure to determine the effectiveness of software processes is the defect density (DD). In this study, we propose the application of support vector regression (SVR) to predict the DD of new software projects obtained from the International Software Benchmarking Standards Group (ISBSG) Release 2018 data set. Two types of SVR (e-SVR and v-SVR) were applied to train and test these projects. Each SVR used four types of kernels. The prediction accuracy of each SVR was compared to that of a statistical regression (i.e., a simple linear regression, SLR). A statistical significance test showed that v-SVR with a polynomial kernel was better than SLR when new software projects were developed on mainframes and coded in third-generation programming languages.
Tasks
Published 2018-12-15
URL http://arxiv.org/abs/1901.03362v1
PDF http://arxiv.org/pdf/1901.03362v1.pdf
PWC https://paperswithcode.com/paper/v-svr-polynomial-kernel-for-predicting-the
Repo
Framework
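
In scikit-learn terms, v-SVR with a polynomial kernel corresponds to `NuSVR(kernel="poly")`. The sketch below compares it to a simple linear regression on synthetic stand-in data (the ISBSG data are licensed); the hyperparameter values are placeholders, not the ones tuned in the paper.

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))                                  # stand-in project features
y = 0.5 * X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=120)  # stand-in defect density

v_svr_poly = make_pipeline(StandardScaler(),
                           NuSVR(kernel="poly", degree=3, nu=0.5, C=1.0))
slr = LinearRegression()

for name, model in [("v-SVR (poly)", v_svr_poly), ("SLR", slr)]:
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.3f}")
```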

Structural and object detection for phosphene images

Title Structural and object detection for phosphene images
Authors Melani Sanchez-Garcia, Ruben Martinez-Cantin, Jose J. Guerrero
Abstract Prosthetic vision based on phosphenes is a promising way to provide visual perception to some blind people. However, phosphenic images are very limited in terms of spatial resolution (e.g., a 32 x 32 phosphene array) and luminance levels (e.g., 8 gray levels), which results in the subject receiving very limited information about the scene. This requires using high-level processing to extract more information from the scene and present it to the subject within the limitations of the phosphenes. In this work, we study the recognition of indoor environments under simulated prosthetic vision. Most research in simulated prosthetic vision is performed on static images, while very few researchers have addressed the problem of scene recognition through video sequences. We propose a new approach to build a schematic representation of indoor environments for phosphene images. Our schematic representation relies on two parallel CNNs for the extraction of structural informative edges of the room and the relevant object silhouettes based on mask segmentation. We performed a study with twelve normally sighted subjects to evaluate how well our methods support room recognition when presenting phosphenic images and videos. We show that our method increases the recognition ability of the user from 75% with alternative methods to 90% with our approach.
Tasks Object Detection, Scene Recognition
Published 2018-09-25
URL http://arxiv.org/abs/1809.09607v2
PDF http://arxiv.org/pdf/1809.09607v2.pdf
PWC https://paperswithcode.com/paper/structural-and-object-detection-for-phosphene
Repo
Framework
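
To make the "32 x 32 phosphene array with 8 gray levels" concrete, here is a minimal rendering-style sketch (block-average and quantize a grayscale image). It is not the authors' dual-CNN schematic-representation pipeline, only an illustration of the output resolution constraint.

```python
import numpy as np

def to_phosphenes(gray, grid=32, levels=8):
    """Reduce a grayscale image (2-D float array in [0, 1]) to a grid x grid
    array with `levels` gray levels: block-average, then quantize.
    Assumes the input image is at least grid x grid pixels."""
    h, w = gray.shape
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    out = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            out[i, j] = gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return np.round(out * (levels - 1)) / (levels - 1)
```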

IntentsKB: A Knowledge Base of Entity-Oriented Search Intents

Title IntentsKB: A Knowledge Base of Entity-Oriented Search Intents
Authors Darío Garigliotti, Krisztian Balog
Abstract We address the problem of constructing a knowledge base of entity-oriented search intents. Search intents are defined on the level of entity types, each comprising a high-level intent category (property, website, service, or other) along with a cluster of query terms used to express that intent. These machine-readable statements can be leveraged in various applications, e.g., for generating entity cards or query recommendations. By structuring service-oriented search intents, we take one step towards making entities actionable. The main contribution of this paper is a pipeline of components we develop to construct a knowledge base of entity intents. We evaluate performance both component-wise and end-to-end, and demonstrate that our approach is able to generate high-quality data.
Tasks
Published 2018-09-02
URL http://arxiv.org/abs/1809.00345v1
PDF http://arxiv.org/pdf/1809.00345v1.pdf
PWC https://paperswithcode.com/paper/intentskb-a-knowledge-base-of-entity-oriented
Repo
Framework
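
One way to picture the machine-readable statements described above; the field names and values are purely illustrative, not the paper's actual schema.

```python
# A hypothetical entity-oriented search-intent statement: an entity type,
# a high-level intent category, and a cluster of query terms expressing it.
intent_statement = {
    "entity_type": "airline",
    "category": "service",          # one of: property, website, service, other
    "query_terms": ["check in", "online check in", "web check in"],
    "confidence": 0.87,             # assumed pipeline-assigned score
}
```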

Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions

Title Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions
Authors Stylianos I. Venieris, Alexandros Kouris, Christos-Savvas Bouganis
Abstract In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep learning ecosystem to provide a tunable balance between performance, power consumption and programmability. In this paper, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics which include the supported applications, architectural choices, design space exploration methods and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete and in-depth evaluation of CNN-to-FPGA toolflows.
Tasks
Published 2018-03-15
URL http://arxiv.org/abs/1803.05900v1
PDF http://arxiv.org/pdf/1803.05900v1.pdf
PWC https://paperswithcode.com/paper/toolflows-for-mapping-convolutional-neural
Repo
Framework

ReXCam: Resource-Efficient, Cross-Camera Video Analytics at Scale

Title ReXCam: Resource-Efficient, Cross-Camera Video Analytics at Scale
Authors Samvit Jain, Xun Zhang, Yuhao Zhou, Ganesh Ananthanarayanan, Junchen Jiang, Yuanchao Shu, Joseph Gonzalez
Abstract Enterprises are increasingly deploying large camera networks for video analytics. Many target applications entail a common problem template: searching for and tracking an object or activity of interest (e.g. a speeding vehicle, a break-in) through a large camera network in live video. Such cross-camera analytics is compute and data intensive, with cost growing with the number of cameras and time. To address this cost challenge, we present ReXCam, a new system for efficient cross-camera video analytics. ReXCam exploits spatial and temporal locality in the dynamics of real camera networks to guide its inference-time search for a query identity. In an offline profiling phase, ReXCam builds a cross-camera correlation model that encodes the locality observed in historical traffic patterns. At inference time, ReXCam applies this model to filter frames that are not spatially and temporally correlated with the query identity’s current position. In cases of occasional missed detections, ReXCam performs a fast-replay search on recently filtered video frames, enabling graceful recovery. Together, these techniques allow ReXCam to reduce compute workload by 8.3x on an 8-camera dataset, and by 23x - 38x on a simulated 130-camera dataset. ReXCam has been implemented and deployed on a testbed of 5 AWS DeepLens cameras.
Tasks
Published 2018-11-03
URL https://arxiv.org/abs/1811.01268v4
PDF https://arxiv.org/pdf/1811.01268v4.pdf
PWC https://paperswithcode.com/paper/rexcam-resource-efficient-cross-camera-video
Repo
Framework
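
A toy version of the offline-profiling / inference-time-filtering idea described above (not ReXCam's actual data structures; camera indices, lag handling, and the threshold are illustrative assumptions):

```python
import numpy as np

def build_correlation_model(transitions, n_cameras, max_lag):
    """Offline profiling sketch: count how often an identity last seen at
    camera `src` reappears at camera `dst` after `lag` time steps."""
    counts = np.zeros((n_cameras, n_cameras, max_lag))
    for src, dst, lag in transitions:
        if lag < max_lag:
            counts[src, dst, lag] += 1
    totals = counts.sum(axis=(1, 2), keepdims=True)
    return counts / np.clip(totals, 1, None)

def cameras_to_search(model, current_cam, elapsed, threshold=0.01):
    # Inference-time filtering: only query cameras whose historical transition
    # probability at this lag exceeds the threshold.
    lag = min(elapsed, model.shape[2] - 1)
    return np.flatnonzero(model[current_cam, :, lag] > threshold)
```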

Non-parametric Sparse Additive Auto-regressive Network Models

Title Non-parametric Sparse Additive Auto-regressive Network Models
Authors Hao Henry Zhou, Garvesh Raskutti
Abstract Consider a multi-variate time series $(X_t)_{t=0}^{T}$ where $X_t \in \mathbb{R}^d$ which may represent spike train responses for multiple neurons in a brain, crime event data across multiple regions, and many others. An important challenge associated with these time series models is to estimate an influence network between the $d$ variables, especially when the number of variables $d$ is large meaning we are in the high-dimensional setting. Prior work has focused on parametric vector auto-regressive models. However, parametric approaches are somewhat restrictive in practice. In this paper, we use the non-parametric sparse additive model (SpAM) framework to address this challenge. Using a combination of $\beta$ and $\phi$-mixing properties of Markov chains and empirical process techniques for reproducing kernel Hilbert spaces (RKHSs), we provide upper bounds on mean-squared error in terms of the sparsity $s$, logarithm of the dimension $\log d$, number of time points $T$, and the smoothness of the RKHSs. Our rates are sharp up to logarithm factors in many cases. We also provide numerical experiments that support our theoretical results and display potential advantages of using our non-parametric SpAM framework for a Chicago crime dataset.
Tasks Time Series
Published 2018-01-23
URL http://arxiv.org/abs/1801.07644v2
PDF http://arxiv.org/pdf/1801.07644v2.pdf
PWC https://paperswithcode.com/paper/non-parametric-sparse-additive-auto
Repo
Framework
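
One natural way to write the model class described in the abstract (the notation mirrors the abstract; the precise formulation and assumptions are in the paper): for each coordinate $j$ of the series,

$$X_{t+1,j} \;=\; \sum_{k=1}^{d} f_{jk}\!\left(X_{t,k}\right) \;+\; \varepsilon_{t,j}, \qquad j = 1, \dots, d,$$

where each component function $f_{jk}$ lies in a reproducing kernel Hilbert space and only $s \ll d$ of the $f_{jk}$ are nonzero for each node $j$; the nonzero components define the influence network, and the paper's mean-squared-error bounds are stated in terms of $s$, $\log d$, $T$, and the RKHS smoothness.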

Fast Botnet Detection From Streaming Logs Using Online Lanczos Method

Title Fast Botnet Detection From Streaming Logs Using Online Lanczos Method
Authors Zheng Chen, Xinli Yu, Chi Zhang, Jin Zhang, Cui Lin, Bo Song, Jianliang Gao, Xiaohua Hu, Wei-Shih Yang, Erjia Yan
Abstract A botnet, a group of coordinated bots, is becoming the main platform of malicious Internet activities like DDoS, click fraud, web scraping, spam/rumor distribution, etc. This paper focuses on the design and experimental evaluation of a new approach for botnet detection from streaming web server logs, motivated by its wide applicability, real-time protection capability, ease of use and better security of sensitive data. Our algorithm is inspired by Principal Component Analysis (PCA) to capture correlation in data, and we are the first to recognize and adapt the Lanczos method to improve the time complexity of PCA-based botnet detection from cubic to sub-cubic, which enables us to more accurately and sensitively detect botnets with sliding time windows rather than fixed time windows. We contribute a generalized online correlation matrix update formula, and a new termination condition for the Lanczos iteration based on an error bound and the non-decreasing eigenvalues of symmetric matrices. On our dataset of e-commerce website logs, experiments show that the time cost of the Lanczos method with different time windows is consistently only 20% to 25% of that of PCA.
Tasks
Published 2018-12-19
URL http://arxiv.org/abs/1812.07810v1
PDF http://arxiv.org/pdf/1812.07810v1.pdf
PWC https://paperswithcode.com/paper/fast-botnet-detection-from-streaming-logs
Repo
Framework
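
The paper's generalized online correlation update and Lanczos termination condition are not reproduced here; the sketch below only shows the generic building blocks: streaming second-moment accumulation and a Lanczos-based partial eigendecomposition via SciPy's ARPACK wrapper.

```python
import numpy as np
from scipy.sparse.linalg import eigsh   # ARPACK: implicitly restarted Lanczos

class OnlineCorrelation:
    """Streaming accumulation of first and second moments of feature vectors,
    from which a correlation matrix can be formed at any time."""
    def __init__(self, dim):
        self.n, self.sum, self.outer = 0, np.zeros(dim), np.zeros((dim, dim))

    def update(self, x):
        self.n += 1
        self.sum += x
        self.outer += np.outer(x, x)

    def correlation(self, eps=1e-12):
        mean = self.sum / self.n
        cov = self.outer / self.n - np.outer(mean, mean)
        std = np.sqrt(np.clip(np.diag(cov), eps, None))
        return cov / np.outer(std, std)

def top_components(corr, k=3):
    # Largest eigenpairs via Lanczos iterations instead of a full O(d^3)
    # eigendecomposition.
    return eigsh(corr, k=k, which="LM")
```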

HYPE: A High Performing NLP System for Automatically Detecting Hypoglycemia Events from Electronic Health Record Notes

Title HYPE: A High Performing NLP System for Automatically Detecting Hypoglycemia Events from Electronic Health Record Notes
Authors Yonghao Jin, Fei Li, Hong Yu
Abstract Hypoglycemia is common and potentially dangerous among those treated for diabetes. Electronic health records (EHRs) are important resources for hypoglycemia surveillance. In this study, we report the development and evaluation of deep learning-based natural language processing systems to automatically detect hypoglycemia events from EHR narratives. Experts in Public Health annotated 500 EHR notes from patients with diabetes. We used this annotated dataset to train and evaluate HYPE, a supervised NLP system for hypoglycemia detection. In our experiments, the convolutional neural network model yielded promising performance ($Precision=0.96 \pm 0.03$, $Recall=0.86 \pm 0.03$, $F1=0.91 \pm 0.03$) in a 10-fold cross-validation setting. Although the annotated data are highly imbalanced, our CNN-based HYPE system still achieved high performance for hypoglycemia detection. HYPE could be used for EHR-based hypoglycemia surveillance and to help clinicians provide timely treatment to high-risk patients.
Tasks
Published 2018-11-29
URL http://arxiv.org/abs/1811.11945v1
PDF http://arxiv.org/pdf/1811.11945v1.pdf
PWC https://paperswithcode.com/paper/hype-a-high-performing-nlp-system-for
Repo
Framework
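
The abstract does not give the CNN architecture details, so the block below is only a generic sentence-classification CNN in PyTorch of the kind commonly used for such detection tasks; the model name, filter widths, and embedding size are assumptions, not HYPE's actual configuration.

```python
import torch
import torch.nn as nn

class NoteCNN(nn.Module):
    """1-D convolutional classifier over a note's word-id sequence, producing
    a binary hypoglycemia-event probability."""
    def __init__(self, vocab_size, emb_dim=100, n_filters=100, widths=(3, 4, 5)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, w) for w in widths])
        self.out = nn.Linear(n_filters * len(widths), 1)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)    # (batch, emb_dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return torch.sigmoid(self.out(torch.cat(pooled, dim=1))).squeeze(1)
```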