July 28, 2019

3548 words 17 mins read

Paper Group ANR 225


A Statistical Machine Learning Approach to Yield Curve Forecasting. An Energy-Efficient Mixed-Signal Neuron for Inherently Error-Resilient Neuromorphic Systems. Real-Time Automatic Fetal Brain Extraction in Fetal MRI by Deep Learning. A Novel Low-Complexity Framework in Ultra-Wideband Imaging for Breast Cancer Detection. ScanNet: A Fast and Dense S …

A Statistical Machine Learning Approach to Yield Curve Forecasting

Title A Statistical Machine Learning Approach to Yield Curve Forecasting
Authors Rajiv Sambasivan, Sourish Das
Abstract Yield curve forecasting is an important problem in finance. In this work, we explore the use of Gaussian Processes in conjunction with a dynamic modeling strategy, much like the Kalman Filter, to model the yield curve. Gaussian Processes have been successfully applied to model functional data in a variety of applications. A Gaussian Process is used to model the yield curve, and the hyper-parameters of the Gaussian Process model are updated as the algorithm receives yield curve data. Yield curve data is typically available as a time series with a frequency of one day. We compare the proposed method with existing methods for forecasting the yield curve. The results of this study showed that while a competing method (a multivariate time series method) performed well in forecasting yields at the short-term structure region of the yield curve, Gaussian Processes perform well in the medium- and long-term structure regions. Accuracy in the long-term structure region of the yield curve has important practical implications. The Gaussian Process framework yields uncertainty and probability estimates directly, in contrast to other competing methods, and analysts are frequently interested in this information. In this study the proposed method is applied to yield curve forecasting; however, it can also be applied to model high-frequency time series data or data streams in other domains.
Tasks Gaussian Processes, Time Series
Published 2017-03-04
URL http://arxiv.org/abs/1703.01536v1
PDF http://arxiv.org/pdf/1703.01536v1.pdf
PWC https://paperswithcode.com/paper/a-statistical-machine-learning-approach-to
Repo
Framework
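
The abstract above describes fitting a Gaussian Process to the yield curve and re-estimating its hyper-parameters as new curves arrive. As a hedged illustration of that idea (not the authors' code; the kernel choice, maturities, and yields below are made-up), a scikit-learn sketch:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Illustrative maturities (years) and made-up yields (percent), not real market data.
maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30]).reshape(-1, 1)
yields_today = np.array([1.5, 1.6, 1.7, 1.9, 2.0, 2.2, 2.4, 2.5, 2.8, 2.9])

# Fit a GP to today's curve; hyper-parameters are re-estimated on each call to fit(),
# which is one simple way to update the model as new yield curve data arrives.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-4),
                              normalize_y=True)
gp.fit(maturities, yields_today)

grid = np.linspace(0.25, 30, 200).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)   # point forecast plus uncertainty band
```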

An Energy-Efficient Mixed-Signal Neuron for Inherently Error-Resilient Neuromorphic Systems

Title An Energy-Efficient Mixed-Signal Neuron for Inherently Error-Resilient Neuromorphic Systems
Authors Baibhab Chatterjee, Priyadarshini Panda, Shovan Maity, Kaushik Roy, Shreyas Sen
Abstract This work presents the design and analysis of a mixed-signal neuron (MS-N) for convolutional neural networks (CNN) and compares its performance with a digital neuron (Dig-N) in terms of operating frequency, power and noise. The circuit-level implementation of the MS-N in 65 nm CMOS technology exhibits 2-3 orders of magnitude better energy efficiency than Dig-N for neuromorphic computing applications - especially at low frequencies due to the high leakage currents from many transistors in Dig-N. The inherent error-resiliency of CNN is exploited to handle the thermal and flicker noise of MS-N. A system-level analysis using a cohesive circuit-algorithmic framework on the MNIST and CIFAR-10 datasets demonstrates an increase of 3% in worst-case classification error for MNIST when the integrated noise power in the bandwidth is ~1 μV².
Tasks
Published 2017-10-24
URL http://arxiv.org/abs/1710.09012v1
PDF http://arxiv.org/pdf/1710.09012v1.pdf
PWC https://paperswithcode.com/paper/an-energy-efficient-mixed-signal-neuron-for
Repo
Framework
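
The error-resiliency argument above concerns analog circuit noise, so there is no circuit code to reproduce; still, a rough way to probe the claim at the algorithm level is to inject Gaussian noise into activations and re-measure classification error. A small PyTorch sketch of such a noisy activation (the noise level and placement are assumptions, not from the paper):

```python
import torch
import torch.nn as nn

class NoisyReLU(nn.Module):
    """ReLU followed by additive Gaussian noise, standing in for analog thermal/flicker noise."""
    def __init__(self, sigma=0.01):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        return torch.relu(x) + self.sigma * torch.randn_like(x)
```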

Real-Time Automatic Fetal Brain Extraction in Fetal MRI by Deep Learning

Title Real-Time Automatic Fetal Brain Extraction in Fetal MRI by Deep Learning
Authors Seyed Sadegh Mohseni Salehi, Seyed Raein Hashemi, Clemente Velasco-Annis, Abdelhakim Ouaalam, Judy A. Estroff, Deniz Erdogmus, Simon K. Warfield, Ali Gholipour
Abstract Brain segmentation is a fundamental first step in neuroimage analysis. In the case of fetal MRI, it is particularly challenging and important due to the arbitrary orientation of the fetus, organs that surround the fetal head, and intermittent fetal motion. Several promising methods have been proposed but are limited in their performance in challenging cases and in real-time segmentation. We aimed to develop a fully automatic segmentation method that independently segments sections of the fetal brain in 2D fetal MRI slices in real time. To this end, we developed and evaluated a deep fully convolutional neural network based on 2D U-net and autocontext, and compared it to two alternative fast methods based on 1) a voxelwise fully convolutional network and 2) a method based on SIFT features, random forest and conditional random field. We trained the networks with manual brain masks on 250 stacks of training images, and tested on 17 stacks of normal fetal brain images as well as 18 stacks of extremely challenging cases with extreme motion, noise, and severely abnormal brain shape. Experimental results show that our U-net approach outperformed the other methods and achieved average Dice metrics of 96.52% and 78.83% in the normal and challenging test sets, respectively. With unprecedented performance and a test run time of about 1 second, our network can be used to segment the fetal brain in real time while fetal MRI slices are being acquired. This can enable real-time motion tracking, motion detection, and 3D reconstruction of fetal brain MRI.
Tasks 3D Reconstruction, Brain Segmentation, Motion Detection
Published 2017-10-25
URL http://arxiv.org/abs/1710.09338v1
PDF http://arxiv.org/pdf/1710.09338v1.pdf
PWC https://paperswithcode.com/paper/real-time-automatic-fetal-brain-extraction-in
Repo
Framework
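
As a hedged sketch of the architecture family named above (a generic small 2D U-net in PyTorch, not the authors' network, autocontext cascade, or training setup), together with the Dice overlap used as the evaluation metric:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet2D(nn.Module):
    """One-level encoder-decoder with a skip connection; real U-nets stack several levels."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)          # brain-mask logit per pixel

    def forward(self, x):                        # x: (N, 1, H, W), H and W divisible by 2
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d)

def dice(pred_mask, target_mask, eps=1e-6):
    """Dice overlap between binary masks, as reported in the abstract."""
    inter = (pred_mask * target_mask).sum()
    return (2 * inter + eps) / (pred_mask.sum() + target_mask.sum() + eps)
```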

A Novel Low-Complexity Framework in Ultra-Wideband Imaging for Breast Cancer Detection

Title A Novel Low-Complexity Framework in Ultra-Wideband Imaging for Breast Cancer Detection
Authors Yasaman Ettefagh, Mohammad Hossein Moghaddam, Saeed Vahidian
Abstract In this research work, a novel framework is proposed as an efficient successor to traditional imaging methods for breast cancer detection in order to decrease the computational complexity. In this framework, the breast is divided into segments in an iterative process, and in each iteration the segment most likely to contain a tumor is selected at the lowest possible resolution using suitable decision metrics. After finding the smallest tumor-containing segment, the resolution is increased in that segment, leaving the other parts of the breast image at low resolution. Our framework is applied to the most commonly used beamforming techniques, such as delay and sum (DAS) and delay multiply and sum (DMAS), and according to simulation results, it can decrease the computational complexity significantly for both DAS and DMAS without degrading the accuracy of the basic algorithms. The amount of complexity reduction can be determined manually or automatically based on two proposed methods described in this framework.
Tasks Breast Cancer Detection
Published 2017-09-08
URL http://arxiv.org/abs/1709.02549v1
PDF http://arxiv.org/pdf/1709.02549v1.pdf
PWC https://paperswithcode.com/paper/a-novel-low-complexity-framework-in-ultra
Repo
Framework
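
The framework above builds on delay-and-sum (DAS) beamforming. A bare-bones, time-domain DAS sketch (NumPy; the geometry, sampling, and propagation model are simplified assumptions, and the iterative segment-refinement loop of the paper is not shown):

```python
import numpy as np

def das_pixel_energy(traces, antenna_pos, pixel, fs, speed):
    """Delay-and-sum for one image pixel: sum each antenna trace at its round-trip delay.

    traces      : list of 1-D arrays, one received signal per antenna
    antenna_pos : (num_antennas, 2) array of antenna coordinates (m)
    pixel       : (2,) coordinates of the focal point (m)
    fs          : sampling rate (Hz); speed: propagation speed (m/s)
    """
    total = 0.0
    for trace, ant in zip(traces, antenna_pos):
        delay = 2.0 * np.linalg.norm(pixel - ant) / speed   # round-trip travel time (s)
        idx = int(round(delay * fs))
        if idx < len(trace):
            total += trace[idx]
    return total ** 2
```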

ScanNet: A Fast and Dense Scanning Framework for Metastatic Breast Cancer Detection from Whole-Slide Images

Title ScanNet: A Fast and Dense Scanning Framework for Metastatic Breast Cancer Detection from Whole-Slide Images
Authors Huangjing Lin, Hao Chen, Qi Dou, Liansheng Wang, Jing Qin, Pheng-Ann Heng
Abstract Lymph node metastasis is one of the most significant diagnostic indicators in breast cancer, which is traditionally observed under the microscope by pathologists. In recent years, computerized histology diagnosis has become one of the most rapidly expanding fields in medical image computing, which alleviates pathologists’ workload and reduces the misdiagnosis rate. However, automatic detection of lymph node metastases from whole slide images remains a challenging problem, due to the large-scale data with enormous resolution and the existence of hard mimics. In this paper, we propose a novel framework that leverages fully convolutional networks for efficient inference to meet the speed requirement of clinical practice, while reconstructing dense predictions under different offsets to ensure accurate detection of both micro- and macro-metastases. By incorporating the strategies of asynchronous sample prefetching and hard negative mining, the network can be trained effectively. Extensive experiments on the benchmark dataset of the 2016 Camelyon Grand Challenge corroborated the efficacy of our method. Compared with the state-of-the-art methods, our method achieved superior performance with a faster speed on the tumor localization task and surpassed human performance on the WSI classification task.
Tasks Breast Cancer Detection
Published 2017-07-30
URL http://arxiv.org/abs/1707.09597v1
PDF http://arxiv.org/pdf/1707.09597v1.pdf
PWC https://paperswithcode.com/paper/scannet-a-fast-and-dense-scanning-framework
Repo
Framework
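
Hard negative mining, one of the training strategies named above, can be sketched generically as keeping only the highest-loss negative patches for the next round of training (a hedged, generic sketch, not the authors' pipeline):

```python
import numpy as np

def mine_hard_negatives(neg_losses, keep_ratio=0.25):
    """Return indices of the hardest (highest-loss) negative patches."""
    neg_losses = np.asarray(neg_losses)
    k = max(1, int(len(neg_losses) * keep_ratio))
    return np.argsort(neg_losses)[-k:]
```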

Synthesizing Training Data for Object Detection in Indoor Scenes

Title Synthesizing Training Data for Object Detection in Indoor Scenes
Authors Georgios Georgakis, Arsalan Mousavian, Alexander C. Berg, Jana Kosecka
Abstract Detection of objects in cluttered indoor environments is one of the key enabling functionalities for service robots. The best performing object detection approaches in computer vision exploit deep Convolutional Neural Networks (CNN) to simultaneously detect and categorize the objects of interest in cluttered scenes. Training of such models typically requires large amounts of annotated training data, which are time-consuming and costly to obtain. In this work we explore the use of synthetically generated composite images for training state-of-the-art object detectors, especially for object instance detection. We superimpose 2D images of textured object models onto images of real environments at a variety of locations and scales. Our experiments evaluate different superimposition strategies ranging from purely image-based blending all the way to depth and semantics informed positioning of the object models into real scenes. We demonstrate the effectiveness of these object detector training strategies on two publicly available datasets, the GMU-Kitchens and the Washington RGB-D Scenes v2. As one observation, augmenting some hand-labeled training data with synthetic examples carefully composed onto scenes yields object detectors with comparable performance to using much more hand-labeled data. Broadly, this work charts new opportunities for training detectors for new objects by exploiting existing object model repositories in either a purely automatic fashion or with only a very small number of human-annotated examples.
Tasks Object Detection, Object Detection In Indoor Scenes
Published 2017-02-25
URL http://arxiv.org/abs/1702.07836v2
PDF http://arxiv.org/pdf/1702.07836v2.pdf
PWC https://paperswithcode.com/paper/synthesizing-training-data-for-object
Repo
Framework
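
The purely image-based blending baseline described above can be sketched as pasting an alpha-masked object crop onto a scene at a random position and scale; the depth- and semantics-informed placement is not shown, and OpenCV/NumPy are assumed tooling:

```python
import numpy as np
import cv2

def composite(scene_bgr, obj_bgra, scale=0.5, rng=np.random.default_rng(0)):
    """Paste a masked object crop into a scene; returns the image and the ground-truth box."""
    obj = cv2.resize(obj_bgra, None, fx=scale, fy=scale)
    oh, ow = obj.shape[:2]
    y = rng.integers(0, scene_bgr.shape[0] - oh)
    x = rng.integers(0, scene_bgr.shape[1] - ow)
    alpha = obj[..., 3:4].astype(np.float32) / 255.0           # object mask from alpha channel
    roi = scene_bgr[y:y + oh, x:x + ow].astype(np.float32)
    blended = alpha * obj[..., :3].astype(np.float32) + (1.0 - alpha) * roi
    scene_bgr[y:y + oh, x:x + ow] = blended.astype(np.uint8)
    return scene_bgr, (x, y, x + ow, y + oh)                   # box used as detector label
```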

Reinforced Temporal Attention and Split-Rate Transfer for Depth-Based Person Re-Identification

Title Reinforced Temporal Attention and Split-Rate Transfer for Depth-Based Person Re-Identification
Authors Nikolaos Karianakis, Zicheng Liu, Yinpeng Chen, Stefano Soatto
Abstract We address the problem of person re-identification from commodity depth sensors. One challenge for depth-based recognition is data scarcity. Our first contribution addresses this problem by introducing split-rate RGB-to-Depth transfer, which leverages large RGB datasets more effectively than popular fine-tuning approaches. Our transfer scheme is based on the observation that the model parameters at the bottom layers of a deep convolutional neural network can be directly shared between RGB and depth data while the remaining layers need to be fine-tuned rapidly. Our second contribution enhances re-identification for video by implementing temporal attention as a Bernoulli-Sigmoid unit acting upon frame-level features. Since this unit is stochastic, the temporal attention parameters are trained using reinforcement learning. Extensive experiments validate the accuracy of our method in person re-identification from depth sequences. Finally, in a scenario where subjects wear unseen clothes, we show large performance gains compared to a state-of-the-art model which relies on RGB data.
Tasks Person Re-Identification
Published 2017-05-28
URL http://arxiv.org/abs/1705.09882v2
PDF http://arxiv.org/pdf/1705.09882v2.pdf
PWC https://paperswithcode.com/paper/reinforced-temporal-attention-and-split-rate
Repo
Framework
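
A toy PyTorch rendering of a Bernoulli-Sigmoid temporal attention unit of the kind described above: a sigmoid scores each frame, a Bernoulli sample decides whether the frame contributes, and the summed log-probability is what a REINFORCE-style loss would use (dimensions and pooling are assumptions, not the authors' exact unit):

```python
import torch
import torch.nn as nn

class BernoulliTemporalAttention(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, frame_feats):                              # (T, feat_dim)
        p = torch.sigmoid(self.score(frame_feats)).squeeze(-1)   # keep-probability per frame
        dist = torch.distributions.Bernoulli(probs=p)
        keep = dist.sample()                                     # stochastic frame selection
        pooled = (keep.unsqueeze(-1) * frame_feats).sum(0) / keep.sum().clamp(min=1.0)
        return pooled, dist.log_prob(keep).sum()                 # log-prob feeds a REINFORCE loss
```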

Convergence Analysis of Two-layer Neural Networks with ReLU Activation

Title Convergence Analysis of Two-layer Neural Networks with ReLU Activation
Authors Yuanzhi Li, Yang Yuan
Abstract In recent years, stochastic gradient descent (SGD) based techniques have become the standard tools for training neural networks. However, formal theoretical understanding of why SGD can train neural networks in practice is largely missing. In this paper, we make progress on understanding this mystery by providing a convergence analysis for SGD on a rich subset of two-layer feedforward networks with ReLU activations. This subset is characterized by a special structure called “identity mapping”. We prove that, if the input follows a Gaussian distribution, with standard $O(1/\sqrt{d})$ initialization of the weights, SGD converges to the global minimum in a polynomial number of steps. Unlike normal vanilla networks, the “identity mapping” makes our network asymmetric, and thus the global minimum is unique. To complement our theory, we also show experimentally that multi-layer networks with this mapping have better performance compared with normal vanilla networks. Our convergence theorem differs from traditional non-convex optimization techniques. We show that SGD converges to the optimum in “two phases”: in phase I, the gradient points in the wrong direction; however, a potential function $g$ gradually decreases. Then in phase II, SGD enters a nice one-point convex region and converges. We also show that the identity mapping is necessary for convergence, as it moves the initial point to a better place for optimization. Experiments verify our claims.
Tasks
Published 2017-05-28
URL http://arxiv.org/abs/1705.09886v2
PDF http://arxiv.org/pdf/1705.09886v2.pdf
PWC https://paperswithcode.com/paper/convergence-analysis-of-two-layer-neural
Repo
Framework
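
A hedged toy rendering of the “identity mapping” structure and the $O(1/\sqrt{d})$ initialization mentioned above (the exact architecture and output layer in the paper may differ; this only illustrates a residual-style form that depends on $(I + W)x$):

```python
import numpy as np

d = 64
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d)) / np.sqrt(d)      # O(1/sqrt(d)) initialization of the weights

def f(x, W):
    # ReLU applied to (I + W) x, then summed with fixed output weights (an assumption)
    return np.sum(np.maximum(0.0, x + W @ x))
```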

Evaluating Complex Task through Crowdsourcing: Multiple Views Approach

Title Evaluating Complex Task through Crowdsourcing: Multiple Views Approach
Authors Lingyu Lyu, Mehmed Kantardzic
Abstract With the popularity of massive open online courses, grading through crowdsourcing has become a prevalent approach for large-scale classes. However, for grading complex tasks, which require specific skills and effort, crowdsourcing is limited by the insufficient knowledge of workers from the crowd. Due to the knowledge limitations of crowd graders, grading based on partial perspectives becomes a major challenge when evaluating complex tasks through crowdsourcing, especially for tasks that not only need specific knowledge for grading but also should be graded as a whole instead of being decomposed into smaller and simpler subtasks. We propose a framework for grading complex tasks via multiple views, which are different grading perspectives defined by experts for the task, to provide uniformity. An aggregation algorithm based on graders' variances is used to combine the grades for each view. We also detect bias patterns of the graders and debias them with respect to each view of the task. A bias pattern describes how grading behavior is biased among graders and is detected by a statistical technique. The proposed approach is analyzed on a synthetic data set. We show that our model gives more accurate results compared to grading approaches without multiple views and the debiasing algorithm.
Tasks
Published 2017-03-30
URL http://arxiv.org/abs/1703.10579v1
PDF http://arxiv.org/pdf/1703.10579v1.pdf
PWC https://paperswithcode.com/paper/evaluating-complex-task-through-crowdsourcing
Repo
Framework
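
A small sketch of one plausible variance-based aggregation step for a single view (the names and the inverse-variance weighting are assumptions; the bias-pattern detection and debiasing steps are not shown):

```python
import numpy as np

def aggregate_view(grades_by_grader):
    """grades_by_grader: dict grader_id -> list of grades that grader gave on one view."""
    # weight each grader inversely to the variance of their own grades on this view
    weights = {g: 1.0 / (np.var(v) + 1e-6) for g, v in grades_by_grader.items()}
    total = sum(weights.values())
    return sum(w * np.mean(grades_by_grader[g]) for g, w in weights.items()) / total
```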

Filling missing data in point clouds by merging structured and unstructured point clouds

Title Filling missing data in point clouds by merging structured and unstructured point clouds
Authors Franziska Lippoldt, Hartmut Schwandt
Abstract Point clouds arising from structured data, mainly as a result of CT scans, provide special properties regarding the distribution of points and the distances between them. Yet often, the amount of data provided cannot compare to that of unstructured point clouds, i.e. data that arises from 3D light scans or laser scans. This article proposes an approach to extend structured data and enhance its quality by inserting selected points from an unstructured point cloud. The resulting point cloud still has a partial structure that is called “half-structure”. In this way, missing data that cannot be optimally recovered through other surface reconstruction methods can be completed.
Tasks
Published 2017-02-15
URL http://arxiv.org/abs/1702.04641v1
PDF http://arxiv.org/pdf/1702.04641v1.pdf
PWC https://paperswithcode.com/paper/filling-missing-data-in-point-clouds-by
Repo
Framework

Pretata: predicting TATA binding proteins with novel features and dimensionality reduction strategy

Title Pretata: predicting TATA binding proteins with novel features and dimensionality reduction strategy
Authors Quan Zou, Shixiang Wan, Ying Ju, Jijun Tang, Xiangxiang Zeng
Abstract Background: It is essential to discover protein function from novel primary sequences. Wet lab experimental procedures are not only time-consuming but also costly, so predicting protein structure and function reliably based only on the amino acid sequence has significant value. TATA-binding protein (TBP) is a kind of DNA-binding protein that plays a key role in transcription regulation. Our study proposes an automatic approach for identifying TATA-binding proteins efficiently, accurately, and conveniently. This method can guide the identification of special proteins with computational intelligence strategies. Results: Firstly, we proposed novel fingerprint features for TBP based on pseudo amino acid composition, physicochemical properties, and secondary structure. Secondly, hierarchical feature dimensionality reduction strategies were employed to further improve performance. Currently, Pretata achieves 92.92% TATA-binding protein prediction accuracy, which is better than all other existing methods. Conclusions: The experiments demonstrate that our method can greatly improve prediction accuracy and speed, thus making large-scale NGS data prediction practical. A web server has been developed for other researchers and can be accessed at http://server.malab.cn/preTata/.
Tasks Dimensionality Reduction
Published 2017-03-07
URL http://arxiv.org/abs/1703.02850v1
PDF http://arxiv.org/pdf/1703.02850v1.pdf
PWC https://paperswithcode.com/paper/pretata-predicting-tata-binding-proteins-with
Repo
Framework
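
As a hedged illustration of sequence-based feature extraction in the spirit of the fingerprint features above (plain amino-acid composition only; the pseudo amino acid composition, physicochemical, and secondary-structure features of the paper are not reproduced):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(sequence):
    """Fraction of each of the 20 standard amino acids in a protein sequence."""
    sequence = sequence.upper()
    n = max(len(sequence), 1)
    return [sequence.count(a) / n for a in AMINO_ACIDS]
```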

Linear Convergence of An Iterative Phase Retrieval Algorithm with Data Reuse

Title Linear Convergence of An Iterative Phase Retrieval Algorithm with Data Reuse
Authors Gen Li, Yuchen Jiao, Yuantao Gu
Abstract Phase retrieval has been an attractive but difficult problem arising from the physical sciences, and there has been a gap between state-of-the-art theoretical convergence analyses and the corresponding efficient retrieval methods. Firstly, these analyses all assume that the sensing vectors and the iterative updates are independent, which only fits the ideal model with infinite measurements but not the reality, where data are limited and have to be reused. Secondly, the empirical results of some efficient methods, such as the randomized Kaczmarz method, show linear convergence, which is beyond existing theoretical explanations considering its randomness and reuse of data. In this work, we study for the first time, without the independence assumption, the convergence behavior of the randomized Kaczmarz method for phase retrieval. Specifically, beginning by taking the expectation of the squared estimation error with respect to the measurement index, with the sensing vector and the error from the previous step held fixed, we discard the independence assumption, rigorously derive upper and lower bounds on the reduction of the mean squared error, and prove linear convergence. This work fills the gap between a fast-converging algorithm and its theoretical understanding. The proposed methodology may contribute to the study of other iterative algorithms for phase retrieval and other problems in the broad area of signal processing and machine learning.
Tasks
Published 2017-12-05
URL http://arxiv.org/abs/1712.01712v1
PDF http://arxiv.org/pdf/1712.01712v1.pdf
PWC https://paperswithcode.com/paper/linear-convergence-of-an-iterative-phase
Repo
Framework
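
A bare randomized-Kaczmarz update for phase retrieval, shown for the real-valued case (a hedged sketch of the algorithm family analysed above, not the paper's analysis; the sign step is the usual heuristic for the lost phase):

```python
import numpy as np

def kaczmarz_phase_retrieval(A, y, iters=5000, seed=0):
    """A: (m, n) sensing matrix, y: magnitudes |A x*|. Returns an estimate of x*."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = rng.standard_normal(n)
    for _ in range(iters):
        i = rng.integers(m)                           # rows are reused as they get resampled
        a = A[i]
        s = 1.0 if a @ x >= 0 else -1.0               # current guess of the lost sign
        x += ((s * y[i] - a @ x) / (a @ a)) * a       # Kaczmarz projection onto one equation
    return x
```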

Building Fast and Compact Convolutional Neural Networks for Offline Handwritten Chinese Character Recognition

Title Building Fast and Compact Convolutional Neural Networks for Offline Handwritten Chinese Character Recognition
Authors Xuefeng Xiao, Lianwen Jin, Yafeng Yang, Weixin Yang, Jun Sun, Tianhai Chang
Abstract Like other problems in computer vision, offline handwritten Chinese character recognition (HCCR) has achieved impressive results using convolutional neural network (CNN)-based methods. However, larger and deeper networks are needed to deliver state-of-the-art results in this domain. Such networks intuitively appear to incur a high computational cost and require the storage of a large number of parameters, which renders them infeasible for deployment in portable devices. To address this problem, we propose a Global Supervised Low-rank Expansion (GSLRE) method and an Adaptive Drop-weight (ADW) technique to tackle the problems of speed and storage capacity. We design a nine-layer CNN for HCCR consisting of 3,755 classes, and devise an algorithm that can reduce the network's computational cost by nine times and compress the network to 1/18 of the original size of the baseline model, with only a 0.21% drop in accuracy. In tests, the proposed algorithm surpassed the best single-network performance reported thus far in the literature while requiring only 2.3 MB for storage. Furthermore, when integrated with our effective forward implementation, the recognition of an offline character image took only 9.7 ms on a CPU. Compared with the state-of-the-art CNN model for HCCR, our approach is approximately 30 times faster, yet 10 times more cost-efficient.
Tasks Offline Handwritten Chinese Character Recognition
Published 2017-02-26
URL http://arxiv.org/abs/1702.07975v1
PDF http://arxiv.org/pdf/1702.07975v1.pdf
PWC https://paperswithcode.com/paper/building-fast-and-compact-convolutional
Repo
Framework
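
The Adaptive Drop-weight idea above is a pruning technique; as a hedged, generic stand-in (not the authors' GSLRE/ADW procedure), magnitude pruning of a weight tensor in PyTorch looks like this:

```python
import torch

def prune_by_magnitude(weight, sparsity=0.9):
    """Zero out the smallest-magnitude entries of a weight tensor."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return torch.where(weight.abs() > threshold, weight, torch.zeros_like(weight))
```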

Ensemble classifier approach in breast cancer detection and malignancy grading- A review

Title Ensemble classifier approach in breast cancer detection and malignancy grading- A review
Authors Deepti Ameta
Abstract The number of diagnosed breast cancer cases is increasing annually and, unfortunately, translating into a high mortality rate. Cancer is hard to detect at early stages because malignant cells show properties (density) similar to those of non-malignant cells. The mortality rate could be minimized if breast cancer were detected in its early stages. However, current systems fall short of a fully automatic system that can detect not only the presence of breast cancer but also its stage. Estimation of malignancy grading is important for diagnosing the degree of growth of malignant cells as well as for selecting a proper therapy for the patient. Therefore, a complete and efficient clinical decision support system is proposed that performs breast cancer malignancy grading very efficiently. The system draws on the image processing and machine learning domains. The class imbalance problem, a machine learning problem, occurs when instances of one class greatly outnumber those of the other class, resulting in inefficient classification of samples and hence a poor decision support system. Therefore EUSBoost, an ensemble-based classifier, is proposed; it is efficient and able to outperform other classifiers because it combines the benefits of a boosting algorithm with random undersampling techniques. A comparison of EUSBoost with other techniques is also shown in the paper.
Tasks Breast Cancer Detection
Published 2017-04-11
URL http://arxiv.org/abs/1704.03801v1
PDF http://arxiv.org/pdf/1704.03801v1.pdf
PWC https://paperswithcode.com/paper/ensemble-classifier-approach-in-breast-cancer
Repo
Framework
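
EUSBoost combines evolutionary undersampling with boosting; as a hedged stand-in (random undersampling plus boosting, not EUSBoost itself), the sketch below uses imbalanced-learn's RUSBoostClassifier on a synthetic, purely illustrative imbalanced dataset:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.ensemble import RUSBoostClassifier

# Synthetic, heavily imbalanced data standing in for the malignancy-grading classes.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RUSBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```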

Microwave breast cancer detection using Empirical Mode Decomposition features

Title Microwave breast cancer detection using Empirical Mode Decomposition features
Authors Hongchao Song, Yunpeng Li, Mark Coates, Aidong Men
Abstract Microwave-based breast cancer detection has been proposed as a complementary approach to compensate for some drawbacks of existing breast cancer detection techniques. Among the existing microwave breast cancer detection methods, machine learning-type algorithms have recently become more popular. These focus on detecting the existence of breast tumours rather than performing imaging to identify the exact tumour position. A key step of the machine learning approaches is feature extraction. One of the most widely used feature extraction methods is principal component analysis (PCA). However, it can be sensitive to signal misalignment. This paper presents an empirical mode decomposition (EMD)-based feature extraction method, which is more robust to such misalignment. Experimental results involving clinical data sets combined with numerically simulated tumour responses show that combined features from EMD and PCA improve the detection performance with an ensemble selection-based classifier.
Tasks Breast Cancer Detection
Published 2017-02-24
URL http://arxiv.org/abs/1702.07608v1
PDF http://arxiv.org/pdf/1702.07608v1.pdf
PWC https://paperswithcode.com/paper/microwave-breast-cancer-detection-using
Repo
Framework
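
A hedged feature-extraction sketch in the spirit of the abstract above: decompose each radar trace into intrinsic mode functions (here via the PyEMD package, an assumed dependency) and use per-IMF energies as features; the PCA combination and the ensemble-selection classifier are not shown:

```python
import numpy as np
from PyEMD import EMD

def emd_energy_features(trace, max_imfs=5):
    """Energy of the first few intrinsic mode functions of a microwave trace."""
    imfs = EMD().emd(np.asarray(trace, dtype=float))
    energies = [float(np.sum(imf ** 2)) for imf in imfs[:max_imfs]]
    energies += [0.0] * (max_imfs - len(energies))    # pad to a fixed-length feature vector
    return np.array(energies)
```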