Paper Group ANR 234
Using machine learning to speed up new and upgrade detector studies: a calorimeter case
Title | Using machine learning to speed up new and upgrade detector studies: a calorimeter case |
Authors | F. Ratnikov, D. Derkach, A. Boldyrev, A. Shevelev, P. Fakanov, L. Matyushin |
Abstract | In this paper, we discuss how advanced machine learning techniques allow physicists to perform in-depth studies of the realistic operating modes of detectors during the design stage. The proposed approach can be applied to both the conceptual design (CDR) and technical design (TDR) phases of future detectors, as well as to upgrades of existing detectors. Machine learning approaches may speed up the verification of possible detector configurations and automate the entire detector R&D cycle, which is often accompanied by a large number of scattered studies. We present the approach of using machine learning for detector R&D and its optimisation cycle, with an emphasis on the electromagnetic calorimeter upgrade project for the LHCb detector [lhcls3]. Spatial reconstruction and time-of-arrival properties for the electromagnetic calorimeter are demonstrated. (A hedged sketch of a fast surrogate regressor in this spirit appears after this entry.) |
Tasks | |
Published | 2020-03-11 |
URL | https://arxiv.org/abs/2003.05118v1 |
https://arxiv.org/pdf/2003.05118v1.pdf | |
PWC | https://paperswithcode.com/paper/using-machine-learning-to-speed-up-new-and |
Repo | |
Framework | |
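As a heavily hedged illustration of the idea — training a fast ML surrogate to replace detailed simulation in a detector study — the sketch below fits a regressor to toy shower data. The 5x5 cell grid, the toy shower model, and the gradient-boosted regressor are assumptions of this sketch, not the paper's actual LHCb setup:

```python
# Hedged sketch: a fast ML surrogate for calorimeter spatial reconstruction.
# The simulated shower data, 5x5 cell grid, and target coordinate are
# illustrative assumptions, not the paper's actual setup.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n, grid = 5000, 5
# Toy "simulation": true hit position -> energy deposits on a 5x5 cell grid.
xy_true = rng.uniform(-1, 1, size=(n, 2))
cells = np.stack(np.meshgrid(np.linspace(-2, 2, grid),
                             np.linspace(-2, 2, grid)), -1).reshape(-1, 2)
d2 = ((xy_true[:, None, :] - cells[None, :, :]) ** 2).sum(-1)
energies = np.exp(-d2 / 0.5) + 0.01 * rng.standard_normal((n, grid * grid))

# Surrogate regressor: predicts the x coordinate of the hit from cell energies,
# orders of magnitude faster to evaluate than a full detector simulation.
model = GradientBoostingRegressor().fit(energies[:4000], xy_true[:4000, 0])
pred = model.predict(energies[4000:])
print("holdout RMSE:", np.sqrt(np.mean((pred - xy_true[4000:, 0]) ** 2)))
```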
Predicting Stock Returns with Batched AROW
Title | Predicting Stock Returns with Batched AROW |
Authors | Rachid Guennouni Hassani, Alexis Gilles, Emmanuel Lassalle, Arthur Dénouveaux |
Abstract | We extend the AROW regression algorithm developed by Vaits and Crammer in [VC11] to handle synchronous mini-batch updates and apply it to stock return prediction. By design, the model should be more robust to noise and adapt better to non-stationarity than a simple rolling regression. We empirically show that the new model outperforms more classical approaches by backtesting a strategy on S&P500 stocks. (A hedged sketch of an AROW-style batched update appears after this entry.) |
Tasks | |
Published | 2020-03-06 |
URL | https://arxiv.org/abs/2003.03076v2 |
https://arxiv.org/pdf/2003.03076v2.pdf | |
PWC | https://paperswithcode.com/paper/predicting-stock-returns-with-batched-arow |
Repo | |
Framework | |
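For context, below is a minimal sketch of an AROW-style second-order regression update with synchronous mini-batches: a mean vector w and a covariance Sigma are maintained, and per-example updates computed at the current (w, Sigma) are averaged over the batch. The averaging rule is an assumption of this sketch; the paper's exact batched update may differ:

```python
# Hedged sketch of an AROW-style regression update with synchronous
# mini-batches; the batching rule (averaging per-example updates) is an
# assumption of this sketch, not necessarily the paper's exact variant.
import numpy as np

def arow_batch_update(w, Sigma, X, y, r=1.0):
    """X: (B, d) mini-batch, y: (B,) targets; returns updated (w, Sigma)."""
    dw = np.zeros_like(w)
    dS = np.zeros_like(Sigma)
    for x_i, y_i in zip(X, y):
        Sx = Sigma @ x_i
        denom = x_i @ Sx + r                  # confidence term + regularizer r
        dw += (y_i - w @ x_i) / denom * Sx    # mean update (prediction error)
        dS -= np.outer(Sx, Sx) / denom        # covariance shrinkage
    B = len(y)
    return w + dw / B, Sigma + dS / B

d = 8
w, Sigma = np.zeros(d), np.eye(d)
rng = np.random.default_rng(1)
X = rng.standard_normal((32, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(32)
w, Sigma = arow_batch_update(w, Sigma, X, y)   # one synchronous batch step
```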
Ultra High Fidelity Image Compression with $\ell_\infty$-constrained Encoding and Deep Decoding
Title | Ultra High Fidelity Image Compression with $\ell_\infty$-constrained Encoding and Deep Decoding |
Authors | Xi Zhang, Xiaolin Wu |
Abstract | In many professional fields, such as medicine, remote sensing and the sciences, users often demand image compression methods to be mathematically lossless. But lossless image coding has a rather low compression ratio (around 2:1 for natural images). The only known technique to achieve significant compression while meeting stringent fidelity requirements is the methodology of $\ell_\infty$-constrained coding, which was developed and standardized in the nineties. We make major progress in $\ell_\infty$-constrained image coding after two decades, by developing a novel CNN-based soft $\ell_\infty$-constrained decoding method. The new method repairs compression defects by using a restoration CNN, called the $\ell_\infty\mbox{-SDNet}$, to map a conventionally decoded image to the latent image. A unique strength of the $\ell_\infty\mbox{-SDNet}$ is its ability to enforce a tight error bound on a per-pixel basis. As such, no small distinctive structures of the original image can be dropped or distorted, even if they are statistical outliers that would otherwise be sacrificed by mainstream CNN restoration methods. More importantly, this research ushers in a new image compression system of $\ell_\infty$-constrained encoding and deep soft decoding ($\ell_\infty\mbox{-ED}^2$). The $\ell_\infty \mbox{-ED}^2$ approach beats the best existing lossy image compression methods (e.g., BPG, WebP, etc.) not only in the $\ell_\infty$ but also in the $\ell_2$ error metric and in perceptual quality, for bit rates near the threshold of perceptually transparent reconstruction. Operationally, the new compression system is practical, with a low-complexity real-time encoder and a cascade decoder consisting of a fast initial decoder and an optional CNN soft decoder. (A hedged sketch of the per-pixel error-bound enforcement appears after this entry.) |
Tasks | Image Compression |
Published | 2020-02-10 |
URL | https://arxiv.org/abs/2002.03482v1 |
https://arxiv.org/pdf/2002.03482v1.pdf | |
PWC | https://paperswithcode.com/paper/ultra-high-fidelity-image-compression-with |
Repo | |
Framework | |
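The core mechanism — a tight per-pixel error bound enforced on the restoration CNN's output — can be illustrated by clamping against the conventionally decoded image. The toy network and the bound tau below are assumptions of this sketch, not the paper's $\ell_\infty$-SDNet:

```python
# Hedged sketch: enforcing a per-pixel error bound on a restoration CNN's
# output, in the spirit of soft l_inf-constrained decoding.
import torch

def linf_soft_decode(cnn, decoded, tau=2.0):
    """decoded: conventionally decoded image; tau: per-pixel error bound.
    The CNN proposes a restoration; the result is then clamped so that no
    pixel deviates from the decoded image by more than tau."""
    restored = cnn(decoded)
    return torch.minimum(torch.maximum(restored, decoded - tau), decoded + tau)

# Toy stand-in for a restoration CNN (not the paper's l_inf-SDNet).
cnn = torch.nn.Sequential(torch.nn.Conv2d(1, 16, 3, padding=1),
                          torch.nn.ReLU(),
                          torch.nn.Conv2d(16, 1, 3, padding=1))
img = torch.rand(1, 1, 64, 64) * 255
out = linf_soft_decode(cnn, img, tau=2.0)
print("max per-pixel deviation:", float((out - img).abs().max()))  # <= tau
```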
An Inverse-free Truncated Rayleigh-Ritz Method for Sparse Generalized Eigenvalue Problem
Title | An Inverse-free Truncated Rayleigh-Ritz Method for Sparse Generalized Eigenvalue Problem |
Authors | Yunfeng Cai, Ping Li |
Abstract | This paper considers the sparse generalized eigenvalue problem (SGEP), which aims to find the leading eigenvector with at most $k$ nonzero entries. SGEP naturally arises in many applications in machine learning, statistics, and scientific computing, for example, sparse principal component analysis (SPCA), sparse discriminant analysis (SDA), and sparse canonical correlation analysis (SCCA). In this paper, we focus on the development of a three-stage algorithm named the {\em inverse-free truncated Rayleigh-Ritz method} ({\em IFTRR}) to efficiently solve SGEP. In each iteration of IFTRR, only a small number of matrix-vector products are required, which makes IFTRR well-suited for large-scale problems. In particular, a new truncation strategy is proposed, which is able to find the support set of the leading eigenvector effectively. Theoretical results are developed to explain why IFTRR works well, and numerical simulations demonstrate its merits. (A heavily hedged sketch of a truncated Rayleigh-Ritz iteration appears after this entry.) |
Tasks | |
Published | 2020-03-24 |
URL | https://arxiv.org/abs/2003.10897v1 |
https://arxiv.org/pdf/2003.10897v1.pdf | |
PWC | https://paperswithcode.com/paper/an-inverse-free-truncated-rayleigh-ritz |
Repo | |
Framework | |
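As a heavily hedged illustration, the sketch below runs a generic inverse-free truncated Rayleigh-Ritz iteration: build a small subspace from matrix-vector products only, solve the projected generalized eigenproblem, and truncate the Ritz vector to its k largest-magnitude entries. This is not the paper's exact three-stage IFTRR algorithm or truncation strategy:

```python
# Hedged sketch of a truncated Rayleigh-Ritz style iteration for the sparse
# generalized eigenvalue problem max x'Ax / x'Bx with ||x||_0 <= k. Generic
# illustration only, not the paper's exact IFTRR algorithm.
import numpy as np
from scipy.linalg import eigh

def truncated_rr_step(A, B, x, k, m=3):
    # Inverse-free subspace built from matrix-vector products only.
    V = [x]
    for _ in range(m - 1):
        v = A @ V[-1]
        V.append(v / np.linalg.norm(v))
    V = np.linalg.qr(np.column_stack(V))[0]
    # Small projected generalized eigenproblem (Rayleigh-Ritz).
    evals, evecs = eigh(V.T @ A @ V, V.T @ B @ V)
    y = V @ evecs[:, -1]                  # Ritz vector, largest eigenvalue
    # Truncation: keep the k largest-magnitude entries (support estimate).
    y[np.argsort(np.abs(y))[:-k]] = 0.0
    return y / np.linalg.norm(y)

rng = np.random.default_rng(2)
n, k = 50, 5
M = rng.standard_normal((n, n)); A = (M + M.T) / 2
B = np.eye(n)
x = rng.standard_normal(n); x /= np.linalg.norm(x)
for _ in range(20):
    x = truncated_rr_step(A, B, x, k)
print("sparse Rayleigh quotient:", x @ A @ x / (x @ B @ x))
```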
SAFE: Scalable Automatic Feature Engineering Framework for Industrial Tasks
Title | SAFE: Scalable Automatic Feature Engineering Framework for Industrial Tasks |
Authors | Qitao Shi, Ya-Lin Zhang, Longfei Li, Xinxing Yang, Meng Li, Jun Zhou |
Abstract | Machine learning techniques have been widely applied in Internet companies for various tasks, acting as an essential driving force, and feature engineering has been generally recognized as a crucial task when constructing machine learning systems. Recently, a growing effort has been devoted to the development of automatic feature engineering methods, so that the substantial and tedious manual effort can be spared. However, for industrial tasks, the efficiency and scalability of these methods are still far from satisfactory. In this paper, we propose a staged method named SAFE (Scalable Automatic Feature Engineering), which provides excellent efficiency and scalability, along with requisite interpretability and promising performance. Extensive experiments are conducted and the results show that the proposed method can provide prominent efficiency and competitive effectiveness compared with other methods. Moreover, the adequate scalability of the proposed method ensures it can be deployed in large-scale industrial tasks. |
Tasks | Feature Engineering |
Published | 2020-03-05 |
URL | https://arxiv.org/abs/2003.02556v3 |
https://arxiv.org/pdf/2003.02556v3.pdf | |
PWC | https://paperswithcode.com/paper/safe-scalable-automatic-feature-engineering |
Repo | |
Framework | |
Prediction with Corrupted Expert Advice
Title | Prediction with Corrupted Expert Advice |
Authors | Idan Amir, Idan Attias, Tomer Koren, Roi Livni, Yishay Mansour |
Abstract | We revisit the fundamental problem of prediction with expert advice, in a setting where the environment is benign and generates losses stochastically, but the feedback observed by the learner is subject to a moderate adversarial corruption. We prove that a variant of the classical Multiplicative Weights algorithm with decreasing step sizes achieves constant regret in this setting and performs optimally in a wide range of environments, regardless of the magnitude of the injected corruption. Our results reveal a surprising disparity between the often comparable Follow the Regularized Leader (FTRL) and Online Mirror Descent (OMD) frameworks: we show that for experts in the corrupted stochastic regime, the regret performance of OMD is in fact strictly inferior to that of FTRL. (A hedged sketch of multiplicative weights with decreasing step sizes appears after this entry.) |
Tasks | |
Published | 2020-02-24 |
URL | https://arxiv.org/abs/2002.10286v1 |
https://arxiv.org/pdf/2002.10286v1.pdf | |
PWC | https://paperswithcode.com/paper/prediction-with-corrupted-expert-advice |
Repo | |
Framework | |
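A minimal sketch of the algorithmic idea follows: multiplicative weights with decreasing step sizes in the FTRL form, where each round's distribution is recomputed from cumulative losses with a fresh eta_t (the form the paper shows outperforms the OMD variant in this regime). The schedule eta_t = sqrt(log(N)/t) and the Bernoulli losses are illustrative assumptions:

```python
# Hedged sketch: multiplicative weights with decreasing step sizes, FTRL
# form (weights recomputed from cumulative losses each round). The step-size
# schedule and loss model are illustrative, not the paper's exact setup.
import numpy as np

def mw_ftrl(loss_fn, N, T):
    L = np.zeros(N)                            # cumulative expert losses
    total = 0.0
    for t in range(1, T + 1):
        eta_t = np.sqrt(np.log(N) / t)         # decreasing step size
        z = -eta_t * L
        p = np.exp(z - z.max()); p /= p.sum()  # exponential weights (stable)
        losses = loss_fn(t)                    # observed, possibly corrupted
        total += p @ losses                    # learner's expected loss
        L += losses
    return total

rng = np.random.default_rng(3)
N, T = 10, 5000
means = rng.uniform(0.3, 0.7, N); means[0] = 0.2   # expert 0 is the best
alg = mw_ftrl(lambda t: rng.binomial(1, means).astype(float), N, T)
print("algorithm loss:", alg, " best expert's expected loss:", means[0] * T)
```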
DCMD: Distance-based Classification Using Mixture Distributions on Microbiome Data
Title | DCMD: Distance-based Classification Using Mixture Distributions on Microbiome Data |
Authors | Konstantin Shestopaloff, Mei Dong, Fan Gao, Wei Xu |
Abstract | Current advances in next-generation sequencing techniques have allowed researchers to conduct comprehensive research on the microbiome and human diseases, with recent studies identifying associations between the human microbiome and health outcomes for a number of chronic conditions. However, microbiome data structure, characterized by sparsity and skewness, presents challenges to building effective classifiers. To address this, we present an innovative approach for distance-based classification using mixture distributions (DCMD). The method aims to improve classification performance when using microbiome community data, where the predictors are composed of sparse and heterogeneous count data. This approach models the inherent uncertainty in sparse counts by estimating a mixture distribution for the sample data and representing each observation as a distribution, conditional on the observed counts and the estimated mixture, which is then used as input for distance-based classification. The method is implemented within a k-means and k-nearest-neighbours framework, and we identify two distance metrics that produce optimal results. The performance of the model is assessed using simulations and applied to a human microbiome study, with results compared against a number of existing machine learning and distance-based approaches. The proposed method is competitive with the machine learning approaches and shows a clear improvement over commonly used distance-based classifiers. Its range of applicability and robustness make the proposed method a viable alternative for classification using sparse microbiome count data. (A hedged sketch of the mixture-posterior representation appears after this entry.) |
Tasks | |
Published | 2020-03-29 |
URL | https://arxiv.org/abs/2003.13161v1 |
https://arxiv.org/pdf/2003.13161v1.pdf | |
PWC | https://paperswithcode.com/paper/dcmd-distance-based-classification-using |
Repo | |
Framework | |
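In the spirit of DCMD, the hedged sketch below fits a Poisson mixture to sparse counts by EM, represents each observation by its posterior over mixture components, and classifies with k-NN on those representations. The univariate toy data, component count, and distance are illustrative simplifications of the paper's multivariate method:

```python
# Hedged sketch in the spirit of DCMD: mixture fit -> posterior
# representation -> distance-based classification. Illustrative
# simplification, not the paper's tuned multivariate pipeline.
import numpy as np
from scipy.stats import poisson
from sklearn.neighbors import KNeighborsClassifier

def fit_poisson_mixture(x, K=3, iters=100):
    lam = np.quantile(x, np.linspace(0.2, 0.9, K)) + 0.1  # crude init
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        resp = pi * poisson.pmf(x[:, None], lam[None, :])  # E-step
        resp /= resp.sum(1, keepdims=True)
        pi = resp.mean(0)                                  # M-step
        lam = (resp * x[:, None]).sum(0) / resp.sum(0)
    return pi, lam

rng = np.random.default_rng(4)
# Toy data: class 0 mostly zeros, class 1 heavier counts.
x0 = rng.poisson(0.3, 200); x1 = rng.poisson(2.0, 200)
x = np.concatenate([x0, x1]); y = np.repeat([0, 1], 200)
pi, lam = fit_poisson_mixture(x)
post = pi * poisson.pmf(x[:, None], lam[None, :])
post /= post.sum(1, keepdims=True)        # each count -> posterior vector
clf = KNeighborsClassifier(n_neighbors=5).fit(post, y)
print("train accuracy:", clf.score(post, y))
```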
Deep Learning-based Image Compression with Trellis Coded Quantization
Title | Deep Learning-based Image Compression with Trellis Coded Quantization |
Authors | Binglin Li, Mohammad Akbari, Jie Liang, Yang Wang |
Abstract | Recently, many works have attempted to develop image compression models based on deep learning architectures, where the uniform scalar quantizer (SQ) is commonly applied to the feature maps between the encoder and decoder. In this paper, we propose to incorporate a trellis coded quantizer (TCQ) into a deep learning based image compression framework. A soft-to-hard strategy is applied to allow for back propagation during training. We develop a simple image compression model that consists of three subnetworks (encoder, decoder and entropy estimation) and optimize all of the components in an end-to-end manner. We experiment on two high-resolution image datasets, and results on both show that our model can achieve superior performance at low bit rates. We also compare TCQ and SQ based on our proposed baseline model and demonstrate the advantage of TCQ. (A hedged sketch of the soft-to-hard quantization trick appears after this entry.) |
Tasks | Image Compression, Quantization |
Published | 2020-01-26 |
URL | https://arxiv.org/abs/2001.09417v1 |
https://arxiv.org/pdf/2001.09417v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-learning-based-image-compression-with |
Repo | |
Framework | |
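The soft-to-hard strategy that makes quantization differentiable during training can be sketched as follows for a plain scalar codebook; the full trellis coded quantizer (with its Viterbi search over trellis paths) is not reproduced here:

```python
# Hedged sketch of the soft-to-hard quantization trick for backpropagation.
# Plain scalar codebook quantization only; not the full TCQ.
import torch

def soft_to_hard_quantize(z, codebook, temperature, hard=False):
    """z: (..., 1) latent values; codebook: (L,) quantization levels."""
    d2 = (z - codebook) ** 2                      # distance to each level
    soft = torch.softmax(-d2 / temperature, -1)   # soft assignment (differentiable)
    z_soft = (soft * codebook).sum(-1, keepdim=True)
    if not hard:
        return z_soft                             # training: soft surrogate
    z_hard = codebook[d2.argmin(-1, keepdim=True)]  # inference: hard levels
    # Straight-through: forward uses hard values, backward the soft surrogate.
    return z_soft + (z_hard - z_soft).detach()

codebook = torch.linspace(-2, 2, 8)
z = torch.randn(4, 16, 1, requires_grad=True)
q = soft_to_hard_quantize(z, codebook, temperature=0.5, hard=True)
q.sum().backward()                                # gradients still flow to z
```

Annealing the temperature toward zero during training moves the soft assignment toward the hard one, which is the usual way such a surrogate is tightened.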
Image Segmentation Using Deep Learning: A Survey
Title | Image Segmentation Using Deep Learning: A Survey |
Authors | Shervin Minaee, Yuri Boykov, Fatih Porikli, Antonio Plaza, Nasser Kehtarnavaz, Demetri Terzopoulos |
Abstract | Image segmentation is a key topic in image processing and computer vision, with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial body of work aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarities, strengths, and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area. |
Tasks | Image Compression, Scene Understanding, Semantic Segmentation |
Published | 2020-01-15 |
URL | https://arxiv.org/abs/2001.05566v2 |
https://arxiv.org/pdf/2001.05566v2.pdf | |
PWC | https://paperswithcode.com/paper/image-segmentation-using-deep-learning-a |
Repo | |
Framework | |
Are Direct Links Necessary in RVFL NNs for Regression?
Title | Are Direct Links Necessary in RVFL NNs for Regression? |
Authors | Grzegorz Dudek |
Abstract | A random vector functional link network (RVFL) is widely used as a universal approximator for classification and regression problems. The big advantage of RVFL is fast training without backpropagation: the weights and biases of the hidden nodes are selected randomly and stay untrained. Recently, alternative architectures with randomized learning have been developed which differ from RVFL in that they lack direct links and an output-layer bias term. In this study, we investigate the effect of direct links and output node bias on the regression performance of RVFL. To generate the random parameters of the hidden nodes, we use the classical method and two new methods recently proposed in the literature. We test RVFL performance on several function approximation problems with target functions of different natures: nonlinear, nonlinear with strong fluctuations, nonlinear with a linear component, and linear. Surprisingly, we found that the direct links and output node bias do not play an important role in improving RVFL accuracy for typical nonlinear regression problems. (A hedged sketch of an RVFL regressor with switchable direct links appears after this entry.) |
Tasks | |
Published | 2020-03-29 |
URL | https://arxiv.org/abs/2003.13090v1 |
https://arxiv.org/pdf/2003.13090v1.pdf | |
PWC | https://paperswithcode.com/paper/are-direct-links-necessary-in-rvfl-nns-for |
Repo | |
Framework | |
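A minimal RVFL sketch with a switch for the direct links and output bias under study follows; the uniform weight initialization and tanh activation are common choices assumed here, not necessarily the paper's exact configurations:

```python
# Hedged sketch of an RVFL regressor: random, untrained hidden layer plus a
# least-squares readout, with a flag for the direct links and output bias.
import numpy as np

def rvfl_fit_predict(X_tr, y_tr, X_te, n_hidden=100, direct_links=True, seed=0):
    rng = np.random.default_rng(seed)
    d = X_tr.shape[1]
    W = rng.uniform(-1, 1, (d, n_hidden))        # random hidden weights
    b = rng.uniform(-1, 1, n_hidden)             # random hidden biases

    def features(X):
        H = np.tanh(X @ W + b)                   # random hidden features
        parts = [H]
        if direct_links:
            parts += [X, np.ones((len(X), 1))]   # direct links + output bias
        return np.hstack(parts)

    # Only the output weights are trained, via least squares (no backprop).
    beta, *_ = np.linalg.lstsq(features(X_tr), y_tr, rcond=None)
    return features(X_te) @ beta

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (300, 1))
y = np.sin(4 * X[:, 0]) + 0.05 * rng.standard_normal(300)
for dl in (True, False):
    pred = rvfl_fit_predict(X[:200], y[:200], X[200:], direct_links=dl)
    rmse = np.sqrt(np.mean((pred - y[200:]) ** 2))
    print(f"direct_links={dl}: RMSE={rmse:.4f}")
```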
Mutual Information Maximization for Effective Lip Reading
Title | Mutual Information Maximization for Effective Lip Reading |
Authors | Xing Zhao, Shuang Yang, Shiguang Shan, Xilin Chen |
Abstract | Lip reading has received increasing research interest in recent years due to the rapid development of deep learning and its widespread potential applications. One key to good performance in lip reading is how effectively the representation captures lip movement information while resisting the noise arising from changes of pose, lighting conditions, speaker appearance, and so on. Towards this target, we propose to introduce mutual information constraints at both the local feature level and the global sequence level to strengthen the relation of the features to the speech content. On the one hand, we constrain the features generated at each time step to carry a strong relation with the speech content by imposing a local mutual information maximization constraint (LMIM), improving the model's ability to discover fine-grained lip movements and the fine-grained differences among words with similar pronunciation, such as "spend" and "spending". On the other hand, we introduce a mutual information maximization constraint at the global sequence level (GMIM), to make the model pay more attention to key frames that discriminate the speech content, and less to the various noises that appear during speaking. By combining these two advantages, the proposed method is expected to be both discriminative and robust for effective lip reading. To verify this method, we evaluate it on two large-scale benchmarks. We perform a detailed analysis and comparison on several aspects, including comparison of LMIM and GMIM with the baseline, visualization of the learned representation, and so on. The results not only prove the effectiveness of the proposed method but also report new state-of-the-art performance on both benchmarks. (A heavily hedged InfoNCE-style sketch of a local MI constraint appears after this entry.) |
Tasks | |
Published | 2020-03-13 |
URL | https://arxiv.org/abs/2003.06439v1 |
https://arxiv.org/pdf/2003.06439v1.pdf | |
PWC | https://paperswithcode.com/paper/mutual-information-maximization-for-effective |
Repo | |
Framework | |
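As a heavily hedged illustration, one standard way to impose a local mutual information maximization constraint is an InfoNCE-style contrastive bound between per-time-step features and content embeddings, as sketched below. This shows the general idea only and is not the authors' exact LMIM/GMIM formulation:

```python
# Heavily hedged sketch: an InfoNCE-style lower bound on mutual information
# between per-time-step features and content embeddings. Illustrative only;
# not the authors' exact LMIM/GMIM formulation.
import torch
import torch.nn.functional as F

def infonce_local_mi(features, content, temperature=0.1):
    """features, content: (T, d) per-time-step embeddings of the same clip.
    Matching time steps are positives; other time steps act as negatives."""
    f = F.normalize(features, dim=-1)
    c = F.normalize(content, dim=-1)
    logits = f @ c.T / temperature            # (T, T) similarity matrix
    targets = torch.arange(len(f))            # positive pairs on the diagonal
    return F.cross_entropy(logits, targets)   # minimizing maximizes an MI bound

T, d = 25, 128
loss = infonce_local_mi(torch.randn(T, d, requires_grad=True),
                        torch.randn(T, d))
loss.backward()
```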
Distributionally Robust Bayesian Quadrature Optimization
Title | Distributionally Robust Bayesian Quadrature Optimization |
Authors | Thanh Tang Nguyen, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh |
Abstract | Bayesian quadrature optimization (BQO) maximizes the expectation of an expensive black-box integrand taken over a known probability distribution. In this work, we study BQO under distributional uncertainty, in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples. A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set. Though the Monte Carlo estimate is unbiased, it has high variance given a small set of samples and can thus result in a spurious objective function. We adopt the distributionally robust optimization perspective on this problem by maximizing the expected objective under the most adversarial distribution. In particular, we propose a novel posterior-sampling-based algorithm, namely distributionally robust BQO (DRBQO), for this purpose. We demonstrate the empirical effectiveness of our proposed framework on synthetic and real-world problems, and characterize its theoretical convergence via Bayesian regret. (A hedged sketch of a distributionally robust expectation estimate appears after this entry.) |
Tasks | |
Published | 2020-01-19 |
URL | https://arxiv.org/abs/2001.06814v1 |
https://arxiv.org/pdf/2001.06814v1.pdf | |
PWC | https://paperswithcode.com/paper/distributionally-robust-bayesian-quadrature |
Repo | |
Framework | |
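The distributionally robust inner problem can be sketched with standard DRO machinery: the worst-case weighted Monte Carlo average over distributions in a KL ball around the empirical distribution, solved by exponential tilting. The KL ball and bisection below are illustrative assumptions, not necessarily DRBQO's exact formulation (the paper adds posterior sampling on top of such an objective):

```python
# Hedged sketch: worst-case expectation over a KL ball around the empirical
# distribution. Standard DRO machinery, assumed here for illustration.
import numpy as np

def dr_expectation(f_vals, rho, tol=1e-10):
    """Worst-case mean over {w : KL(w || uniform) <= rho}: the minimizer has
    w_i proportional to exp(-f_i / lam); bisect on the multiplier lam > 0."""
    n = len(f_vals)
    g = f_vals - f_vals.min()                  # shift for numerical stability
    def tilt(lam):
        w = np.exp(-g / lam); w /= w.sum()
        kl = np.sum(w * np.log(np.maximum(n * w, 1e-300)))
        return w, kl
    lo, hi = 1e-8, 1e8
    while hi / lo > 1 + tol:
        lam = np.sqrt(lo * hi)                 # geometric bisection
        w, kl = tilt(lam)
        # Smaller lam is more adversarial (larger KL); shrink toward KL = rho.
        lo, hi = (lam, hi) if kl > rho else (lo, lam)
    return float(w @ f_vals)

rng = np.random.default_rng(6)
samples = rng.normal(1.0, 1.0, 20)             # i.i.d. draws of the integrand
print("plain MC estimate:    ", samples.mean())
print("DR estimate (rho=0.1):", dr_expectation(samples, rho=0.1))
```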
Open Source Computer Vision-based Layer-wise 3D Printing Analysis
Title | Open Source Computer Vision-based Layer-wise 3D Printing Analysis |
Authors | Aliaksei L. Petsiuk, Joshua M. Pearce |
Abstract | The paper describes an open source computer vision-based hardware structure and software algorithm that analyzes 3-D printing processes layer by layer, tracks printing errors, and generates appropriate printer actions to improve reliability. This approach is built upon multiple-stage monocular image examination, which allows monitoring of both the external shape of the printed object and the internal structure of its layers. Starting with side-view height validation, the developed program analyzes the virtual top view for outer shell contour correspondence, using multi-template matching and iterative closest point algorithms, as well as inner-layer texture quality, by clustering spatial-frequency filter responses with Gaussian mixture models and segmenting structural anomalies with an agglomerative hierarchical clustering algorithm. This allows evaluation of both global and local parameters of the printing modes. The experimentally verified analysis time per layer is less than one minute, which can be considered a quasi-real-time process for large prints. The system can work as an intelligent printing suspension tool designed to save time and material. Moreover, the results show the algorithm provides a means to systematize in situ printing data as a first step toward a fully open source failure correction algorithm for additive manufacturing. (A hedged sketch of GMM-based texture clustering appears after this entry.) |
Tasks | |
Published | 2020-03-12 |
URL | https://arxiv.org/abs/2003.05660v1 |
https://arxiv.org/pdf/2003.05660v1.pdf | |
PWC | https://paperswithcode.com/paper/open-source-computer-vision-based-layer-wise |
Repo | |
Framework | |
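One building block of the pipeline — clustering spatial-frequency filter responses with Gaussian mixture models to flag texture anomalies in a layer image — can be sketched as below. The Gaussian filter bank and the "smallest cluster = anomaly" rule are assumptions of this sketch, not the paper's calibrated pipeline:

```python
# Hedged sketch: GMM clustering of spatial-frequency filter responses to
# flag texture anomalies in a printed layer image. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
layer = rng.normal(0.5, 0.05, (128, 128))      # toy layer image
layer[40:60, 40:60] += 0.4                     # injected texture defect

# Simple spatial-frequency features: responses at several Gaussian scales.
responses = np.stack([gaussian_filter(layer, s) for s in (1, 2, 4, 8)], -1)
X = responses.reshape(-1, responses.shape[-1])

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X).reshape(layer.shape)
# Flag the least-populated component as a candidate anomaly region.
anomaly = labels == np.bincount(labels.ravel()).argmin()
print("flagged pixels:", int(anomaly.sum()))
```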
Unlabeled Data Deployment for Classification of Diabetic Retinopathy Images Using Knowledge Transfer
Title | Unlabeled Data Deployment for Classification of Diabetic Retinopathy Images Using Knowledge Transfer |
Authors | Sajjad Abbasi, Mohsen Hajabdollahi, Nader Karimi, Shadrokh Samavi, Shahram Shirani |
Abstract | Convolutional neural networks (CNNs) are extensively beneficial for medical image processing. Medical images are plentiful, but there is a lack of annotated data. Transfer learning is used to address the lack of labeled data and grants CNNs better training capability. Transfer learning can be used in many different medical applications; however, the model under transfer should have the same size as the original network. Knowledge distillation was recently proposed to transfer the knowledge of one model to another and can be useful to cover the shortcomings of transfer learning. However, some parts of the knowledge may not be distilled by knowledge distillation alone. In this paper, a novel knowledge distillation scheme using transfer learning is proposed to transfer the whole knowledge of a model to another one. The proposed method can be beneficial and practical for medical image analysis, where only a small amount of labeled data is available. The proposed process is tested for diabetic retinopathy classification. Simulation results demonstrate that, using the proposed method, the knowledge of an extensive network can be transferred to a smaller model. (A hedged sketch of the standard distillation loss appears after this entry.) |
Tasks | Transfer Learning |
Published | 2020-02-09 |
URL | https://arxiv.org/abs/2002.03321v1 |
https://arxiv.org/pdf/2002.03321v1.pdf | |
PWC | https://paperswithcode.com/paper/unlabeled-data-deployment-for-classification |
Repo | |
Framework | |
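For reference, a hedged sketch of the standard (Hinton-style) knowledge distillation loss follows: KL divergence between temperature-softened teacher and student logits, combined with cross-entropy on the labels. The paper's combination with transfer learning is not reproduced here; the temperature and mixing weight are illustrative:

```python
# Hedged sketch of a standard knowledge distillation loss. The combination
# with transfer learning that the paper proposes is not reproduced here.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T   # T^2 keeps gradient scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 5, requires_grad=True)     # student logits (e.g. 5 DR grades)
t = torch.randn(8, 5)                         # frozen teacher logits
loss = distillation_loss(s, t, torch.randint(0, 5, (8,)))
loss.backward()
```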
Temporal Extension Module for Skeleton-Based Action Recognition
Title | Temporal Extension Module for Skeleton-Based Action Recognition |
Authors | Yuya Obinata, Takuma Yamamoto |
Abstract | We present a module that extends the temporal graph of a graph convolutional network (GCN) for action recognition on a sequence of skeletons. Existing methods attempt to represent a more appropriate spatial graph within each frame, but disregard optimization of the temporal graph between frames. In this work, we focus on adding extra edges to multiple neighboring vertices in adjacent frames and extracting additional features based on the extended temporal graph. Our module is a simple yet effective method to extract correlated features of multiple joints in human movement. Moreover, our module aids further performance improvement, along with other GCN methods that optimize only the spatial graph. We conduct extensive experiments on two large datasets, NTU RGB+D and Kinetics-Skeleton, and demonstrate that our module is effective for several existing models and that our final model achieves competitive or state-of-the-art performance. (A hedged sketch of the extended temporal adjacency appears after this entry.) |
Tasks | Skeleton Based Action Recognition |
Published | 2020-03-19 |
URL | https://arxiv.org/abs/2003.08951v1 |
https://arxiv.org/pdf/2003.08951v1.pdf | |
PWC | https://paperswithcode.com/paper/temporal-extension-module-for-skeleton-based-1 |
Repo | |
Framework | |
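The temporal-extension idea can be sketched as adjacency construction: besides the standard edge linking joint i to itself in the next frame, add edges from joint i to the spatial neighbors of i in the adjacent frame. The toy skeleton below is an assumption, not the NTU RGB+D joint layout:

```python
# Hedged sketch: building an extended temporal adjacency for a skeleton
# sequence. Toy skeleton; the paper's module and datasets are not reproduced.
import numpy as np

def extended_temporal_adjacency(T, neighbors):
    """neighbors: dict joint -> list of spatially adjacent joints.
    Returns adjacency over T*V vertices (frame t, joint v -> index t*V + v)."""
    V = len(neighbors)
    A = np.zeros((T * V, T * V))
    for t in range(T - 1):
        for v in range(V):
            A[t * V + v, (t + 1) * V + v] = 1       # standard temporal edge
            for u in neighbors[v]:                  # extended edges to the
                A[t * V + v, (t + 1) * V + u] = 1   # neighbors in frame t+1
    return np.maximum(A, A.T)                       # make it undirected

toy_skeleton = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a 4-joint chain
A = extended_temporal_adjacency(T=3, neighbors=toy_skeleton)
print("total inter-frame edges:", int(A.sum()) // 2)
```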