January 30, 2020

3197 words 16 mins read

Paper Group ANR 368

Semantic Segmentation of Thigh Muscle using 2.5D Deep Learning Network Trained with Limited Datasets. Inverse Path Tracing for Joint Material and Lighting Estimation. Deep Bayesian Multi-Target Learning for Recommender Systems. Generative Tensor Network Classification Model for Supervised Machine Learning. Encoders and Decoders for Quantum Expander …

Semantic Segmentation of Thigh Muscle using 2.5D Deep Learning Network Trained with Limited Datasets

Title Semantic Segmentation of Thigh Muscle using 2.5D Deep Learning Network Trained with Limited Datasets
Authors Hasnine Haque, Masahiro Hashimoto, Nozomu Uetake, Masahiro Jinzaki
Abstract Purpose: We propose a 2.5D deep learning neural network (DLNN) to automatically classify thigh muscle into 11 classes and evaluate its classification accuracy against 2D and 3D DLNNs when trained with limited datasets. This enables operator-invariant quantitative assessment of thigh muscle volume change with disease progression. Materials and methods: The retrospective datasets consist of 48 thigh volumes (TV) cropped from CT DICOM images. Cropped volumes were aligned with the femur axis and resampled at 2 mm voxel spacing. The proposed 2.5D DLNN consists of three 2D U-Nets trained on axial, coronal and sagittal muscle slices, respectively. A voting algorithm combines the U-Net outputs into the final segmentation. The 2.5D U-Net was trained on a PC with 38 TV, and the remaining 10 TV were used to evaluate segmentation accuracy over 10 classes within the thigh. The resulting segmentations of both left and right thighs were de-cropped back to the original CT volume space. Finally, segmentation accuracies were compared between the proposed DLNN and 2D/3D U-Nets. Results: The average segmentation DSC score over all classes with the 2.5D U-Net was 91.18%, and the average surface distance (ASD) was 0.84 mm. The mean DSC score of the 2D U-Net was 3.3% lower, and that of the 3D U-Net 5.7% lower, than that of the 2.5D U-Net when trained on the same datasets. Conclusion: We achieved a fast, computationally efficient and automatic segmentation of thigh muscle into 11 classes with reasonable accuracy, enabling quantitative evaluation of muscle atrophy with disease progression.
Tasks Semantic Segmentation
Published 2019-11-21
URL https://arxiv.org/abs/1911.09249v1
PDF https://arxiv.org/pdf/1911.09249v1.pdf
PWC https://paperswithcode.com/paper/semantic-segmentation-of-thigh-muscle-using
Repo
Framework
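
The fusion step described in the abstract above can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration (not the authors' code) of majority voting over the label volumes produced by three 2D U-Nets applied along the axial, coronal and sagittal axes; the array shapes and number of classes are assumptions.

```python
import numpy as np

def majority_vote(axial_labels, coronal_labels, sagittal_labels, num_classes=11):
    """Fuse per-axis label volumes (same shape, integer class IDs) by voting.

    Each input is an integer array of shape (D, H, W) holding the argmax
    prediction of one 2D U-Net re-stacked into a volume. Ties are broken
    in favour of the lowest class ID via argmax.
    """
    votes = np.zeros(axial_labels.shape + (num_classes,), dtype=np.int32)
    for labels in (axial_labels, coronal_labels, sagittal_labels):
        # Accumulate one vote per voxel for the class predicted along this axis.
        votes += (labels[..., None] == np.arange(num_classes)).astype(np.int32)
    return votes.argmax(axis=-1)

# Toy usage with random "predictions" standing in for real U-Net outputs.
rng = np.random.default_rng(0)
vol_shape = (8, 16, 16)
fused = majority_vote(*(rng.integers(0, 11, vol_shape) for _ in range(3)))
print(fused.shape)  # (8, 16, 16)
```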

Inverse Path Tracing for Joint Material and Lighting Estimation

Title Inverse Path Tracing for Joint Material and Lighting Estimation
Authors Dejan Azinović, Tzu-Mao Li, Anton Kaplanyan, Matthias Nießner
Abstract Modern computer vision algorithms have brought significant advancement to 3D geometry reconstruction. However, illumination and material reconstruction remain less studied, with current approaches assuming very simplified models for materials and illumination. We introduce Inverse Path Tracing, a novel approach to jointly estimate the material properties of objects and light sources in indoor scenes by using an invertible light transport simulation. We assume a coarse geometry scan, along with corresponding images and camera poses. The key contribution of this work is an accurate and simultaneous retrieval of light sources and physically based material properties (e.g., diffuse reflectance, specular reflectance, roughness, etc.) for the purpose of editing and re-rendering the scene under new conditions. To this end, we introduce a novel optimization method using a differentiable Monte Carlo renderer that computes derivatives with respect to the estimated unknown illumination and material properties. This enables joint optimization for physically correct light transport and material models using a tailored stochastic gradient descent.
Tasks
Published 2019-03-17
URL http://arxiv.org/abs/1903.07145v1
PDF http://arxiv.org/pdf/1903.07145v1.pdf
PWC https://paperswithcode.com/paper/inverse-path-tracing-for-joint-material-and
Repo
Framework
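
As a stripped-down analogue of the optimization described above (not the authors' differentiable Monte Carlo renderer), the sketch below fits a diffuse albedo and a light intensity to observed pixel values under a toy Lambertian image-formation model using plain gradient descent; the forward model and parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dot_l = rng.uniform(0.1, 1.0, size=200)      # known geometry term per pixel
true_albedo, true_light = 0.6, 2.0
observed = true_albedo * true_light * n_dot_l   # synthetic "photographs"

albedo, light = 0.2, 1.0                        # initial guesses
lr = 0.05
for step in range(2000):
    pred = albedo * light * n_dot_l
    residual = pred - observed
    # Analytic derivatives of the mean squared error w.r.t. both unknowns.
    grad_albedo = 2.0 * np.mean(residual * light * n_dot_l)
    grad_light = 2.0 * np.mean(residual * albedo * n_dot_l)
    albedo -= lr * grad_albedo
    light -= lr * grad_light

# Only the product albedo * light is identifiable in this toy model, which
# mirrors the ambiguities that joint material/lighting estimation must resolve
# with additional observations and priors.
print(albedo * light, true_albedo * true_light)
```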

Deep Bayesian Multi-Target Learning for Recommender Systems

Title Deep Bayesian Multi-Target Learning for Recommender Systems
Authors Qi Wang, Zhihui Ji, Huasheng Liu, Binqiang Zhao
Abstract With the increasing variety of services that e-commerce platforms provide, criteria for evaluating their success become also increasingly multi-targeting. This work introduces a multi-target optimization framework with Bayesian modeling of the target events, called Deep Bayesian Multi-Target Learning (DBMTL). In this framework, target events are modeled as forming a Bayesian network, in which directed links are parameterized by hidden layers, and learned from training samples. The structure of Bayesian network is determined by model selection. We applied the framework to Taobao live-streaming recommendation, to simultaneously optimize (and strike a balance) on targets including click-through rate, user stay time in live room, purchasing behaviors and interactions. Significant improvement has been observed for the proposed method over other MTL frameworks and the non-MTL model. Our practice shows that with an integrated causality structure, we can effectively make the learning of a target benefit from other targets, creating significant synergy effects that improve all targets. The neural network construction guided by DBMTL fits in with the general probabilistic model connecting features and multiple targets, taking weaker assumption than the other methods discussed in this paper. This theoretical generality brings about practical generalization power over various targets distributions, including sparse targets and continuous-value ones.
Tasks Model Selection, Recommendation Systems
Published 2019-02-25
URL http://arxiv.org/abs/1902.09154v1
PDF http://arxiv.org/pdf/1902.09154v1.pdf
PWC https://paperswithcode.com/paper/deep-bayesian-multi-target-learning-for
Repo
Framework
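
A minimal sketch of the structural idea in the abstract above: targets form a directed graph in which a later target is conditioned on the features and on a hidden-layer transformation of an earlier target. The forward pass below uses random weights and hypothetical target names (click, stay time); it is an illustration of the wiring, not the DBMTL implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def dense(x, w, b):
    return np.tanh(x @ w + b)

# Hypothetical shapes: user/item features -> shared embedding -> two targets
# (e.g. click, then stay time), with a learned link from click to stay time.
x = rng.normal(size=(4, 16))                       # batch of feature vectors
w_shared, b_shared = rng.normal(size=(16, 8)), np.zeros(8)
h = dense(x, w_shared, b_shared)                   # shared hidden state

# Target 1 (click) depends only on the shared state.
w_click, b_click = rng.normal(size=(8, 1)), np.zeros(1)
p_click = 1.0 / (1.0 + np.exp(-(h @ w_click + b_click)))

# Directed link click -> stay time, parameterized by a small hidden layer,
# so target 2 is conditioned on both the features and target 1.
w_link, b_link = rng.normal(size=(1, 4)), np.zeros(4)
link = dense(p_click, w_link, b_link)
w_stay, b_stay = rng.normal(size=(8 + 4, 1)), np.zeros(1)
stay_time = np.concatenate([h, link], axis=1) @ w_stay + b_stay

print(p_click.shape, stay_time.shape)  # (4, 1) (4, 1)
```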

Generative Tensor Network Classification Model for Supervised Machine Learning

Title Generative Tensor Network Classification Model for Supervised Machine Learning
Authors Zheng-Zhi Sun, Cheng Peng, Ding Liu, Shi-Ju Ran, Gang Su
Abstract Tensor network (TN) has recently triggered extensive interest in developing machine-learning models in quantum many-body Hilbert space. Here we propose a generative TN classification (GTNC) approach for supervised learning. The strategy is to train a generative TN for each class of samples to construct the classifiers. Classification is implemented by comparing distances in the many-body Hilbert space. The numerical experiments with GTNC show impressive performance on the MNIST and Fashion-MNIST datasets. The testing accuracy is competitive with state-of-the-art convolutional neural networks and higher than the naive Bayes classifier (a generative classifier) and support vector machines. Moreover, GTNC is more efficient than existing TN models, which are in general discriminative. By investigating the distances in the many-body Hilbert space, we find that (a) the samples naturally cluster in such a space; and (b) bounding the bond dimensions of the TNs to finite values corresponds to removing redundant information in image recognition. These two characteristics make GTNC an adaptive and universal model with excellent performance.
Tasks
Published 2019-03-26
URL http://arxiv.org/abs/1903.10742v1
PDF http://arxiv.org/pdf/1903.10742v1.pdf
PWC https://paperswithcode.com/paper/generative-tensor-network-classification
Repo
Framework
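
The distance-in-Hilbert-space idea above can be sketched with the simplest possible tensor network, a product state (bond dimension 1): map each pixel to a two-component local state, build one normalized "generative" product state per class from training data, and classify a test image by its log-overlap with each class state. This is a hedged toy illustration of the principle, not the paper's MPS-based GTNC.

```python
import numpy as np

def feature_map(images):
    """Map pixel values in [0, 1] to normalized 2-component local states."""
    return np.stack([np.cos(np.pi * images / 2), np.sin(np.pi * images / 2)], axis=-1)

def fit_class_state(images):
    """Bond-dimension-1 'generative' model: one normalized 2-vector per pixel."""
    mean = feature_map(images).mean(axis=0)
    return mean / np.linalg.norm(mean, axis=-1, keepdims=True)

def log_fidelity(image, class_state):
    """Log overlap between a sample's product state and the class product state."""
    overlaps = np.abs(np.sum(feature_map(image) * class_state, axis=-1))
    return np.sum(np.log(overlaps + 1e-12))

# Toy data: "dark" vs "bright" 4x4 images standing in for two image classes.
rng = np.random.default_rng(3)
dark = rng.uniform(0.0, 0.4, size=(50, 4, 4))
bright = rng.uniform(0.6, 1.0, size=(50, 4, 4))
states = [fit_class_state(dark), fit_class_state(bright)]

test = rng.uniform(0.6, 1.0, size=(4, 4))
scores = [log_fidelity(test, s) for s in states]
print(int(np.argmax(scores)))  # expected: 1 (the "bright" class)
```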

Encoders and Decoders for Quantum Expander Codes Using Machine Learning

Title Encoders and Decoders for Quantum Expander Codes Using Machine Learning
Authors Sathwik Chadaga, Mridul Agarwal, Vaneet Aggarwal
Abstract Quantum key distribution (QKD) allows two distant parties to share encryption keys with security based on the laws of quantum mechanics. In order to share the keys, the quantum bits have to be transmitted from the sender to the receiver over a noisy quantum channel, and efficient encoders and decoders need to be designed for this transmission. However, large-scale design of quantum encoders and decoders has to depend on the channel characteristics and requires look-up tables whose memory grows exponentially in the number of qubits. To alleviate this, this paper designs quantum encoders and decoders for expander codes by adapting machine learning techniques, including reinforcement learning and neural networks, to the quantum domain. The proposed quantum decoder trains a neural network on the maximum a posteriori errors for the syndromes, eliminating the need for large lookup tables. The quantum encoder uses deep Q-learning based techniques to optimize the generator matrices in the quantum Calderbank-Shor-Steane (CSS) codes. The evaluation results demonstrate improved performance of the proposed quantum encoder and decoder designs compared to the baseline quantum expander codes.
Tasks Q-Learning
Published 2019-09-06
URL https://arxiv.org/abs/1909.02945v1
PDF https://arxiv.org/pdf/1909.02945v1.pdf
PWC https://paperswithcode.com/paper/encoders-and-decoders-for-quantum-expander
Repo
Framework
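
To make the "neural decoder instead of a lookup table" idea concrete, the sketch below trains a tiny MLP to map syndromes of the classical [7,4] Hamming code to single-bit error positions. This is a classical toy analogue written in plain numpy, not the paper's quantum CSS construction; the network size, learning rate, and iteration count are arbitrary.

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code (column j encodes j+1 in binary).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Training set: the zero error plus every single-bit error, with their syndromes.
errors = np.vstack([np.zeros((1, 7), dtype=int), np.eye(7, dtype=int)])
syndromes = (errors @ H.T) % 2                      # inputs, shape (8, 3)
labels = np.arange(8)                               # class 0 = no error, k = flip bit k-1

rng = np.random.default_rng(4)
w1, b1 = rng.normal(0, 0.5, (3, 16)), np.zeros(16)  # small MLP replacing a lookup table
w2, b2 = rng.normal(0, 0.5, (16, 8)), np.zeros(8)
x, y = syndromes.astype(float), np.eye(8)[labels]

for step in range(3000):
    h = np.tanh(x @ w1 + b1)
    logits = h @ w2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad_logits = (p - y) / len(x)                  # softmax cross-entropy gradient
    grad_w2, grad_b2 = h.T @ grad_logits, grad_logits.sum(0)
    grad_h = grad_logits @ w2.T * (1 - h ** 2)
    grad_w1, grad_b1 = x.T @ grad_h, grad_h.sum(0)
    for param, grad in ((w1, grad_w1), (b1, grad_b1), (w2, grad_w2), (b2, grad_b2)):
        param -= 0.5 * grad

pred = np.argmax(np.tanh(x @ w1 + b1) @ w2 + b2, axis=1)
print((pred == labels).all())  # True once the tiny decoder has memorized the map
```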

Software Effort Estimation using Neuro Fuzzy Inference System: Past and Present

Title Software Effort Estimation using Neuro Fuzzy Inference System: Past and Present
Authors Aditi Sharma, Ravi Ranjan
Abstract The most important reason for project failure is poor effort estimation. Software development effort estimation is needed for assigning appropriate team members to development, allocating resources, bidding, and so on. Inaccurate software estimation may lead to project delay, budget overrun, or cancellation of the project. However, existing effort estimation models are not very efficient. In this paper, we analyze a newer approach to estimation, the Neuro Fuzzy Inference System (NFIS), a hybrid model that consolidates components of artificial neural networks with fuzzy logic to give better estimates.
Tasks
Published 2019-12-26
URL https://arxiv.org/abs/1912.11855v1
PDF https://arxiv.org/pdf/1912.11855v1.pdf
PWC https://paperswithcode.com/paper/software-effort-estimation-using-neuro-fuzzy
Repo
Framework
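
A minimal Sugeno-style forward pass can illustrate how a neuro-fuzzy system turns a size metric into an effort estimate: membership functions fuzzify the input, and rule firing strengths weight linear consequents. All membership centers, widths and rule coefficients below are illustrative assumptions; in a trained NFIS the neural component would fit them to historical project data.

```python
import numpy as np

def gaussian_membership(x, center, width):
    """Degree to which input x belongs to a fuzzy set (e.g. 'small project')."""
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

def nfis_effort(kloc):
    """Toy Sugeno-style inference: fuzzy rules with linear consequents.

    Rules (parameters are illustrative, not fitted to any dataset):
      IF size is SMALL  THEN effort = 2.0 * kloc + 5
      IF size is MEDIUM THEN effort = 2.8 * kloc + 20
      IF size is LARGE  THEN effort = 3.5 * kloc + 80
    """
    centers, widths = np.array([10.0, 50.0, 150.0]), np.array([15.0, 30.0, 60.0])
    firing = gaussian_membership(kloc, centers, widths)       # rule strengths
    consequents = np.array([2.0 * kloc + 5, 2.8 * kloc + 20, 3.5 * kloc + 80])
    return float(np.sum(firing * consequents) / np.sum(firing))

print(round(nfis_effort(40.0), 1), "person-months for a 40 KLOC project (toy rules)")
```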

Tensor p-shrinkage nuclear norm for low-rank tensor completion

Title Tensor p-shrinkage nuclear norm for low-rank tensor completion
Authors Chunsheng Liu, Hong Shan, Chunlei Chen
Abstract In this paper, a new definition of tensor p-shrinkage nuclear norm (p-TNN) is proposed based on tensor singular value decomposition (t-SVD). In particular, it can be proved that p-TNN is a better approximation of the tensor average rank than the tensor nuclear norm when p < 1. Therefore, by employing the p-shrinkage nuclear norm, a novel low-rank tensor completion (LRTC) model is proposed to estimate a tensor from its partial observations. Statistically, the upper bound of recovery error is provided for the LRTC model. Furthermore, an efficient algorithm, accelerated by the adaptive momentum scheme, is developed to solve the resulting nonconvex optimization problem. It can be further guaranteed that the algorithm enjoys a global convergence rate under the smoothness assumption. Numerical experiments conducted on both synthetic and real-world data sets verify our results and demonstrate the superiority of our p-TNN in LRTC problems over several state-of-the-art methods.
Tasks
Published 2019-07-09
URL https://arxiv.org/abs/1907.04092v1
PDF https://arxiv.org/pdf/1907.04092v1.pdf
PWC https://paperswithcode.com/paper/tensor-p-shrinkage-nuclear-norm-for-low-rank
Repo
Framework
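
The key operator behind such models is a shrinkage rule applied to singular values. The sketch below applies one common form of p-shrinkage thresholding to the singular values of a matrix (the matrix analogue of the tensor operator obtained through t-SVD); the exact operator and algorithm in the paper may differ from this toy.

```python
import numpy as np

def p_shrink(values, tau, p):
    """One common form of the p-shrinkage rule applied to nonnegative values."""
    return np.maximum(values - tau * np.power(values, p - 1.0), 0.0)

def p_shrinkage_svt(matrix, tau=1.0, p=0.5):
    """Shrink the singular values of a matrix with the p-shrinkage operator.

    For p = 1 this reduces to ordinary soft-thresholding (the proximal operator
    of the nuclear norm); p < 1 penalizes large singular values less
    aggressively, which is the intuition behind p-TNN.
    """
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return u @ np.diag(p_shrink(s, tau, p)) @ vt

rng = np.random.default_rng(5)
low_rank = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))   # rank-3 matrix
noisy = low_rank + 0.1 * rng.normal(size=(20, 20))
denoised = p_shrinkage_svt(noisy, tau=1.0, p=0.5)
print(np.linalg.matrix_rank(denoised, tol=1e-6))  # typically close to 3
```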

Cooperative Schedule-Driven Intersection Control with Connected and Autonomous Vehicles

Title Cooperative Schedule-Driven Intersection Control with Connected and Autonomous Vehicles
Authors Hsu-Chieh Hu, Stephen F. Smith, Rick Goldstein
Abstract Recent work in decentralized, schedule-driven traffic control has demonstrated the ability to improve the efficiency of traffic flow in complex urban road networks. In this approach, a scheduling agent is associated with each intersection. Each agent senses the traffic approaching its intersection and in real-time constructs a schedule that minimizes the cumulative wait time of vehicles approaching the intersection over the current look-ahead horizon. In this paper, we propose a cooperative algorithm that utilizes both connected and autonomous vehicles (CAV) and schedule-driven traffic control to create better traffic flow in the city. The algorithm enables an intersection scheduling agent to adjust the arrival time of an approaching platoon through use of wireless communication to control the velocity of vehicles. The sequence of approaching platoons is thus shifted toward a new shape that has smaller cumulative delay. We demonstrate how this algorithm outperforms the original approach in a real-time traffic signal control problem.
Tasks Autonomous Vehicles
Published 2019-07-03
URL https://arxiv.org/abs/1907.01984v1
PDF https://arxiv.org/pdf/1907.01984v1.pdf
PWC https://paperswithcode.com/paper/cooperative-schedule-driven-intersection
Repo
Framework
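
The velocity-control idea above reduces, at its simplest, to an advisory speed: given the distance of a platoon leader to the stop bar and the arrival time requested by the intersection's schedule, pick a speed clipped to road limits. The sketch below is a hedged toy with illustrative numbers, not the paper's scheduling algorithm.

```python
def advisory_speed(distance_m, requested_arrival_s, v_min=3.0, v_max=15.0):
    """Speed (m/s) that makes a platoon leader arrive when the schedule wants it.

    The intersection agent picks requested_arrival_s from its look-ahead
    schedule; the CAV drives at the advised constant speed, clipped to the
    road's limits. All values here are illustrative.
    """
    if requested_arrival_s <= 0:
        return v_max                       # already due: approach at the limit
    return max(v_min, min(v_max, distance_m / requested_arrival_s))

# A platoon 180 m away asked to arrive in 20 s slows to 9 m/s instead of
# arriving early and queuing at a red light.
print(advisory_speed(180.0, 20.0))  # 9.0
```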

Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer

Title Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer
Authors Hongyan Chang, Virat Shejwalkar, Reza Shokri, Amir Houmansadr
Abstract Collaborative (federated) learning enables multiple parties to train a model without sharing their private data, but through repeated sharing of the parameters of their local models. Despite its advantages, this approach has many known privacy and security weaknesses and incurs performance overhead, in addition to being limited to models with homogeneous architectures. Shared parameters leak a significant amount of information about the local (and supposedly private) datasets. Besides, federated learning is severely vulnerable to poisoning attacks, where some participants can adversarially influence the aggregate parameters. Large models, with high-dimensional parameter vectors, are particularly susceptible to privacy and security attacks: the curse of dimensionality in federated learning. We argue that sharing parameters is the most naive way of exchanging information in collaborative learning, as it exposes the entire internal state of the model to inference attacks and maximizes the model’s malleability under stealthy poisoning attacks. We propose Cronus, a robust collaborative machine learning framework. The simple yet effective idea behind Cronus is to control, unify, and significantly reduce the dimensions of the information exchanged between parties, through robust knowledge transfer between their black-box local models. We evaluate all existing federated learning algorithms against poisoning attacks, and we show that Cronus is the only secure method, due to its tight robustness guarantee. Treating local models as black boxes reduces the information leakage through the models and enables us to use existing privacy-preserving algorithms that mitigate the risk of information leakage through the model’s output (predictions). Cronus also has a significantly lower sample complexity than federated learning, and it does not bind its security to the number of participants.
Tasks Transfer Learning
Published 2019-12-24
URL https://arxiv.org/abs/1912.11279v1
PDF https://arxiv.org/pdf/1912.11279v1.pdf
PWC https://paperswithcode.com/paper/cronus-robust-and-heterogeneous-collaborative
Repo
Framework
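
The knowledge-transfer step can be sketched as follows: each party publishes soft labels on a shared public set, and the server fuses them with a robust coordinate-wise statistic so a few poisoned parties cannot shift the consensus much. The snippet uses a coordinate-wise median as a stand-in for the robust mean estimator Cronus actually employs; shapes and the poisoning pattern are illustrative.

```python
import numpy as np

def aggregate_soft_labels(party_predictions):
    """Robustly fuse per-party predictions of shape (parties, samples, classes).

    A coordinate-wise median stands in for Cronus's robust mean estimator;
    either way, a small number of adversarial parties cannot drag the aggregate
    far, unlike plain averaging of model parameters.
    """
    fused = np.median(party_predictions, axis=0)
    return fused / fused.sum(axis=-1, keepdims=True)   # renormalize to a distribution

rng = np.random.default_rng(6)
honest = rng.dirichlet(np.ones(10), size=(9, 100))      # 9 honest parties, 100 public samples
poisoned = np.tile(np.eye(10)[0], (1, 100, 1))          # 1 party pushing class 0 everywhere
all_preds = np.concatenate([honest, poisoned], axis=0)

fused = aggregate_soft_labels(all_preds)
print(fused.shape, float(fused[:, 0].mean()))  # class-0 mass stays near 1/10, not 1.0
```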

Designing and Implementing Data Warehouse for Agricultural Big Data

Title Designing and Implementing Data Warehouse for Agricultural Big Data
Authors Vuong M. Ngo, Nhien-An Le-Khac, M-Tahar Kechadi
Abstract In recent years, precision agriculture, which uses modern information and communication technologies, has become very popular. Raw and semi-processed agricultural data are usually collected through various sources, such as the Internet of Things (IoT), sensors, satellites, weather stations, robots, farm equipment, farmers and agribusinesses. Besides, agricultural datasets are very large, complex, unstructured, heterogeneous, non-standardized, and inconsistent. Hence, agricultural data mining is considered a Big Data application in terms of volume, variety, velocity and veracity. It is a key foundation for establishing a crop intelligence platform, which will enable resource-efficient agronomy decision making and recommendations. In this paper, we designed and implemented a continental-level agricultural data warehouse by combining Hive, MongoDB and Cassandra. Our data warehouse offers the following capabilities: (1) flexible schema; (2) data integration from multiple real agricultural datasets; (3) data science and business intelligence support; (4) high performance; (5) high storage; (6) security; (7) governance and monitoring; (8) replication and recovery; (9) consistency, availability and partition tolerance; (10) distributed and cloud deployment. We also evaluate the performance of our data warehouse.
Tasks Decision Making
Published 2019-05-29
URL https://arxiv.org/abs/1905.12411v1
PDF https://arxiv.org/pdf/1905.12411v1.pdf
PWC https://paperswithcode.com/paper/designing-and-implementing-data-warehouse-for
Repo
Framework

Robust Data Preprocessing for Machine-Learning-Based Disk Failure Prediction in Cloud Production Environments

Title Robust Data Preprocessing for Machine-Learning-Based Disk Failure Prediction in Cloud Production Environments
Authors Shujie Han, Jun Wu, Erci Xu, Cheng He, Patrick P. C. Lee, Yi Qiang, Qixing Zheng, Tao Huang, Zixi Huang, Rui Li
Abstract To provide proactive fault tolerance for modern cloud data centers, extensive studies have proposed machine learning (ML) approaches to predict imminent disk failures for early remedy and evaluated their approaches directly on public datasets (e.g., Backblaze SMART logs). However, in real-world production environments, the data quality is imperfect (e.g., inaccurate labeling, missing data samples, and complex failure types), thereby degrading the prediction accuracy. We present RODMAN, a robust data preprocessing pipeline that refines data samples before feeding them into ML models. We start with a large-scale trace-driven study of over three million disks from Alibaba Cloud’s data centers, and motivate the practical challenges in ML-based disk failure prediction. We then design RODMAN with three data preprocessing techniques, namely failure-type filtering, spline-based data filling, and automated pre-failure backtracking, that are applicable to general ML models. Evaluation on both the Alibaba and Backblaze datasets shows that RODMAN improves prediction accuracy over the case without data preprocessing under various settings.
Tasks
Published 2019-12-20
URL https://arxiv.org/abs/1912.09722v1
PDF https://arxiv.org/pdf/1912.09722v1.pdf
PWC https://paperswithcode.com/paper/robust-data-preprocessing-for-machine
Repo
Framework
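
A hedged sketch of the spline-based data filling idea: reconstruct missing daily samples of one disk's SMART attribute with a cubic spline fitted to the observed days. scipy's CubicSpline is used as a stand-in; which attributes RODMAN fills and how it bounds gaps is not reproduced here, and the numbers are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_missing_samples(days, values, all_days):
    """Fill gaps in one disk's daily SMART attribute with a cubic spline.

    `days`/`values` are the observed (day index, attribute value) pairs;
    `all_days` is the full day range we want samples for.
    """
    spline = CubicSpline(days, values)
    return spline(all_days)

# Toy example: a slowly degrading reallocated-sector count with missing days.
observed_days = np.array([0, 1, 2, 5, 6, 9, 10])
observed_vals = np.array([0.0, 0.0, 1.0, 4.0, 5.0, 9.0, 10.0])
filled = fill_missing_samples(observed_days, observed_vals, np.arange(11))
print(np.round(filled, 1))
```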

Risk-Aware MMSE Estimation

Title Risk-Aware MMSE Estimation
Authors Dionysios S. Kalogerias, Luiz F. O. Chamon, George J. Pappas, Alejandro Ribeiro
Abstract Despite the simplicity and intuitive interpretation of Minimum Mean Squared Error (MMSE) estimators, their effectiveness in certain scenarios is questionable. Indeed, minimizing squared errors on average does not provide any form of stability, as the volatility of the estimation error is left unconstrained. When this volatility is statistically significant, the realized performance of the MMSE estimator can differ drastically from its average performance. To address this issue, we introduce a new risk-aware MMSE formulation which trades off mean performance against risk by explicitly constraining the expected predictive variance of the involved squared error. We show that, under mild moment boundedness conditions, the corresponding risk-aware optimal solution can be evaluated explicitly, and has the form of an appropriately biased nonlinear MMSE estimator. We further illustrate the effectiveness of our approach via several numerical examples, which also showcase the advantages of risk-aware MMSE estimation over risk-neutral MMSE estimation, especially in models involving skewed, heavy-tailed distributions.
Tasks
Published 2019-12-06
URL https://arxiv.org/abs/1912.02933v1
PDF https://arxiv.org/pdf/1912.02933v1.pdf
PWC https://paperswithcode.com/paper/risk-aware-mmse-estimation
Repo
Framework
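
A small numeric illustration of the trade-off described above (not the paper's closed-form estimator): under a skewed target distribution, a deliberately biased constant predictor accepts a higher average squared error in exchange for a lower variance of the squared error. The bias choice below is arbitrary and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
z = rng.exponential(scale=1.0, size=200_000)        # skewed "quantity to be estimated"

def mse_and_volatility(c):
    """Mean and variance of the squared error when z is predicted by the constant c."""
    sq_err = (z - c) ** 2
    return sq_err.mean(), sq_err.var()

mmse_c = z.mean()                                    # risk-neutral MMSE constant predictor
biased_c = mmse_c + 0.5 * z.std()                    # deliberately biased, risk-aware choice

for name, c in [("MMSE", mmse_c), ("risk-aware (biased)", biased_c)]:
    mse, vol = mse_and_volatility(c)
    print(f"{name:>20s}: mean sq. error = {mse:5.2f}, variance of sq. error = {vol:5.2f}")
# In this toy the biased predictor accepts roughly 25% more average squared error
# in exchange for a markedly smaller variance (volatility) of the realized error.
```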

Use of First and Third Person Views for Deep Intersection Classification

Title Use of First and Third Person Views for Deep Intersection Classification
Authors Koji Takeda, Kanji Tanaka
Abstract We explore the problem of intersection classification using monocular on-board passive vision, with the goal of classifying traffic scenes with respect to road topology. We divide the existing approaches into two broad categories according to the type of input data: (a) first person vision (FPV) approaches, which use an egocentric view sequence as the intersection is passed; and (b) third person vision (TPV) approaches, which use a single view immediately before entering the intersection. The FPV and TPV approaches each have advantages and disadvantages. Therefore, we aim to combine them into a unified deep learning framework. Experimental results show that the proposed FPV-TPV scheme outperforms previous methods and only requires minimal FPV/TPV measurements.
Tasks
Published 2019-01-22
URL http://arxiv.org/abs/1901.07446v1
PDF http://arxiv.org/pdf/1901.07446v1.pdf
PWC https://paperswithcode.com/paper/use-of-first-and-third-person-views-for-deep
Repo
Framework
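
One simple way to unify the two views is late fusion of class scores from an FPV (sequence) branch and a TPV (single view) branch. The sketch below is a hedged illustration of that idea only; the paper's actual architecture and fusion scheme are not reproduced, and the class names and logits are hypothetical.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_fpv_tpv(fpv_logits, tpv_logits, alpha=0.5):
    """Late fusion of the two branches' class scores over road topologies.

    fpv_logits: averaged per-frame logits from the egocentric sequence branch.
    tpv_logits: logits from the single pre-intersection view branch.
    alpha weights the two sources; the paper's actual fusion may differ.
    """
    return alpha * softmax(fpv_logits) + (1 - alpha) * softmax(tpv_logits)

classes = ["straight", "T-left", "T-right", "cross"]      # illustrative topology classes
fpv = np.array([1.2, 0.1, 0.0, 2.0])                      # hypothetical branch outputs
tpv = np.array([0.3, 0.2, 0.1, 1.5])
print(classes[int(np.argmax(fuse_fpv_tpv(fpv, tpv)))])    # "cross"
```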

Metric-Learning based Deep Hashing Network for Content Based Retrieval of Remote Sensing Images

Title Metric-Learning based Deep Hashing Network for Content Based Retrieval of Remote Sensing Images
Authors Subhankar Roy, Enver Sangineto, Begüm Demir, Nicu Sebe
Abstract Hashing methods have recently been found very effective for retrieval of remote sensing (RS) images due to their computational efficiency and fast search speed. Traditional hashing methods in RS usually exploit hand-crafted features to learn hash functions and obtain binary codes, which can be insufficient to optimally represent the information content of RS images. To overcome this problem, in this paper we introduce a metric-learning based hashing network, which learns: 1) a semantic-based metric space for effective feature representation; and 2) compact binary hash codes for fast archive search. Our network considers an interplay of multiple loss functions that jointly learns a metric-based semantic space, encouraging similar images to cluster together in that target space while producing compact final activations that lose negligible information when binarized. Experiments carried out on two benchmark RS archives show that the proposed network significantly improves retrieval performance at the same retrieval time when compared to state-of-the-art hashing methods in RS.
Tasks Metric Learning
Published 2019-04-02
URL https://arxiv.org/abs/1904.01258v2
PDF https://arxiv.org/pdf/1904.01258v2.pdf
PWC https://paperswithcode.com/paper/metric-learning-based-deep-hashing-network
Repo
Framework
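
A minimal sketch of the two ingredients described above: a triplet margin loss that pulls same-class embeddings together in the metric space, and sign-based binarization of the final activations into compact hash codes compared with Hamming distance. There is no real network here; the embeddings are random stand-ins and the loss/binarization forms are common choices, not necessarily the paper's exact ones.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on embedding vectors (before binarization)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def to_hash_code(embedding):
    """Binarize final activations into a compact code for fast archive search."""
    return (embedding > 0).astype(np.uint8)

def hamming_distance(code_a, code_b):
    return int(np.sum(code_a != code_b))

rng = np.random.default_rng(8)
anchor = rng.normal(size=32)                    # embeddings a trained network would emit
positive = anchor + 0.1 * rng.normal(size=32)   # same-class image: nearby in metric space
negative = rng.normal(size=32)                  # different class: far away

print("triplet loss:", round(triplet_margin_loss(anchor, positive, negative), 3))
print("Hamming(anchor, positive):", hamming_distance(to_hash_code(anchor), to_hash_code(positive)))
print("Hamming(anchor, negative):", hamming_distance(to_hash_code(anchor), to_hash_code(negative)))
```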

Cross-Lingual BERT Transformation for Zero-Shot Dependency Parsing

Title Cross-Lingual BERT Transformation for Zero-Shot Dependency Parsing
Authors Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, Ting Liu
Abstract This paper investigates the problem of learning cross-lingual representations in a contextual space. We propose Cross-Lingual BERT Transformation (CLBT), a simple and efficient approach to generate cross-lingual contextualized word embeddings based on publicly available pre-trained BERT models (Devlin et al., 2018). In this approach, a linear transformation is learned from contextual word alignments to align the contextualized embeddings independently trained in different languages. We demonstrate the effectiveness of this approach on zero-shot cross-lingual transfer parsing. Experiments show that our embeddings substantially outperform the previous state-of-the-art that uses static embeddings. We further compare our approach with XLM (Lample and Conneau, 2019), a recently proposed cross-lingual language model trained with massive parallel data, and achieve highly competitive results.
Tasks Cross-Lingual Transfer, Dependency Parsing, Language Modelling, Word Embeddings
Published 2019-09-15
URL https://arxiv.org/abs/1909.06775v1
PDF https://arxiv.org/pdf/1909.06775v1.pdf
PWC https://paperswithcode.com/paper/cross-lingual-bert-transformation-for-zero
Repo
Framework
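
A hedged sketch of learning a linear transformation from aligned contextual embeddings, the core step described above: stack source and target vectors of word-aligned tokens and solve for the map in closed form. Orthogonal Procrustes via SVD is a common choice for such alignment; whether CLBT constrains the map to be orthogonal is not asserted here, and the data below are random stand-ins for BERT vectors.

```python
import numpy as np

def fit_linear_map(source_vecs, target_vecs, orthogonal=True):
    """Fit W so that source_vecs @ W ≈ target_vecs from aligned word pairs.

    With orthogonal=True this is the closed-form orthogonal Procrustes solution
    (SVD of the cross-covariance); with False it is plain least squares.
    """
    if orthogonal:
        u, _, vt = np.linalg.svd(source_vecs.T @ target_vecs)
        return u @ vt
    return np.linalg.lstsq(source_vecs, target_vecs, rcond=None)[0]

# Toy stand-in for contextual embeddings of word-aligned tokens in two languages.
rng = np.random.default_rng(9)
target = rng.normal(size=(500, 64))                        # "English" contextual vectors
true_rotation = np.linalg.qr(rng.normal(size=(64, 64)))[0]
source = target @ true_rotation.T + 0.01 * rng.normal(size=(500, 64))  # "other language"

W = fit_linear_map(source, target)
print(float(np.linalg.norm(source @ W - target) / np.linalg.norm(target)))  # ≈ 0.01
```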