Paper Group ANR 785
Co-occurrence of the Benford-like and Zipf Laws Arising from the Texts Representing Human and Artificial Languages
Title | Co-occurrence of the Benford-like and Zipf Laws Arising from the Texts Representing Human and Artificial Languages |
Authors | Evgeny Shulzinger, Irina Legchenkova, Edward Bormashenko |
Abstract | We demonstrate that large texts, representing human (English, Russian, Ukrainian) and artificial (C++, Java) languages, display quantitative patterns characterized by the Benford-like and Zipf laws. The frequency of a word following the Zipf law is inversely proportional to its rank, whereas the total counts of individual words in the text generate the uneven Benford-like distribution of leading numbers. Excluding the most popular words substantially improves the correlation of actual textual data with the Zipfian distribution, whereas the Benford distribution of leading numbers (arising from the overall count of a certain word) is insensitive to the same elimination procedure. The calculated values of the moduli of slopes of double logarithmic plots for artificial languages (C++, Java) are markedly larger than those for human ones. |
Tasks | |
Published | 2018-03-06 |
URL | http://arxiv.org/abs/1803.03667v1 |
http://arxiv.org/pdf/1803.03667v1.pdf | |
PWC | https://paperswithcode.com/paper/co-occurrence-of-the-benford-like-and-zipf |
Repo | |
Framework | |
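The abstract above pairs two empirical regularities: Zipf's law (word frequency roughly inversely proportional to rank) and a Benford-like distribution of the leading digits of word counts. A minimal sketch of how both statistics can be extracted from a plain-text corpus is given below; the file name and whitespace tokenization are illustrative assumptions, not details from the paper.

```python
# Sketch: rank-frequency (Zipf) and leading-digit (Benford-like) statistics for a text.
# The corpus file name and the simple whitespace tokenization are assumptions.
from collections import Counter

with open("corpus.txt", encoding="utf-8") as f:
    words = f.read().lower().split()

counts = Counter(words)

# Zipf: frequency vs. rank on a log-log scale; a slope modulus near 1 means f(r) ~ 1/r.
ranked = sorted(counts.values(), reverse=True)
for rank, freq in list(enumerate(ranked, start=1))[:10]:
    print(f"rank {rank:2d}  freq {freq}")

# Benford-like law: distribution of the leading digit of each word's total count.
leading = Counter(str(c)[0] for c in counts.values())
total = sum(leading.values())
for digit in "123456789":
    print(digit, round(leading.get(digit, 0) / total, 3))
```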
Deep Learning Approach for Predicting 30 Day Readmissions after Coronary Artery Bypass Graft Surgery
Title | Deep Learning Approach for Predicting 30 Day Readmissions after Coronary Artery Bypass Graft Surgery |
Authors | Ramesh B. Manyam, Yanqing Zhang, William B. Keeling, Jose Binongo, Michael Kayatta, Seth Carter |
Abstract | Hospital readmissions within 30 days after discharge following Coronary Artery Bypass Graft (CABG) surgery are substantial contributors to healthcare costs. Many predictive models have been developed to identify risk factors for readmissions. However, the majority of existing models use statistical analysis techniques with data available at discharge. We propose an ensembled model to predict CABG readmissions using pre-discharge perioperative data and machine learning survival analysis techniques. First, we applied fifty-one potential readmission risk variables to Cox Proportional Hazard (CPH) survival regression univariate analysis. Fourteen of them turned out to be significant (p-value < 0.05), contributing to readmissions. Subsequently, we applied these 14 predictors to a multivariate CPH model and to DeepSurv, a Deep Learning Neural Network (NN) representation of the CPH model. We validated this new ensembled model with 453 isolated adult CABG cases. Nine of the fourteen perioperative risk variables were identified as the most significant, with Hazard Ratios (HR) greater than 1.0. The concordance index metrics for the CPH, DeepSurv, and ensembled models were then evaluated with training and validation datasets. Our ensembled model yielded promising results in terms of c-statistics as we increased the number of iterations and dataset sizes. 30-day all-cause readmissions among isolated CABG patients can be predicted more effectively with perioperative pre-discharge data, using machine learning survival analysis techniques. Prediction accuracy could be improved further with deep learning algorithms. |
Tasks | Survival Analysis |
Published | 2018-12-03 |
URL | http://arxiv.org/abs/1812.00596v1 |
http://arxiv.org/pdf/1812.00596v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-learning-approach-for-predicting-30-day |
Repo | |
Framework | |
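The entry above evaluates the CPH, DeepSurv, and ensembled models with the concordance index (c-statistic). As a reference point, a minimal, unoptimized c-index computation over right-censored data is sketched below; this is the generic definition, not code from the paper.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Fraction of comparable patient pairs ordered correctly by predicted risk.

    A pair (i, j) is comparable if the patient with the shorter time had an event.
    Higher risk scores are expected for shorter survival times; ties count as 0.5.
    """
    times, events, risk_scores = map(np.asarray, (times, events, risk_scores))
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: a perfect risk ordering yields a c-index of 1.0.
print(concordance_index([2, 4, 6], [1, 1, 0], [0.9, 0.5, 0.1]))
```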
dynnode2vec: Scalable Dynamic Network Embedding
Title | dynnode2vec: Scalable Dynamic Network Embedding |
Authors | Sedigheh Mahdavi, Shima Khoshraftar, Aijun An |
Abstract | Network representation learning in low-dimensional vector space has attracted considerable attention in both academic and industrial domains. Most real-world networks are dynamic, with addition/deletion of nodes and edges. Existing graph embedding methods are designed for static networks and cannot capture evolving patterns in a large dynamic network. In this paper, we propose a dynamic embedding method, dynnode2vec, based on the well-known graph embedding method node2vec. Node2vec is a random-walk-based embedding method for static networks. Applying static network embedding in dynamic settings has two crucial problems: 1) generating random walks for every time step is time-consuming, and 2) the embedding vector spaces at each timestamp are different. In order to tackle these challenges, dynnode2vec uses evolving random walks and initializes the current graph embedding with previous embedding vectors. We demonstrate the advantages of the proposed dynamic network embedding by conducting empirical evaluations on several large dynamic network datasets. |
Tasks | Graph Embedding, Network Embedding, Representation Learning |
Published | 2018-12-06 |
URL | http://arxiv.org/abs/1812.02356v2 |
http://arxiv.org/pdf/1812.02356v2.pdf | |
PWC | https://paperswithcode.com/paper/dynnode2vec-scalable-dynamic-network |
Repo | |
Framework | |
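dynnode2vec avoids regenerating all random walks at every time step by walking only from nodes whose neighborhoods changed, and it warm-starts the skip-gram model with the previous step's vectors. A simplified sketch of the evolving-walk idea is shown below, using a plain adjacency dict and uniform (rather than node2vec's biased) walks; the function names and parameters are illustrative assumptions, not the authors' implementation.

```python
import random

def changed_nodes(prev_adj, curr_adj):
    """Nodes whose neighbor sets changed between consecutive snapshots (plus new nodes)."""
    nodes = set(prev_adj) | set(curr_adj)
    return {v for v in nodes
            if set(prev_adj.get(v, ())) != set(curr_adj.get(v, ()))}

def random_walks(adj, start_nodes, num_walks=10, walk_length=20, seed=0):
    """Uniform random walks from the given start nodes (a simplification of node2vec)."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in start_nodes:
            walk = [start]
            while len(walk) < walk_length and adj.get(walk[-1]):
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append([str(v) for v in walk])
    return walks

# At time t, only walk from changed nodes; the resulting walks would then update a
# skip-gram model whose vectors are initialized from the embedding learned at time t-1.
prev_adj = {1: [2], 2: [1, 3], 3: [2]}
curr_adj = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2]}
walks = random_walks(curr_adj, changed_nodes(prev_adj, curr_adj))
print(len(walks), walks[0])
```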
A Deep Latent-Variable Model Application to Select Treatment Intensity in Survival Analysis
Title | A Deep Latent-Variable Model Application to Select Treatment Intensity in Survival Analysis |
Authors | Cédric Beaulac, Jeffrey S. Rosenthal, David Hodgson |
Abstract | In the following short article we adapt a new and popular machine learning model for inference on medical data sets. Our method is based on the Variational AutoEncoder (VAE) framework that we adapt to survival analysis on small data sets with missing values. In our model, the true health status appears as a set of latent variables that affects the observed covariates and the survival chances. We show that this flexible model allows insightful decision-making using a predicted distribution and outperforms a classic survival analysis model. |
Tasks | Decision Making, Survival Analysis |
Published | 2018-11-29 |
URL | http://arxiv.org/abs/1811.12323v1 |
http://arxiv.org/pdf/1811.12323v1.pdf | |
PWC | https://paperswithcode.com/paper/a-deep-latent-variable-model-application-to |
Repo | |
Framework | |
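The model above treats the true health status as VAE-style latent variables influencing both the observed covariates and survival. The two standard terms of any VAE objective, a reconstruction likelihood plus a KL penalty on the diagonal-Gaussian posterior, are sketched below in numpy as a generic illustration under Gaussian assumptions; this is not the paper's exact objective, which additionally models the survival outcome.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def gaussian_recon_nll(x, x_hat, sigma=1.0):
    """Negative log-likelihood (up to a constant) of covariates under a Gaussian decoder."""
    return np.sum((x - x_hat) ** 2, axis=-1) / (2 * sigma**2)

# Negative ELBO (to be minimized) for a toy batch; a survival likelihood term would be
# added on top in a model like the one described above.
x = np.array([[0.2, -1.0], [1.5, 0.3]])
x_hat = np.array([[0.1, -0.8], [1.2, 0.5]])
mu = np.array([[0.0, 0.1], [0.3, -0.2]])
log_var = np.array([[0.0, -0.1], [0.2, 0.0]])
print(gaussian_recon_nll(x, x_hat) + gaussian_kl(mu, log_var))
```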
Learning from Richer Human Guidance: Augmenting Comparison-Based Learning with Feature Queries
Title | Learning from Richer Human Guidance: Augmenting Comparison-Based Learning with Feature Queries |
Authors | Chandrayee Basu, Mukesh Singhal, Anca D. Dragan |
Abstract | We focus on learning the desired objective function for a robot. Although trajectory demonstrations can be very informative of the desired objective, they can also be difficult for users to provide. Answers to comparison queries, asking which of two trajectories is preferable, are much easier for users, and have emerged as an effective alternative. Unfortunately, comparisons are far less informative. We propose that there is much richer information that users can easily provide and that robots ought to leverage. We focus on augmenting comparisons with feature queries, and introduce a unified formalism for treating all answers as observations about the true desired reward. We derive an active query selection algorithm, and test these queries in simulation and on real users. We find that richer, feature-augmented queries can extract more information faster, leading to robots that better match user preferences in their behavior. |
Tasks | |
Published | 2018-02-05 |
URL | http://arxiv.org/abs/1802.01604v1 |
http://arxiv.org/pdf/1802.01604v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-from-richer-human-guidance |
Repo | |
Framework | |
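The approach above treats every answer, whether a trajectory comparison or a feature query, as a noisy observation of the true reward weights. A common way to formalize the comparison part is a Boltzmann-rational likelihood over sampled weight vectors, sketched below; the particle-style belief update and the feature dimensionality are illustrative assumptions rather than the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothesis set: candidate reward-weight vectors over 3 trajectory features.
weights = rng.normal(size=(500, 3))
belief = np.ones(500) / 500

def update_with_comparison(belief, weights, feat_a, feat_b, answer, beta=5.0):
    """Bayesian update after the user says trajectory A (answer=1) or B (answer=0) is better.

    The likelihood is Boltzmann-rational in the reward difference w . (feat_a - feat_b).
    """
    diff = weights @ (np.asarray(feat_a) - np.asarray(feat_b))
    p_a = 1.0 / (1.0 + np.exp(-beta * diff))        # P(user prefers A | w)
    likelihood = p_a if answer == 1 else 1.0 - p_a
    posterior = belief * likelihood
    return posterior / posterior.sum()

# The user prefers the trajectory with a higher value of feature 0.
belief = update_with_comparison(belief, weights, [1.0, 0.0, 0.0], [0.0, 0.0, 0.0], answer=1)
print("posterior mean weights:", (belief[:, None] * weights).sum(axis=0))
```

An active query selection scheme would then pick the next comparison or feature query that is most informative under this belief, for example by maximizing expected reduction in uncertainty over the weights.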
Semantic Correspondence: A Hierarchical Approach
Title | Semantic Correspondence: A Hierarchical Approach |
Authors | Akila Pemasiri, Kien Nguyen, Sridha Sridharan, Clinton Fookes |
Abstract | Establishing semantic correspondence across images when the objects in the images have undergone complex deformations remains a challenging task in the field of computer vision. In this paper, we propose a hierarchical method to tackle this problem by first semantically targeting the foreground objects to localize the search space and then looking deeply into multiple levels of the feature representation to search for point-level correspondence. In contrast to existing approaches, which typically penalize large discrepancies, our approach allows for significant displacements, with the aim of accommodating large deformations of the objects in the scene. By localizing the search space through semantically matched object-level correspondence, our method robustly handles large deformations of objects. Representing the target region by concatenated hypercolumn features, which take into account the hierarchical levels of the surrounding context, helps to resolve ambiguity and further improves accuracy. By conducting multiple experiments across scenes with non-rigid objects, we validate the proposed approach and show that it outperforms state-of-the-art methods for semantic correspondence establishment. |
Tasks | |
Published | 2018-06-10 |
URL | http://arxiv.org/abs/1806.03560v1 |
http://arxiv.org/pdf/1806.03560v1.pdf | |
PWC | https://paperswithcode.com/paper/semantic-correspondence-a-hierarchical |
Repo | |
Framework | |
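The point-level stage above represents each target location by concatenated hypercolumn features, i.e., activations from several CNN layers upsampled to a common resolution and stacked channel-wise. A minimal sketch of that construction plus cosine-similarity matching is given below; the feature maps are random stand-ins for real CNN activations, which is an assumption for illustration only.

```python
import numpy as np

def hypercolumn(feature_maps, out_hw):
    """Nearest-neighbor upsample each (C_i, H_i, W_i) map to out_hw and concatenate channels."""
    H, W = out_hw
    stacked = []
    for fmap in feature_maps:
        c, h, w = fmap.shape
        rows = np.arange(H) * h // H
        cols = np.arange(W) * w // W
        stacked.append(fmap[:, rows][:, :, cols])
    return np.concatenate(stacked, axis=0)          # (sum C_i, H, W)

rng = np.random.default_rng(0)
# Stand-ins for shallow and deep CNN activations of two images.
maps_a = [rng.normal(size=(8, 16, 16)), rng.normal(size=(16, 8, 8))]
maps_b = [rng.normal(size=(8, 16, 16)), rng.normal(size=(16, 8, 8))]
ha = hypercolumn(maps_a, (16, 16)).reshape(24, -1)
hb = hypercolumn(maps_b, (16, 16)).reshape(24, -1)

# Cosine similarity between one query location in image A and all locations in image B.
q = ha[:, 100] / np.linalg.norm(ha[:, 100])
sims = (hb / np.linalg.norm(hb, axis=0)).T @ q
print("best match index in B:", int(sims.argmax()))
```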
Fully-Coupled Two-Stream Spatiotemporal Networks for Extremely Low Resolution Action Recognition
Title | Fully-Coupled Two-Stream Spatiotemporal Networks for Extremely Low Resolution Action Recognition |
Authors | Mingze Xu, Aidean Sharghi, Xin Chen, David J Crandall |
Abstract | A major emerging challenge is how to protect people’s privacy as cameras and computer vision are increasingly integrated into our daily lives, including in smart devices inside homes. A potential solution is to capture and record just the minimum amount of information needed to perform a task of interest. In this paper, we propose a fully-coupled two-stream spatiotemporal architecture for reliable human action recognition on extremely low resolution (e.g., 12x16 pixel) videos. We provide an efficient method to extract spatial and temporal features and to aggregate them into a robust feature representation for an entire action video sequence. We also consider how to incorporate high resolution videos during training in order to build better low resolution action recognition models. We evaluate on two publicly-available datasets, showing significant improvements over the state-of-the-art. |
Tasks | Temporal Action Localization |
Published | 2018-01-11 |
URL | http://arxiv.org/abs/1801.03983v1 |
http://arxiv.org/pdf/1801.03983v1.pdf | |
PWC | https://paperswithcode.com/paper/fully-coupled-two-stream-spatiotemporal |
Repo | |
Framework | |
MATCH-Net: Dynamic Prediction in Survival Analysis using Convolutional Neural Networks
Title | MATCH-Net: Dynamic Prediction in Survival Analysis using Convolutional Neural Networks |
Authors | Daniel Jarrett, Jinsung Yoon, Mihaela van der Schaar |
Abstract | Accurate prediction of disease trajectories is critical for early identification and timely treatment of patients at risk. Conventional methods in survival analysis are often constrained by strong parametric assumptions and limited in their ability to learn from high-dimensional data, while existing neural network models are not readily adapted to the longitudinal setting. This paper develops a novel convolutional approach that addresses these drawbacks. We present MATCH-Net: a Missingness-Aware Temporal Convolutional Hitting-time Network, designed to capture temporal dependencies and heterogeneous interactions in covariate trajectories and patterns of missingness. To the best of our knowledge, this is the first investigation of temporal convolutions in the context of dynamic prediction for personalized risk prognosis. Using real-world data from the Alzheimer’s Disease Neuroimaging Initiative, we demonstrate state-of-the-art performance without making any assumptions regarding underlying longitudinal or time-to-event processes, attesting to the model’s potential utility in clinical decision support. |
Tasks | Survival Analysis |
Published | 2018-11-26 |
URL | http://arxiv.org/abs/1811.10746v1 |
http://arxiv.org/pdf/1811.10746v1.pdf | |
PWC | https://paperswithcode.com/paper/match-net-dynamic-prediction-in-survival |
Repo | |
Framework | |
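MATCH-Net feeds temporal convolutions with sliding windows over longitudinal covariates together with explicit missingness indicators. A small sketch of assembling such windowed inputs is shown below; the window length and forward-fill imputation are illustrative assumptions rather than the paper's exact preprocessing.

```python
import numpy as np

def windowed_inputs(covariates, window=4):
    """Build (value, mask) windows from a (T, D) longitudinal record with NaNs for missing.

    Returns arrays of shape (num_windows, window, D): forward-filled values and a binary
    mask marking which entries were actually observed.
    """
    T, D = covariates.shape
    mask = ~np.isnan(covariates)
    filled = covariates.copy()
    for t in range(1, T):                           # simple forward-fill imputation
        filled[t] = np.where(mask[t], filled[t], filled[t - 1])
    filled = np.nan_to_num(filled)                  # leading missing values become 0
    values = np.stack([filled[t:t + window] for t in range(T - window + 1)])
    masks = np.stack([mask[t:t + window] for t in range(T - window + 1)])
    return values, masks.astype(float)

record = np.array([[1.0, np.nan], [np.nan, 2.0], [3.0, np.nan], [4.0, 5.0], [np.nan, 6.0]])
values, masks = windowed_inputs(record)
print(values.shape, masks.shape)   # (2, 4, 2) windows would feed the temporal convolutions
```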
Feature Selection for Survival Analysis with Competing Risks using Deep Learning
Title | Feature Selection for Survival Analysis with Competing Risks using Deep Learning |
Authors | Carl Rietschel, Jinsung Yoon, Mihaela van der Schaar |
Abstract | Deep learning models for survival analysis have gained significant attention in the literature, but they suffer from severe performance deficits when the dataset contains many irrelevant features. We give empirical evidence for this problem in real-world medical settings using the state-of-the-art model DeepHit. Furthermore, we develop methods to improve the deep learning model through novel approaches to feature selection in survival analysis. We propose filter methods for hard feature selection and a neural network architecture that weights features for soft feature selection. Our experiments on two real-world medical datasets demonstrate that substantial performance improvements against the original models are achievable. |
Tasks | Feature Selection, Survival Analysis |
Published | 2018-11-22 |
URL | http://arxiv.org/abs/1811.09317v4 |
http://arxiv.org/pdf/1811.09317v4.pdf | |
PWC | https://paperswithcode.com/paper/feature-selection-for-survival-analysis-with |
Repo | |
Framework | |
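The paper above couples hard feature selection (filter methods) with a soft, learned per-feature weighting. A very small filter-style sketch is given below: each covariate is scored by the absolute value of its univariate association with observed event times among uncensored patients, and only the top-k survive. This scoring rule is an illustrative stand-in, not one of the specific filters studied in the paper.

```python
import numpy as np

def filter_select(X, times, events, k=5):
    """Keep the k features most associated with event time among uncensored patients."""
    X, times, events = map(np.asarray, (X, times, events))
    uncensored = events == 1
    scores = np.array([
        abs(np.corrcoef(X[uncensored, j], times[uncensored])[0, 1])
        for j in range(X.shape[1])
    ])
    keep = np.argsort(scores)[::-1][:k]
    return keep, X[:, keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
times = np.exp(-X[:, 3]) + rng.exponential(0.1, size=100)   # only feature 3 drives survival
events = rng.integers(0, 2, size=100)
keep, X_small = filter_select(X, times, events, k=3)
print("selected feature indices:", keep)
```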
Non-computability of human intelligence
Title | Non-computability of human intelligence |
Authors | Yasha Savelyev |
Abstract | We revisit the question (most famously) initiated by Turing: can human intelligence be completely modeled by a Turing machine? We show that the answer is *no*, assuming a certain weak soundness hypothesis. More specifically, we show that at least some meaningful thought processes of the brain cannot be Turing computable. In particular, some physical processes are not Turing computable, which is not entirely expected. There are some similarities between our argument and the well-known Lucas-Penrose argument, but we work purely on the level of Turing machines and do not use Gödel’s incompleteness theorem or any direct analogue. Instead we directly construct and use a weak analogue of a Gödel statement for a certain system which involves our human; this allows us to side-step some (possible) meta-logical issues with their argument. |
Tasks | |
Published | 2018-10-12 |
URL | https://arxiv.org/abs/1810.06985v8 |
https://arxiv.org/pdf/1810.06985v8.pdf | |
PWC | https://paperswithcode.com/paper/non-computability-of-human-intelligence |
Repo | |
Framework | |
Metric-Driven Learning of Correspondence Weighting for 2-D/3-D Image Registration
Title | Metric-Driven Learning of Correspondence Weighting for 2-D/3-D Image Registration |
Authors | Roman Schaffert, Jian Wang, Peter Fischer, Anja Borsdorf, Andreas Maier |
Abstract | Registration of pre-operative 3-D volumes to intra-operative 2-D X-ray images is important in minimally invasive medical procedures. Rigid registration can be performed by estimating a global rigid motion that optimizes the alignment of local correspondences. However, inaccurate correspondences challenge the registration performance. To minimize their influence, we estimate optimal weights for correspondences using PointNet. We train the network directly with the criterion to minimize the registration error. We propose an objective function which includes point-to-plane correspondence-based motion estimation and projection error computation, thereby enabling the learning of a weighting strategy that optimally fits the underlying formulation of the registration task in an end-to-end fashion. For single-vertebra registration, we achieve an accuracy of 0.74 ± 0.26 mm and highly improved robustness. The success rate is increased from 79.3% to 94.3% and the capture range from 3 mm to 13 mm. |
Tasks | Image Registration, Motion Estimation |
Published | 2018-06-20 |
URL | http://arxiv.org/abs/1806.07812v3 |
http://arxiv.org/pdf/1806.07812v3.pdf | |
PWC | https://paperswithcode.com/paper/metric-driven-learning-of-correspondence |
Repo | |
Framework | |
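The objective above combines point-to-plane correspondence-based motion estimation with a learned per-correspondence weight. The core quantity, a weighted point-to-plane residual, is easy to write down; the sketch below shows that residual and a weighted least-squares solve for a small rigid update. This is the generic linearized formulation, not the paper's full learning pipeline.

```python
import numpy as np

def weighted_point_to_plane_update(p, q, n, w):
    """One weighted least-squares step of point-to-plane alignment.

    p: source points (N, 3), q: matched target points (N, 3), n: target normals (N, 3),
    w: per-correspondence weights (N,). Returns a small rotation (axis-angle) and a
    translation minimizing sum_i w_i * (n_i . (p_i + omega x p_i + t - q_i))^2.
    """
    A = np.hstack([np.cross(p, n), n])              # (N, 6): [p_i x n_i, n_i]
    b = np.einsum("ij,ij->i", n, p - q)             # signed point-to-plane residuals
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(sw[:, None] * A, -sw * b, rcond=None)
    return x[:3], x[3:]                             # omega (rotation), t (translation)

rng = np.random.default_rng(0)
q = rng.normal(size=(200, 3))
n = np.tile([0.0, 0.0, 1.0], (200, 1))              # planar normals for the toy example
p = q + np.array([0.0, 0.0, 0.05])                  # source shifted 5 cm along the normals
w = np.ones(200)                                    # a learned network would predict these
omega, t = weighted_point_to_plane_update(p, q, n, w)
print(np.round(t, 3))                               # expect roughly [0, 0, -0.05]
```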
The Architecture of Mr. DLib’s Scientific Recommender-System API
Title | The Architecture of Mr. DLib’s Scientific Recommender-System API |
Authors | Joeran Beel, Andrew Collins, Akiko Aizawa |
Abstract | Recommender systems in academia are not widely available. This may be in part due to the difficulty and cost of developing and maintaining recommender systems. Many operators of academic products such as digital libraries and reference managers avoid this effort, although a recommender system could provide significant benefits to their users. In this paper, we introduce Mr. DLib’s “Recommendations as-a-Service” (RaaS) API that allows operators of academic products to easily integrate a scientific recommender system into their products. Mr. DLib generates recommendations for research articles, but in the future recommendations may include calls for papers, grants, etc. Operators of academic products can request recommendations from Mr. DLib and display these recommendations to their users. Mr. DLib can be integrated in just a few hours or days; creating an equivalent recommender system from scratch would require several months for an academic operator. Mr. DLib has been used by GESIS Sowiport and by the reference manager JabRef. Mr. DLib is open source and its goal is to facilitate the application of, and research on, scientific recommender systems. In this paper, we present the motivation for Mr. DLib, its architecture, and details about its effectiveness. Mr. DLib has delivered 94 million recommendations over a span of two years with an average click-through rate of 0.12%. |
Tasks | Recommendation Systems |
Published | 2018-11-26 |
URL | http://arxiv.org/abs/1811.10364v1 |
http://arxiv.org/pdf/1811.10364v1.pdf | |
PWC | https://paperswithcode.com/paper/the-architecture-of-mr-dlibs-scientific |
Repo | |
Framework | |
High-quality nonparallel voice conversion based on cycle-consistent adversarial network
Title | High-quality nonparallel voice conversion based on cycle-consistent adversarial network |
Authors | Fuming Fang, Junichi Yamagishi, Isao Echizen, Jaime Lorenzo-Trueba |
Abstract | Although voice conversion (VC) algorithms have achieved remarkable success along with the development of machine learning, superior performance is still difficult to achieve when using nonparallel data. In this paper, we propose using a cycle-consistent adversarial network (CycleGAN) for nonparallel data-based VC training. A CycleGAN is a generative adversarial network (GAN) originally developed for unpaired image-to-image translation. A subjective evaluation of inter-gender conversion demonstrated that the proposed method significantly outperformed a method based on the Merlin open source neural network speech synthesis system (a parallel VC system adapted for our setup) and a GAN-based parallel VC system. This is the first research to show that the performance of a nonparallel VC method can exceed that of state-of-the-art parallel VC methods. |
Tasks | Image-to-Image Translation, Speech Synthesis, Voice Conversion |
Published | 2018-04-02 |
URL | http://arxiv.org/abs/1804.00425v1 |
http://arxiv.org/pdf/1804.00425v1.pdf | |
PWC | https://paperswithcode.com/paper/high-quality-nonparallel-voice-conversion |
Repo | |
Framework | |
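CycleGAN removes the need for parallel utterances by training two generators, source-to-target and target-to-source, with a cycle-consistency penalty that forces a round trip to reconstruct the input features, alongside the usual adversarial terms. The essential losses are sketched below with the generators treated as opaque callables over acoustic feature arrays; this is the generic CycleGAN formulation, not the authors' training code, and the least-squares GAN loss is one common choice rather than a detail taken from the paper.

```python
import numpy as np

def cycle_consistency_loss(x_src, x_tgt, g_s2t, g_t2s, lam=10.0):
    """L1 cycle loss in both directions: ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1, scaled by lam."""
    fwd = np.mean(np.abs(g_t2s(g_s2t(x_src)) - x_src))
    bwd = np.mean(np.abs(g_s2t(g_t2s(x_tgt)) - x_tgt))
    return lam * (fwd + bwd)

def lsgan_generator_loss(d_scores_on_fake):
    """Least-squares GAN generator loss: converted features should be scored as real (1)."""
    return np.mean((d_scores_on_fake - 1.0) ** 2)

# Toy check with identity "generators": the cycle loss is exactly zero.
x_src = np.random.default_rng(0).normal(size=(8, 24))   # e.g. 24-dim spectral feature frames
x_tgt = np.random.default_rng(1).normal(size=(8, 24))
identity = lambda x: x
print(cycle_consistency_loss(x_src, x_tgt, identity, identity))
```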
An exploration of algorithmic discrimination in data and classification
Title | An exploration of algorithmic discrimination in data and classification |
Authors | Jixue Liu, Jiuyong Li, Feiyue Ye, Lin Liu, Thuc Duy Le, Ping Xiong |
Abstract | Algorithmic discrimination is an important concern when data is used for predictive purposes. This paper analyzes the relationships between discrimination and classification, data set partitioning, decision models, and correlation. The paper uses real-world data sets to demonstrate the existence of discrimination and the independence between the discrimination of data sets and the discrimination of classification models. |
Tasks | |
Published | 2018-11-06 |
URL | http://arxiv.org/abs/1811.02994v1 |
http://arxiv.org/pdf/1811.02994v1.pdf | |
PWC | https://paperswithcode.com/paper/an-exploration-of-algorithmic-discrimination |
Repo | |
Framework | |
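One standard way to quantify the discrimination the abstract refers to is the difference in positive-outcome rates between a protected group and the rest of the data (a demographic-parity gap), computed either on the labels of a data set or on a classifier's predictions; the two can differ, which is the independence the abstract points to. A minimal sketch is below, with illustrative toy data rather than the paper's data sets.

```python
import numpy as np

def parity_gap(outcomes, protected):
    """Difference in positive rates: P(outcome=1 | protected) - P(outcome=1 | not protected)."""
    outcomes, protected = np.asarray(outcomes), np.asarray(protected, dtype=bool)
    return outcomes[protected].mean() - outcomes[~protected].mean()

# Discrimination in the data vs. discrimination in a classifier's decisions can differ:
labels      = np.array([1, 0, 1, 1, 0, 1, 0, 0])
predictions = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group       = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print("gap in data:       ", parity_gap(labels, group))       # 0.5
print("gap in predictions:", parity_gap(predictions, group))  # 0.0
```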
EOE: Expected Overlap Estimation over Unstructured Point Cloud Data
Title | EOE: Expected Overlap Estimation over Unstructured Point Cloud Data |
Authors | Ben Eckart, Kihwan Kim, Jan Kautz |
Abstract | We present an iterative overlap estimation technique to augment existing point cloud registration algorithms that can achieve high performance in difficult real-world situations where large pose displacement and non-overlapping geometry would otherwise cause traditional methods to fail. Our approach estimates overlapping regions through an iterative Expectation Maximization procedure that encodes the sensor field-of-view into the registration process. The proposed technique, Expected Overlap Estimation (EOE), is derived from the observation that differences in field-of-view violate the iid assumption implicitly held by all maximum likelihood based registration techniques. We demonstrate how our approach can augment many popular registration methods with minimal computational overhead. Through experimentation on both synthetic and real-world datasets, we find that adding an explicit overlap estimation step can aid robust outlier handling and increase the accuracy of both ICP-based and GMM-based registration methods, especially in large unstructured domains and where the amount of overlap between point clouds is very small. |
Tasks | Point Cloud Registration |
Published | 2018-08-06 |
URL | http://arxiv.org/abs/1808.02155v1 |
http://arxiv.org/pdf/1808.02155v1.pdf | |
PWC | https://paperswithcode.com/paper/eoe-expected-overlap-estimation-over |
Repo | |
Framework | |
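EOE augments registration with an EM-style step that estimates which source points actually have counterparts in the target, so that non-overlapping geometry does not pull the alignment. A simplified sketch of such an overlap-weighting E-step around nearest-neighbor correspondences is below; the Gaussian inlier model and constant outlier density are illustrative assumptions rather than the paper's exact field-of-view formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_weights(source, target, sigma=0.05, outlier_density=0.1):
    """E-step: posterior probability that each source point lies in the overlapping region.

    Each source point's nearest-neighbor distance is explained either by a Gaussian inlier
    model (overlap) or by a constant outlier density (non-overlap).
    """
    dists, _ = cKDTree(target).query(source)
    inlier = np.exp(-0.5 * (dists / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return inlier / (inlier + outlier_density)

rng = np.random.default_rng(0)
target = rng.uniform(0, 1, size=(500, 3))
overlap_part = target[:250] + rng.normal(0, 0.01, size=(250, 3))   # seen by both "sensors"
extra_part = rng.uniform(2, 3, size=(250, 3))                      # outside the target's view
source = np.vstack([overlap_part, extra_part])
w = overlap_weights(source, target)
print(w[:250].mean(), w[250:].mean())   # high weights in the overlap, near zero elsewhere
```

In a full pipeline these weights would down-weight (or drop) the non-overlapping correspondences inside each iteration of an ICP- or GMM-based registration method, which is how the approach augments existing algorithms with little overhead.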