Paper Group ANR 715
Auto-Model: Utilizing Research Papers and HPO Techniques to Deal with the CASH problem
Title | Auto-Model: Utilizing Research Papers and HPO Techniques to Deal with the CASH problem |
Authors | Chunnan Wang, Hongzhi Wang, Tianyu Mu, Jianzhong Li, Hong Gao |
Abstract | In many fields, a mass of algorithms with completely different hyperparameters have been developed to address the same type of problems. Choosing the algorithm and hyperparameter setting correctly can greatly improve overall performance, but users often fail to do so for lack of the relevant knowledge. How to help users effectively and quickly select a suitable algorithm and hyperparameter setting for a given task instance is an important research topic, known as the Combined Algorithm Selection and Hyperparameter optimization (CASH) problem. In this paper, we design the Auto-Model approach, which makes full use of the information available in related research papers and introduces hyperparameter optimization techniques to solve the CASH problem effectively. Auto-Model tremendously reduces the cost of algorithm implementation and the size of the hyperparameter configuration space, and is thus capable of dealing with the CASH problem efficiently and easily. To demonstrate the benefit of Auto-Model, we compare it with the classical Auto-WEKA approach. The experimental results show that our proposed approach provides superior results and achieves better performance in a short time. |
Tasks | Hyperparameter Optimization |
Published | 2019-10-24 |
URL | https://arxiv.org/abs/1910.10902v1 |
https://arxiv.org/pdf/1910.10902v1.pdf | |
PWC | https://paperswithcode.com/paper/auto-model-utilizing-research-papers-and-hpo |
Repo | |
Framework | |
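The CASH problem described above treats algorithm selection and hyperparameter tuning as one joint search. A minimal sketch of that idea, using random search over a toy, entirely hypothetical search space (the paper's actual approach additionally mines candidate configurations from research papers):

```python
import random

# Toy CASH search space (illustrative only): two candidate algorithms,
# each with a single hyperparameter and its range.
SPACE = {
    "knn": ("k", 1, 30),        # integer hyperparameter
    "svm": ("C", 0.01, 100.0),  # continuous hyperparameter
}

def toy_score(algo, value):
    # Stand-in for cross-validated accuracy; peaks at k=10 and C=1.0.
    if algo == "knn":
        return 1.0 - abs(value - 10) / 30.0
    return 1.0 - abs(value - 1.0) / 100.0

def random_search_cash(n_trials=200, seed=0):
    # Jointly sample (algorithm, hyperparameter) pairs and keep the best:
    # the simplest way to treat CASH as a single search problem.
    rng = random.Random(seed)
    best = ("", 0.0, float("-inf"))
    for _ in range(n_trials):
        algo = rng.choice(sorted(SPACE))
        name, lo, hi = SPACE[algo]
        value = rng.randint(lo, hi) if isinstance(lo, int) else rng.uniform(lo, hi)
        score = toy_score(algo, value)
        if score > best[2]:
            best = (algo, value, score)
    return best
```

Narrowing `SPACE` to configurations reported in the literature, as Auto-Model does, shrinks the space this loop has to cover.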
Deep learning-based survival prediction for multiple cancer types using histopathology images
Title | Deep learning-based survival prediction for multiple cancer types using histopathology images |
Authors | Ellery Wulczyn, David F. Steiner, Zhaoyang Xu, Apaar Sadhwani, Hongwu Wang, Isabelle Flament, Craig H. Mermel, Po-Hsuan Cameron Chen, Yun Liu, Martin C. Stumpe |
Abstract | Prognostic information at diagnosis has important implications for cancer treatment and monitoring. Although cancer staging, histopathological assessment, molecular features, and clinical variables can provide useful prognostic insights, improving risk stratification remains an active research area. We developed a deep learning system (DLS) to predict disease specific survival across 10 cancer types from The Cancer Genome Atlas (TCGA). We used a weakly-supervised approach without pixel-level annotations, and tested three different survival loss functions. The DLS was developed using 9,086 slides from 3,664 cases and evaluated using 3,009 slides from 1,216 cases. In multivariable Cox regression analysis of the combined cohort including all 10 cancers, the DLS was significantly associated with disease specific survival (hazard ratio of 1.58, 95% CI 1.28-1.70, p<0.0001) after adjusting for cancer type, stage, age, and sex. In a per-cancer adjusted subanalysis, the DLS remained a significant predictor of survival in 5 of 10 cancer types. Compared to a baseline model including stage, age, and sex, the c-index of the model demonstrated an absolute 3.7% improvement (95% CI 1.0-6.5) in the combined cohort. Additionally, our models stratified patients within individual cancer stages, particularly stage II (p=0.025) and stage III (p<0.001). By developing and evaluating prognostic models across multiple cancer types, this work represents one of the most comprehensive studies exploring the direct prediction of clinical outcomes using deep learning and histopathology images. Our analysis demonstrates the potential for this approach to provide prognostic information in multiple cancer types, and even within specific pathologic stages. However, given the relatively small number of clinical events, we observed wide confidence intervals, suggesting that future work will benefit from larger datasets. |
Tasks | |
Published | 2019-12-16 |
URL | https://arxiv.org/abs/1912.07354v1 |
https://arxiv.org/pdf/1912.07354v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-learning-based-survival-prediction-for |
Repo | |
Framework | |
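The c-index improvements reported above refer to Harrell's concordance index, the standard discrimination metric for survival models. A minimal sketch of how it is computed (simplified: no handling of tied event times):

```python
def c_index(times, events, risks):
    # Harrell's concordance index: the fraction of comparable patient
    # pairs that the risk score orders correctly (higher risk should
    # mean an earlier observed event). Ties in risk count as 0.5.
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            # A pair is comparable when subject i had an observed event
            # strictly before subject j's (possibly censored) time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A perfectly anti-correlated risk score gives 1.0, random scores hover around 0.5, which is why the paper's absolute 3.7% c-index gain over the stage/age/sex baseline is meaningful.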
On Conditioning GANs to Hierarchical Ontologies
Title | On Conditioning GANs to Hierarchical Ontologies |
Authors | Hamid Eghbal-zadeh, Lukas Fischer, Thomas Hoch |
Abstract | The recent success of Generative Adversarial Networks (GANs) is a result of their ability to generate high-quality images from a latent vector space. An important application is the generation of images from a text description, where the text description is encoded and used to condition the generated image. The generative network thus has to additionally learn a mapping from the text latent vector space to a highly complex, multi-modal image data distribution, which makes training such models challenging. To handle the complexities of fashion images and metadata, we propose Ontology Generative Adversarial Networks (O-GANs) for fashion image synthesis, conditioned on a hierarchical fashion ontology in order to improve image generation fidelity. We show that incorporating the ontology leads to better image quality as measured by Fréchet Inception Distance and Inception Score. Additionally, we show that the O-GAN achieves better conditioning results, evaluated by implicit similarity between the text and the generated image. |
Tasks | Image Generation |
Published | 2019-05-16 |
URL | https://arxiv.org/abs/1905.06586v1 |
https://arxiv.org/pdf/1905.06586v1.pdf | |
PWC | https://paperswithcode.com/paper/on-conditioning-gans-to-hierarchical |
Repo | |
Framework | |
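One simple way to expose a hierarchical ontology to a conditional generator, sketched below with hypothetical names (the paper does not specify this exact encoding), is a multi-hot vector in which every ancestor on the ontology path sets a bit:

```python
def ontology_condition(path, vocab):
    # Hypothetical multi-hot encoding of a path through a fashion
    # ontology (e.g. clothing -> dress): each node on the path sets one
    # bit, so the condition vector exposes the hierarchy levels to the
    # generator rather than only the leaf category.
    vec = [0] * len(vocab)
    for node in path:
        vec[vocab[node]] = 1
    return vec
```

Sibling leaves then share their ancestors' bits, which lets the generator reuse what it has learned for related categories.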
Multi-Frame Content Integration with a Spatio-Temporal Attention Mechanism for Person Video Motion Transfer
Title | Multi-Frame Content Integration with a Spatio-Temporal Attention Mechanism for Person Video Motion Transfer |
Authors | Kun Cheng, Hao-Zhi Huang, Chun Yuan, Lingyiqing Zhou, Wei Liu |
Abstract | Existing person video generation methods either lack flexibility in controlling both appearance and motion, or fail to preserve detailed appearance and temporal consistency. In this paper, we tackle the problem of motion transfer for generating person videos, which provides control over both appearance and motion. Specifically, we transfer the motion of one person in a target video to another person in a source video, while preserving the appearance of the source person. Instead of relying on only one source frame as existing state-of-the-art methods do, our proposed method integrates information from multiple source frames based on a spatio-temporal attention mechanism to preserve rich appearance details. In addition to a spatial discriminator employed to encourage frame-level fidelity, a multi-range temporal discriminator is adopted to enforce that the generated video resembles the temporal dynamics of a real video across various time ranges. A challenging real-world dataset containing about 500 dancing video clips with complex and unpredictable motions was collected for training and testing. Extensive experiments show that the proposed method produces more photo-realistic and temporally consistent person videos than previous methods. As our method decomposes the synthesis of the foreground and background into two branches, a flexible background substitution application can also be achieved. |
Tasks | Video Generation |
Published | 2019-08-12 |
URL | https://arxiv.org/abs/1908.04013v1 |
https://arxiv.org/pdf/1908.04013v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-frame-content-integration-with-a-spatio |
Repo | |
Framework | |
Data Sampling for Graph Based Unsupervised Learning: Convex and Greedy Optimization
Title | Data Sampling for Graph Based Unsupervised Learning: Convex and Greedy Optimization |
Authors | Saeed Vahidian, Alexander Cloninger, Baharan Mirzasoleiman |
Abstract | In a number of situations, collecting a function value for every data point may be prohibitively expensive, and random sampling ignores any structure in the underlying data. We introduce a scalable optimization algorithm with no correction steps (in contrast to Frank-Wolfe and its variants), a variant of gradient ascent for coreset selection in graphs, that greedily selects a weighted subset of vertices deemed most important to sample. Our algorithm estimates the mean of the function by taking a weighted sum only at these vertices, and we provably bound the estimation error in terms of the locations and weights of the selected vertices in the graph. In addition, we consider the case where nodes have different selection costs and provide bounds on the quality of the low-cost selected coresets. We demonstrate the benefits of our algorithm on point clouds and structured graphs, as well as on sensor placement where the cost of placing a sensor depends on its location. We also show empirically that our proposed method converges faster than random selection and various clustering methods while still respecting sensor placement cost. The paper concludes with validation of the developed algorithm on both synthetic and real datasets, demonstrating that it performs very well compared to the current state of the art. |
Tasks | |
Published | 2019-06-03 |
URL | https://arxiv.org/abs/1906.01021v2 |
https://arxiv.org/pdf/1906.01021v2.pdf | |
PWC | https://paperswithcode.com/paper/data-sampling-for-graph-based-unsupervised |
Repo | |
Framework | |
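The core idea, selecting a small weighted set of vertices whose function values summarize the whole dataset, can be illustrated with a much simpler heuristic than the paper's gradient-ascent method. Below is greedy farthest-point selection on 1-D data, a common coreset baseline offered here only as an assumed stand-in:

```python
def greedy_coreset(points, k):
    # Greedy farthest-point selection: repeatedly add the point that is
    # farthest from the current selection, spreading samples across the
    # data's structure instead of sampling uniformly at random.
    # (The paper's actual algorithm is a gradient-ascent variant with
    # provable error bounds and per-node selection costs.)
    selected = [0]
    while len(selected) < k:
        farthest = max(
            range(len(points)),
            key=lambda i: min(abs(points[i] - points[j]) for j in selected),
        )
        selected.append(farthest)
    return sorted(selected)
```

On clustered data this picks one representative per cluster, which is exactly the behavior random sampling cannot guarantee.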
QoS-Aware Machine Learning-based Multiple Resources Scheduling for Microservices in Cloud Environment
Title | QoS-Aware Machine Learning-based Multiple Resources Scheduling for Microservices in Cloud Environment |
Authors | Lei Liu |
Abstract | Microservices have come to dominate the modern cloud environment. To improve cost efficiency, multiple microservices are normally co-located on a server, so run-time resource scheduling becomes the pivot of QoS control. However, the scheduling exploration space grows rapidly with increasing server resources - cores, cache, bandwidth, etc. - and the diversity of microservices. Consequently, existing schedulers may not keep up with rapid changes in service demands. Moreover, we observe that resource cliffs exist in the scheduling space: they not only hurt exploration efficiency, making it difficult to converge to the optimal scheduling solution, but also cause severe QoS fluctuation. To overcome these problems, we propose a novel machine learning-based scheduling mechanism called OSML. It takes resources and runtime states as input and employs two MLP models and a reinforcement learning model to explore the scheduling space. OSML can thus reach an optimal solution much faster than traditional approaches. More importantly, it can automatically detect resource cliffs and avoid them during exploration. To verify the effectiveness of OSML and obtain a well-generalized model, we collected a dataset containing over 2 billion samples from 11 typical microservices running on real servers over 9 months. Under the same QoS constraint, experimental results show that OSML outperforms the state-of-the-art work and schedules around 5 times faster. |
Tasks | |
Published | 2019-11-26 |
URL | https://arxiv.org/abs/1911.13208v2 |
https://arxiv.org/pdf/1911.13208v2.pdf | |
PWC | https://paperswithcode.com/paper/qos-aware-machine-learning-based-multiple |
Repo | |
Framework | |
Ontologies-based Architecture for Sociocultural Knowledge Co-Construction Systems
Title | Ontologies-based Architecture for Sociocultural Knowledge Co-Construction Systems |
Authors | Guidedi Kaladzavi, Papa Fary Diallo, Cedric Béré, Olivier Corby, Isabelle Mirbel, Moussa Lo, Dina Taiwe Kolyang |
Abstract | Considering the evolution of semantic-wiki-engine-based platforms, two main approaches can be distinguished: Ontologies for Wikis (OfW) and Wikis for Ontologies (WfO). The OfW vision requires existing ontologies to be imported. Most such systems use RDF (Resource Description Framework) in conjunction with a standard SQL (Structured Query Language) database to manage and query semantic data. However, a relational database is not an ideal store for semantic data. A more natural data model for SMW (Semantic MediaWiki) is RDF, a data format that organizes information in graphs rather than in fixed database tables. This paper presents an ontology-based architecture that aims to implement this idea. The architecture comprises three functional layers: a Web User Interface Layer, a Semantic Layer, and a Persistence Layer. |
Tasks | |
Published | 2019-04-11 |
URL | http://arxiv.org/abs/1904.05596v1 |
http://arxiv.org/pdf/1904.05596v1.pdf | |
PWC | https://paperswithcode.com/paper/ontologies-based-architecture-for |
Repo | |
Framework | |
SWift – A SignWriting improved fast transcriber
Title | SWift – A SignWriting improved fast transcriber |
Authors | Claudia S. Bianchini, Fabrizio Borgia, Paolo Bottoni, Maria de Marsico |
Abstract | We present SWift (SignWriting improved fast transcriber), an advanced editor for computer-aided writing and transcribing using SignWriting (SW). SW is devised to allow deaf people and linguists alike to exploit an easy-to-grasp written form of (any) sign language. Similarly, SWift has been developed for everyone who masters SW, and is not exclusively deaf-oriented. Using SWift, it is possible to compose and save any sign, using elementary components called glyphs. A guided procedure facilitates the composition process. SWift is aimed at helping to break down the “electronic” barriers that keep the deaf community away from Information and Communication Technology (ICT). The editor has been developed modularly and can be integrated everywhere the use of SW, as an alternative to written vocal language, may be advisable. |
Tasks | |
Published | 2019-11-25 |
URL | https://arxiv.org/abs/1911.10882v1 |
https://arxiv.org/pdf/1911.10882v1.pdf | |
PWC | https://paperswithcode.com/paper/swift-a-signwriting-improved-fast-transcriber |
Repo | |
Framework | |
An iterative scheme for feature based positioning using a weighted dissimilarity measure
Title | An iterative scheme for feature based positioning using a weighted dissimilarity measure |
Authors | Caifa Zhou, Andreas Wieser |
Abstract | We propose an iterative scheme for feature-based positioning using a new weighted dissimilarity measure, with the goal of reducing the impact of large errors among the measured or modeled features. The weights are computed from the location-dependent standard deviations of the features and stored as part of the reference fingerprint map (RFM). Spatial filtering and kernel smoothing of the kinematically collected raw data allow the standard deviations to be estimated efficiently during RFM generation. In the positioning stage, the weights control the contribution of each feature to the dissimilarity measure, which in turn quantifies the difference between the set of online measured features and the fingerprints stored in the RFM. Features with little variability contribute more to the estimated position than features with high variability. Iterations are necessary because the variability depends on the location, and the location is initially unknown when estimating the position. Using real WiFi signal strength data from extended test measurements with ground truth in an office building, we show that the standard deviations of these features vary considerably within the region of interest and are neither simple functions of the signal strength nor of the distances from the corresponding access points. This motivates including the empirical standard deviations in the RFM. We then analyze the deviations of the estimated positions with and without the location-dependent weighting. In the present example, the maximum radial positioning error from ground truth is reduced by 40% compared to kNN without the weighted dissimilarity measure. |
Tasks | |
Published | 2019-05-20 |
URL | https://arxiv.org/abs/1905.08022v2 |
https://arxiv.org/pdf/1905.08022v2.pdf | |
PWC | https://paperswithcode.com/paper/an-iterative-scheme-for-feature-based |
Repo | |
Framework | |
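The weighting idea above, dividing each feature residual by its location-dependent standard deviation so that noisy features count less, can be sketched as follows. Fingerprint positions and values here are made up for illustration; the paper's RFM is built from kinematically collected WiFi data:

```python
import math

# Hypothetical two-fingerprint RFM: each entry stores per-feature means
# and standard deviations (e.g. WiFi RSS in dBm from two access points).
RFM = {
    (0.0, 0.0): {"mean": [-40.0, -70.0], "std": [2.0, 8.0]},
    (5.0, 0.0): {"mean": [-70.0, -40.0], "std": [2.0, 8.0]},
}

def weighted_dissimilarity(measured, mean, std):
    # Residuals divided by sigma: a stable feature (small std) that
    # deviates pulls the dissimilarity up far more than a noisy one.
    return math.sqrt(sum(((m - mu) / s) ** 2
                         for m, mu, s in zip(measured, mean, std)))

def estimate_position(measured):
    # Nearest-fingerprint estimate under the weighted measure (the paper
    # iterates this because sigma itself depends on the unknown location).
    return min(RFM, key=lambda p: weighted_dissimilarity(
        measured, RFM[p]["mean"], RFM[p]["std"]))
```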
Learning a Matching Model with Co-teaching for Multi-turn Response Selection in Retrieval-based Dialogue Systems
Title | Learning a Matching Model with Co-teaching for Multi-turn Response Selection in Retrieval-based Dialogue Systems |
Authors | Jiazhan Feng, Chongyang Tao, Wei Wu, Yansong Feng, Dongyan Zhao, Rui Yan |
Abstract | We study learning a matching model for response selection in retrieval-based dialogue systems. The problem is as important as designing the architecture of a model, but is less explored in the existing literature. To learn a robust matching model from noisy training data, we propose a general co-teaching framework with three specific teaching strategies that cover both teaching with loss functions and teaching with data curriculum. Under the framework, we simultaneously learn two matching models with independent training sets. In each iteration, one model transfers the knowledge learned from its training set to the other model, and at the same time receives guidance from the other model on how to overcome noise in training. By acting as both teacher and student, the two models learn from each other and improve together. Evaluation results on two public data sets indicate that the proposed learning approach can generally and significantly improve the performance of existing matching models. |
Tasks | |
Published | 2019-06-11 |
URL | https://arxiv.org/abs/1906.04413v1 |
https://arxiv.org/pdf/1906.04413v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-a-matching-model-with-co-teaching |
Repo | |
Framework | |
On Residual Networks Learning a Perturbation from Identity
Title | On Residual Networks Learning a Perturbation from Identity |
Authors | Michael Hauser |
Abstract | The purpose of this work is to test and study the hypothesis that residual networks are learning a perturbation from identity. Residual networks are enormously important deep learning models, with many theories attempting to explain how they function; learning a perturbation from identity is one such theory. To answer this question, the magnitudes of the perturbations are measured both in an absolute sense and in a scaled sense, with each form having its relative benefits and drawbacks. Additionally, a stopping rule is developed that can be used to decide the depth of the residual network based on the average perturbation magnitude being less than a given epsilon. This analysis yields a better understanding of how residual networks process and transform data from input to output. Parallel experiments are conducted on MNIST as well as CIFAR10 for various sized residual networks with between 6 and 300 residual blocks. It is found that, in this setting, the average scaled perturbation magnitude is roughly inversely proportional to the number of residual blocks, and from this it follows that sufficiently large residual networks are learning a perturbation from identity. |
Tasks | |
Published | 2019-02-11 |
URL | http://arxiv.org/abs/1902.04106v1 |
http://arxiv.org/pdf/1902.04106v1.pdf | |
PWC | https://paperswithcode.com/paper/on-residual-networks-learning-a-perturbation |
Repo | |
Framework | |
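The scaled perturbation magnitude discussed above can be computed per residual block as the norm of the residual branch output relative to the norm of the block input. A minimal sketch (vectors as plain Python lists; the paper measures this over trained networks, not toy vectors):

```python
import math

def scaled_perturbation(x, fx):
    # ||F(x)|| / ||x||: the residual branch output F(x) measured relative
    # to the block input x (the block computes x + F(x)). Small values
    # support the view that each block is only a perturbation of the
    # identity map.
    norm = lambda v: math.sqrt(sum(t * t for t in v))
    return norm(fx) / norm(x)
```

Averaging this quantity over blocks, and stopping once it falls below a chosen epsilon, is the essence of the paper's depth-selection rule.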
Examining Adversarial Learning against Graph-based IoT Malware Detection Systems
Title | Examining Adversarial Learning against Graph-based IoT Malware Detection Systems |
Authors | Ahmed Abusnaina, Aminollah Khormali, Hisham Alasmary, Jeman Park, Afsah Anwar, Ulku Meteriz, Aziz Mohaisen |
Abstract | The main goal of this study is to investigate the robustness of graph-based Deep Learning (DL) models used for Internet of Things (IoT) malware classification against Adversarial Learning (AL). We designed two approaches to craft adversarial IoT software: Off-the-Shelf Adversarial Attack (OSAA) methods, using six different AL attack approaches, and Graph Embedding and Augmentation (GEA). The GEA approach aims to preserve the functionality and practicality of the generated adversarial sample through careful embedding of a benign sample into a malicious one. Our evaluations demonstrate that OSAAs are able to achieve a misclassification rate (MR) of 100%. Moreover, we observed that the GEA approach is able to misclassify all IoT malware samples as benign. |
Tasks | Adversarial Attack, Graph Embedding, Malware Classification, Malware Detection |
Published | 2019-02-12 |
URL | http://arxiv.org/abs/1902.04416v2 |
http://arxiv.org/pdf/1902.04416v2.pdf | |
PWC | https://paperswithcode.com/paper/examining-adversarial-learning-against-graph |
Repo | |
Framework | |
NL-LinkNet: Toward Lighter but More Accurate Road Extraction with Non-Local Operations
Title | NL-LinkNet: Toward Lighter but More Accurate Road Extraction with Non-Local Operations |
Authors | Yooseung Wang, Junghoon Seo, Taegyun Jeon |
Abstract | Road extraction from very high resolution satellite images is one of the most important topics in the field of remote sensing. For the road segmentation problem, spatial properties of the data can usually be captured using Convolutional Neural Networks. However, this approach only considers a few local neighborhoods at a time and has difficulty capturing long-range dependencies. To overcome this problem, we propose Non-Local LinkNet with non-local blocks that can grasp relations between global features. It enables each spatial feature point to refer to all other contextual information and results in more accurate road segmentation. Our method achieved a 65.00% mIOU score on the DeepGlobe 2018 Road Extraction Challenge dataset. Our best model outperformed D-LinkNet, the 1st-ranked solution, by a significant margin of 0.88% mIOU with far fewer parameters. We also present empirical analyses of the proper usage of non-local blocks in the baseline model. |
Tasks | |
Published | 2019-08-22 |
URL | https://arxiv.org/abs/1908.08223v1 |
https://arxiv.org/pdf/1908.08223v1.pdf | |
PWC | https://paperswithcode.com/paper/nl-linknet-toward-lighter-but-more-accurate |
Repo | |
Framework | |
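The non-local block described above lets every position attend to every other position. A minimal 1-D scalar sketch of the operation (dot-product similarity with softmax normalisation, identity embeddings; the real block uses learned 1x1-convolution embeddings over 2-D feature maps):

```python
import math

def non_local_1d(x):
    # y_i = sum_j softmax_j(x_i * x_j) * x_j: each output position
    # aggregates information from ALL positions, which is what lets
    # non-local blocks capture the long-range dependencies that small
    # convolution kernels miss.
    y = []
    for xi in x:
        sims = [xi * xj for xj in x]
        m = max(sims)                          # stabilise the softmax
        w = [math.exp(s - m) for s in sims]
        z = sum(w)
        y.append(sum(wj * xj for wj, xj in zip(w, x)) / z)
    return y
```

For a constant input the attention weights are uniform and the output equals the input, i.e. the block reduces to a global average in that degenerate case.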
Methods of Weighted Combination for Text Field Recognition in a Video Stream
Title | Methods of Weighted Combination for Text Field Recognition in a Video Stream |
Authors | Olga Petrova, Konstantin Bulatov, Vladimir L. Arlazarov |
Abstract | Due to a noticeable expansion of document recognition applicability, there is high demand for recognition on mobile devices. A mobile camera, unlike a scanner, cannot always ensure the absence of various image distortions, so the task of improving recognition precision is relevant. The advantage of mobile devices over scanners is the ability to use video stream input, which allows multiple images of a recognized document to be captured. Despite this, insufficient attention is currently paid to the issue of combining recognition results obtained from different frames when using video stream input. In this paper we propose a weighted combination method for text string recognition results along with weighting criteria, and provide experimental data verifying their validity and effectiveness. Based on the obtained results, we conclude that such a weighted combination is appropriate for improving the quality of video stream recognition results. |
Tasks | |
Published | 2019-11-27 |
URL | https://arxiv.org/abs/1911.12028v1 |
https://arxiv.org/pdf/1911.12028v1.pdf | |
PWC | https://paperswithcode.com/paper/methods-of-weighted-combination-for-text |
Repo | |
Framework | |
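A minimal sketch of the weighted-combination idea: each frame contributes a recognized string with a weight (e.g. a frame-quality or confidence criterion), and characters are chosen by weighted vote per position. This assumes equal-length strings, a simplification of the general string-combination setting the paper addresses:

```python
from collections import defaultdict

def combine_frames(results):
    # results: list of (recognised_string, weight) pairs from successive
    # video frames. For each character position, take the character with
    # the highest total weight, so good frames dominate distorted ones.
    out = []
    for pos in range(len(results[0][0])):
        votes = defaultdict(float)
        for text, weight in results:
            votes[text[pos]] += weight
        out.append(max(votes, key=votes.get))
    return "".join(out)
```

With unit weights this degenerates to plain majority voting; the paper's contribution is choosing non-trivial weighting criteria.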
SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization
Title | SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization |
Authors | Bogdan Gliwa, Iwona Mochol, Maciej Biesek, Aleksander Wawer |
Abstract | This paper introduces the SAMSum Corpus, a new dataset with abstractive dialogue summaries. We investigate the challenges it poses for automated summarization by testing several models and comparing their results with those obtained on a corpus of news articles. We show that model-generated summaries of dialogues achieve higher ROUGE scores than model-generated summaries of news – in contrast with human evaluators’ judgement. This suggests that the challenging task of abstractive dialogue summarization requires dedicated models and non-standard quality measures. To our knowledge, our study is the first attempt to introduce a high-quality chat-dialogue corpus, manually annotated with abstractive summaries, which can be used by the research community for further studies. |
Tasks | Abstractive Text Summarization |
Published | 2019-11-27 |
URL | https://arxiv.org/abs/1911.12237v2 |
https://arxiv.org/pdf/1911.12237v2.pdf | |
PWC | https://paperswithcode.com/paper/samsum-corpus-a-human-annotated-dialogue-1 |
Repo | |
Framework | |
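The ROUGE scores the abstract compares against human judgement are n-gram overlap metrics. A simplified sketch of ROUGE-1 recall (real ROUGE implementations add stemming, stopword options, and precision/F-measure variants):

```python
from collections import Counter

def rouge1_recall(reference, candidate):
    # Share of reference unigrams that the candidate summary recovers.
    # Counter intersection via min() handles repeated words correctly.
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum(min(count, ref[word]) for word, count in cand.items())
    return overlap / sum(ref.values())
```

High overlap with a reference does not guarantee a summary reads well, which is one reason dialogue summaries can score well on ROUGE while humans judge them worse.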