Paper Group AWR 285
Deep Fusion Network for Image Completion. Contextual Imagined Goals for Self-Supervised Robotic Learning. Noise Robust Generative Adversarial Networks. Generative Collaborative Networks for Single Image Super-Resolution. How Can We Make GAN Perform Better in Single Medical Image Super-Resolution? A Lesion Focused Multi-Scale Approach. RaFM: Rank-Aware Factorization Machines. The Area of the Convex Hull of Sampled Curves: a Robust Functional Statistical Depth Measure. Feature Robustness in Non-stationary Health Records: Caveats to Deployable Model Performance in Common Clinical Machine Learning Tasks. Interpretable Outcome Prediction with Sparse Bayesian Neural Networks in Intensive Care. Learning single-image 3D reconstruction by generative modelling of shape, pose and shading. Knowledge-based Analysis for Mortality Prediction from CT Images. ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU. AdaptIS: Adaptive Instance Selection Network. Learning Representation Mapping for Relation Detection in Knowledge Base Question Answering. Cross-topic distributional semantic representations via unsupervised mappings.
Deep Fusion Network for Image Completion
Title | Deep Fusion Network for Image Completion |
Authors | Xin Hong, Pengfei Xiong, Renhe Ji, Haoqiang Fan |
Abstract | Deep image completion usually fails to harmonically blend the restored image into existing content, especially in the boundary area. This paper addresses this problem from a new perspective of creating a smooth transition and proposes a concise Deep Fusion Network (DFNet). Firstly, a fusion block is introduced to generate a flexible alpha composition map for combining known and unknown regions. The fusion block not only provides a smooth fusion between restored and existing content, but also provides an attention map that makes the network focus more on the unknown pixels. In this way, it builds a bridge for structural and texture information, so that information can be naturally propagated from the known region into the completed region. Furthermore, fusion blocks are embedded into several decoder layers of the network. Accompanied by the adjustable loss constraints on each layer, more accurate structural information is obtained. We qualitatively and quantitatively compare our method with other state-of-the-art methods on the Places2 and CelebA datasets. The results show the superior performance of DFNet, especially in the aspects of harmonious texture transition, texture detail and semantic structural consistency. Our source code will be available at: \url{https://github.com/hughplay/DFNet} |
Tasks | Image Inpainting |
Published | 2019-04-17 |
URL | http://arxiv.org/abs/1904.08060v1 |
http://arxiv.org/pdf/1904.08060v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-fusion-network-for-image-completion |
Repo | https://github.com/hughplay/DFNet |
Framework | pytorch |
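
A minimal PyTorch sketch of the fusion-block idea the abstract describes: a decoder feature map is turned into a raw completion plus an alpha composition map that blends it with the known content. Layer shapes and names here are assumptions for illustration, not the released DFNet code.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Sketch of an alpha-composition fusion block (assumed layout, not the official DFNet code)."""
    def __init__(self, feat_channels):
        super().__init__()
        # predict an RGB "raw" completion and a single-channel alpha map from decoder features
        self.to_rgb = nn.Conv2d(feat_channels, 3, kernel_size=3, padding=1)
        self.to_alpha = nn.Conv2d(feat_channels, 1, kernel_size=3, padding=1)

    def forward(self, feat, image, mask):
        # image: input with known content, mask: 1 = known pixel, 0 = hole
        raw = torch.tanh(self.to_rgb(feat))
        alpha = torch.sigmoid(self.to_alpha(feat))
        # keep known pixels where the alpha map (and the mask) say so, fill holes with the prediction
        return alpha * mask * image + (1 - alpha * mask) * raw
```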
Contextual Imagined Goals for Self-Supervised Robotic Learning
Title | Contextual Imagined Goals for Self-Supervised Robotic Learning |
Authors | Ashvin Nair, Shikhar Bahl, Alexander Khazatsky, Vitchyr Pong, Glen Berseth, Sergey Levine |
Abstract | While reinforcement learning provides an appealing formalism for learning individual skills, a general-purpose robotic system must be able to master an extensive repertoire of behaviors. Instead of learning a large collection of skills individually, can we instead enable a robot to propose and practice its own behaviors automatically, learning about the affordances and behaviors that it can perform in its environment, such that it can then repurpose this knowledge once a new task is commanded by the user? In this paper, we study this question in the context of self-supervised goal-conditioned reinforcement learning. A central challenge in this learning regime is the problem of goal setting: in order to practice useful skills, the robot must be able to autonomously set goals that are feasible but diverse. When the robot’s environment and available objects vary, as they do in most open-world settings, the robot must propose to itself only those goals that it can accomplish in its present setting with the objects that are at hand. Previous work only studies self-supervised goal-conditioned RL in a single-environment setting, where goal proposals drawn from the robot’s past experience or a generative model are sufficient. In more diverse settings, this frequently leads to impossible goals and, as we show experimentally, prevents effective learning. We propose a conditional goal-setting model that aims to propose goals that are feasible from the robot’s current state. We demonstrate that this enables self-supervised goal-conditioned off-policy learning with raw image observations in the real world, enabling a robot to manipulate a variety of objects and generalize to new objects that were not seen during training. |
Tasks | |
Published | 2019-10-23 |
URL | https://arxiv.org/abs/1910.11670v1 |
https://arxiv.org/pdf/1910.11670v1.pdf | |
PWC | https://paperswithcode.com/paper/contextual-imagined-goals-for-self-supervised |
Repo | https://github.com/anair13/rlkit/tree/ccrig/examples/ccrig |
Framework | none |
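
A rough sketch of what a conditional goal-setting model can look like: goals are decoded from prior samples, but conditioned on an encoding of the current observation, so only scene-consistent goals get proposed. The architecture below (sizes, layer names) is an assumption for illustration, not the authors' rlkit implementation.

```python
import torch
import torch.nn as nn

class ConditionalGoalSampler(nn.Module):
    """Minimal sketch of a context-conditioned goal proposer (sizes are assumptions)."""
    def __init__(self, obs_dim=64, latent_dim=8):
        super().__init__()
        self.context_enc = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, 32))
        self.decoder = nn.Sequential(nn.Linear(latent_dim + 32, 128), nn.ReLU(), nn.Linear(128, obs_dim))
        self.latent_dim = latent_dim

    def propose_goal(self, current_obs):
        # goals are sampled from a prior over latents but decoded *conditioned on the current
        # observation*, so proposals stay consistent with the objects actually present
        ctx = self.context_enc(current_obs)
        z = torch.randn(current_obs.shape[0], self.latent_dim)
        return self.decoder(torch.cat([z, ctx], dim=-1))
```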
Noise Robust Generative Adversarial Networks
Title | Noise Robust Generative Adversarial Networks |
Authors | Takuhiro Kaneko, Tatsuya Harada |
Abstract | Generative adversarial networks (GANs) are neural networks that learn data distributions through adversarial training. In intensive studies, recent GANs have shown promising results for reproducing training images. However, when the training images are contaminated with noise, they faithfully reproduce that noise as well. As an alternative, we propose a novel family of GANs called noise robust GANs (NR-GANs), which can learn a clean image generator even when training images are noisy. In particular, NR-GANs can solve this problem without having complete noise information (e.g., the noise distribution type, noise amount, or signal-noise relationship). To achieve this, we introduce a noise generator and train it along with a clean image generator. However, without any constraints, there is no incentive to generate an image and noise separately. Therefore, we propose distribution and transformation constraints that encourage the noise generator to capture only the noise-specific components. In particular, considering such constraints under different assumptions, we devise two variants of NR-GANs for signal-independent noise and three variants of NR-GANs for signal-dependent noise. On three benchmark datasets, we demonstrate the effectiveness of NR-GANs in noise robust image generation. Furthermore, we show the applicability of NR-GANs in image denoising. Our code is available at https://github.com/takuhirok/NR-GAN/. |
Tasks | Denoising, Image Denoising, Image Generation |
Published | 2019-11-26 |
URL | https://arxiv.org/abs/1911.11776v2 |
https://arxiv.org/pdf/1911.11776v2.pdf | |
PWC | https://paperswithcode.com/paper/noise-robust-generative-adversarial-networks |
Repo | https://github.com/takuhirok/NR-GAN |
Framework | pytorch |
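
The core decomposition for the signal-independent case can be sketched in a few lines: a clean-image generator and a noise generator are sampled independently, and only their sum is shown to the discriminator. The constraints that force the split are omitted; `g_clean` and `g_noise` below are assumed generator modules, not the released code.

```python
import torch

def fake_noisy_image(g_clean, g_noise, batch_size, z_dim=128):
    """Sketch of the signal-independent NR-GAN forward pass: the discriminator only ever sees
    the sum, so the (omitted) constraints on g_noise are what force the decomposition."""
    z_img = torch.randn(batch_size, z_dim)
    z_noise = torch.randn(batch_size, z_dim)
    clean = g_clean(z_img)        # intended to converge to a clean-image generator
    noise = g_noise(z_noise)      # intended to capture only noise-specific components
    return clean + noise          # compared against noisy training images by the discriminator
```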
Generative Collaborative Networks for Single Image Super-Resolution
Title | Generative Collaborative Networks for Single Image Super-Resolution |
Authors | Mohamed El Amine Seddik, Mohamed Tamaazousti, John Lin |
Abstract | A common issue of deep neural network-based methods for the problem of Single Image Super-Resolution (SISR) is the recovery of finer texture details when super-resolving at large upscaling factors. This issue is particularly related to the choice of the objective loss function. In particular, recent works proposed the use of a VGG loss which consists in minimizing the error between the generated high resolution images and ground-truth in the feature space of a Convolutional Neural Network (VGG19), pre-trained on the very “large” ImageNet dataset. When considering the problem of super-resolving images with a distribution “far” from the ImageNet images distribution (\textit{e.g.,} satellite images), their proposed \textit{fixed} VGG loss is no longer relevant. In this paper, we present a general framework named \textit{Generative Collaborative Networks} (GCN), where the idea consists in optimizing the \textit{generator} (the mapping of interest) in the feature space of a \textit{features extractor} network. The two networks (generator and extractor) are \textit{collaborative} in the sense that the latter “helps” the former, by constructing discriminative and relevant features (not necessarily \textit{fixed} and possibly learned \textit{mutually} with the generator). We evaluate the GCN framework in the context of SISR, and we show that it results in a method that is adapted to super-resolution domains that are “far” from the ImageNet domain. |
Tasks | Image Super-Resolution, Super-Resolution |
Published | 2019-02-27 |
URL | http://arxiv.org/abs/1902.10467v2 |
http://arxiv.org/pdf/1902.10467v2.pdf | |
PWC | https://paperswithcode.com/paper/generative-collaborative-networks-for-single |
Repo | https://github.com/melaseddik/GCN |
Framework | none |
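
The central idea, measuring the super-resolution error in the feature space of an extractor that is trained collaboratively rather than kept fixed, can be condensed into a one-function sketch (both networks are assumed to be defined elsewhere; this is not the authors' code).

```python
import torch.nn.functional as F

def collaborative_loss(generator, extractor, lr_batch, hr_batch):
    """Sketch of the GCN objective: the SR error is measured in the feature space of an extractor
    that is itself trainable, unlike a fixed ImageNet-pretrained VGG."""
    sr = generator(lr_batch)
    return F.mse_loss(extractor(sr), extractor(hr_batch))
```

In training, a loss of this form would update the generator while the extractor is updated with its own objective, so the feature space stays relevant to the target image domain.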
How Can We Make GAN Perform Better in Single Medical Image Super-Resolution? A Lesion Focused Multi-Scale Approach
Title | How Can We Make GAN Perform Better in Single Medical Image Super-Resolution? A Lesion Focused Multi-Scale Approach |
Authors | Jin Zhu, Guang Yang, Pietro Lio |
Abstract | Single image super-resolution (SISR) is of great importance as a low-level computer vision task. The fast development of Generative Adversarial Network (GAN) based deep learning architectures realises an efficient and effective SISR to boost the spatial resolution of natural images captured by digital cameras. However, the SISR for medical images is still a very challenging problem. This is due to (1) compared to natural images, in general, medical images have lower signal-to-noise ratios, (2) GAN based models pre-trained on natural images may synthesise unrealistic patterns in medical images which could affect the clinical interpretation and diagnosis, and (3) the vanilla GAN architecture may suffer from unstable training and mode collapse that can also affect the SISR results. In this paper, we propose a novel lesion focused SR (LFSR) method, which incorporates GAN to achieve perceptually realistic SISR results for brain tumour MRI images. More importantly, we test and compare recently developed GAN variants, e.g., Wasserstein GAN (WGAN) and WGAN with Gradient Penalty (WGAN-GP), and propose a novel multi-scale GAN (MS-GAN), to achieve more stable and efficient training and improved perceptual quality of the super-resolved results. Based on both quantitative evaluations and our designed mean opinion score, the proposed LFSR coupled with MS-GAN performs better in terms of both perceptual quality and efficiency. |
Tasks | Image Super-Resolution, Super-Resolution |
Published | 2019-01-10 |
URL | http://arxiv.org/abs/1901.03419v1 |
http://arxiv.org/pdf/1901.03419v1.pdf | |
PWC | https://paperswithcode.com/paper/how-can-we-make-gan-perform-better-in-single |
Repo | https://github.com/GinZhu/MSGAN |
Framework | none |
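
One plausible reading of the multi-scale training signal is sketched below: the generator emits an intermediate image at each scale, and each is penalised against an appropriately downsampled ground truth. The adversarial terms, loss weights, and the lesion-detection stage of LFSR are omitted; this is illustrative only, not the released MSGAN code.

```python
import torch.nn.functional as F

def multi_scale_content_loss(outputs, hr):
    """Sketch of multi-scale supervision: `outputs` is a list of super-resolved images at
    increasing scales (e.g. x2, x4) and each is compared to a resized ground-truth image."""
    loss = 0.0
    for out in outputs:
        target = F.interpolate(hr, size=out.shape[-2:], mode='bilinear', align_corners=False)
        loss = loss + F.mse_loss(out, target)
    return loss
```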
RaFM: Rank-Aware Factorization Machines
Title | RaFM: Rank-Aware Factorization Machines |
Authors | Xiaoshuang Chen, Yin Zheng, Jiaxing Wang, Wenye Ma, Junzhou Huang |
Abstract | Factorization machines (FM) are a popular model class to learn pairwise interactions by a low-rank approximation. Different from existing FM-based approaches which use a fixed rank for all features, this paper proposes a Rank-Aware FM (RaFM) model which models pairwise interactions using embeddings with different ranks. The proposed model achieves a better performance on real-world datasets where different features have significantly varying frequencies of occurrences. Moreover, we prove that the RaFM model can be stored, evaluated, and trained as efficiently as one single FM, and under some reasonable conditions it can be even significantly more efficient than FM. RaFM improves the performance of FMs in both regression tasks and classification tasks while incurring less computational burden, and therefore also has attractive potential in industrial applications. |
Tasks | |
Published | 2019-05-18 |
URL | https://arxiv.org/abs/1905.07570v1 |
https://arxiv.org/pdf/1905.07570v1.pdf | |
PWC | https://paperswithcode.com/paper/rafm-rank-aware-factorization-machines |
Repo | https://github.com/cxsmarkchan/RaFM |
Framework | tf |
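
For reference, the standard second-order factorization machine that RaFM generalises is, in the usual notation,

```latex
\hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i
  + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j,
  \qquad \mathbf{v}_i \in \mathbb{R}^{k}.
```

where every feature shares a single rank $k$. RaFM instead lets the dimension of $\mathbf{v}_i$ depend on how frequently feature $i$ occurs; the pairing rule for embeddings of different ranks, and the efficiency guarantees, are given in the paper.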
The Area of the Convex Hull of Sampled Curves: a Robust Functional Statistical Depth Measure
Title | The Area of the Convex Hull of Sampled Curves: a Robust Functional Statistical Depth Measure |
Authors | Guillaume Staerman, Pavlo Mozharovskyi, Stephan Clémençon |
Abstract | With the ubiquity of sensors in the IoT era, statistical observations are becoming increasingly available in the form of massive (multivariate) time-series. Formulated as unsupervised anomaly detection tasks, an abundance of applications like aviation safety management, the health monitoring of complex infrastructures or fraud detection can now rely on such functional data, acquired and stored with an ever finer granularity. The concept of statistical depth, which reflects the centrality of an arbitrary observation w.r.t. a statistical population, may play a crucial role in this regard, anomalies corresponding to observations with ‘small’ depth. Supported by sound theoretical and computational developments in the recent decades, it has proven to be extremely useful, in particular in functional spaces. However, most approaches documented in the literature consist in evaluating independently the centrality of each point forming the time series and consequently exhibit a certain insensitivity to possible shape changes. In this paper, we propose a novel notion of functional depth based on the area of the convex hull of sampled curves, capturing gradual departures from centrality, even beyond the envelope of the data, in a natural fashion. We discuss the practical relevance of commonly imposed axioms on functional depths and investigate which of them are satisfied by the notion of depth we promote here. Estimation and computational issues are also addressed, and various numerical experiments provide empirical evidence of the relevance of the proposed approach. |
Tasks | Anomaly Detection, Fraud Detection, Time Series, Unsupervised Anomaly Detection |
Published | 2019-10-09 |
URL | https://arxiv.org/abs/1910.04085v2 |
https://arxiv.org/pdf/1910.04085v2.pdf | |
PWC | https://paperswithcode.com/paper/the-area-of-the-convex-hull-of-sampled-curves |
Repo | https://github.com/Gstaerman/ACHD |
Framework | none |
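
A simplified illustration of the convex-hull-area intuition (not the estimator defined in the paper, whose sampling scheme and normalisation differ): each sampled curve contributes its graph points to a point cloud, and a curve is "deep" if adding it barely inflates the hull of the reference bundle.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_area(curves, t):
    """Area of the convex hull of the graph points {(t_j, x_i(t_j))} of a bundle of sampled curves."""
    pts = np.column_stack([np.tile(t, len(curves)), np.concatenate(curves)])
    return ConvexHull(pts).volume   # in 2-D, .volume is the area

def achd_score(curve, reference_curves, t):
    """Simplified reading of the ACHD idea: how much does adding `curve` inflate the hull of the
    reference bundle? Values near 1 indicate a central curve, small values an outlier."""
    base = hull_area(reference_curves, t)
    return base / hull_area(reference_curves + [curve], t)

t = np.linspace(0.0, 1.0, 50)
bundle = [np.sin(2 * np.pi * t) + 0.1 * np.random.randn(50) for _ in range(20)]
print(achd_score(np.sin(2 * np.pi * t), bundle, t))   # close to 1: a central curve
```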
Feature Robustness in Non-stationary Health Records: Caveats to Deployable Model Performance in Common Clinical Machine Learning Tasks
Title | Feature Robustness in Non-stationary Health Records: Caveats to Deployable Model Performance in Common Clinical Machine Learning Tasks |
Authors | Bret Nestor, Matthew B. A. McDermott, Willie Boag, Gabriela Berner, Tristan Naumann, Michael C. Hughes, Anna Goldenberg, Marzyeh Ghassemi |
Abstract | When training clinical prediction models from electronic health records (EHRs), a key concern should be a model’s ability to sustain performance over time when deployed, even as care practices, database systems, and population demographics evolve. Due to de-identification requirements, however, current experimental practices for public EHR benchmarks (such as the MIMIC-III critical care dataset) are time agnostic, assigning care records to train or test sets without regard for the actual dates of care. As a result, current benchmarks cannot assess how well models trained on one year generalise to another. In this work, we obtain a Limited Data Use Agreement to access year of care for each record in MIMIC and show that all tested state-of-the-art models decay in prediction quality when trained on historical data and tested on future data, particularly in response to a system-wide record-keeping change in 2008 (0.29 drop in AUROC for mortality prediction, 0.10 drop in AUROC for length-of-stay prediction with a random forest classifier). We further develop a simple yet effective mitigation strategy: by aggregating raw features into expert-defined clinical concepts, we see only a 0.06 drop in AUROC for mortality prediction and a 0.03 drop in AUROC for length-of-stay prediction. We demonstrate that this aggregation strategy outperforms other automatic feature preprocessing techniques aimed at increasing robustness to data drift. We release our aggregated representations and code to encourage more deployable clinical prediction models. |
Tasks | Length-of-Stay prediction, Mortality Prediction |
Published | 2019-08-02 |
URL | https://arxiv.org/abs/1908.00690v1 |
https://arxiv.org/pdf/1908.00690v1.pdf | |
PWC | https://paperswithcode.com/paper/feature-robustness-in-non-stationary-health |
Repo | https://github.com/MLforHealth/MIMIC_Generalisation |
Framework | pytorch |
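
The mitigation strategy is essentially a feature-space change, which can be sketched as a grouping step applied before any modelling. The item identifiers and concept names below are hypothetical placeholders; the actual concept definitions are in the released aggregated representations.

```python
import pandas as pd

# Hypothetical mapping from raw, source-specific item identifiers to clinical concepts.
ITEM_TO_CONCEPT = {
    "itemid_0042": "heart_rate",
    "itemid_1337": "heart_rate",      # same concept recorded under a different system/era
    "itemid_0777": "systolic_bp",
}

def aggregate_to_concepts(raw: pd.DataFrame) -> pd.DataFrame:
    """Collapse item-level measurements into concept-level features, so that a record-keeping
    change that renames items does not change the model's input space.
    `raw` is assumed to have columns ['stay_id', 'itemid', 'value']."""
    raw = raw.assign(concept=raw["itemid"].map(ITEM_TO_CONCEPT)).dropna(subset=["concept"])
    return raw.groupby(["stay_id", "concept"])["value"].mean().unstack("concept")
```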
Interpretable Outcome Prediction with Sparse Bayesian Neural Networks in Intensive Care
Title | Interpretable Outcome Prediction with Sparse Bayesian Neural Networks in Intensive Care |
Authors | Hiske Overweg, Anna-Lena Popkes, Ari Ercole, Yingzhen Li, José Miguel Hernández-Lobato, Yordan Zaykov, Cheng Zhang |
Abstract | Clinical decision making is challenging because of pathological complexity, as well as large amounts of heterogeneous data generated as part of routine clinical care. In recent years, machine learning tools have been developed to aid this process. Intensive care unit (ICU) admissions represent the most data dense and time-critical patient care episodes. In this context, prediction models may help clinicians determine which patients are most at risk and prioritize care. However, flexible tools such as artificial neural networks (ANNs) suffer from a lack of interpretability, limiting their acceptability to clinicians. In this work, we propose a novel interpretable Bayesian neural network architecture which offers both the flexibility of ANNs and interpretability in terms of feature selection. In particular, we employ a sparsity inducing prior distribution in a tied manner to learn which features are important for outcome prediction. We evaluate our approach on the task of mortality prediction using two real-world ICU cohorts. In collaboration with clinicians, we found that, in addition to the predicted outcome results, our approach can provide novel insights into the importance of different clinical measurements. This suggests that our model can support medical experts in their decision making process. |
Tasks | Decision Making, Feature Selection, Mortality Prediction |
Published | 2019-05-07 |
URL | https://arxiv.org/abs/1905.02599v2 |
https://arxiv.org/pdf/1905.02599v2.pdf | |
PWC | https://paperswithcode.com/paper/interpretable-outcome-prediction-with-sparse |
Repo | https://github.com/microsoft/horseshoe-bnn |
Framework | pytorch |
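
A loose sketch of the sparsity-inducing construction follows. It is a simplification under the assumption that one local scale per input feature is shared by all of that feature's outgoing weights, so an irrelevant feature can be switched off as a whole; the exact tied hierarchy and the variational inference procedure are in the paper and repo.

```python
import torch
from torch.distributions import HalfCauchy, Normal

def sample_first_layer(n_features, n_hidden):
    """Sketch of a horseshoe-style prior over the input layer: a global shrinkage scale times a
    per-feature local scale gives the standard deviation of every weight leaving that feature."""
    tau = HalfCauchy(torch.tensor(1.0)).sample()                 # global shrinkage
    lam = HalfCauchy(torch.ones(n_features)).sample()            # per-feature local shrinkage
    scales = (tau * lam).unsqueeze(1).expand(n_features, n_hidden)
    return Normal(torch.zeros(n_features, n_hidden), scales).sample()
```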
Learning single-image 3D reconstruction by generative modelling of shape, pose and shading
Title | Learning single-image 3D reconstruction by generative modelling of shape, pose and shading |
Authors | Paul Henderson, Vittorio Ferrari |
Abstract | We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, most existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to existing approaches, while also supporting weaker supervision. Importantly, it can be trained purely from 2D images, without pose annotations, and with only a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to reason over lighting parameters and exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach in various settings, showing that: (i) it learns to disentangle shape from pose and lighting; (ii) using shading in the loss improves performance compared to just silhouettes; (iii) when using a standard single white light, our model outperforms state-of-the-art 2D-supervised methods, both with and without pose supervision, thanks to exploiting shading cues; (iv) performance improves further when using multiple coloured lights, even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced by our model capture smooth surfaces and fine details better than voxel-based approaches; and (vi) our approach supports concave classes such as bathtubs and sofas, which methods based on silhouettes cannot learn. |
Tasks | 3D Reconstruction |
Published | 2019-01-19 |
URL | https://arxiv.org/abs/1901.06447v2 |
https://arxiv.org/pdf/1901.06447v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-single-image-3d-reconstruction-by |
Repo | https://github.com/pmh47/dirt |
Framework | tf |
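
A toy illustration of why shading is a useful training signal: under a simple Lambertian model, pixel intensity depends on surface orientation, so a reconstruction loss that includes shading sees shape cues that a pure silhouette loss throws away. This is illustration only, not the paper's differentiable renderer (the linked dirt repository).

```python
import numpy as np

def lambertian_shading(normals, albedo, light_dir, ambient=0.2):
    """Toy diffuse shading: normals (H, W, 3, unit length), albedo (H, W, 3), light_dir (3,)."""
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir /= np.linalg.norm(light_dir)
    diffuse = np.clip(normals @ light_dir, 0.0, None)            # brighter where the surface faces the light
    return albedo * (ambient + (1.0 - ambient) * diffuse)[..., None]
```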
Knowledge-based Analysis for Mortality Prediction from CT Images
Title | Knowledge-based Analysis for Mortality Prediction from CT Images |
Authors | Hengtao Guo, Uwe Kruger, Ge Wang, Mannudeep K. Kalra, Pingkun Yan |
Abstract | Recent studies have highlighted the high correlation between cardiovascular diseases (CVD) and lung cancer, and both are associated with significant morbidity and mortality. Low-Dose CT (LDCT) scans have led to significant improvements in the accuracy of lung cancer diagnosis and thus the reduction of cancer deaths. However, the high correlation between lung cancer and CVD has not been well explored for mortality prediction. This paper introduces a knowledge-based analytical method using a deep convolutional neural network (CNN) for all-cause mortality prediction. The underlying approach combines structural image features extracted from CNNs, based on LDCT volumes at different scales, and clinical knowledge obtained from quantitative measurements, to comprehensively predict the mortality risk of lung cancer screening subjects. The introduced method is referred to here as the Knowledge-based Analysis of Mortality Prediction Network, or KAMP-Net. It constitutes a collaborative framework that utilizes both imaging features and anatomical information, instead of completely relying on automatic feature extraction. Our work demonstrates the feasibility of incorporating quantitative clinical measurements to assist CNNs in all-cause mortality prediction from chest LDCT images. The results of this study confirm that radiologist-defined features are an important complement to CNNs to achieve a more comprehensive feature extraction. Thus, the proposed KAMP-Net has been shown to achieve superior performance when compared to other methods. Our code is available at https://github.com/DIAL-RPI/KAMP-Net. |
Tasks | Lung Cancer Diagnosis, Mortality Prediction |
Published | 2019-02-20 |
URL | https://arxiv.org/abs/1902.07687v2 |
https://arxiv.org/pdf/1902.07687v2.pdf | |
PWC | https://paperswithcode.com/paper/knowledge-based-analysis-for-mortality |
Repo | https://github.com/DIAL-RPI/KAMP-Net |
Framework | pytorch |
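
At its simplest, the fusion of learned image features with radiologist-defined measurements reduces to concatenation before a small prediction head. Dimensions and the image backbone below are assumptions, not the released KAMP-Net model.

```python
import torch
import torch.nn as nn

class ImagePlusClinicalHead(nn.Module):
    """Sketch of fusing CNN image features with hand-defined clinical measurements."""
    def __init__(self, image_backbone, img_feat_dim=512, n_clinical=8):
        super().__init__()
        self.backbone = image_backbone                       # any CNN returning (B, img_feat_dim)
        self.classifier = nn.Sequential(
            nn.Linear(img_feat_dim + n_clinical, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, ct_volume, clinical_measurements):
        feats = self.backbone(ct_volume)
        return torch.sigmoid(self.classifier(torch.cat([feats, clinical_measurements], dim=1)))
```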
ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU
Title | ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU |
Authors | William Caicedo-Torres, Jairo Gutierrez |
Abstract | To improve the performance of Intensive Care Units (ICUs), the field of bio-statistics has developed scores which try to predict the likelihood of negative outcomes. These help evaluate the effectiveness of treatments and clinical practice, and also help to identify patients with unexpected outcomes. However, they have been shown by several studies to offer sub-optimal performance. Alternatively, Deep Learning offers state of the art capabilities in certain prediction tasks and research suggests deep neural networks are able to outperform traditional techniques. Nevertheless, a main impediment for the adoption of Deep Learning in healthcare is its reduced interpretability, for in this field it is crucial to gain insight into the why of predictions, to assure that models are actually learning relevant features instead of spurious correlations. To address this, we propose a deep multi-scale convolutional architecture trained on the Medical Information Mart for Intensive Care III (MIMIC-III) for mortality prediction, and the use of concepts from coalitional game theory to construct visual explanations aimed at showing how important each input is deemed by the network. Our results show our model attains state of the art performance while remaining interpretable. Supporting code can be found at https://github.com/williamcaicedo/ISeeU. |
Tasks | Mortality Prediction |
Published | 2019-01-24 |
URL | http://arxiv.org/abs/1901.08201v1 |
http://arxiv.org/pdf/1901.08201v1.pdf | |
PWC | https://paperswithcode.com/paper/iseeu-visually-interpretable-deep-learning |
Repo | https://github.com/williamcaicedo/ISeeU |
Framework | tf |
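
A minimal sketch of a multi-scale temporal convolution block of the kind the abstract describes, with parallel branches of different receptive fields over the ICU time series. PyTorch is used here for brevity; the released ISeeU code is TensorFlow-based, and both the real architecture and the coalitional-game-theory (Shapley-value) explanation step differ from this sketch.

```python
import torch
import torch.nn as nn

class MultiScaleTemporalConv(nn.Module):
    """Sketch: parallel 1-D convolutions with different kernel sizes, pooled into a risk score."""
    def __init__(self, n_channels, n_filters=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(n_channels, n_filters, kernel_size=k, padding=k // 2) for k in (3, 5, 9)
        ])
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(3 * n_filters, 1))

    def forward(self, x):                     # x: (batch, n_channels, time)
        multi = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return torch.sigmoid(self.head(multi))
```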
AdaptIS: Adaptive Instance Selection Network
Title | AdaptIS: Adaptive Instance Selection Network |
Authors | Konstantin Sofiiuk, Olga Barinova, Anton Konushin |
Abstract | We present the Adaptive Instance Selection network architecture for class-agnostic instance segmentation. Given an input image and a point $(x, y)$, it generates a mask for the object located at $(x, y)$. The network adapts to the input point with the help of AdaIN layers, thus producing different masks for different objects on the same image. AdaptIS generates pixel-accurate object masks, therefore it accurately segments objects of complex shape or severely occluded ones. AdaptIS can be easily combined with a standard semantic segmentation pipeline to perform panoptic segmentation. To illustrate the idea, we perform experiments on a challenging toy problem with difficult occlusions. Then we extensively evaluate the method on panoptic segmentation benchmarks. We obtain state-of-the-art results on Cityscapes and Mapillary even without pretraining on COCO, and show competitive results on the challenging COCO dataset. The source code of the method and the trained models are available at https://github.com/saic-vul/adaptis. |
Tasks | Instance Segmentation, Panoptic Segmentation, Semantic Segmentation |
Published | 2019-09-17 |
URL | https://arxiv.org/abs/1909.07829v1 |
https://arxiv.org/pdf/1909.07829v1.pdf | |
PWC | https://paperswithcode.com/paper/adaptis-adaptive-instance-selection-network |
Repo | https://github.com/saic-vul/adaptis |
Framework | mxnet |
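
The point-conditioned AdaIN mechanism can be sketched as follows: the feature vector at the clicked location is mapped to per-channel scale and shift parameters that re-normalise the whole feature map, so a single network yields a different mask for each chosen object. Shapes and names are assumptions, and PyTorch is used for brevity (the repo is MXNet).

```python
import torch
import torch.nn as nn

class PointAdaIN(nn.Module):
    """Sketch of point-conditioned adaptive instance normalisation."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_scale_shift = nn.Linear(channels, 2 * channels)

    def forward(self, feat, x, y):
        point_vec = feat[:, :, y, x]                          # (B, C) feature at the query point
        scale, shift = self.to_scale_shift(point_vec).chunk(2, dim=1)
        return self.norm(feat) * (1 + scale[..., None, None]) + shift[..., None, None]
```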
Learning Representation Mapping for Relation Detection in Knowledge Base Question Answering
Title | Learning Representation Mapping for Relation Detection in Knowledge Base Question Answering |
Authors | Peng Wu, Shujian Huang, Rongxiang Weng, Zaixiang Zheng, Jianbing Zhang, Xiaohui Yan, Jiajun Chen |
Abstract | Relation detection is a core step in many natural language processing applications including knowledge base question answering. Previous efforts show that single-fact questions could be answered with high accuracy. However, one critical problem is that current approaches only get high accuracy for questions whose relations have been seen in the training data. But for unseen relations, the performance will drop rapidly. The main reason for this problem is that the representations for unseen relations are missing. In this paper, we propose a simple mapping method, named representation adapter, to learn the representation mapping for both seen and unseen relations based on previously learned relation embeddings. We employ the adversarial objective and the reconstruction objective to improve the mapping performance. We re-organize the popular SimpleQuestions dataset to reveal and evaluate the problem of detecting unseen relations. Experiments show that our method can greatly improve the performance on unseen relations while keeping the performance on seen relations comparable to the state of the art. Our code and data are available at https://github.com/wudapeng268/KBQA-Adapter. |
Tasks | Knowledge Base Question Answering, Question Answering |
Published | 2019-07-17 |
URL | https://arxiv.org/abs/1907.07328v1 |
https://arxiv.org/pdf/1907.07328v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-representation-mapping-for-relation |
Repo | https://github.com/wudapeng268/KBQA-Adapter |
Framework | tf |
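
One reading of the representation adapter, sketched with the adversarial term omitted: a learned map takes the question-side relation representation into the pretrained relation-embedding space, with a reconstruction objective that keeps the mapping information-preserving. The dimensions and the mapping direction as written here are assumptions, not the authors' exact formulation.

```python
import torch.nn as nn
import torch.nn.functional as F

class RepresentationAdapter(nn.Module):
    """Sketch: map question-side representations onto pretrained relation embeddings, so unseen
    relations (which have an embedding but no training questions) still land somewhere sensible."""
    def __init__(self, dim=300):
        super().__init__()
        self.forward_map = nn.Linear(dim, dim, bias=False)
        self.backward_map = nn.Linear(dim, dim, bias=False)

    def losses(self, question_repr, relation_embedding):
        mapped = self.forward_map(question_repr)
        mapping_loss = F.mse_loss(mapped, relation_embedding)
        recon_loss = F.mse_loss(self.backward_map(mapped), question_repr)  # reconstruction objective
        return mapping_loss + recon_loss
```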
Cross-topic distributional semantic representations via unsupervised mappings
Title | Cross-topic distributional semantic representations via unsupervised mappings |
Authors | Eleftheria Briakou, Nikos Athanasiou, Alexandros Potamianos |
Abstract | In traditional Distributional Semantic Models (DSMs) the multiple senses of a polysemous word are conflated into a single vector space representation. In this work, we propose a DSM that learns multiple distributional representations of a word based on different topics. First, a separate DSM is trained for each topic and then each of the topic-based DSMs is aligned to a common vector space. Our unsupervised mapping approach is motivated by the hypothesis that words preserving their relative distances in different topic semantic sub-spaces constitute robust \textit{semantic anchors} that define the mappings between them. Aligned cross-topic representations achieve state-of-the-art results for the task of contextual word similarity. Furthermore, evaluation on NLP downstream tasks shows that multiple topic-based embeddings outperform single-prototype models. |
Tasks | |
Published | 2019-04-11 |
URL | http://arxiv.org/abs/1904.05674v1 |
http://arxiv.org/pdf/1904.05674v1.pdf | |
PWC | https://paperswithcode.com/paper/cross-topic-distributional-semantic |
Repo | https://github.com/Elbria/utdsm_naacl2018 |
Framework | none |
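
The "semantic anchors" idea maps naturally onto an orthogonal Procrustes alignment: given the vectors of anchor words in a topic-specific space and in the common space, the best orthogonal map between them has a closed form. The sketch below illustrates that kind of unsupervised mapping; it is not necessarily the authors' exact procedure.

```python
import numpy as np

def align_topic_space(topic_vecs, common_vecs):
    """Orthogonal Procrustes alignment of a topic-specific embedding space onto a common space.
    topic_vecs, common_vecs: (n_anchor_words, dim); row i is the same anchor word in both spaces."""
    u, _, vt = np.linalg.svd(topic_vecs.T @ common_vecs)
    w = u @ vt          # orthogonal map minimising ||topic_vecs @ w - common_vecs||_F
    return w
```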