January 25, 2020

3510 words 17 mins read

Paper Group NAWR 40

REM: From Structural Entropy to Community Structure Deception. Learn, Imagine and Create: Text-to-Image Generation from Prior Knowledge. SSIM - A Deep Learning Approach for Recovering Missing Time Series Sensor Data. Progressive Reconstruction of Visual Structure for Image Inpainting. FastSpeech: Fast, Robust and Controllable Text-to-Speech. Morpheus: …

REM: From Structural Entropy to Community Structure Deception

Title REM: From Structural Entropy to Community Structure Deception
Authors Yiwei Liu, Jiamou Liu, Zijian Zhang, Liehuang Zhu, Angsheng Li
Abstract This paper focuses on the privacy risks of disclosing the community structure in an online social network. By exploiting the community affiliations of user accounts, an attacker may infer sensitive user attributes. This raises the problem of community structure deception (CSD), which asks for ways to minimally modify the network so that a given community structure maximally hides itself from community detection algorithms. We investigate CSD through an information-theoretic lens. To this end, we propose a community-based structural entropy to express the amount of information revealed by a community structure. This notion allows us to devise residual entropy minimization (REM) as an efficient procedure to solve CSD. Experimental results over 9 real-world networks and 6 community detection algorithms show that REM is very effective in obfuscating the community structure as compared to other benchmark methods.
Tasks Community Detection
Published 2019-12-01
URL http://papers.nips.cc/paper/9453-rem-from-structural-entropy-to-community-structure-deception
PDF http://papers.nips.cc/paper/9453-rem-from-structural-entropy-to-community-structure-deception.pdf
PWC https://paperswithcode.com/paper/rem-from-structural-entropy-to-community
Repo https://github.com/CommunityDeception/CommunityDeceptor
Framework none
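
The abstract's key quantity is an entropy assigned to a community partition. As a rough, self-contained illustration of the idea only (not the paper's definition of community-based structural entropy), the sketch below scores a partition by the Shannon entropy of its degree-volume distribution across communities; the function name and this simplified entropy choice are assumptions:

```python
import math
from collections import Counter

def partition_entropy(edges, communities):
    """Shannon entropy of the degree-volume distribution across communities.

    A deliberately simplified stand-in for the paper's community-based
    structural entropy: it only illustrates the idea of measuring how much
    information a partition carries about the graph.
    """
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    total = sum(degree.values())  # equals 2m for m edges
    vol = Counter()
    for node, community in communities.items():
        vol[community] += degree[node]
    return -sum((v / total) * math.log2(v / total) for v in vol.values() if v)

# Two triangles joined by one bridge edge, split into communities A and B.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
communities = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(partition_entropy(edges, communities))  # 1.0 bit for this even split
```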

Learn, Imagine and Create: Text-to-Image Generation from Prior Knowledge

Title Learn, Imagine and Create: Text-to-Image Generation from Prior Knowledge
Authors Tingting Qiao, Jing Zhang, Duanqing Xu, Dacheng Tao
Abstract Text-to-image generation, i.e. generating an image given a text description, is a very challenging task due to the significant semantic gap between the two domains. Humans, however, tackle this problem intelligently. We learn from diverse objects to form a solid prior about semantics, textures, colors, shapes, and layouts. Given a text description, we immediately imagine an overall visual impression using this prior and, based on this, we draw a picture by progressively adding more and more details. Inspired by this process, in this paper we propose a novel text-to-image method called LeicaGAN that combines the above three phases in a unified framework. First, we formulate the multiple priors learning phase as a textual-visual co-embedding (TVE) comprising a text-image encoder for learning semantic, texture, and color priors and a text-mask encoder for learning shape and layout priors. Then, we formulate the imagination phase as multiple priors aggregation (MPA) by combining these complementary priors and adding noise for diversity. Lastly, we formulate the creation phase by using a cascaded attentive generator (CAG) to progressively draw a picture from coarse to fine. We leverage adversarial learning for LeicaGAN to enforce semantic consistency and visual realism. Thorough experiments on two public benchmark datasets demonstrate LeicaGAN’s superiority over the baseline method. Code has been made available at https://github.com/qiaott/LeicaGAN.
Tasks Image Generation, Text-to-Image Generation
Published 2019-12-01
URL http://papers.nips.cc/paper/8375-learn-imagine-and-create-text-to-image-generation-from-prior-knowledge
PDF http://papers.nips.cc/paper/8375-learn-imagine-and-create-text-to-image-generation-from-prior-knowledge.pdf
PWC https://paperswithcode.com/paper/learn-imagine-and-create-text-to-image
Repo https://github.com/qiaott/LeicaGAN
Framework pytorch
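
Of the three phases, the multiple priors aggregation (MPA) step is the easiest to picture in code: fuse the two encoders' outputs and add noise for diversity. A minimal PyTorch sketch, where fusion by concatenation and all dimensions are assumptions rather than LeicaGAN's actual design:

```python
import torch

def aggregate_priors(tie_emb, tme_emb, noise_dim=100):
    """Illustrative multiple priors aggregation (MPA): combine the two
    co-embedding outputs and add noise for diversity. Concatenation and
    the noise dimension are assumptions, not LeicaGAN's exact design."""
    noise = torch.randn(tie_emb.size(0), noise_dim)
    return torch.cat([tie_emb, tme_emb, noise], dim=1)

tie = torch.randn(4, 256)  # semantic/texture/color prior from the text-image encoder
tme = torch.randn(4, 256)  # shape/layout prior from the text-mask encoder
z = aggregate_priors(tie, tme)
print(z.shape)  # torch.Size([4, 612])
```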

SSIM - A Deep Learning Approach for Recovering Missing Time Series Sensor Data

Title SSIM - A Deep Learning Approach for Recovering Missing Time Series Sensor Data
Authors Yi-Fan Zhang, Peter Thorburn, Wei Xiang, Peter Fitch
Abstract Missing data are unavoidable in wireless sensor networks, due to issues such as network communication outage, sensor maintenance or failure, etc. Although a plethora of methods have been proposed for imputing sensor data, limitations still exist. Firstly, most methods give poor estimates when many consecutive values are missing. Secondly, some methods reconstruct missing data based on other parameters monitored simultaneously; when all the data are missing, these methods are no longer effective. Thirdly, the performance of deep learning methods relies heavily on massive amounts of training data, and in many scenarios it is difficult to obtain large volumes of data from wireless sensor networks. Hence, we propose a new sequence-to-sequence imputation model (SSIM) for recovering missing data in wireless sensor networks. The SSIM uses the state-of-the-art sequence-to-sequence deep learning architecture, and a Long Short-Term Memory network is chosen to utilize both the past and future information for a given time. Moreover, a variable-length sliding window algorithm is developed to generate a large number of training samples so the SSIM can be trained with small data sets. We evaluate the SSIM by using real-world time series data from a water quality monitoring network. Compared to methods like ARIMA, Seasonal ARIMA, Matrix Factorization, Multivariate Imputation by Chained Equations and Expectation Maximization, the proposed SSIM achieves up to 69.2%, 70.3%, 98.3% and 76% improvements in terms of the RMSE, MAE, MAPE and SMAPE, respectively, when recovering missing data sequences of three different lengths. The SSIM is therefore a promising approach for data quality control in wireless sensor networks.
Tasks Imputation, Time Series
Published 2019-04-03
URL https://ieeexplore.ieee.org/document/8681112
PDF https://www.ivivan.com/papers/IOTJ2019.pdf
PWC https://paperswithcode.com/paper/ssim-a-deep-learning-approach-for-recovering
Repo https://github.com/ivivan/SSIM_Seq2Seq
Framework pytorch
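
The variable-length sliding window is the part that lets SSIM train on small data sets: one long series is cut into many overlapping (input, target) pairs. A minimal sketch of that idea; the window bounds, stride, and the half/half input-target split are assumptions, not the paper's exact algorithm:

```python
import random

def sliding_window_samples(series, min_len=24, max_len=72, stride=1):
    """Illustrative variable-length sliding window: slice one long series
    into many (encoder input, decoder target) training pairs of varying
    lengths so a seq2seq model can be trained from a small data set."""
    samples = []
    for start in range(0, len(series), stride):
        length = random.randint(min_len, max_len)
        window = series[start:start + length]
        if len(window) < length:
            break  # not enough data left for a full window
        mid = length // 2
        samples.append((window[:mid], window[mid:]))
    return samples

series = [float(i) for i in range(500)]  # stand-in for one sensor's readings
pairs = sliding_window_samples(series)
print(len(pairs), len(pairs[0][0]), len(pairs[0][1]))
```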

Progressive Reconstruction of Visual Structure for Image Inpainting

Title Progressive Reconstruction of Visual Structure for Image Inpainting
Authors Jingyuan Li, Fengxiang He, Lefei Zhang, Bo Du, Dacheng Tao
Abstract Inpainting methods aim to restore missing parts of corrupted images and play a critical role in many computer vision applications, such as object removal and image restoration. Although existing methods perform well on images with small holes, restoring large holes remains elusive. To address this issue, this paper proposes a Progressive Reconstruction of Visual Structure (PRVS) network that progressively reconstructs the structures and the associated visual features. Specifically, we design a novel Visual Structure Reconstruction (VSR) layer to entangle reconstructions of the visual structure and visual feature, which benefit each other by sharing parameters. We repeatedly stack four VSR layers in both the encoding and decoding stages of a U-Net-like architecture to form the generator of a generative adversarial network (GAN) for restoring images with either small or large holes. We prove the generalization error upper bound of the PRVS network is O(1/sqrt(N)), which theoretically guarantees its performance. Extensive empirical evaluations and comparisons on the Places2, Paris Street View and CelebA datasets validate the strengths of the proposed approach and demonstrate that the model outperforms current state-of-the-art methods. The source code package is available at https://github.com/jingyuanli001/PRVS-Image-Inpainting.
Tasks Image Inpainting, Image Restoration
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Li_Progressive_Reconstruction_of_Visual_Structure_for_Image_Inpainting_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Li_Progressive_Reconstruction_of_Visual_Structure_for_Image_Inpainting_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/progressive-reconstruction-of-visual
Repo https://github.com/jingyuanli001/PRVS-Image-Inpainting
Framework pytorch

FastSpeech: Fast, Robust and Controllable Text-to-Speech

Title FastSpeech: Fast, Robust and Controllable Text-to-Speech
Authors Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, Tie-Yan Liu
Abstract Neural network based end-to-end text to speech (TTS) has significantly improved the quality of synthesized speech. Prominent methods (e.g., Tacotron 2) usually first generate a mel-spectrogram from text, and then synthesize speech from the mel-spectrogram using a vocoder such as WaveNet. Compared with traditional concatenative and statistical parametric approaches, neural network based end-to-end models suffer from slow inference speed, and the synthesized speech is usually not robust (i.e., some words are skipped or repeated) and lacks controllability (voice speed or prosody control). In this work, we propose a novel feed-forward network based on Transformer to generate mel-spectrograms in parallel for TTS. Specifically, we extract attention alignments from an encoder-decoder based teacher model for phoneme duration prediction, which is used by a length regulator to expand the source phoneme sequence to match the length of the target mel-spectrogram sequence for parallel mel-spectrogram generation. Experiments on the LJSpeech dataset show that our parallel model matches autoregressive models in terms of speech quality, nearly eliminates skipped and repeated words, and can adjust voice speed smoothly. Most importantly, compared with autoregressive models, our model speeds up mel-spectrogram generation by 270x. Therefore, we call our model FastSpeech. We will release the code on GitHub.
Tasks
Published 2019-05-22
URL https://arxiv.org/abs/1905.09263v1
PDF https://arxiv.org/pdf/1905.09263v1.pdf
PWC https://paperswithcode.com/paper/fastspeech-fastrobustand-controllable-text-to
Repo https://github.com/xcmyz/FastSpeech
Framework pytorch
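
The length regulator described in the abstract has a particularly simple core: repeat each phoneme's hidden state by its predicted duration so the expanded sequence matches the mel-spectrogram length. A minimal PyTorch sketch of that idea; the dimensions and names are illustrative, not the official implementation:

```python
import torch

def length_regulator(phoneme_hidden, durations):
    """Expand a phoneme-level hidden sequence to frame level by repeating
    each phoneme's hidden state `durations[i]` times, so the output length
    matches the target mel-spectrogram length."""
    return torch.repeat_interleave(phoneme_hidden, durations, dim=0)

hidden = torch.randn(5, 384)               # 5 phonemes, hidden size 384 (assumed)
durations = torch.tensor([3, 5, 2, 4, 6])  # predicted frames per phoneme
mel_aligned = length_regulator(hidden, durations)
print(mel_aligned.shape)  # torch.Size([20, 384]) -- 3+5+2+4+6 frames
```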

Morpheus: A Neural Network for Jointly Learning Contextual Lemmatization and Morphological Tagging

Title Morpheus: A Neural Network for Jointly Learning Contextual Lemmatization and Morphological Tagging
Authors Eray Yildiz, A. Cüneyd Tantuğ
Abstract In this study, we present Morpheus, a joint contextual lemmatizer and morphological tagger. Morpheus is based on a neural sequential architecture where the inputs are the characters of the surface words in a sentence and the outputs are the minimum edit operations between surface words and their lemmata as well as the morphological tags assigned to the words. Experiments on datasets in nearly 100 languages provided by the SigMorphon 2019 Shared Task 2 organizers show that the performance of Morpheus is comparable to the state-of-the-art system in terms of lemmatization. In morphological tagging, on the other hand, Morpheus significantly outperforms the SigMorphon baseline. In our experiments, we also show that a neural encoder-decoder architecture trained to predict the minimum edit operations can produce considerably better results than one trained to predict the characters in lemmata directly, as in previous studies. According to the SigMorphon 2019 Shared Task 2 results, Morpheus placed 3rd in lemmatization and 9th in morphological tagging among all participating teams.
Tasks Lemmatization, Morphological Tagging
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-4205/
PDF https://www.aclweb.org/anthology/W19-4205
PWC https://paperswithcode.com/paper/morpheus-a-neural-network-for-jointly
Repo https://github.com/erayyildiz/Morpheus
Framework pytorch
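
The abstract's key design choice is predicting the minimum edit operations between a surface word and its lemma rather than the lemma's characters. A small sketch of how such an edit script can be derived, using difflib as a stand-in for the paper's own edit-script computation; the operation labels are made up for illustration:

```python
from difflib import SequenceMatcher

def min_edit_ops(surface, lemma):
    """Derive an edit script turning a surface form into its lemma --
    the kind of target sequence Morpheus predicts instead of raw
    lemma characters."""
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, surface, lemma).get_opcodes():
        if tag == "equal":
            ops.extend("KEEP" for _ in range(i2 - i1))
        elif tag == "delete":
            ops.extend("DELETE" for _ in range(i2 - i1))
        elif tag == "replace":
            ops.append(f"REPLACE:{lemma[j1:j2]}")
        elif tag == "insert":
            ops.append(f"INSERT:{lemma[j1:j2]}")
    return ops

print(min_edit_ops("running", "run"))
# ['KEEP', 'KEEP', 'KEEP', 'DELETE', 'DELETE', 'DELETE', 'DELETE']
```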

Ultra Fast Medoid Identification via Correlated Sequential Halving

Title Ultra Fast Medoid Identification via Correlated Sequential Halving
Authors Tavor Baharav, David Tse
Abstract The medoid of a set of n points is the point in the set that minimizes the sum of distances to other points. It can be determined exactly in O(n^2) time by computing the distances between all pairs of points. Previous works show that one can significantly reduce the number of distance computations needed by adaptively querying distances. The resulting randomized algorithm is obtained by a direct conversion of the computation problem to a multi-armed bandit statistical inference problem. In this work, we show that we can better exploit the structure of the underlying computation problem by modifying the traditional bandit sampling strategy and using it in conjunction with a suitably chosen multi-armed bandit algorithm. Four to five orders of magnitude gains over exact computation are obtained on real data, in terms of both number of distance computations needed and wall clock time. Theoretical results are obtained to quantify such gains in terms of data parameters. Our code is publicly available online at https://github.com/TavorB/Correlated-Sequential-Halving.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/8623-ultra-fast-medoid-identification-via-correlated-sequential-halving
PDF http://papers.nips.cc/paper/8623-ultra-fast-medoid-identification-via-correlated-sequential-halving.pdf
PWC https://paperswithcode.com/paper/ultra-fast-medoid-identification-via-1
Repo https://github.com/TavorB/Correlated-Sequential-Halving
Framework none
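
For contrast with the bandit-based algorithm, the exact O(n^2) baseline the abstract mentions takes only a few lines: compute all pairwise distances and keep the point with the smallest distance sum. A minimal NumPy sketch:

```python
import numpy as np

def exact_medoid(points):
    """Exact O(n^2) medoid: the point minimizing the sum of distances to
    all other points -- the brute-force baseline that adaptive distance
    querying avoids."""
    diffs = points[:, None, :] - points[None, :, :]   # all pairwise differences
    dist_sums = np.linalg.norm(diffs, axis=2).sum(axis=1)
    return int(np.argmin(dist_sums))

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 5))  # 200 points in 5 dimensions
print(exact_medoid(pts))
```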

MSSTN: Multi-Scale Spatial Temporal Network for Air Pollution Prediction

Title MSSTN: Multi-Scale Spatial Temporal Network for Air Pollution Prediction
Authors Zhiyuan Wu, Yue Wang, Lin Zhang
Abstract Air pollution has become an important factor constraining city development and threatening public health in recent years. Air pollution prediction is considered a key component of early warning for pollution events. Considering the multi-scale nature of geo-sensory data such as air pollution signals, in this paper we adopt a multi-level graph data structure for better utilization of multi-scale spatio-temporal information. We further present a novel deep convolutional neural network model, named Multi-Scale Spatial Temporal Network (MSSTN), for the learning task on this data structure. The MSSTN is specially designed to better discover multi-scale spatial temporal patterns and their high-level interactions, by explicitly using a multi-scale neural network structure in both the spatial and temporal components. We conduct extensive experiments and ablation studies on Urban Air Pollution Datasets in North China, where the MSSTN makes hourly PM2.5 concentration predictions jointly for a number of cities. Our results show outstanding prediction accuracy as well as high computational efficiency compared to existing works.
Tasks Air Pollution Prediction
Published 2019-09-12
URL https://ieeexplore.ieee.org/document/9005574
PDF https://ieeexplore.ieee.org/document/9005574
PWC https://paperswithcode.com/paper/msstn-multi-scale-spatial-temporal-network
Repo https://github.com/Zhiyuan-Wu/MSSTN
Framework tf

Towards Interpretable Object Detection by Unfolding Latent Structures

Title Towards Interpretable Object Detection by Unfolding Latent Structures
Authors Tianfu Wu, Xi Song
Abstract This paper first proposes a method of formulating model interpretability in visual understanding tasks based on the idea of unfolding latent structures. It then presents a case study in object detection using popular two-stage region-based convolutional network (i.e., R-CNN) detection systems. The proposed method focuses on weakly-supervised extractive rationale generation, that is learning to unfold latent discriminative part configurations of object instances automatically and simultaneously in detection without using any supervision for part configurations. It utilizes a top-down hierarchical and compositional grammar model embedded in a directed acyclic AND-OR Graph (AOG) to explore and unfold the space of latent part configurations of regions of interest (RoIs). It presents an AOGParsing operator that seamlessly integrates with the RoIPooling/RoIAlign operator widely used in R-CNN and is trained end-to-end. In object detection, a bounding box is interpreted by the best parse tree derived from the AOG on-the-fly, which is treated as the qualitatively extractive rationale generated for interpreting detection. In experiments, Faster R-CNN is used to test the proposed method on the PASCAL VOC 2007 and the COCO 2017 object detection datasets. The experimental results show that the proposed method can compute promising latent structures without hurting the performance. The code and pretrained models are available at https://github.com/iVMCL/iRCNN.
Tasks Object Detection
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Wu_Towards_Interpretable_Object_Detection_by_Unfolding_Latent_Structures_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Wu_Towards_Interpretable_Object_Detection_by_Unfolding_Latent_Structures_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/towards-interpretable-object-detection-by
Repo https://github.com/iVMCL/iRCNN
Framework pytorch

Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network

Title Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network
Authors Awni Y. Hannun, Pranav Rajpurkar, Masoumeh Haghpanahi, Geoffrey H. Tison, Codie Bourn, Mintu P. Turakhia, Andrew Y. Ng
Abstract Computerized electrocardiogram (ECG) interpretation plays a critical role in the clinical ECG workflow. Widely available digital ECG data and the algorithmic paradigm of deep learning present an opportunity to substantially improve the accuracy and scalability of automated ECG analysis. However, a comprehensive evaluation of an end-to-end deep learning approach for ECG analysis across a wide variety of diagnostic classes has not been previously reported. Here, we develop a deep neural network (DNN) to classify 12 rhythm classes using 91,232 single-lead ECGs from 53,549 patients who used a single-lead ambulatory ECG monitoring device. When validated against an independent test dataset annotated by a consensus committee of board-certified practicing cardiologists, the DNN achieved an average area under the receiver operating characteristic curve (ROC) of 0.97. The average F1 score, which is the harmonic mean of the positive predictive value and sensitivity, for the DNN (0.837) exceeded that of average cardiologists (0.780). With specificity fixed at the average specificity achieved by cardiologists, the sensitivity of the DNN exceeded the average cardiologist sensitivity for all rhythm classes. These findings demonstrate that an end-to-end deep learning approach can classify a broad range of distinct arrhythmias from single-lead ECGs with high diagnostic performance similar to that of cardiologists. If confirmed in clinical settings, this approach could reduce the rate of misdiagnosed computerized ECG interpretations and improve the efficiency of expert human ECG interpretation by accurately triaging or prioritizing the most urgent conditions. Dataset available from: https://irhythm.github.io/cardiol_test_set/
Tasks Arrhythmia Detection, Electrocardiography (ECG)
Published 2019-01-07
URL https://doi.org/10.1038/s41591-018-0268-3
PDF https://arxiv.org/pdf/1707.01836.pdf
PWC https://paperswithcode.com/paper/cardiologist-level-arrhythmia-detection-and
Repo https://github.com/awni/ecg
Framework tf
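
The abstract defines the F1 score as the harmonic mean of positive predictive value and sensitivity, the metric behind the 0.837-vs-0.780 comparison. A one-liner makes the definition concrete; the sample inputs below are made up:

```python
def f1_from_ppv_sensitivity(ppv, sensitivity):
    """F1 score as the harmonic mean of positive predictive value (PPV)
    and sensitivity, as defined in the abstract."""
    return 2 * ppv * sensitivity / (ppv + sensitivity)

print(f1_from_ppv_sensitivity(0.85, 0.82))  # ~0.835 for these illustrative values
```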

ANODEV2: A Coupled Neural ODE Framework

Title ANODEV2: A Coupled Neural ODE Framework
Authors Tianjun Zhang, Zhewei Yao, Amir Gholami, Joseph E. Gonzalez, Kurt Keutzer, Michael W. Mahoney, George Biros
Abstract It has been observed that residual networks can be viewed as the explicit Euler discretization of an Ordinary Differential Equation (ODE). This observation motivated the introduction of so-called Neural ODEs, in which other discretization schemes and/or adaptive time stepping techniques can be used to improve the performance of residual networks. Here, we propose ANODEV2, which extends this approach by introducing a framework that allows ODE-based evolution for both the weights and the activations, in a coupled formulation. Such an approach provides more modeling flexibility, and it can help with generalization performance. We present the formulation of ANODEV2, derive optimality conditions, and implement the coupled framework in PyTorch. We present empirical results using several different configurations of ANODEV2, testing them on the CIFAR-10 dataset. We report results showing that our coupled ODE-based framework is indeed trainable, and that it achieves higher accuracy, compared to the baseline ResNet network and the recently-proposed Neural ODE approach.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/8758-anodev2-a-coupled-neural-ode-framework
PDF http://papers.nips.cc/paper/8758-anodev2-a-coupled-neural-ode-framework.pdf
PWC https://paperswithcode.com/paper/anodev2-a-coupled-neural-ode-framework
Repo https://github.com/amirgholami/anode
Framework pytorch
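
The observation the paper builds on, that a residual block is one explicit Euler step of an ODE, is easy to make concrete. A minimal PyTorch sketch of that equivalence only; ANODEV2's coupled evolution of weights and activations is not shown:

```python
import torch

def euler_residual_step(x, f, h=1.0):
    """One residual block viewed as an explicit Euler step of the ODE
    dx/dt = f(x): returns x_{k+1} = x_k + h * f(x_k)."""
    return x + h * f(x)

f = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Tanh())
x = torch.randn(8, 16)
for _ in range(4):  # integrate four Euler steps of the same vector field f
    x = euler_residual_step(x, f, h=0.25)
print(x.shape)  # torch.Size([8, 16])
```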

What’s There in the Dark

Title What’s There in the Dark
Authors Sauradip Nag, Saptakatha Adak, Sukhendu Das
Abstract Scene parsing is an important cog for modern autonomous driving systems. Most works in semantic segmentation pertain to day-time scenes with favourable weather and illumination conditions. In this paper, we propose a novel deep architecture, NiSeNet, that performs semantic segmentation of night scenes using a domain mapping approach of synthetic to real data. It is a dual-channel network, where we designed a Real channel using DeepLabV3+ coupled with an MSE loss to preserve the spatial information. In addition, we used an Adaptive channel to reduce the domain gap between synthetic and real night images, which also complements the failures of the Real channel output. Apart from the dual channel, we introduced a novel fusion scheme to fuse the outputs of the two channels. We also compiled a new dataset, the Urban Night Driving Dataset (UNDD): it consists of 7125 unlabelled day and night images, plus 75 night images with pixel-level annotations whose classes are equivalent to those of the Cityscapes dataset. We evaluated our approach on the Berkeley Deep Drive dataset, the challenging Mapillary dataset and the UNDD dataset, showing that the proposed method outperforms the state-of-the-art techniques in terms of accuracy and visual quality.
Tasks Autonomous Driving, Scene Parsing, Semantic Segmentation
Published 2019-09-24
URL https://www.researchgate.net/publication/335539032_What's_There_in_the_Dark
PDF https://www.researchgate.net/publication/335539032_What's_There_in_the_Dark
PWC https://paperswithcode.com/paper/whats-there-in-the-dark
Repo https://github.com/sauradip/night_image_semantic_segmentation
Framework none

Celebrity Profiling

Title Celebrity Profiling
Authors Matti Wiegmann, Benno Stein, Martin Potthast
Abstract Celebrities are among the most prolific users of social media, promoting their personas and rallying followers. This activity is closely tied to genuine writing samples, which makes them worthy research subjects in many respects, not least profiling. With this paper we introduce the Webis Celebrity Corpus 2019. For its construction, the Twitter feeds of 71,706 verified accounts have been carefully linked with their respective Wikidata items, crawling both. After cleansing, the resulting profiles contain an average of 29,968 words per profile and up to 239 pieces of personal information. A cross-evaluation that checked the correct association of Twitter account and Wikidata item revealed an error rate of only 0.6%, rendering the profiles highly reliable. Our corpus comprises a wide cross-section of local and global celebrities, forming a unique combination of scale, profile comprehensiveness, and label reliability. We further establish the state of the art’s profiling performance by evaluating the winning approaches submitted to the PAN gender prediction tasks in a transfer learning experiment. They are only outperformed by our own deep learning approach, which we also use to exemplify celebrity occupation prediction for the first time.
Tasks Gender Prediction, Transfer Learning
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1249/
PDF https://www.aclweb.org/anthology/P19-1249
PWC https://paperswithcode.com/paper/celebrity-profiling
Repo https://github.com/webis-de/acl-19
Framework none

Compiler Auto-Vectorization with Imitation Learning

Title Compiler Auto-Vectorization with Imitation Learning
Authors Charith Mendis, Cambridge Yang, Yewen Pu, Saman Amarasinghe, Michael Carbin
Abstract Modern microprocessors are equipped with single instruction multiple data (SIMD) or vector instruction sets which allow compilers to exploit fine-grained data-level parallelism. To exploit this parallelism, compilers employ auto-vectorization techniques to automatically convert scalar code into vector code. Larsen & Amarasinghe (2000) first introduced superword level parallelism (SLP) based vectorization, which is one form of vectorization popularly used by compilers. Current compilers employ hand-crafted heuristics and typically only follow one SLP vectorization strategy, which can be suboptimal. Recently, Mendis & Amarasinghe (2018) formulated the instruction packing problem of SLP vectorization by leveraging an integer linear programming (ILP) solver, achieving superior runtime performance. In this work, we explore whether it is feasible to imitate the optimal decisions made by their ILP solution by fitting a graph neural network policy. We show that the learnt policy produces a vectorization scheme which is better than industry-standard compiler heuristics both in terms of static measures and runtime performance. More specifically, the learnt agent produces a vectorization scheme which has a 22.6% higher average reduction in cost compared to the LLVM compiler when measured using its own cost model, and achieves a geometric mean runtime speedup of 1.015× on the NAS benchmark suite when compared to LLVM’s SLP vectorizer.
Tasks Imitation Learning
Published 2019-12-01
URL http://papers.nips.cc/paper/9604-compiler-auto-vectorization-with-imitation-learning
PDF http://papers.nips.cc/paper/9604-compiler-auto-vectorization-with-imitation-learning.pdf
PWC https://paperswithcode.com/paper/compiler-auto-vectorization-with-imitation
Repo https://github.com/ithemal/vemal.git
Framework none
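
The headline runtime number is a geometric mean of per-benchmark speedups (1.015x on NAS). As a small reminder of how that aggregate is computed; the sample speedups below are made up:

```python
import math

def geometric_mean(speedups):
    """Geometric mean of per-benchmark speedups: exp of the mean of the
    logs, the standard aggregate for ratio-scale measurements."""
    return math.exp(sum(math.log(s) for s in speedups) / len(speedups))

print(geometric_mean([1.00, 1.03, 0.99, 1.04, 1.02]))  # ~1.016
```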

Combinatorial Inference against Label Noise

Title Combinatorial Inference against Label Noise
Authors Paul Hongsuck Seo, Geeho Kim, Bohyung Han
Abstract Label noise is one of the critical sources that degrade generalization performance of deep neural networks significantly. To handle the label noise issue in a principled way, we propose a unique classification framework of constructing multiple models in heterogeneous coarse-grained meta-class spaces and making joint inference of the trained models for the final predictions in the original (base) class space. Our approach reduces noise level by simply constructing meta-classes and improves accuracy via combinatorial inferences over multiple constituent classifiers. Since the proposed framework has distinct and complementary properties for the given problem, we can even incorporate additional off-the-shelf learning algorithms to improve accuracy further. We also introduce techniques to organize multiple heterogeneous meta-class sets using $k$-means clustering and identify a desirable subset leading to learn compact models. Our extensive experiments demonstrate outstanding performance in terms of accuracy and efficiency compared to the state-of-the-art methods under various synthetic noise configurations and in a real-world noisy dataset.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/8401-combinatorial-inference-against-label-noise
PDF http://papers.nips.cc/paper/8401-combinatorial-inference-against-label-noise.pdf
PWC https://paperswithcode.com/paper/combinatorial-inference-against-label-noise
Repo https://github.com/snow12345/Combinatorial_Classification
Framework pytorch
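
The core inference idea, training classifiers over coarse meta-class spaces and jointly inferring the base class, can be sketched compactly. Below, each base class scores the sum of log-probabilities of the meta-classes it belongs to under each classifier; the mappings, the uniform sum rule, and all numbers are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def joint_inference(meta_log_probs, meta_maps, num_base_classes):
    """Illustrative combinatorial inference: classifier j outputs
    log-probabilities over its own coarse meta-classes; each base class
    accumulates the log-probabilities of the meta-classes it maps to."""
    scores = np.zeros(num_base_classes)
    for log_p, mapping in zip(meta_log_probs, meta_maps):
        for base_class in range(num_base_classes):
            scores[base_class] += log_p[mapping[base_class]]
    return int(np.argmax(scores))

# Two classifiers; 4 base classes grouped into 2 meta-classes each (hypothetical).
maps = [np.array([0, 0, 1, 1]), np.array([0, 1, 0, 1])]
log_ps = [np.log([0.7, 0.3]), np.log([0.2, 0.8])]
# Base class 1 maps to the high-probability meta-class under both classifiers.
print(joint_inference(log_ps, maps, 4))  # 1
```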