Paper Group NAWR 19
DSANet: Dual Self-Attention Network for Multivariate Time Series Forecasting. Scalable Convolutional Neural Network for Image Compressed Sensing. Curriculum-guided Hindsight Experience Replay. Semi-supervised Entity Alignment via Joint Knowledge Embedding Model and Cross-graph Model. FaceGenderID: Exploiting Gender Information in DCNNs Face Recognition Systems …
DSANet: Dual Self-Attention Network for Multivariate Time Series Forecasting
Title | DSANet: Dual Self-Attention Network for Multivariate Time Series Forecasting |
Authors | Siteng Huang, Donglin Wang |
Abstract | Multivariate time series forecasting has attracted wide attention in areas such as systems, traffic, and finance. The difficulty of the task lies in the fact that traditional methods fail to capture complicated nonlinear dependencies between time steps and between multiple time series. Recently, recurrent neural networks and attention mechanisms have been used to model periodic temporal patterns across multiple time steps. However, these models do not fit well for time series with dynamic-period or nonperiodic patterns. In this paper, we propose a dual self-attention network (DSANet) for highly efficient multivariate time series forecasting, especially for dynamic-period or nonperiodic series. DSANet completely dispenses with recurrence and utilizes two parallel convolutional components, called global temporal convolution and local temporal convolution, to capture complex mixtures of global and local temporal patterns. Moreover, DSANet employs a self-attention module to model dependencies between multiple series. To further improve robustness, DSANet also integrates a traditional autoregressive linear model in parallel with the non-linear neural network. Experiments on real-world multivariate time series data show that the proposed model is effective and outperforms baselines. |
Tasks | Multivariate Time Series Forecasting, Time Series, Time Series Forecasting |
Published | 2019-11-03 |
URL | https://kyonhuang.top/files/Huang-DSANet.pdf |
PWC | https://paperswithcode.com/paper/dsanet-dual-self-attention-network-for |
Repo | https://github.com/bighuang624/DSANet |
Framework | pytorch |
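The abstract's combination of parallel temporal convolutions, cross-series self-attention, and a linear autoregressive path can be condensed into a short PyTorch sketch. This is an illustrative reading of the description above, not the authors' released architecture (see the repo for that); all module sizes are assumptions.

```python
import torch
import torch.nn as nn

class DSANetSketch(nn.Module):
    """Illustrative sketch of the abstract's three ingredients (sizes are
    assumptions, not the authors' settings): per-series global and local
    temporal convolutions, self-attention across series, and a parallel
    linear autoregressive (AR) path for robustness."""
    def __init__(self, n_series, window, d_model=64, n_heads=4):
        super().__init__()
        self.global_conv = nn.Conv1d(1, d_model, kernel_size=window)       # full-window filters
        self.local_conv = nn.Conv1d(1, d_model, kernel_size=3, padding=1)  # short filters
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.attn = nn.MultiheadAttention(2 * d_model, n_heads, batch_first=True)
        self.head = nn.Linear(2 * d_model, 1)
        self.ar = nn.Linear(window, 1)   # linear AR component, summed at the output

    def forward(self, x):                                  # x: (B, window, n_series)
        B, T, D = x.shape
        s = x.permute(0, 2, 1).reshape(B * D, 1, T)        # one channel per series
        glo = self.global_conv(s).squeeze(-1)              # (B*D, d_model)
        loc = self.pool(self.local_conv(s)).squeeze(-1)    # (B*D, d_model)
        h = torch.cat([glo, loc], dim=-1).reshape(B, D, -1)
        h, _ = self.attn(h, h, h)                          # dependencies across series
        nonlinear = self.head(h).squeeze(-1)               # (B, n_series)
        linear = self.ar(x.permute(0, 2, 1)).squeeze(-1)   # (B, n_series)
        return nonlinear + linear                          # one-step-ahead forecast
```

The parallel AR path mirrors the paper's robustness argument: even if the nonlinear path misbehaves, the linear component preserves scale-sensitive behaviour.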
Scalable Convolutional Neural Network for Image Compressed Sensing
Title | Scalable Convolutional Neural Network for Image Compressed Sensing |
Authors | Wuzhen Shi, Feng Jiang, Shaohui Liu, Debin Zhao |
Abstract | Recently, deep learning based image Compressed Sensing (CS) methods have been proposed and demonstrated superior reconstruction quality with low computational complexity. However, the existing deep learning based image CS methods need to train different models for different sampling ratios, which increases the complexity of the encoder and decoder. In this paper, we propose a scalable convolutional neural network (dubbed SCSNet) to achieve scalable sampling and scalable reconstruction with only one model. Specifically, SCSNet provides both coarse and fine granular scalability. For coarse granular scalability, SCSNet is designed as a single sampling matrix plus a hierarchical reconstruction network that contains a base layer plus multiple enhancement layers. The base layer provides the basic reconstruction quality, while the enhancement layers reference the lower reconstruction layers and gradually improve the reconstruction quality. For fine granular scalability, SCSNet achieves sampling and reconstruction at any sampling ratio by using a greedy method to select the measurement bases. Compared with the existing deep learning based image CS methods, SCSNet achieves scalable sampling and quality scalable reconstruction at any sampling ratio with only one model. Experimental results demonstrate that SCSNet has the state-of-the-art performance while maintaining a comparable running speed with the existing deep learning based image CS methods. |
Tasks | |
Published | 2019-06-01 |
URL | http://openaccess.thecvf.com/content_CVPR_2019/html/Shi_Scalable_Convolutional_Neural_Network_for_Image_Compressed_Sensing_CVPR_2019_paper.html |
PDF | http://openaccess.thecvf.com/content_CVPR_2019/papers/Shi_Scalable_Convolutional_Neural_Network_for_Image_Compressed_Sensing_CVPR_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/scalable-convolutional-neural-network-for |
Repo | https://github.com/wzhshi/SCSNet |
Framework | none |
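For the fine granular scalability described above, the abstract only says that measurement bases are chosen greedily. A hedged sketch of one such greedy ordering follows; `reconstruct` is a hypothetical stand-in for the trained reconstruction network, and the prefix-reconstruction-error criterion is an assumption, not the authors' exact method.

```python
import torch

def greedy_order_bases(Phi, images, reconstruct):
    """Hedged sketch: order the rows (measurement bases) of a learned sampling
    matrix `Phi` (M, n_pixels) so that any prefix of k rows can serve as the
    sampling matrix for ratio k/M. `reconstruct(y, rows)` is a hypothetical
    stand-in for the trained reconstruction network."""
    remaining = list(range(Phi.shape[0]))
    order = []
    while remaining:                         # O(M^2) reconstructions: offline cost
        best_row, best_err = None, float("inf")
        for r in remaining:
            rows = order + [r]
            y = images @ Phi[rows].T         # measurements with this candidate prefix
            err = torch.mean((reconstruct(y, rows) - images) ** 2).item()
            if err < best_err:
                best_row, best_err = r, err
        order.append(best_row)
        remaining.remove(best_row)
    return order                             # sample at ratio k/M with Phi[order[:k]]
```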
Curriculum-guided Hindsight Experience Replay
Title | Curriculum-guided Hindsight Experience Replay |
Authors | Meng Fang, Tianyi Zhou, Yali Du, Lei Han, Zhengyou Zhang |
Abstract | In off-policy deep reinforcement learning, it is usually hard to collect sufficient successful experiences with sparse rewards to learn from. Hindsight experience replay (HER) enables an agent to learn from failures by treating the achieved state of a failed experience as a pseudo goal. However, not all the failed experiences are equally useful to different learning stages, so it is not efficient to replay all of them or uniform samples of them. In this paper, we propose to 1) adaptively select the failed experiences for replay according to the proximity to the true goals and the curiosity of exploration over diverse pseudo goals, and 2) gradually change the proportion of the goal-proximity and the diversity-based curiosity in the selection criteria: we adopt a human-like learning strategy that enforces more curiosity in earlier stages and changes to larger goal-proximity later. This "Goal-and-Curiosity-driven Curriculum Learning" leads to "Curriculum-guided HER (CHER)", which adaptively and dynamically controls the exploration-exploitation trade-off during the learning process via hindsight experience selection. We show that CHER improves the state of the art in challenging robotics environments. |
Tasks | |
Published | 2019-12-01 |
URL | http://papers.nips.cc/paper/9425-curriculum-guided-hindsight-experience-replay |
PDF | http://papers.nips.cc/paper/9425-curriculum-guided-hindsight-experience-replay.pdf |
PWC | https://paperswithcode.com/paper/curriculum-guided-hindsight-experience-replay |
Repo | https://github.com/mengf1/CHER |
Framework | none |
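The two-part selection criterion — goal proximity plus diversity-based curiosity, with a proportion that shifts over training — can be written down directly. The sketch below is a minimal NumPy reading of that description; the nearest-neighbour curiosity proxy and the linear schedule are assumptions, not the authors' exact criterion.

```python
import numpy as np

def cher_scores(achieved_goals, true_goal, step, total_steps):
    """Hedged sketch of the CHER selection idea: score each failed experience
    by goal proximity and diversity-based curiosity, shifting the weighting
    from curiosity to proximity over training.
    achieved_goals: (N, dim) pseudo goals; true_goal: (dim,)."""
    lam = step / total_steps                        # curriculum weight: 0 -> 1
    # Proximity: negative distance of the achieved (pseudo) goal to the true goal
    proximity = -np.linalg.norm(achieved_goals - true_goal, axis=1)
    # Curiosity proxy: distance to the nearest other pseudo goal (diversity)
    d = np.linalg.norm(achieved_goals[:, None] - achieved_goals[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    curiosity = d.min(axis=1)
    return (1 - lam) * curiosity + lam * proximity  # rank and replay the top-k
```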
Semi-supervised Entity Alignment via Joint Knowledge Embedding Model and Cross-graph Model
Title | Semi-supervised Entity Alignment via Joint Knowledge Embedding Model and Cross-graph Model |
Authors | Chengjiang Li, Yixin Cao, Lei Hou, Jiaxin Shi, Juanzi Li, Tat-Seng Chua |
Abstract | Entity alignment aims at integrating complementary knowledge graphs (KGs) from different sources or languages, which may benefit many knowledge-driven applications. It is challenging due to the heterogeneity of KGs and limited seed alignments. In this paper, we propose a semi-supervised entity alignment method by joint Knowledge Embedding model and Cross-Graph model (KECG). It can make better use of seed alignments to propagate over the entire graphs with KG-based constraints. Specifically, as for the knowledge embedding model, we utilize TransE to implicitly complete two KGs towards consistency and learn relational constraints between entities. As for the cross-graph model, we extend Graph Attention Network (GAT) with projection constraint to robustly encode graphs, and two KGs share the same GAT to transfer structural knowledge as well as to ignore unimportant neighbors for alignment via attention mechanism. Results on publicly available datasets as well as further analysis demonstrate the effectiveness of KECG. Our codes can be found in https://github.com/THU-KEG/KECG. |
Tasks | Entity Alignment, Knowledge Graphs |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-1274/ |
PWC | https://paperswithcode.com/paper/semi-supervised-entity-alignment-via-joint |
Repo | https://github.com/THU-KEG/KECG |
Framework | pytorch |
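The knowledge-embedding side of KECG uses TransE, whose margin-based ranking objective is standard and fits in a few lines; the batch shapes below are illustrative.

```python
import torch
import torch.nn.functional as F

def transe_margin_loss(h, r, t, h_neg, t_neg, margin=1.0):
    """TransE objective as used for the knowledge-embedding side of KECG:
    plausible triples (h, r, t) should satisfy h + r ≈ t and score lower
    than corrupted triples. All embeddings are (batch, dim) tensors."""
    pos = torch.norm(h + r - t, p=2, dim=1)          # score of true triples
    neg = torch.norm(h_neg + r - t_neg, p=2, dim=1)  # score of corrupted triples
    return F.relu(margin + pos - neg).mean()         # margin ranking loss
```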
FaceGenderID: Exploiting Gender Information in DCNNs Face Recognition Systems
Title | FaceGenderID: Exploiting Gender Information in DCNNs Face Recognition Systems |
Authors | Ruben Vera-Rodriguez, Marta Blazquez, Aythami Morales, Ester Gonzalez-Sosa, Joao C. Neves, Hugo Proenca |
Abstract | This paper addresses the effect of gender as a covariate in face verification systems. Even though pre-trained models based on Deep Convolutional Neural Networks (DCNNs), such as VGG-Face or ResNet-50, achieve very high performance, they are trained on very large datasets comprising millions of images, which have biases regarding demographic aspects like gender and ethnicity, among others. In this work, we first analyse the separate performance of these state-of-the-art models for males and females. We observe a gap between the face verification performance obtained for the two gender classes. These results suggest that features obtained by biased models are affected by the gender covariate. We propose a gender-dependent training approach to improve the feature representation for both genders, and develop both: i) gender-specific DCNN models, and ii) a gender-balanced DCNN model. Our results show significant and consistent improvements in face verification performance for both genders, both individually and overall, with our proposed approach. Finally, we announce the availability (at GitHub) of the FaceGenderID DCNN models proposed in this work, which can support further experiments on this topic. |
Tasks | Face Recognition, Face Verification |
Published | 2019-06-28 |
URL | http://openaccess.thecvf.com/content_CVPRW_2019/papers/BEFA/Vera-Rodriguez_FaceGenderID_Exploiting_Gender_Information_in_DCNNs_Face_Recognition_Systems_CVPRW_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/facegenderid-exploiting-gender-information-in |
Repo | https://github.com/BiDAlab/FaceGenderID |
Framework | tf |
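One way to read the "gender-specific DCNN models" variant is as a routing step: classify gender, then embed the face with the matching model. The sketch below is a hypothetical pipeline under that reading, not the authors' released code; all three networks are stand-ins for trained models.

```python
import torch

def gender_dependent_embedding(image, gender_clf, male_model, female_model):
    """Hypothetical routing pipeline: classify gender, then embed the face
    with the matching gender-specific model. `image` is a single face,
    shape (1, C, H, W); `gender_clf` is assumed to have a scalar logit head."""
    with torch.no_grad():
        p_male = torch.sigmoid(gender_clf(image)).item()   # P(male) in [0, 1]
        model = male_model if p_male >= 0.5 else female_model
        emb = model(image)
        return emb / emb.norm()   # L2-normalise for cosine-distance verification
```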
The Regretful Navigation Agent for Vision-and-Language Navigation
Title | The Regretful Navigation Agent for Vision-and-Language Navigation |
Authors | Chih-Yao Ma, Zuxuan Wu, Ghassan AlRegib, Caiming Xiong, Zsolt Kira |
Abstract | As deep learning continues to make progress for challenging perception tasks, there is increased interest in combining vision, language, and decision-making. Specifically, the Vision and Language Navigation (VLN) task involves navigating to a goal purely from language instructions and visual information without explicit knowledge of the goal. Recent successful approaches have made inroads in achieving good success rates for this task but rely on beam search, which thoroughly explores a large number of trajectories and is unrealistic for applications such as robotics. In this paper, inspired by the intuition of viewing the problem as search on a navigation graph, we propose to use a progress monitor developed in prior work as a learnable heuristic for search. We then propose two modules incorporated into an end-to-end architecture: 1) A learned mechanism to perform backtracking, which decides whether to continue moving forward or roll back to a previous state (Regret Module) and 2) A mechanism to help the agent decide which direction to go next by showing directions that are visited and their associated progress estimate (Progress Marker). Combined, the proposed approach significantly outperforms current state-of-the-art methods using greedy action selection, with 5% absolute improvement on the test server in success rates, and, more importantly, 8% on success rates normalized by the path length. |
Tasks | Decision Making, Vision-Language Navigation, Visual Navigation |
Published | 2019-03-05 |
URL | https://arxiv.org/abs/1903.01602 |
PDF | https://arxiv.org/pdf/1903.01602.pdf |
PWC | https://paperswithcode.com/paper/the-regretful-navigation-agent-for-vision-and |
Repo | https://github.com/chihyaoma/regretful-agent |
Framework | pytorch |
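The Regret Module described above reduces to a small decision head over the agent state and the progress monitor's estimate. A minimal sketch, assuming a two-way forward/rollback choice and illustrative layer sizes:

```python
import torch
import torch.nn as nn

class RegretModule(nn.Module):
    """Hedged sketch of the rollback decision (names and sizes illustrative):
    given the current agent state and the progress monitor's estimate, emit a
    distribution over {move forward, roll back to the previous state}."""
    def __init__(self, d_state):
        super().__init__()
        self.decide = nn.Linear(d_state + 1, 2)

    def forward(self, state, progress_estimate):
        # state: (B, d_state); progress_estimate: (B, 1), a learned heuristic in [0, 1]
        x = torch.cat([state, progress_estimate], dim=-1)
        return torch.softmax(self.decide(x), dim=-1)   # [p_forward, p_rollback]
```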
Causal Discovery with Attention-Based Convolutional Neural Networks
Title | Causal Discovery with Attention-Based Convolutional Neural Networks |
Authors | Meike Nauta, Doina Bucur, Christin Seifert |
Abstract | Having insight into the causal associations in a complex system facilitates decision making, e.g., for medical treatments, urban infrastructure improvements or financial investments. As the amount of observational data grows, it becomes possible to discover causal relationships between variables by observing their behaviour over time. Existing methods for causal discovery from time series data do not yet exploit the representational power of deep learning. We therefore present the Temporal Causal Discovery Framework (TCDF), a deep learning framework that learns a causal graph structure by discovering causal relationships in observational time series data. TCDF uses attention-based convolutional neural networks combined with a causal validation step. By interpreting the internal parameters of the convolutional networks, TCDF can also discover the time delay between a cause and the occurrence of its effect. Our framework learns temporal causal graphs, which can include confounders and instantaneous effects. Experiments on financial and neuroscientific benchmarks show state-of-the-art performance of TCDF on discovering causal relationships in continuous time series data. Furthermore, we show that TCDF can circumstantially discover the presence of hidden confounders. Our broadly applicable framework can be used to gain novel insights into the causal dependencies in a complex system, which is important for reliable predictions, knowledge discovery and data-driven decision making. |
Tasks | Causal Discovery, Decision Making, Time Series |
Published | 2019-01-07 |
URL | https://www.mdpi.com/2504-4990/1/1/19 |
PDF | https://www.mdpi.com/2504-4990/1/1/19/pdf |
PWC | https://paperswithcode.com/paper/causal-discovery-with-attention-based |
Repo | https://github.com/M-Nauta/TCDF |
Framework | pytorch |
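TCDF's two interpretable ingredients — per-series attention scores and delays read off the convolution kernels — can be sketched compactly. The sketch below assumes a single target series and a plain causal convolution; the real framework uses dilated convolutions plus a causal validation step.

```python
import torch
import torch.nn as nn

class TCDFSketch(nn.Module):
    """Hedged sketch of the core idea: a causal (left-padded) temporal
    convolution predicts one target series from all series, with a trainable
    attention score per input series. High scores suggest potential causes;
    the position of the largest kernel weight hints at the cause-effect delay."""
    def __init__(self, n_series, kernel_size=4):
        super().__init__()
        self.attention = nn.Parameter(torch.ones(n_series))
        self.pad = nn.ConstantPad1d((kernel_size - 1, 0), 0.0)   # causal padding
        self.conv = nn.Conv1d(n_series, 1, kernel_size)

    def forward(self, x):                                  # x: (batch, n_series, time)
        gated = x * torch.softmax(self.attention, 0)[None, :, None]
        return self.conv(self.pad(gated)).squeeze(1)       # (batch, time)

    def delays(self):
        # Read a discovered delay per series from the kernel's peak weight
        k = self.conv.weight.squeeze(0).abs()              # (n_series, kernel)
        return (k.shape[1] - 1) - k.argmax(dim=1)          # lag in time steps
```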
Self-Attentive, Multi-Context One-Class Classification for Unsupervised Anomaly Detection on Text
Title | Self-Attentive, Multi-Context One-Class Classification for Unsupervised Anomaly Detection on Text |
Authors | Lukas Ruff, Yury Zemlyanskiy, Robert Vandermeulen, Thomas Schnake, Marius Kloft |
Abstract | There exist few text-specific methods for unsupervised anomaly detection, and for those that do exist, none utilize pre-trained models for distributed vector representations of words. In this paper we introduce a new anomaly detection method, Context Vector Data Description (CVDD), which builds upon word embedding models to learn multiple sentence representations that capture multiple semantic contexts via the self-attention mechanism. Modeling multiple contexts enables us to perform contextual anomaly detection of sentences and phrases with respect to the multiple themes and concepts present in an unlabeled text corpus. These contexts in combination with the self-attention weights make our method highly interpretable. We demonstrate the effectiveness of CVDD quantitatively as well as qualitatively on the well-known Reuters, 20 Newsgroups, and IMDB Movie Reviews datasets. |
Tasks | Anomaly Detection, Unsupervised Anomaly Detection |
Published | 2019-07-01 |
URL | https://www.aclweb.org/anthology/P19-1398/ |
PWC | https://paperswithcode.com/paper/self-attentive-multi-context-one-class |
Repo | https://github.com/lukasruff/CVDD-PyTorch |
Framework | pytorch |
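The CVDD objective pairs each self-attention sentence embedding with a learned context vector under cosine distance. A minimal sketch of the per-document loss (the paper additionally weights the contexts; the plain mean here is a simplification):

```python
import torch
import torch.nn.functional as F

def cvdd_loss(heads, contexts):
    """Sketch of the CVDD objective: each self-attention head yields a
    sentence embedding m_k, pulled toward its own learned context vector c_k
    under cosine distance. heads, contexts: (K, dim) tensors."""
    cos = F.cosine_similarity(heads, contexts, dim=1)   # one score per context
    dist = 0.5 * (1.0 - cos)                            # cosine distance in [0, 1]
    return dist.mean()          # also usable as the anomaly score of a sentence
```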
CrossInfoNet: Multi-Task Information Sharing Based Hand Pose Estimation
Title | CrossInfoNet: Multi-Task Information Sharing Based Hand Pose Estimation |
Authors | Kuo Du, Xiangbo Lin, Yi Sun, Xiaohong Ma |
Abstract | This paper focuses on vision-based hand pose estimation from a single depth map using a convolutional neural network (CNN). Our main contribution lies in designing a new pose regression network architecture named CrossInfoNet. The proposed CrossInfoNet decomposes the hand pose estimation task into a palm pose estimation sub-task and a finger pose estimation sub-task, and adopts a two-branch cross-connection structure to share beneficial complementary information between the sub-tasks. Our work is inspired by the multi-task information sharing mechanism, which has been little discussed in hand pose estimation using depth data in previous publications. In addition, we propose a heat-map guided feature extraction structure to get better feature maps, and train the complete network end-to-end. The effectiveness of the proposed CrossInfoNet is evaluated through extensive self-comparative experiments and in comparison with state-of-the-art methods on four public hand pose datasets. The code is available. |
Tasks | Hand Pose Estimation, Pose Estimation |
Published | 2019-06-01 |
URL | http://openaccess.thecvf.com/content_CVPR_2019/html/Du_CrossInfoNet_Multi-Task_Information_Sharing_Based_Hand_Pose_Estimation_CVPR_2019_paper.html |
PDF | http://openaccess.thecvf.com/content_CVPR_2019/papers/Du_CrossInfoNet_Multi-Task_Information_Sharing_Based_Hand_Pose_Estimation_CVPR_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/crossinfonet-multi-task-information-sharing |
Repo | https://github.com/dumyy/handpose |
Framework | tf |
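The two-branch cross-connection structure can be sketched as each sub-task consuming its own features concatenated with the other branch's. Layer types and sizes below are assumptions; the actual network operates on convolutional feature maps rather than flat vectors.

```python
import torch
import torch.nn as nn

class CrossBranchSketch(nn.Module):
    """Hedged sketch of the two-branch cross-connection idea: the palm branch
    and the finger branch each refine their own features with complementary
    information taken from the other branch."""
    def __init__(self, d):
        super().__init__()
        self.palm = nn.Linear(2 * d, d)
        self.finger = nn.Linear(2 * d, d)

    def forward(self, f_palm, f_finger):
        # Each sub-task consumes its own features plus the other branch's
        palm = torch.relu(self.palm(torch.cat([f_palm, f_finger], dim=-1)))
        finger = torch.relu(self.finger(torch.cat([f_finger, f_palm], dim=-1)))
        return palm, finger
```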
A Bayesian Monte Carlo approach for predicting the spread of infectious diseases
Title | A Bayesian Monte Carlo approach for predicting the spread of infectious diseases |
Authors | Olivera Stojanović, Johannes Leugering, Gordon Pipa, Stéphane Ghozzi, Alexander Ullrich |
Abstract | In this paper, a simple yet interpretable, probabilistic model is proposed for the prediction of reported case counts of infectious diseases. A spatio-temporal kernel is derived from training data to capture the typical interaction effects of reported infections across time and space, which provides insight into the dynamics of the spread of infectious diseases. Testing the model on a one-week-ahead prediction task for campylobacteriosis and rotavirus infections across Germany, as well as Lyme borreliosis across the federal state of Bavaria, shows that the proposed model performs on par with the state-of-the-art hhh4 model. However, it provides a full posterior distribution over parameters in addition to model predictions, which aids in the assessment of the model. The employed Bayesian Monte Carlo regression framework is easily extensible and allows for incorporating prior domain knowledge, which makes it suitable for use on limited, yet complex datasets as often encountered in epidemiology. |
Tasks | Bayesian Inference, Disease Prediction, Epidemiology, Multivariate Time Series Forecasting, Probabilistic Programming |
Published | 2019-04-26 |
URL | https://www.biorxiv.org/content/10.1101/617795v1 |
PDF | https://www.biorxiv.org/content/biorxiv/early/2019/04/26/617795.full.pdf |
PWC | https://paperswithcode.com/paper/a-bayesian-monte-carlo-approach-for |
Repo | https://github.com/ostojanovic/BSTIM |
Framework | none |
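As a concrete instance of the Bayesian Monte Carlo regression framework mentioned above, here is a minimal random-walk Metropolis sampler for a Poisson count model with a standard normal prior on the coefficients. This is a generic sketch, not the authors' spatio-temporal kernel model.

```python
import numpy as np

def metropolis_poisson(X, y, n_samples=5000, step=0.05, rng=None):
    """Hedged sketch of Bayesian Monte Carlo regression for count data:
    Poisson likelihood with a log link, standard normal prior on the
    coefficients, random-walk Metropolis sampling. X: (n, p); y: (n,)."""
    rng = rng or np.random.default_rng(0)
    beta = np.zeros(X.shape[1])

    def log_post(b):
        eta = X @ b                               # linear predictor
        # Poisson log-likelihood (up to a constant) plus N(0, 1) log-prior
        return np.sum(y * eta - np.exp(eta)) - 0.5 * np.sum(b ** 2)

    samples = []
    lp = log_post(beta)
    for _ in range(n_samples):
        prop = beta + step * rng.standard_normal(beta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            beta, lp = prop, lp_prop
        samples.append(beta.copy())
    return np.array(samples)   # a full posterior sample over the coefficients
```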
Deep Embedded SOM: Joint Representation Learning and Self-Organization
Title | Deep Embedded SOM: Joint Representation Learning and Self-Organization |
Authors | Florent Forest, Mustapha Lebbah, Hanene Azzag, Jérôme Lacaille |
Abstract | In the wake of recent advances in joint clustering and deep learning, we introduce the Deep Embedded Self-Organizing Map, a model that jointly learns representations and the code vectors of a self-organizing map. Our model is composed of an autoencoder and a custom SOM layer that are optimized in a joint training procedure, motivated by the idea that the SOM prior could help the autoencoder learn SOM-friendly representations. We evaluate SOM-based models in terms of clustering quality and unsupervised clustering accuracy, and study the benefits of joint training. |
Tasks | Dimensionality Reduction, Image/Document Clustering, Representation Learning, Self-Organized Clustering |
Published | 2019-04-24 |
URL | https://www.i6doc.com/en/book/?gcoi=28001100931280 |
PDF | http://florentfo.rest/files/ESANN-2019-DeepEmbeddedSOM-full-paper.pdf |
PWC | https://paperswithcode.com/paper/deep-embedded-som-joint-representation |
Repo | https://github.com/FlorentF9/DESOM |
Framework | tf |
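The joint training procedure amounts to one loss with two terms: autoencoder reconstruction plus SOM distortion with a Gaussian neighbourhood on the map grid. A hedged sketch (the temperature `T` and weighting `gamma` are assumptions; the paper anneals the neighbourhood over training):

```python
import torch

def desom_loss(x, x_rec, z, prototypes, grid, T=1.0, gamma=0.1):
    """Hedged sketch of the joint objective: autoencoder reconstruction plus a
    SOM distortion term that pulls latent codes toward the best-matching
    prototype and its grid neighbours. z: (B, d); prototypes: (K, d);
    grid: (K, 2) map coordinates of the prototypes."""
    rec = torch.mean((x - x_rec) ** 2)                 # reconstruction loss
    d = torch.cdist(z, prototypes) ** 2                # (B, K) latent distances
    bmu = d.argmin(dim=1)                              # best-matching units
    # Gaussian neighbourhood weights on the 2-D SOM grid
    grid_d = torch.cdist(grid[bmu].float(), grid.float()) ** 2
    w = torch.exp(-grid_d / (2 * T ** 2))
    som = torch.mean(torch.sum(w * d, dim=1))          # weighted distortion
    return rec + gamma * som
```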
CPM-Nets: Cross Partial Multi-View Networks
Title | CPM-Nets: Cross Partial Multi-View Networks |
Authors | Changqing Zhang, Zongbo Han, Yajie Cui, Huazhu Fu, Joey Tianyi Zhou, Qinghua Hu |
Abstract | Although multi-view learning has progressed rapidly in past decades, it remains challenging due to the difficulty of modeling complex correlations among different views, especially under the context of view missing. To address the challenge, we propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets). In this framework, we first give a formal definition of completeness and versatility for multi-view representation and then theoretically prove the versatility of the latent representation learned from our algorithm. To achieve completeness, the task of learning the latent multi-view representation is translated into a degradation process that mimics data transmission, such that the optimal tradeoff between consistency and complementarity across different views can be achieved. In contrast with methods that either complete missing views or group samples according to view-missing patterns, our model fully exploits all samples and all views to produce a structured representation for interpretability. Extensive experimental results validate the effectiveness of our algorithm over existing state-of-the-art methods. |
Tasks | MULTI-VIEW LEARNING |
Published | 2019-12-01 |
URL | http://papers.nips.cc/paper/8346-cpm-nets-cross-partial-multi-view-networks |
PDF | http://papers.nips.cc/paper/8346-cpm-nets-cross-partial-multi-view-networks.pdf |
PWC | https://paperswithcode.com/paper/cpm-nets-cross-partial-multi-view-networks |
Repo | https://github.com/hanmenghan/CPM_Nets |
Framework | tf |
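The completeness objective — reconstructing every observed view from a common latent code through view-specific degradation networks while masking missing views — can be sketched directly. The decoder architectures and the squared-error choice below are assumptions:

```python
import torch

def cpm_reconstruction_loss(h, decoders, views, masks):
    """Hedged sketch of the completeness objective: a common latent code `h`
    (B, d) is pushed to reconstruct every *observed* view through
    view-specific degradation networks; missing views are masked out.
    views[v]: (B, d_v); masks[v]: (B,) 0/1 float observation mask."""
    loss = 0.0
    for dec, x, m in zip(decoders, views, masks):
        err = ((dec(h) - x) ** 2).sum(dim=1)            # per-sample view error
        loss = loss + (m * err).sum() / m.sum().clamp(min=1)
    return loss
```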
Annotating Temporal Information in Clinical Notes for Timeline Reconstruction: Towards the Definition of Calendar Expressions
Title | Annotating Temporal Information in Clinical Notes for Timeline Reconstruction: Towards the Definition of Calendar Expressions |
Authors | Natalia Viani, Hegler Tissot, Ariane Bernardino, Sumithra Velupillai |
Abstract | To automatically analyse complex trajectory information enclosed in clinical text (e.g. timing of symptoms, duration of treatment), it is important to understand the related temporal aspects, anchoring each event on an absolute point in time. In the clinical domain, few temporally annotated corpora are currently available. Moreover, the underlying annotation schemas, which mainly rely on the TimeML standard, are not necessarily easy to apply to applications such as patient timeline reconstruction. In this work, we investigated how temporal information is documented in clinical text by annotating a corpus of medical reports with time expressions (TIMEXes), based on TimeML. The developed corpus is available to the NLP community. Starting from our annotations, we analysed the suitability of the TimeML TIMEX schema for capturing timeline information, identifying challenges and possible solutions. As a result, we propose a novel annotation schema that could be useful for timeline reconstruction: CALendar EXpression (CALEX). |
Tasks | |
Published | 2019-08-01 |
URL | https://www.aclweb.org/anthology/W19-5021/ |
PWC | https://paperswithcode.com/paper/annotating-temporal-information-in-clinical |
Repo | https://github.com/medesto/timeline-reconstruction |
Framework | none |
AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters
Title | AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters |
Authors | Xia Xiao, Zigeng Wang, Sanguthevar Rajasekaran |
Abstract | Reducing model redundancy is an important task for deploying complex deep learning models to resource-limited or time-sensitive devices. Directly regularizing or modifying weight values makes the pruning procedure less robust and more sensitive to the choice of hyperparameters, and it also requires prior knowledge to tune different hyperparameters for different models. To build a better generalized and easy-to-use pruning method, we propose AutoPrune, which prunes the network by optimizing a set of trainable auxiliary parameters instead of the original weights. The instability and noise during training on auxiliary parameters do not directly affect weight values, which makes the pruning process more robust to noise and less sensitive to hyperparameters. Moreover, we design gradient update rules for the auxiliary parameters to keep them consistent with the pruning task. Our method can automatically eliminate network redundancy with recoverability, relieving the complicated prior knowledge required to design thresholding functions and reducing the time for trial and error. We evaluate our method with LeNet and VGG-like networks on the MNIST and CIFAR-10 datasets, and with AlexNet, ResNet and MobileNet on ImageNet to establish the scalability of our work. Results show that our model achieves state-of-the-art sparsity, e.g., 7% and 23% FLOPs and 310x and 75x compression ratios for LeNet-5 and the VGG-like structure without accuracy drop, and 200M and 100M FLOPs for MobileNet V2 with accuracy of 73.32% and 66.83%, respectively. |
Tasks | Network Pruning |
Published | 2019-12-01 |
URL | http://papers.nips.cc/paper/9521-autoprune-automatic-network-pruning-by-regularizing-auxiliary-parameters |
PDF | http://papers.nips.cc/paper/9521-autoprune-automatic-network-pruning-by-regularizing-auxiliary-parameters.pdf |
PWC | https://paperswithcode.com/paper/autoprune-automatic-network-pruning-by |
Repo | https://github.com/xxshdw/auto_prune |
Framework | tf |
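Pruning through trainable auxiliary parameters rather than the weights themselves can be sketched as a hard 0/1 gate with a straight-through style backward pass. Note the paper designs its own gradient update rules for the auxiliary parameters; the straight-through gradient below is a common stand-in, not the authors' rule.

```python
import torch
import torch.nn.functional as F

class BinaryGate(torch.autograd.Function):
    """Hard 0/1 gate on each weight, driven by a trainable auxiliary score.
    Straight-through backward as a stand-in for the paper's update rules."""
    @staticmethod
    def forward(ctx, a):
        return (a > 0).float()          # keep a weight iff its auxiliary score > 0

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                 # pass the gradient straight through

def pruned_linear(x, weight, aux, bias=None):
    mask = BinaryGate.apply(aux)        # aux has the same shape as weight
    return F.linear(x, weight * mask, bias)
```

Because the gate only masks weights, pruning is recoverable: a pruned weight keeps its value and re-enters the network if its auxiliary score rises back above zero.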
Unsharp Masking Layer: Injecting Prior Knowledge in Convolutional Networks for Image Classification
Title | Unsharp Masking Layer: Injecting Prior Knowledge in Convolutional Networks for Image Classification |
Authors | Jose Carranza-Rojas, Saul Calderon-Ramirez, Adán Mora-Fallas, Michael Granados-Menani, Jordina Torrents-Barrena |
Abstract | Image enhancement refers to the enrichment of certain image features such as edges, boundaries, or contrast. The main objective is to process the original image so that the overall performance of visualization, classification and segmentation tasks is considerably improved. Traditional techniques require manual fine-tuning of the parameters to control enhancement behavior. Recent Convolutional Neural Network (CNN) approaches frequently employ the aforementioned techniques as an enriched pre-processing step. In this work, we present the first intrinsic CNN pre-processing layer based on the well-known unsharp masking algorithm. The proposed layer injects prior knowledge about how to enhance the image, by adding high-frequency information to the input, to subsequently emphasize meaningful image features. The layer optimizes the unsharp masking parameters during model training, without any manual intervention. We evaluate the network performance and impact on two applications: CIFAR100 image classification and the PlantCLEF identification challenge. The results show a significant improvement over popular CNNs, yielding gains of 9.49% and 2.42% for PlantCLEF and general-purpose CIFAR100, respectively. The design of an unsharp enhancement layer plainly boosts accuracy with negligible performance cost on simple CNN models, as prior knowledge is directly injected to improve robustness. |
Tasks | Image Classification, Image Enhancement, Scene Text Detection |
Published | 2019-09-29 |
URL | https://doi.org/10.1007/978-3-030-30508-6_1 |
Alternate | https://www.researchgate.net/publication/335776859_Unsharp_Masking_Layer_Injecting_Prior_Knowledge_in_Convolutional_Networks_for_Image_Classification |
PWC | https://paperswithcode.com/paper/unsharp-masking-layer-injecting-prior |
Repo | https://github.com/maeotaku/pytorch_usm |
Framework | pytorch |
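The unsharp masking operation itself is compact: add back an amplified difference between the input and a Gaussian-blurred copy, with the amount learned by backpropagation. In the sketch below the blur width is fixed and only the amount is trainable, which is a simplification of the paper's layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnsharpMasking2D(nn.Module):
    """Trainable unsharp-masking layer (simplified): out = x + lam * (x - blur(x)),
    where `lam` is learned during training. Kernel size, sigma, and the init
    of `lam` are illustrative choices, not the paper's exact settings."""
    def __init__(self, channels, kernel_size=5, sigma=1.0):
        super().__init__()
        self.lam = nn.Parameter(torch.tensor(0.5))   # learned sharpening amount
        # Fixed depthwise Gaussian blur kernel, one copy per channel
        ax = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        g = torch.exp(-ax ** 2 / (2 * sigma ** 2))
        k = torch.outer(g, g)
        k = (k / k.sum()).expand(channels, 1, kernel_size, kernel_size).contiguous()
        self.register_buffer("kernel", k)
        self.pad = kernel_size // 2
        self.channels = channels

    def forward(self, x):                            # x: (B, C, H, W)
        blur = F.conv2d(x, self.kernel, padding=self.pad, groups=self.channels)
        return x + self.lam * (x - blur)             # add amplified high frequencies
```

Placed as the first layer of a CNN, this sharpens inputs before feature extraction, and `lam` is tuned jointly with the rest of the network instead of by hand.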