Paper Group ANR 199
Performance Evaluation of Deep Generative Models for Generating Hand-Written Character Images
Title | Performance Evaluation of Deep Generative Models for Generating Hand-Written Character Images |
Authors | Tanmoy Mondal, LE Thi Thuy Trang, Mickaël Coustaty, Jean-Marc Ogier |
Abstract | There has been much work in the literature on the generation of various kinds of images, such as hand-written characters (MNIST dataset), scene images (CIFAR-10 dataset), various object images (ImageNet dataset), road signboard images (SVHN dataset), etc. Unfortunately, very limited work has been done in the domain of document image processing. Automatic image generation can lead to an enormous increase in labeled datasets with the help of only a limited amount of labeled data. Deep generative models can be primarily divided into two categories: the first is the auto-encoder (AE) and the second is the Generative Adversarial Network (GAN). In this paper, we evaluate various kinds of AEs as well as GANs and compare their performance on a hand-written digit dataset (MNIST) and on a historical hand-written character dataset of the Indonesian BALI language. Moreover, the generated characters are recognized with a character recognition tool to calculate their statistical performance with respect to the original character images. |
Tasks | Image Generation |
Published | 2020-02-26 |
URL | https://arxiv.org/abs/2002.11424v1 |
https://arxiv.org/pdf/2002.11424v1.pdf | |
PWC | https://paperswithcode.com/paper/performance-evaluation-of-deep-generative |
Repo | |
Framework | |
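The abstract above compares auto-encoder (AE) and GAN families for generating 28×28 character images. As a rough, illustrative sketch of the AE side of that comparison (not the authors' architectures), the following PyTorch snippet trains one step of a minimal convolutional auto-encoder; the layer sizes, latent dimension, and random batch are assumptions made purely for illustration.

```python
# Minimal convolutional auto-encoder for 28x28 character images (illustrative only;
# the paper evaluates several AE and GAN variants, none of which is reproduced here).
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# One illustrative training step on a random batch standing in for MNIST/BALI images.
batch = torch.rand(64, 1, 28, 28)
recon = model(batch)
loss = loss_fn(recon, batch)
opt.zero_grad()
loss.backward()
opt.step()
```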
Navigation-Based Candidate Expansion and Pretrained Language Models for Citation Recommendation
Title | Navigation-Based Candidate Expansion and Pretrained Language Models for Citation Recommendation |
Authors | Rodrigo Nogueira, Zhiying Jiang, Kyunghyun Cho, Jimmy Lin |
Abstract | Citation recommendation systems for the scientific literature, to help authors find papers that should be cited, have the potential to speed up discoveries and uncover new routes for scientific exploration. We treat this task as a ranking problem, which we tackle with a two-stage approach: candidate generation followed by re-ranking. Within this framework, we adapt to the scientific domain a proven combination based on “bag of words” retrieval followed by re-scoring with a BERT model. We experimentally show the effects of domain adaptation, both in terms of pretraining on in-domain data and exploiting in-domain vocabulary. In addition, we introduce a novel navigation-based document expansion strategy to enrich the candidate documents processed by our neural models. On three different collections from different scientific disciplines, we achieve the best-reported results in the citation recommendation task. |
Tasks | Domain Adaptation, Recommendation Systems |
Published | 2020-01-23 |
URL | https://arxiv.org/abs/2001.08687v1 |
https://arxiv.org/pdf/2001.08687v1.pdf | |
PWC | https://paperswithcode.com/paper/navigation-based-candidate-expansion-and |
Repo | |
Framework | |
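The paper's pipeline is "bag of words" candidate generation followed by neural re-scoring. The sketch below illustrates only the general shape of such a two-stage ranker, with a self-contained BM25 scorer and a placeholder re-ranking hook; the documents, query, and the dummy re-ranker are invented, and the BERT cross-encoder stage is not reproduced.

```python
# Two-stage retrieval sketch: BM25 candidate generation followed by a re-scoring hook.
# This is a schematic stand-in for the paper's pipeline, not the authors' code.
import math
from collections import Counter

docs = [
    "neural citation recommendation with document expansion",
    "bm25 baselines for scientific retrieval",
    "pretrained language models for ranking",
]
tokenized = [d.split() for d in docs]
avgdl = sum(len(d) for d in tokenized) / len(tokenized)
df = Counter(t for d in tokenized for t in set(d))
N, k1, b = len(tokenized), 1.5, 0.75

def bm25_score(query, doc):
    tf = Counter(doc)
    score = 0.0
    for term in query.split():
        if term not in tf:
            continue
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        score += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
    return score

def rerank(query, candidates):
    # Placeholder for the neural re-scoring stage; the paper re-scores (query, candidate)
    # pairs with a BERT-style model, which is not reproduced here.
    return sorted(candidates, key=lambda i: bm25_score(query, tokenized[i]), reverse=True)

query = "citation recommendation with pretrained language models"
candidates = sorted(range(N), key=lambda i: bm25_score(query, tokenized[i]), reverse=True)[:2]
print([docs[i] for i in rerank(query, candidates)])
```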
Separation of target anatomical structure and occlusions in chest radiographs
Title | Separation of target anatomical structure and occlusions in chest radiographs |
Authors | Johannes Hofmanninger, Sebastian Roehrich, Helmut Prosch, Georg Langs |
Abstract | Chest radiographs are commonly performed, low-cost exams for screening and diagnosis. However, radiographs are 2D representations of 3D structures, causing considerable clutter that impedes visual inspection and automated image analysis. Here, we propose a Fully Convolutional Network to suppress, for a specific task, undesired visual structure from radiographs while retaining the relevant image information such as lung parenchyma. The proposed algorithm creates reconstructed radiographs and ground-truth data from high-resolution CT scans. Results show that removing visual variation that is irrelevant for a classification task improves the performance of a classifier when only limited training data are available. This is particularly relevant because a low number of ground-truth cases is common in medical imaging. |
Tasks | |
Published | 2020-02-03 |
URL | https://arxiv.org/abs/2002.00751v1 |
https://arxiv.org/pdf/2002.00751v1.pdf | |
PWC | https://paperswithcode.com/paper/separation-of-target-anatomical-structure-and |
Repo | |
Framework | |
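The abstract describes a fully convolutional network that maps a radiograph to a version with task-irrelevant structure suppressed, trained on reconstructions derived from CT. A minimal image-to-image sketch of that idea is shown below; the architecture, residual formulation, and L1 loss are illustrative assumptions, not the authors' network.

```python
# Schematic fully convolutional network for image-to-image suppression of occluding
# structure (input radiograph -> image keeping only the target anatomy).
import torch
import torch.nn as nn

class SuppressionFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.ReLU(),
                                  nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                                nn.ConvTranspose2d(32, 1, 4, 2, 1))

    def forward(self, x):
        # Predict a correction on top of the input (a simplifying assumption).
        return torch.sigmoid(self.up(self.down(x)) + x)

net = SuppressionFCN()
x = torch.rand(2, 1, 256, 256)       # stand-in for radiographs reconstructed from CT
target = torch.rand(2, 1, 256, 256)  # stand-in for ground truth with occlusions removed
loss = nn.functional.l1_loss(net(x), target)
loss.backward()
```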
Twitter Bot Detection Using Bidirectional Long Short-term Memory Neural Networks and Word Embeddings
Title | Twitter Bot Detection Using Bidirectional Long Short-term Memory Neural Networks and Word Embeddings |
Authors | Feng Wei, Uyen Trang Nguyen |
Abstract | Twitter is a web application playing the dual roles of online social networking and micro-blogging. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots. Legitimate bots generate a large amount of benign contextual content, i.e., tweets delivering news and updating feeds, while malicious bots spread spam or malicious content. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human and spambot accounts on Twitter by employing recurrent neural networks, specifically bidirectional Long Short-term Memory (BiLSTM), to efficiently capture features across tweets. To the best of our knowledge, our work is the first to develop a recurrent neural model with word embeddings that distinguishes Twitter bots from human accounts and requires no prior knowledge or assumptions about users' profiles, friendship networks, or historical behavior on the target account. Moreover, our model does not require any handcrafted features. The preliminary simulation results are very encouraging: experiments on the cresci-2017 dataset show that our approach can achieve competitive performance compared with existing state-of-the-art bot detection systems. |
Tasks | Twitter Bot Detection, Word Embeddings |
Published | 2020-02-03 |
URL | https://arxiv.org/abs/2002.01336v1 |
https://arxiv.org/pdf/2002.01336v1.pdf | |
PWC | https://paperswithcode.com/paper/twitter-bot-detection-using-bidirectional |
Repo | |
Framework | |
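The model described is a BiLSTM over word embeddings that classifies an account's tweets as human or bot. The following sketch shows a generic version of such a classifier; the vocabulary size, dimensions, pooling choice, and random batch are assumptions and do not reflect the paper's cresci-2017 preprocessing.

```python
# Minimal BiLSTM tweet classifier with learned word embeddings (human vs. bot).
import torch
import torch.nn as nn

class BiLSTMBotDetector(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, 2)  # two classes: human / bot

    def forward(self, token_ids):
        emb = self.embed(token_ids)             # (batch, seq_len, embed_dim)
        out, _ = self.lstm(emb)                 # (batch, seq_len, 2*hidden_dim)
        pooled = out.mean(dim=1)                # average over the tweet's tokens
        return self.fc(pooled)

model = BiLSTMBotDetector()
tokens = torch.randint(1, 10000, (8, 30))       # a batch of 8 tweets, 30 token ids each
logits = model(tokens)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
loss.backward()
```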
Repository for Reusing Artifacts of Artificial Neural Networks
Title | Repository for Reusing Artifacts of Artificial Neural Networks |
Authors | Javad Ghofrani, Ehsan Kozegar, Mohammad Divband Soorati, Arezoo Bozorgmehr, Hongfei Chen, Maximilian Naake |
Abstract | Artificial Neural Networks (ANNs) have replaced conventional software systems in various domains such as machine translation, natural language processing, and image processing. So, why do we need a repository for artificial neural networks? These systems are developed with labeled data, and there are strong dependencies between the data used for training and the data used for testing the network. Another challenge is data quality as well as reusability. We are trying to apply concepts from classic software engineering that are not limited to the model, whereas data and code have mostly not been dealt with in other projects. The first question that comes to mind might be: why don't we use GitHub, a well-known and widely used tool for reuse? The reason is that GitHub, although very good in its class, is not developed for machine learning applications and focuses more on software reuse. In addition, GitHub does not allow executing code directly on the platform, which would be very convenient for collaborative work on one project. |
Tasks | Machine Translation |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13619v1 |
https://arxiv.org/pdf/2003.13619v1.pdf | |
PWC | https://paperswithcode.com/paper/repository-for-reusing-artifacts-of |
Repo | |
Framework | |
Stereo Endoscopic Image Super-Resolution Using Disparity-Constrained Parallel Attention
Title | Stereo Endoscopic Image Super-Resolution Using Disparity-Constrained Parallel Attention |
Authors | Tianyi Zhang, Yun Gu, Xiaolin Huang, Enmei Tu, Jie Yang |
Abstract | With the popularity of stereo cameras in computer-assisted surgery techniques, a second viewpoint provides additional information during surgery. However, how to effectively access and use stereo information for super-resolution (SR) purposes is often a challenge. In this paper, we propose a disparity-constrained stereo super-resolution network (DCSSRnet) to simultaneously compute super-resolved images for a stereo image pair. In particular, we incorporate a disparity-based constraint mechanism into the generation of SR images in a deep neural network framework with additional atrous parallax-attention modules. Experimental results on laparoscopic images demonstrate that the proposed framework outperforms current SR methods on both quantitative and qualitative evaluations. Our DCSSRnet provides a promising solution for enhancing the spatial resolution of stereo image pairs, which will be extremely beneficial for endoscopic surgery. |
Tasks | Image Super-Resolution, Super-Resolution |
Published | 2020-03-19 |
URL | https://arxiv.org/abs/2003.08539v1 |
https://arxiv.org/pdf/2003.08539v1.pdf | |
PWC | https://paperswithcode.com/paper/stereo-endoscopic-image-super-resolution |
Repo | |
Framework | |
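The core mechanism is parallax (epipolar-line) attention between the left and right views, constrained by disparity. The snippet below sketches plain row-wise parallax attention, i.e., each left-view pixel attends to right-view pixels on the same horizontal line; the disparity constraint and the atrous variants used in DCSSRnet are not reproduced.

```python
# Sketch of a parallax-attention step: for each row, every pixel of the left feature map
# attends to pixels of the right feature map along the same horizontal (epipolar) line.
import torch

def parallax_attention(feat_left, feat_right):
    # feat_*: (batch, channels, height, width)
    b, c, h, w = feat_left.shape
    q = feat_left.permute(0, 2, 3, 1).reshape(b * h, w, c)     # queries per row
    k = feat_right.permute(0, 2, 1, 3).reshape(b * h, c, w)    # keys per row
    attn = torch.softmax(torch.bmm(q, k) / c ** 0.5, dim=-1)   # (b*h, w, w) left->right weights
    v = feat_right.permute(0, 2, 3, 1).reshape(b * h, w, c)
    warped = torch.bmm(attn, v)                                # right features aligned to left view
    return warped.reshape(b, h, w, c).permute(0, 3, 1, 2), attn

left = torch.rand(2, 16, 24, 32)
right = torch.rand(2, 16, 24, 32)
aligned_right, attn = parallax_attention(left, right)
print(aligned_right.shape, attn.shape)  # torch.Size([2, 16, 24, 32]) torch.Size([48, 32, 32])
```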
Any-Shot Object Detection
Title | Any-Shot Object Detection |
Authors | Shafin Rahman, Salman Khan, Nick Barnes, Fahad Shahbaz Khan |
Abstract | Previous work on novel object detection considers zero- or few-shot settings where none or few examples of each category are available for training. In real-world scenarios, it is less practical to expect that 'all' the novel classes are either unseen or have few examples. Here, we propose a more realistic setting termed 'Any-shot detection', where totally unseen and few-shot categories can simultaneously co-occur during inference. Any-shot detection offers unique challenges compared to conventional novel object detection, such as a high imbalance between unseen, few-shot, and seen object classes, susceptibility to forgetting base training while learning novel classes, and distinguishing novel classes from the background. To address these challenges, we propose a unified any-shot detection model that can concurrently learn to detect both zero-shot and few-shot object classes. Our core idea is to use class semantics as prototypes for object detection, a formulation that naturally minimizes knowledge forgetting and mitigates the class imbalance in the label space. Besides, we propose a rebalanced loss function that emphasizes difficult few-shot cases but avoids overfitting on the novel classes to allow detection of totally unseen classes. Without bells and whistles, our framework can also be used solely for zero-shot detection and few-shot detection tasks. We report extensive experiments on the Pascal VOC and MS-COCO datasets, where our approach is shown to provide significant improvements. |
Tasks | Object Detection |
Published | 2020-03-16 |
URL | https://arxiv.org/abs/2003.07003v1 |
https://arxiv.org/pdf/2003.07003v1.pdf | |
PWC | https://paperswithcode.com/paper/any-shot-object-detection |
Repo | |
Framework | |
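The core idea stated in the abstract is to use class semantics as prototypes so that unseen and few-shot classes share one classification space. A hedged sketch of such a prototype head is given below; the projection, cosine scoring, temperature, and random embeddings are illustrative assumptions rather than the paper's detection head.

```python
# Sketch of "class semantics as prototypes": region features are projected into the
# semantic-embedding space and scored by cosine similarity against per-class word
# vectors, so unseen classes can be scored without visual training examples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticPrototypeHead(nn.Module):
    def __init__(self, feat_dim, sem_dim, class_embeddings, temperature=0.1):
        super().__init__()
        self.proj = nn.Linear(feat_dim, sem_dim)
        # class_embeddings: (num_classes, sem_dim) word vectors used as fixed prototypes.
        self.register_buffer("prototypes", F.normalize(class_embeddings, dim=-1))
        self.temperature = temperature

    def forward(self, region_feats):                  # (num_regions, feat_dim)
        z = F.normalize(self.proj(region_feats), dim=-1)
        return z @ self.prototypes.t() / self.temperature  # (num_regions, num_classes)

embeddings = torch.randn(20, 300)                     # stand-in for word vectors of 20 classes
head = SemanticPrototypeHead(feat_dim=1024, sem_dim=300, class_embeddings=embeddings)
scores = head(torch.randn(5, 1024))                   # class scores for 5 region proposals
print(scores.shape)                                   # torch.Size([5, 20])
```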
Querying and Repairing Inconsistent Prioritized Knowledge Bases: Complexity Analysis and Links with Abstract Argumentation
Title | Querying and Repairing Inconsistent Prioritized Knowledge Bases: Complexity Analysis and Links with Abstract Argumentation |
Authors | Meghyn Bienvenu, Camille Bourgaux |
Abstract | In this paper, we explore the issue of inconsistency handling over prioritized knowledge bases (KBs), which consist of an ontology, a set of facts, and a priority relation between conflicting facts. In the database setting, a closely related scenario has been studied and led to the definition of three different notions of optimal repairs (global, Pareto, and completion) of a prioritized inconsistent database. After transferring the notions of globally-, Pareto- and completion-optimal repairs to our setting, we study the data complexity of the core reasoning tasks: query entailment under inconsistency-tolerant semantics based upon optimal repairs, existence of a unique optimal repair, and enumeration of all optimal repairs. Our results provide a nearly complete picture of the data complexity of these tasks for ontologies formulated in common DL-Lite dialects. The second contribution of our work is to clarify the relationship between optimal repairs and different notions of extensions for (set-based) argumentation frameworks. Among our results, we show that Pareto-optimal repairs correspond precisely to stable extensions (and often also to preferred extensions), and we propose a novel semantics for prioritized KBs which is inspired by grounded extensions and enjoys favourable computational properties. Our study also yields some results of independent interest concerning preference-based argumentation frameworks. |
Tasks | Abstract Argumentation |
Published | 2020-03-12 |
URL | https://arxiv.org/abs/2003.05746v1 |
https://arxiv.org/pdf/2003.05746v1.pdf | |
PWC | https://paperswithcode.com/paper/querying-and-repairing-inconsistent |
Repo | |
Framework | |
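One of the paper's results relates Pareto-optimal repairs to stable extensions of argumentation frameworks. The toy snippet below brute-forces the stable extensions of a three-argument framework to make that notion concrete; it is a definition-level illustration, not an implementation of the paper's repair semantics.

```python
# Brute-force check of stable extensions for a tiny abstract argumentation framework:
# a set is stable iff it is conflict-free and attacks every argument outside it.
from itertools import combinations

arguments = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}   # a <-> b conflict, b attacks c

def is_stable(subset):
    s = set(subset)
    conflict_free = not any((x, y) in attacks for x in s for y in s)
    attacks_rest = all(any((x, y) in attacks for x in s) for y in arguments - s)
    return conflict_free and attacks_rest

stable = [set(s) for r in range(len(arguments) + 1)
          for s in combinations(sorted(arguments), r) if is_stable(s)]
print(stable)   # two stable extensions: {'b'} and {'a', 'c'}
```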
Large Scale Tensor Regression using Kernels and Variational Inference
Title | Large Scale Tensor Regression using Kernels and Variational Inference |
Authors | Robert Hu, Geoff K. Nicholls, Dino Sejdinovic |
Abstract | We outline an inherent weakness of tensor factorization models when latent factors are expressed as a function of side information and propose a novel method to mitigate this weakness. We coin our method Kernel Fried Tensor (KFT) and present it as a large scale forecasting tool for high dimensional data. Our results show superior performance against LightGBM and Field Aware Factorization Machines (FFM), two algorithms with proven track records widely used in industrial forecasting. We also develop a variational inference framework for KFT and associate our forecasts with calibrated uncertainty estimates on three large scale datasets. Furthermore, KFT is empirically shown to be robust against uninformative side information in terms of constants and Gaussian noise. |
Tasks | |
Published | 2020-02-11 |
URL | https://arxiv.org/abs/2002.04704v1 |
https://arxiv.org/pdf/2002.04704v1.pdf | |
PWC | https://paperswithcode.com/paper/large-scale-tensor-regression-using-kernels |
Repo | |
Framework | |
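The weakness analyzed in the abstract arises when latent factors are expressed as a function of side information. The toy below fits a CP-style factorization in which one factor matrix is produced from side information by a linear map; ranks, shapes, and data are placeholders, and neither the KFT kernel construction nor its variational inference is reproduced.

```python
# Toy CP-style tensor regression where the "user" factor is a function of side information
# (the general setting whose weakness the paper analyzes). Illustrative assumptions only.
import torch
import torch.nn as nn

I, J, K, rank, side_dim = 30, 20, 10, 5, 8
side_info = torch.randn(I, side_dim)          # e.g. user covariates
target = torch.randn(I, J, K)                 # observed tensor (user x item x time)

class SideInfoCP(nn.Module):
    def __init__(self):
        super().__init__()
        self.user_map = nn.Linear(side_dim, rank)        # factor expressed via side information
        self.item = nn.Parameter(torch.randn(J, rank) * 0.1)
        self.time = nn.Parameter(torch.randn(K, rank) * 0.1)

    def forward(self, side):
        u = self.user_map(side)                          # (I, rank)
        return torch.einsum("ir,jr,kr->ijk", u, self.item, self.time)

model = SideInfoCP()
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(side_info), target)
    loss.backward()
    opt.step()
print(float(loss))
```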
Unsupervised Image-generation Enhanced Adaptation for Object Detection in Thermal images
Title | Unsupervised Image-generation Enhanced Adaptation for Object Detection in Thermal images |
Authors | Wanyi Li, Fuyu Li, Yongkang Luo, Peng Wang |
Abstract | Object detection in thermal images is an important computer vision task and has many applications such as unmanned vehicles, robotics, surveillance, and night vision. Deep learning based detectors have achieved major progress but usually need large amounts of labelled training data. However, labelled data for object detection in thermal images is scarce and expensive to collect. How to take advantage of the large number of labelled visible images and adapt them to the thermal image domain remains an open problem. This paper proposes an unsupervised image-generation enhanced adaptation method for object detection in thermal images. To reduce the gap between the visible domain and the thermal domain, the proposed method generates simulated fake thermal images that are similar to the target images while preserving the annotation information of the visible source domain. The image generation includes a CycleGAN based image-to-image translation and an intensity inversion transformation. The generated fake thermal images are used as a renewed source domain, and the off-the-shelf Domain Adaptive Faster RCNN is then utilized to reduce the gap between the generated intermediate domain and the thermal target domain. Experiments demonstrate the effectiveness and superiority of the proposed method. |
Tasks | Image Generation, Image-to-Image Translation, Object Detection |
Published | 2020-02-17 |
URL | https://arxiv.org/abs/2002.06770v1 |
https://arxiv.org/pdf/2002.06770v1.pdf | |
PWC | https://paperswithcode.com/paper/unsupervised-image-generation-enhanced |
Repo | |
Framework | |
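The image-generation stage combines CycleGAN translation with an intensity inversion transformation. The snippet below sketches only the inversion step on a stand-in for a translated image; the CycleGAN model and the paper's exact preprocessing are assumed and not reproduced.

```python
# Sketch of the intensity-inversion step applied to a translated image; the CycleGAN
# visible->thermal translation stage is assumed to have produced `translated`.
import numpy as np

def intensity_inversion(image_uint8):
    """Invert pixel intensities of a grayscale image (simple per-pixel inversion)."""
    return 255 - image_uint8

translated = (np.random.rand(240, 320) * 255).astype(np.uint8)  # stand-in for a CycleGAN output
fake_thermal = intensity_inversion(translated)
assert fake_thermal.dtype == np.uint8 and fake_thermal.shape == translated.shape
```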
Continual Universal Object Detection
Title | Continual Universal Object Detection |
Authors | Xialei Liu, Hao Yang, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto |
Abstract | Object detection has improved significantly in recent years on multiple challenging benchmarks. However, most existing detectors are still domain-specific: the models are trained and tested on a single domain. When adapting these detectors to new domains, they often suffer from catastrophic forgetting of previous knowledge. In this paper, we propose a continual object detector that can learn sequentially from different domains without forgetting. First, we explore learning the object detector continually in different scenarios across various domains and categories. Based on this analysis, we propose attentive feature distillation, which leverages both bottom-up and top-down attention to mitigate forgetting. It takes advantage of attention to ignore noisy background information and of feature distillation to provide strong supervision. Finally, for the most challenging scenarios, we propose an adaptive exemplar sampling method that effectively leverages exemplars from previous tasks to further reduce forgetting. The experimental results show the excellent performance of our proposed method in three different scenarios across seven different object detection datasets. |
Tasks | Object Detection |
Published | 2020-02-13 |
URL | https://arxiv.org/abs/2002.05347v1 |
https://arxiv.org/pdf/2002.05347v1.pdf | |
PWC | https://paperswithcode.com/paper/continual-universal-object-detection |
Repo | |
Framework | |
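Attentive feature distillation weights the distillation signal by attention so that noisy background regions contribute less. A generic sketch of such an attention-weighted feature-distillation loss follows; it illustrates the mechanism only and is not the paper's exact combination of bottom-up and top-down attention.

```python
# Attention-weighted feature-distillation loss: the discrepancy between old (teacher)
# and new (student) detector features is down-weighted where teacher activation is weak.
import torch

def attentive_distillation_loss(student_feat, teacher_feat, eps=1e-6):
    # features: (batch, channels, h, w) from the same backbone stage
    attention = teacher_feat.abs().mean(dim=1, keepdim=True)           # spatial attention map
    attention = attention / (attention.sum(dim=(2, 3), keepdim=True) + eps)
    per_pixel = (student_feat - teacher_feat).pow(2).mean(dim=1, keepdim=True)
    return (attention * per_pixel).sum(dim=(2, 3)).mean()

student = torch.rand(2, 256, 32, 32, requires_grad=True)
teacher = torch.rand(2, 256, 32, 32)
loss = attentive_distillation_loss(student, teacher)
loss.backward()
```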
Towards a Resilient Machine Learning Classifier – a Case Study of Ransomware Detection
Title | Towards a Resilient Machine Learning Classifier – a Case Study of Ransomware Detection |
Authors | Chih-Yuan Yang, Ravi Sahita |
Abstract | The damage caused by crypto-ransomware, due to encryption, is difficult to revert and causes data loss. In this paper, a machine learning (ML) classifier was built for early detection of ransomware that uses cryptography (crypto-ransomware) based on program behavior. If signature-based detection is missed, a behavior-based detector can be the last line of defense to detect and contain the damage. We find that the input/output activities of ransomware and the file-content entropy are unique traits for detecting crypto-ransomware. A deep-learning (DL) classifier can detect ransomware with high accuracy and a low false positive rate. We conduct adversarial research against the models generated, using simulated ransomware programs to launch a gray-box analysis that probes the weaknesses of ML classifiers and improves model robustness. In addition to accuracy and resiliency, trustworthiness is the other key criterion for a quality detector; making sure that the correct information was used for inference is important for a security application. The Integrated Gradients method was used to explain the deep learning model and also to reveal why false negatives evade detection. The approaches to build and evaluate a real-world detector are demonstrated and discussed. |
Tasks | |
Published | 2020-03-13 |
URL | https://arxiv.org/abs/2003.06428v1 |
https://arxiv.org/pdf/2003.06428v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-a-resilient-machine-learning |
Repo | |
Framework | |
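One of the traits highlighted above is file-content entropy, since encrypted output is close to random bytes. The helper below computes Shannon entropy over raw bytes; how it is thresholded or combined with I/O features in the authors' detector is not specified here.

```python
# Shannon entropy of file content: encrypted data approaches 8 bits per byte.
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(byte_entropy(b"AAAAAAAA"))            # 0.0  (uniform, highly compressible)
print(byte_entropy(bytes(range(256)) * 4))  # 8.0  (looks like encrypted/random data)
```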
Progressive Object Transfer Detection
Title | Progressive Object Transfer Detection |
Authors | Hao Chen, Yali Wang, Guoyou Wang, Xiang Bai, Yu Qiao |
Abstract | Recent development of object detection mainly depends on deep learning with large-scale benchmarks. However, collecting such fully-annotated data is often difficult or expensive for real-world applications, which restricts the power of deep neural networks in practice. Alternatively, humans can detect new objects with little annotation burden, since humans often use prior knowledge to identify new objects from a few elaborately-annotated examples and subsequently generalize this capacity by exploiting objects from wild images. Inspired by this procedure of learning to detect, we propose a novel Progressive Object Transfer Detection (POTD) framework. Specifically, we make three main contributions in this paper. First, POTD can effectively leverage various object supervision from different domains in a progressive detection procedure. Via such human-like learning, one can boost a target detection task with few annotations. Second, POTD consists of two delicate transfer stages, i.e., Low-Shot Transfer Detection (LSTD) and Weakly-Supervised Transfer Detection (WSTD). In LSTD, we distill the implicit object knowledge of the source detector to enhance the target detector with few annotations; this effectively warms up the subsequent WSTD stage. In WSTD, we design a recurrent object labelling mechanism for learning to annotate weakly-labeled images. More importantly, we exploit the reliable object supervision from LSTD, which further enhances the robustness of the target detector in the WSTD stage. Finally, we perform extensive experiments on a number of challenging detection benchmarks with different settings. The results demonstrate that our POTD outperforms recent state-of-the-art approaches. |
Tasks | Object Detection |
Published | 2020-02-12 |
URL | https://arxiv.org/abs/2002.04741v2 |
https://arxiv.org/pdf/2002.04741v2.pdf | |
PWC | https://paperswithcode.com/paper/progressive-object-transfer-detection |
Repo | |
Framework | |
A Novel Graph based Trajectory Predictor with Pseudo Oracle
Title | A Novel Graph based Trajectory Predictor with Pseudo Oracle |
Authors | Biao Yang, Guocheng Yan, Pin Wang, Chingyao Chan, Xiaofeng Liu, Yang Chen |
Abstract | Pedestrian trajectory prediction in dynamic scenes remains a challenging and critical problem in numerous applications, such as self-driving cars and socially aware robots. The challenges concentrate on capturing pedestrians' social interactions and handling their future uncertainties. Pedestrians' head orientations can be used as an oracle that indicates relevant pedestrians [1] and are thus beneficial for modeling social interactions. Moreover, the latent variable distribution of pedestrians' future trajectories can be regarded as another oracle. However, few works fully utilize this oracle information for improved prediction performance. In this work, we propose GTPPO (Graph-based Trajectory Predictor with Pseudo Oracle), a generative model-based trajectory predictor. Pedestrians' social interactions are captured by the proposed GA2T (Graph Attention social Attention neTwork) module. Social attention is calculated on the basis of pedestrians' moving directions, which are treated as a pseudo oracle. Moreover, we propose a latent variable predictor to learn the latent variable distribution from observed trajectories. This latent variable distribution reflects pedestrians' future trajectories and can therefore be taken as another pseudo oracle. We compare the performance of GTPPO with several recently proposed methods on benchmark datasets. Quantitative evaluations demonstrate that GTPPO outperforms state-of-the-art methods with lower average and final displacement errors. Qualitative evaluations show that GTPPO successfully recognizes sudden motion changes, since the estimated latent variable reflects the future trajectories. |
Tasks | Self-Driving Cars, Trajectory Prediction |
Published | 2020-02-02 |
URL | https://arxiv.org/abs/2002.00391v1 |
https://arxiv.org/pdf/2002.00391v1.pdf | |
PWC | https://paperswithcode.com/paper/a-novel-graph-based-trajectory-predictor-with |
Repo | |
Framework | |
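GA2T computes social attention using pedestrians' moving directions as a pseudo oracle. The sketch below shows one way such direction-aware attention could look at a single time step, weighting neighbours that lie along a pedestrian's heading; it is an assumption-laden illustration, not the GA2T or GTPPO implementation.

```python
# Direction-aware social attention sketch: pedestrian j gets more weight in i's
# aggregation when j lies roughly along i's moving direction.
import torch
import torch.nn.functional as F

def social_attention(positions, velocities, hidden):
    # positions, velocities: (n, 2); hidden: (n, d) per-pedestrian state embeddings
    rel = positions.unsqueeze(0) - positions.unsqueeze(1)             # rel[i, j] = p_j - p_i
    heading = F.normalize(velocities, dim=-1)                         # unit moving directions
    direction_bias = F.cosine_similarity(rel, heading.unsqueeze(1).expand_as(rel), dim=-1)
    compat = hidden @ hidden.t() / hidden.shape[-1] ** 0.5            # feature compatibility
    attn = torch.softmax(compat + direction_bias, dim=-1)             # (n, n) attention weights
    return attn @ hidden                                              # socially aggregated states

n, d = 4, 16
pooled = social_attention(torch.randn(n, 2), torch.randn(n, 2), torch.randn(n, d))
print(pooled.shape)  # torch.Size([4, 16])
```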
Mining Changes in User Expectation Over Time From Online Reviews
Title | Mining Changes in User Expectation Over Time From Online Reviews |
Authors | Tianjun Hou, Bernard Yannou, Yann Leroy, Emilie Poirson |
Abstract | Customers post online reviews at any time. With the timestamp of online reviews, they can be regarded as a flow of information. With this characteristic, designers can capture the changes in customer feedback to help set up product improvement strategies. Here we propose an approach for capturing changes of user expectation on product affordances based on the online reviews for two generations of products. First, the approach uses a rule-based natural language processing method to automatically identify and structure product affordances from review text. Then, inspired by the Kano model which classifies preferences of product attributes in five categories, conjoint analysis is used to quantitatively categorize the structured affordances. Finally, changes of user expectation can be found by applying the conjoint analysis on the online reviews posted for two successive generations of products. A case study based on the online reviews of Kindle e-readers downloaded from amazon.com shows that designers can use our proposed approach to evaluate their product improvement strategies for previous products and develop new product improvement strategies for future products. |
Tasks | |
Published | 2020-01-13 |
URL | https://arxiv.org/abs/2001.09898v1 |
https://arxiv.org/pdf/2001.09898v1.pdf | |
PWC | https://paperswithcode.com/paper/mining-changes-in-user-expectation-over-time |
Repo | |
Framework | |
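The approach extracts affordances with rule-based patterns and then categorizes them with conjoint analysis in the spirit of the Kano model. The sketch below is a rough stand-in: a regex pulls "can/could + verb" phrases and an ordinary least-squares fit relates mentions to ratings; the patterns, reviews, ratings, and the use of OLS in place of a proper conjoint analysis are all assumptions.

```python
# Toy stand-in for the pipeline: (1) rule-based extraction of candidate affordance verbs,
# (2) a least-squares fit relating affordance mentions to ratings. All data is invented.
import re
import numpy as np

reviews = [
    ("I can read outside in sunlight and the battery lasts weeks", 5),
    ("You could adjust the backlight but the page turns lag", 3),
    ("I can read at night easily with the new light", 5),
    ("The store is slow and I could not find my books", 2),
]
pattern = re.compile(r"\b(?:can|could)\s+(?:not\s+)?(\w+)", re.IGNORECASE)

affordances = sorted({verb.lower() for text, _ in reviews for verb in pattern.findall(text)})
X = np.array([[1.0] + [float(verb in [v.lower() for v in pattern.findall(text)])
                       for verb in affordances] for text, _ in reviews])
y = np.array([rating for _, rating in reviews], dtype=float)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, weight in zip(["intercept"] + affordances, coef):
    print(f"{name:>10}: {weight:+.2f}")
```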