Paper Group ANR 893
Single MR Image Super-Resolution via Channel Splitting and Serial Fusion Network
Title | Single MR Image Super-Resolution via Channel Splitting and Serial Fusion Network |
Authors | Zhao Xiaole, Huali Zhang, Hangfei Liu, Yun Qin, Tao Zhang, Xueming Zou |
Abstract | Spatial resolution is a critical imaging parameter in magnetic resonance imaging (MRI). Acquiring high resolution MRI data usually takes long scanning time and would be subject to motion artifacts due to hardware, physical, and physiological limitations. Single image super-resolution (SISR), especially that based on deep learning techniques, is an effective and promising alternative technique to improve the current spatial resolution of magnetic resonance (MR) images. However, deeper networks are more difficult to train effectively because the information is gradually weakened as the network deepens. This problem becomes more serious for medical images due to the degradation of training examples. In this paper, we present a novel channel splitting and serial fusion network (CSSFN) for single MR image super-resolution. Specifically, the proposed CSSFN network splits the hierarchical features into a series of subfeatures, which are then integrated together in a serial manner. Thus, the network becomes deeper and can deal with the subfeatures on different channels discriminatively. Besides, a dense global feature fusion (DGFF) is adopted to integrate the intermediate features, which further promotes the information flow in the network. Extensive experiments on several typical MR images show the superiority of our CSSFN model over other advanced SISR methods. |
Tasks | Image Super-Resolution, Super-Resolution |
Published | 2019-01-19 |
URL | http://arxiv.org/abs/1901.06484v1 |
http://arxiv.org/pdf/1901.06484v1.pdf | |
PWC | https://paperswithcode.com/paper/single-mr-image-super-resolution-via-channel |
Repo | |
Framework | |
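
The core idea the abstract describes, splitting hierarchical features along the channel dimension and fusing the sub-features serially, can be illustrated with a minimal PyTorch sketch. This is an illustrative reading of the splitting/serial-fusion idea, not the authors' CSSFN code; the layer sizes, group count, and fusion rule are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSplitSerialFusion(nn.Module):
    """Toy block: split features into channel groups, process each group,
    and fuse them serially (each group also sees the previous fused output).
    Illustrative sketch of the channel-splitting idea, not the paper's CSSFN."""
    def __init__(self, channels: int = 64, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        g = channels // groups
        # one small conv per sub-feature; later groups also take the previous output
        self.branches = nn.ModuleList(
            [nn.Conv2d(g if i == 0 else 2 * g, g, kernel_size=3, padding=1)
             for i in range(groups)]
        )
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)  # final 1x1 fusion

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        subs = torch.chunk(x, self.groups, dim=1)        # split along channels
        outs, prev = [], None
        for sub, conv in zip(subs, self.branches):
            inp = sub if prev is None else torch.cat([sub, prev], dim=1)
            prev = torch.relu(conv(inp))                 # serial dependence on the previous group
            outs.append(prev)
        return self.fuse(torch.cat(outs, dim=1)) + x     # residual connection

if __name__ == "__main__":
    y = ChannelSplitSerialFusion()(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])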
Learning a Deep Convolution Network with Turing Test Adversaries for Microscopy Image Super Resolution
Title | Learning a Deep Convolution Network with Turing Test Adversaries for Microscopy Image Super Resolution |
Authors | Francis Tom, Himanshu Sharma, Dheeraj Mundhra, Tathagato Rai Dastidar, Debdoot Sheet |
Abstract | Adversarially trained deep neural networks have significantly improved performance of single image super resolution, by hallucinating photorealistic local textures, thereby greatly reducing the perception difference between a real high resolution image and its super resolved (SR) counterpart. However, application to medical imaging requires preservation of diagnostically relevant features while refraining from introducing any diagnostically confusing artifacts. We propose using a deep convolutional super resolution network (SRNet) trained for (i) minimising the reconstruction loss between the real and SR images, and (ii) maximally confusing learned relativistic visual Turing test (rVTT) networks to discriminate between (a) a pair of real and SR images (T1) and (b) a pair of patches from the region of interest in the real and SR images (T2). The adversarial losses of T1 and T2, when backpropagated through SRNet, help it learn to reconstruct pathorealism in the regions of interest such as white blood cells (WBC) in peripheral blood smears or epithelial cells in histopathology of cancerous biopsy tissues, which are experimentally demonstrated here. Experiments performed for measuring signal distortion loss using peak signal to noise ratio (pSNR) and structural similarity (SSIM) with variation of SR scale factors, impact of rVTT adversarial losses, and impact on reporting using SR on a commercially available artificial intelligence (AI) digital pathology system substantiate our claims. |
Tasks | Image Super-Resolution, Super-Resolution |
Published | 2019-01-18 |
URL | http://arxiv.org/abs/1901.06405v1 |
http://arxiv.org/pdf/1901.06405v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-a-deep-convolution-network-with |
Repo | |
Framework | |
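
The "relativistic" adversaries mentioned in the abstract are discriminators trained on real/SR pairs rather than single images. The sketch below shows a generic relativistic-average adversarial loss of the kind used in super-resolution GANs; it is an assumption about the general technique, not the paper's exact T1/T2 networks or loss weighting.

```python
import torch
import torch.nn.functional as F

def relativistic_adversarial_losses(d_real: torch.Tensor, d_fake: torch.Tensor):
    """Relativistic-average GAN losses, given discriminator logits for real HR
    images/patches (d_real) and super-resolved ones (d_fake).
    Sketch only -- not the paper's exact rVTT formulation."""
    ones = torch.ones_like(d_real)
    zeros = torch.zeros_like(d_real)
    # Discriminator: real should look "more real" than the average fake, and vice versa.
    d_loss = (F.binary_cross_entropy_with_logits(d_real - d_fake.mean(), ones)
              + F.binary_cross_entropy_with_logits(d_fake - d_real.mean(), zeros)) / 2
    # Generator (SRNet): invert that ordering, i.e. maximally confuse the critic.
    g_loss = (F.binary_cross_entropy_with_logits(d_real - d_fake.mean(), zeros)
              + F.binary_cross_entropy_with_logits(d_fake - d_real.mean(), ones)) / 2
    return d_loss, g_loss

if __name__ == "__main__":
    d_loss, g_loss = relativistic_adversarial_losses(torch.randn(8, 1), torch.randn(8, 1))
    print(float(d_loss), float(g_loss))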
Combined State and Parameter Estimation in Level-Set Methods
Title | Combined State and Parameter Estimation in Level-Set Methods |
Authors | Hans Yu, Matthew P. Juniper, Luca Magri |
Abstract | Reduced-order models based on level-set methods are widely used tools to qualitatively capture and track the nonlinear dynamics of an interface. The aim of this paper is to develop a physics-informed, data-driven, statistically rigorous learning algorithm for state and parameter estimation with level-set methods. A Bayesian approach based on data assimilation is introduced. Data assimilation is enabled by the ensemble Kalman filter and smoother, which are used in their probabilistic formulations. The level-set data assimilation framework is verified in one-dimensional and two-dimensional test cases, where state estimation, parameter estimation and uncertainty quantification are performed. The statistical performance of the proposed ensemble Kalman filter and smoother is quantified by twin experiments. In the twin experiments, the combined state and parameter estimation fully recovers the reference solution, which validates the proposed algorithm. The level-set data assimilation framework is then applied to the prediction of the nonlinear dynamics of a forced premixed flame, which exhibits the formation of sharp cusps and intricate topological changes, such as pinch-off events. The proposed physics-informed statistical learning algorithm opens up new possibilities for making reduced-order models of interfaces quantitatively predictive, any time that reference data is available. |
Tasks | |
Published | 2019-03-01 |
URL | https://arxiv.org/abs/1903.00321v2 |
https://arxiv.org/pdf/1903.00321v2.pdf | |
PWC | https://paperswithcode.com/paper/combined-state-and-parameter-estimation-in |
Repo | |
Framework | |
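
The analysis (update) step of a stochastic ensemble Kalman filter, the building block this framework relies on, can be written compactly in NumPy. This is the textbook perturbed-observation EnKF update under a linear observation operator; the level-set state, the flame model, and the smoother of the paper are not included.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng=np.random.default_rng(0)):
    """Stochastic (perturbed-observation) EnKF analysis step.
    X : (n, N) forecast ensemble of n-dim states, N members
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-noise covariance
    Returns the analysis ensemble (n, N). Textbook sketch, not the paper's code."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)             # ensemble anomalies
    P = A @ A.T / (N - 1)                             # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T  # perturbed obs
    return X + K @ (Y - H @ X)                        # analysis ensemble

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(4, 50))                      # 4-dim state, 50 members
    H = np.eye(2, 4)                                  # observe the first two components
    y = np.array([0.5, -0.3])
    R = 0.1 * np.eye(2)
    print(enkf_analysis(X, y, H, R).mean(axis=1))
```

For combined state and parameter estimation, the unknown model parameters are typically appended to the state vector so that the same update corrects both at once.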
Multi-Task Ordinal Regression for Jointly Predicting the Trustworthiness and the Leading Political Ideology of News Media
Title | Multi-Task Ordinal Regression for Jointly Predicting the Trustworthiness and the Leading Political Ideology of News Media |
Authors | Ramy Baly, Georgi Karadzhov, Abdelrhman Saleh, James Glass, Preslav Nakov |
Abstract | In the context of fake news, bias, and propaganda, we study two important but relatively under-explored problems: (i) trustworthiness estimation (on a 3-point scale) and (ii) political ideology detection (left/right bias on a 7-point scale) of entire news outlets, as opposed to evaluating individual articles. In particular, we propose a multi-task ordinal regression framework that models the two problems jointly. This is motivated by the observation that hyper-partisanship is often linked to low trustworthiness, e.g., appealing to emotions rather than sticking to the facts, while center media tend to be generally more impartial and trustworthy. We further use several auxiliary tasks, modeling centrality, hyperpartisanship, as well as left-vs.-right bias on a coarse-grained scale. The evaluation results show sizable performance gains by the joint models over models that target the problems in isolation. |
Tasks | |
Published | 2019-04-01 |
URL | http://arxiv.org/abs/1904.00542v1 |
http://arxiv.org/pdf/1904.00542v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-task-ordinal-regression-for-jointly |
Repo | |
Framework | |
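
A common way to set up the ordinal targets described here (3-point trustworthiness, 7-point ideology) is the cumulative binary decomposition: an ordinal label with K levels becomes K-1 stacked "is the label greater than k?" targets, and the two heads can share one text representation. The sketch below illustrates that generic encoding; it is an assumption about the standard technique, not necessarily the authors' exact model.

```python
import numpy as np

def ordinal_to_cumulative(labels: np.ndarray, num_levels: int) -> np.ndarray:
    """Encode ordinal labels 0..K-1 as K-1 cumulative binary targets:
    column k answers 'is the label greater than k?'.
    Generic ordinal-regression encoding, not the paper's exact formulation."""
    thresholds = np.arange(num_levels - 1)
    return (labels[:, None] > thresholds[None, :]).astype(np.float32)

def cumulative_to_ordinal(probs: np.ndarray) -> np.ndarray:
    """Decode predicted P(label > k) back to an ordinal level by counting
    thresholds crossed with probability above 0.5."""
    return (probs > 0.5).sum(axis=1)

if __name__ == "__main__":
    trust = np.array([0, 1, 2, 2])             # 3-point trustworthiness scale
    ideology = np.array([0, 3, 6, 5])          # 7-point left-to-right scale
    print(ordinal_to_cumulative(trust, 3))     # shape (4, 2)
    print(ordinal_to_cumulative(ideology, 7))  # shape (4, 6)
    # A multi-task model would share one text encoder and attach one such
    # cumulative head per task (trustworthiness, ideology, auxiliary scales).
    print(cumulative_to_ordinal(np.array([[0.9, 0.2], [0.8, 0.7]])))  # -> [1 2]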
Icentia11K: An Unsupervised Representation Learning Dataset for Arrhythmia Subtype Discovery
Title | Icentia11K: An Unsupervised Representation Learning Dataset for Arrhythmia Subtype Discovery |
Authors | Shawn Tan, Guillaume Androz, Ahmad Chamseddine, Pierre Fecteau, Aaron Courville, Yoshua Bengio, Joseph Paul Cohen |
Abstract | We release the largest public ECG dataset of continuous raw signals for representation learning containing 11 thousand patients and 2 billion labelled beats. Our goal is to enable semi-supervised ECG models to be made as well as to discover unknown subtypes of arrhythmia and anomalous ECG signal events. To this end, we propose an unsupervised representation learning task, evaluated in a semi-supervised fashion. We provide a set of baselines for different feature extractors that can be built upon. Additionally, we perform qualitative evaluations on results from PCA embeddings, where we identify some clustering of known subtypes indicating the potential for representation learning in arrhythmia sub-type discovery. |
Tasks | Representation Learning, Unsupervised Representation Learning |
Published | 2019-10-21 |
URL | https://arxiv.org/abs/1910.09570v1 |
https://arxiv.org/pdf/1910.09570v1.pdf | |
PWC | https://paperswithcode.com/paper/icentia11k-an-unsupervised-representation |
Repo | |
Framework | |
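
The qualitative evaluation mentioned in the abstract (PCA embeddings of learned beat representations, inspected for clustering of known subtypes) can be reproduced in outline with scikit-learn. The feature matrix below is random stand-in data; in practice it would come from the learned encoder applied to the ECG beats.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Stand-in for encoder outputs: one feature vector per labelled beat.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))          # 1000 beats, 128-dim embeddings
beat_labels = rng.integers(0, 4, size=1000)      # known beat types, for comparison only

# Project to 2-D for visual inspection of subtype clustering.
embedding_2d = PCA(n_components=2).fit_transform(features)

# Unsupervised clustering on the embedding; compare clusters against known labels.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding_2d)
for c in range(4):
    counts = np.bincount(beat_labels[clusters == c], minlength=4)
    print(f"cluster {c}: label distribution {counts}")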
The Limits of Morality in Strategic Games
Title | The Limits of Morality in Strategic Games |
Authors | Rui Cao, Pavel Naumov |
Abstract | A coalition is blameable for an outcome if the coalition had a strategy to prevent it. It has been previously suggested that the cost of prevention, or the cost of sacrifice, can be used to measure the degree of blameworthiness. The paper adopts this approach and proposes a modal logical system for reasoning about the degree of blameworthiness. The main technical result is a completeness theorem for the proposed system. |
Tasks | |
Published | 2019-01-22 |
URL | http://arxiv.org/abs/1901.08467v1 |
http://arxiv.org/pdf/1901.08467v1.pdf | |
PWC | https://paperswithcode.com/paper/the-limits-of-morality-in-strategic-games |
Repo | |
Framework | |
Combating the Elsagate phenomenon: Deep learning architectures for disturbing cartoons
Title | Combating the Elsagate phenomenon: Deep learning architectures for disturbing cartoons |
Authors | Akari Ishikawa, Edson Bollis, Sandra Avila |
Abstract | Watching cartoons can be useful for children’s intellectual, social and emotional development. However, the most popular video sharing platform today provides many videos with Elsagate content. Elsagate is a phenomenon that depicts childhood characters in disturbing circumstances (e.g., gore, toilet humor, drinking urine, stealing). Even with this threat easily accessible to children, there is no work in the literature addressing the problem. As the first to explore disturbing content in cartoons, we proceed from the most recent pornography detection literature applying deep convolutional neural networks combined with static and motion information of the video. Our solution is compatible with mobile platforms and achieved 92.6% accuracy. Our goal is not only to introduce the first solution but also to bring up the discussion around Elsagate. |
Tasks | Pornography Detection |
Published | 2019-04-18 |
URL | http://arxiv.org/abs/1904.08910v1 |
http://arxiv.org/pdf/1904.08910v1.pdf | |
PWC | https://paperswithcode.com/paper/combating-the-elsagate-phenomenon-deep |
Repo | |
Framework | |
Explicit Pairwise Word Interaction Modeling Improves Pretrained Transformers for English Semantic Similarity Tasks
Title | Explicit Pairwise Word Interaction Modeling Improves Pretrained Transformers for English Semantic Similarity Tasks |
Authors | Yinan Zhang, Raphael Tang, Jimmy Lin |
Abstract | In English semantic similarity tasks, classic word embedding-based approaches explicitly model pairwise “interactions” between the word representations of a sentence pair. Transformer-based pretrained language models disregard this notion, instead modeling pairwise word interactions globally and implicitly through their self-attention mechanism. In this paper, we hypothesize that introducing an explicit, constrained pairwise word interaction mechanism to pretrained language models improves their effectiveness on semantic similarity tasks. We validate our hypothesis using BERT on four tasks in semantic textual similarity and answer sentence selection. We demonstrate consistent improvements in quality by adding an explicit pairwise word interaction module to BERT. |
Tasks | Semantic Similarity, Semantic Textual Similarity |
Published | 2019-11-07 |
URL | https://arxiv.org/abs/1911.02847v1 |
https://arxiv.org/pdf/1911.02847v1.pdf | |
PWC | https://paperswithcode.com/paper/explicit-pairwise-word-interaction-modeling |
Repo | |
Framework | |
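
An explicit pairwise word interaction module of the kind described can be sketched as a cosine-similarity grid between the two sentences' token vectors, pooled into a fixed-size feature that is fed to the classifier alongside the usual sentence representation. The sketch below is a generic version of this idea over arbitrary token embeddings; it is not the authors' exact module or its BERT integration.

```python
import torch
import torch.nn.functional as F

def pairwise_word_interactions(tokens_a: torch.Tensor, tokens_b: torch.Tensor) -> torch.Tensor:
    """Explicit pairwise interactions between two sentences' token vectors.
    tokens_a: (m, d), tokens_b: (n, d) -- e.g. contextual embeddings from an encoder.
    Returns a small pooled feature vector. Generic sketch, not the paper's module."""
    a = F.normalize(tokens_a, dim=-1)
    b = F.normalize(tokens_b, dim=-1)
    sim = a @ b.t()                                     # (m, n) cosine-similarity grid
    a_to_b = sim.max(dim=1).values                      # best match for each word of A
    b_to_a = sim.max(dim=0).values                      # best match for each word of B
    return torch.stack([a_to_b.mean(), a_to_b.max(),
                        b_to_a.mean(), b_to_a.max()])   # (4,) interaction feature

if __name__ == "__main__":
    feat = pairwise_word_interactions(torch.randn(7, 768), torch.randn(9, 768))
    print(feat)  # could be concatenated with the [CLS] vector before the classifier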
DeepICP: An End-to-End Deep Neural Network for 3D Point Cloud Registration
Title | DeepICP: An End-to-End Deep Neural Network for 3D Point Cloud Registration |
Authors | Weixin Lu, Guowei Wan, Yao Zhou, Xiangyu Fu, Pengfei Yuan, Shiyu Song |
Abstract | We present DeepICP - a novel end-to-end learning-based 3D point cloud registration framework that achieves comparable registration accuracy to prior state-of-the-art geometric methods. Different from other keypoint based methods where a RANSAC procedure is usually needed, we implement the use of various deep neural network structures to establish an end-to-end trainable network. Our keypoint detector is trained through this end-to-end structure and enables the system to avoid the inference of dynamic objects, leverages the help of sufficiently salient features on stationary objects, and as a result, achieves high robustness. Rather than searching the corresponding points among existing points, the key contribution is that we innovatively generate them based on learned matching probabilities among a group of candidates, which can boost the registration accuracy. Our loss function incorporates both the local similarity and the global geometric constraints to ensure all above network designs can converge towards the right direction. We comprehensively validate the effectiveness of our approach using both the KITTI dataset and the Apollo-SouthBay dataset. Results demonstrate that our method achieves comparable or better performance than the state-of-the-art geometry-based methods. Detailed ablation and visualization analysis are included to further illustrate the behavior and insights of our network. The low registration error and high robustness of our method make it attractive for substantial applications relying on the point cloud registration task. |
Tasks | Point Cloud Registration |
Published | 2019-05-10 |
URL | https://arxiv.org/abs/1905.04153v2 |
https://arxiv.org/pdf/1905.04153v2.pdf | |
PWC | https://paperswithcode.com/paper/deepicp-an-end-to-end-deep-neural-network-for |
Repo | |
Framework | |
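
The key step the abstract highlights, generating corresponding points from matching probabilities over a group of candidates instead of picking an existing point, reduces to a probability-weighted average of candidate coordinates. Below is a minimal sketch of that step with dummy scores; the keypoint detection and learned similarity networks of DeepICP are not reproduced.

```python
import torch

def soft_correspondence(candidates: torch.Tensor, scores: torch.Tensor) -> torch.Tensor:
    """Generate 'virtual' corresponding points as probability-weighted combinations
    of candidate points, instead of selecting a single nearest neighbour.
    candidates: (K, C, 3) -- C candidate target points for each of K source keypoints
    scores:     (K, C)    -- learned matching logits (dummy values here)
    Returns (K, 3) generated correspondences. Sketch of the idea, not DeepICP itself."""
    probs = torch.softmax(scores, dim=-1)                  # matching probabilities
    return (probs.unsqueeze(-1) * candidates).sum(dim=1)   # expectation over candidates

if __name__ == "__main__":
    candidates = torch.randn(5, 8, 3)
    scores = torch.randn(5, 8)
    print(soft_correspondence(candidates, scores).shape)   # torch.Size([5, 3])
    # Because the generated point is a softmax-weighted sum, it is differentiable
    # with respect to the scores, which is what keeps the pipeline end-to-end trainable.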
Explanation based Handwriting Verification
Title | Explanation based Handwriting Verification |
Authors | Mihir Chauhan, Mohammad Abuzar Shaikh, Sargur N. Srihari |
Abstract | Deep learning systems have the drawback that their output is not accompanied by an explanation. In a domain such as forensic handwriting verification it is essential to provide explanations to jurors. The goal of handwriting verification is to find a measure of confidence whether the given handwritten samples are written by the same or different writer. We propose a method to generate explanations for the confidence provided by a convolutional neural network (CNN) which maps the input image to 15 annotations (features) provided by experts. Our system comprises: (1) a feature learning network (FLN), a differentiable system, and (2) an inference module for providing explanations. Furthermore, the inference module provides two types of explanations: (a) based on cosine similarity between categorical probabilities of each feature, and (b) based on the Log-Likelihood Ratio (LLR) using a directed probabilistic graphical model. We perform experiments using a combination of the feature learning network (FLN) and each inference module. We evaluate our system using the XAI-AND dataset, containing 13700 handwritten samples and 15 corresponding expert-examined features for each sample. The dataset is released for public use and the methods can be extended to provide explanations on other verification tasks like face verification and bio-medical comparison. This dataset can serve as the basis and benchmark for future research in explanation based handwriting verification. The code is available on github. |
Tasks | Face Verification |
Published | 2019-08-14 |
URL | https://arxiv.org/abs/1909.02548v1 |
https://arxiv.org/pdf/1909.02548v1.pdf | |
PWC | https://paperswithcode.com/paper/explanation-based-handwriting-verification |
Repo | |
Framework | |
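
The first explanation type in the abstract, cosine similarity between the categorical probability vectors predicted for each expert feature of two samples, is straightforward to sketch. The feature layout and dummy probabilities below are illustrative assumptions, not the paper's annotation scheme.

```python
import numpy as np

def feature_similarity_explanation(probs_a: list, probs_b: list):
    """Per-feature cosine similarity between two handwriting samples.
    probs_a / probs_b: one categorical probability vector per annotated feature
    (vectors may have different lengths for different features).
    Returns per-feature similarities and their mean as an overall confidence cue."""
    sims = []
    for pa, pb in zip(probs_a, probs_b):
        pa, pb = np.asarray(pa), np.asarray(pb)
        sims.append(float(pa @ pb / (np.linalg.norm(pa) * np.linalg.norm(pb))))
    return sims, float(np.mean(sims))

if __name__ == "__main__":
    # Dummy predictions for 3 of the 15 features (e.g. slant, letter formation, spacing).
    sample_a = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.5, 0.5]]
    sample_b = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.1, 0.9]]
    per_feature, overall = feature_similarity_explanation(sample_a, sample_b)
    print(per_feature, overall)
    # High per-feature similarity supports a 'same writer' conclusion and shows
    # *which* expert features drive the confidence.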
Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT
Title | Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT |
Authors | Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer |
Abstract | Transformer based architectures have become de-facto models used for a range of Natural Language Processing tasks. In particular, the BERT based models achieved significant accuracy gain for GLUE tasks, CoNLL-03 and SQuAD. However, BERT based models have a prohibitive memory footprint and latency. As a result, deploying BERT based models in resource constrained environments has become a challenging task. In this work, we perform an extensive analysis of fine-tuned BERT models using second order Hessian information, and we use our results to propose a novel method for quantizing BERT models to ultra low precision. In particular, we propose a new group-wise quantization scheme, and we use a Hessian based mixed-precision method to compress the model further. We extensively test our proposed method on BERT downstream tasks of SST-2, MNLI, CoNLL-03, and SQuAD. We can achieve comparable performance to baseline with at most $2.3\%$ performance degradation, even with ultra-low precision quantization down to 2 bits, corresponding to up to $13\times$ compression of the model parameters, and up to $4\times$ compression of the embedding table as well as activations. Among all tasks, we observed the highest performance loss for BERT fine-tuned on SQuAD. By probing into the Hessian based analysis as well as visualization, we show that this is related to the fact that current training/fine-tuning strategy of BERT does not converge for SQuAD. |
Tasks | Quantization |
Published | 2019-09-12 |
URL | https://arxiv.org/abs/1909.05840v2 |
https://arxiv.org/pdf/1909.05840v2.pdf | |
PWC | https://paperswithcode.com/paper/q-bert-hessian-based-ultra-low-precision |
Repo | |
Framework | |
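
Group-wise quantization, as opposed to one scale per weight matrix, gives each group of rows its own quantization range, which matters at ultra-low bit-widths. A minimal symmetric uniform-quantization sketch is below; the group size, the bit-width, and the Hessian-based mixed-precision assignment of Q-BERT are not modelled.

```python
import numpy as np

def groupwise_quantize(W: np.ndarray, num_groups: int, bits: int) -> np.ndarray:
    """Symmetric uniform quantization with one scale per row group.
    W: (out_features, in_features) weight matrix, rows split into num_groups groups.
    Returns the dequantized ('fake-quantized') matrix for inspecting the error.
    Sketch of group-wise quantization only, not Q-BERT's Hessian-based scheme."""
    qmax = 2 ** (bits - 1) - 1
    W_q = np.empty_like(W, dtype=np.float32)
    for group in np.array_split(np.arange(W.shape[0]), num_groups):
        scale = np.abs(W[group]).max() / qmax           # per-group scale
        q = np.clip(np.round(W[group] / scale), -qmax - 1, qmax)
        W_q[group] = q * scale                          # dequantize to measure the error
    return W_q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.05, size=(768, 768)).astype(np.float32)
    for groups in (1, 12, 128):
        err = np.abs(W - groupwise_quantize(W, groups, bits=2)).mean()
        print(f"{groups:4d} groups, 2-bit: mean abs error {err:.5f}")
    # Finer groups shrink the quantization error at ultra-low bit-widths,
    # which is the motivation for the group-wise scheme.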
Finite-Time 4-Expert Prediction Problem
Title | Finite-Time 4-Expert Prediction Problem |
Authors | Erhan Bayraktar, Ibrahim Ekren, Xin Zhang |
Abstract | We explicitly solve the nonlinear PDE that is the continuous limit of dynamic programming of the \emph{expert prediction problem} in finite horizon setting with $N=4$ experts. The \emph{expert prediction problem} is formulated as a zero sum game between a player and an adversary. By showing that the solution is $\mathcal{C}^2$, we are able to show that the strategies conjectured in arXiv:1409.3040G form an asymptotic Nash equilibrium. We also prove the “Finite vs Geometric regret” conjecture proposed in arXiv:1409.3040G for $N=4$, and show that this conjecture in fact follows from the conjecture that the comb strategies are optimal. |
Tasks | |
Published | 2019-11-22 |
URL | https://arxiv.org/abs/1911.10936v2 |
https://arxiv.org/pdf/1911.10936v2.pdf | |
PWC | https://paperswithcode.com/paper/finite-time-4-expert-prediction-problem |
Repo | |
Framework | |
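
For context, the expert prediction problem behind this PDE is the standard adversarial regret-minimization game. In the usual notation (which is illustrative here, not the paper's), the player's regret against $N$ experts over $T$ rounds and the finite-horizon game value are:

```latex
% Standard expert-prediction regret (notation is illustrative, not the paper's).
% At each round t the player picks weights p_t over the N experts, the adversary
% picks losses \ell_t \in [0,1]^N, and the player suffers \langle p_t, \ell_t \rangle.
R_T \;=\; \sum_{t=1}^{T} \langle p_t, \ell_t \rangle
      \;-\; \min_{1 \le i \le N} \sum_{t=1}^{T} \ell_{t,i},
\qquad
V_T \;=\; \inf_{\text{player}} \; \sup_{\text{adversary}} \; \mathbb{E}\!\left[ R_T \right].
```

The paper studies the continuum limit of the dynamic programming recursion for this value with $N=4$ experts.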
Attention Control with Metric Learning Alignment for Image Set-based Recognition
Title | Attention Control with Metric Learning Alignment for Image Set-based Recognition |
Authors | Xiaofeng Liu, Zhenhua Guo, Jane You, B. V. K Vijaya Kumar |
Abstract | This paper considers the problem of image set-based face verification and identification. Unlike traditional single sample (an image or a video) setting, this situation assumes the availability of a set of heterogeneous collection of orderless images and videos. The samples can be taken at different check points, different identity documents, etc. The importance of each image is usually considered either equal or based on a quality assessment of that image independent of other images and/or videos in that image set. How to model the relationship of orderless images within a set remains a challenge. We address this problem by formulating it as a Markov Decision Process (MDP) in a latent space. Specifically, we first propose a dependency-aware attention control (DAC) network, which uses actor-critic reinforcement learning for attention decision of each image to exploit the correlations among the unordered images. An off-policy experience replay is introduced to speed up the learning process. Moreover, the DAC is combined with a temporal model for videos using divide and conquer strategies. We also introduce a pose-guided representation (PGR) scheme that can further boost the performance at extreme poses. We propose a parameter-free PGR without the need for training as well as a novel metric learning-based PGR for pose alignment without the need for pose detection in the testing stage. Extensive evaluations on IJB-A/B/C, YTF, Celebrity-1000 datasets demonstrate that our method outperforms many state-of-the-art approaches on the set-based as well as video-based face recognition databases. |
Tasks | Face Recognition, Face Verification, Metric Learning |
Published | 2019-08-05 |
URL | https://arxiv.org/abs/1908.01872v1 |
https://arxiv.org/pdf/1908.01872v1.pdf | |
PWC | https://paperswithcode.com/paper/attention-control-with-metric-learning |
Repo | |
Framework | |
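
The dependency-aware attention described above decides how much each image in an unordered set should contribute to the aggregated identity feature. A simple (non-reinforcement-learning) attention-pooling baseline over set features looks like this; the MDP/actor-critic machinery and pose-guided representation of the paper are not included.

```python
import torch
import torch.nn as nn

class SetAttentionPooling(nn.Module):
    """Aggregate an orderless set of per-image features into one template vector
    with learned attention weights. Baseline sketch only -- the paper instead learns
    the weighting with actor-critic reinforcement learning (its DAC network)."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)      # per-image quality/relevance score

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_images, feat_dim) for one identity's image set
        weights = torch.softmax(self.score(feats).squeeze(-1), dim=0)  # (num_images,)
        return (weights.unsqueeze(-1) * feats).sum(dim=0)              # (feat_dim,)

if __name__ == "__main__":
    pool = SetAttentionPooling(256)
    template = pool(torch.randn(12, 256))        # 12 heterogeneous images/frames
    print(template.shape)                        # torch.Size([256])
    # Verification then compares two templates, e.g. by cosine similarity.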
Restricted Recurrent Neural Networks
Title | Restricted Recurrent Neural Networks |
Authors | Enmao Diao, Jie Ding, Vahid Tarokh |
Abstract | Recurrent Neural Network (RNN) and its variations such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), have become standard building blocks for learning online data of sequential nature in many research areas, including natural language processing and speech data analysis. In this paper, we present a new methodology to significantly reduce the number of parameters in RNNs while maintaining performance that is comparable or even better than classical RNNs. The new proposal, referred to as Restricted Recurrent Neural Network (RRNN), restricts the weight matrices corresponding to the input data and hidden states at each time step to share a large proportion of parameters. The new architecture can be regarded as a compression of its classical counterpart, but it does not require pre-training or sophisticated parameter fine-tuning, both of which are major issues in most existing compression techniques. Experiments on natural language modeling show that compared with its classical counterpart, the restricted recurrent architecture generally produces comparable results at about 50% compression rate. In particular, the Restricted LSTM can outperform the classical RNN with even fewer parameters. |
Tasks | Language Modelling |
Published | 2019-08-21 |
URL | https://arxiv.org/abs/1908.07724v4 |
https://arxiv.org/pdf/1908.07724v4.pdf | |
PWC | https://paperswithcode.com/paper/190807724 |
Repo | |
Framework | |
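
The restriction described above, the input-to-hidden and hidden-to-hidden weight matrices sharing a large proportion of parameters at every time step, can be sketched with a vanilla RNN cell whose two matrices share a common block. This is a minimal reading of the idea under the assumption that input and hidden sizes match; the paper's exact RRNN/Restricted-LSTM parameterisation may differ.

```python
import torch
import torch.nn as nn

class RestrictedRNNCell(nn.Module):
    """Vanilla RNN cell in which W_x (input->hidden) and W_h (hidden->hidden)
    share a common weight block, so most parameters are stored only once.
    Illustrative sketch of the 'restricted' idea, not the paper's exact RRNN."""
    def __init__(self, hidden: int = 128, shared_frac: float = 0.75):
        super().__init__()
        shared = int(hidden * shared_frac)
        self.W_shared = nn.Parameter(torch.randn(hidden, shared) * 0.01)  # used by both
        self.Wx_own = nn.Parameter(torch.randn(hidden, hidden - shared) * 0.01)
        self.Wh_own = nn.Parameter(torch.randn(hidden, hidden - shared) * 0.01)
        self.bias = nn.Parameter(torch.zeros(hidden))

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # assumes input size == hidden size for simplicity
        W_x = torch.cat([self.W_shared, self.Wx_own], dim=1)  # (hidden, hidden)
        W_h = torch.cat([self.W_shared, self.Wh_own], dim=1)  # (hidden, hidden)
        return torch.tanh(x @ W_x.t() + h @ W_h.t() + self.bias)

if __name__ == "__main__":
    cell = RestrictedRNNCell(128, shared_frac=0.75)
    h = torch.zeros(4, 128)
    for x in torch.randn(10, 4, 128):            # 10 time steps, batch of 4
        h = cell(x, h)
    n_params = sum(p.numel() for p in cell.parameters())
    print(h.shape, n_params)                     # vs 2*128*128 + 128 for an unrestricted cell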
Fast Registration for cross-source point clouds by using weak regional affinity and pixel-wise refinement
Title | Fast Registration for cross-source point clouds by using weak regional affinity and pixel-wise refinement |
Authors | Xiaoshui Huang, Lixin Fan, Qiang Wu, Jian Zhang, Chun Yuan |
Abstract | Many types of 3D acquisition sensors have emerged in recent years and point clouds have been widely used in many areas. Accurate and fast registration of cross-source 3D point clouds from different sensors is an emerging research problem in computer vision. This problem is extremely challenging because cross-source point clouds contain a mixture of various variances, such as density, partial overlap, large noise and outliers, and viewpoint changes. In this paper, an algorithm is proposed to align cross-source point clouds with both high accuracy and high efficiency. There are two main contributions: firstly, two components, the weak region affinity and pixel-wise refinement, are proposed to maintain the global and local information of 3D point clouds. Then, these two components are integrated into an iterative tensor-based registration algorithm to solve the cross-source point cloud registration problem. We conduct experiments on a synthetic cross-source benchmark dataset and real cross-source datasets. Compared with six state-of-the-art methods, the proposed method achieves both higher efficiency and higher accuracy. |
Tasks | Point Cloud Registration |
Published | 2019-03-11 |
URL | http://arxiv.org/abs/1903.04630v1 |
http://arxiv.org/pdf/1903.04630v1.pdf | |
PWC | https://paperswithcode.com/paper/fast-registration-for-cross-source-point |
Repo | |
Framework | |
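
For reference, the classical rigid-registration loop that cross-source methods improve upon, nearest-neighbour correspondences followed by an SVD (Kabsch) pose update, fits in a few lines. This is the plain point-to-point ICP baseline, not the paper's tensor-based algorithm with weak regional affinity and pixel-wise refinement.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source: np.ndarray, target: np.ndarray, iters: int = 30):
    """Plain point-to-point ICP baseline (not the paper's method).
    source, target: (N, 3) and (M, 3) point clouds. Returns R (3x3), t (3,)
    such that R @ source_i + t approximately matches the target."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = source @ R.T + t
        _, idx = tree.query(moved)                    # nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_m = moved.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (matched - mu_m))  # Kabsch/Procrustes
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:                 # keep a proper rotation
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step        # compose the incremental update
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.normal(size=(500, 3))
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    source = (target - np.array([0.2, -0.1, 0.05])) @ R_true   # rotated/shifted copy
    R_est, t_est = icp(source, target)
    print(np.round(R_est, 3), np.round(t_est, 3))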