Paper Group AWR 108
Universal Deep Beamformer for Variable Rate Ultrasound Imaging
Title | Universal Deep Beamformer for Variable Rate Ultrasound Imaging |
Authors | Shujaat Khan, Jaeyoung Huh, Jong Chul Ye |
Abstract | Ultrasound (US) imaging is based on the time-reversal principle, in which individual channel RF measurements are back-propagated and accumulated to form an image after applying specific delays. While this time reversal is usually implemented as a delay-and-sum (DAS) beamformer, the image quality quickly degrades as the number of measurement channels decreases. To address this problem, various types of adaptive beamforming techniques have been proposed using predefined models of the signals. However, the performance of these adaptive beamforming approaches degrades when the underlying model is not sufficiently accurate. Here, we demonstrate for the first time that a single universal deep beamformer trained in a purely data-driven way can generate significantly improved images over widely varying aperture and channel subsampling patterns. In particular, we design an end-to-end deep learning framework that can directly process sub-sampled RF data acquired at different subsampling rates and detector configurations to generate high-quality ultrasound images using a single beamformer. Experimental results using B-mode focused ultrasound confirm the efficacy of the proposed methods. |
Tasks | |
Published | 2019-01-07 |
URL | http://arxiv.org/abs/1901.01706v1 |
http://arxiv.org/pdf/1901.01706v1.pdf | |
PWC | https://paperswithcode.com/paper/universal-deep-beamformer-for-variable-rate |
Repo | https://github.com/Shujaat123/Universal-Deep-Beamformer-for-Robust-Ultrasound-Imaging |
Framework | none |
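A note on the entry above: the abstract contrasts the learned beamformer with classical delay-and-sum (DAS). As a reference point, here is a minimal numpy sketch of focused-receive DAS for a single scanline, assuming a simplified two-way geometry and no apodization; the function and variable names are illustrative and not taken from the linked repository.

```python
import numpy as np

def das_beamform_scanline(rf, elem_x, fs, c=1540.0, n_depth=None):
    """Delay-and-sum beamforming of one scanline.

    rf      : (n_elements, n_samples) channel RF data for one transmit event
    elem_x  : (n_elements,) lateral element positions relative to the scanline [m]
    fs      : sampling frequency [Hz], c : speed of sound [m/s]
    """
    n_elem, n_samp = rf.shape
    n_depth = n_depth or n_samp
    # depth of each output sample, assuming two-way travel time t = 2 z / c
    z = np.arange(n_depth) * c / (2.0 * fs)
    out = np.zeros(n_depth)
    for e in range(n_elem):
        # transmit path of length z plus receive path from (elem_x[e], 0) to (0, z)
        tau = (z + np.sqrt(z ** 2 + elem_x[e] ** 2)) / c
        idx = np.round(tau * fs).astype(int)
        valid = idx < n_samp
        out[valid] += rf[e, idx[valid]]
    return out

# toy usage: 64 elements with 0.3 mm pitch, random RF data
rf = np.random.randn(64, 2048)
elem_x = (np.arange(64) - 31.5) * 0.3e-3
line = das_beamform_scanline(rf, elem_x, fs=40e6)
```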
Clustering by the local intrinsic dimension: the hidden structure of real-world data
Title | Clustering by the local intrinsic dimension: the hidden structure of real-world data |
Authors | Michele Allegra, Elena Facco, Alessandro Laio, Antonietta Mira |
Abstract | It is well known that a small number of variables is often sufficient to effectively describe high-dimensional data. This number is called the intrinsic dimension (ID) of the data. What is not so commonly known is that the ID can vary within the same dataset. This fact has been highlighted in technical discussions, but seldom exploited to gain practical insight into the structure of the data. Here we develop a simple and robust approach to cluster regions with the same local ID in a given data landscape. Surprisingly, we find that many real-world data sets contain regions with widely heterogeneous dimensions. These regions host points differing in core properties: folded vs unfolded configurations in a protein molecular dynamics trajectory, active vs non-active regions in brain imaging data, and firms with different financial risk in company balance sheets. Our results show that a simple topological feature, the local ID, is sufficient to uncover a rich structure in high-dimensional data landscapes. |
Tasks | |
Published | 2019-02-27 |
URL | http://arxiv.org/abs/1902.10459v1 |
http://arxiv.org/pdf/1902.10459v1.pdf | |
PWC | https://paperswithcode.com/paper/clustering-by-the-local-intrinsic-dimension |
Repo | https://github.com/micheleallegra/Hidalgo |
Framework | none |
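A note on the entry above: the method rests on local estimates of the intrinsic dimension. Below is a minimal sketch of the TwoNN estimator (the global version) using scikit-learn; the paper's Hidalgo approach additionally fits a mixture of local IDs and clusters points by them, which this sketch does not attempt.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_id(X):
    """TwoNN intrinsic-dimension estimate from the ratio of each point's
    second- to first-nearest-neighbour distance."""
    nn = NearestNeighbors(n_neighbors=3).fit(X)   # column 0 is the point itself
    dist, _ = nn.kneighbors(X)
    mu = dist[:, 2] / dist[:, 1]
    # maximum-likelihood estimator: d = N / sum(log mu_i)
    return len(X) / np.sum(np.log(mu))

# points on a 2-D plane embedded in 10-D space: the estimate should be close to 2
X = np.random.randn(2000, 2) @ np.random.randn(2, 10)
print(twonn_id(X))
```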
‘Squeeze & Excite’ Guided Few-Shot Segmentation of Volumetric Images
Title | ‘Squeeze & Excite’ Guided Few-Shot Segmentation of Volumetric Images |
Authors | Abhijit Guha Roy, Shayan Siddiqui, Sebastian Pölsterl, Nassir Navab, Christian Wachinger |
Abstract | Deep neural networks enable highly accurate image segmentation, but require large amounts of manually annotated data for supervised training. Few-shot learning aims to address this shortcoming by learning a new class from a few annotated support examples. We introduce a novel few-shot framework for the segmentation of volumetric medical images with only a few annotated slices. Compared to other related works in computer vision, the major challenges are the absence of pre-trained networks and the volumetric nature of medical scans. We address these challenges by proposing a new architecture for few-shot segmentation that incorporates ‘squeeze & excite’ blocks. Our two-armed architecture consists of a conditioner arm, which processes the annotated support input and generates a task-specific representation. This representation is passed on to the segmenter arm that uses this information to segment the new query image. To facilitate efficient interaction between the conditioner and the segmenter arm, we propose to use ‘channel squeeze & spatial excitation’ blocks, a light-weight computational module that enables heavy interaction between both arms with a negligible increase in model complexity. This contribution allows us to perform image segmentation without relying on a pre-trained model, which generally is unavailable for medical scans. Furthermore, we propose an efficient strategy for volumetric segmentation by optimally pairing a few slices of the support volume to all the slices of the query volume. We perform experiments for organ segmentation on whole-body contrast-enhanced CT scans from the Visceral Dataset. Our proposed model outperforms multiple baselines and existing approaches with respect to segmentation accuracy by a significant margin. The source code is available at https://github.com/abhi4ssj/few-shot-segmentation. |
Tasks | Few-Shot Learning, Semantic Segmentation |
Published | 2019-02-04 |
URL | https://arxiv.org/abs/1902.01314v2 |
https://arxiv.org/pdf/1902.01314v2.pdf | |
PWC | https://paperswithcode.com/paper/squeeze-excite-guided-few-shot-segmentation |
Repo | https://github.com/abhi4ssj/few-shot-segmentation |
Framework | pytorch |
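A note on the entry above: the interaction module is the ‘channel squeeze & spatial excitation’ (sSE) block. A minimal PyTorch sketch of such a block follows; how the conditioner arm's representation actually drives the segmenter in the paper is more involved, so this only illustrates the block itself.

```python
import torch
import torch.nn as nn

class SpatialSE(nn.Module):
    """'Channel squeeze & spatial excitation': squeeze the channels with a 1x1
    convolution and re-weight every spatial location with a sigmoid gate."""
    def __init__(self, in_channels):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, x):                        # x: (B, C, H, W)
        gate = torch.sigmoid(self.squeeze(x))    # (B, 1, H, W)
        return x * gate                          # broadcast over channels

# toy usage
feat = torch.randn(2, 64, 32, 32)
print(SpatialSE(64)(feat).shape)                 # torch.Size([2, 64, 32, 32])
```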
Generating Positive Bounding Boxes for Balanced Training of Object Detectors
Title | Generating Positive Bounding Boxes for Balanced Training of Object Detectors |
Authors | Kemal Oksuz, Baris Can Cam, Emre Akbas, Sinan Kalkan |
Abstract | Two-stage deep object detectors generate a set of regions-of-interest (RoI) in the first stage, then, in the second stage, identify objects among the proposed RoIs that sufficiently overlap with a ground truth (GT) box. The second stage is known to suffer from a bias towards RoIs that have low intersection-over-union (IoU) with the associated GT boxes. To address this issue, we first propose a sampling method to generate bounding boxes (BB) that overlap with a given reference box by more than a given IoU threshold. Then, we use this BB generation method to develop a positive RoI (pRoI) generator that produces RoIs following any desired spatial or IoU distribution for the second stage. We show that our pRoI generator is able to simulate other sampling methods for positive examples such as hard example mining and prime sampling. Using our generator as an analysis tool, we show that (i) IoU imbalance has an adverse effect on performance, (ii) hard positive example mining improves the performance only for certain input IoU distributions, and (iii) the imbalance among the foreground classes has an adverse effect on performance and that it can be alleviated at the batch level. Finally, we train Faster R-CNN using our pRoI generator and, compared to conventional training, obtain better or on-par performance for low IoUs and significant improvements when trained for higher IoUs on the Pascal VOC and MS COCO datasets. The code is available at: https://github.com/kemaloksuz/BoundingBoxGenerator. |
Tasks | |
Published | 2019-09-21 |
URL | https://arxiv.org/abs/1909.09777v2 |
https://arxiv.org/pdf/1909.09777v2.pdf | |
PWC | https://paperswithcode.com/paper/190909777 |
Repo | https://github.com/kemaloksuz/BoundingBoxGenerator |
Framework | pytorch |
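A note on the entry above: the basic primitive is sampling boxes whose IoU with a reference box exceeds a threshold. The paper derives a generator that can follow any target IoU distribution; the numpy sketch below is only a naive rejection-sampling stand-in with illustrative names.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

def sample_positive_box(gt, iou_thr, rng, max_tries=1000):
    """Rejection-sample a box with IoU(gt, box) >= iou_thr by jittering
    the ground-truth corners."""
    w, h = gt[2] - gt[0], gt[3] - gt[1]
    for _ in range(max_tries):
        jitter = rng.uniform(-0.3, 0.3, size=4) * np.array([w, h, w, h])
        box = gt + jitter
        if box[2] > box[0] and box[3] > box[1] and iou(gt, box) >= iou_thr:
            return box
    return gt.copy()   # fall back to the reference box itself

rng = np.random.default_rng(0)
gt = np.array([10.0, 20.0, 110.0, 220.0])
print(sample_positive_box(gt, iou_thr=0.7, rng=rng))
```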
Embarrassingly Shallow Autoencoders for Sparse Data
Title | Embarrassingly Shallow Autoencoders for Sparse Data |
Authors | Harald Steck |
Abstract | Combining simple elements from the literature, we define a linear model that is geared toward sparse data, in particular implicit feedback data for recommender systems. We show that its training objective has a closed-form solution, and discuss the resulting conceptual insights. Surprisingly, this simple model achieves better ranking accuracy than various state-of-the-art collaborative-filtering approaches, including deep non-linear models, on most of the publicly available data-sets used in our experiments. |
Tasks | Recommendation Systems |
Published | 2019-05-08 |
URL | https://arxiv.org/abs/1905.03375v1 |
https://arxiv.org/pdf/1905.03375v1.pdf | |
PWC | https://paperswithcode.com/paper/190503375 |
Repo | https://github.com/Darel13712/ease_rec |
Framework | none |
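A note on the entry above: the abstract states that the training objective has a closed-form solution. The numpy sketch below implements that item-item closed form as it is commonly written (L2-regularized, zero diagonal); the regularization value is an arbitrary placeholder, not a recommendation from the paper.

```python
import numpy as np

def ease(X, lam=500.0):
    """Closed-form item-item weight matrix for a binary user-item matrix X."""
    G = X.T @ X + lam * np.eye(X.shape[1])
    P = np.linalg.inv(G)
    B = P / (-np.diag(P))          # B_ij = -P_ij / P_jj
    np.fill_diagonal(B, 0.0)       # the zero-diagonal constraint
    return B

# toy usage: 100 users x 20 items; scores rank unseen items for each user
X = (np.random.rand(100, 20) > 0.8).astype(float)
scores = X @ ease(X)
```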
CLOSURE: Assessing Systematic Generalization of CLEVR Models
Title | CLOSURE: Assessing Systematic Generalization of CLEVR Models |
Authors | Dzmitry Bahdanau, Harm de Vries, Timothy J. O’Donnell, Shikhar Murty, Philippe Beaudoin, Yoshua Bengio, Aaron Courville |
Abstract | The CLEVR dataset of natural-looking questions about 3D-rendered scenes has recently received much attention from the research community. A number of models have been proposed for this task, many of which achieved very high accuracies of around 97-99%. In this work, we study how systematic the generalization of such models is, that is, to what extent they are capable of handling novel combinations of known linguistic constructs. To this end, we test models’ understanding of referring expressions based on matching object properties (e.g. “the object that is the same size as the red ball”) in novel contexts. Our experiments on the thereby constructed CLOSURE benchmark show that state-of-the-art models often do not exhibit systematicity after being trained on CLEVR. Surprisingly, we find that an explicitly compositional Neural Module Network model also generalizes badly on CLOSURE, even when it has access to the ground-truth programs at test time. We improve the NMN’s systematic generalization by developing a novel Vector-NMN module architecture with vector-valued inputs and outputs. Lastly, we investigate the extent to which few-shot transfer learning can help models that are pretrained on CLEVR to adapt to CLOSURE. Our few-shot learning experiments contrast the adaptation behavior of models with intermediate discrete programs with that of end-to-end continuous models. |
Tasks | Few-Shot Learning, Transfer Learning |
Published | 2019-12-12 |
URL | https://arxiv.org/abs/1912.05783v1 |
https://arxiv.org/pdf/1912.05783v1.pdf | |
PWC | https://paperswithcode.com/paper/closure-assessing-systematic-generalization |
Repo | https://github.com/rizar/CLOSURE |
Framework | none |
A Neural Topic-Attention Model for Medical Term Abbreviation Disambiguation
Title | A Neural Topic-Attention Model for Medical Term Abbreviation Disambiguation |
Authors | Irene Li, Michihiro Yasunaga, Muhammed Yavuz Nuzumlalı, Cesar Caraballo, Shiwani Mahajan, Harlan Krumholz, Dragomir Radev |
Abstract | Automated analysis of clinical notes is attracting increasing attention. However, there has not been much work on medical term abbreviation disambiguation. Such abbreviations are abundant, and highly ambiguous, in clinical documents. One of the main obstacles is the lack of large-scale, balanced labeled datasets. To address the issue, we propose a few-shot learning approach to take advantage of limited labeled data. Specifically, a neural topic-attention model is applied to learn improved contextualized sentence representations for medical term abbreviation disambiguation. Another vital issue is that existing annotations are scarce, noisy, and incomplete. We re-examine and correct an existing dataset for training and collect a test set to evaluate the models fairly, especially for rare senses. We train our model on the training set, which contains 30 abbreviation terms as categories (on average, 479 samples and 3.24 classes per term) selected from a public abbreviation disambiguation dataset, and then test on a manually created balanced dataset (each class in each term has 15 samples). We show that enhancing the sentence representation with topic information improves the performance on small-scale unbalanced training datasets by a large margin, compared to a number of baseline models. |
Tasks | Few-Shot Learning |
Published | 2019-10-30 |
URL | https://arxiv.org/abs/1910.14076v1 |
https://arxiv.org/pdf/1910.14076v1.pdf | |
PWC | https://paperswithcode.com/paper/a-neural-topic-attention-model-for-medical |
Repo | https://github.com/IreneZihuiLi/TopicAttentionMedicalAD |
Framework | pytorch |
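A note on the entry above: the model enriches contextualized sentence representations with topic information through attention. The PyTorch sketch below is a generic topic-guided attention pooling layer under assumed dimensions; it is not the authors' architecture, only an illustration of the general idea.

```python
import torch
import torch.nn as nn

class TopicAttention(nn.Module):
    """Score each contextual token representation against a projected topic
    vector and pool the tokens with the resulting attention weights."""
    def __init__(self, hidden, topic_dim):
        super().__init__()
        self.proj = nn.Linear(topic_dim, hidden)

    def forward(self, H, topic):                 # H: (B, T, hidden), topic: (B, topic_dim)
        q = self.proj(topic).unsqueeze(2)        # (B, hidden, 1)
        weights = torch.softmax(H @ q, dim=1)    # (B, T, 1)
        return (weights * H).sum(dim=1)          # (B, hidden) sentence representation

# toy usage with assumed sizes
H = torch.randn(4, 30, 256)
topic = torch.randn(4, 50)
print(TopicAttention(256, 50)(H, topic).shape)   # torch.Size([4, 256])
```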
Adversarial Examples for Models of Code
Title | Adversarial Examples for Models of Code |
Authors | Noam Yefet, Uri Alon, Eran Yahav |
Abstract | Neural models of code have shown impressive performance for tasks such as predicting method names and identifying certain kinds of bugs. In this paper, we show that these models are vulnerable to adversarial examples, and introduce a novel approach for attacking trained models of code with adversarial examples. The main idea is to force a given trained model to make an incorrect prediction, as specified by the adversary, by introducing small perturbations that do not change the program’s semantics. To find such perturbations, we present a new technique for Discrete Adversarial Manipulation of Programs (DAMP). DAMP works by deriving the desired prediction with respect to the model’s inputs while holding the model weights constant and following the gradients to slightly modify the code. To defend a model against such attacks, we propose placing a defensive model (Anti-DAMP) in front of it. Anti-DAMP detects unlikely mutations and masks them before feeding the input to the downstream model. We show that our DAMP attack is effective across three neural architectures: code2vec, GGNN, and GNN-FiLM, in both Java and C#. We show that DAMP has a success rate of up to 89% in changing a prediction to the adversary’s choice (“targeted attack”), and a success rate of up to 94% in changing a given prediction to any incorrect prediction (“non-targeted attack”). By using Anti-DAMP, the success rate of the attack drops drastically for both targeted and non-targeted attacks, at a minor cost of a 2% relative degradation in accuracy when not under attack. |
Tasks | |
Published | 2019-10-15 |
URL | https://arxiv.org/abs/1910.07517v3 |
https://arxiv.org/pdf/1910.07517v3.pdf | |
PWC | https://paperswithcode.com/paper/adversarial-examples-for-models-of-code |
Repo | https://github.com/tech-srl/code2vec |
Framework | tf |
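A note on the entry above: DAMP takes gradients with respect to the model's inputs while holding the weights fixed and edits the code accordingly. The sketch below shows a generic HotFlip-style first-order token substitution on a toy classifier; it ignores DAMP's semantics-preserving constraints (e.g. restricting edits to variable renames) and is not the authors' implementation.

```python
import torch
import torch.nn as nn

# A toy stand-in for a code model: bag-of-token-embeddings -> label logits.
vocab, dim, n_labels = 1000, 64, 10
emb = nn.Embedding(vocab, dim)
clf = nn.Linear(dim, n_labels)

def gradient_guided_substitution(tokens, position, target_label):
    """One first-order token substitution: take the loss gradient at one
    token's embedding (weights fixed) and pick the vocabulary entry that
    most decreases the targeted loss."""
    x = emb(tokens)                      # (T, dim)
    x.retain_grad()
    logits = clf(x.mean(dim=0))
    loss = nn.functional.cross_entropy(logits.unsqueeze(0),
                                       torch.tensor([target_label]))
    loss.backward()
    g = x.grad[position]                 # gradient at the token we may replace
    # first-order change in loss for replacing the token with each vocab entry
    scores = (emb.weight - x[position].detach()) @ g
    return torch.argmin(scores).item()   # replacement minimising the loss

tokens = torch.randint(0, vocab, (20,))
print(gradient_guided_substitution(tokens, position=3, target_label=7))
```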
DNNSurv: Deep Neural Networks for Survival Analysis Using Pseudo Values
Title | DNNSurv: Deep Neural Networks for Survival Analysis Using Pseudo Values |
Authors | Lili Zhao, Dai Feng |
Abstract | There has been increasing interest in modelling survival data using deep learning methods in medical research. Current approaches have focused on designing special cost functions to handle censored survival data. We propose a very different method with two steps. In the first step, we transform each subject’s survival time into a series of jackknife pseudo conditional survival probabilities and then use these pseudo probabilities as a quantitative response variable in the deep neural network model. By using the pseudo values, we reduce a complex survival analysis to a standard regression problem, which greatly simplifies the neural network construction. Our two-step approach is simple, yet very flexible in making risk predictions for survival data, which is very appealing from a practical point of view. The source code is freely available at http://github.com/lilizhaoUM/DNNSurv. |
Tasks | Survival Analysis |
Published | 2019-08-06 |
URL | https://arxiv.org/abs/1908.02337v2 |
https://arxiv.org/pdf/1908.02337v2.pdf | |
PWC | https://paperswithcode.com/paper/dnnsurv-deep-neural-networks-for-survival |
Repo | https://github.com/lilizhaoUM/DNNSurv |
Framework | none |
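A note on the entry above: step one turns each survival time into jackknife pseudo probabilities, po_i = n*S(t) - (n-1)*S_{-i}(t), which then become an ordinary regression target. Below is a minimal numpy sketch with a hand-rolled Kaplan-Meier estimator at a single time point; the paper uses pseudo conditional probabilities at several time points, which this sketch omits.

```python
import numpy as np

def km_surv(times, events, t):
    """Kaplan-Meier estimate of S(t) for right-censored data (event=1, censored=0)."""
    s = 1.0
    for u in np.sort(np.unique(times[events == 1])):
        if u > t:
            break
        at_risk = np.sum(times >= u)
        d = np.sum((times == u) & (events == 1))
        s *= 1.0 - d / at_risk
    return s

def pseudo_values(times, events, t):
    """Jackknife pseudo survival probabilities: po_i = n*S(t) - (n-1)*S_{-i}(t)."""
    n = len(times)
    full = km_surv(times, events, t)
    leave_one_out = np.array([
        km_surv(np.delete(times, i), np.delete(events, i), t) for i in range(n)
    ])
    return n * full - (n - 1) * leave_one_out

# toy usage: these pseudo values would be the network's regression target
times = np.array([2.0, 3.0, 3.0, 5.0, 8.0, 9.0, 12.0, 15.0])
events = np.array([1, 0, 1, 1, 0, 1, 1, 0])
print(pseudo_values(times, events, t=6.0))
```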
BERTje: A Dutch BERT Model
Title | BERTje: A Dutch BERT Model |
Authors | Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, Malvina Nissim |
Abstract | The transformer-based pre-trained language model BERT has helped to improve state-of-the-art performance on many natural language processing (NLP) tasks. Using the same architecture and parameters, we developed and evaluated a monolingual Dutch BERT model called BERTje. Compared to the multilingual BERT model, which includes Dutch but is only based on Wikipedia text, BERTje is based on a large and diverse dataset of 2.4 billion tokens. BERTje consistently outperforms the equally-sized multilingual BERT model on downstream NLP tasks (part-of-speech tagging, named-entity recognition, semantic role labeling, and sentiment analysis). Our pre-trained Dutch BERT model is made available at https://github.com/wietsedv/bertje. |
Tasks | Language Modelling, Named Entity Recognition, Part-Of-Speech Tagging, Semantic Role Labeling, Sentiment Analysis |
Published | 2019-12-19 |
URL | https://arxiv.org/abs/1912.09582v1 |
https://arxiv.org/pdf/1912.09582v1.pdf | |
PWC | https://paperswithcode.com/paper/bertje-a-dutch-bert-model |
Repo | https://github.com/wietsedv/bertje |
Framework | tf |
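A note on the entry above: the released model can be loaded through the transformers library. The model identifier in the sketch below is an assumption (the official release is the GitHub repository listed above), so adjust it to whatever name the authors publish.

```python
from transformers import AutoTokenizer, AutoModel

# Assumed Hugging Face identifier; see https://github.com/wietsedv/bertje for the release.
name = "GroNLP/bert-base-dutch-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("Dit is een Nederlandse zin.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (1, seq_len, 768)
```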
Reasoning Over Semantic-Level Graph for Fact Checking
Title | Reasoning Over Semantic-Level Graph for Fact Checking |
Authors | Wanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, Jian Yin |
Abstract | We study fact-checking in this paper, which aims to verify a textual claim given textual evidence (e.g., retrieved sentences from Wikipedia). Existing studies typically either concatenate retrieved sentences into a single string or use feature fusion on top of the features of the sentences, while ignoring semantic-level information such as the participants, location, and temporality of an event occurring in a sentence, and relationships among multiple events. Such semantic-level information is crucial for understanding the relational structure of evidence and for deep reasoning over it. In this paper, we address this issue by proposing a graph-based reasoning framework, called the Dynamic REAsoning Machine (DREAM) framework. We first construct a semantic-level graph, where nodes are extracted by semantic role labeling toolkits and are connected by inner- and inter-sentence edges. Given the automatically constructed graph, we use XLNet as the backbone of our approach and propose a graph-based contextual word representation learning module and a graph-based reasoning module to leverage the information of graphs. The first module is designed by considering a claim as a sequence, in which case we use the graph structure to re-define the relative distance of words. On top of this, we propose the second module by considering both the claim and the evidence as graphs and use a graph neural network to capture the semantic relationship at a more abstract level. We conduct experiments on FEVER, a large-scale benchmark dataset for fact-checking. Results show that both of the graph-based modules improve performance. Our system is the state-of-the-art system on the public leaderboard in terms of both accuracy and FEVER score. |
Tasks | Representation Learning, Semantic Role Labeling |
Published | 2019-09-09 |
URL | https://arxiv.org/abs/1909.03745v2 |
https://arxiv.org/pdf/1909.03745v2.pdf | |
PWC | https://paperswithcode.com/paper/reasoning-over-semantic-level-graph-for-fact |
Repo | https://github.com/thunlp/KernelGAT |
Framework | pytorch |
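A note on the entry above: the first module re-defines the relative distance between words using the semantic graph rather than surface token offsets. The networkx sketch below illustrates that idea with shortest-path lengths over a toy graph; the node names and edges are invented for illustration.

```python
import networkx as nx

# Toy semantic-level graph: claim and evidence arguments linked by an
# inter-sentence edge; relative distance = shortest-path length.
g = nx.Graph()
g.add_edges_from([
    ("claim_verb", "claim_arg0"), ("claim_verb", "claim_arg1"),
    ("claim_arg1", "evidence_arg1"),          # inter-sentence edge
    ("evidence_arg1", "evidence_verb"),
])
dist = dict(nx.all_pairs_shortest_path_length(g))
print(dist["claim_arg0"]["evidence_verb"])    # 4 hops via the shared argument
```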
Semantic Role Labeling with Associated Memory Network
Title | Semantic Role Labeling with Associated Memory Network |
Authors | Chaoyu Guan, Yuhao Cheng, Hai Zhao |
Abstract | Semantic role labeling (SRL) is the task of recognizing all the predicate-argument pairs of a sentence, and it has reached a performance bottleneck after a series of recent works. This paper proposes a novel syntax-agnostic SRL model enhanced by an associated memory network (AMN), which uses inter-sentence attention over label-known associated sentences as a kind of memory to further enhance dependency-based SRL. In detail, we use sentences and their labels from the training dataset as an associated memory cue to help label the target sentence. Furthermore, we compare several strategies for selecting associated sentences and several label merging methods in the AMN to find and utilize the labels of associated sentences while attending to them. By leveraging the attentive memory from known training data, our full model reaches the state of the art on the CoNLL-2009 benchmark datasets in the syntax-agnostic setting, showing an effective new line of SRL enhancement beyond exploiting external resources such as pre-trained language models. |
Tasks | Semantic Role Labeling |
Published | 2019-08-05 |
URL | https://arxiv.org/abs/1908.02367v1 |
https://arxiv.org/pdf/1908.02367v1.pdf | |
PWC | https://paperswithcode.com/paper/semantic-role-labeling-with-associated-memory-1 |
Repo | https://github.com/Frozenmad/AMN_SRL |
Framework | pytorch |
Towards Amortized Ranking-Critical Training for Collaborative Filtering
Title | Towards Amortized Ranking-Critical Training for Collaborative Filtering |
Authors | Sam Lobel, Chunyuan Li, Jianfeng Gao, Lawrence Carin |
Abstract | Collaborative filtering is widely used in modern recommender systems. Recent research shows that variational autoencoders (VAEs) yield state-of-the-art performance by integrating flexible representations from deep neural networks into latent variable models, mitigating limitations of traditional linear factor models. VAEs are typically trained by maximizing the likelihood (MLE) of users interacting with ground-truth items. While simple and often effective, MLE-based training does not directly maximize the recommendation-quality metrics one typically cares about, such as top-N ranking. In this paper we investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to directly optimize the non-differentiable quality metrics of interest. Specifically, we train a critic network to approximate ranking-based metrics, and then update the actor network (represented here by a VAE) to directly optimize against the learned metrics. In contrast to traditional learning-to-rank methods that require re-running the optimization procedure for new lists, our critic-based method amortizes the scoring process with a neural network, and can directly provide the (approximate) ranking scores for new lists. Empirically, we show that the proposed methods outperform several state-of-the-art baselines, including recently-proposed deep learning approaches, on three large-scale real-world datasets. The code to reproduce the experimental results and figure plots is on GitHub: https://github.com/samlobel/RaCT_CF |
Tasks | Latent Variable Models, Learning-To-Rank, Recommendation Systems |
Published | 2019-06-10 |
URL | https://arxiv.org/abs/1906.04281v2 |
https://arxiv.org/pdf/1906.04281v2.pdf | |
PWC | https://paperswithcode.com/paper/towards-amortized-ranking-critical-training |
Repo | https://github.com/samlobel/RaCT_CF |
Framework | tf |
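A note on the entry above: the critic is trained to approximate non-differentiable ranking metrics. The numpy sketch below computes NDCG@k for a single user, i.e. the kind of target the critic would regress; the actor-critic training loop itself is omitted.

```python
import numpy as np

def ndcg_at_k(scores, relevance, k=100):
    """NDCG@k for one user: the non-differentiable ranking metric that a
    learned critic would approximate."""
    order = np.argsort(-scores)[:k]
    gains = relevance[order]
    discounts = 1.0 / np.log2(np.arange(2, len(order) + 2))
    dcg = np.sum(gains * discounts)
    ideal = np.sort(relevance)[::-1][:k]
    idcg = np.sum(ideal * discounts[: len(ideal)])
    return dcg / idcg if idcg > 0 else 0.0

# toy usage: predicted scores over 500 items, ~5% of which are relevant held-out items
scores = np.random.rand(500)
relevance = (np.random.rand(500) > 0.95).astype(float)
print(ndcg_at_k(scores, relevance))
```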
Pretraining boosts out-of-domain robustness for pose estimation
Title | Pretraining boosts out-of-domain robustness for pose estimation |
Authors | Alexander Mathis, Mert Yüksekgönül, Byron Rogers, Matthias Bethge, Mackenzie W. Mathis |
Abstract | Deep neural networks are highly effective tools for human and animal pose estimation. However, robustness to out-of-domain data remains a challenge. Here, we probe the transfer and generalization ability for pose estimation with two architecture classes (MobileNetV2s and ResNets) pretrained on ImageNet. We generated a novel dataset of 30 horses that allowed for both within-domain and out-of-domain (unseen horse) testing. We find that pretraining on ImageNet strongly improves out-of-domain performance. Moreover, we show that, for both pretrained networks and networks trained from scratch, architectures that perform better on ImageNet also perform better for pose estimation, with a substantial improvement on out-of-domain data when pretrained. Collectively, our results demonstrate that transfer learning is particularly beneficial for out-of-domain robustness. |
Tasks | Animal Pose Estimation, Pose Estimation, Transfer Learning |
Published | 2019-09-24 |
URL | https://arxiv.org/abs/1909.11229v1 |
https://arxiv.org/pdf/1909.11229v1.pdf | |
PWC | https://paperswithcode.com/paper/pretraining-boosts-out-of-domain-robustness |
Repo | https://github.com/AlexEMG/DeepLabCut |
Framework | tf |
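A note on the entry above: the comparison amounts to initializing the backbone from ImageNet weights versus at random. A minimal torchvision sketch of the two initializations follows; the pose-estimation heads used in the paper (DeepLabCut) are omitted, and older torchvision versions spell the flag pretrained=True rather than weights=.

```python
import torch
import torchvision

# ImageNet-pretrained vs. randomly initialised ResNet-50 backbone
# ("DEFAULT" selects the ImageNet weights in recent torchvision releases).
pretrained = torchvision.models.resnet50(weights="DEFAULT")
scratch = torchvision.models.resnet50(weights=None)

x = torch.randn(1, 3, 224, 224)
print(pretrained(x).shape, scratch(x).shape)   # both torch.Size([1, 1000])
```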
Zero-Shot Semantic Parsing for Instructions
Title | Zero-Shot Semantic Parsing for Instructions |
Authors | Ofer Givoli, Roi Reichart |
Abstract | We consider a zero-shot semantic parsing task: parsing instructions into compositional logical forms, in domains that were not seen during training. We present a new dataset with 1,390 examples from 7 application domains (e.g. a calendar or a file manager), each example consisting of a triplet: (a) the application’s initial state, (b) an instruction, to be carried out in the context of that state, and (c) the state of the application after carrying out the instruction. We introduce a new training algorithm that aims to train a semantic parser on examples from a set of source domains, so that it can effectively parse instructions from an unknown target domain. We integrate our algorithm into the floating parser of Pasupat and Liang (2015), and further augment the parser with features and a logical form candidate filtering logic, to support zero-shot adaptation. Our experiments with various zero-shot adaptation setups demonstrate substantial performance gains over a non-adapted parser. |
Tasks | Semantic Parsing |
Published | 2019-11-20 |
URL | https://arxiv.org/abs/1911.08827v1 |
https://arxiv.org/pdf/1911.08827v1.pdf | |
PWC | https://paperswithcode.com/paper/zero-shot-semantic-parsing-for-instructions-1 |
Repo | https://github.com/givoli/TechnionNLI |
Framework | none |