July 29, 2019

3233 words 16 mins read

Paper Group AWR 156


Familia: An Open-Source Toolkit for Industrial Topic Modeling

Title Familia: An Open-Source Toolkit for Industrial Topic Modeling
Authors Di Jiang, Zeyu Chen, Rongzhong Lian, Siqi Bao, Chen Li
Abstract Familia is an open-source toolkit for pragmatic topic modeling in industry. Familia abstracts the utilities of topic modeling in industry as two paradigms: semantic representation and semantic matching. Efficient implementations of the two paradigms are made publicly available for the first time. Furthermore, we provide off-the-shelf topic models trained on large-scale industrial corpora, including Latent Dirichlet Allocation (LDA), SentenceLDA and Topical Word Embedding (TWE). We further describe typical applications which are successfully powered by topic modeling, in order to ease the confusion and difficulties software engineers face during topic model selection and utilization.
Tasks Model Selection, Topic Models
Published 2017-07-31
URL http://arxiv.org/abs/1707.09823v1
PDF http://arxiv.org/pdf/1707.09823v1.pdf
PWC https://paperswithcode.com/paper/familia-an-open-source-toolkit-for-industrial
Repo https://github.com/Ridew/Familia
Framework none
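
The two paradigms the abstract names, semantic representation and semantic matching, can be illustrated with any topic model. The sketch below uses gensim's LDA on a toy corpus rather than Familia's own API (which differs); the corpus and functions are placeholders for the general idea.

```python
# Minimal sketch of the two paradigms Familia describes, using gensim's LDA
# instead of Familia's own toolkit (assumption: any pre-tokenized corpus works).
from gensim.corpora import Dictionary
from gensim.models import LdaModel
import numpy as np

docs = [["search", "engine", "query"], ["neural", "network", "training"]]  # toy corpus
dictionary = Dictionary(docs)
bows = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(bows, num_topics=2, id2word=dictionary, passes=10)

def semantic_representation(tokens):
    """Paradigm 1: represent a document as its topic distribution."""
    dense = np.zeros(lda.num_topics)
    for topic_id, prob in lda.get_document_topics(dictionary.doc2bow(tokens),
                                                  minimum_probability=0.0):
        dense[topic_id] = prob
    return dense

def semantic_matching(tokens_a, tokens_b):
    """Paradigm 2: score two texts by the similarity of their topic distributions."""
    a, b = semantic_representation(tokens_a), semantic_representation(tokens_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(semantic_matching(["search", "query"], ["neural", "training"]))
```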

NegBio: a high-performance tool for negation and uncertainty detection in radiology reports

Title NegBio: a high-performance tool for negation and uncertainty detection in radiology reports
Authors Yifan Peng, Xiaosong Wang, Le Lu, Mohammadhadi Bagheri, Ronald Summers, Zhiyong Lu
Abstract Negative and uncertain medical findings are frequent in radiology reports, but discriminating them from positive findings remains challenging for information extraction. Here, we propose a new algorithm, NegBio, to detect negative and uncertain findings in radiology reports. Unlike previous rule-based methods, NegBio utilizes patterns on universal dependencies to identify the scope of triggers that are indicative of negation or uncertainty. We evaluated NegBio on four datasets, including two public benchmarking corpora of radiology reports, a new radiology corpus that we annotated for this work, and a public corpus of general clinical texts. Evaluation on these datasets demonstrates that NegBio is highly accurate for detecting negative and uncertain findings and compares favorably to a widely-used state-of-the-art system, NegEx (an average of 9.5% improvement in precision and 5.1% in F1-score).
Tasks
Published 2017-12-16
URL http://arxiv.org/abs/1712.05898v2
PDF http://arxiv.org/pdf/1712.05898v2.pdf
PWC https://paperswithcode.com/paper/negbio-a-high-performance-tool-for-negation
Repo https://github.com/ncbi-nlp/NegBio
Framework none
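
To give a feel for trigger-plus-dependency-scope negation detection, here is a rough approximation using spaCy and a toy trigger list. It is not NegBio's actual universal-dependency patterns or API; it only sketches the idea of scoping a trigger over a dependency subtree.

```python
# Rough illustration of trigger-plus-dependency-scope detection in the spirit of
# NegBio, using spaCy and toy trigger lists rather than NegBio's real patterns
# (assumption: the en_core_web_sm model is installed).
import spacy

nlp = spacy.load("en_core_web_sm")
NEGATION_TRIGGERS = {"no", "without", "absent"}          # toy list
UNCERTAINTY_TRIGGERS = {"possible", "questionable"}      # toy list

def mark_findings(text, findings):
    doc = nlp(text)
    labels = {}
    for token in doc:
        if token.lower_ in NEGATION_TRIGGERS | UNCERTAINTY_TRIGGERS:
            label = "negative" if token.lower_ in NEGATION_TRIGGERS else "uncertain"
            # Crude scope: everything in the subtree of the trigger's head.
            scope = {t.lower_ for t in token.head.subtree}
            for f in findings:
                if f in scope:
                    labels[f] = label
    return {f: labels.get(f, "positive") for f in findings}

print(mark_findings("No evidence of pneumothorax. Possible effusion.",
                    ["pneumothorax", "effusion"]))
```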

Crowdsourcing for Beyond Polarity Sentiment Analysis A Pure Emotion Lexicon

Title Crowdsourcing for Beyond Polarity Sentiment Analysis A Pure Emotion Lexicon
Authors Giannis Haralabopoulos, Elena Simperl
Abstract Sentiment analysis aims to uncover emotions conveyed through information. In its simplest form, it is performed on a polarity basis, where the goal is to classify information with positive or negative emotion. Recent research has explored more nuanced ways to capture emotions that go beyond polarity. For these methods to work, they require a critical resource: a lexicon that is appropriate for the task at hand in terms of the range of emotions it captures. In the past, sentiment analysis lexicons have been created by experts, such as linguists and behavioural scientists, with strict rules. Lexicon evaluation was also performed by experts or against gold standards. In our paper, we propose a crowdsourcing method for lexicon acquisition, which is scalable, cost-effective, and doesn’t require experts or gold standards. We also compare crowd and expert evaluations of the lexicon, to assess the overall lexicon quality, and the evaluation capabilities of the crowd.
Tasks Sentiment Analysis
Published 2017-10-04
URL http://arxiv.org/abs/1710.04203v1
PDF http://arxiv.org/pdf/1710.04203v1.pdf
PWC https://paperswithcode.com/paper/crowdsourcing-for-beyond-polarity-sentiment
Repo https://github.com/GiannisH/Lexicon
Framework none

Scattertext: a Browser-Based Tool for Visualizing how Corpora Differ

Title Scattertext: a Browser-Based Tool for Visualizing how Corpora Differ
Authors Jason S. Kessler
Abstract Scattertext is an open source tool for visualizing linguistic variation between document categories in a language-independent way. The tool presents a scatterplot, where each axis corresponds to the rank of a term’s frequency within one category of documents. Through a tie-breaking strategy, the tool is able to display thousands of visible term-representing points and find space to legibly label hundreds of them. Scattertext also lends itself to a query-based visualization of how the use of terms with similar embeddings differs between document categories, as well as a visualization for comparing the importance scores of bag-of-words features to univariate metrics.
Tasks
Published 2017-03-02
URL http://arxiv.org/abs/1703.00565v3
PDF http://arxiv.org/pdf/1703.00565v3.pdf
PWC https://paperswithcode.com/paper/scattertext-a-browser-based-tool-for-1
Repo https://github.com/JasonKessler/scattertext
Framework tf
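
The scatterplot axes described above are per-category term-frequency ranks. The snippet below recomputes that idea with plain Python so the plotted quantity is concrete; it is not Scattertext's own code (see the repo for the actual corpus-building and explorer API), and the two toy categories are placeholders.

```python
# Recompute the per-category rank-frequency values that form Scattertext's axes.
# Plain-Python illustration only; not the library's API.
from collections import Counter

def rank_frequency(docs):
    """Map each term to its frequency rank within one document category (1 = most frequent)."""
    counts = Counter(w for doc in docs for w in doc.lower().split())
    ordered = sorted(counts, key=counts.get, reverse=True)
    return {term: rank + 1 for rank, term in enumerate(ordered)}

category_a = ["the economy is growing", "jobs and the economy"]
category_b = ["healthcare reform now", "reform the healthcare system"]

ranks_a, ranks_b = rank_frequency(category_a), rank_frequency(category_b)
# Each term shared by both categories becomes one point: (rank in A, rank in B).
points = {t: (ranks_a[t], ranks_b[t]) for t in ranks_a if t in ranks_b}
print(points)
```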

Language Identification Using Deep Convolutional Recurrent Neural Networks

Title Language Identification Using Deep Convolutional Recurrent Neural Networks
Authors Christian Bartz, Tom Herold, Haojin Yang, Christoph Meinel
Abstract Language Identification (LID) systems are used to classify the spoken language from a given audio sample and are typically the first step for many spoken language processing tasks, such as Automatic Speech Recognition (ASR) systems. Without automatic language detection, speech utterances cannot be parsed correctly and grammar rules cannot be applied, causing subsequent speech recognition steps to fail. We propose a LID system that solves the problem in the image domain, rather than the audio domain. We use a hybrid Convolutional Recurrent Neural Network (CRNN) that operates on spectrogram images of the provided audio snippets. In extensive experiments we show that our model is applicable to a range of noisy scenarios and can easily be extended to previously unknown languages, while maintaining its classification accuracy. We release our code and a large-scale training set for LID systems to the community.
Tasks Language Identification, Speech Recognition
Published 2017-08-16
URL http://arxiv.org/abs/1708.04811v1
PDF http://arxiv.org/pdf/1708.04811v1.pdf
PWC https://paperswithcode.com/paper/language-identification-using-deep
Repo https://github.com/HPI-DeepLearning/crnn-lid
Framework tf
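
The architecture described above, convolutional features over a spectrogram image fed as a time sequence into a recurrent layer, can be sketched as follows. This is a minimal PyTorch sketch with illustrative layer sizes; the released code linked above is a Keras/TensorFlow implementation with a deeper CNN.

```python
# Minimal sketch of the CRNN idea for language identification: CNN features over
# a spectrogram, treated as a time sequence and fed to a GRU. Sizes are assumptions.
import torch
import torch.nn as nn

class CRNNLanguageID(nn.Module):
    def __init__(self, n_languages=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.GRU(input_size=32 * 32, hidden_size=128, batch_first=True)
        self.classifier = nn.Linear(128, n_languages)

    def forward(self, spectrogram):           # (batch, 1, 128 freq bins, time)
        feats = self.cnn(spectrogram)         # (batch, 32, 32, time/4)
        feats = feats.permute(0, 3, 1, 2)     # time-major sequence of feature maps
        feats = feats.flatten(start_dim=2)    # (batch, time/4, 32*32)
        _, hidden = self.rnn(feats)
        return self.classifier(hidden[-1])    # one language score vector per clip

logits = CRNNLanguageID()(torch.randn(2, 1, 128, 400))
print(logits.shape)  # torch.Size([2, 4])
```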

AffordanceNet: An End-to-End Deep Learning Approach for Object Affordance Detection

Title AffordanceNet: An End-to-End Deep Learning Approach for Object Affordance Detection
Authors Thanh-Toan Do, Anh Nguyen, Ian Reid
Abstract We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple objects and their affordances from RGB images. Our AffordanceNet has two branches: an object detection branch to localize and classify the object, and an affordance detection branch to assign each pixel in the object to its most probable affordance label. The proposed framework employs three key components for effectively handling the multiclass problem in the affordance mask: a sequence of deconvolutional layers, a robust resizing strategy, and a multi-task loss function. The experimental results on the public datasets show that our AffordanceNet outperforms recent state-of-the-art methods by a fair margin, while its end-to-end architecture allows inference at a speed of 150ms per image. This makes our AffordanceNet well suited to real-time robotic applications. Furthermore, we demonstrate the effectiveness of AffordanceNet in different testing environments and in real robotic applications. The source code is available at https://github.com/nqanh/affordance-net
Tasks Object Detection
Published 2017-09-21
URL http://arxiv.org/abs/1709.07326v3
PDF http://arxiv.org/pdf/1709.07326v3.pdf
PWC https://paperswithcode.com/paper/affordancenet-an-end-to-end-deep-learning
Repo https://github.com/nqanh/affordance-net
Framework none
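
The multi-task loss mentioned above combines the detection branch (object classification and box regression) with per-pixel multi-class cross-entropy over the affordance mask. A minimal PyTorch sketch of that combination follows; the loss weights and tensor shapes are placeholders, not the paper's settings.

```python
# Sketch of an AffordanceNet-style multi-task objective: detection losses plus a
# per-pixel affordance cross-entropy. Weights and shapes are illustrative.
import torch
import torch.nn.functional as F

def affordancenet_loss(cls_logits, cls_targets,
                       box_preds, box_targets,
                       mask_logits, mask_targets,
                       w_cls=1.0, w_box=1.0, w_mask=1.0):
    loss_cls = F.cross_entropy(cls_logits, cls_targets)        # object class
    loss_box = F.smooth_l1_loss(box_preds, box_targets)        # box regression
    # mask_logits: (N, n_affordances, H, W); mask_targets: (N, H, W) label map
    loss_mask = F.cross_entropy(mask_logits, mask_targets)     # affordance mask
    return w_cls * loss_cls + w_box * loss_box + w_mask * loss_mask

loss = affordancenet_loss(torch.randn(8, 10), torch.randint(0, 10, (8,)),
                          torch.randn(8, 4), torch.randn(8, 4),
                          torch.randn(8, 7, 28, 28), torch.randint(0, 7, (8, 28, 28)))
print(loss.item())
```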

Asymmetric Tri-training for Unsupervised Domain Adaptation

Title Asymmetric Tri-training for Unsupervised Domain Adaptation
Authors Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada
Abstract Deep-layered models trained on a large number of labeled samples boost the accuracy of many tasks. It is important to apply such models to different domains because collecting many labeled samples in various domains is expensive. In unsupervised domain adaptation, one needs to train a classifier that works well on a target domain when provided with labeled source samples and unlabeled target samples. Although many methods aim to match the distributions of source and target samples, simply matching the distribution cannot ensure accuracy on the target domain. To learn discriminative representations for the target domain, we assume that artificially labeling target samples can result in a good representation. Tri-training leverages three classifiers equally to give pseudo-labels to unlabeled samples, but the method does not assume that the unlabeled samples are generated from a different domain. In this paper, we propose an asymmetric tri-training method for unsupervised domain adaptation, where we assign pseudo-labels to unlabeled samples and train neural networks as if they were true labels. In our work, we use three networks asymmetrically. By asymmetric, we mean that two networks are used to label unlabeled target samples and one network is trained by the samples to obtain target-discriminative representations. We evaluate our method on digit recognition and sentiment analysis datasets. Our proposed method achieves state-of-the-art performance on the benchmark digit recognition datasets of domain adaptation.
Tasks Domain Adaptation, Sentiment Analysis, Unsupervised Domain Adaptation
Published 2017-02-27
URL http://arxiv.org/abs/1702.08400v3
PDF http://arxiv.org/pdf/1702.08400v3.pdf
PWC https://paperswithcode.com/paper/asymmetric-tri-training-for-unsupervised
Repo https://github.com/ksaito-ut/atda
Framework tf
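
The heart of the asymmetric scheme is the pseudo-labeling rule: two labeling networks must agree (and be sufficiently confident) before a target sample and its pseudo-label are handed to the third, target-specific network. A minimal sketch of that selection step follows; the confidence threshold and toy networks are assumptions, not the paper's exact settings.

```python
# Sketch of the pseudo-label selection step in asymmetric tri-training: keep a
# target sample only when the two labeling networks agree and are confident.
import torch
import torch.nn.functional as F

def select_pseudo_labels(net1, net2, target_batch, threshold=0.9):
    with torch.no_grad():
        p1 = F.softmax(net1(target_batch), dim=1)
        p2 = F.softmax(net2(target_batch), dim=1)
    conf1, pred1 = p1.max(dim=1)
    conf2, pred2 = p2.max(dim=1)
    agree = (pred1 == pred2) & (torch.maximum(conf1, conf2) > threshold)
    # Selected samples are later used as if their pseudo-labels were true labels.
    return target_batch[agree], pred1[agree]

net1 = torch.nn.Linear(32, 10)   # stand-ins for the two labeling networks
net2 = torch.nn.Linear(32, 10)
samples, labels = select_pseudo_labels(net1, net2, torch.randn(64, 32), threshold=0.2)
print(samples.shape, labels.shape)
```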

Nearly Optimal Robust Subspace Tracking

Title Nearly Optimal Robust Subspace Tracking
Authors Praneeth Narayanamurthy, Namrata Vaswani
Abstract In this work, we study the robust subspace tracking (RST) problem and obtain one of the first two provable guarantees for it. The goal of RST is to track sequentially arriving data vectors that lie in a slowly changing low-dimensional subspace, while being robust to corruption by additive sparse outliers. It can also be interpreted as a dynamic (time-varying) extension of robust PCA (RPCA), with the minor difference that RST also requires a short tracking delay. We develop a recursive projected compressive sensing algorithm that we call Nearly Optimal RST via ReProCS (ReProCS-NORST) because its tracking delay is nearly optimal. We prove that NORST solves both the RST and the dynamic RPCA problems under weakened standard RPCA assumptions, two simple extra assumptions (slow subspace change and most outlier magnitudes lower bounded), and a few minor assumptions. Our guarantee shows that NORST enjoys a near optimal tracking delay of $O(r \log n \log(1/\epsilon))$. Its required delay between subspace change times is the same, and its memory complexity is $n$ times this value. Thus both these are also nearly optimal. Here $n$ is the ambient space dimension, $r$ is the subspaces’ dimension, and $\epsilon$ is the tracking accuracy. NORST also has the best outlier tolerance compared with all previous RPCA or RST methods, both theoretically and empirically (including for real videos), without requiring any model on how the outlier support is generated. This is possible because of the extra assumptions it uses.
Tasks Compressive Sensing
Published 2017-12-17
URL http://arxiv.org/abs/1712.06061v4
PDF http://arxiv.org/pdf/1712.06061v4.pdf
PWC https://paperswithcode.com/paper/nearly-optimal-robust-subspace-tracking
Repo https://github.com/praneethmurthy/NORST
Framework none
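
The "projected compressive sensing" recursion described above can be caricatured in a few lines of numpy: project the incoming vector onto the orthogonal complement of the current subspace estimate, detect the sparse outliers there, remove them, and keep the cleaned vector for the next SVD-based subspace update. The sketch below uses simple thresholding as a crude stand-in for the paper's CS/least-squares support estimation, and the sizes are toy assumptions.

```python
# Very rough numpy sketch of one ReProCS/NORST-style step; thresholding stands in
# for the actual compressive-sensing support estimation.
import numpy as np

def norst_style_step(y, P_hat, outlier_threshold):
    """y: (n,) data vector; P_hat: (n, r) orthonormal basis of the current subspace estimate."""
    residual = y - P_hat @ (P_hat.T @ y)          # project out the low-rank part
    support = np.abs(residual) > outlier_threshold
    x_hat = np.zeros_like(y)
    x_hat[support] = residual[support]            # crude sparse-outlier estimate
    l_hat = y - x_hat                             # cleaned (low-rank) component
    return l_hat, x_hat

rng = np.random.default_rng(0)
n, r = 100, 3
P_true, _ = np.linalg.qr(rng.standard_normal((n, r)))
y = P_true @ rng.standard_normal(r)
y[rng.choice(n, 5, replace=False)] += 10.0        # add sparse outliers
l_hat, x_hat = norst_style_step(y, P_true, outlier_threshold=1.0)
print(np.count_nonzero(x_hat))                    # roughly the 5 corrupted entries
```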

Exploring the Bounds of the Utility of Context for Object Detection

Title Exploring the Bounds of the Utility of Context for Object Detection
Authors Ehud Barnea, Ohad Ben-Shahar
Abstract The recurring context in which objects appear holds valuable information that can be employed to predict their existence. This intuitive observation indeed led many researchers to endow appearance-based detectors with explicit reasoning about context. The underlying thesis suggests that stronger contextual relations would facilitate greater improvements in detection capacity. In practice, however, the observed improvement in many cases is modest at best, and often only marginal. In this work we seek to improve our understanding of this phenomenon, in part by pursuing an opposite approach. Instead of attempting to improve detection scores by employing context, we treat the utility of context as an optimization problem: to what extent can detection scores be improved by considering context or any other kind of additional information? With this approach we explore the bounds on improvement by using contextual relations between objects and provide a tool for identifying the most helpful ones. We show that simple co-occurrence relations can often provide large gains, while in other cases a significant improvement is simply impossible or impractical with either co-occurrence or more precise spatial relations. To better understand these results we then analyze the ability of context to handle different types of false detections, revealing that tested contextual information cannot ameliorate localization errors, severely limiting its gains. These and additional insights further our understanding of where and why utilization of context for object detection succeeds and fails.
Tasks Object Detection
Published 2017-11-15
URL http://arxiv.org/abs/1711.05471v4
PDF http://arxiv.org/pdf/1711.05471v4.pdf
PWC https://paperswithcode.com/paper/on-the-utility-of-context-or-the-lack-thereof
Repo https://github.com/EhudBarnea/ContextAnalysis
Framework none

Simple And Efficient Architecture Search for Convolutional Neural Networks

Title Simple And Efficient Architecture Search for Convolutional Neural Networks
Authors Thomas Elsken, Jan-Hendrik Metzen, Frank Hutter
Abstract Neural networks have recently had a lot of success for many tasks. However, neural network architectures that perform well are still typically designed manually by experts in a cumbersome trial-and-error process. We propose a new method to automatically search for well-performing CNN architectures based on a simple hill climbing procedure whose operators apply network morphisms, followed by short optimization runs with cosine annealing. Surprisingly, this simple method yields competitive results, despite only requiring resources of the same order of magnitude as training a single network. For example, on CIFAR-10, our method designs and trains networks with an error rate below 6% in only 12 hours on a single GPU; training for one day reduces this error further, to almost 5%.
Tasks Neural Architecture Search
Published 2017-11-13
URL http://arxiv.org/abs/1711.04528v1
PDF http://arxiv.org/pdf/1711.04528v1.pdf
PWC https://paperswithcode.com/paper/simple-and-efficient-architecture-search-for
Repo https://github.com/akwasigroch/NAS_network_morphism
Framework tf
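
The search procedure described above is essentially a hill-climbing loop: apply random network morphisms to the incumbent model, train each child briefly with cosine-annealed SGD, and keep the best child. The skeleton below shows that loop structure only; the morphism, training, and evaluation functions are placeholders, not the authors' implementation.

```python
# Skeleton of morphism-based hill climbing for architecture search. The plugged-in
# functions below are toy stand-ins; a real run would use actual CNNs and training.
import copy
import random

def architecture_search(initial_model, apply_random_morphism, train_briefly, evaluate,
                        n_steps=8, n_children=8):
    best_model = initial_model
    best_score = evaluate(best_model)
    for _ in range(n_steps):
        children = []
        for _ in range(n_children):
            child = apply_random_morphism(copy.deepcopy(best_model))
            train_briefly(child)               # short run, e.g. a few cosine-annealed epochs
            children.append((evaluate(child), child))
        top_score, top_child = max(children, key=lambda c: c[0])
        if top_score > best_score:             # hill climbing: only move uphill
            best_score, best_model = top_score, top_child
    return best_model, best_score

# Toy usage with stand-in functions (pretend depth 6 is optimal):
model, score = architecture_search(
    initial_model={"depth": 3},
    apply_random_morphism=lambda m: {"depth": m["depth"] + random.choice([0, 1])},
    train_briefly=lambda m: None,
    evaluate=lambda m: -abs(m["depth"] - 6),
)
print(model, score)
```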

Adaptive Measurement Network for CS Image Reconstruction

Title Adaptive Measurement Network for CS Image Reconstruction
Authors Xuemei Xie, Yuxiang Wang, Guangming Shi, Chenye Wang, Jiang Du, Zhifu Zhao
Abstract Conventional compressive sensing (CS) reconstruction is very slow because it requires solving an optimization problem. Convolutional neural networks can realize fast processing while achieving comparable results. However, high-quality CS image recovery depends not only on good reconstruction algorithms but also on good measurements. In this paper, we propose an adaptive measurement network in which the measurement is obtained by learning. The new network consists of a fully-connected layer and ReconNet. The fully-connected layer, which has a low-dimensional output, acts as the measurement. We train the fully-connected layer and ReconNet simultaneously to obtain an adaptive measurement. Because the adaptive measurement fits the dataset better than a random Gaussian measurement matrix, it can, at the same measurement rate, extract the information of the scene more efficiently and yield better reconstruction results. Experiments show that the new network outperforms the original one.
Tasks Compressive Sensing, Image Reconstruction
Published 2017-09-23
URL http://arxiv.org/abs/1710.01244v1
PDF http://arxiv.org/pdf/1710.01244v1.pdf
PWC https://paperswithcode.com/paper/adaptive-measurement-network-for-cs-image
Repo https://github.com/Chinmayrane16/ReconNet-PyTorch
Framework pytorch
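
The key idea, a learned fully-connected measurement layer trained jointly with the reconstruction network, can be sketched directly in PyTorch. The tiny reconstruction head below is only a stand-in for ReconNet, and the block size and measurement rate are illustrative.

```python
# Sketch of the adaptive-measurement idea: a learned FC layer as the measurement
# operator, trained jointly with a (stand-in) reconstruction network.
import torch
import torch.nn as nn

class AdaptiveMeasurementNet(nn.Module):
    def __init__(self, block_size=33, measurement_rate=0.25):
        super().__init__()
        n_pixels = block_size * block_size
        n_measurements = int(measurement_rate * n_pixels)
        self.block_size = block_size
        self.measure = nn.Linear(n_pixels, n_measurements, bias=False)  # learned measurement
        self.reconstruct = nn.Sequential(                               # stand-in for ReconNet
            nn.Linear(n_measurements, n_pixels), nn.ReLU(),
            nn.Linear(n_pixels, n_pixels),
        )

    def forward(self, blocks):                    # (batch, block_size, block_size)
        flat = blocks.flatten(start_dim=1)
        y = self.measure(flat)                    # adaptive measurements
        out = self.reconstruct(y)
        return out.view(-1, self.block_size, self.block_size)

net = AdaptiveMeasurementNet()
blocks = torch.rand(16, 33, 33)
loss = nn.functional.mse_loss(net(blocks), blocks)   # joint training signal
loss.backward()
print(loss.item())
```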

Learning from Millions of 3D Scans for Large-scale 3D Face Recognition

Title Learning from Millions of 3D Scans for Large-scale 3D Face Recognition
Authors Syed Zulqarnain Gilani, Ajmal Mian
Abstract Deep networks trained on millions of facial images are believed to be closely approaching human-level performance in face recognition. However, open-world face recognition still remains a challenge. Although 3D face recognition has an inherent edge over its 2D counterpart, it has not benefited from the recent developments in deep learning due to the unavailability of large training and test datasets. Recognition accuracies have already saturated on existing 3D face datasets due to their small gallery sizes. Unlike 2D photographs, 3D facial scans cannot be sourced from the web, causing a bottleneck in the development of deep 3D face recognition networks and datasets. Against this backdrop, we propose a method for generating a large corpus of labeled 3D face identities and their multiple instances for training, and a protocol for merging the most challenging existing 3D datasets for testing. We also propose the first deep CNN model designed specifically for 3D face recognition, trained on 3.1 million 3D facial scans of 100K identities. Our test dataset comprises 1,853 identities with a single 3D scan in the gallery and another 31K scans as probes, which is several orders of magnitude larger than existing ones. Without fine-tuning on this dataset, our network already outperforms state-of-the-art face recognition by over 10%. We fine-tune our network on the gallery set to perform end-to-end large-scale 3D face recognition, which further improves accuracy. Finally, we show the efficacy of our method for the open-world face recognition problem.
Tasks Face Recognition
Published 2017-11-16
URL http://arxiv.org/abs/1711.05942v3
PDF http://arxiv.org/pdf/1711.05942v3.pdf
PWC https://paperswithcode.com/paper/learning-from-millions-of-3d-scans-for-large
Repo https://github.com/huyhieupham/3D-Face-Recognition
Framework none

H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation from CT Volumes

Title H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation from CT Volumes
Authors Xiaomeng Li, Hao Chen, Xiaojuan Qi, Qi Dou, Chi-Wing Fu, Pheng Ann Heng
Abstract Liver cancer is one of the leading causes of cancer death. To assist doctors in hepatocellular carcinoma diagnosis and treatment planning, an accurate and automatic liver and tumor segmentation method is in high demand in clinical practice. Recently, fully convolutional neural networks (FCNs), including 2D and 3D FCNs, serve as the backbone in many volumetric image segmentation methods. However, 2D convolutions cannot fully leverage the spatial information along the third dimension, while 3D convolutions suffer from high computational cost and GPU memory consumption. To address these issues, we propose a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2D DenseUNet for efficiently extracting intra-slice features and a 3D counterpart for hierarchically aggregating volumetric contexts, in the spirit of the auto-context algorithm, for liver and tumor segmentation. We formulate the learning process of H-DenseUNet in an end-to-end manner, where the intra-slice representations and inter-slice features can be jointly optimized through a hybrid feature fusion (HFF) layer. We extensively evaluated our method on the dataset of the MICCAI 2017 Liver Tumor Segmentation (LiTS) Challenge and the 3DIRCADb dataset. Our method outperformed other state-of-the-art methods on tumor segmentation and achieved very competitive performance for liver segmentation, even with a single model.
Tasks Automatic Liver And Tumor Segmentation, Lesion Segmentation, Liver Segmentation, Semantic Segmentation
Published 2017-09-21
URL http://arxiv.org/abs/1709.07330v3
PDF http://arxiv.org/pdf/1709.07330v3.pdf
PWC https://paperswithcode.com/paper/h-denseunet-hybrid-densely-connected-unet-for
Repo https://github.com/xmengli999/H-DenseUNet
Framework none
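
The hybrid feature fusion (HFF) idea can be sketched as: run a 2D network slice-by-slice, re-stack its features into a volume, and fuse them with features from a 3D network via a 3D convolution. In the sketch below the single-conv "backbones" and channel counts are placeholders, not the DenseUNet architectures from the paper.

```python
# Sketch of hybrid 2D/3D feature fusion in PyTorch; backbones are stand-ins.
import torch
import torch.nn as nn

class HybridFeatureFusion(nn.Module):
    def __init__(self, c2d=8, c3d=8, n_classes=3):
        super().__init__()
        self.net2d = nn.Conv2d(1, c2d, 3, padding=1)              # stand-in for 2D DenseUNet
        self.net3d = nn.Conv3d(1, c3d, 3, padding=1)              # stand-in for 3D counterpart
        self.fuse = nn.Conv3d(c2d + c3d, n_classes, 1)            # HFF-style fusion layer

    def forward(self, volume):                                    # (batch, 1, D, H, W)
        b, _, d, h, w = volume.shape
        slices = volume.permute(0, 2, 1, 3, 4).reshape(b * d, 1, h, w)
        f2d = self.net2d(slices)                                  # per-slice (intra-slice) features
        f2d = f2d.reshape(b, d, -1, h, w).permute(0, 2, 1, 3, 4)  # re-stack into a volume
        f3d = self.net3d(volume)                                  # volumetric (inter-slice) features
        return self.fuse(torch.cat([f2d, f3d], dim=1))            # per-voxel class scores

out = HybridFeatureFusion()(torch.randn(1, 1, 8, 64, 64))
print(out.shape)  # torch.Size([1, 3, 8, 64, 64])
```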

ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing

Title ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing
Authors Jian Zhang, Bernard Ghanem
Abstract With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones. Specifically, we propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $\ell_1$ norm CS reconstruction model. To cast ISTA into deep network form, we develop an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms. All the parameters in ISTA-Net (e.g., nonlinear transforms, shrinkage thresholds, step sizes) are learned end-to-end, rather than being hand-crafted. Moreover, considering that the residuals of natural images are more compressible, an enhanced version of ISTA-Net in the residual domain, dubbed ISTA-Net$^+$, is derived to further improve CS reconstruction. Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform existing state-of-the-art optimization-based and network-based CS methods by large margins, while maintaining fast computational speed. Our source code is available at http://jianzhang.tech/projects/ISTA-Net.
Tasks Compressive Sensing
Published 2017-06-24
URL http://arxiv.org/abs/1706.07929v2
PDF http://arxiv.org/pdf/1706.07929v2.pdf
PWC https://paperswithcode.com/paper/ista-net-interpretable-optimization-inspired
Repo https://github.com/jianzhangcs/ISTA-Net-PyTorch
Framework pytorch
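
One unrolled ISTA phase, the building block described above, can be sketched as: a gradient step on the data-fidelity term with a learned step size, a learned transform, soft-thresholding with a learned threshold, and an inverse transform, with an ISTA-Net$^+$-style residual update. The single-conv transforms, block size, and measurement setup below are illustrative assumptions, not the paper's design.

```python
# One unrolled ISTA phase in PyTorch, in the spirit of ISTA-Net; transforms are toy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ISTAPhase(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.5))      # learned step size
        self.theta = nn.Parameter(torch.tensor(0.01))    # learned soft threshold
        self.transform = nn.Conv2d(1, channels, 3, padding=1)
        self.inverse = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x, y, Phi, size=33):
        # Gradient step on the data term ||Phi x - y||^2 (block-wise, x flattened).
        flat = x.flatten(start_dim=1)                          # (batch, size*size)
        grad = (flat @ Phi.t() - y) @ Phi                      # Phi^T (Phi x - y)
        r = (flat - self.step * grad).view(-1, 1, size, size)
        z = self.transform(r)
        z = torch.sign(z) * F.relu(torch.abs(z) - self.theta)  # soft-thresholding
        return r + self.inverse(z)                             # residual-style update

m, n = 272, 33 * 33
Phi = torch.randn(m, n) / n ** 0.5
x0 = torch.rand(4, 1, 33, 33)
y = x0.flatten(start_dim=1) @ Phi.t()                          # simulated measurements
x1 = ISTAPhase()(torch.zeros_like(x0), y, Phi)
print(x1.shape)  # torch.Size([4, 1, 33, 33])
```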

Learning to Invert: Signal Recovery via Deep Convolutional Networks

Title Learning to Invert: Signal Recovery via Deep Convolutional Networks
Authors Ali Mousavi, Richard G. Baraniuk
Abstract The promise of compressive sensing (CS) has been offset by two significant challenges. First, real-world data is not exactly sparse in a fixed basis. Second, current high-performance recovery algorithms are slow to converge, which limits CS to either non-real-time applications or scenarios where massive back-end computing is available. In this paper, we attack both of these challenges head-on by developing a new signal recovery framework we call DeepInverse that learns the inverse transformation from measurement vectors to signals using a deep convolutional network. When trained on a set of representative images, the network learns both a representation for the signals (addressing challenge one) and an inverse map approximating a greedy or convex recovery algorithm (addressing challenge two). Our experiments indicate that the DeepInverse network closely approximates the solution produced by state-of-the-art CS recovery algorithms yet is hundreds of times faster in run time. The tradeoff for the ultrafast run time is a computationally intensive, off-line training procedure typical of deep networks. However, the training needs to be completed only once, which makes the approach attractive for a host of sparse recovery problems.
Tasks Compressive Sensing
Published 2017-01-14
URL http://arxiv.org/abs/1701.03891v1
PDF http://arxiv.org/pdf/1701.03891v1.pdf
PWC https://paperswithcode.com/paper/learning-to-invert-signal-recovery-via-deep
Repo https://github.com/TaihuLight/DeepInverse-Pytorch
Framework pytorch
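
The inverse-map idea can be sketched as: lift the measurement vector back to image dimensions (here with the fixed adjoint of the measurement matrix), then learn the refinement with a small convolutional network. The network depth, image size, and measurement rate below are illustrative assumptions, not the architecture from the paper.

```python
# Sketch of a DeepInverse-style learned inverse map from measurements to images.
import torch
import torch.nn as nn

class DeepInverseSketch(nn.Module):
    def __init__(self, Phi, size=32):
        super().__init__()
        self.register_buffer("Phi", Phi)       # fixed measurement matrix (m x size*size)
        self.size = size
        self.refine = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, y):                       # y: (batch, m) measurements
        proxy = (y @ self.Phi).view(-1, 1, self.size, self.size)  # Phi^T y as an image
        return self.refine(proxy)

size, m = 32, 256
Phi = torch.randn(m, size * size) / (size * size) ** 0.5
images = torch.rand(8, 1, size, size)
y = images.flatten(start_dim=1) @ Phi.t()       # simulated compressive measurements
recon = DeepInverseSketch(Phi, size)(y)
print(nn.functional.mse_loss(recon, images).item())
```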