October 21, 2019


Paper Group AWR 105



Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction

Title Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction
Authors Patrick Verga, Emma Strubell, Andrew McCallum
Abstract Most work in relation extraction forms a prediction by looking at a short span of text within a single sentence containing a single entity pair mention. This approach often does not consider interactions across mentions, requires redundant computation for each mention pair, and ignores relationships expressed across sentence boundaries. These problems are exacerbated by the document- (rather than sentence-) level annotation common in biological text. In response, we propose a model which simultaneously predicts relationships between all mention pairs in a document. We form pairwise predictions over entire paper abstracts using an efficient self-attention encoder. All-pairs mention scores allow us to perform multi-instance learning by aggregating over mentions to form entity pair representations. We further adapt to settings without mention-level annotation by jointly training to predict named entities and adding a corpus of weakly labeled data. In experiments on two Biocreative benchmark datasets, we achieve state-of-the-art performance on the Biocreative V Chemical Disease Relation dataset for models without external KB resources. We also introduce a new dataset that is an order of magnitude larger than existing human-annotated biological information extraction datasets and more accurate than distantly supervised alternatives.
Tasks Relation Extraction
Published 2018-02-28
URL http://arxiv.org/abs/1802.10569v1
PDF http://arxiv.org/pdf/1802.10569v1.pdf
PWC https://paperswithcode.com/paper/simultaneously-self-attending-to-all-mentions
Repo https://github.com/patverga/bran
Framework tf
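
The abstract's two key moves, scoring all mention pairs in one pass and pooling mention scores with a smooth max, can be sketched in a few lines. This is an illustrative NumPy outline, not the authors' exact biaffine architecture; the tanh projection and the function names are assumptions.

```python
import numpy as np

def all_pairs_scores(H, W):
    """Score every (head, tail) token pair for one relation type in a
    single matmul, instead of re-encoding each mention pair.
    H: (n_tokens, d) encoder outputs; W: (d, d) bilinear weight."""
    head = np.tanh(H @ W)        # head-role projection (illustrative)
    return head @ H.T            # (n_tokens, n_tokens) pairwise scores

def entity_pair_score(scores, head_mentions, tail_mentions):
    """Multi-instance aggregation: pool mention-level scores into one
    entity-pair score with logsumexp, a smooth max over mentions."""
    s = scores[np.ix_(head_mentions, tail_mentions)].ravel()
    m = s.max()
    return m + np.log(np.exp(s - m).sum())

# toy abstract of 6 tokens: entity A mentioned at tokens {0, 3}, B at {5}
rng = np.random.default_rng(0)
S = all_pairs_scores(rng.normal(size=(6, 8)), rng.normal(size=(8, 8)))
print(entity_pair_score(S, head_mentions=[0, 3], tail_mentions=[5]))
```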

Parallel Clustering of Single Cell Transcriptomic Data with Split-Merge Sampling on Dirichlet Process Mixtures

Title Parallel Clustering of Single Cell Transcriptomic Data with Split-Merge Sampling on Dirichlet Process Mixtures
Authors Tiehang Duan, José P. Pinto, Xiaohui Xie
Abstract Motivation: With the development of droplet-based systems, massive single-cell transcriptome data has become available, which enables analysis of cellular and molecular processes at single-cell resolution and is instrumental to understanding many biological processes. While state-of-the-art clustering methods have been applied to the data, they face challenges in the following aspects: (1) the clustering quality still needs to be improved; (2) most models require prior knowledge of the number of clusters, which is not always available; (3) there is a demand for faster computational speed. Results: We propose to tackle these challenges with Parallel Split-Merge Sampling on the Dirichlet Process Mixture Model (the Para-DPMM model). Unlike classic DPMM methods that perform sampling on individual data points, the split-merge mechanism samples at the cluster level, which significantly improves convergence and optimality of the result. The model is highly parallelized and can utilize the computing power of high-performance computing (HPC) clusters, enabling massive clustering on huge datasets. Experimental results show the model outperforms current widely used models in both clustering quality and computational speed. Availability: Source code is publicly available on https://github.com/tiehangd/Para_DPMM/tree/master/Para_DPMM_package
Tasks
Published 2018-12-25
URL http://arxiv.org/abs/1812.10048v1
PDF http://arxiv.org/pdf/1812.10048v1.pdf
PWC https://paperswithcode.com/paper/parallel-clustering-of-single-cell
Repo https://github.com/tiehangd/Para_DPMM
Framework none
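
The cluster-level move the abstract credits for faster convergence can be illustrated with the merge side of split-merge sampling. Below is a hedged sketch for 1-D data with a conjugate Normal-Gamma prior, so cluster marginal likelihoods are closed form; a real sampler, including the authors', also needs split proposals and the proposal-density terms of the full Metropolis-Hastings ratio.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(x, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Closed-form log marginal likelihood of 1-D data under a
    Normal-Gamma prior (unknown mean and precision)."""
    n = len(x)
    if n == 0:
        return 0.0
    xbar, ss = np.mean(x), np.sum((x - np.mean(x)) ** 2)
    kappa_n, alpha_n = kappa0 + n, alpha0 + n / 2.0
    beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n)
    return (gammaln(alpha_n) - gammaln(alpha0)
            + alpha0 * np.log(beta0) - alpha_n * np.log(beta_n)
            + 0.5 * (np.log(kappa0) - np.log(kappa_n))
            - 0.5 * n * np.log(2 * np.pi))

def log_merge_ratio(x_a, x_b, alpha_dp=1.0):
    """Target-density part of the MH ratio for merging clusters a and b
    in a DP mixture: CRP prior ratio times marginal-likelihood ratio."""
    n_a, n_b = len(x_a), len(x_b)
    log_prior = (gammaln(n_a + n_b) - np.log(alpha_dp)
                 - gammaln(n_a) - gammaln(n_b))
    log_like = (log_marginal(np.concatenate([x_a, x_b]))
                - log_marginal(x_a) - log_marginal(x_b))
    return log_prior + log_like

rng = np.random.default_rng(0)
# two well-separated clusters: ratio is strongly negative, merge rejected
print(log_merge_ratio(rng.normal(0, 1, 100), rng.normal(5, 1, 100)))
```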

Actionable Recourse in Linear Classification

Title Actionable Recourse in Linear Classification
Authors Berk Ustun, Alexander Spangher, Yang Liu
Abstract Machine learning models are increasingly used to automate decisions that affect humans - deciding who should receive a loan, a job interview, or a social service. In such applications, a person should have the ability to change the decision of a model. When a person is denied a loan by a credit scoring model, for example, they should be able to alter the model's input variables in a way that guarantees approval. Otherwise, they will be denied the loan as long as the model is deployed. More importantly, they will lack the ability to influence a decision that affects their livelihood. In this paper, we frame these issues in terms of recourse, which we define as the ability of a person to change the decision of a model by altering actionable input variables (e.g., income, as opposed to age or marital status). We present integer programming tools to ensure recourse in linear classification problems without interfering in model development. We demonstrate how our tools can inform stakeholders through experiments on credit scoring problems. Our results show that recourse can be significantly affected by standard practices in model development, and motivate the need to evaluate recourse in practice.
Tasks Decision Making
Published 2018-09-18
URL https://arxiv.org/abs/1809.06514v2
PDF https://arxiv.org/pdf/1809.06514v2.pdf
PWC https://paperswithcode.com/paper/actionable-recourse-in-linear-classification
Repo https://github.com/ustunb/actionable-recourse
Framework none
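
For intuition, recourse for a linear classifier reduces to: find the cheapest change to actionable features that flips the score's sign. The sketch below brute-forces a tiny discrete action set; the paper's actual tools solve this as an integer program, and the feature names and costs here are made up.

```python
import itertools
import numpy as np

def find_recourse(x, w, b, actions, costs):
    """Minimum-cost action flipping a linear denial (w @ x + b < 0)
    into approval, by exhaustive search over discrete action sets.
    actions[j]: allowed additive changes to feature j ([0.0] = immutable).
    costs[j]:   cost per unit of change of feature j."""
    best, best_cost = None, np.inf
    for a in itertools.product(*actions):
        a = np.asarray(a, dtype=float)
        if w @ (x + a) + b >= 0:                      # decision flipped
            cost = float(np.sum(np.asarray(costs) * np.abs(a)))
            if cost < best_cost:
                best, best_cost = a, cost
    return best, best_cost

# toy credit model: features = [income, n_accounts, age]; age not actionable
w, b = np.array([0.8, 0.5, 0.1]), -6.0
x = np.array([4.0, 2.0, 3.0])                         # denied: w @ x + b < 0
actions = [np.arange(0.0, 5.0, 0.5), np.arange(0, 3), [0.0]]
print(find_recourse(x, w, b, actions, costs=[1.0, 0.5, 0.0]))
```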

GeoDesc: Learning Local Descriptors by Integrating Geometry Constraints

Title GeoDesc: Learning Local Descriptors by Integrating Geometry Constraints
Authors Zixin Luo, Tianwei Shen, Lei Zhou, Siyu Zhu, Runze Zhang, Yao Yao, Tian Fang, Long Quan
Abstract Learned local descriptors based on Convolutional Neural Networks (CNNs) have achieved significant improvements on patch-based benchmarks, but have not demonstrated strong generalization on recent benchmarks of image-based 3D reconstruction. In this paper, we mitigate this limitation by proposing a novel local descriptor learning approach that integrates geometry constraints from multi-view reconstructions, which benefits the learning process in terms of data generation, data sampling and loss computation. We refer to the proposed descriptor as GeoDesc, demonstrate its superior performance on various large-scale benchmarks, and in particular show its great success on challenging reconstruction tasks. Moreover, we provide guidelines towards practical integration of learned descriptors in Structure-from-Motion (SfM) pipelines, showing the good accuracy-efficiency trade-off that GeoDesc delivers to 3D reconstruction tasks.
Tasks 3D Reconstruction
Published 2018-07-17
URL http://arxiv.org/abs/1807.06294v2
PDF http://arxiv.org/pdf/1807.06294v2.pdf
PWC https://paperswithcode.com/paper/geodesc-learning-local-descriptors-by
Repo https://github.com/lzx551402/geodesc
Framework tf
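
GeoDesc's training signal belongs to the family of batch-wise structured losses over matching patch pairs. Here is a hedged NumPy sketch of that family (hardest-in-batch negatives), without the paper's geometry-derived sampling and weighting.

```python
import numpy as np

def batch_hard_descriptor_loss(anchors, positives, margin=0.5):
    """Batch-hard margin loss over L2-normalized descriptors.
    anchors, positives: (n, d); row i of each is a matching patch pair."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    d = np.sqrt(np.maximum(2.0 - 2.0 * a @ p.T, 0.0) + 1e-12)  # all pairs
    pos = np.diag(d)                         # distances of matching pairs
    neg = d + 1e6 * np.eye(len(d))           # mask the matching diagonal
    hardest = np.minimum(neg.min(axis=1), neg.min(axis=0))
    return np.maximum(margin + pos - hardest, 0.0).mean()

rng = np.random.default_rng(0)
a = rng.normal(size=(16, 128))
p = a + 0.1 * rng.normal(size=(16, 128))     # noisy views of the same patch
print(batch_hard_descriptor_loss(a, p))
```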

Surrogate-assisted parallel tempering for Bayesian neural learning

Title Surrogate-assisted parallel tempering for Bayesian neural learning
Authors Rohitash Chandra, Konark Jain, Arpit Kapoor
Abstract Parallel tempering addresses some of the drawbacks of canonical Markov Chain Monte Carlo methods for Bayesian neural learning with the ability to utilize high performance computing. However, certain challenges remain given the large range of network parameters and big data. Surrogate-assisted optimization estimates an objective function for models whose direct evaluation is computationally inefficient or impractical. We address the inefficiency of parallel tempering for large-scale problems by combining parallel computing features with surrogate-assisted estimation of the likelihood function, which describes the plausibility of a model parameter value given specific observed data. In this paper, we present surrogate-assisted parallel tempering for Bayesian neural learning, where the surrogates are used to estimate the likelihood. Estimation via the surrogate avoids evaluating computationally expensive models that feature large numbers of parameters and large datasets. Our results demonstrate that the methodology significantly lowers the computational cost while maintaining decision-making quality in Bayesian neural learning. The method has applications in Bayesian inversion and uncertainty quantification for a broad range of numerical models.
Tasks Decision Making
Published 2018-11-21
URL http://arxiv.org/abs/1811.08687v1
PDF http://arxiv.org/pdf/1811.08687v1.pdf
PWC https://paperswithcode.com/paper/surrogate-assisted-parallel-tempering-for
Repo https://github.com/sydney-machine-learning/surrogate-assisted-parallel-tempering
Framework tf
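
Two pieces of the method can be sketched independently: the standard replica-swap acceptance of parallel tempering, and a surrogate that answers likelihood queries from past evaluations instead of running the expensive model. The surrogate below is a deliberately crude nearest-neighbor stand-in, an assumption for illustration; the paper trains a proper surrogate model.

```python
import numpy as np

def swap_accept(log_like_i, log_like_j, beta_i, beta_j, rng):
    """Metropolis acceptance for swapping two likelihood-tempered
    replicas at inverse temperatures beta_i and beta_j."""
    log_ratio = (beta_i - beta_j) * (log_like_j - log_like_i)
    return np.log(rng.random()) < min(0.0, log_ratio)

class SurrogateLikelihood:
    """Answer likelihood queries cheaply from evaluation history,
    falling back to the true (expensive) likelihood until enough
    history exists. Nearest-neighbor lookup is an assumption here."""
    def __init__(self, true_log_like, min_history=50):
        self.true_log_like = true_log_like
        self.min_history = min_history
        self.thetas, self.values = [], []

    def __call__(self, theta):
        theta = np.asarray(theta, dtype=float)
        if len(self.thetas) < self.min_history:
            v = self.true_log_like(theta)            # expensive call
            self.thetas.append(theta)
            self.values.append(v)
            return v
        d = [np.linalg.norm(theta - t) for t in self.thetas]
        return self.values[int(np.argmin(d))]        # cheap estimate
```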

Residual Convolutional Neural Network Revisited with Active Weighted Mapping

Title Residual Convolutional Neural Network Revisited with Active Weighted Mapping
Authors Hyoungho Jung, Ryong Lee, Sanghwan Lee, Wonjun Hwang
Abstract In visual recognition, the key to the performance improvement of ResNet is the success of establishing a stack of deep sequential convolutional layers using identity mapping through shortcut connections. This results in multiple paths of data flow through the network, and the paths are merged with equal weights. However, it is questionable whether it is correct to use fixed, predefined weights at the mapping units of all paths. In this paper, we introduce the active weighted mapping method, which infers proper weight values based on the characteristics of the input data on the fly. The weight values of each mapping unit are not fixed but change as the input image changes, and the most appropriate weight values for each mapping unit are derived according to the input image. For this purpose, channel-wise information is embedded from both the shortcut connection and the convolutional block, and fully connected layers are then used to estimate the weight values for the mapping units. We train the backbone network and the proposed module alternately for more stable learning. Results of extensive experiments show that the proposed method works successfully on various backbone architectures from ResNet to DenseNet. We also verify the superiority and generality of the proposed method on various datasets in comparison with the baseline.
Tasks
Published 2018-11-16
URL http://arxiv.org/abs/1811.06878v1
PDF http://arxiv.org/pdf/1811.06878v1.pdf
PWC https://paperswithcode.com/paper/residual-convolutional-neural-network
Repo https://github.com/ChaofWang/AWSRN
Framework pytorch
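
The abstract describes replacing ResNet's fixed identity-plus-branch sum with input-dependent path weights predicted by small fully connected layers. A minimal NumPy forward pass of that idea follows; the shapes, the ReLU bottleneck, and the sigmoid gating are assumptions, not the paper's exact module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def active_weighted_residual(x, conv_branch, W1, W2):
    """Residual merge with input-dependent path weights.
    x:           (C, H, W) block input (shortcut path)
    conv_branch: (C, H, W) output of the convolutional branch
    W1, W2:      small FC bottleneck mapping pooled channel statistics
                 to two path weights."""
    # channel descriptors from both paths (global average pooling)
    desc = np.concatenate([x.mean(axis=(1, 2)),
                           conv_branch.mean(axis=(1, 2))])
    hidden = np.maximum(W1 @ desc, 0.0)           # ReLU bottleneck
    w_short, w_conv = sigmoid(W2 @ hidden)        # weights in (0, 1)
    # weighted merge instead of ResNet's fixed identity sum
    return w_short * x + w_conv * conv_branch

# toy shapes: C=4 channels, 8x8 feature maps, bottleneck of size 6
rng = np.random.default_rng(1)
x, f = rng.normal(size=(4, 8, 8)), rng.normal(size=(4, 8, 8))
W1, W2 = rng.normal(size=(6, 8)), rng.normal(size=(2, 6))
print(active_weighted_residual(x, f, W1, W2).shape)
```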

DSSLIC: Deep Semantic Segmentation-based Layered Image Compression

Title DSSLIC: Deep Semantic Segmentation-based Layered Image Compression
Authors Mohammad Akbari, Jie Liang, Jingning Han
Abstract Deep learning has revolutionized many computer vision fields in the last few years, including learning-based image compression. In this paper, we propose a deep semantic segmentation-based layered image compression (DSSLIC) framework in which the semantic segmentation map of the input image is obtained and encoded as the base layer of the bit-stream. A compact representation of the input image is also generated and encoded as the first enhancement layer. The segmentation map and the compact version of the image are then employed to obtain a coarse reconstruction of the image. The residual between the input and the coarse reconstruction is additionally encoded as another enhancement layer. Experimental results show that the proposed framework outperforms the H.265/HEVC-based BPG and other codecs in both PSNR and MS-SSIM metrics across a wide range of bit rates in the RGB domain. Moreover, since the semantic segmentation map is included in the bit-stream, the proposed scheme can facilitate many other tasks such as image search and object-based adaptive image compression.
Tasks Image Compression, Image Retrieval, Semantic Segmentation
Published 2018-06-08
URL http://arxiv.org/abs/1806.03348v3
PDF http://arxiv.org/pdf/1806.03348v3.pdf
PWC https://paperswithcode.com/paper/dsslic-deep-semantic-segmentation-based
Repo https://github.com/Iamanorange/DSSLIC
Framework pytorch
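
The layered bit-stream described in the abstract is easy to express structurally. Below is a sketch with hypothetical callables standing in for the learned networks and the per-stream codecs; only the layering order comes from the abstract.

```python
import numpy as np

def dsslic_encode(image, segment, downscale, synthesize, encode):
    """Layered encoding in the order the abstract describes:
    base layer = segmentation map, first enhancement = compact image,
    second enhancement = residual against a coarse reconstruction."""
    seg_map = segment(image)                   # base layer
    compact = downscale(image)                 # first enhancement layer
    coarse = synthesize(seg_map, compact)      # decoder-side prediction
    residual = image - coarse                  # second enhancement layer
    return encode(seg_map), encode(compact), encode(residual)

# toy stand-ins (the real system uses learned networks and real codecs)
img = np.random.default_rng(0).random((32, 32))
base, enh1, enh2 = dsslic_encode(
    img,
    segment=lambda im: (im > 0.5).astype(float),
    downscale=lambda im: im[::4, ::4],
    synthesize=lambda seg, small: np.repeat(np.repeat(small, 4, 0), 4, 1),
    encode=lambda arr: arr.tobytes())
```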

A modified fuzzy C means algorithm for shading correction in craniofacial CBCT images

Title A modified fuzzy C means algorithm for shading correction in craniofacial CBCT images
Authors Awais Ashfaq, Jonas Adler
Abstract CBCT images suffer from acute shading artifacts primarily due to scatter. Numerous image-domain correction algorithms have been proposed in the literature that use patient-specific planning CT images to estimate shading contributions in CBCT images. However, in the context of radiosurgery applications such as gamma knife, planning images are often acquired through MRI, which impedes the use of polynomial fitting approaches for shading correction. We present a new shading correction approach that is independent of planning CT images. Our algorithm is based on the assumption that true CBCT images follow a uniform volumetric intensity distribution per material, and scatter perturbs this uniform texture by contributing cupping and shading artifacts in the image domain. The framework is a combination of fuzzy C-means coupled with a neighborhood regularization term and Otsu’s method. Experimental results on artificially simulated craniofacial CBCT images are provided to demonstrate the effectiveness of our algorithm. Spatial non-uniformity is reduced from 16% to 7% in soft tissue and from 44% to 8% in bone regions. With shading correction, thresholding-based segmentation accuracy for bone pixels is improved from 85% to 91% compared to thresholding without shading correction. The proposed algorithm is thus practical and qualifies as a plug-and-play extension to any CBCT reconstruction software for shading correction.
Tasks
Published 2018-01-17
URL http://arxiv.org/abs/1801.05694v1
PDF http://arxiv.org/pdf/1801.05694v1.pdf
PWC https://paperswithcode.com/paper/a-modified-fuzzy-c-means-algorithm-for
Repo https://github.com/adler-j/mfcm_article
Framework none
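
For reference, the core fuzzy C-means alternation the paper builds on is compact. This sketch omits the paper's contributions, namely the neighborhood regularization term and the coupling with Otsu's method, and shows only the baseline updates on 1-D intensities.

```python
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy C-means on flattened intensities.
    x: (n,) values; returns memberships (n, c) and centers (c,)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=c, replace=False).astype(float)
    for _ in range(n_iter):
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        # center update: mean weighted by fuzzified memberships u^m
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    # refresh memberships to match the final centers
    d = np.abs(x[:, None] - centers[None, :]) + 1e-9
    u = 1.0 / d ** (2.0 / (m - 1.0))
    u /= u.sum(axis=1, keepdims=True)
    return u, centers

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(50, 5, 300), rng.normal(120, 5, 300),
                    rng.normal(200, 5, 300)])
u, centers = fuzzy_c_means(x, c=3)
print(np.sort(centers))   # approximately [50, 120, 200]
```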

Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context

Title Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context
Authors Urvashi Khandelwal, He He, Peng Qi, Dan Jurafsky
Abstract We know very little about how neural language models (LM) use prior linguistic context. In this paper, we investigate the role of context in an LSTM LM, through ablation studies. Specifically, we analyze the increase in perplexity when prior context words are shuffled, replaced, or dropped. On two standard datasets, Penn Treebank and WikiText-2, we find that the model is capable of using about 200 tokens of context on average, but sharply distinguishes nearby context (recent 50 tokens) from the distant history. The model is highly sensitive to the order of words within the most recent sentence, but ignores word order in the long-range context (beyond 50 tokens), suggesting the distant past is modeled only as a rough semantic field or topic. We further find that the neural caching model (Grave et al., 2017b) especially helps the LSTM to copy words from within this distant context. Overall, our analysis not only provides a better understanding of how neural LMs use their context, but also sheds light on recent success from cache-based models.
Tasks
Published 2018-05-12
URL http://arxiv.org/abs/1805.04623v1
PDF http://arxiv.org/pdf/1805.04623v1.pdf
PWC https://paperswithcode.com/paper/sharp-nearby-fuzzy-far-away-how-neural
Repo https://github.com/urvashik/lm-context-analysis
Framework pytorch
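
The paper's central measurement is a perturb-and-compare: score a target token under the true context and under a context whose distant part is shuffled, and read off the loss increase. A sketch of one such comparison follows; `log_prob` is a hypothetical hook into the LM, and the paper aggregates this over whole corpora rather than single positions.

```python
import random

def distant_context_sensitivity(tokens, log_prob, boundary=50, seed=0):
    """Loss increase (nats) on the final token when word order in the
    distant context (beyond the `boundary` most recent tokens) is
    destroyed. A value near 0 means the model ignores distant order,
    treating the far past as a rough topic rather than a sequence."""
    context, target = list(tokens[:-1]), tokens[-1]
    distant, nearby = context[:-boundary], context[-boundary:]
    random.Random(seed).shuffle(distant)          # shuffle a copy only
    return log_prob(context, target) - log_prob(distant + nearby, target)
```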

LPCNet: Improving Neural Speech Synthesis Through Linear Prediction

Title LPCNet: Improving Neural Speech Synthesis Through Linear Prediction
Authors Jean-Marc Valin, Jan Skoglund
Abstract Neural speech synthesis models have recently demonstrated the ability to synthesize high quality speech for text-to-speech and compression applications. These new models often require powerful GPUs to achieve real-time operation, so being able to reduce their complexity would open the way for many new applications. We propose LPCNet, a WaveRNN variant that combines linear prediction with recurrent neural networks to significantly improve the efficiency of speech synthesis. We demonstrate that LPCNet can achieve significantly higher quality than WaveRNN for the same network size and that high quality LPCNet speech synthesis is achievable with a complexity under 3 GFLOPS. This makes it easier to deploy neural synthesis applications on lower-power devices, such as embedded systems and mobile phones.
Tasks Speech Synthesis
Published 2018-10-28
URL http://arxiv.org/abs/1810.11846v2
PDF http://arxiv.org/pdf/1810.11846v2.pdf
PWC https://paperswithcode.com/paper/lpcnet-improving-neural-speech-synthesis
Repo https://github.com/mozilla/LPCNet
Framework none
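
The classical half of LPCNet is ordinary linear prediction: fit an all-pole filter to each frame so the network only has to model the residual excitation. Below is a sketch of the textbook Levinson-Durbin recursion, not LPCNet's specific cepstrum-derived variant.

```python
import numpy as np

def lpc_coefficients(frame, order=16):
    """Levinson-Durbin recursion: all-pole coefficients a[1..order]
    minimizing the frame's prediction error, from its autocorrelation.
    The prediction is x_hat[n] = -sum_j a[j] * x[n - j]; a neural
    vocoder like LPCNet then only models the residual x - x_hat."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err   # reflection coefficient
        a[1:i] += k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k                           # remaining error power
    return a, err

t = np.arange(400) / 16000.0
a, err = lpc_coefficients(np.sin(2 * np.pi * 220 * t), order=8)
print(err)   # tiny: a pure sinusoid is almost perfectly predictable
```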

ChoiceNet: Robust Learning by Revealing Output Correlations

Title ChoiceNet: Robust Learning by Revealing Output Correlations
Authors Sungjoon Choi, Sanghoon Hong, Sungbin Lim
Abstract In this paper, we focus on the supervised learning problem with corrupted training data. We assume that the training dataset is generated from a mixture of a target distribution and other unknown distributions. We estimate the quality of each data point by revealing the correlation between the generated distribution and the target distribution. To this end, we present a novel framework, referred to here as ChoiceNet, that can robustly infer the target distribution in the presence of inconsistent data. We demonstrate that the proposed framework is applicable to both classification and regression tasks. ChoiceNet is evaluated in comprehensive experiments, where we show that it consistently outperforms existing baseline methods in handling noisy data. In particular, ChoiceNet is successfully applied to autonomous driving tasks, where it learns a safe driving policy from a dataset of mixed quality. In the classification task, we apply the proposed method to the MNIST and CIFAR-10 datasets, and it shows superior performance in terms of robustness to noisy labels.
Tasks Autonomous Driving
Published 2018-05-16
URL http://arxiv.org/abs/1805.06431v2
PDF http://arxiv.org/pdf/1805.06431v2.pdf
PWC https://paperswithcode.com/paper/choicenet-robust-learning-by-revealing-output
Repo https://github.com/sjchoi86/choicenet
Framework tf
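
The robustness idea, down-weighting labels that look like they came from a corruption distribution rather than the target one, can be sketched with a generic two-component mixture responsibility. This is not ChoiceNet's correlation-based mechanism, just the simplest instance of the same principle.

```python
import numpy as np

def clean_responsibility(residuals, sigma=1.0, outlier_scale=10.0,
                         p_clean=0.8):
    """Posterior probability that each label came from the target
    distribution rather than a broad corruption distribution; usable
    as per-sample weights in a regression loss."""
    def gauss(r, s):
        return np.exp(-0.5 * (r / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    clean = p_clean * gauss(residuals, sigma)
    noisy = (1.0 - p_clean) * gauss(residuals, sigma * outlier_scale)
    return clean / (clean + noisy)

# the large-residual sample gets weight near 0, small residuals near 1
print(clean_responsibility(np.array([0.1, 0.5, 8.0])))
```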

Deep Depth from Defocus: how can defocus blur improve 3D estimation using dense neural networks?

Title Deep Depth from Defocus: how can defocus blur improve 3D estimation using dense neural networks?
Authors Marcela Carvalho, Bertrand Le Saux, Pauline Trouvé-Peloux, Andrés Almansa, Frédéric Champagnat
Abstract Depth estimation is of critical interest for scene understanding and accurate 3D reconstruction. Most recent approaches to depth estimation with deep learning exploit geometrical structures of standard sharp images to predict corresponding depth maps. However, cameras can also produce images with defocus blur depending on the depth of the objects and camera settings. Hence, these features may represent an important hint for learning to predict depth. In this paper, we propose a full system for single-image depth prediction in the wild using depth-from-defocus and neural networks. We carry out thorough experiments testing deep convolutional networks on real and simulated defocused images, using a realistic model of blur variation with respect to depth. We also investigate the influence of blur on depth prediction by observing model uncertainty with a Bayesian neural network approach. From these studies, we show that out-of-focus blur greatly improves depth-prediction network performance. Furthermore, we transfer the ability learned on a synthetic, indoor dataset to real, indoor and outdoor images. For this purpose, we present a new dataset containing real all-focus and defocused images from a Digital Single-Lens Reflex (DSLR) camera, paired with ground truth depth maps obtained with an active 3D sensor for indoor scenes. The proposed approach is successfully validated on both this new dataset and standard ones such as NYUv2 and Depth-in-the-Wild. Code and new datasets are available at https://github.com/marcelampc/d3net_depth_estimation
Tasks 3D Reconstruction, Depth Estimation, Scene Understanding
Published 2018-09-05
URL http://arxiv.org/abs/1809.01567v2
PDF http://arxiv.org/pdf/1809.01567v2.pdf
PWC https://paperswithcode.com/paper/deep-depth-from-defocus-how-can-defocus-blur
Repo https://github.com/marcelampc/d3net_depth_estimation
Framework pytorch
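
The physical cue the paper exploits is the relationship between depth and defocus blur. The thin-lens circle-of-confusion formula makes it concrete; a quick sketch with assumed camera parameters:

```python
def coc_diameter(depth_m, focus_m, focal_m=0.05, f_number=2.8):
    """Circle-of-confusion diameter on the sensor (meters) from the
    thin-lens model, for an object at depth_m with the lens focused
    at focus_m. Blur grows as objects leave the focal plane, which is
    the depth hint a network can learn to read."""
    aperture = focal_m / f_number
    return (aperture * abs(depth_m - focus_m) / depth_m
            * focal_m / (focus_m - focal_m))

for z in (1.0, 2.0, 4.0, 8.0):   # focused at 2 m: blur is zero there
    print(f"depth {z:.0f} m -> blur {coc_diameter(z, focus_m=2.0):.2e} m")
```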

History PCA: A New Algorithm for Streaming PCA

Title History PCA: A New Algorithm for Streaming PCA
Authors Puyudi Yang, Cho-Jui Hsieh, Jane-Ling Wang
Abstract In this paper we propose a new algorithm for streaming principal component analysis. With limited memory, small devices cannot store all the samples in the high-dimensional regime. Streaming principal component analysis aims to find the $k$-dimensional subspace which can explain the most variation of the $d$-dimensional data points that come into memory sequentially. In order to deal with large $d$ and large $N$ (number of samples), most streaming PCA algorithms update the current model using only the incoming sample and then dump the information right away to save memory. However the information contained in previously streamed data could be useful. Motivated by this idea, we develop a new streaming PCA algorithm called History PCA that achieves this goal. By using $O(Bd)$ memory with $B\approx 10$ being the block size, our algorithm converges much faster than existing streaming PCA algorithms. By changing the number of inner iterations, the memory usage can be further reduced to $O(d)$ while maintaining a comparable convergence speed. We provide theoretical guarantees for the convergence of our algorithm along with the rate of convergence. We also demonstrate on synthetic and real world data sets that our algorithm compares favorably with other state-of-the-art streaming PCA methods in terms of the convergence speed and performance.
Tasks
Published 2018-02-15
URL http://arxiv.org/abs/1802.05447v1
PDF http://arxiv.org/pdf/1802.05447v1.pdf
PWC https://paperswithcode.com/paper/history-pca-a-new-algorithm-for-streaming-pca
Repo https://github.com/aamcbee/AdaOja
Framework none
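
The contrast the abstract draws, discard-after-use updates versus keeping a summary of past blocks, can be illustrated with a block power iteration whose step blends the current block's covariance action with an accumulated history term. This is an illustration of the history idea under assumed update rules, not the paper's exact algorithm.

```python
import numpy as np

def streaming_pca(blocks, k, d, history_weight=0.5, seed=0):
    """Block-wise streaming PCA sketch with O(B*d) working memory:
    each step applies the current block's covariance to the subspace
    estimate and blends it with a running history instead of
    discarding past information."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.normal(size=(d, k)))   # subspace estimate
    S = np.zeros((d, k))                            # accumulated history
    for X in blocks:                                # X: (B, d) block
        C_step = (X.T @ (X @ Q)) / len(X)           # covariance times Q
        S = history_weight * S + C_step             # blend in the past
        Q, _ = np.linalg.qr(S)                      # re-orthogonalize
    return Q

# toy usage: 20 blocks of 64 samples in 30 dims, recover a 3-dim subspace
rng = np.random.default_rng(1)
U = np.linalg.qr(rng.normal(size=(30, 3)))[0]
blocks = [rng.normal(size=(64, 3)) @ U.T + 0.1 * rng.normal(size=(64, 30))
          for _ in range(20)]
Q = streaming_pca(blocks, k=3, d=30)
print(np.linalg.norm(U.T @ Q))   # near sqrt(3) when the subspaces align
```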

Learning Intrinsic Image Decomposition from Watching the World

Title Learning Intrinsic Image Decomposition from Watching the World
Authors Zhengqi Li, Noah Snavely
Abstract Single-view intrinsic image decomposition is a highly ill-posed problem, and so a promising approach is to learn from large amounts of data. However, it is difficult to collect ground truth training data at scale for intrinsic images. In this paper, we explore a different approach to learning intrinsic images: observing image sequences over time depicting the same scene under changing illumination, and learning single-view decompositions that are consistent with these changes. This approach allows us to learn without ground truth decompositions, and to instead exploit information available from multiple images when training. Our trained model can then be applied at test time to single views. We describe a new learning framework based on this idea, including new loss functions that can be efficiently evaluated over entire sequences. While prior learning-based methods achieve good performance on specific benchmarks, we show that our approach generalizes well to several diverse datasets, including MIT intrinsic images, Intrinsic Images in the Wild and Shading Annotations in the Wild.
Tasks Intrinsic Image Decomposition
Published 2018-04-02
URL http://arxiv.org/abs/1804.00582v1
PDF http://arxiv.org/pdf/1804.00582v1.pdf
PWC https://paperswithcode.com/paper/learning-intrinsic-image-decomposition-from
Repo https://github.com/lixx2938/unsupervised-learning-intrinsic-images
Framework pytorch
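
The supervision signal is worth stating concretely: across frames of a fixed scene, illumination changes but reflectance should not, so the temporal variance of predicted reflectance is a trainable loss. A minimal sketch of that consistency idea; the paper's actual losses are richer.

```python
import numpy as np

def reflectance_consistency_loss(log_reflectance):
    """Penalize temporal variation of predicted log-reflectance across
    T frames of the same scene: (T, H, W) -> scalar."""
    mean_r = log_reflectance.mean(axis=0, keepdims=True)
    return float(np.mean((log_reflectance - mean_r) ** 2))

preds = np.random.default_rng(0).random((5, 4, 4))   # 5 frames, toy size
print(reflectance_consistency_loss(preds))
```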

Group Normalization

Title Group Normalization
Authors Yuxin Wu, Kaiming He
Abstract Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems: BN’s error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN’s usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN’s computation is independent of batch sizes, and its accuracy is stable across a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries.
Tasks Object Detection, Video Classification
Published 2018-03-22
URL http://arxiv.org/abs/1803.08494v3
PDF http://arxiv.org/pdf/1803.08494v3.pdf
PWC https://paperswithcode.com/paper/group-normalization
Repo https://github.com/blankWorld/GroupNorm-caffe
Framework none
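
Group Normalization is simple enough that the paper itself presents it in a few lines of code. A NumPy transcription of the method (the paper's reference snippet is in TensorFlow):

```python
import numpy as np

def group_norm(x, gamma, beta, groups=32, eps=1e-5):
    """Group Normalization for an (N, C, H, W) tensor: split channels
    into groups and normalize with per-group mean/variance, so the
    statistics are independent of the batch size N."""
    n, c, h, w = x.shape
    x = x.reshape(n, groups, c // groups, h, w)
    mean = x.mean(axis=(2, 3, 4), keepdims=True)
    var = x.var(axis=(2, 3, 4), keepdims=True)
    x = (x - mean) / np.sqrt(var + eps)
    x = x.reshape(n, c, h, w)
    return x * gamma.reshape(1, c, 1, 1) + beta.reshape(1, c, 1, 1)

# works identically for batch size 1 and batch size 8
x = np.random.default_rng(0).normal(size=(1, 64, 7, 7))
print(group_norm(x, gamma=np.ones(64), beta=np.zeros(64), groups=32).shape)
```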