February 1, 2020

3284 words 16 mins read

Paper Group AWR 242

Semantic Bottleneck Scene Generation. UAFS: Uncertainty-Aware Feature Selection for Problems with Missing Data. Instance Segmentation based Semantic Matting for Compositing Applications. Stock Forecasting using M-Band Wavelet-Based SVR and RNN-LSTMs Models. Hadamard Matrix Guided Online Hashing. Learning Without Loss. Pre-train and Learn: Preserve …

Semantic Bottleneck Scene Generation

Title Semantic Bottleneck Scene Generation
Authors Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic
Abstract Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes. We assume pixel-wise segmentation labels are available during training and use them to learn the scene structure. During inference, our model first synthesizes a realistic segmentation layout from scratch, then synthesizes a realistic scene conditioned on that layout. For the former, we use an unconditional progressive segmentation generation network that captures the distribution of realistic semantic scene layouts. For the latter, we use a conditional segmentation-to-image synthesis network that captures the distribution of photo-realistic images conditioned on the semantic layout. When trained end-to-end, the resulting model outperforms state-of-the-art generative models in unsupervised image synthesis on two challenging domains in terms of the Frechet Inception Distance and user-study evaluations. Moreover, we demonstrate the generated segmentation maps can be used as additional training data to strongly improve recent segmentation-to-image synthesis networks.
Tasks Conditional Image Generation, Image Generation, Image-to-Image Translation, Scene Generation
Published 2019-11-26
URL https://arxiv.org/abs/1911.11357v1
PDF https://arxiv.org/pdf/1911.11357v1.pdf
PWC https://paperswithcode.com/paper/semantic-bottleneck-scene-generation
Repo https://github.com/azadis/SB-GAN
Framework none

UAFS: Uncertainty-Aware Feature Selection for Problems with Missing Data

Title UAFS: Uncertainty-Aware Feature Selection for Problems with Missing Data
Authors Andrew J. Becker, James P. Bagrow
Abstract Missing data are a concern in many real world data sets and imputation methods are often needed to estimate the values of missing data, but data sets with excessive missingness and high dimensionality challenge most approaches to imputation. Here we show that appropriate feature selection can be an effective preprocessing step for imputation, allowing for more accurate imputation and subsequent model predictions. The key feature of this preprocessing is that it incorporates uncertainty: by accounting for uncertainty due to missingness when selecting features we can reduce the degree of missingness while also limiting the number of uninformative features being used to make predictive models. We introduce a method to perform uncertainty-aware feature selection (UAFS), provide a theoretical motivation, and test UAFS on both real and synthetic problems, demonstrating that across a variety of data sets and levels of missingness we can improve the accuracy of imputations. Improved imputation due to UAFS also results in improved prediction accuracy when performing supervised learning using these imputed data sets. Our UAFS method is general and can be fruitfully coupled with a variety of imputation methods.
Tasks Feature Selection, Imputation
Published 2019-04-02
URL http://arxiv.org/abs/1904.01385v1
PDF http://arxiv.org/pdf/1904.01385v1.pdf
PWC https://paperswithcode.com/paper/uafs-uncertainty-aware-feature-selection-for
Repo https://github.com/abecker93/UAFS
Framework none
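The abstract states the idea (account for uncertainty due to missingness when selecting features) without the concrete criterion. As a hedged illustration only, here is a minimal pure-Python sketch that down-weights a feature's correlation with the target by its observed fraction; the scoring rule, function names, and top-k selection are assumptions for this example, not the paper's actual UAFS criterion (see the linked repo for that).

```python
import math

def uafs_score(xs, ys):
    """Score one feature: |Pearson r| with the target on the observed pairs,
    down-weighted by the fraction of entries that are observed.
    Missing entries are represented as None."""
    pairs = [(x, y) for x, y in zip(xs, ys) if x is not None]
    if len(pairs) < 2:
        return 0.0
    frac_observed = len(pairs) / len(xs)
    mx = sum(x for x, _ in pairs) / len(pairs)
    my = sum(y for _, y in pairs) / len(pairs)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    if vx == 0 or vy == 0:
        return 0.0
    return abs(cov / math.sqrt(vx * vy)) * frac_observed

def select_features(feature_cols, ys, k):
    """Rank features by the uncertainty-aware score and keep the top k."""
    scores = {name: uafs_score(col, ys) for name, col in feature_cols.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

The point of the down-weighting is the trade-off the abstract describes: an informative but mostly-missing feature should lose out to a slightly less informative but well-observed one before imputation is attempted.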

Instance Segmentation based Semantic Matting for Compositing Applications

Title Instance Segmentation based Semantic Matting for Compositing Applications
Authors Guanqing Hu, James J. Clark
Abstract Image compositing is a key step in film making and image editing that aims to segment a foreground object and combine it with a new background. Automatic image compositing can be done easily in a studio using chroma-keying when the background is pure blue or green. However, image compositing in natural scenes with complex backgrounds remains a tedious task that requires experienced artists to segment foregrounds by hand. In order to achieve automatic compositing in natural scenes, we propose a fully automated method that integrates instance segmentation and image matting processes to generate high-quality semantic mattes that can be used for image editing tasks. Our approach can be seen both as a refinement of existing instance segmentation algorithms and as a fully automated semantic image matting method. It extends automatic image compositing techniques such as chroma-keying to scenes with complex natural backgrounds without the need for any kind of user interaction. The output of our approach can be considered as both refined instance segmentations and alpha mattes with semantic meanings. We provide experimental results which show improved performance as compared to existing approaches.
Tasks Image Matting, Instance Segmentation, Semantic Segmentation
Published 2019-04-10
URL http://arxiv.org/abs/1904.05457v1
PDF http://arxiv.org/pdf/1904.05457v1.pdf
PWC https://paperswithcode.com/paper/instance-segmentation-based-semantic-matting
Repo https://github.com/Bonnie970/Instance-Segmentation-based-Semantic-Matting-for-Compositing-Applications
Framework none
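A common glue step between the two stages described above is turning a binary instance mask into a trimap that a matting stage consumes. The sketch below is an illustrative assumption about that interface (a simple 3x3 erosion/dilation band in pure Python), not the authors' implementation; see the linked repo for the real pipeline.

```python
def _morph3x3(mask, erode):
    """3x3 erosion (min) or dilation (max) of a 0/1 mask (list of lists)."""
    h, w = len(mask), len(mask[0])
    op = min if erode else max
    return [[op(mask[ii][jj]
                for ii in range(max(0, i - 1), min(h, i + 2))
                for jj in range(max(0, j - 1), min(w, j + 2)))
             for j in range(w)] for i in range(h)]

def trimap(mask):
    """Binary instance mask -> trimap: 255 = sure foreground (eroded mask),
    0 = sure background, 128 = unknown band handed to the matting stage."""
    fg = _morph3x3(mask, erode=True)
    band = _morph3x3(mask, erode=False)
    return [[255 if fg[i][j] else (128 if band[i][j] else 0)
             for j in range(len(mask[0]))] for i in range(len(mask))]
```

The unknown band (128) is where the alpha matte is actually estimated; widening the erosion/dilation radius trades segmentation confidence for matting workload.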

Stock Forecasting using M-Band Wavelet-Based SVR and RNN-LSTMs Models

Title Stock Forecasting using M-Band Wavelet-Based SVR and RNN-LSTMs Models
Authors Hieu Quang Nguyen, Abdul Hasib Rahimyar, Xiaodi Wang
Abstract The task of predicting future stock values has always been heavily desired albeit very difficult. This difficulty arises because stock prices are non-stationary and follow no explicit functional form; hence, predictions are best made through analysis of financial stock data. To handle big data sets, current convention involves the use of the moving average. However, by utilizing the wavelet transform in place of the moving average to denoise stock signals, financial data can be smoothed and more accurately broken down. This transformed, denoised, and more stable stock data can then be fed into non-parametric statistical methods, such as Support Vector Regression (SVR) and Recurrent Neural Network (RNN) based Long Short-Term Memory (LSTM) networks, to predict future stock prices. Through the implementation of these methods, one is left with a more accurate stock forecast and, in turn, increased profits.
Tasks
Published 2019-04-17
URL http://arxiv.org/abs/1904.08459v1
PDF http://arxiv.org/pdf/1904.08459v1.pdf
PWC https://paperswithcode.com/paper/stock-forecasting-using-m-band-wavelet-based
Repo https://github.com/ZhuoLiupku/Stock-Prediction
Framework none
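As a sketch of the denoising step, here is one-level wavelet shrinkage in pure Python. The paper uses M-band wavelets; the 2-band Haar wavelet is substituted here purely to keep the example short, and the threshold is an illustrative parameter, not a value from the paper.

```python
import math

def haar_denoise(signal, threshold):
    """One-level Haar wavelet shrinkage: transform, soft-threshold the detail
    coefficients, invert.  Stands in for the paper's M-band wavelets."""
    assert len(signal) % 2 == 0
    s2 = math.sqrt(2.0)
    approx = [(a + b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    # soft thresholding shrinks noise-dominated detail coefficients to zero
    detail = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s2, (a - d) / s2])
    return out
```

The denoised series would then be fed to SVR or LSTM models; with `threshold=0` the transform is perfectly invertible, which is the property the moving average lacks.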

Hadamard Matrix Guided Online Hashing

Title Hadamard Matrix Guided Online Hashing
Authors Mingbao Lin, Rongrong Ji, Hong Liu, Xiaoshuai Sun, Shen Chen, Qi Tian
Abstract Online image hashing has attracted increasing research attention recently; it receives large-scale data in a streaming manner and updates the hash functions on-the-fly. Its key challenge lies in the difficulty of balancing learning timeliness against model accuracy. To this end, most works follow a supervised setting, i.e., using class labels to boost the hashing performance, which is deficient in two aspects: First, strong constraints, e.g., orthogonality or similarity preservation, are used, which however are typically relaxed and lead to a large accuracy drop. Second, large numbers of training batches are required to learn the up-to-date hash functions, which greatly increases the learning complexity. To handle the above challenges, a novel supervised online hashing scheme termed Hadamard Matrix Guided Online Hashing (HMOH) is proposed in this paper. Our key innovation lies in introducing the Hadamard matrix, an orthogonal binary matrix built via the Sylvester method. In particular, to remove the need for strong constraints, we regard each column of the Hadamard matrix as the target code for each class label, which by nature satisfies several desired properties of hashing codes. To accelerate the online training, LSH is first adopted to align the lengths of the target code and the to-be-learned binary code. We then treat the learning of hash functions as a set of binary classification problems to fit the assigned target code. Finally, extensive experiments demonstrate the superior accuracy and efficiency of the proposed method over various state-of-the-art methods. Codes are available at https://github.com/lmbxmu/mycode.
Tasks
Published 2019-05-11
URL https://arxiv.org/abs/1905.04454v3
PDF https://arxiv.org/pdf/1905.04454v3.pdf
PWC https://paperswithcode.com/paper/hadamard-matrix-guided-online-hashing
Repo https://github.com/lmbxmu/mycode
Framework none
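The Sylvester construction mentioned in the abstract is easy to reproduce. The sketch below builds the matrix and reads off per-class target codes the way HMOH assigns them; variable and function names are illustrative, not from the authors' code.

```python
def sylvester_hadamard(k):
    """2^k x 2^k Hadamard matrix via the Sylvester construction:
    H_1 = [[1]],  H_{2n} = [[H_n, H_n], [H_n, -H_n]]."""
    h = [[1]]
    for _ in range(k):
        h = [row + row for row in h] + [row + [-v for v in row] for row in h]
    return h

def target_code(label, h):
    """As in HMOH: column `label` of the Hadamard matrix becomes the binary
    target code for that class; columns are mutually orthogonal by design."""
    return [row[label] for row in h]

H = sylvester_hadamard(3)  # 8 x 8, enough for up to 8 classes
```

Orthogonality of the columns is what lets HMOH drop the explicit orthogonality constraint from the learning objective: the target codes satisfy it by construction.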

Learning Without Loss

Title Learning Without Loss
Authors Veit Elser
Abstract We explore a new approach for training neural networks where all loss functions are replaced by hard constraints. The same approach is very successful in phase retrieval, where signals are reconstructed from magnitude constraints and general characteristics (sparsity, support, etc.). Instead of taking gradient steps, the optimizer in the constraint based approach, called relaxed-reflect-reflect (RRR), derives its steps from projections to local constraints. In neural networks one such projection makes the minimal modification to the inputs $x$, the associated weights $w$, and the pre-activation value $y$ at each neuron, to satisfy the equation $x\cdot w=y$. These projections, along with a host of other local projections (constraining pre- and post-activations, etc.) can be partitioned into two sets such that all the projections in each set can be applied concurrently, across the network and across all data in the training batch. This partitioning into two sets is analogous to the situation in phase retrieval and the setting for which the general purpose RRR optimizer was designed. Owing to the novelty of the method, this paper also serves as a self-contained tutorial. Starting with a single-layer network that performs non-negative matrix factorization, and concluding with a generative model comprising an autoencoder and classifier, all applications and their implementations by projections are described in complete detail. Although the new approach has the potential to extend the scope of neural networks (e.g. by defining activation not through functions but constraint sets), most of the featured models are standard to allow comparison with stochastic gradient descent.
Tasks
Published 2019-10-29
URL https://arxiv.org/abs/1911.00493v1
PDF https://arxiv.org/pdf/1911.00493v1.pdf
PWC https://paperswithcode.com/paper/learning-without-loss
Repo https://github.com/veitelser/LWL
Framework none
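The RRR update the abstract refers to can be stated in a few lines once the two projection operators are given. The toy below applies it to two intersecting lines rather than to a network's constraint sets, so it only illustrates the iteration itself; `beta` and the iteration count are illustrative choices, not values from the paper.

```python
def rrr(project_a, project_b, x, beta=0.5, iters=200):
    """Relaxed-reflect-reflect: x <- x + beta * (P_A(2 P_B(x) - x) - P_B(x)).
    Returns a point of B near the intersection of the two constraint sets."""
    for _ in range(iters):
        b = project_b(x)
        a = project_a([2.0 * bi - xi for bi, xi in zip(b, x)])
        x = [xi + beta * (ai - bi) for xi, ai, bi in zip(x, a, b)]
    return project_b(x)

# Toy constraint sets: A is the line y = x, B is the line y = 1.
proj_a = lambda p: [(p[0] + p[1]) / 2.0, (p[0] + p[1]) / 2.0]
proj_b = lambda p: [p[0], 1.0]
solution = rrr(proj_a, proj_b, [3.0, 0.0])  # converges toward (1, 1)
```

In the paper's setting the projections are the local, concurrently applicable constraint projections (e.g. minimally modifying x, w, y to satisfy x·w = y), partitioned into the two sets A and B.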

Pre-train and Learn: Preserve Global Information for Graph Neural Networks

Title Pre-train and Learn: Preserve Global Information for Graph Neural Networks
Authors Danhao Zhu, Xin-yu Dai, Jiajun Chen
Abstract Graph neural networks (GNNs) have shown great power in learning on attributed graphs. However, it is still a challenge for GNNs to utilize information far away from the source node. Moreover, general GNNs require graph attributes as input, so they cannot be applied to plain graphs. In this paper, we propose new models named G-GNNs (Global information for GNNs) to address the above limitations. First, the global structure and attribute features for each node are obtained via unsupervised pre-training, which preserves the global information associated with the node. Then, using the global features and the raw network attributes, we propose a parallel framework of GNNs to learn different aspects from these features. The proposed learning methods can be applied to both plain graphs and attributed graphs. Extensive experiments have shown that G-GNNs can outperform other state-of-the-art models on three standard evaluation graphs. In particular, our methods establish new benchmark records on Cora (84.31%) and Pubmed (80.95%) when learning on attributed graphs.
Tasks Node Classification
Published 2019-10-27
URL https://arxiv.org/abs/1910.12241v1
PDF https://arxiv.org/pdf/1910.12241v1.pdf
PWC https://paperswithcode.com/paper/pre-train-and-learn-preserve-global
Repo https://github.com/zhudanhao/g-gnn
Framework pytorch

Motif Enhanced Recommendation over Heterogeneous Information Network

Title Motif Enhanced Recommendation over Heterogeneous Information Network
Authors Huan Zhao, Yingqi Zhou, Yangqiu Song, Dik Lun Lee
Abstract Heterogeneous Information Networks (HINs) have been widely used in recommender systems (RSs). In previous HIN-based RSs, meta-paths are used to compute the similarity between users and items. However, existing meta-path based methods only consider first-order relations, ignoring higher-order relations among nodes of the same type, which are captured by motifs. In this paper, we propose to use motifs to capture higher-order relations among nodes of the same type in a HIN and develop the motif-enhanced meta-path (MEMP) to combine motif-based higher-order relations with edge-based first-order relations. With MEMP-based similarities between users and items, we design a recommendation model called MoHINRec, and experimental results on two real-world datasets, Epinions and CiaoDVD, demonstrate its superiority over existing HIN-based RS methods.
Tasks Recommendation Systems
Published 2019-08-26
URL https://arxiv.org/abs/1908.09701v1
PDF https://arxiv.org/pdf/1908.09701v1.pdf
PWC https://paperswithcode.com/paper/motif-enhanced-recommendation-over
Repo https://github.com/HKUST-KnowComp/MoHINRec
Framework none
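For the simplest motif, the triangle, the motif-based adjacency is the elementwise product of A @ A with A: entry (i, j) counts the triangles passing through edge (i, j). A minimal pure-Python sketch of just that building block (not the authors' MEMP code, which further combines it with edge-based meta-paths):

```python
def triangle_motif_adjacency(adj):
    """Motif-based adjacency for the triangle motif: entry (i, j) counts the
    triangles through edge (i, j), i.e. (A @ A) elementwise-multiplied by A."""
    n = len(adj)
    return [[adj[i][j] * sum(adj[i][k] * adj[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]
```

Edges that participate in no triangle get weight zero, which is exactly how the motif adjacency re-weights first-order relations toward higher-order structure.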

Text-to-SQL Generation for Question Answering on Electronic Medical Records

Title Text-to-SQL Generation for Question Answering on Electronic Medical Records
Authors Ping Wang, Tian Shi, Chandan K. Reddy
Abstract Electronic medical records (EMR) contain comprehensive patient information and are typically stored in a relational database with multiple tables. Effective and efficient patient information retrieval from EMR data is a challenging task for medical experts. Question-to-SQL generation methods tackle this problem by first predicting the SQL query for a given question about a database and then executing the query on the database. However, most existing approaches have not been adapted to the healthcare domain due to a lack of healthcare Question-to-SQL datasets for learning models specific to this domain. In addition, the widespread use of abbreviated terminology and possible typos in questions introduce additional challenges for accurately generating the corresponding SQL queries. In this paper, we tackle these challenges by developing a deep learning based TRanslate-Edit Model for Question-to-SQL (TREQS) generation, which adapts the widely used sequence-to-sequence model to directly generate the SQL query for a given question, and further performs the required edits using an attentive-copying mechanism and task-specific look-up tables. Based on a widely used, publicly available electronic medical database, we create a new large-scale Question-SQL pair dataset, named MIMICSQL, in order to perform the Question-to-SQL generation task in the healthcare domain. An extensive set of experiments is conducted to evaluate the performance of our proposed model on MIMICSQL. Both quantitative and qualitative experimental results indicate the flexibility and efficiency of our proposed method in predicting condition values, as well as its robustness to random questions with abbreviations and typos.
Tasks Information Retrieval, Question Answering, Text-To-Sql
Published 2019-07-28
URL https://arxiv.org/abs/1908.01839v2
PDF https://arxiv.org/pdf/1908.01839v2.pdf
PWC https://paperswithcode.com/paper/a-translate-edit-model-for-natural-language
Repo https://github.com/wangpinggl/TREQS
Framework pytorch
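The "task-specific look-up tables" edit step can be approximated with fuzzy string matching: snap a generated condition value to the closest value actually stored in the database column. The sketch below uses the stdlib `difflib` module and is an assumption about how such a step could work, not the TREQS implementation (see the linked repo for that).

```python
import difflib

def edit_condition_value(predicted, column_values, cutoff=0.6):
    """Snap a generated condition value (possibly containing typos or
    abbreviations) to the closest value actually stored in the database
    column, keeping the generated value when nothing matches confidently."""
    lowered = [v.lower() for v in column_values]
    match = difflib.get_close_matches(predicted.lower(), lowered,
                                      n=1, cutoff=cutoff)
    if not match:
        return predicted
    return column_values[lowered.index(match[0])]  # recover original casing
```

This kind of post-editing is what makes the predicted SQL executable even when the question misspells a diagnosis or drug name.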

With Malice Towards None: Assessing Uncertainty via Equalized Coverage

Title With Malice Towards None: Assessing Uncertainty via Equalized Coverage
Authors Yaniv Romano, Rina Foygel Barber, Chiara Sabatti, Emmanuel J. Candès
Abstract An important factor to guarantee a fair use of data-driven recommendation systems is that we should be able to communicate their uncertainty to decision makers. This can be accomplished by constructing prediction intervals, which provide an intuitive measure of the limits of predictive performance. To support equitable treatment, we force the construction of such intervals to be unbiased in the sense that their coverage must be equal across all protected groups of interest. We present an operational methodology that achieves this goal by offering rigorous distribution-free coverage guarantees holding in finite samples. Our methodology, equalized coverage, is flexible as it can be viewed as a wrapper around any predictive algorithm. We test the applicability of the proposed framework on real data, demonstrating that equalized coverage constructs unbiased prediction intervals, unlike competitive methods.
Tasks Recommendation Systems
Published 2019-08-15
URL https://arxiv.org/abs/1908.05428v1
PDF https://arxiv.org/pdf/1908.05428v1.pdf
PWC https://paperswithcode.com/paper/with-malice-towards-none-assessing
Repo https://github.com/yromano/cqr
Framework pytorch
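The equalized-coverage idea can be sketched as group-wise split conformal prediction: calibrate one residual quantile per protected group, so each group gets its own 1 - alpha guarantee. Below is a simplified absolute-residual version in pure Python; the paper's method wraps richer predictors (e.g. conformalized quantile regression in the linked repo), so treat this as an illustration of the coverage mechanism only.

```python
import math

def equalized_intervals(cal_residuals, cal_groups, alpha=0.1):
    """Per-group split-conformal quantiles: for each protected group, take
    the ceil((n+1)(1-alpha))-th smallest calibration residual, so intervals
    mu(x) +/- q[group] reach 1 - alpha coverage within every group."""
    q = {}
    for g in set(cal_groups):
        res = sorted(r for r, gg in zip(cal_residuals, cal_groups) if gg == g)
        k = min(len(res) - 1, math.ceil((len(res) + 1) * (1 - alpha)) - 1)
        q[g] = res[k]
    return q
```

Calibrating per group is what equalizes coverage: a single global quantile would over-cover the easy group and under-cover the hard one.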

Synthesized Policies for Transfer and Adaptation across Tasks and Environments

Title Synthesized Policies for Transfer and Adaptation across Tasks and Environments
Authors Hexiang Hu, Liyu Chen, Boqing Gong, Fei Sha
Abstract The ability to transfer in reinforcement learning is key towards building an agent of general artificial intelligence. In this paper, we consider the problem of learning to transfer simultaneously across both environments (ENV) and tasks (TASK) and, perhaps more importantly, of doing so by learning from only sparse (ENV, TASK) pairs out of all the possible combinations. We propose a novel compositional neural network architecture which depicts a meta rule for composing policies from the environment and task embeddings. Notably, one of the main challenges is to learn the embeddings jointly with the meta rule. We further propose new training methods to disentangle the embeddings, making them both distinctive signatures of the environments and tasks and effective building blocks for composing the policies. Experiments on GridWorld and Thor, in which the agent takes an egocentric view as input, show that our approach gives rise to high success rates on all the (ENV, TASK) pairs after learning from only 40% of them.
Tasks
Published 2019-04-05
URL http://arxiv.org/abs/1904.03276v1
PDF http://arxiv.org/pdf/1904.03276v1.pdf
PWC https://paperswithcode.com/paper/synthesized-policies-for-transfer-and-1
Repo https://github.com/sha-lab/gridworld
Framework none

Temporally Consistent Horizon Lines

Title Temporally Consistent Horizon Lines
Authors Florian Kluger, Hanno Ackermann, Michael Ying Yang, Bodo Rosenhahn
Abstract The horizon line is an important geometric feature for many image processing and scene understanding tasks in computer vision. For instance, in navigation of autonomous vehicles or driver assistance, it can be used to improve 3D reconstruction as well as for semantic interpretation of dynamic environments. While both algorithms and datasets exist for single images, the problem of horizon line estimation from video sequences has not gained attention. In this paper, we show how convolutional neural networks are able to utilise the temporal consistency imposed by video sequences in order to increase the accuracy and reduce the variance of horizon line estimates. A novel CNN architecture with an improved residual convolutional LSTM is presented for temporally consistent horizon line estimation. We propose an adaptive loss function that ensures stable training as well as accurate results. Furthermore, we introduce an extension of the KITTI dataset which contains precise horizon line labels for 43699 images across 72 video sequences. A comprehensive evaluation shows that the proposed approach consistently achieves superior performance compared with existing methods.
Tasks 3D Reconstruction, Autonomous Vehicles, Horizon Line Estimation, Scene Understanding
Published 2019-07-23
URL https://arxiv.org/abs/1907.10014v2
PDF https://arxiv.org/pdf/1907.10014v2.pdf
PWC https://paperswithcode.com/paper/temporally-consistent-horizon-lines
Repo https://github.com/fkluger/tchl
Framework pytorch

MeLU: Meta-Learned User Preference Estimator for Cold-Start Recommendation

Title MeLU: Meta-Learned User Preference Estimator for Cold-Start Recommendation
Authors Hoyeop Lee, Jinbae Im, Seongwon Jang, Hyunsouk Cho, Sehee Chung
Abstract This paper proposes a recommender system to alleviate the cold-start problem by estimating user preferences from only a small number of items. To identify a user’s preference in the cold state, existing recommender systems, such as Netflix, initially provide items to a user; we call those items evidence candidates. Recommendations are then made based on the items selected by the user. Previous recommendation studies have two limitations: (1) users who have consumed only a few items receive poor recommendations, and (2) inadequate evidence candidates are used to identify user preferences. We propose a meta-learning-based recommender system called MeLU to overcome these two limitations. Through meta-learning, which can rapidly adapt to new tasks from a few examples, MeLU can estimate a new user’s preferences from a few consumed items. In addition, we provide an evidence candidate selection strategy that determines distinguishing items for customized preference estimation. We validate MeLU on two benchmark datasets, where the proposed model reduces mean absolute error by at least 5.92% compared with two baseline models. We also conduct a user study to verify the evidence selection strategy.
Tasks Meta-Learning, Recommendation Systems
Published 2019-07-31
URL https://arxiv.org/abs/1908.00413v1
PDF https://arxiv.org/pdf/1908.00413v1.pdf
PWC https://paperswithcode.com/paper/melu-meta-learned-user-preference-estimator
Repo https://github.com/hoyeoplee/MeLU
Framework pytorch
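MeLU's meta-learning backbone follows the MAML recipe: learn an initialization such that one gradient step on a user's few ratings already adapts well. The toy below applies first-order MAML to 1-D linear models standing in for the preference estimator; all names, learning rates, and the linear model itself are illustrative assumptions, not the paper's network.

```python
def maml_linear(tasks, meta_lr=0.05, inner_lr=0.1, steps=200):
    """First-order MAML on 1-D linear models f(x) = w * x: learn an
    initialization w0 so that a single gradient step on one user's few
    (item, rating) pairs already fits that user."""
    w0 = 0.0
    for _ in range(steps):
        meta_grad = 0.0
        for xs, ys in tasks:  # each task plays the role of one user
            g = sum(2.0 * (w0 * x - y) * x for x, y in zip(xs, ys)) / len(xs)
            w = w0 - inner_lr * g  # fast adaptation on the user's items
            # first-order outer gradient, evaluated at the adapted weights
            meta_grad += sum(2.0 * (w * x - y) * x
                             for x, y in zip(xs, ys)) / len(xs)
        w0 -= meta_lr * meta_grad / len(tasks)
    return w0
```

With users of slope 1 and slope 3, the learned initialization settles between them, which is the cold-start payoff: a new user needs only one inner step from w0, not training from scratch.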

Calibration of Deep Probabilistic Models with Decoupled Bayesian Neural Networks

Title Calibration of Deep Probabilistic Models with Decoupled Bayesian Neural Networks
Authors Juan Maroñas, Roberto Paredes, Daniel Ramos
Abstract Deep Neural Networks (DNNs) have achieved state-of-the-art accuracy performance in many tasks. However, recent works have pointed out that the outputs provided by these models are not well-calibrated, seriously limiting their use in critical decision scenarios. In this work, we propose to use a decoupled Bayesian stage, implemented with a Bayesian Neural Network (BNN), to map the uncalibrated probabilities provided by a DNN to calibrated ones, consistently improving calibration. Our results evidence that incorporating uncertainty provides more reliable probabilistic models, a critical condition for achieving good calibration. We report a generous collection of experimental results using high-accuracy DNNs in standardized image classification benchmarks, showing the good performance, flexibility and robust behavior of our approach with respect to several state-of-the-art calibration methods. Code for reproducibility is provided.
Tasks Calibration, Image Classification
Published 2019-08-23
URL https://arxiv.org/abs/1908.08972v3
PDF https://arxiv.org/pdf/1908.08972v3.pdf
PWC https://paperswithcode.com/paper/calibration-of-deep-probabilistic-models-with
Repo https://github.com/jmaronas/DecoupledBayesianCalibration.pytorch
Framework pytorch
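For context on what the decoupled Bayesian stage improves upon, here is temperature scaling, the standard post-hoc calibration baseline such papers compare against. This is the baseline, not the authors' BNN method, and the grid search is an illustrative simplification of the usual NLL optimization.

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fit_temperature(logit_rows, labels):
    """Grid-search a single temperature T minimizing validation NLL;
    softmax(logits / T) then gives the calibrated probabilities."""
    def nll(t):
        return -sum(math.log(softmax(row, t)[y])
                    for row, y in zip(logit_rows, labels)) / len(labels)
    return min((0.5 + 0.1 * i for i in range(40)), key=nll)
```

On overconfident validation logits the fitted T exceeds 1, flattening the probabilities; the paper's Bayesian stage goes further by mapping uncalibrated probabilities through a learned, uncertainty-aware transformation.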

Cross-Lingual Training for Automatic Question Generation

Title Cross-Lingual Training for Automatic Question Generation
Authors Vishwajeet Kumar, Nitish Joshi, Arijit Mukherjee, Ganesh Ramakrishnan, Preethi Jyothi
Abstract Automatic question generation (QG) is a challenging problem in natural language understanding. QG systems are typically built assuming access to a large number of training instances, where each instance is a question and its corresponding answer. For a new language, such training instances are hard to obtain, making the QG problem even more challenging. Using this as our motivation, we study the reuse of an available large QG dataset in a secondary language (e.g. English) to learn a QG model for a primary language (e.g. Hindi) of interest. For the primary language, we assume access to a large amount of monolingual text but only a small QG dataset. We propose a cross-lingual QG model which uses the following training regime: (i) unsupervised pretraining of language models in both primary and secondary languages and (ii) joint supervised training for QG in both languages. We demonstrate the efficacy of our proposed approach using two different primary languages, Hindi and Chinese. We also create and release a new question answering dataset for Hindi consisting of 6555 sentences.
Tasks Question Answering, Question Generation
Published 2019-06-06
URL https://arxiv.org/abs/1906.02525v1
PDF https://arxiv.org/pdf/1906.02525v1.pdf
PWC https://paperswithcode.com/paper/cross-lingual-training-for-automatic-question
Repo https://github.com/vishwajeet93/clqg
Framework tf