October 16, 2019

3232 words 16 mins read

Paper Group ANR 1161

Fast Exact Computation of Expected HyperVolume Improvement. Weakly supervised collective feature learning from curated media. External Patch-Based Image Restoration Using Importance Sampling. Context-aware Synthesis for Video Frame Interpolation. WikiAtomicEdits: A Multilingual Corpus of Wikipedia Edits for Modeling Language and Discourse. Cross-Do …

Fast Exact Computation of Expected HyperVolume Improvement

Title Fast Exact Computation of Expected HyperVolume Improvement
Authors Guang Zhao, Raymundo Arroyave, Xiaoning Qian
Abstract In multi-objective Bayesian optimization and surrogate-based evolutionary algorithms, Expected HyperVolume Improvement (EHVI) is widely used as the acquisition function to guide the search toward the Pareto front. This paper focuses on the exact calculation of EHVI given a nondominated set, for which the existing exact algorithms are complex and can be inefficient for problems with more than three objectives. Integrating with different decomposition algorithms, we propose a new method for calculating the integral in each decomposed high-dimensional box in constant time. We develop three new exact EHVI calculation algorithms based on three region decomposition methods. The first grid-based algorithm has a complexity of $O(m\cdot n^m)$ with $n$ denoting the size of the nondominated set and $m$ the number of objectives. The Walking Fish Group (WFG)-based algorithm has a worst-case complexity of $O(m\cdot 2^n)$ but a better average performance. These two can be applied to problems with any $m$. The third, CLM-based algorithm applies only to $m=3$ and is asymptotically optimal, with complexity $\Theta(n\log{n})$. Performance comparisons show that all three of our algorithms are at least twice as fast as the state-of-the-art algorithms using the same decomposition methods. When $m>3$, our WFG-based algorithm can be over $10^2$ times faster than the corresponding existing algorithms. Our algorithm is demonstrated in an example of efficient multi-objective material design with Bayesian optimization.
Tasks
Published 2018-12-18
URL http://arxiv.org/abs/1812.07692v2
PDF http://arxiv.org/pdf/1812.07692v2.pdf
PWC https://paperswithcode.com/paper/fast-exact-computation-of-expected
Repo
Framework
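To make the quantity being computed concrete, here is a small sketch. It is not the paper's exact decomposition algorithm; it is a plain Monte Carlo estimate of EHVI for a two-objective minimization problem, assuming an independent Gaussian posterior at the candidate design. Function names and the toy nondominated set are illustrative.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume dominated by `points` w.r.t. reference point `ref` (2-objective minimization)."""
    pts = [p for p in points if p[0] < ref[0] and p[1] < ref[1]]              # inside the reference box
    pts = [p for p in pts                                                      # keep nondominated points only
           if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pts)]
    pts.sort(key=lambda p: p[0])
    hv = 0.0
    for i, (f1, f2) in enumerate(pts):
        next_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)
    return hv

def mc_ehvi(mean, std, pareto, ref, n_samples=20000, seed=0):
    """Monte Carlo EHVI: average hypervolume gain over samples from the Gaussian posterior."""
    rng = np.random.default_rng(seed)
    hv_base = hypervolume_2d(pareto, ref)
    samples = rng.normal(mean, std, size=(n_samples, 2))
    gains = [max(hypervolume_2d(pareto + [tuple(y)], ref) - hv_base, 0.0) for y in samples]
    return float(np.mean(gains))

pareto = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]   # toy nondominated set
print(mc_ehvi(mean=[0.4, 0.4], std=[0.1, 0.1], pareto=pareto, ref=(1.0, 1.0)))
```

The paper's contribution is to replace this sampling with an exact, decomposition-based integral evaluated in constant time per box.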

Weakly supervised collective feature learning from curated media

Title Weakly supervised collective feature learning from curated media
Authors Yusuke Mukuta, Akisato Kimura, David B Adrian, Zoubin Ghahramani
Abstract The current state-of-the-art in feature learning relies on the supervised learning of large-scale datasets consisting of target content items and their respective category labels. However, constructing such large-scale fully-labeled datasets generally requires painstaking manual effort. One possible solution to this problem is to employ community-contributed text tags as weak labels; however, the concepts underlying a single text tag depend strongly on the user. We instead present a new paradigm for learning discriminative features by making full use of the human curation process on social networking services (SNSs). During the process of content curation, SNS users collect content items manually from various sources and group them by context, all for their own benefit. Due to the nature of this process, we can assume that (1) content items in the same group share the same semantic concept and (2) groups sharing the same images might have related semantic concepts. Through these insights, we can define human-curated groups as weak labels from which our proposed framework can learn discriminative features as a representation in the space of semantic concepts the users intended when creating the groups. We show that this feature learning can be formulated as a problem of link prediction for a bipartite graph whose nodes correspond to content items and human-curated groups, and propose a novel method for feature learning based on sparse coding or network fine-tuning.
Tasks Link Prediction
Published 2018-02-13
URL http://arxiv.org/abs/1802.04668v1
PDF http://arxiv.org/pdf/1802.04668v1.pdf
PWC https://paperswithcode.com/paper/weakly-supervised-collective-feature-learning
Repo
Framework
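As a minimal illustration of the link-prediction view (not the paper's sparse-coding or network fine-tuning method), one can factorize the item-group membership matrix and score unobserved links by inner products in the latent space. The toy memberships below are made up.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD

# toy bipartite graph: rows = content items, columns = human-curated groups
rows = [0, 0, 1, 2, 2, 3, 3]
cols = [0, 1, 1, 0, 2, 2, 1]
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 3))

svd = TruncatedSVD(n_components=2, random_state=0)
item_emb = svd.fit_transform(A)        # item features in the latent "concept" space
group_emb = svd.components_.T          # group features in the same space

# score an unobserved (item, group) pair for link prediction
print(item_emb[1] @ group_emb[2])
```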

External Patch-Based Image Restoration Using Importance Sampling

Title External Patch-Based Image Restoration Using Importance Sampling
Authors Milad Niknejad, Jose M. Bioucas-Dias, Mario A. T. Figueiredo
Abstract This paper introduces a new approach to patch-based image restoration based on external datasets and importance sampling. The Minimum Mean Squared Error (MMSE) estimate of the image patches, the computation of which requires solving a multidimensional (typically intractable) integral, is approximated using samples from an external dataset. The new method, which can be interpreted as a generalization of the external non-local means (NLM), uses self-normalized importance sampling to efficiently approximate the MMSE estimates. The use of self-normalized importance sampling endows the proposed method with great flexibility, namely regarding the statistical properties of the measurement noise. The effectiveness of the proposed method is shown in a series of experiments using both generic large-scale and class-specific external datasets.
Tasks Image Restoration
Published 2018-07-09
URL http://arxiv.org/abs/1807.03018v1
PDF http://arxiv.org/pdf/1807.03018v1.pdf
PWC https://paperswithcode.com/paper/external-patch-based-image-restoration-using
Repo
Framework
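The core estimator can be sketched in a few lines, assuming i.i.d. Gaussian noise so that the importance weights reduce to the noise likelihood of the observed patch under each external clean patch; the function name is illustrative, and the paper handles more general noise models.

```python
import numpy as np

def snis_mmse_patch(noisy_patch, external_patches, sigma):
    """Self-normalized importance sampling approximation of the MMSE patch estimate,
    in the spirit of external non-local means."""
    y = noisy_patch.ravel()
    X = external_patches.reshape(len(external_patches), -1)    # clean samples from the external dataset
    log_w = -np.sum((X - y) ** 2, axis=1) / (2.0 * sigma ** 2)  # Gaussian noise log-likelihoods
    log_w -= log_w.max()                                        # numerical stability
    w = np.exp(log_w)
    w /= w.sum()                                                # self-normalization
    return (w[:, None] * X).sum(axis=0).reshape(noisy_patch.shape)
```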

Context-aware Synthesis for Video Frame Interpolation

Title Context-aware Synthesis for Video Frame Interpolation
Authors Simon Niklaus, Feng Liu
Abstract Video frame interpolation algorithms typically estimate optical flow or its variations and then use it to guide the synthesis of an intermediate frame between two consecutive original frames. To handle challenges like occlusion, bidirectional flow between the two input frames is often estimated and used to warp and blend the input frames. However, how to effectively blend the two warped frames remains a challenging problem. This paper presents a context-aware synthesis approach that warps not only the input frames but also their pixel-wise contextual information and uses them to interpolate a high-quality intermediate frame. Specifically, we first use a pre-trained neural network to extract per-pixel contextual information for the input frames. We then employ a state-of-the-art optical flow algorithm to estimate bidirectional flow between them and pre-warp both input frames and their context maps. Finally, unlike common approaches that blend the pre-warped frames, our method feeds them and their context maps to a video frame synthesis neural network to produce the interpolated frame in a context-aware fashion. Our neural network is fully convolutional and is trained end to end. Our experiments show that our method can handle challenging scenarios such as occlusion and large motion and outperforms representative state-of-the-art approaches.
Tasks Optical Flow Estimation, Video Frame Interpolation
Published 2018-03-29
URL http://arxiv.org/abs/1803.10967v1
PDF http://arxiv.org/pdf/1803.10967v1.pdf
PWC https://paperswithcode.com/paper/context-aware-synthesis-for-video-frame
Repo
Framework
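A sketch of the pre-warping step, assuming PyTorch and a precomputed flow field; in the paper the same warp is also applied to per-pixel context maps extracted by a pretrained network before everything is fed to the synthesis network.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Warp `frame` (N, C, H, W) by `flow` (N, 2, H, W) with bilinear sampling."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(frame.device)  # (1, 2, H, W)
    coords = base + flow
    # normalize sampling coordinates to [-1, 1] as required by grid_sample
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                                        # (N, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)
```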

WikiAtomicEdits: A Multilingual Corpus of Wikipedia Edits for Modeling Language and Discourse

Title WikiAtomicEdits: A Multilingual Corpus of Wikipedia Edits for Modeling Language and Discourse
Authors Manaal Faruqui, Ellie Pavlick, Ian Tenney, Dipanjan Das
Abstract We release a corpus of 43 million atomic edits across 8 languages. These edits are mined from Wikipedia edit history and consist of instances in which a human editor has inserted a single contiguous phrase into, or deleted a single contiguous phrase from, an existing sentence. We use the collected data to show that the language generated during editing differs from the language that we observe in standard corpora, and that models trained on edits encode different aspects of semantics and discourse than models trained on raw, unstructured text. We release the full corpus as a resource to aid ongoing research in semantics, discourse, and representation learning.
Tasks Representation Learning
Published 2018-08-28
URL http://arxiv.org/abs/1808.09422v1
PDF http://arxiv.org/pdf/1808.09422v1.pdf
PWC https://paperswithcode.com/paper/wikiatomicedits-a-multilingual-corpus-of
Repo
Framework
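As a toy illustration of what counts as an atomic edit (not the corpus-mining pipeline itself), the snippet below checks whether a before/after sentence pair differs by exactly one contiguous inserted phrase; the example sentences are made up.

```python
import difflib

before = "The cat sat on the mat .".split()
after = "The black and white cat sat on the mat .".split()

ops = [op for op in difflib.SequenceMatcher(a=before, b=after).get_opcodes()
       if op[0] != "equal"]
if len(ops) == 1 and ops[0][0] == "insert":
    _, pos, _, j1, j2 = ops[0]
    print("atomic insertion:", " ".join(after[j1:j2]), "before token", pos)
```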

Cross-Domain Collaborative Learning via Cluster Canonical Correlation Analysis and Random Walker for Hyperspectral Image Classification

Title Cross-Domain Collaborative Learning via Cluster Canonical Correlation Analysis and Random Walker for Hyperspectral Image Classification
Authors Yao Qin, Lorenzo Bruzzone, Biao Li, Yuanxin Ye
Abstract This paper introduces a novel heterogeneous domain adaptation (HDA) method for hyperspectral image classification with a limited amount of labeled samples in both domains. The method follows a cross-domain collaborative learning (CDCL) strategy, realized via cluster canonical correlation analysis (C-CCA) and random walker (RW) algorithms. Specifically, the proposed CDCL method is an iterative process with three main stages: two rounds of RW-based pseudolabeling and one of cross-domain learning via C-CCA. Firstly, given the initially labeled target samples as the training set ($\mathbf{TS}$), RW-based pseudolabeling is employed to update $\mathbf{TS}$ and extract target clusters ($\mathbf{TCs}$) by fusing the segmentation results obtained by RW and extended RW (ERW) classifiers. Secondly, cross-domain learning via C-CCA is applied using labeled source samples and $\mathbf{TCs}$. The unlabeled target samples are then classified with the estimated probability maps using the model trained in the projected correlation subspace. Thirdly, both $\mathbf{TS}$ and the estimated probability maps are used to update $\mathbf{TS}$ again via RW-based pseudolabeling. When the iterative process finishes, the result obtained by the ERW classifier using the final $\mathbf{TS}$ and estimated probability maps is regarded as the final classification map. Experimental results on four real HSIs demonstrate that the proposed method achieves better performance than state-of-the-art HDA and ERW methods.
Tasks Domain Adaptation, Hyperspectral Image Classification, Image Classification
Published 2018-08-29
URL http://arxiv.org/abs/1808.09740v2
PDF http://arxiv.org/pdf/1808.09740v2.pdf
PWC https://paperswithcode.com/paper/cross-domain-collaborative-learning-via
Repo
Framework
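A minimal sketch of the correlation-subspace idea using plain CCA from scikit-learn; the paper's C-CCA operates on clusters rather than individual samples, and the pairing below is synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
Xs = rng.normal(size=(60, 20))   # paired source-domain features (e.g., cluster representatives)
Xt = rng.normal(size=(60, 12))   # paired target-domain features with a different dimensionality

cca = CCA(n_components=5)
cca.fit(Xs, Xt)
Zs, Zt = cca.transform(Xs, Xt)   # both domains projected into a shared correlation subspace

# a classifier trained on Zs with source labels can then score target samples via Zt
print(Zs.shape, Zt.shape)
```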

Unsupervised Multi-Target Domain Adaptation: An Information Theoretic Approach

Title Unsupervised Multi-Target Domain Adaptation: An Information Theoretic Approach
Authors Behnam Gholami, Pritish Sahu, Ognjen Rudovic, Konstantinos Bousmalis, Vladimir Pavlovic
Abstract Unsupervised domain adaptation (uDA) models focus on pairwise adaptation settings where there is a single labeled source domain and a single target domain. However, in many real-world settings one seeks to adapt to multiple, but somewhat similar, target domains. Applying pairwise adaptation approaches to this setting may be suboptimal, as they fail to leverage shared information among multiple domains. In this work we propose an information-theoretic approach for domain adaptation in the novel context of multiple target domains with unlabeled instances and one source domain with labeled instances. Our model aims to find a shared latent space common to all domains, while simultaneously accounting for the remaining private, domain-specific factors. Disentanglement of shared and private information is accomplished using a unified information-theoretic approach, which also serves to establish a stronger link between the latent representations and the observed data. The resulting model, accompanied by an efficient optimization algorithm, allows simultaneous adaptation from a single source to multiple target domains. We test our approach on three challenging publicly-available datasets, showing that it outperforms several popular domain adaptation methods.
Tasks Domain Adaptation, Unsupervised Domain Adaptation
Published 2018-10-26
URL http://arxiv.org/abs/1810.11547v1
PDF http://arxiv.org/pdf/1810.11547v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-multi-target-domain-adaptation
Repo
Framework
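A structural sketch of the shared/private split in PyTorch: one encoder shared by all domains plus one private encoder per domain. Layer sizes are hypothetical, and the information-theoretic disentanglement terms of the paper are omitted.

```python
import torch
import torch.nn as nn

class SharedPrivateEncoder(nn.Module):
    """One shared latent encoder for all domains plus a private encoder per domain."""
    def __init__(self, in_dim, shared_dim, private_dim, n_domains, hidden=128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, shared_dim))
        self.private = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, private_dim))
            for _ in range(n_domains)
        )

    def forward(self, x, domain_id):
        return self.shared(x), self.private[domain_id](x)

enc = SharedPrivateEncoder(in_dim=50, shared_dim=16, private_dim=8, n_domains=3)
z_shared, z_private = enc(torch.randn(4, 50), domain_id=1)
```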

Learning Non-Uniform Hypergraph for Multi-Object Tracking

Title Learning Non-Uniform Hypergraph for Multi-Object Tracking
Authors Longyin Wen, Dawei Du, Shengkun Li, Xiao Bian, Siwei Lyu
Abstract The majority of Multi-Object Tracking (MOT) algorithms based on the tracking-by-detection scheme do not use higher order dependencies among objects or tracklets, which makes them less effective in handling complex scenarios. In this work, we present a new near-online MOT algorithm based on a non-uniform hypergraph, which can model different degrees of dependencies among tracklets in a unified objective. The nodes in the hypergraph correspond to the tracklets and the hyperedges with different degrees encode various kinds of dependencies among them. Specifically, instead of setting the weights of hyperedges with different degrees empirically, they are learned automatically using the structural support vector machine (SSVM) algorithm. Several experiments are carried out on various challenging datasets (i.e., PETS09, the ParkingLot sequence, SubwayFace, and the MOT16 benchmark) to demonstrate that our method achieves favorable performance against the state-of-the-art MOT methods.
Tasks Multi-Object Tracking, Object Tracking
Published 2018-12-10
URL http://arxiv.org/abs/1812.03621v1
PDF http://arxiv.org/pdf/1812.03621v1.pdf
PWC https://paperswithcode.com/paper/learning-non-uniform-hypergraph-for-multi
Repo
Framework
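A toy sketch of the modeling idea: hyperedges of different degrees connect tracklets, each degree has its own weight (learned by the SSVM in the paper, fixed by hand here), and a tracklet-to-target labeling is scored by the weighted affinities of hyperedges whose tracklets share a target. All numbers below are made up.

```python
# hyperedges of degree 2 and 3 over five tracklets, with toy affinities
hyperedges = {(0, 1): 0.9, (1, 2): 0.2, (2, 3, 4): 0.8, (0, 3, 4): 0.1}
degree_weight = {2: 1.0, 3: 0.5}      # per-degree weights (learned via SSVM in the paper)

def labeling_score(assignment, hyperedges, degree_weight):
    """Sum weighted affinities over hyperedges whose tracklets map to the same target."""
    return sum(degree_weight[len(e)] * a
               for e, a in hyperedges.items()
               if len({assignment[v] for v in e}) == 1)

print(labeling_score({0: "A", 1: "A", 2: "B", 3: "B", 4: "B"}, hyperedges, degree_weight))
```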

Latent Filter Scaling for Multimodal Unsupervised Image-to-Image Translation

Title Latent Filter Scaling for Multimodal Unsupervised Image-to-Image Translation
Authors Yazeed Alharbi, Neil Smith, Peter Wonka
Abstract In multimodal unsupervised image-to-image translation tasks, the goal is to translate an image from the source domain to many images in the target domain. We present a simple method that produces higher-quality images than the current state of the art while maintaining the same amount of multimodal diversity. Previous methods follow the unconditional approach of trying to map the latent code directly to a full-size image. This leads to complicated network architectures with several introduced hyperparameters to tune. By treating the latent code as a modifier of the convolutional filters, we produce multimodal output while maintaining the traditional Generative Adversarial Network (GAN) loss and without additional hyperparameters. The only tuning required by our method controls the tradeoff between variability and quality of generated images. Furthermore, we achieve disentanglement between source domain content and target domain style for free as a by-product of our formulation. We perform qualitative and quantitative experiments showing the advantages of our method compared with the state of the art on multiple benchmark image-to-image translation datasets.
Tasks Image-to-Image Translation, Multimodal Unsupervised Image-To-Image Translation, Unsupervised Image-To-Image Translation
Published 2018-12-24
URL http://arxiv.org/abs/1812.09877v3
PDF http://arxiv.org/pdf/1812.09877v3.pdf
PWC https://paperswithcode.com/paper/latent-filter-scaling-for-multimodal
Repo
Framework
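The central mechanism is easy to sketch in PyTorch: the latent code is mapped to one scalar per convolutional filter and scales that filter's response, leaving the rest of a standard GAN generator unchanged. Layer sizes here are hypothetical.

```python
import torch
import torch.nn as nn

class LatentScaledConv(nn.Module):
    """Convolution whose per-filter responses are scaled by a latent code."""
    def __init__(self, in_ch, out_ch, latent_dim):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.to_scale = nn.Linear(latent_dim, out_ch)

    def forward(self, x, z):
        scale = self.to_scale(z).unsqueeze(-1).unsqueeze(-1)   # (N, out_ch, 1, 1)
        return self.conv(x) * scale

layer = LatentScaledConv(in_ch=64, out_ch=128, latent_dim=8)
y = layer(torch.randn(2, 64, 32, 32), torch.randn(2, 8))       # two samples, two latent codes
```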

Antithetic and Monte Carlo kernel estimators for partial rankings

Title Antithetic and Monte Carlo kernel estimators for partial rankings
Authors Maria Lomeli, Mark Rowland, Arthur Gretton, Zoubin Ghahramani
Abstract In the modern age, rankings data is ubiquitous and useful for a variety of applications such as recommender systems, multi-object tracking and preference learning. However, most rankings data encountered in the real world is incomplete, which prevents the direct application of existing modelling tools for complete rankings. Our contribution is a novel way to extend kernel methods for complete rankings to partial rankings, via consistent Monte Carlo estimators for Gram matrices: matrices of kernel values between pairs of observations. We also present a novel variance reduction scheme based on an antithetic variate construction between permutations to obtain an improved estimator for the Mallows kernel. The corresponding antithetic kernel estimator has lower variance and we demonstrate empirically that it has a better performance in a variety of Machine Learning tasks. Both kernel estimators are based on extending kernel mean embeddings to the embedding of a set of full rankings consistent with an observed partial ranking. They form a computationally tractable alternative to previous approaches for partial rankings data. An overview of the existing kernels and metrics for permutations is also provided.
Tasks Multi-Object Tracking, Object Tracking, Recommendation Systems
Published 2018-07-01
URL http://arxiv.org/abs/1807.00400v2
PDF http://arxiv.org/pdf/1807.00400v2.pdf
PWC https://paperswithcode.com/paper/antithetic-and-monte-carlo-kernel-estimators
Repo
Framework
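A minimal sketch of the plain (non-antithetic) Monte Carlo estimator: sample full rankings uniformly among those consistent with each observed partial ranking and average the Mallows kernel between the samples. Names and the interleaving sampler are illustrative.

```python
import itertools
import math
import random

def kendall_tau(r1, r2):
    """Number of discordant item pairs between two full rankings (best item first)."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    return sum((pos1[a] < pos1[b]) != (pos2[a] < pos2[b])
               for a, b in itertools.combinations(pos1, 2))

def mallows_kernel(r1, r2, lam=0.5):
    return math.exp(-lam * kendall_tau(r1, r2))

def sample_consistent(observed, all_items):
    """Uniformly interleave the unranked items while preserving the observed order."""
    full = list(observed)
    for item in (x for x in all_items if x not in observed):
        full.insert(random.randint(0, len(full)), item)
    return full

def mc_partial_kernel(obs1, obs2, all_items, n_samples=500, lam=0.5):
    return sum(mallows_kernel(sample_consistent(obs1, all_items),
                              sample_consistent(obs2, all_items))
               for _ in range(n_samples)) / n_samples

print(mc_partial_kernel(["a", "b"], ["c", "a"], all_items=["a", "b", "c", "d"]))
```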

Taking Advantage of Multitask Learning for Fair Classification

Title Taking Advantage of Multitask Learning for Fair Classification
Authors Luca Oneto, Michele Donini, Amon Elders, Massimiliano Pontil
Abstract A central goal of algorithmic fairness is to reduce bias in automated decision making. An unavoidable tension exists between accuracy gains obtained by using sensitive information (e.g., gender or ethnic group) as part of a statistical model, and any commitment to protect these characteristics. Often, due to biases present in the data, using the sensitive information in the functional form of a classifier improves classification accuracy. In this paper we show how it is possible to get the best of both worlds: optimize model accuracy and fairness without explicitly using the sensitive feature in the functional form of the model, thereby treating different individuals equally. Our method is based on two key ideas. On the one hand, we propose to use Multitask Learning (MTL), enhanced with fairness constraints, to jointly learn group-specific classifiers that leverage information between sensitive groups. On the other hand, since learning group-specific models might not be permitted, we propose to first predict the sensitive features by any learning method and then to use the predicted sensitive feature to train MTL with fairness constraints. This enables us to tackle fairness with a three-pronged approach, that is, by increasing accuracy on each group, enforcing measures of fairness during training, and protecting sensitive information during testing. Experimental results on two real datasets support our proposal, showing substantial improvements in both accuracy and fairness.
Tasks Decision Making
Published 2018-10-19
URL https://arxiv.org/abs/1810.08683v2
PDF https://arxiv.org/pdf/1810.08683v2.pdf
PWC https://paperswithcode.com/paper/taking-advantage-of-multitask-learning-for
Repo
Framework
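A heavily simplified sketch of the two-step idea on synthetic data: predict the sensitive attribute first, then train group-specific classifiers on the predicted groups, and report an equal-opportunity gap. The multitask coupling and the explicit fairness constraints of the paper are omitted; all data below is random.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
s = (X[:, 0] + rng.normal(scale=0.5, size=400) > 0).astype(int)             # sensitive attribute
y = (X[:, 1] + 0.3 * s + rng.normal(scale=0.5, size=400) > 0).astype(int)   # toy label

# step 1: predict the sensitive attribute; it is never used directly in the final model
s_hat = LogisticRegression().fit(X, s).predict(X)

# step 2: group-specific classifiers trained on the predicted groups (a stand-in for the
# fairness-constrained multitask learner that couples them in the paper)
models = {g: LogisticRegression().fit(X[s_hat == g], y[s_hat == g]) for g in (0, 1)}
y_hat = np.array([models[g].predict(x[None, :])[0] for g, x in zip(s_hat, X)])

# equal-opportunity gap: absolute difference in true-positive rates across true groups
tpr = {g: (y_hat[(s == g) & (y == 1)] == 1).mean() for g in (0, 1)}
print("TPR gap:", abs(tpr[0] - tpr[1]))
```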

Near Optimal Coded Data Shuffling for Distributed Learning

Title Near Optimal Coded Data Shuffling for Distributed Learning
Authors Mohamed A. Attia, Ravi Tandon
Abstract Data shuffling across a distributed cluster of nodes is one of the critical steps in implementing large-scale learning algorithms. Randomly shuffling the dataset among a cluster of workers allows different nodes to obtain fresh data assignments at each learning epoch. This process has been shown to provide improvements in the learning process. However, the statistical benefits of distributed data shuffling come at the cost of extra communication overhead from the master node to worker nodes, and can act as one of the major bottlenecks in the overall time for computation. There has been significant recent interest in devising approaches to minimize this communication overhead. One approach is to provision for extra storage at the computing nodes. The other emerging approach is to leverage coded communication to minimize the overall communication overhead. The focus of this work is to understand the fundamental trade-off between the amount of storage and the communication overhead for distributed data shuffling. In this work, we first present an information-theoretic formulation for the data shuffling problem, accounting for the underlying problem parameters (the number of workers $K$, the number of data points $N$, and the available storage $S$ per node). We then present an information-theoretic lower bound on the communication overhead for data shuffling as a function of these parameters. We next present a novel coded communication scheme and show that the resulting communication overhead of the proposed scheme is within a multiplicative factor of at most $\frac{K}{K-1}$ from the information-theoretic lower bound. Furthermore, we present the aligned coded shuffling scheme for some storage values, which achieves the optimal storage versus communication trade-off for $K<5$, and further reduces the maximum multiplicative gap down to $\frac{K-\frac{1}{3}}{K-1}$, for $K\geq 5$.
Tasks
Published 2018-01-05
URL http://arxiv.org/abs/1801.01875v1
PDF http://arxiv.org/pdf/1801.01875v1.pdf
PWC https://paperswithcode.com/paper/near-optimal-coded-data-shuffling-for
Repo
Framework
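The coded-communication idea behind such schemes can be illustrated with the smallest possible example (two workers, two equal-sized blocks): one XOR-coded broadcast simultaneously delivers a new block to each worker, halving the communication relative to sending both blocks in the clear. This is only the index-coding intuition, not the paper's storage-optimal scheme.

```python
import numpy as np

block_a = np.frombuffer(b"data block A....", dtype=np.uint8)   # wanted by worker 1, cached by worker 2
block_b = np.frombuffer(b"data block B....", dtype=np.uint8)   # wanted by worker 2, cached by worker 1

coded = block_a ^ block_b                   # the master broadcasts a single coded packet

recovered_by_worker_1 = coded ^ block_b     # worker 1 cancels its cached block
recovered_by_worker_2 = coded ^ block_a     # worker 2 does the same

assert recovered_by_worker_1.tobytes() == block_a.tobytes()
assert recovered_by_worker_2.tobytes() == block_b.tobytes()
```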

Brain MRI super-resolution using 3D generative adversarial networks

Title Brain MRI super-resolution using 3D generative adversarial networks
Authors Irina Sanchez, Veronica Vilaplana
Abstract In this work we propose an adversarial learning approach to generate high resolution MRI scans from low resolution images. The architecture, based on the SRGAN model, adopts 3D convolutions to exploit volumetric information. For the discriminator, the adversarial loss uses least squares in order to stabilize the training. For the generator, the loss function is a combination of a least squares adversarial loss and a content term based on mean square error and image gradients in order to improve the quality of the generated images. We explore different solutions for the upsampling phase. We present promising results that improve classical interpolation, showing the potential of the approach for 3D medical imaging super-resolution. Source code available at https://github.com/imatge-upc/3D-GAN-superresolution
Tasks Super-Resolution
Published 2018-12-29
URL http://arxiv.org/abs/1812.11440v1
PDF http://arxiv.org/pdf/1812.11440v1.pdf
PWC https://paperswithcode.com/paper/brain-mri-super-resolution-using-3d
Repo
Framework
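A sketch of the generator objective described in the abstract, assuming PyTorch and (N, C, D, H, W) volumes: a least-squares adversarial term plus an MSE content term and an image-gradient term. The weights are hypothetical; the linked repository contains the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def generator_loss(fake_logits, sr, hr, adv_weight=1e-3, grad_weight=1.0):
    """Least-squares adversarial loss + MSE content loss + 3D image-gradient loss."""
    adv = torch.mean((fake_logits - 1.0) ** 2)          # LSGAN generator term
    content = F.mse_loss(sr, hr)
    grad = sum(F.l1_loss(torch.diff(sr, dim=d), torch.diff(hr, dim=d))
               for d in (2, 3, 4))                       # finite differences along D, H, W
    return content + grad_weight * grad + adv_weight * adv
```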

Clinical Document Classification Using Labeled and Unlabeled Data Across Hospitals

Title Clinical Document Classification Using Labeled and Unlabeled Data Across Hospitals
Authors Hamed Hassanzadeh, Mahnoosh Kholghi, Anthony Nguyen, Kevin Chu
Abstract Reviewing radiology reports in emergency departments is an essential but laborious task. Timely follow-up of patients with abnormal cases in their radiology reports may dramatically affect the patient’s outcome, especially if they have been discharged with a different initial diagnosis. Machine learning approaches have been devised to expedite the process and detect the cases that demand instant follow-up. However, these approaches require a large amount of labeled data to train reliable predictive models. Preparing such a large dataset, which needs to be manually annotated by health professionals, is costly and time-consuming. This paper investigates a semi-supervised learning framework for radiology report classification across three hospitals. The main goal is to leverage clinical unlabeled data in order to augment the learning process where limited labeled data is available. To further improve the classification performance, we also integrate a transfer learning technique into the semi-supervised learning pipeline. Our experimental findings show that (1) convolutional neural networks (CNNs), while being independent of any problem-specific feature engineering, achieve significantly higher effectiveness compared to conventional supervised learning approaches, (2) leveraging unlabeled data in training a CNN-based classifier reduces the dependency on labeled data by more than 50% to reach the same performance as a fully supervised CNN, and (3) transferring the knowledge gained from available labeled data in an external source hospital significantly improves the performance of a semi-supervised CNN model over its fully supervised counterpart in a target hospital.
Tasks Document Classification, Feature Engineering, Transfer Learning
Published 2018-12-03
URL http://arxiv.org/abs/1812.00677v2
PDF http://arxiv.org/pdf/1812.00677v2.pdf
PWC https://paperswithcode.com/paper/clinical-document-classification-using
Repo
Framework
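As a generic stand-in for the semi-supervised step (the paper uses a CNN-based classifier; a TF-IDF plus logistic-regression self-training loop is shown here purely for illustration), confident predictions on unlabeled reports are pseudo-labeled and folded back into the training set.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def self_train(labeled_texts, labels, unlabeled_texts, rounds=3, threshold=0.9):
    """Self-training: pseudo-label confident unlabeled documents and retrain."""
    vec = TfidfVectorizer().fit(list(labeled_texts) + list(unlabeled_texts))
    X, y = vec.transform(labeled_texts), np.asarray(labels)
    U = vec.transform(unlabeled_texts)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(rounds):
        if U.shape[0] == 0:
            break
        proba = clf.predict_proba(U)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]
        X, y = vstack([X, U[confident]]), np.concatenate([y, pseudo])
        U = U[~confident]
        clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf, vec
```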