Paper Group ANR 109
Online Clustering by Penalized Weighted GMM. Wasserstein-2 Generative Networks. Style Transfer by Rigid Alignment in Neural Net Feature Space. Semi-supervised Text Style Transfer: Cross Projection in Latent Space. Automated curricula through setter-solver interactions. Stabilizing Generative Adversarial Networks: A Survey. Latent Distance Estimation for Random Geometric Graphs. Human-centric Metric for Accelerating Pathology Reports Annotation. ALTER: Auxiliary Text Rewriting Tool for Natural Language Generation. A Survey of Optimization Methods from a Machine Learning Perspective. On Connected Sublevel Sets in Deep Learning. Fast Supervised Discrete Hashing. DECT-MULTRA: Dual-Energy CT Image Decomposition With Learned Mixed Material Models and Efficient Clustering. Inducing Cooperation via Team Regret Minimization based Multi-Agent Deep Reinforcement Learning. Planning with Expectation Models.
Online Clustering by Penalized Weighted GMM
Title | Online Clustering by Penalized Weighted GMM |
Authors | Shlomo Bugdary, Shay Maymon |
Abstract | With the dawn of the Big Data era, data sets are growing rapidly. Data is streaming from everywhere - from cameras, mobile phones, cars, and other electronic devices. Clustering streaming data is a very challenging problem. Unlike traditional clustering algorithms, where the dataset can be stored and scanned multiple times, clustering streaming data has to satisfy constraints such as limited memory, real-time response, unknown data statistics, and an unknown number of clusters. In this paper, we present a novel online clustering algorithm which can be used to cluster streaming data without knowing the number of clusters a priori. Results on both synthetic and real datasets show that the proposed algorithm produces partitions close to those obtained by clustering the whole dataset at once. |
Tasks | |
Published | 2019-02-07 |
URL | http://arxiv.org/abs/1902.02544v1 |
http://arxiv.org/pdf/1902.02544v1.pdf | |
PWC | https://paperswithcode.com/paper/online-clustering-by-penalized-weighted-gmm |
Repo | |
Framework | |
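Since no reference implementation is listed, here is a minimal sketch of the general idea: an online, EM-style update of a weighted GMM in which negligible components are pruned, so the number of clusters need not be fixed a priori. All names, the unit-variance assumption, and the pruning threshold are illustrative; this is not the paper's penalized formulation.

```python
import numpy as np

class OnlineGMM:
    """Toy online GMM: one EM-style responsibility update per sample,
    with pruning of low-weight components (a crude surrogate for the
    paper's penalty). Illustrative sketch only."""
    def __init__(self, dim, max_k=20, lr=0.05, prune_below=1e-3):
        self.means = np.random.randn(max_k, dim)
        self.weights = np.full(max_k, 1.0 / max_k)
        self.lr, self.prune_below = lr, prune_below

    def update(self, x):
        # Responsibilities under isotropic unit-variance components.
        logp = -0.5 * ((x - self.means) ** 2).sum(axis=1) + np.log(self.weights)
        r = np.exp(logp - logp.max())
        r /= r.sum()
        # Stochastic (online) parameter updates.
        self.weights = (1 - self.lr) * self.weights + self.lr * r
        self.means += self.lr * r[:, None] * (x - self.means)
        # Prune negligible components so k adapts to the stream.
        keep = self.weights > self.prune_below
        self.means, self.weights = self.means[keep], self.weights[keep]
        self.weights /= self.weights.sum()
```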
Wasserstein-2 Generative Networks
Title | Wasserstein-2 Generative Networks |
Authors | Alexander Korotin, Vage Egiazarian, Arip Asadulaev, Evgeny Burnaev |
Abstract | Training Generative Adversarial Networks is difficult due to the minimax nature of the optimization objective. In this paper, we propose a novel end-to-end algorithm for training generative models which uses a non-minimax objective, simplifying model training. The proposed algorithm uses the approximation of the Wasserstein-2 distance by Input Convex Neural Networks. From the theoretical side, we estimate the properties of the generative mapping fitted by the algorithm. From the practical side, we conduct computational experiments which confirm the efficiency of our algorithm in various applied problems: image-to-image color transfer, latent space optimal transport, image-to-image style transfer, and domain adaptation. |
Tasks | Domain Adaptation, Style Transfer |
Published | 2019-09-28 |
URL | https://arxiv.org/abs/1909.13082v2 |
https://arxiv.org/pdf/1909.13082v2.pdf | |
PWC | https://paperswithcode.com/paper/wasserstein-2-generative-networks |
Repo | |
Framework | |
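The key ingredient here is the Input Convex Neural Network (ICNN), whose output is convex in its input; by Brenier's theorem, the gradient of a convex potential yields a Wasserstein-2 optimal transport map. Below is a hedged numpy sketch of an ICNN forward pass, not the authors' architecture; the abs() reparameterization and the layer structure are assumptions.

```python
import numpy as np

def icnn_potential(x, Wx_list, Wz_list, b_list):
    """Tiny Input Convex Neural Network (ICNN) potential. The output is
    convex in x because each hidden-to-hidden weight matrix is forced
    nonnegative (via abs) and ReLU is convex and nondecreasing. In W2GN,
    the gradient of such a convex potential acts as the transport map."""
    z = np.maximum(Wx_list[0] @ x + b_list[0], 0.0)
    for Wx, Wz, b in zip(Wx_list[1:], Wz_list, b_list[1:]):
        z = np.maximum(np.abs(Wz) @ z + Wx @ x + b, 0.0)
    return z.sum()  # scalar convex potential value
```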
Style Transfer by Rigid Alignment in Neural Net Feature Space
Title | Style Transfer by Rigid Alignment in Neural Net Feature Space |
Authors | Suryabhan Singh Hada, Miguel Á. Carreira-Perpiñán |
Abstract | Arbitrary style transfer is an important problem in computer vision that aims to transfer style patterns from an arbitrary style image to a given content image. However, current methods rely either on slow iterative optimization or on fast pre-determined feature transformations, at the cost of compromised visual quality in the styled image; in particular, distorted content structure. In this work, we present an effective and efficient approach for arbitrary style transfer that seamlessly transfers style patterns while keeping the content structure intact in the styled image. We achieve this by aligning style features to content features using rigid alignment, thus modifying the style features, unlike existing methods that do the opposite. We demonstrate the effectiveness of the proposed approach by generating high-quality stylized images and compare the results with current state-of-the-art techniques for arbitrary style transfer. |
Tasks | Style Transfer |
Published | 2019-09-27 |
URL | https://arxiv.org/abs/1909.13690v1 |
https://arxiv.org/pdf/1909.13690v1.pdf | |
PWC | https://paperswithcode.com/paper/style-transfer-by-rigid-alignment-in-neural |
Repo | |
Framework | |
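Rigid alignment of one point set to another is classically solved by orthogonal Procrustes. The sketch below shows that step on generic feature matrices (rows are feature vectors; both sets are assumed to have the same number of rows); extraction of the features from a VGG-like network, which the paper operates on, is omitted.

```python
import numpy as np

def rigid_align(style_feats, content_feats):
    """Align style features to content features with a rigid transform
    (rotation + translation) via orthogonal Procrustes. Sketch of the
    idea only; assumes both matrices have shape (n_vectors, dim)."""
    mu_s, mu_c = style_feats.mean(0), content_feats.mean(0)
    S, C = style_feats - mu_s, content_feats - mu_c
    U, _, Vt = np.linalg.svd(S.T @ C)        # (dim, dim) cross-covariance
    R = U @ Vt                               # rotation minimizing ||S R - C||_F
    return (style_feats - mu_s) @ R + mu_c   # rotated, recentered style features
```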
Semi-supervised Text Style Transfer: Cross Projection in Latent Space
Title | Semi-supervised Text Style Transfer: Cross Projection in Latent Space |
Authors | Mingyue Shang, Piji Li, Zhenxin Fu, Lidong Bing, Dongyan Zhao, Shuming Shi, Rui Yan |
Abstract | The text style transfer task requires the model to transfer a sentence of one style to another style while retaining its original content meaning; it is a challenging problem that has long suffered from a shortage of parallel data. In this paper, we first propose a semi-supervised text style transfer model that combines small-scale parallel data with large-scale nonparallel data. With these two types of training data, we introduce a projection function between the latent spaces of the different styles and design two constraints to train it. We also introduce two other simple but effective semi-supervised methods for comparison. To evaluate the performance of the proposed methods, we build and release a novel style transfer dataset that alters sentences between the style of ancient Chinese poetry and modern Chinese. |
Tasks | Style Transfer, Text Style Transfer |
Published | 2019-09-25 |
URL | https://arxiv.org/abs/1909.11493v1 |
https://arxiv.org/pdf/1909.11493v1.pdf | |
PWC | https://paperswithcode.com/paper/semi-supervised-text-style-transfer-cross |
Repo | |
Framework | |
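A hedged PyTorch sketch of what a cross-projection objective could look like: a learned map between two style-specific latent spaces, trained with a supervised loss on the small parallel corpus plus a cycle-consistency constraint on nonparallel data. Dimensions, module names, and the exact form of the constraints are assumptions; the paper's two constraints may differ.

```python
import torch, torch.nn as nn

# Hypothetical cross-projection maps between the latent spaces of two
# style-specific autoencoders (illustrative names and sizes).
proj_ab = nn.Linear(256, 256)   # latent of style A -> latent of style B
proj_ba = nn.Linear(256, 256)   # latent of style B -> latent of style A

def cross_projection_loss(z_a, z_b_parallel=None):
    loss = torch.tensor(0.0)
    if z_b_parallel is not None:                 # small parallel corpus
        loss = loss + ((proj_ab(z_a) - z_b_parallel) ** 2).mean()
    cycle = proj_ba(proj_ab(z_a))                # nonparallel constraint
    return loss + ((cycle - z_a) ** 2).mean()

loss = cross_projection_loss(torch.randn(8, 256))  # nonparallel batch only
```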
Automated curricula through setter-solver interactions
Title | Automated curricula through setter-solver interactions |
Authors | Sebastien Racaniere, Andrew K. Lampinen, Adam Santoro, David P. Reichert, Vlad Firoiu, Timothy P. Lillicrap |
Abstract | Reinforcement learning algorithms use correlations between policies and rewards to improve agent performance. But in dynamic or sparsely rewarding environments these correlations are often too small, or rewarding events are too infrequent, to make learning feasible. Human education instead relies on curricula (the breakdown of tasks into simpler, static challenges with dense rewards) to build up to complex behaviors. While curricula are also useful for artificial agents, hand-crafting them is time consuming. This has led researchers to explore automatic curriculum generation. Here we explore automatic curriculum generation in rich, dynamic environments. Using a setter-solver paradigm, we show the importance of considering goal validity, goal feasibility, and goal coverage to construct useful curricula. We demonstrate the success of our approach in rich but sparsely rewarding 2D and 3D environments, where an agent is tasked to achieve a single goal selected from a set of possible goals that varies between episodes, and identify challenges for future work. Finally, we demonstrate the value of a novel technique that guides agents towards a desired goal distribution. Altogether, these results represent a substantial step towards applying automatic task curricula to learn complex, otherwise unlearnable goals, and to our knowledge are the first to demonstrate automated curriculum generation for goal-conditioned agents in environments where the possible goals vary between episodes. |
Tasks | |
Published | 2019-09-27 |
URL | https://arxiv.org/abs/1909.12892v2 |
https://arxiv.org/pdf/1909.12892v2.pdf | |
PWC | https://paperswithcode.com/paper/automated-curricula-through-setter-solver |
Repo | |
Framework | |
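A self-contained toy illustration (not the paper's agents) of why a setter helps: the setter adapts goal difficulty toward roughly 50% solver success, keeping the learning signal dense while the solver's skill grows.

```python
import random

def toy_setter_solver(episodes=2000):
    """Toy setter-solver curriculum: the setter raises the bar after a
    success and eases off after a failure, so goal difficulty tracks the
    solver's current skill. All dynamics here are made up for illustration."""
    difficulty, skill = 0.1, 0.1
    for _ in range(episodes):
        goal = difficulty * (0.5 + random.random())          # setter proposes
        success = random.random() < skill / (skill + goal)   # solver attempts
        skill += 0.01 * goal if success else 0.0             # learn from practice
        difficulty *= 1.05 if success else 0.95              # setter feedback
        difficulty = min(max(difficulty, 0.01), 10.0)
    return skill, difficulty
```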
Stabilizing Generative Adversarial Networks: A Survey
Title | Stabilizing Generative Adversarial Networks: A Survey |
Authors | Maciej Wiatrak, Stefano V. Albrecht, Andrew Nystrom |
Abstract | Generative Adversarial Networks (GANs) are a type of generative model which have received much attention due to their ability to model complex real-world data. Despite their recent successes, the process of training GANs remains challenging, suffering from instability problems such as non-convergence, vanishing or exploding gradients, and mode collapse. In recent years, a diverse set of approaches have been proposed which focus on stabilizing the GAN training procedure. The purpose of this survey is to provide a comprehensive overview of the GAN training stabilization methods which can be found in the literature. We discuss the advantages and disadvantages of each approach, offer a comparative summary, and conclude with a discussion of open problems. |
Tasks | |
Published | 2019-09-30 |
URL | https://arxiv.org/abs/1910.00927v2 |
https://arxiv.org/pdf/1910.00927v2.pdf | |
PWC | https://paperswithcode.com/paper/stabilizing-generative-adversarial-network |
Repo | |
Framework | |
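As a concrete example of the kind of stabilization method such a survey covers, here is the well-known WGAN-GP gradient penalty (Gulrajani et al.), which regularizes the critic's gradient norm toward 1 on interpolates between real and fake samples. The sketch assumes samples are flattened to shape (batch, features).

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP gradient penalty: penalize deviation of the critic's
    gradient norm from 1 on random interpolates of real and fake samples.
    `real` and `fake` are (batch, features) tensors."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(mix).sum()
    grads, = torch.autograd.grad(score, mix, create_graph=True)
    return lam * ((grads.norm(2, dim=1) - 1) ** 2).mean()
```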
Latent Distance Estimation for Random Geometric Graphs
Title | Latent Distance Estimation for Random Geometric Graphs |
Authors | Ernesto Araya, Yohann De Castro |
Abstract | Random geometric graphs are a popular choice for a latent points generative model for networks. Their definition is based on a sample of $n$ points $X_1,X_2,\cdots,X_n$ on the Euclidean sphere~$\mathbb{S}^{d-1}$ which represents the latent positions of nodes of the network. The connection probabilities between the nodes are determined by an unknown function (referred to as the “link” function) evaluated at the distance between the latent points. We introduce a spectral estimator of the pairwise distance between latent points, and we prove that its rate of convergence is the same as that of nonparametric estimation of a function on $\mathbb{S}^{d-1}$, up to a logarithmic factor. In addition, we provide an efficient spectral algorithm to compute this estimator without any knowledge of the nonparametric link function. As a byproduct, our method can also consistently estimate the dimension $d$ of the latent space. |
Tasks | |
Published | 2019-09-15 |
URL | https://arxiv.org/abs/1909.06841v1 |
https://arxiv.org/pdf/1909.06841v1.pdf | |
PWC | https://paperswithcode.com/paper/latent-distance-estimation-for-random |
Repo | |
Framework | |
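A hedged sketch of the spectral idea: embed the nodes using the leading eigenpairs of the scaled adjacency matrix, project onto the unit sphere, and read off distances from inner products via $\|x - y\|^2 = 2 - 2\langle x, y\rangle$. The normalization and eigenpair selection below are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def spectral_distance_estimate(A, d):
    """Illustrative spectral latent-distance estimator for a random
    geometric graph with adjacency matrix A and latent dimension d."""
    n = A.shape[0]
    vals, vecs = np.linalg.eigh(A / n)
    idx = np.argsort(-np.abs(vals))[:d]            # d leading eigenpairs
    X = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))  # spectral embedding
    X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    G = np.clip(X @ X.T, -1.0, 1.0)                # inner products on the sphere
    return np.sqrt(np.maximum(2.0 - 2.0 * G, 0.0)) # estimated ||X_i - X_j||
```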
Human-centric Metric for Accelerating Pathology Reports Annotation
Title | Human-centric Metric for Accelerating Pathology Reports Annotation |
Authors | Ruibin Ma, Po-Hsuan Cameron Chen, Gang Li, Wei-Hung Weng, Angela Lin, Krishna Gadepalli, Yuannan Cai |
Abstract | Pathology reports contain useful information such as the main involved organ, diagnosis, etc. This information can be identified from the free-text reports and used for large-scale statistical analysis, or serve as annotations for other modalities such as pathology slide images. However, manual classification of a huge number of reports on multiple tasks is labor-intensive. In this paper, we develop an automatic text classifier based on BERT and propose a human-centric metric to evaluate the model. Based on the model's confidence, we identify low-confidence cases that require further expert annotation and high-confidence cases that are automatically classified. We report the percentage of low-confidence cases and the performance of the automatically classified cases. On the high-confidence cases, the model achieves classification accuracy comparable to pathologists. This suggests the potential to reduce 80% to 98% of the manual annotation workload. |
Tasks | |
Published | 2019-10-31 |
URL | https://arxiv.org/abs/1911.01226v2 |
https://arxiv.org/pdf/1911.01226v2.pdf | |
PWC | https://paperswithcode.com/paper/human-centric-metric-for-accelerating |
Repo | |
Framework | |
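The workflow the abstract describes reduces to a confidence threshold: keep high-confidence predictions automatic and defer the rest to experts. A minimal sketch, with an illustrative threshold value:

```python
import numpy as np

def split_by_confidence(probs, threshold=0.95):
    """Split predictions into auto-classified and expert-deferred cases.
    `probs` is an (n_reports, n_classes) array of classifier probabilities;
    the threshold is illustrative, not the paper's operating point."""
    conf = probs.max(axis=1)
    auto = conf >= threshold            # boolean mask of auto-classified cases
    deferred_frac = 1.0 - auto.mean()   # share of reports needing annotation
    return auto, deferred_frac
```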
ALTER: Auxiliary Text Rewriting Tool for Natural Language Generation
Title | ALTER: Auxiliary Text Rewriting Tool for Natural Language Generation |
Authors | Qiongkai Xu, Chenchen Xu, Lizhen Qu |
Abstract | In this paper, we describe ALTER, an auxiliary text rewriting tool that facilitates the rewriting process for natural language generation tasks, such as paraphrasing, text simplification, fairness-aware text rewriting, and text style transfer. Our tool is characterized by two features: i) recording of word-level revision histories and ii) flexible auxiliary edit support and feedback to annotators. The text rewriting assistance and traceable revision history are potentially beneficial to future research on natural language generation. |
Tasks | Style Transfer, Text Generation, Text Simplification, Text Style Transfer |
Published | 2019-09-14 |
URL | https://arxiv.org/abs/1909.06564v1 |
https://arxiv.org/pdf/1909.06564v1.pdf | |
PWC | https://paperswithcode.com/paper/alter-auxiliary-text-rewriting-tool-for |
Repo | |
Framework | |
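Feature i), word-level revision history, can be illustrated with a few lines of standard-library Python; this is a generic sketch, not ALTER's actual API:

```python
import difflib, time

def record_revision(history, old_text, new_text):
    """Append a timestamped word-level diff for one edit, so every
    revision stays traceable."""
    diff = difflib.ndiff(old_text.split(), new_text.split())
    history.append({"time": time.time(),
                    "ops": [d for d in diff if d[0] in "+-"]})
    return history

history = record_revision([], "the cat sat", "the dog sat down")
# history[0]["ops"] == ['- cat', '+ dog', '+ down']
```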
A Survey of Optimization Methods from a Machine Learning Perspective
Title | A Survey of Optimization Methods from a Machine Learning Perspective |
Authors | Shiliang Sun, Zehui Cao, Han Zhu, Jing Zhao |
Abstract | Machine learning has developed rapidly, producing many theoretical breakthroughs and finding wide application in various fields. Optimization, as an important part of machine learning, has attracted much attention from researchers. With the exponential growth of data and the increase in model complexity, optimization methods in machine learning face more and more challenges. Much work on solving optimization problems or improving optimization methods in machine learning has been proposed. A systematic retrospective and summary of optimization methods from the perspective of machine learning is of great significance, as it can offer guidance for the development of both optimization and machine learning research. In this paper, we first describe the optimization problems in machine learning. Then, we introduce the principles and progress of commonly used optimization methods. Next, we summarize the applications and developments of optimization methods in some popular machine learning fields. Finally, we discuss challenges and open problems in optimization for machine learning. |
Tasks | |
Published | 2019-06-17 |
URL | https://arxiv.org/abs/1906.06821v2 |
https://arxiv.org/pdf/1906.06821v2.pdf | |
PWC | https://paperswithcode.com/paper/a-survey-of-optimization-methods-from-a |
Repo | |
Framework | |
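As a worked example of one commonly surveyed first-order method, here is a single Adam step with bias-corrected moment estimates (a generic textbook implementation, not code from the survey):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and
    its square, bias-corrected, scale the step per coordinate."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```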
On Connected Sublevel Sets in Deep Learning
Title | On Connected Sublevel Sets in Deep Learning |
Authors | Quynh Nguyen |
Abstract | This paper shows that every sublevel set of the loss function of a class of deep over-parameterized neural nets with piecewise linear activation functions is connected and unbounded. This implies that the loss has no bad local valleys and all of its global minima are connected within a unique and potentially very large global valley. |
Tasks | |
Published | 2019-01-22 |
URL | https://arxiv.org/abs/1901.07417v2 |
https://arxiv.org/pdf/1901.07417v2.pdf | |
PWC | https://paperswithcode.com/paper/on-connected-sublevel-sets-in-deep-learning |
Repo | |
Framework | |
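For reference, the object of study can be stated compactly (a paraphrase of the abstract, not the paper's exact theorem):

```latex
% Sublevel set of the training loss L over parameters \theta:
\Omega_\alpha = \{\theta : L(\theta) \le \alpha\}.
% Claim (paraphrased): for sufficiently over-parameterized networks with
% piecewise linear activations, every \Omega_\alpha is connected and
% unbounded, so no bad local valleys separate the global minima.
```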
Fast Supervised Discrete Hashing
Title | Fast Supervised Discrete Hashing |
Authors | Jie Gui, Tongliang Liu, Zhenan Sun, Dacheng Tao, Tieniu Tan |
Abstract | Learning-based hashing algorithms are "hot topics" because they can greatly increase the scale at which existing methods operate. In this paper, we propose a new learning-based hashing method called "fast supervised discrete hashing" (FSDH), based on "supervised discrete hashing" (SDH). Regressing the training examples (or hash codes) to the corresponding class labels is widely used in ordinary least squares regression. Rather than adopting this method, FSDH uses a very simple yet effective regression of the class labels of training examples to the corresponding hash codes to accelerate the algorithm. To the best of our knowledge, this strategy has not previously been used for hashing. Traditional SDH decomposes the optimization into three sub-problems, with the most critical sub-problem - discrete optimization for binary hash codes - solved using iterative discrete cyclic coordinate descent (DCC), which is time-consuming. However, FSDH has a closed-form solution and only requires a single rather than iterative hash-code-solving step, which is highly efficient. Furthermore, FSDH is usually faster than SDH at solving the projection matrix for least squares regression, making FSDH generally faster than SDH. For example, our results show that FSDH is about 12 times faster than SDH when the number of hashing bits is 128 on the CIFAR-10 database, and about 151 times faster than FastHash when the number of hashing bits is 64 on the MNIST database. Our experimental results show that FSDH is not only fast, but also outperforms other comparative methods. |
Tasks | |
Published | 2019-04-07 |
URL | http://arxiv.org/abs/1904.03556v1 |
http://arxiv.org/pdf/1904.03556v1.pdf | |
PWC | https://paperswithcode.com/paper/fast-supervised-discrete-hashing |
Repo | |
Framework | |
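The speedup comes from reversing the regression direction, which admits a closed form. A hedged numpy sketch of that core step (the paper's full objective includes additional terms and updates not shown here):

```python
import numpy as np

def fsdh_label_step(Y, B, lam=1.0):
    """Sketch of FSDH's key idea: regress class labels Y (n x c, one-hot)
    onto binary codes B (n x L) in closed ridge-regression form, instead
    of the reverse regression used by SDH. The sign() code update below
    is a simplified single (non-iterative) step."""
    W = np.linalg.solve(Y.T @ Y + lam * np.eye(Y.shape[1]), Y.T @ B)
    B_new = np.sign(Y @ W)      # one-shot hash-code update, no DCC loop
    B_new[B_new == 0] = 1.0     # resolve ties to a valid bit
    return W, B_new
```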
DECT-MULTRA: Dual-Energy CT Image Decomposition With Learned Mixed Material Models and Efficient Clustering
Title | DECT-MULTRA: Dual-Energy CT Image Decomposition With Learned Mixed Material Models and Efficient Clustering |
Authors | Zhipeng Li, Saiprasad Ravishankar, Yong Long, Jeffrey A. Fessler |
Abstract | Dual energy computed tomography (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Image-domain decomposition operates directly on CT images using linear matrix inversion, but the decomposed material images can be severely degraded by noise and artifacts. This paper proposes a new method dubbed DECT-MULTRA for image-domain DECT material decomposition that combines conventional penalized weighted-least squares (PWLS) estimation with regularization based on a mixed union of learned transforms (MULTRA) model. Our proposed approach pre-learns a union of common-material sparsifying transforms from patches extracted from all the basis materials, and a union of cross-material sparsifying transforms from multi-material patches. The common-material transforms capture the common properties among different material images, while the cross-material transforms capture the cross-dependencies. The proposed PWLS formulation is optimized efficiently by alternating between an image update step and a sparse coding and clustering step, with both of these steps having closed-form solutions. The effectiveness of our method is validated with both XCAT phantom and clinical head data. The results demonstrate that our proposed method provides superior material image quality and decomposition accuracy compared to other competing methods. |
Tasks | |
Published | 2019-01-01 |
URL | https://arxiv.org/abs/1901.00106v2 |
https://arxiv.org/pdf/1901.00106v2.pdf | |
PWC | https://paperswithcode.com/paper/dect-multra-dual-energy-ct-image |
Repo | |
Framework | |
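A highly simplified skeleton of the alternation the abstract describes, for orientation only: the paper's two steps have closed-form solutions and operate on image patches with pre-learned transform unions, whereas this sketch uses a gradient step and whole-signal codes. `W_transforms` stands in for the learned union of transforms.

```python
import numpy as np

def hard_threshold(u, lam):
    return u * (np.abs(u) >= lam)

def pwls_multra_sketch(A, y, W_transforms, iters=50, lam=0.1, step=1e-2):
    """Alternate (1) sparse coding + clustering: assign the signal to the
    transform whose hard-thresholded code fits best, and (2) an image
    update balancing the data term and transform agreement. Illustrative
    stand-in for the paper's closed-form steps."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        best = min(W_transforms,
                   key=lambda W: np.sum((W @ x - hard_threshold(W @ x, lam)) ** 2))
        z = hard_threshold(best @ x, lam)
        grad = A.T @ (A @ x - y) + best.T @ (best @ x - z)
        x -= step * grad
    return x
```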
Inducing Cooperation via Team Regret Minimization based Multi-Agent Deep Reinforcement Learning
Title | Inducing Cooperation via Team Regret Minimization based Multi-Agent Deep Reinforcement Learning |
Authors | Runsheng Yu, Zhenyu Shi, Xinrun Wang, Rundong Wang, Buhong Liu, Xinwen Hou, Hanjiang Lai, Bo An |
Abstract | Existing value-factorization based Multi-Agent deep Reinforcement Learning (MARL) approaches perform well in various multi-agent cooperative environments under the centralized training and decentralized execution (CTDE) scheme, where all agents are trained together by the centralized value network and each agent executes its policy independently. However, an issue remains open: in the centralized training process, when the environment for the team is partially observable or non-stationary, i.e., the observation and action information of all the agents cannot represent the global states, existing methods perform poorly and sample inefficiently. Regret Minimization (RM) can be a promising approach as it performs well in partially observable and fully competitive settings. However, it tends to model others as opponents and thus cannot work well under the CTDE scheme. In this work, we propose a novel team RM based Bayesian MARL with three key contributions: (a) we design a novel RM method to train cooperative agents as a team and obtain a team regret-based policy for that team; (b) we introduce a novel method to decompose the team regret to generate the policy for each agent for decentralized execution; (c) to further improve the performance, we leverage a differential particle filter (a Sequential Monte Carlo method) network to get an accurate estimation of the state for each agent. Experimental results on two-step matrix games (cooperative game) and battle games (large-scale mixed cooperative-competitive games) demonstrate that our algorithm significantly outperforms state-of-the-art methods. |
Tasks | |
Published | 2019-11-18 |
URL | https://arxiv.org/abs/1911.07712v1 |
https://arxiv.org/pdf/1911.07712v1.pdf | |
PWC | https://paperswithcode.com/paper/inducing-cooperation-via-team-regret |
Repo | |
Framework | |
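The regret-minimization primitive underlying such approaches is regret matching: play each action with probability proportional to its positive cumulative regret. A generic sketch (team regret would accumulate over the joint team payoff instead):

```python
import numpy as np

def regret_matching_policy(cum_regret):
    """Distribute play probability proportionally to positive cumulative
    regret; fall back to uniform when no action has positive regret."""
    pos = np.maximum(cum_regret, 0.0)
    if pos.sum() > 0:
        return pos / pos.sum()
    return np.full(len(pos), 1.0 / len(pos))

def update_regret(cum_regret, action_values, played):
    """Accumulate regret: each action's value minus the value of the
    action actually played this round."""
    return cum_regret + (action_values - action_values[played])
```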
Planning with Expectation Models
Title | Planning with Expectation Models |
Authors | Yi Wan, Muhammad Zaheer, Adam White, Martha White, Richard S. Sutton |
Abstract | Distribution and sample models are two popular model choices in model-based reinforcement learning (MBRL). However, learning these models can be intractable, particularly when the state and action spaces are large. Expectation models, on the other hand, are relatively easier to learn due to their compactness and have also been widely used for deterministic environments. For stochastic environments, it is not obvious how expectation models can be used for planning as they only partially characterize a distribution. In this paper, we propose a sound way of using approximate expectation models for MBRL. In particular, we 1) show that planning with an expectation model is equivalent to planning with a distribution model if the state value function is linear in state features, 2) analyze two common parametrization choices for approximating the expectation: linear and non-linear expectation models, 3) propose a sound model-based policy evaluation algorithm and present its convergence results, and 4) empirically demonstrate the effectiveness of the proposed planning algorithm. |
Tasks | |
Published | 2019-04-02 |
URL | https://arxiv.org/abs/1904.01191v3 |
https://arxiv.org/pdf/1904.01191v3.pdf | |
PWC | https://paperswithcode.com/paper/planning-with-expectation-models |
Repo | |
Framework | |
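Point 1) of the abstract follows from linearity of expectation, and is easy to verify numerically: with $v(s) = w^\top \phi(s)$, the expected next value under a distribution model equals the value of the expected next feature vector. A small self-contained check:

```python
import numpy as np

# Numerical check: with a linear value function v(s) = w @ phi(s),
# planning with a distribution over next states matches planning with
# the single expected next feature vector (the expectation model).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
next_features = rng.normal(size=(1000, 4))   # samples of phi(s')
probs = rng.dirichlet(np.ones(1000))         # distribution over s'

v_distribution = probs @ (next_features @ w) # E[v(s')]
v_expectation = w @ (probs @ next_features)  # v(E[phi(s')])
assert np.isclose(v_distribution, v_expectation)
```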