Paper Group ANR 461
Domain Adaptive Text Style Transfer. 2-D Embedding of Large and High-dimensional Data with Minimal Memory and Computational Time Requirements. Hill Climbing on Value Estimates for Search-control in Dyna. Predictive Coding Networks Meet Action Recognition. A Joint Model for Aspect-Category Sentiment Analysis with Contextualized Aspect Embedding. MSR: Multi-Scale Shape Regression for Scene Text Detection. Drivers Drowsiness Detection using Condition-Adaptive Representation Learning Framework. Unifying mirror descent and dual averaging. Chaining Meets Chain Rule: Multilevel Entropic Regularization and Training of Neural Nets. Mixture factorized auto-encoder for unsupervised hierarchical deep factorization of speech signal. Fourier Transform Approach to Machine Learning II: Fourier Clustering. Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models. A case study of Consistent Vehicle Routing Problem with Time Windows. A Novel Smoothed Loss and Penalty Function for Noncrossing Composite Quantile Estimation via Deep Neural Networks. Probabilistic Inference for Camera Calibration in Light Microscopy under Circular Motion.
Domain Adaptive Text Style Transfer
Title | Domain Adaptive Text Style Transfer |
Authors | Dianqi Li, Yizhe Zhang, Zhe Gan, Yu Cheng, Chris Brockett, Ming-Ting Sun, Bill Dolan |
Abstract | Text style transfer without parallel data has achieved some practical success. However, in the scenario where less data is available, these methods may yield poor performance. In this paper, we examine domain adaptation for text style transfer to leverage massively available data from other domains. These data may demonstrate domain shift, which impedes the benefits of utilizing such data for training. To address this challenge, we propose simple yet effective domain adaptive text style transfer models, enabling domain-adaptive information exchange. The proposed models presumably learn from the source domain to: (i) distinguish stylized information and generic content information; (ii) maximally preserve content information; and (iii) adaptively transfer the styles in a domain-aware manner. We evaluate the proposed models on two style transfer tasks (sentiment and formality) over multiple target domains where only limited non-parallel data is available. Extensive experiments demonstrate the effectiveness of the proposed model compared to the baselines. |
Tasks | Domain Adaptation, Style Transfer, Text Style Transfer |
Published | 2019-08-25 |
URL | https://arxiv.org/abs/1908.09395v1 |
PDF | https://arxiv.org/pdf/1908.09395v1.pdf |
PWC | https://paperswithcode.com/paper/domain-adaptive-text-style-transfer |
Repo | |
Framework | |
2-D Embedding of Large and High-dimensional Data with Minimal Memory and Computational Time Requirements
Title | 2-D Embedding of Large and High-dimensional Data with Minimal Memory and Computational Time Requirements |
Authors | Witold Dzwinel, Rafal Wcislo, Stan Matwin |
Abstract | With the advent of the big data era, interactive visualization of large data sets consisting of M ~ 10^5+ high-dimensional feature vectors of length N (N ~ 10^3+) is an indispensable tool for exploratory data analysis. The state-of-the-art data embedding (DE) methods of N-D data into 2-D (3-D) visually perceptible space (e.g., based on the t-SNE concept) are too demanding computationally to be efficiently employed for interactive data analytics of large and high-dimensional datasets. Herein we present a simple method, ivhd (interactive visualization of high-dimensional data tool), which radically outperforms the modern data-embedding algorithms in both computational and memory loads, while retaining high quality of N-D data embedding in 2-D (3-D). We show that the DE problem is equivalent to nearest-neighbor (nn) graph visualization, where only the indices of a few nearest neighbors of each data sample have to be known, and a binary distance between data samples is defined (0 to the nearest neighbors and 1 to all other samples). These improvements reduce the time complexity and memory load from O(M log M) to O(M), and ensure a minimal O(M) proportionality coefficient as well. We demonstrate the high efficiency, quality and robustness of ivhd on popular benchmark datasets such as MNIST, 20NG, NORB and RCV1. |
Tasks | |
Published | 2019-02-04 |
URL | http://arxiv.org/abs/1902.01108v1 |
PDF | http://arxiv.org/pdf/1902.01108v1.pdf |
PWC | https://paperswithcode.com/paper/2-d-embedding-of-large-and-high-dimensional |
Repo | |
Framework | |
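The nn-graph formulation above lends itself to a very small implementation. Below is a minimal sketch, assuming a precomputed k-nearest-neighbor index (e.g., from any approximate-nn library): each point is pulled toward its few nearest neighbors (binary distance 0) and pushed toward unit distance from a handful of randomly drawn points (binary distance 1), giving O(M) work per iteration. Function and parameter names are illustrative, not taken from the ivhd tool.

```python
import numpy as np

def binary_nn_embedding(nn_idx, n_iter=200, n_random=1, lr=0.05, seed=0):
    """2-D layout from a kNN graph using binary distances:
    0 to each point's nearest neighbors, 1 to random other points."""
    rng = np.random.default_rng(seed)
    m, k = nn_idx.shape
    y = rng.normal(scale=1e-2, size=(m, 2))          # random initial positions
    for _ in range(n_iter):
        # attraction: pull each point toward its k nearest neighbors (target distance 0)
        for j in range(k):
            y += lr * (y[nn_idx[:, j]] - y)
        # repulsion: push each point toward unit distance from a few random points
        for _r in range(n_random):
            rand = rng.integers(0, m, size=m)
            delta = y - y[rand]
            dist = np.linalg.norm(delta, axis=1, keepdims=True) + 1e-9
            y += lr * (1.0 - dist) * delta / dist    # move toward unit separation
        y -= y.mean(axis=0)                           # keep the layout centered
    return y

# usage: nn_idx[i] holds the indices of the k nearest neighbors of sample i
# (e.g., from sklearn.neighbors.NearestNeighbors); y = binary_nn_embedding(nn_idx)
```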
Hill Climbing on Value Estimates for Search-control in Dyna
Title | Hill Climbing on Value Estimates for Search-control in Dyna |
Authors | Yangchen Pan, Hengshuai Yao, Amir-massoud Farahmand, Martha White |
Abstract | Dyna is an architecture for model-based reinforcement learning (RL), where simulated experience from a model is used to update policies or value functions. A key component of Dyna is search-control, the mechanism to generate the state and action from which the agent queries the model, which remains largely unexplored. In this work, we propose to generate such states by using the trajectory obtained from Hill Climbing (HC) on the current estimate of the value function. This has the effect of propagating value from high-value regions and of preemptively updating value estimates of the regions that the agent is likely to visit next. We derive a noisy projected natural gradient algorithm for hill climbing, and highlight a connection to Langevin dynamics. We provide an empirical demonstration on four classical domains that our algorithm, HC-Dyna, can obtain significant sample efficiency improvements. We study the properties of different sampling distributions for search-control, and find that there appears to be a benefit specifically from using the samples generated by climbing on current value estimates from low-value to high-value regions. |
Tasks | |
Published | 2019-06-18 |
URL | https://arxiv.org/abs/1906.07791v3 |
PDF | https://arxiv.org/pdf/1906.07791v3.pdf |
PWC | https://paperswithcode.com/paper/hill-climbing-on-value-estimates-for-search |
Repo | |
Framework | |
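A hedged sketch of the search-control idea described above: starting from a recently visited state, take a few noisy gradient-ascent (hill-climbing) steps on a learned value estimate and store the visited states in a search-control queue for Dyna-style planning updates. The gradient oracle, step size, and noise scale are illustrative assumptions, not the paper's exact noisy projected natural-gradient procedure.

```python
import numpy as np

def hill_climb_states(grad_value_fn, s0, n_steps=20, step=0.1, noise=0.01,
                      bounds=None, rng=None):
    """Generate search-control states by noisy hill climbing on a value estimate.

    grad_value_fn(s) returns an estimate of dV/ds from a differentiable
    value-function approximator (an assumption of this sketch).
    """
    rng = rng or np.random.default_rng(0)
    s = np.asarray(s0, dtype=float).copy()
    queue = []
    for _ in range(n_steps):
        g = grad_value_fn(s)
        s = s + step * g + noise * rng.normal(size=s.shape)  # Langevin-like noisy ascent step
        if bounds is not None:                               # project back into the state space
            s = np.clip(s, bounds[0], bounds[1])
        queue.append(s.copy())
    return queue  # states from which Dyna queries the model for extra planning updates
```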
Predictive Coding Networks Meet Action Recognition
Title | Predictive Coding Networks Meet Action Recognition |
Authors | Xia Huang, Hossein Mousavi, Gemma Roig |
Abstract | Action recognition is a key problem in computer vision that labels videos with a set of predefined actions. Capturing both semantic content and motion along the video frames is key to achieving high accuracy on this task. Most of the state-of-the-art methods rely on RGB frames for extracting the semantics and on pre-computed optical flow fields as a motion cue. Then, both are combined using deep neural networks. Yet, it has been argued that such models are not able to leverage the motion information extracted from the optical flow; instead, the optical flow allows for better recognition of people and objects in the video. This underscores the need to explore different cues or models that can extract motion in a more informative fashion. To tackle this issue, we propose to explore the predictive coding network, the so-called PredNet, a recurrent neural network that propagates predictive coding errors across layers and time steps. We analyze whether PredNet can better capture motions in videos by estimating over time the representations extracted from pre-trained networks for action recognition. In this way, the model relies only on the video frames, and does not need pre-processed optical flow as input. We report the effectiveness of our proposed model on the UCF101 and HMDB51 datasets. |
Tasks | Optical Flow Estimation |
Published | 2019-10-22 |
URL | https://arxiv.org/abs/1910.10056v1 |
PDF | https://arxiv.org/pdf/1910.10056v1.pdf |
PWC | https://paperswithcode.com/paper/predictive-coding-networks-meet-action |
Repo | |
Framework | |
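As an illustration of the pipeline the abstract describes, the sketch below assumes per-frame representations from a pre-trained backbone, predicts each next frame's representation with a recurrent model, and uses the pooled prediction errors as a motion cue next to the pooled appearance features for classification. It is a simplified stand-in for PredNet's layered predictive coding, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ErrorBasedActionClassifier(nn.Module):
    """Predicts the next frame's feature vector and classifies actions from
    pooled appearance features plus pooled prediction errors (motion cue)."""
    def __init__(self, feat_dim=2048, hidden=512, n_classes=101):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.predict = nn.Linear(hidden, feat_dim)        # next-frame feature prediction
        self.classify = nn.Linear(feat_dim * 2, n_classes)

    def forward(self, feats):                             # feats: (batch, time, feat_dim)
        h, _ = self.rnn(feats[:, :-1])                    # predict frame t+1 from frames <= t
        pred = self.predict(h)
        err = (feats[:, 1:] - pred).abs()                 # predictive-coding-style error signal
        appearance = feats.mean(dim=1)                    # pooled appearance cue
        motion = err.mean(dim=1)                          # pooled motion cue from errors
        return self.classify(torch.cat([appearance, motion], dim=-1))

# usage: feats come from a frozen, pre-trained CNN applied frame by frame;
# logits = ErrorBasedActionClassifier()(feats)
```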
A Joint Model for Aspect-Category Sentiment Analysis with Contextualized Aspect Embedding
Title | A Joint Model for Aspect-Category Sentiment Analysis with Contextualized Aspect Embedding |
Authors | Yuncong Li, Cunxiang Yin, Ting Wei, Huiqiang Zhong, Jinchang Luo, Siqi Xu, Xiaohui Wu |
Abstract | Aspect-category sentiment analysis (ACSA) aims to identify all the aspect categories mentioned in the text and their corresponding sentiment polarities. Some joint models have been proposed to address this task. However, these joint models do not solve the following two problems well: mismatching between the aspect categories and the sentiment words, and data deficiency of some aspect categories. To solve them, we propose a novel joint model which contains a contextualized aspect embedding layer and a shared sentiment prediction layer. The contextualized aspect embedding layer extracts the aspect category related information, which is used to generate aspect-specific representations for sentiment classification like traditional context-independent aspect embedding (CIAE) and is therefore called contextualized aspect embedding (CAE). The CAE can mitigate the mismatching problem because it is semantically more related to sentiment words than CIAE. The shared sentiment prediction layer transfers sentiment knowledge between aspect categories and alleviates the problem caused by data deficiency. Experiments conducted on SemEval 2016 Datasets show that our proposed model achieves state-of-the-art performance. |
Tasks | Sentiment Analysis |
Published | 2019-08-29 |
URL | https://arxiv.org/abs/1908.11017v2 |
PDF | https://arxiv.org/pdf/1908.11017v2.pdf |
PWC | https://paperswithcode.com/paper/a-joint-model-for-aspect-category-sentiment |
Repo | |
Framework | |
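A minimal sketch, under assumptions, of the two components named in the abstract above: an aspect-conditioned attention over contextual token representations (playing the role of the contextualized aspect embedding) and a single sentiment classifier shared across aspect categories. The encoder producing the token representations, the dimensions, and the module names are hypothetical.

```python
import torch
import torch.nn as nn

class CAEJointModel(nn.Module):
    """Contextualized aspect embedding + shared sentiment prediction (sketch)."""
    def __init__(self, hidden=768, n_aspects=12, n_polarities=3):
        super().__init__()
        self.aspect_query = nn.Embedding(n_aspects, hidden)    # one query per aspect category
        self.aspect_detect = nn.Linear(hidden, 1)               # is the aspect mentioned?
        self.sentiment = nn.Linear(hidden, n_polarities)        # shared across all aspects

    def forward(self, token_reprs):                # (batch, seq_len, hidden) from an encoder
        q = self.aspect_query.weight               # (n_aspects, hidden)
        scores = torch.einsum('bsh,ah->bas', token_reprs, q)    # attention logits per aspect
        attn = scores.softmax(dim=-1)
        cae = torch.einsum('bas,bsh->bah', attn, token_reprs)   # contextualized aspect embeddings
        aspect_logits = self.aspect_detect(cae).squeeze(-1)     # (batch, n_aspects)
        sentiment_logits = self.sentiment(cae)                  # (batch, n_aspects, n_polarities)
        return aspect_logits, sentiment_logits
```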
MSR: Multi-Scale Shape Regression for Scene Text Detection
Title | MSR: Multi-Scale Shape Regression for Scene Text Detection |
Authors | Chuhui Xue, Shijian Lu, Wei Zhang |
Abstract | State-of-the-art scene text detection techniques predict quadrilateral boxes that are prone to localization errors while dealing with straight or curved text lines of different orientations and lengths in scenes. This paper presents a novel multi-scale shape regression network (MSR) that is capable of locating text lines of different lengths, shapes and curvatures in scenes. The proposed MSR detects scene texts by predicting dense text boundary points that inherently capture the location and shape of text lines accurately and are also more tolerant to variation in text line length compared with state-of-the-art methods based on proposals or segmentation. Additionally, the multi-scale network extracts and fuses features at different scales, which gives superb tolerance to text scale variation. Extensive experiments over several public datasets show that the proposed MSR obtains superior detection performance for both curved and straight text lines of different lengths and orientations. |
Tasks | Scene Text Detection |
Published | 2019-01-09 |
URL | https://arxiv.org/abs/1901.02596v2 |
PDF | https://arxiv.org/pdf/1901.02596v2.pdf |
PWC | https://paperswithcode.com/paper/msr-multi-scale-shape-regression-for-scene |
Repo | |
Framework | |
Drivers Drowsiness Detection using Condition-Adaptive Representation Learning Framework
Title | Drivers Drowsiness Detection using Condition-Adaptive Representation Learning Framework |
Authors | Jongmin Yu, Sangwoo Park, Sangwook Lee, Moongu Jeon |
Abstract | We propose a condition-adaptive representation learning framework for driver drowsiness detection based on a 3D deep convolutional neural network. The proposed framework consists of four models: spatio-temporal representation learning, scene condition understanding, feature fusion, and drowsiness detection. The spatio-temporal representation learning extracts features that can describe motions and appearances in video simultaneously. The scene condition understanding classifies scene conditions related to the drivers and driving situations, such as whether glasses are worn, the illumination conditions while driving, and the motion of facial elements such as the head, eyes, and mouth. The feature fusion generates a condition-adaptive representation from the two features extracted by the above models. The detection model recognizes the driver's drowsiness status using the condition-adaptive representation. The condition-adaptive representation learning framework can extract more discriminative features, focusing on each scene condition, than a general representation, so that the drowsiness detection method can provide more accurate results for various driving situations. The proposed framework is evaluated with the NTHU Drowsy Driver Detection video dataset. The experimental results show that our framework outperforms existing drowsiness detection methods based on visual analysis. |
Tasks | Representation Learning |
Published | 2019-10-22 |
URL | https://arxiv.org/abs/1910.09722v1 |
PDF | https://arxiv.org/pdf/1910.09722v1.pdf |
PWC | https://paperswithcode.com/paper/drivers-drowsiness-detection-using-condition |
Repo | |
Framework | |
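A hedged sketch of the feature-fusion step described above: features from a spatio-temporal (3D-CNN) branch are gated by predicted scene-condition probabilities (glasses, illumination, facial motion, etc.) to form a condition-adaptive representation before the drowsiness classifier. The gating scheme and all dimensions are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class ConditionAdaptiveFusion(nn.Module):
    """Fuse spatio-temporal features with scene-condition predictions (sketch)."""
    def __init__(self, feat_dim=512, n_conditions=5, n_classes=2):
        super().__init__()
        self.condition_head = nn.Linear(feat_dim, n_conditions)  # scene condition understanding
        self.gate = nn.Linear(n_conditions, feat_dim)             # conditions -> feature gate
        self.drowsiness_head = nn.Linear(feat_dim, n_classes)     # drowsy / alert

    def forward(self, st_features):                # (batch, feat_dim) from a 3D-CNN backbone
        cond_logits = self.condition_head(st_features)
        gate = torch.sigmoid(self.gate(cond_logits.softmax(dim=-1)))
        adaptive = st_features * gate              # condition-adaptive representation
        return self.drowsiness_head(adaptive), cond_logits
```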
Unifying mirror descent and dual averaging
Title | Unifying mirror descent and dual averaging |
Authors | Anatoli Juditsky, Joon Kwon, Éric Moulines |
Abstract | We introduce and analyse a new family of algorithms which generalizes and unifies both the mirror descent and the dual averaging algorithms. The unified analysis of the algorithms involves the introduction of a generalized Bregman divergence which utilizes subgradients instead of gradients. Our approach is general enough to encompass classical settings in convex optimization, online learning, and variational inequalities such as saddle-point problems. |
Tasks | |
Published | 2019-10-30 |
URL | https://arxiv.org/abs/1910.13742v1 |
PDF | https://arxiv.org/pdf/1910.13742v1.pdf |
PWC | https://paperswithcode.com/paper/unifying-mirror-descent-and-dual-averaging |
Repo | |
Framework | |
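For reference, the two updates the abstract unifies can be written with a mirror map Φ, step sizes η_t, subgradients g_t ∈ ∂f(x_t), and a Bregman-type divergence built from a subgradient p_t ∈ ∂Φ(x_t), namely D_Φ(x, y; p) = Φ(x) − Φ(y) − ⟨p, x − y⟩. These are the textbook forms the paper generalizes, not its unified family.

```latex
\begin{align}
  x_{t+1} &= \arg\min_{x \in \mathcal{X}}
      \left\{ \eta_t \langle g_t, x \rangle + D_\Phi(x, x_t; p_t) \right\}
      && \text{(mirror descent)} \\
  x_{t+1} &= \arg\min_{x \in \mathcal{X}}
      \left\{ \Big\langle \sum_{s=1}^{t} \eta_s g_s,\; x \Big\rangle + \Phi(x) \right\}
      && \text{(dual averaging)}
\end{align}
```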
Chaining Meets Chain Rule: Multilevel Entropic Regularization and Training of Neural Nets
Title | Chaining Meets Chain Rule: Multilevel Entropic Regularization and Training of Neural Nets |
Authors | Amir R. Asadi, Emmanuel Abbe |
Abstract | We derive generalization and excess risk bounds for neural nets using a family of complexity measures based on a multilevel relative entropy. The bounds are obtained by introducing the notion of generated hierarchical coverings of neural nets and by using the technique of chaining mutual information introduced in Asadi et al. NeurIPS’18. The resulting bounds are algorithm-dependent and exploit the multilevel structure of neural nets. This, in turn, leads to an empirical risk minimization problem with a multilevel entropic regularization. The minimization problem is resolved by introducing a multi-scale generalization of the celebrated Gibbs posterior distribution, proving that the derived distribution achieves the unique minimum. This leads to a new training procedure for neural nets with performance guarantees, which exploits the chain rule of relative entropy rather than the chain rule of derivatives (as in backpropagation). To obtain an efficient implementation of the latter, we further develop a multilevel Metropolis algorithm simulating the multi-scale Gibbs distribution, with an experiment for a two-layer neural net on the MNIST data set. |
Tasks | |
Published | 2019-06-26 |
URL | https://arxiv.org/abs/1906.11148v1 |
PDF | https://arxiv.org/pdf/1906.11148v1.pdf |
PWC | https://paperswithcode.com/paper/chaining-meets-chain-rule-multilevel-entropic |
Repo | |
Framework | |
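As background for the multilevel construction mentioned above, here is the single-level fact it generalizes: over distributions ρ on the parameters, the entropically regularized empirical risk is minimized by the Gibbs posterior. The paper replaces the single KL term with a multilevel relative entropy; only the classical statement is reproduced here.

```latex
% Classical Gibbs posterior: for a prior \pi, empirical risk \widehat{L}(w), and \beta > 0,
\begin{equation}
  \min_{\rho}\; \mathbb{E}_{w \sim \rho}\big[\widehat{L}(w)\big]
    + \tfrac{1}{\beta}\,\mathrm{KL}(\rho \,\|\, \pi)
  \quad\text{is attained by}\quad
  \rho^{\star}(w) \;\propto\; \pi(w)\, e^{-\beta \widehat{L}(w)} .
\end{equation}
```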
Mixture factorized auto-encoder for unsupervised hierarchical deep factorization of speech signal
Title | Mixture factorized auto-encoder for unsupervised hierarchical deep factorization of speech signal |
Authors | Zhiyuan Peng, Siyuan Feng, Tan Lee |
Abstract | Speech signals are constituted by various informative factors, such as linguistic content and speaker characteristics. There have been notable recent studies attempting to factorize speech signals into these individual factors without requiring any annotation. These studies typically assume a continuous representation for linguistic content, which is not in accordance with general linguistic knowledge and may make the extraction of speaker information less successful. This paper proposes the mixture factorized auto-encoder (mFAE) for unsupervised deep factorization. The encoder part of mFAE comprises a frame tokenizer and an utterance embedder. The frame tokenizer models the linguistic content of input speech with a discrete categorical distribution. It performs frame clustering by assigning each frame a soft mixture label. The utterance embedder generates an utterance-level vector representation. A frame decoder serves to reconstruct speech features from the encoders' outputs. The mFAE is evaluated on a speaker verification (SV) task and an unsupervised subword modeling (USM) task. The SV experiments on VoxCeleb 1 show that the utterance embedder is capable of extracting speaker-discriminative embeddings with performance comparable to an x-vector baseline. The USM experiments on the ZeroSpeech 2017 dataset verify that the frame tokenizer is able to capture linguistic content and the utterance embedder can acquire speaker-related information. |
Tasks | Speaker Verification |
Published | 2019-10-30 |
URL | https://arxiv.org/abs/1911.01806v1 |
PDF | https://arxiv.org/pdf/1911.01806v1.pdf |
PWC | https://paperswithcode.com/paper/mixture-factorized-auto-encoder-for |
Repo | |
Framework | |
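A minimal sketch, under assumptions, of the three components named above: a frame tokenizer producing a soft categorical (mixture) label per frame, an utterance embedder producing one vector per utterance, and a frame decoder reconstructing frame-level features from both. Layer types, sizes, and the reconstruction loss are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MixtureFactorizedAE(nn.Module):
    """Sketch of an mFAE-style model: discrete-ish content per frame + one speaker vector."""
    def __init__(self, feat_dim=40, n_tokens=128, spk_dim=128):
        super().__init__()
        self.tokenizer = nn.Linear(feat_dim, n_tokens)     # frame tokenizer (soft mixture label)
        self.utt_embed = nn.Linear(feat_dim, spk_dim)       # utterance embedder
        self.decoder = nn.Linear(n_tokens + spk_dim, feat_dim)

    def forward(self, frames):                   # frames: (batch, time, feat_dim)
        tokens = self.tokenizer(frames).softmax(dim=-1)     # (batch, time, n_tokens)
        utt = self.utt_embed(frames.mean(dim=1))            # (batch, spk_dim)
        utt_tiled = utt.unsqueeze(1).expand(-1, frames.size(1), -1)
        recon = self.decoder(torch.cat([tokens, utt_tiled], dim=-1))
        return recon, tokens, utt

# training would minimize a reconstruction loss, e.g.
# loss = torch.nn.functional.mse_loss(recon, frames)
```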
Fourier Transform Approach to Machine Learning II: Fourier Clustering
Title | Fourier Transform Approach to Machine Learning II: Fourier Clustering |
Authors | Soheil Mehrabkhani |
Abstract | We propose a Fourier-based approach for the optimization of several clustering algorithms. Mathematically, clustered data can be described by a density function represented as a Dirac mixture distribution. The density function can be smoothed by applying the Fourier transform and a Gaussian filter. The optimal standard deviation of the Gaussian filter is determined using a convergence criterion related to the correlation between the smoothed and the original density functions. In principle, the optimal smoothed density function exhibits local maxima, which correspond to the cluster centroids. Thus, the complex task of finding the centroids of the clusters is simplified to detecting the peaks of the smoothed density function. A multiple sliding windows procedure is used to detect the peaks. The remarkable accuracy of the proposed algorithm demonstrates its capability as a reliable general method for enhancing clustering performance, optimizing it globally, and removing the initialization problem present in many clustering methods. |
Tasks | |
Published | 2019-04-29 |
URL | https://arxiv.org/abs/1904.13241v3 |
PDF | https://arxiv.org/pdf/1904.13241v3.pdf |
PWC | https://paperswithcode.com/paper/clustering-optimization-finding-the-number |
Repo | |
Framework | |
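A small sketch, under assumptions, of the smoothing-and-peak-finding idea described above, written for one dimension: represent the data as a histogram (a discretized Dirac mixture), smooth it with a Gaussian filter applied in the Fourier domain, and read cluster centroids off the local maxima. The grid, the fixed filter width, and the peak detector are illustrative; the paper instead selects the filter width with a correlation-based convergence criterion and uses a multiple sliding-windows peak detector.

```python
import numpy as np
from scipy.signal import find_peaks

def fourier_cluster_centroids_1d(x, sigma=0.5, n_grid=1024):
    """Estimate 1-D cluster centroids by Gaussian-smoothing the empirical density via FFT."""
    lo, hi = x.min() - 3 * sigma, x.max() + 3 * sigma
    hist, edges = np.histogram(x, bins=n_grid, range=(lo, hi))  # Dirac mixture on a grid
    centers = 0.5 * (edges[:-1] + edges[1:])
    dx = centers[1] - centers[0]
    freqs = np.fft.fftfreq(n_grid, d=dx)
    gauss_ft = np.exp(-2 * (np.pi * freqs * sigma) ** 2)         # Fourier transform of a Gaussian
    smooth = np.fft.ifft(np.fft.fft(hist) * gauss_ft).real       # smoothed density
    peaks, _ = find_peaks(smooth)                                 # local maxima = centroids
    return centers[peaks]

# usage: x = np.concatenate([np.random.normal(-3, 0.4, 500), np.random.normal(2, 0.4, 500)])
# fourier_cluster_centroids_1d(x) should return values near -3 and 2
```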
Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models
Title | Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models |
Authors | Mor Shpigel Nacson, Suriya Gunasekar, Jason D. Lee, Nathan Srebro, Daniel Soudry |
Abstract | With an eye toward understanding complexity control in deep learning, we study how infinitesimal regularization or gradient descent optimization leads to margin-maximizing solutions in both homogeneous and non-homogeneous models, extending previous work that focused on infinitesimal regularization only in homogeneous models. To this end we study the limit of loss minimization with a diverging norm constraint (the “constrained path”), relate it to the limit of a “margin path” and characterize the resulting solution. For non-homogeneous ensemble models, whose output is a sum of homogeneous sub-models, we show that this solution discards the shallowest sub-models if they are unnecessary. For homogeneous models, we show convergence to a “lexicographic max-margin solution”, and provide conditions under which max-margin solutions are also attained as the limit of unconstrained gradient descent. |
Tasks | |
Published | 2019-05-17 |
URL | https://arxiv.org/abs/1905.07325v1 |
PDF | https://arxiv.org/pdf/1905.07325v1.pdf |
PWC | https://paperswithcode.com/paper/lexicographic-and-depth-sensitive-margins-in |
Repo | |
Framework | |
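For concreteness, the two objects the abstract relates, written for a model f(w; ·) on data {(x_n, y_n)} with a loss ℓ: the constrained path of norm-bounded loss minimizers and the max-margin problem its limit is compared to. This is the standard formulation the abstract alludes to, not the paper's lexicographic refinement.

```latex
\begin{align}
  w(B) &\in \arg\min_{\|w\| \le B} \; \sum_{n} \ell\big(y_n f(w; x_n)\big)
       && \text{(constrained path, } B \to \infty\text{)} \\
  w^{\mathrm{mm}} &\in \arg\max_{\|w\| \le 1} \; \min_{n} \; y_n f(w; x_n)
       && \text{(max-margin problem)}
\end{align}
```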
A case study of Consistent Vehicle Routing Problem with Time Windows
Title | A case study of Consistent Vehicle Routing Problem with Time Windows |
Authors | Hernán Lespay, Karol Suchan |
Abstract | We develop a heuristic solution method for the Consistent Vehicle Routing Problem with Time Windows (ConVRPTW), motivated by a real-world application at a distribution center of a food company. In addition to the standard VRPTW restrictions, ConVRPTW assigns to each customer just one fixed driver to fulfill their orders over the complete multi-period planning horizon. For each driver and each day of the planning horizon, a route has to be determined to serve all of their assigned customers with positive demand. The customers do not place orders every day, and the frequency with which they do so is irregular. Moreover, the quantities ordered change from one order to another. This causes difficulties in the daily routing, negatively impacting the service level of the company. Unlike previous works on ConVRP, where the number of drivers is fixed a priori and only the total travel time is minimized, we give priority to minimizing the number of drivers. To evaluate the performance of the heuristic, we compare its solution with the routing plan in use at the food company. The results show significant improvements, with a lower number of trucks and a higher rate of orders delivered within the prescribed time window. |
Tasks | |
Published | 2019-12-06 |
URL | https://arxiv.org/abs/1912.05929v1 |
PDF | https://arxiv.org/pdf/1912.05929v1.pdf |
PWC | https://paperswithcode.com/paper/a-case-study-of-consistent-vehicle-routing |
Repo | |
Framework | |
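A small sketch, under assumptions, of the basic constraint each daily route must satisfy once customers have been fixed to a driver: visit the day's ordering customers in sequence while respecting travel times and time windows, waiting when arriving early. Data structures and names are illustrative; the paper's heuristic additionally decides the customer-to-driver assignment and the visit order.

```python
def route_is_feasible(route, travel_time, windows, service_time=0.0, depot=0):
    """Check time-window feasibility of one driver's route for one day.

    route: customer indices in visit order (depot excluded)
    travel_time[i][j]: travel time from i to j; windows[i] = (earliest, latest)
    """
    t = windows[depot][0]                 # leave the depot when it opens
    prev = depot
    for c in route:
        t += travel_time[prev][c]
        earliest, latest = windows[c]
        t = max(t, earliest)              # wait if we arrive before the window opens
        if t > latest:                    # arriving too late violates the time window
            return False
        t += service_time
        prev = c
    return t + travel_time[prev][depot] <= windows[depot][1]  # return before the depot closes
```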
A Novel Smoothed Loss and Penalty Function for Noncrossing Composite Quantile Estimation via Deep Neural Networks
Title | A Novel Smoothed Loss and Penalty Function for Noncrossing Composite Quantile Estimation via Deep Neural Networks |
Authors | Kostas Hatalis, Alberto J. Lamadrid, Katya Scheinberg, Shalinee Kishore |
Abstract | Uncertainty analysis in the form of probabilistic forecasting can significantly improve decision-making processes in the smart power grid when integrating renewable energy sources such as wind. Whereas point forecasting provides a single expected value, probabilistic forecasts provide more information in the form of quantiles, prediction intervals, or full predictive densities. Traditionally, quantile regression is applied for such forecasting, and recently quantile regression neural networks have become popular for weather and renewable energy forecasting. However, one major shortcoming of composite quantile estimation in neural networks is the quantile crossover problem. This paper analyzes the effectiveness of a novel smoothed loss and penalty function for neural network architectures to prevent the quantile crossover problem. Its efficacy is examined on the wind power forecasting problem. A numerical case study is conducted using publicly available wind data from the Global Energy Forecasting Competition 2014. Multiple quantiles are estimated to form 10% to 90% prediction intervals, which are evaluated using a quantile score and reliability measures. Benchmark models such as the persistence and climatology distributions, multiple quantile regression, and support vector quantile regression are used for comparison, and the results demonstrate that the proposed approach leads to improved performance while preventing the problem of overlapping quantile estimates. |
Tasks | Decision Making |
Published | 2019-09-24 |
URL | https://arxiv.org/abs/1909.12122v1 |
PDF | https://arxiv.org/pdf/1909.12122v1.pdf |
PWC | https://paperswithcode.com/paper/a-novel-smoothed-loss-and-penalty-function |
Repo | |
Framework | |
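A hedged sketch of the two ingredients the abstract combines: a smoothed version of the pinball (quantile) loss and a penalty on quantile crossover, summed over a composite set of quantile levels. The particular softplus-based smoothing and hinge-style penalty below are illustrative assumptions, not necessarily the paper's exact functions.

```python
import numpy as np

def smooth_pinball(u, tau, alpha=0.01):
    """Smooth approximation of the pinball loss rho_tau(u) = max(tau*u, (tau-1)*u)."""
    return tau * u + alpha * np.logaddexp(0.0, -u / alpha)

def composite_quantile_objective(y, q_pred, taus, alpha=0.01, lam=1.0):
    """Composite (multi-quantile) loss with a non-crossing penalty.

    y: (n,) targets; q_pred: (n, m) predicted quantiles for the m levels in taus (ascending).
    """
    u = y[:, None] - q_pred                                     # residuals per quantile level
    loss = smooth_pinball(u, np.asarray(taus)[None, :], alpha).mean()
    crossing = np.maximum(0.0, q_pred[:, :-1] - q_pred[:, 1:])  # lower quantile must not exceed the next one
    return loss + lam * crossing.mean()
```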
Probabilistic Inference for Camera Calibration in Light Microscopy under Circular Motion
Title | Probabilistic Inference for Camera Calibration in Light Microscopy under Circular Motion |
Authors | Yuanhao Guo, Fons J. Verbeek, Ge Yang |
Abstract | Robust and accurate camera calibration is essential for 3D reconstruction in light microscopy under circular motion. Conventional methods require either accurate key point matching or precise segmentation of the axial-view images. Both remain challenging because specimens often exhibit transparency/translucency in a light microscope. To address these issues, we propose a probabilistic inference based method for camera calibration that does not require sophisticated image pre-processing. Based on 3D projective geometry, our method assigns a probability to each of a range of voxels that cover the whole object. The probability indicates the likelihood of a voxel belonging to the object to be reconstructed. Our method maximizes a joint probability that distinguishes the object from the background. Experimental results show that the proposed method can accurately recover camera configurations in both light microscopy and natural scene imaging. Furthermore, the method can be used to produce high-fidelity 3D reconstructions and accurate 3D measurements. |
Tasks | 3D Reconstruction, Calibration |
Published | 2019-10-30 |
URL | https://arxiv.org/abs/1910.13740v1 |
PDF | https://arxiv.org/pdf/1910.13740v1.pdf |
PWC | https://paperswithcode.com/paper/probabilistic-inference-for-camera |
Repo | |
Framework | |
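A minimal sketch, under assumptions, of the scoring step suggested by the abstract above: given candidate camera parameters for one axial view, project voxel centers into the image and accumulate a per-voxel foreground probability from the observed image; calibration then searches for the circular-motion camera parameters that maximize the joint score across views. The projection model and probability map are simplified placeholders, not the paper's exact formulation.

```python
import numpy as np

def voxel_log_likelihood(voxels, K, R, t, fg_prob):
    """Sum of log foreground probabilities of projected voxel centers for one view.

    voxels: (n, 3) voxel centers in world coordinates; K: (3, 3) camera intrinsics;
    R, t: rotation and translation for this view; fg_prob: (H, W) per-pixel probability
    that a pixel belongs to the object (e.g., a soft segmentation or intensity-based map).
    """
    cam = voxels @ R.T + t                      # world -> camera coordinates
    z = cam[:, 2]
    p = np.full(len(voxels), 1e-6)              # tiny probability for unusable voxels
    valid = z > 1e-9                            # keep only voxels in front of the camera
    pix = cam[valid] @ K.T
    uv = pix[:, :2] / pix[:, 2:3]               # perspective division to pixel coordinates
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = fg_prob.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(valid)[inside]
    p[idx] = np.clip(fg_prob[v[inside], u[inside]], 1e-6, 1.0)
    return np.log(p).sum()

# calibration would search the circular-motion parameters (per-view rotation angle,
# rotation axis, camera distance) that maximize the summed score over all views
```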