Paper Group ANR 1334
Unsupervised Deep Structured Semantic Models for Commonsense Reasoning. 3D Morphable Face Models – Past, Present and Future. DenseAttentionSeg: Segment Hands from Interacted Objects Using Depth Input. Model-Free Learning of Optimal Ergodic Policies in Wireless Systems. Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors. …
Unsupervised Deep Structured Semantic Models for Commonsense Reasoning
Title | Unsupervised Deep Structured Semantic Models for Commonsense Reasoning |
Authors | Shuohang Wang, Sheng Zhang, Yelong Shen, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Jing Jiang |
Abstract | Commonsense reasoning is fundamental to natural language understanding. While traditional methods rely heavily on human-crafted features and knowledge bases, we explore learning commonsense knowledge from a large amount of raw text via unsupervised learning. We propose two neural network models based on the Deep Structured Semantic Models (DSSM) framework to tackle two classic commonsense reasoning tasks, the Winograd Schema Challenge (WSC) and Pronoun Disambiguation Problems (PDP). Evaluation shows that the proposed models effectively capture contextual information in the sentence and co-reference information between pronouns and nouns, and achieve significant improvement over previous state-of-the-art approaches. |
Tasks | |
Published | 2019-04-03 |
URL | http://arxiv.org/abs/1904.01938v1 |
http://arxiv.org/pdf/1904.01938v1.pdf | |
PWC | https://paperswithcode.com/paper/unsupervised-deep-structured-semantic-models |
Repo | |
Framework | |
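The underlying recipe, scoring each candidate antecedent of a pronoun by its semantic compatibility with the surrounding context and picking the argmax, can be illustrated with a deliberately tiny sketch (the DSSM models are neural encoders trained on raw text; the bag-of-words vectors, toy vocabulary, and cosine scoring below are stand-ins, not the paper's method):

```python
import math

def embed(words, vocab):
    """Toy bag-of-words vector: term counts over a fixed vocabulary."""
    return [words.count(v) for v in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_antecedents(context_words, candidates, vocab):
    """Score each candidate antecedent against the pronoun's context
    and return the best-scoring candidate together with all scores."""
    ctx = embed(context_words, vocab)
    scores = {c: cosine(embed(c.split(), vocab), ctx) for c in candidates}
    return max(scores, key=scores.get), scores
```

A neural model replaces `embed` with a learned sentence encoder, but the ranking scaffold stays the same.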
3D Morphable Face Models – Past, Present and Future
Title | 3D Morphable Face Models – Past, Present and Future |
Authors | Bernhard Egger, William A. P. Smith, Ayush Tewari, Stefanie Wuhrer, Michael Zollhoefer, Thabo Beeler, Florian Bernard, Timo Bolkart, Adam Kortylewski, Sami Romdhani, Christian Theobalt, Volker Blanz, Thomas Vetter |
Abstract | In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation, and image analysis, are still active research topics, and we review the state-of-the-art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research and highlighting the broad range of current and future applications. |
Tasks | |
Published | 2019-09-03 |
URL | https://arxiv.org/abs/1909.01815v1 |
https://arxiv.org/pdf/1909.01815v1.pdf | |
PWC | https://paperswithcode.com/paper/3d-morphable-face-models-past-present-and |
Repo | |
Framework | |
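The construction the survey revisits throughout, a face expressed as a mean shape plus a weighted combination of learned basis directions, can be sketched with synthetic data (the tiny random basis below stands in for a PCA basis learned from registered 3D scans; the least-squares `fit` is a minimal stand-in for the image-analysis direction):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_components = 5, 3                    # deliberately tiny model

mean_shape = rng.normal(size=3 * n_vertices)       # flattened (x, y, z) per vertex
basis = rng.normal(size=(3 * n_vertices, n_components))

def synthesize(coeffs):
    """Linear 3DMM: mean shape plus weighted basis directions."""
    return mean_shape + basis @ np.asarray(coeffs)

def fit(target):
    """Analysis direction: least-squares projection of a shape onto the model."""
    coeffs, *_ = np.linalg.lstsq(basis, target - mean_shape, rcond=None)
    return coeffs

recovered = fit(synthesize([0.5, -1.0, 2.0]))      # round-trip a known face
```

Because the synthetic target lies exactly in the model's span, the round-trip recovers the coefficients; real fitting additionally contends with pose, illumination, and the image-formation model the survey discusses.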
DenseAttentionSeg: Segment Hands from Interacted Objects Using Depth Input
Title | DenseAttentionSeg: Segment Hands from Interacted Objects Using Depth Input |
Authors | Zihao Bo, Hao Zhang, Junhai Yong, Feng Xu |
Abstract | We propose a real-time DNN-based technique to segment hands and objects in interacting motions from depth inputs. Our model, called DenseAttentionSeg, contains a dense attention mechanism to fuse information at different scales and improves result quality with skip-connections. Besides, we introduce a contour loss in model training, which helps to generate accurate hand and object boundaries. Finally, we propose and release our InterSegHands dataset, a fine-scale hand segmentation dataset containing about 52k depth maps of hand-object interactions. Our experiments evaluate the effectiveness of our techniques and dataset, and indicate that our method outperforms the current state-of-the-art deep segmentation methods on interaction segmentation. |
Tasks | Hand Segmentation |
Published | 2019-03-29 |
URL | http://arxiv.org/abs/1903.12368v2 |
http://arxiv.org/pdf/1903.12368v2.pdf | |
PWC | https://paperswithcode.com/paper/denseattentionseg-segment-hands-from |
Repo | |
Framework | |
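The contour-loss idea, paying extra for mistakes along the ground-truth hand/object boundary, can be sketched as a boundary-weighted cross-entropy (the 4-neighbourhood boundary extraction and the weighting constant are simplifying assumptions, not the paper's exact formulation):

```python
import numpy as np

def boundary(mask):
    """Foreground pixels of a binary mask that touch the background
    in the 4-neighbourhood sense."""
    padded = np.pad(mask, 1, constant_values=0)
    neigh_min = np.minimum.reduce([
        padded[:-2, 1:-1], padded[2:, 1:-1],    # up / down neighbours
        padded[1:-1, :-2], padded[1:-1, 2:],    # left / right neighbours
    ])
    return (mask == 1) & (neigh_min == 0)

def contour_weighted_bce(pred, target, w_contour=5.0, eps=1e-7):
    """Binary cross-entropy with extra weight on ground-truth contour pixels."""
    weights = np.where(boundary(target), w_contour, 1.0)
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return float((weights * bce).mean())
```

In training, such a term would be added to the ordinary segmentation loss so that boundary errors dominate the gradient near contours.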
Model-Free Learning of Optimal Ergodic Policies in Wireless Systems
Title | Model-Free Learning of Optimal Ergodic Policies in Wireless Systems |
Authors | Dionysios S. Kalogerias, Mark Eisen, George J. Pappas, Alejandro Ribeiro |
Abstract | Learning optimal resource allocation policies in wireless systems can be effectively achieved by formulating finite dimensional constrained programs which depend on system configuration, as well as the adopted learning parameterization. The interest here is in cases where system models are unavailable, prompting methods that probe the wireless system with candidate policies, and then use observed performance to determine better policies. This generic procedure is difficult because of the need to cull accurate gradient estimates out of these limited system queries. This paper constructs and exploits smoothed surrogates of constrained ergodic resource allocation problems, the gradients of the former being representable exactly as averages of finite differences that can be obtained through limited system probing. Leveraging this unique property, we develop a new model-free primal-dual algorithm for learning optimal ergodic resource allocations, while we rigorously analyze the relationships between original policy search problems and their surrogates, in both primal and dual domains. First, we show that both primal and dual domain surrogates are uniformly consistent approximations of their corresponding original finite dimensional counterparts. Upon further assuming the use of near-universal policy parameterizations, we also develop explicit bounds on the gap between optimal values of initial, infinite dimensional resource allocation problems, and dual values of their parameterized smoothed surrogates. In fact, we show that this duality gap decreases at a linear rate relative to smoothing and universality parameters. Thus, it can be made arbitrarily small at will, also justifying our proposed primal-dual algorithmic recipe. Numerical simulations confirm the effectiveness of our approach. |
Tasks | |
Published | 2019-11-10 |
URL | https://arxiv.org/abs/1911.03988v1 |
https://arxiv.org/pdf/1911.03988v1.pdf | |
PWC | https://paperswithcode.com/paper/model-free-learning-of-optimal-ergodic |
Repo | |
Framework | |
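The key mechanism, recovering gradients of a smoothed surrogate purely from paired probes of the system, can be sketched with a generic two-point finite-difference estimator (the unconstrained toy objective and the step sizes below are illustrative; the paper handles constrained ergodic problems with a primal-dual scheme):

```python
import math
import random

def two_point_grad(probe, theta, mu=1e-3):
    """Finite-difference gradient estimate along a random unit direction.

    `probe(theta)` returns observed system performance; no model is needed."""
    u = [random.gauss(0.0, 1.0) for _ in theta]
    norm = math.sqrt(sum(x * x for x in u))
    u = [x / norm for x in u]
    plus = probe([t + mu * x for t, x in zip(theta, u)])
    minus = probe([t - mu * x for t, x in zip(theta, u)])
    scale = len(theta) * (plus - minus) / (2 * mu)
    return [scale * x for x in u]

def ascend(probe, theta, steps=2000, lr=0.05):
    """Model-free stochastic gradient ascent driven only by system probes."""
    for _ in range(steps):
        g = two_point_grad(probe, theta)
        theta = [t + lr * gi for t, gi in zip(theta, g)]
    return theta
```

Running `ascend` on a concave toy objective drives `theta` toward the maximizer using only function evaluations, which is exactly the model-free regime the paper targets.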
Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors
Title | Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors |
Authors | Gilad Cohen, Guillermo Sapiro, Raja Giryes |
Abstract | Deep neural networks (DNNs) are notorious for their vulnerability to adversarial attacks, which are small perturbations added to their input images to mislead their prediction. Detection of adversarial examples is, therefore, a fundamental requirement for robust classification frameworks. In this work, we present a method for detecting such adversarial attacks, which is suitable for any pre-trained neural network classifier. We use influence functions to measure the impact of every training sample on the validation set data. From the influence scores, we find the most supportive training samples for any given validation example. A k-nearest neighbor (k-NN) model fitted on the DNN’s activation layers is employed to search for the ranking of these supporting training samples. We observe that these samples are highly correlated with the nearest neighbors of the normal inputs, while this correlation is much weaker for adversarial inputs. We train an adversarial detector using the k-NN ranks and distances and show that it successfully distinguishes adversarial examples, getting state-of-the-art results on six attack methods with three datasets. Code is available at https://github.com/giladcohen/NNIF_adv_defense. |
Tasks | |
Published | 2019-09-15 |
URL | https://arxiv.org/abs/1909.06872v2 |
https://arxiv.org/pdf/1909.06872v2.pdf | |
PWC | https://paperswithcode.com/paper/detecting-adversarial-samples-using-influence |
Repo | |
Framework | |
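One ingredient of the pipeline, measuring how far a test input sits from the training data in activation space with a k-NN model, can be sketched as follows (influence functions and the rank/distance correlation features, which are central to the full method, are omitted; the 2-D features and the threshold are toy assumptions):

```python
import math

def knn_score(train_features, query, k=3):
    """Mean Euclidean distance from `query` to its k nearest training points."""
    dists = sorted(math.dist(query, f) for f in train_features)
    return sum(dists[:k]) / k

def is_adversarial(train_features, query, threshold, k=3):
    """Flag inputs whose activations sit unusually far from the training set."""
    return knn_score(train_features, query, k) > threshold
```

In the paper, the detector is instead a classifier trained on k-NN ranks and distances of the influence-supported training samples across the DNN's layers.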
A Corpus for Automatic Readability Assessment and Text Simplification of German
Title | A Corpus for Automatic Readability Assessment and Text Simplification of German |
Authors | Alessia Battisti, Sarah Ebling |
Abstract | In this paper, we present a corpus for use in automatic readability assessment and automatic text simplification of German. The corpus is compiled from web sources and consists of approximately 211,000 sentences. As a novel contribution, it contains information on text structure, typography, and images, which can be exploited as part of machine learning approaches to readability assessment and text simplification. The focus of this publication is on representing such information as an extension to an existing corpus standard. |
Tasks | Text Simplification |
Published | 2019-09-19 |
URL | https://arxiv.org/abs/1909.09067v1 |
https://arxiv.org/pdf/1909.09067v1.pdf | |
PWC | https://paperswithcode.com/paper/a-corpus-for-automatic-readability-assessment |
Repo | |
Framework | |
Change your singer: a transfer learning generative adversarial framework for song to song conversion
Title | Change your singer: a transfer learning generative adversarial framework for song to song conversion |
Authors | Rema Daher, Mohammad Kassem Zein, Julia El Zini, Mariette Awad, Daniel Asmar |
Abstract | Have you ever wondered how a song might sound if performed by a different artist? In this work, we propose SCM-GAN, an end-to-end non-parallel song conversion system powered by generative adversarial networks and transfer learning that allows users to listen to a selected target singer singing any song. SCM-GAN first separates songs into vocals and instrumental music using a U-Net network, then converts the vocal segments to the target singer using advanced CycleGAN-VC, before merging the converted vocals with their corresponding background music. SCM-GAN is first initialized with feature representations learned from a state-of-the-art voice-to-voice conversion and then trained on a dataset of non-parallel songs. Furthermore, SCM-GAN is evaluated against a set of metrics including global variance (GV) and modulation spectra (MS) on the 24 Mel-cepstral coefficients (MCEPs). Transfer learning improves the GV by 35% and the MS by 13% on average. A subjective comparison is conducted to test user satisfaction with the quality and naturalness of the conversion. Results show above-par similarity between SCM-GAN’s output and the target (70% on average) as well as high naturalness of the converted songs. |
Tasks | Transfer Learning, Voice Conversion |
Published | 2019-11-07 |
URL | https://arxiv.org/abs/1911.02933v2 |
https://arxiv.org/pdf/1911.02933v2.pdf | |
PWC | https://paperswithcode.com/paper/change-your-singer-a-transfer-learning |
Repo | |
Framework | |
Alternative Function Approximation Parameterizations for Solving Games: An Analysis of $f$-Regression Counterfactual Regret Minimization
Title | Alternative Function Approximation Parameterizations for Solving Games: An Analysis of $f$-Regression Counterfactual Regret Minimization |
Authors | Ryan D’Orazio, Dustin Morrill, James R. Wright, Michael Bowling |
Abstract | Function approximation is a powerful approach for structuring large decision problems that has facilitated great achievements in the areas of reinforcement learning and game playing. Regression counterfactual regret minimization (RCFR) is a simple algorithm for approximately solving imperfect information games with normalized rectified linear unit (ReLU) parameterized policies. In contrast, the more conventional softmax parameterization is standard in the field of reinforcement learning and yields a regret bound with a better dependence on the number of actions. We derive approximation error-aware regret bounds for $(\Phi, f)$-regret matching, which applies to a general class of link functions and regret objectives. These bounds recover a tighter bound for RCFR and provide a theoretical justification for RCFR implementations with alternative policy parameterizations ($f$-RCFR), including softmax. We provide exploitability bounds for $f$-RCFR with the polynomial and exponential link functions in zero-sum imperfect information games and examine empirically how the link function interacts with the severity of the approximation. We find that the previously studied ReLU parameterization performs better when the approximation error is small while the softmax parameterization can perform better when the approximation error is large. |
Tasks | |
Published | 2019-12-06 |
URL | https://arxiv.org/abs/1912.02967v2 |
https://arxiv.org/pdf/1912.02967v2.pdf | |
PWC | https://paperswithcode.com/paper/alternative-function-approximation |
Repo | |
Framework | |
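The link functions under study can be made concrete with tabular regret matching: cumulative regrets are pushed through a link f and normalized into a policy (a minimal sketch; the paper combines this with function approximation and counterfactual values):

```python
import math

def policy_from_regrets(regrets, link="poly", p=1.0, eta=1.0):
    """Map cumulative regrets to a policy via a link function.

    link="poly": f(x) = max(x, 0) ** p   (p=1 is vanilla regret matching)
    link="exp":  f(x) = exp(x / eta)     (softmax over regrets)
    """
    if link == "poly":
        vals = [max(r, 0.0) ** p for r in regrets]
    else:
        m = max(regrets)                 # subtract the max for stability
        vals = [math.exp((r - m) / eta) for r in regrets]
    total = sum(vals)
    n = len(regrets)
    return [v / total for v in vals] if total > 0 else [1.0 / n] * n
```

The polynomial link zeroes out negative-regret actions, while the exponential link always keeps full support, which is the trade-off the paper's bounds and experiments examine under approximation error.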
Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
Title | Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization |
Authors | Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma, Yuling Yan |
Abstract | This paper studies noisy low-rank matrix completion: given partial and noisy entries of a large low-rank matrix, the goal is to estimate the underlying matrix faithfully and efficiently. Arguably one of the most popular paradigms to tackle this problem is convex relaxation, which achieves remarkable efficacy in practice. However, the theoretical support of this approach is still far from optimal in the noisy setting, falling short of explaining its empirical success. We make progress towards demystifying the practical efficacy of convex relaxation vis-à-vis random noise. When the rank and the condition number of the unknown matrix are bounded by a constant, we demonstrate that the convex programming approach achieves near-optimal estimation errors — in terms of the Euclidean loss, the entrywise loss, and the spectral norm loss — for a wide range of noise levels. All of this is enabled by bridging convex relaxation with the nonconvex Burer-Monteiro approach, a seemingly distinct algorithmic paradigm that is provably robust against noise. More specifically, we show that an approximate critical point of the nonconvex formulation serves as an extremely tight approximation of the convex solution, thus allowing us to transfer the desired statistical guarantees of the nonconvex approach to its convex counterpart. |
Tasks | Low-Rank Matrix Completion, Matrix Completion |
Published | 2019-02-20 |
URL | https://arxiv.org/abs/1902.07698v2 |
https://arxiv.org/pdf/1902.07698v2.pdf | |
PWC | https://paperswithcode.com/paper/noisy-matrix-completion-understanding |
Repo | |
Framework | |
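The nonconvex side of the bridge, Burer-Monteiro factorized gradient descent on the observed entries, can be sketched as follows (initialization scale, step size, and iteration count are ad-hoc choices for a toy noiseless instance, not the paper's analyzed algorithm):

```python
import numpy as np

def complete(observed, mask, rank, steps=2000, lr=0.02, seed=0):
    """Burer-Monteiro style completion: gradient descent on X @ Y.T,
    penalizing error only on the observed entries."""
    m, n = observed.shape
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=0.1, size=(m, rank))
    Y = rng.normal(scale=0.1, size=(n, rank))
    for _ in range(steps):
        resid = mask * (X @ Y.T - observed)        # residual on observed entries
        X, Y = X - lr * resid @ Y, Y - lr * resid.T @ X
    return X @ Y.T
```

The paper's point is that an approximate critical point of exactly this kind of formulation tightly approximates the convex (nuclear-norm) solution, which is how the statistical guarantees transfer.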
Teach an all-rounder with experts in different domains
Title | Teach an all-rounder with experts in different domains |
Authors | Zhao You, Dan Su, Dong Yu |
Abstract | In many automatic speech recognition (ASR) tasks, an ideal model has to be applicable over multiple domains. In this paper, we propose to teach an all-rounder with experts in different domains. Concretely, we build a multi-domain acoustic model by applying the teacher-student training framework. First, for each domain, a teacher model (domain-dependent model) is trained by fine-tuning a multi-condition model with a domain-specific subset. Then all these teacher models are used to teach one single student model simultaneously. We perform experiments on two predefined domain setups: one covers domains with different speaking styles, the other covers near-field, far-field, and far-field-with-noise conditions. Moreover, two types of models are examined: deep feedforward sequential memory network (DFSMN) and long short-term memory (LSTM). Experimental results show that the model trained with this framework outperforms not only the multi-condition model but also the domain-dependent models. Specifically, our training method provides up to 10.4% relative character error rate improvement over the baseline (multi-condition) model. |
Tasks | Speech Recognition |
Published | 2019-07-09 |
URL | https://arxiv.org/abs/1907.05698v1 |
https://arxiv.org/pdf/1907.05698v1.pdf | |
PWC | https://paperswithcode.com/paper/teach-an-all-rounder-with-experts-in |
Repo | |
Framework | |
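The teacher-student step can be sketched at the level of a single frame: every domain teacher emits a posterior over output units, the posteriors are combined, and the student is trained toward that soft target (simple averaging of the teachers is an assumption here; the combination scheme can differ in practice):

```python
import math

def soft_target(teacher_posteriors):
    """Combine per-domain teacher posteriors by simple averaging."""
    n = len(teacher_posteriors)
    k = len(teacher_posteriors[0])
    return [sum(p[i] for p in teacher_posteriors) / n for i in range(k)]

def distill_loss(student_posterior, teacher_posteriors, eps=1e-12):
    """Cross-entropy between the averaged teacher target and the student."""
    target = soft_target(teacher_posteriors)
    return -sum(t * math.log(s + eps) for t, s in zip(target, student_posterior))
```

Minimizing this loss over frames from all domains is what lets one student absorb several domain-dependent teachers at once.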
Second-order Information in First-order Optimization Methods
Title | Second-order Information in First-order Optimization Methods |
Authors | Yuzheng Hu, Licong Lin, Shange Tang |
Abstract | In this paper, we try to uncover the second-order essence of several first-order optimization methods. For Nesterov Accelerated Gradient, we rigorously prove that the algorithm makes use of the difference between past and current gradients, thus approximating the Hessian and accelerating training. For adaptive methods, we relate Adam and Adagrad to a powerful technique in computational statistics, Natural Gradient Descent. These adaptive methods can in fact be treated as relaxations of NGD, with only a slight difference lying in the square root of the denominator in the update rules. Skeptical about the effect of this difference, we design a new algorithm, AdaSqrt, which removes the square root in the denominator and scales the learning rate by sqrt(T). Surprisingly, our new algorithm is comparable to various first-order methods (such as SGD and Adam) on MNIST and even beats Adam on CIFAR-10! This phenomenon casts doubt on the conventional view that the square root is crucial and that training without it leads to terrible performance. We argue that, so long as the algorithm explores second- or even higher-order information of the loss surface, proper scaling of the learning rate alone will guarantee fast training and good generalization performance. To the best of our knowledge, this is the first paper that seriously considers the necessity of the square root among all adaptive methods. We believe that our work can shed light on the importance of higher-order information and inspire the design of more powerful algorithms in the future. |
Tasks | |
Published | 2019-12-20 |
URL | https://arxiv.org/abs/1912.09926v1 |
https://arxiv.org/pdf/1912.09926v1.pdf | |
PWC | https://paperswithcode.com/paper/second-order-information-in-first-order |
Repo | |
Framework | |
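The described update is easy to state for a single parameter: keep Adagrad's running sum of squared gradients, divide by the sum itself rather than by its square root, and rescale the learning rate by sqrt(t) (a sketch reconstructed from the abstract's description; the paper's exact rule, epsilon placement, and constants may differ):

```python
import math

def adasqrt(grad, theta0, steps=2000, lr=1.0, eps=1e-8):
    """AdaSqrt-style update for one parameter: Adagrad's accumulator of
    squared gradients used WITHOUT the square root in the denominator,
    with the step rescaled by sqrt(t)."""
    theta, s = theta0, 0.0
    for t in range(1, steps + 1):
        g = grad(theta)
        s += g * g                      # running sum of squared gradients
        theta -= lr * math.sqrt(t) * g / (s + eps)
    return theta
```

On a simple quadratic the iterate drifts toward the minimizer, since the growing accumulator keeps the effective step size in check even without the square root.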
Double Weighted Truncated Nuclear Norm Regularization for Low-Rank Matrix Completion
Title | Double Weighted Truncated Nuclear Norm Regularization for Low-Rank Matrix Completion |
Authors | Shengke Xue, Wenyuan Qiu, Fan Liu, Xinyu Jin |
Abstract | Matrix completion focuses on recovering a matrix from a small subset of its observed elements, and has gained considerable attention in computer vision. Many previous approaches formulate this issue as a low-rank matrix approximation problem. Recently, the truncated nuclear norm has been presented as a surrogate for the traditional nuclear norm, giving a better estimate of the rank of a matrix. The truncated nuclear norm regularization (TNNR) method is applicable in real-world scenarios. However, it is sensitive to the selection of the number of truncated singular values and requires numerous iterations to converge. This paper therefore proposes a revised approach called double weighted truncated nuclear norm regularization (DW-TNNR), which assigns different weights to the rows and columns of a matrix separately, to accelerate convergence with acceptable performance. DW-TNNR is more robust to the number of truncated singular values than TNNR. Instead of the iterative updating scheme in the second step of TNNR, this paper devises an efficient strategy that uses gradient descent in a concise form, with a theoretical guarantee in optimization. Extensive experiments conducted on real visual data show that DW-TNNR has promising performance and is superior in both speed and accuracy for matrix completion. |
Tasks | Low-Rank Matrix Completion, Matrix Completion |
Published | 2019-01-07 |
URL | http://arxiv.org/abs/1901.01711v1 |
http://arxiv.org/pdf/1901.01711v1.pdf | |
PWC | https://paperswithcode.com/paper/double-weighted-truncated-nuclear-norm |
Repo | |
Framework | |
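The quantity both TNNR and DW-TNNR build on is the truncated nuclear norm, the sum of all but the r largest singular values; it vanishes exactly on matrices of rank at most r, which is why it is a better rank surrogate than the full nuclear norm (the row/column weighting specific to DW-TNNR is omitted here):

```python
import numpy as np

def truncated_nuclear_norm(M, r):
    """Sum of all but the r largest singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)   # singular values, descending
    return float(s[r:].sum())
```

Minimizing this quantity over matrices consistent with the observed entries penalizes only the "excess" rank beyond r.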
Deep Latent Factor Model for Collaborative Filtering
Title | Deep Latent Factor Model for Collaborative Filtering |
Authors | Aanchal Mongia, Neha Jhamb, Emilie Chouzenoux, Angshul Majumdar |
Abstract | Latent factor models have been used widely in collaborative filtering based recommender systems. In recent years, deep learning has been successful in solving a wide variety of machine learning problems. Motivated by the success of deep learning, we propose a deeper version of the latent factor model. Experiments on benchmark datasets show that our proposed technique significantly outperforms all state-of-the-art collaborative filtering techniques. |
Tasks | Recommendation Systems |
Published | 2019-12-10 |
URL | https://arxiv.org/abs/1912.04754v1 |
https://arxiv.org/pdf/1912.04754v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-latent-factor-model-for-collaborative |
Repo | |
Framework | |
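A "deeper version" of a latent factor model can be sketched by factoring a factor: instead of R ≈ U V^T, take R ≈ (U1 U2) V^T and fit all three matrices by gradient descent on the observed ratings (the depth, factor sizes, and plain squared loss below are illustrative choices, not necessarily the paper's exact architecture):

```python
import numpy as np

def deep_lfm(R, mask, d1=4, d2=2, steps=2000, lr=0.05, seed=0):
    """Two-level latent factor model: R ≈ (U1 @ U2) @ V.T, fitted
    by gradient descent on the observed (masked) ratings only."""
    m, n = R.shape
    rng = np.random.default_rng(seed)
    U1 = rng.normal(scale=0.3, size=(m, d1))
    U2 = rng.normal(scale=0.3, size=(d1, d2))
    V = rng.normal(scale=0.3, size=(n, d2))
    for _ in range(steps):
        U = U1 @ U2
        E = mask * (U @ V.T - R)            # error on observed ratings
        gU = E @ V                          # dL/dU via the chain rule
        U1, U2, V = (U1 - lr * gU @ U2.T,
                     U2 - lr * U1.T @ gU,
                     V - lr * E.T @ U)
    return (U1 @ U2) @ V.T
```

The extra factor U2 is what makes the user representation "deep"; with U2 fixed to the identity this reduces to an ordinary latent factor model.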
A Syllable-Structured, Contextually-Based Conditionally Generation of Chinese Lyrics
Title | A Syllable-Structured, Contextually-Based Conditionally Generation of Chinese Lyrics |
Authors | Xu Lu, Jie Wang, Bojin Zhuang, Shaojun Wang, Jing Xiao |
Abstract | This paper presents a novel, syllable-structured Chinese lyrics generation model given a piece of original melody. Most previously reported lyrics generation models fail to include the relationship between lyrics and melody. In this work, we propose to interpret lyrics-melody alignments as syllable structural information and use a multi-channel sequence-to-sequence model that considers both phrasal structures and semantics. Two different RNN encoders are applied, one encoding syllable structures and the other performing semantic encoding with contextual sentences or input keywords. Moreover, a large Chinese lyrics corpus is leveraged for model training. With automatic and human evaluations, results demonstrate the effectiveness of our proposed lyrics generation model. To the best of our knowledge, there are few previous reports on lyrics generation considering both music and linguistic perspectives. |
Tasks | |
Published | 2019-06-15 |
URL | https://arxiv.org/abs/1906.09322v1 |
https://arxiv.org/pdf/1906.09322v1.pdf | |
PWC | https://paperswithcode.com/paper/a-syllable-structured-contextually-based |
Repo | |
Framework | |
Recurrent Point Processes for Dynamic Review Models
Title | Recurrent Point Processes for Dynamic Review Models |
Authors | Kostadin Cvejoski, Ramses J. Sanchez, Bogdan Georgiev, Jannis Schuecker, Christian Bauckhage, Cesar Ojeda |
Abstract | Recent progress in recommender system research has shown the importance of including temporal representations to improve interpretability and performance. Here, we incorporate temporal representations in continuous time via a recurrent point process to build a dynamic model of reviews. Our goal is to characterize how changes in perception, user interest and seasonal effects affect review text. |
Tasks | Point Processes, Recommendation Systems |
Published | 2019-12-09 |
URL | https://arxiv.org/abs/1912.04132v2 |
https://arxiv.org/pdf/1912.04132v2.pdf | |
PWC | https://paperswithcode.com/paper/recurrent-point-processes-for-dynamic-review |
Repo | |
Framework | |
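The temporal machinery behind such models is a conditional-intensity point process. A minimal self-exciting (Hawkes) example, simulated with Ogata's thinning algorithm, shows the mechanics that the paper's recurrent network replaces with a learned intensity (the baseline, excitation, and decay parameters below are arbitrary illustrative values):

```python
import math
import random

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.5):
    """Conditional intensity: baseline plus exponentially decaying excitation
    contributed by every past event."""
    return mu + alpha * sum(math.exp(-beta * (t - s)) for s in events if s < t)

def simulate(horizon, mu=0.5, alpha=0.8, beta=1.5, seed=0):
    """Ogata's thinning algorithm for a self-exciting point process."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < horizon:
        # Intensity only decays between events, so the current value plus one
        # potential jump of size alpha upper-bounds it until the next event.
        lam_bar = hawkes_intensity(t, events, mu, alpha, beta) + alpha
        t += rng.expovariate(lam_bar)
        if t < horizon and rng.random() * lam_bar <= hawkes_intensity(t, events, mu, alpha, beta):
            events.append(t)
    return events
```

In a recurrent point process, `hawkes_intensity` is replaced by a function of an RNN hidden state updated at every event, which is what lets review dynamics depend on content and seasonality rather than a fixed kernel.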