January 30, 2020


Paper Group ANR 471



Music Recommendations in Hyperbolic Space: An Application of Empirical Bayes and Hierarchical Poincaré Embeddings

Title Music Recommendations in Hyperbolic Space: An Application of Empirical Bayes and Hierarchical Poincaré Embeddings
Authors Tim Schmeier, Sam Garrett, Joseph Chisari, Brett Vintch
Abstract Matrix Factorization (MF) is a common method for generating recommendations, where the proximity of entities like users or items in the embedded space indicates their similarity to one another. Though almost all applications implicitly use a Euclidean embedding space to represent two entity types, recent work has suggested that a hyperbolic Poincaré ball may be better suited to representing multiple entity types, and in particular, hierarchies. We describe a novel method to embed a hierarchy of related music entities in hyperbolic space. We also describe how a parametric empirical Bayes approach can be used to estimate link reliability between entities in the hierarchy. Applying these methods together to build personalized playlists for users in a digital music service yielded a large and statistically significant increase in performance during an A/B test, as compared to the Euclidean model.
Tasks
Published 2019-07-24
URL https://arxiv.org/abs/1907.12378v1
PDF https://arxiv.org/pdf/1907.12378v1.pdf
PWC https://paperswithcode.com/paper/music-recommendations-in-hyperbolic-space-an
Repo
Framework
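The Poincaré distance underlying such embeddings can be sketched in a few lines. This is a generic illustration of the hyperbolic geometry, not the authors' code:

```python
import numpy as np

def poincare_distance(u, v):
    """Distance between two points inside the unit Poincare ball:
    arcosh(1 + 2*||u-v||^2 / ((1-||u||^2) * (1-||v||^2)))."""
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / denom)

# Points near the boundary are far from the origin even when they are
# Euclidean-close to it, which is what makes the ball well suited to
# hierarchies: the root sits near the center, leaves near the boundary.
root = np.array([0.0, 0.0])
leaf = np.array([0.0, 0.9])
print(poincare_distance(root, leaf))
```

The distance grows without bound as either point approaches the boundary, so a tree's exponentially many leaves can all be kept far apart.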

Deep Transfer Learning for Thermal Dynamics Modeling in Smart Buildings

Title Deep Transfer Learning for Thermal Dynamics Modeling in Smart Buildings
Authors Zhanhong Jiang, Young M. Lee
Abstract Thermal dynamics modeling has been a critical issue in building heating, ventilation, and air-conditioning (HVAC) systems, as it can significantly affect control and maintenance strategies. Due to the uniqueness of each specific building, traditional thermal dynamics modeling approaches, which depend heavily on physics knowledge, cannot generalize well. This study proposes a deep supervised domain adaptation (DSDA) method for thermal dynamics modeling of building indoor temperature evolution and energy consumption. A long short-term memory network based Sequence to Sequence scheme is pre-trained on a large amount of data collected from one building and then adapted to another building with only a limited amount of data via model fine-tuning. We use four publicly available datasets: SML and AHU for temperature evolution, and long-term datasets from two different commercial buildings, termed Building 1 and Building 2, for energy consumption. We show that deep supervised domain adaptation effectively adapts the pre-trained model from one building to another and achieves better predictive performance than learning from scratch with only a limited amount of data.
Tasks Domain Adaptation, Transfer Learning
Published 2019-11-08
URL https://arxiv.org/abs/1911.03318v1
PDF https://arxiv.org/pdf/1911.03318v1.pdf
PWC https://paperswithcode.com/paper/deep-transfer-learning-for-thermal-dynamics
Repo
Framework
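The pre-train/fine-tune idea can be illustrated with a toy sketch in which a linear model stands in for the Seq2Seq network and all data is synthetic. This shows the transfer-learning mechanism only, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(w, X, y, lr=0.1, steps=200):
    """Full-batch gradient descent on mean squared error."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

# "Source building": abundant data generated by a true linear response.
w_true = np.array([0.8, -0.3, 0.5])
X_src = rng.normal(size=(1000, 3))
y_src = X_src @ w_true + 0.01 * rng.normal(size=1000)

# "Target building": similar but shifted dynamics, only 20 samples.
w_target = w_true + np.array([0.05, -0.05, 0.0])
X_tgt = rng.normal(size=(20, 3))
y_tgt = X_tgt @ w_target

w_pre = sgd(np.zeros(3), X_src, y_src)                # pre-train on source
w_ft = sgd(w_pre, X_tgt, y_tgt, steps=50)             # fine-tune on target
w_scratch = sgd(np.zeros(3), X_tgt, y_tgt, steps=50)  # learn from scratch

def err(w):
    return np.linalg.norm(w - w_target)

print(err(w_ft), err(w_scratch))  # fine-tuning wins with scarce data
```

Because the pre-trained weights start close to the target building's dynamics, a few fine-tuning steps on scarce data beat training from scratch, which is the effect the paper demonstrates at Seq2Seq scale.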

Noiseless Privacy

Title Noiseless Privacy
Authors Farhad Farokhi
Abstract In this paper, we define noiseless privacy, as a non-stochastic rival to differential privacy, requiring that the outputs of a mechanism (i.e., function composition of a privacy-preserving mapping and a query) can attain only a few values while varying the data of an individual (the logarithm of the number of the distinct values is bounded by the privacy budget). Therefore, the output of the mechanism is not fully informative of the data of the individuals in the dataset. We prove several guarantees for noiselessly-private mechanisms. The information content of the output about the data of an individual, even if an adversary knows all the other entries of the private dataset, is bounded by the privacy budget. The zero-error capacity of memoryless channels using noiselessly-private mechanisms for transmission is upper bounded by the privacy budget. The performance of a non-stochastic hypothesis-testing adversary is bounded again by the privacy budget. Finally, assuming that an adversary has access to a stochastic prior on the dataset, we prove that the estimation error of the adversary for individual entries of the dataset is lower bounded by a decreasing function of the privacy budget. In this case, we also show that the maximal information leakage is bounded by the privacy budget. In addition to privacy guarantees, we prove that noiselessly-private mechanisms admit a composition theorem and that post-processing does not weaken their privacy guarantees. We prove that quantization operators can ensure noiseless privacy if the number of quantization levels is appropriately selected based on the sensitivity of the query and the privacy budget. Finally, we illustrate the privacy merits of noiseless privacy using multiple datasets in energy and transport.
Tasks Quantization
Published 2019-10-29
URL https://arxiv.org/abs/1910.13027v1
PDF https://arxiv.org/pdf/1910.13027v1.pdf
PWC https://paperswithcode.com/paper/noiseless-privacy
Repo
Framework
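The quantization result can be illustrated with a minimal sketch: choosing the number of levels as ⌊exp(ε)⌋ keeps the log of the number of distinct outputs within the budget. This toy varies the whole input rather than one individual's entry, so it only illustrates the counting argument, not the full definition:

```python
import math

def quantize(x, num_levels, lo=0.0, hi=1.0):
    """Uniform quantizer on [lo, hi] with a fixed number of output levels."""
    step = (hi - lo) / num_levels
    idx = min(int((x - lo) / step), num_levels - 1)
    return lo + (idx + 0.5) * step  # report the midpoint of the level

# Budget eps bounds log(#distinct outputs), so pick k = floor(exp(eps)).
eps = 1.5
k = math.floor(math.exp(eps))  # 4 levels for eps = 1.5

# Sweep the whole query range: the mechanism attains only k values.
outputs = {quantize(x / 100, k) for x in range(101)}
print(len(outputs), math.log(len(outputs)) <= eps)
```

However the query input varies, the quantizer's output set has at most k elements, so log(#outputs) ≤ log k ≤ ε.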

PI-GAN: Learning Pose Independent representations for multiple profile face synthesis

Title PI-GAN: Learning Pose Independent representations for multiple profile face synthesis
Authors Hamed Alqahtani
Abstract Generating a pose-invariant representation capable of synthesizing multiple face pose views from a single pose is still a difficult problem. The solution is in demand in various areas such as multimedia security, computer vision, and robotics. Generative adversarial networks (GANs) with encoder-decoder structures can learn a pose-independent representation, which, combined with a discriminator network, enables realistic face synthesis. We present PI-GAN, a cyclic shared encoder-decoder framework, in an attempt to solve the problem. Compared to a traditional GAN, it adds a secondary encoder-decoder framework that shares weights with the primary structure and reconstructs the face with the original pose. The primary framework focuses on creating a disentangled representation, and the secondary framework aims to restore the original face. We use the high-resolution, realistic CFP dataset to evaluate performance.
Tasks Face Generation
Published 2019-12-26
URL https://arxiv.org/abs/2001.00645v1
PDF https://arxiv.org/pdf/2001.00645v1.pdf
PWC https://paperswithcode.com/paper/pi-gan-learning-pose-independent
Repo
Framework

Distributed Vector Representations of Folksong Motifs

Title Distributed Vector Representations of Folksong Motifs
Authors Aitor Arronte-Alvarez, Francisco Gómez-Martin
Abstract This article presents a distributed vector representation model for learning folksong motifs. A skip-gram version of word2vec with negative sampling is used to learn high-quality embeddings. Motifs from the Essen Folksong collection are compared based on their cosine similarity. A new evaluation method for testing the quality of the embeddings based on a melodic similarity task is presented to show how the vector space can represent complex contextual features, and how it can be utilized for the study of folksong variation.
Tasks
Published 2019-03-20
URL http://arxiv.org/abs/1903.08756v1
PDF http://arxiv.org/pdf/1903.08756v1.pdf
PWC https://paperswithcode.com/paper/distributed-vector-representations-of
Repo
Framework
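The cosine-similarity comparison step can be sketched with hypothetical motif vectors; in the paper the embeddings come from training skip-gram with negative sampling on the Essen collection:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical learned embeddings for three motifs: two variants of the
# same melodic contour and one unrelated motif.
motif_a = np.array([0.9, 0.1, 0.3])
motif_a2 = np.array([0.8, 0.2, 0.35])
motif_b = np.array([-0.2, 0.9, -0.4])

print(cosine(motif_a, motif_a2))  # high: likely variants
print(cosine(motif_a, motif_b))   # low: unrelated
```

A melodic-similarity evaluation then checks that variant pairs score systematically higher than unrelated pairs.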

InfoBot: Transfer and Exploration via the Information Bottleneck

Title InfoBot: Transfer and Exploration via the Information Bottleneck
Authors Anirudh Goyal, Riashat Islam, Daniel Strouse, Zafarali Ahmed, Matthew Botvinick, Hugo Larochelle, Yoshua Bengio, Sergey Levine
Abstract A central challenge in reinforcement learning is discovering effective policies for tasks where rewards are sparsely distributed. We postulate that in the absence of useful reward signals, an effective exploration strategy should seek out {\it decision states}. These states lie at critical junctions in the state space from where the agent can transition to new, potentially unexplored regions. We propose to learn about decision states from prior experience. By training a goal-conditioned policy with an information bottleneck, we can identify decision states by examining where the model actually leverages the goal state. We find that this simple mechanism effectively identifies decision states, even in partially observed settings. In effect, the model learns the sensory cues that correlate with potential subgoals. In new environments, this model can then identify novel subgoals for further exploration, guiding the agent through a sequence of potential decision states and through new regions of the state space.
Tasks
Published 2019-01-30
URL http://arxiv.org/abs/1901.10902v4
PDF http://arxiv.org/pdf/1901.10902v4.pdf
PWC https://paperswithcode.com/paper/infobot-transfer-and-exploration-via-the
Repo
Framework
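The notion of a decision state can be sketched by measuring how strongly the goal-conditioned policy depends on the goal at a given state. This is a toy stand-in for the paper's information-bottleneck training, with hypothetical action distributions:

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

# Toy goal-conditioned policies pi(a | s, g) over 2 actions and 2 goals.
# In a corridor the action is the same for every goal; at a junction the
# action depends on the goal, so the state carries goal information.
corridor = {"g1": np.array([0.9, 0.1]), "g2": np.array([0.9, 0.1])}
junction = {"g1": np.array([0.9, 0.1]), "g2": np.array([0.1, 0.9])}

def goal_dependence(policies):
    """Average KL between pi(a|s,g) and the goal-marginal policy pi(a|s)."""
    marginal = np.mean(list(policies.values()), axis=0)
    return float(np.mean([kl(p, marginal) for p in policies.values()]))

print(goal_dependence(corridor))  # ~0: not a decision state
print(goal_dependence(junction))  # >0: a decision state
```

States where this divergence is high are exactly the junctions the paper proposes to seek out during exploration.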

An Improved Neural Baseline for Temporal Relation Extraction

Title An Improved Neural Baseline for Temporal Relation Extraction
Authors Qiang Ning, Sanjay Subramanian, Dan Roth
Abstract Determining temporal relations (e.g., before or after) between events has been a challenging natural language understanding task, partly due to the difficulty of generating large amounts of high-quality training data. Consequently, neural approaches have not been widely used for it, or have shown only moderate improvements. This paper proposes a new neural system that achieves about 10% absolute improvement in accuracy over the previous best system (25% error reduction) on two benchmark datasets. The proposed system is trained on the state-of-the-art MATRES dataset and applies contextualized word embeddings, a Siamese encoder of a temporal common sense knowledge base, and global inference via integer linear programming (ILP). We suggest that the new approach could serve as a strong baseline for future research in this area.
Tasks Common Sense Reasoning, Relation Extraction, Word Embeddings
Published 2019-09-01
URL https://arxiv.org/abs/1909.00429v1
PDF https://arxiv.org/pdf/1909.00429v1.pdf
PWC https://paperswithcode.com/paper/an-improved-neural-baseline-for-temporal
Repo
Framework
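The global-inference step can be sketched with exhaustive search standing in for the ILP solver, using hypothetical pairwise scores and a before/after transitivity constraint:

```python
import itertools

# Hypothetical local model scores for each event pair.
RELS = ("before", "after")
scores = {
    ("A", "B"): {"before": 0.9, "after": 0.1},
    ("B", "C"): {"before": 0.8, "after": 0.2},
    ("A", "C"): {"before": 0.4, "after": 0.6},  # locally wrong
}

def consistent(assign):
    """Transitivity: A before B and B before C forces A before C."""
    if assign[("A", "B")] == "before" and assign[("B", "C")] == "before":
        return assign[("A", "C")] == "before"
    if assign[("A", "B")] == "after" and assign[("B", "C")] == "after":
        return assign[("A", "C")] == "after"
    return True

# Pick the highest-scoring globally consistent assignment (brute force
# here; an ILP solver does this efficiently at scale).
best = max(
    (dict(zip(scores, combo))
     for combo in itertools.product(RELS, repeat=3)
     if consistent(dict(zip(scores, combo)))),
    key=lambda a: sum(scores[p][r] for p, r in a.items()),
)
print(best[("A", "C")])  # flipped to "before" by the global constraint
```

The globally consistent solution overrides the locally preferred but contradictory "A after C" label, which is the benefit of the ILP layer.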

Unsupervised motion saliency map estimation based on optical flow inpainting

Title Unsupervised motion saliency map estimation based on optical flow inpainting
Authors L. Maczyta, P. Bouthemy, O. Le Meur
Abstract The paper addresses the problem of motion saliency in videos, that is, identifying regions that undergo motion departing from their context. We propose a new unsupervised paradigm to compute motion saliency maps. The key ingredient is the flow inpainting stage. Candidate regions are determined from the optical flow boundaries. The residual flow in these regions is given by the difference between the optical flow and the flow inpainted from the surrounding areas. It provides the cue for motion saliency. The method is flexible and general by relying on motion information only. Experimental results on the DAVIS 2016 benchmark demonstrate that the method compares favourably with state-of-the-art video saliency methods.
Tasks Optical Flow Estimation
Published 2019-03-12
URL https://arxiv.org/abs/1903.04842v2
PDF https://arxiv.org/pdf/1903.04842v2.pdf
PWC https://paperswithcode.com/paper/unsupervised-motion-saliency-map-estimation
Repo
Framework
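The residual-flow cue can be sketched in one dimension, with a simple mean fill-in standing in for a real flow-inpainting method:

```python
import numpy as np

# Toy 1-D "optical flow" field: a static scene (flow ~ 0) with a moving
# object occupying cells 4-6.
flow = np.zeros(10)
flow[4:7] = 2.0

# Candidate region (in the paper, detected from flow boundaries).
region = np.zeros(10, dtype=bool)
region[4:7] = True

# Inpaint the candidate region from its surroundings; the mean of the
# surrounding flow stands in for a real inpainting scheme.
inpainted = flow.copy()
inpainted[region] = flow[~region].mean()

# Residual flow = observed flow - inpainted flow: the saliency cue.
residual = np.abs(flow - inpainted)
print(residual.max())  # large residual inside the moving region
```

If the region moved consistently with its context, the inpainted flow would match the observed flow and the residual would vanish; a large residual flags context-departing motion.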

Improving SGD convergence by online linear regression of gradients in multiple statistically relevant directions

Title Improving SGD convergence by online linear regression of gradients in multiple statistically relevant directions
Authors Jarek Duda
Abstract Deep neural networks are usually trained with stochastic gradient descent (SGD), which minimizes the objective function using very rough approximations of the gradient that only average to the real gradient. Standard approaches like momentum or ADAM consider only a single direction and do not try to model the distance from an extremum, neglecting valuable information in the calculated sequence of gradients and often stagnating on some suboptimal plateau. Second-order methods could exploit these missed opportunities; however, besides suffering from very large cost and numerical instabilities, many of them are attracted to suboptimal points like saddles because they neglect the signs of the curvatures (the eigenvalues of the Hessian). The saddle-free Newton method (SFN)~\cite{SFN} is a rare example that addresses this issue: it changes saddle attraction into repulsion, and was shown to provide an essential improvement in final value this way. However, it neglects noise while modeling second-order behavior, focuses on a Krylov subspace for numerical reasons, and requires a costly eigendecomposition. Maintaining the advantages of SFN, we propose inexpensive ways to exploit these opportunities. Second-order behavior is linear dependence of the first derivative: we can optimally estimate it from a sequence of noisy gradients with least-squares linear regression, here in an online setting with weakening weights of old gradients. A statistically relevant subspace is suggested by PCA of recent noisy gradients; in the online setting, it can be maintained by slowly rotating the considered directions toward new gradients, gradually replacing old directions with recent statistically relevant ones. The eigendecomposition can also be performed online, with regularly performed steps of the QR method to maintain a diagonal Hessian. Outside the modeled second-order subspace, we can simultaneously perform gradient descent.
Tasks
Published 2019-01-31
URL http://arxiv.org/abs/1901.11457v5
PDF http://arxiv.org/pdf/1901.11457v5.pdf
PWC https://paperswithcode.com/paper/improving-sgd-convergence-by-tracing-multiple
Repo
Framework
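The core idea, online least-squares regression of noisy gradients with weakening weights of old ones, can be sketched in one dimension. This is a minimal illustration, not the paper's full multi-direction method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective f(x) = 2*(x - 3)^2, whose gradient 4*(x - 3) is linear
# in x, with simulated minibatch noise on top.
def noisy_grad(x):
    return 4.0 * (x - 3.0) + 0.1 * rng.normal()

# Online exponentially weighted least squares of gradient on position:
# keep decayed running sums, from which slope and intercept follow as
# in ordinary linear regression.
beta = 0.9  # decay factor weakening the weights of old gradients
s = {"w": 0.0, "x": 0.0, "g": 0.0, "xx": 0.0, "xg": 0.0}

x = 0.0
for _ in range(15):
    g = noisy_grad(x)
    for key in s:
        s[key] *= beta
    s["w"] += 1.0
    s["x"] += x
    s["g"] += g
    s["xx"] += x * x
    s["xg"] += x * g
    x -= 0.05 * g  # plain SGD supplies the sequence of noisy gradients

# Weighted regression slope estimates the curvature f''(x) = 4; the
# regression's zero-gradient point estimates the extremum x = 3.
var = s["xx"] - s["x"] ** 2 / s["w"]
cov = s["xg"] - s["x"] * s["g"] / s["w"]
slope = cov / var
x_extremum = s["x"] / s["w"] - (s["g"] / s["w"]) / slope
print(slope, x_extremum)
```

The regression recovers both the curvature and the location of the extremum from the same noisy gradients SGD already computes, which is the "missed opportunity" the abstract describes.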

Two-level protein folding optimization on a three-dimensional AB off-lattice model

Title Two-level protein folding optimization on a three-dimensional AB off-lattice model
Authors Borko Bošković, Janez Brest
Abstract This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes a term expressing the quality of the hydrophobic core. This term is crucial for leading the search process to promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are frequently used in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms.
Tasks
Published 2019-03-04
URL http://arxiv.org/abs/1903.01456v1
PDF http://arxiv.org/pdf/1903.01456v1.pdf
PWC https://paperswithcode.com/paper/two-level-protein-folding-optimization-on-a
Repo
Framework
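The energy being optimized can be sketched with the standard AB off-lattice formulation: a bending term over consecutive bond angles plus a Lennard-Jones-like pair term with species-dependent coefficients. The paper's exact constants may differ from this common variant:

```python
import numpy as np

def ab_energy(coords, seq):
    """Energy of a conformation in the AB off-lattice model: bending
    term over consecutive unit bond vectors plus a Lennard-Jones-like
    term over non-adjacent residue pairs."""
    coords = np.asarray(coords, dtype=float)
    n = len(seq)
    # Bending term: (1 - cos(theta_i)) / 4 over consecutive bonds.
    bonds = np.diff(coords, axis=0)
    bonds /= np.linalg.norm(bonds, axis=1, keepdims=True)
    bend = sum((1 - bonds[i] @ bonds[i + 1]) / 4 for i in range(n - 2))

    # Pair coefficient: 1 for AA (strong attraction), 0.5 for BB,
    # -0.5 for mixed pairs (weak repulsion).
    def C(a, b):
        if a == b:
            return 1.0 if a == "A" else 0.5
        return -0.5

    pair = 0.0
    for i in range(n - 2):
        for j in range(i + 2, n):
            r = np.linalg.norm(coords[i] - coords[j])
            pair += 4 * (r ** -12 - C(seq[i], seq[j]) * r ** -6)
    return bend + pair

# Straight chain of four residues with sequence ABAB.
chain = [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]]
print(ab_energy(chain, "ABAB"))
```

The first level's auxiliary fitness would add a hydrophobic-core term on top of this energy to pull the A residues into a compact core.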

DAST Model: Deciding About Semantic Complexity of a Text

Title DAST Model: Deciding About Semantic Complexity of a Text
Authors MohammadReza Besharati, Mohammad Izadi
Abstract Measuring text complexity is an essential task in several fields and applications (such as NLP, the semantic web, smart education, etc.). The semantic layer of text is more tacit than its syntactic structure and, as a result, calculating semantic complexity is more difficult than calculating syntactic complexity. While there are well-known and powerful academic and commercial syntactic complexity measures, the problem of measuring semantic complexity is still a challenging one. In this paper, we introduce the DAST model, which stands for Deciding About Semantic Complexity of a Text. DAST proposes an intuitionistic approach to semantics that lets us have a well-defined model for the semantics of a text and its complexity: semantics is considered as a lattice of intuitions and, as a result, semantic complexity is defined as the result of a calculation on this lattice. A set-theoretic formal definition of semantic complexity, as a 6-tuple formal system, is provided. By using this formal system, a method for measuring semantic complexity is presented. The evaluation of the proposed approach is done through a set of three human-judgment experiments. The results show that the DAST model is capable of deciding about the semantic complexity of a text. Furthermore, analysis of the results leads us to introduce a Markovian model for the process of common-sense, multiple-step, semantic-complexity reasoning in people. The experiments demonstrate that our method outperforms the random baseline in precision and competes with other methods with a lower error percentage.
Tasks Common Sense Reasoning
Published 2019-08-24
URL https://arxiv.org/abs/1908.09080v5
PDF https://arxiv.org/pdf/1908.09080v5.pdf
PWC https://paperswithcode.com/paper/dast-model-deciding-about-semantic-complexity
Repo
Framework

Dealing with Qualitative and Quantitative Features in Legal Domains

Title Dealing with Qualitative and Quantitative Features in Legal Domains
Authors Maximiliano C. D. Budán, María Laura Cobo, Diego I. Martínez, Antonino Rotolo
Abstract In this work, we enrich a formalism for argumentation by including a formal characterization of knowledge-related features, in order to capture proper reasoning in legal domains. We add meta-data information to the arguments in the form of labels representing quantitative and qualitative data about them. These labels are propagated through an argumentative graph according to the relations of support, conflict, and aggregation between arguments.
Tasks
Published 2019-03-05
URL http://arxiv.org/abs/1903.01966v1
PDF http://arxiv.org/pdf/1903.01966v1.pdf
PWC https://paperswithcode.com/paper/dealing-with-qualitative-and-quantitative
Repo
Framework

Tight Sensitivity Bounds For Smaller Coresets

Title Tight Sensitivity Bounds For Smaller Coresets
Authors Alaa Maalouf, Adiel Statman, Dan Feldman
Abstract An $\varepsilon$-coreset for Least-Mean-Squares (LMS) of a matrix $A\in{\mathbb{R}}^{n\times d}$ is a small weighted subset of its rows that approximates the sum of squared distances from its rows to every affine $k$-dimensional subspace of ${\mathbb{R}}^d$, up to a factor of $1\pm\varepsilon$. Such coresets are useful for hyper-parameter tuning and solving many least-mean-squares problems such as low-rank approximation ($k$-SVD), $k$-PCA, Lasso/Ridge/Linear regression, and many more. Coresets are also useful for handling streaming, dynamic and distributed big data in parallel. With high probability, non-uniform sampling based on upper bounds on what is known as the importance or sensitivity of each row in $A$ yields a coreset. The size of the (sampled) coreset is then near-linear in the total sum of these sensitivity bounds. We provide algorithms that compute provably \emph{tight} bounds for the sensitivity of each input row. Our approach is based on two ingredients: (i) an iterative algorithm that computes the exact sensitivity of each point up to arbitrarily small precision for (non-affine) $k$-subspaces, and (ii) a general reduction of independent interest from computing sensitivity for the family of affine $k$-subspaces in ${\mathbb{R}}^d$ to (non-affine) $(k+1)$-subspaces in ${\mathbb{R}}^{d+1}$. Experimental results on real-world datasets, including the English Wikipedia documents-term matrix, show that our bounds provide significantly smaller and data-dependent coresets also in practice. Full open source is also provided.
Tasks
Published 2019-07-02
URL https://arxiv.org/abs/1907.01433v1
PDF https://arxiv.org/pdf/1907.01433v1.pdf
PWC https://paperswithcode.com/paper/tight-sensitivity-bounds-for-smaller-coresets
Repo
Framework
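The classic sensitivity-based sampling scheme can be sketched with leverage scores serving as the sensitivity upper bounds; these are the loose, generic bounds that this paper's algorithms tighten, not the authors' method itself:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 5))

# Leverage scores: squared row norms of U from a thin SVD. They are the
# standard sensitivity upper bounds for least-mean-squares problems.
U, _, _ = np.linalg.svd(A, full_matrices=False)
sens = np.sum(U ** 2, axis=1)

# Sample rows with probability proportional to sensitivity and reweight
# so the coreset is an unbiased estimator; the required coreset size is
# near-linear in sens.sum().
probs = sens / sens.sum()
m = 100
idx = rng.choice(len(A), size=m, p=probs)
coreset, weights = A[idx], 1.0 / (m * probs[idx])

# Sanity check: the leverage scores sum to the rank (here 5), so the
# coreset size bound depends on d, not on n.
print(sens.sum())
```

Tighter sensitivity bounds shrink the sum that governs the sample size, which is exactly why the coresets in the paper come out smaller.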

Reasoning-Driven Question-Answering for Natural Language Understanding

Title Reasoning-Driven Question-Answering for Natural Language Understanding
Authors Daniel Khashabi
Abstract Natural language understanding (NLU) of text is a fundamental challenge in AI, and it has received significant attention throughout the history of NLP research. This primary goal has been studied under different tasks, such as Question Answering (QA) and Textual Entailment (TE). In this thesis, we investigate the NLU problem through the QA task and focus on the aspects that make it a challenge for the current state-of-the-art technology. This thesis is organized into three main parts: In the first part, we explore multiple formalisms to improve existing machine comprehension systems. We propose a formulation for abductive reasoning in natural language and show its effectiveness, especially in domains with limited training data. Additionally, to help reasoning systems cope with irrelevant or redundant information, we create a supervised approach to learn and detect the essential terms in questions. In the second part, we propose two new challenge datasets. In particular, we create two datasets of natural language questions where (i) the first one requires reasoning over multiple sentences; (ii) the second one requires temporal common sense reasoning. We hope that the two proposed datasets will motivate the field to address more complex problems. In the final part, we present the first formal framework for multi-step reasoning algorithms, in the presence of a few important properties of language use, such as incompleteness, ambiguity, etc. We apply this framework to prove fundamental limitations for reasoning algorithms. These theoretical results provide extra intuition into the existing empirical evidence in the field.
Tasks Common Sense Reasoning, Natural Language Inference, Question Answering, Reading Comprehension
Published 2019-08-14
URL https://arxiv.org/abs/1908.04926v1
PDF https://arxiv.org/pdf/1908.04926v1.pdf
PWC https://paperswithcode.com/paper/reasoning-driven-question-answering-for
Repo
Framework

Processamento de linguagem natural em Português e aprendizagem profunda para o domínio de Óleo e Gás

Title Processamento de linguagem natural em Português e aprendizagem profunda para o domínio de Óleo e Gás
Authors Diogo Gomes, Alexandre Evsukoff
Abstract Over the last few decades, institutions around the world have been challenged to deal with the sheer volume of information captured in unstructured formats, especially in textual documents. The so-called Digital Transformation age, characterized by important technological advances and the advent of disruptive methods in Artificial Intelligence, offers opportunities to make better use of this information. Recent techniques in Natural Language Processing (NLP) with Deep Learning approaches allow a large volume of data to be processed efficiently in order to obtain relevant information, identify patterns, classify text, among other applications. In this context, the highly technical vocabulary of the Oil and Gas (O&G) domain represents a challenge for these NLP algorithms, in which terms can assume a meaning very different from their common-sense understanding. The search for suitable mathematical representations and specific models requires a large amount of representative corpora in the O&G domain. However, public access to this material is scarce in the scientific literature, especially considering the Portuguese language. This paper presents a literature review of the main deep learning NLP techniques and their major applications for the O&G domain in Portuguese.
Tasks Common Sense Reasoning
Published 2019-08-05
URL https://arxiv.org/abs/1908.01674v2
PDF https://arxiv.org/pdf/1908.01674v2.pdf
PWC https://paperswithcode.com/paper/processamento-de-linguagem-natural-em
Repo
Framework