Paper Group NANR 174
Speech Coding Combining Chaos Encryption and Error Recovery for G.722.2 Codec. Lifting Vectorial Variational Problems: A Natural Formulation Based on Geometric Measure Theory and Discrete Exterior Calculus. Conversation Initiation by Diverse News Contents Introduction. Contextual Action Cues from Camera Sensor for Multi-Stream Action Recognition. A …
Speech Coding Combining Chaos Encryption and Error Recovery for G.722.2 Codec
Title | Speech Coding Combining Chaos Encryption and Error Recovery for G.722.2 Codec |
Authors | Messaouda Boumaraf, Fatiha Merazka |
Abstract | |
Tasks | |
Published | 2019-09-01 |
URL | https://www.aclweb.org/anthology/W19-7418/ |
https://www.aclweb.org/anthology/W19-7418 | |
PWC | https://paperswithcode.com/paper/speech-coding-combining-chaos-encryption-and |
Repo | |
Framework | |
Lifting Vectorial Variational Problems: A Natural Formulation Based on Geometric Measure Theory and Discrete Exterior Calculus
Title | Lifting Vectorial Variational Problems: A Natural Formulation Based on Geometric Measure Theory and Discrete Exterior Calculus |
Authors | Thomas Mollenhoff, Daniel Cremers |
Abstract | Numerous tasks in imaging and vision can be formulated as variational problems over vector-valued maps. We approach the relaxation and convexification of such vectorial variational problems via a lifting to the space of currents. To that end, we recall that functionals with polyconvex Lagrangians can be reparametrized as convex one-homogeneous functionals on the graph of the function. This leads to an equivalent shape optimization problem over oriented surfaces in the product space of domain and codomain. A convex formulation is then obtained by relaxing the search space from oriented surfaces to more general currents. We propose a discretization of the resulting infinite-dimensional optimization problem using Whitney forms, which also generalizes recent “sublabel-accurate” multilabeling approaches. |
Tasks | |
Published | 2019-06-01 |
URL | http://openaccess.thecvf.com/content_CVPR_2019/html/Mollenhoff_Lifting_Vectorial_Variational_Problems_A_Natural_Formulation_Based_on_Geometric_CVPR_2019_paper.html |
http://openaccess.thecvf.com/content_CVPR_2019/papers/Mollenhoff_Lifting_Vectorial_Variational_Problems_A_Natural_Formulation_Based_on_Geometric_CVPR_2019_paper.pdf | |
PWC | https://paperswithcode.com/paper/lifting-vectorial-variational-problems-a-1 |
Repo | |
Framework | |
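To make the lifting idea in the abstract above concrete, here is a brief sketch of the graph-lifting construction. The notation is ours and purely illustrative; it summarizes the general idea rather than the paper's precise formulation or its Whitney-form discretization.

```latex
% Sketch (our notation): lifting a vectorial variational problem to currents.
% Original problem over maps u : \Omega \subset \mathbb{R}^d \to \mathbb{R}^n:
\min_{u} \int_{\Omega} f\bigl(x, u(x), \nabla u(x)\bigr)\, dx .
% For polyconvex f this can be reparametrized as a convex, one-homogeneous
% functional \Phi evaluated on the oriented graph surface
\Gamma_u = \{(x, u(x)) : x \in \Omega\} \subset \Omega \times \mathbb{R}^n,
\qquad
\Phi\bigl(\llbracket \Gamma_u \rrbracket\bigr)
  = \int_{\Omega} f\bigl(x, u(x), \nabla u(x)\bigr)\, dx .
% The convex relaxation then minimizes \Phi over a larger class \mathcal{C} of
% d-currents in the product space that contains all admissible graphs:
\min_{T \in \mathcal{C}} \Phi(T),
\qquad
\{\llbracket \Gamma_u \rrbracket : u \ \text{admissible}\} \subset \mathcal{C}.
```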
Conversation Initiation by Diverse News Contents Introduction
Title | Conversation Initiation by Diverse News Contents Introduction |
Authors | Satoshi Akasaki, Nobuhiro Kaji |
Abstract | In our everyday chit-chat, there is a conversation initiator, who proactively casts an initial utterance to start chatting. However, most existing conversation systems cannot play this role. Previous studies on conversation systems assume that the user always initiates the conversation, and have placed emphasis on how to respond to the given user's utterance. As a result, existing conversation systems are passive; namely, they keep waiting until they are spoken to by the users. In this paper, we consider the system as a conversation initiator and propose a novel task of generating the initial utterance in open-domain, non-task-oriented conversation. Here, in order not to bore users, it is necessary to generate diverse utterances to initiate conversation without relying on boilerplate utterances like greetings. To this end, we propose to generate the initial utterance by summarizing and chatting about news articles, which provide fresh and varied content every day. To address the lack of training data for this task, we constructed a novel large-scale dataset through crowd-sourcing. We also analyzed the dataset in detail to examine how humans initiate conversations (the dataset will be released to facilitate future research activities). We present several approaches to conversation initiation, including information-retrieval-based and generation-based models. Experimental results showed that the proposed models trained on our dataset performed reasonably well and outperformed baselines that utilize automatically collected training data, in both automatic and manual evaluation. |
Tasks | Information Retrieval |
Published | 2019-06-01 |
URL | https://www.aclweb.org/anthology/N19-1400/ |
https://www.aclweb.org/anthology/N19-1400 | |
PWC | https://paperswithcode.com/paper/conversation-initiation-by-diverse-news |
Repo | |
Framework | |
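As a rough illustration of a retrieval-style conversation opener (not the authors' crowd-sourced, trained models), the sketch below picks the sentence of a news article that is most similar to the article as a whole under TF-IDF and uses it to start a conversation. The scoring rule and the toy article are our own assumptions.

```python
# Illustrative sketch only: a naive retrieval-style conversation opener that
# selects the most "central" sentence of a news article via TF-IDF similarity.
# This is NOT the paper's model; the selection heuristic is an assumption.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pick_opener(article_sentences):
    """Return the sentence most similar to the article's TF-IDF centroid."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(article_sentences)            # one row per sentence
    centroid = np.asarray(X.mean(axis=0))                # article-level representation
    scores = cosine_similarity(X, centroid).ravel()      # similarity of each sentence
    return article_sentences[scores.argmax()]

sentences = [
    "The city council approved a new bike-sharing program on Tuesday.",
    "Officials said the program will launch next spring with 500 bikes.",
    "Critics worry about the cost to taxpayers.",
]
print("Did you hear this? " + pick_opener(sentences))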
Contextual Action Cues from Camera Sensor for Multi-Stream Action Recognition
Title | Contextual Action Cues from Camera Sensor for Multi-Stream Action Recognition |
Authors | Jongkwang Hong, Bora Cho, Yong Won Hong, Hyeran Byun |
Abstract | In action recognition research, the two primary types of information are appearance and motion, which are learned from RGB images through visual sensors. However, depending on the action characteristics, contextual information, such as the existence of specific objects or globally shared information in the image, becomes vital for defining the action. For example, the existence of a ball is vital information for distinguishing “kicking” from “running”. Furthermore, some actions share typical global abstract poses, which can be used as a key to classify actions. Based on these observations, we propose a multi-stream network model, which incorporates spatial, temporal, and contextual cues in the image for action recognition. We evaluated the proposed method using C3D or Inflated 3D ConvNet (I3D) as a backbone network on two different action recognition datasets. As a result, we observed an overall improvement in accuracy, demonstrating the effectiveness of our proposed method. |
Tasks | Action Recognition In Videos |
Published | 2019-03-20 |
URL | https://doi.org/10.3390/s19061382 |
https://www.mdpi.com/1424-8220/19/6/1382/htm | |
PWC | https://paperswithcode.com/paper/contextual-action-cues-from-camera-sensor-for |
Repo | |
Framework | |
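A minimal sketch of the multi-stream fusion idea described above: late fusion by averaging per-stream class scores. The streams and their weights here are placeholders, not the paper's C3D/I3D configuration or its contextual-cue extraction.

```python
# Minimal late-fusion sketch for multi-stream action recognition (illustrative only).
# Each "stream" (e.g. spatial / temporal / contextual) is assumed to output class
# logits; the backbone networks themselves (C3D, I3D) are not reproduced here.
import torch
import torch.nn.functional as F

def fuse_streams(stream_logits, weights=None):
    """Average (optionally weighted) softmax scores from several streams."""
    probs = [F.softmax(logits, dim=-1) for logits in stream_logits]
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused.argmax(dim=-1), fused

# Toy example: 3 streams, a batch of 2 clips, 5 action classes.
torch.manual_seed(0)
logits = [torch.randn(2, 5) for _ in range(3)]   # stand-ins for stream outputs
pred, scores = fuse_streams(logits)
print(pred)
```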
A Linearly Convergent Method for Non-Smooth Non-Convex Optimization on the Grassmannian with Applications to Robust Subspace and Dictionary Learning
Title | A Linearly Convergent Method for Non-Smooth Non-Convex Optimization on the Grassmannian with Applications to Robust Subspace and Dictionary Learning |
Authors | Zhihui Zhu, Tianyu Ding, Daniel Robinson, Manolis Tsakiris, René Vidal |
Abstract | Minimizing a non-smooth function over the Grassmannian appears in many applications in machine learning. In this paper we show that if the objective satisfies a certain Riemannian regularity condition with respect to some point in the Grassmannian, then a Riemannian subgradient method with appropriate initialization and geometrically diminishing step size converges at a linear rate to that point. We show that for both the robust subspace learning method Dual Principal Component Pursuit (DPCP) and the Orthogonal Dictionary Learning (ODL) problem, the Riemannian regularity condition is satisfied with respect to appropriate points of interest, namely the subspace orthogonal to the sought subspace for DPCP and the orthonormal dictionary atoms for ODL. Consequently, we obtain in a unified framework significant improvements for the convergence theory of both methods. |
Tasks | Dictionary Learning |
Published | 2019-12-01 |
URL | http://papers.nips.cc/paper/9141-a-linearly-convergent-method-for-non-smooth-non-convex-optimization-on-the-grassmannian-with-applications-to-robust-subspace-and-dictionary-learning |
http://papers.nips.cc/paper/9141-a-linearly-convergent-method-for-non-smooth-non-convex-optimization-on-the-grassmannian-with-applications-to-robust-subspace-and-dictionary-learning.pdf | |
PWC | https://paperswithcode.com/paper/a-linearly-convergent-method-for-non-smooth |
Repo | |
Framework | |
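To illustrate the kind of update the abstract refers to, here is a NumPy sketch of a Riemannian subgradient method with a geometrically diminishing step size, specialized to the DPCP objective min over unit vectors b of ||X^T b||_1. The initialization, step-size constants, and toy data are our own assumptions, not the paper's experimental setup.

```python
# Illustrative Riemannian subgradient sketch for the DPCP objective
#     min_{||b|| = 1}  || X^T b ||_1,
# with geometrically diminishing step size mu_k = mu_0 * beta^k, a projection onto
# the tangent space of the sphere, and a normalization retraction.
import numpy as np

def dpcp_subgradient(X, b0, mu0=1e-1, beta=0.9, iters=200):
    b = b0 / np.linalg.norm(b0)
    for k in range(iters):
        g = X @ np.sign(X.T @ b)            # Euclidean subgradient of ||X^T b||_1
        g_tan = g - (b @ g) * b             # project onto tangent space at b
        b = b - (mu0 * beta**k) * g_tan     # subgradient step
        b = b / np.linalg.norm(b)           # retract back onto the unit sphere
    return b

# Toy data: points near a 2-D subspace of R^3 (span of e1, e2) plus a few outliers.
rng = np.random.default_rng(0)
inliers = np.c_[rng.normal(size=(100, 2)), np.zeros(100)].T
outliers = rng.normal(size=(3, 10))
X = np.hstack([inliers, outliers])
b = dpcp_subgradient(X, rng.normal(size=3))
print(b)   # should be close to +/- e3, the normal of the inlier subspace
```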
Combining PBSMT and NMT Back-translated Data for Efficient NMT
Title | Combining PBSMT and NMT Back-translated Data for Efficient NMT |
Authors | Alberto Poncelas, Maja Popović, Dimitar Shterionov, Gideon Maillette de Buy Wenniger, Andy Way |
Abstract | Neural Machine Translation (NMT) models achieve their best performance when large sets of parallel data are used for training. Consequently, techniques for augmenting the training set have become popular recently. One of these methods is back-translation, which consists of generating synthetic sentences by translating a set of monolingual, target-language sentences using a Machine Translation (MT) model. Generally, NMT models are used for back-translation. In this work, we analyze the performance of models when the training data is extended with synthetic data using different MT approaches. In particular, we investigate back-translated data generated not only by NMT but also by Statistical Machine Translation (SMT) models and combinations of both. The results reveal that the models achieve the best performance when the training set is augmented with back-translated data created by merging different MT approaches. |
Tasks | Machine Translation |
Published | 2019-09-01 |
URL | https://www.aclweb.org/anthology/R19-1107/ |
https://www.aclweb.org/anthology/R19-1107 | |
PWC | https://paperswithcode.com/paper/combining-pbsmt-and-nmt-back-translated-data |
Repo | |
Framework | |
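The abstract describes augmenting authentic parallel data with back-translated synthetic data from different MT systems. Below is a purely illustrative sketch of how such a mixed training set can be assembled once the back-translations exist; the toy sentence pairs are placeholders, and the back-translation step itself (running reverse-direction NMT/SMT systems) is not shown.

```python
# Illustrative only: combine authentic parallel data with synthetic pairs whose
# source side was back-translated by an NMT system and by an SMT system.
# The pairs below are toy placeholders; this is not the paper's pipeline.
import random

def combine(authentic, synthetic_nmt, synthetic_smt, shuffle=True, seed=0):
    """Merge authentic and synthetic (source, target) pairs into one training set."""
    train = list(authentic) + list(synthetic_nmt) + list(synthetic_smt)
    if shuffle:
        random.Random(seed).shuffle(train)
    return train

authentic     = [("ein kleines haus", "a small house")]
synthetic_nmt = [("das wetter ist schön heute", "the weather is nice today")]  # NMT back-translation
synthetic_smt = [("das wetter ist heute schön", "the weather is nice today")]  # SMT back-translation

for src, tgt in combine(authentic, synthetic_nmt, synthetic_smt):
    print(src, "->", tgt)
```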
Zero-Shot Emotion Recognition via Affective Structural Embedding
Title | Zero-Shot Emotion Recognition via Affective Structural Embedding |
Authors | Chi Zhan, Dongyu She, Sicheng Zhao, Ming-Ming Cheng, Jufeng Yang |
Abstract | Image emotion recognition has attracted much attention in recent years due to its wide applications. It aims to classify the emotional response of humans, where candidate emotion categories are generally defined by specific psychological theories, such as Ekman's six basic emotions. However, with the development of psychological theories, emotion categories have become increasingly diverse and fine-grained, and collecting samples for them has become difficult. In this paper, we investigate the zero-shot learning (ZSL) problem in the emotion recognition task, which tries to recognize new, unseen emotions. Specifically, we propose a novel affective-structural embedding framework, utilizing a mid-level semantic representation, i.e., adjective-noun pair (ANP) features, to construct an affective embedding space. By doing this, the learned intermediate space can narrow the semantic gap between low-level visual and high-level semantic features. In addition, we introduce an affective adversarial constraint to retain the discriminative capacity of visual features and the affective structural information of semantic features during the training process. Our method is evaluated on five widely used affective datasets, and the experimental results show that the proposed algorithm outperforms the state-of-the-art approaches. |
Tasks | Emotion Recognition, Zero-Shot Learning |
Published | 2019-10-01 |
URL | http://openaccess.thecvf.com/content_ICCV_2019/html/Zhan_Zero-Shot_Emotion_Recognition_via_Affective_Structural_Embedding_ICCV_2019_paper.html |
http://openaccess.thecvf.com/content_ICCV_2019/papers/Zhan_Zero-Shot_Emotion_Recognition_via_Affective_Structural_Embedding_ICCV_2019_paper.pdf | |
PWC | https://paperswithcode.com/paper/zero-shot-emotion-recognition-via-affective |
Repo | |
Framework | |
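For context on the zero-shot setup described above, here is a generic embedding-based ZSL baseline (nearest semantic prototype). It is not the paper's affective-structural or adversarial model; the projection matrix, feature dimensions, and prototypes below are made-up placeholders.

```python
# Generic zero-shot baseline for illustration: project visual features into a
# semantic embedding space and assign the nearest prototype of an *unseen* class.
# This is NOT the paper's affective-structural / adversarial method.
import numpy as np

def zsl_predict(visual_feats, W, unseen_prototypes):
    """visual_feats: (N, d_v); W: (d_v, d_s) learned projection;
    unseen_prototypes: dict {class_name: (d_s,) semantic vector}."""
    emb = visual_feats @ W                                    # project to semantic space
    names = list(unseen_prototypes)
    P = np.stack([unseen_prototypes[n] for n in names])       # (C, d_s)
    emb_n = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # cosine similarity
    P_n = P / np.linalg.norm(P, axis=1, keepdims=True)
    sims = emb_n @ P_n.T
    return [names[i] for i in sims.argmax(axis=1)]

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 300))                               # placeholder projection
feats = rng.normal(size=(4, 512))                             # placeholder visual/ANP features
protos = {"awe": rng.normal(size=300), "contentment": rng.normal(size=300)}
print(zsl_predict(feats, W, protos))
```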
Topological Data Analysis for Discourse Semantics?
Title | Topological Data Analysis for Discourse Semantics? |
Authors | Ketki Savle, Wlodek Zadrozny, Minwoo Lee |
Abstract | In this paper we present new results on applying topological data analysis to discourse structures. We show that topological information, extracted from the relationships between sentences, can be used in inference; namely, it can be applied to the very difficult legal entailment problem given in the COLIEE 2018 data set. Previous results of Doshi and Zadrozny (2018) and Gholizadeh et al. (2018) show that topological features are useful for classification. The applications of computational topology to entailment are novel and, in our view, provide a new set of tools for discourse semantics: computational topology can perhaps provide a bridge between the brittleness of logic and the regression of neural networks. We discuss the advantages and disadvantages of using topological information, as well as some open problems, such as the explainability of classifier decisions. |
Tasks | Topological Data Analysis |
Published | 2019-05-01 |
URL | https://www.aclweb.org/anthology/W19-0605/ |
https://www.aclweb.org/anthology/W19-0605 | |
PWC | https://paperswithcode.com/paper/topological-data-analysis-for-discourse |
Repo | |
Framework | |
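To make this kind of pipeline concrete, here is a minimal persistent-homology sketch over sentence embeddings, assuming the `ripser` package and random placeholder embeddings. The paper's actual discourse features and entailment classifier are not reproduced.

```python
# Minimal sketch (not the authors' pipeline): compute persistent homology over a
# distance matrix built from sentence embeddings, then summarize the persistence
# diagrams into simple features for a downstream classifier.
# Assumes the `ripser` package; the embeddings here are random placeholders.
import numpy as np
from ripser import ripser
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
sentence_embeddings = rng.normal(size=(12, 50))      # 12 sentences of a document

D = squareform(pdist(sentence_embeddings, metric="cosine"))
diagrams = ripser(D, distance_matrix=True, maxdim=1)["dgms"]

def diagram_features(dgm):
    """Total and maximum persistence of the finite bars in one diagram."""
    finite = dgm[np.isfinite(dgm[:, 1])]
    pers = finite[:, 1] - finite[:, 0]
    return [float(pers.sum()), float(pers.max()) if len(pers) else 0.0]

features = diagram_features(diagrams[0]) + diagram_features(diagrams[1])
print(features)   # e.g. fed to an entailment classifier
```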
The Parameterized Complexity of Cascading Portfolio Scheduling
Title | The Parameterized Complexity of Cascading Portfolio Scheduling |
Authors | Eduard Eiben, Robert Ganian, Iyad Kanj, Stefan Szeider |
Abstract | Cascading portfolio scheduling is a static algorithm selection strategy which uses a sample of test instances to compute an optimal ordering (a cascading schedule) of a portfolio of available algorithms. The algorithms are then applied to each future instance according to this cascading schedule, until some algorithm in the schedule succeeds. Cascading algorithm scheduling has proven to be effective in several applications, including QBF solving and the generation of ImageNet classification models. It is known that the computation of an optimal cascading schedule in the offline phase is NP-hard. In this paper we study the parameterized complexity of this problem and establish its fixed-parameter tractability by utilizing structural properties of the success relation between algorithms and test instances. Our findings are significant as they reveal that in spite of the intractability of the problem in its general form, one can indeed exploit sparseness or density of the success relation to obtain non-trivial runtime guarantees for finding an optimal cascading schedule. |
Tasks | |
Published | 2019-12-01 |
URL | http://papers.nips.cc/paper/8983-the-parameterized-complexity-of-cascading-portfolio-scheduling |
http://papers.nips.cc/paper/8983-the-parameterized-complexity-of-cascading-portfolio-scheduling.pdf | |
PWC | https://paperswithcode.com/paper/the-parameterized-complexity-of-cascading |
Repo | |
Framework | |
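A toy brute-force sketch of the offline problem (which the paper shows is NP-hard in general), under one common objective: minimize the total cost over the sample when each instance cascades through the schedule until some algorithm succeeds. The success/cost matrices and the objective are illustrative assumptions; the paper's fixed-parameter algorithms are not reproduced.

```python
# Toy brute force over cascading schedules (illustrative objective only).
# For each permutation of the portfolio we charge, per test instance, the cost of
# every algorithm tried until one succeeds on that instance.
from itertools import permutations

# success[a][i]: does algorithm a solve instance i?  cost[a]: cost of running a.
success = [
    [True,  False, False, True ],   # algorithm 0
    [False, True,  False, True ],   # algorithm 1
    [True,  True,  True,  False],   # algorithm 2
]
cost = [1.0, 2.0, 5.0]
n_instances = len(success[0])

def schedule_cost(order):
    total = 0.0
    for i in range(n_instances):
        for a in order:
            total += cost[a]
            if success[a][i]:
                break
    return total

best = min(permutations(range(len(cost))), key=schedule_cost)
print(best, schedule_cost(best))
```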
FAST OBJECT LOCALIZATION VIA SENSITIVITY ANALYSIS
Title | FAST OBJECT LOCALIZATION VIA SENSITIVITY ANALYSIS |
Authors | Mohammad K. Ebrahimpour, David C. Noelle |
Abstract | Deep Convolutional Neural Networks (CNNs) have been repeatedly shown to perform well on image classification tasks, successfully recognizing a broad array of objects when given sufficient training data. Methods for object localization, however, are still in need of substantial improvement. Common approaches to this problem involve the use of a sliding window, sometimes at multiple scales, providing input to a deep CNN trained to classify the contents of the window. In general, these approaches are time-consuming, requiring many classification calculations. In this paper, we offer a fundamentally different approach to the localization of recognized objects in images. Our method is predicated on the idea that a deep CNN capable of recognizing an object must implicitly contain knowledge about object location in its connection weights. We provide a simple method to interpret classifier weights in the context of individual classified images. This method involves the calculation of the derivative of network-generated activation patterns, such as the activation of output class label units, with regard to each input pixel, performing a sensitivity analysis that identifies the pixels that, in a local sense, have the greatest influence on internal representations and object recognition. These derivatives can be efficiently computed using a single backward pass through the deep CNN classifier, producing a sensitivity map of the image. We demonstrate that a simple linear mapping can be learned from sensitivity maps to bounding box coordinates, localizing the recognized object. Our experimental results, using real-world data sets for which ground truth localization information is known, reveal competitive accuracy from our fast technique. |
Tasks | Image Classification, Object Localization, Object Recognition |
Published | 2019-05-01 |
URL | https://openreview.net/forum?id=rkzUYjCcFm |
https://openreview.net/pdf?id=rkzUYjCcFm | |
PWC | https://paperswithcode.com/paper/fast-object-localization-via-sensitivity |
Repo | |
Framework | |
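A PyTorch sketch of the core sensitivity-map step described above: the gradient of the winning class score with respect to the input pixels, obtained with a single backward pass. The backbone is an untrained torchvision ResNet-18 stand-in, and the learned linear mapping from sensitivity maps to bounding-box coordinates is not reproduced.

```python
# Sketch of the sensitivity-map computation: gradient of the top class score with
# respect to the input pixels via one backward pass. The backbone is a stand-in
# (untrained ResNet-18); load a trained classifier in practice.
import torch
from torchvision.models import resnet18

model = resnet18().eval()                                  # untrained stand-in

image = torch.rand(1, 3, 224, 224, requires_grad=True)    # placeholder input image
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()                            # single backward pass

# Sensitivity map: max absolute gradient over the colour channels.
sensitivity = image.grad.abs().max(dim=1).values.squeeze(0)   # shape (224, 224)
print(sensitivity.shape, float(sensitivity.max()))
# A simple linear regressor from such maps to box coordinates would follow here.
```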
A comparison of statistical association measures for identifying dependency-based collocations in various languages.
Title | A comparison of statistical association measures for identifying dependency-based collocations in various languages. |
Authors | Marcos Garcia, Marcos García Salido, Margarita Alonso-Ramos |
Abstract | This paper presents an exploration of different statistical association measures to automatically identify collocations from corpora in English, Portuguese, and Spanish. To evaluate the impact of the association metrics, we manually annotated corpora with three different syntactic patterns of collocations (adjective-noun, verb-object, and nominal compounds). We took advantage of the PARSEME 1.1 Shared Task corpora by selecting a subset of 155k tokens in the three languages, in which we annotated 1,526 collocations with the corresponding Lexical Functions according to the Meaning-Text Theory. Using the resulting gold standard, we carried out a comparison between frequency data and several well-known association measures, both symmetric and asymmetric. The results show that the combination of dependency triples with raw frequency information is as powerful as the best association measures in most syntactic patterns and languages. Furthermore, and despite the asymmetric behaviour of collocations, directional approaches perform worse than the symmetric ones in the extraction of these phraseological combinations. |
Tasks | |
Published | 2019-08-01 |
URL | https://www.aclweb.org/anthology/W19-5107/ |
https://www.aclweb.org/anthology/W19-5107 | |
PWC | https://paperswithcode.com/paper/a-comparison-of-statistical-association |
Repo | |
Framework | |
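To illustrate one of the symmetric association measures typically compared in such studies, here is a small pointwise mutual information (PMI) computation over dependency-pair counts. The toy counts are made up, and this is not the authors' evaluation code.

```python
# Illustrative PMI over (head, relation, dependent) dependency triples; toy counts.
# PMI(h, d) = log2( p(h, d) / (p(h) * p(d)) ), estimated from co-occurrence counts.
import math
from collections import Counter

triples = [
    ("take", "obj", "decision"), ("take", "obj", "photo"),
    ("make", "obj", "decision"), ("take", "obj", "decision"),
    ("strong", "amod", "tea"),   ("strong", "amod", "coffee"),
]

pair_counts = Counter((h, d) for h, _, d in triples)
head_counts = Counter(h for h, _, d in triples)
dep_counts  = Counter(d for h, _, d in triples)
N = len(triples)

def pmi(head, dep):
    p_joint = pair_counts[(head, dep)] / N
    p_head  = head_counts[head] / N
    p_dep   = dep_counts[dep] / N
    return math.log2(p_joint / (p_head * p_dep))

print(pmi("take", "decision"))   # higher PMI -> stronger association
```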
Parts of Speech Tagging for Kannada
Title | Parts of Speech Tagging for Kannada |
Authors | Swaroop L R, Rakshith Gowda G S, Sourabh U, Shriram Hegde |
Abstract | Part-of-speech (POS) tagging is the process of assigning a part-of-speech tag to each and every word in a sentence. In this paper, we present a POS tagger for Kannada, a low-resource South Asian language, using Conditional Random Fields. The POS tagger developed in this work uses novel features native to the Kannada language. These include Sandhi splitting, where a compound word is broken down into two or more meaningful constituent words. The proposed model is trained and tested on a tagged dataset containing 21 thousand sentences and achieves a highest accuracy of 94.56%. |
Tasks | |
Published | 2019-09-01 |
URL | https://www.aclweb.org/anthology/R19-2005/ |
https://www.aclweb.org/anthology/R19-2005 | |
PWC | https://paperswithcode.com/paper/parts-of-speech-tagging-for-kannada |
Repo | |
Framework | |
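The abstract describes a CRF tagger with language-specific features. As a hedged illustration only, the sketch below sets up a minimal CRF tagger with the `sklearn-crfsuite` package and generic surface features; the paper does not state that this library or these features were used, and the Sandhi-splitting features are not reproduced. The toy sentences are placeholder romanized tokens.

```python
# Minimal CRF tagging sketch (illustrative): generic surface features only.
# Assumes the sklearn-crfsuite package; the paper's Kannada-specific features
# (e.g. Sandhi splitting) are not reproduced here.
import sklearn_crfsuite

def word_features(sent, i):
    w = sent[i]
    return {
        "word": w,
        "suffix3": w[-3:],
        "is_first": i == 0,
        "prev_word": sent[i - 1] if i > 0 else "<BOS>",
    }

def sent_features(sent):
    return [word_features(sent, i) for i in range(len(sent))]

# Toy training data: (tokens, tags) pairs; real data would be the tagged corpus.
train = [(["nanu", "shaalege", "hogide"], ["PRON", "NOUN", "VERB"]),
         (["avalu", "pustaka", "odide"], ["PRON", "NOUN", "VERB"])]

X_train = [sent_features(s) for s, _ in train]
y_train = [tags for _, tags in train]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict([sent_features(["avanu", "mane", "nodide"])]))
```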
The Effectiveness of Simple Hybrid Systems for Hypernym Discovery
Title | The Effectiveness of Simple Hybrid Systems for Hypernym Discovery |
Authors | William Held, Nizar Habash |
Abstract | Hypernymy modeling has largely been separated according to two paradigms, pattern-based methods and distributional methods. However, recent works utilizing a mix of these strategies have yielded state-of-the-art results. This paper evaluates the contribution of both paradigms to hybrid success by evaluating the benefits of hybrid treatment of baseline models from each paradigm. Even with a simple methodology for each individual system, utilizing a hybrid approach establishes new state-of-the-art results on two domain-specific English hypernym discovery tasks and outperforms all non-hybrid approaches in a general English hypernym discovery task. |
Tasks | Hypernym Discovery |
Published | 2019-07-01 |
URL | https://www.aclweb.org/anthology/P19-1327/ |
https://www.aclweb.org/anthology/P19-1327 | |
PWC | https://paperswithcode.com/paper/the-effectiveness-of-simple-hybrid-systems |
Repo | |
Framework | |
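A rough sketch of the kind of hybrid scoring the abstract refers to: interpolating a pattern-based evidence score with a distributional (cosine) similarity. The combination rule, pattern counts, and vectors are illustrative assumptions, not the authors' systems.

```python
# Illustrative hybrid hypernym scoring: interpolate pattern-based evidence with a
# distributional cosine similarity. Weights, counts, and vectors are placeholders.
import numpy as np

def hybrid_score(pattern_count, vec_hypo, vec_cand, alpha=0.5):
    """pattern_count: times a pattern like 'X such as Y' linked candidate and hyponym."""
    pattern_score = np.log1p(pattern_count)                   # dampen raw counts
    cos = vec_hypo @ vec_cand / (np.linalg.norm(vec_hypo) * np.linalg.norm(vec_cand))
    return alpha * pattern_score + (1 - alpha) * cos

rng = np.random.default_rng(0)
v_dog, v_animal, v_car = (rng.normal(size=100) for _ in range(3))
print(hybrid_score(7, v_dog, v_animal))   # strong pattern evidence -> higher score
print(hybrid_score(0, v_dog, v_car))      # no pattern evidence
```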
RANLP 2019 Multilingual Headline Generation Task Overview
Title | RANLP 2019 Multilingual Headline Generation Task Overview |
Authors | Marina Litvak, John M. Conroy, Peter A. Rankel |
Abstract | The objective of the 2019 RANLP Multilingual Headline Generation (HG) Task is to explore some of the challenges highlighted by current state-of-the-art approaches to creating informative headlines for news articles: non-descriptive headlines, out-of-domain training data, generating headlines from long documents that are not well represented by the head heuristic, and dealing with a multilingual domain. The task makes available a large set of training data for headline generation and provides evaluation methods for the task. Our data sets are drawn from Wikinews as well as Wikipedia. Participants were required to generate headlines for at least 3 languages, which were evaluated via automatic methods. A key aspect of the task is multilinguality: the task measures the performance of multilingual headline generation systems using Wikipedia and Wikinews articles in multiple languages. The objective is to assess the performance of automatic headline generation techniques on text documents covering a diverse range of languages and topics outside the news domain. |
Tasks | |
Published | 2019-09-01 |
URL | https://www.aclweb.org/anthology/W19-8901/ |
https://www.aclweb.org/anthology/W19-8901 | |
PWC | https://paperswithcode.com/paper/ranlp-2019-multilingual-headline-generation |
Repo | |
Framework | |
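The overview above mentions that headlines were evaluated with automatic methods. As a purely illustrative example of one such metric, here is a simplified ROUGE-1 F1 between a generated headline and a reference; this assumes a ROUGE-style unigram overlap and is not the task's official evaluation tooling.

```python
# Illustrative ROUGE-1 F1 between a generated headline and a reference headline.
# Simplified re-implementation for illustration only.
from collections import Counter

def rouge1_f1(candidate, reference):
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((cand & ref).values())          # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("council approves new bike sharing program",
                "city council approves bike sharing program"))
```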
Automatic Detection and Classification of Argument Components using Multi-task Deep Neural Network
Title | Automatic Detection and Classification of Argument Components using Multi-task Deep Neural Network |
Authors | Jean-Christophe Mensonides, Sébastien Harispe, Jacky Montmain, Véronique Thireau |
Abstract | |
Tasks | |
Published | 2019-09-01 |
URL | https://www.aclweb.org/anthology/W19-7404/ |
https://www.aclweb.org/anthology/W19-7404 | |
PWC | https://paperswithcode.com/paper/automatic-detection-and-classification-of-1 |
Repo | |
Framework | |