July 26, 2019

1817 words 9 mins read

Paper Group NANR 14

Developing LexO: a Collaborative Editor of Multilingual Lexica and Termino-Ontological Resources in the Humanities

Title Developing LexO: a Collaborative Editor of Multilingual Lexica and Termino-Ontological Resources in the Humanities
Authors Andrea Bellandi, Emiliano Giovannetti, Silvia Piccini, Anja Weingart
Abstract
Tasks
Published 2017-09-01
URL https://www.aclweb.org/anthology/W17-7010/
PDF https://www.aclweb.org/anthology/W17-7010
PWC https://paperswithcode.com/paper/developing-lexo-a-collaborative-editor-of
Repo
Framework

CUNI Experiments for WMT17 Metrics Task

Title CUNI Experiments for WMT17 Metrics Task
Authors David Mareček, Ondřej Bojar, Ondřej Hübsch, Rudolf Rosa, Dušan Variš
Abstract
Tasks Dependency Parsing, Machine Translation, Word Alignment
Published 2017-09-01
URL https://www.aclweb.org/anthology/W17-4769/
PDF https://www.aclweb.org/anthology/W17-4769
PWC https://paperswithcode.com/paper/cuni-experiments-for-wmt17-metrics-task
Repo
Framework

Applying BLAST to Text Reuse Detection in Finnish Newspapers and Journals, 1771-1910

Title Applying BLAST to Text Reuse Detection in Finnish Newspapers and Journals, 1771-1910
Authors Aleksi Vesanto, Asko Nivala, Heli Rantala, Tapio Salakoski, Hannu Salmi, Filip Ginter
Abstract
Tasks Optical Character Recognition
Published 2017-05-01
URL https://www.aclweb.org/anthology/W17-0510/
PDF https://www.aclweb.org/anthology/W17-0510
PWC https://paperswithcode.com/paper/applying-blast-to-text-reuse-detection-in
Repo
Framework

Fusion of Simple Models for Native Language Identification

Title Fusion of Simple Models for Native Language Identification
Authors Fabio Kepler, Ramon Astudillo, Alberto Abad
Abstract In this paper we describe the approaches we explored for the 2017 Native Language Identification shared task. We focused on simple word and sub-word units, avoiding heavy use of hand-crafted features. Following recent trends, we explored linear and neural network models to attempt to compensate for the lack of rich feature use. Initial efforts yielded f1-scores of 82.39% and 83.77% on the development and test sets of the fusion track, and were officially submitted to the task as team L2F. After the task was closed, we carried out further experiments and relied on a late fusion strategy for combining our simple proposed approaches with modifications of the baselines provided by the task. As expected, the i-vector-based sub-system dominates the performance of the system combinations and is the major contributor to our achieved scores. Our best combined system achieves 90.1% and 90.2% f1-score on the development and test sets of the fusion track, respectively.
Tasks Information Retrieval, Language Identification, Native Language Identification
Published 2017-09-01
URL https://www.aclweb.org/anthology/W17-5048/
PDF https://www.aclweb.org/anthology/W17-5048
PWC https://paperswithcode.com/paper/fusion-of-simple-models-for-native-language
Repo
Framework
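The late fusion strategy described in the abstract can be sketched as a weighted average of per-class scores from each subsystem. Everything below (the weights, scores, and subsystem labels) is invented toy data for illustration, not the team's actual configuration:

```python
# Minimal late-fusion sketch: combine per-class probability scores from
# several simple classifiers by a weighted average, then argmax.
def late_fusion(score_lists, weights):
    """score_lists: one list of class probabilities per subsystem."""
    assert len(score_lists) == len(weights)
    n_classes = len(score_lists[0])
    fused = [0.0] * n_classes
    for scores, w in zip(score_lists, weights):
        for i, s in enumerate(scores):
            fused[i] += w * s
    total = sum(weights)
    return [f / total for f in fused]

# Three hypothetical subsystems voting over three L1 classes; the
# i-vector system gets the largest weight, mirroring the paper's finding
# that it dominates the combination.
fused = late_fusion(
    [[0.6, 0.3, 0.1],   # word n-gram model (toy scores)
     [0.5, 0.4, 0.1],   # sub-word model (toy scores)
     [0.2, 0.7, 0.1]],  # i-vector baseline (toy scores)
    weights=[0.25, 0.25, 0.5],
)
predicted = max(range(len(fused)), key=fused.__getitem__)
```

With these toy numbers the heavily weighted i-vector subsystem tips the decision toward its preferred class, which is exactly the dominance effect the abstract reports.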

Normalizing Medieval German Texts: from rules to deep learning

Title Normalizing Medieval German Texts: from rules to deep learning
Authors Natalia Korchagina
Abstract
Tasks Machine Translation
Published 2017-05-01
URL https://www.aclweb.org/anthology/W17-0504/
PDF https://www.aclweb.org/anthology/W17-0504
PWC https://paperswithcode.com/paper/normalizing-medieval-german-texts-from-rules
Repo
Framework

Squib: Effects of Cognitive Effort on the Resolution of Overspecified Descriptions

Title Squib: Effects of Cognitive Effort on the Resolution of Overspecified Descriptions
Authors Ivandré Paraboni, Alex Gwo Jen Lan, Matheus Mendes de Sant'Ana, Flávio Luiz Coutinho
Abstract Studies in referring expression generation (REG) have shown different effects of referential overspecification on the resolution of certain descriptions. To further investigate effects of this kind, this article reports two eye-tracking experiments that measure the time required to recognize target objects based on different kinds of information. Results suggest that referential overspecification may be either helpful or detrimental to identification depending on the kind of information that is actually overspecified, an insight that may be useful for the design of more informed hearer-oriented REG algorithms.
Tasks Eye Tracking, Text Generation
Published 2017-06-01
URL https://www.aclweb.org/anthology/J17-2006/
PDF https://www.aclweb.org/anthology/J17-2006
PWC https://paperswithcode.com/paper/squib-effects-of-cognitive-effort-on-the
Repo
Framework

Enhanced skeleton visualization for view invariant human action recognition

Title Enhanced skeleton visualization for view invariant human action recognition
Authors Mengyuan Liu, Hong Liu, Chen Chen
Abstract Human action recognition based on skeletons has wide applications in human–computer interaction and intelligent surveillance. However, view variations and noisy data bring challenges to this task. Moreover, it remains a problem to effectively represent spatio-temporal skeleton sequences. To address these problems jointly, this work presents an enhanced skeleton visualization method for view invariant human action recognition. Our method consists of three stages. First, a sequence-based view invariant transform is developed to eliminate the effect of view variations on spatio-temporal locations of skeleton joints. Second, the transformed skeletons are visualized as a series of color images, which implicitly encode the spatio-temporal information of skeleton joints. Furthermore, visual and motion enhancement methods are applied to the color images to enhance their local patterns. Third, a convolutional neural network-based model is adopted to extract robust and discriminative features from the color images. The final action class scores are generated by decision-level fusion of deep features. Extensive experiments on four challenging datasets consistently demonstrate the superiority of our method.
Tasks Skeleton Based Action Recognition, Temporal Action Localization
Published 2017-08-01
URL https://doi.org/10.1016/j.patcog.2017.02.030
PDF https://nkliuyifang.github.io/papers/PR2017.pdf
PWC https://paperswithcode.com/paper/enhanced-skeleton-visualization-for-view
Repo
Framework
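The core visualization idea, mapping spatio-temporal joint coordinates onto the channels of a color image, can be sketched as follows. The min-max normalization and the joints-by-frames layout here are a minimal illustration of the concept, not the paper's exact transform or enhancement pipeline:

```python
import numpy as np

# Map a skeleton sequence of shape (frames, joints, 3) to a color image
# whose rows are joints, columns are frames, and whose RGB channels carry
# the min-max normalized x, y, z coordinates.
def skeleton_to_image(seq):
    seq = np.asarray(seq, dtype=float)           # (T, J, 3)
    lo = seq.min(axis=(0, 1), keepdims=True)     # per-channel minimum
    hi = seq.max(axis=(0, 1), keepdims=True)
    span = np.where(hi - lo == 0, 1, hi - lo)    # avoid divide-by-zero
    norm = (seq - lo) / span
    img = (norm * 255).astype(np.uint8)          # (T, J, 3)
    return img.transpose(1, 0, 2)                # (J, T, 3): joints x frames

rng = np.random.default_rng(0)
image = skeleton_to_image(rng.normal(size=(40, 25, 3)))  # 40 frames, 25 joints
```

Such an image implicitly encodes the whole sequence, so a standard CNN for images can then extract spatio-temporal features from it.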

Automatically Extracting Variant-Normalization Pairs for Japanese Text Normalization

Title Automatically Extracting Variant-Normalization Pairs for Japanese Text Normalization
Authors Itsumi Saito, Kyosuke Nishida, Kugatsu Sadamitsu, Kuniko Saito, Junji Tomita
Abstract Social media texts, such as tweets from Twitter, contain many types of non-standard tokens, and the number of normalization approaches for handling such noisy text has been increasing. We present a method for automatically extracting pairs of a variant word and its normal form from unsegmented text on the basis of a pair-wise similarity approach. We incorporated the acquired variant-normalization pairs into Japanese morphological analysis. The experimental results show that our method can extract widely covered variants from large Twitter data and improve the recall of normalization without degrading the overall accuracy of Japanese morphological analysis.
Tasks Machine Translation, Morphological Analysis
Published 2017-11-01
URL https://www.aclweb.org/anthology/I17-1094/
PDF https://www.aclweb.org/anthology/I17-1094
PWC https://paperswithcode.com/paper/automatically-extracting-variant
Repo
Framework
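The pair-wise similarity idea behind the extraction step can be sketched as below. The frequency thresholds, similarity measure, and toy corpus are all invented for illustration; the paper's actual scoring over unsegmented Japanese text is more involved:

```python
from difflib import SequenceMatcher

# Propose (variant, normal form) pairs by pairing rare spellings with
# frequent ones whose character similarity exceeds a threshold.
def extract_pairs(freq, threshold=0.6, rare_max=2, common_min=10):
    variants = [w for w, c in freq.items() if c <= rare_max]
    normals = [w for w, c in freq.items() if c >= common_min]
    pairs = []
    for v in variants:
        for n in normals:
            sim = SequenceMatcher(None, v, n).ratio()
            if sim >= threshold:
                pairs.append((v, n, round(sim, 2)))
    return pairs

# Toy corpus counts: "cooool" is a noisy variant of the frequent "cool".
pairs = extract_pairs({"cooool": 1, "cool": 50, "cat": 40})
```

The extracted pairs would then be fed into the morphological analyzer's lexicon, which is where the paper reports its recall gains.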

Japanese all-words WSD system using the Kyoto Text Analysis ToolKit

Title Japanese all-words WSD system using the Kyoto Text Analysis ToolKit
Authors Hiroyuki Shinnou, Kanako Komiya, Minoru Sasaki, Shinsuke Mori
Abstract
Tasks Domain Adaptation, Morphological Analysis, Word Sense Disambiguation
Published 2017-11-01
URL https://www.aclweb.org/anthology/Y17-1052/
PDF https://www.aclweb.org/anthology/Y17-1052
PWC https://paperswithcode.com/paper/japanese-all-words-wsd-system-using-the-kyoto
Repo
Framework

Compositional Semantics using Feature-Based Models from WordNet

Title Compositional Semantics using Feature-Based Models from WordNet
Authors Pablo Gamallo, Martín Pereira-Fariña
Abstract This article describes a method to build semantic representations of composite expressions in a compositional way by using WordNet relations to represent the meaning of words. The meaning of a target word is modelled as a vector in which its semantically related words are assigned weights according to both the type of the relationship and the distance to the target word. Word vectors are compositionally combined via syntactic dependencies. Each syntactic dependency triggers two complementary compositional functions, named the head function and the dependent function. The experiments show that the proposed compositional method outperforms the state of the art for both intransitive subject-verb and transitive subject-verb-object constructions.
Tasks
Published 2017-04-01
URL https://www.aclweb.org/anthology/W17-1901/
PDF https://www.aclweb.org/anthology/W17-1901
PWC https://paperswithcode.com/paper/compositional-semantics-using-feature-based
Repo
Framework
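The vector construction and the head function can be sketched with sparse dictionary vectors. The relation weights, decay scheme, and toy lexical relations below are invented for illustration and do not come from the paper:

```python
# Hypothetical per-relation weights (not the paper's values).
REL_WEIGHT = {"synonym": 1.0, "hypernym": 0.5}

def word_vector(relations):
    """relations: list of (related_word, relation_type, distance).
    A related word's weight scales with relation type and decays with
    distance to the target word, as in the abstract."""
    vec = {}
    for word, rel, dist in relations:
        vec[word] = vec.get(word, 0.0) + REL_WEIGHT[rel] / dist
    return vec

def head_function(head_vec, dep_vec):
    # One of the two complementary functions: keep the head's features,
    # reinforcing those the dependent shares (a simple combination choice).
    return {w: wgt * (1.0 + dep_vec.get(w, 0.0)) for w, wgt in head_vec.items()}

dog = word_vector([("canine", "synonym", 1), ("animal", "hypernym", 2)])
barks = word_vector([("animal", "hypernym", 1), ("sound", "hypernym", 2)])
composed = head_function(barks, dog)   # subject-verb: verb as head
```

The complementary dependent function would symmetrically keep the dependent's features reinforced by the head, giving two contextualized vectors per dependency.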

SkeletonNet: Mining deep part features for 3-D action recognition

Title SkeletonNet: Mining deep part features for 3-D action recognition
Authors Qiuhong Ke, Senjian An, Mohammed Bennamoun, Ferdous Sohel, Farid Boussaid
Abstract This letter presents SkeletonNet, a deep learning framework for skeleton-based 3-D action recognition. Given a skeleton sequence, the spatial structure of the skeleton joints in each frame and the temporal information between multiple frames are two important factors for action recognition. We first extract body-part-based features from each frame of the skeleton sequence. Compared to the original coordinates of the skeleton joints, the proposed features are translation, rotation, and scale invariant. To learn robust temporal information, instead of treating the features of all frames as a time series, we transform the features into images and feed them to the proposed deep learning network, which contains two parts: one to extract general features from the input images, while the other to generate a discriminative and compact representation for action recognition. The proposed method is tested on the SBU kinect interaction dataset, the CMU dataset, and the large-scale NTU RGB+D dataset and achieves state-of-the-art performance.
Tasks Skeleton Based Action Recognition, Time Series
Published 2017-03-31
URL https://doi.org/10.1109/LSP.2017.2690339
PDF https://api.research-repository.uwa.edu.au/portalfiles/portal/32524403/Ke_et_al._2016_SkeletonNet_Mining_deep_part.pdf
PWC https://paperswithcode.com/paper/skeletonnet-mining-deep-part-features-for-3-d
Repo
Framework
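One standard way to obtain translation-, rotation-, and scale-invariant per-frame features, as the abstract requires, is to use normalized pairwise joint distances. This is an illustrative sketch of the invariance property only; the paper's body-part-based features are defined differently:

```python
import numpy as np

# Pairwise joint distances are unchanged by translation and rotation;
# dividing by their mean removes overall body scale as well.
def invariant_features(joints):
    joints = np.asarray(joints, dtype=float)            # (J, 3)
    diff = joints[:, None, :] - joints[None, :, :]      # (J, J, 3)
    dists = np.linalg.norm(diff, axis=-1)               # (J, J)
    iu = np.triu_indices(len(joints), k=1)              # upper triangle
    feats = dists[iu]
    return feats / feats.mean()                         # scale invariance

skel = np.array([[0, 0, 0], [1, 0, 0], [1, 2, 0], [0, 2, 1]], dtype=float)
f1 = invariant_features(skel)
f2 = invariant_features(3.0 * skel + 5.0)   # scaled and translated copy
```

Stacking such per-frame features over time and rendering them as an image is what lets the subsequent CNN learn temporal structure, as the abstract describes.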

The Phrasal-Prepositional Verbs in Philippine English: A Corpus-based Analysis

Title The Phrasal-Prepositional Verbs in Philippine English: A Corpus-based Analysis
Authors Jennibelle Ella, Shirley Dita
Abstract
Tasks
Published 2017-11-01
URL https://www.aclweb.org/anthology/Y17-1008/
PDF https://www.aclweb.org/anthology/Y17-1008
PWC https://paperswithcode.com/paper/the-phrasal-prepositional-verbs-in-philippine
Repo
Framework

Efficient Encoding of Pathology Reports Using Natural Language Processing

Title Efficient Encoding of Pathology Reports Using Natural Language Processing
Authors Rebecka Weegar, Jan F Nygård, Hercules Dalianis
Abstract In this article we present a system that extracts information from pathology reports. The reports are written in Norwegian and contain free text describing prostate biopsies. Currently, these reports are manually coded for research and statistical purposes by trained experts at the Cancer Registry of Norway where the coders extract values for a set of predefined fields that are specific for prostate cancer. The presented system is rule based and achieves an average F-score of 0.91 for the fields Gleason grade, Gleason score, the number of biopsies that contain tumor tissue, and the orientation of the biopsies. The system also identifies reports that contain ambiguity or other content that should be reviewed by an expert. The system shows potential to encode the reports considerably faster, with less resources, and similar high quality to the manual encoding.
Tasks
Published 2017-09-01
URL https://www.aclweb.org/anthology/R17-1100/
PDF https://doi.org/10.26615/978-954-452-049-6_100
PWC https://paperswithcode.com/paper/efficient-encoding-of-pathology-reports-using
Repo
Framework
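The abstract says the system is rule based, so its core is pattern matching over report text. The regex, field names, and toy sentence below are hypothetical illustrations (and in English, whereas the actual reports are Norwegian), not the Cancer Registry's real rules:

```python
import re

# Illustrative rule for one field: a "Gleason score a+b" pattern.
GLEASON = re.compile(r"Gleason\s+score\s*[:=]?\s*(\d)\s*\+\s*(\d)", re.I)

def extract_gleason(report):
    """Return the primary/secondary Gleason grades and their sum,
    or None when the pattern is absent (a report to flag for review)."""
    m = GLEASON.search(report)
    if not m:
        return None
    primary, secondary = int(m.group(1)), int(m.group(2))
    return {"grades": (primary, secondary), "score": primary + secondary}

result = extract_gleason("Biopsy 3 of 8: adenocarcinoma, Gleason score 3+4.")
```

Reports where no rule fires, or where conflicting values match, are the natural candidates for the ambiguity flagging the abstract mentions.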

Tensor Decomposition with Smoothness

Title Tensor Decomposition with Smoothness
Authors Masaaki Imaizumi, Kohei Hayashi
Abstract Real data tensors are usually high dimensional, but their intrinsic information is preserved in a low-dimensional space, which motivates the use of tensor decompositions such as the Tucker decomposition. Often, real data tensors are not only low dimensional but also smooth, meaning that adjacent elements are similar or continuously changing, as typically appears in spatial or temporal data. To incorporate the smoothness property, we propose the smoothed Tucker decomposition (STD). STD leverages smoothness via the sum of a few basis functions, which reduces the number of parameters. The objective function is formulated as a convex problem and, to solve it, an algorithm based on the alternating direction method of multipliers is derived. We theoretically show that, under the smoothness assumption, STD achieves a better error bound. The theoretical result and the performance of STD are numerically verified.
Tasks
Published 2017-08-01
URL https://icml.cc/Conferences/2017/Schedule?showEvent=556
PDF http://proceedings.mlr.press/v70/imaizumi17a/imaizumi17a.pdf
PWC https://paperswithcode.com/paper/tensor-decomposition-with-smoothness
Repo
Framework
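The parameter-reduction idea, representing a smooth factor as a sum of a few basis functions, can be illustrated in isolation. This sketch uses a cosine basis and a plain least-squares fit; it is not STD itself, whose convex objective is solved with ADMM:

```python
import numpy as np

# A smooth length-n factor is expressed through K << n cosine basis
# functions, so K coefficients replace n free parameters.
def cosine_basis(n, k):
    t = np.linspace(0, 1, n)
    return np.stack([np.cos(np.pi * j * t) for j in range(k)], axis=1)  # (n, k)

n, k = 200, 5
B = cosine_basis(n, k)
t = np.linspace(0, 1, n)
true = np.cos(np.pi * 2 * t)                       # smooth ground truth
noisy = true + 0.1 * np.random.default_rng(0).normal(size=n)
coef, *_ = np.linalg.lstsq(B, noisy, rcond=None)   # only K coefficients fit
smooth = B @ coef                                  # reconstructed factor
```

Because the basis cannot represent the noise, the fitted factor is automatically smoothed, which is the mechanism the abstract credits for the improved error bound.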

Joint Optimization of User-desired Content in Multi-document Summaries by Learning from User Feedback

Title Joint Optimization of User-desired Content in Multi-document Summaries by Learning from User Feedback
Authors Avinesh P.V.S, Christian M. Meyer
Abstract In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our methods complement fully automatic methods in producing high-quality summaries with a minimum number of iterations and feedbacks. We conduct multiple simulation-based experiments and analyze the effect of feedback-based concept selection in the ILP setup in order to maximize the user-desired content in the summary.
Tasks Active Learning, Document Summarization, Multi-Document Summarization
Published 2017-07-01
URL https://www.aclweb.org/anthology/P17-1124/
PDF https://www.aclweb.org/anthology/P17-1124
PWC https://paperswithcode.com/paper/joint-optimization-of-user-desired-content-in
Repo
Framework
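The content-selection step can be sketched with a greedy stand-in for the paper's ILP solver: pick sentences that maximize the total weight of newly covered concepts under a length budget, with user feedback re-weighting concepts between iterations. All sentences, concepts, and weights below are toy data:

```python
# Greedy budgeted concept coverage (an approximation of the ILP step).
def select(sentences, weights, budget):
    """sentences: list of (text, concept_set); budget in words."""
    chosen, covered, length = [], set(), 0
    while True:
        best, best_gain = None, 0.0
        for i, (text, concepts) in enumerate(sentences):
            if i in chosen or length + len(text.split()) > budget:
                continue
            gain = sum(weights[c] for c in concepts - covered)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            return chosen
        chosen.append(best)
        covered |= sentences[best][1]
        length += len(sentences[best][0].split())

sents = [("floods hit the coast", {"floods", "coast"}),
         ("the match was delayed", {"sport"}),
         ("coastal towns evacuated", {"coast", "evacuation"})]
w = {"floods": 1.0, "coast": 1.0, "sport": 0.2, "evacuation": 1.0}
w["sport"] = 0.0          # simulated user feedback: reject this concept
summary = select(sents, w, budget=8)
```

In the interactive loop the solver is re-run after each feedback round, so rejected concepts stop attracting sentences and accepted ones are reinforced, which is how the method converges on user-desired content in few iterations.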