July 26, 2019

2558 words 13 mins read

Paper Group NAWR 5

RDF2Vec: RDF Graph Embeddings and Their Applications. Neural Networks for Efficient Bayesian Decoding of Natural Images from Retinal Neurons. Context Selection for Embedding Models. Image Super-Resolution via Deep Recursive Residual Network. Instance Weighting for Neural Machine Translation Domain Adaptation. Leveraging Linguistic Structures for Na …

RDF2Vec: RDF Graph Embeddings and Their Applications

Title RDF2Vec: RDF Graph Embeddings and Their Applications
Authors Petar Ristoski, Jessica Rosati, Tommaso Di Noia, Renato De Leone, Heiko Paulheim
Abstract Linked Open Data has been recognized as a valuable source for background information in many data mining and information retrieval tasks. However, most of the existing tools require features in propositional form, i.e., a vector of nominal or numerical features associated with an instance, while Linked Open Data sources are graphs by nature. In this paper, we present RDF2Vec, an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs. We generate sequences by leveraging local information from graph sub-structures, harvested by Weisfeiler-Lehman Subtree RDF Graph Kernels and graph walks, and learn latent numerical representations of entities in RDF graphs. We evaluate our approach on three different tasks: (i) standard machine learning tasks, (ii) entity and document modeling, and (iii) content-based recommender systems. The evaluation shows that the proposed entity embeddings outperform existing techniques, and that pre-computed feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be easily reused for different tasks.
Tasks Entity Embeddings, Information Retrieval, Knowledge Graph Embedding, Knowledge Graph Embeddings, Knowledge Graphs, Language Modelling, Node Classification, Recommendation Systems
Published 2017-11-10
URL http://www.semantic-web-journal.net/content/rdf2vec-rdf-graph-embeddings-and-their-applications-1
PDF http://www.semantic-web-journal.net/system/files/swj1738.pdf
PWC https://paperswithcode.com/paper/rdf2vec-rdf-graph-embeddings-and-their
Repo https://github.com/IBCNServices/pyRDF2Vec
Framework none
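
The abstract describes generating sequences from graph walks and embedding them with a language model. Below is a minimal sketch of that walk-and-embed idea, assuming a toy dictionary-based RDF graph and gensim's Word2Vec (version 4+ argument names); the entities, walk depth, and hyperparameters are illustrative, and the full approach (including the Weisfeiler-Lehman subtree kernel walks) lives in the pyRDF2Vec repo linked above.

```python
# Minimal sketch of the RDF2Vec idea: random walks over an RDF graph treated as
# "sentences" for word2vec. Toy graph and hyperparameters are illustrative only.
import random
from gensim.models import Word2Vec  # gensim >= 4.0 (uses vector_size/epochs)

# Toy RDF graph: subject -> list of (predicate, object) edges.
graph = {
    "dbr:Berlin":  [("dbo:country", "dbr:Germany"), ("rdf:type", "dbo:City")],
    "dbr:Germany": [("dbo:capital", "dbr:Berlin"), ("rdf:type", "dbo:Country")],
    "dbr:Paris":   [("dbo:country", "dbr:France"), ("rdf:type", "dbo:City")],
    "dbr:France":  [("dbo:capital", "dbr:Paris"), ("rdf:type", "dbo:Country")],
}

def random_walks(graph, walks_per_entity=10, depth=4, seed=0):
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_entity):
            walk, node = [start], start
            for _ in range(depth):
                edges = graph.get(node)
                if not edges:
                    break
                pred, obj = rng.choice(edges)
                walk.extend([pred, obj])   # walks interleave predicates and entities
                node = obj
            walks.append(walk)
    return walks

walks = random_walks(graph)
model = Word2Vec(walks, vector_size=32, window=5, sg=1, min_count=1, epochs=50)
print(model.wv.most_similar("dbr:Berlin", topn=3))
```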

Neural Networks for Efficient Bayesian Decoding of Natural Images from Retinal Neurons

Title Neural Networks for Efficient Bayesian Decoding of Natural Images from Retinal Neurons
Authors Nikhil Parthasarathy, Eleanor Batty, William Falcon, Thomas Rutten, Mohit Rajpal, E.J. Chichilnisky, Liam Paninski
Abstract Decoding sensory stimuli from neural signals can be used to reveal how we sense our physical environment, and is valuable for the design of brain-machine interfaces. However, existing linear techniques for neural decoding may not fully reveal or exploit the fidelity of the neural signal. Here we develop a new approximate Bayesian method for decoding natural images from the spiking activity of populations of retinal ganglion cells (RGCs). We sidestep known computational challenges with Bayesian inference by exploiting artificial neural networks developed for computer vision, enabling fast nonlinear decoding that incorporates natural scene statistics implicitly. We use a decoder architecture that first linearly reconstructs an image from RGC spikes, then applies a convolutional autoencoder to enhance the image. The resulting decoder, trained on natural images and simulated neural responses, significantly outperforms linear decoding, as well as simple point-wise nonlinear decoding. These results provide a tool for the assessment and optimization of retinal prosthesis technologies, and reveal that the retina may provide a more accurate representation of the visual scene than previously appreciated.
Tasks Bayesian Inference
Published 2017-12-01
URL http://papers.nips.cc/paper/7222-neural-networks-for-efficient-bayesian-decoding-of-natural-images-from-retinal-neurons
PDF http://papers.nips.cc/paper/7222-neural-networks-for-efficient-bayesian-decoding-of-natural-images-from-retinal-neurons.pdf
PWC https://paperswithcode.com/paper/neural-networks-for-efficient-bayesian
Repo https://github.com/nikparth/visual-neural-decode
Framework tf
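
The decoder in this paper is two-stage: a linear reconstruction from spikes, then a convolutional network that enhances it. Here is a small sketch of that pipeline under simulated data; the least-squares linear decoder is standard, but the tiny Keras autoencoder, shapes, and training settings are placeholders rather than the authors' architecture (see their repo above for the real model).

```python
# Two-stage decoder sketch: (1) least-squares linear reconstruction of images from
# RGC spike counts, (2) a small convolutional autoencoder that enhances the linear
# reconstruction. Data here is random; shapes and the CAE are placeholders.
import numpy as np
import tensorflow as tf

n_cells, img_h, img_w, n_train = 200, 32, 32, 1000
spikes = np.random.poisson(2.0, size=(n_train, n_cells)).astype("float32")  # fake responses
images = np.random.rand(n_train, img_h, img_w).astype("float32")            # fake stimuli

# Stage 1: linear decoder W minimizing ||spikes @ W - images||^2.
W, *_ = np.linalg.lstsq(spikes, images.reshape(n_train, -1), rcond=None)
linear_recon = (spikes @ W).reshape(n_train, img_h, img_w, 1)

# Stage 2: convolutional enhancement of the linear reconstruction.
cae = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                           input_shape=(img_h, img_w, 1)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same"),
])
cae.compile(optimizer="adam", loss="mse")
cae.fit(linear_recon, images[..., None], epochs=2, batch_size=32, verbose=0)
```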

Context Selection for Embedding Models

Title Context Selection for Embedding Models
Authors Liping Liu, Francisco Ruiz, Susan Athey, David Blei
Abstract Word embeddings are an effective tool to analyze language. They have been recently extended to model other types of data beyond text, such as items in recommendation systems. Embedding models consider the probability of a target observation (a word or an item) conditioned on the elements in the context (other words or items). In this paper, we show that conditioning on all the elements in the context is not optimal. Instead, we model the probability of the target conditioned on a learned subset of the elements in the context. We use amortized variational inference to automatically choose this subset. Compared to standard embedding models, this method improves predictions and the quality of the embeddings.
Tasks Recommendation Systems, Word Embeddings
Published 2017-12-01
URL http://papers.nips.cc/paper/7067-context-selection-for-embedding-models
PDF http://papers.nips.cc/paper/7067-context-selection-for-embedding-models.pdf
PWC https://paperswithcode.com/paper/context-selection-for-embedding-models
Repo https://github.com/blei-lab/context-selection-embedding
Framework tf

Image Super-Resolution via Deep Recursive Residual Network

Title Image Super-Resolution via Deep Recursive Residual Network
Authors Ying Tai, Jian Yang, Xiaoming Liu
Abstract Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks; recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.
Tasks Image Super-Resolution, Super-Resolution
Published 2017-07-01
URL http://openaccess.thecvf.com/content_cvpr_2017/html/Tai_Image_Super-Resolution_via_CVPR_2017_paper.html
PDF http://openaccess.thecvf.com/content_cvpr_2017/papers/Tai_Image_Super-Resolution_via_CVPR_2017_paper.pdf
PWC https://paperswithcode.com/paper/image-super-resolution-via-deep-recursive
Repo https://github.com/tyshiwo/DRRN_CVPR17
Framework pytorch
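
The key idea in DRRN is a residual unit whose convolution weights are shared across recursions, wrapped in local and global residual connections. The PyTorch sketch below illustrates just that weight sharing and skip structure; the channel width, recursion count, and lack of batch normalization are simplifications, not the paper's exact 52-layer configuration.

```python
# Sketch of DRRN's recursive residual idea: one residual unit with *shared* conv
# weights applied U times (local residual learning), plus a global skip connection
# from the upscaled low-resolution input. Sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecursiveBlock(nn.Module):
    def __init__(self, channels=64, num_recursions=9):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)  # reused every recursion
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.num_recursions = num_recursions

    def forward(self, x):
        h = x
        for _ in range(self.num_recursions):                      # local residual learning
            h = x + self.conv2(F.relu(self.conv1(F.relu(h))))
        return h

class TinyDRRN(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.block = RecursiveBlock(channels)
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, lr_upscaled):                                # bicubic-upscaled LR input
        return lr_upscaled + self.tail(self.block(self.head(lr_upscaled)))  # global residual

x = torch.randn(1, 1, 48, 48)
print(TinyDRRN()(x).shape)   # torch.Size([1, 1, 48, 48])
```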

Instance Weighting for Neural Machine Translation Domain Adaptation

Title Instance Weighting for Neural Machine Translation Domain Adaptation
Authors Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, Eiichiro Sumita
Abstract Instance weighting has been widely applied to phrase-based machine translation domain adaptation. However, it is challenging to apply to Neural Machine Translation (NMT) directly, because NMT is not a linear model. In this paper, two instance weighting techniques, i.e., sentence weighting and domain weighting with a dynamic weight learning strategy, are proposed for NMT domain adaptation. Empirical results on the IWSLT English-German/French tasks show that the proposed methods can substantially improve NMT performance by up to 2.7-6.7 BLEU points, outperforming the existing baselines by up to 1.6-3.6 BLEU points.
Tasks Domain Adaptation, Machine Translation
Published 2017-09-01
URL https://www.aclweb.org/anthology/D17-1155/
PDF https://www.aclweb.org/anthology/D17-1155
PWC https://paperswithcode.com/paper/instance-weighting-for-neural-machine
Repo https://github.com/wangruinlp/nmt_instance_weighting
Framework none
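
Mechanically, sentence-level instance weighting amounts to scaling each sentence's contribution to the NMT training loss by a domain weight. The sketch below shows that weighted cross-entropy in PyTorch with made-up tensors; how the weights are obtained (the paper's domain weighting and dynamic weight learning strategy) is not reproduced here.

```python
# Sentence-level instance weighting as a weighted cross-entropy loss: each
# sentence's token-level NLL is averaged and scaled by a per-sentence weight.
# The weights here are arbitrary placeholders.
import torch
import torch.nn.functional as F

def weighted_nmt_loss(logits, targets, sentence_weights, pad_id=0):
    """logits: (batch, seq_len, vocab); targets: (batch, seq_len); sentence_weights: (batch,)."""
    vocab = logits.size(-1)
    token_nll = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1),
                                ignore_index=pad_id, reduction="none")
    token_nll = token_nll.view(targets.shape)
    mask = (targets != pad_id).float()
    sent_nll = (token_nll * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
    return (sentence_weights * sent_nll).mean()      # weighted per-sentence loss

logits = torch.randn(4, 7, 100)
targets = torch.randint(1, 100, (4, 7))
weights = torch.tensor([1.0, 0.5, 1.5, 1.0])         # e.g. higher weight = more in-domain
print(weighted_nmt_loss(logits, targets, weights).item())
```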

Leveraging Linguistic Structures for Named Entity Recognition with Bidirectional Recursive Neural Networks

Title Leveraging Linguistic Structures for Named Entity Recognition with Bidirectional Recursive Neural Networks
Authors Peng-Hsuan Li, Ruo-Ping Dong, Yu-Siang Wang, Ju-Chieh Chou, Wei-Yun Ma
Abstract In this paper, we utilize the linguistic structures of texts to improve named entity recognition by BRNN-CNN, a special bidirectional recursive network attached with a convolutional network. Motivated by the observation that named entities are highly related to linguistic constituents, we propose a constituent-based BRNN-CNN for named entity recognition. In contrast to classical sequential labeling methods, the system first identifies which text chunks are possible named entities by whether they are linguistic constituents. Then it classifies these chunks with a constituency tree structure by recursively propagating syntactic and semantic information to each constituent node. This method surpasses the current state of the art on OntoNotes 5.0 with automatically generated parses.
Tasks Named Entity Recognition
Published 2017-09-01
URL https://www.aclweb.org/anthology/D17-1282/
PDF https://www.aclweb.org/anthology/D17-1282
PWC https://paperswithcode.com/paper/leveraging-linguistic-structures-for-named
Repo https://github.com/jacobvsdanniel/tf_rnn
Framework tf
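
The central idea is that constituent spans of a parse tree serve as candidate named entities, each to be classified. The sketch below only shows that candidate-extraction step with nltk and a stub classifier; the parse, labels, and classify function are placeholders, and the recursive BRNN-CNN that actually scores each constituent is not reproduced.

```python
# Constituents as candidate named entities: enumerate the span of every
# non-preterminal node of a parse tree and hand it to a classifier (stubbed here).
from nltk import Tree

parse = Tree.fromstring(
    "(S (NP (NNP John) (NNP Smith)) (VP (VBD visited) (NP (NNP Paris))))")

def candidate_chunks(tree):
    """Yield the token span of every constituent node above the POS level."""
    for sub in tree.subtrees():
        if sub.height() > 2:          # skip preterminals like (NNP John)
            yield " ".join(sub.leaves())

# Placeholder classifier standing in for the learned BRNN-CNN.
classify = lambda chunk: "PER" if chunk == "John Smith" else ("LOC" if chunk == "Paris" else "O")
for chunk in candidate_chunks(parse):
    print(chunk, "->", classify(chunk))
```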

Transition-Based Disfluency Detection using LSTMs

Title Transition-Based Disfluency Detection using LSTMs
Authors Shaolei Wang, Wanxiang Che, Yue Zhang, Meishan Zhang, Ting Liu
Abstract In this paper, we model the problem of disfluency detection using a transition-based framework, which incrementally constructs and labels the disfluency chunks of input sentences using a new transition system without syntax information. Compared with sequence labeling methods, it can capture non-local chunk-level features; compared with joint parsing and disfluency detection methods, it is free of noise in syntax. Experiments show that our model achieves a state-of-the-art f-score of 87.5% on the commonly used English Switchboard test set, and on a set of in-house annotated Chinese data.
Tasks Information Retrieval
Published 2017-09-01
URL https://www.aclweb.org/anthology/D17-1296/
PDF https://www.aclweb.org/anthology/D17-1296
PWC https://paperswithcode.com/paper/transition-based-disfluency-detection-using
Repo https://github.com/hitwsl/transition_disfluency
Framework none
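
To make the transition-based framing concrete, here is a schematic left-to-right pass that either keeps a token or extends an open disfluency chunk, with the decision stubbed out as a simple rule. This two-action loop and the repetition rule are illustrative assumptions only; the paper's actual transition system and its LSTM-based scorer are different and live in the linked repo.

```python
# Schematic transition-style disfluency pass: a pluggable decision function chooses,
# token by token, between keeping the token and growing a disfluency chunk.
def detect_disfluencies(tokens, is_disfluent):
    fluent, chunks, current = [], [], []
    for i, tok in enumerate(tokens):
        if is_disfluent(tokens, i, fluent):
            current.append(tok)                  # extend the open disfluency chunk
        else:
            if current:
                chunks.append(current)
                current = []
            fluent.append(tok)                   # keep token in the fluent output
    if current:
        chunks.append(current)
    return fluent, chunks

# Toy rule standing in for the learned model: a token is disfluent if it is
# immediately repeated ("I I want to to go").
rule = lambda toks, i, fluent: i + 1 < len(toks) and toks[i] == toks[i + 1]
print(detect_disfluencies("I I want to to go".split(), rule))
```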

Toward Automated Early Sepsis Alerting: Identifying Infection Patients from Nursing Notes

Title Toward Automated Early Sepsis Alerting: Identifying Infection Patients from Nursing Notes
Authors Emilia Apostolova, Tom Velez
Abstract Severe sepsis and septic shock are conditions that affect millions of patients and have close to a 50% mortality rate. Early identification of at-risk patients significantly improves outcomes. Electronic surveillance tools have been developed to monitor structured Electronic Medical Records and automatically recognize early signs of sepsis. However, many sepsis risk factors (e.g. symptoms and signs of infection) are often captured only in free text clinical notes. In this study, we developed a method for automatic monitoring of nursing notes for signs and symptoms of infection. We utilized a creative approach to automatically generate an annotated dataset. The dataset was used to create a Machine Learning model that achieved an F1-score ranging from 79% to 96%.
Tasks
Published 2017-08-01
URL https://www.aclweb.org/anthology/W17-2332/
PDF https://www.aclweb.org/anthology/W17-2332
PWC https://paperswithcode.com/paper/toward-automated-early-sepsis-alerting
Repo https://github.com/ema-/antibiotic-dictionary
Framework none
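
The overall pipeline the abstract describes is "automatically labeled notes in, text classifier out". The sketch below uses a TF-IDF plus logistic-regression stand-in for whatever model the authors trained, and the notes and labels are invented placeholders, not study data; the repo above contributes the antibiotic dictionary used for automatic labeling.

```python
# Illustrative pipeline only: TF-IDF features + a linear classifier over nursing-note
# text. Notes and labels below are placeholders; the paper derives labels automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "pt febrile overnight, started on vancomycin for suspected infection",
    "afebrile, ambulating well, no acute distress",
    "wbc elevated, blood cultures drawn, ceftriaxone given",
    "routine post-op check, vitals stable, pain controlled",
]
labels = [1, 0, 1, 0]   # 1 = signs/symptoms of infection (placeholder labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(notes, labels)
print(clf.predict(["new fever and positive cultures, antibiotics started"]))
```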

Filter Flow Made Practical: Massively Parallel and Lock-Free

Title Filter Flow Made Practical: Massively Parallel and Lock-Free
Authors Sathya N. Ravi, Yunyang Xiong, Lopamudra Mukherjee, Vikas Singh
Abstract This paper is inspired by a relatively recent work of Seitz and Baker which introduced the so-called Filter Flow model. Filter flow finds the transformation relating a pair of (or multiple) images by identifying a large set of local linear filters; imposing additional constraints on certain structural properties of these filters enables Filter Flow to serve as a general “one stop” construction for a spectrum of problems in vision: from optical flow to defocus to stereo to affine alignment. The idea is beautiful yet the benefits are not borne out in practice because of significant computational challenges. This issue makes most (if not all) deployments for practical vision problems out of reach. The key thrust of our work is to identify mathematically (near) equivalent reformulations of this model that can eliminate this serious limitation. We demonstrate via a detailed optimization-focused development that Filter Flow can indeed be solved fairly efficiently for a wide range of instantiations. We derive efficient algorithms, perform extensive theoretical analysis focused on convergence and parallelization, and show how results competitive with the state of the art for many applications can be achieved with negligible application specific adjustments or post-processing. The actual numerical scheme is easy to understand and implement (30 lines in Matlab) – this development will enable Filter Flow to be a viable general solver and testbed for numerous applications in the community, going forward.
Tasks Optical Flow Estimation
Published 2017-07-01
URL http://openaccess.thecvf.com/content_cvpr_2017/html/Ravi_Filter_Flow_Made_CVPR_2017_paper.html
PDF http://openaccess.thecvf.com/content_cvpr_2017/papers/Ravi_Filter_Flow_Made_CVPR_2017_paper.pdf
PWC https://paperswithcode.com/paper/filter-flow-made-practical-massively-parallel
Repo https://github.com/sravi-uwmadison/fast_filter_flow
Framework none
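
For readers new to the model: filter flow explains one image as a per-pixel local linear filtering of another, and the hard part is solving for those filters under constraints. The numpy sketch below only implements the forward model (applying a given filter field), with toy sizes and an identity filter field as a sanity check; the paper's contribution, the efficient parallel solver, is not shown.

```python
# Forward model of filter flow: each output pixel is a local linear combination of
# input pixels with its own small filter. The solver that recovers these filters
# is the paper's subject; this sketch only applies a given (toy) filter field.
import numpy as np

def apply_filter_flow(img, filters):
    """img: (H, W); filters: (H, W, k, k) per-pixel filters, k odd."""
    H, W = img.shape
    k = filters.shape[-1]
    r = k // 2
    padded = np.pad(img, r, mode="edge")
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * filters[i, j])
    return out

H = W = 8
img = np.random.rand(H, W)
# Identity filter field: each pixel's filter is a delta at its center.
filters = np.zeros((H, W, 3, 3))
filters[:, :, 1, 1] = 1.0
assert np.allclose(apply_filter_flow(img, filters), img)
```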

Combining Graph Degeneracy and Submodularity for Unsupervised Extractive Summarization

Title Combining Graph Degeneracy and Submodularity for Unsupervised Extractive Summarization
Authors Antoine Tixier, Polykarpos Meladianos, Michalis Vazirgiannis
Abstract We present a fully unsupervised, extractive text summarization system that leverages a submodularity framework introduced by past research. The framework allows summaries to be generated in a greedy way while preserving near-optimal performance guarantees. Our main contribution is the novel coverage reward term of the objective function optimized by the greedy algorithm. This component builds on the graph-of-words representation of text and the k-core decomposition algorithm to assign meaningful scores to words. We evaluate our approach on the AMI and ICSI meeting speech corpora, and on the DUC2001 news corpus. We reach state-of-the-art performance on all datasets. Results indicate that our method is particularly well-suited to the meeting domain.
Tasks Document Summarization, Information Retrieval, Keyword Extraction, Sentence Compression, Text Summarization
Published 2017-09-01
URL https://www.aclweb.org/anthology/W17-4507/
PDF https://www.aclweb.org/anthology/W17-4507
PWC https://paperswithcode.com/paper/combining-graph-degeneracy-and-submodularity
Repo https://github.com/Tixierae/EMNLP2017_NewSum
Framework none
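
The abstract's core ingredients are a graph-of-words, k-core decomposition to score words, and greedy selection under a budget. The sketch below wires those pieces together with networkx on a toy "meeting"; the simple sum-of-core-numbers gain is only an assumption standing in for the paper's coverage reward, which carries the submodularity guarantees.

```python
# Sketch: graph-of-words + k-core word scores, then greedy sentence selection under
# a word budget. The gain function is a simplified stand-in for the paper's reward.
import networkx as nx

sentences = [
    "the committee approved the project budget",
    "the project budget was discussed at length",
    "lunch options were also mentioned briefly",
]
tokenized = [s.split() for s in sentences]

# Graph-of-words: connect words co-occurring within a small sliding window.
G = nx.Graph()
for tokens in tokenized:
    for i, w in enumerate(tokens):
        for u in tokens[i + 1:i + 3]:
            if u != w:
                G.add_edge(w, u)

core = nx.core_number(G)                       # k-core index of each word

def gain(covered_words, tokens):
    return sum(core[w] for w in set(tokens) - covered_words)

budget, summary, covered = 12, [], set()
remaining = list(range(len(sentences)))
while remaining:
    best = max(remaining, key=lambda i: gain(covered, tokenized[i]))
    if sum(len(tokenized[i]) for i in summary) + len(tokenized[best]) > budget:
        break
    summary.append(best)
    covered |= set(tokenized[best])
    remaining.remove(best)
print([sentences[i] for i in summary])
```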

Detecting Dementia through Retrospective Analysis of Routine Blog Posts by Bloggers with Dementia

Title Detecting Dementia through Retrospective Analysis of Routine Blog Posts by Bloggers with Dementia
Authors Vaden Masrani, Gabriel Murray, Thalia Field, Giuseppe Carenini
Abstract We investigate if writers with dementia can be automatically distinguished from those without by analyzing linguistic markers in written text, in the form of blog posts. We have built a corpus of several thousand blog posts, some by people with dementia and others by people with loved ones with dementia. We use this dataset to train and test several machine learning methods, and achieve prediction performance at a level far above the baseline.
Tasks
Published 2017-08-01
URL https://www.aclweb.org/anthology/W17-2329/
PDF https://www.aclweb.org/anthology/W17-2329
PWC https://paperswithcode.com/paper/detecting-dementia-through-retrospective
Repo https://github.com/vadmas/blog_corpus
Framework none

Benchmark for Complex Answer Retrieval

Title Benchmark for Complex Answer Retrieval
Authors Federico Nanni, Bhaskar Mitra, Matt Magnusson, Laura Dietz
Abstract Retrieving paragraphs to populate a Wikipedia article is a challenging task. The new TREC Complex Answer Retrieval (TREC CAR) track introduces a comprehensive dataset that targets this retrieval scenario. We present early results from a variety of approaches – from standard information retrieval methods (e.g., tf-idf) to complex systems that use query expansion with knowledge bases and deep neural networks. The goal is to offer future participants of this track an overview of some promising approaches to tackle this problem.
Tasks Information Retrieval, Passage Re-Ranking
Published 2017-10-01
URL https://arxiv.org/abs/1705.04803
PDF https://arxiv.org/pdf/1705.04803.pdf
PWC https://paperswithcode.com/paper/benchmark-for-complex-answer-retrieval
Repo https://github.com/bmitra-msft/NDRM
Framework none
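
As a concrete anchor for the "standard information retrieval methods (e.g., tf-idf)" end of the spectrum, here is a minimal tf-idf paragraph-ranking baseline with scikit-learn. The query and paragraphs are invented placeholders, not TREC CAR data; real CAR queries are Wikipedia heading paths of roughly this shape.

```python
# Minimal tf-idf paragraph ranking: score candidate paragraphs against a
# heading-style query by cosine similarity (placeholder texts, not TREC CAR data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paragraphs = [
    "Green sea turtles migrate long distances between feeding grounds and nesting beaches.",
    "The stock market reacted sharply to the interest rate announcement.",
    "Sea turtle nesting beaches are threatened by coastal development.",
]
query = "Sea turtle / Nesting"           # heading-path style query

vec = TfidfVectorizer(stop_words="english")
P = vec.fit_transform(paragraphs)
q = vec.transform([query])
scores = cosine_similarity(q, P).ravel()
ranking = scores.argsort()[::-1]
print([paragraphs[i] for i in ranking])
```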

Amodal Detection of 3D Objects: Inferring 3D Bounding Boxes From 2D Ones in RGB-Depth Images

Title Amodal Detection of 3D Objects: Inferring 3D Bounding Boxes From 2D Ones in RGB-Depth Images
Authors Zhuo Deng, Longin Jan Latecki
Abstract This paper addresses the problem of amodal perception in 3D object detection. The task is to not only find object localizations in the 3D world, but also estimate their physical sizes and poses, even if only parts of them are visible in the RGB-D image. Recent approaches have attempted to harness the point cloud from the depth channel to exploit 3D features directly in 3D space, and have demonstrated superiority over traditional 2.5D representation approaches. We revisit the amodal 3D detection problem by sticking to the 2.5D representation framework, and directly relate 2.5D visual appearance to 3D objects. We propose a novel 3D object detection system that simultaneously predicts objects’ 3D locations, physical sizes, and orientations in indoor scenes. Experiments on the NYUV2 dataset show our algorithm significantly outperforms the state-of-the-art and indicates 2.5D representation is capable of encoding features for 3D amodal object detection. All source code and data are available at https://github.com/phoenixnn/Amodal3Det.
Tasks 3D Object Detection, Object Detection
Published 2017-07-01
URL http://openaccess.thecvf.com/content_cvpr_2017/html/Deng_Amodal_Detection_of_CVPR_2017_paper.html
PDF http://openaccess.thecvf.com/content_cvpr_2017/papers/Deng_Amodal_Detection_of_CVPR_2017_paper.pdf
PWC https://paperswithcode.com/paper/amodal-detection-of-3d-objects-inferring-3d
Repo https://github.com/phoenixnn/Amodal3Det
Framework none
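
To illustrate how 2.5D cues relate to 3D in the simplest case, the sketch below back-projects a 2D detection's center using its depth and the camera intrinsics to get an initial camera-frame 3D location. The intrinsics and detection are made-up values, and the paper's actual system additionally regresses full 3D size and orientation per class, which is not shown here.

```python
# Geometry sketch only: back-project a 2D box center with its depth and the camera
# intrinsics to an initial 3D location (camera frame). Placeholder numbers throughout.
import numpy as np

def backproject_center(box2d, depth_m, fx, fy, cx, cy):
    """box2d: (x1, y1, x2, y2) in pixels; depth_m: depth at the box in meters."""
    u = (box2d[0] + box2d[2]) / 2.0
    v = (box2d[1] + box2d[3]) / 2.0
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])        # camera-frame 3D point

# Illustrative (roughly Kinect-like) intrinsics and a fake detection.
fx, fy, cx, cy = 570.0, 570.0, 320.0, 240.0
print(backproject_center((300, 200, 420, 330), depth_m=2.4, fx=fx, fy=fy, cx=cx, cy=cy))
```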

SubUNets: End-To-End Hand Shape and Continuous Sign Language Recognition

Title SubUNets: End-To-End Hand Shape and Continuous Sign Language Recognition
Authors Necati Cihan Camgoz, Simon Hadfield, Oscar Koller, Richard Bowden
Abstract We propose a novel deep learning approach to solve simultaneous alignment and recognition problems (referred to as “Sequence-to-sequence” learning). We decompose the problem into a series of specialised expert systems referred to as SubUNets. The spatio-temporal relationships between these SubUNets are then modelled to solve the task, while remaining trainable end-to-end. The approach mimics human learning and educational techniques, and has a number of significant advantages. SubUNets allow us to inject domain-specific expert knowledge into the system regarding suitable intermediate representations. They also allow us to implicitly perform transfer learning between different interrelated tasks, which also allows us to exploit a wider range of more varied data sources. In our experiments we demonstrate that each of these properties serves to significantly improve the performance of the overarching recognition system, by better constraining the learning problem. The proposed techniques are demonstrated in the challenging domain of sign language recognition. We demonstrate state-of-the-art performance on hand-shape recognition (outperforming previous techniques by more than 30%). Furthermore, we are able to obtain comparable sign recognition rates to previous research, without the need for an alignment step to segment out the signs for recognition.
Tasks Sign Language Recognition, Transfer Learning
Published 2017-10-01
URL http://openaccess.thecvf.com/content_iccv_2017/html/Camgoz_SubUNets_End-To-End_Hand_ICCV_2017_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2017/papers/Camgoz_SubUNets_End-To-End_Hand_ICCV_2017_paper.pdf
PWC https://paperswithcode.com/paper/subunets-end-to-end-hand-shape-and-continuous
Repo https://github.com/neccam/SubUNets
Framework tf
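
To give a feel for composing specialised subunits end-to-end, here is a tiny PyTorch sketch in which a per-frame CNN subunit feeds a bidirectional LSTM sequence subunit trained with CTC, so alignment and recognition are learned jointly. All sizes are placeholders and this is not the paper's exact SubUNet architecture; see the linked repo for the real (TensorFlow) implementation.

```python
# Sketch of subunit composition trained end-to-end: per-frame CNN -> BiLSTM -> CTC.
# Placeholder sizes and random data; not the paper's architecture.
import torch
import torch.nn as nn

class TinySubUNet(nn.Module):
    def __init__(self, num_classes=10, feat=32):
        super().__init__()
        self.frame_cnn = nn.Sequential(                       # per-frame "hand shape" subunit
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat),
        )
        self.seq = nn.LSTM(feat, feat, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * feat, num_classes + 1)      # +1 for the CTC blank

    def forward(self, frames):                                # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        f = self.frame_cnn(frames.flatten(0, 1)).view(B, T, -1)
        h, _ = self.seq(f)
        return self.head(h).log_softmax(-1)                   # (B, T, num_classes + 1)

model = TinySubUNet()
frames = torch.randn(2, 20, 3, 64, 64)
log_probs = model(frames).permute(1, 0, 2)                    # CTC expects (T, B, C)
targets = torch.randint(1, 11, (2, 5))
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           input_lengths=torch.full((2,), 20, dtype=torch.long),
                           target_lengths=torch.full((2,), 5, dtype=torch.long))
print(loss.item())
```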

Class Disjointness Constraints as Specific Objective Functions in Neural Network Classifiers

Title Class Disjointness Constraints as Specific Objective Functions in Neural Network Classifiers
Authors François Scharffe
Abstract
Tasks Object Classification, Object Detection
Published 2017-09-01
URL https://www.aclweb.org/anthology/W17-7303/
PDF https://www.aclweb.org/anthology/W17-7303
PWC https://paperswithcode.com/paper/class-disjointness-constraints-as-specific
Repo https://github.com/OpenAxon/constrained-nn
Framework tf