October 17, 2019

3089 words 15 mins read

Paper Group ANR 891

SANTIS: Sampling-Augmented Neural neTwork with Incoherent Structure for MR image reconstruction. Brain Segmentation from k-space with End-to-end Recurrent Attention Network. Machine Learning in Network Centrality Measures: Tutorial and Outlook. Glyph: Symbolic Regression Tools. Beltrami-Net: Domain Independent Deep D-bar Learning for Absolute Imagi …

SANTIS: Sampling-Augmented Neural neTwork with Incoherent Structure for MR image reconstruction

Title SANTIS: Sampling-Augmented Neural neTwork with Incoherent Structure for MR image reconstruction
Authors Fang Liu, Lihua Chen, Richard Kijowski, Li Feng
Abstract Deep learning holds great promise in the reconstruction of undersampled Magnetic Resonance Imaging (MRI) data, providing new opportunities to escalate the performance of rapid MRI. In existing deep learning-based reconstruction methods, supervised training is performed using artifact-free reference images and their corresponding undersampled pairs. The undersampled images are generated by a fixed undersampling pattern in the training, and the trained network is then applied to reconstruct new images acquired with the same pattern in the inference. While such a training strategy can maintain a favorable reconstruction for a pre-selected undersampling pattern, the robustness of the trained network against any discrepancy of undersampling schemes is typically poor. We developed a novel deep learning-based reconstruction framework called SANTIS for efficient MR image reconstruction with improved robustness against sampling pattern discrepancy. SANTIS uses a data cycle-consistent adversarial network combining efficient end-to-end convolutional neural network mapping, data fidelity enforcement and adversarial training for reconstructing accelerated MR images more faithfully. A training strategy employing sampling augmentation with extensive variation of undersampling patterns was further introduced to promote the robustness of the trained network. Compared to conventional reconstruction and standard deep learning methods, SANTIS achieved consistently better reconstruction performance, with lower errors, greater image sharpness and higher similarity with respect to the reference regardless of the undersampling patterns during inference. The concept behind SANTIS can be particularly useful for improving the robustness of deep learning-based image reconstruction against discrepancies between training and evaluation, which is currently an important but less-studied open question.
Tasks Image Reconstruction
Published 2018-12-08
URL http://arxiv.org/abs/1812.03278v1
PDF http://arxiv.org/pdf/1812.03278v1.pdf
PWC https://paperswithcode.com/paper/santis-sampling-augmented-neural-network-with
Repo
Framework
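
The sampling-augmentation idea is straightforward to prototype: instead of training with one fixed undersampling mask, draw a fresh mask for every example so the network never overfits to a single pattern. Below is a minimal NumPy sketch of that data-generation step, not the authors' code; the variable-density Cartesian mask, acceleration factor, and FFT conventions are all illustrative assumptions.

```python
import numpy as np

def random_undersampling_mask(shape, accel=4, center_frac=0.08, rng=None):
    """Variable-density Cartesian mask: fully sample a central band of
    phase-encode lines and pick the rest at random (assumed scheme,
    not necessarily the paper's exact pattern)."""
    rng = np.random.default_rng() if rng is None else rng
    ny, nx = shape
    mask = np.zeros(ny, dtype=bool)
    n_center = max(1, int(center_frac * ny))
    c0 = (ny - n_center) // 2
    mask[c0:c0 + n_center] = True
    remaining = np.flatnonzero(~mask)
    extra = max(0, ny // accel - n_center)
    mask[rng.choice(remaining, size=min(extra, remaining.size), replace=False)] = True
    return np.broadcast_to(mask[:, None], (ny, nx))

def undersample(image, mask):
    """Simulate acquisition: FFT to k-space, zero out unsampled lines, IFFT back."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    return np.fft.ifft2(np.fft.ifftshift(kspace * mask))

# During training, draw a new mask per example so the network never locks
# onto a single sampling pattern.
reference = np.random.rand(256, 256)          # stands in for a fully sampled image
mask = random_undersampling_mask(reference.shape, accel=4)
aliased_input = np.abs(undersample(reference, mask))
```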

Brain Segmentation from k-space with End-to-end Recurrent Attention Network

Title Brain Segmentation from k-space with End-to-end Recurrent Attention Network
Authors Qiaoying Huang, Xiao Chen, Dimitris Metaxas, Mariappan S. Nadar
Abstract The task of medical image segmentation commonly involves an image reconstruction step to convert acquired raw data to images before any analysis. However, noise, artifacts and loss of information due to the reconstruction process are almost inevitable, which compromises the final performance of segmentation. We present a novel learning framework that performs magnetic resonance brain image segmentation directly from k-space data. The end-to-end framework consists of a unique task-driven attention module that recurrently utilizes intermediate segmentation estimates to facilitate image-domain feature extraction from the raw data, thus closely bridging the reconstruction and the segmentation tasks. In addition, to address the challenge of manual labeling, we introduce a novel workflow to generate labeled training data for segmentation by exploiting imaging modality simulators and digital phantoms. Extensive experimental results show that the proposed method outperforms several state-of-the-art methods.
Tasks Brain Image Segmentation, Brain Segmentation, Image Reconstruction, Medical Image Segmentation, Semantic Segmentation
Published 2018-12-05
URL https://arxiv.org/abs/1812.02068v2
PDF https://arxiv.org/pdf/1812.02068v2.pdf
PWC https://paperswithcode.com/paper/end-to-end-segmentation-with-recurrent
Repo
Framework
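
The core mechanism, as described, is a recurrent attention loop: raw k-space is mapped to the image domain, and the previous segmentation estimate gates the image-domain features on the next pass. The PyTorch sketch below illustrates that loop only; the channel widths, number of iterations, and softmax gating are placeholder assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RecurrentAttentionSeg(nn.Module):
    """Toy sketch: k-space -> image domain via inverse FFT, then a few
    recurrent passes in which the previous segmentation gates the features.
    Channel counts and iteration count are illustrative assumptions."""
    def __init__(self, n_classes=4, n_iters=3, width=32):
        super().__init__()
        self.n_classes, self.n_iters = n_classes, n_iters
        self.features = nn.Sequential(
            nn.Conv2d(1 + n_classes, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(width, n_classes, 1)

    def forward(self, kspace):                              # (B, H, W) complex
        image = torch.fft.ifft2(kspace).abs().unsqueeze(1)  # (B, 1, H, W)
        seg = torch.zeros(image.size(0), self.n_classes,
                          image.size(2), image.size(3), device=image.device)
        for _ in range(self.n_iters):
            attn = torch.softmax(seg, dim=1)   # previous estimate acts as attention
            feats = self.features(torch.cat([image, attn], dim=1))
            seg = self.head(feats)
        return seg

logits = RecurrentAttentionSeg()(torch.randn(2, 128, 128, dtype=torch.complex64))
```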

Machine Learning in Network Centrality Measures: Tutorial and Outlook

Title Machine Learning in Network Centrality Measures: Tutorial and Outlook
Authors Felipe Grando, Lisandro Z. Granville, Luis C. Lamb
Abstract Complex networks are ubiquitous to several Computer Science domains. Centrality measures are an important analysis mechanism to uncover vital elements of complex networks. However, these metrics have high computational costs and requirements that hinder their application in large real-world networks. In this tutorial, we explain how neural network learning algorithms can make these metrics applicable to complex networks of arbitrary size. Moreover, the tutorial describes how to identify the best configuration for neural network training and learning for such tasks, and presents an easy way to generate and acquire training data. We do so by means of a general methodology, using complex network models adaptable to any application. We show that a regression model generated by the neural network successfully approximates the metric values and is therefore a robust, effective alternative in real-world applications. The methodology and proposed machine learning model use only a fraction of the time required by other approximation algorithms, which is crucial in complex network applications.
Tasks
Published 2018-10-28
URL http://arxiv.org/abs/1810.11760v1
PDF http://arxiv.org/pdf/1810.11760v1.pdf
PWC https://paperswithcode.com/paper/machine-learning-in-network-centrality
Repo
Framework
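
The tutorial's recipe, roughly: generate synthetic graphs from standard network models, compute cheap node features alongside the expensive target centrality, and fit a small regression network that can then be reused on graphs too large for exact computation. A hedged sketch with networkx and scikit-learn follows; the choice of features (degree and eigenvector centrality), target metric (betweenness), and model size are assumptions for illustration.

```python
import networkx as nx
import numpy as np
from sklearn.neural_network import MLPRegressor

def node_samples(graph):
    """Cheap features (degree + eigenvector centrality) paired with the
    expensive target metric (betweenness). Feature choice is an assumption."""
    deg = nx.degree_centrality(graph)
    eig = nx.eigenvector_centrality_numpy(graph)
    btw = nx.betweenness_centrality(graph)      # expensive ground truth
    X = np.array([[deg[v], eig[v]] for v in graph])
    y = np.array([btw[v] for v in graph])
    return X, y

# Train on many small synthetic networks, then reuse the regressor on graphs
# that are too costly for exact betweenness computation.
pairs = [node_samples(nx.barabasi_albert_graph(300, 3, seed=s)) for s in range(20)]
X = np.vstack([p[0] for p in pairs])
y = np.concatenate([p[1] for p in pairs])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
X_new, y_new = node_samples(nx.barabasi_albert_graph(500, 3, seed=99))
print("mean absolute approximation error:", np.mean(np.abs(model.predict(X_new) - y_new)))
```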

Glyph: Symbolic Regression Tools

Title Glyph: Symbolic Regression Tools
Authors Markus Quade, Julien Gout, Markus Abel
Abstract We present Glyph, a Python package for genetic programming based symbolic regression. Glyph is designed for use both in numerical simulations and in real-world experiments. For experimentalists, glyph-remote provides a separation of tasks: a ZeroMQ interface splits the genetic programming optimization task from the evaluation of an experimental (or numerical) run. Glyph can be accessed at http://github.com/ambrosys/glyph . Domain experts are able to employ symbolic regression in their experiments with ease, even if they are not expert programmers. The reuse potential is kept high by a generic interface design. Glyph is available on PyPI and GitHub.
Tasks
Published 2018-03-13
URL http://arxiv.org/abs/1803.06226v2
PDF http://arxiv.org/pdf/1803.06226v2.pdf
PWC https://paperswithcode.com/paper/glyph-symbolic-regression-tools
Repo
Framework
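
The glyph-remote design separates the genetic-programming loop from the experiment by exchanging messages over ZeroMQ. The sketch below shows what a minimal evaluation endpoint of that shape could look like with pyzmq; the JSON message layout and the toy fitness function are illustrative assumptions, not glyph's documented protocol.

```python
# Minimal ZeroMQ evaluation server in the spirit of glyph-remote: the GP loop
# sends candidate expressions, the experiment answers with a fitness value.
import json
import numpy as np
import zmq

def fitness(expr: str) -> float:
    """Score a candidate expression against toy target data."""
    x = np.linspace(-1, 1, 100)
    target = x ** 2 + 0.5 * x
    try:
        pred = eval(expr, {"x": x, "np": np})   # trusted, locally generated expressions only
        return float(np.mean((pred - target) ** 2))
    except Exception:
        return float("inf")

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")

while True:
    request = json.loads(socket.recv().decode())            # e.g. {"expression": "x**2"}
    socket.send(json.dumps({"fitness": fitness(request["expression"])}).encode())
```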

Beltrami-Net: Domain Independent Deep D-bar Learning for Absolute Imaging with Electrical Impedance Tomography (a-EIT)

Title Beltrami-Net: Domain Independent Deep D-bar Learning for Absolute Imaging with Electrical Impedance Tomography (a-EIT)
Authors S. J. Hamilton, A. Hänninen, A. Hauptmann, V. Kolehmainen
Abstract Objective: To develop, and demonstrate the feasibility of, a novel image reconstruction method for absolute Electrical Impedance Tomography (a-EIT) that pairs deep learning techniques with real-time robust D-bar methods. Approach: A D-bar method is paired with a trained Convolutional Neural Network (CNN) as a post-processing step. Training data is simulated for the network using no knowledge of the boundary shape by using an associated nonphysical Beltrami equation rather than simulating the traditional current and voltage data specific to a given domain. This allows the training data to be boundary shape independent. The method is tested on experimental data from two EIT systems (ACT4 and KIT4). Main Results: Post processing the D-bar images with a CNN produces significant improvements in image quality measured by Structural SIMilarity indices (SSIMs) as well as relative $\ell_2$ and $\ell_1$ image errors. Significance: This work demonstrates that more general networks can be trained without being specific about boundary shape, a key challenge in EIT image reconstruction. The work is promising for future studies involving databases of anatomical atlases.
Tasks Image Reconstruction
Published 2018-11-30
URL http://arxiv.org/abs/1811.12830v1
PDF http://arxiv.org/pdf/1811.12830v1.pdf
PWC https://paperswithcode.com/paper/beltrami-net-domain-independent-deep-d-bar
Repo
Framework
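
The learning component here is deliberately simple: a CNN post-processes the real-time D-bar reconstruction, trained on Beltrami-simulated pairs so no boundary shape is baked in. Below is a hedged PyTorch sketch of such a post-processing network; the residual design, depth, and width are assumptions, since the abstract does not specify the architecture.

```python
import torch
import torch.nn as nn

class PostProcessCNN(nn.Module):
    """Small residual CNN that refines a blurry D-bar conductivity image.
    Depth and width are illustrative; the paper's architecture may differ."""
    def __init__(self, width=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, dbar_image):
        # Learn a correction on top of the D-bar reconstruction.
        return dbar_image + self.net(dbar_image)

# Training pairs would be (D-bar reconstruction, ground-truth conductivity),
# both simulated via the Beltrami equation so no boundary shape is assumed.
model = PostProcessCNN()
refined = model(torch.randn(8, 1, 64, 64))
```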

Unsupervised Stylish Image Description Generation via Domain Layer Norm

Title Unsupervised Stylish Image Description Generation via Domain Layer Norm
Authors Cheng Kuan Chen, Zhu Feng Pan, Min Sun, Ming-Yu Liu
Abstract Most existing work on image description focuses on generating expressive descriptions. The few works dedicated to generating stylish (e.g., romantic, lyric) descriptions suffer from limited style variation and content digression. To address these limitations, we propose a controllable stylish image description generation model. It can learn to generate stylish image descriptions that are more closely related to the image content, and it can be trained with an arbitrary monolingual corpus without collecting new pairs of images and stylish descriptions. Moreover, it enables users to generate various stylish descriptions by plugging in style-specific parameters to include new styles into the existing model. We achieve this capability via a novel layer normalization design, which we refer to as the Domain Layer Norm (DLN). Extensive experimental validation and a user study on various stylish image description generation tasks are conducted to show the competitive advantages of the proposed model.
Tasks
Published 2018-09-11
URL http://arxiv.org/abs/1809.06214v1
PDF http://arxiv.org/pdf/1809.06214v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-stylish-image-description
Repo
Framework
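
The DLN idea can be read as layer normalization whose affine gain and bias are indexed by style, so adding a style means adding one small parameter set while the rest of the captioner stays fixed. A minimal PyTorch sketch of that layer follows, covering only the normalization itself; where it sits inside the description generator is not specified here, so the dimensions are assumptions.

```python
import torch
import torch.nn as nn

class DomainLayerNorm(nn.Module):
    """Layer norm with one (gain, bias) pair per style domain, in the spirit
    of the paper's DLN; only the normalization layer itself is sketched."""
    def __init__(self, hidden_size, n_styles, eps=1e-5):
        super().__init__()
        self.gain = nn.Parameter(torch.ones(n_styles, hidden_size))
        self.bias = nn.Parameter(torch.zeros(n_styles, hidden_size))
        self.eps = eps

    def forward(self, h, style_id):
        mean = h.mean(dim=-1, keepdim=True)
        std = h.std(dim=-1, keepdim=True)
        normed = (h - mean) / (std + self.eps)
        # Switching styles only swaps the affine parameters.
        return self.gain[style_id] * normed + self.bias[style_id]

dln = DomainLayerNorm(hidden_size=512, n_styles=3)
romantic = dln(torch.randn(4, 512), style_id=1)
```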

On Evaluating Embedding Models for Knowledge Base Completion

Title On Evaluating Embedding Models for Knowledge Base Completion
Authors Yanjie Wang, Daniel Ruffinelli, Rainer Gemulla, Samuel Broscheit, Christian Meilicke
Abstract Knowledge bases contribute to many web search and mining tasks, yet they are often incomplete. To add missing facts to a given knowledge base, various embedding models have been proposed in the recent literature. Perhaps surprisingly, relatively simple models with limited expressiveness often performed remarkably well under today’s most commonly used evaluation protocols. In this paper, we explore whether recent models work well for knowledge base completion and argue that the current evaluation protocols are more suited for question answering rather than knowledge base completion. We show that when focusing on a different prediction task for evaluating knowledge base completion, the performance of current embedding models is unsatisfactory even on datasets previously thought to be too easy. This is especially true when embedding models are compared against a simple rule-based baseline. This work indicates the need for more research into the embedding models and evaluation protocols for knowledge base completion.
Tasks Knowledge Base Completion, Question Answering
Published 2018-10-17
URL http://arxiv.org/abs/1810.07180v4
PDF http://arxiv.org/pdf/1810.07180v4.pdf
PWC https://paperswithcode.com/paper/on-evaluating-embedding-models-for-knowledge
Repo
Framework
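
For context, the evaluation protocol the paper scrutinizes is entity ranking: score every candidate tail for a test triple, filter out other known true triples, and report metrics such as filtered MRR and Hits@k. The sketch below implements that standard filtered ranking loop with a placeholder scoring function; it shows the generic protocol, not the alternative prediction task the authors propose.

```python
import numpy as np

def filtered_rank(score_fn, head, rel, true_tail, all_entities, known_triples):
    """Rank the true tail among all candidate tails, filtering out other
    tails that are also correct (standard filtered protocol). `score_fn`
    is a placeholder for any embedding model's scoring function."""
    scored = []
    for tail in all_entities:
        if tail != true_tail and (head, rel, tail) in known_triples:
            continue                      # filter competing true triples
        scored.append((score_fn(head, rel, tail), tail))
    scored.sort(reverse=True)             # higher score = better
    return 1 + [t for _, t in scored].index(true_tail)

def mrr_and_hits(test_triples, score_fn, all_entities, known_triples, k=10):
    ranks = np.array([filtered_rank(score_fn, h, r, t, all_entities, known_triples)
                      for h, r, t in test_triples], dtype=float)
    return float(np.mean(1.0 / ranks)), float(np.mean(ranks <= k))
```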

Learning to Emulate an Expert Projective Cone Scheduler

Title Learning to Emulate an Expert Projective Cone Scheduler
Authors Neal Master
Abstract Projective cone scheduling defines a large class of rate-stabilizing policies for queueing models relevant to several applications. While there exists considerable theory on the properties of projective cone schedulers, there is little practical guidance on choosing the parameters that define them. In this paper, we propose an algorithm for designing an automated projective cone scheduling system based on observations of an expert projective cone scheduler. We show that the estimated scheduling policy is able to emulate the expert in the sense that the average loss realized by the learned policy will converge to zero. Specifically, for a system with $n$ queues observed over a time horizon $T$, the average loss for the algorithm is $O(\ln(T)\sqrt{\ln(n)/T})$. This upper bound holds regardless of the statistical characteristics of the system. The algorithm uses the multiplicative weights update method and can be applied online so that additional observations of the expert scheduler can be used to improve an existing estimate of the policy. This provides a data-driven method for designing a scheduling policy based on observations of a human expert. We demonstrate the efficacy of the algorithm with a simple numerical example and discuss several extensions.
Tasks
Published 2018-01-30
URL http://arxiv.org/abs/1801.09821v1
PDF http://arxiv.org/pdf/1801.09821v1.pdf
PWC https://paperswithcode.com/paper/learning-to-emulate-an-expert-projective-cone
Repo
Framework
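
The learning rule is a multiplicative weights update driven by disagreement with the expert's decisions. The sketch below is a heavily simplified, diagonal-weight version for intuition: the learner serves the queue with the largest weighted backlog and down-weights parameters that led it to contradict the expert. The loss construction here is an illustrative simplification, not the paper's exact update.

```python
import numpy as np

def emulate_expert(queue_traces, expert_choices, eta=0.1):
    """Simplified, diagonal sketch of learning a projective-cone-style policy
    with multiplicative weights: serve the queue with the largest weighted
    backlog, penalize weights that contradicted the expert."""
    n = queue_traces.shape[1]
    w = np.ones(n) / n
    for q, expert in zip(queue_traces, expert_choices):
        learner = int(np.argmax(w * q))          # weighted max-backlog rule
        if learner != expert:
            loss = np.zeros(n)
            loss[learner] = 1.0                  # penalize the misleading weight
            w *= np.exp(-eta * loss)
            w /= w.sum()
    return w

# queue_traces: (T, n) backlogs over time; expert_choices: (T,) served-queue indices
w_hat = emulate_expert(np.random.randint(0, 20, size=(1000, 4)),
                       np.random.randint(0, 4, size=1000))
```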

Coarse-to-Fine Annotation Enrichment for Semantic Segmentation Learning

Title Coarse-to-Fine Annotation Enrichment for Semantic Segmentation Learning
Authors Yadan Luo, Ziwei Wang, Zi Huang, Yang Yang, Cong Zhao
Abstract Rich, high-quality annotated data is critical for semantic segmentation learning, yet acquiring dense, pixel-wise ground truth is both labor- and time-consuming. Coarse annotations (e.g., scribbles, coarse polygons) offer an economical alternative, but training with them alone rarely yields satisfactory performance. In order to generate high-quality annotated data at low time cost for accurate segmentation, in this paper we propose a novel annotation enrichment strategy, which expands existing coarse annotations of training data to a finer scale. Extensive experiments on the Cityscapes and PASCAL VOC 2012 benchmarks show that neural networks trained with the enriched annotations from our framework yield a significant improvement over those trained with the original coarse labels, and are highly competitive with networks trained on dense human annotations. The proposed method also outperforms other state-of-the-art weakly-supervised segmentation methods.
Tasks Semantic Segmentation
Published 2018-08-22
URL http://arxiv.org/abs/1808.07209v1
PDF http://arxiv.org/pdf/1808.07209v1.pdf
PWC https://paperswithcode.com/paper/coarse-to-fine-annotation-enrichment-for
Repo
Framework

Equal But Not The Same: Understanding the Implicit Relationship Between Persuasive Images and Text

Title Equal But Not The Same: Understanding the Implicit Relationship Between Persuasive Images and Text
Authors Mingda Zhang, Rebecca Hwa, Adriana Kovashka
Abstract Images and text in advertisements interact in complex, non-literal ways. The two channels are usually complementary, with each channel telling a different part of the story. Current approaches, such as image captioning methods, only examine literal, redundant relationships, where image and text show exactly the same content. To understand more complex relationships, we first collect a dataset of advertisement interpretations for whether the image and slogan in the same visual advertisement form a parallel (conveying the same message without literally saying the same thing) or non-parallel relationship, with the help of workers recruited on Amazon Mechanical Turk. We develop a variety of features that capture the creativity of images and the specificity or ambiguity of text, as well as methods that analyze the semantics within and across channels. We show that our method outperforms standard image-text alignment approaches on predicting the parallel/non-parallel relationship between image and text.
Tasks Image Captioning
Published 2018-07-21
URL http://arxiv.org/abs/1807.08205v1
PDF http://arxiv.org/pdf/1807.08205v1.pdf
PWC https://paperswithcode.com/paper/equal-but-not-the-same-understanding-the
Repo
Framework

Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories

Title Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories
Authors Naima Otberdout, Anis Kacem, Mohamed Daoudi, Lahoucine Ballihi, Stefano Berretti
Abstract In this paper, we propose a new approach for facial expression recognition using deep covariance descriptors. The solution is based on the idea of encoding local and global Deep Convolutional Neural Network (DCNN) features extracted from still images, in compact local and global covariance descriptors. The space geometry of the covariance matrices is that of Symmetric Positive Definite (SPD) matrices. By conducting the classification of static facial expressions using Support Vector Machine (SVM) with a valid Gaussian kernel on the SPD manifold, we show that deep covariance descriptors are more effective than the standard classification with fully connected layers and softmax. Besides, we propose a completely new and original solution to model the temporal dynamic of facial expressions as deep trajectories on the SPD manifold. As an extension of the classification pipeline of covariance descriptors, we apply SVM with valid positive definite kernels derived from global alignment for deep covariance trajectories classification. By performing extensive experiments on the Oulu-CASIA, CK+, and SFEW datasets, we show that both the proposed static and dynamic approaches achieve state-of-the-art performance for facial expression recognition outperforming many recent approaches.
Tasks Facial Expression Recognition
Published 2018-10-25
URL https://arxiv.org/abs/1810.11392v3
PDF https://arxiv.org/pdf/1810.11392v3.pdf
PWC https://paperswithcode.com/paper/automatic-analysis-of-facial-expressions
Repo
Framework
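
The static pipeline reduces to: pool DCNN activations into a channel covariance matrix, treat it as a point on the SPD manifold, and classify with an SVM whose kernel respects that geometry. The sketch below uses a log-Euclidean embedding followed by an ordinary RBF-SVM as a stand-in for the paper's Gaussian kernel on SPD matrices; the feature shapes and data are toy assumptions.

```python
import numpy as np
from scipy.linalg import logm
from sklearn.svm import SVC

def covariance_descriptor(feature_map):
    """feature_map: (C, H, W) DCNN activations for one face image.
    Returns the upper triangle of the matrix logarithm of the channel
    covariance, i.e. a log-Euclidean embedding of the SPD descriptor."""
    C = feature_map.shape[0]
    X = feature_map.reshape(C, -1)                      # channels x spatial positions
    cov = np.cov(X) + 1e-5 * np.eye(C)                  # regularize to stay SPD
    return logm(cov).real[np.triu_indices(C)]

# In the log-Euclidean space an ordinary RBF-SVM is a reasonable stand-in
# for a Gaussian kernel defined directly on the SPD manifold.
features = np.random.rand(40, 64, 7, 7)                 # toy DCNN activations
labels = np.random.randint(0, 6, size=40)               # 6 expression classes
descriptors = np.array([covariance_descriptor(f) for f in features])
clf = SVC(kernel="rbf", gamma="scale").fit(descriptors, labels)
```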

“You are no Jack Kennedy”: On Media Selection of Highlights from Presidential Debates

Title “You are no Jack Kennedy”: On Media Selection of Highlights from Presidential Debates
Authors Chenhao Tan, Hao Peng, Noah A. Smith
Abstract Political speeches and debates play an important role in shaping the images of politicians, and the public often relies on media outlets to select bits of political communication from a large pool of utterances. It is an important research question to understand what factors impact this selection process. To quantitatively explore the selection process, we build a three-decade dataset of presidential debate transcripts and post-debate coverage. We first examine the effect of wording and propose a binary classification framework that controls for both the speaker and the debate situation. We find that crowdworkers can only achieve an accuracy of 60% in this task, indicating that media choices are not entirely obvious. Our classifiers outperform crowdworkers on average, mainly in primary debates. We also compare important factors from crowdworkers’ free-form explanations with those from data-driven methods and find interesting differences. Few crowdworkers mentioned that “context matters”, whereas our data show that well-quoted sentences are more distinct from the previous utterance by the same speaker than less-quoted sentences. Finally, we examine the aggregate effect of media preferences towards different wordings to understand the extent of fragmentation among media outlets. By analyzing a bipartite graph built from quoting behavior in our data, we observe a decreasing trend in bipartisan coverage.
Tasks
Published 2018-02-23
URL http://arxiv.org/abs/1802.08690v1
PDF http://arxiv.org/pdf/1802.08690v1.pdf
PWC https://paperswithcode.com/paper/you-are-no-jack-kennedy-on-media-selection-of
Repo
Framework

Region Based Extensive Response Index Pattern for Facial Expression Recognition

Title Region Based Extensive Response Index Pattern for Facial Expression Recognition
Authors Monu Verma, Santosh. K. Vipparthi, Girdhari Singh
Abstract This paper presents a novel descriptor named Region based Extensive Response Index Pattern (RETRaIN) for facial expression recognition. The RETRaIN encodes the relation among the reference and neighboring pixels of facial active regions. These relations are computed by applying a directional compass mask to an input image and extracting the strongest edge responses in the principal directions. Further, the extreme edge index positions are selected and encoded into a compact six-bit code to reduce feature dimensionality and to distinguish between uniform and non-uniform patterns in the facial features. The performance of the proposed descriptor is tested and evaluated on three benchmark datasets: Extended Cohn-Kanade, JAFFE, and MUG. RETRaIN achieves superior recognition accuracy in comparison to state-of-the-art techniques.
Tasks Facial Expression Recognition
Published 2018-11-26
URL http://arxiv.org/abs/1811.10261v1
PDF http://arxiv.org/pdf/1811.10261v1.pdf
PWC https://paperswithcode.com/paper/region-based-extensive-response-index-pattern
Repo
Framework
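
The encoding recipe, as far as the abstract states it, is: filter the image with directional compass masks, keep the indices of the most extreme responses, and pack them into a compact six-bit code whose region histograms form the feature vector. The sketch below follows that recipe with Kirsch-style masks and a two-index, 3-bits-each packing; both of those details are assumptions the abstract does not spell out.

```python
import numpy as np
from scipy.ndimage import convolve

def kirsch_masks():
    """Eight Kirsch-style 3x3 compass masks, one per principal direction.
    Using Kirsch masks is an assumption; the paper says only 'directional
    compass mask'."""
    ring = np.array([5, 5, 5, -3, -3, -3, -3, -3])
    pos = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = []
    for shift in range(8):
        m = np.zeros((3, 3))
        for value, (r, c) in zip(np.roll(ring, shift), pos):
            m[r, c] = value
        masks.append(m)
    return masks

def retrain_like_codes(image):
    """Keep the indices of the two strongest compass responses per pixel and
    pack them into a 6-bit code (3 bits each) -- an assumed packing scheme."""
    responses = np.stack([convolve(image.astype(float), m) for m in kirsch_masks()])
    order = np.argsort(responses, axis=0)          # ascending response strength
    first, second = order[-1], order[-2]           # two most extreme directions
    return (first << 3) | second

codes = retrain_like_codes(np.random.randint(0, 256, size=(48, 48)))
hist = np.bincount(codes.ravel(), minlength=64)    # per-region histogram -> feature vector
```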

Spontaneous Facial Expression Recognition using Sparse Representation

Title Spontaneous Facial Expression Recognition using Sparse Representation
Authors Dawood Al Chanti, Alice Caplier
Abstract Facial expression is the most natural means for human beings to communicate their emotions. Most facial expression analysis studies consider the case of acted expressions. Spontaneous facial expression recognition is significantly more challenging since each person has a different way of reacting to a given emotion. We consider the problem of recognizing spontaneous facial expressions by learning discriminative dictionaries for sparse representation. Facial images are represented as a sparse linear combination of prototype atoms via the Orthogonal Matching Pursuit algorithm. Sparse codes are then used to train an SVM classifier dedicated to the recognition task. The dictionary that sparsifies the facial images (feature points with the same class labels should have similar sparse codes) is crucial for robust classification. Learning sparsifying dictionaries heavily relies on the initialization of the dictionary. To improve the performance of dictionaries, a random face feature descriptor based on the Random Projection concept is developed. The effectiveness of the proposed method is evaluated through several experiments on the spontaneous facial expression DynEmo database. It is also evaluated on the well-known acted facial expression JAFFE database for comparison with state-of-the-art methods.
Tasks Facial Expression Recognition
Published 2018-09-30
URL http://arxiv.org/abs/1810.00362v1
PDF http://arxiv.org/pdf/1810.00362v1.pdf
PWC https://paperswithcode.com/paper/spontaneous-facial-expression-recognition
Repo
Framework
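
The recognition pipeline is classical sparse coding: learn a sparsifying dictionary, code each face with Orthogonal Matching Pursuit, and train an SVM on the sparse codes, with random-projection features standing in for the paper's random face feature descriptor. A hedged scikit-learn sketch follows; the dictionary size, sparsity level, and toy data are assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import SVC

# Toy data: flattened face crops and expression labels.
faces = np.random.rand(200, 48 * 48)
labels = np.random.randint(0, 7, size=200)

# Random-projection features stand in for the paper's random face descriptor.
projector = GaussianRandomProjection(n_components=256, random_state=0)
features = projector.fit_transform(faces)

# Learn a sparsifying dictionary, code each face with OMP, and classify
# the resulting sparse codes with an SVM.
dico = MiniBatchDictionaryLearning(n_components=128, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=10, random_state=0)
codes = dico.fit(features).transform(features)
clf = SVC(kernel="linear").fit(codes, labels)
```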

Active Learning for Efficient Testing of Student Programs

Title Active Learning for Efficient Testing of Student Programs
Authors Ishan Rastogi, Aditya Kanade, Shirish Shevade
Abstract In this work, we propose an automated method to identify semantic bugs in student programs, called ATAS, which builds upon the recent advances in both symbolic execution and active learning. Symbolic execution is a program analysis technique which can generate test cases through symbolic constraint solving. Our method makes use of a reference implementation of the task as its sole input. We compare our method with a symbolic execution-based baseline on 6 programming tasks retrieved from CodeForces comprising a total of 23K student submissions. We show an average improvement of over 2.5x over the baseline in terms of runtime (thus making it more suitable for online evaluation), without a significant degradation in evaluation accuracy.
Tasks Active Learning
Published 2018-04-13
URL http://arxiv.org/abs/1804.05655v1
PDF http://arxiv.org/pdf/1804.05655v1.pdf
PWC https://paperswithcode.com/paper/active-learning-for-efficient-testing-of
Repo
Framework