October 18, 2019

3325 words 16 mins read

Paper Group ANR 638

Transparent, Efficient, and Robust Word Embedding Access with WOMBAT. Automatic deep learning-based normalization of breast dynamic contrast-enhanced magnetic resonance images. Multiplicative Latent Force Models. On Discrimination Discovery and Removal in Ranked Data using Causal Graph. Dimension Reduction Using Active Manifolds. A Walk with SGD. R …

Transparent, Efficient, and Robust Word Embedding Access with WOMBAT

Title Transparent, Efficient, and Robust Word Embedding Access with WOMBAT
Authors Mark-Christoph Müller, Michael Strube
Abstract We present WOMBAT, a Python tool which supports NLP practitioners in accessing word embeddings from code. WOMBAT addresses common research problems, including unified access, scaling, and robust and reproducible preprocessing. Code that uses WOMBAT for accessing word embeddings is not only cleaner, more readable, and easier to reuse, but also much more efficient than code using standard in-memory methods: a Python script using WOMBAT for evaluating seven large word embedding collections (8.7M embedding vectors in total) on a simple SemEval sentence similarity task involving 250 raw sentence pairs completes in under ten seconds end-to-end on a standard notebook computer.
Tasks Word Embeddings
Published 2018-07-02
URL http://arxiv.org/abs/1807.00717v1
PDF http://arxiv.org/pdf/1807.00717v1.pdf
PWC https://paperswithcode.com/paper/transparent-efficient-and-robust-word
Repo
Framework
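
To make the kind of evaluation the abstract benchmarks concrete, below is a minimal sketch of a SemEval-style sentence similarity evaluation over pre-loaded word embeddings. The dict-based lookup, the averaging scheme, and the 300-dimensional default are illustrative assumptions; this is not WOMBAT's API, only the sort of downstream code that such an access layer would feed.

```python
# Minimal sketch of a SemEval-style sentence similarity evaluation over
# word embeddings. The plain dict lookup stands in for whatever access layer
# (e.g. WOMBAT) supplies the vectors; it is not WOMBAT's API.
import numpy as np
from scipy.stats import pearsonr

def sentence_vector(sentence, embeddings, dim=300):
    """Average the vectors of all in-vocabulary tokens."""
    vecs = [embeddings[t] for t in sentence.lower().split() if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v) or 1.0
    return float(np.dot(u, v) / denom)

def evaluate(pairs, gold_scores, embeddings):
    """pairs: list of (sent1, sent2); gold_scores: human similarity ratings."""
    preds = [cosine(sentence_vector(a, embeddings), sentence_vector(b, embeddings))
             for a, b in pairs]
    return pearsonr(preds, gold_scores)[0]

# Toy smoke test with random stand-in vectors.
embeddings = {w: np.random.rand(300) for w in ["cat", "dog", "sits"]}
pairs = [("the cat sits", "a dog sits"), ("cat sits", "dog sits"), ("cat", "dog")]
print(evaluate(pairs, gold_scores=[3.8, 3.5, 2.5], embeddings=embeddings))
```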

Automatic deep learning-based normalization of breast dynamic contrast-enhanced magnetic resonance images

Title Automatic deep learning-based normalization of breast dynamic contrast-enhanced magnetic resonance images
Authors Jun Zhang, Ashirbani Saha, Brian J. Soher, Maciej A. Mazurowski
Abstract Objective: To develop an automatic image normalization algorithm for intensity correction of images from breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) acquired by different MRI scanners with various imaging parameters, using only image information. Methods: DCE-MR images of 460 subjects with breast cancer acquired by different scanners were used in this study. Each subject had one T1-weighted pre-contrast image and three T1-weighted post-contrast images available. Our normalization algorithm operated under the assumption that the same type of tissue in different patients should be represented by the same voxel value. We used four tissue/material types as the anchors for the normalization: 1) air, 2) fat tissue, 3) dense tissue, and 4) heart. The algorithm proceeded in the following two steps: First, a state-of-the-art deep learning-based algorithm was applied to perform tissue segmentation accurately and efficiently. Then, based on the segmentation results, a subject-specific piecewise linear mapping function was applied between the anchor points to normalize the same type of tissue in different patients into the same intensity ranges. We evaluated the algorithm with 300 subjects used for training and the rest used for testing. Results: The application of our algorithm to images with different scanning parameters resulted in highly improved consistency in pixel values and extracted radiomics features. Conclusion: The proposed image normalization strategy based on tissue segmentation can perform intensity correction fully automatically, without the knowledge of the scanner parameters. Significance: We have thoroughly tested our algorithm and showed that it successfully normalizes the intensity of DCE-MR images. We made our software publicly available for others to apply in their analyses.
Tasks
Published 2018-07-05
URL http://arxiv.org/abs/1807.02152v1
PDF http://arxiv.org/pdf/1807.02152v1.pdf
PWC https://paperswithcode.com/paper/automatic-deep-learning-based-normalization
Repo
Framework
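
The subject-specific piecewise linear mapping between anchor points described above can be sketched in a few lines of NumPy. The per-subject anchor intensities are assumed to come from the segmentation step, and the reference anchor values below are illustrative, not the paper's.

```python
# Sketch of the piecewise linear intensity mapping between tissue anchors.
# The per-subject anchor intensities (air, fat, dense tissue, heart) would be
# derived from the segmentation step; here they are assumed to be given.
import numpy as np

# Target (reference) intensities every subject is mapped onto -- assumed values.
REFERENCE_ANCHORS = np.array([0.0, 100.0, 300.0, 600.0])

def normalize(image, subject_anchors):
    """Map a subject's image so its anchor intensities align with the reference.

    image: ndarray of voxel intensities
    subject_anchors: increasing intensities of [air, fat, dense tissue, heart]
    """
    anchors = np.asarray(subject_anchors, dtype=float)
    # np.interp does the piecewise linear interpolation between anchor points;
    # values outside the anchor range are clipped to the first/last reference level.
    return np.interp(image, anchors, REFERENCE_ANCHORS)

# Example: a toy "image" with intensities spanning the subject's anchor range.
img = np.array([[5.0, 120.0], [480.0, 900.0]])
print(normalize(img, subject_anchors=[10.0, 150.0, 500.0, 950.0]))
```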

Multiplicative Latent Force Models

Title Multiplicative Latent Force Models
Authors Daniel J. Tait, Bruce J. Worton
Abstract Bayesian modelling of dynamic systems must strike a compromise between providing a complete mechanistic specification of the process and retaining the flexibility to handle situations in which data is sparse relative to model complexity, or in which a full specification is hard to motivate. Latent force models achieve this dual aim by specifying a parsimonious linear evolution equation with an additive latent Gaussian process (GP) forcing term. In this work we extend the latent force framework to allow for multiplicative interactions between the GP and the latent states, giving more control over the geometry of the trajectories. Unfortunately, inference is no longer straightforward, so we introduce an approximation based on the method of successive approximations and examine its performance in a simulation study.
Tasks
Published 2018-11-01
URL http://arxiv.org/abs/1811.00423v1
PDF http://arxiv.org/pdf/1811.00423v1.pdf
PWC https://paperswithcode.com/paper/multiplicative-latent-force-models
Repo
Framework
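
The method of successive approximations mentioned in the abstract is classical Picard iteration. The toy scalar example below applies it to an ODE with a multiplicative forcing term, using a fixed sinusoid as a stand-in for a latent GP draw; it illustrates the iteration itself, not the paper's approximate inference scheme.

```python
# Toy illustration of the method of successive approximations (Picard iteration)
# for a multiplicative forcing term: dx/dt = (a + g(t)) * x(t).
# g(t) is a fixed function standing in for a latent GP draw.
import numpy as np

a, x0 = -0.5, 1.0
t = np.linspace(0.0, 2.0, 201)
g = 0.3 * np.sin(2.0 * np.pi * t)          # stand-in for the latent force

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t (same length as y)."""
    dt = np.diff(t)
    inc = 0.5 * dt * (y[1:] + y[:-1])
    return np.concatenate([[0.0], np.cumsum(inc)])

x = np.full_like(t, x0)                    # initial guess x^(0)(t) = x0
for _ in range(25):                        # successive approximations
    x = x0 + cumtrapz((a + g) * x, t)      # x^(k+1)(t) = x0 + int_0^t (a+g) x^(k) ds

# Compare with the closed-form solution x(t) = x0 * exp(a t + int_0^t g ds).
exact = x0 * np.exp(a * t + cumtrapz(g, t))
print(np.max(np.abs(x - exact)))           # small after enough iterations
```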

On Discrimination Discovery and Removal in Ranked Data using Causal Graph

Title On Discrimination Discovery and Removal in Ranked Data using Causal Graph
Authors Yongkai Wu, Lu Zhang, Xintao Wu
Abstract Predictive models learned from historical data are widely used to help companies and organizations make decisions. However, they may treat certain groups unfairly, raising concerns about fairness and discrimination. In this paper, we study the fairness-aware ranking problem, which aims to discover discrimination in ranked datasets and reconstruct a fair ranking. Existing methods in fairness-aware ranking are mainly based on statistical parity, which cannot measure the true discriminatory effect since discrimination is causal. On the other hand, existing methods in causality-based anti-discrimination learning focus on classification problems and cannot be directly applied to ranked data. To address these limitations, we propose to map the rank position to a continuous score variable that represents the qualification of the candidates. Then, we build a causal graph that consists of both the discrete profile attributes and the continuous score. The path-specific effect technique is extended to the mixed-variable causal graph to identify both direct and indirect discrimination. The relationship between the path-specific effects for the ranked data and those for the binary decision is theoretically analyzed. Finally, algorithms for discovering and removing discrimination from a ranked dataset are developed. Experiments on a real dataset show the effectiveness of our approaches.
Tasks
Published 2018-03-05
URL http://arxiv.org/abs/1803.01901v1
PDF http://arxiv.org/pdf/1803.01901v1.pdf
PWC https://paperswithcode.com/paper/on-discrimination-discovery-and-removal-in
Repo
Framework
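
The first modelling step in the abstract, mapping rank positions to a continuous qualification score, can be illustrated as follows. The exponential quantile transform is an assumption chosen for illustration; the paper defines its own score construction.

```python
# Sketch of the rank-to-score step described in the abstract: map discrete rank
# positions to a continuous "qualification" score. The exponential model below
# is an illustrative assumption, not the distribution used in the paper.
import numpy as np

def ranks_to_scores(n_candidates, rate=1.0):
    """Assign higher scores to better (lower) ranks via exponential quantiles."""
    ranks = np.arange(1, n_candidates + 1)          # 1 = best ranked
    # Best rank -> largest quantile -> largest score; worst rank -> smallest.
    quantiles = 1.0 - ranks / (n_candidates + 1.0)
    return -np.log(1.0 - quantiles) / rate          # inverse CDF of Exp(rate)

scores = ranks_to_scores(10)
print(scores)  # monotonically decreasing continuous scores for ranks 1..10
```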

Dimension Reduction Using Active Manifolds

Title Dimension Reduction Using Active Manifolds
Authors Robert A. Bridges, Chris Felder, Chelsey Hoff
Abstract Scientists and engineers rely on accurate mathematical models to quantify the objects of their studies, which are often high-dimensional. Unfortunately, high-dimensional models are inherently difficult to work with, especially when observations are sparse or expensive to obtain. One way to address this problem is to approximate the original model with fewer input dimensions. Our goal was to recover a function f that takes n inputs and returns one output, where n is potentially large. For any given n-tuple, we assume that we can observe a sample of the gradient and output of the function, but that doing so is computationally expensive. This project was inspired by an approach known as Active Subspaces, which works by projecting onto the linear subspace where the function changes most on average. Our research gives mathematical developments informing a novel algorithm for this problem. Our approach, Active Manifolds, increases accuracy by seeking nonlinear analogues that approximate the function. The benefits of our approach are the elimination of unprincipled parameter choices, guaranteed accessible visualization, and improved estimation accuracy.
Tasks Dimensionality Reduction
Published 2018-02-07
URL http://arxiv.org/abs/1802.04178v1
PDF http://arxiv.org/pdf/1802.04178v1.pdf
PWC https://paperswithcode.com/paper/dimension-reduction-using-active-manifolds
Repo
Framework
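
For reference, the Active Subspaces baseline that the abstract contrasts with can be sketched directly: estimate the directions along which the function changes most on average from sampled gradients and project onto them. The nonlinear Active Manifolds algorithm itself is not reproduced here.

```python
# Sketch of the Active Subspaces idea: estimate the directions along which f
# changes most on average from sampled gradients, then project inputs onto the
# leading eigenvectors. (Active Manifolds is a nonlinear analogue of this.)
import numpy as np

def active_subspace(gradients, k=1):
    """gradients: (m, n) array of sampled gradients of f; returns (n, k) basis."""
    C = gradients.T @ gradients / gradients.shape[0]   # average outer product
    eigvals, eigvecs = np.linalg.eigh(C)               # ascending eigenvalues
    return eigvecs[:, ::-1][:, :k]                     # top-k directions

# Toy example: f(x) = (3*x0 - 2*x1)**2 varies only along the direction (3, -2).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
grads = 2 * (3 * X[:, :1] - 2 * X[:, 1:]) * np.array([3.0, -2.0])
W = active_subspace(grads, k=1)
print(W.ravel())   # approximately proportional to (3, -2), up to sign and norm
```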

A Walk with SGD

Title A Walk with SGD
Authors Chen Xing, Devansh Arpit, Christos Tsirigotis, Yoshua Bengio
Abstract We present novel empirical observations regarding how stochastic gradient descent (SGD) navigates the loss landscape of over-parametrized deep neural networks (DNNs). These observations expose the qualitatively different roles of learning rate and batch size in DNN optimization and generalization. Specifically, we study the DNN loss surface along the trajectory of SGD by interpolating the loss surface between parameters from consecutive iterations and tracking various metrics during training. We find that the loss interpolation between parameters before and after each training iteration's update is roughly convex, with a minimum (the valley floor) in between, for most of the training. Based on this and other metrics, we deduce that for most of the training update steps SGD moves in valley-like regions of the loss surface, jumping from one valley wall to another at a height above the valley floor. This 'bouncing between walls at a height' mechanism helps SGD traverse larger distances for small batch sizes and large learning rates, which we find play qualitatively different roles in the dynamics. While a large learning rate maintains a large height above the valley floor, a small batch size injects noise that facilitates exploration. We find this mechanism is crucial for generalization because the valley floor has barriers, and this exploration above the valley floor allows SGD to quickly travel far away from the initialization point (without being affected by barriers) and find flatter regions, corresponding to better generalization.
Tasks
Published 2018-02-24
URL http://arxiv.org/abs/1802.08770v4
PDF http://arxiv.org/pdf/1802.08770v4.pdf
PWC https://paperswithcode.com/paper/a-walk-with-sgd
Repo
Framework
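
The central diagnostic in the abstract, interpolating the loss between the parameters before and after a single SGD update, is easy to reproduce on a toy problem. The sketch below uses a small least-squares objective in place of a DNN; the learning rate and batch size are arbitrary choices.

```python
# Sketch of the diagnostic from the abstract: interpolate the loss between the
# parameters before and after one SGD update and inspect its shape.
# A toy least-squares problem stands in for the DNNs studied in the paper.
import numpy as np

rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)

def loss(theta, idx=slice(None)):
    r = A[idx] @ theta - b[idx]
    return 0.5 * np.mean(r ** 2)

def grad(theta, idx):
    return A[idx].T @ (A[idx] @ theta - b[idx]) / len(b[idx])

theta_t = rng.normal(size=5)
batch = rng.choice(100, size=10, replace=False)        # a small mini-batch
theta_next = theta_t - 0.5 * grad(theta_t, batch)      # one SGD step, lr = 0.5

# Evaluate the full-data loss along the segment between the two iterates.
alphas = np.linspace(0.0, 1.0, 11)
interp = [loss((1 - a) * theta_t + a * theta_next) for a in alphas]
print(np.round(interp, 4))   # exactly quadratic (hence convex) for this toy problem
```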

RECS: Robust Graph Embedding Using Connection Subgraphs

Title RECS: Robust Graph Embedding Using Connection Subgraphs
Authors Saba A. Al-Sayouri, Danai Koutra, Evangelos E. Papalexakis, Sarah S. Lam
Abstract The success of graph embeddings, or node representation learning, in a variety of downstream tasks, such as node classification, link prediction, and recommendation systems, has led to their popularity in recent years. Representation learning algorithms aim to preserve local and global network structure by identifying node neighborhood notions. However, many existing algorithms generate embeddings that fail to properly preserve the network structure, or lead to unstable representations due to random processes (e.g., random walks to generate context) and, thus, cannot generalize to multi-graph problems. In this paper, we propose RECS, a novel, stable graph embedding algorithmic framework. RECS learns graph representations using connection subgraphs by employing the analogy between graphs and electrical circuits. It preserves both local and global connectivity patterns, and addresses the issue of high-degree nodes. Further, it exploits the strength of weak ties and meta-data that have been neglected by baselines. The experiments show that RECS outperforms state-of-the-art algorithms by up to 36.85% on the multi-label classification problem. Further, in contrast to baselines, RECS, being deterministic, is completely stable.
Tasks Graph Embedding, Link Prediction, Multi-Label Classification, Node Classification, Recommendation Systems, Representation Learning
Published 2018-05-03
URL http://arxiv.org/abs/1805.01509v3
PDF http://arxiv.org/pdf/1805.01509v3.pdf
PWC https://paperswithcode.com/paper/recs-robust-graph-embedding-using-connection
Repo
Framework
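
The graphs-as-electrical-circuits analogy that RECS builds on can be illustrated with the standard notion of effective resistance, computed from the graph Laplacian's pseudoinverse. This is background for the analogy, not the RECS algorithm itself.

```python
# Sketch of the "graphs as electrical circuits" analogy the abstract invokes:
# effective resistance between nodes, computed from the Laplacian pseudoinverse.
# This is a standard circuit-based quantity, not the RECS algorithm.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
L = nx.laplacian_matrix(G).toarray().astype(float)
L_pinv = np.linalg.pinv(L)                       # Moore-Penrose pseudoinverse

def effective_resistance(u, v):
    e = np.zeros(L.shape[0])
    e[u], e[v] = 1.0, -1.0
    return float(e @ L_pinv @ e)                 # R_eff = (e_u - e_v)^T L^+ (e_u - e_v)

print(effective_resistance(0, 33))               # "electrical distance" between two hubs
```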

Almost-unsupervised Speech Recognition with Close-to-zero Resource Based on Phonetic Structures Learned from Very Small Unpaired Speech and Text Data

Title Almost-unsupervised Speech Recognition with Close-to-zero Resource Based on Phonetic Structures Learned from Very Small Unpaired Speech and Text Data
Authors Yi-Chen Chen, Chia-Hao Shen, Sung-Feng Huang, Hung-yi Lee, Lin-shan Lee
Abstract Producing a large amount of annotated speech data for training ASR systems remains difficult for the more than 95% of the world's languages that are low-resourced. However, we note that human babies start to learn a language from the sounds of a small number of exemplar words, without hearing a large amount of data. We initiate some preliminary work in this direction in this paper. Audio Word2Vec is used to obtain embeddings of spoken words which carry phonetic information extracted from the signals. An autoencoder is used to generate embeddings of text words based on the articulatory features of their phoneme sequences. Both sets of embeddings, for spoken and for text words, describe similar phonetic structures among words in their respective latent spaces. A mapping from the audio embeddings to the text embeddings therefore yields word-level ASR. This mapping can be learned by aligning a small number of spoken words and the corresponding text words in the embedding spaces. In initial experiments, only 200 annotated spoken words and one hour of unannotated speech data gave a word accuracy of 27.5%, which is low but a good starting point.
Tasks Speech Recognition
Published 2018-10-30
URL http://arxiv.org/abs/1810.12566v1
PDF http://arxiv.org/pdf/1810.12566v1.pdf
PWC https://paperswithcode.com/paper/almost-unsupervised-speech-recognition-with
Repo
Framework
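
The alignment step described in the abstract, learning a mapping from audio embeddings to text embeddings from a small number of paired words and then decoding by nearest neighbour, can be sketched as below. Random vectors stand in for the Audio Word2Vec and articulatory-feature embeddings, and the plain linear least-squares map is an assumption, not the paper's model.

```python
# Minimal sketch of the alignment step: learn a map from the audio-embedding
# space to the text-embedding space from a handful of paired words, then
# recognize a spoken word by nearest neighbour in text space.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, d_audio, d_text = 200, 128, 64
A = rng.normal(size=(n_pairs, d_audio))            # audio embeddings of paired words
T = rng.normal(size=(n_pairs, d_text))             # text embeddings of the same words

W, *_ = np.linalg.lstsq(A, T, rcond=None)          # linear map: audio -> text space

def recognize(audio_vec, text_vocab_embeddings, vocab):
    """Map an audio embedding into text space and return the nearest vocabulary word."""
    projected = audio_vec @ W
    dists = np.linalg.norm(text_vocab_embeddings - projected, axis=1)
    return vocab[int(np.argmin(dists))]

vocab = [f"word_{i}" for i in range(n_pairs)]
print(recognize(A[3], T, vocab))                   # ideally recovers "word_3"
```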

Visual Weather Temperature Prediction

Title Visual Weather Temperature Prediction
Authors Wei-Ta Chu, Kai-Chia Ho, Ali Borji
Abstract In this paper, we attempt to employ convolutional recurrent neural networks for weather temperature estimation using only image data. We study ambient temperature estimation based on deep neural networks in two scenarios: a) estimating the temperature of a single outdoor image, and b) predicting the temperature of the last image in an image sequence. In the first scenario, visual features are extracted by a convolutional neural network trained on a large-scale image dataset. We demonstrate that promising performance can be obtained, and analyze how the volume of training data influences performance. In the second scenario, we consider the temporal evolution of visual appearance, and construct a recurrent neural network to predict the temperature of the last image in a given image sequence. We obtain better prediction accuracy compared to state-of-the-art models. Further, we investigate how performance varies when information is extracted from different scene regions, and when images are captured at different daytime hours. Our approach further reinforces the idea of using only visual information for cost-efficient weather prediction in the future.
Tasks
Published 2018-01-25
URL http://arxiv.org/abs/1801.08267v1
PDF http://arxiv.org/pdf/1801.08267v1.pdf
PWC https://paperswithcode.com/paper/visual-weather-temperature-prediction
Repo
Framework
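
For the second scenario, a minimal recurrent regressor over per-frame CNN features might look like the following PyTorch sketch. The feature dimension, hidden size, and the use of an LSTM are assumptions; the convolutional feature extractor is treated as given.

```python
# Sketch of the sequence scenario: a recurrent model over per-frame CNN features
# that predicts the temperature of the last image. Layer sizes are assumptions.
import torch
import torch.nn as nn

class TemperatureRNN(nn.Module):
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)            # regression output in degrees

    def forward(self, features):                    # features: (batch, seq_len, feat_dim)
        out, _ = self.lstm(features)
        return self.head(out[:, -1, :]).squeeze(-1) # use the last time step

model = TemperatureRNN()
feats = torch.randn(4, 8, 512)                      # 4 sequences of 8 frame features
pred = model(feats)
loss = nn.functional.mse_loss(pred, torch.tensor([21.0, 18.5, 25.0, 30.2]))
print(pred.shape, loss.item())
```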

Visual Question Answering as Reading Comprehension

Title Visual Question Answering as Reading Comprehension
Authors Hui Li, Peng Wang, Chunhua Shen, Anton van den Hengel
Abstract Visual question answering (VQA) demands simultaneous comprehension of both the image's visual content and natural language questions. In some cases, the reasoning requires common sense or general knowledge, which usually appears in the form of text. Current methods jointly embed both the visual information and the textual features into the same space. However, modeling the complex interactions between the two different modalities is not an easy task. Instead of struggling with multimodal feature fusion, in this paper we propose to unify all the input information in natural language, so as to convert VQA into a machine reading comprehension problem. With this transformation, our method not only can tackle VQA datasets that focus on observation-based questions, but can also be naturally extended to handle knowledge-based VQA, which requires exploring a large-scale external knowledge base. It is a step towards being able to exploit large volumes of text and natural language processing techniques to address the VQA problem. Two types of models are proposed to deal with open-ended VQA and multiple-choice VQA, respectively. We evaluate our models on three VQA benchmarks. Performance comparable to the state of the art demonstrates the effectiveness of the proposed method.
Tasks Common Sense Reasoning, Machine Reading Comprehension, Question Answering, Reading Comprehension, Visual Question Answering
Published 2018-11-29
URL http://arxiv.org/abs/1811.11903v1
PDF http://arxiv.org/pdf/1811.11903v1.pdf
PWC https://paperswithcode.com/paper/visual-question-answering-as-reading
Repo
Framework
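
The reformulation the abstract proposes, verbalizing the image content and treating VQA as machine reading comprehension, can be illustrated with an off-the-shelf extractive QA model. The captions and knowledge snippet below are hypothetical, and the generic HuggingFace pipeline is only a stand-in for the paper's models.

```python
# Sketch of "VQA as reading comprehension": verbalize the image content
# (captions, detected facts, external knowledge) into one text context and run
# an off-the-shelf extractive QA model on it. Not the paper's model.
from transformers import pipeline

qa = pipeline("question-answering")   # downloads a default extractive QA model

image_descriptions = [                # hypothetical captions for one image
    "A man in a red jacket is riding a bicycle on a wet street.",
    "There are two traffic lights and a bus stop in the background.",
]
external_knowledge = "People usually wear raincoats or jackets when it rains."

context = " ".join(image_descriptions + [external_knowledge])
question = "What is the man riding?"

answer = qa(question=question, context=context)
print(answer["answer"], answer["score"])
```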

A Novel Weighted Distance Measure for Multi-Attributed Graph

Title A Novel Weighted Distance Measure for Multi-Attributed Graph
Authors Muhammad Abulaish, Jahiruddin
Abstract Due to the exponential growth of complex data, graph structure has become increasingly important for modeling various entities and their interactions, with many interesting applications including bioinformatics and social network analysis. Depending on the complexity of the data, the underlying graph model can range from a simple directed/undirected and/or weighted/unweighted graph to a complex graph (aka multi-attributed graph) in which vertices and edges are labelled with multi-dimensional vectors. In this paper, we present a novel weighted distance measure based on the weighted Euclidean norm, defined as a function of both vertex and edge attributes, which can be used for various graph analysis tasks including classification and cluster analysis. The proposed distance measure has the flexibility to increase or decrease the weightage of edge labels while calculating the distance between vertex pairs. We also propose the MAGDist algorithm, which reads a multi-attributed graph stored in CSV files containing the lists of vertex vectors and edge vectors, and calculates the distance between each vertex pair using the proposed weighted distance measure. Finally, we propose a multi-attributed similarity graph generation algorithm, MAGSim, which reads the output of MAGDist and generates a similarity graph that can be analysed using classification and clustering algorithms. The significance and accuracy of the proposed distance measure and algorithms are evaluated on the Iris and Twitter data sets, and we find that the similarity graph generated by our proposed method yields better clustering results than existing similarity graph generation methods.
Tasks Graph Generation
Published 2018-01-22
URL http://arxiv.org/abs/1801.07150v1
PDF http://arxiv.org/pdf/1801.07150v1.pdf
PWC https://paperswithcode.com/paper/a-novel-weighted-distance-measure-for-multi
Repo
Framework
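
One plausible form of a weighted distance that combines vertex and edge attribute vectors with a tunable edge weight is sketched below. The exact formula is defined in the paper rather than the abstract, so this particular combination is an illustrative assumption.

```python
# Sketch of a weighted distance between a vertex pair that combines vertex and
# edge attribute vectors with a tunable edge weight. The exact formula is
# defined in the paper; this combination is only an illustrative assumption.
import numpy as np

def weighted_distance(x_u, x_v, e_uv, edge_weight=0.5):
    """Weighted Euclidean distance over vertex attributes and the edge attribute vector.

    x_u, x_v: attribute vectors of the two vertices
    e_uv:     attribute vector of the edge between them (zeros if absent)
    edge_weight: relative weight given to edge attributes (0 = vertices only)
    """
    vertex_term = np.sum((np.asarray(x_u) - np.asarray(x_v)) ** 2)
    edge_term = np.sum(np.asarray(e_uv) ** 2)
    return float(np.sqrt((1.0 - edge_weight) * vertex_term + edge_weight * edge_term))

# Example vertex pair from an Iris-like multi-attributed graph (values hypothetical).
print(weighted_distance([5.1, 3.5], [4.9, 3.0], e_uv=[0.2], edge_weight=0.3))
```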

On the Feasibility of Real-Time 3D Hand Tracking using Edge GPGPU Acceleration

Title On the Feasibility of Real-Time 3D Hand Tracking using Edge GPGPU Acceleration
Authors Ammar Qammaz, Sokol Kosta, Nikolaos Kyriazis, Antonis Argyros
Abstract This paper presents a case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful one and a computationally weak one. By wrapping the C++ library in a Java container and by capitalizing on a Java-based offloading infrastructure that supports both CPU and GPGPU computations, we are able to automatically establish the server-client workflow that best addresses the resource allocation problem when executing from the weak workstation. As a result, the weak workstation can perform well at the task, despite lacking sufficient hardware to do the required computations locally. This is achieved by offloading the computations that rely on GPGPU to the powerful workstation across the network that connects them. We show the edge-based computation challenges associated with the information flow of the ported algorithm, demonstrate how we cope with them, and identify what needs to be improved to achieve even better performance.
Tasks
Published 2018-04-30
URL http://arxiv.org/abs/1804.11256v1
PDF http://arxiv.org/pdf/1804.11256v1.pdf
PWC https://paperswithcode.com/paper/on-the-feasibility-of-real-time-3d-hand
Repo
Framework
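
The server-client offloading workflow described above is language-agnostic; the sketch below illustrates the same pattern with Python's standard-library XML-RPC, where a weak client forwards a heavy call to a powerful server. The function name and payload are hypothetical, and the paper's actual system wraps the C++ tracker in a Java offloading framework rather than anything like this.

```python
# Language-agnostic sketch of the offloading pattern described in the abstract:
# a weak client forwards a heavy computation to a powerful server over the
# network. The tracking function below is a hypothetical stand-in.
from xmlrpc.server import SimpleXMLRPCServer

def track_hands(frame_bytes_len):
    """Stand-in for the GPGPU-heavy 3D hand tracking step (hypothetical signature)."""
    return {"frame_size": frame_bytes_len, "pose": [0.0] * 27}

def run_server(host="0.0.0.0", port=8000):
    server = SimpleXMLRPCServer((host, port), allow_none=True)
    server.register_function(track_hands)
    server.serve_forever()

# On the weak workstation, the client side would look like:
#   from xmlrpc.client import ServerProxy
#   proxy = ServerProxy("http://powerful-workstation:8000/")
#   result = proxy.track_hands(len(frame_bytes))

if __name__ == "__main__":
    run_server()
```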

An Implementation, Empirical Evaluation and Proposed Improvement for Bidirectional Splitting Method for Argumentation Frameworks under Stable Semantics

Title An Implementation, Empirical Evaluation and Proposed Improvement for Bidirectional Splitting Method for Argumentation Frameworks under Stable Semantics
Authors Renata Wong
Abstract Abstract argumentation frameworks are formal systems that facilitate obtaining conclusions from non-monotonic knowledge systems. Within such a system, an argumentation semantics is defined as a set of arguments with some desired qualities, for example, that its elements are not in conflict with each other. Splitting an argumentation framework can significantly speed up the computation of argumentation semantics. With respect to stable semantics, two methods have been proposed to split an argumentation framework in either a unidirectional or a bidirectional fashion. The advantage of bidirectional splitting is that it is not structure-dependent and, unlike unidirectional splitting, it can be used for frameworks consisting of a single strongly connected component. Bidirectional splitting makes use of a minimum cut. In this paper, we implement and test the performance of the bidirectional splitting method, along with two types of graph cut algorithms. Experimental data suggest that using a minimum cut will not improve the performance of computing stable semantics in most cases. Hence, instead of a minimum cut, we propose to use a balanced cut, where the framework is split into two sub-frameworks of equal size. Experiments on bidirectional splitting using the balanced cut show a significant improvement in the performance of computing semantics.
Tasks Abstract Argumentation
Published 2018-08-11
URL http://arxiv.org/abs/1808.03736v1
PDF http://arxiv.org/pdf/1808.03736v1.pdf
PWC https://paperswithcode.com/paper/an-implementation-empirical-evaluation-and
Repo
Framework
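
The two splitting strategies compared in the abstract, a minimum cut versus a balanced cut, can be illustrated on an undirected view of a toy attack graph using standard networkx routines. This is not the authors' implementation, and real bidirectional splitting operates on the directed framework.

```python
# Illustration of the two splitting strategies compared in the abstract, on an
# undirected view of a toy attack graph: a global minimum cut versus a balanced
# bisection into two equally sized parts. Standard networkx routines are used.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Toy attack relation (directed); cuts are computed on the undirected skeleton.
attacks = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3), (1, 4)]
af = nx.DiGraph(attacks)
G = af.to_undirected()

cut_value, (side_a, side_b) = nx.stoer_wagner(G)          # global minimum cut
print("minimum cut:", cut_value, side_a, side_b)

part_a, part_b = kernighan_lin_bisection(G, seed=0)       # balanced cut (equal halves)
print("balanced cut:", sorted(part_a), sorted(part_b))
```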

Scale Estimation of Monocular SfM for a Multi-modal Stereo Camera

Title Scale Estimation of Monocular SfM for a Multi-modal Stereo Camera
Authors Shinya Sumikura, Ken Sakurada, Nobuo Kawaguchi, Ryosuke Nakamura
Abstract This paper proposes a novel method of estimating the absolute scale of monocular SfM for a multi-modal stereo camera. In the fields of computer vision and robotics, scale estimation for monocular SfM has been widely investigated in order to simplify systems. This paper addresses the scale estimation problem for a stereo camera system in which two cameras capture different spectral images (e.g., RGB and FIR), whose feature points are difficult to directly match using descriptors. Furthermore, the number of matching points between FIR images can be comparatively small, owing to the low resolution and lack of thermal scene texture. To cope with these difficulties, the proposed method estimates the scale parameter using batch optimization, based on the epipolar constraint of a small number of feature correspondences between the invisible light images. The accuracy and numerical stability of the proposed method are verified by synthetic and real image experiments.
Tasks
Published 2018-10-28
URL http://arxiv.org/abs/1810.11856v1
PDF http://arxiv.org/pdf/1810.11856v1.pdf
PWC https://paperswithcode.com/paper/scale-estimation-of-monocular-sfm-for-a-multi
Repo
Framework

Content-based Popularity Prediction of Online Petitions Using a Deep Regression Model

Title Content-based Popularity Prediction of Online Petitions Using a Deep Regression Model
Authors Shivashankar Subramanian, Timothy Baldwin, Trevor Cohn
Abstract Online petitions are a cost-effective way for citizens to collectively engage with policy-makers in a democracy. Predicting the popularity of a petition — commonly measured by its signature count — based on its textual content has utility for policy-makers as well as those posting the petition. In this work, we model this task using CNN regression with an auxiliary ordinal regression objective. We demonstrate the effectiveness of our proposed approach using UK and US government petition datasets.
Tasks
Published 2018-05-17
URL http://arxiv.org/abs/1805.06566v1
PDF http://arxiv.org/pdf/1805.06566v1.pdf
PWC https://paperswithcode.com/paper/content-based-popularity-prediction-of-online
Repo
Framework
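
The joint objective described in the abstract, CNN regression with an auxiliary ordinal regression objective, can be sketched as a loss function that combines a regression term on log signature counts with cumulative binary ordinal targets. The bin thresholds, the log transform, and the loss weighting are illustrative assumptions; the text-CNN body is abstracted away.

```python
# Sketch of a joint regression + auxiliary ordinal objective like the one the
# abstract describes. The model body (a text CNN) is abstracted away; the bin
# thresholds and the loss weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

# Assumed popularity bins over log signature counts (e.g. >10, >100, >1000, >10000).
THRESHOLDS = torch.log(torch.tensor([10.0, 100.0, 1000.0, 10000.0]))

def joint_loss(pred_log_count, ordinal_logits, true_count, aux_weight=0.3):
    """pred_log_count: (batch,) regression output; ordinal_logits: (batch, K) logits."""
    true_log = torch.log(true_count.float() + 1.0)
    regression = F.mse_loss(pred_log_count, true_log)
    # Cumulative ordinal targets: does the petition exceed each threshold?
    targets = (true_log.unsqueeze(1) > THRESHOLDS).float()
    ordinal = F.binary_cross_entropy_with_logits(ordinal_logits, targets)
    return regression + aux_weight * ordinal

# Toy batch: two petitions with 42 and 12000 signatures.
pred = torch.tensor([3.5, 9.0])
logits = torch.zeros(2, 4, requires_grad=True)
loss = joint_loss(pred, logits, torch.tensor([42, 12000]))
loss.backward()
print(loss.item())
```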