October 17, 2019

3623 words 18 mins read

Paper Group ANR 684

Compressed Sensing Plus Motion (CS+M): A New Perspective for Improving Undersampled MR Image Reconstruction

Title Compressed Sensing Plus Motion (CS+M): A New Perspective for Improving Undersampled MR Image Reconstruction
Authors Angelica I. Aviles-Rivero, Guy Williams, Martin J. Graves, Carola-Bibiane Schonlieb
Abstract Purpose: To obtain high-quality reconstructions from highly undersampled dynamic MRI data, with the goal of reducing acquisition time and improving clinical outcomes across a range of applications. Theory and Methods: In dynamic MRI scans, the interaction between the target structure and physical motion affects the acquired measurements. We exploit the strong repercussion of motion in MRI by proposing a variational framework - called Compressed Sensing Plus Motion (CS+M) - that links, in a single model, simultaneously and explicitly, the algorithmic MRI reconstruction and the physical motion. More precisely, we recast image reconstruction and motion estimation as a single optimisation problem that is solved iteratively by breaking it into two more computationally tractable subproblems. The potential and generalisation capabilities of our approach are demonstrated in different clinical applications including cardiac cine, cardiac perfusion and brain perfusion imaging. Results: The proposed scheme reduces blurring artefacts and preserves the target shape and fine details whilst achieving the lowest reconstruction error under high undersampling rates of up to 12x. This results in lower residual aliasing artefacts than the compared reconstruction algorithms. Overall, the results from our scheme exhibit more stable behaviour and yield reconstructions closer to the gold standard. Conclusion: We show that incorporating physical motion into the CS computation yields a significant improvement in MR image reconstruction that is, in fact, closer to the gold standard. This translates into higher reconstruction quality whilst requiring fewer measurements.
Tasks Image Reconstruction, Motion Estimation
Published 2018-10-25
URL http://arxiv.org/abs/1810.10828v1
PDF http://arxiv.org/pdf/1810.10828v1.pdf
PWC https://paperswithcode.com/paper/compressed-sensing-plus-motion-csm-a-new
Repo
Framework
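
The alternating structure described in the CS+M abstract above (split the joint problem into a reconstruction subproblem and a motion-estimation subproblem and iterate) can be illustrated with a deliberately simplified sketch. Everything below is a toy stand-in, not the authors' formulation: a 1D Fourier "k-space" per frame, a per-frame integer shift in place of a dense motion field, and plain gradient steps in place of the paper's variational solvers.

```python
import numpy as np

def reconstruct_step(x, y, mask, v, lam=0.1, lr=0.2, iters=30):
    # Subproblem 1: update the image sequence x (frames x pixels) given motion v,
    # with gradient steps on the data term ||M F x - y||^2 plus a motion-coupling
    # penalty ||x[t+1] - shift(x[t], v[t])||^2 (only the x[t+1] side of the
    # coupling is differentiated here, for brevity).
    for _ in range(iters):
        Fx = np.fft.fft(x, axis=1)
        grad = np.fft.ifft(mask * (mask * Fx - y), axis=1).real
        warped = np.stack([np.roll(x[t], v[t]) for t in range(len(x) - 1)])
        grad[1:] += lam * (x[1:] - warped)
        x = x - lr * grad
    return x

def motion_step(x, search=3):
    # Subproblem 2: update motion given x; a crude integer-shift search per
    # frame pair stands in for the paper's variational motion-estimation term.
    v = []
    for t in range(len(x) - 1):
        errs = [np.sum((x[t + 1] - np.roll(x[t], s)) ** 2) for s in range(-search, search + 1)]
        v.append(int(np.argmin(errs)) - search)
    return np.array(v)

# Toy data: 8 frames of 64 samples, roughly 25% of "k-space" measured per frame.
rng = np.random.default_rng(0)
frames, n = 8, 64
truth = np.stack([np.roll(np.sin(np.linspace(0, 4 * np.pi, n)), 2 * t) for t in range(frames)])
mask = rng.random((frames, n)) < 0.25
y = mask * np.fft.fft(truth, axis=1)

x = np.zeros((frames, n))
v = np.zeros(frames - 1, dtype=int)
for _ in range(5):                      # alternate the two subproblems
    x = reconstruct_step(x, y, mask, v)
    v = motion_step(x)
print("data residual:", np.linalg.norm(mask * np.fft.fft(x, axis=1) - y))
```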

Augmented LiDAR Simulator for Autonomous Driving

Title Augmented LiDAR Simulator for Autonomous Driving
Authors Jin Fang, Dingfu Zhou, Feilong Yan, Tongtong Zhao, Feihu Zhang, Yu Ma, Liang Wang, Ruigang Yang
Abstract In Autonomous Driving (AD), detection and tracking of obstacles on the road is a critical task. Deep-learning-based methods using annotated LiDAR data have been the most widely adopted approach for this. Unfortunately, annotating 3D point clouds is a very challenging, time- and money-consuming task. In this paper, we propose a novel LiDAR simulator that augments real point clouds with synthetic obstacles (e.g., cars, pedestrians, and other movable objects). Unlike previous simulators that rely entirely on CG models and game engines, our augmented simulator bypasses the requirement to create high-fidelity background CAD models. Instead, we simply deploy a vehicle with a LiDAR scanner to sweep the streets of interest and obtain the background point cloud, from which annotated point clouds can be generated automatically. This unique “scan-and-simulate” capability makes our approach scalable and practical, ready for large-scale industrial applications. In this paper, we describe our simulator in detail, in particular the placement of obstacles, which is critical for performance. We show that detectors trained on our simulated LiDAR point clouds alone can perform comparably (within two percentage points) with those trained on real data. Mixing real and simulated data can achieve over 95% accuracy.
Tasks Autonomous Driving
Published 2018-11-17
URL http://arxiv.org/abs/1811.07112v2
PDF http://arxiv.org/pdf/1811.07112v2.pdf
PWC https://paperswithcode.com/paper/simulating-lidar-point-cloud-for-autonomous
Repo
Framework
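
As a rough illustration of the "scan-and-simulate" idea from the abstract above, the toy sketch below drops a synthetic obstacle into a real background scan and removes the background points the obstacle would occlude. It works in 2D range/azimuth only and ignores drivable-area checks, ground alignment, and intensity simulation, all of which a real placement pipeline would need; every name and number in it is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
background = rng.uniform(-40, 40, size=(5000, 2))         # real scan, sensor at the origin
obstacle = rng.normal([10.0, 5.0], 0.5, size=(200, 2))    # synthetic car-sized blob

def azimuth_bins(points, n_bins=720):
    # Quantize each point's bearing from the sensor into fixed azimuth bins.
    ang = np.arctan2(points[:, 1], points[:, 0]) + np.pi
    return (ang / (2 * np.pi) * n_bins).astype(int) % n_bins

def augment(background, obstacle, n_bins=720):
    bg_bin, ob_bin = azimuth_bins(background, n_bins), azimuth_bins(obstacle, n_bins)
    bg_r, ob_r = np.linalg.norm(background, axis=1), np.linalg.norm(obstacle, axis=1)
    # Nearest obstacle return per azimuth bin (inf where the obstacle is absent).
    nearest = np.full(n_bins, np.inf)
    np.minimum.at(nearest, ob_bin, ob_r)
    keep = bg_r < nearest[bg_bin]                          # occluded background points go away
    return np.vstack([background[keep], obstacle])

print(augment(background, obstacle).shape)
```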

Stacked Filters Stationary Flow For Hardware-Oriented Acceleration Of Deep Convolutional Neural Networks

Title Stacked Filters Stationary Flow For Hardware-Oriented Acceleration Of Deep Convolutional Neural Networks
Authors Yuechao Gao, Nianhong Liu, Sheng Zhang
Abstract To address memory and computation resource limitations for hardware-oriented acceleration of deep convolutional neural networks (CNNs), we present a computation flow, stacked filters stationary flow (SFS), and a corresponding data encoding format, relative indexed compressed sparse filter format (CSF), to make the best of data sparsity and to simplify data handling at execution time. We also propose a three-dimensional Single Instruction Multiple Data (3D-SIMD) processor architecture to illustrate how to accelerate deep CNNs by taking advantage of the SFS flow and CSF format. Compared with the state-of-the-art result (Han et al., 2016b), our methods achieve a 1.11x improvement in reducing the storage required by AlexNet and a 1.09x improvement in reducing the storage required by SqueezeNet, without loss of accuracy on the ImageNet dataset. Moreover, using these approaches, chip area for the logic handling irregular sparse data access can be saved. Compared with the 2D-SIMD processor structures in DVAS, ENVISION, etc., our methods achieve about a 3.65x improvement in processing element (PE) array utilization rate (from 26.4% to 96.5%) on the data from Deep Compression on AlexNet.
Tasks
Published 2018-01-23
URL http://arxiv.org/abs/1801.07459v3
PDF http://arxiv.org/pdf/1801.07459v3.pdf
PWC https://paperswithcode.com/paper/stacked-filters-stationary-flow-for-hardware
Repo
Framework
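
The abstract above does not spell out the bit-level layout of the relative indexed compressed sparse filter (CSF) format, so the sketch below shows one plausible reading of the idea: store each nonzero filter weight together with its offset from the previous nonzero, and accumulate products over only those entries at execution time. Treat it as an assumption-laden illustration, not the paper's exact encoding.

```python
import numpy as np

def to_csf(filt):
    # Encode one sparse filter as (relative index, value) pairs: each nonzero
    # weight stores its offset from the previous nonzero, so indices stay small
    # and the dense zeros are never touched at execution time.
    flat = filt.flatten()
    nz = np.flatnonzero(flat)
    rel = np.diff(nz, prepend=0)                 # relative (delta) indices
    return list(zip(rel.tolist(), flat[nz].tolist()))

def csf_dot(csf, x_flat):
    # Multiply-accumulate over only the stored nonzeros, recovering the
    # absolute position by summing the relative indices as we go.
    pos, acc = 0, 0.0
    for rel, w in csf:
        pos += rel
        acc += w * x_flat[pos]
    return acc

rng = np.random.default_rng(1)
filt = rng.normal(size=(3, 3, 8))
filt[rng.random(filt.shape) < 0.8] = 0.0          # roughly 80% sparsity
patch = rng.normal(size=filt.size)

csf = to_csf(filt)
print(len(csf), "nonzeros instead of", filt.size, "weights")
print(np.isclose(csf_dot(csf, patch), filt.flatten() @ patch))
```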

Automatic Evaluation of Neural Personality-based Chatbots

Title Automatic Evaluation of Neural Personality-based Chatbots
Authors Yujie Xing, Raquel Fernández
Abstract Stylistic variation is critical to render the utterances generated by conversational agents natural and engaging. In this paper, we focus on sequence-to-sequence models for open-domain dialogue response generation and propose a new method to evaluate the extent to which such models are able to generate responses that reflect different personality traits.
Tasks
Published 2018-09-30
URL http://arxiv.org/abs/1810.00472v1
PDF http://arxiv.org/pdf/1810.00472v1.pdf
PWC https://paperswithcode.com/paper/automatic-evaluation-of-neural-personality
Repo
Framework

Building Extraction at Scale using Convolutional Neural Network: Mapping of the United States

Title Building Extraction at Scale using Convolutional Neural Network: Mapping of the United States
Authors Hsiuhan Lexie Yang, Jiangye Yuan, Dalton Lunga, Melanie Laverdiere, Amy Rose, Budhendra Bhaduri
Abstract Establishing up-to-date, large-scale building maps is essential to understanding urban dynamics, for applications such as population estimation and urban planning. Although many computer vision tasks have been successfully carried out with deep convolutional neural networks (CNNs), there is a growing need to understand their large-scale impact on building mapping with remote sensing imagery. Taking advantage of the scalability of CNNs and using only a few areas with abundant building footprints, we conduct, for the first time, a comparative analysis of four state-of-the-art CNNs for extracting building footprints across the entire continental United States. The four CNN architectures, namely branch-out CNN, fully convolutional neural network (FCN), conditional random field as recurrent neural network (CRFasRNN), and SegNet, support semantic pixel-wise labeling and focus on capturing textural information at multiple scales. We use 1-meter resolution aerial images from the National Agriculture Imagery Program (NAIP) as the test bed and compare the extraction results across the four methods. In addition, we propose to combine signed-distance labels with SegNet, the preferred CNN architecture identified by our extensive evaluations, to advance building extraction results to the instance level. We further demonstrate the usefulness of fusing additional near-IR information into the building extraction framework. Large-scale experimental evaluations are conducted and reported using metrics that include precision, recall rate, intersection over union, and the number of buildings extracted. With the improved CNN model and no requirement for further post-processing, we have generated building maps for the United States. The quality of the extracted buildings and the processing time demonstrate that the proposed CNN-based framework fits the need for building extraction at scale.
Tasks
Published 2018-05-23
URL http://arxiv.org/abs/1805.08946v1
PDF http://arxiv.org/pdf/1805.08946v1.pdf
PWC https://paperswithcode.com/paper/building-extraction-at-scale-using
Repo
Framework
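
The abstract mentions combining signed-distance labels with SegNet; a common way to build such labels from a binary footprint mask is sketched below (positive inside a building, negative outside, truncated to a fixed range). The truncation value and the use of scipy's Euclidean distance transform are illustrative choices, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_labels(mask, truncate=10):
    # Positive distances inside a footprint, negative outside, truncated so the
    # network regresses a bounded target. The truncation (10 px) is illustrative.
    inside = distance_transform_edt(mask)      # distance to the nearest background pixel
    outside = distance_transform_edt(~mask)    # distance to the nearest building pixel
    return np.clip(inside, 0, truncate) - np.clip(outside, 0, truncate)

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 25:50] = True                      # one rectangular footprint
labels = signed_distance_labels(mask)
print(labels.min(), labels.max())              # -10.0 ... 10.0
```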

Surrogate Scoring Rules

Title Surrogate Scoring Rules
Authors Yang Liu, Juntao Wang, Yiling Chen
Abstract Strictly proper scoring rules (SPSR) are incentive compatible for eliciting information about random variables from strategic agents when the principal can reward agents after the realization of the random variables. They also quantify the quality of elicited information, with more accurate predictions receiving a higher score in expectation. In this paper, we extend such scoring rules to settings where a principal elicits private probabilistic beliefs but only has access to agents’ reports. We name our solution Surrogate Scoring Rules (SSR). SSR build on a bias correction step and an error rate estimation procedure for a reference answer defined using agents’ reports. We show that, with one bit of information about the prior distribution of the random variables, SSR in a multi-task setting recover SPSR in expectation, as if having access to the ground truth. Therefore, a salient feature of SSR is that they quantify the quality of information despite the lack of ground truth, just as SPSR do in the setting with ground truth. As a by-product, SSR induce dominant truthfulness in reporting. Our work complements the proper scoring rule literature by extending existing SPSR to operate when there is no clean ground truth verification. Because of the non-existence of verification, our setting falls into the classical information elicitation without verification (IEWV) domain, which has focused on eliciting discrete signals. Therefore, our work also contributes to the peer prediction literature by providing a scoring rule that elicits continuous probabilistic beliefs, an approach that rewards accuracy instead of correlation, and a mechanism that achieves truthfulness in dominant strategy in a multi-task setting. Our method is verified both theoretically and empirically using data collected from real human forecasters.
Tasks
Published 2018-02-26
URL https://arxiv.org/abs/1802.09158v5
PDF https://arxiv.org/pdf/1802.09158v5.pdf
PWC https://paperswithcode.com/paper/surrogate-scoring-rules-and-a-dominant-truth
Repo
Framework
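
The bias-correction step at the heart of surrogate scoring can be illustrated for a binary event scored with the Brier score. The sketch below assumes the error rates of the noisy reference answer are already known (the paper estimates them from agents' reports across tasks); under that assumption the corrected score matches the true score in expectation, which the short Monte-Carlo check confirms.

```python
import numpy as np

def brier(p, y):
    # Strictly proper score for a binary event (higher is better in this form).
    return 1.0 - (p - y) ** 2

def surrogate_score(p, z, e0, e1):
    # Bias-corrected score against a noisy reference z instead of the truth y.
    # With P(z=1|y=0)=e0 and P(z=0|y=1)=e1 known, this equals brier(p, y) in
    # expectation over the noise in z.
    if z == 1:
        return ((1 - e0) * brier(p, 1) - e1 * brier(p, 0)) / (1 - e0 - e1)
    return ((1 - e1) * brier(p, 0) - e0 * brier(p, 1)) / (1 - e0 - e1)

rng = np.random.default_rng(0)
e0, e1, p, y = 0.2, 0.3, 0.7, 1
z = np.where(rng.random(100000) < (e1 if y == 1 else e0), 1 - y, y)
scores = [surrogate_score(p, zi, e0, e1) for zi in z]
print(np.mean(scores), "vs", brier(p, y))   # unbiased despite the noisy reference
```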

Automatic Judgment Prediction via Legal Reading Comprehension

Title Automatic Judgment Prediction via Legal Reading Comprehension
Authors Shangbang Long, Cunchao Tu, Zhiyuan Liu, Maosong Sun
Abstract Automatic judgment prediction aims to predict judicial results based on case materials. It has been studied for several decades, mainly by lawyers and judges, and is considered a novel and promising application of artificial intelligence techniques in the legal field. Most existing methods follow the text classification framework, which fails to model the complex interactions among complementary case materials. To address this issue, we formalize the task as Legal Reading Comprehension (LRC) according to the legal scenario. Following the working protocol of human judges, LRC predicts the final judgment results based on three types of information: fact description, plaintiffs’ pleas, and law articles. Moreover, we propose a novel LRC model, AutoJudge, which captures the complex semantic interactions among facts, pleas, and laws. In experiments, we construct a real-world civil case dataset for LRC. Experimental results on this dataset demonstrate that our model achieves significant improvement over state-of-the-art models. We will publish all source code and datasets of this work on github.com for further research.
Tasks Reading Comprehension, Text Classification
Published 2018-09-18
URL http://arxiv.org/abs/1809.06537v1
PDF http://arxiv.org/pdf/1809.06537v1.pdf
PWC https://paperswithcode.com/paper/automatic-judgment-prediction-via-legal
Repo
Framework

A Time Series Graph Cut Image Segmentation Scheme for Liver Tumors

Title A Time Series Graph Cut Image Segmentation Scheme for Liver Tumors
Authors Laramie Paxton, Yufeng Cao, Kevin R. Vixie, Yuan Wang, Brian Hobbs, Chaan Ng
Abstract Tumor detection in biomedical imaging is a time-consuming process for medical professionals and is not without errors. Thus, in recent decades, researchers have developed algorithmic techniques for image processing using a wide variety of mathematical methods, such as statistical modeling, variational techniques, and machine learning. In this paper, we propose a semi-automatic method, based on graph cuts, for segmenting liver tissue in 2D CT scans into three labels denoting healthy, vessel, or tumor tissue. First, we create a feature vector for each pixel, consisting of the 59 intensity values in the time-series data, and propose a simplified perimeter cost term in the energy functional. We normalize the data and perimeter terms in the functional to expedite the graph cut without having to optimize the scaling parameter $\lambda$. In place of a training process, predetermined tissue means are computed based on sample regions identified by expert radiologists. The proposed method also has the advantage of being relatively simple to implement computationally. It was evaluated against the ground truth on a clinical CT dataset of 10 tumors and yielded segmentations with a mean Dice similarity coefficient (DSC) of 0.77 and a mean volume overlap error (VOE) of 36.7%. The average processing time was 1.25 minutes per slice.
Tasks Liver Segmentation, Semantic Segmentation, Time Series
Published 2018-09-13
URL http://arxiv.org/abs/1809.05210v1
PDF http://arxiv.org/pdf/1809.05210v1.pdf
PWC https://paperswithcode.com/paper/a-time-series-graph-cut-image-segmentation
Repo
Framework
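
A stripped-down version of the energy described above: per-pixel data costs from the distance between a pixel's 59-sample time series and predetermined tissue means, plus a perimeter-style penalty on neighbouring label disagreements. For brevity the sketch replaces the graph cut with a few iterated-conditional-modes sweeps and uses made-up tissue means, so it illustrates the energy rather than the authors' solver.

```python
import numpy as np

T, H, W = 59, 32, 32
rng = np.random.default_rng(0)
means = np.stack([np.full(T, m) for m in (60.0, 140.0, 100.0)])  # hypothetical tissue means
truth = np.zeros((H, W), dtype=int)
truth[8:20, 10:24] = 2                                           # a synthetic "tumor" block
img = means[truth] + rng.normal(0, 15, size=(H, W, T))           # 59-sample series per pixel

data = ((img[..., None, :] - means) ** 2).sum(-1)                # (H, W, 3) data costs
data /= data.max()                                               # normalise so lambda can stay 1

def icm_sweep(labels, lam=1.0):
    # One pass of iterated conditional modes: each pixel picks the label that
    # minimises its data cost plus the fraction of disagreeing 4-neighbours.
    for i in range(H):
        for j in range(W):
            nbrs = [labels[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < H and 0 <= b < W]
            cost = data[i, j] + lam * np.array([sum(k != n for n in nbrs) for k in range(3)]) / 4
            labels[i, j] = int(np.argmin(cost))
    return labels

labels = data.argmin(-1)                  # start from the unary-only labelling
for _ in range(5):
    labels = icm_sweep(labels)
print("pixel agreement with truth:", (labels == truth).mean())
```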

DeepSource: Point Source Detection using Deep Learning

Title DeepSource: Point Source Detection using Deep Learning
Authors A. Vafaei Sadr, Etienne E. Vos, Bruce A. Bassett, Zafiirah Hosenie, N. Oozeer, Michelle Lochner
Abstract Point source detection at low signal-to-noise is challenging for astronomical surveys, particularly in radio interferometry images where the noise is correlated. Machine learning is a promising solution, allowing the development of algorithms tailored to specific telescope arrays and science cases. We present DeepSource - a deep learning solution - that uses convolutional neural networks to achieve these goals. DeepSource enhances the Signal-to-Noise Ratio (SNR) of the original map and then uses dynamic blob detection to detect sources. Trained and tested on two sets of 500 simulated 1 deg x 1 deg MeerKAT images with a total of 300,000 sources, DeepSource is essentially perfect in both purity and completeness down to SNR = 4 and outperforms PyBDSF in all metrics. For uniformly-weighted images it achieves a Purity x Completeness (PC) score at SNR = 3 of 0.73, compared to 0.31 for the best PyBDSF model. For natural-weighting we find a smaller improvement of ~40% in the PC score at SNR = 3. If instead we ask where either of the purity or completeness first drop to 90%, we find that DeepSource reaches this value at SNR = 3.6 compared to the 4.3 of PyBDSF (natural-weighting). A key advantage of DeepSource is that it can learn to optimally trade off purity and completeness for any science case under consideration. Our results show that deep learning is a promising approach to point source detection in astronomical images.
Tasks Radio Interferometry
Published 2018-07-07
URL http://arxiv.org/abs/1807.02701v1
PDF http://arxiv.org/pdf/1807.02701v1.pdf
PWC https://paperswithcode.com/paper/deepsource-point-source-detection-using-deep
Repo
Framework
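
The purity, completeness, and PC score quoted above are straightforward to compute once detections are matched to true sources. The sketch below uses a simple greedy nearest-neighbour match within a fixed radius; the matching rule and radius are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

def purity_completeness(true_xy, det_xy, match_radius=2.0):
    # Greedy match within a radius: each true source claims at most one detection.
    # Purity = TP/(TP+FP), completeness = TP/(TP+FN); PC is their product.
    det_used = np.zeros(len(det_xy), dtype=bool)
    tp = 0
    for t in true_xy:
        d = np.linalg.norm(det_xy - t, axis=1)
        d[det_used] = np.inf
        j = int(np.argmin(d))
        if d[j] <= match_radius:
            det_used[j] = True
            tp += 1
    fp = len(det_xy) - tp
    fn = len(true_xy) - tp
    purity = tp / (tp + fp) if tp + fp else 0.0
    completeness = tp / (tp + fn) if tp + fn else 0.0
    return purity, completeness, purity * completeness

rng = np.random.default_rng(3)
truths = rng.uniform(0, 100, size=(50, 2))
dets = np.vstack([truths[:40] + rng.normal(0, 0.5, (40, 2)),   # 40 recovered sources
                  rng.uniform(0, 100, size=(5, 2))])            # 5 spurious detections
print(purity_completeness(truths, dets))
```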

#phramacovigilance - Exploring Deep Learning Techniques for Identifying Mentions of Medication Intake from Twitter

Title #phramacovigilance - Exploring Deep Learning Techniques for Identifying Mentions of Medication Intake from Twitter
Authors Debanjan Mahata, Jasper Friedrichs, Hitkul, Rajiv Ratn Shah
Abstract Mining social media messages for health- and drug-related information has received significant interest in pharmacovigilance research. Social media sites (e.g., Twitter) have been used for monitoring drug abuse and adverse reactions to drug usage, and for analyzing expressions of sentiment related to drugs. Most of these studies are based on aggregated results from a large population rather than specific sets of individuals. In order to conduct studies at an individual level or for specific cohorts, identifying posts that mention intake of medicine by the user is necessary. Towards this objective, we train different deep neural network classification models on a publicly available annotated dataset and study their performance on identifying mentions of personal medication intake in tweets. We also design and train a new architecture, a stacked ensemble of shallow convolutional neural network (CNN) ensembles. We use random search for tuning the hyperparameters of the models and share the hyperparameter values of the best learnt model in each deep neural network architecture. Our system produces state-of-the-art results, with a micro-averaged F-score of 0.693.
Tasks
Published 2018-05-16
URL http://arxiv.org/abs/1805.06375v1
PDF http://arxiv.org/pdf/1805.06375v1.pdf
PWC https://paperswithcode.com/paper/phramacovigilance-exploring-deep-learning
Repo
Framework
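
A "stacked ensemble of shallow CNN ensembles" can be sketched as several one-layer convolutional text classifiers whose logits feed a small meta-classifier. The PyTorch snippet below is a minimal sketch under assumed dimensions (100-d word embeddings, 40-token tweets, 3 classes, 5 members); none of these hyperparameters are the paper's tuned values.

```python
import torch
import torch.nn as nn

class ShallowCNN(nn.Module):
    # One convolutional layer over pre-computed word embeddings, global max
    # pooling over time, then a linear head: a deliberately shallow member model.
    def __init__(self, emb_dim=100, n_filters=64, kernel=3, n_classes=3):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel, padding=1)
        self.head = nn.Linear(n_filters, n_classes)

    def forward(self, x):                       # x: (batch, seq, emb_dim)
        h = torch.relu(self.conv(x.transpose(1, 2)))
        return self.head(h.max(dim=2).values)   # global max pooling over time

class StackedEnsemble(nn.Module):
    def __init__(self, n_members=5, n_classes=3):
        super().__init__()
        self.members = nn.ModuleList([ShallowCNN(kernel=k, n_classes=n_classes)
                                      for k in (2, 3, 4, 5, 6)[:n_members]])
        # Meta-classifier stacked on the concatenated member logits.
        self.meta = nn.Linear(n_members * n_classes, n_classes)

    def forward(self, x):
        logits = torch.cat([m(x) for m in self.members], dim=1)
        return self.meta(logits)

model = StackedEnsemble()
dummy = torch.randn(8, 40, 100)                 # 8 tweets, 40 tokens, 100-d embeddings
print(model(dummy).shape)                       # torch.Size([8, 3])
```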

Attentional Multilabel Learning over Graphs: A Message Passing Approach

Title Attentional Multilabel Learning over Graphs: A Message Passing Approach
Authors Kien Do, Truyen Tran, Thin Nguyen, Svetha Venkatesh
Abstract We address a largely open problem of multilabel classification over graphs. Unlike traditional vector input, a graph has rich variable-size substructures which are related to the labels in some ways. We believe that uncovering these relations might hold the key to classification performance and explainability. We introduce GAML (Graph Attentional Multi-Label learning), a novel graph neural network that can handle this problem effectively. GAML regards labels as auxiliary nodes and models them in conjunction with the input graph. By applying message passing and attention mechanisms to both the label nodes and the input nodes iteratively, GAML can capture the relations between the labels and the input subgraphs at various resolution scales. Moreover, our model can take advantage of explicit label dependencies. It also scales linearly with the number of labels and graph size thanks to our proposed hierarchical attention. We evaluate GAML on an extensive set of experiments with both graph-structured inputs and classical unstructured inputs. The results show that GAML significantly outperforms other competing methods. Importantly, GAML enables intuitive visualizations for better understanding of the label-substructure relations and explanation of the model behaviors.
Tasks Multi-Label Learning
Published 2018-04-01
URL http://arxiv.org/abs/1804.00293v2
PDF http://arxiv.org/pdf/1804.00293v2.pdf
PWC https://paperswithcode.com/paper/attentional-multilabel-learning-over-graphs-a
Repo
Framework

Survey on Emotional Body Gesture Recognition

Title Survey on Emotional Body Gesture Recognition
Authors Fatemeh Noroozi, Ciprian Adrian Corneanu, Dorota Kamińska, Tomasz Sapiński, Sergio Escalera, Gholamreza Anbarjafari
Abstract Automatic emotion recognition has become a trending research topic in the past decade. While works based on facial expressions or speech abound, recognizing affect from body gestures remains a less explored topic. We present a new comprehensive survey hoping to boost research in the field. We first introduce emotional body gestures as a component of what is commonly known as “body language” and comment on general aspects such as gender differences and culture dependence. We then define a complete framework for automatic emotional body gesture recognition. We introduce person detection and comment on static and dynamic body pose estimation methods, both in RGB and 3D. We then review the recent literature related to representation learning and emotion recognition from images of emotionally expressive gestures. We also discuss multi-modal approaches that combine speech or face with body gestures for improved emotion recognition. While pre-processing methodologies (e.g., human detection and pose estimation) are nowadays mature technologies, fully developed for robust large-scale analysis, we show that for emotion recognition the quantity of labelled data is scarce, there is no agreement on clearly defined output spaces, and the representations are shallow and largely based on naive geometrical representations.
Tasks Emotion Recognition, Gesture Recognition, Human Detection, Pose Estimation, Representation Learning
Published 2018-01-23
URL http://arxiv.org/abs/1801.07481v1
PDF http://arxiv.org/pdf/1801.07481v1.pdf
PWC https://paperswithcode.com/paper/survey-on-emotional-body-gesture-recognition
Repo
Framework

Exploring Brain-wide Development of Inhibition through Deep Learning

Title Exploring Brain-wide Development of Inhibition through Deep Learning
Authors Asim Iqbal, Asfandyar Sheikh, Theofanis Karayannis
Abstract We introduce here a fully automated convolutional neural network-based method for brain image processing to Detect Neurons in different brain Regions during Development (DeNeRD). Our method takes a developing mouse brain as input and i) registers the brain sections against a developing mouse reference atlas, ii) detects various types of neurons, and iii) quantifies the neural density in many unique brain regions at different postnatal (P) time points. Our method is invariant to the shape, size and expression of neurons and by using DeNeRD, we compare the brain-wide neural density of all GABAergic neurons in developing brains of ages P4, P14 and P56. We discover and report 6 different clusters of regions in the mouse brain in which GABAergic neurons develop in a differential manner from early age (P4) to adulthood (P56). These clusters reveal key steps of GABAergic cell development that seem to track with the functional development of diverse brain regions as the mouse transitions from a passive receiver of sensory information (<P14) to an active seeker (>P14).
Tasks
Published 2018-07-09
URL http://arxiv.org/abs/1807.03238v1
PDF http://arxiv.org/pdf/1807.03238v1.pdf
PWC https://paperswithcode.com/paper/exploring-brain-wide-development-of
Repo
Framework

Tree Edit Distance Learning via Adaptive Symbol Embeddings: Supplementary Materials and Results

Title Tree Edit Distance Learning via Adaptive Symbol Embeddings: Supplementary Materials and Results
Authors Benjamin Paaßen
Abstract Metric learning aims to improve classification accuracy by learning a distance measure which brings data points from the same class closer together and pushes data points from different classes further apart. Recent research has demonstrated that metric learning approaches can also be applied to trees, such as molecular structures, abstract syntax trees of computer programs, or syntax trees of natural language, by learning the cost function of an edit distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree. However, learning such costs directly may yield an edit distance which violates metric axioms, is challenging to interpret, and may not generalize well. In this contribution, we propose a novel metric learning approach for trees which learns an edit distance indirectly by embedding the tree nodes as vectors, such that the Euclidean distance between those vectors supports class discrimination. We learn such embeddings by reducing the distance to prototypical trees from the same class and increasing the distance to prototypical trees from different classes. In our experiments, we show that our proposed metric learning approach improves upon the state of the art in metric learning for trees on six benchmark data sets, ranging from computer science and biomedical data to a natural-language processing data set containing over 300,000 nodes.
Tasks Metric Learning
Published 2018-05-18
URL http://arxiv.org/abs/1805.07123v1
PDF http://arxiv.org/pdf/1805.07123v1.pdf
PWC https://paperswithcode.com/paper/tree-edit-distance-learning-via-adaptive
Repo
Framework
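
The cost construction described above, where edit costs come from distances between learned symbol embeddings, is easy to illustrate on sequences: replacing a with b costs the distance between their embeddings, and deleting or inserting a symbol costs its norm. The sketch below applies this to a plain string edit distance as a stand-in for the tree edit distance, and uses random embeddings rather than embeddings learned against class prototypes.

```python
import numpy as np

# Random stand-in embeddings; the paper learns these so that distances between
# trees of the same class shrink and distances across classes grow.
emb = {s: v for s, v in zip("abcd", np.random.default_rng(0).normal(size=(4, 3)))}

def cost_rep(a, b): return np.linalg.norm(emb[a] - emb[b])   # zero when a == b
def cost_del(a):    return np.linalg.norm(emb[a])            # also used for insertion

def embedding_edit_distance(x, y):
    # Standard edit-distance dynamic programme with embedding-derived costs.
    D = np.zeros((len(x) + 1, len(y) + 1))
    D[1:, 0] = np.cumsum([cost_del(a) for a in x])
    D[0, 1:] = np.cumsum([cost_del(b) for b in y])
    for i, a in enumerate(x, 1):
        for j, b in enumerate(y, 1):
            D[i, j] = min(D[i - 1, j - 1] + cost_rep(a, b),   # replace (free if a == b)
                          D[i - 1, j] + cost_del(a),          # delete a
                          D[i, j - 1] + cost_del(b))          # insert b
    return D[-1, -1]

print(embedding_edit_distance("abc", "abd"))   # only the c -> d replacement costs
```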

Entangled-photon decision maker

Title Entangled-photon decision maker
Authors Nicolas Chauvet, David Jegouso, Benoît Boulanger, Hayato Saigo, Kazuya Okamura, Hirokazu Hori, Aurélien Drezet, Serge Huant, Guillaume Bachelier, Makoto Naruse
Abstract The competitive multi-armed bandit (CMAB) problem is related to social issues such as maximizing total social benefit while preserving equality among individuals by overcoming conflicts between individual decisions, which could seriously decrease social benefit. The study described herein provides experimental evidence that entangled photons physically resolve the CMAB in the 2-arm, 2-player case, maximizing the social rewards while ensuring equality. Moreover, we demonstrate that deception, i.e., outperforming the other player by receiving a greater reward, cannot be accomplished in a polarization-entangled-photon-based system, while deception is achievable in systems based on classical polarization-correlated photons with fixed polarizations. In addition, random polarization-correlated photons have been studied numerically and shown to ensure equality between players and prevent deception as well, although the maximum CMAB performance is reduced compared with the entangled-photon experiments. Autonomous alignment schemes for the polarization bases were also experimentally demonstrated, based only on the decision-conflict information observed by an individual, without communication between players. This study paves the way for collective decision making in uncertain, dynamically changing environments based on entangled quantum states, a crucial step toward utilizing quantum systems for intelligent functionalities.
Tasks Decision Making
Published 2018-04-12
URL https://arxiv.org/abs/1804.04316v2
PDF https://arxiv.org/pdf/1804.04316v2.pdf
PWC https://paperswithcode.com/paper/entangled-photons-for-competitive-multi-armed
Repo
Framework