January 29, 2020

3296 words 16 mins read

Paper Group ANR 754

General Convolutional Sparse Coding with Unknown Noise. Fully Automated Multi-Organ Segmentation in Abdominal Magnetic Resonance Imaging with Deep Neural Networks. Comparison of Different Spike Sorting Subtechniques Based on Rat Brain Basolateral Amygdala Neuronal Activity. Extending Description Logic EL++ with Linear Constraints on the Probability …

General Convolutional Sparse Coding with Unknown Noise

Title General Convolutional Sparse Coding with Unknown Noise
Authors Yaqing Wang, James T. Kwok, Lionel M. Ni
Abstract Convolutional sparse coding (CSC) can learn representative shift-invariant patterns from multiple kinds of data. However, existing CSC methods can only model noise drawn from a Gaussian distribution, which is restrictive and unrealistic. In this paper, we propose a general CSC model capable of dealing with complicated unknown noise. The noise is now modeled by a Gaussian mixture model, which can approximate any continuous probability density function. We use the expectation-maximization algorithm to solve the problem and design an efficient method for the weighted CSC problem in the maximization step. The crux is to speed up the convolution in the frequency domain while keeping the other computation involving the weight matrix in the spatial domain. In addition, we update the dictionary and codes simultaneously with a nonconvex accelerated proximal gradient algorithm, without introducing extra alternating loops. The resulting method has time and space complexity comparable to existing CSC methods. Extensive experiments on synthetic and real noisy biomedical data sets validate that our method can model noise effectively and obtain high-quality filters and representations.
Tasks
Published 2019-03-08
URL http://arxiv.org/abs/1903.03253v1
PDF http://arxiv.org/pdf/1903.03253v1.pdf
PWC https://paperswithcode.com/paper/general-convolutional-sparse-coding-with
Repo
Framework
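
A minimal NumPy sketch of the noise-modeling idea in the abstract above: reconstruction residuals are explained by a zero-mean Gaussian mixture, and the E-step responsibilities yield per-entry weights that a weighted CSC solver could use in its maximization step. The two-component mixture, variable names, and toy data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def e_step_weights(residual, variances, mix):
    """E-step of a zero-mean Gaussian mixture fitted to reconstruction residuals.

    Returns per-entry weights (expected precisions) for a weighted reconstruction loss.
    """
    r = residual.ravel()[:, None]                      # (N, 1)
    var = np.asarray(variances)[None, :]               # (1, K)
    pi = np.asarray(mix)[None, :]                      # (1, K)
    log_p = -0.5 * (r ** 2) / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
    log_p -= log_p.max(axis=1, keepdims=True)          # numerical stability
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)            # responsibilities per component
    weights = (resp / var).sum(axis=1)                 # expected precision per entry
    return weights.reshape(residual.shape)

# Toy usage: a signal corrupted by heavy-tailed (non-Gaussian) noise.
rng = np.random.default_rng(0)
signal = rng.normal(size=256)
reconstruction = signal + rng.standard_t(df=2, size=256) * 0.1
w = e_step_weights(signal - reconstruction, variances=[0.01, 1.0], mix=[0.9, 0.1])
print(w.shape, float(w.min()), float(w.max()))
```

Entries likely generated by the high-variance (outlier) component receive small weights, so they contribute less when the dictionary and codes are updated.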

Fully Automated Multi-Organ Segmentation in Abdominal Magnetic Resonance Imaging with Deep Neural Networks

Title Fully Automated Multi-Organ Segmentation in Abdominal Magnetic Resonance Imaging with Deep Neural Networks
Authors Yuhua Chen, Dan Ruan, Jiayu Xiao, Lixia Wang, Bin Sun, Rola Saouaf, Wensha Yang, Debiao Li, Zhaoyang Fan
Abstract Segmentation of multiple organs-at-risk (OARs) is essential for radiation therapy treatment planning and other clinical applications. We developed an Automated deep Learning-based Abdominal Multi-Organ segmentation (ALAMO) framework based on a 2D U-net and a densely connected network structure, with tailored design of the data augmentation and training procedures such as deep connection, auxiliary supervision, and multi-view training. The model takes multi-slice MR images as input and generates segmentation results as output. Three-Tesla T1 VIBE (Volumetric Interpolated Breath-hold Examination) images of 102 subjects were collected and used in our study. Ten OARs were studied, including the liver, spleen, pancreas, left/right kidneys, stomach, duodenum, small intestine, spinal cord, and vertebral bodies. Two radiologists manually labeled the organs, and the consensus contours were used as the ground truth. Of the complete cohort of 102 subjects, 20 were held out for independent testing, and the rest were used for training and validation. Performance was measured using volume overlap and surface distance. The ALAMO framework generated segmentation labels in good agreement with the manual results. Specifically, 9 of the 10 OARs achieved high Dice Similarity Coefficients (DSCs) in the range of 0.87-0.96, with the exception of the duodenum at a DSC of 0.80. Inference completes within one minute for a 3D volume of 320x288x180. Overall, the ALAMO model matches state-of-the-art performance. The proposed ALAMO framework allows for fully automated abdominal MR segmentation with high accuracy and low memory and computation time demands.
Tasks Data Augmentation
Published 2019-12-23
URL https://arxiv.org/abs/1912.11000v1
PDF https://arxiv.org/pdf/1912.11000v1.pdf
PWC https://paperswithcode.com/paper/fully-automated-multi-organ-segmentation-in
Repo
Framework
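
The evaluation above is reported in terms of the Dice Similarity Coefficient (DSC). A minimal NumPy sketch of how such a volume-overlap metric is typically computed from binary 3D masks (not the authors' evaluation code; the toy spheres are placeholders):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice Similarity Coefficient between two binary 3D segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: two slightly offset spheres inside a small volume.
zz, yy, xx = np.mgrid[:64, :64, :64]
a = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
b = (zz - 34) ** 2 + (yy - 30) ** 2 + (xx - 32) ** 2 < 15 ** 2
print(f"DSC = {dice_coefficient(a, b):.3f}")
```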

Comparison of Different Spike Sorting Subtechniques Based on Rat Brain Basolateral Amygdala Neuronal Activity

Title Comparison of Different Spike Sorting Subtechniques Based on Rat Brain Basolateral Amygdala Neuronal Activity
Authors Sahar Hojjatinia, Constantino M. Lagoa
Abstract Electrophysiological recordings of brain neuronal activity and their analysis provide a basis for exploring the structure of brain function and investigating the nervous system. The recorded signals are typically a combination of spikes and noise. High levels of background noise and the possibility of recording electrical signals from several neurons adjacent to the recording site have led scientists to develop neuronal signal processing tools such as spike sorting to facilitate brain data analysis. Spike sorting plays a pivotal role in understanding the electrophysiological activity of neuronal networks. This process prepares recorded data for interpreting neuronal interactions and understanding the overall structure of brain functions. Spike sorting consists of three steps: spike detection, feature extraction, and spike clustering. There are several methods to implement each of the spike sorting steps. This paper provides a systematic comparison of various spike sorting sub-techniques applied to real extracellularly recorded data from the rat brain basolateral amygdala. Efficiently sorted data, resulting from a careful choice of spike sorting sub-methods, leads to better interpretation of the connectivity of brain structures under different conditions, which is critical for the diagnosis and treatment of neurological disorders. Here, spike detection is performed by an appropriate choice of threshold level via three different approaches. Feature extraction is carried out with PCA and Kernel PCA, of which Kernel PCA performs better. We apply four different algorithms for spike clustering: K-means, fuzzy C-means, Bayesian clustering, and fuzzy maximum likelihood estimation. As required by most clustering algorithms, the optimal number of clusters is determined using validity indices for each method. Finally, the sorting results are evaluated using inter-spike interval histograms.
Tasks
Published 2019-10-27
URL https://arxiv.org/abs/1910.14098v1
PDF https://arxiv.org/pdf/1910.14098v1.pdf
PWC https://paperswithcode.com/paper/comparison-of-different-spike-sorting
Repo
Framework
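
A compact scikit-learn sketch of the three-step pipeline the abstract describes (threshold-based detection, PCA features, K-means clustering). The MAD-based threshold rule, window length, and cluster count are illustrative assumptions rather than the paper's settings; KernelPCA is a drop-in alternative to PCA.

```python
import numpy as np
from sklearn.decomposition import PCA      # sklearn.decomposition.KernelPCA also works here
from sklearn.cluster import KMeans

def detect_spikes(trace, fs, window_ms=2.0, k=4.0):
    """Detect threshold crossings using a robust MAD-based noise estimate."""
    sigma = np.median(np.abs(trace)) / 0.6745          # common noise-level estimate
    threshold = k * sigma
    half = int(window_ms * 1e-3 * fs / 2)
    idx = np.flatnonzero(np.abs(trace) > threshold)
    spikes, last = [], -np.inf
    for i in idx:                                      # keep one window per crossing
        if i - last > half and half <= i < len(trace) - half:
            spikes.append(trace[i - half:i + half])
            last = i
    return np.array(spikes)

rng = np.random.default_rng(1)
fs = 24000
trace = rng.normal(0, 1, fs * 2)
trace[np.arange(100, len(trace), 4000)] += 8.0         # synthetic spikes on noise
waveforms = detect_spikes(trace, fs)
features = PCA(n_components=3).fit_transform(waveforms)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(waveforms.shape, np.bincount(labels))
```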

Extending Description Logic EL++ with Linear Constraints on the Probability of Axioms

Title Extending Description Logic EL++ with Linear Constraints on the Probability of Axioms
Authors Marcelo Finger
Abstract One of the main reasons to employ a description logic such as EL or EL++ is the fact that it has efficient, polynomial-time algorithmic properties such as deciding consistency and inferring subsumption. However, simply by adding negation of concepts to it, we obtain the expressivity of description logics whose decision procedure is ExpTime-complete. A similar complexity explosion occurs if we add probability assignments on concepts. To lower the resulting complexity, we instead concentrate on assigning probabilities to axioms (GCIs). We show that the consistency detection problem for such a probabilistic description logic is NP-complete, and present a deterministic, linear-algebraic algorithm to solve it using the column generation technique. We also examine and provide algorithms for the probabilistic extension problem, which consists of inferring the minimum and maximum probabilities for a new axiom, given a consistent probabilistic knowledge base.
Tasks
Published 2019-08-27
URL https://arxiv.org/abs/1908.10405v1
PDF https://arxiv.org/pdf/1908.10405v1.pdf
PWC https://paperswithcode.com/paper/extending-description-logic-el-with-linear
Repo
Framework
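
A toy sketch of the feasibility check underlying this kind of probabilistic consistency test: a probability distribution over possible worlds must satisfy linear constraints on axiom probabilities. The worlds, constraints, and use of SciPy's LP solver are illustrative only; the paper's algorithm avoids explicit world enumeration by using column generation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy setting: 3 axioms, 8 explicitly enumerated worlds; world j satisfies axiom i iff A[i, j] == 1.
A = np.array([[1, 1, 1, 1, 0, 0, 0, 0],    # axiom 1
              [1, 1, 0, 0, 1, 1, 0, 0],    # axiom 2
              [1, 0, 1, 0, 1, 0, 1, 0]])   # axiom 3

# Linear constraints on axiom probabilities: P(ax1) >= 0.8, P(ax2) <= 0.3, P(ax3) == 0.5.
A_ub = np.vstack([-A[0], A[1]])            # -P(ax1) <= -0.8,  P(ax2) <= 0.3
b_ub = np.array([-0.8, 0.3])
A_eq = np.vstack([A[2], np.ones(8)])       # P(ax3) == 0.5, world probabilities sum to 1
b_eq = np.array([0.5, 1.0])

# Feasibility LP: any objective works; variables are the world probabilities in [0, 1].
res = linprog(c=np.zeros(8), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 8, method="highs")
print("consistent" if res.success else "inconsistent")
```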

Adversarial Bootstrapping for Dialogue Model Training

Title Adversarial Bootstrapping for Dialogue Model Training
Authors Oluwatobi Olabiyi, Erik T. Mueller, Christopher Larson, Tarek Lahlou
Abstract Open-domain neural dialogue models, despite their successes, are known to produce responses that lack relevance, diversity, and in many cases coherence. These shortcomings stem from the limited ability of common training objectives to directly express these properties, as well as their interplay with training datasets and model architectures. To address these problems, this paper proposes bootstrapping a dialogue response generator with an adversarially trained discriminator. The method involves training a neural generator in both autoregressive and traditional teacher-forcing modes, with the maximum likelihood loss of the autoregressive outputs weighted by the score from a metric-based discriminator model. The discriminator input is a mixture of ground-truth labels, the teacher-forcing outputs of the generator, and distractors sampled from the dataset, thereby allowing for richer feedback on the autoregressive outputs of the generator. To improve the calibration of the discriminator output, we also bootstrap the discriminator by matching the intermediate features of the ground truth and the generator’s autoregressive output. We explore different sampling and adversarial policy optimization strategies during training in order to understand how to encourage response diversity without sacrificing relevance. Our experiments show that adversarial bootstrapping is effective at addressing exposure bias, leading to improvements in response relevance and coherence. The improvement is demonstrated by state-of-the-art results on the Movie and Ubuntu dialogue datasets with respect to human evaluations and BLEU, ROUGE, and distinct n-gram scores.
Tasks Calibration
Published 2019-09-03
URL https://arxiv.org/abs/1909.00925v2
PDF https://arxiv.org/pdf/1909.00925v2.pdf
PWC https://paperswithcode.com/paper/adversarial-bootstrapping-for-dialogue-model
Repo
Framework
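
A minimal PyTorch sketch of the central training signal described above: the maximum-likelihood (teacher-forcing style) loss over autoregressive outputs, re-weighted per sequence by a discriminator score. The shapes, the per-sequence weighting, and the random tensors are stand-ins, not the authors' architecture or objective.

```python
import torch
import torch.nn.functional as F

def adversarially_weighted_mle(logits, targets, disc_scores):
    """Cross-entropy over generated tokens, weighted by a discriminator score in [0, 1].

    logits:      (batch, seq_len, vocab) generator outputs
    targets:     (batch, seq_len) ground-truth token ids
    disc_scores: (batch,) metric-based discriminator scores for the generated responses
    """
    ce = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")  # (batch, seq_len)
    per_seq = ce.mean(dim=1)                        # average token loss per response
    return (disc_scores.detach() * per_seq).mean()  # discriminator-weighted MLE

# Toy usage with random tensors.
batch, seq_len, vocab = 4, 7, 100
logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
targets = torch.randint(vocab, (batch, seq_len))
scores = torch.rand(batch)
loss = adversarially_weighted_mle(logits, targets, scores)
loss.backward()
print(float(loss))
```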

Temporally Coherent Full 3D Mesh Human Pose Recovery from Monocular Video

Title Temporally Coherent Full 3D Mesh Human Pose Recovery from Monocular Video
Authors Jian Liu, Naveed Akhtar, Ajmal Mian
Abstract Advances in deep learning have recently made it possible to recover full 3D meshes of human poses from individual images. However, the extension of this notion to videos for recovering temporally coherent poses remains unexplored. A major challenge in this regard is the lack of appropriately annotated video data for learning the desired deep models. Existing human pose datasets only provide 2D or 3D skeleton joint annotations, and these datasets are also recorded in constrained environments. We first contribute a technique to synthesize monocular action videos with rich 3D annotations that are suitable for learning computational models for full-mesh 3D human pose recovery. Compared to existing methods, which simply “texture-map” clothes onto the 3D human pose models, our approach incorporates physics-based realistic cloth deformations with the human body movements. The generated videos cover a large variety of human actions, poses, and visual appearances, while the annotations record accurate human pose dynamics and human body surface information. Our second major contribution is an end-to-end trainable recurrent neural network for full pose mesh recovery from monocular video. Using the proposed video data and an LSTM-based recurrent structure, our network explicitly learns to model the temporal coherence in videos and imposes geometric consistency over the recovered meshes. We establish the effectiveness of the proposed model with quantitative and qualitative analysis using the proposed and benchmark datasets.
Tasks
Published 2019-06-01
URL https://arxiv.org/abs/1906.00161v2
PDF https://arxiv.org/pdf/1906.00161v2.pdf
PWC https://paperswithcode.com/paper/190600161
Repo
Framework
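
An assumption-laden sketch of the recurrent structure in the second contribution: an LSTM consumes per-frame image features and regresses per-frame mesh/pose parameters, with a smoothness term standing in for temporal coherence. Feature and parameter dimensions (e.g., 85 for an SMPL-style camera+pose+shape vector) are illustrative choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TemporalMeshRegressor(nn.Module):
    """LSTM over per-frame CNN features, regressing mesh parameters for every frame."""

    def __init__(self, feat_dim=2048, hidden=1024, param_dim=85):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, param_dim)

    def forward(self, frame_features):          # (batch, time, feat_dim)
        h, _ = self.lstm(frame_features)
        return self.head(h)                     # (batch, time, param_dim)

def temporal_smoothness(params):
    """Penalize frame-to-frame jitter to encourage temporally coherent meshes."""
    return (params[:, 1:] - params[:, :-1]).pow(2).mean()

video_feats = torch.randn(2, 16, 2048)          # 2 clips, 16 frames each
params = TemporalMeshRegressor()(video_feats)
print(params.shape, float(temporal_smoothness(params)))
```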

Synergistic Team Composition: A Computational Approach to Foster Diversity in Teams

Title Synergistic Team Composition: A Computational Approach to Foster Diversity in Teams
Authors Ewa Andrejczuk, Filippo Bistaffa, Christian Blum, Juan A. Rodríguez-Aguilar, Carles Sierra
Abstract Co-operative learning in heterogeneous teams refers to learning methods in which teams are organised both to accomplish academic tasks and for individuals to gain knowledge. Competencies, personality, and the gender of team members are key factors that influence team performance. Here, we introduce a team composition problem, the so-called synergistic team composition problem (STCP), which incorporates such key factors when arranging teams. Thus, the goal of the STCP is to partition a set of individuals into a set of synergistic teams: teams that are diverse in personality and gender and whose members cover all competencies required to complete a task. Furthermore, the STCP requires that all teams are balanced, in that they are expected to exhibit similar performance when completing the task. We propose two efficient algorithms to solve the STCP. Our first algorithm is based on a linear programming formulation and is appropriate for solving small instances of the problem. Our second algorithm is an anytime heuristic that is effective for large instances of the STCP. Finally, we thoroughly study the computational properties of both algorithms in an educational context, when grouping students in a classroom into teams using real-world data.
Tasks
Published 2019-09-26
URL https://arxiv.org/abs/1909.11994v1
PDF https://arxiv.org/pdf/1909.11994v1.pdf
PWC https://paperswithcode.com/paper/synergistic-team-composition-a-computational
Repo
Framework
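
A small sketch in the spirit of the second (anytime) algorithm: start from an arbitrary partition into fixed-size teams and keep swapping members while the spread of team scores shrinks, stopping whenever time runs out. The scoring function is a placeholder, not the paper's synergistic-value model of competencies, personality, and gender.

```python
import random

def team_score(team, competence):
    """Placeholder team evaluation: mean member competence."""
    return sum(competence[m] for m in team) / len(team)

def balance_teams(members, competence, team_size, iters=5000, seed=0):
    rng = random.Random(seed)
    members = list(members)
    rng.shuffle(members)
    teams = [members[i:i + team_size] for i in range(0, len(members), team_size)]

    def spread():
        scores = [team_score(t, competence) for t in teams]
        return max(scores) - min(scores)

    best = spread()
    for _ in range(iters):                      # anytime: can stop at any iteration
        a, b = rng.sample(range(len(teams)), 2)
        i, j = rng.randrange(team_size), rng.randrange(team_size)
        teams[a][i], teams[b][j] = teams[b][j], teams[a][i]
        new = spread()
        if new <= best:
            best = new
        else:                                    # undo a swap that worsens the balance
            teams[a][i], teams[b][j] = teams[b][j], teams[a][i]
    return teams, best

competence = {f"s{i}": random.Random(i).random() for i in range(12)}
teams, gap = balance_teams(competence, competence, team_size=3)
print(round(gap, 3), teams)
```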

Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks

Title Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks
Authors Yuan Cao, Quanquan Gu
Abstract We study the training and generalization of deep neural networks (DNNs) in the over-parameterized regime, where the network width (i.e., number of hidden nodes per layer) is much larger than the number of training data points. We show that the expected $0$-$1$ loss of a wide enough ReLU network trained with stochastic gradient descent (SGD) and random initialization can be bounded by the training loss of a random feature model induced by the network gradient at initialization, which we call a neural tangent random feature (NTRF) model. For data distributions that can be classified by the NTRF model with sufficiently small error, our result yields a generalization error bound of order $\tilde{\mathcal{O}}(n^{-1/2})$ that is independent of the network width. Our result is more general and sharper than many existing generalization error bounds for over-parameterized neural networks. In addition, we establish a strong connection between our generalization error bound and the neural tangent kernel (NTK) proposed in recent work.
Tasks
Published 2019-05-30
URL https://arxiv.org/abs/1905.13210v3
PDF https://arxiv.org/pdf/1905.13210v3.pdf
PWC https://paperswithcode.com/paper/generalization-bounds-of-stochastic-gradient
Repo
Framework
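
A small NumPy sketch of the neural tangent random feature (NTRF) idea for a width-$m$ two-layer ReLU network: the feature map is the gradient of the network output with respect to the first-layer weights at random initialization, and a linear (here, kernel ridge) model is fit on these features. The dimensions, width, and ridge parameter are illustrative choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 10, 2000                         # samples, input dim, network width

# Random initialization of a two-layer ReLU net f(x) = (1/sqrt(m)) * a^T relu(Wx).
W = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)

def ntrf_features(X):
    """Gradient of f w.r.t. W at init: rows are vec(a_r * 1[w_r.x > 0] * x) / sqrt(m)."""
    act = (X @ W.T > 0).astype(float)           # (n, m) ReLU activation pattern
    scaled = act * a / np.sqrt(m)               # (n, m)
    return (scaled[:, :, None] * X[:, None, :]).reshape(len(X), m * d)

# Toy binary classification data.
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])

Phi = ntrf_features(X)
theta = np.linalg.solve(Phi @ Phi.T + 1e-3 * np.eye(n), y)   # kernel ridge in NTRF space
pred = np.sign(Phi @ (Phi.T @ theta))
print("train accuracy:", (pred == y).mean())
```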

Integrated Clustering and Anomaly Detection (INCAD) for Streaming Data (Revised)

Title Integrated Clustering and Anomaly Detection (INCAD) for Streaming Data (Revised)
Authors Sreelekha Guggilam, Syed M. A. Zaidi, Varun Chandola, Abani K. Patra
Abstract Most current clustering-based anomaly detection methods use a scoring schema and thresholds to classify anomalies. These methods are often tailored to target specific data sets with a “known” number of clusters. This paper provides a streaming clustering and anomaly detection algorithm that does not require strict arbitrary thresholds on the anomaly scores or knowledge of the number of clusters, while performing probabilistic anomaly detection and clustering simultaneously. This ensures that cluster formation is not impacted by the presence of anomalous data, thereby leading to a more reliable definition of “normal vs. abnormal” behavior. The motivations behind developing the INCAD model and the path that leads to the streaming model are discussed.
Tasks Anomaly Detection
Published 2019-11-01
URL https://arxiv.org/abs/1911.00184v1
PDF https://arxiv.org/pdf/1911.00184v1.pdf
PWC https://paperswithcode.com/paper/integrated-clustering-and-anomaly-detection
Repo
Framework
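
A minimal sketch of the kind of streaming behaviour described above: each arriving point is either absorbed by its nearest cluster or, if it is too far from all existing clusters, flagged as anomalous and used to seed a new cluster. The fixed distance threshold is a simplification standing in for the paper's probabilistic, nonparametric formulation.

```python
import numpy as np

class StreamingClusterer:
    """Incrementally maintains cluster centers and flags far-away points."""

    def __init__(self, radius=3.0):
        self.radius = radius
        self.centers = []
        self.counts = []

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if not self.centers:
            self.centers.append(x.copy())
            self.counts.append(1)
            return False                                   # first point, not anomalous
        dists = [np.linalg.norm(x - c) for c in self.centers]
        k = int(np.argmin(dists))
        if dists[k] <= self.radius:
            self.counts[k] += 1                            # online mean update of the center
            self.centers[k] += (x - self.centers[k]) / self.counts[k]
            return False
        self.centers.append(x.copy())
        self.counts.append(1)
        return True                                        # far from all clusters: anomaly

rng = np.random.default_rng(2)
stream = np.vstack([rng.normal(0, 1, (100, 2)),
                    rng.normal(8, 1, (100, 2)),
                    [[30.0, 30.0]]])                       # one obvious outlier at the end
clusterer = StreamingClusterer(radius=4.0)
flags = [clusterer.update(x) for x in stream]
print(sum(flags), "points flagged; clusters:", len(clusterer.centers))
```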

Scene Graph Parsing by Attention Graph

Title Scene Graph Parsing by Attention Graph
Authors Martin Andrews, Yew Ken Chia, Sam Witteveen
Abstract Scene graph representations, which form a graph of visual object nodes together with their attributes and relations, have proved useful across a variety of vision and language applications. Recent work in the area has used Natural Language Processing dependency tree methods to automatically build scene graphs. In this work, we present an ‘Attention Graph’ mechanism that can be trained end-to-end, and produces a scene graph structure that can be lifted directly from the top layer of a standard Transformer model. The scene graphs generated by our model achieve an F-score similarity of 52.21% to ground-truth graphs on the evaluation set using the SPICE metric, surpassing the best previous approaches by 2.5%.
Tasks
Published 2019-09-13
URL https://arxiv.org/abs/1909.06273v1
PDF https://arxiv.org/pdf/1909.06273v1.pdf
PWC https://paperswithcode.com/paper/scene-graph-parsing-by-attention-graph
Repo
Framework
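
A toy NumPy sketch of the "lift a graph from the top attention layer" idea: treat the top-layer attention matrix as soft edge scores and read off, for each token, its most-attended token as a parent. The matrix here is random; in the paper the structure comes from the top layer of a trained Transformer.

```python
import numpy as np

rng = np.random.default_rng(3)
tokens = ["a", "young", "girl", "riding", "a", "skateboard"]
n = len(tokens)

# Stand-in for the top-layer attention of a trained Transformer (rows sum to 1).
attention = rng.random((n, n))
attention /= attention.sum(axis=1, keepdims=True)

# Lift a graph: each token attaches to its highest-attention token other than itself.
np.fill_diagonal(attention, 0.0)
parents = attention.argmax(axis=1)
edges = [(tokens[i], tokens[p], float(attention[i, p])) for i, p in enumerate(parents)]
for child, parent, score in edges:
    print(f"{child:>10} -> {parent:<10} ({score:.2f})")
```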

SAN: Scale-Aware Network for Semantic Segmentation of High-Resolution Aerial Images

Title SAN: Scale-Aware Network for Semantic Segmentation of High-Resolution Aerial Images
Authors Jingbo Lin, Weipeng Jing, Houbing Song
Abstract High-resolution aerial images have a wide range of applications, such as military exploration and urban planning. Semantic segmentation is a fundamental method extensively used in the analysis of high-resolution aerial images. However, the ground objects in high-resolution aerial images have inconsistent scales, and this characteristic usually leads to unexpected predictions. To tackle this issue, we propose a novel scale-aware module (SAM). In SAM, we employ a re-sampling method that lets pixels adjust their positions to fit ground objects of different scales, and it implicitly introduces spatial attention by using the re-sampling map as a weight map. As a result, the network with the proposed module, named the scale-aware network (SANet), has a stronger ability to distinguish ground objects of inconsistent scales. Moreover, our proposed module can easily be embedded in most existing networks to improve their performance. We evaluate our module on the International Society for Photogrammetry and Remote Sensing Vaihingen Dataset, and the experimental results and comprehensive analysis demonstrate the effectiveness of the proposed module.
Tasks Semantic Segmentation
Published 2019-07-06
URL https://arxiv.org/abs/1907.03089v1
PDF https://arxiv.org/pdf/1907.03089v1.pdf
PWC https://paperswithcode.com/paper/san-scale-aware-network-for-semantic
Repo
Framework
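
A brief PyTorch sketch of one aspect of the module described above: a learned single-channel map applied as spatial attention over the feature maps. The re-sampling (position-adjustment) part of SAM is omitted, and the channel sizes and residual connection are placeholder choices, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SpatialWeightMap(nn.Module):
    """Predicts a per-pixel weight map and uses it as spatial attention."""

    def __init__(self, channels=64):
        super().__init__()
        self.weight_map = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),                          # weights in (0, 1)
        )

    def forward(self, feats):                      # (N, C, H, W)
        w = self.weight_map(feats)                 # (N, 1, H, W)
        return feats * w + feats                   # re-weighted features plus a residual path

feats = torch.randn(2, 64, 32, 32)
print(SpatialWeightMap()(feats).shape)
```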

A Value-based Trust Assessment Model for Multi-agent Systems

Title A Value-based Trust Assessment Model for Multi-agent Systems
Authors Kinzang Chhogyal, Abhaya Nayak, Aditya Ghose, Hoa Khanh Dam
Abstract An agent’s assessment of its trust in another agent is commonly taken to be a measure of the reliability and predictability of the latter’s actions. It is based on the trustor’s past observations of the behaviour of the trustee and requires no knowledge of the inner workings of the trustee. However, in situations that are new or unfamiliar, past observations are of little help in assessing trust. In such cases, knowledge about the trustee can help. A particular type of knowledge is that of values: things that are important to the trustor and the trustee. In this paper, based on the premise that the more values two agents share, the more they should trust one another, we propose a simple approach to trust assessment between agents based on values, taking into account whether agents trust cautiously or boldly, and whether they depend on others in carrying out a task.
Tasks
Published 2019-05-31
URL https://arxiv.org/abs/1905.13380v1
PDF https://arxiv.org/pdf/1905.13380v1.pdf
PWC https://paperswithcode.com/paper/a-value-based-trust-assessment-model-for
Repo
Framework
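
A toy sketch of the stated premise that the more values two agents share, the more they should trust one another, with a single exponent standing in for a cautious or bold attitude. The overlap measure and the exponent rule are illustrative assumptions, not the paper's model.

```python
def value_based_trust(trustor_values, trustee_values, caution=1.0):
    """Trust as the Jaccard overlap of value sets, tempered by the trustor's attitude.

    caution > 1 models a cautious trustor (needs more shared values to trust),
    caution < 1 a bold one. This is an illustrative rule, not the paper's model.
    """
    a, b = set(trustor_values), set(trustee_values)
    if not a | b:
        return 0.0
    overlap = len(a & b) / len(a | b)
    return overlap ** caution

alice = {"honesty", "fairness", "security", "privacy"}
bob = {"honesty", "fairness", "efficiency"}
print(value_based_trust(alice, bob, caution=2.0))   # cautious trustor
print(value_based_trust(alice, bob, caution=0.5))   # bold trustor
```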

APE at Scale and its Implications on MT Evaluation Biases

Title APE at Scale and its Implications on MT Evaluation Biases
Authors Markus Freitag, Isaac Caswell, Scott Roy
Abstract In this work, we train an Automatic Post-Editing (APE) model and use it to reveal biases in standard Machine Translation (MT) evaluation procedures. The goal of our APE model is to correct typical errors introduced by the translation process, and convert the “translationese” output into natural text. Our APE model is trained entirely on monolingual data that has been round-trip translated through English, to mimic errors that are similar to the ones introduced by NMT. We apply our model to the output of existing NMT systems, and demonstrate that, while the human-judged quality improves in all cases, BLEU scores drop with forward-translated test sets. We verify these results for the WMT18 English to German, WMT15 English to French, and WMT16 English to Romanian tasks. Furthermore, we selectively apply our APE model on the output of the top submissions of the most recent WMT evaluation campaigns. We see quality improvements on all tasks of up to 2.5 BLEU points.
Tasks Automatic Post-Editing, Machine Translation
Published 2019-04-09
URL https://arxiv.org/abs/1904.04790v2
PDF https://arxiv.org/pdf/1904.04790v2.pdf
PWC https://paperswithcode.com/paper/text-repair-model-for-neural-machine
Repo
Framework
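
A schematic sketch of the training-data construction described above: monolingual target-language text is round-trip translated through English, and the (round-trip output, original) pairs become APE training data. The `translate` function is a hypothetical placeholder for an existing NMT system, so this snippet only illustrates the data flow.

```python
def translate(sentences, src, tgt):
    """Hypothetical hook into an existing NMT system (placeholder only)."""
    raise NotImplementedError("plug in an actual MT system here")

def build_ape_training_data(monolingual_german):
    """Round-trip German -> English -> German; the APE model then learns to map the
    'translationese' round-trip output back to the natural original sentence."""
    english = translate(monolingual_german, src="de", tgt="en")
    round_trip = translate(english, src="en", tgt="de")
    return list(zip(round_trip, monolingual_german))   # (APE input, APE target) pairs
```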

Use of Artificial Intelligence Techniques / Applications in Cyber Defense

Title Use of Artificial Intelligence Techniques / Applications in Cyber Defense
Authors Ensar Şeker
Abstract Nowadays, considering the speed of the processes and the amount of data used in cyber defense, an effective defense cannot be expected from human power alone, without the help of automation systems. However, for an effective defense against dynamically evolving attacks on networks, it is difficult to develop software with conventional fixed algorithms. This can be achieved by using artificial intelligence methods that provide flexibility and learning capability. The likelihood of developing cyber defense capabilities through increased intelligence of defense systems is quite high. Given the problems associated with cyber defense in real life, it is clear that many cyber defense problems can be successfully solved only when artificial intelligence methods are used. In this article, current artificial intelligence practices and techniques are reviewed, and the use and importance of artificial intelligence in cyber defense systems are discussed. The aim of this article is to explain, with current examples, the use of these methods in the field of cyber defense by reviewing and analyzing the artificial intelligence technologies and methodologies currently being developed, together with their role and adaptation in the defense of cyberspace.
Tasks
Published 2019-05-24
URL https://arxiv.org/abs/1905.12556v1
PDF https://arxiv.org/pdf/1905.12556v1.pdf
PWC https://paperswithcode.com/paper/190512556
Repo
Framework

SemEval-2014 Task 9: Sentiment Analysis in Twitter

Title SemEval-2014 Task 9: Sentiment Analysis in Twitter
Authors Sara Rosenthal, Preslav Nakov, Alan Ritter, Veselin Stoyanov
Abstract We describe the Sentiment Analysis in Twitter task, run as part of SemEval-2014. It is a continuation of last year’s task, which ran successfully as part of SemEval-2013. As in 2013, this was the most popular SemEval task; a total of 46 teams contributed 27 submissions for subtask A (21 teams) and 50 submissions for subtask B (44 teams). This year, we introduced three new test sets: (i) regular tweets, (ii) sarcastic tweets, and (iii) LiveJournal sentences. We further tested on (iv) 2013 tweets and (v) 2013 SMS messages. The highest F1-score on (i) was achieved by NRC-Canada at 86.63 for subtask A and by TeamX at 70.96 for subtask B.
Tasks Sentiment Analysis
Published 2019-12-06
URL https://arxiv.org/abs/1912.02990v1
PDF https://arxiv.org/pdf/1912.02990v1.pdf
PWC https://paperswithcode.com/paper/semeval-2014-task-9-sentiment-analysis-in-1
Repo
Framework