July 28, 2019

2928 words 14 mins read

Paper Group ANR 216

A Generalized Motion Pattern and FCN based approach for retinal fluid detection and segmentation

Title A Generalized Motion Pattern and FCN based approach for retinal fluid detection and segmentation
Authors Shivin Yadav, Karthik Gopinath, Jayanthi Sivaswamy
Abstract SD-OCT is a non-invasive cross-sectional imaging modality used for diagnosis of macular defects. Efficient detection and segmentation of the abnormalities seen as biomarkers in OCT can help in analyzing the progression of the disease and advising effective treatment for the associated disease. In this work, we propose a fully automated Generalized Motion Pattern (GMP) based segmentation method using a cascade of fully convolutional networks for detection and segmentation of retinal fluids from SD-OCT scans. General methods for segmentation depend on domain knowledge-based feature extraction, whereas we propose a method based on the Generalized Motion Pattern (GMP), which is derived by inducing motion to an image to suppress the background. The proposed method is parallelizable and handles inter-scanner variability efficiently. Our method achieves mean Dice scores of 0.61, 0.70 and 0.73 for segmentation and mean AUCs of 0.85, 0.84 and 0.87 for detection of the three fluid types IRF, SRF and PED, respectively.
Tasks
Published 2017-12-04
URL http://arxiv.org/abs/1712.01073v1
PDF http://arxiv.org/pdf/1712.01073v1.pdf
PWC https://paperswithcode.com/paper/a-generalized-motion-pattern-and-fcn-based
Repo
Framework
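The GMP itself is only named in the abstract. As a rough illustration of the idea, the sketch below induces "motion" by translating the image along a few offsets and coalesces the translated stack with a pixel-wise minimum, which suppresses structures thinner than the motion extent. The translation offsets and the minimum reduction are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def generalized_motion_pattern(image, shifts, reduce="min"):
    # Translate the image along each offset and coalesce the stack
    # pixel-wise; smearing suppresses background structures thinner
    # than the motion extent while larger foreground regions survive.
    stack = np.stack([np.roll(image, (dy, dx), axis=(0, 1)) for dy, dx in shifts])
    return stack.min(axis=0) if reduce == "min" else stack.mean(axis=0)

# Toy example: a bright square on a dark background.
img = np.zeros((32, 32))
img[12:22, 12:22] = 1.0
shifts = [(dy, dx) for dy in (-2, 0, 2) for dx in (-2, 0, 2)]
gmp = generalized_motion_pattern(img, shifts, reduce="min")
```

In the paper the GMP images feed a cascade of FCNs; here the output is just the motion-suppressed map.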

Multi-Label Learning with Label Enhancement

Title Multi-Label Learning with Label Enhancement
Authors Ruifeng Shao, Ning Xu, Xin Geng
Abstract The task of multi-label learning is to predict a set of relevant labels for an unseen instance. Traditional multi-label learning algorithms treat each class label as a logical indicator of whether the corresponding label is relevant or irrelevant to the instance, i.e., +1 represents relevant and -1 represents irrelevant. A label represented by -1 or +1 is called a logical label. Logical labels cannot reflect differences in label importance. However, for real-world multi-label learning problems, the importance of each possible label is generally different. In real applications, it is difficult to obtain the label importance information directly. Thus we need a method to reconstruct the essential label importance from the logical multi-label data. To solve this problem, we assume that each multi-label instance is described by a vector of latent real-valued labels, which reflect the importance of the corresponding labels. Such labels are called numerical labels. The process of reconstructing the numerical labels from the logical multi-label data, utilizing the logical label information and the topological structure of the feature space, is called Label Enhancement. In this paper, we propose a novel multi-label learning framework called LEMLL, i.e., Label Enhanced Multi-Label Learning, which incorporates regression of the numerical labels and label enhancement into a unified framework. Extensive comparative studies validate that the performance of multi-label learning can be improved significantly with label enhancement and that LEMLL can effectively reconstruct latent label importance information from logical multi-label data.
Tasks Multi-Label Learning
Published 2017-06-26
URL http://arxiv.org/abs/1706.08323v4
PDF http://arxiv.org/pdf/1706.08323v4.pdf
PWC https://paperswithcode.com/paper/multi-label-learning-with-label-enhancement
Repo
Framework
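One cheap way to see what label enhancement means in practice is to smooth the logical labels over a k-nearest-neighbour graph in feature space, so each instance ends up with graded, real-valued labels. This is only a toy stand-in for the idea of exploiting topological structure; LEMLL itself couples the reconstruction with a regression model in a single optimization.

```python
import numpy as np

def enhance_labels(X, Y, k=3, alpha=0.5, iters=20):
    # Numerical labels obtained by repeatedly mixing each instance's
    # logical labels (in {-1, +1}) with the average labels of its
    # k nearest neighbours in feature space.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # exclude self-neighbours
    nbrs = np.argsort(d2, axis=1)[:, :k]
    U = Y.astype(float).copy()
    for _ in range(iters):
        U = (1 - alpha) * Y + alpha * U[nbrs].mean(axis=1)
    return U
```

An instance whose logical label disagrees with all of its neighbours ends up with a moderated numerical label, which is exactly the graded importance information logical labels cannot express.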

Language Modeling with Highway LSTM

Title Language Modeling with Highway LSTM
Authors Gakuto Kurata, Bhuvana Ramabhadran, George Saon, Abhinav Sethy
Abstract Language models (LMs) based on Long Short Term Memory (LSTM) have shown good gains in many automatic speech recognition tasks. In this paper, we extend an LSTM by adding highway networks inside an LSTM and use the resulting Highway LSTM (HW-LSTM) model for language modeling. The added highway networks increase the depth in the time dimension. Since a typical LSTM has two internal states, a memory cell and a hidden state, we compare various types of HW-LSTM by adding highway networks onto the memory cell and/or the hidden state. Experimental results on English broadcast news and conversational telephone speech recognition show that the proposed HW-LSTM LM improves speech recognition accuracy on top of a strong LSTM LM baseline. We report 5.1% and 9.9% on the Switchboard and CallHome subsets of the Hub5 2000 evaluation, which reaches the best performance numbers reported on these tasks to date.
Tasks Language Modelling, Speech Recognition
Published 2017-09-19
URL http://arxiv.org/abs/1709.06436v1
PDF http://arxiv.org/pdf/1709.06436v1.pdf
PWC https://paperswithcode.com/paper/language-modeling-with-highway-lstm
Repo
Framework
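The highway connection the paper adds can be written as y = T * H(x) + (1 - T) * x, where H is a non-linear candidate transform and T is a learned transform gate; when T is near 0 the layer passes its input through untouched, which is what lets depth be stacked on the hidden state or memory cell without blocking gradient flow. A minimal NumPy sketch of one such layer, standalone rather than wired into an LSTM:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, b_h, W_t, b_t):
    H = np.tanh(x @ W_h + b_h)     # candidate transform
    T = sigmoid(x @ W_t + b_t)     # transform gate in (0, 1)
    return T * H + (1.0 - T) * x   # carries the input through when T -> 0
```

Initialising b_t to a negative value biases the gate toward the carry behaviour, a common trick for training deep highway stacks.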

AI, Native Supercomputing and The Revival of Moore’s Law

Title AI, Native Supercomputing and The Revival of Moore’s Law
Authors Chien-Ping Lu
Abstract Based on Alan Turing’s proposition on AI and computing machinery, which shaped Computing as we know it today, the new AI computing machinery should comprise a universal computer and a universal learning machine. The latter should understand linear algebra natively to overcome the slowdown of Moore’s law. In such a universal learning machine, a computing unit does not need to keep the legacy of a universal computing core. The data can be distributed to the computing units, and the results can be collected from them through Collective Streaming, reminiscent of Collective Communication in Supercomputing. It is not necessary to use a GPU-like deep memory hierarchy, nor a TPU-like fine-grain mesh.
Tasks
Published 2017-05-17
URL http://arxiv.org/abs/1705.05983v2
PDF http://arxiv.org/pdf/1705.05983v2.pdf
PWC https://paperswithcode.com/paper/ai-native-supercomputing-and-the-revival-of
Repo
Framework

Partial Labeled Gastric Tumor Segmentation via patch-based Reiterative Learning

Title Partial Labeled Gastric Tumor Segmentation via patch-based Reiterative Learning
Authors Yang Nan, Gianmarc Coppola, Qiaokang Liang, Kunglin Zou, Wei Sun, Dan Zhang, Yaonan Wang, Guanzhen Yu
Abstract Gastric cancer is the second leading cause of cancer-related deaths worldwide, and a major hurdle in biomedical image analysis is the determination of the cancer extent. This assignment has high clinical relevance and would generally require vast microscopic assessment by pathologists. Recent advances in deep learning have produced inspiring results on biomedical image segmentation, but their outcome relies on comprehensive annotation, which incurs substantial labor costs, since the ground truth must be annotated meticulously by pathologists. In this paper, a reiterative learning framework is presented to train our network on partially annotated biomedical images, and superior performance is achieved without any pre-trained model or further manual annotation. We eliminate the boundary error of patch-based models through our overlapped region forecast algorithm. With these methods, a mean intersection over union (IoU) of 0.883 and a mean accuracy of 91.09% on the partially labeled dataset were achieved, which won us the 2017 China Big Data & Artificial Intelligence Innovation and Entrepreneurship Competitions.
Tasks Semantic Segmentation
Published 2017-12-20
URL http://arxiv.org/abs/1712.07488v1
PDF http://arxiv.org/pdf/1712.07488v1.pdf
PWC https://paperswithcode.com/paper/partial-labeled-gastric-tumor-segmentation
Repo
Framework
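The overlapped region forecast algorithm is not spelled out in the abstract. A generic version of the underlying idea is to predict on overlapping patches and average wherever patches overlap, so no patch boundary falls in a region predicted only once; the sketch below is written under that assumption, with the segmentation network abstracted as a `pred_fn` callable.

```python
import numpy as np

def stitch_overlapping_patches(pred_fn, image, patch=64, stride=32):
    # Accumulate patch predictions and divide by the coverage count,
    # so pixels inside overlaps receive the average of several
    # predictions instead of a single boundary-afflicted one.
    H, W = image.shape
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            acc[y:y + patch, x:x + patch] += pred_fn(image[y:y + patch, x:x + patch])
            cnt[y:y + patch, x:x + patch] += 1
    return acc / np.maximum(cnt, 1)
```

With stride < patch every interior pixel is covered by several windows, which is what suppresses the boundary error of a non-overlapping tiling.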

Interpreting Classifiers through Attribute Interactions in Datasets

Title Interpreting Classifiers through Attribute Interactions in Datasets
Authors Andreas Henelius, Kai Puolamäki, Antti Ukkonen
Abstract In this work we present the novel ASTRID method for investigating which attribute interactions classifiers exploit when making predictions. Attribute interactions in classification tasks mean that two or more attributes together provide stronger evidence for a particular class label. Knowledge of such interactions makes models more interpretable by revealing associations between attributes. This has applications, e.g., in pharmacovigilance to identify interactions between drugs or in bioinformatics to investigate associations between single nucleotide polymorphisms. We also show how the found attribute partitioning is related to a factorisation of the data generating distribution and empirically demonstrate the utility of the proposed method.
Tasks
Published 2017-07-24
URL http://arxiv.org/abs/1707.07576v1
PDF http://arxiv.org/pdf/1707.07576v1.pdf
PWC https://paperswithcode.com/paper/interpreting-classifiers-through-attribute
Repo
Framework
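The abstract states the goal but not the test statistic. One way to probe whether a classifier exploits interactions across a candidate grouping of attributes, in the spirit of ASTRID though not its exact procedure, is to permute each attribute group independently within each class: this preserves within-group interactions but destroys between-group ones, so accuracy survives only if the classifier needs no cross-group interaction.

```python
import numpy as np

def grouping_accuracy(predict, X, y, groups, seed=0):
    # Within each class, permute the rows of every attribute group
    # independently, then measure accuracy on the scrambled data.
    rng = np.random.default_rng(seed)
    Xp = X.copy()
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        for g in groups:
            Xp[np.ix_(idx, g)] = X[np.ix_(rng.permutation(idx), g)]
    return (predict(Xp) == y).mean()
```

A classifier built on the XOR of two attributes keeps full accuracy when they are permuted together, but degrades when they are split into separate groups.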

A Recorded Debating Dataset

Title A Recorded Debating Dataset
Authors Shachar Mirkin, Michal Jacovi, Tamar Lavee, Hong-Kwang Kuo, Samuel Thomas, Leslie Sager, Lili Kotlerman, Elad Venezian, Noam Slonim
Abstract This paper describes an English audio and textual dataset of debating speeches, a unique resource for the growing research field of computational argumentation and debating technologies. We detail the process of speech recording by professional debaters, the transcription of the speeches with an Automatic Speech Recognition (ASR) system, their consequent automatic processing to produce a text that is more “NLP-friendly”, and in parallel – the manual transcription of the speeches in order to produce gold-standard “reference” transcripts. We release 60 speeches on various controversial topics, each in five formats corresponding to the different stages in the production of the data. The intention is to allow utilizing this resource for multiple research purposes, be it the addition of in-domain training data for a debate-specific ASR system, or applying argumentation mining on either noisy or clean debate transcripts. We intend to make further releases of this data in the future.
Tasks Speech Recognition
Published 2017-09-19
URL http://arxiv.org/abs/1709.06438v2
PDF http://arxiv.org/pdf/1709.06438v2.pdf
PWC https://paperswithcode.com/paper/a-recorded-debating-dataset
Repo
Framework

Avaliação do método dialético na quantização de imagens multiespectrais (Evaluation of the dialectical method in the quantization of multispectral images)

Title Avaliação do método dialético na quantização de imagens multiespectrais (Evaluation of the dialectical method in the quantization of multispectral images)
Authors Wellington Pinheiro dos Santos, Francisco Marcos de Assis
Abstract Unsupervised classification has a very important role in the analysis of multispectral images, given its ability to assist in extracting a priori knowledge from images. Algorithms like k-means and fuzzy c-means have long been used in this task. Computational Intelligence has proven to be an important field for building classifiers optimized according to the quality of class groupings and for evaluating the quality of vector quantization. Several studies have shown that Philosophy, especially the Dialectical Method, has served as an important inspiration for the construction of new computational methods. This paper presents an evaluation of four methods based on Dialectics: the Objective Dialectical Classifier and the Dialectical Optimization Method adapted to build a version of k-means with optimal quality indices; each is presented in two versions, a canonical one and one obtained by applying the Principle of Maximum Entropy. These methods were compared to k-means, fuzzy c-means and Kohonen’s self-organizing maps. The results showed that the methods based on Dialectics are robust to noise, and their quantization can achieve results as good as those obtained with the Kohonen map, considered an optimal quantizer.
Tasks Quantization
Published 2017-12-03
URL http://arxiv.org/abs/1712.01696v1
PDF http://arxiv.org/pdf/1712.01696v1.pdf
PWC https://paperswithcode.com/paper/avaliacao-do-metodo-dialetico-na-quantizacao
Repo
Framework
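As a reference point for the comparison above, a plain k-means vector quantizer over multispectral pixel vectors can be sketched as follows; the deterministic first-k initialisation is a simplification for illustration, not a recommendation.

```python
import numpy as np

def kmeans_quantize(pixels, k, iters=20):
    # Naive initialisation: the first k pixel vectors seed the codebook.
    centroids = pixels[:k].astype(float).copy()
    for _ in range(iters):
        # assign each pixel vector to its nearest centroid
        labels = np.argmin(((pixels[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        # move each centroid to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return centroids, labels
```

Replacing each pixel by its centroid yields the quantized image; the dialectical methods in the paper aim to choose the codebook so that clustering quality indices are optimal.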

Optimising the topological information of the $A_\infty$-persistence groups

Title Optimising the topological information of the $A_\infty$-persistence groups
Authors Francisco Belchí
Abstract Persistent homology typically studies the evolution of homology groups $H_p(X)$ (with coefficients in a field) along a filtration of topological spaces. $A_\infty$-persistence extends this theory by analysing the evolution of subspaces such as $V := \operatorname{Ker} \Delta_n|_{H_p(X)} \subseteq H_p(X)$, where $\{\Delta_m\}_{m\geq 1}$ denotes a structure of $A_\infty$-coalgebra on $H_*(X)$. In this paper we illustrate how $A_\infty$-persistence can be useful beyond persistent homology by discussing the topological meaning of $V$, which is the most basic form of $A_\infty$-persistence group. In addition, we explore how to choose $A_\infty$-coalgebras along a filtration to make the $A_\infty$-persistence groups carry more faithful information.
Tasks
Published 2017-06-19
URL http://arxiv.org/abs/1706.06019v1
PDF http://arxiv.org/pdf/1706.06019v1.pdf
PWC https://paperswithcode.com/paper/optimising-the-topological-information-of-the
Repo
Framework

The Continuous Hint Factory - Providing Hints in Vast and Sparsely Populated Edit Distance Spaces

Title The Continuous Hint Factory - Providing Hints in Vast and Sparsely Populated Edit Distance Spaces
Authors Benjamin Paaßen, Barbara Hammer, Thomas William Price, Tiffany Barnes, Sebastian Gross, Niels Pinkwart
Abstract Intelligent tutoring systems can support students in solving multi-step tasks by providing hints regarding what to do next. However, engineering such next-step hints manually or via an expert model becomes infeasible if the space of possible states is too large. Therefore, several approaches have emerged to infer next-step hints automatically, relying on past students’ data. In particular, the Hint Factory (Barnes & Stamper, 2008) recommends edits that are most likely to guide students from their current state towards a correct solution, based on what successful students in the past have done in the same situation. Still, the Hint Factory relies on student data being available for any state a student might visit while solving the task, which is not the case for some learning tasks, such as open-ended programming tasks. In this contribution we provide a mathematical framework for edit-based hint policies and, based on this theory, propose a novel hint policy to provide edit hints in vast and sparsely populated state spaces. In particular, we extend the Hint Factory by considering data of past students in all states which are similar to the student’s current state and creating hints approximating the weighted average of all these reference states. Because the space of possible weighted averages is continuous, we call this approach the Continuous Hint Factory. In our experimental evaluation, we demonstrate that the Continuous Hint Factory can predict more accurately what capable students would do compared to existing prediction schemes on two learning tasks, especially in an open-ended programming task, and that the Continuous Hint Factory is comparable to existing hint policies at reproducing tutor hints on a simple UML diagram task.
Tasks
Published 2017-08-22
URL http://arxiv.org/abs/1708.06564v2
PDF http://arxiv.org/pdf/1708.06564v2.pdf
PWC https://paperswithcode.com/paper/the-continuous-hint-factory-providing-hints
Repo
Framework
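With states embedded as vectors, the core averaging idea can be sketched as below: weight every past student's step by a kernel of the distance to the current state and recommend the weighted-average step. The Gaussian kernel and vector embedding are simplifying assumptions; the actual method works in an edit-distance space and must map the averaged point back to a concrete edit.

```python
import numpy as np

def continuous_hint(current, past_states, past_next_states, bandwidth=1.0):
    # Similarity weights: Gaussian kernel of squared distance to the
    # current state, normalised to sum to one.
    d2 = ((past_states - current) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    w = w / w.sum()
    # Recommend the weighted average of the steps past students took.
    steps = past_next_states - past_states
    return current + (w[:, None] * steps).sum(axis=0)
```

Because the weights decay smoothly with distance, hints remain available even for states no past student visited exactly, which is the point of the "continuous" extension.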

Segmentation of skin lesions based on fuzzy classification of pixels and histogram thresholding

Title Segmentation of skin lesions based on fuzzy classification of pixels and histogram thresholding
Authors Jose Luis Garcia-Arroyo, Begonya Garcia-Zapirain
Abstract This paper proposes an innovative method for segmentation of skin lesions in dermoscopy images developed by the authors, based on fuzzy classification of pixels and histogram thresholding.
Tasks
Published 2017-03-11
URL http://arxiv.org/abs/1703.03888v1
PDF http://arxiv.org/pdf/1703.03888v1.pdf
PWC https://paperswithcode.com/paper/segmentation-of-skin-lesions-based-on-fuzzy
Repo
Framework
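The abstract names histogram thresholding without fixing the criterion. Otsu's between-class variance criterion is a common choice and illustrates the thresholding half of such a pipeline; the fuzzy classification of pixels is a separate stage not sketched here.

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    # Histogram over [0, 1]; p are bin probabilities.
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)            # class-0 probability at each cut
    mu = np.cumsum(p * edges[:-1])  # class-0 cumulative mean
    mu_t = mu[-1]                   # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0.0] = np.nan    # cuts that leave a class empty
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    # Return the upper edge of the best bin: class 0 is "value < threshold".
    return edges[np.nanargmax(sigma_b2) + 1]
```

On a dermoscopy image normalised to [0, 1], pixels below the returned threshold would form one candidate region (lesion or background, depending on polarity).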

Spatially-Varying Blur Detection Based on Multiscale Fused and Sorted Transform Coefficients of Gradient Magnitudes

Title Spatially-Varying Blur Detection Based on Multiscale Fused and Sorted Transform Coefficients of Gradient Magnitudes
Authors S. Alireza Golestaneh, Lina J. Karam
Abstract The detection of spatially-varying blur without having any information about the blur type is a challenging task. In this paper, we propose a novel effective approach to address the blur detection problem from a single image without requiring any knowledge about the blur type, level, or camera settings. Our approach computes blur detection maps based on a novel High-frequency multiscale Fusion and Sort Transform (HiFST) of gradient magnitudes. The evaluations of the proposed approach on a diverse set of blurry images with different blur types, levels, and contents demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods qualitatively and quantitatively.
Tasks
Published 2017-03-22
URL http://arxiv.org/abs/1703.07478v3
PDF http://arxiv.org/pdf/1703.07478v3.pdf
PWC https://paperswithcode.com/paper/spatially-varying-blur-detection-based-on
Repo
Framework
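A single-scale, simplified reading of the HiFST idea: take the DCT of each patch of the gradient-magnitude map, sort the non-DC coefficient magnitudes, and score local sharpness by the largest ones, since sharp regions carry more high-frequency energy. The paper fuses several scales and handles boundaries more carefully; this sketch keeps one scale and non-overlapping patches.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def sharpness_map(image, patch=8, top=4):
    gy, gx = np.gradient(image.astype(float))
    g = np.hypot(gx, gy)                      # gradient magnitudes
    D = dct_matrix(patch)
    H, W = g.shape
    score = np.zeros((H // patch, W // patch))
    for by in range(H // patch):
        for bx in range(W // patch):
            blk = g[by * patch:(by + 1) * patch, bx * patch:(bx + 1) * patch]
            C = np.abs(D @ blk @ D.T)         # 2-D DCT magnitudes
            C[0, 0] = 0.0                     # drop the DC term
            score[by, bx] = np.sort(C.ravel())[-top:].mean()
    return score
```

Low scores mark candidate blurred regions; thresholding the map yields a spatially-varying blur mask.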

Finding News Citations for Wikipedia

Title Finding News Citations for Wikipedia
Authors Besnik Fetahu, Katja Markert, Wolfgang Nejdl, Avishek Anand
Abstract An important editing policy in Wikipedia is to provide citations for added statements in Wikipedia pages, where statements can be arbitrary pieces of text, ranging from a sentence to a paragraph. In many cases citations are either outdated or missing altogether. In this work we address the problem of finding and updating news citations for statements in entity pages. We propose a two-stage supervised approach for this problem. In the first step, we construct a classifier to find out whether statements need a news citation or other kinds of citations (web, book, journal, etc.). In the second step, we develop a news citation algorithm for Wikipedia statements, which recommends appropriate citations from a given news collection. Apart from IR techniques that use the statement to query the news collection, we also formalize three properties of an appropriate citation, namely: (i) the citation should entail the Wikipedia statement, (ii) the statement should be central to the citation, and (iii) the citation should be from an authoritative source. We perform an extensive evaluation of both steps, using 20 million articles from a real-world news collection. Our results are quite promising, and show that we can perform this task with high precision and at scale.
Tasks
Published 2017-03-30
URL http://arxiv.org/abs/1703.10339v2
PDF http://arxiv.org/pdf/1703.10339v2.pdf
PWC https://paperswithcode.com/paper/finding-news-citations-for-wikipedia
Repo
Framework

Non-Convex Weighted Lp Nuclear Norm based ADMM Framework for Image Restoration

Title Non-Convex Weighted Lp Nuclear Norm based ADMM Framework for Image Restoration
Authors Zhiyuan Zha, Xinggan Zhang, Yu Wu, Qiong Wang, Lan Tang
Abstract Since the matrix formed by nonlocal similar patches in a natural image is of low rank, nuclear norm minimization (NNM) has been widely used in image processing. Nonetheless, the nuclear norm, as a convex surrogate of the rank function, usually over-shrinks the rank components and treats the different components equally, and thus may produce a result far from the optimum. To alleviate these limitations of the nuclear norm, in this paper we propose a new method for image restoration via non-convex weighted Lp nuclear norm minimization (NCW-NNM), which is able to more accurately enforce the structural sparsity and self-similarity of the image simultaneously. To make the proposed model tractable and robust, the alternating direction method of multipliers (ADMM) is adopted to solve the associated non-convex minimization problem. Experimental results on various image restoration problems, including image deblurring, image inpainting and image compressive sensing (CS) recovery, demonstrate that the proposed method outperforms many current state-of-the-art methods in both objective and perceptual quality.
Tasks Compressive Sensing, Deblurring, Image Inpainting, Image Restoration
Published 2017-04-24
URL http://arxiv.org/abs/1704.07056v2
PDF http://arxiv.org/pdf/1704.07056v2.pdf
PWC https://paperswithcode.com/paper/non-convex-weighted-lp-nuclear-norm-based
Repo
Framework
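For p = 1, the proximal step of the weighted nuclear norm reduces (under suitable conditions on the weights) to weighted soft-thresholding of the singular values, which is the kind of building block an ADMM iteration invokes on each patch-group matrix; the general Lp case in the paper needs an inner solver instead. A sketch of that p = 1 step:

```python
import numpy as np

def weighted_svt(M, weights):
    # Weighted singular value thresholding: shrink each singular value
    # by its own weight, so large (informative) singular values can be
    # penalised less than small (noisy) ones.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return (U * s_shrunk) @ Vt
```

Weights large enough to zero out trailing singular values produce a genuinely lower-rank estimate, which is the behaviour the plain nuclear norm cannot deliver without over-shrinking the leading components too.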

Approximate Muscle Guided Beam Search for Three-Index Assignment Problem

Title Approximate Muscle Guided Beam Search for Three-Index Assignment Problem
Authors He Jiang, Shuwei Zhang, Zhilei Ren, Xiaochen Lai, Yong Piao
Abstract As a well-known NP-hard problem, the Three-Index Assignment Problem (AP3) has attracted many research efforts aimed at developing heuristics. However, existing heuristics either obtain less competitive solutions or consume too much time. In this paper, a new heuristic named Approximate Muscle guided Beam Search (AMBS) is developed to achieve a good trade-off between solution quality and running time. By combining the approximate muscle with beam search, the size of the solution space can be significantly decreased, so the time for searching can be sharply reduced. Extensive experimental results on the benchmark indicate that the new algorithm obtains solutions of competitive quality and can be employed on large-scale instances. This work not only proposes a new efficient heuristic, but also provides a promising method for improving the efficiency of beam search.
Tasks
Published 2017-03-06
URL http://arxiv.org/abs/1703.01893v1
PDF http://arxiv.org/pdf/1703.01893v1.pdf
PWC https://paperswithcode.com/paper/approximate-muscle-guided-beam-search-for
Repo
Framework
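A bare beam search for AP3 can be sketched as below, assigning the first index level by level and keeping only the cheapest partial assignments; the approximate muscle that the paper uses to guide and prune the candidate pairs is omitted here.

```python
def beam_search_ap3(cost, beam_width=5):
    # cost[i][j][k]: cost of matching i with the pair (j, k).
    # Assign i = 0..n-1 level by level, keeping only the beam_width
    # cheapest partial assignments at each level.
    n = len(cost)
    beam = [(0.0, frozenset(), frozenset(), [])]  # (total, used_j, used_k, pairs)
    for i in range(n):
        candidates = []
        for total, uj, uk, pairs in beam:
            for j in range(n):
                if j in uj:
                    continue
                for k in range(n):
                    if k in uk:
                        continue
                    candidates.append(
                        (total + cost[i][j][k], uj | {j}, uk | {k}, pairs + [(j, k)])
                    )
        candidates.sort(key=lambda s: s[0])
        beam = candidates[:beam_width]
    return beam[0][0], beam[0][3]
```

A width-1 beam degenerates to a greedy heuristic, while a very large width approaches exhaustive search; the paper's contribution is restricting the candidate pairs to the approximate muscle so that wide beams stay cheap.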