January 29, 2020

3333 words 16 mins read

Paper Group ANR 615

Fetal Head and Abdomen Measurement Using Convolutional Neural Network, Hough Transform, and Difference of Gaussian Revolved along Elliptical Path (Dogell) Algorithm

Title Fetal Head and Abdomen Measurement Using Convolutional Neural Network, Hough Transform, and Difference of Gaussian Revolved along Elliptical Path (Dogell) Algorithm
Authors Kezia Irene, Aditya Yudha P., Harlan Haidi, Nurul Faza, Winston Chandra
Abstract The number of fetal-neonatal deaths in Indonesia is still high compared to developed countries. This is caused by the absence of maternal monitoring during pregnancy. This paper presents an automated measurement of fetal head circumference (HC) and abdominal circumference (AC) from ultrasonography (USG) images. This automated measurement is beneficial for detecting fetal abnormalities early in the pregnancy period. We used a convolutional neural network (CNN) to preprocess the USG data. After that, we approximated the head and abdominal circumference using the Hough transform algorithm and the Difference of Gaussian Revolved along Elliptical Path (Dogell) algorithm. We used a data set from national hospitals in Indonesia and, for the accuracy measurement, compared our results to annotated images measured by professional obstetricians. The results show that by using a CNN, we reduced errors caused by noisy images. We found that the Dogell algorithm performs better than the Hough transform algorithm in both time and accuracy. This is the first HC and AC approximation to use a CNN to preprocess the data.
Tasks
Published 2019-11-14
URL https://arxiv.org/abs/1911.06298v1
PDF https://arxiv.org/pdf/1911.06298v1.pdf
PWC https://paperswithcode.com/paper/fetal-head-and-abdomen-measurement-using
Repo
Framework
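As a rough illustration of the Hough-transform step (the paper's Dogell algorithm and CNN preprocessing are not reproduced here), a minimal circle-detecting Hough accumulator over candidate centers and radii can be sketched as below; the function name and the 64-direction sampling are illustrative choices, not the authors' implementation:

```python
import numpy as np

def hough_circle(edge_points, radii, shape):
    """Vote for circle centers: each edge point votes, for every candidate
    radius r, for all centers lying at distance r along sampled directions.
    The accumulator peak is the best-supported (radius, center) pair."""
    h, w = shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for (y, x) in edge_points:
        for ri, r in enumerate(radii):
            cy = np.rint(y - r * np.sin(thetas)).astype(int)
            cx = np.rint(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            # np.add.at accumulates correctly even with repeated indices
            np.add.at(acc[ri], (cy[ok], cx[ok]), 1)
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return radii[ri], (cy, cx)  # best radius and center
```

A head or abdomen contour is closer to an ellipse than a circle, which is one motivation the abstract gives for the elliptical-path Dogell variant; the circle case above just shows the voting mechanism.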

What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge

Title What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge
Authors Kyle Richardson, Ashish Sabharwal
Abstract Open-domain question answering (QA) is known to involve several underlying knowledge and reasoning challenges, but are models actually learning such knowledge when trained on benchmark tasks? To investigate this, we introduce several new challenge tasks that probe whether state-of-the-art QA models have general knowledge about word definitions and general taxonomic reasoning, both of which are fundamental to more complex forms of reasoning and are widespread in benchmark datasets. As an alternative to expensive crowd-sourcing, we introduce a methodology for automatically building datasets from various types of expert knowledge (e.g., knowledge graphs and lexical taxonomies), allowing for systematic control over the resulting probes and for a more comprehensive evaluation. We find automatically constructing probes to be vulnerable to annotation artifacts, which we carefully control for. Our evaluation confirms that transformer-based QA models are already predisposed to recognize certain types of structural lexical knowledge. However, it also reveals a more nuanced picture: their performance degrades substantially with even a slight increase in the number of hops in the underlying taxonomic hierarchy, or as more challenging distractor candidate answers are introduced. Further, even when these models succeed at the standard instance-level evaluation, they leave much room for improvement when assessed at the level of clusters of semantically connected probes (e.g., all Isa questions about a concept).
Tasks Knowledge Graphs, Open-Domain Question Answering, Question Answering
Published 2019-12-31
URL https://arxiv.org/abs/1912.13337v1
PDF https://arxiv.org/pdf/1912.13337v1.pdf
PWC https://paperswithcode.com/paper/what-does-my-qa-model-know-devising
Repo
Framework
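The probe-construction recipe (derive Isa questions with a controlled number of taxonomic hops and chosen distractors from an existing resource) can be sketched with a toy taxonomy; the data, names, and question template are illustrative stand-ins for WordNet-style sources, not the authors' pipeline:

```python
import random

# Toy taxonomy: child -> parent, standing in for a WordNet-style hypernym graph.
TAXONOMY = {
    "beagle": "dog", "dog": "canine", "canine": "mammal", "mammal": "animal",
    "oak": "tree", "tree": "plant", "plant": "organism",
}

def ancestors(term):
    """All hypernyms of `term`, nearest first (hop 1, hop 2, ...)."""
    out = []
    while term in TAXONOMY:
        term = TAXONOMY[term]
        out.append(term)
    return out

def make_isa_probe(term, hops, n_distractors=3, seed=0):
    """Build one multiple-choice Isa question whose gold answer sits exactly
    `hops` levels above `term`; distractors are taxonomy nodes unrelated to
    the answer chain, giving systematic control over probe difficulty."""
    chain = ancestors(term)
    gold = chain[hops - 1]
    pool = [t for t in set(TAXONOMY) | set(TAXONOMY.values())
            if t != term and t not in chain]
    rng = random.Random(seed)
    distractors = rng.sample(pool, n_distractors)
    return {"question": f"Is-a: what is a {term}?",
            "answer": gold, "choices": sorted(distractors + [gold])}
```

Varying `hops` and `n_distractors` is exactly the kind of systematic control the abstract describes: the same template yields easy one-hop probes and harder multi-hop ones.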

Scaling in Words on Twitter

Title Scaling in Words on Twitter
Authors Eszter Bokányi, Dániel Kondor, Gábor Vattay
Abstract Scaling properties of language are a useful tool for understanding generative processes in texts. We investigate the scaling relations in citywise Twitter corpora coming from the Metropolitan and Micropolitan Statistical Areas of the United States. We observe a slightly superlinear urban scaling with the city population for the total volume of the tweets and words created in a city. We then find that a certain core vocabulary follows the scaling relationship of the bulk text, but most words are sensitive to city size, exhibiting a super- or a sublinear urban scaling. For both regimes we can offer a plausible explanation based on the meaning of the words. We also show that the parameters for Zipf’s law and Heaps’ law differ on Twitter from those of other texts, and that the exponent of Zipf’s law changes with city size.
Tasks
Published 2019-03-11
URL http://arxiv.org/abs/1903.04329v1
PDF http://arxiv.org/pdf/1903.04329v1.pdf
PWC https://paperswithcode.com/paper/scaling-in-words-on-twitter
Repo
Framework
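The urban-scaling analysis boils down to fitting a power law, volume ∝ population^beta, where beta > 1 means superlinear scaling. A minimal sketch of that fit via ordinary least squares in log-log space (a standard technique; the variable names and synthetic data are illustrative, not the paper's corpus):

```python
import numpy as np

def scaling_exponent(population, volume):
    """Fit volume ~ C * population**beta by OLS on log-transformed data;
    the slope of the log-log line is the scaling exponent beta."""
    logp, logv = np.log(population), np.log(volume)
    beta, logc = np.polyfit(logp, logv, 1)  # slope first, then intercept
    return beta

# Synthetic check: cities obeying a known, slightly superlinear law.
pop = np.array([1e4, 1e5, 1e6, 1e7])
vol = 0.5 * pop ** 1.12  # beta = 1.12 by construction
beta = scaling_exponent(pop, vol)
```

The same log-log regression applies to Zipf's law (word rank vs. frequency), which is how exponent changes with city size would be quantified.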

See Clearer at Night: Towards Robust Nighttime Semantic Segmentation through Day-Night Image Conversion

Title See Clearer at Night: Towards Robust Nighttime Semantic Segmentation through Day-Night Image Conversion
Authors Lei Sun, Kaiwei Wang, Kailun Yang, Kaite Xiang
Abstract Currently, semantic segmentation shows remarkable efficiency and reliability in standard scenarios such as daytime scenes with favorable illumination conditions. However, in the face of adverse conditions such as nighttime, semantic segmentation loses its accuracy significantly. One of the main causes of the problem is the lack of sufficient annotated segmentation datasets of nighttime scenes. In this paper, we propose a framework to alleviate the accuracy decline when semantic segmentation is taken to adverse conditions, by using Generative Adversarial Networks (GANs). To bridge the daytime and nighttime image domains, we made the key observation that, compared to datasets for adverse conditions, there is a considerable amount of segmentation data for standard conditions, such as BDD and our collected ZJU datasets. Our GAN-based nighttime semantic segmentation framework includes two methods. In the first method, GANs are used to translate nighttime images to the daytime, so that semantic segmentation can be performed using robust models already trained on daytime datasets. In the second method, we use GANs to translate different ratios of daytime images in the dataset to the nighttime while keeping their labels. In this way, synthetic nighttime segmentation datasets can be generated to yield models prepared to operate robustly in nighttime conditions. In our experiments, the latter method significantly boosts performance at nighttime, as evidenced by quantitative results using Intersection over Union (IoU) and Pixel Accuracy (Acc). We show that the performance varies with the proportion of synthetic nighttime images in the dataset, where the sweet spot corresponds to the most robust performance across day and night.
Tasks Semantic Segmentation
Published 2019-08-16
URL https://arxiv.org/abs/1908.05868v1
PDF https://arxiv.org/pdf/1908.05868v1.pdf
PWC https://paperswithcode.com/paper/see-clearer-at-night-towards-robust-nighttime
Repo
Framework
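The second method's data-mixing step (translating a fraction of daytime images to nighttime while keeping their labels) can be sketched as below; the function name and the `translate` callback, which stands in for the trained GAN translator, are illustrative assumptions:

```python
import random

def mix_day_night(day_samples, translate, ratio, seed=0):
    """Replace a `ratio` fraction of (image, label) daytime training pairs
    with their GAN-translated nighttime versions, keeping the original
    segmentation labels, since translation preserves scene layout."""
    rng = random.Random(seed)
    n_night = int(len(day_samples) * ratio)
    night_idx = set(rng.sample(range(len(day_samples)), n_night))
    return [(translate(img) if i in night_idx else img, label)
            for i, (img, label) in enumerate(day_samples)]
```

Sweeping `ratio` over several values and retraining is how one would locate the "sweet spot" the abstract refers to.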

Classical linear logic, cobordisms and categorial grammars

Title Classical linear logic, cobordisms and categorial grammars
Authors Sergey Slavnov
Abstract We propose a categorial grammar based on classical multiplicative linear logic. This can be seen as an extension of abstract categorial grammars (ACG) and is at least as expressive. However, constituents of {\it linear logic grammars (LLG)} are not abstract ${\lambda}$-terms, but simply tuples of words with labeled endpoints and supplied with specific {\it plugging instructions}: the sets of endpoints are subdivided into {\it incoming} and {\it outgoing} parts. We call such objects {\it word cobordisms}. A key observation is that word cobordisms can be organized in a category, very similar to the familiar category of topological cobordisms. This category is symmetric monoidal closed and compact closed and thus is a model of linear $\lambda$-calculus and of classical, as well as intuitionistic, linear logic. This allows us to use linear logic as a typing system for word cobordisms. At the very least, this gives a concrete and intuitive representation of ACG. We think, however, that the category of word cobordisms, which has a rich structure and is independent of any grammar, might be interesting in its own right.
Tasks
Published 2019-11-10
URL https://arxiv.org/abs/1911.03962v1
PDF https://arxiv.org/pdf/1911.03962v1.pdf
PWC https://paperswithcode.com/paper/classical-linear-logic-cobordisms-and-1
Repo
Framework

Improving the Projection of Global Structures in Data through Spanning Trees

Title Improving the Projection of Global Structures in Data through Spanning Trees
Authors Daniel Alcaide, Jan Aerts
Abstract The connection of edges in a graph generates a structure that is independent of a coordinate system. This visual metaphor allows creating a more flexible representation of data than a two-dimensional scatterplot. In this work, we present STAD (Spanning Trees as Approximation of Data), a dimensionality reduction method to approximate the high-dimensional structure into a graph with or without formulating prior hypotheses. STAD generates an abstract representation of high-dimensional data by giving each data point a location in a graph which preserves the distances in the original high-dimensional space. The STAD graph is built upon the Minimum Spanning Tree (MST) to which new edges are added until the correlation between the distances from the graph and the original dataset is maximized. Additionally, STAD supports the inclusion of additional functions to focus the exploration and allow the analysis of data from new perspectives, emphasizing traits in data which otherwise would remain hidden. We demonstrate the effectiveness of our method by applying it to two real-world datasets: traffic density in Barcelona and temporal measurements of air quality in Castile and León in Spain.
Tasks Dimensionality Reduction
Published 2019-07-12
URL https://arxiv.org/abs/1907.05783v1
PDF https://arxiv.org/pdf/1907.05783v1.pdf
PWC https://paperswithcode.com/paper/improving-the-projection-of-global-structures
Repo
Framework
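The core STAD loop described in the abstract — build the MST, then add edges while the correlation between graph distances and original distances improves — can be sketched in plain Python. This is a minimal greedy reading of the abstract, not the authors' implementation (which may optimize the edge-addition search differently):

```python
import numpy as np
from itertools import combinations

def pearson(a, b):
    return np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1]

def mst_edges(dist):
    """Prim's algorithm on a full pairwise-distance matrix."""
    n = len(dist)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree), key=lambda e: dist[e[0], e[1]])
        edges.append((min(i, j), max(i, j)))
        in_tree.add(j)
    return edges

def hop_distances(n, edges):
    """All-pairs unweighted shortest-path lengths via BFS from every node."""
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    D = np.full((n, n), np.inf)
    for s in range(n):
        D[s, s], frontier, d = 0.0, [s], 0
        while frontier:
            d += 1
            nxt = [v for u in frontier for v in adj[u] if np.isinf(D[s, v])]
            for v in nxt:
                D[s, v] = d
            frontier = list(dict.fromkeys(nxt))
    return D

def stad(dist):
    """Start from the MST, then greedily add the edges that improve the
    correlation between graph hop-distances and the original distances."""
    n = len(dist)
    edges = mst_edges(dist)
    iu = np.triu_indices(n, 1)
    best = pearson(hop_distances(n, edges)[iu], dist[iu])
    pool = [e for e in combinations(range(n), 2) if e not in set(edges)]
    improved = True
    while improved:
        improved = False
        for e in list(pool):
            c = pearson(hop_distances(n, edges + [e])[iu], dist[iu])
            if c > best:
                best, edges, improved = c, edges + [e], True
                pool.remove(e)
    return edges, best
```

For points already on a line, the MST alone achieves correlation 1 and no chords are added; for data with loops or clusters, extra edges raise the correlation above what the tree alone can reach.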

Defeating Opaque Predicates Statically through Machine Learning and Binary Analysis

Title Defeating Opaque Predicates Statically through Machine Learning and Binary Analysis
Authors Ramtine Tofighi-Shirazi, Irina Asăvoae, Philippe Elbaz-Vincent, Thanh-Ha Le
Abstract We present a new approach that bridges binary analysis techniques with machine learning classification in order to provide a static and generic evaluation technique for opaque predicates, regardless of their construction. We use this technique as a static automated deobfuscation tool to remove the opaque predicates introduced by obfuscation mechanisms. According to our experimental results, our models have up to 98% accuracy at detecting and deobfuscating state-of-the-art opaque predicate patterns. By contrast, leading-edge deobfuscation methods based on symbolic execution show less accuracy, mostly due to the constraints of SMT solvers and the lack of scalability of dynamic symbolic analyses. Our approach underlines the efficiency of hybrid symbolic analysis and machine learning techniques for a static and generic deobfuscation methodology.
Tasks
Published 2019-09-04
URL https://arxiv.org/abs/1909.01640v1
PDF https://arxiv.org/pdf/1909.01640v1.pdf
PWC https://paperswithcode.com/paper/defeating-opaque-predicates-statically
Repo
Framework

Fetal Pose Estimation in Volumetric MRI using a 3D Convolution Neural Network

Title Fetal Pose Estimation in Volumetric MRI using a 3D Convolution Neural Network
Authors Junshen Xu, Molin Zhang, Esra Abaci Turk, Larry Zhang, Ellen Grant, Kui Ying, Polina Golland, Elfar Adalsteinsson
Abstract The performance and diagnostic utility of magnetic resonance imaging (MRI) in pregnancy is fundamentally constrained by fetal motion. Motion of the fetus, which is unpredictable and rapid on the scale of conventional imaging times, limits the set of viable acquisition techniques to single-shot imaging with severe compromises in signal-to-noise ratio and diagnostic contrast, and frequently results in unacceptable image quality. Surprisingly little is known about the characteristics of fetal motion during MRI and here we propose and demonstrate methods that exploit a growing repository of MRI observations of the gravid abdomen that are acquired at low spatial resolution but relatively high temporal resolution and over long durations (10-30 minutes). We estimate fetal pose per frame in MRI volumes of the pregnant abdomen via deep learning algorithms that detect key fetal landmarks. Evaluation of the proposed method shows that our framework achieves quantitatively an average error of 4.47 mm and 96.4% accuracy (with error less than 10 mm). Fetal pose estimation in MRI time series yields novel means of quantifying fetal movements in health and disease, and enables the learning of kinematic models that may enhance prospective mitigation of fetal motion artifacts during MRI acquisition.
Tasks Pose Estimation, Time Series
Published 2019-07-10
URL https://arxiv.org/abs/1907.04500v1
PDF https://arxiv.org/pdf/1907.04500v1.pdf
PWC https://paperswithcode.com/paper/fetal-pose-estimation-in-volumetric-mri-using
Repo
Framework
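The reported evaluation metrics (mean landmark error in millimetres and the fraction of keypoints within a 10 mm threshold) can be computed as below; this is a generic sketch of those two metrics, not the authors' evaluation code:

```python
import numpy as np

def landmark_metrics(pred, gt, thresh_mm=10.0):
    """Mean Euclidean error (mm) between predicted and ground-truth 3D
    landmarks, plus the fraction of landmarks within `thresh_mm` —
    the two numbers the abstract reports (4.47 mm, 96.4%)."""
    err = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float),
                         axis=-1)
    return err.mean(), (err < thresh_mm).mean()
```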

Towards countering hate speech and personal attack in social media

Title Towards countering hate speech and personal attack in social media
Authors Polychronis Charitidis, Stavros Doropoulos, Stavros Vologiannidis, Ioannis Papastergiou, Sophia Karakeva
Abstract The damaging effects of hate speech in social media are evident during the last few years, and several organizations, researchers and the social media platforms themselves have tried to harness them without great success. Recently, following the advent of deep learning, several novel approaches appeared in the field of hate speech detection. However, it is apparent that such approaches depend on large-scale datasets in order to exhibit competitive performance. In this paper, we present a novel, publicly available collection of datasets in five different languages, that consists of tweets referring to journalism-related accounts, including high-quality human annotations for hate speech and personal attack. To build the datasets we follow a concise annotation strategy and employ an active learning approach. Additionally, we present a number of state-of-the-art deep learning architectures for hate speech detection and use these datasets to train and evaluate them. Finally, we propose an ensemble model that outperforms all individual models.
Tasks Active Learning, Hate Speech Detection
Published 2019-12-05
URL https://arxiv.org/abs/1912.04106v1
PDF https://arxiv.org/pdf/1912.04106v1.pdf
PWC https://paperswithcode.com/paper/towards-countering-hate-speech-and-personal
Repo
Framework
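The active learning component can be illustrated with a minimal uncertainty-sampling selector; this is one common query strategy, offered here as an assumed sketch since the abstract does not specify which strategy was used:

```python
def select_for_annotation(probs, k):
    """Uncertainty sampling: choose the k unlabeled examples whose
    predicted hate-speech probability is closest to the 0.5 decision
    boundary, i.e. the ones the current model is least sure about."""
    ranked = sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))
    return ranked[:k]
```

In an annotation loop, the selected tweets go to human annotators, the model is retrained on the grown dataset, and the cycle repeats.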

Know What You Don’t Know: Modeling a Pragmatic Speaker that Refers to Objects of Unknown Categories

Title Know What You Don’t Know: Modeling a Pragmatic Speaker that Refers to Objects of Unknown Categories
Authors Sina Zarrieß, David Schlangen
Abstract Zero-shot learning in Language &amp; Vision is the task of correctly labelling (or naming) objects of novel categories. Another strand of work in L&amp;V aims at pragmatically informative rather than “correct” object descriptions, e.g. in reference games. We combine these lines of research and model zero-shot reference games, where a speaker needs to successfully refer to a novel object in an image. Inspired by models of “rational speech acts”, we extend a neural generator to become a pragmatic speaker reasoning about uncertain object categories. As a result of this reasoning, the generator produces fewer nouns and names of distractor categories as compared to a literal speaker. We show that this conversational strategy for dealing with novel objects often improves communicative success, in terms of resolution accuracy of an automatic listener.
Tasks Zero-Shot Learning
Published 2019-06-13
URL https://arxiv.org/abs/1906.05518v1
PDF https://arxiv.org/pdf/1906.05518v1.pdf
PWC https://paperswithcode.com/paper/know-what-you-dont-know-modeling-a-pragmatic
Repo
Framework

Hate Speech Detection on Vietnamese Social Media Text using the Bi-GRU-LSTM-CNN Model

Title Hate Speech Detection on Vietnamese Social Media Text using the Bi-GRU-LSTM-CNN Model
Authors Tin Van Huynh, Vu Duc Nguyen, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen, Anh Gia-Tuan Nguyen
Abstract In recent years, hate speech detection has become one of the interesting fields in natural language processing and computational linguistics. In this paper, we describe our system for this problem at the VLSP 2019 shared task on Hate Speech Detection on Social Networks, whose corpus contains 20,345 human-labeled comments/posts for training and 5,086 for public testing. We implement a deep learning method based on a Bi-GRU-LSTM-CNN classifier for this task. Our result is an F1-score of 70.576%, ranking 5th in performance on the public test set.
Tasks Hate Speech Detection
Published 2019-11-09
URL https://arxiv.org/abs/1911.03644v3
PDF https://arxiv.org/pdf/1911.03644v3.pdf
PWC https://paperswithcode.com/paper/hate-speech-detection-on-vietnamese-social-1
Repo
Framework

Sparse Density Estimation with Measurement Errors

Title Sparse Density Estimation with Measurement Errors
Authors Xiaowei Yang, Huiming Zhang, Haoyu Wei, Shouzheng Zhang
Abstract This paper aims to build an estimate of an unknown density of data with measurement error as a linear combination of functions from a dictionary. Inspired by the penalization approach, we propose the weighted Elastic-net penalized minimal $L_2$-distance method for sparse coefficient estimation, where the adaptive weights come from sharp concentration inequalities. The optimal weighted tuning parameters are obtained by the first-order conditions holding with high probability. Under local coherence or minimal eigenvalue assumptions, non-asymptotic oracle inequalities are derived. These theoretical results are transposed to obtain support recovery with high probability. The issue of calibrating these procedures is then studied through numerical experiments for discrete and continuous distributions, which show the significant improvement our procedure obtains over other conventional approaches. Finally, we apply the method to a meteorological data set, showing that it is better than conventional approaches at detecting the shape of a multi-modal density.
Tasks Density Estimation
Published 2019-11-14
URL https://arxiv.org/abs/1911.06215v2
PDF https://arxiv.org/pdf/1911.06215v2.pdf
PWC https://paperswithcode.com/paper/sparse-density-estimation-with-measurement
Repo
Framework

Structured 2D Representation of 3D Data for Shape Processing

Title Structured 2D Representation of 3D Data for Shape Processing
Authors Kripasindhu Sarkar, Elizabeth Mathews, Didier Stricker
Abstract We represent 3D shape by structured 2D representations of fixed length, making it feasible to apply well-investigated 2D convolutional neural networks (CNN) for both discriminative and geometric tasks on 3D shapes. We first provide a general introduction to such structured descriptors, analyze their different forms and show how a simple 2D CNN can be used to achieve good classification results. With a specialized classification network for images and our structured representation, we achieve a classification accuracy of 99.7% on the ModelNet40 test set, improving the previous state-of-the-art by a large margin. We finally provide a novel framework for performing the geometric task of 3D segmentation using 2D CNNs and the structured representation, demonstrating the utility of such descriptors for both discriminative and geometric tasks.
Tasks
Published 2019-03-25
URL http://arxiv.org/abs/1903.10360v1
PDF http://arxiv.org/pdf/1903.10360v1.pdf
PWC https://paperswithcode.com/paper/structured-2d-representation-of-3d-data-for
Repo
Framework

Efficiently Learning Structured Distributions from Untrusted Batches

Title Efficiently Learning Structured Distributions from Untrusted Batches
Authors Sitan Chen, Jerry Li, Ankur Moitra
Abstract We study the problem, introduced by Qiao and Valiant, of learning from untrusted batches. Here, we assume $m$ users, all of whom have samples from some underlying distribution $p$ over $1, \ldots, n$. Each user sends a batch of $k$ i.i.d. samples from this distribution; however an $\epsilon$-fraction of users are untrustworthy and can send adversarially chosen responses. The goal is then to learn $p$ in total variation distance. When $k = 1$ this is the standard robust univariate density estimation setting and it is well-understood that $\Omega (\epsilon)$ error is unavoidable. Surprisingly, Qiao and Valiant gave an estimator which improves upon this rate when $k$ is large. Unfortunately, their algorithms run in time exponential in either $n$ or $k$. We first give a sequence of polynomial time algorithms whose estimation error approaches the information-theoretically optimal bound for this problem. Our approach is based on recent algorithms derived from the sum-of-squares hierarchy, in the context of high-dimensional robust estimation. We show that algorithms for learning from untrusted batches can also be cast in this framework, but by working with a more complicated set of test functions. It turns out this abstraction is quite powerful and can be generalized to incorporate additional problem specific constraints. Our second and main result is to show that this technology can be leveraged to build in prior knowledge about the shape of the distribution. Crucially, this allows us to reduce the sample complexity of learning from untrusted batches to polylogarithmic in $n$ for most natural classes of distributions, which is important in many applications. To do so, we demonstrate that these sum-of-squares algorithms for robust mean estimation can be made to handle complex combinatorial constraints (e.g. those arising from VC theory), which may be of independent technical interest.
Tasks Density Estimation
Published 2019-11-05
URL https://arxiv.org/abs/1911.02035v1
PDF https://arxiv.org/pdf/1911.02035v1.pdf
PWC https://paperswithcode.com/paper/efficiently-learning-structured-distributions
Repo
Framework
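The paper's sum-of-squares algorithms are far beyond a short sketch, but the basic robustness idea — aggregating per-batch empirical frequencies so that an $\epsilon$-fraction of adversarial batches has limited influence — can be illustrated with a simple coordinate-wise median baseline (explicitly not the authors' method, and without its error guarantees):

```python
import numpy as np

def robust_estimate(batches, n):
    """Estimate a distribution over {0, ..., n-1} from batches of samples:
    take the per-symbol median of batch-wise empirical frequencies, then
    renormalize. The median caps the influence of any minority of
    adversarial batches, unlike a plain pooled average."""
    freqs = np.array([[np.mean(np.asarray(b) == s) for s in range(n)]
                      for b in batches])
    med = np.median(freqs, axis=0)
    return med / med.sum()
```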

Spectrum Sensing Based on Deep Learning Classification for Cognitive Radios

Title Spectrum Sensing Based on Deep Learning Classification for Cognitive Radios
Authors Shilian Zheng, Shichuan Chen, Peihan Qi, Huaji Zhou, Xiaoniu Yang
Abstract Spectrum sensing is a key technology for cognitive radios. We present spectrum sensing as a classification problem and propose a sensing method based on deep learning classification. We normalize the received signal power to overcome the effects of noise power uncertainty. We train the model with as many types of signals as possible, as well as noise data, to enable the trained network model to adapt to untrained new signals. We also use transfer learning strategies to improve the performance on real-world signals. Extensive experiments are conducted to evaluate the performance of this method. The simulation results show that the proposed method performs better than two traditional spectrum sensing methods, i.e., the maximum-minimum eigenvalue ratio-based method and the frequency domain entropy-based method. In addition, the experimental results on new untrained signal types show that our method can adapt to the detection of these new signals. Furthermore, the real-world signal detection experiments show that the detection performance can be further improved by transfer learning. Finally, experiments under colored noise show that our proposed method retains superior detection performance, while the traditional methods degrade significantly, which further validates the superiority of our method.
Tasks Transfer Learning
Published 2019-09-13
URL https://arxiv.org/abs/1909.06020v1
PDF https://arxiv.org/pdf/1909.06020v1.pdf
PWC https://paperswithcode.com/paper/spectrum-sensing-based-on-deep-learning
Repo
Framework
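The power-normalization step the abstract describes (removing dependence on the unknown noise/signal level before classification) can be sketched as below; the exact normalization the authors use may differ from this unit-average-power convention:

```python
import numpy as np

def normalize_power(iq):
    """Scale a complex baseband signal to unit average power, so the
    classifier's input is invariant to the absolute received power and
    thus to noise power uncertainty."""
    iq = np.asarray(iq, dtype=complex)
    power = np.mean(np.abs(iq) ** 2)
    return iq / np.sqrt(power)
```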