Paper Group ANR 1329
Hindi Question Generation Using Dependency Structures
Title | Hindi Question Generation Using Dependency Structures |
Authors | Kaveri Anuranjana, Vijjini Anvesh Rao, Radhika Mamidi |
Abstract | Hindi question answering systems suffer from a lack of data. To address this, we present a rule-based approach to automatic question generation in Hindi, formalizing question transformation methods based on karaka-dependency theory. We use a Hindi dependency parser to mark the karaka roles, and use IndoWordNet, a Hindi ontology, to detect the semantic category of the karaka role heads in order to generate the interrogatives. We analyze how one sentence can have multiple generations from the same karaka role’s rule. The generations are manually annotated by multiple annotators on a semantic and syntactic scale for evaluation. Further, we constrain our generation with various semantic and syntactic filters so as to improve the generation quality. Using these methods, we are able to generate diverse questions, significantly more than the number of sentences fed to the system. |
Tasks | Question Answering, Question Generation |
Published | 2019-06-20 |
URL | https://arxiv.org/abs/1906.08570v1 |
PDF | https://arxiv.org/pdf/1906.08570v1.pdf |
PWC | https://paperswithcode.com/paper/hindi-question-generation-using-dependency |
Repo | |
Framework | |
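To make the karaka-rule mechanism concrete, here is a minimal sketch (not the authors' code); the role names, semantic categories, and input format are simplified stand-ins for what a Hindi dependency parser and IndoWordNet would supply.

```python
# Illustrative sketch only -- not the authors' system. Assumes an external
# dependency parser has already produced (index, karaka role, semantic
# category) triples for the sentence; the rule entries are simplified.

# (karaka role, semantic category of the role head) -> Hindi interrogative
RULES = {
    ("karta", "person"):     "kaun",   # agent who is a person   -> "who"
    ("karma", "object"):     "kya",    # patient that is a thing -> "what"
    ("adhikarana", "place"): "kahan",  # locative                -> "where"
    ("adhikarana", "time"):  "kab",    # temporal                -> "when"
}

def generate_questions(tokens, roles):
    """Yield one question per karaka role that matches a rule, so a single
    sentence can produce several questions (as the paper observes)."""
    for idx, role, category in roles:
        wh = RULES.get((role, category))
        if wh:
            yield " ".join(tokens[:idx] + [wh] + tokens[idx + 1:]) + "?"
```

The paper's semantic and syntactic filters would then prune generations whose interrogative clashes with the category of the replaced role head.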
Evolutionary Dataset Optimisation: learning algorithm quality through evolution
Title | Evolutionary Dataset Optimisation: learning algorithm quality through evolution |
Authors | Henry Wilde, Vincent Knight, Jonathan Gillard |
Abstract | In this paper we propose a novel method for learning how algorithms perform. Classically, algorithms are compared on a finite number of existing (or newly simulated) benchmark datasets based on a fixed metric; the algorithm(s) with the smallest value of this metric are chosen to be the ‘best performing’. We offer a new approach that flips this paradigm: we aim to gain a richer picture of the performance of an algorithm by generating artificial data through genetic evolution, the purpose of which is to create populations of datasets for which a particular algorithm performs well on a given metric. These datasets can be studied to learn what attributes lead to a particular progression of a given algorithm. Following a detailed description of the algorithm as well as a brief description of an open source implementation, a case study in clustering is presented. This case study demonstrates the performance and nuances of the method, which we call Evolutionary Dataset Optimisation. In this study, a number of known properties of datasets preferable for the clustering algorithms $k$-means and DBSCAN are realised in the generated datasets. |
Tasks | |
Published | 2019-07-31 |
URL | https://arxiv.org/abs/1907.13508v3 |
PDF | https://arxiv.org/pdf/1907.13508v3.pdf |
PWC | https://paperswithcode.com/paper/evolutionary-dataset-optimisation-learning |
Repo | |
Framework | |
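Independent of the authors' open-source implementation, the evolutionary loop the abstract describes can be sketched as follows; all hyperparameters and the row-wise crossover are illustrative choices, and `fitness` wraps whatever algorithm-plus-metric is under study.

```python
import numpy as np

def evolve_datasets(fitness, n_gen=50, pop_size=30, n_rows=100, n_cols=2,
                    keep=10, mut_sigma=0.1, rng=np.random.default_rng(0)):
    """Evolve a population of datasets so that fitness(dataset) is maximised.
    Each individual is a (n_rows, n_cols) array of points."""
    pop = [rng.normal(size=(n_rows, n_cols)) for _ in range(pop_size)]
    for _ in range(n_gen):
        scores = [fitness(d) for d in pop]
        elite = [pop[i] for i in np.argsort(scores)[-keep:]]  # selection
        children = []
        while len(children) < pop_size - keep:
            a, b = rng.choice(len(elite), size=2, replace=False)
            mask = rng.random((n_rows, 1)) < 0.5              # row-wise crossover
            child = np.where(mask, elite[a], elite[b])
            child = child + rng.normal(scale=mut_sigma, size=child.shape)  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

For the clustering case study, `fitness` could be, for instance, the silhouette score of a fitted k-means model, so the population drifts toward datasets on which k-means does well.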
Learning Internal Representations (COLT 1995)
Title | Learning Internal Representations (COLT 1995) |
Authors | Jonathan Baxter |
Abstract | Probably the most important problem in machine learning is the preliminary biasing of a learner’s hypothesis space so that it is small enough to ensure good generalisation from reasonable training sets, yet large enough that it contains a good solution to the problem being learnt. In this paper a mechanism for *automatically* learning or biasing the learner’s hypothesis space is introduced. It works by first learning an appropriate *internal representation* for a learning environment and then using that representation to bias the learner’s hypothesis space for the learning of future tasks drawn from the same environment. An internal representation must be learnt by sampling from *many similar tasks*, not just a single task as occurs in ordinary machine learning. It is proved that the number of examples $m$ *per task* required to ensure good generalisation from a representation learner obeys $m = O(a+b/n)$, where $n$ is the number of tasks being learnt and $a$ and $b$ are constants. If the tasks are learnt independently (*i.e.* without a common representation) then $m=O(a+b)$. It is argued that for learning environments such as speech and character recognition $b\gg a$, and hence representation learning in these environments can potentially yield a drastic reduction in the number of examples required per task. It is also proved that if $n = O(b)$ (with $m=O(a+b/n)$) then the representation learnt will be good for learning novel tasks from the same environment, and that the number of examples required to generalise well on a novel task will be reduced to $O(a)$ (as opposed to $O(a+b)$ if no representation is used). It is shown that gradient descent can be used to train neural network representations, and experimental results are reported providing strong qualitative support for the theoretical results. |
Tasks | Representation Learning |
Published | 2019-11-13 |
URL | https://arxiv.org/abs/1911.05781v3 |
PDF | https://arxiv.org/pdf/1911.05781v3.pdf |
PWC | https://paperswithcode.com/paper/learning-internal-representations-1 |
Repo | |
Framework | |
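Plugging illustrative numbers into the bound shows why the claim matters when $b \gg a$ (the constants are invented for this example, not taken from the paper):

```latex
\[
  m \;=\; O\!\Big(a + \frac{b}{n}\Big)
  \quad\text{with}\quad a = 10^2,\; b = 10^4,\; n = 10^2
  \quad\Longrightarrow\quad m = O(200)\ \text{examples per task,}
\]
\[
  \text{versus}\quad m \;=\; O(a + b) \;=\; O(10{,}100)
  \ \text{examples per task if the tasks are learnt independently.}
\]
```

In this toy instance, sharing a representation across the $n = 100$ tasks cuts the per-task sample requirement by roughly a factor of fifty.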
Multi-scale GANs for Memory-efficient Generation of High Resolution Medical Images
Title | Multi-scale GANs for Memory-efficient Generation of High Resolution Medical Images |
Authors | Hristina Uzunova, Jan Ehrhardt, Fabian Jacob, Alex Frydrychowicz, Heinz Handels |
Abstract | Currently, generative adversarial networks (GANs) are rarely applied to medical images of large size, especially 3D volumes, due to their large computational demand. We propose a novel multi-scale patch-based GAN approach to generate large high-resolution 2D and 3D images. Our key idea is to first learn a low-resolution version of the image and then generate patches of successively growing resolution, conditioned on previous scales. In a domain-translation use case, 3D thorax CTs of size 512x512x512 and thorax X-rays of size 2048x2048 are generated, and we show that, due to the constant GPU memory demand of our method, arbitrarily large images of high resolution can be generated. Moreover, compared to common patch-based approaches, our multi-resolution scheme enables better image quality and prevents patch artifacts. |
Tasks | |
Published | 2019-07-02 |
URL | https://arxiv.org/abs/1907.01376v2 |
PDF | https://arxiv.org/pdf/1907.01376v2.pdf |
PWC | https://paperswithcode.com/paper/multi-scale-gans-for-memory-efficient |
Repo | |
Framework | |
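A toy 2D rendition of the scale-by-scale, patch-by-patch decoding loop; the network below is a stand-in, not the paper's architecture, and the point is only that a single patch is ever resident in memory at a time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchGenerator(nn.Module):
    """Toy patch refiner: maps an upsampled context patch (plus one noise
    channel) to a refined patch of the same spatial size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, ctx):
        z = torch.randn_like(ctx)              # per-patch noise channel
        return self.net(torch.cat([ctx, z], dim=1))

def synthesize(g_patch, low_res, scales=(128, 256), patch=64):
    """Refine a low-res image scale by scale, one patch at a time, so GPU
    memory stays constant regardless of the final resolution."""
    img = low_res                              # (1, 1, 64, 64), e.g. from a GAN
    for s in scales:
        up = F.interpolate(img, size=(s, s), mode="bilinear",
                           align_corners=False)   # condition on previous scale
        out = torch.empty_like(up)
        for y in range(0, s, patch):
            for x in range(0, s, patch):
                ctx = up[..., y:y + patch, x:x + patch]
                out[..., y:y + patch, x:x + patch] = g_patch(ctx)
        img = out
    return img

# img = synthesize(PatchGenerator(), torch.randn(1, 1, 64, 64))
```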
Word Embeddings for Sentiment Analysis: A Comprehensive Empirical Survey
Title | Word Embeddings for Sentiment Analysis: A Comprehensive Empirical Survey |
Authors | Erion Çano, Maurizio Morisio |
Abstract | This work investigates the role of factors like training method, training corpus size and thematic relevance of texts in the performance of word embedding features on sentiment analysis of tweets, song lyrics, movie reviews and item reviews. We also explore specific training or post-processing methods that can be used to enhance the performance of word embeddings in certain tasks or domains. Our empirical observations indicate that models trained with multithematic texts that are large and rich in vocabulary are the best at answering syntactic and semantic word analogy questions. We further observe that the influence of thematic relevance is stronger on movie and phone reviews, but weaker on tweets and lyrics. These two latter domains are more sensitive to corpus size and training method, with GloVe outperforming Word2vec. “Injecting” extra intelligence from lexicons and generating sentiment-specific word embeddings are two prominent alternatives for increasing the performance of word embedding features. |
Tasks | Sentiment Analysis, Word Embeddings |
Published | 2019-02-02 |
URL | http://arxiv.org/abs/1902.00753v1 |
PDF | http://arxiv.org/pdf/1902.00753v1.pdf |
PWC | https://paperswithcode.com/paper/word-embeddings-for-sentiment-analysis-a |
Repo | |
Framework | |
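For context, the standard recipe for turning such embeddings into sentiment features is to average them per document and feed the result to a classifier. A minimal baseline sketch, where the `vectors` lookup stands in for any pretrained Word2vec or GloVe model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def doc_vector(tokens, vectors, d=100):
    """Average the embeddings of in-vocabulary tokens (a common baseline
    for turning word embeddings into document-level features)."""
    hits = [vectors[t] for t in tokens if t in vectors]
    return np.mean(hits, axis=0) if hits else np.zeros(d)

def train_sentiment(docs, labels, vectors):
    X = np.stack([doc_vector(doc.lower().split(), vectors) for doc in docs])
    return LogisticRegression(max_iter=1000).fit(X, labels)

# vectors = {"good": np.ones(100), "bad": -np.ones(100)}   # hypothetical lookup
# clf = train_sentiment(["good movie", "bad plot"], [1, 0], vectors)
```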
Transferability of Spectral Graph Convolutional Neural Networks
Title | Transferability of Spectral Graph Convolutional Neural Networks |
Authors | Ron Levie, Wei Huang, Lorenzo Bucci, Michael M. Bronstein, Gitta Kutyniok |
Abstract | This paper focuses on spectral graph convolutional neural networks (ConvNets), where filters are defined as elementwise multiplication in the frequency domain of a graph. In machine learning settings where the dataset consists of signals defined on many different graphs, the trained ConvNet should generalize to signals on graphs unseen in the training set. It is thus important to transfer ConvNets between graphs. Transferability, which is a certain type of generalization capability, can be loosely defined as follows: if two graphs describe the same phenomenon, then a single filter or ConvNet should have similar repercussions on both graphs. This paper aims at debunking the common misconception that spectral filters are not transferable. We show that if two graphs discretize the same “continuous” space, then a spectral filter or ConvNet has approximately the same repercussion on both graphs. Our analysis is more permissive than the standard analysis. Transferability is typically described as the robustness of the filter to small graph perturbations and re-indexing of the vertices. Our analysis accounts also for large graph perturbations. We prove transferability between graphs that can have completely different dimensions and topologies, only requiring that both graphs discretize the same underlying space in some generic sense. |
Tasks | |
Published | 2019-07-30 |
URL | https://arxiv.org/abs/1907.12972v2 |
PDF | https://arxiv.org/pdf/1907.12972v2.pdf |
PWC | https://paperswithcode.com/paper/transferability-of-spectral-graph |
Repo | |
Framework | |
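The transferability claim can be checked numerically in the simplest setting of two graphs discretising the same space: two cycle graphs of different sizes, both approximating the circle. A sketch under that assumption (the $1/h^2$ scaling makes both graph spectra approximate the continuous Laplacian):

```python
import numpy as np

def laplacian_circle(n):
    """Graph Laplacian of an n-node cycle, scaled by 1/h^2 (h = 2*pi/n) so
    that its spectrum approximates the continuous Laplacian on the circle."""
    A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
    h = 2 * np.pi / n
    return (2 * np.eye(n) - A) / h**2

def spectral_filter(L, x, g):
    """Apply the filter g elementwise in the graph frequency domain."""
    lam, U = np.linalg.eigh(L)
    return U @ (g(lam) * (U.T @ x))

g = lambda lam: np.exp(-0.1 * lam)            # a smooth low-pass filter
for n in (100, 400):                          # two discretisations, same circle
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    y = spectral_filter(laplacian_circle(n), np.sin(3 * t), g)
    print(n, np.abs(y).max())                 # both ~0.41: near-identical effect
```

Despite the two graphs having completely different sizes, the same filter attenuates the third harmonic almost identically on both, which is the "same repercussion" the abstract refers to.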
3D Dynamic Point Cloud Denoising via Spatio-temporal Graph Modeling
Title | 3D Dynamic Point Cloud Denoising via Spatio-temporal Graph Modeling |
Authors | Qianjiang Hu, Zehua Wang, Wei Hu, Xiang Gao, Zongming Guo |
Abstract | The prevalence of accessible depth sensing and 3D laser scanning techniques has enabled the convenient acquisition of 3D dynamic point clouds, which provide efficient representation of arbitrarily-shaped objects in motion. Nevertheless, dynamic point clouds are often perturbed by noise due to hardware, software or other causes. While many methods have been proposed for the denoising of static point clouds, dynamic point cloud denoising has not been studied in the literature yet. Hence, we address this problem based on the proposed spatio-temporal graph modeling, exploiting both the intra-frame similarity and inter-frame consistency. Specifically, we first represent a point cloud sequence on graphs and model it via spatio-temporal Gaussian Markov Random Fields on defined patches. Then for each target patch, we pose a Maximum a Posteriori estimation, and propose the corresponding likelihood and prior functions via spectral graph theory, leveraging its similar patches within the same frame and corresponding patch in the previous frame. This leads to our problem formulation, which jointly optimizes the underlying dynamic point cloud and spatio-temporal graph. Finally, we propose an efficient algorithm for patch construction, similar/corresponding patch search, intra- and inter-frame graph construction, and the optimization of our problem formulation via alternating minimization. Experimental results show that the proposed method outperforms frame-by-frame denoising from state-of-the-art static point cloud denoising approaches. |
Tasks | Denoising, graph construction |
Published | 2019-04-28 |
URL | http://arxiv.org/abs/1904.12284v1 |
PDF | http://arxiv.org/pdf/1904.12284v1.pdf |
PWC | https://paperswithcode.com/paper/3d-dynamic-point-cloud-denoising-via-spatio |
Repo | |
Framework | |
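As a simplified, single-frame illustration of the MAP step with a graph-Laplacian (GMRF) prior; the paper's full method additionally couples corresponding patches across frames, which is omitted here:

```python
import numpy as np

def knn_laplacian(points, k=6):
    """Combinatorial Laplacian of a k-NN graph over one patch of 3D points."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    A = np.zeros_like(d)
    for i, nb in enumerate(np.argsort(d, axis=1)[:, 1:k + 1]):
        A[i, nb] = A[nb, i] = 1.0
    return np.diag(A.sum(1)) - A

def map_denoise(noisy, L, gamma=0.5):
    """argmin_x ||x - noisy||^2 + gamma * x^T L x  (GMRF prior), closed form."""
    return np.linalg.solve(np.eye(len(L)) + gamma * L, noisy)

# Alternating minimisation over graph and coordinates, intra-frame only:
# pts = noisy_patch.copy()                      # (n, 3) array of points
# for _ in range(3):
#     L = knn_laplacian(pts)
#     pts = np.column_stack([map_denoise(pts[:, c], L) for c in range(3)])
```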
Empirical Likelihood Under Mis-specification: Degeneracies and Random Critical Points
Title | Empirical Likelihood Under Mis-specification: Degeneracies and Random Critical Points |
Authors | Subhro Ghosh, Sanjay Chaudhuri |
Abstract | We investigate empirical likelihood obtained from mis-specified (i.e. biased) estimating equations. We establish that the behaviour of the optimal weights under mis-specification differs markedly from their properties under the null, i.e. when the estimating equations are unbiased and correctly specified. This is manifested by certain “degeneracies” in the optimal weights which define the likelihood. Such degeneracies in weights are not observed under the null. Furthermore, we establish an anomalous behaviour of the Wilks statistic, which, unlike under correct specification, does not exhibit a chi-squared limit. In the Bayesian setting, we rigorously establish the posterior consistency of so-called BayesEL procedures, where instead of a parametric likelihood, an empirical likelihood is used to define the posterior. In particular, we show that the BayesEL posterior, as a random probability measure, rapidly converges to the delta measure at the true parameter value. A novel feature of our approach is the investigation of critical points of random functions in the context of empirical likelihood. In particular, we obtain the location and the mass of the degenerate optimal weights as the leading and sub-leading terms in a canonical expansion of a particular critical point of a random function that is naturally associated with the model. |
Tasks | |
Published | 2019-10-03 |
URL | https://arxiv.org/abs/1910.01396v1 |
PDF | https://arxiv.org/pdf/1910.01396v1.pdf |
PWC | https://paperswithcode.com/paper/empirical-likelihood-under-mis-specification |
Repo | |
Framework | |
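To see the degeneracy numerically, the weights for a single scalar estimating function can be computed directly; shifting the estimating function away from mean zero (mis-specification) visibly skews them. A sketch:

```python
import numpy as np
from scipy.optimize import brentq

def el_weights(g):
    """Empirical-likelihood weights for one scalar estimating function:
    maximise sum_i log(w_i) subject to sum w_i = 1 and sum w_i g_i = 0.
    The solution is w_i = 1 / (n (1 + lam g_i)), with lam the root below.
    Requires g to take both signs (0 inside the convex hull of the g_i)."""
    n = len(g)
    lo = (-1 + 1e-9) / g.max()      # keep 1 + lam * g_i > 0 for every i
    hi = (-1 + 1e-9) / g.min()
    lam = brentq(lambda l: np.sum(g / (1 + l * g)), lo, hi)
    return 1.0 / (n * (1 + lam * g))

# w_ok  = el_weights(np.random.default_rng(0).normal(size=200))        # ~1/n each
# w_bad = el_weights(np.random.default_rng(0).normal(size=200) + 0.8)  # skewed
```

Under correct specification the weights stay close to the uniform $1/n$; under the shift they concentrate on the few observations that can still satisfy the constraint, the degeneracy the paper analyses.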
Distanced LSTM: Time-Distanced Gates in Long Short-Term Memory Models for Lung Cancer Detection
Title | Distanced LSTM: Time-Distanced Gates in Long Short-Term Memory Models for Lung Cancer Detection |
Authors | Riqiang Gao, Yuankai Huo, Shunxing Bao, Yucheng Tang, Sanja L. Antic, Emily S. Epstein, Aneri B. Balar, Steve Deppen, Alexis B. Paulson, Kim L. Sandler, Pierre P. Massion, Bennett A. Landman |
Abstract | The field of lung nodule detection and cancer prediction has been rapidly developing with the support of large public data archives. Previous studies have largely focused on cross-sectional (single) CT data. Herein, we consider longitudinal data. The Long Short-Term Memory (LSTM) model addresses learning with regularly spaced time points (i.e., equal temporal intervals). However, clinical imaging follows patient needs with often heterogeneous, irregular acquisitions. To model both regular and irregular longitudinal samples, we generalize the LSTM model with the Distanced LSTM (DLSTM) for temporally varied acquisitions. The DLSTM includes a Temporal Emphasis Model (TEM) that enables learning across regularly and irregularly sampled intervals. Briefly, (1) the time intervals between longitudinal scans are modeled explicitly, (2) temporally adjustable forget and input gates are introduced for irregular temporal sampling, and (3) the latest longitudinal scan has an additional emphasis term. We evaluate the DLSTM framework on three datasets: simulated data, 1794 National Lung Screening Trial (NLST) scans, and 1420 clinically acquired scans with heterogeneous and irregular temporal accession. The experiments on the first two datasets demonstrate that our method achieves competitive performance on both simulated and regularly sampled datasets (e.g., improving the LSTM F1 score on NLST from 0.6785 to 0.7085). In external validation on the clinically and irregularly acquired data, the benchmarks achieved 0.8350 (CNN feature) and 0.8380 (LSTM) area under the ROC curve (AUC), while the proposed DLSTM achieves 0.8905. |
Tasks | Lung Nodule Detection |
Published | 2019-09-11 |
URL | https://arxiv.org/abs/1909.05321v1 |
PDF | https://arxiv.org/pdf/1909.05321v1.pdf |
PWC | https://paperswithcode.com/paper/distanced-lstm-time-distanced-gates-in-long |
Repo | |
Framework | |
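One plausible minimal form of a time-distanced gate, written as a wrapper around a standard LSTM cell; this is an illustrative reconstruction, and the paper's exact Temporal Emphasis Model may differ:

```python
import torch
import torch.nn as nn

class DLSTMCell(nn.Module):
    """Sketch of an LSTM cell whose carried memory decays with the time gap
    between consecutive scans, so irregular sampling is handled explicitly."""
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.cell = nn.LSTMCell(d_in, d_hid)
        self.decay = nn.Parameter(torch.tensor(0.1))   # learnable decay rate

    def forward(self, x, dt, state):
        h, c = state
        # Shrink the cell memory in proportion to the elapsed time dt,
        # down-weighting scans that are far in the past.
        c = c * torch.exp(-torch.relu(self.decay) * dt)
        return self.cell(x, (h, c))

# h, c = torch.zeros(1, 32), torch.zeros(1, 32)
# cell = DLSTMCell(8, 32)
# for x_t, dt_t in scans:            # dt_t: time since the previous scan
#     h, c = cell(x_t, dt_t, (h, c))
```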
Acoustic Modeling for Automatic Lyrics-to-Audio Alignment
Title | Acoustic Modeling for Automatic Lyrics-to-Audio Alignment |
Authors | Chitralekha Gupta, Emre Yılmaz, Haizhou Li |
Abstract | Automatic alignment of lyrics to polyphonic audio is a challenging task, not only because the vocals are corrupted by background music, but also because there is a lack of annotated polyphonic corpora for effective acoustic modeling. In this work, we propose (1) using additional speech- and music-informed features and (2) adapting the acoustic models trained on a large amount of solo singing vocals towards polyphonic music using a small amount of in-domain data. Incorporating additional information such as voicing and auditory features together with conventional acoustic features aims to bring robustness against the increased spectro-temporal variations in singing vocals. By adapting the acoustic model using a small amount of polyphonic audio data, we reduce the domain mismatch between training and testing data. We perform several alignment experiments and present an in-depth error analysis of the acoustic features and model adaptation techniques. The results demonstrate that the proposed strategy provides a significant reduction in word boundary alignment error over comparable existing systems, especially on more challenging polyphonic data with long-duration musical interludes. |
Tasks | |
Published | 2019-06-25 |
URL | https://arxiv.org/abs/1906.10369v1 |
PDF | https://arxiv.org/pdf/1906.10369v1.pdf |
PWC | https://paperswithcode.com/paper/acoustic-modeling-for-automatic-lyrics-to |
Repo | |
Framework | |
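The feature-side idea (conventional acoustic features augmented with voicing information) can be sketched with off-the-shelf tools; the concrete feature set below is a stand-in chosen so the example runs with librosa, not the paper's exact features:

```python
import numpy as np
import librosa

def informed_features(wav, sr=16000):
    """Stack MFCCs with a simple voicing feature (f0 plus a voiced flag).
    Both default to hop_length=512, so the frame counts line up."""
    mfcc = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=13)
    f0, voiced, _ = librosa.pyin(wav, fmin=65, fmax=1000, sr=sr)
    f0 = np.nan_to_num(f0)                       # unvoiced frames -> 0
    return np.vstack([mfcc, f0[None, :], voiced[None, :].astype(float)])
```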
GELP: GAN-Excited Linear Prediction for Speech Synthesis from Mel-spectrogram
Title | GELP: GAN-Excited Linear Prediction for Speech Synthesis from Mel-spectrogram |
Authors | Lauri Juvela, Bajibabu Bollepalli, Junichi Yamagishi, Paavo Alku |
Abstract | Recent advances in neural network-based text-to-speech have reached human-level naturalness in synthetic speech. The present sequence-to-sequence models can directly map text to mel-spectrogram acoustic features, which are convenient for modeling, but present additional challenges for vocoding (i.e., waveform generation from the acoustic features). High-quality synthesis can be achieved with neural vocoders, such as WaveNet, but such autoregressive models suffer from slow sequential inference. Meanwhile, their existing parallel inference counterparts are difficult to train and require increasingly large model sizes. In this paper, we propose an alternative training strategy for a parallel neural vocoder utilizing generative adversarial networks, and integrate a linear predictive synthesis filter into the model. Results show that the proposed model achieves significant improvement in inference speed, while outperforming a WaveNet in copy-synthesis quality. |
Tasks | Speech Synthesis |
Published | 2019-04-08 |
URL | https://arxiv.org/abs/1904.03976v3 |
PDF | https://arxiv.org/pdf/1904.03976v3.pdf |
PWC | https://paperswithcode.com/paper/gelp-gan-excited-liner-prediction-for-speech |
Repo | |
Framework | |
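The division of labour in GELP (the GAN models the spectrally flat excitation, while a linear-prediction filter restores the envelope) comes down to an all-pole filtering step. A toy sketch with stand-in LP coefficients; in the paper they would be derived from the mel-spectrogram:

```python
import numpy as np
from scipy.signal import lfilter

def lp_synthesis(excitation, a):
    """All-pole synthesis filter 1/A(z): shapes the flat excitation with the
    spectral envelope. a = [1, a_1, ..., a_p] are the LP coefficients."""
    return lfilter([1.0], a, excitation)

# Toy usage: white noise standing in for the GAN's excitation output,
# and stable 2nd-order LP coefficients standing in for the envelope.
e = np.random.randn(16000)
speech = lp_synthesis(e, np.array([1.0, -0.9, 0.2]))
```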
Requirements for Developing Robust Neural Networks
Title | Requirements for Developing Robust Neural Networks |
Authors | John S. Hyatt, Michael S. Lee |
Abstract | Validation accuracy is a necessary, but not sufficient, measure of a neural network classifier’s quality. High validation accuracy during development does not guarantee that a model is free of serious flaws, such as vulnerability to adversarial attacks or a tendency to misclassify (with high confidence) data it was not trained on. The model may also be incomprehensible to a human or base its decisions on unreasonable criteria. These problems, which are not unique to classifiers, have been the focus of a substantial amount of recent research. However, they are not prioritized during model development, which almost always optimizes on validation accuracy to the exclusion of everything else. The product of this approach is likely to fail in unexpected ways outside of the training environment. We believe that, in addition to validation accuracy, the model development process must give added weight to other performance metrics such as explainability, resistance to adversarial attacks, and overconfidence on out-of-distribution data. |
Tasks | |
Published | 2019-10-04 |
URL | https://arxiv.org/abs/1910.02125v1 |
PDF | https://arxiv.org/pdf/1910.02125v1.pdf |
PWC | https://paperswithcode.com/paper/requirements-for-developing-robust-neural |
Repo | |
Framework | |
Automatic Game Design via Mechanic Generation
Title | Automatic Game Design via Mechanic Generation |
Authors | Alexander Zook, Mark O. Riedl |
Abstract | Game designs often center on the game mechanics—rules governing the logical evolution of the game. We seek to develop an intelligent system that generates computer games. As first steps towards this goal we present a composable and cross-domain representation for game mechanics that draws from AI planning action representations. We use a constraint solver to generate mechanics subject to design requirements on the form of those mechanics—what they do in the game. A planner takes a set of generated mechanics and tests whether those mechanics meet playability requirements—controlling how mechanics function in a game to affect player behavior. We demonstrate our system by modeling and generating mechanics in a role-playing game, platformer game, and combined role-playing-platformer game. |
Tasks | |
Published | 2019-08-04 |
URL | https://arxiv.org/abs/1908.01420v1 |
PDF | https://arxiv.org/pdf/1908.01420v1.pdf |
PWC | https://paperswithcode.com/paper/automatic-game-design-via-mechanic-generation |
Repo | |
Framework | |
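A heavily simplified rendition of the generate-then-test loop, with brute-force enumeration standing in for the paper's constraint solver and breadth-first search as the playability planner; the state variables and bounds are invented for the example:

```python
from itertools import product

def generate_mechanics(n_stats, deltas):
    """Enumerate candidate mechanics as (stat index, delta) pairs; the paper
    uses a constraint solver here, brute force suffices to illustrate."""
    return [(s, d) for s, d in product(range(n_stats), deltas) if d != 0]

def playable(mechanics, start, goal, max_depth=8):
    """Playability test: a breadth-first planner must reach `goal`."""
    frontier, seen = {start}, {start}
    for _ in range(max_depth):
        frontier = {tuple(v + d if i == s else v for i, v in enumerate(st))
                    for st in frontier for s, d in mechanics} - seen
        if goal in frontier:
            return True
        seen |= frontier
    return False

# mechanics = generate_mechanics(n_stats=2, deltas=(-1, 1, 2))
# playable(mechanics, start=(0, 0), goal=(3, 1))   # -> True
```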
Statistically Significant Discriminative Patterns Searching
Title | Statistically Significant Discriminative Patterns Searching |
Authors | Hoang Son Pham, Gwendal Virlet, Dominique Lavenier, Alexandre Termier |
Abstract | Discriminative pattern mining is an essential data mining task. It aims to discover patterns that occur more frequently in one class than in the other classes of a class-labeled dataset. Such patterns are valuable in various domains such as bioinformatics and data classification. In this paper, we propose a novel algorithm, named SSDPS, to discover patterns in two-class datasets. The SSDPS algorithm owes its efficiency to an original enumeration strategy, which makes it possible to exploit some degree of anti-monotonicity on the measures of discriminance and statistical significance. Experimental results demonstrate that the performance of the SSDPS algorithm is better than that of existing algorithms, and that it generates far fewer patterns. Experiments on real data also show that SSDPS efficiently detects multiple-SNP combinations in genetic data. |
Tasks | |
Published | 2019-06-02 |
URL | https://arxiv.org/abs/1906.01581v1 |
PDF | https://arxiv.org/pdf/1906.01581v1.pdf |
PWC | https://paperswithcode.com/paper/statistically-significant-discriminative |
Repo | |
Framework | |
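To ground the discriminance measure, here is a brute-force sketch that scores every small itemset by its odds ratio between the two classes; SSDPS's actual contribution, the enumeration strategy and the statistical-significance filtering, is not attempted here:

```python
from itertools import combinations

def discriminative_patterns(pos, neg, items, max_len=3, min_odds=2.0):
    """Score every itemset up to max_len by its (continuity-corrected) odds
    ratio between the positive and negative class. Transactions are sets."""
    def support(tx, pat):
        return sum(pat <= t for t in tx)          # pat is a subset of t
    out = []
    for k in range(1, max_len + 1):
        for pat in map(frozenset, combinations(items, k)):
            a, b = support(pos, pat), support(neg, pat)
            c, d = len(pos) - a, len(neg) - b
            odds = ((a + .5) * (d + .5)) / ((b + .5) * (c + .5))
            if odds >= min_odds:
                out.append((set(pat), round(odds, 2)))
    return sorted(out, key=lambda p: -p[1])

# pos = [{"a", "b"}, {"a", "b", "c"}]; neg = [{"b"}, {"c"}]
# discriminative_patterns(pos, neg, items=["a", "b", "c"])
```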
Motivo: fast motif counting via succinct color coding and adaptive sampling
Title | Motivo: fast motif counting via succinct color coding and adaptive sampling |
Authors | Marco Bressan, Stefano Leucci, Alessandro Panconesi |
Abstract | The randomized technique of color coding is behind state-of-the-art algorithms for estimating graph motif counts. Those algorithms, however, are not yet capable of scaling well to very large graphs with billions of edges. In this paper we develop novel tools for the ‘motif counting via color coding’ framework. As a result, our new algorithm, Motivo, is able to scale to larger graphs while at the same time providing more accurate graphlet counts than ever before. This is achieved thanks to two types of improvements. First, we design new succinct data structures that support fast common color coding operations, and a biased coloring trick that trades accuracy against running time and memory usage. These adaptations drastically reduce the time and memory requirements of color coding. Second, we develop an adaptive graphlet sampling strategy, based on a fractional set cover problem, that breaks the additive approximation barrier of standard sampling. This strategy gives multiplicative approximations for all graphlets at once, allowing us to count not only the most frequent graphlets but also extremely rare ones. To give an idea of the improvements: in $40$ minutes Motivo counts $7$-node motifs on a graph with $65$M nodes and $1.8$B edges, which is $30$ times larger than the state of the art in terms of nodes and $500$ times larger in terms of edges. On the accuracy side, in one hour Motivo produces accurate counts of $\approx\! 10{,}000$ distinct $8$-node motifs on graphs where state-of-the-art algorithms fail even to find the second most frequent motif. Our method requires just a high-end desktop machine. These results show how color coding can bring motif mining to the realm of truly massive graphs using only ordinary hardware. |
Tasks | |
Published | 2019-06-04 |
URL | https://arxiv.org/abs/1906.01599v1 |
PDF | https://arxiv.org/pdf/1906.01599v1.pdf |
PWC | https://paperswithcode.com/paper/motivo-fast-motif-counting-via-succinct-color |
Repo | |
Framework | |
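The color-coding primitive at the core of the framework, reduced to its simplest instance: estimating the number of simple $k$-vertex paths. Motivo's contribution is making the DP table succinct and the sampling adaptive; this sketch shows only the vanilla technique:

```python
import math
import random

def count_paths(adj, k, trials=100, rng=random.Random(0)):
    """Estimate the number of simple k-vertex paths via color coding:
    color vertices uniformly with k colors, count 'colorful' paths exactly
    with a DP over color subsets, then rescale by P(colorful) = k!/k^k."""
    total = 0.0
    for _ in range(trials):
        color = {v: rng.randrange(k) for v in adj}
        dp = {v: {1 << color[v]: 1} for v in adj}    # single-vertex paths
        for _ in range(k - 1):
            new = {v: {} for v in adj}
            for u in adj:
                for v in adj[u]:
                    bit = 1 << color[v]
                    for S, c in dp[u].items():
                        if not S & bit:              # colors must stay distinct
                            new[v][S | bit] = new[v].get(S | bit, 0) + c
            dp = new
        # every undirected path is found once from each endpoint
        total += sum(c for d in dp.values() for c in d.values()) / 2
    return total / (trials * math.factorial(k) / k**k)

# adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # 4-node path graph
# count_paths(adj, 3)   # unbiased estimate of 2 (paths 0-1-2 and 1-2-3)
```

Because distinct colors force distinct vertices, the DP counts simple paths exactly on each trial; the rescaling by $k!/k^k$ makes the average across trials an unbiased estimate of the true count.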