Paper Group ANR 286
Learning for Biomedical Information Extraction: Methodological Review of Recent Advances. How Deep Neural Networks Can Improve Emotion Recognition on Video Data. On the Application of Support Vector Machines to the Prediction of Propagation Losses at 169 MHz for Smart Metering Applications. Understanding and Improving Convolutional Neural Networks …
Learning for Biomedical Information Extraction: Methodological Review of Recent Advances
Title | Learning for Biomedical Information Extraction: Methodological Review of Recent Advances |
Authors | Feifan Liu, Jinying Chen, Abhyuday Jagannatha, Hong Yu |
Abstract | Biomedical information extraction (BioIE) is important to many applications, including clinical decision support, integrative biology, and pharmacovigilance, and therefore it has been an active research area. Unlike existing reviews that take a holistic view of BioIE, this review focuses mainly on recent advances in learning-based approaches, systematically summarizing them across different aspects of methodological development. In addition, we dive into open information extraction and deep learning, two emerging and influential techniques, and envision the next generation of BioIE. |
Tasks | Open Information Extraction |
Published | 2016-06-26 |
URL | http://arxiv.org/abs/1606.07993v1 |
http://arxiv.org/pdf/1606.07993v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-for-biomedical-information |
Repo | |
Framework | |
How Deep Neural Networks Can Improve Emotion Recognition on Video Data
Title | How Deep Neural Networks Can Improve Emotion Recognition on Video Data |
Authors | Pooya Khorrami, Tom Le Paine, Kevin Brady, Charlie Dagli, Thomas S. Huang |
Abstract | We consider the task of dimensional emotion recognition on video data using deep learning. While several previous methods have shown the benefits of training temporal neural network models such as recurrent neural networks (RNNs) on hand-crafted features, few works have considered combining convolutional neural networks (CNNs) with RNNs. In this work, we present a system that performs emotion recognition on video data using both CNNs and RNNs, and we also analyze how much each neural network component contributes to the system’s overall performance. We present our findings on videos from the Audio/Visual+Emotion Challenge (AV+EC2015). In our experiments, we analyze the effects of several hyperparameters on overall performance while also achieving superior performance to the baseline and other competing methods. |
Tasks | Emotion Recognition |
Published | 2016-02-24 |
URL | http://arxiv.org/abs/1602.07377v5 |
http://arxiv.org/pdf/1602.07377v5.pdf | |
PWC | https://paperswithcode.com/paper/how-deep-neural-networks-can-improve-emotion |
Repo | |
Framework | |
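To make the CNN-plus-RNN pipeline concrete, here is a minimal PyTorch sketch of the architecture family the abstract describes: a per-frame CNN feature extractor followed by a recurrent network over time. The layer sizes, the LSTM choice, and the single regression output (e.g. valence) are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a CNN -> RNN pipeline for frame-level emotion regression.
# Layer sizes and the single-output head are illustrative assumptions.
import torch
import torch.nn as nn

class CnnRnnEmotion(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Per-frame CNN feature extractor, applied to every video frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # The RNN models temporal dynamics over the per-frame features.
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # one dimensional-affect output

    def forward(self, frames):             # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out).squeeze(-1)  # per-frame prediction: (batch, time)

model = CnnRnnEmotion()
pred = model(torch.randn(2, 10, 3, 64, 64))  # 2 clips, 10 frames each
```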
On the Application of Support Vector Machines to the Prediction of Propagation Losses at 169 MHz for Smart Metering Applications
Title | On the Application of Support Vector Machines to the Prediction of Propagation Losses at 169 MHz for Smart Metering Applications |
Authors | Martino Uccellari, Francesca Facchini, Matteo Sola, Emilio Sirignano, Giorgio M. Vitetta, Andrea Barbieri, Stefano Tondelli |
Abstract | Recently, the need to deploy new wireless networks for smart gas metering has raised the problem of radio planning in the 169 MHz band. Unfortunately, software tools commonly adopted for radio planning in cellular communication systems cannot be employed to solve this problem because of the substantially lower transmission frequencies characterizing this application. In this manuscript, a novel data-centric solution, based on the use of support vector machine techniques for classification and regression, is proposed. Our method requires the availability of a limited set of received signal strength measurements and the knowledge of a three-dimensional map of the propagation environment of interest, and generates both an estimate of the coverage area and a prediction of the field strength within it. Numerical results referring to different Italian villages and cities show that our method is able to achieve good accuracy at the price of an acceptable computational cost and of a limited effort for the acquisition of measurements in the considered environments. |
Tasks | |
Published | 2016-07-18 |
URL | http://arxiv.org/abs/1607.05154v1 |
http://arxiv.org/pdf/1607.05154v1.pdf | |
PWC | https://paperswithcode.com/paper/on-the-application-of-support-vector-machines |
Repo | |
Framework | |
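As a hedged illustration of the two-stage approach the abstract outlines, the scikit-learn sketch below trains an SVM classifier to estimate coverage and an SVM regressor to predict field strength inside the covered area. The synthetic features and the -95 dBm threshold are stand-ins for the map- and measurement-derived inputs the paper uses.

```python
# Two-stage SVM sketch: SVC estimates coverage, SVR predicts field strength
# within the covered area. Features and thresholds are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC, SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))             # e.g. distance + terrain/clutter features
rss = -80 + 10 * X[:, 0] + rng.normal(scale=3, size=500)  # synthetic RSS (dBm)
covered = rss > -95                       # coverage label from a sensitivity threshold

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X, covered)
reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X[covered], rss[covered])

X_new = rng.normal(size=(5, 6))
in_coverage = clf.predict(X_new)                           # coverage estimate
field = np.where(in_coverage, reg.predict(X_new), np.nan)  # dBm where covered
```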
Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units
Title | Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units |
Authors | Wenling Shang, Kihyuk Sohn, Diogo Almeida, Honglak Lee |
Abstract | Recently, convolutional neural networks (CNNs) have been used as a powerful tool to solve many problems in machine learning and computer vision. In this paper, we aim to provide insight into the properties of convolutional neural networks, as well as a generic method to improve the performance of many CNN architectures. Specifically, we first examine existing CNN models and observe an intriguing property: the filters in the lower layers form pairs (i.e., filters with opposite phase). Inspired by this observation, we propose a novel, simple yet effective activation scheme called concatenated ReLU (CReLU) and theoretically analyze its reconstruction property in CNNs. We integrate CReLU into several state-of-the-art CNN architectures and demonstrate improvements in their recognition performance on the CIFAR-10/100 and ImageNet datasets with fewer trainable parameters. Our results suggest that a better understanding of the properties of CNNs can lead to significant performance improvements from a simple modification. |
Tasks | |
Published | 2016-03-16 |
URL | http://arxiv.org/abs/1603.05201v2 |
http://arxiv.org/pdf/1603.05201v2.pdf | |
PWC | https://paperswithcode.com/paper/understanding-and-improving-convolutional |
Repo | |
Framework | |
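The CReLU activation itself is simple enough to state in a few lines. The sketch below follows the definition the paper gives: concatenate ReLU applied to the positive and negated pre-activations, which doubles the channel count. The surrounding convolution layers are illustrative.

```python
# CReLU: concatenate ReLU of the positive and negated pre-activations,
# doubling the channel count, so opposite-phase filter pairs are captured
# by a single filter.
import torch
import torch.nn as nn

class CReLU(nn.Module):
    def forward(self, x):  # x: (batch, C, H, W)
        return torch.cat([torch.relu(x), torch.relu(-x)], dim=1)  # -> 2C channels

# Drop-in use: the following conv must expect twice the input channels.
layer = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    CReLU(),                          # 16 channels in, 32 channels out
    nn.Conv2d(32, 16, 3, padding=1),
)
y = layer(torch.randn(1, 3, 8, 8))
```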
Learning Optimal Interventions
Title | Learning Optimal Interventions |
Authors | Jonas Mueller, David N. Reshef, George Du, Tommi Jaakkola |
Abstract | Our goal is to identify beneficial interventions from observational data. We consider interventions that are narrowly focused (impacting few covariates) and may be tailored to each individual or globally enacted over a population. For applications where harmful intervention is drastically worse than proposing no change, we propose a conservative definition of the optimal intervention. Assuming the underlying relationship remains invariant under intervention, we develop efficient algorithms to identify the optimal intervention policy from limited data and provide theoretical guarantees for our approach in a Gaussian Process setting. Although our methods assume covariates can be precisely adjusted, they remain capable of improving outcomes in misspecified settings where interventions incur unintentional downstream effects. Empirically, our approach identifies good interventions in two practical applications: gene perturbation and writing improvement. |
Tasks | |
Published | 2016-06-16 |
URL | http://arxiv.org/abs/1606.05027v2 |
http://arxiv.org/pdf/1606.05027v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-optimal-interventions |
Repo | |
Framework | |
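One way to make the "conservative optimal intervention" idea tangible in the Gaussian Process setting is a lower-confidence-bound rule: prefer the covariate change whose predicted outcome is good even under posterior uncertainty, so harmful interventions are penalized. The sketch below is an illustrative stand-in under that reading, not the paper's exact objective.

```python
# Conservative intervention selection sketch: fit a GP to observed
# (covariate, outcome) data and pick the candidate intervention that
# maximizes a lower confidence bound of the predicted outcome.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(40, 1))                   # observed covariate
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=40)   # observed outcome

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2).fit(X, y)

candidates = np.linspace(-2, 2, 201).reshape(-1, 1)    # candidate interventions
mean, std = gp.predict(candidates, return_std=True)
best = candidates[np.argmax(mean - 2.0 * std)]         # conservative (LCB) choice
```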
Integrating Know-How into the Linked Data Cloud
Title | Integrating Know-How into the Linked Data Cloud |
Authors | Paolo Pareti, Benoit Testu, Ryutaro Ichise, Ewan Klein, Adam Barker |
Abstract | This paper presents the first framework for integrating procedural knowledge, or “know-how”, into the Linked Data Cloud. Know-how available on the Web, such as step-by-step instructions, is largely unstructured and isolated from other sources of online knowledge. To overcome these limitations, we propose extending to procedural knowledge the benefits that Linked Data has already brought to representing, retrieving and reusing declarative knowledge. We describe a framework for representing generic know-how as Linked Data and for automatically acquiring this representation from existing resources on the Web. This system also allows the automatic generation of links between different know-how resources, and between those resources and other online knowledge bases, such as DBpedia. We discuss the results of applying this framework to a real-world scenario and we show how it outperforms existing manual community-driven integration efforts. |
Tasks | |
Published | 2016-04-15 |
URL | http://arxiv.org/abs/1604.04506v1 |
http://arxiv.org/pdf/1604.04506v1.pdf | |
PWC | https://paperswithcode.com/paper/integrating-know-how-into-the-linked-data |
Repo | |
Framework | |
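To illustrate what "know-how as Linked Data" looks like in practice, here is a small rdflib sketch of a procedure represented as RDF, with a link out to DBpedia as the abstract describes. The `ex:` vocabulary (`Task`, `hasStep`, `requires`) is a hypothetical stand-in, not the framework's actual schema.

```python
# Illustrative RDF encoding of a step-by-step procedure as Linked Data.
# The ex: vocabulary is hypothetical; the DBpedia URI mirrors the
# cross-resource linking described in the abstract.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/knowhow/")
g = Graph()
g.bind("ex", EX)

task = EX["MakeCoffee"]
step = EX["MakeCoffee/step1"]
g.add((task, RDF.type, EX.Task))
g.add((task, EX.hasStep, step))
g.add((step, RDFS.label, Literal("Grind the coffee beans")))
g.add((step, EX.requires, URIRef("http://dbpedia.org/resource/Coffee_bean")))

print(g.serialize(format="turtle"))
```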
A tentative model for dimensionless phoneme distance from binary distinctive features
Title | A tentative model for dimensionless phoneme distance from binary distinctive features |
Authors | Tiago Tresoldi |
Abstract | This work proposes a tentative model for the calculation of dimensionless distances between phonemes; sounds are described with binary distinctive features and distances show linear consistency in terms of such features. The model can be used as a scoring function for local and global pairwise alignment of phoneme sequences, and the distances can be used as prior probabilities for Bayesian analyses on the phylogenetic relationship between languages, particularly for cognate identification in cases where no empirical prior probability is available. |
Tasks | |
Published | 2016-10-05 |
URL | http://arxiv.org/abs/1610.01486v2 |
http://arxiv.org/pdf/1610.01486v2.pdf | |
PWC | https://paperswithcode.com/paper/a-tentative-model-for-dimensionless-phoneme |
Repo | |
Framework | |
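A minimal sketch of the core idea: represent each phoneme as a binary distinctive-feature vector and take the distance to be the fraction of features on which two phonemes differ, which is dimensionless and linear in the features. The tiny feature inventory below is illustrative, not the paper's model.

```python
# Dimensionless phoneme distance from binary distinctive features:
# the fraction of features on which two phonemes differ.
FEATURES = ("voiced", "nasal", "continuant", "labial", "coronal")

PHONEMES = {
    "p": (0, 0, 0, 1, 0),
    "b": (1, 0, 0, 1, 0),
    "m": (1, 1, 0, 1, 0),
    "s": (0, 0, 1, 0, 1),
}

def phoneme_distance(a: str, b: str) -> float:
    """Distance in [0, 1]: fraction of differing distinctive features."""
    fa, fb = PHONEMES[a], PHONEMES[b]
    return sum(x != y for x, y in zip(fa, fb)) / len(FEATURES)

# /p/ and /b/ differ only in voicing, so they are closer than /p/ and /s/.
assert phoneme_distance("p", "b") < phoneme_distance("p", "s")
```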
An ensemble learning method for scene classification based on Hidden Markov Model image representation
Title | An ensemble learning method for scene classification based on Hidden Markov Model image representation |
Authors | Fariborz Taherkhani, Reza Hedayati |
Abstract | Low-level image representations in feature space perform poorly for high-accuracy classification, since this level of representation cannot project images into a discriminative feature space. In this work, we propose an efficient image representation model for classification. First, we apply a Hidden Markov Model (HMM) to ordered grids represented by different types of image descriptors, so as to capture the causality of local properties in an image for feature extraction; we then train a separate classifier for each of these feature sets. Finally, we ensemble these classifiers so that they cancel out each other's errors, obtaining higher accuracy. The method is evaluated on the 15-class natural scene dataset. Experimental results show the superiority of the proposed method in comparison to existing methods. |
Tasks | Scene Classification |
Published | 2016-07-22 |
URL | http://arxiv.org/abs/1607.06794v3 |
http://arxiv.org/pdf/1607.06794v3.pdf | |
PWC | https://paperswithcode.com/paper/an-ensemble-learning-method-for-scene |
Repo | |
Framework | |
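A hedged sketch of that pipeline using hmmlearn: per descriptor type, fit one HMM per scene class on ordered-grid feature sequences, classify by maximum likelihood, and combine the per-descriptor classifiers by majority vote. The data shapes and the simple voting rule are assumptions; the paper's ensembling may differ.

```python
# Per-class HMMs on grid-ordered descriptor sequences, ensembled by vote.
import numpy as np
from hmmlearn import hmm

def fit_class_hmms(seqs_by_class, n_states=3):
    """seqs_by_class: {label: array of shape (n_seqs, seq_len, feat_dim)}."""
    models = {}
    for label, seqs in seqs_by_class.items():
        X = seqs.reshape(-1, seqs.shape[-1])      # stack sequences row-wise
        lengths = [seqs.shape[1]] * seqs.shape[0]
        models[label] = hmm.GaussianHMM(n_components=n_states).fit(X, lengths)
    return models

def classify(models, seq):
    # Pick the class whose HMM assigns the sequence the highest likelihood.
    return max(models, key=lambda lbl: models[lbl].score(seq))

def ensemble_vote(per_descriptor_models, seqs_per_descriptor):
    votes = [classify(m, s) for m, s in zip(per_descriptor_models, seqs_per_descriptor)]
    return max(set(votes), key=votes.count)       # simple majority vote
```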
Geo-distinctive Visual Element Matching for Location Estimation of Images
Title | Geo-distinctive Visual Element Matching for Location Estimation of Images |
Authors | Xinchao Li, Martha A. Larson, Alan Hanjalic |
Abstract | We propose an image representation and matching approach that substantially improves visual-based location estimation for images. The main novelty of the approach, called distinctive visual element matching (DVEM), is its use of representations that are specific to the query image whose location is being predicted. These representations are based on visual element clouds, which robustly capture the connection between the query and visual evidence from candidate locations. We then maximize the influence of visual elements that are geo-distinctive because they do not occur in images taken at many other locations. We carry out experiments and analysis for both geo-constrained and geo-unconstrained location estimation cases using two large-scale, publicly-available datasets: the San Francisco Landmark dataset with $1.06$ million street-view images and the MediaEval ‘15 Placing Task dataset with $5.6$ million geo-tagged images from Flickr. We present examples that illustrate the highly-transparent mechanics of the approach, which are based on common sense observations about the visual patterns in image collections. Our results show that the proposed method delivers a considerable performance improvement compared to the state of the art. |
Tasks | Common Sense Reasoning |
Published | 2016-01-28 |
URL | http://arxiv.org/abs/1601.07884v1 |
http://arxiv.org/pdf/1601.07884v1.pdf | |
PWC | https://paperswithcode.com/paper/geo-distinctive-visual-element-matching-for |
Repo | |
Framework | |
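The geo-distinctiveness weighting can be pictured as an IDF-style score over locations: a visual element that occurs at many candidate locations is down-weighted, while one unique to a place counts heavily. The sketch below mirrors that mechanic but is not the paper's exact formula.

```python
# IDF-style geo-distinctiveness: elements occurring at few locations get
# higher weight. Illustrative, not the paper's exact weighting.
import math

def geo_distinctiveness(element_id, locations_containing, n_locations):
    """Weight is high when the element occurs at few candidate locations."""
    df = len(locations_containing[element_id])  # location frequency
    return math.log(n_locations / (1 + df))

locations_containing = {"arch_42": {"rome", "paris"}, "logo_7": {"rome"}}
w_common = geo_distinctiveness("arch_42", locations_containing, 100)
w_rare = geo_distinctiveness("logo_7", locations_containing, 100)
assert w_rare > w_common  # the location-specific element dominates matching
```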
Natural Language Processing using Hadoop and KOSHIK
Title | Natural Language Processing using Hadoop and KOSHIK |
Authors | Emre Erturk, Hong Shi |
Abstract | Natural language processing, as a data-analytics-related technology, is used widely in many research areas such as artificial intelligence, human language processing, and translation. At present, due to the explosive growth of data, there are many challenges for natural language processing. Hadoop is one of the platforms that can process the large amount of data required for natural language processing. KOSHIK is a natural language processing architecture that utilizes Hadoop and contains language processing components such as Stanford CoreNLP and OpenNLP. This study describes how to build a KOSHIK platform with the relevant tools, and provides the steps to analyze wiki data. Finally, it evaluates and discusses the advantages and disadvantages of the KOSHIK architecture, and gives recommendations on improving the processing performance. |
Tasks | |
Published | 2016-08-15 |
URL | http://arxiv.org/abs/1608.04434v1 |
http://arxiv.org/pdf/1608.04434v1.pdf | |
PWC | https://paperswithcode.com/paper/natural-language-processing-using-hadoop-and |
Repo | |
Framework | |
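For readers unfamiliar with the underlying processing model, here is a minimal Hadoop Streaming word-count pair in Python, illustrating the MapReduce pattern that Hadoop-based NLP stacks such as KOSHIK build on. It is a generic sketch, not KOSHIK's pipeline code; in a real job, mapper and reducer would each live in their own script passed to the streaming jar.

```python
# Minimal Hadoop Streaming word count: mapper emits "token<TAB>1" pairs,
# reducer sums counts per token (streaming input arrives sorted by key).
import sys
from itertools import groupby

def mapper(stream=sys.stdin):
    for line in stream:
        for token in line.strip().split():
            print(f"{token.lower()}\t1")

def reducer(stream=sys.stdin):
    pairs = (line.rstrip("\n").split("\t") for line in stream)
    for token, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{token}\t{sum(int(c) for _, c in group)}")

if __name__ == "__main__":
    (mapper if sys.argv[1:] == ["map"] else reducer)()
```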
Performance Tuning of Hadoop MapReduce: A Noisy Gradient Approach
Title | Performance Tuning of Hadoop MapReduce: A Noisy Gradient Approach |
Authors | Sandeep Kumar, Sindhu Padakandla, Chandrashekar L, Priyank Parihar, K Gopinath, Shalabh Bhatnagar |
Abstract | Hadoop MapReduce is a framework for distributed storage and processing of large datasets that is quite popular in big data analytics. It has various configuration parameters (knobs) which play an important role in deciding the performance, i.e., the execution time, of a given big data processing job. Default values of these parameters do not always result in good performance and hence it is important to tune them. However, there is inherent difficulty in tuning the parameters for two important reasons: firstly, the parameter search space is large, and secondly, there are cross-parameter interactions. Hence, there is a need for a dimensionality-free method which can automatically tune the configuration parameters by taking into account the cross-parameter dependencies. In this paper, we propose a novel Hadoop parameter tuning methodology based on a noisy gradient algorithm known as simultaneous perturbation stochastic approximation (SPSA). The SPSA algorithm tunes the parameters by directly observing the performance of the Hadoop MapReduce system. The approach is independent of the parameter dimension and requires only $2$ observations per iteration while tuning. We demonstrate the effectiveness of our methodology in achieving good performance on the popular Hadoop benchmarks Grep, Bigram, Inverted Index, Word Co-occurrence and Terasort. Our method, when tested on a 25-node Hadoop cluster, shows a 66% decrease in the execution time of Hadoop jobs on average when compared to the default configuration. Further, we observe a 45% reduction in execution time when compared to prior methods. |
Tasks | |
Published | 2016-11-30 |
URL | http://arxiv.org/abs/1611.10052v2 |
http://arxiv.org/pdf/1611.10052v2.pdf | |
PWC | https://paperswithcode.com/paper/performance-tuning-of-hadoop-mapreduce-a |
Repo | |
Framework | |
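To show why SPSA is "dimensionality-free", here is a hedged sketch: every iteration perturbs all parameters at once with random ±1 signs and forms a gradient estimate from exactly two performance measurements, whatever the dimension. The gain schedules and the synthetic objective (standing in for a Hadoop job's execution time) are illustrative simplifications.

```python
# SPSA sketch: two measurements per iteration regardless of dimension.
import numpy as np

def spsa(measure, theta, n_iter=200, a=0.1, c=0.1, seed=0):
    rng = np.random.default_rng(seed)
    for k in range(1, n_iter + 1):
        ak, ck = a / k, c / k**0.25                  # decaying gain sequences
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher signs
        # Gradient estimate from just two noisy measurements:
        diff = measure(theta + ck * delta) - measure(theta - ck * delta)
        g_hat = diff / (2 * ck) * (1 / delta)
        theta = theta - ak * g_hat                   # descend the noisy gradient
    return theta

# Synthetic "execution time" with noise; minimum at theta = (2, -1, 0.5).
target = np.array([2.0, -1.0, 0.5])
measure = lambda t: np.sum((t - target) ** 2) + np.random.normal(scale=0.01)
print(spsa(measure, theta=np.zeros(3)))
```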
A Tale of Two Bases: Local-Nonlocal Regularization on Image Patches with Convolution Framelets
Title | A Tale of Two Bases: Local-Nonlocal Regularization on Image Patches with Convolution Framelets |
Authors | Rujie Yin, Tingran Gao, Yue M. Lu, Ingrid Daubechies |
Abstract | We propose an image representation scheme combining the local and nonlocal characterization of patches in an image. Our representation scheme can be shown to be equivalent to a tight frame constructed from convolving local bases (e.g. wavelet frames, discrete cosine transforms, etc.) with nonlocal bases (e.g. spectral basis induced by nonlinear dimension reduction on patches), and we call the resulting frame elements convolution framelets. Insight gained from analyzing the proposed representation leads to a novel interpretation of a recent high-performance patch-based image inpainting algorithm using Point Integral Method (PIM) and Low Dimension Manifold Model (LDMM) [Osher, Shi and Zhu, 2016]. In particular, we show that LDMM is a weighted $\ell_2$-regularization on the coefficients obtained by decomposing images into linear combinations of convolution framelets; based on this understanding, we extend the original LDMM to a reweighted version that yields further improved inpainting results. In addition, we establish the energy concentration property of convolution framelet coefficients for the setting where the local basis is constructed from a given nonlocal basis via a linear reconstruction framework; a generalization of this framework to unions of local embeddings can provide a natural setting for interpreting BM3D, one of the state-of-the-art image denoising algorithms. |
Tasks | Denoising, Dimensionality Reduction, Image Denoising, Image Inpainting |
Published | 2016-06-04 |
URL | http://arxiv.org/abs/1606.01377v3 |
http://arxiv.org/pdf/1606.01377v3.pdf | |
PWC | https://paperswithcode.com/paper/a-tale-of-two-bases-local-nonlocal |
Repo | |
Framework | |
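In schematic form, the weighted-$\ell_2$ reading of LDMM described in the abstract can be written as below, where the $\psi_i$ are convolution framelet elements, the $\lambda_i$ are the (re)weights, $\Omega$ is the set of known pixels and $b$ the observed image. The notation is illustrative rather than the paper's exact formulation.

```latex
% Inpainting as a weighted l2 penalty on convolution framelet coefficients,
% constrained to agree with the data on the known pixels (schematic form).
\begin{equation}
\widehat{u} \;=\; \arg\min_{u}\; \sum_{i} \lambda_i\,
  \bigl|\langle u, \psi_i \rangle\bigr|^{2}
\quad \text{subject to} \quad u\big|_{\Omega} = b\big|_{\Omega}.
\end{equation}
```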
An Arabic-Hebrew parallel corpus of TED talks
Title | An Arabic-Hebrew parallel corpus of TED talks |
Authors | Mauro Cettolo |
Abstract | We describe an Arabic-Hebrew parallel corpus of TED talks built upon WIT3, the Web inventory that repurposes the original content of the TED website in a way which is more convenient for MT researchers. The benchmark consists of about 2,000 talks, whose subtitles in Arabic and Hebrew have been accurately aligned and rearranged into sentences, for a total of about 3.5M tokens per language. Talks have been partitioned into train, development and test sets, mirroring in all respects the MT tasks of the IWSLT 2016 evaluation campaign. In addition to describing the benchmark, we list the problems encountered in preparing it and the novel methods designed to solve them. Baseline MT results and some measures on sentence length are provided as an extrinsic evaluation of the quality of the benchmark. |
Tasks | |
Published | 2016-10-03 |
URL | http://arxiv.org/abs/1610.00572v1 |
http://arxiv.org/pdf/1610.00572v1.pdf | |
PWC | https://paperswithcode.com/paper/an-arabic-hebrew-parallel-corpus-of-ted-talks |
Repo | |
Framework | |
A Survey on Social Media Anomaly Detection
Title | A Survey on Social Media Anomaly Detection |
Authors | Rose Yu, Huida Qiu, Zhen Wen, Ching-Yung Lin, Yan Liu |
Abstract | Social media anomaly detection is of critical importance for preventing malicious activities such as bullying, terrorist attack planning, and fraud information dissemination. With the recent popularity of social media, new types of anomalous behaviors have arisen, causing concerns from various parties. While a large amount of work has been dedicated to traditional anomaly detection problems, we observe a surge of research interest in the new realm of social media anomaly detection. In this paper, we present a survey of existing approaches to this problem. We focus on the new types of anomalous phenomena in social media and review recently developed techniques for detecting these special types of anomalies. We provide a general overview of the problem domain, common formulations, existing methodologies and potential directions. With this work, we hope to draw the research community's attention to this challenging problem and open up new directions to which we can contribute in the future. |
Tasks | Anomaly Detection |
Published | 2016-01-06 |
URL | http://arxiv.org/abs/1601.01102v2 |
http://arxiv.org/pdf/1601.01102v2.pdf | |
PWC | https://paperswithcode.com/paper/a-survey-on-social-media-anomaly-detection |
Repo | |
Framework | |
Fast Rates for General Unbounded Loss Functions: from ERM to Generalized Bayes
Title | Fast Rates for General Unbounded Loss Functions: from ERM to Generalized Bayes |
Authors | Peter D. Grünwald, Nishant A. Mehta |
Abstract | We present new excess risk bounds for general unbounded loss functions including log loss and squared loss, where the distribution of the losses may be heavy-tailed. The bounds hold for general estimators, but they are optimized when applied to $\eta$-generalized Bayesian, MDL, and empirical risk minimization estimators. In the case of log loss, the bounds imply convergence rates for generalized Bayesian inference under misspecification in terms of a generalization of the Hellinger metric as long as the learning rate $\eta$ is set correctly. For general loss functions, our bounds rely on two separate conditions: the $v$-GRIP (generalized reversed information projection) conditions, which control the lower tail of the excess loss; and the newly introduced witness condition, which controls the upper tail. The parameter $v$ in the $v$-GRIP conditions determines the achievable rate and is akin to the exponent in the Tsybakov margin condition and the Bernstein condition for bounded losses, which the $v$-GRIP conditions generalize; favorable $v$ in combination with small model complexity leads to $\tilde{O}(1/n)$ rates. The witness condition allows us to connect the excess risk to an “annealed” version thereof, by which we generalize several previous results connecting Hellinger and Rényi divergence to KL divergence. |
Tasks | Bayesian Inference |
Published | 2016-05-01 |
URL | https://arxiv.org/abs/1605.00252v4 |
https://arxiv.org/pdf/1605.00252v4.pdf | |
PWC | https://paperswithcode.com/paper/fast-rates-for-general-unbounded-loss |
Repo | |
Framework | |
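For reference, the $\eta$-generalized Bayesian posterior the abstract refers to takes the standard Gibbs-posterior form: the likelihood is replaced by an exponentiated empirical loss tempered by the learning rate $\eta$ (with log loss and $\eta = 1$, this recovers ordinary Bayes).

```latex
% Gibbs / eta-generalized Bayesian posterior over parameters theta,
% given data z_1..z_n, prior pi, loss ell, and learning rate eta.
\begin{equation}
\pi_\eta(\theta \mid z_1, \dots, z_n) \;\propto\;
  \pi(\theta)\, \exp\!\Bigl( -\eta \sum_{i=1}^{n} \ell(\theta, z_i) \Bigr).
\end{equation}
```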