May 6, 2019

3013 words 15 mins read

Paper Group ANR 202

Variational hybridization and transformation for large inaccurate noisy-or networks

Title Variational hybridization and transformation for large inaccurate noisy-or networks
Authors Yusheng Xie, Nan Du, Wei Fan, Jing Zhai, Weicheng Zhu
Abstract Variational inference provides approximations to the computationally intractable posterior distribution in Bayesian networks. A prominent medical application of noisy-or Bayesian networks is to infer potential diseases given observed symptoms. Previous studies focus on approximating a handful of complicated pathological cases using variational transformation. Our goal is to use variational transformation as part of a novel hybridized inference for serving reliable and real-time diagnosis at web scale. We propose a hybridized inference that allows variational parameters to be estimated without disease posteriors or priors, making the inference faster and much of its computation recyclable. In addition, we propose a transformation ranking algorithm that is very stable under large variances in network prior probabilities, a common issue in medical applications of Bayesian networks. In experiments, we perform a comparative study on a large real-life medical network and a scalability study on a much larger (36,000x) synthesized network.
Tasks
Published 2016-05-20
URL http://arxiv.org/abs/1605.06181v1
PDF http://arxiv.org/pdf/1605.06181v1.pdf
PWC https://paperswithcode.com/paper/variational-hybridization-and-transformation
Repo
Framework
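
The noisy-or likelihood at the heart of such networks is simple to state: each active disease independently triggers a symptom with its own activation probability, plus a leak term for causes outside the network. A minimal sketch (parameter values are illustrative, not taken from the paper's network):

```python
import numpy as np

def noisy_or_prob(disease_state, activation, leak=0.01):
    """P(symptom = 1 | diseases) under the noisy-or model.

    disease_state: 0/1 vector of active diseases
    activation:    per-disease probability of causing the symptom
    leak:          probability the symptom appears with no disease
    """
    disease_state = np.asarray(disease_state, dtype=float)
    activation = np.asarray(activation, dtype=float)
    # Each active disease independently *fails* to cause the symptom
    # with probability (1 - activation); the symptom stays off only
    # if every active cause and the leak fail simultaneously.
    p_off = (1.0 - leak) * np.prod((1.0 - activation) ** disease_state)
    return 1.0 - p_off

# With no active disease, only the leak can trigger the symptom.
print(noisy_or_prob([0, 0], [0.8, 0.5]))   # 0.01
```

Exact posterior inference over diseases given many such symptom nodes is what becomes intractable at web scale, motivating the variational hybridization above.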

Non-Redundant Spectral Dimensionality Reduction

Title Non-Redundant Spectral Dimensionality Reduction
Authors Yochai Blau, Tomer Michaeli
Abstract Spectral dimensionality reduction algorithms are widely used in numerous domains, including for recognition, segmentation, tracking and visualization. However, despite their popularity, these algorithms suffer from a major limitation known as the “repeated Eigen-directions” phenomenon. That is, many of the embedding coordinates they produce typically capture the same direction along the data manifold. This leads to redundant and inefficient representations that do not reveal the true intrinsic dimensionality of the data. In this paper, we propose a general method for avoiding redundancy in spectral algorithms. Our approach relies on replacing the orthogonality constraints underlying those methods by unpredictability constraints. Specifically, we require that each embedding coordinate be unpredictable (in the statistical sense) from all previous ones. We prove that these constraints necessarily prevent redundancy, and provide a simple technique to incorporate them into existing methods. As we illustrate on challenging high-dimensional scenarios, our approach produces significantly more informative and compact representations, which improve visualization and classification tasks.
Tasks Dimensionality Reduction
Published 2016-12-11
URL http://arxiv.org/abs/1612.03412v2
PDF http://arxiv.org/pdf/1612.03412v2.pdf
PWC https://paperswithcode.com/paper/non-redundant-spectral-dimensionality
Repo
Framework
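
The "repeated eigen-directions" problem is easy to reproduce: an embedding coordinate can be exactly orthogonal to an earlier one and still be a deterministic function of it. The toy sketch below (an illustration, not the paper's algorithm) shows why the authors replace orthogonality with statistical unpredictability, using a Nadaraya-Watson predictor as the "predictability" probe:

```python
import numpy as np

rng = np.random.default_rng(0)
y1 = rng.uniform(-1, 1, 2000)   # first embedding coordinate
y2 = y1 ** 2                    # a "repeated direction": a function of y1

# Orthogonality (the classical constraint) is satisfied ...
y2c = y2 - y2.mean()
print(abs(np.dot(y1, y2c)) / len(y1))   # ~0: uncorrelated

# ... yet y2 is perfectly predictable from y1, so it carries no new
# information about the manifold.  A nonparametric predictor exposes this:
def nw_predict(x_train, y_train, x_query, h=0.05):
    """Nadaraya-Watson kernel regression with Gaussian weights."""
    w = np.exp(-((x_query[:, None] - x_train[None, :]) ** 2) / (2 * h ** 2))
    return (w @ y_train) / w.sum(axis=1)

resid = y2 - nw_predict(y1, y2, y1)
r2 = 1 - resid.var() / y2.var()
print(r2)   # close to 1 -> y2 is redundant despite being orthogonal
```

Requiring each new coordinate to be unpredictable (residual variance bounded away from zero) rules out exactly this failure mode.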

A New Statistic Feature of the Short-Time Amplitude Spectrum Values for Human’s Unvoiced Pronunciation

Title A New Statistic Feature of the Short-Time Amplitude Spectrum Values for Human’s Unvoiced Pronunciation
Authors Xiaodong Zhuang
Abstract In this paper, a new statistic feature of the discrete short-time amplitude spectrum is discovered by experiments for the signals of unvoiced pronunciation. For the random-varying short-time spectrum, this feature reveals the relationship between the amplitude’s average and its standard deviation for every frequency component. On the other hand, the association between the amplitude distributions for different frequency components is also studied. A new model representing such association is inspired by the normalized histogram of amplitude. By mathematical analysis, the new statistic feature discovered is proved to be necessary evidence supporting the proposed model, and can also serve as direct evidence for the widely used hypothesis of “identical distribution of amplitude for all frequencies”.
Tasks
Published 2016-09-23
URL http://arxiv.org/abs/1609.07245v2
PDF http://arxiv.org/pdf/1609.07245v2.pdf
PWC https://paperswithcode.com/paper/a-new-statistic-feature-of-the-short-time
Repo
Framework
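
The quantity under study, the per-frequency mean and standard deviation of short-time amplitude spectra, is straightforward to compute. A sketch using white noise as a stand-in for an unvoiced (noise-like) segment; the frame length, hop, and window are arbitrary choices, not the paper's settings:

```python
import numpy as np

def frame_amplitude_spectra(x, frame_len=256, hop=128):
    """Magnitudes of a short-time Fourier transform (Hann window)."""
    win = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

rng = np.random.default_rng(1)
x = rng.normal(size=8000)          # white noise as an unvoiced stand-in
amp = frame_amplitude_spectra(x)
mean_k = amp.mean(axis=0)          # per-frequency amplitude average
std_k = amp.std(axis=0)            # per-frequency standard deviation

# For Rayleigh-distributed magnitudes, std/mean is the same constant
# (sqrt((4 - pi)/pi), about 0.523) at every frequency:
ratio = std_k[1:-1] / mean_k[1:-1]
print(ratio.mean())
```

A frequency-independent mean-to-deviation relationship of this kind is exactly the sort of statistic the abstract describes as evidence for the "identical distribution of amplitude for all frequencies" hypothesis.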

“Did I Say Something Wrong?” A Word-Level Analysis of Wikipedia Articles for Deletion Discussions

Title “Did I Say Something Wrong?” A Word-Level Analysis of Wikipedia Articles for Deletion Discussions
Authors Michael Ruster
Abstract This thesis focuses on gaining linguistic insights into textual discussions on a word level. It was of special interest to distinguish messages that constructively contribute to a discussion from those that are detrimental to it. Thereby, we wanted to determine whether “I”- and “You”-messages are indicators for either of the two discussion styles. These messages are nowadays often used in guidelines for successful communication. Although their effects have been successfully evaluated multiple times, a large-scale analysis has never been conducted. Thus, we used Wikipedia Articles for Deletion (short: AfD) discussions together with the records of blocked users and developed a fully automated creation of an annotated data set. In this data set, messages were labelled either constructive or disruptive. We applied binary classifiers to the data to determine characteristic words for both discussion styles. Thereby, we also investigated whether function words like pronouns and conjunctions play an important role in distinguishing the two. We found that “You”-messages were a strong indicator of disruptive messages, which matches their attributed effects on communication. However, we found “I”-messages to be indicative of disruptive messages as well, which is contrary to their attributed effects. The importance of function words could neither be confirmed nor refuted. Other characteristic words for either communication style were not found. Yet, the results suggest that a different model might better represent disruptive and constructive messages in textual discussions.
Tasks
Published 2016-03-25
URL http://arxiv.org/abs/1603.08048v1
PDF http://arxiv.org/pdf/1603.08048v1.pdf
PWC https://paperswithcode.com/paper/did-i-say-something-wrong-a-word-level
Repo
Framework
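
A minimal sketch of the word-level framing: flag first-person ("I"-message) and second-person ("You"-message) markers per message as classifier features. The word lists and feature names are hypothetical, not the thesis's exact lexicon:

```python
import re

# Hypothetical pronoun lexicons; the thesis may define these differently.
I_WORDS = {"i", "i'm", "i've", "my", "mine"}
YOU_WORDS = {"you", "you're", "you've", "your", "yours"}

def pronoun_features(message):
    """Count I-/You-markers in one discussion message."""
    tokens = re.findall(r"[a-z']+", message.lower())
    return {
        "i_count": sum(t in I_WORDS for t in tokens),
        "you_count": sum(t in YOU_WORDS for t in tokens),
        "n_tokens": len(tokens),
    }

print(pronoun_features("You clearly have no idea what you're doing."))
```

Feature dictionaries like these, over the whole vocabulary rather than two small lexicons, are what a binary classifier would consume to surface characteristic words of each discussion style.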

A CRF Based POS Tagger for Code-mixed Indian Social Media Text

Title A CRF Based POS Tagger for Code-mixed Indian Social Media Text
Authors Kamal Sarkar
Abstract In this work, we describe a conditional random fields (CRF) based system for Part-of-Speech (POS) tagging of code-mixed Indian social media text, developed as part of our participation in the tool contest on POS tagging for code-mixed Indian social media text, held in conjunction with the 2016 International Conference on Natural Language Processing, IIT(BHU), India. We participated only in the constrained-mode contest for all three language pairs: Bengali-English, Hindi-English and Telugu-English. Our system achieves an overall average F1 score of 79.99, the highest among all 16 systems that participated in the constrained-mode contest.
Tasks Part-Of-Speech Tagging
Published 2016-12-23
URL http://arxiv.org/abs/1612.07956v1
PDF http://arxiv.org/pdf/1612.07956v1.pdf
PWC https://paperswithcode.com/paper/a-crf-based-pos-tagger-for-code-mixed-indian
Repo
Framework
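
A CRF tagger of this kind is driven by hand-crafted per-token features over a context window. A sketch of such a feature function (the feature set is illustrative; the paper's exact features may differ):

```python
def word_features(sentence, i):
    """Features for token i of a sentence, in the dict-per-token form
    commonly fed to CRF toolkits."""
    word = sentence[i]
    return {
        "word.lower": word.lower(),
        "word.isupper": word.isupper(),
        "word.istitle": word.istitle(),
        "word.isdigit": word.isdigit(),
        "suffix3": word[-3:],                 # cheap morphology cue
        "prev.word": sentence[i - 1].lower() if i > 0 else "<BOS>",
        "next.word": sentence[i + 1].lower() if i < len(sentence) - 1 else "<EOS>",
    }

# A code-mixed Hindi-English example sentence.
sent = ["Mujhe", "yeh", "song", "bahut", "accha", "laga"]
print(word_features(sent, 2)["prev.word"])   # yeh
```

Because such features are purely surface-level, the same extractor works across all three language pairs without language-specific resources, which fits the constrained-mode setting.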

Segmentation Free Object Discovery in Video

Title Segmentation Free Object Discovery in Video
Authors Giovanni Cuffaro, Federico Becattini, Claudio Baecchi, Lorenzo Seidenari, Alberto Del Bimbo
Abstract In this paper we present a simple yet effective approach to extend without supervision any object proposal from static images to videos. Unlike previous methods, these spatio-temporal proposals, to which we refer as tracks, are generated relying on little or no visual content by only exploiting bounding boxes spatial correlations through time. The tracks that we obtain are likely to represent objects and are a general-purpose tool to represent meaningful video content for a wide variety of tasks. For unannotated videos, tracks can be used to discover content without any supervision. As further contribution we also propose a novel and dataset-independent method to evaluate a generic object proposal based on the entropy of a classifier output response. We experiment on two competitive datasets, namely YouTube Objects and ILSVRC-2015 VID.
Tasks
Published 2016-09-01
URL http://arxiv.org/abs/1609.00221v1
PDF http://arxiv.org/pdf/1609.00221v1.pdf
PWC https://paperswithcode.com/paper/segmentation-free-object-discovery-in-video
Repo
Framework
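
The core idea, chaining proposals across frames by bounding-box overlap alone, can be sketched in a few lines. This greedy IoU linking is a simplified stand-in for the paper's track generation, not its exact procedure:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def link_tracks(frames, thr=0.5):
    """Greedily chain per-frame box proposals into tracks using only
    spatial overlap between consecutive frames (no visual features)."""
    tracks = [[b] for b in frames[0]]
    for boxes in frames[1:]:
        used = set()
        for tr in tracks:
            best, best_iou = None, thr
            for j, b in enumerate(boxes):
                if j not in used and iou(tr[-1], b) > best_iou:
                    best, best_iou = j, iou(tr[-1], b)
            if best is not None:
                tr.append(boxes[best])
                used.add(best)
        # Unmatched boxes start new tracks.
        tracks.extend([b] for j, b in enumerate(boxes) if j not in used)
    return tracks

# A box drifting slowly to the right yields a single three-frame track.
frames = [[(0, 0, 10, 10)], [(1, 0, 11, 10)], [(2, 0, 12, 10)]]
print(len(link_tracks(frames)))   # 1
```

Exploiting only "bounding boxes spatial correlations through time", as the abstract puts it, is what makes the resulting tracks cheap and segmentation-free.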

Near-Infrared Image Dehazing Via Color Regularization

Title Near-Infrared Image Dehazing Via Color Regularization
Authors Chang-Hwan Son, Xiao-Ping Zhang
Abstract Near-infrared imaging can capture haze-free near-infrared gray images and visible color images, according to physical scattering models, e.g., Rayleigh or Mie models. However, there exist serious discrepancies in brightness and image structure between the near-infrared gray images and the visible color images. The direct use of the near-infrared gray images introduces a color distortion problem in the dehazed images. Therefore, color distortion should also be considered for near-infrared dehazing. To address this, this paper presents an approach that adds a new color regularization to a conventional dehazing framework. The proposed color regularization can model the color prior for unknown haze-free images from the two captured images. Thus, natural-looking colors and fine details can be induced in the dehazed images. The experimental results show that the proposed color regularization model can help remove the color distortion and the haze at the same time. The effectiveness of the proposed color regularization is also verified by comparison with other conventional regularizations. It is also shown that the proposed color regularization can remove the edge artifacts that arise from the use of the conventional dark prior model.
Tasks Image Dehazing
Published 2016-10-01
URL http://arxiv.org/abs/1610.00175v1
PDF http://arxiv.org/pdf/1610.00175v1.pdf
PWC https://paperswithcode.com/paper/near-infrared-image-dehazing-via-color
Repo
Framework

Deep Learning in Finance

Title Deep Learning in Finance
Authors J. B. Heaton, N. G. Polson, J. H. Witte
Abstract We explore the use of deep learning hierarchical models for problems in financial prediction and classification. Financial prediction problems – such as those presented in designing and pricing securities, constructing portfolios, and risk management – often involve large data sets with complex data interactions that currently are difficult or impossible to specify in a full economic model. Applying deep learning methods to these problems can produce more useful results than standard methods in finance. In particular, deep learning can detect and exploit interactions in the data that are, at least currently, invisible to any existing financial economic theory.
Tasks
Published 2016-02-21
URL http://arxiv.org/abs/1602.06561v3
PDF http://arxiv.org/pdf/1602.06561v3.pdf
PWC https://paperswithcode.com/paper/deep-learning-in-finance
Repo
Framework

On kernel methods for covariates that are rankings

Title On kernel methods for covariates that are rankings
Authors Horia Mania, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan, Benjamin Recht
Abstract Permutation-valued features arise in a variety of applications, either in a direct way when preferences are elicited over a collection of items, or an indirect way in which numerical ratings are converted to a ranking. To date, there has been relatively limited study of regression, classification, and testing problems based on permutation-valued features, as opposed to permutation-valued responses. This paper studies the use of reproducing kernel Hilbert space methods for learning from permutation-valued features. These methods embed the rankings into an implicitly defined function space, and allow for efficient estimation of regression and test functions in this richer space. Our first contribution is to characterize both the feature spaces and spectral properties associated with two kernels for rankings, the Kendall and Mallows kernels. Using tools from representation theory, we explain the limited expressive power of the Kendall kernel by characterizing its degenerate spectrum, and in sharp contrast, we prove that Mallows’ kernel is universal and characteristic. We also introduce families of polynomial kernels that interpolate between the Kendall (degree one) and Mallows’ (infinite degree) kernels. We show the practical effectiveness of our methods via applications to Eurobarometer survey data as well as a Movielens ratings dataset.
Tasks
Published 2016-03-25
URL http://arxiv.org/abs/1603.08035v2
PDF http://arxiv.org/pdf/1603.08035v2.pdf
PWC https://paperswithcode.com/paper/on-kernel-methods-for-covariates-that-are
Repo
Framework
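
The Kendall kernel mentioned above has a direct definition: the difference between concordant and discordant item pairs of the two rankings, normalized by the number of pairs. A small sketch:

```python
from itertools import combinations

def kendall_kernel(sigma, tau):
    """Kendall kernel between two rankings, each given as the sequence
    of positions assigned to items 0..n-1.  Value lies in [-1, 1]."""
    n = len(sigma)
    total = n * (n - 1) / 2
    s = 0
    for i, j in combinations(range(n), 2):
        # +1 if the pair (i, j) is ordered the same way in both
        # rankings (concordant), -1 otherwise (discordant).
        s += 1 if (sigma[i] - sigma[j]) * (tau[i] - tau[j]) > 0 else -1
    return s / total

print(kendall_kernel([1, 2, 3], [1, 2, 3]))   # 1.0  (identical rankings)
print(kendall_kernel([1, 2, 3], [3, 2, 1]))   # -1.0 (reversed)
```

Because this kernel only sees pairwise order agreements (a degree-one statistic), its feature space is limited, which is the degeneracy the paper characterizes before contrasting it with the universal Mallows kernel.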

Learning Multi-level Deep Representations for Image Emotion Classification

Title Learning Multi-level Deep Representations for Image Emotion Classification
Authors Tianrong Rao, Min Xu, Dong Xu
Abstract In this paper, we propose a new deep network that learns multi-level deep representations for image emotion classification (MldrNet). Image emotion can be recognized through image semantics, image aesthetics and low-level visual features from both global and local views. Existing image emotion classification works using hand-crafted features or deep features mainly focus on either low-level visual features or semantic-level image representations without taking all factors into consideration. The proposed MldrNet combines deep representations of different levels, i.e., image semantics, image aesthetics, and low-level visual features, to effectively classify the emotion types of different kinds of images, such as abstract paintings and web images. Extensive experiments on both Internet images and abstract paintings demonstrate that the proposed method outperforms state-of-the-art methods using deep features or hand-crafted features, with at least a 6% improvement in overall classification accuracy.
Tasks Emotion Classification
Published 2016-11-22
URL http://arxiv.org/abs/1611.07145v2
PDF http://arxiv.org/pdf/1611.07145v2.pdf
PWC https://paperswithcode.com/paper/learning-multi-level-deep-representations-for
Repo
Framework
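
The multi-level combination can be sketched as concatenating per-level feature vectors and scoring emotion classes with one linear layer. The dimensions, the 8-class output, and the random weights below are stand-ins, not MldrNet's actual architecture:

```python
import numpy as np

def fuse_and_classify(levels, W, b):
    """Late fusion sketch: concatenate feature vectors from different
    representation levels and score emotion classes linearly."""
    fused = np.concatenate(levels)
    logits = W @ fused + b
    z = np.exp(logits - logits.max())   # numerically stable softmax
    return z / z.sum()

rng = np.random.default_rng(0)
levels = [rng.normal(size=256),   # semantic-level features (assumed dim)
          rng.normal(size=128),   # aesthetic-level features (assumed dim)
          rng.normal(size=64)]    # low-level visual features (assumed dim)
W = rng.normal(size=(8, 448)) * 0.01   # 8 emotion categories (assumed)
b = np.zeros(8)
probs = fuse_and_classify(levels, W, b)
print(probs.sum())   # 1.0
```

The point of the fusion is that no single level suffices: abstract paintings lean on aesthetics and low-level cues, while web photos lean on semantics.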

Structured Group Sparsity: A Novel Indoor WLAN Localization, Outlier Detection, and Radio Map Interpolation Scheme

Title Structured Group Sparsity: A Novel Indoor WLAN Localization, Outlier Detection, and Radio Map Interpolation Scheme
Authors Ali Khalajmehrabadi, Nikolaos Gatsis, David Akopian
Abstract This paper introduces novel schemes for indoor localization, outlier detection, and radio map interpolation using Wireless Local Area Networks (WLANs). The localization method consists of a novel multicomponent optimization technique that minimizes the squared $\ell_{2}$-norm of the residuals between the radio map and the online Received Signal Strength (RSS) measurements, the $\ell_{1}$-norm of the user’s location vector, and weighted $\ell_{2}$-norms of layered groups of Reference Points (RPs). RPs are grouped using a new criterion based on the similarity between the so-called Access Point (AP) coverage vectors. In addition, since AP readings are prone to containing inordinate readings, called outliers, an augmented optimization problem is proposed to detect the outliers and localize the user with cleaned online measurements. Moreover, a novel scheme to record fingerprints from a smaller number of RPs and estimate the radio map at RPs without recorded fingerprints is developed using sparse recovery techniques. All localization schemes are tested on RSS fingerprints collected from a real environment. The overall scheme has comparable complexity with competing approaches, while it performs with high accuracy under a small number of APs and finer granularity of RPs.
Tasks Outlier Detection
Published 2016-10-18
URL http://arxiv.org/abs/1610.05421v1
PDF http://arxiv.org/pdf/1610.05421v1.pdf
PWC https://paperswithcode.com/paper/structured-group-sparsity-a-novel-indoor-wlan
Repo
Framework
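
Objectives mixing an $\ell_{1}$ term with weighted group $\ell_{2}$ norms are typically minimized with proximal steps. A sketch of the two shrinkage operators involved; the grouping and threshold are illustrative, not the paper's values:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def group_soft_threshold(v, groups, t):
    """Proximal operator of a sum of group l2 norms: shrink each group
    of Reference Points (RPs) toward zero as a block, so whole groups
    of candidate locations are switched off together."""
    out = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > t:
            out[g] = v[g] * (1 - t / norm)
    return out

theta = np.array([0.9, 0.1, 0.05, 0.02])   # candidate RP weights
groups = [[0, 1], [2, 3]]                  # hypothetical AP-coverage grouping
print(group_soft_threshold(theta, groups, t=0.2))
```

Group shrinkage is what encodes the structure: the user's location should activate only a few RPs, and those RPs should come from groups with similar AP coverage.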

DAVE: A Unified Framework for Fast Vehicle Detection and Annotation

Title DAVE: A Unified Framework for Fast Vehicle Detection and Annotation
Authors Yi Zhou, Li Liu, Ling Shao, Matt Mellor
Abstract Vehicle detection and annotation for streaming video data with complex scenes is an interesting but challenging task for urban traffic surveillance. In this paper, we present a fast framework of Detection and Annotation for Vehicles (DAVE), which effectively combines vehicle detection and attribute annotation. DAVE consists of two convolutional neural networks (CNNs): a fast vehicle proposal network (FVPN) for extracting vehicle-like objects, and an attributes learning network (ALN) that verifies each proposal and infers each vehicle’s pose, color and type simultaneously. These two nets are jointly optimized so that abundant latent knowledge learned from the ALN can be exploited to guide FVPN training. Once the system is trained, it can achieve efficient vehicle detection and annotation for real-world traffic surveillance data. We evaluate DAVE on a new self-collected UTS dataset and on the public PASCAL VOC2007 car and LISA 2010 datasets, with consistent improvements over existing algorithms.
Tasks Fast Vehicle Detection
Published 2016-07-15
URL http://arxiv.org/abs/1607.04564v3
PDF http://arxiv.org/pdf/1607.04564v3.pdf
PWC https://paperswithcode.com/paper/dave-a-unified-framework-for-fast-vehicle
Repo
Framework

Outlier Detection from Network Data with Subnetwork Interpretation

Title Outlier Detection from Network Data with Subnetwork Interpretation
Authors Xuan-Hong Dang, Arlei Silva, Ambuj Singh, Ananthram Swami, Prithwish Basu
Abstract Detecting a small number of outliers from a set of data observations is always challenging. The problem is more difficult in the setting of multiple network samples, where computing the anomalous degree of a network sample is generally not sufficient. In fact, explaining why the network is exceptional, expressed in the form of a subnetwork, is equally important. In this paper, we develop a novel algorithm to address these two key problems. We treat each network sample as a potential outlier and identify subnetworks that best discriminate it from nearby regular samples. The algorithm is developed in the framework of network regression combined with constraints on both network topology and L1-norm shrinkage to perform subnetwork discovery. Our method thus goes beyond subspace/subgraph discovery, and we show that it converges to a global optimum. Evaluation on various real-world network datasets demonstrates that our algorithm not only outperforms baselines in both the network and high-dimensional settings, but also discovers highly relevant and interpretable local subnetworks, further enhancing our understanding of anomalous networks.
Tasks Outlier Detection
Published 2016-09-30
URL http://arxiv.org/abs/1610.00054v1
PDF http://arxiv.org/pdf/1610.00054v1.pdf
PWC https://paperswithcode.com/paper/outlier-detection-from-network-data-with
Repo
Framework

Deep-Anomaly: Fully Convolutional Neural Network for Fast Anomaly Detection in Crowded Scenes

Title Deep-Anomaly: Fully Convolutional Neural Network for Fast Anomaly Detection in Crowded Scenes
Authors Mohammad Sabokrou, Mohsen Fayyaz, Mahmood Fathy, Zahra Moayedd, Reinhard Klette
Abstract The detection of abnormal behaviours in crowded scenes has to deal with many challenges. This paper presents an efficient method for the detection and localization of anomalies in videos. Using fully convolutional neural networks (FCNs) and temporal data, a pre-trained supervised FCN is transferred into an unsupervised FCN, ensuring the detection of (global) anomalies in scenes. High performance in terms of speed and accuracy is achieved through cascaded detection, which reduces computational complexity. This FCN-based architecture addresses two main tasks: feature representation and cascaded outlier detection. Experimental results on two benchmarks suggest that the proposed method outperforms existing methods in detection and localization accuracy.
Tasks Anomaly Detection, Outlier Detection
Published 2016-09-03
URL http://arxiv.org/abs/1609.00866v2
PDF http://arxiv.org/pdf/1609.00866v2.pdf
PWC https://paperswithcode.com/paper/deep-anomaly-fully-convolutional-neural
Repo
Framework

Learning Reporting Dynamics during Breaking News for Rumour Detection in Social Media

Title Learning Reporting Dynamics during Breaking News for Rumour Detection in Social Media
Authors Arkaitz Zubiaga, Maria Liakata, Rob Procter
Abstract Breaking news leads to situations of fast-paced reporting in social media, producing all kinds of updates related to news stories, albeit with the caveat that some of those early updates tend to be rumours, i.e., information with an unverified status at the time of posting. Flagging information that is unverified can be helpful to avoid the spread of information that may turn out to be false. Detection of rumours can also feed a rumour tracking system that ultimately determines their veracity. In this paper we introduce a novel approach to rumour detection that learns from the sequential dynamics of reporting during breaking news in social media to detect rumours in new stories. Using Twitter datasets collected during five breaking news stories, we experiment with Conditional Random Fields as a sequential classifier that leverages context learnt during an event for rumour detection, which we compare with the state-of-the-art rumour detection system as well as other baselines. In contrast to existing work, our classifier does not need to observe tweets querying a piece of information to deem it a rumour; instead, we detect rumours from the tweet alone by exploiting context learnt during the event. Our classifier achieves competitive performance, beating the state-of-the-art classifier that relies on querying tweets in both precision and recall, and outperforming our best baseline by nearly 40% in terms of F1 score. The scale and diversity of our experiments reinforce the generalisability of our classifier.
Tasks Rumour Detection
Published 2016-10-24
URL http://arxiv.org/abs/1610.07363v1
PDF http://arxiv.org/pdf/1610.07363v1.pdf
PWC https://paperswithcode.com/paper/learning-reporting-dynamics-during-breaking
Repo
Framework
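
The sequential framing can be sketched as feature extraction over a time-ordered event stream, where each tweet's features include context accumulated earlier in the same event. Feature names are assumed, and using observed labels as running context mimics training-time conditioning rather than the paper's exact setup:

```python
def sequential_features(tweets):
    """Turn a time-ordered stream of (text, is_rumour) pairs from one
    breaking-news event into per-tweet feature dicts carrying the
    context seen so far in the event (a sketch of the CRF framing)."""
    feats, rumour_so_far = [], 0
    for k, (text, is_rumour) in enumerate(tweets):
        feats.append({
            "has_question": "?" in text,
            "n_words": len(text.split()),
            "position": k,                     # where in the event we are
            "prior_rumour_rate": rumour_so_far / k if k else 0.0,
        })
        rumour_so_far += is_rumour
    return feats

stream = [("Shots reported near the square", 1),
          ("Police confirm the road is closed", 0),
          ("Is the suspect still at large?", 1)]
print(sequential_features(stream)[2]["prior_rumour_rate"])   # 0.5
```

Context features of this kind are what let the classifier judge a tweet on its own, without waiting for querying tweets to appear.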