Paper Group ANR 41
Color-based Segmentation of Sky/Cloud Images From Ground-based Cameras. Approximate cross-validation formula for Bayesian linear regression. An Integrated Classification Model for Financial Data Mining. Simpler Context-Dependent Logical Forms via Model Projections. On Avoidance Learning with Partial Observability. Rectified Gaussian Scale Mixtures …
Color-based Segmentation of Sky/Cloud Images From Ground-based Cameras
Title | Color-based Segmentation of Sky/Cloud Images From Ground-based Cameras |
Authors | Soumyabrata Dev, Yee Hui Lee, Stefan Winkler |
Abstract | Sky/cloud images captured by ground-based cameras (a.k.a. whole sky imagers) are increasingly used nowadays because of their applications in a number of fields, including climate modeling, weather prediction, renewable energy generation, and satellite communications. Due to the wide variety of cloud types and lighting conditions in such images, accurate and robust segmentation of clouds is challenging. In this paper, we present a supervised segmentation framework for ground-based sky/cloud images based on a systematic analysis of different color spaces and components, using partial least squares (PLS) regression. Unlike other state-of-the-art methods, our proposed approach is entirely learning-based and does not require any manually-defined parameters. In addition, we release the Singapore Whole Sky IMaging SEGmentation Database (SWIMSEG), a large database of annotated sky/cloud images, to the research community. |
Tasks | |
Published | 2016-06-12 |
URL | http://arxiv.org/abs/1606.03669v1 |
http://arxiv.org/pdf/1606.03669v1.pdf | |
PWC | https://paperswithcode.com/paper/color-based-segmentation-of-skycloud-images |
Repo | |
Framework | |
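The abstract describes a learning-based framework that regresses per-pixel cloud labels from colour components using partial least squares (PLS). As a rough illustration of that idea only, the sketch below fits scikit-learn's PLSRegression on a handful of assumed colour features per pixel and thresholds the regression output; the specific colour components, the number of PLS factors and the threshold are assumptions, not the authors' configuration.

```python
# Illustrative sketch only: per-pixel PLS regression for sky/cloud labelling.
# The colour features, PLS factor count and threshold below are assumptions,
# not the paper's exact configuration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pixel_features(img_rgb):
    """Stack a few colour components per pixel (R, G, B, (B-R)/(B+R), a saturation-like term)."""
    img = img_rgb.astype(np.float64) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-6
    feats = np.stack(
        [r, g, b, (b - r) / (b + r + eps), 1.0 - np.minimum(np.minimum(r, g), b)],
        axis=-1,
    )
    return feats.reshape(-1, feats.shape[-1])

def train_pls(train_imgs, train_masks, n_components=2):
    """train_imgs: HxWx3 uint8 images; train_masks: HxW {0,1} cloud masks."""
    X = np.vstack([pixel_features(im) for im in train_imgs])
    y = np.concatenate([m.reshape(-1) for m in train_masks]).astype(np.float64)
    pls = PLSRegression(n_components=n_components)
    pls.fit(X, y)
    return pls

def segment(pls, img_rgb, threshold=0.5):
    scores = np.asarray(pls.predict(pixel_features(img_rgb))).ravel()
    return (scores > threshold).reshape(img_rgb.shape[:2])   # boolean cloud mask
```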
Approximate cross-validation formula for Bayesian linear regression
Title | Approximate cross-validation formula for Bayesian linear regression |
Authors | Yoshiyuki Kabashima, Tomoyuki Obuchi, Makoto Uemura |
Abstract | Cross-validation (CV) is a technique for evaluating the predictive ability of statistical models/learning systems on a given data set. Despite its wide applicability, its rather heavy computational cost can prevent its use as the system size grows. To resolve this difficulty in the case of Bayesian linear regression, we develop a formula for evaluating the leave-one-out CV error approximately, without actually performing CV. The usefulness of the developed formula is tested by a statistical-mechanical analysis of a synthetic model and confirmed by application to a real-world supernova data set. |
Tasks | |
Published | 2016-10-25 |
URL | http://arxiv.org/abs/1610.07733v1 |
http://arxiv.org/pdf/1610.07733v1.pdf | |
PWC | https://paperswithcode.com/paper/approximate-cross-validation-formula-for |
Repo | |
Framework | |
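For reference, the classical hat-matrix shortcut already gives exact leave-one-out errors for ridge regression (the MAP point estimate of Bayesian linear regression with a Gaussian prior) without refitting n times: e_i = (y_i - ŷ_i) / (1 - H_ii), where H = X (XᵀX + λI)⁻¹ Xᵀ. This is not the approximate formula derived in the paper, which targets more general Bayesian settings via a statistical-mechanics analysis; the sketch below only illustrates the idea of bypassing explicit CV.

```python
# Reference sketch: the hat-matrix shortcut for leave-one-out CV in ridge
# regression. This is NOT the formula derived in the paper; it only shows the
# idea of computing LOO errors without refitting the model n times.
import numpy as np

def ridge_loo_errors(X, y, lam):
    """Exact LOO residuals for ridge: e_i = (y_i - yhat_i) / (1 - H_ii)."""
    n, d = X.shape
    A = X.T @ X + lam * np.eye(d)
    H = X @ np.linalg.solve(A, X.T)        # hat matrix of the ridge smoother
    yhat = H @ y
    return (y - yhat) / (1.0 - np.diag(H))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w = rng.normal(size=10)
y = X @ w + 0.3 * rng.normal(size=200)

loo_mse = np.mean(ridge_loo_errors(X, y, lam=1.0) ** 2)   # no explicit n-fold refitting
print(loo_mse)
```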
An Integrated Classification Model for Financial Data Mining
Title | An Integrated Classification Model for Financial Data Mining |
Authors | Fan Cai, Nhien-An Le-Khac, M-T. Kechadi |
Abstract | Nowadays, financial data analysis is becoming increasingly important in the business market. As companies collect more and more data from daily operations, they expect to extract useful knowledge from the collected data to help make reasonable decisions for new customer requests, e.g. user credit category, churn analysis, real estate analysis, etc. Financial institutions have applied different data mining techniques to enhance their business performance. However, a simplistic application of these techniques can raise performance issues. Besides, there are very few general models for both understanding and forecasting across different financial fields. We present in this paper a new classification model for analyzing financial data. We also evaluate this model on different real-world data sets to show its performance. |
Tasks | |
Published | 2016-09-09 |
URL | http://arxiv.org/abs/1609.02976v1 |
http://arxiv.org/pdf/1609.02976v1.pdf | |
PWC | https://paperswithcode.com/paper/an-integrated-classification-model-for |
Repo | |
Framework | |
Simpler Context-Dependent Logical Forms via Model Projections
Title | Simpler Context-Dependent Logical Forms via Model Projections |
Authors | Reginald Long, Panupong Pasupat, Percy Liang |
Abstract | We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collect three new context-dependent semantic parsing datasets and develop a new left-to-right parser. |
Tasks | Semantic Parsing |
Published | 2016-06-16 |
URL | http://arxiv.org/abs/1606.05378v1 |
http://arxiv.org/pdf/1606.05378v1.pdf | |
PWC | https://paperswithcode.com/paper/simpler-context-dependent-logical-forms-via |
Repo | |
Framework | |
On Avoidance Learning with Partial Observability
Title | On Avoidance Learning with Partial Observability |
Authors | Tom J. Ameloot |
Abstract | We study a framework where agents have to avoid aversive signals. The agents are given only partial information, in the form of features that are projections of task states. Additionally, the agents have to cope with non-determinism, defined as unpredictability in the way that actions are executed. The goal of each agent is to define its behavior based on feature-action pairs that reliably avoid aversive signals. We study a learning algorithm, called A-learning, that exhibits fixpoint convergence, where the belief of the allowed feature-action pairs eventually becomes fixed. A-learning is parameter-free and easy to implement. |
Tasks | |
Published | 2016-05-16 |
URL | http://arxiv.org/abs/1605.04691v1 |
http://arxiv.org/pdf/1605.04691v1.pdf | |
PWC | https://paperswithcode.com/paper/on-avoidance-learning-with-partial |
Repo | |
Framework | |
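The abstract does not give A-learning's update rule, only that the belief over allowed feature-action pairs eventually becomes fixed. The toy sketch below is a hypothetical illustration of why such a scheme reaches a fixpoint (a finite set that only shrinks), not an implementation of A-learning; the `run_episode` interface is assumed.

```python
# Hypothetical illustration only (not A-learning): eliminate feature-action
# pairs implicated in aversive episodes. Because the allowed set is finite and
# only ever shrinks, the belief eventually stops changing (a fixpoint).
def avoidance_beliefs(features, actions, run_episode, episodes=1000):
    """run_episode(allowed) -> list of (feature, action) pairs used before an
    aversive signal, or [] if the episode ended without one (assumed interface)."""
    allowed = {(f, a) for f in features for a in actions}
    for _ in range(episodes):
        blamed = run_episode(allowed)   # pairs to blame for an aversive outcome
        allowed -= set(blamed)          # monotone shrinkage of the belief
    return allowed
```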
Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem
Title | Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem |
Authors | Alican Nalci, Igor Fedorov, Maher Al-Shoukairi, Thomas T. Liu, Bhaskar D. Rao |
Abstract | In this paper, we develop a Bayesian evidence maximization framework to solve the sparse non-negative least squares (S-NNLS) problem. We introduce a family of probability densities referred to as the Rectified Gaussian Scale Mixture (R-GSM) to model the sparsity enforcing prior distribution for the solution. The R-GSM prior encompasses a variety of heavy-tailed densities such as the rectified Laplacian and rectified Student-t distributions with a proper choice of the mixing density. We utilize the hierarchical representation induced by the R-GSM prior and develop an evidence maximization framework based on the Expectation-Maximization (EM) algorithm. Using the EM based method, we estimate the hyper-parameters and obtain a point estimate for the solution. We refer to the proposed method as rectified sparse Bayesian learning (R-SBL). We provide four R-SBL variants that offer a range of options for computational complexity and the quality of the E-step computation. These methods include the Markov chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate message passing and a diagonal approximation. Using numerical experiments, we show that the proposed R-SBL method outperforms existing S-NNLS solvers in terms of both signal and support recovery performance, and is also very robust against the structure of the design matrix. |
Tasks | |
Published | 2016-01-22 |
URL | http://arxiv.org/abs/1601.06207v6 |
http://arxiv.org/pdf/1601.06207v6.pdf | |
PWC | https://paperswithcode.com/paper/rectified-gaussian-scale-mixtures-and-the |
Repo | |
Framework | |
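As context for the problem the paper targets, the sketch below sets up a synthetic sparse non-negative least squares (S-NNLS) instance and solves it with SciPy's plain NNLS as a non-Bayesian reference solver. R-SBL itself (EM over the R-GSM hierarchy) is not reproduced here; the dictionary size and sparsity level are arbitrary.

```python
# Problem-setup sketch: sparse non-negative least squares (S-NNLS).
# scipy's plain NNLS serves only as a reference baseline; the paper's R-SBL
# (EM over the rectified Gaussian scale mixture prior) is not implemented here.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n, m, k = 64, 256, 8                        # measurements, dictionary atoms, nonzeros
Phi = rng.normal(size=(n, m))
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.random(k) + 0.5   # sparse, non-negative
y = Phi @ x_true + 0.01 * rng.normal(size=n)

x_hat, res = nnls(Phi, y)                   # baseline: min ||y - Phi x||_2 s.t. x >= 0
support_hit = np.intersect1d(np.flatnonzero(x_true), np.argsort(x_hat)[-k:])
print(res, len(support_hit), "of", k, "true atoms in the top-k estimate")
```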
Model-Coupled Autoencoder for Time Series Visualisation
Title | Model-Coupled Autoencoder for Time Series Visualisation |
Authors | Nikolaos Gianniotis, Sven D. Kügler, Peter Tiňo, Kai L. Polsterer |
Abstract | We present an approach for the visualisation of a set of time series that combines an echo state network with an autoencoder. For each time series in the dataset we train an echo state network, using a common and fixed reservoir of hidden neurons, and use the optimised readout weights as the new representation. Dimensionality reduction is then performed via an autoencoder on the readout weight representations. The crux of the work is to equip the autoencoder with a loss function that correctly interprets the reconstructed readout weights by associating them with a reconstruction error measured in the data space of sequences. This essentially amounts to measuring the predictive performance that the reconstructed readout weights exhibit on their corresponding sequences when plugged back into the echo state network with the same fixed reservoir. We demonstrate that the proposed visualisation framework can deal with both real-valued and binary sequences. We derive magnification factors in order to analyse distance preservations and distortions in the visualisation space. The versatility and advantages of the proposed method are demonstrated on datasets of time series that originate from diverse domains. |
Tasks | Dimensionality Reduction, Time Series |
Published | 2016-01-21 |
URL | http://arxiv.org/abs/1601.05654v1 |
http://arxiv.org/pdf/1601.05654v1.pdf | |
PWC | https://paperswithcode.com/paper/model-coupled-autoencoder-for-time-series |
Repo | |
Framework | |
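The sketch below illustrates the two ingredients the abstract describes: fitting per-series readout weights against a shared fixed reservoir, and a "data-space" loss that scores (possibly reconstructed) readout weights by their prediction error on the original sequence. The reservoir size, scaling and the one-step-ahead prediction task are assumptions, and the autoencoder over the weight vectors is omitted.

```python
# Sketch only: (1) ridge readouts against a shared fixed reservoir,
# (2) the data-space loss that scores a (reconstructed) readout by its
# one-step-ahead prediction error. The autoencoder itself is omitted.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                           # reservoir size (assumed)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1
w_in = rng.normal(size=N)

def reservoir_states(u):
    x, states = np.zeros(N), []
    for u_t in u[:-1]:                            # predict u[t+1] from the state after u[t]
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x)
    return np.array(states)

def fit_readout(u, lam=1e-2):
    S, target = reservoir_states(u), u[1:]
    return np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ target)   # ridge readout

def data_space_loss(readout, u):
    """Prediction error of a (possibly reconstructed) readout on sequence u."""
    S, target = reservoir_states(u), u[1:]
    return np.mean((S @ readout - target) ** 2)

u = np.sin(0.2 * np.arange(300)) + 0.05 * rng.normal(size=300)
w_out = fit_readout(u)
print(data_space_loss(w_out, u))                  # the loss an autoencoder would be trained with
```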
Blind signal separation and identification of mixtures of images
Title | Blind signal separation and identification of mixtures of images |
Authors | Felipe P. do Carmo, Joaquim T. de Assis, Vania V. Estrela, Alessandra M. Coelho |
Abstract | In this paper, a fresh procedure to handle image mixtures by means of blind signal separation, relying on a combination of second-order and higher-order statistics techniques, is introduced. The problem of blind signal separation is recast in the wavelet domain. The key idea behind this method is that the image mixture can be decomposed into the sum of uncorrelated and/or independent sub-bands using the wavelet transform. Initially, the observed image is pre-whitened in the space domain. Afterwards, an initial separation matrix is estimated from the second-order statistics de-correlation model in the wavelet domain. Later, this matrix is used as an initial separation matrix for the higher-order statistics stage in order to find the best separation matrix. The suggested algorithm was tested using natural images. Experiments have confirmed that the proposed process provides promising outcomes in identifying an image from noisy mixtures of images. |
Tasks | |
Published | 2016-03-26 |
URL | http://arxiv.org/abs/1603.08095v1 |
http://arxiv.org/pdf/1603.08095v1.pdf | |
PWC | https://paperswithcode.com/paper/blind-signal-separation-and-identification-of |
Repo | |
Framework | |
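The sketch below conveys the pipeline flavour only: estimate a separation matrix from wavelet-domain statistics of the mixtures and apply it back to the pixel-domain mixtures. FastICA here stands in for the paper's two-stage (second-order then higher-order statistics) estimation, and the wavelet choice is an assumption.

```python
# Pipeline-flavour sketch only: estimate an unmixing matrix from wavelet-domain
# coefficients and apply it to the pixel-domain mixtures. FastICA is used as a
# stand-in for the paper's second-order + higher-order statistics stages.
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def separate(mixtures, wavelet="db2"):
    """mixtures: array of shape (n_mix, H, W), each a mixed image."""
    n_mix, H, W = mixtures.shape
    # Flattened wavelet detail coefficients of each mixture
    wav = []
    for img in mixtures:
        _, (cH, cV, cD) = pywt.dwt2(img, wavelet)
        wav.append(np.concatenate([cH.ravel(), cV.ravel(), cD.ravel()]))
    X_wav = np.array(wav).T                      # (n_coeffs, n_mix)

    ica = FastICA(n_components=n_mix, random_state=0)
    ica.fit(X_wav)                               # unmixing estimated in the wavelet domain

    X_pix = mixtures.reshape(n_mix, -1).T        # (n_pixels, n_mix)
    S = (X_pix - X_pix.mean(axis=0)) @ ica.components_.T
    return S.T.reshape(n_mix, H, W)              # estimated sources (up to scale and order)
```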
A simple and efficient SNN and its performance & robustness evaluation method to enable hardware implementation
Title | A simple and efficient SNN and its performance & robustness evaluation method to enable hardware implementation |
Authors | Anmol Biswas, Sidharth Prasad, Sandip Lashkare, Udayan Ganguly |
Abstract | Spiking Neural Networks (SNNs) are more closely related to brain-like computation and inspire hardware implementation. This is enabled by small networks that give high performance on standard classification problems. In the literature, typical SNNs are deep and complex in terms of network structure, weight update rules and learning algorithms, which makes them difficult to translate into hardware. In this paper, we first develop a simple 2-layered network in software that matches the state of the art among SNNs on four different standard data-sets while being more efficient. For example, it uses fewer neurons (3x), synapses (3.5x) and training epochs (30x) for the Fisher Iris classification problem. The efficient network is based on effective population coding and synapse-neuron co-design. Second, we develop a computationally efficient (15000x) and accurate (correlation of 0.98) method to evaluate the performance of the network without standard recognition tests. Third, we show that the method produces a robustness metric that can be used to evaluate noise tolerance. |
Tasks | |
Published | 2016-12-07 |
URL | http://arxiv.org/abs/1612.02233v1 |
http://arxiv.org/pdf/1612.02233v1.pdf | |
PWC | https://paperswithcode.com/paper/a-simple-and-efficient-snn-and-its |
Repo | |
Framework | |
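The abstract credits much of the efficiency to effective population coding. Its exact scheme is not given in the abstract, so the sketch below shows one common input encoding for SNNs: Gaussian receptive-field population coding of a scalar feature into first-spike latencies. The neuron count, width and latency window are assumptions, and the paper's synapse-neuron co-design is not reproduced.

```python
# Illustration only: Gaussian receptive-field population coding of a scalar
# feature into first-spike latencies -- one common SNN input encoding. Not the
# paper's specific scheme; neuron count, sigma and latency window are assumed.
import numpy as np

def population_encode(value, vmin, vmax, n_neurons=10, t_max=100.0):
    """Return one spike latency (ms) per encoding neuron; strong activation -> early spike."""
    centers = np.linspace(vmin, vmax, n_neurons)
    sigma = (vmax - vmin) / (n_neurons - 1)
    activation = np.exp(-0.5 * ((value - centers) / sigma) ** 2)   # in (0, 1]
    return t_max * (1.0 - activation)                              # latency in [0, t_max)

# Example: encode a sepal length of 5.1 cm (Fisher Iris range is roughly 4.3-7.9 cm)
print(np.round(population_encode(5.1, 4.3, 7.9), 1))
```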
Stance Classification in Rumours as a Sequential Task Exploiting the Tree Structure of Social Media Conversations
Title | Stance Classification in Rumours as a Sequential Task Exploiting the Tree Structure of Social Media Conversations |
Authors | Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, Michal Lukasik |
Abstract | Rumour stance classification, the task that determines if each tweet in a collection discussing a rumour is supporting, denying, questioning or simply commenting on the rumour, has been attracting substantial interest. Here we introduce a novel approach that makes use of the sequence of transitions observed in tree-structured conversation threads in Twitter. The conversation threads are formed by harvesting users’ replies to one another, which results in a nested tree-like structure. Previous work addressing the stance classification task has treated each tweet as a separate unit. Here we analyse tweets by virtue of their position in a sequence and test two sequential classifiers, Linear-Chain CRF and Tree CRF, each of which makes different assumptions about the conversational structure. We experiment with eight Twitter datasets, collected during breaking news, and show that exploiting the sequential structure of Twitter conversations achieves significant improvements over the non-sequential methods. Our work is the first to model Twitter conversations as a tree structure in this manner, introducing a novel way of tackling NLP tasks on Twitter conversations. |
Tasks | Rumour Detection |
Published | 2016-09-28 |
URL | http://arxiv.org/abs/1609.09028v2 |
http://arxiv.org/pdf/1609.09028v2.pdf | |
PWC | https://paperswithcode.com/paper/stance-classification-in-rumours-as-a |
Repo | |
Framework | |
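The linear-chain variant treats the tweets along one conversation branch as a sequence. The minimal sklearn-crfsuite sketch below shows that setup with toy, hand-made features and labels; the paper's actual feature set, datasets and the Tree CRF variant are not reproduced.

```python
# Minimal linear-chain CRF sketch with sklearn-crfsuite: each training instance
# is one conversation branch (a sequence of tweets), each tweet a feature dict.
# Toy features and data below are placeholders, not the paper's feature set.
import sklearn_crfsuite

def tweet_features(tweet, position):
    return {
        "bias": 1.0,
        "is_source": position == 0,               # source tweet vs. reply
        "has_question_mark": "?" in tweet["text"],
        "word_count": len(tweet["text"].split()),
    }

# Each branch: tweets with text and a stance in {"support", "deny", "query", "comment"}
branches = [
    [{"text": "Big explosion reported downtown", "stance": "support"},
     {"text": "Is this confirmed?", "stance": "query"},
     {"text": "No, this is a hoax", "stance": "deny"}],
    [{"text": "Airport closed due to incident", "stance": "support"},
     {"text": "Saw nothing unusual there today", "stance": "deny"}],
]
X = [[tweet_features(t, i) for i, t in enumerate(b)] for b in branches]
y = [[t["stance"] for t in b] for b in branches]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, y)
print(crf.predict(X))                              # per-tweet stance along each branch
```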
Learning to Communicate: Channel Auto-encoders, Domain Specific Regularizers, and Attention
Title | Learning to Communicate: Channel Auto-encoders, Domain Specific Regularizers, and Attention |
Authors | Timothy J O’Shea, Kiran Karra, T. Charles Clancy |
Abstract | We address the problem of learning efficient and adaptive ways to communicate binary information over an impaired channel. We treat the problem as reconstruction optimization through impairment layers in a channel autoencoder and introduce several new domain-specific regularizing layers to emulate common channel impairments. We also apply a radio transformer network based attention model on the input of the decoder to help recover canonical signal representations. We demonstrate some promising initial capacity results from this architecture and address several remaining challenges before such a system could become practical. |
Tasks | |
Published | 2016-08-23 |
URL | http://arxiv.org/abs/1608.06409v1 |
http://arxiv.org/pdf/1608.06409v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-to-communicate-channel-auto-encoders |
Repo | |
Framework | |
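A compact PyTorch sketch of the core idea in the abstract: an autoencoder whose code passes through a channel-impairment layer, here additive white Gaussian noise after a power-normalisation constraint. The layer sizes, SNR and training setup are assumptions, and the paper's other impairment regularizers and radio-transformer attention decoder are not included.

```python
# Compact channel-autoencoder sketch: k information bits are encoded into n
# real channel uses, power-normalised, corrupted by an AWGN "channel layer",
# and decoded back to bit logits. Other impairments and attention are omitted.
import torch
import torch.nn as nn

class ChannelAutoencoder(nn.Module):
    def __init__(self, k=4, n=8, snr_db=7.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(k, 32), nn.ReLU(), nn.Linear(32, n))
        self.decoder = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, k))
        self.noise_std = 10 ** (-snr_db / 20.0)

    def forward(self, bits):
        x = self.encoder(bits)
        x = x / x.norm(dim=1, keepdim=True).clamp_min(1e-8) * x.shape[1] ** 0.5  # power constraint
        y = x + self.noise_std * torch.randn_like(x)         # AWGN impairment layer
        return self.decoder(y)                               # logits for each bit

model = ChannelAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(2000):
    bits = torch.randint(0, 2, (256, 4)).float()
    loss = loss_fn(model(bits), bits)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))                                           # bit reconstruction loss
```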
Find Your Own Way: Weakly-Supervised Segmentation of Path Proposals for Urban Autonomy
Title | Find Your Own Way: Weakly-Supervised Segmentation of Path Proposals for Urban Autonomy |
Authors | Dan Barnes, Will Maddern, Ingmar Posner |
Abstract | We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving. |
Tasks | Autonomous Driving, Semantic Segmentation |
Published | 2016-10-05 |
URL | http://arxiv.org/abs/1610.01238v3 |
http://arxiv.org/pdf/1610.01238v3.pdf | |
PWC | https://paperswithcode.com/paper/find-your-own-way-weakly-supervised |
Repo | |
Framework | |
E-commerce in Your Inbox: Product Recommendations at Scale
Title | E-commerce in Your Inbox: Product Recommendations at Scale |
Authors | Mihajlo Grbovic, Vladan Radosavljevic, Nemanja Djuric, Narayan Bhamidipati, Jaikit Savla, Varun Bhagwan, Doug Sharp |
Abstract | In recent years online advertising has become increasingly ubiquitous and effective. Advertisements shown to visitors fund sites and apps that publish digital content, manage social networks, and operate e-mail services. Given such a large variety of internet resources, determining an appropriate type of advertising for a given platform has become critical to financial success. Native advertisements, namely ads that are similar in look and feel to content, have had great success in news and social feeds. However, to date there has not been a winning formula for ads in e-mail clients. In this paper we describe a system that leverages user purchase history determined from e-mail receipts to deliver highly personalized product ads to Yahoo Mail users. We propose to use a novel neural language-based algorithm specifically tailored for delivering effective product recommendations, which was evaluated against baselines that included showing popular products and products predicted based on co-occurrence. We conducted rigorous offline testing using a large-scale product purchase data set, covering purchases of more than 29 million users from 172 e-commerce websites. Ads in the form of product recommendations were successfully tested on online traffic, where we observed a steady 9% lift in click-through rates over other ad formats in mail, as well as comparable lift in conversion rates. Following successful tests, the system was launched into production during the holiday season of 2014. |
Tasks | |
Published | 2016-06-23 |
URL | http://arxiv.org/abs/1606.07154v1 |
http://arxiv.org/pdf/1606.07154v1.pdf | |
PWC | https://paperswithcode.com/paper/e-commerce-in-your-inbox-product |
Repo | |
Framework | |
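The abstract describes a neural language-model approach over purchase histories extracted from e-mail receipts. Assuming a word2vec-style setup, with product IDs as "words" and a user's chronological purchases as a "sentence", the gensim sketch below conveys the general flavour; the hyperparameters and the paper's exact model variant are not taken from the paper.

```python
# Flavour sketch only (assumption: a word2vec-style embedding of purchase
# sequences). Product IDs act as words, a user's purchases as a sentence.
# Requires gensim >= 4; hyperparameters are illustrative.
from gensim.models import Word2Vec

# One chronological list of product IDs per user (toy placeholder data)
purchase_sequences = [
    ["laptop_123", "laptop_sleeve_77", "usb_hub_9"],
    ["running_shoes_41", "socks_12", "water_bottle_3"],
    ["laptop_123", "usb_hub_9", "monitor_55"],
]
model = Word2Vec(purchase_sequences, vector_size=64, window=5, min_count=1, sg=1, epochs=50)

# Recommend products whose embeddings are closest to a recently bought item
print(model.wv.most_similar("laptop_123", topn=3))
```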
Fast Single Shot Detection and Pose Estimation
Title | Fast Single Shot Detection and Pose Estimation |
Authors | Patrick Poirson, Phil Ammirato, Cheng-Yang Fu, Wei Liu, Jana Kosecka, Alexander C. Berg |
Abstract | For applications in navigation and robotics, estimating the 3D pose of objects is as important as detection. Many approaches to pose estimation rely on detecting or tracking parts or keypoints [11, 21]. In this paper we build on a recent state-of-the-art convolutional network for sliding-window detection [10] to provide detection and rough pose estimation in a single shot, without intermediate stages of detecting parts or initial bounding boxes. While not the first system to treat pose estimation as a categorization problem, this is the first attempt to combine detection and pose estimation at the same level using a deep learning approach. The key to the architecture is a deep convolutional network where scores for the presence of an object category, the offset for its location, and the approximate pose are all estimated on a regular grid of locations in the image. The resulting system is as accurate as recent work on pose estimation (42.4% 8-view mAVP on Pascal 3D+ [21]) and significantly faster (46 frames per second (FPS) on a TITAN X GPU). This approach to detection and rough pose estimation is fast and accurate enough to be widely applied as a pre-processing step for tasks including high-accuracy pose estimation, object tracking and localization, and vSLAM. |
Tasks | Object Tracking, Pose Estimation |
Published | 2016-09-19 |
URL | http://arxiv.org/abs/1609.05590v1 |
http://arxiv.org/pdf/1609.05590v1.pdf | |
PWC | https://paperswithcode.com/paper/fast-single-shot-detection-and-pose |
Repo | |
Framework | |
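The key architectural idea in the abstract is that class scores, location offsets and approximate pose are all predicted on a regular grid of image locations. The PyTorch sketch below shows the structure of such a shared multi-task prediction head; the backbone, anchor matching and losses are omitted, and the channel counts, class and pose-bin numbers are assumptions.

```python
# Structural sketch of a shared SSD-style prediction head: at every grid cell,
# small convolutions predict class scores, box offsets and discretized pose
# bins. Backbone, anchor matching and losses are omitted; sizes are assumed.
import torch
import torch.nn as nn

class DetectionPoseHead(nn.Module):
    def __init__(self, in_ch=256, n_classes=21, n_pose_bins=8, anchors_per_cell=4):
        super().__init__()
        a = anchors_per_cell
        self.cls  = nn.Conv2d(in_ch, a * n_classes,   kernel_size=3, padding=1)
        self.box  = nn.Conv2d(in_ch, a * 4,           kernel_size=3, padding=1)  # (dx, dy, dw, dh)
        self.pose = nn.Conv2d(in_ch, a * n_pose_bins, kernel_size=3, padding=1)

    def forward(self, feat):                       # feat: (B, in_ch, H, W) grid of locations
        return self.cls(feat), self.box(feat), self.pose(feat)

head = DetectionPoseHead()
feat = torch.randn(1, 256, 19, 19)                 # one feature map from a backbone
cls, box, pose = head(feat)
print(cls.shape, box.shape, pose.shape)            # per-cell, per-anchor predictions
```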
An Extended Neo-Fuzzy Neuron and its Adaptive Learning Algorithm
Title | An Extended Neo-Fuzzy Neuron and its Adaptive Learning Algorithm |
Authors | Yevgeniy V. Bodyanskiy, Oleksii K. Tyshchenko, Daria S. Kopaliani |
Abstract | A modification of the neo-fuzzy neuron, the extended neo-fuzzy neuron (ENFN), is proposed that is characterized by improved approximating properties. An adaptive learning algorithm is proposed that has both tracking and smoothing properties. A distinctive feature of the ENFN is its computational simplicity compared to other artificial neural networks and neuro-fuzzy systems. |
Tasks | |
Published | 2016-10-20 |
URL | http://arxiv.org/abs/1610.06483v1 |
http://arxiv.org/pdf/1610.06483v1.pdf | |
PWC | https://paperswithcode.com/paper/an-extended-neo-fuzzy-neuron-and-its-adaptive |
Repo | |
Framework | |
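For background, a classic neo-fuzzy neuron attaches a weight to each membership function of each input and sums the weighted memberships, so the output is linear in the weights and an LMS-style rule can adapt it online. The sketch below shows that baseline with triangular memberships; the paper's extended neuron and its specific tracking/smoothing learning algorithm are not reproduced, and the grid size and learning rate are assumptions.

```python
# Minimal sketch of a classic (non-extended) neo-fuzzy neuron: triangular
# membership functions per input, each with its own weight; output is linear
# in the weights, so an LMS-style rule adapts it online.
import numpy as np

class NeoFuzzyNeuron:
    def __init__(self, n_inputs, n_mfs=5, lo=0.0, hi=1.0, lr=0.1):
        self.centers = np.linspace(lo, hi, n_mfs)           # triangular MF centres
        self.width = self.centers[1] - self.centers[0]
        self.w = np.zeros((n_inputs, n_mfs))
        self.lr = lr

    def _memberships(self, x):
        # Triangular membership of each input in each fuzzy set, shape (n_inputs, n_mfs)
        return np.maximum(0.0, 1.0 - np.abs(x[:, None] - self.centers[None, :]) / self.width)

    def predict(self, x):
        return float(np.sum(self._memberships(x) * self.w))

    def update(self, x, target):
        mu = self._memberships(x)
        error = target - np.sum(mu * self.w)
        self.w += self.lr * error * mu                       # LMS / gradient step
        return error

# Online learning of the additively separable target y = 2*x1 - x2 on [0, 1]^2
rng = np.random.default_rng(0)
nfn = NeoFuzzyNeuron(n_inputs=2)
for _ in range(5000):
    x = rng.random(2)
    nfn.update(x, 2.0 * x[0] - x[1])
print(nfn.predict(np.array([0.5, 0.5])))                     # should be close to 0.5
```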