January 29, 2020

3418 words 17 mins read

Paper Group ANR 674

Are Bitcoins price predictable? Evidence from machine learning techniques using technical indicators

Title Are Bitcoins price predictable? Evidence from machine learning techniques using technical indicators
Authors Samuel Asante Gyamerah
Abstract The uncertainties in the future Bitcoin price make it difficult to predict the price of Bitcoin accurately. Accurately predicting the price of Bitcoin is therefore important for the decision-making process of investors and market players in the cryptocurrency market. Using historical data from 01/01/2012 to 16/08/2019, machine learning techniques (generalized linear model via penalized maximum likelihood, random forest, support vector regression with linear kernel, and a stacking ensemble) were used to forecast the price of Bitcoin. The prediction models employed key and high-dimensional technical indicators as the predictors. The performance of these techniques was evaluated using mean absolute percentage error (MAPE), root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R-squared). The performance metrics revealed that the stacking ensemble model with two base learners (random forest and generalized linear model via penalized maximum likelihood) and support vector regression with a linear kernel as meta-learner was the optimal model for forecasting Bitcoin price. The MAPE, RMSE, MAE, and R-squared values for the stacking ensemble model were 0.0191%, 15.5331 USD, 124.5508 USD, and 0.9967 respectively. These values show a high degree of reliability in predicting the price of Bitcoin using the stacking ensemble model. Accurately predicting the future price of Bitcoin will yield significant returns for investors and market players in the cryptocurrency market.
Tasks Decision Making
Published 2019-09-03
URL https://arxiv.org/abs/1909.01268v1
PDF https://arxiv.org/pdf/1909.01268v1.pdf
PWC https://paperswithcode.com/paper/are-bitcoins-price-predictable-evidence-from
Repo
Framework
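
The stacking setup described in the abstract maps naturally onto scikit-learn. Below is a minimal sketch, not the author's code: an elastic net stands in for the penalized-maximum-likelihood GLM, the features and prices are random placeholders for the technical indicators and Bitcoin closing prices, and the same four metrics are computed at the end.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import ElasticNet
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# X stands in for technical-indicator features, y for the Bitcoin closing price.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = 100 + X @ rng.normal(size=10) + rng.normal(scale=0.1, size=1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=300, random_state=0)),
        ("glm", ElasticNet(alpha=0.01)),   # stand-in for the penalized GLM
    ],
    final_estimator=SVR(kernel="linear"),  # linear-kernel SVR as meta-learner
)
stack.fit(X_train, y_train)
pred = stack.predict(X_test)

mape = np.mean(np.abs((y_test - pred) / y_test)) * 100
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"MAPE={mape:.4f}%  RMSE={rmse:.4f}  "
      f"MAE={mean_absolute_error(y_test, pred):.4f}  R2={r2_score(y_test, pred):.4f}")
```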

A Novel Approach to Enhance the Performance of Semantic Search in Bengali using Neural Net and other Classification Techniques

Title A Novel Approach to Enhance the Performance of Semantic Search in Bengali using Neural Net and other Classification Techniques
Authors Arijit Das, Diganta Saha
Abstract Search has long been an important tool for users to retrieve information. Syntactic search matches documents or objects containing specific keywords and uses signals such as user history, location, and preference to improve the results. However, it often happens that the query and the best answer have no terms, or very few terms, in common, and syntactic search cannot perform properly in such cases. Semantic search, on the other hand, resolves these issues but suffers from a lack of annotation and the absence of a WordNet in the case of low-resource languages. In this work, we demonstrate an end-to-end procedure to improve the performance of semantic search using semi-supervised and unsupervised learning algorithms. An available Bengali repository annotated with seven types of semantic properties was chosen to develop the system. Performance has been tested using Support Vector Machine, Naive Bayes, Decision Tree and Artificial Neural Network (ANN) classifiers. Our system learns to predict the correct semantics using the knowledge base over the course of learning. A repository containing around a million sentences, a product of the TDIL project of the Govt. of India, was used to test our system in the first instance; testing was then carried out for other languages. Being a cognitive system, it may be very useful for improving user satisfaction in e-Governance or m-Governance in a multilingual environment, as well as for other applications.
Tasks
Published 2019-11-04
URL https://arxiv.org/abs/1911.01256v2
PDF https://arxiv.org/pdf/1911.01256v2.pdf
PWC https://paperswithcode.com/paper/a-novel-approach-to-enhance-the-performance
Repo
Framework
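
As a rough illustration of the classifier comparison mentioned above (not the authors' pipeline), the sketch below runs the four model families over TF-IDF character n-grams; the sentences, labels, and hyperparameters are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Toy placeholders; in practice, load Bengali sentences and their semantic labels.
sentences = [f"placeholder sentence number {i}" for i in range(140)]
labels = [f"semantic_class_{i % 7}" for i in range(140)]   # seven semantic properties

models = {
    "SVM": LinearSVC(),
    "Naive Bayes": MultinomialNB(),
    "Decision Tree": DecisionTreeClassifier(),
    "ANN": MLPClassifier(hidden_layer_sizes=(128,), max_iter=500),
}
for name, clf in models.items():
    # Character n-gram TF-IDF avoids depending on a Bengali tokenizer or WordNet.
    pipe = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)), clf)
    scores = cross_val_score(pipe, sentences, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```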

Kinetic Song Comprehension: Deciphering Personal Listening Habits via Phone Vibrations

Title Kinetic Song Comprehension: Deciphering Personal Listening Habits via Phone Vibrations
Authors Richard Matovu, Isaac Griswold-Steiner, Abdul Serwadda
Abstract Music is an expression of our identity, showing a significant correlation with other personal traits, beliefs, and habits. If accessed by a malicious entity, an individual’s music listening habits could be used to make critical inferences about the user. In this paper, we showcase an attack in which the vibrations propagated through a user’s phone while playing music via its speakers can be used to detect and classify songs. Our attack shows that known songs can be detected with an accuracy of just under 80%, while a corpus of 100 songs can be classified with an accuracy greater than 80%. We investigate such questions under a wide variety of experimental scenarios involving three surfaces and five phone speaker volumes. Although users can mitigate some of the risk by using a phone cover to dampen the vibrations, we show that a sophisticated attacker could adapt the attack to still classify songs with decent accuracy. This paper demonstrates a new way in which motion sensor data can be leveraged to intrude on a user’s music preferences without their express permission. Whether this information is leveraged for financial gain or political purposes, our research makes a case for why more rigorous methods of protecting user data should be utilized by companies and, if necessary, individuals.
Tasks
Published 2019-09-19
URL https://arxiv.org/abs/1909.09123v1
PDF https://arxiv.org/pdf/1909.09123v1.pdf
PWC https://paperswithcode.com/paper/kinetic-song-comprehension-deciphering
Repo
Framework
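
The attack pipeline is not released with the paper, but the general recipe of windowing a tri-axial accelerometer signal and classifying spectral features can be sketched as follows; the data, window size, and classifier choice here are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def spectral_features(window: np.ndarray) -> np.ndarray:
    """Concatenated magnitude spectra of the three accelerometer axes."""
    return np.concatenate([np.abs(np.fft.rfft(window[:, axis]))
                           for axis in range(window.shape[1])])

# Placeholder data: windows of accelerometer readings captured while songs play
# through the phone speaker, and the id of the song playing in each window.
rng = np.random.default_rng(0)
recordings = rng.normal(size=(500, 256, 3))    # 500 windows, 256 samples, 3 axes
song_ids = rng.integers(0, 10, size=500)       # toy stand-in for the 100-song corpus

X = np.stack([spectral_features(w) for w in recordings])
X_train, X_test, y_train, y_test = train_test_split(X, song_ids, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy on toy data:", clf.score(X_test, y_test))
```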

Self-Paced Contextual Reinforcement Learning

Title Self-Paced Contextual Reinforcement Learning
Authors Pascal Klink, Hany Abdulsamad, Boris Belousov, Jan Peters
Abstract Generalization and adaptation of learned skills to novel situations is a core requirement for intelligent autonomous robots. Although contextual reinforcement learning provides a principled framework for learning and generalization of behaviors across related tasks, it generally relies on uninformed sampling of environments from an unknown, uncontrolled context distribution, thus missing the benefits of structured, sequential learning. We introduce a novel relative entropy reinforcement learning algorithm that gives the agent the freedom to control the intermediate task distribution, allowing for its gradual progression towards the target context distribution. Empirical evaluation shows that the proposed curriculum learning scheme drastically improves sample efficiency and enables learning in scenarios with both broad and sharp target context distributions in which classical approaches perform sub-optimally.
Tasks
Published 2019-10-07
URL https://arxiv.org/abs/1910.02826v1
PDF https://arxiv.org/pdf/1910.02826v1.pdf
PWC https://paperswithcode.com/paper/self-paced-contextual-reinforcement-learning
Repo
Framework
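
The paper's relative entropy algorithm is more involved than what fits here, but the core self-paced idea, moving a controllable context distribution toward the target only as fast as performance and a divergence budget allow, can be sketched in a simplified 1-D form; all distributions, thresholds, and budgets below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for 1-D Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

mu, var = 0.0, 0.25          # current (easy) context distribution
mu_t, var_t = 3.0, 1.0       # target context distribution
KL_BUDGET = 0.05             # maximum KL step per curriculum update
PERF_THRESHOLD = 0.8         # advance only when the average return is high enough

def update_curriculum(mu, var, avg_return):
    """Interpolate toward the target as far as the KL budget allows."""
    if avg_return < PERF_THRESHOLD:
        return mu, var                      # keep practising the current contexts
    lo, hi = 0.0, 1.0                       # binary search over the interpolation step
    for _ in range(30):
        alpha = 0.5 * (lo + hi)
        cand = ((1 - alpha) * mu + alpha * mu_t, (1 - alpha) * var + alpha * var_t)
        if kl_gauss(cand[0], cand[1], mu, var) <= KL_BUDGET:
            lo = alpha
        else:
            hi = alpha
    return (1 - lo) * mu + lo * mu_t, (1 - lo) * var + lo * var_t

mu, var = update_curriculum(mu, var, avg_return=0.9)   # pretend the agent did well
contexts = np.random.default_rng(0).normal(mu, np.sqrt(var), size=16)
print(f"new context distribution: N({mu:.3f}, {var:.3f})", contexts[:3])
```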

Time Series Vector Autoregression Prediction of the Ecological Footprint based on Energy Parameters

Title Time Series Vector Autoregression Prediction of the Ecological Footprint based on Energy Parameters
Authors Radmila Janković, Ivan Mihajlović, Alessia Amelio
Abstract Sustainability has become the most important component of world development as countries worldwide fight the battle against climate change. To understand the effects of climate change, the ecological footprint, along with biocapacity, should be observed. A large part of the ecological footprint, the carbon footprint, is most directly associated with energy, and specifically with fuel sources. This paper develops a time series vector autoregression prediction model of the ecological footprint (EF) based on energy parameters. The objective of the paper is to forecast the EF based solely on energy parameters and to determine the relationship between energy and the EF. The dataset included global yearly observations of the variables for the period 1971-2014. Predictions were generated for every variable used in the model for the period 2015-2024. The results indicate that the ecological footprint of consumption will continue increasing, as will primary energy consumption from different sources. However, energy consumption from coal sources is predicted to have a declining trend.
Tasks Time Series
Published 2019-10-25
URL https://arxiv.org/abs/1910.11800v1
PDF https://arxiv.org/pdf/1910.11800v1.pdf
PWC https://paperswithcode.com/paper/time-series-vector-autoregression-prediction
Repo
Framework
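
A vector autoregression of this kind is straightforward to reproduce with statsmodels. The sketch below uses synthetic placeholder series and an arbitrary lag order of 2; the real model would be fit on the 1971-2014 global observations and forecast 10 steps ahead for 2015-2024.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder yearly series; column names stand in for the real EF/energy variables.
rng = np.random.default_rng(0)
years = list(range(1971, 2015))                    # 44 yearly observations
df = pd.DataFrame({
    "ecological_footprint": np.cumsum(rng.normal(0.10, 0.05, len(years))),
    "energy_coal":          np.cumsum(rng.normal(0.05, 0.05, len(years))),
    "energy_oil":           np.cumsum(rng.normal(0.08, 0.05, len(years))),
}, index=years)

results = VAR(df).fit(2)                           # lag order of 2, for illustration
forecast = results.forecast(df.values[-results.k_ar:], steps=10)   # 2015-2024
print(pd.DataFrame(forecast, columns=df.columns, index=range(2015, 2025)))
```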

From web crawled text to project descriptions: automatic summarizing of social innovation projects

Title From web crawled text to project descriptions: automatic summarizing of social innovation projects
Authors Nikola Milosevic, Dimitar Marinov, Abdullah Gok, Goran Nenadic
Abstract In the past decade, social innovation projects have gained the attention of policy makers, as they address important social issues in an innovative manner. A database of social innovation is an important source of information that can expand collaboration between social innovators, drive policy, and serve as an important resource for research. Such a database needs to have its projects described and summarized. In this paper, we propose and compare several methods (e.g. SVM-based, recurrent-neural-network-based, and ensemble-based) for describing projects based on the text that is available on project websites. We also propose a new metric for automated evaluation of summaries based on topic modelling.
Tasks
Published 2019-05-22
URL https://arxiv.org/abs/1905.09086v1
PDF https://arxiv.org/pdf/1905.09086v1.pdf
PWC https://paperswithcode.com/paper/from-web-crawled-text-to-project-descriptions
Repo
Framework
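
The paper's evaluation metric is not spelled out in the abstract, so the sketch below only illustrates the general idea of topic-model-based summary evaluation: fit LDA on the source texts and score a summary by how closely its topic distribution matches that of the full description. The corpus, topic count, and similarity measure are assumptions.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus; in practice these are web-crawled project descriptions.
source_texts = ["social innovation project helping local communities",
                "platform connecting volunteers with charities online"] * 10

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(source_texts)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(counts)

def topic_similarity(source: str, summary: str) -> float:
    """1 - Jensen-Shannon distance between the topic distributions (1 = identical)."""
    dists = lda.transform(vectorizer.transform([source, summary]))
    return 1.0 - jensenshannon(dists[0], dists[1])

print(topic_similarity(source_texts[0], "a project that helps local communities"))
```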

Radial Bayesian Neural Networks: Beyond Discrete Support In Large-Scale Bayesian Deep Learning

Title Radial Bayesian Neural Networks: Beyond Discrete Support In Large-Scale Bayesian Deep Learning
Authors Sebastian Farquhar, Michael Osborne, Yarin Gal
Abstract We propose Radial Bayesian Neural Networks (BNNs): a variational approximate posterior for BNNs which scales well to large models while maintaining a distribution over weight-space with full support. Other scalable Bayesian deep learning methods, like MC dropout or deep ensembles, have discrete support: they assign zero probability to almost all of the weight-space. Unlike these discrete-support methods, Radial BNNs’ full support makes them suitable for use as a prior for sequential inference. In addition, they avoid the conceptual challenges associated with the a priori implausibility of weight distributions with discrete support. The Radial BNN is motivated by avoiding a sampling problem in ‘mean-field’ variational inference (MFVI) caused by the so-called ‘soap-bubble’ pathology of multivariate Gaussians. We show that, unlike MFVI, Radial BNNs are robust to hyperparameters and can be efficiently applied to a challenging real-world medical application without needing ad-hoc tweaks and intensive tuning. In fact, in this setting Radial BNNs outperform discrete-support methods like MC dropout. Lastly, by using Radial BNNs as a theoretically principled, robust alternative to MFVI, we make significant strides in a Bayesian continual learning evaluation.
Tasks Continual Learning
Published 2019-07-01
URL https://arxiv.org/abs/1907.00865v2
PDF https://arxiv.org/pdf/1907.00865v2.pdf
PWC https://paperswithcode.com/paper/radial-bayesian-neural-networks-robust
Repo
Framework
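
The sampling step that separates a radial posterior from a mean-field Gaussian can be written in a few lines. The sketch below follows the description above (noise direction normalised to the unit hypersphere, scaled by a single Gaussian radius) but is not the authors' implementation; the dimensionality is arbitrary.

```python
import torch

def sample_mfvi(mu: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    """Mean-field Gaussian sample: w = mu + sigma * eps, eps ~ N(0, I)."""
    return mu + sigma * torch.randn_like(mu)

def sample_radial(mu: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    """Radial sample: w = mu + sigma * (eps / ||eps||) * r, with scalar r ~ N(0, 1)."""
    eps = torch.randn_like(mu)
    direction = eps / eps.norm()        # uniformly distributed direction
    r = torch.randn(1)                  # a single Gaussian radius, no soap bubble
    return mu + sigma * direction * r

mu, sigma = torch.zeros(10_000), torch.ones(10_000)
print("MFVI sample norm:  ", sample_mfvi(mu, sigma).norm().item())    # ~ sqrt(d) = 100
print("radial sample norm:", sample_radial(mu, sigma).norm().item())  # ~ |r|, order 1
```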

Hierarchical Scene Coordinate Classification and Regression for Visual Localization

Title Hierarchical Scene Coordinate Classification and Regression for Visual Localization
Authors Xiaotian Li, Shuzhe Wang, Yi Zhao, Jakob Verbeek, Juho Kannala
Abstract Visual localization is critical to many applications in computer vision and robotics. To address single-image RGB localization, state-of-the-art feature-based methods match local descriptors between a query image and a pre-built 3D model. Recently, deep neural networks have been exploited to regress the mapping between raw pixels and 3D coordinates in the scene, and thus the matching is implicitly performed by the forward pass through the network. However, in a large and ambiguous environment, learning such a regression task directly can be difficult for a single network. In this work, we present a new hierarchical scene coordinate network to predict pixel scene coordinates in a coarse-to-fine manner from a single RGB image. The network consists of a series of output layers, each of them conditioned on the previous ones. The final output layer predicts the 3D coordinates and the others produce progressively finer discrete location labels. The proposed method outperforms the baseline regression-only network and allows us to train compact models which scale robustly to large environments. It sets a new state of the art for single-image RGB localization performance on the 7-Scenes, 12-Scenes, and Cambridge Landmarks datasets, and on three combined scenes. Moreover, for large-scale outdoor localization on the Aachen Day-Night dataset, we present a hybrid approach which outperforms existing scene coordinate regression methods and significantly reduces the performance gap w.r.t. explicit feature matching methods.
Tasks Data Augmentation, Visual Localization
Published 2019-09-13
URL https://arxiv.org/abs/1909.06216v3
PDF https://arxiv.org/pdf/1909.06216v3.pdf
PWC https://paperswithcode.com/paper/hierarchical-joint-scene-coordinate
Repo
Framework
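
A schematic PyTorch sketch of the coarse-to-fine conditioning described above, not the authors' network: two classification output layers produce progressively finer discrete location labels, each conditioned on the previous one, before a final layer regresses the 3-D scene coordinates. Layer sizes and bin counts are placeholders.

```python
import torch
import torch.nn as nn

class HierarchicalCoordHead(nn.Module):
    def __init__(self, feat_dim=256, coarse_bins=25, fine_bins=25):
        super().__init__()
        self.coarse = nn.Conv2d(feat_dim, coarse_bins, 1)                  # coarse region labels
        self.fine = nn.Conv2d(feat_dim + coarse_bins, fine_bins, 1)        # finer labels, conditioned
        self.regress = nn.Conv2d(feat_dim + coarse_bins + fine_bins, 3, 1) # 3-D scene coordinates

    def forward(self, feats):
        c = self.coarse(feats)
        f = self.fine(torch.cat([feats, c.softmax(dim=1)], dim=1))
        xyz = self.regress(torch.cat([feats, c.softmax(dim=1), f.softmax(dim=1)], dim=1))
        return c, f, xyz   # classification logits at two levels + regressed coordinates

feats = torch.randn(1, 256, 60, 80)   # dense features from a backbone (placeholder)
coarse_logits, fine_logits, coords = HierarchicalCoordHead()(feats)
print(coords.shape)                    # torch.Size([1, 3, 60, 80])
```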

Tracking in Urban Traffic Scenes from Background Subtraction and Object Detection

Title Tracking in Urban Traffic Scenes from Background Subtraction and Object Detection
Authors Hui-Lee Ooi, Guillaume-Alexandre Bilodeau, Nicolas Saunier
Abstract In this paper, we propose to combine detections from background subtraction and from a multiclass object detector for multiple object tracking (MOT) in urban traffic scenes. These objects are associated across frames using spatial, colour and class label information, and trajectory prediction is evaluated to yield the final MOT outputs. The proposed method was tested on the Urban Tracker dataset and shows competitive performance compared to state-of-the-art approaches. The results show that the integration of different detection inputs remains a challenging task that greatly affects MOT performance.
Tasks Multiple Object Tracking, Object Detection, Object Tracking, Trajectory Prediction
Published 2019-05-15
URL https://arxiv.org/abs/1905.06381v1
PDF https://arxiv.org/pdf/1905.06381v1.pdf
PWC https://paperswithcode.com/paper/tracking-in-urban-traffic-scenes-from
Repo
Framework
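
One plausible reading of the association step, combining spatial overlap, colour histograms, and class labels into a single cost solved with the Hungarian algorithm, is sketched below. The cost weights and data structures are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def association_cost(trk, det, w_spatial=0.5, w_colour=0.3, w_class=0.2):
    spatial = 1.0 - iou(trk["box"], det["box"])
    colour = 0.5 * np.abs(trk["hist"] - det["hist"]).sum()   # histogram distance in [0, 1]
    label = 0.0 if trk["label"] == det["label"] else 1.0
    return w_spatial * spatial + w_colour * colour + w_class * label

tracks = [{"box": (10, 10, 50, 80), "hist": np.full(16, 1 / 16), "label": "car"}]
detections = [{"box": (12, 11, 52, 82), "hist": np.full(16, 1 / 16), "label": "car"}]
cost = np.array([[association_cost(t, d) for d in detections] for t in tracks])
rows, cols = linear_sum_assignment(cost)          # optimal track-detection matching
print(list(zip(rows, cols)), cost[rows, cols])
```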

Sparse-to-Dense Hypercolumn Matching for Long-Term Visual Localization

Title Sparse-to-Dense Hypercolumn Matching for Long-Term Visual Localization
Authors Hugo Germain, Guillaume Bourmaud, Vincent Lepetit
Abstract We propose a novel approach to feature point matching, suitable for robust and accurate outdoor visual localization in long-term scenarios. Given a query image, we first match it against a database of registered reference images, using recent retrieval techniques. This gives us a first estimate of the camera pose. To refine this estimate, like previous approaches, we match 2D points across the query image and the retrieved reference image. This step, however, is prone to fail as it is still very difficult to detect and match sparse feature points across images captured in potentially very different conditions. Our key contribution is to show that we need to extract sparse feature points only in the retrieved reference image: We then search for the corresponding 2D locations in the query image exhaustively. This search can be performed efficiently using convolutional operations, and robustly by using hypercolumn descriptors, i.e. image features computed for retrieval. We refer to this method as Sparse-to-Dense Hypercolumn Matching. Because we know the 3D locations of the sparse feature points in the reference images thanks to an offline reconstruction stage, it is then possible to accurately estimate the camera pose from these matches. Our experiments show that this method allows us to outperform the state-of-the-art on several challenging outdoor datasets.
Tasks Visual Localization
Published 2019-07-09
URL https://arxiv.org/abs/1907.03965v2
PDF https://arxiv.org/pdf/1907.03965v2.pdf
PWC https://paperswithcode.com/paper/sparse-to-dense-hypercolumn-matching-for-long
Repo
Framework
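
The key step, correlating sparse reference hypercolumn descriptors exhaustively against the dense query feature map with convolutional operations, can be sketched as follows; the feature maps here are random placeholders and the descriptor dimensionality is an assumption.

```python
import torch
import torch.nn.functional as F

C, H, W = 512, 64, 96
query_feats = F.normalize(torch.randn(1, C, H, W), dim=1)   # dense query hypercolumns
ref_descs = F.normalize(torch.randn(20, C), dim=1)          # 20 sparse reference keypoints

# Correlate every reference descriptor with every query location (a 1x1 convolution).
scores = F.conv2d(query_feats, ref_descs.view(20, C, 1, 1))  # (1, 20, H, W)

flat = scores.view(20, -1).argmax(dim=1)                     # best location per keypoint
matches = torch.stack([flat // W, flat % W], dim=1)          # (row, col) per keypoint
print(matches.shape)                                         # torch.Size([20, 2])
```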

Maximizing Stylistic Control and Semantic Accuracy in NLG: Personality Variation and Discourse Contrast

Title Maximizing Stylistic Control and Semantic Accuracy in NLG: Personality Variation and Discourse Contrast
Authors Vrindavan Harrison, Lena Reed, Shereen Oraby, Marilyn Walker
Abstract Neural generation methods for task-oriented dialogue typically generate from a meaning representation that is populated using a database of domain information, such as a table of data describing a restaurant. While earlier work focused solely on the semantic fidelity of outputs, recent work has started to explore methods for controlling the style of the generated text while simultaneously achieving semantic accuracy. Here we experiment with two stylistic benchmark tasks, generating language that exhibits variation in personality, and generating discourse contrast. We report a huge performance improvement in both stylistic control and semantic accuracy over the state of the art on both of these benchmarks. We test several different models and show that putting stylistic conditioning in the decoder and eliminating the semantic re-ranker used in earlier models results in more than 15 points higher BLEU for Personality, with a reduction of semantic error to near zero. We also report an improvement from .75 to .81 in controlling contrast and a reduction in semantic error from 16% to 2%.
Tasks
Published 2019-07-22
URL https://arxiv.org/abs/1907.09527v1
PDF https://arxiv.org/pdf/1907.09527v1.pdf
PWC https://paperswithcode.com/paper/maximizing-stylistic-control-and-semantic
Repo
Framework
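
A toy sketch of "stylistic conditioning in the decoder": a learned style embedding (for example, a personality label) is concatenated with the decoder input at every step. This only illustrates the conditioning idea; the vocabulary, dimensions, and recurrent cell are placeholders, not the paper's model.

```python
import torch
import torch.nn as nn

class StyleConditionedDecoder(nn.Module):
    def __init__(self, vocab=1000, emb=64, style_count=5, hidden=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, emb)
        self.style_emb = nn.Embedding(style_count, emb)        # one vector per personality
        self.rnn = nn.GRU(emb * 2, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens, style_id):
        style = self.style_emb(style_id).unsqueeze(1).expand(-1, tokens.size(1), -1)
        x = torch.cat([self.tok_emb(tokens), style], dim=-1)   # condition every step
        h, _ = self.rnn(x)
        return self.out(h)

dec = StyleConditionedDecoder()
logits = dec(torch.randint(0, 1000, (2, 12)), torch.tensor([0, 3]))
print(logits.shape)   # torch.Size([2, 12, 1000])
```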

Automatic Fact-Checking Using Context and Discourse Information

Title Automatic Fact-Checking Using Context and Discourse Information
Authors Pepa Atanasova, Preslav Nakov, Lluís Màrquez, Alberto Barrón-Cedeño, Georgi Karadzhov, Tsvetomila Mihaylova, Mitra Mohtarami, James Glass
Abstract We study the problem of automatic fact-checking, paying special attention to the impact of contextual and discourse information. We address two related tasks: (i) detecting check-worthy claims, and (ii) fact-checking claims. We develop supervised systems based on neural networks, kernel-based support vector machines, and combinations thereof, which make use of rich input representations in terms of discourse cues and contextual features. For the check-worthiness estimation task, we focus on political debates, and we model the target claim in the context of the full intervention of a participant and the previous and the following turns in the debate, taking into account contextual meta information. For the fact-checking task, we focus on answer verification in a community forum, and we model the veracity of the answer with respect to the entire question–answer thread in which it occurs as well as with respect to other related posts from the entire forum. We develop annotated datasets for both tasks and we run extensive experimental evaluation, confirming that both types of information —but especially contextual features— play an important role.
Tasks
Published 2019-08-04
URL https://arxiv.org/abs/1908.01328v1
PDF https://arxiv.org/pdf/1908.01328v1.pdf
PWC https://paperswithcode.com/paper/automatic-fact-checking-using-context-and
Repo
Framework

A Novel Design of Adaptive and Hierarchical Convolutional Neural Networks using Partial Reconfiguration on FPGA

Title A Novel Design of Adaptive and Hierarchical Convolutional Neural Networks using Partial Reconfiguration on FPGA
Authors Mohammad Farhadi, Mehdi Ghasemi, Yezhou Yang
Abstract Nowadays, most research in visual recognition using Convolutional Neural Networks (CNNs) follows the “deeper model with deeper confidence” belief to gain higher recognition accuracy. At the same time, a deeper model brings heavier computation. On the other hand, for a large chunk of recognition challenges, a system can classify images correctly using simple models, or so-called shallow networks. Moreover, the implementation of CNNs faces size, weight, and energy constraints on embedded devices. In this paper, we implement adaptive switching between shallow and deep networks to reach the highest throughput on a resource-constrained MPSoC with a CPU and an FPGA. To this end, we develop and present a novel architecture for CNNs in which a gate decides whether using the deeper model is beneficial or not. Due to resource limitations on the FPGA, the idea of partial reconfiguration has been used to accommodate deep CNNs within the FPGA resources. We report experimental results on the CIFAR-10, CIFAR-100, and SVHN datasets to validate our approach. Using a confidence metric as the decision-making factor, only 69.8%, 71.8%, and 43.8% of the computation in the deepest network is performed for CIFAR-10, CIFAR-100, and SVHN respectively, while the desired accuracy is maintained with a throughput of around 400 images per second on the SVHN dataset.
Tasks Decision Making
Published 2019-09-05
URL https://arxiv.org/abs/1909.05653v1
PDF https://arxiv.org/pdf/1909.05653v1.pdf
PWC https://paperswithcode.com/paper/a-novel-design-of-adaptive-and-hierarchical
Repo
Framework
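
The adaptive switching itself is easy to express in software (the FPGA partial-reconfiguration side is not modelled here): run the shallow network first and escalate to the deep network only when the softmax confidence falls below a threshold. The models and threshold below are placeholders.

```python
import torch
import torch.nn.functional as F

def adaptive_predict(x, shallow_net, deep_net, threshold=0.9):
    probs = F.softmax(shallow_net(x), dim=1)
    conf, pred = probs.max(dim=1)
    if conf.item() >= threshold:          # confident: keep the cheap prediction
        return pred.item(), "shallow"
    return deep_net(x).argmax(dim=1).item(), "deep"   # gate: escalate to the deep model

# Toy stand-ins for the shallow and deep classifiers on 32x32 RGB inputs.
shallow_net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
deep_net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
label, path = adaptive_predict(torch.randn(1, 3, 32, 32), shallow_net, deep_net)
print(label, path)
```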

Small Data Challenges in Big Data Era: A Survey of Recent Progress on Unsupervised and Semi-Supervised Methods

Title Small Data Challenges in Big Data Era: A Survey of Recent Progress on Unsupervised and Semi-Supervised Methods
Authors Guo-Jun Qi, Jiebo Luo
Abstract Small data challenges have emerged in many learning problems, since the success of deep neural networks often relies on the availability of a huge amount of labeled data that is expensive to collect. To address this, many efforts have been made to train complex models with small data in an unsupervised and semi-supervised fashion. In this paper, we review the recent progress on these two major categories of methods. A wide spectrum of small data models will be categorized in a big picture, where we show how they interplay with each other to motivate explorations of new ideas. We review the criteria for learning transformation-equivariant, disentangled, self-supervised and semi-supervised representations, which underpin the foundations of recent developments. Many instantiations of unsupervised and semi-supervised generative models have been developed on the basis of these criteria, greatly expanding the territory of existing autoencoders, generative adversarial nets (GANs) and other deep networks by exploring the distribution of unlabeled data for more powerful representations. While we focus on unsupervised and semi-supervised methods, we also provide a broader review of other emerging topics, from unsupervised and semi-supervised domain adaptation to the fundamental roles of transformation equivariance and invariance in training a wide spectrum of deep networks. It is impossible for us to write an exhaustive encyclopedia covering all related works. Instead, we aim to explore the main ideas, principles and methods in this area to reveal where we are heading on the journey towards addressing the small data challenges in this big data era.
Tasks Domain Adaptation
Published 2019-03-27
URL http://arxiv.org/abs/1903.11260v1
PDF http://arxiv.org/pdf/1903.11260v1.pdf
PWC https://paperswithcode.com/paper/small-data-challenges-in-big-data-era-a
Repo
Framework

Learning audio representations via phase prediction

Title Learning audio representations via phase prediction
Authors Félix de Chaumont Quitry, Marco Tagliasacchi, Dominik Roblek
Abstract We learn audio representations by solving a novel self-supervised learning task, which consists of predicting the phase of the short-time Fourier transform from its magnitude. A convolutional encoder is used to map the magnitude spectrum of the input waveform to a lower dimensional embedding. A convolutional decoder is then used to predict the instantaneous frequency (i.e., the temporal rate of change of the phase) from such embedding. To evaluate the quality of the learned representations, we evaluate how they transfer to a wide variety of downstream audio tasks. Our experiments reveal that the phase prediction task leads to representations that generalize across different tasks, partially bridging the gap with fully-supervised models. In addition, we show that the predicted phase can be used as initialization of the Griffin-Lim algorithm, thus reducing the number of iterations needed to reconstruct the waveform in the time domain.
Tasks
Published 2019-10-25
URL https://arxiv.org/abs/1910.11910v1
PDF https://arxiv.org/pdf/1910.11910v1.pdf
PWC https://paperswithcode.com/paper/learning-audio-representations-via-phase
Repo
Framework
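
The self-supervised targets described above can be derived from a waveform in a few lines: the encoder input is the STFT magnitude and the decoder target is the instantaneous frequency, i.e. the frame-to-frame change of the unwrapped phase. The frame length, hop, and test tone below are illustrative.

```python
import numpy as np

def stft(x, frame=512, hop=128):
    """Naive short-time Fourier transform: Hann-windowed, hop-spaced frames."""
    window = np.hanning(frame)
    frames = np.stack([x[i:i + frame] * window
                       for i in range(0, len(x) - frame, hop)])
    return np.fft.rfft(frames, axis=1)             # (num_frames, frame // 2 + 1)

waveform = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)   # 1 s of a 440 Hz tone
spec = stft(waveform)

magnitude = np.abs(spec)                           # input to the convolutional encoder
phase = np.unwrap(np.angle(spec), axis=0)
inst_freq = np.diff(phase, axis=0)                 # target: rate of change of the phase
print(magnitude.shape, inst_freq.shape)
```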