Paper Group ANR 211
Binary Stochastic Representations for Large Multi-class Classification. Improved Forecasting of Cryptocurrency Price using Social Signals. Macrocanonical Models for Texture Synthesis. An Automatic Design Framework of Swarm Pattern Formation based on Multi-objective Genetic Programming. CogCompTime: A Tool for Understanding Time in Natural Language …
Binary Stochastic Representations for Large Multi-class Classification
Title | Binary Stochastic Representations for Large Multi-class Classification |
Authors | Thomas Gerald, Aurélia Léon, Nicolas Baskiotis, Ludovic Denoyer |
Abstract | Classification with a large number of classes is a key problem in machine learning and corresponds to many real-world applications, such as tagging images or textual documents in social networks. While one-vs-all methods usually reach top performance in this context, they suffer from a high inference complexity, linear w.r.t. the number of categories. Different models based on the notion of binary codes have been proposed to overcome this limitation, achieving a sublinear inference complexity. But they need to decide a priori which binary code to associate with which category before learning, using more or less complex heuristics. We propose a new end-to-end model that aims at simultaneously learning to associate binary codes with categories and to map inputs to binary codes. This approach, called Deep Stochastic Neural Codes (DSNC), keeps the sublinear inference complexity but does not need any a priori tuning. Experimental results on different datasets show the effectiveness of the approach w.r.t. baseline methods. |
Tasks | |
Published | 2019-06-24 |
URL | https://arxiv.org/abs/1906.09838v1 |
https://arxiv.org/pdf/1906.09838v1.pdf | |
PWC | https://paperswithcode.com/paper/binary-stochastic-representations-for-large |
Repo | |
Framework | |
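The sublinear-inference idea behind binary-code classifiers can be sketched in a few lines. The codebook, scores, and thresholding below are illustrative placeholders, not the paper's learned DSNC model, which samples bits stochastically during training and learns both the codes and the input-to-code mapping:

```python
def binarize(scores):
    # Deterministic test-time binarization: threshold at 0.
    # (A DSNC-style model would sample bits stochastically during
    # training and use gradient estimators to learn the mapping.)
    return tuple(1 if s > 0 else 0 for s in scores)

def hamming(a, b):
    # Number of differing bits between two codes.
    return sum(x != y for x, y in zip(a, b))

# Hypothetical category codebook: each class owns a binary code.
codebook = {"cat": (1, 0, 1, 1), "dog": (0, 1, 1, 0), "car": (0, 0, 0, 1)}

def predict(scores):
    code = binarize(scores)
    # Nearest-code lookup; with a proper index structure this step
    # is sublinear in the number of classes, unlike one-vs-all.
    return min(codebook, key=lambda c: hamming(codebook[c], code))

print(predict([0.9, -0.2, 0.4, 0.7]))  # code (1, 0, 1, 1) -> "cat"
```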
Improved Forecasting of Cryptocurrency Price using Social Signals
Title | Improved Forecasting of Cryptocurrency Price using Social Signals |
Authors | Maria Glenski, Tim Weninger, Svitlana Volkova |
Abstract | Social media signals have been successfully used to develop large-scale predictive and anticipatory analytics, for example forecasting stock market prices and influenza outbreaks. Recently, social data has been explored to forecast price fluctuations of cryptocurrencies, which are a novel disruptive technology with significant political and economic implications. In this paper we leverage and contrast the predictive power of social signals, specifically user behavior and communication patterns, from two social platforms, GitHub and Reddit, to forecast prices for three cryptocurrencies with high developer and community interest - Bitcoin, Ethereum, and Monero. We evaluate the performance of neural network models that rely on long short-term memory units (LSTMs) trained on historical price and social data against price-only LSTMs and baseline autoregressive integrated moving average (ARIMA) models, commonly used to predict stock prices. Our results not only demonstrate that social signals reduce error when forecasting daily coin price, but also show that the language used in comments within the official communities on Reddit (r/Bitcoin, r/Ethereum, and r/Monero) is the best predictor overall. We observe that models are more accurate in forecasting price one day ahead for Bitcoin (4% root mean squared percent error) compared to Ethereum (7%) and Monero (8%). |
Tasks | |
Published | 2019-07-01 |
URL | https://arxiv.org/abs/1907.00558v1 |
https://arxiv.org/pdf/1907.00558v1.pdf | |
PWC | https://paperswithcode.com/paper/improved-forecasting-of-cryptocurrency-price |
Repo | |
Framework | |
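The evaluation metric the abstract reports (root mean squared percent error) is straightforward to compute; the price series below are made up for illustration:

```python
import math

def rmspe(actual, predicted):
    # Root mean squared percent error, the metric behind the paper's
    # 4% (Bitcoin), 7% (Ethereum), and 8% (Monero) one-day-ahead figures.
    return math.sqrt(
        sum(((a - p) / a) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    ) * 100

# Hypothetical daily prices vs. forecasts.
actual = [100.0, 110.0, 105.0]
predicted = [98.0, 112.0, 104.0]
print(round(rmspe(actual, predicted), 2))  # 1.65
```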
Macrocanonical Models for Texture Synthesis
Title | Macrocanonical Models for Texture Synthesis |
Authors | De Bortoli Valentin, Desolneux Agnès, Galerne Bruno, Leclaire Arthur |
Abstract | In this article we consider macrocanonical models for texture synthesis. In these models samples are generated given an input texture image and a set of features which should be matched in expectation. It is known that if the images are quantized, macrocanonical models are given by Gibbs measures, using the maximum entropy principle. We study conditions under which this result extends to real-valued images. If these conditions hold, finding a macrocanonical model amounts to minimizing a convex function and sampling from an associated Gibbs measure. We analyze an algorithm which alternates between sampling and minimizing. We present experiments with neural network features and study the drawbacks and advantages of using this sampling scheme. |
Tasks | Texture Synthesis |
Published | 2019-04-12 |
URL | http://arxiv.org/abs/1904.06396v1 |
http://arxiv.org/pdf/1904.06396v1.pdf | |
PWC | https://paperswithcode.com/paper/macrocanonical-models-for-texture-synthesis |
Repo | |
Framework | |
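The maximum-entropy formulation the abstract refers to can be written compactly (a sketch in generic notation, not the paper's exact statement: \(f\) is the feature map and \(x_0\) the input texture):

```latex
% Among all distributions whose expected features match the exemplar's,
% the entropy-maximizing one has Gibbs (exponential-family) form:
\max_{p}\; H(p)
\quad \text{s.t.} \quad
\mathbb{E}_{X \sim p}\big[f(X)\big] = f(x_0),
\qquad \Longrightarrow \qquad
p_\theta(x) \propto \exp\big(-\langle \theta, f(x) \rangle\big).
```

Finding the multipliers \(\theta\) then amounts to minimizing a convex (log-partition) objective, which is the minimization step that the abstract's alternating scheme interleaves with sampling from the Gibbs measure.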
An Automatic Design Framework of Swarm Pattern Formation based on Multi-objective Genetic Programming
Title | An Automatic Design Framework of Swarm Pattern Formation based on Multi-objective Genetic Programming |
Authors | Zhun Fan, Zhaojun Wang, Xiaomin Zhu, Bingliang Hu, Anmin Zou, Dongwei Bao |
Abstract | Most existing swarm pattern formation methods depend on a predefined gene regulatory network (GRN) structure that requires designers’ a priori knowledge, which makes them difficult to adapt to complex and changeable environments. To dynamically adapt to such environments, we propose an automatic design framework for swarm pattern formation based on multi-objective genetic programming. The proposed framework does not need the structure of the GRN-based model to be defined in advance; instead, it applies basic network motifs to automatically structure the GRN-based model. In addition, multi-objective genetic programming (MOGP) is combined with NSGA-II, namely MOGP-NSGA-II, to balance the complexity and accuracy of the GRN-based model. In the evolutionary process, MOGP-NSGA-II and differential evolution (DE) are applied in parallel to optimize the structures and parameters of the GRN-based model. Simulation results demonstrate that the proposed framework can effectively evolve novel GRN-based models which not only have simpler structures and better performance, but are also robust to complex and changeable environments. |
Tasks | |
Published | 2019-10-31 |
URL | https://arxiv.org/abs/1910.14627v2 |
https://arxiv.org/pdf/1910.14627v2.pdf | |
PWC | https://paperswithcode.com/paper/an-automatic-design-framework-of-swarm |
Repo | |
Framework | |
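The complexity-vs-accuracy trade-off that NSGA-II manages rests on Pareto dominance; a minimal sketch of the first non-dominated front (the objective values below are hypothetical, and NSGA-II additionally ranks later fronts and applies crowding-distance selection):

```python
def dominates(a, b):
    # Minimization on both objectives: (model complexity, error).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # First non-dominated front, as used in NSGA-II's ranking step.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (complexity, error) pairs for candidate GRN-based models.
models = [(3, 0.10), (5, 0.05), (4, 0.10), (7, 0.04), (6, 0.06)]
print(sorted(pareto_front(models)))  # [(3, 0.1), (5, 0.05), (7, 0.04)]
```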
CogCompTime: A Tool for Understanding Time in Natural Language Text
Title | CogCompTime: A Tool for Understanding Time in Natural Language Text |
Authors | Qiang Ning, Ben Zhou, Zhili Feng, Haoruo Peng, Dan Roth |
Abstract | Automatic extraction of temporal information in text is an important component of natural language understanding. It involves two basic tasks: (1) understanding time expressions that are mentioned explicitly in text (e.g., February 27, 1998 or tomorrow), and (2) understanding temporal information that is conveyed implicitly via relations. In this paper, we introduce CogCompTime, a system that has these two important functionalities. It incorporates the most recent progress, achieves state-of-the-art performance, and is publicly available. We believe that this demo will be useful for multiple time-aware applications and provide valuable insight for future research in temporal understanding. |
Tasks | |
Published | 2019-06-12 |
URL | https://arxiv.org/abs/1906.04940v1 |
https://arxiv.org/pdf/1906.04940v1.pdf | |
PWC | https://paperswithcode.com/paper/cogcomptime-a-tool-for-understanding-time-in-1 |
Repo | |
Framework | |
UprightNet: Geometry-Aware Camera Orientation Estimation from Single Images
Title | UprightNet: Geometry-Aware Camera Orientation Estimation from Single Images |
Authors | Wenqi Xian, Zhengqi Li, Matthew Fisher, Jonathan Eisenmann, Eli Shechtman, Noah Snavely |
Abstract | We introduce UprightNet, a learning-based approach for estimating 2DoF camera orientation from a single RGB image of an indoor scene. Unlike recent methods that leverage deep learning to perform black-box regression from image to orientation parameters, we propose an end-to-end framework that incorporates explicit geometric reasoning. In particular, we design a network that predicts two representations of scene geometry, in both the local camera and global reference coordinate systems, and solves for the camera orientation as the rotation that best aligns these two predictions via a differentiable least squares module. This network can be trained end-to-end, and can be supervised with both ground truth camera poses and intermediate representations of surface geometry. We evaluate UprightNet on the single-image camera orientation task on synthetic and real datasets, and show significant improvements over prior state-of-the-art approaches. |
Tasks | |
Published | 2019-08-19 |
URL | https://arxiv.org/abs/1908.07070v1 |
https://arxiv.org/pdf/1908.07070v1.pdf | |
PWC | https://paperswithcode.com/paper/uprightnet-geometry-aware-camera-orientation |
Repo | |
Framework | |
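The "rotation that best aligns two predictions via least squares" has a classical closed form (orthogonal Procrustes / Kabsch, solved with an SVD). The sketch below recovers a known rotation from synthetic point sets; it illustrates the underlying least-squares step, not UprightNet's exact differentiable module or its surface-geometry representations:

```python
import numpy as np

def best_rotation(local, world):
    # Orthogonal-Procrustes solution: the rotation R minimizing
    # ||R @ local - world||_F over 3xN point sets.
    H = world @ local.T               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Synthetic check: rotate random "scene geometry" vectors and recover R.
rng = np.random.default_rng(0)
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
local = rng.normal(size=(3, 50))
world = R_true @ local
R_est = best_rotation(local, world)
print(np.allclose(R_est, R_true))  # True
```

Because the SVD is differentiable almost everywhere, a module like this can sit inside a network and be trained end-to-end, which is the property the abstract highlights.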
Re-Identification Supervised Texture Generation
Title | Re-Identification Supervised Texture Generation |
Authors | Jian Wang, Yunshan Zhong, Yachun Li, Chi Zhang, Yichen Wei |
Abstract | The estimation of 3D human body pose and shape from a single image has been extensively studied in recent years. However, the texture generation problem has not been fully discussed. In this paper, we propose an end-to-end learning strategy to generate textures of human bodies under the supervision of person re-identification. We render the synthetic images with textures extracted from the inputs and maximize the similarity between the rendered and input images by using the re-identification network as the perceptual metric. Experimental results on pedestrian images show that our model can generate the texture from a single image and demonstrate that our textures are of higher quality than those generated by other available methods. Furthermore, we extend the application scope to other categories and explore the possible utilization of our generated textures. |
Tasks | Person Re-Identification, Texture Synthesis |
Published | 2019-04-06 |
URL | http://arxiv.org/abs/1904.03385v1 |
http://arxiv.org/pdf/1904.03385v1.pdf | |
PWC | https://paperswithcode.com/paper/re-identification-supervised-texture |
Repo | |
Framework | |
PubMedQA: A Dataset for Biomedical Research Question Answering
Title | PubMedQA: A Dataset for Biomedical Research Question Answering |
Authors | Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W. Cohen, Xinghua Lu |
Abstract | We introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. PubMedQA is publicly available at https://pubmedqa.github.io. |
Tasks | Question Answering |
Published | 2019-09-13 |
URL | https://arxiv.org/abs/1909.06146v1 |
https://arxiv.org/pdf/1909.06146v1.pdf | |
PWC | https://paperswithcode.com/paper/pubmedqa-a-dataset-for-biomedical-research |
Repo | |
Framework | |
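The 55.2% majority baseline the abstract cites is simply the accuracy of always predicting the most frequent answer; a one-function sketch (the label counts below are hypothetical, chosen only to reproduce that skew):

```python
from collections import Counter

def majority_baseline(labels):
    # Accuracy of always predicting the most frequent label.
    majority, count = Counter(labels).most_common(1)[0]
    return count / len(labels)

# Hypothetical yes/no/maybe distribution with a 55.2% majority class.
labels = ["yes"] * 552 + ["no"] * 338 + ["maybe"] * 110
print(majority_baseline(labels))  # 0.552
```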
Realistic Speech-Driven Facial Animation with GANs
Title | Realistic Speech-Driven Facial Animation with GANs |
Authors | Konstantinos Vougioukas, Stavros Petridis, Maja Pantic |
Abstract | Speech-driven facial animation is the process of automatically synthesizing talking characters from speech signals. The majority of work in this domain creates a mapping from audio features to visual features. This approach often requires post-processing using computer graphics techniques to produce realistic, albeit subject-dependent, results. We present an end-to-end system that generates videos of a talking head, using only a still image of a person and an audio clip containing speech, without relying on handcrafted intermediate features. Our method generates videos which have (a) lip movements that are in sync with the audio and (b) natural facial expressions such as blinks and eyebrow movements. Our temporal GAN uses three discriminators focused on achieving detailed frames, audio-visual synchronization, and realistic expressions. We quantify the contribution of each component in our model using an ablation study, and we provide insights into the latent representation of the model. The generated videos are evaluated for sharpness, reconstruction quality, lip-reading accuracy, and synchronization, as well as their ability to generate natural blinks. |
Tasks | Audio-Visual Synchronization |
Published | 2019-06-14 |
URL | https://arxiv.org/abs/1906.06337v1 |
https://arxiv.org/pdf/1906.06337v1.pdf | |
PWC | https://paperswithcode.com/paper/realistic-speech-driven-facial-animation-with |
Repo | |
Framework | |
A Survey of Natural Language Generation Techniques with a Focus on Dialogue Systems - Past, Present and Future Directions
Title | A Survey of Natural Language Generation Techniques with a Focus on Dialogue Systems - Past, Present and Future Directions |
Authors | Sashank Santhanam, Samira Shaikh |
Abstract | One of the hardest problems in the area of Natural Language Processing and Artificial Intelligence is automatically generating language that is coherent and understandable to humans. Teaching machines how to converse as humans do falls under the broad umbrella of Natural Language Generation. Recent years have seen unprecedented growth in the number of research articles published on this subject in conferences and journals by both academic and industry researchers. There have also been several workshops organized alongside top-tier NLP conferences dedicated specifically to this problem. All this activity makes it hard to clearly define the state of the field and reason about its future directions. In this work, we provide an overview of this important and thriving area, covering traditional approaches, statistical approaches, and approaches that use deep neural networks. We provide a comprehensive review towards building open-domain dialogue systems, an important application of natural language generation. We find that, predominantly, approaches for building dialogue systems use a seq2seq or language-model architecture. Notably, we identify three important areas of further research towards building more effective dialogue systems: 1) incorporating larger context, including conversation context and world knowledge; 2) adding personae or personality in the NLG system; and 3) overcoming dull and generic responses that affect the quality of system-produced responses. We provide pointers on how to tackle these open problems through the use of cognitive architectures that mimic human language understanding and generation capabilities. |
Tasks | Text Generation |
Published | 2019-06-02 |
URL | https://arxiv.org/abs/1906.00500v1 |
https://arxiv.org/pdf/1906.00500v1.pdf | |
PWC | https://paperswithcode.com/paper/190600500 |
Repo | |
Framework | |
Evaluation of Machine Learning Classifiers for Zero-Day Intrusion Detection – An Analysis on CIC-AWS-2018 dataset
Title | Evaluation of Machine Learning Classifiers for Zero-Day Intrusion Detection – An Analysis on CIC-AWS-2018 dataset |
Authors | Qianru Zhou, Dimitrios Pezaros |
Abstract | Detecting Zero-Day intrusions has long been a central goal of cybersecurity, and of intrusion detection in particular. Machine learning is believed to be a promising methodology for solving this problem, and numerous models have been proposed, but a practical solution is still yet to come, mainly due to the limitations of the out-of-date open datasets available. In this paper, we take a deep inspection of the flow-based statistical data generated by CICFlowMeter, using six of the most popular machine learning classification models for Zero-Day attack detection. The training dataset, the CIC-AWS-2018 Dataset, contains fourteen types of intrusions, while the testing datasets contain eight different types of attacks. The six classification models are evaluated and cross-validated on the CIC-AWS-2018 Dataset for their accuracy in terms of false-positive rate, true-positive rate, and time overhead. A testing dataset, including eight novel (or Zero-Day) real-life attacks and benign traffic flows collected on a real research production network, is used to test the performance of the chosen decision tree classifier. Promising results are obtained, with accuracy as high as 100% and reasonable time overhead. We argue that, with the statistical data collected by CICFlowMeter, simple machine learning models such as the decision tree classifier could be able to take charge of detecting Zero-Day attacks. |
Tasks | Intrusion Detection |
Published | 2019-05-09 |
URL | https://arxiv.org/abs/1905.03685v1 |
https://arxiv.org/pdf/1905.03685v1.pdf | |
PWC | https://paperswithcode.com/paper/190503685 |
Repo | |
Framework | |
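The decision-tree approach the abstract argues for can be sketched with a standard library classifier. The three flow features and their values below are invented stand-ins for CICFlowMeter statistics, not the paper's actual feature set or data:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for flow-based statistics:
# [flow duration (s), packets/s, mean packet size] (hypothetical features).
X_train = [[0.1, 900, 60], [0.2, 850, 64],   # flood-like flows
           [5.0, 10, 800], [4.0, 12, 750]]   # benign-looking flows
y_train = ["dos", "dos", "benign", "benign"]

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A flow not seen in training still lands on the attack side of the
# learned thresholds -- the intuition behind detecting novel attacks.
print(clf.predict([[0.15, 700, 70]])[0])  # dos
```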
Data Interpolations in Deep Generative Models under Non-Simply-Connected Manifold Topology
Title | Data Interpolations in Deep Generative Models under Non-Simply-Connected Manifold Topology |
Authors | Jiseob Kim, Byoung-Tak Zhang |
Abstract | Exploiting the deep generative model’s remarkable ability to learn the data-manifold structure, some recent studies have proposed a geometric data interpolation method based on geodesic curves on the learned data-manifold. However, this interpolation method often gives poor results due to a topological difference between the model and the dataset. The model defines a family of simply-connected manifolds, whereas the dataset generally contains disconnected regions or holes that make it non-simply-connected. To compensate for this difference, we propose a novel density regularizer that makes the interpolation path circumvent the holes, indicated by low probability density. We confirm that our method gives consistently better interpolation results in experiments with real-world image datasets. |
Tasks | |
Published | 2019-01-20 |
URL | http://arxiv.org/abs/1901.08553v1 |
http://arxiv.org/pdf/1901.08553v1.pdf | |
PWC | https://paperswithcode.com/paper/data-interpolations-in-deep-generative-models |
Repo | |
Framework | |
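One way to read the abstract's density regularizer is as a penalty added to the geodesic path energy; the notation below is ours, not the paper's (\(\gamma\) is the interpolation curve with fixed endpoints, \(p\) the model density, \(\lambda\) a regularization weight):

```latex
% Penalized path energy: short paths are preferred, but passing
% through low-density regions ("holes") is made expensive.
\min_{\gamma:\; \gamma(0)=x,\ \gamma(1)=y}\;
\int_0^1 \big\lVert \dot{\gamma}(t) \big\rVert^2 \, dt
\;+\; \lambda \int_0^1 -\log p\big(\gamma(t)\big)\, dt .
```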
Self-supervised Adversarial Training
Title | Self-supervised Adversarial Training |
Authors | Kejiang Chen, Hang Zhou, Yuefeng Chen, Xiaofeng Mao, Yuhong Li, Yuan He, Hui Xue, Weiming Zhang, Nenghai Yu |
Abstract | Recent work has demonstrated that neural networks are vulnerable to adversarial examples. To escape from this predicament, many works try to harden the model in various ways, among which adversarial training is an effective approach that learns robust feature representations so as to resist adversarial attacks. Meanwhile, self-supervised learning aims to learn robust and semantic embeddings from the data itself. With these views, we introduce self-supervised learning to defend against adversarial examples in this paper. Specifically, a self-supervised representation coupled with k-Nearest Neighbour is proposed for classification. To further strengthen the defense ability, self-supervised adversarial training is proposed, which maximizes the mutual information between the representations of original examples and the corresponding adversarial examples. Experimental results show that the self-supervised representation outperforms its supervised version with respect to robustness, and that self-supervised adversarial training can further improve the defense ability efficiently. |
Tasks | |
Published | 2019-11-15 |
URL | https://arxiv.org/abs/1911.06470v2 |
https://arxiv.org/pdf/1911.06470v2.pdf | |
PWC | https://paperswithcode.com/paper/self-supervised-adversarial-training |
Repo | |
Framework | |
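The classification step the abstract describes (k-Nearest Neighbour over learned representations) can be sketched in plain Python; the 2-D embeddings and labels below are hypothetical stand-ins for the self-supervised feature space:

```python
def knn_predict(query, bank, k=3):
    # Majority vote among the k nearest embeddings in the feature bank.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted(bank, key=lambda item: sq_dist(item[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Hypothetical (embedding, label) pairs.
bank = [((0.0, 0.1), "cat"), ((0.1, 0.0), "cat"), ((0.2, 0.2), "cat"),
        ((0.9, 1.0), "dog"), ((1.0, 0.9), "dog")]
print(knn_predict((0.05, 0.05), bank))  # cat
```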
Convolutional Neural Networks with Dynamic Regularization
Title | Convolutional Neural Networks with Dynamic Regularization |
Authors | Yi Wang, Zhen-Peng Bian, Junhui Hou, Lap-Pui Chau |
Abstract | Regularization is commonly used for alleviating overfitting in machine learning. For convolutional neural networks (CNNs), regularization methods, such as DropBlock and Shake-Shake, have demonstrated improvements in generalization performance. However, these methods lack a self-adaptive ability throughout training. That is, the regularization strength is fixed to a predefined schedule, and manual adjustments are required to adapt to various network architectures. In this paper, we propose a dynamic regularization method for CNNs. Specifically, we model the regularization strength as a function of the training loss. According to the change of the training loss, our method can dynamically adjust the regularization strength in the training procedure, thereby balancing the underfitting and overfitting of CNNs. With dynamic regularization, a large-scale model is automatically regularized by a strong perturbation, and vice versa. Experimental results show that the proposed method can improve the generalization capability on off-the-shelf network architectures and outperform state-of-the-art regularization methods. |
Tasks | |
Published | 2019-09-26 |
URL | https://arxiv.org/abs/1909.11862v2 |
https://arxiv.org/pdf/1909.11862v2.pdf | |
PWC | https://paperswithcode.com/paper/convolutional-neural-networks-with-dynamic |
Repo | |
Framework | |
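The core idea ("regularization strength as a function of the training loss") can be illustrated with a simple monotone mapping; this is only a sketch of the principle with made-up bounds, not the schedule the paper actually derives:

```python
def reg_strength(loss, loss_min=0.0, loss_max=2.0, max_strength=1.0):
    # High loss (model still underfitting)  -> weak regularization;
    # low loss (model at risk of overfitting) -> strong regularization.
    t = (loss - loss_min) / (loss_max - loss_min)
    t = max(0.0, min(1.0, t))          # clamp to [0, 1]
    return max_strength * (1.0 - t)

print(reg_strength(2.0))  # 0.0 early in training, when the loss is high
print(reg_strength(0.2))  # 0.9 once the loss has dropped
```

Because the strength is recomputed from the loss at each step, no per-architecture schedule has to be hand-tuned, which is the self-adaptive property the abstract emphasizes.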
Beating humans in a penny-matching game by leveraging cognitive hierarchy theory and Bayesian learning
Title | Beating humans in a penny-matching game by leveraging cognitive hierarchy theory and Bayesian learning |
Authors | Ran Tian, Nan Li, Ilya Kolmanovsky, Anouck Girard |
Abstract | It is a long-standing goal of artificial intelligence (AI) to be superior to human beings in decision making. Games are suitable for testing AI capabilities of making good decisions in non-numerical tasks. In this paper, we develop a new AI algorithm to play the penny-matching game considered in Shannon’s “mind-reading machine” (1953) against human players. In particular, we exploit cognitive hierarchy theory and Bayesian learning techniques to continually evolve a model for predicting human player decisions, and let the AI player make decisions according to the model predictions to pursue the best chance of winning. Experimental results show that our AI algorithm beats 27 out of 30 volunteer human players. |
Tasks | Decision Making |
Published | 2019-09-27 |
URL | https://arxiv.org/abs/1909.12701v2 |
https://arxiv.org/pdf/1909.12701v2.pdf | |
PWC | https://paperswithcode.com/paper/beating-humans-in-a-penny-matching-game-by |
Repo | |
Framework | |
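The Bayesian-learning ingredient can be sketched as a Beta-Bernoulli update over the human's choice frequency; this captures only the "continually evolve a model of the opponent" idea, while the paper's full method layers cognitive-hierarchy reasoning over richer behavioral patterns:

```python
def bayes_predict(history, prior=(1, 1)):
    # Maintain a Beta posterior over the human's probability of
    # playing "H", then match the more likely choice to win the round.
    heads = prior[0] + history.count("H")
    tails = prior[1] + history.count("T")
    p_heads = heads / (heads + tails)
    return "H" if p_heads >= 0.5 else "T"

# After three heads and one tail, the posterior favors "H".
print(bayes_predict(["H", "H", "T", "H"]))  # H
```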