Paper Group ANR 938
Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision
Title | Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision |
Authors | Boitumelo Ruf, Sebastian Monka, Matthias Kollmann, Michael Grinberg |
Abstract | In order to improve usability and safety, modern unmanned aerial vehicles (UAVs) are equipped with sensors to monitor the environment, such as laser-scanners and cameras. One important aspect in this monitoring process is to detect obstacles in the flight path in order to avoid collisions. Since a large number of consumer UAVs suffer from tight weight and power constraints, our work focuses on obstacle avoidance based on a lightweight stereo camera setup. We use disparity maps, which are computed from the camera images, to locate obstacles and to automatically steer the UAV around them. For disparity map computation we optimize the well-known semi-global matching (SGM) approach for deployment on an embedded FPGA. The disparity maps are then converted into simpler representations, the so-called U-/V-Maps, which are used for obstacle detection. Obstacle avoidance is based on a reactive approach which finds the shortest path around the obstacles as soon as they come within a critical distance of the UAV. One of the fundamental goals of our work was the reduction of development costs by closing the gap between application development and hardware optimization. Hence, we aimed at using high-level synthesis (HLS) for porting our algorithms, which are written in C/C++, to the embedded FPGA. We evaluated our implementation of the disparity estimation on the KITTI Stereo 2015 benchmark. The integrity of the overall real-time reactive obstacle avoidance algorithm has been evaluated by using Hardware-in-the-Loop testing in conjunction with two flight simulators. |
Tasks | Disparity Estimation |
Published | 2018-07-17 |
URL | https://arxiv.org/abs/1807.06271v2 |
https://arxiv.org/pdf/1807.06271v2.pdf | |
PWC | https://paperswithcode.com/paper/real-time-on-board-obstacle-avoidance-for |
Repo | |
Framework | |
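The U-/V-map representation the abstract mentions is essentially a pair of disparity histograms. A minimal sketch of their construction (the disparity image, bin layout, and thresholds below are illustrative assumptions, not the authors' FPGA implementation):

```python
# Minimal sketch of U-/V-map construction from a disparity map, as used for
# obstacle detection. A vertical obstacle shows up as a run of high counts in
# the U-map row of its disparity and as a near-vertical segment in the V-map.

def uv_maps(disparity, max_disp):
    """Build U- and V-maps: per-column and per-row disparity histograms."""
    rows, cols = len(disparity), len(disparity[0])
    # U-map: one histogram of disparities for each image column.
    u_map = [[0] * cols for _ in range(max_disp + 1)]
    # V-map: one histogram of disparities for each image row.
    v_map = [[0] * (max_disp + 1) for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            d = disparity[r][c]
            if 0 <= d <= max_disp:
                u_map[d][c] += 1
                v_map[r][d] += 1
    return u_map, v_map

# Toy disparity image: a near obstacle (disparity 5) in columns 2-3.
disp = [
    [1, 1, 5, 5, 1],
    [1, 1, 5, 5, 1],
    [2, 2, 5, 5, 2],
]
u, v = uv_maps(disp, max_disp=8)
print(u[5])  # columns 2 and 3 accumulate the obstacle at disparity 5
```

Thresholding such histograms gives the obstacle's image columns (U-map) and its height extent (V-map), which is what makes the subsequent reactive avoidance cheap enough for on-board use.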
Dependence in Propositional Logic: Formula-Formula Dependence and Formula Forgetting – Application to Belief Update and Conservative Extension
Title | Dependence in Propositional Logic: Formula-Formula Dependence and Formula Forgetting – Application to Belief Update and Conservative Extension |
Authors | Liangda Fang, Hai Wan, Xianqiao Liu, Biqing Fang, Zhaorong Lai |
Abstract | Dependence is an important concept for many tasks in artificial intelligence. A task can be executed more efficiently by discarding something independent from the task. In this paper, we propose two novel notions of dependence in propositional logic: formula-formula dependence and formula forgetting. The first is a relation between formulas capturing whether a formula depends on another one, while the second is an operation that returns the strongest consequence independent of a formula. We also apply these two notions to two well-known problems: belief update and conservative extension. First, we define a new update operator based on formula-formula dependence. Second, we reduce conservative extension to formula forgetting. |
Tasks | |
Published | 2018-06-29 |
URL | https://arxiv.org/abs/1806.11304v2 |
https://arxiv.org/pdf/1806.11304v2.pdf | |
PWC | https://paperswithcode.com/paper/dependence-in-propositional-logic-formula |
Repo | |
Framework | |
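Formula forgetting generalizes classical *variable* forgetting, which the "strongest consequence independent of X" reading echoes: forgetting variable p in F is F[p=⊤] ∨ F[p=⊥]. A sketch of that classical baseline (the predicate-based formula encoding is an illustrative assumption, and this is not the paper's formula-level operator):

```python
# Classical variable forgetting: the result is the strongest consequence of
# `formula` that is independent of `var`, obtained as the disjunction of the
# two substitution instances F[var=True] and F[var=False].

def forget_var(formula, var):
    """Forget `var` in `formula` (a predicate over assignment dicts)."""
    def forgotten(assign):
        hi = formula({**assign, var: True})
        lo = formula({**assign, var: False})
        return hi or lo
    return forgotten

# F = (p and q) or r ; forgetting p yields a formula equivalent to q or r.
F = lambda a: (a["p"] and a["q"]) or a["r"]
G = forget_var(F, "p")
print(G({"q": True, "r": False}))   # True: q suffices once p is forgotten
print(G({"q": False, "r": False}))  # False
```

The paper's contribution lifts the forgotten object from a single variable to an arbitrary formula; the sketch only shows the special case the generalization starts from.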
Unbiased scalable softmax optimization
Title | Unbiased scalable softmax optimization |
Authors | Francois Fagan, Garud Iyengar |
Abstract | Recent neural network and language models rely on softmax distributions with an extremely large number of categories. Since calculating the softmax normalizing constant in this context is prohibitively expensive, there is a growing literature of efficiently computable but biased estimates of the softmax. In this paper we propose the first unbiased algorithms for maximizing the softmax likelihood whose work per iteration is independent of the number of classes and datapoints (and no extra work is required at the end of each epoch). We show that our proposed unbiased methods comprehensively outperform the state-of-the-art on seven real world datasets. |
Tasks | |
Published | 2018-03-22 |
URL | http://arxiv.org/abs/1803.08577v1 |
http://arxiv.org/pdf/1803.08577v1.pdf | |
PWC | https://paperswithcode.com/paper/unbiased-scalable-softmax-optimization |
Repo | |
Framework | |
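The bottleneck the abstract refers to is visible in one function: the exact gradient of the softmax log-likelihood touches all K classes through the normalizing constant. This naive O(K) computation is the baseline the paper's unbiased methods avoid; it is not the paper's algorithm:

```python
import math

def softmax_grad(logits, target):
    """Gradient of log p(target) w.r.t. the logits: e_target - softmax."""
    m = max(logits)                       # stabilize the exponentials
    exps = [math.exp(x - m) for x in logits]
    Z = sum(exps)                         # the O(K) normalizing constant
    return [(1.0 if k == target else 0.0) - e / Z
            for k, e in enumerate(exps)]

g = softmax_grad([2.0, 1.0, 0.0], target=0)
print(g)  # components sum to 0: mass pushed toward the target class
```

With vocabulary sizes in the hundreds of thousands, paying this per training example is what motivates estimates whose per-iteration work is independent of the number of classes.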
The Art of Drafting: A Team-Oriented Hero Recommendation System for Multiplayer Online Battle Arena Games
Title | The Art of Drafting: A Team-Oriented Hero Recommendation System for Multiplayer Online Battle Arena Games |
Authors | Zhengxing Chen, Truong-Huy D Nguyen, Yuyu Xu, Chris Amato, Seth Cooper, Yizhou Sun, Magy Seif El-Nasr |
Abstract | Multiplayer Online Battle Arena (MOBA) games have received increasing popularity recently. In a match of such games, players compete in two teams of five, each controlling an in-game avatar, known as a hero, selected from a roster of more than 100. The selection of heroes, also known as the pick or draft, takes place before the match starts and alternates between the two teams until each player has selected one hero. Heroes are designed with different strengths and weaknesses to promote team cooperation in a game. Intuitively, heroes in a strong team should complement each other’s strengths and suppress those of their opponents. Hero drafting is therefore a challenging problem due to the complex hero-to-hero relationships to consider. In this paper, we propose a novel hero recommendation system that suggests heroes to add to an existing team while maximizing the team’s prospect for victory. To that end, we model the drafting between two teams as a combinatorial game and use Monte Carlo Tree Search (MCTS) for estimating the values of hero combinations. Our empirical evaluation shows that hero teams drafted by our recommendation algorithm have a significantly higher win rate against teams constructed by other baseline and state-of-the-art strategies. |
Tasks | |
Published | 2018-06-26 |
URL | http://arxiv.org/abs/1806.10130v1 |
http://arxiv.org/pdf/1806.10130v1.pdf | |
PWC | https://paperswithcode.com/paper/the-art-of-drafting-a-team-oriented-hero |
Repo | |
Framework | |
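Value estimation by simulated draft completions, which is what MCTS does inside the paper's system, can be sketched with plain Monte Carlo rollouts (full MCTS adds a search tree with UCB selection). The hero pool and the synergy-based win model below are invented for illustration:

```python
import random

HEROES = list(range(12))
# Invented pairwise synergy table standing in for learned win-rate features.
SYN = {(a, b): ((a + 1) * (b + 1) % 7) / 6.0 for a in HEROES for b in HEROES}

def strength(team):
    """Toy team value: sum of pairwise synergies within the team."""
    return sum(SYN[(a, b)] for a in team for b in team if a < b)

def recommend(my_team, enemy_team, pool, team_size=5, n_rollouts=300, seed=0):
    """Score each candidate pick by random completions of both drafts."""
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for hero in pool:
        wins = 0
        for _ in range(n_rollouts):
            rest = [h for h in pool if h != hero]
            rng.shuffle(rest)  # random rollout of the remaining picks
            need_me = team_size - len(my_team) - 1
            need_foe = team_size - len(enemy_team)
            mine = my_team + [hero] + rest[:need_me]
            foes = enemy_team + rest[need_me:need_me + need_foe]
            wins += strength(mine) > strength(foes)
        if wins / n_rollouts > best_score:
            best, best_score = hero, wins / n_rollouts
    return best, best_score

pick, score = recommend(my_team=[0, 1], enemy_team=[2, 3],
                        pool=[4, 5, 6, 7, 8, 9])
print(pick, round(score, 2))
```

The combinatorial-game framing matters because a pick's value depends on how both drafts finish, which is exactly what the rollouts (and, in the paper, the tree search) average over.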
Impact of Biases in Big Data
Title | Impact of Biases in Big Data |
Authors | Patrick Glauner, Petko Valtchev, Radu State |
Abstract | The underlying paradigm of big data-driven machine learning reflects the desire of deriving better conclusions from simply analyzing more data, without the necessity of looking at theory and models. Is having simply more data always helpful? In 1936, The Literary Digest collected 2.3M filled-in questionnaires to predict the outcome of that year’s US presidential election. The outcome of this big data prediction proved to be entirely wrong, whereas George Gallup only needed 3K handpicked people to make an accurate prediction. Generally, biases occur in machine learning whenever the distributions of training set and test set are different. In this work, we provide a review of different sorts of biases in (big) data sets in machine learning. We provide definitions and discussions of the most commonly appearing biases in machine learning: class imbalance and covariate shift. We also show how these biases can be quantified and corrected. This work is an introductory text for both researchers and practitioners to become more aware of this topic and thus to derive more reliable models for their learning problems. |
Tasks | |
Published | 2018-03-02 |
URL | http://arxiv.org/abs/1803.00897v1 |
http://arxiv.org/pdf/1803.00897v1.pdf | |
PWC | https://paperswithcode.com/paper/impact-of-biases-in-big-data |
Repo | |
Framework | |
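The standard first-order correction for class imbalance, one of the two biases the survey covers, is to reweight examples so that the effective training distribution matches the test distribution. A textbook inverse-frequency weighting, sketched with invented labels (not the paper's experiments):

```python
from collections import Counter

def class_weights(labels):
    """w_c proportional to 1/n_c, normalized so the average weight is 1."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * n_c) for c, n_c in counts.items()}

# 2 positives vs 8 negatives: the rare class gets a 4x larger weight.
w = class_weights(["pos"] * 2 + ["neg"] * 8)
print(w)
```

Covariate shift is corrected analogously, but with a per-example density ratio p_test(x)/p_train(x) in place of the per-class factor.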
Learning to Separate Domains in Generalized Zero-Shot and Open Set Learning: a probabilistic perspective
Title | Learning to Separate Domains in Generalized Zero-Shot and Open Set Learning: a probabilistic perspective |
Authors | Hanze Dong, Yanwei Fu, Leonid Sigal, Sung Ju Hwang, Yu-Gang Jiang, Xiangyang Xue |
Abstract | This paper studies the problem of domain division which aims to segment instances drawn from different probabilistic distributions. Such a problem exists in many previous recognition tasks, such as Open Set Learning (OSL) and Generalized Zero-Shot Learning (G-ZSL), where the testing instances come from either seen or novel/unseen classes of different probabilistic distributions. Previous works focused on either only calibrating the confident prediction of classifiers of seen classes (W-SVM), or taking unseen classes as outliers. In contrast, this paper proposes a probabilistic way of directly estimating and fine-tuning the decision boundary between seen and novel/unseen classes. In particular, we propose a domain division algorithm that learns to split the testing instances into known, unknown and uncertain domains, and then conducts recognition tasks in each domain. Two statistical tools, namely bootstrapping and the Kolmogorov-Smirnov (K-S) test, are introduced for the first time to discover and fine-tune the decision boundary of each domain. Critically, the uncertain domain is newly introduced in our framework to adopt those instances whose domain cannot be predicted confidently. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on OSL and G-ZSL benchmarks. |
Tasks | Open Set Learning, Zero-Shot Learning |
Published | 2018-10-17 |
URL | http://arxiv.org/abs/1810.07368v2 |
http://arxiv.org/pdf/1810.07368v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-to-separate-domains-in-generalized |
Repo | |
Framework | |
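The boundary search rests on the two-sample Kolmogorov-Smirnov statistic, which compares the score distributions of candidate domains. A minimal pure-Python version of that statistic (the full method adds bootstrapping and the uncertain domain, which this sketch omits, and the score samples are invented):

```python
def ks_statistic(xs, ys):
    """Max gap between the two empirical CDFs (the two-sample K-S statistic)."""
    xs, ys = sorted(xs), sorted(ys)
    grid = sorted(set(xs) | set(ys))
    cdf = lambda sample, t: sum(v <= t for v in sample) / len(sample)
    return max(abs(cdf(xs, t) - cdf(ys, t)) for t in grid)

# Classifier confidences for instances of seen vs unseen classes:
seen_scores = [0.9, 0.8, 0.85, 0.95]
unseen_scores = [0.2, 0.3, 0.1, 0.25]
print(ks_statistic(seen_scores, unseen_scores))  # 1.0: fully separated
```

A large statistic means a threshold exists that cleanly separates the two domains; values near zero signal the overlap region that the framework routes into the uncertain domain.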
What About Applied Fairness?
Title | What About Applied Fairness? |
Authors | Jared Sylvester, Edward Raff |
Abstract | Machine learning practitioners are often ambivalent about the ethical aspects of their products. We believe anything that gets us from that current state to one in which our systems are achieving some degree of fairness is an improvement that should be welcomed. This is true even when that progress does not get us 100% of the way to the goal of “complete” fairness or perfectly align with our personal belief on which measure of fairness is used. Some measure of fairness being built would still put us in a better position than the status quo. Impediments to getting fairness and ethical concerns applied in real applications, whether they are abstruse philosophical debates or technical overhead such as the introduction of ever more hyper-parameters, should be avoided. In this paper we further elaborate on our argument for this viewpoint and its importance. |
Tasks | |
Published | 2018-06-13 |
URL | http://arxiv.org/abs/1806.05250v1 |
http://arxiv.org/pdf/1806.05250v1.pdf | |
PWC | https://paperswithcode.com/paper/what-about-applied-fairness |
Repo | |
Framework | |
An Adaptive Learning Method of Restricted Boltzmann Machine by Neuron Generation and Annihilation Algorithm
Title | An Adaptive Learning Method of Restricted Boltzmann Machine by Neuron Generation and Annihilation Algorithm |
Authors | Shin Kamada, Takumi Ichimura |
Abstract | The Restricted Boltzmann Machine (RBM) is a generative stochastic energy-based artificial neural network model for unsupervised learning. RBM is also well known as a pre-training method in deep learning. In addition to visible and hidden neurons, an RBM has a number of parameters, such as the weights between neurons and their corresponding coefficients. It can therefore be difficult to determine an optimal network structure for analyzing big data. To avoid this problem, we investigate the variance of the parameters during learning in order to find an optimal structure; in particular, we monitor the variance of the parameters that causes the fluctuation of the energy function of the RBM model. In this paper, we propose an adaptive learning method for RBM that can discover an optimal number of hidden neurons according to the training situation by applying a neuron generation and annihilation algorithm. In this method, a new hidden neuron is generated if the energy function has not yet converged and the variance of the parameters is large. Conversely, an inactive hidden neuron is annihilated if it does not affect the learning situation. Experimental results on several benchmark data sets are discussed. |
Tasks | |
Published | 2018-07-10 |
URL | http://arxiv.org/abs/1807.03478v2 |
http://arxiv.org/pdf/1807.03478v2.pdf | |
PWC | https://paperswithcode.com/paper/an-adaptive-learning-method-of-restricted |
Repo | |
Framework | |
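The adaptive structure rule reduces to a small piece of control logic: grow when the energy has not converged and the parameter variance is high, prune hidden units whose activity stays near zero. The thresholds and the variance proxy below are illustrative assumptions, not the paper's values:

```python
def adapt_hidden_units(n_hidden, energy_delta, param_variance,
                       unit_activities, conv_tol=1e-3, var_tol=0.5,
                       act_tol=0.05):
    """Return the hidden-unit count after one generation/annihilation step."""
    if abs(energy_delta) > conv_tol and param_variance > var_tol:
        n_hidden += 1                        # generation: still fluctuating
    n_dead = sum(a < act_tol for a in unit_activities)
    return n_hidden - n_dead                 # annihilation: inactive units

# Energy still moving, variance high, one near-dead unit: 8 + 1 - 1 = 8.
n = adapt_hidden_units(n_hidden=8, energy_delta=0.1, param_variance=0.9,
                       unit_activities=[0.4, 0.5, 0.01, 0.6])
print(n)
```

In the actual method this decision is interleaved with contrastive-divergence training, so the network settles on a size once the energy converges and no units go inactive.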
Interact as You Intend: Intention-Driven Human-Object Interaction Detection
Title | Interact as You Intend: Intention-Driven Human-Object Interaction Detection |
Authors | Bingjie Xu, Junnan Li, Yongkang Wong, Mohan S. Kankanhalli, Qi Zhao |
Abstract | The recent advances in instance-level detection tasks lay a strong foundation for genuine comprehension of visual scenes. However, the ability to fully comprehend a social scene is still in its preliminary stage. In this work, we focus on detecting human-object interactions (HOIs) in social scene images, which is demanding in terms of research and increasingly useful for practical applications. To undertake social tasks interacting with objects, humans direct their attention and move their body based on their intention. Based on this observation, we provide a unique computational perspective to explore human intention in HOI detection. Specifically, the proposed human intention-driven HOI detection (iHOI) framework models human pose with the relative distances from body joints to the object instances. It also utilizes human gaze to guide the attended contextual regions in a weakly-supervised setting. In addition, we propose a hard negative sampling strategy to address the problem of mis-grouping. We perform extensive experiments on two benchmark datasets, namely V-COCO and HICO-DET. The efficacy of each proposed component has also been validated. |
Tasks | Human-Object Interaction Detection |
Published | 2018-08-29 |
URL | https://arxiv.org/abs/1808.09796v2 |
https://arxiv.org/pdf/1808.09796v2.pdf | |
PWC | https://paperswithcode.com/paper/interact-as-you-intend-intention-driven-human |
Repo | |
Framework | |
Deep Learning for Joint Source-Channel Coding of Text
Title | Deep Learning for Joint Source-Channel Coding of Text |
Authors | Nariman Farsad, Milind Rao, Andrea Goldsmith |
Abstract | We consider the problem of joint source and channel coding of structured data such as natural language over a noisy channel. The typical approach to this problem in both theory and practice involves performing source coding to first compress the text and then channel coding to add robustness for the transmission across the channel. This approach is optimal in terms of minimizing end-to-end distortion with arbitrarily large block lengths of both the source and channel codes when transmission is over discrete memoryless channels. However, the optimality of this approach is no longer ensured for documents of finite length and limitations on the length of the encoding. We will show in this scenario that we can achieve lower word error rates by developing a deep learning based encoder and decoder. While the approach of separate source and channel coding would minimize bit error rates, our approach preserves semantic information of sentences by first embedding sentences in a semantic space where sentences closer in meaning are located closer together, and then performing joint source and channel coding on these embeddings. |
Tasks | |
Published | 2018-02-19 |
URL | http://arxiv.org/abs/1802.06832v1 |
http://arxiv.org/pdf/1802.06832v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-learning-for-joint-source-channel-coding |
Repo | |
Framework | |
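The core intuition — map sentences to points so that channel noise moves you to a nearby meaning rather than a scrambled bitstream — can be shown with a toy nearest-neighbour decoder. The embeddings, codebook, and Gaussian channel are invented stand-ins for the paper's learned encoder/decoder:

```python
import random

CODEBOOK = {                     # invented 2-D "semantic" embeddings
    "the cat sat": (0.0, 0.0),
    "the cat slept": (0.1, 0.1),
    "stock prices fell": (5.0, 5.0),
}

def transmit(sentence, noise=0.2, seed=1):
    """Send a sentence through an additive-noise channel, decode by nearest codeword."""
    rng = random.Random(seed)
    x, y = CODEBOOK[sentence]
    rx = (x + rng.gauss(0, noise), y + rng.gauss(0, noise))
    # Decoding errors land on semantically close sentences, not gibberish.
    return min(CODEBOOK, key=lambda s: (CODEBOOK[s][0] - rx[0]) ** 2
                                       + (CODEBOOK[s][1] - rx[1]) ** 2)

print(transmit("the cat sat"))
```

Sentences far apart in meaning are far apart in the embedding space, so they are rarely confused; nearby sentences may be swapped under heavy noise, which is exactly the graceful degradation (low word error, preserved semantics) the paper targets.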
Two Birds with One Network: Unifying Failure Event Prediction and Time-to-failure Modeling
Title | Two Birds with One Network: Unifying Failure Event Prediction and Time-to-failure Modeling |
Authors | Karan Aggarwal, Onur Atan, Ahmed Farahat, Chi Zhang, Kosta Ristovski, Chetan Gupta |
Abstract | One of the key challenges in predictive maintenance is to predict the impending downtime of an equipment with a reasonable prediction horizon so that countermeasures can be put in place. Classically, this problem has been posed in two different ways which are typically solved independently: (1) Remaining useful life (RUL) estimation as a long-term prediction task to estimate how much time is left in the useful life of the equipment and (2) Failure prediction (FP) as a short-term prediction task to assess the probability of a failure within a pre-specified time window. As these two tasks are related, performing them separately is sub-optimal and might result in inconsistent predictions for the same equipment. In order to alleviate these issues, we propose two methods: Deep Weibull model (DW-RNN) and multi-task learning (MTL-RNN). DW-RNN is able to learn the underlying failure dynamics by fitting Weibull distribution parameters using a deep neural network, learned with a survival likelihood, without training directly on each task. While DW-RNN makes an explicit assumption on the data distribution, MTL-RNN exploits the implicit relationship between the long-term RUL and short-term FP tasks to learn the underlying distribution. Additionally, both our methods can leverage the non-failed equipment data for RUL estimation. We demonstrate that our methods consistently outperform baseline RUL methods that can be used for FP while producing consistent results for RUL and FP. We also show that our methods perform on par with baselines trained on the objectives optimized for either of the two tasks. |
Tasks | Multi-Task Learning |
Published | 2018-12-18 |
URL | http://arxiv.org/abs/1812.07142v1 |
http://arxiv.org/pdf/1812.07142v1.pdf | |
PWC | https://paperswithcode.com/paper/two-birds-with-one-network-unifying-failure |
Repo | |
Framework | |
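The survival likelihood behind DW-RNN is the censored Weibull log-likelihood: failed equipment contributes the density, still-running (censored) equipment contributes the survival function — which is how non-failed data enters RUL training. A plain-Python version of that loss term (the network producing the shape/scale parameters is omitted, and the example numbers are invented):

```python
import math

def weibull_nll(t, failed, shape, scale):
    """Negative log-likelihood of one unit observed until time t (Weibull)."""
    z = (t / scale) ** shape
    log_surv = -z                                   # log S(t) = -(t/scale)^shape
    if failed:
        # log f(t) = log k - k*log(scale) + (k-1)*log t + log S(t)
        return -(math.log(shape) - shape * math.log(scale)
                 + (shape - 1) * math.log(t) + log_surv)
    return -log_surv                                # censored: -log S(t)

# A unit still running at t=5 with scale 100 is cheap; the same observation
# labelled as a failure costs far more, which is what drives the fit.
print(weibull_nll(5.0, failed=False, shape=1.5, scale=100.0))
print(weibull_nll(5.0, failed=True, shape=1.5, scale=100.0))
```

Once shape and scale are fit per unit, both outputs fall out of one model: RUL from the distribution's expectation, FP from the failure probability within the window.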
Letting Emotions Flow: Success Prediction by Modeling the Flow of Emotions in Books
Title | Letting Emotions Flow: Success Prediction by Modeling the Flow of Emotions in Books |
Authors | Suraj Maharjan, Sudipta Kar, Manuel Montes-y-Gomez, Fabio A. Gonzalez, Thamar Solorio |
Abstract | Books have the power to make us feel happiness, sadness, pain, surprise, or sorrow. An author’s dexterity in the use of these emotions captivates readers and makes it difficult for them to put the book down. In this paper, we model the flow of emotions over a book using recurrent neural networks and quantify its usefulness in predicting success in books. We obtained the best weighted F1-score of 69% for predicting books’ success in a multitask setting (simultaneously predicting success and genre of books). |
Tasks | |
Published | 2018-05-24 |
URL | http://arxiv.org/abs/1805.09746v2 |
http://arxiv.org/pdf/1805.09746v2.pdf | |
PWC | https://paperswithcode.com/paper/letting-emotions-flow-success-prediction-by |
Repo | |
Framework | |
Comment on “All-optical machine learning using diffractive deep neural networks”
Title | Comment on “All-optical machine learning using diffractive deep neural networks” |
Authors | Haiqing Wei, Gang Huang, Xiuqing Wei, Yanlong Sun, Hongbin Wang |
Abstract | Lin et al. (Reports, 7 September 2018, p. 1004) reported a remarkable proposal that employs a passive, strictly linear optical setup to perform pattern classifications. But interpreting the multilayer diffractive setup as a deep neural network and advocating it as an all-optical deep learning framework are not well justified and represent a mischaracterization of the system by overlooking its defining characteristics of perfect linearity and strict passivity. |
Tasks | |
Published | 2018-09-22 |
URL | http://arxiv.org/abs/1809.08360v2 |
http://arxiv.org/pdf/1809.08360v2.pdf | |
PWC | https://paperswithcode.com/paper/comment-on-all-optical-machine-learning-using |
Repo | |
Framework | |
Single-Label Multi-Class Image Classification by Deep Logistic Regression
Title | Single-Label Multi-Class Image Classification by Deep Logistic Regression |
Authors | Qi Dong, Xiatian Zhu, Shaogang Gong |
Abstract | The objective learning formulation is essential for the success of convolutional neural networks. In this work, we thoroughly analyse the standard learning objective functions for multi-class classification CNNs: softmax regression (SR) for the single-label scenario and logistic regression (LR) for the multi-label scenario. Our analyses inspire us to exploit LR for single-label classification learning, and lead to the discovery of the negative class distraction problem in LR. To address this problem, we develop two novel LR based objective functions that not only generalise the conventional LR but, importantly, turn out to be competitive alternatives to SR in single-label classification. Extensive comparative evaluations demonstrate the model learning advantages of the proposed LR functions over the commonly adopted SR in single-label coarse-grained object categorisation and cross-class fine-grained person instance identification tasks. We also show the performance superiority of our method on clothing attribute classification in comparison to the vanilla LR function. |
Tasks | Image Classification |
Published | 2018-11-20 |
URL | https://arxiv.org/abs/1811.08400v2 |
https://arxiv.org/pdf/1811.08400v2.pdf | |
PWC | https://paperswithcode.com/paper/single-label-multi-class-image-classification |
Repo | |
Framework | |
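The two baseline objectives the paper contrasts can be put side by side on one example: SR couples all classes through a softmax, while LR scores every class with an independent sigmoid — so each negative class contributes its own term, which is where the negative class distraction enters. This sketch shows only the two baselines, not the paper's remedies:

```python
import math

def softmax_ce(logits, y):
    """Softmax regression (SR) loss: log Z - logit of the true class."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[y]

def sum_logistic(logits, y):
    """One-vs-rest logistic regression (LR) loss over all classes."""
    loss = 0.0
    for k, x in enumerate(logits):
        target = (k == y)
        loss += math.log(1 + math.exp(-x if target else x))
    return loss

logits = [3.0, -1.0, -1.0, -1.0]     # confident, correct prediction
print(round(softmax_ce(logits, 0), 3), round(sum_logistic(logits, 0), 3))
```

Even on a confidently correct example, the LR loss stays larger because the many negative classes keep contributing gradient; the paper's modified LR objectives damp exactly that effect.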
Generalized Canonical Polyadic Tensor Decomposition
Title | Generalized Canonical Polyadic Tensor Decomposition |
Authors | David Hong, Tamara G. Kolda, Jed A. Duersch |
Abstract | Tensor decomposition is a fundamental unsupervised machine learning method in data science, with applications including network analysis and sensor data processing. This work develops a generalized canonical polyadic (GCP) low-rank tensor decomposition that allows other loss functions besides squared error. For instance, we can use logistic loss or Kullback-Leibler divergence, enabling tensor decomposition for binary or count data. We present a variety of statistically motivated loss functions for various scenarios. We provide a generalized framework for computing gradients and handling missing data that enables the use of standard optimization methods for fitting the model. We demonstrate the flexibility of GCP on several real-world examples including interactions in a social network, neural activity in a mouse, and monthly rainfall measurements in India. |
Tasks | |
Published | 2018-08-22 |
URL | http://arxiv.org/abs/1808.07452v2 |
http://arxiv.org/pdf/1808.07452v2.pdf | |
PWC | https://paperswithcode.com/paper/generalized-canonical-polyadic-tensor |
Repo | |
Framework | |
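GCP's central move is making the elementwise loss pluggable: the model is still a low-rank CP reconstruction, but each entry is scored by a loss f(x, m) chosen for the data type. A rank-1, two-way sketch with a Bernoulli (odds-link) loss for binary data — the factors and data below are invented, and a real fit would optimize the factors rather than fix them:

```python
import math

def cp_rank1(a, b):
    """Rank-1 CP reconstruction m[i][j] = a[i] * b[j] (the model parameter)."""
    return [[ai * bj for bj in b] for ai in a]

def gcp_objective(X, M, f):
    """GCP objective: sum of the chosen elementwise loss over all entries."""
    return sum(f(x, m) for x_row, m_row in zip(X, M)
                       for x, m in zip(x_row, m_row))

# Bernoulli loss with odds link: f(x, m) = log(1 + m) - x*log(m), for m > 0.
bernoulli = lambda x, m: math.log(1 + m) - x * math.log(m)

X = [[1, 0], [1, 1]]                     # binary observations
M = cp_rank1([2.0, 3.0], [1.5, 0.5])     # positive rank-1 odds parameters
print(round(gcp_objective(X, M, bernoulli), 4))
```

Swapping `bernoulli` for squared error recovers ordinary CP, and for a KL/Poisson loss gives count-data decomposition — the factors and the optimizer stay the same, only f changes.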