Paper Group ANR 379
Automatic Target Detection for Sparse Hyperspectral Images. A Unified Deep Learning Formalism For Processing Graph Signals. Awareness of Voter Passion Greatly Improves the Distortion of Metric Social Choice. Defending Against Adversarial Attacks Using Random Forests. AgentBuddy: A Contextual Bandit based Decision Support System for Customer Support …
Automatic Target Detection for Sparse Hyperspectral Images
Title | Automatic Target Detection for Sparse Hyperspectral Images |
Authors | Ahmad W. Bitar, Jean-Philippe Ovarlez, Loong-Fah Cheong, Ali Chehab |
Abstract | In this work, a novel target detector for hyperspectral imagery is developed. The detector is independent of the unknown covariance matrix, behaves well in high dimensions, is distribution-free, is invariant to atmospheric effects, and does not require a background dictionary to be constructed. Based on a modification of robust principal component analysis (RPCA), a given hyperspectral image (HSI) is regarded as the sum of a low-rank background HSI and a sparse target HSI that contains the targets, based on a pre-learned target dictionary specified by the user. The sparse component is used directly for detection: the targets are simply detected at the non-zero entries of the sparse target HSI. The resulting detector is thus a sparse HSI generated automatically from the original HSI, containing only the targets with the background suppressed. The detector is evaluated in real experiments, the results of which demonstrate its effectiveness for hyperspectral target detection, especially when the targets are well matched to the surroundings. |
Tasks | |
Published | 2019-04-14 |
URL | https://arxiv.org/abs/1904.09030v3 |
https://arxiv.org/pdf/1904.09030v3.pdf | |
PWC | https://paperswithcode.com/paper/190409030 |
Repo | |
Framework | |
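The low-rank-plus-sparse decomposition described in the abstract can be sketched with a plain alternating singular-value-thresholding loop. This is a generic RPCA-style sketch, not the authors' modified detector; `lam`, `tau`, and the synthetic data are illustrative choices.

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def lowrank_sparse_split(M, lam=0.1, tau=1.0, n_iter=50):
    """Alternately shrink singular values (low-rank background L)
    and soft-threshold the residual (sparse component S)."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * soft_threshold(s, tau)) @ Vt   # U @ diag(shrunk s) @ Vt
        S = soft_threshold(M - L, lam)
    return L, S

# demo on a synthetic "image": rank-1 background plus a few target spikes
rng = np.random.default_rng(0)
background = np.outer(rng.normal(size=20), rng.normal(size=20))
spikes = np.zeros((20, 20))
spikes[3, 4] = 5.0
spikes[10, 12] = -4.0
L, S = lowrank_sparse_split(background + spikes)
```

In the paper's setting, detection then amounts to reading off the non-zero entries of `S`.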
A Unified Deep Learning Formalism For Processing Graph Signals
Title | A Unified Deep Learning Formalism For Processing Graph Signals |
Authors | Myriam Bontonou, Carlos Lassance, Jean-Charles Vialatte, Vincent Gripon |
Abstract | Convolutional Neural Networks are very efficient at processing signals defined on a discrete Euclidean space (such as images). However, as they cannot be used on signals defined on an arbitrary graph, other models have emerged that aim to extend their properties. We review some of the major deep learning models designed to exploit the underlying graph structure of signals, expressing them in a unified formalism that gives them a new, comparative reading. |
Tasks | |
Published | 2019-05-01 |
URL | http://arxiv.org/abs/1905.00496v1 |
http://arxiv.org/pdf/1905.00496v1.pdf | |
PWC | https://paperswithcode.com/paper/a-unified-deep-learning-formalism-for |
Repo | |
Framework | |
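The models such a unified formalism covers share a common propagation step; below is a minimal sketch of one symmetrically normalized graph-convolution layer. This is a generic GCN-style layer for intuition, not the paper's formalism itself; the toy graph and weights are illustrative.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: A_hat = D^{-1/2}(A + I)D^{-1/2}
    propagation, followed by a linear map and a ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# tiny 3-node path graph, 2 input features, 4 output features
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.array([[1., 0.], [0., 1.], [1., 1.]])
W = np.ones((2, 4)) * 0.5
out = gcn_layer(A, H, W)
```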
Awareness of Voter Passion Greatly Improves the Distortion of Metric Social Choice
Title | Awareness of Voter Passion Greatly Improves the Distortion of Metric Social Choice |
Authors | Ben Abramowitz, Elliot Anshelevich, Wennan Zhu |
Abstract | We develop new voting mechanisms for the case when voters and candidates are located in an arbitrary unknown metric space, and the goal is to choose a candidate minimizing social cost: the total distance from the voters to this candidate. Previous work has often assumed that only ordinal preferences of the voters are known (instead of their true costs), and focused on minimizing distortion: the quality of the chosen candidate as compared with the best possible candidate. In this paper, we instead assume that a (very small) amount of information is known about the voter preference strengths, not just about their ordinal preferences. We provide mechanisms with much better distortion when this extra information is known as compared to mechanisms which use only ordinal information. We quantify tradeoffs between the amount of information known about preference strengths and the achievable distortion. We further provide advice about which type of information about preference strengths seems to be the most useful. Finally, we conclude by quantifying the ideal candidate distortion, which compares the quality of the chosen outcome with the best possible candidate that could ever exist, instead of only the best candidate that is actually in the running. |
Tasks | |
Published | 2019-06-25 |
URL | https://arxiv.org/abs/1906.10562v1 |
https://arxiv.org/pdf/1906.10562v1.pdf | |
PWC | https://paperswithcode.com/paper/awareness-of-voter-passion-greatly-improves |
Repo | |
Framework | |
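The social cost and distortion notions from the abstract are easy to make concrete. The metric `dist` and the voter/candidate positions below are illustrative, not from the paper.

```python
def social_cost(candidate, voters, dist):
    """Total distance from all voters to a candidate."""
    return sum(dist(v, candidate) for v in voters)

def distortion(chosen, candidates, voters, dist):
    """Ratio of the chosen candidate's social cost to the optimum's
    (over candidates actually in the running)."""
    best = min(social_cost(c, voters, dist) for c in candidates)
    return social_cost(chosen, voters, dist) / best

# toy line metric: voters at 0, 1, 10; candidates at 1 and 8
dist = lambda a, b: abs(a - b)
d = distortion(8, [1, 8], [0, 1, 10], dist)
```

The paper's "ideal candidate distortion" would instead take the minimum over every point of the metric space, not just the candidates in the running.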
Defending Against Adversarial Attacks Using Random Forests
Title | Defending Against Adversarial Attacks Using Random Forests |
Authors | Yifan Ding, Liqiang Wang, Huan Zhang, Jinfeng Yi, Deliang Fan, Boqing Gong |
Abstract | As deep neural networks (DNNs) have become increasingly important and popular, the robustness of DNNs is the key to the safety of both the Internet and the physical world. Unfortunately, some recent studies show that adversarial examples, which are hard to distinguish from real examples, can easily fool DNNs and manipulate their predictions. Upon observing that adversarial examples are mostly generated by gradient-based methods, in this paper, we first propose to use a simple yet very effective non-differentiable hybrid model that combines DNNs and random forests, rather than hiding gradients from attackers, to defend against the attacks. Our experiments show that our model can successfully and completely defend against white-box attacks, exhibits lower transferability, and is quite resistant to three representative types of black-box attacks, while at the same time achieving classification accuracy similar to the original DNNs. Finally, we investigate and suggest a criterion for deciding where to grow random forests in DNNs. |
Tasks | |
Published | 2019-06-16 |
URL | https://arxiv.org/abs/1906.06765v1 |
https://arxiv.org/pdf/1906.06765v1.pdf | |
PWC | https://paperswithcode.com/paper/defending-against-adversarial-attacks-using |
Repo | |
Framework | |
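To illustrate the non-differentiable-model idea, here is a toy forest of bootstrap-trained decision stumps with majority voting, applied directly to raw features. The paper grows random forests on DNN feature maps; this numpy-only stand-in only shows the gradient-free voting mechanism, and all names and parameters are illustrative.

```python
import numpy as np

def fit_stump(X, y):
    """Best single-feature threshold split by training error."""
    best = (0, 0.0, 1, np.inf)              # (feature, threshold, sign, error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, j] - t) > 0, 1, 0)
                err = np.mean(pred != y)
                if err < best[3]:
                    best = (j, t, sign, err)
    return best[:3]

def forest_fit(X, y, n_trees=11, seed=0):
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))   # bootstrap sample
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def forest_predict(stumps, X):
    """Majority vote over stumps -- no gradient to attack."""
    votes = np.array([np.where(s * (X[:, j] - t) > 0, 1, 0)
                      for (j, t, s) in stumps])
    return (votes.mean(axis=0) > 0.5).astype(int)

# separable toy data: class determined by the sign of the first feature
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 3))
y = (X[:, 0] > 0).astype(int)
stumps = forest_fit(X, y)
acc = np.mean(forest_predict(stumps, X) == y)
```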
AgentBuddy: A Contextual Bandit based Decision Support System for Customer Support Agents
Title | AgentBuddy: A Contextual Bandit based Decision Support System for Customer Support Agents |
Authors | Hrishikesh Ganu, Mithun Ghosh, Shashi Roshan |
Abstract | In this short paper, we present early insights from a Decision Support System for Customer Support Agents (CSAs) serving customers of a leading accounting software. The system is under development and is designed to provide suggestions to CSAs to make them more productive. A unique aspect of the solution is the use of bandit algorithms to create a tractable human-in-the-loop system that can learn from CSAs in an online fashion. In addition to discussing the ML aspects, we also bring out important insights we gleaned from early feedback from CSAs. These insights motivate our future work and also might be of wider interest to ML practitioners. |
Tasks | |
Published | 2019-02-24 |
URL | http://arxiv.org/abs/1903.03512v1 |
http://arxiv.org/pdf/1903.03512v1.pdf | |
PWC | https://paperswithcode.com/paper/agentbuddy-a-contextual-bandit-based-decision |
Repo | |
Framework | |
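A human-in-the-loop suggestion system of this kind can be approximated by a contextual bandit with per-arm reward models. Below is a minimal epsilon-greedy sketch with ridge-regression arms; the class name, parameters, and simulated feedback loop are assumptions for illustration, not AgentBuddy's actual design.

```python
import numpy as np

class EpsGreedyLinearBandit:
    """Per-arm ridge-regression reward model with epsilon-greedy exploration."""
    def __init__(self, n_arms, dim, eps=0.1, lam=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.eps = eps
        self.A = [lam * np.eye(dim) for _ in range(n_arms)]  # X^T X + lam*I
        self.b = [np.zeros(dim) for _ in range(n_arms)]      # X^T r

    def choose(self, x):
        if self.rng.random() < self.eps:
            return int(self.rng.integers(len(self.A)))       # explore
        scores = [x @ np.linalg.solve(A, b) for A, b in zip(self.A, self.b)]
        return int(np.argmax(scores))                        # exploit

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# simulation: arm 0 pays off for contexts [1,0], arm 1 for contexts [0,1]
bandit = EpsGreedyLinearBandit(n_arms=2, dim=2)
rng = np.random.default_rng(2)
for _ in range(500):
    x = np.array([1.0, 0.0]) if rng.random() < 0.5 else np.array([0.0, 1.0])
    arm = bandit.choose(x)
    reward = 1.0 if arm == int(x[1]) else 0.0   # CSA feedback stand-in
    bandit.update(arm, x, reward)
```

The online update is what makes such a system able to learn from CSA feedback as it arrives.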
Optimizing Majority Voting Based Systems Under a Resource Constraint for Multiclass Problems
Title | Optimizing Majority Voting Based Systems Under a Resource Constraint for Multiclass Problems |
Authors | Attila Tiba, Andras Hajdu, Gyorgy Terdik, Henrietta Toman |
Abstract | Ensemble-based approaches are very effective in various fields at raising the accuracy beyond that of their individual members when some voting rule is applied to aggregate the individual decisions. In this paper, we investigate how to find and characterize the ensembles with the highest accuracy when the total cost of the ensemble members is bounded. With majority voting as the aggregation rule, this question leads to a Knapsack problem with a non-linear and non-separable objective function in both binary and multiclass classification. As conventional solution methods cannot be applied to this task, a novel stochastic approach was previously introduced for the binary case, in which the energy function is treated as the joint probability function of the member accuracies. We present theoretical results on the expected ensemble accuracy and its variance in the multiclass classification problem that can help solve this Knapsack problem. |
Tasks | |
Published | 2019-04-08 |
URL | http://arxiv.org/abs/1904.04360v1 |
http://arxiv.org/pdf/1904.04360v1.pdf | |
PWC | https://paperswithcode.com/paper/optimizing-majority-voting-based-systems |
Repo | |
Framework | |
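For small ensembles, the Knapsack question in the abstract can be answered by brute force: compute the majority-voting accuracy of independent members and search odd-sized subsets within the budget. This is a naive exponential sketch for intuition, not the paper's stochastic method, and independence of the members is an assumption.

```python
from itertools import combinations

def majority_accuracy(ps):
    """P(majority of independent members is correct), odd ensemble size."""
    n = len(ps)
    acc = 0.0
    for k in range(2 ** n):                      # enumerate correctness patterns
        correct = [(k >> i) & 1 for i in range(n)]
        if sum(correct) > n / 2:
            prob = 1.0
            for c, p in zip(correct, ps):
                prob *= p if c else (1 - p)
            acc += prob
    return acc

def best_ensemble(accs, costs, budget):
    """Exhaustive search over odd-sized member subsets within the cost budget."""
    best = (0.0, ())
    items = range(len(accs))
    for size in range(1, len(accs) + 1, 2):      # odd sizes avoid ties
        for subset in combinations(items, size):
            if sum(costs[i] for i in subset) <= budget:
                a = majority_accuracy([accs[i] for i in subset])
                if a > best[0]:
                    best = (a, subset)
    return best

# one expensive accurate member vs. three cheap weak ones, budget 3
acc, members = best_ensemble([0.9, 0.6, 0.6, 0.6], [3, 1, 1, 1], budget=3)
```

Here three independent 0.6-accurate voters reach only 0.648 under majority voting, so the single 0.9 member wins the budget.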
Stochastic Optimization for Non-convex Inf-Projection Problems
Title | Stochastic Optimization for Non-convex Inf-Projection Problems |
Authors | Yan Yan, Yi Xu, Lijun Zhang, Xiaoyu Wang, Tianbao Yang |
Abstract | In this paper, we study a family of non-convex and possibly non-smooth inf-projection minimization problems, where the target objective function is obtained by minimizing a joint function over another variable. This problem includes difference of convex (DC) functions and a family of bi-convex functions as special cases. We develop stochastic algorithms and establish their first-order convergence for finding a (nearly) stationary solution of the target non-convex function under different conditions on the component functions. To the best of our knowledge, this is the first work that comprehensively studies stochastic optimization of non-convex inf-projection minimization problems with provable convergence guarantees. Our algorithms enable efficient stochastic optimization of a family of non-decomposable DC functions and a family of bi-convex functions. To demonstrate the power of the proposed algorithms, we consider an important application in variance-based regularization, and experiments verify the effectiveness of our inf-projection based formulation and the proposed stochastic algorithm in comparison with previous stochastic algorithms based on the min-max formulation for achieving the same effect. |
Tasks | Stochastic Optimization |
Published | 2019-08-26 |
URL | https://arxiv.org/abs/1908.09941v1 |
https://arxiv.org/pdf/1908.09941v1.pdf | |
PWC | https://paperswithcode.com/paper/stochastic-optimization-for-non-convex-inf |
Repo | |
Framework | |
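The inf-projection structure in the abstract can be written out explicitly; the DC special case follows from a standard convex-conjugate identity (the notation below is generic, not necessarily the paper's):

```latex
\min_{x}\; f(x), \qquad f(x) \;=\; \min_{y \in \mathcal{Y}} g(x, y).
% DC special case: f(x) = f_1(x) - f_2(x) with f_2 convex.
% Using f_2(x) = \sup_y \langle x, y\rangle - f_2^*(y), the DC objective
% becomes an inf-projection:
f(x) \;=\; \min_{y}\; \bigl[\, f_1(x) + f_2^*(y) - \langle x, y \rangle \,\bigr].
```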
Flatter is better: Percentile Transformations for Recommender Systems
Title | Flatter is better: Percentile Transformations for Recommender Systems |
Authors | Masoud Mansoury, Robin Burke, Bamshad Mobasher |
Abstract | It is well known that explicit user ratings in recommender systems are biased towards high ratings, and that users differ significantly in their usage of the rating scale. Implementers usually compensate for these issues through rating normalization or the inclusion of a user bias term in factorization models. However, these methods adjust only for the central tendency of users' distributions. In this work, we demonstrate that lack of flatness in rating distributions is negatively correlated with recommendation performance. We propose a rating transformation model that compensates for skew in the rating distribution as well as its central tendency by converting ratings into percentile values as a pre-processing step before recommendation generation. This transformation flattens the rating distribution, better compensates for differences in rating distributions, and improves recommendation performance. We also show a smoothed version of this transformation designed to yield more intuitive results for users with very narrow rating distributions. A comprehensive set of experiments shows improved ranking performance for these percentile transformations with state-of-the-art recommendation algorithms on four real-world data sets. |
Tasks | Recommendation Systems |
Published | 2019-07-10 |
URL | https://arxiv.org/abs/1907.07766v1 |
https://arxiv.org/pdf/1907.07766v1.pdf | |
PWC | https://paperswithcode.com/paper/flatter-is-better-percentile-transformations |
Repo | |
Framework | |
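The percentile pre-processing step can be sketched with a plain empirical CDF per user. The paper's exact percentile definition and its smoothed variant may differ; the data below is illustrative.

```python
def percentile_transform(ratings):
    """Map each user's ratings to percentile values in (0, 100],
    flattening the per-user rating distribution before recommendation."""
    out = {}
    for user, items in ratings.items():
        vals = sorted(items.values())
        n = len(vals)
        out[user] = {
            item: 100.0 * sum(v <= r for v in vals) / n   # fraction at or below r
            for item, r in items.items()
        }
    return out

# a user who rates everything high: percentiles spread the ratings out
prefs = {"u1": {"a": 5, "b": 5, "c": 3, "d": 1}}
flat = percentile_transform(prefs)
```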
Continuous Online Learning and New Insights to Online Imitation Learning
Title | Continuous Online Learning and New Insights to Online Imitation Learning |
Authors | Jonathan Lee, Ching-An Cheng, Ken Goldberg, Byron Boots |
Abstract | Online learning is a powerful tool for analyzing iterative algorithms. However, the classic adversarial setup sometimes fails to capture certain regularity in online problems in practice. Motivated by this, we establish a new setup, called Continuous Online Learning (COL), where the gradient of online loss function changes continuously across rounds with respect to the learner’s decisions. We show that COL covers and more appropriately describes many interesting applications, from general equilibrium problems (EPs) to optimization in episodic MDPs. Using this new setup, we revisit the difficulty of achieving sublinear dynamic regret. We prove that there is a fundamental equivalence between achieving sublinear dynamic regret in COL and solving certain EPs, and we present a reduction from dynamic regret to both static regret and convergence rate of the associated EP. At the end, we specialize these new insights into online imitation learning and show improved understanding of its learning stability. |
Tasks | Imitation Learning |
Published | 2019-12-03 |
URL | https://arxiv.org/abs/1912.01261v1 |
https://arxiv.org/pdf/1912.01261v1.pdf | |
PWC | https://paperswithcode.com/paper/continuous-online-learning-and-new-insights |
Repo | |
Framework | |
A smartphone application to detection and classification of coffee leaf miner and coffee leaf rust
Title | A smartphone application to detection and classification of coffee leaf miner and coffee leaf rust |
Authors | Giuliano L. Manso, Helder Knidel, Renato A. Krohling, Jose A. Ventura |
Abstract | Generally, the identification and classification of plant diseases and/or pests are performed by an expert. One of the problems facing coffee farmers in Brazil is crop infestation, particularly by the leaf rust Hemileia vastatrix and the leaf miner Leucoptera coffeella. The progression of these diseases and pests occurs spatially and temporally, so it is very important to identify their degree of severity automatically. The main goal of this article is the development of a method, and its implementation as an app, that detects foliar damage in images of coffee leaves captured with a smartphone, identifies whether it is rust or leaf miner, and computes its degree of severity. The method identifies a leaf in the image and separates it from the background with a segmentation algorithm. In the segmentation process, various types of image backgrounds are tested using the HSV and YCbCr color spaces. For segmenting foliar damage, the Otsu algorithm and an iterative threshold algorithm in the YCgCr color space are used and compared with k-means. Next, features of the segmented foliar damage are computed. For classification, artificial neural networks trained with extreme learning machines are used. The results show the feasibility and effectiveness of the approach for identifying and classifying foliar damage and automatically computing its severity, and are very promising according to experts. |
Tasks | |
Published | 2019-03-19 |
URL | http://arxiv.org/abs/1904.00742v1 |
http://arxiv.org/pdf/1904.00742v1.pdf | |
PWC | https://paperswithcode.com/paper/a-smartphone-application-to-detection-and |
Repo | |
Framework | |
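The Otsu step used for damage segmentation is easy to reproduce on a grayscale intensity array. This is a textbook Otsu implementation on a synthetic bimodal "image", not the paper's YCgCr pipeline; the intensities are illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# bimodal toy "leaf": 300 dark lesion pixels (~40) on a bright leaf (~200)
img = np.concatenate([np.full(300, 40), np.full(700, 200)]).astype(float)
t = otsu_threshold(img)
mask = img < t        # lesion mask; severity could be mask.mean()
```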
A Large RGB-D Dataset for Semi-supervised Monocular Depth Estimation
Title | A Large RGB-D Dataset for Semi-supervised Monocular Depth Estimation |
Authors | Jaehoon Cho, Dongbo Min, Youngjung Kim, Kwanghoon Sohn |
Abstract | The recent advance of monocular depth estimation is largely based on deeply nested convolutional networks, combined with supervised training. However, it still remains arduous to collect large-scale ground truth depth (or disparity) maps for supervising the networks. This paper presents a simple yet effective semi-supervised approach for monocular depth estimation. Inspired by the human visual system, we propose a student-teacher strategy in which a shallow student network is trained with auxiliary information obtained from a deeper and more accurate teacher network. Specifically, we first train the stereo teacher network, fully utilizing the binocular perception of 3D geometry, and then use the teacher network's depth predictions to supervise the student network for monocular depth inference. This enables us to exploit all available depth data from massive unlabeled stereo pairs that are relatively easy to obtain. We further introduce a data ensemble strategy that merges multiple depth predictions of the teacher network to improve the training samples for the student network. Additionally, stereo confidence maps are provided to avoid inaccurate depth estimates being used when supervising the student network. Our new training data, consisting of 1 million outdoor stereo images taken using hand-held stereo cameras, is hosted at the project webpage. Lastly, we demonstrate that the monocular depth estimation network provides feature representations that are suitable for some high-level vision tasks such as semantic segmentation and road detection. Extensive experiments demonstrate the effectiveness and flexibility of the proposed method in various outdoor scenarios. |
Tasks | Depth Estimation, Monocular Depth Estimation, Semantic Segmentation |
Published | 2019-04-23 |
URL | http://arxiv.org/abs/1904.10230v1 |
http://arxiv.org/pdf/1904.10230v1.pdf | |
PWC | https://paperswithcode.com/paper/a-large-rgb-d-dataset-for-semi-supervised |
Repo | |
Framework | |
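The confidence-gated distillation idea can be sketched as a masked L1 loss between student and teacher depth maps. The loss form, threshold, and arrays below are assumptions for illustration; the paper's actual objective may differ.

```python
import numpy as np

def distill_depth_loss(d_student, d_teacher, confidence, thresh=0.5):
    """Confidence-masked L1 distillation loss: the student regresses the
    teacher's (stereo) depth only where the stereo confidence is high."""
    mask = (confidence >= thresh).astype(float)
    return float((mask * np.abs(d_student - d_teacher)).sum()
                 / max(mask.sum(), 1.0))

d_teacher = np.array([[1.0, 2.0], [3.0, 4.0]])
d_student = np.array([[1.5, 2.0], [3.0, 9.0]])
conf = np.array([[1.0, 1.0], [1.0, 0.0]])   # last pixel is unreliable
loss = distill_depth_loss(d_student, d_teacher, conf)
```

The mask discards the wildly wrong prediction at the low-confidence pixel, so it does not pollute the student's training signal.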
C2P2: A Collective Cryptocurrency Up/Down Price Prediction Engine
Title | C2P2: A Collective Cryptocurrency Up/Down Price Prediction Engine |
Authors | Chongyang Bai, Tommy White, Linda Xiao, V. S. Subrahmanian, Ziheng Zhou |
Abstract | We study the problem of predicting whether the price of the 21 most popular cryptocurrencies (according to coinmarketcap.com) will go up or down on day d, using data up to day d-1. Our C2P2 algorithm is the first algorithm to consider the fact that the price of a cryptocurrency c might depend not only on historical prices, sentiments, global stock indices, but also on the prices and predicted prices of other cryptocurrencies. C2P2 therefore does not predict cryptocurrency prices one coin at a time — rather it uses similarity metrics in conjunction with collective classification to compare multiple cryptocurrency features to jointly predict the cryptocurrency prices for all 21 coins considered. We show that our C2P2 algorithm beats out a recent competing 2017 paper by margins varying from 5.1-83% and another Bitcoin-specific prediction paper from 2018 by 16%. In both cases, C2P2 is the winner on all cryptocurrencies considered. Moreover, we experimentally show that the use of similarity metrics within our C2P2 algorithm leads to a direct improvement for 20 out of 21 cryptocurrencies ranging from 0.4% to 17.8%. Without the similarity component, C2P2 still beats competitors on 20 out of 21 cryptocurrencies considered. We show that all these results are statistically significant via a Student’s t-test with p<1e-5. Check our demo at https://www.cs.dartmouth.edu/dsail/demos/c2p2 |
Tasks | |
Published | 2019-06-03 |
URL | https://arxiv.org/abs/1906.00564v1 |
https://arxiv.org/pdf/1906.00564v1.pdf | |
PWC | https://paperswithcode.com/paper/190600564 |
Repo | |
Framework | |
Language Graph Distillation for Low-Resource Machine Translation
Title | Language Graph Distillation for Low-Resource Machine Translation |
Authors | Tianyu He, Jiale Chen, Xu Tan, Tao Qin |
Abstract | Neural machine translation on low-resource language is challenging due to the lack of bilingual sentence pairs. Previous works usually solve the low-resource translation problem with knowledge transfer in a multilingual setting. In this paper, we propose the concept of Language Graph and further design a novel graph distillation algorithm that boosts the accuracy of low-resource translations in the graph with forward and backward knowledge distillation. Preliminary experiments on the TED talks multilingual dataset demonstrate the effectiveness of our proposed method. Specifically, we improve the low-resource translation pair by more than 3.13 points in terms of BLEU score. |
Tasks | Machine Translation, Transfer Learning |
Published | 2019-08-17 |
URL | https://arxiv.org/abs/1908.06258v1 |
https://arxiv.org/pdf/1908.06258v1.pdf | |
PWC | https://paperswithcode.com/paper/language-graph-distillation-for-low-resource |
Repo | |
Framework | |
Dice Loss for Data-imbalanced NLP Tasks
Title | Dice Loss for Data-imbalanced NLP Tasks |
Authors | Xiaoya Li, Xiaofei Sun, Yuxian Meng, Junjun Liang, Fei Wu, Jiwei Li |
Abstract | Many NLP tasks such as tagging and machine reading comprehension are faced with a severe data imbalance issue: negative examples significantly outnumber positive examples, and the huge number of background examples (or easy-negative examples) overwhelms the training. The most commonly used cross-entropy (CE) criterion is actually an accuracy-oriented objective, and thus creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function, while at test time the F1 score is more concerned with positive examples. In this paper, we propose to use dice loss in place of the standard cross-entropy objective for data-imbalanced NLP tasks. Dice loss is based on the Sorensen-Dice coefficient or Tversky index, which attaches similar importance to false positives and false negatives, and is more immune to the data-imbalance issue. To further alleviate the dominating influence of easy-negative examples in training, we propose to associate training examples with dynamically adjusted weights that deemphasize easy-negative examples. Theoretical analysis shows that this strategy narrows the gap between the F1 score in evaluation and the dice loss in training. With the proposed training objective, we observe significant performance boosts on a wide range of data-imbalanced NLP tasks. Notably, we achieve SOTA results on CTB5, CTB6 and UD1.4 for the part-of-speech tagging task; SOTA results on CoNLL03, OntoNotes5.0, MSRA and OntoNotes4.0 for the named entity recognition task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification. |
Tasks | Machine Reading Comprehension, Named Entity Recognition, Paraphrase Identification, Part-Of-Speech Tagging, Reading Comprehension |
Published | 2019-11-07 |
URL | https://arxiv.org/abs/1911.02855v1 |
https://arxiv.org/pdf/1911.02855v1.pdf | |
PWC | https://paperswithcode.com/paper/dice-loss-for-data-imbalanced-nlp-tasks |
Repo | |
Framework | |
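A common smoothed per-example dice loss takes the form 1 - (2py + γ) / (p² + y² + γ) for predicted probability p and binary label y; a minimal numpy version follows. The paper's self-adjusting variant additionally down-weights easy examples, which is omitted here, and `gamma` is an illustrative smoothing constant.

```python
import numpy as np

def dice_loss(p, y, gamma=1.0):
    """Smoothed soft dice loss, averaged over examples.
    gamma keeps the ratio well-defined for all-negative examples."""
    num = 2.0 * p * y + gamma
    den = p ** 2 + y ** 2 + gamma
    return float(np.mean(1.0 - num / den))
```

Unlike cross-entropy, a confident true negative (p = 0, y = 0) contributes zero loss, so masses of easy negatives cannot dominate training.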
Model-Free Mean-Field Reinforcement Learning: Mean-Field MDP and Mean-Field Q-Learning
Title | Model-Free Mean-Field Reinforcement Learning: Mean-Field MDP and Mean-Field Q-Learning |
Authors | René Carmona, Mathieu Laurière, Zongjun Tan |
Abstract | We develop a general reinforcement learning framework for mean field control (MFC) problems. Such problems arise for instance as the limit of collaborative multi-agent control problems when the number of agents is very large. The asymptotic problem can be phrased as the optimal control of a non-linear dynamics. This can also be viewed as a Markov decision process (MDP), but the key difference from the usual RL setup is that the dynamics and the reward now depend on the state's probability distribution itself. Alternatively, it can be recast as an MDP on the Wasserstein space of measures. In this work, we introduce generic model-free algorithms based on the state-action value function at the mean field level and we prove convergence for a prototypical Q-learning method. We then implement an actor-critic method and report numerical results on two archetypal problems: a finite space model motivated by a cyber security application and a continuous space model motivated by an application to swarm motion. |
Tasks | Q-Learning |
Published | 2019-10-28 |
URL | https://arxiv.org/abs/1910.12802v1 |
https://arxiv.org/pdf/1910.12802v1.pdf | |
PWC | https://paperswithcode.com/paper/model-free-mean-field-reinforcement-learning |
Repo | |
Framework | |
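The Q-learning baseline underlying the mean-field version can be sketched in tabular form; in the MFC setting the discrete state would be replaced by (a discretization of) the state distribution. The toy MDP below is illustrative, not one of the paper's benchmarks.

```python
import numpy as np

def q_learning(P, R, n_states, n_actions, steps=2000, alpha=0.1,
               gamma=0.9, eps=0.2, seed=0):
    """Plain tabular Q-learning with epsilon-greedy exploration."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    s = 0
    for _ in range(steps):
        if rng.random() < eps:
            a = int(rng.integers(n_actions))          # explore
        else:
            a = int(np.argmax(Q[s]))                  # exploit
        s2 = rng.choice(n_states, p=P[s, a])          # sample next state
        Q[s, a] += alpha * (R[s, a] + gamma * Q[s2].max() - Q[s, a])
        s = s2
    return Q

# 2-state toy chain: action 1 moves toward state 1, which pays reward 1
P = np.array([[[1.0, 0.0], [0.0, 1.0]],    # transitions from state 0
              [[1.0, 0.0], [0.0, 1.0]]])   # transitions from state 1
R = np.array([[0.0, 0.0], [0.0, 1.0]])
Q = q_learning(P, R, n_states=2, n_actions=2)
```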