Paper Group AWR 10
High-Dimensional Feature Selection for Genomic Datasets. Localized Flow-Based Clustering in Hypergraphs. Train Scheduling with Hybrid Answer Set Programming. Regression via Implicit Models and Optimal Transport Cost Minimization. Context Based Emotion Recognition using EMOTIC Dataset. Decentralized Learning for Channel Allocation in IoT Networks ov …
High-Dimensional Feature Selection for Genomic Datasets
Title | High-Dimensional Feature Selection for Genomic Datasets |
Authors | Majid Afshar, Hamid Usefi |
Abstract | In the presence of high-dimensional datasets that contain many irrelevant features (variables), dimensionality reduction algorithms have proven useful in removing features with low variance and combining features with high correlation. In this paper, we propose a new feature selection method which uses singular value decomposition of a matrix and the method of least squares to remove the irrelevant features and detect correlations between the remaining features. The effectiveness of our method has been verified by performing a series of comparisons with state-of-the-art feature selection methods over ten genetic datasets ranging from 9,117 to 267,604 features. The results show that our method compares favorably with state-of-the-art feature selection methods in various respects. |
Tasks | Dimensionality Reduction, Feature Selection |
Published | 2020-02-27 |
URL | https://arxiv.org/abs/2002.12104v1 |
PDF | https://arxiv.org/pdf/2002.12104v1.pdf |
PWC | https://paperswithcode.com/paper/high-dimensional-feature-selection-for |
Repo | https://github.com/majid1292/DRPT |
Framework | none |
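The abstract couples a singular value decomposition with a least-squares fit to score and prune features. Below is a minimal numpy sketch of that general recipe, not the authors' DRPT algorithm; the variance threshold, pseudoinverse tolerance, and absolute-coefficient ranking are assumptions made for illustration.

```python
import numpy as np

def select_features(X, y, variance_tol=1e-8, k=100):
    """Rank features with a least-squares fit computed through the SVD.

    A rough sketch of the SVD + least-squares idea from the abstract,
    not the authors' DRPT algorithm; thresholds and scoring are assumptions.
    """
    # Drop near-constant (low-variance) features first.
    idx = np.flatnonzero(np.var(X, axis=0) > variance_tol)
    Xk = X[:, idx]

    # Least-squares coefficients via the (truncated) SVD pseudoinverse.
    U, s, Vt = np.linalg.svd(Xk, full_matrices=False)
    s_inv = np.zeros_like(s)
    keep = s > s.max() * 1e-10
    s_inv[keep] = 1.0 / s[keep]
    coef = Vt.T @ (s_inv * (U.T @ y))

    # Features with larger absolute coefficients are treated as more relevant.
    order = np.argsort(-np.abs(coef))
    return idx[order[:k]]

# Toy usage: 200 samples, 1000 features, only the first 5 are informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
print(select_features(X, y, k=10))
```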
Localized Flow-Based Clustering in Hypergraphs
Title | Localized Flow-Based Clustering in Hypergraphs |
Authors | Nate Veldt, Austin R. Benson, Jon Kleinberg |
Abstract | Local graph clustering algorithms are designed to efficiently detect small clusters of nodes that are biased to a localized region of a large graph. Although many techniques have been developed for local clustering in graphs, very few algorithms have been designed to detect local clusters in hypergraphs, which better model complex systems involving multiway relationships between data objects. In this paper we present a framework for local clustering in hypergraphs based on minimum cuts and maximum flows. Our approach extends previous research on flow-based local graph clustering, but has been generalized in a number of key ways. First of all, we demonstrate how to incorporate recent results on generalized hypergraph $s$-$t$ cut problems. This allows us to accommodate a wide range of different hypergraph cut functions, which can assign different penalties based on how each hyperedge is split across different clusters. Furthermore, our algorithm comes with a number of attractive theoretical properties in terms of recovering node sets with low hypergraph conductance and hypergraph normalized cut scores. Finally, and most importantly, our method is strongly local, meaning that its runtime depends only on the size of the input set. In practice this allows our method to quickly find localized clusters without exploring an entire input hypergraph. We demonstrate the power of our method in local cluster detection experiments on an Amazon product hypergraph and a Stack Overflow question hypergraph. Although both datasets involve millions of nodes, millions of edges, and a large average hyperedge size, we are able to detect local clusters in a matter of a few seconds or a few minutes, depending on the size of the cluster. |
Tasks | Graph Clustering |
Published | 2020-02-21 |
URL | https://arxiv.org/abs/2002.09441v1 |
PDF | https://arxiv.org/pdf/2002.09441v1.pdf |
PWC | https://paperswithcode.com/paper/localized-flow-based-clustering-in |
Repo | https://github.com/nveldt/HypergraphFlowClustering |
Framework | none |
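The method's guarantees are stated in terms of hypergraph conductance under generalized cut penalties. The snippet below only evaluates the standard all-or-nothing conductance of a candidate node set, i.e., the quantity being localized; it is a brute-force evaluation, not the strongly-local max-flow procedure itself.

```python
from collections import defaultdict

def hypergraph_conductance(hyperedges, S):
    """Conductance of node set S under the all-or-nothing hyperedge cut."""
    S = set(S)
    degree = defaultdict(int)
    for e in hyperedges:
        for v in e:
            degree[v] += 1

    cut = sum(1 for e in hyperedges
              if 0 < len(S.intersection(e)) < len(e))   # hyperedge is split
    vol_S = sum(degree[v] for v in S)
    vol_rest = sum(degree[v] for v in degree if v not in S)
    denom = min(vol_S, vol_rest)
    return cut / denom if denom > 0 else float("inf")

# Toy hypergraph: two loosely connected groups of nodes.
H = [{0, 1, 2}, {1, 2, 3}, {3, 4, 5}, {4, 5, 6}, {2, 4}]
print(hypergraph_conductance(H, {0, 1, 2, 3}))
```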
Train Scheduling with Hybrid Answer Set Programming
Title | Train Scheduling with Hybrid Answer Set Programming |
Authors | Dirk Abels, Julian Jordi, Max Ostrowski, Torsten Schaub, Ambra Toletti, Philipp Wanko |
Abstract | We present a solution to real-world train scheduling problems, involving routing, scheduling, and optimization, based on Answer Set Programming (ASP). To this end, we pursue a hybrid approach that extends ASP with difference constraints to account for fine-grained timing. More precisely, we show by example how the hybrid ASP system clingo[DL] can be used to tackle demanding planning-and-scheduling problems. In particular, we investigate how to boost performance by combining distinct ASP solving techniques, such as approximations and heuristics, with preprocessing and encoding techniques for tackling large-scale, real-world train scheduling instances. (Under consideration in Theory and Practice of Logic Programming (TPLP).) |
Tasks | |
Published | 2020-03-19 |
URL | https://arxiv.org/abs/2003.08598v1 |
PDF | https://arxiv.org/pdf/2003.08598v1.pdf |
PWC | https://paperswithcode.com/paper/train-scheduling-with-hybrid-answer-set |
Repo | https://github.com/potassco/train-scheduling-with-hybrid-asp |
Framework | none |
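clingo[DL] extends ASP with difference constraints of the form x_j - x_i <= c for fine-grained timing. As a rough illustration of what such constraints express, the classical Bellman-Ford reduction below finds a feasible timetable for a toy pair of constraints; it is not the paper's ASP encoding, and the headway numbers are invented.

```python
def solve_difference_constraints(n, constraints):
    """Feasible solution to constraints of the form x_j - x_i <= c.

    Classic reduction: one graph node per variable plus a virtual source,
    edge i -> j with weight c per constraint; Bellman-Ford distances give
    a feasible assignment unless there is a negative cycle. This is only
    an illustration of difference constraints, not a clingo[DL] encoding.
    """
    dist = [0.0] * n                       # virtual source gives every node 0
    edges = list(constraints)
    for _ in range(n):
        changed = False
        for i, j, c in edges:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
                changed = True
        if not changed:
            break
    # One extra pass: any further improvement means a negative cycle.
    for i, j, c in edges:
        if dist[i] + c < dist[j]:
            return None                    # infeasible
    return dist                            # dist[k] is a feasible x_k

# Two trains on a shared segment: depart(1) >= depart(0) + 3 (headway),
# i.e. x_0 - x_1 <= -3, and depart(1) <= depart(0) + 10, i.e. x_1 - x_0 <= 10.
print(solve_difference_constraints(2, [(1, 0, -3), (0, 1, 10)]))
```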
Regression via Implicit Models and Optimal Transport Cost Minimization
Title | Regression via Implicit Models and Optimal Transport Cost Minimization |
Authors | Saurav Manchanda, Khoa Doan, Pranjul Yadav, S. Sathiya Keerthi |
Abstract | This paper addresses the classic problem of regression, which involves the inductive learning of a map, $y=f(x,z)$, with $z$ denoting noise, $f:\mathbb{R}^n\times \mathbb{R}^k \rightarrow \mathbb{R}^m$. Recently, the Conditional GAN (CGAN) has been applied to regression and has been shown to be advantageous over other standard approaches like Gaussian Process Regression, given its ability to implicitly model complex noise forms. However, the current CGAN implementation for regression uses the classical generator-discriminator architecture with the minimax optimization approach, which is notorious for being difficult to train due to issues like training instability or failure to converge. In this paper, we take another step towards regression models that implicitly model the noise, and propose a solution which directly optimizes the optimal transport cost between the true probability distribution $p(y \mid x)$ and the estimated distribution $\hat{p}(y \mid x)$ and does not suffer from the issues associated with the minimax approach. On a variety of synthetic and real-world datasets, our proposed solution achieves state-of-the-art results. The code accompanying this paper is available at https://github.com/gurdaspuriya/ot_regression. |
Tasks | |
Published | 2020-03-03 |
URL | https://arxiv.org/abs/2003.01296v1 |
PDF | https://arxiv.org/pdf/2003.01296v1.pdf |
PWC | https://paperswithcode.com/paper/regression-via-implicit-models-and-optimal |
Repo | https://github.com/gurdaspuriya/ot_regression |
Framework | none |
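The key move in the abstract is replacing the adversarial minimax objective with an optimal transport cost. For scalar outputs, the empirical 1-Wasserstein distance between two equal-size samples reduces to a sort, which the sketch below computes; the paper's full conditional objective over $p(y \mid x)$ and its implicit model $y=f(x,z)$ are not reproduced here.

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical 1-Wasserstein distance between two equal-size 1D samples.

    For scalar outputs the optimal transport plan simply matches sorted
    values, so the cost is the mean absolute difference after sorting.
    This illustrates the transport cost only; the paper trains an implicit
    model against the conditional distribution, which this sketch omits.
    """
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    return np.mean(np.abs(a - b))

rng = np.random.default_rng(0)
real = rng.normal(loc=0.0, scale=1.0, size=512)
fake = rng.normal(loc=0.5, scale=1.0, size=512)
print(wasserstein_1d(real, fake))   # close to the mean shift of 0.5
```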
Context Based Emotion Recognition using EMOTIC Dataset
Title | Context Based Emotion Recognition using EMOTIC Dataset |
Authors | Ronak Kosti, Jose M. Alvarez, Adria Recasens, Agata Lapedriza |
Abstract | In our everyday lives and social interactions we often try to perceive the emotional states of people. There has been a lot of research in providing machines with a similar capacity for recognizing emotions. From a computer vision perspective, most previous efforts have focused on analyzing facial expressions and, in some cases, also the body pose. Some of these methods work remarkably well in specific settings. However, their performance is limited in natural, unconstrained environments. Psychological studies show that the scene context, in addition to facial expression and body pose, provides important information to our perception of people’s emotions. However, the processing of the context for automatic emotion recognition has not been explored in depth, partly due to the lack of proper data. In this paper we present EMOTIC, a dataset of images of people in a diverse set of natural situations, annotated with their apparent emotion. The EMOTIC dataset combines two different types of emotion representation: (1) a set of 26 discrete categories, and (2) the continuous dimensions Valence, Arousal, and Dominance. We also present a detailed statistical and algorithmic analysis of the dataset along with an annotators’ agreement analysis. Using the EMOTIC dataset we train different CNN models for emotion recognition, combining the information of the bounding box containing the person with the contextual information extracted from the scene. Our results show how scene context provides important information to automatically recognize emotional states and motivate further research in this direction. The dataset and code are open-sourced and available at https://github.com/rkosti/emotic, and the peer-reviewed published article is available at https://ieeexplore.ieee.org/document/8713881. |
Tasks | Emotion Recognition |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13401v1 |
PDF | https://arxiv.org/pdf/2003.13401v1.pdf |
PWC | https://paperswithcode.com/paper/context-based-emotion-recognition-using |
Repo | https://github.com/rkosti/emotic |
Framework | none |
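The baseline models described in the abstract fuse a person-crop branch with a scene-context branch and predict both the discrete categories and the continuous VAD values. A toy PyTorch sketch of that two-stream layout follows; the ResNet-18 backbones and layer sizes are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class ContextEmotionNet(nn.Module):
    """Two-stream model in the spirit of the EMOTIC baselines (a sketch)."""
    def __init__(self):
        super().__init__()
        self.body = models.resnet18(weights=None)
        self.body.fc = nn.Identity()          # 512-d person-crop features
        self.context = models.resnet18(weights=None)
        self.context.fc = nn.Identity()       # 512-d scene features
        self.fuse = nn.Sequential(nn.Linear(1024, 256), nn.ReLU())
        self.categories = nn.Linear(256, 26)  # 26 discrete emotion categories
        self.vad = nn.Linear(256, 3)          # valence, arousal, dominance

    def forward(self, body_crop, scene):
        h = torch.cat([self.body(body_crop), self.context(scene)], dim=1)
        h = self.fuse(h)
        return self.categories(h), self.vad(h)

model = ContextEmotionNet()
cats, vad = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(cats.shape, vad.shape)   # torch.Size([2, 26]) torch.Size([2, 3])
```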
Decentralized Learning for Channel Allocation in IoT Networks over Unlicensed Bandwidth as a Contextual Multi-player Multi-armed Bandit Game
Title | Decentralized Learning for Channel Allocation in IoT Networks over Unlicensed Bandwidth as a Contextual Multi-player Multi-armed Bandit Game |
Authors | Wenbo Wang, Amir Leshem, Dusit Niyato, Zhu Han |
Abstract | We study a decentralized channel allocation problem in an ad-hoc Internet of Things (IoT) network underlaying a spectrum licensed to an existing wireless network. In the considered IoT network, the limited computational capability and antenna count of the IoT devices make it difficult for them to acquire the Channel State Information (CSI) for the multiple channels over the shared spectrum. In addition, in practice, the unknown patterns of the licensed users’ transmission activities and the time-varying CSI due to fast fading or mobility of the IoT devices can also cause stochastic changes in the channel quality. Therefore, decentralized IoT links are expected to learn their channel statistics online based on partial observations, while acquiring no information about the channels that they are not operating on. Meanwhile, they also have to reach an efficient, collision-free solution of channel allocation on the basis of limited coordination or message exchange. Our study maps this problem into a contextual multi-player, multi-armed bandit game, for which we propose a purely decentralized, three-stage policy learning algorithm through trial-and-error. Our theoretical analysis shows that the proposed learning algorithm guarantees that the IoT devices jointly converge to the social-optimal channel allocation with a sub-linear (i.e., polylogarithmic) regret with respect to the operational time. Simulation results demonstrate that the proposed algorithm strikes a good balance between efficient channel allocation and network scalability when compared with other state-of-the-art distributed multi-armed bandit algorithms. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13314v1 |
PDF | https://arxiv.org/pdf/2003.13314v1.pdf |
PWC | https://paperswithcode.com/paper/decentralized-learning-for-channel-allocation |
Repo | https://github.com/wbwang2020/MP-MAB |
Framework | none |
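To make the setting concrete, the toy simulation below runs independent per-device UCB learners in which a collision wipes out the reward, nudging devices toward orthogonal channels. It is a plain baseline for illustration only, not the paper's three-stage trial-and-error policy, and every parameter is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_players, n_channels, horizon = 3, 5, 5000
true_quality = rng.uniform(0.2, 0.9, size=(n_players, n_channels))

counts = np.ones((n_players, n_channels))   # pulls per (device, channel)
means = np.zeros((n_players, n_channels))   # empirical channel quality

for t in range(1, horizon + 1):
    # Each device independently picks a channel by its UCB index.
    ucb = means + np.sqrt(2 * np.log(t) / counts)
    choices = ucb.argmax(axis=1)
    for p, c in enumerate(choices):
        colliding = np.sum(choices == c) > 1          # another device on c
        reward = 0.0 if colliding else rng.binomial(1, true_quality[p, c])
        counts[p, c] += 1
        means[p, c] += (reward - means[p, c]) / counts[p, c]

print("final channel choices per device:", choices)
```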
Multi-Objective Matrix Normalization for Fine-grained Visual Recognition
Title | Multi-Objective Matrix Normalization for Fine-grained Visual Recognition |
Authors | Shaobo Min, Hantao Yao, Hongtao Xie, Zheng-Jun Zha, Yongdong Zhang |
Abstract | Bilinear pooling achieves great success in fine-grained visual recognition (FGVC). Recent methods have shown that matrix power normalization can stabilize the second-order information in bilinear features, but some problems, e.g., redundant information and over-fitting, remain to be resolved. In this paper, we propose an efficient Multi-Objective Matrix Normalization (MOMN) method that can simultaneously normalize a bilinear representation in terms of square-root, low-rank, and sparsity. These three regularizers can not only stabilize the second-order information, but also compact the bilinear features and promote model generalization. In MOMN, a core challenge is how to jointly optimize three non-smooth regularizers with different convexity properties. To this end, MOMN first formulates them into an augmented Lagrange formula with approximated regularizer constraints. Then, auxiliary variables are introduced to relax the different constraints, which allows each regularizer to be solved alternately. Finally, several updating strategies based on gradient descent are designed to obtain consistent convergence and an efficient implementation. Consequently, MOMN is implemented with only matrix multiplications, which are well suited to GPU acceleration, and the normalized bilinear features are stabilized and discriminative. Experiments on five public benchmarks for FGVC demonstrate that the proposed MOMN is superior to existing normalization-based methods in terms of both accuracy and efficiency. The code is available at https://github.com/mboboGO/MOMN. |
Tasks | Fine-Grained Visual Recognition |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13272v1 |
PDF | https://arxiv.org/pdf/2003.13272v1.pdf |
PWC | https://paperswithcode.com/paper/multi-objective-matrix-normalization-for-fine |
Repo | https://github.com/mboboGO/MOMN |
Framework | pytorch |
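One of the three regularizers MOMN enforces is the square-root (matrix power) normalization of the bilinear feature. The sketch below approximates a matrix square root with Newton-Schulz iterations, using only matrix multiplications as the abstract emphasizes; it does not implement the joint low-rank and sparsity constraints or the augmented Lagrangian updates, so treat it as background rather than MOMN itself.

```python
import torch

def matrix_sqrt_newton_schulz(A, n_iter=5, eps=1e-8):
    """Approximate matrix square root of a PSD matrix via Newton-Schulz.

    Illustrates square-root normalization of a bilinear (second-order)
    feature using only matrix multiplications; not the full MOMN method.
    """
    norm = A.norm() + eps
    Y = A / norm                      # scale so the iteration converges
    I = torch.eye(A.shape[0], device=A.device, dtype=A.dtype)
    Z = torch.eye(A.shape[0], device=A.device, dtype=A.dtype)
    for _ in range(n_iter):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return Y * norm.sqrt()

# Bilinear feature of a random CNN map: X has shape (channels, locations).
X = torch.randn(64, 196)
A = X @ X.t() / X.shape[1]            # 64 x 64 second-order representation
print(matrix_sqrt_newton_schulz(A).shape)
```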
Sample Efficient Ensemble Learning with Catalyst.RL
Title | Sample Efficient Ensemble Learning with Catalyst.RL |
Authors | Sergey Kolesnikov, Valentin Khrulkov |
Abstract | We present Catalyst.RL, an open-source PyTorch framework for reproducible and sample-efficient reinforcement learning (RL) research. The main features of Catalyst.RL include large-scale asynchronous distributed training, efficient implementations of various RL algorithms, and auxiliary tricks such as n-step returns, value distributions, and hyperbolic reinforcement learning. To demonstrate the effectiveness of Catalyst.RL, we applied it to the physics-based reinforcement learning challenge “NeurIPS 2019: Learn to Move - Walk Around”, with the objective of building a locomotion controller for a human musculoskeletal model. The environment is computationally expensive, has a high-dimensional continuous action space, and is stochastic. Our team took 2nd place, capitalizing on the ability of Catalyst.RL to train high-quality and sample-efficient RL agents in only a few hours of training time. The implementation, along with the experiments, is open-sourced so that results can be reproduced and novel ideas tried out. |
Tasks | |
Published | 2020-03-29 |
URL | https://arxiv.org/abs/2003.14210v1 |
PDF | https://arxiv.org/pdf/2003.14210v1.pdf |
PWC | https://paperswithcode.com/paper/sample-efficient-ensemble-learning-with |
Repo | https://github.com/Scitator/run-skeleton-run-in-3d |
Framework | none |
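Among the auxiliary tricks listed in the abstract is the use of n-step returns. A generic sketch of how such targets are computed follows; it is not Catalyst.RL's implementation, and the bootstrapping convention at the end of the trajectory is an assumption.

```python
import numpy as np

def n_step_returns(rewards, values, gamma=0.99, n=5):
    """n-step bootstrapped returns:
    G_t = r_t + gamma*r_{t+1} + ... + gamma^{n-1}*r_{t+n-1} + gamma^n*V(s_{t+n}).

    `values` holds critic estimates V(s_t); steps near the end of the
    trajectory sum whatever rewards remain without a bootstrap term.
    """
    T = len(rewards)
    returns = np.zeros(T)
    for t in range(T):
        g, discount = 0.0, 1.0
        for k in range(t, min(t + n, T)):
            g += discount * rewards[k]
            discount *= gamma
        if t + n < T:
            g += discount * values[t + n]   # bootstrap with the critic
        returns[t] = g
    return returns

rewards = [1.0, 0.0, 0.0, 1.0, 0.0, 1.0]
values = [0.5, 0.4, 0.6, 0.5, 0.3, 0.2]
print(n_step_returns(rewards, values, gamma=0.9, n=3))
```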
Global-Local Bidirectional Reasoning for Unsupervised Representation Learning of 3D Point Clouds
Title | Global-Local Bidirectional Reasoning for Unsupervised Representation Learning of 3D Point Clouds |
Authors | Yongming Rao, Jiwen Lu, Jie Zhou |
Abstract | Local and global patterns of an object are closely related. Although each part of an object is incomplete, the underlying attributes about the object are shared among all parts, which makes it possible to reason about the whole object from a single part. We hypothesize that a powerful representation of a 3D object should model the attributes that are shared between parts and the whole object, and be distinguishable from other objects. Based on this hypothesis, we propose to learn point cloud representations by bidirectional reasoning between the local structures at different abstraction hierarchies and the global shape, without human supervision. Experimental results on various benchmark datasets demonstrate that the unsupervisedly learned representation is even better than the supervised representation in discriminative power, generalization ability, and robustness. We show that unsupervisedly trained point cloud models can outperform their supervised counterparts on downstream classification tasks. Most notably, by simply increasing the channel width of an SSG PointNet++, our unsupervised model surpasses the state-of-the-art supervised methods on both synthetic and real-world 3D object classification datasets. We expect our observations to offer a new perspective on learning better representations from data structures instead of human annotations for point cloud understanding. |
Tasks | 3D Object Classification, Object Classification, Representation Learning, Unsupervised Representation Learning |
Published | 2020-03-29 |
URL | https://arxiv.org/abs/2003.12971v1 |
PDF | https://arxiv.org/pdf/2003.12971v1.pdf |
PWC | https://paperswithcode.com/paper/global-local-bidirectional-reasoning-for |
Repo | https://github.com/raoyongming/PointGLR |
Framework | pytorch |
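One way to read the "bidirectional reasoning" idea is that local part features should agree with their own shape's global code and disagree with other shapes'. The hedged sketch below expresses that as a simple InfoNCE-style loss; the actual PointGLR losses and abstraction hierarchy are not reproduced, so every detail here is an assumption.

```python
import torch
import torch.nn.functional as F

def local_global_nce(local_feats, global_feats, temperature=0.07):
    """Contrastive agreement between local part features and global shape codes.

    local_feats:  (B, P, D)  P local features per point cloud
    global_feats: (B, D)     one global feature per point cloud
    Each local feature is pulled toward its own shape's global feature and
    pushed away from the other shapes' in the batch.
    """
    B, P, D = local_feats.shape
    l = F.normalize(local_feats.reshape(B * P, D), dim=1)
    g = F.normalize(global_feats, dim=1)
    logits = l @ g.t() / temperature                 # (B*P, B) similarities
    targets = torch.arange(B).repeat_interleave(P)   # own shape is positive
    return F.cross_entropy(logits, targets)

loss = local_global_nce(torch.randn(4, 16, 128), torch.randn(4, 128))
print(loss.item())
```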
WeatherBench: A benchmark dataset for data-driven weather forecasting
Title | WeatherBench: A benchmark dataset for data-driven weather forecasting |
Authors | Stephan Rasp, Peter D. Dueben, Sebastian Scher, Jonathan A. Weyn, Soukayna Mouatadid, Nils Thuerey |
Abstract | Data-driven approaches, most prominently deep learning, have become powerful tools for prediction in many domains. A natural question to ask is whether data-driven methods could also be used for numerical weather prediction. First studies show promise, but the lack of a common dataset and evaluation metrics makes inter-comparison between studies difficult. Here we present a benchmark dataset for data-driven medium-range weather forecasting, a topic of high scientific interest for atmospheric and computer scientists alike. We provide data derived from the ERA5 archive that has been processed to facilitate its use in machine learning models. We propose a simple and clear evaluation metric which will enable a direct comparison between different methods. Further, we provide baseline scores from simple linear regression techniques, deep learning models, as well as purely physical forecasting models. All data is publicly available at https://mediatum.ub.tum.de/1524895, and the companion code, which includes tutorials for getting started, makes the results reproducible. We hope that this dataset will accelerate research in data-driven weather forecasting. |
Tasks | Weather Forecasting |
Published | 2020-02-02 |
URL | https://arxiv.org/abs/2002.00469v2 |
PDF | https://arxiv.org/pdf/2002.00469v2.pdf |
PWC | https://paperswithcode.com/paper/weatherbench-a-benchmark-dataset-for-data |
Repo | https://github.com/pangeo-data/WeatherBench |
Framework | none |
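WeatherBench's scores are computed on a latitude-longitude grid, where cells shrink toward the poles, so errors are typically weighted by the cosine of latitude before averaging. A small sketch in that spirit follows; the exact normalization and the (time, lat, lon) array layout are assumptions rather than the benchmark's reference code.

```python
import numpy as np

def weighted_rmse(forecast, truth, lats_deg):
    """Latitude-weighted RMSE of a gridded forecast.

    Errors are weighted by cos(latitude), normalised to a mean weight of 1,
    so that the shrinking grid cells near the poles do not dominate.
    """
    w = np.cos(np.deg2rad(lats_deg))
    w = w / w.mean()                                  # mean weight of 1
    sq_err = (forecast - truth) ** 2                  # (time, lat, lon)
    return np.sqrt((w[None, :, None] * sq_err).mean())

lats = np.linspace(-87.2, 87.2, 32)                   # ~5.625 degree grid
truth = np.random.rand(10, 32, 64)
forecast = truth + 0.1 * np.random.randn(10, 32, 64)
print(weighted_rmse(forecast, truth, lats))
```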
Unsupervised pretraining transfers well across languages
Title | Unsupervised pretraining transfers well across languages |
Authors | Morgane Rivière, Armand Joulin, Pierre-Emmanuel Mazaré, Emmanuel Dupoux |
Abstract | Cross-lingual and multi-lingual training of Automatic Speech Recognition (ASR) has been extensively investigated in the supervised setting. This assumes the existence of a parallel corpus of speech and orthographic transcriptions. Recently, contrastive predictive coding (CPC) algorithms have been proposed to pretrain ASR systems with unlabelled data. In this work, we investigate whether unsupervised pretraining transfers well across languages. We show that a slight modification of the CPC pretraining extracts features that transfer well to other languages, being on par with or even outperforming supervised pretraining. This shows the potential of unsupervised methods for languages with few linguistic resources. |
Tasks | Speech Recognition |
Published | 2020-02-07 |
URL | https://arxiv.org/abs/2002.02848v1 |
PDF | https://arxiv.org/pdf/2002.02848v1.pdf |
PWC | https://paperswithcode.com/paper/unsupervised-pretraining-transfers-well |
Repo | https://github.com/facebookresearch/CPC_audio |
Framework | pytorch |
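CPC pretraining scores a predicted future latent against the true one and against in-batch negatives with an InfoNCE loss. The sketch below shows that loss in its generic form; it is not the CPC_audio implementation, and the single linear predictor is a stand-in for the model's prediction head.

```python
import torch
import torch.nn.functional as F

def cpc_infonce(context, future, predictor):
    """Generic InfoNCE loss used in contrastive predictive coding (CPC).

    context: (B, D) summary of past frames; future: (B, D) encoding of a
    future frame. The predictor maps context to a guess of the future
    latent; the true future must score higher than the other items in the
    batch, which act as negatives.
    """
    pred = predictor(context)                       # (B, D)
    logits = pred @ future.t()                      # (B, B) similarity matrix
    targets = torch.arange(context.shape[0])        # positives on the diagonal
    return F.cross_entropy(logits, targets)

B, D = 8, 256
predictor = torch.nn.Linear(D, D)
loss = cpc_infonce(torch.randn(B, D), torch.randn(B, D), predictor)
print(loss.item())
```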
Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution
Title | Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution |
Authors | Yong Guo, Jian Chen, Jingdong Wang, Qi Chen, Jiezhang Cao, Zeshuai Deng, Yanwu Xu, Mingkui Tan |
Abstract | Deep neural networks have exhibited promising performance in image super-resolution (SR) by learning a nonlinear mapping function from low-resolution (LR) images to high-resolution (HR) images. However, there are two underlying limitations to existing SR methods. First, learning the mapping function from LR to HR images is typically an ill-posed problem, because there exist infinitely many HR images that can be downsampled to the same LR image. As a result, the space of the possible functions can be extremely large, which makes it hard to find a good solution. Second, paired LR-HR data may be unavailable in real-world applications, and the underlying degradation method is often unknown. In such a more general case, existing SR models often suffer from the adaptation problem and yield poor performance. To address the above issues, we propose a dual regression scheme by introducing an additional constraint on LR data to reduce the space of the possible functions. Specifically, besides the mapping from LR to HR images, we learn an additional dual regression mapping that estimates the down-sampling kernel and reconstructs LR images, which forms a closed loop to provide additional supervision. More critically, since the dual regression process does not depend on HR images, we can directly learn from LR images. In this sense, we can easily adapt SR models to real-world data, e.g., raw video frames from YouTube. Extensive experiments with paired training data and unpaired real-world data demonstrate our superiority over existing methods. |
Tasks | Image Super-Resolution, Super-Resolution |
Published | 2020-03-16 |
URL | https://arxiv.org/abs/2003.07018v1 |
PDF | https://arxiv.org/pdf/2003.07018v1.pdf |
PWC | https://paperswithcode.com/paper/closed-loop-matters-dual-regression-networks |
Repo | https://github.com/guoyongcs/DRN |
Framework | pytorch |
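The closed loop in the abstract adds a dual mapping from the super-resolved output back to the LR input, giving a supervision signal that does not require HR images. A toy sketch of the combined loss follows; the tiny networks and the 0.1 weight are placeholders, not the DRN architecture or its hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPrimal(nn.Module):
    """LR -> HR (x2) mapping; a toy stand-in for the SR network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Conv2d(3, 12, 3, padding=1)   # 12 = 3 * 2 * 2 channels
        self.shuffle = nn.PixelShuffle(2)            # rearrange into an x2 image

    def forward(self, lr):
        return self.shuffle(self.body(lr))

class TinyDual(nn.Module):
    """HR -> LR mapping that closes the loop (learned down-sampling)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Conv2d(3, 3, 3, stride=2, padding=1)

    def forward(self, hr):
        return self.body(hr)

primal, dual = TinyPrimal(), TinyDual()
lr, hr = torch.randn(1, 3, 32, 32), torch.randn(1, 3, 64, 64)

sr = primal(lr)
loss = F.l1_loss(sr, hr) + 0.1 * F.l1_loss(dual(sr), lr)   # primal + dual terms
print(loss.item())
```

On unpaired real-world data only the dual term on LR inputs remains available, which is what lets the scheme adapt without HR ground truth.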
Deep Image Spatial Transformation for Person Image Generation
Title | Deep Image Spatial Transformation for Person Image Generation |
Authors | Yurui Ren, Xiaoming Yu, Junming Chen, Thomas H. Li, Ge Li |
Abstract | Pose-guided person image generation transforms a source person image to a target pose. This task requires spatial manipulation of the source data. However, Convolutional Neural Networks are limited by their lack of ability to spatially transform the inputs. In this paper, we propose a differentiable global-flow local-attention framework to reassemble the inputs at the feature level. Specifically, our model first calculates the global correlations between sources and targets to predict flow fields. Then, the flowed local patch pairs are extracted from the feature maps to calculate the local attention coefficients. Finally, we warp the source features using a content-aware sampling method with the obtained local attention coefficients. The results of both subjective and objective experiments demonstrate the superiority of our model. Besides, additional results in video animation and view synthesis show that our model is applicable to other tasks requiring spatial transformation. Our source code is available at https://github.com/RenYurui/Global-Flow-Local-Attention. |
Tasks | Image Generation |
Published | 2020-03-02 |
URL | https://arxiv.org/abs/2003.00696v2 |
PDF | https://arxiv.org/pdf/2003.00696v2.pdf |
PWC | https://paperswithcode.com/paper/deep-image-spatial-transformation-for-person |
Repo | https://github.com/RenYurui/Global-Flow-Local-Attention |
Framework | pytorch |
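The first stage of the framework predicts a flow field and warps source features toward the target pose. The sketch below shows only that warping step with grid_sample; the local-attention, content-aware sampling described in the abstract is not reproduced, and the pixel-offset flow convention is an assumption.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(source_feat, flow):
    """Warp source feature maps with a dense flow field.

    source_feat: (B, C, H, W); flow: (B, 2, H, W) offsets in pixels.
    Builds a sampling grid in the normalised [-1, 1] coordinates that
    grid_sample expects.
    """
    B, _, H, W = source_feat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float()            # (2, H, W)
    coords = base.unsqueeze(0) + flow                      # target -> source
    coords_x = 2.0 * coords[:, 0] / (W - 1) - 1.0          # to [-1, 1]
    coords_y = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack([coords_x, coords_y], dim=-1)       # (B, H, W, 2)
    return F.grid_sample(source_feat, grid, align_corners=True)

feat = torch.randn(1, 64, 32, 32)
flow = torch.zeros(1, 2, 32, 32)       # zero flow reproduces the input
print(torch.allclose(warp_with_flow(feat, flow), feat, atol=1e-5))
```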
IntrA: 3D Intracranial Aneurysm Dataset for Deep Learning
Title | IntrA: 3D Intracranial Aneurysm Dataset for Deep Learning |
Authors | Xi Yang, Ding Xia, Taichi Kin, Takeo Igarashi |
Abstract | Medicine is an important application area for deep learning models. Research in this field is a combination of medical expertise and data science knowledge. In this paper, instead of 2D medical images, we introduce an open-access 3D intracranial aneurysm dataset, IntrA, that makes the application of point-based and mesh-based classification and segmentation models possible. Our dataset can be used to diagnose intracranial aneurysms and to extract the aneurysm neck for a clipping operation in medicine, as well as for other deep learning tasks such as normal estimation and surface reconstruction. We provide a large-scale benchmark of classification and part segmentation by testing state-of-the-art networks. We also discuss the performance of each method and demonstrate the challenges of our dataset. The published dataset can be accessed here: https://github.com/intra3d2019/IntrA. |
Tasks | |
Published | 2020-03-02 |
URL | https://arxiv.org/abs/2003.02920v1 |
PDF | https://arxiv.org/pdf/2003.02920v1.pdf |
PWC | https://paperswithcode.com/paper/intra-3d-intracranial-aneurysm-dataset-for |
Repo | https://github.com/intra3d2019/IntrA |
Framework | none |
Permutation Invariant Graph Generation via Score-Based Generative Modeling
Title | Permutation Invariant Graph Generation via Score-Based Generative Modeling |
Authors | Chenhao Niu, Yang Song, Jiaming Song, Shengjia Zhao, Aditya Grover, Stefano Ermon |
Abstract | Learning generative models for graph-structured data is challenging because graphs are discrete, combinatorial, and the underlying data distribution is invariant to the ordering of nodes. However, most of the existing generative models for graphs are not invariant to the chosen ordering, which might lead to an undesirable bias in the learned distribution. To address this difficulty, we propose a permutation invariant approach to modeling graphs, using the recent framework of score-based generative modeling. In particular, we design a permutation equivariant, multi-channel graph neural network to model the gradient of the data distribution at the input graph (a.k.a. the score function). This permutation equivariant model of gradients implicitly defines a permutation invariant distribution for graphs. We train this graph neural network with score matching and sample from it with annealed Langevin dynamics. In our experiments, we first demonstrate the capacity of this new architecture in learning discrete graph algorithms. For graph generation, we find that our learning approach achieves results better than or comparable to those of existing models on benchmark datasets. |
Tasks | Graph Generation |
Published | 2020-03-02 |
URL | https://arxiv.org/abs/2003.00638v1 |
PDF | https://arxiv.org/pdf/2003.00638v1.pdf |
PWC | https://paperswithcode.com/paper/permutation-invariant-graph-generation-via |
Repo | https://github.com/ermongroup/GraphScoreMatching |
Framework | pytorch |
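Sampling in score-based generative modeling runs annealed Langevin dynamics guided by the learned score network. The heavily hedged sketch below shows the mechanics of that loop on an adjacency matrix, with a dummy score function standing in for the permutation equivariant GNN; the noise schedule, step sizes, and final thresholding are all assumptions, not the paper's procedure.

```python
import torch

def annealed_langevin_sample(score_fn, n_nodes, sigmas, steps=50, eps=2e-3):
    """Annealed Langevin dynamics over a (symmetrised) adjacency matrix.

    score_fn(A, sigma) should estimate the score (gradient of the log
    density of the noise-perturbed data) at noise level sigma; in the
    paper's setting a permutation equivariant GNN plays this role.
    """
    A = torch.rand(n_nodes, n_nodes)
    for sigma in sigmas:                          # anneal from large to small noise
        step = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(steps):
            noise = torch.randn_like(A)
            A = A + 0.5 * step * score_fn(A, sigma) + step ** 0.5 * noise
        A = 0.5 * (A + A.t())                     # keep the matrix symmetric
    return (A > 0.5).float()                      # threshold to a discrete graph

# Dummy score that pulls entries toward 0 or 1 (stands in for the GNN).
dummy_score = lambda A, sigma: (torch.round(A.clamp(0, 1)) - A) / sigma ** 2
print(annealed_langevin_sample(dummy_score, n_nodes=6, sigmas=[1.0, 0.5, 0.1]))
```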