Paper Group AWR 47
Margin Maximization as Lossless Maximal Compression
Title | Margin Maximization as Lossless Maximal Compression |
Authors | Nikolaos Nikolaou, Henry Reeve, Gavin Brown |
Abstract | The ultimate goal of a supervised learning algorithm is to produce models constructed on the training data that can generalize well to new examples. In classification, functional margin maximization – correctly classifying as many training examples as possible with maximal confidence – has been known to construct models with good generalization guarantees. This work gives an information-theoretic interpretation of a margin-maximizing model on a noiseless training dataset as one that achieves lossless maximal compression of said dataset – i.e. extracts from the features all the useful information for predicting the label and no more. The connection offers new insights into generalization in supervised machine learning, showing margin maximization as a special case (that of classification) of a more general principle and explaining the success and potential limitations of popular learning algorithms like gradient boosting. We support our observations with theoretical arguments and empirical evidence and identify interesting directions for future work. |
Tasks | |
Published | 2020-01-28 |
URL | https://arxiv.org/abs/2001.10318v1 |
PDF | https://arxiv.org/pdf/2001.10318v1.pdf |
PWC | https://paperswithcode.com/paper/margin-maximization-as-lossless-maximal |
Repo | https://github.com/nnikolaou/margin_maximization_LMC |
Framework | none |
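As a rough illustration of the functional-margin quantity the paper analyzes, the sketch below fits a gradient boosting classifier (one of the algorithms the paper discusses) and computes normalized functional margins y·F(x) on the training set. The dataset, model settings, and normalization are illustrative assumptions; the compression-theoretic analysis itself is not reproduced here.

```python
# Sketch: functional margins of a boosted ensemble on training data.
# Assumes scikit-learn; only the margin quantity the paper builds on is shown.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
y_pm = 2 * y - 1  # labels in {-1, +1}

clf = GradientBoostingClassifier(n_estimators=200, random_state=0).fit(X, y)
scores = clf.decision_function(X)                 # ensemble score F(x)
margins = y_pm * scores / np.abs(scores).max()    # normalized functional margins

print("min margin:", margins.min(), "fraction with margin >= 0:", (margins >= 0).mean())
```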
Plug-and-Play Algorithms for Large-scale Snapshot Compressive Imaging
Title | Plug-and-Play Algorithms for Large-scale Snapshot Compressive Imaging |
Authors | Xin Yuan, Yang Liu, Jinli Suo, Qionghai Dai |
Abstract | Snapshot compressive imaging (SCI) aims to capture high-dimensional (usually 3D) images using a 2D sensor (detector) in a single snapshot. Though enjoying the advantages of low bandwidth, low power and low cost, applying SCI to large-scale problems (HD or UHD videos) in our daily life is still challenging. The bottleneck lies in the reconstruction algorithms; they are either too slow (iterative optimization algorithms) or not flexible with respect to the encoding process (deep learning based end-to-end networks). In this paper, we develop fast and flexible algorithms for SCI based on the plug-and-play (PnP) framework. In addition to the widely used PnP-ADMM method, we further propose the PnP-GAP (generalized alternating projection) algorithm with a lower computational workload and prove the global convergence of PnP-GAP under the SCI hardware constraints. By employing deep denoising priors, we show for the first time that PnP can recover a UHD color video ($3840\times 1644\times 48$ with PSNR above 30dB) from a snapshot 2D measurement. Extensive results on both simulation and real datasets verify the superiority of our proposed algorithm. The code is available at https://github.com/liuyang12/PnP-SCI. |
Tasks | Denoising |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13654v1 |
PDF | https://arxiv.org/pdf/2003.13654v1.pdf |
PWC | https://paperswithcode.com/paper/plug-and-play-algorithms-for-large-scale |
Repo | https://github.com/liuyang12/PnP-SCI |
Framework | none |
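A minimal numpy sketch of a PnP-GAP-style iteration for the SCI forward model y = Σ_t Φ_t ⊙ x_t, alternating a Euclidean projection toward the measurement with a plug-in denoising step. The Gaussian smoother stands in for the deep denoising priors used in the paper, and all sizes and parameters are toy-scale assumptions.

```python
# PnP-GAP-style sketch for snapshot compressive imaging (toy scale).
# A Gaussian filter stands in for the deep denoiser used in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

H, W, T = 64, 64, 8                                   # toy video cube (paper targets UHD)
rng = np.random.default_rng(0)
x_true = rng.random((H, W, T))                        # unknown video
phi = (rng.random((H, W, T)) > 0.5).astype(float)     # binary modulation masks
y = (phi * x_true).sum(axis=2)                        # single 2D snapshot measurement

phi_sum = (phi ** 2).sum(axis=2) + 1e-6               # for the projection step

x = np.zeros_like(x_true)
for it in range(30):
    # projection step: push the current estimate toward the measurement
    residual = y - (phi * x).sum(axis=2)
    x = x + phi * (residual / phi_sum)[..., None]
    # plug-and-play denoising step (frame-wise placeholder denoiser)
    x = np.stack([gaussian_filter(x[..., t], sigma=1.0) for t in range(T)], axis=2)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```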
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural Networks Against Adversarial Attacks
Title | Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural Networks Against Adversarial Attacks |
Authors | Ying Meng, Jianhai Su, Jason O’Kane, Pooyan Jamshidi |
Abstract | Despite achieving state-of-the-art performance across many domains, machine learning systems are highly vulnerable to subtle adversarial perturbations. Although defense approaches have been proposed in recent years, many have been bypassed by even weak adversarial attacks. An early study (He et al., 2017) shows that ensembles created by combining multiple weak defenses (i.e., input data transformations) are still weak. We show that it is indeed possible to construct effective ensembles using weak defenses to block adversarial attacks; however, doing so requires a diverse set of such weak defenses. In this work, we propose Athena, an extensible framework for building effective defenses to adversarial attacks against machine learning systems. We conducted a comprehensive empirical study to evaluate several realizations of Athena. More specifically, we evaluated the effectiveness of 5 ensemble strategies with a diverse set of many weak defenses that transform the inputs (e.g., rotation, shifting, noising, denoising, and many more) before feeding them to target deep neural network (DNN) classifiers. We evaluate the effectiveness of the ensembles with adversarial examples generated by 9 different adversaries (i.e., FGSM, CW, etc.) in 4 threat models (i.e., zero-knowledge, black-box, gray-box, white-box) on MNIST. We also explain, via a comprehensive empirical study, why building defenses based on the idea of many diverse weak defenses works, when it is most effective, and what its inherent limitations and overhead are. |
Tasks | Denoising |
Published | 2020-01-02 |
URL | https://arxiv.org/abs/2001.00308v1 |
PDF | https://arxiv.org/pdf/2001.00308v1.pdf |
PWC | https://paperswithcode.com/paper/ensembles-of-many-diverse-weak-defenses-can |
Repo | https://github.com/softsys4ai/athena |
Framework | tf |
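A toy sketch of the "many diverse weak defenses" idea: run each input through several simple transformations and aggregate the per-transformation predictions by majority vote. The transformation set, the placeholder classifier, and the voting strategy are illustrative assumptions; Athena's actual realizations and ensemble strategies are in the linked repository.

```python
# Sketch: ensemble of input-transformation "weak defenses" with majority voting.
# The classifier here is a trivial stand-in; in Athena each transformation
# feeds a trained DNN, and several ensemble strategies are compared.
import numpy as np
from scipy import ndimage

def classify(batch):
    # placeholder classifier: returns one label per image (hypothetical)
    return (batch.mean(axis=(1, 2)) > 0.5).astype(int)

transforms = [
    lambda x: x,                                                      # identity
    lambda x: ndimage.rotate(x, 15, axes=(1, 2), reshape=False),      # rotation
    lambda x: ndimage.shift(x, (0, 2, 2)),                            # shifting
    lambda x: np.clip(x + np.random.normal(0, 0.1, x.shape), 0, 1),   # noising
    lambda x: ndimage.gaussian_filter(x, sigma=(0, 1, 1)),            # denoising
]

def ensemble_predict(batch):
    votes = np.stack([classify(t(batch)) for t in transforms])  # (n_defenses, n)
    return (votes.mean(axis=0) > 0.5).astype(int)                # majority vote

images = np.random.rand(4, 28, 28)
print(ensemble_predict(images))
```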
Learning from Positive and Unlabeled Data with Arbitrary Positive Shift
Title | Learning from Positive and Unlabeled Data with Arbitrary Positive Shift |
Authors | Zayd Hammoudeh, Daniel Lowd |
Abstract | Positive-unlabeled (PU) learning trains a binary classifier using only positive and unlabeled data. A common simplifying assumption is that the positive data is representative of the target positive class. This assumption is often violated in practice due to time variation, domain shift, or adversarial concept drift. This paper shows that PU learning is possible even with arbitrarily non-representative positive data when provided unlabeled datasets from the source and target distributions. Our key insight is that only the negative class’s distribution need be fixed. We propose two methods to learn under such arbitrary positive bias. The first couples negative-unlabeled (NU) learning with unlabeled-unlabeled (UU) learning while the other uses a novel recursive risk estimator robust to positive shift. Experimental results demonstrate our methods’ effectiveness across numerous real-world datasets and forms of positive data bias, including disjoint positive class-conditional supports. |
Tasks | |
Published | 2020-02-24 |
URL | https://arxiv.org/abs/2002.10261v1 |
PDF | https://arxiv.org/pdf/2002.10261v1.pdf |
PWC | https://paperswithcode.com/paper/learning-from-positive-and-unlabeled-data-3 |
Repo | https://github.com/ZaydH/arbitrary_positive_unlabeled |
Framework | pytorch |
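For background, the sketch below shows the standard non-negative PU risk estimator (Kiryo et al., 2017) that estimators in this line of work build on. It is not the paper's recursive, positive-shift-robust estimator; the class prior `pi_p` and the surrogate loss are assumptions for illustration.

```python
# Background sketch: the non-negative PU risk (Kiryo et al., 2017), a building
# block that the paper's positive-shift-robust estimators generalize.
# NOT the paper's recursive estimator; pi_p and the loss choice are assumptions.
import torch

def nn_pu_risk(scores_p, scores_u, pi_p=0.5):
    """scores_*: raw classifier outputs on positive / unlabeled batches."""
    loss = lambda z, t: torch.nn.functional.softplus(-t * z)   # logistic surrogate
    r_p_plus = loss(scores_p, +1).mean()
    r_p_minus = loss(scores_p, -1).mean()
    r_u_minus = loss(scores_u, -1).mean()
    r_n = r_u_minus - pi_p * r_p_minus                 # implied negative-class risk
    return pi_p * r_p_plus + torch.clamp(r_n, min=0.0) # non-negativity correction

scores_p = torch.randn(32, requires_grad=True)
scores_u = torch.randn(128, requires_grad=True)
nn_pu_risk(scores_p, scores_u).backward()
```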
Global Context-Aware Progressive Aggregation Network for Salient Object Detection
Title | Global Context-Aware Progressive Aggregation Network for Salient Object Detection |
Authors | Zuyao Chen, Qianqian Xu, Runmin Cong, Qingming Huang |
Abstract | Deep convolutional neural networks have achieved competitive performance in salient object detection, in which how to learn effective and comprehensive features plays a critical role. Most of the previous works mainly adopted multi-level feature integration yet ignored the gap between different features. Besides, there also exists a dilution process of high-level features as they are passed along the top-down pathway. To remedy these issues, we propose a novel network named GCPANet to effectively integrate low-level appearance features, high-level semantic features, and global context features through progressive context-aware Feature Interweaved Aggregation (FIA) modules and generate the saliency map in a supervised way. Moreover, a Head Attention (HA) module is used to reduce information redundancy and enhance the top-layer features by leveraging spatial and channel-wise attention, and a Self Refinement (SR) module is utilized to further refine and heighten the input features. Furthermore, we design a Global Context Flow (GCF) module to generate the global context information at different stages, which aims to learn the relationship among different salient regions and alleviate the dilution effect of high-level features. Experimental results on six benchmark datasets demonstrate that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively. |
Tasks | Object Detection, Salient Object Detection |
Published | 2020-03-02 |
URL | https://arxiv.org/abs/2003.00651v1 |
PDF | https://arxiv.org/pdf/2003.00651v1.pdf |
PWC | https://paperswithcode.com/paper/global-context-aware-progressive-aggregation |
Repo | https://github.com/anish9/GCPANet-tensorflow |
Framework | tf |
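A small PyTorch sketch of a channel-plus-spatial attention block in the spirit of the Head Attention (HA) module described above. The layer sizes and exact structure are assumptions; the reference implementation is in the linked repository.

```python
# Sketch: channel-wise then spatial re-weighting of a feature map, in the
# spirit of the Head Attention module (structure and sizes are assumptions).
import torch
import torch.nn as nn

class HeadAttentionSketch(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_fc(x)        # channel-wise re-weighting
        return x * self.spatial_conv(x)   # spatial re-weighting

feat = torch.randn(1, 64, 32, 32)
print(HeadAttentionSketch(64)(feat).shape)
```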
Deep Graph Mapper: Seeing Graphs through the Neural Lens
Title | Deep Graph Mapper: Seeing Graphs through the Neural Lens |
Authors | Cristian Bodnar, Cătălina Cangea, Pietro Liò |
Abstract | Recent advancements in graph representation learning have led to the emergence of condensed encodings that capture the main properties of a graph. However, even though these abstract representations are powerful for downstream tasks, they are not equally suitable for visualisation purposes. In this work, we merge Mapper, an algorithm from the field of Topological Data Analysis (TDA), with the expressive power of Graph Neural Networks (GNNs) to produce hierarchical, topologically grounded visualisations of graphs. These visualisations not only help discern the structure of complex graphs but also provide a means of understanding the models applied to them for solving various tasks. We further demonstrate the suitability of Mapper as a topological framework for graph pooling by mathematically proving an equivalence with minCUT and DiffPool. Building upon this framework, we introduce a novel pooling algorithm based on PageRank, which obtains competitive results with state-of-the-art methods on graph classification benchmarks. |
Tasks | Graph Classification, Graph Representation Learning, Representation Learning, Topological Data Analysis |
Published | 2020-02-10 |
URL | https://arxiv.org/abs/2002.03864v2 |
PDF | https://arxiv.org/pdf/2002.03864v2.pdf |
PWC | https://paperswithcode.com/paper/deep-graph-mapper-seeing-graphs-through-the |
Repo | https://github.com/crisbodnar/dgm |
Framework | pytorch |
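A compact sketch of the Mapper construction on a graph, using PageRank as the lens function in the spirit of the paper's PageRank-based pooling. The interval count and overlap are illustrative choices, and the paper's lens is typically a trained GNN rather than plain PageRank.

```python
# Sketch: graph Mapper with PageRank as the lens (cover by overlapping
# intervals, cluster each pull-back by connected components, link clusters
# that share nodes). Parameters are illustrative assumptions.
import networkx as nx
import numpy as np

def graph_mapper(G, n_intervals=5, overlap=0.2):
    lens = nx.pagerank(G)                         # lens value per node
    values = np.array(list(lens.values()))
    lo, hi = values.min(), values.max()
    length = (hi - lo) / n_intervals
    mapper, clusters = nx.Graph(), []
    for i in range(n_intervals):
        a = lo + i * length - overlap * length
        b = lo + (i + 1) * length + overlap * length
        nodes = [n for n, v in lens.items() if a <= v <= b]
        for comp in nx.connected_components(G.subgraph(nodes)):
            clusters.append(set(comp))
            mapper.add_node(len(clusters) - 1, size=len(comp))
    # connect Mapper nodes whose pull-back clusters share original nodes
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            if clusters[i] & clusters[j]:
                mapper.add_edge(i, j)
    return mapper

M = graph_mapper(nx.karate_club_graph())
print(M.number_of_nodes(), "Mapper nodes,", M.number_of_edges(), "edges")
```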
Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis
Title | Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis |
Authors | Tao Zhou, Huazhu Fu, Geng Chen, Jianbing Shen, Ling Shao |
Abstract | Magnetic resonance imaging (MRI) is a widely used neuroimaging technique that can provide images of different contrasts (i.e., modalities). Fusing this multi-modal data has proven particularly effective for boosting model performance in many tasks. However, due to poor data quality and frequent patient dropout, collecting all modalities for every patient remains a challenge. Medical image synthesis has been proposed as an effective solution to this, where any missing modalities are synthesized from the existing ones. In this paper, we propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis, which learns a mapping from multi-modal source images (i.e., existing modalities) to target images (i.e., missing modalities). In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality, and a fusion network is employed to learn the common latent representation of multi-modal data. Then, a multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality, acting as a generator to synthesize the target images. Moreover, a layer-wise multi-modal fusion strategy is presented to effectively exploit the correlations among multiple modalities, in which a Mixed Fusion Block (MFB) is proposed to adaptively weight different fusion strategies (i.e., element-wise summation, product, and maximization). Extensive experiments demonstrate that the proposed model outperforms other state-of-the-art medical image synthesis methods. |
Tasks | Image Generation |
Published | 2020-02-11 |
URL | https://arxiv.org/abs/2002.05000v1 |
PDF | https://arxiv.org/pdf/2002.05000v1.pdf |
PWC | https://paperswithcode.com/paper/hi-net-hybrid-fusion-network-for-multi-modal |
Repo | https://github.com/taozh2017/Multi-modal-Medical-Imaging |
Framework | none |
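A minimal PyTorch sketch of a Mixed Fusion Block-style combiner: a softmax-weighted blend of element-wise sum, product, and maximum of two modality features. The shapes and the weighting mechanism are illustrative assumptions rather than the paper's exact MFB definition.

```python
# Sketch: adaptively weighting three fusion strategies (sum, product, max)
# of two modality-specific feature maps, in the spirit of the MFB.
import torch
import torch.nn as nn

class MixedFusionSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(3))   # one logit per fusion op

    def forward(self, a, b):
        fused = torch.stack([a + b, a * b, torch.maximum(a, b)], dim=0)
        w = torch.softmax(self.weights, dim=0).view(3, 1, 1, 1, 1)
        return (w * fused).sum(dim=0)

f1 = torch.randn(2, 32, 16, 16)   # feature map from modality 1 (e.g. T1)
f2 = torch.randn(2, 32, 16, 16)   # feature map from modality 2 (e.g. T2)
print(MixedFusionSketch()(f1, f2).shape)
```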
Topological Machine Learning for Mixed Numeric and Categorical Data
Title | Topological Machine Learning for Mixed Numeric and Categorical Data |
Authors | Chengyuan Wu, Carol Anne Hargreaves |
Abstract | Topological data analysis is a relatively new branch of machine learning that excels in studying high dimensional data, and is theoretically known to be robust against noise. Meanwhile, data objects with mixed numeric and categorical attributes are ubiquitous in real-world applications. However, topological methods are usually applied to point cloud data, and to the best of our knowledge there is no available framework for the classification of mixed data using topological methods. In this paper, we propose a novel topological machine learning method for mixed data classification. In the proposed method, we use theory from topological data analysis such as persistent homology, persistence diagrams and Wasserstein distance to study mixed data. The performance of the proposed method is demonstrated by experiments on a real-world heart disease dataset. Experimental results show that our topological method outperforms several state-of-the-art algorithms in the prediction of heart disease. |
Tasks | Topological Data Analysis |
Published | 2020-03-10 |
URL | https://arxiv.org/abs/2003.04584v1 |
PDF | https://arxiv.org/pdf/2003.04584v1.pdf |
PWC | https://paperswithcode.com/paper/topological-machine-learning-for-mixed |
Repo | https://github.com/wuchengyuan88/topology-mixed-data |
Framework | none |
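One ingredient such a pipeline needs is a distance on mixed records before persistent homology can be computed. The sketch below builds a Gower-style distance matrix for mixed numeric/categorical data, which could then be fed to a Vietoris-Rips persistence computation (e.g., ripser with a precomputed distance matrix). The column split and equal weighting are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: Gower-style distance matrix for mixed numeric/categorical records,
# as a possible input to a persistent-homology computation.
import numpy as np

def mixed_distance_matrix(numeric, categorical):
    num = np.asarray(numeric, dtype=float)
    rng = num.max(axis=0) - num.min(axis=0) + 1e-12
    num = num / rng                                  # range-normalize numeric columns
    cat = np.asarray(categorical)
    n = len(num)
    D = np.zeros((n, n))
    for i in range(n):
        d_num = np.abs(num - num[i]).mean(axis=1)    # normalized L1 on numeric part
        d_cat = (cat != cat[i]).mean(axis=1)         # mismatch rate on categorical part
        D[i] = 0.5 * d_num + 0.5 * d_cat
    return D

numeric = np.random.rand(20, 3)                      # e.g. age, blood pressure, cholesterol
categorical = np.random.randint(0, 2, size=(20, 2))  # e.g. sex, chest-pain type
print(mixed_distance_matrix(numeric, categorical).shape)
```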
Keyfilter-Aware Real-Time UAV Object Tracking
Title | Keyfilter-Aware Real-Time UAV Object Tracking |
Authors | Yiming Li, Changhong Fu, Ziyuan Huang, Yinqiang Zhang, Jia Pan |
Abstract | Correlation filter-based tracking has been widely applied to unmanned aerial vehicles (UAVs) with high efficiency. However, it has two imperfections, i.e., the boundary effect and filter corruption. Several methods that enlarge the search area can mitigate the boundary effect, yet they introduce undesired background distraction. Existing frame-by-frame context learning strategies for repressing background distraction nevertheless lower the tracking speed. Inspired by keyframe-based simultaneous localization and mapping, the keyfilter is proposed in visual tracking for the first time, in order to handle the above issues efficiently and effectively. Keyfilters generated from periodically selected keyframes learn the context intermittently and are used to restrain the learning of the per-frame filters, so that 1) context awareness can be transmitted to all the filters via the keyfilter restriction, and 2) filter corruption can be repressed. Compared to state-of-the-art trackers, our tracker performs better on two challenging benchmarks, with enough speed for real-time UAV applications. |
Tasks | Object Tracking, Simultaneous Localization and Mapping, Visual Tracking |
Published | 2020-03-11 |
URL | https://arxiv.org/abs/2003.05218v1 |
PDF | https://arxiv.org/pdf/2003.05218v1.pdf |
PWC | https://paperswithcode.com/paper/keyfilter-aware-real-time-uav-object-tracking |
Repo | https://github.com/vision4robotics/KAOT-tracker |
Framework | none |
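For context, the sketch below shows the correlation-filter core (a MOSSE-style closed-form ridge solution in the Fourier domain) that keyfilter-aware tracking builds on. The keyframe selection and the keyfilter restriction terms of the paper are omitted; the patch and label parameters are toy assumptions.

```python
# Sketch: MOSSE-style correlation filter training and response computation.
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def train_filter(patch, label, lam=1e-2):
    F = np.fft.fft2(patch)
    G = np.fft.fft2(label)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)   # closed-form ridge solution

def respond(filt, patch):
    return np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))

patch = np.random.rand(64, 64)                  # appearance of the tracked object
H = train_filter(patch, gaussian_label(64, 64))
response = respond(H, patch)                    # peak should sit near the centre
print(np.unravel_index(response.argmax(), response.shape))
```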
Extended Markov Games to Learn Multiple Tasks in Multi-Agent Reinforcement Learning
Title | Extended Markov Games to Learn Multiple Tasks in Multi-Agent Reinforcement Learning |
Authors | Borja G. León, Francesco Belardinelli |
Abstract | The combination of Formal Methods with Reinforcement Learning (RL) has recently attracted interest as a way for single-agent RL to learn multiple task specifications. In this paper, we extend this convergence to multi-agent settings and formally define Extended Markov Games as a general mathematical model that allows multiple RL agents to concurrently learn various non-Markovian specifications. To introduce this new model we provide formal definitions and proofs as well as empirical tests of RL algorithms running on this framework. Specifically, we use our model to train two different logic-based multi-agent RL algorithms to solve diverse settings of non-Markovian co-safe LTL specifications. |
Tasks | Multi-agent Reinforcement Learning |
Published | 2020-02-14 |
URL | https://arxiv.org/abs/2002.06000v1 |
PDF | https://arxiv.org/pdf/2002.06000v1.pdf |
PWC | https://paperswithcode.com/paper/extended-markov-games-to-learn-multiple-tasks |
Repo | https://github.com/bgLeon/EMG |
Framework | tf |
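A tiny sketch of the general idea of tracking a non-Markovian specification with a finite task automaton run in product with the environment, in the spirit of the logic-based RL methods discussed above. The specification ("eventually a, then eventually b"), the states, and the rewards are made-up illustrations, not the paper's benchmarks or formal construction.

```python
# Sketch: a small task automaton for a co-safe-LTL-style specification,
# advanced alongside the environment's event trace.
TRANSITIONS = {
    ("u0", "a"): "u1",    # first reach event a
    ("u1", "b"): "acc",   # then reach event b
}

def automaton_step(u, event):
    return TRANSITIONS.get((u, event), u)   # stay put on irrelevant events

def reward(u_next):
    return 1.0 if u_next == "acc" else 0.0  # reward on satisfying the task

u = "u0"
for event in ["c", "a", "c", "b"]:          # event trace emitted by the environment
    u = automaton_step(u, event)
    print(event, "->", u, "reward", reward(u))
```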
Ford Multi-AV Seasonal Dataset
Title | Ford Multi-AV Seasonal Dataset |
Authors | Siddharth Agarwal, Ankit Vora, Gaurav Pandey, Wayne Williams, Helen Kourous, James McBride |
Abstract | This paper presents a challenging multi-agent seasonal dataset collected by a fleet of Ford autonomous vehicles on different days and at different times during 2017-18. The vehicles traversed an average route of 66 km in Michigan that included a mix of driving scenarios such as the Detroit Airport, freeways, city centers, a university campus and suburban neighbourhoods. Each vehicle used in this data collection is a Ford Fusion outfitted with an Applanix POS-LV GNSS system, four HDL-32E Velodyne 3D-lidar scanners, six Point Grey 1.3 MP cameras arranged on the rooftop for 360-degree coverage, and one Point Grey 5 MP camera mounted behind the windshield for the forward field of view. We present the seasonal variation in weather, lighting, construction and traffic conditions experienced in dynamic urban environments. This dataset can help design robust algorithms for autonomous vehicles and multi-agent systems. Each log in the dataset is time-stamped and contains raw data from all the sensors, calibration values, pose trajectory, ground-truth pose, and 3D maps. All data is available in rosbag format and can be visualized, modified and applied using the open-source Robot Operating System (ROS). We also provide the output of state-of-the-art reflectivity-based localization for benchmarking purposes. The dataset can be freely downloaded at our website. |
Tasks | Autonomous Vehicles, Calibration |
Published | 2020-03-17 |
URL | https://arxiv.org/abs/2003.07969v1 |
PDF | https://arxiv.org/pdf/2003.07969v1.pdf |
PWC | https://paperswithcode.com/paper/ford-multi-av-seasonal-dataset |
Repo | https://github.com/Ford/AVData |
Framework | none |
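Since the logs ship in rosbag format, a natural starting point is iterating over a bag with the ROS 1 Python API, as sketched below. The bag filename and topic names are placeholders; the actual topic list is documented on the dataset website and repository.

```python
# Sketch: reading messages from one of the dataset's rosbag logs (ROS 1 API).
import rosbag

bag = rosbag.Bag("sample-log.bag")           # hypothetical filename
for topic, msg, t in bag.read_messages(topics=["/gps", "/imu"]):  # placeholder topics
    print(t.to_sec(), topic, type(msg).__name__)
bag.close()
```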
Learning entropy production via neural networks
Title | Learning entropy production via neural networks |
Authors | Dong-Kyum Kim, Youngkyoung Bae, Sangyun Lee, Hawoong Jeong |
Abstract | This Letter presents a neural estimator for entropy production, or NEEP, that estimates entropy production (EP) from trajectories without any prior knowledge of the system. For steady state, we rigorously prove that the estimator, which can be built up from different choices of deep neural networks, provides stochastic EP by optimizing the objective function proposed here. We verify the NEEP with the stochastic processes of the bead-spring and discrete flashing ratchet models, and also demonstrate that our method is applicable to high-dimensional data and non-Markovian systems. |
Tasks | |
Published | 2020-03-09 |
URL | https://arxiv.org/abs/2003.04166v2 |
PDF | https://arxiv.org/pdf/2003.04166v2.pdf |
PWC | https://paperswithcode.com/paper/learning-entropy-production-via-neural |
Repo | https://github.com/kdkyum/neep |
Framework | pytorch |
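A PyTorch sketch of a NEEP-style estimator: an antisymmetrized network output ΔS(s, s') = h(s, s') - h(s', s) trained by maximizing an objective of the form E[ΔS - exp(-ΔS)], which is our reading of the objective described in the abstract. The network size, synthetic data, and optimizer settings are illustrative; the paper's exact setup is in the repository.

```python
# Sketch: NEEP-style entropy-production estimator on a toy trajectory.
import torch
import torch.nn as nn

dim = 2
h = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(h.parameters(), lr=1e-3)

def delta_s(s, s_next):
    fwd = h(torch.cat([s, s_next], dim=1))
    bwd = h(torch.cat([s_next, s], dim=1))
    return (fwd - bwd).squeeze(1)               # antisymmetric by construction

s = torch.randn(256, dim)                       # consecutive states from a trajectory (toy)
s_next = s + 0.1 * torch.randn(256, dim)
for step in range(100):
    ds = delta_s(s, s_next)
    loss = -(ds - torch.exp(-ds)).mean()        # maximize the NEEP-style objective
    opt.zero_grad(); loss.backward(); opt.step()

print("estimated mean EP per step:", delta_s(s, s_next).mean().item())
```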
Schoenberg-Rao distances: Entropy-based and geometry-aware statistical Hilbert distances
Title | Schoenberg-Rao distances: Entropy-based and geometry-aware statistical Hilbert distances |
Authors | Gaëtan Hadjeres, Frank Nielsen |
Abstract | Distances between probability distributions that take into account the geometry of their sample space, like the Wasserstein or the Maximum Mean Discrepancy (MMD) distances, have received a lot of attention in machine learning as they can, for instance, be used to compare probability distributions with disjoint supports. In this paper, we study a class of statistical Hilbert distances that we term the Schoenberg-Rao distances, a generalization of the MMD that allows one to consider a broader class of kernels, namely the conditionally negative semi-definite kernels. In particular, we introduce a principled way to construct such kernels and derive novel closed-form distances between mixtures of Gaussian distributions, among others. These distances, derived from Rao's concave quadratic entropy, enjoy nice theoretical properties and possess interpretable hyperparameters which can be tuned for specific applications. Our method constitutes a practical alternative to Wasserstein distances and we illustrate its efficiency on a broad range of machine learning tasks such as density estimation, generative modeling and mixture simplification. |
Tasks | Density Estimation |
Published | 2020-02-19 |
URL | https://arxiv.org/abs/2002.08345v1 |
PDF | https://arxiv.org/pdf/2002.08345v1.pdf |
PWC | https://paperswithcode.com/paper/schoenberg-rao-distances-entropy-based-and |
Repo | https://github.com/Ghadjeres/schoenberg-rao |
Framework | pytorch |
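As a concrete, simple member of the family of distances built from conditionally negative definite kernels, the sketch below computes the classical energy distance between two samples (Euclidean metric as the kernel). It is shown only as background intuition, not as one of the paper's new Schoenberg-Rao distances.

```python
# Sketch: sample-based energy distance, a simple distance built from a
# conditionally negative definite kernel (the Euclidean metric).
import numpy as np

def pairwise_mean_dist(a, b):
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).mean()

def energy_distance(x, y):
    return 2 * pairwise_mean_dist(x, y) - pairwise_mean_dist(x, x) - pairwise_mean_dist(y, y)

x = np.random.randn(200, 2)
y = np.random.randn(200, 2) + 1.0     # shifted distribution
print(energy_distance(x, y), energy_distance(x, np.random.randn(200, 2)))
```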
TopologyGAN: Topology Optimization Using Generative Adversarial Networks Based on Physical Fields Over the Initial Domain
Title | TopologyGAN: Topology Optimization Using Generative Adversarial Networks Based on Physical Fields Over the Initial Domain |
Authors | Zhenguo Nie, Tong Lin, Haoliang Jiang, Levent Burak Kara |
Abstract | In topology optimization using deep learning, load and boundary conditions represented as vectors or sparse matrices often miss the opportunity to encode a rich view of the design problem, leading to less than ideal generalization results. We propose a new data-driven topology optimization model called TopologyGAN that takes advantage of various physical fields computed on the original, unoptimized material domain, as inputs to the generator of a conditional generative adversarial network (cGAN). Compared to a baseline cGAN, TopologyGAN achieves a nearly $3\times$ reduction in the mean squared error and a $2.5\times$ reduction in the mean absolute error on test problems involving previously unseen boundary conditions. Built on several existing network models, we also introduce a hybrid network called U-SE(Squeeze-and-Excitation)-ResNet for the generator that further increases the overall accuracy. We publicly share our full implementation and trained network. |
Tasks | |
Published | 2020-03-05 |
URL | https://arxiv.org/abs/2003.04685v2 |
PDF | https://arxiv.org/pdf/2003.04685v2.pdf |
PWC | https://paperswithcode.com/paper/topologygan-topology-optimization-using |
Repo | https://github.com/zhenguonie/TopologyGAN |
Framework | none |
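A short PyTorch sketch of two ingredients named in the abstract: a Squeeze-and-Excitation block of the kind used in a U-SE-ResNet generator, and assembly of the generator's conditioning input by stacking physical-field channels with load/boundary-condition channels. The channel counts and the specific fields chosen are illustrative assumptions.

```python
# Sketch: SE block plus assembly of field-conditioned generator input.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)
        return x * w   # channel-wise recalibration

# conditioning tensor: physical fields computed on the unoptimized domain
# (e.g. stress and strain-energy maps) stacked with load / boundary-condition maps
stress, energy, loads, bcs = (torch.randn(1, 1, 64, 64) for _ in range(4))
generator_input = torch.cat([stress, energy, loads, bcs], dim=1)   # (1, 4, 64, 64)

feat = torch.randn(1, 64, 32, 32)   # an intermediate generator feature map
print(SEBlock(64)(feat).shape, generator_input.shape)
```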
Learning for Video Compression with Hierarchical Quality and Recurrent Enhancement
Title | Learning for Video Compression with Hierarchical Quality and Recurrent Enhancement |
Authors | Ren Yang, Fabian Mentzer, Luc Van Gool, Radu Timofte |
Abstract | In this paper, we propose a Hierarchical Learned Video Compression (HLVC) method with three hierarchical quality layers and a recurrent enhancement network. The frames in the first layer are compressed by an image compression method with the highest quality. Using these frames as references, we propose the Bi-Directional Deep Compression (BDDC) network to compress the second layer with relatively high quality. Then, the third layer frames are compressed with the lowest quality, by the proposed Single Motion Deep Compression (SMDC) network, which adopts a single motion map to estimate the motions of multiple frames, thus saving bits for motion information. In our deep decoder, we develop the Weighted Recurrent Quality Enhancement (WRQE) network, which takes both compressed frames and the bit stream as inputs. In the recurrent cell of WRQE, the memory and update signal are weighted by quality features to reasonably leverage multi-frame information for enhancement. In our HLVC approach, the hierarchical quality benefits the coding efficiency, since the high quality information facilitates the compression and enhancement of low quality frames at encoder and decoder sides, respectively. Finally, the experiments validate that our HLVC approach advances the state-of-the-art of deep video compression methods, and outperforms the “Low-Delay P (LDP) very fast” mode of x265 in terms of both PSNR and MS-SSIM. The project page is at https://github.com/RenYang-home/HLVC. |
Tasks | Image Compression, Video Compression |
Published | 2020-03-04 |
URL | https://arxiv.org/abs/2003.01966v4 |
PDF | https://arxiv.org/pdf/2003.01966v4.pdf |
PWC | https://paperswithcode.com/paper/learning-for-video-compression-with |
Repo | https://github.com/RenYang-home/HLVC |
Framework | none |
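A pure-Python sketch of the hierarchical-quality idea from the abstract: frames in a group are assigned to three quality layers (highest-quality anchor frames, a bi-directionally coded middle frame, and the remaining lowest-quality frames). The group size and the exact layer pattern are illustrative assumptions, not the paper's coding structure.

```python
# Sketch: assigning frames of a group to three hierarchical quality layers.
def quality_layers(group_size=10):
    layers = {}
    for i in range(group_size):
        if i in (0, group_size - 1):
            layers[i] = 1            # layer 1: image-compressed, highest quality
        elif i == group_size // 2:
            layers[i] = 2            # layer 2: bi-directional deep compression (BDDC)
        else:
            layers[i] = 3            # layer 3: single-motion deep compression (SMDC)
    return layers

for frame, layer in quality_layers().items():
    print(f"frame {frame}: layer {layer}")
```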