Paper Group ANR 63
Unpaired Image Super-Resolution using Pseudo-Supervision
Title | Unpaired Image Super-Resolution using Pseudo-Supervision |
Authors | Shunta Maeda |
Abstract | In most studies on learning-based image super-resolution (SR), the paired training dataset is created by downscaling high-resolution (HR) images with a predetermined operation (e.g., bicubic). However, these methods fail to super-resolve real-world low-resolution (LR) images, for which the degradation process is much more complicated and unknown. In this paper, we propose an unpaired SR method using a generative adversarial network that does not require a paired/aligned training dataset. Our network consists of an unpaired kernel/noise correction network and a pseudo-paired SR network. The correction network removes noise and adjusts the kernel of the inputted LR image; then, the corrected clean LR image is upscaled by the SR network. In the training phase, the correction network also produces a pseudo-clean LR image from the inputted HR image, and then a mapping from the pseudo-clean LR image to the inputted HR image is learned by the SR network in a paired manner. Because our SR network is independent of the correction network, well-studied existing network architectures and pixel-wise loss functions can be integrated with the proposed framework. Experiments on diverse datasets show that the proposed method is superior to existing solutions to the unpaired SR problem. |
Tasks | Image Super-Resolution, Super-Resolution |
Published | 2020-02-26 |
URL | https://arxiv.org/abs/2002.11397v1 |
https://arxiv.org/pdf/2002.11397v1.pdf | |
PWC | https://paperswithcode.com/paper/unpaired-image-super-resolution-using-pseudo |
Repo | |
Framework | |
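A minimal sketch of the pseudo-paired training idea described in the entry above, not the authors' code: the correction and SR networks are toy stand-ins, the HR-to-LR mapping is plain bicubic downscaling, and the adversarial and cycle losses that actually train the correction network on real-world LR images are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

scale = 4
correction = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(64, 3, 3, padding=1))            # removes noise / adjusts the kernel
sr_net = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(64, 3 * scale ** 2, 3, padding=1),
                       nn.PixelShuffle(scale))                        # upscales a clean LR image
opt = torch.optim.Adam(list(correction.parameters()) + list(sr_net.parameters()), lr=1e-4)

hr = torch.rand(2, 3, 128, 128)        # HR batch (dummy data, unpaired with the LR batch)
lr_real = torch.rand(2, 3, 32, 32)     # real-world LR batch (dummy data)

# Pseudo-pair: downscale the HR image, push it through the correction network,
# and train the SR network to map the resulting pseudo-clean LR back to the HR.
lr_from_hr = F.interpolate(hr, scale_factor=1 / scale, mode='bicubic', align_corners=False)
pseudo_clean_lr = correction(lr_from_hr)
loss = F.l1_loss(sr_net(pseudo_clean_lr), hr)
opt.zero_grad()
loss.backward()
opt.step()

# At test time a real-world LR image is first cleaned, then super-resolved.
with torch.no_grad():
    print(sr_net(correction(lr_real)).shape)   # torch.Size([2, 3, 128, 128])
```

Because the SR branch only ever sees pseudo-paired data, any off-the-shelf SR architecture and pixel-wise loss can be dropped into the `sr_net` slot, which is the point the abstract makes about the framework's modularity.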
PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling
Title | PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling |
Authors | Yue Qian, Junhui Hou, Sam Kwong, Ying He |
Abstract | This paper addresses the problem of generating uniform dense point clouds to describe the underlying geometric structures from given sparse point clouds. Due to the irregular and unordered nature, point cloud densification as a generative task is challenging. To tackle the challenge, we propose a novel deep neural network based method, called PUGeo-Net, that learns a $3\times 3$ linear transformation matrix $\bf T$ for each input point. Matrix $\mathbf T$ approximates the augmented Jacobian matrix of a local parameterization and builds a one-to-one correspondence between the 2D parametric domain and the 3D tangent plane so that we can lift the adaptively distributed 2D samples (which are also learned from data) to 3D space. After that, we project the samples to the curved surface by computing a displacement along the normal of the tangent plane. PUGeo-Net is fundamentally different from the existing deep learning methods that are largely motivated by the image super-resolution techniques and generate new points in the abstract feature space. Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details. Moreover, PUGeo-Net can compute the normal for the original and generated points, which is highly desired by the surface reconstruction algorithms. Computational results show that PUGeo-Net, the first neural network that can jointly generate vertex coordinates and normals, consistently outperforms the state-of-the-art in terms of accuracy and efficiency for upsampling factor $4\sim 16$. |
Tasks | Image Super-Resolution, Super-Resolution |
Published | 2020-02-24 |
URL | https://arxiv.org/abs/2002.10277v2 |
https://arxiv.org/pdf/2002.10277v2.pdf | |
PWC | https://paperswithcode.com/paper/pugeo-net-a-geometry-centric-network-for-3d |
Repo | |
Framework | |
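The geometric step described in the PUGeo-Net abstract, lifting learned 2D parametric samples onto the tangent plane via a per-point matrix $\mathbf T$ and then displacing them along the normal, can be written in a few lines. The snippet below only illustrates that step with dummy values for $\mathbf T$, the 2D samples and the displacements, all of which are predicted by the network in the actual method.

```python
import numpy as np

p = np.array([0.2, -0.1, 0.5])                 # input point
T = np.eye(3)                                   # learned per-point 3x3 linear map (dummy value)
uv = np.random.rand(4, 2) - 0.5                 # learned 2D samples in the parametric domain
delta = 0.01 * np.random.randn(4)               # learned displacements along the normal

normal = T @ np.array([0.0, 0.0, 1.0])          # normal of the estimated tangent plane
normal /= np.linalg.norm(normal)

uv3 = np.concatenate([uv, np.zeros((4, 1))], axis=1)    # embed (u, v) as (u, v, 0)
on_tangent_plane = p + uv3 @ T.T                        # lift the samples to the tangent plane
upsampled = on_tangent_plane + delta[:, None] * normal  # push them onto the curved surface
print(upsampled.shape)                                  # (4, 3): four new points per input point
```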
Multi-Level Feature Fusion Mechanism for Single Image Super-Resolution
Title | Multi-Level Feature Fusion Mechanism for Single Image Super-Resolution |
Authors | Jiawen Lyn |
Abstract | Convolutional neural networks (CNNs) have been widely used in single image super-resolution (SISR), and SISR has consequently seen great success recently. As a network deepens, its learning ability becomes more and more powerful. However, most CNN-based SISR methods do not make full use of hierarchical features or of the network's learning ability. Because these features cannot be accessed directly by subsequent layers, the hierarchical information from earlier layers has little impact on the output, and the performance of subsequent layers is relatively poor. To solve this problem, a novel Multi-Level Feature Fusion network (MLRN) is proposed, which makes full use of global intermediate features. We also introduce a Feature Skip Fusion Block (FSFblock) as the basic module; each block directly extracts raw multi-scale features, fuses multi-level features, and then learns their spatial correlation. The correlation among features in this holistic approach leads to a continuous global memory mechanism. Extensive experiments on public datasets show that the proposed MLRN achieves favorable performance compared with the most advanced methods. |
Tasks | Image Super-Resolution, Super-Resolution |
Published | 2020-02-14 |
URL | https://arxiv.org/abs/2002.05962v1 |
https://arxiv.org/pdf/2002.05962v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-level-feature-fusion-mechanism-for |
Repo | |
Framework | |
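The abstract does not spell out the FSFblock architecture, so the block below is only one plausible reading of "multi-level feature fusion": features from earlier levels are concatenated with the current ones, fused by a 1x1 convolution, and combined with the input through a residual connection. Treat it as a generic fusion block, not the paper's exact module.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Fuse the current feature map with features from earlier levels."""
    def __init__(self, channels, num_levels):
        super().__init__()
        self.fuse = nn.Conv2d(channels * num_levels, channels, kernel_size=1)
        self.body = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                  nn.ReLU(inplace=True),
                                  nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, current, earlier_features):
        fused = self.fuse(torch.cat([current, *earlier_features], dim=1))  # 1x1 fusion
        return current + self.body(fused)                                  # residual skip

block = FusionBlock(channels=64, num_levels=3)
x = torch.rand(1, 64, 48, 48)
out = block(x, [torch.rand(1, 64, 48, 48), torch.rand(1, 64, 48, 48)])
print(out.shape)   # torch.Size([1, 64, 48, 48])
```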
Learning Deep Analysis Dictionaries – Part II: Convolutional Dictionaries
Title | Learning Deep Analysis Dictionaries – Part II: Convolutional Dictionaries |
Authors | Jun-Jie Huang, Pier Luigi Dragotti |
Abstract | In this paper, we introduce a Deep Convolutional Analysis Dictionary Model (DeepCAM) by learning convolutional dictionaries instead of the unstructured dictionaries used in the deep analysis dictionary model introduced in the companion paper. Convolutional dictionaries are more suitable for processing high-dimensional signals such as images and have only a small number of free parameters. By exploiting the properties of a convolutional dictionary, we present an efficient convolutional analysis dictionary learning approach. An L-layer DeepCAM consists of L layers of paired convolutional analysis dictionaries and element-wise soft-thresholding operators, followed by a single convolutional synthesis dictionary layer. Similar to DeepAM, each convolutional analysis dictionary is composed of a convolutional Information Preserving Analysis Dictionary (IPAD) and a convolutional Clustering Analysis Dictionary (CAD). The IPAD and the CAD are learned using variations of the proposed learning algorithm. We demonstrate that DeepCAM is an effective multilayer convolutional model and, on single image super-resolution, achieves performance comparable with other methods while also showing good generalization capabilities. |
Tasks | Dictionary Learning, Image Super-Resolution, Super-Resolution |
Published | 2020-01-31 |
URL | https://arxiv.org/abs/2002.00022v1 |
https://arxiv.org/pdf/2002.00022v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-deep-analysis-dictionaries-part-ii |
Repo | |
Framework | |
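A hedged sketch of the forward pass the abstract describes: L convolutional analysis dictionaries, each followed by element-wise soft-thresholding, and a final convolutional synthesis dictionary. The dictionary atoms, thresholds and channel widths below are arbitrary placeholders; in DeepCAM they are learned with the proposed algorithm, and the synthesis layer maps to high-resolution patches rather than to a same-sized image.

```python
import torch
import torch.nn.functional as F

def soft_threshold(x, lam):
    # element-wise soft-thresholding: sign(x) * max(|x| - lam, 0)
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)

L = 3
channels = [1, 8, 16, 32]                        # channel widths per layer (arbitrary)
analysis = [torch.randn(channels[i + 1], channels[i], 3, 3) for i in range(L)]
thresholds = [0.1 * torch.ones(channels[i + 1], 1, 1) for i in range(L)]
synthesis = torch.randn(1, channels[-1], 3, 3)   # maps features back to the image domain

x = torch.rand(1, 1, 32, 32)                     # low-resolution input patch
for W, lam in zip(analysis, thresholds):
    x = soft_threshold(F.conv2d(x, W, padding=1), lam)   # analysis dictionary + thresholding
hr_estimate = F.conv2d(x, synthesis, padding=1)          # convolutional synthesis dictionary
print(hr_estimate.shape)                                 # torch.Size([1, 1, 32, 32])
```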
An Automatic Relevance Determination Prior Bayesian Neural Network for Controlled Variable Selection
Title | An Automatic Relevance Determination Prior Bayesian Neural Network for Controlled Variable Selection |
Authors | Rendani Mbuvha, Illyes Boulkaibet, Tshilidzi Marwala |
Abstract | We present an Automatic Relevance Determination prior Bayesian Neural Network (BNN-ARD) weight $l_2$-norm measure as a feature importance statistic for the model-X knockoff filter. On both simulated data and a real-world Norwegian wind farm dataset, we show that the proposed feature importance statistic yields statistically significant improvements over similar feature importance measures in both variable selection power and predictive performance. |
Tasks | Feature Importance |
Published | 2020-01-06 |
URL | https://arxiv.org/abs/2001.01765v1 |
https://arxiv.org/pdf/2001.01765v1.pdf | |
PWC | https://paperswithcode.com/paper/an-automatic-relevance-determination-prior |
Repo | |
Framework | |
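The feature importance statistic can be illustrated with the usual model-X knockoff convention: the network is fed the original features concatenated with their knockoff copies, and the statistic for feature j contrasts the first-layer weight norms attached to the two copies. The weights below are random stand-ins for those of a trained BNN with an ARD prior, and the exact statistic used in the paper may differ in detail.

```python
import numpy as np

p, hidden = 10, 32
W1 = np.random.randn(2 * p, hidden)              # first-layer weights for the input [X, X_knockoff]

importance_orig = np.linalg.norm(W1[:p], axis=1)     # ||w_j||_2 for each original feature
importance_knock = np.linalg.norm(W1[p:], axis=1)    # ||w_j~||_2 for each knockoff copy
W_stat = importance_orig - importance_knock          # knockoff statistic W_j per feature

# Features whose statistic clears a data-dependent knockoff threshold are selected.
print(W_stat)
```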
Increasing negotiation performance at the edge of the network
Title | Increasing negotiation performance at the edge of the network |
Authors | Sam Vente, Angelika Kimmig, Alun Preece, Federico Cerutti |
Abstract | Automated negotiation has been used in a variety of distributed settings, such as privacy in Internet of Things (IoT) devices and power distribution in Smart Grids. The most common protocol under which these agents negotiate is the Alternating Offers Protocol (AOP). Under this protocol, agents cannot express any additional information to each other besides a counter-offer. This can lead to unnecessarily long negotiations, for example when agreement is impossible, risking the waste of bandwidth, a precious resource at the edge of the network. While alternative protocols exist that alleviate this problem, these solutions are too complex for low-power devices, such as IoT sensors operating at the edge of the network. To address this bottleneck, we introduce an extension to AOP called the Alternating Constrained Offers Protocol (ACOP), in which agents can also express constraints to each other. This allows agents both to search the possibility space more efficiently and to recognise impossible situations sooner. We empirically show that agents using ACOP can significantly reduce the number of messages a negotiation takes, independently of the strategy agents choose. In particular, our method significantly reduces the number of messages when an agreement is not possible. Furthermore, when an agreement is possible, it is reached sooner, with no negative effect on utility. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13668v1 |
https://arxiv.org/pdf/2003.13668v1.pdf | |
PWC | https://paperswithcode.com/paper/increasing-negotiation-performance-at-the |
Repo | |
Framework | |
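A toy illustration (not the authors' implementation) of why attaching constraints to offers shortens hopeless negotiations: two agents haggle over a single integer issue under an alternating-offers loop, and when each offer also announces the sender's acceptable interval, a disjoint pair of intervals is detected after a single message.

```python
def negotiate(range_a, range_b, announce_constraints):
    messages = 0
    offer_a, offer_b = range_a[0], range_b[1]          # each starts at its own preferred end
    disjoint = range_a[1] < range_b[0] or range_b[1] < range_a[0]
    for _ in range(1000):
        messages += 1                                   # A sends an offer (plus its interval under ACOP)
        if announce_constraints and disjoint:
            return messages, None                       # B sees the intervals cannot overlap and quits
        if range_b[0] <= offer_a <= range_b[1]:
            return messages, offer_a                    # B accepts
        messages += 1                                   # B sends a counter-offer
        if range_a[0] <= offer_b <= range_a[1]:
            return messages, offer_b                    # A accepts
        offer_a = min(offer_a + 1, range_a[1])          # both concede one step per round
        offer_b = max(offer_b - 1, range_b[0])
    return messages, None

print(negotiate((0, 40), (60, 100), announce_constraints=False))   # disjoint: exhausts the round limit
print(negotiate((0, 40), (60, 100), announce_constraints=True))    # disjoint: stops after one message
print(negotiate((0, 70), (60, 100), announce_constraints=False))   # overlapping: agreement is found
```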
Sets Clustering
Title | Sets Clustering |
Authors | Ibrahim Jubran, Murad Tukan, Alaa Maalouf, Dan Feldman |
Abstract | The input to the \emph{sets-$k$-means} problem is an integer $k\geq 1$ and a set $\mathcal{P}=\{P_1,\cdots,P_n\}$ of sets in $\mathbb{R}^d$. The goal is to compute a set $C$ of $k$ centers (points) in $\mathbb{R}^d$ that minimizes the sum $\sum_{P\in \mathcal{P}} \min_{p\in P, c\in C}\left\| p-c \right\|^2$ of squared distances to these sets. An \emph{$\varepsilon$-core-set} for this problem is a weighted subset of $\mathcal{P}$ that approximates this sum up to a $1\pm\varepsilon$ factor, for \emph{every} set $C$ of $k$ centers in $\mathbb{R}^d$. We prove that such a core-set of $O(\log^2{n})$ sets always exists, and can be computed in $O(n\log{n})$ time, for every input $\mathcal{P}$ and every fixed $d,k\geq 1$ and $\varepsilon \in (0,1)$. The result easily generalizes to any metric space, distances to the power of $z>0$, and M-estimators that handle outliers. Applying an inefficient but optimal algorithm on this coreset allows us to obtain the first PTAS ($1+\varepsilon$ approximation) for the sets-$k$-means problem that takes time near linear in $n$. This is the first result even for sets-mean on the plane ($k=1$, $d=2$). Open source code and experimental results for document classification and facility locations are also provided. |
Tasks | Document Classification |
Published | 2020-03-09 |
URL | https://arxiv.org/abs/2003.04135v1 |
https://arxiv.org/pdf/2003.04135v1.pdf | |
PWC | https://paperswithcode.com/paper/sets-clustering |
Repo | |
Framework | |
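For concreteness, the sets-$k$-means cost defined in the abstract can be evaluated directly (without the coreset) as below; the coreset construction itself is the paper's contribution and is not reproduced here. A toy random instance in $\mathbb{R}^2$ stands in for real data.

```python
import numpy as np

rng = np.random.default_rng(0)
sets = [rng.normal(size=(rng.integers(2, 6), 2)) for _ in range(20)]   # P_1, ..., P_n in R^2
centers = rng.normal(size=(3, 2))                                       # a candidate set C of k = 3 centers

def sets_kmeans_cost(sets, centers):
    cost = 0.0
    for P in sets:
        d2 = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # |P| x k squared distances
        cost += d2.min()                                                 # min over p in P and c in C
    return cost

print(sets_kmeans_cost(sets, centers))
```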
Contrastive estimation reveals topic posterior information to linear models
Title | Contrastive estimation reveals topic posterior information to linear models |
Authors | Christopher Tosh, Akshay Krishnamurthy, Daniel Hsu |
Abstract | Contrastive learning is an approach to representation learning that utilizes naturally occurring similar and dissimilar pairs of data points to find useful embeddings of data. In the context of document classification under topic modeling assumptions, we prove that contrastive learning is capable of recovering a representation of documents that reveals their underlying topic posterior information to linear models. We apply this procedure in a semi-supervised setup and demonstrate empirically that linear classifiers with these representations perform well in document classification tasks with very few training examples. |
Tasks | Document Classification, Representation Learning |
Published | 2020-03-04 |
URL | https://arxiv.org/abs/2003.02234v1 |
https://arxiv.org/pdf/2003.02234v1.pdf | |
PWC | https://paperswithcode.com/paper/contrastive-estimation-reveals-topic |
Repo | |
Framework | |
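A schematic version of the contrastive setup above, with toy stand-ins: "similar" pairs are two random halves of the same bag-of-words document, "dissimilar" pairs come from different documents, and a linear encoder is trained to score similar pairs higher. The paper's analysis assumes documents generated by a topic model and a particular theoretical construction; none of that is reproduced here.

```python
import torch
import torch.nn as nn

vocab, dim, n_docs = 100, 16, 500
docs = torch.rand(n_docs, vocab)                   # toy bag-of-words corpus (dummy data)

enc = nn.Linear(vocab, dim)                        # the representation to be learned
opt = torch.optim.Adam(enc.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for _ in range(200):
    i = torch.randint(0, n_docs, (64,))
    j = torch.randint(0, n_docs, (64,))
    half = lambda d: d * (torch.rand_like(d) > 0.5).float()   # crude "half of a document"
    z = enc(half(docs[i]))
    z_pos = enc(half(docs[i]))                     # another half of the same documents
    z_neg = enc(half(docs[j]))                     # halves of different documents
    logits = torch.cat([(z * z_pos).sum(1), (z * z_neg).sum(1)])
    labels = torch.cat([torch.ones(64), torch.zeros(64)])
    opt.zero_grad()
    bce(logits, labels).backward()
    opt.step()

# Downstream (semi-supervised) step: fit a linear classifier on the frozen
# embeddings enc(docs) using only a handful of labelled documents.
```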
Error bounds for PDE-regularized learning
Title | Error bounds for PDE-regularized learning |
Authors | Carsten Gräser, Prem Anand Alathur Srinivasan |
Abstract | In this work we consider the regularization of a supervised learning problem by partial differential equations (PDEs) and derive error bounds for the obtained approximation in terms of a PDE error term and a data error term. Assuming that the target function satisfies an unknown PDE, the PDE error term quantifies how well this PDE is approximated by the auxiliary PDE used for regularization. It is shown that this error term decreases if more data is provided. The data error term quantifies the accuracy of the given data. Furthermore, the PDE-regularized learning problem is discretized by generalized Galerkin discretizations solving the associated minimization problem in subsets of the infinite-dimensional function space, which are not necessarily subspaces. For such discretizations an error bound in terms of the PDE error, the data error, and a best approximation error is derived. |
Tasks | |
Published | 2020-03-14 |
URL | https://arxiv.org/abs/2003.06524v1 |
https://arxiv.org/pdf/2003.06524v1.pdf | |
PWC | https://paperswithcode.com/paper/error-bounds-for-pde-regularized-learning |
Repo | |
Framework | |
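Written out, a generic PDE-regularized learning problem of the kind analysed above takes the following form (the notation is mine, not necessarily the paper's): a data-fitting term over the $n$ samples plus a penalty on the residual of the auxiliary differential operator $\mathcal{L}$.

```latex
\[
  \min_{u} \;\; \frac{1}{n}\sum_{i=1}^{n} \big(u(x_i) - y_i\big)^2
  \;+\; \lambda \,\big\| \mathcal{L} u - f \big\|_{L^2(\Omega)}^{2}
\]
```

The error bounds described in the abstract then split into a data term (how accurate the $y_i$ are), a PDE term (how well $\mathcal{L}u = f$ approximates the unknown PDE satisfied by the target function), and, after a generalized Galerkin discretization, a best-approximation term.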
Unsupervised Gaze Prediction in Egocentric Videos by Energy-based Surprise Modeling
Title | Unsupervised Gaze Prediction in Egocentric Videos by Energy-based Surprise Modeling |
Authors | Sathyanarayanan N. Aakur, Arunkumar Bagavathi |
Abstract | Egocentric perception has grown rapidly with the advent of immersive computing devices. Human gaze prediction is an important problem in analyzing egocentric videos and has largely been tackled through either saliency-based modeling or highly supervised learning. In this work, we tackle the problem of jointly predicting human gaze points and temporal segmentation of egocentric videos, in an unsupervised manner without using any training data. We introduce an unsupervised computational model that draws inspiration from cognitive psychology models of human attention and event perception. We use Grenander’s pattern theory formalism to represent spatial-temporal features and model surprise as a mechanism to predict gaze fixation points and temporally segment egocentric videos. Extensive evaluation on two publicly available datasets, GTEA and GTEA+, shows that the proposed model outperforms all unsupervised baselines and some supervised gaze prediction baselines. Finally, we show that the model can also temporally segment egocentric videos with a performance comparable to more complex, fully supervised deep learning baselines. |
Tasks | Gaze Prediction |
Published | 2020-01-30 |
URL | https://arxiv.org/abs/2001.11580v1 |
https://arxiv.org/pdf/2001.11580v1.pdf | |
PWC | https://paperswithcode.com/paper/unsupervised-gaze-prediction-in-egocentric |
Repo | |
Framework | |
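A heavily simplified sketch of the surprise idea in the entry above: an internal model predicts the next frame's features, and frames where the prediction error spikes are flagged as event boundaries. The pattern-theory representation and the gaze-map prediction of the actual model are not reproduced; the features and threshold below are dummy values.

```python
import numpy as np

rng = np.random.default_rng(1)
# dummy per-frame feature vectors with an abrupt change at frame 100
features = np.vstack([rng.normal(0, 1, (100, 64)), rng.normal(3, 1, (80, 64))])

running = features[0].copy()
boundaries, alpha, thresh = [], 0.1, 18.0
for t, f in enumerate(features[1:], start=1):
    surprise = np.linalg.norm(f - running)       # prediction error of the running internal model
    if surprise > thresh:
        boundaries.append(t)                     # high surprise -> candidate event boundary
    running = (1 - alpha) * running + alpha * f  # update the internal prediction

print(boundaries[:5])                            # boundary candidates cluster around frame 100
```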
Trust dynamics and user attitudes on recommendation errors: preliminary results
Title | Trust dynamics and user attitudes on recommendation errors: preliminary results |
Authors | David A. Pelta, Jose L. Verdegay, Maria T. Lamata, Carlos Cruz Corona |
Abstract | Artificial-Intelligence-based systems may be used as digital nudging techniques that can steer or coerce users into making decisions not always aligned with their true interests. When such systems properly address the issues of Fairness, Accountability, Transparency, and Ethics, the trust of the user in the system would depend only on the system’s output. The aim of this paper is to propose a model for exploring how good and bad recommendations affect the overall trust in an idealized recommender system that issues recommendations over a resource with limited capacity. The impact of different user attitudes on trust dynamics is also considered. Using simulations, we ran a large set of experiments that allowed us to observe that: 1) under certain circumstances, all the users ended up accepting the recommendations; and 2) the user attitude (controlled by a single parameter balancing the gain/loss of trust after a good/bad recommendation) has a great impact on the trust dynamics. |
Tasks | Recommendation Systems |
Published | 2020-02-11 |
URL | https://arxiv.org/abs/2002.04302v1 |
https://arxiv.org/pdf/2002.04302v1.pdf | |
PWC | https://paperswithcode.com/paper/trust-dynamics-and-user-attitudes-on |
Repo | |
Framework | |
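A toy simulation in the spirit of the model described above, with my own (hypothetical) update rule rather than the authors': trust grows after a good recommendation and shrinks after a bad one, and a single attitude parameter scales the penalty for bad recommendations relative to the gain for good ones.

```python
import random

def simulate(attitude, p_good=0.8, steps=200, seed=0):
    random.seed(seed)
    trust = 0.5
    for _ in range(steps):
        good = random.random() < p_good
        delta = 0.02 if good else -0.02 * attitude   # attitude scales the penalty for bad advice
        trust = min(1.0, max(0.0, trust + delta))    # keep trust in [0, 1]
    return trust

for attitude in (0.5, 1.0, 4.0):                     # tolerant, neutral, loss-averse user
    print(attitude, round(simulate(attitude), 2))
```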
Speech2Action: Cross-modal Supervision for Action Recognition
Title | Speech2Action: Cross-modal Supervision for Action Recognition |
Authors | Arsha Nagrani, Chen Sun, David Ross, Rahul Sukthankar, Cordelia Schmid, Andrew Zisserman |
Abstract | Is it possible to guess human action from dialogue alone? In this work we investigate the link between spoken words and actions in movies. We note that movie screenplays describe actions, as well as contain the speech of characters and hence can be used to learn this correlation with no additional supervision. We train a BERT-based Speech2Action classifier on over a thousand movie screenplays, to predict action labels from transcribed speech segments. We then apply this model to the speech segments of a large unlabelled movie corpus (188M speech segments from 288K movies). Using the predictions of this model, we obtain weak action labels for over 800K video clips. By training on these video clips, we demonstrate superior action recognition performance on standard action recognition benchmarks, without using a single manually labelled action example. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13594v1 |
https://arxiv.org/pdf/2003.13594v1.pdf | |
PWC | https://paperswithcode.com/paper/speech2action-cross-modal-supervision-for |
Repo | |
Framework | |
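The weak-labelling pipeline above can be sketched end to end with scikit-learn. The paper trains a BERT-based classifier on roughly a thousand screenplays; the toy below uses tf-idf plus logistic regression and a few made-up lines of dialogue purely to show the flow from screenplay (speech, action) pairs to weak action labels on unlabelled speech.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# (speech, action) pairs mined from screenplays: dialogue paired with the
# verb of the nearby stage direction (made-up examples).
screenplay_speech = ["hands up, don't move", "let's dance tonight",
                     "get in the car, drive", "may I have this dance"]
screenplay_action = ["point gun", "dance", "drive", "dance"]

vec = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(screenplay_speech),
                                            screenplay_action)

# Apply the classifier to unlabelled movie speech segments.
unlabelled = ["shall we dance", "step on the gas"]
for text, p in zip(unlabelled, clf.predict_proba(vec.transform(unlabelled))):
    print(text, "->", clf.classes_[p.argmax()], round(float(p.max()), 2))
# In the paper, only confident predictions are kept as weak action labels
# for the corresponding video clips, which then train the video model.
```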
Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems
Title | Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems |
Authors | Md. Shirajum Munir, Nguyen H. Tran, Walid Saad, Choong Seon Hong |
Abstract | The stringent requirements of mobile edge computing (MEC) applications and functions demand the high-capacity and dense deployment of MEC hosts in upcoming wireless networks. However, operating such high-capacity MEC hosts can significantly increase energy consumption; thus, to keep the system sustainable, a base station (BS) unit can act as a self-powered BS. In this paper, an effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied. First, a two-stage linear stochastic programming problem is formulated with the goal of minimizing the total energy consumption cost of the system while fulfilling the energy demand. Second, a semi-distributed data-driven solution is proposed by developing a novel multi-agent meta-reinforcement learning (MAMRL) framework to solve the formulated problem. In particular, each BS plays the role of a local agent that explores Markovian behavior for both energy consumption and generation, while transferring time-varying features to a meta-agent. The meta-agent then optimizes (i.e., exploits) the energy dispatch decision by accepting only the observations from each local agent along with its own state information. Meanwhile, each BS agent estimates its own energy dispatch policy by applying the parameters learned from the meta-agent. Finally, the proposed MAMRL framework is benchmarked by analyzing deterministic, asymmetric, and stochastic environments in terms of non-renewable energy usage, energy cost, and accuracy. Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4% (with 95.8% prediction accuracy), compared to other baseline methods. |
Tasks | |
Published | 2020-02-20 |
URL | https://arxiv.org/abs/2002.08567v1 |
https://arxiv.org/pdf/2002.08567v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-agent-meta-reinforcement-learning-for |
Repo | |
Framework | |
A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service
Title | A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service |
Authors | Simone Disabato, Alessandro Falcetta, Alessio Mongelluzzo, Manuel Roveri |
Abstract | Deep-learning-as-a-service is a novel and promising computing paradigm aiming at providing machine/deep learning solutions and mechanisms through Cloud-based computing infrastructures. Thanks to its ability to remotely execute and train deep learning models (which typically require high computational loads and memory occupation), such an approach guarantees high performance, scalability, and availability. Unfortunately, it requires sending the information to be processed (e.g., signals, images, positions, sounds, videos) to the Cloud, with potentially catastrophic impacts on the privacy of users. This paper introduces a novel distributed architecture for deep-learning-as-a-service that preserves users' sensitive data while providing Cloud-based machine and deep learning services. The proposed architecture, which relies on Homomorphic Encryption to perform operations on encrypted data, has been tailored for Convolutional Neural Networks (CNNs) in the domain of image analysis and implemented through a client-server REST-based approach. Experimental results show the effectiveness of the proposed architecture. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13541v1 |
https://arxiv.org/pdf/2003.13541v1.pdf | |
PWC | https://paperswithcode.com/paper/a-privacy-preserving-distributed-architecture |
Repo | |
Framework | |
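The key property such an architecture relies on, that a linear layer (and hence a convolution) can be evaluated on encrypted inputs, can be demonstrated with additively homomorphic Paillier encryption. The sketch below assumes the `phe` (python-paillier) package; the paper's system uses its own scheme, handles full CNNs, and exposes a REST interface, all of which are omitted here.

```python
import numpy as np
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Client side: encrypt the input vector before sending it to the server.
x = np.array([0.5, -1.2, 3.0])
enc_x = [public_key.encrypt(float(v)) for v in x]

# Server side: weights stay in the clear; only ciphertext additions and
# scalar multiplications are needed, so the server never decrypts anything.
W = np.array([[0.2, 0.1, -0.3],
              [1.0, 0.0, 0.5]])

def encrypted_dot(row, enc_vec):
    acc = enc_vec[0] * float(row[0])
    for w, c in zip(row[1:], enc_vec[1:]):
        acc = acc + c * float(w)
    return acc

enc_y = [encrypted_dot(row, enc_x) for row in W]

# Client side: decrypt the result of the linear layer.
y = [private_key.decrypt(c) for c in enc_y]
print(np.allclose(y, W @ x))   # True, up to encoding precision
```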
Bounding the expectation of the supremum of empirical processes indexed by Hölder classes
Title | Bounding the expectation of the supremum of empirical processes indexed by Hölder classes |
Authors | Nicolas Schreuder |
Abstract | We obtain upper bounds on the expectation of the supremum of empirical processes indexed by Hölder classes of any smoothness and for any distribution supported on a bounded set. Another way to see it is from the point of view of integral probability metrics (IPM), a class of metrics on the space of probability measures: our rates quantify how quickly the empirical measure obtained from $n$ independent samples from a probability measure $P$ approaches $P$ with respect to the IPM indexed by Hölder classes. As an extremal case we recover the known rates for the Wasserstein-1 distance. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13530v1 |
https://arxiv.org/pdf/2003.13530v1.pdf | |
PWC | https://paperswithcode.com/paper/bounding-the-expectation-of-the-supremum-of |
Repo | |
Framework | |
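For reference, the quantity being bounded is the integral probability metric between the empirical measure and $P$, indexed by a Hölder ball $\mathcal{F}$:

```latex
\[
  d_{\mathcal{F}}(P_n, P)
  \;=\; \sup_{f \in \mathcal{F}}
  \left| \frac{1}{n}\sum_{i=1}^{n} f(X_i) - \mathbb{E}_{X \sim P}\big[f(X)\big] \right|,
  \qquad X_1,\dots,X_n \stackrel{\text{i.i.d.}}{\sim} P .
\]
```

Taking $\mathcal{F}$ to be the class of 1-Lipschitz functions recovers the Wasserstein-1 distance mentioned in the abstract as the extremal case.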