Paper Group ANR 33
An End-to-end Framework For Low-Resolution Remote Sensing Semantic Segmentation. Memory-efficient Learning for Large-scale Computational Imaging. Sentiment and Knowledge Based Algorithmic Trading with Deep Reinforcement Learning. A Longitudinal Analysis of YouTube’s Promotion of Conspiracy Videos. Deep Reinforcement Learning with Linear Quadratic Regulator Regions. …
An End-to-end Framework For Low-Resolution Remote Sensing Semantic Segmentation
Title | An End-to-end Framework For Low-Resolution Remote Sensing Semantic Segmentation |
Authors | Matheus Barros Pereira, Jefersson Alex dos Santos |
Abstract | High-resolution images for remote sensing applications are often not affordable or accessible, especially when in need of a wide temporal span of recordings. Given the easy access to low-resolution (LR) images from satellites, many remote sensing works rely on this type of data. The problem is that LR images are not appropriate for semantic segmentation, due to the need for high-quality data for accurate pixel prediction for this task. In this paper, we propose an end-to-end framework that unites a super-resolution and a semantic segmentation module in order to produce accurate thematic maps from LR inputs. It allows the semantic segmentation network to conduct the reconstruction process, modifying the input image with helpful textures. We evaluate the framework with three remote sensing datasets. The results show that the framework is capable of achieving a semantic segmentation performance close to native high-resolution data, while also surpassing the performance of a network trained with LR inputs. |
Tasks | Semantic Segmentation, Super-Resolution |
Published | 2020-03-17 |
URL | https://arxiv.org/abs/2003.07955v1 |
PDF | https://arxiv.org/pdf/2003.07955v1.pdf |
PWC | https://paperswithcode.com/paper/an-end-to-end-framework-for-low-resolution |
Repo | |
Framework | |
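The coupled idea (super-resolve first, then segment the reconstructed image) can be caricatured with a toy pipeline. This is only a sketch: the paper's modules are deep networks trained end to end, while `upsample_nearest` and `segment` below are hypothetical stand-ins.

```python
def upsample_nearest(img, scale):
    """Nearest-neighbour upsampling: each LR pixel becomes a scale x scale block."""
    return [[img[r // scale][c // scale]
             for c in range(len(img[0]) * scale)]
            for r in range(len(img) * scale)]

def segment(img, threshold=0.5):
    """Toy per-pixel classifier: label 1 where intensity exceeds the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in img]

lr = [[0.2, 0.9],
      [0.8, 0.1]]
hr = upsample_nearest(lr, 2)   # a 4x4 "super-resolved" image
labels = segment(hr)           # a 4x4 thematic map, one label per HR pixel
assert labels[0] == [0, 0, 1, 1]
```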
Memory-efficient Learning for Large-scale Computational Imaging
Title | Memory-efficient Learning for Large-scale Computational Imaging |
Authors | Michael Kellman, Kevin Zhang, Jon Tamir, Emrah Bostan, Michael Lustig, Laura Waller |
Abstract | Critical aspects of computational imaging systems, such as experimental design and image priors, can be optimized through deep networks formed by the unrolled iterations of classical model-based reconstructions (termed physics-based networks). However, for real-world large-scale inverse problems, computing gradients via backpropagation is infeasible due to memory limitations of graphics processing units. In this work, we propose a memory-efficient learning procedure that exploits the reversibility of the network’s layers to enable data-driven design for large-scale computational imaging systems. We demonstrate our method on a small-scale compressed sensing example, as well as two large-scale real-world systems: multi-channel magnetic resonance imaging and super-resolution optical microscopy. |
Tasks | Super-Resolution |
Published | 2020-03-11 |
URL | https://arxiv.org/abs/2003.05551v1 |
PDF | https://arxiv.org/pdf/2003.05551v1.pdf |
PWC | https://paperswithcode.com/paper/memory-efficient-learning-for-large-scale-1 |
Repo | |
Framework | |
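The memory saving rests on layer reversibility: if each unrolled iteration is invertible, intermediate activations can be recomputed during backpropagation instead of stored. A minimal sketch with an additive-coupling layer (illustrative only; `f` and `g` stand in for arbitrary sub-networks and are not the paper's physics-based operators):

```python
def f(v): return 3 * v * v   # arbitrary sub-network stand-ins
def g(v): return v + 1

def forward(x1, x2):
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def inverse(y1, y2):
    x2 = y2 - g(y1)   # recompute the inputs from the outputs, so there is
    x1 = y1 - f(x2)   # no need to store x1, x2 during the forward pass
    return x1, x2

y1, y2 = forward(2.0, 3.0)
assert inverse(y1, y2) == (2.0, 3.0)
```

Because the inverse is exact, the memory cost of training becomes independent of network depth, at the price of recomputation in the backward pass.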
Sentiment and Knowledge Based Algorithmic Trading with Deep Reinforcement Learning
Title | Sentiment and Knowledge Based Algorithmic Trading with Deep Reinforcement Learning |
Authors | Abhishek Nan, Anandh Perumal, Osmar R. Zaiane |
Abstract | Algorithmic trading, due to its inherent nature, is a difficult problem to tackle; there are too many variables involved in the real world which make it almost impossible to have reliable algorithms for automated stock trading. The lack of reliable labelled data that considers physical and physiological factors that dictate the ups and downs of the market, has hindered the supervised learning attempts for dependable predictions. To learn a good policy for trading, we formulate an approach using reinforcement learning which uses traditional time series stock price data and combines it with news headline sentiments, while leveraging knowledge graphs for exploiting news about implicit relationships. |
Tasks | Knowledge Graphs, Time Series |
Published | 2020-01-26 |
URL | https://arxiv.org/abs/2001.09403v1 |
PDF | https://arxiv.org/pdf/2001.09403v1.pdf |
PWC | https://paperswithcode.com/paper/sentiment-and-knowledge-based-algorithmic |
Repo | |
Framework | |
A Longitudinal Analysis of YouTube’s Promotion of Conspiracy Videos
Title | A Longitudinal Analysis of YouTube’s Promotion of Conspiracy Videos |
Authors | Marc Faddoul, Guillaume Chaslot, Hany Farid |
Abstract | Conspiracy theories have flourished on social media, raising concerns that such content is fueling the spread of disinformation, supporting extremist ideologies, and in some cases, leading to violence. Under increased scrutiny and pressure from legislators and the public, YouTube announced efforts to change their recommendation algorithms so that the most egregious conspiracy videos are demoted and demonetized. To verify this claim, we have developed a classifier for automatically determining if a video is conspiratorial (e.g., the moon landing was faked, the pyramids of Giza were built by aliens, end of the world prophecies, etc.). We coupled this classifier with an emulation of YouTube’s watch-next algorithm on more than a thousand popular informational channels to obtain a year-long picture of the videos actively promoted by YouTube. We also obtained trends of the so-called filter-bubble effect for conspiracy theories. |
Tasks | |
Published | 2020-03-06 |
URL | https://arxiv.org/abs/2003.03318v1 |
PDF | https://arxiv.org/pdf/2003.03318v1.pdf |
PWC | https://paperswithcode.com/paper/a-longitudinal-analysis-of-youtubes-promotion |
Repo | |
Framework | |
Deep Reinforcement Learning with Linear Quadratic Regulator Regions
Title | Deep Reinforcement Learning with Linear Quadratic Regulator Regions |
Authors | Gabriel I. Fernandez, Colin Togashi, Dennis W. Hong, Lin F. Yang |
Abstract | Practitioners often rely on compute-intensive domain randomization to ensure reinforcement learning policies trained in simulation can robustly transfer to the real world. Due to unmodeled nonlinearities in the real system, however, even such simulated policies can still fail to perform stably enough to acquire experience in real environments. In this paper we propose a novel method that guarantees a stable region of attraction for the output of a policy trained in simulation, even for highly nonlinear systems. Our core technique is to use “bias-shifted” neural networks for constructing the controller and training the network in the simulator. The modified neural networks not only capture the nonlinearities of the system but also provably preserve linearity in a certain region of the state space and thus can be tuned to resemble a linear quadratic regulator that is known to be stable for the real system. We have tested our new method by transferring simulated policies for a swing-up inverted pendulum to real systems and demonstrated its efficacy. |
Tasks | |
Published | 2020-02-23 |
URL | https://arxiv.org/abs/2002.09820v2 |
PDF | https://arxiv.org/pdf/2002.09820v2.pdf |
PWC | https://paperswithcode.com/paper/deep-reinforcement-learning-with-linear |
Repo | |
Framework | |
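The "resemble a linear quadratic regulator" target can be made concrete in the scalar case. The following is a hedged sketch, not the paper's bias-shifted network: it only iterates the discrete-time Riccati recursion that the tuned controller is meant to match, for an illustrative unstable system.

```python
def scalar_dlqr(a, b, q, r, iters=500):
    """Iterate the discrete-time Riccati recursion for the scalar system
    x' = a*x + b*u with stage cost q*x^2 + r*u^2; returns the LQR gain k
    of the stabilising controller u = -k*x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# an open-loop unstable system (a = 1.2) is stabilised: |a - b*k| < 1
k = scalar_dlqr(a=1.2, b=1.0, q=1.0, r=1.0)
assert abs(1.2 - 1.0 * k) < 1.0
```

Stability of the closed loop `x' = (a - b*k) * x` is exactly the region-of-attraction property the paper wants its simulated policy to inherit near the operating point.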
A Visual-inertial Navigation Method for High-Speed Unmanned Aerial Vehicles
Title | A Visual-inertial Navigation Method for High-Speed Unmanned Aerial Vehicles |
Authors | Xin-long Luo, Jia-hui Lv, Geng Sun |
Abstract | This paper investigates the localization problem of a high-speed, high-altitude unmanned aerial vehicle (UAV) equipped with a monocular camera and an inertial navigation system. It proposes a navigation method utilizing the complementarity of vision and inertial devices to overcome the singularity which arises from the horizontal flight of a UAV. Furthermore, it modifies the mathematical model of the localization problem by separating the linear parts from the nonlinear parts, and replaces a nonlinear least-squares problem with a linearly equality-constrained optimization problem. In order to avoid the ill-conditioning near the optimal point of sequential unconstrained minimization techniques (penalty methods), it constructs a semi-implicit continuous method with a trust-region technique, based on a differential-algebraic dynamical system, to solve the linearly equality-constrained optimization problem. It also analyzes the global convergence property of the semi-implicit continuous method on an infinite integration interval, in contrast to the traditional convergence analysis of numerical methods for ordinary differential equations on a finite integration interval. Finally, promising numerical results are presented. |
Tasks | |
Published | 2020-02-12 |
URL | https://arxiv.org/abs/2002.04791v1 |
PDF | https://arxiv.org/pdf/2002.04791v1.pdf |
PWC | https://paperswithcode.com/paper/a-visual-inertial-navigation-method-for-high |
Repo | |
Framework | |
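The motivation for avoiding penalty methods can be seen on a toy equality-constrained problem (illustrative numbers, not from the paper): when minimizing (x-3)^2 + (y-1)^2 subject to x + y = 2, the quadratic-penalty minimizer only reaches the constrained optimum (2, 0) in the limit of an infinite penalty weight, which is exactly where the penalized problem becomes ill-conditioned.

```python
def penalty_min(mu):
    """Closed-form minimiser of (x-3)^2 + (y-1)^2 + mu*(x+y-2)^2.
    Setting both partial derivatives to zero gives x = y + 2 and
    y = 1 / (1 + 2*mu)."""
    y = 1.0 / (1.0 + 2.0 * mu)
    return y + 2.0, y

# the penalty minimiser approaches the true constrained optimum (2, 0)
# only as mu -> infinity, where conditioning degrades
x, y = penalty_min(1e6)
assert abs(x - 2.0) < 1e-5 and abs(y) < 1e-5
```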
Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion
Title | Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion |
Authors | Julian Chibane, Thiemo Alldieck, Gerard Pons-Moll |
Abstract | While many works focus on 3D reconstruction from images, in this paper, we focus on 3D shape reconstruction and completion from a variety of 3D inputs, which are deficient in some respect: low and high resolution voxels, sparse and dense point clouds, complete or incomplete. Processing of such 3D inputs is an increasingly important problem as they are the output of 3D scanners, which are becoming more accessible, and are the intermediate output of 3D computer vision algorithms. Recently, learned implicit functions have shown great promise as they produce continuous reconstructions. However, we identified two limitations in reconstruction from 3D inputs: 1) details present in the input data are not retained, and 2) poor reconstruction of articulated humans. To solve this, we propose Implicit Feature Networks (IF-Nets), which deliver continuous outputs, can handle multiple topologies, and can complete shapes for missing or sparse input data, retaining the nice properties of recent learned implicit functions; critically, they can also retain detail when it is present in the input data, and can reconstruct articulated humans. Our work differs from prior work in two crucial aspects. First, instead of using a single vector to encode a 3D shape, we extract a learnable 3-dimensional multi-scale tensor of deep features, which is aligned with the original Euclidean space embedding the shape. Second, instead of classifying x-y-z point coordinates directly, we classify deep features extracted from the tensor at a continuous query point. We show that this forces our model to make decisions based on global and local shape structure, as opposed to point coordinates, which are arbitrary under Euclidean transformations. Experiments demonstrate that IF-Nets outperform prior work in 3D object reconstruction on ShapeNet, and obtain significantly more accurate 3D human reconstructions. |
Tasks | 3D Object Reconstruction, 3D Reconstruction, Object Reconstruction |
Published | 2020-03-03 |
URL | https://arxiv.org/abs/2003.01456v1 |
PDF | https://arxiv.org/pdf/2003.01456v1.pdf |
PWC | https://paperswithcode.com/paper/implicit-functions-in-feature-space-for-3d |
Repo | |
Framework | |
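The second design choice, classifying features sampled at a continuous query point rather than raw coordinates, relies on interpolating a feature grid. A 2-D bilinear sketch of that sampling step (the paper works with a 3-D multi-scale deep-feature tensor; the scalar grid below is a hypothetical simplification):

```python
import math

def query(grid, x, y):
    """Bilinearly interpolate a 2-D scalar feature grid at a continuous
    query point (x, y); grid[r][c] holds the feature at integer (c, r)."""
    c0, r0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - c0, y - r0
    return ((1 - dx) * (1 - dy) * grid[r0][c0]
            + dx * (1 - dy) * grid[r0][c0 + 1]
            + (1 - dx) * dy * grid[r0 + 1][c0]
            + dx * dy * grid[r0 + 1][c0 + 1])

grid = [[0.0, 1.0],
        [2.0, 3.0]]
assert query(grid, 0.5, 0.5) == 1.5   # the cell centre averages the four corners
```

Because the sampled feature varies continuously with (x, y), a classifier on top of it yields a continuous implicit surface while staying aligned with the input's Euclidean space.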
Finding temporal patterns using algebraic fingerprints
Title | Finding temporal patterns using algebraic fingerprints |
Authors | Suhas Thejaswi, Aristides Gionis |
Abstract | In this paper we study a family of pattern-detection problems in vertex-colored temporal graphs. In particular, given a vertex-colored temporal graph and a multi-set of colors as a query, we search for temporal paths in the graph that contain the colors specified in the query. These types of problems have several interesting applications, for example, recommending tours for tourists, or searching for abnormal behavior in a network of financial transactions. For the family of pattern-detection problems we define, we establish complexity results and design an algebraic-algorithmic framework based on constrained multilinear sieving. We demonstrate that our solution can scale to massive graphs with up to a hundred million edges, despite the problems being NP-hard. Our implementation, which is publicly available, exhibits practical edge-linear scalability and is highly optimized. For example, in a real-world graph dataset with more than six million edges and a multi-set query with ten colors, we can extract an optimal solution in less than eight minutes on a Haswell desktop with four cores. |
Tasks | |
Published | 2020-01-20 |
URL | https://arxiv.org/abs/2001.07158v2 |
PDF | https://arxiv.org/pdf/2001.07158v2.pdf |
PWC | https://paperswithcode.com/paper/finding-temporal-patterns-using-algebraic |
Repo | |
Framework | |
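For intuition, the query problem can be stated as a brute-force search; the paper's contribution is precisely avoiding this exponential baseline via constrained multilinear sieving. The edge list and colours below are made up for illustration.

```python
from collections import Counter

def has_temporal_path(edges, colors, query):
    """Brute-force check for a temporal path (strictly increasing edge
    timestamps, no vertex revisited) whose vertex colours form exactly
    the multiset `query`. edges: (u, v, t) triples; colors: vertex -> colour."""
    target = Counter(query)

    def extend(path, last_t):
        if Counter(colors[v] for v in path) == target:
            return True
        if len(path) == len(query):
            return False
        return any(extend(path + [v], t)
                   for u, v, t in edges
                   if u == path[-1] and t > last_t and v not in path)

    return any(extend([v], float("-inf")) for v in colors)

edges = [("a", "b", 1), ("b", "c", 2), ("c", "a", 3)]
colors = {"a": "red", "b": "blue", "c": "red"}
assert has_temporal_path(edges, colors, ["red", "blue", "red"])
assert not has_temporal_path(edges, colors, ["blue", "blue", "red"])
```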
Index-based Solutions for Efficient Density Peaks Clustering
Title | Index-based Solutions for Efficient Density Peaks Clustering |
Authors | Zafaryab Rasool, Rui Zhou, Lu Chen, Chengfei Liu, Jiajie Xu |
Abstract | Density Peaks Clustering (DPC), a novel density-based clustering approach, has received considerable attention from the research community, primarily due to its simplicity and low parameter requirements. However, the resultant clusters obtained using DPC are influenced by its sensitive parameter dc, which depends upon the data distribution and the requirements of different users. Besides, the original DPC algorithm requires visiting a large number of objects, making it slow. To this end, this paper investigates index-based solutions for DPC. Specifically, we propose two list-based index methods, viz. (i) a simple List Index and (ii) an advanced Cumulative Histogram Index. Efficient query algorithms are proposed for these indices, which avoid a significant number of irrelevant comparisons at the cost of space. To remedy this for memory-constrained systems, we further introduce an approximate version of the above indices, which allows a substantial reduction in space cost provided slight inaccuracies are admissible. Furthermore, owing to the considerably lower memory requirements of existing tree-based index structures, we also present effective pruning techniques and efficient query algorithms to support DPC using the popular Quadtree Index and R-tree Index. Finally, we evaluate all the above indices and present the findings and results obtained from a set of extensive experiments on six synthetic and real datasets. The experimental insights help guide the selection of a suitable index. |
Tasks | |
Published | 2020-02-08 |
URL | https://arxiv.org/abs/2002.03182v1 |
PDF | https://arxiv.org/pdf/2002.03182v1.pdf |
PWC | https://paperswithcode.com/paper/index-based-solutions-for-efficient-density |
Repo | |
Framework | |
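The two quantities DPC computes for every object, the local density rho (controlled by the sensitive parameter dc) and the distance delta to the nearest denser object, can be sketched naively. The paper's indices accelerate exactly these computations; the points and dc below are illustrative only.

```python
def density_peaks(points, dc):
    """Naive DPC statistics: rho[i] counts neighbours within dc, and
    delta[i] is the distance to the nearest point of strictly higher
    density (or the maximum distance, for the densest point)."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    rho = [sum(1 for q in points if q is not p and dist(p, q) < dc)
           for p in points]
    delta = [min([dist(p, points[j]) for j in range(len(points))
                  if rho[j] > rho[i]]
                 or [max(dist(p, q) for q in points)])
             for i, p in enumerate(points)]
    return rho, delta

# a tight 5-point cluster plus one outlier: the cluster centre gets both
# the highest density and the largest delta, marking it as a cluster peak
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5), (10, 10)]
rho, delta = density_peaks(pts, dc=1.2)
assert rho.index(max(rho)) == 4 and delta.index(max(delta)) == 4
```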
A Precise High-Dimensional Asymptotic Theory for Boosting and Min-L1-Norm Interpolated Classifiers
Title | A Precise High-Dimensional Asymptotic Theory for Boosting and Min-L1-Norm Interpolated Classifiers |
Authors | Tengyuan Liang, Pragya Sur |
Abstract | This paper establishes a precise high-dimensional asymptotic theory for Boosting on separable data, taking statistical and computational perspectives. We consider the setting where the number of features (weak learners) p scales with the sample size n, in an over-parametrized regime. On the statistical front, we provide an exact analysis of the generalization error of Boosting when the algorithm interpolates the training data and maximizes an empirical L1 margin. The angle between the Boosting solution and the ground truth is characterized explicitly. On the computational front, we provide a sharp analysis of the stopping time when Boosting approximately maximizes the empirical L1 margin. Furthermore, we discover that the larger the margin, the smaller the proportion of active features (with zero initialization). At the heart of our theory lies a detailed study of the maximum L1 margin, using tools from convex geometry. The maximum L1 margin can be precisely described by a new system of non-linear equations, which we study using a novel uniform deviation argument. Preliminary numerical results are presented to demonstrate the accuracy of our theory. |
Tasks | |
Published | 2020-02-05 |
URL | https://arxiv.org/abs/2002.01586v1 |
PDF | https://arxiv.org/pdf/2002.01586v1.pdf |
PWC | https://paperswithcode.com/paper/a-precise-high-dimensional-asymptotic-theory |
Repo | |
Framework | |
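The central quantity, the empirical L1 margin of an interpolating linear classifier, is easy to state in code. This is a sketch of the definition on toy data, not of the paper's asymptotic analysis.

```python
def l1_margin(w, X, y):
    """Empirical L1 margin of a linear classifier: the smallest signed
    score y_i * <w, x_i>, normalised by the L1 norm of w."""
    norm = sum(abs(c) for c in w)
    return min(yi * sum(wi * xi for wi, xi in zip(w, x))
               for x, yi in zip(X, y)) / norm

X = [(1.0, 2.0), (-1.0, -1.0), (2.0, 0.5)]
y = [1, -1, 1]
assert l1_margin((1.0, 1.0), X, y) == 1.0
```

The normalisation by the L1 norm makes the margin scale-invariant, which is why boosting with early stopping, whose iterates control the L1 norm of the coefficient vector, is naturally analysed through this quantity.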
Assignment Flows for Data Labeling on Graphs: Convergence and Stability
Title | Assignment Flows for Data Labeling on Graphs: Convergence and Stability |
Authors | Artjom Zern, Alexander Zeilmann, Christoph Schnörr |
Abstract | The assignment flow, recently introduced in J. Math. Imaging and Vision 58/2 (2017), constitutes a high-dimensional dynamical system that evolves on an elementary statistical manifold and performs contextual labeling (classification) of data given in any metric space. Vertices of a given graph index the data points and define a system of neighborhoods. These neighborhoods, together with nonnegative weight parameters, define the regularization of the evolution of label assignments to data points, through geometric averaging induced by the affine e-connection of information geometry. From the viewpoint of evolutionary game dynamics, the assignment flow may be characterized as a large system of replicator equations that are coupled by geometric averaging. This paper establishes conditions on the weight parameters that guarantee convergence of the continuous-time assignment flow to integral assignments (labelings), up to a negligible subset of situations that will not be encountered when working with real data in practice. Furthermore, we classify attractors of the flow and quantify the corresponding basins of attraction. This provides convergence guarantees for the assignment flow, which are extended to the discrete-time assignment flow that results from applying a Runge-Kutta-Munthe-Kaas scheme for numerical geometric integration of the assignment flow. Several counter-examples illustrate that violating the conditions may entail unfavorable behavior of the assignment flow regarding contextual data classification. |
Tasks | |
Published | 2020-02-26 |
URL | https://arxiv.org/abs/2002.11571v1 |
PDF | https://arxiv.org/pdf/2002.11571v1.pdf |
PWC | https://paperswithcode.com/paper/assignment-flows-for-data-labeling-on-graphs |
Repo | |
Framework | |
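A single uncoupled replicator equation, the building block the abstract refers to, already exhibits convergence to an integral assignment. The fitness values below are illustrative; the paper's flow couples many such equations through geometric averaging over graph neighborhoods.

```python
def replicator(p, fitness, steps):
    """Discrete replicator update p_i <- p_i * f_i / <p, f>: probability
    mass flows toward higher-fitness labels, driving the assignment
    vector toward an integral (one-hot) labelling."""
    for _ in range(steps):
        avg = sum(pi * fi for pi, fi in zip(p, fitness))
        p = [pi * fi / avg for pi, fi in zip(p, fitness)]
    return p

p = replicator([1 / 3, 1 / 3, 1 / 3], fitness=[1.0, 2.0, 1.5], steps=60)
assert p[1] > 0.999   # the best-fitting label absorbs almost all the mass
```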
Regret Bounds for Discounted MDPs
Title | Regret Bounds for Discounted MDPs |
Authors | Shuang Liu, Hao Su |
Abstract | Recently, it has been shown that carefully designed reinforcement learning (RL) algorithms can achieve near-optimal regret in the episodic or the average-reward setting. However, in practice, RL algorithms are applied mostly to the infinite-horizon discounted-reward setting, so it is natural to ask what the lowest regret an algorithm can achieve is in this case, and how close to the optimal the regrets of existing RL algorithms are. In this paper, we prove a regret lower bound of $\Omega\left(\frac{\sqrt{SAT}}{1 - \gamma} - \frac{1}{(1 - \gamma)^2}\right)$ when $T\geq SA$ on any learning algorithm for infinite-horizon discounted Markov decision processes (MDP), where $S$ and $A$ are the numbers of states and actions, $T$ is the number of actions taken, and $\gamma$ is the discounting factor. We also show that a modified version of the double Q-learning algorithm gives a regret upper bound of $\tilde{O}\left(\frac{\sqrt{SAT}}{(1 - \gamma)^{2.5}}\right)$ when $T\geq SA$. Compared to our bounds, previous best lower and upper bounds both have worse dependencies on $T$ and $\gamma$, while our dependencies on $S, A, T$ are optimal. The proof of our upper bound is inspired by recent advances in the analysis of Q-learning in the episodic setting, but the cyclic nature of infinite-horizon MDPs poses many new challenges. |
Tasks | Q-Learning |
Published | 2020-02-12 |
URL | https://arxiv.org/abs/2002.05138v1 |
PDF | https://arxiv.org/pdf/2002.05138v1.pdf |
PWC | https://paperswithcode.com/paper/regret-bounds-for-discounted-mdps |
Repo | |
Framework | |
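The shape of the lower bound is easy to explore numerically. The hidden constant of the $\Omega(\cdot)$ is dropped below, so the numbers are indicative only.

```python
from math import sqrt

def regret_lower_bound(S, A, T, gamma):
    """The paper's lower bound sqrt(S*A*T)/(1-gamma) - 1/(1-gamma)^2,
    with the hidden Omega(.) constant dropped (valid regime: T >= S*A)."""
    return sqrt(S * A * T) / (1 - gamma) - 1 / (1 - gamma) ** 2

# the bound grows sharply as the effective horizon 1/(1-gamma) increases
assert regret_lower_bound(10, 5, 10**6, 0.99) > regret_lower_bound(10, 5, 10**6, 0.9)
```

The negative second term reflects the burn-in: for small T the first term alone would overstate the achievable regret.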
Streaming Submodular Maximization under a $k$-Set System Constraint
Title | Streaming Submodular Maximization under a $k$-Set System Constraint |
Authors | Ran Haba, Ehsan Kazemi, Moran Feldman, Amin Karbasi |
Abstract | In this paper, we propose a novel framework that converts streaming algorithms for monotone submodular maximization into streaming algorithms for non-monotone submodular maximization. This reduction readily leads to the currently tightest deterministic approximation ratio for submodular maximization subject to a $k$-matchoid constraint. Moreover, we propose the first streaming algorithm for monotone submodular maximization subject to $k$-extendible and $k$-set system constraints. Together with our proposed reduction, we obtain $O(k\log k)$ and $O(k^2\log k)$ approximation ratio for submodular maximization subject to the above constraints, respectively. We extensively evaluate the empirical performance of our algorithm against the existing work in a series of experiments including finding the maximum independent set in randomly generated graphs, maximizing linear functions over social networks, movie recommendation, Yelp location summarization, and Twitter data summarization. |
Tasks | Data Summarization |
Published | 2020-02-09 |
URL | https://arxiv.org/abs/2002.03352v1 |
PDF | https://arxiv.org/pdf/2002.03352v1.pdf |
PWC | https://paperswithcode.com/paper/streaming-submodular-maximization-under-a-k |
Repo | |
Framework | |
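A one-pass thresholding rule for the monotone case conveys the streaming flavour. This is a sketch for a simple cardinality constraint with a set-coverage objective; the paper handles the far more general $k$-set system constraints and the non-monotone case.

```python
def stream_greedy(stream, budget, tau):
    """One-pass thresholding for monotone submodular (set-coverage)
    maximisation under a cardinality constraint: keep an arriving set
    iff its marginal coverage gain is at least tau and budget remains."""
    chosen, covered = [], set()
    for name, elems in stream:
        gain = len(elems - covered)
        if gain >= tau and len(chosen) < budget:
            chosen.append(name)
            covered |= elems
    return chosen, covered

stream = [("s1", {1, 2, 3}), ("s2", {3, 4}), ("s3", {5, 6, 7}), ("s4", {1, 7})]
chosen, covered = stream_greedy(stream, budget=2, tau=2)
assert chosen == ["s1", "s3"] and covered == {1, 2, 3, 5, 6, 7}
```

Each element is inspected once and only the current solution is kept in memory, which is the defining constraint of the streaming model the paper works in.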
Non-linear Neurons with Human-like Apical Dendrite Activations
Title | Non-linear Neurons with Human-like Apical Dendrite Activations |
Authors | Mariana-Iuliana Georgescu, Radu Tudor Ionescu, Nicolae-Catalin Ristea, Nicu Sebe |
Abstract | In order to classify linearly non-separable data, neurons are typically organized into multi-layer neural networks that are equipped with at least one hidden layer. Inspired by some recent discoveries in neuroscience, we propose a new neuron model along with a novel activation function enabling the learning of non-linear decision boundaries using a single neuron. We show that a standard neuron followed by the novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy. Furthermore, we conduct experiments on three benchmark data sets from computer vision and natural language processing, i.e. Fashion-MNIST, UTKFace and MOROCO, showing that the ADA and the leaky ADA functions provide superior results to Rectified Linear Units (ReLU) and leaky ReLU, for various neural network architectures, e.g. one-hidden-layer or two-hidden-layer multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs) such as LeNet, VGG, ResNet and Character-level CNN. We also obtain further improvements when we replace the standard model of the neuron with our pyramidal neuron with apical dendrite activations (PyNADA). |
Tasks | |
Published | 2020-02-02 |
URL | https://arxiv.org/abs/2003.03229v2 |
PDF | https://arxiv.org/pdf/2003.03229v2.pdf |
PWC | https://paperswithcode.com/paper/non-linear-neurons-with-human-like-apical |
Repo | |
Framework | |
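The single-neuron XOR claim can be reproduced with any non-monotonic activation. The bump function below is a hypothetical stand-in, not the paper's actual ADA formula, but it demonstrates the principle: one linear unit plus a non-monotonic non-linearity separates the XOR classes.

```python
def bump(z, width=0.5):
    """A non-monotonic 'bump' activation that fires only for
    pre-activations near 1 (a hypothetical stand-in for ADA)."""
    return 1 if abs(z - 1.0) < width else 0

def neuron(x1, x2):
    # a single linear unit with weights (1, 1) and bias 0, then the bump
    return bump(x1 + x2)

# XOR from one neuron, which no monotonic activation can achieve
assert [neuron(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```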
Finite-Sample Analysis of Stochastic Approximation Using Smooth Convex Envelopes
Title | Finite-Sample Analysis of Stochastic Approximation Using Smooth Convex Envelopes |
Authors | Zaiwei Chen, Siva Theja Maguluri, Sanjay Shakkottai, Karthikeyan Shanmugam |
Abstract | Stochastic Approximation (SA) is a popular approach for solving fixed-point equations where the information is corrupted by noise. In this paper, we consider an SA involving a contraction mapping with respect to an arbitrary norm, and establish its finite-sample bound for either constant or diminishing step sizes. The idea is to construct a smooth Lyapunov function using the generalized Moreau envelope, and show that the iterates of SA are contracting in expectation with respect to that Lyapunov function. The result is applicable to various Reinforcement Learning (RL) algorithms. In particular, we use it to establish the first-known convergence rate of the V-trace algorithm for off-policy TD-learning [15]. Importantly, our construction results in only a logarithmic dependence of the convergence bound on the state-space dimension. |
Tasks | Q-Learning |
Published | 2020-02-03 |
URL | https://arxiv.org/abs/2002.00874v2 |
PDF | https://arxiv.org/pdf/2002.00874v2.pdf |
PWC | https://paperswithcode.com/paper/finite-sample-analysis-of-stochastic |
Repo | |
Framework | |
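The object of study, an SA iteration for a noisy fixed-point equation with a contraction mapping, can be simulated directly. The contraction and noise model below are illustrative (and seeded for reproducibility); the paper's contribution is the finite-sample analysis of such iterations via the generalized Moreau envelope, not the iteration itself.

```python
import random

def stochastic_approx(F, x0, steps, alpha, rng):
    """SA iteration x <- x + alpha*(F(x) - x + noise) for the fixed-point
    equation F(x) = x, observed only through zero-mean noise."""
    x = x0
    for _ in range(steps):
        x = x + alpha * (F(x) - x + rng.uniform(-0.1, 0.1))
    return x

def F(x):
    return 0.5 * x + 1.0   # a 0.5-contraction with fixed point x* = 2

x = stochastic_approx(F, x0=0.0, steps=5000, alpha=0.05, rng=random.Random(0))
assert abs(x - 2.0) < 0.1   # the iterate settles near the fixed point
```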