July 28, 2019

3000 words · 15 min read

Paper Group ANR 285

Optimal Resource Allocation in Distributed Broadband Wireless Communication Systems

Title Optimal Resource Allocation in Distributed Broadband Wireless Communication Systems
Authors Yao Yao, Mustafa Mehmet Ali, Shahin Vakilinia
Abstract This paper is concerned with optimization of distributed broadband wireless communication (BWC) systems. BWC systems contain a distributed antenna system (DAS) connected to a base station with optical fiber. Distributed BWC systems have been proposed as a solution to the power constraint problem in traditional cellular networks. So far, research on BWC systems has advanced on two separate tracks: the design of the system to meet quality of service (QoS) requirements, and optimization of the location of the DAS. In this paper, we consider a combined optimization of BWC systems. We consider uplink communications in distributed BWC systems with multiple levels of priority traffic, with arrivals and departures forming renewal processes. We develop an analysis that determines the packet delay violation probability for each priority level as a function of the outage probability of the DAS, through the application of results from renewal theory. Then, we determine the optimal locations of the antennas that minimize the antenna outage probability. We also study the tradeoff between the packet delay violation probability and the packet loss probability. This work will be helpful in the design of distributed BWC systems.
Tasks
Published 2017-10-24
URL http://arxiv.org/abs/1710.11454v1
PDF http://arxiv.org/pdf/1710.11454v1.pdf
PWC https://paperswithcode.com/paper/optimal-resource-allocation-in-distributed
Repo
Framework
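
Not from the paper, but as a rough illustration of the placement half of the problem: a Lloyd-style sketch that positions DAS antennas over a user distribution and scores the layout with a toy distance-based outage proxy. The threshold, the user model, and the proxy itself are assumptions; the paper's actual outage probability comes from its channel and renewal-theory analysis.

```python
import numpy as np

def outage_proxy(antennas, users, threshold=0.35):
    """Toy stand-in for antenna outage: fraction of users whose nearest antenna is
    farther than a coverage threshold. Illustrative only; the paper derives the real
    outage probability from the channel model."""
    d = np.linalg.norm(users[:, None, :] - antennas[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) > threshold))

def place_antennas(users, n_antennas=4, n_iters=50):
    """Lloyd-style (k-means) placement of the DAS antennas over the user locations."""
    rng = np.random.default_rng(0)
    antennas = users[rng.choice(len(users), size=n_antennas, replace=False)].copy()
    for _ in range(n_iters):
        nearest = np.linalg.norm(users[:, None, :] - antennas[None, :, :], axis=2).argmin(axis=1)
        for k in range(n_antennas):
            if np.any(nearest == k):
                antennas[k] = users[nearest == k].mean(axis=0)
    return antennas

users = np.random.default_rng(1).random((300, 2))   # user locations in a unit cell
antennas = place_antennas(users)
print(outage_proxy(antennas, users))
```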

Projection Free Rank-Drop Steps

Title Projection Free Rank-Drop Steps
Authors Edward Cheung, Yuying Li
Abstract The Frank-Wolfe (FW) algorithm has been widely used in solving nuclear norm constrained problems, since it does not require projections. However, FW often yields high-rank intermediate iterates, which can be very expensive in time and space for large problems. To address this issue, we propose a rank-drop method for nuclear norm constrained problems. The goal is to generate descent steps that lead to rank decreases, maintaining low-rank solutions throughout the algorithm. Moreover, the optimization problems are constrained to ensure that the rank-drop step is also feasible and can be readily incorporated into a projection-free minimization method, e.g., Frank-Wolfe. We demonstrate that by incorporating rank-drop steps into the Frank-Wolfe algorithm, the rank of the solution is greatly reduced compared to the original Frank-Wolfe or its common variants.
Tasks
Published 2017-04-13
URL http://arxiv.org/abs/1704.04285v2
PDF http://arxiv.org/pdf/1704.04285v2.pdf
PWC https://paperswithcode.com/paper/projection-free-rank-drop-steps
Repo
Framework
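
A minimal vanilla Frank-Wolfe loop over the nuclear-norm ball, as a baseline for the rank-drop steps described above. The rank-1 linear minimization oracle is the standard one; the paper's rank-drop step itself is not reproduced here, and the matrix-completion objective is just an illustrative choice.

```python
import numpy as np

def frank_wolfe_nuclear(grad, X0, tau, n_iters=100):
    """Vanilla Frank-Wolfe over the nuclear-norm ball ||X||_* <= tau.
    `grad` returns the gradient of the smooth objective at X."""
    X = X0.copy()
    for t in range(n_iters):
        G = grad(X)
        # Linear minimization oracle: rank-1 atom from the top singular pair of G.
        U, _, Vt = np.linalg.svd(G, full_matrices=False)
        S = -tau * np.outer(U[:, 0], Vt[0, :])
        gamma = 2.0 / (t + 2.0)            # standard FW step size
        X = (1.0 - gamma) * X + gamma * S  # note how iterates accumulate rank over time
    return X

# Example: matrix-completion surrogate f(X) = 0.5 * ||mask * (X - M)||_F^2
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 15))
mask = rng.random((20, 15)) < 0.3
grad = lambda X: mask * (X - M)
X_hat = frank_wolfe_nuclear(grad, np.zeros_like(M), tau=5.0)
print(np.linalg.matrix_rank(X_hat, tol=1e-3))
```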

Modeling the dynamics of domain specific terminology in diachronic corpora

Title Modeling the dynamics of domain specific terminology in diachronic corpora
Authors Gerhard Heyer, Cathleen Kantner, Andreas Niekler, Max Overbeck, Gregor Wiedemann
Abstract In terminology work, natural language processing, and digital humanities, several studies address the analysis of variations in context and meaning of terms in order to detect semantic change and the evolution of terms. We distinguish three different approaches to describe contextual variations: methods based on the analysis of patterns and linguistic clues, methods exploring the latent semantic space of single words, and methods for the analysis of topic membership. The paper presents the notion of context volatility as a new measure for detecting semantic change and applies it to key term extraction in a political science case study. The measure quantifies the dynamics of a term’s contextual variation within a diachronic corpus to identify periods of time that are characterised by intense controversial debates or substantial semantic transformations.
Tasks
Published 2017-07-11
URL http://arxiv.org/abs/1707.03255v1
PDF http://arxiv.org/pdf/1707.03255v1.pdf
PWC https://paperswithcode.com/paper/modeling-the-dynamics-of-domain-specific
Repo
Framework
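
The abstract does not give the context-volatility formula, so the sketch below uses an assumed proxy: the mean cosine distance between a term's co-occurrence profiles in consecutive time slices. Only the general idea (quantifying a term's contextual change across a diachronic corpus) is taken from the paper.

```python
import numpy as np
from collections import Counter

def cooccurrence_profile(docs, term, vocab, window=5):
    """Count words co-occurring with `term` within a +/- `window` token window."""
    counts = Counter()
    for doc in docs:
        tokens = doc.lower().split()
        for i, tok in enumerate(tokens):
            if tok == term:
                for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                    if j != i:
                        counts[tokens[j]] += 1
    vec = np.array([counts[w] for w in vocab], dtype=float)
    return vec / vec.sum() if vec.sum() > 0 else vec

def context_volatility(slices, term, vocab):
    """Mean cosine distance between co-occurrence profiles of consecutive time slices."""
    profiles = [cooccurrence_profile(docs, term, vocab) for docs in slices]
    dists = []
    for p, q in zip(profiles, profiles[1:]):
        denom = np.linalg.norm(p) * np.linalg.norm(q)
        dists.append(1.0 - float(p @ q) / denom if denom > 0 else 0.0)
    return float(np.mean(dists)) if dists else 0.0

# Toy diachronic corpus: two time slices, a few documents each.
slices = [["the bailout plan was praised", "bailout funds approved"],
          ["the bailout sparked protests", "critics attacked the bailout deal"]]
vocab = sorted({w for docs in slices for doc in docs for w in doc.split()})
print(context_volatility(slices, "bailout", vocab))
```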

Note on Evolution and Forecasting of Requirements: Communications Example

Title Note on Evolution and Forecasting of Requirements: Communications Example
Authors Mark Sh. Levin
Abstract Combinatorial evolution and forecasting of system requirements is examined. The morphological model is used for a hierarchical requirements system (i.e., system parts, design alternatives for the system parts, ordinal estimates for the alternatives). A set of system changes involves changes of the system structure, component alternatives and their estimates. The composition process of the forecast is based on combinatorial synthesis (knapsack problem, multiple choice problem, hierarchical morphological design). An illustrative numerical example for four-phase evolution and forecasting of requirements to communications is described.
Tasks
Published 2017-05-22
URL http://arxiv.org/abs/1705.07558v1
PDF http://arxiv.org/pdf/1705.07558v1.pdf
PWC https://paperswithcode.com/paper/note-on-evolution-and-forecasting-of
Repo
Framework
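
The composition step above is described as knapsack-style combinatorial synthesis. A minimal multiple-choice knapsack DP (one design alternative per system part, total estimate maximized under a budget) sketches that ingredient; the cost and estimate numbers are made up.

```python
def multiple_choice_knapsack(groups, budget):
    """Pick exactly one (cost, value) alternative from each group so total cost <= budget,
    maximizing total value. groups = [[(cost, value), ...], ...]."""
    NEG = float("-inf")
    dp = [NEG] * (budget + 1)   # dp[c] = best value with total cost exactly c
    dp[0] = 0.0
    for group in groups:
        new_dp = [NEG] * (budget + 1)
        for c in range(budget + 1):
            if dp[c] == NEG:
                continue
            for cost, value in group:
                if c + cost <= budget and dp[c] + value > new_dp[c + cost]:
                    new_dp[c + cost] = dp[c] + value
        dp = new_dp
    return max(dp)

# Three system parts, each with design alternatives given as (cost, ordinal estimate).
parts = [[(2, 3), (4, 5)], [(1, 2), (3, 4)], [(2, 2), (5, 6)]]
print(multiple_choice_knapsack(parts, budget=8))   # best total estimate under the budget
```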

Predictive Liability Models and Visualizations of High Dimensional Retail Employee Data

Title Predictive Liability Models and Visualizations of High Dimensional Retail Employee Data
Authors Richard R. Yang, Mike Borowczak
Abstract Employee theft and dishonesty is a major contributor to loss in the retail industry. Retailers have reported the need for more automated analytic tools to assess the liability of their employees. In this work, we train and optimize several machine learning models for regression prediction and analysis on this data, which will help retailers identify and manage risky employees. Since the data we use is very high dimensional, we use feature selection techniques to identify the factors that contribute most to an employee’s assessed risk. We also use dimension reduction and data embedding techniques to present this dataset in an easy-to-interpret format.
Tasks Dimensionality Reduction, Feature Selection
Published 2017-07-14
URL http://arxiv.org/abs/1707.04639v3
PDF http://arxiv.org/pdf/1707.04639v3.pdf
PWC https://paperswithcode.com/paper/predictive-liability-models-and
Repo
Framework
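
A hedged end-to-end sketch of the pipeline the abstract describes, on synthetic stand-in data (the employee dataset is not public): univariate feature selection, a regression model, and a 2-D embedding for visualization. The specific estimators are assumptions, not necessarily the paper's choices.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

# Hypothetical stand-in for the high-dimensional employee data.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 200))                                      # 500 employees, 200 features
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(500)   # assessed risk score

# Feature selection: keep the k features most associated with the risk score.
selector = SelectKBest(score_func=f_regression, k=20).fit(X, y)
X_sel = selector.transform(X)

# Regression model on the reduced feature set.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_sel, y)

# Dimension reduction / embedding for visualization.
X_2d = PCA(n_components=2).fit_transform(X_sel)
print(model.score(X_sel, y), X_2d.shape)
```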

Learning to Label Affordances from Simulated and Real Data

Title Learning to Label Affordances from Simulated and Real Data
Authors Timo Lüddecke, Florentin Wörgötter
Abstract An autonomous robot should be able to evaluate the affordances that are offered by a given situation. Here we address this problem by designing a system that can densely predict affordances given only a single 2D RGB image. This is achieved with a convolutional neural network (ResNet), which we combine with refinement modules recently proposed for addressing semantic image segmentation. We define a novel cost function, which is able to handle (potentially multiple) affordances of objects and their parts in a pixel-wise manner even in the case of incomplete data. We perform qualitative as well as quantitative evaluations with simulated and real data assessing 15 different affordances. In general, we find that affordances that are well enough represented in the training data are correctly recognized with a substantial fraction of correctly assigned pixels. Furthermore, we show that our model outperforms several baselines. Hence, this method can give clear action guidelines for a robot.
Tasks Semantic Segmentation
Published 2017-09-26
URL http://arxiv.org/abs/1709.08872v1
PDF http://arxiv.org/pdf/1709.08872v1.pdf
PWC https://paperswithcode.com/paper/learning-to-label-affordances-from-simulated
Repo
Framework
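
One plausible reading of the "pixel-wise cost function that handles multiple affordances and incomplete data" is a masked multi-label binary cross-entropy; the sketch below implements that reading in PyTorch. The paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def masked_multilabel_affordance_loss(logits, targets, valid_mask):
    """Pixel-wise multi-label loss sketch: each of the A affordance channels is a
    separate binary decision, and unlabeled pixels (valid_mask == 0) are ignored.
    logits, targets, valid_mask: tensors of shape (B, A, H, W)."""
    per_pixel = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    per_pixel = per_pixel * valid_mask                       # drop incomplete annotations
    return per_pixel.sum() / valid_mask.sum().clamp(min=1.0)

# Toy shapes: batch of 2, 15 affordances, 64x64 predictions.
logits = torch.randn(2, 15, 64, 64)
targets = torch.randint(0, 2, (2, 15, 64, 64)).float()
valid = torch.ones_like(targets)
print(masked_multilabel_affordance_loss(logits, targets, valid))
```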

LADAR-Based Mover Detection from Moving Vehicles

Title LADAR-Based Mover Detection from Moving Vehicles
Authors Daniel D. Morris, Brian Colonna, Paul Haley
Abstract Detecting moving vehicles and people is crucial for safe operation of UGVs but is challenging in cluttered, real world environments. We propose a registration technique that enables objects to be robustly matched and tracked, and hence movers to be detected even in high clutter. Range data are acquired using a 2D scanning LADAR from a moving platform. These are automatically clustered into objects and modeled using a surface density function. A Bhattacharyya similarity is optimized to register subsequent views of each object, enabling good discrimination and tracking, and hence mover detection.
Tasks
Published 2017-09-25
URL http://arxiv.org/abs/1709.08515v1
PDF http://arxiv.org/pdf/1709.08515v1.pdf
PWC https://paperswithcode.com/paper/ladar-based-mover-detection-from-moving
Repo
Framework
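
A small sketch of the matching ingredient: the Bhattacharyya distance between Gaussian fits of two point clusters, which could serve as the similarity optimized when registering successive views of an object. The paper's surface density model and its registration optimization are not reproduced here.

```python
import numpy as np

def bhattacharyya_gaussian(points_a, points_b):
    """Bhattacharyya distance between Gaussian fits of two 2D point clusters."""
    mu_a, mu_b = points_a.mean(axis=0), points_b.mean(axis=0)
    cov_a = np.cov(points_a, rowvar=False)
    cov_b = np.cov(points_b, rowvar=False)
    cov = 0.5 * (cov_a + cov_b)
    diff = mu_a - mu_b
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov_a) * np.linalg.det(cov_b)))
    return term1 + term2

# Two scans of the same object: a small distance means a good view-to-view match.
rng = np.random.default_rng(0)
scan_t0 = rng.standard_normal((200, 2))
scan_t1 = rng.standard_normal((200, 2)) + np.array([0.3, 0.1])   # slight motion
print(bhattacharyya_gaussian(scan_t0, scan_t1))
```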

A Pitfall of Unsupervised Pre-Training

Title A Pitfall of Unsupervised Pre-Training
Authors Michele Alberti, Mathias Seuret, Rolf Ingold, Marcus Liwicki
Abstract The point of this paper is to question typical assumptions in deep learning and suggest alternatives. A particular contribution is to prove that even if a Stacked Convolutional Auto-Encoder is good at reconstructing pictures, it is not necessarily good at discriminating their classes. When using Auto-Encoders, one intuitively assumes that features which are good for reconstruction will also lead to high classification accuracy. Indeed, this became research practice and is a strategy suggested by introductory books. However, we prove that this is not always the case. We thoroughly investigate the quality of features produced by Stacked Convolutional Auto-Encoders when trained to reconstruct their input. In particular, we analyze the relation between the reconstruction and classification capabilities of the network, if we were to use the same features for both tasks. Experimental results suggest that, in fact, there is no correlation between the reconstruction score and the quality of features for a classification task. This means, more formally, that the sub-dimensional representation space learned by the Stacked Convolutional Auto-Encoder (while being trained for input reconstruction) is not necessarily more separable than the initial input space. Furthermore, we show that the reconstruction error is not a good metric to assess the quality of features, because it is biased by the decoder quality. We do not question the usefulness of pre-training, but we conclude that aiming for the lowest reconstruction error is not necessarily a good idea if a classification task is performed afterwards.
Tasks
Published 2017-03-13
URL http://arxiv.org/abs/1703.04332v4
PDF http://arxiv.org/pdf/1703.04332v4.pdf
PWC https://paperswithcode.com/paper/a-pitfall-of-unsupervised-pre-training-1
Repo
Framework
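
The paper's argument amounts to measuring reconstruction error and downstream classification quality separately. The sketch below shows one way to report both for a given encoder, using a linear probe; the toy encoder, "reconstruction", and data are placeholders, not the paper's SCAE setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def evaluate_features(encode, X, y, X_recon):
    """Report both the reconstruction error and a linear-probe accuracy on the encoded
    features; the point is that the first does not have to predict the second."""
    recon_mse = float(np.mean((X - X_recon) ** 2))
    Z = encode(X)
    Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
    return recon_mse, probe.score(Z_te, y_te)

# Toy stand-ins: a random linear "encoder" and a perfect "reconstruction",
# just to show the evaluation protocol.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 50))
y = (X[:, 0] > 0).astype(int)
W = rng.standard_normal((50, 10))
print(evaluate_features(lambda A: A @ W, X, y, X_recon=X))
```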

Large Scale Graph Learning from Smooth Signals

Title Large Scale Graph Learning from Smooth Signals
Authors Vassilis Kalofolias, Nathanaël Perraudin
Abstract Graphs are a prevalent tool in data science, as they model the inherent structure of the data. They have been used successfully in unsupervised and semi-supervised learning. Typically they are constructed either by connecting nearest samples, or by learning them from data, solving an optimization problem. While graph learning does achieve a better quality, it also comes with a higher computational cost. In particular, the current state-of-the-art model cost is $\mathcal{O}(n^2)$ for $n$ samples. In this paper, we show how to scale it, obtaining an approximation with leading cost of $\mathcal{O}(n\log(n))$, with quality that approaches the exact graph learning model. Our algorithm uses known approximate nearest neighbor techniques to reduce the number of variables, and automatically selects the correct parameters of the model, requiring a single intuitive input: the desired edge density.
Tasks
Published 2017-10-16
URL http://arxiv.org/abs/1710.05654v2
PDF http://arxiv.org/pdf/1710.05654v2.pdf
PWC https://paperswithcode.com/paper/large-scale-graph-learning-from-smooth
Repo
Framework
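
The scaling trick is to restrict the learned graph's support to approximate nearest-neighbor candidate edges. Below is a sketch of building that candidate graph, with simple Gaussian weights standing in for the learned ones; the actual model solves a smoothness-based optimization over this support and selects its parameters automatically.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_candidate_graph(X, k=10):
    """Restrict the graph to a k-NN candidate edge set (the key to O(n log n) scaling)
    and place Gaussian weights on those edges."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, idx = nn.kneighbors(X)          # first neighbor is the point itself
    sigma = np.mean(dists[:, 1:])
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for d, j in zip(dists[i, 1:], idx[i, 1:]):
            w = np.exp(-(d ** 2) / (sigma ** 2))
            W[i, j] = max(W[i, j], w)
            W[j, i] = W[i, j]              # symmetrize
    return W

X = np.random.default_rng(0).standard_normal((500, 20))
W = knn_candidate_graph(X, k=10)
print(W.shape, int((W > 0).sum()))
```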

Symbol Grounding via Chaining of Morphisms

Title Symbol Grounding via Chaining of Morphisms
Authors Ruiting Lian, Ben Goertzel, Linas Vepstas, David Hanson, Changle Zhou
Abstract A new model of symbol grounding is presented, in which the structures of natural language, logical semantics, perception and action are represented categorically, and symbol grounding is modeled via the composition of morphisms between the relevant categories. This model gives conceptual insight into the fundamentally systematic nature of symbol grounding, and also connects naturally to practical real-world AI systems in current research and commercial use. Specifically, it is argued that the structure of linguistic syntax can be modeled as a certain asymmetric monoidal category, as e.g. implicit in the link grammar formalism; the structure of spatiotemporal relationships and action plans can be modeled similarly using “image grammars” and “action grammars”; and common-sense logical semantic structure can be modeled using dependently-typed lambda calculus with uncertain truth values. Given these formalisms, the grounding of linguistic descriptions in spatiotemporal perceptions and coordinated actions consists of following morphisms from language to logic through to spacetime and body (for comprehension), and vice versa (for generation). The mapping is indicated between the spatial relationships in the Region Connection Calculus and Allen Interval Algebra and corresponding entries in the link grammar syntax parsing dictionary. Further, the abstractions introduced here are shown to naturally model the structures and systems currently being deployed in the context of using the OpenCog cognitive architecture to control Hanson Robotics humanoid robots.
Tasks Common Sense Reasoning
Published 2017-03-13
URL http://arxiv.org/abs/1703.04368v1
PDF http://arxiv.org/pdf/1703.04368v1.pdf
PWC https://paperswithcode.com/paper/symbol-grounding-via-chaining-of-morphisms
Repo
Framework
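
As a very loose illustration of "grounding as composition of morphisms", the toy sketch below composes two placeholder maps, language → logical form → scene description. The maps themselves are invented; only the compositional structure reflects the paper.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(g: Callable[[B], C], f: Callable[[A], B]) -> Callable[[A], C]:
    """Morphism composition: comprehension follows language -> logic -> scene."""
    return lambda x: g(f(x))

# Hypothetical toy morphisms standing in for the paper's categorical structures.
parse = lambda sentence: {"pred": "left_of", "args": sentence.split(" is left of ")}
ground = lambda form: ("place", form["args"][0], "west_of", form["args"][1])

comprehend = compose(ground, parse)
print(comprehend("cup is left of plate"))
```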

Neural Lattice-to-Sequence Models for Uncertain Inputs

Title Neural Lattice-to-Sequence Models for Uncertain Inputs
Authors Matthias Sperber, Graham Neubig, Jan Niehues, Alex Waibel
Abstract The input to a neural sequence-to-sequence model is often determined by an upstream system, e.g. a word segmenter, part-of-speech tagger, or speech recognizer. These upstream models are potentially error-prone. Representing inputs through word lattices allows making this uncertainty explicit by capturing alternative sequences and their posterior probabilities in a compact form. In this work, we extend the TreeLSTM (Tai et al., 2015) into a LatticeLSTM that is able to consume word lattices, and can be used as an encoder in an attentional encoder-decoder model. We integrate lattice posterior scores into this architecture by extending the TreeLSTM’s child-sum and forget gates and introducing a bias term into the attention mechanism. We experiment with speech translation lattices and report consistent improvements over baselines that translate either the 1-best hypothesis or the lattice without posterior scores.
Tasks
Published 2017-04-03
URL http://arxiv.org/abs/1704.00559v2
PDF http://arxiv.org/pdf/1704.00559v2.pdf
PWC https://paperswithcode.com/paper/neural-lattice-to-sequence-models-for
Repo
Framework
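
A rough sketch of one child-sum TreeLSTM step extended with lattice posterior scores, which is one way to read the abstract's description; the paper's exact gating and bias terms may differ, and the shapes and parameter layout here are assumptions.

```python
import torch

def lattice_child_sum_step(x, child_h, child_c, child_post, params):
    """One child-sum TreeLSTM step with children weighted by lattice posterior scores.
    x: (d_in,), child_h/child_c: (K, d), child_post: (K,) posteriors."""
    W, U, b = params                      # W: (4d, d_in), U: (4d, d), b: (4d,)
    d = child_h.shape[1]
    h_tilde = (child_post.unsqueeze(1) * child_h).sum(dim=0)   # posterior-weighted child sum
    gates = W @ x + U @ h_tilde + b
    i = torch.sigmoid(gates[:d])
    o = torch.sigmoid(gates[d:2 * d])
    u = torch.tanh(gates[2 * d:3 * d])
    # One forget gate per child, also scaled by that child's posterior score.
    f = torch.sigmoid((W[3 * d:] @ x + b[3 * d:]).unsqueeze(0) + child_h @ U[3 * d:].T)
    c = i * u + (child_post.unsqueeze(1) * f * child_c).sum(dim=0)
    return o * torch.tanh(c), c

d_in, d, K = 8, 16, 3
params = (torch.randn(4 * d, d_in), torch.randn(4 * d, d), torch.zeros(4 * d))
h, c = lattice_child_sum_step(torch.randn(d_in), torch.randn(K, d), torch.randn(K, d),
                              torch.tensor([0.6, 0.3, 0.1]), params)
print(h.shape)
```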

Sim2Real View Invariant Visual Servoing by Recurrent Control

Title Sim2Real View Invariant Visual Servoing by Recurrent Control
Authors Fereshteh Sadeghi, Alexander Toshev, Eric Jang, Sergey Levine
Abstract Humans are remarkably proficient at controlling their limbs and tools from a wide range of viewpoints and angles, even in the presence of optical distortions. In robotics, this ability is referred to as visual servoing: moving a tool or end-point to a desired location using primarily visual feedback. In this paper, we study how viewpoint-invariant visual servoing skills can be learned automatically in a robotic manipulation scenario. To this end, we train a deep recurrent controller that can automatically determine which actions move the end-point of a robotic arm to a desired object. The problem that must be solved by this controller is fundamentally ambiguous: under severe variation in viewpoint, it may be impossible to determine the actions in a single feedforward operation. Instead, our visual servoing system must use its memory of past movements to understand how the actions affect the robot motion from the current viewpoint, correcting mistakes and gradually moving closer to the target. This ability is in stark contrast to most visual servoing methods, which either assume known dynamics or require a calibration phase. We show how we can learn this recurrent controller using simulated data and a reinforcement learning objective. We then describe how the resulting model can be transferred to a real-world robot by disentangling perception from control and only adapting the visual layers. The adapted model can servo to previously unseen objects from novel viewpoints on a real-world Kuka IIWA robotic arm. For supplementary videos, see: https://fsadeghi.github.io/Sim2RealViewInvariantServo
Tasks Calibration
Published 2017-12-20
URL http://arxiv.org/abs/1712.07642v1
PDF http://arxiv.org/pdf/1712.07642v1.pdf
PWC https://paperswithcode.com/paper/sim2real-view-invariant-visual-servoing-by
Repo
Framework
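
A minimal recurrent controller in the spirit of the abstract: an LSTM cell integrates encoded observations of the current view and the query object so that viewpoint ambiguity can be resolved over time. The encoder, action space, and sizes are placeholders; the paper's architecture, simulation setup, and reinforcement-learning objective are not reproduced.

```python
import torch
import torch.nn as nn

class RecurrentServoPolicy(nn.Module):
    """Sketch of a recurrent visual-servoing controller: memory of past steps lets the
    policy infer how its actions move the arm under an unknown viewpoint."""
    def __init__(self, feat_dim=128, hidden=256, n_actions=7):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                     nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU())
        self.rnn = nn.LSTMCell(2 * feat_dim, hidden)   # current view + query image
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs, query, state=None):
        z = torch.cat([self.encoder(obs), self.encoder(query)], dim=1)
        h, c = self.rnn(z, state)
        return self.head(h), (h, c)

policy = RecurrentServoPolicy()
obs = torch.randn(1, 3, 64, 64)
logits, state = policy(obs, obs)   # carry `state` across time steps when servoing
print(logits.shape)
```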

Optimization and Testing in Linear Non-Gaussian Component Analysis

Title Optimization and Testing in Linear Non-Gaussian Component Analysis
Authors Ze Jin, Benjamin B. Risk, David S. Matteson
Abstract Independent component analysis (ICA) decomposes multivariate data into mutually independent components (ICs). The ICA model is subject to a constraint that at most one of these components is Gaussian, which is required for model identifiability. Linear non-Gaussian component analysis (LNGCA) generalizes the ICA model to a linear latent factor model with any number of both non-Gaussian components (signals) and Gaussian components (noise), where observations are linear combinations of independent components. Although the individual Gaussian components are not identifiable, the Gaussian subspace is identifiable. We introduce an estimator along with its optimization approach in which non-Gaussian and Gaussian components are estimated simultaneously, maximizing the discrepancy of each non-Gaussian component from Gaussianity while minimizing the discrepancy of each Gaussian component from Gaussianity. When the number of non-Gaussian components is unknown, we develop a statistical test to determine it based on resampling and the discrepancy of estimated components. Through a variety of simulation studies, we demonstrate the improvements of our estimator over competing estimators, and we illustrate the effectiveness of the test to determine the number of non-Gaussian components. Further, we apply our method to real data examples and demonstrate its practical value.
Tasks
Published 2017-12-23
URL http://arxiv.org/abs/1712.08837v2
PDF http://arxiv.org/pdf/1712.08837v2.pdf
PWC https://paperswithcode.com/paper/optimization-and-testing-in-linear-non
Repo
Framework
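
One common way to score "discrepancy from Gaussianity" is the FastICA-style negentropy approximation with G(u) = log cosh(u); the sketch below computes it for a single component. Whether LNGCA uses this particular contrast is an assumption, but it illustrates the quantity being maximized for signals and minimized for noise.

```python
import numpy as np

def negentropy_proxy(y, n_gauss_samples=100000, seed=0):
    """Non-Gaussianity proxy J(y) ~ (E[G(y)] - E[G(nu)])^2 with G(u) = log cosh(u),
    where nu is standard normal; near zero for Gaussian components."""
    rng = np.random.default_rng(seed)
    y = (y - y.mean()) / y.std()
    g_y = np.mean(np.log(np.cosh(y)))
    g_nu = np.mean(np.log(np.cosh(rng.standard_normal(n_gauss_samples))))
    return (g_y - g_nu) ** 2

rng = np.random.default_rng(1)
print(negentropy_proxy(rng.standard_normal(5000)))   # ~0 for a Gaussian (noise) component
print(negentropy_proxy(rng.laplace(size=5000)))      # larger for a non-Gaussian (signal) component
```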

Extrinsic Calibration of 3D Range Finder and Camera without Auxiliary Object or Human Intervention

Title Extrinsic Calibration of 3D Range Finder and Camera without Auxiliary Object or Human Intervention
Authors Qinghai Liao, Ming Liu, Lei Tai, Haoyang Ye
Abstract Fusion of heterogeneous exteroceptive sensors is the most efficient and effective way to represent the environment precisely, as it overcomes various defects of each homogeneous sensor. The rigid transformation (a.k.a. extrinsic parameters) of heterogeneous sensory systems should be available before precisely fusing the multisensor information. Researchers have proposed several approaches to estimating the extrinsic parameters. These approaches require either auxiliary objects, like chessboards, or extra help from humans to select correspondences. In this paper, we propose a novel approach for the extrinsic calibration of range and image sensors. As far as we know, it is the first automatic approach that requires no auxiliary objects or human intervention. First, we estimate the initial extrinsic parameters from the individual motion of the range finder and the camera. Then we extract lines in the image and point-cloud pairs, and refine the line feature associations using the initial extrinsic parameters. Finally, we discuss the degenerate case that may lead to algorithm failure and validate our approach by simulation. The results indicate high-precision extrinsic calibration against the ground truth.
Tasks Calibration
Published 2017-03-02
URL http://arxiv.org/abs/1703.04391v1
PDF http://arxiv.org/pdf/1703.04391v1.pdf
PWC https://paperswithcode.com/paper/extrinsic-calibration-of-3d-range-finder-and
Repo
Framework
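
The initialization-from-motion idea can be sketched as the rotation part of hand-eye calibration: per-segment rotation axes seen by the camera and by the range finder are related by the unknown extrinsic rotation, which a Kabsch/SVD alignment recovers. This is only that one ingredient under assumed noise-free correspondences; the paper's line-feature refinement and degeneracy analysis are not shown.

```python
import numpy as np

def rotation_from_motion_axes(cam_axes, lidar_axes):
    """Initial extrinsic rotation from per-sensor motion: each motion segment gives
    axes with a_cam = R * a_lidar, so R follows from aligning the two axis sets."""
    H = lidar_axes.T @ cam_axes
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # enforce det(R) = +1
    return Vt.T @ D @ U.T

# Synthetic check: random ground-truth rotation, noisy axis observations.
rng = np.random.default_rng(0)
axes_l = rng.standard_normal((20, 3))
axes_l /= np.linalg.norm(axes_l, axis=1, keepdims=True)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true = Q if np.linalg.det(Q) > 0 else -Q
axes_c = axes_l @ R_true.T + 0.01 * rng.standard_normal((20, 3))
print(np.allclose(rotation_from_motion_axes(axes_c, axes_l), R_true, atol=0.05))
```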

Neural Collaborative Autoencoder

Title Neural Collaborative Autoencoder
Authors Qibing Li, Xiaolin Zheng, Xinyue Wu
Abstract In recent years, deep neural networks have yielded state-of-the-art performance on several tasks. Although some recent works have focused on combining deep learning with recommendation, we highlight three issues of existing models. First, these models cannot work on both explicit and implicit feedback, since the network structures are specially designed for one particular case. Second, due to the difficulty of training deep neural networks, existing explicit models do not fully exploit the expressive potential of deep learning. Third, neural network models are more prone to overfitting in the implicit setting than shallow models. To tackle these issues, we present a generic recommender framework called Neural Collaborative Autoencoder (NCAE) to perform collaborative filtering, which works well for both explicit feedback and implicit feedback. NCAE can effectively capture the subtle hidden relationships between interactions via a non-linear matrix factorization process. To optimize the deep architecture of NCAE, we develop a three-stage pre-training mechanism that combines supervised and unsupervised feature learning. Moreover, to prevent overfitting in the implicit setting, we propose an error reweighting module and a sparsity-aware data-augmentation strategy. Extensive experiments on three real-world datasets demonstrate that NCAE can significantly advance the state-of-the-art.
Tasks Data Augmentation
Published 2017-12-25
URL http://arxiv.org/abs/1712.09043v3
PDF http://arxiv.org/pdf/1712.09043v3.pdf
PWC https://paperswithcode.com/paper/neural-collaborative-autoencoder
Repo
Framework
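
A tiny autoencoder-for-implicit-feedback sketch with a simple error-reweighting term (unobserved entries down-weighted), mirroring one idea from the abstract; NCAE's actual architecture, three-stage pre-training, and reweighting module are not reproduced.

```python
import torch
import torch.nn as nn

class CFAutoencoder(nn.Module):
    """Reconstruct a user's interaction vector through a non-linear bottleneck."""
    def __init__(self, n_items, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_items, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, n_items)

    def forward(self, x):
        return self.dec(self.enc(x))

def reweighted_loss(pred, target, neg_weight=0.2):
    """Down-weight unobserved (zero) entries so the sparse implicit signal dominates."""
    w = torch.where(target > 0, torch.ones_like(target), neg_weight * torch.ones_like(target))
    return (w * (pred - target) ** 2).sum() / w.sum()

n_users, n_items = 32, 500
interactions = (torch.rand(n_users, n_items) < 0.05).float()   # toy implicit feedback
model = CFAutoencoder(n_items)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                             # a few toy training steps
    opt.zero_grad()
    loss = reweighted_loss(model(interactions), interactions)
    loss.backward()
    opt.step()
print(float(loss))
```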