July 29, 2019

Paper Group ANR 19

Folding membrane proteins by deep transfer learning

Title Folding membrane proteins by deep transfer learning
Authors Sheng Wang, Zhen Li, Yizhou Yu, Jinbo Xu
Abstract Computational elucidation of membrane protein (MP) structures is challenging partially due to lack of sufficient solved structures for homology modeling. Here we describe a high-throughput deep transfer learning method that first predicts MP contacts by learning from non-membrane proteins (non-MPs) and then predicts three-dimensional structure models using the predicted contacts as distance restraints. Tested on 510 non-redundant MPs, our method has contact prediction accuracy at least 0.18 better than existing methods, predicts correct folds for 218 MPs (TMscore at least 0.6), and generates three-dimensional models with RMSD less than 4 Angstrom and 5 Angstrom for 57 and 108 MPs, respectively. A rigorous blind test in the continuous automated model evaluation (CAMEO) project shows that our method predicted high-resolution three-dimensional models for two recent test MPs of 210 residues with RMSD close to 2 Angstrom. We estimated that our method could predict correct folds for between 1,345 and 1,871 reviewed human multi-pass MPs, including a few hundred new folds, which should facilitate the discovery of drugs targeting membrane proteins.
Tasks Transfer Learning
Published 2017-08-28
URL http://arxiv.org/abs/1708.08407v1
PDF http://arxiv.org/pdf/1708.08407v1.pdf
PWC https://paperswithcode.com/paper/folding-membrane-proteins-by-deep-transfer
Repo
Framework
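
To make the pipeline concrete, here is a minimal sketch of one step implied by the abstract: turning a predicted residue-residue contact map into upper-bound distance restraints for folding. The probability cutoff, the sequence-separation filter, and the 8 Angstrom Cb-Cb bound are conventional assumptions, not values taken from the paper.

```python
# Illustrative only: select confident long-range contacts from a predicted
# contact-probability matrix and emit them as upper-bound distance restraints,
# the form a folding engine (e.g. CNS or Rosetta) would consume.
import numpy as np

def contacts_to_restraints(contact_prob, prob_cutoff=0.5, min_separation=6,
                           upper_bound=8.0):
    """Return (i, j, upper_bound) restraints for confident long-range contacts."""
    n = contact_prob.shape[0]
    restraints = []
    for i in range(n):
        for j in range(i + min_separation, n):
            if contact_prob[i, j] >= prob_cutoff:
                # Restraint: residues i and j within `upper_bound` Angstrom.
                restraints.append((i, j, upper_bound))
    return restraints

# Toy example: a random symmetric "prediction" for a 50-residue protein.
rng = np.random.default_rng(0)
p = rng.random((50, 50))
p = (p + p.T) / 2
print(len(contacts_to_restraints(p, prob_cutoff=0.9)))
```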

Joint Positioning and Radio Map Generation Based on Stochastic Variational Bayesian Inference for FWIPS

Title Joint Positioning and Radio Map Generation Based on Stochastic Variational Bayesian Inference for FWIPS
Authors Caifa Zhou, Yang Gu
Abstract Fingerprinting based WLAN indoor positioning system (FWIPS) provides a promising indoor positioning solution to meet the growing interest in indoor location-based services (e.g., indoor wayfinding or geo-fencing). FWIPS is preferred because it requires no additional infrastructure, achieving position estimation by reusing the available WLAN and mobile devices, and is capable of providing absolute position estimates. For fingerprinting based positioning (FbP), a model is created during the offline stage to provide reference values of observable features (e.g., signal strength from access points (APs)) as a function of location. One widely applied method to build a complete and accurate reference database (i.e., radio map (RM)) for FWIPS is to carry out a site survey throughout the region of interest (RoI). During the site survey, the readings of received signal strength (RSS) from all visible APs at each reference point (RP) are collected. This site survey, however, is time-consuming and labor-intensive, especially when the RoI is large (e.g., an airport or a big mall). This bottleneck hinders wide commercial application of FWIPS (e.g., proximity promotions in a shopping center). To diminish the cost of the site survey, we propose a probabilistic model that combines fingerprinting based positioning (FbP) and RM generation based on stochastic variational Bayesian inference (SVBI). This SVBI based position and RSS estimation has three properties: i) it can predict the distribution of the estimated position and RSS, ii) it treats each observation of RSS at each RP as a training example for FbP and RM generation instead of using the whole RM as one example, and iii) it requires training the SVBI model only once for both localization and RSS estimation. These benefits enable it to outperform previously proposed approaches.
Tasks Bayesian Inference
Published 2017-05-17
URL http://arxiv.org/abs/1705.06025v1
PDF http://arxiv.org/pdf/1705.06025v1.pdf
PWC https://paperswithcode.com/paper/joint-positioning-and-radio-map-generation
Repo
Framework
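
For orientation, here is a toy probabilistic fingerprinting (FbP) baseline of the kind the paper builds on, not the SVBI model itself: an online RSS observation is scored against each reference point under an independent-Gaussian radio map, and the position estimate is the likelihood-weighted mean of the reference-point coordinates. All values below are invented.

```python
# Minimal probabilistic fingerprinting baseline (not the paper's SVBI model).
# Each reference point (RP) stores per-AP RSS means/stds; an observation is
# scored under an independent-Gaussian likelihood.
import numpy as np

def estimate_position(obs_rss, rp_coords, rp_mean, rp_std):
    """obs_rss: (n_aps,); rp_coords: (n_rps, 2); rp_mean, rp_std: (n_rps, n_aps)."""
    log_lik = -0.5 * np.sum(((obs_rss - rp_mean) / rp_std) ** 2
                            + np.log(2 * np.pi * rp_std ** 2), axis=1)
    w = np.exp(log_lik - log_lik.max())  # stabilised likelihood weights
    w /= w.sum()
    return w @ rp_coords                 # likelihood-weighted position estimate

# Toy radio map: 4 RPs on a 10 m square, 3 visible APs (RSS in dBm).
coords = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])
mean = np.array([[-40, -60, -70],
                 [-60, -40, -70],
                 [-70, -60, -40],
                 [-65, -65, -55]], dtype=float)
std = np.full_like(mean, 4.0)
print(estimate_position(np.array([-42.0, -58.0, -68.0]), coords, mean, std))
```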

Two Hilbert schemes in computer vision

Title Two Hilbert schemes in computer vision
Authors Max Lieblich, Lucas Van Meter
Abstract We study multiview moduli problems that arise in computer vision. We show that these moduli spaces are always smooth and irreducible, in both the calibrated and uncalibrated cases, for any number of views. We also show that these moduli spaces always admit open immersions into Hilbert schemes for more than two views, extending and refining work of Aholt-Sturmfels-Thomas. We use these moduli spaces to study and extend the classical twisted pair covering of the essential variety.
Tasks
Published 2017-07-28
URL https://arxiv.org/abs/1707.09332v6
PDF https://arxiv.org/pdf/1707.09332v6.pdf
PWC https://paperswithcode.com/paper/two-hilbert-schemes-in-computer-vision
Repo
Framework

3D Semantic Trajectory Reconstruction from 3D Pixel Continuum

Title 3D Semantic Trajectory Reconstruction from 3D Pixel Continuum
Authors Jae Shin Yoon, Ziwei Li, Hyun Soo Park
Abstract This paper presents a method to reconstruct a dense semantic trajectory stream of human interactions in 3D from synchronized multiple videos. The interactions inherently introduce self-occlusion and illumination/appearance/shape changes, resulting in highly fragmented trajectory reconstruction with noisy and coarse semantic labels. Our conjecture is that among many views, there exists a set of views that can confidently recognize the visual semantic label of a 3D trajectory. We introduce a new representation called the 3D semantic map, a probability distribution over the semantic labels per trajectory. We construct the 3D semantic map by reasoning about visibility and 2D recognition confidence based on view-pooling, i.e., finding the view that best represents the semantics of the trajectory. Using the 3D semantic map, we precisely infer all trajectory labels jointly by considering the affinity between long-range trajectories via estimating their local rigid transformations. This inference quantitatively outperforms the baseline approaches in terms of predictive validity, representation robustness, and affinity effectiveness. We demonstrate that our algorithm can robustly compute the semantic labels of a large-scale trajectory set involving real-world human interactions with objects, scenes, and people.
Tasks
Published 2017-12-04
URL http://arxiv.org/abs/1712.01359v1
PDF http://arxiv.org/pdf/1712.01359v1.pdf
PWC https://paperswithcode.com/paper/3d-semantic-trajectory-reconstruction-from-3d
Repo
Framework
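
A hedged sketch of the view-pooling idea described above: each camera contributes a 2D label distribution and a visibility score for a trajectory, and the 3D semantic map entry is assembled from the most confident visible views. The pooling rule below is an assumption for illustration, not the paper's precise formulation.

```python
# Illustrative view pooling for one trajectory's 3D semantic map entry.
import numpy as np

def view_pool(label_probs, visibility):
    """label_probs: (n_views, n_labels); visibility: (n_views,) in [0, 1].
    Returns a normalised label distribution for one trajectory."""
    weighted = label_probs * visibility[:, None]  # down-weight occluded views
    pooled = weighted.max(axis=0)                 # best-representing view per label
    return pooled / pooled.sum()

probs = np.array([[0.7, 0.2, 0.1],    # view 1: confident
                  [0.3, 0.4, 0.3],    # view 2: ambiguous
                  [0.1, 0.1, 0.8]])   # view 3: confident but occluded
vis = np.array([1.0, 0.9, 0.2])
print(view_pool(probs, vis))  # distribution dominated by the reliable views
```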

Complex Event Recognition from Images with Few Training Examples

Title Complex Event Recognition from Images with Few Training Examples
Authors Unaiza Ahsan, Chen Sun, James Hays, Irfan Essa
Abstract We propose to leverage concept-level representations for complex event recognition in photographs given limited training examples. We introduce a novel framework to discover event concept attributes from the web and use that to extract semantic features from images and classify them into social event categories with few training examples. Discovered concepts include a variety of objects, scenes, actions and event sub-types, leading to a discriminative and compact representation for event images. Web images are obtained for each discovered event concept and we use (pretrained) CNN features to train concept classifiers. Extensive experiments on challenging event datasets demonstrate that our proposed method outperforms several baselines using deep CNN features directly in classifying images into events with limited training examples. We also demonstrate that our method achieves the best overall accuracy on a dataset with unseen event categories using a single training example.
Tasks
Published 2017-01-17
URL http://arxiv.org/abs/1701.04769v1
PDF http://arxiv.org/pdf/1701.04769v1.pdf
PWC https://paperswithcode.com/paper/complex-event-recognition-from-images-with
Repo
Framework
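
A short sketch of the concept-classifier stage as the abstract describes it: one linear classifier per discovered event concept is trained on (pretrained) CNN features of web images, and the vector of concept scores then serves as a compact image representation. Feature extraction is mocked with random vectors here; in practice the features would come from a pretrained CNN.

```python
# Train one concept classifier per discovered event concept on CNN features,
# then represent a new image by its vector of concept scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_concepts, feat_dim = 5, 512
concept_clfs = []
for _ in range(n_concepts):
    X = rng.normal(size=(200, feat_dim))  # stand-in for web-image CNN features
    y = rng.integers(0, 2, size=200)      # concept-positive vs. background
    concept_clfs.append(LogisticRegression(max_iter=1000).fit(X, y))

def concept_representation(cnn_feature):
    """Map one image's CNN feature vector to a compact vector of concept scores."""
    return np.array([clf.predict_proba(cnn_feature[None, :])[0, 1]
                     for clf in concept_clfs])

print(concept_representation(rng.normal(size=feat_dim)).round(3))
```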

Supervised Quantile Normalisation

Title Supervised Quantile Normalisation
Authors Marine Le Morvan, Jean-Philippe Vert
Abstract Quantile normalisation is a popular normalisation method for data subject to unwanted variations, such as images, speech, or genomic data. It applies a monotonic transformation to the feature values of each sample to ensure that, after normalisation, they follow the same target distribution for each sample. Choosing a “good” target distribution, however, remains largely empirical and heuristic, and is usually done independently of the subsequent analysis of the normalised data. We propose instead to couple the quantile normalisation step with the subsequent analysis, and to optimise the target distribution jointly with the other parameters in the analysis. We illustrate this principle on the problem of estimating a linear model over normalised data, and show that it leads to a particular low-rank matrix regression problem that can be solved efficiently. We illustrate the potential of our method, which we term SUQUAN, on simulated data, images, and genomic data, where it outperforms standard quantile normalisation.
Tasks
Published 2017-06-01
URL http://arxiv.org/abs/1706.00244v1
PDF http://arxiv.org/pdf/1706.00244v1.pdf
PWC https://paperswithcode.com/paper/supervised-quantile-normalisation
Repo
Framework
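
For reference, plain (unsupervised) quantile normalisation looks like this: each sample's values are replaced by the quantiles of a fixed target distribution, so all samples share the same marginal distribution afterwards. SUQUAN's contribution, not shown here, is to learn that target jointly with the downstream linear model.

```python
# Standard quantile normalisation with a fixed target distribution.
import numpy as np

def quantile_normalise(X, target=None):
    """X: (n_samples, n_features). target: sorted (n_features,) vector."""
    if target is None:
        # Common default: the average of the sorted rows.
        target = np.sort(X, axis=1).mean(axis=0)
    ranks = np.argsort(np.argsort(X, axis=1), axis=1)  # rank of each entry
    return target[ranks]  # replace each value by the target quantile of its rank

X = np.array([[5.0, 2.0, 3.0],
              [4.0, 1.0, 4.5]])
Xn = quantile_normalise(X)
print(np.sort(Xn, axis=1))  # every row now has identical sorted values
```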

Generalised Wasserstein Dice Score for Imbalanced Multi-class Segmentation using Holistic Convolutional Networks

Title Generalised Wasserstein Dice Score for Imbalanced Multi-class Segmentation using Holistic Convolutional Networks
Authors Lucas Fidon, Wenqi Li, Luis C. Garcia-Peraza-Herrera, Jinendra Ekanayake, Neil Kitchen, Sebastien Ourselin, Tom Vercauteren
Abstract The Dice score is widely used for binary segmentation due to its robustness to class imbalance. Soft generalisations of the Dice score allow it to be used as a loss function for training convolutional neural networks (CNNs). Although CNNs trained using the mean-class Dice score achieve state-of-the-art results on multi-class segmentation, this loss function takes advantage of neither inter-class relationships nor multi-scale information. We argue that an improved loss function should balance misclassifications to favour predictions that are semantically meaningful. This paper investigates these issues in the context of multi-class brain tumour segmentation. Our contribution is threefold. 1) We propose a semantically-informed generalisation of the Dice score for multi-class segmentation based on the Wasserstein distance on the probabilistic label space. 2) We propose a holistic CNN that embeds spatial information at multiple scales with deep supervision. 3) We show that the joint use of holistic CNNs and generalised Wasserstein Dice scores achieves segmentations that are more semantically meaningful for brain tumour segmentation.
Tasks
Published 2017-07-03
URL http://arxiv.org/abs/1707.00478v4
PDF http://arxiv.org/pdf/1707.00478v4.pdf
PWC https://paperswithcode.com/paper/generalised-wasserstein-dice-score-for
Repo
Framework
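
A hedged numpy sketch of the central idea: with a one-hot ground truth, the Wasserstein distance between the predicted per-voxel distribution and the ground truth has a closed form, so semantically close mistakes (e.g. confusing two tumour sub-types) can be made cheaper than gross ones via a label-space distance matrix. The matrix values below are invented, and the aggregation of per-voxel costs into the paper's full generalised Dice loss is omitted.

```python
import numpy as np

def wasserstein_error(probs, gt_labels, M):
    """Per-voxel transport cost of moving `probs` onto a one-hot ground truth.

    probs: (n_voxels, n_classes) softmax outputs
    gt_labels: (n_voxels,) integer class labels
    M: (n_classes, n_classes) ground distance between labels
    """
    # With a Dirac target, W(p, delta_l) = sum_c p_c * M[l, c].
    return np.einsum('vc,vc->v', probs, M[gt_labels])

# Toy distance matrix: background (0) is far from both tumour classes (1, 2),
# which are close to each other.
M = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.3],
              [1.0, 0.3, 0.0]])
probs = np.array([[0.1, 0.7, 0.2],   # ground truth 1, mostly correct
                  [0.1, 0.2, 0.7]])  # ground truth 1, confused with class 2
print(wasserstein_error(probs, np.array([1, 1]), M))  # [0.16 0.31]
```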

Artificial Intelligence and Data Science in the Automotive Industry

Title Artificial Intelligence and Data Science in the Automotive Industry
Authors Martin Hofmann, Florian Neukart, Thomas Bäck
Abstract Data science and machine learning are the key technologies for processes and products with automatic learning and optimization in the automotive industry of the future. This article defines the terms “data science” (also referred to as “data analytics”) and “machine learning” and how they are related. In addition, it defines the term “optimizing analytics” and illustrates the role of automatic optimization as a key technology in combination with data analytics. It also uses examples to explain the way that these technologies are currently being used in the automotive industry, on the basis of the major subprocesses in the automotive value chain (development, procurement, logistics, production, marketing, sales and after-sales, connected customer). Since the industry is just starting to explore the broad range of potential uses for these technologies, visionary application examples are used to illustrate the revolutionary possibilities that they offer. Finally, the article demonstrates how these technologies can make the automotive industry more efficient and enhance its customer focus throughout all its operations and activities, extending from the product and its development process to the customers and their connection to the product.
Tasks
Published 2017-09-06
URL http://arxiv.org/abs/1709.01989v1
PDF http://arxiv.org/pdf/1709.01989v1.pdf
PWC https://paperswithcode.com/paper/artificial-intelligence-and-data-science-in
Repo
Framework

SAR image despeckling through convolutional neural networks

Title SAR image despeckling through convolutional neural networks
Authors G. Chierchia, D. Cozzolino, G. Poggi, L. Verdoliva
Abstract In this paper we investigate the use of discriminative model learning through convolutional neural networks (CNNs) for SAR image despeckling. The network uses a residual learning strategy: it does not recover the filtered image directly, but the speckle component, which is then subtracted from the noisy one. Training is carried out on a large multitemporal SAR image and its multilook version, which serves as an approximation of a clean image. Experimental results on both synthetic and real SAR data show that the method achieves better performance than state-of-the-art techniques.
Tasks Sar Image Despeckling
Published 2017-04-02
URL http://arxiv.org/abs/1704.00275v2
PDF http://arxiv.org/pdf/1704.00275v2.pdf
PWC https://paperswithcode.com/paper/sar-image-despeckling-through-convolutional
Repo
Framework
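
A hedged PyTorch sketch of the residual-learning setup the abstract describes: a small fully convolutional network predicts the speckle component, which is subtracted from the noisy (here assumed log-intensity) input. Depth, width, and the toy input are placeholder choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ResidualDespeckler(nn.Module):
    """Predicts the speckle residual; the clean estimate is input minus residual."""
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy_log):
        # The network estimates the speckle component, not the clean image.
        return noisy_log - self.body(noisy_log)

# Training would compare the output against a multilook average of a
# multitemporal stack, used as a stand-in for the clean image.
net = ResidualDespeckler()
noisy = torch.randn(1, 1, 64, 64)  # stand-in for a log-intensity patch
print(net(noisy).shape)            # torch.Size([1, 1, 64, 64])
```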

Learning to Predict with Highly Granular Temporal Data: Estimating individual behavioral profiles with smart meter data

Title Learning to Predict with Highly Granular Temporal Data: Estimating individual behavioral profiles with smart meter data
Authors Anastasia Ushakova, Slava J. Mikhaylov
Abstract Big spatio-temporal datasets, available through both open and administrative data sources, offer significant potential for social science research. The magnitude of the data allows for increased resolution and analysis at individual level. While there are recent advances in forecasting techniques for highly granular temporal data, little attention is given to segmenting the time series and finding homogeneous patterns. In this paper, it is proposed to estimate behavioral profiles of individuals’ activities over time using Gaussian Process-based models. In particular, the aim is to investigate how individuals or groups may be clustered according to the model parameters. Such a Bayesian non-parametric method is then tested by looking at the predictability of the segments using a combination of models to fit different parts of the temporal profiles. Model validity is then tested on a set of holdout data. The dataset consists of half hourly energy consumption records from smart meters from more than 100,000 households in the UK and covers the period from 2015 to 2016. The methodological approach developed in the paper may be easily applied to datasets of similar structure and granularity, for example social media data, and may lead to improved accuracy in the prediction of social dynamics and behavior.
Tasks Time Series
Published 2017-11-15
URL http://arxiv.org/abs/1711.05656v2
PDF http://arxiv.org/pdf/1711.05656v2.pdf
PWC https://paperswithcode.com/paper/learning-to-predict-with-highly-granular
Repo
Framework
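
An illustrative pipeline in the spirit of the paper: fit a Gaussian Process per household to its load profile, then cluster households by the fitted kernel hyperparameters. The data are synthetic, and the kernel and clustering choices are assumptions rather than the authors' exact specification.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
hours = np.linspace(0, 24, 48)[:, None]  # half-hourly grid, as in the data

thetas = []
for household in range(20):
    peak = rng.uniform(7, 20)            # synthetic household-specific peak hour
    load = (np.exp(-0.5 * ((hours.ravel() - peak) / 2) ** 2)
            + 0.1 * rng.normal(size=48))
    gp = GaussianProcessRegressor(RBF(length_scale=2.0) + WhiteKernel(0.1),
                                  normalize_y=True).fit(hours, load)
    thetas.append(gp.kernel_.theta)      # fitted log hyperparameters

# Cluster households by their behavioral-profile parameters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.array(thetas))
print(labels)
```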

Understanding the Learned Iterative Soft Thresholding Algorithm with matrix factorization

Title Understanding the Learned Iterative Soft Thresholding Algorithm with matrix factorization
Authors Thomas Moreau, Joan Bruna
Abstract Sparse coding is a core building block in many data analysis and machine learning pipelines. Typically it is solved by relying on generic optimization techniques, such as the Iterative Soft Thresholding Algorithm and its accelerated version (ISTA, FISTA). These methods are optimal in the class of first-order methods for non-smooth, convex functions. However, they do not exploit the particular structure of the problem at hand nor the input data distribution. An acceleration using neural networks, coined LISTA, was proposed in Gregor and LeCun (2010), which showed empirically that one could achieve high quality estimates with few iterations by modifying the parameters of the proximal splitting appropriately. In this paper we study the reasons for such acceleration. Our mathematical analysis reveals that it is related to a specific matrix factorization of the Gram kernel of the dictionary, which attempts to nearly diagonalise the kernel with a basis that produces a small perturbation of the $\ell_1$ ball. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys an improved convergence bound with respect to the non-adaptive version. Moreover, our analysis also shows that conditions for acceleration occur mostly at the beginning of the iterative process, consistent with numerical experiments. We further validate our analysis by showing that on dictionaries where this factorization does not exist, adaptive acceleration fails.
Tasks
Published 2017-06-02
URL http://arxiv.org/abs/1706.01338v1
PDF http://arxiv.org/pdf/1706.01338v1.pdf
PWC https://paperswithcode.com/paper/understanding-the-learned-iterative-soft
Repo
Framework
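
For context, plain ISTA is a few lines of numpy: a gradient step on the quadratic term followed by soft thresholding. LISTA keeps exactly this computational graph but learns the two linear operators and the threshold from data; the paper's analysis ties the resulting acceleration to a factorization of the Gram matrix D^T D.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, n_iter=100):
    """Minimise 0.5 * ||y - D z||^2 + lam * ||z||_1."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    W_e = D.T / L                           # LISTA learns W_e, W_g and the threshold
    W_g = np.eye(D.shape[1]) - D.T @ D / L
    for _ in range(n_iter):
        z = soft_threshold(W_g @ z + W_e @ y, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50)) / np.sqrt(20)
z_true = np.zeros(50)
z_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
y = D @ z_true + 0.01 * rng.normal(size=20)
print(np.flatnonzero(np.abs(ista(D, y, lam=0.1)) > 0.1))  # recovered support
```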

Application of Natural Language Processing to Determine User Satisfaction in Public Services

Title Application of Natural Language Processing to Determine User Satisfaction in Public Services
Authors Radoslaw Kowalski, Marc Esteve, Slava J. Mikhaylov
Abstract Research on customer satisfaction has increased substantially in recent years. However, the relative importance of, and relationships between, different determinants of satisfaction remain uncertain. Moreover, quantitative studies to date tend to test for the significance of pre-determined factors thought to have an influence, with no scalable means to identify other causes of user satisfaction. These gaps make it difficult to apply what is known about user preferences to public service improvement. Meanwhile, digital technology development has enabled new methods to collect user feedback, for example through online forums where users can comment freely on their experience. New tools are needed to analyze large volumes of such feedback. The use of topic models is proposed as a feasible solution for aggregating open-ended user opinions that can be easily deployed in the public sector. Generated insights can contribute to a more inclusive decision-making process in public service provision. This novel methodological approach is applied to a case of service reviews of publicly-funded primary care practices in England. Findings from the analysis of 145,000 reviews covering almost 7,700 primary care centers indicate that the quality of interactions with staff and bureaucratic exigencies are the key issues driving user satisfaction across England.
Tasks Decision Making, Topic Models
Published 2017-11-21
URL http://arxiv.org/abs/1711.08083v1
PDF http://arxiv.org/pdf/1711.08083v1.pdf
PWC https://paperswithcode.com/paper/application-of-natural-language-processing-to
Repo
Framework
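
A minimal topic-model sketch of the kind of analysis described, using scikit-learn's LDA on a tiny invented corpus; the paper applies topic modelling at the scale of roughly 145,000 primary-care reviews.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "staff were friendly and the doctor listened carefully",
    "impossible to get an appointment, phone queue for hours",
    "receptionist was rude but the nurse was helpful",
    "booking system keeps crashing, waited weeks for an appointment",
]
vec = CountVectorizer(stop_words="english").fit(reviews)
dtm = vec.transform(reviews)                      # bag-of-words matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

# Inspect the top words per topic, e.g. staff interactions vs. bureaucracy.
vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-4:][::-1]
    print(f"topic {k}:", ", ".join(vocab[i] for i in top))
```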

Multi-Relevance Transfer Learning

Title Multi-Relevance Transfer Learning
Authors Tianchun Wang
Abstract Transfer learning aims to facilitate learning tasks in a label-scarce target domain by leveraging knowledge from a related source domain with plenty of labeled data. Often we may have multiple domains with little or no labeled data as targets waiting to be solved. Most existing efforts tackle target domains separately by modeling “source-target” pairs without exploring the relatedness between them, which causes a loss of crucial information and thus fails to achieve the optimal capability of knowledge transfer. In this paper, we propose a novel and effective approach called Multi-Relevance Transfer Learning (MRTL) for this purpose, which can simultaneously transfer different knowledge from the source and exploit the shared common latent factors between target domains. Specifically, we formulate the problem as an optimization task based on a collective nonnegative matrix tri-factorization framework. The proposed approach achieves both source-target transfer and target-target leveraging by sharing multiple decomposed latent subspaces. Further, an alternating minimization learning algorithm is developed with a convergence guarantee. An empirical study validates the performance and effectiveness of MRTL compared to state-of-the-art methods.
Tasks Transfer Learning
Published 2017-11-09
URL http://arxiv.org/abs/1711.03361v1
PDF http://arxiv.org/pdf/1711.03361v1.pdf
PWC https://paperswithcode.com/paper/multi-relevance-transfer-learning
Repo
Framework
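
A hedged sketch of the building block named in the abstract, nonnegative matrix tri-factorization X ~ F S G^T, via standard multiplicative updates. MRTL factorizes the source and multiple targets collectively with shared latent subspaces; the single-matrix core shown here is only the starting point.

```python
import numpy as np

def nmtf(X, k1, k2, n_iter=200, eps=1e-9, seed=0):
    """Nonnegative tri-factorization X ~ F S G^T by multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    F = rng.random((m, k1))
    S = rng.random((k1, k2))
    G = rng.random((n, k2))
    for _ in range(n_iter):
        # Each update keeps factors nonnegative and decreases ||X - F S G^T||^2.
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
    return F, S, G

X = np.abs(np.random.default_rng(1).normal(size=(30, 20)))
F, S, G = nmtf(X, k1=4, k2=3)
print(np.linalg.norm(X - F @ S @ G.T) / np.linalg.norm(X))  # relative error
```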

“Dave…I can assure you…that it’s going to be all right…” – A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships

Title “Dave…I can assure you…that it’s going to be all right…” – A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships
Authors Brett W Israelsen, Nisar R Ahmed
Abstract People who design, use, and are affected by autonomous artificially intelligent agents want to be able to trust such agents – that is, to know that these agents will perform correctly, to understand the reasoning behind their actions, and to know how to use them appropriately. Many techniques have been devised to assess and influence human trust in artificially intelligent agents. However, these approaches are typically ad hoc, and have not been formally related to each other or to formal trust models. This paper presents a survey of “algorithmic assurances”, i.e., programmed components of agent operation that are expressly designed to calibrate user trust in artificially intelligent agents. Algorithmic assurances are first formally defined and classified from the perspective of formally modeled human-artificially intelligent agent trust relationships. Building on these definitions, a synthesis of research across communities such as machine learning, human-computer interaction, robotics, e-commerce, and others reveals that assurance algorithms naturally fall along a spectrum in terms of their impact on an agent’s core functionality, with seven notable classes ranging from integral assurances (which impact an agent’s core functionality) to supplemental assurances (which have no direct effect on agent performance). Common approaches within each of these classes are identified and discussed; benefits and drawbacks of different approaches are also investigated.
Tasks
Published 2017-11-08
URL http://arxiv.org/abs/1711.03846v4
PDF http://arxiv.org/pdf/1711.03846v4.pdf
PWC https://paperswithcode.com/paper/davei-can-assure-youthat-its-going-to-be-all
Repo
Framework

On Deterministic Sampling Patterns for Robust Low-Rank Matrix Completion

Title On Deterministic Sampling Patterns for Robust Low-Rank Matrix Completion
Authors Morteza Ashraphijuo, Vaneet Aggarwal, Xiaodong Wang
Abstract In this letter, we study deterministic sampling patterns for the completion of a low-rank matrix corrupted with sparse noise, also known as robust matrix completion. We extend recent results on deterministic sampling patterns in the absence of noise, based on geometric analysis of the Grassmannian manifold. A special case where each column has a certain number of noisy entries is considered, for which our probabilistic analysis performs very efficiently. Furthermore, assuming that the rank of the original matrix is not given, we provide an analysis to determine whether the rank of a valid completion is indeed the actual rank of the data corrupted with sparse noise, by verifying certain conditions.
Tasks Low-Rank Matrix Completion, Matrix Completion
Published 2017-12-05
URL http://arxiv.org/abs/1712.01628v1
PDF http://arxiv.org/pdf/1712.01628v1.pdf
PWC https://paperswithcode.com/paper/on-deterministic-sampling-patterns-for-robust
Repo
Framework