Paper Group ANR 30
A Face Fairness Framework for 3D Meshes. Tunable GMM Kernels. A Foundry of Human Activities and Infrastructures. Computational Techniques in Multispectral Image Processing: Application to the Syriac Galen Palimpsest. Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding. Parallel Multi Channel Convolution using General Matrix Multiplication. When Will AI Exceed Human Performance? Evidence from AI Experts. Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps. An Algebraic Formalization of Forward and Forward-backward Algorithms. Liquid Splash Modeling with Neural Networks. Online Signature Verification using Recurrent Neural Network and Length-normalized Path Signature. Improving Spectral Clustering using the Asymptotic Value of the Normalised Cut. Modeling Temporal Dynamics and Spatial Configurations of Actions Using Two-Stream Recurrent Neural Networks. Deep Metric Learning with Angular Loss. Display advertising: Estimating conversion probability efficiently.
A Face Fairness Framework for 3D Meshes
Title | A Face Fairness Framework for 3D Meshes |
Authors | Sk. Mohammadul Haque, Venu Madhav Govindu |
Abstract | In this paper, we present a face fairness framework for 3D meshes that preserves the regular shape of faces and is applicable to a variety of 3D mesh restoration tasks. Specifically, we formulate a number of desirable properties for any mesh restoration method and show that our framework satisfies them. We then apply the framework to two different tasks, mesh denoising and mesh refinement, and present comparative results showing improvement over other relevant methods in the literature. |
Tasks | Denoising |
Published | 2017-11-22 |
URL | http://arxiv.org/abs/1711.08155v1 |
PDF | http://arxiv.org/pdf/1711.08155v1.pdf |
PWC | https://paperswithcode.com/paper/a-face-fairness-framework-for-3d-meshes |
Repo | |
Framework | |
Tunable GMM Kernels
Title | Tunable GMM Kernels |
Authors | Ping Li |
Abstract | The recently proposed “generalized min-max” (GMM) kernel can be efficiently linearized, with direct applications in large-scale statistical learning and fast near-neighbor search. The linearized GMM kernel has been extensively compared with the linearized radial basis function (RBF) kernel: on a large number of classification tasks, the tuning-free GMM kernel performs (surprisingly) well compared to the best-tuned RBF kernel. Nevertheless, one would naturally expect the GMM kernel to improve further once tuning parameters are introduced. In this paper, we study three simple constructions of tunable GMM kernels: (i) the exponentiated-GMM (or eGMM) kernel, (ii) the powered-GMM (or pGMM) kernel, and (iii) the exponentiated-powered-GMM (epGMM) kernel. The pGMM kernel can still be efficiently linearized by modifying the original hashing procedure for the GMM kernel. On about 60 publicly available classification datasets, we verify that the proposed tunable GMM kernels typically improve over the original GMM kernel, on some datasets dramatically. For example, on 11 popular datasets used for testing deep learning algorithms and tree methods, our experiments show that the proposed tunable GMM kernels are strong competitors to trees and deep nets. Previous studies developed tree methods, including “abc-robust-logitboost”, and demonstrated excellent performance on those 11 datasets (and others) by establishing a second-order tree-split formula and new derivatives for the multi-class logistic loss. Compared to tree methods like “abc-robust-logitboost” (which are slow and require substantial model sizes), the tunable GMM kernels produce largely comparable results. A hedged code sketch of these kernel constructions follows this entry. |
Tasks | |
Published | 2017-01-09 |
URL | http://arxiv.org/abs/1701.02046v2 |
PDF | http://arxiv.org/pdf/1701.02046v2.pdf |
PWC | https://paperswithcode.com/paper/tunable-gmm-kernels |
Repo | |
Framework | |
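The base GMM kernel has a simple closed form, which makes the tunable constructions easy to illustrate. Below is a minimal NumPy sketch: the positive/negative split transform and the min/max ratio follow the standard GMM construction, and the pGMM variant (powering the transformed coordinates) matches the abstract's description. The exact eGMM/epGMM parameterizations here are assumptions made for illustration, not the paper's definitions.

```python
import numpy as np

def gmm_transform(u):
    # Split a general vector into non-negative halves: (u_i)_+ and (-u_i)_+.
    return np.concatenate([np.maximum(u, 0.0), np.maximum(-u, 0.0)])

def minmax_ratio(a, b):
    # sum(min) / sum(max) over non-negative vectors (assumes not both all-zero).
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def gmm_kernel(u, v):
    # The tuning-free generalized min-max kernel.
    return minmax_ratio(gmm_transform(u), gmm_transform(v))

def pgmm_kernel(u, v, p):
    # Powered-GMM: raise the transformed coordinates to the power p; per the
    # abstract, this variant can still be linearized by modifying the original
    # GMM hashing procedure.
    return minmax_ratio(gmm_transform(u) ** p, gmm_transform(v) ** p)

def egmm_kernel(u, v, lam):
    # Exponentiated-GMM, assumed form: a monotone transform of the GMM value,
    # normalized so that k(u, u) = 1.
    return np.exp(lam * (gmm_kernel(u, v) - 1.0))

def epgmm_kernel(u, v, p, lam):
    # Exponentiated-powered-GMM: compose the two tunings.
    return np.exp(lam * (pgmm_kernel(u, v, p) - 1.0))
```

Setting p = 1 in pGMM recovers the original tuning-free GMM kernel, which is the baseline the paper's experiments improve upon.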
A Foundry of Human Activities and Infrastructures
Title | A Foundry of Human Activities and Infrastructures |
Authors | Robert B. Allen, Eunsang Yang, Tatsawan Timakum |
Abstract | Direct representation knowledgebases can enhance and even provide an alternative to document-centered digital libraries. Here we consider realist semantic modeling of everyday activities and infrastructures in such knowledgebases. Because we want to integrate a wide variety of topics, a collection of ontologies (a foundry) and a range of other knowledge resources are needed. We first consider modeling the routine procedures that support human activities and technologies. Next, we examine the interactions of technologies with aspects of social organization. Then, we consider approaches and issues for developing and validating explanations of the relationships among various entities. |
Tasks | |
Published | 2017-10-31 |
URL | http://arxiv.org/abs/1711.01927v1 |
PDF | http://arxiv.org/pdf/1711.01927v1.pdf |
PWC | https://paperswithcode.com/paper/a-foundry-of-human-activities-and |
Repo | |
Framework | |
Computational Techniques in Multispectral Image Processing: Application to the Syriac Galen Palimpsest
Title | Computational Techniques in Multispectral Image Processing: Application to the Syriac Galen Palimpsest |
Authors | Corneliu Arsene, Peter Pormann, William Sellers, Siam Bhayro |
Abstract | Multispectral and hyperspectral image analysis has seen considerable development over the last decade. Applying these methods to palimpsests has produced significant results, enabling researchers to recover texts that would otherwise be lost under the visible overtext by improving the contrast between the undertext and the overtext. In this paper we explore an extended set of multispectral and hyperspectral image analysis methods, consisting of supervised and unsupervised dimensionality reduction techniques, on part of the Syriac Galen Palimpsest dataset (www.digitalgalen.net). Of this extended set, eight methods gave good results. Three were supervised: Generalized Discriminant Analysis (GDA), Linear Discriminant Analysis (LDA), and Neighborhood Component Analysis (NCA). The other five were unsupervised (though still used in a supervised way): Gaussian Process Latent Variable Model (GPLVM), Isomap, Landmark Isomap, Principal Component Analysis (PCA), and Probabilistic Principal Component Analysis (PPCA). The relative success of these methods was judged visually, using color renderings, by whether the undertext was distinguishable from the overtext, yielding the following ranking: LDA, NCA, GDA, Isomap, Landmark Isomap, PPCA, PCA, and GPLVM. These results were compared with those obtained using the Canonical Variates Analysis (CVA) method on the same dataset, which showed remarkable accuracy (LDA is a particular case of CVA in which the objects are classified into two classes). An illustrative scikit-learn sketch of this pixel-level workflow follows this entry. |
Tasks | Dimensionality Reduction |
Published | 2017-01-31 |
URL | http://arxiv.org/abs/1702.02508v1 |
PDF | http://arxiv.org/pdf/1702.02508v1.pdf |
PWC | https://paperswithcode.com/paper/computational-techniques-in-multispectral |
Repo | |
Framework | |
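All of the listed techniques are standard dimensionality-reduction methods, so the per-pixel workflow is easy to sketch with scikit-learn. The band count, pixel data, and class labels below are placeholders, not the actual Syriac Galen Palimpsest annotations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data: one row per pixel, one column per spectral band.
# Labels mark hand-annotated pixels: 0 = parchment, 1 = overtext, 2 = undertext.
rng = np.random.default_rng(0)
X = rng.random((5000, 23))          # e.g. 23 bands per pixel (placeholder)
y = rng.integers(0, 3, size=5000)   # placeholder annotations

# Supervised: project onto directions that best separate the three classes.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
X_lda = lda.transform(X)

# Unsupervised baseline for comparison.
X_pca = PCA(n_components=3).fit_transform(X)

# Mapping the projected components to RGB channels yields false-colour images
# in which undertext and overtext can be visually distinguished, which is how
# the paper ranks the methods.
```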
Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding
Title | Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding |
Authors | Mainak Jas, Tom Dupré La Tour, Umut Şimşekli, Alexandre Gramfort |
Abstract | Neural time-series data contain a wide variety of prototypical signal waveforms (atoms) that are of significant importance in clinical and cognitive research. One of the goals of analyzing such data is hence to extract such ‘shift-invariant’ atoms. Although some success has been reported with existing algorithms, they are limited in applicability due to their heuristic nature. Moreover, they are often vulnerable to artifacts and impulsive noise, which are typically present in raw neural recordings. In this study, we address these issues and propose a novel probabilistic convolutional sparse coding (CSC) model for learning shift-invariant atoms from raw neural signals containing potentially severe artifacts. At the core of our model, which we call $\alpha$CSC, lies a family of heavy-tailed distributions called $\alpha$-stable distributions. We develop a novel, computationally efficient Monte Carlo expectation-maximization algorithm for inference. The maximization step boils down to a weighted CSC problem, for which we develop a computationally efficient optimization algorithm. Our results show that the proposed algorithm achieves state-of-the-art convergence speeds. Moreover, $\alpha$CSC is significantly more robust to artifacts than three competing algorithms: it can extract spike bursts and oscillations, and even reveal subtler phenomena such as cross-frequency coupling when applied to noisy neural time series. A sketch of the weighted CSC objective appears after this entry. |
Tasks | Time Series |
Published | 2017-05-22 |
URL | http://arxiv.org/abs/1705.08006v2 |
PDF | http://arxiv.org/pdf/1705.08006v2.pdf |
PWC | https://paperswithcode.com/paper/learning-the-morphology-of-brain-signals |
Repo | |
Framework | |
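The abstract notes that the maximization step reduces to a weighted CSC problem, which is straightforward to write down. Below is a minimal NumPy sketch of that per-signal objective; names and shapes are illustrative, and the authors' reference implementation is available as the alphacsc Python package.

```python
import numpy as np

def weighted_csc_objective(x, atoms, codes, weights, lam):
    """Weighted CSC loss for one signal x of length N:
    sum_t w_t * (x_t - sum_k (z_k * d_k)_t)^2 + lam * sum_k ||z_k||_1,
    where * denotes linear convolution and the weights w_t come from the E-step."""
    recon = np.zeros_like(x)
    for d_k, z_k in zip(atoms, codes):
        # Convolve activation z_k with atom d_k, truncated to the signal length.
        recon += np.convolve(z_k, d_k, mode="full")[: len(x)]
    residual = x - recon
    sparsity = sum(np.abs(z_k).sum() for z_k in codes)
    return np.sum(weights * residual ** 2) + lam * sparsity
```

Under Gaussian noise all weights equal one and this reduces to ordinary CSC; the heavy-tailed $\alpha$-stable model instead drives down the weights of artifact-laden samples.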
Parallel Multi Channel Convolution using General Matrix Multiplication
Title | Parallel Multi Channel Convolution using General Matrix Multiplication |
Authors | Aravind Vasudevan, Andrew Anderson, David Gregg |
Abstract | Convolutional neural networks (CNNs) have emerged as one of the most successful machine learning technologies for image and video processing. The most computationally intensive parts of CNNs are the convolutional layers, which convolve multi-channel images with multiple kernels. A common approach to implementing convolutional layers is to expand the image into a column matrix (im2col) and perform Multiple Channel Multiple Kernel (MCMK) convolution using an existing parallel General Matrix Multiplication (GEMM) library. This im2col conversion greatly increases the memory footprint of the input matrix and reduces data locality. In this paper we propose a new approach to MCMK convolution that is based on GEMM but not on im2col. Our algorithm eliminates the need for data replication on the input, enabling us to apply the convolution kernels to the input images directly. We have implemented several variants of our algorithm on a CPU and on an embedded ARM processor. On the CPU, our algorithm is faster than im2col in most cases. A sketch of the im2col baseline follows this entry. |
Tasks | |
Published | 2017-04-06 |
URL | http://arxiv.org/abs/1704.04428v2 |
PDF | http://arxiv.org/pdf/1704.04428v2.pdf |
PWC | https://paperswithcode.com/paper/parallel-multi-channel-convolution-using |
Repo | |
Framework | |
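For context, the im2col baseline that the paper improves upon can be sketched in a few lines of NumPy. Note how the column matrix replicates each input pixel up to K*K times; this is exactly the memory blow-up that the paper's replication-free GEMM formulation avoids. The sketch below is the baseline, not the paper's algorithm.

```python
import numpy as np

def im2col_conv(image, kernels):
    """MCMK convolution via im2col + GEMM (valid padding, stride 1).

    image:   (C, H, W) multi-channel input
    kernels: (M, C, K, K) M kernels spanning all C channels
    returns: (M, H-K+1, W-K+1) output feature maps
    """
    C, H, W = image.shape
    M, _, K, _ = kernels.shape
    out_h, out_w = H - K + 1, W - K + 1
    # Expand every KxK receptive field into one column: (C*K*K, out_h*out_w).
    cols = np.empty((C * K * K, out_h * out_w))
    idx = 0
    for c in range(C):
        for i in range(K):
            for j in range(K):
                cols[idx] = image[c, i:i + out_h, j:j + out_w].ravel()
                idx += 1
    # A single GEMM: (M, C*K*K) x (C*K*K, out_h*out_w).
    out = kernels.reshape(M, -1) @ cols
    return out.reshape(M, out_h, out_w)
```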
When Will AI Exceed Human Performance? Evidence from AI Experts
Title | When Will AI Exceed Human Performance? Evidence from AI Experts |
Authors | Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans |
Abstract | Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI. |
Tasks | |
Published | 2017-05-24 |
URL | http://arxiv.org/abs/1705.08807v3 |
PDF | http://arxiv.org/pdf/1705.08807v3.pdf |
PWC | https://paperswithcode.com/paper/when-will-ai-exceed-human-performance |
Repo | |
Framework | |
Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps
Title | Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps |
Authors | Tobias Falke, Iryna Gurevych |
Abstract | Concept maps can be used to concisely represent important information and bring structure into large document collections. Therefore, we study a variant of multi-document summarization that produces summaries in the form of concept maps. However, suitable evaluation datasets for this task are currently missing. To close this gap, we present a newly created corpus of concept maps that summarize heterogeneous collections of web documents on educational topics. It was created using a novel crowdsourcing approach that allows us to efficiently determine important elements in large document collections. We release the corpus along with a baseline system and proposed evaluation protocol to enable further research on this variant of summarization. |
Tasks | Document Summarization, Multi-Document Summarization |
Published | 2017-04-14 |
URL | http://arxiv.org/abs/1704.04452v2 |
PDF | http://arxiv.org/pdf/1704.04452v2.pdf |
PWC | https://paperswithcode.com/paper/bringing-structure-into-summaries |
Repo | |
Framework | |
An Algebraic Formalization of Forward and Forward-backward Algorithms
Title | An Algebraic Formalization of Forward and Forward-backward Algorithms |
Authors | Ai Azuma, Masashi Shimbo, Yuji Matsumoto |
Abstract | In this paper, we propose an algebraic formalization of two important classes of dynamic programming algorithms: forward and forward-backward algorithms. They are generalized extensively in this study so that a wide range of other existing algorithms is subsumed. Forward algorithms generalized in this study subsume the ordinary forward algorithm on trellises for sequence labeling, the inside algorithm on derivation forests for CYK parsing, unidirectional message passing on acyclic factor graphs, the forward mode of automatic differentiation on computation graphs with addition and multiplication, and so on. In addition, we reveal algebraic structures underlying complicated computation with forward algorithms. With the aid of the revealed algebraic structures, we also propose a systematic framework for designing complicated variants of forward algorithms. Forward-backward algorithms generalized in this study subsume the ordinary forward-backward algorithm on trellises for sequence labeling, the inside-outside algorithm on derivation forests for CYK parsing, the sum-product algorithm on acyclic factor graphs, the reverse mode of automatic differentiation (a.k.a. back propagation) on computation graphs with addition and multiplication, and so on. We also propose an algebraic characterization of what forward-backward algorithms can compute and elucidate the relationship between forward and forward-backward algorithms. A minimal semiring-parameterized sketch of the forward recursion follows this entry. |
Tasks | |
Published | 2017-02-22 |
URL | http://arxiv.org/abs/1702.06941v1 |
PDF | http://arxiv.org/pdf/1702.06941v1.pdf |
PWC | https://paperswithcode.com/paper/an-algebraic-formalization-of-forward-and |
Repo | |
Framework | |
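The unifying observation is that the forward recursion only needs generalized "plus" and "times" operations that form a semiring. Below is a minimal Python sketch of a semiring-parameterized forward algorithm on a trellis; the paper's algebraic treatment is considerably more general.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Semiring:
    plus: Callable[[float, float], float]   # generalized addition
    times: Callable[[float, float], float]  # generalized multiplication
    zero: float                             # identity of plus
    one: float                              # identity of times

def gen_sum(sr, xs):
    # Fold the semiring addition over an iterable.
    acc = sr.zero
    for x in xs:
        acc = sr.plus(acc, x)
    return acc

def forward(init, trans, emit, sr):
    """Forward recursion on a trellis. init[s]: initial score of state s;
    trans[s1][s2]: transition score; emit[t][s]: emission score at time t."""
    S = len(init)
    alpha = [sr.times(init[s], emit[0][s]) for s in range(S)]
    for t in range(1, len(emit)):
        alpha = [sr.times(emit[t][s2],
                          gen_sum(sr, (sr.times(alpha[s1], trans[s1][s2])
                                       for s1 in range(S))))
                 for s2 in range(S)]
    return gen_sum(sr, alpha)

# Sum-product recovers the ordinary forward algorithm (marginal likelihood);
# max-product recovers the Viterbi score.
sum_product = Semiring(lambda a, b: a + b, lambda a, b: a * b, 0.0, 1.0)
max_product = Semiring(max, lambda a, b: a * b, 0.0, 1.0)
```

Swapping in other algebras recovers, for example, the inside algorithm on derivation forests or forward-mode automatic differentiation, as the abstract describes.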
Liquid Splash Modeling with Neural Networks
Title | Liquid Splash Modeling with Neural Networks |
Authors | Kiwon Um, Xiangyu Hu, Nils Thuerey |
Abstract | This paper proposes a new data-driven approach to modeling detailed splashes in liquid simulations with neural networks. Our model learns to generate small-scale splash detail for the fluid-implicit-particle method, using training data acquired from physically parametrized, high-resolution simulations. We use neural networks to model the regression of splash formation using a classifier together with a velocity modifier; for the velocity modification, we employ a heteroscedastic model. We evaluate our method across spatial scales, simulation setups, and solvers. Our simulation results demonstrate that the model significantly improves visual fidelity, with abundant realistic droplet formation, and yields splash detail far more efficiently than finer discretizations. A sketch of a heteroscedastic output head follows this entry. |
Tasks | |
Published | 2017-04-14 |
URL | http://arxiv.org/abs/1704.04456v2 |
PDF | http://arxiv.org/pdf/1704.04456v2.pdf |
PWC | https://paperswithcode.com/paper/liquid-splash-modeling-with-neural-networks |
Repo | |
Framework | |
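The velocity modifier is described as heteroscedastic, i.e., it models input-dependent noise. A common way to realize this is a network head that predicts both a mean and a log-variance, trained with a Gaussian negative log-likelihood. The PyTorch sketch below illustrates that pattern under stated assumptions (layer sizes and names are invented); it is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HeteroscedasticModifier(nn.Module):
    """Illustrative velocity-modifier head: predicts a mean velocity change
    and a per-output log-variance, expressing input-dependent uncertainty."""

    def __init__(self, n_features, n_out=3):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                  nn.Linear(64, 64), nn.ReLU())
        self.mean = nn.Linear(64, n_out)
        self.log_var = nn.Linear(64, n_out)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)

def heteroscedastic_nll(mean, log_var, target):
    # Gaussian negative log-likelihood with learned, input-dependent variance
    # (constant terms dropped).
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()
```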
Online Signature Verification using Recurrent Neural Network and Length-normalized Path Signature
Title | Online Signature Verification using Recurrent Neural Network and Length-normalized Path Signature |
Authors | Songxuan Lai, Lianwen Jin, Weixin Yang |
Abstract | Inspired by the great success of recurrent neural networks (RNNs) in sequential modeling, we introduce a novel RNN system to improve the performance of online signature verification. The training objective is to directly minimize intra-class variations and to push the distances between skilled forgeries and genuine samples above a given threshold. By back-propagating these training signals, our RNN produces discriminative features with the desired metric properties. Additionally, we propose a novel descriptor, the length-normalized path signature (LNPS), and apply it to online signature verification. LNPS has appealing properties, such as scale invariance and rotation invariance after linear combination, and shows promising results in online signature verification. Experiments on the publicly available SVC-2004 dataset yielded state-of-the-art performance with an equal error rate (EER) of 2.37%. A sketch of such a margin-based objective follows this entry. |
Tasks | |
Published | 2017-05-19 |
URL | http://arxiv.org/abs/1705.06849v1 |
PDF | http://arxiv.org/pdf/1705.06849v1.pdf |
PWC | https://paperswithcode.com/paper/online-signature-verification-using-recurrent |
Repo | |
Framework | |
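The stated training objective (minimize intra-class variation, push forgery-genuine distances above a threshold) has the shape of a contrastive margin loss on learned features. The PyTorch sketch below is one plausible rendering of that objective, not the paper's exact formulation.

```python
import torch

def verification_loss(genuine_a, genuine_b, forgery, margin=1.0):
    """Margin objective over feature batches of shape (B, D): pull
    genuine-genuine pairs together, push forgery-genuine pairs apart until
    their squared distance exceeds `margin`."""
    d_pos = (genuine_a - genuine_b).pow(2).sum(dim=1)   # intra-class variation
    d_neg = (genuine_a - forgery).pow(2).sum(dim=1)     # forgery distance
    return (d_pos + torch.relu(margin - d_neg)).mean()
```

At test time, verification then reduces to thresholding the feature-space distance between a query signature and enrolled genuine samples.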
Improving Spectral Clustering using the Asymptotic Value of the Normalised Cut
Title | Improving Spectral Clustering using the Asymptotic Value of the Normalised Cut |
Authors | David Hofmeyr |
Abstract | Spectral clustering is a popular and versatile clustering method based on a relaxation of the normalised graph-cut objective. Despite its popularity, however, there is no single agreed-upon method for tuning the important scaling parameter, nor for automatically determining the number of clusters to extract. Popular heuristics exist, but corresponding theoretical results are scarce. In this paper we investigate the asymptotic value of the normalised cut for an increasing sample assumed to arise from an underlying probability distribution and, based on this result, provide recommendations for improving spectral clustering methodology. A corresponding algorithm is proposed with strong empirical performance. A compact spectral-clustering sketch follows this entry. |
Tasks | |
Published | 2017-03-29 |
URL | https://arxiv.org/abs/1703.09975v2 |
PDF | https://arxiv.org/pdf/1703.09975v2.pdf |
PWC | https://paperswithcode.com/paper/improving-spectral-clustering-using-the |
Repo | |
Framework | |
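For reference, the pipeline whose scaling parameter the paper analyzes is standard normalized spectral clustering, sketched below in NumPy and scikit-learn. The paper's contribution is principled guidance for choosing `scale` and the number of clusters; this sketch leaves both as inputs.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, scale):
    # Gaussian affinity matrix with bandwidth `scale`.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-sq / (2 * scale ** 2))
    np.fill_diagonal(A, 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Embed with the k smallest eigenvectors (relaxed normalized cut), cluster.
    _, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    emb = vecs[:, :k]
    emb /= np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=k, n_init=10).fit_predict(emb)
```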
Modeling Temporal Dynamics and Spatial Configurations of Actions Using Two-Stream Recurrent Neural Networks
Title | Modeling Temporal Dynamics and Spatial Configurations of Actions Using Two-Stream Recurrent Neural Networks |
Authors | Hongsong Wang, Liang Wang |
Abstract | Recently, skeleton-based action recognition has gained popularity thanks to cost-effective depth sensors coupled with real-time skeleton-estimation algorithms. Traditional approaches based on handcrafted features are limited in their ability to represent complex motion patterns. Recent methods that use Recurrent Neural Networks (RNNs) to handle raw skeletons focus only on contextual dependency in the temporal domain and neglect the spatial configurations of articulated skeletons. In this paper, we propose a novel two-stream RNN architecture that models both temporal dynamics and spatial configurations for skeleton-based action recognition. We explore two different structures for the temporal stream: a stacked RNN and a hierarchical RNN, the latter designed according to human body kinematics. We also propose two effective methods to model the spatial structure by converting the spatial graph into a sequence of joints. To improve generalization, we further exploit 3D-transformation-based data augmentation, including rotation and scaling, applied to the 3D coordinates of skeletons during training. Experiments on 3D action recognition benchmark datasets show that our method brings considerable improvement for a variety of actions, i.e., generic actions, interaction activities, and gestures. A schematic two-stream sketch follows this entry. |
Tasks | 3D Human Action Recognition, Data Augmentation, Skeleton Based Action Recognition, Temporal Action Localization |
Published | 2017-04-09 |
URL | http://arxiv.org/abs/1704.02581v2 |
PDF | http://arxiv.org/pdf/1704.02581v2.pdf |
PWC | https://paperswithcode.com/paper/modeling-temporal-dynamics-and-spatial |
Repo | |
Framework | |
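A schematic PyTorch sketch of the two-stream idea follows: one RNN consumes the skeleton as a temporal sequence of frames, while the other consumes joints as a spatial sequence, per the abstract's graph-to-sequence conversion. Layer sizes are invented, and the spatial stream here reads a single time-averaged frame for brevity, a simplification of the paper's per-frame spatial sequences.

```python
import torch
import torch.nn as nn

class TwoStreamRNN(nn.Module):
    """Illustrative two-stream skeleton model with late score fusion."""

    def __init__(self, n_joints=25, n_classes=60, hidden=128):
        super().__init__()
        # Temporal stream: sequence of T frames, each a flattened pose.
        self.temporal = nn.GRU(n_joints * 3, hidden, batch_first=True)
        # Spatial stream: sequence of J joints, each a 3D coordinate.
        self.spatial = nn.GRU(3, hidden, batch_first=True)
        self.cls_t = nn.Linear(hidden, n_classes)
        self.cls_s = nn.Linear(hidden, n_classes)

    def forward(self, skel):                      # skel: (B, T, J, 3)
        B, T, J, _ = skel.shape
        _, h_t = self.temporal(skel.reshape(B, T, J * 3))
        _, h_s = self.spatial(skel.mean(dim=1))   # (B, J, 3) joint sequence
        return self.cls_t(h_t[-1]) + self.cls_s(h_s[-1])
```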
Deep Metric Learning with Angular Loss
Title | Deep Metric Learning with Angular Loss |
Authors | Jian Wang, Feng Zhou, Shilei Wen, Xiao Liu, Yuanqing Lin |
Abstract | Modern image search systems require semantic understanding of images, and a key yet under-addressed problem is learning a good metric for measuring similarity between images. While deep metric learning has yielded impressive performance gains by extracting high-level abstractions from image data, a proper objective loss function becomes the central issue for boosting performance. In this paper, we propose a novel angular loss, which takes angle relationships into account, for learning a better similarity metric. Whereas previous metric learning methods focus on optimizing the similarity (contrastive loss) or relative similarity (triplet loss) of image pairs, our proposed method constrains the angle at the negative point of triplet triangles. Several favorable properties follow. First, scale invariance is introduced, improving the robustness of the objective against feature variance. Second, a third-order geometric constraint is inherently imposed, capturing local structure of triplet triangles that contrastive and triplet losses miss. Third, better convergence is demonstrated by experiments on three publicly available datasets. A sketch of this loss follows this entry. |
Tasks | Image Retrieval, Metric Learning |
Published | 2017-08-04 |
URL | http://arxiv.org/abs/1708.01682v1 |
PDF | http://arxiv.org/pdf/1708.01682v1.pdf |
PWC | https://paperswithcode.com/paper/deep-metric-learning-with-angular-loss |
Repo | |
Framework | |
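The per-triplet form of the angular loss bounds the angle at the negative point of the triplet triangle by a tuning parameter alpha, yielding a hinge on squared distances. The PyTorch sketch below follows that simple per-triplet form as we understand it; the paper also derives a smoothed in-batch variant, omitted here.

```python
import torch

def angular_loss(anchor, positive, negative, alpha_deg=40.0):
    """Hinge form of the angular loss on feature batches of shape (B, D):
    penalize triplets whose angle at the negative point exceeds alpha."""
    tan_sq = torch.tan(torch.deg2rad(torch.tensor(alpha_deg))) ** 2
    center = (anchor + positive) / 2          # midpoint of the positive pair
    loss = ((anchor - positive).pow(2).sum(dim=1)
            - 4 * tan_sq * (negative - center).pow(2).sum(dim=1))
    return torch.relu(loss).mean()
```

Because the constraint depends only on the triangle's geometry, rescaling all features leaves the loss unchanged, which is the scale invariance the abstract highlights.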
Display advertising: Estimating conversion probability efficiently
Title | Display advertising: Estimating conversion probability efficiently |
Authors | Abdollah Safari, Rachel MacKay Altman, Thomas M. Loughin |
Abstract | The goal of online display advertising is to entice users to “convert” (i.e., take a pre-defined action such as making a purchase) after clicking on the ad. An important measure of the value of an ad is the probability of conversion. The focus of this paper is the development of a computationally efficient, accurate, and precise estimator of conversion probability. The challenges associated with this estimation problem are the delays in observing conversions and the size of the data set (both the number of observations and the number of predictors). Two models have previously been considered as a basis for estimation: a logistic regression model, and a joint model for observed conversion statuses and delay times. Fitting the former is simple, but ignoring the delays in conversion leads to an under-estimate of the conversion probability. The latter is less biased but computationally expensive to fit. Our proposed estimator is a compromise between these two. We apply our results to a data set from Criteo, a commerce marketing company that personalizes online display advertisements for users. A synthetic illustration of the delay bias follows this entry. |
Tasks | |
Published | 2017-10-24 |
URL | http://arxiv.org/abs/1710.08583v1 |
PDF | http://arxiv.org/pdf/1710.08583v1.pdf |
PWC | https://paperswithcode.com/paper/display-advertising-estimating-conversion |
Repo | |
Framework | |
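The downward bias of the naive estimator is easy to demonstrate on synthetic data: clicks whose conversion delay exceeds the current observation window get mislabeled as non-conversions. The sketch below uses invented data and scikit-learn's logistic regression; it illustrates the bias that motivates the paper's compromise estimator, not the estimator itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical click log: features X, true conversion indicator, conversion
# delay (days), and time elapsed since each click (the observation window).
rng = np.random.default_rng(1)
n = 10_000
X = rng.normal(size=(n, 5))
converts = rng.random(n) < 0.1
delay = rng.exponential(7.0, size=n)
elapsed = rng.uniform(0.0, 30.0, size=n)

# Naive estimator: treat not-yet-observed conversions as non-conversions.
# Fast, but biased downward: recent clicks haven't had time to convert.
observed = converts & (delay <= elapsed)
naive = LogisticRegression().fit(X, observed)

print("true conversion rate:   ", converts.mean())
print("observed (biased) rate: ", observed.mean())
print("naive mean prediction:  ", naive.predict_proba(X)[:, 1].mean())
```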