Paper Group ANR 338
Kernelized Covariance for Action Recognition. Comment on “Why does deep and cheap learning work so well?” [arXiv:1608.08225]. Dynamic change-point detection using similarity networks. HoneyFaces: Increasing the Security and Privacy of Authentication Using Synthetic Facial Images. gLOP: the global and Local Penalty for Capturing Predictive Heterogeneity. PLATO: Policy Learning using Adaptive Trajectory Optimization. Structured Prediction Theory Based on Factor Graph Complexity. A POS Tagger for Code Mixed Indian Social Media Text - ICON-2016 NLP Tools Contest Entry from Surukam. A Novel Approach for Shot Boundary Detection in Videos. A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning. Review Based Rating Prediction. Truncated Variance Reduction: A Unified Approach to Bayesian Optimization and Level-Set Estimation. A structured argumentation framework for detaching conditional obligations. Privacy-Preserving Human Activity Recognition from Extreme Low Resolution. Effective Mean-Field Inference Method for Nonnegative Boltzmann Machines.
Kernelized Covariance for Action Recognition
Title | Kernelized Covariance for Action Recognition |
Authors | Jacopo Cavazza, Andrea Zunino, Marco San Biagio, Vittorio Murino |
Abstract | In this paper we aim to increase the descriptive power of the covariance matrix, which is limited to capturing only linear mutual dependencies between variables. We present a rigorous and principled mathematical pipeline to apply the kernel trick to the computation of the covariance matrix, enhancing it to model the more complex, non-linear relationships conveyed by the raw data. To this end, we propose Kernelized-COV, which generalizes the original covariance representation without compromising computational efficiency. In the experiments, we validate the proposed framework against many previous approaches in the literature, scoring on par with or superior to the state of the art on benchmark datasets for 3D action recognition. |
Tasks | 3D Human Action Recognition, Temporal Action Localization |
Published | 2016-04-22 |
URL | http://arxiv.org/abs/1604.06582v2 |
PDF | http://arxiv.org/pdf/1604.06582v2.pdf |
PWC | https://paperswithcode.com/paper/kernelized-covariance-for-action-recognition |
Repo | |
Framework | |
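The kernel trick for covariance described in the abstract above can be sketched in a few lines: the usual covariance entry is an inner product between centered variable profiles, and replacing that inner product with a kernel evaluation yields a descriptor that can encode non-linear dependencies. This is a minimal illustration of the idea, not the paper's exact pipeline; the function names and the RBF kernel choice are ours.

```python
import math

def rbf(u, v, gamma=0.1):
    # RBF kernel between two centered variable profiles
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def center(row):
    m = sum(row) / len(row)
    return [x - m for x in row]

def kernel_covariance(X, kernel=rbf):
    # X: list of d variable profiles, each observed over T time steps
    # (e.g., one joint coordinate of a skeleton tracked over a clip).
    # The ordinary covariance entry C[i][j] is the inner product of
    # centered profiles i and j; replacing that inner product with a
    # kernel evaluation lets the descriptor capture non-linear
    # relationships between variables as well.
    C = [center(row) for row in X]
    d = len(C)
    return [[kernel(C[i], C[j]) for j in range(d)] for i in range(d)]
```

For skeleton-based action recognition, each profile would be one joint coordinate tracked over the frames of a clip; the resulting matrix is symmetric with unit diagonal under the RBF kernel.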
Comment on “Why does deep and cheap learning work so well?” [arXiv:1608.08225]
Title | Comment on “Why does deep and cheap learning work so well?” [arXiv:1608.08225] |
Authors | David J. Schwab, Pankaj Mehta |
Abstract | In a recent paper, “Why does deep and cheap learning work so well?", Lin and Tegmark claim that the mapping between deep belief networks and the variational renormalization group derived in [arXiv:1410.3831] is invalid, and present a “counterexample” intended to show that this mapping does not hold. In this comment, we show that these claims are incorrect and stem from a misunderstanding of the variational RG procedure proposed by Kadanoff. We also explain why the “counterexample” of Lin and Tegmark is compatible with the mapping proposed in [arXiv:1410.3831]. |
Tasks | |
Published | 2016-09-12 |
URL | http://arxiv.org/abs/1609.03541v1 |
PDF | http://arxiv.org/pdf/1609.03541v1.pdf |
PWC | https://paperswithcode.com/paper/comment-on-why-does-deep-and-cheap-learning |
Repo | |
Framework | |
Dynamic change-point detection using similarity networks
Title | Dynamic change-point detection using similarity networks |
Authors | Shanshan Cao, Yao Xie |
Abstract | From a sequence of similarity networks, with edges representing certain similarity measures between nodes, we are interested in detecting a change-point that changes the statistical property of the networks. After the change, a subset of anomalous nodes emerges that compares dissimilarly with the normal nodes. We study a simple sequential change detection procedure based on node-wise average similarity measures and analyze its theoretical properties. Simulation and real-data examples demonstrate that such a simple stopping procedure has reasonably good performance. We further discuss faulty sensor isolation (estimating the anomalous nodes) using community detection. |
Tasks | Change Point Detection, Community Detection |
Published | 2016-12-05 |
URL | http://arxiv.org/abs/1612.01504v1 |
PDF | http://arxiv.org/pdf/1612.01504v1.pdf |
PWC | https://paperswithcode.com/paper/dynamic-change-point-detection-using |
Repo | |
Framework | |
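The node-wise average-similarity statistic lends itself to a simple sequential test. The sketch below runs a one-sided CUSUM on the minimum node-wise average similarity and stops once it drifts below a pre-change baseline; this is a hedged illustration of the kind of stopping rule the abstract describes, not the authors' exact procedure, and `mu0` and `threshold` stand in for a baseline and detection threshold that would be calibrated in practice.

```python
def nodewise_avg_similarity(W):
    # W: symmetric similarity matrix for one network snapshot
    n = len(W)
    return [sum(W[i][j] for j in range(n) if j != i) / (n - 1)
            for i in range(n)]

def detect_change(snapshots, mu0, threshold):
    # One-sided CUSUM on the minimum node-wise average similarity:
    # after a change, anomalous nodes compare dissimilarly with the
    # normal nodes, so their average similarity drops below mu0.
    s = 0.0
    for t, W in enumerate(snapshots):
        stat = mu0 - min(nodewise_avg_similarity(W))  # drift below baseline
        s = max(0.0, s + stat)
        if s > threshold:
            return t  # stopping time: declare a change-point
    return None
```

Isolating which nodes are anomalous after stopping (the paper's community-detection step) is a separate problem not shown here.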
HoneyFaces: Increasing the Security and Privacy of Authentication Using Synthetic Facial Images
Title | HoneyFaces: Increasing the Security and Privacy of Authentication Using Synthetic Facial Images |
Authors | Mor Ohana, Orr Dunkelman, Stuart Gibson, Margarita Osadchy |
Abstract | One of the main challenges faced by biometric-based authentication systems is the need to offer secure authentication while maintaining the privacy of the biometric data. Previous solutions, such as Secure Sketch and Fuzzy Extractors, rely on assumptions that cannot be guaranteed in practice, and often affect the authentication accuracy. In this paper, we introduce HoneyFaces: the concept of adding a large set of synthetic faces (indistinguishable from real ones) into the biometric “password file”. This password inflation protects the privacy of users and increases the security of the system without affecting the accuracy of the authentication. In particular, privacy for the real users is provided by “hiding” them among a large number of fake users (as the distributions of synthetic and real faces are equal). In addition to maintaining the authentication accuracy, and thus not affecting the security of the authentication process, HoneyFaces offers several security improvements: increased exfiltration hardness, improved leakage detection, and the ability to use a two-server setting as in HoneyWords. Finally, HoneyFaces can be combined with other security and privacy mechanisms for biometric data. We implemented the HoneyFaces system and tested it with a password file composed of 270 real users. The “password file” was then inflated to accommodate up to $2^{36.5}$ users (resulting in a 56.6 TB “password file”). At the same time, the inclusion of the additional faces does not affect the true acceptance rate or false acceptance rate, which were 93.33% and 0.01%, respectively. |
Tasks | |
Published | 2016-11-11 |
URL | http://arxiv.org/abs/1611.03811v1 |
PDF | http://arxiv.org/pdf/1611.03811v1.pdf |
PWC | https://paperswithcode.com/paper/honeyfaces-increasing-the-security-and |
Repo | |
Framework | |
gLOP: the global and Local Penalty for Capturing Predictive Heterogeneity
Title | gLOP: the global and Local Penalty for Capturing Predictive Heterogeneity |
Authors | Rhiannon V. Rose, Daniel J. Lizotte |
Abstract | When faced with a supervised learning problem, we hope to have rich enough data to build a model that predicts future instances well. However, in practice, problems can exhibit predictive heterogeneity: most instances might be relatively easy to predict, while others might be predictive outliers for which a model trained on the entire dataset does not perform well. Identifying these can help focus future data collection. We present gLOP, the global and Local Penalty, a framework for capturing predictive heterogeneity and identifying predictive outliers. gLOP is based on penalized regression for multitask learning, which improves learning by leveraging training signal information from related tasks. We give two optimization algorithms for gLOP, one space-efficient, and another giving the full regularization path. We also characterize uniqueness in terms of the data and tuning parameters, and present empirical results on synthetic data and on two health research problems. |
Tasks | |
Published | 2016-07-29 |
URL | http://arxiv.org/abs/1608.00027v1 |
PDF | http://arxiv.org/pdf/1608.00027v1.pdf |
PWC | https://paperswithcode.com/paper/glop-the-global-and-local-penalty-for |
Repo | |
Framework | |
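A toy version of a global-plus-local penalty can be written with alternating minimization: a shared slope is fit across all tasks, while each task's local correction is a lasso (soft-thresholded) fit on what the global model leaves unexplained, so most tasks collapse onto the shared model and only predictive outliers get a non-zero local term. This is a one-feature sketch of the modeling idea, assuming squared loss and an L1 penalty on the local coefficients; the paper's gLOP optimizers (including the full-regularization-path algorithm) are more general.

```python
def soft_threshold(x, lam):
    # Lasso proximal step: shrink toward zero, clip at zero
    return max(abs(x) - lam, 0.0) * (1 if x > 0 else -1)

def fit_glop(tasks, lam, iters=500):
    # tasks: list of (xs, ys) pairs, one per task, single feature.
    # Model per task: y ~ (g + l_t) * x, with an L1 penalty lam on
    # the local coefficients l_t, so only outlier tasks get l_t != 0.
    g = 0.0
    locals_ = [0.0] * len(tasks)
    for _ in range(iters):
        # global step: least squares on residuals after local fits
        num = sum(sum(x * (y - l * x) for x, y in zip(xs, ys))
                  for (xs, ys), l in zip(tasks, locals_))
        den = sum(sum(x * x for x in xs) for xs, _ in tasks)
        g = num / den
        # local step: per-task soft-thresholded least squares
        for t, (xs, ys) in enumerate(tasks):
            sxx = sum(x * x for x in xs)
            sxy = sum(x * (y - g * x) for x, y in zip(xs, ys))
            locals_[t] = soft_threshold(sxy / sxx, lam / (2 * sxx))
    return g, locals_
```

On two well-behaved tasks plus one outlier task, the local coefficients of the well-behaved tasks shrink exactly to zero while the outlier's stays large, which is how the framework flags predictive outliers.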
PLATO: Policy Learning using Adaptive Trajectory Optimization
Title | PLATO: Policy Learning using Adaptive Trajectory Optimization |
Authors | Gregory Kahn, Tianhao Zhang, Sergey Levine, Pieter Abbeel |
Abstract | Policy search can in principle acquire complex strategies for control of robots and other autonomous systems. When the policy is trained to process raw sensory inputs, such as images and depth maps, it can also acquire a strategy that combines perception and control. However, effectively processing such complex inputs requires an expressive policy class, such as a large neural network. These high-dimensional policies are difficult to train, especially when learning to control safety-critical systems. We propose PLATO, an algorithm that trains complex control policies with supervised learning, using model-predictive control (MPC) to generate the supervision, and hence never needs to run a partially trained and potentially unsafe policy. PLATO uses an adaptive training method to modify the behavior of MPC to gradually match the learned policy, in order to generate training samples at states that are likely to be visited by the learned policy. PLATO also maintains the MPC cost as an objective to avoid the highly undesirable actions that would result from strictly following the learned policy before it has been fully trained. We prove that this type of adaptive MPC expert produces supervision that leads to good long-horizon performance of the resulting policy. We also empirically demonstrate that MPC can still avoid dangerous on-policy actions in unexpected situations during training. Our empirical results on a set of challenging simulated aerial vehicle tasks demonstrate that, compared to prior methods, PLATO learns faster, experiences substantially fewer catastrophic failures (crashes) during training, and often converges to a better policy. |
Tasks | |
Published | 2016-03-02 |
URL | http://arxiv.org/abs/1603.00622v4 |
PDF | http://arxiv.org/pdf/1603.00622v4.pdf |
PWC | https://paperswithcode.com/paper/plato-policy-learning-using-adaptive |
Repo | |
Framework | |
Structured Prediction Theory Based on Factor Graph Complexity
Title | Structured Prediction Theory Based on Factor Graph Complexity |
Authors | Corinna Cortes, Mehryar Mohri, Vitaly Kuznetsov, Scott Yang |
Abstract | We present a general theoretical analysis of structured prediction with a series of new results. We give new data-dependent margin guarantees for structured prediction for a very wide family of loss functions and a general family of hypotheses, with an arbitrary factor graph decomposition. These are the tightest margin bounds known for both standard multi-class and general structured prediction problems. Our guarantees are expressed in terms of a data-dependent complexity measure, factor graph complexity, which we show can be estimated from data and bounded in terms of familiar quantities. We further extend our theory by leveraging the principle of Voted Risk Minimization (VRM) and show that learning is possible even with complex factor graphs. We present new learning bounds for this advanced setting, which we use to design two new algorithms, Voted Conditional Random Field (VCRF) and Voted Structured Boosting (StructBoost). These algorithms can make use of complex features and factor graphs and yet benefit from favorable learning guarantees. We also report the results of experiments with VCRF on several datasets to validate our theory. |
Tasks | Structured Prediction |
Published | 2016-05-20 |
URL | http://arxiv.org/abs/1605.06443v2 |
PDF | http://arxiv.org/pdf/1605.06443v2.pdf |
PWC | https://paperswithcode.com/paper/structured-prediction-theory-based-on-factor |
Repo | |
Framework | |
A POS Tagger for Code Mixed Indian Social Media Text - ICON-2016 NLP Tools Contest Entry from Surukam
Title | A POS Tagger for Code Mixed Indian Social Media Text - ICON-2016 NLP Tools Contest Entry from Surukam |
Authors | Sree Harsha Ramesh, Raveena R Kumar |
Abstract | Building Part-of-Speech (POS) taggers for code-mixed Indian languages is a particularly challenging problem in computational linguistics due to a dearth of accurately annotated training corpora. ICON, as part of its NLP tools contest, has organized this challenge as a shared task for the second consecutive year to improve the state-of-the-art. This paper describes the POS tagger built at Surukam to predict the coarse-grained and fine-grained POS tags for three language pairs - Bengali-English, Telugu-English and Hindi-English - with the text spanning three popular social media platforms - Facebook, WhatsApp and Twitter. We employed Conditional Random Fields as the sequence tagging algorithm and used a library called sklearn-crfsuite, a thin wrapper around CRFsuite, for training our model. The features we used include character n-grams, language information, and patterns for emoji, numbers, punctuation and web addresses. Our submissions in the constrained environment, i.e., without making any use of monolingual POS taggers or the like, obtained an overall average F1-score of 76.45%, which is comparable to the 2015 winning score of 76.79%. |
Tasks | |
Published | 2016-12-31 |
URL | http://arxiv.org/abs/1701.00066v1 |
PDF | http://arxiv.org/pdf/1701.00066v1.pdf |
PWC | https://paperswithcode.com/paper/a-pos-tagger-for-code-mixed-indian-social |
Repo | |
Framework | |
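The feature set described above (character n-grams, number/punctuation/web-address patterns, neighboring-token context) maps naturally onto the per-token feature dictionaries that CRF libraries such as sklearn-crfsuite consume. The sketch below is illustrative only; the feature names and exact patterns are our assumptions, not the authors' configuration.

```python
import re

def token_features(tokens, i):
    # Sketch of a per-token feature dict of the kind fed to a CRF
    # sequence tagger; all names here are illustrative.
    tok = tokens[i]
    feats = {
        "lower": tok.lower(),
        "prefix3": tok[:3],               # character n-gram features
        "suffix3": tok[-3:],
        "is_number": tok.isdigit(),       # number pattern
        "is_punct": bool(re.fullmatch(r"\W+", tok)),  # punctuation pattern
        "is_url": tok.startswith(("http://", "https://", "www.")),
    }
    if i > 0:
        feats["prev_lower"] = tokens[i - 1].lower()  # context feature
    else:
        feats["BOS"] = True  # beginning-of-sentence marker
    return feats
```

In a real pipeline one such dict per token, for every sentence, would be passed to the CRF trainer, alongside a per-token language-ID feature for the code-mixed setting.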
A Novel Approach for Shot Boundary Detection in Videos
Title | A Novel Approach for Shot Boundary Detection in Videos |
Authors | D. S. Guru, Mahamad Suhil, P. Lolika |
Abstract | This paper presents a novel approach for video shot boundary detection. The proposed approach is based on a split-and-merge concept. A Fisher linear discriminant criterion is used to guide the process of both splitting and merging. To capture the between-class and within-class scatter, we employ the 2D2 FLD method, which works on texture features of regions in each frame of a video. Further, to reduce the complexity of the process, we propose to employ spectral clustering to group related regions together into a single region, thereby achieving a reduction in dimension. The proposed method is also experimentally validated on a cricket video. It is revealed that the shots obtained by the proposed approach are highly cohesive and loosely coupled. |
Tasks | Boundary Detection |
Published | 2016-08-24 |
URL | http://arxiv.org/abs/1608.06716v1 |
PDF | http://arxiv.org/pdf/1608.06716v1.pdf |
PWC | https://paperswithcode.com/paper/a-novel-approach-for-shot-boundary-detection |
Repo | |
Framework | |
A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning
Title | A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning |
Authors | Aleksandr Aravkin, Damek Davis |
Abstract | In this paper, we show how to transform any optimization problem that arises from fitting a machine learning model into one that (1) detects and removes contaminated data from the training set while (2) simultaneously fitting the trimmed model on the uncontaminated data that remains. To solve the resulting nonconvex optimization problem, we introduce a fast stochastic proximal-gradient algorithm that incorporates prior knowledge through nonsmooth regularization. For datasets of size $n$, our approach requires $O(n^{2/3}/\varepsilon)$ gradient evaluations to reach $\varepsilon$-accuracy and, when a certain error bound holds, the complexity improves to $O(\kappa n^{2/3}\log(1/\varepsilon))$. These rates are $n^{1/3}$ times better than those achieved by typical, full gradient methods. |
Tasks | |
Published | 2016-10-04 |
URL | http://arxiv.org/abs/1610.01101v2 |
PDF | http://arxiv.org/pdf/1610.01101v2.pdf |
PWC | https://paperswithcode.com/paper/a-smart-stochastic-algorithm-for-nonconvex |
Repo | |
Framework | |
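The trimming idea above, jointly flagging contaminated points and fitting on the rest, can be illustrated with a naive alternating scheme for a one-dimensional least-squares model. The paper instead solves the joint nonconvex problem with a fast stochastic proximal-gradient method; this sketch only shows what "detect and remove, then refit" means.

```python
def trimmed_fit(xs, ys, keep, iters=20):
    # Alternate between (1) fitting a 1-D least-squares slope on the
    # currently trusted points and (2) re-flagging the points with the
    # largest residuals as contaminated, keeping only `keep` of them.
    idx = list(range(len(xs)))
    for _ in range(iters):
        w = (sum(xs[i] * ys[i] for i in idx) /
             sum(xs[i] * xs[i] for i in idx))
        by_residual = sorted(range(len(xs)),
                             key=lambda i: abs(ys[i] - w * xs[i]))
        idx = by_residual[:keep]  # trimmed, "uncontaminated" subset
    return w, sorted(idx)
```

With one gross outlier in the data, the alternation quickly settles on the clean subset and the slope fit on it.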
Review Based Rating Prediction
Title | Review Based Rating Prediction |
Authors | Tal Hadad |
Abstract | Recommendation systems are important components of today’s e-commerce applications, such as targeted advertising, personalized marketing and information retrieval. In recent years, the importance of contextual information has motivated the generation of personalized recommendations according to the available contextual information of users. Compared to traditional systems, which mainly utilize users’ rating history, review-based recommendation can provide more relevant results to users. We introduce a review-based recommendation approach that obtains contextual information by mining user reviews. The proposed approach relies on features obtained by analyzing textual reviews, using methods developed in the Natural Language Processing (NLP) and information retrieval disciplines, to compute a utility function over a given item. An item’s utility is a measure of how much it is preferred according to the user’s current context. In our system, context inference is modeled as the similarity between the user’s review history and the item’s review history. As an example application, we use our method to mine contextual data from customers’ reviews of movies and use it to produce review-based rating predictions. The predicted ratings can generate item-based recommendations to appear in the recommended-items list on the product page. Our evaluations suggest that our system can produce better rating predictions than standard prediction methods. |
Tasks | Information Retrieval, Recommendation Systems |
Published | 2016-06-30 |
URL | http://arxiv.org/abs/1607.00024v4 |
PDF | http://arxiv.org/pdf/1607.00024v4.pdf |
PWC | https://paperswithcode.com/paper/review-based-rating-prediction |
Repo | |
Framework | |
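The context-as-similarity idea can be sketched with plain bags of words: the user's review history is matched against each rated item's review history, and the similarity weights that item's rating. This is a simplified stand-in, assuming bag-of-words cosine similarity in place of the richer NLP features the paper uses; `predict_rating` and its inputs are hypothetical names.

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_rating(user_reviews, rated_items):
    # rated_items: list of (item_review_text, rating) pairs. The
    # context match between the user's review history and each item's
    # review history weights that item's rating in the prediction.
    profile = Counter(" ".join(user_reviews).lower().split())
    num = den = 0.0
    for text, rating in rated_items:
        s = cosine(profile, Counter(text.lower().split()))
        num += s * rating
        den += s
    return num / den if den else None
```

An item whose reviews share no vocabulary with the user's history contributes nothing, so the prediction is pulled toward contextually similar items.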
Truncated Variance Reduction: A Unified Approach to Bayesian Optimization and Level-Set Estimation
Title | Truncated Variance Reduction: A Unified Approach to Bayesian Optimization and Level-Set Estimation |
Authors | Ilija Bogunovic, Jonathan Scarlett, Andreas Krause, Volkan Cevher |
Abstract | We present a new algorithm, truncated variance reduction (TruVaR), that treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian processes in a unified fashion. The algorithm greedily shrinks a sum of truncated variances within a set of potential maximizers (BO) or unclassified points (LSE), which is updated based on confidence bounds. TruVaR is effective in several important settings that are typically non-trivial to incorporate into myopic algorithms, including pointwise costs and heteroscedastic noise. We provide a general theoretical guarantee for TruVaR covering these aspects, and use it to recover and strengthen existing results on BO and LSE. Moreover, we provide a new result for a setting where one can select from a number of noise levels having associated costs. We demonstrate the effectiveness of the algorithm on both synthetic and real-world data sets. |
Tasks | Gaussian Processes |
Published | 2016-10-24 |
URL | http://arxiv.org/abs/1610.07379v1 |
PDF | http://arxiv.org/pdf/1610.07379v1.pdf |
PWC | https://paperswithcode.com/paper/truncated-variance-reduction-a-unified |
Repo | |
Framework | |
A structured argumentation framework for detaching conditional obligations
Title | A structured argumentation framework for detaching conditional obligations |
Authors | Mathieu Beirlaen, Christian Straßer |
Abstract | We present a general formal argumentation system for dealing with the detachment of conditional obligations. Given a set of facts, constraints, and conditional obligations, we answer the question whether an unconditional obligation is detachable by considering reasons for and against its detachment. For the evaluation of arguments in favor of detaching obligations we use a Dung-style argumentation-theoretical semantics. We illustrate the modularity of the general framework by considering some extensions, and we compare the framework to some related approaches from the literature. |
Tasks | |
Published | 2016-06-01 |
URL | http://arxiv.org/abs/1606.00339v1 |
PDF | http://arxiv.org/pdf/1606.00339v1.pdf |
PWC | https://paperswithcode.com/paper/a-structured-argumentation-framework-for |
Repo | |
Framework | |
Privacy-Preserving Human Activity Recognition from Extreme Low Resolution
Title | Privacy-Preserving Human Activity Recognition from Extreme Low Resolution |
Authors | Michael S. Ryoo, Brandon Rothrock, Charles Fleming, Hyun Jong Yang |
Abstract | Privacy protection from surreptitious video recordings is an important societal challenge. We desire a computer vision system (e.g., a robot) that can recognize human activities and assist our daily life, yet ensure that it is not recording video that may invade our privacy. This paper presents a fundamental approach to address such contradicting objectives: human activity recognition while only using extreme low-resolution (e.g., 16x12) anonymized videos. We introduce the paradigm of inverse super resolution (ISR), the concept of learning the optimal set of image transformations to generate multiple low-resolution (LR) training videos from a single video. Our ISR learns different types of sub-pixel transformations optimized for the activity classification, allowing the classifier to best take advantage of existing high-resolution videos (e.g., YouTube videos) by creating multiple LR training videos tailored for the problem. We experimentally confirm that the paradigm of inverse super resolution is able to benefit activity recognition from extreme low-resolution videos. |
Tasks | Activity Recognition, Human Activity Recognition, Super-Resolution |
Published | 2016-04-12 |
URL | http://arxiv.org/abs/1604.03196v3 |
PDF | http://arxiv.org/pdf/1604.03196v3.pdf |
PWC | https://paperswithcode.com/paper/privacy-preserving-human-activity-recognition |
Repo | |
Framework | |
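The core of inverse super resolution, producing several distinct low-resolution views of one high-resolution frame via small transformations followed by downsampling, can be sketched as below. The paper learns sub-pixel (fractional) transformations optimized for activity classification; this sketch uses integer shifts and block averaging purely to illustrate the data-generation step.

```python
def downsample_with_shift(frame, factor, dx, dy):
    # frame: 2-D list of pixel intensities. Shift by an integer offset
    # (a crude stand-in for the learned sub-pixel transformations),
    # then average factor x factor blocks to get a low-resolution frame.
    h, w = len(frame), len(frame[0])
    out = []
    for by in range(0, h - factor + 1, factor):
        row = []
        for bx in range(0, w - factor + 1, factor):
            block = [frame[min(by + j + dy, h - 1)][min(bx + i + dx, w - 1)]
                     for j in range(factor) for i in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def inverse_super_resolution(frame, factor, shifts):
    # Each shift yields a different LR view of the same HR frame,
    # multiplying the low-resolution training data available.
    return [downsample_with_shift(frame, factor, dx, dy)
            for dx, dy in shifts]
```

Applied frame-by-frame to a high-resolution video (e.g., a YouTube clip), each shift in the learned set produces one tailored low-resolution training video.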
Effective Mean-Field Inference Method for Nonnegative Boltzmann Machines
Title | Effective Mean-Field Inference Method for Nonnegative Boltzmann Machines |
Authors | Muneki Yasuda |
Abstract | Nonnegative Boltzmann machines (NNBMs) are recurrent probabilistic neural network models that can describe multi-modal nonnegative data. NNBMs form rectified Gaussian distributions that appear in biological neural network models, positive matrix factorization, nonnegative matrix factorization, and so on. In this paper, an effective inference method for NNBMs is proposed that uses the mean-field method, referred to as the Thouless–Anderson–Palmer equation, and the diagonal consistency method, which was recently proposed. |
Tasks | |
Published | 2016-03-08 |
URL | http://arxiv.org/abs/1603.02434v1 |
PDF | http://arxiv.org/pdf/1603.02434v1.pdf |
PWC | https://paperswithcode.com/paper/effective-mean-field-inference-method-for |
Repo | |
Framework | |