Paper Group ANR 82
Emotional Intensity analysis in Bipolar subjects
Title | Emotional Intensity analysis in Bipolar subjects |
Authors | Facundo Carrillo, Natalia Mota, Mauro Copelli, Sidarta Ribeiro, Mariano Sigman, Guillermo Cecchi, Diego Fernandez Slezak |
Abstract | The massive availability of digital repositories of human thought opens radically novel ways of studying the human mind. Natural language processing tools and computational models have evolved such that many mental conditions can be predicted by analyzing speech. Transcriptions of interviews and discourses are analyzed using syntactic, grammatical or sentiment analysis to infer the mental state. Here we set out to investigate whether classification of Bipolar and control subjects is possible. We develop the Emotion Intensity Index based on the Dictionary of Affect, and find that subject categories are distinguishable. Using classical classification techniques we achieve more than 75% labeling accuracy. These results, added to previous studies, show that current automated speech analysis is capable of identifying altered mental states, a step towards a quantitative psychiatry. |
Tasks | Sentiment Analysis |
Published | 2016-06-07 |
URL | http://arxiv.org/abs/1606.02231v1 |
http://arxiv.org/pdf/1606.02231v1.pdf | |
PWC | https://paperswithcode.com/paper/emotional-intensity-analysis-in-bipolar |
Repo | |
Framework | |
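The abstract does not give the formula behind the Emotion Intensity Index. As a loudly hypothetical sketch, one simple reading is the mean affect-lexicon score of the words in a transcript; the tiny `AFFECT` lexicon below is invented for illustration (the paper builds on the Dictionary of Affect in Language):

```python
# Hedged sketch of a lexicon-based emotion intensity score.
# The AFFECT values here are made up; the actual index in the paper
# is derived from the Dictionary of Affect in Language.
AFFECT = {"happy": 2.8, "sad": 1.2, "calm": 2.0, "angry": 1.1}

def emotion_intensity_index(tokens, lexicon=AFFECT):
    """Mean affect score over the tokens covered by the lexicon;
    0.0 when no token is covered."""
    scores = [lexicon[t] for t in tokens if t in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

print(emotion_intensity_index(["i", "feel", "happy", "and", "calm"]))  # 2.4
```

A per-subject feature like this can then feed any off-the-shelf classifier to separate patient and control groups.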
Theoretical Analysis of Domain Adaptation with Optimal Transport
Title | Theoretical Analysis of Domain Adaptation with Optimal Transport |
Authors | Ievgen Redko, Amaury Habrard, Marc Sebban |
Abstract | Domain adaptation (DA) is an important and emerging field of machine learning that tackles the problem occurring when the distributions of training (source domain) and test (target domain) data are similar but different. Current theoretical results show that the efficiency of DA algorithms depends on their capacity to minimize the divergence between source and target probability distributions. In this paper, we provide a theoretical study of the advantages that concepts borrowed from optimal transportation theory can bring to DA. In particular, we show that the Wasserstein metric can be used as a divergence measure between distributions to obtain generalization guarantees for three different learning settings: (i) classic DA with unsupervised target data, (ii) DA combining source and target labeled data, and (iii) multiple-source DA. Based on the obtained results, we provide some insights showing when this analysis can be tighter than other existing frameworks. |
Tasks | Domain Adaptation |
Published | 2016-10-14 |
URL | http://arxiv.org/abs/1610.04420v4 |
http://arxiv.org/pdf/1610.04420v4.pdf | |
PWC | https://paperswithcode.com/paper/theoretical-analysis-of-domain-adaptation |
Repo | |
Framework | |
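Since the guarantees above are stated in terms of the Wasserstein metric, a minimal generic illustration (not the authors' analysis) may help: in one dimension, the W1 distance between two equal-size empirical samples reduces to the mean absolute difference of their sorted values, because the monotone matching is the optimal coupling.

```python
def wasserstein_1d(xs, ys):
    """W1 distance between two equal-size 1-D empirical samples.

    In 1-D the optimal transport plan matches the i-th smallest source
    point to the i-th smallest target point, so W1 is simply the mean
    absolute difference of the sorted samples."""
    assert len(xs) == len(ys), "equal sample sizes assumed for simplicity"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Shifting a sample by a constant c moves it a W1 distance of exactly c.
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```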
Blind Facial Image Quality Enhancement using Non-Rigid Semantic Patches
Title | Blind Facial Image Quality Enhancement using Non-Rigid Semantic Patches |
Authors | Ester Hait, Guy Gilboa |
Abstract | We propose to combine semantic data and registration algorithms to solve various image processing problems such as denoising, super-resolution and color correction. It is shown how such new techniques can achieve significant quality enhancement, both visually and quantitatively, in the case of facial image enhancement. Our model assumes prior high-quality data of the person to be processed, but no knowledge of the degradation model. We try to overcome classical processing limits by using semantically aware patches (regions of coherent structure and context, with adaptive size and location) as building blocks. The method is demonstrated on the problem of cellular-photography enhancement of dark facial images for different identities, expressions and poses. |
Tasks | Denoising, Image Enhancement, Super-Resolution |
Published | 2016-09-27 |
URL | http://arxiv.org/abs/1609.08475v2 |
http://arxiv.org/pdf/1609.08475v2.pdf | |
PWC | https://paperswithcode.com/paper/blind-facial-image-quality-enhancement-using |
Repo | |
Framework | |
Preference Elicitation For Single Crossing Domain
Title | Preference Elicitation For Single Crossing Domain |
Authors | Palash Dey, Neeldhara Misra |
Abstract | Eliciting the preferences of a set of agents over a set of alternatives is a problem of fundamental importance in social choice theory. Prior work on this problem has studied the query complexity of preference elicitation for the unrestricted domain and for the domain of single peaked preferences. In this paper, we consider the domain of single crossing preference profiles and study the query complexity of preference elicitation under various settings. We consider two distinct situations: when an ordering of the voters with respect to which the profile is single crossing is known, versus when it is unknown. We also consider different access models: when the votes can be accessed at random, as opposed to when they arrive in a pre-defined sequence. In the sequential access model, we distinguish two cases when the ordering is known: when the sequence in which the votes appear is itself a single-crossing order, versus when it is not. The main contribution of our work is to provide polynomial time algorithms with low query complexity for preference elicitation in all six cases above. Further, we show that the query complexities of our algorithms are optimal up to constant factors for all but one of these cases. We then present preference elicitation algorithms for profiles which are close to being single crossing under various notions of closeness, for example, single crossing width and the minimum number of candidates or voters whose deletion makes a profile single crossing. |
Tasks | |
Published | 2016-04-15 |
URL | http://arxiv.org/abs/1604.05194v1 |
http://arxiv.org/pdf/1604.05194v1.pdf | |
PWC | https://paperswithcode.com/paper/preference-elicitation-for-single-crossing |
Repo | |
Framework | |
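As a point of reference for the domain studied above, the single-crossing condition itself is easy to check once a voter ordering is given: for every pair of candidates, the sequence of voters preferring one over the other may flip at most once along the order. A hedged pure-Python sketch of that check (not the elicitation algorithms of the paper):

```python
from itertools import combinations

def is_single_crossing(profile):
    """Check whether a profile (a list of votes, each a ranked list of
    candidates, taken in the given voter order) is single crossing:
    for every candidate pair, preferences flip at most once."""
    candidates = profile[0]
    for a, b in combinations(candidates, 2):
        flags = [vote.index(a) < vote.index(b) for vote in profile]
        flips = sum(flags[i] != flags[i + 1] for i in range(len(flags) - 1))
        if flips > 1:
            return False
    return True

print(is_single_crossing([["a", "b", "c"], ["a", "b", "c"], ["c", "b", "a"]]))  # True
print(is_single_crossing([["a", "b"], ["b", "a"], ["a", "b"]]))  # False
```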
Situated Structure Learning of a Bayesian Logic Network for Commonsense Reasoning
Title | Situated Structure Learning of a Bayesian Logic Network for Commonsense Reasoning |
Authors | Haley Garrison, Sonia Chernova |
Abstract | This paper details the implementation of an algorithm for automatically generating a high-level knowledge network to perform commonsense reasoning, specifically for the application of robotic task repair. The network is represented using a Bayesian Logic Network (BLN) (Jain, Waldherr, and Beetz 2009), which combines a set of directed relations between abstract concepts, including IsA, AtLocation, HasProperty, and UsedFor, with a corresponding probability distribution that models the uncertainty inherent in these relations. Inference over this network enables reasoning over the abstract concepts in order to perform appropriate object substitution or to locate missing objects in the robot’s environment. The structure of the network is generated by combining information from two existing knowledge sources: ConceptNet (Speer and Havasi 2012) and WordNet (Miller 1995). This is done in a “situated” manner by only including information relevant to a given context. Results show that the generated network is able to accurately predict object categories, locations, properties, and affordances in three different household scenarios. |
Tasks | |
Published | 2016-07-01 |
URL | http://arxiv.org/abs/1607.00428v1 |
http://arxiv.org/pdf/1607.00428v1.pdf | |
PWC | https://paperswithcode.com/paper/situated-structure-learning-of-a-bayesian |
Repo | |
Framework | |
A Sparse PCA Approach to Clustering
Title | A Sparse PCA Approach to Clustering |
Authors | T. Tony Cai, Linjun Zhang |
Abstract | We discuss a clustering method for the Gaussian mixture model based on the sparse principal component analysis (SPCA) method and compare it with the IF-PCA method. We also discuss the dependent case, where the covariance matrix $\Sigma$ is not necessarily diagonal. |
Tasks | |
Published | 2016-02-16 |
URL | http://arxiv.org/abs/1602.05236v1 |
http://arxiv.org/pdf/1602.05236v1.pdf | |
PWC | https://paperswithcode.com/paper/a-sparse-pca-approach-to-clustering |
Repo | |
Framework | |
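The abstract above is terse; as a loudly hypothetical illustration of the general idea (not the authors' estimator), a sparse leading principal component can be approximated by truncated power iteration, which zeroes all but the k largest-magnitude coordinates of the eigenvector estimate at every step:

```python
def sparse_pc1(cov, k, iters=200):
    """Approximate a k-sparse leading eigenvector of a covariance
    matrix (given as a list of lists) by truncated power iteration."""
    d = len(cov)
    v = [1.0 / d] * d  # uniform start
    for _ in range(iters):
        # one power step: w = cov @ v
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        # truncation: keep only the k largest-magnitude coordinates
        keep = set(sorted(range(d), key=lambda i: -abs(w[i]))[:k])
        w = [w[i] if i in keep else 0.0 for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Diagonal covariance: the 1-sparse first PC concentrates on the
# coordinate with the largest variance.
print(sparse_pc1([[2.0, 0, 0], [0, 1.0, 0], [0, 0, 0.5]], k=1))
```

Cluster labels could then be read off the signs of the projections onto this sparse direction, mirroring how PCA-type clustering methods operate.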
Right whale recognition using convolutional neural networks
Title | Right whale recognition using convolutional neural networks |
Authors | Andrei Polzounov, Ilmira Terpugova, Deividas Skiparis, Andrei Mihai |
Abstract | We studied the feasibility of recognizing individual right whales (Eubalaena glacialis) using convolutional neural networks. Prior studies have shown that CNNs can be used in a wide range of classification and categorization tasks, such as automated human face recognition. To test the applicability of deep learning to whale recognition, we developed several models based on best practices from the literature. Here, we describe the performance of the models. We conclude that machine recognition of whales is feasible and comment on the difficulty of the problem. |
Tasks | Face Recognition |
Published | 2016-04-19 |
URL | http://arxiv.org/abs/1604.05605v1 |
http://arxiv.org/pdf/1604.05605v1.pdf | |
PWC | https://paperswithcode.com/paper/right-whale-recognition-using-convolutional |
Repo | |
Framework | |
Dual Smoothing and Level Set Techniques for Variational Matrix Decomposition
Title | Dual Smoothing and Level Set Techniques for Variational Matrix Decomposition |
Authors | Aleksandr Y. Aravkin, Stephen Becker |
Abstract | We focus on the robust principal component analysis (RPCA) problem, and review a range of old and new convex formulations for the problem and its variants. We then review dual smoothing and level set techniques in convex optimization, present several novel theoretical results, and apply the techniques on the RPCA problem. In the final sections, we show a range of numerical experiments for simulated and real-world problems. |
Tasks | |
Published | 2016-03-01 |
URL | http://arxiv.org/abs/1603.00284v1 |
http://arxiv.org/pdf/1603.00284v1.pdf | |
PWC | https://paperswithcode.com/paper/dual-smoothing-and-level-set-techniques-for |
Repo | |
Framework | |
An algorithm with nearly optimal pseudo-regret for both stochastic and adversarial bandits
Title | An algorithm with nearly optimal pseudo-regret for both stochastic and adversarial bandits |
Authors | Peter Auer, Chao-Kai Chiang |
Abstract | We present an algorithm that achieves almost optimal pseudo-regret bounds against adversarial and stochastic bandits. Against adversarial bandits the pseudo-regret is $O(K\sqrt{n \log n})$ and against stochastic bandits the pseudo-regret is $O(\sum_i (\log n)/\Delta_i)$. We also show that no algorithm with $O(\log n)$ pseudo-regret against stochastic bandits can achieve $\tilde{O}(\sqrt{n})$ expected regret against adaptive adversarial bandits. This complements previous results of Bubeck and Slivkins (2012) that show $\tilde{O}(\sqrt{n})$ expected adversarial regret with $O((\log n)^2)$ stochastic pseudo-regret. |
Tasks | |
Published | 2016-05-27 |
URL | http://arxiv.org/abs/1605.08722v1 |
http://arxiv.org/pdf/1605.08722v1.pdf | |
PWC | https://paperswithcode.com/paper/an-algorithm-with-nearly-optimal-pseudo |
Repo | |
Framework | |
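For context on the quantity being bounded above: against stochastic bandits, the pseudo-regret depends only on the per-arm suboptimality gaps $\Delta_i$ and the pull counts. A minimal sketch of that definition (generic, not the algorithm of the paper):

```python
def pseudo_regret(means, pulls):
    """Stochastic pseudo-regret: sum over arms of gap * pull count,
    where the gap Delta_i is the distance from arm i's mean to the
    best arm's mean."""
    best = max(means)
    return sum((best - m) * n for m, n in zip(means, pulls))

# Best arm has mean 0.9; the gaps 0.4 and 0.5 are paid 5 and 2 times.
print(pseudo_regret([0.9, 0.5, 0.4], [10, 5, 2]))  # 3.0
```

The $O(\sum_i (\log n)/\Delta_i)$ bound says the expected pull count of each suboptimal arm stays near $(\log n)/\Delta_i^2$, so each term above grows only logarithmically in $n$.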
Learning From Hidden Traits: Joint Factor Analysis and Latent Clustering
Title | Learning From Hidden Traits: Joint Factor Analysis and Latent Clustering |
Authors | Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos |
Abstract | Dimensionality reduction techniques play an essential role in data analytics, signal processing and machine learning. Dimensionality reduction is usually performed in a preprocessing stage that is separate from subsequent data analysis, such as clustering or classification. Finding reduced-dimension representations that are well-suited for the intended task is more appealing. This paper proposes a joint factor analysis and latent clustering framework, which aims at learning cluster-aware low-dimensional representations of matrix and tensor data. The proposed approach leverages matrix and tensor factorization models that produce essentially unique latent representations of the data to unravel latent cluster structure – which is otherwise obscured because of the freedom to apply an oblique transformation in latent space. At the same time, latent cluster structure is used as prior information to enhance the performance of factorization. Specific contributions include several custom-built problem formulations, corresponding algorithms, and discussion of associated convergence properties. Besides extensive simulations, real-world datasets such as Reuters document data and MNIST image data are also employed to showcase the effectiveness of the proposed approaches. |
Tasks | Dimensionality Reduction |
Published | 2016-05-21 |
URL | http://arxiv.org/abs/1605.06711v1 |
http://arxiv.org/pdf/1605.06711v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-from-hidden-traits-joint-factor |
Repo | |
Framework | |
Deformable Parts Correlation Filters for Robust Visual Tracking
Title | Deformable Parts Correlation Filters for Robust Visual Tracking |
Authors | Alan Lukežič, Luka Čehovin, Matej Kristan |
Abstract | Deformable parts models show great potential in tracking by principally addressing non-rigid object deformations and self-occlusions, but according to recent benchmarks, they often lag behind holistic approaches. The reason is that a potentially large number of degrees of freedom has to be estimated for object localization, and simplifications of the constellation topology are often assumed to make the inference tractable. We present a new formulation of the constellation model with correlation filters that treats the geometric and visual constraints within a single convex cost function and derive a highly efficient optimization for MAP inference of a fully-connected constellation. We propose a tracker that models the object at two levels of detail. The coarse level corresponds to a root correlation filter and a novel color model for approximate object localization, while the mid-level representation is composed of the new deformable constellation of correlation filters that refine the object location. The resulting tracker is rigorously analyzed on the highly challenging OTB, VOT2014 and VOT2015 benchmarks, exhibits state-of-the-art performance and runs in real time. |
Tasks | Object Localization, Visual Tracking |
Published | 2016-05-12 |
URL | http://arxiv.org/abs/1605.03720v1 |
http://arxiv.org/pdf/1605.03720v1.pdf | |
PWC | https://paperswithcode.com/paper/deformable-parts-correlation-filters-for |
Repo | |
Framework | |
A Readability Analysis of Campaign Speeches from the 2016 US Presidential Campaign
Title | A Readability Analysis of Campaign Speeches from the 2016 US Presidential Campaign |
Authors | Elliot Schumacher, Maxine Eskenazi |
Abstract | Readability is defined here as the reading level of a speech, from grade 1 to grade 12. It results from the use of the REAP readability analysis (vocabulary - Collins-Thompson and Callan, 2004; syntax - Heilman et al., 2006, 2007), which uses the lexical content and grammatical structure of the sentences in a document to predict the reading level. After analysis, results were grouped into the average readability of each candidate, the evolution of each candidate’s speeches’ readability over time, and the standard deviation, i.e., how much each candidate varied their speech from one venue to another. For comparison, one speech from each of four past presidents and the Gettysburg Address were also analyzed. |
Tasks | |
Published | 2016-03-18 |
URL | http://arxiv.org/abs/1603.05739v1 |
http://arxiv.org/pdf/1603.05739v1.pdf | |
PWC | https://paperswithcode.com/paper/a-readability-analysis-of-campaign-speeches |
Repo | |
Framework | |
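The REAP analyzer used above models vocabulary and syntax and is not sketchable here; as a loudly simplified stand-in, a classical surface-level formula such as the Automated Readability Index (ARI) already maps text to an approximate U.S. grade level from character, word and sentence counts:

```python
def automated_readability_index(text):
    """ARI grade estimate: 4.71*(chars/words) + 0.5*(words/sentences) - 21.43.
    A crude surface measure used here only as a stand-in; systems such
    as REAP also model vocabulary difficulty and grammatical structure."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = text.split()
    chars = sum(len(w.strip(".,;:!?\"'")) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / len(sentences) - 21.43

simple = "The cat sat. The dog ran."
complex_ = ("Notwithstanding considerable rhetorical sophistication, "
            "electoral addresses frequently sacrifice nuance.")
# Longer words and sentences push the estimated grade level up.
print(automated_readability_index(simple) < automated_readability_index(complex_))  # True
```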
Loss factorization, weakly supervised learning and label noise robustness
Title | Loss factorization, weakly supervised learning and label noise robustness |
Authors | Giorgio Patrini, Frank Nielsen, Richard Nock, Marcello Carioni |
Abstract | We prove that the empirical risk of most well-known loss functions factors into a linear term aggregating all labels with a term that is label free, and can further be expressed by sums of the loss. This holds true even for non-smooth, non-convex losses and in any RKHS. The first term is a (kernel) mean operator –the focal quantity of this work– which we characterize as the sufficient statistic for the labels. The result tightens known generalization bounds and sheds new light on their interpretation. Factorization has a direct application on weakly supervised learning. In particular, we demonstrate that algorithms like SGD and proximal methods can be adapted with minimal effort to handle weak supervision, once the mean operator has been estimated. We apply this idea to learning with asymmetric noisy labels, connecting and extending prior work. Furthermore, we show that most losses enjoy a data-dependent (by the mean operator) form of noise robustness, in contrast with known negative results. |
Tasks | |
Published | 2016-02-08 |
URL | http://arxiv.org/abs/1602.02450v2 |
http://arxiv.org/pdf/1602.02450v2.pdf | |
PWC | https://paperswithcode.com/paper/loss-factorization-weakly-supervised-learning |
Repo | |
Framework | |
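The factorization claimed above is easy to verify numerically in one concrete special case: with labels in {-1, +1} and the square loss, y_i^2 = 1, so the empirical risk splits into a label-free term plus a linear term through the mean operator mu = (1/m) * sum_i y_i * x_i. A hedged sketch of that special case (the paper proves the general statement):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def square_risk(w, X, y):
    """Empirical square-loss risk: (1/m) * sum_i (y_i - <w, x_i>)^2."""
    m = len(y)
    return sum((yi - dot(w, xi)) ** 2 for xi, yi in zip(X, y)) / m

def factored_risk(w, X, y):
    """Same risk via the factorization: since y_i^2 = 1 for labels in
    {-1, +1}, the labels enter only through the (kernel) mean operator
    mu = (1/m) * sum_i y_i * x_i."""
    m = len(y)
    d = len(X[0])
    mu = [sum(yi * xi[j] for xi, yi in zip(X, y)) / m for j in range(d)]
    label_free = 1.0 + sum(dot(w, xi) ** 2 for xi in X) / m
    return label_free - 2.0 * dot(w, mu)

X = [[1.0, 0.5], [0.2, -1.0], [-0.7, 0.3]]
y = [1.0, -1.0, 1.0]
w = [0.4, -0.9]
assert abs(square_risk(w, X, y) - factored_risk(w, X, y)) < 1e-12
```

This is what makes the weak-supervision application work: once mu is estimated, the labels themselves are no longer needed to evaluate (or optimize) the risk.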
Detecting Changes Between Optical Images of Different Spatial and Spectral Resolutions: a Fusion-Based Approach
Title | Detecting Changes Between Optical Images of Different Spatial and Spectral Resolutions: a Fusion-Based Approach |
Authors | Vinicius Ferraris, Nicolas Dobigeon, Qi Wei, Marie Chabert |
Abstract | Change detection is one of the most challenging issues when analyzing remotely sensed images. Comparing several multi-date images acquired through the same kind of sensor is the most common scenario. Conversely, designing robust, flexible and scalable algorithms for change detection becomes even more challenging when the images have been acquired by two different kinds of sensors. This situation arises, for example, in emergency scenarios under critical constraints. This paper presents, to the best of the authors’ knowledge, the first strategy to deal with optical images characterized by dissimilar spatial and spectral resolutions. Typical scenarios considered include change detection between panchromatic or multispectral and hyperspectral images. The proposed strategy consists of a three-step procedure: i) inferring a high spatial and spectral resolution image by fusion of the two observed images, characterized one by a low spatial resolution and the other by a low spectral resolution; ii) predicting two images with respectively the same spatial and spectral resolutions as the observed images by degradation of the fused one; and iii) applying a decision rule to each pair of observed and predicted images characterized by the same spatial and spectral resolutions to identify changes. The performance of the proposed framework is evaluated on real images with simulated realistic changes. |
Tasks | |
Published | 2016-09-20 |
URL | http://arxiv.org/abs/1609.06074v1 |
http://arxiv.org/pdf/1609.06074v1.pdf | |
PWC | https://paperswithcode.com/paper/detecting-changes-between-optical-images-of |
Repo | |
Framework | |
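Step iii) of the pipeline above, the per-pixel decision rule, can be sketched minimally: once an observed image and its prediction share the same resolution, flag the pixels whose absolute difference exceeds a threshold. Both the simple absolute-difference rule and the threshold `tau` are illustrative assumptions here, not the paper's exact decision rule:

```python
def change_map(observed, predicted, tau):
    """Binary change mask: True where |observed - predicted| > tau.
    Both inputs are 2-D lists of pixel values with identical
    spatial resolution (as produced by the degradation step)."""
    return [[abs(o - p) > tau for o, p in zip(row_o, row_p)]
            for row_o, row_p in zip(observed, predicted)]

obs = [[0.1, 0.9], [0.5, 0.5]]
pred = [[0.1, 0.2], [0.5, 0.6]]
print(change_map(obs, pred, tau=0.3))  # [[False, True], [False, False]]
```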
Human Action Localization with Sparse Spatial Supervision
Title | Human Action Localization with Sparse Spatial Supervision |
Authors | Philippe Weinzaepfel, Xavier Martin, Cordelia Schmid |
Abstract | We introduce an approach for spatio-temporal human action localization using sparse spatial supervision. Our method leverages the large amount of annotated humans available today and extracts human tubes by combining a state-of-the-art human detector with a tracking-by-detection approach. Given these high-quality human tubes and temporal supervision, we select positive and negative tubes with very sparse spatial supervision, i.e., only one spatially annotated frame per instance. The selected tubes allow us to effectively learn a spatio-temporal action detector based on dense trajectories or CNNs. We conduct experiments on existing action localization benchmarks: UCF-Sports, J-HMDB and UCF-101. Our results show that our approach, despite using sparse spatial supervision, performs on par with methods using full supervision, i.e., one bounding box annotation per frame. To further validate our method, we introduce DALY (Daily Action Localization in YouTube), a dataset for realistic action localization in space and time. It contains high quality temporal and spatial annotations for 3.6k instances of 10 actions in 31 hours of videos (3.3M frames). It is an order of magnitude larger than existing datasets, with more diversity in appearance and long untrimmed videos. |
Tasks | Action Localization |
Published | 2016-05-17 |
URL | http://arxiv.org/abs/1605.05197v2 |
http://arxiv.org/pdf/1605.05197v2.pdf | |
PWC | https://paperswithcode.com/paper/human-action-localization-with-sparse-spatial |
Repo | |
Framework | |