July 27, 2019

3246 words 16 mins read

Paper Group ANR 546

A Comparative Analysis of Social Network Pages by Interests of Their Followers. High-Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition. The High-Dimensional Geometry of Binary Neural Networks. Random Forest Missing Data Algorithms. Pose2Instance: Harnessing Keypoints for Person Instance Segmentation. D …

A Comparative Analysis of Social Network Pages by Interests of Their Followers

Title A Comparative Analysis of Social Network Pages by Interests of Their Followers
Authors Elena Mikhalkova, Nadezhda Ganzherli, Yuri Karyakin
Abstract Being a matter of cognition, user interests should be apt to classification independent of the language of users, the social network, and the content of interest itself. To prove this, we analyze a collection of English and Russian Twitter and Vkontakte community pages by the interests of their followers. First, we create a model of Major Interests (MaIs) with the help of expert analysis and then classify a set of pages using machine learning algorithms (SVM, Neural Network, Naive Bayes, and some others). We take three interest domains that are typical of both English- and Russian-speaking communities: football, rock music, and vegetarianism. The classification results show a greater correlation between Russian-Vkontakte and Russian-Twitter pages, while English-Twitter pages appear to provide the highest score.
Tasks
Published 2017-07-18
URL http://arxiv.org/abs/1707.05481v2
PDF http://arxiv.org/pdf/1707.05481v2.pdf
PWC https://paperswithcode.com/paper/a-comparative-analysis-of-social-network
Repo
Framework
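
As a hedged illustration of the classification step described above, the following sketch trains a scikit-learn pipeline on page texts labeled with the three interest domains from the abstract. The toy page texts, the TF-IDF features, and the linear SVM settings are assumptions for illustration; the paper does not specify this exact setup.

```python
# A minimal sketch of an interest-classification pipeline in the spirit of the
# paper: TF-IDF features over community-page text, classified with a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

pages = [
    "match highlights goals league cup fixtures",   # hypothetical page texts
    "band tour album guitar riff live show",
    "plant-based recipes tofu lentils vegan",
]
labels = ["football", "rock", "vegetarianism"]      # the three MaIs from the paper

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(pages, labels)
print(clf.predict(["new vegan curry recipe with chickpeas"]))
```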

High-Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition

Title High-Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition
Authors David Gamarnik, Ilias Zadik
Abstract We consider a sparse linear regression model $Y=X\beta^*+W$ where $X$ has Gaussian entries, $W$ is the noise vector with mean-zero Gaussian entries, and $\beta^*$ is a binary vector with support size (sparsity) $k$. Using a novel conditional second moment method we obtain a tight, up to a multiplicative constant, approximation of the optimal squared error $\min_{\beta}\|Y-X\beta\|_{2}$, where the minimization is over all $k$-sparse binary vectors $\beta$. The approximation reveals interesting structural properties of the underlying regression problem. In particular, (a) we establish that $n^*=2k\log p/\log(2k/\sigma^{2}+1)$ is a phase transition point with the following “all-or-nothing” property: when $n$ exceeds $n^{*}$, $(2k)^{-1}\|\beta_{2}-\beta^*\|_0\approx 0$, and when $n$ is below $n^{*}$, $(2k)^{-1}\|\beta_{2}-\beta^*\|_0\approx 1$, where $\beta_{2}$ is the optimal solution achieving the smallest squared error. With this we prove that $n^{*}$ is the asymptotic threshold for recovering $\beta^*$ information-theoretically. (b) We compute the squared error for an intermediate problem $\min_{\beta}\|Y-X\beta\|_{2}$ where the minimization is restricted to vectors $\beta$ with $\|\beta-\beta^{*}\|_0=2k\zeta$ for $\zeta\in[0,1]$. We show that a lower bound part $\Gamma(\zeta)$ of the estimate, which corresponds to the estimate based on the first moment method, undergoes a phase transition at three different thresholds: first at $n_{\text{inf,1}}=\sigma^2\log p$, which is the information-theoretic bound for recovering $\beta^*$ when $k=1$ and $\sigma$ is large, then at $n^{*}$, and finally at $n_{\text{LASSO/CS}}$. (c) We establish a certain Overlap Gap Property (OGP) on the space of all binary vectors $\beta$ when $n\le ck\log p$ for a sufficiently small constant $c$. We conjecture that OGP is the source of the algorithmic hardness of solving the minimization problem $\min_{\beta}\|Y-X\beta\|_{2}$ in the regime $n<n_{\text{LASSO/CS}}$.
Tasks
Published 2017-01-16
URL https://arxiv.org/abs/1701.04455v3
PDF https://arxiv.org/pdf/1701.04455v3.pdf
PWC https://paperswithcode.com/paper/high-dimensional-regression-with-binary
Repo
Framework
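
The phase-transition statements above concern the exact minimization over $k$-sparse binary vectors. A toy simulation of the model $Y = X\beta^* + W$, with a brute-force search over all $k$-sparse binary vectors, makes the quantity being analyzed concrete; the tiny dimensions are an assumption so the exhaustive search stays feasible, whereas the paper's results concern the high-dimensional regime.

```python
# Toy instance of Y = X beta* + W with a brute-force search for the k-sparse
# binary vector minimizing the squared error, i.e. the paper's objective.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p, k, sigma = 40, 12, 3, 0.5
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:k] = 1.0
Y = X @ beta_star + sigma * rng.standard_normal(n)

best_err, best_beta = np.inf, None
for support in itertools.combinations(range(p), k):  # all k-sparse binary vectors
    beta = np.zeros(p)
    beta[list(support)] = 1.0
    err = np.sum((Y - X @ beta) ** 2)
    if err < best_err:
        best_err, best_beta = err, beta

print("optimal squared error:", best_err)
print("support overlap with beta*:", int(best_beta @ beta_star))  # k = exact recovery
```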

The High-Dimensional Geometry of Binary Neural Networks

Title The High-Dimensional Geometry of Binary Neural Networks
Authors Alexander G. Anderson, Cory P. Berg
Abstract Recent research has shown that one can train a neural network with binary weights and activations at train time by augmenting the weights with a high-precision continuous latent variable that accumulates small changes from stochastic gradient descent. However, there is a dearth of theoretical analysis to explain why we can effectively capture the features in our data with binary weights and activations. Our main result is that the neural networks with binary weights and activations trained using the method of Courbariaux, Hubara et al. (2016) work because of the high-dimensional geometry of binary vectors. In particular, the ideal continuous vectors that extract out features in the intermediate representations of these BNNs are well-approximated by binary vectors in the sense that dot products are approximately preserved. Compared to previous research that demonstrated the viability of such BNNs, our work explains why these BNNs work in terms of the HD geometry. Our theory serves as a foundation for understanding not only BNNs but a variety of methods that seek to compress traditional neural networks. Furthermore, a better understanding of multilayer binary neural networks serves as a starting point for generalizing BNNs to other neural network architectures such as recurrent neural networks.
Tasks
Published 2017-05-19
URL http://arxiv.org/abs/1705.07199v1
PDF http://arxiv.org/pdf/1705.07199v1.pdf
PWC https://paperswithcode.com/paper/the-high-dimensional-geometry-of-binary
Repo
Framework
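
The claim that binarization approximately preserves dot products can be checked numerically in a few lines. The sketch below is a minimal illustration rather than the paper's experiment: it compares a random Gaussian weight vector with its sign-binarized version, and in high dimensions the cosine similarity concentrates near $\sqrt{2/\pi} \approx 0.80$.

```python
# Numerical check of the central geometric claim: in high dimensions a random
# continuous weight vector and its binarized (sign) version point in nearly
# the same direction, so dot products are approximately preserved up to scale.
import numpy as np

rng = np.random.default_rng(0)
d = 4096
w = rng.standard_normal(d)   # ideal continuous weight vector
w_bin = np.sign(w)           # its binary approximation

cos = w @ w_bin / (np.linalg.norm(w) * np.linalg.norm(w_bin))
print(cos)                   # concentrates near sqrt(2/pi) ~ 0.80 for Gaussian w
```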

Random Forest Missing Data Algorithms

Title Random Forest Missing Data Algorithms
Authors Fei Tang, Hemant Ishwaran
Abstract Random forest (RF) missing data algorithms are an attractive approach for dealing with missing data. They have the desirable properties of being able to handle mixed types of missing data, they are adaptive to interactions and nonlinearity, and they have the potential to scale to big data settings. Currently there are many different RF imputation algorithms but relatively little guidance about their efficacy, which motivated us to study their performance. Using a large, diverse collection of data sets, the performance of various RF algorithms was assessed under different missing data mechanisms. Algorithms included proximity imputation, on-the-fly imputation, and imputation utilizing multivariate unsupervised and supervised splitting, the latter class representing a generalization of a promising new imputation algorithm called missForest. Performance of the algorithms was assessed by their ability to impute data accurately. Our findings reveal RF imputation to be generally robust, with performance improving with increasing correlation. Performance was good under moderate to high missingness, and even (in certain cases) when data was missing not at random.
Tasks Imputation
Published 2017-01-19
URL http://arxiv.org/abs/1701.05305v2
PDF http://arxiv.org/pdf/1701.05305v2.pdf
PWC https://paperswithcode.com/paper/random-forest-missing-data-algorithms
Repo
Framework
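
For readers who want to try a missForest-style approach, the following sketch uses scikit-learn's IterativeImputer with a random-forest regressor, which round-robins over variables in the spirit of the supervised-splitting imputation family the paper evaluates. This is an illustrative stand-in, not the randomForestSRC implementations the authors actually studied.

```python
# missForest-style imputation sketch: iterative round-robin regression of each
# variable on the others, using a random forest as the per-variable model.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X[rng.random(X.shape) < 0.2] = np.nan   # 20% missing completely at random

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=10, random_state=0,
)
X_imputed = imputer.fit_transform(X)    # no NaNs remain
```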

Pose2Instance: Harnessing Keypoints for Person Instance Segmentation

Title Pose2Instance: Harnessing Keypoints for Person Instance Segmentation
Authors Subarna Tripathi, Maxwell Collins, Matthew Brown, Serge Belongie
Abstract Human keypoints are a well-studied representation of people. We explore how to use keypoint models to improve instance-level person segmentation. The main idea is to harness the notion of a distance transform of oracle-provided keypoints or estimated keypoint heatmaps as a prior for the person instance segmentation task within a deep neural network. For training and evaluation, we consider all images from COCO where both instance segmentation and human keypoint annotations are available. We first show how oracle keypoints can boost the performance of an existing human segmentation model during inference without any training. Next, we propose a framework to directly learn a deep instance segmentation model conditioned on human pose. Experimental results show that at various Intersection over Union (IoU) thresholds, in a constrained environment with oracle keypoints, instance segmentation accuracy achieves 10% to 12% relative improvement over a strong baseline of oracle bounding boxes. In a more realistic environment, without oracle keypoints, the proposed deep person instance segmentation model conditioned on human pose achieves 3.8% to 10.5% relative improvement compared with its strongest baseline, a deep network trained only for segmentation.
Tasks Instance Segmentation, Semantic Segmentation
Published 2017-04-04
URL http://arxiv.org/abs/1704.01152v1
PDF http://arxiv.org/pdf/1704.01152v1.pdf
PWC https://paperswithcode.com/paper/pose2instance-harnessing-keypoints-for-person
Repo
Framework
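
The distance-transform prior at the heart of the method can be sketched directly with SciPy: rasterize the keypoints, compute the distance to the nearest keypoint at every pixel, and feed the normalized map to the segmentation network as an extra channel. The keypoint coordinates and normalization below are illustrative assumptions.

```python
# Turn keypoints into a dense distance-transform prior for segmentation.
import numpy as np
from scipy.ndimage import distance_transform_edt

H, W = 256, 256
keypoints = [(60, 100), (90, 120), (150, 110)]  # hypothetical (row, col) joints

mask = np.ones((H, W), dtype=bool)
for r, c in keypoints:
    mask[r, c] = False                           # zeros at keypoint locations
dist_prior = distance_transform_edt(mask)        # distance to nearest keypoint
dist_prior /= dist_prior.max()                   # normalize to [0, 1] for the net
```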

Dense Non-rigid Structure-from-Motion Made Easy - A Spatial-Temporal Smoothness based Solution

Title Dense Non-rigid Structure-from-Motion Made Easy - A Spatial-Temporal Smoothness based Solution
Authors Yuchao Dai, Huizhong Deng, Mingyi He
Abstract This paper proposes a simple spatial-temporal smoothness based method for solving dense non-rigid structure-from-motion (NRSfM). First, we revisit temporal smoothness and demonstrate that it can be extended to the dense case directly. Second, we propose to exploit spatial smoothness by resorting to the Laplacian of the 3D non-rigid shape. Third, to handle real-world noise and outliers in the measurements, we robustify the data term by using the $L_1$ norm. In this way, our method can robustly and effectively exploit both spatial and temporal smoothness, making dense non-rigid reconstruction easy. Our method is very easy to implement, involving only the solution of a series of least squares problems. Experimental results on both synthetic and real-image dense NRSfM tasks show that the proposed method outperforms state-of-the-art dense non-rigid reconstruction methods.
Tasks
Published 2017-06-27
URL http://arxiv.org/abs/1706.08629v1
PDF http://arxiv.org/pdf/1706.08629v1.pdf
PWC https://paperswithcode.com/paper/dense-non-rigid-structure-from-motion-made
Repo
Framework
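
To make the temporal-smoothness term concrete, the sketch below solves the regularized least-squares problem $\min_S \|S - S_{\text{obs}}\|^2 + \lambda\|DS\|^2$ with a frame-to-frame finite-difference operator $D$, which has the closed form $(I + \lambda D^\top D)S = S_{\text{obs}}$. This illustrates only the smoothness structure, under assumed toy data; the full pipeline (camera estimation, the $L_1$ data term, the spatial Laplacian) is not reproduced here.

```python
# Temporal smoothness as regularized least squares over per-frame 3D shapes.
import numpy as np

F, P = 30, 50                                    # frames, points
rng = np.random.default_rng(0)
S_obs = np.cumsum(0.1 * rng.standard_normal((F, 3 * P)), axis=0)  # toy shapes

lam = 10.0                                       # temporal smoothness weight
D = (np.eye(F) - np.eye(F, k=1))[:-1]            # (F-1) x F finite differences
# Closed form of min_S ||S - S_obs||^2 + lam * ||D S||^2, solved per column:
A = np.eye(F) + lam * D.T @ D
S_smooth = np.linalg.solve(A, S_obs)
```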

Deep Learning for Medical Image Processing: Overview, Challenges and Future

Title Deep Learning for Medical Image Processing: Overview, Challenges and Future
Authors Muhammad Imran Razzak, Saeeda Naz, Ahmad Zaib
Abstract The healthcare sector is different from other industries: it is a high-priority sector, and people expect the highest level of care and services regardless of cost. Even so, it has not met social expectations, despite consuming a huge percentage of the budget. Most interpretation of medical data is done by medical experts, and image interpretation by human experts is quite limited due to its subjectivity, the complexity of the images, the extensive variation that exists across interpreters, and fatigue. Following the success of deep learning in other real-world applications, it is also providing exciting and accurate solutions for medical imaging, and is seen as a key method for future applications in the health sector. In this chapter, we discuss state-of-the-art deep learning architectures and their optimization as used for medical image segmentation and classification. In the last section, we discuss the challenges of deep learning-based methods for medical imaging and open research issues.
Tasks Medical Image Segmentation, Semantic Segmentation
Published 2017-04-22
URL http://arxiv.org/abs/1704.06825v1
PDF http://arxiv.org/pdf/1704.06825v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-for-medical-image-processing
Repo
Framework

Governing Governance: A Formal Framework for Analysing Institutional Design and Enactment Governance

Title Governing Governance: A Formal Framework for Analysing Institutional Design and Enactment Governance
Authors Thomas C. King
Abstract This dissertation is motivated by the need, in today’s globalist world, for a precise way to enable governments, organisations and other regulatory bodies to evaluate the constraints they place on themselves and others. An organisation’s modus operandi is enacting and fulfilling contracts between itself and its participants. Yet, organisational contracts should respect external laws, such as those setting out data privacy rights and liberties. Contracts can only be enacted by following contract law processes, which often require bilateral agreement and consideration. Governments need to legislate whilst understanding today’s context of national and international governance hierarchy where law makers shun isolationism and seek to influence one another. Governments should avoid punishment by respecting constraints from international treaties and human rights charters. Governments can only enact legislation by following their own, pre-existing, law making procedures. In other words, institutions, such as laws and contracts are designed and enacted under constraints.
Tasks
Published 2017-04-21
URL http://arxiv.org/abs/1704.06654v1
PDF http://arxiv.org/pdf/1704.06654v1.pdf
PWC https://paperswithcode.com/paper/governing-governance-a-formal-framework-for
Repo
Framework

Estimating Cosmological Parameters from the Dark Matter Distribution

Title Estimating Cosmological Parameters from the Dark Matter Distribution
Authors Siamak Ravanbakhsh, Junier Oliva, Sebastien Fromenteau, Layne C. Price, Shirley Ho, Jeff Schneider, Barnabas Poczos
Abstract A grand challenge of 21st-century cosmology is to accurately estimate the cosmological parameters of our Universe. A major approach to estimating the cosmological parameters is to use the large-scale matter distribution of the Universe. Galaxy surveys provide the means to map out cosmic large-scale structure in three dimensions. Information about galaxy locations is typically summarized in a “single” function of scale, such as the galaxy correlation function or power spectrum. We show that it is possible to estimate these cosmological parameters directly from the distribution of matter. This paper presents the application of deep 3D convolutional networks to volumetric representations of dark-matter simulations, as well as the results obtained using a recently proposed distribution regression framework, showing that machine learning techniques are comparable to, and can sometimes outperform, maximum-likelihood point estimates using “cosmological models”. This opens the way to estimating the parameters of our Universe with higher accuracy.
Tasks
Published 2017-11-06
URL http://arxiv.org/abs/1711.02033v1
PDF http://arxiv.org/pdf/1711.02033v1.pdf
PWC https://paperswithcode.com/paper/estimating-cosmological-parameters-from-the
Repo
Framework
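
A minimal 3D-CNN regression model in PyTorch conveys the flavor of applying deep 3D convolutional networks to voxelized dark-matter density fields. The architecture, layer sizes, and the two-parameter output (e.g. $\Omega_m$ and $\sigma_8$) are illustrative assumptions, not the authors' network.

```python
# Minimal 3D-CNN that regresses cosmological parameters from density cubes.
import torch
import torch.nn as nn

class CosmoNet(nn.Module):
    def __init__(self, n_params=2):              # e.g. Omega_m and sigma_8
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, n_params),
        )

    def forward(self, x):                         # x: (batch, 1, D, H, W) density
        return self.head(self.features(x))

net = CosmoNet()
cube = torch.randn(4, 1, 32, 32, 32)              # toy voxelized density volumes
print(net(cube).shape)                            # -> torch.Size([4, 2])
```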

Probabilistic Adaptive Computation Time

Title Probabilistic Adaptive Computation Time
Authors Michael Figurnov, Artem Sobolev, Dmitry Vetrov
Abstract We present a probabilistic model with discrete latent variables that control the computation time in deep learning models such as ResNets and LSTMs. A prior on the latent variables expresses the preference for faster computation. The amount of computation for an input is determined via amortized maximum a posteriori (MAP) inference. MAP inference is performed using a novel stochastic variational optimization method. The recently proposed Adaptive Computation Time mechanism can be seen as an ad-hoc relaxation of this model. We demonstrate training using the general-purpose Concrete relaxation of discrete variables. Evaluation on ResNet shows that our method matches the speed-accuracy trade-off of Adaptive Computation Time, while allowing for evaluation with a simple deterministic procedure that has a lower memory footprint.
Tasks
Published 2017-12-01
URL http://arxiv.org/abs/1712.00386v1
PDF http://arxiv.org/pdf/1712.00386v1.pdf
PWC https://paperswithcode.com/paper/probabilistic-adaptive-computation-time
Repo
Framework
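
The Concrete (Gumbel-softmax) relaxation the paper trains with can be written in a few lines of PyTorch: add Gumbel noise to the logits and apply a tempered softmax, giving a differentiable sample that approaches a discrete halt/continue choice as the temperature goes to zero. Only the relaxation itself is sketched here; the amortized MAP inference and the ResNet/LSTM integration are not.

```python
# Concrete (Gumbel-softmax) relaxation of a discrete halt/continue variable.
import torch
import torch.nn.functional as F

def concrete_sample(logits, temperature=0.5):
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))  # Gumbel(0,1) noise
    return F.softmax((logits + gumbel) / temperature, dim=-1)

logits = torch.tensor([1.5, -0.5])        # preference for halting vs. continuing
soft_choice = concrete_sample(logits)     # differentiable, sums to 1
print(soft_choice)
```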

Determining sentiment in citation text and analyzing its impact on the proposed ranking index

Title Determining sentiment in citation text and analyzing its impact on the proposed ranking index
Authors Souvick Ghosh, Dipankar Das, Tanmoy Chakraborty
Abstract Whenever human beings interact with each other, they exchange or express opinions, emotions, and sentiments. These opinions can be expressed in text, speech, or images. Analysis of these sentiments is one of the popular research areas among present-day researchers. Sentiment analysis, also known as opinion mining, tries to identify or classify these sentiments or opinions into two broad categories: positive and negative. In recent years, the scientific community has taken a lot of interest in analyzing sentiment in textual data available on various social media platforms. Much work has been done on social media conversations, blog posts, newspaper articles, and various narrative texts. However, when it comes to identifying emotions in scientific papers, researchers have faced difficulties due to the implicit and hidden nature of the opinions expressed. By default, citation instances are considered inherently positive in emotion, and popular ranking and indexing paradigms often neglect the opinion present while citing. In this paper, we try to achieve three objectives. First, we identify the major sentiment in the citation text and assign a score to the instance, using a statistical classifier for this purpose. Second, we propose a new index (hereafter referred to as the M-index) which takes into account both quantitative and qualitative factors while scoring a paper. Third, we develop a ranking of research papers based on the M-index and explain how the M-index impacts the ranking of scientific papers.
Tasks Opinion Mining, Sentiment Analysis
Published 2017-07-05
URL http://arxiv.org/abs/1707.01425v1
PDF http://arxiv.org/pdf/1707.01425v1.pdf
PWC https://paperswithcode.com/paper/determining-sentiment-in-citation-text-and
Repo
Framework
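
The first objective, scoring the sentiment of a citation sentence with a statistical classifier, might look like the following scikit-learn sketch. The toy training sentences and the Naive Bayes choice are assumptions for illustration; the paper does not commit to this exact classifier or feature set.

```python
# Hypothetical citation-sentiment classifier: bag-of-words + Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

citations = [
    "X et al. propose an elegant and effective solution",   # toy training data
    "the method of X et al. fails to generalize",
    "we follow the notation of X et al.",
]
labels = ["positive", "negative", "neutral"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(citations, labels)
print(clf.predict(["building on the impressive results of X et al."]))
```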

A Deep Learning Perspective on the Origin of Facial Expressions

Title A Deep Learning Perspective on the Origin of Facial Expressions
Authors Ran Breuer, Ron Kimmel
Abstract Facial expressions play a significant role in human communication and behavior. Psychologists have long studied the relationship between facial expressions and emotions. Paul Ekman et al. devised the Facial Action Coding System (FACS) to taxonomize human facial expressions and model their behavior. The ability to recognize facial expressions automatically enables novel applications in fields like human-computer interaction, social gaming, and psychological research. There has been tremendously active research in this field, with several recent papers utilizing convolutional neural networks (CNNs) for feature extraction and inference. In this paper, we employ CNN understanding methods to study the relation between the features these computational networks use, the FACS, and Action Units (AUs). We verify our findings on the Extended Cohn-Kanade (CK+), NovaEmotions, and FER2013 datasets. We apply these models to various tasks and tests using transfer learning, including cross-dataset validation and cross-task performance. Finally, we exploit the nature of the FER-based CNN models for the detection of micro-expressions and achieve state-of-the-art accuracy using a simple long short-term memory (LSTM) recurrent neural network (RNN).
Tasks Transfer Learning
Published 2017-05-04
URL http://arxiv.org/abs/1705.01842v2
PDF http://arxiv.org/pdf/1705.01842v2.pdf
PWC https://paperswithcode.com/paper/a-deep-learning-perspective-on-the-origin-of
Repo
Framework
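
The final experiment, micro-expression detection with a simple LSTM, can be sketched as a recurrent head running over per-frame CNN feature vectors. The feature dimension, hidden size, and class count below are illustrative assumptions.

```python
# Simple LSTM over per-frame CNN features for micro-expression detection.
import torch
import torch.nn as nn

class MicroExpressionLSTM(nn.Module):
    def __init__(self, feat_dim=256, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, 128, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                 # x: (batch, frames, feat_dim) features
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])             # classify from the last hidden state

model = MicroExpressionLSTM()
clip = torch.randn(2, 30, 256)            # two 30-frame clips of CNN features
print(model(clip).shape)                  # -> torch.Size([2, 5])
```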

Golden Years, Golden Shores: A Study of Elders in Online Travel Communities

Title Golden Years, Golden Shores: A Study of Elders in Online Travel Communities
Authors Bartłomiej Balcerzak, Radosław Nielek
Abstract In this paper we present our exploratory findings related to extracting knowledge and experiences from a community of senior tourists. Using tools of qualitative analysis as well as a review of the literature, we verified a set of hypotheses related to the content created by senior tourists when participating in online communities. We also produced a codebook representing various themes one may encounter in such communities. This codebook, derived from our own qualitative research as well as the literature review, will serve as a basis for further development of automated tools for knowledge extraction. We also found that older adults, more often than other posters in tourist forums, mention their age in discussions and more often share their experiences and motivations for travel; however, they do not differ with respect to describing barriers encountered while traveling.
Tasks
Published 2017-08-22
URL http://arxiv.org/abs/1708.06550v1
PDF http://arxiv.org/pdf/1708.06550v1.pdf
PWC https://paperswithcode.com/paper/golden-years-golden-shores-a-study-of-elders
Repo
Framework

How Much Data is Enough? A Statistical Approach with Case Study on Longitudinal Driving Behavior

Title How Much Data is Enough? A Statistical Approach with Case Study on Longitudinal Driving Behavior
Authors Wenshuo Wang, Chang Liu, Ding Zhao
Abstract Big data has shown its uniquely powerful ability to reveal, model, and understand driver behaviors. The amount of data affects the experiment cost and the conclusions of the analysis: insufficient data may lead to inaccurate models, while excessive data wastes resources. For projects that cost millions of dollars, it is critical to determine the right amount of data needed. However, how to decide the appropriate amount has not been fully studied in the realm of driver behaviors. This paper systematically investigates this issue to estimate how much naturalistic driving data (NDD) is needed to understand driver behaviors from a statistical point of view. A general assessment method is proposed, using Gaussian kernel density estimation to capture the underlying characteristics of driver behaviors. We then apply the Kullback-Leibler divergence to measure the similarity between density functions estimated from differing amounts of NDD. A max-minimum approach is used to compute the appropriate amount of NDD. To validate the proposed method, we investigated the car-following case using NDD collected from the University of Michigan Safety Pilot Model Deployment (SPMD) program. We demonstrate that, from a statistical perspective, the proposed approach can provide an appropriate amount of NDD capable of capturing most features of normal car-following behavior, which is consistent with the experiment settings reported in much of the literature.
Tasks Density Estimation
Published 2017-06-23
URL http://arxiv.org/abs/1706.07637v1
PDF http://arxiv.org/pdf/1706.07637v1.pdf
PWC https://paperswithcode.com/paper/how-much-data-is-enough-a-statistical
Repo
Framework
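
The assessment recipe, KDE plus KL divergence plus a sufficiency check, is easy to prototype with SciPy: fit a Gaussian KDE on increasing amounts of (here synthetic) data and measure the divergence from the density fit on the full data set; the divergence flattening out suggests the data amount is sufficient. The synthetic signal below stands in for a real driving measurement such as car-following range.

```python
# Gaussian KDE + KL divergence as a data-sufficiency check, on synthetic data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
full = rng.normal(loc=2.0, scale=1.0, size=20000)  # stand-in behavior signal
grid = np.linspace(-2.0, 6.0, 400)
p_full = gaussian_kde(full)(grid)                  # reference density

def kl(p, q, dx, eps=1e-12):
    """Riemann-sum approximation of KL(p || q) on a regular grid."""
    return np.sum(p * np.log((p + eps) / (q + eps))) * dx

dx = grid[1] - grid[0]
for n in (100, 500, 2000, 10000):
    q = gaussian_kde(full[:n])(grid)               # density from a subsample
    print(n, kl(p_full, q, dx))                    # shrinks as n grows
```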

Calipso: Physics-based Image and Video Editing through CAD Model Proxies

Title Calipso: Physics-based Image and Video Editing through CAD Model Proxies
Authors Nazim Haouchine, Frederick Roy, Hadrien Courtecuisse, Matthias Nießner, Stephane Cotin
Abstract We present Calipso, an interactive method for editing images and videos in a physically-coherent manner. Our main idea is to realize physics-based manipulations by running a full physics simulation on proxy geometries given by non-rigidly aligned CAD models. Running these simulations allows us to apply new, unseen forces to move or deform selected objects, change physical parameters such as mass or elasticity, or even add entirely new objects that interact with the rest of the underlying scene. In Calipso, the user makes edits directly in 3D; these edits are processed by the simulation and then transferred to the target 2D content using shape-to-image correspondences in a photo-realistic rendering process. To align the CAD models, we introduce an efficient CAD-to-image alignment procedure that jointly minimizes rigid and non-rigid alignment while preserving the high-level structure of the input shape. Moreover, the user can choose to exploit image flow to estimate scene motion, producing coherent physical behavior with ambient dynamics. We demonstrate Calipso’s physics-based editing on a wide range of examples, producing myriad physical behaviors while preserving geometric and visual consistency.
Tasks
Published 2017-08-12
URL http://arxiv.org/abs/1708.03748v1
PDF http://arxiv.org/pdf/1708.03748v1.pdf
PWC https://paperswithcode.com/paper/calipso-physics-based-image-and-video-editing
Repo
Framework