May 6, 2019

2977 words 14 mins read

Paper Group ANR 323

A Deterministic Annealing Approach to the Multiple Traveling Salesmen and Related Problems. Social Network Processes in the Isabelle and Coq Theorem Proving Communities. PrivLogit: Efficient Privacy-preserving Logistic Regression by Tailoring Numerical Optimizers. Parallel Large-Scale Attribute Reduction on Cloud Systems. Tensor Representations via …

A Deterministic Annealing Approach to the Multiple Traveling Salesmen and Related Problems

Title A Deterministic Annealing Approach to the Multiple Traveling Salesmen and Related Problems
Authors Mayank Baranwal, Brian Roehl, Srinivasa M. Salapaka
Abstract This paper presents a novel and efficient heuristic framework for approximating solutions to the multiple traveling salesmen problem (m-TSP) and other variants of the TSP. The approach adopted in this paper extends the Maximum-Entropy-Principle (MEP) and the Deterministic Annealing (DA) algorithm. The framework is presented as a general tool that can be suitably adapted to a number of variants of the basic TSP. Additionally, unlike most other heuristics for the TSP, the framework is independent of how edges are defined between any pair of nodes. This makes the algorithm particularly suited to variants such as the close-enough traveling salesman problem (CETSP), which are challenging due to their added computational complexity. The examples presented in this paper illustrate the effectiveness of this new framework on the TSP and many variants thereof.
Tasks
Published 2016-04-14
URL http://arxiv.org/abs/1604.04169v1
PDF http://arxiv.org/pdf/1604.04169v1.pdf
PWC https://paperswithcode.com/paper/a-deterministic-annealing-approach-to-the
Repo
Framework
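
The MEP/DA machinery the abstract refers to can be illustrated at its core: points are softly associated with agent "resource" locations through a Gibbs distribution whose temperature is gradually lowered. The sketch below shows only this clustering-style inner step under invented parameters (`t_init`, cooling rate `alpha`), not the paper's full m-TSP heuristic:

```python
import numpy as np

def da_soft_assign(cities, n_agents=2, t_init=10.0, t_min=1e-3, alpha=0.9, seed=0):
    """Core deterministic-annealing loop: cities are softly associated with
    agent 'resource' points via a Gibbs distribution at temperature t; the
    resources move to their weighted centroids, then t is lowered."""
    rng = np.random.default_rng(seed)
    # start all resources near the data centroid, slightly perturbed
    y = cities.mean(axis=0) + 0.01 * rng.standard_normal((n_agents, cities.shape[1]))
    t = t_init
    while t > t_min:
        d2 = ((cities[:, None, :] - y[None, :, :]) ** 2).sum(axis=2)
        p = np.exp(-d2 / t)
        p /= p.sum(axis=1, keepdims=True)   # Gibbs association weights
        y = (p[:, :, None] * cities[:, None, :]).sum(axis=0) / p.sum(axis=0)[:, None]
        t *= alpha                          # geometric cooling schedule
    return p.argmax(axis=1), y

# two well-separated clusters of synthetic 'cities' for illustration
rng = np.random.default_rng(1)
cities = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(3.0, 0.1, (5, 2))])
labels, resources = da_soft_assign(cities)
```

At high temperature all associations are uniform; as the temperature drops below a critical value the resources split apart, which is the annealing behavior the framework exploits.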

Social Network Processes in the Isabelle and Coq Theorem Proving Communities

Title Social Network Processes in the Isabelle and Coq Theorem Proving Communities
Authors Jacques Fleuriot, Steven Obua, Phil Scott
Abstract We identify the main actors in the Isabelle and Coq communities and describe how they affect and influence their peers. This work explores selected foundations of social networking analysis that we expect to be useful in the context of the ProofPeer project, which is developing a new model for interactive theorem proving based on collaboration and social interactions.
Tasks Automated Theorem Proving
Published 2016-09-22
URL http://arxiv.org/abs/1609.07127v1
PDF http://arxiv.org/pdf/1609.07127v1.pdf
PWC https://paperswithcode.com/paper/social-network-processes-in-the-isabelle-and
Repo
Framework
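
As a concrete illustration of the kind of social-network measure involved in identifying "main actors", here is a minimal degree-centrality computation over an undirected collaboration graph (plain Python; the edge list is invented for illustration):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality: a node's edge count divided by the
    maximum possible degree (n - 1).  High values flag the 'main actors'."""
    deg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        nodes.update((u, v))
    n = len(nodes)
    return {node: deg[node] / (n - 1) for node in nodes}

# toy collaboration graph: one prolific contributor, three occasional ones
centrality = degree_centrality([("ana", "bob"), ("ana", "cyd"), ("ana", "dan")])
```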

PrivLogit: Efficient Privacy-preserving Logistic Regression by Tailoring Numerical Optimizers

Title PrivLogit: Efficient Privacy-preserving Logistic Regression by Tailoring Numerical Optimizers
Authors Wei Xie, Yang Wang, Steven M. Boker, Donald E. Brown
Abstract Safeguarding privacy in machine learning is highly desirable, especially in collaborative studies across many organizations. Privacy-preserving distributed machine learning (based on cryptography) is a popular approach to the problem. However, existing cryptographic protocols still incur excess computational overhead. Here, we make a novel observation that this is partially due to naive adoption of mainstream numerical optimization methods (e.g., the Newton method) and a failure to tailor them for secure computing. This work presents a contrasting perspective: customizing numerical optimization specifically for secure settings. We propose a seemingly less-favorable optimization method that can in fact significantly accelerate privacy-preserving logistic regression. Leveraging this new method, we propose two new secure protocols for conducting logistic regression in a privacy-preserving and distributed manner. Extensive theoretical and empirical evaluations demonstrate the competitive performance of our two secure proposals without compromising accuracy or privacy: speedups of up to 2.3x and 8.1x, respectively, over the state of the art, and even greater gains as data scales up. This speedup is in addition to performance improvements from existing (and future) state-of-the-art cryptography. Our work provides a new way towards efficient and practical privacy-preserving logistic regression for the large-scale studies common in modern science.
Tasks
Published 2016-11-03
URL http://arxiv.org/abs/1611.01170v1
PDF http://arxiv.org/pdf/1611.01170v1.pdf
PWC https://paperswithcode.com/paper/privlogit-efficient-privacy-preserving
Repo
Framework
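
One tailoring in this spirit (a sketch of a standard fixed-Hessian scheme, not necessarily the paper's exact protocol): replace Newton's per-iteration Hessian with the constant upper bound X^T X / 4, which dominates the logistic Hessian, so the costly matrix inversion happens only once. That trade-off is usually unattractive in the clear but valuable when every operation runs under encryption:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fixed_hessian_logistic(x, y, iters=50):
    """Logistic regression with a fixed-Hessian Newton scheme: the exact
    Hessian X^T diag(p(1-p)) X is bounded above by X^T X / 4, so a single
    up-front inversion replaces one inversion per iteration."""
    h_inv = np.linalg.inv(x.T @ x / 4.0)      # one-time inversion
    w = np.zeros(x.shape[1])
    for _ in range(iters):
        grad = x.T @ (sigmoid(x @ w) - y)     # logistic-loss gradient
        w -= h_inv @ grad                     # quasi-Newton step, fixed curvature
    return w

# synthetic linearly separable data for illustration
rng = np.random.default_rng(0)
x = rng.standard_normal((200, 2))
y = (x @ np.array([2.0, -1.0]) > 0).astype(float)
w = fixed_hessian_logistic(x, y)
```

Because the fixed matrix upper-bounds the true Hessian, each step is a valid majorize-minimize update, so the loss decreases monotonically.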

Parallel Large-Scale Attribute Reduction on Cloud Systems

Title Parallel Large-Scale Attribute Reduction on Cloud Systems
Authors Junbo Zhang, Tianrui Li, Yi Pan
Abstract The rapid growth of emerging information technologies and application patterns in modern society, e.g., the Internet, the Internet of Things, Cloud Computing and Tri-network Convergence, has ushered in the era of big data. Big data contains huge value, but mining knowledge from it is tremendously challenging because of data uncertainty and inconsistency. Attribute reduction (also known as feature selection) can not only serve as an effective preprocessing step, but also exploit data redundancy to reduce uncertainty. However, existing solutions are designed either 1) for a single machine, meaning the entire data must fit in main memory and parallelism is limited; or 2) for the Hadoop platform, meaning the data must be loaded into distributed memory frequently and processing therefore becomes inefficient. In this paper, we overcome these shortcomings and propose a unified framework for Parallel Large-scale Attribute Reduction, termed PLAR, for big data analysis. PLAR consists of three components: 1) Granular Computing (GrC)-based initialization, which converts a decision table (i.e., the original data representation) into a granularity representation that reduces the amount of space required and hence can be easily cached in distributed memory; 2) model-parallelism, which simultaneously evaluates all feature candidates and makes attribute reduction highly parallelizable; 3) data-parallelism, which computes the significance of an attribute in parallel in a MapReduce style. We implement PLAR with four representative heuristic feature selection algorithms on Spark and evaluate them on various huge datasets, including UCI and astronomical datasets, demonstrating our method's advantages over existing solutions.
Tasks Feature Selection
Published 2016-10-06
URL http://arxiv.org/abs/1610.01807v1
PDF http://arxiv.org/pdf/1610.01807v1.pdf
PWC https://paperswithcode.com/paper/parallel-large-scale-attribute-reduction-on
Repo
Framework
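
The "significance of an attribute" that PLAR parallelizes can be stated compactly with rough-set dependency degrees. A minimal single-machine sketch (the toy decision table is invented; PLAR would distribute the granule counting MapReduce-style):

```python
from collections import defaultdict

def dependency(rows, attrs, decision):
    """Rough-set dependency degree: the fraction of rows whose granule
    (equivalence class under `attrs`) is consistent w.r.t. the decision."""
    decisions, sizes = defaultdict(set), defaultdict(int)
    for row in rows:
        key = tuple(row[a] for a in attrs)
        decisions[key].add(row[decision])
        sizes[key] += 1
    pos = sum(n for key, n in sizes.items() if len(decisions[key]) == 1)
    return pos / len(rows)

def significance(rows, reduct, candidate, decision):
    """Gain in dependency from adding `candidate` to the current reduct;
    PLAR's model-parallel step scores all candidates simultaneously."""
    return dependency(rows, reduct + [candidate], decision) - dependency(rows, reduct, decision)

# toy decision table (invented): condition attributes a, b and decision d
rows = [{"a": 0, "b": 0, "d": 0},
        {"a": 0, "b": 1, "d": 1},
        {"a": 1, "b": 0, "d": 1},
        {"a": 1, "b": 1, "d": 1}]
```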

Tensor Representations via Kernel Linearization for Action Recognition from 3D Skeletons (Extended Version)

Title Tensor Representations via Kernel Linearization for Action Recognition from 3D Skeletons (Extended Version)
Authors Piotr Koniusz, Anoop Cherian, Fatih Porikli
Abstract In this paper, we explore tensor representations that can compactly capture higher-order relationships between skeleton joints for 3D action recognition. We first define RBF kernels on 3D joint sequences, which are then linearized to form kernel descriptors. The higher-order outer-products of these kernel descriptors form our tensor representations. We present two different kernels for action recognition, namely (i) a sequence compatibility kernel that captures the spatio-temporal compatibility of joints in one sequence against those in the other, and (ii) a dynamics compatibility kernel that explicitly models the action dynamics of a sequence. Tensors formed from these kernels are then used to train an SVM. We present experiments on several benchmark datasets and demonstrate state-of-the-art results, substantiating the effectiveness of our representations.
Tasks 3D Human Action Recognition, Temporal Action Localization
Published 2016-04-01
URL http://arxiv.org/abs/1604.00239v2
PDF http://arxiv.org/pdf/1604.00239v2.pdf
PWC https://paperswithcode.com/paper/tensor-representations-via-kernel
Repo
Framework
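
The "linearized kernel" step can be sketched with a Nyström-style feature map (one standard linearization; the paper's exact construction may differ): phi(x) is built from kernel evaluations against a set of pivots so that phi(x) . phi(y) approximates k(x, y), and outer products of such descriptors give the higher-order tensors:

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-((a - b) ** 2).sum(-1) / (2 * sigma ** 2))

def linearize(x, pivots, sigma=1.0):
    """Nystrom-style linearization: phi(x) = G^(-1/2) [k(x,z_1),...,k(x,z_m)]
    over pivots z_i, so that phi(x) . phi(y) approximates k(x, y)."""
    g = np.array([[rbf(zi, zj, sigma) for zj in pivots] for zi in pivots])
    vals, vecs = np.linalg.eigh(g)
    g_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
    return g_inv_sqrt @ np.array([rbf(x, z, sigma) for z in pivots])

pivots = np.linspace(-2, 2, 9)[:, None]   # pivots covering the 1-D input range
phi = lambda v: linearize(np.array([v]), pivots)
# second-order tensor descriptor: outer product of linearized kernel features
tensor = np.outer(phi(0.3), phi(0.3))
```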

Construction of Convex Sets on Quadrilateral Ordered Tiles or Graphs with Propagation Neighborhood Operations. Dales, Concavity Structures. Application to Gray Image Analysis of Human-Readable Shapes

Title Construction of Convex Sets on Quadrilateral Ordered Tiles or Graphs with Propagation Neighborhood Operations. Dales, Concavity Structures. Application to Gray Image Analysis of Human-Readable Shapes
Authors Igor Polkovnikov
Abstract An effort has been made to show mathematicians some new ideas applied to image analysis. Gray images are presented as tilings. Based on topological properties of the tiling, a number of gray convex hulls (maximal, minimal, and oriented) are constructed, and some are proved. They are constructed with only one operation. Two tilings are used in the Constraint and Allowance types of operations. A new type of concavity is described: a dale. All operations are parallel and possible to realize clock-less. Convexities define the background; they are treated as separate gray objects, with multiple relations among them and their descendants. Through these relations, a topological size of concavities is proposed. Constructed with the same type of operations, Rays and Angles in a tiling define possible spatial relations. Notions like "strokes" are defined through concavities. Unusual effects on levelized gray objects are shown. It is illustrated how alphabets and complex hieroglyphs can be described through concavities and their relations. A hypothesis about image analysis in living organisms is proposed. A number of examples with symbols and a human face are computed with the new Asynchwave C++ software library.
Tasks
Published 2016-08-29
URL http://arxiv.org/abs/1608.08251v1
PDF http://arxiv.org/pdf/1608.08251v1.pdf
PWC https://paperswithcode.com/paper/construction-of-convex-sets-on-quadrilateral
Repo
Framework

Estimation from Indirect Supervision with Linear Moments

Title Estimation from Indirect Supervision with Linear Moments
Authors Aditi Raghunathan, Roy Frostig, John Duchi, Percy Liang
Abstract In structured prediction problems where we have indirect supervision of the output, maximum marginal likelihood faces two computational obstacles: non-convexity of the objective and intractability of even a single gradient computation. In this paper, we bypass both obstacles for a class of what we call linear indirectly-supervised problems. Our approach is simple: we solve a linear system to estimate sufficient statistics of the model, which we then use to estimate parameters via convex optimization. We analyze the statistical properties of our approach and show empirically that it is effective in two settings: learning with local privacy constraints and learning from low-cost count-based annotations.
Tasks Structured Prediction
Published 2016-08-10
URL http://arxiv.org/abs/1608.03100v1
PDF http://arxiv.org/pdf/1608.03100v1.pdf
PWC https://paperswithcode.com/paper/estimation-from-indirect-supervision-with
Repo
Framework
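
The paper's recipe (solve a linear system for sufficient statistics, then estimate parameters) can be illustrated in its simplest form with the local-privacy setting it mentions. Under randomized response (an assumed, classical mechanism), the observed label mean is a known linear function of the true mean, so the true sufficient statistic is recovered by solving that linear system, here 1x1:

```python
import numpy as np

def debias_randomized_response(obs_mean, flip_prob):
    """Indirect supervision: each binary label is flipped with probability
    flip_prob before we observe it.  The observed mean obeys
    obs = (1 - 2p) * true + p, so   true = (obs - p) / (1 - 2p)."""
    return (obs_mean - flip_prob) / (1.0 - 2.0 * flip_prob)

rng = np.random.default_rng(0)
true_labels = rng.random(200_000) < 0.7         # private labels, mean ~0.7
p = 0.25
flipped = np.where(rng.random(200_000) < p, ~true_labels, true_labels)
estimate = debias_randomized_response(flipped.mean(), p)
```

The raw observed mean is biased toward 0.5; the linear solve removes the bias, after which a downstream convex fit could use the corrected statistic.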

Implicit Modeling – A Generalization of Discriminative and Generative Approaches

Title Implicit Modeling – A Generalization of Discriminative and Generative Approaches
Authors Dmitrij Schlesinger, Carsten Rother
Abstract We propose a new modeling approach that is a generalization of generative and discriminative models. The core idea is to use an implicit parameterization of a joint probability distribution by specifying only the conditional distributions. The proposed scheme combines the advantages of both worlds: it can use powerful complex discriminative models as its parts while retaining better generalization capabilities. We thoroughly evaluate the proposed method on a simple classification task with artificial data and illustrate its advantages for real-world scenarios on a semantic image segmentation problem.
Tasks Semantic Segmentation
Published 2016-12-05
URL http://arxiv.org/abs/1612.01397v1
PDF http://arxiv.org/pdf/1612.01397v1.pdf
PWC https://paperswithcode.com/paper/implicit-modeling-a-generalization-of
Repo
Framework
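
The idea that conditionals alone can implicitly parameterize a joint distribution can be demonstrated on a tiny discrete example: given consistent p(y|x) and p(x|y), Gibbs sampling recovers p(x, y) without the joint ever being written down (the target table below is invented, used only to derive the two conditionals):

```python
import numpy as np

# target joint over (x, y) in {0,1}^2, used only to derive the conditionals
joint = np.array([[0.1, 0.2],
                  [0.3, 0.4]])
p_y_given_x = joint / joint.sum(axis=1, keepdims=True)  # rows give p(y|x)
p_x_given_y = joint / joint.sum(axis=0, keepdims=True)  # cols give p(x|y)

def gibbs(n_steps=200_000, seed=0):
    """The two conditionals implicitly parameterize the joint; Gibbs
    sampling from them recovers it empirically."""
    rng = np.random.default_rng(seed)
    x, y = 0, 0
    counts = np.zeros((2, 2))
    for _ in range(n_steps):
        y = int(rng.random() < p_y_given_x[x, 1])  # resample y given x
        x = int(rng.random() < p_x_given_y[1, y])  # resample x given y
        counts[x, y] += 1
    return counts / n_steps

est = gibbs()
```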

Multimodal Emotion Recognition Using Multimodal Deep Learning

Title Multimodal Emotion Recognition Using Multimodal Deep Learning
Authors Wei Liu, Wei-Long Zheng, Bao-Liang Lu
Abstract To enhance the performance of affective models and reduce the cost of acquiring physiological signals for real-world applications, we adopt a multimodal deep learning approach to construct affective models from multiple physiological signals. For the unimodal enhancement task, we show that the best recognition accuracy of 82.11% on the SEED dataset is achieved with shared representations generated by a Deep AutoEncoder (DAE) model. For multimodal facilitation tasks, we demonstrate that the Bimodal Deep AutoEncoder (BDAE) achieves mean accuracies of 91.01% and 83.25% on the SEED and DEAP datasets, respectively, far superior to state-of-the-art approaches. For the cross-modal learning task, our experimental results demonstrate that a mean accuracy of 66.34% is achieved on the SEED dataset using shared representations generated by an EEG-based DAE as training samples and shared representations generated by an eye-based DAE as testing samples, and vice versa.
Tasks EEG, Emotion Recognition, Multimodal Emotion Recognition
Published 2016-02-26
URL http://arxiv.org/abs/1602.08225v1
PDF http://arxiv.org/pdf/1602.08225v1.pdf
PWC https://paperswithcode.com/paper/multimodal-emotion-recognition-using
Repo
Framework

Tongue contour extraction from ultrasound images based on deep neural network

Title Tongue contour extraction from ultrasound images based on deep neural network
Authors Aurore Jaumard-Hakoun, Kele Xu, Pierre Roussel-Ragot, Gérard Dreyfus, Bruce Denby
Abstract Studying tongue motion during speech using ultrasound is a standard procedure, but automatic ultrasound image labelling remains a challenge, as standard tongue shape extraction methods typically require human intervention. This article presents a method based on deep neural networks to automatically extract the tongue contour from ultrasound images of a speech dataset. We use a deep autoencoder trained to learn the relationship between an image and its related contour, so that the model is able to reconstruct contours from the ultrasound image alone. In this paper, we use an automatic labelling algorithm instead of time-consuming hand-labelling during the training process, and estimate the performance of both automatic labelling and contour extraction relative to hand-labelling. The observed results show quality scores comparable to the state of the art.
Tasks
Published 2016-05-19
URL http://arxiv.org/abs/1605.05912v1
PDF http://arxiv.org/pdf/1605.05912v1.pdf
PWC https://paperswithcode.com/paper/tongue-contour-extraction-from-ultrasound
Repo
Framework

The Power of Non-Ground Rules in Answer Set Programming

Title The Power of Non-Ground Rules in Answer Set Programming
Authors Manuel Bichler, Michael Morak, Stefan Woltran
Abstract Answer set programming (ASP) is a well-established logic programming language that offers an intuitive, declarative syntax for problem solving. In its traditional application, a fixed ASP program for a given problem is designed and the actual instance of the problem is fed into the program as a set of facts. This approach typically results in programs with comparably short and simple rules. However, as is known from complexity analysis, such an approach limits the expressive power of ASP; in fact, an entire NP-check can be encoded into a single large rule body of bounded arity that performs both a guess and a check within the same rule. Here, we propose a novel paradigm for encoding hard problems in ASP by making explicit use of large rules which depend on the actual instance of the problem. We illustrate how this new encoding paradigm can be used, providing examples of problems from the first, second, and even third level of the polynomial hierarchy. As state-of-the-art solvers are tuned towards short rules, rule decomposition is a key technique in the practical realization of our approach. We also provide some preliminary benchmarks which indicate that giving up the convenient way of specifying a fixed program can lead to a significant speed-up. This paper is under consideration for acceptance into TPLP.
Tasks
Published 2016-08-05
URL http://arxiv.org/abs/1608.01856v1
PDF http://arxiv.org/pdf/1608.01856v1.pdf
PWC https://paperswithcode.com/paper/the-power-of-non-ground-rules-in-answer-set
Repo
Framework

Detailed Garment Recovery from a Single-View Image

Title Detailed Garment Recovery from a Single-View Image
Authors Shan Yang, Tanya Ambert, Zherong Pan, Ke Wang, Licheng Yu, Tamara Berg, Ming C. Lin
Abstract Most recent garment capturing techniques rely on acquiring multiple views of clothing, which may not always be readily available, especially in the case of pre-existing photographs from the web. As an alternative, we propose a method that is able to compute a rich and realistic 3D model of a human body and its outfits from a single photograph with little human interaction. Our algorithm is not only able to capture the global shape and geometry of the clothing, it can also extract small but important details of cloth, such as occluded wrinkles and folds. Unlike previous methods using full 3D information (i.e. depth, multi-view images, or sampled 3D geometry), our approach achieves detailed garment recovery from a single-view image by using statistical, geometric, and physical priors and a combination of parameter estimation, semantic parsing, shape recovery, and physics-based cloth simulation. We demonstrate the effectiveness of our algorithm by re-purposing the reconstructed garments for virtual try-on and garment transfer applications, as well as cloth animation for digital characters.
Tasks Semantic Parsing
Published 2016-08-03
URL http://arxiv.org/abs/1608.01250v4
PDF http://arxiv.org/pdf/1608.01250v4.pdf
PWC https://paperswithcode.com/paper/detailed-garment-recovery-from-a-single-view
Repo
Framework

A Channelized Binning Method for Extraction of Dominant Color Pixel Value

Title A Channelized Binning Method for Extraction of Dominant Color Pixel Value
Authors Siddu P Algur, N H Ayachit, Vivek R
Abstract Color is one of the most important and easily identifiable features for describing visual content. The MPEG standard defines a number of descriptors that cover different aspects of visual content; the dominant color descriptor is one of them. This paper proposes channelized binning, a novel method for extracting the dominant color pixel value, a variant of the dominant color descriptor. Channelized binning treats the problem as a statistical one and avoids both color quantization and interpolated guessing of the number and centroids of dominant colors. It is an iterative approach that automatically estimates the number of dominant pixel values and their centroids, operating on the 24-bit full RGB color space by considering one color channel at a time. Results show that the proposed method can successfully extract dominant color pixel values.
Tasks Quantization
Published 2016-05-28
URL http://arxiv.org/abs/1605.08856v1
PDF http://arxiv.org/pdf/1605.08856v1.pdf
PWC https://paperswithcode.com/paper/a-channelized-binning-method-for-extraction
Repo
Framework
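
The per-channel binning idea at the heart of the method can be sketched in a few lines: histogram each RGB channel over the full 0..255 range (so no color quantization) and read off each channel's peak. This is only the core step under a single dominant color; the paper's iterative estimation of multiple dominant values is not reproduced here:

```python
import numpy as np

def dominant_pixel_value(image):
    """Channelized-binning sketch: bin each RGB channel independently over
    the full 0..255 range and take the peak of each channel's histogram
    as that channel's dominant value."""
    return tuple(int(np.bincount(image[..., c].ravel(), minlength=256).argmax())
                 for c in range(3))

# toy image: 70% of pixels share one color, the rest are random noise
img = np.full((10, 10, 3), (200, 30, 90), dtype=np.uint8)
img[:3] = np.random.default_rng(3).integers(0, 256, (3, 10, 3), dtype=np.uint8)
```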

Learning the semantic structure of objects from Web supervision

Title Learning the semantic structure of objects from Web supervision
Authors David Novotny, Diane Larlus, Andrea Vedaldi
Abstract While recent research in image understanding has often focused on recognizing more types of objects, understanding more about the objects is just as important. Recognizing object parts and attributes has been extensively studied before, yet learning a large space of such concepts remains elusive due to the high cost of providing detailed object annotations for supervision. The key contribution of this paper is an algorithm to learn the nameable parts of objects automatically, from images obtained by querying Web search engines. The key challenge is the high level of noise in the annotations; to address it, we propose a new unified embedding space where the appearance and geometry of objects and their semantic parts are represented uniformly. Geometric relationships are induced in a soft manner by a rich set of non-semantic mid-level anchors, bridging the gap between semantic and non-semantic parts. We also show that the resulting embedding provides a visually intuitive mechanism to navigate the learned concepts and their corresponding images.
Tasks
Published 2016-07-05
URL http://arxiv.org/abs/1607.01205v1
PDF http://arxiv.org/pdf/1607.01205v1.pdf
PWC https://paperswithcode.com/paper/learning-the-semantic-structure-of-objects
Repo
Framework

Information-theoretic limits of Bayesian network structure learning

Title Information-theoretic limits of Bayesian network structure learning
Authors Asish Ghoshal, Jean Honorio
Abstract In this paper, we study the information-theoretic limits of learning the structure of Bayesian networks (BNs), over discrete as well as continuous random variables, from a finite number of samples. We show that the minimum number of samples required by any procedure to recover the correct structure grows as $\Omega(m)$ and $\Omega(k \log m + (k^2/m))$ for non-sparse and sparse BNs respectively, where $m$ is the number of variables and $k$ is the maximum number of parents per node. We provide a simple recipe, based on an extension of Fano’s inequality, to obtain information-theoretic limits of structure recovery for any exponential family BN. We instantiate our result for specific conditional distributions in the exponential family to characterize the fundamental limits of learning various commonly used BNs, such as conditional probability table based networks, Gaussian BNs, noisy-OR networks, and logistic regression networks. En route to obtaining our main results, we obtain tight bounds on the number of sparse and non-sparse essential DAGs. Finally, as a byproduct, we recover the information-theoretic limits of sparse variable selection for logistic regression.
Tasks
Published 2016-01-27
URL http://arxiv.org/abs/1601.07460v4
PDF http://arxiv.org/pdf/1601.07460v4.pdf
PWC https://paperswithcode.com/paper/information-theoretic-limits-of-bayesian
Repo
Framework
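
To make the stated sparse rate concrete, a tiny calculator for the lower bound Ω(k log m + k²/m); the constant `c` is an assumption, since an order-level bound leaves it unspecified:

```python
import math

def sparse_bn_lower_bound(m, k, c=1.0):
    """Order-level sample-complexity lower bound for recovering a sparse
    Bayesian network: c * (k * log m + k^2 / m), with c an unspecified
    constant absorbed by the Omega notation."""
    return c * (k * math.log(m) + k * k / m)

# the k * log(m) term dominates as the number of variables m grows
bounds = [sparse_bn_lower_bound(m, k=3) for m in (100, 1000, 10000)]
```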