May 6, 2019

3251 words 16 mins read

Paper Group ANR 186



Query Answering with Inconsistent Existential Rules under Stable Model Semantics

Title Query Answering with Inconsistent Existential Rules under Stable Model Semantics
Authors Hai Wan, Heng Zhang, Peng Xiao, Haoran Huang, Yan Zhang
Abstract Traditional inconsistency-tolerant query answering in ontology-based data access relies on selecting maximal components of an ABox/database that are consistent with the ontology. However, some rules in ontologies might be unreliable if they are extracted by ontology learning or written by unskilled knowledge engineers. In this paper we present a framework for handling inconsistent existential rules under stable model semantics, defined via a notion called rule repairs that selects maximal components of the existential rules. Surprisingly, for R-acyclic existential rules with R-stratified negation, or guarded existential rules with stratified negation, both the data complexity and the combined complexity of query answering under the rule repair semantics remain the same as under the conventional query answering semantics. This leads us to propose several approaches that handle the rule repair semantics by calling answer set programming solvers. An experimental evaluation shows that these approaches scale well for query answering under rule repairs on realistic cases.
Tasks
Published 2016-02-18
URL http://arxiv.org/abs/1602.05699v1
PDF http://arxiv.org/pdf/1602.05699v1.pdf
PWC https://paperswithcode.com/paper/query-answering-with-inconsistent-existential
Repo
Framework
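The core idea of a rule repair, selecting subset-maximal sets of rules that remain consistent, can be illustrated with a brute-force sketch. This is not the paper's method (the paper computes repairs efficiently by calling answer set programming solvers); the `consistent` predicate here is a hypothetical stand-in for a full consistency check against the data.

```python
from itertools import combinations

def maximal_consistent_subsets(rules, consistent):
    """Enumerate subset-maximal rule sets that pass the `consistent` check.

    Brute-force analogue of rule repairs; real systems delegate this
    to an ASP solver rather than enumerating subsets.
    """
    maximal = []
    # Visit candidate subsets from largest to smallest, so any consistent
    # subset not contained in a previously found one is maximal.
    for size in range(len(rules), -1, -1):
        for combo in combinations(rules, size):
            s = set(combo)
            if consistent(s) and not any(s < m for m in maximal):
                maximal.append(s)
    return maximal
```

For a toy conflict where rules "a" and "b" cannot coexist, the repairs of {a, b, c} are {a, c} and {b, c}.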

Learning to Incentivize: Eliciting Effort via Output Agreement

Title Learning to Incentivize: Eliciting Effort via Output Agreement
Authors Yang Liu, Yiling Chen
Abstract In crowdsourcing, when there is no way to verify contributed answers, output agreement mechanisms are often used to incentivize participants to provide truthful answers when the correct answer is held by the majority. In this paper, we focus on using output agreement mechanisms to elicit effort, in addition to eliciting truthful answers, from a population of workers. We consider a setting where workers have heterogeneous costs of effort exertion and examine the data requester’s problem of deciding the reward level in output agreement for optimal elicitation. In particular, when the requester knows the cost distribution, we derive the optimal reward level for output agreement mechanisms. This is achieved by first characterizing Bayesian Nash equilibria of output agreement mechanisms for a given reward level. When the requester does not know the cost distribution, we develop sequential mechanisms that combine learning the cost distribution with incentivizing effort exertion to approximately determine the optimal reward level.
Tasks
Published 2016-04-17
URL http://arxiv.org/abs/1604.04928v1
PDF http://arxiv.org/pdf/1604.04928v1.pdf
PWC https://paperswithcode.com/paper/learning-to-incentivize-eliciting-effort-via
Repo
Framework
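The basic output agreement payment rule the paper analyzes can be sketched in a few lines: a worker is paid if their answer matches that of one randomly drawn peer. The paper's contribution is choosing the reward level optimally; this sketch only implements the payment rule itself.

```python
import random

def output_agreement_payments(answers, reward):
    """Pay each worker `reward` if their answer matches the answer of a
    uniformly drawn peer, and nothing otherwise."""
    payments = []
    for i, answer in enumerate(answers):
        peers = answers[:i] + answers[i + 1:]
        peer_answer = random.choice(peers)
        payments.append(reward if answer == peer_answer else 0.0)
    return payments
```

With unanimous answers everyone is paid; a worker whose answer differs from every peer earns nothing, which is what creates the incentive to exert effort toward the majority answer.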

A semi-automatic computer-aided method for surgical template design

Title A semi-automatic computer-aided method for surgical template design
Authors Xiaojun Chen, Lu Xu, Yue Yang, Jan Egger
Abstract This paper presents a generalized integrated framework for semi-automatic surgical template design. Several algorithms were implemented, including mesh segmentation, offset surface generation, collision detection, and ruled surface generation, and dedicated software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with a signed scalar per vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained by contouring the distance field of the inner surface, and segmented to generate the outer surface. A ruled surface is employed to connect the inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. The method has been applied to template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating its efficiency, functionality and generality.
Tasks
Published 2016-02-04
URL http://arxiv.org/abs/1602.01644v1
PDF http://arxiv.org/pdf/1602.01644v1.pdf
PWC https://paperswithcode.com/paper/a-semi-automatic-computer-aided-method-for
Repo
Framework
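The offset-surface step can be illustrated with a flat, 2-D toy analogue: pushing polygon vertices outward along their averaged edge normals. The paper's actual method contours a 3-D distance field of the inner surface, which handles self-intersections that this naive vertex offset does not.

```python
import math

def offset_polygon(points, dist):
    """Push each vertex of a CCW polygon outward by `dist` along the
    normalized sum of its two adjacent edge normals (2-D toy analogue
    of offset-surface generation)."""
    def edge_normal(a, b):
        dx, dy = b[0] - a[0], b[1] - a[1]
        length = math.hypot(dx, dy)
        return dy / length, -dx / length   # outward for CCW winding

    out = []
    n = len(points)
    for i in range(n):
        prev_pt, pt, next_pt = points[i - 1], points[i], points[(i + 1) % n]
        n1 = edge_normal(prev_pt, pt)
        n2 = edge_normal(pt, next_pt)
        vx, vy = n1[0] + n2[0], n1[1] + n2[1]
        length = math.hypot(vx, vy)
        out.append((pt[0] + dist * vx / length, pt[1] + dist * vy / length))
    return out
```

Offsetting the unit square by sqrt(2) moves each corner diagonally outward by one unit in both axes.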

Distributed Estimation and Learning over Heterogeneous Networks

Title Distributed Estimation and Learning over Heterogeneous Networks
Authors M. Amin Rahimian, Ali Jadbabaie
Abstract We consider several estimation and learning problems that networked agents face when making decisions given their uncertainty about an unknown variable. Our methods are designed to efficiently deal with heterogeneity in both size and quality of the observed data, as well as heterogeneity over time (intermittence). The goal of the studied aggregation schemes is to efficiently combine the observed data that is spread over time and across several network nodes, accounting for all the network heterogeneities. Moreover, we require no form of coordination beyond the local neighborhood of every network agent or sensor node. The three problems that we consider are (i) maximum likelihood estimation of the unknown given initial data sets, (ii) learning the true model parameter from streams of data that the agents receive intermittently over time, and (iii) minimum variance estimation of a complete sufficient statistic from several data points that the networked agents collect over time. In each case we rely on an aggregation scheme to combine the observations of all agents; moreover, when the agents receive streams of data over time, we modify the update rules to accommodate the most recent observations. In every case, we demonstrate the efficiency of our algorithms by proving convergence to the globally efficient estimators given the observations of all agents. We supplement these results by investigating the rate of convergence and providing finite-time performance guarantees.
Tasks
Published 2016-11-10
URL http://arxiv.org/abs/1611.03328v1
PDF http://arxiv.org/pdf/1611.03328v1.pdf
PWC https://paperswithcode.com/paper/distributed-estimation-and-learning-over
Repo
Framework
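The aggregation schemes described above combine neighbors' estimates using only local communication. A generic sketch of one such round is weighted neighborhood averaging, where the weights might encode local sample sizes; this is a standard consensus update, not the paper's exact rule.

```python
def consensus_step(estimates, weights, neighbors):
    """One round of weighted neighborhood averaging: each node replaces
    its estimate with the weight-normalized mean over its neighborhood
    (assumed to include the node itself)."""
    updated = []
    for i in range(len(estimates)):
        nbrs = neighbors[i]
        total = sum(weights[j] for j in nbrs)
        updated.append(sum(weights[j] * estimates[j] for j in nbrs) / total)
    return updated
```

On a complete graph with weights proportional to data quality or quantity, a single round already recovers the globally efficient weighted mean, which is the benchmark the paper's convergence results target for general networks.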

DASC: Robust Dense Descriptor for Multi-modal and Multi-spectral Correspondence Estimation

Title DASC: Robust Dense Descriptor for Multi-modal and Multi-spectral Correspondence Estimation
Authors Seungryong Kim, Dongbo Min, Bumsub Ham, Minh N. Do, Kwanghoon Sohn
Abstract Establishing dense correspondences between multiple images is a fundamental task in many applications. However, finding reliable correspondences in multi-modal or multi-spectral images remains unsolved due to their challenging photometric and geometric variations. In this paper, we propose a novel dense descriptor, called dense adaptive self-correlation (DASC), to estimate multi-modal and multi-spectral dense correspondences. Based on the observation that self-similarity within images is robust to variations in imaging modality, we define the descriptor with a series of adaptive self-correlation similarity measures between patches sampled by randomized receptive field pooling, in which the sampling pattern is obtained through discriminative learning. The computational redundancy of dense descriptors is dramatically reduced by applying fast edge-aware filtering. Furthermore, in order to address geometric variations including scale and rotation, we propose a geometry-invariant DASC (GI-DASC) descriptor that effectively leverages the DASC through a superpixel-based representation. For a quantitative evaluation of the GI-DASC, we build a novel multi-modal benchmark with varying photometric and geometric conditions. Experimental results demonstrate the outstanding performance of the DASC and GI-DASC in many cases of multi-modal and multi-spectral dense correspondences.
Tasks
Published 2016-04-27
URL http://arxiv.org/abs/1604.07944v1
PDF http://arxiv.org/pdf/1604.07944v1.pdf
PWC https://paperswithcode.com/paper/dasc-robust-dense-descriptor-for-multi-modal
Repo
Framework
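The self-similarity idea behind DASC can be sketched minimally: describe a pixel not by its raw intensities (which differ across modalities) but by the correlations between pairs of patches around it. This toy version omits the learned sampling patterns and edge-aware filtering that make DASC fast and discriminative.

```python
def self_correlation_descriptor(img, center, pairs, size=3):
    """Toy self-similarity descriptor: for each pair of patch offsets
    around `center`, record the normalized correlation between the two
    patches. `img` is a list of rows of intensities."""
    half = size // 2

    def patch(dy, dx):
        cy, cx = center[0] + dy, center[1] + dx
        return [float(img[y][x])
                for y in range(cy - half, cy + half + 1)
                for x in range(cx - half, cx + half + 1)]

    def ncc(p, q):
        mp, mq = sum(p) / len(p), sum(q) / len(q)
        a = [v - mp for v in p]
        b = [v - mq for v in q]
        denom = (sum(v * v for v in a) * sum(v * v for v in b)) ** 0.5
        return sum(x * y for x, y in zip(a, b)) / denom if denom > 0 else 0.0

    return [ncc(patch(*off_a), patch(*off_b)) for off_a, off_b in pairs]
```

On a horizontal gradient image, two patches shifted left and right of the center have identical internal structure, so their correlation is 1 regardless of their absolute intensities, which is exactly the modality-robustness the abstract appeals to.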

Support vector regression model for BigData systems

Title Support vector regression model for BigData systems
Authors Alessandro Maria Rizzi
Abstract Nowadays, Big Data is becoming more and more important. Many sectors of our economy are now guided by data-driven decision processes. Big Data and business intelligence applications are facilitated by the MapReduce programming model while, at the infrastructural layer, cloud computing provides flexible and cost-effective solutions for allocating large clusters on demand. In such systems, capacity allocation, which is the ability to optimally size minimal resources to achieve a certain level of performance, is a key challenge to enhance performance for MapReduce jobs and minimize cloud resource costs. In order to do so, one of the biggest challenges is to build an accurate performance model to estimate the job execution time of MapReduce systems. Previous works applied simulation-based models for modeling such systems. Although this approach can accurately describe the behavior of Big Data clusters, it is too computationally expensive and does not scale to large systems. We try to overcome these issues by applying machine learning techniques. More precisely, we focus on Support Vector Regression (SVR), which is intrinsically more robust than other techniques, such as neural networks, and less sensitive to outliers in the training set. To better investigate these benefits, we compare SVR to linear regression.
Tasks
Published 2016-12-05
URL http://arxiv.org/abs/1612.01458v1
PDF http://arxiv.org/pdf/1612.01458v1.pdf
PWC https://paperswithcode.com/paper/support-vector-regression-model-for-bigdata
Repo
Framework
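The robustness claim comes down to the loss functions involved. Ordinary least squares penalizes residuals quadratically, so a single outlier can dominate the fit; SVR's epsilon-insensitive loss ignores small errors and grows only linearly beyond the tube. The toy residuals below are illustrative, not from the paper.

```python
def squared_loss(residuals):
    """Loss minimized by ordinary least-squares regression."""
    return sum(r * r for r in residuals)

def epsilon_insensitive_loss(residuals, eps=0.5):
    """Loss underlying Support Vector Regression: errors inside the
    eps-tube cost nothing, larger errors grow only linearly."""
    return sum(max(0.0, abs(r) - eps) for r in residuals)
```

For residuals [0.1, 0.2, 10.0], the squared loss is 100.05, almost entirely due to the outlier, while the epsilon-insensitive loss is 9.5; the outlier's influence on the fitted model is correspondingly damped.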

Interpretable Semantic Textual Similarity: Finding and explaining differences between sentences

Title Interpretable Semantic Textual Similarity: Finding and explaining differences between sentences
Authors I. Lopez-Gazpio, M. Maritxalar, A. Gonzalez-Agirre, G. Rigau, L. Uria, E. Agirre
Abstract User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning, which requires adding an interpretability layer that facilitates users' understanding of their behavior. This paper focuses on adding an interpretable layer on top of Semantic Textual Similarity (STS), which measures the degree of semantic equivalence between two sentences. The interpretability layer is formalized as the alignment between pairs of segments across the two sentences, where the relation between the segments is labeled with a relation type and a similarity score. We present a publicly available dataset of sentence pairs annotated following this formalization. We then develop a system trained on this dataset which, given a sentence pair, explains what is similar and different, in the form of graded and typed segment alignments. When evaluated on the dataset, the system performs better than an informed baseline, showing that the dataset and task are well-defined and feasible. Most importantly, two user studies show how the system output can be used to automatically produce explanations in natural language. Users performed better when given access to the explanations, providing preliminary evidence that our dataset and method to automatically produce explanations are useful in real applications.
Tasks Semantic Textual Similarity
Published 2016-12-14
URL http://arxiv.org/abs/1612.04868v1
PDF http://arxiv.org/pdf/1612.04868v1.pdf
PWC https://paperswithcode.com/paper/interpretable-semantic-textual-similarity
Repo
Framework
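The alignment formalization can be sketched as a greedy one-to-one matching of segments by similarity, highest-scoring pairs first. This is a simplified stand-in for the paper's system, which also predicts a relation type per alignment; the token-overlap scorer below is a hypothetical placeholder for a real similarity model.

```python
def align_segments(segments_a, segments_b, similarity):
    """Greedy one-to-one alignment of segments across two sentences,
    taking the highest-similarity unused pair at each step."""
    scored = sorted(((similarity(a, b), i, j)
                     for i, a in enumerate(segments_a)
                     for j, b in enumerate(segments_b)), reverse=True)
    used_a, used_b, alignment = set(), set(), []
    for score, i, j in scored:
        if i not in used_a and j not in used_b:
            used_a.add(i)
            used_b.add(j)
            alignment.append((i, j, score))
    return sorted(alignment)

def jaccard(a, b):
    """Token-overlap similarity, a stand-in for a trained scorer."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)
```

The graded scores attached to each aligned pair are what the paper turns into natural-language explanations of what is similar and what differs.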

Resting state brain networks from EEG: Hidden Markov states vs. classical microstates

Title Resting state brain networks from EEG: Hidden Markov states vs. classical microstates
Authors Tammo Rukat, Adam Baker, Andrew Quinn, Mark Woolrich
Abstract Functional brain networks exhibit dynamics on the sub-second temporal scale and are often assumed to embody the physiological substrate of cognitive processes. Here we analyse the temporal and spatial dynamics of these states, as measured by EEG, with a hidden Markov model and compare this approach to classical EEG microstate analysis. We find dominant state lifetimes of 100–150 ms for both approaches. The state topographies show obvious similarities. However, they also feature distinct spatial and especially temporal properties. These differences may carry physiologically meaningful information originating from patterns in the data that the HMM is able to integrate while the microstate analysis is not. This hypothesis is supported by a consistently high pairwise correlation of the temporal evolution of EEG microstates, which is not observed for the HMM states and which seems unlikely to be a good description of the underlying physiology. However, further investigation is required to determine the robustness and the functional and clinical relevance of EEG HMM states in comparison to EEG microstates.
Tasks EEG
Published 2016-06-07
URL http://arxiv.org/abs/1606.02344v1
PDF http://arxiv.org/pdf/1606.02344v1.pdf
PWC https://paperswithcode.com/paper/resting-state-brain-networks-from-eeg-hidden
Repo
Framework
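Once an HMM is fitted, the state sequence that microstate-style lifetimes are computed from is obtained by decoding. A minimal Viterbi decoder, the standard HMM decoding step, shows how sticky transition probabilities integrate evidence over time (the toy numbers below are illustrative, not EEG-derived).

```python
def viterbi(obs_loglik, log_trans, log_init):
    """Most probable hidden-state path given per-step observation
    log-likelihoods (rows = time, cols = states), a log transition
    matrix, and log initial-state probabilities."""
    T, K = len(obs_loglik), len(log_init)
    dp = [[0.0] * K for _ in range(T)]
    back = [[0] * K for _ in range(T)]
    for k in range(K):
        dp[0][k] = log_init[k] + obs_loglik[0][k]
    for t in range(1, T):
        for k in range(K):
            best = max(range(K), key=lambda j: dp[t - 1][j] + log_trans[j][k])
            back[t][k] = best
            dp[t][k] = dp[t - 1][best] + log_trans[best][k] + obs_loglik[t][k]
    # Backtrack from the best final state.
    path = [max(range(K), key=lambda k: dp[T - 1][k])]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

With self-transition probability 0.9, the decoder stays in a state until the observation evidence clearly favors switching, producing the sub-second state lifetimes the abstract reports.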

QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding

Title QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
Authors Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, Milan Vojnovic
Abstract Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to excellent scalability properties of this algorithm, and to its efficiency in the context of training deep neural networks. A fundamental barrier for parallelizing large-scale SGD is the fact that the cost of communicating the gradient updates between nodes can be very large. Consequently, lossy compression heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always provably converge, and it is not clear whether they are optimal. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes which allow the compression of gradient updates at each node, while guaranteeing convergence under standard assumptions. QSGD allows the user to trade off compression and convergence time: it can communicate a sublinear number of bits per iteration in the model dimension, and can achieve asymptotically optimal communication cost. We complement our theoretical results with empirical data, showing that QSGD can significantly reduce communication cost, while being competitive with standard uncompressed techniques on a variety of real tasks. In particular, experiments show that gradient quantization applied to training of deep neural networks for image classification and automated speech recognition can lead to significant reductions in communication cost, and end-to-end training time. For instance, on 16 GPUs, we are able to train a ResNet-152 network on ImageNet 1.8x faster to full accuracy. Of note, we show that there exist generic parameter settings under which all known network architectures preserve or slightly improve their full accuracy when using quantization.
Tasks Image Classification, Quantization, Speech Recognition
Published 2016-10-07
URL http://arxiv.org/abs/1610.02132v4
PDF http://arxiv.org/pdf/1610.02132v4.pdf
PWC https://paperswithcode.com/paper/qsgd-communication-efficient-sgd-via-gradient
Repo
Framework
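The core QSGD primitive is stochastic quantization of a gradient vector: each coordinate is rounded to one of `s` uniform levels of its normalized magnitude, randomly up or down so the result is unbiased in expectation. This is a simplified single-vector sketch of the scheme, without the variable-length encoding of the quantized integers.

```python
import math
import random

def stochastic_quantize(v, s):
    """QSGD-style stochastic quantization: scale by the L2 norm, round
    |v_i|/||v|| to one of s levels with probabilities that make the
    estimate unbiased, and keep the sign."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        return [0.0] * len(v)
    out = []
    for x in v:
        r = abs(x) / norm * s            # position in [0, s]
        low = math.floor(r)
        p = r - low                       # probability of rounding up
        level = low + (1 if random.random() < p else 0)
        out.append(math.copysign(norm * level / s, x))
    return out
```

Each coordinate then needs only about log2(s) bits plus a sign, and the norm is sent once per vector, which is the source of the communication savings.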

Local High-order Regularization on Data Manifolds

Title Local High-order Regularization on Data Manifolds
Authors Kwang In Kim, James Tompkin, Hanspeter Pfister, Christian Theobalt
Abstract The common graph Laplacian regularizer is well-established in semi-supervised learning and spectral dimensionality reduction. However, as a first-order regularizer, it can lead to degenerate functions in high-dimensional manifolds. The iterated graph Laplacian enables high-order regularization, but it has a high computational complexity and so cannot be applied to large problems. We introduce a new regularizer which is globally high order and so does not suffer from the degeneracy of the graph Laplacian regularizer, but is also sparse for efficient computation in semi-supervised learning applications. We reduce computational complexity by building a local first-order approximation of the manifold as a surrogate geometry, and construct our high-order regularizer based on local derivative evaluations therein. Experiments on human body shape and pose analysis demonstrate the effectiveness and efficiency of our method.
Tasks Dimensionality Reduction
Published 2016-02-11
URL http://arxiv.org/abs/1602.03805v1
PDF http://arxiv.org/pdf/1602.03805v1.pdf
PWC https://paperswithcode.com/paper/local-high-order-regularization-on-data
Repo
Framework
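The baseline the paper improves on is the graph Laplacian regularizer. A minimal sketch of the object and its smoothness penalty (which equals one half of the weighted sum of squared differences across edges) makes the first-order nature of the regularizer concrete.

```python
def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W for a symmetric weight
    matrix W given as a list of rows."""
    n = len(W)
    return [[(sum(W[i]) if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

def laplacian_quadratic(L, f):
    """Smoothness penalty f^T L f = 1/2 * sum_ij w_ij (f_i - f_j)^2."""
    return sum(f[i] * sum(L[i][j] * f[j] for j in range(len(f)))
               for i in range(len(f)))
```

Because the penalty only involves first differences across edges, it can be minimized by near-constant functions in high dimensions, which is the degeneracy the paper's high-order regularizer is designed to avoid.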

Makeup like a superstar: Deep Localized Makeup Transfer Network

Title Makeup like a superstar: Deep Localized Makeup Transfer Network
Authors Si Liu, Xinyu Ou, Ruihe Qian, Wei Wang, Xiaochun Cao
Abstract In this paper, we propose a novel Deep Localized Makeup Transfer Network to automatically recommend the most suitable makeup for a female face and synthesize the makeup on that face. Given a before-makeup face, the most suitable makeup is determined automatically. Then, both the before-makeup and the reference faces are fed into the proposed Deep Transfer Network to generate the after-makeup face. Our end-to-end makeup transfer network has several nice properties: (1) complete functionality, including foundation, lip gloss, and eye shadow transfer; (2) cosmetic-specific: different cosmetics are transferred in different manners; (3) localized: different cosmetics are applied to different facial regions; (4) natural-looking results without obvious artifacts; (5) controllable makeup lightness: various results from light to heavy makeup can be generated. Qualitative and quantitative experiments show that our network performs much better than the method of [Guo and Sim, 2009] and two variants of Neural Style [Gatys et al., 2015a].
Tasks
Published 2016-04-25
URL http://arxiv.org/abs/1604.07102v1
PDF http://arxiv.org/pdf/1604.07102v1.pdf
PWC https://paperswithcode.com/paper/makeup-like-a-superstar-deep-localized-makeup
Repo
Framework

Real Time Fine-Grained Categorization with Accuracy and Interpretability

Title Real Time Fine-Grained Categorization with Accuracy and Interpretability
Authors Shaoli Huang, Dacheng Tao
Abstract A well-designed fine-grained categorization system usually has three contradictory requirements: accuracy (the ability to identify objects among subordinate categories); interpretability (the ability to provide a human-understandable explanation of recognition system behavior); and efficiency (the speed of the system). To handle the trade-off between accuracy and interpretability, we propose a novel “Deeper Part-Stacked CNN” architecture armed with interpretability by modeling subtle differences between object parts. The proposed architecture consists of a part localization network, a two-stream classification network that simultaneously encodes object-level and part-level cues, and a feature vector fusion component. Specifically, the part localization network is implemented by exploring a new paradigm for key point localization that first samples a small number of representative pixels and then determines their labels via a convolutional layer followed by a softmax layer. We also use a cropping layer to extract part features and propose a scale mean-max layer for feature fusion learning. Experimentally, our proposed method outperforms state-of-the-art approaches on both the part localization and classification tasks on Caltech-UCSD Birds-200-2011. Moreover, by adopting a set of sharing strategies between the computation of multiple object parts, our single model is fairly efficient, running at 32 frames/sec.
Tasks
Published 2016-10-04
URL http://arxiv.org/abs/1610.00824v1
PDF http://arxiv.org/pdf/1610.00824v1.pdf
PWC https://paperswithcode.com/paper/real-time-fine-grained-categorization-with
Repo
Framework
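One plausible reading of a mean-max fusion layer is concatenating the element-wise mean and element-wise max of the part feature vectors, so the fused vector keeps both the average and the strongest part responses. This is an assumption about the layer's form; the paper's exact "scale mean-max" layer may differ.

```python
def mean_max_fusion(part_features):
    """Fuse part feature vectors by concatenating their element-wise
    mean and element-wise max (assumed form of mean-max fusion)."""
    dim = len(part_features[0])
    count = len(part_features)
    mean = [sum(f[k] for f in part_features) / count for k in range(dim)]
    peak = [max(f[k] for f in part_features) for k in range(dim)]
    return mean + peak
```

The fused vector has twice the dimensionality of a single part feature and feeds the downstream classifier.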

Automatic Classification of Irregularly Sampled Time Series with Unequal Lengths: A Case Study on Estimated Glomerular Filtration Rate

Title Automatic Classification of Irregularly Sampled Time Series with Unequal Lengths: A Case Study on Estimated Glomerular Filtration Rate
Authors Santosh Tirunagari, Simon Bull, Norman Poh
Abstract A patient’s estimated glomerular filtration rate (eGFR) can provide important information about disease progression and kidney function. Traditionally, an eGFR time series is interpreted by a human expert labelling it as stable or unstable. While this approach works for individual patients, its time-consuming nature precludes the quick evaluation of risk in large numbers of patients. However, automating this process poses significant challenges, as eGFR measurements are usually recorded at irregular intervals and the series of measurements differs in length between patients. Here we present a two-tier system to automatically classify an eGFR trend. First, we model the time series using Gaussian process regression (GPR) to fill in ‘gaps’, resampling to a fixed-size vector of fifty time-dependent observations. Second, we classify the resampled eGFR time series using a K-NN/SVM classifier, and evaluate its performance via 5-fold cross-validation. Using this approach we achieved an F-score of 0.90, compared to 0.96 for five human experts scored amongst themselves.
Tasks Time Series
Published 2016-05-17
URL http://arxiv.org/abs/1605.05142v1
PDF http://arxiv.org/pdf/1605.05142v1.pdf
PWC https://paperswithcode.com/paper/automatic-classification-of-irregularly
Repo
Framework
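The first tier, turning an irregular series into a fixed-size vector, can be sketched with simple linear interpolation onto an even grid. This is a simplified stand-in: the paper uses Gaussian process regression, which additionally smooths the series and quantifies uncertainty; the grid size defaults to the paper's fifty points.

```python
def resample_fixed(times, values, n=50):
    """Resample an irregularly sampled series (sorted `times`, matching
    `values`) onto n evenly spaced time points by linear interpolation.
    Stand-in for the paper's GPR-based resampling; requires n >= 2."""
    t0, t1 = times[0], times[-1]
    grid = [t0 + (t1 - t0) * i / (n - 1) for i in range(n)]
    out, j = [], 0
    for g in grid:
        # Advance to the segment [times[j], times[j+1]] containing g.
        while j < len(times) - 2 and times[j + 1] <= g:
            j += 1
        ta, tb = times[j], times[j + 1]
        w = 0.0 if tb == ta else (g - ta) / (tb - ta)
        out.append(values[j] + w * (values[j + 1] - values[j]))
    return out
```

The resulting fixed-length vectors are directly comparable across patients, which is what makes the second-tier K-NN/SVM classification possible.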

Lifted Relational Algebra with Recursion and Connections to Modal Logic

Title Lifted Relational Algebra with Recursion and Connections to Modal Logic
Authors Eugenia Ternovska
Abstract We propose a new formalism for specifying and reasoning about problems that involve heterogeneous “pieces of information” – large collections of data, decision procedures of any kind and complexity and connections between them. The essence of our proposal is to lift Codd’s relational algebra from operations on relational tables to operations on classes of structures (with recursion), and to add a direction of information propagation. We observe the presence of information propagation in several formalisms for efficient reasoning and use it to express unary negation and operations used in graph databases. We carefully analyze several reasoning tasks and establish a precise connection between a generalized query evaluation and temporal logic model checking. Our development allows us to reveal a general correspondence between classical and modal logics and may shed a new light on the good computational properties of modal logics and related formalisms.
Tasks
Published 2016-12-29
URL http://arxiv.org/abs/1612.09251v1
PDF http://arxiv.org/pdf/1612.09251v1.pdf
PWC https://paperswithcode.com/paper/lifted-relational-algebra-with-recursion-and
Repo
Framework

Fill it up: Exploiting partial dependency annotations in a minimum spanning tree parser

Title Fill it up: Exploiting partial dependency annotations in a minimum spanning tree parser
Authors Liang Sun, Jason Mielens, Jason Baldridge
Abstract Unsupervised models of dependency parsing typically require large amounts of clean, unlabeled data plus gold-standard part-of-speech tags. Adding indirect supervision (e.g. language universals and rules) can help, but we show that obtaining small amounts of direct supervision - here, partial dependency annotations - provides a strong balance between zero and full supervision. We adapt the unsupervised ConvexMST dependency parser to learn from partial dependencies expressed in the Graph Fragment Language. With less than 24 hours of total annotation, we obtain 7% and 17% absolute improvement in unlabeled dependency scores for English and Spanish, respectively, compared to the same parser using only universal grammar constraints.
Tasks Dependency Parsing
Published 2016-11-26
URL http://arxiv.org/abs/1611.08765v1
PDF http://arxiv.org/pdf/1611.08765v1.pdf
PWC https://paperswithcode.com/paper/fill-it-up-exploiting-partial-dependency
Repo
Framework