July 27, 2019

3242 words 16 mins read

Paper Group ANR 693

An Accelerated Communication-Efficient Primal-Dual Optimization Framework for Structured Machine Learning

Title An Accelerated Communication-Efficient Primal-Dual Optimization Framework for Structured Machine Learning
Authors Chenxin Ma, Martin Jaggi, Frank E. Curtis, Nathan Srebro, Martin Takáč
Abstract Distributed optimization algorithms are essential for training machine learning models on very large-scale datasets. However, they often suffer from communication bottlenecks. To address this issue, a communication-efficient primal-dual coordinate ascent framework (CoCoA) and its improved variant CoCoA+ have been proposed, achieving a convergence rate of $\mathcal{O}(1/t)$ for solving empirical risk minimization problems with Lipschitz continuous losses. In this paper, an accelerated variant of CoCoA+ is proposed and shown to possess a convergence rate of $\mathcal{O}(1/t^2)$ in terms of reducing suboptimality. The analysis of this rate is also notable in that the convergence rate bounds involve constants that, except in extreme cases, are significantly reduced compared to those previously provided for CoCoA+. The results of numerical experiments are provided to show that acceleration can lead to significant performance gains.
Tasks Distributed Optimization
Published 2017-11-14
URL http://arxiv.org/abs/1711.05305v1
PDF http://arxiv.org/pdf/1711.05305v1.pdf
PWC https://paperswithcode.com/paper/an-accelerated-communication-efficient-primal
Repo
Framework
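
The $\mathcal{O}(1/t^2)$ rate is characteristic of Nesterov-style acceleration. Below is a minimal single-machine sketch of that momentum pattern, assuming a smooth objective with known smoothness constant L; the paper's actual contribution is applying acceleration to the distributed primal-dual CoCoA+ subproblems, which is not reproduced here.

```python
import numpy as np

def accelerated_gd(grad, x0, L, iters=200):
    """Nesterov's accelerated gradient method: O(1/t^2) suboptimality."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                       # gradient step at look-ahead point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + (t - 1) / t_next * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x

# Toy usage: least squares 0.5*||Ax - b||^2, whose smoothness constant is
# the largest eigenvalue of A^T A.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
L = np.linalg.eigvalsh(A.T @ A).max()
x_star = accelerated_gd(lambda x: A.T @ (A @ x - b), np.zeros(10), L)
```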

A Supervised Approach to Extractive Summarisation of Scientific Papers

Title A Supervised Approach to Extractive Summarisation of Scientific Papers
Authors Ed Collins, Isabelle Augenstein, Sebastian Riedel
Abstract Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author-provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods.
Tasks
Published 2017-06-13
URL http://arxiv.org/abs/1706.03946v1
PDF http://arxiv.org/pdf/1706.03946v1.pdf
PWC https://paperswithcode.com/paper/a-supervised-approach-to-extractive
Repo
Framework
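
As a hedged sketch of the supervised extractive setup, the snippet below scores sentences with a logistic classifier over a few hand-picked features (position, length, title overlap) and keeps the top-k. These features and the synthetic training data are illustrative stand-ins; the paper's models use neural sentence encodings with local and global context.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def featurise(sentences, title_words):
    feats = []
    for i, s in enumerate(sentences):
        words = set(s.lower().split())
        feats.append([
            i / max(1, len(sentences) - 1),   # relative position in the document
            len(s.split()),                   # sentence length
            len(words & title_words),         # lexical overlap with the title
        ])
    return np.array(feats, dtype=float)

def summarise(sentences, title, model, k=2):
    X = featurise(sentences, set(title.lower().split()))
    scores = model.predict_proba(X)[:, 1]     # P(sentence belongs in the summary)
    keep = sorted(np.argsort(scores)[-k:])    # top-k sentences, in document order
    return [sentences[i] for i in keep]

# In the real setting, labels come from aligning sentences against the
# author-provided summaries; synthetic labels are used here so the sketch runs.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))
y_train = (X_train[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)
doc = ["Deep nets summarise well.", "We ran many tests.", "Results beat baselines."]
print(summarise(doc, "Summarisation with deep nets", model))
```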

A Concave Optimization Algorithm for Matching Partially Overlapping Point Sets

Title A Concave Optimization Algorithm for Matching Partially Overlapping Point Sets
Authors Wei Lian, Lei Zhang
Abstract Point matching refers to the process of finding a spatial transformation and correspondences between two sets of points. In this paper, we focus on the case where there is only partial overlap between the two point sets. Following the approach of the robust point matching method, we model point matching as a mixed linear assignment-least squares problem and show that after eliminating the transformation variable, the resulting minimization with respect to point correspondence is a concave optimization problem. Furthermore, this problem has the property that the objective function can be converted into a form with few nonlinear terms via a linear transformation. Based on these properties, we employ the branch-and-bound (BnB) algorithm to optimize the resulting problem, where the dimension of the search space is small. To further improve the efficiency of the BnB algorithm, whose bottleneck is the computation of the lower bound, we propose a new lower bounding scheme which has a k-cardinality linear assignment formulation and can be efficiently solved. Experimental results show that the proposed algorithm outperforms state-of-the-art methods in terms of robustness to disturbances and point matching accuracy.
Tasks
Published 2017-01-04
URL http://arxiv.org/abs/1701.00951v1
PDF http://arxiv.org/pdf/1701.00951v1.pdf
PWC https://paperswithcode.com/paper/a-concave-optimization-algorithm-for-matching
Repo
Framework
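
Below is a generic sketch of the BnB loop over a low-dimensional box, the setting the paper reaches after eliminating the transformation variable. The paper's k-cardinality linear assignment lower bound is not reproduced; a simple Lipschitz lower bound stands in for it, and the toy objective is assumed for illustration.

```python
import heapq
import numpy as np
from itertools import count

def branch_and_bound(lo, hi, evaluate, lower_bound, tol=1e-2):
    tie = count()                                    # tie-breaker for heap entries
    mid = (lo + hi) / 2
    best_val, best_x = evaluate(mid), mid
    heap = [(lower_bound(lo, hi), next(tie), lo, hi)]
    while heap:
        lb, _, lo, hi = heapq.heappop(heap)
        if lb >= best_val - tol or np.max(hi - lo) < tol:
            continue                                 # prune, or stop refining tiny boxes
        mid = (lo + hi) / 2
        val = evaluate(mid)
        if val < best_val:
            best_val, best_x = val, mid
        d = int(np.argmax(hi - lo))                  # split along the longest edge
        lh, rl = hi.copy(), lo.copy()
        lh[d], rl[d] = mid[d], mid[d]
        for clo, chi in ((lo, lh), (rl, hi)):
            heapq.heappush(heap, (lower_bound(clo, chi), next(tie), clo, chi))
    return best_x, best_val

# Toy usage: minimise the concave function f(x) = -||x||^2 over [-1, 1]^2,
# using a Lipschitz lower bound (L = 3 is valid for this f on this box).
f = lambda x: -float(x @ x)
lb = lambda lo, hi: f((lo + hi) / 2) - 3.0 * np.linalg.norm(hi - lo) / 2
x, v = branch_and_bound(np.array([-1.0, -1.0]), np.array([1.0, 1.0]), f, lb)
print("minimiser ~", np.round(x, 2), " value ~", round(v, 3))  # near a corner, ~-2
```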

Recommendation under Capacity Constraints

Title Recommendation under Capacity Constraints
Authors Konstantina Christakopoulou, Jaya Kawale, Arindam Banerjee
Abstract In this paper, we investigate the common scenario where every candidate item for recommendation is characterized by a maximum capacity, i.e., the number of seats in a Point-of-Interest (POI) or the size of an item’s inventory. Despite the prevalence of the task of recommending items under capacity constraints in a variety of settings, to the best of our knowledge, none of the known recommender methods is designed to respect capacity constraints. To close this gap, we extend three state-of-the-art latent factor recommendation approaches: probabilistic matrix factorization (PMF), geographical matrix factorization (GeoMF), and Bayesian personalized ranking (BPR), to optimize for both recommendation accuracy and expected item usage that respects the capacity constraints. We introduce the useful concepts of user propensity to listen and item capacity. Our experimental results on real-world datasets, both for the domain of item recommendation and POI recommendation, highlight the benefit of our method for the setting of recommendation under capacity constraints.
Tasks
Published 2017-01-18
URL http://arxiv.org/abs/1701.05228v2
PDF http://arxiv.org/pdf/1701.05228v2.pdf
PWC https://paperswithcode.com/paper/recommendation-under-capacity-constraints
Repo
Framework
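
A hedged sketch of the core modeling idea: matrix factorization whose loss also penalizes expected item usage beyond capacity. The sigmoid usage proxy, hinge-style penalty, and all hyperparameters below are illustrative assumptions, not the paper's exact PMF/GeoMF/BPR extensions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def capacity_pmf(R, capacity, k=4, lr=0.01, lam=0.1, gamma=1.0, epochs=300):
    """Matrix factorization with a hinge penalty on items exceeding capacity."""
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    mask = ~np.isnan(R)
    for _ in range(epochs):
        scores = U @ V.T
        E = np.where(mask, scores - np.nan_to_num(R), 0.0)   # rating error
        S = sigmoid(scores)                                  # usage propensity proxy
        over = S.sum(axis=0) - capacity                      # per-item overflow
        G = gamma * (over > 0)[None, :] * S * (1 - S)        # penalty subgradient
        U -= lr * ((E + G) @ V + lam * U)
        V -= lr * ((E + G).T @ U + lam * V)
    return U, V

# Toy usage: 3 users x 2 items, one unobserved rating, tight item capacities.
R = np.array([[5.0, np.nan], [4.0, 1.0], [np.nan, 2.0]])
U, V = capacity_pmf(R, capacity=np.array([1.5, 1.5]))
print("predicted ratings:\n", np.round(U @ V.T, 2))
```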

Defense semantics of argumentation: encoding reasons for accepting arguments

Title Defense semantics of argumentation: encoding reasons for accepting arguments
Authors Beishui Liao, Leendert van der Torre
Abstract In this paper we show how the defense relation among abstract arguments can be used to encode the reasons for accepting arguments. After introducing a novel notion of defenses and defense graphs, we propose a defense semantics together with a new notion of defense equivalence of argument graphs, and compare defense equivalence with standard equivalence and strong equivalence, respectively. Then, based on defense semantics, we define two kinds of reasons for accepting arguments, i.e., direct reasons and root reasons, and a notion of root equivalence of argument graphs. Finally, we show how the notion of root equivalence can be used in argumentation summarization.
Tasks
Published 2017-04-30
URL http://arxiv.org/abs/1705.00303v2
PDF http://arxiv.org/pdf/1705.00303v2.pdf
PWC https://paperswithcode.com/paper/defense-semantics-of-argumentation-encoding
Repo
Framework
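
The defense relation the paper builds on can be enumerated directly from the attack relation: a defends c against b whenever a attacks b and b attacks c. A minimal sketch follows; the paper's defense graphs and defense semantics go well beyond these raw triples.

```python
def defences(attacks):
    """Triples (a, b, c): a defends c by attacking c's attacker b."""
    return [(a, b, c)
            for (a, b) in attacks
            for (b2, c) in attacks
            if b == b2]

# Toy argument graph: d attacks b and b attacks a, so d defends a against b.
attacks = [("d", "b"), ("b", "a")]
print(defences(attacks))   # [('d', 'b', 'a')]
```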

Simultaneous Feature and Body-Part Learning for Real-Time Robot Awareness of Human Behaviors

Title Simultaneous Feature and Body-Part Learning for Real-Time Robot Awareness of Human Behaviors
Authors Fei Han, Xue Yang, Christopher Reardon, Yu Zhang, Hao Zhang
Abstract Robot awareness of human actions is an essential research problem in robotics with many important real-world applications, including human-robot collaboration and teaming. Over the past few years, depth sensors have become a standard device widely used by intelligent robots for 3D perception, which can also offer human skeletal data in 3D space. Several methods based on skeletal data were designed to enable robot awareness of human actions with satisfactory accuracy. However, previous methods treated all body parts and features as equally important, without the capability to identify discriminative body parts and features. In this paper, we propose a novel simultaneous Feature And Body-part Learning (FABL) approach that simultaneously identifies discriminative body parts and features, and efficiently integrates all available information together to enable real-time robot awareness of human behaviors. We formulate FABL as a regression-like optimization problem with structured sparsity-inducing norms to model interrelationships of body parts and features. We also develop an optimization algorithm to solve the formulated problem, which possesses a theoretical guarantee to find the optimal solution. To evaluate FABL, three experiments were performed using public benchmark datasets, including the MSR Action3D and CAD-60 datasets, as well as a Baxter robot in practical assistive living applications. Experimental results show that our FABL approach obtains high recognition accuracy with a processing speed on the order of $10^4$ Hz, which makes FABL a promising method to enable real-time robot awareness of human behaviors in practical robotics applications.
Tasks
Published 2017-02-24
URL http://arxiv.org/abs/1702.07474v1
PDF http://arxiv.org/pdf/1702.07474v1.pdf
PWC https://paperswithcode.com/paper/simultaneous-feature-and-body-part-learning
Repo
Framework
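
Structured sparsity-inducing norms of the kind FABL uses are typically handled with a group-wise proximal operator that can zero out entire body parts while keeping informative ones. A hedged sketch of that building block (the l2,1 block soft-threshold; the paper's full objective and solver are not reproduced):

```python
import numpy as np

def prox_group_l21(W, groups, t):
    """Block soft-thresholding: shrink each row group of W by t, or zero it."""
    W = W.copy()
    for g in groups:                          # g indexes the rows of one group
        norm = np.linalg.norm(W[g])
        W[g] = 0.0 if norm <= t else W[g] * (1 - t / norm)
    return W

# Toy usage: two "body parts" of three features each; the weak one vanishes.
W = np.vstack([np.full((3, 2), 2.0), np.full((3, 2), 0.1)])
print(prox_group_l21(W, groups=[slice(0, 3), slice(3, 6)], t=0.5))
```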

Propositional Knowledge Representation and Reasoning in Restricted Boltzmann Machines

Title Propositional Knowledge Representation and Reasoning in Restricted Boltzmann Machines
Authors Son N. Tran
Abstract While knowledge representation and reasoning are considered the keys to human-level artificial intelligence, connectionist networks have been shown successful in a broad range of applications due to their capacity for robust learning and flexible inference under uncertainty. The idea of representing symbolic knowledge in connectionist networks has been well received and has attracted much attention from the research community, as this can establish a foundation for the integration of scalable learning and sound reasoning. A number of previous approaches map logical inference rules to feed-forward propagation in artificial neural networks (ANNs). However, the discriminative structure of an ANN requires the separation of input/output variables, which makes it difficult for general reasoning where any variable should be inferable. Other approaches address this issue by employing generative models such as symmetric connectionist networks, but these tend to be complex and convoluted. In this paper we propose a novel method to represent propositional formulas in restricted Boltzmann machines which is less complex, especially in the cases of logical implications and Horn clauses. An integration system is then developed and evaluated on real datasets, showing promising results.
Tasks
Published 2017-05-31
URL http://arxiv.org/abs/1705.10899v3
PDF http://arxiv.org/pdf/1705.10899v3.pdf
PWC https://paperswithcode.com/paper/propositional-knowledge-representation-and
Repo
Framework
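
The key idea, encoding a formula so that its satisfying assignments receive lower RBM free energy, can be checked by brute force on a small example. The weights below encode the implication x -> y (i.e. NOT x OR y) with a single hidden unit; the constants are illustrative, not necessarily the paper's exact construction.

```python
import itertools
import numpy as np

def free_energy(v, W, b_v, b_h):
    """F(v) = -b_v.v - sum_j softplus(W_j.v + b_h_j) for a binary RBM."""
    return -v @ b_v - np.sum(np.logaddexp(0.0, v @ W + b_h))

W = np.array([[-5.0], [5.0]])   # one hidden unit reading -5*x + 5*y
b_v = np.zeros(2)
b_h = np.zeros(1)

for x, y in itertools.product([0, 1], repeat=2):
    v = np.array([x, y], dtype=float)
    sat = (not x) or y
    print(f"x={x} y={y} satisfies x->y: {sat}  F={free_energy(v, W, b_v, b_h):+.3f}")
# Only the falsifying assignment (x=1, y=0) has free energy near 0; the three
# models of the formula sit at or below -log 2.
```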

Larger is Better: The Effect of Learning Rates Enjoyed by Stochastic Optimization with Progressive Variance Reduction

Title Larger is Better: The Effect of Learning Rates Enjoyed by Stochastic Optimization with Progressive Variance Reduction
Authors Fanhua Shang
Abstract In this paper, we propose a simple variant of the original stochastic variance reduction gradient (SVRG) method, which we hereafter refer to as variance reduced stochastic gradient descent (VR-SGD). Different from the choices of the snapshot point and starting point in SVRG and its proximal variant, Prox-SVRG, the two vectors of each epoch in VR-SGD are set to the average and last iterate of the previous epoch, respectively. This setting allows us to use much larger learning rates or step sizes than SVRG, e.g., 3/(7L) for VR-SGD vs. 1/(10L) for SVRG, and also makes our convergence analysis more challenging. In fact, a larger learning rate enjoyed by VR-SGD means that the variance of its stochastic gradient estimator asymptotically approaches zero more rapidly. Unlike common stochastic methods such as SVRG and proximal stochastic methods such as Prox-SVRG, we design two different update rules for smooth and non-smooth objective functions, respectively. In other words, VR-SGD can tackle non-smooth and/or non-strongly convex problems directly without using any reduction techniques such as quadratic regularizers. Moreover, we analyze the convergence properties of VR-SGD for strongly convex problems, which show that VR-SGD attains a linear convergence rate. We also provide convergence guarantees for VR-SGD on non-strongly convex problems. Experimental results show that the performance of VR-SGD is significantly better than its counterparts, SVRG and Prox-SVRG, and it is also much better than the best known stochastic method, Katyusha.
Tasks Stochastic Optimization
Published 2017-04-17
URL http://arxiv.org/abs/1704.04966v1
PDF http://arxiv.org/pdf/1704.04966v1.pdf
PWC https://paperswithcode.com/paper/larger-is-better-the-effect-of-learning-rates
Repo
Framework
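
A hedged sketch of the epoch structure described above, instantiated for ridge regression: the variance-reduction snapshot is the average iterate of the previous epoch, the next epoch starts from the last iterate, and the step size uses the quoted 3/(7L). The objective and hyperparameters are assumptions for illustration.

```python
import numpy as np

def vr_sgd(A, b, lam=0.1, epochs=20, seed=0):
    n, d = A.shape
    L = np.linalg.eigvalsh(A.T @ A / n).max() + lam      # smoothness constant
    eta = 3.0 / (7.0 * L)                                # the larger VR-SGD step
    grad_i = lambda w, i: A[i] * (A[i] @ w - b[i]) + lam * w
    full_grad = lambda w: A.T @ (A @ w - b) / n + lam * w
    rng = np.random.default_rng(seed)
    w, snapshot = np.zeros(d), np.zeros(d)
    for _ in range(epochs):
        mu = full_grad(snapshot)                         # full gradient at snapshot
        iterates = []
        for _ in range(n):
            i = rng.integers(n)
            g = grad_i(w, i) - grad_i(snapshot, i) + mu  # variance-reduced gradient
            w = w - eta * g
            iterates.append(w)
        snapshot = np.mean(iterates, axis=0)             # snapshot = epoch average
        # w itself carries over: the next epoch starts from the last iterate
    return w

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 10)); b = A @ rng.normal(size=10)
print("solution norm:", round(float(np.linalg.norm(vr_sgd(A, b))), 3))
```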

Multimodal Prediction and Personalization of Photo Edits with Deep Generative Models

Title Multimodal Prediction and Personalization of Photo Edits with Deep Generative Models
Authors Ardavan Saeedi, Matthew D. Hoffman, Stephen J. DiVerdi, Asma Ghandeharioun, Matthew J. Johnson, Ryan P. Adams
Abstract Professional-grade software applications are powerful but complicated: expert users can achieve impressive results, but novices often struggle to complete even basic tasks. Photo editing is a prime example: after loading a photo, the user is confronted with an array of cryptic sliders like “clarity”, “temp”, and “highlights”. An automatically generated suggestion could help, but there is no single “correct” edit for a given image; different experts may make very different aesthetic decisions when faced with the same image, and a single expert may make different choices depending on the intended use of the image (or on a whim). We therefore want a system that can propose multiple diverse, high-quality edits while also learning from and adapting to a user’s aesthetic preferences. In this work, we develop a statistical model that meets these objectives. Our model builds on recent advances in neural network generative modeling and scalable inference, and uses hierarchical structure to learn editing patterns across many diverse users. Empirically, we find that our model outperforms other approaches on this challenging multimodal prediction task.
Tasks
Published 2017-04-17
URL http://arxiv.org/abs/1704.04997v1
PDF http://arxiv.org/pdf/1704.04997v1.pdf
PWC https://paperswithcode.com/paper/multimodal-prediction-and-personalization-of
Repo
Framework
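
A heavily simplified, hedged sketch of the multimodal idea: model the distribution of expert slider settings and propose one edit per mode instead of a single average. The unconditional Gaussian mixture and slider names below are assumptions; the paper conditions hierarchical neural generative models on the image and the user.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "expert edits": two aesthetic styles over 3 sliders
# (clarity, temp, highlights -- names assumed for illustration).
edits = np.vstack([
    rng.normal([0.8, -0.2, 0.1], 0.05, size=(50, 3)),   # style A
    rng.normal([-0.3, 0.6, -0.4], 0.05, size=(50, 3)),  # style B
])
gmm = GaussianMixture(n_components=2, random_state=0).fit(edits)
print("proposed diverse edits:\n", np.round(gmm.means_, 2))  # one suggestion per mode
```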

Optimal modularity and memory capacity of neural reservoirs

Title Optimal modularity and memory capacity of neural reservoirs
Authors Nathaniel Rodriguez, Eduardo Izquierdo, Yong-Yeol Ahn
Abstract The neural network is a powerful computing framework that has been exploited by biological evolution and by humans for solving diverse problems. Although the computational capabilities of neural networks are determined by their structure, the current understanding of the relationships between a neural network’s architecture and function is still primitive. Here we reveal that a neural network’s modular architecture plays a vital role in determining the neural dynamics and memory performance of networks of threshold neurons. In particular, we demonstrate that there exists an optimal modularity for memory performance, where a balance between local cohesion and global connectivity is established, allowing optimally modular networks to remember longer. Our results suggest that insights from dynamical analysis of neural networks and information spreading processes can be leveraged to better design neural networks and may shed light on the brain’s modular organization.
Tasks
Published 2017-06-20
URL http://arxiv.org/abs/1706.06511v3
PDF http://arxiv.org/pdf/1706.06511v3.pdf
PWC https://paperswithcode.com/paper/optimal-modularity-and-memory-capacity-of
Repo
Framework
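
A hedged sketch of the experimental knob: a two-module network of threshold neurons whose fraction of cross-module links is tunable, with a crude perturbation-persistence probe standing in for the paper's memory measures. All sizes, densities, and thresholds below are illustrative assumptions.

```python
import numpy as np

def modular_reservoir(n=100, p_in=0.2, mix=0.1, seed=0):
    """Adjacency with intra-module density p_in and sparser cross-module links."""
    rng = np.random.default_rng(seed)
    half = n // 2
    A = np.zeros((n, n))
    for lo, hi in ((0, half), (half, n)):
        A[lo:hi, lo:hi] = rng.random((hi - lo, hi - lo)) < p_in
    cross = rng.random((n, n)) < p_in * mix
    A[:half, half:] = cross[:half, half:]
    A[half:, :half] = cross[half:, :half]
    return A

def persistence(A, theta=2.0, steps=50, seed=1):
    """Steps until a one-neuron perturbation dies out (crude memory proxy)."""
    rng = np.random.default_rng(seed)
    x = (rng.random(A.shape[0]) < 0.5).astype(float)
    y = x.copy(); y[0] = 1 - y[0]                    # flip a single neuron
    for t in range(steps):
        x = (A @ x >= theta).astype(float)           # threshold-neuron update
        y = (A @ y >= theta).astype(float)
        if np.array_equal(x, y):
            return t
    return steps

for mix in (0.0, 0.1, 0.5, 1.0):
    print(f"mix={mix:.1f}  perturbation persists {persistence(modular_reservoir(mix=mix))} steps")
```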

On Optimal Generalizability in Parametric Learning

Title On Optimal Generalizability in Parametric Learning
Authors Ahmad Beirami, Meisam Razaviyayn, Shahin Shahrampour, Vahid Tarokh
Abstract We consider the parametric learning problem, where the objective of the learner is determined by a parametric loss function. Employing empirical risk minimization, possibly with regularization, the inferred parameter vector will be biased toward the training samples. In practice, such bias is measured by cross-validation, where the dataset is partitioned into a training set used for training and a validation set, which is not used in training and is left to measure the out-of-sample performance. A classical cross-validation strategy is leave-one-out cross-validation (LOOCV), where one sample is left out for validation, training is done on the rest of the samples, and this process is repeated over all of the samples. LOOCV is rarely used in practice due to its high computational complexity. In this paper, we first develop a computationally efficient approximate LOOCV (ALOOCV) and provide theoretical guarantees for its performance. Then we use ALOOCV to provide an optimization algorithm for finding the regularizer in the empirical risk minimization framework. In our numerical experiments, we illustrate the accuracy and efficiency of ALOOCV as well as our proposed framework for the optimization of the regularizer.
Tasks
Published 2017-11-14
URL http://arxiv.org/abs/1711.05323v1
PDF http://arxiv.org/pdf/1711.05323v1.pdf
PWC https://paperswithcode.com/paper/on-optimal-generalizability-in-parametric
Repo
Framework
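
The flavor of ALOOCV can be seen in the classical special case where no approximation is needed: for ridge regression, the exact leave-one-out residuals follow from a single full-data fit via the hat matrix. The paper generalizes this kind of shortcut to general smooth regularized losses; only the ridge case is sketched here.

```python
import numpy as np

def ridge_loocv(A, b, lam):
    """Exact LOOCV MSE for ridge regression from one fit: e_i / (1 - h_ii)."""
    n, d = A.shape
    G = np.linalg.inv(A.T @ A + lam * np.eye(d))
    w = G @ A.T @ b                       # single full-data fit
    H = A @ G @ A.T                       # (regularized) hat matrix
    loo = (b - A @ w) / (1.0 - np.diag(H))
    return np.mean(loo ** 2)

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 5))
b = A @ rng.normal(size=5) + 0.1 * rng.normal(size=40)
for lam in (0.01, 0.1, 1.0, 10.0):        # pick the regularizer by LOOCV
    print(f"lam={lam:<5} LOOCV MSE={ridge_loocv(A, b, lam):.4f}")
```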

Large Margin Object Tracking with Circulant Feature Maps

Title Large Margin Object Tracking with Circulant Feature Maps
Authors Mengmeng Wang, Yong Liu, Zeyi Huang
Abstract Structured output support vector machine (SVM) based tracking algorithms have shown favorable performance recently. Nonetheless, their time-consuming candidate sampling and complex optimization limit real-time applications. In this paper, we first propose a novel large margin object tracking method which absorbs the strong discriminative ability of structured output SVM and gains significant speed from the correlation filter algorithm. Second, a multimodal target detection technique is proposed to improve target localization precision and prevent the model drift introduced by similar objects or background noise. Third, we exploit the feedback from high-confidence tracking results to avoid the model corruption problem. We implement two versions of the proposed tracker, with representations from both conventional hand-crafted features and deep convolutional neural network (CNN) based features, to validate the strong compatibility of the algorithm. The experimental results demonstrate that the proposed tracker performs superiorly against several state-of-the-art algorithms on challenging benchmark sequences while running at speeds in excess of 80 frames per second. The source code and experimental results will be made publicly available.
Tasks Object Tracking
Published 2017-03-15
URL http://arxiv.org/abs/1703.05020v2
PDF http://arxiv.org/pdf/1703.05020v2.pdf
PWC https://paperswithcode.com/paper/large-margin-object-tracking-with-circulant
Repo
Framework
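
The correlation filter core that supplies the speed-up has a well-known closed form: ridge regression over all circular shifts of a patch, solved per frequency in the Fourier domain. A minimal sketch follows; the paper's large-margin and multimodal-detection machinery is not reproduced.

```python
import numpy as np

def train_filter(patch, target, lam=1e-2):
    """Solve min ||F*W - G||^2 + lam*||W||^2 per frequency (closed form)."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target)
    return (np.conj(F) * G) / (np.conj(F) * F + lam)

def respond(W, patch):
    """Correlation response map; its peak is the predicted target location."""
    return np.real(np.fft.ifft2(W * np.fft.fft2(patch)))

size = 32
yy, xx = np.mgrid[:size, :size]
target = np.exp(-((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / 8.0)  # Gaussian label
rng = np.random.default_rng(0)
patch = rng.random((size, size))
W = train_filter(patch, target)
resp = respond(W, patch)
print("response peak at", np.unravel_index(resp.argmax(), resp.shape))  # ~(16, 16)
```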

Holistic Interstitial Lung Disease Detection using Deep Convolutional Neural Networks: Multi-label Learning and Unordered Pooling

Title Holistic Interstitial Lung Disease Detection using Deep Convolutional Neural Networks: Multi-label Learning and Unordered Pooling
Authors Mingchen Gao, Ziyue Xu, Le Lu, Adam P. Harrison, Ronald M. Summers, Daniel J. Mollura
Abstract Accurately predicting and detecting interstitial lung disease (ILD) patterns given any computed tomography (CT) slice without any pre-processing prerequisites, such as manually delineated regions of interest (ROIs), is a clinically desirable, yet challenging goal. The majority of existing work relies on manually-provided ILD ROIs to extract sampled 2D image patches from CT slices and, from there, performs patch-based ILD categorization. Acquiring manual ROIs is labor intensive and serves as a bottleneck towards fully-automated CT imaging ILD screening over large-scale populations. Furthermore, despite the considerably high frequency of more than one ILD pattern on a single CT slice, previous works are only designed to detect one ILD pattern per slice or patch. To tackle these two critical challenges, we present multi-label deep convolutional neural networks (CNNs) for detecting ILDs from holistic CT slices (instead of ROIs or sub-images). Conventional single-labeled CNN models can be augmented to cope with the possible presence of multiple ILD pattern labels, via 1) continuous-valued deep regression based robust norm loss functions or 2) a categorical objective as the sum of element-wise binary logistic losses. Our methods are evaluated and validated using a publicly available database of 658 patient CT scans under five-fold cross-validation, achieving promising performance on detecting four major ILD patterns: Ground Glass, Reticular, Honeycomb, and Emphysema. We also investigate the effectiveness of a CNN activation-based deep-feature encoding scheme using Fisher vector encoding, which treats ILD detection as spatially-unordered deep texture classification.
Tasks Computed Tomography (CT), Multi-Label Learning, Texture Classification
Published 2017-01-19
URL http://arxiv.org/abs/1701.05616v1
PDF http://arxiv.org/pdf/1701.05616v1.pdf
PWC https://paperswithcode.com/paper/holistic-interstitial-lung-disease-detection
Repo
Framework
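
Option 2) in the abstract, a categorical objective as the sum of element-wise binary logistic losses, is easy to state concretely: one sigmoid per ILD pattern per slice, so a slice may carry several positive labels at once. A minimal sketch with toy logits:

```python
import numpy as np

def multilabel_logistic_loss(logits, labels):
    """Mean over slices of sum_k log(1 + exp(-s_k * z_k)), with s in {-1, +1}."""
    signs = 2.0 * labels - 1.0                    # map {0,1} labels to {-1,+1}
    return np.mean(np.sum(np.logaddexp(0.0, -signs * logits), axis=1))

# Toy usage: 2 slices x 4 patterns (Ground Glass, Reticular, Honeycomb,
# Emphysema). The first slice exhibits two patterns simultaneously.
logits = np.array([[2.1, 1.5, -3.0, -1.0],
                   [-2.0, -1.0, 0.5, -0.3]])
labels = np.array([[1, 1, 0, 0],
                   [0, 0, 1, 0]], dtype=float)
print(multilabel_logistic_loss(logits, labels))
```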

Learning Semantic Relatedness From Human Feedback Using Metric Learning

Title Learning Semantic Relatedness From Human Feedback Using Metric Learning
Authors Thomas Niebler, Martin Becker, Christian Pölitz, Andreas Hotho
Abstract Assessing the degree of semantic relatedness between words is an important task with a variety of semantic applications, such as ontology learning for the Semantic Web, semantic search or query expansion. To accomplish this in an automated fashion, many relatedness measures have been proposed. However, most of these metrics only encode information contained in the underlying corpus and thus do not directly model human intuition. To solve this, we propose to utilize a metric learning approach to improve existing semantic relatedness measures by learning from additional information, such as explicit human feedback. For this, we argue for using word embeddings instead of traditional high-dimensional vector representations in order to leverage their semantic density and to reduce computational cost. We rigorously test our approach on several domains, including tagging data as well as publicly available embeddings based on Wikipedia texts and navigation. Human feedback about semantic relatedness for learning and evaluation is extracted from publicly available datasets such as MEN or WS-353. We find that our method can significantly improve semantic relatedness measures by learning from such feedback. For tagging data, we are the first to generate and study embeddings. Our results are of special interest for ontology and recommendation engineers, but also for any other researchers and practitioners of Semantic Web techniques.
Tasks Metric Learning, Word Embeddings
Published 2017-05-21
URL http://arxiv.org/abs/1705.07425v2
PDF http://arxiv.org/pdf/1705.07425v2.pdf
PWC https://paperswithcode.com/paper/learning-semantic-relatedness-from-human
Repo
Framework
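
A hedged sketch of the metric-learning idea: learn a diagonal reweighting of embedding dimensions so that similarities better match human relatedness judgements (as collected in MEN or WS-353). The squared-error objective, diagonal parameterization, and toy data below are assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def learn_diagonal_metric(pairs, human, E, lr=0.05, epochs=500):
    """Fit d >= 0 so that sum_k d_k * x_k * y_k approximates human scores."""
    d = np.ones(E.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(d)
        for (i, j), s in zip(pairs, human):
            pred = d @ (E[i] * E[j])                      # reweighted dot product
            grad += 2.0 * (pred - s) * (E[i] * E[j])
        d = np.maximum(0.0, d - lr * grad / len(pairs))   # keep weights non-negative
    return d

rng = np.random.default_rng(0)
E = rng.normal(size=(10, 5))               # toy embeddings for 10 "words"
pairs = [(0, 1), (2, 3), (4, 5)]
human = [0.9, 0.1, 0.5]                    # normalized human relatedness scores
print("learned dimension weights:", np.round(learn_diagonal_metric(pairs, human, E), 3))
```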

Automatic Conflict Detection in Police Body-Worn Audio

Title Automatic Conflict Detection in Police Body-Worn Audio
Authors Alistair Letcher, Jelena Trišović, Collin Cademartori, Xi Chen, Jason Xu
Abstract Automatic conflict detection has grown in relevance with the advent of body-worn technology, but existing metrics such as turn-taking and overlap are poor indicators of conflict in police-public interactions. Moreover, standard techniques to compute them fall short when applied to such diversified and noisy contexts. We develop a pipeline catered to this task combining adaptive noise removal, non-speech filtering and new measures of conflict based on the repetition and intensity of phrases in speech. We demonstrate the effectiveness of our approach on body-worn audio data collected by the Los Angeles Police Department.
Tasks
Published 2017-11-14
URL http://arxiv.org/abs/1711.05355v2
PDF http://arxiv.org/pdf/1711.05355v2.pdf
PWC https://paperswithcode.com/paper/automatic-conflict-detection-in-police-body
Repo
Framework
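
The two conflict cues named in the abstract, intensity and repetition, can be roughed out on a raw mono waveform as below. The frame sizes, the RMS intensity proxy, and the spectral self-similarity repetition score are all illustrative assumptions; the paper's pipeline (adaptive noise removal, non-speech filtering, phrase-level repetition) is far richer.

```python
import numpy as np

def frame(signal, size=400, hop=160):                 # 25 ms / 10 ms at 16 kHz
    n = 1 + (len(signal) - size) // hop
    return np.stack([signal[i * hop: i * hop + size] for i in range(n)])

def intensity(frames):
    """Per-frame RMS energy; sustained spikes suggest raised voices."""
    return np.sqrt(np.mean(frames ** 2, axis=1))

def repetition_score(frames):
    """Mean off-diagonal cosine similarity of spectral frames (crude proxy)."""
    S = np.abs(np.fft.rfft(frames, axis=1))
    S /= np.linalg.norm(S, axis=1, keepdims=True) + 1e-9
    sim = S @ S.T
    return float(sim[np.triu_indices_from(sim, k=1)].mean())

rng = np.random.default_rng(0)
audio = rng.normal(size=16000)                        # 1 s of toy "audio"
fr = frame(audio)
print("mean RMS:", intensity(fr).mean(), " repetition:", repetition_score(fr))
```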