July 28, 2019

2973 words 14 mins read

Paper Group ANR 315

Adaptive Matching for Expert Systems with Uncertain Task Types

Title Adaptive Matching for Expert Systems with Uncertain Task Types
Authors Virag Shah, Lennart Gulikers, Laurent Massoulie, Milan Vojnovic
Abstract A matching in a two-sided market often incurs an externality: a matched resource may become unavailable to the other side of the market, at least for a while. This is especially an issue in online platforms involving human experts, as the expert resources are often scarce. The efficient utilization of experts in these platforms is made challenging by the fact that the information available about the parties involved is usually limited. To address this challenge, we develop a model of a task-expert matching system where a task is matched to an expert using not only the prior information about the task but also the feedback obtained from past matches. In our model the tasks arrive online while the experts are fixed and constrained by a finite service capacity. For this model, we characterize the maximum task resolution throughput a platform can achieve. We show that the natural greedy approach, in which each expert is assigned the task most suitable to her skill, is suboptimal, as it does not internalize the above externality. We develop a throughput-optimal backpressure algorithm which does so by accounting for the 'congestion' among different task types. Finally, we validate our model and confirm our theoretical findings with data-driven simulations via logs of Math.StackExchange, a Stack Exchange forum dedicated to mathematics.
Tasks
Published 2017-03-02
URL http://arxiv.org/abs/1703.00674v3
PDF http://arxiv.org/pdf/1703.00674v3.pdf
PWC https://paperswithcode.com/paper/adaptive-matching-for-expert-systems-with
Repo
Framework
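To make the greedy-versus-backpressure contrast in the abstract above concrete, here is a minimal toy simulation. The skill matrix, arrival process, and round-robin expert availability are illustrative assumptions, not the paper's model; the point is only how backpressure weights an expert's skill by the backlog of each task type.

```python
import numpy as np

# Illustrative setup (not the paper's exact model): a queue per task type,
# and a success-probability matrix p[expert, task_type] describing skills.
rng = np.random.default_rng(0)
n_experts, n_types = 3, 4
p = rng.uniform(0.2, 0.9, size=(n_experts, n_types))   # assumed skill matrix
queues = np.zeros(n_types)                              # backlog per task type

def greedy_assignment(expert):
    # Greedy: always pick the type the expert is best at (ignores congestion).
    return int(np.argmax(p[expert]))

def backpressure_assignment(expert):
    # Backpressure: weight each type by backlog * service rate, so congested
    # task types get served even by less-specialized experts.
    return int(np.argmax(queues * p[expert]))

for t in range(1000):
    queues[rng.integers(n_types)] += 1          # one task arrives per slot
    expert = t % n_experts                      # round-robin expert availability
    k = backpressure_assignment(expert)
    if queues[k] > 0 and rng.random() < p[expert, k]:
        queues[k] -= 1                          # task resolved

print("final backlog per task type:", queues)
```

Swapping in `greedy_assignment` typically lets the task types that nobody specializes in pile up, which is the externality the abstract refers to.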

Multi-Document Summarization using Distributed Bag-of-Words Model

Title Multi-Document Summarization using Distributed Bag-of-Words Model
Authors Kaustubh Mani, Ishan Verma, Hardik Meisheri, Lipika Dey
Abstract As the number of documents on the web grows exponentially, multi-document summarization is becoming more and more important, since it can provide the main ideas in a document set in a short time. In this paper, we present an unsupervised centroid-based document-level reconstruction framework using a distributed bag-of-words model. Specifically, our approach selects summary sentences in order to minimize the reconstruction error between the summary and the documents. We apply sentence selection and beam search to further improve the performance of our model. Experimental results on two different datasets show significant performance gains compared with the state-of-the-art baselines.
Tasks Document Summarization, Multi-Document Summarization
Published 2017-10-07
URL http://arxiv.org/abs/1710.02745v2
PDF http://arxiv.org/pdf/1710.02745v2.pdf
PWC https://paperswithcode.com/paper/multi-document-summarization-using
Repo
Framework
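A minimal sketch of the centroid-based reconstruction idea described in the abstract above. The paper uses distributed bag-of-words (Doc2Vec) sentence embeddings and beam search; here random vectors and a plain greedy loop stand in for both, so only the selection criterion, minimizing the distance between the summary centroid and the document centroid, is illustrated.

```python
import numpy as np

def summarize(sentence_vectors, k):
    # Greedily pick k sentences whose centroid best reconstructs the
    # centroid of all sentences (a stand-in for the documents' centroid).
    doc_centroid = sentence_vectors.mean(axis=0)
    selected = []
    for _ in range(k):
        best, best_err = None, np.inf
        for i in range(len(sentence_vectors)):
            if i in selected:
                continue
            cand = selected + [i]
            summary_centroid = sentence_vectors[cand].mean(axis=0)
            err = np.linalg.norm(doc_centroid - summary_centroid)  # reconstruction error
            if err < best_err:
                best, best_err = i, err
        selected.append(best)
    return selected

vectors = np.random.rand(20, 50)   # 20 sentences embedded in a 50-dim space (synthetic)
print(summarize(vectors, 3))
```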

Efficient Online Learning for Optimizing Value of Information: Theory and Application to Interactive Troubleshooting

Title Efficient Online Learning for Optimizing Value of Information: Theory and Application to Interactive Troubleshooting
Authors Yuxin Chen, Jean-Michel Renders, Morteza Haghir Chehreghani, Andreas Krause
Abstract We consider the optimal value of information (VoI) problem, where the goal is to sequentially select a set of tests with minimal cost, so that one can efficiently make the best decision based on the observed outcomes. Existing algorithms are either heuristics with no guarantees, or scale poorly (with exponential run time in terms of the number of available tests). Moreover, these methods assume a known distribution over the test outcomes, which is often not the case in practice. We propose an efficient sampling-based online learning framework to address the above issues. First, assuming the distribution over hypotheses is known, we propose a dynamic hypothesis enumeration strategy, which allows efficient information gathering with strong theoretical guarantees. We show that with a sufficient number of samples, one can identify a near-optimal decision with high probability. Second, when the parameters of the hypothesis distribution are unknown, we propose an algorithm which learns the parameters progressively via posterior sampling in an online fashion. We further establish a rigorous bound on the expected regret. We demonstrate the effectiveness of our approach on a real-world interactive troubleshooting application and show that one can efficiently make high-quality decisions with low cost.
Tasks
Published 2017-03-16
URL http://arxiv.org/abs/1703.05452v2
PDF http://arxiv.org/pdf/1703.05452v2.pdf
PWC https://paperswithcode.com/paper/efficient-online-learning-for-optimizing
Repo
Framework
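The sketch below illustrates sequential test selection in the simplest possible setting: a fixed sample of hypotheses with deterministic, unit-cost test outcomes and a greedy information-gain rule. The paper's dynamic hypothesis enumeration, cost model, and posterior-sampling learner are not reproduced; every quantity here is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hyp, n_tests = 50, 8
outcomes = rng.integers(0, 2, size=(n_hyp, n_tests))   # assumed deterministic test outcomes
posterior = np.full(n_hyp, 1.0 / n_hyp)

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def pick_test(posterior, available):
    # Greedy rule: choose the test with the largest expected entropy reduction.
    best, best_gain = None, -1.0
    for t in available:
        gain = entropy(posterior)
        for o in (0, 1):
            mask = outcomes[:, t] == o
            p_o = posterior[mask].sum()
            if p_o > 0:
                gain -= p_o * entropy(posterior[mask] / p_o)
        if gain > best_gain:
            best, best_gain = t, gain
    return best

true_h = 7                                  # hidden "root cause" to be identified
available = set(range(n_tests))
while entropy(posterior) > 1e-6 and available:
    t = pick_test(posterior, available)
    available.remove(t)
    o = outcomes[true_h, t]                 # observe the outcome of the chosen test
    posterior = np.where(outcomes[:, t] == o, posterior, 0.0)
    posterior /= posterior.sum()

print("most likely hypothesis:", int(posterior.argmax()))
```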

Search Intelligence: Deep Learning For Dominant Category Prediction

Title Search Intelligence: Deep Learning For Dominant Category Prediction
Authors Zeeshan Khawar Malik, Mo Kobrosli, Peter Maas
Abstract Deep Neural Networks, and specifically fully-connected convolutional neural networks, are achieving remarkable results across a wide variety of domains. They have been trained to achieve state-of-the-art performance when applied to problems such as speech recognition, image classification, natural language processing and bioinformatics. Most of these deep learning models, when applied to classification, employ the softmax activation function for prediction and aim to minimize cross-entropy loss. In this paper, we propose a supervised model for dominant category prediction to improve search recall across all eBay classifieds platforms. The dominant category label for each query over the last 90 days is first calculated by summing the total number of collaborative clicks within each category; the category with the highest number of collaborative clicks for a given query is considered its dominant category. Second, each query is transformed into a numeric vector by mapping each unique word in the query document to a unique integer value, with all vectors padded to equal length based on the maximum document length within the pre-defined vocabulary size. A fully-connected deep convolutional neural network (CNN) is then applied for classification. The proposed model achieves very high classification accuracy compared to other state-of-the-art machine learning techniques.
Tasks Image Classification, Speech Recognition
Published 2017-02-06
URL http://arxiv.org/abs/1702.01717v1
PDF http://arxiv.org/pdf/1702.01717v1.pdf
PWC https://paperswithcode.com/paper/search-intelligence-deep-learning-for
Repo
Framework
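A small sketch of the data preparation described in the abstract above: computing the dominant category of a query from click counts, and turning the query into a fixed-length integer sequence for the CNN. The click log, vocabulary handling, and padding length are invented for illustration; the CNN itself is not shown.

```python
from collections import Counter, defaultdict

# Hypothetical click log: (query, clicked category) pairs over the window.
click_log = [
    ("iphone 7 case", "Electronics"),
    ("iphone 7 case", "Electronics"),
    ("iphone 7 case", "Fashion"),
    ("garden chair", "Home & Garden"),
]

# Dominant category = category with the most clicks for the query.
clicks = defaultdict(Counter)
for query, category in click_log:
    clicks[query][category] += 1
dominant = {q: c.most_common(1)[0][0] for q, c in clicks.items()}

# Map each unique word to an integer id and pad to a fixed length.
vocab = {}
def encode(query, max_len=6):
    ids = [vocab.setdefault(w, len(vocab) + 1) for w in query.split()]
    return (ids + [0] * max_len)[:max_len]          # pad/truncate to max_len

for q in dominant:
    print(q, "->", dominant[q], encode(q))
```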

Simulated Annealing Algorithm for Graph Coloring

Title Simulated Annealing Algorithm for Graph Coloring
Authors Alper Kose, Berke Aral Sonmez, Metin Balaban
Abstract The goal of this Random Walks project is to code and experiment with the Markov Chain Monte Carlo (MCMC) method for the problem of graph coloring. In this report, we present plots of the cost function $\mathbf{H}$ obtained by varying parameters such as $\mathbf{q}$ (the number of colors that can be used in the coloring) and $\mathbf{c}$ (the average node degree). The results are obtained using a simulated annealing scheme, where the temperature (the inverse of $\mathbf{\beta}$) parameter in the MCMC is lowered progressively.
Tasks
Published 2017-12-03
URL http://arxiv.org/abs/1712.00709v1
PDF http://arxiv.org/pdf/1712.00709v1.pdf
PWC https://paperswithcode.com/paper/simulated-annealing-algorithm-for-graph
Repo
Framework
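A toy implementation of the scheme described above: the cost $\mathbf{H}$ counts monochromatic edges of a random graph, a Metropolis step recolors one node at a time, and $\mathbf{\beta}$ is increased (temperature lowered) along an annealing schedule. The graph size, schedule, and parameter values are chosen here for illustration, not taken from the report.

```python
import math
import random

def anneal(n=60, c=4.0, q=5, steps=20000):
    p = c / (n - 1)                                    # Erdos-Renyi edge probability for average degree c
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < p]
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    color = [random.randrange(q) for _ in range(n)]
    H = sum(color[i] == color[j] for i, j in edges)    # cost: number of monochromatic edges
    for step in range(steps):
        beta = 0.1 + 3.0 * step / steps                # assumed linear annealing schedule
        v = random.randrange(n)
        new = random.randrange(q)
        # Change in H if node v is recolored to `new`.
        delta = sum((new == color[u]) - (color[v] == color[u]) for u in adj[v])
        if delta <= 0 or random.random() < math.exp(-beta * delta):   # Metropolis acceptance
            color[v] = new
            H += delta
    return H

print("monochromatic edges after annealing:", anneal())
```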

Responsible Autonomy

Title Responsible Autonomy
Authors Virginia Dignum
Abstract As intelligent systems are increasingly making decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems.
Tasks
Published 2017-06-08
URL http://arxiv.org/abs/1706.02513v1
PDF http://arxiv.org/pdf/1706.02513v1.pdf
PWC https://paperswithcode.com/paper/responsible-autonomy
Repo
Framework

Gland Segmentation in Histopathology Images Using Random Forest Guided Boundary Construction

Title Gland Segmentation in Histopathology Images Using Random Forest Guided Boundary Construction
Authors Rohith AP, Salman S. Khan, Kumar Anubhav, Angshuman Paul
Abstract Grading of cancer is important to know the extent of its spread. Prior to grading, segmentation of glandular structures is important. Manual segmentation is a time-consuming process and is subject to observer bias. Hence, an automated process is required to segment the gland structures. These glands show a large variation in shape, size and texture. This makes the task challenging, as the glands cannot be segmented using mere morphological operations and conventional segmentation mechanisms. In this project, we propose a method that detects the boundary epithelial cells of glands and then uses a novel approach to construct the complete gland boundary. The region enclosed within the boundary can then be obtained to get the segmented gland regions.
Tasks
Published 2017-05-14
URL http://arxiv.org/abs/1705.04924v3
PDF http://arxiv.org/pdf/1705.04924v3.pdf
PWC https://paperswithcode.com/paper/gland-segmentation-in-histopathology-images
Repo
Framework
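The first stage described above, classifying cells or patches as boundary epithelium, can be sketched with an off-the-shelf random forest. The features and labels below are synthetic stand-ins, and the paper's subsequent boundary-construction step is not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                     # 8 texture/intensity features per patch (assumed)
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # synthetic "boundary epithelial cell" labels

# Train on the first 400 patches, evaluate on the held-out 100.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
print("held-out accuracy:", clf.score(X[400:], y[400:]))
```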

Joint Detection and Recounting of Abnormal Events by Learning Deep Generic Knowledge

Title Joint Detection and Recounting of Abnormal Events by Learning Deep Generic Knowledge
Authors Ryota Hinami, Tao Mei, Shin’ichi Satoh
Abstract This paper addresses the problem of joint detection and recounting of abnormal events in videos. Recounting of abnormal events, i.e., explaining why they are judged to be abnormal, is an unexplored but critical task in video surveillance, because it helps human observers quickly judge whether they are false alarms. To describe events in a human-understandable form for event recounting, learning generic knowledge about visual concepts (e.g., objects and actions) is crucial. Although convolutional neural networks (CNNs) have achieved promising results in learning such concepts, it remains an open question how to effectively use CNNs for abnormal event detection, mainly due to the environment-dependent nature of anomaly detection. In this paper, we tackle this problem by integrating a generic CNN model and environment-dependent anomaly detectors. Our approach first learns a CNN with multiple visual tasks to exploit semantic information that is useful for detecting and recounting abnormal events. By appropriately plugging the model into anomaly detectors, we can detect and recount abnormal events while taking advantage of the discriminative power of CNNs. Our approach outperforms the state of the art on the Avenue and UCSD Ped2 benchmarks for abnormal event detection and also produces promising results for abnormal event recounting.
Tasks Anomaly Detection
Published 2017-09-26
URL http://arxiv.org/abs/1709.09121v1
PDF http://arxiv.org/pdf/1709.09121v1.pdf
PWC https://paperswithcode.com/paper/joint-detection-and-recounting-of-abnormal
Repo
Framework
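A rough sketch of the split described above between generic features and an environment-dependent anomaly detector. Random vectors stand in for the multi-task CNN features, and a nearest-neighbor distance to normal training frames stands in for the paper's detectors, so only the plug-in structure is illustrated.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
normal_feats = rng.normal(size=(200, 128))               # stand-in CNN features of normal events
detector = NearestNeighbors(n_neighbors=1).fit(normal_feats)   # environment-dependent detector

test_feats = np.vstack([
    rng.normal(size=(5, 128)),                            # normal-looking test features
    rng.normal(loc=4.0, size=(5, 128)),                   # abnormal-looking test features
])
scores, _ = detector.kneighbors(test_feats)               # larger distance = more anomalous
print("anomaly scores:", scores.ravel().round(2))
```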

Robust Visual Tracking via Hierarchical Convolutional Features

Title Robust Visual Tracking via Hierarchical Convolutional Features
Authors Chao Ma, Jia-Bin Huang, Xiaokang Yang, Ming-Hsuan Yang
Abstract In this paper, we propose to exploit the rich hierarchical features of deep convolutional neural networks to improve the accuracy and robustness of visual tracking. Deep neural networks trained on object recognition datasets consist of multiple convolutional layers. These layers encode target appearance at different levels of abstraction. For example, the outputs of the last convolutional layers encode the semantic information of targets, and such representations are invariant to significant appearance variations. However, their spatial resolutions are too coarse to precisely localize the target. In contrast, features from earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchical features of convolutional layers as a nonlinear counterpart of an image pyramid representation and explicitly exploit these multiple levels of abstraction to represent target objects. Specifically, we learn adaptive correlation filters on the outputs of each convolutional layer to encode the target appearance. We infer the maximum response of each layer to locate targets in a coarse-to-fine manner. To further handle scale estimation and the re-detection of targets after tracking failures caused by heavy occlusion or out-of-view movement, we conservatively learn another correlation filter that maintains a long-term memory of target appearance, as a discriminative classifier. We apply the classifier to two types of object proposals: (1) proposals with a small step size, tightly around the estimated location, for scale estimation; and (2) proposals with a large step size, across the whole image, for target re-detection. Extensive experimental results on large-scale benchmark datasets show that the proposed algorithm performs favorably against state-of-the-art tracking methods.
Tasks Object Recognition, Visual Tracking
Published 2017-07-12
URL http://arxiv.org/abs/1707.03816v2
PDF http://arxiv.org/pdf/1707.03816v2.pdf
PWC https://paperswithcode.com/paper/robust-visual-tracking-via-hierarchical
Repo
Framework
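The per-layer building block, a linear correlation filter learned by ridge regression in the Fourier domain, can be sketched as follows. The paper learns such a filter on each convolutional layer's output and fuses the response maps coarse-to-fine, which is not shown here; the single-channel random patch is purely illustrative.

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    # Ridge regression solved independently at each frequency.
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def respond(filt, z):
    # Response map for a new search patch z.
    return np.real(np.fft.ifft2(filt * np.fft.fft2(z)))

size = 64
yy, xx = np.mgrid[:size, :size]
label = np.exp(-((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / 50.0)  # Gaussian regression target
patch = np.random.rand(size, size)                                      # stand-in feature map

f = train_filter(patch, label)
response = respond(f, patch)
print("peak at:", np.unravel_index(response.argmax(), response.shape))   # near the patch center
```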

Investigating the Parameter Space of Evolutionary Algorithms

Title Investigating the Parameter Space of Evolutionary Algorithms
Authors Moshe Sipper, Weixuan Fu, Karuna Ahuja, Jason H. Moore
Abstract The practice of evolutionary algorithms involves the tuning of many parameters. How big should the population be? How many generations should the algorithm run? What is the (tournament selection) tournament size? What probabilities should one assign to crossover and mutation? Through an extensive series of experiments over multiple evolutionary algorithm implementations and problems, we show that parameter space tends to be rife with viable parameters, at least for the 25 problems studied herein. We discuss the implications of this finding in practice.
Tasks
Published 2017-06-13
URL http://arxiv.org/abs/1706.04119v3
PDF http://arxiv.org/pdf/1706.04119v3.pdf
PWC https://paperswithcode.com/paper/investigating-the-parameter-space-of
Repo
Framework
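As a concrete illustration of the kind of sweep described above, the sketch below runs a tiny genetic algorithm on the OneMax problem over a small grid of population size, tournament size, and crossover/mutation probabilities. The problem, grid, and budget are my own choices, not the paper's experimental setup.

```python
import random
from itertools import product

def onemax_ga(pop_size, generations, tournament, p_cx, p_mut, length=30):
    # Maximize the number of ones in a bit string.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            return max(random.sample(pop, tournament), key=sum)   # tournament selection
        nxt = []
        while len(nxt) < pop_size:
            a, b = select()[:], select()[:]
            if random.random() < p_cx:                            # one-point crossover
                cut = random.randrange(1, length)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for i in range(length):
                    if random.random() < p_mut:                   # bit-flip mutation
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(map(sum, pop))

# Sweep a small parameter grid and report the best fitness found for each setting.
for pop_size, tournament, p_cx, p_mut in product([20, 50], [2, 5], [0.5, 0.9], [0.01, 0.05]):
    best = onemax_ga(pop_size, 30, tournament, p_cx, p_mut)
    print(pop_size, tournament, p_cx, p_mut, "->", best)
```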

Optimal approximation of piecewise smooth functions using deep ReLU neural networks

Title Optimal approximation of piecewise smooth functions using deep ReLU neural networks
Authors Philipp Petersen, Felix Voigtlaender
Abstract We study the necessary and sufficient complexity of ReLU neural networks—in terms of depth and number of weights—which is required for approximating classifier functions in $L^2$. As a model class, we consider the set $\mathcal{E}^\beta (\mathbb R^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-1/2, 1/2]^d \to \mathbb R$, where the different smooth regions of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$, regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial neural networks with ReLU activation function that approximate functions from $\mathcal{E}^\beta(\mathbb R^d)$ up to $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. In addition to the optimality in terms of the number of weights, we show that in order to achieve the optimal approximation rate, one needs ReLU networks of a certain depth. Precisely, for piecewise $C^\beta(\mathbb R^d)$ functions, this minimal depth is given—up to a multiplicative constant—by $\beta/d$. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized into a smooth dimension reducing feature map $\tau$ and classifier function $g$—defined on a low-dimensional feature space—as $f = g \circ \tau$. We show that in this case the approximation rate depends only on the dimension of the feature space and not the input dimension.
Tasks
Published 2017-09-15
URL http://arxiv.org/abs/1709.05289v4
PDF http://arxiv.org/pdf/1709.05289v4.pdf
PWC https://paperswithcode.com/paper/optimal-approximation-of-piecewise-smooth
Repo
Framework
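For quick reference, the abstract's central approximation bound can be written compactly as follows (constants and log factors suppressed):

```latex
% For f in the class E^beta(R^d) of piecewise C^beta functions and any
% accuracy eps > 0, the constructed ReLU network Phi_eps satisfies
\[
  \| f - \Phi_\varepsilon \|_{L^2([-1/2,1/2]^d)} \;\le\; \varepsilon,
  \qquad
  \#\{\text{nonzero weights of } \Phi_\varepsilon\}
    \;=\; O\!\left(\varepsilon^{-2(d-1)/\beta}\right),
\]
% with depth depending only on d and beta; the weight count is optimal, and the
% minimal depth needed to reach this rate scales like beta/d up to a constant.
```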

Extraction of Evolution Descriptions from the Web

Title Extraction of Evolution Descriptions from the Web
Authors Helge Holzmann, Thomas Risse
Abstract The evolution of named entities affects exploration and retrieval tasks in digital libraries. An information retrieval system that is aware of name changes can actively support users in finding former occurrences of evolved entities. However, current structured knowledge bases, such as DBpedia or Freebase, do not provide enough information about evolutions, even though the data is available on their resources, like Wikipedia. Our Evolution Base prototype will demonstrate how excerpts describing name evolutions can be identified on these websites with promising precision. The descriptions are classified by means of models that we trained based on a recent analysis of named entity evolutions on Wikipedia.
Tasks Information Retrieval
Published 2017-02-03
URL http://arxiv.org/abs/1702.01179v1
PDF http://arxiv.org/pdf/1702.01179v1.pdf
PWC https://paperswithcode.com/paper/extraction-of-evolution-descriptions-from-the
Repo
Framework

Convergence, Continuity and Recurrence in Dynamic Epistemic Logic

Title Convergence, Continuity and Recurrence in Dynamic Epistemic Logic
Authors Dominik Klein, Rasmus K. Rendsvig
Abstract The paper analyzes dynamic epistemic logic from a topological perspective. The main contribution consists of a framework in which dynamic epistemic logic satisfies the requirements for being a topological dynamical system, thus interfacing discrete dynamic logics with continuous mappings of dynamical systems. The setting is based on a notion of logical convergence, shown to be equivalent to convergence in the Stone topology. We present a flexible, parametrized family of metrics inducing the latter, used as an analytical aid. We show that maps induced by action model transformations are continuous with respect to the Stone topology and present results on the recurrent behavior of said maps.
Tasks
Published 2017-09-01
URL http://arxiv.org/abs/1709.00359v1
PDF http://arxiv.org/pdf/1709.00359v1.pdf
PWC https://paperswithcode.com/paper/convergence-continuity-and-recurrence-in
Repo
Framework

On Feature Reduction using Deep Learning for Trend Prediction in Finance

Title On Feature Reduction using Deep Learning for Trend Prediction in Finance
Authors Luigi Troiano, Elena Mejuto, Pravesh Kriplani
Abstract One of the major advantages of using Deep Learning in Finance is the ability to embed a large collection of information into investment decisions. A way to do this is by means of compression, which leads us to consider a smaller feature space. Several studies have shown that the non-linear feature reduction performed by Deep Learning tools is effective in price trend prediction. The focus has mainly been on Restricted Boltzmann Machines (RBM) and the output obtained from them. Little attention has been paid to Auto-Encoders (AE) as an alternative means of performing feature reduction. In this paper, we investigate the application of both RBM and AE in more general terms, attempting to outline how architectural and input-space characteristics can affect the quality of prediction.
Tasks
Published 2017-04-11
URL http://arxiv.org/abs/1704.03205v1
PDF http://arxiv.org/pdf/1704.03205v1.pdf
PWC https://paperswithcode.com/paper/on-feature-reduction-using-deep-learning-for
Repo
Framework
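As a concrete point of reference for the auto-encoder route mentioned above, here is a minimal bottleneck auto-encoder trained by plain gradient descent on synthetic feature vectors. The data, layer sizes, and training schedule are invented, and the paper's RBM variant is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))          # 20 input features per sample (synthetic stand-in)
d_hidden = 5                            # compressed feature space

W1 = rng.normal(scale=0.1, size=(20, d_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(d_hidden, 20))   # decoder weights
lr = 0.01
for epoch in range(200):
    H = np.tanh(X @ W1)                 # encoder (bottleneck activations)
    X_hat = H @ W2                      # linear decoder
    err = X_hat - X
    # Gradients of the mean squared reconstruction error.
    gW2 = H.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

print("reconstruction MSE:", float((err ** 2).mean()))
codes = np.tanh(X @ W1)                 # reduced features that would feed a trend predictor
print("compressed shape:", codes.shape)
```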

Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data

Title Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data
Authors Thorsten Kurth, Jian Zhang, Nadathur Satish, Ioannis Mitliagkas, Evan Racah, Mostofa Ali Patwary, Tareq Malas, Narayanan Sundaram, Wahid Bhimji, Mikhail Smorkalov, Jack Deslippe, Mikhail Shiryaev, Srinivas Sridharan, Prabhat, Pradeep Dubey
Abstract This paper presents the first 15-PetaFLOP Deep Learning system for solving scientific pattern classification problems on contemporary HPC architectures. We develop supervised convolutional architectures for discriminating signals in high-energy physics data as well as semi-supervised architectures for localizing and classifying extreme weather in climate data. Our Intelcaffe-based implementation obtains $\sim$2TFLOP/s on a single Cori Phase-II Xeon-Phi node. We use a hybrid strategy employing synchronous node-groups, while using asynchronous communication across groups. We use this strategy to scale training of a single model to $\sim$9600 Xeon-Phi nodes; obtaining peak performance of 11.73-15.07 PFLOP/s and sustained performance of 11.41-13.27 PFLOP/s. At scale, our HEP architecture produces state-of-the-art classification accuracy on a dataset with 10M images, exceeding that achieved by selections on high-level physics-motivated features. Our semi-supervised architecture successfully extracts weather patterns in a 15TB climate dataset. Our results demonstrate that Deep Learning can be optimized and scaled effectively on many-core, HPC systems.
Tasks
Published 2017-08-17
URL http://arxiv.org/abs/1708.05256v1
PDF http://arxiv.org/pdf/1708.05256v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-at-15pf-supervised-and-semi
Repo
Framework