Paper Group ANR 236
Preparing for the Unexpected: Diversity Improves Planning Resilience in Evolutionary Algorithms. How does the AI understand what’s going on. Cascaded multi-scale and multi-dimension convolutional neural network for stereo matching. Events Beyond ACE: Curated Training for Events. onlineSPARC: a Programming Environment for Answer Set Programming. Joint association and classification analysis of multi-view data. Decision problems for Clark-congruential languages. Bayesian Regularization for Graphical Models with Unequal Shrinkage. An Influence-based Clustering Model on Twitter. Using Artificial Intelligence to Support Compliance with the General Data Protection Regulation. Deep Keyframe Detection in Human Action Videos. Egocentric 6-DoF Tracking of Small Handheld Objects. CamLoc: Pedestrian Location Detection from Pose Estimation on Resource-constrained Smart-cameras. Rate-Optimal Denoising with Deep Neural Networks. Image Classification Based on Quantum KNN Algorithm.
Preparing for the Unexpected: Diversity Improves Planning Resilience in Evolutionary Algorithms
Title | Preparing for the Unexpected: Diversity Improves Planning Resilience in Evolutionary Algorithms |
Authors | Thomas Gabor, Lenz Belzner, Thomy Phan, Kyrill Schmid |
Abstract | As automatic optimization techniques find their way into industrial applications, the behavior of many complex systems is determined by some form of planner picking the right actions to optimize a given objective function. In many cases, the mapping of plans to objective reward may change due to unforeseen events or circumstances in the real world. In those cases, the planner usually needs some additional effort to adjust to the changed situation and reach its previous level of performance. Whenever we still need to continue polling the planner even during re-planning, its performance is often severely lacking. In order to improve the planner’s resilience to unforeseen change, we argue that maintaining a certain level of diversity amongst the considered plans at all times should be added to the planner’s objective. Effectively, we encourage the planner to keep alternative plans to its currently best solution. As an example case, we implement a diversity-aware genetic algorithm using two different metrics for diversity (differing in their generality) and show that the drop in performance due to unexpected change can be substantially lessened in the average case. We also analyze the parameter settings necessary for these techniques in order to gain an intuition of how they can be incorporated into larger frameworks or process models for software and systems engineering. |
Tasks | |
Published | 2018-10-30 |
URL | http://arxiv.org/abs/1810.12483v1 |
http://arxiv.org/pdf/1810.12483v1.pdf | |
PWC | https://paperswithcode.com/paper/preparing-for-the-unexpected-diversity |
Repo | |
Framework | |
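To make the idea above concrete, here is a minimal sketch of a diversity-aware genetic algorithm: each individual's selection score is its task reward plus a weighted mean Hamming distance to the rest of the population, so alternative plans survive even after convergence. The toy objective, the diversity metric, and every parameter value are assumptions for illustration, not the authors' implementation.

```python
"""Illustrative diversity-aware GA sketch (assumed objective, metric, parameters)."""
import numpy as np

rng = np.random.default_rng(0)
PLAN_LEN, POP_SIZE, GENERATIONS, DIVERSITY_WEIGHT = 20, 40, 60, 0.3

def reward(plan, target):
    # Toy objective: how well the plan matches the (possibly changing) target.
    return np.sum(plan == target) / PLAN_LEN

def diversity(plan, population):
    # Mean Hamming distance to the rest of the population (a generic metric).
    return np.mean([np.mean(plan != other) for other in population])

def next_generation(population, target):
    # Selection score = task reward + weighted diversity bonus.
    scores = np.array([reward(p, target) + DIVERSITY_WEIGHT * diversity(p, population)
                       for p in population])
    probs = scores / scores.sum()
    children = []
    while len(children) < POP_SIZE:
        i, j = rng.choice(POP_SIZE, size=2, replace=False, p=probs)   # fitness-proportional selection
        cut = rng.integers(1, PLAN_LEN)                               # one-point crossover
        child = np.concatenate([population[i][:cut], population[j][cut:]])
        flip = rng.random(PLAN_LEN) < 0.05                            # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    return children

population = [rng.integers(0, 2, PLAN_LEN) for _ in range(POP_SIZE)]
target = rng.integers(0, 2, PLAN_LEN)
for gen in range(GENERATIONS):
    if gen == GENERATIONS // 2:            # unforeseen change: the objective flips
        target = 1 - target
    population = next_generation(population, target)
    if gen % 10 == 0 or gen == GENERATIONS // 2:
        print(f"gen {gen:2d}  best reward {max(reward(p, target) for p in population):.2f}")
```

With DIVERSITY_WEIGHT set to 0 the population tends to collapse onto near-copies of the incumbent best plan, which is exactly the situation the paper argues makes recovery after an unexpected change slow.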
How does the AI understand what’s going on
Title | How does the AI understand what’s going on |
Authors | Dimiter Dobrev |
Abstract | Most researchers regard AI as a static function without memory. This is one of the few articles where AI is seen as a device with memory. When we have memory, we can ask ourselves: “Where am I?” and “What is going on?” When we have no memory, we have to assume that we are always in the same place and that the world is always in the same state. |
Tasks | |
Published | 2018-04-27 |
URL | http://arxiv.org/abs/1805.00851v1 |
http://arxiv.org/pdf/1805.00851v1.pdf | |
PWC | https://paperswithcode.com/paper/how-does-the-ai-understand-whats-going-on |
Repo | |
Framework | |
Cascaded multi-scale and multi-dimension convolutional neural network for stereo matching
Title | Cascaded multi-scale and multi-dimension convolutional neural network for stereo matching |
Authors | Haihua Lu, Hai Xu, Li Zhang, Yong Zhao |
Abstract | Convolutional neural networks (CNNs) have been shown to perform better than conventional stereo algorithms for stereo estimation. Numerous efforts focus on the pixel-wise matching cost computation, which is an important building block of many state-of-the-art algorithms. However, those architectures are limited to small, single-scale receptive fields and use traditional methods for cost aggregation, or even ignore cost aggregation. In contrast, we take both into consideration. Firstly, we propose a new multi-scale matching cost computation sub-network, in which two different sizes of receptive fields are implemented in parallel. In this way, the network can make the best use of both variants and balance the trade-off between a larger receptive field and the loss of detail. Furthermore, we show that our multi-dimension aggregation sub-network, which contains 2D and 3D convolution operations, can provide rich context and semantic information for estimating an accurate initial disparity. Finally, experiments on the challenging KITTI stereo benchmark demonstrate that the proposed method can achieve competitive results even without any additional post-processing. |
Tasks | Stereo Matching, Stereo Matching Hand |
Published | 2018-03-26 |
URL | http://arxiv.org/abs/1803.09437v2 |
http://arxiv.org/pdf/1803.09437v2.pdf | |
PWC | https://paperswithcode.com/paper/cascaded-multi-scale-and-multi-dimension |
Repo | |
Framework | |
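The two ideas the abstract emphasises, parallel matching-cost branches with different receptive fields and an aggregation stage mixing 2D and 3D convolutions, can be sketched roughly as below. Channel counts, kernel sizes, the cost-volume construction, and the fusion scheme are assumptions for illustration, not the paper's architecture.

```python
"""Schematic multi-scale matching cost + 2D/3D aggregation (assumed shapes/channels)."""
import torch
import torch.nn as nn

class MultiScaleCost(nn.Module):
    """Two parallel feature branches with small and large receptive fields."""
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.small = nn.Sequential(                      # 3x3 receptive-field branch
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.large = nn.Sequential(                      # 7x7 receptive-field branch
            nn.Conv2d(in_ch, feat, 7, padding=3), nn.ReLU(),
            nn.Conv2d(feat, feat, 7, padding=3), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.small(x), self.large(x)], dim=1)  # fuse both scales

class Aggregation(nn.Module):
    """2D convolutions on the fused features, then 3D convolutions over the
    (disparity, height, width) cost volume."""
    def __init__(self, feat=64, max_disp=48):
        super().__init__()
        self.conv2d = nn.Sequential(nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.conv3d = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                    nn.Conv3d(8, 1, 3, padding=1))
        self.max_disp = max_disp

    def forward(self, left_feat, right_feat):
        left_feat, right_feat = self.conv2d(left_feat), self.conv2d(right_feat)
        costs = []
        for d in range(self.max_disp):
            shifted = torch.roll(right_feat, shifts=d, dims=3)      # wrap-around shift as a simplification
            costs.append((left_feat - shifted).abs().mean(dim=1))   # (B, H, W) cost slice
        volume = torch.stack(costs, dim=1).unsqueeze(1)             # (B, 1, D, H, W)
        return self.conv3d(volume).squeeze(1)                       # refined per-disparity cost

features = MultiScaleCost()
aggregate = Aggregation(feat=64, max_disp=48)
left, right = torch.randn(1, 3, 64, 128), torch.randn(1, 3, 64, 128)
cost_volume = aggregate(features(left), features(right))            # (1, 48, 64, 128)
disparity = cost_volume.argmin(dim=1)                               # winner-takes-all (untrained, shapes only)
print(cost_volume.shape, disparity.shape)
```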
Events Beyond ACE: Curated Training for Events
Title | Events Beyond ACE: Curated Training for Events |
Authors | Ryan Gabbard, Jay DeYoung, Marjorie Freedman |
Abstract | We explore a human-driven approach to annotation, curated training (CT), in which annotation is framed as teaching the system by using interactive search to identify informative snippets of text to annotate, unlike traditional approaches which either annotate preselected text or use active learning. A trained annotator performed 80 hours of CT for the thirty event types of the NIST TAC KBP Event Argument Extraction evaluation. Combining this annotation with ACE yields a 6% reduction in error, and the learning curve of CT plateaus more slowly than for full-document annotation. Three NLP researchers performed CT for one event type and showed much sharper learning curves, with all three exceeding ACE performance in less than ninety minutes, suggesting that CT can provide further benefits when the annotator deeply understands the system. |
Tasks | Active Learning |
Published | 2018-09-14 |
URL | http://arxiv.org/abs/1809.05576v2 |
http://arxiv.org/pdf/1809.05576v2.pdf | |
PWC | https://paperswithcode.com/paper/events-beyond-ace-curated-training-for-events |
Repo | |
Framework | |
onlineSPARC: a Programming Environment for Answer Set Programming
Title | onlineSPARC: a Programming Environment for Answer Set Programming |
Authors | Elias Marcopoulos, Yuanlin Zhang |
Abstract | Recent progress in logic programming (e.g., the development of the Answer Set Programming paradigm) has made it possible to teach it to general undergraduate and even middle/high school students. Given the limited exposure of these students to computer science, the complexity of downloading, installing and using tools for writing logic programs could be a major barrier for logic programming to reach a much wider audience. We developed onlineSPARC, an online answer set programming environment with a self-contained file system and a simple interface. It allows users to type/edit logic programs and perform several tasks over them, including querying a program, computing its answer sets, and producing a drawing/animation based on those answer sets. |
Tasks | |
Published | 2018-09-21 |
URL | http://arxiv.org/abs/1809.08304v1 |
http://arxiv.org/pdf/1809.08304v1.pdf | |
PWC | https://paperswithcode.com/paper/onlinesparc-a-programming-environment-for |
Repo | |
Framework | |
Joint association and classification analysis of multi-view data
Title | Joint association and classification analysis of multi-view data |
Authors | Yunfeng Zhang, Irina Gaynanova |
Abstract | Multi-view data, that is, matched sets of measurements on the same subjects, have become increasingly common with technological advances in genomics and other fields. Often, the subjects are separated into known classes, and it is of interest to find associations between the views that are related to the class membership. Existing classification methods can either be applied to each view separately, or to the concatenated matrix of all views, without taking into account between-view associations. On the other hand, existing association methods cannot directly incorporate class information. In this work we propose a framework for Joint Association and Classification Analysis of multi-view data (JACA). We support the methodology with theoretical guarantees for estimation consistency in high-dimensional settings, and numerical comparisons with existing methods. In addition to the joint learning framework, a distinct advantage of our approach is its ability to use partial information: it can be applied both in settings with missing class labels and in settings with missing subsets of views. We apply JACA to colorectal cancer data from The Cancer Genome Atlas project, and quantify the association between the RNAseq and miRNA views with respect to consensus molecular subtypes of colorectal cancer. |
Tasks | |
Published | 2018-11-20 |
URL | http://arxiv.org/abs/1811.08511v1 |
http://arxiv.org/pdf/1811.08511v1.pdf | |
PWC | https://paperswithcode.com/paper/joint-association-and-classification-analysis |
Repo | |
Framework | |
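The abstract does not state the optimization criterion, but a schematic objective of the general "joint association and classification" kind helps fix ideas; the form below is an assumption offered for illustration, not JACA's exact formulation.

```latex
% Schematic joint objective over D views X_1, ..., X_D with discriminant
% matrices W_d and an optimal-scoring encoding Y\Theta of the class labels
% (illustrative only; notation and penalties are assumed, not the paper's).
\min_{W_1, \dots, W_D}\;
\sum_{d=1}^{D} \frac{1}{2n} \bigl\| Y\Theta - X_d W_d \bigr\|_F^2
\;+\; \alpha \sum_{d < d'} \frac{1}{2n} \bigl\| X_d W_d - X_{d'} W_{d'} \bigr\|_F^2
\;+\; \lambda \sum_{d=1}^{D} \bigl\| W_d \bigr\|_1 .
```

Read this way, the first term drives classification within each view, the second rewards between-view association of the discriminant scores, and the third induces sparsity; the abstract's use of partial information then corresponds to a sample dropping whichever terms it cannot evaluate (a missing label removes its classification contribution, a missing view removes the corresponding association terms).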
Decision problems for Clark-congruential languages
Title | Decision problems for Clark-congruential languages |
Authors | Makoto Kanazawa, Tobias Kappé |
Abstract | A common question when studying a class of context-free grammars is whether equivalence is decidable within this class. We answer this question positively for the class of Clark-congruential grammars, which are of interest to grammatical inference. We also consider the problem of checking whether a given CFG is Clark-congruential, and show that it is decidable given that the CFG is a DCFG. |
Tasks | |
Published | 2018-05-11 |
URL | http://arxiv.org/abs/1805.04402v2 |
http://arxiv.org/pdf/1805.04402v2.pdf | |
PWC | https://paperswithcode.com/paper/decision-problems-for-clark-congruential |
Repo | |
Framework | |
Bayesian Regularization for Graphical Models with Unequal Shrinkage
Title | Bayesian Regularization for Graphical Models with Unequal Shrinkage |
Authors | Lingrui Gan, Naveen N. Narisetty, Feng Liang |
Abstract | We consider a Bayesian framework for estimating a high-dimensional sparse precision matrix, in which adaptive shrinkage and sparsity are induced by a mixture of Laplace priors. Besides discussing our formulation from the Bayesian standpoint, we investigate the MAP (maximum a posteriori) estimator from a penalized likelihood perspective that gives rise to a new non-convex penalty approximating the $\ell_0$ penalty. Optimal error rates for estimation consistency in terms of various matrix norms, along with selection consistency for sparse structure recovery, are shown for the unique MAP estimator under mild conditions. For fast and efficient computation, an EM algorithm is proposed to compute the MAP estimator of the precision matrix and (approximate) posterior probabilities on the edges of the underlying sparse structure. Through extensive simulation studies and a real application to call center data, we demonstrate the strong performance of our method compared with existing alternatives. |
Tasks | |
Published | 2018-05-06 |
URL | http://arxiv.org/abs/1805.02257v2 |
http://arxiv.org/pdf/1805.02257v2.pdf | |
PWC | https://paperswithcode.com/paper/bayesian-regularization-for-graphical-models |
Repo | |
Framework | |
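The "mixture of Laplace priors" in the abstract has a compact generic form; the spike-and-slab style mixture below is a hedged reconstruction of that kind of prior on each off-diagonal precision entry, with notation assumed rather than taken from the paper.

```latex
% Mixture-of-Laplace prior on an off-diagonal precision entry \omega_{ij}:
% a diffuse "slab" (small \lambda_1) mixed with a concentrated "spike" (large \lambda_0).
% Hyperparameter names are illustrative.
\pi(\omega_{ij})
  \;=\; \eta \, \frac{\lambda_1}{2} \, e^{-\lambda_1 |\omega_{ij}|}
  \;+\; (1 - \eta) \, \frac{\lambda_0}{2} \, e^{-\lambda_0 |\omega_{ij}|},
\qquad \lambda_0 \gg \lambda_1, \quad \eta \in (0, 1).
```

The MAP estimator then minimizes the negative log-likelihood plus the induced penalty $-\log \pi(\omega_{ij})$ on each entry, which is non-convex and, as $\lambda_0/\lambda_1$ grows, increasingly behaves like the $\ell_0$ penalty mentioned in the abstract.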
An Influence-based Clustering Model on Twitter
Title | An Influence-based Clustering Model on Twitter |
Authors | Abbas Ehsanfar, Mo Mansouri |
Abstract | This paper introduces a temporal framework for detecting and clustering emergent and viral topics on social networks. Endogenous and exogenous influence on developing viral content is explored using a clustering method based on a user’s behavior on the social network and a dataset from the Twitter API. Results are discussed by introducing metrics such as popularity, burstiness, and relevance score. The results show a clear distinction in the characteristics of content developed by the two classes of users. |
Tasks | |
Published | 2018-11-19 |
URL | http://arxiv.org/abs/1811.07655v1 |
http://arxiv.org/pdf/1811.07655v1.pdf | |
PWC | https://paperswithcode.com/paper/an-influence-based-clustering-model-on |
Repo | |
Framework | |
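The metrics named in the abstract (popularity, burstiness, relevance score) are not defined there; the sketch below uses common generic definitions, such as the Goh-Barabasi burstiness of inter-tweet times, purely as an assumed illustration of how such scores could be computed from Twitter API timestamps and engagement counts.

```python
"""Illustrative metric computations for a topic's tweets; the paper's exact
definitions are not given in the abstract, so these are assumptions."""
from datetime import datetime
from statistics import mean, pstdev

tweets = [  # toy records shaped like Twitter API results
    {"created_at": "2018-11-19T10:00:00", "retweets": 12, "likes": 30, "text": "flash sale"},
    {"created_at": "2018-11-19T10:02:00", "retweets": 50, "likes": 90, "text": "flash sale wow"},
    {"created_at": "2018-11-19T11:30:00", "retweets": 2,  "likes": 5,  "text": "lunch"},
]

def popularity(tweets):
    # Total engagement attracted by the topic's tweets.
    return sum(t["retweets"] + t["likes"] for t in tweets)

def burstiness(tweets):
    # Goh-Barabasi burstiness of inter-tweet gaps: (sigma - mu) / (sigma + mu),
    # near 1 for bursty streams and -1 for perfectly regular ones.
    times = sorted(datetime.fromisoformat(t["created_at"]) for t in tweets)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    mu, sigma = mean(gaps), pstdev(gaps)
    return (sigma - mu) / (sigma + mu) if (sigma + mu) else 0.0

def relevance(tweets, topic_terms):
    # Fraction of the topic's tweets mentioning any of its characteristic terms.
    hits = sum(any(term in t["text"] for term in topic_terms) for t in tweets)
    return hits / len(tweets)

print(popularity(tweets), round(burstiness(tweets), 2), relevance(tweets, {"sale"}))
```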
Using Artificial Intelligence to Support Compliance with the General Data Protection Regulation
Title | Using Artificial Intelligence to Support Compliance with the General Data Protection Regulation |
Authors | John KC Kingston |
Abstract | The General Data Protection Regulation (GDPR) is a European Union regulation that will replace the existing Data Protection Directive on 25 May 2018. The most significant change is a huge increase in the maximum fine that can be levied for breaches of the regulation. Yet fewer than half of UK companies are fully aware of GDPR - and a number of those who were preparing for it stopped doing so when the Brexit vote was announced. A last-minute rush to become compliant is therefore expected, and numerous companies are starting to offer advice, checklists and consultancy on how to comply with GDPR. In such an environment, artificial intelligence technologies ought to be able to assist by providing best advice; asking all and only the relevant questions; monitoring activities; and carrying out assessments. The paper considers four areas of GDPR compliance where rule-based technologies and/or machine learning techniques may be relevant: following compliance checklists and codes of conduct; supporting risk assessments; complying with the new regulations regarding technologies that perform automatic profiling; and complying with the new regulations concerning recognising and reporting breaches of security. It concludes that AI technology can support each of these four areas. The requirements for explanation and justification of reasoning stated by GDPR (or by organisations that need to comply with GDPR) imply that rule-based approaches are likely to be more helpful than machine learning approaches. However, there may be good business reasons to take a different approach in some circumstances. |
Tasks | |
Published | 2018-09-15 |
URL | http://arxiv.org/abs/1809.05762v1 |
http://arxiv.org/pdf/1809.05762v1.pdf | |
PWC | https://paperswithcode.com/paper/using-artificial-intelligence-to-support |
Repo | |
Framework | |
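To make the contrast with machine learning concrete, here is a minimal sketch of the kind of rule-based compliance checklist the paper argues AI can support; the rules, their wording, and the organisation profile are invented for illustration and are not drawn from the paper, nor are they legal advice.

```python
"""Toy rule-based GDPR checklist engine: each rule pairs a condition on the
organisation's answers with an obligation and an explanation, so every
recommendation can be justified, which is the property the paper argues
favours rule-based approaches. The rules themselves are illustrative only."""

RULES = [
    {"if": lambda org: org["processes_personal_data"] and org["automated_profiling"],
     "then": "Carry out a Data Protection Impact Assessment for profiling.",
     "because": "Automated profiling of personal data triggers a risk assessment."},
    {"if": lambda org: org["processes_personal_data"] and not org["breach_process"],
     "then": "Define a breach detection and 72-hour reporting process.",
     "because": "Breaches of personal data must be recognised and reported promptly."},
    {"if": lambda org: org["processes_personal_data"] and not org["has_dpo"],
     "then": "Assess whether a Data Protection Officer must be appointed.",
     "because": "Large-scale processing can require a designated DPO."},
]

def advise(org):
    # Fire every rule whose condition holds and keep the justification with it.
    return [(r["then"], r["because"]) for r in RULES if r["if"](org)]

org = {"processes_personal_data": True, "automated_profiling": True,
       "breach_process": False, "has_dpo": True}
for action, reason in advise(org):
    print(f"- {action}\n  rationale: {reason}")
```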
Deep Keyframe Detection in Human Action Videos
Title | Deep Keyframe Detection in Human Action Videos |
Authors | Xiang Yan, Syed Zulqarnain Gilani, Hanlin Qin, Mingtao Feng, Liang Zhang, Ajmal Mian |
Abstract | Detecting representative frames in videos based on human actions is quite challenging because of the combined factors of human pose in action and the background. This paper addresses this problem and formulates key frame detection as the problem of finding the video frames that maximally contribute to differentiating the underlying action category from all other categories. To this end, we introduce a deep two-stream ConvNet for key frame detection in videos that learns to directly predict the location of key frames. Our key idea is to automatically generate labeled data for the CNN learning using a supervised linear discriminant method. While the training data is generated taking many different human action videos into account, the trained CNN can predict the importance of frames from a single video. We specify a new ConvNet framework consisting of a summarizer and a discriminator. The summarizer is a two-stream ConvNet aimed at, first, capturing the appearance and motion features of video frames, and then encoding the obtained appearance and motion features for video representation. The discriminator is a fitting function aimed at distinguishing between the key frames and the others in the video. We conduct experiments on the challenging human action dataset UCF101 and show that our method can detect key frames with high accuracy. |
Tasks | |
Published | 2018-04-26 |
URL | http://arxiv.org/abs/1804.10021v1 |
http://arxiv.org/pdf/1804.10021v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-keyframe-detection-in-human-action |
Repo | |
Framework | |
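The key idea above, generating frame-importance labels with "a supervised linear discriminant method", can be sketched as follows. Treating a frame's LDA decision score for its video's action class as its importance is one plausible reading, not the authors' exact labelling procedure, and the appearance/motion feature extractor is replaced by random stand-in features.

```python
"""Hedged sketch: score frames by how strongly a linear discriminant separates
their video's action class from the others; high-scoring frames become key-frame
training labels for a downstream ConvNet. Features and thresholds are assumed."""
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Stand-in for appearance+motion features of frames from many action videos:
# rows are frames, y holds each frame's action-class label (from its video).
X = rng.normal(size=(600, 64))
y = rng.integers(0, 5, size=600)
X[y == 2] += 1.5                       # make one class somewhat separable

lda = LinearDiscriminantAnalysis().fit(X, y)

def frame_importance(frames, action_class):
    # Decision score for the video's own class: larger means the frame
    # contributes more to telling this action apart from the rest.
    return lda.decision_function(frames)[:, action_class]

video_frames = rng.normal(size=(40, 64)) + (np.arange(40)[:, None] / 40) * 1.5
scores = frame_importance(video_frames, action_class=2)
key_frames = np.argsort(scores)[-5:]   # top-5 frames become positive labels
print("key frame indices:", sorted(key_frames.tolist()))
```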
Egocentric 6-DoF Tracking of Small Handheld Objects
Title | Egocentric 6-DoF Tracking of Small Handheld Objects |
Authors | Rohit Pandey, Pavel Pidlypenskyi, Shuoran Yang, Christine Kaeser-Chen |
Abstract | Virtual and augmented reality technologies have seen significant growth in the past few years. A key component of such systems is the ability to track the pose of head mounted displays and controllers in 3D space. We tackle the problem of efficient 6-DoF tracking of a handheld controller from egocentric camera perspectives. We collected the HMD Controller dataset, which consists of over 540,000 stereo image pairs labelled with the full 6-DoF pose of the handheld controller. Our proposed SSD-AF-Stereo3D model achieves a mean average error of 33.5 millimeters in 3D keypoint prediction and is used in conjunction with an IMU sensor on the controller to enable 6-DoF tracking. We also present results on approaches for model based full 6-DoF tracking. All our models operate under the strict constraints of real time mobile CPU inference. |
Tasks | |
Published | 2018-04-16 |
URL | http://arxiv.org/abs/1804.05870v1 |
http://arxiv.org/pdf/1804.05870v1.pdf | |
PWC | https://paperswithcode.com/paper/egocentric-6-dof-tracking-of-small-handheld |
Repo | |
Framework | |
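The evaluation figure quoted above (a mean average error of 33.5 millimeters in 3D keypoint prediction) can be illustrated with a tiny metric computation; interpreting it as the mean Euclidean keypoint error in millimetres is an assumption, and the keypoint layout and values below are made up.

```python
"""Illustration of a mean 3D keypoint error in millimetres (assumed definition)."""
import numpy as np

rng = np.random.default_rng(0)
gt = rng.uniform(-200, 200, size=(1000, 4, 3))          # frames x keypoints x (x, y, z), in mm
pred = gt + rng.normal(scale=25, size=gt.shape)          # simulated keypoint predictions

per_keypoint_err = np.linalg.norm(pred - gt, axis=-1)    # Euclidean error per keypoint (mm)
print(f"mean average error: {per_keypoint_err.mean():.1f} mm")
```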
CamLoc: Pedestrian Location Detection from Pose Estimation on Resource-constrained Smart-cameras
Title | CamLoc: Pedestrian Location Detection from Pose Estimation on Resource-constrained Smart-cameras |
Authors | Adrian Cosma, Ion Emilian Radoi, Valentin Radu |
Abstract | Recent advancements in energy-efficient hardware technology are driving the exponential growth we are experiencing in the Internet of Things (IoT) space, with more pervasive computations being performed near data generation sources. A range of intelligent devices and applications performing local detection is emerging (activity recognition, fitness monitoring, etc.), bringing with them obvious advantages such as reduced detection latency for improved interaction with devices and safeguarding of user data, which does not leave the device. Video processing holds utility for many emerging applications and data labelling in the IoT space. However, performing this video processing with deep neural networks at the edge of the Internet is not trivial. In this paper we show that pedestrian location estimation using deep neural networks is achievable on fixed cameras with limited compute resources. Our approach uses pose estimation from key body point detection to extend the pedestrian skeleton when the whole body is not in the image (occluded by obstacles or partially outside the frame), which achieves better location estimation performance (inference time and memory footprint) compared to fitting a bounding box over the pedestrian and scaling. We collect a sizable dataset comprising over 2,100 frames in videos from one and two surveillance cameras pointing at the scene from different angles, and annotate each frame with the exact position of the person in the image, across 42 different scenarios of activity and occlusion. We compare our pose estimation based location detection with a popular detection algorithm, YOLOv2, for overlapping bounding-box generation; our solution achieves faster inference time (15x speedup) at half the memory footprint, within the resource capabilities of embedded devices, which demonstrates that CamLoc is an efficient solution for location estimation in videos on smart-cameras. |
Tasks | Activity Recognition, Pose Estimation |
Published | 2018-12-28 |
URL | http://arxiv.org/abs/1812.11209v1 |
http://arxiv.org/pdf/1812.11209v1.pdf | |
PWC | https://paperswithcode.com/paper/camloc-pedestrian-location-detection-from |
Repo | |
Framework | |
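A hedged sketch of the core idea above: extrapolate a full-body bounding box from whichever keypoints are visible, so occluded or cropped pedestrians can still be localised. The keypoint set, body-proportion constants, and scaling rule are assumptions, not the CamLoc implementation.

```python
"""Sketch of full-body extent estimation from partial pose keypoints.
Proportion constants and the extrapolation rule are assumed for illustration."""
from typing import Dict, Optional, Tuple

# Approximate vertical position of each joint as a fraction of body height,
# measured from the top of the head (0.0) to the feet (1.0). Assumed values.
BODY_FRACTION = {"head": 0.05, "shoulder": 0.20, "hip": 0.55,
                 "knee": 0.75, "ankle": 0.95}

def full_body_box(keypoints: Dict[str, Tuple[float, float]],
                  margin: float = 0.1) -> Optional[Tuple[float, float, float, float]]:
    """Estimate (x_min, y_min, x_max, y_max) of the whole body from visible joints,
    even when the lower body is occluded or outside the frame."""
    if len(keypoints) < 2:
        return None                                  # not enough evidence to extrapolate
    names = sorted(keypoints, key=lambda n: BODY_FRACTION[n])
    top, bottom = names[0], names[-1]
    y_top, y_bottom = keypoints[top][1], keypoints[bottom][1]
    frac_span = BODY_FRACTION[bottom] - BODY_FRACTION[top]
    if frac_span <= 0:
        return None
    height = (y_bottom - y_top) / frac_span          # extrapolated full body height
    y_min = y_top - BODY_FRACTION[top] * height
    y_max = y_min + height
    xs = [p[0] for p in keypoints.values()]
    width = 0.4 * height                             # assumed width/height ratio
    cx = sum(xs) / len(xs)
    return (cx - width / 2 - margin * width, y_min - margin * height,
            cx + width / 2 + margin * width, y_max + margin * height)

# Lower body occluded: only head, shoulders and hips are detected.
visible = {"head": (112.0, 40.0), "shoulder": (110.0, 95.0), "hip": (111.0, 220.0)}
print(full_body_box(visible))
```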
Rate-Optimal Denoising with Deep Neural Networks
Title | Rate-Optimal Denoising with Deep Neural Networks |
Authors | Reinhard Heckel, Wen Huang, Paul Hand, Vladislav Voroninski |
Abstract | Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy observation. The underlying principle is that neural networks trained on large datasets have empirically been shown to be able to generate natural images well from a low-dimensional latent representation of the image. Given such a generator network, a noisy image can be denoised by i) finding the closest image in the range of the generator or by ii) passing it through an encoder-generator architecture (known as an autoencoder). However, there is little theory to justify this success, let alone to predict the denoising performance as a function of the network parameters. In this paper we consider the problem of denoising an image from additive Gaussian noise using the two generator-based approaches. In both cases, we assume the image is well described by a deep neural network with ReLU activation functions, mapping a $k$-dimensional code to an $n$-dimensional image. In the case of the autoencoder, we show that the feedforward network reduces noise energy by a factor of $O(k/n)$. In the case of optimizing over the range of a generative model, we state and analyze a simple gradient algorithm that minimizes a non-convex loss function, and provably reduces noise energy by a factor of $O(k/n)$. We also demonstrate in numerical experiments that this denoising performance is, indeed, achieved by generative priors learned from data. |
Tasks | Denoising, Image Denoising |
Published | 2018-05-22 |
URL | http://arxiv.org/abs/1805.08855v2 |
http://arxiv.org/pdf/1805.08855v2.pdf | |
PWC | https://paperswithcode.com/paper/deep-denoising-rate-optimal-recovery-of |
Repo | |
Framework | |
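The first of the two approaches analysed above, projecting the noisy image onto the range of a generator by gradient descent over the latent code, can be sketched with a random ReLU generator. The network, optimizer, step size, and iteration count are toy assumptions; the sketch only illustrates the objective $\min_z \|G(z) - y\|_2^2$, not the paper's analysis.

```python
"""Toy illustration of denoising by optimizing over a generator's range:
minimize ||G(z) - y||^2 over the latent code z with plain gradient steps.
The random two-layer ReLU generator and all hyperparameters are assumptions."""
import torch

torch.manual_seed(0)
k, n = 10, 400                                    # latent and image dimensions

# Fixed random expansive ReLU generator G: R^k -> R^n (a stand-in, not a trained net).
W1 = torch.randn(60, k) / k ** 0.5
W2 = torch.randn(n, 60) / 60 ** 0.5
G = lambda z: torch.relu(W2 @ torch.relu(W1 @ z))

x_clean = G(torch.randn(k))                       # an image lying exactly in the range of G
y = x_clean + 0.3 * torch.randn(n)                # additive Gaussian noise

z = (0.1 * torch.randn(k)).requires_grad_()       # latent code to optimize
opt = torch.optim.Adam([z], lr=0.05)
for step in range(2000):
    opt.zero_grad()
    loss = torch.sum((G(z) - y) ** 2)             # non-convex least-squares objective
    loss.backward()
    opt.step()

with torch.no_grad():
    print(f"noise energy       : {torch.sum((y - x_clean) ** 2).item():.2f}")
    print(f"denoising residual : {torch.sum((G(z) - x_clean) ** 2).item():.2f}")
```

If the toy run behaves as the theory predicts, the residual is only a small fraction of the noise energy, on the order of $k/n$ of it; the second approach in the abstract would instead pass $y$ through a trained encoder followed by the generator.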
Image Classification Based on Quantum KNN Algorithm
Title | Image Classification Based on Quantum KNN Algorithm |
Authors | Yijie Dang, Nan Jiang, Hao Hu, Zhuoxiao Ji, Wenyin Zhang |
Abstract | Image classification is an important task in the field of machine learning and image processing. However, the commonly used K Nearest-Neighbor classification algorithm has high complexity, because its two main processes, similarity computation and search, are time-consuming. Especially in the era of big data, the problem is prominent when the amount of images to be classified is large. In this paper, we try to use the powerful parallel computing ability of quantum computers to optimize the efficiency of image classification. The scheme is based on the quantum K Nearest-Neighbor algorithm. Firstly, the feature vectors of the images are extracted on classical computers. Then the feature vectors are input into a quantum superposition state, which is used to achieve parallel computation of similarity. Next, the quantum minimum search algorithm is used to speed up the search for similar images. Finally, the image is classified by quantum measurement. The complexity of the quantum algorithm is only $O(\sqrt{kM})$, which is superior to the classical algorithms. Moreover, the measurement step is executed only once to ensure the validity of the scheme. The experimental results show that the classification accuracy is 83.1% on the Graz-01 dataset and 78% on the Caltech-101 dataset, which is close to existing classical algorithms. Hence, our quantum scheme achieves good classification performance while greatly improving efficiency. |
Tasks | Image Classification |
Published | 2018-05-16 |
URL | http://arxiv.org/abs/1805.06260v1 |
http://arxiv.org/pdf/1805.06260v1.pdf | |
PWC | https://paperswithcode.com/paper/image-classification-based-on-quantum-knn |
Repo | |
Framework | |
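For contrast with the claimed $O(\sqrt{kM})$ quantum complexity, the sketch below spells out the classical pipeline the abstract starts from: extract feature vectors, compute all M similarities, and repeatedly search for the minimum to collect the k nearest, which is the O(kM) stage the quantum subroutines are meant to accelerate. The toy features and distance choice are assumptions.

```python
"""Classical K Nearest-Neighbor baseline for the pipeline in the abstract.
Features and the distance metric here are toy assumptions."""
import numpy as np

rng = np.random.default_rng(0)
M, d, k = 500, 16, 5                       # stored images, feature length, neighbours

train_feats = rng.normal(size=(M, d))      # stand-in for classically extracted features
train_labels = rng.integers(0, 3, size=M)
query = rng.normal(size=d)

# Similarity computation: one distance per stored image (M evaluations).
dists = np.linalg.norm(train_feats - query, axis=1)

# Searching: k rounds of linear minimum search -> O(kM) comparisons overall,
# the part replaced by quantum minimum search in the paper's scheme.
remaining = list(range(M))
nearest = []
for _ in range(k):
    best = min(remaining, key=lambda i: dists[i])
    nearest.append(best)
    remaining.remove(best)

votes = np.bincount(train_labels[nearest], minlength=3)
print("k nearest:", nearest, "-> predicted class", int(votes.argmax()))
```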