Paper Group ANR 160
Reasoning with Justifiable Exceptions in Contextual Hierarchies (Appendix)
Title | Reasoning with Justifiable Exceptions in Contextual Hierarchies (Appendix) |
Authors | Loris Bozzato, Luciano Serafini, Thomas Eiter |
Abstract | This paper is an appendix to the paper “Reasoning with Justifiable Exceptions in Contextual Hierarchies” by Bozzato, Serafini and Eiter, 2018. It provides further details on the language, the complexity results and the datalog translation introduced in the main paper. |
Tasks | |
Published | 2018-08-06 |
URL | http://arxiv.org/abs/1808.01874v1 |
PDF | http://arxiv.org/pdf/1808.01874v1.pdf |
PWC | https://paperswithcode.com/paper/reasoning-with-justifiable-exceptions-in |
Repo | |
Framework | |
A Polynomial-time Fragment of Epistemic Probabilistic Argumentation (Technical Report)
Title | A Polynomial-time Fragment of Epistemic Probabilistic Argumentation (Technical Report) |
Authors | Nico Potyka |
Abstract | Probabilistic argumentation allows reasoning about argumentation problems in a way that is well-founded by probability theory. However, in practice, this approach can be severely limited by the fact that probabilities are defined by adding an exponential number of terms. We show that this exponential blowup can be avoided in an interesting fragment of epistemic probabilistic argumentation and that some computational problems that have been considered intractable can be solved in polynomial time. We give efficient convex programming formulations for these problems and explore how far our fragment can be extended without losing tractability. |
Tasks | |
Published | 2018-11-29 |
URL | http://arxiv.org/abs/1811.12083v2 |
PDF | http://arxiv.org/pdf/1811.12083v2.pdf |
PWC | https://paperswithcode.com/paper/a-polynomial-time-fragment-of-epistemic |
Repo | |
Framework | |
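The tractable fragment hinges on the fact that the relevant probability queries stay linear (and more generally convex) in the argument probabilities. The toy sketch below shows that flavor with scipy.optimize.linprog: the three-argument attack graph and the coherence-style constraints of the form P(attacked) <= 1 - P(attacker) are illustrative assumptions, not the exact fragment defined in the paper.

```python
# Toy illustration: bounding the probability of an argument under linear
# epistemic constraints, solved as a linear program (assumed constraint
# forms, not the paper's exact fragment).
from scipy.optimize import linprog

# Variables: p = (P(a), P(b), P(c)) with attacks a -> b and b -> c.
# Assumed coherence-style constraints, written as A_ub @ p <= b_ub:
A_ub = [
    [1.0, 1.0, 0.0],   # P(a) + P(b) <= 1
    [0.0, 1.0, 1.0],   # P(b) + P(c) <= 1
]
b_ub = [1.0, 1.0]
bounds = [(0.0, 1.0)] * 3          # probabilities live in [0, 1]

# Minimal and maximal probability of argument c consistent with the constraints.
low = linprog(c=[0, 0, 1.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
high = linprog(c=[0, 0, -1.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("P(c) ranges over [%.2f, %.2f]" % (low.fun, -high.fun))
```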
From Trailers to Storylines: An Efficient Way to Learn from Movies
Title | From Trailers to Storylines: An Efficient Way to Learn from Movies |
Authors | Qingqiu Huang, Yuanjun Xiong, Yu Xiong, Yuqi Zhang, Dahua Lin |
Abstract | The millions of movies produced in human history are valuable resources for computer vision research. However, learning a vision model from movie data would meet with serious difficulties. A major obstacle is the computational cost: the length of a movie is often over one hour, which is substantially longer than the short video clips that previous studies mostly focus on. In this paper, we explore an alternative approach to learning vision models from movies. Specifically, we consider a framework comprising a visual module and a temporal analysis module. Unlike conventional learning methods, the proposed approach learns these modules from different sets of data: the former from trailers and the latter from movies. This allows distinctive visual features to be learned within a reasonable budget while still preserving long-term temporal structures across an entire movie. We construct a large-scale dataset for this study and define a series of tasks on top of it. Experiments on this dataset showed that the proposed method can substantially reduce the training time while obtaining highly effective features and coherent temporal structures. |
Tasks | |
Published | 2018-06-14 |
URL | http://arxiv.org/abs/1806.05341v1 |
PDF | http://arxiv.org/pdf/1806.05341v1.pdf |
PWC | https://paperswithcode.com/paper/from-trailers-to-storylines-an-efficient-way |
Repo | |
Framework | |
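A rough sketch of the two-module split described above, assuming PyTorch: a frame-level visual encoder (the part that would be trained on trailers) and a temporal model over per-shot features of a whole movie. The layer sizes, feature dimensions, and classification head are placeholders rather than the paper's architecture.

```python
# Minimal sketch of a two-module setup: visual features learned on short
# trailer clips, temporal structure learned over whole-movie feature sequences.
# Dimensions and heads are illustrative placeholders.
import torch
import torch.nn as nn

class VisualModule(nn.Module):
    """Frame encoder; in the paper's setting this part is trained on trailers."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, frames):                  # (batch, 3, H, W)
        return self.backbone(frames)            # (batch, feat_dim)

class TemporalModule(nn.Module):
    """Sequence model over per-shot features of an entire movie."""
    def __init__(self, feat_dim=256, hidden=128, num_classes=10):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, shot_feats):               # (batch, num_shots, feat_dim)
        out, _ = self.rnn(shot_feats)
        return self.head(out[:, -1])             # prediction from the last state

# The two modules are trained on different data: the visual module on trailer
# frames, the temporal module on feature sequences extracted from full movies.
visual, temporal = VisualModule(), TemporalModule()
with torch.no_grad():
    feats = visual(torch.randn(8, 3, 112, 112)).view(1, 8, -1)  # 8 shots of one movie
    print(temporal(feats).shape)                                 # torch.Size([1, 10])
```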
Pattern Dependence Detection using n-TARP Clustering
Title | Pattern Dependence Detection using n-TARP Clustering |
Authors | Tarun Yellamraju, Mireille Boutin |
Abstract | Consider an experiment involving a potentially small number of subjects. Some random variables are observed on each subject: a high-dimensional one called the “observed” random variable, and a one-dimensional one called the “outcome” random variable. We are interested in the dependencies between the observed random variable and the outcome random variable. We propose a method to quantify and validate the dependencies of the outcome random variable on the various patterns contained in the observed random variable. Different degrees of relationship are explored (linear, quadratic, cubic, …). This work is motivated by the need to analyze educational data, which often involves high-dimensional data representing a small number of students. Thus our implementation is designed for a small number of subjects; however, it can be easily modified to handle a very large dataset. As an illustration, the proposed method is used to study the influence of certain skills on the course grade of students in a signal processing class. A valid dependency of the grade on the different skill patterns is observed in the data. |
Tasks | |
Published | 2018-06-13 |
URL | http://arxiv.org/abs/1806.05297v1 |
PDF | http://arxiv.org/pdf/1806.05297v1.pdf |
PWC | https://paperswithcode.com/paper/pattern-dependence-detection-using-n-tarp |
Repo | |
Framework | |
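n-TARP builds on thresholding after random projection; the sketch below only loosely follows that spirit, checking on held-out subjects whether the outcome can be predicted from a 1-D random projection of the observations at linear, quadratic, and cubic degrees. The number of projections, the split, and the held-out R^2 criterion are illustrative choices, not the paper's validation procedure.

```python
# Loose sketch of the idea: project high-dimensional observations onto random
# directions, then check (on held-out subjects) whether the outcome depends on
# the projected pattern at increasing polynomial degrees. Degrees, split sizes
# and the validation criterion are illustrative choices, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, dim = 60, 40
X = rng.normal(size=(n_subjects, dim))            # "observed" random variable
y = 0.5 * (X @ rng.normal(size=dim)) ** 2 + rng.normal(scale=0.1, size=n_subjects)

train, val = np.arange(0, 40), np.arange(40, 60)  # train/validation subjects

def validated_r2(z, y, degree):
    """Fit a polynomial on the training half, score it on the validation half."""
    coeffs = np.polyfit(z[train], y[train], degree)
    pred = np.polyval(coeffs, z[val])
    resid = np.sum((y[val] - pred) ** 2)
    total = np.sum((y[val] - y[val].mean()) ** 2)
    return 1.0 - resid / total

best = None
for _ in range(200):                               # many random projections
    w = rng.normal(size=dim)
    z = X @ (w / np.linalg.norm(w))                # 1-D random projection
    for degree in (1, 2, 3):                       # linear, quadratic, cubic
        score = validated_r2(z, y, degree)
        if best is None or score > best[0]:
            best = (score, degree)

print("best held-out R^2 %.2f at degree %d" % best)
```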
Empirical Evaluation of Contextual Policy Search with a Comparison-based Surrogate Model and Active Covariance Matrix Adaptation
Title | Empirical Evaluation of Contextual Policy Search with a Comparison-based Surrogate Model and Active Covariance Matrix Adaptation |
Authors | Alexander Fabisch |
Abstract | Contextual policy search (CPS) is a class of multi-task reinforcement learning algorithms that is particularly useful for robotic applications. A recent state-of-the-art method is Contextual Covariance Matrix Adaptation Evolution Strategies (C-CMA-ES). It is based on the standard black-box optimization algorithm CMA-ES. There are two useful extensions of CMA-ES that we will transfer to C-CMA-ES and evaluate empirically: ACM-ES, which uses a comparison-based surrogate model, and aCMA-ES, which uses an active update of the covariance matrix. We will show that improvements with these methods can be impressive in terms of sample efficiency, although this is no longer relevant for the robotic domain. |
Tasks | |
Published | 2018-10-26 |
URL | http://arxiv.org/abs/1810.11491v2 |
PDF | http://arxiv.org/pdf/1810.11491v2.pdf |
PWC | https://paperswithcode.com/paper/empirical-evaluation-of-contextual-policy |
Repo | |
Framework | |
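C-CMA-ES builds on the standard CMA-ES ask/tell loop; the snippet below shows that plain loop with the pycma package (assumed installed as `cma`) on a toy objective. The contextual setting, the comparison-based surrogate (ACM-ES), and the active covariance update (aCMA-ES) evaluated in the paper are not part of this sketch.

```python
# Plain CMA-ES ask/tell loop with pycma (pip install cma), shown only as the
# black-box baseline the paper's C-CMA-ES variants build on; the contextual,
# surrogate (ACM-ES) and active-update (aCMA-ES) extensions are not sketched here.
import cma

def sphere(x):
    return sum(xi ** 2 for xi in x)

es = cma.CMAEvolutionStrategy(8 * [1.0], 0.5)       # initial mean, initial step size
while not es.stop():
    solutions = es.ask()                            # sample a population from N(m, sigma^2 C)
    es.tell(solutions, [sphere(x) for x in solutions])  # rank-based update of m, sigma, C

print("best objective value:", es.result.fbest)
```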
Recognizing Material Properties from Images
Title | Recognizing Material Properties from Images |
Authors | Gabriel Schwartz, Ko Nishino |
Abstract | Humans rely on properties of the materials that make up objects to guide our interactions with them. Grasping smooth materials, for example, requires care, and softness is an ideal property for fabric used in bedding. Even when these properties are not visual (e.g. softness is a physical property), we may still infer their presence visually. We refer to such material properties as visual material attributes. Recognizing these attributes in images can contribute valuable information for general scene understanding and material recognition. Unlike well-known object and scene attributes, visual material attributes are local properties with no fixed shape or spatial extent. We show that given a set of images annotated with known material attributes, we may accurately recognize the attributes from small local image patches. Obtaining such annotations in a consistent fashion at scale, however, is challenging. To address this, we introduce a method that allows us to probe the human visual perception of materials by asking simple yes/no questions comparing pairs of image patches. This provides sufficient weak supervision to build a set of attributes and associated classifiers that, while unnamed, serve the same function as the named attributes we use to describe materials. Doing so allows us to recognize visual material attributes without resorting to exhaustive manual annotation of a fixed set of named attributes. Furthermore, we show that this method may be integrated in the end-to-end learning of a material classification CNN to simultaneously recognize materials and discover their visual attributes. Our experimental results show that visual material attributes, whether named or automatically discovered, provide a useful intermediate representation for known material categories themselves as well as a basis for transfer learning when recognizing previously-unseen categories. |
Tasks | Material Classification, Material Recognition, Scene Understanding, Transfer Learning |
Published | 2018-01-09 |
URL | http://arxiv.org/abs/1801.03127v1 |
PDF | http://arxiv.org/pdf/1801.03127v1.pdf |
PWC | https://paperswithcode.com/paper/recognizing-material-properties-from-images |
Repo | |
Framework | |
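One way to read the weak supervision above is as pairwise labels on image patches: a "yes" answer says two patches share an attribute, a "no" answer says they do not. The sketch below trains a small patch embedding with a contrastive-style loss on such pairs, assuming PyTorch; the network, margin, and randomly generated pairs are illustrative stand-ins, not the paper's model or data.

```python
# Sketch: turning yes/no answers about pairs of patches into a contrastive
# training signal for a small patch-embedding network. Architecture, margin
# and data are placeholders, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, patches):
        return F.normalize(self.net(patches), dim=1)

def pairwise_loss(z_a, z_b, same, margin=0.5):
    """same=1 for a 'yes' answer (pull together), same=0 for 'no' (push apart)."""
    dist = (z_a - z_b).pow(2).sum(dim=1)
    return (same * dist + (1 - same) * F.relu(margin - dist.sqrt()).pow(2)).mean()

encoder = PatchEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

patch_a, patch_b = torch.randn(16, 3, 32, 32), torch.randn(16, 3, 32, 32)
answers = torch.randint(0, 2, (16,)).float()      # simulated yes/no responses
loss = pairwise_loss(encoder(patch_a), encoder(patch_b), answers)
loss.backward()
opt.step()
print("contrastive loss:", float(loss))
```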
Sense Perception Common Sense Relationships
Title | Sense Perception Common Sense Relationships |
Authors | Ndapa Nakashole |
Abstract | Often missing in existing knowledge bases of facts are relationships that encode common sense knowledge about unnamed entities. In this paper, we propose to extract novel, common sense relationships pertaining to sense perception concepts such as sound and smell. |
Tasks | Common Sense Reasoning |
Published | 2018-11-17 |
URL | https://arxiv.org/abs/1811.07098v2 |
PDF | https://arxiv.org/pdf/1811.07098v2.pdf |
PWC | https://paperswithcode.com/paper/sense-perception-common-sense-relationships |
Repo | |
Framework | |
Peptide-Spectra Matching from Weak Supervision
Title | Peptide-Spectra Matching from Weak Supervision |
Authors | Samuel S. Schoenholz, Sean Hackett, Laura Deming, Eugene Melamud, Navdeep Jaitly, Fiona McAllister, Jonathon O’Brien, George Dahl, Bryson Bennett, Andrew M. Dai, Daphne Koller |
Abstract | As in many other scientific domains, we face a fundamental problem when using machine learning to identify proteins from mass spectrometry data: large ground truth datasets mapping inputs to correct outputs are extremely difficult to obtain. Instead, we have access to imperfect hand-coded models crafted by domain experts. In this paper, we apply deep neural networks to an important step of the protein identification problem, the pairing of mass spectra with short sequences of amino acids called peptides. We train our model to differentiate between top scoring results from a state-of-the-art classical system and hard-negative second and third place results. Our resulting model is much better at identifying peptides with spectra than the model used to generate its training data. In particular, we achieve a 43% improvement over standard matching methods and a 10% improvement over a combination of the matching method and an industry-standard cross-spectra reranking tool. Importantly, in a more difficult experimental regime that reflects current challenges facing biologists, our advantage over the previous state-of-the-art grows to 15% even after reranking. We believe this approach will generalize to other challenging scientific problems. |
Tasks | |
Published | 2018-08-20 |
URL | http://arxiv.org/abs/1808.06576v2 |
PDF | http://arxiv.org/pdf/1808.06576v2.pdf |
PWC | https://paperswithcode.com/paper/peptide-spectra-matching-from-weak |
Repo | |
Framework | |
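The training signal described above, scoring the classical engine's top (spectrum, peptide) match higher than the hard-negative second- and third-place matches, is essentially pairwise ranking. A minimal PyTorch sketch of such an objective follows; the scorer and the random feature vectors are placeholders for the real spectrum/peptide encodings.

```python
# Sketch of the pairwise objective: a scorer should rank the classical engine's
# top (spectrum, peptide) match above the hard-negative runner-up matches.
# The scorer and the random "features" are placeholders.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
rank_loss = nn.MarginRankingLoss(margin=1.0)
opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)

# Stand-ins for encoded (spectrum, peptide) pairs: one positive (top match)
# and one hard negative (second/third place) per spectrum.
pos_pairs = torch.randn(32, 128)
neg_pairs = torch.randn(32, 128)

pos_scores = scorer(pos_pairs).squeeze(1)
neg_scores = scorer(neg_pairs).squeeze(1)
target = torch.ones(32)                 # +1 means pos_scores should be larger
loss = rank_loss(pos_scores, neg_scores, target)
loss.backward()
opt.step()
print("ranking loss:", float(loss))
```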
Graph Partition Neural Networks for Semi-Supervised Classification
Title | Graph Partition Neural Networks for Semi-Supervised Classification |
Authors | Renjie Liao, Marc Brockschmidt, Daniel Tarlow, Alexander L. Gaunt, Raquel Urtasun, Richard Zemel |
Abstract | We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs. GPNNs alternate between locally propagating information between nodes in small subgraphs and globally propagating information between the subgraphs. To efficiently partition graphs, we experiment with several partitioning algorithms and also propose a novel variant for fast processing of large scale graphs. We extensively test our model on a variety of semi-supervised node classification tasks. Experimental results indicate that GPNNs are either superior or comparable to state-of-the-art methods on a wide variety of datasets for graph-based semi-supervised classification. We also show that GPNNs can achieve similar performance as standard GNNs with fewer propagation steps. |
Tasks | Node Classification |
Published | 2018-03-16 |
URL | http://arxiv.org/abs/1803.06272v1 |
PDF | http://arxiv.org/pdf/1803.06272v1.pdf |
PWC | https://paperswithcode.com/paper/graph-partition-neural-networks-for-semi |
Repo | |
Framework | |
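The alternating schedule described above, several propagation steps inside each subgraph followed by a step across the edges that cross partitions, can be sketched with plain adjacency-matrix averaging. The hand-fixed partition and the unlearned propagation rule below are illustrative; the actual GPNN uses learned message functions and a graph partitioning algorithm.

```python
# Sketch of GPNN-style scheduling: propagate features several steps inside each
# partition, then once across edges that cross partitions, and repeat.
# The fixed partition and the simple averaging propagation are illustrative.
import numpy as np

def propagate(adj, feats, steps=1):
    """Mix each node's feature with the mean of its neighbors' features."""
    deg = adj.sum(axis=1, keepdims=True)
    has_nbrs = deg > 0
    norm = np.divide(adj, deg, out=np.zeros_like(adj), where=has_nbrs)
    for _ in range(steps):
        mixed = 0.5 * feats + 0.5 * (norm @ feats)
        feats = np.where(has_nbrs, mixed, feats)   # isolated nodes keep their feature
    return feats

# Toy graph on 6 nodes, partitioned into {0,1,2} and {3,4,5}.
adj = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (2, 3)]   # (2, 3) crosses the cut
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

partition = np.array([0, 0, 0, 1, 1, 1])
same = (partition[:, None] == partition[None, :]).astype(float)
intra_adj = adj * same          # edges inside a subgraph
cut_adj = adj * (1 - same)      # edges between subgraphs

feats = np.eye(6)               # one-hot node features
for _ in range(3):              # alternate local and global propagation
    feats = propagate(intra_adj, feats, steps=2)   # local steps within partitions
    feats = propagate(cut_adj, feats, steps=1)     # one step across the cut
print(np.round(feats, 2))
```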
Truly Autonomous Machines Are Ethical
Title | Truly Autonomous Machines Are Ethical |
Authors | John Hooker |
Abstract | While many see the prospect of autonomous machines as threatening, autonomy may be exactly what we want in a superintelligent machine. There is a sense of autonomy, deeply rooted in the ethical literature, in which an autonomous machine is necessarily an ethical one. Development of the theory underlying this idea not only reveals the advantages of autonomy, but it sheds light on a number of issues in the ethics of artificial intelligence. It helps us to understand what sort of obligations we owe to machines, and what obligations they owe to us. It clears up the issue of assigning responsibility to machines or their creators. More generally, a concept of autonomy that is adequate to both human and artificial intelligence can lead to a more adequate ethical theory for both. |
Tasks | |
Published | 2018-12-05 |
URL | http://arxiv.org/abs/1812.02217v1 |
PDF | http://arxiv.org/pdf/1812.02217v1.pdf |
PWC | https://paperswithcode.com/paper/truly-autonomous-machines-are-ethical |
Repo | |
Framework | |
Distributed Deep Forest and its Application to Automatic Detection of Cash-out Fraud
Title | Distributed Deep Forest and its Application to Automatic Detection of Cash-out Fraud |
Authors | Ya-Lin Zhang, Jun Zhou, Wenhao Zheng, Ji Feng, Longfei Li, Ziqi Liu, Ming Li, Zhiqiang Zhang, Chaochao Chen, Xiaolong Li, Zhi-Hua Zhou, Yuan Qi |
Abstract | Internet companies face the need to handle large-scale machine learning applications on a daily basis, so distributed implementations of machine learning algorithms that can handle extra-large-scale tasks with good performance are widely needed. Deep forest is a recently proposed deep learning framework which uses tree ensembles as its building blocks, and it has achieved highly competitive results on various domains of tasks. However, it has not been tested on extremely large-scale tasks. In this work, based on our parameter server system, we developed a distributed version of deep forest. To meet the needs of real-world tasks, many improvements are introduced to the original deep forest model, including MART (Multiple Additive Regression Tree) as base learners for efficiency and effectiveness, a cost-based method for handling prevalent class-imbalanced data, MART-based feature selection for high-dimensional data, and different evaluation metrics for automatically determining the cascade level. We tested the deep forest model on an extra-large-scale task, i.e., automatic detection of cash-out fraud, with more than 100 million training samples. Experimental results showed that the deep forest model has the best performance according to evaluation metrics from different perspectives, even with very little effort for parameter tuning. This model can block fraudulent transactions worth a large amount of money each day. Even compared with the best deployed model, the deep forest model can additionally bring a significant decrease in economic loss each day. |
Tasks | Feature Selection |
Published | 2018-05-11 |
URL | https://arxiv.org/abs/1805.04234v3 |
PDF | https://arxiv.org/pdf/1805.04234v3.pdf |
PWC | https://paperswithcode.com/paper/distributed-deep-forest-and-its-application |
Repo | |
Framework | |
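The cascade idea behind deep forest, layers of tree ensembles whose class-probability outputs are appended to the features fed to the next layer, can be sketched with scikit-learn's gradient boosting as a MART-like stand-in. The layer count, the per-sample weights used for the cost-based imbalance handling, and the synthetic data are simplifications, and nothing here is distributed.

```python
# Sketch of a deep-forest-style cascade with gradient boosting (a MART-like
# learner) as the building block: each level's predicted class probabilities
# are appended to the features fed to the next level. Layer count, weighting
# and data are simplified stand-ins; nothing here is distributed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
# Cost-based handling of class imbalance via per-sample weights (assumed scheme).
sample_weight = np.where(y == 1, 20.0, 1.0)

features = X
for level in range(3):                          # a fixed, small cascade depth
    clf = GradientBoostingClassifier(n_estimators=50, random_state=level)
    clf.fit(features, y, sample_weight=sample_weight)
    proba = clf.predict_proba(features)         # (n_samples, n_classes)
    level_input = features                      # remember what this level consumed
    features = np.hstack([X, proba])            # next level: original features + outputs

print("last level training accuracy: %.3f" % clf.score(level_input, y))
```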
Tournament Based Ranking CNN for the Cataract grading
Title | Tournament Based Ranking CNN for the Cataract grading |
Authors | Dohyeun Kim, Tae Joon Jun, Daeyoung Kim, Youngsub Eom |
Abstract | When solving classification problems, an unbalanced number of samples among the classes often causes performance degradation. Especially when some classes dominate the others with their large number of samples, the trained model shows low performance in identifying the dominated classes. This is a common situation with medical datasets: because severe cases are relatively rare, there is an imbalance in the number of samples between severe and normal cases of a disease. It is also difficult to identify the grade of medical data precisely because of the vagueness between grades. To solve these problems, we propose a new convolutional neural network architecture named Tournament based Ranking CNN, which shows a remarkable performance gain in identifying the dominated classes while trading off a very small accuracy loss in the dominating classes. Our approach addresses the problems that occur when Ranking CNN, a method that aggregates the outputs of multiple binary neural network models, is applied to medical data. By using a tournament structure in the aggregation and very deep pretrained binary models, our proposed model recorded 68.36% exact-match accuracy, while Ranking CNN recorded 53.40%, a pretrained ResNet recorded 56.12%, and a CNN with linear regression recorded 57.48%. As a result, our proposed method applies efficiently to cataract grading, which has ordinal labels with an imbalanced number of samples among classes, and can be applied further to medical problems with features and dataset configurations similar to cataract grading. |
Tasks | |
Published | 2018-07-07 |
URL | http://arxiv.org/abs/1807.02657v1 |
PDF | http://arxiv.org/pdf/1807.02657v1.pdf |
PWC | https://paperswithcode.com/paper/tournament-based-ranking-cnn-for-the-cataract |
Repo | |
Framework | |
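Ranking CNN, which the method above extends, decomposes an ordinal grading task into binary "grade > k" classifiers and aggregates their outputs; the tournament variant changes how those binary decisions are scheduled and combined. The sketch below shows only the plain decomposition-and-counting aggregation, with dummy probabilities standing in for the per-threshold CNN outputs; the tournament structure itself is not reproduced.

```python
# Sketch of the Ranking-CNN-style decomposition the paper builds on: an ordinal
# grade in {0,...,K} is recovered by counting how many binary "grade > k"
# classifiers fire. The probabilities below are dummies standing in for the
# outputs of K per-threshold CNNs.
import numpy as np

def aggregate_ordinal(binary_probs, threshold=0.5):
    """binary_probs[i, k] = P(grade of sample i > k); grade = number of 'yes' votes."""
    return (binary_probs > threshold).sum(axis=1)

rng = np.random.default_rng(0)
K = 4                                        # grades 0..4 -> 4 binary classifiers
true_grades = rng.integers(0, K + 1, size=6)
# Simulated classifier outputs: high probability when the true grade exceeds k.
ks = np.arange(K)
binary_probs = np.clip((true_grades[:, None] > ks) + rng.normal(0, 0.2, (6, K)), 0, 1)

print("true     :", true_grades)
print("predicted:", aggregate_ordinal(binary_probs))
```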
Textual Membership Queries
Title | Textual Membership Queries |
Authors | Jonathan Zarecki, Shaul Markovitch |
Abstract | Human labeling of textual data can be very time-consuming and expensive, yet it is critical for the success of an automatic text classification system. In order to minimize human labeling efforts, we propose a novel active learning (AL) solution, that does not rely on existing sources of unlabeled data. It uses a small amount of labeled data as the core set for the synthesis of useful membership queries (MQs) - unlabeled instances synthesized by an algorithm for human labeling. Our solution uses modification operators, functions from the instance space to the instance space that change the input to some extent. We apply the operators on the core set, thus creating a set of new membership queries. Using this framework, we look at the instance space as a search space and apply search algorithms in order to create desirable MQs. We implement this framework in the textual domain. The implementation includes using methods such as WordNet and Word2vec, for replacing text fragments from a given sentence with semantically related ones. We test our framework on several text classification tasks and show improved classifier performance as more MQs are labeled and incorporated into the training set. To the best of our knowledge, this is the first work on membership queries in the textual domain. |
Tasks | Active Learning, Text Classification |
Published | 2018-05-11 |
URL | http://arxiv.org/abs/1805.04609v1 |
PDF | http://arxiv.org/pdf/1805.04609v1.pdf |
PWC | https://paperswithcode.com/paper/textual-membership-queries |
Repo | |
Framework | |
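The modification operators described above map a labeled seed sentence to nearby sentences, for example by swapping a word for a semantically related one. A minimal WordNet-based operator is sketched below using NLTK (the wordnet corpus must be downloaded first); the operator and the filtering of candidate synonyms are illustrative, not the paper's exact procedure.

```python
# Minimal sketch of a modification operator: synthesize new membership queries
# by swapping one word of a labeled seed sentence for a WordNet synonym.
# Requires: pip install nltk, then nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def synonym_replacements(sentence, word):
    """Yield candidate sentences with `word` replaced by WordNet synonyms."""
    synonyms = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            name = lemma.name().replace("_", " ")
            if name.lower() != word.lower():
                synonyms.add(name)
    for syn in sorted(synonyms):
        yield sentence.replace(word, syn)

seed = "the movie was surprisingly good"      # a labeled core-set instance
queries = list(synonym_replacements(seed, "good"))
print(queries[:5])                            # candidate MQs to send for human labeling
```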
Deep Boosted Regression for MR to CT Synthesis
Title | Deep Boosted Regression for MR to CT Synthesis |
Authors | Kerstin Kläser, Pawel Markiewicz, Marta Ranzini, Wenqi Li, Marc Modat, Brian F Hutton, David Atkinson, Kris Thielemans, M Jorge Cardoso, Sebastien Ourselin |
Abstract | Attenuation correction is an essential requirement of positron emission tomography (PET) image reconstruction to allow for accurate quantification. However, attenuation correction is particularly challenging for PET-MRI as neither PET nor magnetic resonance imaging (MRI) can directly image tissue attenuation properties. MRI-based computed tomography (CT) synthesis has been proposed as an alternative to physics-based and segmentation-based approaches that assign a population-based tissue density value in order to generate an attenuation map. We propose a novel deep fully convolutional neural network that generates synthetic CTs in a recursive manner by gradually reducing the residuals of the previous network, increasing the overall accuracy and generalisability while keeping the number of trainable parameters within reasonable limits. The model is trained on a database of 20 pre-acquired MRI/CT pairs, and a four-fold random bootstrapped validation with an 80:20 split is performed. Quantitative results show that the proposed framework outperforms a state-of-the-art atlas-based approach, decreasing the Mean Absolute Error (MAE) from 131 HU to 68 HU for the synthetic CTs and reducing the PET reconstruction error from 14.3% to 7.2%. |
Tasks | Computed Tomography (CT), Image Reconstruction |
Published | 2018-08-22 |
URL | http://arxiv.org/abs/1808.07431v1 |
PDF | http://arxiv.org/pdf/1808.07431v1.pdf |
PWC | https://paperswithcode.com/paper/deep-boosted-regression-for-mr-to-ct |
Repo | |
Framework | |
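The boosting in the method above means that each subsequent network regresses the residual left by the sum of its predecessors, so the final synthetic CT is the sum of all stages. The sketch below shows that recursion with scikit-learn MLP regressors on synthetic 1-D data; the actual work uses fully convolutional networks on MR/CT volumes.

```python
# Sketch of boosted regression: each stage fits the residual of the running
# prediction, and the final output is the sum of all stages. Small MLPs and
# synthetic 1-D data stand in for the paper's fully convolutional networks
# and MR/CT volumes.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)      # stand-in target intensities

stages, prediction = [], np.zeros_like(y)
for k in range(3):                                     # three boosting stages
    residual = y - prediction                          # what the previous stages missed
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=k)
    net.fit(X, residual)
    stages.append(net)
    prediction += net.predict(X)
    print("stage %d mean absolute error: %.3f" % (k, np.abs(y - prediction).mean()))
```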
Augmented Reality-based Feedback for Technician-in-the-loop C-arm Repositioning
Title | Augmented Reality-based Feedback for Technician-in-the-loop C-arm Repositioning |
Authors | Mathias Unberath, Javad Fotouhi, Jonas Hajek, Andreas Maier, Greg Osgood, Russell Taylor, Mehran Armand, Nassir Navab |
Abstract | Interventional C-arm imaging is crucial to percutaneous orthopedic procedures as it enables the surgeon to monitor the progress of surgery at the anatomy level. Minimally invasive interventions require repeated acquisition of X-ray images from different anatomical views to verify tool placement. Achieving and reproducing these views often comes at the cost of increased surgical time and radiation dose to both patient and staff. This work proposes a marker-free “technician-in-the-loop” Augmented Reality (AR) solution for C-arm repositioning. The X-ray technician operating the C-arm interventionally is equipped with a head-mounted display capable of recording desired C-arm poses in 3D via an integrated infrared sensor. For C-arm repositioning to a particular target view, the recorded C-arm pose is restored as a virtual object and visualized in an AR environment, serving as a perceptual reference for the technician. We conduct experiments in a setting simulating orthopedic trauma surgery. Our proof-of-principle findings indicate that the proposed system can decrease the 2.76 X-ray images required per desired view down to zero, suggesting a substantial reduction of radiation dose during C-arm repositioning. The proposed AR solution is a first step towards facilitating communication between the surgeon and the surgical staff, improving the quality of surgical image acquisition, and enabling context-aware guidance for surgery rooms of the future. The concept of technician-in-the-loop design will become relevant to various interventions considering the expected advancements of sensing and wearable computing in the near future. |
Tasks | |
Published | 2018-06-22 |
URL | http://arxiv.org/abs/1806.08814v1 |
PDF | http://arxiv.org/pdf/1806.08814v1.pdf |
PWC | https://paperswithcode.com/paper/augmented-reality-based-feedback-for |
Repo | |
Framework | |
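The feedback loop above amounts to recording a desired C-arm pose and then showing the technician how far the current pose is from it. The sketch below computes that remaining offset for poses given as 4x4 rigid transforms with NumPy; the pose source (the head-mounted display's infrared tracking) and the AR visualization are out of scope, and the unit labels are assumptions.

```python
# Sketch of the repositioning feedback: given a recorded target C-arm pose and
# the current pose (both as 4x4 rigid transforms), report the remaining
# translation and rotation offset. Tracking and AR visualization are out of scope;
# millimeter units are an assumption for the toy numbers below.
import numpy as np

def pose(rotation_deg, translation):
    """Build a 4x4 rigid transform from a rotation about z (degrees) and a translation."""
    a = np.deg2rad(rotation_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]]
    T[:3, 3] = translation
    return T

def repositioning_offset(current, target):
    """Translation (same units as the poses) and rotation angle (degrees) left to go."""
    delta = np.linalg.inv(current) @ target
    translation = np.linalg.norm(delta[:3, 3])
    angle = np.degrees(np.arccos(np.clip((np.trace(delta[:3, :3]) - 1) / 2, -1, 1)))
    return translation, angle

recorded = pose(30, [100.0, 50.0, 0.0])      # desired view stored during the procedure
current = pose(10, [90.0, 40.0, 5.0])        # where the C-arm is right now
print("offset: %.1f mm translation, %.1f deg rotation"
      % repositioning_offset(current, recorded))
```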