Paper Group ANR 568
Ensemble Distribution Distillation. When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion. Towards Ranking Geometric Automated Theorem Provers. Energy-based Graph Convolutional Networks for Scoring Protein Docking Models. Distributed Answer Set Coloring: Stable Models Computation via Graph Coloring …
Ensemble Distribution Distillation
Title | Ensemble Distribution Distillation |
Authors | Andrey Malinin, Bruno Mlodozeniec, Mark Gales |
Abstract | Ensembles of models often yield improvements in system performance. These ensemble approaches have also been empirically shown to yield robust measures of uncertainty, and are capable of distinguishing between different \emph{forms} of uncertainty. However, ensembles come at a computational and memory cost which may be prohibitive for many applications. There has been significant work done on the distillation of an ensemble into a single model. Such approaches decrease computational cost and allow a single model to achieve an accuracy comparable to that of an ensemble. However, information about the \emph{diversity} of the ensemble, which can yield estimates of different forms of uncertainty, is lost. This work considers the novel task of \emph{Ensemble Distribution Distillation} (EnD$^2$) — distilling the distribution of the predictions from an ensemble, rather than just the average prediction, into a single model. EnD$^2$ enables a single model to retain both the improved classification performance of ensemble distillation as well as information about the diversity of the ensemble, which is useful for uncertainty estimation. A solution for EnD$^2$ based on Prior Networks, a class of models which allow a single neural network to explicitly model a distribution over output distributions, is proposed in this work. The properties of EnD$^2$ are investigated on both an artificial dataset, and on the CIFAR-10, CIFAR-100 and TinyImageNet datasets, where it is shown that EnD$^2$ can approach the classification performance of an ensemble, and outperforms both standard DNNs and Ensemble Distillation on the tasks of misclassification and out-of-distribution input detection. |
Tasks | |
Published | 2019-04-30 |
URL | https://arxiv.org/abs/1905.00076v3 |
https://arxiv.org/pdf/1905.00076v3.pdf | |
PWC | https://paperswithcode.com/paper/ensemble-distribution-distillation |
Repo | |
Framework | |
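The core of EnD$^2$ as described above is training a single network to predict a distribution over output distributions that matches the ensemble. Below is a minimal sketch of such a distillation loss, assuming a PyTorch setup in which the student is a Prior Network whose outputs parameterise a Dirichlet; the `exp` parameterisation and tensor shapes are illustrative assumptions, not the authors' exact formulation. The loss is the negative log-likelihood of each ensemble member's categorical prediction under the student's Dirichlet.

```python
import torch

def end2_loss(student_logits, ensemble_probs, eps=1e-8):
    """Negative log-likelihood of ensemble predictions under a predicted Dirichlet.

    student_logits: (batch, num_classes) raw outputs of the distilled model.
    ensemble_probs: (batch, num_models, num_classes) softmax outputs of the ensemble.
    """
    alpha = torch.exp(student_logits)          # Dirichlet concentrations, shape (B, K)
    alpha0 = alpha.sum(dim=-1)                 # Dirichlet precision, shape (B,)

    # log Dirichlet density: log G(alpha0) - sum_k log G(alpha_k) + sum_k (alpha_k - 1) log pi_k
    log_norm = torch.lgamma(alpha0) - torch.lgamma(alpha).sum(-1)                       # (B,)
    log_prob = ((alpha.unsqueeze(1) - 1.0) * torch.log(ensemble_probs + eps)).sum(-1)   # (B, M)
    return -(log_norm.unsqueeze(1) + log_prob).mean()
```

At test time the predicted `alpha` provides both a mean prediction (`alpha / alpha0`) and a measure of spread, which is what lets the distilled model retain the ensemble's diversity for uncertainty estimation.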
When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion
Title | When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion |
Authors | Elena Voita, Rico Sennrich, Ivan Titov |
Abstract | Though machine translation errors caused by the lack of context beyond one sentence have long been acknowledged, the development of context-aware NMT systems is hampered by several problems. Firstly, standard metrics are not sensitive to improvements in consistency in document-level translations. Secondly, previous work on context-aware NMT assumed that the sentence-aligned parallel data consisted of complete documents while in most practical scenarios such document-level data constitutes only a fraction of the available parallel data. To address the first issue, we perform a human study on an English-Russian subtitles dataset and identify deixis, ellipsis and lexical cohesion as three main sources of inconsistency. We then create test sets targeting these phenomena. To address the second shortcoming, we consider a set-up in which a much larger amount of sentence-level data is available compared to that aligned at the document level. We introduce a model that is suitable for this scenario and demonstrate major gains over a context-agnostic baseline on our new benchmarks without sacrificing performance as measured with BLEU. |
Tasks | Machine Translation |
Published | 2019-05-15 |
URL | https://arxiv.org/abs/1905.05979v2 |
https://arxiv.org/pdf/1905.05979v2.pdf | |
PWC | https://paperswithcode.com/paper/when-a-good-translation-is-wrong-in-context |
Repo | |
Framework | |
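The targeted test sets mentioned in the abstract are typically evaluated contrastively: the system scores several translation variants that are acceptable in isolation but only one of which is consistent with the context, and it is counted as correct if the consistent variant receives the highest score. A minimal sketch of such scoring, assuming a HuggingFace-style seq2seq NMT model and tokenizer as illustrative stand-ins (not the authors' code):

```python
import torch

def score_candidates(model, tokenizer, context_plus_source, candidates):
    """Return one log-probability-like score per candidate translation."""
    scores = []
    for cand in candidates:
        inputs = tokenizer(context_plus_source, return_tensors="pt")
        labels = tokenizer(cand, return_tensors="pt").input_ids  # target tokenization details depend on the model
        with torch.no_grad():
            out = model(**inputs, labels=labels)
        scores.append(-out.loss.item())  # out.loss is the mean token NLL
    return scores

# The model is counted as correct if max(scores) is attained by the
# context-consistent candidate.
```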
Towards Ranking Geometric Automated Theorem Provers
Title | Towards Ranking Geometric Automated Theorem Provers |
Authors | Nuno Baeta, Pedro Quaresma |
Abstract | The field of geometric automated theorem provers has a long and rich history, from the early AI approaches of the 1960s and synthetic provers to today's algebraic and synthetic provers. The geometry automated deduction area differs from other areas by the strong connection between the axiomatic theories and their standard models. In many cases geometric constructions are used to establish the theorems’ statements; in some provers they are also used to conduct the proof, or as counter-examples to close branches of the automatic proof. Synthetic geometry proofs are done using geometric properties, and such proofs can have a visual counterpart in the supporting geometric construction. With the growing use of geometric automated deduction tools in other areas, e.g. in education, the need to evaluate them against different criteria is increasingly felt. Establishing a ranking among geometric automated theorem provers will be useful for the improvement of the current methods/implementations. Improvements could concern wider scope, better efficiency, proof readability and proof reliability. To compare geometric automated theorem provers, a common test bench is needed: a common language to describe the geometric problems, a comprehensive repository of geometric problems, and a set of quality measures. |
Tasks | |
Published | 2019-04-01 |
URL | http://arxiv.org/abs/1904.00619v1 |
http://arxiv.org/pdf/1904.00619v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-ranking-geometric-automated-theorem |
Repo | |
Framework | |
Energy-based Graph Convolutional Networks for Scoring Protein Docking Models
Title | Energy-based Graph Convolutional Networks for Scoring Protein Docking Models |
Authors | Yue Cao, Yang Shen |
Abstract | Structural information about protein-protein interactions, often missing at the interactome scale, is important for mechanistic understanding of cells and rational discovery of therapeutics. Protein docking provides a computational alternative to predict such information. However, ranking near-native docked models high among a large number of candidates, often known as the scoring problem, remains a critical challenge. Moreover, estimating model quality, also known as the quality assessment problem, is rarely addressed in protein docking. In this study, the two challenging problems in protein docking are regarded as relative and absolute scoring, respectively, and addressed in one physics-inspired deep learning framework. We represent proteins’ and encounter complexes’ 3D structures as intra- and inter-molecular residue contact graphs with atom-resolution node and edge features. We then propose a novel graph convolutional kernel that pools interacting nodes’ features through edge features so that generalized interaction energies can be learned directly from graph data. The resulting energy-based graph convolutional networks (EGCN) with multi-head attention are trained to predict intra- and inter-molecular energies, binding affinities, and quality measures (interface RMSD) for encounter complexes. Compared to a state-of-the-art scoring function for model ranking, EGCN significantly improves ranking for a CAPRI test set involving homology docking and is comparable for Score_set, a CAPRI benchmark set generated by diverse community-wide docking protocols not seen in the training data. For Score_set quality assessment, EGCN shows about a 27% improvement over our previous efforts. Directly learning from 3D structure data in graph representation, EGCN represents the first successful development of graph convolutional networks for protein docking. |
Tasks | |
Published | 2019-12-28 |
URL | https://arxiv.org/abs/1912.12476v1 |
https://arxiv.org/pdf/1912.12476v1.pdf | |
PWC | https://paperswithcode.com/paper/energy-based-graph-convolutional-networks-for |
Repo | |
Framework | |
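The central modelling idea above, pooling interacting nodes' features through edge features to produce energy-like scores, can be illustrated with a generic edge-conditioned message-passing layer. This is a sketch in the spirit of the abstract, not the authors' exact kernel or attention mechanism:

```python
import torch
import torch.nn as nn

class EdgeConditionedConv(nn.Module):
    """Pool node-pair features through their edge features, then read out a scalar 'energy'."""

    def __init__(self, node_dim, edge_dim, hidden_dim):
        super().__init__()
        self.message = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim))
        self.readout = nn.Linear(hidden_dim, 1)

    def forward(self, x, edge_index, edge_attr):
        # x: (N, node_dim); edge_index: (2, E) long tensor; edge_attr: (E, edge_dim)
        src, dst = edge_index
        msg = self.message(torch.cat([x[src], x[dst], edge_attr], dim=-1))  # (E, hidden_dim)
        agg = torch.zeros(x.size(0), msg.size(1), device=x.device)
        agg.index_add_(0, dst, msg)        # pool messages onto receiving nodes
        return self.readout(agg).sum()     # graph-level scalar energy
```

In the paper's setting, such energies are predicted for both intra- and inter-molecular contact graphs and feed the ranking and quality-assessment outputs.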
Distributed Answer Set Coloring: Stable Models Computation via Graph Coloring
Title | Distributed Answer Set Coloring: Stable Models Computation via Graph Coloring |
Authors | Marco De Bortoli |
Abstract | Answer Set Programming (ASP) is a well-known logic language for knowledge representation, which has been very successful in recent years, as witnessed by the great interest in the development of efficient solvers for ASP. Yet, the large resource demands of certain types of problems, such as planning problems, still constitute a major limitation for problem solving. In particular, when the program is grounded before the solving phase, an exponential blow-up of the grounding can generate a huge ground file that is infeasible for single machines with limited resources, thus preventing the discovery of even a single non-optimal solution. To address this problem, in this paper we present a distributed approach to ASP solving, exploiting the benefits of distributed computation to overcome these limitations. The tool presented here, called Distributed Answer Set Coloring (DASC), is a pure solver based on the well-known Graph Coloring algorithm. DASC is part of a bigger project aiming to bring logic programming into a distributed system, started in 2017 by Federico Igne with mASPreduce and continued in 2018 by Pietro Totis with a distributed grounder. In this paper we present a low-level implementation of the Graph Coloring algorithm, via the Boost and MPI libraries for C++. Finally, we provide a few results from the very first working version of our tool, which at the moment has no strong optimizations or heuristics. |
Tasks | |
Published | 2019-09-18 |
URL | https://arxiv.org/abs/1909.08263v1 |
https://arxiv.org/pdf/1909.08263v1.pdf | |
PWC | https://paperswithcode.com/paper/distributed-answer-set-coloring-stable-models |
Repo | |
Framework | |
A Review of Statistical Learning Machines from ATR to DNA Microarrays: design, assessment, and advice for practitioners
Title | A Review of Statistical Learning Machines from ATR to DNA Microarrays: design, assessment, and advice for practitioners |
Authors | Waleed A. Yousef |
Abstract | Statistical Learning is the process of estimating an unknown probabilistic input-output relationship of a system using a limited number of observations, and a statistical learning machine (SLM) is the machine that has learned such a relationship. While their roots grow deep in Probability Theory, SLMs are ubiquitous in the modern world. Automatic Target Recognition (ATR) in military applications, Computer Aided Diagnosis (CAD) in medical imaging, DNA microarrays in Genomics, Optical Character Recognition (OCR), Speech Recognition (SR), spam email filtering, stock market prediction, etc., are a few examples of SLM applications; diverse fields but one theory. The field of Statistical Learning can be decomposed into two basic subfields, Design and Assessment. Three main groups of specialists, namely statisticians, engineers, and computer scientists (ordered ascendingly by programming capability and descendingly by mathematical rigor), work in this field, and each takes its own bite of the elephant. The exaggerated rigor of statisticians sometimes deprives them of considering new ML techniques and methods that do not yet have a “complete” mathematical theory. On the other hand, the immoderate ad-hoc simulations of computer scientists sometimes drive them towards unjustified and immature results. A prudent approach is needed that has enough flexibility to utilize simulations and trial and error without sacrificing rigor. If this prudent attitude is necessary for this field, it is necessary in other fields of Engineering as well. |
Tasks | Optical Character Recognition, Speech Recognition, Stock Market Prediction |
Published | 2019-06-24 |
URL | https://arxiv.org/abs/1906.10019v2 |
https://arxiv.org/pdf/1906.10019v2.pdf | |
PWC | https://paperswithcode.com/paper/statistical-learning-machines-from-atr-to-dna |
Repo | |
Framework | |
DeeperLab: Single-Shot Image Parser
Title | DeeperLab: Single-Shot Image Parser |
Authors | Tien-Ju Yang, Maxwell D. Collins, Yukun Zhu, Jyh-Jing Hwang, Ting Liu, Xiao Zhang, Vivienne Sze, George Papandreou, Liang-Chieh Chen |
Abstract | We present a single-shot, bottom-up approach for whole image parsing. Whole image parsing, also known as Panoptic Segmentation, generalizes the tasks of semantic segmentation for ‘stuff’ classes and instance segmentation for ‘thing’ classes, assigning both semantic and instance labels to every pixel in an image. Recent approaches to whole image parsing typically employ separate standalone modules for the constituent semantic and instance segmentation tasks and require multiple passes of inference. Instead, the proposed DeeperLab image parser performs whole image parsing with a significantly simpler, fully convolutional approach that jointly addresses the semantic and instance segmentation tasks in a single-shot manner, resulting in a streamlined system that better lends itself to fast processing. For quantitative evaluation, we use both the instance-based Panoptic Quality (PQ) metric and the proposed region-based Parsing Covering (PC) metric, which better captures the image parsing quality on ‘stuff’ classes and larger object instances. We report experimental results on the challenging Mapillary Vistas dataset, in which our single model achieves 31.95% (val) / 31.6% PQ (test) and 55.26% PC (val) with 3 frames per second (fps) on GPU or near real-time speed (22.6 fps on GPU) with reduced accuracy. |
Tasks | Instance Segmentation, Panoptic Segmentation, Semantic Segmentation |
Published | 2019-02-13 |
URL | http://arxiv.org/abs/1902.05093v2 |
http://arxiv.org/pdf/1902.05093v2.pdf | |
PWC | https://paperswithcode.com/paper/deeperlab-single-shot-image-parser |
Repo | |
Framework | |
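For reference, the instance-based Panoptic Quality (PQ) metric used above has a standard definition: predicted and ground-truth segments matched with IoU > 0.5 are true positives, and PQ is the average matched IoU divided by a count that penalises unmatched predictions and ground-truth segments. A minimal per-class sketch (the proposed region-based Parsing Covering metric is not reproduced here):

```python
def panoptic_quality(matched_ious, num_pred, num_gt):
    """matched_ious: IoU values of predicted/ground-truth segment pairs with IoU > 0.5
    (each pair is one true positive).  num_pred / num_gt: total numbers of predicted
    and ground-truth segments for the class."""
    tp = len(matched_ious)
    fp = num_pred - tp
    fn = num_gt - tp
    if tp + fp + fn == 0:
        return 0.0
    sq = sum(matched_ious) / tp if tp else 0.0   # segmentation quality
    rq = tp / (tp + 0.5 * fp + 0.5 * fn)         # recognition quality
    return sq * rq                               # PQ = SQ * RQ
```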
Towards Object Detection from Motion
Title | Towards Object Detection from Motion |
Authors | Rico Jonschkowski, Austin Stone |
Abstract | We present a novel approach to weakly supervised object detection. Instead of annotated images, our method only requires two short videos to learn to detect a new object: 1) a video of a moving object and 2) one or more “negative” videos of the scene without the object. The key idea of our algorithm is to train the object detector to produce physically plausible object motion when applied to the first video and to not detect anything in the second video. With this approach, our method learns to locate objects without any object location annotations. Once the model is trained, it performs object detection on single images. We evaluate our method in three robotics settings that afford learning objects from motion: observing moving objects, watching demonstrations of object manipulation, and physically interacting with objects (see a video summary at https://youtu.be/BH0Hv3zZG_4). |
Tasks | Object Detection, Weakly Supervised Object Detection |
Published | 2019-09-17 |
URL | https://arxiv.org/abs/1909.12950v1 |
https://arxiv.org/pdf/1909.12950v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-object-detection-from-motion |
Repo | |
Framework | |
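The abstract describes the training signal but not its exact form; the toy loss below is a hedged illustration of the idea, assuming the detector outputs a single location per frame on the positive video and a confidence per frame on the negative video. The actual "physically plausible motion" objective in the paper may differ.

```python
import torch

def motion_supervision_loss(pos_locations, neg_confidences, w_neg=1.0):
    """Encourage smooth motion of the detected object in the positive video and
    penalise any detections in the negative video (illustrative assumptions).

    pos_locations:   (T, 2) predicted (x, y) per frame of the moving-object video.
    neg_confidences: (T',)  detection confidence per frame of the negative video.
    """
    velocity = pos_locations[1:] - pos_locations[:-1]      # (T-1, 2)
    acceleration = velocity[1:] - velocity[:-1]            # (T-2, 2)
    smoothness = (acceleration ** 2).sum(dim=-1).mean()    # plausible-motion term
    no_object = neg_confidences.mean()                     # nothing should be detected here
    return smoothness + w_neg * no_object
```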
A Multitask Network for Localization and Recognition of Text in Images
Title | A Multitask Network for Localization and Recognition of Text in Images |
Authors | Mohammad Reza Sarshogh, Keegan E. Hines |
Abstract | We present an end-to-end trainable multi-task network that addresses the problem of lexicon-free text extraction from complex documents. This network simultaneously solves the problems of text localization and text recognition, and text segments are identified with no post-processing, cropping, or word grouping. A convolutional backbone and Feature Pyramid Network are combined to provide a shared representation that benefits each of three model heads: text localization, classification, and text recognition. To improve recognition accuracy, we describe a dynamic pooling mechanism that retains high-resolution information across all RoIs. For text recognition, we propose a convolutional mechanism with attention which outperforms more common recurrent architectures. Our model is evaluated against benchmark datasets and comparable methods and achieves high performance in challenging regimes of non-traditional OCR. |
Tasks | Optical Character Recognition |
Published | 2019-06-21 |
URL | https://arxiv.org/abs/1906.09266v1 |
https://arxiv.org/pdf/1906.09266v1.pdf | |
PWC | https://paperswithcode.com/paper/a-multitask-network-for-localization-and |
Repo | |
Framework | |
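A minimal sketch of the shared-representation, three-head layout described above, assuming RoI features have already been pooled from a backbone plus FPN. The head widths and the flattened recognition head are simplifications; the paper's recognition head is a convolutional attention decoder and its pooling is dynamic per RoI.

```python
import torch.nn as nn

class TextMultiTaskHeads(nn.Module):
    """Three heads over a shared RoI representation: localization, classification, recognition."""

    def __init__(self, feat_dim=256, num_classes=2, vocab_size=100, max_len=32):
        super().__init__()
        self.box_head = nn.Linear(feat_dim, 4)                      # text localization (box regression)
        self.cls_head = nn.Linear(feat_dim, num_classes)            # text / not-text classification
        self.rec_head = nn.Linear(feat_dim, max_len * vocab_size)   # per-character recognition logits
        self.max_len, self.vocab_size = max_len, vocab_size

    def forward(self, roi_feats):                                   # roi_feats: (R, feat_dim)
        boxes = self.box_head(roi_feats)
        labels = self.cls_head(roi_feats)
        chars = self.rec_head(roi_feats).view(-1, self.max_len, self.vocab_size)
        return boxes, labels, chars
```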
RBCN: Rectified Binary Convolutional Networks for Enhancing the Performance of 1-bit DCNNs
Title | RBCN: Rectified Binary Convolutional Networks for Enhancing the Performance of 1-bit DCNNs |
Authors | Chunlei Liu, Wenrui Ding, Xin Xia, Yuan Hu, Baochang Zhang, Jianzhuang Liu, Bohan Zhuang, Guodong Guo |
Abstract | Binarized convolutional neural networks (BCNNs) are widely used to improve the memory and computation efficiency of deep convolutional neural networks (DCNNs) for mobile and AI-chip-based applications. However, current BCNNs are not able to fully exploit their corresponding full-precision models, causing a significant performance gap between them. In this paper, we propose rectified binary convolutional networks (RBCNs), towards optimized BCNNs, by combining full-precision kernels and feature maps to rectify the binarization process in a unified framework. In particular, we use a GAN to train the 1-bit binary network with the guidance of its corresponding full-precision model, which significantly improves the performance of BCNNs. The rectified convolutional layers are generic and flexible, and can be easily incorporated into existing DCNNs such as WideResNets and ResNets. Extensive experiments demonstrate the superior performance of the proposed RBCNs over state-of-the-art BCNNs. In particular, our method shows strong generalization on the object tracking task. |
Tasks | Object Tracking |
Published | 2019-08-21 |
URL | https://arxiv.org/abs/1908.07748v2 |
https://arxiv.org/pdf/1908.07748v2.pdf | |
PWC | https://paperswithcode.com/paper/190807748 |
Repo | |
Framework | |
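A hedged sketch of the GAN-based rectification described above: a discriminator tries to tell full-precision feature maps from the 1-bit network's feature maps, and the binary network is trained on the task loss plus an adversarial term. Both networks are assumed to return `(logits, features)`; the exact losses and rectified layers in the paper differ.

```python
import torch
import torch.nn.functional as F

def rbcn_step(binary_net, full_net, discriminator, images, labels, opt_g, opt_d, lam=0.1):
    """One illustrative training step guiding a 1-bit network with its full-precision counterpart."""
    logits_b, feat_b = binary_net(images)
    with torch.no_grad():
        _, feat_fp = full_net(images)

    # Discriminator: real = full-precision features, fake = binary features.
    d_real = discriminator(feat_fp)
    d_fake = discriminator(feat_b.detach())
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Binary network: task loss + fooling the discriminator.
    d_fake = discriminator(feat_b)
    loss_g = F.cross_entropy(logits_b, labels) + \
             lam * F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```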
Unseen Face Presentation Attack Detection Using Class-Specific Sparse One-Class Multiple Kernel Fusion Regression
Title | Unseen Face Presentation Attack Detection Using Class-Specific Sparse One-Class Multiple Kernel Fusion Regression |
Authors | Shervin Rahimzadeh Arashloo |
Abstract | The paper addresses face presentation attack detection in the challenging conditions of an unseen attack scenario where the system is exposed to novel presentation attacks that were not present in the training step. For this purpose, a pure one-class face presentation attack detection approach based on kernel regression is developed which only utilises bona fide (genuine) samples for training. In the context of the proposed approach, a number of innovations, including multiple kernel fusion, client-specific modelling, sparse regularisation and probabilistic modelling of score distributions are introduced to improve the efficacy of the method. The results of experimental evaluations conducted on the OULU-NPU, Replay-Mobile, Replay-Attack and MSU-MFSD datasets illustrate that the proposed method compares very favourably with other methods operating in an unseen attack detection scenario while achieving very competitive performance to multi-class methods (benefiting from presentation attack data for training) despite using only bona fide samples for training. |
Tasks | Face Presentation Attack Detection |
Published | 2019-12-31 |
URL | https://arxiv.org/abs/1912.13276v1 |
https://arxiv.org/pdf/1912.13276v1.pdf | |
PWC | https://paperswithcode.com/paper/unseen-face-presentation-attack-detection |
Repo | |
Framework | |
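A minimal sketch of the one-class kernel-regression idea described above: bona fide samples are regressed onto a constant target using kernel ridge regression, with a plain sum of kernels standing in for the paper's learned, client-specific sparse multiple-kernel fusion (the fusion weights, sparsity and probabilistic score modelling are not reproduced here).

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

KERNELS = (rbf_kernel, polynomial_kernel)   # stand-in for the fused kernel set

def fit_one_class_krr(X_bona_fide, reg=1e-3):
    """Regress bona fide samples onto the constant target 1 with an unweighted kernel sum."""
    K = sum(k(X_bona_fide, X_bona_fide) for k in KERNELS)
    alpha = np.linalg.solve(K + reg * np.eye(K.shape[0]), np.ones(K.shape[0]))
    return alpha

def score(X_test, X_bona_fide, alpha):
    """Higher scores indicate bona fide; presentation attacks are expected to score lower."""
    K_test = sum(k(X_test, X_bona_fide) for k in KERNELS)
    return K_test @ alpha
```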
Efficient Hardware Implementation of Incremental Learning and Inference on Chip
Title | Efficient Hardware Implementation of Incremental Learning and Inference on Chip |
Authors | Ghouthi Boukli Hacene, Vincent Gripon, Nicolas Farrugia, Matthieu Arzel, Michel Jezequel |
Abstract | In this paper, we tackle the problem of incrementally learning a classifier, one example at a time, directly on chip. To this end, we propose an efficient hardware implementation of a recently introduced incremental learning procedure that achieves state-of-the-art performance by combining transfer learning with majority votes and quantization techniques. The proposed design is able to accommodate both new examples and new classes directly on the chip. We detail the hardware implementation of the method (on an FPGA target) and show that it requires limited resources while providing a significant acceleration compared to using a CPU. |
Tasks | Quantization, Transfer Learning |
Published | 2019-11-18 |
URL | https://arxiv.org/abs/1911.07847v1 |
https://arxiv.org/pdf/1911.07847v1.pdf | |
PWC | https://paperswithcode.com/paper/efficient-hardware-implementation-of |
Repo | |
Framework | |
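A software-level sketch of the kind of procedure the abstract describes (transfer learning plus quantization plus majority votes, updated one example at a time). The anchor codebook, the split into sub-vectors, and the vote tables are assumptions made for illustration; the paper's contribution is the FPGA implementation rather than this algorithmic outline.

```python
import numpy as np

class IncrementalMajorityVote:
    """Features from a frozen pretrained extractor are split into sub-vectors, each
    quantized to its nearest anchor, and (anchor, class) votes are accumulated."""

    def __init__(self, anchors, num_subvectors, num_classes):
        self.anchors = anchors          # (A, d) anchor sub-vectors, d = feature_dim / num_subvectors
        self.P, self.C = num_subvectors, num_classes
        self.votes = np.zeros((num_subvectors, len(anchors), num_classes), dtype=np.int32)

    def _quantize(self, feature):
        subs = np.split(feature, self.P)   # P equal-length sub-vectors
        return [int(np.argmin(np.linalg.norm(self.anchors - s, axis=1))) for s in subs]

    def learn_one(self, feature, label):   # incremental: one example at a time
        for p, a in enumerate(self._quantize(feature)):
            self.votes[p, a, label] += 1

    def predict(self, feature):
        tally = np.zeros(self.C, dtype=np.int64)
        for p, a in enumerate(self._quantize(feature)):
            tally += self.votes[p, a]
        return int(np.argmax(tally))       # majority vote across sub-vectors
```

Because learning an example only increments a few counters, both new examples and new classes can be added on the fly, which is what makes such a scheme amenable to on-chip implementation.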
Import2vec - Learning Embeddings for Software Libraries
Title | Import2vec - Learning Embeddings for Software Libraries |
Authors | Bart Theeten, Frederik Vandeputte, Tom Van Cutsem |
Abstract | We consider the problem of developing suitable learning representations (embeddings) for library packages that capture semantic similarity among libraries. Such representations are known to improve the performance of downstream learning tasks (e.g. classification) or applications such as contextual search and analogical reasoning. We apply word embedding techniques from natural language processing (NLP) to train embeddings for library packages (“library vectors”). Library vectors represent libraries by their similar contexts of use, as determined by import statements present in source code. Experimental results obtained from training such embeddings on three large open source software corpora reveal that library vectors capture semantically meaningful relationships among software libraries, such as the relationship between frameworks and their plug-ins, and between libraries commonly used together within ecosystems such as big data infrastructure projects (in Java), front-end and back-end web development frameworks (in JavaScript), and data science toolkits (in Python). |
Tasks | Semantic Similarity, Semantic Textual Similarity |
Published | 2019-03-27 |
URL | http://arxiv.org/abs/1904.03990v1 |
http://arxiv.org/pdf/1904.03990v1.pdf | |
PWC | https://paperswithcode.com/paper/190403990 |
Repo | |
Framework | |
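A minimal sketch of the training setup described above, treating the set of libraries imported by one source file as a "sentence" and learning skip-gram embeddings over it. The toy corpus and hyperparameters are placeholders, and the gensim >= 4.0 API is assumed.

```python
from gensim.models import Word2Vec

# Each "sentence" is the list of libraries imported together in one source file,
# e.g. parsed from Java imports, JavaScript require()/import, or Python imports.
import_corpus = [
    ["org.apache.spark", "org.apache.hadoop", "com.fasterxml.jackson"],
    ["react", "react-dom", "redux"],
    ["numpy", "pandas", "scikit-learn"],
]

# Skip-gram embeddings over co-imported libraries; a large window treats all
# imports in a file as mutual context.
model = Word2Vec(import_corpus, vector_size=100, window=50, min_count=1, sg=1)

# Libraries used in similar contexts end up close in the embedding space.
print(model.wv.most_similar("react"))
```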
Multi Target Tracking by Learning from Generalized Graph Differences
Title | Multi Target Tracking by Learning from Generalized Graph Differences |
Authors | Håkan Ardö, Mikael Nilsson |
Abstract | Formulating the multi-object tracking problem as a network flow optimization problem is a popular choice. In this paper, an efficient way of learning the weights of such a network is presented. It separates the problem into an embedding of feasible solutions into a one-dimensional feature space and an optimization problem. The embedding can be learned using standard SGD-type optimization without relying on additional optimizations within each step. Training data is produced by applying small perturbations to ground-truth tracks and representing them using generalized graph differences, an efficient representation of the difference between two graphs introduced here. The proposed method is evaluated on DukeMTMCT with competitive results. |
Tasks | Multi-Object Tracking, Object Tracking |
Published | 2019-08-19 |
URL | https://arxiv.org/abs/1908.06646v1 |
https://arxiv.org/pdf/1908.06646v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-target-tracking-by-learning-from |
Repo | |
Framework | |
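A hedged sketch of the learning setup described above: a feasible solution is embedded into a one-dimensional feature space as the sum of learned edge scores, and ground-truth tracks are trained to outscore small perturbations of themselves. With additive scores, only the edges in which the two solutions differ (the generalized graph difference) contribute to the gradient. Details are assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    """Map a tracking solution (its selected edges' features) to a single scalar."""
    def __init__(self, edge_feat_dim, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(edge_feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, edge_feats):          # edge_feats: (E, edge_feat_dim)
        return self.mlp(edge_feats).sum()   # solution score = sum of edge scores

def graph_difference_loss(scorer, gt_edges, perturbed_edges, margin=1.0):
    """Margin loss: the ground-truth solution should outscore its perturbation."""
    return torch.relu(margin - (scorer(gt_edges) - scorer(perturbed_edges)))
```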
Empirical Evaluations of Seed Set Selection Strategies for Predictive Coding
Title | Empirical Evaluations of Seed Set Selection Strategies for Predictive Coding |
Authors | Christian J. Mahoney, Nathaniel Huber-Fliflet, Katie Jensen, Haozhen Zhao, Robert Neary, Shi Ye |
Abstract | Training documents have a significant impact on the performance of predictive models in the legal domain. Yet, there is limited research that explores the effectiveness of the training document selection strategy - in particular, the strategy used to select the seed set, or the set of documents an attorney reviews first to establish an initial model. Since there is limited research on this important component of predictive coding, the authors of this paper set out to identify strategies that consistently perform well. Our research demonstrated that the seed set selection strategy can have a significant impact on the precision of a predictive model. Enabling attorneys with the results of this study will allow them to initiate the most effective predictive modeling process to comb through the terabytes of data typically present in modern litigation. This study used documents from four actual legal cases to evaluate eight different seed set selection strategies. Attorneys can use the results contained within this paper to enhance their approach to predictive coding. |
Tasks | |
Published | 2019-03-21 |
URL | http://arxiv.org/abs/1903.08816v1 |
http://arxiv.org/pdf/1903.08816v1.pdf | |
PWC | https://paperswithcode.com/paper/empirical-evaluations-of-seed-set-selection |
Repo | |
Framework | |