October 19, 2019

2964 words 14 mins read

Paper Group ANR 294

A Minimalist Approach to Type-Agnostic Detection of Quadrics in Point Clouds

Title A Minimalist Approach to Type-Agnostic Detection of Quadrics in Point Clouds
Authors Tolga Birdal, Benjamin Busam, Nassir Navab, Slobodan Ilic, Peter Sturm
Abstract This paper proposes a segmentation-free, automatic and efficient procedure to detect general geometric quadric forms in point clouds, where clutter and occlusions are inevitable. Our everyday world is dominated by man-made objects which are designed using 3D primitives (such as planes, cones, spheres, cylinders, etc.). These objects are also omnipresent in industrial environments. This gives rise to the possibility of abstracting 3D scenes through primitives, thereby positioning these geometric forms as an integral part of perception and high-level 3D scene understanding. In contrast to the state of the art, where a tailored algorithm treats each primitive type separately, we propose to encapsulate all types in a single robust detection procedure. At the center of our approach lies a closed-form 3D quadric fit, operating in both primal and dual spaces and requiring as few as 4 oriented points. Around this fit, we design a novel, local null-space voting strategy to reduce the 4-point case to 3. Voting is coupled with the well-known RANSAC and makes our algorithm orders of magnitude faster than its conventional counterparts. This is the first method capable of performing generic cross-type multi-object primitive detection in difficult scenes. Results on synthetic and real datasets support the validity of our method. (A toy algebraic quadric fit with a RANSAC loop is sketched after this entry.)
Tasks Scene Understanding
Published 2018-03-19
URL http://arxiv.org/abs/1803.07191v1
PDF http://arxiv.org/pdf/1803.07191v1.pdf
PWC https://paperswithcode.com/paper/a-minimalist-approach-to-type-agnostic
Repo
Framework
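For concreteness, here is a minimal sketch of the generic approach the paper improves upon: a least-squares algebraic quadric fit through 9 unoriented points inside a plain RANSAC loop. This is not the authors' primal/dual 4-oriented-point solver; the tolerance and iteration counts are illustrative.

```python
# Toy quadric detection: algebraic 9-point fit + vanilla RANSAC.
import numpy as np

def design_matrix(pts):
    """Rows of [x^2, y^2, z^2, xy, xz, yz, x, y, z, 1] for each 3D point."""
    x, y, z = pts.T
    return np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z,
                            x, y, z, np.ones(len(pts))])

def fit_quadric(pts):
    """Least-squares quadric coefficients (unit norm) through pts, n >= 9."""
    _, _, vt = np.linalg.svd(design_matrix(pts))
    return vt[-1]                       # null-space direction = best fit

def ransac_quadric(pts, iters=500, tol=1e-2, rng=np.random.default_rng(0)):
    best, best_inliers = None, 0
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 9, replace=False)]
        coef = fit_quadric(sample)
        inliers = int((np.abs(design_matrix(pts) @ coef) < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = coef, inliers
    return best, best_inliers
```

The paper's contribution is precisely to shrink this 9-point minimal sample to 4 oriented points (3 with null-space voting), which drastically reduces the number of RANSAC iterations required.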

Deep Theory of Functional Connections: A New Method for Estimating the Solutions of PDEs

Title Deep Theory of Functional Connections: A New Method for Estimating the Solutions of PDEs
Authors Carl Leake
Abstract This article presents a new methodology called deep Theory of Functional Connections (TFC) that estimates the solutions of partial differential equations (PDEs) by combining neural networks with TFC. TFC is used to transform PDEs with boundary conditions into unconstrained optimization problems by embedding the boundary conditions into a “constrained expression.” In this work, a neural network is chosen as the free function and used to solve the now unconstrained optimization problem. The loss function is taken as the square of the residual of the PDE, and the neural network is trained in an unsupervised manner to minimize it. This methodology differs in two major ways from popular methods used to estimate the solutions of PDEs. First, it does not need to discretize the domain into a grid; rather, it randomly samples points from the domain during the training phase. Second, after training, it yields a closed-form, analytical, differentiable approximation of the solution throughout the entire training domain. In contrast, other popular methods require interpolation if the estimated solution is desired at points that do not lie on the discretized grid. The deep TFC method is demonstrated on four problems with a variety of boundary conditions. (A one-dimensional toy version is sketched after this entry.)
Tasks
Published 2018-12-20
URL https://arxiv.org/abs/1812.08625v3
PDF https://arxiv.org/pdf/1812.08625v3.pdf
PWC https://paperswithcode.com/paper/deep-toc-a-new-method-for-estimating-the
Repo
Framework
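To make the mechanics concrete, here is a minimal deep-TFC-style sketch on an assumed toy problem, u″(x) = −π²·sin(πx) with u(0) = u(1) = 0; the network size, optimizer, and sample counts are all illustrative, not the paper's settings.

```python
import math
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def u(x):
    # Constrained expression: x*(1-x)*net(x) satisfies u(0)=u(1)=0
    # for ANY free function net, so the boundary conditions are exact.
    return x * (1 - x) * net(x)

for step in range(3000):
    x = torch.rand(128, 1, requires_grad=True)   # random collocation points
    ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    loss = ((uxx + math.pi**2 * torch.sin(math.pi * x))**2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# u(x) now approximates sin(pi*x) in closed, differentiable form.
```

The key TFC move is the constrained expression: since the boundary conditions hold by construction, the loss only has to drive the PDE residual to zero at the randomly sampled points, with no grid and no interpolation afterwards.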

Neural Ranking Models for Temporal Dependency Structure Parsing

Title Neural Ranking Models for Temporal Dependency Structure Parsing
Authors Yuchen Zhang, Nianwen Xue
Abstract We design and build the first neural temporal dependency parser. It utilizes a neural ranking model with minimal feature engineering, and parses time expressions and events in a text into a temporal dependency tree structure. We evaluate our parser on two domains: news reports and narrative stories. In a parsing-only evaluation setup where gold time expressions and events are provided, our parser reaches 0.81 and 0.70 F-score on unlabeled and labeled parsing, respectively, a result that is very competitive with alternative approaches. In an end-to-end evaluation setup where time expressions and events are automatically recognized, our parser beats two strong baselines on both data domains. Our experimental results and discussions shed light on the nature of temporal dependency structures in different domains and provide insights that we believe will be valuable to future research in this area. (A schematic ranking step is sketched after this entry.)
Tasks Feature Engineering
Published 2018-09-02
URL http://arxiv.org/abs/1809.00370v1
PDF http://arxiv.org/pdf/1809.00370v1.pdf
PWC https://paperswithcode.com/paper/neural-ranking-models-for-temporal-dependency
Repo
Framework
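A schematic of the ranking step, under assumptions: the dimensions and the scorer architecture below are illustrative, not the authors' exact model. Each event or time expression picks the highest-scoring candidate parent.

```python
import torch

dim = 64
scorer = torch.nn.Sequential(torch.nn.Linear(2 * dim, 64), torch.nn.ReLU(),
                             torch.nn.Linear(64, 1))

def attach(child_vec, candidate_vecs):
    """child_vec: (dim,) encoding of an event/time expression.
    candidate_vecs: (k, dim) encodings of its k candidate parents
    (earlier events, time expressions, a root node).
    Returns the index of the best-scoring parent."""
    pairs = torch.cat([child_vec.expand(len(candidate_vecs), -1),
                       candidate_vecs], dim=-1)
    return scorer(pairs).squeeze(-1).argmax().item()

# Training would minimize a ranking/margin loss between the gold parent's
# score and the best-scoring wrong candidate.
```

Attaching each node to its argmax parent greedily assembles the temporal dependency tree.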

Time Aggregation and Model Interpretation for Deep Multivariate Longitudinal Patient Outcome Forecasting Systems in Chronic Ambulatory Care

Title Time Aggregation and Model Interpretation for Deep Multivariate Longitudinal Patient Outcome Forecasting Systems in Chronic Ambulatory Care
Authors Beau Norgeot, Dmytro Lituiev, Benjamin S. Glicksberg, Atul J. Butte
Abstract Clinical data for ambulatory care, which accounts for 90% of the nation's healthcare spending, is characterized by relatively small longitudinal sample sizes, unequal spacing between visits for each patient, and unequal numbers of data points collected across patients. While deep learning has become state-of-the-art for sequence modeling, it is unknown which methods of time aggregation may be best suited for these challenging temporal use cases. Additionally, deep models are often considered uninterpretable by physicians, which may prevent clinical adoption even of well-performing models. We show that time-distributed dense layers combined with GRUs produce the most generalizable models. Furthermore, we provide a framework for the clinical interpretation of the models. (A sketch of this architecture follows the entry.)
Tasks
Published 2018-11-30
URL http://arxiv.org/abs/1811.12589v1
PDF http://arxiv.org/pdf/1811.12589v1.pdf
PWC https://paperswithcode.com/paper/time-aggregation-and-model-interpretation-for
Repo
Framework
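A sketch of the best-performing combination reported above, as it might look in PyTorch; the layer sizes and the regression head are placeholders, not the paper's configuration.

```python
import torch

class VisitForecaster(torch.nn.Module):
    def __init__(self, n_features, hidden=64, n_outcomes=1):
        super().__init__()
        # nn.Linear applied to a (batch, time, features) tensor acts
        # independently at every timestep: a "time-distributed dense" layer.
        self.td_dense = torch.nn.Linear(n_features, hidden)
        self.gru = torch.nn.GRU(hidden, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, n_outcomes)

    def forward(self, visits):             # visits: (batch, n_visits, n_features)
        h = torch.relu(self.td_dense(visits))
        _, last = self.gru(h)               # final hidden state: (1, batch, hidden)
        return self.head(last.squeeze(0))   # outcome forecast per patient

model = VisitForecaster(n_features=40)
pred = model(torch.randn(8, 12, 40))        # 8 patients, 12 visits each
```

The per-visit dense layer learns a representation of each encounter before the GRU aggregates across unevenly informative visit sequences.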

Incomplete Contracting and AI Alignment

Title Incomplete Contracting and AI Alignment
Authors Dylan Hadfield-Menell, Gillian Hadfield
Abstract We suggest that the analysis of incomplete contracting developed by law and economics researchers can provide a useful framework for understanding the AI alignment problem and help to generate a systematic approach to finding solutions. We first provide an overview of the incomplete contracting literature and explore parallels between this work and the problem of AI alignment. As we emphasize, misalignment between principal and agent is a core focus of economic analysis. We highlight some technical results from the economics literature on incomplete contracts that may provide insights for AI alignment researchers. Our core contribution, however, is to bring to bear an insight that economists have been urged to absorb from legal scholars and other behavioral scientists: the fact that human contracting is supported by substantial amounts of external structure, such as generally available institutions (culture, law) that can supply implied terms to fill the gaps in incomplete contracts. We propose a research agenda for AI alignment work that focuses on the problem of how to build AI that can replicate the human cognitive processes that connect individual incomplete contracts with this supporting external structure.
Tasks
Published 2018-04-12
URL http://arxiv.org/abs/1804.04268v1
PDF http://arxiv.org/pdf/1804.04268v1.pdf
PWC https://paperswithcode.com/paper/incomplete-contracting-and-ai-alignment
Repo
Framework

Developing Synthesis Flows Without Human Knowledge

Title Developing Synthesis Flows Without Human Knowledge
Authors Cunxi Yu, Houping Xiao, Giovanni De Micheli
Abstract Design flows are the explicit combinations of design transformations, primarily involved in synthesis, placement and routing processes, used to accomplish the design of Integrated Circuits (ICs) and Systems-on-Chip (SoCs). Traditionally, flows are developed based on the knowledge of experts. However, due to the large search space of design flows and the increasing design complexity, developing Intellectual Property (IP)-specific synthesis flows that provide high Quality of Result (QoR) is extremely challenging. This work presents a fully autonomous framework that artificially produces design-specific synthesis flows without human guidance or baseline flows, using a Convolutional Neural Network (CNN). The demonstrations are made by successfully designing logic synthesis flows for three large-scale designs. (A speculative encoding sketch follows this entry.)
Tasks
Published 2018-04-16
URL http://arxiv.org/abs/1804.05714v3
PDF http://arxiv.org/pdf/1804.05714v3.pdf
PWC https://paperswithcode.com/paper/developing-synthesis-flows-without-human
Repo
Framework
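A heavily hedged sketch of one plausible formulation; the abstract does not give the paper's exact encoding or architecture. Here a synthesis flow, i.e. an ordered list of transformation IDs, is one-hot encoded as a (length × n_transforms) image and a small CNN classifies its QoR.

```python
import torch

N_TRANSFORMS, FLOW_LEN = 12, 20             # hypothetical sizes

cnn = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, kernel_size=3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 16, kernel_size=3, padding=1), torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(16 * FLOW_LEN * N_TRANSFORMS, 2),  # high vs. low QoR
)

def encode(flow_ids):
    """flow_ids: list of FLOW_LEN transformation indices -> one-hot image."""
    img = torch.zeros(1, 1, FLOW_LEN, N_TRANSFORMS)
    for pos, t in enumerate(flow_ids):
        img[0, 0, pos, t] = 1.0
    return img

logits = cnn(encode([i % N_TRANSFORMS for i in range(FLOW_LEN)]))
```

A classifier of this shape could then guide a search over candidate flows by scoring them without running full synthesis.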

Advanced Image Processing for Astronomical Images

Title Advanced Image Processing for Astronomical Images
Authors Diganta Misra, Sparsha Mishra, Bhargav Appasani
Abstract Image processing in astronomy is a major field of research and involves many techniques aimed at improving the analysis of the properties of celestial objects or at obtaining preliminary inferences from image data. In this paper, we provide a comprehensive case study of advanced image processing techniques applied to astronomical galaxy images for improved, more accurate, and faster analysis.
Tasks
Published 2018-12-23
URL http://arxiv.org/abs/1812.09702v1
PDF http://arxiv.org/pdf/1812.09702v1.pdf
PWC https://paperswithcode.com/paper/advanced-image-processing-for-astronomical
Repo
Framework

Improved Algorithms for Collaborative PAC Learning

Title Improved Algorithms for Collaborative PAC Learning
Authors Huy L. Nguyen, Lydia Zakynthinou
Abstract We study a recent model of collaborative PAC learning where $k$ players with $k$ different tasks collaborate to learn a single classifier that works for all tasks. Previous work showed that, when there is a classifier that has very small error on all tasks, there is a collaborative algorithm that finds a single classifier for all tasks and has $O((\ln (k))^2)$ times the worst-case sample complexity for learning a single task. In this work, we design new algorithms for both the realizable and the non-realizable setting, having sample complexity only $O(\ln (k))$ times the worst-case sample complexity for learning a single task. The sample complexity upper bounds of our algorithms match previous lower bounds and in some range of parameters are even better than previous algorithms that are allowed to output different classifiers for different tasks. (A quick numeric comparison of the two overhead factors follows this entry.)
Tasks
Published 2018-05-22
URL http://arxiv.org/abs/1805.08356v2
PDF http://arxiv.org/pdf/1805.08356v2.pdf
PWC https://paperswithcode.com/paper/improved-algorithms-for-collaborative-pac
Repo
Framework
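To see what the improvement from $O((\ln k)^2)$ to $O(\ln k)$ buys, here is a quick comparison of the two overhead factors over single-task sample complexity (constants omitted):

```python
import math

for k in (10, 100, 1000):
    print(k, round(math.log(k), 2), round(math.log(k) ** 2, 2))
# k=1000: roughly a 6.9x overhead instead of roughly 47.7x.
```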

Performance assessment of the deep learning technologies in grading glaucoma severity

Title Performance assessment of the deep learning technologies in grading glaucoma severity
Authors Yi Zhen, Lei Wang, Han Liu, Jian Zhang, Jiantao Pu
Abstract Objective: To validate and compare the performance of eight available deep learning architectures in grading the severity of glaucoma based on color fundus images. Materials and Methods: We retrospectively collected a dataset of 5978 fundus images and their glaucoma severities were annotated by the consensus of two experienced ophthalmologists. We preprocessed the images to generate global and local regions of interest (ROIs), namely the global field-of-view images and the local disc region images. We then divided the generated images into three independent sub-groups for training, validation, and testing purposes. With the datasets, eight convolutional neural networks (CNNs) (i.e., VGG16, VGG19, ResNet, DenseNet, InceptionV3, InceptionResNet, Xception, and NASNetMobile) were trained separately to grade glaucoma severity, and validated quantitatively using the area under the receiver operating characteristic (ROC) curve and the quadratic kappa score. Results: The CNNs, except VGG16 and VGG19, achieved average kappa scores of 80.36% and 78.22% when trained from scratch on global and local ROIs, and 85.29% and 82.72% when fine-tuned using the pre-trained weights, respectively. VGG16 and VGG19 achieved reasonable accuracy when trained from scratch, but they failed when using pre-trained weights for global and local ROIs. Among these CNNs, the DenseNet had the highest classification accuracy (i.e., 75.50%) based on pre-trained weights when using global ROIs, as compared to 65.50% when using local ROIs. Conclusion: The experiments demonstrated the feasibility of deep learning technology in grading glaucoma severity. In particular, global field-of-view images contain relatively richer information that may be critical for glaucoma assessment, suggesting that we should use the entire field-of-view of a fundus image for training a deep learning network. (A snippet computing the quadratic kappa score follows this entry.)
Tasks
Published 2018-10-31
URL http://arxiv.org/abs/1810.13376v1
PDF http://arxiv.org/pdf/1810.13376v1.pdf
PWC https://paperswithcode.com/paper/performance-assessment-of-the-deep-learning
Repo
Framework
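A small snippet showing the quadratic kappa metric referenced above (via scikit-learn), plus an assumed version of the fine-tuning setup: ImageNet weights with the classifier head replaced. The grades and the number of severity classes are placeholders.

```python
from sklearn.metrics import cohen_kappa_score
import torch
import torchvision

# Quadratic weighted kappa on hypothetical severity grades (0..2).
y_true = [0, 0, 1, 1, 2, 2, 1, 0]
y_pred = [0, 1, 1, 1, 2, 1, 1, 0]
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))

# Fine-tuning setup (assumed): load ImageNet weights, then swap the
# classifier head for one output per severity grade.
model = torchvision.models.densenet121(weights="IMAGENET1K_V1")
model.classifier = torch.nn.Linear(model.classifier.in_features, 3)
```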

Contingency Training

Title Contingency Training
Authors Danilo Vasconcellos Vargas, Hirotaka Takano, Junichi Murata
Abstract When applied to high-dimensional datasets, feature selection algorithms might still leave dozens of irrelevant variables in the dataset. Therefore, even after feature selection has been applied, classifiers must be prepared for the presence of irrelevant variables. This paper investigates a new training method, called Contingency Training, which increases accuracy as well as robustness against irrelevant attributes. Contingency training is classifier independent. By subsampling and removing information from each sample, it creates a set of constraints. These constraints help the method automatically find proper importance weights for the dataset’s features. Experiments are conducted with contingency training applied to neural networks over traditional datasets as well as datasets with additional irrelevant variables. In all of the tests, contingency training surpassed unmodified training on datasets with irrelevant variables and even slightly outperformed it when few or no irrelevant variables were present. (A speculative sketch of the subsampling step follows this entry.)
Tasks Feature Selection
Published 2018-11-20
URL http://arxiv.org/abs/1811.08214v1
PDF http://arxiv.org/pdf/1811.08214v1.pdf
PWC https://paperswithcode.com/paper/contingency-training
Repo
Framework
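A speculative reading of “subsampling and removing information from each sample” (the abstract does not spell out the exact constraint construction): train on copies of each sample with random feature subsets zeroed out, which pushes the model to weight features robustly.

```python
import numpy as np

def contingency_batch(X, n_copies=4, keep=0.7, rng=np.random.default_rng(0)):
    """Return n_copies masked copies of X, each with ~30% of feature
    values zeroed out at random (fractions are illustrative)."""
    copies = []
    for _ in range(n_copies):
        mask = rng.random(X.shape) < keep
        copies.append(X * mask)
    return np.vstack(copies)

X = np.random.randn(32, 10)
X_aug = contingency_batch(X)   # (128, 10); labels would be tiled to match
```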

Uncertain Trees: Dealing with Uncertain Inputs in Regression Trees

Title Uncertain Trees: Dealing with Uncertain Inputs in Regression Trees
Authors Myriam Tami, Marianne Clausel, Emilie Devijver, Adrien Dulac, Eric Gaussier, Stefan Janaqi, Meriam Chebre
Abstract Tree-based ensemble methods, such as Random Forests and Gradient Boosted Trees, have been successfully used for regression in many applications and research studies. Furthermore, these methods have been extended to deal with uncertainty in the output variable, using, for example, a quantile loss in Random Forests (Meinshausen, 2006). To the best of our knowledge, no extension has yet been provided for dealing with uncertainties in the input variables, even though such uncertainties are common in practical situations. We propose such an extension by showing how standard regression trees optimizing a quadratic loss can be adapted and learned while taking into account the uncertainties in the inputs. By doing so, one no longer assumes that an observation lies in a single region of the regression tree, but rather that it belongs to each region with a certain probability. Experiments conducted on several data sets illustrate the good behavior of the proposed extension. (A toy soft-split sketch follows this entry.)
Tasks
Published 2018-10-27
URL http://arxiv.org/abs/1810.11698v2
PDF http://arxiv.org/pdf/1810.11698v2.pdf
PWC https://paperswithcode.com/paper/uncertain-trees-dealing-with-uncertain-inputs
Repo
Framework
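A toy version of the soft-membership idea on a single 1D split (the paper handles full trees): with input $x \sim \mathcal{N}(\mu, \sigma^2)$, the point goes left of a split at threshold $t$ with probability $\Phi((t-\mu)/\sigma)$, and the prediction is the probability-weighted average of the leaf values.

```python
from scipy.stats import norm

def soft_stump_predict(mu, sigma, t, left_value, right_value):
    """Expected prediction of a 1-split regression stump under
    Gaussian input uncertainty x ~ N(mu, sigma^2)."""
    p_left = norm.cdf((t - mu) / sigma)
    return p_left * left_value + (1 - p_left) * right_value

print(soft_stump_predict(mu=1.9, sigma=0.5, t=2.0,
                         left_value=10.0, right_value=20.0))
# As sigma -> 0 this reduces to the usual hard split.
```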

Approximate Newton-based statistical inference using only stochastic gradients

Title Approximate Newton-based statistical inference using only stochastic gradients
Authors Tianyang Li, Anastasios Kyrillidis, Liu Liu, Constantine Caramanis
Abstract We present a novel statistical inference framework for convex empirical risk minimization, using approximate stochastic Newton steps. The proposed algorithm is based on the notion of finite differences and allows the approximation of a Hessian-vector product from first-order information. In theory, our method efficiently computes the statistical error covariance in $M$-estimation, both for unregularized convex learning problems and high-dimensional LASSO regression, without using exact second order information, or resampling the entire data set. We also present a stochastic gradient sampling scheme for statistical inference in non-i.i.d. time series analysis, where we sample contiguous blocks of indices. In practice, we demonstrate the effectiveness of our framework on large-scale machine learning problems that go even beyond convexity: as a highlight, our work can be used to detect certain adversarial attacks on neural networks. (The core finite-difference identity is sketched after this entry.)
Tasks Time Series, Time Series Analysis
Published 2018-05-23
URL http://arxiv.org/abs/1805.08920v2
PDF http://arxiv.org/pdf/1805.08920v2.pdf
PWC https://paperswithcode.com/paper/approximate-newton-based-statistical
Repo
Framework
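The finite-difference identity at the heart of the approach: a Hessian-vector product from two gradient evaluations, $Hv \approx (g(\theta + \epsilon v) - g(\theta - \epsilon v)) / (2\epsilon)$. A toy quadratic loss makes the exact answer checkable.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # Hessian of 0.5 * x^T A x
grad = lambda x: A @ x                    # its gradient

def hvp(grad_fn, x, v, eps=1e-5):
    """Approximate Hessian-vector product from first-order information."""
    return (grad_fn(x + eps * v) - grad_fn(x - eps * v)) / (2 * eps)

x, v = np.array([1.0, -1.0]), np.array([0.5, 2.0])
print(hvp(grad, x, v), A @ v)             # the two should match
```

For a quadratic loss the finite difference is exact; in general the error is $O(\epsilon^2)$, which is what lets the method build approximate Newton steps and error covariances from stochastic gradients alone.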

Temporal Analysis of Entity Relatedness and its Evolution using Wikipedia and DBpedia

Title Temporal Analysis of Entity Relatedness and its Evolution using Wikipedia and DBpedia
Authors Narumol Prangnawarat, John P. McCrae, Conor Hayes
Abstract Many researchers have made use of the Wikipedia network for relatedness and similarity tasks. However, most approaches use only the most recent information and not historical changes in the network. We provide an analysis of entity relatedness using temporal graph-based approaches over different versions of the Wikipedia article link network and DBpedia, which is an open-source knowledge base extracted from Wikipedia. We consider creating the Wikipedia article link network as both a union and an intersection of edges over multiple time points, and present a novel variation of the Jaccard index that weights edges based on their transience. We evaluate our results against the KORE dataset, which was created in 2010, and show that using the 2010 Wikipedia article link network produces the strongest result, suggesting that semantic similarity is time sensitive. We then show that integrating multiple time frames in our methods can give better overall similarity, demonstrating that temporal evolution can have an important effect on entity relatedness. (A small sketch of the temporal constructions follows this entry.)
Tasks Semantic Similarity, Semantic Textual Similarity
Published 2018-12-12
URL http://arxiv.org/abs/1812.05001v1
PDF http://arxiv.org/pdf/1812.05001v1.pdf
PWC https://paperswithcode.com/paper/temporal-analysis-of-entity-relatedness-and
Repo
Framework
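A small sketch of the temporal constructions described above. The abstract does not give the formula for the paper's novel Jaccard variant, so the persistence weighting below (fraction of snapshots containing an edge) is an assumption.

```python
snapshots = [
    {("A", "B"), ("A", "C")},   # edges at time t1
    {("A", "B"), ("B", "C")},   # edges at time t2
    {("A", "B"), ("A", "C")},   # edges at time t3
]
union = set.union(*snapshots)
intersection = set.intersection(*snapshots)

# Weight each edge by the fraction of snapshots containing it, so
# transient edges count less than persistent ones.
weight = {e: sum(e in s for s in snapshots) / len(snapshots) for e in union}

def weighted_jaccard(n1, n2):
    """Relatedness of two nodes from their weighted neighbourhoods."""
    nb = lambda n: {v: weight[(u, v)] for (u, v) in union if u == n}
    a, b = nb(n1), nb(n2)
    keys = a.keys() | b.keys()
    inter = sum(min(a.get(k, 0), b.get(k, 0)) for k in keys)
    uni = sum(max(a.get(k, 0), b.get(k, 0)) for k in keys)
    return inter / uni if uni else 0.0

print(weighted_jaccard("A", "B"))
```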

MedSim: A Novel Semantic Similarity Measure in Bio-medical Knowledge Graphs

Title MedSim: A Novel Semantic Similarity Measure in Bio-medical Knowledge Graphs
Authors Kai Lei, Kaiqi Yuan, Qiang Zhang, Ying Shen
Abstract We present MedSim, a novel semantic SIMilarity method based on public, well-established bio-MEDical knowledge graphs (KGs) and a large-scale corpus, to study the therapeutic substitution of antibiotics. Beyond the hierarchy and corpus of the KGs, MedSim further interprets medicine characteristics by constructing multi-dimensional medicine-specific feature vectors. A dataset of 528 antibiotic pairs scored by doctors is used for evaluation, and MedSim produces statistically significant improvement over other semantic similarity methods. Furthermore, some promising applications of MedSim to drug substitution and drug abuse prevention are presented in a case study. (A minimal feature-vector similarity sketch follows this entry.)
Tasks Knowledge Graphs, Semantic Similarity, Semantic Textual Similarity
Published 2018-12-05
URL http://arxiv.org/abs/1812.01884v1
PDF http://arxiv.org/pdf/1812.01884v1.pdf
PWC https://paperswithcode.com/paper/medsim-a-novel-semantic-similarity-measure-in
Repo
Framework
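A minimal stand-in for the feature-vector component of MedSim; the actual KG-derived feature dimensions are specific to the paper, so the vectors below are entirely hypothetical.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two medicine feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical medicine-specific feature vectors (dimensions invented
# for illustration, e.g. spectrum, route, toxicity, class depth).
amoxicillin = np.array([0.9, 0.2, 0.7, 0.1])
ampicillin  = np.array([0.8, 0.3, 0.6, 0.2])
print(cosine(amoxicillin, ampicillin))
```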

Differentially Private Fair Learning

Title Differentially Private Fair Learning
Authors Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman
Abstract Motivated by settings in which predictive models may be required to be non-discriminatory with respect to certain attributes (such as race), but even collecting the sensitive attribute may be forbidden or restricted, we initiate the study of fair learning under the constraint of differential privacy. We design two learning algorithms that simultaneously promise differential privacy and equalized odds, a ‘fairness’ condition that corresponds to equalizing false positive and false negative rates across protected groups. Our first algorithm is a private implementation of the equalized odds post-processing approach of [Hardt et al., 2016]. This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of ‘disparate treatment’. Our second algorithm is a differentially private version of the oracle-efficient in-processing approach of [Agarwal et al., 2018] that can be used to find the optimal fair classifier, given access to a subroutine that can solve the original (not necessarily fair) learning problem. This algorithm is more complex but need not have access to protected group membership at test time. We identify new tradeoffs between fairness, accuracy, and privacy that emerge only when requiring all three properties, and show that these tradeoffs can be milder if group membership may be used at test time. We conclude with a brief experimental evaluation. (A sketch of one private ingredient follows this entry.)
Tasks
Published 2018-12-06
URL https://arxiv.org/abs/1812.02696v3
PDF https://arxiv.org/pdf/1812.02696v3.pdf
PWC https://paperswithcode.com/paper/differentially-private-fair-learning
Repo
Framework
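One ingredient such a private post-processing approach needs, sketched under assumptions: releasing each group's false-positive rate with the Laplace mechanism. Treating the group sizes as public, the sensitivity of a rate over n records is 1/n; the counts and epsilon below are illustrative, not from the paper.

```python
import numpy as np

def private_rate(num_fp, n, epsilon, rng=np.random.default_rng(0)):
    """epsilon-DP estimate of a false-positive rate over n individuals
    (Laplace mechanism; changing one record shifts the rate by <= 1/n)."""
    return num_fp / n + rng.laplace(scale=1.0 / (n * epsilon))

for group, (fp, n) in {"A": (40, 1000), "B": (90, 1200)}.items():
    print(group, private_rate(fp, n, epsilon=0.5))
# Equalized-odds post-processing would then mix predictions per group to
# equalize these (noisy) rates, trading off accuracy, fairness and privacy.
```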