April 2, 2020

3081 words 15 mins read

Paper Group ANR 107

FrequentNet : A New Deep Learning Baseline for Image Classification. Binary and Multitask Classification Model for Dutch Anaphora Resolution: Die/Dat Prediction. Partially Observable Games for Secure Autonomy. Exploring Bottom-up and Top-down Cues with Attentive Learning for Webly Supervised Object Detection. A Lagrangian Dual Framework for Deep Ne …

FrequentNet : A New Deep Learning Baseline for Image Classification

Title FrequentNet : A New Deep Learning Baseline for Image Classification
Authors Yifei Li, Zheng Wang, Kuangyan Song, Yiming Sun
Abstract In this paper, we generalize the idea behind the method called “PCANet” to obtain a new baseline deep learning model for image classification. Instead of using principal component vectors as the filter vectors, as in “PCANet”, we use basis vectors from discrete Fourier analysis and wavelet analysis as our filter vectors. Both variants achieve performance comparable to “PCANet” on benchmark datasets. Notably, our algorithms do not require any optimization techniques to obtain these basis vectors.
Tasks Image Classification
Published 2020-01-04
URL https://arxiv.org/abs/2001.01034v1
PDF https://arxiv.org/pdf/2001.01034v1.pdf
PWC https://paperswithcode.com/paper/frequentnet-a-new-deep-learning-baseline-for
Repo
Framework
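
The fixed-filter idea (swapping PCANet's learned PCA filters for Fourier or wavelet basis vectors) can be illustrated with a minimal sketch. The patch size, filter count, and plain 2D convolution below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import convolve2d

def dft_filter_bank(patch_size=7, num_filters=8):
    """Build fixed filters from 2D DFT basis vectors; no optimization needed.

    Each filter is the real part of the outer product of two 1D Fourier basis
    vectors, analogous to how PCANet reshapes principal components into
    convolution filters. Sizes here are arbitrary illustrative choices.
    """
    basis = np.fft.fft(np.eye(patch_size))   # rows are 1D Fourier basis vectors
    filters = []
    for u in range(patch_size):
        for v in range(patch_size):
            f = np.real(np.outer(basis[u], basis[v]))
            filters.append(f / (np.linalg.norm(f) + 1e-12))
            if len(filters) == num_filters:
                return np.stack(filters)
    return np.stack(filters)

def frequentnet_stage(image, filters):
    """One PCANet-style stage: convolve the image with every fixed filter."""
    return np.stack([convolve2d(image, f, mode="same") for f in filters])

# Toy usage on a random 28x28 "image".
feature_maps = frequentnet_stage(np.random.rand(28, 28), dft_filter_bank())
print(feature_maps.shape)  # (8, 28, 28)
```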

Binary and Multitask Classification Model for Dutch Anaphora Resolution: Die/Dat Prediction

Title Binary and Multitask Classification Model for Dutch Anaphora Resolution: Die/Dat Prediction
Authors Liesbeth Allein, Artuur Leeuwenberg, Marie-Francine Moens
Abstract The correct use of Dutch pronouns ‘die’ and ‘dat’ is a stumbling block for both native and non-native speakers of Dutch due to the multiplicity of syntactic functions and the dependency on the antecedent’s gender and number. Drawing on previous research conducted on neural context-dependent dt-mistake correction models (Heyman et al. 2018), this study constructs the first neural network model for Dutch demonstrative and relative pronoun resolution that specifically focuses on the correction and part-of-speech prediction of these two pronouns. Two separate datasets are built with sentences obtained from, respectively, the Dutch Europarl corpus (Koehn 2015) - which contains the proceedings of the European Parliament from 1996 to the present - and the SoNaR corpus (Oostdijk et al. 2013) - which contains Dutch texts from a variety of domains such as newspapers, blogs and legal texts. Firstly, a binary classification model solely predicts the correct ‘die’ or ‘dat’. The classifier with a bidirectional long short-term memory architecture achieves 84.56% accuracy. Secondly, a multitask classification model simultaneously predicts the correct ‘die’ or ‘dat’ and its part-of-speech tag. The model combining a sentence encoder and a context encoder, both with a bidirectional long short-term memory architecture, achieves 88.63% accuracy for die/dat prediction and 87.73% accuracy for part-of-speech prediction. More evenly balanced data, larger word embeddings, an extra bidirectional long short-term memory layer and integrated part-of-speech knowledge positively affect die/dat prediction performance, while a context encoder architecture raises part-of-speech prediction performance. This study shows promising results and can serve as a starting point for future research on machine learning models for Dutch anaphora resolution.
Tasks Word Embeddings
Published 2020-01-09
URL https://arxiv.org/abs/2001.02943v1
PDF https://arxiv.org/pdf/2001.02943v1.pdf
PWC https://paperswithcode.com/paper/binary-and-multitask-classification-model-for
Repo
Framework
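
A minimal sketch of the binary die/dat classifier described above: a bidirectional LSTM over the sentence with a two-class output head. The embedding size, hidden size, vocabulary, and tokenization are assumptions for illustration; the paper's multitask variant additionally uses a context encoder and a part-of-speech head.

```python
import torch
import torch.nn as nn

class DieDatClassifier(nn.Module):
    """Bidirectional LSTM sentence encoder with a binary 'die'/'dat' head."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, 2)   # logits: die vs. dat

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)              # (B, T, E)
        _, (hidden, _) = self.bilstm(embedded)            # hidden: (2, B, H)
        sentence_repr = torch.cat([hidden[0], hidden[1]], dim=-1)
        return self.classifier(sentence_repr)             # (B, 2)

# Toy usage with a hypothetical 50k-token vocabulary and a padded batch.
model = DieDatClassifier(vocab_size=50_000)
batch = torch.randint(1, 50_000, (4, 20))   # 4 sentences, 20 tokens each
logits = model(batch)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
```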

Partially Observable Games for Secure Autonomy

Title Partially Observable Games for Secure Autonomy
Authors Mohamadreza Ahmadi, Arun A. Viswanathan, Michel D. Ingham, Kymie Tan, Aaron D. Ames
Abstract Technology development efforts in autonomy and cyber-defense have been evolving independently of each other over the past decade. In this paper, we report our ongoing effort to integrate these two presently distinct areas into a single framework. To this end, we propose the two-player partially observable stochastic game formalism to capture both high-level autonomous mission planning under uncertainty and adversarial decision making subject to imperfect information. We show that synthesizing sub-optimal strategies for such games is possible under finite-memory assumptions for both the autonomous decision maker and the cyber-adversary. We then describe an experimental testbed to evaluate the efficacy of the proposed framework.
Tasks Decision Making
Published 2020-02-05
URL https://arxiv.org/abs/2002.01969v1
PDF https://arxiv.org/pdf/2002.01969v1.pdf
PWC https://paperswithcode.com/paper/partially-observable-games-for-secure
Repo
Framework
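
For concreteness, the two-player partially observable stochastic game formalism can be written down as a tuple of states, per-player actions and observations, a transition kernel, an observation function, and a reward. The sketch below is only a container for that tuple; the field names, types, and toy example are assumptions, not the paper's notation.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class POSG:
    """Two-player partially observable stochastic game (illustrative container)."""
    states: List[str]
    actions: Tuple[List[str], List[str]]        # (autonomous agent, adversary)
    observations: Tuple[List[str], List[str]]   # per-player observation sets
    # (state, a1, a2) -> distribution over next states
    transition: Dict[Tuple[str, str, str], Dict[str, float]]
    # (a1, a2, next_state) -> distribution over joint observations (o1, o2)
    observation_fn: Dict[Tuple[str, str, str], Dict[Tuple[str, str], float]]
    # (state, a1, a2) -> reward to the autonomous agent
    reward: Dict[Tuple[str, str, str], float]
    discount: float = 0.95

# Tiny example: one "secure" and one "compromised" state, binary actions.
game = POSG(
    states=["secure", "compromised"],
    actions=(["patrol", "patch"], ["idle", "attack"]),
    observations=(["alarm", "quiet"], ["seen", "unseen"]),
    transition={("secure", "patrol", "attack"): {"compromised": 0.3, "secure": 0.7}},
    observation_fn={("patrol", "attack", "compromised"): {("alarm", "seen"): 1.0}},
    reward={("secure", "patrol", "attack"): -1.0},
)
print(game.states)
```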

Exploring Bottom-up and Top-down Cues with Attentive Learning for Webly Supervised Object Detection

Title Exploring Bottom-up and Top-down Cues with Attentive Learning for Webly Supervised Object Detection
Authors Zhonghua Wu, Qingyi Tao, Guosheng Lin, Jianfei Cai
Abstract Fully supervised object detection has achieved great success in recent years. However, abundant bounding box annotations are needed to train a detector for novel classes. To reduce the human labeling effort, we propose a novel webly supervised object detection (WebSOD) method for novel classes which only requires web images without further annotations. Our proposed method combines bottom-up and top-down cues for novel class detection. Within our approach, we introduce a bottom-up mechanism based on a well-trained fully supervised object detector (i.e., Faster RCNN) as an object region estimator for web images by recognizing the common objectness shared by base and novel classes. With the estimated regions on the web images, we then utilize the top-down attention cues as the guidance for region classification. Furthermore, we propose a residual feature refinement (RFR) block to tackle the domain mismatch between the web domain and the target domain. We demonstrate our proposed method on the PASCAL VOC dataset with three different novel/base splits. Without any target-domain novel-class images and annotations, our proposed webly supervised object detection model is able to achieve promising performance for novel classes. Moreover, we also conduct transfer learning experiments on the large-scale ILSVRC 2013 detection dataset and achieve state-of-the-art performance.
Tasks Object Detection, Transfer Learning
Published 2020-03-22
URL https://arxiv.org/abs/2003.09790v1
PDF https://arxiv.org/pdf/2003.09790v1.pdf
PWC https://paperswithcode.com/paper/exploring-bottom-up-and-top-down-cues-with
Repo
Framework

A Lagrangian Dual Framework for Deep Neural Networks with Constraints

Title A Lagrangian Dual Framework for Deep Neural Networks with Constraints
Authors Ferdinando Fioretto, Terrence WK Mak, Federico Baldo, Michele Lombardi, Pascal Van Hentenryck
Abstract A variety of computationally challenging constrained optimization problems in several engineering disciplines are solved repeatedly under different scenarios. In many cases, they would benefit from fast and accurate approximations, either to support real-time operations or large-scale simulation studies. This paper aims at exploring how to leverage the substantial data being accumulated by repeatedly solving instances of these applications over time. It introduces a deep learning model that exploits Lagrangian duality to encourage the satisfaction of hard constraints. The proposed method is evaluated on a collection of realistic energy networks, by enforcing non-discriminatory decisions on a variety of datasets, and a transprecision computing application. The results illustrate the effectiveness of the proposed method that dramatically decreases constraint violations by the predictors and, in some applications, increases the prediction accuracy.
Tasks
Published 2020-01-26
URL https://arxiv.org/abs/2001.09394v1
PDF https://arxiv.org/pdf/2001.09394v1.pdf
PWC https://paperswithcode.com/paper/a-lagrangian-dual-framework-for-deep-neural
Repo
Framework
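
The core training mechanism described above, augmenting the loss with multiplier-weighted constraint violations and updating the multipliers by dual ascent, can be sketched as follows. The toy network, the y >= 0 constraint, and the step sizes are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Lagrangian-dual training sketch: the loss adds lambda * violation, and lambda
# is increased by dual (sub)gradient ascent on the observed violation.
def constraint_violation(y_pred):
    # Hypothetical hard constraint y >= 0, expressed as a non-negative violation.
    return torch.relu(-y_pred).mean()

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = torch.tensor(0.0)     # Lagrange multiplier
dual_step = 0.1             # dual ascent step size (assumed)

x, y = torch.randn(256, 10), torch.rand(256, 1)   # toy data
for epoch in range(100):
    y_pred = model(x)
    violation = constraint_violation(y_pred)
    loss = nn.functional.mse_loss(y_pred, y) + lam * violation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Dual update: the multiplier grows while the constraint is still violated.
    with torch.no_grad():
        lam = torch.clamp(lam + dual_step * violation.detach(), min=0.0)
```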

A Data-Efficient Sampling Method for Estimating Basins of Attraction Using Hybrid Active Learning (HAL)

Title A Data-Efficient Sampling Method for Estimating Basins of Attraction Using Hybrid Active Learning (HAL)
Authors Xue-She Wang, James D. Turner, Brian P. Mann
Abstract Although basins of attraction (BoA) diagrams are an insightful tool for understanding the behavior of nonlinear systems, generating these diagrams is either computationally expensive with simulation or difficult and cost prohibitive experimentally. This paper introduces a data-efficient sampling method for estimating BoA. The proposed method is based upon hybrid active learning (HAL) and is designed to find and label the “informative” samples, which efficiently determine the boundary of BoA. It consists of three primary parts: 1) additional sampling on trajectories (AST) to maximize the number of samples obtained from each simulation or experiment; 2) an active learning (AL) algorithm to exploit the local boundary of BoA; and 3) a density-based sampling (DBS) method to explore the global boundary of BoA. An example of estimating the BoA for a bistable nonlinear system is presented to show the high efficiency of our HAL sampling method.
Tasks Active Learning
Published 2020-03-24
URL https://arxiv.org/abs/2003.10976v1
PDF https://arxiv.org/pdf/2003.10976v1.pdf
PWC https://paperswithcode.com/paper/a-data-efficient-sampling-method-for
Repo
Framework
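
A rough sketch of the boundary-seeking active-learning component (part 2 of HAL): fit a classifier on the initial conditions labeled so far, then query the candidate whose predicted class probability is closest to 0.5, i.e., the point nearest the estimated basin boundary. The Gaussian-process classifier, toy labeling rule, and acquisition rule are illustrative assumptions; the paper additionally samples along trajectories (AST) and uses density-based sampling (DBS) for global exploration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def simulate_attractor(x0):
    """Stand-in for an expensive simulation: label which attractor x0 reaches.

    Hypothetical bistable toy rule; a real study would integrate the system's ODE.
    """
    return int(x0[0] + 0.3 * x0[1] > 0)

rng = np.random.default_rng(0)
candidates = rng.uniform(-1, 1, size=(2000, 2))   # pool of initial conditions
X = rng.uniform(-1, 1, size=(10, 2))              # small initial design
y = np.array([simulate_attractor(x) for x in X])

for _ in range(30):                               # active-learning iterations
    clf = GaussianProcessClassifier().fit(X, y)
    proba = clf.predict_proba(candidates)[:, 1]
    query = candidates[np.argmin(np.abs(proba - 0.5))]   # most ambiguous point
    X = np.vstack([X, query])
    y = np.append(y, simulate_attractor(query))

# X, y now concentrate samples near the estimated basin-of-attraction boundary.
```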

When is Ontology-Mediated Querying Efficient?

Title When is Ontology-Mediated Querying Efficient?
Authors Pablo Barcelo, Cristina Feier, Carsten Lutz, Andreas Pieris
Abstract In ontology-mediated querying, description logic (DL) ontologies are used to enrich incomplete data with domain knowledge, which results in more complete answers to queries. However, the evaluation of ontology-mediated queries (OMQs) over relational databases is computationally hard. This raises the question of when OMQ evaluation is efficient, in the sense of being tractable in combined complexity or fixed-parameter tractable. We study this question for a range of ontology-mediated query languages based on several important and widely-used DLs, using unions of conjunctive queries as the actual queries. For the DL ELHI extended with the bottom concept, we provide a characterization of the classes of OMQs that are fixed-parameter tractable. For its fragment EL extended with domain and range restrictions and the bottom concept (which restricts the use of inverse roles), we provide a characterization of the classes of OMQs that are tractable in combined complexity. Both results are in terms of equivalence to OMQs of bounded tree width and rest on a reasonable assumption from parameterized complexity theory. They are similar in spirit to Grohe’s seminal characterization of the tractable classes of conjunctive queries over relational databases. We further study the complexity of the meta problem of deciding whether a given OMQ is equivalent to an OMQ of bounded tree width, providing several completeness results that range from NP to 2ExpTime, depending on the DL used. We also consider the DL-Lite family of DLs, including members that admit functional roles.
Tasks
Published 2020-03-17
URL https://arxiv.org/abs/2003.07800v1
PDF https://arxiv.org/pdf/2003.07800v1.pdf
PWC https://paperswithcode.com/paper/when-is-ontology-mediated-querying-efficient
Repo
Framework

Lasso for hierarchical polynomial models

Title Lasso for hierarchical polynomial models
Authors Hugo Maruri-Aguilar, Simon Lunagomez
Abstract In a polynomial regression model, the divisibility conditions implicit in polynomial hierarchy give way to a natural construction of constraints for the model parameters. We use this principle to derive versions of strong and weak hierarchy and to extend existing work in the literature, which at the moment is only concerned with models of degree two. We discuss how to estimate parameters in lasso using standard quadratic programming techniques and apply our proposal to both simulated data and examples from the literature. The proposed methodology compares favorably with existing techniques in terms of low validation error and model size.
Tasks
Published 2020-01-21
URL https://arxiv.org/abs/2001.07778v1
PDF https://arxiv.org/pdf/2001.07778v1.pdf
PWC https://paperswithcode.com/paper/lasso-for-hierarchical-polynomial-models
Repo
Framework

Detecting and Characterizing Bots that Commit Code

Title Detecting and Characterizing Bots that Commit Code
Authors Tapajit Dey, Sara Mousavi, Eduardo Ponce, Tanner Fry, Bogdan Vasilescu, Anna Filippova, Audris Mockus
Abstract Background: Some developer activity traditionally performed manually, such as making code commits and opening, managing, or closing issues, is increasingly subject to automation in many OSS projects. Specifically, such activity is often performed by tools that react to events or run at specific times. We refer to such automation tools as bots, and in many software mining scenarios related to developer productivity or code quality it is desirable to identify bots in order to separate their actions from the actions of individuals. Aim: Find an automated way of identifying bots and code committed by these bots, and characterize the types of bots based on their activity patterns. Method and Result: We propose BIMAN, a systematic approach to detect bots using author names, commit messages, files modified by the commit, and projects associated with the commits. For our test data, the value of AUC-ROC was 0.9. We also characterized these bots based on the time patterns of their code commits and the types of files modified, and found that they primarily work with documentation files and web pages, and that these files are most prevalent in HTML and JavaScript ecosystems. We have compiled a shareable dataset containing detailed information about the 461 bots we found (each with more than 1000 commits) and the 13,762,430 commits they created.
Tasks
Published 2020-03-02
URL https://arxiv.org/abs/2003.03172v3
PDF https://arxiv.org/pdf/2003.03172v3.pdf
PWC https://paperswithcode.com/paper/detecting-and-characterizing-bots-that-commit
Repo
Framework
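
A toy sketch of the classification step: engineer simple per-author features (a bot-like name pattern, commit volume, how templated the commit messages look) and train a random-forest classifier. The feature names and the two toy authors are hypothetical; BIMAN's actual feature set and ensemble are considerably richer.

```python
import re
import numpy as np
from sklearn.ensemble import RandomForestClassifier

BOT_NAME_PATTERN = re.compile(r"\b(bot|ci|auto|travis|jenkins)\b", re.I)

def author_features(author):
    """Toy per-author features; the dict keys below are hypothetical."""
    return [
        int(bool(BOT_NAME_PATTERN.search(author["name"]))),
        author["num_commits"],
        author["distinct_message_ratio"],     # low when messages follow a template
        author["median_files_per_commit"],
    ]

# Two toy authors stand in for a real labeled dataset.
authors = [
    {"name": "dependabot[bot]", "num_commits": 5000,
     "distinct_message_ratio": 0.02, "median_files_per_commit": 1},
    {"name": "Jane Doe", "num_commits": 120,
     "distinct_message_ratio": 0.9, "median_files_per_commit": 3},
]
labels = [1, 0]   # 1 = bot, 0 = human

X = np.array([author_features(a) for a in authors])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict_proba(X)[:, 1])   # per-author bot probability
```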

Meta-learning curiosity algorithms

Title Meta-learning curiosity algorithms
Authors Ferran Alet, Martin F. Schneider, Tomas Lozano-Perez, Leslie Pack Kaelbling
Abstract We hypothesize that curiosity is a mechanism found by evolution that encourages meaningful exploration early in an agent’s life in order to expose it to experiences that enable it to obtain high rewards over the course of its lifetime. We formulate the problem of generating curious behavior as one of meta-learning: an outer loop will search over a space of curiosity mechanisms that dynamically adapt the agent’s reward signal, and an inner loop will perform standard reinforcement learning using the adapted reward signal. However, current meta-RL methods based on transferring neural network weights have only generalized between very similar tasks. To broaden the generalization, we instead propose to meta-learn algorithms: pieces of code similar to those designed by humans in ML papers. Our rich language of programs combines neural networks with other building blocks such as buffers, nearest-neighbor modules and custom loss functions. We demonstrate the effectiveness of the approach empirically, finding two novel curiosity algorithms that perform on par or better than human-designed published curiosity algorithms in domains as disparate as grid navigation with image inputs, acrobot, lunar lander, ant and hopper.
Tasks Meta-Learning
Published 2020-03-11
URL https://arxiv.org/abs/2003.05325v1
PDF https://arxiv.org/pdf/2003.05325v1.pdf
PWC https://paperswithcode.com/paper/meta-learning-curiosity-algorithms-1
Repo
Framework
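
The two-loop structure described above can be sketched with stand-in components: the outer loop scores candidate curiosity programs, and the inner loop runs ordinary reinforcement learning with the reward adapted by the current program. The chain MDP, the Q-learning routine, and the two hand-written "programs" below are illustrative assumptions, not the paper's program search space.

```python
import random

def count_bonus(state, counts):
    """Curiosity stand-in: intrinsic bonus that decays with visit count."""
    counts[state] = counts.get(state, 0) + 1
    return 1.0 / counts[state] ** 0.5

def zero_bonus(state, counts):
    return 0.0

def inner_rl(curiosity, episodes=200, chain_len=8):
    """Q-learning on a sparse-reward chain MDP; returns mean extrinsic return."""
    Q, counts, returns = {}, {}, []
    for _ in range(episodes):
        s, ep_return = 0, 0.0
        for _ in range(chain_len * 2):
            a = random.choice([0, 1]) if random.random() < 0.1 else \
                max([0, 1], key=lambda b: Q.get((s, b), 0.0))
            s2 = max(0, s - 1) if a == 0 else min(chain_len - 1, s + 1)
            extrinsic = 1.0 if s2 == chain_len - 1 else 0.0
            r = extrinsic + curiosity(s2, counts)        # adapted reward signal
            Q[(s, a)] = Q.get((s, a), 0.0) + 0.1 * (
                r + 0.9 * max(Q.get((s2, b), 0.0) for b in [0, 1]) - Q.get((s, a), 0.0))
            ep_return += extrinsic
            s = s2
        returns.append(ep_return)
    return sum(returns) / len(returns)

# Outer loop: pick the curiosity program that yields the best lifetime reward.
programs = {"count_bonus": count_bonus, "zero_bonus": zero_bonus}
best = max(programs, key=lambda name: inner_rl(programs[name]))
print("best curiosity program:", best)
```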

Nearly Optimal Risk Bounds for Kernel K-Means

Title Nearly Optimal Risk Bounds for Kernel K-Means
Authors Yong Liu, Lizhong Ding, Hua Zhang, Wenqi Ren, Xiao Zhang, Shali Jiang, Xinwang Liu, Weiping Wang
Abstract In this paper, we study the statistical properties of the kernel $k$-means and obtain a nearly optimal excess risk bound, substantially improving the state-of-the-art bounds in the existing clustering risk analyses. We further analyze the statistical effect of computational approximations of the Nyström kernel $k$-means, and demonstrate that it achieves the same statistical accuracy as the exact kernel $k$-means considering only $\sqrt{nk}$ Nyström landmark points. To the best of our knowledge, such sharp excess risk bounds for kernel (or approximate kernel) $k$-means have never been seen before.
Tasks
Published 2020-03-09
URL https://arxiv.org/abs/2003.03888v1
PDF https://arxiv.org/pdf/2003.03888v1.pdf
PWC https://paperswithcode.com/paper/nearly-optimal-risk-bounds-for-kernel-k-means
Repo
Framework
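
A minimal sketch of the approximation regime analyzed above: kernel $k$-means run on a Nyström feature map with roughly $\sqrt{nk}$ landmark points. The synthetic data, RBF kernel, and parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_approximation import Nystroem

# Nystroem-approximated kernel k-means with ~sqrt(n*k) landmark points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(300, 5)) for c in (-2, 0, 2)])
n, k = len(X), 3
landmarks = int(np.ceil(np.sqrt(n * k)))     # ~sqrt(nk) Nystroem components

features = Nystroem(kernel="rbf", gamma=0.5,
                    n_components=landmarks, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
print(landmarks, np.bincount(labels))
```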

A Journey into Ontology Approximation: From Non-Horn to Horn

Title A Journey into Ontology Approximation: From Non-Horn to Horn
Authors Anneke Haga, Carsten Lutz, Johannes Marti, Frank Wolter
Abstract We study complete approximations of an ontology formulated in a non-Horn description logic (DL) such as $\mathcal{ALC}$ in a Horn DL such as $\mathcal{EL}$. We provide concrete approximation schemes that are necessarily infinite and observe that in the $\mathcal{ELU}$-to-$\mathcal{EL}$ case finite approximations tend to exist in practice and are guaranteed to exist when the original ontology is acyclic. In contrast, neither of these holds for $\mathcal{ELU}_\bot$-to-$\mathcal{EL}_\bot$ and for $\mathcal{ALC}$-to-$\mathcal{EL}_\bot$ approximations. We also define a notion of approximation tailored towards ontology-mediated querying, connect it to subsumption-based approximations, and identify a case where finite approximations are guaranteed to exist.
Tasks
Published 2020-01-21
URL https://arxiv.org/abs/2001.07754v3
PDF https://arxiv.org/pdf/2001.07754v3.pdf
PWC https://paperswithcode.com/paper/a-journey-into-ontology-approximation-from
Repo
Framework

CF2-Net: Coarse-to-Fine Fusion Convolutional Network for Breast Ultrasound Image Segmentation

Title CF2-Net: Coarse-to-Fine Fusion Convolutional Network for Breast Ultrasound Image Segmentation
Authors Zhenyuan Ning, Ke Wang, Shengzhou Zhong, Qianjin Feng, Yu Zhang
Abstract Breast ultrasound (BUS) image segmentation plays a crucial role in a computer-aided diagnosis system, which is regarded as a useful tool to help increase the accuracy of breast cancer diagnosis. Recently, many deep learning methods have been developed for the segmentation of BUS images and show some advantages compared with conventional region-, model-, and traditional learning-based methods. However, previous deep learning methods typically use skip connections to concatenate the encoder and decoder, which might not fully fuse the coarse-to-fine features from the encoder and decoder. Since the structure and edges of lesions in BUS images are commonly blurred, it is difficult to learn discriminative structure and edge information, which reduces performance. To this end, we propose and evaluate a coarse-to-fine fusion convolutional network (CF2-Net) based on a novel feature integration strategy (forming an ‘E’-like type) for BUS image segmentation. To enhance contours and provide structural information, we concatenate a super-pixel image and the original image as the input of CF2-Net. Meanwhile, to highlight the differences in lesion regions with variable sizes and relieve the imbalance issue, we further design a weighted-balanced loss function to train CF2-Net effectively. The proposed CF2-Net was evaluated on an open dataset using four-fold cross-validation. The experimental results demonstrate that CF2-Net obtains state-of-the-art performance when compared with other deep learning-based methods.
Tasks Semantic Segmentation
Published 2020-03-23
URL https://arxiv.org/abs/2003.10144v1
PDF https://arxiv.org/pdf/2003.10144v1.pdf
PWC https://paperswithcode.com/paper/cf2-net-coarse-to-fine-fusion-convolutional
Repo
Framework
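
One concrete piece of the pipeline, the super-pixel-plus-image input, can be sketched as below: compute superpixels, fill each region with its mean intensity, and stack the result with the original image as a two-channel network input. SLIC and its parameters are illustrative choices, not necessarily the authors' superpixel method.

```python
import numpy as np
from skimage.segmentation import slic

def build_cf2_input(image, n_segments=200):
    """image: (H, W) grayscale ultrasound image with values in [0, 1]."""
    segments = slic(image, n_segments=n_segments, compactness=0.1,
                    channel_axis=None)                  # label map of superpixels
    superpixel_img = np.zeros_like(image)
    for label in np.unique(segments):
        mask = segments == label
        superpixel_img[mask] = image[mask].mean()       # fill region with its mean
    return np.stack([image, superpixel_img], axis=0)    # (2, H, W) network input

x = build_cf2_input(np.random.rand(128, 128).astype(np.float32))
print(x.shape)  # (2, 128, 128)
```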

A Correspondence Analysis Framework for Author-Conference Recommendations

Title A Correspondence Analysis Framework for Author-Conference Recommendations
Authors Rahul Radhakrishnan Iyer, Manish Sharma, Vijaya Saradhi
Abstract For many years, achievements and discoveries made by scientists have been made known through research papers published in appropriate journals or conferences. Often, established scientists and especially newbies are caught up in the dilemma of choosing an appropriate conference to get their work through. Every scientific conference and journal is inclined towards a particular field of research and there is a vast multitude of them for any particular field. Choosing an appropriate venue is vital as it helps in reaching out to the right audience and also furthers one’s chances of getting the paper published. In this work, we address the problem of recommending appropriate conferences to authors to increase their chances of acceptance. We present three different approaches for the same, involving the use of the social network of the authors and the content of the paper in the settings of dimensionality reduction and topic modeling. In all these approaches, we apply Correspondence Analysis (CA) to derive appropriate relationships between the entities in question, such as conferences and papers. Our models show promising results when compared with existing methods such as content-based filtering, collaborative filtering and hybrid filtering.
Tasks Dimensionality Reduction
Published 2020-01-08
URL https://arxiv.org/abs/2001.02669v1
PDF https://arxiv.org/pdf/2001.02669v1.pdf
PWC https://paperswithcode.com/paper/a-correspondence-analysis-framework-for
Repo
Framework
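
A minimal sketch of the correspondence-analysis step referred to above: build a contingency matrix (here, toy author-by-conference counts), standardize its residuals, and take an SVD to place rows and columns in a shared low-dimensional space in which recommendations can be ranked by proximity. The matrix, dimensionality, and distance-based ranking are illustrative assumptions; the paper also combines CA with social-network and paper-content representations.

```python
import numpy as np

counts = np.array([[5, 1, 0],     # rows: authors, cols: conferences (toy data)
                   [4, 2, 1],
                   [0, 1, 6],
                   [1, 0, 5]], dtype=float)

P = counts / counts.sum()
r, c = P.sum(axis=1), P.sum(axis=0)                    # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))     # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

dims = 2
author_coords = (U[:, :dims] * sv[:dims]) / np.sqrt(r)[:, None]        # row coords
conference_coords = (Vt.T[:, :dims] * sv[:dims]) / np.sqrt(c)[:, None]  # column coords

# Recommend: conferences closest to author 0 in the shared CA space.
dists = np.linalg.norm(conference_coords - author_coords[0], axis=1)
print(np.argsort(dists))
```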

Lifespan Age Transformation Synthesis

Title Lifespan Age Transformation Synthesis
Authors Roy Or-El, Soumyadip Sengupta, Ohad Fried, Eli Shechtman, Ira Kemelmacher-Shlizerman
Abstract We address the problem of single photo age progression and regression: the prediction of how a person might look in the future, or how they looked in the past. Most existing aging methods are limited to changing the texture, overlooking transformations in head shape that occur during the human aging and growth process. This limits the applicability of previous methods to aging adults into slightly older adults, and applying those methods to photos of children does not produce quality results. We propose a novel multi-domain image-to-image generative adversarial network architecture, whose learned latent space models a continuous bi-directional aging process. The network is trained on the FFHQ dataset, which we labeled for ages, gender, and semantic segmentation. Fixed age classes are used as anchors to approximate continuous age transformation. Our framework can predict a full head portrait for ages 0-70 from a single photo, modifying both the texture and shape of the head. We demonstrate results on a wide variety of photos and datasets, and show significant improvement over the state of the art.
Tasks Semantic Segmentation
Published 2020-03-21
URL https://arxiv.org/abs/2003.09764v1
PDF https://arxiv.org/pdf/2003.09764v1.pdf
PWC https://paperswithcode.com/paper/lifespan-age-transformation-synthesis
Repo
Framework