January 31, 2020

2923 words 14 mins read

Paper Group ANR 35


Optimize Cash Collection: Use Machine learning to Predicting Invoice Payment. Completion Reasoning Emulation for the Description Logic EL+. ActiveHNE: Active Heterogeneous Network Embedding. KerCNNs: biologically inspired lateral connections for classification of corrupted images. Learning Clustered Representation for Complex Free Energy Landscapes …

Optimize Cash Collection: Use Machine learning to Predicting Invoice Payment

Title Optimize Cash Collection: Use Machine learning to Predicting Invoice Payment
Authors Ana Paula Appel, Victor Oliveira, Bruno Lima, Gabriel Louzada Malfatti, Vagner Figueredo de Santana, Rogerio de Paula
Abstract Predicting invoice payment is valuable in multiple industries and supports decision-making processes in most financial workflows. However, the challenge in this realm involves dealing with complex data and the lack of data related to decision-making processes not registered in the accounts receivable system. This work presents a prototype developed as a solution devised during a partnership with a multinational bank to support collectors in predicting invoice payment. The proposed prototype reached up to 77% accuracy, which improved the prioritization of customers and supported the daily work of collectors. With the presented results, we expect to help researchers dealing with the problem of invoice payment prediction get insights and examples of how to tackle issues present in real data.
Tasks Decision Making
Published 2019-12-20
URL https://arxiv.org/abs/1912.10828v1
PDF https://arxiv.org/pdf/1912.10828v1.pdf
PWC https://paperswithcode.com/paper/optimize-cash-collection-use-machine-learning
Repo
Framework
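The paper does not publish its model, so to make the task concrete here is a minimal sketch of invoice payment prediction framed as binary classification. The features, synthetic data, and gradient-boosting model are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: invoice payment prediction as binary classification.
# Features, labels, and model choice are illustrative, not the paper's setup.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.lognormal(8, 1, n),        # invoice amount
    rng.integers(30, 3000, n),     # customer relationship age in days
    rng.random(n),                 # historical late-payment ratio
    rng.integers(5, 90, n),        # payment term in days
])
# Toy label: 1 = paid on time, from a noisy synthetic rule.
y = ((X[:, 2] < 0.5) & (X[:, 3] > 20)).astype(int)
flip = rng.random(n) < 0.1
y = np.where(flip, 1 - y, y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("toy accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```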

Completion Reasoning Emulation for the Description Logic EL+

Title Completion Reasoning Emulation for the Description Logic EL+
Authors Aaron Eberhart, Monireh Ebrahimi, Lu Zhou, Cogan Shimizu, Pascal Hitzler
Abstract We present a new approach to integrating deep learning with knowledge-based systems that we believe shows promise. Our approach seeks to emulate reasoning structure, which can be inspected part-way through, rather than simply learning reasoner answers, which is typical in many of the black-box systems currently in use. We demonstrate that this idea is feasible by training a long short-term memory (LSTM) artificial neural network to learn EL+ reasoning patterns with two different data sets. We also show that this trained system is resistant to noise by corrupting a percentage of the test data and comparing the reasoner’s and LSTM’s predictions on corrupt data with correct answers.
Tasks
Published 2019-12-11
URL https://arxiv.org/abs/1912.05063v1
PDF https://arxiv.org/pdf/1912.05063v1.pdf
PWC https://paperswithcode.com/paper/completion-reasoning-emulation-for-the
Repo
Framework
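As a rough illustration of "learning reasoning patterns" with a sequence model, the sketch below trains an LSTM to map token sequences standing in for EL+ axioms to token sequences standing in for completion-reasoner output. The vocabulary, encoding, and random data are invented placeholders, not the paper's datasets.

```python
# Illustrative PyTorch sketch: LSTM sequence-to-sequence-style emulation.
# Vocabulary size, sequence length, and the random batch are placeholders.
import torch
import torch.nn as nn

VOCAB, HIDDEN, SEQ = 64, 128, 20

class ReasonerEmulator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.lstm = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.out(h)          # per-position logits over the vocabulary

model = ReasonerEmulator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random stand-in batch: "axiom" tokens as input, "inference" tokens as target.
x = torch.randint(0, VOCAB, (32, SEQ))
y = torch.randint(0, VOCAB, (32, SEQ))
for _ in range(5):
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, VOCAB), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
print("toy loss:", round(loss.item(), 3))
```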

ActiveHNE: Active Heterogeneous Network Embedding

Title ActiveHNE: Active Heterogeneous Network Embedding
Authors Xia Chen, Guoxian Yu, Jun Wang, Carlotta Domeniconi, Zhao Li, Xiangliang Zhang
Abstract Heterogeneous network embedding (HNE) is a challenging task due to the diverse node types and/or diverse relationships between nodes. Existing HNE methods are typically unsupervised. To maximize the benefit of the rare and valuable supervised information in HNE, we develop a novel Active Heterogeneous Network Embedding (ActiveHNE) framework, which includes two components: Discriminative Heterogeneous Network Embedding (DHNE) and Active Query in Heterogeneous Networks (AQHN). In DHNE, we introduce a novel semi-supervised heterogeneous network embedding method based on graph convolutional neural networks. In AQHN, we first introduce three active selection strategies based on uncertainty and representativeness, and then derive a batch selection method that assembles these strategies using a multi-armed bandit mechanism. ActiveHNE aims at improving the performance of HNE by feeding the most valuable supervision obtained by AQHN into DHNE. Experiments on public datasets demonstrate the effectiveness of ActiveHNE and its advantage in reducing the query cost.
Tasks Network Embedding
Published 2019-05-14
URL https://arxiv.org/abs/1905.05659v2
PDF https://arxiv.org/pdf/1905.05659v2.pdf
PWC https://paperswithcode.com/paper/activehne-active-heterogeneous-network
Repo
Framework
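The following sketch illustrates the active-query idea only: several heuristic selection scores combined by a UCB-style multi-armed bandit that learns which strategy to trust. The scoring functions, reward signal, and bandit details are simplified assumptions, not the exact AQHN formulation.

```python
# Simplified stand-in for bandit-driven active selection over nodes.
# Predicted probabilities, scores, and the reward signal are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_classes = 200, 4
probs = rng.dirichlet(np.ones(n_classes), size=n_nodes)  # stand-in for DHNE predictions

def uncertainty(p):          # entropy of the predicted class distribution
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def representativeness(p):   # stand-in: similarity of a node to the average prediction
    return p @ p.mean(axis=0)

strategies = [uncertainty(probs), representativeness(probs),
              uncertainty(probs) * representativeness(probs)]

counts, rewards = np.ones(3), np.zeros(3)   # UCB statistics per strategy ("arm")
labelled = set()
for step in range(1, 21):
    arm = int(np.argmax(rewards / counts + np.sqrt(2 * np.log(step) / counts)))
    scores = strategies[arm].copy()
    scores[list(labelled)] = -np.inf        # never re-query labelled nodes
    node = int(np.argmax(scores))
    labelled.add(node)
    gain = rng.random()                     # placeholder for the observed performance gain
    counts[arm] += 1; rewards[arm] += gain
print("queried nodes:", sorted(labelled)[:10], "...")
```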

KerCNNs: biologically inspired lateral connections for classification of corrupted images

Title KerCNNs: biologically inspired lateral connections for classification of corrupted images
Authors Noemi Montobbio, Laurent Bonnasse-Gahot, Giovanna Citti, Alessandro Sarti
Abstract The state of the art in many computer vision tasks is represented by Convolutional Neural Networks (CNNs). Although their hierarchical organization and local feature extraction are inspired by the structure of primate visual systems, the lack of lateral connections in such architectures critically distinguishes their analysis from biological object processing. The idea of enriching CNNs with recurrent lateral connections of convolutional type has been put into practice in recent years, in the form of learned recurrent kernels with no geometrical constraints. In the present work, we introduce biologically plausible lateral kernels encoding a notion of correlation between the feedforward filters of a CNN: at each layer, the associated kernel acts as a transition kernel on the space of activations. The lateral kernels are defined in terms of the filters, thus providing a parameter-free approach to assess the geometry of horizontal connections based on the feedforward structure. We then test this new architecture, which we call KerCNN, on a generalization task related to global shape analysis and pattern completion: once trained for performing basic image classification, the network is evaluated on corrupted testing images. The image perturbations examined are designed to undermine the recognition of the images via local features, thus requiring an integration of context information - which in biological vision is critically linked to lateral connectivity. Our KerCNNs turn out to be far more stable than CNNs and recurrent CNNs to such degradations, thus validating this biologically inspired approach to reinforce object recognition under challenging conditions.
Tasks Image Classification, Object Recognition
Published 2019-10-18
URL https://arxiv.org/abs/1910.08336v1
PDF https://arxiv.org/pdf/1910.08336v1.pdf
PWC https://paperswithcode.com/paper/kercnns-biologically-inspired-lateral
Repo
Framework
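A minimal sketch of the lateral-kernel idea, assuming a simple interpretation: derive a channel-coupling matrix from correlations between a convolutional layer's feedforward filters and apply it as a parameter-free recurrent update of the activations. The construction below is an illustration, not the authors' exact kernel.

```python
# Toy lateral coupling built from filter correlations (simplified assumption).
import torch
import torch.nn.functional as F

conv = torch.nn.Conv2d(3, 16, kernel_size=5, padding=2)
x = torch.randn(1, 3, 32, 32)
a = F.relu(conv(x))                      # feedforward activations: (1, 16, 32, 32)

# Channel-to-channel "lateral" coupling from cosine similarity between filters.
w = conv.weight.detach().flatten(1)      # (16, 3*5*5)
w = F.normalize(w, dim=1)
coupling = w @ w.t()                     # (16, 16), parameter-free

# One lateral iteration: mix channels through the coupling matrix.
a_flat = a.flatten(2)                    # (1, 16, H*W)
a_lateral = torch.einsum('ij,bjk->bik', coupling, a_flat).reshape_as(a)
a_updated = F.relu(a + 0.1 * a_lateral)  # small step toward the lateral signal
print(a_updated.shape)
```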

Learning Clustered Representation for Complex Free Energy Landscapes

Title Learning Clustered Representation for Complex Free Energy Landscapes
Authors Jun Zhang, Yao-Kun Lei, Xing Che, Zhen Zhang, Yi Isaac Yang, Yi Qin Gao
Abstract In this paper we first analyzed the inductive bias underlying the data scattered across complex free energy landscapes (FEL), and exploited it to train deep neural networks which yield reduced and clustered representations of the FEL. Our parametric method, called Information Distilling of Metastability (IDM), is end-to-end differentiable and thus scalable to ultra-large datasets. IDM is also a clustering algorithm, able to cluster the samples while reducing the dimensionality. Besides, as an unsupervised learning method, IDM differs from many existing dimensionality reduction and clustering methods in that it requires neither a cherry-picked distance metric nor the ground-truth number of clusters, and that it can be used to unroll and zoom in on the hierarchical FEL with respect to different timescales. Through multiple experiments, we show that IDM can achieve physically meaningful representations which partition the FEL into well-defined metastable states and hence are amenable to downstream tasks such as mechanism analysis and kinetic modeling.
Tasks Dimensionality Reduction
Published 2019-06-07
URL https://arxiv.org/abs/1906.02852v1
PDF https://arxiv.org/pdf/1906.02852v1.pdf
PWC https://paperswithcode.com/paper/learning-clustered-representation-for-complex
Repo
Framework
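IDM itself is not reproduced here; the sketch below shows a generic differentiable clustering network in a similar spirit, trained with an information-maximization objective (confident per-sample assignments, balanced cluster usage) on a toy three-basin landscape. Data and loss are illustrative stand-ins.

```python
# Generic differentiable clustering sketch (not IDM): information maximization
# over soft cluster assignments on a toy two-dimensional "landscape".
import torch
import torch.nn as nn

n_clusters = 3
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, n_clusters))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

# Toy free energy landscape: three Gaussian basins in two collective variables.
centres = torch.tensor([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
data = centres[torch.randint(0, 3, (600,))] + 0.3 * torch.randn(600, 2)

for _ in range(300):
    p = torch.softmax(net(data), dim=1)
    per_sample_entropy = -(p * p.clamp_min(1e-9).log()).sum(dim=1).mean()
    marginal = p.mean(dim=0)
    marginal_entropy = -(marginal * marginal.clamp_min(1e-9).log()).sum()
    loss = per_sample_entropy - marginal_entropy   # confident, balanced assignments
    opt.zero_grad(); loss.backward(); opt.step()

print("cluster sizes:", p.argmax(dim=1).bincount(minlength=n_clusters).tolist())
```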

Creative AI Through Evolutionary Computation

Title Creative AI Through Evolutionary Computation
Authors Risto Miikkulainen
Abstract The main power of artificial intelligence is not in modeling what we already know, but in creating solutions that are new. Such solutions exist in extremely large, high-dimensional, and complex search spaces. Population-based search techniques, i.e. variants of evolutionary computation, are well suited to finding them. These techniques are also well positioned to take advantage of large-scale parallel computing resources, making creative AI through evolutionary computation the likely “next deep learning”.
Tasks
Published 2019-01-12
URL https://arxiv.org/abs/1901.03775v2
PDF https://arxiv.org/pdf/1901.03775v2.pdf
PWC https://paperswithcode.com/paper/creative-ai-through-evolutionary-computation
Repo
Framework

Classifying Multi-Gas Spectrums using Monte Carlo KNN and Multi-Resolution CNN

Title Classifying Multi-Gas Spectrums using Monte Carlo KNN and Multi-Resolution CNN
Authors Brosnan Yuen
Abstract A Monte Carlo k-nearest neighbours (KNN) method and a multi-resolution convolutional neural network (CNN) were developed to detect the presence of multiple gases in near-infrared (IR) spectra. The High Resolution Transmission database was used to synthesize the near-IR spectra. The Monte Carlo KNN determined the optimal kernel sizes and the optimal number of channels. The multi-resolution CNN, composed of multiple different kernels, was created using the optimal kernel sizes and the optimal number of channels. The multi-resolution CNN outperforms the multilayer perceptron and partial least squares.
Tasks
Published 2019-07-04
URL https://arxiv.org/abs/1907.02188v4
PDF https://arxiv.org/pdf/1907.02188v4.pdf
PWC https://paperswithcode.com/paper/classifying-multi-gas-spectrums-using-monte
Repo
Framework
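A hedged sketch of a multi-resolution 1D CNN: parallel convolution branches with different kernel sizes over a spectrum, concatenated before a multi-label head with one sigmoid per gas. The kernel sizes and channel counts are arbitrary here, not values selected by the paper's Monte Carlo KNN search.

```python
# Multi-resolution 1D CNN sketch; kernel sizes and channels are assumptions.
import torch
import torch.nn as nn

class MultiResolutionCNN(nn.Module):
    def __init__(self, n_gases=4, kernel_sizes=(3, 7, 15), channels=8):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(1, channels, k, padding=k // 2) for k in kernel_sizes]
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels * len(kernel_sizes), n_gases)
        )

    def forward(self, spectrum):                 # spectrum: (batch, 1, n_wavelengths)
        feats = torch.cat([torch.relu(b(spectrum)) for b in self.branches], dim=1)
        return torch.sigmoid(self.head(feats))   # per-gas presence probabilities

model = MultiResolutionCNN()
print(model(torch.randn(2, 1, 512)).shape)       # -> torch.Size([2, 4])
```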

Video Compression With Rate-Distortion Autoencoders

Title Video Compression With Rate-Distortion Autoencoders
Authors Amirhossein Habibian, Ties van Rozendaal, Jakub M. Tomczak, Taco S. Cohen
Abstract In this paper we present a deep generative model for lossy video compression. We employ a model that consists of a 3D autoencoder with a discrete latent space and an autoregressive prior used for entropy coding. Both the autoencoder and the prior are trained jointly to minimize a rate-distortion loss, which is closely related to the ELBO used in variational autoencoders. Despite its simplicity, we find that our method outperforms the state-of-the-art learned video compression networks based on motion compensation or interpolation. We systematically evaluate various design choices, such as the use of frame-based or spatio-temporal autoencoders, and the type of autoregressive prior. In addition, we present three extensions of the basic method that demonstrate the benefits over classical approaches to compression. First, we introduce semantic compression, where the model is trained to allocate more bits to objects of interest. Second, we study adaptive compression, where the model is adapted to a domain with limited variability, e.g., videos taken from an autonomous car, to achieve superior compression on that domain. Finally, we introduce multimodal compression, where we demonstrate the effectiveness of our model in joint compression of multiple modalities captured by non-standard imaging sensors, such as quad cameras. We believe that this opens up novel video compression applications, which have not been feasible with classical codecs.
Tasks Motion Compensation, Video Compression
Published 2019-08-14
URL https://arxiv.org/abs/1908.05717v2
PDF https://arxiv.org/pdf/1908.05717v2.pdf
PWC https://paperswithcode.com/paper/video-compression-with-rate-distortion
Repo
Framework
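The core training objective can be sketched as distortion plus a weighted rate term. The toy code below uses a tiny 2D convolutional autoencoder and a unit-Gaussian rate proxy in place of the paper's 3D autoencoder, discrete latents, and autoregressive prior.

```python
# Rate-distortion training sketch; the model and rate proxy are simplified
# stand-ins, not the paper's architecture.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 8, 4, stride=2, padding=1))
dec = nn.Sequential(nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
beta = 0.01                                     # rate-distortion trade-off weight

frames = torch.rand(4, 3, 32, 32)               # stand-in video frames
for _ in range(5):
    z = enc(frames)
    recon = dec(z)
    distortion = ((recon - frames) ** 2).mean()
    # Rate proxy: bits needed under a unit Gaussian prior on the latents.
    rate = 0.5 * (z ** 2).mean() / torch.log(torch.tensor(2.0))
    loss = distortion + beta * rate
    opt.zero_grad(); loss.backward(); opt.step()
print("distortion:", distortion.item(), "rate proxy:", rate.item())
```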

Improving the Results of De novo Peptide Identification via Tandem Mass Spectrometry Using a Genetic Programming-based Scoring Function for Re-ranking Peptide-Spectrum Matches

Title Improving the Results of De novo Peptide Identification via Tandem Mass Spectrometry Using a Genetic Programming-based Scoring Function for Re-ranking Peptide-Spectrum Matches
Authors Samaneh Azari, Bing Xue, Mengjie Zhang, Lifeng Peng
Abstract De novo peptide sequencing algorithms have been widely used in proteomics to analyse tandem mass spectra (MS/MS) and assign them to peptides, but quality-control methods to evaluate the confidence of de novo peptide sequencing are lagging behind. A fundamental part of a quality-control method is the scoring function used to evaluate the quality of peptide-spectrum matches (PSMs). Here, we propose a genetic programming (GP) based method, called GP-PSM, to learn a PSM scoring function for improving the rate of confident peptide identification from MS/MS data. The GP method learns from thousands of MS/MS spectra. Important characteristics about the goodness of the matches are extracted from the learning set and incorporated into the GP scoring functions. We compare GP-PSM with two methods, Support Vector Regression (SVR) and Random Forest (RF). The GP method, along with RF and SVR, is used for post-processing the results of peptide identification by PEAKS, a commonly used de novo sequencing method. The results show that GP-PSM outperforms RF and SVR and discriminates accurately between correct and incorrect PSMs. It correctly assigns peptides to 10% more spectra on an evaluation dataset containing 120 MS/MS spectra and decreases the false positive rate (FPR) of peptide identification.
Tasks
Published 2019-08-12
URL https://arxiv.org/abs/1908.08010v1
PDF https://arxiv.org/pdf/1908.08010v1.pdf
PWC https://paperswithcode.com/paper/190808010
Repo
Framework
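The sketch below is a drastic simplification: instead of genetic programming over expression trees, it runs a small (1+λ) evolutionary search over linear scoring weights and evaluates candidates by how well they rank correct PSMs above incorrect ones. Features and labels are synthetic.

```python
# Evolutionary search over a linear PSM scoring function -- a simplified
# stand-in for the paper's tree-based genetic programming. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 6                                    # toy PSMs with 6 match-quality features
X = rng.normal(size=(n, d))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.5])
y = (X @ true_w + 0.5 * rng.normal(size=n)) > 0  # 1 = correct PSM (synthetic label)

def fitness(w):
    """Fraction of (correct, incorrect) pairs ranked the right way (toy AUC)."""
    s = X @ w
    pos, neg = s[y], s[~y]
    return (pos[:, None] > neg[None, :]).mean()

w = rng.normal(size=d)
for _ in range(100):                             # (1+5) evolution strategy
    children = w + 0.3 * rng.normal(size=(5, d))
    candidates = np.vstack([w, children])
    w = candidates[np.argmax([fitness(c) for c in candidates])]
print("toy ranking fitness:", round(float(fitness(w)), 3))
```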

Decision Trees for Complexity Reduction in Video Compression

Title Decision Trees for Complexity Reduction in Video Compression
Authors Natasha Westland, André Seixas Dias, Marta Mrak
Abstract This paper proposes a method for complexity reduction in practical video encoders using multiple decision tree classifiers. The method is demonstrated for the fast implementation of the ‘High Efficiency Video Coding’ (HEVC) standard, chosen because of its high bit rate reduction capability but large complexity overhead. Optimal partitioning of each video frame into coding units (CUs) is the main source of complexity, as a vast number of combinations are tested. The decision tree models were trained to identify when the CU testing process, a time-consuming Lagrangian optimisation, can be skipped, i.e. when there is a high probability that the CU can remain whole. A novel approach to finding the simplest and most effective decision tree model, called ‘manual pruning’, is described. Implementing the skip criteria reduced the average encoding time by 42.1% for a Bjøntegaard Delta rate detriment of 0.7%, for 17 standard test sequences in a range of resolutions and quantisation parameters.
Tasks Video Compression
Published 2019-08-12
URL https://arxiv.org/abs/1908.04168v1
PDF https://arxiv.org/pdf/1908.04168v1.pdf
PWC https://paperswithcode.com/paper/decision-trees-for-complexity-reduction-in
Repo
Framework
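To make the skip decision concrete, here is a toy decision tree that predicts from cheap block statistics whether the CU split search can be skipped. The features, labels, and confidence threshold are synthetic placeholders, not the paper's encoder-side features or its manually pruned trees.

```python
# Toy CU-split "skip" classifier; features, labels, and threshold are assumed.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
# Toy features: block variance, mean gradient magnitude, quantisation parameter.
X = np.column_stack([rng.gamma(2, 50, n), rng.gamma(2, 10, n), rng.integers(22, 38, n)])
# Toy label: 1 = "skip the split search" (smooth, high-QP blocks tend to stay whole).
y = ((X[:, 0] < 60) & (X[:, 2] > 27)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)   # shallow tree, cheap at encode time

def maybe_skip_split_search(block_features):
    """Return True when the classifier is confident the CU can remain whole."""
    prob_skip = tree.predict_proba(block_features.reshape(1, -1))[0, 1]
    return prob_skip > 0.9

print(maybe_skip_split_search(np.array([40.0, 5.0, 32.0])))
```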

Translate and Label! An Encoder-Decoder Approach for Cross-lingual Semantic Role Labeling

Title Translate and Label! An Encoder-Decoder Approach for Cross-lingual Semantic Role Labeling
Authors Angel Daza, Anette Frank
Abstract We propose a Cross-lingual Encoder-Decoder model that simultaneously translates and generates sentences with Semantic Role Labeling annotations in a resource-poor target language. Unlike annotation projection techniques, our model does not need parallel data during inference time. Our approach can be applied in monolingual, multilingual and cross-lingual settings and is able to produce dependency-based and span-based SRL annotations. We benchmark the labeling performance of our model in different monolingual and multilingual settings using well-known SRL datasets. We then train our model in a cross-lingual setting to generate new SRL labeled data. Finally, we measure the effectiveness of our method by using the generated data to augment the training basis for resource-poor languages and perform manual evaluation to show that it produces high-quality sentences and assigns accurate semantic role annotations. Our proposed architecture offers a flexible method for leveraging SRL data in multiple languages.
Tasks Semantic Role Labeling
Published 2019-08-29
URL https://arxiv.org/abs/1908.11326v1
PDF https://arxiv.org/pdf/1908.11326v1.pdf
PWC https://paperswithcode.com/paper/translate-and-label-an-encoder-decoder
Repo
Framework
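One way to picture the output side of such a model is a single sequence that interleaves target-language tokens with SRL bracket labels, so the decoder produces the translation and its annotations together. The linearization scheme below is an assumption for illustration, not necessarily the paper's exact format.

```python
# Hypothetical linearization of an SRL-annotated target sentence.
def linearize(tokens, spans):
    """spans: list of (role, start, end) over token indices, non-overlapping."""
    out = []
    for i, tok in enumerate(tokens):
        for role, start, _ in spans:
            if i == start:
                out.append(f"({role}")
        out.append(tok)
        for _, _, end in spans:
            if i == end:
                out.append(")")
    return out

tokens = ["la", "policía", "detuvo", "al", "sospechoso"]
spans = [("ARG0", 0, 1), ("V", 2, 2), ("ARG1", 3, 4)]
print(" ".join(linearize(tokens, spans)))
# -> (ARG0 la policía ) (V detuvo ) (ARG1 al sospechoso )
```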

Neural networks with motivation

Title Neural networks with motivation
Authors Sergey A. Shuvaev, Ngoc B. Tran, Marcus Stephenson-Jones, Bo Li, Alexei A. Koulakov
Abstract How can animals behave effectively in conditions involving different motivational contexts? Here, we propose how reinforcement learning neural networks can learn optimal behavior for dynamically changing motivational salience vectors. First, we show that Q-learning neural networks with motivation can navigate in an environment with dynamic rewards. Second, we show that such networks can learn complex behaviors simultaneously directed towards several goals distributed in an environment. Finally, we show that in a Pavlovian conditioning task, the responses of the neurons in our model resemble the firing patterns of neurons in the ventral pallidum (VP), a basal ganglia structure involved in motivated behaviors. We show that, similarly to real neurons, recurrent networks with motivation are composed of two oppositely tuned classes of neurons, responding to positive and negative rewards. Our model generates predictions for the VP connectivity. We conclude that networks with motivation can rapidly adapt their behavior to varying conditions without changes in synaptic strength when expected reward is modulated by motivation. Such networks may also provide a mechanism for how hierarchical reinforcement learning is implemented in the brain.
Tasks Hierarchical Reinforcement Learning, Q-Learning
Published 2019-06-23
URL https://arxiv.org/abs/1906.09528v2
PDF https://arxiv.org/pdf/1906.09528v2.pdf
PWC https://paperswithcode.com/paper/neural-networks-with-motivation
Repo
Framework
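A toy sketch of the central mechanism: the scalar reward is the dot product of a per-goal reward vector and a motivation vector, so the same Q-learning agent behaves differently under different motivational states. The environment, sizes, and tabular update are illustrative only.

```python
# Toy Q-learning with motivation-weighted rewards; everything here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_goals = 10, 4, 2
goal_rewards = np.zeros((n_states, n_goals))
goal_rewards[3, 0] = 1.0          # e.g. "food" at state 3
goal_rewards[8, 1] = 1.0          # e.g. "water" at state 8

def run(motivation, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        for _ in range(30):
            a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
            s_next = (s + (1 if a % 2 == 0 else -1)) % n_states   # toy chain dynamics
            r = goal_rewards[s_next] @ motivation                 # motivation-weighted reward
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

# "Hungry" vs "thirsty" motivation vectors lead to different state preferences.
print(np.argmax(run(np.array([1.0, 0.0])).max(axis=1)))
print(np.argmax(run(np.array([0.0, 1.0])).max(axis=1)))
```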

Zero-shot Entity Linking with Dense Entity Retrieval

Title Zero-shot Entity Linking with Dense Entity Retrieval
Authors Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, Luke Zettlemoyer
Abstract We consider the zero-shot entity-linking challenge where each entity is defined by a short textual description, and the model must read these descriptions together with the mention context to make the final linking decisions. In this setting, retrieving entity candidates can be particularly challenging, since many of the common linking cues such as entity alias tables and link popularity are not available. In this paper, we introduce a simple and effective two stage approach for zero-shot linking, based on fine-tuned BERT architectures. In the first stage, we do retrieval in a dense space defined by a bi-encoder that independently embeds the mention context and the entity descriptions. Each candidate is then examined more carefully with a cross-encoder, that concatenates the mention and entity text. Our approach achieves a nearly 5 point absolute gain on a recently introduced zero-shot entity linking benchmark, driven largely by improvements over previous IR-based candidate retrieval. We also show that it performs well in the non-zero-shot setting, obtaining the state-of-the-art result on TACKBP-2010.
Tasks Entity Linking
Published 2019-11-10
URL https://arxiv.org/abs/1911.03814v1
PDF https://arxiv.org/pdf/1911.03814v1.pdf
PWC https://paperswithcode.com/paper/zero-shot-entity-linking-with-dense-entity
Repo
Framework
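The two-stage structure can be sketched as dense bi-encoder retrieval followed by cross-encoder reranking. The "encoders" below are trivial bag-of-words stand-ins for the fine-tuned BERT models; the entity descriptions and mention are made up.

```python
# Structural sketch of bi-encoder retrieval + cross-encoder reranking.
# The encoders are toy bag-of-words placeholders, not BERT.
import numpy as np

VOCAB = 1000

def encode(text):
    """Toy bi-encoder: hashed bag-of-words vector (placeholder for a BERT encoder)."""
    v = np.zeros(VOCAB)
    for tok in text.lower().split():
        v[hash(tok) % VOCAB] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def cross_score(mention, description):
    """Toy cross-encoder: score the concatenated pair (placeholder for BERT)."""
    return float(encode(mention + " " + description) @ encode(description))

entities = {
    "Jaguar (animal)": "large cat native to the Americas",
    "Jaguar (car)": "british manufacturer of luxury cars",
}
mention = "he test drove the new luxury car at the british dealership"

# Stage 1: dense retrieval with the bi-encoder.
m_vec = encode(mention)
candidates = sorted(entities, key=lambda e: -(m_vec @ encode(entities[e])))[:2]
# Stage 2: rerank the candidates with the cross-encoder.
best = max(candidates, key=lambda e: cross_score(mention, entities[e]))
print("linked to:", best)
```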

No Representation without Transformation

Title No Representation without Transformation
Authors Giorgio Giannone, Jonathan Masci, Christian Osendorfer
Abstract We propose to extend Latent Variable Models with a simple idea: learn to encode not only samples but also transformations of such samples. This means that the latent space is not only populated by embeddings but also by higher order objects that map between these embeddings. We show how a hierarchical graphical model can be utilized to enforce desirable algebraic properties of such latent mappings. These mappings in turn structure the latent space and hence can have a core impact on downstream tasks that are solved in the latent space. We demonstrate this impact on a set of experiments and also show that the representation of these latent mappings reflects interpretable properties.
Tasks Latent Variable Models
Published 2019-12-09
URL https://arxiv.org/abs/1912.03845v1
PDF https://arxiv.org/pdf/1912.03845v1.pdf
PWC https://paperswithcode.com/paper/no-representation-without-transformation
Repo
Framework
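A rough sketch of the core idea, under simplifying assumptions: encode samples and also a latent code for the transformation relating a pair, applied additively in latent space (z' ≈ z + t). The tiny networks and the cyclic-shift "transformation" are illustrative, not the paper's hierarchical graphical model.

```python
# Toy pair autoencoder with an additive latent transformation code.
# Architecture and the shift transformation are illustrative assumptions.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
dec = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))
t_enc = nn.Linear(16, 8)                         # infers the transformation code from (z, z')
params = list(enc.parameters()) + list(dec.parameters()) + list(t_enc.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(256, 16)
x_shift = torch.roll(x, shifts=1, dims=1)        # the "transformation": a cyclic shift

for _ in range(200):
    z, z_shift = enc(x), enc(x_shift)
    t = t_enc(torch.cat([z, z_shift], dim=1))    # higher-order object mapping z -> z_shift
    loss = ((dec(z) - x) ** 2).mean() + ((dec(z + t) - x_shift) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("toy reconstruction loss:", round(loss.item(), 4))
```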

Content-based image retrieval system with most relevant features among wavelet and color features

Title Content-based image retrieval system with most relevant features among wavelet and color features
Authors Abdolreza Rashno, Elyas Rashno
Abstract Content-based image retrieval (CBIR) has become one of the most important research directions in the domain of digital data management. In this paper, a new feature extraction scheme, including the norm of low-frequency components in the wavelet transform and color features in the RGB and HSV domains, is proposed as the representative feature vector for images in the database, together with an appropriate similarity measure for each feature type. In CBIR systems, retrieval results are highly sensitive to the image features used. We address this problem by selecting the most relevant features from the complete feature set with ant colony optimization (ACO)-based feature selection, which minimizes the number of features while maximizing the F-measure of the CBIR system. To evaluate the performance of our proposed CBIR system, it was compared with three previously proposed systems. Results show that the precision and recall of our proposed system are higher than those of the older systems for the majority of image categories in the Corel database.
Tasks Content-Based Image Retrieval, Feature Selection, Image Retrieval
Published 2019-02-06
URL http://arxiv.org/abs/1902.02059v1
PDF http://arxiv.org/pdf/1902.02059v1.pdf
PWC https://paperswithcode.com/paper/content-based-image-retrieval-system-with
Repo
Framework
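A hedged sketch of the feature-extraction side: the norm of the low-frequency (Haar approximation) component per colour channel plus RGB histograms, compared with a per-feature-type similarity. The decomposition level, the HSV features, and the ACO feature-selection step are omitted here for brevity.

```python
# Simplified CBIR feature extraction and matching; weights and details assumed.
import numpy as np

def haar_lowpass(channel):
    """Haar LL subband (up to scale): 2x2 block averages of one channel."""
    h, w = channel.shape
    return channel[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def features(image):                       # image: (H, W, 3) floats in [0, 1]
    low_norms = [np.linalg.norm(haar_lowpass(image[..., c])) for c in range(3)]
    hists = [np.histogram(image[..., c], bins=16, range=(0, 1), density=True)[0]
             for c in range(3)]
    return np.array(low_norms), np.concatenate(hists)

def similarity(f_query, f_db):
    (lq, hq), (ld, hd) = f_query, f_db
    wavelet_sim = 1.0 / (1.0 + np.linalg.norm(lq - ld))     # distance-based similarity
    colour_sim = np.minimum(hq, hd).sum() / hq.sum()        # histogram intersection
    return 0.5 * wavelet_sim + 0.5 * colour_sim             # equal weights (assumption)

rng = np.random.default_rng(0)
db = [np.clip(0.2 * i + 0.3 * rng.random((64, 64, 3)), 0, 1) for i in range(5)]
query = np.clip(db[2] + 0.02 * rng.random((64, 64, 3)), 0, 1)  # near-duplicate of image 2
scores = [similarity(features(query), features(img)) for img in db]
print("best match:", int(np.argmax(scores)))                   # expected: 2
```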