Paper Group ANR 380
Clustering and Retrieval Method of Immunological Memory Cell in Clonal Selection Algorithm
Title | Clustering and Retrieval Method of Immunological Memory Cell in Clonal Selection Algorithm |
Authors | Takumi Ichimura, Shin Kamada |
Abstract | The clonal selection principle explains the basic features of an adaptive immune response to an antigenic stimulus. It established the idea that only those cells that recognize the antigens are selected to proliferate and differentiate. This paper describes a computational implementation of the clonal selection principle that explicitly takes into account the affinity maturation of the immune response. Antibodies generated by the clonal selection algorithm are clustered into categories according to their affinity maturation, so that immunological memory cells which respond to a specified pathogen are created. Experimental results on classifying a medical database of Coronary Heart Disease are reported. On this dataset, the proposed method achieves 99.6% classification accuracy on the training data. |
Tasks | |
Published | 2018-04-08 |
URL | http://arxiv.org/abs/1804.02628v1 |
http://arxiv.org/pdf/1804.02628v1.pdf | |
PWC | https://paperswithcode.com/paper/clustering-and-retrieval-method-of |
Repo | |
Framework | |
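The affinity-proportional cloning and hypermutation loop that the abstract summarizes can be illustrated with a toy sketch (bit-string antibodies, hypothetical population sizes and mutation rates; not the authors' implementation):

```python
import random

random.seed(0)

def affinity(antibody, antigen):
    """Fraction of matching bits between antibody and antigen."""
    return sum(a == b for a, b in zip(antibody, antigen)) / len(antigen)

def mutate(antibody, rate):
    """Flip each bit with probability `rate` (hypermutation)."""
    return [b ^ (random.random() < rate) for b in antibody]

def clonal_selection(antigen, pop_size=20, n_select=5, n_gen=30):
    n = len(antigen)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda ab: affinity(ab, antigen), reverse=True)
        clones = []
        for rank, ab in enumerate(pop[:n_select]):
            n_clones = (n_select - rank) * 2             # more clones for higher affinity
            rate = 0.5 * (1.0 - affinity(ab, antigen))   # mutate less when affinity is high
            clones += [mutate(ab, rate) for _ in range(n_clones)]
        # elitist re-selection: keep the best of parents and clones
        pop = sorted(pop + clones, key=lambda ab: affinity(ab, antigen),
                     reverse=True)[:pop_size]
    return pop[0]  # best matured antibody: a candidate memory cell

memory = clonal_selection([1, 0, 1, 1, 0, 0, 1, 0])
```

The inverse relation between affinity and mutation rate is the affinity-maturation mechanism the abstract refers to: high-affinity clones are perturbed only slightly, while low-affinity clones explore more aggressively.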
Improving Bi-directional Generation between Different Modalities with Variational Autoencoders
Title | Improving Bi-directional Generation between Different Modalities with Variational Autoencoders |
Authors | Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo |
Abstract | We investigate deep generative models that can exchange multiple modalities bi-directionally, e.g., generating images from corresponding texts and vice versa. A major approach to achieving this objective is to train a model that integrates all the information of different modalities into a joint representation and then to generate one modality from the other via this joint representation. We simply applied this approach to variational autoencoders (VAEs), which we call a joint multimodal variational autoencoder (JMVAE). However, we found that when this model attempts to generate a high-dimensional modality that is missing at the input, the joint representation collapses and this modality cannot be generated successfully. Furthermore, we confirmed that this difficulty cannot be resolved even using a known solution. Therefore, in this study, we propose two models to prevent this difficulty: JMVAE-kl and JMVAE-h. Results of our experiments demonstrate that these methods can prevent the difficulty above and that they generate modalities bi-directionally with equal or higher likelihood than conventional VAE methods, which generate in only one direction. Moreover, we confirm that these methods obtain the joint representation appropriately, so that they can generate diverse variations of a modality by traversing the joint representation or changing the value of the other modality. |
Tasks | |
Published | 2018-01-26 |
URL | http://arxiv.org/abs/1801.08702v1 |
http://arxiv.org/pdf/1801.08702v1.pdf | |
PWC | https://paperswithcode.com/paper/improving-bi-directional-generation-between |
Repo | |
Framework | |
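The JMVAE-kl idea, as the abstract describes it, regularizes each unimodal encoder toward the joint encoder so that a modality missing at test time can still be generated. For diagonal Gaussian posteriors the extra KL terms have a closed form, sketched here with NumPy (the weighting factor `alpha` and the toy posteriors are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL(N(mu_q, var_q) || N(mu_p, var_p)) for diagonal Gaussians, summed over dims."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def jmvae_kl_penalty(joint, enc_x, enc_w, alpha=0.1):
    """JMVAE-kl-style regularizer: pull each unimodal posterior toward the
    joint posterior. Each argument is a (mu, logvar) pair from an encoder."""
    mu_j, lv_j = joint
    return alpha * (gaussian_kl(mu_j, lv_j, *enc_x) + gaussian_kl(mu_j, lv_j, *enc_w))

# toy posteriors over a 3-dimensional latent space
joint = (np.zeros(3), np.zeros(3))
penalty = jmvae_kl_penalty(joint, (np.zeros(3), np.zeros(3)), (np.ones(3), np.zeros(3)))
```

When a unimodal encoder already matches the joint one, its term vanishes; otherwise the penalty grows with the mean and variance mismatch, which is what keeps the joint representation from collapsing when one modality is absent.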
Modeling Psychotherapy Dialogues with Kernelized Hashcode Representations: A Nonparametric Information-Theoretic Approach
Title | Modeling Psychotherapy Dialogues with Kernelized Hashcode Representations: A Nonparametric Information-Theoretic Approach |
Authors | Sahil Garg, Irina Rish, Guillermo Cecchi, Palash Goyal, Sarik Ghazarian, Shuyang Gao, Greg Ver Steeg, Aram Galstyan |
Abstract | We propose a novel dialogue modeling framework, the first nonparametric, kernel-function-based approach to dialogue modeling, which learns kernelized hashcodes as compressed text representations; unlike traditional deep learning models, it handles relatively small datasets well while also scaling to large ones. We also derive a novel lower bound on mutual information, used as a model-selection criterion favoring representations with better alignment between the utterances of participants in a collaborative dialogue setting, as well as higher predictability of the generated responses. As demonstrated on three real-life datasets, most prominently psychotherapy sessions, the proposed approach significantly outperforms several state-of-the-art neural-network-based dialogue systems, both in computational efficiency, reducing training time from days or weeks to hours, and in response quality, achieving an order-of-magnitude improvement over competitors in the frequency of being chosen as the best model by human evaluators. |
Tasks | Dialogue Generation, Model Selection |
Published | 2018-04-26 |
URL | https://arxiv.org/abs/1804.10188v7 |
https://arxiv.org/pdf/1804.10188v7.pdf | |
PWC | https://paperswithcode.com/paper/dialogue-modeling-via-hash-functions |
Repo | |
Framework | |
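A kernelized hashcode representation can be illustrated with one simple construction (an assumed scheme for illustration, not necessarily the paper's exact one): each bit records whether an utterance's feature vector is kernel-closer to one random reference example than to another, so similar utterances tend to share bits:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(x, y, gamma=0.5):
    """RBF kernel between two feature vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_hashcode(x, refs_a, refs_b, gamma=0.5):
    """Bit i is 1 when x is kernel-closer to reference refs_a[i] than to refs_b[i]."""
    return np.array([int(rbf(x, a, gamma) > rbf(x, b, gamma))
                     for a, b in zip(refs_a, refs_b)])

# toy "utterance" features and random reference splits (hypothetical data)
data = rng.normal(size=(20, 5))
refs_a, refs_b = data[:8], data[8:16]
code = kernel_hashcode(data[16], refs_a, refs_b)
```

The hashcode is cheap to compare (Hamming distance) yet inherits the kernel's notion of similarity, which is what lets the approach work on small datasets while scaling to large ones.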
Deep Reinforcement Learning for Image Hashing
Title | Deep Reinforcement Learning for Image Hashing |
Authors | Yuxin Peng, Jian Zhang, Zhaoda Ye |
Abstract | Deep hashing methods have received much attention recently, achieving promising results by taking advantage of the strong representation power of deep networks. However, most existing deep hashing methods learn a whole set of hashing functions independently, while ignoring the correlations between different hashing functions that can greatly improve retrieval accuracy. Inspired by the sequential decision ability of deep reinforcement learning, we propose a new Deep Reinforcement Learning approach for Image Hashing (DRLIH). Our proposed DRLIH approach models hashing learning as a sequential decision process, which learns each hashing function by correcting the errors imposed by previous ones, promoting retrieval accuracy. To the best of our knowledge, this is the first work to address the hashing problem from a deep reinforcement learning perspective. The main contributions of our proposed DRLIH approach can be summarized as follows: (1) We propose a deep reinforcement learning hashing network. In the proposed network, we utilize a recurrent neural network (RNN) as agents to model the hashing functions, which take actions of projecting images into binary codes sequentially, so that the current hashing function learning can take previous hashing functions' errors into account. (2) We propose a sequential learning strategy based on DRLIH. We define the state as a tuple of the internal features of the RNN's hidden layers and image features, which can reflect history decisions made by the agents. We also propose an action group method to enhance the correlation of hash functions in the same group. Experiments on three widely used datasets demonstrate the effectiveness of our proposed DRLIH approach. |
Tasks | |
Published | 2018-02-07 |
URL | http://arxiv.org/abs/1802.02904v2 |
http://arxiv.org/pdf/1802.02904v2.pdf | |
PWC | https://paperswithcode.com/paper/deep-reinforcement-learning-for-image-hashing |
Repo | |
Framework | |
Exact Distributed Training: Random Forest with Billions of Examples
Title | Exact Distributed Training: Random Forest with Billions of Examples |
Authors | Mathieu Guillame-Bert, Olivier Teytaud |
Abstract | We introduce an exact distributed algorithm to train Random Forest models, as well as other decision forest models, without relying on approximate best split search. We explain the proposed algorithm and compare it to related approaches for various complexity measures (time, RAM, disk, and network complexity analysis). We report its running performance on artificial and real-world datasets of up to 18 billion examples. This figure is several orders of magnitude larger than the datasets tackled in the existing literature. Finally, we empirically show that Random Forest benefits from being trained on more data, even in the case of already gigantic datasets. Given a dataset with 17.3B examples and 82 features (3 numerical, the others categorical with high arity), our implementation trains a tree in 22h. |
Tasks | |
Published | 2018-04-18 |
URL | http://arxiv.org/abs/1804.06755v1 |
http://arxiv.org/pdf/1804.06755v1.pdf | |
PWC | https://paperswithcode.com/paper/exact-distributed-training-random-forest-with |
Repo | |
Framework | |
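The "exact best split search" the abstract contrasts with approximate methods can be shown on a single numeric feature: sort once, then scan every boundary and score it exactly. A minimal single-machine sketch of that scan (the paper's contribution is distributing this work exactly across workers, which is omitted here):

```python
from collections import Counter

def gini(counts, total):
    """Gini impurity of a class-count dictionary."""
    return 1.0 - sum((c / total) ** 2 for c in counts.values()) if total else 0.0

def best_split(values, labels):
    """Exact best threshold on one numeric feature: sort once, then scan every
    boundary and keep the split with the lowest weighted Gini impurity."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    left, right = Counter(), Counter(l for _, l in pairs)
    best = (float("inf"), None)
    for i in range(n - 1):
        v, l = pairs[i]
        left[l] += 1
        right[l] -= 1
        if pairs[i + 1][0] == v:   # cannot split between equal feature values
            continue
        nl = i + 1
        score = nl / n * gini(left, nl) + (n - nl) / n * gini(right, n - nl)
        if score < best[0]:
            best = (score, (v + pairs[i + 1][0]) / 2)
    return best  # (weighted impurity, threshold)

score, thr = best_split([1.0, 2.0, 3.0, 10.0, 11.0, 12.0], [0, 0, 0, 1, 1, 1])
```

Because the left/right class counts are simple additive statistics, they can be sharded and merged across machines without approximation, which is the property an exact distributed trainer exploits.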
Studio2Shop: from studio photo shoots to fashion articles
Title | Studio2Shop: from studio photo shoots to fashion articles |
Authors | Julia Lasserre, Katharina Rasch, Roland Vollgraf |
Abstract | Fashion is an increasingly important topic in computer vision, in particular the so-called street-to-shop task of matching street images with shop images containing similar fashion items. Solving this problem promises new means of making fashion searchable and helping shoppers find the articles they are looking for. This paper focuses on finding pieces of clothing worn by a person in full-body or half-body images with neutral backgrounds. Such images are ubiquitous on the web and in fashion blogs, and are typically studio photos; we refer to this setting as studio-to-shop. Recent advances in computational fashion include the development of domain-specific numerical representations. Our model Studio2Shop builds on top of such representations and uses a deep convolutional network trained to match a query image to the numerical feature vectors of all the articles annotated in this image. Top-$k$ retrieval evaluation on test query images shows that the correct items are most often found within a range that is sufficiently small for building realistic visual search engines for the studio-to-shop setting. |
Tasks | |
Published | 2018-07-02 |
URL | http://arxiv.org/abs/1807.00556v1 |
http://arxiv.org/pdf/1807.00556v1.pdf | |
PWC | https://paperswithcode.com/paper/studio2shop-from-studio-photo-shoots-to |
Repo | |
Framework | |
Synthetic Sampling for Multi-Class Malignancy Prediction
Title | Synthetic Sampling for Multi-Class Malignancy Prediction |
Authors | Matthew Yung, Eli T. Brown, Alexander Rasin, Jacob D. Furst, Daniela S. Raicu |
Abstract | We explore several oversampling techniques for an imbalanced multi-label classification problem, a setting often encountered when developing models for Computer-Aided Diagnosis (CADx) systems. While most CADx systems aim to optimize classifiers for overall accuracy without considering the relative distribution of each class, we look into using synthetic sampling to increase per-class performance when predicting the degree of malignancy. Using low-level image features and a random forest classifier, we show that synthetic oversampling techniques increase the sensitivity of the minority classes by an average of 7.22 percentage points, with as much as a 19.88-percentage-point increase in sensitivity for a particular minority class. Furthermore, the analysis of low-level image feature distributions for the synthetic nodules reveals that these nodules can provide insights on how to preprocess image data for better classification performance or how to supplement the original datasets when more data acquisition is feasible. |
Tasks | Multi-Label Classification |
Published | 2018-07-07 |
URL | http://arxiv.org/abs/1807.02608v1 |
http://arxiv.org/pdf/1807.02608v1.pdf | |
PWC | https://paperswithcode.com/paper/synthetic-sampling-for-multi-class-malignancy |
Repo | |
Framework | |
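The synthetic oversampling described above is in the spirit of SMOTE: new minority samples are interpolated between existing minority examples and their nearest minority neighbours. A minimal sketch (toy data and hypothetical parameters; the paper's exact variant may differ):

```python
import numpy as np

rng = np.random.default_rng(42)

def smote_like(minority, n_new, k=3):
    """SMOTE-style oversampling: each synthetic sample is a random
    interpolation between a minority example and one of its k nearest
    minority neighbours."""
    X = np.asarray(minority, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.sum((X - X[i]) ** 2, axis=1)
        nn = np.argsort(d)[1:k + 1]              # skip the point itself
        j = rng.choice(nn)
        out.append(X[i] + rng.random() * (X[j] - X[i]))
    return np.array(out)

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
synthetic = smote_like(minority, n_new=6)
```

Since every synthetic point lies on a segment between two real minority examples, the new samples stay inside the minority class's local feature region rather than being arbitrary noise.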
Deep Learning Framework for Wireless Systems: Applications to Optical Wireless Communications
Title | Deep Learning Framework for Wireless Systems: Applications to Optical Wireless Communications |
Authors | Hoon Lee, Sang Hyun Lee, Tony Q. S. Quek, Inkyu Lee |
Abstract | Optical wireless communication (OWC) is a promising technology for future wireless communications owing to its potential for cost-effective network deployment and high data rates. There are several implementation issues in OWC that have not been encountered in radio-frequency wireless communications. First, practical OWC transmitters need illumination control over color, intensity, luminance, etc., which poses complicated modulation design challenges. Furthermore, signal-dependent properties of optical channels raise non-trivial challenges in both modulation and demodulation of the optical signals. To tackle such difficulties, deep learning (DL) technologies can be applied to optical wireless transceiver design. This article addresses recent efforts on DL-based OWC system designs. A DL framework for emerging image sensor communication is proposed and its feasibility is verified by simulation. Finally, technical challenges and implementation issues for the DL-based optical wireless technology are discussed. |
Tasks | |
Published | 2018-12-13 |
URL | http://arxiv.org/abs/1812.05227v1 |
http://arxiv.org/pdf/1812.05227v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-learning-framework-for-wireless-systems |
Repo | |
Framework | |
Large-scale Hierarchical Alignment for Data-driven Text Rewriting
Title | Large-scale Hierarchical Alignment for Data-driven Text Rewriting |
Authors | Nikola I. Nikolov, Richard H. R. Hahnloser |
Abstract | We propose a simple unsupervised method for extracting pseudo-parallel monolingual sentence pairs from comparable corpora representative of two different text styles, such as news articles and scientific papers. Our approach does not require a seed parallel corpus, but instead relies solely on hierarchical search over pre-trained embeddings of documents and sentences. We demonstrate the effectiveness of our method through automatic and extrinsic evaluation on text simplification from the normal to the Simple Wikipedia. We show that pseudo-parallel sentences extracted with our method not only supplement existing parallel data, but can even lead to competitive performance on their own. |
Tasks | Style Transfer, Text Simplification |
Published | 2018-10-18 |
URL | https://arxiv.org/abs/1810.08237v2 |
https://arxiv.org/pdf/1810.08237v2.pdf | |
PWC | https://paperswithcode.com/paper/large-scale-hierarchical-alignment-for-author |
Repo | |
Framework | |
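The hierarchical search the abstract describes — match documents first, then sentences within the matched documents — can be sketched over precomputed sentence embeddings (the similarity threshold and toy embeddings below are illustrative assumptions):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity of two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def hierarchical_align(src_docs, tgt_docs, sent_threshold=0.8):
    """Two-stage alignment sketch: pair each source document with its most
    similar target document (via its mean sentence embedding), then keep
    sentence pairs whose cosine similarity clears the threshold.
    Each document is an array of shape (n_sentences, dim)."""
    pairs = []
    for src in src_docs:
        src_vec = np.mean(src, axis=0)
        best = max(tgt_docs, key=lambda t: cosine(src_vec, np.mean(t, axis=0)))
        for s in src:
            j = max(range(len(best)), key=lambda j: cosine(s, best[j]))
            if cosine(s, best[j]) >= sent_threshold:
                pairs.append((s, best[j]))
    return pairs

# toy sentence embeddings: doc_b is "about" the same topics as doc_a
doc_a = np.array([[1.0, 0.0], [0.0, 1.0]])
doc_b = np.array([[0.9, 0.1], [0.1, 0.9]])
doc_c = np.array([[-1.0, 0.0], [0.0, -1.0]])
pairs = hierarchical_align([doc_a], [doc_b, doc_c])
```

Restricting the sentence search to the best-matching document is what makes the approach scale: sentence comparisons happen only within a small candidate set instead of across the whole corpus.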
A Review of Challenges and Opportunities in Machine Learning for Health
Title | A Review of Challenges and Opportunities in Machine Learning for Health |
Authors | Marzyeh Ghassemi, Tristan Naumann, Peter Schulam, Andrew L. Beam, Irene Y. Chen, Rajesh Ranganath |
Abstract | Modern electronic health records (EHRs) provide data to answer clinically meaningful questions. The growing data in EHRs makes healthcare ripe for the use of machine learning. However, learning in a clinical setting presents unique challenges that complicate the use of common machine learning methodologies. For example, diseases in EHRs are poorly labeled, conditions can encompass multiple underlying endotypes, and healthy individuals are underrepresented. This article serves as a primer to illuminate these challenges and highlights opportunities for members of the machine learning community to contribute to healthcare. |
Tasks | |
Published | 2018-06-01 |
URL | https://arxiv.org/abs/1806.00388v4 |
https://arxiv.org/pdf/1806.00388v4.pdf | |
PWC | https://paperswithcode.com/paper/opportunities-in-machine-learning-for |
Repo | |
Framework | |
Angular Triplet-Center Loss for Multi-view 3D Shape Retrieval
Title | Angular Triplet-Center Loss for Multi-view 3D Shape Retrieval |
Authors | Zhaoqun Li, Cheng Xu, Biao Leng |
Abstract | How to obtain a desirable representation of a 3D shape, which is discriminative across categories and polymerized within classes, is a significant challenge in 3D shape retrieval. Most existing 3D shape retrieval methods focus on capturing strongly discriminative shape representations with a softmax loss for the classification task, while shape feature learning with a metric loss is neglected for 3D shape retrieval. In this paper, we address this problem based on the intuition that the cosine distance of shape embeddings should be close enough within the same class and far away across categories. Since most 3D shape retrieval tasks use the cosine distance of shape features for measuring shape similarity, we propose a novel metric loss named angular triplet-center loss, which directly optimizes the cosine distances between the features. It inherits the triplet-center loss property to achieve larger inter-class distance and smaller intra-class distance simultaneously. Unlike previous metric losses utilized in 3D shape retrieval methods, where Euclidean distance is adopted and the margin is difficult to design, the proposed method is more convenient for training feature embeddings and more suitable for 3D shape retrieval. Moreover, the angle margin is adopted to replace the cosine margin in order to provide more explicit discriminative constraints on the embedding space. Extensive experimental results on two popular 3D object retrieval benchmarks, ModelNet40 and ShapeNetCore 55, demonstrate the effectiveness of our proposed loss, and our method has achieved state-of-the-art results on various 3D shape datasets. |
Tasks | 3D Object Retrieval, 3D Shape Retrieval, Multi-View 3D Shape Retrieval |
Published | 2018-11-21 |
URL | http://arxiv.org/abs/1811.08622v3 |
http://arxiv.org/pdf/1811.08622v3.pdf | |
PWC | https://paperswithcode.com/paper/angular-triplet-center-loss-for-multi-view-3d |
Repo | |
Framework | |
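The angular triplet-center loss can be sketched directly from the abstract's description: after L2 normalization, each embedding's angle to its own class centre should undershoot its angle to the nearest other centre by at least the angle margin. A minimal NumPy illustration (margin values and toy embeddings are assumptions; centre updates during training are omitted):

```python
import numpy as np

def angular_tc_loss(features, labels, centers, margin=0.5):
    """Angular triplet-center loss sketch: hinge on the gap between each
    embedding's angle to its own class centre and its angle to the nearest
    non-matching centre. All vectors are L2-normalised first."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    angles = np.arccos(np.clip(f @ c.T, -1.0, 1.0))  # (n_samples, n_classes)
    loss = 0.0
    for i, y in enumerate(labels):
        pos = angles[i, y]                           # angle to own centre
        neg = np.min(np.delete(angles[i], y))        # nearest other centre
        loss += max(0.0, pos + margin - neg)
    return loss / len(labels)

features = np.array([[1.0, 0.1], [0.1, 1.0]])
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = angular_tc_loss(features, [0, 1], centers, margin=0.2)
```

Working in angles rather than Euclidean distances keeps the margin interpretable (it is bounded by pi), which is the margin-design convenience the abstract claims over Euclidean triplet-center losses.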
Artificial Intelligence-Defined 5G Radio Access Networks
Title | Artificial Intelligence-Defined 5G Radio Access Networks |
Authors | Miao Yao, Munawwar Sohul, Vuk Marojevic, Jeffrey H. Reed |
Abstract | Massive multiple-input multiple-output antenna systems, millimeter wave communications, and ultra-dense networks have been widely perceived as the three key enablers that facilitate the development and deployment of 5G systems. This article discusses the intelligent agent in the 5G base station, which combines sensing, learning, understanding, and optimizing to facilitate these enablers. We present a flexible, rapidly deployable, and cross-layer artificial intelligence (AI)-based framework to meet the imminent and future demands on 5G and beyond infrastructure. We present example AI-enabled 5G use cases that accommodate important 5G-specific capabilities and discuss the value of AI for enabling beyond-5G network evolution. |
Tasks | |
Published | 2018-11-21 |
URL | http://arxiv.org/abs/1811.08792v2 |
http://arxiv.org/pdf/1811.08792v2.pdf | |
PWC | https://paperswithcode.com/paper/artificial-intelligence-defined-5g-radio |
Repo | |
Framework | |
Deep Cross-modality Adaptation via Semantics Preserving Adversarial Learning for Sketch-based 3D Shape Retrieval
Title | Deep Cross-modality Adaptation via Semantics Preserving Adversarial Learning for Sketch-based 3D Shape Retrieval |
Authors | Jiaxin Chen, Yi Fang |
Abstract | Due to the large cross-modality discrepancy between 2D sketches and 3D shapes, retrieving 3D shapes by sketches is a significantly challenging task. To address this problem, we propose a novel framework to learn a discriminative deep cross-modality adaptation model in this paper. Specifically, we first separately adopt two metric networks, following two deep convolutional neural networks (CNNs), to learn modality-specific discriminative features based on an importance-aware metric learning method. Subsequently, we explicitly introduce a cross-modality transformation network to compensate for the divergence between two modalities, which can transfer features of 2D sketches to the feature space of 3D shapes. We develop an adversarial learning based method to train the transformation model, by simultaneously enhancing the holistic correlations between data distributions of two modalities, and mitigating the local semantic divergences through minimizing a cross-modality mean discrepancy term. Experimental results on the SHREC 2013 and SHREC 2014 datasets clearly show the superior retrieval performance of our proposed model, compared to the state-of-the-art approaches. |
Tasks | 3D Shape Retrieval, Metric Learning |
Published | 2018-07-04 |
URL | http://arxiv.org/abs/1807.01806v1 |
http://arxiv.org/pdf/1807.01806v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-cross-modality-adaptation-via-semantics |
Repo | |
Framework | |
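The cross-modality mean discrepancy term mentioned in the abstract can be illustrated as a per-class distance between modality feature means (a simplified stand-in for the paper's exact formulation, on hypothetical toy features):

```python
import numpy as np

def mean_discrepancy(feat_sketch, feat_shape, labels_sketch, labels_shape):
    """Per class, the squared distance between the mean (transformed) sketch
    feature and the mean 3D-shape feature; summed over classes. Minimising
    this pulls the two modalities' class-wise feature distributions together."""
    total = 0.0
    for c in set(labels_sketch):
        mu_sk = feat_sketch[np.array(labels_sketch) == c].mean(axis=0)
        mu_sh = feat_shape[np.array(labels_shape) == c].mean(axis=0)
        total += float(np.sum((mu_sk - mu_sh) ** 2))
    return total

# toy features: class 0 is misaligned across modalities, class 1 already matches
feat_sketch = np.array([[0.0, 0.0], [0.0, 0.0], [2.0, 0.0], [2.0, 0.0]])
feat_shape = np.array([[1.0, 0.0], [1.0, 0.0], [2.0, 0.0], [2.0, 0.0]])
labels = [0, 0, 1, 1]
d = mean_discrepancy(feat_sketch, feat_shape, labels, labels)
```

In the paper this term complements the adversarial objective: the adversary aligns the modalities' distributions holistically, while the class-wise mean term targets the local semantic divergences.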
Binary Matrix Completion Using Unobserved Entries
Title | Binary Matrix Completion Using Unobserved Entries |
Authors | Masayoshi Hayashi, Tomoya Sakai, Masashi Sugiyama |
Abstract | A matrix completion problem, which aims to recover a complete matrix from its partial observations, is one of the important problems in the machine learning field and has been studied actively. However, there is a discrepancy between the mainstream problem setting, which assumes continuous-valued observations, and some practical applications such as recommendation systems and SNS link prediction, where observations take discrete or even binary values. To cope with this problem, Davenport et al. (2014) proposed a binary matrix completion (BMC) problem, where observations are quantized into binary values. Hsieh et al. (2015) proposed a PU (Positive and Unlabeled) matrix completion problem, an extension of the BMC problem. This problem targets settings where we cannot observe negative values, such as SNS link prediction. In constructing their method for this setting, they adopted a classification methodology, regarding each matrix entry as a sample. Their risk, which also defines losses over unobserved entries, indicates the possibility of using unobserved entries. In this paper, motivated by a semi-supervised classification method recently proposed by Sakai et al. (2017), we develop a method for the BMC problem which can use all of the positive, negative, and unobserved entries, by combining the risks of Davenport et al. (2014) and Hsieh et al. (2015). To the best of our knowledge, this is the first BMC method that exploits all kinds of matrix entries. We experimentally show that an appropriate mixture of risks improves performance. |
Tasks | Matrix Completion, Recommendation Systems |
Published | 2018-03-13 |
URL | http://arxiv.org/abs/1803.04663v1 |
http://arxiv.org/pdf/1803.04663v1.pdf | |
PWC | https://paperswithcode.com/paper/binary-matrix-completion-using-unobserved |
Repo | |
Framework | |
Constraint-free Natural Image Reconstruction from fMRI Signals Based on Convolutional Neural Network
Title | Constraint-free Natural Image Reconstruction from fMRI Signals Based on Convolutional Neural Network |
Authors | Chi Zhang, Kai Qiao, Linyuan Wang, Li Tong, Ying Zeng, Bin Yan |
Abstract | In recent years, research on decoding brain activity based on functional magnetic resonance imaging (fMRI) has made remarkable achievements. However, constraint-free natural image reconstruction from brain activity is still a challenge. Existing methods simplified the problem by using semantic prior information or by reconstructing only simple images such as letters and digits. Without semantic prior information, we present a novel method to reconstruct natural images from fMRI signals of the human visual cortex based on the computational model of the convolutional neural network (CNN). First, we extracted the unit outputs of viewed natural images in each layer of a pre-trained CNN as CNN features. Second, we transformed image reconstruction from fMRI signals into the problem of CNN feature visualization by training a sparse linear regression to map from fMRI patterns to CNN features. By iterative optimization to find the matched image, whose CNN unit features are most similar to those predicted from the brain activity, we achieved promising results for the challenging constraint-free natural image reconstruction. As no semantic prior information about the stimuli was used when training the decoding model, any category of images (not constrained by the training set) could in theory be reconstructed. We found that the reconstructed images resembled the natural stimuli, especially in position and shape. The experimental results suggest that hierarchical visual features can effectively express the visual perception process of the human brain. |
Tasks | Image Reconstruction |
Published | 2018-01-16 |
URL | http://arxiv.org/abs/1801.05151v1 |
http://arxiv.org/pdf/1801.05151v1.pdf | |
PWC | https://paperswithcode.com/paper/constraint-free-natural-image-reconstruction |
Repo | |
Framework | |
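The fMRI-to-CNN-feature decoding step is a regularized linear regression. A closed-form ridge sketch with synthetic stand-in data (the paper uses sparse linear regression; ridge is substituted here for brevity, and all shapes are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def ridge_fit(X, Y, lam=1.0):
    """Linear map from fMRI patterns X (n_scans, n_voxels) to CNN feature
    targets Y (n_scans, n_features), fitted in closed form with L2
    regularisation: W = (X'X + lam I)^{-1} X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# toy stand-ins: 50 "scans" of 30 voxels mapped to 10-dim "CNN features"
X = rng.normal(size=(50, 30))
W_true = rng.normal(size=(30, 10))
Y = X @ W_true + 0.01 * rng.normal(size=(50, 10))

W = ridge_fit(X, Y, lam=0.1)
pred = X @ W
```

In the paper's pipeline, the predicted CNN features then drive an image-space optimization that searches for the image whose CNN activations best match `pred`; the regression above is only the decoding half of that loop.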