October 18, 2019

2888 words 14 mins read

Paper Group ANR 656

Visual Analogies between Atari Games for Studying Transfer Learning in RL. Agreement Rate Initialized Maximum Likelihood Estimator for Ensemble Classifier Aggregation and Its Application in Brain-Computer Interface. Exploration vs. Exploitation in Team Formation. Time-interval balancing in multi-processor scheduling of composite modular jobs (preli …

Visual Analogies between Atari Games for Studying Transfer Learning in RL

Title Visual Analogies between Atari Games for Studying Transfer Learning in RL
Authors Doron Sobol, Lior Wolf, Yaniv Taigman
Abstract In this work, we ask the following question: Can visual analogies, learned in an unsupervised way, be used in order to transfer knowledge between pairs of games and even play one game using an agent trained for another game? We attempt to answer this research question by creating visual analogies between a pair of games: a source game and a target game. For example, given a video frame in the target game, we map it to an analogous state in the source game and then attempt to play using a trained policy learned for the source game. We demonstrate convincing visual mapping between four pairs of games (eight mappings), which are used to evaluate three transfer learning approaches.
Tasks Atari Games, Transfer Learning
Published 2018-07-29
URL http://arxiv.org/abs/1807.11074v1
PDF http://arxiv.org/pdf/1807.11074v1.pdf
PWC https://paperswithcode.com/paper/visual-analogies-between-atari-games-for
Repo
Framework
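
As a rough illustration of the transfer setup described above (not the paper's code), the loop below maps each target-game frame into the source game's visual space and queries a policy trained on the source game. All names (`target_env`, `mapper`, `source_policy`) are hypothetical placeholders following the classic Gym step interface.

```python
def play_with_analogy(target_env, mapper, source_policy, episodes=1):
    """Play the target game by translating frames into the source game's domain."""
    returns = []
    for _ in range(episodes):
        obs, done, total = target_env.reset(), False, 0.0
        while not done:
            analog_obs = mapper(obs)            # unsupervised frame-to-frame analogy
            action = source_policy(analog_obs)  # policy trained only on the source game
            obs, reward, done, _ = target_env.step(action)
            total += reward
        returns.append(total)
    return returns
```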

Agreement Rate Initialized Maximum Likelihood Estimator for Ensemble Classifier Aggregation and Its Application in Brain-Computer Interface

Title Agreement Rate Initialized Maximum Likelihood Estimator for Ensemble Classifier Aggregation and Its Application in Brain-Computer Interface
Authors Dongrui Wu, Vernon J. Lawhern, Stephen Gordon, Brent J. Lance, Chin-Teng Lin
Abstract Ensemble learning is a powerful approach to construct a strong learner from multiple base learners. The most popular way to aggregate an ensemble of classifiers is majority voting, which assigns a sample to the class that most base classifiers vote for. However, improved performance can be obtained by assigning weights to the base classifiers according to their accuracy. This paper proposes an agreement rate initialized maximum likelihood estimator (ARIMLE) to optimally fuse the base classifiers. ARIMLE first uses a simplified agreement rate method to estimate the classification accuracy of each base classifier from the unlabeled samples, then employs the accuracies to initialize a maximum likelihood estimator (MLE), and finally uses the expectation-maximization algorithm to refine the MLE. Extensive experiments on visually evoked potential classification in a brain-computer interface application show that ARIMLE outperforms majority voting, and also achieves better or comparable performance with several other state-of-the-art classifier combination approaches.
Tasks
Published 2018-05-12
URL http://arxiv.org/abs/1805.04740v1
PDF http://arxiv.org/pdf/1805.04740v1.pdf
PWC https://paperswithcode.com/paper/agreement-rate-initialized-maximum-likelihood
Repo
Framework
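
The core aggregation idea, weighting base classifiers by their estimated accuracies rather than counting equal votes, can be sketched as below. This is only the weighted-voting ingredient; ARIMLE's agreement-rate accuracy estimation from unlabeled data and its EM refinement of the MLE are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def weighted_vote(predictions, accuracies, n_classes):
    """Fuse base-classifier labels, weighting each classifier by its log-odds accuracy.

    predictions: (n_classifiers, n_samples) array of predicted class indices
    accuracies:  (n_classifiers,) estimated accuracy of each base classifier
    """
    eps = 1e-6
    weights = np.log((accuracies + eps) / (1.0 - accuracies + eps))
    scores = np.zeros((predictions.shape[1], n_classes))
    for k, preds in enumerate(predictions):
        scores[np.arange(preds.size), preds] += weights[k]
    return scores.argmax(axis=1)  # majority voting is the special case of equal weights
```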

Exploration vs. Exploitation in Team Formation

Title Exploration vs. Exploitation in Team Formation
Authors Ramesh Johari, Vijay Kamble, Anilesh K. Krishnaswamy, Hannah Li
Abstract An online labor platform faces an online learning problem in matching workers with jobs and using the performance on these jobs to create better future matches. This learning problem is complicated by the rise of complex tasks on these platforms, such as web development and product design, that require a team of workers to complete. The success of a job is now a function of the skills and contributions of all workers involved, which may be unknown to both the platform and the client who posted the job. These team matchings produce structured correlations in what is learned about the individuals, and this information can be utilized to create better future matches. We analyze two natural settings in which the performance of a team is dictated by its strongest and its weakest member, respectively. We find that both problems pose an exploration-exploitation tradeoff between learning the performance of untested teams and repeating previously tested teams that performed well. We establish fundamental regret bounds and design near-optimal algorithms that uncover several insights into these tradeoffs.
Tasks
Published 2018-09-18
URL http://arxiv.org/abs/1809.06937v2
PDF http://arxiv.org/pdf/1809.06937v2.pdf
PWC https://paperswithcode.com/paper/exploration-vs-exploitation-in-team-formation
Repo
Framework
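
A generic bandit baseline makes the exploration/exploitation tension concrete, though it is not one of the paper's near-optimal algorithms: teams are arms, and a team's observed success might be driven by its weakest or strongest member. The interfaces here are hypothetical.

```python
import random

def epsilon_greedy_teams(teams, observe_success, rounds=1000, eps=0.1):
    """Toy epsilon-greedy loop over candidate teams (tuples of worker ids)."""
    counts = {t: 0 for t in teams}
    means = {t: 0.0 for t in teams}
    for _ in range(rounds):
        if random.random() < eps or not any(counts.values()):
            team = random.choice(teams)                # explore an untested team
        else:
            team = max(teams, key=lambda t: means[t])  # repeat the best team so far
        r = observe_success(team)   # e.g. noisy function of the weakest/strongest member
        counts[team] += 1
        means[team] += (r - means[team]) / counts[team]
    return max(teams, key=lambda t: means[t])
```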

Time-interval balancing in multi-processor scheduling of composite modular jobs (preliminary description)

Title Time-interval balancing in multi-processor scheduling of composite modular jobs (preliminary description)
Authors Mark Sh. Levin
Abstract The article describes a special time-interval balancing approach in multi-processor scheduling of composite modular jobs. This scheduling problem is close to the just-in-time planning approach. First, brief literature surveys are presented on just-in-time scheduling and due-date/due-window scheduling problems. Further, the problem and its formulation are proposed for the time-interval balanced scheduling of composite modular jobs. An illustrative real-world planning example for modular home building is described. Here, the main objective is a balance between production of the typical building modules (details) and the assembly processes of the building(s) (by several teams). The assembly plan has to be modified to satisfy the balance requirements. The solving framework is based on the following: (i) clustering of the initial set of modular detail types to obtain about ten basic detail types that correspond to the main manufacturing conveyors; (ii) designing a preliminary assembly plan for the buildings; (iii) detection of unbalanced time periods; and (iv) modification of the planning solution to improve the schedule balance. The framework implements a metaheuristic based on a local optimization approach. Two other applications (supply chain management, information transmission systems) are briefly described.
Tasks
Published 2018-11-11
URL http://arxiv.org/abs/1811.04458v1
PDF http://arxiv.org/pdf/1811.04458v1.pdf
PWC https://paperswithcode.com/paper/time-interval-balancing-in-multi-processor
Repo
Framework
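
Step (iv) of the framework, improving schedule balance by local modification, can be illustrated with a tiny greedy local search that repeatedly moves the lightest job out of the busiest time interval. This is a generic sketch with made-up interfaces, not the paper's metaheuristic.

```python
def local_balance(assign, load_of, n_intervals, max_iter=500):
    """Greedy rebalancing: shift jobs from the busiest to the idlest interval.

    assign:  dict job -> interval index (the current plan)
    load_of: dict job -> load the job places on its interval
    """
    for _ in range(max_iter):
        loads = [0.0] * n_intervals
        for j, s in assign.items():
            loads[s] += load_of[j]
        hi, lo = loads.index(max(loads)), loads.index(min(loads))
        movable = [j for j, s in assign.items() if s == hi]
        if not movable:
            break
        j = min(movable, key=lambda k: load_of[k])
        if load_of[j] >= loads[hi] - loads[lo]:
            break  # no single move narrows the busiest/idlest gap any further
        assign[j] = lo
    return assign
```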

Recurrent Iterative Gating Networks for Semantic Segmentation

Title Recurrent Iterative Gating Networks for Semantic Segmentation
Authors Rezaul Karim, Md Amirul Islam, Neil D. B. Bruce
Abstract In this paper, we present an approach for Recurrent Iterative Gating called RIGNet. The core elements of RIGNet involve recurrent connections that control the flow of information in neural networks in a top-down manner, and different variants of the core structure are considered. The iterative nature of this mechanism allows gating to spread in both spatial extent and feature space. This is revealed to be a powerful mechanism with broad compatibility with common existing networks. Analysis shows how gating interacts with different network characteristics, and we also show that shallower networks with gating can be made to perform better than much deeper networks that do not include RIGNet modules.
Tasks Semantic Segmentation
Published 2018-11-20
URL http://arxiv.org/abs/1811.08043v1
PDF http://arxiv.org/pdf/1811.08043v1.pdf
PWC https://paperswithcode.com/paper/recurrent-iterative-gating-networks-for
Repo
Framework
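
A minimal PyTorch-style gating block conveys the top-down control idea: a deeper feature map produces a sigmoid gate that modulates a shallower one, and applying the block over several recurrent iterations lets the gating spread spatially. This is an illustration of the mechanism, not the authors' RIGNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IterativeGate(nn.Module):
    """Toy top-down gating block: a deep feature gates a shallow feature."""

    def __init__(self, low_ch, high_ch):
        super().__init__()
        self.gate_conv = nn.Conv2d(high_ch, low_ch, kernel_size=1)

    def forward(self, low_feat, high_feat):
        # project the deeper (top-down) feature and resize it to the shallow map
        g = self.gate_conv(high_feat)
        g = F.interpolate(g, size=low_feat.shape[2:], mode='bilinear',
                          align_corners=False)
        return low_feat * torch.sigmoid(g)  # gate the bottom-up signal
```

Running the backbone, gating the shallow features, and feeding them forward again for a few iterations is what makes the scheme "recurrent" in spirit.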

Low-Rank Boolean Matrix Approximation by Integer Programming

Title Low-Rank Boolean Matrix Approximation by Integer Programming
Authors Reka Kovacs, Oktay Gunluk, Raphael Hauser
Abstract Low-rank approximations of data matrices are an important dimensionality reduction tool in machine learning and regression analysis. We consider the case of categorical variables, where the problem can be formulated as finding low-rank approximations to Boolean matrices. In this paper we give what is, to the best of our knowledge, the first integer programming formulation that relies on only polynomially many variables and constraints; we discuss how to solve it computationally and report numerical tests on synthetic and real-world data.
Tasks Dimensionality Reduction
Published 2018-03-13
URL http://arxiv.org/abs/1803.04825v1
PDF http://arxiv.org/pdf/1803.04825v1.pdf
PWC https://paperswithcode.com/paper/low-rank-boolean-matrix-approximation-by
Repo
Framework
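
For reference, the quantity such a formulation minimizes is the number of mismatches between the data matrix and a rank-k Boolean product, shown below in a short NumPy sketch; the integer program itself (with polynomially many variables and constraints) is given in the paper and not reproduced here.

```python
import numpy as np

def boolean_product(U, V):
    """Boolean product: (U o V)[i, j] = OR over k of (U[i, k] AND V[k, j])."""
    return ((U.astype(int) @ V.astype(int)) > 0).astype(int)

def reconstruction_error(B, U, V):
    """Number of entries where the rank-k Boolean factorization disagrees with B."""
    return int(np.sum(B != boolean_product(U, V)))
```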

Convergence Rate of Krasulina Estimator

Title Convergence Rate of Krasulina Estimator
Authors Jiangning Chen
Abstract Principal component analysis (PCA) is one of the most commonly used statistical procedures, with a wide range of applications. Consider points $X_1, X_2, \ldots, X_n$ drawn i.i.d. from a distribution with mean zero and unknown covariance $\Sigma$. Let $A_n = X_nX_n^T$; then $E[A_n] = \Sigma$. This paper considers the problem of finding the least eigenvalue and corresponding eigenvector of the matrix $\Sigma$. A classical estimator of this kind is due to Krasulina (1969). We state the convergence proof of Krasulina's scheme for the least eigenvalue and corresponding eigenvector, and then derive their convergence rate.
Tasks
Published 2018-08-28
URL https://arxiv.org/abs/1808.09489v4
PDF https://arxiv.org/pdf/1808.09489v4.pdf
PWC https://paperswithcode.com/paper/convergence-of-krasulina-scheme
Repo
Framework
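
As a rough sketch of the kind of stochastic iteration analyzed, the update below is Krasulina's scheme with the sign flipped so that it descends the Rayleigh quotient toward the least eigenvector; the step-size schedule and the exact variant studied in the paper may differ.

```python
import numpy as np

def krasulina_least_eigvec(samples, step=lambda t: 1.0 / (t + 10.0)):
    """Krasulina-type iteration aimed at the least eigenvector of the covariance.

    samples: (n, d) array of i.i.d. zero-mean vectors X_1, ..., X_n
    """
    d = samples.shape[1]
    w = np.random.randn(d)
    w /= np.linalg.norm(w)
    for t, x in enumerate(samples):
        Aw = x * (x @ w)                        # A_t w, with A_t = X_t X_t^T
        rayleigh = w @ Aw                       # w^T A_t w (w is kept unit-norm)
        w = w - step(t) * (Aw - rayleigh * w)   # descend instead of ascend
        w /= np.linalg.norm(w)
    return w
```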

Self Super-Resolution for Magnetic Resonance Images using Deep Networks

Title Self Super-Resolution for Magnetic Resonance Images using Deep Networks
Authors Can Zhao, Aaron Carass, Blake E. Dewey, Jerry L. Prince
Abstract High-resolution magnetic resonance (MR) imaging (MRI) is desirable in many clinical applications; however, there is a trade-off between resolution, speed of acquisition, and noise. It is common for MR images to have worse through-plane resolution (slice thickness) than in-plane resolution. In such images, high-frequency information in the through-plane direction is not acquired and cannot be resolved through interpolation. To address this issue, super-resolution methods have been developed to enhance spatial resolution. Because the problem is ill-posed, state-of-the-art super-resolution methods rely on the presence of external/training atlases to learn the transform from low-resolution (LR) images to high-resolution (HR) images. For several reasons, such HR atlas images are often not available for MRI sequences. This paper presents a self super-resolution (SSR) algorithm, which does not use any external atlas images, yet can still produce HR images relying only on the acquired LR image. We use a blurred version of the input image to create training data for a state-of-the-art super-resolution deep network. The trained network is applied to the original input image to estimate the HR image. Our SSR results show a significant improvement in through-plane resolution compared to competing SSR methods.
Tasks Super-Resolution
Published 2018-02-26
URL http://arxiv.org/abs/1802.09431v1
PDF http://arxiv.org/pdf/1802.09431v1.pdf
PWC https://paperswithcode.com/paper/self-super-resolution-for-magnetic-resonance
Repo
Framework
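
The self-training idea can be sketched in a few lines: degrade the volume along an in-plane axis so that the (blurred, original) pair mimics the (thick-slice, thin-slice) relationship, train any SR network on those pairs, and apply it along the true through-plane axis. The blur model and axis handling below are simplifications, not the paper's exact degradation.

```python
from scipy.ndimage import gaussian_filter1d

def self_training_pair(volume, slice_axis=2, blur_sigma=2.0):
    """Create an (LR, HR) training pair from the acquired volume itself.

    volume:     3D MR volume with low through-plane resolution along `slice_axis`
    blur_sigma: strength of the simulated thick-slice blurring
    """
    inplane_axis = (slice_axis + 1) % 3                          # a high-resolution direction
    lr = gaussian_filter1d(volume, sigma=blur_sigma, axis=inplane_axis)
    return lr, volume                                            # network learns lr -> volume
```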

Link prediction for egocentrically sampled networks

Title Link prediction for egocentrically sampled networks
Authors Yun-Jhong Wu, Elizaveta Levina, Ji Zhu
Abstract Link prediction in networks is typically accomplished by estimating or ranking the probabilities of edges for all pairs of nodes. In practice, especially for social networks, the data are often collected by egocentric sampling, which means selecting a subset of nodes and recording all of their edges. This sampling mechanism requires different prediction tools than the typical assumption of links missing at random. We propose a new computationally efficient link prediction algorithm for egocentrically sampled networks, which estimates the underlying probability matrix by estimating its row space. For networks created by sampling rows, our method outperforms many popular link prediction and graphon estimation techniques.
Tasks Graphon Estimation, Link Prediction
Published 2018-03-12
URL http://arxiv.org/abs/1803.04084v1
PDF http://arxiv.org/pdf/1803.04084v1.pdf
PWC https://paperswithcode.com/paper/link-prediction-for-egocentrically-sampled
Repo
Framework
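
A loose illustration of the row-space idea, under the assumption that the sampled egos' rows are fully observed: estimate a k-dimensional row space from those rows and score candidate links by projecting onto it. This is a simplified stand-in for the paper's estimator, with illustrative names.

```python
import numpy as np

def rowspace_link_scores(observed_rows, k):
    """Score links by projecting observed rows onto an estimated k-dim row space.

    observed_rows: (m, n) adjacency rows of the egocentrically sampled nodes
    """
    # the top-k right singular vectors span the estimated row space
    _, _, vt = np.linalg.svd(observed_rows, full_matrices=False)
    V = vt[:k].T                      # (n, k)
    return observed_rows @ V @ V.T    # smoothed edge scores for the sampled rows
```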

Exploring galaxy evolution with generative models

Title Exploring galaxy evolution with generative models
Authors Kevin Schawinski, M. Dennis Turp, Ce Zhang
Abstract Context. Generative models open up the possibility to interrogate scientific data in a more data-driven way. Aims: We propose a method that uses generative models to explore hypotheses in astrophysics and other areas. We use a neural network to show how we can independently manipulate physical attributes by encoding objects in latent space. Methods: By learning a latent space representation of the data, we can use this network to forward model and explore hypotheses in a data-driven way. We train a neural network to generate artificial data to test hypotheses for the underlying physical processes. Results: We demonstrate this process using a well-studied process in astrophysics, the quenching of star formation in galaxies as they move from low- to high-density environments. This approach can help explore astrophysical and other phenomena in a way that is different from current methods based on simulations and observations.
Tasks
Published 2018-12-03
URL http://arxiv.org/abs/1812.01114v2
PDF http://arxiv.org/pdf/1812.01114v2.pdf
PWC https://paperswithcode.com/paper/exploring-galaxy-evolution-with-generative
Repo
Framework
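
The latent-space manipulation step can be sketched generically: encode an object, shift its latent code along a direction associated with a physical attribute, and decode. The encoder/decoder interfaces and the way the attribute direction is obtained are assumptions for illustration, not the paper's pipeline.

```python
def shift_attribute(encoder, decoder, image, direction, strength=1.0):
    """Move an object along an attribute direction in latent space and decode it.

    direction: unit vector in latent space tied to a physical attribute, e.g. the
               difference of latent means between low- and high-density galaxies.
    """
    z = encoder(image)                    # latent representation of the input
    z_shifted = z + strength * direction  # hypothesis: "what if this galaxy were denser?"
    return decoder(z_shifted)             # forward-model the manipulated object
```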

Multiple sclerosis lesion enhancement and white matter region estimation using hyperintensities in FLAIR images

Title Multiple sclerosis lesion enhancement and white matter region estimation using hyperintensities in FLAIR images
Authors Paulo G. L. Freire, Ricardo J. Ferrari
Abstract Multiple sclerosis (MS) is a demyelinating disease that affects more than 2 million people worldwide. The most used imaging technique to help in its diagnosis and follow-up is magnetic resonance imaging (MRI). Fluid Attenuated Inversion Recovery (FLAIR) images are usually acquired in the context of MS because lesions often appear hyperintense in this particular image weight, making it easier for physicians to identify them. Though lesions have a bright intensity profile, it may overlap with those of white matter (WM) and gray matter (GM) tissues, making accurate segmentation difficult. To this end, we propose a lesion enhancement technique to dim down WM and GM regions and highlight hyperintensities, making them much more distinguishable from other tissues. We applied our technique to the ISBI 2015 MS Lesion Segmentation Challenge and took the average gray level intensity of MS lesions, WM and GM on FLAIR and enhanced images. The lesion intensity profile in FLAIR was on average 25% and 19% brighter than white matter and gray matter, respectively; comparatively, the same profile in our enhanced images was on average 444% and 264% brighter. Such results represent a significant improvement in the intensity distinction among these three clusters, which may aid both experts and automated techniques. Moreover, a byproduct of our proposal is that the enhancement can be used to automatically estimate a mask encompassing WM and MS lesions, which may be useful for brain tissue volume assessment and improve MS lesion segmentation accuracy in future works.
Tasks Lesion Segmentation
Published 2018-07-25
URL http://arxiv.org/abs/1807.09619v1
PDF http://arxiv.org/pdf/1807.09619v1.pdf
PWC https://paperswithcode.com/paper/multiple-sclerosis-lesion-enhancement-and
Repo
Framework

Application of Machine Learning in Rock Facies Classification with Physics-Motivated Feature Augmentation

Title Application of Machine Learning in Rock Facies Classification with Physics-Motivated Feature Augmentation
Authors Jie Chen, Yu Zeng
Abstract With recent progress in algorithms and the availability of massive amounts of computation power, application of machine learning techniques is becoming a hot topic in the oil and gas industry. One of the most promising areas in which to apply machine learning to the upstream field is rock facies classification in reservoir characterization, which is crucial in determining the net pay thickness of reservoirs and thus a decisive factor in the drilling decision-making process. For complex machine learning tasks like facies classification, feature engineering is often critical. This paper shows that including physics-motivated feature interactions during feature augmentation can further improve the capability of machine learning in rock facies classification. We demonstrate this approach with the SEG 2016 machine learning contest dataset and the top winning algorithms. The improvement is robust and can be $\sim 5%$ better than the existing best F-1 score, where F-1 is an evaluation metric that quantifies average prediction accuracy.
Tasks Decision Making, Facies Classification, Feature Engineering
Published 2018-08-29
URL http://arxiv.org/abs/1808.09856v1
PDF http://arxiv.org/pdf/1808.09856v1.pdf
PWC https://paperswithcode.com/paper/application-of-machine-learning-in-rock
Repo
Framework
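
The feature-augmentation step can be as simple as appending interaction terms between well-log measurements that are believed to be physically related; which pairs count as physics-motivated is the domain knowledge the paper contributes, so the index pairs below are placeholders.

```python
import numpy as np

def add_interaction_features(X, pairs):
    """Append pairwise interaction columns (hypothetical physics-motivated pairs).

    X:     (n_samples, n_features) well-log feature matrix
    pairs: list of (i, j) column pairs whose product is believed to be meaningful
    """
    extras = [X[:, i] * X[:, j] for i, j in pairs]
    return np.column_stack([X] + extras)
```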

Dermatologist Level Dermoscopy Skin Cancer Classification Using Different Deep Learning Convolutional Neural Networks Algorithms

Title Dermatologist Level Dermoscopy Skin Cancer Classification Using Different Deep Learning Convolutional Neural Networks Algorithms
Authors Amirreza Rezvantalab, Habib Safigholi, Somayeh Karimijeshni
Abstract In this paper, the effectiveness and capability of convolutional neural networks have been studied in the classification of 8 skin diseases. Different pre-trained state-of-the-art architectures (DenseNet 201, ResNet 152, Inception v3, InceptionResNet v2) were used and applied to 10135 dermoscopy skin images in total (HAM10000: 10015, PH2: 120). The utilized dataset includes 8 diagnostic categories: melanoma, melanocytic nevi, basal cell carcinoma, benign keratosis, actinic keratosis and intraepithelial carcinoma, dermatofibroma, vascular lesions, and atypical nevi. The aim is to compare the ability of deep learning with the performance of highly trained dermatologists. Overall, the mean results show that all deep learning models outperformed the dermatologists by at least 11%. The best ROC AUC values for melanoma and basal cell carcinoma are 94.40% (ResNet 152) and 99.30% (DenseNet 201), versus 82.26% and 88.82% for the dermatologists, respectively. Also, DenseNet 201 had the highest macro- and micro-averaged AUC values for overall classification (98.16% and 98.79%, respectively).
Tasks Skin Cancer Classification
Published 2018-10-21
URL http://arxiv.org/abs/1810.10348v1
PDF http://arxiv.org/pdf/1810.10348v1.pdf
PWC https://paperswithcode.com/paper/dermatologist-level-dermoscopy-skin-cancer
Repo
Framework
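
A generic transfer-learning setup for one of the listed architectures might look as follows, reusing ImageNet weights and replacing the classification head for the 8 diagnostic categories; the authors' exact training recipe (augmentation, schedules, evaluation splits) is not reproduced here.

```python
import torch.nn as nn
from torchvision import models

def build_skin_classifier(n_classes=8):
    """DenseNet-201 backbone with a new head for the 8 dermoscopy categories."""
    model = models.densenet201(pretrained=True)  # ImageNet-pretrained features
    model.classifier = nn.Linear(model.classifier.in_features, n_classes)
    return model
```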

Low-memory convolutional neural networks through incremental depth-first processing

Title Low-memory convolutional neural networks through incremental depth-first processing
Authors Jonathan Binas, Yoshua Bengio
Abstract We introduce an incremental processing scheme for convolutional neural network (CNN) inference, targeted at embedded applications with limited memory budgets. Instead of processing layers one by one, individual input pixels are propagated through all parts of the network they can influence under the given structural constraints. This depth-first updating scheme comes with hard bounds on the memory footprint: the memory required is constant in the case of 1D input and proportional to the square root of the input dimension in the case of 2D input.
Tasks
Published 2018-04-28
URL https://arxiv.org/abs/1804.10727v2
PDF https://arxiv.org/pdf/1804.10727v2.pdf
PWC https://paperswithcode.com/paper/low-memory-convolutional-neural-networks
Repo
Framework

Phylotastic: An Experiment in Creating, Manipulating, and Evolving Phylogenetic Biology Workflows Using Logic Programming

Title Phylotastic: An Experiment in Creating, Manipulating, and Evolving Phylogenetic Biology Workflows Using Logic Programming
Authors Thanh Hai Nguyen, Enrico Pontelli, Tran Cao Son
Abstract Evolutionary biologists have long struggled with the challenge of developing analysis workflows in a flexible manner, thus facilitating the reuse of phylogenetic knowledge. An evolutionary biology workflow can be viewed as a plan which composes web services that can retrieve, manipulate, and produce phylogenetic trees. The Phylotastic project was launched two years ago as a collaboration between evolutionary biologists and computer scientists, with the goal of developing an open architecture to facilitate the creation of such analysis workflows. While composition of web services is a problem that has been extensively explored in the literature, including within the logic programming domain, the incarnation of the problem in Phylotastic provides a number of additional challenges. Along with the need to integrate preferences and formal ontologies in the description of the desired workflow, evolutionary biologists tend to construct workflows in an incremental manner, successively refining the workflow by indicating desired changes (e.g., exclusion of certain services, modifications of the desired output). This leads to the need for successive iterations of incremental replanning to develop a new workflow that integrates the requested changes while minimizing the changes to the original workflow. This paper illustrates how Phylotastic has addressed the challenges of creating and refining phylogenetic analysis workflows using logic programming technology and how such solutions have been used within the general framework of the Phylotastic project. Under consideration in Theory and Practice of Logic Programming (TPLP).
Tasks
Published 2018-05-01
URL http://arxiv.org/abs/1805.00185v1
PDF http://arxiv.org/pdf/1805.00185v1.pdf
PWC https://paperswithcode.com/paper/phylotastic-an-experiment-in-creating
Repo
Framework