July 29, 2019

3155 words 15 mins read

Paper Group ANR 76

Learning Graph While Training: An Evolving Graph Convolutional Neural Network. Stochastic Global Optimization Algorithms: A Systematic Formal Approach. Knowledge Acquisition, Representation & Manipulation in Decision Support Systems. Inertial-aided Rolling Shutter Relative Pose Estimation. An automatic water detection approach based on Dempster-Sh …

Learning Graph While Training: An Evolving Graph Convolutional Neural Network

Title Learning Graph While Training: An Evolving Graph Convolutional Neural Network
Authors Ruoyu Li, Junzhou Huang
Abstract Convolutional Neural Networks on graphs are an important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications they are highly diverse or even not well defined. Under some circumstances, e.g. for chemical molecular data, clustering or coarsening to simplify the graphs is hard to justify chemically. In this paper, we propose a more general and flexible graph convolutional network (EGCN) fed by batches of arbitrarily shaped data together with their evolving graph Laplacians, trained in a supervised fashion. Extensive experiments have been conducted to demonstrate the superior performance in terms of both accelerated parameter fitting and significantly improved prediction accuracy on multiple graph-structured datasets.
Tasks
Published 2017-08-10
URL http://arxiv.org/abs/1708.04675v1
PDF http://arxiv.org/pdf/1708.04675v1.pdf
PWC https://paperswithcode.com/paper/learning-graph-while-training-an-evolving
Repo
Framework
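
As a rough illustration of the idea in the abstract above, the sketch below shows a graph convolution layer in which the graph itself is a trainable parameter, so the normalized propagation matrix evolves during supervised training. This is a minimal PyTorch sketch under my own parameterization (a dense learnable adjacency `edge_logits`), not the authors' EGCN architecture.

```python
# Minimal sketch (not the authors' EGCN): a graph convolution layer whose graph
# is parameterized and updated by backpropagation alongside the filter weights.
import torch
import torch.nn as nn

class EvolvingGraphConv(nn.Module):
    def __init__(self, num_nodes, in_dim, out_dim):
        super().__init__()
        # Trainable edge weights; symmetrized below to keep the graph undirected.
        self.edge_logits = nn.Parameter(torch.randn(num_nodes, num_nodes) * 0.01)
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x):                      # x: (batch, num_nodes, in_dim)
        a = torch.sigmoid(self.edge_logits)
        a = 0.5 * (a + a.t())                  # symmetric adjacency
        deg = a.sum(dim=-1)
        d_inv_sqrt = torch.diag(deg.clamp(min=1e-6).rsqrt())
        prop = d_inv_sqrt @ a @ d_inv_sqrt     # normalized propagation matrix
        return torch.relu(prop @ self.linear(x))

layer = EvolvingGraphConv(num_nodes=10, in_dim=8, out_dim=16)
out = layer(torch.randn(4, 10, 8))             # gradients also flow into edge_logits
```

Because `edge_logits` is an `nn.Parameter`, any loss applied to the layer's output updates the graph together with the filter weights, which is the "learning the graph while training" behaviour the title refers to.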

Stochastic Global Optimization Algorithms: A Systematic Formal Approach

Title Stochastic Global Optimization Algorithms: A Systematic Formal Approach
Authors Jonatan Gomez
Abstract As is well known, some global optimization problems cannot be solved using analytic methods, so numeric/algorithmic approaches are used to find near-optimal solutions for them. A stochastic global optimization algorithm (SGoal) is an iterative algorithm that generates a new population (a set of candidate solutions) from a previous population using stochastic operations. Although some research works have formalized SGoals using Markov kernels, such formalizations are not general and are sometimes blurred. In this paper, we propose a comprehensive and systematic formal approach for studying SGoals. First, we present the required probability theory (\sigma-algebras, measurable functions, kernels, Markov chains, products, convergence and so on) and prove that some algorithmic functions, like swapping and projection, can be represented by kernels. Then, we introduce the notion of a join-kernel as a way of characterizing the combination of stochastic methods. Next, we define the optimization space, a formal structure (a set with a \sigma-algebra that contains strict \epsilon-optimal states) for studying SGoals, and we develop kernels, like sort and permutation, on such a structure. Finally, we present some popular SGoals in terms of the developed theory, introduce sufficient conditions for convergence of an SGoal, and prove convergence of some popular SGoals.
Tasks
Published 2017-06-07
URL http://arxiv.org/abs/1706.02246v1
PDF http://arxiv.org/pdf/1706.02246v1.pdf
PWC https://paperswithcode.com/paper/stochastic-global-optimization-algorithms-a
Repo
Framework
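
To make the abstract's object of study concrete, here is a toy SGoal: an iterative loop whose variation step is stochastic (a Gaussian mutation kernel) and whose selection step is deterministic. The function names and the objective are illustrative only; the paper's contribution is the measure-theoretic formalization of such loops, which this sketch does not attempt.

```python
# A minimal sketch of an SGoal iteration: a stochastic map from a population to
# a new population -- Gaussian mutation followed by elitist (sort-based) selection.
import numpy as np

def sgoal_step(pop, f, rng, sigma=0.1):
    candidates = pop + sigma * rng.standard_normal(pop.shape)   # stochastic variation "kernel"
    both = np.vstack([pop, candidates])
    best = np.argsort([f(x) for x in both])[: len(pop)]         # deterministic sort/selection
    return both[best]

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2)                                    # toy objective (minimize)
pop = rng.uniform(-5, 5, size=(20, 3))
for _ in range(200):
    pop = sgoal_step(pop, f, rng)
print(f(pop[0]))                                                # near-optimal value
```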

Knowledge Acquisition, Representation & Manipulation in Decision Support Systems

Title Knowledge Acquisition, Representation & Manipulation in Decision Support Systems
Authors M. Michalewicz, S. T. Wierzchoń, M. A. Kłopotek
Abstract In this paper we present a methodology and discuss some implementation issues for a project on a statistical/expert approach to data analysis and knowledge acquisition. We discuss some general assumptions underlying the project. Further, the requirements for a user-friendly computer assistant are specified, along with the nature of the tools aiding the researcher. Next we show some aspects of the belief network approach and the Dempster-Shafer (DST) methodology applied in practice in the SEAD system. Specifically, we present the application of the DS methodology to the belief revision problem. Further, a concept of an interface to probabilistic and DS belief networks is presented, enabling a user to understand the communication with a belief-network-based reasoning system.
Tasks
Published 2017-05-23
URL http://arxiv.org/abs/1705.08440v1
PDF http://arxiv.org/pdf/1705.08440v1.pdf
PWC https://paperswithcode.com/paper/knowledge-acquisition-representation
Repo
Framework
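
For readers unfamiliar with the Dempster-Shafer machinery the abstract refers to, the sketch below implements Dempster's rule of combination for two mass functions over a tiny frame of discernment. It illustrates the general DS mechanics only; the SEAD system and its belief-revision interface are not reproduced here.

```python
# Dempster's rule of combination for two mass functions over a small frame;
# the example masses are made up for illustration.
from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb                       # mass assigned to the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

A, B = frozenset({"a"}), frozenset({"b"})
theta = A | B                                         # the whole frame (ignorance)
m1 = {A: 0.6, theta: 0.4}
m2 = {B: 0.3, theta: 0.7}
print(dempster_combine(m1, m2))
```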

Inertial-aided Rolling Shutter Relative Pose Estimation

Title Inertial-aided Rolling Shutter Relative Pose Estimation
Authors Chang-Ryeol Lee, Kuk-Jin Yoon
Abstract Relative pose estimation is a fundamental problem in computer vision, and it has been studied for conventional global shutter cameras for decades. Recently, however, rolling shutter cameras have been widely used due to their low-cost imaging capability, and since a rolling shutter camera captures the image line by line, relative pose estimation for a rolling shutter camera is more difficult than for a global shutter camera. In this paper, we propose to exploit inertial measurements (gravity and angular velocity) for the rolling shutter relative pose estimation problem. The inertial measurements provide information about the partial relative rotation between two views (cameras) and the instantaneous motion that causes the rolling shutter distortion. Based on this information, we simplify the rolling shutter relative pose estimation problem and propose effective methods to solve it. Unlike the previous methods, which require 44 (linear) or 17 (nonlinear) points with the uniform rolling shutter camera model, the proposed methods require at most 9 or 11 points to estimate the relative pose between the rolling shutter cameras. Experimental results on synthetic data and the public PennCOSYVIO dataset show that the proposed methods outperform the existing methods.
Tasks Pose Estimation
Published 2017-12-01
URL http://arxiv.org/abs/1712.00184v1
PDF http://arxiv.org/pdf/1712.00184v1.pdf
PWC https://paperswithcode.com/paper/inertial-aided-rolling-shutter-relative-pose
Repo
Framework
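
A hedged sketch of the camera model underlying the problem: in a rolling shutter camera each image row is exposed at a slightly different time, so a point landing on a given row is projected through a rotation perturbed by the angular velocity, which is exactly the quantity the IMU supplies. The small-angle linear model below is a common simplification, not necessarily the exact parameterization used in the paper.

```python
# Rolling-shutter projection with a per-row rotation driven by angular velocity.
import numpy as np

def skew(w):
    wx, wy, wz = w
    return np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0.0]])

def project_rs(X, K, omega, row, line_time):
    t = row * line_time                          # exposure time of this row
    R_row = np.eye(3) + skew(omega) * t          # small-angle rotation during readout
    x = K @ (R_row @ X)
    return x[:2] / x[2]

K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1.0]])
print(project_rs(np.array([0.1, 0.05, 2.0]), K,
                 omega=np.array([0.0, 0.5, 0.0]), row=120, line_time=1e-4))
```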

An automatic water detection approach based on Dempster-Shafer theory for multi spectral images

Title An automatic water detection approach based on Dempster-Shafer theory for multi spectral images
Authors Na Li, Arnaud Martin, Rémi Estival
Abstract Detection of surface water in natural environments via multi-spectral imagery has been widely utilized in many fields, such as land cover identification. However, due to the similarity of the spectra of water bodies and built-up areas, approaches based on high-resolution satellite imagery sometimes confuse these features. A popular direction for detecting water is spectral indices, which often require ground truth to set appropriate thresholds manually. As for traditional machine learning methods, they identify water merely via differences between the spectra of various land covers, without taking specific properties of spectral reflection into account. In this paper, we propose an automatic approach to detect water bodies based on Dempster-Shafer theory, combining supervised learning with a specific spectral property of water in a fully unsupervised context. The benefits of our approach are twofold. On the one hand, it performs well in mapping principal water bodies, including little streams and branches. On the other hand, it labels all objects usually confused with water as 'ignorance', including half-dry watery areas, built-up areas, and semi-transparent clouds and shadows. 'Ignorance' indicates not only limitations of the spectral properties of water and of supervised learning itself, but also insufficiency of information from the multi-spectral bands, providing valuable information for further land cover classification.
Tasks
Published 2017-08-09
URL http://arxiv.org/abs/1708.02747v2
PDF http://arxiv.org/pdf/1708.02747v2.pdf
PWC https://paperswithcode.com/paper/an-automatic-water-detection-approach-based
Repo
Framework
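
As a toy illustration of the 'ignorance' labelling described in the abstract, the sketch below turns a standard water spectral index (NDWI) into a Dempster-Shafer mass function with an explicit ignorance mass for ambiguous pixels. The thresholds and mass values are made up for illustration and are not the authors' pipeline, which combines supervised learning with the spectral evidence.

```python
# Toy mass assignment: spectral evidence for water, non-water, or 'ignorance'.
def ndwi(green, nir):
    return (green - nir) / (green + nir + 1e-9)          # standard water index

def mass_from_index(idx, low=-0.1, high=0.3):
    if idx >= high:
        return {"water": 0.9, "ignorance": 0.1}
    if idx <= low:
        return {"non_water": 0.9, "ignorance": 0.1}
    # ambiguous pixels (half-dry areas, shadows, thin clouds) keep most mass on ignorance
    return {"water": 0.2, "non_water": 0.2, "ignorance": 0.6}

print(mass_from_index(ndwi(green=0.35, nir=0.10)))        # clear water pixel
print(mass_from_index(ndwi(green=0.22, nir=0.20)))        # ambiguous pixel
```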

Genetic and Memetic Algorithm with Diversity Equilibrium based on Greedy Diversification

Title Genetic and Memetic Algorithm with Diversity Equilibrium based on Greedy Diversification
Authors Andrés Herrera-Poyatos, Francisco Herrera
Abstract The lack of diversity in a genetic algorithm’s population may lead to bad performance of the genetic operators, since there is no equilibrium between exploration and exploitation. In those cases, genetic algorithms present fast and unsuitable convergence. In this paper we develop a novel hybrid genetic algorithm which attempts to obtain a balance between exploration and exploitation. It confronts the diversity problem using the proposed greedy diversification operator. Furthermore, the proposed algorithm applies a competition between parents and children so as to exploit the high-quality visited solutions. These operators are complemented by a simple selection mechanism designed to preserve and take advantage of the population diversity. Additionally, we extend our proposal to the field of memetic algorithms, obtaining an improved model with outstanding results in practice. The experimental study shows the validity of the approach, as well as how important it is to take the exploration and exploitation concepts into account when designing an evolutionary algorithm.
Tasks
Published 2017-02-12
URL http://arxiv.org/abs/1702.03594v1
PDF http://arxiv.org/pdf/1702.03594v1.pdf
PWC https://paperswithcode.com/paper/genetic-and-memetic-algorithm-with-diversity
Repo
Framework
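
The parent-versus-child competition mentioned in the abstract is easy to sketch; the greedy diversification operator is specific to the paper and deliberately omitted here. Below is a toy OneMax example with my own operator choices.

```python
# Toy GA step with parent-vs-child competition (replacement only if the child
# is at least as fit), which exploits good visited solutions.
import random

def mutate(x, rate=0.1):
    return [b ^ 1 if random.random() < rate else b for b in x]

def ga_step(pop, fitness):
    nxt = []
    for parent in pop:
        child = mutate(parent)
        nxt.append(child if fitness(child) >= fitness(parent) else parent)
    return nxt

random.seed(0)
fitness = sum                                  # OneMax toy problem
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(10)]
for _ in range(100):
    pop = ga_step(pop, fitness)
print(max(map(fitness, pop)))
```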

A Survey of Recent Advances in CNN-based Single Image Crowd Counting and Density Estimation

Title A Survey of Recent Advances in CNN-based Single Image Crowd Counting and Density Estimation
Authors Vishwanath A. Sindagi, Vishal M. Patel
Abstract Estimating count and density maps from crowd images has a wide range of applications such as video surveillance, traffic monitoring, public safety and urban planning. In addition, techniques developed for crowd counting can be applied to related tasks in other fields of study such as cell microscopy, vehicle counting and environmental surveys. The task of crowd counting and density map estimation is riddled with many challenges such as occlusions, non-uniform density, and intra-scene and inter-scene variations in scale and perspective. Nevertheless, over the last few years, crowd count analysis has evolved from earlier methods that are often limited to small variations in crowd density and scale to current state-of-the-art methods that perform successfully on a wide range of scenarios. The success of crowd counting methods in recent years can be largely attributed to deep learning and the publication of challenging datasets. In this paper, we provide a comprehensive survey of recent Convolutional Neural Network (CNN) based approaches that have demonstrated significant improvements over earlier methods that rely largely on hand-crafted representations. First, we briefly review the pioneering methods that use hand-crafted representations, and then we delve in detail into the deep learning-based approaches and recently published datasets. Furthermore, we discuss the merits and drawbacks of existing CNN-based approaches and identify promising avenues of research in this rapidly evolving field.
Tasks Crowd Counting, Density Estimation
Published 2017-07-05
URL http://arxiv.org/abs/1707.01202v1
PDF http://arxiv.org/pdf/1707.01202v1.pdf
PWC https://paperswithcode.com/paper/a-survey-of-recent-advances-in-cnn-based
Repo
Framework
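
Most of the surveyed CNN methods regress a density map whose integral is the crowd count. A common way to build the ground-truth map (one Gaussian per annotated head) is sketched below; kernel-width handling varies across the surveyed papers, and the fixed sigma here is only an illustrative default.

```python
# Ground-truth density map: place a unit impulse per annotated head, then blur.
# The Gaussian spreads each unit of mass, so the map still sums to the count.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, shape, sigma=4.0):
    canvas = np.zeros(shape, dtype=np.float64)
    for y, x in points:
        canvas[int(y), int(x)] += 1.0
    return gaussian_filter(canvas, sigma)

heads = [(30, 40), (32, 44), (100, 200)]
dm = density_map(heads, shape=(240, 320))
print(dm.sum())                                  # ~3.0, the number of annotated people
```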

Fiber Orientation Estimation Guided by a Deep Network

Title Fiber Orientation Estimation Guided by a Deep Network
Authors Chuyang Ye, Jerry L. Prince
Abstract Diffusion magnetic resonance imaging (dMRI) is currently the only tool for noninvasively imaging the brain’s white matter tracts. The fiber orientation (FO) is a key feature computed from dMRI for fiber tract reconstruction. Because the number of FOs in a voxel is usually small, dictionary-based sparse reconstruction has been used to estimate FOs with a relatively small number of diffusion gradients. However, accurate FO estimation in regions with complex FO configurations in the presence of noise can still be challenging. In this work we explore the use of a deep network for FO estimation in a dictionary-based framework and propose an algorithm named Fiber Orientation Reconstruction guided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a smaller dictionary encoding coarse basis FOs to represent the diffusion signals. To estimate the mixture fractions of the dictionary atoms (and thus coarse FOs), a deep network is designed specifically for solving the sparse reconstruction problem. Here, the smaller dictionary is used to reduce the computational cost of training. Second, the coarse FOs inform the final FO estimation, where a larger dictionary encoding dense basis FOs is used and a weighted l1-norm regularized least squares problem is solved to encourage FOs that are consistent with the network output. FORDN was evaluated and compared with state-of-the-art algorithms that estimate FOs using sparse reconstruction on simulated and real dMRI data, and the results demonstrate the benefit of using a deep network for FO estimation.
Tasks
Published 2017-05-19
URL http://arxiv.org/abs/1705.06870v1
PDF http://arxiv.org/pdf/1705.06870v1.pdf
PWC https://paperswithcode.com/paper/fiber-orientation-estimation-guided-by-a-deep
Repo
Framework
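
The second stage described in the abstract solves a weighted l1-regularized least squares problem, where smaller weights encourage dictionary atoms consistent with the network's coarse FOs. The sketch below uses plain ISTA with per-atom weights on synthetic data; it is a generic solver for that problem class, not the FORDN implementation, and the weight and regularization values are illustrative.

```python
# ISTA for min_x 0.5*||Dx - s||^2 + lam * sum_i w_i |x_i|  (weighted soft-thresholding).
import numpy as np

def weighted_ista(D, s, w, lam=0.5, iters=2000):
    x = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        grad = D.T @ (D @ x - s)
        z = x - step * grad
        thresh = step * lam * w                   # smaller weight -> atom is encouraged
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 100))                # toy dictionary
x_true = np.zeros(100); x_true[[5, 40]] = 1.0
s = D @ x_true
w = np.ones(100); w[[5, 40]] = 0.1                # e.g. atoms near the coarse network FOs
x_hat = weighted_ista(D, s, w)
print(np.argsort(-np.abs(x_hat))[:2])             # the planted atoms (5 and 40) should dominate
```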

Artificial intelligence in peer review: How can evolutionary computation support journal editors?

Title Artificial intelligence in peer review: How can evolutionary computation support journal editors?
Authors Maciej J. Mrowinski, Piotr Fronczak, Agata Fronczak, Marcel Ausloos, Olgica Nedic
Abstract With the volume of manuscripts submitted for publication growing every year, the deficiencies of peer review (e.g. long review times) are becoming more apparent. Editorial strategies, sets of guidelines designed to speed up the process and reduce editors’ workloads, are treated as trade secrets by publishing houses and are not shared publicly. To improve the effectiveness of their strategies, editors in small publishing groups are faced with undertaking an iterative trial-and-error approach. We show that Cartesian Genetic Programming, a nature-inspired evolutionary algorithm, can dramatically improve editorial strategies. The artificially evolved strategy reduced the duration of the peer review process by 30%, without increasing the pool of reviewers (in comparison to a typical human-developed strategy). Evolutionary computation has typically been applied to technological processes or biological ecosystems. Our results demonstrate that genetic programs can improve real-world social systems that are usually much harder to understand and control than physical systems.
Tasks
Published 2017-12-02
URL http://arxiv.org/abs/1712.01682v1
PDF http://arxiv.org/pdf/1712.01682v1.pdf
PWC https://paperswithcode.com/paper/artificial-intelligence-in-peer-review-how
Repo
Framework
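
As a very rough illustration of the evolutionary loop behind Cartesian Genetic Programming, here is a (1+4) evolution strategy tuning a single parameter of a made-up editorial rule against a made-up simulator. The real strategies, genome encoding and peer-review simulator in the paper are far richer; every function below is hypothetical.

```python
# (1+4) evolutionary loop of the kind CGP typically uses, on a toy objective.
import random

def review_time(days_before_reminder):           # stand-in for a peer-review simulator
    return abs(days_before_reminder - 9) + 20    # pretend a 9-day reminder is optimal

def one_plus_lambda(generations=50, lam=4):
    random.seed(1)
    parent = random.uniform(1, 30)
    for _ in range(generations):
        children = [max(1.0, parent + random.gauss(0, 2)) for _ in range(lam)]
        parent = min(children + [parent], key=review_time)   # keep the best of parent+children
    return parent

print(round(one_plus_lambda(), 1))               # converges near the optimum of the toy model
```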

On consistency of optimal pricing algorithms in repeated posted-price auctions with strategic buyer

Title On consistency of optimal pricing algorithms in repeated posted-price auctions with strategic buyer
Authors Alexey Drutsa
Abstract We study revenue optimization learning algorithms for repeated posted-price auctions where a seller interacts with a single strategic buyer that holds a fixed private valuation for a good and seeks to maximize his cumulative discounted surplus. For this setting, first, we propose a novel algorithm that never decreases offered prices and has a tight strategic regret bound of $\Theta(\log\log T)$ under some mild assumptions on the buyer surplus discounting. This result closes the open research question on the existence of a no-regret horizon-independent weakly consistent pricing. The proposed algorithm is inspired by our observation that a double decrease of offered prices in a weakly consistent algorithm is enough to cause a linear regret. This motivates us to construct a novel transformation that maps a right-consistent algorithm to a weakly consistent one that never decreases offered prices. Second, we improve the previously known strategic regret upper bound of the algorithm PRRFES, where the improvement is achieved by means of a finer constant factor $C$ of the principal term $C\log\log T$ in this upper bound. Finally, we generalize results on strategic regret previously known for geometric discounting of the buyer’s surplus to discounting of other types: namely, the optimality of the pricing PRRFES is extended to the case of geometrically concave decreasing discounting, and the linear lower bound on the strategic regret of a wide range of horizon-independent weakly consistent algorithms is extended to the case of arbitrary discounts.
Tasks
Published 2017-07-17
URL http://arxiv.org/abs/1707.05101v2
PDF http://arxiv.org/pdf/1707.05101v2.pdf
PWC https://paperswithcode.com/paper/on-consistency-of-optimal-pricing-algorithms
Repo
Framework
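
For reference, the quantity bounded by $\Theta(\log\log T)$ in the abstract is the strategic regret, which in this line of work is usually defined as the revenue lost relative to always selling at the buyer's fixed valuation $v$. The notation below is mine, not necessarily the paper's.

```latex
% Strategic regret of pricing algorithm A over horizon T against a buyer with
% fixed private valuation v; p_t is the posted price and a_t \in \{0,1\} the
% buyer's (possibly strategic) acceptance decision at round t.
\mathrm{SReg}(T, A, v) \;=\; T\,v \;-\; \sum_{t=1}^{T} p_t\, a_t .
```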

Extreme clicking for efficient object annotation

Title Extreme clicking for efficient object annotation
Authors Dim P. Papadopoulos, Jasper R. R. Uijlings, Frank Keller, Vittorio Ferrari
Abstract Manually annotating object bounding boxes is central to building computer vision datasets, and it is very time consuming (annotating ILSVRC [53] took 35s for one high-quality box [62]). It involves clicking on imaginary corners of a tight box around the object. This is difficult as these corners are often outside the actual object and several adjustments are required to obtain a tight box. We propose extreme clicking instead: we ask the annotator to click on four physical points on the object: the top, bottom, left- and right-most points. This task is more natural and these points are easy to find. We crowd-source extreme point annotations for PASCAL VOC 2007 and 2012 and show that (1) annotation time is only 7s per box, 5x faster than the traditional way of drawing boxes [62]; (2) the quality of the boxes is as good as the original ground-truth drawn the traditional way; (3) detectors trained on our annotations are as accurate as those trained on the original ground-truth. Moreover, our extreme clicking strategy not only yields box coordinates, but also four accurate boundary points. We show (4) how to incorporate them into GrabCut to obtain more accurate segmentations than those delivered when initializing it from bounding boxes; (5) semantic segmentations models trained on these segmentations outperform those trained on segmentations derived from bounding boxes.
Tasks
Published 2017-08-09
URL http://arxiv.org/abs/1708.02750v1
PDF http://arxiv.org/pdf/1708.02750v1.pdf
PWC https://paperswithcode.com/paper/extreme-clicking-for-efficient-object
Repo
Framework
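
The conversion the method relies on is trivial, which is part of why the annotation scheme works: four extreme clicks already determine the tight axis-aligned box. A sketch with made-up coordinates:

```python
# Four extreme points (top, bottom, left-most, right-most) -> tight bounding box.
def box_from_extreme_points(top, bottom, left, right):
    # each point is (x, y) in image coordinates, with y growing downwards
    xmin, xmax = left[0], right[0]
    ymin, ymax = top[1], bottom[1]
    return xmin, ymin, xmax, ymax

print(box_from_extreme_points(top=(120, 40), bottom=(130, 200),
                              left=(80, 110), right=(190, 95)))
```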

Theoretical insights into the optimization landscape of over-parameterized shallow neural networks

Title Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
Authors Mahdi Soltanolkotabi, Adel Javanmard, Jason D. Lee
Abstract In this paper we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime, where the number of observations is smaller than the number of parameters in the model. We show that with quadratic activations the optimization landscape of training such shallow neural networks has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for arbitrary training data of input/output pairs. For differentiable activation functions we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.
Tasks
Published 2017-07-16
URL http://arxiv.org/abs/1707.04926v2
PDF http://arxiv.org/pdf/1707.04926v2.pdf
PWC https://paperswithcode.com/paper/theoretical-insights-into-the-optimization
Repo
Framework
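
For concreteness, the model class analysed with quadratic activations can be written as the following least-squares problem. The notation is mine; consult the paper for the exact setup and assumptions.

```latex
% One-hidden-layer network with k units and quadratic activation \sigma(z) = z^2,
% fit to n input/output pairs (x_i, y_i) with n smaller than the number of parameters.
\min_{W \in \mathbb{R}^{k \times d},\; v \in \mathbb{R}^{k}}
\;\; \frac{1}{n} \sum_{i=1}^{n} \Bigl( y_i - \sum_{j=1}^{k} v_j \,(w_j^{\top} x_i)^2 \Bigr)^{2}
```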

Linking Generative Adversarial Learning and Binary Classification

Title Linking Generative Adversarial Learning and Binary Classification
Authors Akshay Balsubramani
Abstract In this note, we point out a basic link between generative adversarial (GA) training and binary classification – any powerful discriminator essentially computes an (f-)divergence between real and generated samples. The result, repeatedly re-derived in decision theory, has implications for GA Networks (GANs), providing an alternative perspective on training f-GANs by designing the discriminator loss function.
Tasks
Published 2017-09-05
URL http://arxiv.org/abs/1709.01509v1
PDF http://arxiv.org/pdf/1709.01509v1.pdf
PWC https://paperswithcode.com/paper/linking-generative-adversarial-learning-and
Repo
Framework
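
The link the note builds on can be stated with the standard GAN identity: once the discriminator is optimal, the binary-classification objective equals (up to constants) an f-divergence between the data and generator distributions, in this case the Jensen-Shannon divergence.

```latex
% Value of the GAN discriminator objective at the optimal discriminator D^*:
% an f-divergence between p_data and p_g (here, Jensen-Shannon).
\max_{D}\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log D(x)\bigr]
 + \mathbb{E}_{x \sim p_{g}}\bigl[\log\bigl(1 - D(x)\bigr)\bigr]
 \;=\; 2\,\mathrm{JS}\bigl(p_{\mathrm{data}} \,\|\, p_{g}\bigr) - \log 4 .
```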

Soft Methodology for Cost-and-error Sensitive Classification

Title Soft Methodology for Cost-and-error Sensitive Classification
Authors Te-Kang Jan, Da-Wei Wang, Chi-Hung Lin, Hsuan-Tien Lin
Abstract Many real-world data mining applications need varying cost for different types of classification errors and thus call for cost-sensitive classification algorithms. Existing algorithms for cost-sensitive classification are successful in terms of minimizing the cost, but can result in a high error rate as the trade-off. The high error rate holds back the practical use of those algorithms. In this paper, we propose a novel cost-sensitive classification methodology that takes both the cost and the error rate into account. The methodology, called soft cost-sensitive classification, is established from a multicriteria optimization problem of the cost and the error rate, and can be viewed as regularizing cost-sensitive classification with the error rate. The simple methodology allows immediate improvements of existing cost-sensitive classification algorithms. Experiments on the benchmark and the real-world data sets show that our proposed methodology indeed achieves lower test error rates and similar (sometimes lower) test costs than existing cost-sensitive classification algorithms. We also demonstrate that the methodology can be extended for considering the weighted error rate instead of the original error rate. This extension is useful for tackling unbalanced classification problems.
Tasks
Published 2017-10-26
URL http://arxiv.org/abs/1710.09515v1
PDF http://arxiv.org/pdf/1710.09515v1.pdf
PWC https://paperswithcode.com/paper/soft-methodology-for-cost-and-error-sensitive
Repo
Framework
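
One way to read "regularizing cost-sensitive classification with the error rate" is as blending the application cost matrix with the plain 0/1 cost matrix before handing it to any cost-sensitive learner. The sketch below does exactly that; `alpha` is a hypothetical trade-off knob and not the paper's notation.

```python
# Blend an asymmetric application cost matrix with the 0/1 (error-rate) cost matrix.
import numpy as np

def soften_cost_matrix(C, alpha=0.8):
    K = C.shape[0]
    C01 = 1.0 - np.eye(K)                    # plain misclassification cost
    return alpha * C + (1.0 - alpha) * C01

C = np.array([[0.0, 1.0, 10.0],
              [2.0, 0.0, 5.0],
              [50.0, 3.0, 0.0]])             # asymmetric application costs
print(soften_cost_matrix(C))                 # feed this to any cost-sensitive algorithm
```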

Sharp bounds for population recovery

Title Sharp bounds for population recovery
Authors Anindya De, Ryan O’Donnell, Rocco Servedio
Abstract The population recovery problem is a basic problem in noisy unsupervised learning that has attracted significant research attention in recent years [WY12, DRWY12, MS13, BIMP13, LZ15, DST16]. A number of different variants of this problem have been studied, often under assumptions on the unknown distribution (such as that it has restricted support size). In this work we study the sample complexity and algorithmic complexity of the most general version of the problem, under both the bit-flip noise and the erasure noise models. We give essentially matching upper and lower sample complexity bounds for both noise models, and efficient algorithms matching these sample complexity bounds up to polynomial factors.
Tasks
Published 2017-03-04
URL http://arxiv.org/abs/1703.01474v1
PDF http://arxiv.org/pdf/1703.01474v1.pdf
PWC https://paperswithcode.com/paper/sharp-bounds-for-population-recovery
Repo
Framework
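
The two observation models studied are easy to write down. The sketch below applies them to samples from {0,1}^n under one common parameterization (retention/flip probabilities expressed through a single parameter mu); the exact parameterization used in the paper may differ.

```python
# Bit-flip and erasure observation channels applied to a binary sample.
import numpy as np

rng = np.random.default_rng(0)

def bit_flip(x, mu):                 # flip each coordinate independently w.p. (1 - mu) / 2
    flips = rng.random(x.shape) < (1 - mu) / 2
    return np.where(flips, 1 - x, x)

def erase(x, mu):                    # replace each coordinate by '?' independently w.p. 1 - mu
    out = x.astype(object)
    out[rng.random(x.shape) < 1 - mu] = "?"
    return out

x = rng.integers(0, 2, size=10)
print(x, bit_flip(x, mu=0.6), erase(x, mu=0.6), sep="\n")
```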