July 29, 2019

Paper Group ANR 65


Fuzzy Rankings: Properties and Applications. People Mover’s Distance: Class level geometry using fast pairwise data adaptive transportation costs. On the mathematics of beauty: beautiful images. Automatic Segmentation and Overall Survival Prediction in Gliomas using Fully Convolutional Neural Network and Texture Analysis. A Radically New Theory of …

Fuzzy Rankings: Properties and Applications

Title Fuzzy Rankings: Properties and Applications
Authors Jiří Mazurek
Abstract In practice, a ranking of objects with respect to a given set of criteria is of considerable importance. However, due to a lack of knowledge or information, or to time pressure, decision makers might not be able to provide a (crisp) ranking of objects from the top to the bottom. Instead, some objects might be ranked equally, or better than other objects only to some degree. In such cases, a generalization of crisp rankings to fuzzy rankings can be more useful. The aim of the article is to introduce the notion of a fuzzy ranking and to discuss several of its properties, namely orderings, similarity, and indecisiveness. The proposed approach can be used both for group decision making and for multiple-criteria decision making when uncertainty is involved.
Tasks Decision Making
Published 2017-03-15
URL http://arxiv.org/abs/1703.05201v1
PDF http://arxiv.org/pdf/1703.05201v1.pdf
PWC https://paperswithcode.com/paper/fuzzy-rankings-properties-and-applications
Repo
Framework
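As an illustration of the idea, a fuzzy ranking can be modelled by giving each object a membership degree for each rank position, and collapsed to a crisp ranking when needed. The sketch below is a minimal toy encoding, not the paper's formalism; all names are hypothetical.

```python
# A fuzzy ranking: each object maps rank positions to membership degrees in [0, 1].
# Two objects ranked "equally to some degree" share mass on the same position.
fuzzy_ranking = {
    "a": {1: 1.0},            # a is definitely first
    "b": {2: 0.7, 3: 0.3},    # b is second to degree 0.7, third to degree 0.3
    "c": {2: 0.3, 3: 0.7},
}

def defuzzify(ranking):
    """Collapse a fuzzy ranking to a crisp one by expected rank position."""
    expected = {
        obj: sum(pos * deg for pos, deg in degrees.items())
        for obj, degrees in ranking.items()
    }
    return sorted(expected, key=expected.get)

print(defuzzify(fuzzy_ranking))  # ['a', 'b', 'c']
```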

People Mover’s Distance: Class level geometry using fast pairwise data adaptive transportation costs

Title People Mover’s Distance: Class level geometry using fast pairwise data adaptive transportation costs
Authors Alexander Cloninger, Brita Roy, Carley Riley, Harlan M. Krumholz
Abstract We address the problem of defining a network graph on a large collection of classes. Each class is comprised of a collection of data points, sampled in a non i.i.d. way, from some unknown underlying distribution. The application we consider in this paper is a large scale high dimensional survey of people living in the US, and the question of how similar or different are the various counties in which these people live. We use a co-clustering diffusion metric to learn the underlying distribution of people, and build an approximate earth mover’s distance algorithm using this data adaptive transportation cost.
Tasks
Published 2017-07-03
URL http://arxiv.org/abs/1707.00514v1
PDF http://arxiv.org/pdf/1707.00514v1.pdf
PWC https://paperswithcode.com/paper/people-movers-distance-class-level-geometry
Repo
Framework
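In one dimension, the earth mover's distance between two histograms reduces to the area between their cumulative distributions; below is a minimal sketch of that special case (the paper's algorithm instead builds an approximate earth mover's distance on a learned, data-adaptive transportation cost):

```python
def emd_1d(p, q):
    """Earth mover's distance between two histograms on the same 1-D grid,
    with unit transport cost between adjacent bins (W1 = sum of |CDF diffs|)."""
    assert abs(sum(p) - sum(q)) < 1e-9, "histograms must have equal total mass"
    cum, total = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi        # running difference of the two CDFs
        total += abs(cum)
    return total

# Moving all mass one bin to the right costs exactly one unit:
print(emd_1d([1, 0, 0], [0, 1, 0]))  # 1.0
```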

On the mathematics of beauty: beautiful images

Title On the mathematics of beauty: beautiful images
Authors A. M. Khalili
Abstract In this paper, we study the simplest kind of beauty, which can be found in simple visual patterns. The proposed approach shows that aesthetically appealing patterns deliver a higher amount of information over multiple levels, in comparison with less aesthetically appealing patterns, when the same amount of energy is used. The proposed approach is used to classify aesthetically appealing patterns.
Tasks
Published 2017-05-13
URL https://arxiv.org/abs/1705.08244v7
PDF https://arxiv.org/pdf/1705.08244v7.pdf
PWC https://paperswithcode.com/paper/on-the-mathematics-of-beauty-beautiful-images
Repo
Framework
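The idea of measuring information "over multiple levels" can be illustrated by computing the Shannon entropy of a pattern at successively coarser scales; this is a toy sketch of the flavour of the approach, not the paper's exact measure:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a sequence of discrete values."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def multilevel_entropy(pattern, levels=3):
    """Entropy of a 1-D pattern at successively coarser scales
    (each level keeps every other sample)."""
    out = []
    for _ in range(levels):
        out.append(entropy(pattern))
        pattern = pattern[::2]
    return out

ordered = [0, 1] * 8  # a simple alternating pattern: informative only at the finest level
print(multilevel_entropy(ordered))
```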

Automatic Segmentation and Overall Survival Prediction in Gliomas using Fully Convolutional Neural Network and Texture Analysis

Title Automatic Segmentation and Overall Survival Prediction in Gliomas using Fully Convolutional Neural Network and Texture Analysis
Authors Varghese Alex, Mohammed Safwan, Ganapathy Krishnamurthi
Abstract In this paper, we use a fully convolutional neural network (FCNN) for the segmentation of gliomas from Magnetic Resonance Images (MRI). A fully automatic, voxel-based classification was achieved by training a 23-layer deep FCNN on 2-D slices extracted from patient volumes. The network was trained on slices extracted from 130 patients and validated on 50 patients. For the task of survival prediction, texture- and shape-based features were extracted from the T1 post-contrast volume to train an XGBoost regressor. On the BraTS 2017 validation set, the proposed scheme achieved mean whole-tumor, tumor-core, and active Dice scores of 0.83, 0.69, and 0.69, respectively, and an accuracy of 52% for overall survival prediction.
Tasks Texture Classification
Published 2017-12-06
URL http://arxiv.org/abs/1712.02066v1
PDF http://arxiv.org/pdf/1712.02066v1.pdf
PWC https://paperswithcode.com/paper/automatic-segmentation-and-overall-survival
Repo
Framework
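The Dice scores reported above compare a predicted segmentation mask with the ground truth; a minimal sketch of the metric on flat binary masks:

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks (flat lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    return 2.0 * inter / denom if denom else 1.0  # two empty masks match perfectly

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(dice(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```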

A Radically New Theory of how the Brain Represents and Computes with Probabilities

Title A Radically New Theory of how the Brain Represents and Computes with Probabilities
Authors Gerard Rinkus
Abstract The brain is believed to implement probabilistic reasoning and to represent information via population, or distributed, coding. Most previous population-based probabilistic (PPC) theories share several basic properties: 1) continuous-valued neurons; 2) fully(densely)-distributed codes, i.e., all(most) units participate in every code; 3) graded synapses; 4) rate coding; 5) units have innate unimodal tuning functions (TFs); 6) intrinsically noisy units; and 7) noise/correlation is considered harmful. We present a radically different theory that assumes: 1) binary units; 2) only a small subset of units, i.e., a sparse distributed representation (SDR) (cell assembly), comprises any individual code; 3) binary synapses; 4) signaling formally requires only single (i.e., first) spikes; 5) units initially have completely flat TFs (all weights zero); 6) units are far less intrinsically noisy than traditionally thought; rather 7) noise is a resource generated/used to cause similar inputs to map to similar codes, controlling a tradeoff between storage capacity and embedding the input space statistics in the pattern of intersections over stored codes, epiphenomenally determining correlation patterns across neurons. The theory, Sparsey, was introduced 20+ years ago as a canonical cortical circuit/algorithm model achieving efficient sequence learning/recognition, but not elaborated as an alternative to PPC theories. Here, we show that: a) the active SDR simultaneously represents both the most similar/likely input and the entire (coarsely-ranked) similarity likelihood/distribution over all stored inputs (hypotheses); and b) given an input, the SDR code selection algorithm, which underlies both learning and inference, updates both the most likely hypothesis and the entire likelihood distribution (cf. belief update) with a number of steps that remains constant as the number of stored items increases.
Tasks
Published 2017-01-26
URL http://arxiv.org/abs/1701.07879v4
PDF http://arxiv.org/pdf/1701.07879v4.pdf
PWC https://paperswithcode.com/paper/a-radically-new-theory-of-how-the-brain
Repo
Framework
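A sparse distributed representation activates only a small subset of binary units, and similarity between stored codes is carried by the size of their intersection; a toy sketch of that overlap measure (illustrative only, not Sparsey's code-selection algorithm):

```python
def sdr_similarity(code_a, code_b):
    """Similarity of two sparse distributed representations, measured as the
    fraction of active units they share (intersection over code size)."""
    a, b = set(code_a), set(code_b)
    assert len(a) == len(b), "codes are assumed to have equal fixed sparsity"
    return len(a & b) / len(a)

# Codes are small sets of active binary units out of a large field:
x = {3, 17, 42, 101, 250}
y = {3, 17, 42, 99, 251}
print(sdr_similarity(x, y))  # 3/5 = 0.6
```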

Online Learning for Distribution-Free Prediction

Title Online Learning for Distribution-Free Prediction
Authors Dave Zachariah, Petre Stoica, Thomas B. Schön
Abstract We develop an online learning method for prediction, which is important in problems with large and/or streaming data sets. We formulate the learning approach using a covariance-fitting methodology, and show that the resulting predictor has desirable computational and distribution-free properties: It is implemented online with a runtime that scales linearly in the number of samples; has a constant memory requirement; avoids local minima problems; and prunes away redundant feature dimensions without relying on restrictive assumptions on the data distribution. In conjunction with the split conformal approach, it also produces distribution-free prediction confidence intervals in a computationally efficient manner. The method is demonstrated on both real and synthetic datasets.
Tasks
Published 2017-03-15
URL http://arxiv.org/abs/1703.05060v1
PDF http://arxiv.org/pdf/1703.05060v1.pdf
PWC https://paperswithcode.com/paper/online-learning-for-distribution-free
Repo
Framework
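The constant-memory, linear-runtime behaviour described above is characteristic of recursive estimators; the sketch below uses standard recursive least squares as a stand-in, since the paper's covariance-fitting predictor differs in its formulation:

```python
class RecursiveLeastSquares:
    """Online linear predictor with constant memory per update.
    (A standard RLS sketch, not the paper's covariance-fitting estimator.)"""

    def __init__(self, dim, reg=1.0):
        self.w = [0.0] * dim
        # Inverse 'covariance' matrix, initialised to (1/reg) * I.
        self.P = [[(1.0 / reg) if i == j else 0.0 for j in range(dim)]
                  for i in range(dim)]

    def update(self, x, y):
        n = len(x)
        Px = [sum(self.P[i][j] * x[j] for j in range(n)) for i in range(n)]
        denom = 1.0 + sum(x[i] * Px[i] for i in range(n))
        k = [v / denom for v in Px]                      # gain vector
        err = y - sum(self.w[i] * x[i] for i in range(n))
        self.w = [self.w[i] + k[i] * err for i in range(n)]
        self.P = [[self.P[i][j] - k[i] * Px[j] for j in range(n)]
                  for i in range(n)]

    def predict(self, x):
        return sum(self.w[i] * x[i] for i in range(len(x)))

# Stream noiseless samples of y = 2x + 1 (second input acts as a bias term):
model = RecursiveLeastSquares(2)
for _ in range(3):
    for x, y in [([1.0, 1.0], 3.0), ([2.0, 1.0], 5.0), ([3.0, 1.0], 7.0)]:
        model.update(x, y)
print(model.predict([4.0, 1.0]))  # close to 9
```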

Addendum to: Summary Information for Reasoning About Hierarchical Plans

Title Addendum to: Summary Information for Reasoning About Hierarchical Plans
Authors Lavindra de Silva, Sebastian Sardina, Lin Padgham
Abstract Hierarchically structured agent plans are important for efficient planning and acting, and they also serve (among other things) to produce “richer” classical plans, composed not just of a sequence of primitive actions, but also “abstract” ones representing the supplied hierarchies. A crucial step for this and other approaches is deriving precondition and effect “summaries” from a given plan hierarchy. This paper provides mechanisms to do this for more pragmatic and conventional hierarchies than in the past. To this end, we formally define the notion of a precondition and an effect for a hierarchical plan; we present data structures and algorithms for automatically deriving this information; and we analyse the properties of the presented algorithms. We conclude the paper by detailing how our algorithms may be used together with a classical planner in order to obtain abstract plans.
Tasks
Published 2017-08-09
URL http://arxiv.org/abs/1708.03019v1
PDF http://arxiv.org/pdf/1708.03019v1.pdf
PWC https://paperswithcode.com/paper/addendum-to-summary-information-for-reasoning
Repo
Framework
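The core idea of a precondition "summary" can be illustrated on a simple sequential plan: a condition already achieved by an earlier step's effects is internal, while the rest must hold before the whole plan. Below is a toy sketch that ignores delete effects; the paper's algorithms handle far richer hierarchies:

```python
def summarize(steps):
    """Derive summary preconditions and effects for a sequence of primitive
    steps, each given as (preconditions, effects) sets of literals."""
    summary_pre, achieved = set(), set()
    for pre, eff in steps:
        summary_pre |= pre - achieved   # must hold before the whole plan
        achieved |= eff
    return summary_pre, achieved

plan = [({"at_home"}, {"at_shop"}),
        ({"at_shop", "has_money"}, {"has_bread"})]
pre, eff = summarize(plan)
print(sorted(pre))  # ['at_home', 'has_money'] -- 'at_shop' is achieved internally
```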

Preservation of Semantic Properties during the Aggregation of Abstract Argumentation Frameworks

Title Preservation of Semantic Properties during the Aggregation of Abstract Argumentation Frameworks
Authors Weiwei Chen, Ulle Endriss
Abstract An abstract argumentation framework can be used to model the argumentative stance of an agent at a high level of abstraction, by indicating for every pair of arguments that is being considered in a debate whether the first attacks the second. When modelling a group of agents engaged in a debate, we may wish to aggregate their individual argumentation frameworks to obtain a single such framework that reflects the consensus of the group. Even when agents disagree on many details, there may well be high-level agreement on important semantic properties, such as the acceptability of a given argument. Using techniques from social choice theory, we analyse under what circumstances such semantic properties agreed upon by the individual agents can be preserved under aggregation.
Tasks Abstract Argumentation
Published 2017-07-27
URL http://arxiv.org/abs/1707.08740v1
PDF http://arxiv.org/pdf/1707.08740v1.pdf
PWC https://paperswithcode.com/paper/preservation-of-semantic-properties-during
Repo
Framework
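One natural aggregation rule studied in this setting is pairwise majority voting on attacks: an attack appears in the collective framework exactly when a strict majority of agents include it. A minimal sketch:

```python
from collections import Counter

def majority_aggregate(frameworks):
    """Aggregate attack relations (sets of (attacker, target) pairs) by
    keeping every attack supported by a strict majority of agents."""
    counts = Counter(pair for af in frameworks for pair in af)
    quota = len(frameworks) / 2
    return {pair for pair, c in counts.items() if c > quota}

agent1 = {("a", "b"), ("b", "c")}
agent2 = {("a", "b")}
agent3 = {("a", "b"), ("c", "a")}
print(majority_aggregate([agent1, agent2, agent3]))  # {('a', 'b')}
```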

Real-time Semantic Image Segmentation via Spatial Sparsity

Title Real-time Semantic Image Segmentation via Spatial Sparsity
Authors Zifeng Wu, Chunhua Shen, Anton van den Hengel
Abstract We propose an approach to semantic (image) segmentation that reduces the computational costs by a factor of 25 with limited impact on the quality of results. Semantic segmentation has a number of practical applications, and for most such applications the computational costs are critical. The method follows a typical two-column network structure, where one column accepts an input image, while the other accepts a half-resolution version of that image. By identifying specific regions in the full-resolution image that can be safely ignored, as well as carefully tailoring the network structure, we can process approximately 15 high-resolution Cityscapes images (1024x2048) per second using a single GTX 980 video card, while achieving a mean intersection-over-union score of 72.9% on the Cityscapes test set.
Tasks Semantic Segmentation
Published 2017-12-01
URL http://arxiv.org/abs/1712.00213v1
PDF http://arxiv.org/pdf/1712.00213v1.pdf
PWC https://paperswithcode.com/paper/real-time-semantic-image-segmentation-via
Repo
Framework
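The reported mean intersection-over-union averages per-class IoU over all classes; a minimal sketch of the metric on flat label lists:

```python
def mean_iou(pred, truth, num_classes):
    """Mean intersection-over-union across classes, from flat label lists."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:  # classes absent from both masks are skipped
            ious.append(inter / union)
    return sum(ious) / len(ious)

pred  = [0, 0, 1, 1]
truth = [0, 1, 1, 1]
print(mean_iou(pred, truth, 2))  # (1/2 + 2/3) / 2 ≈ 0.583
```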

Adaptive Real-Time Removal of Impulse Noise in Medical Images

Title Adaptive Real-Time Removal of Impulse Noise in Medical Images
Authors Zohreh HosseinKhani, Mohsen Hajabdollahi, Nader Karimi, Reza Soroushmehr, Shahram Shirani, Kayvan Najarian, Shadrokh Samavi
Abstract Noise is an important factor that degrades the quality of medical images. Impulse noise is a common type of noise caused by the malfunctioning of sensor elements or by errors in the transmission of images. In medical images, due to the presence of white foreground and black background, many pixels have intensities similar to impulse noise, and distinguishing between noisy and regular pixels is difficult. In software techniques, the accuracy of the noise removal is more important than the algorithm’s complexity. But for hardware implementation, having a low-complexity algorithm with acceptable accuracy is essential. In this paper, a low-complexity de-noising method is proposed that removes the noise by local analysis of the image blocks. The proposed method distinguishes non-noisy pixels that have noise-like intensities. All steps are designed to have low hardware complexity. Simulation results show that, for different magnetic resonance images, the proposed method removes impulse noise with acceptable accuracy.
Tasks
Published 2017-09-06
URL http://arxiv.org/abs/1709.02270v2
PDF http://arxiv.org/pdf/1709.02270v2.pdf
PWC https://paperswithcode.com/paper/adaptive-real-time-removal-of-impulse-noise
Repo
Framework
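A common low-complexity baseline for impulse noise is a conditional median filter that only touches pixels with impulse-like (extreme) intensities. The sketch below illustrates that baseline; as the abstract notes, legitimately black or white medical-image pixels would also be flagged by such a rule, which is exactly the difficulty the paper's local block analysis addresses:

```python
def remove_impulse(img, lo=0, hi=255):
    """Replace pixels at the extreme intensities (likely impulse noise) with
    the median of their 3x3 neighbourhood; other pixels are left untouched."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if img[i][j] not in (lo, hi):
                continue  # only impulse-like intensities are suspect
            window = [img[y][x]
                      for y in range(max(0, i - 1), min(h, i + 2))
                      for x in range(max(0, j - 1), min(w, j + 2))]
            window.sort()
            out[i][j] = window[len(window) // 2]
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],   # a single salt pixel
         [10, 10, 10]]
print(remove_impulse(noisy)[1][1])  # 10
```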

The Maximum Cosine Framework for Deriving Perceptron Based Linear Classifiers

Title The Maximum Cosine Framework for Deriving Perceptron Based Linear Classifiers
Authors Nader H. Bshouty, Catherine A. Haddad-Zaknoon
Abstract In this work, we introduce a mathematical framework, called the Maximum Cosine Framework or MCF, for deriving new linear classifiers. The method is based on selecting an appropriate bound on the cosine of the angle between the target function and the algorithm’s. To justify its correctness, we use the MCF to show how to regenerate the update rule of Aggressive ROMMA. Moreover, we construct a cosine bound from which we build the Maximum Cosine Perceptron algorithm or, for short, the MCP algorithm. We prove that the MCP shares the same mistake bound as the Perceptron. In addition, we demonstrate the promising performance of the MCP on a real dataset. Our experiments show that, under the restriction of single-pass learning, the MCP algorithm outperforms PA and Aggressive ROMMA.
Tasks
Published 2017-07-04
URL http://arxiv.org/abs/1707.00821v1
PDF http://arxiv.org/pdf/1707.00821v1.pdf
PWC https://paperswithcode.com/paper/the-maximum-cosine-framework-for-deriving
Repo
Framework
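For reference, this is the classic perceptron against which the MCP's mistake bound is stated: on each mistake, the weights move in the direction of the misclassified example. A minimal sketch (not the MCP update itself):

```python
def perceptron_train(samples, epochs=10):
    """Classic perceptron on (x, y) pairs with y in {-1, +1}; returns weights."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in samples:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:  # mistake
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable toy data (the last component acts as a bias input):
data = [([2.0, 1.0], 1), ([1.5, 1.0], 1), ([-2.0, 1.0], -1), ([-1.0, 1.0], -1)]
w = perceptron_train(data)
print(all((sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y > 0)
          for x, y in data))  # True
```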

Predicting Depression Severity by Multi-Modal Feature Engineering and Fusion

Title Predicting Depression Severity by Multi-Modal Feature Engineering and Fusion
Authors Aven Samareh, Yan Jin, Zhangyang Wang, Xiangyu Chang, Shuai Huang
Abstract We present our preliminary work to determine if a patient’s vocal acoustic, linguistic, and facial patterns can predict clinical ratings of depression severity, namely the Patient Health Questionnaire depression scale (PHQ-8). We propose a multi-modal fusion model that combines three different modalities: audio, video, and text features. By training on the AVEC 2017 dataset, our proposed model outperforms each single-modality prediction model and surpasses the dataset baseline by a nice margin.
Tasks Feature Engineering
Published 2017-11-29
URL http://arxiv.org/abs/1711.11155v1
PDF http://arxiv.org/pdf/1711.11155v1.pdf
PWC https://paperswithcode.com/paper/predicting-depression-severity-by-multi-modal
Repo
Framework

A Unified Model for Near and Remote Sensing

Title A Unified Model for Near and Remote Sensing
Authors Scott Workman, Menghua Zhai, David J. Crandall, Nathan Jacobs
Abstract We propose a novel convolutional neural network architecture for estimating geospatial functions such as population density, land cover, or land use. In our approach, we combine overhead and ground-level images in an end-to-end trainable neural network, which uses kernel regression and density estimation to convert features extracted from the ground-level images into a dense feature map. The output of this network is a dense estimate of the geospatial function in the form of a pixel-level labeling of the overhead image. To evaluate our approach, we created a large dataset of overhead and ground-level images from a major urban area with three sets of labels: land use, building function, and building age. We find that our approach is more accurate for all tasks, in some cases dramatically so.
Tasks Density Estimation
Published 2017-08-09
URL http://arxiv.org/abs/1708.03035v1
PDF http://arxiv.org/pdf/1708.03035v1.pdf
PWC https://paperswithcode.com/paper/a-unified-model-for-near-and-remote-sensing
Repo
Framework

Quantified advantage of discontinuous weight selection in approximations with deep neural networks

Title Quantified advantage of discontinuous weight selection in approximations with deep neural networks
Authors Dmitry Yarotsky
Abstract We consider approximations of 1D Lipschitz functions by deep ReLU networks of a fixed width. We prove that without the assumption of continuous weight selection the uniform approximation error is lower than with this assumption at least by a factor logarithmic in the size of the network.
Tasks
Published 2017-05-03
URL http://arxiv.org/abs/1705.01365v1
PDF http://arxiv.org/pdf/1705.01365v1.pdf
PWC https://paperswithcode.com/paper/quantified-advantage-of-discontinuous-weight
Repo
Framework

Allocation Problems in Ride-Sharing Platforms: Online Matching with Offline Reusable Resources

Title Allocation Problems in Ride-Sharing Platforms: Online Matching with Offline Reusable Resources
Authors John P Dickerson, Karthik A Sankararaman, Aravind Srinivasan, Pan Xu
Abstract Bipartite matching markets pair agents on one side of a market with agents, items, or contracts on the opposing side. Prior work addresses online bipartite matching markets, where agents arrive over time and are dynamically matched to a known set of disposable resources. In this paper, we propose a new model, Online Matching with (offline) Reusable Resources under Known Adversarial Distributions (OM-RR-KAD), in which resources on the offline side are reusable instead of disposable; that is, once matched, resources become available again at some point in the future. We show that our model is tractable by presenting an LP-based adaptive algorithm that achieves an online competitive ratio of 1/2 - eps for any given eps greater than 0. We also show that no non-adaptive algorithm can achieve a ratio of 1/2 + o(1) based on the same benchmark LP. Through a data-driven analysis on a massive openly-available dataset, we show our model is robust enough to capture the application of taxi dispatching services and ride-sharing systems. We also present heuristics that perform well in practice.
Tasks
Published 2017-11-22
URL http://arxiv.org/abs/1711.08345v2
PDF http://arxiv.org/pdf/1711.08345v2.pdf
PWC https://paperswithcode.com/paper/allocation-problems-in-ride-sharing-platforms
Repo
Framework
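The reusable-resource setting can be pictured with a simple greedy baseline in which a matched resource returns to the pool after a fixed busy period; a toy sketch only (the paper's algorithm is LP-based and adaptive, with the 1/2 - eps guarantee stated above):

```python
import heapq

def greedy_reusable_match(arrivals, capacity, busy_time):
    """Greedy online matching: each arriving request is matched to any free
    resource, and a matched resource becomes available again busy_time steps
    later (reusable, not disposable). Returns the number of matched requests."""
    free = capacity                 # resources currently idle
    releases = []                   # min-heap of times at which resources free up
    matched = 0
    for t in arrivals:              # arrival times, in order
        while releases and releases[0] <= t:
            heapq.heappop(releases)
            free += 1
        if free > 0:
            free -= 1
            heapq.heappush(releases, t + busy_time)
            matched += 1
    return matched

# 2 taxis, each trip occupies a taxi for 5 time units; 4 ride requests:
print(greedy_reusable_match([0, 1, 2, 6], capacity=2, busy_time=5))  # 3
```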