Paper Group ANR 686
Achieving the time of $1$-NN, but the accuracy of $k$-NN. Recent Advances in Features Extraction and Description Algorithms: A Comprehensive Survey. Does mitigating ML’s impact disparity require treatment disparity? Centrality measures for graphons: Accounting for uncertainty in networks. Automatic Response Assessment in Regions of Language Cortex …
Achieving the time of $1$-NN, but the accuracy of $k$-NN
Title | Achieving the time of $1$-NN, but the accuracy of $k$-NN |
Authors | Lirong Xue, Samory Kpotufe |
Abstract | We propose a simple approach which, given distributed computing resources, can nearly achieve the accuracy of $k$-NN prediction, while matching (or improving) the faster prediction time of $1$-NN. The approach consists of aggregating denoised $1$-NN predictors over a small number of distributed subsamples. We show, both theoretically and experimentally, that small subsample sizes suffice to attain similar performance as $k$-NN, without sacrificing the computational efficiency of $1$-NN. |
Tasks | |
Published | 2017-12-06 |
URL | http://arxiv.org/abs/1712.02369v2 |
http://arxiv.org/pdf/1712.02369v2.pdf | |
PWC | https://paperswithcode.com/paper/achieving-the-time-of-1-nn-but-the-accuracy |
Repo | |
Framework | |
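The aggregation scheme in the abstract can be sketched in a few lines: train independent 1-NN predictors on random subsamples and average their outputs. This is a simplified illustration only — it omits the paper's denoising step, and the synthetic data, target function, and parameter values below are invented for the example.

```python
import random

def one_nn_predict(data, x):
    """Return the label of the single nearest neighbor of x in data."""
    return min(data, key=lambda p: abs(p[0] - x))[1]

def subsampled_1nn(data, x, n_subsamples=10, subsample_size=50, seed=0):
    """Average 1-NN predictions over random subsamples of the training set."""
    rng = random.Random(seed)
    preds = [one_nn_predict(rng.sample(data, subsample_size), x)
             for _ in range(n_subsamples)]
    return sum(preds) / len(preds)

# Synthetic regression data: y = x^2 plus Gaussian noise.
rng = random.Random(1)
train = [(u, u * u + rng.gauss(0, 0.1))
         for u in (rng.random() for _ in range(500))]

single = one_nn_predict(train, 0.5)   # one noisy neighbor
bagged = subsampled_1nn(train, 0.5)   # averaged over 10 subsamples
```

Each subsample query costs a 1-NN lookup on a much smaller set, while averaging damps the label noise that makes plain 1-NN inconsistent.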
Recent Advances in Features Extraction and Description Algorithms: A Comprehensive Survey
Title | Recent Advances in Features Extraction and Description Algorithms: A Comprehensive Survey |
Authors | Ehab Salahat, Murad Qasaimeh |
Abstract | Computer vision is one of the most active research fields in information technology today. Giving machines and robots the ability to see and comprehend the surrounding world at the speed of sight creates endless potential applications and opportunities. Feature detection and description algorithms can be indeed considered as the retina of the eyes of such machines and robots. However, these algorithms are typically computationally intensive, which prevents them from achieving the speed of sight real-time performance. In addition, they differ in their capabilities and some may favor and work better given a specific type of input compared to others. As such, it is essential to compactly report their pros and cons as well as their performances and recent advances. This paper is dedicated to provide a comprehensive overview on the state-of-the-art and recent advances in feature detection and description algorithms. Specifically, it starts by overviewing fundamental concepts. It then compares, reports and discusses their performance and capabilities. The Maximally Stable Extremal Regions algorithm and the Scale Invariant Feature Transform algorithms, being two of the best of their type, are selected to report their recent algorithmic derivatives. |
Tasks | |
Published | 2017-03-19 |
URL | http://arxiv.org/abs/1703.06376v1 |
http://arxiv.org/pdf/1703.06376v1.pdf | |
PWC | https://paperswithcode.com/paper/recent-advances-in-features-extraction-and |
Repo | |
Framework | |
Does mitigating ML’s impact disparity require treatment disparity?
Title | Does mitigating ML’s impact disparity require treatment disparity? |
Authors | Zachary C. Lipton, Alexandra Chouldechova, Julian McAuley |
Abstract | Following related work in law and policy, two notions of disparity have come to shape the study of fairness in algorithmic decision-making. Algorithms exhibit treatment disparity if they formally treat members of protected subgroups differently; algorithms exhibit impact disparity when outcomes differ across subgroups, even if the correlation arises unintentionally. Naturally, we can achieve impact parity through purposeful treatment disparity. In one thread of technical work, papers aim to reconcile the two forms of parity by proposing disparate learning processes (DLPs). Here, the learning algorithm can see group membership during training but must produce a classifier that is group-blind at test time. In this paper, we show theoretically that: (i) when other features correlate with group membership, DLPs will (indirectly) implement treatment disparity, undermining the policy desiderata they are designed to address; (ii) when group membership is partly revealed by other features, DLPs induce within-class discrimination; and (iii) in general, DLPs provide a suboptimal trade-off between accuracy and impact parity. Based on our technical analysis, we argue that transparent treatment disparity is preferable to occluded methods for achieving impact parity. Experimental results on several real-world datasets highlight the practical consequences of applying DLPs versus per-group thresholds. |
Tasks | Decision Making |
Published | 2017-11-19 |
URL | http://arxiv.org/abs/1711.07076v3 |
http://arxiv.org/pdf/1711.07076v3.pdf | |
PWC | https://paperswithcode.com/paper/does-mitigating-mls-impact-disparity-require |
Repo | |
Framework | |
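The per-group thresholding baseline mentioned at the end of the abstract can be sketched directly: pick one score threshold per group so that each group's positive rate matches a common target — explicit treatment disparity in service of impact parity. The scores, group labels, and threshold rule below are hypothetical, not the paper's experimental setup.

```python
def per_group_thresholds(scores, groups, target_rate):
    """For each group, choose the score threshold whose positive rate
    best matches target_rate (equalizing impact across groups)."""
    thresholds = {}
    for g in set(groups):
        s = sorted(sc for sc, gr in zip(scores, groups) if gr == g)
        # index of the cutoff score so that ~target_rate of the group passes
        k = max(0, min(len(s) - 1, round(len(s) * (1 - target_rate))))
        thresholds[g] = s[k]
    return thresholds

# Hypothetical risk scores and group labels.
scores = [0.9, 0.8, 0.4, 0.3, 0.7, 0.6, 0.2, 0.1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
th = per_group_thresholds(scores, groups, target_rate=0.5)
decisions = [sc >= th[g] for sc, g in zip(scores, groups)]
```

With the toy data above, both groups end up with the same 50% positive rate even though their score distributions differ.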
Centrality measures for graphons: Accounting for uncertainty in networks
Title | Centrality measures for graphons: Accounting for uncertainty in networks |
Authors | Marco Avella-Medina, Francesca Parise, Michael T. Schaub, Santiago Segarra |
Abstract | As relational datasets modeled as graphs keep increasing in size and their data-acquisition is permeated by uncertainty, graph-based analysis techniques can become computationally and conceptually challenging. In particular, node centrality measures rely on the assumption that the graph is perfectly known – a premise not necessarily fulfilled for large, uncertain networks. Accordingly, centrality measures may fail to faithfully extract the importance of nodes in the presence of uncertainty. To mitigate these problems, we suggest a statistical approach based on graphon theory: we introduce formal definitions of centrality measures for graphons and establish their connections to classical graph centrality measures. A key advantage of this approach is that centrality measures defined at the modeling level of graphons are inherently robust to stochastic variations of specific graph realizations. Using the theory of linear integral operators, we define degree, eigenvector, Katz and PageRank centrality functions for graphons and establish concentration inequalities demonstrating that graphon centrality functions arise naturally as limits of their counterparts defined on sequences of graphs of increasing size. The same concentration inequalities also provide high-probability bounds between the graphon centrality functions and the centrality measures on any sampled graph, thereby establishing a measure of uncertainty of the measured centrality score. |
Tasks | |
Published | 2017-07-28 |
URL | http://arxiv.org/abs/1707.09350v4 |
http://arxiv.org/pdf/1707.09350v4.pdf | |
PWC | https://paperswithcode.com/paper/centrality-measures-for-graphons-accounting |
Repo | |
Framework | |
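The degree centrality function for graphons has a direct numerical analogue: for a graphon $W$ on $[0,1]^2$, $d(u) = \int_0^1 W(u,v)\,dv$, approximated here with a midpoint Riemann sum. The "product" graphon below is an invented example chosen so the integral is known in closed form; it is not from the paper.

```python
def graphon_degree(W, u, n_grid=1000):
    """Degree centrality function of a graphon:
    d(u) = integral of W(u, v) over v in [0, 1], via a midpoint rule."""
    return sum(W(u, (i + 0.5) / n_grid) for i in range(n_grid)) / n_grid

# Illustrative "product" graphon W(u, v) = u * v, for which d(u) = u / 2.
W = lambda u, v: u * v
d_half = graphon_degree(W, 0.5)  # analytically 0.25
d_one = graphon_degree(W, 1.0)   # analytically 0.5
```

Because the centrality is defined on the graphon itself, it does not fluctuate with any particular sampled graph — the robustness property the abstract emphasizes.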
Automatic Response Assessment in Regions of Language Cortex in Epilepsy Patients Using ECoG-based Functional Mapping and Machine Learning
Title | Automatic Response Assessment in Regions of Language Cortex in Epilepsy Patients Using ECoG-based Functional Mapping and Machine Learning |
Authors | Harish RaviPrakash, Milena Korostenskaja, Eduardo Castillo, Ki Lee, James Baumgartner, Ulas Bagci |
Abstract | Accurate localization of brain regions responsible for language and cognitive functions in epilepsy patients should be carefully determined prior to surgery. Electrocorticography (ECoG)-based Real Time Functional Mapping (RTFM) has been shown to be a safer alternative to electrical cortical stimulation mapping (ESM), which is currently the clinical gold standard. Conventional methods for analyzing RTFM signals are based on statistical comparison of signal power at certain frequency bands. Compared to the gold standard (ESM), they have limited accuracy when assessing channel responses. In this study, we address the accuracy limitation of current RTFM signal estimation methods by analyzing the full frequency spectrum of the signal and replacing signal power estimation with machine learning algorithms, specifically random forests (RF), as a proof of concept. We train the RF on the power spectral density of the time-series RTFM signal in a supervised learning framework, where ground-truth labels are obtained from the ESM. Results obtained from RTFM of six adult patients in a strictly controlled experimental setup reveal a state-of-the-art detection accuracy of $\approx 78\%$ for the language comprehension task, an improvement of $23\%$ over the conventional RTFM estimation method. To the best of our knowledge, this is the first study exploring the use of machine learning approaches for determining RTFM signal characteristics, and using the whole frequency band for better region localization. Our results demonstrate the feasibility of machine-learning-based RTFM signal analysis over the full spectrum becoming clinical routine in the near future. |
Tasks | Time Series |
Published | 2017-05-26 |
URL | http://arxiv.org/abs/1706.01380v2 |
http://arxiv.org/pdf/1706.01380v2.pdf | |
PWC | https://paperswithcode.com/paper/automatic-response-assessment-in-regions-of |
Repo | |
Framework | |
Efficient computational strategies to learn the structure of probabilistic graphical models of cumulative phenomena
Title | Efficient computational strategies to learn the structure of probabilistic graphical models of cumulative phenomena |
Authors | Daniele Ramazzotti, Marco S. Nobile, Marco Antoniotti, Alex Graudenzi |
Abstract | Structural learning of Bayesian Networks (BNs) is an NP-hard problem, which is further complicated by many theoretical issues, such as the I-equivalence among different structures. In this work, we focus on a specific subclass of BNs, named Suppes-Bayes Causal Networks (SBCNs), which include specific structural constraints based on Suppes’ probabilistic causation to efficiently model cumulative phenomena. Here we compare the performance, via extensive simulations, of various state-of-the-art search strategies, such as local search techniques and Genetic Algorithms, as well as of distinct regularization methods. The assessment is performed on a large number of simulated datasets from topologies with distinct levels of complexity, various sample sizes, and different rates of errors in the data. Among the main results, we show that the introduction of Suppes’ constraints dramatically improves the inference accuracy, by reducing the solution space and providing a temporal ordering on the variables. We also report on trade-offs among different search techniques that can be efficiently employed in distinct experimental settings. This manuscript is an extended version of the paper “Structural Learning of Probabilistic Graphical Models of Cumulative Phenomena” presented at the 2018 International Conference on Computational Science. |
Tasks | |
Published | 2017-03-08 |
URL | http://arxiv.org/abs/1703.03074v4 |
http://arxiv.org/pdf/1703.03074v4.pdf | |
PWC | https://paperswithcode.com/paper/efficient-computational-strategies-to-learn |
Repo | |
Framework | |
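Suppes' probabilistic-causation constraints (temporal priority and probability raising) can be used to prune the edge search space before any structure search runs, which is how they shrink the solution space the abstract mentions. The sketch below is a hedged toy version, not the authors' implementation; the data, variable names, and exact filtering rule are invented for illustration.

```python
def suppes_edge_filter(samples, variables):
    """Keep candidate edge u -> v only if it satisfies Suppes' conditions:
    temporal priority P(u) >= P(v) and probability raising
    P(v | u) > P(v | not u). samples: list of dicts mapping var -> 0/1."""
    n = len(samples)
    p = lambda f: sum(1 for s in samples if f(s)) / n
    edges = []
    for u in variables:
        for v in variables:
            if u == v:
                continue
            pu, pv = p(lambda s: s[u]), p(lambda s: s[v])
            if pu in (0.0, 1.0):
                continue  # conditional probabilities undefined
            p_v_given_u = p(lambda s: s[u] and s[v]) / pu
            p_v_given_not_u = p(lambda s: not s[u] and s[v]) / (1 - pu)
            if pu >= pv and p_v_given_u > p_v_given_not_u:
                edges.append((u, v))
    return edges

# Toy cumulative data: "v" only ever occurs after "u" has occurred.
samples = ([{"u": 1, "v": 1}] * 3 + [{"u": 1, "v": 0}] * 2
           + [{"u": 0, "v": 0}] * 5)
edges = suppes_edge_filter(samples, ["u", "v"])
```

On the toy data the filter keeps `u -> v` and discards the reverse edge, which is exactly the temporal ordering the constraints are meant to impose.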
Go game formal revealing by Ising model
Title | Go game formal revealing by Ising model |
Authors | Matías Alvarado, Arturo Yee, Carlos Villarreal |
Abstract | Go gaming is a struggle for territory control between rival black and white stones on a board. We model the Go dynamics in a game by means of the Ising model, whose interaction coefficients reflect essential rules and tactics employed in Go to build long-term strategies. At any step of the game, the energy functional of the model provides the control degree (strength) of a player over the board. A close fit between the model’s predictions and actual games is obtained. |
Tasks | |
Published | 2017-10-19 |
URL | http://arxiv.org/abs/1710.07360v1 |
http://arxiv.org/pdf/1710.07360v1.pdf | |
PWC | https://paperswithcode.com/paper/go-game-formal-revealing-by-ising-model |
Repo | |
Framework | |
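The energy functional can be illustrated with a minimal nearest-neighbor Ising energy on the Go grid: like-colored neighboring stones lower the energy (stronger allied control), and contacts between rivals raise it. The uniform coupling constant and board encoding here are illustrative simplifications; the paper's interaction coefficients encode actual Go rules and tactics.

```python
def ising_energy(board, J=1.0):
    """Nearest-neighbor Ising energy E = -J * sum over adjacent pairs of s_i*s_j.
    board maps (row, col) -> +1 (black stone) or -1 (white stone)."""
    energy = 0.0
    for (r, c), s in board.items():
        for dr, dc in ((0, 1), (1, 0)):  # right and down: each pair counted once
            neighbor = board.get((r + dr, c + dc))
            if neighbor is not None:
                energy -= J * s * neighbor
    return energy

# Allied stones lower the energy; contact between rivals raises it.
allies = {(0, 0): +1, (0, 1): +1}
rivals = {(0, 0): +1, (0, 1): -1}
```

Evaluating the energy at each move gives a single scalar that tracks how strongly each side controls the board.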
WordSup: Exploiting Word Annotations for Character based Text Detection
Title | WordSup: Exploiting Word Annotations for Character based Text Detection |
Authors | Han Hu, Chengquan Zhang, Yuxuan Luo, Yuzhuo Wang, Junyu Han, Errui Ding |
Abstract | Imagery texts are usually organized as a hierarchy of several visual elements, i.e., characters, words, text lines, and text blocks. Among these elements, the character is the most basic one across various languages and scripts, such as Western, Chinese, and Japanese, as well as mathematical expressions. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast number of location-annotated characters, which are expensive to obtain. In practice, existing real text datasets are mostly annotated at the word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either tight quadrangles or looser bounding boxes, for character detector training. When applied to scene text detection, we are thus able to train a robust character detector by exploiting word annotations in rich large-scale real scene text datasets, e.g., ICDAR15 and COCO-Text. The character detector plays a key role in the pipeline of our text detection engine. It achieves state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline in various scenarios, including deformed text detection and math expression recognition. |
Tasks | Scene Text Detection |
Published | 2017-08-22 |
URL | http://arxiv.org/abs/1708.06720v1 |
http://arxiv.org/pdf/1708.06720v1.pdf | |
PWC | https://paperswithcode.com/paper/wordsup-exploiting-word-annotations-for |
Repo | |
Framework | |
Unsupervised part learning for visual recognition
Title | Unsupervised part learning for visual recognition |
Authors | Ronan Sicre, Yannis Avrithis, Ewa Kijak, Frederic Jurie |
Abstract | Part-based image classification aims at representing categories by small sets of learned discriminative parts, upon which an image representation is built. Considered a promising avenue a decade ago, this direction has been neglected since the advent of deep neural networks. In this context, this paper brings two contributions: first, it shows that despite the recent success of end-to-end holistic models, explicit part learning can boost classification performance. Second, this work proceeds one step further than recent part-based models (PBM), focusing on how to learn parts without using any labeled data. Instead of learning a set of parts per class, as generally done in the PBM literature, the proposed approach both constructs a partition of a given set of images into visually similar groups and subsequently learns a set of discriminative parts per group, in a fully unsupervised fashion. This strategy opens the door to the use of PBM in new applications for which the notion of image categories is irrelevant, such as instance-based image retrieval. We experimentally show that our learned parts can help build efficient image representations, for classification as well as for indexing tasks, resulting in performance superior to holistic state-of-the-art Deep Convolutional Neural Network (DCNN) encodings. |
Tasks | Image Classification, Image Retrieval |
Published | 2017-04-12 |
URL | http://arxiv.org/abs/1704.03755v1 |
http://arxiv.org/pdf/1704.03755v1.pdf | |
PWC | https://paperswithcode.com/paper/unsupervised-part-learning-for-visual |
Repo | |
Framework | |
Operational thermal load forecasting in district heating networks using machine learning and expert advice
Title | Operational thermal load forecasting in district heating networks using machine learning and expert advice |
Authors | Davy Geysen, Oscar De Somer, Christian Johansson, Jens Brage, Dirk Vanhoudt |
Abstract | Forecasting thermal load is a key component of most optimization solutions for controlling district heating and cooling systems. Recent studies have analysed the results of a number of data-driven methods applied to thermal load forecasting; this paper presents the results of combining a collection of these individual methods in an expert system. The expert system combines multiple thermal load forecasts in such a way that it always tracks the best expert in the system. This solution is tested and validated using a 27-month thermal load dataset obtained from 10 residential buildings located in Rottne, Sweden, together with outdoor temperature information received from a weather forecast service. The expert system is composed of the following data-driven methods: linear regression, extremely randomized trees regression, feed-forward neural networks, and support vector machines. The results of the proposed solution are compared with the results of the individual methods. |
Tasks | Load Forecasting |
Published | 2017-10-17 |
URL | http://arxiv.org/abs/1710.06134v1 |
http://arxiv.org/pdf/1710.06134v1.pdf | |
PWC | https://paperswithcode.com/paper/operational-thermal-load-forecasting-in |
Repo | |
Framework | |
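The "track the best expert" behaviour the abstract describes is the classical exponentially weighted forecaster (Hedge): each expert's weight decays exponentially in its cumulative loss, so the combined forecast converges to the best expert. This is a generic sketch under assumed squared loss, learning rate, and toy data — not the paper's specific expert-advice algorithm.

```python
import math

def hedge_combine(expert_preds, actuals, eta=2.0):
    """Exponentially weighted forecaster: combine expert forecasts with
    weights that decay in each expert's cumulative squared loss."""
    weights = [1.0] * len(expert_preds)
    combined = []
    for t, y in enumerate(actuals):
        total = sum(weights)
        combined.append(sum(w * p[t] for w, p in zip(weights, expert_preds)) / total)
        # multiplicative update after observing the true load y
        weights = [w * math.exp(-eta * (p[t] - y) ** 2)
                   for w, p in zip(weights, expert_preds)]
    return combined

# Hypothetical hourly loads: expert 0 is accurate, expert 1 is biased high.
actual = [10.0, 11.0, 12.0, 13.0, 14.0]
preds = [[10.1, 11.0, 12.1, 13.0, 14.1],
         [13.0, 14.0, 15.0, 16.0, 17.0]]
combined = hedge_combine(preds, actual)
```

After one round the biased expert's weight has already collapsed, so the combined forecast essentially follows the accurate one.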
Deep Neural Networks
Title | Deep Neural Networks |
Authors | Randall Balestriero, Richard Baraniuk |
Abstract | Deep Neural Networks (DNNs) are universal function approximators providing state-of-the-art solutions on a wide range of applications. Common perceptual tasks such as speech recognition, image classification, and object tracking are now commonly tackled via DNNs. Some fundamental problems remain: (1) the lack of a mathematical framework providing an explicit and interpretable input-output formula for any topology; (2) quantification of DNN stability with respect to adversarial examples (i.e., modified inputs that fool DNN predictions while remaining undetectable to humans); (3) the absence of generalization guarantees and controllable behaviors for ambiguous patterns; (4) how to leverage unlabeled data to apply DNNs in domains where expert labeling is scarce, as in the medical field. Answering these points would provide theoretical perspectives for further developments based on a common ground. Furthermore, DNNs are now deployed in tremendous societal applications, pushing the need to fill this theoretical gap to ensure control, reliability, and interpretability. |
Tasks | Image Classification, Object Tracking, Speech Recognition |
Published | 2017-10-25 |
URL | http://arxiv.org/abs/1710.09302v3 |
http://arxiv.org/pdf/1710.09302v3.pdf | |
PWC | https://paperswithcode.com/paper/deep-neural-networks |
Repo | |
Framework | |
ARIGAN: Synthetic Arabidopsis Plants using Generative Adversarial Network
Title | ARIGAN: Synthetic Arabidopsis Plants using Generative Adversarial Network |
Authors | Mario Valerio Giuffrida, Hanno Scharr, Sotirios A Tsaftaris |
Abstract | In recent years, there has been an increasing interest in image-based plant phenotyping, applying state-of-the-art machine learning approaches to tackle challenging problems, such as leaf segmentation (a multi-instance problem) and counting. Most of these algorithms need labelled data to learn a model for the task at hand. Despite the recent release of a few plant phenotyping datasets, large annotated plant image datasets for the purpose of training deep learning algorithms are lacking. One common approach to alleviate the lack of training data is dataset augmentation. Herein, we propose an alternative solution to dataset augmentation for plant phenotyping: creating artificial images of plants using generative neural networks. We propose the Arabidopsis Rosette Image Generator (through) Adversarial Network, a deep convolutional network able to generate synthetic rosette-shaped plants, inspired by DCGAN (a recent adversarial network model using convolutional layers). Specifically, we trained the network using the A1, A2, and A4 subsets of the CVPPP 2017 LCC dataset, containing Arabidopsis thaliana plants. We show that our model is able to generate realistic 128x128 colour images of plants. We train our network conditioned on leaf count, so that it is possible to generate plants with a given number of leaves, suitable, among other uses, for training regression-based models. We propose a new Ax dataset of artificial plant images, obtained by our ARIGAN. We evaluate this new dataset using a state-of-the-art leaf counting algorithm, showing that the testing error is reduced when Ax is used as part of the training data. |
Tasks | |
Published | 2017-09-04 |
URL | http://arxiv.org/abs/1709.00938v1 |
http://arxiv.org/pdf/1709.00938v1.pdf | |
PWC | https://paperswithcode.com/paper/arigan-synthetic-arabidopsis-plants-using |
Repo | |
Framework | |
Image similarity using Deep CNN and Curriculum Learning
Title | Image similarity using Deep CNN and Curriculum Learning |
Authors | Srikar Appalaraju, Vineet Chaoji |
Abstract | Image similarity involves fetching similar-looking images given a reference image. Our solution, called SimNet, is a deep siamese network trained on pairs of positive and negative images using a novel online pair mining strategy inspired by curriculum learning. We also created a multi-scale CNN, where the final image embedding is a joint representation of top-layer as well as lower-layer embeddings. We go on to show that this multi-scale siamese network is better at capturing fine-grained image similarities than traditional CNNs. |
Tasks | |
Published | 2017-09-26 |
URL | http://arxiv.org/abs/1709.08761v2 |
http://arxiv.org/pdf/1709.08761v2.pdf | |
PWC | https://paperswithcode.com/paper/image-similarity-using-deep-cnn-and |
Repo | |
Framework | |
k*-Nearest Neighbors: From Global to Local
Title | k*-Nearest Neighbors: From Global to Local |
Authors | Oren Anava, Kfir Y. Levy |
Abstract | The weighted k-nearest neighbors algorithm is one of the most fundamental non-parametric methods in pattern recognition and machine learning. The question of setting the optimal number of neighbors as well as the optimal weights has received much attention throughout the years; nevertheless, this problem seems to have remained unsettled. In this paper we offer a simple approach to locally weighted regression/classification, where we make the bias-variance tradeoff explicit. Our formulation enables us to phrase a notion of optimal weights, and to find these weights as well as the optimal number of neighbors efficiently and adaptively, for each data point whose value we wish to estimate. The applicability of our approach is demonstrated on several datasets, showing superior performance over standard locally weighted methods. |
Tasks | |
Published | 2017-01-25 |
URL | http://arxiv.org/abs/1701.07266v1 |
http://arxiv.org/pdf/1701.07266v1.pdf | |
PWC | https://paperswithcode.com/paper/k-nearest-neighbors-from-global-to-local |
Repo | |
Framework | |
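For context, the standard locally weighted baseline that k*-NN improves upon looks like this: each of the k nearest neighbors votes with a fixed weight determined by its distance. This sketch is the conventional inverse-distance scheme, not the paper's per-query optimal-weight computation; the toy data and the small epsilon guard are assumptions of the example.

```python
def weighted_knn(data, x, k=3):
    """Distance-weighted k-NN regression: each of the k nearest neighbors
    votes with weight inversely proportional to its distance from x."""
    nearest = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    weights = [1.0 / (abs(px - x) + 1e-9) for px, _ in nearest]
    return sum(w * py for w, (_, py) in zip(weights, nearest)) / sum(weights)

# Toy 1-D data on the line y = x.
data = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
estimate = weighted_knn(data, 1.1, k=2)
```

The paper's contribution is to replace this fixed weighting rule with weights (and an effective k) chosen adaptively per query point from an explicit bias-variance criterion.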
Adversarial Feature Matching for Text Generation
Title | Adversarial Feature Matching for Text Generation |
Authors | Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, Lawrence Carin |
Abstract | The Generative Adversarial Network (GAN) has achieved great success in generating realistic (real-valued) synthetic data. However, convergence issues and difficulties dealing with discrete data hinder the applicability of GAN to text. We propose a framework for generating realistic text via adversarial training. We employ a long short-term memory network as the generator and a convolutional network as the discriminator. Instead of using the standard GAN objective, we propose matching the high-dimensional latent feature distributions of real and synthetic sentences via a kernelized discrepancy metric. This eases adversarial training by alleviating the mode-collapse problem. Our experiments show superior performance in quantitative evaluation and demonstrate that our model can generate realistic-looking sentences. |
Tasks | Text Generation |
Published | 2017-06-12 |
URL | http://arxiv.org/abs/1706.03850v3 |
http://arxiv.org/pdf/1706.03850v3.pdf | |
PWC | https://paperswithcode.com/paper/adversarial-feature-matching-for-text |
Repo | |
Framework | |
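The "kernelized discrepancy metric" in the abstract is a Maximum Mean Discrepancy (MMD)-style two-sample statistic. Below is a toy scalar-sample version with a Gaussian kernel, to show what the quantity measures; the real model matches high-dimensional latent features of sentences, and the kernel, bandwidth, and data here are invented for illustration.

```python
import math

def mmd_sq(xs, ys, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between two
    samples under a Gaussian kernel: 0 iff the samples look identical."""
    def k(s, t):
        return sum(math.exp(-(a - b) ** 2 / (2 * sigma ** 2))
                   for a in s for b in t) / (len(s) * len(t))
    return k(xs, xs) + k(ys, ys) - 2 * k(xs, ys)

same = mmd_sq([0.0, 0.1, 0.2], [0.0, 0.1, 0.2])  # identical samples
far = mmd_sq([0.0, 0.1, 0.2], [5.0, 5.1, 5.2])   # well-separated samples
```

Training the generator to drive this discrepancy toward zero pushes the distribution of synthetic features onto the distribution of real ones, rather than fooling the discriminator on individual examples.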