Paper Group ANR 590
Towards a glaucoma risk index based on simulated hemodynamics from fundus images. Network Signatures from Image Representation of Adjacency Matrices: Deep/Transfer Learning for Subgraph Classification. Universal Supervised Learning for Individual Data. Mechanism Design for Social Good. Revisiting Skip-Gram Negative Sampling Model with Rectification. Learning to synthesize: splitting and recombining low and high spatial frequencies for image recovery. BelMan: Bayesian Bandits on the Belief–Reward Manifold. An Upper Bound for Random Measurement Error in Causal Discovery. Representing the Insincere: Strategically Robust Proportional Representation. gprHOG and the popularity of Histogram of Oriented Gradients (HOG) for Buried Threat Detection in Ground-Penetrating Radar. The loss landscape of overparameterized neural networks. Stable specification search in structural equation model with latent variables. The Importance of Generation Order in Language Modeling. Applications of a Graph Theoretic Based Clustering Framework in Computer Vision and Pattern Recognition. Adversarial Attacks, Regression, and Numerical Stability Regularization.
Towards a glaucoma risk index based on simulated hemodynamics from fundus images
Title | Towards a glaucoma risk index based on simulated hemodynamics from fundus images |
Authors | José Ignacio Orlando, João Barbosa Breda, Karel van Keer, Matthew B. Blaschko, Pablo J. Blanco, Carlos A. Bulant |
Abstract | Glaucoma is the leading cause of irreversible but preventable blindness in the world. Its major treatable risk factor is the intra-ocular pressure, although other biomarkers are being explored to improve the understanding of the pathophysiology of the disease. It has been recently observed that glaucoma induces changes in the ocular hemodynamics. However, its effects on the functional behavior of the retinal arterioles have not been studied yet. In this paper we propose a first approach for characterizing those changes using computational hemodynamics. The retinal blood flow is simulated using a 0D model for a steady, incompressible, non-Newtonian fluid in rigid domains. The simulation is performed on patient-specific arterial trees extracted from fundus images. We also propose a novel feature representation technique to compress the outcomes of the simulation stage into a fixed-length feature vector that can be used for classification studies. Our experiments on a new database of fundus images show that our approach is able to capture representative changes in the hemodynamics of glaucomatous patients. Code and data are publicly available at https://ignaciorlando.github.io. |
Tasks | |
Published | 2018-05-25 |
URL | http://arxiv.org/abs/1805.10273v4 |
http://arxiv.org/pdf/1805.10273v4.pdf | |
PWC | https://paperswithcode.com/paper/towards-a-glaucoma-risk-index-based-on |
Repo | |
Framework | |
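The paper above simulates steady retinal blood flow with a 0D (lumped-parameter) model for a non-Newtonian fluid on patient-specific arterial trees. As a rough illustration of what a 0D vessel-network computation looks like, here is a minimal sketch that assumes a Newtonian Poiseuille resistance per segment on a made-up binary tree; the viscosity, geometry, and pressure drop are all illustrative assumptions, not the authors' model or data.

```python
import math

MU = 3.5e-3          # blood viscosity [Pa*s], Newtonian stand-in (assumption)

def poiseuille_resistance(length_m, radius_m):
    """Hydraulic resistance of a rigid cylindrical segment (Poiseuille law)."""
    return 8.0 * MU * length_m / (math.pi * radius_m ** 4)

def equivalent_resistance(node):
    """Series resistance of the segment plus the parallel combination of its children."""
    r_seg = poiseuille_resistance(node["length"], node["radius"])
    children = node.get("children", [])
    if not children:
        return r_seg
    inv = sum(1.0 / equivalent_resistance(c) for c in children)
    return r_seg + 1.0 / inv

def distribute_flow(node, q_in, flows):
    """Split the incoming flow among children in proportion to their conductances."""
    flows[node["name"]] = q_in
    children = node.get("children", [])
    if not children:
        return
    conductances = [1.0 / equivalent_resistance(c) for c in children]
    total = sum(conductances)
    for child, g in zip(children, conductances):
        distribute_flow(child, q_in * g / total, flows)

# Hypothetical arteriolar tree (lengths [m], radii [m]); values are illustrative only.
tree = {"name": "root", "length": 2e-3, "radius": 60e-6, "children": [
    {"name": "a", "length": 1.5e-3, "radius": 45e-6},
    {"name": "b", "length": 1.8e-3, "radius": 50e-6, "children": [
        {"name": "b1", "length": 1.0e-3, "radius": 35e-6},
        {"name": "b2", "length": 1.2e-3, "radius": 30e-6},
    ]},
]}

delta_p = 2000.0  # assumed pressure drop across the tree [Pa]
q_root = delta_p / equivalent_resistance(tree)
flows = {}
distribute_flow(tree, q_root, flows)
print({k: f"{v * 6e7:.4f} mL/min" for k, v in flows.items()})
```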
Network Signatures from Image Representation of Adjacency Matrices: Deep/Transfer Learning for Subgraph Classification
Title | Network Signatures from Image Representation of Adjacency Matrices: Deep/Transfer Learning for Subgraph Classification |
Authors | Kshiteesh Hegde, Malik Magdon-Ismail, Ram Ramanathan, Bishal Thapa |
Abstract | We propose a novel subgraph image representation for classification of network fragments with the targets being their parent networks. The graph image representation is based on 2D image embeddings of adjacency matrices. We use this image representation in two modes. First, as the input to a machine learning algorithm. Second, as the input to a pure transfer learner. Our conclusions from several datasets are that (a) deep learning using our structured image features outperforms benchmark graph-kernel and classical feature-based methods; and (b) pure transfer learning works effectively with minimal interference from the user and is robust to small data. |
Tasks | Transfer Learning |
Published | 2018-04-17 |
URL | http://arxiv.org/abs/1804.06275v1 |
http://arxiv.org/pdf/1804.06275v1.pdf | |
PWC | https://paperswithcode.com/paper/network-signatures-from-image-representation |
Repo | |
Framework | |
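As a small illustration of the idea in the abstract above, the sketch below renders a BFS-sampled subgraph's adjacency matrix as a fixed-size grayscale array that could be stacked into batches for a CNN or a pretrained transfer-learning backbone. The BFS node ordering, the 64x64 size, and the zero-padding are assumptions made for this sketch, not necessarily the paper's exact image embedding; `networkx` is used for graph handling.

```python
import numpy as np
import networkx as nx

def subgraph_to_image(graph, seed_node, num_nodes=64):
    """Render a BFS-sampled subgraph's adjacency matrix as a fixed-size grayscale array.

    The BFS ordering and zero-padding to num_nodes x num_nodes are assumptions
    made for this sketch, not necessarily the embedding used in the paper."""
    order = [v for _, v in zip(range(num_nodes), nx.bfs_tree(graph, seed_node))]
    adj = nx.to_numpy_array(graph.subgraph(order), nodelist=order)
    image = np.zeros((num_nodes, num_nodes), dtype=np.float32)
    image[:adj.shape[0], :adj.shape[1]] = adj
    return image  # values in {0, 1}; ready to stack into a CNN input batch

# Example: sample fragments from two "parent" networks and label them by parent.
g_social = nx.barabasi_albert_graph(500, 3, seed=0)   # stand-in parent network 1
g_mesh = nx.random_regular_graph(4, 500, seed=0)      # stand-in parent network 2
x0 = subgraph_to_image(g_social, seed_node=0)
x1 = subgraph_to_image(g_mesh, seed_node=0)
print(x0.shape, x0.sum(), x1.sum())  # same shape, different edge-density signatures
```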
Universal Supervised Learning for Individual Data
Title | Universal Supervised Learning for Individual Data |
Authors | Yaniv Fogel, Meir Feder |
Abstract | Universal supervised learning is considered from an information theoretic point of view following the universal prediction approach, see Merhav and Feder (1998). We consider the standard supervised “batch” learning setting, where prediction is done on a test sample once the entire training data is observed, and the individual setting, where the features and labels, both in training and test, are specific individual quantities. The information theoretic approach naturally uses the self-information loss, or log-loss. Our results provide universal learning schemes that compete with a “genie” (or reference) that knows the true test label. In particular, it is demonstrated that the main proposed scheme, termed Predictive Normalized Maximum Likelihood (pNML), is a robust learning solution that outperforms the current leading approach based on Empirical Risk Minimization (ERM). Furthermore, the pNML construction provides a pointwise indication of the learnability of the specific test challenge with the given training examples. |
Tasks | |
Published | 2018-12-22 |
URL | http://arxiv.org/abs/1812.09520v1 |
http://arxiv.org/pdf/1812.09520v1.pdf | |
PWC | https://paperswithcode.com/paper/universal-supervised-learning-for-individual |
Repo | |
Framework | |
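The pNML scheme mentioned in the abstract has a compact form for a finite hypothesis class: for each candidate test label, refit over the class with that label appended, keep the best in-class probability, and normalize across labels; the log of the normalizer then serves as the pointwise learnability (regret) indicator. The sketch below implements that recipe on a toy one-dimensional logistic class; the hypothesis grid and data are illustrative assumptions, not anything from the paper.

```python
import numpy as np

def pnml_probs(hypotheses, log_lik, train, x_test, labels=(0, 1)):
    """Predictive Normalized Maximum Likelihood over a finite hypothesis class.

    hypotheses : iterable of parameter values (the finite class assumed here)
    log_lik    : log_lik(theta, x, y) -> float, per-sample log-likelihood
    train      : list of (x, y) pairs
    Returns (normalized label probabilities, log-normalizer a.k.a. pointwise regret)."""
    best = []
    for y in labels:
        data = train + [(x_test, y)]
        # ML over the class, with the candidate test label appended.
        theta_hat = max(hypotheses, key=lambda th: sum(log_lik(th, x, t) for x, t in data))
        best.append(np.exp(log_lik(theta_hat, x_test, y)))
    z = sum(best)
    return np.array(best) / z, np.log(z)

# Toy 1-D logistic hypotheses; the grid and data below are illustrative assumptions.
def log_lik(theta, x, y):
    p = 1.0 / (1.0 + np.exp(-theta * x))
    return np.log(p if y == 1 else 1.0 - p)

hypotheses = np.linspace(-3.0, 3.0, 61)
train = [(-1.0, 0), (-0.5, 0), (0.7, 1), (1.2, 1)]
for x_test in (2.0, 0.05):
    probs, regret = pnml_probs(hypotheses, log_lik, train, x_test)
    print(f"x={x_test:+.2f}  p(y)={probs.round(3)}  log-normalizer={regret:.4f}")
```

A larger log-normalizer signals that the training set constrains the test prediction only weakly, which is the pointwise learnability indication the abstract refers to.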
Mechanism Design for Social Good
Title | Mechanism Design for Social Good |
Authors | Rediet Abebe, Kira Goldner |
Abstract | Across various domains, such as health, education, and housing, improving societal welfare involves allocating resources, setting policies, targeting interventions, and regulating activities. These solutions have an immense impact on the day-to-day lives of individuals, whether in the form of access to quality healthcare, labor market outcomes, or how votes are accounted for in a democratic society. Problems that can have an out-sized impact on individuals whose opportunities have historically been limited often pose conceptual and technical challenges, requiring insights from many disciplines. Conversely, the lack of an interdisciplinary approach can leave these urgent needs unaddressed and can even exacerbate underlying socioeconomic inequalities. To realize the opportunities in these domains, we need to correctly set objectives and reason about human behavior and actions. Doing so requires a deep grounding in the field of interest and collaboration with domain experts who understand the societal implications and feasibility of proposed solutions. These insights can play an instrumental role in proposing algorithmically-informed policies. In this article, we describe the Mechanism Design for Social Good (MD4SG) research agenda, which involves using insights from algorithms, optimization, and mechanism design to improve access to opportunity. The MD4SG research community takes an interdisciplinary, multi-stakeholder approach to improve societal welfare. We discuss three exciting research avenues within MD4SG related to improving access to opportunity in the developing world, labor markets and discrimination, and housing. For each of these, we showcase ongoing work, underline new directions, and discuss potential for implementing existing work in practice. |
Tasks | |
Published | 2018-10-21 |
URL | http://arxiv.org/abs/1810.09832v1 |
http://arxiv.org/pdf/1810.09832v1.pdf | |
PWC | https://paperswithcode.com/paper/mechanism-design-for-social-good |
Repo | |
Framework | |
Revisiting Skip-Gram Negative Sampling Model with Rectification
Title | Revisiting Skip-Gram Negative Sampling Model with Rectification |
Authors | Cun Mu, Guang Yang, Zheng Yan |
Abstract | We revisit skip-gram negative sampling (SGNS), one of the most popular neural-network-based approaches to learning distributed word representations. We first point out the ambiguity issue undermining the SGNS model, in the sense that the word vectors can be entirely distorted without changing the objective value. To resolve the issue, we investigate the intrinsic structure in the solution that a good word embedding model should deliver. Motivated by this, we rectify the SGNS model with quadratic regularization, and show that this simple modification suffices to structure the solution in the desired manner. A theoretical justification is presented, which provides novel insights into quadratic regularization. Preliminary experiments are also conducted on Google’s analogical reasoning task to support the modified SGNS model. |
Tasks | |
Published | 2018-04-01 |
URL | http://arxiv.org/abs/1804.00306v2 |
http://arxiv.org/pdf/1804.00306v2.pdf | |
PWC | https://paperswithcode.com/paper/revisiting-skip-gram-negative-sampling-model |
Repo | |
Framework | |
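A minimal sketch of the rectification described above: the usual skip-gram negative sampling update with an added quadratic (L2) penalty on the word and context vectors. The embedding dimension, learning rate, regularization weight, and toy corpus are assumptions made for illustration; this is a generic implementation, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(U, V, center, context, negatives, lr=0.05, lam=1e-3):
    """One SGD step on the SGNS loss with quadratic (L2) regularization.

    U: input (word) vectors, V: output (context) vectors. `lam` is the
    regularization weight; its value here is an arbitrary illustration."""
    u = U[center]
    s_pos = sigmoid(u @ V[context])
    grad_u = -(1.0 - s_pos) * V[context] + lam * u
    V[context] -= lr * (-(1.0 - s_pos) * u + lam * V[context])
    for n in negatives:
        s_neg = sigmoid(u @ V[n])
        grad_u += s_neg * V[n]
        V[n] -= lr * (s_neg * u + lam * V[n])
    U[center] -= lr * grad_u

# Toy corpus of word-id pairs; vocabulary and dimensions are made up for the sketch.
vocab, dim = 50, 16
U = 0.1 * rng.standard_normal((vocab, dim))
V = 0.1 * rng.standard_normal((vocab, dim))
pairs = [(rng.integers(vocab), rng.integers(vocab)) for _ in range(2000)]
for center, context in pairs:
    negatives = rng.integers(vocab, size=5)
    sgns_step(U, V, center, context, negatives)
print("mean norms:", np.linalg.norm(U, axis=1).mean(), np.linalg.norm(V, axis=1).mean())
```

The quadratic penalty keeps the vector norms bounded, which is the kind of structure the abstract says the unregularized SGNS objective fails to pin down.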
Learning to synthesize: splitting and recombining low and high spatial frequencies for image recovery
Title | Learning to synthesize: splitting and recombining low and high spatial frequencies for image recovery |
Authors | Mo Deng, Shuai Li, George Barbastathis |
Abstract | Deep Neural Network (DNN)-based image reconstruction, despite many successes, often exhibits uneven fidelity between high and low spatial frequency bands. In this paper we propose the Learning Synthesis by DNN (LS-DNN) approach, where two DNNs process the low and high spatial frequencies, respectively, and, improving over [30], the two DNNs are trained separately and a third DNN combines their outputs into an image with high fidelity at all bands. We demonstrate LS-DNN in two canonical inverse problems: super-resolution (SR) in diffraction-limited imaging (DLI), and quantitative phase retrieval (QPR). Our results also show comparable or improved performance over perceptual-loss-based SR [21], and can be generalized to a wider range of image recovery problems. |
Tasks | Image Reconstruction, Super-Resolution |
Published | 2018-11-19 |
URL | http://arxiv.org/abs/1811.07945v1 |
http://arxiv.org/pdf/1811.07945v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-to-synthesize-splitting-and |
Repo | |
Framework | |
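To make the low/high split concrete, the sketch below separates an image into complementary frequency bands with a Gaussian low-pass mask in the Fourier domain; in LS-DNN each band would be handled by its own DNN and a third DNN would recombine their outputs, but only this generic splitting step is shown, and the cutoff parameter is an assumed value rather than one from the paper.

```python
import numpy as np

def split_frequencies(image, sigma_frac=0.08):
    """Split an image into low- and high-frequency bands via a Gaussian mask in
    the Fourier domain. `sigma_frac` (cutoff as a fraction of the sampling rate)
    is an assumed hyperparameter for this sketch."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    mask = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * sigma_frac ** 2))
    spectrum = np.fft.fft2(image)
    low = np.real(np.fft.ifft2(spectrum * mask))
    high = image - low          # exact complement, so low + high reconstructs the input
    return low, high

rng = np.random.default_rng(1)
img = rng.random((128, 128))
low, high = split_frequencies(img)
print(np.allclose(low + high, img), low.std(), high.std())
# In LS-DNN, `low` and `high` would feed two separately trained DNNs whose outputs
# a third DNN merges back into a full-band reconstruction.
```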
BelMan: Bayesian Bandits on the Belief–Reward Manifold
Title | BelMan: Bayesian Bandits on the Belief–Reward Manifold |
Authors | Debabrota Basu, Pierre Senellart, Stéphane Bressan |
Abstract | We propose a generic, Bayesian, information geometric approach to the exploration–exploitation trade-off in multi-armed bandit problems. Our approach, BelMan, uniformly supports pure exploration, exploration–exploitation, and two-phase bandit problems. The knowledge on bandit arms and their reward distributions is summarised by the barycentre of the joint distributions of beliefs and rewards of the arms, the *pseudobelief-reward*, within the beliefs-rewards manifold. BelMan alternates *information projection* and *reverse information projection*, i.e., projection of the pseudobelief-reward onto beliefs-rewards to choose the arm to play, and projection of the resulting beliefs-rewards onto the pseudobelief-reward. It introduces a mechanism that infuses an exploitative bias by means of a *focal distribution*, i.e., a reward distribution that gradually concentrates on higher rewards. Comparative performance evaluation with state-of-the-art algorithms shows that BelMan is not only competitive but can also outperform other approaches in specific setups, for instance involving many arms and continuous rewards. |
Tasks | |
Published | 2018-05-04 |
URL | https://arxiv.org/abs/1805.01627v2 |
https://arxiv.org/pdf/1805.01627v2.pdf | |
PWC | https://paperswithcode.com/paper/belman-bayesian-bandits-on-the-belief-reward |
Repo | |
Framework | |
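BelMan itself alternates information projection and reverse information projection against a pseudobelief-reward barycentre, which is more than a short sketch can reproduce faithfully. As a plainly simpler Bayesian-bandit baseline in the same belief-maintenance setting, here is Thompson sampling over Beta beliefs for Bernoulli arms; the arm means and horizon are made up, and this is explicitly not the BelMan algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

def thompson_bernoulli(true_means, horizon=2000):
    """Thompson sampling with Beta(1,1) priors -- a simple Bayesian-bandit baseline,
    NOT BelMan (which alternates information projections instead)."""
    k = len(true_means)
    alpha = np.ones(k)   # posterior successes + 1
    beta = np.ones(k)    # posterior failures + 1
    regret = 0.0
    best = max(true_means)
    for _ in range(horizon):
        arm = int(np.argmax(rng.beta(alpha, beta)))   # sample a belief, act greedily
        reward = float(rng.random() < true_means[arm])
        alpha[arm] += reward
        beta[arm] += 1.0 - reward
        regret += best - true_means[arm]
    return regret, alpha / (alpha + beta)

regret, posterior_means = thompson_bernoulli([0.15, 0.45, 0.5, 0.52])
print(f"cumulative regret: {regret:.1f}")
print("posterior means:", posterior_means.round(3))
```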
An Upper Bound for Random Measurement Error in Causal Discovery
Title | An Upper Bound for Random Measurement Error in Causal Discovery |
Authors | Tineke Blom, Anna Klimovskaia, Sara Magliacane, Joris M. Mooij |
Abstract | Causal discovery algorithms infer causal relations from data based on several assumptions, including notably the absence of measurement error. However, this assumption is most likely violated in practical applications, which may result in erroneous, irreproducible results. In this work we show how to obtain an upper bound for the variance of random measurement error from the covariance matrix of measured variables and how to use this upper bound as a correction for constraint-based causal discovery. We demonstrate a practical application of our approach on both simulated data and real-world protein signaling data. |
Tasks | Causal Discovery |
Published | 2018-10-18 |
URL | http://arxiv.org/abs/1810.07973v1 |
http://arxiv.org/pdf/1810.07973v1.pdf | |
PWC | https://paperswithcode.com/paper/an-upper-bound-for-random-measurement-error |
Repo | |
Framework | |
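One simple way to get a bound of the kind the abstract describes, under an assumption added here (homogeneous measurement error, X_obs = X_true + E with Cov(E) = sigma^2 I independent of X_true): since Cov(X_obs) = Cov(X_true) + sigma^2 I and Cov(X_true) must remain positive semidefinite, sigma^2 cannot exceed the smallest eigenvalue of the observed covariance. Whether the paper's bound takes exactly this form is not claimed; the sketch below only demonstrates that eigenvalue argument on simulated data.

```python
import numpy as np

def measurement_error_upper_bound(X_obs):
    """Upper bound on a homogeneous measurement-error variance sigma^2.

    Assumes X_obs = X_true + E with Cov(E) = sigma^2 * I independent of X_true.
    Since Cov(X_obs) = Cov(X_true) + sigma^2 * I and Cov(X_true) is PSD,
    sigma^2 <= smallest eigenvalue of Cov(X_obs)."""
    cov = np.cov(X_obs, rowvar=False)
    return float(np.linalg.eigvalsh(cov).min())

rng = np.random.default_rng(3)
n = 20000
# Simulated linear system with known measurement-error variance 0.25 (illustrative data).
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.3 * rng.standard_normal(n)
x3 = -0.5 * x2 + 0.4 * rng.standard_normal(n)
X_true = np.column_stack([x1, x2, x3])
X_obs = X_true + 0.5 * rng.standard_normal(X_true.shape)   # sigma^2 = 0.25
print("true error variance: 0.25, upper bound:",
      round(measurement_error_upper_bound(X_obs), 3))
```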
Representing the Insincere: Strategically Robust Proportional Representation
Title | Representing the Insincere: Strategically Robust Proportional Representation |
Authors | Barton E. Lee |
Abstract | Proportional representation (PR) is a fundamental principle of many democracies world-wide which employ PR-based voting rules to elect their representatives. The normative properties of these voting rules, however, are often only understood in the context of sincere voting. In this paper we consider PR in the presence of strategic voters. We construct a voting rule such that for every preference profile there exists at least one costly voting equilibrium satisfying PR with respect to voters’ private and unrevealed preferences - such a voting rule is said to be strategically robust. In contrast, a commonly applied voting rule is shown not to be strategically robust. Furthermore, we prove a limit on ‘how strategically robust’ a PR-based voting rule can be; we show that there is no PR-based voting rule which ensures that every equilibrium satisfies PR. Collectively, our results highlight the possibility and limit of achieving PR in the presence of strategic voters and a positive role for mechanisms, such as pre-election polls, which coordinate voter behaviour towards equilibria which satisfy PR. |
Tasks | |
Published | 2018-01-29 |
URL | http://arxiv.org/abs/1801.09346v1 |
http://arxiv.org/pdf/1801.09346v1.pdf | |
PWC | https://paperswithcode.com/paper/representing-the-insincere-strategically |
Repo | |
Framework | |
gprHOG and the popularity of Histogram of Oriented Gradients (HOG) for Buried Threat Detection in Ground-Penetrating Radar
Title | gprHOG and the popularity of Histogram of Oriented Gradients (HOG) for Buried Threat Detection in Ground-Penetrating Radar |
Authors | Daniel Reichman, Leslie M. Collins, Jordan M. Malof |
Abstract | Substantial research has been devoted to the development of algorithms that automate buried threat detection (BTD) with ground penetrating radar (GPR) data, resulting in a large number of proposed algorithms. One popular algorithm for GPR-based BTD, originally applied by Torrione et al., 2012, is the Histogram of Oriented Gradients (HOG) feature. In a recent large-scale comparison among five veteran institutions, a modified version of HOG, referred to here as “gprHOG”, performed poorly compared to other modern algorithms. In this paper, we provide experimental evidence demonstrating that the modifications to HOG that comprise gprHOG result in a substantially better-performing algorithm. The results here, in conjunction with the large-scale algorithm comparison, suggest that HOG is not competitive with modern GPR-based BTD algorithms. Given HOG’s popularity, these results raise some questions about many existing studies, and suggest that gprHOG (and especially HOG) should be employed with caution in future studies. |
Tasks | |
Published | 2018-06-04 |
URL | http://arxiv.org/abs/1806.01349v2 |
http://arxiv.org/pdf/1806.01349v2.pdf | |
PWC | https://paperswithcode.com/paper/gprhog-and-the-popularity-of-histogram-of |
Repo | |
Framework | |
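For readers unfamiliar with the feature itself, the sketch below extracts a standard HOG descriptor from a synthetic stand-in for a GPR B-scan patch using scikit-image. The cell, block, and orientation settings are generic defaults rather than the gprHOG modifications the paper analyzes, and the synthetic hyperbola is only a placeholder for real GPR data.

```python
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(0)

# Synthetic stand-in for a GPR B-scan patch: background clutter plus a hyperbolic
# reflection, purely for illustration (no real GPR data is used here).
rows, cols = 64, 64
patch = 0.1 * rng.standard_normal((rows, cols))
xs = np.arange(cols)
apex_col, apex_row = 32, 18
curve = (apex_row + 0.02 * (xs - apex_col) ** 2).astype(int)
patch[np.clip(curve, 0, rows - 1), xs] += 1.0

# Standard HOG descriptor; these parameters are generic defaults, not gprHOG's.
features = hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys', feature_vector=True)
print(features.shape)   # fixed-length descriptor, ready for an SVM or random forest
```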
The loss landscape of overparameterized neural networks
Title | The loss landscape of overparameterized neural networks |
Authors | Y Cooper |
Abstract | We explore some mathematical features of the loss landscape of overparameterized neural networks. A priori one might imagine that the loss function looks like a typical function from $\mathbb{R}^n$ to $\mathbb{R}$ - in particular, nonconvex, with discrete global minima. In this paper, we prove that in at least one important way, the loss function of an overparameterized neural network does not look like a typical function. If a neural net has $n$ parameters and is trained on $d$ data points, with $n>d$, we show that the locus $M$ of global minima of $L$ is usually not discrete, but rather an $n-d$ dimensional submanifold of $\mathbb{R}^n$. In practice, neural nets commonly have orders of magnitude more parameters than data points, so this observation implies that $M$ is typically a very high-dimensional subset of $\mathbb{R}^n$. |
Tasks | |
Published | 2018-04-26 |
URL | http://arxiv.org/abs/1804.10200v1 |
http://arxiv.org/pdf/1804.10200v1.pdf | |
PWC | https://paperswithcode.com/paper/the-loss-landscape-of-overparameterized-1 |
Repo | |
Framework | |
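The abstract's claim is easy to see numerically in the simplest overparameterized case. With a linear model, n = 3 parameters and d = 1 data point, the zero-loss set {w : w . x = y} is an affine plane of dimension n - d = 2; the sketch below exhibits two distinct exact minimizers and checks that every point on the segment between them is also a global minimum. The linear model is chosen for transparency; the paper's result concerns neural networks.

```python
import numpy as np

# One data point (d = 1), three parameters (n = 3): f(x; w) = w . x, squared loss.
x = np.array([1.0, 2.0, -1.0])
y = 3.0

def loss(w):
    return 0.5 * (w @ x - y) ** 2

# The zero-loss set {w : w . x = y} is an affine plane of dimension n - d = 2.
w_a = y / (x @ x) * x                      # minimum-norm solution
w_b = w_a + np.array([2.0, -1.0, 0.0])     # add a vector orthogonal to x: still zero loss
for t in np.linspace(0.0, 1.0, 5):
    w = (1 - t) * w_a + t * w_b
    print(t, round(loss(w), 12))           # every point on the segment is a global minimum
```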
Stable specification search in structural equation model with latent variables
Title | Stable specification search in structural equation model with latent variables |
Authors | Ridho Rahmadi, Perry Groot, Tom Heskes |
Abstract | In our previous study, we introduced stable specification search for cross-sectional data (S3C). It is an exploratory causal method that combines the stability selection concept with multi-objective optimization to search for stable and parsimonious causal structures across the entire range of model complexities. In this study, we extended S3C to S3C-Latent, to model causal relations between latent variables. We evaluated S3C-Latent on simulated data and compared the results to those of PC-MIMBuild, an extension of the PC algorithm, a state-of-the-art causal discovery method. The comparison showed that S3C-Latent achieved better performance. We also applied S3C-Latent to real-world data of children with attention deficit/hyperactivity disorder and data about measuring mental abilities among pupils. The results are consistent with those of previous studies. |
Tasks | Causal Discovery |
Published | 2018-05-24 |
URL | http://arxiv.org/abs/1805.09527v1 |
http://arxiv.org/pdf/1805.09527v1.pdf | |
PWC | https://paperswithcode.com/paper/stable-specification-search-in-structural |
Repo | |
Framework | |
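The outer loop of stability selection, which S3C and S3C-Latent build on, is easy to sketch generically: repeatedly subsample the data, fit a structure on each subsample, count how often each edge appears, and keep edges above a selection-frequency threshold. In the sketch below, `fit_structure` is a hypothetical placeholder (a crude correlation-threshold learner), not the paper's multi-objective SEM search, and the thresholds are arbitrary choices.

```python
import numpy as np
from collections import Counter

def stability_selection(data, fit_structure, n_subsamples=100, frac=0.5,
                        threshold=0.6, seed=0):
    """Generic stability-selection loop over graph edges.

    fit_structure(subsample) -> iterable of (i, j) edges is a hypothetical
    placeholder for a structure learner (S3C-Latent uses a multi-objective
    search over SEM structures; none of that is reproduced here)."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    counts = Counter()
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        counts.update(fit_structure(data[idx]))
    return {edge: c / n_subsamples for edge, c in counts.items()
            if c / n_subsamples >= threshold}

# Toy stand-in learner: connect variable pairs whose absolute correlation is high.
def fit_structure(subsample, cutoff=0.4):
    corr = np.corrcoef(subsample, rowvar=False)
    p = corr.shape[0]
    return [(i, j) for i in range(p) for j in range(i + 1, p)
            if abs(corr[i, j]) > cutoff]

rng = np.random.default_rng(1)
z = rng.standard_normal(500)                      # a shared "latent" driver
data = np.column_stack([z + 0.5 * rng.standard_normal(500) for _ in range(4)])
print(stability_selection(data, fit_structure))   # edge -> selection frequency
```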
The Importance of Generation Order in Language Modeling
Title | The Importance of Generation Order in Language Modeling |
Authors | Nicolas Ford, Daniel Duckworth, Mohammad Norouzi, George E. Dahl |
Abstract | Neural language models are a critical component of state-of-the-art systems for machine translation, summarization, audio transcription, and other tasks. These language models are almost universally autoregressive in nature, generating sentences one token at a time from left to right. This paper studies the influence of token generation order on model quality via a novel two-pass language model that produces partially-filled sentence “templates” and then fills in missing tokens. We compare various strategies for structuring these two passes and observe a surprisingly large variation in model quality. We find the most effective strategy generates function words in the first pass followed by content words in the second. We believe these experimental results justify a more extensive investigation of generation order for neural language models. |
Tasks | Language Modelling, Machine Translation |
Published | 2018-08-23 |
URL | http://arxiv.org/abs/1808.07910v1 |
http://arxiv.org/pdf/1808.07910v1.pdf | |
PWC | https://paperswithcode.com/paper/the-importance-of-generation-order-in |
Repo | |
Framework | |
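A small sketch of the data-preparation step behind the two-pass scheme described above: the first pass targets a sentence "template" of function words with placeholders, and the second pass targets the missing content words. The function-word list and placeholder token are simplifications assumed here; the paper's models and vocabulary handling are not reproduced.

```python
# Assumed, simplified function-word list; the paper's actual split is richer.
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "are",
                  "was", "on", "for", "with", "that", "it", "as", "at", "by"}
PLACEHOLDER = "<blank>"

def two_pass_targets(sentence):
    """Build first-pass (template) and second-pass (fill-in) targets for one sentence."""
    tokens = sentence.lower().split()
    template = [t if t in FUNCTION_WORDS else PLACEHOLDER for t in tokens]
    fills = [t for t in tokens if t not in FUNCTION_WORDS]
    return template, fills

template, fills = two_pass_targets("The cat sat on the mat in the sun")
print("pass 1:", " ".join(template))   # the <blank> <blank> on the <blank> in the <blank>
print("pass 2:", fills)                # ['cat', 'sat', 'mat', 'sun']
```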
Applications of a Graph Theoretic Based Clustering Framework in Computer Vision and Pattern Recognition
Title | Applications of a Graph Theoretic Based Clustering Framework in Computer Vision and Pattern Recognition |
Authors | Yonatan Tariku Tesfaye |
Abstract | Recently, several clustering algorithms have been used to solve a variety of problems from different disciplines. This dissertation aims to address different challenging tasks in computer vision and pattern recognition by casting the problems as clustering problems. We propose novel approaches to solve multi-target tracking, visual geo-localization and outlier detection problems using a unified underlying clustering framework, i.e., dominant set clustering and its extensions, and present superior results over several state-of-the-art approaches. |
Tasks | Outlier Detection |
Published | 2018-01-07 |
URL | http://arxiv.org/abs/1802.02181v1 |
http://arxiv.org/pdf/1802.02181v1.pdf | |
PWC | https://paperswithcode.com/paper/applications-of-a-graph-theoretic-based |
Repo | |
Framework | |
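The unifying tool named in the abstract is dominant set clustering. A standard way to extract one dominant set is to run discrete replicator dynamics on the affinity matrix and keep the nodes whose support stays non-negligible; the sketch below implements that generic recipe on a toy affinity matrix, with the iteration count and support threshold chosen arbitrarily.

```python
import numpy as np

def dominant_set(A, iters=2000, support_eps=1e-4):
    """Extract one dominant set from affinity matrix A via replicator dynamics:
    x <- x * (A x) / (x^T A x). Nodes with support above support_eps form the cluster."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        Ax = A @ x
        x = x * Ax / (x @ Ax)
    return np.where(x > support_eps)[0], x

# Toy affinity matrix: nodes 0-2 are tightly connected, 3-4 only weakly attached.
A = np.array([[0.0, 0.9, 0.8, 0.1, 0.0],
              [0.9, 0.0, 0.85, 0.0, 0.1],
              [0.8, 0.85, 0.0, 0.1, 0.0],
              [0.1, 0.0, 0.1, 0.0, 0.2],
              [0.0, 0.1, 0.0, 0.2, 0.0]], dtype=float)
members, support = dominant_set(A)
print("dominant set:", members, "support:", support.round(3))
```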
Adversarial Attacks, Regression, and Numerical Stability Regularization
Title | Adversarial Attacks, Regression, and Numerical Stability Regularization |
Authors | Andre T. Nguyen, Edward Raff |
Abstract | Adversarial attacks against neural networks in a regression setting are a critical yet understudied problem. In this work, we advance the state of the art by investigating adversarial attacks against regression networks and by formulating a more effective defense against these attacks. In particular, we take the perspective that adversarial attacks are likely caused by numerical instability in learned functions. We introduce a stability-inducing, regularization-based defense against adversarial attacks in the regression setting. Our new, easy-to-implement defense is shown to outperform prior approaches and to improve the numerical stability of learned functions. |
Tasks | |
Published | 2018-12-07 |
URL | http://arxiv.org/abs/1812.02885v1 |
http://arxiv.org/pdf/1812.02885v1.pdf | |
PWC | https://paperswithcode.com/paper/adversarial-attacks-regression-and-numerical |
Repo | |
Framework | |
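The abstract attributes adversarial vulnerability in regression to numerical instability of the learned function. A common, generic way to encourage stability is to penalize the norm of the input gradient during training; the PyTorch sketch below adds such a penalty to an MSE objective as a stand-in, with an assumed architecture and penalty weight, and is not claimed to be the paper's exact defense.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1   # penalty weight (assumed value)

def stability_regularized_loss(x, y):
    """MSE plus a penalty on ||d f(x) / d x||^2, encouraging a numerically
    stable (small-Lipschitz) learned function around the training points."""
    x = x.clone().requires_grad_(True)
    pred = model(x)
    mse = nn.functional.mse_loss(pred, y)
    grads, = torch.autograd.grad(pred.sum(), x, create_graph=True)
    penalty = grads.pow(2).sum(dim=1).mean()
    return mse + lam * penalty

# Toy regression data (illustrative only).
x = torch.randn(256, 4)
y = (x[:, :1] * 2.0 - x[:, 1:2]).detach() + 0.05 * torch.randn(256, 1)
for step in range(200):
    optimizer.zero_grad()
    loss = stability_regularized_loss(x, y)
    loss.backward()
    optimizer.step()
print(float(loss))
```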