Paper Group ANR 217
Neural Networks with Manifold Learning for Diabetic Retinopathy Detection. On the Complexity of Constrained Determinantal Point Processes. Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing. Distributed Multi-Task Learning with Shared Representation. Retrieving challenging vessel connections in retinal images by line co-occurrence statistics. …
Neural Networks with Manifold Learning for Diabetic Retinopathy Detection
Title | Neural Networks with Manifold Learning for Diabetic Retinopathy Detection |
Authors | Arjun Raj Rajanna, Kamelia Aryafar, Rajeev Ramchandran, Christye Sisson, Ali Shokoufandeh, Raymond Ptucha |
Abstract | Widespread outreach programs using remote retinal imaging have proven to decrease the risk from diabetic retinopathy, the leading cause of blindness in the US. However, this process still requires manual verification of image quality and grading of disease severity by a trained human grader, and will remain limited by the scarcity of such graders. Computer-aided diagnosis of retinal images has recently gained increasing attention in the machine learning community. In this paper, we introduce a set of neural networks for diabetic retinopathy classification of fundus retinal images. We evaluate the efficiency of the proposed classifiers in combination with preprocessing and augmentation steps on a sample dataset. Our experimental results show that neural networks combined with image preprocessing can boost the classification accuracy on this dataset. Moreover, the proposed models are scalable and can be applied to large-scale datasets for diabetic retinopathy detection. The models introduced in this paper can facilitate diagnosis and speed up the detection process. |
Tasks | Diabetic Retinopathy Detection |
Published | 2016-12-12 |
URL | http://arxiv.org/abs/1612.03961v1 |
http://arxiv.org/pdf/1612.03961v1.pdf | |
PWC | https://paperswithcode.com/paper/neural-networks-with-manifold-learning-for |
Repo | |
Framework | |
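A minimal sketch of the kind of preprocessing/augmentation plus small-CNN pipeline the abstract describes, written in PyTorch. The transform choices, layer sizes, and the binary DR/no-DR head are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical fundus preprocessing + augmentation + tiny CNN classifier.
import torch.nn as nn
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.Resize(256),             # normalize input resolution
    transforms.CenterCrop(224),         # keep the central fundus region
    transforms.RandomHorizontalFlip(),  # augmentation: mirror
    transforms.RandomRotation(15),      # augmentation: small rotations
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                   # assumed binary DR / no-DR head
)
```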
On the Complexity of Constrained Determinantal Point Processes
Title | On the Complexity of Constrained Determinantal Point Processes |
Authors | L. Elisa Celis, Amit Deshpande, Tarun Kathuria, Damian Straszak, Nisheeth K. Vishnoi |
Abstract | Determinantal Point Processes (DPPs) are probabilistic models that arise in quantum physics and random matrix theory and have recently found numerous applications in computer science. DPPs define distributions over subsets of a given ground set; they exhibit interesting properties such as negative correlation and, unlike other models, admit efficient sampling algorithms. When applied to kernel methods in machine learning, DPPs favor subsets of the given data with more diverse features. However, many real-world applications require efficient algorithms to sample from DPPs with additional constraints on the subset, e.g., partition or matroid constraints that are important for enforcing priors, resource limits, or fairness requirements on the sampled subset. Whether one can efficiently sample from DPPs in such constrained settings is an important problem that was first raised in a survey of DPPs by \cite{KuleszaTaskar12} and studied in some recent works in the machine learning literature. The main contribution of our paper is the first resolution of the complexity of sampling from DPPs with constraints. We give exact efficient algorithms for sampling from constrained DPPs when their description is in unary. Furthermore, we prove that when the constraints are specified in binary, this problem is #P-hard via a reduction from the problem of computing mixed discriminants, implying that an FPRAS is unlikely. Our results benefit from viewing the constrained sampling problem through the lens of polynomials. Consequently, we obtain a few algorithms of independent interest: 1) to count over the base polytope of regular matroids when there are additional (succinct) budget constraints, and 2) to evaluate and compute, for certain special cases, the mixed characteristic polynomials that played a central role in the resolution of the Kadison-Singer problem. |
Tasks | Point Processes |
Published | 2016-08-01 |
URL | http://arxiv.org/abs/1608.00554v3 |
http://arxiv.org/pdf/1608.00554v3.pdf | |
PWC | https://paperswithcode.com/paper/on-the-complexity-of-constrained |
Repo | |
Framework | |
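To make the constrained sampling target concrete, here is a brute-force reference sampler for a partition-constrained DPP on a tiny ground set: Pr(S) ∝ det(L_S) over subsets meeting the per-part quotas. It is exponential-time by design (the paper's contribution is efficient algorithms), and the kernel, parts, and quotas are made-up toy data.

```python
# Brute-force partition-constrained DPP sampler (toy reference only).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.normal(size=(n, n))
L = B @ B.T                      # a PSD kernel
parts = [(0, 1, 2), (3, 4, 5)]   # assumed partition of the ground set
quota = [1, 1]                   # pick exactly one item from each part

subsets, weights = [], []
for chosen in itertools.product(*[itertools.combinations(p, c)
                                  for p, c in zip(parts, quota)]):
    S = [i for grp in chosen for i in grp]
    subsets.append(S)
    weights.append(np.linalg.det(L[np.ix_(S, S)]))  # Pr(S) ∝ det(L_S)

probs = np.array(weights) / sum(weights)
sample = subsets[rng.choice(len(subsets), p=probs)]
print(sample)
```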
Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing
Title | Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing |
Authors | Constantinos Daskalakis, Qinxuan Pan |
Abstract | We show that the square Hellinger distance between two Bayesian networks on the same directed graph, $G$, is subadditive with respect to the neighborhoods of $G$. Namely, if $P$ and $Q$ are the probability distributions defined by two Bayesian networks on the same DAG, our inequality states that the square Hellinger distance, $H^2(P,Q)$, between $P$ and $Q$ is upper bounded by the sum, $\sum_v H^2(P_{\{v\} \cup \Pi_v}, Q_{\{v\} \cup \Pi_v})$, of the square Hellinger distances between the marginals of $P$ and $Q$ on every node $v$ and its parents $\Pi_v$ in the DAG. Importantly, our bound does not involve the conditionals but the marginals of $P$ and $Q$. We derive a similar inequality for more general Markov Random Fields. As an application of our inequality, we show that distinguishing whether two Bayesian networks $P$ and $Q$ on the same (but potentially unknown) DAG satisfy $P=Q$ vs $d_{\rm TV}(P,Q)>\epsilon$ can be performed from $\tilde{O}(|\Sigma|^{3/4(d+1)} \cdot n/\epsilon^2)$ samples, where $d$ is the maximum in-degree of the DAG and $\Sigma$ the domain of each variable of the Bayesian networks. If $P$ and $Q$ are defined on potentially different and potentially unknown trees, the sample complexity becomes $\tilde{O}(|\Sigma|^{4.5} n/\epsilon^2)$, whose dependence on $n, \epsilon$ is optimal up to logarithmic factors. Lastly, if $P$ and $Q$ are product distributions over $\{0,1\}^n$ and $Q$ is known, the sample complexity becomes $O(\sqrt{n}/\epsilon^2)$, which is optimal up to constant factors. |
Tasks | |
Published | 2016-12-09 |
URL | http://arxiv.org/abs/1612.03164v1 |
http://arxiv.org/pdf/1612.03164v1.pdf | |
PWC | https://paperswithcode.com/paper/square-hellinger-subadditivity-for-bayesian |
Repo | |
Framework | |
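A quick numeric sanity check of the inequality in its simplest special case: product distributions over $\{0,1\}^n$ (all parent sets empty, so each neighborhood marginal is a single coordinate). The joint squared Hellinger distance factorizes through the Bhattacharyya coefficient, and the bound $1 - \prod_v(1 - h_v) \le \sum_v h_v$ follows.

```python
# Verify square Hellinger subadditivity for product distributions.
import numpy as np

rng = np.random.default_rng(1)
p = rng.uniform(0.1, 0.9, size=8)   # Bernoulli parameters of P
q = rng.uniform(0.1, 0.9, size=8)   # Bernoulli parameters of Q

# per-coordinate squared Hellinger distances: h_v = 1 - BC(P_v, Q_v)
h2 = 1 - (np.sqrt(p * q) + np.sqrt((1 - p) * (1 - q)))

# for products, the joint Bhattacharyya coefficient multiplies:
H2_joint = 1 - np.prod(1 - h2)

assert H2_joint <= h2.sum() + 1e-12  # the subadditivity bound
print(H2_joint, h2.sum())
```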
Distributed Multi-Task Learning with Shared Representation
Title | Distributed Multi-Task Learning with Shared Representation |
Authors | Jialei Wang, Mladen Kolar, Nathan Srebro |
Abstract | We study the problem of distributed multi-task learning with shared representation, where each machine aims to learn a separate but related task in an unknown shared low-dimensional subspace, i.e., when the predictor matrix has low rank. We consider a setting where each task is handled by a different machine, with samples for the task available locally on that machine, and study communication-efficient methods for exploiting the shared structure. |
Tasks | Multi-Task Learning |
Published | 2016-03-07 |
URL | http://arxiv.org/abs/1603.02185v1 |
http://arxiv.org/pdf/1603.02185v1.pdf | |
PWC | https://paperswithcode.com/paper/distributed-multi-task-learning-with-shared |
Repo | |
Framework | |
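A centralized sketch of the shared low-rank structure, not the paper's communication-efficient protocol: proximal gradient descent on the task-stacked predictor matrix with a trace-norm penalty, whose proximal operator is singular-value soft-thresholding. All problem sizes are toy assumptions.

```python
# Trace-norm regularized multi-task regression via proximal gradient.
import numpy as np

rng = np.random.default_rng(2)
T, d, m = 5, 20, 50                      # tasks, features, samples/task
X = rng.normal(size=(T, m, d))
W_true = rng.normal(size=(d, 2)) @ rng.normal(size=(2, T))  # rank-2 truth
y = np.einsum('tmd,dt->tm', X, W_true) + 0.1 * rng.normal(size=(T, m))

W, step, tau = np.zeros((d, T)), 1e-3, 0.5
for _ in range(500):
    # per-task least-squares gradients, stacked into a d x T matrix
    grad = np.stack([X[t].T @ (X[t] @ W[:, t] - y[t]) for t in range(T)], 1)
    # singular-value soft-thresholding = prox of the trace norm
    U, s, Vt = np.linalg.svd(W - step * grad, full_matrices=False)
    W = U @ np.diag(np.maximum(s - step * tau, 0)) @ Vt
print(np.linalg.matrix_rank(W, tol=1e-3))  # typically recovers low rank
```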
Retrieving challenging vessel connections in retinal images by line co-occurrence statistics
Title | Retrieving challenging vessel connections in retinal images by line co-occurrence statistics |
Authors | Samaneh Abbasi-Sureshjani, Jiong Zhang, Remco Duits, Bart ter Haar Romeny |
Abstract | Natural images often contain curvilinear structures, which might be disconnected or partly occluded. Recovering the missing connections of disconnected structures is an open issue and needs appropriate geometric reasoning. We propose to find line co-occurrence statistics from the centerlines of blood vessels in retinal images and show their remarkable similarity to a well-known probabilistic model for the connectivity pattern in the primary visual cortex. Furthermore, the probabilistic model is trained from the data via these statistics and used for automated grouping of interrupted vessels in a spectral-clustering-based approach. Several challenging image patches are investigated around junction points, where successful results indicate the perfect match of the trained model to the profiles of blood vessels in retinal images. Comparisons among statistical models obtained from different datasets also reveal their high similarity, i.e., they are largely independent of the dataset. Moreover, the statistical model is best approximated by the symmetrized extension of the probabilistic model on the projective line bundle, with a least-squares error smaller than 2%. This indicates that the direction process on the projective line bundle is a good continuation model for vessels in retinal images. |
Tasks | |
Published | 2016-10-20 |
URL | http://arxiv.org/abs/1610.06368v1 |
http://arxiv.org/pdf/1610.06368v1.pdf | |
PWC | https://paperswithcode.com/paper/retrieving-challenging-vessel-connections-in |
Repo | |
Framework | |
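A toy version of the statistics-collection step: given oriented centerline elements (x, y, θ), histogram the relative position and orientation of every pair, expressed in the frame of the reference element. The bin sizes and synthetic points are assumptions; the paper gathers these counts from real vessel centerlines.

```python
# Collect a (dx, dy, dtheta) line co-occurrence histogram (toy data).
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0, 64, size=(200, 2))       # element positions
ang = rng.uniform(0, np.pi, size=200)         # element orientations

hist = np.zeros((9, 9, 8))                    # (dx, dy, dtheta) bins
for i in range(len(pts)):
    c, s = np.cos(-ang[i]), np.sin(-ang[i])
    R = np.array([[c, -s], [s, c]])           # rotate into element i's frame
    rel = (pts - pts[i]) @ R.T
    dth = (ang - ang[i]) % np.pi
    keep = (np.abs(rel) < 8).all(axis=1)
    keep[i] = False                           # drop the self-pair
    ix = np.digitize(rel[keep, 0], np.linspace(-8, 8, 10)) - 1
    iy = np.digitize(rel[keep, 1], np.linspace(-8, 8, 10)) - 1
    it = np.minimum((dth[keep] / (np.pi / 8)).astype(int), 7)
    np.add.at(hist, (ix.clip(0, 8), iy.clip(0, 8), it), 1.0)
hist /= hist.sum()                            # empirical co-occurrence kernel
```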
Coarse-to-Fine Segmentation With Shape-Tailored Scale Spaces
Title | Coarse-to-Fine Segmentation With Shape-Tailored Scale Spaces |
Authors | Ganesh Sundaramoorthi, Naeemullah Khan, Byung-Woo Hong |
Abstract | We formulate a general energy and method for segmentation that is designed to prefer segmenting the coarse structure of the data over the fine structure, without smoothing across region boundaries. The energy is formulated by considering data terms at a continuum of scales from the scale space computed by the heat equation within regions, and integrating these terms over all time. We show that the energy may be approximately optimized without solving for the entire scale space, instead solving time-independent linear equations at the native scale of the image, which makes the method computationally feasible. We provide a multi-region scheme and apply our method to motion segmentation. Experiments on a benchmark dataset show that our method is less sensitive to clutter and other undesirable fine-scale structure, and leads to better performance in motion segmentation. |
Tasks | Motion Segmentation |
Published | 2016-03-24 |
URL | http://arxiv.org/abs/1603.07745v1 |
http://arxiv.org/pdf/1603.07745v1.pdf | |
PWC | https://paperswithcode.com/paper/coarse-to-fine-segmentation-with-shape |
Repo | |
Framework | |
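One way to see the "integrate over all scales via a time-independent linear equation" step: with exponential weighting in time, the integral of the heat evolution $e^{t\Delta}f$ collapses to a single screened-Poisson-type solve, $\int_0^\infty \sigma^{-1} e^{-t/\sigma}\, e^{t\Delta} f\, dt = (I - \sigma\Delta)^{-1} f$. The 1-D sketch below illustrates that identity; the paper's actual energy is richer.

```python
# Exponentially weighted scale-space average as one linear solve.
import numpy as np

n, sigma = 100, 4.0
# discrete 1-D Laplacian (Dirichlet boundaries)
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
f = np.zeros(n)
f[40:60] = 1.0                               # a crisp region indicator

# all scales at once: (I - sigma*L)^{-1} f, no scale space ever formed
u = np.linalg.solve(np.eye(n) - sigma * L, f)
```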
Visual Script and Language Identification
Title | Visual Script and Language Identification |
Authors | Anguelos Nicolaou, Andrew Bagdanov, Lluis Gomez-Bigorda, Dimosthenis Karatzas |
Abstract | In this paper we introduce a script identification method based on hand-crafted texture features and an artificial neural network. The proposed pipeline achieves near state-of-the-art performance for script identification of video-text and state-of-the-art performance on visual language identification of handwritten text. Beyond using the deep network as a classifier, using its intermediate activations as a learned metric yields remarkable results and allows discriminative models to be applied to unknown classes. Comparative experiments on video-text and text-in-the-wild datasets provide insights into the internals of the proposed deep network. |
Tasks | Language Identification |
Published | 2016-01-08 |
URL | http://arxiv.org/abs/1601.01885v1 |
http://arxiv.org/pdf/1601.01885v1.pdf | |
PWC | https://paperswithcode.com/paper/visual-script-and-language-identification |
Repo | |
Framework | |
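A sketch of the "intermediate activations as a learned metric" idea: embed samples with a fixed feature extractor (a stand-in random projection here, in place of the trained network) and classify unseen classes by cosine similarity to class centroids, with no retrained classifier head. Class names and data are synthetic assumptions.

```python
# Nearest-centroid classification in a fixed embedding space.
import numpy as np

rng = np.random.default_rng(4)
Wp = rng.normal(size=(64, 16))                 # stand-in for the CNN
extract = lambda x: np.maximum(x @ Wp, 0.0)    # "penultimate activations"

means = {s: 3 * rng.normal(size=64) for s in ("latin", "cjk", "arabic")}
cent = {s: extract(mu + rng.normal(size=(10, 64))).mean(0)
        for s, mu in means.items()}
cent = {s: c / np.linalg.norm(c) for s, c in cent.items()}

q = extract(means["cjk"] + rng.normal(size=64))
q /= np.linalg.norm(q)
print(max(cent, key=lambda s: cent[s] @ q))    # cosine-nearest centroid
```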
Very Deep Convolutional Neural Networks for Robust Speech Recognition
Title | Very Deep Convolutional Neural Networks for Robust Speech Recognition |
Authors | Yanmin Qian, Philip C Woodland |
Abstract | This paper describes the extension and optimization of our previous work on very deep convolutional neural networks (CNNs) for effective recognition of noisy speech in the Aurora 4 task. The appropriate number of convolutional layers, the filter sizes, pooling operations, and input feature maps are all modified: the filter and pooling sizes are reduced and the dimensions of the input feature maps are extended to allow adding more convolutional layers. Furthermore, appropriate input padding and input feature map selection strategies are developed. In addition, an adaptation framework using joint training of the very deep CNN with auxiliary i-vector and fMLLR features is developed. These modifications give substantial word error rate reductions over the standard CNN baseline. Finally, the very deep CNN is combined with an LSTM-RNN acoustic model, and it is shown that state-level weighted log-likelihood score combination in a joint acoustic model decoding scheme is very effective. On the Aurora 4 task, the very deep CNN achieves a WER of 8.81%, which improves to 7.99% with auxiliary-feature joint training and to 7.09% with LSTM-RNN joint decoding. |
Tasks | Robust Speech Recognition, Speech Recognition |
Published | 2016-10-02 |
URL | http://arxiv.org/abs/1610.00277v1 |
http://arxiv.org/pdf/1610.00277v1.pdf | |
PWC | https://paperswithcode.com/paper/very-deep-convolutional-neural-networks-for-1 |
Repo | |
Framework | |
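A generic sketch of the "very deep, small filters" design on log-Mel input feature maps with static/Δ/ΔΔ channels; the layer counts, filter sizes, and the 2000-senone output here are assumptions, not the Aurora 4 configuration.

```python
# Many 3x3 convolutions with sparse pooling; padding preserves map size
# so depth is not eaten by spatial shrinkage.
import torch
import torch.nn as nn

def block(cin, cout, n=2):
    layers = []
    for i in range(n):
        layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                   nn.ReLU()]
    return layers + [nn.MaxPool2d(2)]

vdcnn = nn.Sequential(*block(3, 64), *block(64, 128), *block(128, 256),
                      nn.Flatten(), nn.LazyLinear(2000))  # senone targets

x = torch.randn(8, 3, 40, 40)   # (batch, static+delta+delta2, mels, frames)
print(vdcnn(x).shape)           # torch.Size([8, 2000])
```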
EPOpt: Learning Robust Neural Network Policies Using Model Ensembles
Title | EPOpt: Learning Robust Neural Network Policies Using Model Ensembles |
Authors | Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, Sergey Levine |
Abstract | Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods, where the real-world target domain is approximated using a simulated source domain, provide an avenue to tackle these challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, progressively making it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning/adaptation. |
Tasks | Domain Adaptation |
Published | 2016-10-05 |
URL | http://arxiv.org/abs/1610.01283v4 |
http://arxiv.org/pdf/1610.01283v4.pdf | |
PWC | https://paperswithcode.com/paper/epopt-learning-robust-neural-network-policies |
Repo | |
Framework | |
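A skeleton of the EPOpt-ε loop as the abstract describes it: sample dynamics from the source-domain ensemble, roll out the current policy, and update only on the worst ε-fraction of trajectories (a CVaR-style objective). The environment, scalar "policy", and update rule below are stubs standing in for a simulator and a policy-gradient step.

```python
# EPOpt-epsilon skeleton: train on the worst eps-fraction of the ensemble.
import numpy as np

rng = np.random.default_rng(5)
eps, n_models = 0.1, 100

def sample_model():                  # e.g., randomized mass/friction
    return {"mass": rng.lognormal(0.0, 0.3)}

def episode(policy, model):          # stub: returns the episode return
    return -abs(model["mass"] - policy)

policy = 0.5
for it in range(50):
    models = [sample_model() for _ in range(n_models)]
    rets = np.array([episode(policy, m) for m in models])
    cutoff = np.quantile(rets, eps)  # worst eps-percentile of returns
    worst = [m for m, r in zip(models, rets) if r <= cutoff]
    # stub update: fit the worst-case models (real EPOpt runs a policy
    # gradient step on trajectories from these worst-case rollouts)
    policy += 0.1 * np.mean([m["mass"] - policy for m in worst])
```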
Hyperparameter Transfer Learning through Surrogate Alignment for Efficient Deep Neural Network Training
Title | Hyperparameter Transfer Learning through Surrogate Alignment for Efficient Deep Neural Network Training |
Authors | Ilija Ilievski, Jiashi Feng |
Abstract | Recently, several optimization methods have been successfully applied to the hyperparameter optimization of deep neural networks (DNNs). The methods work by modeling the joint distribution of hyperparameter values and corresponding error. These methods become less practical when applied to modern DNNs whose training may take a few days, since one cannot collect sufficient observations to accurately model the distribution. To address this challenging issue, we propose a method that learns to transfer optimal hyperparameter values for a small source dataset to hyperparameter values with comparable performance on a dataset of interest. As opposed to existing transfer learning methods, our proposed method does not use hand-designed features. Instead, it uses surrogates to model the hyperparameter-error distributions of the two datasets and trains a neural network to learn the transfer function. Extensive experiments on three computer-vision benchmark datasets clearly demonstrate the efficiency of our method. |
Tasks | Hyperparameter Optimization, Transfer Learning |
Published | 2016-07-31 |
URL | http://arxiv.org/abs/1608.00218v1 |
http://arxiv.org/pdf/1608.00218v1.pdf | |
PWC | https://paperswithcode.com/paper/hyperparameter-transfer-learning-through |
Repo | |
Framework | |
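A loose sketch of the surrogate idea only (not the authors' alignment network): fit a cheap surrogate to (hyperparameter, error) observations from a small source dataset and use its minimizer to warm-start search on the expensive target dataset. The toy error function and learning-rate range are assumptions.

```python
# Surrogate over (hyperparameter, error) pairs; minimizer as warm start.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(6)
src_err = lambda lr: (np.log10(lr) + 3) ** 2 + 0.05 * rng.normal()  # toy

lrs = 10.0 ** rng.uniform(-5, -1, size=20)      # cheap source-side runs
gp = GaussianProcessRegressor().fit(np.log10(lrs)[:, None],
                                    [src_err(l) for l in lrs])

grid = np.linspace(-5, -1, 200)[:, None]
warm_start_lr = 10 ** grid[np.argmin(gp.predict(grid))][0]
print(warm_start_lr)   # ~1e-3; handed to the target-dataset search
```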
Top-push Video-based Person Re-identification
Title | Top-push Video-based Person Re-identification |
Authors | Jinjie You, Ancong Wu, Xiang Li, Wei-Shi Zheng |
Abstract | Most existing person re-identification (re-id) models focus on matching still person images across disjoint camera views. Since only limited information can be exploited from still images, it is hard (if not impossible) to overcome the problems of occlusion, pose and camera-view change, and lighting variation. In comparison, video-based re-id methods can utilize extra space-time information, which contains much richer cues for matching and can help overcome these problems. However, we find that with video-based representations, some inter-class differences can be much more obscure than with still-image-based representations, because different people may not only have similar appearance but also similar motions and actions, which are hard to align. To solve this problem, we propose a top-push distance learning model (TDL), in which we integrate a top-push constraint for matching video features of persons. The top-push constraint enforces optimization of top-rank matching in re-id, making the matching model more effective at selecting discriminative features to distinguish different persons. Our experiments show that the proposed video-based re-id framework outperforms the state-of-the-art video-based re-id methods. |
Tasks | Person Re-Identification, Video-Based Person Re-Identification |
Published | 2016-04-29 |
URL | http://arxiv.org/abs/1604.08683v2 |
http://arxiv.org/pdf/1604.08683v2.pdf | |
PWC | https://paperswithcode.com/paper/top-push-video-based-person-re-identification |
Repo | |
Framework | |
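The top-push constraint in isolation: for each matched pair, the distance must beat the single closest wrong-identity match by a margin ρ — a hinge on the top-ranked impostor rather than on all impostors. Feature extraction and the metric-learning loop around it are omitted; this is just the loss term, with made-up inputs.

```python
# Top-push hinge loss over a pairwise distance matrix.
import numpy as np

def top_push_loss(D, ids, rho=1.0):
    """D: (n, n) pairwise distances of video features; ids: identity labels."""
    loss = 0.0
    for i in range(len(ids)):
        pos = (ids == ids[i]) & (np.arange(len(ids)) != i)
        neg = ids != ids[i]
        if pos.any() and neg.any():
            # hardest impostor = minimum-distance negative (the "top push")
            loss += np.maximum(0, D[i, pos] - D[i, neg].min() + rho).sum()
    return loss

ids = np.array([0, 0, 1, 1])
D = np.random.rand(4, 4); D = (D + D.T) / 2   # symmetric toy distances
print(top_push_loss(D, ids))
```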
Toward Implicit Sample Noise Modeling: Deviation-driven Matrix Factorization
Title | Toward Implicit Sample Noise Modeling: Deviation-driven Matrix Factorization |
Authors | Guang-He Lee, Shao-Wen Yang, Shou-De Lin |
Abstract | The objective function of a matrix factorization model usually aims to minimize the average of a regression error contributed by each element. However, given the existence of stochastic noise, the implicit deviations of sample data from their true values are almost surely diverse, which makes each data point not equally suitable for fitting a model. In this case, simply averaging the cost over all data in the objective function is not ideal. Intuitively, we would like to emphasize the reliable instances (i.e., those containing less noise) while training a model. Motivated by this observation, we derive our formulation from a theoretical framework for optimal weighting under a heteroscedastic noise distribution. Specifically, by modeling and learning the deviation of the data, we design a novel matrix factorization model. Our model has two advantages. First, it jointly learns the deviation and conducts dynamic reweighting of instances, allowing the model to converge to a better solution. Second, during learning, highly deviated instances are assigned lower weights, which leads to faster convergence since the model does not need to overfit the noise. Experiments are conducted on clean recommendation and noisy sensor datasets to test the effectiveness of the model in various scenarios. The results show that our model outperforms the state-of-the-art factorization and deep learning models in both accuracy and efficiency. |
Tasks | |
Published | 2016-10-28 |
URL | http://arxiv.org/abs/1610.09274v1 |
http://arxiv.org/pdf/1610.09274v1.pdf | |
PWC | https://paperswithcode.com/paper/toward-implicit-sample-noise-modeling |
Repo | |
Framework | |
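A minimal sketch in the spirit of deviation-driven reweighting (the paper's learned-deviation model differs in detail): alternate weighted factorization updates with re-estimating each entry's deviation from its residual, so noisier entries receive lower weight. Sizes, step size, and the reweighting rule are assumptions.

```python
# Reweighted matrix factorization under heteroscedastic noise (toy).
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 30, 30, 3
R = rng.normal(size=(n, k)) @ rng.normal(size=(k, m))
R += rng.normal(size=(n, m)) * rng.uniform(0.01, 1.0, size=(n, m))  # noise

U, V = rng.normal(size=(n, k)), rng.normal(size=(m, k))
W = np.ones((n, m))                  # per-entry weights, updated below
for _ in range(200):
    E = R - U @ V.T                  # residuals
    U += 0.005 * (W * E) @ V         # weighted gradient steps
    V += 0.005 * (W * E).T @ U
    W = 1.0 / (E ** 2 + 0.1)         # downweight high-deviation entries
print(np.abs(R - U @ V.T).mean())
```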
GPU-Based Fuzzy C-Means Clustering Algorithm for Image Segmentation
Title | GPU-Based Fuzzy C-Means Clustering Algorithm for Image Segmentation |
Authors | Mishal Almazrooie, Mogana Vadiveloo, Rosni Abdullah |
Abstract | In this paper, a fast and practical GPU-based implementation of the Fuzzy C-Means (FCM) clustering algorithm for image segmentation is proposed. First, an extensive analysis is conducted to study the dependencies among the image pixels in the algorithm for parallelization. The proposed GPU-based FCM has been tested on a simulated digital-brain dataset to segment white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) soft-tissue regions. The execution time of the sequential FCM is 519 seconds for an image dataset of size 1 MB, while the proposed GPU-based FCM requires only 2.33 seconds for an image dataset of similar size. An estimated 245-fold speedup is measured for a data size of 40 KB on a CUDA device with 448 processors. |
Tasks | Semantic Segmentation |
Published | 2016-01-01 |
URL | http://arxiv.org/abs/1601.00072v3 |
http://arxiv.org/pdf/1601.00072v3.pdf | |
PWC | https://paperswithcode.com/paper/gpu-based-fuzzy-c-means-clustering-algorithm |
Repo | |
Framework | |
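The fuzzy c-means updates the paper parallelizes, written vectorized in NumPy rather than CUDA; the data-parallel structure the GPU kernels exploit is visible in that every pixel's membership row is independent given the current centers. The synthetic three-mode intensity data stands in for WM/GM/CSF.

```python
# Vectorized fuzzy c-means on 1-D pixel intensities.
import numpy as np

def fcm(x, c=3, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=c, replace=False).astype(float)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9  # pixel-to-center
        u = 1.0 / (d ** (2 / (m - 1)))                    # unnormalized u_ik
        u /= u.sum(axis=1, keepdims=True)                 # memberships
        um = u ** m
        centers = (um * x[:, None]).sum(0) / um.sum(0)    # center update
    return centers, u

pixels = np.concatenate([np.random.normal(mu, 5, 2000) for mu in (40, 110, 180)])
centers, u = fcm(pixels)
print(np.sort(centers))   # ≈ the three synthetic intensity modes
```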
Accelerate Stochastic Subgradient Method by Leveraging Local Growth Condition
Title | Accelerate Stochastic Subgradient Method by Leveraging Local Growth Condition |
Authors | Yi Xu, Qihang Lin, Tianbao Yang |
Abstract | In this paper, a new theory is developed for first-order stochastic convex optimization, showing that the global convergence rate is sufficiently quantified by a local growth rate of the objective function in a neighborhood of the optimal solutions. In particular, if the objective function $F(\mathbf{w})$ in the $\epsilon$-sublevel set grows as fast as $\|\mathbf{w} - \mathbf{w}_*\|_2^{1/\theta}$, where $\mathbf{w}_*$ represents the closest optimal solution to $\mathbf{w}$ and $\theta\in(0,1]$ quantifies the local growth rate, the iteration complexity of first-order stochastic optimization for achieving an $\epsilon$-optimal solution can be $\widetilde{O}(1/\epsilon^{2(1-\theta)})$, which is optimal at most up to a logarithmic factor. To achieve this faster global convergence, we develop two different accelerated stochastic subgradient methods that iteratively solve the original problem approximately in a local region around a historical solution, with the size of the local region gradually decreasing as the solution approaches the optimal set. Besides the theoretical improvements, this work also includes new contributions toward making the proposed algorithms practical: (i) we present practical variants of the accelerated stochastic subgradient methods that can run without knowledge of the multiplicative growth constant or even the growth rate $\theta$; (ii) we consider a broad family of problems in machine learning to demonstrate that the proposed algorithms enjoy faster convergence than the traditional stochastic subgradient method. We also characterize the complexity of the proposed algorithms for ensuring the gradient is small without the smoothness assumption. |
Tasks | Stochastic Optimization |
Published | 2016-07-04 |
URL | https://arxiv.org/abs/1607.01027v4 |
https://arxiv.org/pdf/1607.01027v4.pdf | |
PWC | https://paperswithcode.com/paper/accelerated-stochastic-subgradient-methods |
Repo | |
Framework | |
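A skeleton of the restarting scheme the abstract outlines: run plain stochastic subgradient in stages, each confined to a shrinking ball around the previous stage's output with a geometrically decreasing step size. The toy objective $|w|$ has growth exponent $\theta = 1$, where restarts give fast decay; all constants are assumptions.

```python
# Staged stochastic subgradient with shrinking local region and step.
import numpy as np

rng = np.random.default_rng(8)
w, radius, step = 5.0, 8.0, 1.0
for stage in range(10):
    center = w                                  # historical solution
    for t in range(200):
        g = np.sign(w) + 0.1 * rng.normal()     # stochastic subgradient
        w = np.clip(w - step * g, center - radius, center + radius)
    radius *= 0.5                               # shrink the local region
    step *= 0.5                                 # and the step size
    print(stage, abs(w))
```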
Metaheuristic Algorithms for Convolution Neural Network
Title | Metaheuristic Algorithms for Convolution Neural Network |
Authors | L. M. Rasdi Rere, Mohamad Ivan Fanany, Aniati Murni Arymurthy |
Abstract | A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve optimization problems across science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolutional neural networks (CNNs), a famous deep learning method, are still rarely investigated. Deep learning is a class of machine learning techniques that aims to move closer to the goal of artificial intelligence: creating a machine that can perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performance of these metaheuristic methods in optimizing CNNs on the MNIST and CIFAR classification datasets was evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods increase computation time, they also improve accuracy (by up to 7.14 percent). |
Tasks | |
Published | 2016-10-06 |
URL | http://arxiv.org/abs/1610.01925v1 |
http://arxiv.org/pdf/1610.01925v1.pdf | |
PWC | https://paperswithcode.com/paper/metaheuristic-algorithms-for-convolution |
Repo | |
Framework | |
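A bare-bones version of simulated annealing, the first of the three metaheuristics considered; in the paper the energy would be a CNN's validation error, while here a toy quadratic stands in. The perturbation scale and cooling schedule are assumptions.

```python
# Simulated annealing over a parameter vector (toy energy function).
import numpy as np

rng = np.random.default_rng(9)
energy = lambda w: ((w - 1.0) ** 2).sum()   # stand-in for validation error

w = rng.normal(size=10)
E, T = energy(w), 1.0
for step in range(2000):
    cand = w + 0.1 * rng.normal(size=10)    # perturb current solution
    dE = energy(cand) - E
    if dE < 0 or rng.random() < np.exp(-dE / T):
        w, E = cand, E + dE                 # accept (always if better)
    T *= 0.998                              # cool down
print(E)
```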