Paper Group ANR 895
Learning View Priors for Single-view 3D Reconstruction
Title | Learning View Priors for Single-view 3D Reconstruction |
Authors | Hiroharu Kato, Tatsuya Harada |
Abstract | There is some ambiguity in the 3D shape of an object when the number of observed views is small. Because of this ambiguity, although a 3D object reconstructor can be trained using a single view or a few views per object, reconstructed shapes only fit the observed views and appear incorrect from the unobserved viewpoints. To reconstruct shapes that look reasonable from any viewpoint, we propose to train a discriminator that learns prior knowledge regarding possible views. The discriminator is trained to distinguish the reconstructed views of the observed viewpoints from those of the unobserved viewpoints. The reconstructor is trained to correct unobserved views by fooling the discriminator. Our method outperforms current state-of-the-art methods on both synthetic and natural image datasets; this validates the effectiveness of our method. |
Tasks | 3D Reconstruction, Single-View 3D Reconstruction |
Published | 2018-11-26 |
URL | http://arxiv.org/abs/1811.10719v2 |
PDF | http://arxiv.org/pdf/1811.10719v2.pdf |
PWC | https://paperswithcode.com/paper/learning-view-priors-for-single-view-3d |
Repo | |
Framework | |
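A minimal PyTorch sketch of the adversarial view-prior idea described in the abstract: a discriminator scores rendered views by whether they come from observed viewpoints, and the reconstructor is penalized when its renders at unobserved viewpoints fail to fool it. The network sizes, the single-channel render inputs, and the dummy tensors are assumptions for illustration, not the authors' implementation.

```python
# Sketch (not the authors' code) of the view-prior losses, with hypothetical render tensors.
import torch
import torch.nn as nn

class ViewDiscriminator(nn.Module):
    """Judges whether a rendered view comes from an observed viewpoint."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1))

    def forward(self, x):
        return self.net(x)

def discriminator_loss(d, render_observed, render_unobserved):
    # D should give high scores to observed-view renders, low scores to unobserved ones.
    bce = nn.BCEWithLogitsLoss()
    real = bce(d(render_observed), torch.ones(render_observed.size(0), 1))
    fake = bce(d(render_unobserved.detach()), torch.zeros(render_unobserved.size(0), 1))
    return real + fake

def reconstructor_view_prior_loss(d, render_unobserved):
    # The reconstructor is rewarded when its unobserved-view renders fool D.
    bce = nn.BCEWithLogitsLoss()
    return bce(d(render_unobserved), torch.ones(render_unobserved.size(0), 1))

if __name__ == "__main__":
    d = ViewDiscriminator()
    obs = torch.rand(4, 1, 64, 64)    # stand-in renders at the training viewpoints
    unobs = torch.rand(4, 1, 64, 64)  # stand-in renders at randomly sampled viewpoints
    print(discriminator_loss(d, obs, unobs).item(),
          reconstructor_view_prior_loss(d, unobs).item())
```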
Accelerated Labeling of Discrete Abstractions for Autonomous Driving Subject to LTL Specifications
Title | Accelerated Labeling of Discrete Abstractions for Autonomous Driving Subject to LTL Specifications |
Authors | Brian Paden, Peng Liu, Schuyler Cullen |
Abstract | Linear temporal logic and automaton-based run-time verification provide a powerful framework for designing task and motion planning algorithms for autonomous agents. The drawback to this approach is the computational cost of operating on high resolution discrete abstractions of continuous dynamical systems. In particular, the computational bottleneck that arises is converting perceived environment variables into a labeling function on the states of a Kripke structure or analogously the transitions of a labeled transition system. This paper presents the design and empirical evaluation of an approach to constructing the labeling function that exposes a large degree of parallelism in the operation as well as efficient memory access patterns. The approach is implemented on a commodity GPU and empirical results demonstrate the efficacy of the labeling technique for real-time planning and decision-making. |
Tasks | Autonomous Driving, Decision Making, Motion Planning |
Published | 2018-10-05 |
URL | http://arxiv.org/abs/1810.02612v2 |
PDF | http://arxiv.org/pdf/1810.02612v2.pdf |
PWC | https://paperswithcode.com/paper/accelerated-labeling-of-discrete-abstractions |
Repo | |
Framework | |
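The abstract is about building the labeling function (perceived environment variables mapped to atomic propositions over discrete states) in a data-parallel way on a GPU. The toy NumPy sketch below shows the shape of that computation on the CPU: every grid state is tested against every perceived region in one vectorized pass. The region names, the axis-aligned-box representation, and the grid resolution are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

def label_grid(xs, ys, regions):
    """Return a dict: atomic proposition -> boolean mask over the (len(ys), len(xs)) grid."""
    X, Y = np.meshgrid(xs, ys)                       # all state coordinates at once
    labels = {}
    for name, (xmin, ymin, xmax, ymax) in regions.items():
        labels[name] = (X >= xmin) & (X <= xmax) & (Y >= ymin) & (Y <= ymax)
    return labels

xs = np.linspace(0.0, 50.0, 501)                     # 0.1 m resolution along the road
ys = np.linspace(-5.0, 5.0, 101)
regions = {"crosswalk": (20.0, -5.0, 24.0, 5.0),     # hypothetical perceived features
           "oncoming_lane": (0.0, 0.0, 50.0, 5.0)}
labels = label_grid(xs, ys, regions)
print({k: int(v.sum()) for k, v in labels.items()})  # number of states where each AP holds
```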
Methods for Segmentation and Classification of Digital Microscopy Tissue Images
Title | Methods for Segmentation and Classification of Digital Microscopy Tissue Images |
Authors | Quoc Dang Vu, Simon Graham, Minh Nguyen Nhat To, Muhammad Shaban, Talha Qaiser, Navid Alemi Koohbanani, Syed Ali Khurram, Tahsin Kurc, Keyvan Farahani, Tianhao Zhao, Rajarsi Gupta, Jin Tae Kwak, Nasir Rajpoot, Joel Saltz |
Abstract | High-resolution microscopy images of tissue specimens provide detailed information about the morphology of normal and diseased tissue. Image analysis of tissue morphology can help cancer researchers develop a better understanding of cancer biology. Segmentation of nuclei and classification of tissue images are two common tasks in tissue image analysis. Development of accurate and efficient algorithms for these tasks is a challenging problem because of the complexity of tissue morphology and tumor heterogeneity. In this paper we present two computer algorithms: one designed for segmentation of nuclei and the other for classification of whole slide tissue images. The segmentation algorithm implements a multiscale deep residual aggregation network to accurately segment nuclear material and then separate clumped nuclei into individual nuclei. The classification algorithm initially carries out patch-level classification via a deep learning method; patch-level statistical and morphological features are then used as input to a random forest regression model for whole slide image classification. The segmentation and classification algorithms were evaluated in the MICCAI 2017 Digital Pathology challenge. The segmentation algorithm achieved an accuracy score of 0.78. The classification algorithm achieved an accuracy score of 0.81. |
Tasks | Image Classification |
Published | 2018-10-31 |
URL | http://arxiv.org/abs/1810.13230v2 |
PDF | http://arxiv.org/pdf/1810.13230v2.pdf |
PWC | https://paperswithcode.com/paper/methods-for-segmentation-and-classification |
Repo | |
Framework | |
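As a rough illustration of the paper's second stage (the first stage, patch-level deep learning, is omitted), the scikit-learn sketch below aggregates per-patch class probabilities into slide-level statistics and fits a random forest regression model on them. The specific summary features and the synthetic data are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def slide_features(patch_probs):
    """Summary statistics over the per-patch class probabilities of one slide."""
    return np.array([patch_probs.mean(), patch_probs.std(),
                     np.percentile(patch_probs, 90), (patch_probs > 0.5).mean()])

# Fake training data: 40 slides, each with a few hundred patch-level probabilities.
X = np.stack([slide_features(rng.random(rng.integers(200, 400))) for _ in range(40)])
y = rng.integers(0, 2, size=40).astype(float)        # stand-in slide-level labels

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X[:3]))
```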
Compressive Regularized Discriminant Analysis of High-Dimensional Data with Applications to Microarray Studies
Title | Compressive Regularized Discriminant Analysis of High-Dimensional Data with Applications to Microarray Studies |
Authors | Muhammad Naveed Tabassum, Esa Ollila |
Abstract | We propose a modification of linear discriminant analysis, referred to as compressive regularized discriminant analysis (CRDA), for the analysis of high-dimensional datasets. CRDA is specially designed for feature elimination and can be used as a gene selection method in microarray studies. CRDA borrows ideas from $\ell_{q,1}$ norm minimization algorithms in the multiple measurement vectors (MMV) model and utilizes joint-sparsity-promoting hard thresholding for feature elimination. A regularization of the sample covariance matrix is also needed, as we consider the challenging scenario where the number of features (variables) is comparable to or exceeds the sample size of the training dataset. A simulation study and four examples of real-life microarray datasets evaluate the performance of CRDA-based classifiers. Overall, the proposed method gives fewer misclassification errors than its competitors, while at the same time achieving accurate feature selection. |
Tasks | Feature Selection |
Published | 2018-04-11 |
URL | http://arxiv.org/abs/1804.03981v1 |
PDF | http://arxiv.org/pdf/1804.03981v1.pdf |
PWC | https://paperswithcode.com/paper/compressive-regularized-discriminant-analysis |
Repo | |
Framework | |
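A small NumPy sketch of the joint-sparsity-promoting hard-thresholding step mentioned in the abstract: the rows of a coefficient matrix (one row per feature, one column per class) are ranked by their row-wise $\ell_q$ norm and all but the top K rows are zeroed, which is what performs feature (gene) elimination. K and q below are arbitrary, and the covariance regularization and the full CRDA classifier are not shown.

```python
import numpy as np

def hard_threshold_rows(B, K, q=2):
    """Keep the K rows of B with the largest l_q norms; zero the rest (feature elimination)."""
    row_norms = np.linalg.norm(B, ord=q, axis=1)
    keep = np.argsort(row_norms)[-K:]                  # indices of the K strongest features
    B_sparse = np.zeros_like(B)
    B_sparse[keep] = B[keep]
    return B_sparse, np.sort(keep)

B = np.random.default_rng(1).normal(size=(1000, 4))    # p = 1000 features, 4 classes
B_sparse, selected = hard_threshold_rows(B, K=50)
print(selected[:10], np.count_nonzero(B_sparse.any(axis=1)))
```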
Fusing Saliency Maps with Region Proposals for Unsupervised Object Localization
Title | Fusing Saliency Maps with Region Proposals for Unsupervised Object Localization |
Authors | Hakan Karaoguz, Patric Jensfelt |
Abstract | In this paper we address the problem of unsupervised localization of objects in single images. Compared to previous state-of-the-art methods, our method is fully unsupervised in the sense that there is no prior instance-level or category-level information about the image. Furthermore, we treat each image individually and do not rely on any neighboring image similarity. We employ deep-learning based generation of saliency maps and region proposals to tackle this problem. First, salient regions in the image are determined using an encoder/decoder architecture. The resulting saliency map is matched with region proposals from a class-agnostic region proposal network to roughly localize the candidate object regions. These regions are further refined based on the overlap and similarity ratios. Our experimental evaluations on a benchmark dataset show that the method gets close to current state-of-the-art methods in terms of localization accuracy, even though these make use of multiple frames. Furthermore, we created a more challenging and realistic dataset with multiple object categories and varying viewpoint and illumination conditions for evaluating the method’s performance in real-world scenarios. |
Tasks | Object Localization, Unsupervised Object Localization |
Published | 2018-04-11 |
URL | http://arxiv.org/abs/1804.03905v1 |
PDF | http://arxiv.org/pdf/1804.03905v1.pdf |
PWC | https://paperswithcode.com/paper/fusing-saliency-maps-with-region-proposals |
Repo | |
Framework | |
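An illustrative sketch of the fusion step the abstract describes: each class-agnostic proposal box is scored by how much of it falls inside the salient region, and low-overlap boxes are discarded. The 0.5 threshold, the binary saliency map, and the (x0, y0, x1, y1) box format are assumptions; the paper's additional similarity-based refinement is not reproduced.

```python
import numpy as np

def filter_proposals(saliency, boxes, min_overlap=0.5):
    kept = []
    for (x0, y0, x1, y1) in boxes:
        region = saliency[y0:y1, x0:x1]
        overlap = region.mean() if region.size else 0.0   # fraction of the box that is salient
        if overlap >= min_overlap:
            kept.append(((x0, y0, x1, y1), overlap))
    return kept

saliency = np.zeros((100, 100))
saliency[30:70, 30:70] = 1.0                              # toy binary saliency map
boxes = [(25, 25, 75, 75), (0, 0, 20, 20), (40, 40, 90, 90)]
print(filter_proposals(saliency, boxes))
```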
DIMENSION: Dynamic MR Imaging with Both K-space and Spatial Prior Knowledge Obtained via Multi-Supervised Network Training
Title | DIMENSION: Dynamic MR Imaging with Both K-space and Spatial Prior Knowledge Obtained via Multi-Supervised Network Training |
Authors | Shanshan Wang, Ziwen Ke, Huitao Cheng, Sen Jia, Leslie Ying, Hairong Zheng, Dong Liang |
Abstract | Dynamic MR image reconstruction from incomplete k-space data has generated great research interest due to its capability of reducing scan time. Nevertheless, the reconstruction problem remains challenging due to its ill-posed nature. Most existing methods either suffer from long iterative reconstruction times or exploit limited prior knowledge. This paper proposes a dynamic MR imaging method with both k-space and spatial prior knowledge integrated via multi-supervised network training, dubbed DIMENSION. Specifically, the DIMENSION architecture consists of a frequential prior network for updating the k-space with its network prediction and a spatial prior network for capturing image structures and details. Furthermore, a multi-supervised network training technique is developed to constrain the frequency-domain information and the reconstruction results at different levels. Comparisons with the classical k-t FOCUSS, k-t SLR, L+S and a state-of-the-art CNN-based method on in vivo datasets show that our method achieves improved reconstruction results in a shorter time. |
Tasks | Image Reconstruction |
Published | 2018-09-30 |
URL | http://arxiv.org/abs/1810.00302v4 |
PDF | http://arxiv.org/pdf/1810.00302v4.pdf |
PWC | https://paperswithcode.com/paper/dimension-dynamic-mr-imaging-with-both-k |
Repo | |
Framework | |
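The sketch below is not the DIMENSION network; it only illustrates the k-space data-consistency idea the abstract alludes to: wherever k-space was actually acquired, the prediction is overwritten with the measured samples before returning to image space, which can only move the estimate closer to the ground truth in the $\ell_2$ sense. The sampling pattern and the stand-in "network output" are assumptions.

```python
import numpy as np

def data_consistency(pred_image, kspace_sampled, mask):
    """mask is 1 at acquired k-space locations, 0 elsewhere."""
    k_pred = np.fft.fft2(pred_image)
    k_mixed = mask * kspace_sampled + (1 - mask) * k_pred   # trust measured data where it exists
    return np.fft.ifft2(k_mixed)

rng = np.random.default_rng(0)
truth = rng.random((64, 64))
mask = (rng.random((64, 64)) < 0.3).astype(float)           # 30% random sampling pattern
kspace_sampled = mask * np.fft.fft2(truth)
pred = rng.random((64, 64))                                  # stand-in for a network output
refined = data_consistency(pred, kspace_sampled, mask)
print(np.linalg.norm(refined - truth) < np.linalg.norm(pred - truth))
```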
Generalised Structural CNNs (SCNNs) for time series data with arbitrary graph topology
Title | Generalised Structural CNNs (SCNNs) for time series data with arbitrary graph topology |
Authors | Thomas Teh, Chaiyawan Auepanwiriyakul, John Alexander Harston, A. Aldo Faisal |
Abstract | Deep Learning methods, specifically convolutional neural networks (CNNs), have seen a lot of success in the domain of image-based data, where the data offers a clearly structured topology in the regular lattice of pixels. This 4-neighbourhood topological simplicity makes the application of convolutional masks straightforward for time series data, such as video applications, but many high-dimensional time series data are not organised in regular lattices, and instead values may have adjacency relationships with non-trivial topologies, such as small-world networks or trees. In our application case, human kinematics, it is currently unclear how to generalise convolutional kernels in a principled manner. Therefore we define and implement here a framework for general graph-structured CNNs for time series analysis. Our algorithm automatically builds convolutional layers using the specified adjacency matrix of the data dimensions and convolutional masks that scale with the hop distance. In the limit of a lattice-topology our method produces the well-known image convolutional masks. We test our method first on synthetic data of arbitrarily-connected graphs and human hand motion capture data, where the hand is represented by a tree capturing the mechanical dependencies of the joints. We are able to demonstrate, amongst other things, that inclusion of the graph structure of the data dimensions improves model prediction significantly, when compared against a benchmark CNN model with only time convolution layers. |
Tasks | Motion Capture, Time Series, Time Series Analysis |
Published | 2018-03-14 |
URL | http://arxiv.org/abs/1803.05419v2 |
PDF | http://arxiv.org/pdf/1803.05419v2.pdf |
PWC | https://paperswithcode.com/paper/generalised-structural-cnns-scnns-for-time |
Repo | |
Framework | |
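A compact NumPy sketch of the central mechanism in the abstract: starting from a user-supplied adjacency matrix, build a hop-distance mask that limits which data dimensions each output of a structural convolution may mix, so that a lattice adjacency recovers ordinary image-style masks. The toy five-joint "hand" tree and the 2-hop radius are illustrative assumptions, not the paper's kinematic graph.

```python
import numpy as np

def hop_mask(adj, max_hops):
    """Boolean (n, n) mask: True where the shortest-path distance is <= max_hops."""
    n = adj.shape[0]
    reach = np.eye(n, dtype=bool)          # 0 hops: every node reaches itself
    frontier = np.eye(n, dtype=bool)
    for _ in range(max_hops):
        frontier = ((frontier.astype(int) @ adj.astype(int)) > 0) & ~reach
        reach |= frontier
    return reach

# A tiny "hand" tree: wrist -> palm -> three fingers.
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
adj = np.zeros((5, 5), dtype=bool)
for i, j in edges:
    adj[i, j] = adj[j, i] = True

mask = hop_mask(adj, max_hops=2)
W = np.random.default_rng(0).normal(size=(5, 5)) * mask   # spatial weights restricted to the graph
print(mask.astype(int))
```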
Reproducible evaluation of diffusion MRI features for automatic classification of patients with Alzheimer's disease
Title | Reproducible evaluation of diffusion MRI features for automatic classification of patients with Alzheimer's disease |
Authors | Junhao Wen, Jorge Samper-Gonzalez, Simona Bottani, Alexandre Routier, Ninon Burgos, Thomas Jacquemont, Sabrina Fontanella, Stanley Durrleman, Stephane Epelbaum, Anne Bertrand, Olivier Colliot |
Abstract | Diffusion MRI is the modality of choice to study alterations of white matter. In past years, various works have used diffusion MRI for automatic classification of AD. However, classification performance obtained with different approaches is difficult to compare, and these studies are also difficult to reproduce. In the present paper, we first extend a previously proposed framework to diffusion MRI data for AD classification. Specifically, we add: conversion of diffusion MRI ADNI data into the BIDS standard and pipelines for diffusion MRI preprocessing and feature extraction. We then apply the framework to compare different components. First, feature selection (FS) has a positive impact on classification results: the highest balanced accuracy (BA) improved from 0.76 to 0.82 for the task CN vs AD. Secondly, voxel-wise features generally give better performance than regional features. Fractional anisotropy (FA) and mean diffusivity (MD) provided comparable results for voxel-wise features. Moreover, we observe that the poor performance obtained in tasks involving MCI was potentially caused by the small data samples, rather than by the data imbalance. Furthermore, no extensive classification difference exists for different degrees of smoothing and registration methods. Besides, we demonstrate that using non-nested validation of FS leads to unreliable and over-optimistic results: a 0.05 up to 0.40 relative increase in BA. Lastly, with proper FR and FS, the performance of diffusion MRI features is comparable to that of T1w MRI. All the code of the framework and of the experiments is publicly available: general-purpose tools have been integrated into the Clinica software package (www.clinica.run) and the paper-specific code is available at: https://github.com/aramis-lab/AD-ML. |
Tasks | Feature Selection |
Published | 2018-12-28 |
URL | https://arxiv.org/abs/1812.11183v3 |
PDF | https://arxiv.org/pdf/1812.11183v3.pdf |
PWC | https://paperswithcode.com/paper/reproducible-evaluation-of-diffusion-mri |
Repo | |
Framework | |
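The abstract's warning about non-nested validation of feature selection is easy to reproduce. The scikit-learn sketch below compares a leaky setup (features selected on all the data before cross-validation) with a pipelined setup (selection refit inside each training fold) on pure noise, where honest accuracy should sit near chance. It is a generic illustration of the over-optimism effect, not the paper's diffusion MRI pipeline.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))          # many noise features, few samples
y = rng.integers(0, 2, size=60)

# Leaky: feature selection has already seen the test folds.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5).mean()

# Nested/pipelined: selection is refit inside every training fold.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
honest = cross_val_score(pipe, X, y, cv=5).mean()
print(f"leaky CV accuracy: {leaky:.2f}  honest CV accuracy: {honest:.2f}")
```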
What is wrong with style transfer for texts?
Title | What is wrong with style transfer for texts? |
Authors | Alexey Tikhonov, Ivan P. Yamshchikov |
Abstract | A number of recent machine learning papers work with automated style transfer for texts and, counter to intuition, demonstrate that there is no consensus formulation of this NLP task. Different researchers propose different algorithms, datasets and target metrics to address it. This short opinion paper aims to discuss a possible formalization of this NLP task in anticipation of further growing interest in it. |
Tasks | Style Transfer |
Published | 2018-08-13 |
URL | http://arxiv.org/abs/1808.04365v1 |
PDF | http://arxiv.org/pdf/1808.04365v1.pdf |
PWC | https://paperswithcode.com/paper/what-is-wrong-with-style-transfer-for-texts |
Repo | |
Framework | |
Regularized Ensembles and Transferability in Adversarial Learning
Title | Regularized Ensembles and Transferability in Adversarial Learning |
Authors | Yifan Chen, Yevgeniy Vorobeychik |
Abstract | Despite the considerable success of convolutional neural networks in a broad array of domains, recent research has shown these to be vulnerable to small adversarial perturbations, commonly known as adversarial examples. Moreover, such examples have been shown to be remarkably portable, or transferable, from one model to another, enabling highly successful black-box attacks. We explore this issue of transferability and robustness along two dimensions: first, considering the impact of conventional $l_p$ regularization as well as replacing the top layer with a linear support vector machine (SVM), and second, the value of combining regularized models into an ensemble. We show that models trained with different regularizers present barriers to transferability, as does partial information about the models comprising the ensemble. |
Tasks | |
Published | 2018-12-05 |
URL | http://arxiv.org/abs/1812.01821v1 |
PDF | http://arxiv.org/pdf/1812.01821v1.pdf |
PWC | https://paperswithcode.com/paper/regularized-ensembles-and-transferability-in |
Repo | |
Framework | |
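A toy sketch of the transferability question in the abstract, with linear classifiers standing in for CNNs: sign-gradient (FGSM-style) perturbations are crafted against an $l_2$-regularized source model and then evaluated against an $l_1$-regularized target model. The synthetic data, epsilon, and the choice of regularizers are assumptions for illustration only, not the paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=50, random_state=0)
src = LogisticRegression(penalty="l2", max_iter=1000).fit(X, y)
tgt = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)

eps = 0.5
# FGSM-style step for a linear model: move each point against the class the source predicts.
direction = np.sign(src.coef_[0]) * np.where(src.predict(X) == 1, -1, 1)[:, None]
X_adv = X + eps * direction

print("source accuracy on adversarial inputs:", (src.predict(X_adv) == y).mean())
print("target accuracy on adversarial inputs:", (tgt.predict(X_adv) == y).mean())
```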
The relationship between linguistic expression and symptoms of depression, anxiety, and suicidal thoughts: A longitudinal study of blog content
Title | The relationship between linguistic expression and symptoms of depression, anxiety, and suicidal thoughts: A longitudinal study of blog content |
Authors | B. O'Dea, T. W. Boonstra, M. E. Larsen, T. Nguyen, S. Venkatesh, H. Christensen |
Abstract | Due to its popularity and availability, social media data may present a new way to identify individuals who are experiencing mental illness. By analysing blog content, this study aimed to investigate the associations between linguistic features and symptoms of depression, generalised anxiety, and suicidal ideation. This study utilised a longitudinal study design. Individuals who blogged were invited to participate in a study in which they completed fortnightly mental health questionnaires, including the PHQ-9 and GAD-7, for a period of 36 weeks. Linguistic features were extracted from blog data using the LIWC tool. Bivariate and multivariate analyses were performed to investigate the correlations between the linguistic features and mental health scores between subjects. We then used the multivariate regression model to predict longitudinal changes in mood within subjects. A total of 153 participants consented to taking part, with 38 participants completing the required number of questionnaires and blog posts during the study period. Between-subject analysis revealed that several linguistic features, including tentativeness and non-fluencies, were significantly associated with depression and anxiety symptoms, but not suicidal thoughts. Within-subject analysis showed no robust correlations between linguistic features and changes in mental health score. This study provides further support for the relationship between linguistic features within social media data and symptoms of depression and anxiety. The lack of robust within-subject correlations indicates that the relationship observed at the group level may not generalise to individual changes over time. |
Tasks | |
Published | 2018-11-07 |
URL | http://arxiv.org/abs/1811.02750v1 |
PDF | http://arxiv.org/pdf/1811.02750v1.pdf |
PWC | https://paperswithcode.com/paper/the-relationship-between-linguistic |
Repo | |
Framework | |
Closed-Book Training to Improve Summarization Encoder Memory
Title | Closed-Book Training to Improve Summarization Encoder Memory |
Authors | Yichen Jiang, Mohit Bansal |
Abstract | A good neural sequence-to-sequence summarization model should have a strong encoder that can distill and memorize the important information from long input texts so that the decoder can generate salient summaries based on the encoder’s memory. In this paper, we aim to improve the memorization capabilities of the encoder of a pointer-generator model by adding an additional ‘closed-book’ decoder without attention and pointer mechanisms. Such a decoder forces the encoder to be more selective in the information encoded in its memory state because the decoder can’t rely on the extra information provided by the attention and possibly copy modules, and hence improves the entire model. On the CNN/Daily Mail dataset, our 2-decoder model outperforms the baseline significantly in terms of ROUGE and METEOR metrics, for both cross-entropy and reinforced setups (and on human evaluation). Moreover, our model also achieves higher scores in a test-only DUC-2002 generalizability setup. We further present a memory ability test, two saliency metrics, as well as several sanity-check ablations (based on fixed-encoder, gradient-flow cut, and model capacity) to prove that the encoder of our 2-decoder model does in fact learn stronger memory representations than the baseline encoder. |
Tasks | |
Published | 2018-09-12 |
URL | http://arxiv.org/abs/1809.04585v1 |
PDF | http://arxiv.org/pdf/1809.04585v1.pdf |
PWC | https://paperswithcode.com/paper/closed-book-training-to-improve-summarization |
Repo | |
Framework | |
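A minimal PyTorch sketch of the two-decoder arrangement the abstract describes: one encoder feeds both an attentive decoder and a "closed-book" decoder that sees only the encoder's final state, and the two cross-entropy losses are summed so the encoder is pushed to pack more information into its memory state. The attention here is a simplified dot product over target embeddings, the dimensions are arbitrary, and this is not the authors' pointer-generator model.

```python
import torch
import torch.nn as nn

class TwoDecoderSummarizer(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.dec_attn = nn.GRU(2 * dim, dim, batch_first=True)   # sees an attention context
        self.dec_closed = nn.GRU(dim, dim, batch_first=True)     # closed-book: no attention
        self.out = nn.Linear(dim, vocab)

    def forward(self, src, tgt):
        enc_out, h = self.encoder(self.emb(src))                 # h: (1, B, dim) encoder memory
        tgt_in = self.emb(tgt)
        # Simplified dot-product attention context for every step of the attentive decoder.
        scores = torch.softmax(tgt_in @ enc_out.transpose(1, 2), dim=-1)
        ctx = scores @ enc_out
        y_attn, _ = self.dec_attn(torch.cat([tgt_in, ctx], dim=-1), h)
        y_closed, _ = self.dec_closed(tgt_in, h)                 # relies on h only
        return self.out(y_attn), self.out(y_closed)

model = TwoDecoderSummarizer()
src = torch.randint(0, 1000, (8, 30))
tgt = torch.randint(0, 1000, (8, 10))
logits_a, logits_c = model(src, tgt[:, :-1])
ce = nn.CrossEntropyLoss()
loss = ce(logits_a.reshape(-1, 1000), tgt[:, 1:].reshape(-1)) + \
       ce(logits_c.reshape(-1, 1000), tgt[:, 1:].reshape(-1))   # joint two-decoder objective
loss.backward()
print(float(loss))
```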
Crowdbreaks: Tracking Health Trends using Public Social Media Data and Crowdsourcing
Title | Crowdbreaks: Tracking Health Trends using Public Social Media Data and Crowdsourcing |
Authors | Martin Mueller, Marcel Salathé |
Abstract | In the past decade, tracking health trends using social media data has shown great promise, due to a powerful combination of massive adoption of social media around the world, and increasingly potent hardware and software that enables us to work with these new big data streams. At the same time, many challenging problems have been identified. First, there is often a mismatch between how rapidly online data can change, and how rapidly algorithms are updated, which means that there is limited reusability for algorithms trained on past data as their performance decreases over time. Second, much of the work focuses on specific issues during a specific past period in time, even though public health institutions would need flexible tools to assess multiple evolving situations in real time. Third, most tools providing such capabilities are proprietary systems with little algorithmic or data transparency, and thus little buy-in from the global public health and research community. Here, we introduce Crowdbreaks, an open platform which allows tracking of health trends by making use of continuous crowdsourced labelling of public social media content. The system is built in a way which automates the typical workflow of data collection, filtering, labelling and training of machine learning classifiers, and can therefore greatly accelerate the research process in the public health domain. This work introduces the technical aspects of the platform and explores its future use cases. |
Tasks | |
Published | 2018-05-14 |
URL | http://arxiv.org/abs/1805.05491v1 |
PDF | http://arxiv.org/pdf/1805.05491v1.pdf |
PWC | https://paperswithcode.com/paper/crowdbreaks-tracking-health-trends-using |
Repo | |
Framework | |
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
Title | A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization |
Authors | Andre Milzarek, Xiantao Xiao, Shicong Cen, Zaiwen Wen, Michael Ulbrich |
Abstract | In this work, we present a globalized stochastic semismooth Newton method for solving stochastic optimization problems involving smooth nonconvex and nonsmooth convex terms in the objective function. We assume that only noisy gradient and Hessian information of the smooth part of the objective function is available via calling stochastic first and second order oracles. The proposed method can be seen as a hybrid approach combining stochastic semismooth Newton steps and stochastic proximal gradient steps. Two inexact growth conditions are incorporated to monitor the convergence and the acceptance of the semismooth Newton steps and it is shown that the algorithm converges globally to stationary points in expectation. Moreover, under standard assumptions and utilizing random matrix concentration inequalities, we prove that the proposed approach locally turns into a pure stochastic semismooth Newton method and converges r-superlinearly with high probability. We present numerical results and comparisons on $\ell_1$-regularized logistic regression and nonconvex binary classification that demonstrate the efficiency of our algorithm. |
Tasks | Stochastic Optimization |
Published | 2018-03-09 |
URL | http://arxiv.org/abs/1803.03466v1 |
PDF | http://arxiv.org/pdf/1803.03466v1.pdf |
PWC | https://paperswithcode.com/paper/a-stochastic-semismooth-newton-method-for |
Repo | |
Framework | |
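The method combines stochastic semismooth Newton steps with stochastic proximal gradient steps; the sketch below implements only the latter building block for $\ell_1$-regularized logistic regression (one of the abstract's test problems), using minibatch gradients as the noisy first-order oracle. The Newton steps and the growth-condition switching logic are omitted, and the step size, batch size, and regularization weight are arbitrary.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad(X, y, lam=0.01, step=0.1, batch=32, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        idx = rng.choice(len(y), size=batch, replace=False)
        Xb, yb = X[idx], y[idx]                      # minibatch = noisy first-order oracle
        grad = Xb.T @ (1.0 / (1.0 + np.exp(-Xb @ w)) - yb) / batch
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 50))
w_true = np.zeros(50)
w_true[:5] = 2.0                                     # only the first 5 features matter
y = (rng.random(1000) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
w_hat = stochastic_prox_grad(X, y)
print(np.count_nonzero(w_hat), np.round(w_hat[:6], 2))
```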
Representing Verbs as Argument Concepts
Title | Representing Verbs as Argument Concepts |
Authors | Yu Gong, Kaiqi Zhao, Kenny Q. Zhu |
Abstract | Verbs play an important role in the understanding of natural language text. This paper studies the problem of abstracting the subject and object arguments of a verb into a set of noun concepts, known as the “argument concepts”. This set of concepts, whose size is parameterized, represents the fine-grained semantics of a verb. For example, the object of “enjoy” can be abstracted into time, hobby and event, etc. We present a novel framework to automatically infer human readable and machine computable action concepts with high accuracy. |
Tasks | |
Published | 2018-03-02 |
URL | http://arxiv.org/abs/1803.00729v1 |
PDF | http://arxiv.org/pdf/1803.00729v1.pdf |
PWC | https://paperswithcode.com/paper/representing-verbs-as-argument-concepts |
Repo | |
Framework | |