Paper Group AWR 192
ViewAL: Active Learning with Viewpoint Entropy for Semantic Segmentation
Title | ViewAL: Active Learning with Viewpoint Entropy for Semantic Segmentation |
Authors | Yawar Siddiqui, Julien Valentin, Matthias Nießner |
Abstract | We propose ViewAL, a novel active learning strategy for semantic segmentation that exploits viewpoint consistency in multi-view datasets. Our core idea is that inconsistencies in model predictions across viewpoints provide a very reliable measure of uncertainty and encourage the model to perform well irrespective of the viewpoint under which objects are observed. To incorporate this uncertainty measure, we introduce a new viewpoint entropy formulation, which is the basis of our active learning strategy. In addition, we propose uncertainty computations on a superpixel level, which exploits inherently localized signal in the segmentation task, directly lowering the annotation costs. This combination of viewpoint entropy and the use of superpixels allows us to efficiently select samples that are highly informative for improving the network. We demonstrate that our proposed active learning strategy not only yields the best-performing models for the same amount of required labeled data, but also significantly reduces labeling effort. For instance, our method achieves 95% of maximum achievable network performance using only 7%, 17%, and 24% labeled data on SceneNet-RGBD, ScanNet, and Matterport3D, respectively. On these datasets, the best state-of-the-art method achieves the same performance with 14%, 27%, and 33% labeled data. Finally, we demonstrate that labeling using superpixels yields the same quality of ground-truth compared to labeling whole images, but requires 25% less time. |
Tasks | Active Learning, Semantic Segmentation |
Published | 2019-11-26 |
URL | https://arxiv.org/abs/1911.11789v2 |
https://arxiv.org/pdf/1911.11789v2.pdf | |
PWC | https://paperswithcode.com/paper/viewal-active-learning-with-viewpoint-entropy |
Repo | https://github.com/nihalsid/ViewAL |
Framework | pytorch |
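The viewpoint-entropy idea can be sketched in a few lines: fuse the softmax outputs of all views that observe the same back-projected surface point, take the entropy of the fused distribution, then average scores per superpixel. A minimal numpy sketch under that reading (function names are illustrative, not taken from the ViewAL repo):

```python
import numpy as np

def viewpoint_entropy(view_probs):
    """Entropy of the mean class distribution of one surface point.

    view_probs: (n_views, n_classes) softmax outputs of the pixels that
    observe the same surface point in different views.
    """
    mean_p = np.mean(view_probs, axis=0)      # cross-view consensus
    mean_p = np.clip(mean_p, 1e-12, 1.0)
    return float(-np.sum(mean_p * np.log(mean_p)))

def superpixel_score(pixel_entropies, spx_labels, spx_id):
    """Average viewpoint entropy over the pixels of one superpixel."""
    vals = [e for e, l in zip(pixel_entropies, spx_labels) if l == spx_id]
    return sum(vals) / len(vals)
```

Views that agree produce a peaked mean distribution and hence low entropy; disagreeing views flatten the mean distribution and raise it, which is the uncertainty signal used for selection.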
Large Scale Incremental Learning
Title | Large Scale Incremental Learning |
Authors | Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, Yun Fu |
Abstract | Modern machine learning suffers from catastrophic forgetting when learning new classes incrementally. The performance dramatically degrades due to the missing data of old classes. Incremental learning methods have been proposed to retain the knowledge acquired from the old classes, by using knowledge distilling and keeping a few exemplars from the old classes. However, these methods struggle to scale up to a large number of classes. We believe this is because of the combination of two factors: (a) the data imbalance between the old and new classes, and (b) the increasing number of visually similar classes. Distinguishing between an increasing number of visually similar classes is particularly challenging, when the training data is unbalanced. We propose a simple and effective method to address this data imbalance issue. We found that the last fully connected layer has a strong bias towards the new classes, and this bias can be corrected by a linear model. With two bias parameters, our method performs remarkably well on two large datasets: ImageNet (1000 classes) and MS-Celeb-1M (10000 classes), outperforming the state-of-the-art algorithms by 11.1% and 13.2% respectively. |
Tasks | |
Published | 2019-05-30 |
URL | https://arxiv.org/abs/1905.13260v1 |
https://arxiv.org/pdf/1905.13260v1.pdf | |
PWC | https://paperswithcode.com/paper/large-scale-incremental-learning-1 |
Repo | https://github.com/sairin1202/BIC |
Framework | pytorch |
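The two-parameter correction described in the abstract is easy to sketch: after incremental training, the logits of the new classes are rescaled by a linear model fit on a small held-out set, while old-class logits pass through unchanged. An illustrative numpy sketch (names and shapes are assumptions, not the repo's API):

```python
import numpy as np

def bias_correct(logits, new_class_mask, alpha, beta):
    """Apply the two-parameter linear correction to new-class logits.

    logits: (n_samples, n_classes); new_class_mask: boolean per class.
    alpha and beta are fit on a small validation set; old-class logits
    are left untouched.
    """
    out = logits.copy()
    out[:, new_class_mask] = alpha * logits[:, new_class_mask] + beta
    return out
```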
Scheduling the Learning Rate via Hypergradients: New Insights and a New Algorithm
Title | Scheduling the Learning Rate via Hypergradients: New Insights and a New Algorithm |
Authors | Michele Donini, Luca Franceschi, Massimiliano Pontil, Orchid Majumder, Paolo Frasconi |
Abstract | We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization. This allows us to explicitly search for schedules that achieve good generalization. We describe the structure of the gradient of a validation error w.r.t. the learning rate, the hypergradient, and based on this we introduce a novel online algorithm. Our method adaptively interpolates between the recently proposed techniques of Franceschi et al. (2017) and Baydin et al. (2017), featuring increased stability and faster convergence. We show empirically that the proposed method compares favourably with baselines and related methods in terms of final test accuracy. |
Tasks | Hyperparameter Optimization |
Published | 2019-10-18 |
URL | https://arxiv.org/abs/1910.08525v1 |
https://arxiv.org/pdf/1910.08525v1.pdf | |
PWC | https://paperswithcode.com/paper/scheduling-the-learning-rate-via-1 |
Repo | https://github.com/awslabs/adatune |
Framework | pytorch |
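For plain SGD, the hypergradient of the validation loss with respect to the learning rate reduces to minus the inner product of consecutive gradients, so the rate can be updated online alongside the parameters. A toy numpy sketch in the spirit of Baydin et al. (2017), not a reproduction of the paper's interpolating algorithm:

```python
import numpy as np

def hd_sgd(grad_fn, x0, lr0=0.01, hyper_lr=1e-3, steps=100):
    """SGD whose learning rate follows its own hypergradient.

    The hypergradient of the loss w.r.t. the learning rate is
    -grad_t . grad_{t-1}: the rate grows while successive gradients
    align and shrinks when they oppose each other.
    """
    x, lr = np.asarray(x0, float), lr0
    prev_grad = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        lr = lr + hyper_lr * float(g @ prev_grad)  # hypergradient step
        x = x - lr * g
        prev_grad = g
    return x, lr
```

On a quadratic f(x) = 0.5||x||^2 (gradient x), the rate grows automatically from a deliberately small initial value and the iterate converges much faster than with the fixed initial rate.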
BOAH: A Tool Suite for Multi-Fidelity Bayesian Optimization & Analysis of Hyperparameters
Title | BOAH: A Tool Suite for Multi-Fidelity Bayesian Optimization & Analysis of Hyperparameters |
Authors | Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Joshua Marben, Philipp Müller, Frank Hutter |
Abstract | Hyperparameter optimization and neural architecture search can become prohibitively expensive for regular black-box Bayesian optimization because the training and evaluation of a single model can easily take several hours. To overcome this, we introduce a comprehensive tool suite for effective multi-fidelity Bayesian optimization and the analysis of its runs. The suite, written in Python, provides a simple way to specify complex design spaces, a robust and efficient combination of Bayesian optimization and HyperBand, and a comprehensive analysis of the optimization process and its outcomes. |
Tasks | Hyperparameter Optimization, Neural Architecture Search |
Published | 2019-08-16 |
URL | https://arxiv.org/abs/1908.06756v1 |
https://arxiv.org/pdf/1908.06756v1.pdf | |
PWC | https://paperswithcode.com/paper/boah-a-tool-suite-for-multi-fidelity-bayesian |
Repo | https://github.com/automl/BOAH |
Framework | none |
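BOAH builds on BOHB, whose HyperBand component allocates budget by successive halving: many configurations get a cheap low-fidelity evaluation, and only the top fraction survives to the next, larger budget. A minimal sketch of that inner loop (the Bayesian sampling of configurations, BOHB's other half, is omitted):

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3, rounds=3):
    """Keep the best 1/eta configs each round, multiplying the budget by eta.

    evaluate(config, budget) -> loss. Cheap low-fidelity runs weed out
    weak configurations before any full-budget evaluation happens.
    """
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        scored = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = scored[:max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]
```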
A Framework for Understanding Unintended Consequences of Machine Learning
Title | A Framework for Understanding Unintended Consequences of Machine Learning |
Authors | Harini Suresh, John V. Guttag |
Abstract | As machine learning increasingly affects people and society, it is important that we strive for a comprehensive and unified understanding of potential sources of unwanted consequences. For instance, downstream harms to particular groups are often blamed on “biased data,” but this concept encompasses too many issues to be useful in developing solutions. In this paper, we provide a framework that partitions sources of downstream harm in machine learning into six distinct categories spanning the data generation and machine learning pipeline. We describe how these issues arise, how they are relevant to particular applications, and how they motivate different solutions. In doing so, we aim to facilitate the development of solutions that stem from an understanding of application-specific populations and data generation processes, rather than relying on general statements about what may or may not be “fair.” |
Tasks | |
Published | 2019-01-28 |
URL | https://arxiv.org/abs/1901.10002v3 |
https://arxiv.org/pdf/1901.10002v3.pdf | |
PWC | https://paperswithcode.com/paper/a-framework-for-understanding-unintended |
Repo | https://github.com/summerscope/fair-ml-reading-group |
Framework | none |
High-dimensional Dense Residual Convolutional Neural Network for Light Field Reconstruction
Title | High-dimensional Dense Residual Convolutional Neural Network for Light Field Reconstruction |
Authors | Nan Meng, Hayden K. -H. So, Xing Sun, Edmund Y. Lam |
Abstract | We consider the problem of high-dimensional light field reconstruction and develop a learning-based framework for spatial and angular super-resolution. Many current approaches either require disparity clues or restore the spatial and angular details separately. Such methods have difficulties with non-Lambertian surfaces or occlusions. In contrast, we formulate light field super-resolution (LFSR) as tensor restoration and develop a learning framework based on a two-stage restoration with 4-dimensional (4D) convolution. This allows our model to learn the features capturing the geometry information encoded in multiple adjacent views. Such geometric features vary near the occlusion regions and indicate the foreground object border. To train a feasible network, we propose a novel normalization operation based on a group of views in the feature maps, design a stage-wise loss function, and develop the multi-range training strategy to further improve the performance. Evaluations are conducted on a number of light field datasets including real-world scenes, synthetic data, and microscope light fields. The proposed method achieves superior performance and shorter execution time compared with other state-of-the-art schemes. |
Tasks | Super-Resolution |
Published | 2019-10-03 |
URL | https://arxiv.org/abs/1910.01426v3 |
https://arxiv.org/pdf/1910.01426v3.pdf | |
PWC | https://paperswithcode.com/paper/high-dimensional-dense-residual-convolutional |
Repo | https://github.com/monaen/LightFieldReconstruction |
Framework | tf |
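The core operation is 4D convolution over the two angular and two spatial axes of the light field. A deliberately naive reference implementation of "valid" 4D convolution (real systems, including this paper's network, fold the operation into stacked lower-dimensional convolutions for speed):

```python
import numpy as np
from itertools import product

def conv4d(x, k):
    """Naive 'valid' 4D cross-correlation over (u, v, s, t): the two
    angular and two spatial axes of a light field tensor."""
    out_shape = tuple(xs - ks + 1 for xs, ks in zip(x.shape, k.shape))
    out = np.zeros(out_shape)
    for idx in product(*(range(n) for n in out_shape)):
        window = tuple(slice(i, i + ks) for i, ks in zip(idx, k.shape))
        out[idx] = np.sum(x[window] * k)
    return out
```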
Mapper Based Classifier
Title | Mapper Based Classifier |
Authors | Jacek Cyranka, Alexander Georges, David Meyer |
Abstract | Topological data analysis aims to extract topological quantities from data, which tend to focus on the broader global structure of the data rather than local information. The Mapper method, specifically, generalizes clustering methods to identify significant global mathematical structures, which are out of reach of many other approaches. We propose a classifier based on applying the Mapper algorithm to data projected onto a latent space. We obtain the latent space by using PCA or autoencoders. Notably, a classifier based on the Mapper method is immune to any gradient-based attack, and improves robustness over traditional CNNs (convolutional neural networks). We provide theoretical justification and numerical experiments that confirm our claims. |
Tasks | Topological Data Analysis |
Published | 2019-10-17 |
URL | https://arxiv.org/abs/1910.08103v2 |
https://arxiv.org/pdf/1910.08103v2.pdf | |
PWC | https://paperswithcode.com/paper/mapper-based-classifier |
Repo | https://github.com/asgeorges/mapper-classifier |
Framework | none |
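The Mapper construction behind the classifier can be sketched on 1D data: cover the range of a lens function with overlapping intervals, cluster each pre-image, and emit one node per cluster. A toy sketch (gap-based clustering stands in for a real clusterer, and the paper first projects to a PCA or autoencoder latent space, which is skipped here):

```python
import numpy as np

def mapper_1d(points, lens, n_intervals=4, overlap=0.25, gap=1.0):
    """Toy Mapper: overlapping intervals over the lens range, then
    cluster each pre-image by splitting at gaps larger than `gap`.
    Returns one node (a set of point indices) per cluster."""
    lo, hi = lens.min(), lens.max()
    width = (hi - lo) / n_intervals
    nodes = []
    for i in range(n_intervals):
        a = lo + i * width - overlap * width
        b = lo + (i + 1) * width + overlap * width
        idx = np.where((lens >= a) & (lens <= b))[0]
        if idx.size == 0:
            continue
        order = idx[np.argsort(points[idx])]
        cluster = [order[0]]
        for j in order[1:]:
            if points[j] - points[cluster[-1]] > gap:
                nodes.append(set(cluster))
                cluster = []
            cluster.append(j)
        nodes.append(set(cluster))
    return nodes
```

Overlapping intervals make nodes share points, which is what gives the Mapper output its graph structure; a classifier can then vote over the nodes a test point falls into.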
2D and 3D Vascular Structures Enhancement via Multiscale Fractional Anisotropy Tensor
Title | 2D and 3D Vascular Structures Enhancement via Multiscale Fractional Anisotropy Tensor |
Authors | Haifa F. Alhasson, Shuaa S. Alharbi, Boguslaw Obara |
Abstract | The detection of vascular structures from noisy images is a fundamental process for extracting meaningful information in many applications. Most well-known vascular enhancing techniques often rely on Hessian-based filters. This paper investigates the feasibility and deficiencies of detecting curve-like structures using a Hessian matrix. The main contribution is a novel enhancement function, which overcomes the deficiencies of established methods. Our approach has been evaluated quantitatively and qualitatively using synthetic examples and a wide range of real 2D and 3D biomedical images. Compared with other existing approaches, the experimental results prove that our proposed approach achieves high-quality curvilinear structure enhancement. |
Tasks | |
Published | 2019-02-01 |
URL | http://arxiv.org/abs/1902.00550v1 |
http://arxiv.org/pdf/1902.00550v1.pdf | |
PWC | https://paperswithcode.com/paper/2d-and-3d-vascular-structures-enhancement-via |
Repo | https://github.com/Haifafh/MFAT |
Framework | none |
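For context, the Hessian-based baseline the paper improves upon computes a Frangi-style vesselness from the eigenvalues of the local Hessian. A 2D numpy sketch of that baseline (the paper's contribution replaces the Hessian with a fractional anisotropy tensor, which is not reproduced here):

```python
import numpy as np

def vesselness2d(img, beta=0.5, c=0.5):
    """Frangi-style 2D curvilinear response from Hessian eigenvalues."""
    gy, gx = np.gradient(img)
    hyy, _ = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # eigenvalues of the 2x2 symmetric Hessian, ordered |l1| <= |l2|
    tmp = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
    l1 = 0.5 * (hxx + hyy + tmp)
    l2 = 0.5 * (hxx + hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blob vs. line measure
    s = np.sqrt(l1 ** 2 + l2 ** 2)           # second-order strength
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    return np.where(l2 < 0, v, 0.0)          # keep bright ridges only
```

On a synthetic bright line the response peaks along the ridge and vanishes on the flat background, which is the behaviour both the baseline and the proposed enhancement function aim for.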
CUTIE: Learning to Understand Documents with Convolutional Universal Text Information Extractor
Title | CUTIE: Learning to Understand Documents with Convolutional Universal Text Information Extractor |
Authors | Xiaohui Zhao, Endi Niu, Zhuo Wu, Xiaoguang Wang |
Abstract | Extracting key information from documents such as receipts or invoices, and converting the texts of interest into structured data, is crucial in document-intensive office-automation processes in areas including, but not limited to, accounting, finance, and taxation. To avoid designing expert rules for each specific type of document, some published works attempt to tackle the problem by learning a model to explore the semantic context in text sequences based on the Named Entity Recognition (NER) method in the NLP field. In this paper, we propose to harness the effective information from both the semantic meaning and the spatial distribution of texts in documents. Specifically, our proposed model, Convolutional Universal Text Information Extractor (CUTIE), applies convolutional neural networks on gridded texts where texts are embedded as features with semantic connotations. We further explore the effect of employing different convolutional neural network structures and propose a fast and portable structure. We demonstrate the effectiveness of the proposed method on a dataset with up to $4,484$ labelled receipts, without any pre-training or post-processing, achieving state-of-the-art performance that is much better than NER-based methods in terms of both speed and accuracy. Experimental results also demonstrate that the proposed CUTIE model is able to achieve good performance with a much smaller amount of training data. |
Tasks | Named Entity Recognition |
Published | 2019-03-29 |
URL | https://arxiv.org/abs/1903.12363v4 |
https://arxiv.org/pdf/1903.12363v4.pdf | |
PWC | https://paperswithcode.com/paper/cutie-learning-to-understand-documents-with |
Repo | https://github.com/vsymbol/CUTIE |
Framework | tf |
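The gridding step that makes a CNN applicable to document text can be sketched directly: each OCR token is dropped into a coarse grid cell according to its box centre, and the network then consumes token embeddings laid out on that grid. An illustrative sketch (grid size, coordinate conventions, and names are assumptions, not CUTIE's actual preprocessing code):

```python
def grid_positions(tokens, boxes, rows, cols, page_w, page_h):
    """Map OCR tokens to a coarse rows x cols grid by box centre.

    tokens: list of strings; boxes: (x0, y0, x1, y1) page coordinates.
    Returns a grid of token strings ('' where empty); collisions simply
    overwrite in this toy version.
    """
    grid = [['' for _ in range(cols)] for _ in range(rows)]
    for tok, (x0, y0, x1, y1) in zip(tokens, boxes):
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        r = min(rows - 1, int(cy / page_h * rows))
        c = min(cols - 1, int(cx / page_w * cols))
        grid[r][c] = tok
    return grid
```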
Sample Efficient Text Summarization Using a Single Pre-Trained Transformer
Title | Sample Efficient Text Summarization Using a Single Pre-Trained Transformer |
Authors | Urvashi Khandelwal, Kevin Clark, Dan Jurafsky, Lukasz Kaiser |
Abstract | Language model (LM) pre-training has resulted in impressive performance and sample efficiency on a variety of language understanding tasks. However, it remains unclear how to best use pre-trained LMs for generation tasks such as abstractive summarization, particularly to enhance sample efficiency. In these sequence-to-sequence settings, prior work has experimented with loading pre-trained weights into the encoder and/or decoder networks, but used non-pre-trained encoder-decoder attention weights. We instead use a pre-trained decoder-only network, where the same Transformer LM both encodes the source and generates the summary. This ensures that all parameters in the network, including those governing attention over source states, have been pre-trained before the fine-tuning step. Experiments on the CNN/Daily Mail dataset show that our pre-trained Transformer LM substantially improves over pre-trained Transformer encoder-decoder networks in limited-data settings. For instance, it achieves 13.1 ROUGE-2 using only 1% of the training data (~3000 examples), while pre-trained encoder-decoder models score 2.3 ROUGE-2. |
Tasks | Abstractive Text Summarization, Language Modelling, Text Summarization |
Published | 2019-05-21 |
URL | https://arxiv.org/abs/1905.08836v1 |
https://arxiv.org/pdf/1905.08836v1.pdf | |
PWC | https://paperswithcode.com/paper/sample-efficient-text-summarization-using-a |
Repo | https://github.com/t080/pytorch-translm |
Framework | pytorch |
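The input formatting implied by the abstract is simple: source and summary are concatenated around a separator so a single decoder-only Transformer both encodes and generates, with the loss masked on the source tokens. A sketch of that example construction (the -100 ignore_index convention follows common PyTorch practice and is an assumption here, not a detail from the paper):

```python
def lm_example(source_ids, summary_ids, sep_id, ignore_index=-100):
    """Format one summarization example for a decoder-only LM.

    The loss is masked (ignore_index) over the source and separator so
    only summary tokens are scored, while attention still covers the
    whole prefix, including the source.
    """
    input_ids = source_ids + [sep_id] + summary_ids
    labels = [ignore_index] * (len(source_ids) + 1) + summary_ids
    return input_ids, labels
```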
Matrix Completion in the Unit Hypercube via Structured Matrix Factorization
Title | Matrix Completion in the Unit Hypercube via Structured Matrix Factorization |
Authors | Emanuele Bugliarello, Swayambhoo Jain, Vineeth Rakesh |
Abstract | Several complex tasks that arise in organizations can be simplified by mapping them into a matrix completion problem. In this paper, we address a key challenge faced by our company: predicting the efficiency of artists in rendering visual effects (VFX) in film shots. We tackle this challenge by using a two-fold approach: first, we transform this task into a constrained matrix completion problem with entries bounded in the unit interval [0, 1]; second, we propose two novel matrix factorization models that leverage our knowledge of the VFX environment. Our first approach, expertise matrix factorization (EMF), is an interpretable method that structures the latent factors as weighted user-item interplay. The second one, survival matrix factorization (SMF), is instead a probabilistic model for the underlying process defining employees’ efficiencies. We show the effectiveness of our proposed models by extensive numerical tests on our VFX dataset and two additional datasets with values that are also bounded in the [0, 1] interval. |
Tasks | Matrix Completion |
Published | 2019-05-30 |
URL | https://arxiv.org/abs/1905.12881v1 |
https://arxiv.org/pdf/1905.12881v1.pdf | |
PWC | https://paperswithcode.com/paper/matrix-completion-in-the-unit-hypercube-via |
Repo | https://github.com/e-bug/unit-mf |
Framework | none |
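A generic way to keep matrix-completion predictions inside the unit interval is to place a sigmoid link on the factorization; the paper's EMF and SMF models add much more structure than this, so the following is only a baseline sketch under that simpler reading:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bounded_mf(M, mask, rank=2, lr=0.5, steps=2000, seed=0):
    """Matrix factorization with a sigmoid link so predictions stay in
    (0, 1); gradient descent on squared error over observed entries."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((M.shape[0], rank))
    V = 0.1 * rng.standard_normal((M.shape[1], rank))
    for _ in range(steps):
        P = sigmoid(U @ V.T)
        err = mask * (P - M) * P * (1 - P)  # chain rule through sigmoid
        U, V = U - lr * err @ V, V - lr * err.T @ U
    return sigmoid(U @ V.T)
```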
Guided Learning Convolution System for DCASE 2019 Task 4
Title | Guided Learning Convolution System for DCASE 2019 Task 4 |
Authors | Liwei Lin, Xiangdong Wang, Hong Liu, Yueliang Qian |
Abstract | In this paper, we describe in detail the system we submitted to DCASE2019 task 4: sound event detection (SED) in domestic environments. We employ a convolutional neural network (CNN) with an embedding-level attention pooling module to solve it. By considering the interference caused by the co-occurrence of multiple events in the unbalanced dataset, we utilize the disentangled feature to raise the performance of the model. To take advantage of the unlabeled data, we adopt Guided Learning for semi-supervised learning. A group of median filters with adaptive window sizes is utilized in the post-processing of output probabilities of the model. We also analyze the effect of the synthetic data on the performance of the model and finally achieve an event-based F-measure of 45.43% on the validation set and an event-based F-measure of 42.7% on the test set. The system we submitted to the challenge achieves the best performance among all participants. |
Tasks | Sound Event Detection |
Published | 2019-09-11 |
URL | https://arxiv.org/abs/1909.06178v1 |
https://arxiv.org/pdf/1909.06178v1.pdf | |
PWC | https://paperswithcode.com/paper/guided-learning-convolution-system-for-dcase |
Repo | https://github.com/Kikyo-16/Sound_event_detection |
Framework | tf |
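The post-processing step can be sketched as a per-class median filter whose window length is adapted to the typical duration of each event class (longer events get longer windows). A minimal sketch of the filtering itself, with the per-class window choice left to the caller:

```python
def median_smooth(probs, win):
    """Median-filter a frame-wise probability/decision sequence with an
    odd window; removes isolated spikes while preserving long events."""
    half = win // 2
    out = []
    for i in range(len(probs)):
        seg = probs[max(0, i - half): i + half + 1]
        out.append(sorted(seg)[len(seg) // 2])
    return out
```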
Speech Model Pre-training for End-to-End Spoken Language Understanding
Title | Speech Model Pre-training for End-to-End Spoken Language Understanding |
Authors | Loren Lugosch, Mirco Ravanelli, Patrick Ignoto, Vikrant Singh Tomar, Yoshua Bengio |
Abstract | Whereas conventional spoken language understanding (SLU) systems map speech to text, and then text to intent, end-to-end SLU systems map speech directly to intent through a single trainable model. Achieving high accuracy with these end-to-end models without a large amount of training data is difficult. We propose a method to reduce the data requirements of end-to-end SLU in which the model is first pre-trained to predict words and phonemes, thus learning good features for SLU. We introduce a new SLU dataset, Fluent Speech Commands, and show that our method improves performance both when the full dataset is used for training and when only a small subset is used. We also describe preliminary experiments to gauge the model’s ability to generalize to new phrases not heard during training. |
Tasks | Spoken Language Understanding |
Published | 2019-04-07 |
URL | https://arxiv.org/abs/1904.03670v2 |
https://arxiv.org/pdf/1904.03670v2.pdf | |
PWC | https://paperswithcode.com/paper/speech-model-pre-training-for-end-to-end |
Repo | https://github.com/lorenlugosch/end-to-end-SLU |
Framework | pytorch |
A Comparative Analysis of Feature Selection Methods for Biomarker Discovery in Study of Toxicant-treated Atlantic Cod (Gadus morhua) Liver
Title | A Comparative Analysis of Feature Selection Methods for Biomarker Discovery in Study of Toxicant-treated Atlantic Cod (Gadus morhua) Liver |
Authors | Xiaokang Zhang, Inge Jonassen |
Abstract | Univariate and multivariate feature selection methods can be used for biomarker discovery in analysis of toxicant exposure. Among the univariate methods, differential expression analysis (DEA) is often applied for its simplicity and interpretability. A characteristic of methods for DEA is that they treat genes individually, disregarding the correlation that exists between them. On the other hand, some multivariate feature selection methods are proposed for biomarker discovery. Given the variety of biomarker discovery methods, choosing the most suitable one for a specific dataset becomes a problem. In this paper, we present a framework for comparison of potential biomarker discovery methods: three methods that stem from different theories are compared by how stable they are and how well they can improve the classification accuracy. The three methods we have considered are: Significance Analysis of Microarrays (SAM), which identifies the differentially expressed genes; minimum Redundancy Maximum Relevance (mRMR), based on information theory; and Characteristic Direction (GeoDE), inspired by a graphical perspective. Tested on the gene expression data from two experiments exposing Atlantic cod to two different toxicants (MeHg and PCB 153), different methods stand out in different cases, so the most suitable method should be chosen based on the dataset under study and the research interest. |
Tasks | Feature Selection |
Published | 2019-05-20 |
URL | https://arxiv.org/abs/1905.08048v1 |
https://arxiv.org/pdf/1905.08048v1.pdf | |
PWC | https://paperswithcode.com/paper/a-comparative-analysis-of-feature-selection |
Repo | https://github.com/zhxiaokang/FScompare |
Framework | none |
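The stability axis of the comparison is commonly measured as the mean pairwise similarity of the feature subsets selected on resampled data; Jaccard similarity is one standard choice (an assumption here, the paper may use a different index):

```python
from itertools import combinations

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def selection_stability(selected_sets):
    """Mean pairwise Jaccard similarity of the feature subsets chosen
    on resampled datasets; 1.0 means perfectly stable selection."""
    pairs = list(combinations(selected_sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```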
On the Understanding and Interpretation of Machine Learning Predictions in Clinical Gait Analysis Using Explainable Artificial Intelligence
Title | On the Understanding and Interpretation of Machine Learning Predictions in Clinical Gait Analysis Using Explainable Artificial Intelligence |
Authors | Fabian Horst, Djordje Slijepcevic, Sebastian Lapuschkin, Anna-Maria Raberger, Matthias Zeppelzauer, Wojciech Samek, Christian Breiteneder, Wolfgang I. Schöllhorn, Brian Horsak |
Abstract | Systems incorporating Artificial Intelligence (AI) and machine learning (ML) techniques are increasingly used to guide decision-making in the healthcare sector. While AI-based systems provide powerful and promising results with regard to their classification and prediction accuracy (e.g., in differentiating between different disorders in human gait), most share a central limitation, namely their black-box character. Understanding which features classification models learn, whether they are meaningful and consequently whether their decisions are trustworthy is difficult and often impossible to comprehend. This severely hampers their applicability as decision-support systems in clinical practice. There is a strong need for AI-based systems to provide transparency and justification of predictions, which are necessary also for ethical and legal compliance. As a consequence, in recent years the field of explainable AI (XAI) has gained increasing importance. The primary aim of this article is to investigate whether XAI methods can enhance transparency, explainability and interpretability of predictions in automated clinical gait classification. We utilize a dataset comprising bilateral three-dimensional ground reaction force measurements from 132 patients with different lower-body gait disorders and 62 healthy controls. In our experiments, we included several gait classification tasks, employed a representative set of classification methods, and a well-established XAI method - Layer-wise Relevance Propagation - to explain decisions at the signal (input) level. The presented approach exemplifies how XAI can be used to understand and interpret state-of-the-art ML models trained for gait classification tasks, and shows that the features that are considered relevant for machine learning models can be attributed to meaningful and clinically relevant biomechanical gait characteristics. |
Tasks | Decision Making |
Published | 2019-12-16 |
URL | https://arxiv.org/abs/1912.07737v1 |
https://arxiv.org/pdf/1912.07737v1.pdf | |
PWC | https://paperswithcode.com/paper/on-the-understanding-and-interpretation-of |
Repo | https://github.com/sebastian-lapuschkin/interpretable-deep-gait-injury |
Framework | none |
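Layer-wise Relevance Propagation redistributes a layer's output relevance to its inputs in proportion to their contributions. A numpy sketch of the epsilon rule for a single linear layer (the paper applies LRP through entire classifiers, which involves more rules than this one):

```python
import numpy as np

def lrp_linear(x, W, b, R_out, eps=1e-6):
    """LRP-epsilon for one linear layer: redistribute the relevance
    R_out of the outputs z = W x + b back to the inputs, in proportion
    to each input's contribution W_ij * x_j to z_i."""
    z = W @ x + b                          # pre-activations, shape (out,)
    denom = z + eps * np.sign(z)           # epsilon stabilizer
    return (W * x).T @ (R_out / denom)     # input relevances, shape (in,)
```

A useful sanity check is the (approximate) conservation property: with zero bias and small epsilon, the input relevances sum to the output relevance.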