October 18, 2019

3185 words 15 mins read

Paper Group ANR 568

Fairness Without Demographics in Repeated Loss Minimization. Joint Ground and Aerial Package Delivery Services: A Stochastic Optimization Approach. Sturm: Sparse Tubal-Regularized Multilinear Regression for fMRI. LeukoNet: DCT-based CNN architecture for the classification of normal versus Leukemic blasts in B-ALL Cancer. Two-stage iterative Procrus …

Fairness Without Demographics in Repeated Loss Minimization

Title Fairness Without Demographics in Repeated Loss Minimization
Authors Tatsunori B. Hashimoto, Megha Srivastava, Hongseok Namkoong, Percy Liang
Abstract Machine learning models (e.g., speech recognizers) are usually trained to minimize average loss, which results in representation disparity—minority groups (e.g., non-native speakers) contribute less to the training objective and thus tend to suffer higher loss. Worse, as model accuracy affects user retention, a minority group can shrink over time. In this paper, we first show that the status quo of empirical risk minimization (ERM) amplifies representation disparity over time, which can even make initially fair models unfair. To mitigate this, we develop an approach based on distributionally robust optimization (DRO), which minimizes the worst case risk over all distributions close to the empirical distribution. We prove that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice, while remaining oblivious to the identity of the groups. We demonstrate that DRO prevents disparity amplification on examples where ERM fails, and show improvements in minority group user satisfaction in a real-world text autocomplete task.
Tasks
Published 2018-06-20
URL http://arxiv.org/abs/1806.08010v2
PDF http://arxiv.org/pdf/1806.08010v2.pdf
PWC https://paperswithcode.com/paper/fairness-without-demographics-in-repeated
Repo
Framework
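
The distributionally robust objective described above can be written down compactly. Below is a minimal sketch assuming the common chi-square-ball dual form, in which the worst-case risk softly focuses on the highest per-example losses; the constant C stands in for a radius-dependent factor and its value here is an illustrative choice, not the paper's setting.

```python
# Hedged sketch of a chi-square DRO surrogate:
#   R_dro = min_eta  C * sqrt(mean((loss_i - eta)_+^2)) + eta
import numpy as np
from scipy.optimize import minimize_scalar

def dro_risk(losses: np.ndarray, C: float = 2.0) -> float:
    """Worst-case (chi-square ball) risk surrogate over per-example losses.

    C is a radius-dependent constant; 2.0 is an illustrative value only.
    """
    def objective(eta: float) -> float:
        excess = np.maximum(losses - eta, 0.0)          # (loss - eta)_+
        return C * np.sqrt(np.mean(excess ** 2)) + eta  # dual form of the sup
    return minimize_scalar(objective).fun

# Example: a heavy-tailed loss distribution gets weighted up relative to ERM.
rng = np.random.default_rng(0)
losses = rng.exponential(scale=1.0, size=1000)
print("ERM risk:", losses.mean(), " DRO risk:", dro_risk(losses))
```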

Joint Ground and Aerial Package Delivery Services: A Stochastic Optimization Approach

Title Joint Ground and Aerial Package Delivery Services: A Stochastic Optimization Approach
Authors Suttinee Sawadsitang, Dusit Niyato, Puay-Siew Tan, Ping Wang
Abstract Unmanned aerial vehicles (UAVs), also known as drones, have emerged as a promising mode of fast, energy-efficient, and cost-effective package delivery. A considerable number of works have studied different aspects of drone package delivery service by a supplier, one of which is delivery planning. However, existing works addressing the planning issues consider only the simple case of perfect delivery, ignoring service interruptions, e.g., due to accidents, which are common in practice. Therefore, this paper introduces the joint ground and aerial delivery service optimization and planning (GADOP) framework. The framework explicitly incorporates the uncertainty of drone package delivery, i.e., takeoff and breakdown conditions. The GADOP framework aims to minimize the total delivery cost given practical constraints, e.g., a traveling distance limit. Specifically, we formulate the GADOP framework as a three-stage stochastic integer programming model. To deal with the high complexity of the problem, a decomposition method is adopted. Then, the performance of the GADOP framework is evaluated using two data sets: the Solomon benchmark suite and real data from one of the Singapore logistics companies. The performance evaluation clearly shows that the GADOP framework achieves a significantly lower total payment than baseline methods that do not take uncertainty into account.
Tasks Stochastic Optimization
Published 2018-08-14
URL http://arxiv.org/abs/1808.04617v2
PDF http://arxiv.org/pdf/1808.04617v2.pdf
PWC https://paperswithcode.com/paper/joint-ground-and-aerial-package-delivery
Repo
Framework
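
As a rough illustration of the stochastic-programming idea, here is a deliberately simplified two-stage toy model in PuLP (the paper formulates three stages): assign each package to drone or truck now, and pay an outsourcing penalty later in scenarios where the drone breaks down. All costs, probabilities, and the single breakdown scenario are made up for the example.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

packages = range(4)
scenarios = {"ok": 0.9, "breakdown": 0.1}        # illustrative scenario probabilities
drone_cost, truck_cost, penalty = 1.0, 3.0, 6.0  # illustrative per-package costs

prob = LpProblem("gadop_toy", LpMinimize)
x = {i: LpVariable(f"drone_{i}", cat=LpBinary) for i in packages}    # 1st stage: drone or truck
y = {(i, s): LpVariable(f"resend_{i}_{s}", cat=LpBinary)             # 2nd stage: recourse
     for i in packages for s in scenarios}

# First-stage cost plus expected recourse cost over scenarios.
prob += (lpSum(drone_cost * x[i] + truck_cost * (1 - x[i]) for i in packages)
         + lpSum(p * penalty * y[i, s] for i in packages for s, p in scenarios.items()))

# If the drone breaks down, every drone-assigned package must be re-sent.
for i in packages:
    prob += y[i, "breakdown"] >= x[i]

prob.solve()
print("expected total cost:", value(prob.objective))
```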

Sturm: Sparse Tubal-Regularized Multilinear Regression for fMRI

Title Sturm: Sparse Tubal-Regularized Multilinear Regression for fMRI
Authors Wenwen Li, Jian Lou, Shuo Zhou, Haiping Lu
Abstract While functional magnetic resonance imaging (fMRI) is important for healthcare/neuroscience applications, it is challenging to classify or interpret due to its multi-dimensional structure, high dimensionality, and the small number of samples available. Recent sparse multilinear regression methods based on tensors are emerging as promising solutions for fMRI, yet existing works rely on unfolding/folding operations and a tensor rank relaxation with limited tightness. The newly proposed tensor singular value decomposition (t-SVD) sheds light on new directions. In this work, we study t-SVD for sparse multilinear regression and propose a Sparse tubal-regularized multilinear regression (Sturm) method for fMRI. Specifically, the Sturm model performs multilinear regression with two regularization terms: a tubal tensor nuclear norm based on t-SVD and a standard L1 norm. We further derive the algorithm under the alternating direction method of multipliers framework. We perform experiments on four classification problems, including both resting-state fMRI for disease diagnosis and task-based fMRI for neural decoding. The results show the superior performance of Sturm in classifying fMRI using just a small number of voxels.
Tasks
Published 2018-12-04
URL http://arxiv.org/abs/1812.01496v1
PDF http://arxiv.org/pdf/1812.01496v1.pdf
PWC https://paperswithcode.com/paper/sturm-sparse-tubal-regularized-multilinear
Repo
Framework
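
A minimal sketch of the tubal-nuclear-norm ingredient: under the t-SVD, its proximal operator reduces to singular-value soft-thresholding of each frontal slice in the Fourier domain. This is the generic operator, not the authors' full ADMM solver.

```python
import numpy as np

def tubal_nuclear_prox(W: np.ndarray, tau: float) -> np.ndarray:
    """Prox of tau * ||W||_TNN for a 3-way tensor W of shape (n1, n2, n3)."""
    Wf = np.fft.fft(W, axis=2)                 # transform along the tubal mode
    out = np.empty_like(Wf)
    for k in range(W.shape[2]):                # per frontal slice in Fourier space
        U, s, Vh = np.linalg.svd(Wf[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)           # soft-threshold singular values
        out[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(out, axis=2))   # back to the original domain

W = np.random.randn(8, 8, 5)
print(tubal_nuclear_prox(W, tau=0.5).shape)    # (8, 8, 5)
```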

LeukoNet: DCT-based CNN architecture for the classification of normal versus Leukemic blasts in B-ALL Cancer

Title LeukoNet: DCT-based CNN architecture for the classification of normal versus Leukemic blasts in B-ALL Cancer
Authors Simmi Mourya, Sonaal Kant, Pulkit Kumar, Anubha Gupta, Ritu Gupta
Abstract Acute lymphoblastic leukemia (ALL) constitutes approximately 25% of pediatric cancers. In general, the task of identifying immature leukemic blasts from normal cells under the microscope is challenging because the two cell types appear morphologically similar. In this paper, we propose a deep learning framework for classifying immature leukemic blasts and normal cells. The proposed model combines Discrete Cosine Transform (DCT) domain features extracted via a CNN with Optical Density (OD) space features to build a robust classifier. Elaborate experiments have been conducted to validate the proposed LeukoNet classifier.
Tasks
Published 2018-10-18
URL http://arxiv.org/abs/1810.07961v2
PDF http://arxiv.org/pdf/1810.07961v2.pdf
PWC https://paperswithcode.com/paper/leukonet-dct-based-cnn-architecture-for-the
Repo
Framework
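
A small sketch of the two input spaces mentioned above, assuming a standard 2-D DCT and the usual Beer-Lambert optical-density transform; how LeukoNet fuses them inside the CNN is not reproduced here.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(gray: np.ndarray, keep: int = 16) -> np.ndarray:
    """Low-frequency block of the 2-D DCT as a compact frequency-domain input."""
    coeffs = dctn(gray, norm="ortho")
    return coeffs[:keep, :keep]

def optical_density(rgb: np.ndarray, i0: float = 255.0) -> np.ndarray:
    """Beer-Lambert optical density: OD = -log10(I / I0)."""
    return -np.log10(np.clip(rgb, 1, None) / i0)

img = np.random.randint(0, 256, size=(64, 64, 3)).astype(float)
print(dct_features(img.mean(axis=2)).shape, optical_density(img).shape)
```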

Two-stage iterative Procrustes match algorithm and its application for VQ-based speaker verification

Title Two-stage iterative Procrustes match algorithm and its application for VQ-based speaker verification
Authors Richeng Tan, Jing Li
Abstract Over the past decades, the Vector Quantization (VQ) model has been very popular across different pattern recognition areas, especially for feature-based tasks. However, the classification or regression performance of VQ-based systems always confronts the feature mismatch problem, which heavily degrades their performance. In this paper, we propose a two-stage iterative Procrustes match algorithm (TIPM) to address the feature mismatch problem for VQ-based applications. At the first stage, the algorithm removes mismatched feature vector pairs from a pair of input feature sets. The second stage then recollects correctly matched feature pairs that were discarded during the first stage. To evaluate the effectiveness of the proposed TIPM algorithm, speaker verification is used as the case study in this paper. The experiments were conducted on the TIMIT database and the results show that TIPM improves VQ-based speaker verification performance under both the clean condition and all noisy conditions.
Tasks Quantization, Speaker Verification
Published 2018-07-10
URL http://arxiv.org/abs/1807.03587v1
PDF http://arxiv.org/pdf/1807.03587v1.pdf
PWC https://paperswithcode.com/paper/two-stage-iterative-procrustes-match
Repo
Framework
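
A hedged sketch of the two-stage matching flavour: align the two feature sets with orthogonal Procrustes, drop the worst-matching pairs, refit the alignment, then re-admit previously dropped pairs that now agree. The thresholds and schedule below are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def two_stage_match(A: np.ndarray, B: np.ndarray, drop_frac: float = 0.2):
    R, _ = orthogonal_procrustes(A, B)
    resid = np.linalg.norm(A @ R - B, axis=1)

    # Stage 1: remove the most-mismatched pairs and refit the rotation.
    keep = resid <= np.quantile(resid, 1.0 - drop_frac)
    R, _ = orthogonal_procrustes(A[keep], B[keep])

    # Stage 2: re-admit discarded pairs that agree with the refined alignment.
    resid = np.linalg.norm(A @ R - B, axis=1)
    final = keep | (resid <= np.median(resid[keep]))
    return R, final

A = np.random.randn(100, 12)
B = A @ np.linalg.qr(np.random.randn(12, 12))[0] + 0.05 * np.random.randn(100, 12)
R, mask = two_stage_match(A, B)
print("pairs kept:", int(mask.sum()))
```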

A Whole Slide Image Grading Benchmark and Tissue Classification for Cervical Cancer Precursor Lesions with Inter-Observer Variability

Title A Whole Slide Image Grading Benchmark and Tissue Classification for Cervical Cancer Precursor Lesions with Inter-Observer Variability
Authors Abdulkadir Albayrak, Asli Unlu, Nurullah Calik, Abdulkerim Capar, Gokhan Bilgin, Behcet Ugur Toreyin, Bahar Muezzinoglu, Ilknur Turkmen, Lutfiye Durak-Ata
Abstract Cervical cancer, which develops from precancerous lesions caused by the Human Papilloma Virus (HPV), is one of the cancers preventable through periodic screening. There are two types of grading conventions widely accepted among pathologists. On the other hand, inter-observer variability is an important issue for the final diagnosis. In this paper, a whole-slide image grading benchmark for cervical cancer precursor lesions is introduced. The papillae of the cervical epithelium and overlapping-cell problems are handled, and a tissue classification method with a novel morphological feature, exploiting the relative orientation between the BM and the major axis of all nuclei, is developed and its performance is evaluated. Moreover, the inter-observer variability is also revealed by a thorough comparison among pathologists' decisions, as well as the final diagnosis.
Tasks
Published 2018-12-26
URL http://arxiv.org/abs/1812.10256v1
PDF http://arxiv.org/pdf/1812.10256v1.pdf
PWC https://paperswithcode.com/paper/a-whole-slide-image-grading-benchmark-and
Repo
Framework
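
A minimal sketch of the kind of morphological feature described above: the angle between a nucleus's major axis (from second-order image moments) and the local direction of the basement membrane (BM). The BM tangent is taken as given here; the paper estimates it from the tissue itself.

```python
import numpy as np

def major_axis_angle(mask: np.ndarray) -> float:
    """Orientation (radians) of the major axis of a binary nucleus mask."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()
    mu20 = np.mean((xs - xc) ** 2)
    mu02 = np.mean((ys - yc) ** 2)
    mu11 = np.mean((xs - xc) * (ys - yc))
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

def relative_orientation(mask: np.ndarray, bm_tangent_angle: float) -> float:
    """Angle between nucleus major axis and BM tangent, folded into [0, pi/2]."""
    diff = abs(major_axis_angle(mask) - bm_tangent_angle) % np.pi
    return min(diff, np.pi - diff)

nucleus = np.zeros((32, 32), dtype=bool)
nucleus[10:22, 14:18] = True                      # a roughly vertical ellipse stand-in
print(relative_orientation(nucleus, bm_tangent_angle=0.0))
```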

Spiking Linear Dynamical Systems on Neuromorphic Hardware for Low-Power Brain-Machine Interfaces

Title Spiking Linear Dynamical Systems on Neuromorphic Hardware for Low-Power Brain-Machine Interfaces
Authors David G. Clark, Jesse A. Livezey, Edward F. Chang, Kristofer E. Bouchard
Abstract Neuromorphic architectures achieve low-power operation by using many simple spiking neurons in lieu of traditional hardware. Here, we develop methods for precise linear computations in spiking neural networks and use these methods to map the evolution of a linear dynamical system (LDS) onto an existing neuromorphic chip: IBM’s TrueNorth. We analytically characterize, and numerically validate, the discrepancy between the spiking LDS state sequence and that of its non-spiking counterpart. These analytical results shed light on the multiway tradeoff between time, space, energy, and accuracy in neuromorphic computation. To demonstrate the utility of our work, we implemented a neuromorphic Kalman filter (KF) and used it for offline decoding of human vocal pitch from neural data. The neuromorphic KF could be used for low-power filtering in domains beyond neuroscience, such as navigation or robotics.
Tasks
Published 2018-05-22
URL http://arxiv.org/abs/1805.08889v2
PDF http://arxiv.org/pdf/1805.08889v2.pdf
PWC https://paperswithcode.com/paper/spiking-linear-dynamical-systems-on
Repo
Framework
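
For reference, the non-spiking computation being mapped onto hardware is a standard Kalman filter for a linear dynamical system x_{t+1} = A x_t + w, y_t = C x_t + v; a plain NumPy version is sketched below. The spiking TrueNorth mapping itself is the paper's contribution and is not shown.

```python
import numpy as np

def kalman_filter(A, C, Q, R, ys, x0, P0):
    x, P, xs = x0, P0, []
    for y in ys:
        x, P = A @ x, A @ P @ A.T + Q                    # predict
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)                   # Kalman gain
        x = x + K @ (y - C @ x)                          # update
        P = (np.eye(len(x)) - K @ C) @ P
        xs.append(x.copy())
    return np.array(xs)

A = np.array([[1.0, 0.1], [0.0, 1.0]])                   # toy 2-D LDS
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
ys = [np.array([t * 0.1 + np.random.randn() * 0.3]) for t in range(50)]
print(kalman_filter(A, C, Q, R, ys, np.zeros(2), np.eye(2)).shape)   # (50, 2)
```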

Learning based Facial Image Compression with Semantic Fidelity Metric

Title Learning based Facial Image Compression with Semantic Fidelity Metric
Authors Zhibo Chen, Tianyu He
Abstract Surveillance and security scenarios usually require highly efficient facial image compression schemes for face recognition and identification. However, both traditional general-purpose image codecs and specialized facial image compression schemes only heuristically refine the codec according to a face verification accuracy metric. We propose a Learning based Facial Image Compression (LFIC) framework with a novel Regionally Adaptive Pooling (RAP) module whose parameters can be automatically optimized according to gradient feedback from an integrated hybrid semantic fidelity metric, including a successful exploration of applying a Generative Adversarial Network (GAN) directly as a metric within the compression scheme. The experimental results verify the framework's efficiency, demonstrating bitrate savings of 71.41%, 48.28% and 52.67% over JPEG2000, WebP and neural-network-based codecs, respectively, under the same face verification accuracy distortion metric. We also evaluate LFIC's superior performance gain compared with the latest facial image codecs. Visual experiments also provide some interesting insight into how LFIC automatically captures the information in critical areas based on semantic distortion metrics for optimized compression, which is quite different from the heuristic optimization in traditional image compression algorithms.
Tasks Face Recognition, Face Verification, Image Compression
Published 2018-12-25
URL http://arxiv.org/abs/1812.10067v2
PDF http://arxiv.org/pdf/1812.10067v2.pdf
PWC https://paperswithcode.com/paper/learning-based-facial-image-compression-with
Repo
Framework
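
A speculative sketch of the general idea behind regionally adaptive pooling: each spatial region gets a learnable importance weight that a downstream fidelity loss can adjust through backpropagation. The class name and all details below are hypothetical reconstructions of the concept, not the authors' RAP module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionWeightedPool(nn.Module):
    """Toy pooling layer with learnable per-region importance weights."""
    def __init__(self, grid: int = 8):
        super().__init__()
        self.region_logits = nn.Parameter(torch.zeros(grid, grid))  # learnable weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W); weight each spatial region before downsampling.
        w = torch.sigmoid(self.region_logits)
        w = F.interpolate(w[None, None], size=x.shape[-2:], mode="nearest")
        return F.avg_pool2d(x * w, kernel_size=2)

pool = RegionWeightedPool()
print(pool(torch.randn(1, 3, 64, 64)).shape)   # torch.Size([1, 3, 32, 32])
```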

Past Visions of Artificial Futures: One Hundred and Fifty Years under the Spectre of Evolving Machines

Title Past Visions of Artificial Futures: One Hundred and Fifty Years under the Spectre of Evolving Machines
Authors Tim Taylor, Alan Dorin
Abstract The influence of Artificial Intelligence (AI) and Artificial Life (ALife) technologies upon society, and their potential to fundamentally shape the future evolution of humankind, are topics very much at the forefront of current scientific, governmental and public debate. While these might seem like very modern concerns, they have a long history that is often disregarded in contemporary discourse. Insofar as current debates do acknowledge the history of these ideas, they rarely look back further than the origin of the modern digital computer age in the 1940s-50s. In this paper we explore the earlier history of these concepts. We focus in particular on the idea of self-reproducing and evolving machines, and potential implications for our own species. We show that discussion of these topics arose in the 1860s, within a decade of the publication of Darwin’s The Origin of Species, and attracted increasing interest from scientists, novelists and the general public in the early 1900s. After introducing the relevant work from this period, we categorise the various visions presented by these authors of the future implications of evolving machines for humanity. We suggest that current debates on the co-evolution of society and technology can be enriched by a proper appreciation of the long history of the ideas involved.
Tasks Artificial Life
Published 2018-06-04
URL http://arxiv.org/abs/1806.01322v1
PDF http://arxiv.org/pdf/1806.01322v1.pdf
PWC https://paperswithcode.com/paper/past-visions-of-artificial-futures-one
Repo
Framework

Verifiable Reinforcement Learning via Policy Extraction

Title Verifiable Reinforcement Learning via Policy Extraction
Authors Osbert Bastani, Yewen Pu, Armando Solar-Lezama
Abstract While deep reinforcement learning has successfully solved many challenging control tasks, its real-world applicability has been limited by the inability to ensure the safety of learned policies. We propose an approach to verifiable reinforcement learning by training decision tree policies, which can represent complex policies (since they are nonparametric), yet can be efficiently verified using existing techniques (since they are highly structured). The challenge is that decision tree policies are difficult to train. We propose VIPER, an algorithm that combines ideas from model compression and imitation learning to learn decision tree policies guided by a DNN policy (called the oracle) and its Q-function, and show that it substantially outperforms two baselines. We use VIPER to (i) learn a provably robust decision tree policy for a variant of Atari Pong with a symbolic state space, (ii) learn a decision tree policy for a toy game based on Pong that provably never loses, and (iii) learn a provably stable decision tree policy for cart-pole. In each case, the decision tree policy achieves performance equal to that of the original DNN policy.
Tasks Imitation Learning, Model Compression
Published 2018-05-22
URL http://arxiv.org/abs/1805.08328v2
PDF http://arxiv.org/pdf/1805.08328v2.pdf
PWC https://paperswithcode.com/paper/verifiable-reinforcement-learning-via-policy
Repo
Framework
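
A hedged sketch of the extraction step: roll out an oracle, weight each visited state by how much the action choice matters (a Q-value gap), and fit a decision tree on the weighted dataset. The toy oracle and random states below stand in for the paper's DNN policy and its Atari/cart-pole environments.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def oracle_q(states: np.ndarray) -> np.ndarray:
    """Stand-in Q-function with two actions: Q(s, a) for a in {0, 1}."""
    return np.stack([states[:, 0], states[:, 1]], axis=1)

states = rng.normal(size=(2000, 4))                 # states gathered from rollouts
q = oracle_q(states)
actions = q.argmax(axis=1)                          # oracle's greedy actions
weights = q.max(axis=1) - q.min(axis=1)             # how costly a wrong action is

tree = DecisionTreeClassifier(max_depth=4).fit(states, actions, sample_weight=weights)
print("agreement with oracle:", (tree.predict(states) == actions).mean())
```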

Weakly-Supervised Hierarchical Text Classification

Title Weakly-Supervised Hierarchical Text Classification
Authors Yu Meng, Jiaming Shen, Chao Zhang, Jiawei Han
Abstract Hierarchical text classification, which aims to classify text documents into a given hierarchy, is an important task in many real-world applications. Recently, deep neural models are gaining increasing popularity for text classification due to their expressive power and minimum requirement for feature engineering. However, applying deep neural networks for hierarchical text classification remains challenging, because they heavily rely on a large amount of training data and meanwhile cannot easily determine appropriate levels of documents in the hierarchical setting. In this paper, we propose a weakly-supervised neural method for hierarchical text classification. Our method does not require a large amount of training data but requires only easy-to-provide weak supervision signals such as a few class-related documents or keywords. Our method effectively leverages such weak supervision signals to generate pseudo documents for model pre-training, and then performs self-training on real unlabeled data to iteratively refine the model. During the training process, our model features a hierarchical neural structure, which mimics the given hierarchy and is capable of determining the proper levels for documents with a blocking mechanism. Experiments on three datasets from different domains demonstrate the efficacy of our method compared with a comprehensive set of baselines.
Tasks Feature Engineering, Text Classification
Published 2018-12-29
URL http://arxiv.org/abs/1812.11270v1
PDF http://arxiv.org/pdf/1812.11270v1.pdf
PWC https://paperswithcode.com/paper/weakly-supervised-hierarchical-text
Repo
Framework
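
A hedged, flattened sketch of the pre-train-then-self-train loop described above, reduced to a single-level bag-of-words classifier: fit on a handful of seed documents, then iteratively pull in unlabeled documents the model is confident about. The hierarchical network and pseudo-document generator are not reproduced, and the seed texts and confidence cut-off are purely illustrative.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = ["stock market shares profit", "match score goal team", "election vote senate law"]
seed_labels = np.array([0, 1, 2])                        # finance, sports, politics seeds
unlabeled = ["the team won the final match", "shares fell on the stock market",
             "voters cast their vote today", "the goalkeeper saved a late goal"]

vec = TfidfVectorizer().fit(seed_docs + unlabeled)
X_seed, X_unlab = vec.transform(seed_docs), vec.transform(unlabeled)
X_train, y_train = X_seed, seed_labels

for _ in range(3):                                       # self-training iterations
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) >= np.median(proba.max(axis=1))   # illustrative cut-off
    X_train = vstack([X_seed, X_unlab[confident]])
    y_train = np.concatenate([seed_labels, proba.argmax(axis=1)[confident]])

print(clf.predict(X_unlab))
```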

Analysis Dictionary Learning based Classification: Structure for Robustness

Title Analysis Dictionary Learning based Classification: Structure for Robustness
Authors Wen Tang, Ashkan Panahi, Hamid Krim, Liyi Dai
Abstract A discriminative structured analysis dictionary is proposed for the classification task. A structure of the union of subspaces (UoS) is integrated into conventional analysis dictionary learning to enhance the capability of discrimination. A simple classifier is also simultaneously included in the formulated functional to ensure a more consistent classification. The solution of the algorithm is efficiently obtained by the linearized alternating direction method of multipliers. Moreover, a distributed structured analysis dictionary learning scheme is also presented to address large-scale datasets. It can train the structured analysis dictionaries group- (class-) independently on different machines/cores/threads, and therefore avoids a high computational cost. A consensus structured analysis dictionary and a global classifier are jointly learned in the distributed approach to safeguard the discriminative power and the efficiency of classification. Experiments demonstrate that our method achieves comparable or better performance than state-of-the-art algorithms in a variety of visual classification tasks. In addition, the training and testing computational complexities are also greatly reduced.
Tasks Dictionary Learning
Published 2018-07-13
URL https://arxiv.org/abs/1807.04899v2
PDF https://arxiv.org/pdf/1807.04899v2.pdf
PWC https://paperswithcode.com/paper/analysis-dictionary-learning-based
Repo
Framework
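
A hedged sketch of what an analysis-dictionary classifier computes at test time: the learned analysis operator Omega maps a sample to a (structured, typically sparse) code, and a jointly learned linear classifier W acts on that code. The UoS structure, the linearized-ADMM training, and the distributed consensus scheme from the paper are not reproduced; Omega and W below are random stand-ins.

```python
import numpy as np

def adl_predict(x: np.ndarray, Omega: np.ndarray, W: np.ndarray) -> int:
    """Classify one sample: code = Omega x, label = argmax_c (W code)_c."""
    code = Omega @ x
    return int(np.argmax(W @ code))

rng = np.random.default_rng(0)
d, k, n_classes = 20, 40, 3
Omega = rng.normal(size=(k, d))          # stand-in for a learned analysis dictionary
W = rng.normal(size=(n_classes, k))      # stand-in for the jointly learned classifier
print(adl_predict(rng.normal(size=d), Omega, W))
```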

Products of Many Large Random Matrices and Gradients in Deep Neural Networks

Title Products of Many Large Random Matrices and Gradients in Deep Neural Networks
Authors Boris Hanin, Mihai Nica
Abstract We study products of random matrices in the regime where the number of terms and the size of the matrices simultaneously tend to infinity. Our main theorem is that the logarithm of the $\ell_2$ norm of such a product applied to any fixed vector is asymptotically Gaussian. The fluctuations we find can be thought of as a finite temperature correction to the limit in which first the size and then the number of matrices tend to infinity. Depending on the scaling limit considered, the mean and variance of the limiting Gaussian depend only on either the first two or the first four moments of the measure from which matrix entries are drawn. We also obtain explicit error bounds on the moments of the norm and the Kolmogorov-Smirnov distance to a Gaussian. Finally, we apply our result to obtain precise information about the stability of gradients in randomly initialized deep neural networks with ReLU activations. This provides a quantitative measure of the extent to which the exploding and vanishing gradient problem occurs in a fully connected neural network with ReLU activations and a given architecture.
Tasks
Published 2018-12-14
URL http://arxiv.org/abs/1812.05994v1
PDF http://arxiv.org/pdf/1812.05994v1.pdf
PWC https://paperswithcode.com/paper/products-of-many-large-random-matrices-and
Repo
Framework
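
The main theorem invites a quick numerical check: across independent draws, the log of the l2 norm of a long product of large random matrices applied to a fixed vector should look approximately Gaussian. The matrix size, depth, and entry scaling below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, depth, trials = 50, 30, 2000
v = np.ones(n) / np.sqrt(n)

log_norms = np.empty(trials)
for t in range(trials):
    x = v.copy()
    for _ in range(depth):
        x = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n)) @ x   # variance-normalized factor
    log_norms[t] = np.log(np.linalg.norm(x))

print("mean:", log_norms.mean(), "std:", log_norms.std())
# A histogram of log_norms compared with a fitted normal makes the Gaussian
# fluctuation visible; in a deep ReLU network these fluctuations govern how
# gradients explode or vanish with depth.
```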

Temporally Coherent Video Harmonization Using Adversarial Networks

Title Temporally Coherent Video Harmonization Using Adversarial Networks
Authors Haozhi Huang, Senzhe Xu, Junxiong Cai, Wei Liu, Shimin Hu
Abstract Compositing is one of the most important editing operations for images and videos. The process of improving the realism of composite results is often called harmonization. Previous approaches to harmonization mainly focus on images. In this work, we take one step further and attack the problem of video harmonization. Specifically, we train a convolutional neural network in an adversarial way, exploiting a pixel-wise disharmony discriminator to achieve more realistic harmonized results and introducing a temporal loss to increase temporal consistency between consecutive harmonized frames. Thanks to the pixel-wise disharmony discriminator, we are also able to relieve the need for input foreground masks. Since existing video datasets with ground-truth foreground masks and optical flows are not sufficiently large, we propose a simple yet efficient method to build a synthetic dataset supporting supervised training of the proposed adversarial network. Experiments show that training on our synthetic dataset generalizes well to the real-world composite dataset. Also, our method successfully incorporates temporal consistency during training and achieves more harmonious results than previous methods.
Tasks
Published 2018-09-05
URL http://arxiv.org/abs/1809.01372v1
PDF http://arxiv.org/pdf/1809.01372v1.pdf
PWC https://paperswithcode.com/paper/temporally-coherent-video-harmonization-using
Repo
Framework
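
A hedged sketch of a temporal-consistency term of the kind the abstract describes: the harmonized current frame is compared against the previous harmonized frame warped forward by optical flow. The adversarial part and the pixel-wise disharmony discriminator are not shown, and the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def flow_warp(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp img (N, C, H, W) by a dense flow field (N, 2, H, W) given in pixels."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=img.dtype),
                            torch.arange(w, dtype=img.dtype), indexing="ij")
    grid_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1        # normalize to [-1, 1]
    grid_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack([grid_x, grid_y], dim=-1)
    return F.grid_sample(img, grid, align_corners=True)

def temporal_loss(harm_t: torch.Tensor, harm_prev: torch.Tensor,
                  flow: torch.Tensor) -> torch.Tensor:
    """L1 distance between frame t and the flow-warped frame t-1."""
    return (harm_t - flow_warp(harm_prev, flow)).abs().mean()

frame_t, frame_prev = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
flow = torch.zeros(1, 2, 32, 32)
print(temporal_loss(frame_t, frame_prev, flow).item())
```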

Improving Zero-Shot Translation of Low-Resource Languages

Title Improving Zero-Shot Translation of Low-Resource Languages
Authors Surafel M. Lakew, Quintino F. Lotito, Matteo Negri, Marco Turchi, Marcello Federico
Abstract Recent work on multilingual neural machine translation has reported competitive performance with respect to bilingual models and surprisingly good performance even on (zero-shot) translation directions not observed at training time. We investigate here zero-shot translation in a particularly low-resource multilingual setting. We propose a simple iterative training procedure that leverages a duality of translations directly generated by the system for the zero-shot directions. The translations produced by the system (sub-optimal since they contain mixed language from the shared vocabulary) are then used, together with the original parallel data, to iteratively re-train the multilingual network. Over time, this allows the system to learn from its own increasingly better output. Our approach proves effective in improving the two zero-shot directions of our multilingual model. In particular, we observed gains of about 9 BLEU points over a baseline multilingual model and up to 2.08 BLEU over a pivoting mechanism using two bilingual models. Further analysis shows that there is also a slight improvement in the non-zero-shot language directions.
Tasks Machine Translation
Published 2018-11-04
URL http://arxiv.org/abs/1811.01389v1
PDF http://arxiv.org/pdf/1811.01389v1.pdf
PWC https://paperswithcode.com/paper/improving-zero-shot-translation-of-low
Repo
Framework
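
A skeletal sketch of the iterative self-training loop, with the multilingual NMT system reduced to placeholder callables; train_model and the model it returns are hypothetical stubs standing in for a real training pipeline, not any particular library.

```python
from typing import Callable, List, Tuple

ParallelData = List[Tuple[str, str, str]]   # (src_lang, tgt_lang, "src ||| tgt")

def self_train_zero_shot(parallel: ParallelData,
                         zero_shot_sentences: List[Tuple[str, str, str]],
                         train_model: Callable[[ParallelData], Callable],
                         rounds: int = 3) -> Callable:
    """Iteratively retrain on the system's own zero-shot translations."""
    data = list(parallel)
    model = train_model(data)
    for _ in range(rounds):
        # Translate the zero-shot directions with the current (imperfect) model
        # and feed its output back as additional synthetic training pairs.
        synthetic = [(src, tgt, f"{sent} ||| {model(src, tgt, sent)}")
                     for src, tgt, sent in zero_shot_sentences]
        model = train_model(data + synthetic)
    return model
```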