October 18, 2019

3164 words 15 mins read

Paper Group ANR 502


Theoretical analysis and propositions for “ontology citation”

Title Theoretical analysis and propositions for “ontology citation”
Authors Biswanath Dutta
Abstract Ontology citation, the practice of referring to an ontology in the same fashion the scientific community routinely follows when providing bibliographic references to other scholarly works, has not received the attention it deserves. Interestingly, none of the existing standard citation styles (e.g., APA, CMOS, and IEEE) so far include the ontology as a citable information source alongside journal articles, books, websites, etc. Little work can be found in the literature on this topic, although various issues and aspects of it demand a thorough study. For instance, what should be cited: the publication that describes the ontology, or the ontology itself? Other open questions concern the citation format and style, the motivations for ontology citation, citation principles, an ontology impact factor, and citation analysis. In this work, we primarily analyse the current ontology citation practices and the related issues. We illustrate the various motivations for and the basic principles of ontology citation. We also propose a template for referring to the source of ontologies.
Tasks
Published 2018-09-05
URL http://arxiv.org/abs/1809.01462v1
PDF http://arxiv.org/pdf/1809.01462v1.pdf
PWC https://paperswithcode.com/paper/theoretical-analysis-and-propositions-for
Repo
Framework

Longitudinal Face Aging in the Wild - Recent Deep Learning Approaches

Title Longitudinal Face Aging in the Wild - Recent Deep Learning Approaches
Authors Chi Nhan Duong, Khoa Luu, Kha Gia Quach, Tien D. Bui
Abstract Face aging has attracted considerable attention and interest from the computer vision community in recent years. Numerous approaches, ranging from purely image processing techniques to deep learning structures, have been proposed in the literature. In this paper, we review recent developments in modern deep-learning-based approaches, i.e., deep generative models, for the face aging task. Their structures, formulations, learning algorithms, and synthesized results are presented with systematic discussion. Moreover, we also review the aging databases most methods use to learn the aging process.
Tasks
Published 2018-02-23
URL http://arxiv.org/abs/1802.08726v1
PDF http://arxiv.org/pdf/1802.08726v1.pdf
PWC https://paperswithcode.com/paper/longitudinal-face-aging-in-the-wild-recent
Repo
Framework

SAM-GCNN: A Gated Convolutional Neural Network with Segment-Level Attention Mechanism for Home Activity Monitoring

Title SAM-GCNN: A Gated Convolutional Neural Network with Segment-Level Attention Mechanism for Home Activity Monitoring
Authors Yu-Han Shen, Ke-Xin He, Wei-Qiang Zhang
Abstract In this paper, we propose a method for home activity monitoring. We demonstrate our model on the dataset of Detection and Classification of Acoustic Scenes and Events (DCASE) 2018 Challenge Task 5, which aims to classify multi-channel audio into one of the provided pre-defined classes, all of which are daily activities performed in a home environment. To tackle this task, we propose a gated convolutional neural network with a segment-level attention mechanism (SAM-GCNN). The proposed framework is a convolutional model with two auxiliary modules: a gated convolutional neural network and a segment-level attention mechanism. Furthermore, we adopt model ensembling to enhance the generalization capability of our model. We evaluated our work on the development dataset of DCASE 2018 Task 5 and achieved competitive performance, with the macro-averaged F1 score increasing from 83.76% to 89.33% compared with the convolutional baseline system.
Tasks Home Activity Monitoring
Published 2018-10-03
URL http://arxiv.org/abs/1810.03986v2
PDF http://arxiv.org/pdf/1810.03986v2.pdf
PWC https://paperswithcode.com/paper/sam-gcnn-a-gated-convolutional-neural-network
Repo
Framework
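The segment-level attention named in the SAM-GCNN abstract can be sketched as a scored, softmax-weighted pooling over segment embeddings. This is a minimal assumed form, not the authors' exact layer; the scoring vector `w` stands in for the learned attention parameters:

```python
import numpy as np

def segment_attention(segments, w):
    """Softmax-weighted pooling over segment embeddings (assumed form).

    segments: (T, D) array, one embedding per audio segment.
    w: (D,) scoring vector standing in for the learned attention weights.
    Returns the (D,) attention-weighted representation fed to the classifier.
    """
    scores = segments @ w                    # one relevance score per segment
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ segments
```

Segments with higher scores dominate the pooled vector, letting the classifier focus on the informative parts of a long recording.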

Laconic Deep Learning Computing

Title Laconic Deep Learning Computing
Authors Sayeh Sharify, Mostafa Mahmoud, Alberto Delmas Lascorz, Milos Nikolic, Andreas Moshovos
Abstract We motivate a method for transparently identifying ineffectual computations in unmodified deep learning models without affecting accuracy. Specifically, we show that if we decompose multiplications down to the bit level, the amount of work performed during inference for image classification models can be consistently reduced by two orders of magnitude. In the best case studied, a sparse variant of AlexNet, this approach can ideally reduce computation work by more than 500x. We present Laconic, a hardware accelerator that implements this approach to improve execution time and energy efficiency for inference with deep learning networks. Laconic judiciously gives up some of the work reduction potential to yield a low-cost, simple, and energy-efficient design that outperforms other state-of-the-art accelerators. For example, a Laconic configuration that uses a weight memory interface with just 128 wires outperforms a conventional accelerator with a 2K-wire weight memory interface by 2.3x on average while being 2.13x more energy efficient on average. A Laconic configuration that uses a 1K-wire weight memory interface outperforms the 2K-wire conventional accelerator by 15.4x and is 1.95x more energy efficient. Laconic does not require, but rewards, advances in model design such as a reduction in precision, the use of alternate numeric representations that reduce the number of bits that are “1”, or an increase in weight or activation sparsity.
Tasks Image Classification
Published 2018-05-10
URL http://arxiv.org/abs/1805.04513v1
PDF http://arxiv.org/pdf/1805.04513v1.pdf
PWC https://paperswithcode.com/paper/laconic-deep-learning-computing
Repo
Framework
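The bit-level observation behind Laconic can be illustrated with a tiny Python estimate: a bit-parallel multiplier processes every bit of a weight, while a scheme that skips zero bits performs work only for the "1" bits. This is an illustrative counting exercise, not a model of the Laconic hardware:

```python
def popcount(x: int) -> int:
    """Number of set bits in a non-negative integer."""
    return bin(x).count("1")

def work_reduction(weights, bits=8):
    """Ratio of bit-parallel work (bits per weight) to effectual-bit work,
    i.e., how much multiplication work a zero-bit-skipping scheme avoids."""
    full_work = bits * len(weights)
    effectual = sum(popcount(w) for w in weights)
    return full_work / max(effectual, 1)
```

Sparse, low-precision weights have few "1" bits, which is why the paper reports the largest reductions on a sparse AlexNet variant.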

Towards the Creation of a Large Corpus of Synthetically-Identified Clinical Notes

Title Towards the Creation of a Large Corpus of Synthetically-Identified Clinical Notes
Authors Willie Boag, Tristan Naumann, Peter Szolovits
Abstract Clinical notes often describe the most important aspects of a patient’s physiology and are therefore critical to medical research. However, these notes are typically inaccessible to researchers without prior removal of sensitive protected health information (PHI), a natural language processing (NLP) task referred to as de-identification. Tools to automatically de-identify clinical notes are needed but are difficult to create without access to those very same notes containing PHI. This work presents a first step toward creating a large synthetically-identified corpus of clinical notes and corresponding PHI annotations in order to facilitate the development of de-identification tools. Further, one such tool is evaluated against this corpus in order to understand the advantages and shortcomings of this approach.
Tasks
Published 2018-03-07
URL http://arxiv.org/abs/1803.02728v1
PDF http://arxiv.org/pdf/1803.02728v1.pdf
PWC https://paperswithcode.com/paper/towards-the-creation-of-a-large-corpus-of
Repo
Framework

Augmenting Statistical Machine Translation with Subword Translation of Out-of-Vocabulary Words

Title Augmenting Statistical Machine Translation with Subword Translation of Out-of-Vocabulary Words
Authors Nelson F. Liu, Jonathan May, Michael Pust, Kevin Knight
Abstract Most statistical machine translation systems cannot translate words that are unseen in the training data. However, humans can translate many classes of out-of-vocabulary (OOV) words (e.g., novel morphological variants, misspellings, and compounds) without context by using orthographic clues. Following this observation, we describe and evaluate several general methods for OOV translation that use only subword information. We pose the OOV translation problem as a standalone task and intrinsically evaluate our approaches on fourteen typologically diverse languages across varying resource levels. Adding OOV translators to a statistical machine translation system yields consistent BLEU gains (0.5 points on average, and up to 2.0) for all fourteen languages, especially in low-resource scenarios.
Tasks Machine Translation
Published 2018-08-16
URL http://arxiv.org/abs/1808.05700v1
PDF http://arxiv.org/pdf/1808.05700v1.pdf
PWC https://paperswithcode.com/paper/augmenting-statistical-machine-translation
Repo
Framework

Faster Balanced Clusterings in High Dimension

Title Faster Balanced Clusterings in High Dimension
Authors Hu Ding
Abstract The problem of constrained clustering has attracted significant attention in the past decades. In this paper, we study the balanced $k$-center, $k$-median, and $k$-means clustering problems where the size of each cluster is constrained by the given lower and upper bounds. The problems are motivated by the applications in processing large-scale data in high dimension. Existing methods often need to compute complicated matchings (or min cost flows) to satisfy the balance constraint, and thus suffer from high complexities especially in high dimension. We develop an effective framework for the three balanced clustering problems to address this issue, and our method is based on a novel spatial partition idea in geometry. For the balanced $k$-center clustering, we provide a $4$-approximation algorithm that improves the existing approximation factors; for the balanced $k$-median and $k$-means clusterings, our algorithms yield constant and $(1+\epsilon)$-approximation factors with any $\epsilon>0$. More importantly, our algorithms achieve linear or nearly linear running times when $k$ is a constant, and significantly improve the existing ones. Our results can be easily extended to metric balanced clusterings and the running times are sub-linear in terms of the complexity of $n$-point metric.
Tasks
Published 2018-09-04
URL http://arxiv.org/abs/1809.00932v2
PDF http://arxiv.org/pdf/1809.00932v2.pdf
PWC https://paperswithcode.com/paper/faster-balanced-clusterings-in-high-dimension
Repo
Framework
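For context, the classical farthest-point heuristic of Gonzalez gives a 2-approximation for the unconstrained k-center problem; the paper's contribution is handling the additional lower and upper bounds on cluster sizes, which this background sketch does not attempt:

```python
import math

def gonzalez_k_center(points, k):
    """Farthest-point 2-approximation for unconstrained k-center
    (Gonzalez-style greedy): repeatedly pick the point farthest
    from the centers chosen so far."""
    centers = [points[0]]
    dists = [math.dist(p, centers[0]) for p in points]
    for _ in range(k - 1):
        far = max(range(len(points)), key=lambda j: dists[j])
        centers.append(points[far])
        # maintain each point's distance to its nearest chosen center
        dists = [min(d, math.dist(p, points[far]))
                 for d, p in zip(dists, points)]
    return centers
```

The greedy pass runs in O(nk) time per dimension, which hints at why avoiding the matching/min-cost-flow machinery matters for the balanced variants in high dimension.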

Integrative Multi-View Reduced-Rank Regression: Bridging Group-Sparse and Low-Rank Models

Title Integrative Multi-View Reduced-Rank Regression: Bridging Group-Sparse and Low-Rank Models
Authors Gen Li, Xiaokang Liu, Kun Chen
Abstract Multi-view data have been routinely collected in various fields of science and engineering. A general problem is to study the predictive association between multivariate responses and multi-view predictor sets, all of which can be of high dimensionality. It is likely that only a few views are relevant to prediction, and the predictors within each relevant view contribute to the prediction collectively rather than sparsely. We cast this new problem under the familiar multivariate regression framework and propose an integrative reduced-rank regression (iRRR), where each view has its own low-rank coefficient matrix. As such, latent features are extracted from each view in a supervised fashion. For model estimation, we develop a convex composite nuclear norm penalization approach, which admits an efficient algorithm via alternating direction method of multipliers. Extensions to non-Gaussian and incomplete data are discussed. Theoretically, we derive non-asymptotic oracle bounds of iRRR under a restricted eigenvalue condition. Our results recover oracle bounds of several special cases of iRRR including Lasso, group Lasso and nuclear norm penalized regression. Therefore, iRRR seamlessly bridges group-sparse and low-rank methods and can achieve substantially faster convergence rate under realistic settings of multi-view learning. Simulation studies and an application in the Longitudinal Studies of Aging further showcase the efficacy of the proposed methods.
Tasks Multi-View Learning
Published 2018-07-26
URL http://arxiv.org/abs/1807.10375v1
PDF http://arxiv.org/pdf/1807.10375v1.pdf
PWC https://paperswithcode.com/paper/integrative-multi-view-reduced-rank
Repo
Framework
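The workhorse inside ADMM for nuclear-norm-penalized problems such as iRRR is singular-value soft-thresholding, the proximal operator of the scaled nuclear norm. The sketch below shows that single building block, not the full iRRR algorithm:

```python
import numpy as np

def prox_nuclear(M, tau):
    """Proximal operator of tau * (nuclear norm): shrink each singular
    value of M toward zero by tau, zeroing those below the threshold."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Because small singular values are zeroed, repeatedly applying this operator inside ADMM drives each view's coefficient matrix toward low rank, which is how the method extracts a few supervised latent features per view.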

Gridbot: An autonomous robot controlled by a Spiking Neural Network mimicking the brain’s navigational system

Title Gridbot: An autonomous robot controlled by a Spiking Neural Network mimicking the brain’s navigational system
Authors Guangzhi Tang, Konstantinos P. Michmizos
Abstract It is true that the “best” neural network is not necessarily the one with the most “brain-like” behavior. Understanding biological intelligence, however, is a fundamental goal for several distinct disciplines, and translating our understanding of intelligence to machines is a fundamental problem in robotics. Propelled by new advancements in neuroscience, we developed a spiking neural network (SNN) that draws on mounting experimental evidence that a number of individual neurons are associated with spatial navigation. By following the brain’s structure, our model assumes no initial all-to-all connectivity, which could inhibit its translation to neuromorphic hardware, and learns an uncharted territory by mapping its identified components onto a limited number of neural representations through spike-timing-dependent plasticity (STDP). In our ongoing effort to apply a bio-inspired SNN-controlled robot to real-world spatial mapping applications, we demonstrate here how an SNN may robustly control an autonomous robot in mapping and exploring an unknown environment, while compensating for its own intrinsic hardware imperfections, such as partial or total loss of visual input.
Tasks
Published 2018-07-05
URL http://arxiv.org/abs/1807.02155v1
PDF http://arxiv.org/pdf/1807.02155v1.pdf
PWC https://paperswithcode.com/paper/gridbot-an-autonomous-robot-controlled-by-a
Repo
Framework

The Impatient May Use Limited Optimism to Minimize Regret

Title The Impatient May Use Limited Optimism to Minimize Regret
Authors Michaël Cadilhac, Guillermo A. Pérez, Marie van den Bogaard
Abstract Discounted-sum games provide a formal model for the study of reinforcement learning, where the agent is enticed to get rewards early since later rewards are discounted. When the agent interacts with the environment, she may regret her actions, realizing that a previous choice was suboptimal given the behavior of the environment. The main contribution of this paper is a PSPACE algorithm for computing the minimum possible regret of a given game. To this end, several results of independent interest are shown. (1) We identify a class of regret-minimizing and admissible strategies that first assume that the environment is collaborating, then assume it is adversarial—the precise timing of the switch is key here. (2) Disregarding the computational cost of numerical analysis, we provide an NP algorithm that checks that the regret entailed by a given time-switching strategy exceeds a given value. (3) We show that determining whether a strategy minimizes regret is decidable in PSPACE.
Tasks
Published 2018-11-17
URL http://arxiv.org/abs/1811.07146v1
PDF http://arxiv.org/pdf/1811.07146v1.pdf
PWC https://paperswithcode.com/paper/the-impatient-may-use-limited-optimism-to
Repo
Framework

Can Artificial Intelligence Reliably Report Chest X-Rays?: Radiologist Validation of an Algorithm trained on 2.3 Million X-Rays

Title Can Artificial Intelligence Reliably Report Chest X-Rays?: Radiologist Validation of an Algorithm trained on 2.3 Million X-Rays
Authors Preetham Putha, Manoj Tadepalli, Bhargava Reddy, Tarun Raj, Justy Antony Chiramal, Shalini Govil, Namita Sinha, Manjunath KS, Sundeep Reddivari, Ammar Jagirdar, Pooja Rao, Prashant Warier
Abstract Background: Chest X-rays are the most commonly performed, cost-effective diagnostic imaging tests ordered by physicians. A clinically validated AI system that can reliably separate normal from abnormal scans can be invaluable, particularly in low-resource settings. The aim of this study was to develop and validate a deep learning system to detect various abnormalities seen on a chest X-ray. Methods: A deep learning system was trained on 2.3 million chest X-rays and their corresponding radiology reports to identify various abnormalities seen on a chest X-ray. The system was tested against (1) a three-radiologist majority on an independent, retrospectively collected set of 2,000 X-rays (CQ2000) and (2) radiologist reports on a separate validation set of 100,000 scans (CQ100k). The primary accuracy measure was area under the ROC curve (AUC), estimated separately for each abnormality and for normal versus abnormal scans. Results: On the CQ2000 dataset, the deep learning system demonstrated an AUC of 0.92 (CI 0.91-0.94) for detection of abnormal scans, and AUCs (CI) of 0.96 (0.94-0.98), 0.96 (0.94-0.98), 0.95 (0.87-1), 0.95 (0.92-0.98), 0.93 (0.90-0.96), 0.89 (0.83-0.94), 0.91 (0.87-0.96), 0.94 (0.93-0.96), and 0.98 (0.97-1) for the detection of blunted costophrenic angle, cardiomegaly, cavity, consolidation, fibrosis, hilar enlargement, nodule, opacity, and pleural effusion, respectively. The AUCs were similar on the larger CQ100k dataset, except for detecting normals, where the AUC was 0.86 (0.85-0.86). Interpretation: Our study demonstrates that a deep learning algorithm trained on a large, well-labelled dataset can accurately detect multiple abnormalities on chest X-rays. As these systems improve in accuracy, applying deep learning to widen the reach of chest X-ray interpretation and improve reporting efficiency will add tremendous value to radiology workflows and public health screenings globally.
Tasks
Published 2018-07-19
URL https://arxiv.org/abs/1807.07455v2
PDF https://arxiv.org/pdf/1807.07455v2.pdf
PWC https://paperswithcode.com/paper/can-artificial-intelligence-reliably-report
Repo
Framework

DeepMVS: Learning Multi-view Stereopsis

Title DeepMVS: Learning Multi-view Stereopsis
Authors Po-Han Huang, Kevin Matzen, Johannes Kopf, Narendra Ahuja, Jia-Bin Huang
Abstract We present DeepMVS, a deep convolutional neural network (ConvNet) for multi-view stereo reconstruction. Taking an arbitrary number of posed images as input, we first produce a set of plane-sweep volumes and use the proposed DeepMVS network to predict high-quality disparity maps. The key contributions that enable these results are (1) supervised pretraining on a photorealistic synthetic dataset, (2) an effective method for aggregating information across a set of unordered images, and (3) integrating multi-layer feature activations from the pre-trained VGG-19 network. We validate the efficacy of DeepMVS using the ETH3D Benchmark. Our results show that DeepMVS compares favorably against state-of-the-art conventional MVS algorithms and other ConvNet based methods, particularly for near-textureless regions and thin structures.
Tasks
Published 2018-04-02
URL http://arxiv.org/abs/1804.00650v1
PDF http://arxiv.org/pdf/1804.00650v1.pdf
PWC https://paperswithcode.com/paper/deepmvs-learning-multi-view-stereopsis
Repo
Framework

Distributed Stochastic Gradient Tracking Methods

Title Distributed Stochastic Gradient Tracking Methods
Authors Shi Pu, Angelia Nedić
Abstract In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method (DSGT) and a gossip-like stochastic gradient tracking method (GSGT). We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant stepsize choice). Under DSGT, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size $n$, which is comparable to the performance of a centralized stochastic gradient algorithm. Moreover, we show that when the network is well-connected, GSGT incurs lower communication cost than DSGT while maintaining a similar computational cost. A numerical example further demonstrates the effectiveness of the proposed methods.
Tasks
Published 2018-05-25
URL https://arxiv.org/abs/1805.11454v5
PDF https://arxiv.org/pdf/1805.11454v5.pdf
PWC https://paperswithcode.com/paper/distributed-stochastic-gradient-tracking
Repo
Framework
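A deterministic toy version of the gradient tracking update makes the mechanics concrete. This sketch uses exact gradients of scalar quadratic local costs for clarity, whereas the paper analyzes unbiased stochastic gradient estimates:

```python
import numpy as np

def gradient_tracking(b, W, alpha=0.1, iters=200):
    """Toy gradient tracking for local costs f_i(x) = 0.5 * (x - b[i])**2,
    whose average is minimized at mean(b). W is a doubly stochastic
    mixing matrix; y_i tracks the network-average gradient."""
    x = np.zeros(len(b))
    grad = x - b            # local gradients at the current iterates
    y = grad.copy()         # tracker initialized to the local gradients
    for _ in range(iters):
        x_new = W @ (x - alpha * y)      # mix with neighbors, descend along y
        grad_new = x_new - b
        y = W @ y + grad_new - grad      # mix tracker, add gradient change
        x, grad = x_new, grad_new
    return x
```

With a connected mixing matrix, every agent's iterate converges to the global minimizer mean(b), mirroring the consensus behavior the paper establishes for DSGT.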

Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy

Title Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy
Authors Qian Yu, Songze Li, Netanel Raviv, Seyed Mohammadreza Mousavi Kalan, Mahdi Soltanolkotabi, Salman Avestimehr
Abstract We consider a scenario involving computations over a massive dataset stored distributedly across multiple workers, which is at the core of distributed learning algorithms. We propose Lagrange Coded Computing (LCC), a new framework to simultaneously provide (1) resiliency against stragglers that may prolong computations; (2) security against Byzantine (or malicious) workers that deliberately modify the computation for their benefit; and (3) (information-theoretic) privacy of the dataset amidst possible collusion of workers. LCC, which leverages the well-known Lagrange polynomial to create computation redundancy in a novel coded form across workers, can be applied to any computation scenario in which the function of interest is an arbitrary multivariate polynomial of the input dataset, hence covering many computations of interest in machine learning. LCC significantly generalizes prior works to go beyond linear computations. It also enables secure and private computing in distributed settings, improving the computation and communication efficiency of the state-of-the-art. Furthermore, we prove the optimality of LCC by showing that it achieves the optimal tradeoff between resiliency, security, and privacy, i.e., in terms of tolerating the maximum number of stragglers and adversaries, and providing data privacy against the maximum number of colluding workers. Finally, we show via experiments on Amazon EC2 that LCC speeds up the conventional uncoded implementation of distributed least-squares linear regression by up to $13.43\times$, and also achieves a $2.36\times$-$12.65\times$ speedup over the state-of-the-art straggler mitigation strategies.
Tasks
Published 2018-06-04
URL http://arxiv.org/abs/1806.00939v4
PDF http://arxiv.org/pdf/1806.00939v4.pdf
PWC https://paperswithcode.com/paper/lagrange-coded-computing-optimal-design-for
Repo
Framework
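The Lagrange-polynomial encoding can be demonstrated end to end at toy scale over the reals (the actual scheme works over a finite field and adds random masking for security and privacy, which this sketch omits). Two data chunks are encoded into four worker inputs; since the composed polynomial has degree (K-1)*deg(f) = 2 here, any three worker results suffice, so one straggler is tolerated:

```python
import numpy as np

def lagrange_eval(xs, ys, z):
    """Evaluate the unique interpolating polynomial through (xs, ys) at z."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        term = yj
        for m, xm in enumerate(xs):
            if m != j:
                term *= (z - xm) / (xj - xm)
        total += term
    return total

def lcc_toy(data, f, n_workers, deg_f):
    """Toy Lagrange Coded Computing over the reals (no masking)."""
    K = len(data)
    betas = np.arange(K, dtype=float)                  # encoding points
    alphas = np.arange(K, K + n_workers, dtype=float)  # worker points
    encoded = [lagrange_eval(betas, data, a) for a in alphas]
    results = [f(e) for e in encoded]                  # each worker applies f
    need = (K - 1) * deg_f + 1                         # results needed to decode
    # pretend worker 0 straggles: decode from the remaining results
    xs, ys = alphas[1:1 + need], results[1:1 + need]
    return [lagrange_eval(xs, ys, b) for b in betas]
```

Decoding interpolates the composed polynomial from the surviving worker results and evaluates it back at the encoding points, recovering f applied to every original chunk.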

Global Model Interpretation via Recursive Partitioning

Title Global Model Interpretation via Recursive Partitioning
Authors Chengliang Yang, Anand Rangarajan, Sanjay Ranka
Abstract In this work, we propose a simple but effective method to interpret black-box machine learning models globally. That is, we use a compact binary tree, the interpretation tree, to explicitly represent the most important decision rules that are implicitly contained in the black-box machine learning models. This tree is learned from the contribution matrix which consists of the contributions of input variables to predicted scores for each single prediction. To generate the interpretation tree, a unified process recursively partitions the input variable space by maximizing the difference in the average contribution of the split variable between the divided spaces. We demonstrate the effectiveness of our method in diagnosing machine learning models on multiple tasks. Also, it is useful for new knowledge discovery as such insights are not easily identifiable when only looking at single predictions. In general, our work makes it easier and more efficient for human beings to understand machine learning models.
Tasks
Published 2018-02-11
URL http://arxiv.org/abs/1802.04253v2
PDF http://arxiv.org/pdf/1802.04253v2.pdf
PWC https://paperswithcode.com/paper/global-model-interpretation-via-recursive
Repo
Framework
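One step of the partitioning criterion described in the abstract, finding the split on a single variable that maximizes the difference in average contribution between the two sides, can be sketched as follows (an illustrative re-implementation, not the authors' code):

```python
def best_split(values, contributions):
    """Return (threshold, gap): the cut on one input variable that
    maximizes the absolute difference in mean contribution between
    the points below and above the threshold."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    best_thr, best_gap = None, -1.0
    for cut in range(1, len(values)):
        left = [contributions[order[i]] for i in range(cut)]
        right = [contributions[order[i]] for i in range(cut, len(values))]
        gap = abs(sum(left) / len(left) - sum(right) / len(right))
        if gap > best_gap:
            best_gap = gap
            best_thr = (values[order[cut - 1]] + values[order[cut]]) / 2
    return best_thr, best_gap
```

Applying this search recursively over all input variables, with a depth or sample-size stopping rule, yields the compact interpretation tree the paper describes.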