October 19, 2019

3024 words 15 mins read

Paper Group ANR 218

Bridging the Gap Between 2D and 3D Organ Segmentation with Volumetric Fusion Net. Quantification of Metabolites in MR Spectroscopic Imaging using Machine Learning. Improving robustness of classifiers by training against live traffic. The Corpus Replication Task. Multi-range Real-time depth inference from monocular stabilized footage using a Fully …

Bridging the Gap Between 2D and 3D Organ Segmentation with Volumetric Fusion Net

Title Bridging the Gap Between 2D and 3D Organ Segmentation with Volumetric Fusion Net
Authors Yingda Xia, Lingxi Xie, Fengze Liu, Zhuotun Zhu, Elliot K. Fishman, Alan L. Yuille
Abstract There has been a debate on whether to use 2D or 3D deep neural networks for volumetric organ segmentation. Both 2D and 3D models have their advantages and disadvantages. In this paper, we present an alternative framework, which trains 2D networks on different viewpoints for segmentation, and builds a 3D Volumetric Fusion Net (VFN) to fuse the 2D segmentation results. VFN is relatively shallow and contains much fewer parameters than most 3D networks, making our framework more efficient at integrating 3D information for segmentation. We train and test the segmentation and fusion modules individually, and propose a novel strategy, named cross-cross-augmentation, to make full use of the limited training data. We evaluate our framework on several challenging abdominal organs, and verify its superiority in segmentation accuracy and stability over existing 2D and 3D approaches.
Tasks
Published 2018-04-02
URL http://arxiv.org/abs/1804.00392v2
PDF http://arxiv.org/pdf/1804.00392v2.pdf
PWC https://paperswithcode.com/paper/bridging-the-gap-between-2d-and-3d-organ
Repo
Framework
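The core idea, training 2D networks on orthogonal viewpoints and then fusing their volumetric outputs, can be illustrated with a toy sketch. Plain voxel-wise averaging stands in for the learned VFN here; the synthetic "organ" and all shapes are invented for illustration only.

```python
import numpy as np

def fuse_viewpoints(prob_axial, prob_coronal, prob_sagittal):
    """Toy stand-in for the Volumetric Fusion Net: average the voxel-wise
    foreground probabilities predicted by three 2D networks run
    slice-by-slice along orthogonal axes, then threshold.  The paper
    learns this fusion with a shallow 3D CNN; averaging is a baseline."""
    fused = (prob_axial + prob_coronal + prob_sagittal) / 3.0
    return (fused > 0.5).astype(np.uint8)

# Synthetic volume: three noisy copies of the same ball-shaped "organ".
rng = np.random.default_rng(0)
zz, yy, xx = np.mgrid[:32, :32, :32]
truth = ((zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 8 ** 2).astype(float)
views = [np.clip(truth + rng.normal(0, 0.2, truth.shape), 0, 1) for _ in range(3)]
mask = fuse_viewpoints(*views)
```

A learned fusion module can correct viewpoint-specific errors that simple averaging cannot, which is the gap the VFN is designed to close.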

Quantification of Metabolites in MR Spectroscopic Imaging using Machine Learning

Title Quantification of Metabolites in MR Spectroscopic Imaging using Machine Learning
Authors Dhritiman Das, Eduardo Coello, Rolf F Schulte, Bjoern H Menze
Abstract Magnetic Resonance Spectroscopic Imaging (MRSI) is a clinical imaging modality for measuring tissue metabolite levels in-vivo. An accurate estimation of spectral parameters allows for better assessment of spectral quality and metabolite concentration levels. The current gold-standard quantification method is LCModel, a commercial fitting tool. However, it fails for spectra with a poor signal-to-noise ratio (SNR) or a large number of artifacts. This paper introduces a framework based on random forest regression for accurate estimation of the output parameters of a model-based analysis of MR spectroscopy data. The goal of our proposed framework is to learn the spectral features from a training set comprising different variations of both simulated and in-vivo brain spectra, and then apply this learning to subsequent metabolite quantification. Experiments involve training and testing on simulated and in-vivo human brain spectra. We estimate parameters such as metabolite concentrations and compare our results with those from LCModel.
Tasks
Published 2018-05-25
URL http://arxiv.org/abs/1805.10201v1
PDF http://arxiv.org/pdf/1805.10201v1.pdf
PWC https://paperswithcode.com/paper/qunatification-of-metabolites-in-mr
Repo
Framework
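The "model-based analysis" this framework learns to emulate treats each spectrum as a weighted sum of metabolite basis spectra, with concentrations recovered by fitting. A minimal sketch of that forward model and a least-squares fit is below; the paper instead trains a random forest on spectral features, and the peak positions, widths, and concentrations here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
freq = np.linspace(0, 4, 256)          # pseudo chemical-shift axis (ppm)

def peak(center, width=0.05):
    """Lorentzian-like metabolite basis peak (illustrative only)."""
    return 1.0 / (1.0 + ((freq - center) / width) ** 2)

# Hypothetical three-metabolite basis set (e.g. NAA-, Cr-, Cho-like peaks).
basis = np.stack([peak(2.0), peak(3.0), peak(3.2)], axis=1)   # (256, 3)

true_conc = np.array([1.2, 0.8, 0.4])
spectrum = basis @ true_conc + rng.normal(0, 0.01, freq.size)

# Model-based quantification: least-squares fit of the concentrations.
est_conc, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
```

A learned regressor becomes attractive precisely where this linear fit degrades: low SNR, baseline artifacts, and overlapping resonances.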

Improving robustness of classifiers by training against live traffic

Title Improving robustness of classifiers by training against live traffic
Authors Kumar Sricharan, Kumar Kallurupalli, Ashok Srivastava
Abstract Deep learning models are known to be overconfident in their predictions on out-of-distribution inputs. This is a challenge when a model is trained on a particular input dataset, but receives out-of-sample data when deployed in practice. Recently, there has been work on building classifiers that are robust to out-of-distribution samples by adding a regularization term that maximizes the entropy of the classifier output on out-of-distribution data. However, given the challenge that it is not always possible to obtain out-of-distribution samples, the authors suggest a GAN-based alternative that is independent of specific knowledge of out-of-distribution samples. From this existing work, we also know that having access to the true out-of-sample distribution for regularization works significantly better than using samples from the GAN. In this paper, we make the following observation: in practice, the out-of-distribution samples are contained in the traffic that hits a deployed classifier. However, the traffic will also contain an unknown proportion of in-distribution samples. If the entropy over all of the traffic data were naively maximized, this would hurt the classifier's performance on in-distribution data. To effectively leverage this traffic data, we propose an adaptive regularization technique (based on the maximum predictive probability score of a sample) which penalizes out-of-distribution samples more heavily than in-distribution samples in the incoming traffic. This ensures that the overall performance of the classifier does not degrade on in-distribution data, while detection of out-of-distribution samples is significantly improved by leveraging the unlabeled traffic data. We show the effectiveness of our method via experiments on natural image datasets.
Tasks
Published 2018-12-01
URL http://arxiv.org/abs/1812.00237v1
PDF http://arxiv.org/pdf/1812.00237v1.pdf
PWC https://paperswithcode.com/paper/improving-robustness-of-classifiers-by
Repo
Framework
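The adaptive regularizer can be sketched in a few lines: weight each traffic sample's entropy term by how low its maximum predictive probability is. The threshold, weighting scheme, and parameter names below are our own illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def adaptive_entropy_penalty(probs, threshold=0.7, lam=1.0):
    """Sketch of the adaptive regularizer described above (parameter
    names are ours).  `probs` holds the classifier's softmax outputs on
    unlabeled traffic.  Samples with a low maximum predictive
    probability (likely out-of-distribution) get a large weight on an
    entropy-maximizing term; confident (likely in-distribution)
    samples are penalized only weakly or not at all."""
    max_prob = probs.max(axis=1)
    weight = np.clip(threshold - max_prob, 0.0, None) / threshold
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    # Maximizing entropy == minimizing its negative.
    return lam * np.mean(weight * -entropy)

confident = np.array([[0.97, 0.01, 0.02]])   # in-distribution-looking
uncertain = np.array([[0.34, 0.33, 0.33]])   # out-of-distribution-looking
```

Adding this term to the usual cross-entropy loss pushes the model toward uniform outputs only on the low-confidence portion of the traffic, which is what protects in-distribution accuracy.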

The Corpus Replication Task

Title The Corpus Replication Task
Authors Tobias Eichinger
Abstract In the field of Natural Language Processing (NLP), we revisit the well-known word embedding algorithm word2vec. Word embeddings identify words by vectors such that the words’ distributional similarity is captured. Unexpectedly, besides semantic similarity, even relational similarity has been shown to be captured in word embeddings generated by word2vec, whence two questions arise. Firstly, which kinds of relations are representable in continuous space, and secondly, how are relations built. In order to tackle these questions we propose a bottom-up point of view. We refer to generating input text for which word2vec outputs a set of target relations as solving the Corpus Replication Task. Deeming generalizations of this approach to any set of relations possible, we expect solving the Corpus Replication Task to provide partial answers to these questions.
Tasks Semantic Similarity, Semantic Textual Similarity, Word Embeddings
Published 2018-06-20
URL http://arxiv.org/abs/1806.07978v1
PDF http://arxiv.org/pdf/1806.07978v1.pdf
PWC https://paperswithcode.com/paper/the-corpus-replication-task
Repo
Framework

Multi-range Real-time depth inference from monocular stabilized footage using a Fully Convolutional Neural Network

Title Multi-range Real-time depth inference from monocular stabilized footage using a Fully Convolutional Neural Network
Authors Clément Pinard, Laure Chevalley, Antoine Manzanera, David Filliat
Abstract Using a neural network architecture for depth map inference from monocular stabilized videos, with application to UAV videos in rigid scenes, we propose a multi-range architecture for unconstrained UAV flight, leveraging flight data from sensors to produce accurate depth maps for uncluttered outdoor environments. We evaluate our algorithm on both synthetic scenes and real UAV flight data. Quantitative results are given for synthetic scenes with slightly noisy orientation, and show that our multi-range architecture improves depth inference. Along with this article is a video that presents our results more thoroughly.
Tasks
Published 2018-09-12
URL http://arxiv.org/abs/1809.04467v1
PDF http://arxiv.org/pdf/1809.04467v1.pdf
PWC https://paperswithcode.com/paper/multi-range-real-time-depth-inference-from-a
Repo
Framework

A Periodicity-based Parallel Time Series Prediction Algorithm in Cloud Computing Environments

Title A Periodicity-based Parallel Time Series Prediction Algorithm in Cloud Computing Environments
Authors Jianguo Chen, Kenli Li, Huigui Rong, Kashif Bilal, Keqin Li, Philip S. Yu
Abstract In the era of big data, practical applications in various domains continually generate large-scale time-series data. Among them, some data show significant or potential periodicity characteristics, such as meteorological and financial data. It is critical to efficiently identify the potential periodic patterns from massive time-series data and provide accurate predictions. In this paper, a Periodicity-based Parallel Time Series Prediction (PPTSP) algorithm for large-scale time-series data is proposed and implemented in the Apache Spark cloud computing environment. To effectively handle the massive historical datasets, a Time Series Data Compression and Abstraction (TSDCA) algorithm is presented, which can reduce the data scale while accurately extracting its characteristics. Based on this, we propose a Multi-layer Time Series Periodic Pattern Recognition (MTSPPR) algorithm using the Fourier Spectrum Analysis (FSA) method. In addition, a Periodicity-based Time Series Prediction (PTSP) algorithm is proposed. Data in the subsequent period are predicted based on all previous period models, in which a time attenuation factor is introduced to control the impact of different periods on the prediction results. Moreover, to improve the performance of the proposed algorithms, we propose a parallel solution on the Apache Spark platform, using the Spark Streaming real-time computing module. To efficiently process the large-scale time-series datasets in distributed computing environments, Discretized Streams (DStreams) and Resilient Distributed Datasets (RDDs) are used to store and process these datasets. Extensive experimental results show that our PPTSP algorithm has significant advantages compared with other algorithms in terms of prediction accuracy and performance.
Tasks Time Series, Time Series Prediction
Published 2018-10-17
URL http://arxiv.org/abs/1810.07776v1
PDF http://arxiv.org/pdf/1810.07776v1.pdf
PWC https://paperswithcode.com/paper/a-periodicity-based-parallel-time-series
Repo
Framework
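Two of the pipeline's steps, Fourier-based period detection and attenuation-weighted prediction over previous periods, are easy to sketch on a single machine. The decay constant and the single-frequency simplification below are our own choices; the paper's MTSPPR stage is multi-layer and runs distributed on Spark.

```python
import numpy as np

def detect_period(series):
    """Fourier-spectrum period detection, a simplified MTSPPR step:
    pick the single dominant nonzero frequency bin."""
    spectrum = np.abs(np.fft.rfft(series - series.mean()))
    k = int(np.argmax(spectrum[1:])) + 1      # skip the DC bin
    return len(series) // k

def predict_next_period(series, period, decay=0.5):
    """Predict the next period as a time-attenuated average of all
    complete previous periods; older periods weigh less, mirroring
    the paper's attenuation factor (`decay` is our choice)."""
    n = len(series) // period
    chunks = series[:n * period].reshape(n, period)
    weights = decay ** np.arange(n - 1, -1, -1, dtype=float)
    return (weights[:, None] * chunks).sum(axis=0) / weights.sum()

t = np.arange(200)
series = np.sin(2 * np.pi * t / 20)
period = detect_period(series)
forecast = predict_next_period(series, period)
```

On real data the attenuation factor lets a slowly drifting periodic pattern dominate the forecast through its most recent cycles.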

Loss-aware Weight Quantization of Deep Networks

Title Loss-aware Weight Quantization of Deep Networks
Authors Lu Hou, James T. Kwok
Abstract The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and to m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate as (or even more accurate than) the full-precision network.
Tasks Quantization
Published 2018-02-23
URL http://arxiv.org/abs/1802.08635v2
PDF http://arxiv.org/pdf/1802.08635v2.pdf
PWC https://paperswithcode.com/paper/loss-aware-weight-quantization-of-deep
Repo
Framework
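Ternarization with separate positive and negative scales can be sketched with the common magnitude-threshold heuristic. Note the hedge: the paper's loss-aware variant chooses assignments and scales to directly reduce the training loss (via a proximal step using curvature information), whereas this baseline only matches weight magnitudes; the threshold factor is our choice.

```python
import numpy as np

def ternarize(w, t=0.7):
    """Threshold ternarization with separate scales for positive and
    negative weights (a non-loss-aware baseline).  Weights with small
    magnitude are zeroed; the rest map to per-sign scale factors."""
    delta = t * np.abs(w).mean()
    pos, neg = w > delta, w < -delta
    alpha_p = w[pos].mean() if pos.any() else 0.0     # scale for +1 weights
    alpha_n = -w[neg].mean() if neg.any() else 0.0    # scale for -1 weights
    return alpha_p * pos.astype(float) - alpha_n * neg.astype(float)

rng = np.random.default_rng(2)
w = rng.normal(0, 1, 1000)
wq = ternarize(w)
```

Each quantized weight then needs only two bits plus the two shared scales, which is where the compression comes from.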

Smart Inverter Grid Probing for Learning Loads: Part II - Probing Injection Design

Title Smart Inverter Grid Probing for Learning Loads: Part II - Probing Injection Design
Authors Siddharth Bhela, Vassilis Kekatos, Sriharsha Veeramachaneni
Abstract This two-part work puts forth the idea of engaging power electronics to probe an electric grid to infer non-metered loads. Probing can be accomplished by commanding inverters to perturb their power injections and record the induced voltage response. Once a probing setup is deemed topologically observable by the tests of Part I, Part II provides a methodology for designing probing injections abiding by inverter and network constraints to improve load estimates. The task is challenging since system estimates depend on both probing injections and unknown loads in an implicit nonlinear fashion. The methodology first constructs a library of candidate probing vectors by sampling over the feasible set of inverter injections. Leveraging a linearized grid model and a robust approach, the candidate probing vectors violating voltage constraints for any anticipated load value are subsequently rejected. Among the qualified candidates, the design finally identifies the probing vectors yielding the most diverse system states. The probing task under noisy phasor and non-phasor data is tackled using a semidefinite-program (SDP) relaxation. Numerical tests using synthetic and real-world data on a benchmark feeder validate the conditions of Part I; the SDP-based solver; the importance of probing design; and the effects of probing duration and noise.
Tasks
Published 2018-06-22
URL http://arxiv.org/abs/1806.08836v2
PDF http://arxiv.org/pdf/1806.08836v2.pdf
PWC https://paperswithcode.com/paper/smart-inverter-grid-probing-for-learning-1
Repo
Framework
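The three-stage design methodology (sample a library of candidate injections, reject candidates that violate voltage limits under a linearized grid model, then keep the most state-diverse survivors) can be mocked up on a toy feeder. Every number below (the sensitivity matrix, limits, bus count) is invented for illustration; the paper derives the model from the actual network and handles robustness and noise via an SDP.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 4-bus linearized model: v = v0 + R @ p, with p the
# inverter probing injections and R a made-up sensitivity matrix.
v0 = np.full(4, 1.0)
R = 0.02 * (np.eye(4) + 0.5)
p_max, v_lim = 0.5, 0.025     # injection and voltage-deviation limits

# Step 1: library of candidate probing vectors sampled from the
# feasible injection set (the all-zeros probe is trivially feasible).
library = np.vstack([np.zeros((1, 4)),
                     rng.uniform(-p_max, p_max, size=(200, 4))])

# Step 2: rejection -- discard candidates whose voltage deviation
# exceeds the limit under the linearized model.
deviation = np.abs(library @ R.T)
feasible = library[deviation.max(axis=1) <= v_lim]

# Step 3: greedily keep the probing vectors yielding the most diverse
# induced states (farthest-point selection on v = v0 + R p).
states = v0 + feasible @ R.T
chosen = [0]
for _ in range(2):
    d = np.min(np.linalg.norm(states[:, None] - states[chosen], axis=2), axis=1)
    chosen.append(int(np.argmax(d)))
probes = feasible[chosen]
```

Diverse induced states matter because nearly identical probing responses add little information about the non-metered loads.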

Graph2Seq: Scalable Learning Dynamics for Graphs

Title Graph2Seq: Scalable Learning Dynamics for Graphs
Authors Shaileshh Bojja Venkatakrishnan, Mohammad Alizadeh, Pramod Viswanath
Abstract Neural networks have been shown to be an effective tool for learning algorithms over graph-structured data. However, graph representation techniques, which convert graphs to real-valued vectors for use with neural networks, are still in their infancy. Recent works have proposed several approaches (e.g., graph convolutional networks), but these methods have difficulty scaling and generalizing to graphs with different sizes and shapes. We present Graph2Seq, a new technique that represents vertices of graphs as infinite time-series. By not limiting the representation to a fixed dimension, Graph2Seq scales naturally to graphs of arbitrary sizes and shapes. Graph2Seq is also reversible, allowing full recovery of the graph structure from the sequences. By analyzing a formal computational model for graph representation, we show that an unbounded sequence is necessary for scalability. Our experimental results with Graph2Seq show strong generalization and new state-of-the-art performance on a variety of graph combinatorial optimization problems.
Tasks Combinatorial Optimization, Time Series
Published 2018-02-14
URL http://arxiv.org/abs/1802.04948v3
PDF http://arxiv.org/pdf/1802.04948v3.pdf
PWC https://paperswithcode.com/paper/graph2seq-scalable-learning-dynamics-for
Repo
Framework
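The vertex-as-time-series idea can be caricatured in a few lines: each vertex repeatedly aggregates its neighbors' randomized states, and the trajectory (truncated here, infinite in principle) is the representation. The specific update rule below is our simplification and does not demonstrate the paper's reversibility result.

```python
import numpy as np

def graph2seq(adj, steps=8, seed=0):
    """Sketch of the Graph2Seq idea: represent each vertex by the time
    series produced by repeatedly aggregating randomized neighbor
    states.  The update rule (ReLU of summed neighbor states plus
    noise) is our simplification of the paper's local dynamics."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    x = rng.uniform(size=n)
    seq = [x.copy()]
    for _ in range(steps - 1):
        x = np.maximum(adj @ x + rng.uniform(size=n), 0.0)
        seq.append(x.copy())
    return np.stack(seq, axis=1)          # (n_vertices, steps)

# Path graph 0-1-2-3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
seqs = graph2seq(adj)
```

Because the representation is a sequence rather than a fixed-width vector, the same procedure applies unchanged to a graph of any size.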

Learning Robust Search Strategies Using a Bandit-Based Approach

Title Learning Robust Search Strategies Using a Bandit-Based Approach
Authors Wei Xia, Roland H. C. Yap
Abstract Effective solving of constraint problems often requires choosing good or specific search heuristics. However, choosing or designing a good search heuristic is non-trivial and is often a manual process. In this paper, rather than manually choosing/designing search heuristics, we propose the use of bandit-based learning techniques to automatically select search heuristics. Our approach is online where the solver learns and selects from a set of heuristics during search. The goal is to obtain automatic search heuristics which give robust performance. Preliminary experiments show that our adaptive technique is more robust than the original search heuristics. It can also outperform the original heuristics.
Tasks
Published 2018-05-10
URL http://arxiv.org/abs/1805.03876v1
PDF http://arxiv.org/pdf/1805.03876v1.pdf
PWC https://paperswithcode.com/paper/learning-robust-search-strategies-using-a
Repo
Framework
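The online heuristic-selection loop maps naturally onto a multi-armed bandit. The sketch below uses UCB1 with simulated per-restart rewards; the heuristic names and success rates are invented, and the paper's reward signal comes from actual search performance inside the solver.

```python
import math
import random

def ucb_select(stats, t, c=2.0):
    """UCB1 choice among heuristics; `stats` maps a heuristic name to
    [pulls, total_reward].  Untried heuristics are chosen first; then
    we maximize empirical mean plus a sqrt(c*ln(t)/n) exploration bonus."""
    for h, (n, _) in stats.items():
        if n == 0:
            return h
    return max(stats, key=lambda h: stats[h][1] / stats[h][0]
               + math.sqrt(c * math.log(t) / stats[h][0]))

# Toy online loop: reward = 1 if the chosen heuristic "solves" the
# current restart.  The true success rates are invented.
random.seed(4)
true_rate = {"dom/wdeg": 0.9, "impact": 0.5, "activity": 0.2}
stats = {h: [0, 0.0] for h in true_rate}
for t in range(1, 501):
    h = ucb_select(stats, t)
    reward = 1.0 if random.random() < true_rate[h] else 0.0
    stats[h][0] += 1
    stats[h][1] += reward
```

The exploration bonus is what makes the selection robust: a heuristic that looks bad early still gets occasional retries, so the solver recovers from unlucky first impressions.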

Solving Large Extensive-Form Games with Strategy Constraints

Title Solving Large Extensive-Form Games with Strategy Constraints
Authors Trevor Davis, Kevin Waugh, Michael Bowling
Abstract Extensive-form games are a common model for multiagent interactions with imperfect information. In two-player zero-sum games, the typical solution concept is a Nash equilibrium over the unconstrained strategy set for each player. In many situations, however, we would like to constrain the set of possible strategies. For example, constraints are a natural way to model limited resources, risk mitigation, safety, consistency with past observations of behavior, or other secondary objectives for an agent. In small games, optimal strategies under linear constraints can be found by solving a linear program; however, state-of-the-art algorithms for solving large games cannot handle general constraints. In this work we introduce a generalized form of Counterfactual Regret Minimization that provably finds optimal strategies under any feasible set of convex constraints. We demonstrate the effectiveness of our algorithm for finding strategies that mitigate risk in security games, and for opponent modeling in poker games when given only partial observations of private information.
Tasks
Published 2018-09-20
URL http://arxiv.org/abs/1809.07893v2
PDF http://arxiv.org/pdf/1809.07893v2.pdf
PWC https://paperswithcode.com/paper/solving-large-extensive-form-games-with
Repo
Framework

Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints

Title Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints
Authors Di Tang, XiaoFeng Wang, Kehuan Zhang
Abstract To launch black-box attacks against a Deep Neural Network (DNN) based Face Recognition (FR) system, one needs to build \textit{substitute} models to simulate the target model, so the adversarial examples discovered from substitute models can also mislead the target model. Such \textit{transferability} is achieved in recent studies through querying the target model to obtain data for training the substitute models. A real-world target, like the FR system of a law-enforcement agency, however, is less accessible to the adversary. To attack such a system, a substitute model with similar quality to the target model is needed to identify their common defects. This is hard since the adversary often does not have enough resources to train such a powerful model (hundreds of millions of images and rooms of GPUs are needed to train a commercial FR system). We found in our research, however, that a resource-constrained adversary can still effectively approximate the target model’s capability to recognize \textit{specific} individuals, by training \textit{biased} substitute models on additional images of those victims whose identities the attacker wants to cover or impersonate. This is made possible by a new property we discovered, called \textit{Nearly Local Linearity} (NLL), which models the observation that an ideal DNN model produces image representations (embeddings) whose distances among themselves truthfully describe the human perception of the differences among the input images. By simulating this property around the victim’s images, we significantly improve the transferability of black-box impersonation attacks by nearly 50%. In particular, we successfully attacked a commercial system trained on over 20 million images, using 4 million images and 1/5 of the training time but achieving 62% transferability in an impersonation attack and 89% in a dodging attack.
Tasks Face Recognition
Published 2018-02-13
URL http://arxiv.org/abs/1802.09900v2
PDF http://arxiv.org/pdf/1802.09900v2.pdf
PWC https://paperswithcode.com/paper/query-free-attacks-on-industry-grade-face
Repo
Framework

Efficient human-like semantic representations via the Information Bottleneck principle

Title Efficient human-like semantic representations via the Information Bottleneck principle
Authors Noga Zaslavsky, Charles Kemp, Terry Regier, Naftali Tishby
Abstract Maintaining efficient semantic representations of the environment is a major challenge both for humans and for machines. While human languages represent useful solutions to this problem, it is not yet clear what computational principle could give rise to similar solutions in machines. In this work we propose an answer to this open question. We suggest that languages compress percepts into words by optimizing the Information Bottleneck (IB) tradeoff between the complexity and accuracy of their lexicons. We present empirical evidence that this principle may give rise to human-like semantic representations, by exploring how human languages categorize colors. We show that color naming systems across languages are near-optimal in the IB sense, and that these natural systems are similar to artificial IB color naming systems with a single tradeoff parameter controlling the cross-language variability. In addition, the IB systems evolve through a sequence of structural phase transitions, demonstrating a possible adaptation process. This work thus identifies a computational principle that characterizes human semantic systems, and that could usefully inform semantic representations in machines.
Tasks
Published 2018-08-09
URL http://arxiv.org/abs/1808.03353v1
PDF http://arxiv.org/pdf/1808.03353v1.pdf
PWC https://paperswithcode.com/paper/efficient-human-like-semantic-representations
Repo
Framework
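The IB tradeoff behind the color-naming result is computed with the standard self-consistent iterations. A minimal sketch on a random toy distribution is below; in the paper, x would be color percepts, t words, and y referents, with β the single tradeoff parameter, and the toy sizes and β here are arbitrary.

```python
import numpy as np

def ib_step(q_t_given_x, p_y_given_x, p_x, beta):
    """One self-consistent Information Bottleneck update (the standard
    iterative equations): recompute q(t), q(y|t), then reassign
    q(t|x) proportional to q(t) * exp(-beta * KL(p(y|x) || q(y|t)))."""
    q_t = q_t_given_x.T @ p_x                                  # q(t)
    q_y_given_t = (q_t_given_x * p_x[:, None]).T @ p_y_given_x
    q_y_given_t /= q_t[:, None]
    log_ratio = np.log(p_y_given_x[:, None, :] + 1e-12) \
              - np.log(q_y_given_t[None, :, :] + 1e-12)
    d = (p_y_given_x[:, None, :] * log_ratio).sum(axis=2)      # KL distortion
    new_q = q_t[None, :] * np.exp(-beta * d)
    return new_q / new_q.sum(axis=1, keepdims=True)

rng = np.random.default_rng(5)
n_x, n_t, n_y = 6, 3, 4
p_x = np.full(n_x, 1 / n_x)
p_y_given_x = rng.dirichlet(np.ones(n_y), size=n_x)
q = rng.dirichlet(np.ones(n_t), size=n_x)
for _ in range(30):
    q = ib_step(q, p_y_given_x, p_x, beta=5.0)
```

Sweeping β traces out the complexity-accuracy curve; the structural phase transitions the abstract mentions appear as β crosses critical values where clusters split.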

A Hierarchical Deep Learning Natural Language Parser for Fashion

Title A Hierarchical Deep Learning Natural Language Parser for Fashion
Authors José Marcelino, João Faria, Luís Baía, Ricardo Gamelas Sousa
Abstract This work presents a hierarchical deep learning natural language parser for fashion. Our proposal intends not only to recognize fashion-domain entities but also to expose syntactic and morphologic insights. We leverage the usage of an architecture of specialist models, each one for a different task (from parsing to entity recognition). Such architecture renders a hierarchical model able to capture the nuances of the fashion language. The natural language parser is able to deal with textual ambiguities which are left unresolved by our currently existing solution. Our empirical results establish a robust baseline, which justifies the use of hierarchical architectures of deep learning models while opening new research avenues to explore.
Tasks
Published 2018-06-25
URL http://arxiv.org/abs/1806.09511v1
PDF http://arxiv.org/pdf/1806.09511v1.pdf
PWC https://paperswithcode.com/paper/a-hierarchical-deep-learning-natural-language
Repo
Framework

Multi-Source Domain Adaptation with Mixture of Experts

Title Multi-Source Domain Adaptation with Mixture of Experts
Authors Jiang Guo, Darsh J Shah, Regina Barzilay
Abstract We propose a mixture-of-experts approach for unsupervised domain adaptation from multiple sources. The key idea is to explicitly capture the relationship between a target example and different source domains. This relationship, expressed by a point-to-set metric, determines how to combine predictors trained on various domains. The metric is learned in an unsupervised fashion using meta-training. Experimental results on sentiment analysis and part-of-speech tagging demonstrate that our approach consistently outperforms multiple baselines and can robustly handle negative transfer.
Tasks Domain Adaptation, Part-Of-Speech Tagging, Sentiment Analysis, Unsupervised Domain Adaptation
Published 2018-09-07
URL http://arxiv.org/abs/1809.02256v2
PDF http://arxiv.org/pdf/1809.02256v2.pdf
PWC https://paperswithcode.com/paper/multi-source-domain-adaptation-with-mixture
Repo
Framework
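The combination step can be sketched directly: weight each source-domain expert by a softmax over negative point-to-set distances from the target example to that domain. The paper meta-learns this metric; a fixed Euclidean distance to the domain's sample mean is our simplification, and all data and experts below are synthetic.

```python
import numpy as np

def moe_predict(x, domains, predictors, temp=1.0):
    """Mixture-of-experts combination sketch: softmax over (negative)
    point-to-set distances from target example `x` to each source
    domain, used to weight the source-trained predictors."""
    dists = np.array([np.linalg.norm(x - d.mean(axis=0)) for d in domains])
    logits = -dists / temp
    w = np.exp(logits - logits.max())
    w /= w.sum()
    preds = np.array([f(x) for f in predictors])
    return float(w @ preds), w

rng = np.random.default_rng(6)
src_a = rng.normal(0.0, 0.3, size=(50, 2))     # source domain A
src_b = rng.normal(3.0, 0.3, size=(50, 2))     # source domain B
experts = [lambda x: 0.0, lambda x: 1.0]       # each expert's prediction
x = np.array([2.9, 3.1])                       # target example near B
y_hat, w = moe_predict(x, [src_a, src_b], experts)
```

Down-weighting distant domains is also what limits negative transfer: an irrelevant source contributes almost nothing to the combined prediction.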