January 30, 2020

3274 words 16 mins read

Paper Group ANR 262

Which principal components are most sensitive to distributional changes?

Title Which principal components are most sensitive to distributional changes?
Authors Martin Tveten
Abstract PCA is often used in anomaly detection and statistical process control tasks. For bivariate data, we prove that the minor projection (the least varying projection) of the PCA-rotated data is the most sensitive to distributional changes, where sensitivity is defined by the Hellinger distance between distributions before and after a change. In particular, this is almost always the case if only one parameter of the bivariate normal distribution changes, i.e., the change is sparse. Simulations indicate that the minor projections are the most sensitive for a large range of changes and pre-change settings in higher dimensions as well. This motivates using the minor projections for detecting sparse distributional changes in high-dimensional data.
Tasks Anomaly Detection
Published 2019-05-15
URL https://arxiv.org/abs/1905.06318v1
PDF https://arxiv.org/pdf/1905.06318v1.pdf
PWC https://paperswithcode.com/paper/which-principal-components-are-most-sensitive
Repo
Framework
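
A quick way to see the paper's claim is to simulate it. The sketch below is a minimal NumPy toy (my own setup, not the author's code): a PCA rotation is estimated on pre-change bivariate normal data, and the standardized mean shift seen by each projection is compared after a sparse change in one mean parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.7], [0.7, 1.0]])                  # correlated pre-change data
pre = rng.multivariate_normal([0, 0], cov, size=5000)
post = rng.multivariate_normal([0.5, 0], cov, size=5000)  # sparse change: one mean only

# PCA rotation estimated on pre-change data alone.
eigvals, eigvecs = np.linalg.eigh(np.cov(pre.T))          # eigenvalues in ascending order
z_pre, z_post = pre @ eigvecs, post @ eigvecs

# Standardized mean shift seen by each projection; column 0 is the minor
# (least varying) projection because eigh sorts eigenvalues ascending.
shift = np.abs(z_post.mean(0) - z_pre.mean(0)) / np.sqrt(eigvals)
print("minor vs. major projection shift:", shift)
```

On this toy example the minor projection registers a clearly larger standardized shift than the major one, in line with the bivariate result proved in the paper.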

Continuous Control with Contexts, Provably

Title Continuous Control with Contexts, Provably
Authors Simon S. Du, Ruosong Wang, Mengdi Wang, Lin F. Yang
Abstract A fundamental challenge in artificial intelligence is to build an agent that generalizes and adapts to unseen environments. A common strategy is to build a decoder that takes the context of the unseen new environment as input and generates a policy accordingly. The current paper studies how to build a decoder for the fundamental continuous control task, the linear quadratic regulator (LQR), which can model a wide range of real-world physical environments. We present a simple algorithm for this problem, which uses the upper confidence bound (UCB) to refine the estimate of the decoder and balance the exploration-exploitation trade-off. Theoretically, our algorithm enjoys a $\widetilde{O}\left(\sqrt{T}\right)$ regret bound in the online setting, where $T$ is the number of environments the agent has played. This also implies that after playing $\widetilde{O}\left(1/\epsilon^2\right)$ environments, the agent is able to transfer the learned knowledge to obtain an $\epsilon$-suboptimal policy for an unseen environment. To our knowledge, this is the first provably efficient algorithm for building a decoder in the continuous control setting. While our main focus is theoretical, we also present experiments that demonstrate the effectiveness of our algorithm.
Tasks Continuous Control
Published 2019-10-30
URL https://arxiv.org/abs/1910.13614v1
PDF https://arxiv.org/pdf/1910.13614v1.pdf
PWC https://paperswithcode.com/paper/continuous-control-with-contexts-provably-1
Repo
Framework
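
To make the decoder idea concrete, here is a rough certainty-equivalence sketch (not the authors' UCB algorithm, whose exploration bonus and regret analysis are the paper's contribution): a least-squares map from context to dynamics is fit across played environments, and the decoded dynamics for an unseen context are fed to a standard LQR solver. The toy scalar dynamics and all names are mine.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Certainty-equivalent discrete LQR feedback gain K, with u = -K x."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

rng = np.random.default_rng(1)
contexts = rng.uniform(0.5, 1.5, size=50)          # environments played so far
A_true = lambda c: 0.5 * c                         # hidden context-to-dynamics map

# "Decoder" fit: regress the observed dynamics on the context.
A_obs = np.array([A_true(c) + 0.01 * rng.standard_normal() for c in contexts])
w = np.polyfit(contexts, A_obs, deg=1)             # least-squares decoder

c_new = 1.2                                        # unseen environment
A_hat = np.array([[np.polyval(w, c_new)]])
B, Q, R = np.eye(1), np.eye(1), np.eye(1)
print("decoded policy gain:", lqr_gain(A_hat, B, Q, R))
```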

Converged Deep Framework Assembling Principled Modules for CS-MRI

Title Converged Deep Framework Assembling Principled Modules for CS-MRI
Authors Risheng Liu, Yuxi Zhang, Shichao Cheng, Zhongxuan Luo, Xin Fan
Abstract Compressed Sensing Magnetic Resonance Imaging (CS-MRI) significantly accelerates MR data acquisition at a sampling rate much lower than the Nyquist criterion. A major challenge for CS-MRI lies in solving the severely ill-posed inverse problem of reconstructing aliasing-free MR images from sparse k-space data. Conventional methods typically optimize an energy function and produce reconstructions of high quality, but their iterative numerical solvers are unavoidably slow. Recent data-driven techniques provide fast restoration by either learning a direct mapping to the final reconstruction or plugging learned modules into the energy optimizer. Nevertheless, these data-driven predictors cannot guarantee that the reconstruction satisfies the constraints underlying the regularizers of conventional methods, so the reliability of their reconstruction results is questionable. In this paper, we propose a converged deep framework assembling principled modules for CS-MRI that fuses a learning strategy with the iterative solver of a conventional reconstruction energy. This framework embeds an optimal-condition checking mechanism, fostering \emph{efficient} and \emph{reliable} reconstruction. We also apply the framework to two practical tasks, \emph{i.e.}, parallel imaging and reconstruction with Rician noise. Extensive experiments on both benchmark and manufacturer-testing images demonstrate that the proposed method reliably converges to the optimal solution more efficiently and accurately than the state of the art in various scenarios.
Tasks
Published 2019-10-29
URL https://arxiv.org/abs/1910.13046v1
PDF https://arxiv.org/pdf/1910.13046v1.pdf
PWC https://paperswithcode.com/paper/converged-deep-framework-assembling
Repo
Framework
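
The abstract's key mechanism, accepting a learned update only when it provably helps the underlying energy, can be sketched compactly. The NumPy toy below is my own simplification: the network output is kept only if it decreases the data-fidelity energy, otherwise the iteration falls back to a plain gradient step, so the classical solver's convergence behavior is preserved. The `learned_module` is a smoothing stand-in for a trained network.

```python
import numpy as np

def energy(x, y, mask):
    # data-fidelity term of the reconstruction energy
    return 0.5 * np.sum(np.abs(mask * np.fft.fft2(x) - y) ** 2)

def grad_step(x, y, mask, step=0.5):
    return x - step * np.fft.ifft2(mask * (mask * np.fft.fft2(x) - y)).real

def learned_module(x):
    return 0.5 * (x + np.roll(x, 1, axis=0))   # placeholder for a trained network

def reconstruct(y, mask, iters=50):
    x = np.fft.ifft2(mask * y).real            # zero-filled initialization
    for _ in range(iters):
        x_net = learned_module(grad_step(x, y, mask))
        # checking mechanism: keep the learned update only if it descends
        x = x_net if energy(x_net, y, mask) < energy(x, y, mask) else grad_step(x, y, mask)
    return x

rng = np.random.default_rng(0)
img = rng.random((32, 32))
mask = rng.random((32, 32)) < 0.4              # random k-space sampling pattern
x_hat = reconstruct(mask * np.fft.fft2(img), mask)
```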

Masked-RPCA: Sparse and Low-rank Decomposition Under Overlaying Model and Application to Moving Object Detection

Title Masked-RPCA: Sparse and Low-rank Decomposition Under Overlaying Model and Application to Moving Object Detection
Authors Amirhossein Khalilian-Gourtani, Shervin Minaee, Yao Wang
Abstract Foreground detection in a given video sequence is a pivotal step in many computer vision applications such as video surveillance systems. Robust Principal Component Analysis (RPCA) performs low-rank and sparse decomposition and accomplishes such a task when the background is stationary and the foreground is dynamic and relatively small. A fundamental issue with RPCA is the assumption that the low-rank and sparse components are added at each element, whereas in reality the moving foreground is overlaid on the background. We propose representation via masked decomposition (i.e. an overlaying model) where each element belongs to either the low-rank or the sparse component, as decided by a mask. We propose the Masked-RPCA algorithm to recover the mask and the low-rank component simultaneously, utilizing linearization and alternating-direction techniques. We further extend our formulation to be robust to dynamic changes in the background and to enforce spatial connectivity in the foreground component. Our study shows significant improvement of the detected mask compared to post-processing the sparse component obtained by other frameworks.
Tasks Object Detection
Published 2019-09-17
URL https://arxiv.org/abs/1909.08049v1
PDF https://arxiv.org/pdf/1909.08049v1.pdf
PWC https://paperswithcode.com/paper/masked-rpca-sparse-and-low-rank-decomposition
Repo
Framework
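
The overlaying model is easiest to see in a simplified alternating scheme. The sketch below is my own loose rendition, not the paper's linearized alternating-direction method: each pixel is explained by either the low-rank background L or the foreground, recorded in a binary mask W, and the two are updated in turn.

```python
import numpy as np

def svd_shrink(X, tau):
    """Singular value thresholding: the standard low-rank proximal step."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def masked_rpca(D, tau=1.0, thresh=0.5, iters=30):
    L = D.copy()
    W = np.zeros_like(D)                       # foreground mask
    for _ in range(iters):
        # background update: keep the current estimate on masked-out pixels
        L = svd_shrink(np.where(W > 0, L, D), tau)
        # mask update: flag pixels the background cannot explain
        W = (np.abs(D - L) > thresh).astype(float)
    return L, W

rng = np.random.default_rng(0)
D = np.outer(np.ones(40), rng.random(60))      # rank-1 static background
D[10:20, 25:35] += 2.0                         # overlaid "moving object"
L, W = masked_rpca(D)
print("foreground pixels found:", int(W.sum()))
```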

Compressed Sensing MRI via a Multi-scale Dilated Residual Convolution Network

Title Compressed Sensing MRI via a Multi-scale Dilated Residual Convolution Network
Authors Yuxiang Dai, Peixian Zhuang
Abstract Magnetic resonance imaging (MRI) reconstruction is an actively studied inverse problem that conventional compressed sensing (CS) MRI algorithms address by exploiting the sparse nature of MRI in an iterative, optimization-based manner. However, iterative optimization-based CSMRI methods have two main drawbacks: they are time-consuming and limited in model capacity. Meanwhile, one main challenge for recent deep learning-based CSMRI is the trade-off between model performance and network size. To address these issues, we develop a new multi-scale dilated network for MRI reconstruction with high speed and outstanding performance. Compared to convolutional kernels with the same receptive fields, dilated convolutions reduce network parameters with smaller kernels while expanding the receptive fields of kernels to capture almost the same information. To maintain the richness of features, we present global and local residual learning to extract more image edges and details. We then utilize concatenation layers to fuse multi-scale features and residual learning for better reconstruction. Compared with several non-deep and deep learning CSMRI algorithms, the proposed method yields better reconstruction accuracy and noticeable visual improvements. In addition, we evaluate a noisy setting to verify the model's stability, and then extend the proposed model to an MRI super-resolution task.
Tasks Super-Resolution
Published 2019-06-11
URL https://arxiv.org/abs/1906.05251v1
PDF https://arxiv.org/pdf/1906.05251v1.pdf
PWC https://paperswithcode.com/paper/compressed-sensing-mri-via-a-multi-scale
Repo
Framework
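
The building block the abstract describes, parallel dilated convolutions whose outputs are concatenated, fused, and wrapped in a local residual connection, can be sketched in a few lines of PyTorch. Channel counts and dilation rates below are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    def __init__(self, ch=32, dilations=(1, 2, 4)):
        super().__init__()
        # padding = dilation keeps the spatial size for a 3x3 kernel
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations
        )
        self.fuse = nn.Conv2d(ch * len(dilations), ch, 1)  # concat-then-fuse
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        multi = torch.cat([self.act(b(x)) for b in self.branches], dim=1)
        return x + self.fuse(multi)                        # local residual

x = torch.randn(1, 32, 64, 64)
print(DilatedResBlock()(x).shape)   # torch.Size([1, 32, 64, 64])
```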

Sub-Architecture Ensemble Pruning in Neural Architecture Search

Title Sub-Architecture Ensemble Pruning in Neural Architecture Search
Authors Yijun Bian, Qingquan Song, Mengnan Du, Jun Yao, Huanhuan Chen, Xia Hu
Abstract Neural architecture search (NAS) is gaining more and more attention in recent years due to its flexibility and its remarkable capability to reduce the burden of neural network design. To achieve better performance, however, the search process usually costs massive computation, which might not be affordable to researchers and practitioners. While recent attempts have employed ensemble learning methods to mitigate the enormous computation, an essential characteristic of ensemble methods, diversity, is missed out, causing more similar sub-architectures to be gathered and potential redundancy in the final ensemble architecture. To bridge this gap, we propose a pruning method for NAS ensembles named "Sub-Architecture Ensemble Pruning in Neural Architecture Search (SAEP)." It aims to utilize diversity and achieve sub-ensemble architectures of smaller size with comparable performance to the unpruned ensemble architectures. Three possible solutions are proposed to decide which sub-architectures should be pruned during the search process. Experimental results demonstrate the effectiveness of the proposed method in largely reducing the size of ensemble architectures while maintaining the final performance. Moreover, distinct deeper architectures could be discovered if the searched sub-architectures are not diverse enough.
Tasks Neural Architecture Search
Published 2019-10-01
URL https://arxiv.org/abs/1910.00370v1
PDF https://arxiv.org/pdf/1910.00370v1.pdf
PWC https://paperswithcode.com/paper/sub-architecture-ensemble-pruning-in-neural
Repo
Framework
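
To illustrate diversity-driven pruning, here is a toy greedy criterion (my own stand-in; the paper proposes three specific solutions): repeatedly drop the sub-architecture whose predictions agree most with the rest of the ensemble.

```python
import numpy as np

def prune_ensemble(preds, keep):
    """preds: (n_members, n_samples) class predictions; keep: target ensemble size."""
    members = list(range(len(preds)))
    while len(members) > keep:
        # average pairwise agreement; high agreement = low diversity contribution
        agree = np.array([
            np.mean([np.mean(preds[i] == preds[j]) for j in members if j != i])
            for i in members
        ])
        members.pop(int(np.argmax(agree)))   # remove the most redundant member
    return members

preds = np.random.default_rng(0).integers(0, 2, size=(6, 100))
print("kept sub-architectures:", prune_ensemble(preds, keep=3))
```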

Team QCRI-MIT at SemEval-2019 Task 4: Propaganda Analysis Meets Hyperpartisan News Detection

Title Team QCRI-MIT at SemEval-2019 Task 4: Propaganda Analysis Meets Hyperpartisan News Detection
Authors Abdelrhman Saleh, Ramy Baly, Alberto Barrón-Cedeño, Giovanni Da San Martino, Mitra Mohtarami, Preslav Nakov, James Glass
Abstract In this paper, we describe our submission to SemEval-2019 Task 4 on Hyperpartisan News Detection. Our system relies on a variety of engineered features originally used to detect propaganda. This is based on the assumption that biased messages are propagandistic in the sense that they promote a particular political cause or viewpoint. We trained a logistic regression model with features ranging from simple bag-of-words to vocabulary richness and text readability features. Our system achieved 72.9% accuracy on the test data that is annotated manually and 60.8% on the test data that is annotated with distant supervision. Additional experiments showed that significant performance improvements can be achieved with better feature pre-processing.
Tasks
Published 2019-04-06
URL http://arxiv.org/abs/1904.03513v1
PDF http://arxiv.org/pdf/1904.03513v1.pdf
PWC https://paperswithcode.com/paper/team-qcri-mit-at-semeval-2019-task-4
Repo
Framework
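
A hedged sketch of this kind of feature-based system is below: bag-of-words combined with crude readability-style proxies feeding a logistic regression. The feature functions are illustrative stand-ins, not the team's engineered propaganda features.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer

def readability_features(texts):
    # crude proxies: average word length and vocabulary richness (type/token ratio)
    out = []
    for t in texts:
        words = t.split()
        out.append([np.mean([len(w) for w in words]) if words else 0.0,
                    len(set(words)) / max(len(words), 1)])
    return np.array(out)

model = Pipeline([
    ("features", FeatureUnion([
        ("bow", CountVectorizer(max_features=5000)),
        ("readability", FunctionTransformer(readability_features)),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

texts = ["the economy is doing great", "they are destroying our country"]
model.fit(texts, [0, 1])
print(model.predict(["our country is doing great"]))
```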

Glioma Grade Predictions using Scattering Wavelet Transform-Based Radiomics

Title Glioma Grade Predictions using Scattering Wavelet Transform-Based Radiomics
Authors Qijian Chen, Lihui Wang, Li Wang, Zeyu Deng, Jian Zhang, Yuemin Zhu
Abstract Glioma grading before surgery is critical for prognosis prediction and treatment planning. In this paper, we present a novel scattering-wavelet-based radiomics method to predict glioma grades noninvasively and accurately. The multimodal magnetic resonance images of 285 patients were used, with the intratumoral and peritumoral regions well labeled. Wavelet scattering-based features and traditional radiomics features were first extracted from both the intratumoral and peritumoral regions. A support vector machine (SVM), logistic regression (LR) and random forest (RF) were then trained with 5-fold cross-validation to predict the glioma grades. The predictions obtained with the different features were finally evaluated in terms of quantitative metrics. The area under the receiver operating characteristic curve (AUC) of glioma grade prediction based on scattering wavelet features was up to 0.99 when considering both intratumoral and peritumoral features in multimodal images, an increase of about 17% over traditional radiomics. These results show that the locally invariant features extracted by the scattering wavelet transform improve the prediction accuracy for glioma grading. In addition, the features extracted from peritumoral regions further increase the accuracy of glioma grading.
Tasks
Published 2019-05-23
URL https://arxiv.org/abs/1905.09589v1
PDF https://arxiv.org/pdf/1905.09589v1.pdf
PWC https://paperswithcode.com/paper/glioma-grade-predictions-using-scattering
Repo
Framework
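
The evaluation protocol (three classifiers, 5-fold cross-validation, AUC) is straightforward to sketch with scikit-learn. The feature matrix below is a random placeholder; in practice the scattering and radiomics features would come from dedicated packages.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((285, 128))          # placeholder feature matrix
y = rng.integers(0, 2, size=285)             # placeholder binary grades

for name, clf in [("SVM", SVC()),
                  ("LR", LogisticRegression(max_iter=1000)),
                  ("RF", RandomForestClassifier(n_estimators=200))]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```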

Efficient Multi-Objective Optimization through Population-based Parallel Surrogate Search

Title Efficient Multi-Objective Optimization through Population-based Parallel Surrogate Search
Authors Taimoor Akhtar, Christine A. Shoemaker
Abstract Multi-Objective Optimization (MOO) is very difficult for expensive functions because most current MOO methods rely on a large number of function evaluations to get an accurate solution. We address this problem with surrogate approximation and parallel computation. We develop an MOO algorithm, MOPLS-N, for expensive functions that combines iteratively updated surrogate approximations of the objective functions with a structure for efficiently selecting a population of $N$ points so that the expensive objectives for all points are simultaneously evaluated on $N$ processors in each iteration. MOPLS incorporates Radial Basis Function (RBF) approximation, Tabu Search and local candidate search around multiple points to strike a balance between exploration, exploitation and diversification during each algorithm iteration. Eleven test problems (with 8 to 24 decision variables) and two real-world watershed problems are used to compare the performance of MOPLS to ParEGO, GOMORS, Borg, MOEA/D, and NSGA-III on a limited budget of evaluations with between 1 (serial) and 64 processors. MOPLS in serial is better than all non-RBF serial methods tested. Parallel speedup of MOPLS is higher than all other parallel algorithms with 16 and 64 processors. With both algorithms on 64 processors, MOPLS is at least 2 times faster than NSGA-III on the watershed problems.
Tasks
Published 2019-03-06
URL http://arxiv.org/abs/1903.02167v1
PDF http://arxiv.org/pdf/1903.02167v1.pdf
PWC https://paperswithcode.com/paper/efficient-multi-objective-optimization
Repo
Framework
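
A compact sketch of the surrogate-assisted selection step is given below, heavily simplified from MOPLS: one RBF surrogate per objective, local candidates generated around evaluated points, and a batch of N points chosen per iteration for parallel evaluation. The random-weight scalarization stands in for the algorithm's actual non-domination and tabu rules.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def select_batch(X, F, n_batch=4, n_cand=200, radius=0.1, rng=None):
    """X: (n, d) evaluated points; F: (n, k) objective values."""
    rng = rng or np.random.default_rng()
    surrogates = [RBFInterpolator(X, F[:, j]) for j in range(F.shape[1])]
    # local candidate search: perturb already-evaluated points
    cand = (X[rng.integers(len(X), size=n_cand)]
            + radius * rng.standard_normal((n_cand, X.shape[1])))
    pred = np.column_stack([s(cand) for s in surrogates])
    # crude random-weight scalarization, one weight vector per batch slot
    scores = pred @ rng.dirichlet(np.ones(F.shape[1]), size=n_batch).T
    return cand[np.argmin(scores, axis=0)]   # N points to evaluate in parallel

X = np.random.rand(20, 3)                        # evaluated decision vectors
F = np.column_stack([X.sum(1), (1 - X).sum(1)])  # two toy objectives
print(select_batch(X, F))
```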

Recover and Identify: A Generative Dual Model for Cross-Resolution Person Re-Identification

Title Recover and Identify: A Generative Dual Model for Cross-Resolution Person Re-Identification
Authors Yu-Jhe Li, Yun-Chun Chen, Yen-Yu Lin, Xiaofei Du, Yu-Chiang Frank Wang
Abstract Person re-identification (re-ID) aims at matching images of the same identity across camera views. Due to varying distances between cameras and persons of interest, resolution mismatch can be expected, which would degrade person re-ID performance in real-world scenarios. To overcome this problem, we propose a novel generative adversarial network to address cross-resolution person re-ID, allowing query images with varying resolutions. By advancing adversarial learning techniques, our proposed model learns resolution-invariant image representations while being able to recover the missing details in low-resolution input images. The resulting features can be jointly applied for improving person re-ID performance due to preserving resolution invariance and recovering re-ID oriented discriminative details. Our experiments on five benchmark datasets confirm the effectiveness of our approach and its superiority over the state-of-the-art methods, especially when the input resolutions are unseen during training.
Tasks Person Re-Identification
Published 2019-08-16
URL https://arxiv.org/abs/1908.06052v1
PDF https://arxiv.org/pdf/1908.06052v1.pdf
PWC https://paperswithcode.com/paper/recover-and-identify-a-generative-dual-model
Repo
Framework
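
The dual objective, recover the missing detail while classifying identity from shared features, can be rendered schematically in PyTorch. The sketch below is my own minimal version with the adversarial terms omitted; layer sizes and the identity count are illustrative.

```python
import torch
import torch.nn as nn

class DualModel(nn.Module):
    def __init__(self, n_ids=751):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(32, 3, 3, padding=1)     # recovery branch
        self.id_head = nn.Linear(32, n_ids)               # re-ID branch

    def forward(self, x_lr):
        feat = self.encoder(x_lr)
        recon = self.decoder(feat)                        # recover missing details
        logits = self.id_head(feat.mean(dim=(2, 3)))      # globally pooled feature
        return recon, logits

model = DualModel()
x_lr, x_hr = torch.randn(2, 3, 64, 32), torch.randn(2, 3, 64, 32)
labels = torch.tensor([0, 1])
recon, logits = model(x_lr)
# joint loss pushes features to be both resolution-invariant and discriminative
loss = nn.functional.mse_loss(recon, x_hr) + nn.functional.cross_entropy(logits, labels)
loss.backward()
```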

DARTS: Dialectal Arabic Transcription System

Title DARTS: Dialectal Arabic Transcription System
Authors Sameer Khurana, Ahmed Ali, James Glass
Abstract We present DARTS, a speech-to-text transcription system for the low-resource Egyptian Arabic dialect. We analyze the following: transfer learning from the high-resource broadcast domain to the low-resource dialectal domain, and semi-supervised learning where we use in-domain unlabeled audio data collected from YouTube. Key features of our system are: a deep neural network acoustic model that consists of a front-end Convolutional Neural Network (CNN) followed by several layers of Time-Delay Neural Network (TDNN) and Long Short-Term Memory Recurrent Neural Network (LSTM); sequence-discriminative training of the acoustic model; and n-gram and recurrent neural network language models for decoding and N-best list rescoring. We show that a simple transfer learning method can achieve good results. The results are further improved by using unlabeled data from YouTube in a semi-supervised setup. Various systems are combined to give the final system, which achieves the lowest word error rate on the community-standard Egyptian Arabic speech dataset (MGB-3).
Tasks Language Modelling, Transfer Learning
Published 2019-09-26
URL https://arxiv.org/abs/1909.12163v1
PDF https://arxiv.org/pdf/1909.12163v1.pdf
PWC https://paperswithcode.com/paper/darts-dialectal-arabic-transcription-system
Repo
Framework
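
A structural PyTorch sketch of the described acoustic model is below: a CNN front end, TDNN layers rendered as dilated 1-D convolutions, and an LSTM on top. Dimensions are illustrative, and sequence-discriminative training and decoding are out of scope.

```python
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    def __init__(self, n_feats=40, hidden=256, n_targets=2000):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(n_feats, hidden, 3, padding=1), nn.ReLU())
        self.tdnn = nn.Sequential(                 # TDNN layers as dilated 1-D convs
            nn.Conv1d(hidden, hidden, 3, dilation=1, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 3, dilation=3, padding=3), nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_targets)    # per-frame acoustic targets

    def forward(self, feats):                      # feats: (batch, time, n_feats)
        x = self.tdnn(self.cnn(feats.transpose(1, 2))).transpose(1, 2)
        x, _ = self.lstm(x)
        return self.out(x)

print(AcousticModel()(torch.randn(2, 100, 40)).shape)  # (2, 100, 2000)
```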

Challenges with EM in application to weakly identifiable mixture models

Title Challenges with EM in application to weakly identifiable mixture models
Authors Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan, Bin Yu
Abstract We study a class of weakly identifiable location-scale mixture models for which the maximum likelihood estimates based on $n$ i.i.d. samples are known to have lower accuracy than the classical $n^{- \frac{1}{2}}$ error. We investigate whether the Expectation-Maximization (EM) algorithm also converges slowly for these models. We first demonstrate via simulation studies a broad range of over-specified mixture models for which the EM algorithm converges very slowly, both in one and higher dimensions. We provide a complete analytical characterization of this behavior for fitting data generated from a multivariate standard normal distribution using two-component Gaussian mixture with varying location and scale parameters. Our results reveal distinct regimes in the convergence behavior of EM as a function of the dimension $d$. In the multivariate setting ($d \geq 2$), when the covariance matrix is constrained to a multiple of the identity matrix, the EM algorithm converges in order $(n/d)^{\frac{1}{2}}$ steps and returns estimates that are at a Euclidean distance of order ${(n/d)^{-\frac{1}{4}}}$ and ${ (n d)^{- \frac{1}{2}}}$ from the true location and scale parameter respectively. On the other hand, in the univariate setting ($d = 1$), the EM algorithm converges in order $n^{\frac{3}{4} }$ steps and returns estimates that are at a Euclidean distance of order ${ n^{- \frac{1}{8}}}$ and ${ n^{-\frac{1} {4}}}$ from the true location and scale parameter respectively. Establishing the slow rates in the univariate setting requires a novel localization argument with two stages, with each stage involving an epoch-based argument applied to a different surrogate EM operator at the population level. We also show multivariate ($d \geq 2$) examples, involving more general covariance matrices, that exhibit the same slow rates as the univariate case.
Tasks
Published 2019-02-01
URL http://arxiv.org/abs/1902.00194v1
PDF http://arxiv.org/pdf/1902.00194v1.pdf
PWC https://paperswithcode.com/paper/challenges-with-em-in-application-to-weakly
Repo
Framework
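
The over-specified setting the paper analyzes is simple to reproduce: EM fits a symmetric two-component mixture $\frac{1}{2}N(\theta, \sigma^2) + \frac{1}{2}N(-\theta, \sigma^2)$ to data drawn from a standard normal, and the location estimate decays to the true value $\theta = 0$ only very slowly. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)              # true model: a single N(0, 1)

theta, sigma2 = 1.0, 0.5                     # deliberately misspecified start
for step in range(500):
    # E-step: responsibility of the +theta component (a sigmoid of 2*x*theta/sigma2)
    w = 1.0 / (1.0 + np.exp(-2.0 * x * theta / sigma2))
    # M-step for the shared location and scale
    theta = np.mean((2 * w - 1) * x)
    sigma2 = np.mean(w * (x - theta) ** 2 + (1 - w) * (x + theta) ** 2)

print(theta, sigma2)                          # theta shrinks toward 0, but slowly
```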

General Board Game Playing for Education and Research in Generic AI Game Learning

Title General Board Game Playing for Education and Research in Generic AI Game Learning
Authors Wolfgang Konen
Abstract We present a new general board game (GBG) playing and learning framework. GBG defines common interfaces for board games, game states and their AI agents. It allows one to run competitions of different agents on different games. It standardizes those parts of board game playing and learning that would otherwise be tedious and repetitive to code. GBG is suitable for arbitrary 1-, 2-, …, N-player board games. It makes a generic TD($\lambda$)-n-tuple agent available for arbitrary games for the first time. On various games, the TD($\lambda$)-n-tuple agent is found to be superior to other generic agents like MCTS. GBG serves an educational purpose, helping students to start faster in the area of game learning. It also serves research by collecting a growing set of games and AI agents to assess their strengths and generalization capabilities in meaningful competitions. Initial successful educational and research results are reported.
Tasks Board Games
Published 2019-07-11
URL https://arxiv.org/abs/1907.06508v1
PDF https://arxiv.org/pdf/1907.06508v1.pdf
PWC https://paperswithcode.com/paper/general-board-game-playing-for-education-and
Repo
Framework
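
The flavor of the common interfaces GBG standardizes can be conveyed with a small sketch. GBG itself is a Java framework; the Python rendering below uses my own illustrative names, not GBG's API.

```python
from abc import ABC, abstractmethod

class GameState(ABC):
    @abstractmethod
    def legal_actions(self): ...
    @abstractmethod
    def advance(self, action) -> "GameState": ...
    @abstractmethod
    def is_over(self) -> bool: ...
    @abstractmethod
    def score(self, player: int) -> float: ...

class Agent(ABC):
    @abstractmethod
    def act(self, state: GameState): ...

def run_episode(state: GameState, agents: list[Agent]) -> list[float]:
    """Generic game loop: any N-player game and any agents plug in unchanged."""
    player = 0
    while not state.is_over():
        state = state.advance(agents[player].act(state))
        player = (player + 1) % len(agents)
    return [state.score(p) for p in range(len(agents))]
```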

Personalized Cancer Chemotherapy Schedule: a numerical comparison of performance and robustness in model-based and model-free scheduling methodologies

Title Personalized Cancer Chemotherapy Schedule: a numerical comparison of performance and robustness in model-based and model-free scheduling methodologies
Authors Jesus Tordesillas, Juncal Arbelaiz
Abstract Reinforcement learning algorithms are gaining popularity in fields in which optimal scheduling is important, and oncology is not an exception. The complex and uncertain dynamics of cancer limit the performance of traditional model-based scheduling strategies like Optimal Control. Motivated by the recent success of model-free Deep Reinforcement Learning (DRL) in challenging control tasks and in the design of medical treatments, we use Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG) to design a personalized cancer chemotherapy schedule. We show that both succeed in the task and outperform the Optimal Control solution in the presence of uncertainty. Furthermore, we show that DDPG can exterminate cancer more efficiently than DQN, presumably due to its continuous action space. Finally, we provide some insight regarding the number of samples required for training.
Tasks Q-Learning
Published 2019-04-02
URL https://arxiv.org/abs/1904.01200v3
PDF https://arxiv.org/pdf/1904.01200v3.pdf
PWC https://paperswithcode.com/paper/personalized-cancer-chemotherapy-schedule-a
Repo
Framework
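
To make the scheduling formulation concrete, here is a toy environment sketch with my own simplified dynamics (not the paper's tumor model): the state tracks tumor size and accumulated toxicity, the action is a dose level, and the reward penalizes both. A DQN or DDPG agent would be trained against this interface.

```python
import numpy as np

class ChemoEnv:
    def __init__(self, rng=None):
        self.rng = rng or np.random.default_rng()

    def reset(self):
        self.tumor, self.tox = 1.0, 0.0
        return np.array([self.tumor, self.tox])

    def step(self, dose):                     # dose in [0, 1]
        growth = 0.05 * self.tumor * (1 + 0.1 * self.rng.standard_normal())
        self.tumor = max(self.tumor + growth - 0.15 * dose * self.tumor, 0.0)
        self.tox = 0.9 * self.tox + 0.2 * dose
        reward = -self.tumor - 0.5 * self.tox  # penalize burden and toxicity
        done = self.tumor < 1e-3 or self.tox > 2.0
        return np.array([self.tumor, self.tox]), reward, done

env = ChemoEnv(np.random.default_rng(0))
state, total = env.reset(), 0.0
for _ in range(50):                           # constant-dose baseline rollout
    state, r, done = env.step(0.5)
    total += r
    if done:
        break
print("return of constant dosing:", round(total, 2))
```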

Explainable Artificial Intelligence (XAI) for 6G: Improving Trust between Human and Machine

Title Explainable Artificial Intelligence (XAI) for 6G: Improving Trust between Human and Machine
Authors Weisi Guo
Abstract As the 5th Generation (5G) mobile networks bring about global societal benefits, the design phase for the 6th Generation (6G) has started. 6G will need to enable greater levels of autonomy, improve human-machine interfacing, and achieve deep connectivity in more diverse environments. The need for increased explainability to enable trust is critical for 6G, as it will manage a wide range of mission-critical services (e.g. autonomous driving) and safety-critical tasks (e.g. remote surgery). As we migrate from traditional model-based optimisation to deep learning, the trust we have in our optimisation modules decreases. This loss of trust means we cannot understand the impact of 1) poor, biased, or malicious data and 2) neural network design on decisions; nor can we explain the network’s actions to the engineer or the public. In this review, we outline the core concepts of Explainable Artificial Intelligence (XAI) for 6G, including: public and legal motivations, definitions of explainability, performance vs. explainability trade-offs, methods to improve explainability, and frameworks to incorporate XAI into future wireless systems. Our review is grounded in case studies for both PHY and MAC layer optimisation, and provides the community with an important research area to embark upon.
Tasks Autonomous Driving
Published 2019-11-11
URL https://arxiv.org/abs/1911.04542v2
PDF https://arxiv.org/pdf/1911.04542v2.pdf
PWC https://paperswithcode.com/paper/explainable-artificial-intelligence-xai-for
Repo
Framework