Paper Group ANR 1292
MM for Penalized Estimation
Title | MM for Penalized Estimation |
Authors | Zhu Wang |
Abstract | Penalized estimation can conduct variable selection and parameter estimation simultaneously. The general framework is to minimize a loss function subject to a penalty designed to generate sparse variable selection. Much of the previous work has focused on convex loss functions, including generalized linear models. When data are contaminated with noise, robust loss functions are typically introduced. Recent literature has witnessed a growing impact of nonconvex loss-based methods, which can generate robust estimation for data contaminated with outliers. This article investigates robust variable selection based on penalized nonconvex loss functions. We study properties of the local and global minimizers of the original penalized loss function and of the surrogate penalized loss function induced by the majorization-minimization (MM) algorithm for numerical computation. We establish convergence theory of the proposed MM algorithm for penalized convex and nonconvex loss functions. The performance of the proposed algorithms for regression and classification problems is evaluated on simulated and real data, including healthcare costs and cancer clinical status. Efficient implementations of the algorithms are available in the R package mpath on CRAN. |
Tasks | |
Published | 2019-12-23 |
URL | https://arxiv.org/abs/1912.11119v1 |
https://arxiv.org/pdf/1912.11119v1.pdf | |
PWC | https://paperswithcode.com/paper/mm-for-penalized-estimation |
Repo | |
Framework | |
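To make the MM idea concrete, here is a minimal sketch of majorization-minimization for an L1-penalized least-squares loss: each |b_j| term is majorized by a quadratic at the current iterate, so every MM step reduces to a ridge-like linear solve. This is an illustrative toy, not the mpath implementation; the function name, warm start, and tolerance are assumptions.

```python
import numpy as np

def mm_lasso(X, y, lam, n_iter=100, eps=1e-8, tol=1e-6):
    """MM for the lasso: majorize lam*|b_j| at b_j^k by
    lam*(b_j^2 / (2|b_j^k|) + |b_j^k|/2), giving a weighted ridge update."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # warm start
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        w = lam / (np.abs(beta) + eps)           # majorizer weights
        beta_new = np.linalg.solve(XtX + np.diag(w), Xty)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

With a nonconvex loss, the same scheme applies after additionally majorizing the loss itself by a convex surrogate at the current iterate, which is the setting the paper analyzes.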
Learning Selection Masks for Deep Neural Networks
Title | Learning Selection Masks for Deep Neural Networks |
Authors | Stefan Oehmcke, Fabian Gieseke |
Abstract | Data often have to be moved between servers and clients during the inference phase. For instance, modern virtual assistants collect data on mobile devices and send the data to remote servers for analysis. A related scenario is that clients have to access and download large amounts of data stored on servers in order to apply machine learning models. Depending on the available bandwidth, this data transfer can be a serious bottleneck, which can significantly limit the application of machine learning models. In this work, we propose a simple yet effective framework that allows selecting the parts of the input data needed for the subsequent application of a given neural network. Both the masks and the neural network are trained simultaneously such that good model performance is achieved while, at the same time, only a minimal amount of data is selected by the masks. During the inference phase, only the parts selected by the masks have to be transferred between the server and the client. Our experimental evaluation indicates that, for certain learning tasks, it is possible to significantly reduce the amount of data that needs to be transferred without much effect on model performance. |
Tasks | |
Published | 2019-06-11 |
URL | https://arxiv.org/abs/1906.04673v1 |
https://arxiv.org/pdf/1906.04673v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-selection-masks-for-deep-neural |
Repo | |
Framework | |
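A hedged sketch of the joint mask-and-network training described above: a learnable soft mask gates the input, and an L1 term on the mask trades task loss against the amount of selected data. The architecture, mask parameterization, and penalty weight here are assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn

class MaskedNet(nn.Module):
    """Input-selection mask trained jointly with the classifier:
    a sigmoid over learnable logits gives soft per-feature masks."""
    def __init__(self, n_features, n_classes):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.zeros(n_features))
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        mask = torch.sigmoid(self.mask_logits)
        return self.net(x * mask), mask

# joint loss: task loss + sparsity pressure on the mask
model = MaskedNet(784, 10)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
logits, mask = model(x)
loss = nn.functional.cross_entropy(logits, y) + 1e-3 * mask.abs().sum()
loss.backward()
```

At inference time, only the features whose mask entries survive thresholding would need to be transmitted to the server.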
Image Disentanglement and Uncooperative Re-Entanglement for High-Fidelity Image-to-Image Translation
Title | Image Disentanglement and Uncooperative Re-Entanglement for High-Fidelity Image-to-Image Translation |
Authors | Adam W. Harley, Shih-En Wei, Jason Saragih, Katerina Fragkiadaki |
Abstract | Cross-domain image-to-image translation should satisfy two requirements: (1) preserve the information that is common to both domains, and (2) generate convincing images covering variations that appear in the target domain. This is challenging, especially when there are no example translations available as supervision. Adversarial cycle consistency was recently proposed as a solution, with beautiful and creative results, yielding much follow-up work. However, augmented reality applications cannot readily use such techniques to provide users with compelling translations of real scenes, because the translations do not have high-fidelity constraints. In other words, current models are liable to change details that should be preserved: while re-texturing a face, they may alter the face’s expression in an unpredictable way. In this paper, we introduce the problem of high-fidelity image-to-image translation, and present a method for solving it. Our main insight is that low-fidelity translations typically escape a cycle-consistency penalty, because the back-translator learns to compensate for the forward-translator’s errors. We therefore introduce an optimization technique that prevents the networks from cooperating: simply train each network only when its input data is real. Prior works, in comparison, train each network with a mix of real and generated data. Experimental results show that our method accurately disentangles the factors that separate the domains, and converges to semantics-preserving translations that prior methods miss. |
Tasks | Image-to-Image Translation |
Published | 2019-01-11 |
URL | https://arxiv.org/abs/1901.03628v2 |
https://arxiv.org/pdf/1901.03628v2.pdf | |
PWC | https://paperswithcode.com/paper/image-disentanglement-and-uncooperative-re |
Repo | |
Framework | |
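The "uncooperative" optimization is easy to state in code: each translator receives cycle-consistency gradients only on cycles that start from its own real inputs, so the back-translator cannot learn to compensate for the forward-translator's errors. The sketch below assumes two PyTorch translators `F` and `G` with optimizers `opt_F` and `opt_G`, and omits the adversarial terms.

```python
def uncooperative_step(F, G, opt_F, opt_G, real_x, real_y):
    """Update each translator only on the cycle starting from its own
    real inputs; the other network is used but frozen."""
    # x-cycle: gradients flow through G to reach F, but only F is updated
    for p in G.parameters():
        p.requires_grad_(False)
    loss_x = (G(F(real_x)) - real_x).abs().mean()  # L1 cycle loss
    opt_F.zero_grad(); loss_x.backward(); opt_F.step()
    for p in G.parameters():
        p.requires_grad_(True)

    # y-cycle: symmetric update for G
    for p in F.parameters():
        p.requires_grad_(False)
    loss_y = (F(G(real_y)) - real_y).abs().mean()
    opt_G.zero_grad(); loss_y.backward(); opt_G.step()
    for p in F.parameters():
        p.requires_grad_(True)
```

By contrast, standard cycle training would backpropagate both cycles into both networks, which is exactly the cooperation the paper argues lets low-fidelity translations escape the penalty.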
Predicting the Topical Stance of Media and Popular Twitter Users
Title | Predicting the Topical Stance of Media and Popular Twitter Users |
Authors | Peter Stefanov, Kareem Darwish, Preslav Nakov |
Abstract | Controversial social and political issues of the day spur people to express their opinion on social networks, often sharing links to online media articles and reposting statements from prominent members of the platforms. Discovering the stances of people and entire media outlets on current, debatable topics is important for social statisticians and policy makers. Many supervised solutions exist for determining viewpoints, but manually annotating training data is costly. In this paper, we propose a method that uses unsupervised learning and is able to characterize both the general political leaning of online media and of popular Twitter users, as well as their stances with respect to controversial topics, by leveraging the retweet behavior of users. We evaluate the model by comparing its bias predictions to gold labels from the Media Bias/Fact Check website, and we further perform manual analysis. |
Tasks | |
Published | 2019-07-02 |
URL | https://arxiv.org/abs/1907.01260v1 |
https://arxiv.org/pdf/1907.01260v1.pdf | |
PWC | https://paperswithcode.com/paper/predicting-the-topical-stance-of-media-and |
Repo | |
Framework | |
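One generic way to operationalize "unsupervised stance from retweet behavior" is to embed users by whom they retweet and cluster the embeddings. The pipeline below (SVD plus k-means) is an assumed illustration in that spirit, not the authors' method.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

def stance_clusters(pairs, n_users, n_accounts, k=2):
    """Embed users by whom they retweet, then cluster.
    `pairs` is a list of (user_idx, retweeted_account_idx) tuples."""
    rows, cols = zip(*pairs)
    M = csr_matrix((np.ones(len(pairs)), (rows, cols)),
                   shape=(n_users, n_accounts))
    emb = TruncatedSVD(n_components=10).fit_transform(M)
    return KMeans(n_clusters=k, n_init=10).fit_predict(emb)
```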
Transfer Learning for Segmenting Dimensionally-Reduced Hyperspectral Images
Title | Transfer Learning for Segmenting Dimensionally-Reduced Hyperspectral Images |
Authors | Jakub Nalepa, Michal Myller, Michal Kawulok |
Abstract | Deep learning has established the state of the art in multiple fields, including hyperspectral image analysis. However, training large-capacity learners to segment such imagery requires representative training sets. Acquiring such data is human-dependent and time-consuming, especially in Earth observation scenarios, where hyperspectral data transfer is very costly and time-constrained. In this letter, we show how to effectively deal with the limited number and size of available hyperspectral ground-truth sets, and we apply transfer learning to build deep feature extractors. Also, we exploit spectral dimensionality reduction to make our technique applicable to hyperspectral data acquired by different sensors, which may capture different numbers of hyperspectral bands. The experiments, performed on several benchmarks and backed up with statistical tests, indicate that our approach allows us to effectively train well-generalizing deep convolutional neural nets even using significantly reduced data. |
Tasks | Dimensionality Reduction, Transfer Learning |
Published | 2019-06-23 |
URL | https://arxiv.org/abs/1906.09631v1 |
https://arxiv.org/pdf/1906.09631v1.pdf | |
PWC | https://paperswithcode.com/paper/transfer-learning-for-segmenting |
Repo | |
Framework | |
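The band-count mismatch across sensors is handled by spectral dimensionality reduction to a fixed number of components before the shared feature extractor. A minimal sketch, assuming PCA as the reduction (the letter's exact technique and component count may differ):

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(cube, n_components=30):
    """Project a hyperspectral cube of shape (H, W, B) onto a fixed
    number of spectral components, so cubes from sensors with different
    band counts B share one input shape for the transferred extractor."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float64)
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)
```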
Towards Personalized Management of Type B Aortic Dissection Using STENT: a STandard cta database with annotation of the ENtire aorta and True-false lumen
Title | Towards Personalized Management of Type B Aortic Dissection Using STENT: a STandard cta database with annotation of the ENtire aorta and True-false lumen |
Authors | Jianning Li, Long Cao, Yangyang Ge, Bowen Meng, Cheng Wang, Wei Guo |
Abstract | Type B Aortic Dissection (TBAD) is a rare aortic disease with a high 5-year mortality. Personalized and precise management of TBAD has been increasingly desired in the clinic, which requires the geometric parameters of TBAD specific to the patient to be measured accurately. This remains a challenging task for vascular surgeons, as manual measurement is highly subjective and imprecise. To solve this problem, we introduce STENT, a STandard cta database with annotation of the ENtire aorta and True-false lumen. The database contains 274 CT angiography (CTA) scans from 274 unique TBAD patients and is split into a training set (254 cases, including 210 preoperative and 44 postoperative scans) and a test set (20 cases). Based on STENT, we develop a series of methods, including automated TBAD segmentation and automated measurement of TBAD parameters, that facilitate personalized and precise management of the disease. In this work, the database and the proposed methods are thoroughly introduced and evaluated, and the results of our study show the feasibility and effectiveness of our approach to easing the decision-making process for vascular surgeons during personalized TBAD management. |
Tasks | Decision Making |
Published | 2019-01-03 |
URL | http://arxiv.org/abs/1901.04584v2 |
http://arxiv.org/pdf/1901.04584v2.pdf | |
PWC | https://paperswithcode.com/paper/towards-personalized-management-of-type-b |
Repo | |
Framework | |
Exploring Hate Speech Detection in Multimodal Publications
Title | Exploring Hate Speech Detection in Multimodal Publications |
Authors | Raul Gomez, Jaume Gibert, Lluis Gomez, Dimosthenis Karatzas |
Abstract | In this work we target the problem of hate speech detection in multimodal publications formed by a text and an image. We gather and annotate a large-scale dataset from Twitter, MMHS150K, and propose different models that jointly analyze textual and visual information for hate speech detection, comparing them with unimodal detection. We provide quantitative and qualitative results and analyze the challenges of the proposed task. We find that, even though images are useful for the hate speech detection task, current multimodal models cannot outperform models analyzing only text. We discuss why, and we open the field and the dataset for further research. |
Tasks | Hate Speech Detection |
Published | 2019-10-09 |
URL | https://arxiv.org/abs/1910.03814v1 |
https://arxiv.org/pdf/1910.03814v1.pdf | |
PWC | https://paperswithcode.com/paper/exploring-hate-speech-detection-in-multimodal |
Repo | |
Framework | |
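For orientation, a generic late-fusion baseline of the kind such comparisons typically include: precomputed text and image features are concatenated and classified jointly. This is an assumed illustration of joint textual-visual analysis, not one of the paper's exact models; all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenate precomputed text and image features and classify."""
    def __init__(self, text_dim=300, img_dim=2048, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + img_dim, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, text_feat, img_feat):
        return self.head(torch.cat([text_feat, img_feat], dim=-1))

# e.g. text_feat from a sentence encoder, img_feat from a CNN backbone
model = LateFusionClassifier()
logits = model(torch.randn(8, 300), torch.randn(8, 2048))
```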
Transferrable Prototypical Networks for Unsupervised Domain Adaptation
Title | Transferrable Prototypical Networks for Unsupervised Domain Adaptation |
Authors | Yingwei Pan, Ting Yao, Yehao Li, Yu Wang, Chong-Wah Ngo, Tao Mei |
Abstract | In this paper, we introduce a new idea for unsupervised domain adaptation via a remold of Prototypical Networks, which learn an embedding space and perform classification via the distances to the prototype of each class. Specifically, we present Transferrable Prototypical Networks (TPN) for adaptation such that the prototypes for each class in the source and target domains are close in the embedding space and the score distributions predicted by the prototypes separately on source and target data are similar. Technically, TPN initially matches each target example to the nearest prototype in the source domain and assigns it a “pseudo” label. The prototype of each class can then be computed on source-only, target-only and source-target data, respectively. TPN is trained end-to-end by jointly minimizing the distance across the prototypes on the three types of data and the KL-divergence of the score distributions output by each pair of prototypes. Extensive experiments are conducted on transfers across the MNIST, USPS and SVHN datasets, and superior results are reported in comparison to state-of-the-art approaches. More remarkably, we obtain a single-model accuracy of 80.4% on the VisDA 2017 dataset. |
Tasks | Domain Adaptation, Unsupervised Domain Adaptation |
Published | 2019-04-25 |
URL | http://arxiv.org/abs/1904.11227v1 |
http://arxiv.org/pdf/1904.11227v1.pdf | |
PWC | https://paperswithcode.com/paper/transferrable-prototypical-networks-for |
Repo | |
Framework | |
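The two core steps in the abstract — class prototypes as mean embeddings, and pseudo-labeling target examples by their nearest source prototype — fit in a few lines. A hedged sketch (Euclidean distances assumed; the full TPN objective adds the prototype-alignment and KL terms):

```python
import torch

def class_prototypes(feats, labels, n_classes):
    """Prototype of each class = mean embedding of its examples
    (assumes every class is present in the batch)."""
    return torch.stack([feats[labels == c].mean(dim=0)
                        for c in range(n_classes)])

def pseudo_labels(target_feats, source_protos):
    """Match each target example to the nearest source prototype and
    adopt that prototype's class as a "pseudo" label."""
    dists = torch.cdist(target_feats, source_protos)  # (n_target, n_classes)
    return dists.argmin(dim=1)
```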
Direction Matters: On Influence-Preserving Graph Summarization and Max-cut Principle for Directed Graphs
Title | Direction Matters: On Influence-Preserving Graph Summarization and Max-cut Principle for Directed Graphs |
Authors | Wenkai Xu, Gang Niu, Aapo Hyvärinen, Masashi Sugiyama |
Abstract | Summarizing large-scale directed graphs into small-scale representations is a useful but less studied problem setting. Conventional clustering approaches, which are based on “Min-Cut”-style criteria, compress both the vertices and edges of the graph into communities, which leads to a loss of directed-edge information. On the other hand, compressing the vertices while preserving the directed-edge information provides a way to learn a small-scale representation of a directed graph. The reconstruction error, which measures the edge information preserved by the summarized graph, can be used to learn such a representation. Compared to the original graphs, the summarized graphs are easier to analyze and are capable of extracting group-level features, which is useful for efficient interventions on population behavior. In this paper, we present a model, based on minimizing reconstruction error with non-negative constraints, which relates to a “Max-Cut” criterion and simultaneously identifies the compressed nodes and the directed compressed relations between these nodes. A multiplicative update algorithm with column-wise normalization is proposed. We further provide theoretical results on the identifiability of the model and on the convergence of the proposed algorithm. Experiments are conducted to demonstrate the accuracy and robustness of the proposed method. |
Tasks | |
Published | 2019-07-22 |
URL | https://arxiv.org/abs/1907.09588v1 |
https://arxiv.org/pdf/1907.09588v1.pdf | |
PWC | https://paperswithcode.com/paper/direction-matters-on-influence-preserving |
Repo | |
Framework | |
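As a rough illustration of "multiplicative updates with column-wise normalization", the sketch below applies standard MU rules to a nonnegative factorization A ≈ P S Pᵀ of a directed adjacency matrix, where columns of P soft-assign nodes to compressed nodes and S holds the directed relations between them. Both the factorization form and the update rules are assumptions for illustration; the paper's exact model differs in its details.

```python
import numpy as np

def summarize_directed(A, k, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates minimizing ||A - P S P^T||_F^2 with
    nonnegative P, S, followed by column-wise normalization of P."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    P, S = rng.random((n, k)), rng.random((k, k))
    for _ in range(n_iter):
        P *= (A @ P @ S.T + A.T @ P @ S) / (
            P @ S @ P.T @ P @ S.T + P @ S.T @ P.T @ P @ S + eps)
        S *= (P.T @ A @ P) / (P.T @ P @ S @ P.T @ P + eps)
        P /= P.sum(axis=0, keepdims=True) + eps  # column-wise normalization
    return P, S
```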
Sparse Group Lasso: Optimal Sample Complexity, Convergence Rate, and Statistical Inference
Title | Sparse Group Lasso: Optimal Sample Complexity, Convergence Rate, and Statistical Inference |
Authors | T. Tony Cai, Anru Zhang, Yuchen Zhou |
Abstract | In this paper, we study sparse group Lasso for high-dimensional double sparse linear regression, where the parameter of interest is simultaneously element-wise and group-wise sparse. This problem is an important instance of the simultaneously structured model – an actively studied topic in statistics and machine learning. In the noiseless case, we provide matching upper and lower bounds on sample complexity for the exact recovery of sparse vectors and for stable estimation of approximately sparse vectors, respectively. In the noisy case, we develop upper and matching minimax lower bounds for estimation error. We also consider the debiased sparse group Lasso and investigate its asymptotic property for the purpose of statistical inference. Finally, numerical studies are provided to support the theoretical results. |
Tasks | |
Published | 2019-09-21 |
URL | https://arxiv.org/abs/1909.09851v1 |
https://arxiv.org/pdf/1909.09851v1.pdf | |
PWC | https://paperswithcode.com/paper/190909851 |
Repo | |
Framework | |
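For reference, the estimator this line of work studies is the sparse group Lasso, which adds an element-wise l1 penalty and a group-wise l2 penalty to the least-squares loss (standard notation, assumed rather than quoted from the paper):

```latex
\hat{\beta} \;=\; \arg\min_{\beta \in \mathbb{R}^p}\;
  \frac{1}{2n}\,\lVert y - X\beta \rVert_2^2
  \;+\; \lambda_1 \lVert \beta \rVert_1
  \;+\; \lambda_2 \sum_{g=1}^{G} \lVert \beta_{(g)} \rVert_2
```

Here beta_(g) is the coefficient sub-vector of group g; lambda_1 drives element-wise sparsity and lambda_2 drives group-wise sparsity, matching the "simultaneously element-wise and group-wise sparse" structure in the abstract.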
Dealing with Stochasticity in Biological ODE Models
Title | Dealing with Stochasticity in Biological ODE Models |
Authors | Hamda Ajmal, Michael Madden, Catherine Enright |
Abstract | Mathematical modeling with Ordinary Differential Equations (ODEs) has proven to be extremely successful in a variety of fields, including biology. However, these models are completely deterministic given a certain set of initial conditions. We convert mathematical ODE models of three benchmark biological systems to Dynamic Bayesian Networks (DBNs). The DBN models can handle model uncertainty and data uncertainty in a principled manner, and they can be used for temporal data mining with noisy and missing variables. We apply a particle filtering algorithm to infer the model variables by re-estimating the parameters of the various biological ODE models. The model parameters are automatically re-estimated using temporal evidence in the form of data streams. The results show that DBNs are capable of inferring the model variables of the ODE model with high accuracy in situations where data are missing, incomplete, sparse and irregular, and where the true values of model parameters are not known. |
Tasks | |
Published | 2019-10-10 |
URL | https://arxiv.org/abs/1910.04909v2 |
https://arxiv.org/pdf/1910.04909v2.pdf | |
PWC | https://paperswithcode.com/paper/dealing-with-stochasticity-in-biological-ode |
Repo | |
Framework | |
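A bootstrap particle filter is the simplest version of the approach the abstract describes: particles are pushed through a stochastic discretization of the ODE, weighted by the observation likelihood, and resampled. The sketch below is generic; `step`, `likelihood`, and `x0_sampler` are placeholder callables, not the paper's models.

```python
import numpy as np

def bootstrap_pf(y_obs, step, likelihood, x0_sampler, n_particles=500, seed=0):
    """Bootstrap particle filter: propagate particles through a
    stochastic transition, weight by the observation likelihood,
    resample, and report the posterior-mean state estimate."""
    rng = np.random.default_rng(seed)
    particles = x0_sampler(n_particles)          # (n_particles, state_dim)
    estimates = []
    for y in y_obs:
        particles = step(particles, rng)         # stochastic ODE step
        w = likelihood(y, particles)             # unnormalized weights
        w = w / w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]               # multinomial resampling
        estimates.append(particles.mean(axis=0))
    return np.array(estimates)
```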
Context R-CNN: Long Term Temporal Context for Per-Camera Object Detection
Title | Context R-CNN: Long Term Temporal Context for Per-Camera Object Detection |
Authors | Sara Beery, Guanhang Wu, Vivek Rathod, Ronny Votel, Jonathan Huang |
Abstract | In static monitoring cameras, useful contextual information can stretch far beyond the few seconds typical video understanding models might see: subjects may exhibit similar behavior over multiple days, and background objects remain static. Due to power and storage constraints, sampling frequencies are low, often no faster than one frame per second, and sometimes are irregular due to the use of a motion trigger. In order to perform well in this setting, models must be robust to irregular sampling rates. In this paper we propose a method that leverages temporal context from the unlabeled frames of a novel camera to improve performance at that camera. Specifically, we propose an attention-based approach that allows our model, Context R-CNN, to index into a long term memory bank constructed on a per-camera basis and aggregate contextual features from other frames to boost object detection performance on the current frame. We apply Context R-CNN to two settings: (1) species detection using camera traps, and (2) vehicle detection in traffic cameras, showing in both settings that Context R-CNN leads to performance gains over strong baselines. Moreover, we show that increasing the contextual time horizon leads to improved results. When applied to camera trap data from the Snapshot Serengeti dataset, Context R-CNN with context from up to a month of images outperforms a single-frame baseline by 17.9% mAP, and outperforms S3D (a 3d convolution based baseline) by 11.2% mAP. |
Tasks | Object Detection, Video Understanding |
Published | 2019-12-07 |
URL | https://arxiv.org/abs/1912.03538v2 |
https://arxiv.org/pdf/1912.03538v2.pdf | |
PWC | https://paperswithcode.com/paper/long-term-temporal-context-for-per-camera |
Repo | |
Framework | |
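The memory-bank attention at the heart of Context R-CNN is standard dot-product attention: per-box features from the current frame act as queries over contextual features stored for that camera. A minimal sketch (the dimensions and the concatenation of query and context are assumptions, not the exact head):

```python
import torch
import torch.nn.functional as F

def attend_memory(query, memory):
    """Dot-product attention of current-frame box features (queries)
    over a per-camera long-term memory bank; the attended context is
    concatenated back onto each box feature."""
    # query: (n_boxes, d), memory: (n_context, d)
    d = query.shape[-1]
    weights = F.softmax(query @ memory.T / d ** 0.5, dim=-1)
    context = weights @ memory                  # (n_boxes, d)
    return torch.cat([query, context], dim=-1)  # (n_boxes, 2d)
```

Because attention is a weighted average over however many frames the bank holds, the same mechanism tolerates the low and irregular sampling rates the abstract highlights.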
LEt-SNE: A Hybrid Approach To Data Embedding and Visualization of Hyperspectral Imagery
Title | LEt-SNE: A Hybrid Approach To Data Embedding and Visualization of Hyperspectral Imagery |
Authors | Megh Shukla, Biplab Banerjee, Krishna Mohan Buddhiraju |
Abstract | Hyperspectral Imagery (and Remote Sensing in general) captured from UAVs or satellites is highly voluminous in nature due to the large spatial extent and the wavelengths captured. Since analyzing these images requires a huge amount of computational time and power, various dimensionality reduction techniques have been used for feature reduction. Some popular techniques among these falter when applied to Hyperspectral Imagery due to the famed curse of dimensionality. In this paper, we propose a novel approach, LEt-SNE, which combines graph-based algorithms like t-SNE and Laplacian Eigenmaps into a model parameterized by a shallow feed-forward network. We introduce a new term, the Compression Factor, that enables our method to combat the curse of dimensionality. The proposed algorithm is suitable for manifold visualization and sample clustering with labelled or unlabelled data. We demonstrate that our method is competitive with current state-of-the-art methods on hyperspectral remote sensing datasets in the public domain. |
Tasks | Dimensionality Reduction |
Published | 2019-10-19 |
URL | https://arxiv.org/abs/1910.08790v2 |
https://arxiv.org/pdf/1910.08790v2.pdf | |
PWC | https://paperswithcode.com/paper/let-sne-a-hybrid-approach-to-data-embedding |
Repo | |
Framework | |
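A hedged sketch of the "parametric embedding with a graph loss" ingredient: a shallow network maps spectra to a low-dimensional embedding, trained with a Laplacian-Eigenmaps-style neighborhood loss. LEt-SNE's full objective also mixes in t-SNE-style terms and the Compression Factor, which are omitted here; all names are placeholders.

```python
import torch
import torch.nn as nn

class ShallowEmbedder(nn.Module):
    """Shallow feed-forward network mapping spectra to a 2-D embedding."""
    def __init__(self, d_in, d_out=2):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                               nn.Linear(128, d_out))

    def forward(self, x):
        return self.f(x)

def laplacian_loss(emb, W):
    """Laplacian-Eigenmaps-style loss: sum_ij W_ij * ||emb_i - emb_j||^2,
    where W is a precomputed neighborhood-affinity matrix."""
    return (W * torch.cdist(emb, emb) ** 2).sum()
```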
Cross-modality (CT-MRI) prior augmented deep learning for robust lung tumor segmentation from small MR datasets
Title | Cross-modality (CT-MRI) prior augmented deep learning for robust lung tumor segmentation from small MR datasets |
Authors | Jue Jiang, Yu-Chi Hu, Neelam Tyagi, Pengpeng Zhang, Andreas Rimner, Joseph O. Deasy, Harini Veeraraghavan |
Abstract | The lack of large expert-annotated MR datasets makes training deep learning models difficult. Therefore, a cross-modality (MR-CT) deep learning segmentation approach was developed that augments training data using pseudo MR images produced by transforming expert-segmented CT images. Eighty-one T2-weighted MRI scans from 28 patients with non-small cell lung cancers were analyzed. A cross-modality prior encoding the transformation of CT into pseudo MR images resembling T2w MRI was learned as a generative adversarial deep learning model. This model augmented training data arising from 6 expert-segmented T2w MR patient scans with 377 pseudo MR images generated from non-small cell lung cancer CT patient scans obtained from the Cancer Imaging Archive. A two-dimensional U-Net implemented with batch normalization was trained to segment the tumors from T2w MRI. This method was benchmarked against (a) standard data augmentation and two state-of-the-art cross-modality pseudo-MR-based augmentation methods and (b) two segmentation networks. Segmentation accuracy was computed using the Dice similarity coefficient (DSC), Hausdorff distance metrics, and volume ratio. The proposed approach produced the lowest statistical variability in the intensity distribution between pseudo and T2w MR images, measured as a Kullback-Leibler divergence of 0.069. The method produced the highest segmentation accuracy, with a DSC of 0.75, and the lowest Hausdorff distance on the test dataset. This approach produced estimates of tumor growth highly similar to an expert's (P = 0.37). A novel deep learning MR segmentation method was developed that overcomes the limitation of learning robust models from small datasets by leveraging learned cross-modality priors to augment training. The results show the feasibility of the approach and the corresponding improvement over state-of-the-art methods. |
Tasks | Data Augmentation |
Published | 2019-01-31 |
URL | http://arxiv.org/abs/1901.11369v2 |
http://arxiv.org/pdf/1901.11369v2.pdf | |
PWC | https://paperswithcode.com/paper/cross-modality-ct-mri-prior-augmented-deep |
Repo | |
Framework | |
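The augmentation strategy reduces to pooling real MR examples with generator-produced pseudo-MR examples whose labels come for free from the CT contours. A minimal sketch, assuming a pretrained CT-to-MR `generator` and tensors of matching shapes (all names are placeholders):

```python
import torch

def build_training_set(mr_images, mr_masks, ct_images, ct_masks, generator):
    """Augment a small MR set with pseudo-MR images from CT; the CT
    expert segmentations transfer directly as labels for the pseudo-MR."""
    with torch.no_grad():
        pseudo_mr = generator(ct_images)        # CT -> pseudo-MR translation
    images = torch.cat([mr_images, pseudo_mr])
    masks = torch.cat([mr_masks, ct_masks])
    return images, masks
```

The segmentation U-Net would then be trained on this pooled set exactly as on ordinary labeled data.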
An Evaluation of Bitcoin Address Classification based on Transaction History Summarization
Title | An Evaluation of Bitcoin Address Classification based on Transaction History Summarization |
Authors | Yu-Jing Lin, Po-Wei Wu, Cheng-Han Hsu, I-Ping Tu, Shih-wei Liao |
Abstract | Bitcoin is a cryptocurrency that features a distributed, decentralized and trustworthy mechanism, which has made Bitcoin a popular global transaction platform. The transaction efficiency among nations and the privacy benefits of the Bitcoin network's address anonymity have attracted many activities such as payments, investments, gambling, and even money laundering over the past decade. Unfortunately, some criminal behaviors that took advantage of this platform were not identified, which has discouraged many governments from supporting cryptocurrency. Thus, the capability to identify criminal addresses becomes an important issue for the cryptocurrency network. In this paper, we propose new features, in addition to those commonly used in the literature, to build a classification model for detecting abnormal addresses in the Bitcoin network. These features include various high-order moments of transaction time (represented by block height), which summarize the transaction history in an efficient way. The extracted features are trained with supervised machine learning methods on a labeled dataset of address categories. The experimental evaluation shows that these features improve the performance of Bitcoin address classification significantly. We evaluate the results under eight classifiers and achieve the highest Micro-F1/Macro-F1 of 87%/86% with LightGBM. |
Tasks | |
Published | 2019-03-19 |
URL | http://arxiv.org/abs/1903.07994v1 |
http://arxiv.org/pdf/1903.07994v1.pdf | |
PWC | https://paperswithcode.com/paper/an-evaluation-of-bitcoin-address |
Repo | |
Framework | |
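The feature idea — summarizing an address's transaction history through moments of its transaction times — translates directly into code. A sketch of one plausible feature vector (the exact moments and any additional features the paper uses are assumptions):

```python
import numpy as np
from scipy import stats

def moment_features(block_heights):
    """One plausible per-address feature vector: moments of the
    address's transaction times, represented by block heights."""
    h = np.asarray(block_heights, dtype=float)
    return np.array([
        h.mean(),          # 1st moment: average activity time
        h.std(),           # 2nd: spread of activity
        stats.skew(h),     # 3rd: early- vs late-skewed history
        stats.kurtosis(h), # 4th: burstiness of activity
        float(len(h)),     # transaction count
    ])
```

These per-address vectors would then feed a standard classifier such as LightGBM, as in the paper's evaluation.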