October 18, 2019

3025 words 15 mins read

Paper Group ANR 458

Concept-Oriented Deep Learning. Robust multivariate and functional archetypal analysis with application to financial time series analysis. A Short Survey of Topological Data Analysis in Time Series and Systems Analysis. Long-term Tracking in the Wild: A Benchmark. Mind Your POV: Convergence of Articles and Editors Towards Wikipedia’s Neutrality Nor …

Concept-Oriented Deep Learning

Title Concept-Oriented Deep Learning
Authors Daniel T Chang
Abstract Concepts are the foundation of human deep learning, understanding, and knowledge integration and transfer. We propose concept-oriented deep learning (CODL) which extends (machine) deep learning with concept representations and conceptual understanding capability. CODL addresses some of the major limitations of deep learning: interpretability, transferability, contextual adaptation, and the requirement for large amounts of labeled training data. We discuss the major aspects of CODL including concept graph, concept representations, concept exemplars, and concept representation learning systems supporting incremental and continual learning.
Tasks Continual Learning, Representation Learning
Published 2018-06-05
URL http://arxiv.org/abs/1806.01756v1
PDF http://arxiv.org/pdf/1806.01756v1.pdf
PWC https://paperswithcode.com/paper/concept-oriented-deep-learning
Repo
Framework

Robust multivariate and functional archetypal analysis with application to financial time series analysis

Title Robust multivariate and functional archetypal analysis with application to financial time series analysis
Authors Jesús Moliner, Irene Epifanio
Abstract Archetypal analysis approximates data by means of mixtures of actual extreme cases (archetypoids) or archetypes, which are a convex combination of cases in the data set. Archetypes lie on the boundary of the convex hull. This makes the analysis very sensitive to outliers. A robust methodology by means of M-estimators for classical multivariate and functional data is proposed. This unsupervised methodology allows complex data to be understood even by non-experts. The performance of the new procedure is assessed in a simulation study, where a comparison with a previous methodology for the multivariate case is also carried out, and our proposal obtains favorable results. Finally, robust bivariate functional archetypoid analysis is applied to a set of companies in the S&P 500 described by two time series of stock quotes. A new graphic representation is also proposed to visualize the results. The analysis shows how the information can be easily interpreted and how even non-experts can gain a qualitative understanding of the data.
Tasks Time Series, Time Series Analysis
Published 2018-10-01
URL http://arxiv.org/abs/1810.00919v2
PDF http://arxiv.org/pdf/1810.00919v2.pdf
PWC https://paperswithcode.com/paper/robust-multivariate-and-functional-archetypal
Repo
Framework
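The key geometric idea above, that every data point is approximated by a convex combination of a few archetypes, can be sketched in a few lines. The archetypes `Z`, the toy point `x`, and the exponentiated-gradient solver below are all illustrative assumptions, not the authors' robust M-estimator procedure:

```python
import numpy as np

def simplex_weights(x, Z, steps=2000, lr=0.5):
    """Find convex-combination weights a (a >= 0, sum(a) = 1) so that
    a @ Z approximates x, via exponentiated-gradient descent."""
    k = Z.shape[0]
    a = np.full(k, 1.0 / k)            # start at the simplex barycentre
    for _ in range(steps):
        grad = 2.0 * Z @ (a @ Z - x)   # gradient of ||a @ Z - x||^2 w.r.t. a
        a = a * np.exp(-lr * grad)     # multiplicative update keeps a >= 0
        a /= a.sum()                   # renormalise onto the simplex
    return a

# Toy example: three "archetypes" in the plane and a point inside their hull.
Z = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
x = np.array([0.25, 0.25])
a = simplex_weights(x, Z)
print(a, a @ Z)   # weights sum to 1; a @ Z should be close to x
```

In real archetypal analysis the archetypes themselves are also optimized, alternating with these weights; the sketch only shows the simplex-constrained reconstruction step that makes the method sensitive to outliers on the hull boundary.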

A Short Survey of Topological Data Analysis in Time Series and Systems Analysis

Title A Short Survey of Topological Data Analysis in Time Series and Systems Analysis
Authors Shafie Gholizadeh, Wlodek Zadrozny
Abstract Topological Data Analysis (TDA) is the collection of mathematical tools that capture the structure of shapes in data. Although well established in computational topology and computational geometry, the utilization of TDA in time series and signal processing is relatively new. In some recent contributions, TDA has been utilized as an alternative to conventional signal processing methods; specifically, it has been considered for dealing with noisy signals and time series. In these applications, TDA is used to find the shapes in data as the main properties, while the other properties are assumed to be much less informative. In this paper, we review recent developments and contributions where topological data analysis, especially persistent homology, has been applied to time series analysis, dynamical systems and signal processing. We cover problem statements such as stability determination, risk analysis, system behaviour, and predicting critical transitions in financial markets.
Tasks Time Series, Time Series Analysis, Topological Data Analysis
Published 2018-09-27
URL http://arxiv.org/abs/1809.10745v2
PDF http://arxiv.org/pdf/1809.10745v2.pdf
PWC https://paperswithcode.com/paper/a-short-survey-of-topological-data-analysis
Repo
Framework
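As a minimal illustration of the persistent-homology machinery the survey covers, the sketch below computes 0-dimensional persistence (connected-component lifetimes) of a Vietoris-Rips filtration over a delay embedding of a toy time series. The delay lag, point cloud, and union-find implementation are illustrative choices, not taken from any surveyed paper:

```python
import numpy as np
from itertools import combinations

def h0_persistence(points):
    """0-dimensional persistence of a Vietoris-Rips filtration:
    components are born at radius 0 and die when growing edges merge
    them, i.e. the edge lengths of a minimum spanning tree (Kruskal)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:               # a component dies at this merge distance
            parent[ri] = rj
            deaths.append(d)
    return deaths                  # n - 1 finite (birth=0, death=d) pairs

# Delay embedding of a toy time series, then its H0 persistence.
t = np.linspace(0, 4 * np.pi, 40)
series = np.sin(t)
cloud = np.stack([series[:-3], series[3:]], axis=1)  # 2-D delay embedding
print(sorted(h0_persistence(cloud))[-3:])  # the three longest-lived merges
```

Long-lived components (large death values) are the "shapes in data" treated as the main properties; practical pipelines use optimized libraries and higher homology dimensions, which this sketch omits.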

Long-term Tracking in the Wild: A Benchmark

Title Long-term Tracking in the Wild: A Benchmark
Authors Jack Valmadre, Luca Bertinetto, João F. Henriques, Ran Tao, Andrea Vedaldi, Arnold Smeulders, Philip Torr, Efstratios Gavves
Abstract We introduce the OxUvA dataset and benchmark for evaluating single-object tracking algorithms. Benchmarks have enabled great strides in the field of object tracking by defining standardized evaluations on large sets of diverse videos. However, these works have focused exclusively on sequences that are just tens of seconds in length and in which the target is always visible. Consequently, most researchers have designed methods tailored to this “short-term” scenario, which is poorly representative of practitioners’ needs. Aiming to address this disparity, we compile a long-term, large-scale tracking dataset of sequences with average length greater than two minutes and with frequent target object disappearance. The OxUvA dataset is much larger than the object tracking datasets of recent years: it comprises 366 sequences spanning 14 hours of video. We assess the performance of several algorithms, considering both the ability to locate the target and to determine whether it is present or absent. Our goal is to offer the community a large and diverse benchmark to enable the design and evaluation of tracking methods ready to be used “in the wild”. The project website is http://oxuva.net
Tasks Object Tracking
Published 2018-03-26
URL http://arxiv.org/abs/1803.09502v3
PDF http://arxiv.org/pdf/1803.09502v3.pdf
PWC https://paperswithcode.com/paper/long-term-tracking-in-the-wild-a-benchmark
Repo
Framework

Mind Your POV: Convergence of Articles and Editors Towards Wikipedia’s Neutrality Norm

Title Mind Your POV: Convergence of Articles and Editors Towards Wikipedia’s Neutrality Norm
Authors Umashanthi Pavalanathan, Xiaochuang Han, Jacob Eisenstein
Abstract Wikipedia has a strong norm of writing in a ‘neutral point of view’ (NPOV). Articles that violate this norm are tagged, and editors are encouraged to make corrections. But the impact of this tagging system has not been quantitatively measured. Does NPOV tagging help articles to converge to the desired style? Do NPOV corrections encourage editors to adopt this style? We study these questions using a corpus of NPOV-tagged articles and a set of lexicons associated with biased language. An interrupted time series analysis shows that after an article is tagged for NPOV, there is a significant decrease in biased language in the article, as measured by several lexicons. However, for individual editors, NPOV corrections and talk page discussions yield no significant change in the usage of words in most of these lexicons, including Wikipedia’s own list of ‘words to watch.’ This suggests that NPOV tagging and discussion does improve content, but has less success enculturating editors to the site’s linguistic norms.
Tasks Time Series, Time Series Analysis
Published 2018-09-18
URL http://arxiv.org/abs/1809.06951v1
PDF http://arxiv.org/pdf/1809.06951v1.pdf
PWC https://paperswithcode.com/paper/mind-your-pov-convergence-of-articles-and
Repo
Framework
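The interrupted time series analysis mentioned in the abstract can be sketched as a segmented regression with a level-change and slope-change term at the tagging time. The synthetic "bias score" data and the model form below are assumptions for illustration, not the paper's actual lexicon measurements:

```python
import numpy as np

def interrupted_ts(y, tag_time):
    """Fit y_t = b0 + b1*t + b2*after_t + b3*(t - tag)*after_t by OLS:
    b2 is the immediate level change at the intervention, b3 the
    change in slope afterwards."""
    t = np.arange(len(y), dtype=float)
    after = (t >= tag_time).astype(float)
    X = np.column_stack([np.ones_like(t), t, after, (t - tag_time) * after])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, pre-slope, level change, slope change]

# Synthetic bias scores that drop by 2.0 after the article is tagged at t=50.
rng = np.random.default_rng(0)
y = 10.0 + 0.01 * np.arange(100) + rng.normal(0, 0.1, 100)
y[50:] -= 2.0
beta = interrupted_ts(y, 50)
print(beta)  # beta[2] should be close to -2.0
```

A significantly negative level-change coefficient is what "a significant decrease in biased language after tagging" corresponds to in this design; the paper's analysis additionally handles autocorrelation and multiple lexicons.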

Iterative Segmentation from Limited Training Data: Applications to Congenital Heart Disease

Title Iterative Segmentation from Limited Training Data: Applications to Congenital Heart Disease
Authors Danielle F. Pace, Adrian V. Dalca, Tom Brosch, Tal Geva, Andrew J. Powell, Jürgen Weese, Mehdi H. Moghari, Polina Golland
Abstract We propose a new iterative segmentation model which can be accurately learned from a small dataset. A common approach is to train a model to directly segment an image, requiring a large collection of manually annotated images to capture the anatomical variability in a cohort. In contrast, we develop a segmentation model that recursively evolves a segmentation in several steps, and implement it as a recurrent neural network. We learn model parameters by optimizing the intermediate steps of the evolution in addition to the final segmentation. To this end, we train our segmentation propagation model by presenting incomplete and/or inaccurate input segmentations paired with a recommended next step. Our work aims to alleviate challenges in segmenting heart structures from cardiac MRI for patients with congenital heart disease (CHD), which encompasses a range of morphological deformations and topological changes. We demonstrate the advantages of this approach on a dataset of 20 images from CHD patients, learning a model that accurately segments individual heart chambers and great vessels. Compared to direct segmentation, the iterative method yields more accurate segmentation for patients with the most severe CHD malformations.
Tasks
Published 2018-09-11
URL http://arxiv.org/abs/1809.04182v1
PDF http://arxiv.org/pdf/1809.04182v1.pdf
PWC https://paperswithcode.com/paper/iterative-segmentation-from-limited-training
Repo
Framework

Estimation of Camera Locations in Highly Corrupted Scenarios: All About that Base, No Shape Trouble

Title Estimation of Camera Locations in Highly Corrupted Scenarios: All About that Base, No Shape Trouble
Authors Yunpeng Shi, Gilad Lerman
Abstract We propose a strategy for improving camera location estimation in structure from motion. Our setting assumes highly corrupted pairwise directions (i.e., normalized relative location vectors), so there is a clear room for improving current state-of-the-art solutions for this problem. Our strategy identifies severely corrupted pairwise directions by using a geometric consistency condition. It then selects a cleaner set of pairwise directions as a preprocessing step for common solvers. We theoretically guarantee the successful performance of a basic version of our strategy under a synthetic corruption model. Numerical results on artificial and real data demonstrate the significant improvement obtained by our strategy.
Tasks
Published 2018-04-07
URL http://arxiv.org/abs/1804.02591v1
PDF http://arxiv.org/pdf/1804.02591v1.pdf
PWC https://paperswithcode.com/paper/estimation-of-camera-locations-in-highly
Repo
Framework

Improved Deep Hashing with Soft Pairwise Similarity for Multi-label Image Retrieval

Title Improved Deep Hashing with Soft Pairwise Similarity for Multi-label Image Retrieval
Authors Zheng Zhang, Qin Zou, Yuewei Lin, Long Chen, Song Wang
Abstract Hash coding has been widely used in the approximate nearest neighbor search for large-scale image retrieval. Recently, many deep hashing methods have been proposed and shown largely improved performance over traditional feature-learning-based methods. Most of these methods examine the pairwise similarity on the semantic-level labels, where the pairwise similarity is generally defined in a hard-assignment way. That is, the pairwise similarity is ‘1’ if two images share at least one class label and ‘0’ if they share none. However, such a similarity definition cannot reflect the similarity ranking for pairwise images that hold multiple labels. In this paper, a new deep hashing method is proposed for multi-label image retrieval by re-defining the pairwise similarity as an instance similarity, where the instance similarity is quantified into a percentage based on the normalized semantic labels. Based on the instance similarity, a weighted cross-entropy loss and a minimum mean square error loss are tailored for loss-function construction, and are efficiently used for simultaneous feature learning and hash coding. Experiments on three popular datasets demonstrate that the proposed method outperforms the competing methods and achieves state-of-the-art performance in multi-label image retrieval.
Tasks Image Retrieval, Multi-Label Image Retrieval
Published 2018-03-08
URL https://arxiv.org/abs/1803.02987v3
PDF https://arxiv.org/pdf/1803.02987v3.pdf
PWC https://paperswithcode.com/paper/instance-similarity-deep-hashing-for-multi
Repo
Framework
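One plausible instantiation of the soft instance similarity described above is the cosine of the normalized multi-hot label vectors, which lands in [0, 1] instead of the hard 0/1 assignment. This is an illustrative sketch, not necessarily the paper's exact definition:

```python
import numpy as np

def instance_similarity(labels_a, labels_b):
    """Soft pairwise similarity for multi-hot label vectors: the cosine
    of the normalized label vectors, in [0, 1], instead of a hard 0/1."""
    na, nb = np.linalg.norm(labels_a), np.linalg.norm(labels_b)
    if na == 0 or nb == 0:
        return 0.0
    return float(labels_a @ labels_b / (na * nb))

# Hard similarity would call both pairs below "1"; the soft version ranks them.
cat_dog     = np.array([1, 1, 0, 0])   # image with labels {cat, dog}
cat_dog_car = np.array([1, 1, 1, 0])   # shares two labels
cat_bird    = np.array([1, 0, 0, 1])   # shares one label
print(instance_similarity(cat_dog, cat_dog_car))  # higher
print(instance_similarity(cat_dog, cat_bird))     # lower
```

This graded target is what the paper's weighted cross-entropy and mean-square-error losses are fit against, so pairs sharing more labels are pushed to closer hash codes.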

Understanding and Enhancing the Transferability of Adversarial Examples

Title Understanding and Enhancing the Transferability of Adversarial Examples
Authors Lei Wu, Zhanxing Zhu, Cheng Tai, Weinan E
Abstract State-of-the-art deep neural networks are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can “transfer across models”: adversarial examples generated for a specific model will often mislead other unseen models. Consequently, the adversary can leverage this to attack deployed systems without any queries, which severely hinders the application of deep learning, especially in areas where security is crucial. In this work, we systematically study two classes of factors that might influence the transferability of adversarial examples. The first concerns model-specific factors, including network architecture, model capacity and test accuracy. The second is the local smoothness of the loss function used for constructing adversarial examples. Based on this understanding, we propose a simple but effective strategy to enhance transferability. We call it the variance-reduced attack, since it utilizes the variance-reduced gradient to generate adversarial examples. Its effectiveness is confirmed by a variety of experiments on both the CIFAR-10 and ImageNet datasets.
Tasks
Published 2018-02-27
URL http://arxiv.org/abs/1802.09707v1
PDF http://arxiv.org/pdf/1802.09707v1.pdf
PWC https://paperswithcode.com/paper/understanding-and-enhancing-the
Repo
Framework
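The variance-reduced gradient idea can be sketched as averaging input gradients over a small Gaussian neighbourhood before taking an FGSM-style signed step. The toy logistic model, its analytic gradient, and all parameter values below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def loss_grad(x, w, y):
    """Gradient w.r.t. the input x of a logistic loss for a linear model."""
    p = 1.0 / (1.0 + np.exp(-y * (w @ x)))
    return -(1.0 - p) * y * w

def variance_reduced_attack(x, w, y, eps=0.1, sigma=0.05, m=20, seed=0):
    """FGSM-style step using the gradient averaged over Gaussian
    neighbourhoods of x (the 'variance-reduced' gradient)."""
    rng = np.random.default_rng(seed)
    g = np.mean(
        [loss_grad(x + rng.normal(0, sigma, x.shape), w, y) for _ in range(m)],
        axis=0,
    )
    return x + eps * np.sign(g)   # ascend the loss to mislead the model

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.2])
y = 1.0
x_adv = variance_reduced_attack(x, w, y)
# The adversarial input should incur a higher loss than the clean one.
```

For this linear toy model the smoothing changes nothing, since the gradient direction is constant; the point of the averaging emerges on non-smooth deep-network loss surfaces, where it suppresses local gradient noise and, per the abstract, improves cross-model transfer.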

From the Periphery to the Center: Information Brokerage in an Evolving Network

Title From the Periphery to the Center: Information Brokerage in an Evolving Network
Authors Bo Yan, Yiping Liu, Jiamou Liu, Yijin Cai, Hongyi Su, Hong Zheng
Abstract Interpersonal ties are pivotal to individual efficacy, status and performance in an agent society. This paper explores three important and interrelated themes in social network theory: the center/periphery partition of the network; network dynamics; and social integration of newcomers. We tackle the question: how would a newcomer harness information brokerage to integrate into a dynamic network, going from periphery to center? We model integration as the interplay between the newcomer and the dynamic network, and capture information brokerage using a process of relationship building. We analyze theoretical guarantees for the newcomer to reach the center through tactics, proving that a winning tactic always exists for certain types of network dynamics. We then propose three tactics and show their superior performance over alternative methods on four real-world datasets and four network models. In general, our tactics place the newcomer at the center by adding very few new edges on dynamic networks with approximately 14000 nodes.
Tasks
Published 2018-05-02
URL http://arxiv.org/abs/1805.00751v1
PDF http://arxiv.org/pdf/1805.00751v1.pdf
PWC https://paperswithcode.com/paper/from-the-periphery-to-the-center-information
Repo
Framework

Joint Representation Learning of Cross-lingual Words and Entities via Attentive Distant Supervision

Title Joint Representation Learning of Cross-lingual Words and Entities via Attentive Distant Supervision
Authors Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Chengjiang Li, Xu Chen, Tiansi Dong
Abstract Joint representation learning of words and entities benefits many NLP tasks, but has not been well explored in cross-lingual settings. In this paper, we propose a novel method for joint representation learning of cross-lingual words and entities. It captures mutually complementary knowledge, and enables cross-lingual inferences among knowledge bases and texts. Our method does not require parallel corpora, and automatically generates comparable data via distant supervision using multi-lingual knowledge bases. We utilize two types of regularizers to align cross-lingual words and entities, and design knowledge attention and cross-lingual attention to further reduce noises. We conducted a series of experiments on three tasks: word translation, entity relatedness, and cross-lingual entity linking. The results, both qualitatively and quantitatively, demonstrate the significance of our method.
Tasks Cross-Lingual Entity Linking, Entity Linking, Representation Learning
Published 2018-11-27
URL http://arxiv.org/abs/1811.10776v1
PDF http://arxiv.org/pdf/1811.10776v1.pdf
PWC https://paperswithcode.com/paper/joint-representation-learning-of-cross
Repo
Framework

QSAR Classification Modeling for Bioactivity of Molecular Structure via SPL-Logsum

Title QSAR Classification Modeling for Bioactivity of Molecular Structure via SPL-Logsum
Authors Liang-Yong Xia, Qing-Yong Wang
Abstract Quantitative structure-activity relationship (QSAR) modelling is an effective ‘bridge’ for relating the bioactivity of a compound to its molecular structure. A QSAR classification model typically contains a large number of redundant, noisy and irrelevant descriptors. To address this problem, various methods have been proposed for descriptor selection. Generally, they can be grouped into three categories: filters, wrappers, and embedded methods. Regularization is an important embedded technique, which can be used for continuous shrinkage and automatic descriptor selection. In recent years, researchers have shown increasing interest in applying regularization techniques to descriptor selection, such as logistic regression (LR) with an $L_1$ penalty. In this paper, we propose a novel descriptor selection method based on self-paced learning (SPL) with Logsum-penalized LR for predicting the bioactivity of molecular structures. SPL, inspired by the learning process of humans and animals, gradually introduces samples into training from easy (smaller losses) to hard (bigger losses), while Logsum regularization has the capacity to select a few meaningful and significant molecular descriptors. Experimental results on simulated and three public QSAR datasets show that our proposed SPL-Logsum method outperforms other commonly used sparse methods in terms of classification performance and model interpretation.
Tasks
Published 2018-04-23
URL http://arxiv.org/abs/1804.08615v2
PDF http://arxiv.org/pdf/1804.08615v2.pdf
PWC https://paperswithcode.com/paper/qsar-classification-modeling-for-bioactivity
Repo
Framework
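The two ingredients of SPL-Logsum can be sketched independently: the hard self-paced weighting that admits only samples whose loss is below an age parameter, and the Logsum sparsity penalty. Both snippets below are schematic, with hypothetical loss values, not the paper's full alternating optimization:

```python
import numpy as np

def spl_weights(losses, lam):
    """Hard self-paced weights: include a sample only if its current
    loss is below the age parameter lam (easy samples first)."""
    return (losses < lam).astype(float)

def logsum_penalty(beta, eps=0.1):
    """Logsum sparsity penalty sum(log(|b| + eps)), a tighter
    approximation to the L0 'norm' than the L1 penalty."""
    return float(np.sum(np.log(np.abs(beta) + eps)))

losses = np.array([0.05, 0.20, 0.90, 0.40])
for lam in (0.1, 0.5, 1.0):       # the model 'ages': lam grows each round
    print(lam, spl_weights(losses, lam))
# lam=0.1 selects only the easiest sample; lam=1.0 selects all four.
```

In the full method these weights gate which samples contribute to refitting the Logsum-penalized logistic regression, and lam is then increased so harder samples enter training.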

On the Universal Approximability and Complexity Bounds of Quantized ReLU Neural Networks

Title On the Universal Approximability and Complexity Bounds of Quantized ReLU Neural Networks
Authors Yukun Ding, Jinglan Liu, Jinjun Xiong, Yiyu Shi
Abstract Compression is a key step in deploying large neural networks on resource-constrained platforms. As a popular compression technique, quantization constrains the number of distinct weight values, thereby reducing the number of bits required to represent and store each weight. In this paper, we study the representation power of quantized neural networks. First, we prove the universal approximability of quantized ReLU networks on a wide class of functions. Then we provide upper bounds on the number of weights and the memory size for a given approximation error bound and the bit-width of weights, for both function-independent and function-dependent structures. Our results reveal that, to attain an approximation error bound of $\epsilon$, the number of weights needed by a quantized network is no more than $\mathcal{O}\left(\log^5(1/\epsilon)\right)$ times that of an unquantized network. This overhead is of much lower order than the lower bound on the number of weights needed for the error bound, supporting the empirical success of various quantization techniques. To the best of our knowledge, this is the first in-depth study of the complexity bounds of quantized neural networks.
Tasks Quantization
Published 2018-02-10
URL http://arxiv.org/abs/1802.03646v4
PDF http://arxiv.org/pdf/1802.03646v4.pdf
PWC https://paperswithcode.com/paper/on-the-universal-approximability-and
Repo
Framework
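The effect of constraining weights to a fixed number of distinct values can be illustrated with simple uniform quantization; the bit-widths and random weights below are assumptions for illustration, unrelated to the paper's theoretical constructions:

```python
import numpy as np

def quantize(w, bits):
    """Uniformly quantize weights to at most 2**bits distinct values
    spanning the range of w, returning the quantized weights."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (2 ** bits - 1)
    return lo + np.round((w - lo) / step) * step

rng = np.random.default_rng(0)
w = rng.normal(0, 1, 10_000)
for bits in (2, 4, 8):
    q = quantize(w, bits)
    print(bits, len(np.unique(q)), np.max(np.abs(q - w)))
# Distinct values stay <= 2**bits; the worst-case error is half a step,
# so it roughly halves with each extra bit.
```

The paper's question is the converse of this error measurement: how many extra weights a network restricted to such a grid needs to recover a target approximation error.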

Using General Adversarial Networks for Marketing: A Case Study of Airbnb

Title Using General Adversarial Networks for Marketing: A Case Study of Airbnb
Authors Richard Diehl Martinez, John Kaleialoha Kamalu
Abstract In this paper, we examine the use case of general adversarial networks (GANs) in the field of marketing. In particular, we analyze how GAN models can replicate text patterns from successful product listings on Airbnb, a peer-to-peer online market for short-term apartment rentals. To do so, we define the Diehl-Martinez-Kamalu (DMK) loss function as a new class of functions that forces the model’s generated output to include a set of user-defined keywords. This allows the general adversarial network to recommend a way of rewording the phrasing of a listing description to increase the likelihood that it is booked. Although we tailor our analysis to Airbnb data, we believe this framework establishes a more general model for how generative algorithms can be used to produce text samples for the purposes of marketing.
Tasks
Published 2018-06-29
URL http://arxiv.org/abs/1806.11432v1
PDF http://arxiv.org/pdf/1806.11432v1.pdf
PWC https://paperswithcode.com/paper/using-general-adversarial-networks-for
Repo
Framework
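The keyword-forcing idea behind the DMK loss can be sketched as a penalty on required keywords missing from the generated text; the function and listing text below are a hypothetical simplification, not the paper's actual loss:

```python
def keyword_penalty(generated, keywords, weight=1.0):
    """Penalty proportional to how many user-defined keywords are
    missing from the generated listing text (a sketch of the
    keyword-forcing idea behind the DMK loss, not its exact form)."""
    tokens = set(generated.lower().split())
    missing = [k for k in keywords if k.lower() not in tokens]
    return weight * len(missing), missing

text = "sunny private studio near downtown with fast wifi"
penalty, missing = keyword_penalty(text, ["wifi", "parking", "downtown"])
print(penalty, missing)  # 1.0 ['parking']
```

Adding such a term to the generator's objective steers it toward outputs that mention every user-defined keyword, which is how the model can suggest rewordings of a listing description.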

Item Recommendation with Variational Autoencoders and Heterogenous Priors

Title Item Recommendation with Variational Autoencoders and Heterogenous Priors
Authors Giannis Karamanolakis, Kevin Raji Cherian, Ananth Ravi Narayan, Jie Yuan, Da Tang, Tony Jebara
Abstract In recent years, Variational Autoencoders (VAEs) have been shown to be highly effective in both standard collaborative filtering applications and extensions such as incorporation of implicit feedback. We extend VAEs to collaborative filtering with side information, for instance when ratings are combined with explicit text feedback from the user. Instead of using a user-agnostic standard Gaussian prior, we incorporate user-dependent priors in the latent VAE space to encode users’ preferences as functions of the review text. Taking into account both the rating and the text information to represent users in this multimodal latent space is promising to improve recommendation quality. Our proposed model is shown to outperform the existing VAE models for collaborative filtering (up to 29.41% relative improvement in ranking metric) along with other baselines that incorporate both user ratings and text for item recommendation.
Tasks
Published 2018-07-17
URL http://arxiv.org/abs/1807.06651v2
PDF http://arxiv.org/pdf/1807.06651v2.pdf
PWC https://paperswithcode.com/paper/item-recommendation-with-variational
Repo
Framework
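Replacing the user-agnostic standard Gaussian prior with a user-dependent one changes the VAE's KL term, which stays in closed form for diagonal Gaussians. The latent values and the "text-derived" prior mean below are illustrative assumptions:

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians,
    summed over latent dimensions."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

mu_q, var_q = np.array([0.5, -0.3]), np.array([0.8, 1.2])
# Standard VAE prior N(0, I) versus a user-dependent prior from review text.
kl_standard = kl_diag_gaussians(mu_q, var_q, np.zeros(2), np.ones(2))
mu_user = np.array([0.4, -0.2])          # hypothetical text-derived mean
kl_user = kl_diag_gaussians(mu_q, var_q, mu_user, np.ones(2))
print(kl_standard, kl_user)  # the user-matched prior incurs a smaller KL
```

A prior centred on the user's text representation penalizes the posterior less when ratings and reviews agree, which is how the text side information shapes the multimodal latent space.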