Paper Group ANR 437
A Fourier-invariant method for locating point-masses and computing their attributes
Title | A Fourier-invariant method for locating point-masses and computing their attributes |
Authors | Charles K. Chui, Hrushikesh N. Mhaskar |
Abstract | Motivated by the interest of observing the growth of cancer cells among normal living cells and exploring how galaxies and stars are truly formed, the objective of this paper is to introduce a rigorous and effective method for counting point-masses, determining their spatial locations, and computing their attributes. Based on computation of Hermite moments that are Fourier-invariant, our approach facilitates the processing of both spatial and Fourier data in any dimension. |
Tasks | |
Published | 2017-07-26 |
URL | http://arxiv.org/abs/1707.09319v1 |
http://arxiv.org/pdf/1707.09319v1.pdf | |
PWC | https://paperswithcode.com/paper/a-fourier-invariant-method-for-locating-point |
Repo | |
Framework | |
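As a toy illustration of the Hermite-moment idea (not the paper's actual algorithm), the sketch below evaluates physicists' Hermite polynomials by the standard three-term recurrence and sums them against a discrete set of point masses; the exact moment definition used here is an assumption for illustration only.

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the recurrence
    H_{k+1}(x) = 2x*H_k(x) - 2k*H_{k-1}(x)."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def hermite_moment(n, masses):
    """n-th Hermite moment of a discrete signed measure sum_j a_j * delta(x - x_j):
    here taken as sum_j a_j * H_n(x_j) * exp(-x_j^2 / 2) -- an illustrative
    definition, not necessarily the paper's exact normalization."""
    return sum(a * hermite(n, x) * math.exp(-x * x / 2.0) for x, a in masses)
```

The Gaussian factor makes the integrand a Hermite function, which is an eigenfunction of the Fourier transform; that eigenfunction property is what makes such moments usable on both spatial and Fourier data.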
Multi-period Time Series Modeling with Sparsity via Bayesian Variational Inference
Title | Multi-period Time Series Modeling with Sparsity via Bayesian Variational Inference |
Authors | Daniel Hsu |
Abstract | In this paper, we use an augmented hierarchical latent variable model to model multi-period time series, where the dynamics of the time series are governed by factors or trends over multiple periods. Previous methods based on stacked recurrent neural network (RNN) and deep belief network (DBN) models cannot model tendencies across multiple periods, and no models for sequential data pay special attention to redundant input variables, which have no or even negative impact on prediction and modeling. Applying a hierarchical latent variable model with multiple transition periods, our proposed algorithm can capture dependencies at different temporal resolutions. By introducing a Bayesian neural network with a Horseshoe prior as the input network, we can discard redundant input variables during the optimization process, concurrently with the learning of the other parts of the model. Based on experiments with both synthetic and real-world data, we show that the proposed method significantly improves the modeling and prediction performance on multi-period time series. |
Tasks | Time Series |
Published | 2017-07-03 |
URL | http://arxiv.org/abs/1707.00666v3 |
http://arxiv.org/pdf/1707.00666v3.pdf | |
PWC | https://paperswithcode.com/paper/multi-period-time-series-modeling-with |
Repo | |
Framework | |
If it ain’t broke, don’t fix it: Sparse metric repair
Title | If it ain’t broke, don’t fix it: Sparse metric repair |
Authors | Anna C. Gilbert, Lalit Jain |
Abstract | Many modern data-intensive computational problems either require, or benefit from, distance or similarity data that adhere to a metric: the algorithms run faster or have better performance guarantees. Unfortunately, in real applications the data are messy and the values are noisy, and the distances between the data points are far from satisfying a metric. Indeed, there are a number of different algorithms for finding the closest set of distances to the given ones that also satisfy a metric (sometimes with the extra condition of being Euclidean). These algorithms can have unintended consequences: they can change a large number of the original data points and alter many other features of the data. The goal of sparse metric repair is to make as few changes as possible to the original data set or underlying distances so as to ensure the resulting distances satisfy the properties of a metric. In other words, we seek to minimize the sparsity (or the $\ell_0$ “norm”) of the changes we make to the distances, subject to the new distances satisfying a metric. We give three different combinatorial algorithms to repair a metric sparsely. In one setting the algorithm is guaranteed to return the sparsest solution, and in the other settings the algorithms repair the metric. Without prior information, the algorithms run in time proportional to the cube of the number of input data points; with prior information, the running time can be reduced considerably. |
Tasks | |
Published | 2017-10-29 |
URL | http://arxiv.org/abs/1710.10655v1 |
http://arxiv.org/pdf/1710.10655v1.pdf | |
PWC | https://paperswithcode.com/paper/if-it-aint-broke-dont-fix-it-sparse-metric |
Repo | |
Framework | |
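The cubic running time mentioned in the abstract matches the classical decrease-only repair, which replaces a distance matrix by its all-pairs shortest-path closure. The sketch below shows that baseline (Floyd-Warshall); it is not the paper's sparsity-minimizing algorithm, since it may change many entries.

```python
def decrease_only_repair(d):
    """Replace a symmetric distance matrix by its all-pairs shortest-path
    closure via Floyd-Warshall, O(n^3).  The result satisfies the triangle
    inequality and never increases any entry -- a classical decrease-only
    repair, not the sparse-repair algorithms proposed in the paper."""
    n = len(d)
    m = [row[:] for row in d]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if m[i][k] + m[k][j] < m[i][j]:
                    m[i][j] = m[i][k] + m[k][j]
    return m
```

On `[[0,1,3],[1,0,1],[3,1,0]]` the entry 3 violates the triangle inequality (3 > 1+1) and gets repaired down to 2.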
A Projected Gradient Descent Method for CRF Inference allowing End-To-End Training of Arbitrary Pairwise Potentials
Title | A Projected Gradient Descent Method for CRF Inference allowing End-To-End Training of Arbitrary Pairwise Potentials |
Authors | Måns Larsson, Anurag Arnab, Fredrik Kahl, Shuai Zheng, Philip Torr |
Abstract | Are we using the right potential functions in the Conditional Random Field models that are popular in the Vision community? Semantic segmentation and other pixel-level labelling tasks have made significant progress recently due to the deep learning paradigm. However, most state-of-the-art structured prediction methods also include a random field model with a hand-crafted Gaussian potential to model spatial priors, label consistencies and feature-based image conditioning. In this paper, we challenge this view by developing a new inference and learning framework which can learn pairwise CRF potentials restricted only by their dependence on the image pixel values and the size of the support. Both standard spatial and high-dimensional bilateral kernels are considered. Our framework is based on the observation that CRF inference can be achieved via projected gradient descent and consequently, can easily be integrated in deep neural networks to allow for end-to-end training. It is empirically demonstrated that such learned potentials can improve segmentation accuracy and that certain label class interactions are indeed better modelled by a non-Gaussian potential. In addition, we compare our inference method to the commonly used mean-field algorithm. Our framework is evaluated on several public benchmarks for semantic segmentation with improved performance compared to previous state-of-the-art CNN+CRF models. |
Tasks | Semantic Segmentation, Structured Prediction |
Published | 2017-01-24 |
URL | http://arxiv.org/abs/1701.06805v3 |
http://arxiv.org/pdf/1701.06805v3.pdf | |
PWC | https://paperswithcode.com/paper/a-projected-gradient-descent-method-for-crf |
Repo | |
Framework | |
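The core observation of the paper is that relaxed CRF inference can be run as projected gradient descent over label marginals constrained to the probability simplex. A minimal sketch of that mechanism, assuming a single pixel with a unary energy only (the full method adds learned pairwise terms and backpropagates through the iterations):

```python
def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (the standard sort-based algorithm)."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

def pgd_unary(theta, steps=100, lr=0.5):
    """Minimize the relaxed unary CRF energy <theta, q> over the simplex
    by projected gradient descent; converges to a one-hot distribution
    at argmin(theta).  Illustrative only -- the paper's inference also
    includes (learned) pairwise potentials."""
    q = [1.0 / len(theta)] * len(theta)
    for _ in range(steps):
        q = project_simplex([qi - lr * ti for qi, ti in zip(q, theta)])
    return q
```

Because every step is differentiable almost everywhere, the same loop can be unrolled inside a neural network for end-to-end training, which is the point the abstract makes.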
Automated Detection of Non-Relevant Posts on the Russian Imageboard “2ch”: Importance of the Choice of Word Representations
Title | Automated Detection of Non-Relevant Posts on the Russian Imageboard “2ch”: Importance of the Choice of Word Representations |
Authors | Amir Bakarov, Olga Gureenkova |
Abstract | This study considers the problem of automated detection of non-relevant posts on Web forums, and discusses an approach that reduces it to the task of detecting semantic relatedness between a given post and the opening post of the forum discussion thread. This task can be resolved by training a supervised classifier on composed word embeddings of the two posts. Considering that success in this task can be quite sensitive to the choice of word representations, we compare the performance of different word embedding models. We train 7 models (Word2Vec, GloVe, Word2Vec-f, Wang2Vec, AdaGram, FastText, Swivel), evaluate the embeddings they produce on a dataset of human judgements, and compare their performance on the task of non-relevant post detection. To enable the comparison, we propose a dataset of semantic relatedness with posts from one of the most popular Russian Web forums, the imageboard “2ch”, which has challenging lexical and grammatical features. |
Tasks | Word Embeddings |
Published | 2017-07-16 |
URL | http://arxiv.org/abs/1707.04860v1 |
http://arxiv.org/pdf/1707.04860v1.pdf | |
PWC | https://paperswithcode.com/paper/automated-detection-of-non-relevant-posts-on |
Repo | |
Framework | |
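The composition step the abstract relies on can be as simple as averaging word vectors and comparing posts by cosine similarity. A minimal sketch, assuming a toy in-memory vector table in place of the trained Word2Vec/GloVe/FastText models the paper compares:

```python
import math

def post_embedding(tokens, vectors):
    """Compose a post embedding as the mean of its word vectors;
    out-of-vocabulary tokens are skipped, and None is returned when
    nothing is in vocabulary."""
    vecs = [vectors[t] for t in tokens if t in vectors]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)
```

In the paper's setup the two post embeddings are fed (composed) into a supervised classifier rather than compared directly, but the sensitivity to the underlying vector table is the same.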
A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines
Title | A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines |
Authors | Michael R. Smith, Aaron J. Hill, Kristofor D. Carlson, Craig M. Vineyard, Jonathon Donaldson, David R. Follett, Pamela L. Follett, John H. Naegle, Conrad D. James, James B. Aimone |
Abstract | Information in neural networks is represented as weighted connections, or synapses, between neurons. This poses a problem as the primary computational bottleneck for neural networks is the vector-matrix multiply when inputs are multiplied by the neural network weights. Conventional processing architectures are not well suited for simulating neural networks, often requiring large amounts of energy and time. Additionally, synapses in biological neural networks are not binary connections, but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons. Inspired by neuroscience principles, we present a digital neuromorphic architecture, the Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex synaptic response functions without requiring additional hardware components. We consider the paradigm of spiking neurons with temporally coded information as opposed to non-spiking rate coded neurons used in most neural networks. In this paradigm we examine liquid state machines applied to speech recognition and show how a liquid state machine with temporal dynamics maps onto the STPU, demonstrating the flexibility and efficiency of the STPU for instantiating neural algorithms. |
Tasks | Speech Recognition |
Published | 2017-03-21 |
URL | http://arxiv.org/abs/1704.08306v1 |
http://arxiv.org/pdf/1704.08306v1.pdf | |
PWC | https://paperswithcode.com/paper/a-digital-neuromorphic-architecture |
Repo | |
Framework | |
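The "nonlinear synaptic response function" the abstract contrasts with binary weights can be illustrated in software as a temporal kernel convolved with spike times. This is only a software sketch of the concept, with a hypothetical exponential kernel and time constant; the STPU realizes such kernels in hardware.

```python
import math

def synaptic_response(spike_times, t, tau=5.0):
    """Toy synaptic response: each input spike at time t_s injects a
    current exp(-(t - t_s)/tau) that decays over time.  The kernel shape
    and tau are illustrative assumptions, not the STPU's specification."""
    return sum(math.exp(-(t - ts) / tau) for ts in spike_times if ts <= t)
```

Summing these decaying traces over a spike train is what gives the temporally coded representation its state, which is exactly what a liquid state machine reads out.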
Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light
Title | Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light |
Authors | Yuki Shiba, Satoshi Ono, Ryo Furukawa, Shinsaku Hiura, Hiroshi Kawasaki |
Abstract | One solution for depth imaging of a moving scene is to project a static pattern on the object and use just a single image for reconstruction. However, if the motion of the object is too fast with respect to the exposure time of the image sensor, the patterns in the captured image are blurred and reconstruction fails. In this paper, we impose multiple projection patterns onto each single captured image to realize temporal super-resolution of the depth image sequences. With our method, multiple patterns are projected onto the object at a higher fps than is possible with a camera. In this case, the observed pattern varies depending on the depth and motion of the object, so we can extract temporal information about the scene from each single image. The decoding process is realized using a learning-based approach where no geometric calibration is needed. Experiments confirm the effectiveness of our method, where sequential shapes are reconstructed from a single image. Both quantitative evaluations and comparisons with recent techniques were conducted. |
Tasks | Calibration, Super-Resolution |
Published | 2017-10-02 |
URL | http://arxiv.org/abs/1710.00517v1 |
http://arxiv.org/pdf/1710.00517v1.pdf | |
PWC | https://paperswithcode.com/paper/temporal-shape-super-resolution-by-intra |
Repo | |
Framework | |
Selective Video Object Cutout
Title | Selective Video Object Cutout |
Authors | Wenguan Wang, Jianbing Shen, Fatih Porikli |
Abstract | Conventional video segmentation approaches rely heavily on appearance models. Such methods often use appearance descriptors that have limited discriminative power under complex scenarios. To improve the segmentation performance, this paper presents a pyramid histogram based confidence map that incorporates structure information into appearance statistics. It also combines geodesic distance based dynamic models. Then, it employs an efficient measure of uncertainty propagation using local classifiers to determine the image regions where the object labels might be ambiguous. The final foreground cutout is obtained by refining on the uncertain regions. Additionally, to reduce manual labeling, our method determines the frames to be labeled by the human operator in a principled manner, which further boosts the segmentation performance and minimizes the labeling effort. Our extensive experimental analyses on two big benchmarks demonstrate that our solution achieves superior performance, favorable computational efficiency, and reduced manual labeling in comparison to the state-of-the-art. |
Tasks | Video Semantic Segmentation |
Published | 2017-02-28 |
URL | http://arxiv.org/abs/1702.08640v5 |
http://arxiv.org/pdf/1702.08640v5.pdf | |
PWC | https://paperswithcode.com/paper/selective-video-object-cutout |
Repo | |
Framework | |
Learning Compact Appearance Representation for Video-based Person Re-Identification
Title | Learning Compact Appearance Representation for Video-based Person Re-Identification |
Authors | Wei Zhang, Shengnan Hu, Kan Liu, Zhengjun Zha |
Abstract | This paper presents a novel approach for video-based person re-identification using multiple Convolutional Neural Networks (CNNs). Unlike previous work, we intend to extract a compact yet discriminative appearance representation from several frames rather than the whole sequence. Specifically, given a video, representative frames are selected based on the walking profile of consecutive frames. A multiple-CNN architecture incorporating feature pooling is proposed to learn and compile the features of the selected representative frames into a compact description of the pedestrian for identification. Experiments are conducted on benchmark datasets to demonstrate the superiority of the proposed method over existing person re-identification approaches. |
Tasks | Person Re-Identification, Video-Based Person Re-Identification |
Published | 2017-02-21 |
URL | https://arxiv.org/abs/1702.06294v2 |
https://arxiv.org/pdf/1702.06294v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-compact-appearance-representation |
Repo | |
Framework | |
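The feature-pooling step that compiles per-frame features into one compact descriptor can be sketched as element-wise max or average pooling. This is only an illustration of the pooling operation, not the paper's full multi-CNN architecture:

```python
def pool_features(frame_feats, mode="max"):
    """Compile a list of per-frame feature vectors into a single compact
    descriptor by element-wise max or average pooling.  The choice of
    pooling mode here is a generic illustration."""
    dim = len(frame_feats[0])
    if mode == "max":
        return [max(f[i] for f in frame_feats) for i in range(dim)]
    return [sum(f[i] for f in frame_feats) / len(frame_feats) for i in range(dim)]
```

Max pooling keeps the strongest activation per dimension across frames, which is why a few well-chosen representative frames can stand in for the whole sequence.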
Automatic Synonym Discovery with Knowledge Bases
Title | Automatic Synonym Discovery with Knowledge Bases |
Authors | Meng Qu, Xiang Ren, Jiawei Han |
Abstract | Recognizing entity synonyms from text has become a crucial task in many entity-leveraging applications. However, discovering entity synonyms from domain-specific text corpora (e.g., news articles, scientific papers) is rather challenging. Current systems take an entity name string as input to find other names that are synonymous, ignoring the fact that a name string can often refer to multiple entities (e.g., “apple” could refer to both Apple Inc and the fruit apple). Moreover, most existing methods require training data manually created by domain experts to construct supervised-learning systems. In this paper, we study the problem of automatic synonym discovery with knowledge bases, that is, identifying synonyms for knowledge base entities in a given domain-specific corpus. The manually-curated synonyms for each entity stored in a knowledge base not only form a set of name strings that disambiguate the meaning for each other, but can also serve as “distant” supervision to help determine important features for the task. We propose a novel framework, called DPE, to integrate two kinds of mutually-complementing signals for synonym discovery, i.e., distributional features based on corpus-level statistics and textual patterns based on local contexts. In particular, DPE jointly optimizes the two kinds of signals in conjunction with distant supervision, so that they can mutually enhance each other in the training stage. At the inference stage, both signals are utilized to discover synonyms for the given entities. Experimental results prove the effectiveness of the proposed framework. |
Tasks | |
Published | 2017-06-25 |
URL | http://arxiv.org/abs/1706.08186v1 |
http://arxiv.org/pdf/1706.08186v1.pdf | |
PWC | https://paperswithcode.com/paper/automatic-synonym-discovery-with-knowledge |
Repo | |
Framework | |
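Of the two signals DPE integrates, the "textual patterns based on local contexts" can be illustrated with a single hard-coded pattern. The regex below is purely a toy stand-in: DPE learns its patterns from distant supervision rather than using a fixed template like this.

```python
import re

def pattern_synonyms(corpus):
    """Toy textual-pattern signal: harvest candidate synonym pairs from
    'X, also known as Y' occurrences.  One hand-written pattern, for
    illustration only -- DPE discovers and weights patterns automatically."""
    pat = re.compile(r"(\w+), also known as (\w+)")
    return [(a.lower(), b.lower()) for a, b in pat.findall(corpus)]
```

The distributional signal would then score the same candidate pairs by corpus-level co-occurrence statistics, and the two signals reinforce each other during training.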
Predictive modelling of training loads and injury in Australian football
Title | Predictive modelling of training loads and injury in Australian football |
Authors | David L. Carey, Kok-Leong Ong, Rod Whiteley, Kay M. Crossley, Justin Crow, Meg E. Morris |
Abstract | To investigate whether training load monitoring data could be used to predict injuries in elite Australian football players, data were collected from elite athletes over 3 seasons at an Australian football club. Loads were quantified using GPS devices, accelerometers and player perceived exertion ratings. Absolute and relative training load metrics were calculated for each player each day (rolling average, exponentially weighted moving average, acute:chronic workload ratio, monotony and strain). Injury prediction models (regularised logistic regression, generalised estimating equations, random forests and support vector machines) were built for non-contact, non-contact time-loss and hamstring specific injuries using the first two seasons of data. Injury predictions were generated for the third season and evaluated using the area under the receiver operator characteristic (AUC). Predictive performance was only marginally better than chance for models of non-contact and non-contact time-loss injuries (AUC$<$0.65). The best performing model was a multivariate logistic regression for hamstring injuries (best AUC=0.76). Learning curves suggested logistic regression was underfitting the load-injury relationship and that using a more complex model or increasing the amount of model building data may lead to future improvements. Injury prediction models built using training load data from a single club showed poor ability to predict injuries when tested on previously unseen data, suggesting they are limited as a daily decision tool for practitioners. Focusing the modelling approach on specific injury types and increasing the amount of training data may lead to the development of improved predictive models for injury prevention. |
Tasks | Injury Prediction |
Published | 2017-06-14 |
URL | http://arxiv.org/abs/1706.04336v1 |
http://arxiv.org/pdf/1706.04336v1.pdf | |
PWC | https://paperswithcode.com/paper/predictive-modelling-of-training-loads-and |
Repo | |
Framework | |
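The load metrics named in the abstract (rolling average, exponentially weighted moving average, acute:chronic workload ratio) are simple to compute. A minimal sketch, assuming the common `alpha = 2/(span+1)` smoothing convention and the usual 7-day acute / 28-day chronic windows; the paper's exact window lengths may differ:

```python
def ewma(loads, span=7):
    """Exponentially weighted moving average of daily loads with
    smoothing factor alpha = 2/(span+1), seeded with the first value."""
    alpha = 2.0 / (span + 1)
    out, avg = [], loads[0]
    for x in loads:
        avg = alpha * x + (1 - alpha) * avg
        out.append(avg)
    return out

def acwr(loads, acute=7, chronic=28):
    """Acute:chronic workload ratio: mean load over the last `acute` days
    divided by the mean over the last `chronic` days."""
    a = sum(loads[-acute:]) / min(acute, len(loads))
    c = sum(loads[-chronic:]) / min(chronic, len(loads))
    return a / c
```

A player with a constant daily load has ACWR 1.0; a sudden spike after a quiet month pushes the ratio well above 1, which is the kind of feature fed into the injury models.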
Conflict Analysis for Pythagorean Fuzzy Information Systems with Group Decision Making
Title | Conflict Analysis for Pythagorean Fuzzy Information Systems with Group Decision Making |
Authors | Guangming Lang |
Abstract | Pythagorean fuzzy sets provide a stronger ability than intuitionistic fuzzy sets to model uncertain information and knowledge, but little attention has been paid to conflict analysis of Pythagorean fuzzy information systems. In this paper, we present three types of positive, central, and negative alliances with different thresholds, and employ examples to illustrate how to construct the positive, central, and negative alliances. Then we study conflict analysis of Pythagorean fuzzy information systems based on Bayesian minimum risk theory. Finally, we investigate group conflict analysis of Pythagorean fuzzy information systems based on Bayesian minimum risk theory. |
Tasks | Decision Making |
Published | 2017-07-12 |
URL | http://arxiv.org/abs/1707.03739v1 |
http://arxiv.org/pdf/1707.03739v1.pdf | |
PWC | https://paperswithcode.com/paper/conflict-analysis-for-pythagorean-fuzzy |
Repo | |
Framework | |
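The sense in which Pythagorean fuzzy sets are "stronger" than intuitionistic ones is that a membership/non-membership pair (mu, nu) only needs mu² + nu² ≤ 1 rather than mu + nu ≤ 1. The sketch below checks that constraint and adds a hypothetical three-way alliance split by thresholding the score mu² − nu²; the thresholds alpha and beta here are illustrative placeholders, whereas the paper derives them from Bayesian minimum-risk analysis.

```python
def is_pythagorean(mu, nu):
    """A Pythagorean fuzzy pair only requires mu^2 + nu^2 <= 1,
    a weaker constraint than the intuitionistic mu + nu <= 1."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu * mu + nu * nu <= 1.0

def alliance(mu, nu, alpha=0.2, beta=-0.2):
    """Illustrative split into positive / central / negative alliances by
    thresholding the score function mu^2 - nu^2.  alpha and beta are
    hypothetical; the paper obtains thresholds from minimum-risk theory."""
    s = mu * mu - nu * nu
    if s >= alpha:
        return "positive"
    if s <= beta:
        return "negative"
    return "central"
```

For example, (0.8, 0.5) is a valid Pythagorean pair (0.64 + 0.25 ≤ 1) even though 0.8 + 0.5 > 1 rules it out as an intuitionistic pair.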
Scaffolding Networks: Incremental Learning and Teaching Through Questioning
Title | Scaffolding Networks: Incremental Learning and Teaching Through Questioning |
Authors | Asli Celikyilmaz, Li Deng, Lihong Li, Chong Wang |
Abstract | We introduce a new paradigm of learning for reasoning, understanding, and prediction, as well as the scaffolding network to implement this paradigm. The scaffolding network embodies an incremental learning approach that is formulated as a teacher-student network architecture to teach machines how to understand text and do reasoning. The key to our computational scaffolding approach is the interactions between the teacher and the student through sequential questioning. The student observes each sentence in the text incrementally, and it uses an attention-based neural net to discover and register the key information in relation to its current memory. Meanwhile, the teacher asks questions about the observed text, and the student network gets rewarded by correctly answering these questions. The entire network is updated continually using reinforcement learning. Our experimental results on synthetic and real datasets show that the scaffolding network not only outperforms state-of-the-art methods but also learns to do reasoning in a scalable way even with little human generated input. |
Tasks | |
Published | 2017-02-28 |
URL | http://arxiv.org/abs/1702.08653v2 |
http://arxiv.org/pdf/1702.08653v2.pdf | |
PWC | https://paperswithcode.com/paper/scaffolding-networks-incremental-learning-and |
Repo | |
Framework | |
Discrete Wavelet Transform Based Algorithm for Recognition of QRS Complexes
Title | Discrete Wavelet Transform Based Algorithm for Recognition of QRS Complexes |
Authors | Rachid Haddadi, Elhassane Abdelmounim, Mustapha El Hanine, Abdelaziz Belaguid |
Abstract | This paper proposes the application of the Discrete Wavelet Transform (DWT) to detect the QRS complex (an ECG is characterized by a recurrent wave sequence of P, QRS, and T waves) of an electrocardiogram (ECG) signal. The Wavelet Transform provides localization in both time and frequency. In the preprocessing stage, the DWT is used to remove baseline wander in the ECG signal. The performance of the QRS detection algorithm is evaluated against the standard MIT-BIH (Massachusetts Institute of Technology, Beth Israel Hospital) Arrhythmia database. An average QRS complex detection rate of 98.1% is achieved. |
Tasks | |
Published | 2017-02-28 |
URL | http://arxiv.org/abs/1703.00075v1 |
http://arxiv.org/pdf/1703.00075v1.pdf | |
PWC | https://paperswithcode.com/paper/discrete-wavelet-transform-based-algorithm |
Repo | |
Framework | |
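One level of a DWT splits a signal into low-pass (approximation) and high-pass (detail) coefficients; slow baseline wander concentrates in the approximation at coarse levels, so zeroing it and reconstructing removes the drift. A minimal sketch with the Haar basis, chosen for simplicity; the paper does not necessarily use Haar wavelets.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: returns the
    approximation (low-pass) and detail (high-pass) coefficients for an
    even-length signal."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail
```

A piecewise-constant input like `[1, 1, 2, 2]` produces zero detail coefficients, showing that the high-pass band only reacts to fast changes such as the steep QRS deflections.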
Random Forests of Interaction Trees for Estimating Individualized Treatment Effects in Randomized Trials
Title | Random Forests of Interaction Trees for Estimating Individualized Treatment Effects in Randomized Trials |
Authors | Xiaogang Su, Annette T. Peña, Lei Liu, Richard A. Levine |
Abstract | Assessing heterogeneous treatment effects has become a growing interest in advancing precision medicine. Individualized treatment effects (ITE) play a critical role in such an endeavor. For experimental data collected from randomized trials, we put forward a method, termed random forests of interaction trees (RFIT), for estimating ITE on the basis of interaction trees (Su et al., 2009). To this end, we first propose a smooth sigmoid surrogate (SSS) method, as an alternative to greedy search, to speed up tree construction. RFIT outperforms the traditional “separate regression” approach in estimating ITE. Furthermore, standard errors for the ITE estimated via RFIT can be obtained with the infinitesimal jackknife method. We assess and illustrate the use of RFIT via both simulation and the analysis of data from an acupuncture headache trial. |
Tasks | |
Published | 2017-09-14 |
URL | http://arxiv.org/abs/1709.04862v1 |
http://arxiv.org/pdf/1709.04862v1.pdf | |
PWC | https://paperswithcode.com/paper/random-forests-of-interaction-trees-for |
Repo | |
Framework | |
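The smooth sigmoid surrogate (SSS) idea replaces the hard split indicator 1{x ≤ c} with a steep sigmoid, so the split point c becomes a continuous parameter that smooth optimization can search instead of greedy enumeration. A minimal sketch; the steepness parameter gamma and the exact surrogate form are illustrative assumptions.

```python
import math

def sigmoid(t):
    """Logistic function 1 / (1 + exp(-t))."""
    return 1.0 / (1.0 + math.exp(-t))

def soft_split(x, c, gamma=10.0):
    """Smooth sigmoid surrogate for the hard split indicator 1{x <= c}:
    approaches 1 well below the cutpoint c, 0 well above it, and 0.5 at c.
    gamma controls steepness and is a hypothetical default here."""
    return sigmoid(gamma * (c - x))
```

As gamma grows, `soft_split` converges to the hard indicator, so the surrogate trades a little bias at the cutpoint for differentiability of the split criterion in c.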