Paper Group ANR 173
Estimation of Tire-Road Friction for Road Vehicles: a Time Delay Neural Network Approach
Title | Estimation of Tire-Road Friction for Road Vehicles: a Time Delay Neural Network Approach |
Authors | Alexandre M. Ribeiro, Alexandra Moutinho, André R. Fioravanti, Ely C. de Paiva |
Abstract | The performance of vehicle active safety systems is dependent on the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of different vehicle control systems. This paper deals with the tire-road friction coefficient estimation problem through the knowledge of lateral tire force. A time delay neural network (TDNN) is adopted for the proposed estimation design. The TDNN aims at detecting the road friction coefficient under lateral force excitations while avoiding the use of standard mathematical tire models, which may provide a more efficient method with robust results. Moreover, the approach is able to estimate the road friction at each wheel independently, instead of using lumped axle model simplifications. Simulations based on a realistic vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method. The results are compared with a classical approach, a model-based method formulated as a nonlinear regression. |
Tasks | Autonomous Vehicles |
Published | 2019-08-01 |
URL | https://arxiv.org/abs/1908.00452v2 |
https://arxiv.org/pdf/1908.00452v2.pdf | |
PWC | https://paperswithcode.com/paper/estimation-of-tire-road-friction-for |
Repo | |
Framework | |
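As an aside on the architecture itself: a time delay neural network feeds a fixed window of delayed signal samples through a small feed-forward network. The sketch below shows only that delay-line mechanism in plain numpy; the choice of signals, window length, and layer sizes is made up for illustration and is not the paper's configuration.

```python
import numpy as np

def tdnn_forward(signal_window, W1, b1, W2, b2):
    """Forward pass of a minimal time delay neural network (TDNN).

    signal_window has shape (n_delays, n_signals): the last n_delays samples of
    the measured signals (e.g. lateral force, slip angle). The delay line is
    simply the stacking of these past samples into one input vector.
    """
    x = signal_window.ravel()          # stack the delayed samples
    h = np.tanh(W1 @ x + b1)           # hidden layer over the whole delay line
    return W2 @ h + b2                 # friction-coefficient estimate

# Toy usage with random (untrained) weights: 5 delays of 3 signals -> 1 output.
rng = np.random.default_rng(0)
n_delays, n_signals, n_hidden = 5, 3, 16
W1 = rng.normal(size=(n_hidden, n_delays * n_signals))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(1, n_hidden))
b2 = np.zeros(1)

window = rng.normal(size=(n_delays, n_signals))   # stand-in for the sensor history
print(tdnn_forward(window, W1, b1, W2, b2))       # a single (meaningless) estimate
```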
Sample Complexity Lower Bounds for Linear System Identification
Title | Sample Complexity Lower Bounds for Linear System Identification |
Authors | Yassir Jedra, Alexandre Proutiere |
Abstract | This paper establishes problem-specific sample complexity lower bounds for linear system identification problems. The sample complexity is defined in the PAC framework: it corresponds to the time it takes to identify the system parameters with prescribed accuracy and confidence levels. By problem-specific, we mean that the lower bound explicitly depends on the system to be identified (which contrasts with minimax lower bounds), and hence really captures the identification hardness specific to the system. We consider both uncontrolled and controlled systems. For uncontrolled systems, the lower bounds are valid for any linear system, stable or not, and only depend on the system's finite-time controllability Gramian. A simplified lower bound depending on the spectrum of the system only is also derived. In view of recent finite-time analyses of classical estimation methods (e.g. ordinary least squares), our sample complexity lower bounds are tight for many systems. For controlled systems, our lower bounds are not as explicit as in the case of uncontrolled systems, but could well provide interesting insights into the design of control policies with minimal sample complexity. |
Tasks | |
Published | 2019-03-25 |
URL | http://arxiv.org/abs/1903.10343v1 |
http://arxiv.org/pdf/1903.10343v1.pdf | |
PWC | https://paperswithcode.com/paper/sample-complexity-lower-bounds-for-linear |
Repo | |
Framework | |
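The lower bounds above are phrased in terms of the finite-time controllability Gramian. For reference, that is the standard quantity W_T = sum_{k=0}^{T-1} A^k B B^T (A^T)^k for the system x_{t+1} = A x_t + B u_t; the snippet below computes it for a made-up 2-D system (the matrices are illustrative, not from the paper).

```python
import numpy as np

def finite_time_gramian(A, B, T):
    """Finite-time controllability Gramian W_T = sum_{k=0}^{T-1} A^k B B^T (A^T)^k."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(T):
        W += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A
    return W

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])        # a marginally stable toy system
B = np.eye(2)
W_10 = finite_time_gramian(A, B, 10)
print(np.linalg.eigvalsh(W_10))   # small eigenvalues flag hard-to-excite directions
```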
Self-Paced Probabilistic Principal Component Analysis for Data with Outliers
Title | Self-Paced Probabilistic Principal Component Analysis for Data with Outliers |
Authors | Bowen Zhao, Xi Xiao, Wanpeng Zhang, Bin Zhang, Shutao Xia |
Abstract | Principal Component Analysis (PCA) is a popular tool for dimensionality reduction and feature extraction in data analysis. There is a probabilistic version of PCA, known as Probabilistic PCA (PPCA). However, standard PCA and PPCA are not robust, as they are sensitive to outliers. To alleviate this problem, this paper introduces the Self-Paced Learning mechanism into PPCA, and proposes a novel method called Self-Paced Probabilistic Principal Component Analysis (SP-PPCA). Furthermore, we design the corresponding optimization algorithm based on the alternative search strategy and the expectation-maximization algorithm. SP-PPCA looks for optimal projection vectors and filters out outliers iteratively. Experiments on both synthetic problems and real-world datasets clearly demonstrate that SP-PPCA is able to reduce or eliminate the impact of outliers. |
Tasks | Dimensionality Reduction |
Published | 2019-04-13 |
URL | http://arxiv.org/abs/1904.06546v1 |
http://arxiv.org/pdf/1904.06546v1.pdf | |
PWC | https://paperswithcode.com/paper/self-paced-probabilistic-principal-component |
Repo | |
Framework | |
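To give a flavour of the self-paced loop (with ordinary PCA swapped in for the probabilistic model, so this is not the paper's SP-PPCA updates): alternately fit the subspace on the currently "easy" samples and re-admit samples whose reconstruction error falls under a gradually relaxed threshold.

```python
import numpy as np
from sklearn.decomposition import PCA

def self_paced_pca(X, n_components=2, n_iters=5, quantile=0.7, growth=0.1):
    """Self-paced loop around plain PCA (illustrative, not SP-PPCA itself)."""
    selected = np.ones(len(X), dtype=bool)            # start from all samples
    for _ in range(n_iters):
        pca = PCA(n_components=n_components).fit(X[selected])
        recon = pca.inverse_transform(pca.transform(X))
        err = np.sum((X - recon) ** 2, axis=1)        # per-sample reconstruction error
        selected = err <= np.quantile(err, quantile)  # keep the easy samples
        quantile = min(0.95, quantile + growth)       # relax the pace, but never admit everything
    return pca, selected

# Toy data: a 2-D plane embedded in 5-D, plus 10 gross outliers.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(200, 5))
X[:10] += 10 * rng.normal(size=(10, 5))
model, inliers = self_paced_pca(X)
print(int(inliers[:10].sum()), "of the 10 injected outliers kept")   # typically 0
```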
Sequence Labeling Parsing by Learning Across Representations
Title | Sequence Labeling Parsing by Learning Across Representations |
Authors | Michalina Strzyz, David Vilares, Carlos Gómez-Rodríguez |
Abstract | We use parsing as sequence labeling as a common framework to learn across constituency and dependency syntactic abstractions. To do so, we cast the problem as multitask learning (MTL). First, we show that adding a parsing paradigm as an auxiliary loss consistently improves the performance on the other paradigm. Secondly, we explore an MTL sequence labeling model that parses both representations, at almost no cost in terms of performance and speed. The results across the board show that on average MTL models with auxiliary losses for constituency parsing outperform single-task ones by 1.14 F1 points, and for dependency parsing by 0.62 UAS points. |
Tasks | Constituency Parsing, Dependency Parsing |
Published | 2019-07-02 |
URL | https://arxiv.org/abs/1907.01339v3 |
https://arxiv.org/pdf/1907.01339v3.pdf | |
PWC | https://paperswithcode.com/paper/sequence-labeling-parsing-by-learning-across |
Repo | |
Framework | |
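A minimal sketch of the multitask setup described above, assuming a shared BiLSTM encoder with one tagging head per paradigm; label-set sizes, dimensions, and the 0.5 auxiliary weight are placeholders rather than the paper's settings.

```python
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    """Shared encoder, one label-per-token head for each syntactic paradigm."""
    def __init__(self, vocab_size, n_const_labels, n_dep_labels,
                 emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.const_head = nn.Linear(2 * hidden_dim, n_const_labels)
        self.dep_head = nn.Linear(2 * hidden_dim, n_dep_labels)

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        return self.const_head(h), self.dep_head(h)

model = MultiTaskTagger(vocab_size=10000, n_const_labels=50, n_dep_labels=80)
tokens = torch.randint(0, 10000, (2, 12))         # batch of 2 sentences, 12 tokens each
gold_const = torch.randint(0, 50, (2, 12))
gold_dep = torch.randint(0, 80, (2, 12))
logits_c, logits_d = model(tokens)
# Down-weighting one term turns that paradigm into a pure auxiliary loss.
loss = nn.functional.cross_entropy(logits_c.transpose(1, 2), gold_const) \
     + 0.5 * nn.functional.cross_entropy(logits_d.transpose(1, 2), gold_dep)
loss.backward()
```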
How Large Are Lions? Inducing Distributions over Quantitative Attributes
Title | How Large Are Lions? Inducing Distributions over Quantitative Attributes |
Authors | Yanai Elazar, Abhijit Mahabal, Deepak Ramachandran, Tania Bedrax-Weiss, Dan Roth |
Abstract | Most current NLP systems have little knowledge about quantitative attributes of objects and events. We propose an unsupervised method for collecting quantitative information from large amounts of web data, and use it to create a new, very large resource consisting of distributions over physical quantities associated with objects, adjectives, and verbs, which we call Distributions over Quantities (DoQ). This contrasts with recent work in this area, which has focused on making only relative comparisons such as "Is a lion bigger than a wolf?". Our evaluation shows that DoQ compares favorably with state-of-the-art results on existing datasets for relative comparisons of nouns and adjectives, and on a new dataset we introduce. |
Tasks | |
Published | 2019-06-04 |
URL | https://arxiv.org/abs/1906.01327v1 |
https://arxiv.org/pdf/1906.01327v1.pdf | |
PWC | https://paperswithcode.com/paper/how-large-are-lions-inducing-distributions |
Repo | |
Framework | |
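The resource described above is essentially a map from (word, attribute) pairs to empirical value distributions. The toy sketch below (made-up extractions, not DoQ data) shows how such distributions already support the relative comparisons mentioned in the abstract.

```python
from collections import defaultdict
import numpy as np

# Hypothetical extractions of (noun, attribute, value-in-SI-units) triples,
# standing in for what large-scale web extraction would produce.
mentions = [
    ("lion", "mass_kg", 190), ("lion", "mass_kg", 160), ("lion", "mass_kg", 120),
    ("wolf", "mass_kg", 40),  ("wolf", "mass_kg", 55),  ("wolf", "mass_kg", 35),
]

distributions = defaultdict(list)
for noun, attr, value in mentions:
    distributions[(noun, attr)].append(value)        # empirical distribution per (noun, attribute)

def probably_larger(a, b, attr):
    """Fraction of sampled pairs in which a's value exceeds b's: a crude
    distribution-level answer to 'is a bigger than b?'."""
    va = np.array(distributions[(a, attr)])
    vb = np.array(distributions[(b, attr)])
    return (va[:, None] > vb[None, :]).mean()

print(probably_larger("lion", "wolf", "mass_kg"))    # 1.0 on this toy data
```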
Binary Classification using Pairs of Minimum Spanning Trees or N-ary Trees
Title | Binary Classification using Pairs of Minimum Spanning Trees or N-ary Trees |
Authors | Riccardo La Grassa, Ignazio Gallo, Alessandro Calefati, Dimitri Ognibene |
Abstract | One-class classifiers are trained with samples of the target class only. Intuitively, their conservative modelling of the class description may benefit classical classification tasks where classes are difficult to separate due to overlapping and data imbalance. In this work, three methods are proposed which leverage the combination of one-class classifiers based on non-parametric models, N-ary Trees and Minimum Spanning Tree class descriptors (MST-CD), to tackle binary classification problems. The methods deal with the inconsistencies arising from combining multiple classifiers and with spurious connections that MST-CD creates in multi-modal class distributions. As shown by our tests on several datasets, the proposed approach is feasible and comparable with state-of-the-art algorithms. |
Tasks | |
Published | 2019-06-14 |
URL | https://arxiv.org/abs/1906.06090v2 |
https://arxiv.org/pdf/1906.06090v2.pdf | |
PWC | https://paperswithcode.com/paper/binary-classification-using-pairs-of-minimum |
Repo | |
Framework | |
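For readers unfamiliar with MST class descriptors: the target class is summarized by the minimum spanning tree over its training samples, and a query is scored by its distance to the nearest tree edge. A small sketch of that scoring (not the paper's full combination scheme):

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

def point_to_segment(p, a, b):
    """Euclidean distance from point p to the segment [a, b]."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def mst_cd_distance(X_train, x):
    """Distance of a query point to the MST class descriptor: the smallest
    distance from x to any edge of the MST built on the target-class samples."""
    mst = minimum_spanning_tree(cdist(X_train, X_train)).tocoo()
    return min(point_to_segment(x, X_train[i], X_train[j])
               for i, j in zip(mst.row, mst.col))

rng = np.random.default_rng(0)
target = rng.normal(size=(30, 2))                      # target-class samples
print(mst_cd_distance(target, np.array([0.1, 0.2])))   # small: near the class
print(mst_cd_distance(target, np.array([8.0, 8.0])))   # large: reject as non-target
```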
Pragmatic classification of movement primitives for stroke rehabilitation
Title | Pragmatic classification of movement primitives for stroke rehabilitation |
Authors | Avinash Parnandi, Jasim Uddin, Dawn M. Nilsen, Heidi Schambra |
Abstract | Rehabilitation training is the primary intervention to improve motor recovery after stroke, but a tool to measure functional training does not currently exist. To bridge this gap, we previously developed an approach to classify functional movement primitives using wearable sensors and a machine learning (ML) algorithm. We found that this approach had encouraging classification performance but had computational and practical limitations, such as training time, sensor cost, and magnetic drift. Here, we sought to refine this approach and determine the algorithm, sensor configurations, and data requirements needed to maximize computational and practical performance. Motion data had been previously collected from 6 stroke patients wearing 11 inertial measurement units (IMUs) as they moved objects on a target array. To identify optimal ML performance, we evaluated 4 algorithms that are commonly used in activity recognition (linear discriminant analysis (LDA), naïve Bayes, support vector machine, and k-nearest neighbors). We compared their classification accuracy, computational complexity, and tuning requirements. To identify optimal sensor configuration, we progressively sampled fewer sensors and compared classification accuracy. To identify optimal data requirements, we compared accuracy using data from IMUs versus accelerometers. We found that LDA had the highest classification accuracy (92%) of the algorithms tested. It also was the most pragmatic, with low training and testing times and modest tuning requirements. We found that 7 sensors on the paretic arm and back resulted in the best accuracy. Using this array, accelerometers had a lower accuracy (84%). We refined strategies to accurately and pragmatically quantify functional movement primitives in stroke patients. We propose that this optimized ML-sensor approach could be a means to quantify training dose after stroke. |
Tasks | Activity Recognition |
Published | 2019-02-22 |
URL | http://arxiv.org/abs/1902.08697v3 |
http://arxiv.org/pdf/1902.08697v3.pdf | |
PWC | https://paperswithcode.com/paper/pragmatic-classification-of-movement |
Repo | |
Framework | |
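The winning classifier here is plain linear discriminant analysis over windowed sensor features, which is a one-liner in scikit-learn. The sketch below uses random stand-in features and labels just to show the pipeline shape; accuracy on such data is meaningless.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Stand-in for windowed IMU features: each row summarizes one movement window
# (e.g. per-axis means/variances across the worn sensors); labels are the
# primitive classes. Real features and labels come from the sensor pipeline.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 7 * 6))        # 7 sensors x 6 summary features per window
y = rng.integers(0, 5, size=600)         # 5 hypothetical primitive classes

clf = LinearDiscriminantAnalysis()       # cheap to train, essentially no tuning knobs
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())                     # ~chance here; meaningful only on real data
```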
To believe or not to believe: Validating explanation fidelity for dynamic malware analysis
Title | To believe or not to believe: Validating explanation fidelity for dynamic malware analysis |
Authors | Li Chen, Carter Yagemann, Evan Downing |
Abstract | Converting malware into images followed by vision-based deep learning algorithms has shown superior threat detection efficacy compared with classical machine learning algorithms. When malware are visualized as images, visual-based interpretation schemes can also be applied to extract insights into why individual samples are classified as malicious. In this work, via two case studies of dynamic malware classification, we extend the local interpretable model-agnostic explanation algorithm to explain image-based dynamic malware classification and examine its interpretation fidelity. For both case studies, we first train deep learning models via transfer learning on malware images, demonstrate high classification effectiveness, apply an explanation method on the images, and correlate the results back to the samples to validate whether the algorithmic insights are consistent with security domain expertise. In our first case study, the interpretation framework identifies indirect calls that uniquely characterize the underlying exploit behavior of a malware family. In our second case study, the interpretation framework extracts insightful information such as cryptography-related APIs when applied on images created from API existence, but generates ambiguous interpretations on images created from API sequences and frequencies. Our findings indicate that current image-based interpretation techniques are promising for explaining vision-based malware classification. We continue to develop image-based interpretation schemes specifically for security applications. |
Tasks | Malware Classification, Transfer Learning |
Published | 2019-04-30 |
URL | http://arxiv.org/abs/1905.00122v1 |
http://arxiv.org/pdf/1905.00122v1.pdf | |
PWC | https://paperswithcode.com/paper/to-believe-or-not-to-believe-validating |
Repo | |
Framework | |
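The explanation method extended here is LIME. To make the idea concrete without relying on any particular library, below is a from-scratch local-surrogate sketch for a grayscale "malware image": blank out random patch subsets, record the classifier's output, and fit a weighted linear model from patch masks to outputs. The grid size, weighting, and toy classifier are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(image, predict_fn, grid=8, n_samples=500, rng=None):
    """LIME-style local explanation: coefficients over grid x grid patches,
    learned from random patch-blanking perturbations of a single image."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape
    ph, pw = h // grid, w // grid
    masks = rng.integers(0, 2, size=(n_samples, grid * grid))   # 1 = keep patch
    probs = np.empty(n_samples)
    for s, m in enumerate(masks):
        perturbed = image.copy()
        for k, keep in enumerate(m):
            if not keep:
                r, c = divmod(k, grid)
                perturbed[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = 0
        probs[s] = predict_fn(perturbed)
    weights = np.exp(-(grid * grid - masks.sum(1)) / grid)       # favour small perturbations
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    return surrogate.coef_.reshape(grid, grid)

# Toy "classifier": flags images whose upper-left quadrant is bright.
toy_image = np.zeros((64, 64))
toy_image[:32, :32] = 1.0
importance = local_surrogate(toy_image, lambda img: img[:32, :32].mean())
print(importance.round(2))   # largest coefficients sit in the upper-left patches
```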
Transferring Knowledge Fragments for Learning Distance Metric from A Heterogeneous Domain
Title | Transferring Knowledge Fragments for Learning Distance Metric from A Heterogeneous Domain |
Authors | Yong Luo, Yonggang Wen, Tongliang Liu, Dacheng Tao |
Abstract | The goal of transfer learning is to improve the performance of a target learning task by leveraging information (or transferring knowledge) from other related tasks. In this paper, we examine the problem of transfer distance metric learning (DML), which usually aims to mitigate the label information deficiency issue in the target DML. Most of the current Transfer DML (TDML) methods are not applicable to the scenario where data are drawn from heterogeneous domains. Some existing heterogeneous transfer learning (HTL) approaches can learn a target distance metric, usually by transforming the samples of the source and target domains into a common subspace. However, these approaches lack flexibility in real-world applications, and the learned transformations are often restricted to be linear. This motivates us to develop a general flexible heterogeneous TDML (HTDML) framework. In particular, any (linear/nonlinear) DML algorithm can be employed to learn the source metric beforehand. Then the pre-learned source metric is represented as a set of knowledge fragments to help target metric learning. We show how the generalization error in the target domain can be reduced using the proposed transfer strategy, and develop novel algorithms to learn either a linear or a nonlinear target metric. Extensive experiments on various applications demonstrate the effectiveness of the proposed method. |
Tasks | Metric Learning, Transfer Learning |
Published | 2019-04-08 |
URL | http://arxiv.org/abs/1904.04061v1 |
http://arxiv.org/pdf/1904.04061v1.pdf | |
PWC | https://paperswithcode.com/paper/transferring-knowledge-fragments-for-learning |
Repo | |
Framework | |
On the CVP for the root lattices via folding with deep ReLU neural networks
Title | On the CVP for the root lattices via folding with deep ReLU neural networks |
Authors | Vincent Corlay, Joseph J. Boutros, Philippe Ciblat, Loic Brunel |
Abstract | Point lattices and their decoding via neural networks are considered in this paper. Lattice decoding in R^n, known as the closest vector problem (CVP), becomes a classification problem in the fundamental parallelotope with a piecewise linear function defining the boundary. Theoretical results are obtained by studying root lattices. We show how the number of pieces in the boundary function reduces dramatically with folding, from exponential to linear. This translates into a two-layer ReLU network requiring a number of neurons growing exponentially in n to solve the CVP, whereas this complexity becomes polynomial in n for a deep ReLU network. |
Tasks | |
Published | 2019-02-06 |
URL | http://arxiv.org/abs/1902.05146v2 |
http://arxiv.org/pdf/1902.05146v2.pdf | |
PWC | https://paperswithcode.com/paper/on-the-cvp-for-the-root-lattices-via-folding |
Repo | |
Framework | |
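The folding argument can be seen in one dimension: a single ReLU layer computes |x| = ReLU(x) + ReLU(-x), identifying x with -x, so a boundary symmetric about 0 only has to be represented on half the domain, and composing folds compounds the saving. The sketch below shows that one-dimensional primitive (the classic tent-map/sawtooth construction), not the paper's construction for root lattices.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def fold(x):
    """One ReLU 'fold': |x| = relu(x) + relu(-x), mapping x and -x to the same point."""
    return relu(x) + relu(-x)

def sawtooth(x, depth):
    """Compose `depth` folds of a shifted tent map on [0, 1]: the result has
    2**depth linear pieces, while a shallow ReLU network needs on the order of
    2**depth units for the same function -- the depth-vs-width gap the paper
    exploits for lattice decision boundaries."""
    for _ in range(depth):
        x = 1.0 - fold(2.0 * x - 1.0)     # tent map built from ReLUs
    return x

xs = np.linspace(0.0, 1.0, 9)
print(np.round(sawtooth(xs, depth=3), 3))  # alternates 0,1,0,... : 2**3 = 8 pieces on [0, 1]
```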
Decomposition-Based Transfer Distance Metric Learning for Image Classification
Title | Decomposition-Based Transfer Distance Metric Learning for Image Classification |
Authors | Yong Luo, Tongliang Liu, Dacheng Tao, Chao Xu |
Abstract | Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information, and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method. |
Tasks | Image Classification, Metric Learning, Transfer Learning |
Published | 2019-04-08 |
URL | http://arxiv.org/abs/1904.03846v1 |
http://arxiv.org/pdf/1904.03846v1.pdf | |
PWC | https://paperswithcode.com/paper/decomposition-based-transfer-distance-metric |
Repo | |
Framework | |
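A minimal sketch of the representation described above: eigendecompose a source metric into rank-one base metrics and express the target metric as a weighted combination, so only the combination coefficients remain to be learned (here they are simply fixed by hand; the paper learns them sparsely from the target side information).

```python
import numpy as np

def base_metrics_from_source(M_source, k=5):
    """Rank-one base metrics u_i u_i^T built from the top-k eigenvectors of a source metric."""
    vals, vecs = np.linalg.eigh(M_source)
    top = vecs[:, np.argsort(vals)[::-1][:k]]
    return [np.outer(u, u) for u in top.T]

def combine(bases, theta):
    """Target metric as a weighted combination of the base metrics."""
    return sum(t * B for t, B in zip(theta, bases))

rng = np.random.default_rng(0)
L = rng.normal(size=(10, 10))
M_source = L @ L.T                            # some positive semi-definite source metric
bases = base_metrics_from_source(M_source, k=4)
theta = np.array([0.5, 0.3, 0.2, 0.0])        # only these coefficients would need learning
M_target = combine(bases, theta)
x, y = rng.normal(size=10), rng.normal(size=10)
print(np.sqrt((x - y) @ M_target @ (x - y)))  # Mahalanobis distance under the target metric
```

With k base metrics there are only k coefficients to fit instead of a full d x d matrix, which is where the "far fewer variables" advantage comes from.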
Rugby-Bot: Utilizing Multi-Task Learning & Fine-Grained Features for Rugby League Analysis
Title | Rugby-Bot: Utilizing Multi-Task Learning & Fine-Grained Features for Rugby League Analysis |
Authors | Matthew Holbrook, Jennifer Hobbs, Patrick Lucey |
Abstract | Sporting events are extremely complex and require a multitude of metrics to accurately describe the event. When making multiple predictions, one should make them from a single source to keep consistency across the predictions. We present a multi-task learning method of generating multiple predictions for analysis via a single prediction source. To enable this approach, we utilize a fine-grained representation of spatial data within a wide-and-deep learning approach. Additionally, our approach can predict distributions rather than single point values. We highlight the utility of our approach on the sport of Rugby League and call our prediction engine “Rugby-Bot”. |
Tasks | Multi-Task Learning |
Published | 2019-10-16 |
URL | https://arxiv.org/abs/1910.07410v1 |
https://arxiv.org/pdf/1910.07410v1.pdf | |
PWC | https://paperswithcode.com/paper/rugby-bot-utilizing-multi-task-learning-fine |
Repo | |
Framework | |
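One way to read "a single prediction source" is a shared wide-and-deep trunk with several heads, at least one of which outputs a distribution rather than a point estimate. The sketch below follows that reading; feature sizes, head choices, and the margin discretization are assumptions, not the Rugby-Bot model.

```python
import torch
import torch.nn as nn

class WideAndDeepMultiTask(nn.Module):
    """Several prediction heads sharing one wide-and-deep trunk."""
    def __init__(self, n_wide, n_deep, n_score_bins=30):
        super().__init__()
        self.deep = nn.Sequential(
            nn.Linear(n_deep, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        joint = n_wide + 64                        # wide features skip straight to the heads
        self.win_prob = nn.Linear(joint, 1)        # task 1: win probability
        self.score_dist = nn.Linear(joint, n_score_bins)  # task 2: distribution over final margin

    def forward(self, wide_x, deep_x):
        z = torch.cat([wide_x, self.deep(deep_x)], dim=-1)
        return torch.sigmoid(self.win_prob(z)), torch.softmax(self.score_dist(z), dim=-1)

model = WideAndDeepMultiTask(n_wide=20, n_deep=200)
wide_x, deep_x = torch.randn(4, 20), torch.randn(4, 200)   # 4 game states
p_win, margin_dist = model(wide_x, deep_x)
print(p_win.shape, margin_dist.shape)            # (4, 1) and (4, 30): point + distribution outputs
```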
TSRuleGrowth: Extracting partially-ordered prediction rules from a time series of discrete elements, with an application to ambient intelligence
Title | TSRuleGrowth: Extracting partially-ordered prediction rules from a time series of discrete elements, with an application to ambient intelligence |
Authors | Benoit Vuillemin, Lionel Delphin-Poulat, Rozenn Nicol, Laëtitia Matignon, Salima Hassas |
Abstract | This paper presents a new algorithm, TSRuleGrowth, that looks for partially-ordered rules over a time series. This algorithm takes principles from the state of the art of rule mining and applies them to time series via a new notion of support. We apply this algorithm to real data from a connected environment, extracting user habits through different connected objects. |
Tasks | Time Series |
Published | 2019-07-23 |
URL | https://arxiv.org/abs/1907.10054v1 |
https://arxiv.org/pdf/1907.10054v1.pdf | |
PWC | https://paperswithcode.com/paper/tsrulegrowth-extraction-de-regles-de |
Repo | |
Framework | |
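The core object is a rule premise => conclusion over a time series of discrete items, scored by a notion of support. The toy function below counts a deliberately simplified support (premise occurrences followed by all conclusion items within a window) on a hypothetical smart-home event stream; TSRuleGrowth's actual partially-ordered support is more involved.

```python
def rule_support(events, premise, conclusion, window):
    """Count premise occurrences that are followed by every conclusion item
    within `window` ticks, over a sequence of (timestamp, item) events."""
    count = 0
    for t, item in events:
        if item not in premise:
            continue
        past = {it for ts, it in events if t - window <= ts <= t}
        future = {it for ts, it in events if t < ts <= t + window}
        if premise <= past and conclusion <= future:
            count += 1
    return count

# Hypothetical smart-home event stream: (time, sensor/actuator item).
stream = [(1, "kettle_on"), (2, "cupboard_open"), (3, "coffee_machine_on"),
          (10, "kettle_on"), (12, "coffee_machine_on"), (20, "tv_on")]
print(rule_support(stream, {"kettle_on"}, {"coffee_machine_on"}, window=3))  # 2
```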
TrajectoryNet: a new spatio-temporal feature learning network for human motion prediction
Title | TrajectoryNet: a new spatio-temporal feature learning network for human motion prediction |
Authors | Xiaoli Liu, Jianqin Yin, Jin Liu, Pengxiang Ding, Jun Liu, Huaping Liu |
Abstract | Human motion prediction is an increasingly interesting topic in computer vision and robotics. In this paper, we propose a new 2D CNN based network, TrajectoryNet, to predict future poses in the trajectory space. Compared with most existing methods, our model focuses on modeling the motion dynamics with coupled spatio-temporal features, local-global spatial features and global temporal co-occurrence features of the previous pose sequence. Specifically, the coupled spatio-temporal features describe the spatial and temporal structure information hidden in the natural human motion sequence, which can be mined by covering the space and time dimensions of the input pose sequence with the convolutional filters. The local-global spatial features that encode different correlations of different joints of the human body (e.g. strong correlations between joints of one limb, weak correlations between joints of different limbs) are captured hierarchically by enlarging the receptive field layer by layer and residual connections from the lower layers to the deeper layers in our proposed convolutional network. The global temporal co-occurrence features represent the relationship whereby different subsequences of a complex motion sequence appear simultaneously, which can be obtained automatically with our proposed TrajectoryNet by reorganizing the temporal information as the depth dimension of the input tensor. Finally, future poses are approximated based on the captured motion dynamics features. Extensive experiments show that our method achieves state-of-the-art performance on three challenging benchmarks (e.g. Human3.6M, G3D, and FNTU), which demonstrates the effectiveness of our proposed method. The code will be available if the paper is accepted. |
Tasks | motion prediction, Pose Prediction |
Published | 2019-10-15 |
URL | https://arxiv.org/abs/1910.06583v2 |
https://arxiv.org/pdf/1910.06583v2.pdf | |
PWC | https://paperswithcode.com/paper/trajectorylet-net-a-novel-framework-for-pose |
Repo | |
Framework | |
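One possible reading of "reorganizing the temporal information as the depth dimension of the input tensor" is to place the observed frames on the channel axis of a (joints x coordinates) map and apply 2D convolutions, as sketched below; the sizes and layer stack are illustrative and this is not the TrajectoryNet architecture.

```python
import torch
import torch.nn as nn

T_in, T_out, J, C = 10, 25, 22, 3          # observed frames, predicted frames, joints, xyz

net = nn.Sequential(
    nn.Conv2d(T_in, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, T_out, kernel_size=3, padding=1),   # emit one channel per future frame
)

past = torch.randn(8, T_in, J, C)          # batch of 8 observed pose sequences
future = net(past)                         # (8, T_out, J, C): predicted future poses
print(future.shape)
```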
Prediction-Tracking-Segmentation
Title | Prediction-Tracking-Segmentation |
Authors | Jianren Wang, Yihui He, Xiaobo Wang, Xinjia Yu, Xia Chen |
Abstract | We introduce a prediction driven method for visual tracking and segmentation in videos. Instead of solely relying on matching with appearance cues for tracking, we build a predictive model which guides finding more accurate tracking regions efficiently. With the proposed prediction mechanism, we improve the model robustness against distractions and occlusions during tracking. We demonstrate significant improvements over state-of-the-art methods not only on visual tracking tasks (VOT 2016 and VOT 2018) but also on video segmentation datasets (DAVIS 2016 and DAVIS 2017). |
Tasks | Video Semantic Segmentation, Visual Tracking |
Published | 2019-04-05 |
URL | http://arxiv.org/abs/1904.03280v1 |
http://arxiv.org/pdf/1904.03280v1.pdf | |
PWC | https://paperswithcode.com/paper/prediction-tracking-segmentation |
Repo | |
Framework | |
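The "prediction driven" idea is that a motion model proposes where to look before appearance matching and segmentation run. As a stand-in for the paper's learned predictor, the sketch below extrapolates the next search region from the two previous boxes with a constant-velocity assumption.

```python
import numpy as np

def predict_search_region(history, expand=1.5):
    """Extrapolate the next box centre from the last two tracked boxes and
    return an expanded region for the matcher/segmenter. Boxes are (cx, cy, w, h)."""
    (cx0, cy0, _, _), (cx1, cy1, w1, h1) = history[-2], history[-1]
    cx, cy = 2 * cx1 - cx0, 2 * cy1 - cy0          # linear (constant-velocity) extrapolation
    return np.array([cx, cy, expand * w1, expand * h1])

track = [(100, 100, 40, 60), (108, 103, 40, 60)]    # boxes from the two previous frames
print(predict_search_region(track))                 # search around (116, 106) in the next frame
```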