October 19, 2019

Paper Group ANR 296

A New Multi Criteria Decision Making Method: Approach of Logarithmic Concept (APLOCO)

Title A New Multi Criteria Decision Making Method: Approach of Logarithmic Concept (APLOCO)
Authors Tevfik Bulut
Abstract The primary aim of the study is to introduce the APLOCO method, which is developed for the solution of multicriteria decision making problems, both theoretically and practically. In this context, the application subject of APLOCO is the evaluation of the investment potential of different cities with metropolitan status in Turkey. The secondary purpose of the study is to identify the independent variables affecting factories in the operating phase and to estimate the effect levels of the independent variables on the dependent variable in organized industrial zones (OIZs), whose mission is to reduce regional development disparities and to mobilize local production dynamics. For this purpose, the effect levels of the independent variables on the dependent variable have been determined using the multilayer perceptron (MLP) method, which is widely used in artificial neural networks (ANNs). The effect levels derived from the MLP have then been used as the weights of the decision criteria in APLOCO, and the independent variables included in the MLP are also used as the decision criteria in APLOCO. According to the results obtained from APLOCO, Istanbul is the best alternative in terms of investment potential, followed by Manisa, Denizli, Izmir, Kocaeli, Bursa, Ankara, Adana, and Antalya, respectively. Although APLOCO is used here to solve a ranking problem in order to demonstrate the application process, it can be employed just as easily for classification and selection problems. The study also presents a rare example of the nested usage of APLOCO, an operations research method, with an MLP used to determine the weights.
Tasks Decision Making
Published 2018-02-08
URL http://arxiv.org/abs/1802.04095v1
PDF http://arxiv.org/pdf/1802.04095v1.pdf
PWC https://paperswithcode.com/paper/a-new-multi-criteria-decision-making-method
Repo
Framework
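The weighting idea in this entry — effect levels learned by an MLP reused as criterion weights in a multicriteria ranking — can be sketched in a few lines. This is an illustrative weighted-scoring sketch with a logarithmic normalization, not the actual APLOCO formulation (which the abstract does not specify); the city data and weights below are hypothetical.

```python
import math

def log_normalize(values):
    """Map positive criterion values to [0, 1] via logarithmic scaling."""
    logs = [math.log(v) for v in values]
    lo, hi = min(logs), max(logs)
    if hi == lo:
        return [1.0 for _ in logs]
    return [(x - lo) / (hi - lo) for x in logs]

def rank_alternatives(matrix, weights):
    """matrix[i][j]: value of alternative i on (benefit) criterion j.
    weights[j] plays the role of the MLP-derived effect level."""
    n_crit = len(weights)
    cols = [log_normalize([row[j] for row in matrix]) for j in range(n_crit)]
    scores = [sum(w * cols[j][i] for j, w in enumerate(weights))
              for i in range(len(matrix))]
    # higher score = better; return indices sorted best-first
    return sorted(range(len(matrix)), key=lambda i: -scores[i]), scores

order, scores = rank_alternatives(
    [[120, 3.5], [80, 4.0], [200, 2.0]],  # hypothetical city data
    [0.6, 0.4],                            # hypothetical MLP-derived weights
)
```

The point of the sketch is the plumbing: any importance estimates produced by a learned model can be dropped into `weights` without changing the ranking code.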

A Study of Deep Feature Fusion based Methods for Classifying Multi-lead ECG

Title A Study of Deep Feature Fusion based Methods for Classifying Multi-lead ECG
Authors Bin Chen, Wei Guo, Bin Li, Rober K. F. Teng, Mingjun Dai, Jianping Luo, Hui Wang
Abstract An automatic classification method has been studied to effectively detect and recognize electrocardiogram (ECG) signals. Based on the synchronization and orthogonality relationships of multiple leads, we propose a Multi-branch Convolution and Residual Network (MBCRNet) with three kinds of feature fusion methods for automatic detection of normal and abnormal ECG signals. Experiments are conducted on the Chinese Cardiovascular Disease Database (CCDD). Through 10-fold cross-validation, we achieve an average accuracy of 87.04% and a sensitivity of 89.93%, which outperforms previous methods on the same database. It is also shown that the multi-lead feature fusion network improves classification accuracy over a network that uses only single-lead features.
Tasks
Published 2018-08-06
URL http://arxiv.org/abs/1808.01721v1
PDF http://arxiv.org/pdf/1808.01721v1.pdf
PWC https://paperswithcode.com/paper/a-study-of-deep-feature-fusion-based-methods
Repo
Framework
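The evaluation protocol quoted above (10-fold cross-validation with averaged accuracy and sensitivity) is easy to pin down precisely. A minimal sketch, with a caller-supplied predictor standing in for MBCRNet; abnormal beats are assumed to carry label 1:

```python
def k_fold_indices(n, k=10):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(labels, predict, k=10):
    """Average accuracy and sensitivity over k held-out folds."""
    accs, sens = [], []
    for test_idx in k_fold_indices(len(labels), k):
        preds = [predict(i) for i in test_idx]
        truth = [labels[i] for i in test_idx]
        accs.append(sum(p == t for p, t in zip(preds, truth)) / len(truth))
        pos = [(p, t) for p, t in zip(preds, truth) if t == 1]  # abnormal = 1
        if pos:  # sensitivity = recall on the abnormal class
            sens.append(sum(p == 1 for p, _ in pos) / len(pos))
    return sum(accs) / len(accs), (sum(sens) / len(sens)) if sens else 0.0
```

In a real replication the folds would be shuffled or stratified per patient; contiguous folds keep the sketch short.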

Towards Robust Evaluations of Continual Learning

Title Towards Robust Evaluations of Continual Learning
Authors Sebastian Farquhar, Yarin Gal
Abstract Experiments used in current continual learning research do not faithfully assess fundamental challenges of learning continually. Instead of assessing performance on challenging and representative experiment designs, recent research has focused on increased dataset difficulty, while still using flawed experiment set-ups. We examine standard evaluations and show why these evaluations make some continual learning approaches look better than they are. We introduce desiderata for continual learning evaluations and explain why their absence creates misleading comparisons. Based on our desiderata we then propose new experiment designs which we demonstrate with various continual learning approaches and datasets. Our analysis calls for a reprioritization of research effort by the community.
Tasks Continual Learning
Published 2018-05-24
URL https://arxiv.org/abs/1805.09733v3
PDF https://arxiv.org/pdf/1805.09733v3.pdf
PWC https://paperswithcode.com/paper/towards-robust-evaluations-of-continual
Repo
Framework

Generating protected fingerprint template utilizing coprime mapping transformation

Title Generating protected fingerprint template utilizing coprime mapping transformation
Authors Rudresh Dwivedi, Somnath Dey
Abstract The identity of a user is permanently lost if biometric data gets compromised, since biometric information is irreplaceable and irrevocable. To revoke and reissue a new template in place of a compromised biometric template, the idea of cancelable biometrics has been introduced. The concept behind cancelable biometrics is to irreversibly transform the original biometric template and perform the comparison in the protected domain. In this paper, a coprime transformation scheme is proposed to derive a protected fingerprint template. The method divides the fingerprint region into a number of sectors with respect to each minutia point and identifies the nearest-neighbor minutiae in each sector. Then, ridge features for all neighboring minutiae are computed and mapped onto coprime positions of a random matrix to generate the cancelable template. The proposed approach achieves an EER of 1.82, 1.39, 4.02 and 5.77 on the DB1, DB2, DB3 and DB4 datasets of FVC2002, and an EER of 8.70, 7.95, 5.23 and 4.87 on the DB1, DB2, DB3 and DB4 datasets of FVC2004, respectively. Experimental evaluations indicate that the method outperforms the current state-of-the-art. Moreover, the security analysis confirms that the proposed method fulfills the desired characteristics of diversity, revocability, and non-invertibility, with only a minor performance degradation caused by the transformation.
Tasks
Published 2018-05-25
URL http://arxiv.org/abs/1805.10108v1
PDF http://arxiv.org/pdf/1805.10108v1.pdf
PWC https://paperswithcode.com/paper/generating-protected-fingerprint-template
Repo
Framework
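The "coprime positions of a random matrix" phrase can be made concrete. A heavily hedged illustration: cells (r, c) with gcd(r, c) = 1 are selected, shuffled under a key-derived seed, and feature values are scattered onto them. The paper's actual transformation operates on minutiae-sector ridge features and a specific random matrix construction; only the coprime position mapping is shown here, and `seed` merely plays the role of a user key.

```python
from math import gcd
import random

def coprime_positions(n):
    """All (r, c) in an n x n grid (1-indexed) with gcd(r, c) == 1."""
    return [(r, c) for r in range(1, n + 1)
                   for c in range(1, n + 1) if gcd(r, c) == 1]

def map_features(features, n, seed=42):
    """Place feature values on pseudo-randomly chosen coprime cells."""
    rng = random.Random(seed)      # key-driven randomness (illustrative)
    cells = coprime_positions(n)
    rng.shuffle(cells)
    template = [[0.0] * n for _ in range(n)]
    for value, (r, c) in zip(features, cells):
        template[r - 1][c - 1] = value
    return template
```

Changing the seed (i.e. reissuing the key) yields a different template from the same features, which is the revocability property the abstract refers to.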

Trace Quotient with Sparsity Priors for Learning Low Dimensional Image Representations

Title Trace Quotient with Sparsity Priors for Learning Low Dimensional Image Representations
Authors Xian Wei, Hao Shen, Martin Kleinsteuber
Abstract This work studies the problem of learning appropriate low dimensional image representations. We propose a generic algorithmic framework, which leverages two classic representation learning paradigms, i.e., sparse representation and the trace quotient criterion. The former is a well-known powerful tool to identify underlying self-explanatory factors of data, while the latter is known for disentangling underlying low dimensional discriminative factors in data. Our developed solutions disentangle sparse representations of images by employing the trace quotient criterion. We construct a unified cost function, coined the SPARse LOW dimensional representation (SparLow) function, for jointly learning both a sparsifying dictionary and a dimensionality reduction transformation. The SparLow function is widely applicable for developing various algorithms in three classic machine learning scenarios, namely, unsupervised, supervised, and semi-supervised learning. In order to develop efficient joint learning algorithms for maximizing the SparLow function, we deploy a framework of sparse coding with appropriate convex priors to ensure that the sparse representations are locally differentiable. Moreover, we develop an efficient geometric conjugate gradient algorithm to maximize the SparLow function on its underlying Riemannian manifold. Performance of the proposed SparLow algorithmic framework is investigated on several image processing tasks, such as 3D data visualization, face/digit recognition, and object/scene categorization.
Tasks Dimensionality Reduction, Representation Learning
Published 2018-10-08
URL http://arxiv.org/abs/1810.03523v1
PDF http://arxiv.org/pdf/1810.03523v1.pdf
PWC https://paperswithcode.com/paper/trace-quotient-with-sparsity-priors-for
Repo
Framework
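For readers unfamiliar with the trace quotient criterion the abstract leans on: for a projection U with orthonormal columns, one maximizes tr(UᵀAU) / tr(UᵀBU), where a standard choice takes A as between-class scatter and B as within-class scatter (the paper's exact operators may differ). A pure-Python evaluation of the quotient:

```python
def mat_mul(X, Y):
    """Naive matrix product for small dense matrices (lists of rows)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

def trace_quotient(U, A, B):
    """tr(U^T A U) / tr(U^T B U) for a projection matrix U."""
    Ut = transpose(U)
    num = trace(mat_mul(mat_mul(Ut, A), U))
    den = trace(mat_mul(mat_mul(Ut, B), U))
    return num / den
```

Maximizing this quantity over U on the Stiefel/Grassmann manifold is what the geometric conjugate gradient in the abstract does; the sketch only evaluates the objective.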

Optimizing Taxi Carpool Policies via Reinforcement Learning and Spatio-Temporal Mining

Title Optimizing Taxi Carpool Policies via Reinforcement Learning and Spatio-Temporal Mining
Authors Ishan Jindal, Zhiwei Qin, Xuewen Chen, Matthew Nokleby, Jieping Ye
Abstract In this paper, we develop a reinforcement learning (RL) based system to learn an effective carpooling policy that maximizes transportation efficiency, so that fewer cars are required to fulfill the given amount of trip demand. For this purpose, we first develop a deep neural network model, called ST-NN (Spatio-Temporal Neural Network), to predict taxi trip time from raw GPS trip data. Secondly, we develop a carpooling simulation environment for RL training, built on the output of ST-NN and the NYC taxi trip dataset. In order to maximize transportation efficiency and minimize traffic congestion, we choose the effective distance covered by the driver on a carpool trip as the reward: the more effective distance a driver achieves over a trip (i.e. the more trip demand is satisfied), the higher the efficiency and the lower the traffic congestion. We compared the performance of the RL-learned policy to a fixed policy (which always accepts carpool requests) as a baseline and obtained promising results that are interpretable and demonstrate the advantage of our RL approach. We also compare the performance of ST-NN to that of state-of-the-art travel time estimation methods and observe that ST-NN significantly improves prediction performance and is more robust to outliers.
Tasks
Published 2018-11-11
URL http://arxiv.org/abs/1811.04345v1
PDF http://arxiv.org/pdf/1811.04345v1.pdf
PWC https://paperswithcode.com/paper/optimizing-taxi-carpool-policies-via
Repo
Framework
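The reward definition here — effective distance, i.e. the passenger trip demand actually served on a pooled trip — is worth making concrete, since it is what separates pooling from plain dispatch: a car that drives 6 km while serving two overlapping 5 km and 4 km requests serves 9 passenger-km. A minimal sketch (the paper's exact reward shaping may differ):

```python
def effective_distance(passenger_trip_kms):
    """Total trip demand (km) satisfied on one carpool trip; this is the
    reward signal described in the abstract."""
    return sum(passenger_trip_kms)

def pooling_efficiency(passenger_trip_kms, driven_km):
    """Served passenger-km per km actually driven; > 1 means pooling helped."""
    if driven_km <= 0:
        return 0.0
    return effective_distance(passenger_trip_kms) / driven_km
```

An RL agent rewarded with `effective_distance` is pushed toward accepting carpool requests whose routes overlap, which is exactly the behavior the fixed always-accept baseline cannot modulate.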

Learning to Activate Relay Nodes: Deep Reinforcement Learning Approach

Title Learning to Activate Relay Nodes: Deep Reinforcement Learning Approach
Authors Minhae Kwon, Juhyeon Lee, Hyunggon Park
Abstract In this paper, we propose a distributed solution for designing a multi-hop ad hoc network in which mobile relay nodes strategically determine their wireless transmission ranges based on a deep reinforcement learning approach. We consider scenarios where only a limited networking infrastructure is available but a large number of wireless mobile relay nodes are deployed to build a multi-hop ad hoc network that delivers source data to the destination. Each mobile relay node is a decision-making agent that strategically determines its transmission range so as to maximize network throughput while minimizing the corresponding transmission power consumption. Each relay node collects information from its partial observations and learns its environment through a sequence of experiences. Hence, the proposed solution requires only a minimal amount of information about the system. We show that the learned policy reduces each relay node's action to activating or deactivating its transmission: only the necessary relay nodes are activated, at maximum transmit power, while nonessential nodes are deactivated to minimize power consumption. Through extensive experiments, we confirm that the proposed solution builds networks with higher performance than current state-of-the-art solutions in terms of system goodput and connectivity ratio.
Tasks Decision Making
Published 2018-11-24
URL http://arxiv.org/abs/1811.09759v1
PDF http://arxiv.org/pdf/1811.09759v1.pdf
PWC https://paperswithcode.com/paper/learning-to-activate-relay-nodes-deep
Repo
Framework

Agile Amulet: Real-Time Salient Object Detection with Contextual Attention

Title Agile Amulet: Real-Time Salient Object Detection with Contextual Attention
Authors Pingping Zhang, Luyao Wang, Dong Wang, Huchuan Lu, Chunhua Shen
Abstract This paper proposes an Agile Aggregating Multi-Level feaTure framework (Agile Amulet) for salient object detection. Agile Amulet builds on previous works that predict saliency maps using multi-level convolutional features, but employs several key innovations that improve training and testing speed while also increasing prediction accuracy. More specifically, we first introduce a contextual attention module that rapidly highlights the most salient objects or regions with contextual pyramids. It thus effectively guides the learning of low-layer convolutional features and tells the backbone network where to look. The contextual attention module is a fully convolutional mechanism that simultaneously learns complementary features and predicts saliency scores at each pixel. In addition, we propose a novel method to aggregate multi-level deep convolutional features. As a result, we are able to use the integrated side-output features of pre-trained convolutional networks alone, which significantly reduces the number of model parameters, leading to a model size of 67 MB, about half that of Amulet. Compared to other deep learning based saliency methods, Agile Amulet is much lighter-weight, runs faster (a real-time 30 fps) and achieves higher performance on seven public benchmarks in terms of both quantitative and qualitative evaluation.
Tasks Object Detection, Salient Object Detection
Published 2018-02-20
URL http://arxiv.org/abs/1802.06960v1
PDF http://arxiv.org/pdf/1802.06960v1.pdf
PWC https://paperswithcode.com/paper/agile-amulet-real-time-salient-object
Repo
Framework

Wasserstein Soft Label Propagation on Hypergraphs: Algorithm and Generalization Error Bounds

Title Wasserstein Soft Label Propagation on Hypergraphs: Algorithm and Generalization Error Bounds
Authors Tingran Gao, Shahab Asoodeh, Yi Huang, James Evans
Abstract Inspired by recent interest in developing machine learning and data mining algorithms on hypergraphs, we investigate in this paper the semi-supervised learning algorithm of propagating “soft labels” (e.g. probability distributions, class membership scores) over hypergraphs by means of optimal transportation. Borrowing insights from Wasserstein propagation on graphs [Solomon et al. 2014], we re-formulate the label propagation procedure as a message-passing algorithm, which lends itself naturally to a generalization applicable to hypergraphs through Wasserstein barycenters. Furthermore, in a PAC learning framework, we provide generalization error bounds for propagating one-dimensional distributions on graphs and hypergraphs using the 2-Wasserstein distance, by establishing the \textit{algorithmic stability} of the proposed semi-supervised learning algorithm. These theoretical results also shed new light on the Wasserstein propagation on graphs.
Tasks
Published 2018-09-06
URL http://arxiv.org/abs/1809.01833v2
PDF http://arxiv.org/pdf/1809.01833v2.pdf
PWC https://paperswithcode.com/paper/wasserstein-soft-label-propagation-on
Repo
Framework
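The one-dimensional setting the bounds apply to is computationally friendly: in 1D the 2-Wasserstein distance has a closed form via quantile functions, and for two equal-size empirical samples it reduces to matching order statistics. This sketches that building block only, not the hypergraph barycenter propagation itself:

```python
def w2_empirical(xs, ys):
    """2-Wasserstein distance between two equal-size 1D empirical
    distributions: root-mean-square gap between sorted samples."""
    assert len(xs) == len(ys), "closed form shown only for equal sizes"
    xs, ys = sorted(xs), sorted(ys)
    return (sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5
```

Shifting one sample by a constant shifts the distance by exactly that constant, which is the geometry (mass transported along the line) that makes Wasserstein propagation behave differently from pointwise label averaging.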

Gradient-based Filter Design for the Dual-tree Wavelet Transform

Title Gradient-based Filter Design for the Dual-tree Wavelet Transform
Authors Daniel Recoskie, Richard Mann
Abstract The wavelet transform has seen success when incorporated into neural network architectures, such as in wavelet scattering networks. More recently, it has been shown that the dual-tree complex wavelet transform can provide better representations than the standard transform. With this in mind, we extend our previous method for learning filters for the 1D and 2D wavelet transforms into the dual-tree domain. We show that with few modifications to our original model, we can learn directional filters that leverage the properties of the dual-tree wavelet transform.
Tasks
Published 2018-06-04
URL http://arxiv.org/abs/1806.01793v1
PDF http://arxiv.org/pdf/1806.01793v1.pdf
PWC https://paperswithcode.com/paper/gradient-based-filter-design-for-the-dual
Repo
Framework

Transportation Modes Classification Using Feature Engineering

Title Transportation Modes Classification Using Feature Engineering
Authors Mohammad Etemad
Abstract Predicting transportation modes from GPS (Global Positioning System) records is a hot topic in the trajectory mining domain. Each GPS record is called a trajectory point, and a trajectory is a sequence of these points. Trajectory mining has applications including, but not limited to, transportation mode detection, tourism, traffic congestion analysis, smart city management, animal behaviour analysis, environmental preservation, and traffic dynamics. Transportation mode prediction, as one of the tasks in human mobility and vehicle mobility applications, plays an important role in resource allocation, traffic management systems, tourism planning and accident detection. In this work, the framework proposed in Etemad et al. is extended to consider other aspects of the transportation mode prediction task. Wrapper search and information retrieval methods were investigated to find the best subset of trajectory features. Having found the best classifier and the best feature subset, the framework is compared against two related papers that applied deep learning methods, and the results show that our framework achieved better performance. Moreover, ground truth noise removal improved the accuracy of the transportation mode prediction task; however, the assumption of having access to test set labels in the pre-processing task is invalid. Furthermore, cross validation approaches were investigated, and the performance results show that the random cross validation method provides optimistic results.
Tasks Feature Engineering, Information Retrieval
Published 2018-07-28
URL http://arxiv.org/abs/1807.10876v1
PDF http://arxiv.org/pdf/1807.10876v1.pdf
PWC https://paperswithcode.com/paper/transportation-modes-classification-using
Repo
Framework
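The methodological caveat in this abstract — random cross-validation looks optimistic on trajectory data — arises because points from the same trajectory can land in both the training and test folds, leaking near-duplicate samples across the split. A group-aware split that keeps every trajectory intact avoids this. Minimal sketch (points are assumed to be `(trajectory_id, feature)` pairs):

```python
import random

def random_split(points, test_frac=0.3, seed=0):
    """Point-level split: trajectories may leak across the two sets."""
    pts = points[:]
    random.Random(seed).shuffle(pts)
    cut = int(len(pts) * (1 - test_frac))
    return pts[:cut], pts[cut:]

def group_split(points, test_frac=0.3, seed=0):
    """Trajectory-level split: no trajectory id spans both sets."""
    ids = sorted({traj_id for traj_id, _ in points})
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * (1 - test_frac))
    train_ids = set(ids[:cut])
    train = [p for p in points if p[0] in train_ids]
    test = [p for p in points if p[0] not in train_ids]
    return train, test
```

Scores measured under `group_split` are typically lower but reflect generalization to genuinely unseen trajectories, which is the comparison the abstract calls for.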

JOBS: Joint-Sparse Optimization from Bootstrap Samples

Title JOBS: Joint-Sparse Optimization from Bootstrap Samples
Authors Luoluo Liu, Sang Peter Chin, Trac D. Tran
Abstract Classical signal recovery based on $\ell_1$ minimization solves the least squares problem with all available measurements via sparsity-promoting regularization. In practice, it is often the case that not all measurements are available or required for recovery. Measurements might be corrupted or missing, or they might arrive sequentially in streaming fashion. In this paper, we propose a global sparse recovery strategy based on subsets of measurements, named JOBS, in which multiple measurement vectors are generated from the original pool of measurements via bootstrapping, and a joint-sparse constraint is enforced to ensure support consistency among the multiple predictors. The final estimate is obtained by averaging over the $K$ predictors. The performance limits associated with different choices of the number of bootstrap samples $L$ and the number of estimates $K$ are analyzed theoretically. Simulation results validate some of the theoretical analysis and show that the proposed method yields state-of-the-art recovery performance, outperforming $\ell_1$ minimization and a few other existing bootstrap-based techniques in the challenging case of low numbers of measurements. It is also preferable to other bagging-based methods in the streaming setting, since it performs better with small $K$ and $L$ on large data sets.
Tasks
Published 2018-10-08
URL http://arxiv.org/abs/1810.03743v2
PDF http://arxiv.org/pdf/1810.03743v2.pdf
PWC https://paperswithcode.com/paper/jobs-joint-sparse-optimization-from-bootstrap
Repo
Framework
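The JOBS pipeline as described has three moving parts: draw K bootstrapped subsets of L measurements each, solve the K subproblems, and average the K estimates. The skeleton below shows only the resampling and averaging scaffolding; `solve` is a stand-in argument, since the paper's actual step enforces a joint-sparsity constraint across all K subproblems rather than solving them independently.

```python
import random

def bootstrap_rows(n_measurements, L, K, seed=0):
    """K index lists, each of L measurement rows drawn with replacement."""
    rng = random.Random(seed)
    return [[rng.randrange(n_measurements) for _ in range(L)]
            for _ in range(K)]

def jobs_estimate(solve, y, A_rows, L, K, seed=0):
    """Run `solve` on each bootstrapped (A, y) subset and average the
    K per-subset estimates into the final predictor."""
    subsets = bootstrap_rows(len(y), L, K, seed)
    estimates = [solve([A_rows[i] for i in idx], [y[i] for i in idx])
                 for idx in subsets]
    dim = len(estimates[0])
    return [sum(e[j] for e in estimates) / K for j in range(dim)]
```

In a faithful implementation, `solve` would be replaced by a joint row-sparse solver (e.g. mixed-norm minimization over all K subsets at once); the scaffolding around it stays the same.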

Controlling Over-generalization and its Effect on Adversarial Examples Generation and Detection

Title Controlling Over-generalization and its Effect on Adversarial Examples Generation and Detection
Authors Mahdieh Abbasi, Arezoo Rajabi, Azadeh Sadat Mozafari, Rakesh B. Bobba, Christian Gagne
Abstract Convolutional Neural Networks (CNNs) significantly improve the state-of-the-art for many applications, especially in computer vision. However, CNNs still suffer from a tendency to confidently classify out-distribution samples from unknown classes into pre-defined known classes. Further, they are also vulnerable to adversarial examples. We relate these two issues through the tendency of CNNs to over-generalize for areas of the input space not covered well by the training set. We show that a CNN augmented with an extra output class can act as a simple yet effective end-to-end model for controlling over-generalization. As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples. To help select a representative natural out-distribution set among available ones, we propose a simple measurement to assess an out-distribution set’s fitness. We also demonstrate that training such an augmented CNN with representative out-distribution natural datasets and some interpolated samples allows it to better handle a wide range of unseen out-distribution samples and black-box adversarial examples without training it on any adversaries. Finally, we show that generating white-box adversarial attacks against our proposed augmented CNN becomes harder, as the attack algorithms have to get around the rejection regions when generating actual adversaries.
Tasks
Published 2018-08-21
URL http://arxiv.org/abs/1808.08282v2
PDF http://arxiv.org/pdf/1808.08282v2.pdf
PWC https://paperswithcode.com/paper/controlling-over-generalization-and-its-1
Repo
Framework
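The augmented-classifier idea above in miniature: a (K+1)-way output whose extra class absorbs out-distribution inputs, turning prediction into "classify or reject". By convention in this sketch, the last index is the reject class; nothing here is specific to the paper's training procedure.

```python
def classify_or_reject(logits, num_known):
    """logits has num_known + 1 entries, the last being the extra
    (dustbin) class; return a known-class index, or None to reject."""
    assert len(logits) == num_known + 1
    pred = max(range(len(logits)), key=lambda i: logits[i])
    return None if pred == num_known else pred
```

The security observation in the abstract follows from this interface: a white-box attacker must not only cross a decision boundary between known classes but also steer clear of the regions where the extra class dominates.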

Beyond Patient Monitoring: Conversational Agents Role in Telemedicine & Healthcare Support For Home-Living Elderly Individuals

Title Beyond Patient Monitoring: Conversational Agents Role in Telemedicine & Healthcare Support For Home-Living Elderly Individuals
Authors Ahmed Fadhil
Abstract There is a need for systems that dynamically interact with ageing populations to gather information, monitor health conditions and provide support, especially after hospital discharge or in at-home settings. Several smart devices have been delivered by digital health, bundled with telemedicine systems, smartphones and other digital services. While such solutions offer personalised data and suggestions, the real disruptive step comes from the interaction of a new digital ecosystem, represented by chatbots. Chatbots will play a leading role by embodying the function of a virtual assistant and bridging the gap between patients and clinicians. Powered by AI and machine learning algorithms, chatbots are forecast to save healthcare costs when used in place of a human, or to assist humans as a preliminary step in assessing a condition and providing self-care recommendations. This paper describes integrating chatbots into telemedicine systems intended for elderly patients after their hospital discharge, and discusses possible ways to utilise chatbots to assist healthcare providers and support patients with their conditions.
Tasks
Published 2018-03-03
URL http://arxiv.org/abs/1803.06000v1
PDF http://arxiv.org/pdf/1803.06000v1.pdf
PWC https://paperswithcode.com/paper/beyond-patient-monitoring-conversational
Repo
Framework

Helping Crisis Responders Find the Informative Needle in the Tweet Haystack

Title Helping Crisis Responders Find the Informative Needle in the Tweet Haystack
Authors Leon Derczynski, Kenny Meesters, Kalina Bontcheva, Diana Maynard
Abstract Crisis responders are increasingly using social media, data and other digital sources of information to build a situational understanding of a crisis situation in order to design an effective response. However with the increased availability of such data, the challenge of identifying relevant information from it also increases. This paper presents a successful automatic approach to handling this problem. Messages are filtered for informativeness based on a definition of the concept drawn from prior research and crisis response experts. Informative messages are tagged for actionable data – for example, people in need, threats to rescue efforts, changes in environment, and so on. In all, eight categories of actionability are identified. The two components – informativeness and actionability classification – are packaged together as an openly-available tool called Emina (Emergent Informativeness and Actionability).
Tasks
Published 2018-01-29
URL http://arxiv.org/abs/1801.09633v1
PDF http://arxiv.org/pdf/1801.09633v1.pdf
PWC https://paperswithcode.com/paper/helping-crisis-responders-find-the
Repo
Framework