January 26, 2020

3292 words 16 mins read

Paper Group ANR 1536

Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates. CED: Color Event Camera Dataset. Parallel Medical Imaging: A New Data-Knowledge-Driven Evolutionary Framework for Medical Image Analysis. Decentralised Sparse Multi-Task Regression. Shallow Triple Stream Three-dimensional CNN (STSTNet) for M …

Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates

Title Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates
Authors Anna Rodionova, Kirill Antonov, Arina Buzdalova, Carola Doerr
Abstract We analyze the performance of the 2-rate $(1+\lambda)$ Evolutionary Algorithm (EA) with self-adjusting mutation rate control, its 3-rate counterpart, and a $(1+\lambda)$~EA variant using multiplicative update rules on the OneMax problem. We compare their efficiency for offspring population sizes ranging up to $\lambda=3,200$ and problem sizes up to $n=100,000$. Our empirical results show that the ranking of the algorithms is very consistent across all tested dimensions, but strongly depends on the population size. While for small values of $\lambda$ the 2-rate EA performs best, the multiplicative updates become superior starting at some threshold value of $\lambda$ between 50 and 100. Interestingly, for population sizes around 50, the $(1+\lambda)$~EA with static mutation rates performs on par with the best of the self-adjusting algorithms. We also consider how the lower bound $p_{\min}$ for the mutation rate influences the efficiency of the algorithms. We observe that for the 2-rate EA and the EA with multiplicative update rules the more generous bound $p_{\min}=1/n^2$ gives better results than $p_{\min}=1/n$ when $\lambda$ is small. For both algorithms the situation reverses for large~$\lambda$.
Tasks
Published 2019-04-17
URL http://arxiv.org/abs/1904.08032v2
PDF http://arxiv.org/pdf/1904.08032v2.pdf
PWC https://paperswithcode.com/paper/offspring-population-size-matters-when
Repo
Framework
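
To make the comparison concrete, here is a minimal Python sketch of the 2-rate $(1+\lambda)$ EA on OneMax as described in the abstract. The initial rate, its bounds, and the capping scheme are illustrative assumptions, not the paper's exact experimental settings.

```python
import random

def onemax(x):
    return sum(x)

def two_rate_ea(n=1000, lam=50, budget=100_000, p_min=None):
    """Minimal sketch of the 2-rate (1+lambda) EA on OneMax: half of
    the offspring mutate with rate r/(2n), half with 2r/n, and r is
    then adjusted toward the rate that produced the best child."""
    p_min = p_min or 1.0 / n                 # lower bound on the mutation rate
    x = [random.randint(0, 1) for _ in range(n)]
    fx, r, evals = onemax(x), 2.0, 0
    while fx < n and evals < budget:
        best, best_f, best_low = None, -1, True
        for i in range(lam):
            low = i < lam // 2               # first half uses the lower rate
            p = max(p_min, (r / 2 if low else 2 * r) / n)
            y = [b ^ (random.random() < p) for b in x]
            fy, evals = onemax(y), evals + 1
            if fy > best_f:
                best, best_f, best_low = y, fy, low
        if best_f >= fx:                     # elitist (1+lambda) selection
            x, fx = best, best_f
        # Follow the winning half with probability 1/2, else move randomly.
        if random.random() < 0.5:
            r = r / 2 if best_low else 2 * r
        else:
            r = random.choice([r / 2, 2 * r])
        r = min(max(r, 2 * p_min * n), n / 4)  # keep r within assumed caps
    return evals
```

Varying `lam` in such a sketch is enough to reproduce the qualitative effect the paper studies: which update rule wins depends strongly on the offspring population size.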

CED: Color Event Camera Dataset

Title CED: Color Event Camera Dataset
Authors Cedric Scheerlinck, Henri Rebecq, Timo Stoffregen, Nick Barnes, Robert Mahony, Davide Scaramuzza
Abstract Event cameras are novel, bio-inspired visual sensors, whose pixels output asynchronous and independent timestamped spikes at local intensity changes, called ‘events’. Event cameras offer advantages over conventional frame-based cameras in terms of latency, high dynamic range (HDR) and temporal resolution. Until recently, event cameras have been limited to outputting events in the intensity channel; however, recent advances have resulted in the development of color event cameras, such as the Color-DAVIS346. In this work, we present and release the first Color Event Camera Dataset (CED), containing 50 minutes of footage with both color frames and events. CED features a wide variety of indoor and outdoor scenes, which we hope will help drive forward event-based vision research. We also present an extension of the event camera simulator ESIM that enables simulation of color events. Finally, we present an evaluation of three state-of-the-art image reconstruction methods that can be used to convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to visualise the event stream, and for use in downstream vision applications.
Tasks Event-based vision, Image Reconstruction
Published 2019-04-24
URL http://arxiv.org/abs/1904.10772v1
PDF http://arxiv.org/pdf/1904.10772v1.pdf
PWC https://paperswithcode.com/paper/ced-color-event-camera-dataset
Repo
Framework
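
As a small illustration of what an event stream contains (this is not code from the paper or from the ESIM simulator), the sketch below accumulates a batch of (x, y, timestamp, polarity) events into a 2D frame, one common way to visualise events:

```python
import numpy as np

def accumulate_events(events, shape):
    """Render events into a frame by summing polarities per pixel.
    events: iterable of (x, y, t, polarity) tuples, polarity in {0, 1}.
    shape : (height, width) of the sensor, e.g. (260, 346) for a DAVIS346."""
    frame = np.zeros(shape)
    for x, y, _, p in events:
        frame[y, x] += 1 if p else -1        # ON events add, OFF events subtract
    return frame
```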

Parallel Medical Imaging: A New Data-Knowledge-Driven Evolutionary Framework for Medical Image Analysis

Title Parallel Medical Imaging: A New Data-Knowledge-Driven Evolutionary Framework for Medical Image Analysis
Authors Chao Gou, Tianyu Shen, Wenbo Zheng, Oliver Kwan, Fei-Yue Wang
Abstract There has been much progress in data-driven artificial intelligence technology for medical image analysis in recent decades. However, medical image analysis still remains challenging due to the distinctive complexity of acquiring and annotating image data, extracting medical domain knowledge, and explaining diagnostic decisions. In this paper, we propose a data-knowledge-driven evolutionary framework termed Parallel Medical Imaging (PMI) for medical image analysis, based on the methodology of interactive ACP-based parallel intelligence. In the PMI framework, computational experiments with predictive learning in a data-driven way are conducted to extract medical knowledge for diagnostic decision support. Artificial imaging systems are introduced to select and prescriptively generate medical image data in a knowledge-driven way to utilize medical domain knowledge. Through parallel evolutionary optimization, our proposed PMI framework can boost generalization ability and alleviate the limitations of medical interpretation in diagnostic decisions.
Tasks
Published 2019-03-12
URL https://arxiv.org/abs/1903.04855v2
PDF https://arxiv.org/pdf/1903.04855v2.pdf
PWC https://paperswithcode.com/paper/parallel-medical-imaging-a-new-data-knowledge
Repo
Framework

Decentralised Sparse Multi-Task Regression

Title Decentralised Sparse Multi-Task Regression
Authors Dominic Richards, Sahand N. Negahban, Patrick Rebeschini
Abstract We consider a sparse multi-task regression framework for fitting a collection of related sparse models. Representing models as nodes in a graph with edges between related models, a framework that fuses lasso regressions with the total variation penalty is investigated. Under a form of restricted eigenvalue assumption, bounds on prediction and squared error are given that depend upon the sparsity of each model and the differences between related models. This assumption relates to the smallest eigenvalue restricted to the intersection of two cone sets of the covariance matrix constructed from each of the agents’ covariances. We show that this assumption can be satisfied if the constructed covariance matrix satisfies a restricted isometry property. In the case of a grid topology, high-probability bounds are given that match, up to log factors, the no-communication setting of fitting a lasso on each model, divided by the number of agents. A decentralised dual method that exploits a convex-concave formulation of the penalised problem is proposed to fit the models, and its effectiveness is demonstrated in simulations against the group lasso and variants.
Tasks
Published 2019-12-03
URL https://arxiv.org/abs/1912.01417v1
PDF https://arxiv.org/pdf/1912.01417v1.pdf
PWC https://paperswithcode.com/paper/decentralised-sparse-multi-task-regression
Repo
Framework
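
The penalised objective the abstract describes can be written down compactly; the numpy sketch below is illustrative only (the function name, loss scaling, and penalty weights are assumptions, not the paper's notation):

```python
import numpy as np

def multitask_objective(W, X, y, edges, lam1, lam2):
    """Per-node lasso fits fused by a total-variation penalty.
    W     : (V, d) array, one weight vector per node/agent
    X, y  : lists of per-node design matrices and response vectors
    edges : list of (u, v) index pairs joining related models"""
    fit = sum(np.sum((y[v] - X[v] @ W[v]) ** 2) / (2 * len(y[v]))
              for v in range(len(X)))
    sparsity = lam1 * np.abs(W).sum()                      # lasso term
    fusion = lam2 * sum(np.abs(W[u] - W[v]).sum()          # TV term on edges
                        for u, v in edges)
    return fit + sparsity + fusion
```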

Shallow Triple Stream Three-dimensional CNN (STSTNet) for Micro-expression Recognition

Title Shallow Triple Stream Three-dimensional CNN (STSTNet) for Micro-expression Recognition
Authors Sze-Teng Liong, Y. S. Gan, John See, Huai-Qian Khor, Yen-Chang Huang
Abstract In recent years, the state of the art in facial micro-expression recognition has been significantly advanced by deep neural networks. The robustness of deep learning has yielded promising performance beyond that of traditional handcrafted approaches. Most works in the literature emphasize increasing the depth of networks and employing highly complex objective functions to learn more features. In this paper, we design a Shallow Triple Stream Three-dimensional CNN (STSTNet) that is computationally light whilst capable of extracting discriminative high-level features and details of micro-expressions. The network learns from three optical flow features (i.e., optical strain, horizontal and vertical optical flow fields) computed from the onset and apex frames of each video. Our experimental results demonstrate the effectiveness of the proposed STSTNet, which obtained an unweighted average recall rate of 0.7605 and an unweighted F1-score of 0.7353 on the composite database consisting of 442 samples from the SMIC, CASME II and SAMM databases.
Tasks Optical Flow Estimation
Published 2019-02-10
URL https://arxiv.org/abs/1902.03634v2
PDF https://arxiv.org/pdf/1902.03634v2.pdf
PWC https://paperswithcode.com/paper/a-shallow-triple-stream-three-dimensional-cnn
Repo
Framework
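
A hypothetical PyTorch sketch of the triple-stream idea follows: each optical-flow input (strain, horizontal, vertical) feeds its own shallow convolutional stream before fusion and classification. Layer counts and sizes are assumptions, and 2D convolutions are used here for brevity rather than the paper's exact three-dimensional configuration.

```python
import torch
import torch.nn as nn

class TripleStreamNet(nn.Module):
    """Illustrative shallow triple-stream network, not the exact STSTNet."""
    def __init__(self, n_classes=3, size=28):
        super().__init__()
        def stream(ch_out):                  # one light stream per flow map
            return nn.Sequential(
                nn.Conv2d(1, ch_out, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.streams = nn.ModuleList([stream(8) for _ in range(3)])
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 8 * (size // 2) ** 2, n_classes))

    def forward(self, flows):                # flows: (B, 3, H, W)
        feats = [s(flows[:, i:i + 1]) for i, s in enumerate(self.streams)]
        return self.head(torch.cat(feats, dim=1))
```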

How can AI Automate End-to-End Data Science?

Title How can AI Automate End-to-End Data Science?
Authors Charu Aggarwal, Djallel Bouneffouf, Horst Samulowitz, Beat Buesser, Thanh Hoang, Udayan Khurana, Sijia Liu, Tejaswini Pedapati, Parikshit Ram, Ambrish Rawat, Martin Wistuba, Alexander Gray
Abstract Data science is labor-intensive, and human experts are scarce but heavily involved in every aspect of it. This makes data science time consuming and restricted to experts, with the resulting quality heavily dependent on their experience and skills. To make data science more accessible and scalable, we need its democratization. Automated Data Science (AutoDS) is aimed at that goal and is emerging as an important research and business topic. We introduce and define the AutoDS challenge, followed by a proposal of a general AutoDS framework that covers existing approaches and also provides guidance for the development of new methods. We categorize and review the existing literature from multiple aspects of the problem setup and employed techniques. We then provide several views on how AI could succeed in automating end-to-end AutoDS. We hope this survey can serve as an insightful guideline for the AutoDS field and provide inspiration for future research.
Tasks
Published 2019-10-22
URL https://arxiv.org/abs/1910.14436v1
PDF https://arxiv.org/pdf/1910.14436v1.pdf
PWC https://paperswithcode.com/paper/how-can-ai-automate-end-to-end-data-science
Repo
Framework

Improving Abstractive Text Summarization with History Aggregation

Title Improving Abstractive Text Summarization with History Aggregation
Authors Pengcheng Liao, Chuang Zhang, Xiaojun Chen, Xiaofei Zhou
Abstract Recent neural sequence-to-sequence models have provided feasible solutions for abstractive summarization. However, such models still struggle to handle long-text dependencies in the summarization task. A high-quality summarization system usually depends on a strong encoder that can distill important information from long input texts so that the decoder can generate salient summaries from the encoder’s memory. In this paper, we propose an aggregation mechanism based on the Transformer model to address the challenge of long text representation. Our model can review history information to give the encoder greater memory capacity. Empirically, we apply our aggregation mechanism to the Transformer model and experiment on the CNN/DailyMail dataset, achieving higher-quality summaries compared to several strong baseline models on the ROUGE metrics.
Tasks Abstractive Text Summarization, Text Summarization
Published 2019-12-24
URL https://arxiv.org/abs/1912.11046v1
PDF https://arxiv.org/pdf/1912.11046v1.pdf
PWC https://paperswithcode.com/paper/improving-abstractive-text-summarization-with
Repo
Framework

Reinforcement Learning Based Emotional Editing Constraint Conversation Generation

Title Reinforcement Learning Based Emotional Editing Constraint Conversation Generation
Authors Jia Li, Xiao Sun, Xing Wei, Changliang Li, Jianhua Tao
Abstract In recent years, the generation of conversation content based on deep neural networks has attracted many researchers. However, traditional neural language models tend to generate generic replies that lack logical and emotional factors. This paper proposes a conversation content generation model that combines reinforcement learning with emotional editing constraints to generate more meaningful and customizable emotional replies. The model divides a reply into three clauses based on pre-generated keywords and uses an emotional editor to further optimize the final reply. The model combines multi-task learning with multiple indicator rewards to comprehensively optimize the quality of replies. Experiments show that our model can not only improve the fluency of the replies, but also significantly enhance their logical and emotional relevance.
Tasks Multi-Task Learning
Published 2019-04-17
URL http://arxiv.org/abs/1904.08061v1
PDF http://arxiv.org/pdf/1904.08061v1.pdf
PWC https://paperswithcode.com/paper/reinforcement-learning-based-emotional
Repo
Framework

Learning to Authenticate with Deep Multibiometric Hashing and Neural Network Decoding

Title Learning to Authenticate with Deep Multibiometric Hashing and Neural Network Decoding
Authors Veeru Talreja, Sobhan Soleymani, Matthew C. Valenti, Nasser M. Nasrabadi
Abstract In this paper, we propose a novel multimodal deep hashing neural decoder (MDHND) architecture, which integrates a deep hashing framework with a neural network decoder (NND) to create an effective multibiometric authentication system. The MDHND consists of two separate modules: a multimodal deep hashing (MDH) module, which is used for feature-level fusion and binarization of multiple biometrics, and a neural network decoder (NND) module, which is used to refine the intermediate binary codes generated by the MDH and to compensate for the differences between enrollment and probe biometrics (variations in pose, illumination, etc.). Use of the NND helps to improve the performance of the overall multimodal authentication system. The MDHND framework is trained in three steps using joint optimization of the two modules. In Step 1, the MDH parameters are trained to generate a shared multimodal latent code; in Step 2, the latent codes from Step 1 are passed through a conventional error-correcting code (ECC) decoder to generate the ground truth for training the neural network decoder (NND); in Step 3, the NND is trained using the ground truth from Step 2, and the MDH and NND are jointly optimized. Experimental results on a standard multimodal dataset demonstrate the superiority of our method relative to other current multimodal authentication systems.
Tasks
Published 2019-02-11
URL http://arxiv.org/abs/1902.04149v3
PDF http://arxiv.org/pdf/1902.04149v3.pdf
PWC https://paperswithcode.com/paper/learning-to-authenticate-with-deep
Repo
Framework

A Stabilized Feedback Episodic Memory (SF-EM) and Home Service Provision Framework for Robot and IoT Collaboration

Title A Stabilized Feedback Episodic Memory (SF-EM) and Home Service Provision Framework for Robot and IoT Collaboration
Authors Ue-Hwan Kim, Jong-Hwan Kim
Abstract The automated home, referred to as the Smart Home, is expected to offer fully customized services to its residents, reducing the amount of home labor and thus improving human welfare. Service robots and the Internet of Things (IoT) play key roles in the development of the Smart Home. Service provision with these two main components in a Smart Home environment requires: 1) learning and reasoning algorithms and 2) the integration of robot and IoT systems. Conventional computational intelligence-based learning and reasoning algorithms do not successfully manage dynamic changes in Smart Home data, and simple integrations fail to fully draw out the synergies from the collaboration of the two systems. To tackle these limitations, we propose: 1) a stabilized memory network with a feedback mechanism that can learn user behaviors in an incremental manner and 2) a robot-IoT service provision framework for a Smart Home that utilizes the proposed memory architecture as a learning and reasoning module and exploits synergies between the robot and IoT systems. We conduct a set of comprehensive experiments under various conditions to verify the performance of the proposed memory architecture and the service provision framework and analyze the experimental results.
Tasks
Published 2019-07-31
URL https://arxiv.org/abs/1907.13274v1
PDF https://arxiv.org/pdf/1907.13274v1.pdf
PWC https://paperswithcode.com/paper/a-stabilized-feedback-episodic-memory-sf-em
Repo
Framework

Directionally Constrained Fully Convolutional Neural Network For Airborne Lidar Point Cloud Classification

Title Directionally Constrained Fully Convolutional Neural Network For Airborne Lidar Point Cloud Classification
Authors Congcong Wen, Lina Yang, Ling Peng, Xiang Li, Tianhe Chi
Abstract Point cloud classification plays an important role in a wide range of airborne light detection and ranging (LiDAR) applications, such as topographic mapping, forest monitoring, power line detection, and road detection. However, due to sensor noise, high redundancy, incompleteness, and the complexity of airborne LiDAR systems, point cloud classification is challenging. In this paper, we propose a directionally constrained fully convolutional neural network (D-FCN) that can take the original 3D coordinates and LiDAR intensity as input; thus, it can be applied directly to unstructured 3D point clouds for semantic labeling. Specifically, we first introduce a novel directionally constrained point convolution (D-Conv) module to extract locally representative features of 3D point sets from projected 2D receptive fields. To make full use of the orientation information of neighborhood points, the proposed D-Conv module performs convolution in an orientation-aware manner by using a directionally constrained nearest-neighborhood search. Then, we design a multiscale fully convolutional neural network with downsampling and upsampling blocks to enable multiscale point feature learning. The proposed D-FCN model can therefore process input point clouds of arbitrary size and directly predict the semantic labels for all input points in an end-to-end manner. Without involving additional geometry features as input, the proposed method demonstrates superior performance on the International Society for Photogrammetry and Remote Sensing (ISPRS) 3D labeling benchmark dataset. The results show that our model achieves a new state-of-the-art level of performance, with an average F1 score of 70.7%, and improves performance by a large margin on categories with a small number of points (such as powerline, car, and facade).
Tasks
Published 2019-08-19
URL https://arxiv.org/abs/1908.06673v1
PDF https://arxiv.org/pdf/1908.06673v1.pdf
PWC https://paperswithcode.com/paper/directionally-constrained-fully-convolutional
Repo
Framework
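
To illustrate the directionally constrained nearest-neighbour search (a hypothetical reading of the D-Conv module, not the authors' code), the numpy sketch below splits a point's projected 2D neighbourhood into angular sectors and keeps the nearest neighbour per sector, which yields an orientation-aware receptive field:

```python
import numpy as np

def directional_neighbors(points, center, k_sectors=8):
    """Return indices of the nearest neighbour in each angular sector
    of the 2D-projected neighbourhood around `center`."""
    rel = points[:, :2] - center[:2]         # project onto the horizontal plane
    dist = np.linalg.norm(rel, axis=1)
    angle = np.arctan2(rel[:, 1], rel[:, 0]) # in (-pi, pi]
    sector = ((angle + np.pi) / (2 * np.pi) * k_sectors).astype(int) % k_sectors
    chosen = []
    for s in range(k_sectors):
        idx = np.where((sector == s) & (dist > 0))[0]  # skip the center itself
        if idx.size:
            chosen.append(idx[np.argmin(dist[idx])])
    return np.array(chosen)
```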

Detection and Prediction of Users Attitude Based on Real-Time and Batch Sentiment Analysis of Facebook Comments

Title Detection and Prediction of Users Attitude Based on Real-Time and Batch Sentiment Analysis of Facebook Comments
Authors Hieu Tran, Maxim Shcherbakov
Abstract Most people have accounts on social networks (e.g., Facebook, Vkontakte), where they express their attitudes toward different situations and events. Facebook provides only positive marks, via the like and share buttons. However, it is important to know a user’s position on posts even when the opinion is negative. Positive, negative and neutral attitudes can be extracted from users’ comments. Overall information about positive, negative and neutral opinions helps in understanding how people react to a situation. Moreover, it is important to know how attitudes change over time. The contribution of the paper is a new method, based on sentiment text analysis, for detecting and predicting negative and positive patterns in Facebook comments, which combines (i) real-time sentiment text analysis for pattern discovery and (ii) batch data processing for creating an opinion forecasting algorithm. To perform forecasting we propose a two-step algorithm in which: (i) patterns are clustered using unsupervised clustering techniques and (ii) trend prediction is performed by finding the nearest pattern from a given cluster. Case studies show the efficiency and accuracy (avg. MAE = 0.008) of the proposed method and its practical applicability. We also discovered three types of user attitude patterns and describe them.
Tasks Sentiment Analysis
Published 2019-06-08
URL https://arxiv.org/abs/1906.03392v1
PDF https://arxiv.org/pdf/1906.03392v1.pdf
PWC https://paperswithcode.com/paper/detection-and-prediction-of-users-attitude
Repo
Framework
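
A hedged scikit-learn sketch of the two-step algorithm: fixed-length sentiment windows are clustered with k-means, and a forecast reuses the continuation of the nearest pattern within the matched cluster. The window layout and the choice of k-means are assumptions; the paper specifies only unsupervised clustering followed by nearest-pattern prediction.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_patterns(windows, w, n_clusters=3):
    """Step (i): cluster pattern windows. Each row of `windows` holds
    w observed sentiment values followed by the values that came next."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(windows[:, :w])                   # cluster on the observed part
    return km

def forecast(km, windows, w, history):
    """Step (ii): predict by copying the continuation of the nearest
    stored pattern from the cluster that `history` falls into."""
    cluster = km.predict(history[None, :])[0]
    members = windows[km.labels_ == cluster]
    d = np.linalg.norm(members[:, :w] - history, axis=1)
    return members[np.argmin(d), w:]         # continuation of nearest pattern
```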

Generating summaries tailored to target characteristics

Title Generating summaries tailored to target characteristics
Authors Kushal Chawla, Hrituraj Singh, Arijit Pramanik, Mithlesh Kumar, Balaji Vasan Srinivasan
Abstract Recently, research efforts have gained pace to cater to varied user preferences while generating text summaries. While there have been attempts to incorporate a few handpicked characteristics such as length or entities, a holistic view of these preferences is missing, and crucial insights into why certain characteristics should be incorporated in a specific manner are absent. With this objective, we provide a categorization of the characteristics relevant to the task of text summarization: the first focusing on what content needs to be generated and the second on the stylistic aspects of the output summaries. We use our insights to provide guidelines on appropriate methods to incorporate various classes of characteristics in a sequence-to-sequence summarization framework. Our experiments with incorporating topics, readability and simplicity indicate the viability of the proposed prescriptions.
Tasks Text Summarization
Published 2019-12-18
URL https://arxiv.org/abs/1912.08492v1
PDF https://arxiv.org/pdf/1912.08492v1.pdf
PWC https://paperswithcode.com/paper/generating-summaries-tailored-to-target
Repo
Framework

CoopSubNet: Cooperating Subnetwork for Data-Driven Regularization of Deep Networks under Limited Training Budgets

Title CoopSubNet: Cooperating Subnetwork for Data-Driven Regularization of Deep Networks under Limited Training Budgets
Authors Riddhish Bhalodia, Shireen Elhabian, Ladislav Kavan, Ross Whitaker
Abstract Deep networks are an integral part of the current machine learning paradigm. Their inherent ability to learn complex functional mappings between data and various target variables, while discovering hidden, task-driven features, makes them a powerful technology in a wide variety of applications. Nonetheless, the success of these networks typically relies on the availability of sufficient training data to optimize a large number of free parameters while avoiding overfitting, especially for networks with large capacity. In scenarios with limited training budgets, e.g., supervised tasks with limited labeled samples, several generic and/or task-specific regularization techniques, including data augmentation, have been applied to improve the generalization of deep networks. Typically such regularizations are introduced independently of the data or training scenario, and must therefore be tuned, tested, and modified to meet the needs of a particular network. In this paper, we propose a novel regularization framework that is driven by the population-level statistics of the feature space to be learned. The regularization is in the form of a \textbf{cooperating subnetwork}, which is an auto-encoder architecture attached to the feature space and trained in conjunction with the primary network. We introduce the architecture and training methodology and demonstrate the effectiveness of the proposed cooperative network-based regularization in a variety of tasks and architectures from the literature. Our code is freely available at \url{https://github.com/riddhishb/CoopSubNet}.
Tasks Data Augmentation
Published 2019-06-13
URL https://arxiv.org/abs/1906.05441v1
PDF https://arxiv.org/pdf/1906.05441v1.pdf
PWC https://paperswithcode.com/paper/coopsubnet-cooperating-subnetwork-for-data
Repo
Framework
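
A minimal PyTorch sketch of the cooperating-subnetwork idea: an auto-encoder is attached to the primary network's feature space and trained jointly, so its reconstruction loss regularises the features. Layer sizes and the loss weight are assumptions; the released code at the linked repository is authoritative.

```python
import torch.nn as nn
import torch.nn.functional as F

class CoopRegularizedNet(nn.Module):
    """Primary classifier plus a cooperating auto-encoder subnetwork."""
    def __init__(self, d_in=784, d_feat=128, d_bottleneck=16, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(d_in, d_feat), nn.ReLU())
        self.classifier = nn.Linear(d_feat, n_classes)
        self.coop = nn.Sequential(           # the cooperating subnetwork
            nn.Linear(d_feat, d_bottleneck), nn.ReLU(),
            nn.Linear(d_bottleneck, d_feat))

    def forward(self, x):
        f = self.features(x)
        return self.classifier(f), self.coop(f), f

def joint_loss(logits, recon, feats, labels, alpha=0.1):
    """Task loss plus a weighted feature-reconstruction penalty."""
    return (F.cross_entropy(logits, labels)
            + alpha * F.mse_loss(recon, feats))
```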

Network Classifiers With Output Smoothing

Title Network Classifiers With Output Smoothing
Authors Elsa Rizk, Roula Nassif, Ali H. Sayed
Abstract This work introduces two strategies for training network classifiers with heterogeneous agents. One strategy promotes global smoothing over the graph, and a second strategy promotes local smoothing over neighbourhoods. It is assumed that the feature sizes can vary from one agent to another, with some agents observing too few attributes to make reliable decisions on their own. As a result, cooperation with neighbours is necessary. However, because the feature dimensions differ across agents, their classifier dimensions also differ. This means that cooperation cannot rely on combining the classifier parameters. We instead propose smoothing the outputs of the classifiers, which are the predicted labels. By doing so, the dynamics that describe the evolution of the network classifier become more challenging than usual because the classifier parameters end up appearing as part of the regularization term as well. We illustrate performance by means of computer simulations.
Tasks
Published 2019-10-30
URL https://arxiv.org/abs/1911.04870v1
PDF https://arxiv.org/pdf/1911.04870v1.pdf
PWC https://paperswithcode.com/paper/network-classifiers-with-output-smoothing
Repo
Framework
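
A rough numpy sketch of the two smoothing strategies applied to classifier outputs rather than parameters; the exact formulation in the paper may differ, so treat this purely as an illustration:

```python
import numpy as np

def output_smoothing_penalty(outputs, adjacency, mode="global"):
    """outputs   : (K, C) soft predictions of K agents for one sample
    adjacency   : (K, K) symmetric weights of the agent graph
    'global' penalises disagreement across every edge of the graph;
    'local' penalises each agent against its neighbourhood average."""
    K = outputs.shape[0]
    if mode == "global":
        return sum(adjacency[u, v] * np.sum((outputs[u] - outputs[v]) ** 2)
                   for u in range(K) for v in range(u + 1, K))
    deg = adjacency.sum(axis=1, keepdims=True)
    nbr_avg = (adjacency @ outputs) / np.maximum(deg, 1e-12)
    return np.sum((outputs - nbr_avg) ** 2)
```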