April 2, 2020

3303 words 16 mins read

Paper Group ANR 239

Investigating the influence Brexit had on Financial Markets, in particular the GBP/EUR exchange rate

Title Investigating the influence Brexit had on Financial Markets, in particular the GBP/EUR exchange rate
Authors Michael Filletti
Abstract On 23rd June 2016, 51.9% of British voters voted to leave the European Union, triggering a process and events that have led to the United Kingdom leaving the EU, an event that has become known as ‘Brexit’. In this piece of research, we investigate the effects of this entire process on the currency markets, specifically the GBP/EUR exchange rate. Financial markets are known to be sensitive to news articles and media, and the aim of this research is to evaluate the magnitude of impact of relevant events, as well as whether the impact was positive or negative for the GBP.
Tasks
Published 2020-03-03
URL https://arxiv.org/abs/2003.05895v1
PDF https://arxiv.org/pdf/2003.05895v1.pdf
PWC https://paperswithcode.com/paper/investigating-the-influence-brexit-had-on
Repo
Framework
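
To make the notion of event impact concrete, here is a minimal sketch of how one might quantify an event's effect on an exchange-rate series by comparing mean log-returns after the event with those before it. The data, window size, and event index are invented for illustration and are not from the paper.

```python
import numpy as np

def event_impact(rates, event_idx, window=5):
    """Estimate the impact of an event on an exchange-rate series.

    Compares the mean log-return in a window after the event with the
    mean log-return before it. A negative value means the base currency
    weakened after the event.
    """
    log_returns = np.diff(np.log(rates))
    before = log_returns[max(0, event_idx - window):event_idx]
    after = log_returns[event_idx:event_idx + window]
    return after.mean() - before.mean()

# Toy GBP/EUR series: roughly flat before a hypothetical event, falling after.
rates = np.array([1.30, 1.30, 1.31, 1.30, 1.30, 1.30,  # pre-event
                  1.27, 1.25, 1.24, 1.23, 1.22])       # post-event
impact = event_impact(rates, event_idx=5, window=5)
print(round(impact, 4))  # negative => GBP weakened against EUR
```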

Unsupervised Fuzzy eIX: Evolving Internal-eXternal Fuzzy Clustering

Title Unsupervised Fuzzy eIX: Evolving Internal-eXternal Fuzzy Clustering
Authors Charles Aguiar, Daniel Leite
Abstract Time-varying classifiers, namely, evolving classifiers, play an important role in scenarios in which information is available as a never-ending online data stream. We present a new unsupervised learning method for numerical data called the evolving Internal-eXternal Fuzzy clustering method (Fuzzy eIX). We develop the notion of double-boundary fuzzy granules and elaborate on its implications. Type-1 and type-2 fuzzy inference systems can be obtained from the projection of Fuzzy eIX granules. We apply the principle of balanced information granularity within Fuzzy eIX classifiers to achieve a higher level of model understandability. Internal and external granules are updated from a numerical data stream at the same time that the global granular structure of the classifier autonomously evolves. A synthetic nonstationary problem called Rotation of Twin Gaussians shows the behavior of the classifier. The Fuzzy eIX classifier maintains its accuracy in a scenario in which the accuracy of offline-trained classifiers would drop drastically.
Tasks
Published 2020-03-25
URL https://arxiv.org/abs/2003.12381v1
PDF https://arxiv.org/pdf/2003.12381v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-fuzzy-eix-evolving-internal
Repo
Framework
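
As a rough illustration of evolving clustering with two membership boundaries, the sketch below maintains granules with an inner (full-membership) and an outer (zero-membership) radius, and either updates the best-matching granule or evolves a new one from the stream. This is an invented simplification for intuition only, not the Fuzzy eIX algorithm.

```python
import numpy as np

class EvolvingGranule:
    """Hypothetical granule with an internal (core) and an external boundary,
    loosely inspired by the double-boundary idea; not the authors' method."""
    def __init__(self, center, inner=0.5, outer=1.5):
        self.center = np.asarray(center, dtype=float)
        self.inner, self.outer = inner, outer
        self.count = 1

    def membership(self, x):
        d = np.linalg.norm(np.asarray(x) - self.center)
        if d <= self.inner:   # inside the internal boundary: full membership
            return 1.0
        if d >= self.outer:   # outside the external boundary: no membership
            return 0.0
        # linear decay between the two boundaries (trapezoidal membership)
        return (self.outer - d) / (self.outer - self.inner)

    def update(self, x):
        # incremental (online) mean update of the granule center
        self.count += 1
        self.center += (np.asarray(x) - self.center) / self.count

def cluster_stream(stream, threshold=0.1):
    """Assign each point to the best-matching granule, or evolve a new one."""
    granules = []
    for x in stream:
        if granules:
            best = max(granules, key=lambda g: g.membership(x))
            if best.membership(x) > threshold:
                best.update(x)
                continue
        granules.append(EvolvingGranule(x))
    return granules

stream = [[0, 0], [0.2, 0.1], [5, 5], [5.1, 4.9], [0.1, 0.2]]
granules = cluster_stream(stream)
print(len(granules))  # two granules emerge from the stream
```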

Graph4Code: A Machine Interpretable Knowledge Graph for Code

Title Graph4Code: A Machine Interpretable Knowledge Graph for Code
Authors Kavitha Srinivas, Ibrahim Abdelaziz, Julian Dolby, James P. McCusker
Abstract Knowledge graphs have proven to be extremely useful in powering diverse applications in semantic search, natural language understanding, and even image classification. Graph4Code attempts to build well structured knowledge graphs about program code to similarly revolutionize diverse applications such as code search, code understanding, refactoring, bug detection, and code automation. We build such a graph by applying a set of generic code analysis techniques to Python code on the web. Since use of popular Python modules is ubiquitous in code, calls to functions in Python modules serve as key nodes of the knowledge graph. The edges in the graph are based on 1) function usage in the wild (e.g., which other function tends to call this one, or which function tends to precede this one, as gleaned from program analysis), 2) documentation about the function (e.g., code documentation, usage documentation, or forum discussions such as StackOverflow), and 3) program specific features such as class hierarchies. We use the Whyis knowledge graph management framework to make the graph easily extensible. We apply these techniques to 1.3M Python files drawn from GitHub, and associated documentation on the web for over 400 popular libraries, as well as StackOverflow posts about the same set of libraries. This knowledge graph will be made available soon to the larger community for use.
Tasks Code Search, Image Classification, Knowledge Graphs
Published 2020-02-21
URL https://arxiv.org/abs/2002.09440v1
PDF https://arxiv.org/pdf/2002.09440v1.pdf
PWC https://paperswithcode.com/paper/graph4code-a-machine-interpretable-knowledge
Repo
Framework
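
In miniature, the usage-based edges ("which function tends to precede this one") can be sketched as a weighted directed graph built from observed call sequences. The traces and function names below are hypothetical; Graph4Code itself derives such edges from program analysis at scale.

```python
from collections import defaultdict

def build_call_graph(traces):
    """Build a tiny code knowledge graph from observed call sequences.

    Nodes are fully qualified function names; a directed 'precedes' edge
    counts how often one call follows another in the wild, mirroring in
    miniature the usage-based edges described for Graph4Code.
    """
    graph = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            graph[a][b] += 1
    return graph

# Hypothetical call sequences mined from scripts using popular libraries.
traces = [
    ["pandas.read_csv", "pandas.DataFrame.dropna", "sklearn.fit"],
    ["pandas.read_csv", "pandas.DataFrame.dropna", "pandas.DataFrame.head"],
    ["pandas.read_csv", "sklearn.fit"],
]
graph = build_call_graph(traces)
# Most common successor of read_csv in this toy corpus:
successor = max(graph["pandas.read_csv"], key=graph["pandas.read_csv"].get)
print(successor)  # pandas.DataFrame.dropna
```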

Emotion Recognition From Gait Analyses: Current Research and Future Directions

Title Emotion Recognition From Gait Analyses: Current Research and Future Directions
Authors Shihao Xu, Jing Fang, Xiping Hu, Edith Ngai, Yi Guo, Victor C. M. Leung, Jun Cheng, Bin Hu
Abstract Human gait is a daily motion that not only reflects mobility but can also be used to identify the walker, by either human observers or computers. Recent studies reveal that gait even conveys information about the walker’s emotion. Individuals in different emotional states may show different gait patterns. The mapping between various emotions and gait patterns provides a new source for automated emotion recognition. Compared to traditional emotion-detection biometrics, such as facial expression, speech, and physiological parameters, gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject. These advantages make gait a promising source for emotion detection. This article reviews current research on gait-based emotion detection, particularly on how gait parameters can be affected by different emotional states and how those states can be recognized through distinct gait patterns. We focus on the detailed methods and techniques applied in the whole process of emotion recognition: data collection, preprocessing, and classification. Finally, we discuss possible future developments of efficient and effective gait-based emotion recognition using state-of-the-art techniques in intelligent computation and big data.
Tasks Emotion Recognition
Published 2020-03-13
URL https://arxiv.org/abs/2003.11461v1
PDF https://arxiv.org/pdf/2003.11461v1.pdf
PWC https://paperswithcode.com/paper/emotion-recognition-from-gait-analyses
Repo
Framework
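
The data collection → preprocessing → classification pipeline the review describes can be sketched end-to-end with hand-crafted stride features and a nearest-centroid classifier. The stride intervals and emotion labels below are invented for illustration; real systems use richer kinematic features and learned classifiers.

```python
import numpy as np

def gait_features(strides):
    """Toy feature extractor: mean and variability of stride intervals.
    Slower, more variable gait is often reported for low-arousal states."""
    strides = np.asarray(strides, dtype=float)
    return np.array([strides.mean(), strides.std()])

def nearest_centroid(train, labels, sample):
    """Minimal classifier: assign the label of the closest class centroid."""
    classes = sorted(set(labels))
    centroids = {c: np.mean([gait_features(s) for s, l in zip(train, labels)
                             if l == c], axis=0) for c in classes}
    feats = gait_features(sample)
    return min(classes, key=lambda c: np.linalg.norm(feats - centroids[c]))

# Hypothetical stride-interval recordings (seconds); labels are illustrative.
train = [[0.9, 0.95, 0.9], [0.92, 0.9, 0.93],   # brisk, regular -> "happy"
         [1.4, 1.6, 1.5], [1.5, 1.45, 1.6]]     # slow, variable -> "sad"
labels = ["happy", "happy", "sad", "sad"]
pred = nearest_centroid(train, labels, [1.5, 1.55, 1.4])
print(pred)  # sad
```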

AI-Mediated Exchange Theory

Title AI-Mediated Exchange Theory
Authors Xiao Ma, Taylor W. Brown
Abstract As Artificial Intelligence (AI) plays an ever-expanding role in sociotechnical systems, it is important to articulate the relationships between humans and AI. However, the scholarly communities studying human-AI relationships – including but not limited to social computing, machine learning, science and technology studies, and other social sciences – are divided by the perspectives that define them. These perspectives vary both in their focus on humans or AI, and in the micro/macro lenses through which they approach subjects. These differences inhibit the integration of findings, and thus impede science and interdisciplinarity. In this position paper, we propose the development of a framework, AI-Mediated Exchange Theory (AI-MET), to bridge these divides. As an extension of Social Exchange Theory (SET) in the social sciences, AI-MET views AI as influencing human-to-human relationships via a taxonomy of mediation mechanisms. We list initial ideas for these mechanisms, and show how AI-MET can be used to help human-AI research communities speak to one another.
Tasks
Published 2020-03-04
URL https://arxiv.org/abs/2003.02093v1
PDF https://arxiv.org/pdf/2003.02093v1.pdf
PWC https://paperswithcode.com/paper/ai-mediated-exchange-theory
Repo
Framework

Convolutional Spiking Neural Networks for Spatio-Temporal Feature Extraction

Title Convolutional Spiking Neural Networks for Spatio-Temporal Feature Extraction
Authors Ali Samadzadeh, Fatemeh Sadat Tabatabaei Far, Ali Javadi, Ahmad Nickabadi, Morteza Haghir Chehreghani
Abstract Spiking neural networks (SNNs) can be used in low-power and embedded systems (such as emerging neuromorphic chips) due to their event-based nature. They also have the advantage of low computation cost in contrast to conventional artificial neural networks (ANNs), while preserving ANN properties. However, temporal coding in layers of convolutional spiking neural networks and other types of SNNs has yet to be studied. In this paper, we provide insight into the spatio-temporal feature extraction of convolutional SNNs in experiments designed to exploit this property. Our proposed shallow convolutional SNN outperforms state-of-the-art spatio-temporal feature extractors such as C3D, ConvLSTM, and similar networks. Furthermore, we present a new deep spiking architecture to tackle real-world problems (in particular, classification tasks), which achieves superior performance compared to other SNN methods on CIFAR10-DVS. It is also worth noting that the training process is implemented based on spatio-temporal backpropagation, so ANN-to-SNN conversion methods are not applicable.
Tasks
Published 2020-03-27
URL https://arxiv.org/abs/2003.12346v1
PDF https://arxiv.org/pdf/2003.12346v1.pdf
PWC https://paperswithcode.com/paper/convolutional-spiking-neural-networks-for
Repo
Framework
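
The temporal dynamics that make SNN training different from ANN training can be illustrated with a leaky integrate-and-fire (LIF) neuron, the standard SNN building block. The sketch below is generic, not the paper's architecture; the leak factor and threshold are illustrative.

```python
import numpy as np

def lif_layer(inputs, tau=0.8, v_th=1.0):
    """Simulate a layer of leaky integrate-and-fire neurons over time.

    inputs: array of shape (timesteps, neurons) of input currents.
    Returns the binary spike train of the same shape. The membrane
    potential leaks by factor `tau`, integrates the input, and resets to
    zero after a spike -- the temporal dynamics SNN training must handle.
    """
    v = np.zeros(inputs.shape[1])
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v = tau * v + x                       # leak + integrate
        spikes[t] = (v >= v_th)               # fire where threshold crossed
        v = np.where(spikes[t] > 0, 0.0, v)   # hard reset after a spike
    return spikes

inputs = np.array([[0.6, 0.2],
                   [0.6, 0.2],
                   [0.6, 0.2]])
spikes = lif_layer(inputs)
print(spikes.sum(axis=0))  # neuron 0 fires once, neuron 1 stays silent
```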

Data-Driven Neuromorphic DRAM-based CNN and RNN Accelerators

Title Data-Driven Neuromorphic DRAM-based CNN and RNN Accelerators
Authors Tobi Delbruck, Shih-Chii Liu
Abstract The energy consumed by running large deep neural networks (DNNs) on hardware accelerators is dominated by the need for large amounts of fast memory to store both states and weights. This large required memory is currently only economically viable through DRAM. Although DRAM is high-throughput and low-cost memory (costing 20X less than SRAM), its long random-access latency is bad for the unpredictable access patterns in spiking neural networks (SNNs). In addition, accessing data from DRAM costs orders of magnitude more energy than doing arithmetic with that data. SNNs are energy-efficient if local memory is available and few spikes are generated. This paper reports on our developments over the last 5 years of convolutional and recurrent deep neural network hardware accelerators that exploit either spatial or temporal sparsity similar to SNNs but achieve state-of-the-art (SOA) throughput, power efficiency, and latency even with the use of DRAM for the required storage of the weights and states of large DNNs.
Tasks
Published 2020-03-29
URL https://arxiv.org/abs/2003.13006v1
PDF https://arxiv.org/pdf/2003.13006v1.pdf
PWC https://paperswithcode.com/paper/data-driven-neuromorphic-dram-based-cnn-and
Repo
Framework
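
The payoff of exploiting activation sparsity, as such accelerators do, can be illustrated by counting the multiply-accumulate (MAC) operations saved when zero activations are skipped. The layer sizes and sparsity level below are illustrative, not figures from the paper.

```python
import numpy as np

def dense_vs_sparse_macs(activations, weights):
    """Count multiply-accumulate ops for a fully connected layer when zero
    activations are skipped, as sparsity-exploiting accelerators do.
    Returns (dense_macs, sparse_macs)."""
    activations = np.asarray(activations)
    dense = activations.size * weights.shape[1]         # every input used
    nonzero = np.count_nonzero(activations)
    sparse = nonzero * weights.shape[1]                 # zeros skipped
    return dense, sparse

# ReLU layers commonly leave a majority of activations at zero.
acts = np.array([0.0, 1.2, 0.0, 0.0, 0.7, 0.0, 0.0, 3.1])
weights = np.zeros((8, 16))  # 8 inputs -> 16 outputs (values irrelevant here)
dense, sparse = dense_vs_sparse_macs(acts, weights)
print(dense, sparse)  # 128 48 -> ~2.7x fewer MACs at this sparsity
```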

Image Generation Via Minimizing Fréchet Distance in Discriminator Feature Space

Title Image Generation Via Minimizing Fréchet Distance in Discriminator Feature Space
Authors Khoa D. Doan, Saurav Manchanda, Fengjiao Wang, Sathiya Keerthi, Avradeep Bhowmik, Chandan K. Reddy
Abstract For a given image generation problem, the intrinsic image manifold is often low dimensional. We use the intuition that it is much better to train the GAN generator by minimizing the distributional distance between real and generated images in a low-dimensional feature space representing such a manifold than in the original pixel space. We use the feature space of the GAN discriminator for such a representation. For the distributional distance, we employ one of two choices: the Fréchet distance or direct optimal transport (OT); these respectively lead to two new GAN methods: Fréchet-GAN and OT-GAN. The idea of employing the Fréchet distance comes from the success of the Fréchet Inception Distance as a solid evaluation metric in image generation. Fréchet-GAN is attractive in several ways. We propose an efficient, numerically stable approach to calculate the Fréchet distance and its gradient. The Fréchet distance estimation requires significantly less computation time than OT; this allows Fréchet-GAN to use a much larger mini-batch size in training than OT. More importantly, we conduct experiments on a number of benchmark datasets and show that Fréchet-GAN (in particular) and OT-GAN have significantly better image generation capabilities than the existing representative primal and dual GAN approaches based on the Wasserstein distance.
Tasks Image Generation
Published 2020-03-26
URL https://arxiv.org/abs/2003.11774v2
PDF https://arxiv.org/pdf/2003.11774v2.pdf
PWC https://paperswithcode.com/paper/image-generation-via-minimizing-frechet
Repo
Framework
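
The Fréchet distance between two Gaussians has a well-known closed form, d² = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}), the same quantity behind the Fréchet Inception Distance. Below is a minimal NumPy sketch of that formula; it is not the paper's numerically stable gradient computation.

```python
import numpy as np

def sqrtm_psd(a):
    """Matrix square root of a symmetric positive semi-definite matrix,
    via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.T

def frechet_distance(mu1, cov1, mu2, cov2):
    """Squared Fréchet distance between N(mu1, cov1) and N(mu2, cov2):
      ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2}).
    Tr((cov1 cov2)^{1/2}) is computed via the symmetric form
    (cov1^{1/2} cov2 cov1^{1/2})^{1/2} so eigh applies."""
    s1 = sqrtm_psd(cov1)
    covmean = sqrtm_psd(s1 @ cov2 @ s1)
    diff = mu1 - mu2
    return diff @ diff + np.trace(cov1 + cov2 - 2 * covmean)

mu1, cov1 = np.zeros(2), np.eye(2)
mu2, cov2 = np.array([3.0, 0.0]), np.eye(2)
d = frechet_distance(mu1, cov1, mu2, cov2)
print(d)  # 9.0: identical covariances, so only the mean shift contributes
```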

Posterior-GAN: Towards Informative and Coherent Response Generation with Posterior Generative Adversarial Network

Title Posterior-GAN: Towards Informative and Coherent Response Generation with Posterior Generative Adversarial Network
Authors Shaoxiong Feng, Hongshen Chen, Kan Li, Dawei Yin
Abstract Neural conversational models learn to generate responses by taking into account the dialog history. These models are typically optimized over query-response pairs with a maximum likelihood estimation objective. However, query-response tuples are naturally loosely coupled, and there exist multiple responses that can respond to a given query, which makes learning burdensome for the conversational model. Moreover, the general dull-response problem is worsened when the model is confronted with meaningless response training instances. Intuitively, a high-quality response not only responds to the given query but also links up to future conversations. In this paper, we therefore leverage query-response-future-turn triples to induce generated responses that consider both the given context and future conversations. To facilitate the modeling of these triples, we further propose a novel encoder-decoder-based generative adversarial learning framework, Posterior Generative Adversarial Network (Posterior-GAN), which consists of a forward and a backward generative discriminator that cooperatively encourage the generated response to be informative and coherent from two complementary assessment perspectives. Experimental results demonstrate that our method effectively boosts the informativeness and coherence of the generated responses in both automatic and human evaluations, verifying the advantages of considering the two assessment perspectives.
Tasks
Published 2020-03-04
URL https://arxiv.org/abs/2003.02020v1
PDF https://arxiv.org/pdf/2003.02020v1.pdf
PWC https://paperswithcode.com/paper/posterior-gan-towards-informative-and
Repo
Framework

An Automated Approach for the Discovery of Interoperability

Title An Automated Approach for the Discovery of Interoperability
Authors Duygu Sap, Daniel P. Szabo
Abstract In this article, we present an automated approach that would test for and discover the interoperability of CAD systems based on the approximately-invariant shape properties of their models. We further show that exchanging models in standard format does not guarantee the preservation of shape properties. Our analysis is based on utilizing queries in deriving the shape properties and constructing the proxy models of the given CAD models [1]. We generate template files to accommodate the information necessary for the property computations and proxy model constructions, and implement an interoperability discovery program called DTest to execute the interoperability testing. We posit that our method could be extended to interoperability testing on CAD-to-CAE and/or CAD-to-CAM interactions by modifying the set of property checks and providing the additional requirements that may emerge in CAE or CAM applications.
Tasks
Published 2020-01-26
URL https://arxiv.org/abs/2001.10585v1
PDF https://arxiv.org/pdf/2001.10585v1.pdf
PWC https://paperswithcode.com/paper/an-automated-approach-for-the-discovery-of
Repo
Framework
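
The core check, comparing approximately-invariant shape properties of the same model across systems within a tolerance, can be sketched as follows. The property names, values, and tolerance are illustrative assumptions, not DTest's internals.

```python
def interoperable(props_a, props_b, rel_tol=1e-3):
    """Compare approximately-invariant shape properties (e.g. volume,
    surface area) of the same model as reported by two CAD systems.
    Returns (True, None) if all properties agree within a relative
    tolerance, else (False, name_of_first_failing_property)."""
    for name, a in props_a.items():
        b = props_b.get(name)
        if b is None or abs(a - b) > rel_tol * max(abs(a), abs(b)):
            return False, name
    return True, None

# Hypothetical property reports for one model exchanged between two systems.
system_a = {"volume": 125.000, "surface_area": 150.000}
system_b = {"volume": 125.010, "surface_area": 149.100}  # area drifted
ok, failed = interoperable(system_a, system_b)
print(ok, failed)  # False surface_area
```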

Review of data analysis in vision inspection of power lines with an in-depth discussion of deep learning technology

Title Review of data analysis in vision inspection of power lines with an in-depth discussion of deep learning technology
Authors Xinyu Liu, Xiren Miao, Hao Jiang, Jing Chen
Abstract The widespread popularity of unmanned aerial vehicles enables an immense amount of power lines inspection data to be collected. How to employ massive inspection data especially the visible images to maintain the reliability, safety, and sustainability of power transmission is a pressing issue. To date, substantial works have been conducted on the analysis of power lines inspection data. With the aim of providing a comprehensive overview for researchers who are interested in developing a deep-learning-based analysis system for power lines inspection data, this paper conducts a thorough review of the current literature and identifies the challenges for future research. Following the typical procedure of inspection data analysis, we categorize current works in this area into component detection and fault diagnosis. For each aspect, the techniques and methodologies adopted in the literature are summarized. Some valuable information is also included such as data description and method performance. Further, an in-depth discussion of existing deep-learning-related analysis methods in power lines inspection is proposed. Finally, we conclude the paper with several research trends for the future of this area, such as data quality problems, small object detection, embedded application, and evaluation baseline.
Tasks Object Detection, Small Object Detection
Published 2020-03-22
URL https://arxiv.org/abs/2003.09802v1
PDF https://arxiv.org/pdf/2003.09802v1.pdf
PWC https://paperswithcode.com/paper/review-of-data-analysis-in-vision-inspection
Repo
Framework

Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network

Title Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network
Authors Asmaa Abbas, Mohammed M. Abdelsamea, Mohamed Medhat Gaber
Abstract Chest X-ray is the first imaging technique that plays an important role in the diagnosis of COVID-19 disease. Due to the high availability of large-scale annotated image datasets, great success has been achieved using convolutional neural networks (CNNs) for image recognition and classification. However, due to the limited availability of annotated medical images, the classification of medical images remains the biggest challenge in medical diagnosis. Transfer learning is an effective mechanism that can provide a promising solution by transferring knowledge from generic object recognition tasks to domain-specific tasks. In this paper, we validate and adopt our previously developed CNN, called Decompose, Transfer, and Compose (DeTraC), for the classification of COVID-19 chest X-ray images. DeTraC can deal with any irregularities in the image dataset by investigating its class boundaries using a class decomposition mechanism. The experimental results showed the capability of DeTraC in the detection of COVID-19 cases from a comprehensive image dataset collected from several hospitals around the world. A high accuracy of 95.12% (with a sensitivity of 97.91%, a specificity of 91.87%, and a precision of 93.36%) was achieved by DeTraC in the detection of COVID-19 X-ray images from normal and severe acute respiratory syndrome cases.
Tasks Medical Diagnosis, Object Recognition, Transfer Learning
Published 2020-03-26
URL https://arxiv.org/abs/2003.13815v1
PDF https://arxiv.org/pdf/2003.13815v1.pdf
PWC https://paperswithcode.com/paper/classification-of-covid-19-in-chest-x-ray
Repo
Framework
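
The class decomposition idea can be sketched as splitting each original class into sub-classes with a clustering step, so irregular class boundaries become several simpler ones. The k-means choice, labeling scheme, and toy data below are illustrative, not the authors' exact mechanism.

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Minimal k-means, used here only to decompose a class into sub-classes."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == i].mean(0) for i in range(k)])
    return labels

def decompose_classes(features, labels, k=2):
    """Class decomposition in the spirit of DeTraC: split each original
    class into k sub-classes. Returns new sub-class labels encoded as
    class_id * k + sub_id (an illustrative convention)."""
    new_labels = np.empty(len(labels), dtype=int)
    for c in sorted(set(labels)):
        idx = np.where(np.asarray(labels) == c)[0]
        sub = kmeans(features[idx], k)
        new_labels[idx] = c * k + sub
    return new_labels

# Toy 1-D features: a class with two separated modes gets split in two.
features = np.array([[0.0], [0.1], [5.0], [5.1], [9.0], [9.1]])
labels = [0, 0, 0, 0, 1, 1]
subs = decompose_classes(features, labels, k=2)
print(len(set(subs)))  # 4 sub-classes from the 2 original classes
```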

Toward Accurate and Realistic Virtual Try-on Through Shape Matching and Multiple Warps

Title Toward Accurate and Realistic Virtual Try-on Through Shape Matching and Multiple Warps
Authors Kedan Li, Min Jin Chong, Jingen Liu, David Forsyth
Abstract A virtual try-on method takes a product image and an image of a model and produces an image of the model wearing the product. Most methods essentially compute warps from the product image to the model image and combine them using image generation methods. However, obtaining a realistic image is challenging because the kinematics of garments is complex and because outline, texture, and shading cues in the image reveal errors to human viewers. The garment must have appropriate drapes; texture must be warped to be consistent with the shape of a draped garment; small details (buttons, collars, lapels, pockets, etc.) must be placed appropriately on the garment, and so on. Evaluation is particularly difficult and is usually qualitative. This paper uses quantitative evaluation on a challenging, novel dataset to demonstrate that (a) for any warping method, one can choose target models automatically to improve results, and (b) learning multiple coordinated, specialized warpers offers further improvements. Target models are chosen by a learned embedding procedure that predicts a representation of the products the model is wearing. This prediction is used to match products to models. Specialized warpers are trained by a method that encourages a second warper to perform well in locations where the first works poorly. The warps are then combined using a U-Net. Qualitative evaluation confirms that these improvements are wholesale over outline, texture, shading, and garment details.
Tasks Image Generation
Published 2020-03-22
URL https://arxiv.org/abs/2003.10817v2
PDF https://arxiv.org/pdf/2003.10817v2.pdf
PWC https://paperswithcode.com/paper/toward-accurate-and-realistic-virtual-try-on
Repo
Framework

Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis

Title Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis
Authors Wei Sun, Tianfu Wu
Abstract With the remarkable recent progress on learning deep generative models, it becomes increasingly interesting to develop models for controllable image synthesis from reconfigurable inputs. This paper focuses on a recently emerged task, layout-to-image, to learn generative models capable of synthesizing photo-realistic images from spatial layout (i.e., object bounding boxes configured in an image lattice) and style (i.e., structural and appearance variations encoded by latent vectors). This paper first proposes an intuitive paradigm for the task, layout-to-mask-to-image, which learns to unfold object masks from given bounding boxes in an input layout to bridge the gap between the input layout and the synthesized images. Then, this paper presents a method built on Generative Adversarial Networks for the proposed layout-to-mask-to-image with style control at both the image and mask levels. Object masks are learned from the input layout and iteratively refined along stages in the generator network. Style control at the image level is the same as in vanilla GANs, while style control at the object-mask level is realized by a proposed novel feature normalization scheme, Instance-Sensitive and Layout-Aware Normalization. In experiments, the proposed method is tested on the COCO-Stuff and Visual Genome datasets with state-of-the-art performance obtained.
Tasks Image Generation
Published 2020-03-25
URL https://arxiv.org/abs/2003.11571v1
PDF https://arxiv.org/pdf/2003.11571v1.pdf
PWC https://paperswithcode.com/paper/learning-layout-and-style-reconfigurable-gans
Repo
Framework
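
To give a flavor of mask-level style control, here is a hypothetical mask-conditioned normalization: features inside each object mask are normalized over that mask's region and then modulated by a per-object scale and shift. This is a loose sketch of the general idea, not the paper's Instance-Sensitive and Layout-Aware Normalization.

```python
import numpy as np

def mask_conditioned_norm(feat, masks, gamma, beta, eps=1e-5):
    """Normalize a feature map per object region, then apply per-object
    style modulation (scale gamma, shift beta) inside each mask.
    feat: (H, W) feature map; masks: list of (H, W) binary masks."""
    out = np.zeros_like(feat, dtype=float)
    for m, g, b in zip(masks, gamma, beta):
        region = feat[m > 0]
        # normalize with this region's statistics, modulate inside the mask
        norm = (feat - region.mean()) / (region.std() + eps)
        out = np.where(m > 0, g * norm + b, out)
    return out

feat = np.arange(16, dtype=float).reshape(4, 4)
masks = [np.zeros((4, 4)), np.zeros((4, 4))]
masks[0][:2], masks[1][2:] = 1, 1        # top-half vs bottom-half objects
out = mask_conditioned_norm(feat, masks, gamma=[1.0, 2.0], beta=[0.0, 1.0])
print(round(out[:2].mean(), 6), round(out[2:].mean(), 6))  # 0.0 1.0
```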

Redesigning SLAM for Arbitrary Multi-Camera Systems

Title Redesigning SLAM for Arbitrary Multi-Camera Systems
Authors Juichung Kuo, Manasi Muglikar, Zichao Zhang, Davide Scaramuzza
Abstract Adding more cameras to SLAM systems improves robustness and accuracy but complicates the design of the visual front-end significantly. Thus, most systems in the literature are tailored for specific camera configurations. In this work, we aim at an adaptive SLAM system that works for arbitrary multi-camera setups. To this end, we revisit several common building blocks in visual SLAM. In particular, we propose an adaptive initialization scheme, a sensor-agnostic, information-theoretic keyframe selection algorithm, and a scalable voxel-based map. These techniques make few assumptions about the actual camera setup and prefer theoretically grounded methods over heuristics. We adapt a state-of-the-art visual-inertial odometry pipeline with these modifications, and experimental results show that the modified pipeline can adapt to a wide range of camera setups (e.g., 2 to 6 cameras in one experiment) without the need for sensor-specific modifications or tuning.
Tasks
Published 2020-03-04
URL https://arxiv.org/abs/2003.02014v1
PDF https://arxiv.org/pdf/2003.02014v1.pdf
PWC https://paperswithcode.com/paper/redesigning-slam-for-arbitrary-multi-camera
Repo
Framework