April 1, 2020

2927 words 14 mins read

Paper Group ANR 506

Negative Statements Considered Useful

Title Negative Statements Considered Useful
Authors Hiba Arnaout, Simon Razniewski, Gerhard Weikum
Abstract Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, while they abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In query-log-based text extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.1M statements for 100K popular Wikidata entities.
Tasks Question Answering
Published 2020-01-13
URL https://arxiv.org/abs/2001.04425v3
PDF https://arxiv.org/pdf/2001.04425v3.pdf
PWC https://paperswithcode.com/paper/negative-statements-considered-useful
Repo
Framework
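
A minimal sketch of the peer-based inference idea from the abstract above: compare an entity's facts with those of closely related peers and rank the statements the entity lacks by how many peers assert them. The toy fact dictionary, helper name and frequency-based ranking are illustrative assumptions, not the authors' pipeline (which adds supervised and unsupervised ranking features).

```python
from collections import Counter

def candidate_negations(entity, peers, facts):
    """Rank statements that peers assert but `entity` lacks (toy illustration).

    `facts` maps an entity name to a set of (predicate, object) statements.
    """
    peer_statements = Counter()
    for peer in peers:
        peer_statements.update(facts.get(peer, set()))

    own = facts.get(entity, set())
    # Statements common among peers but absent for the entity are candidate negatives.
    candidates = {s: n for s, n in peer_statements.items() if s not in own}
    return sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)

facts = {
    "Stephen Hawking": {("occupation", "physicist"), ("educated at", "Oxford")},
    "Albert Einstein": {("occupation", "physicist"), ("award", "Nobel Prize in Physics")},
    "Richard Feynman": {("occupation", "physicist"), ("award", "Nobel Prize in Physics")},
}

# Suggests ("award", "Nobel Prize in Physics") as an interesting negative statement.
print(candidate_negations("Stephen Hawking",
                          ["Albert Einstein", "Richard Feynman"], facts))
```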

Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS – a collection of Technical Notes Part 1

Title Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS – a collection of Technical Notes Part 1
Authors Robin Bloomfield, Gareth Fletcher, Heidy Khlaaf, Philippa Ryan, Shuji Kinoshita, Yoshiki Kinoshita, Makoto Takeyama, Yutaka Matsubara, Peter Popov, Kazuki Imai, Yoshinori Tsutake
Abstract This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance-overview and issues, Resilience and Safety Requirements, Open Systems Perspective and Formal Verification and Static Analysis of ML Systems. Part 2: Simulation and Dynamic Testing, Defence in Depth and Diversity, Security-Informed Safety Analysis, Standards and Guidelines.
Tasks Autonomous Vehicles
Published 2020-02-28
URL https://arxiv.org/abs/2003.00789v1
PDF https://arxiv.org/pdf/2003.00789v1.pdf
PWC https://paperswithcode.com/paper/towards-identifying-and-closing-gaps-in
Repo
Framework

Investigating the Compositional Structure Of Deep Neural Networks

Title Investigating the Compositional Structure Of Deep Neural Networks
Authors Francesco Craighero, Fabrizio Angaroni, Alex Graudenzi, Fabio Stella, Marco Antoniotti
Abstract The current understanding of deep neural networks can only partially explain how input structure, network parameters and optimization algorithms jointly contribute to achieve the strong generalization power that is typically observed in many real-world applications. In order to improve the comprehension and interpretability of deep neural networks, we here introduce a novel theoretical framework based on the compositional structure of piecewise linear activation functions. By defining a directed acyclic graph representing the composition of activation patterns through the network layers, it is possible to characterize the instances of the input data with respect to both the predicted label and the specific (linear) transformation used to perform predictions. Preliminary tests on the MNIST dataset show that our method can group input instances with regard to their similarity in the internal representation of the neural network, providing an intuitive measure of input complexity.
Tasks
Published 2020-02-17
URL https://arxiv.org/abs/2002.06967v1
PDF https://arxiv.org/pdf/2002.06967v1.pdf
PWC https://paperswithcode.com/paper/investigating-the-compositional-structure-of
Repo
Framework
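
A small PyTorch sketch of the kind of activation pattern the abstract refers to: for a ReLU network, each input induces a binary on/off pattern over the hidden units, and inputs sharing the same pattern are processed by the same linear map. The toy network sizes and the grouping by exact pattern equality are assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy piecewise-linear (ReLU) network; the sizes are arbitrary.
net = nn.Sequential(nn.Linear(784, 64), nn.ReLU(),
                    nn.Linear(64, 32), nn.ReLU(),
                    nn.Linear(32, 10))

def activation_pattern(x):
    """Return the binary on/off pattern of every ReLU unit for the batch x."""
    patterns, h = [], x
    for layer in net:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            patterns.append((h > 0).int())
    return torch.cat(patterns, dim=1)

x = torch.randn(5, 784)                       # stand-in for flattened MNIST digits
p = activation_pattern(x)
# Inputs with identical patterns share the same (linear) transformation.
unique_patterns, groups = torch.unique(p, dim=0, return_inverse=True)
print(groups)
```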

Proceedings of Symposium on Data Mining Applications 2014

Title Proceedings of Symposium on Data Mining Applications 2014
Authors Basit Qureshi, Yasir Javed
Abstract The Symposium on Data Mining and Applications (SDMA 2014) aims to gather researchers and application developers from a wide range of data mining related areas such as statistics, computational intelligence, pattern recognition, databases, Big Data Mining and visualization. SDMA is organized by MEGDAM to advance the state of the art in the data mining research field and its various real-world applications. The symposium will provide opportunities for technical collaboration among data mining and machine learning researchers across Saudi Arabia, the GCC countries and the Middle East region. Acceptance will be based primarily on originality, significance and quality of contribution.
Tasks
Published 2020-01-29
URL https://arxiv.org/abs/2001.11324v1
PDF https://arxiv.org/pdf/2001.11324v1.pdf
PWC https://paperswithcode.com/paper/proceedings-of-symposium-on-data-mining
Repo
Framework

Convergence of Recursive Stochastic Algorithms using Wasserstein Divergence

Title Convergence of Recursive Stochastic Algorithms using Wasserstein Divergence
Authors Abhishek Gupta, William B. Haskell
Abstract This paper develops a unified framework, based on iterated random operator theory, to analyze the convergence of constant stepsize recursive stochastic algorithms (RSAs) in machine learning and reinforcement learning. RSAs use randomization to efficiently compute expectations, and so their iterates form a stochastic process. The key idea is to lift the RSA into an appropriate higher-dimensional space and then express it as an equivalent Markov chain. Instead of determining the convergence of this Markov chain (which may not converge under constant stepsize), we study the convergence of the distribution of this Markov chain. To study this, we define a new notion of Wasserstein divergence. We show that if the distribution of the iterates in the Markov chain satisfies a certain contraction property with respect to the Wasserstein divergence, then the Markov chain admits an invariant distribution. Inspired by the SVRG algorithm, we develop a method to convert any RSA to a variance-reduced RSA that converges to the optimal solution almost surely or in probability. We show that the convergence of a large family of constant stepsize RSAs can be understood using this framework. We apply this framework to ascertain the convergence of mini-batch SGD, forward-backward splitting with catalyst, SVRG, SAGA, empirical Q value iteration, synchronous Q-learning, enhanced policy iteration, and MDPs with a generative model. We also develop two new algorithms for reinforcement learning and establish their convergence using this framework.
Tasks Q-Learning
Published 2020-03-25
URL https://arxiv.org/abs/2003.11403v1
PDF https://arxiv.org/pdf/2003.11403v1.pdf
PWC https://paperswithcode.com/paper/convergence-of-recursive-stochastic
Repo
Framework
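
A schematic of the contraction-to-invariance step described in the abstract, written with assumed notation (W for the paper's Wasserstein divergence, P for the transition kernel of the lifted Markov chain); the precise conditions in the paper may differ.

```latex
% Schematic only; a Banach-fixed-point style argument under the usual
% completeness assumptions on the space of distributions.
\[
  W(\mu P, \nu P) \le \alpha\, W(\mu, \nu) \quad \text{for some } 0 \le \alpha < 1
  \;\Longrightarrow\;
  \exists\, \pi \ \text{with}\ \pi P = \pi
  \quad\text{and}\quad
  W(\mu P^{k}, \pi) \le \alpha^{k}\, W(\mu, \pi).
\]
```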

A Simple Class Decision Balancing for Incremental Learning

Title A Simple Class Decision Balancing for Incremental Learning
Authors Hongjoon Ahn, Taesup Moon
Abstract The class incremental learning (CIL) problem, in which a learning agent continuously learns new classes from incrementally arriving training data batches, has recently gained much attention in the AI and computer vision communities due to both the fundamental and practical aspects of the problem. For mitigating the main difficulty of deep neural network (DNN)-based CIL, catastrophic forgetting, recent work showed that simple fine-tuning (FT) based schemes can outperform earlier attempts that use knowledge distillation, particularly when a small exemplar memory for storing samples from the previously learned classes is allowed. The core limitation of vanilla FT, however, is the severe classification score bias between the new and previously learned classes, and several state-of-the-art methods have been proposed to rectify this bias via additional post-processing of the scores. In this paper, we propose two simple modifications of vanilla FT: a separated softmax (SS) layer and ratio-preserving (RP) mini-batches for SGD updates. Our scheme, dubbed SS-IL, is shown to give much more balanced class decisions, have much less biased scores, and outperform strong state-of-the-art baselines on several large-scale benchmark datasets, without any sophisticated post-processing of the scores. We also give several novel analyses of our method and the baselines, confirming the effectiveness of our approach in CIL.
Tasks
Published 2020-03-31
URL https://arxiv.org/abs/2003.13947v1
PDF https://arxiv.org/pdf/2003.13947v1.pdf
PWC https://paperswithcode.com/paper/a-simple-class-decision-balancing-for
Repo
Framework
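
A minimal PyTorch sketch of a separated-softmax style loss: cross-entropy is computed separately over the previously learned and the new class logits, so new-class scores cannot suppress old-class probabilities. The split interface and the loss details are assumptions; the paper's exact SS-IL formulation and its ratio-preserving mini-batch sampler are not reproduced here.

```python
import torch
import torch.nn.functional as F

def separated_softmax_loss(logits, targets, n_old):
    """Cross-entropy computed separately over old-class and new-class logits (sketch)."""
    old_mask = targets < n_old
    new_mask = ~old_mask
    loss = logits.new_zeros(())
    if old_mask.any():
        loss = loss + F.cross_entropy(logits[old_mask, :n_old], targets[old_mask])
    if new_mask.any():
        loss = loss + F.cross_entropy(logits[new_mask, n_old:],
                                      targets[new_mask] - n_old)
    return loss

# Example: 10 previously learned classes, 5 new classes.
logits = torch.randn(8, 15)
targets = torch.randint(0, 15, (8,))
print(separated_softmax_loss(logits, targets, n_old=10))
```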

Adaptive Dithering Using Curved Markov-Gaussian Noise in the Quantized Domain for Mapping SDR to HDR Image

Title Adaptive Dithering Using Curved Markov-Gaussian Noise in the Quantized Domain for Mapping SDR to HDR Image
Authors Subhayan Mukherjee, Guan-Ming Su, Irene Cheng
Abstract High Dynamic Range (HDR) imaging is gaining increased attention due to its realistic content, not only for regular displays but also for smartphones. Before sufficient HDR content is distributed, HDR visualization still relies mostly on converting Standard Dynamic Range (SDR) content. SDR images are often quantized, or bit-depth reduced, before SDR-to-HDR conversion, e.g. for video transmission. Quantization can easily lead to banding artefacts. In some compute- and/or memory-I/O-limited environments, the traditional solution of using spatial neighborhood information is not feasible. Our method includes noise generation (offline) and noise injection (online), and operates on pixels of the quantized image. We vary the magnitude and structure of the noise pattern adaptively based on the luma of the quantized pixel and the slope of the inverse tone-mapping function. Subjective user evaluations confirm the superior performance of our technique.
Tasks Quantization
Published 2020-01-20
URL https://arxiv.org/abs/2001.06983v1
PDF https://arxiv.org/pdf/2001.06983v1.pdf
PWC https://paperswithcode.com/paper/adaptive-dithering-using-curved-markov
Repo
Framework
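
A rough NumPy sketch of the online noise-injection step: the dither amplitude is scaled by the local slope of the inverse tone-mapping curve at each pixel's luma. The tiling, the scaling constant and the stand-in tone map are assumptions; the paper's curved Markov-Gaussian noise generation (the offline part) is not reproduced.

```python
import numpy as np

def inject_dither(quantized, noise_tile, inv_tone_map):
    """Add luma- and slope-adaptive noise to a quantized SDR luma channel (sketch).

    quantized    : 2D uint8 array (quantized SDR luma).
    noise_tile   : 2D float array, a precomputed (offline) noise pattern.
    inv_tone_map : callable mapping SDR luma in [0, 1] to HDR luminance.
    """
    luma = quantized.astype(np.float32) / 255.0
    eps = 1.0 / 255.0
    # Local slope of the inverse tone map: steep regions are prone to banding.
    slope = (inv_tone_map(np.clip(luma + eps, 0.0, 1.0)) - inv_tone_map(luma)) / eps

    h, w = luma.shape
    ty, tx = noise_tile.shape
    tiled = np.tile(noise_tile, (h // ty + 1, w // tx + 1))[:h, :w]

    dithered = luma + tiled * slope * eps        # slope-scaled perturbation
    return np.clip(dithered, 0.0, 1.0)

# Usage with a gamma-like expansion as a stand-in inverse tone map.
rng = np.random.default_rng(0)
sdr = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.normal(0.0, 1.0, size=(16, 16)).astype(np.float32)
hdr_ready = inject_dither(sdr, noise, lambda y: y ** 2.4)
```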

Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep Reinforcement Learning Approach

Title Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep Reinforcement Learning Approach
Authors Sarder Fakhrul Abedin, Md. Shirajum Munir, Nguyen H. Tran, Zhu Han, Choong Seon Hong
Abstract In this paper, we design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed to improve the data freshness and connectivity to the Internet of Things (IoT) devices. First, we formulate an energy-efficient trajectory optimization problem in which the objective is to maximize the energy efficiency by optimizing the UAV-BS trajectory policy. We also incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS. Second, we propose an agile deep reinforcement learning model with experience replay to solve the formulated problem under the contextual constraints for the UAV-BS navigation. Moreover, the proposed approach is well suited to the problem, since the state space is extremely large and finding the best trajectory policy with useful contextual features is too complex for the UAV-BSs. By applying the proposed trained model, an effective real-time trajectory policy for the UAV-BSs captures the observable network states over time. Finally, the simulation results illustrate that the proposed approach is 3.6% and 3.13% more energy efficient than the greedy and baseline deep Q-Network (DQN) approaches, respectively.
Tasks
Published 2020-02-21
URL https://arxiv.org/abs/2003.04816v1
PDF https://arxiv.org/pdf/2003.04816v1.pdf
PWC https://paperswithcode.com/paper/data-freshness-and-energy-efficient-uav
Repo
Framework
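
A compact sketch of the experience-replay component mentioned in the abstract, in plain Python; the buffer capacity, batch size and transition layout are assumptions, and the UAV-specific state, energy and AoI features are not modelled.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size experience replay for a DQN-style agent (illustrative)."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        batch = random.sample(self.buffer, batch_size)
        return tuple(zip(*batch))        # states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)

# The agent stores transitions collected along a trajectory and trains the
# Q-network on uniformly sampled batches, which decorrelates consecutive
# UAV-BS observations.
buf = ReplayBuffer()
for t in range(100):
    buf.push(state=t, action=t % 4, reward=-1.0, next_state=t + 1, done=False)
states, actions, rewards, next_states, dones = buf.sample(32)
```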

Multi-Task Recurrent Neural Network for Surgical Gesture Recognition and Progress Prediction

Title Multi-Task Recurrent Neural Network for Surgical Gesture Recognition and Progress Prediction
Authors Beatrice van Amsterdam, Matthew J. Clarkson, Danail Stoyanov
Abstract Surgical gesture recognition is important for surgical data science and computer-aided intervention. Even with robotic kinematic information, automatically segmenting surgical steps presents numerous challenges, because surgical demonstrations are characterized by high variability in the style, duration and order of actions. In order to extract discriminative features from the kinematic signals and boost recognition accuracy, we propose a multi-task recurrent neural network for simultaneous recognition of surgical gestures and estimation of a novel formulation of surgical task progress. To show the effectiveness of the presented approach, we evaluate its application on the JIGSAWS dataset, which is currently the only publicly available dataset for surgical gesture recognition featuring robot kinematic data. We demonstrate that recognition performance improves in multi-task frameworks with progress estimation, without any additional manual labelling and training.
Tasks Gesture Recognition, Surgical Gesture Recognition
Published 2020-03-10
URL https://arxiv.org/abs/2003.04772v1
PDF https://arxiv.org/pdf/2003.04772v1.pdf
PWC https://paperswithcode.com/paper/multi-task-recurrent-neural-network-for-1
Repo
Framework
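
A minimal PyTorch sketch of a multi-task recurrent model with one head for per-frame gesture classification and one for task-progress regression over kinematic inputs. The LSTM encoder, layer sizes and loss weighting are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskGestureRNN(nn.Module):
    """Shared recurrent encoder with gesture and progress heads (illustrative)."""

    def __init__(self, n_kinematic=76, hidden=128, n_gestures=10):
        super().__init__()
        self.encoder = nn.LSTM(n_kinematic, hidden, batch_first=True)
        self.gesture_head = nn.Linear(hidden, n_gestures)    # per-frame gesture label
        self.progress_head = nn.Linear(hidden, 1)             # per-frame progress in [0, 1]

    def forward(self, kinematics):
        h, _ = self.encoder(kinematics)                        # (batch, time, hidden)
        return self.gesture_head(h), torch.sigmoid(self.progress_head(h)).squeeze(-1)

# Joint loss: gesture classification plus progress regression.
model = MultiTaskGestureRNN()
x = torch.randn(2, 50, 76)                                     # 2 demonstrations, 50 frames
gestures = torch.randint(0, 10, (2, 50))
progress = torch.linspace(0, 1, 50).repeat(2, 1)
logits, pred_progress = model(x)
loss = (F.cross_entropy(logits.reshape(-1, 10), gestures.reshape(-1))
        + 0.5 * F.mse_loss(pred_progress, progress))
```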

Knapsack Pruning with Inner Distillation

Title Knapsack Pruning with Inner Distillation
Authors Yonathan Aflalo, Asaf Noy, Ming Lin, Itamar Friedman, Lihi Zelnik
Abstract Neural network pruning reduces the computational cost of an over-parameterized network to improve its efficiency. Popular methods vary from $\ell_1$-norm sparsification to Neural Architecture Search (NAS). In this work, we propose a novel pruning method that optimizes the final accuracy of the pruned network and distills knowledge from the over-parameterized parent network’s inner layers. To enable this approach, we formulate the network pruning as a Knapsack Problem which optimizes the trade-off between the importance of neurons and their associated computational cost. Then we prune the network channels while maintaining the high-level structure of the network. The pruned network is fine-tuned under the supervision of the parent network using its inner network knowledge, a technique we refer to as the Inner Knowledge Distillation. Our method leads to state-of-the-art pruning results on ImageNet, CIFAR-10 and CIFAR-100 using ResNet backbones. To prune complex network structures such as convolutions with skip-links and depth-wise convolutions, we propose a block grouping approach to cope with these structures. Through this we produce compact architectures with the same FLOPs as EfficientNet-B0 and MobileNetV3 but with higher accuracy, by 1% and 0.3% respectively on ImageNet, and faster runtime on GPU.
Tasks Network Pruning, Neural Architecture Search
Published 2020-02-19
URL https://arxiv.org/abs/2002.08258v2
PDF https://arxiv.org/pdf/2002.08258v2.pdf
PWC https://paperswithcode.com/paper/knapsack-pruning-with-inner-distillation
Repo
Framework
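
A toy sketch of the knapsack view of pruning: keep the channels that maximise total importance under a FLOPs budget. A greedy importance-per-FLOP heuristic stands in for an exact knapsack solver, and the structure-preserving block grouping and inner distillation are omitted; all names and numbers are illustrative.

```python
def knapsack_prune(channels, flop_budget):
    """Select channels maximising total importance within a FLOPs budget (greedy sketch).

    `channels` is a list of (name, importance, flops) tuples.
    """
    ranked = sorted(channels, key=lambda c: c[1] / c[2], reverse=True)
    kept, used = [], 0
    for name, importance, flops in ranked:
        if used + flops <= flop_budget:
            kept.append(name)
            used += flops
    return kept

channels = [("conv1.c0", 0.9, 120), ("conv1.c1", 0.2, 120),
            ("conv2.c0", 0.7, 300), ("conv2.c1", 0.6, 300)]
print(knapsack_prune(channels, flop_budget=540))   # ['conv1.c0', 'conv2.c0', 'conv1.c1']
```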

Reliable and Energy Efficient MLC STT-RAM Buffer for CNN Accelerators

Title Reliable and Energy Efficient MLC STT-RAM Buffer for CNN Accelerators
Authors Masoomeh Jasemi, Shaahin Hessabi, Nader Bagherzadeh
Abstract We propose a lightweight scheme in which the formation of a data block is changed so that it can tolerate soft errors significantly better than the baseline. The key insight behind our work is that CNN weights are normalized between -1 and 1 after each convolutional layer, which leaves one bit unused in the half-precision floating-point representation. By taking advantage of the unused bit, we create a backup of the most significant bit to protect it against soft errors. Also, considering that in MLC STT-RAMs the cost of memory operations (read and write) and the reliability of a cell are content-dependent (some patterns draw larger current and take longer, while being more susceptible to soft errors), we rearrange the data block to minimize the number of costly bit patterns. Combining these two techniques provides the same level of accuracy as an error-free baseline while improving the read and write energy by 9% and 6%, respectively.
Tasks
Published 2020-01-14
URL https://arxiv.org/abs/2001.08806v1
PDF https://arxiv.org/pdf/2001.08806v1.pdf
PWC https://paperswithcode.com/paper/reliable-and-energy-efficient-mlc-stt-ram
Repo
Framework
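
A NumPy sketch of the spare-bit idea as it reads from the abstract: for FP16 weights in [-1, 1] the most significant exponent bit is always zero, so it can hold a backup of the most significant (sign) bit. The exact bit chosen, the arbitration rule and the content-aware rearrangement are assumptions, not the paper's encoding.

```python
import numpy as np

def pack_with_backup(weights):
    """Copy the sign bit of each FP16 weight into the spare exponent bit (bit 14)."""
    bits = np.asarray(weights, dtype=np.float16).view(np.uint16).copy()
    sign = (bits >> 15) & np.uint16(1)
    return bits | (sign << np.uint16(14))

def unpack_with_backup(stored):
    """Recover weights; if the two copies disagree, trust the backup (simplistic)."""
    sign = (stored >> 15) & np.uint16(1)
    backup = (stored >> 14) & np.uint16(1)
    repaired = np.where(sign == backup, sign, backup)
    cleared = stored & np.uint16(0x3FFF)               # clear both copies
    return (cleared | (repaired.astype(np.uint16) << np.uint16(15))).view(np.float16)

w = np.array([0.75, -0.5, 1.0, -0.0625], dtype=np.float16)
stored = pack_with_backup(w)
stored[1] ^= np.uint16(1 << 15)                        # simulate a soft error on the sign bit
print(unpack_with_backup(stored))                      # the -0.5 is restored
```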

Folding-based compression of point cloud attributes

Title Folding-based compression of point cloud attributes
Authors Maurice Quach, Giuseppe Valenzise, Frederic Dufaux
Abstract Existing techniques to compress point cloud attributes leverage either geometric or video-based compression tools. In this work, we explore a radically different approach inspired by recent advances in point cloud representation learning. A point cloud can be interpreted as a 2D manifold in 3D space. As such, its attributes can be mapped onto a folded 2D grid, compressed through a conventional 2D image codec, and mapped back at the decoder side to recover the attributes on the 3D points. The folding operation is optimized by employing a deep neural network as a parametric folding function. As the mapping is inherently lossy, we propose several strategies to refine it so that attributes in 3D can be mapped to the 2D grid with minimal distortion. This approach can be flexibly applied to portions of point clouds in order to better adapt to local geometric complexity, and thus has potential for being used as a tool in existing or future coding pipelines. Our preliminary results show that the proposed folding-based coding scheme can already reach performance similar to the latest MPEG G-PCC codec.
Tasks Representation Learning
Published 2020-02-11
URL https://arxiv.org/abs/2002.04439v1
PDF https://arxiv.org/pdf/2002.04439v1.pdf
PWC https://paperswithcode.com/paper/folding-based-compression-of-point-cloud
Repo
Framework
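
A PyTorch sketch of a parametric folding function: an MLP maps 2D grid coordinates into 3D so that attributes can be arranged on the 2D grid. The architecture, the symmetric Chamfer-style fit and the nearest-point attribute transfer are assumptions; the 2D image codec stage and the paper's refinement strategies are omitted.

```python
import torch
import torch.nn as nn

class FoldingFunction(nn.Module):
    """Parametric folding: maps 2D grid coordinates to points in 3D space."""

    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3))

    def forward(self, grid):             # grid: (N, 2) in [0, 1]^2
        return self.mlp(grid)            # folded points: (N, 3)

# Fit the folding to a toy point cloud with a symmetric Chamfer-style loss.
points = torch.rand(1024, 3)
u, v = torch.meshgrid(torch.linspace(0, 1, 32), torch.linspace(0, 1, 32), indexing="ij")
grid = torch.stack([u.flatten(), v.flatten()], dim=1)

fold = FoldingFunction()
opt = torch.optim.Adam(fold.parameters(), lr=1e-3)
for _ in range(200):
    dist = torch.cdist(fold(grid), points)                   # (grid cells, points)
    loss = dist.min(dim=1).values.mean() + dist.min(dim=0).values.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Each grid cell now covers a region of the cloud; the attributes of its nearest
# original point can be written into a 32x32 image and sent through a 2D codec.
nearest = torch.cdist(fold(grid), points).argmin(dim=1)
```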

Learning spatio-temporal representations with temporal squeeze pooling

Title Learning spatio-temporal representations with temporal squeeze pooling
Authors Guoxi Huang, Adrian G. Bors
Abstract In this paper, we propose a new video representation learning method, named Temporal Squeeze (TS) pooling, which can extract the essential movement information from a long sequence of video frames and map it into a set of few images, named Squeezed Images. By embedding Temporal Squeeze pooling as a layer into off-the-shelf Convolutional Neural Networks (CNNs), we design a new video classification model, named Temporal Squeeze Network (TeSNet). The resulting Squeezed Images retain the essential movement information from the video frames that is relevant to the video classification task. We evaluate our architecture on two video classification benchmarks and compare the results achieved to the state-of-the-art.
Tasks Representation Learning, Video Classification
Published 2020-02-11
URL https://arxiv.org/abs/2002.04685v1
PDF https://arxiv.org/pdf/2002.04685v1.pdf
PWC https://paperswithcode.com/paper/learning-spatio-temporal-representations-with
Repo
Framework
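
A rough PyTorch sketch of one way a temporal squeeze pooling layer could be parameterised: learnable weights over time squeeze T frames into k weighted-average images that a 2D CNN backbone can then process. The parameterisation is an assumption, not necessarily the paper's exact layer.

```python
import torch
import torch.nn as nn

class TemporalSqueezePool(nn.Module):
    """Squeeze a (batch, T, C, H, W) clip into (batch, k, C, H, W) images."""

    def __init__(self, n_frames, k=2):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(k, n_frames))

    def forward(self, video):
        attn = torch.softmax(self.weights, dim=1)          # (k, T), rows sum to 1
        # Weighted sum over time: each output image mixes all input frames.
        return torch.einsum("btchw,kt->bkchw", video, attn)

clip = torch.randn(4, 16, 3, 112, 112)                     # batch of 16-frame clips
squeezed = TemporalSqueezePool(n_frames=16, k=2)(clip)
print(squeezed.shape)                                      # torch.Size([4, 2, 3, 112, 112])
```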

Gesture recognition with 60GHz 802.11 waveforms

Title Gesture recognition with 60GHz 802.11 waveforms
Authors Eran Hof, Amichai Sanderovich, Evyatar Hemo
Abstract A gesture recognition application over 802.11ad/y waveforms is developed. Simultaneous slider-control and two-finger switching gestures are detected based on the Golay sequences of the channel estimation fields of the packets.
Tasks Gesture Recognition
Published 2020-02-25
URL https://arxiv.org/abs/2002.10836v1
PDF https://arxiv.org/pdf/2002.10836v1.pdf
PWC https://paperswithcode.com/paper/gesture-recognition-with-60ghz-80211
Repo
Framework
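
The channel estimation fields mentioned above are built from Golay complementary sequences, whose autocorrelations sum to an ideal impulse. A small NumPy check of that property; the recursive construction below is generic, not the exact 802.11ad Ga/Gb sequences.

```python
import numpy as np

def golay_pair(n):
    """Generate a +/-1 Golay complementary pair of length 2**n (recursive construction)."""
    a, b = np.array([1]), np.array([1])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(5)                                   # length-32 pair
auto = np.correlate(a, a, mode="full") + np.correlate(b, b, mode="full")
# The summed autocorrelation is zero at every non-zero lag, which is what makes
# these fields convenient for channel estimation.
zero_lag = len(a) - 1
print(auto[zero_lag], np.abs(np.delete(auto, zero_lag)).max())   # 64, 0
```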

Automatic Gesture Recognition in Robot-assisted Surgery with Reinforcement Learning and Tree Search

Title Automatic Gesture Recognition in Robot-assisted Surgery with Reinforcement Learning and Tree Search
Authors Xiaojie Gao, Yueming Jin, Qi Dou, Pheng-Ann Heng
Abstract Automatic surgical gesture recognition is fundamental for improving intelligence in robot-assisted surgery, for example in conducting complicated tasks such as surgery surveillance and skill evaluation. However, current methods treat each frame individually and produce outcomes without effectively considering future information. In this paper, we propose a framework based on reinforcement learning and tree search for joint surgical gesture segmentation and classification. An agent is trained to segment and classify the surgical video in a human-like manner, and its direct decisions are re-considered by tree search where appropriate. Our proposed tree search algorithm unites the outputs of two designed neural networks, i.e., the policy and value networks. With the integration of complementary information from the distinct models, our framework is able to achieve better performance than baseline methods that use either of the neural networks alone. In an overall evaluation, our developed approach consistently outperforms existing methods on the suturing task of the JIGSAWS dataset in terms of accuracy, edit score and F1 score. Our study highlights the use of tree search to refine actions in a reinforcement learning framework for surgical robotic applications.
Tasks Gesture Recognition, Surgical Gesture Recognition
Published 2020-02-20
URL https://arxiv.org/abs/2002.08718v1
PDF https://arxiv.org/pdf/2002.08718v1.pdf
PWC https://paperswithcode.com/paper/automatic-gesture-recognition-in-robot
Repo
Framework
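
A small sketch of how a tree search can re-consider an agent's direct decision by combining policy priors with value estimates, in the spirit of PUCT-style selection; the scoring constant, the statistics dictionary and the action names are assumptions, not the authors' algorithm.

```python
import math

def select_action(children, c_puct=1.0):
    """Pick the action balancing mean value `q`, policy prior `p`, and visit count `n`."""
    total_visits = sum(c["n"] for c in children.values())

    def score(c):
        exploration = c_puct * c["p"] * math.sqrt(total_visits + 1) / (1 + c["n"])
        return c["q"] + exploration

    return max(children, key=lambda a: score(children[a]))

# Hypothetical statistics for two candidate segmentation decisions at one frame.
children = {
    "keep_gesture":   {"p": 0.7, "n": 10, "q": 0.55},
    "switch_gesture": {"p": 0.3, "n": 2,  "q": 0.62},
}
print(select_action(children))
```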