July 28, 2019

2978 words 14 mins read

Paper Group ANR 190


Towards ECDSA key derivation from deep embeddings for novel Blockchain applications. Approximate Inference-based Motion Planning by Learning and Exploiting Low-Dimensional Latent Variable Models. Full-reference image quality assessment-based B-mode ultrasound image similarity measure. Fair Personalization. Efficient optimization for Hierarchically- …

Towards ECDSA key derivation from deep embeddings for novel Blockchain applications

Title Towards ECDSA key derivation from deep embeddings for novel Blockchain applications
Authors Christian S. Perone
Abstract In this work, we propose a straightforward method to derive Elliptic Curve Digital Signature Algorithm (ECDSA) key pairs from embeddings created using Deep Learning and Metric Learning approaches. We also show that these keys allow the derivation of cryptocurrency (such as Bitcoin) addresses that can be used to transfer and receive funds, enabling novel Blockchain-based applications that transfer funds or data keyed directly to content in domains such as image, text, sound, or any other domain where Deep Learning can extract high-quality embeddings; thus providing a novel integration between the properties of Blockchain-based technologies, such as trust minimization and decentralization, and the high-quality learned representations produced by Deep Learning techniques.
Tasks Metric Learning
Published 2017-11-11
URL http://arxiv.org/abs/1711.04069v1
PDF http://arxiv.org/pdf/1711.04069v1.pdf
PWC https://paperswithcode.com/paper/towards-ecdsa-key-derivation-from-deep
Repo
Framework
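
The construction described above lends itself to a very short sketch. The following is a minimal illustration, assuming the pure-Python `ecdsa` package and treating the SHA-256 hash of a quantized embedding as the secp256k1 private scalar; the quantization step and hash choice are assumptions for illustration, not necessarily the authors' exact method, and Bitcoin address derivation is omitted.

```python
import hashlib
import numpy as np
from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

def keypair_from_embedding(embedding: np.ndarray, decimals: int = 3):
    """Derive an ECDSA (secp256k1) key pair from an embedding vector.

    The embedding is rounded so that near-identical embeddings map to the same
    bytes, then hashed to a 32-byte private scalar. A real system would need a
    stable, metric-aware binarization; this rounding is only illustrative.
    """
    quantized = np.round(embedding, decimals).astype(np.float32).tobytes()
    secret = hashlib.sha256(quantized).digest()            # 32 bytes
    sk = SigningKey.from_string(secret, curve=SECP256k1)   # private key
    return sk, sk.get_verifying_key()                      # (private, public)

# Usage: sign and verify with a key derived from a hypothetical 128-d embedding.
embedding = np.random.default_rng(0).standard_normal(128)
sk, vk = keypair_from_embedding(embedding)
signature = sk.sign(b"transfer funds")
assert vk.verify(signature, b"transfer funds")
```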

Approximate Inference-based Motion Planning by Learning and Exploiting Low-Dimensional Latent Variable Models

Title Approximate Inference-based Motion Planning by Learning and Exploiting Low-Dimensional Latent Variable Models
Authors Jung-Su Ha, Hyeok-Joo Chae, Han-Lim Choi
Abstract This work presents an efficient framework to generate a motion plan for a robot with high degrees of freedom (e.g., a humanoid robot). The high dimensionality of the robot configuration space often makes it difficult to use widely-used motion planning algorithms, since the volume of the decision space increases exponentially with the number of dimensions. To handle complications arising from the large decision space, and to solve the corresponding motion planning problem efficiently, two key concepts are adopted in this work: First, the Gaussian process latent variable model (GP-LVM) is utilized for low-dimensional representation of the original configuration space. Second, an approximate inference algorithm is used, exploiting the duality between control and estimation, to explore the decision space and to compute a high-quality motion trajectory of the robot. Utilizing the GP-LVM and the duality between control and estimation, we construct a fully probabilistic generative model with which a high-dimensional motion planning problem is transformed into a tractable inference problem. Finally, we compute the motion trajectory via an approximate inference algorithm based on a variant of the particle filter. The resulting motions can be viewed in the supplemental video. ( https://youtu.be/kngEaOR4Esc )
Tasks Latent Variable Models, Motion Planning
Published 2017-11-22
URL http://arxiv.org/abs/1711.08275v2
PDF http://arxiv.org/pdf/1711.08275v2.pdf
PWC https://paperswithcode.com/paper/approximate-inference-based-motion-planning
Repo
Framework
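
The pipeline above (learn a low-dimensional latent space, then plan by approximate inference over latent trajectories) can be caricatured in a few lines. A toy sketch follows, with a fixed random linear map standing in for the GP-LVM decoder and a particle-style sample/weight/resample loop standing in for the paper's inference algorithm; the cost function and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, CONF_DIM, HORIZON, N_PARTICLES = 2, 10, 20, 200
W = rng.standard_normal((CONF_DIM, LATENT_DIM)) * 0.1   # stand-in for a GP-LVM decoder
GOAL = np.ones(CONF_DIM)                                 # hypothetical goal configuration

def decode(z):                 # latent point -> full robot configuration
    return W @ z

def trajectory_cost(latent_traj):
    """Distance to goal plus a smoothness penalty (illustrative cost)."""
    confs = np.array([decode(z) for z in latent_traj])
    goal_cost = np.sum(np.linalg.norm(confs - GOAL, axis=1))
    smooth_cost = np.sum(np.linalg.norm(np.diff(latent_traj, axis=0), axis=1))
    return goal_cost + 0.5 * smooth_cost

# Particle-filter-style inference: sample latent trajectories, reweight, resample.
particles = rng.standard_normal((N_PARTICLES, HORIZON, LATENT_DIM)) * 0.5
for _ in range(10):                                   # a few refinement iterations
    costs = np.array([trajectory_cost(p) for p in particles])
    weights = np.exp(-(costs - costs.min()))          # soft-min weighting
    weights /= weights.sum()
    idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
    particles = particles[idx] + rng.standard_normal(particles.shape) * 0.05  # jitter

best = particles[np.argmin([trajectory_cost(p) for p in particles])]
plan = np.array([decode(z) for z in best])            # high-dimensional trajectory
```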

Full-reference image quality assessment-based B-mode ultrasound image similarity measure

Title Full-reference image quality assessment-based B-mode ultrasound image similarity measure
Authors Kele Xu, Xi Liu, Hengxing Cai, Zhifeng Gao
Abstract During the last decades, the number of new full-reference image quality assessment algorithms has increased drastically. Yet, despite the remarkable progress that has been made, medical ultrasound image similarity measurement remains largely unsolved due to a high level of speckle noise contamination. Potential applications of ultrasound image similarity measurement are evident in several areas, such as ultrasound imaging quality assessment and abnormal function region detection. In this paper, a comparative study was made of full-reference image quality assessment methods for ultrasound image visual structural similarity measurement. Moreover, based on the image similarity index, a generic ultrasound motion tracking re-initialization framework is given in this work. The experiments are conducted on synthetic data and real ultrasound liver data, and the results demonstrate that, with the proposed similarity-based tracking re-initialization, the mean landmark tracking error can be decreased from 2 mm to about 1.5 mm in the ultrasound liver sequence.
Tasks Image Quality Assessment
Published 2017-01-10
URL http://arxiv.org/abs/1701.02797v2
PDF http://arxiv.org/pdf/1701.02797v2.pdf
PWC https://paperswithcode.com/paper/full-reference-image-quality-assessment-based
Repo
Framework
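
A minimal sketch of the similarity-based re-initialization idea, using SSIM from scikit-image as one representative full-reference IQA measure; the similarity threshold and the synthetic frames are assumptions for illustration.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim  # scikit-image

def needs_reinit(reference_frame, current_frame, threshold=0.6):
    """Flag tracking re-initialization when the current B-mode frame is no
    longer structurally similar to the reference (threshold is illustrative)."""
    score = ssim(reference_frame, current_frame,
                 data_range=current_frame.max() - current_frame.min())
    return score < threshold, score

# Usage with synthetic frames standing in for ultrasound images.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
drifted = np.clip(ref + 0.3 * rng.random((128, 128)), 0, 1)
flag, score = needs_reinit(ref, drifted)
print(f"SSIM={score:.3f}, re-initialize tracker: {flag}")
```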

Fair Personalization

Title Fair Personalization
Authors L. Elisa Celis, Nisheeth K. Vishnoi
Abstract Personalization is pervasive in the online space as, when combined with learning, it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user. However, recent studies suggest that such personalization can propagate societal or systemic biases, which has led to calls for regulatory mechanisms and algorithms to combat inequality. Here we propose a rigorous algorithmic framework that makes it possible to control biased or discriminatory personalization with respect to sensitive attributes of users without losing all of the benefits of personalization.
Tasks
Published 2017-07-07
URL http://arxiv.org/abs/1707.02260v1
PDF http://arxiv.org/pdf/1707.02260v1.pdf
PWC https://paperswithcode.com/paper/fair-personalization
Repo
Framework
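
The abstract describes controlling how strongly personalization may depend on sensitive attributes without discarding personalization altogether. A toy sketch of one such control: blend each group's personalized content distribution toward the population average until no content type differs across groups by more than a chosen ratio. The blend rule and bound are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def constrain(personalized: dict, max_ratio: float = 1.25, step: float = 0.05):
    """personalized: {group: probability vector over content types}.
    Blend each group's distribution toward the population average until,
    for every content type, groups differ by at most `max_ratio`."""
    dists = {g: np.asarray(p, dtype=float) for g, p in personalized.items()}
    avg = np.mean(list(dists.values()), axis=0)
    alpha = 0.0
    while alpha <= 1.0:
        blended = {g: (1 - alpha) * d + alpha * avg for g, d in dists.items()}
        stacked = np.array(list(blended.values()))
        ratios = stacked.max(axis=0) / np.maximum(stacked.min(axis=0), 1e-9)
        if np.all(ratios <= max_ratio):
            return blended
        alpha += step
    return {g: avg.copy() for g in dists}   # fall back to identical treatment

# Two groups shown job ads vs. entertainment at very different rates.
out = constrain({"group_a": [0.8, 0.2], "group_b": [0.3, 0.7]})
print({g: np.round(p, 2) for g, p in out.items()})
```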

Efficient optimization for Hierarchically-structured Interacting Segments (HINTS)

Title Efficient optimization for Hierarchically-structured Interacting Segments (HINTS)
Authors Hossam Isack, Olga Veksler, Ipek Oguz, Milan Sonka, Yuri Boykov
Abstract We propose an effective optimization algorithm for a general hierarchical segmentation model with geometric interactions between segments. Any given tree can specify a partial order over object labels defining a hierarchy. It is well-established that segment interactions, such as inclusion/exclusion and margin constraints, make the model significantly more discriminant. However, existing optimization methods do not allow full use of such models. Generic α-expansion results in weak local minima, while common binary multi-layered formulations lead to non-submodularity, complex high-order potentials, or polar domain unwrapping and shape biases. In practice, applying these methods to arbitrary trees does not work except for simple cases. Our main contribution is an optimization method for the Hierarchically-structured Interacting Segments (HINTS) model with arbitrary trees. Our Path-Moves algorithm is based on a multi-label MRF formulation and can be seen as a combination of the well-known α-expansion and Ishikawa techniques. We show state-of-the-art biomedical segmentation for many diverse examples of complex trees.
Tasks
Published 2017-03-30
URL http://arxiv.org/abs/1703.10530v1
PDF http://arxiv.org/pdf/1703.10530v1.pdf
PWC https://paperswithcode.com/paper/efficient-optimization-for-hierarchically
Repo
Framework
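
The Path-Moves optimizer itself does not fit in a short sketch, but the geometric interactions the model encodes are easy to illustrate. Below, a hypothetical label tree is used to build per-node segment masks from a labeling and to check an exclusion-with-margin constraint between sibling segments; the tree, margin, and toy labeling are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical label tree: background(0) -> organ(1) -> {tumor(2), vessel(3)}
CHILDREN = {0: [1], 1: [2, 3], 2: [], 3: []}

def region_mask(labeling, node):
    """A node's segment = pixels carrying its label or any descendant's label."""
    stack, labels = [node], set()
    while stack:
        n = stack.pop()
        labels.add(n)
        stack.extend(CHILDREN[n])
    return np.isin(labeling, list(labels))

def sibling_margin_ok(labeling, a, b, margin=3):
    """Exclusion-with-margin between sibling segments: every pixel of `a` must
    lie at least `margin` pixels from segment `b` (illustrative constraint)."""
    mask_a, mask_b = region_mask(labeling, a), region_mask(labeling, b)
    if not mask_b.any():
        return True
    dist_to_b = distance_transform_edt(~mask_b)   # distance to nearest b-pixel
    return bool(np.all(dist_to_b[mask_a] >= margin))

# Toy labeling: a vessel placed right next to the tumor violates a 3-pixel margin.
lab = np.zeros((32, 32), dtype=int)
lab[8:24, 8:24] = 1          # organ
lab[10:14, 10:14] = 2        # tumor inside the organ
lab[14:18, 10:14] = 3        # vessel touching the tumor
print(sibling_margin_ok(lab, 2, 3))   # False
```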

Incorrigibility in the CIRL Framework

Title Incorrigibility in the CIRL Framework
Authors Ryan Carey
Abstract A value learning system has incentives to follow shutdown instructions, assuming the shutdown instruction provides information (in the technical sense) about which actions lead to valuable outcomes. However, this assumption is not robust to model mis-specification (e.g., in the case of programmer errors). We demonstrate this by presenting some Supervised POMDP scenarios in which errors in the parameterized reward function remove the incentive to follow shutdown commands. These difficulties parallel those discussed by Soares et al. (2015) in their paper on corrigibility. We argue that it is important to consider systems that follow shutdown commands under some weaker set of assumptions (e.g., that one small verified module is correctly implemented; as opposed to an entire prior probability distribution and/or parameterized reward function). We discuss some difficulties with simple ways to attempt to attain these sorts of guarantees in a value learning framework.
Tasks
Published 2017-09-19
URL http://arxiv.org/abs/1709.06275v2
PDF http://arxiv.org/pdf/1709.06275v2.pdf
PWC https://paperswithcode.com/paper/incorrigibility-in-the-cirl-framework
Repo
Framework
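
A small worked example of the failure mode described above: an agent compares the expected value of complying with a shutdown command against continuing, under a correct and a mis-specified reward for the shutdown outcome. The numbers are purely illustrative, not the paper's POMDP construction.

```python
# Illustrative expected-value comparison, not the paper's formal setup.
def best_action(p_task_valuable, reward_if_shutdown, reward_if_task_good,
                reward_if_task_bad):
    """The operator presses shutdown. Complying yields reward_if_shutdown;
    ignoring it yields a gamble on whether the task outcome is actually good."""
    ev_comply = reward_if_shutdown
    ev_ignore = (p_task_valuable * reward_if_task_good
                 + (1 - p_task_valuable) * reward_if_task_bad)
    return ("comply", ev_comply) if ev_comply >= ev_ignore else ("ignore", ev_ignore)

# Correctly specified: shutting down after the command is the valuable outcome.
print(best_action(p_task_valuable=0.1, reward_if_shutdown=1.0,
                  reward_if_task_good=2.0, reward_if_task_bad=-5.0))   # comply

# Mis-specified: a programmer error assigns shutdown too little reward,
# so the incentive to follow the command disappears.
print(best_action(p_task_valuable=0.1, reward_if_shutdown=-1.0,
                  reward_if_task_good=2.0, reward_if_task_bad=-0.5))   # ignore
```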

Open Vocabulary Scene Parsing

Title Open Vocabulary Scene Parsing
Authors Hang Zhao, Xavier Puig, Bolei Zhou, Sanja Fidler, Antonio Torralba
Abstract Recognizing arbitrary objects in the wild has been a challenging problem due to the limitations of existing classification models and datasets. In this paper, we propose a new task that aims at parsing scenes with a large and open vocabulary, and several evaluation metrics are explored for this problem. Our proposed approach to this problem is a joint image pixel and word concept embedding framework, where word concepts are connected by semantic relations. We validate the open vocabulary prediction ability of our framework on the ADE20K dataset, which covers a wide variety of scenes and objects. We further explore the trained joint embedding space to show its interpretability.
Tasks Scene Parsing
Published 2017-03-26
URL http://arxiv.org/abs/1703.08769v2
PDF http://arxiv.org/pdf/1703.08769v2.pdf
PWC https://paperswithcode.com/paper/open-vocabulary-scene-parsing
Repo
Framework
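
The central mechanism is assigning each pixel the word concept whose embedding lies closest in the joint space. A minimal sketch of that assignment step with random stand-in embeddings; the vocabulary, dimensions, and the assumption that pixel and word embeddings are already aligned are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["wall", "furniture", "chair", "armchair", "floor"]   # toy open vocabulary
EMB_DIM = 64
word_emb = rng.standard_normal((len(vocab), EMB_DIM))
word_emb /= np.linalg.norm(word_emb, axis=1, keepdims=True)

def parse(pixel_embeddings):
    """pixel_embeddings: (H, W, D) output of a hypothetical segmentation network
    trained to live in the same space as the word embeddings. Each pixel is
    labeled with the nearest concept by cosine similarity."""
    h, w, d = pixel_embeddings.shape
    flat = pixel_embeddings.reshape(-1, d)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    scores = flat @ word_emb.T                # cosine similarity to every concept
    return np.array(vocab)[scores.argmax(axis=1)].reshape(h, w)

labels = parse(rng.standard_normal((4, 4, EMB_DIM)))
print(labels)
```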

Learning Heuristic Search via Imitation

Title Learning Heuristic Search via Imitation
Authors Mohak Bhardwaj, Sanjiban Choudhury, Sebastian Scherer
Abstract Robotic motion planning problems are typically solved by constructing a search tree of valid maneuvers from a start to a goal configuration. Limited onboard computation and real-time planning constraints impose a limit on how large this search tree can grow. Heuristics play a crucial role in such situations by guiding the search towards potentially good directions and consequently minimizing search effort. Moreover, a heuristic must infer such directions efficiently, using only the information uncovered by the search up until that time. However, state-of-the-art methods do not address the problem of computing a heuristic that explicitly minimizes search effort. In this paper, we do so by training a heuristic policy that maps the partial information from the search to a decision about which node of the search tree to expand. Unfortunately, naively training such policies leads to slow convergence and poor local minima. We present SaIL, an efficient algorithm that trains heuristic policies by imitating “clairvoyant oracles” - oracles that have full information about the world and demonstrate decisions that minimize search effort. We leverage the fact that such oracles can be efficiently computed using dynamic programming and derive performance guarantees for the learnt heuristic. We validate the approach on a spectrum of environments, showing that SaIL consistently outperforms state-of-the-art algorithms. Our approach paves the way forward for learning heuristics that exhibit an anytime nature: finding feasible solutions quickly and incrementally refining them over time.
Tasks Motion Planning
Published 2017-07-10
URL http://arxiv.org/abs/1707.03034v1
PDF http://arxiv.org/pdf/1707.03034v1.pdf
PWC https://paperswithcode.com/paper/learning-heuristic-search-via-imitation
Repo
Framework
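
A sketch of the search loop a learned heuristic plugs into: best-first search on a grid where a policy scores open nodes and decides which to expand next. The grid world, feature function, and hand-set linear scorer are stand-ins, and the oracle-imitation training loop itself is omitted.

```python
import heapq
import numpy as np

def features(node, goal):
    """Toy features from the partially revealed search state (illustrative)."""
    return np.array([abs(node[0] - goal[0]), abs(node[1] - goal[1]), 1.0])

def search(start, goal, free, weights, max_expansions=10_000):
    """Best-first search on a 4-connected grid; a linear scorer over node
    features (the 'learned' heuristic policy) picks which node to expand."""
    open_list = [(weights @ features(start, goal), start)]
    parents, seen = {start: None}, {start}
    expansions = 0
    while open_list and expansions < max_expansions:
        _, node = heapq.heappop(open_list)
        expansions += 1
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1], expansions
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if nxt in free and nxt not in seen:
                seen.add(nxt)
                parents[nxt] = node
                heapq.heappush(open_list, (weights @ features(nxt, goal), nxt))
    return None, expansions

# A hand-set weight vector standing in for the trained heuristic policy.
free = {(x, y) for x in range(20) for y in range(20) if not (x == 10 and y < 15)}
path, n = search((0, 0), (19, 19), free, weights=np.array([1.0, 1.0, 0.0]))
print(len(path), "steps,", n, "expansions")
```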

Multitask Learning for Fine-Grained Twitter Sentiment Analysis

Title Multitask Learning for Fine-Grained Twitter Sentiment Analysis
Authors Georgios Balikas, Simon Moura, Massih-Reza Amini
Abstract Traditional sentiment analysis approaches tackle problems like ternary (3-category) and fine-grained (5-category) classification by learning the tasks separately. We argue that such classification tasks are correlated and we propose a multitask approach based on a recurrent neural network that benefits by jointly learning them. Our study demonstrates the potential of multitask models on this type of problems and improves the state-of-the-art results in the fine-grained sentiment classification problem.
Tasks Sentiment Analysis, Twitter Sentiment Analysis
Published 2017-07-12
URL http://arxiv.org/abs/1707.03569v1
PDF http://arxiv.org/pdf/1707.03569v1.pdf
PWC https://paperswithcode.com/paper/multitask-learning-for-fine-grained-twitter
Repo
Framework
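
A minimal sketch of the multitask setup described above: a shared recurrent encoder with one head for ternary and one for five-class sentiment, trained on the sum of the two losses. The use of PyTorch and the layer sizes are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MultitaskSentiment(nn.Module):
    def __init__(self, vocab_size=20_000, emb=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.head_ternary = nn.Linear(2 * hidden, 3)    # negative / neutral / positive
        self.head_fine = nn.Linear(2 * hidden, 5)       # 5-point scale

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        pooled = h.mean(dim=1)                          # mean-pool over time
        return self.head_ternary(pooled), self.head_fine(pooled)

model = MultitaskSentiment()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One joint training step on a random toy batch (real data would be tweets).
tokens = torch.randint(0, 20_000, (8, 30))
y3, y5 = torch.randint(0, 3, (8,)), torch.randint(0, 5, (8,))
logits3, logits5 = model(tokens)
loss = loss_fn(logits3, y3) + loss_fn(logits5, y5)      # joint multitask loss
loss.backward()
opt.step()
```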

Combined Task and Motion Planning as Classical AI Planning

Title Combined Task and Motion Planning as Classical AI Planning
Authors Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
Abstract Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
Tasks Motion Planning
Published 2017-06-21
URL http://arxiv.org/abs/1706.06927v1
PDF http://arxiv.org/pdf/1706.06927v1.pdf
PWC https://paperswithcode.com/paper/combined-task-and-motion-planning-as
Repo
Framework
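
A toy sketch of the compile-then-plan idea: a stand-in motion feasibility check is queried offline over sampled configurations to build a discrete move table, and planning then runs over the resulting classical state space with plain breadth-first search, never calling the motion planner at planning time. Every name and action here is an illustrative assumption, far simpler than the paper's compilation.

```python
from collections import deque
from itertools import product

# Offline "compilation": sampled robot configurations and a feasibility table.
configs = ["home", "at_A", "at_B"]
def motion_feasible(c_from, c_to):          # stand-in for a motion planner call
    return c_from != c_to                   # pretend all sampled pairs connect

MOVES = {(a, b) for a, b in product(configs, configs) if motion_feasible(a, b)}

# Classical planning problem: state = (robot configuration, object location).
def successors(state):
    conf, obj = state
    for a, b in MOVES:
        if a == conf:
            yield ("move", b), (b, obj)
    if conf == "at_A" and obj == "A":       # pick+place compiled into one action
        yield ("transport", "B"), ("at_B", "B")

def bfs(start, goal_obj):
    frontier, parents = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if s[1] == goal_obj:
            plan = []
            while parents[s] is not None:
                act, s = parents[s]
                plan.append(act)
            return plan[::-1]
        for act, nxt in successors(s):
            if nxt not in parents:
                parents[nxt] = (act, s)
                frontier.append(nxt)
    return None

print(bfs(("home", "A"), goal_obj="B"))     # [('move', 'at_A'), ('transport', 'B')]
```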

Distance to Center of Mass Encoding for Instance Segmentation

Title Distance to Center of Mass Encoding for Instance Segmentation
Authors Thomio Watanabe, Denis Wolf
Abstract Instance segmentation can be considered an extension of the object detection problem, where bounding boxes are replaced by object contours. Strictly speaking, the problem requires identifying each pixel's instance and class independently of the means used to do so. The advantage of instance segmentation over the usual object detection lies in the precise delineation of objects, improving object localization. Additionally, object contours allow the evaluation of partial occlusion with basic image processing algorithms. This work approaches the instance segmentation problem as an annotation problem and presents a novel technique to encode and decode ground truth annotations. We propose a mathematical representation of instances that any deep semantic segmentation model can learn and generalize. Each individual instance is represented by a center of mass and a field of vectors pointing to it. This encoding technique has been named Distance to Center of Mass Encoding (DCME).
Tasks Instance Segmentation, Object Detection, Object Localization, Semantic Segmentation
Published 2017-11-24
URL http://arxiv.org/abs/1711.09060v1
PDF http://arxiv.org/pdf/1711.09060v1.pdf
PWC https://paperswithcode.com/paper/distance-to-center-of-mass-encoding-for
Repo
Framework
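
A minimal sketch of the encoding and a naive decoding: every instance pixel stores the offset to its instance's center of mass, and decoding groups foreground pixels by the center their vectors point to. The rounding-based grouping is a simplification, not the paper's decoding procedure.

```python
import numpy as np

def encode_dcme(instance_map):
    """instance_map: (H, W) int array, 0 = background, k>0 = instance id.
    Returns an (H, W, 2) field of offsets from each pixel to its instance's
    center of mass (zeros on background)."""
    h, w = instance_map.shape
    ys, xs = np.mgrid[:h, :w]
    offsets = np.zeros((h, w, 2), dtype=float)
    for k in np.unique(instance_map):
        if k == 0:
            continue
        mask = instance_map == k
        cy, cx = ys[mask].mean(), xs[mask].mean()
        offsets[mask, 0] = cy - ys[mask]
        offsets[mask, 1] = cx - xs[mask]
    return offsets

def decode_dcme(offsets, foreground):
    """Group foreground pixels by the (rounded) center their vectors point to."""
    h, w = foreground.shape
    ys, xs = np.mgrid[:h, :w]
    centers = np.stack([ys + offsets[..., 0], xs + offsets[..., 1]], axis=-1)
    out, keys = np.zeros((h, w), dtype=int), {}
    for y, x in zip(*np.nonzero(foreground)):
        key = tuple(np.round(centers[y, x]).astype(int))
        out[y, x] = keys.setdefault(key, len(keys) + 1)
    return out

# Round-trip on a toy map with two instances.
gt = np.zeros((16, 16), dtype=int)
gt[2:6, 2:6] = 1
gt[9:14, 8:15] = 2
rec = decode_dcme(encode_dcme(gt), gt > 0)
print(np.array_equal(rec > 0, gt > 0), len(np.unique(rec)) - 1)  # True 2
```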

Retrieving Similar X-Ray Images from Big Image Data Using Radon Barcodes with Single Projections

Title Retrieving Similar X-Ray Images from Big Image Data Using Radon Barcodes with Single Projections
Authors Morteza Babaie, H. R. Tizhoosh, Shujin Zhu, M. E. Shiri
Abstract The idea of Radon barcodes (RBC) has been introduced recently. In this paper, we propose a content-based image retrieval approach for big datasets based on Radon barcodes. Our method (Single Projection Radon Barcode, or SP-RBC) uses only a few single Radon projections for each image as global features that can serve as a basis for weak learners. This is our most important contribution in this work, which improves the results of the RBC considerably. As a matter of fact, only one projection of an image, as short as a single SURF feature vector, can already achieve acceptable results. Nevertheless, using multiple projections in a long vector will not deliver the anticipated improvements. To exploit the information inherent in each projection, our method uses the outcome of each projection separately and then applies a more precise local search on the small subset of retrieved images. We have tested our method using the IRMA 2009 dataset, with 14,400 x-ray images, as part of the ImageCLEF initiative. Our approach leads to a substantial decrease in the error rate in comparison with other non-learning methods.
Tasks Content-Based Image Retrieval, Image Retrieval
Published 2017-01-02
URL http://arxiv.org/abs/1701.00449v1
PDF http://arxiv.org/pdf/1701.00449v1.pdf
PWC https://paperswithcode.com/paper/retrieving-similar-x-ray-images-from-big
Repo
Framework
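
A sketch in the spirit of a single-projection Radon barcode, using scikit-image's radon transform: take one projection, binarize it against its median, and retrieve by Hamming distance. The projection angle, code length, thresholding rule, and random toy images are assumptions.

```python
import numpy as np
from skimage.transform import radon, resize

def radon_barcode(image, angle=0.0, code_len=64):
    """One Radon projection of a normalized image, binarized at its median."""
    img = resize(image, (64, 64), anti_aliasing=True)
    projection = radon(img, theta=[angle], circle=False)[:, 0]   # single projection
    projection = np.interp(np.linspace(0, len(projection) - 1, code_len),
                           np.arange(len(projection)), projection)
    return (projection > np.median(projection)).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# Toy retrieval: the query is a noisy copy of the first "database" image.
rng = np.random.default_rng(0)
db = [rng.random((100, 100)) for _ in range(5)]
codes = [radon_barcode(im) for im in db]
query = np.clip(db[0] + 0.05 * rng.standard_normal((100, 100)), 0, 1)
dists = [hamming(radon_barcode(query), c) for c in codes]
print("best match:", int(np.argmin(dists)), dists)
```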

An Adaptive Sampling Scheme to Efficiently Train Fully Convolutional Networks for Semantic Segmentation

Title An Adaptive Sampling Scheme to Efficiently Train Fully Convolutional Networks for Semantic Segmentation
Authors Lorenz Berger, Eoin Hyde, M. Jorge Cardoso, Sebastien Ourselin
Abstract Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large data sets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a posteriori error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: 1) We give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images. 2) We propose a deep dual-path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high-resolution outputs. 3) We show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark.
Tasks Object Recognition, Semantic Segmentation
Published 2017-09-08
URL http://arxiv.org/abs/1709.02764v4
PDF http://arxiv.org/pdf/1709.02764v4.pdf
PWC https://paperswithcode.com/paper/an-adaptive-sampling-scheme-to-efficiently
Repo
Framework
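
A minimal sketch of error-map-driven patch sampling: maintain a per-voxel error map and draw training patch centers with probability proportional to it. The update rule and all constants are illustrative, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patch_centers(error_map, n_patches, temperature=1.0):
    """Draw patch centers with probability proportional to the error map."""
    flat = error_map.ravel().astype(float) ** temperature
    probs = flat / flat.sum()
    idx = rng.choice(flat.size, size=n_patches, p=probs)
    return np.stack(np.unravel_index(idx, error_map.shape), axis=1)

def update_error_map(error_map, prediction, target, decay=0.9):
    """Exponential moving average of the per-voxel error (illustrative)."""
    return decay * error_map + (1 - decay) * np.abs(prediction - target)

# Toy volume: errors concentrated in one region attract most of the samples.
error_map = np.full((64, 64, 64), 0.01)
error_map[40:60, 40:60, 40:60] = 1.0          # "difficult" region
centers = sample_patch_centers(error_map, n_patches=200)
in_hard = np.mean(np.all((centers >= 40) & (centers < 60), axis=1))
print(f"{in_hard:.0%} of sampled centers fall in the difficult region")
```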

SEVEN: Deep Semi-supervised Verification Networks

Title SEVEN: Deep Semi-supervised Verification Networks
Authors Vahid Noroozi, Lei Zheng, Sara Bahaadini, Sihong Xie, Philip S. Yu
Abstract Verification determines whether two samples belong to the same class or not, and has important applications such as face and fingerprint verification, where thousands or millions of categories are present but each category has scarce labeled examples, presenting two major challenges for existing deep learning models. We propose a deep semi-supervised model named SEmi-supervised VErification Network (SEVEN) to address these challenges. The model consists of two complementary components. The generative component addresses the lack of supervision within each category by learning general salient structures from a large amount of data across categories. The discriminative component exploits the learned general features to mitigate the lack of supervision within categories, and also directs the generative component to find more informative structures of the whole data manifold. The two components are tied together in SEVEN to allow an end-to-end training of the two components. Extensive experiments on four verification tasks demonstrate that SEVEN significantly outperforms other state-of-the-art deep semi-supervised techniques when labeled data are in short supply. Furthermore, SEVEN is competitive with fully supervised baselines trained with a larger amount of labeled data. It indicates the importance of the generative component in SEVEN.
Tasks
Published 2017-06-12
URL http://arxiv.org/abs/1706.03692v2
PDF http://arxiv.org/pdf/1706.03692v2.pdf
PWC https://paperswithcode.com/paper/seven-deep-semi-supervised-verification
Repo
Framework
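
A compact sketch of the two-component idea: a shared encoder trained jointly with a reconstruction loss (the generative part, usable on all pairs) and a contrastive verification loss (the discriminative part, on labeled pairs only). The PyTorch architecture and loss weighting are assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seven(nn.Module):
    def __init__(self, dim=256, code=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, code))
        self.dec = nn.Sequential(nn.Linear(code, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def contrastive(z1, z2, same, margin=1.0):
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

model = Seven()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x1, x2 = torch.randn(16, 256), torch.randn(16, 256)       # a batch of sample pairs
same = torch.randint(0, 2, (16,)).float()                  # pair labels
labeled = torch.zeros(16); labeled[:4] = 1                 # only 4 pairs are labeled

z1, r1 = model(x1)
z2, r2 = model(x2)
recon = F.mse_loss(r1, x1) + F.mse_loss(r2, x2)            # generative term (all pairs)
verif = contrastive(z1[labeled.bool()], z2[labeled.bool()], same[labeled.bool()])
(recon + verif).backward()                                  # end-to-end joint training
opt.step()
```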

Are you serious?: Rhetorical Questions and Sarcasm in Social Media Dialog

Title Are you serious?: Rhetorical Questions and Sarcasm in Social Media Dialog
Authors Shereen Oraby, Vrindavan Harrison, Amita Misra, Ellen Riloff, Marilyn Walker
Abstract Effective models of social dialog must understand a broad range of rhetorical and figurative devices. Rhetorical questions (RQs) are a type of figurative language whose aim is to achieve a pragmatic goal, such as structuring an argument, being persuasive, emphasizing a point, or being ironic. While there are computational models for other forms of figurative language, rhetorical questions have received little attention to date. We expand a small dataset from previous work, presenting a corpus of 10,270 RQs from debate forums and Twitter that represent different discourse functions. We show that we can clearly distinguish between RQs and sincere questions (0.76 F1). We then show that RQs can be used both sarcastically and non-sarcastically, observing that non-sarcastic (other) uses of RQs are frequently argumentative in forums, and persuasive in tweets. We present experiments to distinguish between these uses of RQs using SVM and LSTM models that represent linguistic features and post-level context, achieving results as high as 0.76 F1 for “sarcastic” and 0.77 F1 for “other” in forums, and 0.83 F1 for both “sarcastic” and “other” in tweets. We supplement our quantitative experiments with an in-depth characterization of the linguistic variation in RQs.
Tasks
Published 2017-09-15
URL http://arxiv.org/abs/1709.05305v1
PDF http://arxiv.org/pdf/1709.05305v1.pdf
PWC https://paperswithcode.com/paper/are-you-serious-rhetorical-questions-and
Repo
Framework
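
A minimal baseline along the lines of the SVM experiments above: bag-of-n-grams features with a linear SVM separating rhetorical from sincere questions. The four inline examples stand in for the 10,270-RQ corpus, and the feature set is far simpler than the paper's linguistic and post-level context features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny illustrative examples in place of the debate-forum / Twitter corpus.
texts = [
    "Do you even read the sources you cite?",            # rhetorical
    "Oh sure, because that worked so well last time?",   # rhetorical
    "What time does the meeting start tomorrow?",        # sincere
    "Which version of the paper should I review?",       # sincere
]
labels = ["rq", "rq", "sincere", "sincere"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["Do you really think anyone believes that?"]))
```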