May 6, 2019

3049 words 15 mins read

Paper Group ANR 246

Image Denoising with Kernels based on Natural Image Relations. Ensemble preconditioning for Markov chain Monte Carlo simulation. Recognizing and Eliciting Weakly Single Crossing Profiles on Trees. A Learning Algorithm for Relational Logistic Regression: Preliminary Results. Analysis of a low memory implementation of the Orthogonal Matching Pursuit …

Image Denoising with Kernels based on Natural Image Relations

Title Image Denoising with Kernels based on Natural Image Relations
Authors Valero Laparra, Juan Gutiérrez, Gustavo Camps-Valls, Jesús Malo
Abstract A successful class of image denoising methods is based on Bayesian approaches working in wavelet representations. However, analytical estimates can be obtained only for particular combinations of analytical models of signal and noise, thus precluding their straightforward extension to deal with other arbitrary noise sources. In this paper, we propose an alternative, non-explicit way to take into account the relations among natural image wavelet coefficients for denoising: we use support vector regression (SVR) in the wavelet domain to enforce these relations in the estimated signal. Since relations among the coefficients are specific to the signal, the regularization property of SVR is exploited to remove the noise, which does not share this feature. The specific signal relations are encoded in an anisotropic kernel obtained from mutual information measures computed on a representative image database. Training considers minimizing the Kullback-Leibler divergence (KLD) between the estimated and actual probability functions of signal and noise in order to enforce similarity. Due to its non-parametric nature, the method can eventually cope with different noise sources without the need for an explicit reformulation, as is strictly necessary under parametric Bayesian formalisms. Results under several noise levels and noise sources show that: (1) the proposed method outperforms conventional wavelet methods that assume coefficient independence, (2) it is similar to state-of-the-art methods that do explicitly include these relations when the noise source is Gaussian, and (3) it gives better numerical and visual performance when more complex, realistic noise sources are considered. Therefore, the proposed machine learning approach can be seen as a more flexible (model-free) alternative to the explicit description of wavelet coefficient relations for image denoising.
Tasks Denoising, Image Denoising
Published 2016-01-31
URL http://arxiv.org/abs/1602.00217v1
PDF http://arxiv.org/pdf/1602.00217v1.pdf
PWC https://paperswithcode.com/paper/image-denoising-with-kernels-based-on-natural
Repo
Framework
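
The entry above describes SVR-based denoising in the wavelet domain with a learned anisotropic kernel. Below is a minimal, illustrative sketch under strong simplifications: PyWavelets and scikit-learn stand in for the paper's pipeline, an off-the-shelf RBF kernel replaces the mutual-information-derived anisotropic kernel, and the hyperparameters are fixed rather than tuned via KLD as in the paper.

```python
# Hedged sketch of SVR denoising in the wavelet domain. The plain RBF kernel is
# a placeholder for the paper's anisotropic, mutual-information-based kernel.
import numpy as np
import pywt
from sklearn.svm import SVR

def svr_denoise(image, wavelet="db4", level=2, C=1.0, epsilon=0.1):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    new_details = []
    for detail_level in coeffs[1:]:
        denoised_bands = []
        for band in detail_level:
            h, w = band.shape
            # Regress each coefficient on its position in the subband; the
            # epsilon-insensitive loss regularizes away noise-like fluctuations.
            xs, ys = np.meshgrid(np.arange(w), np.arange(h))
            X = np.column_stack([xs.ravel(), ys.ravel()]) / float(max(h, w))
            y = band.ravel()
            model = SVR(kernel="rbf", C=C, epsilon=epsilon).fit(X, y)
            denoised_bands.append(model.predict(X).reshape(h, w))
        new_details.append(tuple(denoised_bands))
    restored = pywt.waverec2([coeffs[0]] + new_details, wavelet)
    return restored[:image.shape[0], :image.shape[1]]   # crop possible padding

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(64), np.hanning(64))
noisy = clean + 0.05 * rng.normal(size=clean.shape)
restored = svr_denoise(noisy)
```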

Ensemble preconditioning for Markov chain Monte Carlo simulation

Title Ensemble preconditioning for Markov chain Monte Carlo simulation
Authors Charles Matthews, Jonathan Weare, Benedict Leimkuhler
Abstract We describe parallel Markov chain Monte Carlo methods that propagate a collective ensemble of paths, with local covariance information calculated from neighboring replicas. The use of collective dynamics eliminates multiplicative noise and stabilizes the dynamics thus providing a practical approach to difficult anisotropic sampling problems in high dimensions. Numerical experiments with model problems demonstrate that dramatic potential speedups, compared to various alternative schemes, are attainable.
Tasks
Published 2016-07-13
URL http://arxiv.org/abs/1607.03954v1
PDF http://arxiv.org/pdf/1607.03954v1.pdf
PWC https://paperswithcode.com/paper/ensemble-preconditioning-for-markov-chain
Repo
Framework
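
As a rough illustration of ensemble preconditioning, the sketch below (my assumptions: an unadjusted Langevin update and a toy Gaussian target) lets each walker use the sample covariance of the remaining replicas as its preconditioner. It is a toy reading of the idea, not the paper's scheme.

```python
# Illustrative ensemble-preconditioned (unadjusted) Langevin sampler: each
# walker's step is preconditioned by the covariance of the other replicas,
# which adapts the step directions to anisotropic targets.
import numpy as np

def ensemble_langevin(grad_log_pi, init_walkers, n_steps=2000, dt=1e-2, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    X = np.array(init_walkers, dtype=float)             # shape (L, d)
    L, d = X.shape
    for _ in range(n_steps):
        for i in range(L):
            others = np.delete(X, i, axis=0)
            C = np.cov(others.T) + 1e-6 * np.eye(d)      # local preconditioner
            drift = C @ grad_log_pi(X[i])
            noise = rng.multivariate_normal(np.zeros(d), 2.0 * dt * C)
            X[i] = X[i] + dt * drift + noise
    return X

# Toy anisotropic Gaussian target: variances 1 and 1/100.
rng = np.random.default_rng(0)
precision = np.diag([1.0, 100.0])
walkers = ensemble_langevin(lambda x: -precision @ x, rng.normal(size=(16, 2)), rng=rng)
print(walkers.std(axis=0))   # roughly [1.0, 0.1]
```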

Recognizing and Eliciting Weakly Single Crossing Profiles on Trees

Title Recognizing and Eliciting Weakly Single Crossing Profiles on Trees
Authors Palash Dey
Abstract The domain of single crossing preference profiles is a widely studied domain in social choice theory. It has been generalized to the domain of single crossing preference profiles with respect to trees which inherits many desirable properties from the single crossing domain, for example, transitivity of majority relation, existence of polynomial time algorithms for finding winners of Kemeny voting rule, etc. In this paper, we consider a further generalization of the domain of single crossing profiles on trees to the domain consisting of all preference profiles which can be extended to single crossing preference profiles with respect to some tree by adding more preferences to it. We call this domain the weakly single crossing domain on trees. We present a polynomial time algorithm for recognizing weakly single crossing profiles on trees. We then move on to develop a polynomial time algorithm with low query complexity for eliciting weakly single crossing profiles on trees even when we do not know any tree with respect to which the closure of the input profile is single crossing and the preferences can be queried only sequentially; moreover, the sequential order is also unknown. We complement the performance of our preference elicitation algorithm by proving that our algorithm makes an optimal number of queries up to constant factors when the number of preferences is large compared to the number of candidates, even if the input profile is known to be single crossing with respect to some given tree and the preferences can be accessed randomly.
Tasks
Published 2016-11-13
URL http://arxiv.org/abs/1611.04175v1
PDF http://arxiv.org/pdf/1611.04175v1.pdf
PWC https://paperswithcode.com/paper/recognizing-and-eliciting-weakly-single
Repo
Framework
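
To make the single-crossing condition concrete, here is a small, simplified check (my assumptions: voters arrive in a known linear order rather than on a tree, and the full profile is available rather than elicited). A profile is single-crossing on that order exactly when, for every pair of candidates, the preference between them switches at most once along the order.

```python
# Simplified single-crossing check on a fixed linear order of voters (the paper
# handles the harder tree-based and elicitation settings).
from itertools import combinations

def is_single_crossing(profile):
    """profile: list of rankings (tuples of candidates), in voter order."""
    candidates = profile[0]
    for a, b in combinations(candidates, 2):
        prefers_a = [r.index(a) < r.index(b) for r in profile]
        switches = sum(1 for x, y in zip(prefers_a, prefers_a[1:]) if x != y)
        if switches > 1:            # preference over (a, b) may flip at most once
            return False
    return True

print(is_single_crossing([("a", "b", "c"), ("b", "a", "c"), ("b", "c", "a")]))  # True
print(is_single_crossing([("a", "b", "c"), ("b", "a", "c"), ("a", "b", "c")]))  # False
```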

A Learning Algorithm for Relational Logistic Regression: Preliminary Results

Title A Learning Algorithm for Relational Logistic Regression: Preliminary Results
Authors Bahare Fatemi, Seyed Mehran Kazemi, David Poole
Abstract Relational logistic regression (RLR) is a representation of conditional probability in terms of weighted formulae for modelling multi-relational data. In this paper, we develop a learning algorithm for RLR models. Learning an RLR model from data consists of two steps: (1) learning the set of formulae to be used in the model (a.k.a. structure learning) and (2) learning the weight of each formula (a.k.a. parameter learning). For structure learning, we deploy Schmidt and Murphy’s hierarchical assumption: first we learn a model with simple formulae, then more complex formulae are added iteratively only if all their sub-formulae have proven effective in previously learned models. For parameter learning, we convert the problem into a non-relational learning problem and use an off-the-shelf logistic regression learning algorithm from Weka, an open-source machine learning tool, to learn the weights. We also indicate how hidden features about the individuals can be incorporated into RLR to boost the learning performance. We compare our learning algorithm to other structure and parameter learning algorithms in the literature, and compare the performance of RLR models to standard logistic regression and RDN-Boost on a modified version of the MovieLens data-set.
Tasks Relational Reasoning
Published 2016-06-28
URL http://arxiv.org/abs/1606.08531v1
PDF http://arxiv.org/pdf/1606.08531v1.pdf
PWC https://paperswithcode.com/paper/a-learning-algorithm-for-relational-logistic
Repo
Framework
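
For the parameter-learning step, the sketch below shows the conversion the abstract describes: each chosen formula is grounded into a count feature per example, and an off-the-shelf logistic regression learns the weights. The paper uses Weka; scikit-learn is used here as a stand-in, and the two count features are purely hypothetical.

```python
# Hypothetical RLR parameter learning: formula counts become features for a
# standard (non-relational) logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy target: does user u like a given genre?
# Assumed formulae: F1 = #{movies of that genre rated >= 4 by u},
#                   F2 = #{movies of that genre rated <= 2 by u}.
X = np.array([[5, 0], [3, 1], [0, 4], [1, 3], [4, 2], [0, 5]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0])

clf = LogisticRegression().fit(X, y)
print(clf.coef_, clf.intercept_)               # learned formula weights
print(clf.predict_proba([[2.0, 1.0]])[:, 1])   # P(likes | formula counts)
```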

Analysis of a low memory implementation of the Orthogonal Matching Pursuit greedy strategy

Title Analysis of a low memory implementation of the Orthogonal Matching Pursuit greedy strategy
Authors Laura Rebollo-Neira, Miroslav Rozloznik, Pradip Sasmal
Abstract The convergence and numerical analysis of a low memory implementation of the Orthogonal Matching Pursuit greedy strategy, which is termed Self Projected Matching Pursuit, is presented. This approach provides an iterative way of solving the least squares problem with much less storage requirement than direct linear algebra techniques. Hence, it is appropriate for solving large linear systems. Furthermore, the low memory requirement of the method suits it for massive parallelization, via Graphics Processing Unit, to tackle systems which can be broken into a large number of subsystems of much smaller dimension.
Tasks
Published 2016-08-31
URL http://arxiv.org/abs/1609.00053v2
PDF http://arxiv.org/pdf/1609.00053v2.pdf
PWC https://paperswithcode.com/paper/analysis-of-a-low-memory-implementation-of
Repo
Framework
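
For reference, here is a standard Orthogonal Matching Pursuit loop in NumPy. Note that it uses a direct least-squares solve at each iteration, which is exactly the step the paper's Self Projected Matching Pursuit replaces with a low-memory iterative projection.

```python
# Reference OMP (not the paper's low-memory self-projection variant): greedily
# pick the best-correlated atom, then orthogonally project onto the support.
import numpy as np

def omp(D, y, n_atoms, tol=1e-8):
    """D: (m, n) dictionary with unit-norm columns; y: (m,) signal."""
    residual, support = y.copy(), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(D.T @ residual)))     # best-matching atom
        support.append(k)
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)  # direct LS solve
        residual = y - sub @ sol
        if np.linalg.norm(residual) < tol:
            break
    coeffs[support] = sol
    return coeffs

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256)); D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256); x_true[[3, 50, 200]] = [1.0, -2.0, 0.5]
print(np.nonzero(omp(D, D @ x_true, 3))[0])            # recovers indices 3, 50, 200
```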

Unethical Research: How to Create a Malevolent Artificial Intelligence

Title Unethical Research: How to Create a Malevolent Artificial Intelligence
Authors Federico Pistono, Roman V. Yampolskiy
Abstract Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts that results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).
Tasks
Published 2016-05-10
URL http://arxiv.org/abs/1605.02817v2
PDF http://arxiv.org/pdf/1605.02817v2.pdf
PWC https://paperswithcode.com/paper/unethical-research-how-to-create-a-malevolent
Repo
Framework

GeoGebra Tools with Proof Capabilities

Title GeoGebra Tools with Proof Capabilities
Authors Zoltán Kovács, Csilla Sólyom-Gecse
Abstract We report on significant enhancements of the complex algebraic geometry theorem proving subsystem in GeoGebra for automated proofs in Euclidean geometry, concerning the extension of numerous GeoGebra tools with proof capabilities. As a result, a number of elementary theorems can be proven by using GeoGebra’s intuitive user interface on various computer architectures, including native Java and web-based systems with JavaScript. We also provide a test suite of 200 test cases for benchmarking our results.
Tasks Automated Theorem Proving
Published 2016-03-03
URL http://arxiv.org/abs/1603.01228v1
PDF http://arxiv.org/pdf/1603.01228v1.pdf
PWC https://paperswithcode.com/paper/geogebra-tools-with-proof-capabilities
Repo
Framework
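
GeoGebra's prover works by translating geometric statements into polynomial equations and checking them with algebraic-geometry machinery. Below is a hypothetical, toy analogue in SymPy: prove Thales' theorem (the angle subtended by a diameter is a right angle) by reducing the thesis polynomial modulo a Groebner basis of the hypotheses. The coordinate setup and variable names are mine, not GeoGebra's internals.

```python
# Toy ideal-membership proof in the style of algebraic geometry provers.
from sympy import symbols, groebner, reduced, expand

x, y = symbols("x y")
# Hypothesis: C = (x, y) lies on the unit circle with diameter A = (-1, 0), B = (1, 0).
hypotheses = [x**2 + y**2 - 1]
# Thesis: CA and CB are perpendicular, i.e. their dot product vanishes.
thesis = expand((-1 - x) * (1 - x) + (0 - y) * (0 - y))

G = groebner(hypotheses, x, y)
_, remainder = reduced(thesis, list(G), x, y)
print("proved" if remainder == 0 else "not proved")   # proved
```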

Re-ranking Object Proposals for Object Detection in Automatic Driving

Title Re-ranking Object Proposals for Object Detection in Automatic Driving
Authors Zhun Zhong, Mingyi Lei, Shaozi Li, Jianping Fan
Abstract Object detection often suffers from an abundance of useless proposals, and selecting high-quality proposals remains a great challenge. In this paper, we propose a semantic, class-specific approach to re-rank object proposals, which can consistently improve recall performance even with fewer proposals. We first extract features for each proposal, including semantic segmentation, stereo information, contextual information, CNN-based objectness and low-level cues, and then score them using class-specific weights learnt by a Structured SVM. The advantages of the proposed model are twofold: 1) it can be easily merged into existing generators at little computational cost, and 2) it can achieve a high recall rate under strict criteria even when using fewer proposals. Experimental evaluation on the KITTI benchmark demonstrates that our approach significantly improves existing popular generators in recall performance. Moreover, in the experiment conducted for object detection, even with 1,500 proposals our approach still achieves higher average precision (AP) than baselines with 5,000 proposals.
Tasks Object Detection, Semantic Segmentation
Published 2016-05-19
URL http://arxiv.org/abs/1605.05904v2
PDF http://arxiv.org/pdf/1605.05904v2.pdf
PWC https://paperswithcode.com/paper/re-ranking-object-proposals-for-object
Repo
Framework
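
The re-ranking step itself is a linear scoring of per-proposal features with class-specific weights. A minimal sketch, assuming the features are already extracted and the weights already learned (the paper learns them with a structured SVM): proposals are simply re-ordered by score and the top ones kept.

```python
# Hypothetical re-ranking of object proposals by a class-specific linear score.
import numpy as np

def rerank_proposals(features, weights, top_k=1500):
    """features: (n_proposals, d); weights: (d,) class-specific weight vector."""
    scores = features @ weights
    order = np.argsort(-scores)                 # highest score first
    return order[:top_k], scores[order[:top_k]]

rng = np.random.default_rng(0)
feats = rng.normal(size=(5000, 6))   # e.g. segmentation, stereo, context,
                                     # CNN objectness and low-level cues per proposal
w_car = rng.normal(size=6)           # weights learned for class "car" (placeholder)
kept_idx, kept_scores = rerank_proposals(feats, w_car, top_k=1500)
print(kept_idx.shape, kept_scores[:3])
```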

Cross-Graph Learning of Multi-Relational Associations

Title Cross-Graph Learning of Multi-Relational Associations
Authors Hanxiao Liu, Yiming Yang
Abstract Cross-graph Relational Learning (CGRL) refers to the problem of predicting the strengths or labels of multi-relational tuples of heterogeneous object types, through the joint inference over multiple graphs which specify the internal connections among each type of object. CGRL is an open challenge in machine learning due to the daunting number of all possible tuples to deal with when the numbers of nodes in multiple graphs are large, and because the labeled training instances are extremely sparse, as is typical. Existing methods such as tensor factorization or tensor-kernel machines do not work well because of the lack of convex formulation for the optimization of CGRL models, the poor scalability of the algorithms in handling combinatorial numbers of tuples, and/or the non-transductive nature of the learning methods which limits their ability to leverage unlabeled data in training. This paper proposes a novel framework which formulates CGRL as a convex optimization problem, enables transductive learning using both labeled and unlabeled tuples, and offers a scalable algorithm that guarantees the optimal solution and enjoys a linear time complexity with respect to the sizes of input graphs. In our experiments with a subset of DBLP publication records and an Enzyme multi-source dataset, the proposed method successfully scaled to the large cross-graph inference problem, and outperformed other representative approaches significantly.
Tasks Relational Reasoning
Published 2016-05-06
URL http://arxiv.org/abs/1605.01832v1
PDF http://arxiv.org/pdf/1605.01832v1.pdf
PWC https://paperswithcode.com/paper/cross-graph-learning-of-multi-relational
Repo
Framework
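
A much-reduced sketch of the convex formulation for just two graphs (my simplification; the paper covers general multi-relational tuples and a scalable solver): scores F over cross-graph pairs are fit to the few labeled tuples while graph Laplacians smooth them over each graph, and plain gradient descent minimizes the resulting quadratic objective.

```python
# Simplified cross-graph association scoring: quadratic (convex) objective with
# a labeled-tuple term and Laplacian smoothness over both graphs.
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def cross_graph_scores(A1, A2, Y, mask, lam=1.0, lr=0.05, n_iters=500):
    """Y: observed labels, mask: 1 where a tuple is labeled (same shape as F)."""
    L1, L2 = laplacian(A1), laplacian(A2)
    F = np.zeros_like(Y, dtype=float)
    for _ in range(n_iters):
        grad = mask * (F - Y) + lam * (L1 @ F + F @ L2)
        F -= lr * grad
    return F

A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # e.g. an author graph
A2 = np.array([[0, 1], [1, 0]], float)                    # e.g. a venue graph
Y = np.zeros((3, 2)); mask = np.zeros((3, 2))
Y[0, 0] = 1.0; mask[0, 0] = 1.0                           # one labeled tuple
print(np.round(cross_graph_scores(A1, A2, Y, mask), 2))   # label propagates via both graphs
```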

Scale Coding Bag of Deep Features for Human Attribute and Action Recognition

Title Scale Coding Bag of Deep Features for Human Attribute and Action Recognition
Authors Fahad Shahbaz Khan, Joost van de Weijer, Rao Muhammad Anwer, Andrew D. Bagdanov, Michael Felsberg, Jorma Laaksonen
Abstract Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a Bag of Deep Features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state-of-the-art.
Tasks Action Recognition In Still Images, Temporal Action Localization
Published 2016-12-14
URL http://arxiv.org/abs/1612.04884v2
PDF http://arxiv.org/pdf/1612.04884v2.pdf
PWC https://paperswithcode.com/paper/scale-coding-bag-of-deep-features-for-human
Repo
Framework
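
One way to read the explicit scale coding strategy is sketched below (an assumption on my part, not the paper's exact encoding): local deep features are pooled separately per scale bin and the per-bin encodings concatenated, so the final representation keeps explicit scale information instead of collapsing all scales into one pool.

```python
# Illustrative scale coding: pool multi-scale deep features per scale bin and
# concatenate, rather than pooling all scales into a single invariant encoding.
import numpy as np

def scale_coded_encoding(features_per_scale, n_bins=3):
    """features_per_scale: dict {patch_scale: (n_i, d) array of local features}."""
    scales = sorted(features_per_scale)
    bins = np.array_split(scales, n_bins)               # small / medium / large
    d = next(iter(features_per_scale.values())).shape[1]
    encoding = []
    for b in bins:
        feats = [features_per_scale[s] for s in b]
        pooled = np.concatenate(feats).max(axis=0) if feats else np.zeros(d)
        encoding.append(pooled)                          # max-pool within the bin
    return np.concatenate(encoding)                      # n_bins * d dimensions

rng = np.random.default_rng(0)
per_scale = {s: rng.normal(size=(20, 512)) for s in (64, 96, 128, 192, 256)}
print(scale_coded_encoding(per_scale).shape)             # (1536,)
```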

Represent, Aggregate, and Constrain: A Novel Architecture for Machine Reading from Noisy Sources

Title Represent, Aggregate, and Constrain: A Novel Architecture for Machine Reading from Noisy Sources
Authors Jason Naradowsky, Sebastian Riedel
Abstract In order to extract event information from text, a machine reading model must learn to accurately read and interpret the ways in which that information is expressed. But it must also, as the human reader must, aggregate numerous individual value hypotheses into a single coherent global analysis, applying global constraints which reflect prior knowledge of the domain. In this work we focus on the task of extracting plane crash event information from clusters of related news articles whose labels are derived via distant supervision. Unlike previous machine reading work, we assume that while most target values will occur frequently in most clusters, they may also be missing or incorrect. We introduce a novel neural architecture to explicitly model the noisy nature of the data and to deal with these aforementioned learning issues. Our models are trained end-to-end and achieve an improvement of more than 12.1 F$_1$ over previous work, despite using far less linguistic annotation. We apply factor graph constraints to promote more coherent event analyses, with belief propagation inference formulated within the transitions of a recurrent neural network. We show this technique additionally improves maximum F$_1$ by up to 2.8 points, resulting in a relative improvement of 50% over the previous state-of-the-art.
Tasks Reading Comprehension
Published 2016-10-30
URL http://arxiv.org/abs/1610.09722v1
PDF http://arxiv.org/pdf/1610.09722v1.pdf
PWC https://paperswithcode.com/paper/represent-aggregate-and-constrain-a-novel
Repo
Framework
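
The "aggregate" step can be pictured as pooling many noisy per-mention hypotheses into one cluster-level value distribution. A minimal sketch under assumed inputs (per-mention value distributions and learned mention confidences; the names are mine): a softmax over confidences weights the mentions, which tolerates missing or incorrect ones.

```python
# Hypothetical confidence-weighted aggregation of per-mention value hypotheses.
import numpy as np

def aggregate_mentions(mention_probs, mention_conf):
    """mention_probs: (n_mentions, n_values); mention_conf: (n_mentions,) logits."""
    weights = np.exp(mention_conf - mention_conf.max())
    weights /= weights.sum()                   # softmax over mentions
    return weights @ mention_probs             # (n_values,) cluster-level belief

probs = np.array([[0.7, 0.2, 0.1],            # mention 1: first value likely
                  [0.1, 0.1, 0.8],            # mention 2: noisy / incorrect
                  [0.6, 0.3, 0.1]])
conf = np.array([2.0, -1.0, 1.5])             # learned mention confidences
print(aggregate_mentions(probs, conf))        # dominated by the reliable mentions
```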

Identifying Topology of Power Distribution Networks Based on Smart Meter Data

Title Identifying Topology of Power Distribution Networks Based on Smart Meter Data
Authors Jayadev P Satya, Nirav Bhatt, Ramkrishna Pasumarthy, Aravind Rajeswaran
Abstract In a power distribution network, network topology information is essential for efficient operation. This connectivity information is often not accurately available at the low voltage level due to unreported changes that happen from time to time. In this paper, we propose a novel data-driven approach to identify the underlying network topology, including the load phase connectivity, from time series of energy measurements. The proposed method involves the application of Principal Component Analysis (PCA) and its graph-theoretic interpretation to infer the topology from smart meter energy measurements. The method is demonstrated through simulation on randomly generated networks and also on the IEEE-recognized Roy Billinton distribution test system.
Tasks Time Series
Published 2016-09-09
URL http://arxiv.org/abs/1609.02678v1
PDF http://arxiv.org/pdf/1609.02678v1.pdf
PWC https://paperswithcode.com/paper/identifying-topology-of-power-distribution
Repo
Framework
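
The PCA idea can be illustrated with an energy-balance view (my simplifying assumption: a parent meter's reading equals the sum of its children's, up to small losses). Each such balance shows up as a near-zero-variance principal component whose loadings point at the connected meters.

```python
# Illustrative topology hint from smart meter data: near-zero-variance principal
# components of the measurement covariance encode parent = sum-of-children balances.
import numpy as np

def balance_constraints(E, tol=1e-6):
    """E: (T, n_meters) energy readings; returns loadings of near-null components."""
    Ec = E - E.mean(axis=0)
    cov = Ec.T @ Ec / (len(E) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return [eigvecs[:, i] for i in range(len(eigvals)) if eigvals[i] < tol]

rng = np.random.default_rng(0)
children = rng.uniform(1, 5, size=(200, 3))        # three household meters
parent = children.sum(axis=1, keepdims=True)       # their feeder meter
E = np.hstack([parent, children])
for v in balance_constraints(E):
    print(np.round(v / np.abs(v).max(), 2))        # ~ [1, -1, -1, -1]: parent minus children
```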

Integrated perception with recurrent multi-task neural networks

Title Integrated perception with recurrent multi-task neural networks
Authors Hakan Bilen, Andrea Vedaldi
Abstract Modern discriminative predictors have been shown to match natural intelligences in specific perceptual tasks in image classification, object and part detection, boundary extraction, etc. However, a major advantage that natural intelligences still have is that they work well for “all” perceptual problems together, solving them efficiently and coherently in an “integrated manner”. In order to capture some of these advantages in machine perception, we ask two questions: whether deep neural networks can learn universal image representations, useful not only for a single task but for all of them, and how the solutions to the different tasks can be integrated in this framework. We answer by proposing a new architecture, which we call “MultiNet”, in which not only deep image features are shared between tasks, but where tasks can interact in a recurrent manner by encoding the results of their analysis in a common shared representation of the data. In this manner, we show that the performance of individual tasks in standard benchmarks can be improved first by sharing features between them and then, more significantly, by integrating their solutions in the common representation.
Tasks Image Classification
Published 2016-06-06
URL http://arxiv.org/abs/1606.01735v2
PDF http://arxiv.org/pdf/1606.01735v2.pdf
PWC https://paperswithcode.com/paper/integrated-perception-with-recurrent-multi
Repo
Framework
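
A toy PyTorch module (illustrative only, not the paper's MultiNet) showing the recurrent multi-task pattern the abstract describes: task heads read a shared representation and their predictions are encoded back into it, so the tasks can refine one another over a few iterations. The two heads and all dimensions are arbitrary placeholders.

```python
# Illustrative recurrent multi-task block with prediction feedback into a shared
# representation.
import torch
import torch.nn as nn

class RecurrentMultiTask(nn.Module):
    def __init__(self, feat_dim=256, n_classes=10, n_attributes=5, n_iters=3):
        super().__init__()
        self.n_iters = n_iters
        self.cls_head = nn.Linear(feat_dim, n_classes)
        self.attr_head = nn.Linear(feat_dim, n_attributes)
        # Encode both tasks' predictions back into the shared representation.
        self.feedback = nn.Linear(n_classes + n_attributes, feat_dim)

    def forward(self, shared_feat):                  # (B, feat_dim) from a backbone
        h = shared_feat
        for _ in range(self.n_iters):
            cls_logits = self.cls_head(h)
            attr_logits = self.attr_head(h)
            preds = torch.cat([cls_logits.softmax(-1), attr_logits.sigmoid()], dim=-1)
            h = torch.relu(shared_feat + self.feedback(preds))
        return cls_logits, attr_logits

model = RecurrentMultiTask()
cls_out, attr_out = model(torch.randn(4, 256))
print(cls_out.shape, attr_out.shape)                 # (4, 10) and (4, 5)
```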

Enhancing Sentence Relation Modeling with Auxiliary Character-level Embedding

Title Enhancing Sentence Relation Modeling with Auxiliary Character-level Embedding
Authors Peng Li, Heng Huang
Abstract Neural network based approaches for sentence relation modeling automatically generate hidden matching features from raw sentence pairs. However, the quality of the matching feature representation may not be satisfactory due to complex semantic relations such as entailment or contradiction. To address this challenge, we propose a new deep neural network architecture that jointly leverages pre-trained word embeddings and auxiliary character embeddings to learn sentence meanings. The two kinds of word sequence representations are fed as inputs into a multi-layer bidirectional LSTM to learn an enhanced sentence representation. After that, we construct matching features followed by another temporal CNN to learn high-level hidden matching feature representations. Experimental results demonstrate that our approach consistently outperforms the existing methods on standard evaluation datasets.
Tasks
Published 2016-03-30
URL http://arxiv.org/abs/1603.09405v1
PDF http://arxiv.org/pdf/1603.09405v1.pdf
PWC https://paperswithcode.com/paper/enhancing-sentence-relation-modeling-with
Repo
Framework
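
An illustrative PyTorch sketch of the input side as the abstract outlines it: word embeddings plus an auxiliary character-level embedding per word are concatenated and run through a bidirectional LSTM, and the two sentence encodings are combined into simple matching features. The character encoder is reduced to a per-word mean of character embeddings, all sizes are placeholders, and the paper's temporal CNN on top is omitted.

```python
# Illustrative word + character input fusion, BiLSTM encoding, and matching features.
import torch
import torch.nn as nn

class SentencePairEncoder(nn.Module):
    def __init__(self, vocab=5000, chars=100, w_dim=100, c_dim=25, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, w_dim)
        self.char_emb = nn.Embedding(chars, c_dim)
        self.lstm = nn.LSTM(w_dim + c_dim, hidden, bidirectional=True, batch_first=True)

    def encode(self, word_ids, char_ids):
        # word_ids: (B, T); char_ids: (B, T, max_chars)
        w = self.word_emb(word_ids)
        c = self.char_emb(char_ids).mean(dim=2)          # per-word character summary
        out, _ = self.lstm(torch.cat([w, c], dim=-1))
        return out.max(dim=1).values                     # (B, 2 * hidden)

    def forward(self, s1, s2):
        a, b = self.encode(*s1), self.encode(*s2)
        # Matching features: concatenation, absolute difference, elementwise product.
        return torch.cat([a, b, (a - b).abs(), a * b], dim=-1)

enc = SentencePairEncoder()
s = (torch.randint(0, 5000, (2, 12)), torch.randint(0, 100, (2, 12, 8)))
print(enc(s, s).shape)                                   # torch.Size([2, 1024])
```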

Built-in Foreground/Background Prior for Weakly-Supervised Semantic Segmentation

Title Built-in Foreground/Background Prior for Weakly-Supervised Semantic Segmentation
Authors Fatemehsadat Saleh, Mohammad Sadegh Ali Akbarian, Mathieu Salzmann, Lars Petersson, Stephen Gould, Jose M. Alvarez
Abstract Pixel-level annotations are expensive and time consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recently, CNN-based methods have proposed to fine-tune pre-trained networks using image tags. Without additional information, this leads to poor localization accuracy. This problem, however, was alleviated by making use of objectness priors to generate foreground/background masks. Unfortunately these priors either require training pixel-level annotations/bounding boxes, or still yield inaccurate object boundaries. Here, we propose a novel method to extract markedly more accurate masks from the pre-trained network itself, forgoing external objectness modules. This is accomplished using the activations of the higher-level convolutional layers, smoothed by a dense CRF. We demonstrate that our method, based on these masks and a weakly-supervised loss, outperforms the state-of-the-art tag-based weakly-supervised semantic segmentation techniques. Furthermore, we introduce a new form of inexpensive weak supervision yielding an additional accuracy boost.
Tasks Semantic Segmentation, Weakly-Supervised Semantic Segmentation
Published 2016-09-02
URL http://arxiv.org/abs/1609.00446v1
PDF http://arxiv.org/pdf/1609.00446v1.pdf
PWC https://paperswithcode.com/paper/built-in-foregroundbackground-prior-for
Repo
Framework
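
A minimal sketch of extracting the built-in foreground prior, under my simplifications: activations of a high convolutional layer are averaged into a coarse objectness map, upsampled, and thresholded into a foreground/background mask. The paper smooths this with a dense CRF; a Gaussian filter is used below as a crude stand-in.

```python
# Illustrative foreground mask from high-layer CNN activations (Gaussian smoothing
# stands in for the dense CRF used in the paper).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def foreground_mask(conv_activations, image_hw, threshold=0.5, sigma=4.0):
    """conv_activations: (C, h, w) feature maps from a high conv layer."""
    heat = conv_activations.mean(axis=0)                           # (h, w) objectness map
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)  # normalize to [0, 1]
    scale = (image_hw[0] / heat.shape[0], image_hw[1] / heat.shape[1])
    heat = zoom(heat, scale, order=1)                              # upsample to image size
    heat = gaussian_filter(heat, sigma=sigma)                      # CRF stand-in
    return heat > threshold                                        # boolean FG mask

acts = np.random.rand(512, 14, 14)                                 # e.g. conv5 activations
mask = foreground_mask(acts, image_hw=(224, 224))
print(mask.shape, mask.mean())                                     # (224, 224), fraction FG
```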