May 7, 2019

Paper Group ANR 95

Studying the impact of negotiation environments on negotiation teams’ performance

Title Studying the impact of negotiation environments on negotiation teams’ performance
Authors Victor Sanchez-Anguix, Vicente Julian, Vicente Botti, Ana Garcia-Fornes
Abstract In this article we study the impact of the negotiation environment on the performance of several intra-team strategies (team dynamics) for agent-based negotiation teams that negotiate with an opponent. An agent-based negotiation team is a group of agents that joins together as a party because they share common interests in the negotiation at hand. It is experimentally shown how negotiation environment conditions like the deadline of both parties, the concession speed of the opponent, similarity among team members, and team size affect performance metrics like the minimum utility of team members, the average utility of team members, and the number of negotiation rounds. Our goal is to identify which intra-team strategies work better in different environmental conditions in order to provide useful knowledge for team members to select appropriate intra-team strategies according to environmental conditions.
Tasks
Published 2016-04-16
URL http://arxiv.org/abs/1604.04737v1
PDF http://arxiv.org/pdf/1604.04737v1.pdf
PWC https://paperswithcode.com/paper/studying-the-impact-of-negotiation
Repo
Framework
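
As a toy illustration of the intra-team dynamics studied in the entry above, the following sketch pairs a standard time-dependent concession curve with a unanimity-style acceptance rule. The concession function, the `beta` parameter, and the voting rule are generic negotiation constructs assumed for illustration, not the paper's exact intra-team strategies.

```python
import numpy as np

def concession(t, deadline, beta=1.0):
    """Fraction of utility conceded at round t.

    beta < 1 yields a Boulware (stubborn) curve, beta > 1 a Conceder curve.
    """
    return min(1.0, (t / deadline) ** (1.0 / beta))

def team_accepts(offer_utilities, t, deadline, beta=1.0):
    """Unanimity-style acceptance: every member accepts only if the offer
    meets their current reservation level given the concession curve."""
    threshold = 1.0 - concession(t, deadline, beta)
    return all(u >= threshold for u in offer_utilities)

# Example: a 4-member team evaluating an opponent offer at round 8 of 10.
member_utils = np.random.default_rng(0).uniform(0.3, 0.9, size=4)
print(team_accepts(member_utils, t=8, deadline=10, beta=0.5))
```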

Statistical mechanics of unsupervised feature learning in a restricted Boltzmann machine with binary synapses

Title Statistical mechanics of unsupervised feature learning in a restricted Boltzmann machine with binary synapses
Authors Haiping Huang
Abstract Revealing hidden features in unlabeled data is called unsupervised feature learning, which plays an important role in pretraining a deep neural network. Here we provide a statistical mechanics analysis of unsupervised learning in a restricted Boltzmann machine with binary synapses. A message passing equation to infer the hidden feature is derived, and furthermore, variants of this equation are analyzed. A statistical analysis by replica theory describes the thermodynamic properties of the model. Our analysis confirms an entropy crisis preceding the non-convergence of the message passing equation, suggesting a discontinuous phase transition as a key characteristic of the restricted Boltzmann machine. A continuous phase transition is also confirmed, depending on the strength of the feature embedded in the data. The mean-field result under the replica symmetric assumption agrees with that obtained by running message passing algorithms on single instances of finite size. Interestingly, in an approximate Hopfield model, the entropy crisis is absent, and a continuous phase transition is observed instead. We also develop an iterative equation to infer the hyper-parameter (temperature) hidden in the data, which in physics corresponds to iteratively imposing the Nishimori condition. Our study provides insights toward understanding the thermodynamic properties of restricted Boltzmann machine learning and, moreover, an important theoretical basis for building simplified deep networks.
Tasks
Published 2016-12-06
URL http://arxiv.org/abs/1612.01717v2
PDF http://arxiv.org/pdf/1612.01717v2.pdf
PWC https://paperswithcode.com/paper/statistical-mechanics-of-unsupervised-feature
Repo
Framework
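
The model analyzed above is an RBM whose synapses are constrained to ±1. Below is a rough, runnable sketch that builds such an RBM and runs alternating Gibbs sampling; the paper's message-passing and replica analysis are not reproduced here, and the 1/√N field scaling is an assumption borrowed from the usual statistical-mechanics convention.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny RBM with binary (+/-1) synapses and +/-1 units.
n_visible, n_hidden = 20, 5
W = rng.choice([-1, 1], size=(n_visible, n_hidden))  # binary synapses

def sample_hidden(v, beta=1.0):
    # P(h=+1|v) for +/-1 spins: sigmoid of twice the scaled local field
    field = beta * v @ W / np.sqrt(n_visible)
    p = 1.0 / (1.0 + np.exp(-2 * field))
    return np.where(rng.random(n_hidden) < p, 1, -1)

def sample_visible(h, beta=1.0):
    field = beta * W @ h / np.sqrt(n_visible)
    p = 1.0 / (1.0 + np.exp(-2 * field))
    return np.where(rng.random(n_visible) < p, 1, -1)

v = rng.choice([-1, 1], size=n_visible)
for _ in range(100):          # alternating Gibbs chain
    h = sample_hidden(v)
    v = sample_visible(h)
print(v, h)
```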

Anatomy-Aware Measurement of Segmentation Accuracy

Title Anatomy-Aware Measurement of Segmentation Accuracy
Authors Hamid R. Tizhoosh, Ahmed A. Othman
Abstract Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a “master gold” based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones, where they exist or are relevant. To apply this new approach to accuracy measurement, we introduce anatomy-aware extensions of both the Dice coefficient and the Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only can the measurement of individual users change, but the ranking of users’ segmentation skills may also require reordering.
Tasks Accuracy Metrics
Published 2016-04-16
URL http://arxiv.org/abs/1604.04678v1
PDF http://arxiv.org/pdf/1604.04678v1.pdf
PWC https://paperswithcode.com/paper/anatomy-aware-measurement-of-segmentation
Repo
Framework
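
A minimal sketch of an anatomy-aware overlap measure in the spirit of the entry above: weight each pixel by the anatomical zone it falls in before computing Dice. The specific weight map here is assumed purely for illustration; the paper defines its extensions via a consensus “master gold” with internal zone outlines.

```python
import numpy as np

def anatomy_weighted_dice(pred, gold, zone_weights):
    """Dice coefficient where each pixel is weighted by its anatomical
    zone (e.g. higher weight near the rectal wall). `pred` and `gold`
    are boolean masks; `zone_weights` is a float array of the same
    shape. An illustrative weighting, not the paper's exact extension."""
    w = zone_weights
    inter = np.sum(w * (pred & gold))
    denom = np.sum(w * pred) + np.sum(w * gold)
    return 2.0 * inter / denom if denom > 0 else 1.0

# Toy example: a 2-zone weight map over a 100x100 image.
gold = np.zeros((100, 100), dtype=bool); gold[30:70, 30:70] = True
pred = np.zeros_like(gold);              pred[35:75, 30:70] = True
weights = np.ones(gold.shape); weights[45:55, 45:55] = 3.0  # critical zone
print(anatomy_weighted_dice(pred, gold, weights))
```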

Bayesian linear regression with Student-t assumptions

Title Bayesian linear regression with Student-t assumptions
Authors Chaobing Song, Shu-Tao Xia
Abstract As an automatic method of determining model complexity using the training data alone, Bayesian linear regression provides us a principled way to select hyperparameters. However, approximate inference is often needed when the distribution assumptions go beyond the Gaussian. In this paper, we propose a Bayesian linear regression model with Student-t assumptions (BLRS), which can be inferred exactly. In this framework, both the conjugate prior and the expectation maximization (EM) algorithm are generalized. Meanwhile, we prove that the maximum likelihood solution is equivalent to the standard Bayesian linear regression with Gaussian assumptions (BLRG). The $q$-EM algorithm for BLRS is nearly identical to the EM algorithm for BLRG. It is shown that $q$-EM for BLRS can converge faster than EM for BLRG on the task of predicting online news popularity.
Tasks
Published 2016-04-15
URL http://arxiv.org/abs/1604.04434v1
PDF http://arxiv.org/pdf/1604.04434v1.pdf
PWC https://paperswithcode.com/paper/bayesian-linear-regression-with-student-t
Repo
Framework
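
For context, here is the classical EM for linear regression with Student-t noise via its Gaussian scale-mixture representation. This is the textbook robust-regression EM, not the paper's exact $q$-EM for BLRS; fixing the degrees of freedom `nu` by hand is an assumption.

```python
import numpy as np

def student_t_regression_em(X, y, nu=4.0, n_iter=50):
    """EM for linear regression with Student-t noise, using the Gaussian
    scale-mixture representation of the t distribution."""
    n, d = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = np.var(y - X @ beta)
    for _ in range(n_iter):
        r2 = (y - X @ beta) ** 2
        w = (nu + 1.0) / (nu + r2 / sigma2)        # E-step: latent weights
        WX = X * w[:, None]                        # M-step: weighted LS
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
        r2 = (y - X @ beta) ** 2
        sigma2 = np.sum(w * r2) / n
    return beta, sigma2

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=200)
print(student_t_regression_em(X, y)[0])   # close to [1, 2] despite heavy tails
```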

Unsupervised Transductive Domain Adaptation

Title Unsupervised Transductive Domain Adaptation
Authors Ozan Sener, Hyun Oh Song, Ashutosh Saxena, Silvio Savarese
Abstract Supervised learning with large scale labeled datasets and deep layered models has made a paradigm shift in diverse areas of learning and recognition. However, this approach still suffers from generalization issues in the presence of a domain shift between the training and the test data distributions. In this regard, unsupervised domain adaptation algorithms have been proposed to directly address the domain shift problem. In this paper, we approach the problem from a transductive perspective. We incorporate the domain shift and the transductive target inference into our framework by jointly solving for an asymmetric similarity metric and the optimal transductive target label assignment. We also show that our model can easily be extended for deep feature learning in order to learn features which are discriminative in the target domain. Our experiments show that the proposed method outperforms state-of-the-art algorithms by a large margin in both object recognition and digit classification experiments.
Tasks Domain Adaptation, Object Recognition, Unsupervised Domain Adaptation
Published 2016-02-10
URL http://arxiv.org/abs/1602.03534v3
PDF http://arxiv.org/pdf/1602.03534v3.pdf
PWC https://paperswithcode.com/paper/unsupervised-transductive-domain-adaptation
Repo
Framework
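
A heavily simplified sketch of the alternating structure described above: iterate between assigning target labels and updating the class representation. The real method learns an asymmetric similarity metric and solves a structured assignment; here the metric step is reduced to recomputing class centroids, purely for illustration.

```python
import numpy as np

def transductive_labels(Xs, ys, Xt, n_iter=10):
    """Alternate between (a) nearest-centroid label assignment for the
    target set and (b) centroid updates using the pseudo-labels."""
    classes = np.unique(ys)
    centroids = np.array([Xs[ys == c].mean(axis=0) for c in classes])
    for _ in range(n_iter):
        # assignment step: nearest-centroid labels for the target set
        d = ((Xt[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        yt = classes[d.argmin(axis=1)]
        # representation step: refresh centroids with the pseudo-labels
        centroids = np.array([
            np.vstack([Xs[ys == c], Xt[yt == c]]).mean(axis=0)
            if np.any(yt == c) else Xs[ys == c].mean(axis=0)
            for c in classes])
    return yt

rng = np.random.default_rng(0)
Xs = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
ys = np.array([0] * 50 + [1] * 50)
Xt = Xs + 1.5                       # shifted target domain
print(np.bincount(transductive_labels(Xs, ys, Xt)))
```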

Topic Grids for Homogeneous Data Visualization

Title Topic Grids for Homogeneous Data Visualization
Authors Shih-Chieh Su, Joseph Vaughn, Jean-Laurent Huynh
Abstract We propose topic grids to detect anomalies and analyze behavior based on access log content. Content-based behavioral risk is quantified in the high-dimensional space where the topics are generated from the log. The topics are then projected homogeneously into a space that is perception- and interaction-friendly to human experts.
Tasks
Published 2016-08-23
URL http://arxiv.org/abs/1608.06664v1
PDF http://arxiv.org/pdf/1608.06664v1.pdf
PWC https://paperswithcode.com/paper/topic-grids-for-homogeneous-data
Repo
Framework
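
One common way to realize such a homogeneous grid layout is to project topics to 2D and then solve a linear assignment between projected points and grid cells, as sketched below. This construction is an assumption for illustration, not necessarily the paper's algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def grid_layout(points_2d, grid_w, grid_h):
    """Assign 2D-projected topic points to cells of a regular grid so
    that every cell holds exactly one topic ('homogeneous' layout).
    Cost is squared distance between a point and a cell center."""
    gx, gy = np.meshgrid(np.arange(grid_w), np.arange(grid_h))
    centers = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
    # normalize points into grid coordinates
    p = points_2d - points_2d.min(0)
    p = p / p.max(0) * [grid_w - 1, grid_h - 1]
    cost = ((p[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return centers[cols]            # grid cell for each topic point

rng = np.random.default_rng(0)
print(grid_layout(rng.normal(size=(12, 2)), grid_w=4, grid_h=3))
```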

Detection of concealed cars in complex cargo X-ray imagery using Deep Learning

Title Detection of concealed cars in complex cargo X-ray imagery using Deep Learning
Authors Nicolas Jaccard, Thomas W. Rogers, Edward J. Morton, Lewis D. Griffin
Abstract Non-intrusive inspection systems based on X-ray radiography techniques are routinely used at transport hubs to ensure the conformity of cargo content with the supplied shipping manifest. As trade volumes increase and regulations become more stringent, manual inspection by trained operators is less and less viable due to low throughput. Machine vision techniques can assist operators in their task by automating parts of the inspection workflow. Since cars are routinely involved in trafficking, export fraud, and tax evasion schemes, they represent an attractive target for automated detection and flagging for subsequent inspection by operators. In this contribution, we describe a method for the detection of cars in X-ray cargo images based on trained-from-scratch Convolutional Neural Networks. By introducing an oversampling scheme that suitably addresses the low number of car images available for training, we achieved a 100% car image classification rate at a false positive rate of 1-in-454. Cars that were partially or completely obscured by other goods, a modus operandi frequently adopted by criminals, were correctly detected. We believe that this level of performance suggests that the method is suitable for deployment in the field. It is expected that the generic object detection workflow described can be extended to other object classes given the availability of suitable training data.
Tasks Image Classification, Object Detection
Published 2016-06-26
URL http://arxiv.org/abs/1606.08078v2
PDF http://arxiv.org/pdf/1606.08078v2.pdf
PWC https://paperswithcode.com/paper/detection-of-concealed-cars-in-complex-cargo
Repo
Framework
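
The oversampling idea above can be sketched at the index level: replicate the rare "car" class so training batches see it more often. The `factor` parameter and plain duplication are assumptions; the paper applies its scheme to X-ray image patches inside a CNN training pipeline.

```python
import numpy as np

def oversample_minority(X, y, minority_label, factor):
    """Duplicate minority-class samples `factor` times in total, then
    shuffle, so downstream batches see the rare class more often."""
    minority_idx = np.flatnonzero(y == minority_label)
    extra = np.tile(minority_idx, factor - 1)
    idx = np.concatenate([np.arange(len(y)), extra])
    np.random.default_rng(0).shuffle(idx)
    return X[idx], y[idx]

X = np.random.default_rng(1).normal(size=(1000, 8))
y = np.array([1] * 20 + [0] * 980)     # 20 'car' images vs 980 others
Xb, yb = oversample_minority(X, y, minority_label=1, factor=10)
print(yb.mean())                       # minority share rises from 2% to ~17%
```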

Developing a cardiovascular disease risk factor annotated corpus of Chinese electronic medical records

Title Developing a cardiovascular disease risk factor annotated corpus of Chinese electronic medical records
Authors Jia Su, Bin He, Yi Guan, Jingchi Jiang, Jinfeng Yang
Abstract Cardiovascular disease (CVD) has become the leading cause of death in China, and most cases can be prevented by controlling risk factors. The goal of this study was to build a corpus of CVD risk factor annotations based on Chinese electronic medical records (CEMRs). This corpus is intended to be used to develop a risk factor information extraction system that, in turn, can be applied as a foundation for the further study of the progress of risk factors and CVD. We designed a light annotation task to capture CVD risk factors with indicators, temporal attributes and assertions that were explicitly or implicitly displayed in the records. The task included: 1) preparing the data; 2) creating guidelines for capturing annotations (created with the help of clinicians); and 3) proposing an annotation method that covered drafting the guidelines, training the annotators, updating the guidelines, and constructing the corpus. A risk factor annotated corpus based on de-identified discharge summaries and progress notes from 600 patients was then developed. Built with the help of clinicians, this corpus has an inter-annotator agreement (IAA) F1-measure of 0.968, indicating high reliability. To the best of our knowledge, this is the first annotated corpus concerning CVD risk factors in CEMRs, together with proposed guidelines for capturing CVD risk factor annotations from CEMRs. The obtained document-level annotations can be applied in future studies to monitor risk factors and CVD over the long term.
Tasks
Published 2016-11-28
URL http://arxiv.org/abs/1611.09020v2
PDF http://arxiv.org/pdf/1611.09020v2.pdf
PWC https://paperswithcode.com/paper/developing-a-cardiovascular-disease-risk
Repo
Framework
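
Span-level inter-annotator agreement F1, as reported above, can be computed by treating one annotator as gold and the other as prediction. The exact-match criterion below is an assumption; the corpus may use a more lenient matching scheme.

```python
def iaa_f1(spans_a, spans_b):
    """Inter-annotator agreement as F1 over exact-match annotation spans,
    with annotator A as 'gold' and B as 'prediction' (F1 is symmetric
    under this swap). Spans are (start, end, label) tuples."""
    a, b = set(spans_a), set(spans_b)
    if not a and not b:
        return 1.0
    tp = len(a & b)
    precision = tp / len(b) if b else 0.0
    recall = tp / len(a) if a else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

ann_a = [(0, 7, "smoking"), (15, 22, "hypertension"), (30, 35, "obesity")]
ann_b = [(0, 7, "smoking"), (15, 22, "hypertension"), (40, 48, "diabetes")]
print(round(iaa_f1(ann_a, ann_b), 3))   # 0.667
```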

Fast Fourier single-pixel imaging using binary illumination

Title Fast Fourier single-pixel imaging using binary illumination
Authors Zibang Zhang, Xueying Wang, Jingang Zhong
Abstract Fourier single-pixel imaging (FSI) has proven capable of reconstructing high-quality two-dimensional and three-dimensional images. Exploiting the sparsity of natural images in the Fourier domain allows high-resolution images to be reconstructed from far fewer measurements than effective image pixels. However, applying the original FSI in a digital micro-mirror device (DMD) based high-speed imaging system turns out to be challenging, because the original FSI uses grayscale Fourier basis patterns for illumination while DMDs generate grayscale patterns at a relatively low rate; a DMD is a binary device that can only generate a black-and-white pattern at each instant. In this paper, we adopt binary Fourier patterns for illumination to achieve DMD-based high-speed single-pixel imaging. Binary Fourier patterns are generated by upsampling the grayscale patterns and then applying error-diffusion dithering. Experiments demonstrate that the proposed technique achieves static imaging with high quality and dynamic imaging in real time. The proposed technique potentially allows high-quality and high-speed imaging over broad wavebands.
Tasks
Published 2016-12-09
URL http://arxiv.org/abs/1612.02880v1
PDF http://arxiv.org/pdf/1612.02880v1.pdf
PWC https://paperswithcode.com/paper/fast-fourier-single-pixel-imaging-using
Repo
Framework
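
A runnable sketch of the pattern-generation pipeline described above: build a grayscale Fourier basis pattern, upsample it, and binarize it with Floyd-Steinberg error-diffusion dithering. Nearest-neighbor upsampling and the 4x factor are assumptions; the paper specifies its own upsampling and dithering details.

```python
import numpy as np

def fourier_pattern(n, fx, fy, phase):
    """Grayscale Fourier basis pattern in [0, 1]."""
    x, y = np.meshgrid(np.arange(n), np.arange(n))
    return 0.5 + 0.5 * np.cos(2 * np.pi * (fx * x + fy * y) / n + phase)

def floyd_steinberg(img):
    """Error-diffusion dithering to a binary {0, 1} pattern."""
    out = img.astype(float).copy()
    h, w = out.shape
    for i in range(h):
        for j in range(w):
            old = out[i, j]
            new = 1.0 if old >= 0.5 else 0.0
            out[i, j] = new
            err = old - new                     # spread quantization error
            if j + 1 < w:               out[i, j + 1]     += err * 7 / 16
            if i + 1 < h and j > 0:     out[i + 1, j - 1] += err * 3 / 16
            if i + 1 < h:               out[i + 1, j]     += err * 5 / 16
            if i + 1 < h and j + 1 < w: out[i + 1, j + 1] += err * 1 / 16
    return out.astype(np.uint8)

# Upsample a low-resolution grayscale pattern 4x, then dither for the DMD.
p = fourier_pattern(64, fx=3, fy=2, phase=0)
p_up = np.kron(p, np.ones((4, 4)))      # nearest-neighbor upsampling
binary = floyd_steinberg(p_up)
print(binary.shape, binary.mean())      # 256x256; mean tracks the pattern's
```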

Stochastic Neural Networks with Monotonic Activation Functions

Title Stochastic Neural Networks with Monotonic Activation Functions
Authors Siamak Ravanbakhsh, Barnabas Poczos, Jeff Schneider, Dale Schuurmans, Russell Greiner
Abstract We propose a Laplace approximation that creates a stochastic unit from any smooth monotonic activation function, using only Gaussian noise. This paper investigates the application of this stochastic approximation in training a family of Restricted Boltzmann Machines (RBM) that are closely linked to Bregman divergences. This family, which we call the exponential family RBM (Exp-RBM), is a subset of the exponential family Harmoniums that expresses family members through a choice of smooth monotonic non-linearity for each neuron. Using contrastive divergence along with our Gaussian approximation, we show that Exp-RBM can learn useful representations using novel stochastic units.
Tasks
Published 2016-01-01
URL http://arxiv.org/abs/1601.00034v4
PDF http://arxiv.org/pdf/1601.00034v4.pdf
PWC https://paperswithcode.com/paper/stochastic-neural-networks-with-monotonic
Repo
Framework
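
The generic noise-injection construction behind such stochastic units looks like the sketch below: pass a Gaussian-perturbed pre-activation through a smooth monotonic nonlinearity. The paper derives the unit (and its Laplace approximation) more carefully; the hand-fixed `sigma` here is an assumption.

```python
import numpy as np

def stochastic_unit(activation, pre_activation, sigma, rng):
    """Stochastic unit from a smooth monotonic activation: sample
    y = f(a + sigma * eps) with Gaussian eps. The paper derives the
    noise scale via a Laplace approximation instead of fixing it."""
    eps = rng.normal(size=np.shape(pre_activation))
    return activation(pre_activation + sigma * eps)

rng = np.random.default_rng(0)
softplus = lambda a: np.log1p(np.exp(a))     # a smooth monotonic activation
a = np.linspace(-3, 3, 7)
samples = np.stack([stochastic_unit(softplus, a, 0.5, rng)
                    for _ in range(1000)])
print(samples.mean(axis=0))   # Monte Carlo estimate of E[f(a + noise)]
```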

Event Selection Rules to Compute Explanations

Title Event Selection Rules to Compute Explanations
Authors Charles Prud’homme, Xavier Lorca, Narendra Jussien
Abstract Explanations were introduced in the previous century. Their value in reducing the search space is no longer questioned, yet their efficient implementation in CSP solvers is still a challenge. In this paper, we introduce ESeR, an Event Selection Rules algorithm that filters events generated during propagation. This dynamic selection enables an efficient computation of explanations for intelligent backtracking algorithms. We show the effectiveness of our approach on the instances of the last three MiniZinc challenges.
Tasks
Published 2016-08-29
URL http://arxiv.org/abs/1608.08015v1
PDF http://arxiv.org/pdf/1608.08015v1.pdf
PWC https://paperswithcode.com/paper/event-selection-rules-to-compute-explanations
Repo
Framework
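
Very loosely, explanation computation from filtered events can be pictured as walking causes backward from a failure and keeping only the relevant decisions, as in the toy sketch below. This is a drastic simplification assumed for illustration; ESeR filters events dynamically as propagation generates them, according to its selection rules.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    variable: str      # variable whose domain changed
    cause: tuple       # events or decision strings that triggered it

def explain(failure_events):
    """Walk causes backward from a failure, keeping only events that are
    actually relevant to it, then return the implicated decisions."""
    relevant, stack = set(), list(failure_events)
    while stack:
        e = stack.pop()
        if e in relevant:
            continue
        relevant.add(e)
        stack.extend(c for c in e.cause if isinstance(c, Event))
    return {c for e in relevant for c in e.cause if not isinstance(c, Event)}

# Decisions are plain strings; events chain back to them.
d1, d2 = "x=1", "y=2"
e1 = Event("z", (d1,))
e2 = Event("w", (e1, d2))
e3 = Event("u", ("q=0",))        # unrelated; never reached from the failure
print(explain([e2]))             # {'x=1', 'y=2'}
```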

“Influence Sketching”: Finding Influential Samples In Large-Scale Regressions

Title “Influence Sketching”: Finding Influential Samples In Large-Scale Regressions
Authors Mike Wojnowicz, Ben Cruz, Xuan Zhao, Brian Wallace, Matt Wolff, Jay Luan, Caleb Crable
Abstract There is an especially strong need in modern large-scale data analysis to prioritize samples for manual inspection. For example, the inspection could target important mislabeled samples or key vulnerabilities exploitable by an adversarial attack. In order to solve the “needle in the haystack” problem of which samples to inspect, we develop a new scalable version of Cook’s distance, a classical statistical technique for identifying samples which unusually strongly impact the fit of a regression model (and its downstream predictions). In order to scale this technique up to very large and high-dimensional datasets, we introduce a new algorithm which we call “influence sketching.” Influence sketching embeds random projections within the influence computation; in particular, the influence score is calculated using the randomly projected pseudo-dataset from the post-convergence Generalized Linear Model (GLM). We validate that influence sketching can reliably and successfully discover influential samples by applying the technique to a malware detection dataset of over 2 million executable files, each represented with almost 100,000 features. For example, we find that randomly deleting approximately 10% of training samples reduces predictive accuracy only slightly from 99.47% to 99.45%, whereas deleting the same number of samples with high influence sketch scores reduces predictive accuracy all the way down to 90.24%. Moreover, we find that influential samples are especially likely to be mislabeled. In the case study, we manually inspect the most influential samples, and find that influence sketching pointed us to new, previously unidentified pieces of malware.
Tasks Adversarial Attack, Malware Detection
Published 2016-11-17
URL http://arxiv.org/abs/1611.05923v3
PDF http://arxiv.org/pdf/1611.05923v3.pdf
PWC https://paperswithcode.com/paper/influence-sketching-finding-influential
Repo
Framework
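
A small sketch of the core trick described above: compute Cook's-distance-style influence scores on a randomly projected design matrix. This version handles ordinary least squares rather than a converged GLM, and the Gaussian sketch and score normalization are assumptions.

```python
import numpy as np

def influence_sketch_scores(X, y, k=64, seed=0):
    """Approximate Cook's-distance-style influence scores using a random
    projection of the design matrix (a sketch in the spirit of the
    paper, here for ordinary least squares)."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(d, k)) / np.sqrt(k)   # Gaussian sketch
    Xs = X @ Omega                                 # n x k projected design
    beta = np.linalg.lstsq(Xs, y, rcond=None)[0]
    resid = y - Xs @ beta
    # leverages of the projected design via its thin QR factorization
    Q, _ = np.linalg.qr(Xs)
    h = np.sum(Q ** 2, axis=1)
    s2 = resid @ resid / (n - k)
    return (resid ** 2 / (k * s2)) * h / (1 - h) ** 2   # Cook-like score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 200))
y = X @ rng.normal(size=200) + rng.normal(size=500)
y[0] += 25.0                      # plant one grossly influential sample
print(np.argmax(influence_sketch_scores(X, y)))  # the planted sample, 0
```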

Predictive No-Reference Assessment of Video Quality

Title Predictive No-Reference Assessment of Video Quality
Authors Maria Torres Vega, Decebal Constantin Mocanu, Antonio Liotta
Abstract Among the various means to evaluate the quality of video streams, No-Reference (NR) methods have low computational cost and may be executed on thin clients. Thus, NR algorithms would be perfect candidates for real-time quality assessment, automated quality control and, particularly, adaptive mobile streaming. Yet, existing NR approaches are often inaccurate in comparison to Full-Reference (FR) algorithms, especially under lossy network conditions. In this work, we present an NR method that combines machine learning with simple NR metrics to achieve a quality index comparable in accuracy to the Video Quality Metric (VQM) Full-Reference algorithm. Our method is tested on an extensive dataset (960 videos), under lossy network conditions and considering nine different machine learning algorithms. Overall, we achieve over 97% correlation with VQM, while allowing real-time assessment of video quality of experience in realistic streaming scenarios.
Tasks
Published 2016-04-25
URL http://arxiv.org/abs/1604.07322v2
PDF http://arxiv.org/pdf/1604.07322v2.pdf
PWC https://paperswithcode.com/paper/predictive-no-reference-assessment-of-video
Repo
Framework
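
The approach above amounts to supervised regression from cheap NR features to an FR target. The sketch below uses synthetic placeholder features and a random forest; the actual NR metrics, the nine learning algorithms, and the VQM targets come from the paper's 960-video study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Learn a mapping from cheap NR features to an FR target score (VQM in
# the paper). Features and targets here are synthetic placeholders.
rng = np.random.default_rng(0)
n = 960                                   # the paper's dataset size
nr_features = rng.uniform(size=(n, 5))    # e.g. blur, blockiness, noise, ...
vqm = 2 * nr_features[:, 0] + nr_features[:, 1] ** 2 \
      + 0.1 * rng.normal(size=n)          # synthetic stand-in for VQM

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(nr_features[:800], vqm[:800])
pred = model.predict(nr_features[800:])
print(np.corrcoef(pred, vqm[800:])[0, 1])  # correlation with the FR target
```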

Wavelet-Based Semantic Features for Hyperspectral Signature Discrimination

Title Wavelet-Based Semantic Features for Hyperspectral Signature Discrimination
Authors Siwei Feng, Yuki Itoh, Mario Parente, Marco F. Duarte
Abstract Hyperspectral signature classification is a quantitative analysis approach for hyperspectral imagery which performs detection and classification of the constituent materials at the pixel level in the scene. The classification procedure can be operated directly on hyperspectral data or performed by using features extracted from the corresponding hyperspectral signatures containing information such as the signature’s energy or shape. In this paper, we describe a technique that applies non-homogeneous hidden Markov chain (NHMC) models to hyperspectral signature classification. The basic idea is to use statistical models (such as NHMC) to characterize wavelet coefficients which capture the spectrum semantics (i.e., structural information) at multiple levels. Experimental results show that the approach based on NHMC models can outperform existing relevant approaches on classification tasks.
Tasks
Published 2016-02-11
URL http://arxiv.org/abs/1602.03903v2
PDF http://arxiv.org/pdf/1602.03903v2.pdf
PWC https://paperswithcode.com/paper/wavelet-based-semantic-features-for
Repo
Framework
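
A sketch of the wavelet front end only, assuming the PyWavelets package: decompose a signature into multilevel wavelet coefficients and concatenate them as a feature vector. The NHMC modeling of those coefficients, which is the paper's actual contribution, is omitted here.

```python
import numpy as np
import pywt

def wavelet_features(signature, wavelet="db4", level=4):
    """Multilevel wavelet decomposition of a hyperspectral signature,
    concatenating raw coefficients as features. The paper instead models
    these coefficients with non-homogeneous hidden Markov chains."""
    coeffs = pywt.wavedec(signature, wavelet, level=level)
    return np.concatenate(coeffs)

# Toy signature: a smooth spectrum with an absorption dip near band 120.
bands = np.arange(224)
sig = 1.0 - 0.5 * np.exp(-((bands - 120) / 15.0) ** 2)
print(wavelet_features(sig).shape)
```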

Minimax Error of Interpolation and Optimal Design of Experiments for Variable Fidelity Data

Title Minimax Error of Interpolation and Optimal Design of Experiments for Variable Fidelity Data
Authors Alexey Zaytsev, Evgeny Burnaev
Abstract Engineering problems often involve data sources of variable fidelity with different costs of obtaining an observation. In particular, one can use both a cheap low-fidelity function (e.g. a computational experiment with a CFD code) and an expensive high-fidelity function (e.g. a wind tunnel experiment) to generate a data sample for constructing a regression model of the high-fidelity function. The key question in this setting is how the sizes of the high- and low-fidelity data samples should be selected in order to stay within a given computational budget and maximize the accuracy of the regression model before committing resources to data acquisition. In this paper we obtain minimax interpolation errors for single and variable fidelity scenarios for multivariate Gaussian process regression. Evaluation of the minimax errors allows us to identify cases when variable fidelity data provides better interpolation accuracy than exclusively high-fidelity data for the same computational budget. These results allow us to calculate the optimal shares of variable fidelity data samples under a given computational budget constraint. Real and synthetic data experiments suggest that using the obtained optimal shares often outperforms natural heuristics in terms of regression accuracy.
Tasks
Published 2016-10-21
URL http://arxiv.org/abs/1610.06731v3
PDF http://arxiv.org/pdf/1610.06731v3.pdf
PWC https://paperswithcode.com/paper/minimax-error-of-interpolation-and-optimal
Repo
Framework
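
The budget question the paper answers analytically can be posed empirically, as in this toy sketch: under a fixed budget, compare a GP fit on expensive high-fidelity points alone against one that pools cheap, biased low-fidelity points. The cost model, test functions, and naive pooling (rather than a proper variable-fidelity GP) are all assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f_hi = lambda x: np.sin(8 * x)                        # expensive truth
f_lo = lambda x: np.sin(8 * x) + 0.3 * np.cos(3 * x)  # cheap, biased proxy
c_hi, c_lo, budget = 10.0, 1.0, 100.0                 # assumed cost model

x_test = np.linspace(0, 1, 200)[:, None]

def rmse(x_hi, x_lo):
    """Fit a GP on pooled high- and low-fidelity data, score vs truth."""
    X = np.vstack([x_hi, x_lo])
    y = np.concatenate([f_hi(x_hi[:, 0]), f_lo(x_lo[:, 0])])
    gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-2).fit(X, y)
    return np.sqrt(np.mean((gp.predict(x_test) - f_hi(x_test[:, 0])) ** 2))

# Spend the whole budget on high fidelity, or split 50/50 by cost.
only_hi = rng.uniform(size=(int(budget / c_hi), 1))
mix_hi = rng.uniform(size=(5, 1))
mix_lo = rng.uniform(size=(50, 1))
print("high only:", rmse(only_hi, np.empty((0, 1))),
      "mixed:", rmse(mix_hi, mix_lo))
```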