October 18, 2019

Paper Group ANR 584

Covfefe: A Computer Vision Approach For Estimating Force Exertion

Title Covfefe: A Computer Vision Approach For Estimating Force Exertion
Authors Vaneet Aggarwal, Hamed Asadi, Mayank Gupta, Jae Joong Lee, Denny Yu
Abstract Cumulative exposure to repetitive and forceful activities may lead to musculoskeletal injuries which not only reduce workers’ efficiency and productivity, but also affect their quality of life. Widely accessible techniques for reliably detecting unsafe muscle force exertion levels during human activity are therefore necessary for workers’ well-being. However, measuring force exertion levels is challenging: existing techniques are intrusive, interfere with the human-machine interface, and/or are subjective in nature, and thus do not scale to all workers. In this work, we use face videos and photoplethysmography (PPG) signals to classify force exertion levels of 0%, 50%, and 100% (representing rest, moderate effort, and high effort), providing a non-intrusive and scalable approach. Efficient feature extraction approaches are investigated, including the standard deviation of the movement of different facial landmarks and the distances between peaks and troughs in the PPG signals. We note that the PPG signals can be obtained from the face videos themselves, yielding an efficient classification algorithm for force exertion levels using face videos alone. Based on data collected from 20 subjects, features extracted from the face videos give 90% accuracy in classification between the 100% dataset and the combined 0% and 50% datasets. Further combining the PPG signals provides 81.7% accuracy. The approach is also shown to robustly identify the correct force level when the person is talking, even though such datasets are not included in the training.
Tasks Photoplethysmography (PPG)
Published 2018-09-25
URL http://arxiv.org/abs/1809.09293v1
PDF http://arxiv.org/pdf/1809.09293v1.pdf
PWC https://paperswithcode.com/paper/covfefe-a-computer-vision-approach-for
Repo
Framework
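
As a rough illustration of the two feature families the abstract describes, here is a minimal sketch in Python (NumPy/SciPy); the landmark array layout, sampling rate, and peak-detector settings are assumptions for illustration, not details from the paper:

```python
import numpy as np
from scipy.signal import find_peaks

def landmark_motion_features(landmarks):
    """Standard deviation of frame-to-frame displacement per face landmark.

    landmarks: array of shape (frames, n_landmarks, 2) with (x, y) positions.
    Returns one feature per landmark.
    """
    disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)  # (frames-1, n_landmarks)
    return disp.std(axis=0)

def ppg_peak_trough_features(ppg, fs):
    """Mean vertical distance between PPG peaks and troughs."""
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))    # peaks >= 0.4 s apart
    troughs, _ = find_peaks(-ppg, distance=int(0.4 * fs))
    n = min(len(peaks), len(troughs))
    return float(np.mean(ppg[peaks[:n]] - ppg[troughs[:n]]))
```

Features like these would then feed a standard classifier over the three exertion levels.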

Approximate Survey Propagation for Statistical Inference

Title Approximate Survey Propagation for Statistical Inference
Authors Fabrizio Antenucci, Florent Krzakala, Pierfrancesco Urbani, Lenka Zdeborová
Abstract The approximate message passing (AMP) algorithm has enjoyed considerable attention in the last decade. In this paper we introduce a variant of the AMP algorithm that takes into account the glassy nature of the system under consideration. We call this algorithm approximate survey propagation (ASP) and derive it for a class of low-rank matrix estimation problems. We derive the state evolution for the ASP algorithm and prove that it reproduces the one-step replica symmetry breaking (1RSB) fixed-point equations, well known in the physics of disordered systems. Our derivation thus gives a concrete algorithmic meaning to the 1RSB equations, which is of independent interest. We characterize the performance of ASP in terms of convergence and mean-squared error as a function of the free Parisi parameter s. We conclude that when there is a model mismatch between the true generative model and the inference model, the performance of AMP rapidly degrades both in terms of MSE and of convergence, while ASP converges in a larger regime and can reach lower errors. Among other results, our analysis leads us to a striking hypothesis: whenever s (or other parameters) can be set in such a way that the Nishimori condition $M=Q>0$ is restored, the corresponding algorithm is able to reach mean-squared error as low as the Bayes-optimal error obtained when the model and its parameters are known and exactly matched in the inference procedure.
Tasks
Published 2018-07-03
URL http://arxiv.org/abs/1807.01296v1
PDF http://arxiv.org/pdf/1807.01296v1.pdf
PWC https://paperswithcode.com/paper/approximate-survey-propagation-for
Repo
Framework

Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency

Title Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency
Authors Liqian Ma, Xu Jia, Stamatios Georgoulis, Tinne Tuytelaars, Luc Van Gool
Abstract Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner- and cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network, which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large inner- and cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT not only translates the source image to diverse instances in the target domain, but also preserves semantic consistency during the process.
Tasks Image-to-Image Translation, Unsupervised Image-To-Image Translation
Published 2018-05-28
URL http://arxiv.org/abs/1805.11145v4
PDF http://arxiv.org/pdf/1805.11145v4.pdf
PWC https://paperswithcode.com/paper/exemplar-guided-unsupervised-image-to-image
Repo
Framework
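
The style-transfer step the abstract names is Adaptive Instance Normalization (AdaIN): normalize each content channel, then rescale it with the exemplar's per-channel statistics. A minimal NumPy sketch of that core operation (channel-first layout assumed; the surrounding encoder/decoder and feature masks are omitted):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization on feature maps of shape (C, H, W):
    whiten each content channel, then apply the style channel's mean/std,
    transferring the exemplar's style statistics to the content."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_sd = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_sd = style.std(axis=(1, 2), keepdims=True)
    return s_sd * (content - c_mu) / (c_sd + eps) + s_mu
```

After this operation, each output channel carries the style feature map's mean and standard deviation while keeping the content's spatial structure.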

Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples

Title Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples
Authors Hajime Ono, Tsubasa Takahashi, Kazuya Kakizaki
Abstract How can we make machine learning provably robust against adversarial examples in a scalable way? Certified defense methods, which ensure $\epsilon$-robustness, consume huge resources and can therefore achieve only a small degree of robustness in practice. Lipschitz margin training (LMT) is a scalable certified defense, but it too achieves only limited robustness, due to over-regularization. How can we make certified defense more efficient? We present LC-LMT, a lightweight Lipschitz margin training that solves the above problem. Our method has the following properties: (a) efficiency: it can achieve $\epsilon$-robustness at an early epoch, and (b) robustness: it has the potential to reach higher robustness than LMT. In the evaluation, we demonstrate the benefits of the proposed method. LC-LMT achieves the required robustness more than 30 epochs earlier than LMT on MNIST, and shows more than 90% accuracy against both legitimate and adversarial inputs.
Tasks
Published 2018-11-20
URL http://arxiv.org/abs/1811.08080v1
PDF http://arxiv.org/pdf/1811.08080v1.pdf
PWC https://paperswithcode.com/paper/lightweight-lipschitz-margin-training-for
Repo
Framework
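
Lipschitz-margin certification rests on a simple bound: a network that is L-Lipschitz in the input L2 norm cannot change its top-1 prediction within radius (top1 − top2) / (√2 · L), so training widens the logit margin to enlarge that radius. A hedged sketch of the certificate check (illustrative numbers only; estimating L for a real network is the hard part):

```python
import numpy as np

def certified_radius(logits, lipschitz_const):
    """Margin-based certificate: for an L-Lipschitz network (L2 input norm),
    the top-1 class cannot change within radius (top1 - top2) / (sqrt(2) * L).
    LMT-style training enlarges this radius by widening the logit margin."""
    top2 = np.sort(logits)[-2:]            # [second largest, largest]
    margin = top2[1] - top2[0]
    return margin / (np.sqrt(2) * lipschitz_const)

def is_certified(logits, lipschitz_const, eps):
    """True if the prediction is provably unchanged for all L2 perturbations
    of norm at most eps."""
    return certified_radius(logits, lipschitz_const) >= eps
```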

Scalable Multi-Task Gaussian Process Tensor Regression for Normative Modeling of Structured Variation in Neuroimaging Data

Title Scalable Multi-Task Gaussian Process Tensor Regression for Normative Modeling of Structured Variation in Neuroimaging Data
Authors Seyed Mostafa Kia, Christian F. Beckmann, Andre F. Marquand
Abstract Most brain disorders are very heterogeneous in terms of their underlying biology, and developing analysis methods to model such heterogeneity is a major challenge. A promising approach is to use probabilistic regression methods to estimate normative models of brain function using (f)MRI data, then use these to map variation across individuals in clinical populations (e.g., via anomaly detection). To fully capture individual differences, it is crucial to statistically model the patterns of correlation across different brain regions and individuals. However, this is very challenging for neuroimaging data because of high dimensionality and highly structured patterns of correlation across multiple axes. Here, we propose a general and flexible multi-task learning framework to address this problem. Our model uses a tensor-variate Gaussian process in a Bayesian mixed-effects model and makes use of Kronecker algebra and a low-rank approximation to scale efficiently to multi-way neuroimaging data at the whole-brain level. On a publicly available clinical fMRI dataset, we show that our computationally affordable approach substantially improves detection sensitivity over both a mass-univariate normative model and a classifier that, unlike our approach, has full access to the clinical labels.
Tasks Anomaly Detection, Multi-Task Learning
Published 2018-07-31
URL http://arxiv.org/abs/1808.00036v2
PDF http://arxiv.org/pdf/1808.00036v2.pdf
PWC https://paperswithcode.com/paper/scalable-multi-task-gaussian-process-tensor
Repo
Framework
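
The Kronecker-algebra trick the abstract relies on can be illustrated on its own: for data on a grid of modes, a system with covariance K1 ⊗ K2 + σ²I is solved through the per-factor eigendecompositions without ever forming the full matrix. A minimal sketch (ignoring the paper's low-rank approximation and mixed-effects structure):

```python
import numpy as np

def kron_gp_solve(K1, K2, sigma2, y):
    """Solve (K1 kron K2 + sigma2*I) alpha = y without forming the Kronecker
    product. If Ki = Qi Li Qi^T, the full matrix has eigenvectors Q1 kron Q2
    and eigenvalues l1_i * l2_j + sigma2, so the solve is three small
    matrix products plus an elementwise division."""
    L1, Q1 = np.linalg.eigh(K1)
    L2, Q2 = np.linalg.eigh(K2)
    Y = y.reshape(len(L1), len(L2))
    S = Q1.T @ Y @ Q2                        # rotate into the joint eigenbasis
    S = S / (np.outer(L1, L2) + sigma2)      # divide by the joint eigenvalues
    return (Q1 @ S @ Q2.T).reshape(-1)       # rotate back
```

The cost is dominated by the factor eigendecompositions, rather than a cubic solve in the product dimension, which is what makes whole-brain scales reachable.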

Learning to Explicitate Connectives with Seq2Seq Network for Implicit Discourse Relation Classification

Title Learning to Explicitate Connectives with Seq2Seq Network for Implicit Discourse Relation Classification
Authors Wei Shi, Vera Demberg
Abstract Implicit discourse relation classification is one of the most difficult steps in discourse parsing. The difficulty stems from the fact that the coherence relation must be inferred from the content of the discourse relational arguments. Therefore, an effective encoding of the relational arguments is of crucial importance. We propose a new model for implicit discourse relation classification, which consists of a classifier and a sequence-to-sequence model trained to generate a representation of the discourse relational arguments by predicting the relational arguments together with a suitable implicit connective. Training is possible because such implicit connectives have been annotated as part of the PDTB corpus. Augmented with a memory network, our model generates more refined representations for the task. On the now-standard 11-way classification, our method outperforms previous state-of-the-art systems on the PDTB benchmark in multiple settings, including cross-validation.
Tasks Implicit Discourse Relation Classification, Relation Classification
Published 2018-11-05
URL http://arxiv.org/abs/1811.01697v2
PDF http://arxiv.org/pdf/1811.01697v2.pdf
PWC https://paperswithcode.com/paper/learning-to-explicitate-connectives-with
Repo
Framework

A note on hyperparameters in black-box adversarial examples

Title A note on hyperparameters in black-box adversarial examples
Authors Jamie Hayes
Abstract Since Biggio et al. (2013) and Szegedy et al. (2013) first drew attention to adversarial examples, there has been a flood of research into defending and attacking machine learning models. However, almost all proposed attacks assume white-box access to a model. In other words, the attacker is assumed to have perfect knowledge of the model’s weights and architecture. With this insider knowledge, a white-box attack can leverage gradient information to craft adversarial examples. Black-box attacks assume no knowledge of the model weights or architecture. These attacks craft adversarial examples using only the information contained in the logits or the hard classification label. Here, we assume the attacker can use the logits in order to find an adversarial example. Empirically, we show that 2-sided stochastic gradient estimation techniques are not sensitive to scaling parameters, and can be used to mount powerful black-box attacks requiring relatively few model queries.
Tasks
Published 2018-11-15
URL http://arxiv.org/abs/1811.06539v1
PDF http://arxiv.org/pdf/1811.06539v1.pdf
PWC https://paperswithcode.com/paper/a-note-on-hyperparameters-in-black-box
Repo
Framework
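
The 2-sided stochastic gradient estimator the note studies can be sketched in a few lines: probe the loss in random antithetic directions and average the finite differences. The query budget and smoothing parameter δ below are illustrative, and `f` stands for any scalar loss computable from the model's logits:

```python
import numpy as np

def two_sided_grad_estimate(f, x, n_queries=50, delta=1e-3, rng=None):
    """Two-sided (antithetic) gradient estimate from function values only,
    the core of query-based black-box attacks:
        g ~= (1/m) * sum_i u_i * (f(x + d*u_i) - f(x - d*u_i)) / (2d)
    with Gaussian directions u_i, so E[u u^T grad f] = grad f."""
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.normal(size=x.shape)
        g += u * (f(x + delta * u) - f(x - delta * u)) / (2 * delta)
    return g / n_queries
```

An attack would then take a signed or projected step along this estimate, exactly as a white-box attack would along the true gradient.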

Reinforced Extractive Summarization with Question-Focused Rewards

Title Reinforced Extractive Summarization with Question-Focused Rewards
Authors Kristjan Arumae, Fei Liu
Abstract We investigate a new training paradigm for extractive summarization. Traditionally, human abstracts are used to derive gold-standard labels for extraction units. However, the labels are often inaccurate, because human abstracts and source documents cannot be easily aligned at the word level. In this paper, we convert human abstracts into a set of Cloze-style comprehension questions. System summaries are encouraged to preserve salient source content useful for answering the questions and to share common words with the abstracts. We use reinforcement learning to explore the space of possible extractive summaries and introduce a question-focused reward function to promote concise, fluent, and informative summaries. Our experiments show that the proposed method is effective. It surpasses state-of-the-art systems on the standard summarization dataset.
Tasks
Published 2018-05-25
URL http://arxiv.org/abs/1805.10392v2
PDF http://arxiv.org/pdf/1805.10392v2.pdf
PWC https://paperswithcode.com/paper/reinforced-extractive-summarization-with
Repo
Framework
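
A minimal sketch of the reward idea, assuming a toy whitespace tokenization and treating "answerable" as simple word containment (the paper's actual reward uses a trained QA component, so this is only the shape of the computation):

```python
def make_cloze_questions(abstract_tokens, answer_words):
    """Blank each answer word out of the abstract to form (context, answer)
    Cloze-style questions."""
    return [([t if t != a else "_____" for t in abstract_tokens], a)
            for a in answer_words]

def question_reward(summary_tokens, questions):
    """Fraction of Cloze answers recoverable from the extracted summary:
    a proxy for 'the summary preserves content needed to answer questions'.
    This scalar would be the RL reward for the extractor policy."""
    if not questions:
        return 0.0
    summary = set(summary_tokens)
    return sum(ans in summary for _, ans in questions) / len(questions)
```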

Analysis and Optimization of Deep Counterfactual Value Networks

Title Analysis and Optimization of Deep Counterfactual Value Networks
Authors Patryk Hopner, Eneldo Loza Mencía
Abstract Recently a strong poker-playing algorithm called DeepStack was published, which is able to find an approximate Nash equilibrium during gameplay by using heuristic values of future states predicted by deep neural networks. This paper analyzes new ways of encoding the inputs and outputs of DeepStack’s deep counterfactual value networks based on traditional abstraction techniques, as well as an unabstracted encoding, which was able to increase the network’s accuracy.
Tasks
Published 2018-07-02
URL http://arxiv.org/abs/1807.00900v2
PDF http://arxiv.org/pdf/1807.00900v2.pdf
PWC https://paperswithcode.com/paper/analysis-and-optimization-of-deep
Repo
Framework

Highly-Economized Multi-View Binary Compression for Scalable Image Clustering

Title Highly-Economized Multi-View Binary Compression for Scalable Image Clustering
Authors Zheng Zhang, Li Liu, Jie Qin, Fan Zhu, Fumin Shen, Yong Xu, Ling Shao, Heng Tao Shen
Abstract How to economically cluster large-scale multi-view images is a long-standing problem in computer vision. To tackle this challenge, we introduce a novel approach named Highly-economized Scalable Image Clustering (HSIC) that radically surpasses conventional image clustering methods via binary compression. We unify binary representation learning and efficient binary cluster structure learning in a joint framework. In particular, common binary representations are learned by exploiting both sharable and individual information across multiple views to capture their underlying correlations. Meanwhile, cluster assignment with robust binary centroids is also performed via effective discrete optimization under an L21-norm constraint. By this means, heavy continuous-valued Euclidean distance computations are reduced to efficient binary XOR operations during the clustering procedure. To the best of our knowledge, HSIC is the first binary clustering work specifically designed for scalable multi-view image clustering. Extensive experimental results on four large-scale image datasets show that HSIC consistently outperforms the state-of-the-art approaches, whilst significantly reducing computational time and memory footprint.
Tasks Image Clustering, Representation Learning
Published 2018-09-17
URL http://arxiv.org/abs/1809.05992v1
PDF http://arxiv.org/pdf/1809.05992v1.pdf
PWC https://paperswithcode.com/paper/highly-economized-multi-view-binary
Repo
Framework
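
The XOR-for-Euclidean substitution is easy to demonstrate in isolation: pack the binary codes into machine words, then distances to centroids become XOR plus a bit count. A sketch with NumPy bit packing (the joint representation/cluster learning around it is omitted):

```python
import numpy as np

def pack_binary_codes(B):
    """Pack an (n, d) 0/1 code matrix into (n, ceil(d/8)) uint8 words."""
    return np.packbits(B.astype(np.uint8), axis=1)

def hamming_distances(packed_query, packed_centroids):
    """Hamming distance from one packed code to each packed centroid via
    XOR + popcount; this replaces the Euclidean distance computation in
    the cluster-assignment loop."""
    x = np.bitwise_xor(packed_query, packed_centroids)  # broadcasts over rows
    return np.unpackbits(x, axis=1).sum(axis=1)
```

On packed codes, a distance evaluation touches d/8 bytes instead of d floats, which is where the time and memory savings come from.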

Webpage Saliency Prediction with Two-stage Generative Adversarial Networks

Title Webpage Saliency Prediction with Two-stage Generative Adversarial Networks
Authors Yu Li, Ya Zhang
Abstract Web page saliency prediction is a challenging problem in image transformation and computer vision. In this paper, we propose a new model that incorporates web page outline information to predict people’s regions of interest in web pages. For each web page image, our model generates a saliency map indicating the regions of interest. A two-stage generative adversarial network is proposed, and image outline information is introduced for better transfer. Experimental results on the FIWI dataset show that our model performs better in terms of saliency prediction.
Tasks Saliency Prediction
Published 2018-05-29
URL http://arxiv.org/abs/1805.11374v1
PDF http://arxiv.org/pdf/1805.11374v1.pdf
PWC https://paperswithcode.com/paper/webpage-saliency-prediction-with-two-stage
Repo
Framework

Unexpected sawtooth artifact in beat-to-beat pulse transit time measured from patient monitor data

Title Unexpected sawtooth artifact in beat-to-beat pulse transit time measured from patient monitor data
Authors Yu-Ting Lin, Yu-Lun Lo, Chen-Yun Lin, Hau-Tieng Wu, Martin G. Frasch
Abstract Objective: It is increasingly popular to collect as much data as possible in the hospital setting from clinical monitors for research purposes. However, in this setup the data calibration issue is often not discussed and, rather, implicitly assumed, while the clinical monitors might not be designed for the data analysis purpose. We hypothesize that this calibration issue for a secondary analysis may become an important source of artifacts in patient monitor data. We test an off-the-shelf integrated photoplethysmography (PPG) and electrocardiogram (ECG) monitoring device for its ability to yield a reliable pulse transit time (PTT) signal. Approach: This is a retrospective clinical study using two databases: one containing 35 subjects who underwent laparoscopic cholecystectomy, another containing 22 subjects who underwent a spontaneous breathing test in the intensive care unit. All data sets include recordings of PPG and ECG using a commonly deployed patient monitor. We calculated the PTT signal offline. Main Results: We report a novel constant oscillatory pattern in the PTT signal and identify this pattern as a sawtooth artifact. We apply an approach based on the de-shape method to visualize, quantify and validate this sawtooth artifact. Significance: The PPG and ECG signals not designed for the PTT evaluation may contain unwanted artifacts. The PTT signal should be calibrated before analysis to avoid erroneous interpretation of its physiological meaning.
Tasks Calibration, Photoplethysmography (PPG)
Published 2018-08-27
URL https://arxiv.org/abs/1809.01722v2
PDF https://arxiv.org/pdf/1809.01722v2.pdf
PWC https://paperswithcode.com/paper/unexpected-sawtooth-artifact-in-beat-to-beat
Repo
Framework
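
Computing beat-to-beat PTT from the two channels can be sketched as "time from each ECG R-peak to the next PPG pulse peak". The detector thresholds below are illustrative, and real monitor data would need exactly the calibration care the paper argues for:

```python
import numpy as np
from scipy.signal import find_peaks

def beat_to_beat_ptt(ecg, ppg, fs):
    """Beat-to-beat pulse transit time in seconds: for each ECG R-peak,
    the delay until the next PPG pulse peak. Peak-detector settings here
    are placeholders, not a clinically validated configuration."""
    r_peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
    p_peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
    ptt = []
    for r in r_peaks:
        nxt = p_peaks[p_peaks > r]
        if nxt.size:
            ptt.append((nxt[0] - r) / fs)
    return np.array(ptt)
```

Plotted beat by beat, a quantization or resampling mismatch between the two monitor channels would show up in this series as exactly the kind of sawtooth oscillation the paper reports.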

Greybox fuzzing as a contextual bandits problem

Title Greybox fuzzing as a contextual bandits problem
Authors Ketan Patil, Aditya Kanade
Abstract Greybox fuzzing is one of the most useful and effective techniques for bug detection in large-scale application programs, and it requires only a minimal amount of instrumentation. American Fuzzy Lop (AFL) is a popular coverage-based evolutionary greybox fuzzing tool. AFL performs extremely well at fuzz testing large applications and finding critical vulnerabilities, but it relies on many heuristics for deciding the favored test case(s), skipping test cases during fuzzing, and assigning fuzzing iterations to test case(s). In this work, we aim to replace the heuristics AFL uses when assigning fuzzing iterations to a test case during random fuzzing. We formalize this problem as a contextual bandit problem and propose an algorithm to solve it. We have implemented our approach on top of AFL, replacing AFL’s heuristics with a model learned through the policy gradient method. Our learning algorithm selects the multiplier for the number of fuzzing iterations to be assigned to a test case during random fuzzing, given a fixed-length substring of the test case to be fuzzed. We fuzz the substring with this new energy value and continuously update the policy based on the interesting test cases produced.
Tasks Multi-Armed Bandits
Published 2018-06-11
URL http://arxiv.org/abs/1806.03806v1
PDF http://arxiv.org/pdf/1806.03806v1.pdf
PWC https://paperswithcode.com/paper/greybox-fuzzing-as-a-contextual-bandits
Repo
Framework
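
A toy version of the bandit formulation: a softmax policy over candidate energy multipliers, conditioned on test-case features and updated by REINFORCE. The feature vector, arm set, and learning rate are placeholders, not the paper's configuration:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class EnergyBandit:
    """REINFORCE-style contextual bandit over fuzzing-energy multipliers.
    Context: a feature vector of the test case; arms: candidate multipliers
    for the number of fuzzing iterations. Illustrative sketch only."""

    def __init__(self, n_features, multipliers, lr=0.1):
        self.multipliers = multipliers
        self.W = np.zeros((len(multipliers), n_features))
        self.lr = lr

    def act(self, context, rng):
        probs = softmax(self.W @ context)
        arm = rng.choice(len(self.multipliers), p=probs)
        return arm, probs

    def update(self, context, arm, reward):
        """Policy-gradient step: reward * grad of log pi(arm | context)."""
        probs = softmax(self.W @ context)
        grad = -np.outer(probs, context)
        grad[arm] += context
        self.W += self.lr * reward * grad
```

In the fuzzing loop, `reward` would be derived from the interesting test cases (new coverage) produced at the chosen energy level.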

Self-Bounded Prediction Suffix Tree via Approximate String Matching

Title Self-Bounded Prediction Suffix Tree via Approximate String Matching
Authors Dongwoo Kim, Christian Walder
Abstract Prediction suffix trees (PST) provide an effective tool for sequence modelling and prediction. Current prediction techniques for PSTs rely on exact matching between the suffix of the current sequence and the previously observed sequences. We present a provably correct algorithm for learning a PST with approximate suffix matching by relaxing the exact matching condition. We then present a self-bounded enhancement of our algorithm in which the depth of the suffix tree grows automatically in response to the model’s performance on a training sequence. Through experiments on synthetic datasets as well as three real-world datasets, we show that the approximate-matching PST yields better predictive performance than the other PST variants.
Tasks
Published 2018-02-09
URL http://arxiv.org/abs/1802.03184v2
PDF http://arxiv.org/pdf/1802.03184v2.pdf
PWC https://paperswithcode.com/paper/self-bounded-prediction-suffix-tree-via
Repo
Framework
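
A dictionary-based sketch of PST prediction with the approximate-matching relaxation: back off to the deepest context that matches the recent history with at most k mismatches, where k = 0 recovers the exact-match PST. This illustrates the idea only, not the paper's provably correct online algorithm or its self-bounded depth growth:

```python
from collections import Counter, defaultdict

def build_pst(sequence, max_depth):
    """Count next-symbol frequencies for every context up to max_depth."""
    counts = defaultdict(Counter)
    for i in range(len(sequence)):
        for d in range(max_depth + 1):
            if i - d < 0:
                break
            counts[tuple(sequence[i - d:i])][sequence[i]] += 1
    return counts

def mismatches(a, b):
    return sum(x != y for x, y in zip(a, b))

def predict(counts, history, max_depth, k=1):
    """Predict the next symbol from the deepest contexts whose suffix
    matches the history with at most k mismatches (k=0: exact-match PST)."""
    for d in range(min(max_depth, len(history)), -1, -1):
        suffix = tuple(history[len(history) - d:])
        pooled = None
        for ctx, c in counts.items():
            if len(ctx) == d and mismatches(ctx, suffix) <= k:
                pooled = (pooled or Counter()) + c
        if pooled:
            return pooled.most_common(1)[0][0]
    return None
```

Relaxing the match pools statistics from nearby contexts, which is exactly why approximate matching can help on noisy sequences.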

A Distributed Collaborative Filtering Algorithm Using Multiple Data Sources

Title A Distributed Collaborative Filtering Algorithm Using Multiple Data Sources
Authors Mohamed Reda Bouadjenek, Esther Pacitti, Maximilien Servajean, Florent Masseglia, Amr El Abbadi
Abstract Collaborative Filtering (CF) is one of the most commonly used recommendation methods. CF consists in predicting whether, or how much, a user will like (or dislike) an item by leveraging the knowledge of the user’s preferences as well as that of other users. In practice, users interact and express their opinion on only a small subset of items, which makes the corresponding user-item rating matrix very sparse. Such data sparsity yields two main problems for recommender systems: (1) the lack of data to effectively model users’ preferences, and (2) the lack of data to effectively model item characteristics. However, there are often many other data sources that are available to a recommender system provider, which can describe user interests and item characteristics (e.g., users’ social network, tags associated to items, etc.). These valuable data sources may supply useful information to enhance a recommendation system in modeling users’ preferences and item characteristics more accurately and thus, hopefully, to make recommenders more precise. For various reasons, these data sources may be managed by clusters of different data centers, thus requiring the development of distributed solutions. In this paper, we propose a new distributed collaborative filtering algorithm, which exploits and combines multiple and diverse data sources to improve recommendation quality. Our experimental evaluation using real datasets shows the effectiveness of our algorithm compared to state-of-the-art recommendation algorithms.
Tasks Recommendation Systems
Published 2018-07-16
URL http://arxiv.org/abs/1807.05853v1
PDF http://arxiv.org/pdf/1807.05853v1.pdf
PWC https://paperswithcode.com/paper/a-distributed-collaborative-filtering
Repo
Framework
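
To illustrate the multi-source idea only (not the paper's distributed algorithm), a user-based CF predictor can blend similarities computed from the sparse rating matrix with similarities from a second source, such as tag or social-network features:

```python
import numpy as np

def cosine_sim(M):
    """Pairwise cosine similarity between rows, safe for all-zero rows."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    X = M / norms
    return X @ X.T

def predict_rating(ratings, side_features, user, item, alpha=0.5):
    """User-based CF where the user-user similarity blends two sources:
    sim = alpha * sim_ratings + (1 - alpha) * sim_side.
    Zeros in `ratings` are treated as 'unrated'. Illustrative sketch of
    combining data sources, not the paper's exact method."""
    sim = alpha * cosine_sim(ratings) + (1 - alpha) * cosine_sim(side_features)
    w = sim[user].copy()
    w[user] = 0.0                     # exclude the target user
    w = w * (ratings[:, item] > 0)    # only neighbours who rated the item
    if w.sum() == 0:
        return 0.0
    return float(w @ ratings[:, item] / w.sum())
```

In a distributed deployment, each data center would compute its own similarity term locally and only the blended scores (or top neighbours) would be exchanged.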