Paper Group ANR 133
Convergence rates for optimised adaptive importance samplers. DAZSL: Dynamic Attributes for Zero-Shot Learning. Visualizing the Invisible: Occluded Vehicle Segmentation and Recovery. Identifying Offensive Posts and Targeted Offense from Twitter. AutoPhase: Compiler Phase-Ordering for High Level Synthesis with Deep Reinforcement Learning. The Semantic Asset Administration Shell. Skeleton based Zero Shot Action Recognition in Joint Pose-Language Semantic Space. Predicting Annotation Difficulty to Improve Task Routing and Model Performance for Biomedical Information Extraction. A Choquet Fuzzy Integral Vertical Bagging Classifier for Mobile Telematics Data Analysis. Auditing and Achieving Intersectional Fairness in Classification Problems. Approximating intractable short rate model distribution with neural network. A Preliminary Study on Optimal Placement of Cameras. Multi-way Encoding for Robustness. AirwayNet: A Voxel-Connectivity Aware Approach for Accurate Airway Segmentation Using Convolutional Neural Networks. Heterogeneous Graph-based Knowledge Transfer for Generalized Zero-shot Learning.
Convergence rates for optimised adaptive importance samplers
Title | Convergence rates for optimised adaptive importance samplers |
Authors | Ömer Deniz Akyildiz, Joaquín Míguez |
Abstract | Adaptive importance samplers are Monte Carlo algorithms for estimating expectations with respect to some target distribution that adapt themselves over a sequence of iterations to obtain better estimators. Although it is straightforward to show that they have the same $\mathcal{O}(1/\sqrt{N})$ convergence rate as standard importance samplers, where $N$ is the number of Monte Carlo samples, the behaviour of adaptive importance samplers over the number of iterations has been left relatively unexplored. In this work, we investigate an adaptation strategy based on convex optimisation which leads to a class of adaptive importance samplers termed optimised adaptive importance samplers (OAIS). These samplers rely on the iterative minimisation of the $\chi^2$-divergence between an exponential-family proposal and the target. The analysed algorithms are closely related to the class of adaptive importance samplers which minimise the variance of the weight function. We first prove non-asymptotic error bounds for the mean squared errors (MSEs) of these algorithms, which explicitly depend on the number of iterations and the number of samples together. The non-asymptotic bounds derived in this paper imply that when the target belongs to the exponential family, the $L_2$ errors of the optimised samplers converge to the perfect Monte Carlo sampling error $\mathcal{O}(1/\sqrt{N})$. We also show that when the target is not from the exponential family, the asymptotic error rate is $\mathcal{O}(\sqrt{\rho^\star/N})$ where $\rho^\star$ is the minimum $\chi^2$-divergence between the target and an exponential-family proposal. |
Tasks | |
Published | 2019-03-28 |
URL | https://arxiv.org/abs/1903.12044v3 |
https://arxiv.org/pdf/1903.12044v3.pdf | |
PWC | https://paperswithcode.com/paper/convergence-rates-for-optimised-adaptive |
Repo | |
Framework | |
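A minimal numerical sketch of the adaptation idea behind OAIS, under assumed settings: a 1D unnormalised target, a Gaussian proposal whose mean is adapted, and a plain stochastic-gradient step on the second moment of the importance weights (which differs from the χ²-divergence only by a constant when the target is normalised). This is an illustration of the strategy described in the abstract, not the paper's exact algorithm.

```python
import numpy as np

# Hypothetical 1D target, known only up to a constant: an unnormalised Gaussian mixture.
def target_unnorm(x):
    return 0.6 * np.exp(-0.5 * (x - 2.0) ** 2) + 0.4 * np.exp(-0.5 * (x + 1.0) ** 2)

def oais_step(mu, sigma, n_samples, step_size, rng):
    """One adaptation step: sample from the Gaussian proposal q(x; mu, sigma),
    estimate the gradient of E_q[w^2] with respect to mu, and take a gradient step."""
    x = rng.normal(mu, sigma, size=n_samples)
    q = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    w = target_unnorm(x) / q                      # importance weights
    score = (x - mu) / sigma ** 2                 # d/dmu log q(x; mu, sigma)
    grad = -np.mean(w ** 2 * score)               # since grad E_q[w^2] = -E_q[w^2 * score]
    mu_next = mu - step_size * grad
    estimate = np.sum(w * x) / np.sum(w)          # self-normalised IS estimate of E_pi[X]
    return mu_next, estimate

rng = np.random.default_rng(0)
mu = 0.0
for _ in range(300):
    mu, est = oais_step(mu, sigma=1.5, n_samples=500, step_size=0.01, rng=rng)
print(f"adapted proposal mean: {mu:.3f}, IS estimate of E[X]: {est:.3f}")
```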
DAZSL: Dynamic Attributes for Zero-Shot Learning
Title | DAZSL: Dynamic Attributes for Zero-Shot Learning |
Authors | Jonathan D. Jones, Tae Soo Kim, Michael Peven, Zihao Xiao, Jin Bai, Yi Zhang, Weichao Qiu, Alan Yuille, Gregory D. Hager |
Abstract | Inspired by earlier applications to still images, work on zero-shot activity recognition has largely focused on image-derived representations without regard to the video's temporal aspect. Since these methods cannot capture the time evolution of an activity, reversible actions such as entering and exiting a car are often indistinguishable. In this work, we present a simple and elegant framework for modeling activities using dynamic attribute signatures. We show that specifying temporal structure greatly increases the discriminative power of zero-shot systems. We also extend our method to form, to our knowledge, the first framework for zero-shot joint segmentation and classification of activities in videos. We evaluate our method on the Olympic Sports and UCF101 datasets, where our model establishes a new state of the art under multiple experimental paradigms. We also demonstrate the first results in zero-shot decoding of complex action sequences on a widely used surgical dataset. Lastly, we show that we can even eliminate the need to train attribute detectors by using off-the-shelf object detectors to recognize activities in challenging security footage. |
Tasks | Action Detection, Activity Detection, Activity Recognition, Video Classification, Zero-Shot Learning |
Published | 2019-12-08 |
URL | https://arxiv.org/abs/1912.03613v2 |
https://arxiv.org/pdf/1912.03613v2.pdf | |
PWC | https://paperswithcode.com/paper/zero-shot-recognition-of-complex-action |
Repo | |
Framework | |
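As a toy illustration of why temporal structure helps with reversible actions such as entering versus exiting a car (this is not the paper's model), the sketch below scores a sequence of per-frame attribute detections against hand-written dynamic attribute signatures and picks the best-matching class; the attribute, signatures, and simulated detector output are all made up.

```python
import numpy as np

T = 50
t = np.linspace(0.0, 1.0, T)

# Hypothetical class signatures for the attribute "person visible outside the car":
signatures = {
    "enter_car": 1.0 - t,   # the attribute fades as the person gets in
    "exit_car": t,          # the attribute grows as the person gets out
}

def classify(frame_scores):
    """Pick the class whose temporal signature best correlates with the
    observed per-frame attribute detector scores."""
    return max(signatures, key=lambda c: np.corrcoef(frame_scores, signatures[c])[0, 1])

# Simulated detector output for someone entering a car (noisy, decreasing).
obs = np.clip(1.0 - t + 0.1 * np.random.default_rng(0).normal(size=T), 0.0, 1.0)
print(classify(obs))  # -> "enter_car"
```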
Visualizing the Invisible: Occluded Vehicle Segmentation and Recovery
Title | Visualizing the Invisible: Occluded Vehicle Segmentation and Recovery |
Authors | Xiaosheng Yan, Yuanlong Yu, Feigege Wang, Wenxi Liu, Shengfeng He, Jia Pan |
Abstract | In this paper, we propose a novel iterative multi-task framework to complete the segmentation mask of an occluded vehicle and recover the appearance of its invisible parts. In particular, to improve the quality of the segmentation completion, we present two coupled discriminators and introduce an auxiliary 3D model pool for sampling authentic silhouettes as adversarial samples. In addition, we propose a two-path structure with a shared network to enhance the appearance recovery capability. By iteratively performing the segmentation completion and the appearance recovery, the results are progressively refined. To evaluate our method, we present the Occluded Vehicle dataset, containing synthetic and real-world occluded vehicle images. We conduct comparison experiments on this dataset and demonstrate that our model outperforms the state of the art at recovering the segmentation mask and appearance of occluded vehicles. Moreover, we demonstrate that our appearance recovery approach can benefit occluded vehicle tracking in real-world videos. |
Tasks | |
Published | 2019-07-22 |
URL | https://arxiv.org/abs/1907.09381v1 |
https://arxiv.org/pdf/1907.09381v1.pdf | |
PWC | https://paperswithcode.com/paper/visualizing-the-invisible-occluded-vehicle |
Repo | |
Framework | |
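A structural sketch of the iterative two-branch idea, with tiny stand-in networks and random tensors; the actual architecture, losses, 3D model pool, and coupled discriminators from the paper are omitted.

```python
import torch
import torch.nn as nn

class SegCompletion(nn.Module):
    """Stand-in segmentation-completion branch: image + visible mask -> full mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

class AppearanceRecovery(nn.Module):
    """Stand-in appearance-recovery branch: image + full mask -> recovered image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, image, full_mask):
        return self.net(torch.cat([image, full_mask], dim=1))

seg, app = SegCompletion(), AppearanceRecovery()
occluded = torch.rand(1, 3, 64, 64)              # occluded vehicle crop
visible_mask = torch.rand(1, 1, 64, 64).round()  # mask of the visible parts

recovered = occluded
for _ in range(3):                               # progressive refinement
    full_mask = seg(recovered, visible_mask)     # complete the segmentation mask
    recovered = app(recovered, full_mask)        # recover the invisible appearance
print(full_mask.shape, recovered.shape)
```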
Identifying Offensive Posts and Targeted Offense from Twitter
Title | Identifying Offensive Posts and Targeted Offense from Twitter |
Authors | Haimin Zhang, Debanjan Mahata, Simra Shahid, Laiba Mehnaz, Sarthak Anand, Yaman Singla, Rajiv Ratn Shah, Karan Uppal |
Abstract | In this paper we present our approach and system description for Sub-task A and Sub-task B of SemEval 2019 Task 6: Identifying and Categorizing Offensive Language in Social Media. Sub-task A involves identifying whether a given tweet is offensive, and Sub-task B involves detecting whether an offensive tweet is targeted towards someone (a group or an individual). Our model for Sub-task A is based on an ensemble of a Convolutional Neural Network, a Bidirectional LSTM with attention, and a Bidirectional LSTM + Bidirectional GRU, whereas for Sub-task B we rely on a set of heuristics derived from the training data and manual observation. We provide a detailed analysis of the results obtained using the trained models. Our team ranked 5th out of 103 participants in Sub-task A, achieving a macro F1 score of 0.807, and ranked 8th out of 75 participants in Sub-task B, achieving a macro F1 of 0.695. |
Tasks | |
Published | 2019-04-19 |
URL | http://arxiv.org/abs/1904.09072v1 |
http://arxiv.org/pdf/1904.09072v1.pdf | |
PWC | https://paperswithcode.com/paper/identifying-offensive-posts-and-targeted |
Repo | |
Framework | |
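A minimal sketch of combining the three Sub-task A base models by soft voting (averaging class probabilities). The exact combination rule used by the authors is not stated in the abstract, so treat the averaging and the probability values below as illustrative; the three inputs stand in for the CNN, BiLSTM-with-attention, and BiLSTM+BiGRU classifiers.

```python
import numpy as np

def ensemble_predict(prob_cnn, prob_bilstm_att, prob_bilstm_bigru):
    """Average per-tweet class probabilities from the three base models and take argmax."""
    probs = np.stack([prob_cnn, prob_bilstm_att, prob_bilstm_bigru])
    return probs.mean(axis=0).argmax(axis=1)

# Hypothetical per-tweet probabilities over {NOT, OFF} from each model.
p1 = np.array([[0.8, 0.2], [0.3, 0.7]])
p2 = np.array([[0.6, 0.4], [0.4, 0.6]])
p3 = np.array([[0.7, 0.3], [0.2, 0.8]])
print(ensemble_predict(p1, p2, p3))  # -> [0 1]  (NOT, OFF)
```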
AutoPhase: Compiler Phase-Ordering for High Level Synthesis with Deep Reinforcement Learning
Title | AutoPhase: Compiler Phase-Ordering for High Level Synthesis with Deep Reinforcement Learning |
Authors | Ameer Haj-Ali, Qijing Huang, William Moses, John Xiang, Ion Stoica, Krste Asanovic, John Wawrzynek |
Abstract | The performance of the code generated by a compiler depends on the order in which the optimization passes are applied. In high-level synthesis, the quality of the generated circuit relates directly to the code generated by the front-end compiler. Choosing a good order, often referred to as the phase-ordering problem, is NP-hard. In this paper, we evaluate a new technique to address the phase-ordering problem: deep reinforcement learning. We implement a framework in the context of the LLVM compiler to optimize the pass ordering for HLS programs and compare the performance of deep reinforcement learning to state-of-the-art algorithms that address the phase-ordering problem. Overall, our framework runs one to two orders of magnitude faster than these algorithms and achieves a 16% improvement in circuit performance over the -O3 compiler flag. |
Tasks | |
Published | 2019-01-15 |
URL | http://arxiv.org/abs/1901.04615v2 |
http://arxiv.org/pdf/1901.04615v2.pdf | |
PWC | https://paperswithcode.com/paper/autophase-compiler-phase-ordering-for-high |
Repo | |
Framework | |
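A schematic sketch of the reinforcement-learning loop for phase ordering. The `PassEnv` class, the short pass list, the fixed episode length, and the random reward are placeholders for a real flow that would compile the program with the chosen passes and measure cycle counts; a trained deep RL policy would replace the random action choice.

```python
import random

# A few real LLVM opt passes used here only as an illustrative action space.
LLVM_PASSES = ["-mem2reg", "-gvn", "-loop-unroll", "-instcombine", "-simplifycfg"]

class PassEnv:
    """Toy environment: the state is the sequence of passes applied so far;
    the reward is a made-up stand-in for the measured performance improvement."""
    def reset(self):
        self.applied = []
        return tuple(self.applied)

    def step(self, pass_idx):
        self.applied.append(LLVM_PASSES[pass_idx])
        reward = random.uniform(0.0, 1.0)   # placeholder for measured speedup
        done = len(self.applied) >= 8       # fixed-length pass sequence
        return tuple(self.applied), reward, done

env = PassEnv()
state, done, total = env.reset(), False, 0.0
while not done:
    action = random.randrange(len(LLVM_PASSES))   # a trained policy would choose here
    state, reward, done = env.step(action)
    total += reward
print(state, round(total, 2))
```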
The Semantic Asset Administration Shell
Title | The Semantic Asset Administration Shell |
Authors | Sebastian R. Bader, Maria Maleshkova |
Abstract | The disruptive potential of the upcoming digital transformations for the industrial manufacturing domain has led to several reference frameworks and numerous standardization approaches. On the other hand, the Semantic Web community has made significant contributions in the field, for instance on data and service description, integration of heterogeneous sources and devices, and AI techniques in distributed systems. These two streams of work are, however, mostly unrelated and only briefly regard each other's requirements, practices and terminology. We contribute to closing this gap by providing the Semantic Asset Administration Shell, an RDF-based representation of the Industrie 4.0 Component. We provide an ontology for the latest data model specification, create an RML mapping, supply resources to validate the RDF entities, and introduce basic reasoning on the Asset Administration Shell data model. Furthermore, we discuss the different assumptions and presentation patterns, and analyze the implications of a semantic representation on the original data. We evaluate the resulting overhead and conclude that the semantic lifting is manageable, even for restricted or embedded devices, and therefore meets the needs of Industrie 4.0 scenarios. |
Tasks | |
Published | 2019-09-02 |
URL | https://arxiv.org/abs/1909.00690v1 |
https://arxiv.org/pdf/1909.00690v1.pdf | |
PWC | https://paperswithcode.com/paper/the-semantic-asset-administration-shell |
Repo | |
Framework | |
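A minimal sketch of the "semantic lifting" step with rdflib: describing one Asset Administration Shell, a submodel, and a property as RDF triples. The namespaces and the class/property names are illustrative placeholders, not the official AAS ontology terms from the paper.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Placeholder namespaces; the paper provides the actual ontology for the AAS data model.
AAS = Namespace("https://example.org/aas#")
EX = Namespace("https://example.org/plant#")

g = Graph()
g.bind("aas", AAS)

# One shell, one submodel, one property, expressed as plain triples.
g.add((EX.Motor42, RDF.type, AAS.AssetAdministrationShell))
g.add((EX.Motor42_TechData, RDF.type, AAS.Submodel))
g.add((EX.Motor42, AAS.hasSubmodel, EX.Motor42_TechData))
g.add((EX.MaxTorque, RDF.type, AAS.Property))
g.add((EX.Motor42_TechData, AAS.hasProperty, EX.MaxTorque))
g.add((EX.MaxTorque, AAS.value, Literal("120 Nm")))

print(g.serialize(format="turtle"))
```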
Skeleton based Zero Shot Action Recognition in Joint Pose-Language Semantic Space
Title | Skeleton based Zero Shot Action Recognition in Joint Pose-Language Semantic Space |
Authors | Bhavan Jasani, Afshaan Mazagonwalla |
Abstract | How does one represent an action? How does one describe an action that we have never seen before? Such questions are addressed by the Zero Shot Learning paradigm, where a model is trained on only a subset of classes and is evaluated on its ability to correctly classify an example from a class it has never seen before. In this work, we present a body pose based zero shot action recognition network and demonstrate its performance on the NTU RGB-D dataset. Our model learns to jointly encapsulate visual similarities based on pose features of the action performer as well as similarities in the natural language descriptions of the unseen action class names. We demonstrate how this pose-language semantic space encodes knowledge which allows our model to correctly predict actions not seen during training. |
Tasks | Temporal Action Localization, Zero-Shot Learning |
Published | 2019-11-26 |
URL | https://arxiv.org/abs/1911.11344v1 |
https://arxiv.org/pdf/1911.11344v1.pdf | |
PWC | https://paperswithcode.com/paper/skeleton-based-zero-shot-action-recognition |
Repo | |
Framework | |
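A minimal sketch of zero-shot prediction in a joint pose-language space: a pose-sequence embedding and the language embeddings of unseen class names are projected into a shared space and matched by cosine similarity. The fixed random projections and random embeddings stand in for the learned encoders and real word vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
pose_dim, lang_dim, joint_dim = 256, 300, 128
W_pose = rng.normal(size=(pose_dim, joint_dim))   # stand-in for the learned pose head
W_lang = rng.normal(size=(lang_dim, joint_dim))   # stand-in for the learned language head

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def zero_shot_predict(pose_embedding, class_name_embeddings):
    """Project the pose embedding and every class-name embedding into the joint
    space and return the most similar (possibly never-seen) class."""
    z = pose_embedding @ W_pose
    scores = {name: cosine(z, emb @ W_lang) for name, emb in class_name_embeddings.items()}
    return max(scores, key=scores.get)

unseen_classes = {name: rng.normal(size=lang_dim)
                  for name in ["hand waving", "kicking", "hugging"]}
print(zero_shot_predict(rng.normal(size=pose_dim), unseen_classes))
```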
Predicting Annotation Difficulty to Improve Task Routing and Model Performance for Biomedical Information Extraction
Title | Predicting Annotation Difficulty to Improve Task Routing and Model Performance for Biomedical Information Extraction |
Authors | Yinfei Yang, Oshin Agarwal, Chris Tar, Byron C. Wallace, Ani Nenkova |
Abstract | Modern NLP systems require high-quality annotated data. In specialized domains, expert annotations may be prohibitively expensive. An alternative is to rely on crowdsourcing to reduce costs at the risk of introducing noise. In this paper we demonstrate that directly modeling instance difficulty can be used to improve model performance, and to route instances to appropriate annotators. Our difficulty prediction model combines two learned representations: a 'universal' encoder trained on out-of-domain data, and a task-specific encoder. Experiments on a complex biomedical information extraction task using expert and lay annotators show that: (i) simply excluding from the training data instances predicted to be difficult yields a small boost in performance; (ii) using difficulty scores to weight instances during training provides further, consistent gains; (iii) assigning instances predicted to be difficult to domain experts is an effective strategy for task routing. Our experiments confirm the expectation that for specialized tasks expert annotations are higher quality than crowd labels, and hence preferable to obtain if practical. Moreover, augmenting small amounts of expert data with a larger set of lay annotations leads to further improvements in model performance. |
Tasks | |
Published | 2019-05-19 |
URL | https://arxiv.org/abs/1905.07791v1 |
https://arxiv.org/pdf/1905.07791v1.pdf | |
PWC | https://paperswithcode.com/paper/predicting-annotation-difficulty-to-improve |
Repo | |
Framework | |
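A minimal sketch of idea (ii) from the abstract, weighting training instances by predicted difficulty. The specific weighting scheme (1 − difficulty), the synthetic data, and the logistic-regression downstream model are assumptions for illustration; the paper's difficulty model combines a universal and a task-specific encoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
difficulty = rng.uniform(0.0, 1.0, size=200)   # stand-in for the difficulty model's scores

# Down-weight instances predicted to be hard when fitting the downstream classifier.
clf = LogisticRegression()
clf.fit(X, y, sample_weight=1.0 - difficulty)
print(f"training accuracy: {clf.score(X, y):.2f}")
```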
A Choquet Fuzzy Integral Vertical Bagging Classifier for Mobile Telematics Data Analysis
Title | A Choquet Fuzzy Integral Vertical Bagging Classifier for Mobile Telematics Data Analysis |
Authors | Mohammad Siami, Mohsen Naderpour, Jie Lu |
Abstract | Mobile app development in recent years has resulted in new products and features to improve human life. Mobile telematics is one such development that encompasses multidisciplinary fields for transportation safety. The application of mobile telematics has been explored in many areas, such as insurance and road safety. However, to the best of our knowledge, its application in gender detection has not been explored. This paper proposes a Choquet fuzzy integral vertical bagging classifier that detects gender through mobile telematics. In this model, different random forest classifiers are trained by randomly generated features with rough set theory, and the top three classifiers are fused using the Choquet fuzzy integral. The model is implemented and evaluated on a real dataset. The empirical results indicate that the Choquet fuzzy integral vertical bagging classifier outperforms other classifiers. |
Tasks | |
Published | 2019-03-19 |
URL | http://arxiv.org/abs/1903.07970v1 |
http://arxiv.org/pdf/1903.07970v1.pdf | |
PWC | https://paperswithcode.com/paper/a-choquet-fuzzy-integral-vertical-bagging |
Repo | |
Framework | |
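A minimal sketch of fusing three classifiers' per-class confidence scores with a discrete Choquet integral. The fuzzy-measure values below are made up for illustration; how the measure is set for the three random forests is not described in the abstract.

```python
# Illustrative fuzzy measure over the three random-forest classifiers.
mu = {
    frozenset(): 0.0,
    frozenset({"rf1"}): 0.4, frozenset({"rf2"}): 0.35, frozenset({"rf3"}): 0.3,
    frozenset({"rf1", "rf2"}): 0.7, frozenset({"rf1", "rf3"}): 0.65,
    frozenset({"rf2", "rf3"}): 0.6,
    frozenset({"rf1", "rf2", "rf3"}): 1.0,
}

def choquet(scores):
    """Discrete Choquet integral of per-classifier scores w.r.t. the measure mu."""
    items = sorted(scores.items(), key=lambda kv: kv[1])    # ascending by score
    total, prev = 0.0, 0.0
    for i, (_, value) in enumerate(items):
        coalition = frozenset(name for name, _ in items[i:])  # classifiers scoring >= value
        total += (value - prev) * mu[coalition]
        prev = value
    return total

# Per-class fused scores for one trip; predict the class with the larger value.
scores_male = {"rf1": 0.8, "rf2": 0.6, "rf3": 0.7}
scores_female = {"rf1": 0.2, "rf2": 0.4, "rf3": 0.3}
print("male" if choquet(scores_male) > choquet(scores_female) else "female")
```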
Auditing and Achieving Intersectional Fairness in Classification Problems
Title | Auditing and Achieving Intersectional Fairness in Classification Problems |
Authors | Giulio Morina, Viktoriia Oliinyk, Julian Waton, Ines Marusic, Konstantinos Georgatzis |
Abstract | Machine learning algorithms are extensively used to make increasingly consequential decisions, so that achieving optimal predictive performance can no longer be the only focus. This paper explores intersectional fairness, that is, fairness when intersections of multiple sensitive attributes (such as race, age, and nationality) are considered. Previous research has mainly focused on fairness with respect to a single sensitive attribute, with intersectional fairness being comparatively less studied despite its critical importance for modern machine learning applications. We introduce intersectional fairness metrics by extending prior work, and provide different methodologies to audit discrimination in a given dataset or in model outputs. We then develop novel post-processing techniques to mitigate any detected bias in a classification model. Our proposed methodology does not rely on any assumptions regarding the underlying model and aims to guarantee fairness while preserving good predictive performance. Finally, we give guidance on a practical implementation, showing how the proposed methods perform on a real-world dataset. |
Tasks | |
Published | 2019-11-04 |
URL | https://arxiv.org/abs/1911.01468v1 |
https://arxiv.org/pdf/1911.01468v1.pdf | |
PWC | https://paperswithcode.com/paper/auditing-and-achieving-intersectional |
Repo | |
Framework | |
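A minimal auditing sketch under one commonly used parity notion (assumed here, not necessarily the paper's metric): compute the positive-prediction rate for every intersection of the sensitive attributes and report the worst-case ratio between subgroups, where 1.0 would indicate perfect intersectional parity.

```python
import pandas as pd

# Tiny synthetic audit table: two sensitive attributes and a model's predictions.
df = pd.DataFrame({
    "race":   ["A", "A", "B", "B", "A", "B", "A", "B"],
    "gender": ["F", "M", "F", "M", "F", "M", "M", "F"],
    "y_pred": [1, 0, 0, 1, 1, 1, 0, 0],
})

# Positive-prediction rate per intersectional subgroup (race x gender).
rates = df.groupby(["race", "gender"])["y_pred"].mean()
print(rates)
print("intersectional parity ratio:", rates.min() / rates.max())
```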
Approximating intractable short rate model distribution with neural network
Title | Approximating intractable short rate model distribution with neural network |
Authors | Anna Knezevic, Nikolai Dokuchaev |
Abstract | We propose an algorithm that predicts each subsequent time step of an intractable short rate model relative to the previous time step (adjusted for drift and for the overall distribution of the previous percentile result), and we show that the method achieves superior outcomes to the unbiased estimate both on the training dataset and on separate validation data. |
Tasks | |
Published | 2019-12-29 |
URL | https://arxiv.org/abs/1912.12615v7 |
https://arxiv.org/pdf/1912.12615v7.pdf | |
PWC | https://paperswithcode.com/paper/approximating-intractable-short-ratemodel |
Repo | |
Framework | |
A Preliminary Study on Optimal Placement of Cameras
Title | A Preliminary Study on Optimal Placement of Cameras |
Authors | Lin Xu |
Abstract | This paper focuses on finding the best arrangement of cameras, or visual sensors, so that their placement enables maximum utilization of the sensors. Maximizing the utilization of the cameras can be converted into a simpler problem to formulate: maximizing the total coverage achieved by the cameras. To solve it, the coverage problem is first defined subject to the capabilities and limits of the cameras; camera poses are then analyzed to find the best arrangement. |
Tasks | |
Published | 2019-10-26 |
URL | https://arxiv.org/abs/1910.12053v1 |
https://arxiv.org/pdf/1910.12053v1.pdf | |
PWC | https://paperswithcode.com/paper/a-preliminary-study-on-optimal-placement-of |
Repo | |
Framework | |
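A toy illustration of the coverage-maximisation view: greedily place cameras on a grid so that each new camera covers as many not-yet-covered cells as possible. The grid, candidate positions, and circular field-of-view model are assumptions for illustration, not the paper's formulation.

```python
import itertools

GRID = {(x, y) for x in range(10) for y in range(10)}                     # cells to cover
CANDIDATES = list(itertools.product(range(0, 10, 3), range(0, 10, 3)))    # candidate camera positions
RADIUS2 = 3.0 ** 2                                                        # assumed sensing radius^2

def covers(cam, cell):
    return (cam[0] - cell[0]) ** 2 + (cam[1] - cell[1]) ** 2 <= RADIUS2

def greedy_placement(k):
    """Pick k cameras, each time maximising the number of newly covered cells."""
    chosen, uncovered = [], set(GRID)
    for _ in range(k):
        best = max(CANDIDATES, key=lambda c: sum(covers(c, cell) for cell in uncovered))
        chosen.append(best)
        uncovered -= {cell for cell in uncovered if covers(best, cell)}
    return chosen, len(GRID) - len(uncovered)

cams, covered = greedy_placement(k=4)
print(cams, f"{covered}/{len(GRID)} cells covered")
```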
Multi-way Encoding for Robustness
Title | Multi-way Encoding for Robustness |
Authors | Donghyun Kim, Sarah Adel Bargal, Jianming Zhang, Stan Sclaroff |
Abstract | Deep models are state-of-the-art for many computer vision tasks, including image classification and object detection. However, it has been shown that deep models are vulnerable to adversarial examples. We highlight how one-hot encoding directly contributes to this vulnerability and propose breaking away from this widely used but highly vulnerable mapping. We demonstrate that by leveraging a different output encoding, multi-way encoding, we decorrelate source and target models, making target models more secure. Our approach makes it more difficult for adversaries to find useful gradients for generating adversarial attacks. We demonstrate robustness against black-box and white-box attacks on four benchmark datasets: MNIST, CIFAR-10, CIFAR-100, and SVHN. The strength of our approach is also presented in the form of an attack for model watermarking, raising challenges in detecting stolen models. |
Tasks | Image Classification, Object Detection |
Published | 2019-06-05 |
URL | https://arxiv.org/abs/1906.02033v2 |
https://arxiv.org/pdf/1906.02033v2.pdf | |
PWC | https://paperswithcode.com/paper/multi-way-encoding-for-robustness |
Repo | |
Framework | |
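A minimal sketch of the output-encoding idea: each class is assigned a dense code instead of a one-hot vector, the network regresses to that code, and prediction is the nearest class code. The code dimensionality and the random ±1 codes are illustrative choices, not the paper's exact encoding.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, code_dim = 10, 64
class_codes = rng.choice([-1.0, 1.0], size=(num_classes, code_dim))  # multi-way targets

def predict(model_output):
    """model_output: (batch, code_dim) embeddings produced by the network;
    decode by similarity to every class code."""
    scores = model_output @ class_codes.T
    return scores.argmax(axis=1)

# A model that had learned the codes for classes 3 and 7 decodes back to 3 and 7.
fake_output = class_codes[[3, 7]] + 0.1 * rng.normal(size=(2, code_dim))
print(predict(fake_output))  # -> [3 7]
```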
AirwayNet: A Voxel-Connectivity Aware Approach for Accurate Airway Segmentation Using Convolutional Neural Networks
Title | AirwayNet: A Voxel-Connectivity Aware Approach for Accurate Airway Segmentation Using Convolutional Neural Networks |
Authors | Yulei Qin, Mingjian Chen, Hao Zheng, Yun Gu, Mali Shen, Jie Yang, Xiaolin Huang, Yue-Min Zhu, Guang-Zhong Yang |
Abstract | Airway segmentation on CT scans is critical for pulmonary disease diagnosis and endobronchial navigation. Manual extraction of the airway requires strenuous effort due to its complicated structure and varied appearance. For automatic airway extraction, methods based on convolutional neural networks (CNNs) have recently become the state-of-the-art approach. However, it remains a challenge for CNNs to perceive the tree-like pattern and comprehend the connectivity of the airway. To address this, we propose a voxel-connectivity aware approach, named AirwayNet, for accurate airway segmentation. Through connectivity modeling, the conventional binary segmentation task is transformed into 26 connectivity prediction tasks, so that AirwayNet learns both the airway structure and the relationship between neighboring voxels. To take advantage of context knowledge, a lung distance map and voxel coordinates are fed into AirwayNet as additional semantic information. Compared to existing approaches, AirwayNet achieves superior performance, demonstrating the effectiveness of the network's awareness of voxel connectivity. |
Tasks | |
Published | 2019-07-16 |
URL | https://arxiv.org/abs/1907.06852v1 |
https://arxiv.org/pdf/1907.06852v1.pdf | |
PWC | https://paperswithcode.com/paper/airwaynet-a-voxel-connectivity-aware-approach |
Repo | |
Framework | |
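A minimal sketch of turning a binary airway mask into 26 connectivity-prediction targets, as the abstract describes: channel k marks voxels that are foreground and whose k-th 26-neighbour is also foreground. Boundary handling is simplified (np.roll wraps around the volume edges), and the CNN that predicts these labels is omitted.

```python
import numpy as np
from itertools import product

def connectivity_labels(mask):
    """mask: binary 3D array. Returns a (26, D, H, W) array of connectivity labels."""
    offsets = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]  # 26 neighbour offsets
    labels = np.zeros((len(offsets),) + mask.shape, dtype=np.uint8)
    for k, (dz, dy, dx) in enumerate(offsets):
        shifted = np.roll(mask, shift=(dz, dy, dx), axis=(0, 1, 2))  # wraps at borders (simplified)
        labels[k] = mask & shifted          # foreground voxel connected to its k-th neighbour
    return labels

mask = np.zeros((8, 8, 8), dtype=np.uint8)
mask[2:6, 3:5, 3:5] = 1                     # a tiny tube standing in for an airway
labels = connectivity_labels(mask)
print(labels.shape)                          # (26, 8, 8, 8)
```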
Heterogeneous Graph-based Knowledge Transfer for Generalized Zero-shot Learning
Title | Heterogeneous Graph-based Knowledge Transfer for Generalized Zero-shot Learning |
Authors | Junjie Wang, Xiangfeng Wang, Bo Jin, Junchi Yan, Wenjie Zhang, Hongyuan Zha |
Abstract | Generalized zero-shot learning (GZSL) tackles the problem of learning to classify instances from both seen and unseen classes. The key issue is how to effectively transfer a model learned from seen classes to unseen classes. Existing work in GZSL usually assumes that some prior information about unseen classes is available. However, such an assumption is unrealistic when new unseen classes appear dynamically. To this end, we propose a novel heterogeneous graph-based knowledge transfer method (HGKT) for GZSL that is agnostic to unseen classes and instances, by leveraging graph neural networks. Specifically, a structured heterogeneous graph is constructed with high-level representative nodes for the seen classes, chosen via Wasserstein barycenters in order to simultaneously capture inter-class and intra-class relationships. The aggregation and embedding functions can be learned through a graph neural network and used to compute the embeddings of unseen classes by transferring knowledge from their neighbors. Extensive experiments on public benchmark datasets show that our method achieves state-of-the-art results. |
Tasks | Transfer Learning, Zero-Shot Learning |
Published | 2019-11-20 |
URL | https://arxiv.org/abs/1911.09046v1 |
https://arxiv.org/pdf/1911.09046v1.pdf | |
PWC | https://paperswithcode.com/paper/heterogeneous-graph-based-knowledge-transfer |
Repo | |
Framework | |
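A minimal sketch of the knowledge-transfer step: the embedding of an unseen class is computed by aggregating the embeddings of its neighbouring seen-class nodes, and an instance is classified by similarity to those embeddings. Mean aggregation and the random vectors stand in for the learned GNN aggregation and the Wasserstein-barycenter node selection described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
seen_class_embeddings = {"horse": rng.normal(size=64),
                         "donkey": rng.normal(size=64),
                         "tiger": rng.normal(size=64)}

def embed_unseen(neighbour_names):
    """Transfer knowledge from graph neighbours by mean-aggregating their embeddings."""
    return np.mean([seen_class_embeddings[n] for n in neighbour_names], axis=0)

def classify(instance_embedding, unseen_classes):
    scores = {c: instance_embedding @ emb for c, emb in unseen_classes.items()}
    return max(scores, key=scores.get)

# Hypothetical graph neighbours for a class never seen during training.
unseen = {"zebra": embed_unseen(["horse", "donkey"])}
print(classify(rng.normal(size=64), unseen))
```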