Paper Group ANR 590
Contents:
KnowBias: Detecting Political Polarity in Long Text Content
Unsupervised Feature Learning with K-means and An Ensemble of Deep Convolutional Neural Networks for Medical Image Classification
Surgical Gesture Recognition with Optical Flow only
Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions
Reasoning Over Paths via Knowledge Base Completion
Generalized active learning and design of statistical experiments for manifold-valued data
Talking With Your Hands: Scaling Hand Gestures and Recognition With CNNs
Localization with Limited Annotation for Chest X-rays
Commonsense Knowledge Base Completion with Structural and Semantic Context
Class-independent sequential full image segmentation, using a convolutional net that finds a segment within an attention region, given a pointer pixel within this segment
Pentagon at MEDIQA 2019: Multi-task Learning for Filtering and Re-ranking Answers using Language Inference and Question Entailment
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
Appearance-based Gesture recognition in the compressed domain
ShapeMask: Learning to Segment Novel Objects by Refining Shape Priors
Efficient adjustment sets for population average treatment effect estimation in non-parametric causal graphical models
KnowBias: Detecting Political Polarity in Long Text Content
Title | KnowBias: Detecting Political Polarity in Long Text Content |
Authors | Aditya Saligrama |
Abstract | We introduce a classification scheme for detecting political bias in long text content such as newspaper opinion articles. Obtaining long text data and annotations at sufficient scale for training is difficult, but it is relatively easy to extract political polarity from tweets through their authorship. We train on tweets and perform inference on articles. Universal sentence encoders and other existing methods that aim to address this domain-adaptation scenario deliver inaccurate and inconsistent predictions on articles, which we show is due to a difference in opinion concentration between tweets and articles. We propose a two-step classification scheme that uses a neutral detector trained on tweets to remove neutral sentences from articles in order to align opinion concentration and therefore improve accuracy on that domain. Our implementation is available for public use at https://knowbias.ml. |
Tasks | Domain Adaptation |
Published | 2019-09-22 |
URL | https://arxiv.org/abs/1909.12230v2 |
https://arxiv.org/pdf/1909.12230v2.pdf | |
PWC | https://paperswithcode.com/paper/knowbias-detecting-political-polarity-in-long |
Repo | |
Framework | |
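The two-step scheme the abstract describes lends itself to a compact sketch. The following is illustrative only: `neutral_prob`, `polarity_score`, and the 0.5 threshold are assumed stand-ins for the tweet-trained classifiers, not the paper's implementation.

```python
# Hypothetical sketch of the two-step scheme from the KnowBias abstract: a
# neutral detector trained on tweets filters out neutral sentences, so the
# article's opinion concentration matches the tweet domain; a polarity
# classifier then scores what remains. Both classifiers are stand-ins.
from typing import Callable, List

def classify_article(
    sentences: List[str],
    neutral_prob: Callable[[str], float],    # P(sentence is neutral), tweet-trained
    polarity_score: Callable[[str], float],  # -1 (left) .. +1 (right), tweet-trained
    neutral_threshold: float = 0.5,          # assumed cutoff, not from the paper
) -> float:
    """Return the mean polarity over the article's opinionated sentences."""
    opinionated = [s for s in sentences if neutral_prob(s) < neutral_threshold]
    if not opinionated:                      # article is entirely neutral
        return 0.0
    return sum(polarity_score(s) for s in opinionated) / len(opinionated)
```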
Unsupervised Feature Learning with K-means and An Ensemble of Deep Convolutional Neural Networks for Medical Image Classification
Title | Unsupervised Feature Learning with K-means and An Ensemble of Deep Convolutional Neural Networks for Medical Image Classification |
Authors | Euijoon Ahn, Ashnil Kumar, Dagan Feng, Michael Fulham, Jinman Kim |
Abstract | Medical image analysis using supervised deep learning methods remains problematic because of the reliance of deep learning methods on large amounts of labelled training data. Although medical imaging data repositories continue to expand, there has not been a commensurate increase in the amount of annotated data. Hence, we propose a new unsupervised feature learning method that learns feature representations which can then differentiate dissimilar medical images, using an ensemble of different convolutional neural networks (CNNs) and K-means clustering. It jointly learns feature representations and clustering assignments in an end-to-end fashion. We tested our approach on a public medical dataset and show that its accuracy was better than state-of-the-art unsupervised feature learning methods and comparable to state-of-the-art supervised CNNs. Our findings suggest that our method could be used to tackle the issue of the large volume of unlabelled data in medical imaging repositories. |
Tasks | Image Classification |
Published | 2019-06-07 |
URL | https://arxiv.org/abs/1906.03359v1 |
https://arxiv.org/pdf/1906.03359v1.pdf | |
PWC | https://paperswithcode.com/paper/unsupervised-feature-learning-with-k-means |
Repo | |
Framework | |
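A minimal sketch of the alternating cluster-then-train loop the abstract describes, using a single CNN rather than the paper's ensemble; the backbone interface, k, and training schedule are illustrative assumptions in the style of DeepCluster-like methods.

```python
# Sketch: alternate (1) embedding all unlabelled images and clustering the
# features with K-means, and (2) training the CNN to predict the cluster
# assignments as pseudo-labels. Single backbone only; the paper's ensemble
# and end-to-end details are omitted.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def cluster_train(backbone, images, k=10, rounds=5, lr=1e-3):
    """backbone: maps an (N, C, H, W) image batch to (N, D) feature vectors."""
    feat_dim = backbone(images[:1]).shape[1]
    head = nn.Linear(feat_dim, k)                 # cluster-classification head
    opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(rounds):
        with torch.no_grad():                     # step 1: embed and cluster
            feats = backbone(images).cpu().numpy()
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
        pseudo = torch.as_tensor(labels, dtype=torch.long)
        for _ in range(3):                        # step 2: fit to pseudo-labels
            loss = nn.functional.cross_entropy(head(backbone(images)), pseudo)
            opt.zero_grad(); loss.backward(); opt.step()
    return backbone
```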
Surgical Gesture Recognition with Optical Flow only
Title | Surgical Gesture Recognition with Optical Flow only |
Authors | Duygu Sarikaya, Pierre Jannin |
Abstract | In this paper, we address the open research problem of surgical gesture recognition using motion cues from video data only. We adapt the Optical flow ConvNets initially proposed by Simonyan et al. While Simonyan et al. use both RGB frames and dense optical flow, we use only dense optical flow representations as input to emphasize the role of motion in surgical gesture recognition, and present it as a robust alternative to kinematic data. We also overcome one of the limitations of Optical flow ConvNets by initializing our model with cross-modality pre-training. A large number of promising studies that address surgical gesture recognition rely heavily on kinematic data, which requires additional recording devices. To our knowledge, this is the first paper that addresses surgical gesture recognition using dense optical flow information only. We achieve competitive results on the JIGSAWS dataset; moreover, our model achieves more robust results with a smaller standard deviation, which suggests that optical flow information can be used as an alternative to kinematic data for the recognition of surgical gestures. |
Tasks | Gesture Recognition, Optical Flow Estimation, Surgical Gesture Recognition |
Published | 2019-04-01 |
URL | http://arxiv.org/abs/1904.01143v1 |
http://arxiv.org/pdf/1904.01143v1.pdf | |
PWC | https://paperswithcode.com/paper/surgical-gesture-recognition-with-optical |
Repo | |
Framework | |
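Cross-modality pre-training, which the abstract uses to initialize the flow network, is commonly done by adapting ImageNet-trained RGB filters to the stacked-flow input. A hedged sketch follows; the backbone choice and stack depth of 20 channels (10 frames of u/v flow) are assumptions, not the paper's exact configuration.

```python
# Sketch of cross-modality pre-training for a flow-only ConvNet: average the
# first conv layer's ImageNet-trained RGB filters across the colour channels
# and replicate the result across all stacked optical-flow channels.
import torch.nn as nn
import torchvision.models as models

def flow_network(flow_channels=20):          # e.g. 10 frames x (u, v) flow fields
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    rgb_w = net.conv1.weight.data            # (64, 3, 7, 7), ImageNet-trained
    conv1 = nn.Conv2d(flow_channels, 64, kernel_size=7, stride=2,
                      padding=3, bias=False)
    # mean over RGB, then tile over the flow channels
    conv1.weight.data = rgb_w.mean(dim=1, keepdim=True).repeat(1, flow_channels, 1, 1)
    net.conv1 = conv1
    return net
```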
Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions
Title | Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions |
Authors | Lijun Zhang, Guanghui Wang, Wei-Wei Tu, Zhi-Hua Zhou |
Abstract | To deal with changing environments, a new performance measure, adaptive regret, defined as the maximum static regret over any interval, has been proposed in online learning. Under the setting of online convex optimization, several algorithms have been successfully developed to minimize the adaptive regret. However, existing algorithms lack universality in the sense that they can only handle one type of convex function and need a priori knowledge of parameters. By contrast, there exist universal algorithms, such as MetaGrad, that attain optimal static regret for multiple types of convex functions simultaneously. Along this line of research, this paper presents the first universal algorithm for minimizing the adaptive regret of convex functions. Specifically, we borrow the idea of maintaining multiple learning rates from MetaGrad to handle the uncertainty of functions, and utilize the technique of sleeping experts to capture changing environments. In this way, our algorithm automatically adapts to the property of the functions (convex, exponentially concave, or strongly convex), as well as the nature of the environment (stationary or changing). As a by-product, it also allows the type of function to switch between rounds. |
Tasks | |
Published | 2019-06-26 |
URL | https://arxiv.org/abs/1906.10851v1 |
https://arxiv.org/pdf/1906.10851v1.pdf | |
PWC | https://paperswithcode.com/paper/dual-adaptivity-a-universal-algorithm-for |
Repo | |
Framework | |
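Sleeping-expert constructions for adaptive regret typically rest on geometric (dyadic) covering intervals, so that any interval [r, s] is covered by O(log(s − r)) experts and only O(log t) experts are awake at round t. The sketch below shows that standard construction as background to the abstract's approach; it is not the paper's full algorithm.

```python
# Dyadic covering intervals used by sleeping-expert style adaptive-regret
# algorithms: at each level k, rounds are partitioned into blocks of length
# 2**k, and an expert is awake exactly during its block.
def active_intervals(t: int):
    """Dyadic intervals [i*2**k, (i+1)*2**k - 1] containing round t (t >= 1)."""
    out, k = [], 0
    while (1 << k) <= t:
        i = t >> k                       # index of the level-k block holding t
        out.append((i << k, ((i + 1) << k) - 1))
        k += 1
    return out

print(active_intervals(13))  # [(13, 13), (12, 13), (12, 15), (8, 15)]
```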
Reasoning Over Paths via Knowledge Base Completion
Title | Reasoning Over Paths via Knowledge Base Completion |
Authors | Saatviga Sudhahar, Ian Roberts, Andrea Pierleoni |
Abstract | Reasoning over paths in large scale knowledge graphs is an important problem for many applications. In this paper we discuss a simple approach to automatically build and rank paths between a source and target entity pair with learned embeddings using a knowledge base completion model (KBC). We assembled a knowledge graph by mining the available biomedical scientific literature and extracted a set of high frequency paths to use for validation. We demonstrate that our method is able to effectively rank a list of known paths between a pair of entities and also come up with plausible paths that are not present in the knowledge graph. For a given entity pair we are able to reconstruct the highest-ranking path 60% of the time within the top 10 ranked paths and achieve 49% mean average precision. Our approach is compositional since any KBC model that can produce vector representations of entities can be used. |
Tasks | Knowledge Base Completion, Knowledge Graphs |
Published | 2019-11-01 |
URL | https://arxiv.org/abs/1911.00492v1 |
https://arxiv.org/pdf/1911.00492v1.pdf | |
PWC | https://paperswithcode.com/paper/reasoning-over-paths-via-knowledge-base-1 |
Repo | |
Framework | |
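Since the abstract states that any KBC model producing entity vectors can be used, one natural composition is to score each edge of a path with the KBC model and combine per-edge plausibilities. The sketch below uses DistMult as a stand-in scorer; the paper's model and combination rule may differ.

```python
# Illustrative path ranking with a trained KBC model: each (head, relation,
# tail) edge gets a DistMult score, and a path's score is the product of
# per-edge plausibilities, computed as a sum of log-sigmoids.
import numpy as np

def triple_score(h, r, t, ent, rel):
    """DistMult score for (h, r, t); ent/rel are id -> vector lookup tables."""
    return float(np.sum(ent[h] * rel[r] * ent[t]))

def rank_paths(paths, ent, rel):
    """paths: list of paths, each a list of (head, relation, tail) ids."""
    def log_plausibility(path):
        # log sigmoid(s) = -log(1 + exp(-s)); summing multiplies probabilities
        return sum(-np.log1p(np.exp(-triple_score(h, r, t, ent, rel)))
                   for h, r, t in path)
    return sorted(paths, key=log_plausibility, reverse=True)
```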
Generalized active learning and design of statistical experiments for manifold-valued data
Title | Generalized active learning and design of statistical experiments for manifold-valued data |
Authors | Mikhail A. Langovoy |
Abstract | Characterizing the appearance of real-world surfaces is a fundamental problem in multidimensional reflectometry, computer vision and computer graphics. For many applications, appearance is sufficiently well characterized by the bidirectional reflectance distribution function (BRDF). We treat BRDF measurements as samples of points from high-dimensional non-linear non-convex manifolds. BRDF manifolds form an infinite-dimensional space, but typically the available measurements are very scarce for complicated problems such as BRDF estimation. Therefore, an efficient learning strategy is crucial when performing the measurements. In this paper, we build the foundation of a mathematical framework that allows us to develop and apply new techniques within statistical design of experiments and generalized proactive learning, in order to establish more efficient sampling and measurement strategies for BRDF data manifolds. |
Tasks | Active Learning |
Published | 2019-04-08 |
URL | http://arxiv.org/abs/1904.03909v1 |
http://arxiv.org/pdf/1904.03909v1.pdf | |
PWC | https://paperswithcode.com/paper/generalized-active-learning-and-design-of |
Repo | |
Framework | |
Talking With Your Hands: Scaling Hand Gestures and Recognition With CNNs
Title | Talking With Your Hands: Scaling Hand Gestures and Recognition With CNNs |
Authors | Okan Köpüklü, Yao Rong, Gerhard Rigoll |
Abstract | The use of hand gestures provides a natural alternative to cumbersome interface devices for Human-Computer Interaction (HCI) systems. As the technology advances and communication between humans and machines becomes more complex, HCI systems should also be scaled accordingly in order to accommodate the introduced complexities. In this paper, we propose a methodology to scale hand gestures by forming them with predefined gesture-phonemes, and a convolutional neural network (CNN) based framework to recognize hand gestures by learning only their constituent gesture-phonemes. The total number of possible hand gestures can be increased exponentially by increasing the number of used gesture-phonemes. For this objective, we introduce a new benchmark dataset named Scaled Hand Gestures Dataset (SHGD) with only gesture-phonemes in its training set and 3-tuple gestures in the test set. In our experimental analysis, we recognize hand gestures containing one and three gesture-phonemes with accuracies of 98.47% (in 15 classes) and 94.69% (in 810 classes), respectively. Our dataset, code and pretrained models are publicly available. |
Tasks | |
Published | 2019-05-10 |
URL | https://arxiv.org/abs/1905.04225v2 |
https://arxiv.org/pdf/1905.04225v2.pdf | |
PWC | https://paperswithcode.com/paper/talking-with-your-hands-scaling-hand-gestures |
Repo | |
Framework | |
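The scaling argument is simple combinatorics: with P phonemes and k-tuples, up to P**k gestures are addressable by a single P-way classifier. The sketch below assumes temporal segments are given and the per-segment classifier is a stand-in; SHGD's exact class counts (15 and 810) reflect its specific tuple construction rather than a plain P**k.

```python
# Sketch of gesture scaling via phoneme composition: classify each temporal
# segment into one of P gesture-phonemes, then read the k-tuple as a base-P
# number to obtain a gesture id among P**k possibilities.
from typing import Callable, List, Sequence, Tuple

def decode_gesture(
    segments: Sequence,                            # k temporal segments of a video
    phoneme_classifier: Callable[[object], int],   # returns a phoneme id in [0, P)
    num_phonemes: int,
) -> Tuple[List[int], int]:
    phonemes = [phoneme_classifier(s) for s in segments]
    gesture_id = 0
    for p in phonemes:                             # base-P positional encoding
        gesture_id = gesture_id * num_phonemes + p
    return phonemes, gesture_id                    # e.g. P=10, k=3 -> up to 1000 ids
```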
Localization with Limited Annotation for Chest X-rays
Title | Localization with Limited Annotation for Chest X-rays |
Authors | Eyal Rozenberg, Daniel Freedman, Alex Bronstein |
Abstract | Localization of an object within an image is a common task in medical imaging. Learning to localize or detect objects typically requires the collection of data which has been labelled with bounding boxes or similar annotations, which can be very time consuming and expensive. A technique which could perform such learning with much less annotation would, therefore, be quite valuable. We present such a technique for localization with limited annotation, in which the number of images with bounding boxes can be a small fraction of the total dataset (e.g. less than 1%); all other images only possess a whole image label and no bounding box. We propose a novel loss function for tackling this problem; the loss is a continuous relaxation of a well-defined discrete formulation of weakly supervised learning and is numerically well-posed. Furthermore, we propose a new architecture which accounts for both patch dependence and shift-invariance, through the inclusion of CRF layers and anti-aliasing filters, respectively. We apply our technique to the localization of thoracic diseases in chest X-ray images and demonstrate state-of-the-art localization performance on the ChestX-ray14 dataset. |
Tasks | |
Published | 2019-09-19 |
URL | https://arxiv.org/abs/1909.08842v2 |
https://arxiv.org/pdf/1909.08842v2.pdf | |
PWC | https://paperswithcode.com/paper/localization-with-limited-annotation |
Repo | |
Framework | |
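For intuition about a "continuous relaxation of a discrete weakly supervised formulation", the sketch below shows one standard relaxation, a noisy-or over patch probabilities trained with binary cross-entropy. This is not necessarily the paper's loss, which additionally handles the mixed bbox/image-label regime and adds CRF layers and anti-aliasing filters.

```python
# One common continuous relaxation of weakly supervised localization: with
# per-patch probabilities p_i, the image-level positive probability is the
# noisy-or 1 - prod(1 - p_i); all terms are kept in log space for stability.
import torch

def noisy_or_loss(patch_logits: torch.Tensor, image_label: torch.Tensor) -> torch.Tensor:
    """patch_logits: (B, N) per-patch logits; image_label: (B,) floats in {0, 1}."""
    p = torch.sigmoid(patch_logits)
    log_neg = torch.log1p(-p.clamp(max=1 - 1e-6)).sum(dim=1)  # log prod(1 - p_i)
    log_pos = torch.log1p(-torch.exp(log_neg) + 1e-12)        # log(1 - prod(1 - p_i))
    return -(image_label * log_pos + (1 - image_label) * log_neg).mean()
```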
Commonsense Knowledge Base Completion with Structural and Semantic Context
Title | Commonsense Knowledge Base Completion with Structural and Semantic Context |
Authors | Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, Yejin Choi |
Abstract | Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and ConceptNet) poses unique challenges compared to the much-studied conventional knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes compared to conventional KBs (18x more nodes in ATOMIC compared to Freebase (FB15K-237)). Importantly, this implies significantly sparser graph structures - a major challenge for existing KB completion methods that assume densely connected graphs over a relatively smaller set of nodes. In this paper, we present novel KB completion models that can address these challenges by exploiting the structural and semantic context of nodes. Specifically, we investigate two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification, and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge. We describe our method to incorporate information from both these sources in a joint model and provide the first empirical results for KB completion on ATOMIC and evaluation with ranking metrics on ConceptNet. Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency. Further analysis on model predictions sheds light on the types of commonsense knowledge that language models capture well. |
Tasks | Knowledge Base Completion, Knowledge Graphs, Language Modelling, Link Prediction, Transfer Learning |
Published | 2019-10-07 |
URL | https://arxiv.org/abs/1910.02915v2 |
https://arxiv.org/pdf/1910.02915v2.pdf | |
PWC | https://paperswithcode.com/paper/exploiting-structural-and-semantic-context |
Repo | |
Framework | |
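The fusion idea in the abstract can be sketched as concatenating a graph-structure embedding with a language-model embedding of each node's free-form text, then scoring candidate edges. The bilinear decoder and all dimensions below are stand-ins, not the paper's exact architecture.

```python
# Sketch: fuse GCN node embeddings with LM embeddings of node text, project
# to a joint space, and score (head, relation, tail) candidates bilinearly.
import torch
import torch.nn as nn

class FusionLinkPredictor(nn.Module):
    def __init__(self, gcn_dim=200, lm_dim=768, dim=200, num_rel=34):
        super().__init__()
        self.proj = nn.Linear(gcn_dim + lm_dim, dim)
        self.rel = nn.Parameter(torch.randn(num_rel, dim, dim) * 0.01)

    def forward(self, gcn_emb, lm_emb, heads, rels, tails):
        # gcn_emb: (N, gcn_dim) from graph structure; lm_emb: (N, lm_dim) from text
        z = self.proj(torch.cat([gcn_emb, lm_emb], dim=-1))        # (N, dim)
        h, t = z[heads], z[tails]                                  # (B, dim)
        return torch.einsum('bi,bij,bj->b', h, self.rel[rels], t)  # edge scores
```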
Class-independent sequential full image segmentation, using a convolutional net that finds a segment within an attention region, given a pointer pixel within this segment
Title | Class-independent sequential full image segmentation, using a convolutional net that finds a segment within an attention region, given a pointer pixel within this segment |
Authors | Sagi Eppel |
Abstract | This work examines the use of a fully convolutional net (FCN) to find an image segment, given a pixel within this segment region. The net receives an image, a point in the image and a region of interest (RoI) mask. The net output is a binary mask of the segment in which the point is located. The region where the segment can be found is contained within the input RoI mask. Full image segmentation can be achieved by running this net sequentially, region-by-region on the image, and stitching the output segments into a single segmentation map. This simple method addresses two major challenges of image segmentation: 1) Segmentation of unknown categories that were not included in the training set. 2) Segmentation of both individual object instances (things) and non-objects (stuff), such as sky and vegetation. Hence, if the pointer pixel is located within a person in a group, the net will output a mask that covers that individual person; if the pointer pixel is located within the sky region, the net returns the region of the sky in the image. This is true even if no example for sky or person appeared in the training set. The net was tested and trained on the COCO panoptic dataset and achieved 67% IOU for segmentation of familiar classes (that were part of the net training set) and 53% IOU for segmentation of unfamiliar classes (that were not included in the training). |
Tasks | Semantic Segmentation |
Published | 2019-02-20 |
URL | http://arxiv.org/abs/1902.07810v2 |
http://arxiv.org/pdf/1902.07810v2.pdf | |
PWC | https://paperswithcode.com/paper/class-independent-sequential-full-image |
Repo | |
Framework | |
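The sequential region-by-region procedure in the abstract amounts to a simple stitching loop, sketched below. `net(image, point, roi)` is assumed to return a boolean segment mask; the pointer-selection rule (first unassigned pixel) is an illustrative choice.

```python
# Sketch of sequential full-image segmentation: repeatedly pick an unassigned
# pixel as the pointer, predict that pixel's segment within the remaining RoI,
# and stitch the results into a single segment map.
import numpy as np

def full_image_segmentation(image, net):
    h, w = image.shape[:2]
    seg_map = np.zeros((h, w), dtype=np.int32)   # 0 = not yet segmented
    next_id = 1
    while (seg_map == 0).any():
        roi = seg_map == 0                       # RoI = still-unsegmented region
        ys, xs = np.nonzero(roi)
        point = (int(ys[0]), int(xs[0]))         # pointer pixel inside the RoI
        mask = net(image, point, roi) & roi      # predicted segment, clipped to RoI
        mask[point] = True                       # guarantee progress each pass
        seg_map[mask] = next_id
        next_id += 1
    return seg_map
```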
Pentagon at MEDIQA 2019: Multi-task Learning for Filtering and Re-ranking Answers using Language Inference and Question Entailment
Title | Pentagon at MEDIQA 2019: Multi-task Learning for Filtering and Re-ranking Answers using Language Inference and Question Entailment |
Authors | Hemant Pugaliya, Karan Saxena, Shefali Garg, Sheetal Shalini, Prashant Gupta, Eric Nyberg, Teruko Mitamura |
Abstract | Parallel deep learning architectures like fine-tuned BERT and MT-DNN have quickly become the state of the art, bypassing previous deep and shallow learning methods by a large margin. More recently, pre-trained models from large related datasets have been able to perform well on many downstream tasks by just fine-tuning on domain-specific datasets. However, using powerful models on non-trivial tasks, such as ranking and large document classification, still remains a challenge due to input size limitations of parallel architecture and extremely small datasets (insufficient for fine-tuning). In this work, we introduce an end-to-end system, trained in a multi-task setting, to filter and re-rank answers in the medical domain. We use task-specific pre-trained models as deep feature extractors. Our model achieves the highest Spearman's Rho and Mean Reciprocal Rank of 0.338 and 0.9622, respectively, on the ACL-BioNLP workshop MEDIQA Question Answering shared task. |
Tasks | Document Classification, Multi-Task Learning, Question Answering |
Published | 2019-07-01 |
URL | https://arxiv.org/abs/1907.01643v1 |
https://arxiv.org/pdf/1907.01643v1.pdf | |
PWC | https://paperswithcode.com/paper/pentagon-at-mediqa-2019-multi-task-learning |
Repo | |
Framework | |
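A hedged sketch of the filter-and-re-rank setup: frozen task-specific encoders provide question–answer features, and two jointly trained heads decide whether a candidate answer is relevant and how it should rank. Feature dimensions and the keep/drop label convention are assumptions.

```python
# Sketch of multi-task filtering and re-ranking over precomputed QA features:
# one head filters candidates, the other scores the survivors for ranking.
import torch
import torch.nn as nn

class FilterRerank(nn.Module):
    def __init__(self, feat_dim=768):
        super().__init__()
        self.filter_head = nn.Linear(feat_dim, 2)   # drop / keep
        self.rank_head = nn.Linear(feat_dim, 1)     # relevance score

    def forward(self, qa_features):                 # (B, feat_dim), from frozen encoders
        return self.filter_head(qa_features), self.rank_head(qa_features).squeeze(-1)

def rerank(answers, qa_features, model):
    keep_logits, scores = model(qa_features)
    kept = keep_logits.argmax(dim=-1) == 1          # class 1 = keep (assumed)
    order = scores.masked_fill(~kept, float('-inf')).argsort(descending=True)
    return [answers[int(i)] for i in order if kept[int(i)]]
```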
Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research
Title | Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research |
Authors | Joel Z. Leibo, Edward Hughes, Marc Lanctot, Thore Graepel |
Abstract | Evolution has produced a multi-scale mosaic of interacting adaptive units. Innovations arise when perturbations push parts of the system away from stable equilibria into new regimes where previously well-adapted solutions no longer work. Here we explore the hypothesis that multi-agent systems sometimes display intrinsic dynamics arising from competition and cooperation that provide a naturally emergent curriculum, which we term an autocurriculum. The solution of one social task often begets new social tasks, continually generating novel challenges, and thereby promoting innovation. Under certain conditions these challenges may become increasingly complex over time, demanding that agents accumulate ever more innovations. |
Tasks | |
Published | 2019-03-02 |
URL | http://arxiv.org/abs/1903.00742v2 |
http://arxiv.org/pdf/1903.00742v2.pdf | |
PWC | https://paperswithcode.com/paper/autocurricula-and-the-emergence-of-innovation |
Repo | |
Framework | |
Appearance-based Gesture recognition in the compressed domain
Title | Appearance-based Gesture recognition in the compressed domain |
Authors | Shaojie Xu, Anvesha Amaravati, Justin Romberg, Arijit Raychowdhury |
Abstract | We propose a novel appearance-based gesture recognition algorithm using compressed domain signal processing techniques. Gesture features are extracted directly from the compressed measurements, which are the block averages and the coded linear combinations of the image sensor’s pixel values. We also improve both the computational efficiency and the memory requirement of the previous DTW-based K-NN gesture classifiers. Both simulation testing and hardware implementation strongly support the proposed algorithm. |
Tasks | Gesture Recognition |
Published | 2019-02-19 |
URL | http://arxiv.org/abs/1903.00100v1 |
http://arxiv.org/pdf/1903.00100v1.pdf | |
PWC | https://paperswithcode.com/paper/appearance-based-gesture-recognition-in-the |
Repo | |
Framework | |
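The feature-and-classifier pipeline in the abstract can be sketched directly: block averages serve as per-frame compressed features, and a DTW-based nearest-neighbour classifier compares feature sequences. Block size and the plain O(nm) DTW are illustrative choices; the paper also uses coded linear combinations of pixel values, omitted here.

```python
# Sketch of gesture classification from compressed measurements: per-frame
# block averages form a feature sequence, and a 1-NN classifier with dynamic
# time warping (DTW) distance picks the label of the closest training sequence.
import numpy as np

def block_averages(frame, block=8):
    """frame: 2-D grayscale array; returns the flattened block-average image."""
    h, w = frame.shape
    crop = frame[:h - h % block, :w - w % block]
    return crop.reshape(h // block, block, w // block, block).mean(axis=(1, 3)).ravel()

def dtw(a, b):
    """Classic O(nm) DTW between two sequences of feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf); D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(seq, train):                 # train: list of (feature_seq, label) pairs
    return min(train, key=lambda ex: dtw(seq, ex[0]))[1]
```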
ShapeMask: Learning to Segment Novel Objects by Refining Shape Priors
Title | ShapeMask: Learning to Segment Novel Objects by Refining Shape Priors |
Authors | Weicheng Kuo, Anelia Angelova, Jitendra Malik, Tsung-Yi Lin |
Abstract | Instance segmentation aims to detect and segment individual objects in a scene. Most existing methods rely on precise mask annotations of every category. However, it is difficult and costly to segment objects in novel categories because a large number of mask annotations is required. We introduce ShapeMask, which learns the intermediate concept of object shape to address the problem of generalization in instance segmentation to novel categories. ShapeMask starts with a bounding box detection and gradually refines it by first estimating the shape of the detected object through a collection of shape priors. Next, ShapeMask refines the coarse shape into an instance level mask by learning instance embeddings. The shape priors provide a strong cue for object-like prediction, and the instance embeddings model the instance specific appearance information. ShapeMask significantly outperforms the state-of-the-art by 6.4 and 3.8 AP when learning across categories, and obtains competitive performance in the fully supervised setting. It is also robust to inaccurate detections, decreased model capacity, and small training data. Moreover, it runs efficiently with 150ms inference time and trains within 11 hours on TPUs. With a larger backbone model, ShapeMask increases the gap with state-of-the-art to 9.4 and 6.2 AP across categories. Code will be released. |
Tasks | Instance Segmentation, Semantic Segmentation |
Published | 2019-04-05 |
URL | http://arxiv.org/abs/1904.03239v1 |
http://arxiv.org/pdf/1904.03239v1.pdf | |
PWC | https://paperswithcode.com/paper/shapemask-learning-to-segment-novel-objects |
Repo | |
Framework | |
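The shape-prior step can be pictured as predicting a weighted combination of class-agnostic prior masks from detection features; the later embedding-based refinement is omitted. Everything below (the weighting network, prior count, mask resolution, random priors in place of clustered ground-truth masks) is a stand-in, not ShapeMask's actual head.

```python
# Sketch of coarse shape estimation from priors: a small head predicts a
# softmax weighting over a bank of prior masks, whose weighted sum gives the
# coarse shape inside a detected box.
import torch
import torch.nn as nn

class ShapePriorHead(nn.Module):
    def __init__(self, feat_dim=256, num_priors=20, mask_size=32):
        super().__init__()
        # in practice the priors would come from clustering training masks;
        # random initialization here is purely illustrative
        self.priors = nn.Parameter(torch.rand(num_priors, mask_size, mask_size))
        self.weights = nn.Linear(feat_dim, num_priors)

    def forward(self, box_features):                        # (B, feat_dim)
        w = self.weights(box_features).softmax(dim=-1)      # (B, num_priors)
        return torch.einsum('bk,khw->bhw', w, self.priors)  # coarse (B, H, W) shape
```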
Efficient adjustment sets for population average treatment effect estimation in non-parametric causal graphical models
Title | Efficient adjustment sets for population average treatment effect estimation in non-parametric causal graphical models |
Authors | Andrea Rotnitzky, Ezequiel Smucler |
Abstract | The method of covariate adjustment is often used for estimation of population average treatment effects in observational studies. Graphical rules for determining all valid covariate adjustment sets from an assumed causal graphical model are well known. Restricting attention to causal linear models, a recent article derived two novel graphical criteria: one to compare the asymptotic variance of linear regression treatment effect estimators that control for certain distinct adjustment sets and another to identify the optimal adjustment set that yields the least squares treatment effect estimator with the smallest asymptotic variance among consistent adjusted least squares estimators. In this paper we show that the same graphical criteria can be used in non-parametric causal graphical models when treatment effects are estimated by contrasts involving non-parametrically adjusted estimators of the interventional means. We also provide a graphical criterion for determining the optimal adjustment set among the minimal adjustment sets, which is valid for both linear and non-parametric estimators. We provide a new graphical criterion for comparing time-dependent adjustment sets, that is, sets composed of covariates that adjust for future treatments and that are themselves affected by earlier treatments. We show by example that uniformly optimal time-dependent adjustment sets do not always exist. In addition, for point interventions, we provide a sound and complete graphical criterion for determining when a non-parametric optimally adjusted estimator of an interventional mean, or of a contrast of interventional means, is as efficient as an efficient estimator of the same parameter that exploits the information in the conditional independencies encoded in the non-parametric causal graphical model. |
Tasks | |
Published | 2019-12-01 |
URL | https://arxiv.org/abs/1912.00306v2 |
https://arxiv.org/pdf/1912.00306v2.pdf | |
PWC | https://paperswithcode.com/paper/efficient-adjustment-sets-for-population |
Repo | |
Framework | |
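The optimal adjustment set characterization this line of work builds on has a short graphical form: with cn the nodes on proper causal (directed) paths from treatment A to outcome Y, the candidate set is pa(cn) \ (cn ∪ {A}). The sketch below implements that criterion for a single point treatment in a DAG without latent variables; validity and amenability checks are omitted, so treat it as illustrative.

```python
# Sketch of the optimal adjustment set for a point treatment A and outcome Y
# in a causal DAG: parents of the causal nodes, excluding the causal nodes
# themselves and A.
import networkx as nx

def optimal_adjustment_set(G: nx.DiGraph, A, Y):
    de_A = nx.descendants(G, A)
    # causal nodes: vertices on a directed path from A to Y (excluding A)
    cn = {v for v in de_A if v == Y or Y in nx.descendants(G, v)}
    parents = set().union(*(set(G.predecessors(v)) for v in cn)) if cn else set()
    return parents - cn - {A}

# Example: A -> M -> Y with W confounding M and Y yields {W}.
G = nx.DiGraph([('A', 'M'), ('M', 'Y'), ('W', 'M'), ('W', 'Y')])
print(optimal_adjustment_set(G, 'A', 'Y'))  # {'W'}
```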