Paper Group ANR 291
Few-Shot and Zero-Shot Learning for Historical Text Normalization. Multi-Task Learning for Coherence Modeling. 5D Light Field Synthesis from a Monocular Video. Time-Varying Interaction Estimation Using Ensemble Methods. Multitask Learning for Blackmarket Tweet Detection. Space Navigator: a Tool for the Optimization of Collision Avoidance Maneuvers. …
Few-Shot and Zero-Shot Learning for Historical Text Normalization
Title | Few-Shot and Zero-Shot Learning for Historical Text Normalization |
Authors | Marcel Bollmann, Natalia Korchagina, Anders Søgaard |
Abstract | Historical text normalization often relies on small training datasets. Recent work has shown that multi-task learning can lead to significant improvements by exploiting synergies with related datasets, but there has been no systematic study of different multi-task learning architectures. This paper evaluates 63 multi-task learning configurations for sequence-to-sequence-based historical text normalization across ten datasets from eight languages, using autoencoding, grapheme-to-phoneme mapping, and lemmatization as auxiliary tasks. We observe consistent, significant improvements across languages when training data for the target task is limited, but minimal or no improvements when training data is abundant. We also show that zero-shot learning outperforms the simple, but relatively strong, identity baseline. |
Tasks | Lemmatization, Multi-Task Learning, Zero-Shot Learning |
Published | 2019-03-12 |
URL | https://arxiv.org/abs/1903.04870v2 |
https://arxiv.org/pdf/1903.04870v2.pdf | |
PWC | https://paperswithcode.com/paper/few-shot-and-zero-shot-learning-for |
Repo | |
Framework | |
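The identity baseline mentioned in the abstract — predict each historical form unchanged — is easy to make concrete. A minimal sketch (the word pairs below are hypothetical, for illustration only, not from the paper's datasets):

```python
def identity_baseline_accuracy(pairs):
    """Word accuracy of the identity baseline: a prediction is correct
    whenever the historical form already equals the modern form.
    pairs: iterable of (historical_form, modern_form) tuples."""
    correct = sum(1 for hist, modern in pairs if hist == modern)
    return correct / len(pairs)

if __name__ == "__main__":
    # Hypothetical Early New High German-style pairs.
    pairs = [("vnd", "und"), ("jn", "in"), ("tag", "tag"), ("haus", "haus")]
    print(identity_baseline_accuracy(pairs))  # 0.5: half the forms are already modern
```

Because many historical forms are already identical to their modern counterparts, this baseline is "relatively strong", which is what makes beating it in the zero-shot setting notable.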
Multi-Task Learning for Coherence Modeling
Title | Multi-Task Learning for Coherence Modeling |
Authors | Youmna Farag, Helen Yannakoudakis |
Abstract | We address the task of assessing discourse coherence, an aspect of text quality that is essential for many NLP tasks, such as summarization and language assessment. We propose a hierarchical neural network trained in a multi-task fashion that learns to predict a document-level coherence score (at the network’s top layers) along with word-level grammatical roles (at the bottom layers), taking advantage of inductive transfer between the two tasks. We assess the extent to which our framework generalizes to different domains and prediction tasks, and demonstrate its effectiveness not only on standard binary evaluation coherence tasks, but also on real-world tasks involving the prediction of varying degrees of coherence, achieving a new state of the art. |
Tasks | Multi-Task Learning |
Published | 2019-07-04 |
URL | https://arxiv.org/abs/1907.02427v1 |
https://arxiv.org/pdf/1907.02427v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-task-learning-for-coherence-modeling |
Repo | |
Framework | |
5D Light Field Synthesis from a Monocular Video
Title | 5D Light Field Synthesis from a Monocular Video |
Authors | Kyuho Bae, Andre Ivan, Hajime Nagahara, In Kyu Park |
Abstract | Commercially available light field cameras have difficulty capturing 5D (4D + time) light field videos: they can capture only still light field images, or are too expensive for ordinary users to record light field video. To tackle this problem, we propose a deep learning-based method for synthesizing a light field video from a monocular video. Because no light field video dataset is available, we propose a new synthetic dataset that renders photorealistic scenes using the UnrealCV rendering engine. The proposed deep learning framework synthesizes the light field video with a full set (9$\times$9) of sub-aperture images from a normal monocular video. The proposed network consists of three sub-networks, namely, feature extraction, 5D light field video synthesis, and temporal consistency refinement. Experimental results show that our model can successfully synthesize the light field video for synthetic and actual scenes and outperforms previous frame-by-frame methods quantitatively and qualitatively. The synthesized light field can be used for conventional light field applications, namely, depth estimation, viewpoint change, and refocusing. |
Tasks | Depth Estimation |
Published | 2019-12-23 |
URL | https://arxiv.org/abs/1912.10687v1 |
https://arxiv.org/pdf/1912.10687v1.pdf | |
PWC | https://paperswithcode.com/paper/5d-light-field-synthesis-from-a-monocular |
Repo | |
Framework | |
Time-Varying Interaction Estimation Using Ensemble Methods
Title | Time-Varying Interaction Estimation Using Ensemble Methods |
Authors | Brandon Oselio, Amir Sadeghian, Silvio Savarese, Alfred Hero |
Abstract | Directed information (DI) is a useful tool to explore time-directed interactions in multivariate data. However, as originally formulated, DI is not well suited to interactions that change over time. In previous work, adaptive directed information (ADI) was introduced to accommodate non-stationarity while still preserving the utility of DI for discovering complex dependencies between entities. There are many design decisions and parameters that are crucial to the effectiveness of ADI. Here, we apply ideas from ensemble learning to alleviate this issue, allowing for a more robust estimator for exploratory data analysis. We apply these techniques to interaction estimation in a crowded scene, using the Stanford drone dataset as an example. |
Tasks | |
Published | 2019-06-25 |
URL | https://arxiv.org/abs/1906.10746v1 |
https://arxiv.org/pdf/1906.10746v1.pdf | |
PWC | https://paperswithcode.com/paper/time-varying-interaction-estimation-using |
Repo | |
Framework | |
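ADI itself is involved, but the ensemble idea — average estimators with different parameter settings rather than committing to one — can be sketched with a deliberately simplified stand-in: exponentially weighted running estimators at several forgetting factors, averaged together. The estimator and the forgetting factors below are hypothetical simplifications, not the paper's ADI:

```python
import numpy as np

def adaptive_estimate(x, alpha):
    """Exponentially weighted running mean of a 1-D signal: a toy stand-in
    for one adaptive estimator with forgetting factor alpha."""
    est = np.zeros(len(x), dtype=float)
    acc = 0.0
    for t, v in enumerate(x):
        acc = alpha * acc + (1 - alpha) * v
        est[t] = acc
    return est

def ensemble_estimate(x, alphas=(0.5, 0.8, 0.95)):
    """Average several adaptive estimators, mirroring the ensemble idea:
    no single forgetting factor has to be chosen in advance."""
    return np.mean([adaptive_estimate(x, a) for a in alphas], axis=0)
```

The ensemble trades a little responsiveness for robustness to a badly chosen parameter, which is the motivation the abstract gives for applying ensemble learning to ADI.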
Multitask Learning for Blackmarket Tweet Detection
Title | Multitask Learning for Blackmarket Tweet Detection |
Authors | Udit Arora, William Scott Paka, Tanmoy Chakraborty |
Abstract | Online social media platforms have made the world more connected than ever before, making it easier for everyone to spread their content across a wide variety of audiences. Twitter is one such popular platform where people publish tweets to spread their messages to everyone. Twitter allows users to Retweet other users’ tweets in order to broadcast them to their own network. The more retweets a particular tweet gets, the faster it spreads. This creates an incentive for people to artificially inflate the reach of their tweets by using blackmarket services that provide inorganic appraisals for their content. In this paper, we attempt to detect tweets that have been posted to these blackmarket services in order to gain artificially boosted retweets. We use a multitask learning framework to leverage soft parameter sharing between a classification task and a regression task on separate inputs. This allows us to effectively detect tweets posted to these blackmarket services, achieving an F1-score of 0.89 when classifying tweets as blackmarket or genuine. |
Tasks | |
Published | 2019-07-09 |
URL | https://arxiv.org/abs/1907.04072v1 |
https://arxiv.org/pdf/1907.04072v1.pdf | |
PWC | https://paperswithcode.com/paper/multitask-learning-for-blackmarket-tweet |
Repo | |
Framework | |
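Soft parameter sharing, as opposed to a single shared trunk, keeps a separate network per task and instead penalizes the distance between corresponding weights. A minimal sketch of such a penalty (the regularization strength `lam` is a hypothetical hyperparameter; the paper's actual architecture and losses are not reproduced here):

```python
import numpy as np

def soft_sharing_penalty(weights_a, weights_b, lam=0.1):
    """Soft parameter sharing regularizer: sum of squared L2 distances
    between corresponding layer weights of two task-specific networks,
    added to the combined task loss during training."""
    return lam * sum(np.sum((wa - wb) ** 2) for wa, wb in zip(weights_a, weights_b))
```

During joint training this term pulls the two networks toward each other without forcing them to be identical, letting the classification and regression tasks share structure while keeping task-specific capacity.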
Space Navigator: a Tool for the Optimization of Collision Avoidance Maneuvers
Title | Space Navigator: a Tool for the Optimization of Collision Avoidance Maneuvers |
Authors | Leonid Gremyachikh, Dmitrii Dubov, Nikita Kazeev, Andrey Kulibaba, Andrey Skuratov, Anton Tereshkin, Andrey Ustyuzhanin, Lubov Shiryaeva, Sergej Shishkin |
Abstract | The number of space objects will grow severalfold in the next few years due to planned launches of constellations of thousands of microsatellites. This leads to a significant increase in the threat of satellite collisions. Spacecraft must undertake collision avoidance maneuvers to mitigate the risk. According to publicly available information, conjunction events are currently handled manually by operators on the ground. Manual maneuver planning requires qualified personnel and will be impractical for constellations of thousands of satellites. In this paper we propose a new modular autonomous collision avoidance system called “Space Navigator”. It is based on a novel maneuver optimization approach that combines domain knowledge with reinforcement learning methods. |
Tasks | |
Published | 2019-02-06 |
URL | http://arxiv.org/abs/1902.02095v1 |
http://arxiv.org/pdf/1902.02095v1.pdf | |
PWC | https://paperswithcode.com/paper/space-navigator-a-tool-for-the-optimization |
Repo | |
Framework | |
Unsupervised Behavior Change Detection in Multidimensional Data Streams for Maritime Traffic Monitoring
Title | Unsupervised Behavior Change Detection in Multidimensional Data Streams for Maritime Traffic Monitoring |
Authors | Lucas May Petry, Amilcar Soares, Vania Bogorny, Stan Matwin |
Abstract | The worldwide growth of maritime traffic and the development of the Automatic Identification System (AIS) has led to advances in monitoring systems for preventing vessel accidents and detecting illegal activities. In this work, we describe research gaps and challenges in machine learning for vessel behavior change and event detection, considering several constraints imposed by real-time data streams and the maritime monitoring domain. As a starting point, we investigate how unsupervised and semi-supervised change detection methods may be employed for identifying shifts in vessel behavior, aiming to detect and label unusual events. |
Tasks | |
Published | 2019-08-14 |
URL | https://arxiv.org/abs/1908.05103v1 |
https://arxiv.org/pdf/1908.05103v1.pdf | |
PWC | https://paperswithcode.com/paper/unsupervised-behavior-change-detection-in |
Repo | |
Framework | |
Don’t paraphrase, detect! Rapid and Effective Data Collection for Semantic Parsing
Title | Don’t paraphrase, detect! Rapid and Effective Data Collection for Semantic Parsing |
Authors | Jonathan Herzig, Jonathan Berant |
Abstract | A major hurdle on the road to conversational interfaces is the difficulty in collecting data that maps language utterances to logical forms. One prominent approach for data collection has been to automatically generate pseudo-language paired with logical forms, and paraphrase the pseudo-language to natural language through crowdsourcing (Wang et al., 2015). However, this data collection procedure often leads to low performance on real data, due to a mismatch between the true distribution of examples and the distribution induced by the data collection procedure. In this paper, we thoroughly analyze two sources of mismatch in this process: the mismatch in logical form distribution and the mismatch in language distribution between the true and induced distributions. We quantify the effects of these mismatches, and propose a new data collection approach that mitigates them. Assuming access to unlabeled utterances from the true distribution, we combine crowdsourcing with a paraphrase model to detect correct logical forms for the unlabeled utterances. On two datasets, our method leads to 70.6 accuracy on average on the true distribution, compared to 51.3 in paraphrasing-based data collection. |
Tasks | Semantic Parsing |
Published | 2019-08-26 |
URL | https://arxiv.org/abs/1908.09940v2 |
https://arxiv.org/pdf/1908.09940v2.pdf | |
PWC | https://paperswithcode.com/paper/dont-paraphrase-detect-rapid-and-effective |
Repo | |
Framework | |
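The detection step the abstract describes — score candidate (canonical utterance, logical form) pairs against a real unlabeled utterance and keep the best — can be sketched with a toy word-overlap scorer standing in for the trained paraphrase model. The utterances and logical forms below are illustrative inventions, not from the paper's datasets:

```python
def overlap_score(u, v):
    """Toy paraphrase score: Jaccard overlap of word sets. A real system
    would use a trained paraphrase model here."""
    a, b = set(u.lower().split()), set(v.lower().split())
    return len(a & b) / max(len(a | b), 1)

def detect_logical_form(utterance, candidates, score=overlap_score):
    """Return the logical form whose canonical (pseudo-language) utterance
    the scoring model rates as the best paraphrase of the real utterance."""
    best = max(candidates, key=lambda c: score(utterance, c[0]))
    return best[1]
```

The point of "detect, don't paraphrase" is that the unlabeled utterances come from the true distribution, so the resulting training pairs avoid the distribution mismatch of crowdsourced paraphrasing.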
Multi-Differential Fairness Auditor for Black Box Classifiers
Title | Multi-Differential Fairness Auditor for Black Box Classifiers |
Authors | Xavier Gitiaux, Huzefa Rangwala |
Abstract | Machine learning algorithms are increasingly involved in sensitive decision-making processes with adversarial implications for individuals. This paper presents mdfa, an approach that identifies the characteristics of the victims of a classifier’s discrimination. We measure discrimination as a violation of multi-differential fairness: a guarantee that a black-box classifier’s outcomes do not leak information about the sensitive attributes of a small group of individuals. We reduce the problem of identifying worst-case violations to matching distributions and predicting where sensitive attributes and classifier outcomes coincide. We apply mdfa to a recidivism risk assessment classifier and demonstrate that individuals identified as African-American with little criminal history are three times more likely to be considered at high risk of violent recidivism than similar individuals who are not African-American. |
Tasks | Decision Making |
Published | 2019-03-18 |
URL | http://arxiv.org/abs/1903.07609v1 |
http://arxiv.org/pdf/1903.07609v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-differential-fairness-auditor-for-black |
Repo | |
Framework | |
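A differential-fairness-style guarantee bounds how much outcome rates may differ between a subgroup and everyone else. The check below is a simplified sketch of that flavor of test — it compares positive-outcome rates directly, whereas the paper's mdfa audits information leakage about sensitive attributes; the tolerance `eps` is a hypothetical parameter:

```python
import math

def fairness_violation(outcomes_group, outcomes_rest, eps=0.1):
    """Flag a violation when the ratio of positive-outcome rates between a
    subgroup and the rest of the population exceeds e**eps (in either
    direction). outcomes_* are iterables of 0/1 classifier outcomes."""
    p = sum(outcomes_group) / len(outcomes_group)
    q = sum(outcomes_rest) / len(outcomes_rest)
    if p == 0 or q == 0:
        return p != q  # one group never receives the positive outcome
    ratio = max(p / q, q / p)
    return ratio > math.exp(eps)
```

Under this sketch, the abstract's headline finding — a threefold difference in high-risk labels — would correspond to a rate ratio of 3, far above e^0.1 ≈ 1.1.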
ViBE: Dressing for Diverse Body Shapes
Title | ViBE: Dressing for Diverse Body Shapes |
Authors | Wei-Lin Hsiao, Kristen Grauman |
Abstract | Body shape plays an important role in determining what garments will best suit a given person, yet today’s clothing recommendation methods take a “one shape fits all” approach. These body-agnostic vision methods and datasets are a barrier to inclusion, ill-equipped to provide good suggestions for diverse body shapes. We introduce ViBE, a VIsual Body-aware Embedding that captures clothing’s affinity with different body shapes. Given an image of a person, the proposed embedding identifies garments that will flatter her specific body shape. We show how to learn the embedding from an online catalog displaying fashion models of various shapes and sizes wearing the products, and we devise a method to explain the algorithm’s suggestions for well-fitting garments. We apply our approach to a dataset of diverse subjects, and demonstrate its strong advantages over the status quo body-agnostic recommendation, both according to automated metrics and human opinion. |
Tasks | |
Published | 2019-12-13 |
URL | https://arxiv.org/abs/1912.06697v2 |
https://arxiv.org/pdf/1912.06697v2.pdf | |
PWC | https://paperswithcode.com/paper/dressing-for-diverse-body-shapes |
Repo | |
Framework | |
REFUGE Challenge: A Unified Framework for Evaluating Automated Methods for Glaucoma Assessment from Fundus Photographs
Title | REFUGE Challenge: A Unified Framework for Evaluating Automated Methods for Glaucoma Assessment from Fundus Photographs |
Authors | José Ignacio Orlando, Huazhu Fu, João Barbossa Breda, Karel van Keer, Deepti R. Bathula, Andrés Diaz-Pinto, Ruogu Fang, Pheng-Ann Heng, Jeyoung Kim, JoonHo Lee, Joonseok Lee, Xiaoxiao Li, Peng Liu, Shuai Lu, Balamurali Murugesan, Valery Naranjo, Sai Samarth R. Phaye, Sharath M. Shankaranarayana, Apoorva Sikka, Jaemin Son, Anton van den Hengel, Shujun Wang, Junyan Wu, Zifeng Wu, Guanghui Xu, Yongli Xu, Pengshuai Yin, Fei Li, Xiulan Zhang, Yanwu Xu, Hrvoje Bogunović |
Abstract | Glaucoma is one of the leading causes of irreversible but preventable blindness in working-age populations. Color fundus photography (CFP) is the most cost-effective imaging modality to screen for retinal disorders. However, its application to glaucoma has been limited to the computation of a few related biomarkers such as the vertical cup-to-disc ratio. Deep learning approaches, although widely applied for medical image analysis, have not been extensively used for glaucoma assessment due to the limited size of the available data sets. Furthermore, the lack of a standardized benchmark strategy makes it difficult to compare existing methods in a uniform way. In order to overcome these issues, we set up the Retinal Fundus Glaucoma Challenge, REFUGE (\url{https://refuge.grand-challenge.org}), held in conjunction with MICCAI 2018. The challenge consisted of two primary tasks, namely optic disc/cup segmentation and glaucoma classification. As part of REFUGE, we have publicly released a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest existing one. We have also built an evaluation framework to ease and ensure fairness in the comparison of different models, encouraging the development of novel techniques in the field. Twelve teams qualified and participated in the online challenge. This paper summarizes their methods and analyzes their corresponding results. In particular, we observed that two of the top-ranked teams outperformed two human experts in the glaucoma classification task. Furthermore, the segmentation results were in general consistent with the ground truth annotations, with complementary outcomes that can be further exploited by ensembling the results. |
Tasks | |
Published | 2019-10-08 |
URL | https://arxiv.org/abs/1910.03667v1 |
https://arxiv.org/pdf/1910.03667v1.pdf | |
PWC | https://paperswithcode.com/paper/refuge-challenge-a-unified-framework-for |
Repo | |
Framework | |
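The vertical cup-to-disc ratio mentioned in the abstract follows directly from the challenge's segmentation task outputs: it is the vertical extent of the cup mask divided by that of the disc mask. A minimal sketch over binary masks (a simplification; REFUGE's official evaluation has its own protocol):

```python
import numpy as np

def vertical_cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks:
    height in pixel rows of the optic cup divided by that of the disc."""
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]
        return 0 if rows.size == 0 else rows[-1] - rows[0] + 1
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)
```

Larger ratios are associated with glaucomatous damage, which is why disc/cup segmentation and glaucoma classification were paired as the challenge's two tasks.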
Decoupled Certainty-Driven Consistency Loss for Semi-supervised Learning
Title | Decoupled Certainty-Driven Consistency Loss for Semi-supervised Learning |
Authors | Yiting Li, Lu Liu, Robby T. Tan |
Abstract | One of the successful approaches in semi-supervised learning is based on the consistency loss between different predictions under random perturbations. Typically, a student model is trained to be consistent with the teacher’s predictions for inputs under different perturbations. However, to be successful, the teacher’s pseudo labels must be of good quality, otherwise the whole learning process will fail. Unfortunately, existing methods do not assess the quality of the teacher’s pseudo labels. In this paper, we propose a novel certainty-driven consistency loss (CCL) that exploits predictive uncertainty information in the consistency loss to let the student dynamically learn from reliable targets. Specifically, we propose two approaches, Filtering CCL and Temperature CCL, which either filter out uncertain predictions or pay less attention to the uncertain ones in the consistency regularization. We combine the two approaches, which we call FT-CCL, to further improve the consistency learning framework. Based on our experiments, FT-CCL shows improvements on a general semi-supervised learning task and robustness to noisy labels. We further introduce a novel mutual learning method, where one student is decoupled from its teacher and learns from the other student’s teacher, in order to acquire additional knowledge. Experimental results demonstrate the advantages of our method over state-of-the-art semi-supervised deep learning methods. |
Tasks | |
Published | 2019-01-17 |
URL | https://arxiv.org/abs/1901.05657v3 |
https://arxiv.org/pdf/1901.05657v3.pdf | |
PWC | https://paperswithcode.com/paper/certainty-driven-consistency-loss-for-semi |
Repo | |
Framework | |
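The Filtering CCL idea — include a sample's consistency term only when the teacher is certain about it — can be sketched in a few lines. Here certainty is taken as the teacher's maximum class probability, and the `threshold` value is a hypothetical hyperparameter; the paper's exact certainty measure and loss weighting are not reproduced:

```python
import numpy as np

def filtering_ccl(student_probs, teacher_probs, threshold=0.8):
    """Filtering-style certainty-driven consistency loss sketch: squared-error
    consistency between student and teacher class distributions, averaged only
    over samples where the teacher's max probability exceeds the threshold.
    Uncertain pseudo labels contribute nothing to the loss."""
    certainty = teacher_probs.max(axis=1)
    mask = certainty > threshold
    if not mask.any():
        return 0.0
    diff = student_probs[mask] - teacher_probs[mask]
    return float(np.mean(np.sum(diff ** 2, axis=1)))
```

Temperature CCL would instead down-weight uncertain samples smoothly rather than dropping them outright; FT-CCL combines both behaviors.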
Unsupervised Prediction of Negative Health Events Ahead of Time
Title | Unsupervised Prediction of Negative Health Events Ahead of Time |
Authors | Anahita Hosseini, Majid Sarrafzadeh |
Abstract | The emergence of continuous health monitoring and the availability of an enormous amount of time series data has provided a great opportunity for the advancement of personal health tracking. In recent years, unsupervised learning methods have drawn special attention from researchers to tackle the sparse annotation of health data, and real-time detection of anomalies has been a central problem of interest. However, one problem that has not been well addressed is the early prediction of forthcoming negative health events. Early signs of an event can introduce subtle and gradual changes in the health signal prior to its onset, the detection of which can be invaluable for effective prevention. In this study, we first demonstrate the shortcomings of widely adopted anomaly detection methods in uncovering the changes that precede a negative health event. We then propose a framework that relies on online clustering of signal segment representations, which are learned automatically by a specially designed LSTM auto-encoder. We show the effectiveness of our approach by predicting bradycardia events in infants in the MIT-PICS dataset 1.3 minutes ahead of time with a 68% AUC score on average, using no label supervision. The results of our study suggest the viability of our approach for the early detection of health events in other applications as well. |
Tasks | Anomaly Detection, Time Series |
Published | 2019-01-31 |
URL | http://arxiv.org/abs/1901.11168v1 |
http://arxiv.org/pdf/1901.11168v1.pdf | |
PWC | https://paperswithcode.com/paper/unsupervised-prediction-of-negative-health |
Repo | |
Framework | |
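The online-clustering half of the framework can be sketched with a toy incremental clusterer over segment embeddings: assign each new embedding to the nearest centroid, or open a new cluster when nothing is close enough. The `radius` parameter is hypothetical, and the LSTM auto-encoder that would produce the embeddings is omitted:

```python
import numpy as np

class OnlineClusterer:
    """Toy online clustering of segment embeddings. A newly opened cluster
    (an embedding far from all existing centroids) can serve as an
    early-warning signal of changing behavior, loosely following the
    paper's idea of flagging drift before the event itself."""

    def __init__(self, radius=1.0):
        self.radius = radius
        self.centroids = []  # running cluster centers
        self.counts = []     # members per cluster

    def update(self, x):
        """Assign embedding x to a cluster and return its index."""
        x = np.asarray(x, dtype=float)
        if self.centroids:
            dists = [np.linalg.norm(x - c) for c in self.centroids]
            i = int(np.argmin(dists))
            if dists[i] <= self.radius:
                self.counts[i] += 1
                # incremental mean update of the matched centroid
                self.centroids[i] += (x - self.centroids[i]) / self.counts[i]
                return i
        self.centroids.append(x.copy())
        self.counts.append(1)
        return len(self.centroids) - 1
```

Because clustering is unsupervised and incremental, the scheme needs no event labels and can run on a live stream, matching the "no label supervision" claim.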
A Multi-language Platform for Generating Algebraic Mathematical Word Problems
Title | A Multi-language Platform for Generating Algebraic Mathematical Word Problems |
Authors | Vijini Liyanage, Surangika Ranathunga |
Abstract | Existing approaches for automatically generating mathematical word problems lack customizability and creativity due to the inherent nature of the template-based mechanisms they employ. We present a solution to this problem using deep neural language generation. Our approach uses a character-level Long Short-Term Memory (LSTM) network to generate word problems, and uses POS (part-of-speech) tags to resolve the constraints found in the generated problems. Our approach is capable of generating mathematical word problems in both English and Sinhala with an accuracy of over 90%. |
Tasks | Text Generation |
Published | 2019-11-19 |
URL | https://arxiv.org/abs/1912.01110v1 |
https://arxiv.org/pdf/1912.01110v1.pdf | |
PWC | https://paperswithcode.com/paper/a-multi-language-platform-for-generating |
Repo | |
Framework | |
Least-squares registration of point sets over SE(d) using closed-form projections
Title | Least-squares registration of point sets over SE(d) using closed-form projections |
Authors | Sk. Miraj Ahmed, Niladri Ranjan Das, Kunal Narayan Chaudhury |
Abstract | Consider the problem of registering multiple point sets in some $d$-dimensional space using rotations and translations. Assume that there are sets with common points, and moreover the pairwise correspondences are known for such sets. We consider a least-squares formulation of this problem, where the variables are the transforms associated with the point sets. The present novelty is that we reduce this nonconvex problem to an optimization over the positive semidefinite cone, where the objective is linear but the constraints are nevertheless nonconvex. We propose to solve this using variable splitting and the alternating directions method of multipliers (ADMM). Due to the linearity of the objective and the structure of constraints, the ADMM subproblems are given by projections with closed-form solutions. In particular, for $m$ point sets, the dominant cost per iteration is the partial eigendecomposition of an $md \times md$ matrix, and $m-1$ singular value decompositions of $d \times d$ matrices. We empirically show that for appropriate parameter settings, the proposed solver has a large convergence basin and is stable under perturbations. As applications, we use our method for $2$D shape matching and $3$D multiview registration. In either application, we model the shapes/scans as point sets and determine the pairwise correspondences using ICP. In particular, our algorithm compares favorably with existing methods for multiview reconstruction in terms of timing and accuracy. |
Tasks | |
Published | 2019-04-08 |
URL | http://arxiv.org/abs/1904.04218v2 |
http://arxiv.org/pdf/1904.04218v2.pdf | |
PWC | https://paperswithcode.com/paper/least-squares-registration-of-point-sets-over |
Repo | |
Framework | |
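The "closed-form projections" in the title refer to the ADMM subproblems: each rotation update reduces to projecting a small d x d matrix onto the rotation group, which has a known SVD-based solution. A minimal sketch of that single building block (the full ADMM splitting over all m point sets is not reproduced here):

```python
import numpy as np

def project_to_rotation(M):
    """Closed-form projection of a d x d matrix onto SO(d): take the SVD
    M = U S V^T and return U V^T, flipping the last left-singular vector
    if needed so the determinant is +1 (a proper rotation, not a
    reflection)."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        U[:, -1] *= -1
        R = U @ Vt
    return R
```

This is the same projection that appears in orthogonal Procrustes problems; its cost is one SVD of a d x d matrix, which is why the paper's per-iteration cost is dominated by m-1 such small SVDs plus one partial eigendecomposition.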