Paper Group ANR 1058
Estimating causal effects of time-dependent exposures on a binary endpoint in a high-dimensional setting
Title | Estimating causal effects of time-dependent exposures on a binary endpoint in a high-dimensional setting |
Authors | Vahé Asvatourian, Clélia Coutzac, Nathalie Chaput, Caroline Robert, Stefan Michiels, Emilie Lanoy |
Abstract | Recently, the intervention calculus when the DAG is absent (IDA) method was developed to estimate lower bounds of causal effects from observational high-dimensional data. Originally it was introduced to assess the effect of baseline biomarkers which do not vary over time. However, in many clinical settings, measurements of biomarkers are repeated at fixed time points during treatment exposure, and the method therefore needs to be extended. The purpose of this paper is thus to extend the first step of IDA, the Peter-Clark (PC) algorithm, to a time-dependent exposure in the context of a binary outcome. We generalised the PC-algorithm to take into account the chronological order of repeated measurements of the exposure, and propose applying IDA with our new version, the chronologically ordered PC-algorithm (COPC-algorithm). A simulation study was performed before applying the method to estimate causal effects of time-dependent immunological biomarkers on toxicity, death and progression in patients with metastatic melanoma. The simulation study showed that the completed partially directed acyclic graphs (CPDAGs) obtained using the COPC-algorithm were structurally closer to the true CPDAG than those obtained using the PC-algorithm. Causal effects were also more accurate when estimated from CPDAGs obtained with the COPC-algorithm. Moreover, CPDAGs obtained by the COPC-algorithm allowed the removal of non-chronological arrows, i.e. arrows from a variable measured at time t to a variable measured at an earlier time t' with t' < t. Bidirected edges were less frequent in CPDAGs obtained with the COPC-algorithm, consistent with the lower variability of the causal effects estimated from these CPDAGs. The COPC-algorithm thus provides CPDAGs that preserve the chronological structure present in the data, allowing estimation of lower bounds of the causal effect of time-dependent biomarkers. |
Tasks | |
Published | 2018-03-28 |
URL | http://arxiv.org/abs/1803.10535v2 |
PDF | http://arxiv.org/pdf/1803.10535v2.pdf |
PWC | https://paperswithcode.com/paper/estimating-causal-effects-of-time-dependent |
Repo | |
Framework | |
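The COPC-algorithm's distinguishing constraint is that edges must respect the measurement chronology. Below is a minimal sketch of that idea, assuming a skeleton of conditional-independence edges and a `tier` map from variable to measurement time (both hypothetical inputs, not the authors' code):

```python
# Hypothetical sketch: orient the edges of an undirected skeleton so that no
# arrow points from a later measurement time to an earlier one, mimicking the
# chronological constraint of the COPC-algorithm.

def orient_by_time(skeleton, tier):
    """skeleton: set of frozenset edges {a, b}; tier: dict var -> time index."""
    directed, undirected = set(), set()
    for edge in skeleton:
        a, b = tuple(edge)
        if tier[a] < tier[b]:
            directed.add((a, b))   # earlier variable points to later one
        elif tier[b] < tier[a]:
            directed.add((b, a))
        else:
            undirected.add(edge)   # same time point: leave to the PC rules
    return directed, undirected

# Toy example: a biomarker measured at t0 and t1, and a binary outcome.
skeleton = {frozenset(e) for e in [("bio_t0", "bio_t1"), ("bio_t1", "outcome")]}
tier = {"bio_t0": 0, "bio_t1": 1, "outcome": 2}
print(orient_by_time(skeleton, tier))
```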
Nightmare at test time: How punctuation prevents parsers from generalizing
Title | Nightmare at test time: How punctuation prevents parsers from generalizing |
Authors | Anders Søgaard, Miryam de Lhoneux, Isabelle Augenstein |
Abstract | Punctuation is a strong indicator of syntactic structure, and parsers trained on text with punctuation often rely heavily on this signal. Punctuation is a diversion, however, since human language processing does not rely on punctuation to the same extent, and in informal texts, we therefore often leave out punctuation. We also use punctuation ungrammatically for emphatic or creative purposes, or simply by mistake. We show that (a) dependency parsers are sensitive to both absence of punctuation and to alternative uses; (b) neural parsers tend to be more sensitive than vintage parsers; (c) training neural parsers without punctuation outperforms all out-of-the-box parsers across all scenarios where punctuation departs from standard punctuation. Our main experiments are on synthetically corrupted data to study the effect of punctuation in isolation and avoid potential confounds, but we also show effects on out-of-domain data. |
Tasks | |
Published | 2018-08-31 |
URL | http://arxiv.org/abs/1809.00070v1 |
PDF | http://arxiv.org/pdf/1809.00070v1.pdf |
PWC | https://paperswithcode.com/paper/nightmare-at-test-time-how-punctuation |
Repo | |
Framework | |
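A natural way to reproduce the paper's training-without-punctuation condition is to strip punctuation tokens from treebank sentences before training. A minimal sketch, assuming a simplified `(form, head_index)` sentence encoding rather than full CoNLL-U:

```python
# Hypothetical preprocessing sketch: remove punctuation tokens from a
# dependency-annotated sentence so a parser cannot rely on that signal.
import unicodedata

def is_punct(token: str) -> bool:
    return all(unicodedata.category(ch).startswith("P") for ch in token)

def strip_punctuation(sentence):
    """sentence: list of (form, head_index) pairs; indices 1-based, 0 = root."""
    keep = [i for i, (form, _) in enumerate(sentence, start=1) if not is_punct(form)]
    remap = {old: new for new, old in enumerate(keep, start=1)}
    remap[0] = 0
    # Tokens whose head was punctuation are reattached to root, a simple fallback.
    return [(form, remap.get(head, 0))
            for i, (form, head) in enumerate(sentence, start=1) if i in keep]

sent = [("Nightmare", 0), (",", 1), ("at", 4), ("test", 4), ("time", 1), ("!", 1)]
print(strip_punctuation(sent))
```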
Automated and Interpretable Patient ECG Profiles for Disease Detection, Tracking, and Discovery
Title | Automated and Interpretable Patient ECG Profiles for Disease Detection, Tracking, and Discovery |
Authors | Geoffrey H. Tison, Jeffrey Zhang, Francesca N. Delling, Rahul C. Deo |
Abstract | The electrocardiogram (ECG) has been in use for over 100 years and remains the most widely performed diagnostic test to characterize cardiac structure and electrical activity. We hypothesized that parallel advances in computing power, innovations in machine learning algorithms, and availability of large-scale digitized ECG data would enable extending the utility of the ECG beyond its current limitations, while at the same time preserving interpretability, which is fundamental to medical decision-making. We identified 36,186 ECGs from the UCSF database that were 1) in normal sinus rhythm and 2) would enable training of specific models for estimation of cardiac structure or function or detection of disease. We derived a novel model for ECG segmentation using convolutional neural networks (CNN) and Hidden Markov Models (HMM) and evaluated its output by comparing electrical interval estimates to 141,864 measurements from the clinical workflow. We built a 725-element patient-level ECG profile using downsampled segmentation data and trained machine learning models to estimate left ventricular mass, left atrial volume, mitral annulus e’ and to detect and track four diseases: pulmonary arterial hypertension (PAH), hypertrophic cardiomyopathy (HCM), cardiac amyloid (CA), and mitral valve prolapse (MVP). CNN-HMM derived ECG segmentation agreed with clinical estimates, with median absolute deviations (MAD) as a fraction of observed value of 0.6% for heart rate and 4% for QT interval. Patient-level ECG profiles enabled quantitative estimates of left ventricular and mitral annulus e’ velocity with good discrimination in binary classification models of left ventricular hypertrophy and diastolic function. Disease detection models achieved AUROCs ranging from 0.94 down to 0.77 for MVP. Top-ranked variables for all models included known ECG characteristics along with novel predictors of these traits/diseases. |
Tasks | Decision Making |
Published | 2018-07-06 |
URL | http://arxiv.org/abs/1807.02569v1 |
PDF | http://arxiv.org/pdf/1807.02569v1.pdf |
PWC | https://paperswithcode.com/paper/automated-and-interpretable-patient-ecg |
Repo | |
Framework | |
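The reported agreement metric, median absolute deviation (MAD) as a fraction of the observed value, is straightforward to compute. A minimal sketch with synthetic QT-interval numbers standing in for the clinical measurements:

```python
# Minimal sketch of the evaluation metric quoted in the abstract: median
# absolute deviation between model-derived and clinical interval estimates,
# expressed as a fraction of the observed value. Data here are synthetic.
import numpy as np

def mad_fraction(model_vals, clinical_vals):
    model_vals, clinical_vals = np.asarray(model_vals), np.asarray(clinical_vals)
    return np.median(np.abs(model_vals - clinical_vals) / clinical_vals)

rng = np.random.default_rng(0)
qt_clinical = rng.normal(400, 30, size=1000)           # QT intervals in ms
qt_model = qt_clinical + rng.normal(0, 20, size=1000)  # segmentation estimates
print(f"MAD fraction for QT: {mad_fraction(qt_model, qt_clinical):.3f}")
```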
Optimizing Photonic Nanostructures via Multi-fidelity Gaussian Processes
Title | Optimizing Photonic Nanostructures via Multi-fidelity Gaussian Processes |
Authors | Jialin Song, Yury S. Tokpanov, Yuxin Chen, Dagny Fleischman, Kate T. Fountaine, Harry A. Atwater, Yisong Yue |
Abstract | We apply numerical methods in combination with finite-difference time-domain (FDTD) simulations to optimize the transmission properties of plasmonic mirror color filters, using a multi-objective figure of merit over a five-dimensional parameter space and a novel multi-fidelity Gaussian process approach. We compare these results with conventional derivative-free global search algorithms, such as a (single-fidelity) Gaussian process optimization scheme and Particle Swarm Optimization, a method commonly used in the nanophotonics community that is implemented in the Lumerical commercial photonics software. We demonstrate the performance of various numerical optimization approaches on several pre-collected real-world datasets and show that, by properly trading off expensive information sources against cheap simulations, one can more effectively optimize the transmission properties with a fixed budget. |
Tasks | Gaussian Processes |
Published | 2018-11-15 |
URL | http://arxiv.org/abs/1811.07707v1 |
PDF | http://arxiv.org/pdf/1811.07707v1.pdf |
PWC | https://paperswithcode.com/paper/optimizing-photonic-nanostructures-via-multi |
Repo | |
Framework | |
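The abstract does not spell out the multi-fidelity construction; a common baseline is the auto-regressive two-fidelity model, in which the expensive function is modelled as a scaled cheap surrogate plus a GP on the residuals. A sketch under that assumption, with the Forrester test pair standing in for expensive and cheap FDTD simulations:

```python
# Sketch of a standard two-fidelity Gaussian-process baseline; this is a
# common construction, not necessarily the paper's exact method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_high(x):  # expensive "simulation"
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def f_low(x):   # cheap, biased approximation
    return 0.5 * f_high(x) + 10 * (x - 0.5) - 5

x_lo = np.linspace(0, 1, 11)[:, None]; y_lo = f_low(x_lo).ravel()
x_hi = np.array([[0.0], [0.4], [0.6], [1.0]]); y_hi = f_high(x_hi).ravel()

gp_lo = GaussianProcessRegressor(kernel=RBF(0.2)).fit(x_lo, y_lo)
rho = 2.0  # scaling between fidelities; assumed known here for simplicity
resid = y_hi - rho * gp_lo.predict(x_hi)
gp_res = GaussianProcessRegressor(kernel=RBF(0.2)).fit(x_hi, resid)

x = np.array([[0.5]])
pred = rho * gp_lo.predict(x) + gp_res.predict(x)
print(f"multi-fidelity prediction at 0.5: {pred[0]:.2f} (true {f_high(0.5):.2f})")
```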
Integrated Tools for Engineering Ontologies
Title | Integrated Tools for Engineering Ontologies |
Authors | V. Yu. Velychko, K. S. Malakhov, V. V. Semenkov, A. E. Strizhak |
Abstract | The article presents an overview of current specialized ontology engineering tools, as well as ontology-based text annotation tools. The main functions and features of these tools, along with their advantages and disadvantages, are discussed. A systematic comparative analysis of ontology engineering tools is presented. |
Tasks | |
Published | 2018-02-19 |
URL | http://arxiv.org/abs/1802.06821v1 |
PDF | http://arxiv.org/pdf/1802.06821v1.pdf |
PWC | https://paperswithcode.com/paper/integrated-tools-for-engineering-ontologies |
Repo | |
Framework | |
Enabling FAIR Research in Earth Science through Research Objects
Title | Enabling FAIR Research in Earth Science through Research Objects |
Authors | Andres Garcia-Silva, Jose Manuel Gomez-Perez, Raul Palma, Marcin Krystek, Simone Mantovani, Federica Foglini, Valentina Grande, Francesco De Leo, Stefano Salvi, Elisa Trasati, Vito Romaniello, Mirko Albani, Cristiano Silvagni, Rosemarie Leone, Fulvio Marelli, Sergio Albani, Michele Lazzarini, Hazel J. Napier, Helen M. Glaves, Timothy Aldridge, Charles Meertens, Fran Boler, Henry W. Loescher, Christine Laney, Melissa A Genazzio, Daniel Crawl, Ilkay Altintas |
Abstract | Data-intensive science communities are progressively adopting FAIR practices that enhance the visibility of scientific breakthroughs and enable reuse. At the core of this movement, research objects contain and describe scientific information and resources in a way compliant with the FAIR principles and sustain the development of key infrastructure and tools. This paper provides an account of the challenges, experiences and solutions involved in the adoption of FAIR around research objects over several Earth Science disciplines. During this journey, our work has been comprehensive, with outcomes including: an extended research object model adapted to the needs of earth scientists; the provisioning of digital object identifiers (DOI) to enable persistent identification and to give due credit to authors; the generation of content-based, semantically rich, research object metadata through natural language processing, enhancing visibility and reuse through recommendation systems and third-party search engines; and various types of checklists that provide a compact representation of research object quality as a key enabler of scientific reuse. All these results have been integrated in ROHub, a platform that provides research object management functionality to a wealth of applications and interfaces across different scientific communities. To monitor and quantify the community uptake of research objects, we have defined indicators and obtained measures via ROHub that are also discussed herein. |
Tasks | Recommendation Systems |
Published | 2018-09-27 |
URL | http://arxiv.org/abs/1809.10617v1 |
PDF | http://arxiv.org/pdf/1809.10617v1.pdf |
PWC | https://paperswithcode.com/paper/enabling-fair-research-in-earth-science |
Repo | |
Framework | |
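For orientation, a research object is essentially a typed bundle of resources plus FAIR-oriented metadata. The record below is an illustrative sketch; its field names are assumptions for this example, not the actual ROHub schema:

```python
# Illustrative, minimal research-object record of the kind the paper's
# extended model describes; field names are assumptions for the sketch.
research_object = {
    "id": "https://doi.org/10.xxxx/example",  # DOI for persistent identification
    "title": "Sea-floor habitat mapping bundle",
    "creators": ["A. Researcher", "B. Scientist"],
    "resources": [
        {"type": "dataset", "uri": "data/bathymetry.nc"},
        {"type": "workflow", "uri": "workflows/mapping.cwl"},
    ],
    # Content-based keywords, e.g. extracted with NLP for discoverability.
    "keywords": ["earth science", "bathymetry", "habitat mapping"],
    "quality_checklist": {"has_license": True, "resources_resolvable": True},
}
print(research_object["id"])
```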
Questioning the assumptions behind fairness solutions
Title | Questioning the assumptions behind fairness solutions |
Authors | Rebekah Overdorf, Bogdan Kulynych, Ero Balsa, Carmela Troncoso, Seda Gürses |
Abstract | In addition to their benefits, optimization systems can have negative economic, moral, social, and political effects on populations as well as their environments. Frameworks like fairness have been proposed to aid service providers in addressing subsequent bias and discrimination during data collection and algorithm design. However, recent reports of neglect, unresponsiveness, and malevolence cast doubt on whether service providers can effectively implement fairness solutions. These reports invite us to revisit assumptions made about the service providers in fairness solutions. Namely, that service providers have (i) the incentives or (ii) the means to mitigate optimization externalities. Moreover, the environmental impact of these systems suggests that we need (iii) novel frameworks that consider systems other than algorithmic decision-making and recommender systems, and (iv) solutions that go beyond removing related algorithmic biases. Going forward, we propose Protective Optimization Technologies that enable optimization subjects to defend against negative consequences of optimization systems. |
Tasks | Decision Making, Recommendation Systems |
Published | 2018-11-27 |
URL | http://arxiv.org/abs/1811.11293v1 |
PDF | http://arxiv.org/pdf/1811.11293v1.pdf |
PWC | https://paperswithcode.com/paper/questioning-the-assumptions-behind-fairness |
Repo | |
Framework | |
The Future of Prosody: It’s about Time
Title | The Future of Prosody: It’s about Time |
Authors | Dafydd Gibbon |
Abstract | Prosody is usually defined in terms of the three distinct but interacting domains of pitch, intensity and duration patterning, or, more generally, as phonological and phonetic properties of ‘suprasegmentals’, speech segments which are larger than consonants and vowels. Rather than taking this approach, the concept of multiple time domains for prosody processing is taken up, and methods of time domain analysis are discussed: annotation mining with timing dispersion measures, time tree induction, oscillator models in phonology and phonetics, and finally the use of the Amplitude Envelope Modulation Spectrum (AEMS). While frequency demodulation (in the form of pitch tracking) is a central issue in prosodic analysis, the focus in the present context is on amplitude envelope demodulation and on frequency zones in the long time-domain spectra of the demodulated envelope. A generalised view is taken of oscillation as iteration in abstract prosodic models and as modulation and demodulation of a variety of rhythms in the speech signal. |
Tasks | |
Published | 2018-04-23 |
URL | http://arxiv.org/abs/1804.09543v4 |
PDF | http://arxiv.org/pdf/1804.09543v4.pdf |
PWC | https://paperswithcode.com/paper/the-future-of-prosody-its-about-time |
Repo | |
Framework | |
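The AEMS pipeline the abstract describes (amplitude demodulation followed by a long-window spectrum of the envelope) can be sketched in a few lines; the synthetic amplitude-modulated tone below stands in for a speech signal:

```python
# Minimal AEMS sketch: demodulate the amplitude envelope with the Hilbert
# transform, then inspect the low-frequency spectrum of that envelope.
import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
# A 150 Hz "voice" carrier, amplitude-modulated at a 4 Hz syllable-like rate.
signal = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 150 * t)

envelope = np.abs(hilbert(signal))   # amplitude demodulation
envelope -= envelope.mean()          # drop the DC component
spectrum = np.abs(np.fft.rfft(envelope))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)

low = freqs < 32                     # speech rhythm lives well below ~32 Hz
print(f"dominant modulation frequency: {freqs[low][spectrum[low].argmax()]:.1f} Hz")
```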
Slugbot: An Application of a Novel and Scalable Open Domain Socialbot Framework
Title | Slugbot: An Application of a Novel and Scalable Open Domain Socialbot Framework |
Authors | Kevin K. Bowden, Jiaqi Wu, Shereen Oraby, Amita Misra, Marilyn Walker |
Abstract | In this paper we introduce a novel, open-domain socialbot for the Amazon Alexa Prize competition, aimed at carrying on friendly conversations with users on a variety of topics. We present our modular system, highlighting our different data sources and how we use the human mind as a model for data management. Additionally, we build and employ natural language understanding and information retrieval tools and APIs to expand our knowledge bases. We describe our semi-structured, scalable framework for crafting topic-specific dialogue flows, and give details on our dialogue management schemes and scoring mechanisms. Finally, we briefly evaluate the performance of our system and observe the challenges that an open-domain socialbot faces. |
Tasks | Dialogue Management, Information Retrieval |
Published | 2018-01-04 |
URL | http://arxiv.org/abs/1801.01531v1 |
PDF | http://arxiv.org/pdf/1801.01531v1.pdf |
PWC | https://paperswithcode.com/paper/slugbot-an-application-of-a-novel-and |
Repo | |
Framework | |
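A minimal sketch of what a semi-structured, topic-specific flow with a scoring mechanism might look like; the flows, keywords and scorer here are invented for illustration and are not Slugbot's actual implementation:

```python
# Hypothetical sketch: each topic flow advances through scripted turns, and a
# simple keyword-overlap scorer picks the flow that best matches the utterance.
FLOWS = {
    "movies": {"keywords": {"movie", "film", "actor"},
               "turns": ["What movie did you see last?", "Did you enjoy it?"]},
    "travel": {"keywords": {"trip", "travel", "visit"},
               "turns": ["Where would you love to travel?", "Why there?"]},
}

def score(flow, utterance):
    return len(set(utterance.lower().split()) & flow["keywords"])

def next_response(utterance, state):
    topic = max(FLOWS, key=lambda name: score(FLOWS[name], utterance))
    step = state.get(topic, 0)
    state[topic] = step + 1
    turns = FLOWS[topic]["turns"]
    return turns[min(step, len(turns) - 1)]

state = {}
print(next_response("I saw a great movie yesterday", state))
print(next_response("the actor was amazing", state))
```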
Robust Handling of Polysemy via Sparse Representations
Title | Robust Handling of Polysemy via Sparse Representations |
Authors | Abhijit Mahabal, Dan Roth, Sid Mittal |
Abstract | Words are polysemous and multi-faceted, with many shades of meaning. We suggest that sparse distributed representations are more suitable than other, commonly used, (dense) representations to express these multiple facets, and present Category Builder, a working system that, as we show, makes use of sparse representations to support multi-faceted lexical representations. We argue that the set expansion task is well suited to studying these meaning distinctions, since a word may belong to multiple sets with a different reason for membership in each. We therefore exhibit the performance of Category Builder on this task, while showing that our representation simultaneously captures analogy problems such as “the Ganga of Egypt” or “the Voldemort of Tolkien”. Category Builder is shown to be a more expressive lexical representation that outperforms dense representations such as Word2Vec on some analogy classes despite being shown only two of the three input terms. |
Tasks | |
Published | 2018-05-18 |
URL | http://arxiv.org/abs/1805.07398v1 |
PDF | http://arxiv.org/pdf/1805.07398v1.pdf |
PWC | https://paperswithcode.com/paper/robust-handling-of-polysemy-via-sparse |
Repo | |
Framework | |
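The set-expansion setting is easy to illustrate with sparse representations: membership is explained by the features shared across the seed set. A toy sketch with a hand-built vocabulary (illustrative only, not Category Builder's data):

```python
# Toy set expansion with sparse features: each word is a sparse set of context
# features, and candidates are scored by features shared with the seeds.
SPARSE = {  # word -> sparse set of active context features
    "ganga":    {"river", "india", "sacred"},
    "nile":     {"river", "egypt", "long"},
    "amazon":   {"river", "brazil", "rainforest"},
    "himalaya": {"mountain", "india", "sacred"},
}

def expand(seeds, vocab):
    # Features shared by every seed explain *why* the seeds form a set.
    shared = set.intersection(*(vocab[s] for s in seeds))
    scores = {w: len(vocab[w] & shared) for w in vocab if w not in seeds}
    return shared, sorted(scores, key=scores.get, reverse=True)

shared, ranked = expand({"ganga", "nile"}, SPARSE)
print(f"membership reason: {shared}")     # {'river'}
print(f"expansion candidates: {ranked}")  # amazon ranks above himalaya
```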
Anomaly detection in wide area network mesh using two machine learning anomaly detection algorithms
Title | Anomaly detection in wide area network mesh using two machine learning anomaly detection algorithms |
Authors | James Zhang, Ilija Vukotic, Robert Gardner |
Abstract | Anomaly detection is the practice of identifying items or events that do not conform to an expected behavior or do not correlate with other items in a dataset. It has previously been applied to areas such as intrusion detection, system health monitoring, and fraud detection in credit card transactions. In this paper, we describe a new method for detecting anomalous behavior over network performance data, gathered by perfSONAR, using two machine learning algorithms: Boosted Decision Trees (BDT) and Simple Feedforward Neural Network. The effectiveness of each algorithm was evaluated and compared. Both have shown sufficient performance and sensitivity. |
Tasks | Anomaly Detection, Fraud Detection, Intrusion Detection |
Published | 2018-01-30 |
URL | http://arxiv.org/abs/1801.10094v1 |
PDF | http://arxiv.org/pdf/1801.10094v1.pdf |
PWC | https://paperswithcode.com/paper/anomaly-detection-in-wide-area-network-mesh |
Repo | |
Framework | |
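The two model families the paper compares are readily sketched with scikit-learn; the synthetic features below stand in for perfSONAR network-performance measurements:

```python
# Sketch of the two model families: boosted decision trees and a simple
# feedforward network, trained on synthetic "network performance" features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for features such as throughput, latency, packet loss.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95],
                           random_state=0)  # ~5% anomalous samples
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                   random_state=0).fit(X_tr, y_tr)

print(f"BDT accuracy: {bdt.score(X_te, y_te):.3f}")
print(f"NN accuracy:  {nn.score(X_te, y_te):.3f}")
```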
Composite Optimization by Nonconvex Majorization-Minimization
Title | Composite Optimization by Nonconvex Majorization-Minimization |
Authors | Jonas Geiping, Michael Moeller |
Abstract | The minimization of a nonconvex composite function can model a variety of imaging tasks. A popular class of algorithms for solving such problems are majorization-minimization techniques which iteratively approximate the composite nonconvex function by a majorizing function that is easy to minimize. Most techniques, e.g. gradient descent, utilize convex majorizers in order to guarantee that the majorizer is easy to minimize. In our work we consider a natural class of nonconvex majorizers for these functions, and show that these majorizers are still sufficient for a globally convergent optimization scheme. Numerical results illustrate that by applying this scheme, one can often obtain superior local optima compared to previous majorization-minimization methods, when the nonconvex majorizers are solved to global optimality. Finally, we illustrate the behavior of our algorithm for depth super-resolution from raw time-of-flight data. |
Tasks | Super-Resolution |
Published | 2018-02-20 |
URL | http://arxiv.org/abs/1802.07072v2 |
PDF | http://arxiv.org/pdf/1802.07072v2.pdf |
PWC | https://paperswithcode.com/paper/composite-optimization-by-nonconvex |
Repo | |
Framework | |
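As context for the paper's nonconvex majorizers, the generic MM loop looks like this; the quadratic-majorizer special case below, which reduces to gradient descent, is the convex baseline the abstract contrasts against:

```python
# Generic majorization-minimization (MM) loop on a 1-D toy problem. With the
# convex quadratic majorizer g(x; x_k) = f(x_k) + f'(x_k)(x - x_k)
# + (L/2)(x - x_k)^2, minimizing the majorizer in closed form recovers a
# gradient step; the paper replaces this with nonconvex majorizers.
import math

def mm_minimize(grad, x0, lipschitz, steps=200):
    x = x0
    for _ in range(steps):
        x = x - grad(x) / lipschitz  # argmin of the quadratic majorizer at x
    return x

# Toy composite objective: f(x) = (x - 3)^2 + 0.5 * sin(x)^2, so
# f'(x) = 2(x - 3) + sin(x)cos(x) and f'' <= 3, giving a valid L.
grad = lambda x: 2 * (x - 3) + math.sin(x) * math.cos(x)
print(f"minimizer of the toy objective: {mm_minimize(grad, 0.0, 3.0):.4f}")
```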
A Deep Learning based Framework to Detect and Recognize Humans using Contactless Palmprints in the Wild
Title | A Deep Learning based Framework to Detect and Recognize Humans using Contactless Palmprints in the Wild |
Authors | Yang Liu, Ajay Kumar |
Abstract | Contactless and online palmprint identification offers improved user convenience, hygiene and security, and is highly desirable in a range of applications. This technical report details an accurate and generalizable deep learning-based framework to detect and recognize humans using contactless palmprint images in the wild. Our network is based on a fully convolutional architecture that generates deeply learned residual features. We design a soft-shifted triplet loss function to more effectively learn discriminative palmprint features. Online palmprint identification also requires a contactless palm detector, which we adapt and train from the Faster R-CNN architecture to detect the palmprint region under varying backgrounds. Our reproducible experimental results on publicly available contactless palmprint databases suggest that the proposed framework consistently outperforms several classical and state-of-the-art palmprint recognition methods. More importantly, unlike other popular methods in the literature, the model presented in this report offers superior generalization capability, as it does not require database-specific parameter tuning. |
Tasks | |
Published | 2018-12-29 |
URL | http://arxiv.org/abs/1812.11319v1 |
PDF | http://arxiv.org/pdf/1812.11319v1.pdf |
PWC | https://paperswithcode.com/paper/a-deep-learning-based-framework-to-detect-and |
Repo | |
Framework | |
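The soft-shifted triplet loss itself is the paper's contribution and is not specified in the abstract; as a reference point, here is a sketch of the standard triplet loss it extends:

```python
# Standard triplet loss in plain NumPy (not the paper's soft-shifted variant):
# pull an anchor towards a same-palm positive and push it away from a
# different-palm negative by at least a margin.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to negative
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
a = rng.normal(size=128)            # deep residual feature of an anchor palm
p = a + 0.1 * rng.normal(size=128)  # another image of the same palm
n = rng.normal(size=128)            # a different palm
print(f"loss: {triplet_loss(a, p, n):.3f}")  # ~0: triplet already satisfied
```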
Tongue image constitution recognition based on Complexity Perception method
Title | Tongue image constitution recognition based on Complexity Perception method |
Authors | Jiajiong Ma, Guihua Wen, Yang Hu, Tianyuan Chang, Haibin Zeng, Lijun Jiang, Jianzeng Qin |
Abstract | Background and Objective: In China, body constitution is highly related to the physiological and pathological functions of the human body and determines the tendency of disease, which is of great importance for treatment in clinical medicine. Tongue diagnosis, as a key part of Traditional Chinese Medicine inspection, is an important way to recognize the type of constitution. In order to deploy a tongue image constitution recognition system on a non-invasive mobile device to achieve fast, efficient and accurate constitution recognition, an efficient method is required to deal with the challenges of this kind of complex environment. Methods: In this work, we perform tongue area detection, tongue area calibration and constitution classification using methods based on deep convolutional neural networks. Owing to inconstant environmental conditions, the distribution of the pictures is uneven, which harms classification performance. To solve this problem, we propose a method based on the complexity of individual instances that divides the dataset into two subsets and classifies them separately, which is capable of improving classification accuracy. To evaluate the performance of our proposed method, we conduct experiments on three sizes of tongue datasets, in which a deep convolutional neural network method and a traditional digital image analysis method are respectively applied to extract features from the tongue images. The proposed method is combined with the base classifiers Softmax, SVM and Decision Tree, respectively. Results: As the experimental results show, our proposed method improves the classification accuracy by 1.135% on average and achieves 59.99% constitution classification accuracy. Conclusions: Experimental results on three datasets show that our proposed method can effectively improve the classification accuracy of tongue constitution recognition. |
Tasks | Calibration |
Published | 2018-03-01 |
URL | http://arxiv.org/abs/1803.00219v1 |
PDF | http://arxiv.org/pdf/1803.00219v1.pdf |
PWC | https://paperswithcode.com/paper/tongue-image-constitution-recognition-based |
Repo | |
Framework | |
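A sketch of the complexity-based splitting idea, with an assumed complexity proxy (prediction entropy of a probe classifier), since the abstract does not define the paper's exact complexity measure:

```python
# Sketch: score each instance with a complexity proxy, split the data into
# easy and hard subsets, and train a separate base classifier on each.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X, y)
proba = probe.predict_proba(X)
entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)  # complexity proxy
hard = entropy > np.median(entropy)  # above-median entropy = "complex"

clf_easy = SVC().fit(X[~hard], y[~hard])  # model for simple instances
clf_hard = SVC().fit(X[hard], y[hard])    # model for complex instances
# Training accuracy per subset, for illustration only.
print(f"easy: {clf_easy.score(X[~hard], y[~hard]):.3f}, "
      f"hard: {clf_hard.score(X[hard], y[hard]):.3f}")
```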
Wildest Faces: Face Detection and Recognition in Violent Settings
Title | Wildest Faces: Face Detection and Recognition in Violent Settings |
Authors | Mehmet Kerim Yucel, Yunus Can Bilge, Oguzhan Oguz, Nazli Ikizler-Cinbis, Pinar Duygulu, Ramazan Gokberk Cinbis |
Abstract | With the introduction of large-scale datasets and deep learning models capable of learning complex representations, impressive advances have emerged in face detection and recognition tasks. Despite such advances, existing datasets do not capture the difficulty of face recognition in the wildest scenarios, such as hostile disputes or fights. Furthermore, existing datasets do not represent completely unconstrained cases of low resolution, high blur and large pose/occlusion variances. To this end, we introduce the Wildest Faces dataset, which focuses on such adverse effects through violent scenes. The dataset consists of an extensive set of violent scenes of celebrities from movies. Our experimental results demonstrate that state-of-the-art techniques are not well-suited for violent scenes, and therefore, Wildest Faces is likely to stir further interest in face detection and recognition research. |
Tasks | Face Detection, Face Recognition |
Published | 2018-05-19 |
URL | http://arxiv.org/abs/1805.07566v1 |
PDF | http://arxiv.org/pdf/1805.07566v1.pdf |
PWC | https://paperswithcode.com/paper/wildest-faces-face-detection-and-recognition |
Repo | |
Framework | |