October 19, 2019

3067 words 15 mins read

Paper Group ANR 404

A Systematic Review of Automated Grammar Checking in English Language. A Framework for Implementing Machine Learning on Omics Data. Computational Analysis of Insurance Complaints: GEICO Case Study. Application of Grey Numbers to Assessment Processes. Solving the Course-timetabling Problem of Cairo University Using Max-SAT. Transferring Deep Reinfor …

A Systematic Review of Automated Grammar Checking in English Language

Title A Systematic Review of Automated Grammar Checking in English Language
Authors Madhvi Soni, Jitendra Singh Thakur
Abstract Grammar checking is the task of detecting and correcting grammatical errors in text. English is the dominant language in the field of science and technology. Therefore, non-native English speakers must be able to use correct English grammar while reading, writing, or speaking. This creates the need for automatic grammar checking tools. So far many approaches have been proposed and implemented, but little effort has been made to survey the literature of the past decade. The objective of this systematic review is to examine the existing literature, highlighting current issues and suggesting potential directions for future research. This systematic review is the result of analyzing 12 primary studies obtained after designing a search strategy for selecting papers found on the web. We also present a possible scheme for the classification of grammar errors. Among the main observations, we found that there is a lack of efficient and robust grammar checking tools for real-time applications. We present several useful illustrations, the most prominent being the schematic diagrams that we provide for each approach and a table that summarizes these approaches along different dimensions such as target error types, linguistic dataset used, and strengths and limitations of the approach. This facilitates better understanding, comparison, and evaluation of previous research.
Tasks
Published 2018-03-29
URL http://arxiv.org/abs/1804.00540v1
PDF http://arxiv.org/pdf/1804.00540v1.pdf
PWC https://paperswithcode.com/paper/a-systematic-review-of-automated-grammar
Repo
Framework

A Framework for Implementing Machine Learning on Omics Data

Title A Framework for Implementing Machine Learning on Omics Data
Authors Geoffroy Dubourg-Felonneau, Timothy Cannings, Fergal Cotter, Hannah Thompson, Nirmesh Patel, John W Cassidy, Harry W Clifford
Abstract The potential benefits of applying machine learning methods to -omics data are becoming increasingly apparent, especially in clinical settings. However, the unique characteristics of these data are not always well suited to machine learning techniques. These data are often generated across different technologies in different labs, and frequently with high dimensionality. In this paper we present a framework for combining -omics data sets, and for handling high dimensional data, making -omics research more accessible to machine learning applications. We demonstrate the success of this framework through integration and analysis of multi-analyte data for a set of 3,533 breast cancers. We then use this data-set to predict breast cancer patient survival for individuals at risk of an impending event, with higher accuracy and lower variance than methods trained on individual data-sets. We hope that our pipelines for data-set generation and transformation will open up -omics data to machine learning researchers. We have made these freely available for noncommercial use at www.ccg.ai.
Tasks
Published 2018-11-26
URL http://arxiv.org/abs/1811.10455v1
PDF http://arxiv.org/pdf/1811.10455v1.pdf
PWC https://paperswithcode.com/paper/a-framework-for-implementing-machine-learning
Repo
Framework
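
A minimal sketch (not the authors' released pipeline at www.ccg.ai) of the kind of workflow the abstract describes: concatenating two -omics matrices for the same patients, reducing dimensionality, and predicting an impending event. The feature counts and the choice of PCA plus logistic regression are illustrative assumptions.

```python
# Minimal sketch, assuming early integration of two -omics matrices and a
# PCA + logistic-regression pipeline; not the authors' published framework.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 200
expression = rng.normal(size=(n_patients, 5000))   # e.g., RNA-seq features
methylation = rng.normal(size=(n_patients, 3000))  # e.g., methylation features
event = rng.integers(0, 2, size=n_patients)        # impending-event label

# Early integration: concatenate analytes per patient, then standardize and
# project onto a low-dimensional space before fitting the classifier.
X = np.hstack([expression, methylation])
model = make_pipeline(StandardScaler(), PCA(n_components=50),
                      LogisticRegression(max_iter=1000))

X_tr, X_te, y_tr, y_te = train_test_split(X, event, test_size=0.25, random_state=0)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```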

Computational Analysis of Insurance Complaints: GEICO Case Study

Title Computational Analysis of Insurance Complaints: GEICO Case Study
Authors Amir Karami, Noelle M. Pendergraft
Abstract The online environment has provided a great opportunity for insurance policyholders to share their complaints with respect to different services. These complaints can reveal valuable information for insurance companies who seek to improve their services; however, analyzing a huge number of online complaints is a complicated task for humans and must involve computational methods to create an efficient process. This research proposes a computational approach to characterize the major topics of a large number of online complaints. Our approach is based on topic modeling to disclose the latent semantics of complaints. The proposed approach was deployed on thousands of negative GEICO reviews. Analyzing 1,371 GEICO complaints indicates that there are 30 major complaints in four categories: (1) customer service, (2) insurance coverage, paperwork, policy, and reports, (3) legal issues, and (4) costs, estimates, and payments. This research approach can be used in other applications to explore a large number of reviews.
Tasks
Published 2018-06-26
URL http://arxiv.org/abs/1806.09736v1
PDF http://arxiv.org/pdf/1806.09736v1.pdf
PWC https://paperswithcode.com/paper/computational-analysis-of-insurance
Repo
Framework
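
An illustrative sketch of the topic-modeling step on complaint text using LDA. The complaint strings and the number of topics are placeholders; the paper analyzes 1,371 GEICO complaints and reports 30 topics.

```python
# Toy LDA topic model over complaint text; corpus and topic count are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

complaints = [
    "waited weeks for the adjuster to return my calls",
    "premium increased after filing a single claim",
    "estimate did not cover the full cost of repairs",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(complaints)          # document-term counts

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]  # top words per topic
    print(f"topic {k}: {top}")
```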

Application of Grey Numbers to Assessment Processes

Title Application of Grey Numbers to Assessment Processes
Authors Michael Gr. Voskoglou, Yiannis Theodorou
Abstract The theory of grey systems plays an important role in science, engineering, and everyday life in general for handling approximate data. In the present paper, grey numbers are used as a tool for assessing, with linguistic expressions, the mean performance of a group of objects participating in a certain activity. Two applications to student and football player assessment are also presented, illustrating our results.
Tasks
Published 2018-04-02
URL http://arxiv.org/abs/1804.00423v1
PDF http://arxiv.org/pdf/1804.00423v1.pdf
PWC https://paperswithcode.com/paper/application-of-grey-numbers-to-assessment
Repo
Framework
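
A minimal sketch of the general idea, assuming the common interval representation of grey numbers: each linguistic grade maps to an interval [a, b], the group mean is the interval mean, and a crisp representative is obtained by whitenization (here the midpoint). The grade intervals are illustrative assumptions, not the paper's exact scales.

```python
# Hedged sketch of assessment with interval grey numbers; grade intervals are assumptions.
GRADE_TO_GREY = {
    "A": (85, 100),  # excellent
    "B": (75, 84),   # very good
    "C": (60, 74),   # good
    "D": (50, 59),   # fair
    "F": (0, 49),    # unsatisfactory
}

def mean_grey(grades):
    """Mean of interval grey numbers: average the lower and upper bounds."""
    lows = [GRADE_TO_GREY[g][0] for g in grades]
    highs = [GRADE_TO_GREY[g][1] for g in grades]
    return (sum(lows) / len(lows), sum(highs) / len(highs))

def whitenize(grey, weight=0.5):
    """Crisp representative of a grey number [a, b]; weight 0.5 gives the midpoint."""
    a, b = grey
    return (1 - weight) * a + weight * b

student_grades = ["A", "B", "B", "C", "F"]
g = mean_grey(student_grades)
print("mean grey number:", g, "whitenized:", whitenize(g))
```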

Solving the Course-timetabling Problem of Cairo University Using Max-SAT

Title Solving the Course-timetabling Problem of Cairo University Using Max-SAT
Authors Mohamed El Halaby
Abstract Due to the good performance of current SAT (satisfiability) and Max-SAT (maximum satisfiability) solvers, many real-life optimization problems, such as scheduling, can be solved by encoding them into Max-SAT. In this paper we tackle the course timetabling problem of the Department of Mathematics, Cairo University by encoding it into Max-SAT. Generating timetables for the department by hand has proven to be cumbersome, and the generated timetable almost always contains conflicts. We show how the constraints can be modelled as a Max-SAT instance.
Tasks
Published 2018-02-11
URL http://arxiv.org/abs/1803.05027v1
PDF http://arxiv.org/pdf/1803.05027v1.pdf
PWC https://paperswithcode.com/paper/solving-the-course-timetabling-problem-of
Repo
Framework
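
A toy Max-SAT encoding in the spirit of the paper, assuming the python-sat (PySAT) package as a dependency. A Boolean variable x[c][t] means course c is scheduled in timeslot t; hard clauses enforce "exactly one slot per course" and "courses sharing a lecturer never clash", while a soft clause encodes a timeslot preference. The courses, slots, and weights are made up, not the department's actual constraints.

```python
# Toy Max-SAT timetabling encoding; assumes the python-sat (PySAT) package.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

courses = ["algebra", "topology", "statistics"]
slots = [0, 1]                                   # two timeslots
shares_lecturer = [("algebra", "topology")]      # hypothetical lecturer clash

var = {}
def x(c, t):
    if (c, t) not in var:
        var[(c, t)] = len(var) + 1               # fresh positive literal
    return var[(c, t)]

wcnf = WCNF()
for c in courses:
    wcnf.append([x(c, t) for t in slots])                # at least one slot (hard)
    for t1 in slots:
        for t2 in slots:
            if t1 < t2:
                wcnf.append([-x(c, t1), -x(c, t2)])       # at most one slot (hard)
for c1, c2 in shares_lecturer:
    for t in slots:
        wcnf.append([-x(c1, t), -x(c2, t)])               # no lecturer clash (hard)
wcnf.append([x("statistics", 0)], weight=3)               # preference (soft)

with RC2(wcnf) as solver:
    model = set(solver.compute())
    for (c, t), v in var.items():
        if v in model:
            print(f"{c} -> slot {t}")
```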

Transferring Deep Reinforcement Learning with Adversarial Objective and Augmentation

Title Transferring Deep Reinforcement Learning with Adversarial Objective and Augmentation
Authors Shu-Hsuan Hsu, I-Chao Shen, Bing-Yu Chen
Abstract In the past few years, deep reinforcement learning has been proven to solve problems that have complex states, such as video games or board games. The next step for intelligent agents is to generalize between tasks and use prior experience to pick up new skills more quickly. However, most current reinforcement learning algorithms suffer from catastrophic forgetting even when facing a very similar target task. Our approach enables agents to generalize knowledge from a single source task and boost the learning progress with a semi-supervised learning method when facing a new task. We evaluate this approach on Atari games, a popular reinforcement learning benchmark, and show that it outperforms common baselines based on pre-training and fine-tuning.
Tasks Atari Games, Board Games
Published 2018-09-04
URL http://arxiv.org/abs/1809.00770v1
PDF http://arxiv.org/pdf/1809.00770v1.pdf
PWC https://paperswithcode.com/paper/transferring-deep-reinforcement-learning-with
Repo
Framework

Beyond Equal-Length Snippets: How Long is Sufficient to Recognize an Audio Scene?

Title Beyond Equal-Length Snippets: How Long is Sufficient to Recognize an Audio Scene?
Authors Huy Phan, Oliver Y. Chén, Philipp Koch, Lam Pham, Ian McLoughlin, Alfred Mertins, Maarten De Vos
Abstract Due to the variability in characteristics of audio scenes, some scenes can naturally be recognized earlier than others. In this work, rather than using equal-length snippets for all scene categories, as is common in the literature, we study the temporal extent to which an audio scene can be reliably recognized given state-of-the-art models. Moreover, as model fusion with deep network ensembles is prevalent in audio scene classification, we further study whether, and if so when, model fusion is necessary for this task. To achieve these goals, we employ two single-network systems relying on a convolutional neural network and a recurrent neural network for classification, as well as early fusion and late fusion of these networks. Experimental results on the LITIS-Rouen dataset show that some scenes can be reliably recognized within a few seconds while other scenes require significantly longer durations. In addition, model fusion is shown to be most beneficial when the signal length is short.
Tasks Scene Classification
Published 2018-11-02
URL https://arxiv.org/abs/1811.01095v2
PDF https://arxiv.org/pdf/1811.01095v2.pdf
PWC https://paperswithcode.com/paper/beyond-equal-length-snippets-how-long-is
Repo
Framework
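
A minimal sketch of the late-fusion step: average the class posteriors of two independently trained networks and take the argmax. The probability arrays below are placeholders standing in for the CNN and RNN outputs discussed in the paper.

```python
# Unweighted late fusion of two classifiers' posteriors; numbers are illustrative.
import numpy as np

# posteriors over 4 scene classes for 3 test snippets (rows sum to 1)
p_cnn = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.25, 0.50, 0.15, 0.10],
                  [0.05, 0.05, 0.30, 0.60]])
p_rnn = np.array([[0.60, 0.20, 0.10, 0.10],
                  [0.40, 0.35, 0.15, 0.10],
                  [0.10, 0.10, 0.20, 0.60]])

p_fused = 0.5 * (p_cnn + p_rnn)              # average the two networks' outputs
print("fused predictions:", p_fused.argmax(axis=1))
```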

Learning to Write Notes in Electronic Health Records

Title Learning to Write Notes in Electronic Health Records
Authors Peter J. Liu
Abstract Clinicians spend a significant amount of time inputting free-form textual notes into Electronic Health Records (EHR) systems. Much of this documentation work is seen as a burden, reducing time spent with patients and contributing to clinician burnout. With the aspiration of AI-assisted note-writing, we propose a new language modeling task predicting the content of notes conditioned on past data from a patient’s medical record, including patient demographics, labs, medications, and past notes. We train generative models using the public, de-identified MIMIC-III dataset and compare generated notes with those in the dataset on multiple measures. We find that much of the content can be predicted, and that many common templates found in notes can be learned. We discuss how such models can be useful in supporting assistive note-writing features such as error-detection and auto-complete.
Tasks Language Modelling
Published 2018-08-08
URL http://arxiv.org/abs/1808.02622v1
PDF http://arxiv.org/pdf/1808.02622v1.pdf
PWC https://paperswithcode.com/paper/learning-to-write-notes-in-electronic-health
Repo
Framework
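
A hedged sketch of the conditioned note-language-modeling setup: context tokens (demographics, labs, medications, past notes) are prepended to the note tokens, and the model is trained to predict each next token. The tiny vocabulary, model sizes, and random token ids are illustrative stand-ins, not MIMIC-III data or the paper's architecture.

```python
# Toy conditional language model: predict the next token of a note given
# preceding context tokens; sizes and data are invented for illustration.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 100, 32, 64

class NoteLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)                       # logits for every position

# context tokens (record history) followed by note tokens, as one sequence
sequence = torch.randint(0, vocab_size, (1, 20))
inputs, targets = sequence[:, :-1], sequence[:, 1:]

model = NoteLM()
logits = model(inputs)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
print("next-token loss:", float(loss))
```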

New directional bat algorithm for continuous optimization problems

Title New directional bat algorithm for continuous optimization problems
Authors Asma Chakri, Rabia Khelif, Mohamed Benouaret, Xin-She Yang
Abstract The bat algorithm (BA) is a recent optimization algorithm based on swarm intelligence and inspired by the echolocation behavior of bats. One of the issues in the standard bat algorithm is the premature convergence that can occur due to the low exploration ability of the algorithm under some conditions. To overcome this deficiency, directional echolocation is introduced into the standard bat algorithm to enhance its exploration and exploitation capabilities. In addition to such directional echolocation, three other improvements have been embedded into the standard bat algorithm to enhance its performance. The newly proposed approach, namely the directional bat algorithm (dBA), has then been tested using several standard and non-standard benchmarks from the CEC2005 benchmark suite. The performance of dBA has been compared with ten other algorithms and BA variants using non-parametric statistical tests. The statistical test results show the superiority of the directional bat algorithm.
Tasks
Published 2018-04-22
URL http://arxiv.org/abs/1805.05854v1
PDF http://arxiv.org/pdf/1805.05854v1.pdf
PWC https://paperswithcode.com/paper/new-directional-bat-algorithm-for-continuous
Repo
Framework
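
For context, here is a sketch of the standard bat algorithm that dBA improves upon (not the directional variant itself), minimizing a simple sphere function. Parameter values follow common defaults and the acceptance rule is slightly simplified; all of this is an assumption, not the paper's settings.

```python
# Simplified *standard* bat algorithm (the baseline the dBA paper modifies).
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def bat_algorithm(obj, dim=5, n_bats=20, iters=200, fmin=0.0, fmax=2.0,
                  loudness=0.9, pulse_rate=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_bats, dim))   # bat positions
    v = np.zeros((n_bats, dim))                  # bat velocities
    fitness = np.array([obj(b) for b in x])
    best = x[fitness.argmin()].copy()

    for _ in range(iters):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            v[i] += (x[i] - best) * freq         # pull towards the global best
            candidate = x[i] + v[i]
            if rng.random() > pulse_rate:        # local random walk around the best bat
                candidate = best + 0.01 * rng.normal(size=dim)
            f_new = obj(candidate)
            if f_new < fitness[i] and rng.random() < loudness:   # simplified acceptance
                x[i], fitness[i] = candidate, f_new
            if f_new < obj(best):
                best = candidate.copy()
    return best, obj(best)

print(bat_algorithm(sphere))
```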

Neural Sign Language Translation based on Human Keypoint Estimation

Title Neural Sign Language Translation based on Human Keypoint Estimation
Authors Sang-Ki Ko, Chang Jo Kim, Hyedong Jung, Choongsang Cho
Abstract We propose a sign language translation system based on human keypoint estimation. It is well known that many problems in the field of computer vision require massive datasets to train deep neural network models. The situation is even worse when it comes to the sign language translation problem, as it is far more difficult to collect high-quality training data. In this paper, we introduce the KETI (short for Korea Electronics Technology Institute) sign language dataset, which consists of 14,672 videos of high resolution and quality. Considering the fact that each country has a different and unique sign language, the KETI sign language dataset can be the starting point for further research on Korean sign language translation. Using the KETI sign language dataset, we develop a neural network model for translating sign videos into natural language sentences by utilizing the human keypoints extracted from the face, hands, and body. The obtained human keypoint vector is normalized by the mean and standard deviation of the keypoints and used as input to our translation model based on the sequence-to-sequence architecture. As a result, we show that our approach is robust even when the size of the training data is not sufficient. Our translation model achieves 93.28% (55.28%, respectively) translation accuracy on the validation set (test set, respectively) for 105 sentences that can be used in emergency situations. We compare several types of our neural sign translation models based on different attention mechanisms in terms of classical metrics for measuring translation performance.
Tasks Sign Language Translation
Published 2018-11-28
URL https://arxiv.org/abs/1811.11436v2
PDF https://arxiv.org/pdf/1811.11436v2.pdf
PWC https://paperswithcode.com/paper/neural-sign-language-translation-based-on
Repo
Framework
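
A small sketch of one plausible reading of the keypoint normalization described above: each per-frame keypoint vector is standardized by its own mean and standard deviation before being fed to the sequence-to-sequence encoder. The random keypoints stand in for face/hand/body estimates; the keypoint count and exact normalization granularity are assumptions.

```python
# Per-frame standardization of 2D keypoint vectors (one plausible reading of the paper).
import numpy as np

def normalize_keypoints(frames):
    """frames: (T, K, 2) array of K 2D keypoints over T video frames."""
    flat = frames.reshape(frames.shape[0], -1)        # (T, 2K) per-frame vectors
    mean = flat.mean(axis=1, keepdims=True)
    std = flat.std(axis=1, keepdims=True) + 1e-8      # avoid division by zero
    return (flat - mean) / std                        # input to the seq2seq encoder

video_keypoints = np.random.rand(75, 137, 2)          # 75 frames, 137 keypoints (assumed)
print(normalize_keypoints(video_keypoints).shape)     # (75, 274)
```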

MLIC: A MaxSAT-Based framework for learning interpretable classification rules

Title MLIC: A MaxSAT-Based framework for learning interpretable classification rules
Authors Dmitry Malioutov, Kuldeep S. Meel
Abstract The wide adoption of machine learning approaches in industry, government, medicine, and science has renewed interest in interpretable machine learning: many decisions are too important to be delegated to black-box techniques such as deep neural networks or kernel SVMs. Historically, problems of learning interpretable classifiers, including classification rules or decision trees, have been approached by greedy heuristic methods, as essentially all the exact optimization formulations are NP-hard. Our primary contribution is a MaxSAT-based framework, called MLIC, which allows principled search for interpretable classification rules expressible in propositional logic. Our approach benefits from the revolutionary advances in the constraint satisfaction community to solve large-scale instances of such problems. In experimental evaluations over a collection of benchmarks arising from practical scenarios, we demonstrate its effectiveness: we show that the formulation can solve large classification problems with tens or hundreds of thousands of examples and thousands of features, and provide a tunable balance of accuracy versus interpretability. Furthermore, we show that in many problems interpretability can be obtained at only a minor cost in accuracy. The primary objective of the paper is to show that recent advances in the MaxSAT literature make it realistic to find optimal (or very high-quality near-optimal) solutions to large-scale classification problems. The key goal of the paper is to excite researchers in both interpretable classification and the CP community to take this further and propose richer formulations, and to develop bespoke solvers attuned to the problem of interpretable ML.
Tasks Interpretable Machine Learning
Published 2018-12-05
URL http://arxiv.org/abs/1812.01843v1
PDF http://arxiv.org/pdf/1812.01843v1.pdf
PWC https://paperswithcode.com/paper/mlic-a-maxsat-based-framework-for-learning
Repo
Framework

Implicit Bias of Gradient Descent on Linear Convolutional Networks

Title Implicit Bias of Gradient Descent on Linear Convolutional Networks
Authors Suriya Gunasekar, Jason Lee, Daniel Soudry, Nathan Srebro
Abstract We show that gradient descent on full-width linear convolutional networks of depth $L$ converges to a linear predictor related to the $\ell_{2/L}$ bridge penalty in the frequency domain. This is in contrast to fully connected linear networks, where gradient descent converges to the hard-margin linear support vector machine solution, regardless of depth.
Tasks
Published 2018-06-01
URL http://arxiv.org/abs/1806.00468v2
PDF http://arxiv.org/pdf/1806.00468v2.pdf
PWC https://paperswithcode.com/paper/implicit-bias-of-gradient-descent-on-linear
Repo
Framework
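
Stated as a worked equation, this is the contrast the abstract describes (a paraphrase of the claim, not a verbatim theorem statement from the paper):

```latex
% Fully connected linear networks: gradient descent converges in direction to the
% hard-margin SVM predictor, independent of the depth L:
\min_{\beta} \ \|\beta\|_2^2 \quad \text{s.t.} \quad y_n\, \beta^\top x_n \ge 1 \ \ \forall n.
% Full-width linear convolutional networks of depth L: the limit predictor is instead
% characterized by an \ell_{2/L} bridge penalty on the Fourier coefficients
% \widehat{\beta} = \mathcal{F}(\beta):
\min_{\beta} \ \big\|\widehat{\beta}\big\|_{2/L}^{2/L} \quad \text{s.t.} \quad y_n\, \beta^\top x_n \ge 1 \ \ \forall n.
```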

Conceptual Organization is Revealed by Consumer Activity Patterns

Title Conceptual Organization is Revealed by Consumer Activity Patterns
Authors Adam N. Hornsby, Thomas Evans, Peter Riefer, Rosie Prior, Bradley C. Love
Abstract Meaning may arise from an element’s role or interactions within a larger system. For example, hitting nails is more central to people’s concept of a hammer than its particular material composition or other intrinsic features. Likewise, the importance of a web page may result from its links with other pages rather than solely from its content. One example of meaning arising from extrinsic relationships is the family of approaches that extract the meaning of word concepts from co-occurrence patterns in large text corpora. The success of these methods suggests that human activity patterns may reveal conceptual organization. However, texts do not directly reflect human activity, but instead serve a communicative function and are usually highly curated or edited to suit an audience. Here, we apply methods devised for text to a data source that directly reflects thousands of individuals’ activity patterns, namely supermarket purchases. Using product co-occurrence data from nearly 1.3m shopping baskets, we trained a topic model to learn 25 high-level concepts (or “topics”). These topics were found to be comprehensible and coherent by both retail experts and consumers. Topics ranged from specific (e.g., ingredients for a stir-fry) to general (e.g., cooking from scratch). Topics tended to be goal-directed and situational, consistent with the notion that human conceptual knowledge is tailored to support action. Individual differences in the topics sampled predicted basic demographic characteristics. These results suggest that human activity patterns reveal conceptual organization and may give rise to it.
Tasks
Published 2018-10-19
URL http://arxiv.org/abs/1810.08577v1
PDF http://arxiv.org/pdf/1810.08577v1.pdf
PWC https://paperswithcode.com/paper/conceptual-organization-is-revealed-by
Repo
Framework

Towards Budget-Driven Hardware Optimization for Deep Convolutional Neural Networks using Stochastic Computing

Title Towards Budget-Driven Hardware Optimization for Deep Convolutional Neural Networks using Stochastic Computing
Authors Zhe Li, Ji Li, Ao Ren, Caiwen Ding, Jeffrey Draper, Qinru Qiu, Bo Yuan, Yanzhi Wang
Abstract Recently, the Deep Convolutional Neural Network (DCNN) has achieved tremendous success in many machine learning applications. Nevertheless, the deep structure has brought significant increases in computational complexity. Large-scale deep learning systems mainly operate in high-performance server clusters, thus restricting application extensions to personal or mobile devices. Previous works on GPU and/or FPGA acceleration for DCNNs show increasing speedup, but ignore other constraints, such as area, power, and energy. Stochastic Computing (SC), as a unique data representation and processing technique, has the potential to enable the design of fully parallel and scalable hardware implementations of large-scale deep learning systems. This paper proposes an automatic design allocation algorithm driven by budget requirements while considering overall accuracy. This systematic method enables the automatic design of a DCNN where all design parameters are jointly optimized. Experimental results demonstrate that the proposed algorithm can achieve a joint optimization of all design parameters given the comprehensive budget of a DCNN.
Tasks
Published 2018-05-10
URL http://arxiv.org/abs/1805.04142v1
PDF http://arxiv.org/pdf/1805.04142v1.pdf
PWC https://paperswithcode.com/paper/towards-budget-driven-hardware-optimization
Repo
Framework
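
As background for why stochastic computing makes DCNN arithmetic cheap in hardware, here is a toy illustration of its core primitive: with unipolar coding, a value p in [0, 1] is represented by a random bitstream whose fraction of 1s is p, and multiplication reduces to a bitwise AND. Stream length trades accuracy for hardware cost; the values below are arbitrary.

```python
# Unipolar stochastic-computing multiplication via a bitwise AND; toy values.
import numpy as np

rng = np.random.default_rng(0)
length = 4096                              # longer streams -> lower variance

def encode(p, n=length):
    """Unipolar stochastic bitstream whose fraction of 1s approximates p."""
    return (rng.random(n) < p).astype(np.uint8)

def decode(stream):
    return stream.mean()

a, b = 0.75, 0.40
product_stream = encode(a) & encode(b)     # AND of independent streams multiplies the values
print("exact:", a * b, "stochastic:", decode(product_stream))
```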

DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

Title DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation
Authors Biyi Fang, Jillian Co, Mi Zhang
Abstract There is an undeniable communication barrier between deaf people and people with normal hearing ability. Although innovations in sign language translation technology aim to tear down this communication barrier, the majority of existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, these existing systems can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily-life communication scenarios. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both the word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) and a probabilistic framework based on Connectionist Temporal Classification (CTC) for word-level and sentence-level ASL translation, respectively. To evaluate its performance, we collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average 94.5% word-level translation accuracy and an average 8.2% word error rate when translating unseen ASL sentences. Given its promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has significant potential to fundamentally change deaf people’s lives.
Tasks Sign Language Translation
Published 2018-02-21
URL http://arxiv.org/abs/1802.07584v3
PDF http://arxiv.org/pdf/1802.07584v3.pdf
PWC https://paperswithcode.com/paper/deepasl-enabling-ubiquitous-and-non-intrusive
Repo
Framework
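
A hedged sketch of the CTC objective used for sentence-level translation: per-frame sign-class probabilities are aligned to a target sign sequence without frame-level labels. The shapes and vocabulary size are invented; this is standard torch.nn.CTCLoss usage, not the authors' HB-RNN code.

```python
# Standard CTC loss over per-frame sign probabilities; dimensions are illustrative.
import torch
import torch.nn as nn

T, N, C = 50, 2, 57          # frames, batch size, 56 sign classes + 1 blank
S = 8                        # target sentence length (in signs)

log_probs = torch.randn(T, N, C).log_softmax(dim=2)       # frame-level network outputs
targets = torch.randint(1, C, (N, S))                      # 0 is reserved for the CTC blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print("CTC loss:", float(loss))
```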