May 5, 2019

3522 words 17 mins read

Paper Group ANR 521


Re-evaluating Automatic Metrics for Image Captioning. ConfidentCare: A Clinical Decision Support System for Personalized Breast Cancer Screening. An Axiomatic Approach to Routing. Automated Big Text Security Classification. A large scale study of SVM based methods for abstract screening in systematic reviews. Reinforcement Learning With Temporal Lo …

Re-evaluating Automatic Metrics for Image Captioning

Title Re-evaluating Automatic Metrics for Image Captioning
Authors Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, Erkut Erdem
Abstract The task of generating natural language descriptions from images has received a lot of attention in recent years. Consequently, it is becoming increasingly important to evaluate such image captioning approaches in an automatic manner. In this paper, we provide an in-depth evaluation of the existing image captioning metrics through a series of carefully designed experiments. Moreover, we explore the utilization of the recently proposed Word Mover’s Distance (WMD) document metric for the purpose of image captioning. Our findings outline the differences and/or similarities between metrics and their relative robustness by means of extensive correlation-, accuracy- and distraction-based evaluations. Our results also demonstrate that WMD provides strong advantages over other metrics.
Tasks Image Captioning
Published 2016-12-22
URL http://arxiv.org/abs/1612.07600v1
PDF http://arxiv.org/pdf/1612.07600v1.pdf
PWC https://paperswithcode.com/paper/re-evaluating-automatic-metrics-for-image
Repo
Framework
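The WMD idea above can be sketched cheaply with its standard relaxed lower bound, in which every word of one caption sends all of its mass to the nearest word of the other. The toy 2-D embeddings below are invented for illustration; a real evaluation would use pretrained word vectors.

```python
def relaxed_wmd(doc_a, doc_b, emb):
    """Relaxed Word Mover's Distance lower bound: each word in doc_a
    moves all of its (uniform) mass to its nearest word in doc_b."""
    def dist(u, v):
        # Euclidean distance between two embedding vectors
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    if not doc_a or not doc_b:
        return float("inf")
    weight = 1.0 / len(doc_a)  # uniform word weights
    return sum(weight * min(dist(emb[w], emb[v]) for v in doc_b)
               for w in doc_a)
```

With embeddings where "puppy" sits near "dog", a caption using "puppy" scores closer to a "dog" reference than one using "cat", which is the kind of semantic credit WMD gives that n-gram metrics miss.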

ConfidentCare: A Clinical Decision Support System for Personalized Breast Cancer Screening

Title ConfidentCare: A Clinical Decision Support System for Personalized Breast Cancer Screening
Authors Ahmed M. Alaa, Kyeong H. Moon, William Hsu, Mihaela van der Schaar
Abstract Breast cancer screening policies attempt to achieve timely diagnosis by the regular screening of apparently healthy women. Various clinical decisions are needed to manage the screening process; those include: selecting the screening tests for a woman to take, interpreting the test outcomes, and deciding whether or not a woman should be referred to a diagnostic test. Such decisions are currently guided by clinical practice guidelines (CPGs), which represent a one-size-fits-all approach designed to work well on average for a population, without guaranteeing that it will work well uniformly over that population. Since the risks and benefits of screening are functions of each patient’s features, personalized screening policies that are tailored to the features of individuals are needed in order to ensure that the right tests are recommended to the right woman. In order to address this issue, we present ConfidentCare: a computer-aided clinical decision support system that learns a personalized screening policy from electronic health record (EHR) data. ConfidentCare operates by recognizing clusters of similar patients, and learning the best screening policy to adopt for each cluster. A cluster of patients is a set of patients with similar features (e.g. age, breast density, family history, etc.), and the screening policy is a set of guidelines on what actions to recommend for a woman given her features and screening test scores. The ConfidentCare algorithm ensures that the policy adopted for every cluster of patients satisfies a predefined accuracy requirement with a high level of confidence. We show that our algorithm outperforms the current CPGs in terms of cost-efficiency and false positive rates.
Tasks
Published 2016-02-01
URL http://arxiv.org/abs/1602.00374v1
PDF http://arxiv.org/pdf/1602.00374v1.pdf
PWC https://paperswithcode.com/paper/confidentcare-a-clinical-decision-support
Repo
Framework
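The cluster-then-policy structure described above can be sketched in a few lines. Everything here is schematic: the centroids, the two-number feature vectors, and the reduction of a "policy" to a single referral threshold per cluster are illustrative assumptions, not ConfidentCare's actual learning procedure.

```python
def nearest_cluster(features, centroids):
    """Assign a patient to the closest cluster centroid (squared distance)."""
    def d2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    return min(range(len(centroids)), key=lambda k: d2(features, centroids[k]))

def recommend(features, score, centroids, policies):
    """Apply the screening policy (here just a referral threshold on a
    screening-test score) of the patient's cluster."""
    k = nearest_cluster(features, centroids)
    return "refer" if score >= policies[k] else "routine"
```

The point of the structure is visible even at this scale: two patients with the same test score can receive different recommendations because they fall into different clusters.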

An Axiomatic Approach to Routing

Title An Axiomatic Approach to Routing
Authors Omer Lev, Moshe Tennenholtz, Aviv Zohar
Abstract Information delivery in a network of agents is a key issue for large, complex systems that need to do so in a predictable, efficient manner. The delivery of information in such multi-agent systems is typically implemented through routing protocols that determine how information flows through the network. Different routing protocols exist, each with its own benefits, but it is generally unclear which properties can be successfully combined within a given algorithm. We approach this problem from the axiomatic point of view, i.e., we try to establish which properties we would seek in such a system, and examine the different properties which uniquely define common routing algorithms used today. We examine several desirable properties, such as robustness, which ensures that adding nodes and edges does not change the routing in radical, unpredictable ways; and properties that depend on the operating environment, such as an “economic model”, where nodes choose their paths based on the cost they are charged to pass information to the next node. We proceed to fully characterize minimal spanning tree, shortest path, and weakest link routing algorithms, showing a tight set of axioms for each.
Tasks
Published 2016-06-24
URL http://arxiv.org/abs/1606.07523v1
PDF http://arxiv.org/pdf/1606.07523v1.pdf
PWC https://paperswithcode.com/paper/an-axiomatic-approach-to-routing
Repo
Framework
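The contrast between two of the characterized algorithms, shortest path and weakest link, comes down to how a path's cost is accumulated, which a generic Dijkstra sketch makes concrete. The graph below is hypothetical; the weakest-link variant is valid here because `max` is monotone in the accumulated cost.

```python
import heapq

def best_path_cost(graph, src, dst, combine):
    """Generic Dijkstra: `combine(path_cost, edge_weight)` accumulates cost.
    combine = lambda c, w: c + w  -> shortest path (minimize total weight)
    combine = max                 -> weakest link (minimize the worst edge)."""
    frontier = [(0, src)]
    settled = {}
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in settled:
            continue
        settled[node] = cost
        if node == dst:
            return cost
        for nbr, w in graph.get(node, []):
            if nbr not in settled:
                heapq.heappush(frontier, (combine(cost, w), nbr))
    return None  # dst unreachable
```

On a triangle where the direct edge is cheap in total but heavy as a single link, the two notions of "best path" disagree, which is exactly the kind of behavioral difference the axioms pin down.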

Automated Big Text Security Classification

Title Automated Big Text Security Classification
Authors Khudran Alzhrani, Ethan M. Rudd, Terrance E. Boult, C. Edward Chow
Abstract In recent years, traditional cybersecurity safeguards have proven ineffective against insider threats. Famous cases of sensitive information leaks caused by insiders, including the WikiLeaks release of diplomatic cables and the Edward Snowden incident, have greatly harmed the U.S. government’s relationship with other governments and with its own citizens. Data Leak Prevention (DLP) is a solution for detecting and preventing information leaks from within an organization’s network. However, state-of-the-art DLP detection models are only able to detect very limited types of sensitive information, and research in the field has been hindered due to the lack of available sensitive texts. Many researchers have focused on document-based detection with artificially labeled “confidential documents” for which security labels are assigned to the entire document, when in reality only a portion of the document is sensitive. This type of whole-document based security labeling increases the chances of preventing authorized users from accessing non-sensitive information within sensitive documents. In this paper, we introduce Automated Classification Enabled by Security Similarity (ACESS), a new and innovative detection model that penetrates the complexity of big text security classification/detection. To analyze the ACESS system, we constructed a novel dataset, containing formerly classified paragraphs from diplomatic cables made public by the WikiLeaks organization. To our knowledge, this paper is the first to analyze a dataset that contains actual formerly sensitive information annotated at paragraph granularity.
Tasks
Published 2016-10-21
URL http://arxiv.org/abs/1610.06856v1
PDF http://arxiv.org/pdf/1610.06856v1.pdf
PWC https://paperswithcode.com/paper/automated-big-text-security-classification
Repo
Framework
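The notion of classifying by security similarity at paragraph granularity can be illustrated with a bare-bones nearest-neighbour sketch over bag-of-words vectors. This is not the ACESS model itself, and the paragraphs and labels below are invented for illustration.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two term-count dictionaries."""
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = (math.sqrt(sum(v * v for v in a.values())) *
           math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def classify_paragraph(text, labeled):
    """Give a paragraph the label of its most similar labeled paragraph.
    `labeled` is a list of (paragraph_text, label) pairs."""
    vec = Counter(text.lower().split())
    best = max(labeled,
               key=lambda pair: cosine(vec, Counter(pair[0].lower().split())))
    return best[1]
```

Because the unit is the paragraph, a document can mix "sensitive" and "public" labels, which is the granularity the whole-document labeling schemes criticized above cannot express.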

A large scale study of SVM based methods for abstract screening in systematic reviews

Title A large scale study of SVM based methods for abstract screening in systematic reviews
Authors Tanay Kumar Saha, Mourad Ouzzani, Hossam M. Hammady, Ahmed K. Elmagarmid, Wajdi Dhifli, Mohammad Al Hasan
Abstract A major task in systematic reviews is abstract screening, i.e., excluding irrelevant citations, often hundreds or thousands of them, returned from a database search based on titles and abstracts. Thus, a systematic review platform that can automate the abstract screening process is of huge importance. Several methods have been proposed for this task. However, it is very hard to clearly understand the applicability of these methods in a systematic review platform because of the following challenges: (1) the use of non-overlapping metrics for the evaluation of the proposed methods, (2) usage of features that are very hard to collect, (3) using a small set of reviews for the evaluation, and (4) no solid statistical testing or equivalence grouping of the methods. In this paper, we use a feature representation that can be extracted per citation. We evaluate SVM-based methods (commonly used) on a large set of reviews ($61$) and metrics ($11$) to provide equivalence grouping of methods based on a solid statistical test. Our analysis also accounts for the strong variability of the metrics using $500$x$2$ cross validation. While some methods shine for different metrics and for different datasets, there is no single method that dominates the pack. Furthermore, we observe that in some cases relevant (included) citations can be found after screening only 15-20% of them via a certainty based sampling. A few included citations present outlying characteristics and can only be found after a very large number of screening steps. Finally, we present an ensemble algorithm for producing a $5$-star rating of citations based on their relevance. This algorithm combines the best methods from our evaluation and, through its $5$-star rating, outputs an easier-to-consume prediction.
Tasks
Published 2016-10-01
URL http://arxiv.org/abs/1610.00192v3
PDF http://arxiv.org/pdf/1610.00192v3.pdf
PWC https://paperswithcode.com/paper/a-large-scale-study-of-svm-based-methods-for
Repo
Framework
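Certainty-based sampling as used above simply screens citations in decreasing order of a classifier's relevance score. A sketch with hypothetical scores shows how to measure what fraction of the pool must be screened before every included citation is found:

```python
def certainty_screening_order(scores):
    """Screen citations from most to least certain-relevant."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])

def fraction_to_find_all(scores, included):
    """Fraction of citations screened (in certainty order) before every
    included citation has been seen. `included` is a set of indices."""
    order = certainty_screening_order(scores)
    found, last = set(), 0
    for pos, idx in enumerate(order, start=1):
        if idx in included:
            found.add(idx)
            last = pos
        if found == included:
            break
    return last / len(scores)
```

When the classifier ranks the included citations near the top, this fraction is small (the 15-20% regime reported above); an outlying included citation with a low score drags it toward 1.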

Reinforcement Learning With Temporal Logic Rewards

Title Reinforcement Learning With Temporal Logic Rewards
Authors Xiao Li, Cristian-Ioan Vasile, Calin Belta
Abstract Reinforcement learning (RL) depends critically on the choice of reward functions used to capture the desired behavior and constraints of a robot. Usually, these are handcrafted by an expert designer and represent heuristics for relatively simple tasks. Real world applications typically involve more complex tasks with rich temporal and logical structure. In this paper we take advantage of the expressive power of temporal logic (TL) to specify complex rules the robot should follow, and to incorporate domain knowledge into learning. We propose Truncated Linear Temporal Logic (TLTL) as a specification language that is arguably well suited for robotics applications, together with quantitative semantics, i.e., a robustness degree. We propose an RL approach to learn tasks expressed as TLTL formulae that uses their associated robustness degree as reward functions, instead of manually crafted heuristics trying to capture the same specifications. We show in simulated trials that learning is faster and that policies obtained using the proposed approach outperform those learned using heuristic rewards in terms of the robustness degree, i.e., how well the tasks are satisfied. Furthermore, we demonstrate the proposed RL approach in a toast-placing task learned by a Baxter robot.
Tasks
Published 2016-12-11
URL http://arxiv.org/abs/1612.03471v2
PDF http://arxiv.org/pdf/1612.03471v2.pdf
PWC https://paperswithcode.com/paper/reinforcement-learning-with-temporal-logic
Repo
Framework
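The robustness degree that serves as the reward can be illustrated with the usual min/max quantitative semantics over a scalar trajectory. The predicates below are simplified stand-ins for TLTL's full semantics: a positive value means the formula is satisfied with that much margin, a negative value means it is violated.

```python
def rob_always_gt(traj, a):
    """Robustness of 'always (x > a)': worst-case margin over the trajectory."""
    return min(x - a for x in traj)

def rob_eventually_gt(traj, a):
    """Robustness of 'eventually (x > a)': best-case margin over the trajectory."""
    return max(x - a for x in traj)

def rob_and(r1, r2):
    """Robustness of a conjunction: the weaker of the two margins."""
    return min(r1, r2)
```

Because the robustness is a real number rather than a true/false verdict, it can be fed to an RL agent as a shaped reward: trajectories that "almost" satisfy the formula score better than ones that violate it badly.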

AdaScan: Adaptive Scan Pooling in Deep Convolutional Neural Networks for Human Action Recognition in Videos

Title AdaScan: Adaptive Scan Pooling in Deep Convolutional Neural Networks for Human Action Recognition in Videos
Authors Amlan Kar, Nishant Rai, Karan Sikka, Gaurav Sharma
Abstract We propose a novel method for temporally pooling frames in a video for the task of human action recognition. The method is motivated by the observation that there are only a small number of frames which, together, contain sufficient information to discriminate an action class present in a video, from the rest. The proposed method learns to pool such discriminative and informative frames, while discarding a majority of the non-informative frames in a single temporal scan of the video. Our algorithm does so by continuously predicting the discriminative importance of each video frame and subsequently pooling them in a deep learning framework. We show the effectiveness of our proposed pooling method on standard benchmarks where it consistently improves on baseline pooling methods, with both RGB and optical flow based Convolutional networks. Further, in combination with complementary video representations, we show results that are competitive with respect to the state-of-the-art results on two challenging and publicly available benchmark datasets.
Tasks Action Recognition In Videos, Optical Flow Estimation, Temporal Action Localization
Published 2016-11-24
URL http://arxiv.org/abs/1611.08240v4
PDF http://arxiv.org/pdf/1611.08240v4.pdf
PWC https://paperswithcode.com/paper/adascan-adaptive-scan-pooling-in-deep
Repo
Framework
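The single-pass importance-weighted pooling can be sketched as a running weighted mean over frame features. In AdaScan the importance weights are predicted online by the network; here they are supplied by hand, and the feature vectors are toy values.

```python
def adascan_pool(frames, importance):
    """One temporal scan over a video: the pooled feature is a running
    importance-weighted mean of the per-frame feature vectors."""
    dim = len(frames[0])
    pooled = [0.0] * dim
    total = 0.0
    for frame, w in zip(frames, importance):
        total += w
        if total == 0:
            continue  # no mass accumulated yet
        pooled = [(p * (total - w) + w * f) / total
                  for p, f in zip(pooled, frame)]
    return pooled
```

A frame assigned zero importance leaves the pooled vector untouched, which is how the method discards non-informative frames without a second pass over the video.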

Deep Learning for Reward Design to Improve Monte Carlo Tree Search in ATARI Games

Title Deep Learning for Reward Design to Improve Monte Carlo Tree Search in ATARI Games
Authors Xiaoxiao Guo, Satinder Singh, Richard Lewis, Honglak Lee
Abstract Monte Carlo Tree Search (MCTS) methods have proven powerful in planning for sequential decision-making problems such as Go and video games, but their performance can be poor when the planning depth and sampling trajectories are limited or when the rewards are sparse. We present an adaptation of PGRD (policy-gradient for reward-design) for learning a reward-bonus function to improve UCT (an MCTS algorithm). Unlike previous applications of PGRD in which the space of reward-bonus functions was limited to linear functions of hand-coded state-action-features, we use PGRD with a multi-layer convolutional neural network to automatically learn features from raw perception as well as to adapt the non-linear reward-bonus function parameters. We also adopt a variance-reducing gradient method to improve PGRD’s performance. The new method improves UCT’s performance on multiple ATARI games compared to UCT without the reward bonus. Combining PGRD and Deep Learning in this way should make adapting rewards for MCTS algorithms far more widely and practically applicable than before.
Tasks Atari Games, Decision Making
Published 2016-04-24
URL http://arxiv.org/abs/1604.07095v1
PDF http://arxiv.org/pdf/1604.07095v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-for-reward-design-to-improve
Repo
Framework
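The role of the learned reward bonus inside UCT can be shown on the node-scoring rule: the bonus is simply added to the usual UCB term during tree search. The exploration constant and the scalar `bonus` below are placeholders; in the paper the bonus comes from the CNN trained by PGRD.

```python
import math

def uct_score(q, n_parent, n_child, bonus, c=1.4):
    """UCT action score with an additive learned reward bonus:
    value estimate + exploration term + bonus. Unvisited children
    get infinite score so they are expanded first."""
    if n_child == 0:
        return float("inf")
    return q + c * math.sqrt(math.log(n_parent) / n_child) + bonus
```

Because the bonus shifts action scores rather than replacing the environment reward, it can guide the limited-depth search toward promising regions without changing the task being optimized.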

Neuromorphic Silicon Photonic Networks

Title Neuromorphic Silicon Photonic Networks
Authors Alexander N. Tait, Thomas Ferreira de Lima, Ellen Zhou, Allie X. Wu, Mitchell A. Nahmias, Bhavin J. Shastri, Paul R. Prucnal
Abstract Photonic systems for high-performance information processing have attracted renewed interest. Neuromorphic silicon photonics has the potential to integrate processing functions that vastly exceed the capabilities of electronics. We report the first observations of a recurrent silicon photonic neural network, in which connections are configured by microring weight banks. A mathematical isomorphism between the silicon photonic circuit and a continuous neural network model is demonstrated through dynamical bifurcation analysis. Exploiting this isomorphism, a simulated 24-node silicon photonic neural network is programmed using a “neural compiler” to solve a differential system emulation task. A 294-fold acceleration against a conventional benchmark is predicted. We also propose and derive a power consumption analysis for modulator-class neurons that, as opposed to laser-class neurons, are compatible with silicon photonic platforms. At increased scale, neuromorphic silicon photonics could access new regimes of ultrafast information processing for radio, control, and scientific computing.
Tasks
Published 2016-11-05
URL http://arxiv.org/abs/1611.02272v3
PDF http://arxiv.org/pdf/1611.02272v3.pdf
PWC https://paperswithcode.com/paper/neuromorphic-silicon-photonic-networks
Repo
Framework
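The continuous neural network model that the photonic circuit is shown to be isomorphic to can be sketched as a standard continuous-time recurrent network integrated with an Euler step. The weights, time constant, and tanh nonlinearity below are generic modelling assumptions for illustration, not the paper's device equations.

```python
import math

def ctrnn_step(x, W, inp, dt=0.01, tau=1.0):
    """One Euler step of a continuous-time recurrent network:
    tau * dx/dt = -x + W . sigma(x) + input, with sigma = tanh.
    In the photonic analogue, W corresponds to the microring weight bank."""
    n = len(x)
    return [x[i] + (dt / tau) * (-x[i]
            + sum(W[i][j] * math.tanh(x[j]) for j in range(n))
            + inp[i])
            for i in range(n)]
```

Iterating this step simulates the network dynamics in software; the claimed acceleration comes from the photonic hardware evolving the same dynamics physically.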

Dealing with Class Imbalance using Thresholding

Title Dealing with Class Imbalance using Thresholding
Authors Charmgil Hong, Rumi Ghosh, Soundar Srinivasan
Abstract We propose thresholding as an approach to deal with class imbalance. We define the concept of thresholding as a process of determining a decision boundary in the presence of a tunable parameter. The threshold is the maximum value of this tunable parameter where the conditions of a certain decision are satisfied. We show that thresholding is applicable not only for linear classifiers but also for non-linear classifiers. We show that this is the implicit assumption for many approaches to deal with class imbalance in linear classifiers. We then extend this paradigm beyond linear classification and show how non-linear classification can be dealt with under this umbrella framework of thresholding. The proposed method can be used for outlier detection in many real-life scenarios, such as manufacturing. In advanced manufacturing units, where the manufacturing process has matured over time, the instances (or parts) of the product that need to be rejected (based on a strict regime of quality tests) become relatively rare and are defined as outliers. How to detect these rare parts or outliers beforehand? How to detect combinations of conditions leading to these outliers? These are the questions motivating our research. This paper focuses on predicting outliers, and the conditions leading to them, using classification. The classes are good parts (those passing the quality tests) and bad parts (those failing the quality tests, which can be considered outliers). The rarity of outliers transforms this problem into a class-imbalanced classification problem.
Tasks Outlier Detection
Published 2016-07-10
URL http://arxiv.org/abs/1607.02705v1
PDF http://arxiv.org/pdf/1607.02705v1.pdf
PWC https://paperswithcode.com/paper/dealing-with-class-imbalance-using
Repo
Framework
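The thresholding idea can be made concrete with the most common instance: instead of cutting classifier scores at a default 0.5, scan candidate thresholds and keep the one that scores best on the rare class. The scores and labels below are toy values, and F1 is just one reasonable choice of selection criterion.

```python
def best_threshold(scores, labels):
    """Pick the score threshold that maximizes F1 on the rare class
    (label 1), rather than using a fixed default cut-off."""
    def f1(th):
        tp = sum(1 for s, y in zip(scores, labels) if s >= th and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= th and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < th and y == 1)
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return max(sorted(set(scores)), key=f1)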

Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient

Title Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient
Authors Liheng Bian, Jinli Suo, Jaebum Chung, Xiaoze Ou, Changhuei Yang, Feng Chen, Qionghai Dai
Abstract Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample’s high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for error removal. Results on both simulated data and real data captured using our laser FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.
Tasks
Published 2016-03-01
URL http://arxiv.org/abs/1603.04746v1
PDF http://arxiv.org/pdf/1603.04746v1.pdf
PWC https://paperswithcode.com/paper/fourier-ptychographic-reconstruction-using
Repo
Framework

One-Trial Correction of Legacy AI Systems and Stochastic Separation Theorems

Title One-Trial Correction of Legacy AI Systems and Stochastic Separation Theorems
Authors Alexander N. Gorban, Ilya Romanenko, Richard Burton, Ivan Y. Tyukin
Abstract We consider the problem of efficient “on the fly” tuning of existing, or {\it legacy}, Artificial Intelligence (AI) systems. The legacy AI systems are allowed to be of arbitrary class, albeit the data they are using for computing interim or final decision responses should possess an underlying structure of a high-dimensional topological real vector space. The tuning method that we propose enables dealing with errors without the need to re-train the system. Instead of re-training, a simple cascade of perceptron nodes is added to the legacy system. The added cascade modulates the AI legacy system’s decisions. If applied repeatedly, the process results in a network of modulating rules “dressing up” and improving performance of existing AI systems. The mathematical rationale behind the method is based on the fundamental property of measure concentration in high dimensional spaces. The method is illustrated with an example of fine-tuning a deep convolutional network that has been pre-trained to detect pedestrians in images.
Tasks
Published 2016-10-03
URL http://arxiv.org/abs/1610.00494v4
PDF http://arxiv.org/pdf/1610.00494v4.pdf
PWC https://paperswithcode.com/paper/one-trial-correction-of-legacy-ai-systems-and
Repo
Framework
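The core of the one-trial correction can be sketched as a single linear node that fires only on the misclassified sample. The construction below (weight vector equal to the error sample, threshold halfway between its self-projection and the largest projection of the correct samples) is a simplified illustration of the idea; the measure-concentration argument is what makes such a separation succeed with high probability in high dimension.

```python
def build_corrector(error_x, correct_xs):
    """Return a linear node that fires on `error_x` but on none of
    `correct_xs`, assuming the error sample's self-projection exceeds
    its projection onto every correct sample (almost sure in high dim)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    others = [dot(error_x, x) for x in correct_xs]
    threshold = (dot(error_x, error_x) + max(others)) / 2.0
    return lambda x: dot(error_x, x) >= threshold
```

When the corrector fires, the legacy system's decision is overridden for that input; the legacy network itself is never re-trained.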

Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer

Title Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer
Authors Xin Wang, Geoffrey Oxholm, Da Zhang, Yuan-Fang Wang
Abstract Transferring artistic styles onto everyday photographs has become an extremely popular task in both academia and industry. Recently, offline training has replaced on-line iterative optimization, enabling nearly real-time stylization. When those stylization networks are applied directly to high-resolution images, however, the style of localized regions often appears less similar to the desired artistic style. This is because the transfer process fails to capture small, intricate textures and maintain correct texture scales of the artworks. Here we propose a multimodal convolutional neural network that takes into consideration faithful representations of both color and luminance channels, and performs stylization hierarchically with multiple losses of increasing scales. Compared to state-of-the-art networks, our network can also perform style transfer in nearly real-time by conducting much more sophisticated training offline. By properly handling style and texture cues at multiple scales using several modalities, we can transfer not just large-scale, obvious style cues but also subtle, exquisite ones. That is, our scheme can generate results that are visually pleasing and more similar to multiple desired artistic styles with color and texture cues at multiple scales.
Tasks Style Transfer
Published 2016-11-17
URL http://arxiv.org/abs/1612.01895v2
PDF http://arxiv.org/pdf/1612.01895v2.pdf
PWC https://paperswithcode.com/paper/multimodal-transfer-a-hierarchical-deep
Repo
Framework

Semantic homophily in online communication: evidence from Twitter

Title Semantic homophily in online communication: evidence from Twitter
Authors Sanja Šćepanović, Igor Mishkovski, Bruno Gonçalves, Nguyen Trung Hieu, Pan Hui
Abstract People are observed to assortatively connect on a set of traits. This phenomenon, termed assortative mixing or sometimes homophily, can be quantified through the assortativity coefficient in social networks. Uncovering the exact causes of the strong assortative mixing found in social networks has been a research challenge. Among the main suggested causes from sociology are the tendency of similar individuals to connect (often itself referred to as homophily) and social influence among already connected individuals. An important question to researchers and in practice can be tackled, as we present here: understanding the exact mechanisms of interplay between these tendencies and the underlying social network structure. Namely, in addition to the mentioned assortativity coefficient, there are several other static and temporal network properties and substructures that can be linked to the tendencies of homophily and social influence in the social network, and we herein investigate those. Concretely, we tackle a computer-mediated \textit{communication network} (based on Twitter mentions) and a particular type of assortative mixing that can be inferred from the semantic features of communication content, which we term \textit{semantic homophily}. Our work, to the best of our knowledge, is the first to offer an in-depth analysis of semantic homophily in a communication network and of the interplay between the two. We quantify diverse levels of semantic homophily, identify the semantic aspects that are the drivers of observed homophily, offer insights into its temporal evolution, and finally present its intricate interplay with the communication network on Twitter. By analyzing these mechanisms we increase understanding of which semantic aspects shape human computer-mediated communication, and of how they shape it.
Tasks
Published 2016-06-27
URL http://arxiv.org/abs/1606.08207v3
PDF http://arxiv.org/pdf/1606.08207v3.pdf
PWC https://paperswithcode.com/paper/semantic-homophily-in-online-communication
Repo
Framework
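The assortativity coefficient mentioned above has a closed form for a categorical node attribute (Newman's formula over the normalized mixing matrix), which can be computed directly from an undirected edge list. The tiny graphs in the test are invented; the paper's networks are of course far larger.

```python
from collections import defaultdict

def attribute_assortativity(edges, attr):
    """Newman's assortativity coefficient for a categorical node attribute:
    r = (sum_i e_ii - sum_i a_i * b_i) / (1 - sum_i a_i * b_i),
    where e is the mixing matrix normalized over both edge directions."""
    mix = defaultdict(float)
    total = 0
    for u, v in edges:  # count each undirected edge in both directions
        for a, b in ((attr[u], attr[v]), (attr[v], attr[u])):
            mix[(a, b)] += 1
            total += 1
    cats = {c for pair in mix for c in pair}
    e = {p: c / total for p, c in mix.items()}
    tr = sum(e.get((c, c), 0.0) for c in cats)
    ab = sum(sum(e.get((c, d), 0.0) for d in cats) *
             sum(e.get((d, c), 0.0) for d in cats) for c in cats)
    return (tr - ab) / (1 - ab)
```

The coefficient is +1 when edges only connect same-attribute nodes and -1 when they only connect different-attribute nodes, which is the scale on which semantic homophily in the mention network is measured.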

Mapping distributional to model-theoretic semantic spaces: a baseline

Title Mapping distributional to model-theoretic semantic spaces: a baseline
Authors Franck Dernoncourt
Abstract Word embeddings have been shown to be useful across state-of-the-art systems in many natural language processing tasks, ranging from question answering systems to dependency parsing. (Herbelot and Vecchi, 2015) explored word embeddings and their utility for modeling language semantics. In particular, they presented an approach to automatically map a standard distributional semantic space onto a set-theoretic model using partial least squares regression. We show in this paper that a simple baseline achieves a +51% relative improvement compared to their model on one of the two datasets they used, and yields competitive results on the second dataset.
Tasks Dependency Parsing, Question Answering, Word Embeddings
Published 2016-07-11
URL http://arxiv.org/abs/1607.02802v1
PDF http://arxiv.org/pdf/1607.02802v1.pdf
PWC https://paperswithcode.com/paper/mapping-distributional-to-model-theoretic
Repo
Framework
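The abstract does not spell out what the simple baseline is, so the sketch below shows one classic simple baseline for cross-space mapping tasks, a nearest-neighbour map, purely for illustration: the query word receives the model-theoretic vector of its closest neighbour in distributional space. The toy vectors are invented.

```python
def nn_map(query, train_src, train_tgt):
    """Nearest-neighbour mapping baseline: return the target-space vector
    paired with the closest training vector in the source space."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    i = min(range(len(train_src)), key=lambda j: d2(query, train_src[j]))
    return train_tgt[i]
```

Unlike a regression such as partial least squares, this baseline learns no parameters at all, which is what makes beating a trained model with it a noteworthy result.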