Paper Group ANR 111
Daleel: Simplifying Cloud Instance Selection Using Machine Learning. SceneNet RGB-D: 5M Photorealistic Images of Synthetic Indoor Trajectories with Ground Truth. Mining the Web for Pharmacovigilance: the Case Study of Duloxetine and Venlafaxine. Variational Inference for On-line Anomaly Detection in High-Dimensional Time Series. Greedy bi-criteria …
Daleel: Simplifying Cloud Instance Selection Using Machine Learning
Title | Daleel: Simplifying Cloud Instance Selection Using Machine Learning |
Authors | Faiza Samreen, Yehia Elkhatib, Matthew Rowe, Gordon S. Blair |
Abstract | Decision making in cloud environments is quite challenging due to the diversity in service offerings and pricing models, especially considering that the cloud market is an incredibly fast-moving one. In addition, there are no hard and fast rules; each customer has a specific set of constraints (e.g. budget) and application requirements (e.g. minimum computational resources). Machine learning can help address some of the complicated decisions by carrying out customer-specific analytics to determine the most suitable instance type(s) and the most opportune time for starting or migrating instances. We employ machine learning techniques to develop an adaptive deployment policy, providing an optimal match between the customer demands and the available cloud service offerings. We provide an experimental study based on an extensive set of job executions over a major public cloud infrastructure. |
Tasks | Decision Making |
Published | 2016-02-05 |
URL | http://arxiv.org/abs/1602.02159v1 |
http://arxiv.org/pdf/1602.02159v1.pdf | |
PWC | https://paperswithcode.com/paper/daleel-simplifying-cloud-instance-selection |
Repo | |
Framework | |
SceneNet RGB-D: 5M Photorealistic Images of Synthetic Indoor Trajectories with Ground Truth
Title | SceneNet RGB-D: 5M Photorealistic Images of Synthetic Indoor Trajectories with Ground Truth |
Authors | John McCormac, Ankur Handa, Stefan Leutenegger, Andrew J. Davison |
Abstract | We introduce SceneNet RGB-D, expanding the previous work of SceneNet to enable large scale photorealistic rendering of indoor scene trajectories. It provides pixel-perfect ground truth for scene understanding problems such as semantic segmentation, instance segmentation, and object detection, and also for geometric computer vision problems such as optical flow, depth estimation, camera pose estimation, and 3D reconstruction. Random sampling permits virtually unlimited scene configurations, and here we provide a set of 5M rendered RGB-D images from over 15K trajectories in synthetic layouts with random but physically simulated object poses. Each layout also has random lighting, camera trajectories, and textures. The scale of this dataset is well suited for pre-training data-driven computer vision techniques from scratch with RGB-D inputs, which has previously been limited by the relatively small labelled datasets NYUv2 and SUN RGB-D. It also provides a basis for investigating 3D scene labelling tasks by providing perfect camera poses and depth data as a proxy for a SLAM system. We host the dataset at http://robotvault.bitbucket.io/scenenet-rgbd.html |
Tasks | 3D Reconstruction, Depth Estimation, Instance Segmentation, Object Detection, Optical Flow Estimation, Pose Estimation, Scene Understanding, Semantic Segmentation |
Published | 2016-12-15 |
URL | http://arxiv.org/abs/1612.05079v3 |
http://arxiv.org/pdf/1612.05079v3.pdf | |
PWC | https://paperswithcode.com/paper/scenenet-rgb-d-5m-photorealistic-images-of |
Repo | |
Framework | |
Mining the Web for Pharmacovigilance: the Case Study of Duloxetine and Venlafaxine
Title | Mining the Web for Pharmacovigilance: the Case Study of Duloxetine and Venlafaxine |
Authors | Abbas Chokor, Abeed Sarker, Graciela Gonzalez |
Abstract | Adverse reactions caused by drugs following their release into the market are among the leading causes of death in many countries. The rapid growth of electronically available health-related information, and the ability to process large volumes of it automatically using natural language processing (NLP) and machine learning algorithms, have opened new opportunities for pharmacovigilance. A survey found that more than 70% of US Internet users consult the Internet when they require medical information. In recent years, research in this area has addressed Adverse Drug Reaction (ADR) pharmacovigilance using social media, mainly Twitter, medical forums, and websites. This paper shows what information can be collected from a variety of Internet data sources and search engines, mainly Google Trends and Google Correlate. Considering the case study of two popular Major Depressive Disorder (MDD) drugs, Duloxetine and Venlafaxine, we provide a comparative analysis of their adverse reactions using publicly available alternative data sources. |
Tasks | |
Published | 2016-10-08 |
URL | http://arxiv.org/abs/1610.02567v1 |
http://arxiv.org/pdf/1610.02567v1.pdf | |
PWC | https://paperswithcode.com/paper/mining-the-web-for-pharmacovigilance-the-case |
Repo | |
Framework | |
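The abstract above relies on Google Trends and Google Correlate as data sources. Purely as an illustration of collecting comparable search-interest series for the two drug names, the sketch below uses pytrends, an unofficial third-party Google Trends client that is not mentioned in the paper; the timeframe and query terms are assumptions.

```python
# Illustrative only: the paper uses Google Trends / Google Correlate via their web
# interfaces; pytrends is an unofficial third-party client assumed here for convenience.
from pytrends.request import TrendReq

def search_interest(terms, timeframe="2010-01-01 2016-10-01"):
    """Fetch weekly Google Trends relative interest for a list of query terms."""
    pytrends = TrendReq(hl="en-US", tz=0)
    pytrends.build_payload(terms, timeframe=timeframe)
    df = pytrends.interest_over_time()
    return df.drop(columns=["isPartial"], errors="ignore")

if __name__ == "__main__":
    df = search_interest(["duloxetine", "venlafaxine"])
    # Simple comparative summary: mean relative interest and correlation between the series.
    print(df.mean())
    print(df["duloxetine"].corr(df["venlafaxine"]))
```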
Variational Inference for On-line Anomaly Detection in High-Dimensional Time Series
Title | Variational Inference for On-line Anomaly Detection in High-Dimensional Time Series |
Authors | Maximilian Soelch, Justin Bayer, Marvin Ludersdorfer, Patrick van der Smagt |
Abstract | Approximate variational inference has been shown to be a powerful tool for modeling unknown complex probability distributions. Recent advances in the field allow us to learn probabilistic models of sequences that actively exploit spatial and temporal structure. We apply a Stochastic Recurrent Network (STORN) to learn robot time series data. Our evaluation demonstrates that we can robustly detect anomalies both off- and on-line. |
Tasks | Anomaly Detection, Time Series |
Published | 2016-02-23 |
URL | http://arxiv.org/abs/1602.07109v5 |
http://arxiv.org/pdf/1602.07109v5.pdf | |
PWC | https://paperswithcode.com/paper/variational-inference-for-on-line-anomaly |
Repo | |
Framework | |
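STORN is a full stochastic recurrent (variational) model; the sketch below is a much simpler stand-in that scores each time step by its negative predictive log-likelihood under a GRU with a Gaussian output head, just to illustrate likelihood-based on-line anomaly scoring. Layer sizes and the thresholding scheme are assumptions, not the paper's architecture.

```python
# Simplified stand-in for STORN: a GRU predicts a Gaussian over the next observation,
# and the per-step negative log-likelihood serves as an on-line anomaly score.
import torch.nn as nn
from torch.distributions import Normal

class PredictiveScorer(nn.Module):
    def __init__(self, obs_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.mean = nn.Linear(hidden_dim, obs_dim)
        self.logvar = nn.Linear(hidden_dim, obs_dim)

    def forward(self, x):                       # x: (batch, time, obs_dim)
        h, _ = self.rnn(x[:, :-1])              # predict step t+1 from steps <= t
        dist = Normal(self.mean(h), (0.5 * self.logvar(h)).exp())
        nll = -dist.log_prob(x[:, 1:]).sum(-1)  # (batch, time-1) anomaly scores
        return nll

# Usage sketch: train by minimizing nll.mean(); at test time flag steps whose score
# exceeds a threshold calibrated on anomaly-free validation sequences.
```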
Greedy bi-criteria approximations for $k$-medians and $k$-means
Title | Greedy bi-criteria approximations for $k$-medians and $k$-means |
Authors | Daniel Hsu, Matus Telgarsky |
Abstract | This paper investigates the following natural greedy procedure for clustering in the bi-criterion setting: iteratively grow a set of centers, in each round adding the center from a candidate set that maximally decreases clustering cost. In the case of $k$-medians and $k$-means, the key results are as follows. $\bullet$ When the method considers all data points as candidate centers, selecting $\mathcal{O}(k\log(1/\varepsilon))$ centers achieves a cost at most $2+\varepsilon$ times the optimal cost with $k$ centers. $\bullet$ Alternatively, the same guarantees hold if each round samples $\mathcal{O}(k/\varepsilon^5)$ candidate centers proportionally to their cluster cost (as with $\texttt{kmeans++}$, but holding centers fixed). $\bullet$ In the case of $k$-means, considering an augmented set of $n^{\lceil1/\varepsilon\rceil}$ candidate centers gives a $1+\varepsilon$ approximation with $\mathcal{O}(k\log(1/\varepsilon))$ centers, the entire algorithm taking $\mathcal{O}(dk\log(1/\varepsilon)n^{1+\lceil1/\varepsilon\rceil})$ time, where $n$ is the number of data points in $\mathbb{R}^d$. $\bullet$ In the case of Euclidean $k$-medians, generating a candidate set via $n^{\mathcal{O}(1/\varepsilon^2)}$ executions of stochastic gradient descent with adaptively determined constraint sets once again gives a $1+\varepsilon$ approximation with $\mathcal{O}(k\log(1/\varepsilon))$ centers in $dk\log(1/\varepsilon)n^{\mathcal{O}(1/\varepsilon^2)}$ time. Ancillary results include: guarantees for cluster costs based on powers of metrics; a brief, favorable empirical evaluation against $\texttt{kmeans++}$; and data-dependent bounds allowing $1+\varepsilon$ in the first two bullets above, for example with $k$-medians over finite metric spaces. |
Tasks | |
Published | 2016-07-21 |
URL | http://arxiv.org/abs/1607.06203v1 |
http://arxiv.org/pdf/1607.06203v1.pdf | |
PWC | https://paperswithcode.com/paper/greedy-bi-criteria-approximations-for-k |
Repo | |
Framework | |
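A minimal sketch of the greedy procedure in the first bullet: every data point acts as a candidate center, and each round adds the candidate that most decreases the $k$-means cost. Per the abstract, selecting on the order of $k\log(1/\varepsilon)$ centers this way yields a $2+\varepsilon$ cost guarantee. The implementation details below (dense pairwise distances, variable names) are assumptions for clarity, not the authors' code.

```python
import numpy as np

def greedy_kmeans_centers(X, num_centers):
    """Greedy bi-criteria center selection sketch: every data point is a candidate
    center, and each round adds the candidate that most decreases the k-means cost."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    nearest = np.full(n, np.inf)                           # dist^2 to closest chosen center
    chosen = []
    for _ in range(num_centers):
        # Cost of the clustering if candidate j were added to the current centers.
        costs = np.minimum(nearest[:, None], d2).sum(axis=0)
        j = int(np.argmin(costs))
        chosen.append(j)
        nearest = np.minimum(nearest, d2[:, j])
    return X[chosen], nearest.sum()
```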
The Extended Littlestone’s Dimension for Learning with Mistakes and Abstentions
Title | The Extended Littlestone’s Dimension for Learning with Mistakes and Abstentions |
Authors | Chicheng Zhang, Kamalika Chaudhuri |
Abstract | This paper studies classification with an abstention option in the online setting. In this setting, examples arrive sequentially, the learner is given a hypothesis class $\mathcal H$, and the goal of the learner is to either predict a label on each example or abstain, while ensuring that it does not make more than a pre-specified number of mistakes when it does predict a label. Previous work on this problem has left open two main challenges. First, not much is known about the optimality of algorithms, and in particular, about what an optimal algorithmic strategy is for any individual hypothesis class. Second, while the realizable case has been studied, the more realistic non-realizable scenario is not well-understood. In this paper, we address both challenges. First, we provide a novel measure, called the Extended Littlestone’s Dimension, which captures the number of abstentions needed to ensure a certain number of mistakes. Second, we explore the non-realizable case, and provide upper and lower bounds on the number of abstentions required by an algorithm to guarantee a specified number of mistakes. |
Tasks | |
Published | 2016-04-21 |
URL | http://arxiv.org/abs/1604.06162v3 |
http://arxiv.org/pdf/1604.06162v3.pdf | |
PWC | https://paperswithcode.com/paper/the-extended-littlestones-dimension-for |
Repo | |
Framework | |
Efficient Action Detection in Untrimmed Videos via Multi-Task Learning
Title | Efficient Action Detection in Untrimmed Videos via Multi-Task Learning |
Authors | Yi Zhu, Shawn Newsam |
Abstract | This paper studies the joint learning of action recognition and temporal localization in long, untrimmed videos. We employ a multi-task learning framework that performs the three highly related steps of action proposal, action recognition, and action localization refinement in parallel instead of the standard sequential pipeline that performs the steps in order. We develop a novel temporal actionness regression module that estimates what proportion of a clip contains action. We use it for temporal localization but it could have other applications like video retrieval, surveillance, summarization, etc. We also introduce random shear augmentation during training to simulate viewpoint change. We evaluate our framework on three popular video benchmarks. Results demonstrate that our joint model is efficient in terms of storage and computation in that we do not need to compute and cache dense trajectory features, and that it is several times faster than its sequential ConvNets counterpart. Yet, despite being more efficient, it outperforms state-of-the-art methods with respect to accuracy. |
Tasks | Action Detection, Action Localization, Multi-Task Learning, Temporal Action Localization, Temporal Localization, Video Retrieval |
Published | 2016-12-22 |
URL | http://arxiv.org/abs/1612.07403v2 |
http://arxiv.org/pdf/1612.07403v2.pdf | |
PWC | https://paperswithcode.com/paper/efficient-action-detection-in-untrimmed |
Repo | |
Framework | |
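The abstract mentions random shear augmentation to simulate viewpoint change. Below is a hedged sketch of one possible implementation, an OpenCV affine warp with an assumed shear range; the paper's exact parameters are not given in the abstract.

```python
import cv2
import numpy as np

def random_shear(frame, max_shear=0.2, rng=None):
    """Apply a random horizontal shear to a frame (H, W, C) to mimic viewpoint change.

    The shear range is an assumed hyperparameter, not the paper's setting.
    """
    rng = rng or np.random.default_rng()
    s = rng.uniform(-max_shear, max_shear)
    h, w = frame.shape[:2]
    M = np.float32([[1, s, -s * h / 2],   # shift so the shear stays roughly centred
                    [0, 1, 0]])
    return cv2.warpAffine(frame, M, (w, h), borderMode=cv2.BORDER_REFLECT)
```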
Robust Natural Language Processing - Combining Reasoning, Cognitive Semantics and Construction Grammar for Spatial Language
Title | Robust Natural Language Processing - Combining Reasoning, Cognitive Semantics and Construction Grammar for Spatial Language |
Authors | Michael Spranger, Jakob Suchan, Mehul Bhatt |
Abstract | We present a system for generating and understanding dynamic and static spatial relations in robotic interaction setups. Robots describe an environment of moving blocks using English phrases that include spatial relations such as “across” and “in front of”. We evaluate the system in robot-robot interactions and show that the system can robustly deal with visual perception errors, language omissions and ungrammatical utterances. |
Tasks | |
Published | 2016-07-20 |
URL | http://arxiv.org/abs/1607.05968v1 |
http://arxiv.org/pdf/1607.05968v1.pdf | |
PWC | https://paperswithcode.com/paper/robust-natural-language-processing-combining |
Repo | |
Framework | |
Object Detection from Video Tubelets with Convolutional Neural Networks
Title | Object Detection from Video Tubelets with Convolutional Neural Networks |
Authors | Kai Kang, Wanli Ouyang, Hongsheng Li, Xiaogang Wang |
Abstract | Deep Convolutional Neural Networks (CNNs) have shown impressive performance in various vision tasks such as image classification, object detection and semantic segmentation. For object detection, particularly in still images, performance has increased significantly in the last year thanks to powerful deep networks (e.g. GoogleNet) and detection frameworks (e.g. Regions with CNN features (R-CNN)). The recently introduced ImageNet task on object detection from video (VID) brings the object detection task into the video domain, in which objects’ locations in each frame must be annotated with bounding boxes. In this work, we introduce a complete framework for the VID task based on still-image object detection and general object tracking. Their relations and contributions in the VID task are thoroughly studied and evaluated. In addition, a temporal convolution network is proposed to incorporate temporal information to regularize the detection results and shows its effectiveness for the task. |
Tasks | Image Classification, Object Detection, Object Tracking, Semantic Segmentation |
Published | 2016-04-14 |
URL | http://arxiv.org/abs/1604.04053v1 |
http://arxiv.org/pdf/1604.04053v1.pdf | |
PWC | https://paperswithcode.com/paper/object-detection-from-video-tubelets-with |
Repo | |
Framework | |
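The paper proposes a learned temporal convolution network to regularize per-frame detections. As a much simpler illustration of the same idea, the sketch below applies a fixed moving-average kernel to the detection confidences along a tubelet; the learned network itself is not reproduced here.

```python
import numpy as np

def smooth_tubelet_scores(scores, window=5):
    """Temporally regularize per-frame detection confidences along a tubelet.

    The paper learns a temporal convolution network; this fixed moving-average
    kernel is only a simplified stand-in for that idea.
    """
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(scores, pad, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

# Example: a spurious single-frame dip is pulled back toward its neighbours.
print(smooth_tubelet_scores(np.array([0.9, 0.88, 0.2, 0.91, 0.9])))
```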
Tubelets: Unsupervised action proposals from spatiotemporal super-voxels
Title | Tubelets: Unsupervised action proposals from spatiotemporal super-voxels |
Authors | Mihir Jain, Jan van Gemert, Hervé Jégou, Patrick Bouthemy, Cees G. M. Snoek |
Abstract | This paper considers the problem of localizing actions in videos as sequences of bounding boxes. The objective is to generate action proposals that are likely to include the action of interest, ideally achieving high recall with few proposals. Our contributions are threefold. First, inspired by selective search for object proposals, we introduce an approach to generate action proposals from spatiotemporal super-voxels in an unsupervised manner; we call them Tubelets. Second, along with the static features from individual frames, our approach advantageously exploits motion. We introduce independent motion evidence as a feature to characterize how the action deviates from the background and explicitly incorporate such motion information in various stages of the proposal generation. Finally, we introduce spatiotemporal refinement of Tubelets, for more precise localization of actions, and pruning to keep the number of Tubelets limited. We demonstrate the suitability of our approach by extensive experiments for action proposal quality and action localization on three public datasets: UCF Sports, MSR-II and UCF101. For action proposal quality, our unsupervised proposals beat all other existing approaches on the three datasets. For action localization, we show top performance on both the trimmed videos of UCF Sports and UCF101 as well as the untrimmed videos of MSR-II. |
Tasks | Action Localization |
Published | 2016-07-07 |
URL | http://arxiv.org/abs/1607.02003v1 |
http://arxiv.org/pdf/1607.02003v1.pdf | |
PWC | https://paperswithcode.com/paper/tubelets-unsupervised-action-proposals-from |
Repo | |
Framework | |
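A small sketch of the tubelet representation itself: a spatiotemporal super-voxel, given as per-frame binary masks, is converted into a sequence of per-frame bounding boxes. This only illustrates the output format; the unsupervised super-voxel grouping and motion features are not reproduced.

```python
import numpy as np

def masks_to_tubelet(masks):
    """Convert a super-voxel, given as per-frame binary masks (T, H, W), into a
    tubelet: one (x_min, y_min, x_max, y_max) box per frame it touches."""
    tubelet = {}
    for t, mask in enumerate(masks):
        ys, xs = np.nonzero(mask)
        if xs.size == 0:          # super-voxel absent in this frame
            continue
        tubelet[t] = (xs.min(), ys.min(), xs.max(), ys.max())
    return tubelet
```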
An Empirical Study of Dimensional Reduction Techniques for Facial Action Units Detection
Title | An Empirical Study of Dimensional Reduction Techniques for Facial Action Units Detection |
Authors | Zhuo Hui, Wen-Sheng Chu |
Abstract | Biologically inspired features, such as Gabor filters, result in very high-dimensional measurements. Does reducing the dimensionality of the feature space afford advantages beyond computational efficiency? Do some approaches to dimensionality reduction (DR) yield improved action unit detection? To answer these questions, we compared DR approaches in two relatively large databases of spontaneous facial behavior (45 participants in total with over 2 minutes of FACS-coded video per participant). Facial features were tracked and aligned using active appearance models (AAM). SIFT and Gabor features were extracted from local facial regions. We compared linear (PCA and KPCA), manifold (LPP and LLE), supervised (LDA and KDA) and hybrid (LSDA) approaches to DR with respect to AU detection. For further comparison, a no-DR control condition was included as well. Linear support vector machine classifiers with independent train and test sets were used for AU detection. AU detection was quantified using area under the ROC curve and F1. Baseline results for PCA with Gabor features were comparable with previous research. With some notable exceptions, DR improved AU detection relative to no-DR. Locality embedding approaches proved vulnerable to out-of-sample problems. Gradient-based SIFT led to better AU detection than the filter-based Gabor features. For area under the curve, few differences were found between linear and other DR approaches. For F1, results were mixed. For both metrics, the pattern of results varied among action units. These findings suggest that action unit detection may be optimized by using specific DR for specific action units. PCA and LDA were the most efficient approaches; KDA was the least efficient. |
Tasks | Action Unit Detection, Dimensionality Reduction |
Published | 2016-03-25 |
URL | http://arxiv.org/abs/1603.08039v1 |
http://arxiv.org/pdf/1603.08039v1.pdf | |
PWC | https://paperswithcode.com/paper/an-empirical-study-of-dimensional-reduction |
Repo | |
Framework | |
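A minimal sketch of the evaluation protocol described above: one dimensionality-reduction method (PCA here, one of those compared) versus the no-DR control, each followed by a linear SVM and scored with ROC AUC. Feature extraction (AAM alignment, SIFT/Gabor descriptors) and the independent train/test split are assumed to have been done already.

```python
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def evaluate(X_train, y_train, X_test, y_test, reducer=None):
    """AU detection AUC for one DR method; reducer=None is the no-DR control."""
    steps = [StandardScaler()]
    if reducer is not None:
        steps.append(reducer)
    steps.append(LinearSVC())
    clf = make_pipeline(*steps).fit(X_train, y_train)
    return roc_auc_score(y_test, clf.decision_function(X_test))

# Example comparison for one action unit (X_tr/y_tr/X_te/y_te assumed given):
# auc_pca  = evaluate(X_tr, y_tr, X_te, y_te, PCA(n_components=100))
# auc_none = evaluate(X_tr, y_tr, X_te, y_te, None)
```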
Bipartite Correlation Clustering – Maximizing Agreements
Title | Bipartite Correlation Clustering – Maximizing Agreements |
Authors | Megasthenis Asteris, Anastasios Kyrillidis, Dimitris Papailiopoulos, Alexandros G. Dimakis |
Abstract | In Bipartite Correlation Clustering (BCC) we are given a complete bipartite graph $G$ with '+' and '-' edges, and we seek a vertex clustering that maximizes the number of agreements: the number of all '+' edges within clusters plus all '-' edges cut across clusters. BCC is known to be NP-hard. We present a novel approximation algorithm for $k$-BCC, a variant of BCC with an upper bound $k$ on the number of clusters. Our algorithm outputs a $k$-clustering that provably achieves a number of agreements within a multiplicative ${(1-\delta)}$-factor from the optimal, for any desired accuracy $\delta$. It relies on solving a combinatorially constrained bilinear maximization on the bi-adjacency matrix of $G$. It runs in time exponential in $k$ and $\delta^{-1}$, but linear in the size of the input. Further, we show that, in the (unconstrained) BCC setting, a ${(1-\delta)}$-approximation can be achieved by $O(\delta^{-1})$ clusters regardless of the size of the graph. In turn, our $k$-BCC algorithm implies an Efficient PTAS for the BCC objective of maximizing agreements. |
Tasks | |
Published | 2016-03-09 |
URL | http://arxiv.org/abs/1603.02782v1 |
http://arxiv.org/pdf/1603.02782v1.pdf | |
PWC | https://paperswithcode.com/paper/bipartite-correlation-clustering-maximizing |
Repo | |
Framework | |
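The sketch below only evaluates the BCC objective, counting agreements for a given clustering of a complete bipartite signed graph; the paper's $k$-BCC algorithm (the constrained bilinear maximization) is not reproduced.

```python
import numpy as np

def count_agreements(signs, left_labels, right_labels):
    """Number of agreements for a clustering of a complete bipartite signed graph.

    signs[i, j] is +1 or -1 for the edge between left vertex i and right vertex j.
    An agreement is a '+' edge inside a cluster or a '-' edge across clusters.
    """
    same = np.equal.outer(np.asarray(left_labels), np.asarray(right_labels))
    return int(np.sum((signs > 0) & same) + np.sum((signs < 0) & ~same))

# Example: two left and two right vertices, all '+' edges, everyone in one cluster
# -> all 4 edges agree.
print(count_agreements(np.ones((2, 2)), [0, 0], [0, 0]))  # 4
```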
Graphical Models for Optimal Power Flow
Title | Graphical Models for Optimal Power Flow |
Authors | Krishnamurthy Dvijotham, Pascal Van Hentenryck, Michael Chertkov, Sidhant Misra, Marc Vuffray |
Abstract | Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for “smart grid” applications like control of distributed energy resources. We evaluate our technique numerically on several benchmark networks and show that practical OPF problems can be solved effectively using this approach. |
Tasks | |
Published | 2016-06-21 |
URL | http://arxiv.org/abs/1606.06512v1 |
http://arxiv.org/pdf/1606.06512v1.pdf | |
PWC | https://paperswithcode.com/paper/graphical-models-for-optimal-power-flow |
Repo | |
Framework | |
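A generic sketch of the core idea, dynamic programming over a tree with discretized nodal variables: each bus variable is restricted to a finite grid, and a leaves-to-root pass minimizes the sum of node costs and edge (coupling) costs. The actual power-flow equations, interval arithmetic, and constraint propagation are omitted; all function and variable names here are assumptions.

```python
def tree_dp(children, grids, node_cost, edge_cost, root=0):
    """Minimize sum of node_cost(u, x_u) plus edge_cost(u, x_u, v, x_v) over a tree,
    with each variable x_u restricted to the finite grid grids[u]."""
    def table(u):
        # table(v) maps each grid value x_v to the optimal cost of v's subtree.
        child_tables = {v: table(v) for v in children.get(u, [])}
        t = {}
        for xu in grids[u]:
            cost = node_cost(u, xu)
            for v, tv in child_tables.items():
                cost += min(edge_cost(u, xu, v, xv) + cv for xv, cv in tv.items())
            t[xu] = cost
        return t

    return min(table(root).values())

# Toy example: two buses, quadratic node costs and a coupling penalty on the line
# (purely illustrative numbers, not a power-flow model).
grids = {0: [0.0, 0.5, 1.0], 1: [0.0, 0.5, 1.0]}
children = {0: [1]}
print(tree_dp(children, grids,
              node_cost=lambda u, x: x ** 2,
              edge_cost=lambda u, xu, v, xv: abs(xu - xv)))
```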
Finite LTL Synthesis is EXPTIME-complete
Title | Finite LTL Synthesis is EXPTIME-complete |
Authors | Jorge A. Baier, Alberto Camacho, Christian Muise, Sheila A. McIlraith |
Abstract | LTL synthesis – the construction of a function to satisfy a logical specification formulated in Linear Temporal Logic – is a 2EXPTIME-complete problem with relevant applications in controller synthesis and a myriad of artificial intelligence applications. In this research note we consider De Giacomo and Vardi’s variant of the synthesis problem for LTL formulas interpreted over finite rather than infinite traces. Rather surprisingly, given the existing claims on complexity, we establish that LTL synthesis is EXPTIME-complete for the finite interpretation, and not 2EXPTIME-complete as previously reported. Our result coincides nicely with the planning perspective where non-deterministic planning with full observability is EXPTIME-complete and partial observability increases the complexity to 2EXPTIME-complete; a recent related result for LTL synthesis shows that in the finite case with partial observability, the problem is 2EXPTIME-complete. |
Tasks | |
Published | 2016-09-14 |
URL | http://arxiv.org/abs/1609.04371v2 |
http://arxiv.org/pdf/1609.04371v2.pdf | |
PWC | https://paperswithcode.com/paper/finite-ltl-synthesis-is-exptime-complete |
Repo | |
Framework | |
Image encryption with dynamic chaotic Look-Up Table
Title | Image encryption with dynamic chaotic Look-Up Table |
Authors | Med Karim Abdmouleh, Ali Khalfallah, Med Salim Bouhlel |
Abstract | In this paper we propose a novel image encryption scheme. The proposed method is based on chaos theory: our cryptosystem uses chaos theory to define a dynamic chaotic Look-Up Table (LUT) that computes the new value of the current pixel to be ciphered. Applying this process to each pixel of the plain image, we generate the encrypted image. The results of different experimental tests, such as key-space analysis, information entropy and histogram analysis, show that the proposed image encryption scheme appears resistant to various attacks. A comparison between the plain and encrypted images, in terms of correlation coefficient, shows that the plain image is very different from the encrypted one. |
Tasks | |
Published | 2016-02-09 |
URL | http://arxiv.org/abs/1602.03205v1 |
http://arxiv.org/pdf/1602.03205v1.pdf | |
PWC | https://paperswithcode.com/paper/image-encryption-with-dynamic-chaotic-look-up |
Repo | |
Framework | |
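A sketch in the spirit of the abstract: a logistic map drives a substitution look-up table (LUT) that is periodically refreshed as pixels are processed. The paper's exact map, parameters, and LUT update rule are not given in the abstract, so the choices below are assumptions; this is a toy substitution step, not a full cryptosystem.

```python
import numpy as np

def encrypt(image, x0=0.3141, r=3.99, refresh=256):
    """Substitution sketch: a logistic-map keystream orders 0..255 into a LUT that is
    refreshed every `refresh` pixels. The exact dynamic-LUT rule of the paper is not
    stated in the abstract; this particular rule is an assumption."""
    flat = image.astype(np.uint8).ravel()
    out = np.empty_like(flat)
    x = x0
    lut = None
    for i, p in enumerate(flat):
        if i % refresh == 0:                       # rebuild the dynamic LUT
            xs = []
            for _ in range(256):
                x = r * x * (1.0 - x)              # logistic map iteration
                xs.append(x)
            lut = np.argsort(xs).astype(np.uint8)  # chaotic permutation of 0..255
        out[i] = lut[p]
    return out.reshape(image.shape)
```

Decryption would regenerate the same keystream from the shared key (x0, r) and apply the inverse of each permutation to the corresponding block of pixels.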