Paper Group ANR 37
Co-evolutionary multi-task learning for dynamic time series prediction. Calibration for the (Computationally-Identifiable) Masses. Environment-Independent Task Specifications via GLTL. Spice up Your Chat: The Intentions and Sentiment Effects of Using Emoji. Compressive Sensing of Color Images Using Nonlocal Higher Order Dictionary. Generating Compact Tree Ensembles via Annealing. Topic Modeling the Hàn diăn Ancient Classics. Comments on 'High-dimensional simultaneous inference with the bootstrap'. ImageNet MPEG-7 Visual Descriptors - Technical Report. Detecting Faces Using Region-based Fully Convolutional Networks. Deep Reinforcement Learning for Robotic Manipulation-The state of the art. Pixel-wise Ear Detection with Convolutional Encoder-Decoder Networks. A Novel Approach for Effective Learning in Low Resourced Scenarios. Convolutional neural networks pretrained on large face recognition datasets for emotion classification from video. Tensor Contraction Layers for Parsimonious Deep Nets.
Co-evolutionary multi-task learning for dynamic time series prediction
Title | Co-evolutionary multi-task learning for dynamic time series prediction |
Authors | Rohitash Chandra, Yew-Soon Ong, Chi-Keong Goh |
Abstract | Time series prediction typically consists of a data reconstruction phase where the time series is broken into overlapping windows known as the timespan. The size of the timespan can be seen as a way of determining the extent of past information required for an effective prediction. In certain applications, such as the prediction of the wind intensity of storms and cyclones, prediction models need to be dynamic in accommodating different values of the timespan. These applications require robust prediction as soon as the event takes place. We identify a new category of problem called dynamic time series prediction that requires a model to give predictions when presented with varying lengths of the timespan. In this paper, we propose a co-evolutionary multi-task learning method that provides a synergy between multi-task learning and co-evolutionary algorithms to address dynamic time series prediction. The method features effective use of building blocks of knowledge inspired by dynamic programming and multi-task learning. It enables neural networks to retain modularity during training, so they can make decisions even when certain inputs are missing. The effectiveness of the method is demonstrated using one-step-ahead chaotic time series and tropical cyclone wind-intensity prediction. |
Tasks | Multi-Task Learning, Time Series, Time Series Prediction |
Published | 2017-02-27 |
URL | http://arxiv.org/abs/1703.01887v2 |
http://arxiv.org/pdf/1703.01887v2.pdf | |
PWC | https://paperswithcode.com/paper/co-evolutionary-multi-task-learning-for |
Repo | |
Framework | |
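As a rough, hedged sketch of the dynamic-timespan setup this abstract describes (not the paper's co-evolutionary optimizer itself), the following Python fragment reconstructs a series into overlapping windows for several timespans, so that one model could be queried with varying input lengths; the function names here are illustrative only.

```python
import numpy as np

def make_windows(series, timespan):
    """Break a univariate series into overlapping windows of a given
    timespan, paired with one-step-ahead targets."""
    X = np.array([series[i:i + timespan] for i in range(len(series) - timespan)])
    y = series[timespan:]
    return X, y

# A dynamic prediction task presents the model with several timespans at
# once; each timespan can be seen as one subtask in the multi-task setup.
series = np.sin(np.linspace(0, 20, 500))
tasks = {m: make_windows(series, m) for m in (3, 5, 7)}
for m, (X, y) in tasks.items():
    print(m, X.shape, y.shape)
```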
Calibration for the (Computationally-Identifiable) Masses
Title | Calibration for the (Computationally-Identifiable) Masses |
Authors | Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, Guy N. Rothblum |
Abstract | As algorithms increasingly inform and influence decisions made about individuals, it becomes increasingly important to address concerns that these algorithms might be discriminatory. The output of an algorithm can be discriminatory for many reasons, most notably: (1) the data used to train the algorithm might be biased (in various ways) to favor certain populations over others; (2) the analysis of this training data might inadvertently or maliciously introduce biases that are not borne out in the data. This work focuses on the latter concern. We develop and study multicalibration – a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data. Multicalibration guarantees accurate (calibrated) predictions for every subpopulation that can be identified within a specified class of computations. We think of the class as being quite rich; in particular, it can contain many overlapping subgroups of a protected group. We show that in many settings this strong notion of protection from discrimination is both attainable and aligned with the goal of obtaining accurate predictions. Along the way, we present new algorithms for learning a multicalibrated predictor, study the computational complexity of this task, and draw new connections to computational learning models such as agnostic learning. |
Tasks | Calibration |
Published | 2017-11-22 |
URL | http://arxiv.org/abs/1711.08513v2 |
http://arxiv.org/pdf/1711.08513v2.pdf | |
PWC | https://paperswithcode.com/paper/calibration-for-the-computationally |
Repo | |
Framework | |
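The iterative flavor of multicalibration can be pictured as a post-processing loop. The sketch below is a deliberate simplification: it matches group means rather than calibrating within every prediction level set as the paper's notion requires, and all names are our own, not the paper's algorithm.

```python
import numpy as np

def multicalibrate(p, y, groups, alpha=0.05, max_iter=1000):
    """Hedged sketch of multicalibration-style post-processing: while some
    identifiable subpopulation's average prediction deviates from its
    average outcome by more than alpha, shift predictions on that set."""
    p = p.astype(float).copy()
    for _ in range(max_iter):
        updated = False
        for S in groups:                      # S: boolean mask over individuals
            gap = y[S].mean() - p[S].mean()   # calibration error on S
            if abs(gap) > alpha:
                p[S] = np.clip(p[S] + gap, 0.0, 1.0)
                updated = True
        if not updated:                       # all groups within tolerance
            break
    return p
```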
Environment-Independent Task Specifications via GLTL
Title | Environment-Independent Task Specifications via GLTL |
Authors | Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan |
Abstract | We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly. |
Tasks | |
Published | 2017-04-14 |
URL | http://arxiv.org/abs/1704.04341v1 |
http://arxiv.org/pdf/1704.04341v1.pdf | |
PWC | https://paperswithcode.com/paper/environment-independent-task-specifications |
Repo | |
Framework | |
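A hedged way to picture GLTL's geometric semantics: each temporal operator survives a step with probability gamma, so "eventually phi" must be satisfied before a geometrically distributed deadline. The Monte-Carlo fragment below illustrates that reading on a fixed trajectory; it is an assumption-laden sketch, not the paper's formal definition.

```python
import random

def eventually_gltl(trajectory, phi, gamma=0.9, n_samples=10000):
    """Estimate the probability that phi holds at some state before the
    geometric deadline (the operator survives each step w.p. gamma)."""
    hits = 0
    for _ in range(n_samples):
        for state in trajectory:
            if phi(state):
                hits += 1
                break
            if random.random() > gamma:   # operator expires this step
                break
    return hits / n_samples

# e.g. probability of reaching the goal state 4 before the deadline
traj = [0, 0, 1, 2, 3, 4]
print(eventually_gltl(traj, lambda s: s == 4, gamma=0.8))
```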
Spice up Your Chat: The Intentions and Sentiment Effects of Using Emoji
Title | Spice up Your Chat: The Intentions and Sentiment Effects of Using Emoji |
Authors | Tianran Hu, Han Guo, Hao Sun, Thuy-vy Thi Nguyen, Jiebo Luo |
Abstract | Emojis, as a new way of conveying nonverbal cues, are widely adopted in computer-mediated communications. In this paper, first from a message sender perspective, we focus on people’s motives in using four types of emojis – positive, neutral, negative, and non-facial. We compare the willingness levels of using these emoji types for seven typical intentions for which people usually use nonverbal cues in communication. The results of extensive statistical hypothesis tests not only report the relative popularity of the intentions, but also uncover the subtle differences between emoji types in terms of intended uses. Second, from the perspective of message recipients, we further study the sentiment effects of emojis, as well as their duplications, on verbal messages. Unlike previous studies of emoji sentiment, we study the sentiments of emojis and their contexts as a whole. The experiment results indicate that the power to convey sentiment differs among the four emoji types, and that the sentiment effects of emojis vary in the contexts of different valences. |
Tasks | |
Published | 2017-03-08 |
URL | http://arxiv.org/abs/1703.02860v1 |
http://arxiv.org/pdf/1703.02860v1.pdf | |
PWC | https://paperswithcode.com/paper/spice-up-your-chat-the-intentions-and |
Repo | |
Framework | |
Compressive Sensing of Color Images Using Nonlocal Higher Order Dictionary
Title | Compressive Sensing of Color Images Using Nonlocal Higher Order Dictionary |
Authors | Khanh Quoc Dinh, Thuong Nguyen Canh, Byeungwoo Jeon |
Abstract | This paper addresses an ill-posed problem of recovering a color image from its compressively sensed measurement data. Unlike the typical 1D vector-based approach of the state-of-the-art methods, we exploit the nonlocal similarities inherently existing in images by treating each patch of a color image as a 3D tensor consisting of not only horizontal and vertical but also spectral dimensions. A group of nonlocal similar patches forms a 4D tensor for which a nonlocal higher order dictionary is learned via higher order singular value decomposition. The multiple sub-dictionaries contained in the higher order dictionary decorrelate the group in each corresponding dimension, thus helping the details of color images to be reconstructed better. Furthermore, we promote sparsity of the final solution using a sparsity regularization based on a weight tensor. It distinguishes those coefficients of the sparse representation generated by the higher order dictionary that are expected to have large magnitudes from the others during optimization. Accordingly, in the iterative solution, it acts like a weighting process designed by approximating the minimum mean squared error filter for more faithful recovery. Experimental results confirm improvement by the proposed method over the state-of-the-art ones. |
Tasks | Compressive Sensing |
Published | 2017-11-26 |
URL | http://arxiv.org/abs/1711.09375v1 |
http://arxiv.org/pdf/1711.09375v1.pdf | |
PWC | https://paperswithcode.com/paper/compressive-sensing-of-color-images-using |
Repo | |
Framework | |
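To make the "nonlocal higher order dictionary" concrete: stacking a group of similar patches yields a 4D tensor, and a higher order SVD gives one sub-dictionary (factor matrix) per mode. Below is a minimal numpy sketch of that HOSVD step only, with the measurement and recovery machinery omitted.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher order SVD of a 4D group of nonlocal similar patches
    (height x width x spectral x patch-index). The left singular vectors
    of each unfolding act as the per-mode sub-dictionaries."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
               for m in range(T.ndim)]
    core = T
    for m, U in enumerate(factors):       # project onto each sub-dictionary
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, m)), 0, m)
    return core, factors

group = np.random.rand(8, 8, 3, 20)       # 20 similar 8x8 RGB patches
core, dicts = hosvd(group)
print([U.shape for U in dicts])
```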
Generating Compact Tree Ensembles via Annealing
Title | Generating Compact Tree Ensembles via Annealing |
Authors | Gitesh Dawer, Yangzi Guo, Adrian Barbu |
Abstract | Tree ensembles are flexible predictive models that can capture relevant variables and, to some extent, their interactions in a compact and interpretable manner. Most algorithms for obtaining tree ensembles are based on versions of boosting or Random Forest. Previous work showed that boosting algorithms exhibit a cyclic behavior of selecting the same tree again and again due to the way the loss is optimized. At the same time, Random Forest is not based on loss optimization and obtains a more complex and less interpretable model. In this paper we present a novel method for obtaining compact tree ensembles by growing a large pool of trees in parallel with many independent boosting threads and then selecting a small subset and updating their leaf weights by loss optimization. We allow the trees in the initial pool to have different depths, which further helps with generalization. Experiments on real datasets show that the obtained model usually has a smaller loss than boosting, which is also reflected in a lower misclassification error on the test set. |
Tasks | |
Published | 2017-09-16 |
URL | https://arxiv.org/abs/1709.05545v4 |
https://arxiv.org/pdf/1709.05545v4.pdf | |
PWC | https://paperswithcode.com/paper/relevant-ensemble-of-trees |
Repo | |
Framework | |
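The pool-then-select idea can be sketched as follows; note the stand-ins: a random forest replaces the paper's parallel boosting threads as the tree pool, and an L1-penalized refit of tree weights replaces its annealing-based selection.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Lasso

# Grow a large, varied pool of trees, then pick a small subset and
# re-weight them by loss optimization (hedged stand-in, see note above).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
pool = RandomForestClassifier(n_estimators=200, max_depth=None,
                              random_state=0).fit(X, y)

# Feature matrix: one column of per-tree scores per pool member.
F = np.column_stack([t.predict(X) for t in pool.estimators_])
selector = Lasso(alpha=0.01).fit(F, y)     # sparse weights over trees
kept = np.flatnonzero(selector.coef_)
print(f"kept {kept.size} of {F.shape[1]} trees")
```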
Topic Modeling the Hàn diăn Ancient Classics
Title | Topic Modeling the Hàn diăn Ancient Classics |
Authors | Colin Allen, Hongliang Luo, Jaimie Murdock, Jianghuai Pu, Xiaohong Wang, Yanjie Zhai, Kun Zhao |
Abstract | Ancient Chinese texts present an area of enormous challenge and opportunity for humanities scholars interested in exploiting computational methods to assist in the development of new insights and interpretations of culturally significant materials. In this paper we describe a collaborative effort between Indiana University and Xi’an Jiaotong University to support exploration and interpretation of a digital corpus of over 18,000 ancient Chinese documents, which we refer to as the “Handian” ancient classics corpus (Hàn diǎn gǔ jí, i.e., the “Han canon” or “Chinese classics”). It contains classics of ancient Chinese philosophy, documents of historical and biographical significance, and literary works. We begin by describing the Digital Humanities context of this joint project, and the advances in humanities computing that made this project feasible. We describe the corpus and introduce our application of probabilistic topic modeling to this corpus, with attention to the particular challenges posed by modeling ancient Chinese documents. We give a specific example of how the software we have developed can be used to aid discovery and interpretation of themes in the corpus. We outline more advanced forms of computer-aided interpretation that are also made possible by the programming interface provided by our system, and the general implications of these methods for understanding the nature of meaning in these texts. |
Tasks | |
Published | 2017-02-02 |
URL | http://arxiv.org/abs/1702.00860v1 |
http://arxiv.org/pdf/1702.00860v1.pdf | |
PWC | https://paperswithcode.com/paper/topic-modeling-the-han-dian-ancient-classics |
Repo | |
Framework | |
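For readers unfamiliar with probabilistic topic modeling, here is a toy Python sketch; the character n-gram tokenizer is our assumption for unsegmented classical Chinese, and the three one-line "documents" are placeholders, not Handian texts or the project's pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Ancient Chinese lacks word boundaries, so character n-grams stand in
# for the segmentation a real pipeline would need.
docs = ["道可道非常道", "名可名非常名", "上善若水"]   # toy stand-in documents
vec = CountVectorizer(analyzer="char", ngram_range=(1, 2))
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[-5:]]   # top terms per topic
    print(f"topic {k}: {top}")
```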
Comments on 'High-dimensional simultaneous inference with the bootstrap'
Title | Comments on 'High-dimensional simultaneous inference with the bootstrap' |
Authors | Jelena Bradic, Yinchu Zhu |
Abstract | We provide comments on the article “High-dimensional simultaneous inference with the bootstrap” by Ruben Dezeure, Peter Buhlmann and Cun-Hui Zhang. |
Tasks | |
Published | 2017-05-06 |
URL | http://arxiv.org/abs/1705.02441v1 |
http://arxiv.org/pdf/1705.02441v1.pdf | |
PWC | https://paperswithcode.com/paper/comments-on-high-dimensional-simultaneous |
Repo | |
Framework | |
ImageNet MPEG-7 Visual Descriptors - Technical Report
Title | ImageNet MPEG-7 Visual Descriptors - Technical Report |
Authors | Frédéric Rayar |
Abstract | ImageNet is a large scale and publicly available image database. It currently offers more than 14 million images, organised according to the WordNet hierarchy. One of the main objectives of the creators is to provide the research community with a relevant database for visual recognition applications such as object recognition, image classification or object localisation. However, only a few visual descriptors of the images are available for researchers to use: only SIFT-based features have been extracted, and only from a subset of the collection. This technical report presents the extraction of some MPEG-7 visual descriptors from the ImageNet database. These descriptors are made publicly available in an effort towards open research. |
Tasks | Image Classification, Object Recognition |
Published | 2017-02-01 |
URL | http://arxiv.org/abs/1702.00187v1 |
http://arxiv.org/pdf/1702.00187v1.pdf | |
PWC | https://paperswithcode.com/paper/imagenet-mpeg-7-visual-descriptors-technical |
Repo | |
Framework | |
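OpenCV ships no MPEG-7 module, so as a loose stand-in in the spirit of the MPEG-7 Scalable Color Descriptor, one could compute a quantized HSV histogram; the sketch below only illustrates what a per-image color descriptor looks like (the real SCD additionally Haar-transforms the histogram), and the image path is hypothetical.

```python
import cv2  # OpenCV; it has no MPEG-7 implementation, so this is a stand-in

def scalable_color_like(path, bins=(16, 4, 4)):
    """Quantized HSV histogram in the spirit of the Scalable Color
    Descriptor (hedged sketch; not the MPEG-7 reference extraction)."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, None).flatten()

# desc = scalable_color_like("image.jpg")   # hypothetical path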
Detecting Faces Using Region-based Fully Convolutional Networks
Title | Detecting Faces Using Region-based Fully Convolutional Networks |
Authors | Yitong Wang, Xing Ji, Zheng Zhou, Hao Wang, Zhifeng Li |
Abstract | Face detection has achieved great success using region-based methods. In this report, we propose a region-based face detector applying deep networks in a fully convolutional fashion, named Face R-FCN. Based on Region-based Fully Convolutional Networks (R-FCN), our face detector is more accurate and computationally efficient compared with previous R-CNN based face detectors. In our approach, we adopt the fully convolutional Residual Network (ResNet) as the backbone network. In particular, we exploit several new techniques, including position-sensitive average pooling, multi-scale training and testing, and an online hard example mining strategy, to improve detection accuracy. On the two most popular and challenging face detection benchmarks, FDDB and WIDER FACE, Face R-FCN achieves performance superior to the state of the art. |
Tasks | Face Detection |
Published | 2017-09-14 |
URL | http://arxiv.org/abs/1709.05256v2 |
http://arxiv.org/pdf/1709.05256v2.pdf | |
PWC | https://paperswithcode.com/paper/detecting-faces-using-region-based-fully |
Repo | |
Framework | |
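Position-sensitive average pooling, the R-FCN ingredient the abstract highlights, can be sketched in numpy: the score map carries k·k·C channels, and bin (i, j) of the k×k grid over a proposal averages only its dedicated group of C channels. A hedged, framework-free sketch:

```python
import numpy as np

def ps_roi_average_pool(score_maps, roi, k=3):
    """Position-sensitive average pooling: score_maps has k*k*C channels;
    bin (i, j) of the k x k grid over the ROI averages only its own
    dedicated group of C channels."""
    x0, y0, x1, y1 = roi
    C = score_maps.shape[0] // (k * k)
    out = np.zeros((C, k, k))
    xs = np.linspace(x0, x1, k + 1).astype(int)
    ys = np.linspace(y0, y1, k + 1).astype(int)
    for i in range(k):
        for j in range(k):
            group = score_maps[(i * k + j) * C:(i * k + j + 1) * C]
            cell = group[:, ys[i]:ys[i + 1] + 1, xs[j]:xs[j + 1] + 1]
            out[:, i, j] = cell.mean(axis=(1, 2))
    return out

maps = np.random.rand(9 * 2, 32, 32)   # k=3, C=2 (face / non-face)
print(ps_roi_average_pool(maps, roi=(4, 4, 27, 27)).shape)  # (2, 3, 3)
```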
Deep Reinforcement Learning for Robotic Manipulation-The state of the art
Title | Deep Reinforcement Learning for Robotic Manipulation-The state of the art |
Authors | Smruti Amarjyoti |
Abstract | The focus of this work is to enumerate the various approaches and algorithms that center around the application of reinforcement learning to robotic manipulation tasks. Earlier methods utilized specialized policy representations and human demonstrations to constrict the policy. Such methods worked well with the continuous state and policy spaces of robots but failed to come up with generalized policies. Subsequently, high-dimensional non-linear function approximators like neural networks have been used to learn policies from scratch. Several novel and recent approaches have also embedded control policies with efficient perceptual representations using deep learning. This has led to the emergence of a new branch of dynamic robot control systems called deep reinforcement learning (DRL). This work embodies a survey of the most recent algorithms, architectures, and their implementations in simulations and on real-world robotic platforms. The gamut of DRL architectures is partitioned into two branches: discrete action space algorithms (DAS) and continuous action space algorithms (CAS). Further, the CAS algorithms are divided into stochastic continuous action space (SCAS) and deterministic continuous action space (DCAS) algorithms. Along with elucidating an organisation of the DRL algorithms, this work also presents some of the state-of-the-art applications of these approaches to robotic manipulation tasks. |
Tasks | |
Published | 2017-01-31 |
URL | http://arxiv.org/abs/1701.08878v1 |
http://arxiv.org/pdf/1701.08878v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-reinforcement-learning-for-robotic-1 |
Repo | |
Framework | |
Pixel-wise Ear Detection with Convolutional Encoder-Decoder Networks
Title | Pixel-wise Ear Detection with Convolutional Encoder-Decoder Networks |
Authors | Žiga Emeršič, Luka Lan Gabriel, Vitomir Štruc, Peter Peer |
Abstract | Object detection and segmentation represent the basis for many tasks in computer and machine vision. In biometric recognition systems the detection of the region-of-interest (ROI) is one of the most crucial steps in the overall processing pipeline, significantly impacting the performance of the entire recognition system. Existing approaches to ear detection, for example, are commonly susceptible to the presence of severe occlusions, ear accessories or variable illumination conditions and often deteriorate in their performance if applied on ear images captured in unconstrained settings. To address these shortcomings, we present in this paper a novel ear detection technique based on convolutional encoder-decoder networks (CEDs). For our technique, we formulate the problem of ear detection as a two-class segmentation problem and train a convolutional encoder-decoder network based on the SegNet architecture to distinguish between image-pixels belonging to either the ear or the non-ear class. The output of the network is then post-processed to further refine the segmentation result and return the final locations of the ears in the input image. Different from competing techniques from the literature, our approach does not simply return a bounding box around the detected ear, but provides detailed, pixel-wise information about the location of the ears in the image. Our experiments on a dataset gathered from the web (a.k.a. in the wild) show that the proposed technique ensures good detection results in the presence of various covariate factors and significantly outperforms the existing state-of-the-art. |
Tasks | Object Detection |
Published | 2017-02-01 |
URL | http://arxiv.org/abs/1702.00307v2 |
http://arxiv.org/pdf/1702.00307v2.pdf | |
PWC | https://paperswithcode.com/paper/pixel-wise-ear-detection-with-convolutional |
Repo | |
Framework | |
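The post-processing step, thresholding the per-pixel ear probabilities and keeping sizeable connected components, might look like the following scipy sketch; the threshold and minimum area are illustrative values, not the paper's.

```python
import numpy as np
from scipy import ndimage

def postprocess_mask(prob_map, thresh=0.5, min_area=50):
    """Threshold the network's per-pixel ear probabilities, keep connected
    components above a minimum area, and return their pixel coordinates."""
    mask = prob_map > thresh
    labels, n = ndimage.label(mask)
    ears = []
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        if ys.size >= min_area:               # drop small spurious blobs
            ears.append(np.column_stack([ys, xs]))
    return ears

prob = np.random.rand(64, 64)                 # stand-in for network output
print(len(postprocess_mask(prob, thresh=0.95)))
```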
A Novel Approach for Effective Learning in Low Resourced Scenarios
Title | A Novel Approach for Effective Learning in Low Resourced Scenarios |
Authors | Sri Harsha Dumpala, Rupayan Chakraborty, Sunil Kumar Kopparapu |
Abstract | Deep learning based discriminative methods, being the state-of-the-art machine learning techniques, are ill-suited for learning from small amounts of data. In this paper, we propose a novel framework, called simultaneous two sample learning (s2sL), to effectively learn the class-discriminative characteristics even from very small amounts of data. In s2sL, more than one sample (here, two samples) is simultaneously considered to both train and test the classifier. We demonstrate our approach for speech/music discrimination and emotion classification through experiments. Further, we also show the effectiveness of the s2sL approach for classification in low-resource scenarios and for imbalanced data. |
Tasks | Emotion Classification |
Published | 2017-12-15 |
URL | http://arxiv.org/abs/1712.05608v1 |
http://arxiv.org/pdf/1712.05608v1.pdf | |
PWC | https://paperswithcode.com/paper/a-novel-approach-for-effective-learning-in |
Repo | |
Framework | |
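One plausible reading of "two samples considered simultaneously" is pair construction: concatenate two samples into one input with a joint label. The sketch below illustrates that reading only; the paper's exact pairing and decision rule may differ.

```python
import numpy as np
from itertools import product

def make_pairs(X, y):
    """Build all ordered pairs of distinct samples, each concatenated into
    one input vector with a joint two-sample label (hedged sketch)."""
    Xp, yp = [], []
    for i, j in product(range(len(X)), repeat=2):
        if i != j:
            Xp.append(np.concatenate([X[i], X[j]]))
            yp.append((y[i], y[j]))           # joint label for the pair
    return np.array(Xp), yp

X = np.random.rand(6, 4)
y = np.array([0, 1, 0, 1, 0, 1])
Xp, yp = make_pairs(X, y)
print(Xp.shape, yp[0])                         # (30, 8) (0, 1)
```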
Convolutional neural networks pretrained on large face recognition datasets for emotion classification from video
Title | Convolutional neural networks pretrained on large face recognition datasets for emotion classification from video |
Authors | Boris Knyazev, Roman Shvetsov, Natalia Efremova, Artem Kuharenko |
Abstract | In this paper we describe our entry to the emotion recognition challenge EmotiW 2017. We propose an ensemble of several models, which capture spatial and audio features from videos. Spatial features are captured by convolutional neural networks, pretrained on large face recognition datasets. We show that usage of strong industry-level face recognition networks increases the accuracy of emotion recognition. Using our ensemble we improve on the previous best result on the test set by about 1%, achieving a 60.03% classification accuracy without any use of visual temporal information. |
Tasks | Emotion Classification, Emotion Recognition, Face Recognition |
Published | 2017-11-13 |
URL | http://arxiv.org/abs/1711.04598v1 |
http://arxiv.org/pdf/1711.04598v1.pdf | |
PWC | https://paperswithcode.com/paper/convolutional-neural-networks-pretrained-on |
Repo | |
Framework | |
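The score-level ensemble the abstract describes reduces to weighted averaging of per-class probabilities across models; a minimal sketch follows, with made-up weights and scores rather than the challenge entry's tuned values.

```python
import numpy as np

def ensemble(scores, weights):
    """Weighted average of per-class probabilities from several models."""
    scores = np.stack(scores)                  # (n_models, n_classes)
    w = np.asarray(weights, dtype=float)[:, None]
    return (w * scores).sum(axis=0) / w.sum()

# e.g. a face-crop CNN plus an audio model over three emotion classes
cnn = np.array([0.1, 0.7, 0.2])
audio = np.array([0.3, 0.4, 0.3])
print(ensemble([cnn, audio], weights=[2.0, 1.0]).argmax())
```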
Tensor Contraction Layers for Parsimonious Deep Nets
Title | Tensor Contraction Layers for Parsimonious Deep Nets |
Authors | Jean Kossaifi, Aran Khanna, Zachary C. Lipton, Tommaso Furlanello, Anima Anandkumar |
Abstract | Tensors offer a natural representation for many kinds of data frequently encountered in machine learning. Images, for example, are naturally represented as third order tensors, where the modes correspond to height, width, and channels. Tensor methods are noted for their ability to discover multi-dimensional dependencies, and tensor decompositions, in particular, have been used to produce compact low-rank approximations of data. In this paper, we explore the use of tensor contractions as neural network layers and investigate several ways to apply them to activation tensors. Specifically, we propose the Tensor Contraction Layer (TCL), the first attempt to incorporate tensor contractions as end-to-end trainable neural network layers. Applied to existing networks, TCLs reduce the dimensionality of the activation tensors and thus the number of model parameters. We evaluate the TCL on the task of image recognition, augmenting two popular networks (AlexNet, VGG). The resulting models are trainable end-to-end. Applying the TCL to the task of image recognition, using the CIFAR100 and ImageNet datasets, we evaluate the effect of parameter reduction via tensor contraction on performance. We demonstrate significant model compression without significant impact on accuracy and, in some cases, improved performance. |
Tasks | Model Compression |
Published | 2017-06-01 |
URL | http://arxiv.org/abs/1706.00439v1 |
http://arxiv.org/pdf/1706.00439v1.pdf | |
PWC | https://paperswithcode.com/paper/tensor-contraction-layers-for-parsimonious |
Repo | |
Framework | |
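A Tensor Contraction Layer contracts each non-batch mode of an activation tensor with a small factor matrix, shrinking it to a low-dimensional core and cutting downstream parameters. The numpy sketch below shows the forward contraction only; in the paper the factor matrices are trained end-to-end with the rest of the network.

```python
import numpy as np

def tensor_contraction_layer(activation, factors):
    """Contract each non-batch mode of the activation tensor with a factor
    matrix, producing a smaller core tensor (forward pass only)."""
    out = activation
    for mode, W in enumerate(factors, start=1):   # skip the batch mode 0
        out = np.moveaxis(np.tensordot(W, out, axes=(1, mode)), 0, mode)
    return out

acts = np.random.rand(32, 14, 14, 256)            # batch x H x W x channels
factors = [np.random.rand(7, 14),                 # height: 14 -> 7
           np.random.rand(7, 14),                 # width:  14 -> 7
           np.random.rand(64, 256)]               # channels: 256 -> 64
print(tensor_contraction_layer(acts, factors).shape)   # (32, 7, 7, 64)
```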