Paper Group AWR 10
Automatically tracking neurons in a moving and deforming brain. RSSL: Semi-supervised Learning in R. Towards Real-Time, Country-Level Location Classification of Worldwide Tweets. Multiple target tracking based on sets of trajectories. ProjE: Embedding Projection for Knowledge Graph Completion. Theano-MPI: a Theano-based Distributed Training Framework. Smart Content Recognition from Images Using a Mixture of Convolutional Neural Networks. Multi-Agent Cooperation and the Emergence of (Natural) Language. Star-galaxy Classification Using Deep Convolutional Neural Networks. Model-Agnostic Interpretability of Machine Learning. Select-Additive Learning: Improving Generalization in Multimodal Sentiment Analysis. FastText.zip: Compressing text classification models. Seeing into Darkness: Scotopic Visual Recognition. Sync-DRAW: Automatic Video Generation using Deep Recurrent Attentive Architectures. Semantic Word Clusters Using Signed Normalized Graph Cuts.
Automatically tracking neurons in a moving and deforming brain
Title | Automatically tracking neurons in a moving and deforming brain |
Authors | Jeffrey P. Nguyen, Ashley N. Linder, George S. Plummer, Joshua W. Shaevitz, Andrew M. Leifer |
Abstract | Advances in optical neuroimaging techniques now allow neural activity to be recorded with cellular resolution in awake and behaving animals. Brain motion in these recordings poses a unique challenge. The locations of individual neurons must be tracked in 3D over time to accurately extract single neuron activity traces. Recordings from small invertebrates like C. elegans are especially challenging because they undergo very large brain motion and deformation during animal movement. Here we present an automated computer vision pipeline to reliably track populations of neurons with single neuron resolution in the brain of a freely moving C. elegans undergoing large motion and deformation. 3D volumetric fluorescent images of the animal’s brain are straightened, aligned and registered, and the locations of neurons in the images are found via segmentation. Each neuron is then assigned an identity using a new time-independent machine-learning approach we call Neuron Registration Vector Encoding. In this approach, non-rigid point-set registration is used to match each segmented neuron in each volume with a set of reference volumes taken from throughout the recording. The way each neuron matches with the references defines a feature vector which is clustered to assign an identity to each neuron in each volume. Finally, thin-plate spline interpolation is used to correct errors in segmentation and check consistency of assigned identities. The Neuron Registration Vector Encoding approach proposed here is uniquely well suited for tracking neurons in brains undergoing large deformations. When applied to whole-brain calcium imaging recordings in freely moving C. elegans, this analysis pipeline located 150 neurons for the duration of an 8-minute recording and consistently found more neurons more quickly than manual or semi-automated approaches. |
Tasks | |
Published | 2016-10-14 |
URL | http://arxiv.org/abs/1610.04579v1 |
http://arxiv.org/pdf/1610.04579v1.pdf | |
PWC | https://paperswithcode.com/paper/automatically-tracking-neurons-in-a-moving |
Repo | https://github.com/leiferlab/NeRVEclustering |
Framework | none |
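A minimal Python sketch of the Neuron Registration Vector Encoding idea described in the abstract: each segmented neuron is matched against several reference volumes, the matches form a feature vector, and the vectors are clustered into identities. Nearest-neighbour matching stands in for the paper's non-rigid point-set registration, and k-means for its clustering step; all names and shapes below are illustrative, not the repository's API.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.cluster.vq import kmeans2

def match_to_reference(points, reference):
    # Nearest-neighbour assignment as a stand-in for the paper's
    # non-rigid point-set registration step.
    return cdist(points, reference).argmin(axis=1)

def neuron_feature_vectors(points, references):
    # One row per segmented neuron: the index of the reference neuron
    # it matches in each of the K reference volumes.
    return np.stack([match_to_reference(points, r) for r in references], axis=1)

# Toy usage: 5 reference volumes and 50 time points, 10 neurons each.
rng = np.random.default_rng(0)
references = [rng.uniform(size=(10, 3)) for _ in range(5)]
volumes = [rng.uniform(size=(10, 3)) for _ in range(50)]

feats = np.vstack([neuron_feature_vectors(v, references) for v in volumes])
_, identities = kmeans2(feats.astype(float), k=10, seed=0)  # cluster = identity
```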
RSSL: Semi-supervised Learning in R
Title | RSSL: Semi-supervised Learning in R |
Authors | Jesse H. Krijthe |
Abstract | In this paper, we introduce a package for semi-supervised learning research in the R programming language called RSSL. We cover the purpose of the package, the methods it includes and comment on their use and implementation. We then show, using several code examples, how the package can be used to replicate well-known results from the semi-supervised learning literature. |
Tasks | |
Published | 2016-12-23 |
URL | http://arxiv.org/abs/1612.07993v1 |
http://arxiv.org/pdf/1612.07993v1.pdf | |
PWC | https://paperswithcode.com/paper/rssl-semi-supervised-learning-in-r |
Repo | https://github.com/jkrijthe/RSSL |
Framework | none |
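RSSL itself is an R package; as a language-neutral illustration of the kind of method it covers, here is a minimal self-training loop in Python with scikit-learn, one of the classic semi-supervised baselines. This sketches the general technique only, not RSSL's interface.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training(X_lab, y_lab, X_unl, threshold=0.95, max_rounds=10):
    # Fit on labelled data, then repeatedly adopt high-confidence
    # predictions on unlabelled points as pseudo-labels.
    clf = LogisticRegression()
    for _ in range(max_rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unl) == 0:
            break
        proba = clf.predict_proba(X_unl)
        keep = proba.max(axis=1) >= threshold
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unl[keep]])
        y_lab = np.concatenate([y_lab, clf.classes_[proba[keep].argmax(axis=1)]])
        X_unl = X_unl[~keep]
    return clf

# Toy usage: 10 labelled and 200 unlabelled points from two Gaussians.
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(-2, 1, (5, 2)), rng.normal(2, 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
clf = self_training(X_lab, y_lab, X_unl)
```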
Towards Real-Time, Country-Level Location Classification of Worldwide Tweets
Title | Towards Real-Time, Country-Level Location Classification of Worldwide Tweets |
Authors | Arkaitz Zubiaga, Alex Voss, Rob Procter, Maria Liakata, Bo Wang, Adam Tsakalidis |
Abstract | In contrast to much previous work that has focused on location classification of tweets restricted to a specific country, here we undertake the task in a broader context by classifying global tweets at the country level, which is so far unexplored in a real-time scenario. We analyse the extent to which a tweet’s country of origin can be determined by making use of eight tweet-inherent features for classification. Furthermore, we use two datasets, collected a year apart from each other, to analyse the extent to which a model trained from historical tweets can still be leveraged for classification of new tweets. With classification experiments on all 217 countries in our datasets, as well as on the top 25 countries, we offer some insights into the best use of tweet-inherent features for an accurate country-level classification of tweets. We find that the use of a single feature, such as the use of tweet content alone – the most widely used feature in previous work – leaves much to be desired. Choosing an appropriate combination of both tweet content and metadata can actually lead to substantial improvements of between 20% and 50%. We observe that tweet content, the user’s self-reported location and the user’s real name, all of which are inherent in a tweet and available in a real-time scenario, are particularly useful to determine the country of origin. We also experiment on the applicability of a model trained on historical tweets to classify new tweets, finding that the choice of a particular combination of features whose utility does not fade over time can actually lead to comparable performance, avoiding the need to retrain. However, the difficulty of achieving accurate classification increases slightly for countries with multiple commonalities, especially for English- and Spanish-speaking countries. |
Tasks | |
Published | 2016-04-25 |
URL | http://arxiv.org/abs/1604.07236v3 |
http://arxiv.org/pdf/1604.07236v3.pdf | |
PWC | https://paperswithcode.com/paper/towards-real-time-country-level-location |
Repo | https://github.com/MALHARULHAS/A-Country_level-location-classification-system-for-twitter-tweets-from-the-whole-world |
Framework | none |
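A hedged scikit-learn sketch of the feature-combination idea the abstract describes: vectorize the tweet text alongside metadata fields and feed the combined features to a linear classifier. The column names and the choice of logistic regression are assumptions of this sketch; the paper evaluates eight tweet-inherent features and its own classifier setup.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical columns standing in for the tweet-inherent features.
tweets = pd.DataFrame({
    "content":       ["Vamos a la playa", "Lovely day in London"],
    "user_location": ["Madrid", "London, UK"],
    "user_name":     ["maria", "alice"],
    "country":       ["ES", "GB"],
})

features = ColumnTransformer([
    ("content", TfidfVectorizer(), "content"),        # tweet text
    ("loc",     TfidfVectorizer(), "user_location"),  # self-reported location
    ("name",    TfidfVectorizer(), "user_name"),      # user's real name
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(tweets, tweets["country"])
print(model.predict(tweets))
```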
Multiple target tracking based on sets of trajectories
Title | Multiple target tracking based on sets of trajectories |
Authors | Ángel F. García-Fernández, Lennart Svensson, Mark R. Morelande |
Abstract | We propose a solution of the multiple target tracking (MTT) problem based on sets of trajectories and the random finite set framework. A full Bayesian approach to MTT should characterise the distribution of the trajectories given the measurements, as it contains all information about the trajectories. We attain this by considering multi-object density functions in which objects are trajectories. For the standard tracking models, we also describe a conjugate family of multitrajectory density functions. |
Tasks | |
Published | 2016-05-26 |
URL | https://arxiv.org/abs/1605.08163v5 |
https://arxiv.org/pdf/1605.08163v5.pdf | |
PWC | https://paperswithcode.com/paper/multiple-target-tracking-based-on-sets-of |
Repo | https://github.com/Agarciafernandez/MTT |
Framework | none |
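As a hedged sketch of the central object (notation adapted from the trajectory-RFS literature; the paper's own symbols may differ): a single trajectory pairs a birth time with a state sequence, and the full Bayesian solution is a multi-object density over sets of such trajectories conditioned on all measurements.

```latex
% A single trajectory: a birth time t paired with a state sequence of
% length \nu
X = \bigl(t,\, x^{1:\nu}\bigr), \qquad x^{1:\nu} = (x^{1}, \dots, x^{\nu})

% Full Bayesian MTT: the posterior is a multi-object density over the
% set of trajectories \mathbf{X} given all measurements z_{1:k}
\pi(\mathbf{X} \mid z_{1:k}) \propto \ell(z_{1:k} \mid \mathbf{X}) \, \pi(\mathbf{X})
```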
ProjE: Embedding Projection for Knowledge Graph Completion
Title | ProjE: Embedding Projection for Knowledge Graph Completion |
Authors | Baoxu Shi, Tim Weninger |
Abstract | With the large volume of new information created every day, determining the validity of information in a knowledge graph and filling in its missing parts are crucial tasks for many researchers and practitioners. To address this challenge, a number of knowledge graph completion methods have been developed using low-dimensional graph embeddings. Although researchers continue to improve these models using an increasingly complex feature space, we show that simple changes in the architecture of the underlying model can outperform state-of-the-art models without the need for complex feature engineering. In this work, we present a shared variable neural network model called ProjE that fills in missing information in a knowledge graph by learning joint embeddings of the knowledge graph’s entities and edges, and through subtle, but important, changes to the standard loss function. In doing so, ProjE has a parameter size that is smaller than 11 out of 15 existing methods while performing 37% better than the current-best method on standard datasets. We also show, via a new fact checking task, that ProjE is capable of accurately determining the veracity of many declarative statements. |
Tasks | Feature Engineering, Knowledge Graph Completion |
Published | 2016-11-16 |
URL | http://arxiv.org/abs/1611.05425v1 |
http://arxiv.org/pdf/1611.05425v1.pdf | |
PWC | https://paperswithcode.com/paper/proje-embedding-projection-for-knowledge |
Repo | https://github.com/Sujit-O/pykg2vec |
Framework | tf |
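A hedged NumPy sketch of the embedding-projection scoring the abstract alludes to: combine the head-entity and relation embeddings with diagonal weights and a bias, apply a nonlinearity, then project onto every candidate tail embedding. The combination form follows what the paper reports, but dimensions, variable names, and initialization here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, k = 100, 10, 16
E  = rng.normal(size=(n_ent, k))   # entity embeddings (also the candidates)
R  = rng.normal(size=(n_rel, k))   # relation embeddings
de = rng.normal(size=k)            # diagonal combination weights, entity side
dr = rng.normal(size=k)            # diagonal combination weights, relation side
bc = rng.normal(size=k)            # combination bias
bp = 0.0                           # projection bias

def proje_scores(head_id, rel_id):
    # Combine head and relation, then project onto every candidate tail
    # embedding; a higher score means a more plausible completion.
    combined = np.tanh(de * E[head_id] + dr * R[rel_id] + bc)
    return 1.0 / (1.0 + np.exp(-(E @ combined + bp)))

scores = proje_scores(head_id=3, rel_id=1)
print(np.argsort(-scores)[:5])     # top-5 candidate tail entities
```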
Theano-MPI: a Theano-based Distributed Training Framework
Title | Theano-MPI: a Theano-based Distributed Training Framework |
Authors | He Ma, Fei Mao, Graham W. Taylor |
Abstract | We develop a scalable and extendable training framework that can utilize GPUs across nodes in a cluster and accelerate the training of deep learning models based on data parallelism. Both synchronous and asynchronous training are implemented in our framework, where parameter exchange among GPUs is based on CUDA-aware MPI. In this report, we analyze the convergence and capability of the framework to reduce training time when scaling the synchronous training of AlexNet and GoogLeNet from 2 GPUs to 8 GPUs. In addition, we explore novel ways to reduce the communication overhead caused by exchanging parameters. Finally, we release the framework as open-source for further research on distributed deep learning. |
Tasks | |
Published | 2016-05-26 |
URL | http://arxiv.org/abs/1605.08325v1 |
http://arxiv.org/pdf/1605.08325v1.pdf | |
PWC | https://paperswithcode.com/paper/theano-mpi-a-theano-based-distributed |
Repo | https://github.com/uoguelph-mlrg/Theano-MPI |
Framework | none |
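A minimal mpi4py sketch of the synchronous parameter-exchange step: every worker computes a local gradient, and an allreduce averages it across ranks before the shared update. This illustrates the data-parallel pattern, not Theano-MPI's actual interface; the script name in the comment is a placeholder.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def allreduce_average(grad):
    # Sum this worker's gradient with every other worker's, then divide;
    # with CUDA-aware MPI these buffers could live in GPU memory.
    avg = np.empty_like(grad)
    comm.Allreduce(grad, avg, op=MPI.SUM)
    return avg / comm.Get_size()

# Toy usage (run with: mpiexec -n 4 python sync_step.py): each rank
# computes a local "gradient"; all ranks end with the same average.
local_grad = np.full(3, float(comm.Get_rank()))
print(comm.Get_rank(), allreduce_average(local_grad))
```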
Smart Content Recognition from Images Using a Mixture of Convolutional Neural Networks
Title | Smart Content Recognition from Images Using a Mixture of Convolutional Neural Networks |
Authors | Tee Connie, Mundher Al-Shabi, Michael Goh |
Abstract | With the rapid development of the Internet, the volume of web content has grown enormously. Most websites are publicly available, and anyone can access their content from anywhere, whether at the workplace, at home, or even at school. Nevertheless, not all web content is appropriate for all users, especially children. One example is pornographic images, which should be restricted to certain age groups. Moreover, such images are not safe for work (NSFW), meaning employees should not be seen accessing them during work. Recently, convolutional neural networks have been successfully applied to many computer vision problems. Inspired by these successes, we propose a mixture of convolutional neural networks for adult content recognition. Unlike other works, our method is formulated as a weighted sum of multiple deep neural network models. The weight of each CNN model is obtained by solving a linear regression problem using Ordinary Least Squares (OLS). Experimental results demonstrate that the proposed model outperforms both a single CNN model and the unweighted average of CNN models in adult content recognition. |
Tasks | |
Published | 2016-12-30 |
URL | http://arxiv.org/abs/1612.09506v2 |
http://arxiv.org/pdf/1612.09506v2.pdf | |
PWC | https://paperswithcode.com/paper/smart-content-recognition-from-images-using-a |
Repo | https://github.com/mundher/NSFW |
Framework | none |
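The fusion rule is simple enough to sketch: stack each member CNN's predicted probabilities and solve an ordinary least squares problem for the mixture weights. The helper names below are mine, and the member predictions are stand-ins for real CNN outputs.

```python
import numpy as np

def ols_mixture_weights(member_probs, y):
    # member_probs: list of (n_samples,) probability vectors, one per CNN;
    # y: (n_samples,) binary labels. Solve min_w ||P w - y||^2 for w.
    P = np.stack(member_probs, axis=1)              # (n_samples, n_models)
    w, *_ = np.linalg.lstsq(P, y, rcond=None)
    return w

def mixture_predict(member_probs, w):
    return np.stack(member_probs, axis=1) @ w       # weighted sum of outputs

# Toy usage with three stand-in "CNN" outputs on a validation set.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)
probs = [np.clip(y + rng.normal(scale=s, size=200), 0, 1) for s in (0.3, 0.4, 0.5)]
w = ols_mixture_weights(probs, y)
fused = mixture_predict(probs, w)
```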
Multi-Agent Cooperation and the Emergence of (Natural) Language
Title | Multi-Agent Cooperation and the Emergence of (Natural) Language |
Authors | Angeliki Lazaridou, Alexander Peysakhovich, Marco Baroni |
Abstract | The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message from a fixed, arbitrary vocabulary to the receiver. The receiver must rely on this message to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore how to make changes to the game environment to cause the “word meanings” induced in the game to better reflect intuitive semantic properties of the images. In addition, we present a simple strategy for grounding the agents’ code into natural language. Both of these are necessary steps towards developing machines that are able to communicate with humans productively. |
Tasks | |
Published | 2016-12-21 |
URL | http://arxiv.org/abs/1612.07182v2 |
http://arxiv.org/pdf/1612.07182v2.pdf | |
PWC | https://paperswithcode.com/paper/multi-agent-cooperation-and-the-emergence-of |
Repo | https://github.com/pranavmodi/language-learning |
Framework | tf |
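A skeletal version of the referential-game protocol, with random agents just to make the interaction loop concrete; the paper's agents are neural networks trained from the reward this loop produces. The vocabulary size and all names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 10  # fixed, arbitrary vocabulary size

def play_round(sender, receiver, images):
    # The sender sees both images and which one is the target, and emits
    # one symbol; the receiver sees the images and the symbol and points.
    target = rng.integers(2)
    symbol = sender(images, target)        # int in [0, VOCAB)
    guess = receiver(images, symbol)
    return int(guess == target)            # shared reward for both agents

# Random placeholder agents, just to exercise the protocol.
random_sender = lambda imgs, target: rng.integers(VOCAB)
random_receiver = lambda imgs, symbol: rng.integers(2)
images = rng.normal(size=(2, 64))          # two image feature vectors
print(play_round(random_sender, random_receiver, images))
```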
Star-galaxy Classification Using Deep Convolutional Neural Networks
Title | Star-galaxy Classification Using Deep Convolutional Neural Networks |
Authors | Edward J. Kim, Robert J. Brunner |
Abstract | Most existing star-galaxy classifiers use the reduced summary information from catalogs, requiring careful feature extraction and selection. The latest advances in machine learning that use deep convolutional neural networks allow a machine to automatically learn the features directly from data, minimizing the need for input from human experts. We present a star-galaxy classification framework that uses deep convolutional neural networks (ConvNets) directly on the reduced, calibrated pixel values. Using data from the Sloan Digital Sky Survey (SDSS) and the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), we demonstrate that ConvNets are able to produce accurate and well-calibrated probabilistic classifications that are competitive with conventional machine learning techniques. Future advances in deep learning may bring more success with current and forthcoming photometric surveys, such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST), because deep neural networks require very little manual feature engineering. |
Tasks | Feature Engineering |
Published | 2016-08-15 |
URL | http://arxiv.org/abs/1608.04369v2 |
http://arxiv.org/pdf/1608.04369v2.pdf | |
PWC | https://paperswithcode.com/paper/star-galaxy-classification-using-deep |
Repo | https://github.com/EdwardJKim/dl4astro |
Framework | none |
Model-Agnostic Interpretability of Machine Learning
Title | Model-Agnostic Interpretability of Machine Learning |
Authors | Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin |
Abstract | Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, in feature engineering, in deciding whether to trust and act upon predictions, and in designing more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. In some applications, such models are as accurate as non-interpretable ones, and thus are preferred for their transparency. Even when they are not accurate, they may still be preferred when interpretability is of paramount importance. However, restricting machine learning to interpretable models is often a severe limitation. In this paper we argue for explaining machine learning predictions using model-agnostic approaches. By treating the machine learning models as black-box functions, these approaches provide crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models. We also outline the main challenges for such methods, and review a recently-introduced model-agnostic explanation approach (LIME) that addresses these challenges. |
Tasks | Feature Engineering, Model Selection |
Published | 2016-06-16 |
URL | http://arxiv.org/abs/1606.05386v1 |
http://arxiv.org/pdf/1606.05386v1.pdf | |
PWC | https://paperswithcode.com/paper/model-agnostic-interpretability-of-machine |
Repo | https://github.com/maburd/Website-Attribution-and-LIME |
Framework | none |
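A compact sketch of the model-agnostic recipe LIME follows: perturb the input, query the black box, and fit a proximity-weighted linear surrogate whose coefficients explain the prediction locally. This is the idea only, assuming tabular inputs and Gaussian perturbations; it is not the lime package's API.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(black_box, x, n_samples=500, kernel_width=0.75):
    # Perturb x, query the black box, and fit a proximity-weighted linear
    # surrogate; its coefficients explain this one prediction locally.
    rng = np.random.default_rng(0)
    Z = x + rng.normal(scale=0.1, size=(n_samples, x.size))
    y = black_box(Z)
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_                 # per-feature local importance

# Toy black box: a nonlinear function of two features.
f = lambda Z: np.sin(Z[:, 0]) + Z[:, 1] ** 2
print(lime_explain(f, np.array([0.5, 1.0])))
```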
Select-Additive Learning: Improving Generalization in Multimodal Sentiment Analysis
Title | Select-Additive Learning: Improving Generalization in Multimodal Sentiment Analysis |
Authors | Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, Eric P. Xing |
Abstract | Multimodal sentiment analysis is drawing an increasing amount of attention these days. It enables mining of opinions in video reviews which are now available aplenty on online platforms. However, multimodal sentiment analysis has only a few high-quality data sets annotated for training machine learning algorithms. These limited resources restrict the generalizability of models, where, for example, the unique characteristics of a few speakers (e.g., wearing glasses) may become a confounding factor for the sentiment classification task. In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of trained neural networks for multimodal sentiment analysis. In our experiments, we show that our SAL approach improves prediction accuracy significantly in all three modalities (verbal, acoustic, visual), as well as in their fusion. Our results show that SAL, even when trained on one dataset, achieves good generalization across two new test datasets. |
Tasks | Multimodal Sentiment Analysis, Sentiment Analysis |
Published | 2016-09-16 |
URL | http://arxiv.org/abs/1609.05244v2 |
http://arxiv.org/pdf/1609.05244v2.pdf | |
PWC | https://paperswithcode.com/paper/select-additive-learning-improving |
Repo | https://github.com/HaohanWang/SelectAdditiveLearning |
Framework | none |
FastText.zip: Compressing text classification models
Title | FastText.zip: Compressing text classification models |
Authors | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, Tomas Mikolov |
Abstract | We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent quantization artefacts. Our experiments carried out on several benchmarks show that our approach typically requires two orders of magnitude less memory than fastText while being only slightly inferior with respect to accuracy. As a result, it outperforms the state of the art by a good margin in terms of the compromise between memory usage and accuracy. |
Tasks | Quantization, Text Classification, Word Embeddings |
Published | 2016-12-12 |
URL | http://arxiv.org/abs/1612.03651v1 |
http://arxiv.org/pdf/1612.03651v1.pdf | |
PWC | https://paperswithcode.com/paper/fasttextzip-compressing-text-classification |
Repo | https://github.com/romik9999/fasttext-1925f09ed3 |
Framework | none |
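The core compression step, product quantization of the embedding matrix, can be sketched in a few lines: split each vector into sub-vectors, learn one small codebook per sub-space with k-means, and keep only one-byte codes. This is generic PQ under my own parameter choices; the paper adds adaptations to limit the accuracy loss.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def pq_train(emb, n_sub=4, n_centroids=16):
    # Split each vector into n_sub sub-vectors, learn one small codebook
    # per sub-space, and keep only one-byte codes plus the codebooks.
    d = emb.shape[1] // n_sub
    books, codes = [], []
    for s in range(n_sub):
        book, code = kmeans2(emb[:, s * d:(s + 1) * d], n_centroids, seed=0)
        books.append(book)
        codes.append(code.astype(np.uint8))
    return books, np.stack(codes, axis=1)

def pq_decode(books, codes):
    return np.hstack([books[s][codes[:, s]] for s in range(len(books))])

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 32)).astype(np.float32)   # toy word embeddings
books, codes = pq_train(emb)
approx = pq_decode(books, codes)   # 128 bytes/word down to 4 bytes (+ codebooks)
```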
Seeing into Darkness: Scotopic Visual Recognition
Title | Seeing into Darkness: Scotopic Visual Recognition |
Authors | Bo Chen, Pietro Perona |
Abstract | Images are formed by counting how many photons traveling from a given set of directions hit an image sensor during a given time interval. When photons are few and far between, the concept of an 'image' breaks down and it is best to consider directly the flow of photons. Computer vision in this regime, which we call 'scotopic', is radically different from the classical image-based paradigm in that visual computations (classification, control, search) have to take place while the stream of photons is captured and decisions may be taken as soon as enough information is available. The scotopic regime is important for biomedical imaging, security, astronomy and many other fields. Here we develop a framework that allows a machine to classify objects with as few photons as possible, while maintaining the error rate below an acceptable threshold. A dynamic and asymptotically optimal speed-accuracy tradeoff is a key feature of this framework. We propose and study an algorithm to optimize the tradeoff of a convolutional network directly from low-light images and evaluate on simulated images from standard datasets. Surprisingly, scotopic systems can achieve classification performance comparable to that of traditional vision systems while using less than 0.1% of the photons in a conventional image. In addition, we demonstrate that our algorithms work even when the illuminance of the environment is unknown and varying. Last, we outline a spiking neural network coupled with photon-counting sensors as a power-efficient hardware realization of scotopic algorithms. |
Tasks | |
Published | 2016-10-03 |
URL | http://arxiv.org/abs/1610.00405v1 |
http://arxiv.org/pdf/1610.00405v1.pdf | |
PWC | https://paperswithcode.com/paper/seeing-into-darkness-scotopic-visual |
Repo | https://github.com/bochencaltech/scotopic |
Framework | none |
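A hedged sketch of the sequential decision pattern the abstract describes: evidence accumulates photon by photon and the classifier answers as soon as a confidence threshold is crossed (a sequential probability ratio test skeleton). The paper learns the per-photon evidence with a convolutional network; here a toy log-likelihood ratio stands in.

```python
import numpy as np

def sequential_classify(photon_stream, log_lik_ratio, threshold=5.0):
    # Accumulate per-photon log-likelihood ratios (class A vs class B)
    # and answer as soon as the evidence crosses the threshold.
    evidence = 0.0
    n = 0
    for n, photon in enumerate(photon_stream, start=1):
        evidence += log_lik_ratio(photon)
        if abs(evidence) >= threshold:
            break
    return ("A" if evidence > 0 else "B"), n   # decision and photons used

# Toy usage: photons carry noisy evidence with a slight class-A bias.
rng = np.random.default_rng(0)
stream = rng.normal(loc=0.2, scale=1.0, size=10_000)
label, photons_used = sequential_classify(stream, log_lik_ratio=lambda p: p)
print(label, photons_used)
```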
Sync-DRAW: Automatic Video Generation using Deep Recurrent Attentive Architectures
Title | Sync-DRAW: Automatic Video Generation using Deep Recurrent Attentive Architectures |
Authors | Gaurav Mittal, Tanya Marwah, Vineeth N. Balasubramanian |
Abstract | This paper introduces a novel approach for generating videos called Synchronized Deep Recurrent Attentive Writer (Sync-DRAW). Sync-DRAW can also perform text-to-video generation which, to the best of our knowledge, makes it the first approach of its kind. It combines a Variational Autoencoder (VAE) with a Recurrent Attention Mechanism in a novel manner to create a temporally dependent sequence of frames that are gradually formed over time. The recurrent attention mechanism in Sync-DRAW attends to each individual frame of the video in synchronization, while the VAE learns a latent distribution for the entire video at the global level. Our experiments with Bouncing MNIST, KTH and UCF-101 suggest that Sync-DRAW is efficient in learning the spatial and temporal information of the videos and generates frames with high structural integrity, and can generate videos from simple captions on these datasets. (Accepted as oral paper in ACM-Multimedia 2017) |
Tasks | Video Generation |
Published | 2016-11-30 |
URL | http://arxiv.org/abs/1611.10314v4 |
http://arxiv.org/pdf/1611.10314v4.pdf | |
PWC | https://paperswithcode.com/paper/sync-draw-automatic-video-generation-using |
Repo | https://github.com/Singularity42/Sync-DRAW |
Framework | tf |
Semantic Word Clusters Using Signed Normalized Graph Cuts
Title | Semantic Word Clusters Using Signed Normalized Graph Cuts |
Authors | João Sedoc, Jean Gallier, Lyle Ungar, Dean Foster |
Abstract | Vector space representations of words capture many aspects of word similarity, but such methods tend to make vector spaces in which antonyms (as well as synonyms) are close to each other. We present a new signed spectral normalized graph cut algorithm, signed clustering, that overlays existing thesauri upon distributionally derived vector representations of words, so that antonym relationships between word pairs are represented by negative weights. Our signed clustering algorithm produces clusters of words which simultaneously capture distributional and synonym relations. We evaluate these clusters against the SimLex-999 dataset (Hill et al., 2014) of human judgments of word pair similarities, and also show the benefit of using our clusters to predict the sentiment of a given text. |
Tasks | |
Published | 2016-01-20 |
URL | http://arxiv.org/abs/1601.05403v1 |
http://arxiv.org/pdf/1601.05403v1.pdf | |
PWC | https://paperswithcode.com/paper/semantic-word-clusters-using-signed |
Repo | https://github.com/jsedoc/SignedSpectralClustering |
Framework | none |
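A minimal sketch of the signed normalized spectral step: build the signed normalized Laplacian using absolute degrees, take its bottom eigenvectors, and cluster them. The thesaurus overlay and the construction of antonym weights from the paper are represented here only by a toy signed affinity matrix.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def signed_spectral_clusters(W, k):
    # W: symmetric signed affinity matrix (negative weights ~ antonyms).
    # The signed normalized Laplacian uses absolute degrees to stay PSD.
    d = np.abs(W).sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - Dinv @ W @ Dinv
    _, vecs = np.linalg.eigh(L)
    _, labels = kmeans2(vecs[:, :k], k, seed=0)  # bottom-k eigenvectors
    return labels

# Toy usage: two synonym groups joined only by antonym (negative) edges.
W = np.array([[ 0.,  1., -1., -1.],
              [ 1.,  0., -1., -1.],
              [-1., -1.,  0.,  1.],
              [-1., -1.,  1.,  0.]])
print(signed_spectral_clusters(W, k=2))
```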