Paper Group ANR 8
Multi-task multiple kernel machines for personalized pain recognition from functional near-infrared spectroscopy brain signals
Title | Multi-task multiple kernel machines for personalized pain recognition from functional near-infrared spectroscopy brain signals |
Authors | Daniel Lopez-Martinez, Ke Peng, Sarah C. Steele, Arielle J. Lee, David Borsook, Rosalind Picard |
Abstract | Currently there is no validated objective measure of pain. Recent neuroimaging studies have explored the feasibility of using functional near-infrared spectroscopy (fNIRS) to measure alterations in brain function in evoked and ongoing pain. In this study, we applied multi-task machine learning methods to derive a practical algorithm for pain detection from fNIRS signals in healthy volunteers exposed to a painful stimulus. In particular, we employed multi-task multiple kernel learning to account for the inter-subject variability in pain response. Our results support the use of fNIRS and machine learning techniques in developing objective pain detection methods, and also highlight the importance of adopting personalized analysis in the process. |
Tasks | |
Published | 2018-08-21 |
URL | http://arxiv.org/abs/1808.06774v1 |
PDF | http://arxiv.org/pdf/1808.06774v1.pdf |
PWC | https://paperswithcode.com/paper/multi-task-multiple-kernel-machines-for |
Repo | |
Framework | |
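The entry above describes multi-task multiple kernel learning only at a high level. As a hedged illustration, the sketch below combines several precomputed RBF kernels into a weighted sum and fits one SVM per subject on the combined kernel with scikit-learn; the bandwidths, kernel weights, and random stand-in data are assumptions for illustration, not the authors' setup.

```python
# Minimal multiple-kernel sketch: a convex combination of RBF kernels,
# with one SVM per subject ("task") trained on the shared combined kernel.
# All data, bandwidths, and weights below are illustrative placeholders.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_samples, n_features = 3, 60, 8

def combined_kernel(Xa, Xb, gammas=(0.1, 1.0, 10.0), weights=(0.5, 0.3, 0.2)):
    """Weighted sum of RBF kernels; a convex combination of kernels is itself a valid kernel."""
    return sum(w * rbf_kernel(Xa, Xb, gamma=g) for g, w in zip(gammas, weights))

models = []
for s in range(n_subjects):
    X = rng.normal(size=(n_samples, n_features))   # stand-in for fNIRS features
    y = rng.integers(0, 2, size=n_samples)         # stand-in pain / no-pain labels
    K = combined_kernel(X, X)
    clf = SVC(kernel="precomputed").fit(K, y)      # per-subject (personalized) classifier
    models.append((clf, X))

# Prediction for a new sample of subject 0 uses the same combined kernel.
x_new = rng.normal(size=(1, n_features))
clf0, X0 = models[0]
print(clf0.predict(combined_kernel(x_new, X0)))
```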
Learning to Communicate Implicitly By Actions
Title | Learning to Communicate Implicitly By Actions |
Authors | Zheng Tian, Shihao Zou, Ian Davies, Tim Warr, Lisheng Wu, Haitham Bou Ammar, Jun Wang |
Abstract | In situations where explicit communication is limited, human collaborators act by learning to: (i) infer meaning behind their partner’s actions, and (ii) convey private information about the state to their partner implicitly through actions. The first component of this learning process has been well-studied in multi-agent systems, whereas the second, which is equally crucial for successful collaboration, has not. To mimic both components mentioned above, thereby completing the learning process, we introduce a novel algorithm: Policy Belief Learning (PBL). PBL uses a belief module to model the other agent’s private information and a policy module to form a distribution over actions informed by the belief module. Furthermore, to encourage communication by actions, we propose a novel auxiliary reward which incentivizes one agent to help its partner make correct inferences about its private information. The auxiliary reward for communication is integrated into the learning of the policy module. We evaluate our approach on a set of environments including a matrix game, a particle environment and the non-competitive bidding problem from contract bridge. We show empirically that this auxiliary reward is effective and easy to generalize. These results demonstrate that our PBL algorithm can produce strong pairs of agents in collaborative games where explicit communication is disabled. |
Tasks | |
Published | 2018-10-10 |
URL | https://arxiv.org/abs/1810.04444v4 |
PDF | https://arxiv.org/pdf/1810.04444v4.pdf |
PWC | https://paperswithcode.com/paper/learning-to-communicate-implicitly-by-actions |
Repo | |
Framework | |
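To make the two modules in the abstract above concrete, here is a hedged PyTorch sketch: a belief network that infers the partner's private state from an observed action, a policy conditioned on one's own private state plus that belief, and a simple log-probability auxiliary reward for informative actions. The layer sizes, toy tensors, and the exact form of the auxiliary reward are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of the components described above. Agent A observes agent B's
# action, forms a belief over B's private state, and acts on it; the auxiliary
# reward for B is the log-probability A's belief assigns to B's true private state.
import torch
import torch.nn as nn

N_PRIVATE, N_ACTIONS, HIDDEN = 4, 5, 32   # toy sizes, not from the paper

class BeliefModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_ACTIONS, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, N_PRIVATE))
    def forward(self, partner_action_onehot):
        # Belief distribution over the partner's private states.
        return torch.softmax(self.net(partner_action_onehot), dim=-1)

class PolicyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_PRIVATE + N_PRIVATE, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, N_ACTIONS))
    def forward(self, own_private_onehot, belief_over_partner):
        logits = self.net(torch.cat([own_private_onehot, belief_over_partner], dim=-1))
        return torch.distributions.Categorical(logits=logits)

belief, policy = BeliefModule(), PolicyModule()
batch = 8
own_private = torch.eye(N_PRIVATE)[torch.randint(0, N_PRIVATE, (batch,))]
partner_action = torch.eye(N_ACTIONS)[torch.randint(0, N_ACTIONS, (batch,))]
partner_true_private = torch.randint(0, N_PRIVATE, (batch,))

b = belief(partner_action)                 # A's belief about B's private state
action = policy(own_private, b).sample()   # A's action informed by that belief

# Auxiliary "communication" reward for B: probability mass A's belief puts on
# B's true private state after observing B's action.
aux_reward = torch.log(b[torch.arange(batch), partner_true_private] + 1e-8)
print(action.shape, aux_reward.shape)
```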
EIGEN: Ecologically-Inspired GENetic Approach for Neural Network Structure Searching from Scratch
Title | EIGEN: Ecologically-Inspired GENetic Approach for Neural Network Structure Searching from Scratch |
Authors | Jian Ren, Zhe Li, Jianchao Yang, Ning Xu, Tianbao Yang, David J. Foran |
Abstract | Designing the structure of neural networks is considered one of the most challenging tasks in deep learning, especially when there is little prior knowledge about the task domain. In this paper, we propose an Ecologically-Inspired GENetic (EIGEN) approach that uses the concepts of succession, extinction, mimicry, and gene duplication to search for neural network structures from scratch, starting from a poorly initialized simple network and with few constraints imposed during the evolution, as we assume no prior knowledge about the task domain. Specifically, we first use primary succession to rapidly evolve a population of poorly initialized neural network structures into a more diverse population, followed by a secondary succession stage for fine-grained searching based on the networks from the primary succession. Extinction is applied in both stages to reduce computational cost. Mimicry is employed during the entire evolution process to help the inferior networks imitate the behavior of a superior network, and gene duplication is utilized to duplicate the learned blocks of novel structures, both of which help to find better network structures. Experimental results show that our proposed approach can achieve similar or better performance compared to the existing genetic approaches with dramatically reduced computation cost. For example, the network discovered by our approach on the CIFAR-100 dataset achieves 78.1% test accuracy within 120 GPU hours, compared to 77.0% test accuracy in more than 65,536 GPU hours in [35]. |
Tasks | |
Published | 2018-06-05 |
URL | http://arxiv.org/abs/1806.01940v3 |
PDF | http://arxiv.org/pdf/1806.01940v3.pdf |
PWC | https://paperswithcode.com/paper/eigen-ecologically-inspired-genetic-approach |
Repo | |
Framework | |
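The abstract above sketches an evolutionary loop (succession, extinction, mimicry, gene duplication) without implementation detail. Below is a heavily simplified, generic genetic-search skeleton that only illustrates the succession/extinction control flow over toy "genomes"; the fitness proxy, mutation operator, and all hyperparameters are placeholders, and mimicry and gene duplication are reduced to comments rather than implemented.

```python
# Generic genetic-search skeleton loosely mirroring the loop described above.
# Genomes, the toy fitness, mutation rates and population sizes are placeholders;
# real fitness evaluation would train and validate an actual network.
import random

GENOME_LEN = 16

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    return sum(genome)  # toy proxy; stands in for validation accuracy

def mutate(genome, rate):
    return [(1 - g) if random.random() < rate else g for g in genome]

def evolve(pop_size=20, primary_gens=10, secondary_gens=10):
    population = [random_genome() for _ in range(pop_size)]     # poorly initialized start
    for gen in range(primary_gens + secondary_gens):
        # Primary succession explores aggressively; secondary succession fine-tunes.
        rate = 0.3 if gen < primary_gens else 0.05
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: pop_size // 2]                     # "extinction" of the worst half
        # Mimicry / gene duplication would act here: copy traits of the best genome
        # into inferior ones and duplicate useful sub-blocks; omitted in this sketch.
        offspring = [mutate(random.choice(survivors), rate)
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)
```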
Online Multi-Label Classification: A Label Compression Method
Title | Online Multi-Label Classification: A Label Compression Method |
Authors | Zahra Ahmadi, Stefan Kramer |
Abstract | Many modern applications deal with multi-label data, such as functional categorizations of genes, image labeling and text categorization. Classification of such data with a large number of labels and latent dependencies among them is a challenging task, and it becomes even more challenging when the data is received online and in chunks. Many of the current multi-label classification methods require a lot of time and memory, which makes them infeasible for practical real-world applications. In this paper, we propose a fast linear label space dimension reduction method that transforms the labels into a reduced encoded space and trains models on the obtained pseudo labels. Additionally, it provides an analytical method to update the decoding matrix, which maps the labels back into the original space and is used during the test phase. Experimental results show the effectiveness of this approach in terms of running time and prediction performance across different measures. |
Tasks | Dimensionality Reduction, Multi-Label Classification, Text Categorization |
Published | 2018-04-04 |
URL | http://arxiv.org/abs/1804.01491v1 |
PDF | http://arxiv.org/pdf/1804.01491v1.pdf |
PWC | https://paperswithcode.com/paper/online-multi-label-classification-a-label |
Repo | |
Framework | |
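To make the label-compression idea above concrete, here is a minimal NumPy sketch: labels are projected into a low-dimensional code with a random matrix, a linear model maps features to the code, and a least-squares decoding matrix maps predicted codes back to the original label space. The random projection, ridge-style solves, and 0.5 threshold are illustrative choices, not the paper's exact encoder, decoder-update rule, or online chunked training.

```python
# Minimal label-compression sketch: encode labels, regress features -> codes,
# decode codes back to labels with a least-squares decoding matrix.
# Batch (non-online) and with arbitrary choices; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, d, L, k = 200, 30, 40, 10                   # samples, features, labels, code size

X = rng.normal(size=(n, d))
Y = (rng.random((n, L)) < 0.1).astype(float)   # sparse multi-label matrix

P = rng.normal(size=(L, k)) / np.sqrt(k)       # random encoding matrix
Z = Y @ P                                      # compressed pseudo labels

lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Z)   # ridge: features -> codes
D = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ Y)   # decoding matrix: codes -> labels

Y_hat = ((X @ W) @ D > 0.5).astype(int)        # predicted labels after decoding
print("train Hamming accuracy:", (Y_hat == Y).mean())
```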
Learning Neural Random Fields with Inclusive Auxiliary Generators
Title | Learning Neural Random Fields with Inclusive Auxiliary Generators |
Authors | Yunfu Song, Zhijian Ou |
Abstract | Neural random fields (NRFs), which are defined by using neural networks to implement potential functions in undirected models (sometimes known as energy-based models), provide an interesting family of model spaces for machine learning, besides various directed models such as generative adversarial networks (GANs). In this paper we propose a new approach, the inclusive-NRF approach, to learning NRFs for continuous data (e.g. images), by developing inclusive-divergence minimized auxiliary generators and stochastic gradient sampling. As demonstrations of how the new approach can be flexibly and effectively used, specific inclusive-NRF models are developed and thoroughly evaluated on a number of tasks: unsupervised/supervised image generation, semi-supervised classification and anomaly detection. The proposed models consistently achieve strong experimental results in all these tasks compared to state-of-the-art methods. Remarkably, in addition to superior sample generation, a fundamental additional benefit of our inclusive-NRF approach is that, unlike GANs, it directly provides an (unnormalized) density estimate for sample evaluation. With these contributions and results, this paper advances the learning and applications of undirected models, both theoretically and empirically. |
Tasks | Anomaly Detection, Image Generation |
Published | 2018-06-01 |
URL | http://arxiv.org/abs/1806.00271v4 |
PDF | http://arxiv.org/pdf/1806.00271v4.pdf |
PWC | https://paperswithcode.com/paper/learning-neural-random-fields-with-inclusive |
Repo | |
Framework | |
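The training procedure is only described in prose above; the following PyTorch toy is a heavily hedged illustration of the general pattern: an energy network updated with a contrastive gradient estimate, an auxiliary generator providing proposal samples, Langevin refinement of those proposals under the energy, and a generator update that tracks the refined samples. The L2 tracking loss, step sizes, and 2-D toy data are stand-ins, not the paper's inclusive-divergence estimator.

```python
# Toy energy-based training loop with an auxiliary generator as the sampler.
# Everything below (architectures, losses, step sizes, data) is illustrative.
import torch
import torch.nn as nn

D, Z = 2, 2  # toy data/latent dimensions

energy = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, 1))
generator = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, D))
opt_e = torch.optim.Adam(energy.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

def langevin_refine(x, steps=5, step_size=0.01):
    """Refine proposal samples by a few Langevin steps on the energy surface."""
    x = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = (x - 0.5 * step_size * grad
             + torch.randn_like(x) * step_size ** 0.5).detach().requires_grad_(True)
    return x.detach()

for step in range(200):
    x_data = torch.randn(64, D) * 0.5 + 1.0       # toy data distribution
    z = torch.randn(64, Z)
    x_model = langevin_refine(generator(z))        # generator proposals revised toward the NRF

    # Energy update: contrastive estimate of the log-likelihood gradient.
    loss_e = energy(x_data).mean() - energy(x_model).mean()
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()

    # Generator update: move generator samples toward the revised samples
    # (a crude stand-in for the inclusive-divergence minimization).
    loss_g = ((generator(z) - x_model) ** 2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(float(loss_e), float(loss_g))
```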
Geometry of energy landscapes and the optimizability of deep neural networks
Title | Geometry of energy landscapes and the optimizability of deep neural networks |
Authors | Simon Becker, Yao Zhang, Alpha A. Lee |
Abstract | Deep neural networks are workhorse models in machine learning with multiple layers of non-linear functions composed in series. Their loss function is highly non-convex, yet empirically even gradient descent minimisation is sufficient to arrive at accurate and predictive models. It is hitherto unknown why deep neural networks are easily optimizable. We analyze the energy landscape of a spin glass model of deep neural networks using random matrix theory and algebraic geometry. We analytically show that the multilayered structure holds the key to optimizability: fixing the number of parameters and increasing network depth, the number of stationary points in the loss function decreases, minima become more clustered in parameter space, and the tradeoff between the depth and width of minima becomes less severe. Our analytical results are numerically verified through comparison with neural networks trained on a set of classical benchmark datasets. Our model uncovers generic design principles of machine learning models. |
Tasks | |
Published | 2018-08-01 |
URL | http://arxiv.org/abs/1808.00408v1 |
PDF | http://arxiv.org/pdf/1808.00408v1.pdf |
PWC | https://paperswithcode.com/paper/geometry-of-energy-landscapes-and-the |
Repo | |
Framework | |
Intrinsic Image Transformation via Scale Space Decomposition
Title | Intrinsic Image Transformation via Scale Space Decomposition |
Authors | Lechao Cheng, Chengyi Zhang, Zicheng Liao |
Abstract | We introduce a new network structure for decomposing an image into its intrinsic albedo and shading. We treat this as an image-to-image transformation problem and explore the scale space of the input and output. By expanding the output images (albedo and shading) into their Laplacian pyramid components, we develop a multi-channel network structure that learns the image-to-image transformation function in successive frequency bands in parallel; each channel is a fully convolutional neural network with skip connections. This network structure is general and extensible, and has demonstrated excellent performance on the intrinsic image decomposition problem. We evaluate the network on two benchmark datasets: the MPI-Sintel dataset and the MIT Intrinsic Images dataset. Both quantitative and qualitative results show our model delivers a clear improvement over the state of the art. |
Tasks | Intrinsic Image Decomposition |
Published | 2018-05-25 |
URL | http://arxiv.org/abs/1805.10253v1 |
PDF | http://arxiv.org/pdf/1805.10253v1.pdf |
PWC | https://paperswithcode.com/paper/intrinsic-image-transformation-via-scale |
Repo | |
Framework | |
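Since the abstract above hinges on expanding the outputs into Laplacian pyramid components, here is a small OpenCV/NumPy sketch of building and exactly reconstructing a Laplacian pyramid; the number of levels and the random stand-in image are placeholders, and the per-band prediction networks from the paper are not shown.

```python
# Laplacian pyramid decomposition and reconstruction with OpenCV.
# The paper predicts albedo/shading per frequency band; here we only show
# the pyramid machinery on a random stand-in image.
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels=4):
    pyramid, current = [], img
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)      # band-pass residual at this scale
        current = down
    pyramid.append(current)               # low-frequency residual
    return pyramid

def reconstruct(pyramid):
    current = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        current = cv2.pyrUp(current, dstsize=(band.shape[1], band.shape[0])) + band
    return current

img = np.random.rand(128, 128, 3).astype(np.float32)
pyr = build_laplacian_pyramid(img)
rec = reconstruct(pyr)
print("max reconstruction error:", float(np.abs(rec - img).max()))
```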
All-in-one: Multi-task Learning for Rumour Verification
Title | All-in-one: Multi-task Learning for Rumour Verification |
Authors | Elena Kochkina, Maria Liakata, Arkaitz Zubiaga |
Abstract | Automatic resolution of rumours is a challenging task that can be broken down into smaller components that make up a pipeline, including rumour detection, rumour tracking and stance classification, leading to the final outcome of determining the veracity of a rumour. In previous work, these steps in the process of rumour verification have been developed as separate components where the output of one feeds into the next. We propose a multi-task learning approach that allows joint training of the main and auxiliary tasks, improving the performance of rumour verification. We examine the connection between the dataset properties and the outcomes of the multi-task learning models used. |
Tasks | Multi-Task Learning, Rumour Detection |
Published | 2018-06-10 |
URL | http://arxiv.org/abs/1806.03713v1 |
PDF | http://arxiv.org/pdf/1806.03713v1.pdf |
PWC | https://paperswithcode.com/paper/all-in-one-multi-task-learning-for-rumour |
Repo | |
Framework | |
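A common way to realize the joint training described above is a shared encoder with one output head per task; the PyTorch sketch below shows that generic hard-parameter-sharing pattern with made-up dimensions and label counts for veracity, stance, and detection heads. It is a generic multi-task skeleton, not the authors' architecture.

```python
# Generic hard-parameter-sharing multi-task skeleton: one shared encoder,
# one classification head per task. Dimensions and label counts are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskRumourModel(nn.Module):
    def __init__(self, input_dim=100, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleDict({
            "veracity": nn.Linear(hidden, 3),   # true / false / unverified
            "stance": nn.Linear(hidden, 4),     # support / deny / query / comment
            "detection": nn.Linear(hidden, 2),  # rumour / non-rumour
        })
    def forward(self, x):
        h = self.encoder(x)
        return {task: head(h) for task, head in self.heads.items()}

model = MultiTaskRumourModel()
x = torch.randn(8, 100)                         # stand-in tweet features
targets = {"veracity": torch.randint(0, 3, (8,)),
           "stance": torch.randint(0, 4, (8,)),
           "detection": torch.randint(0, 2, (8,))}

outputs = model(x)
loss = sum(F.cross_entropy(outputs[t], targets[t]) for t in outputs)
loss.backward()                                 # joint training signal across tasks
print(float(loss))
```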
CQASUMM: Building References for Community Question Answering Summarization Corpora
Title | CQASUMM: Building References for Community Question Answering Summarization Corpora |
Authors | Tanya Chowdhury, Tanmoy Chakraborty |
Abstract | Community Question Answering (CQA) forums such as Quora and Stackoverflow are rich knowledge resources, often catering to information on topics overlooked by major search engines. Answers submitted to these forums are often elaborate, but they can also contain spam and be marred by slurs and business promotions. It is difficult for a reader to go through numerous such answers to gauge community opinion. As a result, summarization becomes a prioritized task for CQA forums. While a number of efforts have been made to summarize factoid CQA, little work exists on summarizing non-factoid CQA. We believe this is due to the lack of a considerably large, annotated dataset for CQA summarization. We create CQASUMM, the first large annotated CQA summarization dataset, by filtering the 4.4 million Yahoo! Answers L6 dataset. We sample threads where the best answer can double as a reference summary and build hundred-word summaries from them. We treat the other answers as candidate documents for summarization. We provide a script to generate the dataset and introduce the new task of Community Question Answering Summarization. Multi-document summarization has been widely studied on news article datasets, especially in the DUC and TAC challenges. However, documents in CQA exhibit higher variance, contradicting opinions and less overlap. We compare popular multi-document summarization techniques and evaluate their performance on our CQA corpus. We examine the state of the art to understand the cases where existing multi-document summarizers (MDS) fail. We find that most MDS workflows are built for entirely factual news corpora, whereas our corpus has a fair share of opinion-based instances too. We therefore introduce OpinioSumm, a new MDS which outperforms the best baseline by 4.6% w.r.t. the ROUGE-1 score. |
Tasks | Community Question Answering, Document Summarization, Multi-Document Summarization, Question Answering |
Published | 2018-11-12 |
URL | http://arxiv.org/abs/1811.04884v1 |
PDF | http://arxiv.org/pdf/1811.04884v1.pdf |
PWC | https://paperswithcode.com/paper/cqasumm-building-references-for-community |
Repo | |
Framework | |
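The dataset-construction recipe in the abstract (keep threads whose best answer can serve as a roughly hundred-word reference, treat the remaining answers as candidate documents) can be sketched in a few lines of Python; the thread structure, field names, and length window below are assumptions for illustration, not the released script.

```python
# Illustrative filtering pass mirroring the construction described above:
# keep threads whose best answer is usable as a ~100-word reference summary,
# and use the remaining answers as the documents to be summarized.
# The thread dicts and the length window are made-up stand-ins.
def build_instances(threads, min_words=80, max_words=120):
    instances = []
    for thread in threads:
        best = thread["best_answer"]
        n_words = len(best.split())
        if min_words <= n_words <= max_words and len(thread["other_answers"]) >= 2:
            instances.append({
                "question": thread["question"],
                "documents": thread["other_answers"],       # candidate documents
                "reference": " ".join(best.split()[:100]),  # truncate to ~100 words
            })
    return instances

toy_threads = [{
    "question": "How do I start learning machine learning?",
    "best_answer": " ".join(["word"] * 100),
    "other_answers": ["Try an online course first.", "Start with linear algebra."],
}]
print(len(build_instances(toy_threads)))
```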
Webly Supervised Learning for Skin Lesion Classification
Title | Webly Supervised Learning for Skin Lesion Classification |
Authors | Fernando Navarro, Sailesh Conjeti, Federico Tombari, Nassir Navab |
Abstract | Within medical imaging, manual curation of sufficiently many well-labeled samples is cost-, time- and scale-prohibitive. To improve the representativeness of the training dataset, for the first time, we present an approach to utilize large amounts of freely available web data through web-crawling. To handle the noisy and weak nature of web annotations, we propose a two-step transfer-learning-based training process with a robust loss function, termed Webly Supervised Learning (WSL), to train deep models for the task. We also leverage search-by-image to improve the search specificity of our web-crawling and reduce cross-domain noise. Within WSL, we explicitly model the noise structure between classes and incorporate it to selectively distill knowledge from the web data during model training. To demonstrate the improved performance due to WSL, we benchmark on a publicly available 10-class fine-grained skin lesion classification dataset and report a significant improvement in top-1 classification accuracy from 71.25% to 80.53% due to the incorporation of web supervision. |
Tasks | Skin Lesion Classification, Transfer Learning |
Published | 2018-03-31 |
URL | https://arxiv.org/abs/1804.00177v2 |
PDF | https://arxiv.org/pdf/1804.00177v2.pdf |
PWC | https://paperswithcode.com/paper/webly-supervised-learning-for-skin-lesion |
Repo | |
Framework | |
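The key mechanism in the abstract above, explicitly modeling the noise structure between classes and using it during training, is commonly implemented as a forward loss correction with a class-confusion (noise transition) matrix. The PyTorch sketch below shows that generic correction; the uniform transition matrix and class count are illustrative and not the paper's estimated noise model.

```python
# Generic forward-corrected cross-entropy with a class-noise transition matrix T,
# where T[i, j] = P(observed web label = j | true label = i). The uniform-noise
# T below is a placeholder; the paper models class-dependent noise structure.
import torch
import torch.nn.functional as F

num_classes, noise = 10, 0.2
T = torch.full((num_classes, num_classes), noise / (num_classes - 1))
T.fill_diagonal_(1.0 - noise)

def forward_corrected_loss(logits, noisy_labels):
    clean_probs = F.softmax(logits, dim=-1)    # model's belief over true classes
    noisy_probs = clean_probs @ T              # predicted distribution over observed labels
    return F.nll_loss(torch.log(noisy_probs + 1e-8), noisy_labels)

logits = torch.randn(16, num_classes, requires_grad=True)   # stand-in network outputs
noisy_labels = torch.randint(0, num_classes, (16,))          # stand-in web labels
loss = forward_corrected_loss(logits, noisy_labels)
loss.backward()
print(float(loss))
```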
Autonomous Exploration, Reconstruction, and Surveillance of 3D Environments Aided by Deep Learning
Title | Autonomous Exploration, Reconstruction, and Surveillance of 3D Environments Aided by Deep Learning |
Authors | Louis Ly, Yen-Hsi Richard Tsai |
Abstract | We propose a greedy and supervised learning approach for visibility-based exploration, reconstruction and surveillance. Using a level set representation, we train a convolutional neural network to determine vantage points that maximize visibility. We show that this method drastically reduces the on-line computational cost and determines a small set of vantage points that solve the problem. This enables us to efficiently produce highly-resolved and topologically accurate maps of complex 3D environments. Unlike traditional next-best-view and frontier-based strategies, the proposed method accounts for geometric priors while evaluating potential vantage points. While existing deep learning approaches focus on obstacle avoidance and local navigation, our method aims at finding near-optimal solutions to the more global exploration problem. We present realistic simulations on 2D and 3D urban environments. |
Tasks | |
Published | 2018-09-17 |
URL | https://arxiv.org/abs/1809.06025v2 |
PDF | https://arxiv.org/pdf/1809.06025v2.pdf |
PWC | https://paperswithcode.com/paper/autonomous-exploration-reconstruction-and |
Repo | |
Framework | |
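The greedy selection loop underlying the approach above (repeatedly pick the vantage point that adds the most newly visible space) can be illustrated on a toy 2-D grid; the sketch below reduces "visibility" to a fixed sensing radius with no occlusion, purely to show the greedy gain computation, whereas the paper uses a level-set representation and a CNN to score vantage points.

```python
# Greedy vantage-point selection on a toy 2D grid: repeatedly pick the candidate
# that adds the most newly "visible" cells. Radius-based visibility and the grid
# itself are illustrative stand-ins.
import numpy as np

H, W, RADIUS, N_VIEWS = 40, 40, 8, 5
ys, xs = np.mgrid[0:H, 0:W]

def visible_from(p):
    return (ys - p[0]) ** 2 + (xs - p[1]) ** 2 <= RADIUS ** 2

covered = np.zeros((H, W), dtype=bool)
candidates = [(y, x) for y in range(0, H, 4) for x in range(0, W, 4)]
selected = []
for _ in range(N_VIEWS):
    gains = [np.count_nonzero(visible_from(p) & ~covered) for p in candidates]
    best = candidates[int(np.argmax(gains))]
    covered |= visible_from(best)
    selected.append(best)

print(selected, "coverage:", covered.mean())
```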
Mining for meaning: from vision to language through multiple networks consensus
Title | Mining for meaning: from vision to language through multiple networks consensus |
Authors | Iulia Duta, Andrei Liviu Nicolicioiu, Simion-Vlad Bogolin, Marius Leordeanu |
Abstract | Describing visual data in natural language is a very challenging task, at the intersection of computer vision, natural language processing and machine learning. Language goes well beyond the description of physical objects and their interactions and can convey the same abstract idea in many ways. It is about both content at the highest semantic level and fluent form. Here we propose an approach to describe videos in natural language by reaching a consensus among multiple encoder-decoder networks. Finding such a consensual linguistic description, which shares common properties with a larger group, has a better chance of conveying the correct meaning. We propose and train several network architectures and use different types of image, audio and video features. Each model produces its own description of the input video, and the best one is chosen through an efficient, two-phase consensus process. We demonstrate the strength of our approach by obtaining state-of-the-art results on the challenging MSR-VTT dataset. |
Tasks | |
Published | 2018-06-05 |
URL | http://arxiv.org/abs/1806.01954v2 |
PDF | http://arxiv.org/pdf/1806.01954v2.pdf |
PWC | https://paperswithcode.com/paper/mining-for-meaning-from-vision-to-language |
Repo | |
Framework | |
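A minimal way to implement the "consensus among multiple descriptions" idea above is to score each candidate caption by its average similarity to the other candidates and keep the highest-scoring one; the sketch below does this with a simple word-overlap (Jaccard) similarity, which is only a stand-in for the paper's two-phase consensus process.

```python
# Choose, among candidate captions from different models, the one that agrees
# most with the rest (highest mean pairwise similarity). Jaccard word overlap
# is an illustrative similarity, not the paper's consensus criterion.
def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def consensus_caption(candidates):
    def mean_agreement(c):
        others = [o for o in candidates if o is not c]
        return sum(jaccard(c, o) for o in others) / max(len(others), 1)
    return max(candidates, key=mean_agreement)

candidates = [
    "a man is playing a guitar on stage",
    "a person plays guitar in front of a crowd",
    "a man is playing a guitar",
    "a dog runs through a field",
]
print(consensus_caption(candidates))   # the outlier caption loses the consensus vote
```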
High-Dimensional Bayesian Optimization via Additive Models with Overlapping Groups
Title | High-Dimensional Bayesian Optimization via Additive Models with Overlapping Groups |
Authors | Paul Rolland, Jonathan Scarlett, Ilija Bogunovic, Volkan Cevher |
Abstract | Bayesian optimization (BO) is a popular technique for sequential black-box function optimization, with applications including parameter tuning, robotics, environmental monitoring, and more. One of the most important challenges in BO is the development of algorithms that scale to high dimensions, which remains a key open problem despite recent progress. In this paper, we consider the approach of Kandasamy et al. (2015), in which the high-dimensional function decomposes as a sum of lower-dimensional functions on subsets of the underlying variables. In particular, we significantly generalize this approach by lifting the assumption that the subsets are disjoint, and consider additive models with arbitrary overlap among the subsets. By representing the dependencies via a graph, we deduce an efficient message passing algorithm for optimizing the acquisition function. In addition, we provide an algorithm for learning the graph from samples based on Gibbs sampling. We empirically demonstrate the effectiveness of our methods on both synthetic and real-world data. |
Tasks | Hyperparameter Optimization |
Published | 2018-02-20 |
URL | http://arxiv.org/abs/1802.07028v2 |
PDF | http://arxiv.org/pdf/1802.07028v2.pdf |
PWC | https://paperswithcode.com/paper/high-dimensional-bayesian-optimization-via |
Repo | |
Framework | |
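The core modeling idea above, an additive kernel that is a sum of kernels over (possibly overlapping) groups of variables, can be written directly in NumPy; the sketch below builds such a kernel and the standard GP posterior mean under it. The group choices, length scale, and noise level are placeholders, and the paper's message-passing acquisition optimization and Gibbs-sampled structure learning are omitted.

```python
# Additive GP kernel over overlapping variable groups, plus the usual
# GP posterior mean. Groups, length scale and noise are illustrative only.
import numpy as np

groups = [[0, 1], [1, 2], [2, 3]]          # overlapping subsets of dimensions

def additive_kernel(A, B, lengthscale=1.0):
    K = np.zeros((A.shape[0], B.shape[0]))
    for g in groups:
        Ag, Bg = A[:, g], B[:, g]
        sq = ((Ag[:, None, :] - Bg[None, :, :]) ** 2).sum(-1)
        K += np.exp(-sq / (2.0 * lengthscale ** 2))   # RBF on this group's coordinates
    return K

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 4))                    # observed inputs
y = np.sin(X[:, 0] + X[:, 1]) + np.cos(X[:, 2] * X[:, 3])

noise = 1e-3
K = additive_kernel(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)

X_test = rng.uniform(-1, 1, size=(5, 4))
mean = additive_kernel(X_test, X) @ alpha               # GP posterior mean at test points
print(mean)
```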
Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration
Title | Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration |
Authors | Soham De, Anirbit Mukherjee, Enayat Ullah |
Abstract | RMSProp and ADAM continue to be extremely popular algorithms for training neural nets but their theoretical convergence properties have remained unclear. Further, recent work has seemed to suggest that these algorithms have worse generalization properties when compared to carefully tuned stochastic gradient descent or its momentum variants. In this work, we make progress towards a deeper understanding of ADAM and RMSProp in two ways. First, we provide proofs that these adaptive gradient algorithms are guaranteed to reach criticality for smooth non-convex objectives, and we give bounds on the running time. Next we design experiments to empirically study the convergence and generalization properties of RMSProp and ADAM against Nesterov’s Accelerated Gradient method on a variety of common autoencoder setups and on VGG-9 with CIFAR-10. Through these experiments we demonstrate the interesting sensitivity that ADAM has to its momentum parameter $\beta_1$. We show that at very high values of the momentum parameter ($\beta_1 = 0.99$) ADAM outperforms a carefully tuned NAG on most of our experiments, in terms of getting lower training and test losses. On the other hand, NAG can sometimes do better when ADAM’s $\beta_1$ is set to the most commonly used value: $\beta_1 = 0.9$, indicating the importance of tuning the hyperparameters of ADAM to get better generalization performance. We also report experiments on different autoencoders to demonstrate that NAG has better abilities in terms of reducing the gradient norms, and it also produces iterates which exhibit an increasing trend for the minimum eigenvalue of the Hessian of the loss function at the iterates. |
Tasks | |
Published | 2018-07-18 |
URL | http://arxiv.org/abs/1807.06766v3 |
PDF | http://arxiv.org/pdf/1807.06766v3.pdf |
PWC | https://paperswithcode.com/paper/convergence-guarantees-for-rmsprop-and-adam |
Repo | |
Framework | |
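Reproducing the comparison described above largely comes down to optimizer configuration; the short PyTorch snippet below shows the two settings being contrasted (ADAM with a high momentum parameter, beta_1 = 0.99, versus Nesterov-accelerated SGD) on a throwaway model. The learning rates and the toy model are arbitrary placeholders, not the paper's autoencoder or VGG-9 setups.

```python
# The two optimizer configurations contrasted in the paper, applied to a toy
# regression model: ADAM with beta_1 = 0.99 vs. Nesterov-accelerated gradient (NAG).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

model_adam, model_nag = make_model(), make_model()
opt_adam = torch.optim.Adam(model_adam.parameters(), lr=1e-3, betas=(0.99, 0.999))
opt_nag = torch.optim.SGD(model_nag.parameters(), lr=1e-2, momentum=0.9, nesterov=True)

x, y = torch.randn(64, 10), torch.randn(64, 1)
for model, opt in [(model_adam, opt_adam), (model_nag, opt_nag)]:
    for _ in range(100):
        opt.zero_grad()
        loss = F.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print(type(opt).__name__, float(loss))
```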
DeepStyle: Multimodal Search Engine for Fashion and Interior Design
Title | DeepStyle: Multimodal Search Engine for Fashion and Interior Design |
Authors | Ivona Tautkute, Tomasz Trzcinski, Aleksander Skorupa, Lukasz Brocki, Krzysztof Marasek |
Abstract | In this paper, we propose a multimodal search engine that combines visual and textual cues to retrieve items from a multimedia database that are aesthetically similar to the query. The goal of our engine is to enable intuitive retrieval of fashion merchandise such as clothes or furniture. Existing search engines treat textual input only as an additional source of information about the query image and do not correspond to the real-life scenario where the user looks for ‘the same shirt but of denim’. Our novel method, dubbed DeepStyle, mitigates those shortcomings by using a joint neural network architecture to model contextual dependencies between features of different modalities. We demonstrate the robustness of this approach on two challenging datasets of fashion items and furniture, where our DeepStyle engine outperforms baseline methods by 18-21%. Our search engine is commercially deployed and available through a Web-based application. |
Tasks | |
Published | 2018-01-08 |
URL | http://arxiv.org/abs/1801.03002v2 |
PDF | http://arxiv.org/pdf/1801.03002v2.pdf |
PWC | https://paperswithcode.com/paper/deepstyle-multimodal-search-engine-for |
Repo | |
Framework | |
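The abstract above describes a joint architecture modeling dependencies between visual and textual features; a common minimal realization is to embed both modalities into a shared space and rank database items by cosine similarity to the combined query embedding. The PyTorch sketch below shows that pattern with made-up feature sizes; it is not the deployed DeepStyle model.

```python
# Minimal joint image+text embedding with cosine-similarity retrieval.
# Feature dimensions, the fusion MLP and the random "database" are stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG_DIM, TXT_DIM, EMB_DIM = 512, 300, 128   # assumed feature sizes

class JointEmbedder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(IMG_DIM + TXT_DIM, 256), nn.ReLU(),
                                  nn.Linear(256, EMB_DIM))
    def forward(self, img_feat, txt_feat):
        z = self.fuse(torch.cat([img_feat, txt_feat], dim=-1))
        return F.normalize(z, dim=-1)        # unit-norm embedding for cosine retrieval

embedder = JointEmbedder()
query = embedder(torch.randn(1, IMG_DIM), torch.randn(1, TXT_DIM))

database = F.normalize(torch.randn(1000, EMB_DIM), dim=-1)   # precomputed item embeddings
scores = database @ query.squeeze(0)                          # cosine similarities
top5 = torch.topk(scores, k=5).indices
print(top5.tolist())
```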