Paper Group ANR 282
Empirically Grounded Agent-Based Models of Innovation Diffusion: A Critical Review
Title | Empirically Grounded Agent-Based Models of Innovation Diffusion: A Critical Review |
Authors | Haifeng Zhang, Yevgeniy Vorobeychik |
Abstract | Innovation diffusion has been studied extensively in a variety of disciplines, including sociology, economics, marketing, ecology, and computer science. Traditional literature on innovation diffusion has been dominated by models of aggregate behavior and trends. However, the agent-based modeling (ABM) paradigm is gaining popularity as it captures agent heterogeneity and enables fine-grained modeling of interactions mediated by social and geographic networks. While most ABM work on innovation diffusion is theoretical, empirically grounded models are increasingly important, particularly in guiding policy decisions. We present a critical review of empirically grounded agent-based models of innovation diffusion, developing a categorization of this research based on types of agent models as well as applications. By connecting the modeling methodologies in the fields of information and innovation diffusion, we suggest that the maximum likelihood estimation framework widely used in the former is a promising paradigm for calibration of agent-based models for innovation diffusion. Although many advances have been made to standardize ABM methodology, we identify four major issues in model calibration and validation, and suggest potential solutions. |
Tasks | Calibration |
Published | 2016-08-30 |
URL | http://arxiv.org/abs/1608.08517v4 |
http://arxiv.org/pdf/1608.08517v4.pdf | |
PWC | https://paperswithcode.com/paper/empirically-grounded-agent-based-models-of |
Repo | |
Framework | |
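The agent-based paradigm that this review surveys can be illustrated with a minimal sketch: a linear-threshold diffusion model on a social network, where each agent adopts once enough of its neighbors have. The rule, the toy network, and the thresholds below are invented for illustration and are not any specific model from the paper:

```python
def simulate_diffusion(neighbors, seeds, thresholds, max_steps=50):
    """Linear-threshold diffusion: a node adopts once the fraction of its
    neighbors that have adopted reaches its personal threshold."""
    adopted = set(seeds)
    for _ in range(max_steps):
        newly = {
            n for n, nbrs in neighbors.items()
            if n not in adopted and nbrs
            and sum(v in adopted for v in nbrs) / len(nbrs) >= thresholds[n]
        }
        if not newly:  # diffusion has stopped
            break
        adopted |= newly
    return adopted

# Toy network: a path 0-1-2-3 plus a hub 4 connected to everyone.
neighbors = {0: [1, 4], 1: [0, 2, 4], 2: [1, 3, 4], 3: [2, 4], 4: [0, 1, 2, 3]}
thresholds = {n: 0.3 for n in neighbors}
print(simulate_diffusion(neighbors, {0}, thresholds))
```

Calibrating such a model empirically would mean fitting the thresholds (or the adoption rule itself) to observed adoption data, which is where the maximum likelihood framework discussed in the abstract comes in.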
Leveraging Recurrent Neural Networks for Multimodal Recognition of Social Norm Violation in Dialog
Title | Leveraging Recurrent Neural Networks for Multimodal Recognition of Social Norm Violation in Dialog |
Authors | Tiancheng Zhao, Ran Zhao, Zhao Meng, Justine Cassell |
Abstract | Social norms are shared rules that govern and facilitate social interaction. Violating such social norms via teasing and insults may serve to upend power imbalances or, on the contrary, to reinforce solidarity and rapport in conversation; such rapport is highly situated and context-dependent. In this work, we investigate the task of automatically identifying social norm violations in discourse. Towards this goal, we leverage the power of recurrent neural networks and the multimodal information present in the interaction, and propose a predictive model to recognize social norm violation. Using long-term temporal and contextual information, our model achieves an F1 score of 0.705. Implications of our work for developing a socially aware agent are discussed. |
Tasks | |
Published | 2016-10-10 |
URL | http://arxiv.org/abs/1610.03112v1 |
http://arxiv.org/pdf/1610.03112v1.pdf | |
PWC | https://paperswithcode.com/paper/leveraging-recurrent-neural-networks-for |
Repo | |
Framework | |
TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning
Title | TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning |
Authors | Jayaraman J. Thiagarajan, Bhavya Kailkhura, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy |
Abstract | With the advent of highly predictive but opaque deep learning models, it has become more important than ever to understand and explain the predictions of such models. Existing approaches define interpretability as the inverse of complexity and achieve interpretability at the cost of accuracy. This introduces a risk of producing interpretable but misleading explanations. As humans, we are prone to engage in this kind of behavior \cite{mythos}. In this paper, we take a step in the direction of tackling the problem of interpretability without compromising the model accuracy. We propose to build a Treeview representation of the complex model via hierarchical partitioning of the feature space, which reveals the iterative rejection of unlikely class labels until the correct association is predicted. |
Tasks | |
Published | 2016-11-22 |
URL | http://arxiv.org/abs/1611.07429v1 |
http://arxiv.org/pdf/1611.07429v1.pdf | |
PWC | https://paperswithcode.com/paper/treeview-peeking-into-deep-neural-networks |
Repo | |
Framework | |
Variable-Length Hashing
Title | Variable-Length Hashing |
Authors | Honghai Yu, Pierre Moulin, Hong Wei Ng, Xiaoli Li |
Abstract | Hashing has emerged as a popular technique for large-scale similarity search. Most learning-based hashing methods generate compact yet correlated hash codes. However, this redundancy is storage-inefficient. Hence we propose a lossless variable-length hashing (VLH) method that is both storage- and search-efficient. Storage efficiency is achieved by converting the fixed-length hash code into a variable-length code. Search efficiency is obtained by using a multiple hash table structure. With VLH, we are able to deliberately add redundancy into hash codes to improve retrieval performance with little sacrifice in storage efficiency or search complexity. In particular, we propose a block K-means hashing (B-KMH) method to obtain significantly improved retrieval performance with no increase in storage and marginal increase in computational cost. |
Tasks | Code Search |
Published | 2016-03-17 |
URL | http://arxiv.org/abs/1603.05414v1 |
http://arxiv.org/pdf/1603.05414v1.pdf | |
PWC | https://paperswithcode.com/paper/variable-length-hashing |
Repo | |
Framework | |
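The multiple-hash-table structure mentioned in the abstract can be sketched generically: split each binary code into blocks, index each block in its own table, and let a query retrieve any item that collides in at least one block. The block size, codes, and function names below are illustrative assumptions, not the paper's B-KMH method:

```python
from collections import defaultdict

def build_tables(codes, block_size):
    """One hash table per code block: tables[b] maps a block's bits to
    the ids of all items sharing that block."""
    n_blocks = len(codes[0]) // block_size
    tables = [defaultdict(set) for _ in range(n_blocks)]
    for item_id, code in enumerate(codes):
        for b, table in enumerate(tables):
            table[code[b * block_size:(b + 1) * block_size]].add(item_id)
    return tables

def query(tables, code, block_size):
    """Candidates = items colliding with the query in at least one block."""
    hits = set()
    for b, table in enumerate(tables):
        hits |= table.get(code[b * block_size:(b + 1) * block_size], set())
    return hits

codes = ["0101", "0111", "1100"]
tables = build_tables(codes, block_size=2)
print(query(tables, "0110", 2))
```

A query then only needs one lookup per table instead of a linear scan, which is the source of the search efficiency the abstract claims.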
On the Reducibility of Submodular Functions
Title | On the Reducibility of Submodular Functions |
Authors | Jincheng Mei, Hao Zhang, Bao-Liang Lu |
Abstract | The scalability of submodular optimization methods is critical for their usability in practice. In this paper, we study the reducibility of submodular functions, a property that enables us to reduce the solution space of submodular optimization problems without performance loss. We introduce the concept of reducibility using marginal gains. Then we show that by adding perturbation, we can endow irreducible functions with reducibility, based on which we propose the perturbation-reduction optimization framework. Our theoretical analysis proves that given the perturbation scales, the reducibility gain can be computed, and the performance loss has additive upper bounds. We further conduct empirical studies, and the results demonstrate that our proposed framework significantly accelerates existing optimization methods for irreducible submodular functions at the cost of only small performance losses. |
Tasks | |
Published | 2016-01-04 |
URL | http://arxiv.org/abs/1601.00393v1 |
http://arxiv.org/pdf/1601.00393v1.pdf | |
PWC | https://paperswithcode.com/paper/on-the-reducibility-of-submodular-functions |
Repo | |
Framework | |
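The marginal gains on which the reducibility concept is built can be illustrated with a standard coverage function, a textbook submodular function with diminishing returns. The toy sets below are our own example, not taken from the paper:

```python
def coverage(sets, S):
    """f(S) = number of ground elements covered by the chosen sets;
    coverage is a classic submodular function."""
    return len(set().union(*(sets[i] for i in S))) if S else 0

def marginal_gain(sets, S, e):
    """Marginal gain of adding element e to the current selection S."""
    return coverage(sets, S | {e}) - coverage(sets, S)

sets = {0: {1, 2}, 1: {2, 3}, 2: {1, 2, 3}}
# Diminishing returns: the gain of adding set 1 shrinks as S grows.
print(marginal_gain(sets, set(), 1))   # gain on the empty selection
print(marginal_gain(sets, {2}, 1))     # gain once set 2 is already chosen
```

Here set 1 contributes nothing once set 2 is chosen; an element whose marginal gain vanishes in this way is the kind of candidate a reduction scheme can discard without changing the optimum.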
Live Orchestral Piano, a system for real-time orchestral music generation
Title | Live Orchestral Piano, a system for real-time orchestral music generation |
Authors | Léopold Crestel, Philippe Esling |
Abstract | This paper introduces the first system for performing automatic orchestration based on a real-time piano input. We believe that it is possible to learn the underlying regularities existing between piano scores and their orchestrations by renowned composers, in order to automatically perform this task on novel piano inputs. To that end, we investigate a class of statistical inference models called conditional Restricted Boltzmann Machines (cRBM). We introduce a specific evaluation framework for orchestral generation based on a prediction task in order to assess the quality of different models. As prediction and creation are two widely different endeavours, we discuss the potential biases in evaluating temporal generative models through prediction tasks and their impact on a creative system. Finally, we introduce an implementation of the proposed model, called Live Orchestral Piano (LOP), which performs real-time projective orchestration of a MIDI keyboard input. |
Tasks | Music Generation |
Published | 2016-09-05 |
URL | http://arxiv.org/abs/1609.01203v2 |
http://arxiv.org/pdf/1609.01203v2.pdf | |
PWC | https://paperswithcode.com/paper/live-orchestral-piano-a-system-for-real-time |
Repo | |
Framework | |
Size-Consistent Statistics for Anomaly Detection in Dynamic Networks
Title | Size-Consistent Statistics for Anomaly Detection in Dynamic Networks |
Authors | Timothy La Fond, Jennifer Neville, Brian Gallagher |
Abstract | An important task in network analysis is the detection of anomalous events in a network time series. These events could merely be times of interest in the network timeline or they could be examples of malicious activity or network malfunction. Hypothesis testing using network statistics to summarize the behavior of the network provides a robust framework for the anomaly detection decision process. Unfortunately, choosing network statistics that are dependent on confounding factors like the total number of nodes or edges can lead to incorrect conclusions (e.g., false positives and false negatives). In this dissertation we describe the challenges that face anomaly detection in dynamic network streams regarding confounding factors. We also provide two solutions to avoiding error due to confounding factors: the first is a randomization testing method that controls for confounding factors, and the second is a set of size-consistent network statistics which avoid confounding due to the most common factors, edge count and node count. |
Tasks | Anomaly Detection, Time Series |
Published | 2016-08-02 |
URL | http://arxiv.org/abs/1608.00712v1 |
http://arxiv.org/pdf/1608.00712v1.pdf | |
PWC | https://paperswithcode.com/paper/size-consistent-statistics-for-anomaly |
Repo | |
Framework | |
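A generic randomization test for a network time series might look like the sketch below: compare the observed change in a statistic at a candidate time point against changes between randomly paired snapshots. The pairing scheme, the toy snapshots, and the (deliberately size-confounded) edge-count statistic are illustrative choices, not the dissertation's procedure:

```python
import random

def randomization_pvalue(series, t, stat, n_perm=2000):
    """Fraction of randomly chosen snapshot pairs whose difference in the
    statistic is at least as large as the observed change at time t."""
    observed = abs(stat(series[t]) - stat(series[t - 1]))
    exceed = 0
    for _ in range(n_perm):
        a, b = random.sample(series, 2)
        if abs(stat(a) - stat(b)) >= observed:
            exceed += 1
    return exceed / n_perm

random.seed(0)
# Toy dynamic network: nine quiet snapshots, then a burst of edges at t = 9.
snapshots = [[(0, 1), (1, 2)]] * 9 + [[(i, j) for i in range(5) for j in range(i)]]
print(randomization_pvalue(snapshots, t=9, stat=len))
```

Note that raw edge count is exactly the kind of statistic the abstract warns about: a low p-value here may reflect network growth rather than genuinely anomalous behavior, which motivates the size-consistent statistics the dissertation proposes.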
Reinforcement Learning for Semantic Segmentation in Indoor Scenes
Title | Reinforcement Learning for Semantic Segmentation in Indoor Scenes |
Authors | Md. Alimoor Reza, Jana Kosecka |
Abstract | Future advancements in robot autonomy and sophistication of robotics tasks rest on robust, efficient, and task-dependent semantic understanding of the environment. Semantic segmentation is the problem of simultaneous segmentation and categorization of a partition of sensory data. The majority of current approaches tackle this using multi-class segmentation and labeling in a Conditional Random Field (CRF) framework or by generating multiple object hypotheses and combining them sequentially. In practical settings, the subset of semantic labels that is needed depends on the task and the particular scene, and labelling every single pixel is not always necessary. We pursue these observations in developing a more modular and flexible approach to multi-class parsing of RGBD data based on learning strategies for combining independent binary object-vs-background segmentations, in place of the usual monolithic multi-label CRF approach. Parameters for the independent binary segmentation models can be learned very efficiently, and the combination strategy (learned using reinforcement learning) can be set independently and can vary over different tasks and environments. Accuracy is comparable to state-of-the-art methods on a subset of the NYU-V2 dataset of indoor scenes, while providing additional flexibility and modularity. |
Tasks | Semantic Segmentation |
Published | 2016-06-03 |
URL | http://arxiv.org/abs/1606.01178v1 |
http://arxiv.org/pdf/1606.01178v1.pdf | |
PWC | https://paperswithcode.com/paper/reinforcement-learning-for-semantic |
Repo | |
Framework | |
Predictive Coding for Dynamic Vision: Development of Functional Hierarchy in a Multiple Spatio-Temporal Scales RNN Model
Title | Predictive Coding for Dynamic Vision: Development of Functional Hierarchy in a Multiple Spatio-Temporal Scales RNN Model |
Authors | Minkyu Choi, Jun Tani |
Abstract | The current paper presents a novel recurrent neural network model, the predictive multiple spatio-temporal scales RNN (P-MSTRNN), which can generate as well as recognize dynamic visual patterns in the predictive coding framework. The model is characterized by multiple spatio-temporal scales imposed on neural unit dynamics, through which an adequate spatio-temporal hierarchy develops via learning from exemplars. The model was evaluated in an experiment on learning a set of whole-body human movement patterns generated by following a hierarchically defined movement syntax. The analysis of the trained model clarifies what types of spatio-temporal hierarchy develop in dynamic neural activity, as well as how robust generation and recognition of movement patterns can be achieved by using the error minimization principle. |
Tasks | |
Published | 2016-06-06 |
URL | http://arxiv.org/abs/1606.01672v3 |
http://arxiv.org/pdf/1606.01672v3.pdf | |
PWC | https://paperswithcode.com/paper/predictive-coding-for-dynamic-vision |
Repo | |
Framework | |
An Efficient Minibatch Acceptance Test for Metropolis-Hastings
Title | An Efficient Minibatch Acceptance Test for Metropolis-Hastings |
Authors | Daniel Seita, Xinlei Pan, Haoyu Chen, John Canny |
Abstract | We present a novel Metropolis-Hastings method for large datasets that uses small expected-size minibatches of data. Previous work on reducing the cost of Metropolis-Hastings tests yields a variable amount of data consumed per sample, with only constant-factor reductions versus using the full dataset for each sample. Here we present a method that can be tuned to provide arbitrarily small batch sizes, by adjusting either proposal step size or temperature. Our test uses the noise-tolerant Barker acceptance test with a novel additive correction variable. The resulting test has a cost similar to a normal SGD update. Our experiments demonstrate several order-of-magnitude speedups over previous work. |
Tasks | |
Published | 2016-10-19 |
URL | http://arxiv.org/abs/1610.06848v3 |
http://arxiv.org/pdf/1610.06848v3.pdf | |
PWC | https://paperswithcode.com/paper/an-efficient-minibatch-acceptance-test-for |
Repo | |
Framework | |
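The Barker acceptance rule the abstract builds on, together with a minibatch estimate of the log-likelihood difference, can be sketched as follows. This sketch omits the paper's novel additive correction variable for minibatch noise; the Gaussian model, batch size, and function names are illustrative assumptions:

```python
import math
import random

def barker_accept(delta):
    """Barker test: accept iff delta + X > 0 with X a standard-logistic
    sample; equivalently, accept with probability 1 / (1 + exp(-delta))."""
    u = random.random()
    return delta + math.log(u / (1.0 - u)) > 0

def minibatch_delta(data, log_lik, theta_new, theta_old, batch_size):
    """Minibatch estimate of the full-data log-likelihood difference,
    scaled up to the dataset size (unbiased but noisy)."""
    batch = random.sample(data, batch_size)
    scale = len(data) / batch_size
    return scale * sum(log_lik(x, theta_new) - log_lik(x, theta_old)
                       for x in batch)

random.seed(0)
data = [random.gauss(1.0, 1.0) for _ in range(1000)]
log_lik = lambda x, theta: -0.5 * (x - theta) ** 2
delta = minibatch_delta(data, log_lik, theta_new=1.0, theta_old=0.0,
                        batch_size=50)
print(barker_accept(delta))  # proposal moving toward the data mean
```

Because the logistic noise in the Barker rule can absorb (suitably corrected) estimation noise in `delta`, the test can run on a small batch rather than the full dataset, which is the source of the speedups the abstract reports.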
Multiview Rectification of Folded Documents
Title | Multiview Rectification of Folded Documents |
Authors | Shaodi You, Yasuyuki Matsushita, Sudipta Sinha, Yusuke Bou, Katsushi Ikeuchi |
Abstract | Digitally unwrapping images of paper sheets is crucial for accurate document scanning and text recognition. This paper presents a method for automatically rectifying curved or folded paper sheets from a few images captured from multiple viewpoints. Prior methods either need expensive 3D scanners or model deformable surfaces using over-simplified parametric representations. In contrast, our method uses regular images and is based on general developable surface models that can represent a wide variety of paper deformations. Our main contribution is a new robust rectification method based on ridge-aware 3D reconstruction of a paper sheet and unwrapping the reconstructed surface using properties of developable surfaces via $\ell_1$ conformal mapping. We present results on several examples including book pages, folded letters and shopping receipts. |
Tasks | 3D Reconstruction |
Published | 2016-06-01 |
URL | http://arxiv.org/abs/1606.00166v1 |
http://arxiv.org/pdf/1606.00166v1.pdf | |
PWC | https://paperswithcode.com/paper/multiview-rectification-of-folded-documents |
Repo | |
Framework | |
Towards Learning to Perceive and Reason About Liquids
Title | Towards Learning to Perceive and Reason About Liquids |
Authors | Connor Schenck, Dieter Fox |
Abstract | Recent advances in AI and robotics have claimed many incredible results with deep learning, yet no work to date has applied deep learning to the problem of liquid perception and reasoning. In this paper, we apply fully-convolutional deep neural networks to the tasks of detecting and tracking liquids. We evaluate three models: a single-frame network, a multi-frame network, and an LSTM recurrent network. Our results show that the best liquid detection results are achieved when aggregating data over multiple frames, and that the LSTM network outperforms the other two in both tasks. This suggests that LSTM-based neural networks have the potential to be a key component for enabling robots to handle liquids using robust, closed-loop controllers. |
Tasks | |
Published | 2016-08-02 |
URL | http://arxiv.org/abs/1608.00887v1 |
http://arxiv.org/pdf/1608.00887v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-learning-to-perceive-and-reason-about |
Repo | |
Framework | |
Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes
Title | Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes |
Authors | Carlo Baldassi, Christian Borgs, Jennifer Chayes, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina |
Abstract | In artificial neural networks, learning from data is a computationally demanding task in which a large number of connection weights are iteratively tuned through stochastic-gradient-based heuristic processes over a cost-function. It is not well understood how learning occurs in these systems, in particular how they avoid getting trapped in configurations with poor computational performance. Here we study the difficult case of networks with discrete weights, where the optimization landscape is very rough even for simple architectures, and provide theoretical and numerical evidence of the existence of rare - but extremely dense and accessible - regions of configurations in the network weight space. We define a novel measure, which we call the “robust ensemble” (RE), which suppresses trapping by isolated configurations and amplifies the role of these dense regions. We analytically compute the RE in some exactly solvable models, and also provide a general algorithmic scheme which is straightforward to implement: define a cost-function given by a sum of a finite number of replicas of the original cost-function, with a constraint centering the replicas around a driving assignment. To illustrate this, we derive several powerful new algorithms, ranging from Markov Chains to message passing to gradient descent processes, where the algorithms target the robust dense states, resulting in substantial improvements in performance. The weak dependence on the number of precision bits of the weights leads us to conjecture that very similar reasoning applies to more conventional neural networks. Analogous algorithmic schemes can also be applied to other optimization problems. |
Tasks | |
Published | 2016-05-20 |
URL | http://arxiv.org/abs/1605.06444v3 |
http://arxiv.org/pdf/1605.06444v3.pdf | |
PWC | https://paperswithcode.com/paper/unreasonable-effectiveness-of-learning-neural |
Repo | |
Framework | |
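The replicated cost-function scheme described in the abstract can be sketched with a one-dimensional toy loss: run several coupled copies of gradient descent, each pulled toward the replica average. Coupling to the running average (rather than to a separate driving assignment, as the paper describes) and the quadratic toy loss are simplifications for illustration:

```python
import random

def replicated_descent(loss_grad, n_replicas=3, gamma=0.5, lr=0.1, steps=200):
    """Gradient descent on a replicated cost: each replica follows its own
    loss gradient plus an elastic pull toward the replica average."""
    w = [random.uniform(-2.0, 2.0) for _ in range(n_replicas)]
    for _ in range(steps):
        center = sum(w) / len(w)
        w = [wi - lr * (loss_grad(wi) + gamma * (wi - center)) for wi in w]
    return w

random.seed(0)
# Toy loss L(w) = (w - 1)^2 with gradient 2(w - 1); replicas agree near w = 1.
replicas = replicated_descent(lambda w: 2.0 * (w - 1.0))
print(replicas)
```

On a rough, non-convex landscape (unlike this convex toy) the coupling term is what biases the ensemble toward the wide, dense regions of minima that the paper argues generalize well.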
Compilation as a Typed EDSL-to-EDSL Transformation
Title | Compilation as a Typed EDSL-to-EDSL Transformation |
Authors | Emil Axelsson |
Abstract | This article is about an implementation and compilation technique used in RAW-Feldspar, a complete rewrite of the Feldspar embedded domain-specific language (EDSL) (Axelsson et al. 2010). Feldspar is a high-level functional language that generates efficient C code to run on embedded targets. The gist of the technique presented in this article is the following: rather than writing a back end that converts pure Feldspar expressions directly to C, we translate them to a low-level monadic EDSL. From the low-level EDSL, C code is then generated. This approach has several advantages: 1. The translation is simpler to write than a complete C back end. 2. The translation is between two typed EDSLs, which rules out many potential errors. 3. The low-level EDSL is reusable and can be shared between several high-level EDSLs. Although the article contains a lot of code, most of it is in fact reusable. As mentioned in the Discussion, we can write the same implementation in less than 50 lines of code using generic libraries that we have developed to support Feldspar. |
Tasks | |
Published | 2016-03-29 |
URL | http://arxiv.org/abs/1603.08865v3 |
http://arxiv.org/pdf/1603.08865v3.pdf | |
PWC | https://paperswithcode.com/paper/compilation-as-a-typed-edsl-to-edsl |
Repo | |
Framework | |
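The expression-to-low-level translation idea can be sketched outside Haskell as well: lower a pure expression tree into a flat list of single-assignment statements, from which C-like text falls out directly. The tuple-based expression "EDSL" and statement list below are illustrative stand-ins and, unlike Feldspar's technique, carry no type information:

```python
import itertools

def compile_expr(expr, stmts, fresh):
    """Lower a pure expression tree (the 'high-level EDSL') into a list of
    single-assignment statements (the 'low-level EDSL'); returns the name
    of the variable (or literal) holding the result."""
    if isinstance(expr, int):          # literals pass through unchanged
        return str(expr)
    op, lhs, rhs = expr                # binary node: recurse, then emit
    a = compile_expr(lhs, stmts, fresh)
    b = compile_expr(rhs, stmts, fresh)
    v = f"v{next(fresh)}"
    stmts.append(f"int {v} = {a} {op} {b};")
    return v

stmts = []
result = compile_expr(("+", ("*", 2, 3), 4), stmts, itertools.count())
print("\n".join(stmts))  # C-like code emitted by the low-level stage
```

The point the article makes is that only this last, simple stage needs to know about C; a typed low-level representation in between catches many errors that a direct expression-to-C back end would let through.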
Cooperative Training of Descriptor and Generator Networks
Title | Cooperative Training of Descriptor and Generator Networks |
Authors | Jianwen Xie, Yang Lu, Ruiqi Gao, Song-Chun Zhu, Ying Nian Wu |
Abstract | This paper studies the cooperative training of two generative models for image modeling and synthesis. Both models are parametrized by convolutional neural networks (ConvNets). The first model is a deep energy-based model, whose energy function is defined by a bottom-up ConvNet, which maps the observed image to the energy. We call it the descriptor network. The second model is a generator network, which is a non-linear version of factor analysis. It is defined by a top-down ConvNet, which maps the latent factors to the observed image. The maximum likelihood learning algorithms of both models involve MCMC sampling such as Langevin dynamics. We observe that the two learning algorithms can be seamlessly interwoven into a cooperative learning algorithm that can train both models simultaneously. Specifically, within each iteration of the cooperative learning algorithm, the generator model generates initial synthesized examples to initialize a finite-step MCMC that samples and trains the energy-based descriptor model. After that, the generator model learns from how the MCMC changes its synthesized examples. That is, the descriptor model teaches the generator model by MCMC, so that the generator model accumulates the MCMC transitions and reproduces them by direct ancestral sampling. We call this scheme MCMC teaching. We show that the cooperative algorithm can learn highly realistic generative models. |
Tasks | |
Published | 2016-09-29 |
URL | http://arxiv.org/abs/1609.09408v3 |
http://arxiv.org/pdf/1609.09408v3.pdf | |
PWC | https://paperswithcode.com/paper/cooperative-training-of-descriptor-and |
Repo | |
Framework | |
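The alternating loop the abstract describes can be sketched in one dimension, with simple moment-matching updates standing in for the actual maximum-likelihood gradients of the ConvNet models. Everything here (the quadratic energy, the shift-only generator, the step sizes) is an illustrative assumption, not the paper's architecture:

```python
import random

random.seed(0)
data = [random.gauss(3.0, 0.5) for _ in range(500)]
mu = 0.0   # descriptor: energy E(x) = (x - mu)^2, low energy near mu
b = 0.0    # generator: x = b + z with z ~ N(0, 1)

def langevin(x, mu, steps=10, lr=0.05):
    """Finite-step Langevin dynamics on the descriptor's energy."""
    for _ in range(steps):
        x = x - lr * 2.0 * (x - mu) + (2.0 * lr) ** 0.5 * random.gauss(0.0, 1.0)
    return x

for _ in range(300):
    x0 = [b + random.gauss(0.0, 1.0) for _ in range(64)]  # generator initializes
    x1 = [langevin(x, mu) for x in x0]                    # descriptor's MCMC revises
    batch = random.sample(data, 64)
    # Descriptor step: pull mu so synthesized samples match the data.
    mu += 0.1 * (sum(batch) / 64 - sum(x1) / 64)
    # Generator step: absorb how the MCMC shifted its samples ("MCMC teaching").
    b += 0.1 * (sum(x1) / 64 - sum(x0) / 64)

print(mu, b)  # both drift toward the data mean, 3.0
```

The structural point survives the simplification: the generator jump-starts the descriptor's finite-step MCMC, and the generator in turn learns from the MCMC's revisions rather than from the data directly.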