Paper Group ANR 380
Function Driven Diffusion for Personalized Counterfactual Inference
Title | Function Driven Diffusion for Personalized Counterfactual Inference |
Authors | Alexander Cloninger |
Abstract | We consider the problem of constructing diffusion operators on high-dimensional data $X$ to address counterfactual functions $F$, such as individualized treatment effectiveness. We propose and construct a new diffusion metric $K_F$ that captures both the local geometry of $X$ and the directions of variance of $F$. The resulting diffusion metric is then used to define a localized filtration of $F$ and answer counterfactual questions pointwise, particularly in situations such as drug trials where an individual patient’s outcomes cannot be studied long term under both taking and not taking a medication. We validate the model on synthetic and real world clinical trials, and create individualized notions of benefit from treatment. |
Tasks | Counterfactual Inference |
Published | 2016-10-31 |
URL | http://arxiv.org/abs/1610.10025v5 |
PDF | http://arxiv.org/pdf/1610.10025v5.pdf |
PWC | https://paperswithcode.com/paper/function-driven-diffusion-for-personalized |
Repo | |
Framework | |
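The abstract's $K_F$ construction can be sketched in a minimal form: a Gaussian affinity on the geometry of $X$, modulated by pairwise differences of the outcome function $F$. This is an illustrative sketch only; the bandwidths `eps` and `delta` and the exact coupling are assumptions, not the paper's construction.

```python
import numpy as np

def function_driven_kernel(X, F, eps=1.0, delta=1.0):
    """Sketch of a diffusion affinity coupling the local geometry of X
    with the variation of a function F, in the spirit of K_F.
    eps and delta are illustrative bandwidths, not the paper's choices."""
    # Pairwise squared distances in the ambient space.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Pairwise squared differences of the function values.
    f2 = (F[:, None] - F[None, :]) ** 2
    # Affinity decays with both geometric and functional distance.
    K = np.exp(-d2 / eps) * np.exp(-f2 / delta)
    # Row-normalize to obtain a Markov (diffusion) operator.
    P = K / K.sum(axis=1, keepdims=True)
    return K, P

X = np.random.default_rng(0).normal(size=(20, 3))
F = X[:, 0] ** 2                  # toy outcome function
K, P = function_driven_kernel(X, F)
```

Points that are close in $X$ but have very different outcomes $F$ receive low affinity, so diffusion stays within regions where the counterfactual function varies smoothly.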
Material Recognition from Local Appearance in Global Context
Title | Material Recognition from Local Appearance in Global Context |
Authors | Gabriel Schwartz, Ko Nishino |
Abstract | Recognition of materials has proven to be a challenging problem due to the wide variation in appearance within and between categories. Global image context, such as where the material is or what object it makes up, can be crucial to recognizing the material. Existing methods, however, operate on an implicit fusion of materials and context by using large receptive fields as input (i.e., large image patches). Many recent material recognition methods treat materials as yet another set of labels like objects. Materials are, however, fundamentally different from objects as they have no inherent shape or defined spatial extent. Approaches that ignore this can only take advantage of limited implicit context as it appears during training. We instead show that recognizing materials purely from their local appearance and integrating separately recognized global contextual cues including objects and places leads to superior dense, per-pixel, material recognition. We achieve this by training a fully-convolutional material recognition network end-to-end with only material category supervision. We integrate object and place estimates into this network from independent CNNs. This approach avoids the necessity of preparing an impractically-large amount of training data to cover the product space of materials, objects, and scenes, while fully leveraging contextual cues for dense material recognition. Furthermore, we perform a detailed analysis of the effects of context granularity, spatial resolution, and the network level at which we introduce context. On a recently introduced comprehensive and diverse material database \cite{Schwartz2016}, we confirm that our method achieves state-of-the-art accuracy with significantly less training data compared to past methods. |
Tasks | Material Recognition |
Published | 2016-11-28 |
URL | http://arxiv.org/abs/1611.09394v3 |
PDF | http://arxiv.org/pdf/1611.09394v3.pdf |
PWC | https://paperswithcode.com/paper/material-recognition-from-local-appearance-in |
Repo | |
Framework | |
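The idea of combining per-pixel local material scores with separately recognized context can be sketched as a log-linear fusion. The combination rule, the weight `w`, and the context-conditioned priors are illustrative assumptions; the paper integrates context through trained network layers, not a fixed formula.

```python
import numpy as np

def fuse_with_context(material_logits, object_prior, place_prior, w=0.5):
    """Toy fusion of local per-pixel material scores with separately
    estimated object/place context. The log-linear combination and the
    weight w are illustrative, not the paper's trained integration."""
    # material_logits: (H, W, M); priors: (M,) material priors given context.
    ctx = w * np.log(object_prior + 1e-9) + (1 - w) * np.log(place_prior + 1e-9)
    scores = material_logits + ctx                 # broadcast over pixels
    # Softmax over the material axis -> dense per-pixel posterior.
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
logits = rng.normal(size=(4, 4, 5))                # local appearance scores
obj = np.array([0.7, 0.1, 0.1, 0.05, 0.05])        # e.g. object "cup" favors ceramic
plc = np.full(5, 0.2)                              # uninformative place prior
post = fuse_with_context(logits, obj, plc)
```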
Transfer Learning for Material Classification using Convolutional Networks
Title | Transfer Learning for Material Classification using Convolutional Networks |
Authors | Patrick Wieschollek, Hendrik P. A. Lensch |
Abstract | Material classification in natural settings is a challenge due to the complex interplay of geometry, reflectance properties, and illumination. Previous work on material classification relies strongly on hand-engineered features of visual samples. In this work we use a Convolutional Neural Network (convnet) that learns descriptive features for the specific task of material recognition. Specifically, transfer learning from the task of object recognition is exploited to more effectively train good features for material classification. The approach of transfer learning using convnets yields significantly higher recognition rates when compared to previous state-of-the-art approaches. We then analyze the relative contribution of reflectance and shading information by a decomposition of the image into its intrinsic components. The use of convnets for material classification has been hindered by the strong demand for sufficient and diverse training data, even with transfer learning approaches. Therefore, we present a new data set containing approximately 10k images divided into 10 material categories. |
Tasks | Material Classification, Material Recognition, Object Recognition, Transfer Learning |
Published | 2016-09-20 |
URL | http://arxiv.org/abs/1609.06188v1 |
PDF | http://arxiv.org/pdf/1609.06188v1.pdf |
PWC | https://paperswithcode.com/paper/transfer-learning-for-material-classification |
Repo | |
Framework | |
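The transfer-learning recipe the abstract describes — reuse features learned for object recognition, retrain only a classifier for materials — can be sketched with the pretrained backbone stubbed out. The features, the softmax head, and the hyperparameters here are illustrative; the paper fine-tunes an actual convnet.

```python
import numpy as np

def train_head(features, labels, n_classes, lr=0.5, steps=200):
    """Transfer-learning sketch: treat `features` as outputs of a frozen,
    pretrained convnet and train only a new softmax head for the material
    classes. The backbone itself is stubbed out for brevity."""
    W = np.zeros((features.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]
    for _ in range(steps):
        z = features @ W + b
        p = np.exp(z - z.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        g = (p - Y) / len(labels)              # softmax cross-entropy gradient
        W -= lr * features.T @ g
        b -= lr * g.sum(axis=0)
    return W, b

# Toy "pretrained features": two well-separated material clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
W, b = train_head(X, y, n_classes=2)
acc = ((X @ W + b).argmax(axis=1) == y).mean()
```

Freezing the feature extractor is what lets a comparatively small dataset (here, ~10k images in the paper's setting) suffice.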
A 4D Light-Field Dataset and CNN Architectures for Material Recognition
Title | A 4D Light-Field Dataset and CNN Architectures for Material Recognition |
Authors | Ting-Chun Wang, Jun-Yan Zhu, Ebi Hiroaki, Manmohan Chandraker, Alexei A. Efros, Ravi Ramamoorthi |
Abstract | We introduce a new light-field dataset of materials, and take advantage of the recent success of deep learning to perform material recognition on the 4D light-field. Our dataset contains 12 material categories, each with 100 images taken with a Lytro Illum, from which we extract about 30,000 patches in total. To the best of our knowledge, this is the first mid-size dataset for light-field images. Our main goal is to investigate whether the additional information in a light-field (such as multiple sub-aperture views and view-dependent reflectance effects) can aid material recognition. Since recognition networks have not been trained on 4D images before, we propose and compare several novel CNN architectures to train on light-field images. In our experiments, the best performing CNN architecture achieves a 7% boost compared with 2D image classification (70% to 77%). These results constitute important baselines that can spur further research in the use of CNNs for light-field applications. Upon publication, our dataset also enables other novel applications of light-fields, including object detection, image segmentation and view interpolation. |
Tasks | Image Classification, Material Recognition, Object Detection, Semantic Segmentation |
Published | 2016-08-24 |
URL | http://arxiv.org/abs/1608.06985v1 |
PDF | http://arxiv.org/pdf/1608.06985v1.pdf |
PWC | https://paperswithcode.com/paper/a-4d-light-field-dataset-and-cnn |
Repo | |
Framework | |
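One of the simplest ways to feed a 4D light field to a 2D CNN — and one plausible baseline among the architectures the paper compares — is to stack the angular sub-aperture views along the channel axis. The `(u, v, x, y, c)` axis layout is an assumption for illustration.

```python
import numpy as np

def angular_to_channels(lf):
    """Reshape a 4D light field so every u x v sub-aperture view becomes
    extra input channels for an ordinary 2D CNN. Axis layout
    (u, v, x, y, c) is assumed here."""
    u, v, x, y, c = lf.shape
    # (u, v, x, y, c) -> (x, y, u*v*c): angular views become channels.
    return lf.transpose(2, 3, 0, 1, 4).reshape(x, y, u * v * c)

# Toy Lytro-style field: 7x7 angular views of a 32x32 RGB patch.
lf = np.random.default_rng(0).random((7, 7, 32, 32, 3))
stacked = angular_to_channels(lf)
```

View-dependent effects (e.g. specular reflectance) then appear as structured variation across the channel dimension, which the first convolution can exploit.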
Geometry-Informed Material Recognition
Title | Geometry-Informed Material Recognition |
Authors | Joseph DeGol, Mani Golparvar-Fard, Derek Hoiem |
Abstract | Our goal is to recognize material categories using images and geometry information. In many applications, such as construction management, coarse geometry information is available. We investigate how 3D geometry (surface normals, camera intrinsic and extrinsic parameters) can be used with 2D features (texture and color) to improve material classification. We introduce a new dataset, GeoMat, which is the first to provide both image and geometry data in the form of: (i) training and testing patches that were extracted at different scales and perspectives from real world examples of each material category, and (ii) a large scale construction site scene that includes 160 images and over 800,000 hand labeled 3D points. Our results show that using 2D and 3D features both jointly and independently to model materials improves classification accuracy across multiple scales and viewing directions for both material patches and images of a large scale construction site scene. |
Tasks | Material Classification, Material Recognition |
Published | 2016-07-18 |
URL | http://arxiv.org/abs/1607.05338v1 |
PDF | http://arxiv.org/pdf/1607.05338v1.pdf |
PWC | https://paperswithcode.com/paper/geometry-informed-material-recognition |
Repo | |
Framework | |
Analysis of the Entropy-guided Switching Trimmed Mean Deviation-based Anisotropic Diffusion filter
Title | Analysis of the Entropy-guided Switching Trimmed Mean Deviation-based Anisotropic Diffusion filter |
Authors | Uche A. Nnolim |
Abstract | This report describes the experimental analysis of a proposed switching filter-anisotropic diffusion hybrid for the filtering of the fixed value (salt and pepper) impulse noise (FVIN). The filter works well at both low and high noise densities though it was specifically designed for high noise density levels. The filter combines the switching mechanism of decision-based filters and the partial differential equation-based formulation to yield a powerful system capable of recovering the image signals at very high noise levels. Experimental results indicate that the filter surpasses other filters, especially at very high noise levels. Additionally, its adaptive nature ensures that the performance is guided by the metrics obtained from the noisy input image. The filter algorithm is of both global and local nature, where the former is chosen to reduce computation time and complexity, while the latter is used for best results. |
Tasks | |
Published | 2016-04-21 |
URL | http://arxiv.org/abs/1604.06427v2 |
PDF | http://arxiv.org/pdf/1604.06427v2.pdf |
PWC | https://paperswithcode.com/paper/analysis-of-the-entropy-guided-switching |
Repo | |
Framework | |
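The switching mechanism the abstract describes — only pixels detected as fixed-value impulses are modified — can be sketched with a trimmed-mean replacement. This covers the decision/switching stage only; the anisotropic diffusion (PDE) stage of the hybrid is omitted, and the 3×3 window and trim amount are illustrative choices.

```python
import numpy as np

def switching_trimmed_mean(img, trim=2):
    """Sketch of the switching stage only: pixels detected as fixed-value
    impulses (0 or 255) are replaced by a trimmed mean of their 3x3
    neighborhood; clean pixels pass through untouched."""
    out = img.astype(float).copy()
    pad = np.pad(img.astype(float), 1, mode='edge')
    impulse = (img == 0) | (img == 255)          # impulse detection (the "switch")
    for i, j in zip(*np.nonzero(impulse)):
        win = np.sort(pad[i:i+3, j:j+3].ravel())
        out[i, j] = win[trim:-trim].mean()       # trimmed mean of the 9 neighbors
    return out

rng = np.random.default_rng(0)
clean = np.full((16, 16), 128.0)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.3             # 30% fixed-value impulse noise
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
restored = switching_trimmed_mean(noisy)
```

Trimming discards the extreme neighborhood values, so isolated impulses in the window do not corrupt the estimate; at very high densities this alone is insufficient, which is where the diffusion stage comes in.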
Separation of Concerns in Reinforcement Learning
Title | Separation of Concerns in Reinforcement Learning |
Authors | Harm van Seijen, Mehdi Fatemi, Joshua Romoff, Romain Laroche |
Abstract | In this paper, we propose a framework for solving a single-agent task by using multiple agents, each focusing on different aspects of the task. This approach has two main advantages: 1) it allows for training specialized agents on different parts of the task, and 2) it provides a new way to transfer knowledge, by transferring trained agents. Our framework generalizes the traditional hierarchical decomposition, in which, at any moment in time, a single agent has control until it has solved its particular subtask. We illustrate our framework with empirical experiments on two domains. |
Tasks | |
Published | 2016-12-15 |
URL | http://arxiv.org/abs/1612.05159v2 |
PDF | http://arxiv.org/pdf/1612.05159v2.pdf |
PWC | https://paperswithcode.com/paper/separation-of-concerns-in-reinforcement |
Repo | |
Framework | |
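A minimal sketch of the multi-agent decomposition: each agent maintains values for its own aspect of the task, and a single executed action is chosen by aggregating them. The additive (weighted-sum) aggregation is an illustrative assumption, not the paper's only scheme.

```python
import numpy as np

def joint_action(q_tables, state, weights=None):
    """Sketch: each specialized agent keeps its own Q-values for its aspect
    of the task; the single action executed maximizes their weighted sum."""
    q_tables = np.asarray(q_tables)       # (n_agents, n_states, n_actions)
    if weights is None:
        weights = np.ones(len(q_tables))
    combined = np.tensordot(weights, q_tables[:, state, :], axes=1)
    return int(np.argmax(combined))

# Agent 0 cares about reaching the goal, agent 1 about avoiding a hazard.
q_goal   = np.array([[1.0, 0.0], [0.0, 1.0]])    # states x actions
q_safety = np.array([[0.0, 0.2], [0.0, -5.0]])
a0 = joint_action([q_goal, q_safety], state=0)   # goal agent dominates
a1 = joint_action([q_goal, q_safety], state=1)   # safety agent vetoes action 1
```

Transferring a trained agent then amounts to reusing one of the `q_tables` in a new task.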
Sequence-based Sleep Stage Classification using Conditional Neural Fields
Title | Sequence-based Sleep Stage Classification using Conditional Neural Fields |
Authors | Intan Nurma Yulita, Mohamad Ivan Fanany, Aniati Murni Arymurthy |
Abstract | Sleep signals from a polysomnographic database are sequences in nature. Commonly employed analysis and classification methods, however, ignore this fact and treat the sleep signals as non-sequence data. Treating the sleep signals as sequences, this paper compared two powerful unsupervised feature extractors and three sequence-based classifiers regarding accuracy and computational (training and testing) time after 10-fold cross-validation. The compared feature extractors are Deep Belief Networks (DBN) and Fuzzy C-Means (FCM) clustering. The compared sequence-based classifiers are Hidden Markov Models (HMM), Conditional Random Fields (CRF) and its variants, i.e., Hidden-state CRF (HCRF) and Latent-Dynamic CRF (LDCRF); and Conditional Neural Fields (CNF) and its variant (LDCNF). In this study, we use two datasets. The first dataset is an open (public) polysomnographic dataset downloadable from the Internet, while the second dataset is our polysomnographic dataset (also available for download). For the first dataset, the combination of FCM and CNF gives the highest accuracy (96.75%) with relatively short training time (0.33 hours). For the second dataset, the combination of DBN and CRF gives the accuracy of 99.96% but with 1.02 hours training time, whereas the combination of DBN and CNF gives slightly less accuracy (99.69%) but also less computation time (0.89 hours). |
Tasks | |
Published | 2016-10-06 |
URL | http://arxiv.org/abs/1610.01935v1 |
PDF | http://arxiv.org/pdf/1610.01935v1.pdf |
PWC | https://paperswithcode.com/paper/sequence-based-sleep-stage-classification |
Repo | |
Framework | |
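The FCM feature extractor compared above turns each epoch of the sleep signal into a vector of soft cluster memberships. The membership update below is the standard Fuzzy C-Means formula; a full FCM alternates it with center updates, which are omitted here.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy C-Means membership computation: each sample's feature vector
    is its degree of membership to each cluster (fuzzifier m=2 is the
    common default). One update step; center re-estimation is omitted."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)   # memberships sum to 1

rng = np.random.default_rng(0)
# Two toy "sleep stage" clusters in a 4-dim signal-feature space.
X = np.vstack([rng.normal(0, 0.3, (20, 4)), rng.normal(2, 0.3, (20, 4))])
centers = np.array([np.zeros(4), np.full(4, 2.0)])
U = fcm_memberships(X, centers)
```

The membership vectors `U` (rather than raw signals) are what a sequence classifier such as CNF or CRF would then consume.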
Integrating Local Material Recognition with Large-Scale Perceptual Attribute Discovery
Title | Integrating Local Material Recognition with Large-Scale Perceptual Attribute Discovery |
Authors | Gabriel Schwartz, Ko Nishino |
Abstract | Material attributes have been shown to provide a discriminative intermediate representation for recognizing materials, especially for the challenging task of recognition from local material appearance (i.e., regardless of object and scene context). In the past, however, material attributes have been recognized separately preceding category recognition. In contrast, neuroscience studies on material perception and computer vision research on object and place recognition have shown that attributes are produced as a by-product during the category recognition process. Does the same hold true for material attribute and category recognition? In this paper, we introduce a novel material category recognition network architecture to show that perceptual attributes can, in fact, be automatically discovered inside a local material recognition framework. The novel material-attribute-category convolutional neural network (MAC-CNN) produces perceptual material attributes from the intermediate pooling layers of an end-to-end trained category recognition network using an auxiliary loss function that encodes human material perception. To train this model, we introduce a novel large-scale database of local material appearance organized under a canonical material category taxonomy and careful image patch extraction that avoids unwanted object and scene context. We show that the discovered attributes correspond well with semantically-meaningful visual material traits via Boolean algebra, and enable recognition of previously unseen material categories given only a few examples. These results have strong implications in how perceptually meaningful attributes can be learned in other recognition tasks. |
Tasks | Material Recognition |
Published | 2016-04-05 |
URL | http://arxiv.org/abs/1604.01345v4 |
PDF | http://arxiv.org/pdf/1604.01345v4.pdf |
PWC | https://paperswithcode.com/paper/integrating-local-material-recognition-with |
Repo | |
Framework | |
Contextual Media Retrieval Using Natural Language Queries
Title | Contextual Media Retrieval Using Natural Language Queries |
Authors | Sreyasi Nag Chowdhury, Mateusz Malinowski, Andreas Bulling, Mario Fritz |
Abstract | The widespread integration of cameras in hand-held and head-worn devices as well as the ability to share content online enables a large and diverse visual capture of the world that millions of users build up collectively every day. We envision these images as well as associated meta information, such as GPS coordinates and timestamps, to form a collective visual memory that can be queried while automatically taking the ever-changing context of mobile users into account. As a first step towards this vision, in this work we present Xplore-M-Ego: a novel media retrieval system that allows users to query a dynamic database of images and videos using spatio-temporal natural language queries. We evaluate our system using a new dataset of real user queries as well as through a usability study. One key finding is that there is a considerable amount of inter-user variability, for example in the resolution of spatial relations in natural language utterances. We show that our retrieval system can cope with this variability using personalisation through an online learning-based retrieval formulation. |
Tasks | |
Published | 2016-02-16 |
URL | http://arxiv.org/abs/1602.04983v1 |
PDF | http://arxiv.org/pdf/1602.04983v1.pdf |
PWC | https://paperswithcode.com/paper/contextual-media-retrieval-using-natural |
Repo | |
Framework | |
A Novel Boundary Matching Algorithm for Video Temporal Error Concealment
Title | A Novel Boundary Matching Algorithm for Video Temporal Error Concealment |
Authors | Seyed Mojtaba Marvasti-Zadeh, Hossein Ghanei-Yakhdan, Shohreh Kasaei |
Abstract | With the fast growth of communication networks, video data transmitted over these networks is extremely vulnerable to errors. Error concealment is a technique to estimate the damaged data by employing the correctly received data at the decoder. In this paper, an efficient boundary matching algorithm for estimating damaged motion vectors (MVs) is proposed. The proposed algorithm performs error concealment for each damaged macroblock (MB) according to the list of identified priorities of each frame. It then adaptively uses either a classic boundary matching criterion or the proposed boundary matching criterion to measure matching distortion at each boundary of a candidate MB. Finally, the candidate MV with minimum distortion is selected as the MV of the damaged MB and the list of priorities is updated. Experimental results show that the proposed algorithm improves both objective and subjective qualities of reconstructed frames without any significant increase in computational cost. The PSNR for test sequences in some frames is increased by about 4.7, 4.5, and 4.4 dB compared to the classic boundary matching, directional boundary matching, and directional temporal boundary matching algorithms, respectively. |
Tasks | |
Published | 2016-10-25 |
URL | http://arxiv.org/abs/1610.07753v1 |
PDF | http://arxiv.org/pdf/1610.07753v1.pdf |
PWC | https://paperswithcode.com/paper/a-novel-boundary-matching-algorithm-for-video |
Repo | |
Framework | |
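The classic boundary matching criterion the paper builds on can be sketched directly: a candidate MV is scored by the sum of absolute differences between the outer boundary of the motion-compensated block (taken from the previous frame) and the correctly received pixels surrounding the damaged block in the current frame. Block position and MV conventions below are illustrative.

```python
import numpy as np

def boundary_match_cost(frame_prev, mb_pos, mb_size, candidate_mv, frame_cur):
    """Classic boundary matching sketch: SAD between the boundary of the
    motion-compensated candidate block and the received pixels surrounding
    the damaged macroblock."""
    y, x = mb_pos
    dy, dx = candidate_mv
    n = mb_size
    ref = frame_prev[y+dy : y+dy+n, x+dx : x+dx+n]   # candidate replacement block
    # One-pixel-wide neighbours around the damaged block in the current frame.
    top    = np.abs(ref[0, :]  - frame_cur[y-1,   x:x+n]).sum()
    bottom = np.abs(ref[-1, :] - frame_cur[y+n,   x:x+n]).sum()
    left   = np.abs(ref[:, 0]  - frame_cur[y:y+n, x-1]).sum()
    right  = np.abs(ref[:, -1] - frame_cur[y:y+n, x+n]).sum()
    return top + bottom + left + right

# Toy frames: the scene shifts right by 1 px, so the backward MV (0, -1)
# (pointing to the content's previous location) should score best.
prev = np.tile(np.arange(16.0), (16, 1))
cur = np.roll(prev, 1, axis=1)
costs = {mv: boundary_match_cost(prev, (4, 4), 4, mv, cur)
         for mv in [(0, 0), (0, 1), (0, -1)]}
best = min(costs, key=costs.get)
```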
A causal framework for discovering and removing direct and indirect discrimination
Title | A causal framework for discovering and removing direct and indirect discrimination |
Authors | Lu Zhang, Yongkai Wu, Xintao Wu |
Abstract | Anti-discrimination is an increasingly important task in data science. In this paper, we investigate the problem of discovering both direct and indirect discrimination from the historical data, and removing the discriminatory effects before the data is used for predictive analysis (e.g., building classifiers). We make use of the causal network to capture the causal structure of the data. Then we model direct and indirect discrimination as the path-specific effects, which explicitly distinguish the two types of discrimination as the causal effects transmitted along different paths in the network. Based on that, we propose an effective algorithm for discovering direct and indirect discrimination, as well as an algorithm for precisely removing both types of discrimination while retaining good data utility. Different from previous works, our approaches can ensure that the predictive models built from the modified data will not incur discrimination in decision making. Experiments using real datasets show the effectiveness of our approaches. |
Tasks | Decision Making |
Published | 2016-11-22 |
URL | http://arxiv.org/abs/1611.07509v1 |
PDF | http://arxiv.org/pdf/1611.07509v1.pdf |
PWC | https://paperswithcode.com/paper/a-causal-framework-for-discovering-and |
Repo | |
Framework | |
Formal Verification of Autonomous Vehicle Platooning
Title | Formal Verification of Autonomous Vehicle Platooning |
Authors | Maryam Kamali, Louise A. Dennis, Owen McAree, Michael Fisher, Sandor M. Veres |
Abstract | The coordination of multiple autonomous vehicles into convoys or platoons is expected on our highways in the near future. However, before such platoons can be deployed, the new autonomous behaviors of the vehicles in these platoons must be certified. An appropriate representation for vehicle platooning is as a multi-agent system in which each agent captures the “autonomous decisions” carried out by each vehicle. In order to ensure that these autonomous decision-making agents in vehicle platoons never violate safety requirements, we use formal verification. However, as the formal verification technique used to verify the agent code does not scale to the full system and as the global verification technique does not capture the essential verification of autonomous behavior, we use a combination of the two approaches. This mixed strategy allows us to verify safety requirements not only of a model of the system, but of the actual agent code used to program the autonomous vehicles. |
Tasks | Autonomous Vehicles, Decision Making |
Published | 2016-02-04 |
URL | http://arxiv.org/abs/1602.01718v1 |
PDF | http://arxiv.org/pdf/1602.01718v1.pdf |
PWC | https://paperswithcode.com/paper/formal-verification-of-autonomous-vehicle |
Repo | |
Framework | |
Adaptive Joint Learning of Compositional and Non-Compositional Phrase Embeddings
Title | Adaptive Joint Learning of Compositional and Non-Compositional Phrase Embeddings |
Authors | Kazuma Hashimoto, Yoshimasa Tsuruoka |
Abstract | We present a novel method for jointly learning compositional and non-compositional phrase embeddings by adaptively weighting both types of embeddings using a compositionality scoring function. The scoring function is used to quantify the level of compositionality of each phrase, and the parameters of the function are jointly optimized with the objective for learning phrase embeddings. In experiments, we apply the adaptive joint learning method to the task of learning embeddings of transitive verb phrases, and show that the compositionality scores have strong correlation with human ratings for verb-object compositionality, substantially outperforming the previous state of the art. Moreover, our embeddings improve upon the previous best model on a transitive verb disambiguation task. We also show that a simple ensemble technique further improves the results for both tasks. |
Tasks | |
Published | 2016-03-19 |
URL | http://arxiv.org/abs/1603.06067v3 |
PDF | http://arxiv.org/pdf/1603.06067v3.pdf |
PWC | https://paperswithcode.com/paper/adaptive-joint-learning-of-compositional-and |
Repo | |
Framework | |
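The core mechanism — a compositionality score that adaptively interpolates between a composed embedding and a directly learned (non-compositional) one — can be written in a few lines. The sigmoid scorer and the toy vectors below are placeholders for the jointly optimized scoring function and trained embeddings.

```python
import numpy as np

def phrase_embedding(v_comp, v_noncomp, score):
    """Sketch: a compositionality score alpha(p) in (0, 1) adaptively
    weights the composed embedding against a non-compositional one.
    The sigmoid stands in for the jointly trained scoring function."""
    alpha = 1.0 / (1.0 + np.exp(-score))
    return alpha * v_comp + (1.0 - alpha) * v_noncomp, alpha

# "buy car" is compositional; "kick bucket" (an idiom) is not.
v_buy_car, v_buy_car_nc = np.array([1.0, 0.0]), np.array([0.9, 0.1])
emb_lit, a_lit = phrase_embedding(v_buy_car, v_buy_car_nc, score=4.0)

v_kick_b, v_kick_b_nc = np.array([0.0, 1.0]), np.array([1.0, -1.0])
emb_idm, a_idm = phrase_embedding(v_kick_b, v_kick_b_nc, score=-4.0)
```

A high score leaves the phrase to composition; a low score lets the idiom's own learned vector dominate, which is how one model serves both phrase types.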
Learning Cost-Effective Treatment Regimes using Markov Decision Processes
Title | Learning Cost-Effective Treatment Regimes using Markov Decision Processes |
Authors | Himabindu Lakkaraju, Cynthia Rudin |
Abstract | Decision makers, such as doctors and judges, make crucial decisions such as recommending treatments to patients, and granting bail to defendants on a daily basis. Such decisions typically involve weighing the potential benefits of taking an action against the costs involved. In this work, we aim to automate this task of learning \emph{cost-effective, interpretable and actionable treatment regimes}. We formulate this as a problem of learning a decision list, a sequence of if-then-else rules, which maps characteristics of subjects (e.g., diagnostic test results of patients) to treatments. We propose a novel objective to construct a decision list which maximizes outcomes for the population, and minimizes overall costs. We model the problem of learning such a list as a Markov Decision Process (MDP) and employ a variant of the Upper Confidence Bound for Trees (UCT) strategy which leverages customized checks for pruning the search space effectively. Experimental results on real world observational data capturing judicial bail decisions and treatment recommendations for asthma patients demonstrate the effectiveness of our approach. |
Tasks | |
Published | 2016-10-21 |
URL | http://arxiv.org/abs/1610.06972v1 |
PDF | http://arxiv.org/pdf/1610.06972v1.pdf |
PWC | https://paperswithcode.com/paper/learning-cost-effective-treatment-regimes |
Repo | |
Framework | |
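The decision-list representation itself is simple to make concrete: an ordered sequence of if-then-else rules, applied in priority order, with a fallback default. The rules below are hypothetical examples for illustration; learning such a list via UCT over an MDP is the paper's contribution and is beyond this sketch.

```python
def apply_decision_list(rules, default, subject):
    """Evaluate a decision list: an ordered sequence of (predicate,
    treatment) pairs; the first matching rule fires, else the default.
    Interpretable by construction: the regime is the rule order itself."""
    for predicate, treatment in rules:
        if predicate(subject):
            return treatment
    return default

# Hypothetical asthma regime; rule order encodes priority.
rules = [
    (lambda s: s["severity"] == "high" and s["age"] < 12, "nebulized steroid"),
    (lambda s: s["severity"] == "high", "oral steroid"),
    (lambda s: s["fev1"] < 0.8, "inhaled steroid"),
]
t1 = apply_decision_list(rules, "no treatment",
                         {"severity": "high", "age": 8, "fev1": 0.9})
t2 = apply_decision_list(rules, "no treatment",
                         {"severity": "low", "age": 40, "fev1": 0.7})
```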