January 27, 2020

Paper Group ANR 1261

Residual Pyramid FCN for Robust Follicle Segmentation

Title Residual Pyramid FCN for Robust Follicle Segmentation
Authors Zhewei Wang, Weizhen Cai, Charles D. Smith, Noriko Kantake, Thomas J. Rosol, Jundong Liu
Abstract In this paper, we propose a pyramid network structure to improve the FCN-based segmentation solutions and apply it to label thyroid follicles in histology images. Our design is based on the notion that a hierarchical updating scheme, if properly implemented, can help FCNs capture the major objects, as well as structure details in an image. To this end, we devise a residual module to be mounted on consecutive network layers, through which pixel labels would be propagated from the coarsest layer towards the finest layer in a bottom-up fashion. We add five residual units along the decoding path of a modified U-Net to make our segmentation network, Res-Seg-Net. Experiments demonstrate that the multi-resolution set-up in our model is effective in producing segmentations with improved accuracy and robustness.
Tasks
Published 2019-01-11
URL http://arxiv.org/abs/1901.03760v1
PDF http://arxiv.org/pdf/1901.03760v1.pdf
PWC https://paperswithcode.com/paper/residual-pyramid-fcn-for-robust-follicle
Repo
Framework
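
A minimal PyTorch sketch of the kind of residual unit described above: a coarse label map is upsampled and corrected by a residual predicted from the finer decoder features. The two-convolution residual branch, the channel counts, and the arrangement of five such units along the U-Net decoder are illustrative assumptions rather than the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualRefineUnit(nn.Module):
    """Upsample a coarse label map and add a residual predicted from finer features."""
    def __init__(self, feat_channels, num_classes):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(feat_channels + num_classes, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, num_classes, kernel_size=3, padding=1),
        )

    def forward(self, coarse_logits, fine_feat):
        # Bring the coarse prediction up to the finer spatial resolution.
        up = F.interpolate(coarse_logits, size=fine_feat.shape[-2:],
                           mode="bilinear", align_corners=False)
        # Predict a correction from the finer features plus the upsampled labels.
        delta = self.residual(torch.cat([up, fine_feat], dim=1))
        return up + delta  # labels propagated bottom-up with a residual update

# Example: refine a 5-class prediction from 32x32 to 64x64 using decoder features.
unit = ResidualRefineUnit(feat_channels=64, num_classes=5)
coarse = torch.randn(1, 5, 32, 32)
fine = torch.randn(1, 64, 64, 64)
print(unit(coarse, fine).shape)  # torch.Size([1, 5, 64, 64])
```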

A Zero-Shot Learning application in Deep Drawing process using Hyper-Process Model

Title A Zero-Shot Learning application in Deep Drawing process using Hyper-Process Model
Authors João Reis, Gil Gonçalves
Abstract One of the consequences of moving from a mass-production to a mass-customization paradigm in today’s industrialized world is the need to increase the flexibility and responsiveness of manufacturing companies. High-mix / low-volume production forces constant accommodation of unknown product variants, which ultimately leads to long machine-calibration periods. The difficulty with machine calibration is that experience, together with a set of experiments, is required to meet the final product quality. Unfortunately, the number of possible combinations of machine parameters is so high that building empirical knowledge is difficult. As a result, trial-and-error approaches are normally taken, making one-of-a-kind products unviable. Therefore, a Zero-Shot Learning (ZSL) based approach called the hyper-process model (HPM), which learns the relation among multiple tasks, is used to shorten the calibration phase. Assuming each product variant is a task to solve, first, a shape analysis is performed on the data to learn common modes of deformation between tasks, and second, a mapping between these modes and task descriptions is learned. Ultimately, the present work has two main contributions: 1) the formulation of an industrial problem as a ZSL setting in which new process models can be generated for process optimization, and 2) the definition of a regression problem in the domain of ZSL. For this purpose, a 2-D deep drawing process was simulated using data collected from the Abaqus simulator, where a significant number of process models were collected to test the effectiveness of the approach. The results show that it is possible to learn new tasks without any available data (labeled or unlabeled) by leveraging information about existing tasks, speeding up the calibration phase and allowing quicker integration of new products into manufacturing systems.
Tasks Calibration, Zero-Shot Learning
Published 2019-01-24
URL http://arxiv.org/abs/1901.08969v1
PDF http://arxiv.org/pdf/1901.08969v1.pdf
PWC https://paperswithcode.com/paper/a-zero-shot-learning-application-in-deep
Repo
Framework
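
The two steps named in the abstract (a shape analysis that extracts common modes across known tasks, then a mapping from task descriptors to those modes) can be illustrated with off-the-shelf components. In the sketch below, PCA and ridge regression are stand-ins for the paper's hyper-process model and all data are synthetic; it only shows the zero-shot pattern of predicting a process model for an unseen variant from its descriptor alone.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy stand-ins: each row is a discretized process-model curve (e.g. a force-displacement
# response), and each task descriptor holds known process parameters for that variant.
curves = rng.normal(size=(40, 200))        # 40 known tasks, 200 sample points each
descriptors = rng.normal(size=(40, 3))     # 3 descriptive parameters per task

# 1) Shape analysis: learn common modes of deformation across the known tasks.
pca = PCA(n_components=5)
coeffs = pca.fit_transform(curves)         # per-task coordinates in mode space

# 2) Mapping: regress the mode coefficients from the task descriptors.
reg = Ridge(alpha=1.0).fit(descriptors, coeffs)

# Zero-shot step: for an unseen product variant, predict its mode coefficients from its
# descriptor alone and reconstruct a process model without collecting any new data.
new_descriptor = rng.normal(size=(1, 3))
predicted_curve = pca.inverse_transform(reg.predict(new_descriptor))
print(predicted_curve.shape)               # (1, 200)
```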

The Intrinsic Scale of Networks is Small

Title The Intrinsic Scale of Networks is Small
Authors Malik Magdon-Ismail, Kshiteesh Hegde
Abstract We define the intrinsic scale at which a network begins to reveal its identity as the scale at which subgraphs in the network (created by a random walk) are distinguishable from similar sized subgraphs in a perturbed copy of the network. We conduct an extensive study of intrinsic scale for several networks, ranging from structured (e.g. road networks) to ad-hoc and unstructured (e.g. crowd sourced information networks), to biological. We find: (a) The intrinsic scale is surprisingly small (7-20 vertices), even though the networks are many orders of magnitude larger. (b) The intrinsic scale quantifies “structure” in a network – networks which are explicitly constructed for specific tasks have smaller intrinsic scale. (c) The structure at different scales can be fragile (easy to disrupt) or robust.
Tasks
Published 2019-01-15
URL http://arxiv.org/abs/1901.09680v1
PDF http://arxiv.org/pdf/1901.09680v1.pdf
PWC https://paperswithcode.com/paper/the-intrinsic-scale-of-networks-is-small
Repo
Framework
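
The experiment behind the definition can be mimicked end to end: sample k-vertex subgraphs by random walk from a network and from a perturbed copy, and ask a classifier to tell them apart. The degree/density features and logistic regression below are crude stand-ins for the paper's subgraph classifier, and the Barabási–Albert graph with degree-preserving edge swaps is only an example perturbation.

```python
import random
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def random_walk_subgraph(G, k, rng):
    """Collect k distinct vertices by a random walk and return the induced subgraph."""
    nodes, v = set(), rng.choice(list(G.nodes))
    while len(nodes) < k:
        nodes.add(v)
        v = rng.choice(list(G.neighbors(v)))
    return G.subgraph(nodes)

def features(H):
    """Crude subgraph signature: density plus min / median / max induced degree."""
    degs = sorted(d for _, d in H.degree())
    return [nx.density(H), degs[0], degs[len(degs) // 2], degs[-1]]

rng = random.Random(0)
G = nx.barabasi_albert_graph(2000, 3, seed=0)                 # original network
H = G.copy()
nx.double_edge_swap(H, nswap=2000, max_tries=20000, seed=0)   # degree-preserving perturbation

k = 12  # candidate scale: subgraphs of 12 vertices
X = [features(random_walk_subgraph(G, k, rng)) for _ in range(300)] + \
    [features(random_walk_subgraph(H, k, rng)) for _ in range(300)]
y = [0] * 300 + [1] * 300

acc = cross_val_score(LogisticRegression(max_iter=1000), np.array(X), y, cv=5).mean()
# Accuracy near 0.5 means subgraphs of size k are indistinguishable; the intrinsic scale is
# the smallest k at which accuracy climbs clearly above chance.
print(f"distinguishability at k={k}: {acc:.2f}")
```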

Convolutional Neural Network-aided Bit-flipping for Belief Propagation Decoding of Polar Codes

Title Convolutional Neural Network-aided Bit-flipping for Belief Propagation Decoding of Polar Codes
Authors Chieh-Fang Teng, Kuan-Shiuan Ho, Chen-Hsi Wu, Sin-Sheng Wong, An-Yeu Wu
Abstract Known for their capacity-achieving abilities, polar codes have been selected as the control channel coding scheme for 5G communications. To satisfy the needs of high throughput and low latency, belief propagation (BP) is chosen as the decoding algorithm. However, in general, the error performance of BP is worse than that of enhanced successive cancellation (SC). Recently, critical-set bit-flipping (CS-BF) has been applied to BP decoding to lower the error rate. However, its trial-and-error process results in even longer latency. In this work, we propose a convolutional neural network-assisted bit-flipping (CNN-BF) mechanism to further enhance BP decoding of polar codes. With carefully designed input data and model architecture, our proposed CNN-BF achieves much higher prediction accuracy and better error-correction capability than CS-BF, with only half the latency. It also achieves a lower block error rate (BLER) than CRC-aided SC list decoding (CA-SCL).
Tasks
Published 2019-11-05
URL https://arxiv.org/abs/1911.01704v3
PDF https://arxiv.org/pdf/1911.01704v3.pdf
PWC https://paperswithcode.com/paper/convolutional-neural-network-aided-bit
Repo
Framework
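
The abstract does not spell out the input design or the network, so the sketch below only shows the general shape of such a mechanism: a small 1-D CNN scores every bit position for flipping from per-bit soft values left by a failed BP attempt, trained as a multi-label classifier. All sizes and the training targets are illustrative.

```python
import torch
import torch.nn as nn

N = 64  # code length (illustrative)

class FlipPredictor(nn.Module):
    """Tiny 1-D CNN that scores each bit position for flipping from per-bit soft values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=1),
        )

    def forward(self, llr):                                # llr: (batch, N) soft values
        return self.net(llr.unsqueeze(1)).squeeze(1)       # (batch, N) flip scores

model = FlipPredictor()
llr = torch.randn(8, N)                                    # stand-in for BP output LLRs
flip_labels = torch.zeros(8, N)
flip_labels[:, 5] = 1.0                                    # stand-in ground-truth flip sets
loss = nn.BCEWithLogitsLoss()(model(llr), flip_labels)
loss.backward()
# At decode time, the top-scoring candidate bit(s) would be flipped and BP decoding rerun,
# replacing the trial-and-error search of critical-set bit-flipping.
```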

Semi-Supervised Hierarchical Recurrent Graph Neural Network for City-Wide Parking Availability Prediction

Title Semi-Supervised Hierarchical Recurrent Graph Neural Network for City-Wide Parking Availability Prediction
Authors Weijia Zhang, Hao Liu, Yanchi Liu, Jingbo Zhou, Hui Xiong
Abstract The ability to predict city-wide parking availability is crucial for the successful development of Parking Guidance and Information (PGI) systems. Indeed, the effective prediction of city-wide parking availability can improve parking efficiency, help urban planning, and ultimately alleviate city congestion. However, predicting city-wide parking availability is a non-trivial task because of three major challenges: 1) the non-Euclidean spatial autocorrelation among parking lots, 2) the dynamic temporal autocorrelation within and between parking lots, and 3) the scarcity of information about real-time parking availability obtained from real-time sensors (e.g., camera, ultrasonic sensor, and GPS). To this end, we propose the Semi-supervised Hierarchical Recurrent Graph Neural Network (SHARE) for predicting city-wide parking availability. Specifically, we first propose a hierarchical graph convolution structure to model non-Euclidean spatial autocorrelation among parking lots. Along this line, a contextual graph convolution block and a soft clustering graph convolution block are proposed to capture local and global spatial dependencies between parking lots, respectively. Additionally, we adopt a recurrent neural network to incorporate dynamic temporal dependencies of parking lots. Moreover, we propose a parking availability approximation module to estimate missing real-time parking availability from both the spatial and temporal domains. Finally, experiments on two real-world datasets demonstrate that the prediction performance of SHARE outperforms seven state-of-the-art baselines.
Tasks
Published 2019-11-24
URL https://arxiv.org/abs/1911.10516v1
PDF https://arxiv.org/pdf/1911.10516v1.pdf
PWC https://paperswithcode.com/paper/semi-supervised-hierarchical-recurrent-graph
Repo
Framework
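
A compressed sketch of the spatial-then-temporal idea: one round of row-normalized graph propagation mixes each parking lot with its non-Euclidean neighbours, and a GRU then models the temporal dynamics. SHARE's hierarchical and soft-clustering convolution blocks and its semi-supervised availability-approximation module are not reproduced here; the dimensions and features are placeholders.

```python
import torch
import torch.nn as nn

class ContextualGCNGRU(nn.Module):
    """One graph-propagation step per time slot, followed by a GRU over the time axis."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, x, adj):
        # x: (num_lots, T, in_dim) features per parking lot and time step
        # adj: (num_lots, num_lots) adjacency among parking lots
        a_norm = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)   # row-normalized adjacency
        h = torch.relu(self.lin(x))                                  # (L, T, hid)
        h = torch.matmul(a_norm, h.transpose(0, 1))                  # (T, L, hid): mix neighbours
        out, _ = self.gru(h.transpose(0, 1))                         # (L, T, hid): temporal model
        return self.head(out[:, -1, :]).squeeze(-1)                  # next-step availability per lot

lots, T = 10, 12
adj = (torch.rand(lots, lots) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()                                  # symmetric toy graph
x = torch.randn(lots, T, 4)                                          # e.g. occupancy, time, context
print(ContextualGCNGRU(4, 32)(x, adj).shape)                         # torch.Size([10])
```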

Exploring Temporal Information for Improved Video Understanding

Title Exploring Temporal Information for Improved Video Understanding
Authors Yi Zhu
Abstract In this dissertation, I present my work towards exploring temporal information for better video understanding. Specifically, I have worked on two problems: action recognition and semantic segmentation. For action recognition, I have proposed a framework, termed hidden two-stream networks, to learn an optimal motion representation that does not require the computation of optical flow. My framework alleviates several challenges faced in video classification, such as learning motion representations, real-time inference, multi-framerate handling, generalizability to unseen actions, etc. For semantic segmentation, I have introduced a general framework that uses video prediction models to synthesize new training samples. By scaling up the training dataset, my trained models are more accurate and robust than previous models even without modifications to the network architectures or objective functions. I believe videos have much more potential to be mined, and temporal information is one of the most important cues for machines to perceive the visual world better.
Tasks Optical Flow Estimation, Semantic Segmentation, Video Classification, Video Prediction, Video Understanding
Published 2019-05-25
URL https://arxiv.org/abs/1905.10654v1
PDF https://arxiv.org/pdf/1905.10654v1.pdf
PWC https://paperswithcode.com/paper/exploring-temporal-information-for-improved
Repo
Framework

Semantic Regularization: Improve Few-shot Image Classification by Reducing Meta Shift

Title Semantic Regularization: Improve Few-shot Image Classification by Reducing Meta Shift
Authors Da Chen, Yongliang Yang, Zunlei Feng, Xiang Wu, Mingli Song, Wenbin Li, Yuan He, Hui Xue, Feng Mao
Abstract Few-shot image classification requires the classifier to robustly cope with unseen classes even if there are only a few samples for each class. Recent advances benefit from the meta-learning process, where episodic tasks are formed to train a model that can adapt to class change. However, these tasks are independent of each other, and existing works rely mainly on the limited samples of an individual support set in a single meta-task. This strategy leads to severe meta-shift issues across multiple tasks, meaning the learned prototypes or class descriptors are not stable, as each task only involves its own support set. To avoid this problem, we propose a concise Semantic Regularization Network to learn a common semantic space under the framework of meta-learning. In this space, all class descriptors can be regularized by the learned semantic basis, which effectively solves the meta-shift problem. The key is to train a class encoder and decoder structure that can encode the sample embedding features into the semantic domain with the trained semantic basis and generate a more stable and general class descriptor from the decoder. We evaluate our work through extensive comparisons with previous methods on three benchmark datasets (MiniImageNet, TieredImageNet, and CUB). The results show that the semantic regularization module improves performance by 4%-7% over the baseline method and achieves competitive results against the current state-of-the-art models.
Tasks Few-Shot Image Classification, Image Classification, Meta-Learning
Published 2019-12-18
URL https://arxiv.org/abs/1912.08395v2
PDF https://arxiv.org/pdf/1912.08395v2.pdf
PWC https://paperswithcode.com/paper/class-regularization-improve-few-shot-image
Repo
Framework
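
A loose sketch of the regularization idea: class prototypes computed from a support set are re-expressed as combinations of a learned semantic basis that is shared across episodes, and a reconstruction term keeps the regularized descriptors close to the data. The encoder, basis size, and loss below are guesses at the spirit of the method, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticRegularizer(nn.Module):
    """Re-express class prototypes on a shared semantic basis to stabilize them across tasks."""
    def __init__(self, feat_dim, num_basis):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, num_basis)                        # class encoder
        self.basis = nn.Parameter(torch.randn(num_basis, feat_dim) * 0.1)    # learned semantic basis

    def forward(self, prototypes):
        codes = torch.softmax(self.encoder(prototypes), dim=-1)   # weights over the basis vectors
        regularized = codes @ self.basis                          # decode: combination of basis vectors
        recon_loss = F.mse_loss(regularized, prototypes)          # keep descriptors near the data
        return regularized, recon_loss

# One 5-way episode: prototypes from the support-set embeddings (stand-ins here).
protos = torch.randn(5, 64)
module = SemanticRegularizer(feat_dim=64, num_basis=16)
class_descriptors, reg_loss = module(protos)
# Query samples would be classified against `class_descriptors` (e.g. by cosine similarity),
# with `reg_loss` added to the episodic loss to reduce meta shift across tasks.
```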

A New Confidence Interval for the Mean of a Bounded Random Variable

Title A New Confidence Interval for the Mean of a Bounded Random Variable
Authors Erik Learned-Miller, Philip S. Thomas
Abstract We present a new method for constructing a confidence interval for the mean of a bounded random variable from samples of the random variable. We conjecture that the confidence interval has guaranteed coverage, i.e., that it contains the mean with high probability for all distributions on a bounded interval, for all sample sizes, and for all confidence levels. This new method provides confidence intervals that are competitive with those produced using Student’s t-statistic, but does not rely on normality assumptions. In particular, its only requirement is that the distribution be bounded on a known finite interval.
Tasks
Published 2019-05-15
URL https://arxiv.org/abs/1905.06208v1
PDF https://arxiv.org/pdf/1905.06208v1.pdf
PWC https://paperswithcode.com/paper/a-new-confidence-interval-for-the-mean-of-a
Repo
Framework
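
The abstract does not give the construction of the new interval, so the sketch below only reproduces the two standard reference points it is compared against for a [0, 1]-bounded sample: the Student's t interval (no coverage guarantee for bounded variables) and the distribution-free Hoeffding interval (guaranteed coverage, but typically much wider).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=50)          # samples of a random variable bounded on [0, 1]
n, alpha = len(x), 0.05
mean, sd = x.mean(), x.std(ddof=1)

# Student's t interval: mean +/- t_{1-alpha/2, n-1} * s / sqrt(n).
half_t = stats.t.ppf(1 - alpha / 2, df=n - 1) * sd / np.sqrt(n)

# Hoeffding interval: mean +/- sqrt(log(2/alpha) / (2n)), valid for any [0, 1]-bounded variable.
half_h = np.sqrt(np.log(2 / alpha) / (2 * n))

print(f"t interval:         {mean:.3f} +/- {half_t:.3f}")
print(f"Hoeffding interval: {mean:.3f} +/- {half_h:.3f}")
```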

Predicting Visual Memory Schemas with Variational Autoencoders

Title Predicting Visual Memory Schemas with Variational Autoencoders
Authors Cameron Kyle-Davidson, Adrian Bors, Karla Evans
Abstract Visual memory schema (VMS) maps show which regions of an image cause that image to be remembered or falsely remembered. Previous work has succeeded in generating low resolution VMS maps using convolutional neural networks. We instead approach this problem as an image-to-image translation task making use of a variational autoencoder. This approach allows us to generate higher resolution dual channel images that represent visual memory schemas, allowing us to evaluate predicted true memorability and false memorability separately. We also evaluate the relationship between VMS maps, predicted VMS maps, ground truth memorability scores, and predicted memorability scores.
Tasks Image-to-Image Translation
Published 2019-07-19
URL https://arxiv.org/abs/1907.08514v1
PDF https://arxiv.org/pdf/1907.08514v1.pdf
PWC https://paperswithcode.com/paper/predicting-visual-memory-schemas-with
Repo
Framework
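
A toy version of the setup: a convolutional VAE that takes an image and decodes a two-channel map, one channel for true memorability and one for false memorability, through a sampled latent bottleneck. Resolutions, channel counts, and the absence of any conditioning are simplifications; the paper's model is not reproduced.

```python
import torch
import torch.nn as nn

class TinyVMSVAE(nn.Module):
    """Image in, 2-channel map out (true / false memorability), via a variational bottleneck."""
    def __init__(self, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(32 * 16 * 16, z_dim)
        self.fc_logvar = nn.Linear(32 * 16 * 16, z_dim)
        self.fc_dec = nn.Linear(z_dim, 32 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1),               # 32 -> 64, 2 channels
        )

    def forward(self, img):
        h = self.enc(img)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        out = self.dec(self.fc_dec(z).view(-1, 32, 16, 16))
        return torch.sigmoid(out), mu, logvar                     # dual-channel VMS map

vms_map, mu, logvar = TinyVMSVAE()(torch.randn(4, 3, 64, 64))
print(vms_map.shape)   # torch.Size([4, 2, 64, 64])
```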

Learning Disentangled Representations for Recommendation

Title Learning Disentangled Representations for Recommendation
Authors Jianxin Ma, Chang Zhou, Peng Cui, Hongxia Yang, Wenwu Zhu
Abstract User behavior data in recommender systems are driven by the complex interactions of many latent factors behind the users’ decision making processes. The factors are highly entangled, and may range from high-level ones that govern user intentions, to low-level ones that characterize a user’s preference when executing an intention. Learning representations that uncover and disentangle these latent factors can bring enhanced robustness, interpretability, and controllability. However, learning such disentangled representations from user behavior is challenging, and remains largely neglected by the existing literature. In this paper, we present the MACRo-mIcro Disentangled Variational Auto-Encoder (MacridVAE) for learning disentangled representations from user behavior. Our approach achieves macro disentanglement by inferring the high-level concepts associated with user intentions (e.g., to buy a shirt or a cellphone), while capturing the preference of a user regarding the different concepts separately. A micro-disentanglement regularizer, stemming from an information-theoretic interpretation of VAEs, then forces each dimension of the representations to independently reflect an isolated low-level factor (e.g., the size or the color of a shirt). Empirical results show that our approach can achieve substantial improvement over the state-of-the-art baselines. We further demonstrate that the learned representations are interpretable and controllable, which can potentially lead to a new paradigm for recommendation where users are given fine-grained control over targeted aspects of the recommendation lists.
Tasks Decision Making, Recommendation Systems
Published 2019-10-31
URL https://arxiv.org/abs/1910.14238v1
PDF https://arxiv.org/pdf/1910.14238v1.pdf
PWC https://paperswithcode.com/paper/learning-disentangled-representations-for
Repo
Framework
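
A loose sketch of the two levels of disentanglement: items a user has interacted with are softly assigned to high-level concept prototypes (macro), the user is encoded separately under each concept, and a factorized-prior KL term pushes individual code dimensions toward independence (micro). This only outlines the idea; MacridVAE's actual encoder, decoder, and objective are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K, D = 4, 16          # K high-level concepts (macro), D dimensions per concept code (micro)

class MacroMicroEncoder(nn.Module):
    """Soft concept assignment per item, then a per-concept variational user code."""
    def __init__(self, num_items):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, D)
        self.prototypes = nn.Parameter(torch.randn(K, D))          # one prototype per concept
        self.mu = nn.Linear(D, D)
        self.logvar = nn.Linear(D, D)

    def forward(self, item_ids):
        e = self.item_emb(item_ids)                                            # (n, D)
        sim = F.normalize(e, dim=-1) @ F.normalize(self.prototypes, dim=-1).t()
        assign = torch.softmax(sim / 0.1, dim=-1)                              # macro: concept membership
        pooled = assign.t() @ e                                                # (K, D) per-concept summary
        mu, logvar = self.mu(pooled), self.logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)                # (K, D) user code
        # Micro: KL to a factorized standard-normal prior encourages independent dimensions;
        # weighting it more heavily strengthens the micro-disentanglement pressure.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl

enc = MacroMicroEncoder(num_items=1000)
z, kl = enc(torch.randint(0, 1000, (20,)))       # one user's 20 interacted items
print(z.shape, float(kl))                        # torch.Size([4, 16]) ...
```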

Biasing MCTS with Features for General Games

Title Biasing MCTS with Features for General Games
Authors Dennis J. N. J. Soemers, Éric Piette, Cameron Browne
Abstract This paper proposes using a linear function approximator, rather than a deep neural network (DNN), to bias a Monte Carlo tree search (MCTS) player for general games. This is unlikely to match the potential raw playing strength of DNNs, but has advantages in terms of generality, interpretability and resources (time and hardware) required for training. Features describing local patterns are used as inputs. The features are formulated in such a way that they are easily interpretable and applicable to a wide range of general games, and might encode simple local strategies. We gradually create new features during the same self-play training process used to learn feature weights. We evaluate the playing strength of an MCTS player biased by learnt features against a standard upper confidence bounds for trees (UCT) player in multiple different board games, and demonstrate significantly improved playing strength in the majority of them after a small number of self-play training games.
Tasks Board Games
Published 2019-03-21
URL http://arxiv.org/abs/1903.08942v1
PDF http://arxiv.org/pdf/1903.08942v1.pdf
PWC https://paperswithcode.com/paper/biasing-mcts-with-features-for-general-games
Repo
Framework
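
The abstract leaves open exactly how the learnt linear evaluation enters the search, so the sketch below shows one common way to bias MCTS with a prior: the linear feature value of each move is added to the UCT score and decays as real simulations accumulate. The feature names and constants are made up for illustration.

```python
import math

class Node:
    def __init__(self, prior=0.0):
        self.prior = prior           # bias from the linear feature evaluation of the move
        self.visits = 0
        self.value_sum = 0.0
        self.children = []

def linear_move_value(features, weights):
    """Linear function approximator over named local-pattern features (no DNN)."""
    return sum(weights.get(f, 0.0) for f in features)

def select_child(node, c_uct=1.4, c_bias=1.0):
    """UCT selection with an additive, visit-decaying bias from the feature evaluation."""
    def score(child):
        exploit = child.value_sum / (child.visits + 1e-9)
        explore = c_uct * math.sqrt(math.log(node.visits + 1) / (child.visits + 1e-9))
        bias = c_bias * child.prior / (child.visits + 1)   # fades as real playouts accumulate
        return exploit + explore + bias
    return max(node.children, key=score)

# Toy usage: three candidate moves described by (hypothetical) local-pattern features.
weights = {"connects_two_stones": 0.8, "edge_of_board": -0.2}
root = Node()
for feats in (["connects_two_stones"], ["edge_of_board"], []):
    root.children.append(Node(prior=linear_move_value(feats, weights)))
root.visits = 1
print(root.children.index(select_child(root)))   # 0: the move favoured by the learnt features
```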

Domain Adaptation-based Augmentation for Weakly Supervised Nuclei Detection

Title Domain Adaptation-based Augmentation for Weakly Supervised Nuclei Detection
Authors Nicolas Brieu, Armin Meier, Ansh Kapil, Ralf Schoenmeyer, Christos G. Gavriel, Peter D. Caie, Günter Schmidt
Abstract The detection of nuclei is one of the most fundamental components of computational pathology. Current state-of-the-art methods are based on deep learning, with the prerequisite that extensive labeled datasets are available. The increasing number of patient cohorts to be analyzed, the diversity of tissue stains and indications, as well as the cost of dataset labeling motivates the development of novel methods to reduce labeling effort across domains. We introduce in this work a weakly supervised ‘inter-domain’ approach that (i) performs stain normalization and unpaired image-to-image translation to transform labeled images on a source domain to synthetic labeled images on an unlabeled target domain and (ii) uses the resulting synthetic labeled images to train a detection network on the target domain. Extensive experiments show the superiority of the proposed approach against the state-of-the-art ‘intra-domain’ detection based on fully-supervised learning.
Tasks Domain Adaptation, Image-to-Image Translation
Published 2019-07-10
URL https://arxiv.org/abs/1907.04681v1
PDF https://arxiv.org/pdf/1907.04681v1.pdf
PWC https://paperswithcode.com/paper/domain-adaptation-based-augmentation-for
Repo
Framework

A Survey on Knowledge Graph Embeddings with Literals: Which model links better Literal-ly?

Title A Survey on Knowledge Graph Embeddings with Literals: Which model links better Literal-ly?
Authors Genet Asefa Gesese, Russa Biswas, Mehwish Alam, Harald Sack
Abstract Knowledge Graphs (KGs) are composed of structured information about a particular domain in the form of entities and relations. In addition to the structured information, KGs help in facilitating interconnectivity and interoperability between different resources represented in the Linked Data Cloud. KGs have been used in a variety of applications such as entity linking, question answering, recommender systems, etc. However, KG applications suffer from high computational and storage costs. Hence, there arises the necessity for a representation able to map high-dimensional KGs into low-dimensional spaces, i.e., an embedding space, preserving structural as well as relational information. This paper conducts a survey of KG embedding models which not only consider the structured information contained in the form of entities and relations in a KG but also the unstructured information represented as literals, such as text, numerical values, images, etc. Along with a theoretical analysis and comparison of the methods proposed so far for generating KG embeddings with literals, an empirical evaluation of the different methods under identical settings has been performed for the general task of link prediction.
Tasks Entity Linking, Knowledge Graph Embeddings, Knowledge Graphs, Link Prediction, Question Answering, Recommendation Systems
Published 2019-10-28
URL https://arxiv.org/abs/1910.12507v1
PDF https://arxiv.org/pdf/1910.12507v1.pdf
PWC https://paperswithcode.com/paper/a-survey-on-knowledge-graph-embeddings-with
Repo
Framework
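
For context on the link-prediction evaluation the survey uses, here is the classic structure-only TransE score that most of the surveyed models build on; models “with literals” additionally fold attribute values (text, numbers, images) into the entity representations. The vectors below are untrained random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entities = {name: rng.normal(size=dim) for name in ("Berlin", "Germany", "Paris")}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t):
    """TransE plausibility: the smaller ||h + r - t||, the more likely the triple (h, r, t)."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# Link prediction ranks candidate tails (or heads) for a query such as (Berlin, capital_of, ?).
for tail in ("Germany", "Paris"):
    print(tail, round(transe_score("Berlin", "capital_of", tail), 3))
```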

One Size Does Not Fit All: Modeling Users’ Personal Curiosity in Recommender Systems

Title One Size Does Not Fit All: Modeling Users’ Personal Curiosity in Recommender Systems
Authors Fakhri Abbas, Xi Niu
Abstract Today’s recommender systems are criticized for recommending items that are too obvious to arouse users’ interest. That is why the recommender systems research community has advocated some “beyond accuracy” evaluation metrics such as novelty, diversity, coverage, and serendipity, with the hope of promoting information discovery and sustaining users’ interest over a long period of time. While bringing in new perspectives, most of these evaluation metrics have not considered individual users’ differences: an open-minded user may favor highly novel or diversified recommendations, whereas a conservative user’s appetite for novelty or diversity may not be that large. In this paper, we developed a model to approximate an individual’s curiosity distribution over different levels of stimuli, guided by the well-known Wundt curve in psychology. We measured an item’s surprise level to assess its stimulation level and whether it is in the range of the user’s appetite for stimulus. We then proposed a recommendation system framework that considers both user preference and appetite for stimulus, so that curiosity is maximally aroused. Our framework differs from a typical recommender system in that it leverages human curiosity to promote intrinsic interest in the system. A series of evaluation experiments have been conducted to show that our framework is able to rank higher the items with not only high ratings but also high response likelihood. The recommendation list generated by our algorithm has a higher potential of inspiring user curiosity compared to traditional approaches. The personalization factor for assessing the stimulus (surprise) strength further helps the recommender achieve smaller (better) inter-user similarity.
Tasks Recommendation Systems
Published 2019-06-29
URL https://arxiv.org/abs/1907.00119v2
PDF https://arxiv.org/pdf/1907.00119v2.pdf
PWC https://paperswithcode.com/paper/one-size-does-not-fit-all-modeling-users
Repo
Framework
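
A common way to parameterize the Wundt curve mentioned above is as the difference of two logistic functions of the stimulus (surprise) level, giving the rise-then-fall shape with a per-user sweet spot; the form and parameters below are an assumption for illustration, not the paper's fitted model.

```python
import numpy as np

def wundt_curve(stimulus, reward_gain=2.0, punish_gain=2.0, reward_mid=2.0, punish_mid=4.0):
    """Hedonic response = reward minus punishment, each a logistic function of stimulus level."""
    reward = 1.0 / (1.0 + np.exp(-reward_gain * (stimulus - reward_mid)))
    punishment = 1.0 / (1.0 + np.exp(-punish_gain * (stimulus - punish_mid)))
    return reward - punishment

levels = np.linspace(0, 6, 61)              # item surprise levels on some scale
curiosity = wundt_curve(levels)
print(f"peak curiosity at surprise level {levels[np.argmax(curiosity)]:.1f}")
# Personalizing reward_mid / punish_mid per user shifts the peak: open-minded users peak at
# higher surprise levels than conservative users, and items are re-ranked accordingly.
```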

Trend-responsive User Segmentation Enabling Traceable Publishing Insights. A Case Study of a Real-world Large-scale News Recommendation System

Title Trend-responsive User Segmentation Enabling Traceable Publishing Insights. A Case Study of a Real-world Large-scale News Recommendation System
Authors Joanna Misztal-Radecka, Dominik Rusiecki, Michał Żmuda, Artur Bujak
Abstract Traditional offline approaches are no longer sufficient for building modern recommender systems in domains such as online news services, mainly due to the high dynamics of environment changes and the necessity to operate at a large scale with high data sparsity. The ability to balance exploration with exploitation makes multi-armed bandits an efficient alternative to conventional methods, and robust user segmentation plays a crucial role in providing the context for such online recommendation algorithms. In this work, we present an unsupervised and trend-responsive method for segmenting users according to their semantic interests, which has been integrated with a real-world system for large-scale news recommendations. The results of an online A/B test show significant improvements compared to a global-optimization algorithm on several services with different characteristics. Based on the experimental results as well as the exploration of segment descriptions and trend dynamics, we propose extensions to this approach that address particular real-world challenges for different use cases. Moreover, we describe a method of generating traceable publishing insights that facilitates the creation of content serving the diversity of all users’ needs.
Tasks Multi-Armed Bandits, Recommendation Systems
Published 2019-10-28
URL https://arxiv.org/abs/1911.11070v1
PDF https://arxiv.org/pdf/1911.11070v1.pdf
PWC https://paperswithcode.com/paper/trend-responsive-user-segmentation-enabling
Repo
Framework
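
The segments produced by such a method supply the context for an online bandit; a minimal illustration of that pairing is a per-segment Beta-Bernoulli Thompson sampler over candidate articles. This is a generic sketch of the exploration/exploitation loop, not the recommendation algorithm used in the described system, and the segment names and click rates are invented.

```python
import random

class SegmentThompsonBandit:
    """Per-segment Beta-Bernoulli Thompson sampling over candidate articles (clicks as reward)."""
    def __init__(self):
        self.stats = {}                                   # (segment, article) -> [clicks+1, skips+1]

    def recommend(self, segment, candidates):
        draw = lambda a: random.betavariate(*self.stats.setdefault((segment, a), [1, 1]))
        return max(candidates, key=draw)                  # explore/exploit via posterior sampling

    def update(self, segment, article, clicked):
        self.stats[(segment, article)][0 if clicked else 1] += 1

random.seed(0)
bandit = SegmentThompsonBandit()
true_ctr = {"sports_fans": {"a1": 0.30, "a2": 0.05, "a3": 0.05},
            "local_news": {"a1": 0.05, "a2": 0.25, "a3": 0.05}}   # invented click rates
for _ in range(500):                                              # simulated feedback loop
    seg = random.choice(list(true_ctr))
    art = bandit.recommend(seg, ["a1", "a2", "a3"])
    bandit.update(seg, art, random.random() < true_ctr[seg][art])
print(bandit.stats[("sports_fans", "a1")], bandit.stats[("local_news", "a2")])
```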