Paper Group ANR 226
Pixel Level Data Augmentation for Semantic Image Segmentation using Generative Adversarial Networks
Title | Pixel Level Data Augmentation for Semantic Image Segmentation using Generative Adversarial Networks |
Authors | Shuangting Liu, Jiaqi Zhang, Yuxin Chen, Yifan Liu, Zengchang Qin, Tao Wan |
Abstract | Semantic segmentation is one of the basic topics in computer vision; it aims to assign a semantic label to every pixel of an image. An unbalanced semantic label distribution can have a negative influence on segmentation accuracy. In this paper, we investigate a data augmentation approach that balances the semantic label distribution in order to improve segmentation performance. We propose using generative adversarial networks (GANs) to generate realistic images that improve the performance of semantic segmentation networks. Experimental results show that the proposed method not only improves segmentation performance on classes with low accuracy, but also yields a 1.3% to 2.1% increase in average segmentation accuracy. This shows that the augmentation method can boost accuracy and is easily applicable to other segmentation models. |
Tasks | Data Augmentation, Semantic Segmentation |
Published | 2018-11-01 |
URL | https://arxiv.org/abs/1811.00174v4 |
https://arxiv.org/pdf/1811.00174v4.pdf | |
PWC | https://paperswithcode.com/paper/pixel-level-data-augmentation-for-semantic |
Repo | |
Framework | |
Contextual Parameter Generation for Universal Neural Machine Translation
Title | Contextual Parameter Generation for Universal Neural Machine Translation |
Authors | Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, Tom Mitchell |
Abstract | We propose a simple modification to existing neural machine translation (NMT) models that enables using a single universal model to translate between multiple languages while allowing for language specific parameterization, and that can also be used for domain adaptation. Our approach requires no changes to the model architecture of a standard NMT system, but instead introduces a new component, the contextual parameter generator (CPG), that generates the parameters of the system (e.g., weights in a neural network). This parameter generator accepts source and target language embeddings as input, and generates the parameters for the encoder and the decoder, respectively. The rest of the model remains unchanged and is shared across all languages. We show how this simple modification enables the system to use monolingual data for training and also perform zero-shot translation. We further show it is able to surpass state-of-the-art performance for both the IWSLT-15 and IWSLT-17 datasets and that the learned language embeddings are able to uncover interesting relationships between languages. |
Tasks | Domain Adaptation, Machine Translation |
Published | 2018-08-26 |
URL | http://arxiv.org/abs/1808.08493v1 |
http://arxiv.org/pdf/1808.08493v1.pdf | |
PWC | https://paperswithcode.com/paper/contextual-parameter-generation-for-universal |
Repo | |
Framework | |
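The core idea in the abstract, a generator network that produces the translation model's parameters from language embeddings, can be sketched in a few lines. The sketch below is an illustrative toy, not the authors' implementation: the dimensions, the linear generator `G`, and the function names are all invented for the example.

```python
import random

random.seed(0)

EMB_DIM, IN_DIM, OUT_DIM = 4, 3, 2  # toy sizes, not the paper's

# The parameter generator is itself a (here: linear) map that turns a
# language embedding into the flat weight vector of a model layer.
N_PARAMS = IN_DIM * OUT_DIM
G = [[random.gauss(0, 0.1) for _ in range(EMB_DIM)] for _ in range(N_PARAMS)]

def generate_params(lang_emb):
    """Produce layer weights conditioned on a language embedding."""
    flat = [sum(g * e for g, e in zip(row, lang_emb)) for row in G]
    # Reshape the flat vector into an OUT_DIM x IN_DIM weight matrix.
    return [flat[i * IN_DIM:(i + 1) * IN_DIM] for i in range(OUT_DIM)]

def apply_layer(weights, x):
    """Apply the generated linear layer to an input vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

# Two language embeddings stand in for, e.g., German and French.
lang_a = [random.gauss(0, 1) for _ in range(EMB_DIM)]
lang_b = [random.gauss(0, 1) for _ in range(EMB_DIM)]

x = [1.0, 0.5, -0.5]
# One shared architecture, language-specific parameters:
y_a = apply_layer(generate_params(lang_a), x)
y_b = apply_layer(generate_params(lang_b), x)
```

Conditioning the parameters, rather than the inputs, on the language embedding is what lets the bulk of the model stay shared across all language pairs.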
A Bio-inspired Collision Detecotr for Small Quadcopter
Title | A Bio-inspired Collision Detecotr for Small Quadcopter |
Authors | Jiannan Zhao, Cheng Hu, Chun Zhang, Zhihua Wang, Shigang Yue |
Abstract | Sense-and-avoid capability enables insects to fly versatilely and robustly in dynamic, complex environments. Their biological principles are so practical and efficient that they have inspired us to imitate them in our flying machines. In this paper, we study a novel bio-inspired collision detector and its application on a quadcopter. The detector is inspired by the LGMD neurons of locusts and is modeled on an STM32F407 MCU. Compared to other collision-detection methods applied on quadcopters, we focus on enhancing collision selectivity in a bio-inspired way that considerably increases computing efficiency during an obstacle-detection task, even in complex dynamic environments. We designed the quadcopter's response to imminent collisions and tested this bio-inspired system in an indoor arena. The results observed in the experiments demonstrate that the LGMD collision detector is feasible as a vision module for the quadcopter's collision-avoidance task. |
Tasks | |
Published | 2018-01-14 |
URL | http://arxiv.org/abs/1801.04530v1 |
http://arxiv.org/pdf/1801.04530v1.pdf | |
PWC | https://paperswithcode.com/paper/a-bio-inspired-collision-detecotr-for-small |
Repo | |
Framework | |
Kalman Filter Modifier for Neural Networks in Non-stationary Environments
Title | Kalman Filter Modifier for Neural Networks in Non-stationary Environments |
Authors | Honglin Li, Frieder Ganz, Shirin Enshaeifar, Payam Barnaghi |
Abstract | Learning in a non-stationary environment is an inevitable problem when applying machine learning algorithms to real-world settings, and learning new tasks without forgetting previous knowledge is a challenging issue in machine learning. We propose a Kalman Filter based modifier to maintain the performance of neural network models in non-stationary environments. The results show that our proposed model preserves the key information and adapts better to the changes. In our experiments, the accuracy of the proposed model decreases by only 0.4% in the drifting environment, while the accuracy of the conventional model decreases by 90%. |
Tasks | |
Published | 2018-11-06 |
URL | http://arxiv.org/abs/1811.02361v1 |
http://arxiv.org/pdf/1811.02361v1.pdf | |
PWC | https://paperswithcode.com/paper/kalman-filter-modifier-for-neural-networks-in |
Repo | |
Framework | |
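The abstract does not spell out the modifier itself, but the Kalman predict/update cycle it builds on can be sketched on a single drifting scalar. Everything here, including the random-walk state model and the noise variances `q` and `r`, is an assumed toy setup, not the paper's design.

```python
import random

def kalman_step(x_est, p_est, z, q=0.01, r=0.1):
    """One scalar Kalman predict/update step.

    x_est, p_est -- current estimate and its variance
    z            -- new noisy observation
    q, r         -- assumed process / measurement noise variances
    """
    # Predict: a random-walk state model, so the mean stays put while
    # uncertainty grows by the process noise q.
    p_pred = p_est + q
    # Update: blend prediction and observation via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_est + k * (z - x_est)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Track a quantity whose true value jumps from 0.0 to 1.0 mid-stream,
# a crude stand-in for concept drift in a non-stationary environment.
random.seed(1)
x, p = 0.0, 1.0
for t in range(200):
    true_w = 0.0 if t < 100 else 1.0
    z = true_w + random.gauss(0, 0.3)
    x, p = kalman_step(x, p, z)
# By the end, x has adapted to the post-drift value near 1.0.
```

The point mirrors the abstract: because the gain keeps weighting fresh observations, the estimate tracks the post-drift value instead of staying anchored to stale data.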
Comparing Temporal Graphs Using Dynamic Time Warping
Title | Comparing Temporal Graphs Using Dynamic Time Warping |
Authors | Vincent Froese, Brijnesh Jain, Rolf Niedermeier, Malte Renken |
Abstract | Within many real-world networks, the links between pairs of nodes change over time. Thus, there has been a recent boom in studying temporal graphs. Recognizing patterns in temporal graphs requires a similarity (dissimilarity) measure to compare different temporal graphs. To this end, we propose to study dynamic time warping on temporal graphs. We define the dynamic temporal graph warping distance (dtgw) to determine the similarity of two temporal graphs. Our novel measure is flexible and can be applied in various application domains. We show that computing the dtgw-distance is a challenging optimization problem that is NP-hard in general, and we identify some polynomial-time solvable special cases. Moreover, we develop a quadratic programming formulation and an efficient heuristic. In experiments on real-world data we show that the heuristic performs very well and that our dtgw-distance performs favorably in de-anonymizing networks compared to other approaches. |
Tasks | Time Series |
Published | 2018-10-15 |
URL | https://arxiv.org/abs/1810.06240v3 |
https://arxiv.org/pdf/1810.06240v3.pdf | |
PWC | https://paperswithcode.com/paper/comparing-temporal-graphs-using-dynamic-time |
Repo | |
Framework | |
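The dtgw-distance couples time warping with a vertex matching between the two graphs; the warping half of that idea is classic dynamic time warping, which is short enough to sketch directly. This is textbook DTW on scalar sequences, not the paper's dtgw algorithm:

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic time warping distance between two sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = cheapest warping of a[:i] onto b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # b[j-1] stretched
                                  dp[i][j - 1],      # a[i-1] stretched
                                  dp[i - 1][j - 1])  # both advance
    return dp[n][m]
```

For example, `dtw([1, 2, 3], [1, 2, 2, 3])` is `0.0`: the repeated 2 is absorbed by warping, which is exactly the kind of temporal flexibility the dtgw measure extends to graph sequences.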
Probabilistic Inference Using Generators - The Statues Algorithm
Title | Probabilistic Inference Using Generators - The Statues Algorithm |
Authors | Pierre Denis |
Abstract | We present a new probabilistic inference algorithm that gives exact results in the domain of discrete probability distributions. This algorithm, named the Statues algorithm, calculates the marginal probability distribution of probabilistic models defined as directed acyclic graphs. These models are made up of well-defined primitives that allow one to express, in particular, joint probability distributions, Bayesian networks, discrete Markov chains, conditioning, and probabilistic arithmetic. The Statues algorithm relies on a variable binding mechanism based on the generator construct, a special form of coroutine; although related to the enumeration algorithm, this new algorithm brings important improvements in efficiency, which makes it valuable compared with other exact marginalization algorithms. After introducing several definitions, primitives, and compositional rules, we present the Statues algorithm in detail. Then we briefly discuss its advantages over other algorithms and present possible extensions. Finally, we introduce Lea and MicroLea, two Python libraries implementing the Statues algorithm, along with several use cases. A proof of the correctness of the algorithm is provided in the appendix. |
Tasks | |
Published | 2018-06-24 |
URL | http://arxiv.org/abs/1806.09997v2 |
http://arxiv.org/pdf/1806.09997v2.pdf | |
PWC | https://paperswithcode.com/paper/probabilistic-inference-using-generators-the |
Repo | |
Framework | |
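The abstract's appeal to generators can be illustrated with a minimal exact-enumeration example: each discrete variable is exposed as a generator of (value, probability) pairs, and marginals are computed by exhausting the generators. This is a plain enumeration sketch, not the Statues algorithm itself, and the function names are invented:

```python
def die():
    """Generator yielding (value, probability) pairs for a fair die."""
    for v in range(1, 7):
        yield v, 1 / 6

def sum_of(*factories):
    """Exact distribution of a sum of independent discrete variables,
    each given as a factory returning a (value, prob) generator."""
    dist = {0: 1.0}
    for factory in factories:
        new = {}
        for total, p in dist.items():
            for v, q in factory():
                new[total + v] = new.get(total + v, 0.0) + p * q
        dist = new
    return dist

two_dice = sum_of(die, die)  # exact distribution of the sum of 2 dice
```

Here `two_dice[7]` recovers P(sum = 7) = 6/36 up to floating-point rounding, with no sampling error; that exactness, obtained lazily through generators, is the flavor of inference the paper formalizes.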
On the Importance of Strong Baselines in Bayesian Deep Learning
Title | On the Importance of Strong Baselines in Bayesian Deep Learning |
Authors | Jishnu Mukhoti, Pontus Stenetorp, Yarin Gal |
Abstract | Like all sub-fields of machine learning, Bayesian Deep Learning is driven by empirical validation of its theoretical proposals. Given the many aspects of an experiment, it is always possible that minor or even major experimental flaws can slip by both authors and reviewers. One of the most popular experiments used to evaluate approximate inference techniques is the regression experiment on UCI datasets. However, in this experiment, models which have been trained to convergence have often been compared with baselines trained only for a fixed number of iterations. We find that a well-established baseline, Monte Carlo dropout, shows significant improvements when evaluated under the same experimental settings. In fact, the baseline outperforms or performs competitively with methods that claimed to be superior to that very same baseline when they were introduced. Hence, by exposing this flaw in experimental procedure, we highlight the importance of using identical experimental setups to evaluate, compare, and benchmark methods in Bayesian Deep Learning. |
Tasks | |
Published | 2018-11-23 |
URL | http://arxiv.org/abs/1811.09385v2 |
http://arxiv.org/pdf/1811.09385v2.pdf | |
PWC | https://paperswithcode.com/paper/on-the-importance-of-strong-baselines-in |
Repo | |
Framework | |
Niching an Archive-based Gaussian Estimation of Distribution Algorithm via Adaptive Clustering
Title | Niching an Archive-based Gaussian Estimation of Distribution Algorithm via Adaptive Clustering |
Authors | Yongsheng Liang, Zhigang Ren, Bei Pang, An Chen |
Abstract | As a model-based evolutionary algorithm, the estimation of distribution algorithm (EDA) possesses unique characteristics and has been widely applied to global optimization. However, traditional Gaussian EDA (GEDA) may suffer from premature convergence and has a high risk of falling into local optima when dealing with multimodal problems. In this paper, we first attempt to improve the performance of GEDA by utilizing historical solutions, developing a novel archive-based EDA variant. The use of historical solutions not only enhances the search efficiency of EDA to a large extent, but also significantly reduces the population size, so that faster convergence can be achieved. The archive-based EDA is then further integrated with a novel adaptive clustering strategy for solving multimodal optimization problems. Taking advantage of the clustering strategy's ability to locate different promising areas and the powerful exploitation ability of the archive-based EDA, the resulting algorithm is endowed with a strong capability for finding multiple optima. To verify the efficiency of the proposed algorithm, we tested it on a set of well-known niching benchmark problems and compared it with several state-of-the-art niching algorithms. The experimental results indicate that the proposed algorithm is competitive. |
Tasks | |
Published | 2018-03-01 |
URL | http://arxiv.org/abs/1803.00986v1 |
http://arxiv.org/pdf/1803.00986v1.pdf | |
PWC | https://paperswithcode.com/paper/niching-an-archive-based-gaussian-estimation |
Repo | |
Framework | |
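For readers unfamiliar with EDAs, the basic Gaussian EDA loop the paper builds on, namely sample from a Gaussian model, select the best solutions, refit the model, fits in a few lines. This sketch is the plain 1-D GEDA baseline with no archive and no niching, and all constants are illustrative:

```python
import random
import statistics

random.seed(2)

def gaussian_eda(f, mu=5.0, sigma=3.0, pop=50, elite=10, iters=40):
    """Minimal 1-D Gaussian EDA: sample a population from the current
    Gaussian model, keep the best solutions, refit the model to them."""
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for _ in range(pop)]
        xs.sort(key=f)                           # minimize f
        elites = xs[:elite]
        mu = statistics.mean(elites)
        sigma = statistics.stdev(elites) + 1e-6  # guard against collapse
    return mu

# Minimize (x - 1)^2: the model should concentrate near x = 1.
best = gaussian_eda(lambda x: (x - 1.0) ** 2)
```

On this unimodal toy the loop homes in on the optimum; the premature convergence the abstract warns about shows up when a multimodal objective makes the single Gaussian collapse onto one basin, which is what the archive and the clustering strategy are designed to counter.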
dAIrector: Automatic Story Beat Generation through Knowledge Synthesis
Title | dAIrector: Automatic Story Beat Generation through Knowledge Synthesis |
Authors | Markus Eger, Kory W. Mathewson |
Abstract | dAIrector is an automated director which collaborates with human storytellers for live improvisational performances and writing assistance. dAIrector can be used to create short narrative arcs through contextual plot generation. In this work, we present the system architecture, a quantitative evaluation of design choices, and a case study of the system which provides qualitative feedback from a professional improvisational performer. We present relevant metrics for the understudied domain of human-machine creative generation, specifically long-form narrative creation. Alongside this publication, we include open-source code so that others may test, evaluate, and run dAIrector. |
Tasks | |
Published | 2018-10-31 |
URL | https://arxiv.org/abs/1811.03423v1 |
https://arxiv.org/pdf/1811.03423v1.pdf | |
PWC | https://paperswithcode.com/paper/dairector-automatic-story-beat-generation |
Repo | |
Framework | |
FRAME Revisited: An Interpretation View Based on Particle Evolution
Title | FRAME Revisited: An Interpretation View Based on Particle Evolution |
Authors | Xu Cai, Yang Wu, Guanbin Li, Ziliang Chen, Liang Lin |
Abstract | FRAME (Filters, Random fields, And Maximum Entropy) is an energy-based descriptive model that synthesizes visual realism by capturing mutual patterns from structural input signals. Maximum likelihood estimation (MLE) is applied by default, yet it conventionally causes unstable training energy that wrecks the generated structures, a phenomenon that has remained unexplained. In this paper, we provide a new theoretical insight for analyzing FRAME from a particle-physics perspective, ascribing this odd phenomenon to a KL-vanishing issue. To stabilize the energy dissipation, we propose an alternative Wasserstein distance in discrete time, based on the conclusion that the Jordan-Kinderlehrer-Otto (JKO) discrete flow approximates the KL discrete flow when the time step size tends to 0. Moreover, this metric maintains the model's statistical consistency. Quantitative and qualitative experiments have been conducted on several widely used datasets, and the empirical studies evidence the effectiveness and superiority of our method. |
Tasks | |
Published | 2018-12-04 |
URL | http://arxiv.org/abs/1812.01186v4 |
http://arxiv.org/pdf/1812.01186v4.pdf | |
PWC | https://paperswithcode.com/paper/frame-revisited-an-interpretation-view-based |
Repo | |
Framework | |
A Simple Way to Deal with Cherry-picking
Title | A Simple Way to Deal with Cherry-picking |
Authors | Junpei Komiyama, Takanori Maehara |
Abstract | Statistical hypothesis testing serves as statistical evidence for scientific innovation. However, if the reported results are intentionally biased, hypothesis testing no longer controls the rate of false discovery. In particular, we study such selection bias in machine learning models where the reporter is motivated to promote an algorithmic innovation. When the number of possible configurations (e.g., datasets) is large, we show that the reporter can falsely report an innovation even if there is no improvement at all. We propose a 'post-reporting' solution to this issue where the bias of the reported results is verified by another set of results. The theoretical findings are supported by experimental results with synthetic and real-world datasets. |
Tasks | |
Published | 2018-10-11 |
URL | http://arxiv.org/abs/1810.04996v1 |
http://arxiv.org/pdf/1810.04996v1.pdf | |
PWC | https://paperswithcode.com/paper/a-simple-way-to-deal-with-cherry-picking |
Repo | |
Framework | |
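The abstract's central claim, that many configurations plus selective reporting yield spurious "innovations", is easy to reproduce with a simulation. The numbers below are illustrative, not from the paper:

```python
import random

random.seed(0)

def best_observed_gain(n_configs=100, n_trials=30):
    """Max over configurations of the mean observed 'improvement',
    when the true improvement is exactly zero everywhere."""
    gains = []
    for _ in range(n_configs):
        trials = [random.gauss(0.0, 1.0) for _ in range(n_trials)]
        gains.append(sum(trials) / n_trials)
    return max(gains)

# Every configuration is pure noise, yet the cherry-picked best one
# still shows a comfortably positive "gain".
reported = best_observed_gain()
```

With 100 configurations of 30 trials each, the best mean is typically two to three standard errors above zero, which a naive test on the reported configuration alone would read as a significant improvement; verifying it against a fresh set of results, as the paper proposes, exposes the selection.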
Learning monocular visual odometry with dense 3D mapping from dense 3D flow
Title | Learning monocular visual odometry with dense 3D mapping from dense 3D flow |
Authors | Cheng Zhao, Li Sun, Pulak Purkait, Tom Duckett, Rustam Stolkin |
Abstract | This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, bivariate Gaussian modelling is employed in the loss function. The L-VO network achieves an overall performance of 2.68% for average translational error and 0.0143 deg/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is fully leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved. |
Tasks | Monocular Visual Odometry, Visual Odometry |
Published | 2018-03-06 |
URL | http://arxiv.org/abs/1803.02286v2 |
http://arxiv.org/pdf/1803.02286v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-monocular-visual-odometry-with-dense |
Repo | |
Framework | |
Your 2 is My 1, Your 3 is My 9: Handling Arbitrary Miscalibrations in Ratings
Title | Your 2 is My 1, Your 3 is My 9: Handling Arbitrary Miscalibrations in Ratings |
Authors | Jingyan Wang, Nihar B. Shah |
Abstract | Cardinal scores (numeric ratings) collected from people are well known to suffer from miscalibrations. A popular approach to address this issue is to assume simplistic models of miscalibration (such as linear biases) to de-bias the scores. This approach, however, often fares poorly because people’s miscalibrations are typically far more complex and not well understood. In the absence of simplifying assumptions on the miscalibration, it is widely believed by the crowdsourcing community that the only useful information in the cardinal scores is the induced ranking. In this paper, inspired by the framework of Stein’s shrinkage, empirical Bayes, and the classic two-envelope problem, we contest this widespread belief. Specifically, we consider cardinal scores with arbitrary (or even adversarially chosen) miscalibrations which are only required to be consistent with the induced ranking. We design estimators which despite making no assumptions on the miscalibration, strictly and uniformly outperform all possible estimators that rely on only the ranking. Our estimators are flexible in that they can be used as a plug-in for a variety of applications, and we provide a proof-of-concept for A/B testing and ranking. Our results thus provide novel insights in the eternal debate between cardinal and ordinal data. |
Tasks | |
Published | 2018-06-13 |
URL | http://arxiv.org/abs/1806.05085v2 |
http://arxiv.org/pdf/1806.05085v2.pdf | |
PWC | https://paperswithcode.com/paper/your-2-is-my-1-your-3-is-my-9-handling |
Repo | |
Framework | |
TED: Teaching AI to Explain its Decisions
Title | TED: Teaching AI to Explain its Decisions |
Authors | Michael Hind, Dennis Wei, Murray Campbell, Noel C. F. Codella, Amit Dhurandhar, Aleksandra Mojsilović, Karthikeyan Natesan Ramamurthy, Kush R. Varshney |
Abstract | Artificial intelligence systems are being increasingly deployed due to their potential to increase the efficiency, scale, consistency, fairness, and accuracy of decisions. However, as many of these systems are opaque in their operation, there is a growing demand for such systems to provide explanations for their decisions. Conventional approaches to this problem attempt to expose or discover the inner workings of a machine learning model with the hope that the resulting explanations will be meaningful to the consumer. In contrast, this paper suggests a new approach to this problem. It introduces a simple, practical framework, called Teaching Explanations for Decisions (TED), that provides meaningful explanations that match the mental model of the consumer. We illustrate the generality and effectiveness of this approach with two different examples, resulting in highly accurate explanations with no loss of prediction accuracy for these two examples. |
Tasks | |
Published | 2018-11-12 |
URL | https://arxiv.org/abs/1811.04896v2 |
https://arxiv.org/pdf/1811.04896v2.pdf | |
PWC | https://paperswithcode.com/paper/ted-teaching-ai-to-explain-its-decisions |
Repo | |
Framework | |
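One way to read the TED framework is that decision-explanation pairs become the training targets, so a classifier learns them jointly. The toy below uses that pair-as-label encoding with a trivial majority-vote "classifier"; the loan-style data and all names are invented for illustration, and the actual framework is not limited to such lookup models:

```python
from collections import Counter

# Toy training triples: (feature, decision, explanation).
train = [
    (0, "deny",    "low income"),
    (0, "deny",    "low income"),
    (1, "approve", "good history"),
    (1, "approve", "good history"),
]

def fit_majority(data):
    """Treat each (decision, explanation) pair as one combined class
    and fit the simplest possible classifier: per-feature majority."""
    by_feature = {}
    for x, y, e in data:
        by_feature.setdefault(x, Counter())[(y, e)] += 1
    return {x: c.most_common(1)[0][0] for x, c in by_feature.items()}

model = fit_majority(train)
decision, explanation = model[0]  # both recovered from one prediction
```

Because the explanation is part of the target rather than derived from model internals, the prediction carries a human-authored rationale, which is the sense in which the explanations match the consumer's mental model.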
Back-Translation Sampling by Targeting Difficult Words in Neural Machine Translation
Title | Back-Translation Sampling by Targeting Difficult Words in Neural Machine Translation |
Authors | Marzieh Fadaee, Christof Monz |
Abstract | Neural Machine Translation has achieved state-of-the-art performance for several language pairs using a combination of parallel and synthetic data. Synthetic data is often generated by back-translating sentences randomly sampled from monolingual data using a reverse translation model. While back-translation has been shown to be very effective in many cases, it is not entirely clear why. In this work, we explore different aspects of back-translation, and show that words with high prediction loss during training benefit most from the addition of synthetic data. We introduce several variations of sampling strategies targeting difficult-to-predict words using prediction losses and word frequencies. In addition, we target the contexts of difficult words and sample sentences that are similar in context. Experimental results for the WMT news translation task show that our method improves translation quality by up to 1.7 and 1.2 BLEU points over back-translation using random sampling for German-English and English-German, respectively. |
Tasks | Machine Translation |
Published | 2018-08-27 |
URL | http://arxiv.org/abs/1808.09006v2 |
http://arxiv.org/pdf/1808.09006v2.pdf | |
PWC | https://paperswithcode.com/paper/back-translation-sampling-by-targeting |
Repo | |
Framework | |
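The sampling idea in the abstract, preferring monolingual sentences that contain words the model predicts poorly, can be sketched with a toy loss table. The losses, the vocabulary, and the max-based difficulty score below are all invented; the paper explores several such strategies rather than this exact one:

```python
import random

random.seed(3)

# Hypothetical per-word prediction losses from a translation model;
# rare words like "transmogrify" are assumed to be the hard ones.
word_loss = {"the": 0.1, "cat": 0.4, "sat": 0.3, "mat": 0.5,
             "quark": 2.0, "transmogrify": 2.5}

monolingual = [
    "the cat sat".split(),
    "the quark sat".split(),
    "transmogrify the mat".split(),
]

def difficulty(sentence):
    """Score a sentence by the max loss of its words, so sentences with
    hard-to-predict words are favored for back-translation."""
    return max(word_loss.get(w, 1.0) for w in sentence)

weights = [difficulty(s) for s in monolingual]
# Sample monolingual sentences proportionally to their difficulty.
picked = random.choices(monolingual, weights=weights, k=5)
```

The sampled sentences would then be back-translated into synthetic source-target pairs, concentrating the synthetic data on the words that benefit most from it.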