Paper Group AWR 440
Equalizing Gender Biases in Neural Machine Translation with Word Embeddings Techniques
Title | Equalizing Gender Biases in Neural Machine Translation with Word Embeddings Techniques |
Authors | Joel Escudé Font, Marta R. Costa-jussà |
Abstract | Neural machine translation has significantly pushed forward the quality of the field. However, major issues remain with the output translations, and one of them is fairness. Neural models are trained on large text corpora that contain biases and stereotypes; as a consequence, models inherit these social biases. Recent methods have shown promising results in reducing gender bias in other natural language processing tools such as word embeddings. We take advantage of the fact that word embeddings are used in neural machine translation to propose a method for equalizing gender biases in neural machine translation using these representations. Specifically, we propose, experiment with, and analyze the integration of two debiasing techniques over GloVe embeddings in the Transformer translation architecture. We evaluate our proposed system on the WMT English-Spanish benchmark task, showing gains of up to one BLEU point. As for the gender bias evaluation, we generate a test set of occupations and show that our proposed system learns to equalize biases present in the baseline system. |
Tasks | Machine Translation, Word Embeddings |
Published | 2019-01-10 |
URL | https://arxiv.org/abs/1901.03116v2 |
PDF | https://arxiv.org/pdf/1901.03116v2.pdf |
PWC | https://paperswithcode.com/paper/equalizing-gender-biases-in-neural-machine |
Repo | https://github.com/joelescudefont/genbiasmt |
Framework | none |
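A minimal sketch of the kind of embedding debiasing the paper builds on (the hard-debiasing "neutralize" step of Bolukbasi et al.), with toy vectors standing in for pre-trained GloVe; the word list and dimensions are illustrative, not the authors' setup:

```python
import numpy as np

def gender_direction(emb, pairs=(("he", "she"), ("man", "woman"))):
    """Estimate a gender direction as the mean difference of definitional pairs."""
    diffs = [emb[a] - emb[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def neutralize(vec, direction):
    """Remove the component of `vec` that lies along the gender direction."""
    return vec - np.dot(vec, direction) * direction

# Toy 4-d embeddings (in practice: pre-trained GloVe vectors).
emb = {
    "he":     np.array([ 1.0, 0.2, 0.0, 0.1]),
    "she":    np.array([-1.0, 0.2, 0.0, 0.1]),
    "man":    np.array([ 0.9, 0.1, 0.3, 0.0]),
    "woman":  np.array([-0.9, 0.1, 0.3, 0.0]),
    "doctor": np.array([ 0.4, 0.5, 0.6, 0.2]),  # biased toward "he"
}
d = gender_direction(emb)
emb["doctor"] = neutralize(emb["doctor"], d)
print(np.dot(emb["doctor"], d))  # ~0: occupation no longer leans either way
```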
Graph Element Networks: adaptive, structured computation and memory
Title | Graph Element Networks: adaptive, structured computation and memory |
Authors | Ferran Alet, Adarsh K. Jeewajee, Maria Bauza, Alberto Rodriguez, Tomas Lozano-Perez, Leslie Pack Kaelbling |
Abstract | We explore the use of graph neural networks (GNNs) to model spatial processes in which there is no a priori graphical structure. Similar to finite element analysis, we assign nodes of a GNN to spatial locations and use a computational process defined on the graph to model the relationship between an initial function defined over a space and a resulting function in the same space. We use GNNs as a computational substrate, and show that the locations of the nodes in space as well as their connectivity can be optimized to focus on the most complex parts of the space. Moreover, this representational strategy allows the learned input-output relationship to generalize over the size of the underlying space and run the same model at different levels of precision, trading computation for accuracy. We demonstrate this method on a traditional PDE problem, a physical prediction problem from robotics, and learning to predict scene images from novel viewpoints. |
Tasks | |
Published | 2019-04-18 |
URL | https://arxiv.org/abs/1904.09019v5 |
PDF | https://arxiv.org/pdf/1904.09019v5.pdf |
PWC | https://paperswithcode.com/paper/graph-element-networks-adaptive-structured |
Repo | https://github.com/patrickjohncyh/gen-mnist |
Framework | pytorch |
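A rough sketch, under heavy simplification, of the graph-element-network idea: GNN nodes live at spatial locations, observed function values are encoded into nearby nodes, messages are passed, and the output is read off at arbitrary query points. The weights and the soft-assignment scheme here are illustrative stand-ins for the learned components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Place GNN nodes at (learnable, here fixed) locations in the unit square.
nodes = rng.uniform(0, 1, size=(16, 2))
H = rng.normal(size=(16, 8)) * 0.1          # latent state per node
W_msg = rng.normal(size=(8, 8)) * 0.1       # message weights (learned in practice)

def soft_assign(points, nodes, tau=0.1):
    """Softmax over negative distances: how much each point talks to each node."""
    d2 = ((points[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    a = np.exp(-d2 / tau)
    return a / a.sum(1, keepdims=True)

# Encode: scatter observed function values into node states.
obs_xy = rng.uniform(0, 1, size=(32, 2))
obs_val = np.sin(obs_xy[:, 0] * 3)          # the "input function"
A = soft_assign(obs_xy, nodes)
msg = A.T @ obs_val[:, None]                # (16, 1): value each node absorbs
H = H + msg * np.ones(8)                    # broadcast into the latent width

# Process: a few rounds of message passing between spatially nearby nodes.
adj = (((nodes[:, None] - nodes[None, :]) ** 2).sum(-1) < 0.1).astype(float)
adj /= adj.sum(1, keepdims=True)
for _ in range(3):
    H = np.tanh(adj @ H @ W_msg)

# Decode: read the output function at arbitrary query locations.
query = rng.uniform(0, 1, size=(5, 2))
out = soft_assign(query, nodes) @ H         # (5, 8) latent readout
print(out.shape)
```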
Demystifying Brain Tumour Segmentation Networks: Interpretability and Uncertainty Analysis
Title | Demystifying Brain Tumour Segmentation Networks: Interpretability and Uncertainty Analysis |
Authors | Parth Natekar, Avinash Kori, Ganapathy Krishnamurthi |
Abstract | The accurate automatic segmentation of gliomas and their intra-tumoral structures is important not only for treatment planning but also for follow-up evaluations. Several methods based on 2D and 3D deep neural networks (DNNs) have been developed to segment brain tumors and to classify different categories of tumors from different MRI modalities. However, these networks are often black-box models and do not provide any evidence regarding the process they take to perform this task. Increasing the transparency and interpretability of such deep learning techniques is necessary for the complete integration of such methods into medical practice. In this paper, we explore various techniques to explain the functional organization of brain tumor segmentation models and to extract visualizations of internal concepts to understand how these networks achieve highly accurate tumor segmentations. We use the BraTS 2018 dataset to train three different networks with standard architectures and outline similarities and differences in the process that these networks take to segment brain tumors. We show that brain tumor segmentation networks learn certain human-understandable disentangled concepts at the filter level. We also show that they take a top-down or hierarchical approach to localizing the different parts of the tumor. We then extract visualizations of some internal feature maps and also provide a measure of uncertainty with regard to the outputs of the models to give additional qualitative evidence about the predictions of these networks. We believe that the emergence of such human-understandable organization and concepts might aid in the acceptance and integration of such methods into medical diagnosis. |
Tasks | Brain Tumor Segmentation, Medical Diagnosis |
Published | 2019-09-03 |
URL | https://arxiv.org/abs/1909.01498v3 |
PDF | https://arxiv.org/pdf/1909.01498v3.pdf |
PWC | https://paperswithcode.com/paper/demystifying-brain-tumour-segmentation |
Repo | https://github.com/koriavinash1/BioExp |
Framework | tf |
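Per-pixel uncertainty of the kind the paper reports can be illustrated with Monte Carlo dropout, a standard technique for segmentation networks; the toy model below and the choice of predictive entropy are assumptions for illustration, not necessarily the authors' exact procedure:

```python
import torch
import torch.nn as nn

# A toy segmentation head with dropout; real models would be 3D U-Net-style.
model = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.3),
    nn.Conv2d(16, 4, 1),             # 4 BraTS-style classes per pixel
)

def mc_dropout_predict(model, x, n_samples=20):
    """Keep dropout active at test time and average stochastic forward passes."""
    model.train()                    # enables dropout (no gradient updates here)
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=1) for _ in range(n_samples)
        ])
    mean = probs.mean(0)             # per-pixel class probabilities
    # Predictive entropy as a per-pixel uncertainty map.
    entropy = -(mean * mean.clamp_min(1e-8).log()).sum(1)
    return mean, entropy

x = torch.randn(1, 4, 64, 64)         # 4 MRI modalities, one slice
mean, uncertainty = mc_dropout_predict(model, x)
print(mean.shape, uncertainty.shape)  # (1, 4, 64, 64) (1, 64, 64)
```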
Believe It or Not, We Know What You Are Looking at!
Title | Believe It or Not, We Know What You Are Looking at! |
Authors | Dongze Lian, Zehao Yu, Shenghua Gao |
Abstract | By borrowing the wisdom of humans in gaze following, we propose a two-stage solution for gaze point prediction of the target persons in a scene. Specifically, in the first stage, both the head image and its position are fed into a gaze direction pathway to predict the gaze direction, and then multi-scale gaze direction fields are generated to characterize the distribution of gaze points without considering the scene contents. In the second stage, the multi-scale gaze direction fields are concatenated with the image contents and fed into a heatmap pathway for heatmap regression. Our two-stage solution to gaze following has two merits: i) it mimics the behavior of humans in gaze following, and is therefore more psychologically plausible; ii) besides using the heatmap to supervise the output of our network, we can also leverage the gaze direction to facilitate the training of the gaze direction pathway, so our network can be trained more robustly. Considering that the existing gaze following dataset is annotated by third-person viewers, we build a video gaze following dataset whose ground truth is annotated by the observers in the videos themselves, and is therefore more reliable. Evaluation on such a dataset better reflects the capacity of different methods in real scenarios. Extensive experiments on both datasets show that our method significantly outperforms existing methods, which validates the effectiveness of our solution for gaze following. Our dataset and code are released at https://github.com/svip-lab/GazeFollowing. |
Tasks | |
Published | 2019-07-04 |
URL | https://arxiv.org/abs/1907.02364v1 |
PDF | https://arxiv.org/pdf/1907.02364v1.pdf |
PWC | https://paperswithcode.com/paper/believe-it-or-not-we-know-what-you-are |
Repo | https://github.com/svip-lab/GazeFollowing |
Framework | pytorch |
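A small sketch of what a single-scale gaze direction field could look like: a value per pixel that grows with alignment between the head-to-pixel vector and the predicted gaze direction. The field shape and sharpening exponent are assumptions; the paper generates several such fields at multiple scales:

```python
import numpy as np

def gaze_direction_field(h, w, head_xy, direction, gamma=5.0):
    """Field whose value at each pixel grows with the alignment between the
    head-to-pixel vector and the predicted gaze direction (a unit vector)."""
    ys, xs = np.mgrid[0:h, 0:w]
    vx, vy = xs - head_xy[0], ys - head_xy[1]
    norm = np.sqrt(vx**2 + vy**2) + 1e-6
    cos = (vx * direction[0] + vy * direction[1]) / norm
    return np.clip(cos, 0, 1) ** gamma       # sharpen around the gaze ray

# Head at the image centre, predicted gaze pointing down-right.
field = gaze_direction_field(64, 64, head_xy=(32, 32),
                             direction=np.array([1.0, 1.0]) / np.sqrt(2))
print(field.shape, field.max())
# A second stage would concatenate multi-scale versions of this field
# with the image and regress the final gaze heatmap.
```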
Learning Edge Properties in Graphs from Path Aggregations
Title | Learning Edge Properties in Graphs from Path Aggregations |
Authors | Rakshit Agrawal, Luca de Alfaro |
Abstract | Graph edges, along with their labels, can represent information of fundamental importance, such as links between web pages, friendship between users, the ratings given by users to other users or items, and much more. We introduce LEAP, a trainable, general framework for predicting the presence and properties of edges on the basis of the local structure, topology, and labels of the graph. The LEAP framework is based on the exploration and machine-learning aggregation of the paths connecting nodes in a graph. We provide several methods for performing the aggregation phase by training path aggregators, and we demonstrate the flexibility and generality of the framework by applying it to the prediction of links and user ratings in social networks. We validate the LEAP framework on two problems: link prediction and user rating prediction. On eight large datasets, including the arXiv collaboration network, the Yeast protein-protein interaction network, and the US airline routes network, we show that the link prediction performance of LEAP is at least as good as that of current state-of-the-art methods, such as SEAL and WLNM. Next, we consider the problem of predicting user ratings of other users: this problem is known as the edge-weight prediction problem in weighted signed networks (WSN). On Bitcoin networks and Wikipedia RfA, we show that LEAP performs consistently better than Fairness & Goodness based regression models when the fraction of training edges is varied between 10% and 90%. These examples demonstrate that LEAP, in spite of its generality, can match or best the performance of approaches that have been specially crafted to solve very specific edge prediction problems. |
Tasks | Link Prediction |
Published | 2019-03-11 |
URL | http://arxiv.org/abs/1903.04613v1 |
PDF | http://arxiv.org/pdf/1903.04613v1.pdf |
PWC | https://paperswithcode.com/paper/learning-edge-properties-in-graphs-from-path |
Repo | https://github.com/rakshit-agrawal/LEAP |
Framework | tf |
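A toy illustration of path-based evidence for an edge, assuming mean pooling as the aggregator (LEAP itself trains neural aggregators); the node features and hop cutoff are arbitrary choices for the example:

```python
import networkx as nx
import numpy as np

def path_features(G, u, v, feat, cutoff=3):
    """Collect per-node features along every simple path u -> v up to `cutoff`
    hops, then mean-pool within and across paths (one simple aggregator)."""
    paths = list(nx.all_simple_paths(G, u, v, cutoff=cutoff))
    if not paths:
        return np.zeros(next(iter(feat.values())).shape)
    pooled = [np.mean([feat[n] for n in p], axis=0) for p in paths]
    return np.mean(pooled, axis=0)   # feed this vector to an edge classifier

G = nx.karate_club_graph()
feat = {n: np.array([G.degree(n), nx.clustering(G, n)]) for n in G.nodes}
print(path_features(G, 0, 33))       # aggregated evidence for edge (0, 33)
```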
EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs
Title | EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs |
Authors | Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B. Schardl, Charles E. Leiserson |
Abstract | Graph representation learning resurges as a trending research subject owing to the widespread use of deep learning for Euclidean data, which has inspired various creative designs of neural networks in the non-Euclidean domain, particularly graphs. With the success of these graph neural networks (GNNs) in the static setting, we approach further practical scenarios where the graph dynamically evolves. Existing approaches typically resort to node embeddings and use a recurrent neural network (RNN, broadly speaking) to regulate the embeddings and learn the temporal dynamics. These methods require knowledge of a node over the full time span (including both training and testing) and are less applicable to frequent changes of the node set. In some extreme scenarios, the node sets at different time steps may differ completely. To resolve this challenge, we propose EvolveGCN, which adapts the graph convolutional network (GCN) model along the temporal dimension without resorting to node embeddings. The proposed approach captures the dynamism of the graph sequence by using an RNN to evolve the GCN parameters. Two architectures are considered for the parameter evolution. We evaluate the proposed approach on tasks including link prediction, edge classification, and node classification. The experimental results indicate generally higher performance of EvolveGCN compared with related approaches. The code is available at https://github.com/IBM/EvolveGCN. |
Tasks | Graph Representation Learning, Link Prediction, Node Classification, Representation Learning |
Published | 2019-02-26 |
URL | https://arxiv.org/abs/1902.10191v3 |
PDF | https://arxiv.org/pdf/1902.10191v3.pdf |
PWC | https://paperswithcode.com/paper/evolvegcn-evolving-graph-convolutional |
Repo | https://github.com/IBM/EvolveGCN |
Framework | pytorch |
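A simplified take on the core mechanism, assuming a GRU that treats the flattened GCN weight matrix as both its input and hidden state (the paper's two variants use an LSTM/GRU with slightly different inputs); dimensions and the toy snapshot loop are illustrative:

```python
import torch
import torch.nn as nn

class EvolvingGCNLayer(nn.Module):
    """GCN layer whose weight matrix is evolved across graph snapshots
    by a GRU, rather than being a single static parameter."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.shape = (in_dim, out_dim)
        n = in_dim * out_dim
        self.W0 = nn.Parameter(torch.randn(1, n) * 0.01)   # initial weights
        self.gru = nn.GRUCell(input_size=n, hidden_size=n)

    def forward(self, A_norm, X, W_flat):
        W_flat = self.gru(W_flat, W_flat)       # evolve weights as GRU state
        W = W_flat.view(self.shape)
        return torch.relu(A_norm @ X @ W), W_flat  # standard GCN propagation

layer = EvolvingGCNLayer(8, 8)
W_flat = layer.W0
A = torch.eye(5)                                 # toy normalized adjacency
for t in range(4):                               # a sequence of snapshots
    X = torch.randn(5, 8)                        # node features at time t
    H, W_flat = layer(A, X, W_flat)
print(H.shape)
```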
The Assistive Multi-Armed Bandit
Title | The Assistive Multi-Armed Bandit |
Authors | Lawrence Chan, Dylan Hadfield-Menell, Siddhartha Srinivasa, Anca Dragan |
Abstract | Learning the preferences implicit in the choices humans make is a well-studied problem in both economics and computer science. However, most work assumes that humans act (noisily) optimally with respect to their preferences. Such approaches can fail when people are themselves learning about what they want. In this work, we introduce the assistive multi-armed bandit, in which a robot assists a human playing a bandit task to maximize cumulative reward. In this problem, the human does not know the reward function but can learn it through the rewards received from arm pulls; the robot observes only which arms the human pulls, not the reward associated with each pull. We offer necessary and sufficient conditions for successfully assisting the human in this framework. Surprisingly, better human performance in isolation does not necessarily lead to better performance when assisted by the robot: a human policy can do better by effectively communicating its observed rewards to the robot. We conduct proof-of-concept experiments that support these results. We see this work as contributing towards a theory behind algorithms for human-robot interaction. |
Tasks | Multi-Armed Bandits |
Published | 2019-01-24 |
URL | http://arxiv.org/abs/1901.08654v1 |
PDF | http://arxiv.org/pdf/1901.08654v1.pdf |
PWC | https://paperswithcode.com/paper/the-assistive-multi-armed-bandit |
Repo | https://github.com/chanlaw/assistive-bandits |
Framework | tf |
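A toy simulation of the information asymmetry in this setting: the human learns from rewards, while the robot sees only the pulls and must infer the preferred arm, e.g., by weighting later (better-informed) pulls more heavily. The human and robot policies here are stand-ins, not the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.8])   # unknown to both human and robot
n_arms = len(true_means)

# Human: epsilon-greedy learner who sees the rewards of their own pulls.
est, counts = np.zeros(n_arms), np.zeros(n_arms)
pulls = []
for t in range(500):
    arm = rng.integers(n_arms) if rng.random() < 0.1 else int(np.argmax(est))
    r = rng.normal(true_means[arm], 0.1)
    counts[arm] += 1
    est[arm] += (r - est[arm]) / counts[arm]
    pulls.append(arm)

# Robot: sees only which arms were pulled, not the rewards. Late pulls are
# more informative about the human's learned preference than early ones.
weights = np.linspace(0, 1, len(pulls))
pref = np.bincount(pulls, weights=weights, minlength=n_arms)
print("robot's inferred best arm:", int(np.argmax(pref)))
```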
Neural Multisensory Scene Inference
Title | Neural Multisensory Scene Inference |
Authors | Jae Hyun Lim, Pedro O. Pinheiro, Negar Rostamzadeh, Christopher Pal, Sungjin Ahn |
Abstract | For embodied agents to infer representations of the underlying 3D physical world they inhabit, they should efficiently combine multisensory cues from numerous trials, e.g., by looking at and touching objects. Despite its importance, multisensory 3D scene representation learning has received less attention compared to the unimodal setting. In this paper, we propose the Generative Multisensory Network (GMN) for learning latent representations of 3D scenes which are partially observable through multiple sensory modalities. We also introduce a novel method, called the Amortized Product-of-Experts, to improve the computational efficiency and the robustness to unseen combinations of modalities at test time. Experimental results demonstrate that the proposed model can efficiently infer robust modality-invariant 3D-scene representations from arbitrary combinations of modalities and perform accurate cross-modal generation. To perform this exploration, we also develop the Multisensory Embodied 3D-Scene Environment (MESE). |
Tasks | Representation Learning |
Published | 2019-10-06 |
URL | https://arxiv.org/abs/1910.02344v2 |
PDF | https://arxiv.org/pdf/1910.02344v2.pdf |
PWC | https://paperswithcode.com/paper/neural-multisensory-scene-inference |
Repo | https://github.com/lim0606/multisensory-embodied-3D-scene-environment |
Framework | none |
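The Amortized Product-of-Experts builds on the classic product of Gaussian experts, which can be written down directly: multiplying per-modality Gaussian posteriors gives a precision-weighted fusion, and a missing modality simply drops out of the product. A minimal sketch (the amortization itself is omitted):

```python
import numpy as np

def product_of_experts(mus, sigmas):
    """Combine per-modality Gaussian posteriors q_m(z) = N(mu_m, sigma_m^2)
    into one Gaussian by multiplying densities (precision-weighted fusion)."""
    precisions = 1.0 / np.square(sigmas)        # one row per modality
    prec = precisions.sum(axis=0)
    mu = (precisions * mus).sum(axis=0) / prec
    return mu, np.sqrt(1.0 / prec)

# Vision is confident about z, touch less so; the product trusts vision more.
mus    = np.array([[0.9, 0.1], [0.3, 0.2]])
sigmas = np.array([[0.1, 0.5], [0.8, 0.6]])
mu, sigma = product_of_experts(mus, sigmas)
print(mu, sigma)  # dropping a row = handling a missing modality at test time
```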
Multi-Class Gaussian Process Classification Made Conjugate: Efficient Inference via Data Augmentation
Title | Multi-Class Gaussian Process Classification Made Conjugate: Efficient Inference via Data Augmentation |
Authors | Théo Galy-Fajou, Florian Wenzel, Christian Donner, Manfred Opper |
Abstract | We propose a new scalable multi-class Gaussian process classification approach building on a novel modified softmax likelihood function. The new likelihood has two benefits: it leads to well-calibrated uncertainty estimates and allows for an efficient latent variable augmentation. The augmented model has the advantage that it is conditionally conjugate, leading to a fast variational inference method via block coordinate ascent updates. Previous approaches suffered from a trade-off between uncertainty calibration and speed. Our experiments show that our method leads to well-calibrated uncertainty estimates and competitive predictive performance while being up to two orders of magnitude faster than the state of the art. |
Tasks | Bayesian Inference, Calibration, Data Augmentation |
Published | 2019-05-23 |
URL | https://arxiv.org/abs/1905.09670v1 |
PDF | https://arxiv.org/pdf/1905.09670v1.pdf |
PWC | https://paperswithcode.com/paper/multi-class-gaussian-process-classification |
Repo | https://github.com/UnofficialJuliaMirrorSnapshots/AugmentedGaussianProcesses.jl-38eea1fd-7d7d-5162-9d08-f89d0f2e271e |
Framework | none |
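The modified softmax at the heart of the method is a logistic-softmax: replace the exponential in the usual softmax with the logistic function, which is what opens the door to the conjugate augmentation. A quick numeric sketch; treat the exact form here as an assumption drawn from this line of work rather than a verbatim transcription of the paper:

```python
import numpy as np

def logistic_softmax(f):
    """Logistic-softmax likelihood: sigma(f_k) / sum_j sigma(f_j),
    i.e. the usual softmax with exp replaced by the logistic function."""
    s = 1.0 / (1.0 + np.exp(-f))
    return s / s.sum(axis=-1, keepdims=True)

f = np.array([[2.0, -1.0, 0.5]])     # latent GP values for 3 classes
print(logistic_softmax(f))           # well-behaved class probabilities
print(logistic_softmax(f).sum())     # rows sum to 1
```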
Comparing Semi-Parametric Model Learning Algorithms for Dynamic Model Estimation in Robotics
Title | Comparing Semi-Parametric Model Learning Algorithms for Dynamic Model Estimation in Robotics |
Authors | Sebastian Riedel, Freek Stulp |
Abstract | Physical modeling of robotic system behavior is the foundation for controlling many robotic mechanisms to a satisfactory degree. Mechanisms are also typically designed so that good model accuracy can be achieved with relatively simple models and model identification strategies. If physically based models are not accurate enough or become too complex, model-free methods based on machine learning techniques can help. Of particular interest to us was therefore the question of the degree to which semi-parametric modeling techniques, meaning combinations of physical models with machine learning, increase the modeling accuracy of the inverse dynamics models typically used in robot control. To this end, we evaluated semi-parametric Gaussian process regression and a novel model-based neural network architecture, and compared their modeling accuracy to a series of naive semi-parametric, parametric-only, and non-parametric-only regression methods. The comparison was carried out on three test scenarios, one involving a real test-bed and two involving simulated scenarios, with the most complex scenario targeting the modeling of a simulated robot's inverse dynamics. We found that, in all but one case, semi-parametric Gaussian process regression yields the most accurate models, with little tuning required for the training procedure. |
Tasks | |
Published | 2019-06-27 |
URL | https://arxiv.org/abs/1906.11909v1 |
PDF | https://arxiv.org/pdf/1906.11909v1.pdf |
PWC | https://paperswithcode.com/paper/comparing-semi-parametric-model-learning |
Repo | https://github.com/numahha/five |
Framework | none |
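Semi-parametric model learning of the kind compared here can be sketched in a few lines: fit the known physical structure first, then let a GP absorb the residual. The toy 1-DoF dynamics and the basis function are assumptions for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy 1-DoF "inverse dynamics": torque from angle, with unmodeled friction.
q = rng.uniform(-np.pi, np.pi, size=(200, 1))
tau = 3.0 * np.sin(q[:, 0]) + 0.5 * np.sign(q[:, 0]) + rng.normal(0, 0.05, 200)

# Parametric part: least-squares fit of the known physical structure.
phi = np.sin(q)                                  # known basis (gravity term)
theta, *_ = np.linalg.lstsq(phi, tau, rcond=None)
residual = tau - phi @ theta

# Non-parametric part: a GP soaks up whatever physics the model misses.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(q, residual)
tau_hat = phi @ theta + gp.predict(q)
print("RMSE:", np.sqrt(np.mean((tau - tau_hat) ** 2)))
```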
Hyper-Sphere Quantization: Communication-Efficient SGD for Federated Learning
Title | Hyper-Sphere Quantization: Communication-Efficient SGD for Federated Learning |
Authors | Xinyan Dai, Xiao Yan, Kaiwen Zhou, Han Yang, Kelvin K. W. Ng, James Cheng, Yu Fan |
Abstract | The high cost of communicating gradients is a major bottleneck for federated learning, as the bandwidth of the participating user devices is limited. Existing gradient compression algorithms are mainly designed for data centers with high-speed network and achieve $O(\sqrt{d} \log d)$ per-iteration communication cost at best, where $d$ is the size of the model. We propose hyper-sphere quantization (HSQ), a general framework that can be configured to achieve a continuum of trade-offs between communication efficiency and gradient accuracy. In particular, at the high compression ratio end, HSQ provides a low per-iteration communication cost of $O(\log d)$, which is favorable for federated learning. We prove the convergence of HSQ theoretically and show by experiments that HSQ significantly reduces the communication cost of model training without hurting convergence accuracy. |
Tasks | Quantization |
Published | 2019-11-12 |
URL | https://arxiv.org/abs/1911.04655v2 |
PDF | https://arxiv.org/pdf/1911.04655v2.pdf |
PWC | https://paperswithcode.com/paper/hyper-sphere-quantization-communication |
Repo | https://github.com/xinyandai/gradient-quantization |
Framework | pytorch |
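The gist of hyper-sphere quantization can be sketched as follows: transmit a gradient as a (nearest unit codeword index, norm) pair, so the index costs $O(\log d)$ bits for a polynomial-size codebook. A random codebook and greedy nearest-codeword search stand in for the paper's actual construction and unbiased estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def hsq_encode(g, codebook):
    """Send only (codeword index, norm): the index costs O(log d) bits
    when the codebook size is polynomial in d."""
    direction = g / np.linalg.norm(g)
    idx = int(np.argmax(codebook @ direction))   # most-aligned unit codeword
    return idx, np.linalg.norm(g)

def hsq_decode(idx, norm, codebook):
    return norm * codebook[idx]

d, k = 1024, 256                                 # model size, codebook size
codebook = rng.normal(size=(k, d))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)
g = rng.normal(size=d)                           # a gradient to communicate
idx, norm = hsq_encode(g, codebook)
g_hat = hsq_decode(idx, norm, codebook)
print("cosine(g, g_hat):", g @ g_hat / (np.linalg.norm(g) * norm))
```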
Enriching BERT with Knowledge Graph Embeddings for Document Classification
Title | Enriching BERT with Knowledge Graph Embeddings for Document Classification |
Authors | Malte Ostendorff, Peter Bourgonje, Maria Berger, Julian Moreno-Schneider, Georg Rehm, Bela Gipp |
Abstract | In this paper, we focus on the classification of books using short descriptive texts (cover blurbs) and additional metadata. Building upon BERT, a deep neural language model, we demonstrate how to combine text representations with metadata and knowledge graph embeddings, which encode author information. Compared to the standard BERT approach, we achieve considerably better results on the classification task. For a more coarse-grained classification using eight labels we achieve an F1-score of 87.20, while a detailed classification using 343 labels yields an F1-score of 64.70. We make the source code and trained models of our experiments publicly available. |
Tasks | Document Classification, Knowledge Graph Embeddings, Language Modelling |
Published | 2019-09-18 |
URL | https://arxiv.org/abs/1909.08402v1 |
PDF | https://arxiv.org/pdf/1909.08402v1.pdf |
PWC | https://paperswithcode.com/paper/enriching-bert-with-knowledge-graph |
Repo | https://github.com/malteos/pytorch-bert-document-classification |
Framework | pytorch |
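A minimal sketch of the fusion the abstract describes: concatenate a text vector (e.g., BERT's [CLS] output), metadata features, and an author knowledge-graph embedding, then classify with an MLP. All dimensions and layer sizes are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class BlurbClassifier(nn.Module):
    """Concatenate a text representation with metadata features and author
    knowledge-graph embeddings, then classify with a small MLP head."""
    def __init__(self, text_dim=768, meta_dim=10, kg_dim=100, n_labels=8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + meta_dim + kg_dim, 256),
            nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(256, n_labels),
        )

    def forward(self, text_vec, meta, kg_vec):
        return self.head(torch.cat([text_vec, meta, kg_vec], dim=-1))

# text_vec would be BERT's [CLS] output; kg_vec a pretrained author embedding.
model = BlurbClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 10), torch.randn(4, 100))
print(logits.shape)   # (4, 8) for the coarse-grained 8-label task
```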
Using a Logarithmic Mapping to Enable Lower Discount Factors in Reinforcement Learning
Title | Using a Logarithmic Mapping to Enable Lower Discount Factors in Reinforcement Learning |
Authors | Harm van Seijen, Mehdi Fatemi, Arash Tavakoli |
Abstract | In an effort to better understand the different ways in which the discount factor affects the optimization process in reinforcement learning, we designed a set of experiments to study each effect in isolation. Our analysis reveals that the common perception that poor performance of low discount factors is caused by (too) small action-gaps requires revision. We propose an alternative hypothesis that identifies the size-difference of the action-gap across the state-space as the primary cause. We then introduce a new method that enables more homogeneous action-gaps by mapping value estimates to a logarithmic space. We prove convergence for this method under standard assumptions and demonstrate empirically that it indeed enables lower discount factors for approximate reinforcement-learning methods. This in turn allows tackling a class of reinforcement-learning problems that are challenging to solve with traditional methods. |
Tasks | |
Published | 2019-06-03 |
URL | https://arxiv.org/abs/1906.00572v2 |
PDF | https://arxiv.org/pdf/1906.00572v2.pdf |
PWC | https://paperswithcode.com/paper/190600572 |
Repo | https://github.com/microsoft/logrl |
Framework | tf |
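A heavily simplified sketch of the logarithmic-mapping idea for tabular Q-learning, assuming nonnegative returns (the paper handles negative values with a two-component decomposition and proves convergence for its full method); the toy MDP and constants are placeholders:

```python
import numpy as np

# Keep Q-values in a logarithmic space so TD updates act multiplicatively,
# homogenizing action-gaps across the state space.
c, eps = 1.0, 1e-4
f     = lambda q: c * np.log(q + eps)        # linear -> log space
f_inv = lambda h: np.exp(h / c) - eps        # log -> linear space

n_states, n_actions, gamma, alpha = 5, 2, 0.1, 0.2   # note the LOW gamma
h = np.full((n_states, n_actions), f(0.0))           # log-space Q-table

rng = np.random.default_rng(0)
s = 0
for t in range(2000):
    a = rng.integers(n_actions)
    r, s2 = float(a == s % 2), (s + 1) % n_states    # toy deterministic MDP
    target = r + gamma * f_inv(h[s2].max())          # TD target in linear space
    h[s, a] += alpha * (f(target) - h[s, a])         # ...update in log space
    s = s2
print(f_inv(h))                                      # recovered Q-values
```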
A Deep Generative Model for Code-Switched Text
Title | A Deep Generative Model for Code-Switched Text |
Authors | Bidisha Samanta, Sharmila Reddy, Hussain Jagirdar, Niloy Ganguly, Soumen Chakrabarti |
Abstract | Code-switching, the interleaving of two or more languages within a sentence or discourse, is pervasive in multilingual societies. Accurate language models for code-switched text are critical for NLP tasks. State-of-the-art data-intensive neural language models are difficult to train well from scarce language-labeled code-switched text. A potential solution is to use deep generative models to synthesize large volumes of realistic code-switched text. Although generative adversarial networks and variational autoencoders can synthesize plausible monolingual text from a continuous latent space, they cannot adequately address code-switched text, owing to its informal style and the complex interplay between the constituent languages. We introduce VACS, a novel variational autoencoder architecture specifically tailored to code-switching phenomena. VACS encodes to and decodes from a two-level hierarchical representation, which models syntactic contextual signals in the lower level and language-switching signals in the upper level. Sampling representations from the prior and decoding them produces well-formed, diverse code-switched sentences. Extensive experiments show that augmenting natural monolingual data with synthetic code-switched text results in a significant (33.06%) drop in perplexity. |
Tasks | |
Published | 2019-06-21 |
URL | https://arxiv.org/abs/1906.08972v1 |
PDF | https://arxiv.org/pdf/1906.08972v1.pdf |
PWC | https://paperswithcode.com/paper/a-deep-generative-model-for-code-switched |
Repo | https://github.com/bidishasamantakgp/VACS |
Framework | tf |
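A sketch of the two-level hierarchical encoder the abstract describes: an upper latent (language-switching signal) conditions a lower latent (syntactic content). The layer sizes and single-linear-layer encoders are placeholders for the paper's architecture:

```python
import torch
import torch.nn as nn

class TwoLevelLatent(nn.Module):
    """VACS-style hierarchy sketch: an upper latent for language-switching
    behavior conditions a lower latent for syntactic content."""
    def __init__(self, x_dim=64, z_hi=8, z_lo=16):
        super().__init__()
        self.enc_hi = nn.Linear(x_dim, 2 * z_hi)          # q(z_hi | x)
        self.enc_lo = nn.Linear(x_dim + z_hi, 2 * z_lo)   # q(z_lo | x, z_hi)

    @staticmethod
    def sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)               # reparameterization
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, x):
        z_hi = self.sample(self.enc_hi(x))                # switching signal
        z_lo = self.sample(self.enc_lo(torch.cat([x, z_hi], -1)))
        return z_hi, z_lo                                 # decoder takes both

z_hi, z_lo = TwoLevelLatent()(torch.randn(4, 64))
print(z_hi.shape, z_lo.shape)
```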
Comparative evaluation of 2D feature correspondence selection algorithms
Title | Comparative evaluation of 2D feature correspondence selection algorithms |
Authors | Chen Zhao, Jiaqi Yang, Yang Xiao, Zhiguo Cao |
Abstract | Correspondence selection, which aims to find correct feature correspondences among raw feature matches, is pivotal for a number of feature-matching-based tasks. Various 2D (image) correspondence selection algorithms have been presented over decades of progress. Unfortunately, the lack of an in-depth evaluation makes it difficult for developers to choose a proper algorithm for a specific application. This paper fills that gap by evaluating eight 2D correspondence selection algorithms, ranging from classical methods to the most recent ones, on four standard datasets. The diversity of the experimental datasets brings various nuisances, including zoom, rotation, blur, viewpoint change, JPEG compression, lighting change, different rendering styles, and multiple structures, for a comprehensive test. To further create different distributions of initial matches, a set of combinations of detectors and descriptors is also taken into consideration. We measure the quality of a correspondence selection algorithm from four perspectives: precision, recall, F-measure, and efficiency. Based on the evaluation results, the current advantages and limitations of all considered algorithms are summarized, which can serve as a “user guide” for developers. |
Tasks | |
Published | 2019-04-30 |
URL | http://arxiv.org/abs/1904.13383v1 |
PDF | http://arxiv.org/pdf/1904.13383v1.pdf |
PWC | https://paperswithcode.com/paper/comparative-evaluation-of-2d-feature |
Repo | https://github.com/izhangrui/paper_to_read |
Framework | none |
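The paper's first three quality measures are standard and easy to pin down in code; a small sketch, with a synthetic selector standing in for a real correspondence selection algorithm:

```python
import numpy as np

def prf(selected, inlier_mask):
    """Precision/recall/F-measure of a correspondence selection result.
    `selected`: boolean mask of matches the algorithm kept;
    `inlier_mask`: ground-truth correctness of each raw match."""
    tp = np.sum(selected & inlier_mask)
    precision = tp / max(selected.sum(), 1)
    recall = tp / max(inlier_mask.sum(), 1)
    f = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f

rng = np.random.default_rng(0)
inliers  = rng.random(1000) < 0.3              # 30% of raw matches are correct
selected = inliers ^ (rng.random(1000) < 0.1)  # an imperfect selector
print(prf(selected, inliers))
```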