Paper Group ANR 561
Are You Convinced? Choosing the More Convincing Evidence with a Siamese Network. KeystoneDepth: Visualizing History in 3D. A stochastic version of Stein Variational Gradient Descent for efficient sampling. Relational Graph Representation Learning for Open-Domain Question Answering. Information Theoretic Model Predictive Q-Learning. An Analysis of Source-Side Grammatical Errors in NMT. Automatic Weight Estimation of Harvested Fish from Images. Deep Neural Network Framework Based on Backward Stochastic Differential Equations for Pricing and Hedging American Options in High Dimensions. Hidden Unit Specialization in Layered Neural Networks: ReLU vs. Sigmoidal Activation. One Epoch Is All You Need. Combinatorial Keyword Recommendations for Sponsored Search with Deep Reinforcement Learning. Learning Fair Representations via an Adversarial Framework. Online Diversity Control in Symbolic Regression via a Fast Hash-based Tree Similarity Measure. Automatic Generation of Atomic Consistency Preserving Search Operators for Search-Based Model Engineering. A 3D-Deep-Learning-based Augmented Reality Calibration Method for Robotic Environments using Depth Sensor Data.
Are You Convinced? Choosing the More Convincing Evidence with a Siamese Network
Title | Are You Convinced? Choosing the More Convincing Evidence with a Siamese Network |
Authors | Martin Gleize, Eyal Shnarch, Leshem Choshen, Lena Dankin, Guy Moshkowich, Ranit Aharonov, Noam Slonim |
Abstract | With the advancement in argument detection, we suggest paying more attention to the challenging task of identifying the more convincing arguments. Machines capable of responding and interacting with humans in helpful ways have become ubiquitous. We now expect them to discuss with us the more delicate questions in our world, and they should do so armed with effective arguments. But what makes an argument more persuasive? What will convince you? In this paper, we present a new data set, IBM-EviConv, of pairs of evidence labeled for convincingness, designed to be more challenging than existing alternatives. We also propose a Siamese neural network architecture shown to outperform several baselines on both a prior convincingness data set and our own. Finally, we provide insights into our experimental results and the various kinds of argumentative value our method is capable of detecting. |
Tasks | |
Published | 2019-07-21 |
URL | https://arxiv.org/abs/1907.08971v2 |
https://arxiv.org/pdf/1907.08971v2.pdf | |
PWC | https://paperswithcode.com/paper/are-you-convinced-choosing-the-more |
Repo | |
Framework | |
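The pairwise setup described in the abstract can be sketched as follows. This is a minimal, assumption-laden illustration in plain NumPy, not the paper's architecture: a tiny shared scorer stands in for the Siamese encoder, evidence texts are replaced by random feature vectors, and training uses a logistic loss on the score difference between the more and less convincing item of each pair.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared scorer f(x) = w2 . tanh(W1 x), applied to BOTH sides of a pair
# (the weight sharing is the "Siamese" part).
D, H = 16, 8
W1 = rng.normal(0.0, 0.1, (H, D))
w2 = rng.normal(0.0, 0.1, H)

def score(x):
    return w2 @ np.tanh(W1 @ x)

def pair_prob(xa, xb):
    # P(a is more convincing than b): logistic on the score difference
    return 1.0 / (1.0 + np.exp(-(score(xa) - score(xb))))

def make_pair():
    # toy stand-in for (more convincing, less convincing) evidence features
    return rng.normal(0.5, 1.0, D), rng.normal(-0.5, 1.0, D)

lr = 0.1
for _ in range(300):
    xa, xb = make_pair()
    p = pair_prob(xa, xb)
    g = p - 1.0                       # d(-log p) / d(score difference)
    ha, hb = np.tanh(W1 @ xa), np.tanh(W1 @ xb)
    gw2 = g * (ha - hb)
    gW1 = g * (np.outer(w2 * (1 - ha**2), xa) - np.outer(w2 * (1 - hb**2), xb))
    w2 -= lr * gw2
    W1 -= lr * gW1

test_acc = np.mean([pair_prob(*make_pair()) > 0.5 for _ in range(200)])
print(f"pairwise accuracy: {test_acc:.2f}")
```

Because the scorer is shared, the model is order-consistent by construction: swapping the two inputs flips the predicted probability to its complement.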
KeystoneDepth: Visualizing History in 3D
Title | KeystoneDepth: Visualizing History in 3D |
Authors | Xuan Luo, Yanmeng Kong, Jason Lawrence, Ricardo Martin-Brualla, Steve Seitz |
Abstract | This paper introduces the largest and most diverse collection of rectified stereo image pairs to the research community, KeystoneDepth, consisting of tens of thousands of stereographs of historical people, events, objects, and scenes between 1860 and 1963. Leveraging the Keystone-Mast raw scans from the California Museum of Photography, we apply multiple processing steps to produce clean stereo image pairs, complete with calibration data, rectification transforms, and depthmaps. A second contribution is a novel approach for view synthesis that runs at real-time rates on a mobile device, simulating the experience of looking through an open window into these historical scenes. We produce results for thousands of antique stereographs, capturing many important historical moments. |
Tasks | Calibration |
Published | 2019-08-21 |
URL | https://arxiv.org/abs/1908.07732v2 |
https://arxiv.org/pdf/1908.07732v2.pdf | |
PWC | https://paperswithcode.com/paper/190807732 |
Repo | |
Framework | |
A stochastic version of Stein Variational Gradient Descent for efficient sampling
Title | A stochastic version of Stein Variational Gradient Descent for efficient sampling |
Authors | Lei Li, Yingzhou Li, Jian-Guo Liu, Zibu Liu, Jianfeng Lu |
Abstract | We propose in this work RBM-SVGD, a stochastic version of the Stein Variational Gradient Descent (SVGD) method for efficiently sampling from a given probability measure, which is thus useful for Bayesian inference. The method applies the Random Batch Method (RBM) for interacting particle systems, proposed by Jin et al., to the interacting particle systems in SVGD. While keeping the behavior of SVGD, it reduces the computational cost, especially when the interacting kernel has a long range. Numerical examples verify the efficiency of this new version of SVGD. |
Tasks | Bayesian Inference |
Published | 2019-02-09 |
URL | http://arxiv.org/abs/1902.03394v2 |
http://arxiv.org/pdf/1902.03394v2.pdf | |
PWC | https://paperswithcode.com/paper/a-stochastic-version-of-stein-variational |
Repo | |
Framework | |
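The random-batch idea can be sketched as follows: run the usual SVGD particle update, but compute kernel interactions only within small random batches that are reshuffled every step. This is a hedged toy illustration on a 1D Gaussian target; the kernel bandwidth, step size, and batch size are arbitrary choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_log_p(x):
    # target is the standard 1D Gaussian N(0, 1)
    return -x

def svgd_step(x, h=0.5, eps=0.05):
    n = len(x)
    diff = x[:, None] - x[None, :]           # diff[i, j] = x_i - x_j
    K = np.exp(-diff**2 / (2 * h))           # RBF kernel
    # phi_i = (1/n) sum_j [ K_ij * grad_log_p(x_j) + (x_i - x_j)/h * K_ij ]
    phi = (K @ grad_log_p(x) + (diff * K).sum(axis=1) / h) / n
    return x + eps * phi

def rbm_svgd_step(x, batch=10):
    # Random Batch Method: interactions only inside random small batches,
    # cutting the O(n^2) pairwise cost of full SVGD to O(n * batch)
    idx = rng.permutation(len(x))
    x_new = x.copy()
    for start in range(0, len(x), batch):
        b = idx[start:start + batch]
        x_new[b] = svgd_step(x[b])
    return x_new

x = rng.uniform(-4.0, -2.0, 200)   # particles start far from the target
for _ in range(500):
    x = rbm_svgd_step(x)
print(f"mean={x.mean():.2f}  std={x.std():.2f}")
```

After the run, the particle cloud should roughly match the target's mean and spread, despite each step only ever touching 10-particle batches.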
Relational Graph Representation Learning for Open-Domain Question Answering
Title | Relational Graph Representation Learning for Open-Domain Question Answering |
Authors | Salvatore Vivona, Kaveh Hassani |
Abstract | We introduce a relational graph neural network with a bi-directional attention mechanism and hierarchical representation learning for the open-domain question answering task. Our model can learn contextual representations by jointly learning and updating the query, knowledge graph, and document representations. The experiments suggest that our model achieves state-of-the-art results on the WebQuestionsSP benchmark. |
Tasks | Graph Representation Learning, Open-Domain Question Answering, Question Answering, Representation Learning |
Published | 2019-10-18 |
URL | https://arxiv.org/abs/1910.08249v1 |
https://arxiv.org/pdf/1910.08249v1.pdf | |
PWC | https://paperswithcode.com/paper/relational-graph-representation-learning-for |
Repo | |
Framework | |
Information Theoretic Model Predictive Q-Learning
Title | Information Theoretic Model Predictive Q-Learning |
Authors | Mohak Bhardwaj, Ankur Handa, Dieter Fox, Byron Boots |
Abstract | Model-free Reinforcement Learning (RL) algorithms work well in sequential decision-making problems when experience can be collected cheaply, and model-based RL is effective when system dynamics can be modeled accurately. However, both of these assumptions can be violated in real world problems such as robotics, where querying the system can be prohibitively expensive and real-world dynamics can be difficult to model accurately. Although sim-to-real approaches such as domain randomization attempt to mitigate the effects of biased simulation, they can still suffer from optimization challenges such as local minima and hand-designed distributions for randomization, making it difficult to learn an accurate global value function or policy that directly transfers to the real world. In contrast to RL, Model Predictive Control (MPC) algorithms use a simulator to optimize a simple policy class online, constructing a closed-loop controller that can effectively contend with real-world dynamics. MPC performance is usually limited by factors such as model bias and the limited horizon of optimization. In this work, we present a novel theoretical connection between information theoretic MPC and entropy regularized RL and develop a Q-learning algorithm that can leverage biased models. We validate the proposed algorithm on sim-to-sim control tasks to demonstrate the improvements over optimal control and reinforcement learning from scratch. Our approach paves the way for deploying reinforcement learning algorithms on real robots in a systematic manner. |
Tasks | Decision Making, Q-Learning |
Published | 2019-12-31 |
URL | https://arxiv.org/abs/2001.02153v1 |
https://arxiv.org/pdf/2001.02153v1.pdf | |
PWC | https://paperswithcode.com/paper/information-theoretic-model-predictive-q-1 |
Repo | |
Framework | |
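A minimal sketch of the information theoretic MPC (MPPI-style) machinery the paper builds on, applied to a toy 1D point mass: sample noisy control sequences, roll them out through the model, and update the nominal controls with exponentially cost-weighted noise. This illustrates only the sampling-based MPC baseline, not the paper's Q-learning extension, and every constant here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

dt, H, K, lam, sigma = 0.1, 15, 64, 1.0, 0.5  # step, horizon, samples, temperature, noise

def dynamics(state, u):
    # 1D point mass: u is acceleration
    pos, vel = state
    vel = vel + u * dt
    pos = pos + vel * dt
    return np.array([pos, vel])

def cost(state, u):
    pos, vel = state
    return pos**2 + 0.1 * vel**2 + 0.01 * u**2

def mppi_step(state, u_nom):
    noise = rng.normal(0.0, sigma, (K, H))
    costs = np.zeros(K)
    for k in range(K):                 # roll out each perturbed control sequence
        s = state.copy()
        for t in range(H):
            u = u_nom[t] + noise[k, t]
            s = dynamics(s, u)
            costs[k] += cost(s, u)
    w = np.exp(-(costs - costs.min()) / lam)   # information theoretic weighting
    w /= w.sum()
    return u_nom + w @ noise           # cost-weighted average of the noise

state = np.array([2.0, 0.0])           # start 2 units from the goal, at rest
u_nom = np.zeros(H)
for _ in range(80):
    u_nom = mppi_step(state, u_nom)
    state = dynamics(state, u_nom[0])  # apply first control, receding horizon
    u_nom = np.roll(u_nom, -1)
    u_nom[-1] = 0.0
print(f"final position: {state[0]:.3f}")
```

The exponential weighting is exactly where the entropy-regularized-RL connection enters: the update is a softmin over sampled trajectory costs with temperature lam.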
An Analysis of Source-Side Grammatical Errors in NMT
Title | An Analysis of Source-Side Grammatical Errors in NMT |
Authors | Antonios Anastasopoulos |
Abstract | The quality of Neural Machine Translation (NMT) has been shown to significantly degrade when confronted with source-side noise. We present the first large-scale study of state-of-the-art English-to-German NMT on real grammatical noise, by evaluating on several Grammar Correction corpora. We present methods for evaluating NMT robustness without true references, and we use them for extensive analysis of the effects that different grammatical errors have on the NMT output. We also introduce a technique for visualizing the divergence distribution caused by a source-side error, which allows for additional insights. |
Tasks | Machine Translation |
Published | 2019-05-24 |
URL | https://arxiv.org/abs/1905.10024v1 |
https://arxiv.org/pdf/1905.10024v1.pdf | |
PWC | https://paperswithcode.com/paper/an-analysis-of-source-side-grammatical-errors |
Repo | |
Framework | |
Automatic Weight Estimation of Harvested Fish from Images
Title | Automatic Weight Estimation of Harvested Fish from Images |
Authors | Dmitry A. Konovalov, Alzayat Saleh, Dina B. Efremova, Jose A. Domingos, Dean R. Jerry |
Abstract | Approximately 2,500 weights and corresponding images of harvested Lates calcarifer (Asian seabass or barramundi) were collected at three different locations in Queensland, Australia. Two instances of the LinkNet-34 segmentation Convolutional Neural Network (CNN) were trained. The first one was trained on 200 manually segmented fish masks with excluded fins and tails. The second was trained on 100 whole-fish masks. The two CNNs were applied to the rest of the images and yielded automatically segmented masks. The one-factor and two-factor simple mathematical weight-from-area models were fitted on 1,072 area-weight pairs from the first two locations, where area values were extracted from the automatically segmented masks. When applied to 1,400 test images (from the third location), the one-factor whole-fish mask model achieved the best mean absolute percentage error (MAPE), MAPE=4.36%. Direct weight-from-image regression CNNs were also trained, where the no-fins based CNN performed best on the test images with MAPE=4.28%. |
Tasks | |
Published | 2019-09-06 |
URL | https://arxiv.org/abs/1909.02710v1 |
https://arxiv.org/pdf/1909.02710v1.pdf | |
PWC | https://paperswithcode.com/paper/automatic-weight-estimation-of-harvested-fish |
Repo | |
Framework | |
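A "one-factor weight-from-area model" of the kind the abstract describes can be sketched as below. The isometric form W = a * A^(3/2) and all constants here are assumptions for illustration, not the paper's fitted model: with one free factor a, the fit reduces to a single least-squares projection, and MAPE is evaluated exactly as in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: mask areas and weights with ~4% multiplicative noise.
A = rng.uniform(100.0, 1000.0, 300)                    # areas (arbitrary units)
W_true = 2e-3 * A**1.5 * rng.normal(1.0, 0.04, 300)    # weights

# One-factor model W = a * A^(3/2): least-squares solution for the scalar a.
X = A**1.5
a_hat = (X @ W_true) / (X @ X)

W_pred = a_hat * X
mape = 100.0 * np.mean(np.abs(W_pred - W_true) / W_true)
print(f"a_hat={a_hat:.2e}  MAPE={mape:.2f}%")
```

A "two-factor" variant would additionally free the exponent (W = a * A^b), which is a degree-1 fit in log-log space instead.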
Deep Neural Network Framework Based on Backward Stochastic Differential Equations for Pricing and Hedging American Options in High Dimensions
Title | Deep Neural Network Framework Based on Backward Stochastic Differential Equations for Pricing and Hedging American Options in High Dimensions |
Authors | Yangang Chen, Justin W. L. Wan |
Abstract | We propose a deep neural network framework for computing prices and deltas of American options in high dimensions. The architecture of the framework is a sequence of neural networks, where each network learns the difference of the price functions between adjacent timesteps. We introduce the least squares residual of the associated backward stochastic differential equation as the loss function. Our proposed framework yields prices and deltas on the entire spacetime, not only at a given point. The computational cost of the proposed approach is quadratic in dimension, which addresses the curse of dimensionality issue that state-of-the-art approaches suffer from. Our numerical simulations demonstrate these contributions, and show that the proposed neural network framework outperforms state-of-the-art approaches in high dimensions. |
Tasks | |
Published | 2019-09-25 |
URL | https://arxiv.org/abs/1909.11532v1 |
https://arxiv.org/pdf/1909.11532v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-neural-network-framework-based-on |
Repo | |
Framework | |
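The "least squares residual of the associated backward stochastic differential equation" can be written, in the generic deep-BSDE form, as the discretized residual below. This is a sketch of the standard discretization, not necessarily the paper's exact formulation: X_n is the vector of underlying asset prices at timestep t_n, ΔW_n the Brownian increment, Y_θ the price network, Z_θ the delta network, and f the BSDE driver (e.g., a discounting term under Black-Scholes-type dynamics). For American options the early-exercise constraint Y ≥ payoff must additionally be enforced at each timestep, as the paper handles.

```latex
\mathcal{L}(\theta) \;=\;
\mathbb{E}\Bigl[\,\bigl|\,
Y_\theta(t_{n+1}, X_{n+1}) - Y_\theta(t_n, X_n)
+ f\bigl(t_n, X_n, Y_\theta(t_n, X_n), Z_\theta(t_n, X_n)\bigr)\,\Delta t
- Z_\theta(t_n, X_n)^{\top}\,\Delta W_n
\,\bigr|^{2}\Bigr]
```

Minimizing this residual timestep by timestep is what lets each network in the sequence learn the price difference between adjacent timesteps, and Z_θ directly yields the deltas used for hedging.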
Hidden Unit Specialization in Layered Neural Networks: ReLU vs. Sigmoidal Activation
Title | Hidden Unit Specialization in Layered Neural Networks: ReLU vs. Sigmoidal Activation |
Authors | Elisa Oostwal, Michiel Straat, Michael Biehl |
Abstract | We study layered neural networks of rectified linear units (ReLU) in a modelling framework for stochastic training processes. The comparison with sigmoidal activation functions is at the center of interest. We compute typical learning curves for shallow networks with K hidden units in matching student-teacher scenarios. The systems exhibit sudden changes of the generalization performance via the process of hidden unit specialization at critical sizes of the training set. Surprisingly, our results show that the training behavior of ReLU networks is qualitatively different from that of networks with sigmoidal activations. In networks with K >= 3 sigmoidal hidden units, the transition is discontinuous: specialized network configurations co-exist and compete with states of poor performance even for very large training sets. On the contrary, the use of ReLU activations results in continuous transitions for all K: for large enough training sets, two competing, differently specialized states display similar generalization abilities, which coincide exactly for large networks in the limit K to infinity. |
Tasks | |
Published | 2019-10-16 |
URL | https://arxiv.org/abs/1910.07476v1 |
https://arxiv.org/pdf/1910.07476v1.pdf | |
PWC | https://paperswithcode.com/paper/hidden-unit-specialization-in-layered-neural |
Repo | |
Framework | |
One Epoch Is All You Need
Title | One Epoch Is All You Need |
Authors | Aran Komatsuzaki |
Abstract | In unsupervised learning, collecting more data is not always a costly process, unlike training. For example, it is not hard to enlarge the 40GB WebText used for training GPT-2 by modifying its sampling methodology, considering how many webpages there are on the Internet. On the other hand, given that training on this dataset already costs tens of thousands of dollars, naively training on a larger dataset is not financially feasible. In this paper, we suggest training on a larger dataset for only one epoch, unlike the current practice in which unsupervised models are trained for tens to hundreds of epochs. Furthermore, we suggest adjusting the model size and the number of iterations appropriately. We show that the performance of a Transformer language model improves dramatically this way, especially when the original number of epochs is large. For example, replacing 10-epoch training with one-epoch training translates to a 1.9-3.3x speedup in wall-clock time in our settings, and more if the original number of epochs is greater. Under one-epoch training, no overfitting occurs, and regularization does nothing but slow down training. Also, the curve of test loss over iterations closely follows a power law. We compare the wall-clock training time of models with different parameter budgets under one-epoch training, and show that size/iteration adjustment based on our proposed heuristics leads to a 1-2.7x speedup in our cases. With the two methods combined, we achieve a 3.3-5.1x speedup. Finally, we speculate on various implications of one-epoch training and size/iteration adjustment. In particular, based on our analysis, we believe the cost of training state-of-the-art models such as BERT and GPT-2 can be reduced dramatically, perhaps even by a factor of 10. |
Tasks | Language Modelling |
Published | 2019-06-16 |
URL | https://arxiv.org/abs/1906.06669v1 |
https://arxiv.org/pdf/1906.06669v1.pdf | |
PWC | https://paperswithcode.com/paper/one-epoch-is-all-you-need |
Repo | |
Framework | |
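The claim that test loss over iterations follows a power law suggests a simple diagnostic: fit L(t) = c * t^(-alpha) by linear regression in log-log space. The data below are synthetic, purely to illustrate the fitting procedure, not the paper's training runs.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic loss curve with a known power-law exponent plus 1% noise.
t = np.arange(1, 1001)
alpha_true, c_true = 0.3, 5.0
loss = c_true * t**(-alpha_true) * rng.normal(1.0, 0.01, t.size)

# log L = log c - alpha * log t, so a degree-1 fit recovers both parameters.
slope, intercept = np.polyfit(np.log(t), np.log(loss), 1)
alpha_hat, c_hat = -slope, np.exp(intercept)
print(f"alpha={alpha_hat:.3f}  c={c_hat:.3f}")
```

A straight line in the log-log plot (equivalently, a good degree-1 fit) is the practical signature of the power-law behavior the abstract describes.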
Combinatorial Keyword Recommendations for Sponsored Search with Deep Reinforcement Learning
Title | Combinatorial Keyword Recommendations for Sponsored Search with Deep Reinforcement Learning |
Authors | Zhipeng Li, Jianwei Wu, Lin Sun, Tao Rong |
Abstract | In sponsored search, keyword recommendations help advertisers to achieve much better performance within a limited budget. Much work has been done to mine numerous candidate keywords from search logs or landing pages. However, the strategy to select from given candidates remains to be improved. The existing relevance-based, popularity-based and regular combinatorial strategies fail to take the internal or external competitions among keywords into consideration. In this paper, we regard keyword recommendations as a combinatorial optimization problem and solve it with a modified pointer network structure. The model is trained on an actor-critic based deep reinforcement learning framework. A pre-clustering method called Equal Size K-Means is proposed to accelerate the training and testing procedure on the framework by reducing the action space. The performance of the framework is evaluated in both offline and online environments, and remarkable improvements can be observed. |
Tasks | Combinatorial Optimization |
Published | 2019-07-18 |
URL | https://arxiv.org/abs/1907.08686v1 |
https://arxiv.org/pdf/1907.08686v1.pdf | |
PWC | https://paperswithcode.com/paper/combinatorial-keyword-recommendations-for |
Repo | |
Framework | |
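The "Equal Size K-Means" pre-clustering can be sketched as follows. This is a hedged guess at the idea, standard k-means iterations with a capacity-constrained, greedy assignment step that gives every cluster exactly n/k points; the paper's exact algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(5)

def equal_size_kmeans(X, k, iters=20):
    n = len(X)
    cap = n // k                       # assumes k divides n evenly
    centers = X[rng.choice(n, k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)  # (n, k)
        # assign points with the strongest cluster preference first
        order = np.argsort(d.min(axis=1) - d.mean(axis=1))
        counts = np.zeros(k, dtype=int)
        labels = np.empty(n, dtype=int)
        for i in order:
            for c in np.argsort(d[i]):       # nearest cluster with room left
                if counts[c] < cap:
                    labels[i] = c
                    counts[c] += 1
                    break
        centers = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels, centers

# three well-separated 2D blobs of 30 points each
X = np.vstack([rng.normal(m, 0.3, (30, 2)) for m in (0.0, 3.0, 6.0)])
labels, centers = equal_size_kmeans(X, 3)
print(np.bincount(labels))
```

Equal-size clusters are what makes this useful for action-space reduction: each cluster can be treated as one action of uniform granularity.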
Learning Fair Representations via an Adversarial Framework
Title | Learning Fair Representations via an Adversarial Framework |
Authors | Rui Feng, Yang Yang, Yuehan Lyu, Chenhao Tan, Yizhou Sun, Chunping Wang |
Abstract | Fairness has become a central issue for our research community as classification algorithms are adopted in societally critical domains such as recidivism prediction and loan approval. In this work, we consider the potential bias based on protected attributes (e.g., race and gender), and tackle this problem by learning latent representations of individuals that are statistically indistinguishable between protected groups while sufficiently preserving other information for classification. To do that, we develop a minimax adversarial framework with a generator to capture the data distribution and generate latent representations, and a critic to ensure that the distributions across different protected groups are similar. Our framework provides a theoretical guarantee with respect to statistical parity and individual fairness. Empirical results on four real-world datasets also show that the learned representation can effectively be used for classification tasks such as credit risk prediction while obstructing information related to protected groups, especially when removing protected attributes is not sufficient for fair classification. |
Tasks | |
Published | 2019-04-30 |
URL | http://arxiv.org/abs/1904.13341v1 |
http://arxiv.org/pdf/1904.13341v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-fair-representations-via-an |
Repo | |
Framework | |
Online Diversity Control in Symbolic Regression via a Fast Hash-based Tree Similarity Measure
Title | Online Diversity Control in Symbolic Regression via a Fast Hash-based Tree Similarity Measure |
Authors | Bogdan Burlacu, Michael Affenzeller, Gabriel Kronberger, Michael Kommenda |
Abstract | Diversity represents an important aspect of genetic programming, being directly correlated with search performance. When considered at the genotype level, diversity often requires expensive tree distance measures which have a negative impact on the algorithm’s runtime performance. In this work we introduce a fast, hash-based tree distance measure to massively speed-up the calculation of population diversity during the algorithmic run. We combine this measure with the standard GA and the NSGA-II genetic algorithms to steer the search towards higher diversity. We validate the approach on a collection of benchmark problems for symbolic regression where our method consistently outperforms the standard GA as well as NSGA-II configurations with different secondary objectives. |
Tasks | |
Published | 2019-02-03 |
URL | http://arxiv.org/abs/1902.00882v1 |
http://arxiv.org/pdf/1902.00882v1.pdf | |
PWC | https://paperswithcode.com/paper/online-diversity-control-in-symbolic |
Repo | |
Framework | |
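The hash-based tree distance can be sketched as follows: hash each subtree bottom-up (Merkle-style, with child hashes sorted so argument order is ignored, appropriate for commutative operators), then compare two trees by the overlap of their subtree-hash multisets. This is an illustrative reconstruction, not the paper's exact scheme.

```python
from collections import Counter

def subtree_hashes(node, out):
    # node: (label, [children]); appends every subtree hash to `out`
    label, children = node
    child_hashes = sorted(subtree_hashes(c, out) for c in children)
    h = hash((label, tuple(child_hashes)))
    out.append(h)
    return h

def tree_distance(t1, t2):
    h1, h2 = [], []
    subtree_hashes(t1, h1)
    subtree_hashes(t2, h2)
    c1, c2 = Counter(h1), Counter(h2)
    inter = sum((c1 & c2).values())     # multiset intersection
    union = sum((c1 | c2).values())     # multiset union
    return 1.0 - inter / union          # Jaccard-style distance in [0, 1]

# identical trees -> 0.0; trees sharing both leaves but not the root -> 0.5
add = ("+", [("x", []), ("y", [])])
mul = ("*", [("x", []), ("y", [])])
print(tree_distance(add, add), tree_distance(add, mul))
```

Since each subtree is hashed once, comparing two trees costs time linear in their sizes, which is what makes population-wide diversity tracking affordable during the run.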
Automatic Generation of Atomic Consistency Preserving Search Operators for Search-Based Model Engineering
Title | Automatic Generation of Atomic Consistency Preserving Search Operators for Search-Based Model Engineering |
Authors | Alexandru Burdusel, Steffen Zschaler, Stefan John |
Abstract | Recently there has been increased interest in combining the fields of Model-Driven Engineering (MDE) and Search-Based Software Engineering (SBSE). Such approaches use meta-heuristic search guided by search operators (model mutators and sometimes breeders) implemented as model transformations. The design of these operators can substantially impact the effectiveness and efficiency of the meta-heuristic search. Currently, designing search operators is left to the person specifying the optimisation problem. However, developing consistent and efficient search-operator rules requires not only domain expertise but also in-depth knowledge about optimisation, which makes the use of model-based meta-heuristic search challenging and expensive. In this paper, we propose a generalised approach to automatically generate atomic consistency preserving search operators (aCPSOs) for a given optimisation problem. This reduces the effort required to specify an optimisation problem and shields optimisation users from the complexity of implementing efficient meta-heuristic search mutation operators. We evaluate our approach with a set of case studies, and show that the automatically generated rules are comparable to, and in some cases better than, manually created rules at guiding evolutionary search towards near-optimal solutions. This paper is an extended version of the paper with the same title published in the proceedings of the 22nd International Conference on Model Driven Engineering Languages and Systems (MODELS ‘19). |
Tasks | |
Published | 2019-07-12 |
URL | https://arxiv.org/abs/1907.05647v1 |
https://arxiv.org/pdf/1907.05647v1.pdf | |
PWC | https://paperswithcode.com/paper/automatic-generation-of-atomic-consistency |
Repo | |
Framework | |
A 3D-Deep-Learning-based Augmented Reality Calibration Method for Robotic Environments using Depth Sensor Data
Title | A 3D-Deep-Learning-based Augmented Reality Calibration Method for Robotic Environments using Depth Sensor Data |
Authors | Linh Kästner, Vlad Catalin Frasineanu, Jens Lambrecht |
Abstract | Augmented Reality and mobile robots are gaining much attention within industries due to their high potential to make processes cost- and time-efficient. To facilitate augmented reality, a calibration between the Augmented Reality device and the environment is necessary. This is a challenge when dealing with mobile robots, because the mobility of all entities makes the environment dynamic. On this account, we propose a novel approach to calibrate the Augmented Reality device using 3D depth sensor data. We use the depth camera of a cutting-edge Augmented Reality device, the Microsoft HoloLens, for deep learning based calibration. To this end, we modified a neural network based on the recently published VoteNet architecture which works directly on the point cloud input observed by the HoloLens. We achieve satisfying results and eliminate external tools like markers, thus enabling a more intuitive and flexible workflow for Augmented Reality integration. The results are adaptable to work with all depth cameras and are promising for further research. Furthermore, we introduce an open source 3D point cloud labeling tool, which is to our knowledge the first open source tool for labeling raw point cloud data. |
Tasks | Calibration |
Published | 2019-12-27 |
URL | https://arxiv.org/abs/1912.12101v1 |
https://arxiv.org/pdf/1912.12101v1.pdf | |
PWC | https://paperswithcode.com/paper/a-3d-deep-learning-based-augmented-reality |
Repo | |
Framework | |