Paper Group ANR 1365
Cooperative Pathfinding based on high-scalability Multi-agent RRT*. Instance-Level Microtubule Tracking. Recombination of Artificial Neural Networks. A high performance computing method for accelerating temporal action proposal generation. A Framework for Explainable Text Classification in Legal Document Review. RL-RRT: Kinodynamic Motion Planning …
Cooperative Pathfinding based on high-scalability Multi-agent RRT*
Title | Cooperative Pathfinding based on high-scalability Multi-agent RRT* |
Authors | Jinmingwu Jiang, Kaigui Wu |
Abstract | Problems in which several agents must find conflict-free paths from their start locations to their destinations are called cooperative pathfinding problems. This problem can be very challenging and has received a lot of attention. Many approaches have been proposed over the last decades: coupled, decoupled, and hybrid. For coupled approaches, the computational cost of many algorithms increases rapidly with the dimension of the problem. Recently, an algorithm called Multi-agent RRT* (MA-RRT*) was proposed; it alleviates this increase in computational cost and efficiently solves cooperative pathfinding problems. However, in relatively dense environments the applicability of MA-RRT* is limited, because some random samples in the free space cannot be explored by the rapidly-exploring random tree. This paper proposes an improved version of MA-RRT*, called Multi-agent RRT* Potential Field (MA-RRT*PF), an anytime algorithm that efficiently guides the rapidly-exploring random tree through free space in relatively dense environments. It works by incorporating a potential field into the GREEDY function to enhance the tree's ability to avoid obstacles. The results show that MA-RRT*PF performs much better than MA-RRT* in relatively dense environments in terms of scalability while still maintaining solution quality. |
Tasks | |
Published | 2019-11-16 |
URL | https://arxiv.org/abs/1911.07840v2 |
PDF | https://arxiv.org/pdf/1911.07840v2.pdf |
PWC | https://paperswithcode.com/paper/cooperative-pathfinding-based-on-high |
Repo | |
Framework | |
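The key change MA-RRT*PF makes is folding a repulsive potential field into the GREEDY (steer) step so that new tree nodes are nudged away from obstacles. Below is a minimal Python sketch of that idea for a 2D point agent; the classic repulsive-potential form, the step size, and the blending weight `alpha` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def repulsive_gradient(q, obstacles, influence_radius=2.0, eta=1.0):
    """Summed repulsive-potential gradient pushing q away from nearby obstacle centers."""
    # q and each obstacle center are 2D numpy arrays.
    grad = np.zeros_like(q, dtype=float)
    for obs in obstacles:
        diff = q - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < influence_radius:
            # Gradient of the classic repulsive potential 0.5*eta*(1/d - 1/d0)^2
            grad += eta * (1.0 / d - 1.0 / influence_radius) * diff / d**3
    return grad

def greedy_with_potential_field(q_near, q_rand, obstacles, step=0.5, alpha=0.3):
    """Steer from q_near toward q_rand, then bias the new node away from obstacles."""
    direction = q_rand - q_near
    direction = direction / (np.linalg.norm(direction) + 1e-9)
    q_new = q_near + step * direction
    q_new = q_new + alpha * repulsive_gradient(q_new, obstacles)  # potential-field bias
    return q_new
```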
Instance-Level Microtubule Tracking
Title | Instance-Level Microtubule Tracking |
Authors | Samira Masoudi, Afsaneh Razi, Cameron H. G. Wright, Jay C. Gatlin, Ulas Bagci |
Abstract | We propose a new method of instance-level microtubule (MT) tracking in time-lapse image series using recurrent attention. Our novel deep learning algorithm segments individual MTs at each frame. Segmentation results from successive frames are used to assign correspondences among MTs. This ultimately generates a distinct path trajectory for each MT through the frames. Based on these trajectories, we estimate MT velocities. To validate our proposed technique, we conduct experiments using real and simulated data. We use statistics derived from real time-lapse series of MT gliding assays to simulate realistic MT time-lapse image series in our simulated data. This dataset is used for pre-training and hyperparameter optimization of our network before training on the real data. Our experimental results show that the proposed supervised learning algorithm drastically improves the precision of MT instance velocity estimation to 71.3% from the baseline result (29.3%). We also demonstrate how the inclusion of temporal information in our deep network reduces the false negative rate from 67.8% (baseline) to 28.7% (proposed). Our findings are expected to help biologists characterize the spatial arrangement of MTs, specifically the effects of MT-MT interactions. |
Tasks | Hyperparameter Optimization |
Published | 2019-01-17 |
URL | https://arxiv.org/abs/1901.06006v2 |
PDF | https://arxiv.org/pdf/1901.06006v2.pdf |
PWC | https://paperswithcode.com/paper/instance-level-microtubule-segmentation-using |
Repo | |
Framework | |
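The tracking step described above — linking per-frame MT segmentations into trajectories and estimating velocities from them — can be sketched roughly as follows. The assignment strategy (optimal matching on centroid distance with a gating threshold) and the pixel/time scaling are assumptions for illustration; the paper's correspondence assignment builds on its recurrent-attention segmentation rather than this purely geometric matching.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_instances(centroids_prev, centroids_curr, max_dist=15.0):
    """Frame-to-frame correspondences via optimal assignment on centroid distance."""
    a = np.asarray(centroids_prev, dtype=float)
    b = np.asarray(centroids_curr, dtype=float)
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Keep only matches below the gating distance; the rest are births/deaths.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

def velocities(trajectory, dt=1.0, pixel_size_um=0.1):
    """Per-step speeds (um per time unit) along one MT trajectory of centroids."""
    track = np.asarray(trajectory, dtype=float)
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
    return steps * pixel_size_um / dt
```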
Recombination of Artificial Neural Networks
Title | Recombination of Artificial Neural Networks |
Authors | Aaron Vose, Jacob Balma, Alex Heye, Alessandro Rigazzi, Charles Siegel, Diana Moise, Benjamin Robbins, Rangan Sukumar |
Abstract | We propose a genetic algorithm (GA) for hyperparameter optimization of artificial neural networks which includes chromosomal crossover as well as a decoupling of parameters (i.e., weights and biases) from hyperparameters (e.g., learning rate, weight decay, and dropout) during sexual reproduction. Children are produced from three parents; two contributing hyperparameters and one contributing the parameters. Our version of population-based training (PBT) combines traditional gradient-based approaches such as stochastic gradient descent (SGD) with our GA to optimize both parameters and hyperparameters across SGD epochs. Our improvements over traditional PBT provide an increased speed of adaptation and a greater ability to shed deleterious genes from the population. Our methods improve final accuracy as well as time to fixed accuracy on a wide range of deep neural network architectures including convolutional neural networks, recurrent neural networks, dense neural networks, and capsule networks. |
Tasks | Hyperparameter Optimization |
Published | 2019-01-12 |
URL | http://arxiv.org/abs/1901.03900v1 |
PDF | http://arxiv.org/pdf/1901.03900v1.pdf |
PWC | https://paperswithcode.com/paper/recombination-of-artificial-neural-networks |
Repo | |
Framework | |
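The three-parent reproduction described in the abstract — two parents contribute hyperparameters via crossover while a third contributes the parameters (weights and biases) wholesale — might look roughly like the sketch below. Uniform crossover and the plain-dictionary representation are assumptions; the paper's GA and its integration with population-based training are richer than this.

```python
import copy
import random

def make_child(hparams_a, hparams_b, parent_weights, rng=random):
    """Three-parent reproduction: crossover of hyperparameter genes, weights inherited whole."""
    child_hparams = {
        key: (hparams_a if rng.random() < 0.5 else hparams_b)[key]  # uniform crossover
        for key in hparams_a
    }
    child_weights = copy.deepcopy(parent_weights)  # parameters decoupled from hyperparameters
    return child_hparams, child_weights

# Example with hypothetical hyperparameter genes:
# child = make_child({"lr": 1e-3, "dropout": 0.5}, {"lr": 5e-4, "dropout": 0.3}, weights)
```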
A high performance computing method for accelerating temporal action proposal generation
Title | A high performance computing method for accelerating temporal action proposal generation |
Authors | Shiye Lei, Tian Wang, Youyou Jiang, Zihang Deng, Hichem Snoussi, Chang Choi |
Abstract | Temporal action recognition depends on temporal action proposal generation to hypothesize actions. Applications require temporal action proposal generation both to handle large video datasets and to generate more candidate actions, yet they suffer from high computational cost because proposal generation is a bottleneck. To address this, we introduce a ring parallel architecture based on the Message Passing Interface (MPI), a reliable communication protocol supported by multiple programming languages. In our architecture, total data transmission is reduced by adding connections between computing nodes, in contrast to the traditional Parameter Server architecture. Remarkably, our parallel architecture outperforms the Parameter Server architecture on temporal action proposal generation, especially for large datasets of millions of videos. In addition, a time metric is proposed to evaluate speed in the distributed training process. |
Tasks | Action Detection, Temporal Action Proposal Generation |
Published | 2019-06-15 |
URL | https://arxiv.org/abs/1906.06496v3 |
PDF | https://arxiv.org/pdf/1906.06496v3.pdf |
PWC | https://paperswithcode.com/paper/improving-temporal-action-proposal-generation |
Repo | |
Framework | |
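A hedged mpi4py sketch of the ring idea is given below: every node exchanges a buffer only with its ring neighbours, so the aggregate is formed without a central parameter server. The buffer contents (random numbers standing in for per-node results) and the plain summing reduction are placeholders rather than the paper's training pipeline.

```python
# Ring-style reduction with mpi4py: run with e.g. `mpirun -n 4 python ring_sum.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.random.rand(1024)          # stand-in for this node's partial results
accum = local.copy()                  # running sum held by this rank
buf = np.empty_like(local)
left, right = (rank - 1) % size, (rank + 1) % size

send = local.copy()
for _ in range(size - 1):
    # Each step, every rank simultaneously sends to its right and receives from its left.
    comm.Sendrecv(send, dest=right, recvbuf=buf, source=left)
    accum += buf
    send = buf.copy()
# After size - 1 steps every rank holds the full sum, with no parameter server involved.
```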
A Framework for Explainable Text Classification in Legal Document Review
Title | A Framework for Explainable Text Classification in Legal Document Review |
Authors | Christian J. Mahoney, Jianping Zhang, Nathaniel Huber-Fliflet, Peter Gronvall, Haozhen Zhao |
Abstract | Companies regularly spend millions of dollars producing electronically-stored documents in legal matters. Recently, parties on both sides of the ‘legal aisle’ have accepted the use of machine learning techniques like text classification to cull massive volumes of data and to identify responsive documents for use in these matters. While text classification is regularly used to reduce discovery costs in legal matters, it also faces a peculiar perception challenge: amongst lawyers, this technology is sometimes looked upon as a “black box”, with little information provided to attorneys to understand why documents are classified as responsive. In recent years, AI and ML researchers have been actively researching Explainable AI, in which actions or decisions are made human-understandable. In legal document review scenarios, a document can be identified as responsive if one or more of its text snippets are deemed responsive. In these scenarios, if text classification can be used to locate these snippets, then attorneys can easily evaluate the model’s classification decision. When deployed with defined and explainable results, text classification can drastically enhance the overall quality and speed of the review process by reducing the review time. Moreover, explainable predictive coding provides lawyers with greater confidence in the results of that supervised learning task. This paper describes a framework for explainable text classification as a valuable tool in legal services: for enhancing the quality and efficiency of legal document review and for assisting in locating responsive snippets within responsive documents. This framework has been implemented in our legal analytics product, which has been used in hundreds of legal matters. We also report our experimental results using the data from an actual legal matter that used this type of document review. |
Tasks | Text Classification |
Published | 2019-12-19 |
URL | https://arxiv.org/abs/1912.09501v1 |
PDF | https://arxiv.org/pdf/1912.09501v1.pdf |
PWC | https://paperswithcode.com/paper/a-framework-for-explainable-text |
Repo | |
Framework | |
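The snippet-level logic — a document is responsive if at least one of its snippets is classified as responsive, and those snippets double as the explanation shown to the reviewer — could be sketched as follows. The sentence-window construction, the sklearn-style `predict_proba` interface, and the 0.5 threshold are assumptions for illustration, not the framework's actual pipeline.

```python
def classify_with_snippets(document, snippet_model, threshold=0.5, window=3):
    """Score overlapping sentence windows; return the document decision together with
    the responsive snippets that explain it."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    snippets = [". ".join(sentences[i:i + window]) for i in range(len(sentences))]
    scores = snippet_model.predict_proba(snippets)[:, 1]        # P(responsive) per snippet
    hits = [(snip, float(p)) for snip, p in zip(snippets, scores) if p >= threshold]
    is_responsive = len(hits) > 0
    return is_responsive, sorted(hits, key=lambda pair: -pair[1])
```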
RL-RRT: Kinodynamic Motion Planning via Learning Reachability Estimators from RL Policies
Title | RL-RRT: Kinodynamic Motion Planning via Learning Reachability Estimators from RL Policies |
Authors | Hao-Tien Lewis Chiang, Jasmine Hsu, Marek Fiser, Lydia Tapia, Aleksandra Faust |
Abstract | This paper addresses two challenges facing sampling-based kinodynamic motion planning: identifying good candidate states for local transitions, and the computationally intractable steering between those candidate states. By combining sampling-based planning with a Rapidly-exploring Random Tree (RRT) and an efficient machine-learned kinodynamic local planner, we propose an efficient solution to long-range kinodynamic motion planning. First, we use deep reinforcement learning to learn an obstacle-avoiding policy that maps a robot’s sensor observations to actions, which is used as a local planner during planning and as a controller during execution. Second, we train a reachability estimator in a supervised manner, which predicts the RL policy’s time to reach a state in the presence of obstacles. Lastly, we introduce RL-RRT, which uses the RL policy as a local planner and the reachability estimator as the distance function to bias tree growth towards promising regions. We evaluate our method on three kinodynamic systems, including physical robot experiments. Results across all three robots indicate that RL-RRT outperforms state-of-the-art kinodynamic planners in efficiency and also provides a shorter path finish time than a steering-function-free method. The learned local planner policy and accompanying reachability estimator transfer to previously unseen experimental environments, making RL-RRT fast because the expensive computations are replaced with simple neural network inference. Video: https://youtu.be/dDMVMTOI8KY |
Tasks | Motion Planning |
Published | 2019-07-10 |
URL | https://arxiv.org/abs/1907.04799v2 |
PDF | https://arxiv.org/pdf/1907.04799v2.pdf |
PWC | https://paperswithcode.com/paper/rl-rrt-kinodynamic-motion-planning-via |
Repo | |
Framework | |
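A rough Python sketch of the RL-RRT loop for a 2D point robot is shown below: the learned reachability estimator replaces the usual Euclidean nearest-neighbour metric, and the RL policy replaces an analytic steering function. The workspace bounds, goal bias, rollout length, and the policy/estimator call signatures are assumptions; the actual method handles kinodynamic systems, sensor observations, and collision checking.

```python
import math
import random

DT = 0.1  # integration step for the policy rollout

def rl_rrt_plan(start, goal, rl_policy, reach_estimator, max_iters=500, goal_tol=0.5):
    """Grow a tree of (state, parent_index) pairs; return the tree once the goal is reached."""
    tree = [(start, None)]
    for _ in range(max_iters):
        sample = goal if random.random() < 0.1 else (random.uniform(0, 10), random.uniform(0, 10))
        # Nearest neighbour under the learned time-to-reach estimate, not Euclidean distance.
        idx = min(range(len(tree)), key=lambda i: reach_estimator(tree[i][0], sample))
        state = tree[idx][0]
        for _ in range(20):                      # local planning: roll out the RL policy
            vx, vy = rl_policy(state, sample)    # obstacle-avoiding policy (assumed signature)
            state = (state[0] + vx * DT, state[1] + vy * DT)
        tree.append((state, idx))
        if math.dist(state, goal) < goal_tol:
            return tree                          # path extraction by walking parents omitted
    return tree
```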
Frequentist Consistency of Generalized Variational Inference
Title | Frequentist Consistency of Generalized Variational Inference |
Authors | Jeremias Knoblauch |
Abstract | This paper investigates Frequentist consistency properties of the posterior distributions constructed via Generalized Variational Inference (GVI). A number of generic and novel strategies are given for proving consistency, relying on the theory of $\Gamma$-convergence. Specifically, this paper shows that under minimal regularity conditions, the sequence of GVI posteriors is consistent and collapses to a point mass at the population-optimal parameter value as the number of observations goes to infinity. The results extend to the latent variable case without additional assumptions and hold under misspecification. Lastly, the paper explains how to apply the results to a selection of GVI posteriors with especially popular variational families. For example, consistency is established for GVI methods using the mean field normal variational family, normal mixtures, Gaussian process variational families as well as neural networks indexing a normal (mixture) distribution. |
Tasks | |
Published | 2019-12-10 |
URL | https://arxiv.org/abs/1912.04946v1 |
PDF | https://arxiv.org/pdf/1912.04946v1.pdf |
PWC | https://paperswithcode.com/paper/frequentist-consistency-of-generalized |
Repo | |
Framework | |
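For orientation, the consistency claim can be written informally as below: the GVI posterior minimizes an expected loss plus a divergence to the prior over a variational family, and as the sample size grows it collapses to a point mass at the population-optimal parameter. The objective shown is the standard GVI form; the precise regularity conditions and mode of convergence are those established in the paper.

```latex
% Informal statement of the consistency result (conditions omitted).
\[
  q_n^{*} \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}}
  \left\{ \mathbb{E}_{q(\theta)}\!\left[\sum_{i=1}^{n} \ell(\theta, x_i)\right]
          + D\!\left(q \,\middle\|\, \pi\right) \right\},
  \qquad
  q_n^{*} \;\longrightarrow\; \delta_{\theta^{*}}
  \quad \text{as } n \to \infty .
\]
```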
Feature Partitioning for Efficient Multi-Task Architectures
Title | Feature Partitioning for Efficient Multi-Task Architectures |
Authors | Alejandro Newell, Lu Jiang, Chong Wang, Li-Jia Li, Jia Deng |
Abstract | Multi-task learning holds the promise of requiring less data, fewer parameters, and less training time than separate per-task models. We propose a method to automatically search over multi-task architectures while taking resource constraints into consideration. We propose a search space that compactly represents different parameter-sharing strategies, providing more effective coverage and sampling of the space of multi-task architectures. We also present a method for quickly evaluating different architectures using feature distillation. Together these contributions allow us to quickly optimize for efficient multi-task models. We benchmark on Visual Decathlon, demonstrating that we can automatically search for and identify multi-task architectures that effectively trade off task resource requirements while achieving a high level of final performance. |
Tasks | Multi-Task Learning |
Published | 2019-08-12 |
URL | https://arxiv.org/abs/1908.04339v1 |
PDF | https://arxiv.org/pdf/1908.04339v1.pdf |
PWC | https://paperswithcode.com/paper/feature-partitioning-for-efficient-multi-task |
Repo | |
Framework | |
Attention Enriched Deep Learning Model for Breast Tumor Segmentation in Ultrasound Images
Title | Attention Enriched Deep Learning Model for Breast Tumor Segmentation in Ultrasound Images |
Authors | Aleksandar Vakanski, Min Xian, Phoebe Freer |
Abstract | Incorporating human expertise and domain knowledge is particularly important for medical image processing applications, which are marked by small datasets and objects of interest (organs or lesions) not typically seen in traditional datasets. However, incorporating prior knowledge for breast tumor detection is challenging, since shape, boundary, curvature, intensity, and other common medical priors vary significantly across patients and cannot be employed. This work proposes an approach for integrating visual saliency into a deep learning model for breast tumor segmentation in ultrasound images. Visual saliency emphasizes regions that are more likely to attract radiologists’ visual attention and stand out from their surroundings. Our approach is based on a U-Net model and employs attention blocks to introduce visual saliency. Such a model is driven to learn feature representations that prioritize spatial regions with high saliency. The approach is validated using a dataset of 510 breast ultrasound images. |
Tasks | |
Published | 2019-10-20 |
URL | https://arxiv.org/abs/1910.08978v1 |
PDF | https://arxiv.org/pdf/1910.08978v1.pdf |
PWC | https://paperswithcode.com/paper/attention-enriched-deep-learning-model-for |
Repo | |
Framework | |
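A minimal PyTorch sketch of a saliency-driven attention block on a U-Net skip connection is given below; the 1x1-convolution gate and the concatenation of the saliency map with the feature map are assumptions, not the paper's exact block design.

```python
import torch
import torch.nn as nn

class SaliencyAttentionBlock(nn.Module):
    """Gate skip-connection features with a visual-saliency map so the decoder
    prioritizes regions likely to attract radiologists' attention."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels + 1, channels, kernel_size=1),  # fuse features with saliency
            nn.Sigmoid(),
        )

    def forward(self, features, saliency):
        # features: (B, C, H, W); saliency: (B, 1, H, W), values in [0, 1]
        attention = self.gate(torch.cat([features, saliency], dim=1))
        return features * attention
```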
AdaSample: Adaptive Sampling of Hard Positives for Descriptor Learning
Title | AdaSample: Adaptive Sampling of Hard Positives for Descriptor Learning |
Authors | Xin-Yu Zhang, Le Zhang, Zao-Yi Zheng, Yun Liu, Jia-Wang Bian, Ming-Ming Cheng |
Abstract | The triplet loss has been widely employed across computer vision tasks, including local descriptor learning. Its effectiveness heavily relies on triplet selection, in which a common practice is to first sample intra-class patches (positives) from the dataset for batch construction and then mine in-batch negatives to form triplets. For collecting highly informative triplets, researchers mostly focus on mining hard negatives in the second stage, while paying relatively less attention to constructing informative batches. To alleviate this issue, this paper proposes AdaSample, an adaptive online batch sampler. Specifically, hard positives are sampled based on their informativeness. In this way, we formulate a hardness-aware positive mining pipeline within a novel maximum-loss-minimization training protocol. The efficacy of the proposed method is evaluated on several standard benchmarks, where it demonstrates a significant and consistent performance gain on top of existing strong baselines. |
Tasks | |
Published | 2019-11-27 |
URL | https://arxiv.org/abs/1911.12110v1 |
PDF | https://arxiv.org/pdf/1911.12110v1.pdf |
PWC | https://paperswithcode.com/paper/adasample-adaptive-sampling-of-hard-positives |
Repo | |
Framework | |
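The core of AdaSample — drawing positives with probability increasing in their informativeness — can be illustrated as below. Using a softmax over current per-pair losses as the informativeness score is an assumption; the paper's hardness-aware pipeline and its maximum-loss-minimization protocol are more involved.

```python
import numpy as np

def adaptive_positive_sampling(positive_losses, batch_size, temperature=1.0, rng=None):
    """Sample indices of positive pairs, favouring those with higher current loss
    (i.e. harder, more informative positives). Assumes batch_size <= len(positive_losses)."""
    rng = rng or np.random.default_rng()
    losses = np.asarray(positive_losses, dtype=float)
    probs = np.exp(losses / temperature)
    probs /= probs.sum()
    return rng.choice(len(losses), size=batch_size, replace=False, p=probs)
```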
SesameBERT: Attention for Anywhere
Title | SesameBERT: Attention for Anywhere |
Authors | Ta-Chun Su, Hsiang-Chih Cheng |
Abstract | Fine-tuning pre-trained models has achieved exceptional results for many language tasks. In this study, we focus on one such self-attention network model, namely BERT, which has performed well across diverse language-understanding benchmarks by stacking layers. However, in many downstream tasks, information between layers is ignored by BERT during fine-tuning. In addition, although self-attention networks are well known for their ability to capture global dependencies, there remains room for improvement in emphasizing local contexts. In light of these advantages and disadvantages, this paper proposes SesameBERT, a generalized fine-tuning method that (1) extracts global information across all layers through Squeeze-and-Excitation and (2) enriches local information by capturing neighboring contexts via Gaussian blurring. Furthermore, we demonstrate the effectiveness of our approach on the HANS dataset, which is used to determine whether models have adopted shallow heuristics instead of learning underlying generalizations. The experiments reveal that SesameBERT outperforms BERT on the GLUE benchmark and the HANS evaluation set. |
Tasks | |
Published | 2019-10-08 |
URL | https://arxiv.org/abs/1910.03176v1 |
PDF | https://arxiv.org/pdf/1910.03176v1.pdf |
PWC | https://paperswithcode.com/paper/sesamebert-attention-for-anywhere |
Repo | |
Framework | |
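One way to read the first ingredient — Squeeze-and-Excitation applied across BERT's layers — is sketched below in PyTorch: each layer's output is squeezed to a scalar, a small gating network produces per-layer weights, and the layer outputs are mixed accordingly. The block shape and the omission of the Gaussian-blurring branch are simplifications, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LayerSqueezeExcitation(nn.Module):
    """Squeeze-and-Excitation across layers: weight and mix all BERT layer outputs
    instead of fine-tuning on the last layer alone. Assumes num_layers >= reduction."""
    def __init__(self, num_layers, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(num_layers, num_layers // reduction),
            nn.ReLU(),
            nn.Linear(num_layers // reduction, num_layers),
            nn.Sigmoid(),
        )

    def forward(self, layer_outputs):
        # layer_outputs: (batch, num_layers, seq_len, hidden)
        squeezed = layer_outputs.mean(dim=(2, 3))              # one scalar per layer
        weights = self.fc(squeezed)                            # per-layer gating weights
        return (layer_outputs * weights[:, :, None, None]).sum(dim=1)
```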
READ: Recursive Autoencoders for Document Layout Generation
Title | READ: Recursive Autoencoders for Document Layout Generation |
Authors | Akshay Gadi Patil, Omri Ben-Eliezer, Or Perel, Hadar Averbuch-Elor |
Abstract | Layout is a fundamental component of any graphic design. Creating large varieties of plausible document layouts can be a tedious task, requiring numerous constraints to be satisfied, including local ones relating different semantic elements and global constraints on the general appearance and spacing. In this paper, we present a novel framework, coined READ, for REcursive Autoencoders for Document layout generation, to generate plausible 2D layouts of documents in large quantities and varieties. First, we devise an exploratory recursive method to extract a structural decomposition of a single document. Leveraging a dataset of documents annotated with labeled bounding boxes, our recursive neural network learns to map the structural representation, given in the form of a simple hierarchy, to a compact code, the space of which is approximated by a Gaussian distribution. Novel hierarchies can be sampled from this space, obtaining new document layouts. Moreover, we introduce a combinatorial metric to measure structural similarity among document layouts. We deploy it to show that our method is able to generate highly variable and realistic layouts. We further demonstrate the utility of our generated layouts in the context of standard detection tasks on documents, showing that detection performance improves when the training data is augmented with generated documents whose layouts are produced by READ. |
Tasks | |
Published | 2019-09-01 |
URL | https://arxiv.org/abs/1909.00302v3 |
PDF | https://arxiv.org/pdf/1909.00302v3.pdf |
PWC | https://paperswithcode.com/paper/read-recursive-autoencoders-for-document |
Repo | |
Framework | |
Variational Resampling Based Assessment of Deep Neural Networks under Distribution Shift
Title | Variational Resampling Based Assessment of Deep Neural Networks under Distribution Shift |
Authors | Xudong Sun, Alexej Gossmann, Yu Wang, Bernd Bischl |
Abstract | A novel variational inference based resampling framework is proposed to evaluate the robustness and generalization capability of deep learning models with respect to distribution shift. We use Auto Encoding Variational Bayes to find a latent representation of the data, on which a Variational Gaussian Mixture Model is applied to deliberately create distribution shift by dividing the dataset into different clusters. Wasserstein distance is used to characterize the extent of distribution shift between the generated data splits. We compare several popular Convolutional Neural Network (CNN) architectures and Bayesian CNN models for image classification on the Fashion-MNIST dataset, to assess their robustness and generalization behavior under the deliberately created distribution shift, as well as under random Cross Validation. Our method of creating artificial domain splits of a single dataset can also be used to establish novel model selection criteria and assessment tools in machine learning, as well as benchmark methods for domain adaptation and domain generalization approaches. |
Tasks | Domain Adaptation, Domain Generalization, Image Classification, Model Selection |
Published | 2019-06-07 |
URL | https://arxiv.org/abs/1906.02972v6 |
PDF | https://arxiv.org/pdf/1906.02972v6.pdf |
PWC | https://paperswithcode.com/paper/resampling-based-assessment-of-robustness-to |
Repo | |
Framework | |
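A compact sketch of the resampling recipe — cluster VAE latent codes with a variational (Bayesian) Gaussian mixture to manufacture deliberately shifted splits, then quantify the shift — is shown below. The one-dimensional Wasserstein distance along a single latent dimension is only a proxy chosen for brevity; the paper characterizes the shift between the full splits.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from scipy.stats import wasserstein_distance

def cluster_based_splits(latent_codes, n_clusters=2, seed=0):
    """Split a dataset by clustering its VAE latent codes, creating artificial domain shift."""
    codes = np.asarray(latent_codes, dtype=float)
    gmm = BayesianGaussianMixture(n_components=n_clusters, random_state=seed)
    labels = gmm.fit_predict(codes)
    splits = [np.where(labels == k)[0] for k in range(n_clusters)]
    # 1-D proxy for the extent of shift: compare splits along the first latent dimension.
    shift = wasserstein_distance(codes[splits[0], 0], codes[splits[1], 0])
    return splits, shift
```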
Learning Reciprocity in Complex Sequential Social Dilemmas
Title | Learning Reciprocity in Complex Sequential Social Dilemmas |
Authors | Tom Eccles, Edward Hughes, János Kramár, Steven Wheelwright, Joel Z. Leibo |
Abstract | Reciprocity is an important feature of human social interaction and underpins our cooperative nature. What is more, simple forms of reciprocity have proved remarkably resilient in matrix game social dilemmas. Most famously, the tit-for-tat strategy performs very well in tournaments of Prisoner’s Dilemma. Unfortunately this strategy is not readily applicable to the real world, in which options to cooperate or defect are temporally and spatially extended. Here, we present a general online reinforcement learning algorithm that displays reciprocal behavior towards its co-players. We show that it can induce pro-social outcomes for the wider group when learning alongside selfish agents, both in a $2$-player Markov game, and in $5$-player intertemporal social dilemmas. We analyse the resulting policies to show that the reciprocating agents are strongly influenced by their co-players’ behavior. |
Tasks | |
Published | 2019-03-19 |
URL | http://arxiv.org/abs/1903.08082v1 |
PDF | http://arxiv.org/pdf/1903.08082v1.pdf |
PWC | https://paperswithcode.com/paper/learning-reciprocity-in-complex-sequential |
Repo | |
Framework | |
An Estimation of Personnel Food Demand Quantity for Businesses by Using Artificial Neural Networks
Title | An Estimation of Personnel Food Demand Quantity for Businesses by Using Artificial Neural Networks |
Authors | M. Hanefi Calp |
Abstract | Today, many public and private institutions provide professional food service for the personnel working in their organizations. Planning this service is difficult because the number of personnel is generally high and personnel may be away from the institution for personal or institutional reasons. As a result, it is hard to determine the daily food demand, which causes cost, time, and labor losses for the institutions. Statistical or heuristic methods are used to remove, or at least minimize, these losses. In this study, an artificial neural network model is proposed to estimate the daily food demand quantity for businesses. The data were obtained from the refectory database of a private institution that serves daily meals to a capacity of 110 people at different levels, covering the last two years (2016-2018). The model was created using the MATLAB package program. The performance of the model was determined by the regression (R) values, the Mean Absolute Percentage Error (MAPE), and the Mean Squared Error (MSE). A feed-forward backpropagation network architecture was used to train the ANN model. The best model obtained from the experiments is a multi-layer (8-10-10-1) structure with a training R of 0.9948, a testing R of 0.9830, and an error rate of 0.003783. Experimental results demonstrate that the model has a low error rate and high performance, confirming the benefit of using artificial neural networks for demand estimation. |
Tasks | |
Published | 2019-02-05 |
URL | http://arxiv.org/abs/1902.04412v1 |
PDF | http://arxiv.org/pdf/1902.04412v1.pdf |
PWC | https://paperswithcode.com/paper/an-estimation-of-personnel-food-demand |
Repo | |
Framework | |
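The reported 8-10-10-1 topology and the MAPE metric can be mirrored in a few lines. This scikit-learn sketch is only an illustration (the paper's model was built in MATLAB with a feed-forward backpropagation network); the feature set, activation, and training settings are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, one of the reported performance metrics."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# 8 input features (e.g. calendar and attendance attributes), two hidden layers of 10
# neurons, and a single output (daily meal count), mirroring the 8-10-10-1 topology.
model = MLPRegressor(hidden_layer_sizes=(10, 10), activation="logistic",
                     solver="adam", max_iter=2000, random_state=0)
# model.fit(X_train, y_train)
# y_hat = model.predict(X_test)
# print(mape(y_test, y_hat), np.mean((y_test - y_hat) ** 2))   # MAPE and MSE
```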