Paper Group ANR 460
Efficient and Robust Shape Correspondence via Sparsity-Enforced Quadratic Assignment. A Soft Recommender System for Social Networks. Deep Multi-Task Learning via Generalized Tensor Trace Norm. Learning in Markov Decision Processes under Constraints. Regularized Autoencoders via Relaxed Injective Probability Flow. Multi-objective beetle antennae sea …
Efficient and Robust Shape Correspondence via Sparsity-Enforced Quadratic Assignment
Title | Efficient and Robust Shape Correspondence via Sparsity-Enforced Quadratic Assignment |
Authors | Rui Xiang, Rongjie Lai, Hongkai Zhao |
Abstract | In this work, we introduce a novel local pairwise descriptor and then develop a simple, effective iterative method to solve the resulting quadratic assignment through sparsity control for shape correspondence between two approximately isometric surfaces. Our pairwise descriptor is based on the stiffness and mass matrices of a finite element approximation of the Laplace-Beltrami operator, which is local in space, sparse to represent, and extremely easy to compute while containing global information. It allows us to deal with open surfaces, partial matching, and topological perturbations robustly. To solve the resulting quadratic assignment problem efficiently, the two key ideas of our iterative algorithm are: 1) select pairs with good (approximate) correspondence as anchor points, 2) solve a regularized quadratic assignment problem only in the neighborhood of selected anchor points through sparsity control. These two ingredients quickly improve the quality and increase the number of anchor points while significantly reducing the computation cost in each quadratic assignment iteration. With enough high-quality anchor points, one may use various pointwise global features with reference to these anchor points to further improve the dense shape correspondence. We use various experiments to show the efficiency, quality, and versatility of our method on large data sets, patches, and point clouds (without global meshes). |
Tasks | |
Published | 2020-03-19 |
URL | https://arxiv.org/abs/2003.08680v2 |
https://arxiv.org/pdf/2003.08680v2.pdf | |
PWC | https://paperswithcode.com/paper/efficient-and-robust-shape-correspondence-via |
Repo | |
Framework | |
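The pairwise descriptor in this paper is built from the stiffness and mass matrices of a finite element discretization of the Laplace-Beltrami operator on a triangle mesh. Below is a minimal numpy/scipy sketch of assembling the standard cotangent stiffness matrix and a lumped (barycentric) mass matrix; the lumped-mass choice and the array layout are illustrative assumptions, not necessarily the authors' exact construction.

```python
import numpy as np
from scipy import sparse

def laplace_beltrami_fem(verts, faces):
    """Cotangent stiffness matrix S and lumped mass matrix M for a triangle mesh.

    verts: (n, 3) float array of vertex positions
    faces: (m, 3) int array of triangle vertex indices
    """
    n = len(verts)
    I, J, W = [], [], []
    mass = np.zeros(n)
    for tri in faces:
        p = verts[tri]                                   # the 3 corner positions
        # edge vectors, e[a] is the edge opposite corner a
        e = np.array([p[2] - p[1], p[0] - p[2], p[1] - p[0]])
        area = 0.5 * np.linalg.norm(np.cross(e[1], e[2]))
        for a in range(3):
            i, j = tri[(a + 1) % 3], tri[(a + 2) % 3]    # edge opposite corner a
            cot = np.dot(-e[(a + 1) % 3], e[(a + 2) % 3]) / (2.0 * area)
            w = 0.5 * cot
            I += [i, j, i, j]; J += [j, i, i, j]; W += [-w, -w, w, w]
        mass[tri] += area / 3.0                          # barycentric mass lumping
    S = sparse.coo_matrix((W, (I, J)), shape=(n, n)).tocsr()
    M = sparse.diags(mass)
    return S, M
```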
A Soft Recommender System for Social Networks
Title | A Soft Recommender System for Social Networks |
Authors | Marzieh Pourhojjati-Sabet, Azam Rabiee |
Abstract | Recent social recommender systems benefit from the friendship graph to make accurate recommendations, assuming that friends in a social network have exactly the same interests and preferences. Some studies have used hard clustering algorithms (such as K-means) to determine the similarity between users and consequently to define the degree of friendship. In this paper, we go a step further to identify true friends and make even more realistic recommendations: we calculate the similarity between users as well as the dependency between a user and an item. Our hypothesis is that, due to the uncertainties in user preferences, fuzzy clustering, rather than classical hard clustering, leads to more accurate recommendations. We incorporate the fuzzy C-means algorithm to obtain membership degrees for soft user clusters. Then, the user similarity metric is defined according to the soft clusters. Next, in a training scheme we determine the latent representations of users and items, extracted from the large and sparse user-item-tag matrix using matrix factorization. In the parameter tuning, we find the optimal coefficients for the influence of our soft social regularization and the user-item dependency terms. Our experimental results show that the proposed fuzzy similarity metric improves recommendations on real data compared to the baseline social recommender system with hard clustering. |
Tasks | Recommendation Systems |
Published | 2020-01-08 |
URL | https://arxiv.org/abs/2001.02520v1 |
https://arxiv.org/pdf/2001.02520v1.pdf | |
PWC | https://paperswithcode.com/paper/a-soft-recommender-system-for-social-networks |
Repo | |
Framework | |
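The soft clustering above relies on fuzzy C-means membership degrees rather than hard assignments. A minimal numpy sketch of the standard fuzzy C-means update, plus a cosine similarity between users' membership vectors, is shown below; the fuzzifier m=2 and the cosine choice are illustrative assumptions, not necessarily the authors' exact similarity metric.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means: returns (centers, memberships U of shape (n, c))."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # membership rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))          # u_ik proportional to d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def soft_user_similarity(U):
    """Cosine similarity between users' soft-cluster membership vectors."""
    Un = U / np.linalg.norm(U, axis=1, keepdims=True)
    return Un @ Un.T
```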
Deep Multi-Task Learning via Generalized Tensor Trace Norm
Title | Deep Multi-Task Learning via Generalized Tensor Trace Norm |
Authors | Yi Zhang, Yu Zhang, Wei Wang |
Abstract | The trace norm is widely used in multi-task learning as it can discover low-rank structures among tasks in terms of model parameters. Nowadays, with the emergence of big datasets and the popularity of deep learning techniques, tensor trace norms have been used for deep multi-task models. However, existing tensor trace norms cannot discover all the low-rank structures, and they require users to manually determine the importance of their components. To solve these two issues together, in this paper we propose a Generalized Tensor Trace Norm (GTTN). The GTTN is defined as a convex combination of matrix trace norms of all possible tensor flattenings and hence can discover all the possible low-rank structures. In the induced objective function, we learn the combination coefficients in the GTTN to determine the importance automatically. Experiments on real-world datasets demonstrate the effectiveness of the proposed GTTN. |
Tasks | Multi-Task Learning |
Published | 2020-02-12 |
URL | https://arxiv.org/abs/2002.04799v1 |
https://arxiv.org/pdf/2002.04799v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-multi-task-learning-via-generalized |
Repo | |
Framework | |
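GTTN is a convex combination of the matrix trace (nuclear) norms of all tensor flattenings, with learned combination weights. A minimal numpy sketch of evaluating such a combination for an order-d parameter tensor is below; enumerating flattenings by nonempty proper subsets of modes and softmax-normalizing the weights are assumptions made here to keep the combination convex, not necessarily the paper's parameterization.

```python
import itertools
import numpy as np

def flattening(T, row_modes):
    """Matricize tensor T with the given modes as rows and the rest as columns."""
    col_modes = [m for m in range(T.ndim) if m not in row_modes]
    P = np.transpose(T, list(row_modes) + col_modes)
    rows = int(np.prod([T.shape[m] for m in row_modes]))
    return P.reshape(rows, -1)

def gttn(T, logits):
    """Convex combination of nuclear norms over all nonempty proper mode subsets."""
    subsets = [s for r in range(1, T.ndim)
               for s in itertools.combinations(range(T.ndim), r)]
    assert len(logits) == len(subsets)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                       # convex combination coefficients
    norms = np.array([np.linalg.norm(flattening(T, s), ord='nuc') for s in subsets])
    return float(weights @ norms)

# Example: order-3 parameter tensor (e.g. stacked task-specific weight matrices)
T = np.random.randn(4, 5, 3)
value = gttn(T, np.zeros(6))                       # 2^3 - 2 = 6 flattenings
```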
Learning in Markov Decision Processes under Constraints
Title | Learning in Markov Decision Processes under Constraints |
Authors | Rahul Singh, Abhishek Gupta, Ness B. Shroff |
Abstract | We consider reinforcement learning (RL) in Markov Decision Processes (MDPs) in which at each time step the agent, in addition to earning a reward, also incurs an $M$-dimensional vector of costs. The objective is to design a learning rule that maximizes the cumulative reward earned over a finite time horizon of $T$ steps, while simultaneously ensuring that the cumulative cost expenditures are bounded appropriately. The consideration of cumulative cost expenditures departs from the existing RL literature, in that the agent now additionally needs to balance cost expenses in an \emph{online manner}, while simultaneously managing the exploration-exploitation trade-off typically encountered in RL tasks. This is challenging since both exploration and exploitation necessarily require the agent to expend resources. When the constraints are placed on the average costs, we present a version of the UCB algorithm and prove that both its reward and cost regrets are upper-bounded by $O\left(T_{M}S\sqrt{AT\log(T)}\right)$, where $T_{M}$ is the mixing time of the MDP, $S$ is the number of states, $A$ is the number of actions, and $T$ is the time horizon. We further show how to modify the algorithm in order to reduce the regrets of a desired subset of the $M$ costs, at the expense of increasing the regrets of the rewards and the remaining costs. We then consider RL under the constraint that the vector of cumulative cost expenditures up to each time $t\le T$ must be at most $\mathbf{c}^{ub}t$. We propose a “finite ($B$)-state” algorithm and show that its average reward is within $O\left(e^{-B}\right)$ of $r^{\star}$, the optimal average reward under the average cost constraints. |
Tasks | |
Published | 2020-02-27 |
URL | https://arxiv.org/abs/2002.12435v1 |
https://arxiv.org/pdf/2002.12435v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-in-markov-decision-processes-under |
Repo | |
Framework | |
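The algorithm analyzed above is a UCB-style method that must respect cost budgets while exploring. The sketch below shows the generic ingredients in a simplified tabular setting: confidence bonuses on empirical rewards and costs, and an action filter based on optimistic cost estimates. This is an illustrative simplification, not the authors' algorithm or its mixing-time-dependent bonuses.

```python
import numpy as np

def ucb_constrained_action(reward_sum, cost_sum, counts, t, cost_budget, c=1.0):
    """Pick an action: maximize reward UCB among actions whose cost LCB fits the budget.

    reward_sum, cost_sum, counts: per-action running statistics (1D arrays).
    """
    counts_safe = np.maximum(counts, 1)
    bonus = c * np.sqrt(np.log(max(t, 2)) / counts_safe)
    reward_ucb = reward_sum / counts_safe + bonus
    cost_lcb = cost_sum / counts_safe - bonus            # optimistic (low) cost estimate
    feasible = cost_lcb <= cost_budget
    if not feasible.any():                               # fall back to the least-cost action
        return int(np.argmin(cost_lcb))
    reward_ucb[~feasible] = -np.inf
    return int(np.argmax(reward_ucb))
```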
Regularized Autoencoders via Relaxed Injective Probability Flow
Title | Regularized Autoencoders via Relaxed Injective Probability Flow |
Authors | Abhishek Kumar, Ben Poole, Kevin Murphy |
Abstract | Invertible flow-based generative models are an effective method for learning to generate samples, while allowing for tractable likelihood computation and inference. However, the invertibility requirement restricts models to have the same latent dimensionality as the inputs. This imposes significant architectural, memory, and computational costs, making them more challenging to scale than other classes of generative models such as Variational Autoencoders (VAEs). We propose a generative model based on probability flows that does away with the bijectivity requirement on the model and only assumes injectivity. This also provides another perspective on regularized autoencoders (RAEs), with our final objectives resembling RAEs with specific regularizers that are derived by lower bounding the probability flow objective. We empirically demonstrate the promise of the proposed model, improving over VAEs and AEs in terms of sample quality. |
Tasks | |
Published | 2020-02-20 |
URL | https://arxiv.org/abs/2002.08927v1 |
https://arxiv.org/pdf/2002.08927v1.pdf | |
PWC | https://paperswithcode.com/paper/regularized-autoencoders-via-relaxed |
Repo | |
Framework | |
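Lower-bounding the injective probability-flow objective yields objectives resembling regularized autoencoders: a reconstruction term plus a latent norm penalty and a penalty involving the decoder's Jacobian. A minimal numpy sketch with a linear decoder (whose Jacobian is just its weight matrix) is below; the linear decoder and the specific penalty weights are illustrative assumptions, not the paper's final objective.

```python
import numpy as np

def rae_style_loss(x, z, W, b, lam_z=0.1, lam_jac=0.01):
    """Reconstruction + ||z||^2 + decoder-Jacobian penalty, for decoder g(z) = W z + b.

    x: (batch, d) data, z: (batch, k) latents, W: (d, k), b: (d,).
    For this linear decoder the Jacobian of g is W everywhere, so the
    Jacobian penalty reduces to ||W||_F^2.
    """
    recon = np.mean(np.sum((x - (z @ W.T + b)) ** 2, axis=1))
    latent_pen = np.mean(np.sum(z ** 2, axis=1))
    jac_pen = np.sum(W ** 2)
    return recon + lam_z * latent_pen + lam_jac * jac_pen
```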
Multi-objective beetle antennae search algorithm
Title | Multi-objective beetle antennae search algorithm |
Authors | Junfei Zhang, Yimiao Huang, Guowei Ma, Brett Nener |
Abstract | In engineering optimization problems, multiple objectives with a large number of variables under highly nonlinear constraints usually need to be optimized simultaneously. Significant computing effort is required to find the Pareto front of a nonlinear multi-objective optimization problem. Swarm-intelligence-based metaheuristic algorithms have been successfully applied to solve multi-objective optimization problems. Recently, an individual-intelligence-based algorithm called the beetle antennae search algorithm was proposed and shown to be more computationally efficient. We therefore extend this algorithm to solve multi-objective optimization problems. The proposed multi-objective beetle antennae search algorithm is tested on four well-selected benchmark functions and its performance is compared with other multi-objective optimization algorithms. The results show that the proposed algorithm achieves higher computational efficiency with satisfactory accuracy. |
Tasks | |
Published | 2020-02-24 |
URL | https://arxiv.org/abs/2002.10090v1 |
https://arxiv.org/pdf/2002.10090v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-objective-beetle-antennae-search |
Repo | |
Framework | |
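Beetle antennae search evaluates the objective at two antennae positions along a random direction and steps toward the better side; a multi-objective variant additionally needs a Pareto-dominance test to maintain a non-dominated archive. Both pieces are sketched below in numpy; the step sizes and archive handling are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def bas_step(x, f, antenna=0.1, step=0.05, rng=np.random.default_rng()):
    """One beetle antennae search move on a scalar objective f (minimization)."""
    b = rng.standard_normal(len(x))
    b /= np.linalg.norm(b)                          # random unit direction
    left, right = x + antenna * b, x - antenna * b
    return x - step * b * np.sign(f(left) - f(right))

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return np.all(fa <= fb) and np.any(fa < fb)

def update_archive(archive, candidate_objs):
    """Keep only non-dominated objective vectors after adding a candidate."""
    if any(dominates(a, candidate_objs) for a in archive):
        return archive
    archive = [a for a in archive if not dominates(candidate_objs, a)]
    return archive + [candidate_objs]
```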
Learning Predictive Representations for Deformable Objects Using Contrastive Estimation
Title | Learning Predictive Representations for Deformable Objects Using Contrastive Estimation |
Authors | Wilson Yan, Ashwin Vangipuram, Pieter Abbeel, Lerrel Pinto |
Abstract | Using visual model-based learning for deformable object manipulation is challenging due to difficulties in learning plannable visual representations along with complex dynamic models. In this work, we propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model using contrastive estimation. Using simulation data collected by randomly perturbing deformable objects on a table, we learn latent dynamics models for these objects in an offline fashion. Then, using the learned models, we use simple model-based planning to solve challenging deformable object manipulation tasks such as spreading ropes and cloths. Experimentally, we show substantial improvements in performance over standard model-based learning techniques across our rope and cloth manipulation suite. Finally, we transfer our visual manipulation policies trained on data purely collected in simulation to a real PR2 robot through domain randomization. |
Tasks | Deformable Object Manipulation |
Published | 2020-03-11 |
URL | https://arxiv.org/abs/2003.05436v1 |
https://arxiv.org/pdf/2003.05436v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-predictive-representations-for |
Repo | |
Framework | |
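The framework jointly trains an encoder and a latent dynamics model with contrastive estimation: the predicted next latent should score higher against the embedding of its true next observation than against the other embeddings in the batch. A minimal numpy sketch of this InfoNCE-style loss is below; the dot-product score and the temperature are common choices assumed here for illustration.

```python
import numpy as np

def info_nce_loss(z_pred, z_next, temperature=0.1):
    """Contrastive loss between predicted and encoded next-step latents.

    z_pred: (batch, k) latents predicted by the dynamics model
    z_next: (batch, k) latents encoded from the true next observations
    Positive pairs are the matching batch indices; all others are negatives.
    """
    z_pred = z_pred / np.linalg.norm(z_pred, axis=1, keepdims=True)
    z_next = z_next / np.linalg.norm(z_next, axis=1, keepdims=True)
    logits = (z_pred @ z_next.T) / temperature      # (batch, batch) score matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # cross-entropy with identity labels
```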
PF-Net: Point Fractal Network for 3D Point Cloud Completion
Title | PF-Net: Point Fractal Network for 3D Point Cloud Completion |
Authors | Zitian Huang, Yikuan Yu, Jiawen Xu, Feng Ni, Xinyi Le |
Abstract | In this paper, we propose the Point Fractal Network (PF-Net), a novel learning-based approach for precise and high-fidelity point cloud completion. Existing point cloud completion networks generate the overall shape from the incomplete input and thus alter the existing points, introducing noise and geometric loss. In contrast, PF-Net preserves the spatial arrangement of the incomplete point cloud and recovers the detailed geometric structure of the missing region(s) in its prediction. To achieve this, PF-Net estimates the missing point cloud hierarchically using a feature-points-based multi-scale generating network. Further, we combine a multi-stage completion loss with an adversarial loss to generate more realistic missing region(s); the adversarial loss better handles multiple modes in the prediction. Our experiments demonstrate the effectiveness of our method on several challenging point cloud completion tasks. |
Tasks | |
Published | 2020-03-01 |
URL | https://arxiv.org/abs/2003.00410v1 |
https://arxiv.org/pdf/2003.00410v1.pdf | |
PWC | https://paperswithcode.com/paper/pf-net-point-fractal-network-for-3d-point |
Repo | |
Framework | |
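Completion losses of this kind are typically measured with the Chamfer distance between the predicted and ground-truth point sets for the missing region. A small numpy sketch of a brute-force O(n·m) nearest-neighbor version is below; the paper's multi-stage and adversarial terms are not reproduced.

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (n, 3) and Q (m, 3)."""
    d2 = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=2)   # pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```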
Handwritten Character Recognition Using Unique Feature Extraction Technique
Title | Handwritten Character Recognition Using Unique Feature Extraction Technique |
Authors | Sai Abhishikth Ayyadevara, P N V Sai Ram Teja, Bharath K P, Rajesh Kumar M |
Abstract | One of the most arduous and captivating domains in image processing is handwritten character recognition. In this paper, we propose a feature extraction technique that combines unique features of geometric, zone-based hybrid, and gradient feature extraction approaches, evaluated with three different neural networks, namely the Multilayer Perceptron trained with the Backpropagation algorithm (MLP BP), the Multilayer Perceptron trained with the Levenberg-Marquardt algorithm (MLP LM), and a Convolutional Neural Network (CNN), implemented along with a Minimum Distance Classifier (MDC). The results lead to the conclusion that the proposed feature extraction algorithm is more accurate than its individual counterparts, and that the CNN is the most effective of the three networks considered. |
Tasks | |
Published | 2020-01-13 |
URL | https://arxiv.org/abs/2001.04208v1 |
https://arxiv.org/pdf/2001.04208v1.pdf | |
PWC | https://paperswithcode.com/paper/handwritten-character-recognition-using |
Repo | |
Framework | |
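Zone-based feature extraction, one of the approaches combined above, partitions the character image into a grid of zones and uses per-zone pixel densities as features. A minimal numpy sketch is below; the 4x4 grid and binary input image are illustrative assumptions.

```python
import numpy as np

def zone_density_features(img, grid=(4, 4)):
    """Per-zone foreground-pixel density for a binary character image."""
    h, w = img.shape
    gy, gx = grid
    feats = []
    for i in range(gy):
        for j in range(gx):
            zone = img[i * h // gy:(i + 1) * h // gy,
                       j * w // gx:(j + 1) * w // gx]
            feats.append(zone.mean())           # fraction of "on" pixels in the zone
    return np.array(feats)
```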
Optimization of Retrieval Algorithms on Large Scale Knowledge Graphs
Title | Optimization of Retrieval Algorithms on Large Scale Knowledge Graphs |
Authors | Jens Dörpinghaus, Andreas Stefan |
Abstract | Knowledge graphs have been shown to play an important role in recent knowledge mining and discovery, for example in the life sciences and bioinformatics. Although much research has been done on query optimization, query transformation, and the storage and retrieval of large-scale knowledge graphs, algorithmic optimization remains a major challenge and a vital factor in using graph databases. Few researchers have addressed the problem of optimizing algorithms on large-scale labeled property graphs. Here, we present two optimization approaches and compare them with a naive approach of directly querying the graph database. The aim of our work is to determine the limiting factors of graph databases like Neo4j, and we describe a novel solution to tackle these challenges. To this end, we suggest a classification schema to distinguish between the complexities of problems on a graph database. We evaluate our optimization approaches on a test system containing a knowledge graph derived from biomedical publication data enriched with text mining data. This dense graph has more than 71M nodes and 850M relationships. The results are very encouraging and, depending on the problem, we were able to show a speedup by a factor between 44 and 3839. |
Tasks | Knowledge Graphs |
Published | 2020-02-10 |
URL | https://arxiv.org/abs/2002.03686v1 |
https://arxiv.org/pdf/2002.03686v1.pdf | |
PWC | https://paperswithcode.com/paper/optimization-of-retrieval-algorithms-on-large |
Repo | |
Framework | |
DFVS: Deep Flow Guided Scene Agnostic Image Based Visual Servoing
Title | DFVS: Deep Flow Guided Scene Agnostic Image Based Visual Servoing |
Authors | Y V S Harish, Harit Pandya, Ayush Gaud, Shreya Terupally, Sai Shankar, K. Madhava Krishna |
Abstract | Existing deep-learning-based visual servoing approaches regress the relative camera pose between a pair of images. They therefore require a huge amount of training data and sometimes fine-tuning to adapt to a novel scene. Furthermore, current approaches do not consider the underlying geometry of the scene and rely on direct estimation of the camera pose; inaccuracies in pose prediction, especially for distant goals, degrade servoing performance. In this paper, we propose a two-fold solution: (i) we use optical flow, predicted by a deep neural network, as our visual features; (ii) these flow features are systematically integrated with depth estimates from another neural network using an interaction matrix. We further present an extensive benchmark in a photo-realistic 3D simulation across diverse scenes to study the convergence and generalisation of visual servoing approaches. We show convergence for displacements of over 3m and 40 degrees while maintaining precise positioning of under 2cm and 1 degree on our challenging benchmark, whereas existing approaches fail to converge in the majority of scenarios beyond 1.5m and 20 degrees. Furthermore, we also evaluate our approach in a real scenario on an aerial robot. Our approach generalizes to novel scenarios, producing precise and robust servoing performance for 6-degrees-of-freedom positioning tasks with even large camera transformations, without any retraining or fine-tuning. |
Tasks | Optical Flow Estimation |
Published | 2020-03-08 |
URL | https://arxiv.org/abs/2003.03766v1 |
https://arxiv.org/pdf/2003.03766v1.pdf | |
PWC | https://paperswithcode.com/paper/dfvs-deep-flow-guided-scene-agnostic-image |
Repo | |
Framework | |
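Classical image-based visual servoing combines a feature error with a depth-dependent interaction matrix to produce a camera velocity command, v = -λ L⁺ e; the paper feeds deep optical flow and learned depth into such a scheme. The textbook point-feature interaction matrix and control law are sketched below in numpy; this is the standard IBVS formulation, not the authors' full pipeline.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, depths, errors, gain=0.5):
    """Camera velocity v = -gain * pinv(L) @ e for stacked point features."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(points, depths)])
    e = np.concatenate(errors)                  # stacked 2D feature errors
    return -gain * np.linalg.pinv(L) @ e        # 6-DoF twist (vx, vy, vz, wx, wy, wz)
```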
Edit Distance Embedding using Convolutional Neural Networks
Title | Edit Distance Embedding using Convolutional Neural Networks |
Authors | Xinyan Dai, Xiao Yan, Kaiwen Zhou, Yuxuan Wang, Han Yang, James Cheng |
Abstract | Edit-distance-based string similarity search has many applications such as spell correction, data de-duplication, and sequence alignment. However, computing edit distance is known to have high complexity, which makes string similarity search challenging for large datasets. In this paper, we propose a deep learning pipeline (called CNN-ED) that embeds edit distance into Euclidean distance for fast approximate similarity search. A convolutional neural network (CNN) is used to generate fixed-length vector embeddings for a dataset of strings and the loss function is a combination of the triplet loss and the approximation error. To justify our choice of using CNN instead of other structures (e.g., RNN) as the model, theoretical analysis is conducted to show that some basic operations in our CNN model preserve edit distance. Experimental results show that CNN-ED outperforms data-independent CGK embedding and RNN-based GRU embedding in terms of both accuracy and efficiency by a large margin. We also show that string similarity search can be significantly accelerated using CNN-based embeddings, sometimes by orders of magnitude. |
Tasks | |
Published | 2020-01-31 |
URL | https://arxiv.org/abs/2001.11692v1 |
https://arxiv.org/pdf/2001.11692v1.pdf | |
PWC | https://paperswithcode.com/paper/edit-distance-embedding-using-convolutional |
Repo | |
Framework | |
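The approximation-error term compares Euclidean distances between embeddings against true edit distances. The dynamic-programming edit distance and a batch check of that approximation gap are sketched below in numpy; the combination with the triplet loss is not reproduced.

```python
import numpy as np

def edit_distance(a, b):
    """Levenshtein distance between strings a and b via dynamic programming."""
    dp = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev_diag, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev_diag, dp[j] = dp[j], min(dp[j] + 1,               # deletion
                                          dp[j - 1] + 1,           # insertion
                                          prev_diag + (ca != cb))  # substitution
    return int(dp[-1])

def approximation_gap(embeddings, strings):
    """Mean | ||f(a) - f(b)|| - ED(a, b) | over all string pairs."""
    gaps = []
    for i in range(len(strings)):
        for j in range(i + 1, len(strings)):
            d_embed = np.linalg.norm(embeddings[i] - embeddings[j])
            gaps.append(abs(d_embed - edit_distance(strings[i], strings[j])))
    return float(np.mean(gaps))
```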
MULTEXT-East
Title | MULTEXT-East |
Authors | Tomaž Erjavec |
Abstract | The MULTEXT-East language resources are a multilingual dataset for language engineering research, focused on the morphosyntactic level of linguistic description. The MULTEXT-East dataset includes EAGLES-based morphosyntactic specifications, morphosyntactic lexicons, and annotated multilingual corpora. The parallel corpus, the novel “1984” by George Orwell, is sentence-aligned and contains hand-validated morphosyntactic descriptions and lemmas. The resources are uniformly encoded in XML, using the Text Encoding Initiative Guidelines, TEI P5, and cover 16 languages: Bulgarian, Croatian, Czech, English, Estonian, Hungarian, Macedonian, Persian, Polish, Resian, Romanian, Russian, Serbian, Slovak, Slovene, and Ukrainian. The dataset is extensively documented and freely available for research purposes. This case study gives a history of the development of the MULTEXT-East resources, presents their encoding and components, discusses related work, and gives some conclusions. |
Tasks | |
Published | 2020-03-31 |
URL | https://arxiv.org/abs/2003.14026v1 |
https://arxiv.org/pdf/2003.14026v1.pdf | |
PWC | https://paperswithcode.com/paper/multext-east |
Repo | |
Framework | |
Evolved Neuromorphic Control for High Speed Divergence-based Landings of MAVs
Title | Evolved Neuromorphic Control for High Speed Divergence-based Landings of MAVs |
Authors | J. J. Hagenaars, F. Paredes-Vallés, S. M. Bohté, G. C. H. E. de Croon |
Abstract | Flying insects are capable of vision-based navigation in cluttered environments, reliably avoiding obstacles through fast and agile maneuvers, while being very efficient in the processing of visual stimuli. Meanwhile, autonomous micro air vehicles still lag far behind their biological counterparts, displaying inferior performance with a much higher energy consumption. In light of this, we want to mimic flying insects in terms of their processing capabilities, and consequently apply gained knowledge to a maneuver of relevance. This letter does so through evolving spiking neural networks for controlling landings of micro air vehicles using the divergence of the optical flow field of a downward-looking camera. We demonstrate that the resulting neuromorphic controllers transfer robustly from a highly abstracted simulation to the real world, performing fast and safe landings while keeping network spike rate minimal. Furthermore, we provide insight into the resources required for successfully solving the problem of divergence-based landing, showing that high-resolution control can potentially be learned with only a single spiking neuron. To the best of our knowledge, this is the first work integrating spiking neural networks in the control loop of a real-world flying robot. Videos of the experiments can be found at http://bit.ly/neuro-controller . |
Tasks | Optical Flow Estimation |
Published | 2020-03-06 |
URL | https://arxiv.org/abs/2003.03118v1 |
https://arxiv.org/pdf/2003.03118v1.pdf | |
PWC | https://paperswithcode.com/paper/evolved-neuromorphic-control-for-high-speed |
Repo | |
Framework | |
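The evolved controllers are spiking neural networks; the basic building block is a leaky integrate-and-fire (LIF) neuron that accumulates weighted input spikes, leaks toward rest, and fires when a threshold is crossed. A minimal discrete-time numpy sketch is below; the parameters are illustrative, since the paper evolves the actual network and its parameters.

```python
import numpy as np

def lif_step(v, input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0):
    """One Euler step of a layer of leaky integrate-and-fire neurons.

    v: membrane potentials (1D array), input_current: summed weighted input spikes.
    Returns (new potentials, binary spike vector).
    """
    v = v + dt / tau * (-(v - v_rest) + input_current)   # leaky integration
    spikes = (v >= v_thresh).astype(float)
    v = np.where(spikes > 0, v_rest, v)                  # reset after a spike
    return v, spikes
```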
Occlusion Aware Unsupervised Learning of Optical Flow From Video
Title | Occlusion Aware Unsupervised Learning of Optical Flow From Video |
Authors | Jianfeng Li, Junqiao Zhao, Tiantian Feng, Chen Ye, Lu Xiong |
Abstract | In this paper, we propose an unsupervised learning method for estimating the optical flow between video frames, with a particular focus on the occlusion problem. Occlusion, which occurs when certain pixels are visible in one video frame but not in adjacent frames, is caused by the movement of an object or of the camera. Due to the lack of pixel correspondence between frames in the occluded area, incorrect photometric loss calculation can mislead the optical flow training process. In video sequences, we found that the occlusions in the forward ($t\rightarrow t+1$) and backward ($t\rightarrow t-1$) frame pairs are usually complementary: pixels that are occluded in subsequent frames are often not occluded in the previous frame, and vice versa. Therefore, by exploiting this complementarity, a new weighted loss is proposed to solve the occlusion problem. In addition, we calculate gradients in multiple directions to provide richer supervision information. Our method achieves competitive optical flow accuracy compared to the baseline and some supervised methods on the KITTI 2012 and 2015 benchmarks. The source code has been released at https://github.com/jianfenglihg/UnOpticalFlow.git. |
Tasks | Optical Flow Estimation |
Published | 2020-03-04 |
URL | https://arxiv.org/abs/2003.01960v1 |
https://arxiv.org/pdf/2003.01960v1.pdf | |
PWC | https://paperswithcode.com/paper/occlusion-aware-unsupervised-learning-of-2 |
Repo | |
Framework | |
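A common way to exploit the forward-backward complementarity described above is a consistency check: look up the backward flow at the forward-warped position and mark pixels where the two flows do not roughly cancel as occluded, then down-weight their photometric loss. A nearest-neighbor numpy sketch is below; the thresholding constants follow the usual consistency heuristic and are assumptions, not the paper's exact weighting.

```python
import numpy as np

def occlusion_mask(flow_fw, flow_bw, alpha=0.01, beta=0.5):
    """1 where a pixel is visible in the next frame, 0 where it is likely occluded.

    flow_fw, flow_bw: (H, W, 2) forward (t -> t+1) and backward (t+1 -> t) flows,
    with channel 0 holding the x displacement and channel 1 the y displacement.
    """
    H, W, _ = flow_fw.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xt = np.clip(np.round(xs + flow_fw[..., 0]).astype(int), 0, W - 1)
    yt = np.clip(np.round(ys + flow_fw[..., 1]).astype(int), 0, H - 1)
    bw_at_target = flow_bw[yt, xt]                       # backward flow where the pixel lands
    sq_sum = np.sum(flow_fw ** 2, axis=-1) + np.sum(bw_at_target ** 2, axis=-1)
    mismatch = np.sum((flow_fw + bw_at_target) ** 2, axis=-1)
    return (mismatch < alpha * sq_sum + beta).astype(float)

def masked_photometric_loss(img_t, img_warped, mask, eps=1e-3):
    """Charbonnier photometric loss, down-weighting likely occluded pixels."""
    diff = np.sqrt(np.sum((img_t - img_warped) ** 2, axis=-1) + eps ** 2)
    return float((diff * mask).sum() / (mask.sum() + 1e-8))
```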