Paper Group ANR 986
Statistical Parameter Selection for Clustering Persistence Diagrams. Fully Hyperbolic Convolutional Neural Networks. Depth Estimation on Underwater Omni-directional Images Using a Deep Neural Network. How do neural networks see depth in single images?. Unsupervised Domain-Adaptive Person Re-identification Based on Attributes. Painting on Placement: …
Statistical Parameter Selection for Clustering Persistence Diagrams
Title | Statistical Parameter Selection for Clustering Persistence Diagrams |
Authors | Max Kontak, Jules Vidal, Julien Tierny |
Abstract | In urgent decision making applications, ensemble simulations are an important way to determine different outcome scenarios based on currently available data. In this paper, we will analyze the output of ensemble simulations by considering so-called persistence diagrams, which are reduced representations of the original data, motivated by the extraction of topological features. Based on a recently published progressive algorithm for the clustering of persistence diagrams, we determine the optimal number of clusters, and therefore the number of significantly different outcome scenarios, by the minimization of established statistical score functions. Furthermore, we present a proof-of-concept prototype implementation of the statistical selection of the number of clusters and provide the results of an experimental study, where this implementation has been applied to real-world ensemble data sets. |
Tasks | Decision Making |
Published | 2019-10-17 |
URL | https://arxiv.org/abs/1910.08398v1 |
https://arxiv.org/pdf/1910.08398v1.pdf | |
PWC | https://paperswithcode.com/paper/statistical-parameter-selection-for |
Repo | |
Framework | |
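The cluster-count selection described above can be illustrated with a toy sketch. This is not the paper's method (which clusters persistence diagrams under topological metrics); it only shows the generic recipe of sweeping the number of clusters k and picking the one that optimizes an established statistical score, here the silhouette coefficient on Euclidean toy data:

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Deterministic spread-out initialization, then standard Lloyd iterations.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def silhouette(X, labels):
    # Mean silhouette coefficient: (b - a) / max(a, b) per point.
    n = len(X)
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        a = D[i, same & (np.arange(n) != i)].mean() if same.sum() > 1 else 0.0
        b = min(D[i, labels == c].mean() for c in set(labels) - {labels[i]})
        s[i] = (b - a) / max(a, b)
    return s.mean()

# Two well-separated blobs: the score should peak at k = 2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
scores = {k: silhouette(X, kmeans(X, k)) for k in (2, 3, 4)}
best_k = max(scores, key=scores.get)
```

In the paper's setting, the Euclidean distances would be replaced by distances between persistence diagrams, but the selection loop is the same.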
Fully Hyperbolic Convolutional Neural Networks
Title | Fully Hyperbolic Convolutional Neural Networks |
Authors | Keegan Lensink, Eldad Haber, Bas Peters |
Abstract | Convolutional Neural Networks (CNN) have recently seen tremendous success in various computer vision tasks. However, their application to problems with high dimensional input and output, such as high-resolution image and video segmentation or 3D medical imaging, has been limited by various factors. Primarily, in the training stage, it is necessary to store network activations for back propagation. In these settings, the memory requirements associated with storing activations can exceed what is feasible with current hardware, especially for problems in 3D. Previously proposed reversible architectures allow one to recalculate activations in the backwards pass instead of storing them. For computer vision tasks, only block-reversible networks have been possible, because pooling operations are not reversible. Block-reversibility still requires storing a number of activations that grows with the number of blocks. Motivated by the propagation of signals over physical networks, which is governed by the hyperbolic telegraph equation, in this work we introduce a fully conservative hyperbolic network for problems with high dimensional input and output. We introduce a coarsening operation that allows completely reversible CNNs by using the Discrete Wavelet Transform and its inverse to both coarsen and interpolate the network state and change the number of channels. This means that during training we do not need to store any of the activations from the forward pass, and can train arbitrarily deep networks. We show that fully reversible networks are able to achieve results comparable to the state of the art in image depth estimation and full 3D video segmentation, with a much lower memory footprint that is constant, independent of the network depth. |
Tasks | Depth Estimation, Image Classification, Semantic Segmentation, Video Semantic Segmentation |
Published | 2019-05-24 |
URL | https://arxiv.org/abs/1905.10484v2 |
https://arxiv.org/pdf/1905.10484v2.pdf | |
PWC | https://paperswithcode.com/paper/fully-hyperbolic-convolutional-neural |
Repo | |
Framework | |
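The reversible coarsening idea — using the DWT to trade spatial resolution for channels without losing information — can be sketched with a one-level orthonormal Haar transform. This is a simplification for illustration; the paper couples such a transform with conservative hyperbolic network dynamics:

```python
import numpy as np

def haar_coarsen(x):
    # x: (C, H, W) with even H, W  ->  (4C, H/2, W/2), orthonormal Haar DWT.
    a, b = x[:, 0::2, :], x[:, 1::2, :]
    lo, hi = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
    ll = (lo[:, :, 0::2] + lo[:, :, 1::2]) / np.sqrt(2)
    lh = (lo[:, :, 0::2] - lo[:, :, 1::2]) / np.sqrt(2)
    hl = (hi[:, :, 0::2] + hi[:, :, 1::2]) / np.sqrt(2)
    hh = (hi[:, :, 0::2] - hi[:, :, 1::2]) / np.sqrt(2)
    return np.concatenate([ll, lh, hl, hh], axis=0)

def haar_interpolate(y):
    # Exact inverse: (4C, H/2, W/2) -> (C, H, W).
    c = y.shape[0] // 4
    ll, lh, hl, hh = y[:c], y[c:2*c], y[2*c:3*c], y[3*c:]
    lo = np.zeros((c, ll.shape[1], ll.shape[2] * 2))
    hi = np.zeros_like(lo)
    lo[:, :, 0::2], lo[:, :, 1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    hi[:, :, 0::2], hi[:, :, 1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.zeros((c, lo.shape[1] * 2, lo.shape[2]))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return x

x = np.random.default_rng(0).normal(size=(3, 8, 8))
y = haar_coarsen(x)  # shape (12, 4, 4): halved spatial dims, 4x channels
```

Because the transform is orthogonal, `haar_interpolate(haar_coarsen(x))` reconstructs `x` exactly, so nothing is lost at the pooling step and activations need not be stored.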
Depth Estimation on Underwater Omni-directional Images Using a Deep Neural Network
Title | Depth Estimation on Underwater Omni-directional Images Using a Deep Neural Network |
Authors | Haofei Kuang, Qingwen Xu, Sören Schwertfeger |
Abstract | In this work, we exploit a depth-estimation Fully Convolutional Residual Neural Network (FCRN) designed for in-air perspective images to estimate the depth of underwater perspective and omni-directional images. We train one conventional and one spherical FCRN for underwater perspective and omni-directional images, respectively. The spherical FCRN is derived from the perspective FCRN via a spherical longitude-latitude mapping: the omni-directional camera is modeled as a sphere, while images captured by it are displayed in longitude-latitude form. Due to the lack of underwater datasets, we synthesize images in both data-driven and theoretical ways, which are used in training and testing. Finally, experiments are conducted on these synthetic images and the results are presented in both qualitative and quantitative ways. The comparison between the ground truth and the estimated depth maps indicates the effectiveness of our method. |
Tasks | Depth Estimation |
Published | 2019-05-23 |
URL | https://arxiv.org/abs/1905.09441v1 |
https://arxiv.org/pdf/1905.09441v1.pdf | |
PWC | https://paperswithcode.com/paper/depth-estimation-on-underwater-omni |
Repo | |
Framework | |
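The sphere model behind the longitude-latitude form can be sketched as a pixel-to-ray mapping on the equirectangular image. The axis conventions below are an assumption for illustration, not necessarily the paper's:

```python
import numpy as np

def pixel_to_ray(u, v, width, height):
    # Equirectangular (longitude-latitude) image: columns map to longitude
    # in [-pi, pi), rows to latitude from +pi/2 (top) down to -pi/2 (bottom).
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    # Unit viewing ray on the sphere modeling the omni-directional camera
    # (x right, y up, z forward).
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

ray = pixel_to_ray(255.5, 127.5, 512, 256)  # image center: forward direction
```

A spherical network layer operates on this grid while accounting for the latitude-dependent distortion that a perspective FCRN never sees.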
How do neural networks see depth in single images?
Title | How do neural networks see depth in single images? |
Authors | Tom van Dijk, Guido C. H. E. de Croon |
Abstract | Deep neural networks have led to a breakthrough in depth estimation from single images. Recent work often focuses on the accuracy of the depth map, where an evaluation on a publicly available test set such as the KITTI vision benchmark is often the main result of the article. While such an evaluation shows how well neural networks can estimate depth, it does not show how they do this. To the best of our knowledge, no work currently exists that analyzes what these networks have learned. In this work we take the MonoDepth network by Godard et al. and investigate what visual cues it exploits for depth estimation. We find that the network ignores the apparent size of known obstacles in favor of their vertical position in the image. Using the vertical position requires the camera pose to be known; however, we find that MonoDepth only partially corrects for changes in camera pitch and roll and that these influence the estimated depth towards obstacles. We further show that MonoDepth’s use of the vertical image position allows it to estimate the distance towards arbitrary obstacles, even those not appearing in the training set, but that it requires a strong edge at the ground contact point of the object to do so. In future work we will investigate whether these observations also apply to other neural networks for monocular depth estimation. |
Tasks | Depth Estimation, Monocular Depth Estimation |
Published | 2019-05-16 |
URL | https://arxiv.org/abs/1905.07005v1 |
https://arxiv.org/pdf/1905.07005v1.pdf | |
PWC | https://paperswithcode.com/paper/how-do-neural-networks-see-depth-in-single |
Repo | |
Framework | |
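The vertical-position cue the authors identify follows from a flat-ground pinhole model: a ground-contact point imaged at row y below the horizon row y0 lies at depth Z = f·h / (y − y0), where f is the focal length in pixels and h the camera height. A minimal sketch (the numeric values are illustrative, not taken from the paper):

```python
def depth_from_vertical_position(y, horizon_y, focal_px, cam_height_m):
    # Flat-ground pinhole model: a ground-contact point imaged at row y
    # (below the horizon row horizon_y) lies at depth Z = f * h / (y - y0).
    assert y > horizon_y, "ground-contact point must be below the horizon"
    return focal_px * cam_height_m / (y - horizon_y)

# Illustrative KITTI-like numbers: f = 720 px, camera 1.65 m above the road.
z = depth_from_vertical_position(y=300.0, horizon_y=180.0,
                                 focal_px=720.0, cam_height_m=1.65)
```

This also makes the observed pitch sensitivity concrete: a pitch change shifts `horizon_y`, and a network that does not fully correct for it will systematically misestimate Z.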
Unsupervised Domain-Adaptive Person Re-identification Based on Attributes
Title | Unsupervised Domain-Adaptive Person Re-identification Based on Attributes |
Authors | Xiangping Zhu, Pietro Morerio, Vittorio Murino |
Abstract | Pedestrian attributes, e.g., hair length, clothes type and color, locally describe the semantic appearance of a person. Training person re-identification (ReID) algorithms under the supervision of such attributes has proven to be effective in extracting local features which are important for ReID. Unlike person identity, attributes are consistent across different domains (or datasets). However, most ReID datasets lack attribute annotations. On the other hand, several datasets are labeled with sufficient attributes for pedestrian attribute recognition. Exploiting such data for ReID purposes can be a way to alleviate the shortage of attribute annotations in the ReID case. In this work, an unsupervised domain-adaptive ReID feature learning framework is proposed to make full use of attribute annotations. We propose to transfer attribute-related features from their original domain to the ReID one: to this end, we introduce an adversarial discriminative domain adaptation method in order to learn domain-invariant features for encoding semantic attributes. Experiments on three large-scale datasets validate the effectiveness of the proposed ReID framework. |
Tasks | Domain Adaptation, Pedestrian Attribute Recognition, Person Re-Identification |
Published | 2019-08-27 |
URL | https://arxiv.org/abs/1908.10359v1 |
https://arxiv.org/pdf/1908.10359v1.pdf | |
PWC | https://paperswithcode.com/paper/unsupervised-domain-adaptive-person-re |
Repo | |
Framework | |
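A minimal numpy sketch of learning domain-invariant features with an adversarial domain discriminator. Note that this uses a gradient-reversal formulation for brevity; it is a stand-in for the general idea, not the paper's exact adversarial discriminative method:

```python
import numpy as np

def domain_adversarial_grads(feats, domains, w, lam=1.0):
    # Logistic domain discriminator p = sigmoid(feats @ w). The discriminator
    # descends its cross-entropy gradient g_w, while the feature extractor
    # receives the same gradient with flipped sign (scaled by lam), pushing
    # features toward domain invariance.
    p = 1.0 / (1.0 + np.exp(-feats @ w))
    g_w = feats.T @ (p - domains) / len(domains)       # discriminator direction
    g_feats = np.outer(p - domains, w) / len(domains)  # gradient w.r.t. features
    return g_w, -lam * g_feats                         # reversed for the encoder

feats = np.array([[1.0, 0.0], [0.0, 1.0]])
domains = np.array([0.0, 1.0])  # 0 = attribute dataset, 1 = ReID dataset
g_w, g_enc = domain_adversarial_grads(feats, domains, np.zeros(2))
```

When the discriminator cannot tell the two datasets apart, the reversed gradient vanishes and the attribute features are, by this criterion, domain invariant.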
Painting on Placement: Forecasting Routing Congestion using Conditional Generative Adversarial Nets
Title | Painting on Placement: Forecasting Routing Congestion using Conditional Generative Adversarial Nets |
Authors | Cunxi Yu, Zhiru Zhang |
Abstract | The physical design process commonly consumes hours to days for large designs, and routing is known as the most critical step. Demands for accurate routing quality prediction rise to a new level to accelerate hardware innovation with advanced technology nodes. This work presents an approach that forecasts the density of all routing channels over the entire floorplan, with features collected up to placement, using conditional GANs. Specifically, forecasting routing congestion is cast as an image translation (colorization) problem. The proposed approach is applied to a) placement exploration for minimum congestion, b) constrained placement exploration and c) forecasting congestion in real-time during incremental placement, using eight designs targeting a fixed FPGA architecture. |
Tasks | Colorization |
Published | 2019-04-15 |
URL | http://arxiv.org/abs/1904.07077v1 |
http://arxiv.org/pdf/1904.07077v1.pdf | |
PWC | https://paperswithcode.com/paper/painting-on-placement-forecasting-routing |
Repo | |
Framework | |
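Casting congestion forecasting as conditional image translation suggests a pix2pix-style generator objective: an adversarial term plus an L1 term tying the generated congestion map to the ground-truth routing density. The loss below is an assumption for illustration; the paper's exact objective may differ:

```python
import numpy as np

def cgan_generator_loss(d_fake, fake_map, true_map, lam=100.0):
    # d_fake: discriminator outputs D(G(placement) | placement) in (0, 1].
    # The adversarial term rewards fooling D; the L1 term rewards an
    # accurate per-channel congestion "colorization" of the floorplan.
    adv = -np.log(d_fake + 1e-12).mean()
    l1 = np.abs(fake_map - true_map).mean()
    return adv + lam * l1

perfect = cgan_generator_loss(np.array([1.0]), np.zeros((4, 4)), np.zeros((4, 4)))
```

The heavy L1 weight (lam) reflects that, unlike artistic colorization, the congestion map has a unique ground truth the forecast must match.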
UrbanLoco: A Full Sensor Suite Dataset for Mapping and Localization in Urban Scenes
Title | UrbanLoco: A Full Sensor Suite Dataset for Mapping and Localization in Urban Scenes |
Authors | Weisong Wen, Yiyang Zhou, Guohao Zhang, Saman Fahandezh-Saadi, Xiwei Bai, Wei Zhan, Masayoshi Tomizuka, Li-Ta Hsu |
Abstract | Mapping and localization are critical modules of autonomous driving, and significant achievements have been reached in this field. Beyond Global Navigation Satellite System (GNSS), research in point cloud registration, visual feature matching, and inertial navigation has greatly enhanced the accuracy and robustness of mapping and localization in different scenarios. However, highly urbanized scenes are still challenging: LIDAR- and camera-based methods perform poorly with numerous dynamic objects; GNSS-based solutions experience signal loss and multipath problems; inertial measurement units (IMUs) suffer from drifting. Unfortunately, current public datasets either do not adequately address this urban challenge or do not provide enough sensor information related to mapping and localization. Here we present UrbanLoco: a mapping/localization dataset collected in highly urbanized environments with a full sensor suite. The dataset includes 13 trajectories collected in San Francisco and Hong Kong, covering a total length of over 40 kilometers. Our dataset includes a wide variety of urban terrains: urban canyons, bridges, tunnels, sharp turns, etc. More importantly, our dataset includes information from LIDAR, cameras, IMU, and GNSS receivers. The dataset is publicly available through the link in the footnote. Dataset Link: https://advdataset2019.wixsite.com/advlocalization. |
Tasks | Autonomous Driving, Point Cloud Registration |
Published | 2019-12-19 |
URL | https://arxiv.org/abs/1912.09513v1 |
https://arxiv.org/pdf/1912.09513v1.pdf | |
PWC | https://paperswithcode.com/paper/urbanloco-a-full-sensor-suite-dataset-for |
Repo | |
Framework | |
SMT-based Constraint Answer Set Solver EZSMT+
Title | SMT-based Constraint Answer Set Solver EZSMT+ |
Authors | Da Shen, Yuliya Lierler |
Abstract | Constraint answer set programming integrates answer set programming with constraint processing. System EZSMT+ is a constraint answer set programming tool that utilizes satisfiability modulo theories solvers for search. Its theoretical foundation lies in generalizations of Niemelä’s characterization of answer sets of a logic program via so-called level rankings. |
Tasks | |
Published | 2019-05-08 |
URL | https://arxiv.org/abs/1905.03334v2 |
https://arxiv.org/pdf/1905.03334v2.pdf | |
PWC | https://paperswithcode.com/paper/190503334 |
Repo | |
Framework | |
A fast online cascaded regression algorithm for face alignment
Title | A fast online cascaded regression algorithm for face alignment |
Authors | Lin Feng, Caifeng Liu, Shenglan Liu, Huibing Wang |
Abstract | Traditional face alignment based on machine learning usually tracks the localizations of facial landmarks employing a static model trained offline, where all of the training data is available in advance. When new training samples arrive, the static model must be retrained from scratch, which is excessively time-consuming and memory-consuming. In many real-time applications, the training data is obtained one by one or batch by batch. As a result, a static model’s performance is limited on sequential images with extensive variations. Therefore, the most critical and challenging aspect in this field is dynamically updating the tracker’s models to continuously enhance predictive and generalization capabilities. In order to address this question, we develop a fast and accurate online learning algorithm for face alignment. Particularly, we incorporate the online sequential extreme learning machine into a parallel cascaded regression framework, coined incremental cascade regression (ICR). To the best of our knowledge, this is the first incremental cascaded framework with a non-linear regressor. One main advantage of ICR is that the tracker model can be quickly updated in an incremental way, without the entire retraining process, when a new input arrives. Experimental results demonstrate that the proposed ICR is more accurate and efficient on still or sequential images compared with recent state-of-the-art cascade approaches. Furthermore, the incremental learning proposed in this paper can update the trained model in real time. |
Tasks | Face Alignment |
Published | 2019-05-10 |
URL | https://arxiv.org/abs/1905.04010v1 |
https://arxiv.org/pdf/1905.04010v1.pdf | |
PWC | https://paperswithcode.com/paper/a-fast-online-cascaded-regression-algorithm |
Repo | |
Framework | |
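The core of the incremental update — the online sequential extreme learning machine (OS-ELM), which lets a non-linear regressor absorb new samples without retraining — can be sketched as a recursive least-squares update of the output weights. This is a generic OS-ELM, not the paper's full cascaded tracker:

```python
import numpy as np

class OSELM:
    # Random fixed hidden layer (tanh features); only the output weights
    # beta are learned, via recursive least squares, so new chunks of data
    # update the model without retraining from scratch.
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.P = None
        self.beta = np.zeros((n_hidden, n_out))

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)

    def partial_fit(self, X, T):
        H = self._h(X)
        if self.P is None:  # initial batch: regularized least squares
            self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
            self.beta = self.P @ H.T @ T
        else:               # incremental RLS update for a new chunk
            K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
            self.P -= self.P @ H.T @ K @ H @ self.P
            self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta
```

Each `partial_fit` call costs only an inversion in the chunk size, which is what makes the per-frame update fast enough for tracking.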
Productivity equation and the m distributions of information processing in workflows
Title | Productivity equation and the m distributions of information processing in workflows |
Authors | Charles Roberto Telles |
Abstract | This research investigates a productivity equation for workflows and its robustness under the definition of workflows as probabilistic distributions. The equation and its derivations were formulated within a theoretical framework drawing on information theory, probability, and complex adaptive systems. By defining the productivity equation for organism-object interactions, the mathematical derivations of workflows can be predicted and monitored without strict empirical methods, which allows workflow flexibility in organism-object environments. |
Tasks | |
Published | 2019-06-17 |
URL | https://arxiv.org/abs/1906.06997v1 |
https://arxiv.org/pdf/1906.06997v1.pdf | |
PWC | https://paperswithcode.com/paper/productivity-equation-and-the-m-distributions |
Repo | |
Framework | |
Leveraging Semantics for Incremental Learning in Multi-Relational Embeddings
Title | Leveraging Semantics for Incremental Learning in Multi-Relational Embeddings |
Authors | Angel Daruna, Weiyu Liu, Zsolt Kira, Sonia Chernova |
Abstract | Service robots benefit from encoding information in semantically meaningful ways to enable more robust task execution. Prior work has shown multi-relational embeddings can encode semantic knowledge graphs to promote generalizability and scalability, but only within a batched learning paradigm. We present Incremental Semantic Initialization (ISI), an incremental learning approach that enables novel semantic concepts to be initialized in the embedding in relation to previously learned embeddings of semantically similar concepts. We evaluate ISI on mined AI2Thor and MatterPort3D datasets; our experiments show that on average ISI improves immediate query performance by 41.4%. Additionally, ISI methods on average reduced the number of epochs required to approach model convergence by 78.2%. |
Tasks | Knowledge Graphs |
Published | 2019-05-29 |
URL | https://arxiv.org/abs/1905.12181v2 |
https://arxiv.org/pdf/1905.12181v2.pdf | |
PWC | https://paperswithcode.com/paper/leveraging-semantics-for-incremental-learning |
Repo | |
Framework | |
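The initialization idea can be sketched in a few lines: instead of a random vector, a newly encountered concept starts at the average of the embeddings of semantically similar, already-learned concepts. The embedding table and similarity lists below are hypothetical, and the exact ISI procedure may differ:

```python
import numpy as np

# Hypothetical learned multi-relational embeddings of household concepts.
embeddings = {
    "mug":  np.array([0.9, 0.1, 0.0]),
    "cup":  np.array([0.8, 0.2, 0.1]),
    "sofa": np.array([0.0, 0.9, 0.8]),
}

def semantic_init(similar_concepts, embeddings, rng=None):
    # Initialize a novel concept from its known semantically similar
    # neighbours; fall back to a small random vector if none is known.
    known = [embeddings[c] for c in similar_concepts if c in embeddings]
    if not known:
        rng = rng or np.random.default_rng(0)
        return rng.normal(scale=0.1, size=len(next(iter(embeddings.values()))))
    return np.mean(known, axis=0)

embeddings["teacup"] = semantic_init(["mug", "cup"], embeddings)
```

Starting near its neighbours is what lets the new concept answer queries immediately and converge in far fewer epochs than a random initialization.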
Low-Resource Sequence Labeling via Unsupervised Multilingual Contextualized Representations
Title | Low-Resource Sequence Labeling via Unsupervised Multilingual Contextualized Representations |
Authors | Zuyi Bao, Rui Huang, Chen Li, Kenny Q. Zhu |
Abstract | Previous work on cross-lingual sequence labeling tasks either requires parallel data or bridges the two languages through word-by-word matching. Such requirements and assumptions are infeasible for most languages, especially for languages with large linguistic distances, e.g., English and Chinese. In this work, we propose a Multilingual Language Model with deep semantic Alignment (MLMA) to generate language-independent representations for cross-lingual sequence labeling. Our methods require only monolingual corpora with no bilingual resources at all and take advantage of deep contextualized representations. Experimental results show that our approach achieves new state-of-the-art NER and POS performance across European languages, and is also effective on distant language pairs such as English and Chinese. |
Tasks | Language Modelling |
Published | 2019-10-24 |
URL | https://arxiv.org/abs/1910.10893v1 |
https://arxiv.org/pdf/1910.10893v1.pdf | |
PWC | https://paperswithcode.com/paper/low-resource-sequence-labeling-via |
Repo | |
Framework | |
Recency predicts bursts in the evolution of author citations
Title | Recency predicts bursts in the evolution of author citations |
Authors | Filipi Nascimento Silva, Aditya Tandon, Diego Raphael Amancio, Alessandro Flammini, Filippo Menczer, Staša Milojević, Santo Fortunato |
Abstract | The citation process for scientific papers has been studied extensively. However, while the citations accrued by authors are the sum of the citations of their papers, translating the dynamics of citation accumulation from the paper to the author level is not trivial. Here we conduct a systematic study of the evolution of author citations, and in particular their bursty dynamics. We find empirical evidence of a correlation between the number of citations most recently accrued by an author and the number of citations they receive in the future. Using a simple model where the probability for an author to receive new citations depends only on the number of citations collected in the previous 12-24 months, we are able to reproduce both the citation and burst size distributions of authors across multiple decades. |
Tasks | |
Published | 2019-11-27 |
URL | https://arxiv.org/abs/1911.11926v1 |
https://arxiv.org/pdf/1911.11926v1.pdf | |
PWC | https://paperswithcode.com/paper/recency-predicts-bursts-in-the-evolution-of |
Repo | |
Framework | |
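The model described in the abstract — new citations arriving with probability proportional to citations received in a recent window — can be sketched as a simple simulation. The parameter values below are illustrative, not the paper's fitted values:

```python
import numpy as np

def simulate_recency_model(n_authors=200, steps=5000, window=24, seed=0):
    # Each step, one citation goes to an author with probability proportional
    # to the citations they received in the previous `window` steps, plus a
    # small baseline so not-yet-cited authors can still be cited.
    rng = np.random.default_rng(seed)
    history = np.zeros((steps, n_authors), dtype=int)
    for t in range(steps):
        recent = history[max(0, t - window):t].sum(axis=0) + 1.0
        a = rng.choice(n_authors, p=recent / recent.sum())
        history[t, a] = 1
    return history.sum(axis=0)

totals = simulate_recency_model()
```

Because attachment depends only on recent activity, runs of luck compound into bursts, producing the heavy-tailed author citation counts the paper reports.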
Simple Natural Language Processing Tools for Danish
Title | Simple Natural Language Processing Tools for Danish |
Authors | Leon Derczynski |
Abstract | This technical note describes a set of baseline tools for automatic processing of Danish text. The tools are machine-learning based, using natural language processing models trained over previously annotated documents. They are maintained at ITU Copenhagen and will always be freely available. |
Tasks | |
Published | 2019-06-27 |
URL | https://arxiv.org/abs/1906.11608v2 |
https://arxiv.org/pdf/1906.11608v2.pdf | |
PWC | https://paperswithcode.com/paper/simple-natural-language-processing-tools-for |
Repo | |
Framework | |
Hard Sample Mining for the Improved Retraining of Automatic Speech Recognition
Title | Hard Sample Mining for the Improved Retraining of Automatic Speech Recognition |
Authors | Jiabin Xue, Jiqing Han, Tieran Zheng, Jiaxing Guo, Boyong Wu |
Abstract | Retraining with ever more new training data from the target domain is an effective way to improve the performance of existing Automatic Speech Recognition (ASR) systems. Recently, the Deep Neural Network (DNN) has become a successful model in the ASR field. In the training process of DNN-based methods, backpropagation of the error between the transcription and the corresponding annotated text is used to update and optimize the parameters. Thus, the parameters are influenced more by training samples with a large propagation error than by samples with a small one. In this paper, we define the samples with significant error as hard samples and try to improve the performance of the ASR system by adding many of them. Unfortunately, hard samples are sparse in the training data of the target domain, and manually labeling them is expensive. Therefore, we propose a hard-sample mining method based on enhanced deep multiple instance learning, which can find hard samples in unlabeled training data by using a small, manually labeled subset of the dataset in the target domain. We applied our method to an end-to-end ASR task and obtained the best performance. |
Tasks | Multiple Instance Learning, Speech Recognition |
Published | 2019-04-17 |
URL | http://arxiv.org/abs/1904.08031v1 |
http://arxiv.org/pdf/1904.08031v1.pdf | |
PWC | https://paperswithcode.com/paper/hard-sample-mining-for-the-improved |
Repo | |
Framework | |
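Stripped of the deep multiple-instance learning machinery, the underlying selection step is ranking unlabeled utterances by a hardness proxy and keeping the top-k for annotation and retraining. A minimal sketch with made-up loss values:

```python
import numpy as np

def mine_hard_samples(hardness, k):
    # Generic hard-example mining: treat a per-sample loss proxy (e.g. the
    # model's decoding loss, or 1 - confidence) as "hardness" and return the
    # indices of the k hardest samples, hardest first.
    return np.argsort(hardness)[::-1][:k]

losses = np.array([0.1, 2.3, 0.4, 1.7, 0.05])  # illustrative per-utterance losses
hard = mine_hard_samples(losses, 2)
```

The paper's contribution is estimating this hardness for unlabeled data from a small labeled subset; once estimated, the selection itself is this top-k step.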