Paper Group ANR 1134
Super-resolution of spatiotemporal event-stream image captured by the asynchronous temporal contrast vision sensor. The Coherent Point Drift for Clustered Point Sets. Towards Robust Neural Networks with Lipschitz Continuity. Discourse-Aware Neural Rewards for Coherent Text Generation. Gaussian Process Classification with Privileged Information by S …
Super-resolution of spatiotemporal event-stream image captured by the asynchronous temporal contrast vision sensor
Title | Super-resolution of spatiotemporal event-stream image captured by the asynchronous temporal contrast vision sensor |
Authors | Hongmin Li, Guoqi Li, Hanchao Liu, Luping Shi |
Abstract | Super-resolution (SR) is a useful technology to generate a high-resolution (HR) visual output from low-resolution (LR) visual inputs, overcoming the physical limitations of the cameras. However, SR has not been applied to enhance the resolution of spatiotemporal event-stream images captured by frame-free dynamic vision sensors (DVSs). SR of event-stream images is fundamentally different from existing frame-based schemes, since each pixel value of a DVS image is essentially an event sequence. In this work, a two-stage scheme is proposed to solve the SR problem of the spatiotemporal event-stream image. We use a nonhomogeneous Poisson point process to model the event sequence, and sample the events of each pixel by simulating a nonhomogeneous Poisson process according to the specified event number and rate function. Firstly, the event number of each pixel of the HR DVS image is determined with a sparse-signal-representation-based method to obtain the HR event-count map from that of the LR DVS recording. The rate function over the timeline of the point process of each HR pixel is computed using a spatiotemporal filter on the corresponding LR neighbor pixels. Secondly, the event sequence of each new pixel is generated with a thinning-based event sampling algorithm. Two metrics are proposed to assess the event-stream SR results. The proposed method is demonstrated by obtaining HR event-stream images from a series of DVS recordings. Results show that the upscaled HR event streams have perceptually higher spatial texture detail than the LR DVS images. Moreover, the temporal properties of the upscaled HR event streams match those of the original input very well. This work enables many potential applications of event-based vision. |
Tasks | Event-based vision, Super-Resolution |
Published | 2018-02-07 |
URL | http://arxiv.org/abs/1802.02398v2 |
http://arxiv.org/pdf/1802.02398v2.pdf | |
PWC | https://paperswithcode.com/paper/super-resolution-of-spatiotemporal-event |
Repo | |
Framework | |
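The abstract above describes sampling each HR pixel's event sequence by simulating a nonhomogeneous Poisson process with a thinning-based algorithm. A minimal sketch of such thinning-based sampling (the classic Lewis-Shedler scheme, not necessarily the authors' exact implementation) is given below; the rate function and time window are hypothetical placeholders. In the paper, the rate function of each HR pixel would come from the spatiotemporal filter over LR neighbor pixels, and the target event count from the sparse-representation stage.

```python
import numpy as np

def sample_nhpp_thinning(rate_fn, t_start, t_end, rate_max, rng=None):
    """Sample event times of a nonhomogeneous Poisson process by thinning.

    rate_fn:  callable mapping time t to an instantaneous rate lambda(t)
    rate_max: an upper bound on lambda(t) over [t_start, t_end]
    """
    rng = np.random.default_rng() if rng is None else rng
    events, t = [], t_start
    while True:
        # Propose the next candidate event from a homogeneous process at rate_max.
        t += rng.exponential(1.0 / rate_max)
        if t > t_end:
            break
        # Accept the candidate with probability lambda(t) / rate_max.
        if rng.uniform() < rate_fn(t) / rate_max:
            events.append(t)
    return np.asarray(events)

if __name__ == "__main__":
    # Hypothetical rate function: a burst of activity around t = 0.5 s.
    rate = lambda t: 20.0 + 80.0 * np.exp(-((t - 0.5) ** 2) / 0.01)
    spikes = sample_nhpp_thinning(rate, 0.0, 1.0, rate_max=100.0)
    print(f"{spikes.size} events sampled for this pixel")
```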
The Coherent Point Drift for Clustered Point Sets
Title | The Coherent Point Drift for Clustered Point Sets |
Authors | Dmitry Lachinov, Vadim Turlapov |
Abstract | The problem of non-rigid point set registration is a key problem for many computer vision tasks. In many cases the nature of the data or the capabilities of the point detection algorithms can give us some prior information on the point set distribution. In the non-rigid case this information can drastically improve registration results by limiting the number of possible solutions. In this paper we explore the use of prior information about point set clustering; such information can be obtained with preliminary segmentation. We extend the existing probabilistic framework to fit a two-level Gaussian mixture model and derive a closed-form solution for the maximization step of the EM algorithm. This enables us to improve the method's accuracy with almost no performance loss. We evaluate our approach, compare the Cluster Coherent Point Drift with other existing non-rigid point set registration methods, and show its advantages for digital medicine tasks, especially for heart template model personalization using the patient's medical data. |
Tasks | |
Published | 2018-12-14 |
URL | http://arxiv.org/abs/1812.05869v1 |
http://arxiv.org/pdf/1812.05869v1.pdf | |
PWC | https://paperswithcode.com/paper/the-coherent-point-drift-for-clustered-point |
Repo | |
Framework | |
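For context on the EM framework being extended above, the sketch below shows the standard Coherent Point Drift E-step: posterior responsibilities of the moving points (GMM centroids) for each fixed point, including the uniform outlier term. The clustered, closed-form M-step derived in the paper is not reproduced; the outlier weight `w` and the toy data are illustrative.

```python
import numpy as np

def cpd_estep(X, Y, sigma2, w=0.1):
    """E-step of Coherent Point Drift.

    X: (N, D) fixed point set, Y: (M, D) moving point set (GMM centroids),
    sigma2: isotropic GMM variance, w: outlier weight in [0, 1).
    Returns P of shape (M, N) with P[m, n] = p(centroid m | point x_n).
    """
    N, D = X.shape
    M = Y.shape[0]
    # Squared distances between every centroid y_m and every point x_n.
    d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)          # (M, N)
    num = np.exp(-d2 / (2.0 * sigma2))
    # Uniform outlier component, as in the original CPD formulation.
    c = (2.0 * np.pi * sigma2) ** (D / 2.0) * w / (1.0 - w) * M / N
    return num / (num.sum(axis=0, keepdims=True) + c)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 2))
    Y = X[:20] + 0.05 * rng.normal(size=(20, 2))   # roughly aligned subset
    P = cpd_estep(X, Y, sigma2=0.1)
    print(P.shape, P.sum(axis=0).max())            # column sums are at most 1
```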
Towards Robust Neural Networks with Lipschitz Continuity
Title | Towards Robust Neural Networks with Lipschitz Continuity |
Authors | Muhammad Usama, Dong Eui Chang |
Abstract | Deep neural networks have shown remarkable performance across a wide range of vision-based tasks, particularly due to the availability of large-scale datasets for training and better architectures. However, data seen in the real world are often affected by distortions that are not accounted for by the training datasets. In this paper, we address the challenge of robustness and stability of neural networks and propose a general training method that can be used to make existing neural network architectures more robust and stable to input visual perturbations while using only the available datasets for training. The proposed training method is convenient to use as it does not require data augmentation or changes in the network architecture. We provide a theoretical proof as well as empirical evidence for the efficiency of the proposed training method by performing experiments with existing neural network architectures, and demonstrate that the same architecture, when trained with the proposed method, performs better than when trained with the conventional training approach in the presence of noisy datasets. |
Tasks | Data Augmentation |
Published | 2018-11-22 |
URL | http://arxiv.org/abs/1811.09008v1 |
http://arxiv.org/pdf/1811.09008v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-robust-neural-networks-with-lipschitz |
Repo | |
Framework | |
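The abstract does not spell out the training procedure, so the sketch below only illustrates the quantity at stake: an upper bound on a feed-forward network's Lipschitz constant given by the product of its layers' spectral norms (valid when the activations are 1-Lipschitz, e.g. ReLU). This is a common proxy in the Lipschitz-robustness literature, not necessarily the regularizer used by the authors.

```python
import numpy as np

def lipschitz_upper_bound(weight_matrices):
    """Product of layer spectral norms: an upper bound on the Lipschitz
    constant of a chain of linear layers with 1-Lipschitz activations
    (e.g. ReLU), composed as x -> W_L(...W_2(W_1 x))."""
    bound = 1.0
    for W in weight_matrices:
        # Largest singular value = spectral norm of the layer.
        bound *= np.linalg.norm(W, ord=2)
    return bound

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layers = [rng.normal(scale=0.1, size=(64, 32)),
              rng.normal(scale=0.1, size=(32, 10))]
    print("Lipschitz upper bound:", lipschitz_upper_bound(layers))
```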
Discourse-Aware Neural Rewards for Coherent Text Generation
Title | Discourse-Aware Neural Rewards for Coherent Text Generation |
Authors | Antoine Bosselut, Asli Celikyilmaz, Xiaodong He, Jianfeng Gao, Po-Sen Huang, Yejin Choi |
Abstract | In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model to generate long, coherent text. In particular, we propose to learn neural rewards to model cross-sentence ordering as a means to approximate desired discourse structure. Empirical results demonstrate that a generator trained with the learned reward produces more coherent and less repetitive text than models trained with cross-entropy or with reinforcement learning with commonly used scores as rewards. |
Tasks | Sentence Ordering, Text Generation |
Published | 2018-05-10 |
URL | http://arxiv.org/abs/1805.03766v1 |
http://arxiv.org/pdf/1805.03766v1.pdf | |
PWC | https://paperswithcode.com/paper/discourse-aware-neural-rewards-for-coherent |
Repo | |
Framework | |
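The generator and learned discourse reward in the paper are neural text models; the toy sketch below only illustrates the surrounding reinforcement-learning step, a REINFORCE-style update in which an external reward scores a sampled sequence. The per-position categorical policy and the `discourse_reward` function are hypothetical stand-ins, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SEQ_LEN, LR = 5, 4, 0.1
policy_logits = np.zeros((SEQ_LEN, VOCAB))   # toy per-position policy

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def discourse_reward(tokens):
    # Hypothetical stand-in for a learned reward: prefer "ordered" sequences.
    return float(np.all(np.diff(tokens) >= 0))

baseline = 0.0
for step in range(2000):
    probs = softmax(policy_logits)                        # (SEQ_LEN, VOCAB)
    tokens = np.array([rng.choice(VOCAB, p=p) for p in probs])
    r = discourse_reward(tokens)
    baseline = 0.95 * baseline + 0.05 * r                 # running reward baseline
    # REINFORCE: grad of sum_t log pi(a_t) w.r.t. logits is (one-hot - probs),
    # scaled by the advantage (r - baseline).
    grad = -probs
    grad[np.arange(SEQ_LEN), tokens] += 1.0
    policy_logits += LR * (r - baseline) * grad

print("greedy sample:", softmax(policy_logits).argmax(axis=1))
```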
Gaussian Process Classification with Privileged Information by Soft-to-Hard Labeling Transfer
Title | Gaussian Process Classification with Privileged Information by Soft-to-Hard Labeling Transfer |
Authors | Ryosuke Kamesawa, Issei Sato, Masashi Sugiyama |
Abstract | Learning using privileged information is an attractive problem setting that helps many learning scenarios in the real world. A state-of-the-art method of Gaussian process classification (GPC) with privileged information is GPC+, which incorporates privileged information into a noise term of the likelihood. A drawback of GPC+ is that it requires numerical quadrature to calculate the posterior distribution of the latent function, which is extremely time-consuming. To overcome this limitation, we propose a novel classification method with privileged information based on Gaussian processes, called “soft-label-transferred Gaussian process (SLT-GP).” Our basic idea is that we construct another learning task of predicting soft labels (continuous values) obtained from privileged information and we perform transfer learning from this task to the target task of predicting hard labels. We derive a PAC-Bayesian bound of our proposed method, which justifies optimizing hyperparameters by the empirical Bayes method. We also experimentally show the usefulness of our proposed method compared with GPC and GPC+. |
Tasks | Gaussian Processes, Transfer Learning |
Published | 2018-02-12 |
URL | http://arxiv.org/abs/1802.03877v1 |
http://arxiv.org/pdf/1802.03877v1.pdf | |
PWC | https://paperswithcode.com/paper/gaussian-process-classification-with |
Repo | |
Framework | |
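The sketch below captures only the soft-to-hard transfer idea described above, in its simplest possible form: regress soft labels (standing in for values derived from privileged information) with a Gaussian process, then threshold the predictive mean to obtain hard labels. The actual SLT-GP model and its PAC-Bayesian hyperparameter selection are more involved; the data here are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical training data: features X, hard labels in {-1, +1}, and soft
# labels derived from privileged information (here: a noisy decision margin).
X = rng.normal(size=(100, 2))
y_hard = np.sign(X[:, 0] + 0.5 * X[:, 1])
soft = np.tanh(2.0 * (X[:, 0] + 0.5 * X[:, 1])) + 0.1 * rng.normal(size=100)

# Step 1: auxiliary task, regress the soft labels with a GP.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X, soft)

# Step 2: transfer to the hard-label task by thresholding the predictive mean.
X_test = rng.normal(size=(20, 2))
y_pred = np.sign(gp.predict(X_test))
print(y_pred)
```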
A Text Classification Application: Poet Detection from Poetry
Title | A Text Classification Application: Poet Detection from Poetry |
Authors | Durmus Ozkan Sahin, Oguz Emre Kural, Erdal Kilic, Armagan Karabina |
Abstract | With the widespread use of the internet, the amount of text data increases day by day, and poems are one example of this growing body of text. In this study, we aim to classify poems by poet. Firstly, a data set consisting of English-language poems by three different poets is constructed. Then, text categorization techniques are applied to it. The Chi-Square technique is used for feature selection. In addition, five different classification algorithms are tried: sequential minimal optimization, Naive Bayes, the C4.5 decision tree, Random Forest, and k-nearest neighbors. Although the classifiers showed very different results, a classification success rate of over 70% was achieved by the sequential minimal optimization technique. |
Tasks | Feature Selection, Text Categorization, Text Classification |
Published | 2018-10-24 |
URL | http://arxiv.org/abs/1810.11414v1 |
http://arxiv.org/pdf/1810.11414v1.pdf | |
PWC | https://paperswithcode.com/paper/a-text-classification-application-poet |
Repo | |
Framework | |
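A minimal sketch of the pipeline described above, using scikit-learn: bag-of-words features, chi-square feature selection, and several of the listed classifiers (LinearSVC stands in for sequential minimal optimization, and a CART decision tree for C4.5). The tiny in-line corpus is purely illustrative and not the study's data set.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Purely illustrative corpus: a few lines of poetry labeled by poet.
poems = ["shall i compare thee to a summer day",
         "the woods are lovely dark and deep",
         "rough winds do shake the darling buds of may",
         "and miles to go before i sleep",
         "so long lives this and this gives life to thee",
         "whose woods these are i think i know"]
poets = ["shakespeare", "frost", "shakespeare", "frost", "shakespeare", "frost"]

classifiers = {
    "SMO (LinearSVC)": LinearSVC(),
    "Naive Bayes": MultinomialNB(),
    "Decision tree": DecisionTreeClassifier(),
    "Random forest": RandomForestClassifier(n_estimators=50),
    "k-NN": KNeighborsClassifier(n_neighbors=3),
}

for name, clf in classifiers.items():
    # Bag of words -> chi-square selection of the 10 best terms -> classifier.
    pipe = make_pipeline(CountVectorizer(), SelectKBest(chi2, k=10), clf)
    pipe.fit(poems, poets)
    print(name, pipe.predict(["i think that i shall never see the summer day"]))
```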
Recent progress in semantic image segmentation
Title | Recent progress in semantic image segmentation |
Authors | Xiaolong Liu, Zhidong Deng, Yuhan Yang |
Abstract | Semantic image segmentation, which has become one of the key applications in image processing and computer vision, is used in multiple domains such as medicine and intelligent transportation. Many benchmark datasets have been released for researchers to verify their algorithms. Semantic segmentation has been studied for many years, and since the emergence of deep neural networks (DNNs) it has made tremendous progress. In this paper, we divide semantic image segmentation methods into two categories: traditional methods and recent DNN-based methods. Firstly, we briefly summarize the traditional methods as well as the datasets released for segmentation; then we comprehensively investigate recent DNN-based methods, described from eight aspects: fully convolutional networks, upsampling approaches, FCNs combined with CRF methods, dilated convolution approaches, progress in backbone networks, pyramid methods, multi-level feature and multi-stage methods, and supervised, weakly-supervised and unsupervised methods. Finally, conclusions in this area are drawn. |
Tasks | Semantic Segmentation |
Published | 2018-09-20 |
URL | http://arxiv.org/abs/1809.10198v1 |
http://arxiv.org/pdf/1809.10198v1.pdf | |
PWC | https://paperswithcode.com/paper/recent-progress-in-semantic-image |
Repo | |
Framework | |
Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation
Title | Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation |
Authors | Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, Ian Reid |
Abstract | In this paper, we propose to train convolutional neural networks (CNNs) with both binarized weights and activations, leading to quantized models specifically for mobile devices with limited power capacity and computation resources. Previous works on quantizing CNNs seek to approximate the floating-point information using a set of discrete values, which we call value approximation, but typically assume the same architecture as the full-precision networks. In this paper, however, we take a novel 'structure approximation' view of quantization: a different architecture may well be better for the best performance. In particular, we propose a 'network decomposition' strategy, named Group-Net, in which we divide the network into groups. In this way, each full-precision group can be effectively reconstructed by aggregating a set of homogeneous binary branches. In addition, we learn effective connections among groups to improve the representational capability. Moreover, the proposed Group-Net shows strong generalization to other tasks. For instance, we extend Group-Net to highly accurate semantic segmentation by embedding rich context into the binary structure. Experiments on both classification and semantic segmentation tasks demonstrate the superior performance of the proposed methods over various popular architectures. In particular, we outperform the previous best binary neural networks in terms of accuracy, with major computation savings. |
Tasks | Image Classification, Quantization, Semantic Segmentation |
Published | 2018-11-22 |
URL | http://arxiv.org/abs/1811.10413v2 |
http://arxiv.org/pdf/1811.10413v2.pdf | |
PWC | https://paperswithcode.com/paper/structured-binary-neural-networks-for |
Repo | |
Framework | |
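The sketch below illustrates the value-approximation step that Group-Net builds on (sign binarization of weights and activations, with a straight-through estimator for gradients) and only schematically indicates the structure-approximation idea of aggregating homogeneous binary branches. Layer sizes and the aggregation rule are illustrative assumptions, not the paper's exact block design.

```python
import numpy as np

def binarize(x):
    """Sign binarization used for weights/activations (values in {-1, +1})."""
    return np.where(x >= 0, 1.0, -1.0)

def binary_branch_forward(x, weights):
    """One binary branch: binarized input times binarized weights."""
    return binarize(x) @ binarize(weights)

def group_net_block(x, branch_weights):
    """Schematic Group-Net idea: a full-precision block is approximated by
    aggregating several homogeneous binary branches."""
    return sum(binary_branch_forward(x, W) for W in branch_weights) / len(branch_weights)

def ste_grad(upstream_grad, x, clip=1.0):
    """Straight-through estimator: pass the gradient through sign(),
    zeroed where |x| exceeds a clipping threshold."""
    return upstream_grad * (np.abs(x) <= clip)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 16))
    branches = [rng.normal(scale=0.1, size=(16, 8)) for _ in range(3)]
    y = group_net_block(x, branches)
    print(y.shape)
```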
Steady-state Non-Line-of-Sight Imaging
Title | Steady-state Non-Line-of-Sight Imaging |
Authors | Wenzheng Chen, Simon Daneau, Fahim Mannan, Felix Heide |
Abstract | Conventional intensity cameras recover objects in the direct line-of-sight of the camera, whereas occluded scene parts are considered lost in this process. Non-line-of-sight imaging (NLOS) aims at recovering these occluded objects by analyzing their indirect reflections on visible scene surfaces. Existing NLOS methods temporally probe the indirect light transport to unmix light paths based on their travel time, which mandates specialized instrumentation that suffers from low photon efficiency, high cost, and mechanical scanning. We depart from temporal probing and demonstrate steady-state NLOS imaging using conventional intensity sensors and continuous illumination. Instead of assuming perfectly isotropic scattering, the proposed method exploits directionality in the hidden surface reflectance, resulting in (small) spatial variation of their indirect reflections for varying illumination. To tackle the shape-dependence of these variations, we propose a trainable architecture which learns to map diffuse indirect reflections to scene reflectance using only synthetic training data. Relying on consumer color image sensors, with high fill factor, high quantum efficiency and low read-out noise, we demonstrate high-fidelity color NLOS imaging for scene configurations tackled before with picosecond time resolution. |
Tasks | |
Published | 2018-11-24 |
URL | http://arxiv.org/abs/1811.09910v2 |
http://arxiv.org/pdf/1811.09910v2.pdf | |
PWC | https://paperswithcode.com/paper/steady-state-non-line-of-sight-imaging |
Repo | |
Framework | |
An Adaptive Population Size Differential Evolution with Novel Mutation Strategy for Constrained Optimization
Title | An Adaptive Population Size Differential Evolution with Novel Mutation Strategy for Constrained Optimization |
Authors | Yuan Fu, Hu Wang, Meng-Zhu Yang |
Abstract | Differential evolution (DE) has competitive performance on constrained optimization problems (COPs), which aim to search for the global optimal solution without violating the constraints. Generally, researchers pay more attention to avoiding constraint violations than to improving the objective function value. To search for feasible solutions accurately, an adaptive population size method and an adaptive mutation strategy are proposed in this paper. The adaptive population method acts like a state switch that controls the exploring and exploiting states according to the progress of the feasible-solution search. The novel mutation strategy is designed to enhance the effect of the state switch based on the adaptive population size, which helps reduce constraint violations. Moreover, a mechanism based on multi-population competition and a more precise method of constraint control are adopted in the proposed algorithm. The proposed differential evolution algorithm, APDE-NS, is evaluated on the benchmark problems from the CEC2017 constrained real-parameter optimization suite. The experimental results show that the proposed method is competitive with other state-of-the-art algorithms. |
Tasks | |
Published | 2018-05-11 |
URL | http://arxiv.org/abs/1805.04217v1 |
http://arxiv.org/pdf/1805.04217v1.pdf | |
PWC | https://paperswithcode.com/paper/an-adaptive-population-size-differential |
Repo | |
Framework | |
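For readers unfamiliar with the baseline the abstract builds on, the sketch below implements the classic DE/rand/1/bin scheme (mutation, binomial crossover, greedy selection) on an unconstrained toy objective. The paper's adaptive population sizing, novel mutation strategy, and constraint-handling mechanisms are not reproduced here.

```python
import numpy as np

def de_rand_1(objective, bounds, pop_size=30, F=0.5, CR=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin optimizer (unconstrained, for illustration)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    low, high = np.array(bounds).T
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fitness = np.array([objective(x) for x in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct from i.
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), low, high)
            # Binomial crossover (at least one dimension taken from the mutant).
            cross = rng.uniform(size=dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection.
            f_trial = objective(trial)
            if f_trial < fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = fitness.argmin()
    return pop[best], fitness[best]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    x_best, f_best = de_rand_1(sphere, bounds=[(-5, 5)] * 5)
    print(x_best.round(3), f_best)
```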
Global Second-order Pooling Convolutional Networks
Title | Global Second-order Pooling Convolutional Networks |
Authors | Zilin Gao, Jiangtao Xie, Qilong Wang, Peihua Li |
Abstract | Deep convolutional networks (ConvNets) are fundamental to many vision tasks beyond large-scale visual recognition. As the primary goal of ConvNets is to characterize the complex boundaries of thousands of classes in a high-dimensional space, it is critical to learn higher-order representations to enhance non-linear modeling capability. Recently, Global Second-order Pooling (GSoP), plugged in at the end of networks, has attracted increasing attention, achieving much better performance than classical, first-order networks in a variety of vision tasks. However, how to effectively introduce higher-order representations in earlier layers to improve the non-linear capability of ConvNets is still an open problem. In this paper, we propose a novel network model that introduces GSoP from lower to higher layers to exploit holistic image information throughout the network. Given an input 3D tensor output by a previous convolutional layer, we perform GSoP to obtain a covariance matrix which, after a nonlinear transformation, is used for tensor scaling along the channel dimension. Similarly, we can perform GSoP along the spatial dimension for tensor scaling as well. In this way, we can make full use of the second-order statistics of the holistic image throughout the network. The proposed networks are thoroughly evaluated on large-scale ImageNet-1K, and experiments show that they non-trivially outperform their counterparts while achieving state-of-the-art results. |
Tasks | Object Recognition |
Published | 2018-11-29 |
URL | http://arxiv.org/abs/1811.12006v2 |
http://arxiv.org/pdf/1811.12006v2.pdf | |
PWC | https://paperswithcode.com/paper/global-second-order-pooling-convolutional |
Repo | |
Framework | |
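The sketch below illustrates the core operation described in the abstract: compute a channel covariance matrix from a feature tensor and, after a simple nonlinear transformation, use it to rescale the tensor along the channel dimension. It is a schematic of the GSoP idea; the actual block also involves learned convolutional and fully connected layers around the covariance, which are replaced here by a plain sigmoid gate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gsop_channel_scaling(feature_map):
    """feature_map: (C, H, W) tensor from some convolutional layer.
    Returns the channel-rescaled tensor of the same shape."""
    C, H, W = feature_map.shape
    X = feature_map.reshape(C, H * W)                    # one row per channel
    X = X - X.mean(axis=1, keepdims=True)
    cov = (X @ X.T) / (H * W - 1)                        # (C, C) channel covariance
    # Simple nonlinear transformation of the second-order statistics into
    # per-channel gates (a stand-in for the block's learned layers).
    gates = sigmoid(cov.mean(axis=1))                    # (C,)
    return feature_map * gates[:, None, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fmap = rng.normal(size=(8, 14, 14))
    out = gsop_channel_scaling(fmap)
    print(out.shape)
```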
Retrospective correction of Rigid and Non-Rigid MR motion artifacts using GANs
Title | Retrospective correction of Rigid and Non-Rigid MR motion artifacts using GANs |
Authors | Karim Armanious, Sergios Gatidis, Konstantin Nikolaou, Bin Yang, Thomas Küstner |
Abstract | Motion artifacts are a primary source of magnetic resonance (MR) image quality deterioration, with strong repercussions on diagnostic performance. Currently, MR motion correction is carried out either prospectively, with the help of motion tracking systems, or retrospectively, mainly using computationally expensive iterative algorithms. In this paper, we utilize a new adversarial framework, titled MedGAN, for the joint retrospective correction of rigid and non-rigid motion artifacts in different body regions, without the need for a reference image. MedGAN utilizes a unique combination of non-adversarial losses and a new generator architecture to capture the textures and fine-detailed structures of the desired artifact-free MR images. Quantitative and qualitative comparisons with other adversarial techniques illustrate the performance of the proposed model. |
Tasks | |
Published | 2018-09-17 |
URL | http://arxiv.org/abs/1809.06276v2 |
http://arxiv.org/pdf/1809.06276v2.pdf | |
PWC | https://paperswithcode.com/paper/retrospective-correction-of-rigid-and-non |
Repo | |
Framework | |
A Trace Lasso Regularized L1-norm Graph Cut for Highly Correlated Noisy Hyperspectral Image
Title | A Trace Lasso Regularized L1-norm Graph Cut for Highly Correlated Noisy Hyperspectral Image |
Authors | Ramanarayan Mohanty, S L Happy, Nilesh Suthar, Aurobinda Routray |
Abstract | This work proposes an adaptive trace-lasso-regularized L1-norm graph cut method for dimensionality reduction of hyperspectral images, called 'Trace Lasso-L1 Graph Cut' (TL-L1GC). The underlying idea of this method is to generate the optimal projection matrix by considering both the sparsity and the correlation of the data samples. The conventional L2-norm used in the objective function is sensitive to noise and outliers; therefore, in this work the L1-norm is utilized as a robust alternative. Moreover, to further improve the results, we use a trace lasso penalty function with the L1GC method. It adaptively balances the L2-norm and L1-norm simultaneously by considering the data correlation along with the sparsity. We obtain the optimal projection matrix by maximizing the ratio of between-class dispersion to within-class dispersion using the L1-norm with trace lasso as the penalty. Furthermore, an iterative procedure for this TL-L1GC method is proposed to solve the optimization problem. The effectiveness of the proposed method is evaluated on two benchmark HSI datasets. |
Tasks | Dimensionality Reduction |
Published | 2018-07-22 |
URL | http://arxiv.org/abs/1807.10602v1 |
http://arxiv.org/pdf/1807.10602v1.pdf | |
PWC | https://paperswithcode.com/paper/a-trace-lasso-regularized-l1-norm-graph-cut |
Repo | |
Framework | |
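For readers unfamiliar with the penalty in the title: the trace lasso of a coefficient vector w with respect to a data matrix X is the nuclear norm of X diag(w), which behaves like the L1 norm when the columns of X are uncorrelated and like the L2 norm when they are strongly correlated. The small numpy illustration below shows only this penalty; the full TL-L1GC objective and its iterative solver are not reproduced.

```python
import numpy as np

def trace_lasso(X, w):
    """Trace lasso penalty: nuclear norm of X @ diag(w),
    with the columns of X normalized to unit length."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
    return np.linalg.norm(Xn @ np.diag(w), ord="nuc")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)
    X_uncorr = np.eye(4)                                   # orthogonal columns
    X_corr = np.tile(rng.normal(size=(4, 1)), (1, 4))      # identical columns
    print("uncorrelated -> close to L1:", trace_lasso(X_uncorr, w), np.abs(w).sum())
    print("correlated   -> close to L2:", trace_lasso(X_corr, w), np.linalg.norm(w))
```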
BomJi at SemEval-2018 Task 10: Combining Vector-, Pattern- and Graph-based Information to Identify Discriminative Attributes
Title | BomJi at SemEval-2018 Task 10: Combining Vector-, Pattern- and Graph-based Information to Identify Discriminative Attributes |
Authors | Enrico Santus, Chris Biemann, Emmanuele Chersoni |
Abstract | This paper describes BomJi, a supervised system for capturing discriminative attributes in word pairs (e.g. yellow as discriminative for banana over watermelon). The system relies on an XGB classifier trained on carefully engineered graph-, pattern- and word-embedding-based features. It participated in the SemEval-2018 Task 10 on Capturing Discriminative Attributes, achieving an F1 score of 0.73 and ranking 2nd out of 26 participant systems. |
Tasks | |
Published | 2018-04-30 |
URL | http://arxiv.org/abs/1804.11251v1 |
http://arxiv.org/pdf/1804.11251v1.pdf | |
PWC | https://paperswithcode.com/paper/bomji-at-semeval-2018-task-10-combining |
Repo | |
Framework | |
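The abstract mentions an XGB classifier trained on engineered vector-, pattern- and graph-based features. A minimal sketch of that final classification step follows, using the xgboost Python package on a hypothetical pre-computed feature matrix; the feature engineering itself, which is where the system's contribution lies, is not reproduced.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Hypothetical pre-computed features for word pairs (pivot, comparison, attribute):
# e.g. embedding similarities, pattern counts, graph statistics.
X_train = rng.normal(size=(500, 12))
y_train = rng.integers(0, 2, size=500)      # 1 = attribute is discriminative
X_test = rng.normal(size=(50, 12))

# Gradient-boosted tree classifier over the engineered features.
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_train, y_train)
print(clf.predict(X_test)[:10])
```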
Multi-Output Convolution Spectral Mixture for Gaussian Processes
Title | Multi-Output Convolution Spectral Mixture for Gaussian Processes |
Authors | Kai Chen, Perry Groot, Jinsong Chen, Elena Marchiori |
Abstract | Multi-output Gaussian processes (MOGPs) have recently been extended with spectral mixture kernels, which enable expressive pattern extrapolation with a strong interpretation. In particular, the Multi-Output Spectral Mixture kernel (MOSM) is a recent, powerful state-of-the-art method. However, MOSM cannot reduce to the ordinary spectral mixture kernel (SM) when using a single channel. Moreover, when the spectral densities of different channels are either very close to or very far from each other in the frequency domain, MOSM generates unreasonable scale effects on cross weights, which produces an incorrect description of the channel correlation structure. In this paper, we tackle these drawbacks and introduce a principled multi-output convolution spectral mixture kernel (MOCSM) framework. In our framework, we model channel dependencies through convolution of time- and phase-delayed spectral mixtures between different channels. Results of extensive experiments on synthetic and real datasets demonstrate the advantages of MOCSM and its state-of-the-art performance. |
Tasks | Gaussian Processes |
Published | 2018-08-07 |
URL | http://arxiv.org/abs/1808.02266v6 |
http://arxiv.org/pdf/1808.02266v6.pdf | |
PWC | https://paperswithcode.com/paper/multi-output-convolution-spectral-mixture-for |
Repo | |
Framework | |
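As background for the single-channel kernel that MOSM and MOCSM generalize, the sketch below evaluates the standard spectral mixture kernel of Wilson and Adams, k(tau) = sum_q w_q exp(-2 pi^2 tau^2 v_q) cos(2 pi mu_q tau). The multi-output, cross-channel convolution construction of the paper is not reproduced; parameter values are illustrative.

```python
import numpy as np

def spectral_mixture_kernel(x1, x2, weights, means, variances):
    """Single-output spectral mixture kernel for 1-D inputs.

    weights, means, variances: sequences of length Q (one entry per mixture
    component); means are spectral frequencies, variances their bandwidths.
    """
    tau = x1[:, None] - x2[None, :]                       # pairwise differences
    k = np.zeros_like(tau)
    for w, mu, v in zip(weights, means, variances):
        k += w * np.exp(-2.0 * np.pi ** 2 * tau ** 2 * v) * np.cos(2.0 * np.pi * mu * tau)
    return k

if __name__ == "__main__":
    x = np.linspace(0.0, 5.0, 50)
    K = spectral_mixture_kernel(x, x, weights=[1.0, 0.5], means=[0.5, 2.0],
                                variances=[0.05, 0.1])
    print(K.shape, np.allclose(K, K.T))   # symmetric Gram matrix
```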