Paper Group ANR 1108
Bayesian optimization in ab initio nuclear physics
Title | Bayesian optimization in ab initio nuclear physics |
Authors | A. Ekström, C. Forssén, C. Dimitrakakis, D. Dubhashi, H. T. Johansson, A. S. Muhammad, H. Salomonsson, A. Schliep |
Abstract | Theoretical models of the strong nuclear interaction contain unknown coupling constants (parameters) that must be determined using a pool of calibration data. In cases where the models are complex, leading to time-consuming calculations, it is particularly challenging to systematically search the corresponding parameter domain for the best fit to the data. In this paper, we explore the prospect of applying Bayesian optimization to constrain the coupling constants in chiral effective field theory descriptions of the nuclear interaction. We find that Bayesian optimization performs rather well with low-dimensional parameter domains and foresee that it can be particularly useful for optimization of a smaller set of coupling constants. A specific example could be the determination of leading three-nucleon forces using data from finite nuclei or three-nucleon scattering experiments. |
Tasks | Calibration |
Published | 2019-02-03 |
URL | http://arxiv.org/abs/1902.00941v1 |
http://arxiv.org/pdf/1902.00941v1.pdf | |
PWC | https://paperswithcode.com/paper/bayesian-optimization-in-ab-initio-nuclear |
Repo | |
Framework | |
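The entry above applies Bayesian optimization to a low-dimensional calibration problem. Below is a minimal, self-contained sketch of that general recipe (Gaussian-process surrogate plus expected-improvement acquisition) with a cheap two-parameter stand-in objective; the `chi2` function, kernel length scale, and parameter bounds are illustrative assumptions, not the paper's chiral-EFT setup.

```python
# Minimal Bayesian optimization sketch: GP surrogate + expected improvement.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def chi2(c):
    # Hypothetical cheap 2-parameter objective standing in for the expensive
    # chi^2 between model predictions and calibration data.
    return (c[0] - 0.3) ** 2 + 2.0 * (c[1] + 0.1) ** 2

def rbf(A, B, ell=0.4, sig=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sig ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs)) - (v ** 2).sum(0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

bounds = np.array([[-1.0, 1.0], [-1.0, 1.0]])             # parameter domain
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))  # initial design
y = np.array([chi2(x) for x in X])

for _ in range(20):
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    mu, sd = gp_posterior(X, y, cand)
    best = y.min()
    z = (best - mu) / sd
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)     # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, chi2(x_next))

print("best parameters:", X[np.argmin(y)], "chi2:", y.min())
```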
“Mask and Infill” : Applying Masked Language Model to Sentiment Transfer
Title | “Mask and Infill” : Applying Masked Language Model to Sentiment Transfer |
Authors | Xing Wu, Tao Zhang, Liangjun Zang, Jizhong Han, Songlin Hu |
Abstract | This paper focuses on the task of sentiment transfer on non-parallel text, which modifies sentiment attributes (e.g., positive or negative) of sentences while preserving their attribute-independent content. Due to the limited capability of the RNN-based encoder-decoder structure to capture deep and long-range dependencies among words, previous works can hardly generate satisfactory sentences from scratch. When humans convert the sentiment attribute of a sentence, a simple but effective approach is to only replace the original sentimental tokens in the sentence with target sentimental expressions, instead of building a new sentence from scratch. Such a process is very similar to the task of Text Infilling or Cloze, which could be handled by a deep bidirectional Masked Language Model (e.g. BERT). So we propose a two-step approach, “Mask and Infill”. In the mask step, we separate style from content by masking the positions of sentimental tokens. In the infill step, we retrofit MLM to Attribute Conditional MLM, to infill the masked positions by predicting words or phrases conditioned on the context and target sentiment. We evaluate our model on two review datasets with quantitative, qualitative, and human evaluations. Experimental results demonstrate that our models improve state-of-the-art performance. |
Tasks | Language Modelling, Text Infilling |
Published | 2019-08-21 |
URL | https://arxiv.org/abs/1908.08039v1 |
https://arxiv.org/pdf/1908.08039v1.pdf | |
PWC | https://paperswithcode.com/paper/mask-and-infill-applying-masked-language |
Repo | |
Framework | |
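A minimal sketch of the mask-then-infill idea from the entry above, assuming a tiny hand-made sentiment lexicon and an off-the-shelf BERT fill-mask pipeline from Hugging Face transformers. The paper retrofits the MLM into an Attribute Conditional MLM; here the target sentiment is only encouraged by filtering candidate infills with the same lexicon, which is a simplification.

```python
# Mask step: hide sentiment tokens. Infill step: let a masked LM fill them in.
from transformers import pipeline

NEGATIVE = {"terrible", "awful", "boring", "bad"}
POSITIVE = {"great", "wonderful", "amazing", "good"}

unmasker = pipeline("fill-mask", model="bert-base-uncased")

def transfer_to_positive(sentence):
    tokens = sentence.split()
    # Mask step: replace sentimental tokens with [MASK]; this sketch assumes
    # a single sentiment token per sentence.
    masked = " ".join(
        unmasker.tokenizer.mask_token if t.lower().strip(".,") in NEGATIVE else t
        for t in tokens
    )
    # Infill step: take the highest-scoring candidate with the target polarity.
    candidates = unmasker(masked, top_k=20)
    choice = next((c for c in candidates if c["token_str"].strip() in POSITIVE),
                  candidates[0])
    return choice["sequence"]

print(transfer_to_positive("the food was terrible ."))
```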
A Gaussian process latent force model for joint input-state estimation in linear structural systems
Title | A Gaussian process latent force model for joint input-state estimation in linear structural systems |
Authors | Rajdip Nayek, Souvik Chakraborty, Sriram Narasimhan |
Abstract | The problem of combined state and input estimation of linear structural systems based on measured responses and a priori knowledge of the structural model is considered. A novel methodology using Gaussian process latent force models is proposed to tackle the problem in a stochastic setting. Gaussian process latent force models (GPLFMs) are hybrid models that combine differential equations representing a physical system with data-driven non-parametric Gaussian process models. In this work, the unknown input forces acting on a structure are modelled as Gaussian processes with some chosen covariance functions which are combined with the mechanistic differential equation representing the structure to construct a GPLFM. The GPLFM is then conveniently formulated as an augmented stochastic state-space model with additional states representing the latent force components, and the joint input and state inference of the resulting model is implemented using a Kalman filter. The augmented state-space model of the GPLFM is shown to be a generalization of the class of input-augmented state-space models, is proven observable, and is more numerically stable than conventional augmented formulations. The hyperparameters governing the covariance functions are estimated using maximum likelihood optimization based on the observed data, thus overcoming the need for manual tuning of the hyperparameters by trial-and-error. To assess the performance of the proposed GPLFM method, several cases of state and input estimation are demonstrated using numerical simulations on a 10-dof shear building and a 76-storey ASCE benchmark office tower. Results obtained indicate the superior performance of the proposed approach over conventional Kalman-filter-based approaches. |
Tasks | Gaussian Processes |
Published | 2019-03-29 |
URL | http://arxiv.org/abs/1904.00093v2 |
http://arxiv.org/pdf/1904.00093v2.pdf | |
PWC | https://paperswithcode.com/paper/a-gaussian-process-latent-force-model-for |
Repo | |
Framework | |
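A minimal sketch of the augmented state-space idea behind the GPLFM entry above: a single-degree-of-freedom oscillator whose unknown input force is modelled as an Ornstein-Uhlenbeck (Matérn-1/2) Gaussian process, appended to the state vector, and estimated jointly with a standard Kalman filter. The structural parameters, noise levels, and the crude process-noise discretisation are illustrative assumptions; the paper treats multi-storey structures and estimates the GP hyperparameters by maximum likelihood.

```python
# Joint input-state estimation with a force-augmented Kalman filter.
import numpy as np
from scipy.linalg import expm

m, c, k = 1.0, 0.4, 50.0          # mass, damping, stiffness
lam, qf = 1.0, 5.0                # OU decay rate and force process noise level
dt, n = 0.01, 2000

# Augmented continuous-time model: state z = [displacement, velocity, force].
A = np.array([[0.0, 1.0, 0.0],
              [-k / m, -c / m, 1.0 / m],
              [0.0, 0.0, -lam]])
Ad = expm(A * dt)
Qd = np.diag([1e-12, 1e-12, qf * dt])   # crude discretisation of process noise
H = np.array([[1.0, 0.0, 0.0]])         # displacement measurement
R = np.array([[1e-6]])

# Simulate a "true" system driven by a slowly varying force, then add noise.
rng = np.random.default_rng(1)
t = np.arange(n) * dt
f_true = np.sin(2 * np.pi * 0.5 * t)
z, ys = np.zeros(3), []
for i in range(n):
    z = Ad @ z
    z[2] = f_true[i]                    # inject the true (unknown to the filter) force
    ys.append(z[0] + 1e-3 * rng.standard_normal())

# Kalman filter on the augmented model: jointly estimates state and input force.
zh, P, f_est = np.zeros(3), np.eye(3), []
for y in ys:
    zh, P = Ad @ zh, Ad @ P @ Ad.T + Qd                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                      # Kalman gain
    zh = zh + K @ (y - H @ zh)                          # update
    P = (np.eye(3) - K @ H) @ P
    f_est.append(zh[2])

print("force RMSE:", np.sqrt(np.mean((np.array(f_est) - f_true) ** 2)))
```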
Deep Learning based HEp-2 Image Classification: A Comprehensive Review
Title | Deep Learning based HEp-2 Image Classification: A Comprehensive Review |
Authors | Saimunur Rahman, Lei Wang, Changming Sun, Luping Zhou |
Abstract | Classification of HEp-2 cell patterns plays a significant role in the indirect immunofluorescence test for identifying autoimmune diseases in the human body. Many automatic HEp-2 cell classification methods have been proposed in recent years, amongst which deep learning based methods have shown impressive performance. This paper provides a comprehensive review of the existing deep learning based HEp-2 cell image classification methods. These methods perform HEp-2 image classification at two levels, namely, cell-level and specimen-level. Both levels are covered in this review. In each level, the methods are organized with a deep network usage based taxonomy. The core idea, notable achievements, and key advantages and weaknesses of each method are critically analyzed. Furthermore, a concise review of the existing HEp-2 datasets that are commonly used in the literature is given. The paper ends with an overview of the current state of the art and a discussion on novel opportunities and future research directions in this field. It is hoped that this paper will give readers a comprehensive reference of this novel, challenging, and thriving field. |
Tasks | Image Classification |
Published | 2019-11-20 |
URL | https://arxiv.org/abs/1911.08916v1 |
https://arxiv.org/pdf/1911.08916v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-learning-based-hep-2-image |
Repo | |
Framework | |
Multi-layered Spiking Neural Network with Target Timestamp Threshold Adaptation and STDP
Title | Multi-layered Spiking Neural Network with Target Timestamp Threshold Adaptation and STDP |
Authors | Pierre Falez, Pierre Tirilly, Ioan Marius Bilasco, Philippe Devienne, Pierre Boulet |
Abstract | Spiking neural networks (SNNs) are good candidates to produce ultra-energy-efficient hardware. However, the performance of these models is currently behind traditional methods. Introducing multi-layered SNNs is a promising way to reduce this gap. We propose in this paper a new threshold adaptation system which uses a timestamp objective at which neurons should fire. We show that our method leads to state-of-the-art classification rates on the MNIST dataset (98.60%) and the Faces/Motorbikes dataset (99.46%) with an unsupervised SNN followed by a linear SVM. We also investigate the sparsity level of the network by testing different inhibition policies and STDP rules. |
Tasks | |
Published | 2019-04-03 |
URL | http://arxiv.org/abs/1904.01908v1 |
http://arxiv.org/pdf/1904.01908v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-layered-spiking-neural-network-with |
Repo | |
Framework | |
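A minimal sketch in the spirit of the target-timestamp threshold adaptation described above: a leaky integrate-and-fire neuron whose threshold is nudged so that its first spike moves toward a target time step. The specific update rule, leak, and input statistics are illustrative assumptions, not the paper's exact mechanism, and no STDP is shown.

```python
# Threshold adaptation toward a target first-spike timestamp (illustrative rule).
import numpy as np

rng = np.random.default_rng(0)
T, lr, t_target = 100, 0.05, 60     # time steps, learning rate, target timestamp
threshold, leak = 5.0, 0.95

def first_spike_time(inputs, threshold):
    v = 0.0
    for t, x in enumerate(inputs):
        v = leak * v + x             # leaky integrate-and-fire membrane potential
        if v >= threshold:
            return t
    return None                      # no spike emitted

for epoch in range(200):
    inputs = rng.random(T) * 0.3     # random input current
    t_fire = first_spike_time(inputs, threshold)
    if t_fire is None:
        threshold -= lr              # never fired: lower the threshold
    else:
        # Fired too early -> raise threshold; too late -> lower it.
        threshold += lr * np.sign(t_target - t_fire)

print("adapted threshold:", round(threshold, 3))
```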
ShapeGlot: Learning Language for Shape Differentiation
Title | ShapeGlot: Learning Language for Shape Differentiation |
Authors | Panos Achlioptas, Judy Fan, Robert X. D. Hawkins, Noah D. Goodman, Leonidas J. Guibas |
Abstract | In this work we explore how fine-grained differences between the shapes of common objects are expressed in language, grounded on images and 3D models of the objects. We first build a large scale, carefully controlled dataset of human utterances that each refers to a 2D rendering of a 3D CAD model so as to distinguish it from a set of shape-wise similar alternatives. Using this dataset, we develop neural language understanding (listening) and production (speaking) models that vary in their grounding (pure 3D forms via point-clouds vs. rendered 2D images), the degree of pragmatic reasoning captured (e.g. speakers that reason about a listener or not), and the neural architecture (e.g. with or without attention). We find models that perform well with both synthetic and human partners, and with held out utterances and objects. We also find that these models are amenable to zero-shot transfer learning to novel object classes (e.g. transfer from training on chairs to testing on lamps), as well as to real-world images drawn from furniture catalogs. Lesion studies indicate that the neural listeners depend heavily on part-related words and associate these words correctly with visual parts of objects (without any explicit network training on object parts), and that transfer to novel classes is most successful when known part-words are available. This work illustrates a practical approach to language grounding, and provides a case study in the relationship between object shape and linguistic structure when it comes to object differentiation. |
Tasks | Transfer Learning |
Published | 2019-05-08 |
URL | https://arxiv.org/abs/1905.02925v1 |
https://arxiv.org/pdf/1905.02925v1.pdf | |
PWC | https://paperswithcode.com/paper/shapeglot-learning-language-for-shape |
Repo | |
Framework | |
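A minimal sketch of a discriminative neural listener of the kind described above: an utterance encoder scores each candidate shape embedding, and training maximizes the probability of the referred-to object. The GRU text encoder, dot-product scorer, feature sizes, and random data are assumptions for illustration only.

```python
# Neural "listener": score candidate shape embeddings against an utterance.
import torch
import torch.nn as nn

class Listener(nn.Module):
    def __init__(self, vocab=1000, d_text=64, d_shape=128, d_hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_text)
        self.rnn = nn.GRU(d_text, d_hidden, batch_first=True)
        self.shape_proj = nn.Linear(d_shape, d_hidden)

    def forward(self, tokens, shapes):
        # tokens: (B, L) word ids; shapes: (B, N, d_shape) candidate embeddings.
        _, h = self.rnn(self.embed(tokens))       # h: (1, B, d_hidden)
        u = h.squeeze(0)                          # utterance encoding (B, d_hidden)
        s = self.shape_proj(shapes)               # (B, N, d_hidden)
        return (s * u.unsqueeze(1)).sum(-1)       # (B, N) dot-product logits

model = Listener()
tokens = torch.randint(0, 1000, (8, 12))          # toy utterances
shapes = torch.randn(8, 3, 128)                   # 3 candidate shapes per trial
target = torch.randint(0, 3, (8,))                # index of the referred object
logits = model(tokens, shapes)
loss = nn.functional.cross_entropy(logits, target)
loss.backward()
print(logits.shape, float(loss))
```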
Probabilistic Energy Forecasting using Quantile Regressions based on a new Nearest Neighbors Quantile Filter
Title | Probabilistic Energy Forecasting using Quantile Regressions based on a new Nearest Neighbors Quantile Filter |
Authors | Jorge Ángel González Ordiano, Lutz Gröll, Ralf Mikut, Veit Hagenmeyer |
Abstract | Parametric quantile regressions are a useful tool for creating probabilistic energy forecasts. Nonetheless, since classical quantile regressions are trained using a non-differentiable cost function, their creation using complex data mining techniques (e.g., artificial neural networks) may be complicated. This article presents a method that uses a new nearest neighbors quantile filter to obtain quantile regressions independently of the utilized data mining technique and without the non-differentiable cost function. Thereafter, a validation of the presented method using the dataset of the Global Energy Forecasting Competition of 2014 is undertaken. The results show that the presented method is able to solve the competition’s task with accuracy and runtime similar to those of the competition’s winner, while requiring a much less powerful computer. This property may be relevant in an online forecasting service in which probabilistic forecasts must be computed quickly on less powerful machines. |
Tasks | |
Published | 2019-03-18 |
URL | http://arxiv.org/abs/1903.07390v1 |
http://arxiv.org/pdf/1903.07390v1.pdf | |
PWC | https://paperswithcode.com/paper/probabilistic-energy-forecasting-using |
Repo | |
Framework | |
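A minimal sketch of the nearest-neighbors quantile filter described above: each training target is replaced by an empirical quantile of the targets of its k nearest neighbours in feature space, after which any standard regressor trained with a differentiable loss (here an MLP with squared error) yields an approximate quantile regression. The value of k, the quantile level, and the toy data are illustrative assumptions.

```python
# Nearest-neighbors quantile filter + ordinary regression = quantile regression.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(2000, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * rng.standard_normal(2000)  # noisy "load"

def quantile_filter(X, y, q, k=100):
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, idx = nn.kneighbors(X)
    return np.quantile(y[idx], q, axis=1)   # empirical q-quantile per sample

q = 0.9
y_q = quantile_filter(X, y, q)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y_q)                            # standard differentiable training

# Sanity check: roughly a fraction q of observations should fall below the curve.
coverage = np.mean(y <= model.predict(X))
print(f"empirical coverage at q={q}: {coverage:.2f}")
```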
Deep learning vessel segmentation and quantification of the foveal avascular zone using commercial and prototype OCT-A platforms
Title | Deep learning vessel segmentation and quantification of the foveal avascular zone using commercial and prototype OCT-A platforms |
Authors | Morgan Heisler, Forson Chan, Zaid Mammo, Chandrakumar Balaratnasingam, Pavle Prentasic, Gavin Docherty, MyeongJin Ju, Sanjeeva Rajapakse, Sieun Lee, Andrew Merkur, Andrew Kirker, David Albiani, David Maberley, K. Bailey Freund, Mirza Faisal Beg, Sven Loncaric, Marinko V. Sarunic, Eduardo V. Navajas |
Abstract | Automatic quantification of perifoveal vessel densities in optical coherence tomography angiography (OCT-A) images faces challenges such as variable intra- and inter-image signal-to-noise ratios, projection artefacts from outer vasculature layers, and motion artefacts. This study demonstrates the utility of deep neural networks for automatic quantification of foveal avascular zone (FAZ) parameters and perifoveal vessel density of OCT-A images in healthy and diabetic eyes. OCT-A images of the foveal region were acquired using three OCT-A systems: a 1060 nm Swept Source (SS)-OCT prototype, RTVue XR Avanti (Optovue Inc., Fremont, CA), and the ZEISS Angioplex (Carl Zeiss Meditec, Dublin, CA). Automated segmentation was then performed using a deep neural network. Four FAZ morphometric parameters (area, min/max diameter, and eccentricity) and perifoveal vessel density were used as outcome measures. The accuracy, sensitivity and specificity of the DNN vessel segmentations were comparable across all three device platforms. No significant difference between the means of the measurements from automated and manual segmentations was found for any of the outcome measures on any system. The intraclass correlation coefficient (ICC) was also good (> 0.51) for all measurements. Automated deep learning vessel segmentation of OCT-A may be suitable for both commercial and research purposes for better quantification of the retinal circulation. |
Tasks | |
Published | 2019-09-25 |
URL | https://arxiv.org/abs/1909.11289v1 |
https://arxiv.org/pdf/1909.11289v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-learning-vessel-segmentation-and |
Repo | |
Framework | |
Diagnostic checking in FARIMA models with uncorrelated but non-independent error terms
Title | Diagnostic checking in FARIMA models with uncorrelated but non-independent error terms |
Authors | Yacouba Boubacar Maïnassara, Youssef Esstafa, Bruno Saussereau |
Abstract | This work considers the problem of modified portmanteau tests for testing the adequacy of FARIMA models under the assumption that the errors are uncorrelated but not necessarily independent (i.e. weak FARIMA). We first study the joint distribution of the least squares estimator and the noise empirical autocovariances. We then derive the asymptotic distribution of residual empirical autocovariances and autocorrelations. We deduce the asymptotic distribution of the Ljung-Box (or Box-Pierce) modified portmanteau statistics for weak FARIMA models. We also propose another method based on a self-normalization approach to test the adequacy of FARIMA models. Finally, some simulation studies are presented to corroborate our theoretical work. An application to the Standard & Poor’s 500 and Nikkei returns also illustrates the practical relevance of our theoretical results. AMS 2000 subject classifications: Primary 62M10, 62F03, 62F05; secondary 91B84, 62P05. |
Tasks | |
Published | 2019-11-29 |
URL | https://arxiv.org/abs/1912.00013v1 |
https://arxiv.org/pdf/1912.00013v1.pdf | |
PWC | https://paperswithcode.com/paper/diagnostic-checking-in-farima-models-with |
Repo | |
Framework | |
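For reference, the modified portmanteau tests above build on the classical Ljung-Box statistic, computed from the first m residual autocorrelations of the fitted FARIMA model; the paper's contribution is the corrected asymptotic distribution of such statistics when the errors are uncorrelated but dependent.

```latex
% Classical Ljung--Box statistic for a series of length n, using the first m
% residual autocorrelations \hat{\rho}(h). Under weak (uncorrelated but
% non-independent) FARIMA errors its limiting law is no longer the standard
% chi-square, which is what the paper above derives and corrects for.
Q^{\mathrm{LB}}_m \;=\; n(n+2) \sum_{h=1}^{m} \frac{\hat{\rho}^{\,2}(h)}{n-h}
```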
Predicting Parkinson’s Disease using Latent Information extracted from Deep Neural Networks
Title | Predicting Parkinson’s Disease using Latent Information extracted from Deep Neural Networks |
Authors | Ilianna Kollia, Andreas-Georgios Stafylopatis, Stefanos Kollias |
Abstract | This paper presents a new method for medical diagnosis of neurodegenerative diseases, such as Parkinson’s, by extracting and using latent information from trained deep convolutional or convolutional-recurrent neural networks (DNNs). In particular, our approach adopts a combination of transfer learning, k-means clustering and k-Nearest Neighbour classification of deep neural network learned representations to provide enriched prediction of the disease based on MRI and/or DaT Scan data. A new loss function is introduced and used in the training of the DNNs, so as to perform adaptation of the generated learned representations between data from different medical environments. Results are presented using a recently published database of Parkinson’s related information, which was generated and evaluated in a hospital environment. |
Tasks | Medical Diagnosis, Transfer Learning |
Published | 2019-01-23 |
URL | http://arxiv.org/abs/1901.07822v1 |
http://arxiv.org/pdf/1901.07822v1.pdf | |
PWC | https://paperswithcode.com/paper/predicting-parkinsons-disease-using-latent |
Repo | |
Framework | |
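A minimal sketch of the latent-representation pipeline described above, assuming the DNN-derived representations are already available as feature vectors (random stand-ins here): cluster the latent space with k-means, label the resulting prototypes by majority vote, and classify new cases with k-NN against those prototypes. The feature extraction from MRI/DaT scans and the adaptation loss are not shown.

```python
# Latent DNN features -> k-means prototypes -> k-NN diagnosis.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train_feats = rng.standard_normal((300, 128))     # stand-in latent representations
train_labels = rng.integers(0, 2, 300)            # 0 = control, 1 = Parkinson's
test_feats = rng.standard_normal((50, 128))

# Cluster the latent space and keep the centroids as prototypes, labelled by a
# majority vote of the training samples assigned to each cluster.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(train_feats)
proto_labels = np.array([
    np.bincount(train_labels[kmeans.labels_ == c], minlength=2).argmax()
    for c in range(10)
])

# k-NN classification of new cases against the labelled prototypes.
knn = KNeighborsClassifier(n_neighbors=3).fit(kmeans.cluster_centers_, proto_labels)
print(knn.predict(test_feats)[:10])
```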
Making Convex Loss Functions Robust to Outliers using $e$-Exponentiated Transformation
Title | Making Convex Loss Functions Robust to Outliers using $e$-Exponentiated Transformation |
Authors | Suvadeep Hajra |
Abstract | In this paper, we propose a novel {\em $e$-exponentiated} transformation, $0 \le e < 1$, for loss functions. When the transformation is applied to a convex loss function, the transformed loss function becomes more robust to outliers. Using a novel generalization error bound, we have theoretically shown that the transformed loss function has a tighter bound for datasets corrupted by outliers. Our empirical observation shows that the accuracy obtained using the transformed loss function can be significantly better than that obtained using the original loss function, and comparable to that of other state-of-the-art methods, in the presence of label noise. |
Tasks | |
Published | 2019-02-16 |
URL | http://arxiv.org/abs/1902.06127v2 |
http://arxiv.org/pdf/1902.06127v2.pdf | |
PWC | https://paperswithcode.com/paper/making-convex-loss-functions-robust-to |
Repo | |
Framework | |
Learning Patterns in Sample Distributions for Monte Carlo Variance Reduction
Title | Learning Patterns in Sample Distributions for Monte Carlo Variance Reduction |
Authors | Oskar Elek, Manu M. Thomas, Angus Forbes |
Abstract | This paper investigates a novel a-posteriori variance reduction approach in Monte Carlo image synthesis. Unlike most established methods based on lateral filtering in the image space, our proposition is to produce the best possible estimate for each pixel separately, from all the samples drawn for it. To enable this, we systematically study the per-pixel sample distributions for diverse scene configurations. Noting that these are too complex to be characterized by standard statistical distributions (e.g. Gaussians), we identify patterns recurring in them and exploit those for training a variance-reduction model based on neural nets. As a result, we obtain numerically better estimates than simple averaging of the samples. This method is compatible with existing image-space denoising methods, as the improved estimates of our model can be used for further processing. We conclude by discussing how the proposed model could in the future be extended for fully progressive rendering with constant memory footprint and scene-sensitive output. |
Tasks | Denoising, Image Generation |
Published | 2019-06-01 |
URL | https://arxiv.org/abs/1906.00124v1 |
https://arxiv.org/pdf/1906.00124v1.pdf | |
PWC | https://paperswithcode.com/paper/190600124 |
Repo | |
Framework | |
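A minimal sketch of the per-pixel idea described above: summarise the samples drawn for each pixel by a few distribution statistics and let a small regression model map them to an estimate that can improve on the plain sample mean when the sample distributions are heavy-tailed. The noise model, feature set, and network size are illustrative assumptions, not the paper's scene-derived distributions.

```python
# Learned per-pixel estimator from sample-distribution statistics vs. plain mean.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_pixels, spp = 5000, 32
true = rng.uniform(0.0, 1.0, n_pixels)                   # reference pixel values
# Heavy-tailed per-pixel samples: occasional "fireflies" inflate the mean.
samples = true[:, None] * rng.lognormal(mean=-0.5, sigma=1.0, size=(n_pixels, spp))

def features(s):
    # Summarise each pixel's sample distribution: mean, spread, and quantiles.
    qs = np.quantile(s, [0.1, 0.25, 0.5, 0.75, 0.9], axis=1).T
    return np.column_stack([s.mean(1), s.std(1), qs])

X = features(samples)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X[:4000], true[:4000])                         # train on reference pixels

mse_mean = np.mean((samples[4000:].mean(1) - true[4000:]) ** 2)
mse_net = np.mean((model.predict(X[4000:]) - true[4000:]) ** 2)
print(f"plain average MSE: {mse_mean:.5f}  learned estimator MSE: {mse_net:.5f}")
```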
Learning Direct and Inverse Transmission Matrices
Title | Learning Direct and Inverse Transmission Matrices |
Authors | Daniele Ancora, Luca Leuzzi |
Abstract | Linear problems appear in a variety of disciplines and their application for the transmission matrix recovery is one of the most stimulating challenges in biomedical imaging. Its knowledge turns any random medium into an optical tool that can focus or transmit an image through disorder. Here, converting an input-output problem into a statistical mechanical formulation, we investigate how inference protocols can learn the transmission couplings by pseudolikelihood maximization. Bridging linear regression and thermodynamics lets us propose an innovative framework for pursuing the solution of the scattering riddle. |
Tasks | |
Published | 2019-01-15 |
URL | http://arxiv.org/abs/1901.04816v2 |
http://arxiv.org/pdf/1901.04816v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-direct-and-inverse-transmission |
Repo | |
Framework | |
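A minimal sketch of the input-output formulation behind the entry above: recover a (real-valued, for simplicity) transmission matrix T from measured pairs y = Tx + noise by regularised least squares. This is only the plain linear-regression baseline; the paper's pseudolikelihood-maximization, statistical-mechanics treatment is not reproduced here.

```python
# Transmission matrix recovery from input-output pairs by ridge regression.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_meas = 64, 32, 400
T_true = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)

X = rng.standard_normal((n_in, n_meas))            # input patterns (one per column)
Y = T_true @ X + 0.05 * rng.standard_normal((n_out, n_meas))

lam = 1e-2
# Ridge solution of min_T ||Y - T X||^2 + lam ||T||^2, all rows solved at once.
T_hat = Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(n_in))

err = np.linalg.norm(T_hat - T_true) / np.linalg.norm(T_true)
print(f"relative recovery error: {err:.3f}")
```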
Transport-Based Neural Style Transfer for Smoke Simulations
Title | Transport-Based Neural Style Transfer for Smoke Simulations |
Authors | Byungsoo Kim, Vinicius C. Azevedo, Markus Gross, Barbara Solenthaler |
Abstract | Artistically controlling fluids has always been a challenging task. Optimization techniques rely on approximating simulation states towards target velocity or density field configurations, which are often handcrafted by artists to indirectly control smoke dynamics. Patch synthesis techniques transfer image textures or simulation features to a target flow field. However, these are either limited to adding structural patterns or augmenting coarse flows with turbulent structures, and hence cannot capture the full spectrum of different styles and semantically complex structures. In this paper, we propose the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric smoke data. Our method is able to transfer features from natural images to smoke simulations, enabling general content-aware manipulations ranging from simple patterns to intricate motifs. The proposed algorithm is physically inspired, since it computes the density transport from a source input smoke to a desired target configuration. Our transport-based approach allows direct control over the divergence of the stylization velocity field by optimizing incompressible and irrotational potentials that transport smoke towards stylization. Temporal consistency is ensured by transporting and aligning subsequent stylized velocities, and 3D reconstructions are computed by seamlessly merging stylizations from different camera viewpoints. |
Tasks | Style Transfer |
Published | 2019-05-17 |
URL | https://arxiv.org/abs/1905.07442v2 |
https://arxiv.org/pdf/1905.07442v2.pdf | |
PWC | https://paperswithcode.com/paper/transport-based-neural-style-transfer-for |
Repo | |
Framework | |
Neural parameters estimation for brain tumor growth modeling
Title | Neural parameters estimation for brain tumor growth modeling |
Authors | Ivan Ezhov, Jana Lipkova, Suprosanna Shit, Florian Kofler, Nore Collomb, Benjamin Lemasson, Emmanuel Barbier, Bjoern Menze |
Abstract | Understanding the dynamics of brain tumor progression is essential for optimal treatment planning. Cast in a mathematical formulation, it is typically viewed as evaluation of a system of partial differential equations, wherein the physiological processes that govern the growth of the tumor are considered. To personalize the model, i.e. find a relevant set of parameters, with respect to the tumor dynamics of a particular patient, the model is informed by empirical data, e.g., medical images obtained from diagnostic modalities, such as magnetic-resonance imaging. Existing model-observation coupling schemes require a large number of forward integrations of the biophysical model and rely on simplifying assumptions about the functional form linking the output of the model with the image information. In this work, we propose a learning-based technique for the estimation of tumor growth model parameters from medical scans. The technique allows for explicit evaluation of the posterior distribution of the parameters by sequentially training a mixture-density network, relaxing the constraint on the functional form and reducing the number of samples necessary to propagate through the forward model for the estimation. We test the method on synthetic and real scans of rats injected with brain tumors to calibrate the model and to predict tumor progression. |
Tasks | |
Published | 2019-07-01 |
URL | https://arxiv.org/abs/1907.00973v2 |
https://arxiv.org/pdf/1907.00973v2.pdf | |
PWC | https://paperswithcode.com/paper/neural-parameters-estimation-for-brain-tumor |
Repo | |
Framework | |
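A minimal sketch of a mixture-density network of the kind described above: a small MLP maps simulated observables to a Gaussian-mixture posterior over growth-model parameters and is trained on prior draws pushed through a forward model. The one-parameter toy simulator, the architecture sizes, and the single-round (non-sequential) training are illustrative assumptions.

```python
# Mixture-density network for simulator-parameter posteriors.
import math
import torch
import torch.nn as nn

K, d_obs, d_par = 5, 8, 1      # mixture components, observable dim, parameter dim

class MDN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_obs, 64), nn.Tanh(),
                                  nn.Linear(64, 64), nn.Tanh())
        self.pi = nn.Linear(64, K)
        self.mu = nn.Linear(64, K * d_par)
        self.log_sigma = nn.Linear(64, K * d_par)

    def forward(self, x):
        h = self.body(x)
        return (torch.log_softmax(self.pi(h), dim=-1),
                self.mu(h).view(-1, K, d_par),
                self.log_sigma(h).view(-1, K, d_par))

def nll(log_pi, mu, log_sigma, theta):
    # Negative log-likelihood of theta under the predicted Gaussian mixture.
    t = theta.unsqueeze(1)                                   # (B, 1, d_par)
    comp = -0.5 * (((t - mu) / log_sigma.exp()) ** 2
                   + 2 * log_sigma + math.log(2 * math.pi)).sum(-1)
    return -torch.logsumexp(log_pi + comp, dim=-1).mean()

def simulator(theta):
    # Hypothetical forward model: noisy sigmoidal "growth curve" summaries.
    t = torch.linspace(0.1, 1.0, d_obs)
    return torch.sigmoid(theta * 10 * (t - 0.5)) + 0.02 * torch.randn(theta.shape[0], d_obs)

model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    theta = torch.rand(128, d_par)                           # draws from a uniform prior
    loss = nll(*model(simulator(theta)), theta)
    opt.zero_grad(); loss.backward(); opt.step()

# Approximate posterior over the growth parameter for one synthetic observation.
log_pi, mu, log_sigma = model(simulator(torch.tensor([[0.7]])))
print(log_pi.exp().squeeze(), mu.squeeze())
```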