Bayesian Image Classification with Deep Convolutional Gaussian Processes

Date: August 26, 2020
Authors: Vincent Dutordoir, Mark van der Wilk, Artem Artemev, James Hensman

In decision-making systems, it is important to have classifiers that have calibrated uncertainties, with an optimisation objective that can be used for automated model selection and training. Gaussian processes (GPs) provide uncertainty estimates and a marginal likelihood objective, but their weak inductive biases lead to inferior accuracy. This has limited their applicability in certain tasks (e.g. image classification). We propose a translation-insensitive convolutional kernel, which relaxes the translation invariance constraint imposed by previous convolutional GPs. We show how we can use the marginal likelihood to learn the degree of insensitivity. We also reformulate GP image-to-image convolutional mappings as multi-output GPs, leading to deep convolutional GPs. We show experimentally that our new kernel improves performance in both single-layer and deep models. We also demonstrate that our fully Bayesian approach improves on dropout-based Bayesian deep learning methods in terms of uncertainty and marginal likelihood estimates.

View paper
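To make the idea of a translation-insensitive convolutional kernel concrete, here is a minimal NumPy sketch of the general construction (this is illustrative, not the paper's implementation): a patch-response kernel is summed over all pairs of image patches, weighted by a second kernel over patch locations. The function names (`rbf`, `extract_patches`, `ti_conv_kernel`) and the squared-exponential choices are assumptions made for the example. A very large location lengthscale recovers the fully translation-invariant convolutional kernel, while a finite one gives the relaxed, "insensitive" behaviour described above.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between the rows of a and b."""
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def extract_patches(image, patch_size):
    """All overlapping patches of a (H, W) image, plus their (row, col) positions."""
    H, W = image.shape
    h, w = patch_size
    patches, locations = [], []
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patches.append(image[i:i + h, j:j + w].ravel())
            locations.append([i, j])
    return np.array(patches), np.array(locations, dtype=float)

def ti_conv_kernel(img1, img2, patch_size=(3, 3),
                   patch_lengthscale=1.0, location_lengthscale=3.0):
    """Illustrative translation-insensitive convolutional kernel.

    k(x, x') = (1 / P^2) * sum_{p,q} k_patch(x[p], x'[q]) * k_loc(p, q).
    A finite location lengthscale down-weights responses from distant patch
    positions (insensitive but not invariant); letting it grow large recovers
    the fully translation-invariant convolutional kernel.
    """
    P1, L1 = extract_patches(img1, patch_size)
    P2, L2 = extract_patches(img2, patch_size)
    k_patch = rbf(P1, P2, lengthscale=patch_lengthscale)
    k_loc = rbf(L1, L2, lengthscale=location_lengthscale)
    return np.mean(k_patch * k_loc)

# Example: compare an image with itself and with a horizontally shifted copy.
rng = np.random.default_rng(0)
x = rng.random((10, 10))
x_shifted = np.roll(x, 2, axis=1)
print(ti_conv_kernel(x, x), ti_conv_kernel(x, x_shifted))
```

In this sketch the location lengthscale plays the role of the "degree of insensitivity"; in a GP model it would be treated as a kernel hyperparameter and selected by maximising the (approximate) marginal likelihood alongside the other parameters.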

