Construct finite-sample calibrated predictive intervals for Bayesian models, following the approach in Barber et al. (2021). By default, the intervals will also reflect the relative uncertainty in the Bayesian model, using the locally-weighted conformal methods of Lei et al. (2018).

Usage

# S3 method for class 'conformal'
predictive_interval(object, probs = 0.9, plus = NULL, local = TRUE, ...)

Arguments

object

A fitted model which has been passed through loo_conformal()

probs

The coverage probabilities to calculate intervals for. Empirically, the coverage rate of the constructed intervals will generally match these probabilities, but the theoretical guarantee for a target probability of \(1-\alpha\) is only for coverage of at least \(1-2\alpha\), and it holds only when plus=TRUE (see below).

plus

If TRUE, construct jackknife+ intervals, which carry the theoretical coverage guarantee. These intervals are more expensive to compute, with cost scaling in both the number of training points and the number of prediction points. Defaults to TRUE when both of these numbers are less than 500.

local

If TRUE (the default), perform locally-weighted conformal inference. This will inflate the width of the predictive intervals by a constant amount across all predictions, preserving the relative amount of uncertainty captured by the model. If FALSE, all predictive intervals will have (nearly) the same width.

...

Further arguments to the posterior_predict() method for object.

Value

A matrix with one row per prediction. Columns are labeled with percentiles corresponding to probs; e.g., if probs=0.9 the columns will be 5% and 95%.

References

Barber, R. F., Candes, E. J., Ramdas, A., & Tibshirani, R. J. (2021). Predictive inference with the jackknife+. The Annals of Statistics, 49(1), 486-507.

Lei, J., G’Sell, M., Rinaldo, A., Tibshirani, R. J., & Wasserman, L. (2018). Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523), 1094-1111.

Examples

if (requireNamespace("rstanarm", quietly=TRUE)) suppressWarnings({
    library(rstanarm)
    # fit a simple linear regression
    m <- stan_glm(mpg ~ disp + cyl, data=mtcars,
        chains=1, iter=500,
        control=list(adapt_delta=0.999), refresh=0)

    # calibrate with leave-one-out conformal inference
    m <- loo_conformal(m)
    # make predictive intervals
    predictive_interval(m)
})
#>                            5%      95%
#> Mazda RX4           16.381827 28.18769
#> Mazda RX4 Wag       16.614507 26.94348
#> Datsun 710          20.804507 31.90627
#> Hornet 4 Drive      15.086216 24.68408
#> Hornet Sportabout    9.529887 20.03695
#> Valiant             15.603716 26.06700
#> Duster 360           8.791614 19.63106
#> Merc 240D           19.871652 30.89690
#> Merc 230            20.277989 31.29874
#> Merc 280            16.401084 27.43578
#> Merc 280C           15.997483 27.16487
#> Merc 450SE          11.039505 21.84140
#> Merc 450SL          11.272833 20.92848
#> Merc 450SLC         10.685265 21.95885
#> Cadillac Fleetwood   6.719272 17.16997
#> Lincoln Continental  7.123097 17.73893
#> Chrysler Imperial    7.520443 18.17208
#> Fiat 128            21.301428 32.33201
#> Honda Civic         21.850400 32.43134
#> Toyota Corolla      21.333528 32.55120
#> Toyota Corona       21.126467 31.13913
#> Dodge Challenger    10.087606 20.10560
#> AMC Javelin          9.886367 20.76156
#> Camaro Z28           9.365667 20.26677
#> Pontiac Firebird     8.625435 18.89637
#> Fiat X1-9           21.365234 32.22251
#> Porsche 914-2       20.851954 31.24738
#> Lotus Europa        21.651287 32.48514
#> Ford Pantera L       8.940693 20.53603
#> Ferrari Dino        16.596912 27.61475
#> Maserati Bora        9.957017 20.86331
#> Volvo 142E          20.487699 31.29541
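
Since probs accepts multiple coverage probabilities, a single call can produce several nested intervals at once. A minimal sketch, assuming the calibrated fit m from the example above; the exact column ordering may differ:

    # 50% and 90% intervals in one matrix;
    # columns are labeled by percentile (25%, 75%, 5%, 95%)
    predictive_interval(m, probs = c(0.5, 0.9))

Note that the nominal 90% interval is only guaranteed at least 80% coverage unless plus=TRUE.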