
Fits one of three possible Bayesian Instrumental Regression for Disparity Estimation (BIRDiE) models to BISG probabilities and covariates. The simplest, the Categorical-Dirichlet model (cat_dir()), is appropriate when there are no covariates or when all covariates are discrete and fully interacted with one another. The more general Categorical mixed-effects model (cat_mixed()) supports any number of fixed effects and up to one random intercept. For continuous outcomes, a Normal linear model is available (gaussian()).


birdie(
  r_probs,
  formula,
  data = NULL,
  family = cat_dir(),
  prior = NULL,
  weights = NULL,
  algorithm = c("em", "gibbs", "em_boot"),
  iter = 400,
  warmup = 50,
  prefix = "pr_",
  ctrl = birdie.ctrl()
)



r_probs
A data frame or matrix of BISG probabilities, with one row per individual. The output of bisg() can be used directly here.


formula
A two-sided formula object describing the model structure. The left-hand side is the outcome variable, which must be discrete. A single random intercept term, denoted with a vertical bar ("(1 | <term>)"), is supported on the right-hand side.
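For concreteness, these are the shapes of formula the description allows (variable names here are illustrative, not part of the package):

```r
# Illustrative formula shapes (variable names hypothetical):
f1 <- turnout ~ 1                # no covariates
f2 <- turnout ~ zip              # discrete covariate; covariates must be
                                 # fully interacted for cat_dir()
f3 <- turnout ~ age + (1 | zip)  # fixed effect plus one random intercept,
                                 # for use with cat_mixed()
stopifnot(inherits(f1, "formula"), inherits(f3, "formula"))
```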


data
An optional data frame containing the variables named in formula.


family
A description of the complete-data model type to fit. Options are:

  • cat_dir(): Categorical-Dirichlet model. All covariates must be fully interacted.

  • cat_mixed(): Categorical mixed-effects model. Up to one random effect is supported.

  • gaussian(): Linear model.

See the Details section below for more information on the various models.


prior
A list with entries specifying the model prior.

  • For the cat_dir model, the only entry is alpha, which should be a matrix of Dirichlet hyperparameters. The matrix should have one row for every level of the outcome variable and one column for every racial group. The default prior (used when prior=NULL) is an empirical Bayes prior equal to the weighted-mean estimate of the outcome-race table. A fully noninformative prior with all entries set to \(\epsilon\) can be obtained by setting prior=NA. When prior=NULL and algorithm="em" or "em_boot", 1 is added to the prior so that the posterior mode, rather than the mean, is shrunk toward these values.

  • For the cat_mixed model, the prior list should contain three entries: scale_int, the standard deviation on the Normal prior for the intercepts (which control the global estimates of Y|R); scale_beta, the standard deviation on the Normal prior for the fixed effects; and scale_sigma, the prior mean of the standard deviation of the random intercepts. Each entry may be a single scalar or a vector with one element per racial group.

  • For the gaussian model, the prior list should contain two entries: scale_int, the standard deviation on the Normal prior for the intercepts (which control the global estimates of Y|R), and scale_beta, the standard deviation on the Normal prior for the fixed effects. Each must be a single scalar. Both are expressed in terms of the estimated residual standard deviation (i.e., they are multiplied by it to form the "true" prior scale).

The prior is stored after model fitting in the $prior element of the fitted model object.
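As a sketch (the hyperparameter values below are illustrative, not package defaults), a custom prior for each family might be assembled like so:

```r
# Categorical-Dirichlet: one row per outcome level, one column per group.
alpha <- matrix(1.5, nrow = 2, ncol = 6,
                dimnames = list(c("no", "yes"),
                                c("white", "black", "hisp",
                                  "asian", "aian", "other")))
prior_dir <- list(alpha = alpha)

# Categorical mixed-effects: three scalar (or per-group vector) entries.
prior_mixed <- list(scale_int = 2.0, scale_beta = 0.2, scale_sigma = 0.1)

# Normal linear: two scalar entries, in units of the residual sd.
prior_gauss <- list(scale_int = 2.0, scale_beta = 0.2)
```

Any of these lists would then be passed as the prior argument to birdie().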


weights
An optional numeric vector specifying likelihood weights.


algorithm
The inference algorithm to use. One of three options:

  • "em": An expectation-maximization algorithm that finds the maximum a posteriori (MAP) parameter estimates. Computationally efficient and supported by all the model families. Provides no uncertainty quantification.

  • "gibbs": A Gibbs sampler for performing full Bayesian inference. Generally more computationally demanding than the EM algorithm, but provides uncertainty quantification. Currently supported for cat_dir() and gaussian() model families. Computation-reliability tradeoff can be controlled with iter argument.

  • "em_boot": Bootstrapped version of EM algorithm. Number of bootstrap replicates controlled by iter parameter. Provides approximate uncertainty quantification. Currently supported for cat_dir() and gaussian() model families.


iter
The number of post-warmup Gibbs samples, or the number of bootstrap replicates to use to compute approximate standard errors for the main model estimates. Only available when family=cat_dir() or gaussian(). Ignored if algorithm="em".

For bootstrapping, when there are fewer than 1,000 individuals or 100 or fewer replicates, a Bayesian bootstrap is used instead (i.e., weights are drawn from a \(\text{Dirichlet}(1, 1, ..., 1)\) distribution), which produces more reliable estimates.
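The Bayesian bootstrap replaces resampling with random weights. A minimal sketch of drawing Dirichlet(1, ..., 1) weights (not birdie's internal code) is:

```r
set.seed(1)
n <- 500
# A Dirichlet(1, ..., 1) draw is a vector of Exponential(1) draws
# normalized to sum to one; rescaling by n makes the weights average to 1,
# so they can serve as likelihood weights for n observations.
w <- rexp(n)
w <- w / sum(w) * n
summary(w)  # strictly positive, with mean exactly 1
```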


warmup
Number of warmup iterations for Gibbs sampling. Ignored unless algorithm="gibbs".


prefix
If r_probs is a data frame, the columns containing racial probabilities will be selected as those with names starting with prefix. The default will work with the output of bisg().
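In other words, the probability columns are identified purely by name. A sketch of that selection logic (not the package's exact code) on a toy data frame:

```r
# Toy r_probs data frame with an extra non-probability column
r_probs <- data.frame(id = 1:2,
                      pr_white = c(0.9, 0.2),
                      pr_black = c(0.1, 0.8))
keep <- startsWith(names(r_probs), "pr_")
names(r_probs)[keep]
#> [1] "pr_white" "pr_black"
```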


ctrl
A list containing control parameters for the EM algorithm and optimization routines. A list in the proper format can be made using birdie.ctrl().


An object of class birdie, for which many methods are available. The model estimates may be accessed with coef.birdie(), and updated BISG probabilities (conditioning on the outcome) may be accessed with fitted.birdie(). Uncertainty estimates, if available, can be accessed with $se and vcov.birdie().


By default, birdie() uses an expectation-maximization (EM) routine to find the maximum a posteriori (MAP) estimate for the specified model. Asymptotic variance-covariance matrices for the MAP estimate are available for the Categorical-Dirichlet and Normal linear models via bootstrapping. Full Bayesian inference is supported via Gibbs sampling for the Categorical-Dirichlet and Normal linear models as well.

Whatever model or method is used, a finite-population estimate of the outcome-given-race distribution for the entire observed sample is always calculated and stored as $est in the returned object, which can be accessed with coef.birdie() as well.

The Categorical-Dirichlet model is specified as follows: $$ Y_i \mid R_i, X_i, \Theta \sim \text{Categorical}(\theta_{R_iX_i}) \\ \theta_{rx} \sim \text{Dirichlet}(\alpha_r), $$ where \(Y\) is the outcome variable, \(R\) is race, \(X\) are covariates (fixed effects), and \(\theta_{rx}\) and \(\alpha_r\) are vectors with length matching the number of levels of the outcome variable. There is one vector \(\theta_{rx}\) for every combination of race and covariates, hence the need for formula to either have no covariates or a fully interacted structure.
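To see the Dirichlet-Categorical conjugacy at work, consider the simplified case where race is observed (birdie's EM instead weights these counts by BISG probabilities); the numbers below are made up for illustration:

```r
# Posterior for theta_rx is Dirichlet(alpha_r + outcome counts in cell rx),
# so the posterior mean is a count-weighted shrinkage toward the prior.
alpha_r  <- c(no = 1, yes = 1)    # prior hyperparameters for group r
counts_r <- c(no = 30, yes = 70)  # observed outcome counts in group r
post_mean <- (alpha_r + counts_r) / sum(alpha_r + counts_r)
round(post_mean, 3)
#>    no   yes 
#> 0.304 0.696
```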

The Categorical mixed-effects model is specified as follows: $$ Y_i \mid R_i, X_i, \Theta \sim \text{Categorical}(g^{-1}(\mu_{R_iX_i})) \\ \mu_{rxy} = W\beta_{ry} + Zu_{ry} \\ u_{r} \mid \vec\sigma_{r}, C_r \sim \mathcal{N}(0, \text{diag}(\vec\sigma_{r})C_r\text{diag}(\vec\sigma_{r})) \\ \beta_{ry} \sim \mathcal{N}(0, s^2_{r\beta}) \\ \sigma_{ry} \sim \text{Inv-Gamma}(4, 3s_{r\sigma}) \\ C_r \sim \text{LKJ}(2), $$ where \(\beta_{ry}\) are the fixed effects, \(u_{ry}\) is the random intercept, and \(g\) is a softmax link function. Estimates for \(\beta_{ry}\) and \(\sigma_{ry}\) are stored in the $beta and $sigma elements of the fitted model object.

The Normal linear model is specified as follows: $$ Y_i \mid R_i, \vec X_i, \Theta \sim \mathcal{N}(\vec X_i^\top\vec\theta, \sigma^2) \\ \sigma^2 \sim \text{Inv-Gamma}(n_\sigma/2, l_\sigma^2 n_\sigma/2) \\ \theta_{\text{intercept}} \sim \mathcal{N}(0, s^2_\text{int}) \\ \theta_k \sim \mathcal{N}(0, s^2_\beta), $$ where \(\vec\theta\) is the vector of linear model coefficients. Estimates for \(\theta\) and \(\sigma\) are stored in the $beta and $sigma elements of the fitted model object.

More details on the models and their properties may be found in the paper referenced below.


McCartan, C., Fisher, R., Goldin, J., Ho, D.E., & Imai, K. (2024). Estimating Racial Disparities when Race is Not Observed. Available at


# \donttest{

r_probs = bisg(~ nm(last_name) + zip(zip), data=pseudo_vf)

# Process zip codes to remove missing values
pseudo_vf$zip = proc_zip(pseudo_vf$zip)

fit = birdie(r_probs, turnout ~ 1, data=pseudo_vf)
#> Using weakly informative empirical Bayes prior for Pr(Y | R)
#> This message is displayed once every 8 hours.
#> Categorical-Dirichlet BIRDiE model
#> Formula: turnout ~ 1
#>    Data: pseudo_vf
#> Number of obs: 5,000
#> Estimated distribution:
#>     white black  hisp asian  aian other
#> no    0.3 0.358 0.391  0.61 0.768 0.269
#> yes   0.7 0.642 0.609  0.39 0.232 0.731
fit = birdie(r_probs, turnout ~ zip, data=pseudo_vf, algorithm="gibbs")
fit$se # uncertainty quantification

fit = birdie(r_probs, turnout ~ (1 | zip), data=pseudo_vf,
             family=cat_mixed(), ctrl=birdie.ctrl(abstol=1e-3))
#> Using default prior for Pr(Y | R):
#> → Prior scale on intercepts: 2.0
#> → Prior scale on fixed effects coefficients: 0.2
#> → Prior mean of random effects standard deviation: 0.10
#> This message is displayed once every 8 hours.
#> ⠙ EM iterations 3 done (0.98/s) | 3.1s
#> ⠹ EM iterations 4 done (1/s) | 4s
#> ⠸ EM iterations 7 done (0.99/s) | 7.1s
#> ⠼ EM iterations 10 done (1/s) | 9.9s
#> ⠼ EM iterations 12 done (1/s) | 12s

#> Categorical mixed-effects BIRDiE model
#> Formula: turnout ~ (1 | zip)
#>    Data: pseudo_vf
#> 12 iterations and 12 secs to convergence
#> Number of observations: 5,000
#> Number of groups: 618
#> Entropy decrease from marginal race distribution:
#>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
#> -0.4805  0.1545  0.4362  0.4581  0.7993  1.0730 
#> Entropy decrease from BISG probabilities:
#>     Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
#> -0.88308 -0.02054  0.02267  0.07493  0.15424  1.18798 
#> Estimated outcome-by-race distribution:
#>     white black  hisp asian  aian other
#> no  0.294 0.368 0.375 0.562 0.635 0.369
#> yes 0.706 0.632 0.625 0.438 0.365 0.631
coef(fit)
#>         white     black      hisp     asian      aian     other
#> no  0.2937651 0.3676978 0.3750361 0.5624857 0.6346433 0.3686167
#> yes 0.7062349 0.6323022 0.6249639 0.4375143 0.3653567 0.6313833
fitted(fit)
#> # A tibble: 5,000 × 6
#>    pr_white pr_black   pr_hisp   pr_asian  pr_aian pr_other
#>       <dbl>    <dbl>     <dbl>      <dbl>    <dbl>    <dbl>
#>  1 0.952    0.000210 0.0141    0.000553   0.0113    0.0220 
#>  2 0.000179 0.994    0.0000658 0.000116   0.000514  0.00542
#>  3 0.961    0.00714  0.0000438 0.00000548 0.000399  0.0311 
#>  4 0.600    0.367    0.000342  0.0000128  0.00129   0.0316 
#>  5 0.984    0.00417  0.0000530 0.00221    0.000302  0.00955
#>  6 0.560    0.267    0.0980    0.00640    0.00269   0.0657 
#>  7 0.110    0.788    0.0372    0.00553    0.00174   0.0575 
#>  8 0.964    0.00427  0.0319    0          0         0      
#>  9 0.739    0.217    0.00995   0.000774   0.000822  0.0322 
#> 10 0.864    0.0979   0.0139    0.000971   0.00148   0.0214 
#> # ℹ 4,990 more rows
# }