"For this purpose, an assumption is reasonable if it satisfies the following criteria...it reflects the actuary’s estimate of future experience, the actuary’s observation of the estimates inherent in market data (if any), or a combination thereof; and it is expected to have no significant bias (i.e., it is not significantly optimistic or pessimistic)..."
—Actuarial Standard of Practice No. 27: Selection of Assumptions for Measuring Pension Obligations.
Coherent Bayesian Priors
Forecast competitions score how well a forecaster does; that feedback is what forecasters calibrate on. But it arrives only after the outcome is known. Before seeing any data or outcomes, the forecaster encodes their background information as a prior. How, then, can they check the prior for coherence before seeing anything else?
Here's a test: if you can already predict which direction the next piece of evidence will move your expected belief, your prior is incoherent. Keep adjusting until you can't.
Suppose an actuary has a prior expectation that future costs are $100. They then think, "I bet costs are going up, and when I see next quarter's financials, I'll probably update my belief to $125." If the actuary expects $125 now, the prior should encode $125 now. Holding anything else is holding an incoherent prior. If the actuary records $100 but believes in their heart of hearts that it's $125, they will be overly surprised by a result below $125 and question the model's performance, even if the model did its job as intended. Writing down $100 did not change the actuary's actual belief.
Information is about surprise. Had the actuary anticipated $125, seeing $125 would have carried little information and caused no surprise. But they externalized $100 and were still not surprised to see $125. Moving from a $100 prior to a $125 update should have felt like gaining information, but it didn't, because the actuary was at $125 all along, regardless of the $100 inscribed.
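The incoherence can be made concrete with arithmetic. The probabilities below are hypothetical, chosen only to illustrate; the source gives no such numbers.

```python
# Hypothetical: the actuary privately believes next quarter's financials
# will show higher costs with probability 0.8 (updating them to $125),
# and lower costs with probability 0.2 (updating them to, say, $90).
p_up = 0.8
posterior_if_up = 125.0
posterior_if_down = 90.0

# A coherent prior must equal the probability-weighted average of the
# posteriors the actuary anticipates -- the law of total expectation.
coherent_prior = p_up * posterior_if_up + (1 - p_up) * posterior_if_down
print(round(coherent_prior, 2))  # 118.0 -- not the $100 they wrote down
```

Under these assumed beliefs, any recorded prior other than $118 predicts the direction of its own update.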
An actuary holding a coherent prior cannot predict whether the next piece of evidence will move their updated expected belief up or down. This is the law of total expectation: the expectation of the conditional expectations equals the unconditional expectation:
$$E[H] = E\big[E[H \mid E]\big]$$
- \( H \): A belief or assumption — the hypothesis whose probability we want to track.
- \( E \): New data or observations that could update that belief.
- \( E[\cdot] \): The expectation operator — a weighted mean over outcomes, where each outcome is weighted by its probability. A straight arithmetic average is the special case where every outcome has equal probability.
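A quick numerical check of this identity, using a made-up two-stage setup (the evidence values and probabilities below are illustrative, not from the text):

```python
# Evidence E takes one of three illustrative values, each with a
# probability, and H has a known conditional expectation under each.
p_evidence = {"down": 0.25, "flat": 0.50, "up": 0.25}            # P(E)
cond_expectation = {"down": 80.0, "flat": 100.0, "up": 130.0}    # E[H | E]

# E[H] = E[E[H | E]]: average the conditional expectations,
# weighting each by the probability of the evidence that produces it.
e_h = sum(p_evidence[e] * cond_expectation[e] for e in p_evidence)
print(e_h)  # 102.5
```

Whatever the conditional expectations are, their probability-weighted average is the only coherent unconditional expectation.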
Or equivalently:
$$P(H) = E\big[P(H \mid E)\big] = \big[P(H \mid E) \times P(E)\big] + \big[P(H \mid \neg E) \times P(\neg E)\big]$$
- \( P(H) \): Your prior belief in \( H \) before learning whether \( E \) occurred.
- \( P(H \mid E) \): Your posterior belief in \( H \) given that \( E \) is observed.
- \( P(H \mid \neg E) \): Your posterior belief in \( H \) given that \( E \) is not observed.
In terms of priors, what you expect now should equal what you expect to expect after seeing new data. If you expect your expectations to change after seeing new data, something is wrong: the equation no longer balances when it must.
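The probability form can be checked the same way. The numbers below are assumed for illustration:

```python
# Illustrative beliefs about a binary piece of evidence E.
p_e = 0.3               # P(E): probability the evidence is observed
p_h_given_e = 0.9       # P(H | E): posterior if E occurs
p_h_given_not_e = 0.2   # P(H | ~E): posterior if E does not occur

# The prior must be the expected posterior:
# P(H) = P(H|E) * P(E) + P(H|~E) * P(~E).
p_h = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
print(round(p_h, 2))  # 0.41
```

Given these anticipated posteriors, writing down any prior other than 0.41 means the equation cannot balance once the evidence arrives.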
| Sex | Geo | Value |
|---|---|---|
| F | W | 157 |
| M | N | 91 |
| F | E | 178 |
| M | N | 55 |
| F | E | 182 |
| M | W | 159 |
| F | W | 53 |
| F | W | 70 |
| Mean | | 118.1 |
| Sex | N | Mean |
|---|---|---|
| F | 5 | 128.0 |
| M | 3 | 101.7 |
| Weighted Mean | | 118.1 |
| Geo | N | Mean |
|---|---|---|
| E | 2 | 180.0 |
| N | 2 | 73.0 |
| W | 4 | 109.8 |
| Weighted Mean | | 118.1 |
Split your data by some conditioning variable (age, sex, geography, and so forth), and the weighted mean of those splits equals the mean of the data before splitting. The unconditional mean (first table) equals the weighted average of the conditional means (second and third tables).
Before looking at the data, externalize your expectations as priors. Then ask, "Do I expect my updated belief to be higher or lower than this?" If the answer is anything other than "I'm unsure," adjust your prior until you reach a value from which you genuinely cannot predict whether new data will move you up or down. That prior is the coherent prior; that is the law of total expectation.
Actuarial Standards Board. (2023). Actuarial Standard of Practice No. 27: Selection of Assumptions for Measuring Pension Obligations. https://www.actuarialstandardsboard.org/asops/adopted-asop-no-27-selection-of-assumptions-for-measuring-pension-obligations/
Jaynes, E. T. (2003). Probability Theory: The Logic of Science (G. L. Bretthorst, Ed.). Cambridge University Press.
Yudkowsky, E. (2018). Conservation of expected evidence. In Map and Territory (Rationality: From AI to Zombies, Book 1). Machine Intelligence Research Institute. https://www.lesswrong.com/posts/jiBFC7DcCrZjGmZnJ/conservation-of-expected-evidence
Calculations and graphics done in R version 4.3.3, with these packages:
Wickham H, et al. (2019). Welcome to the tidyverse. Journal of Open Source Software, 4 (43), 1686. https://doi.org/10.21105/joss.01686
Wilke C, Wiernik B (2022). ggtext: Improved Text Rendering Support for 'ggplot2'. R package version 0.1.2. https://CRAN.R-project.org/package=ggtext
Generative AIs like Anthropic's Claude Opus 4.6 were used in parts of coding and reviewing the writing. Cover art was created by the author with generative AI.
This website reflects the author's personal exploration of ideas and methods. The views expressed are solely their own and may not represent the policies or practices of any affiliated organizations, employers, or clients. Different perspectives, goals, or constraints within teams or organizations can lead to varying appropriate methods. The information provided is for general informational purposes only and should not be construed as legal, actuarial, or professional advice.