The Brier Score – Accuracy of a probability forecast


The Brier Score is probably the most commonly used verification measure for assessing
the accuracy of probability forecasts. The score is the mean squared error of the
probability forecasts over the verification sample and is expressed as:

    BS = \frac{1}{N} \sum_{j=1}^{N} (p_j - o_j)^2

where N is the sample size, p_j is the forecast probability, and o_j is the corresponding
observation. The observations o_j are all binary, 1 if the event occurs
and 0 if it doesn't. The Brier score ranges from 0 for a perfect forecast to 1 for
the worst possible forecast. Although the score can be computed on a single forecast,
the result wouldn’t be very meaningful because the observation is binary and the forecast
is a probability.
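As a quick sketch of the definition (illustrative numbers only, not the exercise data), the score can be computed directly as the mean squared difference between forecast probabilities and binary outcomes:

```python
def brier_score(forecasts, observations):
    """Mean squared error of probability forecasts against binary (0/1) outcomes."""
    n = len(forecasts)
    return sum((p - o) ** 2 for p, o in zip(forecasts, observations)) / n

# A perfect forecast scores 0; the worst possible forecast scores 1.
print(brier_score([1.0, 0.0], [1, 0]))  # 0.0
print(brier_score([0.0, 1.0], [1, 0]))  # 1.0
```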
The following table shows ten forecasts of the probability of precipitation from each of four forecasters,
"Mr. Prob", "Mr. Sharp", "Mr. Climat", and "Mr. Cats." Mr. Prob believes he can distinguish
the likelihood of rain to within 10% intervals, so feels free to use all probability values,
to the nearest 10%. Mr. Sharp believes that one should give clear guidance, and
that forecasting near 50% is useless to everyone. Mr. Climat has no confidence whatsoever in
his ability to discern greater and lesser chances of rain, but he knows that rain happens on average
on 4 days out of 10 (40% of occasions). Mr. Cats is a traditional deterministic forecaster,
and thinks that the duty of the forecaster is to give his best estimate of what will happen.
He forecasts categorically, forecasting rain if he thinks it is likely to happen.
Using the observations given in the last row, compute the Brier scores for these forecasts and
answer the following questions.
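To illustrate how two of these styles compare, the sketch below scores a constant 40% forecast (Mr. Climat's strategy) and a categorical 0/1 forecast (Mr. Cats' style) against a made-up 10-day record with 4 rain days. These observations and the categorical forecasts are hypothetical, not the values from the exercise's table:

```python
def brier_score(forecasts, observations):
    # Mean squared error of probability forecasts against binary outcomes.
    return sum((p - o) ** 2 for p, o in zip(forecasts, observations)) / len(forecasts)

# Hypothetical 10-day record (1 = rain, 0 = no rain) with 4 rain days,
# matching Mr. Climat's 40% climatology; NOT the exercise's actual observations.
obs = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]

climat = [0.4] * 10                     # Mr. Climat: always the climatological probability
cats = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]  # a categorical (0/1) forecaster, wrong twice

# Climatology pays (0.6)^2 on each of the 4 rain days and (0.4)^2 on the
# 6 dry days, giving (4*0.36 + 6*0.16)/10 = 0.24; the categorical forecaster
# pays a full 1.0 for each of his 2 misses, giving 2/10 = 0.20.
print(brier_score(climat, obs))
print(brier_score(cats, obs))
```

Note that the categorical forecaster beats climatology here only because he misses rarely; each miss costs him the maximum possible penalty.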


To aid the computations, the square of each of the possible error
values (forecast probability minus binary observation, in 10% steps) is shown below.

Error:   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
Square:  0.00  0.01  0.04  0.09  0.16  0.25  0.36  0.49  0.64  0.81  1.00
1. Based on your calculations, assign the correct Brier score to each forecaster.