Paper on the best way to input very large numbers of observations into weather models, published in Monthly Weather Review

Summary: Almost all weather forecasts for times longer than a few hours in the future are made using complicated computer apps working together on the largest computers. This is known as numerical weather prediction, or NWP for short. One of the most important NWP apps takes all kinds of data and mixes them together to make a full picture of what the weather is doing right now. So, data for the temperature, wind, and humidity all around the world and all the way from the ground to very high up are put together in what we call data assimilation, or DA for short. The data include observations (actual measurements) and short (6-hour) forecasts, but none of these data are perfect: they all have errors. Data that have small errors are given larger weighting, and data with large errors are given smaller weighting during DA. So, to get the most out of the observations that NOAA collects, scientists need to give the DA app a perfect understanding of the data errors. This is impossible, because (hold on tight, because this part will bend your brain) a perfect understanding of the errors of the data would require a perfect understanding of the truth, and we already know that we can't get the exact truth. The DA app isn't perfect, so scientists use different shortcuts to make it work IRL. (Ha! That's another short way to say something, but one you already know!) Some of these shortcuts are to use different, simple ways of describing the errors of the data, to use only some of the observations (to "thin" the observations), or to combine observations close together into what are called "superobservations". This study uses a simple setup, one so simple that DA can be done perfectly. It then looks at the different shortcuts and how they work in different situations. From the results in the simple setup, the study then comes up with ideas to improve the much more complicated DA app.
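To see how that weighting works, here is a minimal sketch in Python. It is not code from the paper; the numbers and the function name `blend` are made up for illustration. It blends one forecast value with one observation, giving more weight to whichever has the smaller error:

```python
def blend(background, obs, bg_err, obs_err):
    """Combine a short-range forecast ("background") with an observation.

    Each piece of data is weighted by the inverse of its error variance,
    so the more accurate one pulls the answer toward itself.
    """
    w_obs = bg_err**2 / (bg_err**2 + obs_err**2)  # weight given to the observation
    return background + w_obs * (obs - background)

# Hypothetical example: the 6-hour forecast says 20 C (error about 2 C),
# a thermometer reads 22 C (error about 1 C).
analysis = blend(20.0, 22.0, bg_err=2.0, obs_err=1.0)
print(analysis)  # lands closer to the more accurate observation
```

With these made-up numbers the observation gets 80% of the weight, so the blended value comes out at 21.6 C, much closer to the thermometer than to the forecast.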

Important Conclusions:

  • The weighting given to data by the DA app depends on the size of the errors of the data and on how similar errors are for data that are close together. Most DA apps assume the observation errors are not similar to each other at all. It’s more correct to assume there is some similarity, especially if the observations are close to each other. This amount of similarity is described by what is called the “correlation length scale.” This might be 50 miles for real observations and 200 miles for the 6-hour forecast. There is essentially no similarity (no correlation) when data are separated by two or more correlation length scales.
  • When the DA app makes use of observation error correlations, mistakes in estimating the sizes of the errors are more important than mistakes in estimating the correlation length scales.
  • When the DA app assumes the observation errors are uncorrelated (not like each other at all), the best results are obtained when thinning the observations is combined with some tuning of the estimated observation errors.
  • Without tuning of the estimated sizes of the observation errors, the best way is to combine the data into superobservations with the distance between neighboring superobservations about the same as the correlation length scale.
Read the paper at