The trend can be estimated in various ways, e.g. by first detecting the "seasonality" and removing it, or by simple linear regression. The result will be a quantity (a trend estimator) that has a statistical error. The main question is how to compute that error, e.g. the standard deviation of the trend estimator. This is what I called "sigma".
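As a minimal sketch of the linear-regression route (assuming evenly spaced annual anomalies in a NumPy array — the data source is not specified here): fit a straight line and report the classic OLS standard error of the slope. Note that this textbook formula assumes independent residuals, so for autocorrelated temperature data it understates the true sigma — which is exactly why the empirical approach below is needed.

```python
import numpy as np

def trend_with_sigma(y):
    """Return (slope, OLS standard error of the slope) for evenly spaced data."""
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)          # least-squares line fit
    residuals = y - (slope * t + intercept)
    # Residual variance, with 2 degrees of freedom used by the fit
    s2 = np.sum(residuals**2) / (len(y) - 2)
    # Classic OLS slope error; valid only for uncorrelated residuals
    sigma_slope = np.sqrt(s2 / np.sum((t - t.mean())**2))
    return slope, sigma_slope
```

On a perfectly linear series this returns the exact slope with near-zero sigma; on correlated noise the reported sigma is optimistic.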
One can compute "sigma" in various ways, but one way is to take the linear estimates of the trend over different 10-year intervals and see how those trends typically differ. Then do the same for 20-year intervals, then for 30-year intervals. For example, taking a linear estimate over various 10-year intervals, you get trend numbers between -0.2 and 0.2. This spread is due purely to natural variability and the long-term correlations of temperature. You cannot reduce this uncertainty except by taking longer time intervals. Taking linear estimates over 30-year intervals, I expect you to get numbers between -0.1 and 0.1 or so. Eventually, when you take a large enough time interval, you will start seeing a trend value that is definitely two sigma away from zero. The main question is how large the time interval must be for that. I estimated 100 years.
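The empirical procedure above can be sketched on synthetic data (the AR(1) model and its parameters here are illustrative stand-ins, not the author's): generate a trendless correlated series, fit a line to every sliding window of a given length, and take the spread of those window trends as the empirical sigma. Longer windows should yield a smaller sigma.

```python
import numpy as np

def ar1_series(n, phi=0.6, scale=0.1, seed=0):
    """Trendless AR(1) noise, x[t] = phi*x[t-1] + eps[t]: a toy stand-in
    for natural variability with time correlations."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, scale)
    return x

def window_trend_sigma(y, window):
    """Std. dev. of the fitted slope over all sliding windows of `window` points."""
    t = np.arange(window, dtype=float)
    slopes = [np.polyfit(t, y[i:i + window], 1)[0]
              for i in range(len(y) - window + 1)]
    return np.std(slopes)

series = ar1_series(300)
# Longer windows tame the natural variability: the 30-year sigma
# comes out well below the 10-year sigma.
print(window_trend_sigma(series, 10), window_trend_sigma(series, 30))
```

The same window-by-window bookkeeping applied to real temperature data would give the -0.2 to 0.2 and -0.1 to 0.1 spreads described above.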
After that, there is a second question: did the trend change after 1950? Again, you need to show this beyond two sigma. Right now, I think the data is insufficient for that. Using my estimates of sigma, I expect that we need to wait until at least 2050 to see whether the trend after 1950 is the same or different.
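The two-sigma comparison of the pre- and post-1950 trends amounts to this check (the numbers below are placeholders for illustration, not estimates from the data): the difference of two slope estimates is significant only if it exceeds twice their combined error.

```python
import math

def trends_differ_two_sigma(slope_a, sigma_a, slope_b, sigma_b):
    """True if two trend estimates differ by more than twice their combined sigma."""
    # Errors of independent estimates add in quadrature
    combined = math.sqrt(sigma_a**2 + sigma_b**2)
    return abs(slope_a - slope_b) > 2.0 * combined

# Placeholder numbers only: a 0.05 difference against sigmas of 0.04 each
# is well inside two combined sigmas, so no change can be claimed.
print(trends_differ_two_sigma(0.05, 0.04, 0.10, 0.04))  # False
```

With fixed sigmas per interval, only longer intervals (hence smaller sigmas) can eventually push a real difference past the two-sigma bar — the reason for the wait-until-2050 estimate.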
Because of the natural variability, it is impossible to show that the trend equals some value X in 2000-2010 and a different value Y in 2010-2020. The values X and Y are statistically indistinguishable because of too much natural variability and because of the large time correlations in the temperature. The data shows very clearly that a decade is far too short for us to estimate the trend reliably - even if we had no missing data, and even if we could measure the temperature with a precision of 0.001 °C every microsecond at every point on the Earth and in the atmosphere over a 1 mm grid in three dimensions. Natural variability together with time correlations makes this impossible.