So, we are back to the original question: estimate "b" from a given dataset T(t), using the assumption that T(t) = a + b*t + U(t), where U is unknown noise with zero mean.
You performed a calculation where you estimated "b" linearly over different time intervals. Let us first assume that the true value of "b" is the same for the whole period from 1900 to 2020. Then you can perform a linear fit for "b" over different time intervals: for example, take the 20-year intervals 1900-1920, 1901-1921, 1902-1922, and so on up to 2000-2020. The result is 101 different estimates of "b". They are not independent, of course, but highly correlated. Nevertheless, you can look at the resulting distribution of estimates and see whether there is evidence that the mean of "b" is nonzero.
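To make the procedure concrete, here is a minimal sketch in Python/NumPy, assuming the annual temperatures for 1900-2020 sit in an array T. The variable names and the synthetic data are my own illustration, not your actual calculation:

```python
import numpy as np

# Synthetic data under the assumed model T(t) = a + b*t + U(t)
# (replace with the real annual temperature series for 1900-2020).
rng = np.random.default_rng(0)
years = np.arange(1900, 2021)                                # 1900..2020 inclusive
T = 0.01 * (years - 1900) + rng.normal(0.0, 0.2, years.size)

window = 20                                                  # 20-year fitting interval
slopes = []
for start in range(1900, 2001):                              # 1900-1920, 1901-1921, ..., 2000-2020
    mask = (years >= start) & (years <= start + window)
    # np.polyfit with degree 1 returns [slope, intercept] of the least-squares line
    b_hat, _ = np.polyfit(years[mask], T[mask], 1)
    slopes.append(b_hat)

slopes = np.array(slopes)                                    # 101 overlapping, correlated estimates of b
```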
You can compute the mean and the standard deviation of this set of estimates of "b". Roughly, if |mean| > 2 standard deviations, then the mean is nonzero with high confidence. You can also use other statistical tests for a nonzero mean, of course.
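For the check itself, a rough sketch building on the `slopes` array from the previous snippet might look like the following. The one-sample t-test at the end is just one example of an "other statistical test", and since the overlapping windows make the estimates correlated, the numbers should be read as indicative only:

```python
import numpy as np
from scipy import stats

def mean_nonzero_check(estimates: np.ndarray) -> None:
    """Rough check that the mean of the slope estimates differs from zero."""
    mean = estimates.mean()
    stdev = estimates.std(ddof=1)
    print(f"mean b = {mean:.4f}, stdev = {stdev:.4f}")
    # Rule of thumb from the comment above: |mean| > 2 * stdev
    print("|mean| > 2*stdev:", abs(mean) > 2 * stdev)
    # One possible formal test for a nonzero mean; it assumes independent
    # samples, which the overlapping intervals violate, so treat with care.
    t_stat, p_value = stats.ttest_1samp(estimates, popmean=0.0)
    print(f"t = {t_stat:.2f}, p = {p_value:.3g}")

mean_nonzero_check(slopes)
```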