sassa_nf wrote in juan_gandhi's journal, 2020-01-08 02:25 pm (UTC)

"Eventually, when you take a large enough time interval, you will start seeing a trend value that is definitely two sigma away from zero."

I'd like to understand this bit better. On what grounds do we use "two sigma" here? Is the slope ("the trend") meant to be normally distributed?

I mean, I can't wrap my head around the double assumption: if T were a normally distributed value, then "the trend" would be some f(T) that is not necessarily normally distributed; it would be for some f and wouldn't be for some other f. But for "the trend" to be non-zero, T must not be a normally distributed value, so why do we assume f(T) is normally distributed? How do we know that's the right hypothesis to test?

I mean, even the absence of a normal distribution for f(T) doesn't mean there is "a trend": T may still be a normally distributed random value, with f such that f(T) is not normally distributed, because the observations are not i.i.d. (not "independent").
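
To make my question concrete, here is a minimal simulation sketch. It assumes, since the post doesn't say, that "the trend" means an ordinary least-squares slope over the interval and that the test statistic is that slope divided by its textbook standard error; both are my guesses, not something stated above.

    import numpy as np

    rng = np.random.default_rng(0)

    n, trials = 100, 20000
    t = np.arange(n, dtype=float)
    tc = t - t.mean()            # centered time index
    sxx = (tc ** 2).sum()

    exceed = 0
    for _ in range(trials):
        # Null hypothesis: T_1..T_n are i.i.d. standard normal, i.e. no trend.
        T = rng.standard_normal(n)
        # OLS slope ("the trend") and the usual standard error of that slope.
        slope = (tc * T).sum() / sxx
        resid = T - T.mean() - slope * tc
        se = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)
        if abs(slope) > 2 * se:
            exceed += 1

    # Prints roughly 0.05: under these assumptions the slope is a linear
    # combination of the T's, hence normally distributed, and "two sigma"
    # excursions happen about 5% of the time even with no trend at all.
    print(exceed / trials)

That is the only sense of "two sigma" I can reconstruct, and it leans on exactly the thing I'm doubting: replace the i.i.d. draw with, say, a positively autocorrelated AR(1) series and the printed fraction rises well above 5%, so the threshold stops meaning what it is supposed to mean.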
