Yesterday's post explained the two-step procedure of running an equal-variance test first and then choosing the test for equal means according to whether the variances were judged equal. It can be summarized as follows.
Null hypothesis
σ_a^2 = σ_b^2
Statistic and distribution

T = \frac{U_a^2}{U_b^2} \ge 1

T \sim F(N_a - 1, N_b - 1)
The rejection region depends on how the alternative hypothesis compares the variances (equal vs. greater, i.e. one-sided or two-sided).
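As a concrete illustration, the F-test step above can be sketched with `scipy.stats.f`. This is a minimal sketch of my own (the data and variable names are assumptions, not from the original post), putting the larger unbiased variance in the numerator so that T ≥ 1:

```python
import numpy as np
from scipy import stats

np.random.seed(0)
a = stats.norm.rvs(loc=0, scale=1, size=30)
b = stats.norm.rvs(loc=0, scale=1, size=25)

# Unbiased sample variances U_a^2, U_b^2
u2_a = a.var(ddof=1)
u2_b = b.var(ddof=1)

# Put the larger variance in the numerator so that T >= 1
if u2_a >= u2_b:
    t_stat, dfn, dfd = u2_a / u2_b, len(a) - 1, len(b) - 1
else:
    t_stat, dfn, dfd = u2_b / u2_a, len(b) - 1, len(a) - 1

# Two-sided p-value from the F(N_a - 1, N_b - 1) distribution,
# capped at 1 after doubling the upper tail
p = min(1.0, 2 * stats.f.sf(t_stat, dfn, dfd))
print(t_stat, p)
```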
Today, however, I am going to argue against yesterday's procedure.
In classical statistics textbooks, when the population variances σ_a^2 and σ_b^2 of two normal populations are unknown and are judged to be unequal, [Welch's t-test](http://ja.wikipedia.org/wiki/%E3%82%A6%E3%82%A7%E3%83%AB%E3%83%81%E3%81%AEt%E6%A4%9C%E5%AE%9A) is used.
Student's t-test assumes that the variances are equal; the major difference is that Welch's test does not require that assumption.
In recent years, methods that do not assume equal variances have become mainstream, and there is even a growing sense that the default t-test should simply be Welch's t-test (see the blog post at 2013/12/welchtanovastatwing.html).
If you read the following articles and the links within them, you will see the problems with the equal-variance test (F-test) → t-test flow that is common in statistics textbooks.
- Problems from the equal-variance test to the t-test, analysis of variance (ANOVA), and Welch's test: http://note.chiebukuro.yahoo.co.jp/detail/n13859
- On the multiplicity problem that arises when comparing the means of two independent groups: http://www2.vmas.kitasato-u.ac.jp/lecture0/statistics/stat_info03.pdf
- A paper that criticizes the two-step test and recommends standardizing on tests that do not assume equal variances: http://beheco.oxfordjournals.org/content/17/4/688.full
For a deeper understanding, please also refer to the literature on Welch's test.
So, what does SciPy offer here?
If you read the reference for scipy.stats.ttest_ind, you will find an equal_var parameter. Passing equal_var=False drops the equal-variance assumption, i.e., it performs Welch's t-test.
The degrees of freedom m of this t distribution are as follows.
m = \frac{\left(\frac{S_a^2}{n_a - 1} + \frac{S_b^2}{n_b - 1}\right)^2}{\frac{(S_a^2)^2}{(n_a - 1)^3} + \frac{(S_b^2)^2}{(n_b - 1)^3}}
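To check my own reading of this formula (a sketch, not from the original post): if S^2 is taken to be the biased sample variance (ddof=0), then S^2/(n - 1) equals u^2/n, the squared standard error with the unbiased variance u^2, and the formula reduces to the usual Welch–Satterthwaite expression. The data and helper function below are my own assumptions:

```python
import numpy as np
from scipy import stats

np.random.seed(1)
a = stats.norm.rvs(loc=0, scale=1, size=40)
b = stats.norm.rvs(loc=0, scale=3, size=15)

def welch_df(x, y):
    # Biased sample variances S^2 (ddof=0), as in the formula above
    sx2, sy2 = x.var(ddof=0), y.var(ddof=0)
    nx, ny = len(x), len(y)
    num = (sx2 / (nx - 1) + sy2 / (ny - 1)) ** 2
    den = sx2 ** 2 / (nx - 1) ** 3 + sy2 ** 2 / (ny - 1) ** 3
    return num / den

m = welch_df(a, b)

# Cross-check: the standard Welch-Satterthwaite form with
# unbiased variances u^2 and squared standard errors u^2 / n
vx = a.var(ddof=1) / len(a)
vy = b.var(ddof=1) / len(b)
m2 = (vx + vy) ** 2 / (vx ** 2 / (len(a) - 1) + vy ** 2 / (len(b) - 1))
print(m, m2)
```

The two values agree, which supports the ddof=0 reading of S^2.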
The rest follows the SciPy reference: when n1 != n2, the t-test and Welch's test give the following results.
```python
from scipy import stats
import numpy as np

np.random.seed(12345678)
rvs1 = stats.norm.rvs(loc=5, scale=10, size=500)
rvs2 = stats.norm.rvs(loc=5, scale=20, size=100)

# Student's t-test
stats.ttest_ind(rvs1, rvs2)
# => (0.24107173714677796, 0.8095821484893867)

# Welch's t-test
stats.ttest_ind(rvs1, rvs2, equal_var=False)
# => (0.15778525230427601, 0.87491760438549948)
```
Also, as mentioned in the blog post linked earlier, Welch's t-test is the default for t-tests in the statistical software Statwing.
Based on the above, I recommend using Welch's t-test regardless of whether the variances are equal.
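A quick simulation (my own construction, not from the original post) illustrates why this recommendation is reasonable: with equal means but unequal variances and unequal sample sizes, Student's t-test rejects the true null far more often than the nominal 5%, while Welch's test stays close to it.

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
n_trials = 2000
student_fp = welch_fp = 0
for _ in range(n_trials):
    # Same mean, very different variances, unequal sample sizes
    a = rng.normal(loc=0, scale=1, size=50)
    b = rng.normal(loc=0, scale=5, size=10)
    if stats.ttest_ind(a, b)[1] < 0.05:
        student_fp += 1
    if stats.ttest_ind(a, b, equal_var=False)[1] < 0.05:
        welch_fp += 1

# False-positive rates at the nominal 5% level
print(student_fp / n_trials, welch_fp / n_trials)
```

In this setup, Student's rate is inflated well above 0.05 because the pooled variance underestimates the standard error of the small, high-variance group, whereas Welch's rate stays near the nominal level.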