PROC UNIVARIATE uses a modified Kolmogorov statistic to test the data against a normal distribution with mean and variance equal to the sample mean and variance.
The NORMAL option requests tests for normality that include a series of goodness-of-fit tests based on the empirical distribution function. The table provides test statistics and p-values for the Shapiro-Wilk test (provided the sample size is less than or equal to 2000), the Kolmogorov-Smirnov test, the Anderson-Darling test, and the Cramér-von Mises test. This option does not apply if you use a WEIGHT statement.
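The SAS output itself is not reproduced here, but the same four goodness-of-fit tests can be sketched in Python with scipy. The data, seed, and parameters below are hypothetical; note that plugging the sample mean and standard deviation into a plain Kolmogorov-Smirnov test makes its p-value conservative, which is exactly why PROC UNIVARIATE uses a modified Kolmogorov statistic.

```python
import numpy as np
from scipy import stats

# Hypothetical sample; in practice this would be your data set.
rng = np.random.default_rng(42)
x = rng.normal(loc=10.0, scale=2.0, size=200)

# Shapiro-Wilk test
w, p_sw = stats.shapiro(x)

# Kolmogorov-Smirnov test against a normal with estimated parameters
# (p-value is conservative because the parameters come from the sample)
d, p_ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))

# Anderson-Darling test (scipy returns critical values, not a p-value)
ad = stats.anderson(x, dist="norm")

# Cramér-von Mises test
cvm = stats.cramervonmises(x, "norm", args=(x.mean(), x.std(ddof=1)))

print(f"Shapiro-Wilk p = {p_sw:.3f}")
print(f"Kolmogorov-Smirnov D = {d:.3f}, p = {p_ks:.3f}")
print(f"Anderson-Darling A^2 = {ad.statistic:.3f}")
print(f"Cramér-von Mises p = {cvm.pvalue:.3f}")
```

Unlike the SAS table, scipy reports the Anderson-Darling result as a statistic with a set of critical values rather than a single p-value.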
The empirical CDF is denoted by F_n(x), the proportion of sample values less than or equal to x. The Kolmogorov-Smirnov statistic (D) is based on the largest vertical difference between the theoretical and the empirical cumulative distribution function:

D = sup_x |F(x) - F_n(x)|

The null and the alternative hypotheses are:

H0: the data follow the specified distribution
H1: the data do not follow the specified distribution

The hypothesis regarding the distributional form is rejected at the chosen significance level (alpha) if the test statistic, D, is greater than the critical value obtained from a table.
The Kolmogorov-Smirnov statistic is computed as the maximum of D+ and D-, where D+ is the largest vertical distance between the EDF and the distribution function when the EDF is greater than the distribution function, and D- is the largest vertical distance when the EDF is less than the distribution function.
The Kolmogorov-Smirnov statistic belongs to the supremum class of EDF statistics. This class of statistics is based on the largest vertical difference between F_n(x) and F(x).
The Kolmogorov-Smirnov statistic, the Anderson-Darling statistic, and the Cramér-von Mises statistic are based on the empirical distribution function (EDF). However, some EDF tests are not supported when certain combinations of the parameters of a specified distribution are estimated. You determine whether to reject the null hypothesis by examining the p-value that is associated with a goodness-of-fit statistic. When the p-value is less than the chosen significance level (alpha), you reject the null hypothesis and conclude that the data did not come from the specified distribution.
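The p-value decision rule in the last sentence amounts to a one-line comparison. A minimal sketch, assuming alpha = 0.05 and a deliberately non-normal (exponential) sample so that the null is rejected:

```python
import numpy as np
from scipy import stats

alpha = 0.05  # chosen significance level (assumption for this example)

# Exponential data tested against a standard normal: clearly misspecified.
rng = np.random.default_rng(1)
sample = rng.exponential(size=300)

stat, p_value = stats.kstest(sample, "norm")
if p_value < alpha:
    print("reject H0: the data did not come from the specified distribution")
else:
    print("fail to reject H0")
```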
We would use a histogram, P-P plot, or Q-Q plot to show that the data are approximately normally distributed, but these only give an indication that the population is normally distributed. Statistically stronger is a formal test of normality such as Kolmogorov-Smirnov or Shapiro-Wilk. I try to say Kolmogorov-Smirnov at least once a week during semester!