ClusterSeer provides several ways to evaluate the significance of the test statistic (Tk).
ClusterSeer provides the Upper-tail P-value, which is the probability under the null hypothesis of observing a Tk as large as or larger than the observed value. This P-value is based on a normal approximation to the null distribution of Tk.
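As a rough illustration of this calculation (not ClusterSeer output), the upper-tail P-value under a normal approximation can be obtained from a z-score. The observed Tk and its null mean and standard deviation below are placeholder values; ClusterSeer derives the null moments internally.

```python
from scipy.stats import norm

# Placeholder values: the observed Tk and its mean and standard
# deviation under the null hypothesis (normally derived analytically).
t_k, null_mean, null_sd = 42.0, 35.0, 4.0

# Upper-tail P-value: probability of a Tk this large or larger
# under the normal approximation.
z = (t_k - null_mean) / null_sd
p_upper = norm.sf(z)  # survival function = 1 - CDF
print(f"z = {z:.2f}, upper-tail P = {p_upper:.4f}")
```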
ClusterSeer also generates a Monte Carlo P-value for each k. The data are randomized by shuffling the case-control labels across the fixed spatial locations, so the observed Tk can be compared with the distribution of Tk obtained from these randomized data sets.
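The sketch below illustrates the permutation idea in Python; it is not ClusterSeer's implementation. The Tk statistic is written in a simplified form (for each case, count how many of its k nearest neighbours are also cases), and the function names, coordinates, labels, and number of simulations are all hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def cuzick_edwards_tk(coords, is_case, k):
    """Simplified Tk: for each case, count how many of its k nearest
    neighbours are also cases, and sum over all cases."""
    is_case = np.asarray(is_case, dtype=bool)
    tree = cKDTree(coords)
    # Query k + 1 neighbours because each point's nearest neighbour is itself.
    _, idx = tree.query(coords, k=k + 1)
    neighbours = idx[:, 1:]                # drop the self-match
    case_neighbours = neighbours[is_case]  # neighbours of cases only
    return int(is_case[case_neighbours].sum())

def monte_carlo_p(coords, is_case, k, n_sim=999, seed=None):
    """Monte Carlo P-value: shuffle case-control labels over the fixed
    locations and compare the observed Tk with the randomized values."""
    rng = np.random.default_rng(seed)
    observed = cuzick_edwards_tk(coords, is_case, k)
    labels = np.asarray(is_case, dtype=bool).copy()
    exceed = 0
    for _ in range(n_sim):
        rng.shuffle(labels)                # relabel; locations stay fixed
        if cuzick_edwards_tk(coords, labels, k) >= observed:
            exceed += 1
    return (exceed + 1) / (n_sim + 1)      # count the observed data set itself
```

The P-value is the proportion of randomized data sets whose Tk is at least as large as the observed one, with the observed arrangement included in the reference set.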
ClusterSeer's multiple comparisons feature lets you account for multiple testing when running the Cuzick & Edwards method: it reports a combined P-value for all tests performed at a single initial alpha level, using either the Bonferroni or the Simes adjustment.
Bonferroni: Pc = j[min(Pi)]
Simes: Pc = min[(j + 1 - i)P(i)]
In this case, Pc denotes the combined P-value for all tests, Pi the P-value for an individual test, j the number of comparisons, and i the rank of an individual test when the P-values are ordered from smallest to largest (so P(i) is the i-th smallest P-value). You can compare Pc to your original alpha level to see whether the set of tests shows significant results.
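As an informal sketch (not ClusterSeer output), the two combined P-values can be computed from a set of per-test P-values as follows; sorting the P-values in ascending order and capping the result at 1 are assumptions of this example.

```python
import numpy as np

def combined_p_values(p_values):
    """Combined P-values for j tests, following the formulas above."""
    p = np.sort(np.asarray(p_values, dtype=float))  # P(1) <= ... <= P(j)
    j = p.size
    i = np.arange(1, j + 1)
    bonferroni = min(1.0, j * p[0])                 # Pc = j[min(Pi)]
    simes = min(1.0, np.min((j + 1 - i) * p))       # Pc = min[(j + 1 - i)P(i)]
    return float(bonferroni), float(simes)

# Hypothetical per-k P-values for four values of k
print(combined_p_values([0.03, 0.20, 0.04, 0.60]))  # -> (0.12, 0.12)
```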
This option parallels the combined P-value feature available under Multiple Comparisons for other methods, except that it uses Simes' formula instead of Holm's.