Process Description
Mixed Model Power
The Mixed Model Power process assists you in the planning of your experiments. Starting with an exemplary experimental design data set and parameter settings for a relevant mixed model, it enables you to calculate power curves for a range of Type 1 error probabilities (alpha). In other words, this process helps you decide how big an experiment you need to run in order to be reasonably assured that the true effects in the study (change in gene expression, for example) are deemed statistically significant. Conversely, this process also enables you to calculate the statistical power of an experiment, given a specified sample size. This process is typically run before you run your experiment, and it helps to have run a pilot study in order to determine reasonable values for variance components.
Mixed Model Power computes the statistical power of a set of one-degree-of-freedom hypothesis tests arising from a mixed linear model. You specify an experimental design file, parameters for relevant PROC MIXED statements (including fixed values for the variance components and ESTIMATE statements), and ranges of values for alpha and effect sizes, and the process outputs a table of power values calculated using a noncentral t-distribution.
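The underlying calculation can be sketched in a short SAS DATA step. This is a minimal illustration, not the process's actual code: the noncentrality parameter (the estimate divided by its standard error), the degrees of freedom, and alpha below are assumed example values.

```sas
/* Hedged sketch of a noncentral-t power calculation.
   delta, df, and alpha are illustrative values only. */
data power_sketch;
  alpha = 0.05;                       /* Type 1 error probability        */
  df    = 10;                         /* denominator degrees of freedom  */
  delta = 2.5;                        /* noncentrality = estimate / SE   */
  tcrit = tinv(1 - alpha/2, df);      /* two-sided critical value        */
  /* power = P(|T| > tcrit) under the noncentral t-distribution */
  power = 1 - probt(tcrit, df, delta) + probt(-tcrit, df, delta);
run;
```

The SAS functions TINV and PROBT accept a noncentrality parameter, which is what makes this one-line power computation possible.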
What do I need?
Two data sets are required to run Mixed Model Power. The first is the Experimental Design Data Set (EDDS). This data set describes the design of the proposed experiment, typically for one gene or protein. It must include all relevant design variables of the experiment for which you want to compute power. The sample size equals the number of rows in this data set.
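As an illustration, an EDDS for a hypothetical two-treatment design with three arrays per treatment might look like the following; the variable names here are invented for the example and are not required by the process.

```sas
/* Hypothetical EDDS: 6 rows, so the sample size is n = 6 */
data edds;
  input Sample Treatment $ Array $;
  datalines;
1 A a1
2 A a2
3 A a3
4 B b1
5 B b2
6 B b3
;
run;
```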
The second required file contains PROC MIXED ESTIMATE statements. See Estimate Builder for more details. ESTIMATE statements specify linear hypotheses of interest that are valid for each specified fixed-effects model. A distinct power value is computed for each hypothesis test.
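For example, assuming a model with a fixed effect named Treatment that has two levels (the effect and label names here are hypothetical), a one-degree-of-freedom contrast comparing the two levels could be written as:

```sas
/* Hypothetical contrast: difference between two Treatment levels */
estimate 'Trt A vs Trt B' Treatment 1 -1;
```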
For detailed information about the files and data sets used or created by JMP Genomics software, see Files and Data Sets.
Output/Results
The output of the Mixed Model Power process includes one output data set listing the t-statistics and the associated power values for each multiplier and each level of alpha (not shown), and Overlay Plots showing the associated power curves. This output is accessed from the tabbed Results window.
Examine the sample overlay plot shown below.
Effect sizes (log2 differences) are plotted along the x-axes. Power is plotted along the y-axis of each plot. The greater the power, the higher the probability of rejecting the null hypothesis (in this case, that there is no difference in expression due to the experimental variable) when the observed difference is real. Note that, as might be expected, power increases for all effects as the effect size increases. In other words, the greater the difference due to the effect, the more likely you are to successfully conclude that the observed difference is real.
In all cases, you are also less likely to correctly reject the null hypothesis at more stringent (smaller) levels of alpha.
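Both trends can be reproduced with a small grid calculation. This is a hedged sketch, not the process's own code: the degrees of freedom, standard error, and the alpha and effect-size ranges below are assumed example values.

```sas
/* Sketch of the two trends: power rises with effect size and
   falls at more stringent alpha.  df and se are illustrative. */
data power_grid;
  df = 10;
  se = 0.5;                                /* assumed standard error */
  do alpha = 0.001, 0.01, 0.05;
    tcrit = tinv(1 - alpha/2, df);
    do effect = 0.25 to 2.0 by 0.25;       /* log2 difference        */
      delta = effect / se;
      power = 1 - probt(tcrit, df, delta) + probt(-tcrit, df, delta);
      output;
    end;
  end;
run;
```

Plotting power against effect with one curve per alpha reproduces the shape of the overlay plots described above.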
You might need to adjust the experimental design, depending on the results of this analysis. You might find that the power of the proposed design is not sufficient for you to reject the null hypothesis with confidence. One way to increase power is to increase the size of your experiment, adding technical replicates, for example, until the power is sufficient. Alternatively, if the predicted power is more than sufficient for your experimental conditions, you might be able to reduce the size of the experiment, saving valuable resources.
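A back-of-the-envelope version of this sizing exercise can be sketched as a loop that grows the per-group replication until a target power is reached. The residual standard deviation, effect size, and target power below are assumed values; in practice, the standard deviation would come from a pilot study, and the process itself handles designs far more general than this two-group case.

```sas
/* Hedged sketch: smallest number of replicates per group giving
   power >= 0.80 for a two-group comparison.  sigma, effect, and
   the 0.80 target are assumed example values. */
data sample_size;
  sigma  = 0.8;                            /* residual SD (from pilot) */
  effect = 1.0;                            /* log2 difference to detect */
  alpha  = 0.05;
  do n = 2 to 50;
    se    = sigma * sqrt(2 / n);           /* SE of two-group difference */
    df    = 2*n - 2;
    tcrit = tinv(1 - alpha/2, df);
    power = 1 - probt(tcrit, df, effect/se) + probt(-tcrit, df, effect/se);
    output;
    if power >= 0.80 then leave;           /* stop at the first adequate n */
  end;
run;
```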
Regardless of how you adjust the conditions, you should plan on rerunning this analysis using the new design. To compute power for a different design, use DOE > Custom Design to generate the design of interest, save the table as a SAS data set, and rerun Mixed Model Power using the new design as the EDDS.
With the results of this process, you can now design the size of your experiment to ensure there is sufficient statistical power to draw meaningful conclusions.