Note: Although a custom design is not optimal for a non-linear situation, in this example, for simplicity, you will use the Custom Design platform rather than the Nonlinear Design platform. For an example illustrating why a design constructed using the Nonlinear Design platform is better than an orthogonal design, see Examples of Nonlinear Designs in the Design of Experiments Guide.
2. Define simulated binomial responses and fit a generalized linear model to the simulated data. See Define Simulated Responses and Fit the Generalized Linear Model.
3. Simulate likelihood ratio test p-values to explore the power of detecting a difference over a range of probability values that is determined by the linear predictor. See Explore Power.
The responses are simulated from a logistic model with the linear predictor

log(π(X) / (1 − π(X))) = β0 + β1X1 + β2X2 + β3X3 + β4X4 + β5X5 + β6X6

where π(X) denotes the probability that a part passes at the given design settings X = (X1, X2, ..., X6). The coefficients in the linear predictor are:

β0 = 0
β1 = 1
β2 = 0.9
β3 = 0.8
β4 = 0.7
β5 = 0.6
β6 = 0.5
Because the intercept in the linear predictor is 0, when all factors are set to 0, the probability of a passing part equals 50%. The probabilities associated with the levels of the ith factor, when all other factors are set to 0, are given below.
Factor   Coefficient   Pass Rate at −1   Pass Rate at +1   Difference in Pass Rates
X1       1.0           0.269             0.731             46.2%
X2       0.9           0.289             0.711             42.2%
X3       0.8           0.310             0.690             38.0%
X4       0.7           0.332             0.668             33.6%
X5       0.6           0.354             0.646             29.1%
X6       0.5           0.378             0.622             24.5%
For example, when all factors other than X1 are set to 0, the difference in pass rates that you want to detect is 46.2%. The smallest difference in pass rates that you want to detect occurs when all factors other than X6 are set to zero and that difference is 24.5%.
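These pass rates follow directly from the logistic model. The short sketch below, which uses Python and NumPy purely as an illustration and is not part of the JMP steps in this example, computes the pass rate at the low and high setting of each factor (with all other factors set to 0) and the resulting difference:

```python
import numpy as np

# Coefficients of the linear predictor for X1 through X6 (the intercept is 0)
beta = {"X1": 1.0, "X2": 0.9, "X3": 0.8, "X4": 0.7, "X5": 0.6, "X6": 0.5}

def pass_probability(eta):
    """Inverse logit: probability that a part passes for linear predictor eta."""
    return 1.0 / (1.0 + np.exp(-eta))

for factor, b in beta.items():
    p_low = pass_probability(-b)   # factor set to -1, all other factors at 0
    p_high = pass_probability(+b)  # factor set to +1, all other factors at 0
    print(f"{factor}: {p_low:.3f} vs {p_high:.3f}, difference {p_high - p_low:.1%}")
```

For X1 this prints a difference of 46.2%, and for X6 a difference of 24.5%, matching the values above.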
Note: If you prefer to skip the steps in this section, select Help > Sample Data Library and open Design Experiment/Binomial Experiment.jmp. Click the green triangle next to the DOE Simulate script and then go to Define Simulated Responses.
1. Select DOE > Custom Design.
2. In the Factors outline, type 6 next to Add N Factors.
3. Click Add Factor > Continuous.
4. Click Continue.
5. Under Number of Runs, type 60 next to User Specified.
6. Click the Custom Design red triangle and select Simulate Responses.
Note: Setting the Random Seed in step 7 and Number of Starts in step 8 reproduces the exact results shown in this example. In constructing a design on your own, these steps are not necessary.
7. (Optional) Click the Custom Design red triangle and select Set Random Seed. Type 12345 and click OK.
8. (Optional) Click the Custom Design red triangle and select Number of Starts, enter a value, and click OK.
9. Click Make Design.
10. Click Make Table.
Note: The entries in your Y and Y Simulated columns will differ from those that appear in Figure 9.166.
Figure 9.166 Partial View of Design Table
Figure 9.167 Simulate Responses Window
– Y contains a set of values simulated according to the specifications in the Simulate Responses window.
– Y Simulated contains a formula that calculates its values using the formula for the model that is specified in the Simulate Responses window. To view the formula, click on the plus sign to the right of the column name in the Columns panel.
Your plan is to simulate binomial response data where the probability of success is given by a logistic model. For more information about simulating responses, see Simulate Responses in the Design of Experiments Guide.
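Outside of JMP, the same kind of simulation can be sketched in a few lines. The snippet below is only an illustration, assuming a randomly generated stand-in for the 60-run design and the coefficient values listed in Plan for the Example; it draws one binomial response per run with success probability given by the logistic model:

```python
import numpy as np

rng = np.random.default_rng(12345)

# Stand-in for the 60-run design: settings of -1 or +1 for factors X1 through X6
design = rng.choice([-1.0, 1.0], size=(60, 6))

# Coefficients for X1 through X6; the intercept is 0
beta = np.array([1.0, 0.9, 0.8, 0.7, 0.6, 0.5])

eta = design @ beta                      # linear predictor for each run
prob_pass = 1.0 / (1.0 + np.exp(-eta))   # logistic probability of a passing part

# One unit per trial (N = 1), so each simulated response is 0 (fail) or 1 (pass)
y_simulated = rng.binomial(n=1, p=prob_pass)
```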
Note: If you prefer to skip the steps in this section, click the green triangle next to the Simulate Model Responses script. Then go to Fit the Generalized Linear Model.
1. In the Simulate Responses window, enter the following values for the coefficients:
– Next to X1, 1 is entered by default. Keep that value.
– Next to X2, type 0.9.
– Next to X3, type 0.8.
– Next to X4, type 0.7.
– Next to X5, type 0.6.
– Next to X6, type 0.5.
2. In the Distribution outline, select Binomial.
Leave the value for N set to 1, indicating that there is only one unit per trial.
Figure 9.168 Completed Simulate Responses Window
3. Click Apply.
In the design data table, the Y Simulated column is replaced with a formula column that generates binomial values. A column called Y N Trials indicates the number of trials for each run.
4. (Optional) Click on the plus sign to the right of Y Simulated in the Columns panel.
Figure 9.169 Random Binomial Formula for Y Simulated
5. Click Cancel.
1. In the data table, click the green triangle next to the Model script.
2. In the Fit Model launch window, select Y in the Y role and click Remove.
3. Select Y Simulated and click Y.
You are replacing Y with a column that contains randomly generated binomial values.
4. From the Personality menu, select Generalized Linear Model.
5. From the Distribution menu, select Binomial.
6. Click Run.
Next, explore the power of tests to detect a difference over the range of probability values determined by the linear predictor with the coefficient values given in Plan for the Example.
1. In the Effect Tests report, right-click in the Prob>ChiSq column and select Simulate.
Figure 9.170 Simulate Window
The column Y Simulated under Column to Switch Out contains the values that were used to fit the model. When you select Y Simulated under Column to Switch In, you tell JMP to replace the values in Y Simulated, for each simulation, with a fresh set of values generated by the formula stored in the Y Simulated column.
The column that you selected in the report, Prob>ChiSq, contains the p-value for a likelihood ratio test of whether the associated main effect is 0. The Prob>ChiSq value is simulated for each effect listed in the Effect Tests table.
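Conceptually, each simulation draws a fresh set of binomial responses from the fitted formula, refits the generalized linear model, and records the Prob>ChiSq value for each effect. The sketch below illustrates that loop for a single effect; it is only an illustration using Python with statsmodels on a randomly generated stand-in design, not the method JMP uses internally:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(12345)

# Stand-in for the 60-run design with factors X1 through X6 coded as -1/+1
design = rng.choice([-1.0, 1.0], size=(60, 6))
beta = np.array([1.0, 0.9, 0.8, 0.7, 0.6, 0.5])
prob_pass = 1.0 / (1.0 + np.exp(-(design @ beta)))

X_full = sm.add_constant(design)          # intercept plus X1..X6
X_reduced = np.delete(X_full, 1, axis=1)  # drop X1 to test the X1 main effect

def lr_pvalue(y):
    """Likelihood ratio test p-value for the X1 main effect in a binomial GLM."""
    full = sm.GLM(y, X_full, family=sm.families.Binomial()).fit()
    reduced = sm.GLM(y, X_reduced, family=sm.families.Binomial()).fit()
    lr_stat = 2.0 * (full.llf - reduced.llf)
    return chi2.sf(lr_stat, df=1)

# Each simulation: draw new 0/1 responses, refit, and record the p-value
p_values = []
for _ in range(500):
    y_new = rng.binomial(1, prob_pass)
    try:
        p_values.append(lr_pvalue(y_new))
    except Exception:
        pass  # skip the rare simulated sample for which the fit fails
```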
2. Under Column to Switch In, select Y Simulated.
3. Next to Number of Samples, type 500.
4. Click OK.
Note: Because response values are simulated, your simulated p-values will differ from those shown in Figure 9.171.
Figure 9.171 Table of Simulated Results, Partial View
The first row of the table contains the initial values of Prob>ChiSq and is excluded. The remaining 500 rows contain simulated values.
5. Run the Power Analysis script.
Note: Because response values are simulated, your simulated power results will differ from those shown in Figure 9.172.
Figure 9.172 Distribution Plots for the First Three Effects
7.
8.
Note: Because response values are simulated, your simulated power results will differ from those shown in Figure 9.173.
Figure 9.173 Power Results for the First Three Effects
In the Simulated Power outlines, the Rejection Rate for each row gives the proportion of simulated p-values that are smaller than the corresponding Alpha. For example, for X3, which corresponds to a coefficient value of 0.8 and a probability difference of 38%, the simulated power at the 0.05 significance level is 379/500 = 0.758. Table 9.6 summarizes the estimated power at the 0.05 significance level for all effects. Notice how power decreases as the Difference to Detect decreases. Also notice that the power to detect the smallest difference, 24.5% (X6), is only approximately 0.37.
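The rejection-rate arithmetic is straightforward. As an illustration (the helper function below is hypothetical and not part of JMP's output; it simply restates the proportion computed in the Simulated Power outline):

```python
import numpy as np

def simulated_power(p_values, alpha=0.05):
    """Proportion of simulated p-values that fall below the significance level."""
    return float(np.mean(np.asarray(p_values) < alpha))

# For X3 in this example, 379 of the 500 simulated p-values fall below 0.05:
print(379 / 500)  # 0.758, the simulated power reported for X3
```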
Note: Because response values are simulated, your simulated power results will differ from those shown in Table 9.6.
Table 9.6 Estimated Power at the 0.05 Significance Level for Effects X1 through X6