Model Comparisons
Click the button corresponding to a model comparison process. Refer to the table below for guidance.
Process | Choose this process for... | Test Set File Use
Cross Validation Model Comparison (CVMC) | Comparing cross-validation statistics for an arbitrary collection of predictive models and determining which models are best suited for prediction (the general idea is illustrated in the first sketch after this table) | Not specifiable
| Comparing the relative abilities of different predictive models to make consistent, valid predictions by computing performance metrics on one or more test sets | Required
Learning Curve Model Comparison (LCMC) | Constructing and comparing learning curves for predictive model settings that you select; cross validation is used to evaluate each model at different sample sizes, revealing the influence of sample size on model accuracy and variability (see the second sketch after this table) | Optional
| Summarizing the cross-validation results from the Predictive Modeling Review process, comparing multiple predictive models across more than one outcome, and selecting the best models and ensembling their outputs by averaging the predicted values (prediction averaging is also shown in the first sketch after this table) |
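The two sketches below are not part of the software; they are minimal, generic illustrations of the ideas behind these processes. This first sketch uses scikit-learn on synthetic data to show what comparing cross-validation statistics for a collection of candidate models can look like, and how the predictions of the best-scoring models can be ensembled by averaging. The model collection, the data, and the choice to keep two models are all hypothetical.

    # Minimal sketch (not the CVMC process itself): compare cross-validation
    # statistics for several candidate models, then average the predictions
    # of the best-scoring ones.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Hypothetical collection of candidate models to compare.
    models = {
        "logistic": LogisticRegression(max_iter=1000),
        "forest": RandomForestClassifier(random_state=0),
        "svm": SVC(probability=True, random_state=0),
    }

    # Cross-validation statistics: mean accuracy and its spread for each model.
    cv_stats = {}
    for name, model in models.items():
        scores = cross_val_score(model, X_train, y_train, cv=5)
        cv_stats[name] = (scores.mean(), scores.std())
        print(f"{name}: mean={scores.mean():.3f}, sd={scores.std():.3f}")

    # Keep the two best models and ensemble them by averaging predicted
    # probabilities (the averaging step mirrors the review/ensembling idea).
    best = sorted(cv_stats, key=lambda n: cv_stats[n][0], reverse=True)[:2]
    avg_prob = np.mean(
        [models[n].fit(X_train, y_train).predict_proba(X_test)[:, 1] for n in best],
        axis=0,
    )
    ensemble_pred = (avg_prob >= 0.5).astype(int)
    print("ensemble accuracy:", (ensemble_pred == y_test).mean())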
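The second sketch, again with scikit-learn on synthetic data, illustrates the learning-curve idea behind LCMC: a model is evaluated by cross validation at several training-set sizes, so the effect of sample size on accuracy and its variability becomes visible. The model and the grid of sizes are hypothetical.

    # Minimal sketch (not the LCMC process itself): cross-validated accuracy
    # at several training-set sizes for one model.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import learning_curve

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    sizes, train_scores, test_scores = learning_curve(
        RandomForestClassifier(random_state=0),
        X,
        y,
        train_sizes=np.linspace(0.1, 1.0, 5),
        cv=5,
    )

    # Larger samples typically raise mean accuracy and shrink its variability.
    for n, scores in zip(sizes, test_scores):
        print(f"n={n}: mean CV accuracy={scores.mean():.3f}, sd={scores.std():.3f}")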
The remaining processes are results-merging utilities that can be used only after results from multiple runs of the Cross Validation Model Comparison (CVMC) or Learning Curve Model Comparison (LCMC) processes have been generated.
Tip: Consider running these processes when you have a large number of predictive models to compare but insufficient computing resources to include them all in a single CVMC or LCMC run. These processes enable you to compare the combined results in one series of plots (the general idea is sketched after the table below).
Process | Choose this process for...
| Merging the results of multiple CVMC runs into one set of results
| Merging the results of multiple LCMC runs into one set of results
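As a rough illustration of the merging idea only, the sketch below stacks per-run result tables into a single table that can drive one series of plots. The file names and column names are hypothetical; the actual CVMC and LCMC outputs are not assumed to have this form.

    # Minimal sketch (not the merging utilities themselves): combine the
    # summary tables from several runs so they can be compared together.
    import pandas as pd

    # Hypothetical per-run summaries with columns "model" and "accuracy".
    runs = ["cvmc_run1.csv", "cvmc_run2.csv", "cvmc_run3.csv"]
    merged = pd.concat(
        [pd.read_csv(path).assign(run=path) for path in runs],
        ignore_index=True,
    )

    # One combined table supports a single series of comparison plots.
    print(merged.groupby("model")["accuracy"].agg(["mean", "std"]))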
See Predictive Modeling for other subcategories.