Clinical Science  |   February 2001
The Cardiac Anesthesia Risk Evaluation Score: A Clinically Useful Predictor of Mortality and Morbidity after Cardiac Surgery
Author Affiliations & Notes
  • Jean-Yves Dupuis, M.D., F.R.C.P.(C.)*
  • Feng Wang, M.D., M.Sc.†
  • Howard Nathan, M.D., F.R.C.P.(C.)‡
  • Miu Lam, Ph.D.§
  • Scott Grimes, R.T.‖
  • Michael Bourke, M.D., F.R.C.P.(C.)#
  • *Associate Professor, †Research Associate, ‡Professor, ‖Respiratory Therapist and Research Assistant, #Assistant Professor, Department of Anesthesia, University of Ottawa Heart Institute. §Associate Professor, Department of Community Health and Epidemiology, Queen’s University.
  • Received from the Department of Anesthesia, University of Ottawa Heart Institute, Ottawa, Ontario, Canada.
Article Information
Anesthesiology, February 2001, Vol. 94, 194–204.
THE growing interest in risk-adjusted analysis of outcome in cardiac surgery has led to the development and validation of several predictive models for postoperative mortality, morbidity, and prolonged hospital stay. 1–15 Most models are multifactorial risk indexes developed using multiple regression analysis. Despite their potential usefulness for quality assurance and perioperative care planning, multifactorial risk indexes remain poorly integrated into clinical practice, probably because they are complex to use, inaccurate in predicting outcome for individual patients, and dependent on clinical variables that are not always available. 10–12,16 In contrast to multifactorial risk indexes, functional classifications such as the New York Heart Association classification or the American Society of Anesthesiologists (ASA) physical status classification are routinely used by anesthesiologists. However, those classifications are not designed to predict outcome after cardiac surgery, and their predictive ability in this setting is therefore limited and inconsistent. 17
Previous studies in cardiac surgery have demonstrated that a large amount of prognostic information can be obtained from a few clinical variables 18,19 or clinical judgment alone. 17,20,21 On that basis, we developed the Cardiac Anesthesia Risk Evaluation (CARE) score, which is a simple risk classification with an ordinal scale (table 1). The CARE score combines clinical judgment and the recognition of three risk factors previously identified by multifactorial risk indexes: comorbid conditions categorized as controlled or uncontrolled, the surgical complexity, and the urgency of the procedure.
Table 1. Cardiac Anesthesia Risk Evaluation Score
In this study, we hypothesized that the CARE score would be a valid predictor of outcome after cardiac surgery and that clinicians would easily integrate this risk model into their practice. Accordingly, the study had three specific objectives: first, to determine the predictive performance of the CARE score in predicting mortality and major postoperative complications; second, to compare the predictive performance of the CARE score with that of three existing multifactorial risk indexes 1–3 for cardiac surgical patients; and finally, to determine the interrater variability and predictive performance of the CARE score when used by experienced cardiac anesthesiologists.
Methods
Population
This was a prospective observational study approved by the Human Research Ethics Committee of the University of Ottawa Heart Institute, Ottawa, Ontario, Canada. Written consents were not obtained from the individual patients because the study was based on data collected for routine care. A total of 3,548 consecutive patients who underwent a cardiac surgical procedure at the University of Ottawa Heart Institute were included. The first 2,000 patients, who had surgery between November 12, 1996 and March 18, 1998, served as a reference group to develop logistic regression models for each risk model tested in the study. The next 1,548 patients, operated on between March 19, 1998 and April 2, 1999, were used for validation of the risk models. Patients undergoing heart transplantation or implantation of ventricular assist devices as a primary surgery were excluded because of the infrequency of those procedures. Patients who underwent more than one cardiac surgical procedure during the same hospitalization were counted as single cases. However, subsequent cardiac or noncardiac procedures were computed as postoperative complications, unless planned before the primary cardiac intervention.
Development of the Cardiac Anesthesia Risk Evaluation Score
To facilitate its use by anesthesiologists, the CARE score was designed to resemble the ASA physical status classification, a model also familiar to surgeons. The rationale behind the definition of each risk category of the CARE score is based on general and accepted knowledge in cardiac surgery (table 1). For example, the definitions of the first category (CARE 1) and last category (CARE 5) are based on the fact that clinicians can correctly identify very low- and very high-risk cardiac surgical patients. 17,21 
In contrast to the assessment of the two risk extremes in cardiac surgery, the subjective estimation of patients with intermediate risk is often inaccurate and inconsistent. 21 The use of a few objective clinical variables results in better risk prediction for those patients. 18,19,22,23 Therefore, two general but objective groups of risk factors were selected to define the intermediate risk levels in the CARE score (CARE 2–4): the complexity of the surgery and the presence of various comorbid conditions, which may be categorized as controlled or uncontrolled. The ranking or relative importance of those covariates in the CARE score is consistent with the findings from most existing multifactorial risk indexes. 1–8 Patients with controlled medical problems (e.g., diabetes mellitus, hypertension) are at greater risk than patients without any disease, but at lower risk than patients with uncontrolled diseases (e.g., heart failure with pulmonary edema, renal insufficiency); hence the rationale for the CARE 2 and 3 categories. Uncontrolled comorbid factors and complex or difficult procedures have comparable scores in most multiple logistic regression models. Thus, the same prognostic weight was given to both groups of factors in the CARE score, which explains why one or the other group can be used to define the CARE 3 category. The CARE 4 category accounts for the fact that uncontrolled medical conditions and complex procedures have an additive effect on risk. 1–8
Finally, special consideration is given to emergency in the CARE score, so that emergency cases can be easily differentiated from others. This is because emergencies or catastrophic states are the most important predictors of outcome in cardiac surgery. 1–8 Therefore, the CARE score has eight possible risk categories: scores 1–5 for elective or urgent procedures, and scores 3E, 4E, and 5E for emergency conditions requiring immediate surgery. Unstable cardiac conditions requiring surgery within 24 h, but not immediately, are considered uncontrolled medical problems. By definition, emergency never applies to CARE 1 or 2.
Data Collection
Preoperative and intraoperative data were collected prospectively by the attending anesthesiologists, who completed a database form at the time of surgery. The database form contains 130 preoperative variables pertaining to the severity of the patient’s disease and comorbid factors before the operation and 80 variables documenting intraoperative procedures and events. Research assistants who also looked after the Cardiac Surgical Unit database verified the accuracy and completeness of the collected data on a daily basis. Postoperative outcome data were retrieved from the medical charts after patients’ discharge and from the Cardiac Surgical Unit database, which contains 92 variables related to postoperative evolution. The quality of the collected data was assessed by an independent observer, who extracted 50% of the database information from 175 randomly selected patients and compared them with those found in the charts. An agreement rate of 98% was found between the database information and the data obtained from the charts.
Outcomes
The primary outcomes in this study were in-hospital mortality, regardless of length of stay (LOS), and morbidity, defined as one or more of the following: (1) cardiovascular—low cardiac output, hypotension, or both treated with intraaortic balloon pump, with two or more intravenous inotropes or vasopressors for more than 24 h, or with both, malignant arrhythmia (asystole and ventricular tachycardia or fibrillation) requiring cardiopulmonary resuscitation, antiarrhythmia therapy, or automatic cardiodefibrillator implantation; (2) respiratory—mechanical ventilation for more than 48 h, tracheostomy, reintubation; (3) neurologic—focal brain injury with permanent functional deficit, irreversible encephalopathy; (4) renal—acute renal failure requiring dialysis; (5) infectious—septic shock with positive blood cultures, deep sternal or leg wound infection requiring intravenous antibiotics, surgical debridement, or both; (6) other—any surgery or invasive procedure necessary to treat a postoperative adverse event associated with the initial cardiac surgery.
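As a sketch of how such a composite morbidity endpoint can be derived from per-system complication flags (the flag names below are hypothetical, not the study database's fields):

```python
# Hypothetical per-system complication flags; morbidity is defined as one
# or more complications in any system, mirroring the composite definition
# above. Field names are invented for the sketch.
COMPLICATION_FLAGS = [
    "cardiovascular", "respiratory", "neurologic",
    "renal", "infectious", "other",
]

def has_morbidity(record: dict) -> bool:
    """True if the patient record shows at least one complication."""
    return any(record.get(flag, False) for flag in COMPLICATION_FLAGS)

patient = {"respiratory": True}  # e.g., mechanical ventilation > 48 h
print(has_morbidity(patient))    # True
```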
In the absence of morbidity data, prolonged postoperative LOS in hospital has been used as a surrogate for morbidity in other studies. 3,10 It was, therefore, analyzed as a secondary outcome in this study. Prolonged postoperative LOS in the hospital was defined as a stay of 14 days or more, corresponding to the 90th percentile for postoperative LOS in the entire study population. This cutoff point was proposed in a previous study because it likely reflects a LOS resulting from complications rather than differences in discharge practice. 3
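The percentile-based cutoff can be illustrated with a short calculation; the LOS values below are invented for the example, and only the 14-day rule comes from the study:

```python
import numpy as np

# Hypothetical postoperative length-of-stay values (days); the study's
# actual distribution is not reproduced here.
los_days = np.array([4, 5, 5, 5, 6, 6, 6, 6, 7, 7,
                     8, 8, 9, 10, 12, 13, 14, 15, 21, 30])

# A 90th-percentile cutoff of the observed LOS distribution.
cutoff = np.percentile(los_days, 90)

# The study's flag for prolonged LOS: a stay of 14 days or more.
prolonged_rate = np.mean(los_days >= 14)
```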
Risk Classification of All Patients
Using the validated preoperative database information, two investigators (JYD and SG) gave a CARE score to each patient. Throughout the study, the investigators used strict definitions of uncontrolled medical problems and complex surgical procedures, as presented in the footnotes of table 1. Multifactorial risk scores were also determined for each patient according to the risk indexes developed for general cardiac surgical populations by Parsonnet et al., 1 Tuman et al., 2 and Tu et al. 3 (table 2). Those classifications contain variables available in most of our patients, and like the CARE score, they apply to all cardiac surgical patients, not only to those undergoing coronary artery surgery. Patients were risk stratified according to the original criteria and definitions described in each of those risk classifications. 1–3 To attenuate the reported inconsistency associated with two subjective risk factors (catastrophic states and rare circumstances) in the original Parsonnet classification, 16 a list of 60 conditions potentially computable under those factors was drawn up and given risk values at the beginning of the study. This list of conditions and scores was used consistently by the investigators to compute the Parsonnet risk score, similar to what was done by Gabrielle et al. 9
Table 2. Multifactorial Risk Indexes for the Prediction of Outcome after Cardiac Surgery
Use of the Cardiac Anesthesia Risk Evaluation Score by Clinicians
Eight experienced cardiac anesthesiologists participated in the study. They were asked to provide a CARE score for all their patients, before surgery. The investigators’ ratings served as a reference for comparison with both the anesthesiologists’ ratings and the multifactorial risk indexes.
Statistical Analysis
The association between the patients’ characteristics and mortality or morbidity was determined by univariate analysis, using a chi-square test or a Fisher exact test when appropriate. For each risk index considered in this study, including the CARE score, separate predictive models for mortality, morbidity, and prolonged postoperative LOS were developed using logistic regression analysis. The CARE score categories 1, 2, 3, 3E, 4, 4E, 5, and 5E were coded 1 through 8, respectively; those numeric codes were used as the independent variable in the logistic regression models, because the models cannot handle nominal categories such as 3E, 4E, and 5E. In the case of the multifactorial risk indexes, logistic regression models were similarly developed, using the original risk categories (not the total score or integer) proposed by the developers of those indexes. The predictive performance of each model was assessed by determining its discrimination and calibration for mortality, morbidity, and prolonged LOS.
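The coding step can be sketched as follows; the data are synthetic, and scikit-learn stands in for whatever statistical software the authors actually used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each CARE category is mapped to an ordinal code 1-8, which then serves
# as the single predictor in a logistic regression for a binary outcome.
care_code = {"1": 1, "2": 2, "3": 3, "3E": 4, "4": 5, "4E": 6, "5": 7, "5E": 8}

# Synthetic cohort: random CARE assignments, with event risk rising
# along the ordinal code (invented coefficients, not the study's).
rng = np.random.default_rng(0)
categories = rng.choice(list(care_code), size=500)
x = np.array([care_code[c] for c in categories]).reshape(-1, 1)
p_true = 1 / (1 + np.exp(-(-4.0 + 0.5 * x.ravel())))
y = rng.random(500) < p_true

model = LogisticRegression().fit(x, y)

# Predicted probability of the outcome for each of the eight categories.
probs = model.predict_proba(np.arange(1, 9).reshape(-1, 1))[:, 1]
```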
Discrimination, or predictive accuracy, was assessed for all predictive models by building receiver operating characteristic (ROC) curves for mortality, morbidity, and prolonged postoperative LOS. 24 The ROC curve plots the true-positive rate (sensitivity) versus the false-positive rate (1 − specificity) of a diagnostic test at different cutoff points (in this study, those points were the various categories of each risk classification). The top of the y-axis represents a perfect test, with a 100% true-positive rate and a 0% false-positive rate. The area under the ROC curve equals the probability of correctly identifying the patient with a complication when the risk classification is applied to randomly selected pairs of patients (one with a complication and one without) on successive trials. Thus, the area under the ROC curve is commonly used to measure and compare the predictive accuracy of risk classifications. An area under the ROC curve of 1.0 indicates perfect accuracy, whereas an area of 0.5 (the line of no discrimination) means that the classification is no better than chance. Areas of 0.5 to 0.7 suggest low accuracy, and values more than 0.7 confirm the usefulness of the risk classification as a risk predictor. 24 In this study, the areas under the various ROC curves and their standard errors (SE) were measured and compared by performing the two-tailed nonparametric ROC analysis of DeLong et al., 25 using the statistical program AccuROC for Windows 95 (Accumetric Corporation, Montreal, Canada), with correction made for multiple comparisons.
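The probabilistic interpretation of the area under the ROC curve described above can be computed directly; a minimal sketch with toy risk scores and outcomes:

```python
import numpy as np

def auc_mann_whitney(scores, outcomes):
    """Area under the ROC curve via its probabilistic interpretation:
    the chance that a randomly chosen patient with the complication has
    a higher risk score than one without (ties count one half)."""
    scores, outcomes = np.asarray(scores), np.asarray(outcomes)
    pos = scores[outcomes == 1]   # patients with the complication
    neg = scores[outcomes == 0]   # patients without
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example with an 8-level risk classification (codes 1-8); here the
# scores separate outcomes perfectly, so the AUC is 1.0.
auc = auc_mann_whitney([1, 2, 2, 3, 4, 5, 6, 7, 8, 8],
                       [0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
```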
All risk indexes in the study, including the CARE score, were tested as categorical variables. Data were tabulated in contingency tables, and calibration, which represents the precision of the probabilities generated by a prediction model, was assessed using the Pearson chi-square goodness-of-fit test, the most commonly used statistic for contingency tables. 26 It compares the predicted outcomes (mortality, morbidity, or prolonged LOS) estimated from the logistic regression models with the observed outcomes for each risk category of the prediction model. 26,27 A small chi-square value (or a P value > 0.05) indicates acceptable calibration.
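A minimal sketch of this goodness-of-fit computation across risk categories; the counts are invented, and the degrees of freedom are taken as the number of categories, as in the study's tables:

```python
import numpy as np
from scipy.stats import chi2

def pearson_gof(observed_events, expected_probs, n_per_category):
    """Pearson chi-square comparing observed with model-expected event
    counts in each risk category; both events and non-events contribute
    to the statistic."""
    obs = np.asarray(observed_events, float)
    n = np.asarray(n_per_category, float)
    exp = np.asarray(expected_probs, float) * n
    stat = np.sum((obs - exp) ** 2 / exp
                  + ((n - obs) - (n - exp)) ** 2 / (n - exp))
    df = len(obs)  # number of risk categories, as reported in the tables
    return stat, 1 - chi2.cdf(stat, df)

# Two toy categories where observed counts match the model exactly,
# giving a chi-square of 0 and a P value of 1 (perfect calibration).
stat, p = pearson_gof([5, 10], [0.05, 0.10], [100, 100])
```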
The interrater variability in using the CARE score was determined by measuring the concordance rate and the κ measure of agreement between the attending anesthesiologists’ and the investigators’ assessments. This analysis was first performed with the data from the entire population. It was then repeated with the data from the reference and validation groups separately, to determine any possible change with use (learning effect) of the CARE score over time. The discrimination and calibration analyses were also performed for the CARE score ratings by the attending anesthesiologists. To confirm the usefulness and uniqueness of the CARE score, its discrimination in predicting outcome was compared with that of other variables commonly used by clinicians: ASA physical status, New York Heart Association classification for heart failure, left ventricular ejection fraction, age, serum creatinine, operative priority, and type of surgery.
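The concordance rate and κ statistic used for the interrater analysis can be sketched as follows; the two rating vectors are toy data, not the study's:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters assigning the same
    set of categories (here, CARE scores) to the same patients."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_obs = np.mean(a == b)  # raw concordance rate
    # Expected agreement by chance, from each rater's category frequencies.
    p_exp = sum(np.mean(a == c) * np.mean(b == c) for c in np.union1d(a, b))
    return (p_obs - p_exp) / (1 - p_exp)

# Toy example: two raters agree on 8 of 10 CARE assignments.
a = ["1", "2", "3", "3E", "4", "4E", "5", "5E", "2", "3"]
b = ["1", "2", "3", "3E", "4", "4E", "5", "5E", "3", "2"]
kappa = cohens_kappa(a, b)
```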
For all analyses and comparisons, a P  value less than 0.05 was used to determine statistical significance.
Results
Population
The patients’ characteristics in the reference group (n = 2,000) and their association with mortality and morbidity are presented in table 3. The mortality and morbidity rates in the reference group were 3.4% and 20.7%, respectively. Comparable patient characteristics, mortality (3.4%), and morbidity (22.2%) rates were found in the validation group (n = 1,548). The mortality and morbidity rates were comparable between the eight cardiac surgeons who participated in the study. The mean postoperative LOS was 8.8 ± 11.0 days in the reference population and 9.0 ± 10.3 days in the validation group, with a median of 6 days in both groups. The incidence of prolonged postoperative LOS was 10.2% and 12.3% in the reference and validation groups, respectively. The mortality rate in this study compares very well with values recently obtained through large multicenter databases. 28,29 The morbidity rate and the incidence of prolonged postoperative LOS are more difficult to compare with those from other studies because of variations in outcome definitions.
Table 3. Patient Characteristics in the Reference Group (n = 2000) and Association with Perioperative Outcomes, as Determined by Univariate Analysis
The Cardiac Anesthesia Risk Evaluation Score as a Predictive Risk Model
Table 4 shows the probabilities of mortality, morbidity, and prolonged postoperative LOS associated with each category of the CARE score, as determined from logistic regression analysis in the reference population.
Table 4. Probabilities of Mortality, Morbidity and Prolonged Postoperative Length of Stay in Hospital, as Predicted by the CARE Score
Predictive Performance of the Risk Classifications
In the reference group, all risk classifications had comparable areas under the ROC curves for the prediction of mortality and morbidity (figs. 1 and 2). With all risk classifications, the discrimination for mortality was significantly better than for morbidity. Because age is a risk factor not explicitly taken into account by the CARE score, the discrimination of the CARE score in predicting mortality and morbidity was further tested in various age subgroups. The areas under the ROC curve for the prediction of mortality and morbidity were 0.791 ± 0.067 and 0.740 ± 0.024 in patients younger than 65 yr of age, 0.763 ± 0.045 and 0.721 ± 0.026 in patients 65–74 yr of age, and 0.795 ± 0.049 and 0.715 ± 0.031 in patients 75 yr of age or older, respectively. For prolonged postoperative LOS, the area under the ROC curve was 0.715 ± 0.018 with the CARE score, 0.774 ± 0.016 with the Parsonnet classification (P = 0.05 vs. all other classifications), 0.730 ± 0.019 with the Tuman classification, and 0.730 ± 0.018 with the Tu classification.
Fig. 1. Receiver operating characteristic curves obtained with each risk model for prediction of mortality in the reference group (n = 2,000 patients). CARE = Cardiac Anesthesia Risk Evaluation; AUC = area under the curve.
Fig. 2. Receiver operating characteristic curves obtained with each risk model for prediction of morbidity in the reference group (n = 2,000 patients). CARE = Cardiac Anesthesia Risk Evaluation; AUC = area under the curve.
In the validation group, the areas under the ROC curve for the prediction of mortality and morbidity were 0.807 ± 0.031 and 0.721 ± 0.016 with the CARE score, 0.804 ± 0.026 and 0.698 ± 0.017 with the Parsonnet classification, 0.823 ± 0.030 and 0.699 ± 0.017 with the Tuman classification, and 0.801 ± 0.032 and 0.688 ± 0.017 with the Tu classification, respectively. A significant difference was found between the CARE score and the Tu classification in predicting morbidity (P = 0.029). For the prediction of prolonged postoperative LOS, the area under the ROC curve was 0.728 ± 0.020 with the CARE score, 0.769 ± 0.017 with the Parsonnet classification (P < 0.05 vs. all other classifications), 0.720 ± 0.020 with the Tuman classification, and 0.741 ± 0.018 with the Tu classification.
The calibration analysis for mortality in the validation group showed an acceptable fit between the observed and expected values for all risk classifications (tables 5–8). For morbidity, an acceptable level of agreement between the observed and expected values was found for the CARE score and the Tuman and Tu classifications, but not for the Parsonnet classification (tables 5–8). For prolonged postoperative LOS, all classifications failed the calibration analysis: P = 0.014 with the CARE score, P = 0.036 with the Parsonnet classification, P = 0.012 with the Tuman classification, and P = 0.026 with the Tu classification.
Table 5. Pearson Chi-square Goodness-of-fit Test for Prediction of Mortality and Morbidity with the CARE Score
Table 6. Pearson Chi-square Goodness-of-fit Test for Prediction of Mortality and Morbidity with the Parsonnet Classification
Table 7. Pearson Chi-square Goodness-of-fit Test for Prediction of Mortality and Morbidity with the Tuman Classification
Table 8. Pearson Chi-square Goodness-of-fit Test for Prediction of Mortality and Morbidity with the Tu Classification
Use of the Cardiac Anesthesia Risk Evaluation Score by Clinicians
An overall concordance rate of 85.1% in CARE score ratings was found between the investigators and the eight participating cardiac anesthesiologists, with a κ value of 0.790 (SE = 0.008; P < 0.001). In the reference group, a concordance rate of 86.3% and a κ value of 0.806 (SE = 0.011; P < 0.001) were found between the two ratings. A comparable concordance rate of 83.6% and a κ value of 0.770 (SE = 0.013; P < 0.001) were found in the validation group, suggesting no significant change over time in the anesthesiologists’ use of the CARE score.
The CARE score used by the attending anesthesiologists had areas under the ROC curve of 0.782 ± 0.028 for mortality, 0.710 ± 0.016 for morbidity, and 0.715 ± 0.018 for prolonged postoperative LOS in the reference group. In the validation group, the areas under the ROC curves were 0.789 ± 0.031 for mortality, 0.721 ± 0.016 for morbidity, and 0.710 ± 0.019 for prolonged postoperative LOS. Those values were not significantly different from those obtained when the CARE score was used by the two investigators. The Pearson goodness-of-fit test for the CARE score used by clinicians showed an acceptable fit between the observed and expected rates of mortality (chi-square = 3.056; df = 8; P = 0.931) and morbidity (chi-square = 14.174; df = 8; P = 0.077), but not of prolonged postoperative LOS (P = 0.045).
When compared with other clinical variables used by clinicians in the reference population, the CARE score predicted mortality and morbidity significantly better than any of those markers alone (table 9), confirming its uniqueness as a clinical tool for risk assessment and classification of cardiac surgical patients.
Table 9. Accuracy of the CARE Score versus Other Clinical Variables for Prediction of Mortality and Morbidity in the Reference Population
Discussion
The results of this study show that the CARE score is an accurate predictor of mortality and morbidity after cardiac surgery. Its discrimination and calibration for the prediction of mortality and morbidity compare very well with those of more complex multifactorial risk indexes. The study also suggests that experienced cardiac anesthesiologists can integrate the CARE score into their clinical practice in a consistent manner that will provide accurate predictions of outcome after cardiac surgery.
Many mathematical modeling techniques are available to quantify the risk associated with cardiac surgery. Of those techniques, multiple regression analysis is probably the most commonly used. 1–8 With this approach, independent risk factors are identified and entered into a complex equation that expresses the probability of adverse outcomes. For practical reasons, the continuous data derived from this equation are converted to a multifactorial risk index, in which the predictive value of each risk factor is reduced to an integer, and the sum of the integers determines the risk category to which individual patients belong. In general, the discrimination provided by those risk indexes, as determined by the area under the ROC curve, is in the range of 0.65–0.85 for mortality, morbidity, or increased postoperative LOS in hospital. 3–5,7,9–13,15 Thus, most existing multifactorial risk indexes meet the acceptability criteria for risk-adjusted analysis of outcomes in cardiac surgery.
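The integer-weight construction described above can be sketched as follows; the factors, weights, and cutoffs are invented for illustration and do not correspond to the Parsonnet, Tuman, Tu, or any other published index:

```python
# Hypothetical additive risk index: each risk factor carries an integer
# weight, and the summed score is mapped to a risk category.
WEIGHTS = {"age_over_75": 2, "emergency": 4, "reoperation": 3, "low_ef": 3}

# Ascending (cutoff, category) pairs: the highest cutoff not exceeding
# the total score determines the category.
CATEGORY_CUTOFFS = [(0, "low"), (3, "intermediate"), (6, "high")]

def risk_category(factors: set) -> str:
    score = sum(WEIGHTS[f] for f in factors)
    category = CATEGORY_CUTOFFS[0][1]
    for cutoff, name in CATEGORY_CUTOFFS:
        if score >= cutoff:
            category = name
    return category
```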
Despite their apparent simplicity, multifactorial risk indexes are difficult to memorize. For example, the Tu classification, 3 which is the simplest model tested in this study, comprises six risk factors and 17 point strata from which nine risk categories can be derived. This is an obvious handicap for daily application in the practice of cardiac anesthesia and surgery. The CARE score is a proposed alternative to this cumbersome approach: a simple risk ranking system designed for routine use by cardiac anesthesiologists and surgeons, allowing some clinical judgment within a framework of general concepts related to risk in cardiac surgery. This approach may become appealing to clinicians, especially if shown to be as good a predictor as more complicated multifactorial risk indexes.
In this study, the CARE score was submitted to the usual evaluation criteria for predictive risk models in two consecutive cohorts of patients, a reference and a validation group. Two investigators gave a CARE score to each patient and determined their risk category according to the multifactorial indexes developed by Parsonnet et al., 1 Tuman et al., 2 and Tu et al. 3 The CARE score predicted postoperative mortality and morbidity with as much accuracy as the multifactorial risk indexes. In fact, only the Parsonnet classification (the most complex model tested in this study) and the CARE score predicted mortality with areas under the ROC curve equal to or more than 0.80 in both cohorts of patients. Furthermore, the CARE score was the only model that provided areas under the ROC curve of more than 0.70 for the prediction of morbidity in both groups of patients. It also provided an acceptable level of agreement between the observed and expected rates of mortality and morbidity in the validation set of patients. For those two outcomes, the fit was not perfect for all the CARE score categories, but none of the multifactorial indexes had a perfect fit for all of its categories either. In the case of the CARE score, poor fits were observed with the 3E and 4E categories, where the predicted mortality was underestimated, and with the 3E category, where the predicted morbidity was also underestimated. The small numbers of patients and expected events in those categories may explain those results. 30
A finding common to all the risk indexes tested in this study was their lower accuracy in predicting morbidity or prolonged postoperative LOS as compared with mortality. This difference has also been observed in previous studies. 3,5 Those results suggest that some complications defining morbidity and some causes of prolonged postoperative LOS are not well accounted for by the risk factors used by the CARE score or the tested multifactorial risk indexes. Another finding common to the CARE score and the tested multifactorial risk indexes was their poor calibration for the prediction of prolonged postoperative LOS. The reason for this lack of calibration is unclear. The mean and median postoperative LOS were comparable between the reference and validation groups, suggesting that a significant change in practice pattern between the two groups is unlikely to be the cause.
One major objective of this study was to determine the performance of the CARE score when used by cardiac anesthesiologists in their daily practice. As previously observed with the ASA physical status classification, 31,32 differences in CARE score ratings were expected between the attending anesthesiologists and the investigators. However, a very high agreement between the two ratings was observed, with an overall concordance rate of 85% for the 3,548 studied patients. Consequently, clinicians predicted mortality, morbidity, and prolonged postoperative LOS with almost as much accuracy as the investigators. Because only two ratings were obtained for each patient in this study, the whole range of variation in CARE score rating remains undetermined. However, the overall results suggest that the participating anesthesiologists can use the CARE score in a manner that is consistent enough to provide appropriate risk-adjusted analysis of outcome in their institution.
The subjective risk assessment of cardiac surgical patients, using the equivalent of a visual analog score, is another simple alternative to multivariate risk analyses. This approach was recently tested and compared with a multivariate risk model in 1,198 patients from seven centers. All patients were given a subjective risk score of 1–5 by their attending surgeon. This subjective risk model provided an area under the ROC curve of 0.70 for the prediction of mortality, significantly less than the 0.76 obtained with the multivariate risk model in the same population. The subjective risk assessment was accurate in identifying the very low- and very high-risk patients, but inaccurate for those with intermediate risk. In the present study, the CARE score predicted mortality as well as any of the tested multivariate risk models and was also useful in predicting outcome for patients with intermediate risk levels. Thus, the CARE score may have certain advantages over a purely subjective risk model, although only a direct comparison of the two methods in the same population could determine the true difference between them.
Following public reporting of cardiac surgical outcomes over the last decade, 33–35 patient risk stratification has been suggested to avoid unfair comparisons between individual cardiac surgeons and institutions. 1,36,37 In this context, a risk classification like the CARE score may be discredited because it allows some rater subjectivity, which may facilitate risk overrating to obtain better risk-adjusted outcome results. This phenomenon, previously called gaming, 38 is also possible with multifactorial risk indexes when certain risk factors are unavailable before surgery (e.g., left ventricular ejection fraction, pulmonary artery pressure) or when their definition is influenced by practice patterns (e.g., urgency, emergency, use of intraaortic balloon pump) or the technique of measurement (e.g., pulmonary artery pressure, ejection fraction). 16,38 Furthermore, the use of multifactorial risk indexes usually involves data collection, computer data entry, and individual risk calculation by clerks or research assistants. Potential errors at each step of this process may add distortions to the risk calculation. Because no predictive model is perfect, there will always be a risk of making mistakes when analyzing and comparing risk-adjusted outcomes to determine quality of care. 39 Recognizing this limitation, all characteristics of the various models must be considered before selecting one that suits an institution's or department's needs for risk calculation. This study demonstrates that a model does not have to be complex and free of clinicians' subjective input to provide reliable risk prediction.
One major limitation of this study is that it was performed in a single institution. External validation of the CARE score in other centers will be necessary to confirm its predictive accuracy. Another limitation is that it was tested only among a small group of experienced cardiac anesthesiologists. The predictive performance of the CARE score may differ when it is used by residents, cardiac surgeons, or intensivists. However, judging from table 1 and the operational criteria in its footnotes, a CARE score can be assigned to most patients with minimal clinical judgment and experience. The research assistant participating in this study (SG) had no major difficulty using the score. This suggests that the accuracy of the CARE score would not be altered significantly by the experience of its user. Further studies will be required to confirm this hypothesis.
Currently, multifactorial risk indexes are used mainly by professional and government bodies that produce annual reports on risk-adjusted outcome 1 or 2 yr after the data collection. Many clinicians feel distant from that process, possibly because of its complexity and financial and time requirements. The CARE score is a model proposed to facilitate data collection and interpretation by busy clinicians involved in cardiac surgery. It can easily be added to patient discharge summaries and used by medical record departments to produce frequent risk-adjusted mortality reports. If shown to be an accurate risk predictor in other institutions, the CARE score may be appealing to many clinicians, mainly because it keeps the whole process of risk evaluation at a clinical level.
The authors thank Mrs. Geraldine Wells, Research Division, Department of Anesthesia, University of Ottawa Heart Institute, Ottawa, Ontario, Canada, for her help in preparing this manuscript, and Ioulia Doumkina, M.D., for reviewing patient charts and assessing the quality of database information.
References
1. Parsonnet V, Dean D, Bernstein AD: A method of uniform stratification of risk for evaluating the results of surgery in acquired adult heart disease. Circulation 1989; 79 (suppl I): 3–12
2. Tuman KJ, McCarthy RJ, March RJ, Hassan N, Ivankovich AD: Morbidity and duration of ICU stay after cardiac surgery. A model for preoperative risk assessment. Chest 1992; 102: 36–44
3. Tu JV, Jaglal SB, Naylor D, the Steering Committee of the Provincial Adult Care Network of Ontario: Multicenter validation of a risk index for mortality, intensive care unit stay, and overall hospital length of stay after cardiac surgery. Circulation 1995; 91: 677–84
4. O’Connor GT, Plume SK, Olmstead EM, Coffin LH, Morton JR, Maloney CT, Nowicki ER, Levy DG, Tryzelaar JF, Hernandez F, Adrian L, Casey KJ, Bundy D, Soule DN, Marrin CAS, Nugent WC, Charlesworth DC, Clough R, Katz S, Leavitt BJ, Wennberg JE, for the Northern New England Cardiovascular Disease Study Group: Multivariate prediction of in-hospital mortality associated with coronary artery bypass graft surgery. Circulation 1992; 85: 2110–8
5. Higgins TL, Estafanous FG, Loop FD, Beck GJ, Blum JM, Paranandi L: Stratification of morbidity and mortality outcome by preoperative risk factors in coronary artery bypass patients. A clinical severity score. JAMA 1992; 267: 2344–8
6. Edwards FH, Clark RE, Schwartz M: Coronary artery bypass grafting: The Society of Thoracic Surgeons National Database Experience. Ann Thorac Surg 1994; 57: 12–9
7. Magovern JA, Sakert T, Magovern GJ Jr, Benckart DH, Burkholder JA, Liebler GA, Magovern GJ Sr: A model that predicts morbidity and mortality after coronary artery bypass graft surgery. J Am Coll Cardiol 1996; 28: 1147–53
8. Nashef SAM, Roques F, Michel P, Gauducheau E, Lemeshow S, Salamon R, the EuroSCORE Study Group: European system for cardiac operative risk evaluation (EuroSCORE). Eur J Cardiothorac Surg 1999; 16: 9–13
9. Gabrielle F, Roques F, Michel P, Bernard A, de Vicentis C, Roques X, Brenot R, Baudet E, David M: Is the Parsonnet’s score a good predictive score of mortality in adult cardiac surgery? Assessment by a French multicentre study. Eur J Cardiothorac Surg 1997; 11: 406–14
10. Weightman WM, Gibbs NM, Sheminant MR, Thackray NM, Newman AJ: Risk prediction in coronary artery surgery: A comparison of four risk scores. Med J Aust 1997; 166: 408–11
11. Pons JMV, Espinas JA, Borras JM, Moreno V, Martin I, Granados A: Cardiac surgical mortality. Comparison among different additive risk-scoring models in a multicenter sample. Arch Surg 1998; 133: 1053–7
12. Orr RK, Maini BS, Sottile FD, Dumas EM, O’Mara P: A comparison of four severity-adjusted models to predict mortality after coronary artery bypass graft surgery. Arch Surg 1995; 130: 301–6
13. Martínez-Alario J, Tuesta ID, Plasencia E, Santana M, Mora ML: Mortality prediction in cardiac surgery patients. Comparative performance of Parsonnet and general severity systems. Circulation 1999; 99: 2378–82
14. Sanchez Garcia R, Nygard E, Christensen JB, Lund JT, Micheelsen F, Niebuhr-Jorgensen U, Bie P: Perioperative risk assessment in connection with heart surgery. Results from 628 patients discussed with emphasis on the difficulties associated with comparison between performances of different departments. Ugeskr Laeger 1995; 157: 6720–5
15. Leonard RC, van Heerden PV, Power BM, Cameron PD: Validation of Tu’s cardiac surgical risk prediction index in a Western Australian population. Anaesth Intensive Care 1999; 27: 182–4
16. Parsonnet V, Bernstein AD, Gera M: Clinical usefulness of risk-stratified outcome analysis in cardiac surgery in New Jersey. Ann Thorac Surg 1996; 61: S8–11
17. Urzua J, Dominguez P, Quiroga M, Moran S, Irarrazaval M, Maturana G, Dubernet J: Preoperative estimation of risk in cardiac surgery. Anesth Analg 1981; 60: 625–8
18. Jones RH, Hannan EL, Hammermeister KE, DeLong ER, O’Connor GT, Luepker RV, Parsonnet V, Pryor DB, for the Working Group Panel on the Cooperative CABG Database Project: Identification of preoperative variables needed for risk adjustment of short-term mortality after coronary artery bypass graft surgery. J Am Coll Cardiol 1996; 28: 1478–87
19. Tu JV, Sykora K, Naylor CD, for the Steering Committee of the Cardiac Care Network of Ontario: Assessing the outcomes of coronary artery bypass graft surgery: How many risk factors are enough? J Am Coll Cardiol 1997; 30: 1317–23
20. Marshall G, Grover FL, Henderson WG, Hammermeister KE: Assessment of predictive models for binary outcomes: An empirical approach using operative death from cardiac surgery. Statistics in Medicine 1994; 13: 1501–11
21. Pons JMV, Borras JM, Espinas JA, Moreno V, Cardona M, Granados A: Subjective versus statistical model assessment of mortality risk in open heart surgical procedures. Ann Thorac Surg 1999; 67: 635–40
22. Tremblay NA, Hardy JF, Perrault J, Carrier M: A simple classification of the risk in cardiac surgery: The first decade. Can J Anaesth 1993; 40: 103–11
23. Dupuis JY, Wynands JE: Risk-adjusted mortality to assess quality of care in cardiac surgery (editorial). Can J Anaesth 1993; 38: 91–7
24. Swets JA: Measuring the accuracy of diagnostic systems. Science 1988; 240: 1285–93
25. DeLong ER, DeLong DM, Clarke-Pearson DL: Comparing the areas under two or more correlated receiver operating characteristic curves: A nonparametric approach. Biometrics 1988; 44: 837–45
26. Hosmer DW, Lemeshow S: Assessing the fit of the model, Applied Logistic Regression. Edited by Hosmer DW, Lemeshow S. New York, Wiley, 1989, pp 135–75
27. Plackett RL: Karl Pearson and the chi-squared test. International Statistical Review 1983; 51: 59–72
28. Ferguson TB Jr, Dziuban SW, Edwards FH, Eiken MC, Shroyer ALW, Pairolero PC, Anderson RP, Grover FL: The STS national database: Current changes and challenges for the new millennium. Ann Thorac Surg 2000; 69: 680–91
29. Wynne-Jones K, Jackson M, Grotte G, Bridgewater B, on behalf of the North West Regional Cardiac Surgery Audit Steering Group: Limitations of the Parsonnet score for measuring risk stratified mortality in the north west of England. Heart 2000; 84: 71–8
30. Rosner B: Fundamentals of Biostatistics, 4th edition. Belmont, Duxbury Press, 1995, pp 419–23
31. Ranta S, Hynynen M, Tammisto T: A survey of the ASA physical status classification: Significant variation in allocation among Finnish anaesthesiologists. Acta Anaesthesiol Scand 1997; 41: 629–32
32. Haynes SR, Lawler GP: An assessment of the consistency of ASA physical status classification allocation. Anaesthesia 1995; 50: 195–9
33. Green J, Wintfeld N: Report cards on cardiac surgeons. Assessing New York State’s approach. N Engl J Med 1995; 332: 1229–32
34. Hannan EL, Kilburn H Jr, O’Donnell JF, Lukacik G, Shields EP: Adult open heart surgery in New York State. An analysis of risk factors and hospital mortality rates. JAMA 1990; 264: 2768–74
35. Williams SV, Nash DB, Goldfarb N: Differences in mortality from coronary artery bypass graft surgery at five teaching hospitals. JAMA 1991; 266: 810–5
36. Edwards FH, Albus RA, Zajtchuk R, Graeber GM, Barry M: A quality assurance model of operative mortality in coronary artery surgery. Ann Thorac Surg 1989; 47: 646–9
37. Griffith BP, Hattler BG, Hardesty RL, Kormos RL, Pham SM, Bahnson HT: The need for accurate risk-adjusted measures of outcome in surgery. Lessons learned through coronary artery bypass. Ann Surg 1995; 222: 593–9
38. Burack JH, Impellizzeri P, Homel P, Cunningham JN Jr: Public reporting of surgical mortality: A survey of New York State cardiothoracic surgeons. Ann Thorac Surg 1999; 68: 1195–202
39. Iezzoni LI: The risks of risk adjustment. JAMA 1997; 278: 1600–7
Fig. 1. Receiver operating characteristic curves obtained with each risk model for prediction of mortality in the reference group (n = 2,000 patients). CARE = Cardiac Anesthesia Risk Evaluation; AUC = area under the curve.
Fig. 2. Receiver operating characteristic curves obtained with each risk model for prediction of morbidity in the reference group (n = 2,000 patients). CARE = Cardiac Anesthesia Risk Evaluation; AUC = area under the curve.
Table 1. Cardiac Anesthesia Risk Evaluation Score
Table 2. Multifactorial Risk Indexes for the Prediction of Outcome after Cardiac Surgery
Table 3. Patient Characteristics in the Reference Group (n = 2,000) and Association with Perioperative Outcomes, as Determined by Univariate Analysis
Table 4. Probabilities of Mortality, Morbidity and Prolonged Postoperative Length of Stay in Hospital, as Predicted by the CARE Score
Table 5. Pearson Chi-square Goodness-of-fit Test for Prediction of Mortality and Morbidity with the CARE Score
Table 6. Pearson Chi-square Goodness-of-fit Test for Prediction of Mortality and Morbidity with the Parsonnet Classification
Table 7. Pearson Chi-square Goodness-of-fit Test for Prediction of Mortality and Morbidity with the Tuman Classification
Table 8. Pearson Chi-square Goodness-of-fit Test for Prediction of Mortality and Morbidity with the Tu Classification
Table 9. Accuracy of the CARE Score versus Other Clinical Variables for Prediction of Mortality and Morbidity in the Reference Population