Possible Strategies to Make Warfarin Dosing Algorithm Prediction More Accurate in Patients With Extreme Dose Requirements
The major reason for prediction bias is that the known pharmacogenetic and clinical factors explain only about 55% of warfarin dose variation.3 To improve prediction accuracy, more pharmacogenomics studies are needed to identify new factors related to warfarin dose. However, because of the large effect of CYP2C9 and VKORC1 polymorphisms, few new factors have been identified through pharmacogenetics studies using the conventional strategy. Another reason is that gene–gene interactions may exist between CYP2C9 and VKORC1, implying nonlinear warfarin dose variation.4 One possible approach to these problems is to conduct pharmacogenetics studies and establish dosing algorithms within groups of patients who share the same combined CYP2C9–VKORC1 genotype, which would eliminate the confounding effect of CYP2C9 and VKORC1 polymorphisms.
The major role of dosing algorithms is to predict warfarin dose requirements at the initiation of therapy, helping patients minimize the time needed to adjust the dose into the therapeutic international normalized ratio (INR) range. Indeed, adjusting the dose according to INR measurements is currently the most effective way to ensure the safety of warfarin use. No matter how accurately a dosing algorithm predicts the dose, INR values must still be monitored during warfarin therapy. In theory, if a patient's initial dose is under‐ or overpredicted, his or her INR values will depart widely from the target range after warfarin is started. Therefore, by integrating the first few measured INR values with the initially predicted dose, researchers may establish a new algorithm that re‐predicts the dose. In future studies, based on pharmacogenetic factors, clinical phenotype, and INR values, researchers could construct two dosing algorithms that predict the warfarin dose twice for patients with extreme dose requirements, which may help doctors adjust the dose more accurately.
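The two-stage idea above can be sketched as an initial model prediction followed by a re-prediction once early INR measurements are available. The proportional update rule and the target INR of 2.5 used here are illustrative assumptions only, not a validated second-stage algorithm.

```python
# Hypothetical two-stage sketch: stage 1 predicts a starting dose from
# pharmacogenetic/clinical factors; stage 2 re-predicts it from the
# first few observed INR values. The proportional scaling rule and the
# target INR of 2.5 are illustrative assumptions, not a validated method.

def initial_dose(model_dose_mg: float) -> float:
    """Stage 1: dose predicted by a pharmacogenetic/clinical algorithm."""
    return model_dose_mg

def revised_dose(current_dose_mg: float, early_inrs: list,
                 target_inr: float = 2.5) -> float:
    """Stage 2: re-predict the dose using early INR measurements.

    If the observed INR runs above target, the patient is
    over-anticoagulated and the dose is scaled down, and vice versa
    (a crude proportional rule standing in for a fitted second model).
    """
    mean_inr = sum(early_inrs) / len(early_inrs)
    return current_dose_mg * (target_inr / mean_inr)

dose = initial_dose(7.5)                     # stage 1: extreme-dose patient
dose = revised_dose(dose, [3.5, 4.0, 4.5])   # stage 2: INRs too high, reduce
print(round(dose, 2))
```

A fitted second-stage model would replace the proportional rule with a regression on the predicted dose, the early INR trajectory, and the same pharmacogenetic and clinical covariates as the first stage.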