The P value tells us the chance of making a type I error: finding a difference where there is none. In the 1970s, exact P values were laborious to calculate and were generally approximated from statistical tables in the form P < 0.01 or 0.05 < P < 0.10, etc. In the past decades, with the advent of computers, it became easy to calculate exact P values such as 0.84 or 0.007. The cut-off P values have not been completely abandoned, but broader attention is now given to the interpretation of exact P values. This article reviews standard and renewed interpretations of P values.

(1) Standard interpretation of cut-off P values such as P < 0.05: The null hypothesis of no difference can be rejected, under the limitations/assumptions that we accept up to a 5% chance of a type I error (finding a difference where there is none), that we accept a 50% chance of a type II error (finding no difference where there is one), that the data are normally distributed, and that they follow the same distribution as that of the population from which the sample was taken.

(2) A common misunderstanding of the P value: that it is the chance that the null hypothesis is true, and consequently that P > 0.05 indicates a significant similarity in the data. P > 0.05 may indeed indicate similarity; however, the possibility that the study sample was too small, or the study design too inadequate, to detect a real difference must be considered.

(3) Renewed interpretations of P values: Exact P values enable more refined conclusions from the research than cut-off levels. Instead of concluding a significant yes/no, we can consider levels of probability, from very likely to be true to very likely to be untrue. Very large P values are not compatible with a normal Gaussian frequency distribution, and very small P values do not completely confirm prior expectations; both must be scrutinized, since the data may have been inappropriately improved.
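The contrast between table-based cut-offs and exact P values can be sketched with a minimal example. The function below is an illustrative one-sample z-test (a hypothetical helper, not from the article), which assumes a known population standard deviation and normally distributed data, matching the assumptions listed above; it returns an exact two-sided P value rather than a cut-off verdict.

```python
import math

def z_test_p_value(sample_mean, pop_mean, pop_sd, n):
    """Exact two-sided P value for a one-sample z-test.

    Illustrative sketch only: assumes the population SD is known
    and the data are normally (Gaussian) distributed.
    """
    # Standardized test statistic
    z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
    # Two-sided tail probability under the standard normal distribution,
    # computed from the complementary error function
    return math.erfc(abs(z) / math.sqrt(2))

# Exact P value, e.g. 0.317, rather than the table verdict "P > 0.05"
p = z_test_p_value(sample_mean=102.0, pop_mean=100.0, pop_sd=10.0, n=25)
print(round(p, 3))
```

With an exact value such as 0.317 in hand, the reader can weigh how likely the null hypothesis's compatibility with the data is, instead of receiving only the binary conclusion that significance was not reached.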