Quantitative assessment of the scientific merit of journals and articles is increasingly used to assess and compare researchers and institutions. The most commonly used measure is the 2-year Impact Factor, which broadly reflects the average number of times articles published in a journal over the previous 2 years have been cited in the current year. There are clear limitations to the use of such measures: not least, Impact Factors reflect the journal rather than the individual article, vary over time and correlate only poorly with perceived excellence. Simple comparison of Impact Factors across different specialties may be misleading, and review journals often have higher Impact Factors than journals publishing original data. Both authors and editors can attempt to manipulate journal Impact Factors. Nevertheless, despite these valid concerns, Impact Factors are widely used and currently offer the best simple tool for comparing output. Like all measures, Impact Factors must be used with an awareness of their limitations, and common sense must be applied when interpreting any analysis based on them.
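The 2-year Impact Factor described above is, at its core, a simple ratio: citations received in a given year to articles the journal published in the preceding 2 years, divided by the number of citable items it published in those 2 years. A minimal sketch of that calculation, using purely hypothetical figures:

```python
def impact_factor(citations_in_year: int, citable_items: int) -> float:
    """Compute a journal's 2-year Impact Factor for year Y.

    citations_in_year: citations received in year Y by articles
        the journal published in years Y-1 and Y-2.
    citable_items: number of citable articles published in Y-1 and Y-2.
    """
    return citations_in_year / citable_items

# Hypothetical example: 800 citations in 2024 to the 400 citable
# articles published in 2022-2023 give an Impact Factor of 2.0.
print(impact_factor(800, 400))  # → 2.0
```

The sketch also hints at why the measure is open to manipulation: anything that raises the numerator (e.g. encouraging self-citation) or lowers the denominator (e.g. reclassifying articles as non-citable front matter) inflates the ratio without any change in scientific merit.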