Brain Monitoring and the Depth of Anesthesia: Another Goldilocks Dilemma

Excerpt

If each medical specialty were to be identified by a key organ of the body, would “brain” be the choice for anesthesiology? The practice of anesthesia is historically rooted in the need to manage issues such as pain and consciousness—all falling within the domain of the nervous system—and many anesthetic agents exert their primary effect on the brain. Yet, looking at an anesthesiologist in the operating room today, the central role of the brain and the monitoring of its function might not be evident at first sight. Indeed, attention seems to be paid more to monitoring vital signs, supporting the cardiovascular system, maintaining perfusion, and keeping track of fluids. The assumption might be that the brain will be fine as long as we take care of the other “supportive” organs. After all, monitoring actual brain function might look a bit too abstract to tackle.
Along its continuum, depth of sedation is defined primarily by the patient’s “responsiveness,” ranging from a normal response to verbal stimulation during minimal sedation (anxiolysis) to complete unarousability under general anesthesia. Given that patient responsiveness is clearly a function of the central nervous system, monitoring brain function during anesthesia should be a “no brainer.”
The reason we are not paying far more attention to brain function in anesthesia practice has more to do with technical limitations than with ignorance. The heart, circulatory system, blood, lungs, and kidneys all function in ways that are relatively easy to understand and measure, using physical and chemical characteristics that are tied in a meaningful way to their function. Examples include pulse rate, blood pressure, creatinine level, and oxygen saturation, which are routinely used to assess and monitor these organs. In comparison, brain function is significantly more complex and cryptic. Our oldest and most widely used window into this “black box”—electroencephalography (EEG)—provides data that, in raw form, are usually left to highly trained clinicians to interpret, and even then, the information that can be deduced is often limited by interindividual variation, a relatively low signal-to-noise ratio, and limited resolution for detecting subtler changes. Nonetheless, real-time monitoring of the overall electrical activity of the brain (predominantly the superficial regions closest to the electrodes) can be a highly useful tool for anesthesiologists to objectively assess anesthesia and sedation.1
The progressive nature of EEG changes during the stages of anesthesia has been studied in detail; these changes are commonly described as a gradual shift toward higher-amplitude, lower-frequency activity as anesthesia deepens.2 The distinct EEG patterns that correspond to various levels of consciousness and depths of anesthesia are listed in the Figure.2 Surgery is usually performed in phase 2 or 3.
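To make this spectral shift concrete, the sketch below (not part of the original article) compares low- and high-frequency band power in two synthetic signals standing in for “awake” and “deep anesthesia” EEG. The sampling rate, band edges, and signal amplitudes are illustrative assumptions only, not values taken from any monitor or study.

```python
# Hedged sketch: quantifying the shift toward higher-amplitude,
# lower-frequency activity with deepening anesthesia. Signals are
# synthetic; all parameters are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz)

def band_power(signal, fs, lo, hi):
    """Integrate the Welch power spectral density over [lo, hi] Hz."""
    f, pxx = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / FS)  # 30 s of data

# "Awake" stand-in: low-amplitude, faster mixed-frequency activity.
awake = 5 * np.sin(2 * np.pi * 20 * t) + 5 * rng.standard_normal(t.size)
# "Deep anesthesia" stand-in: high-amplitude slow (delta-range) waves.
deep = 40 * np.sin(2 * np.pi * 1.5 * t) + 5 * rng.standard_normal(t.size)

for label, sig in [("awake", awake), ("deep", deep)]:
    delta = band_power(sig, FS, 0.5, 4)  # conventional delta band
    beta = band_power(sig, FS, 13, 30)   # conventional beta band
    print(f"{label}: delta/beta power ratio = {delta / beta:.1f}")
```

Run as written, the ratio is far larger for the “deep” signal, which is the pattern the paragraph above describes.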
Given these well-established patterns, it is not surprising that many currently available brain function monitoring devices draw predominantly on EEG measurements to provide clinically meaningful outputs. The differences usually lie in which parts of the EEG data are selected, how the data are “cleaned up” and analyzed, and how the output measures are constructed and displayed. In general, the approach of simplifying complex multichannel EEG signals into a few easily understandable, arbitrarily scaled measures involves fitting models to a plethora of processed and simplified EEG features (see below) derived from large numbers of subjects with known levels of awareness or depth of anesthesia.
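As a hedged illustration of this general recipe—processed EEG features plus a model fit to recordings with known depth of anesthesia—the sketch below derives two classic processed-EEG features (spectral edge frequency and relative delta power) from synthetic epochs and fits a logistic regression whose output probability acts as a toy 0-to-1 index. Every name, parameter, and training example here is hypothetical; commercial monitors use proprietary features and models.

```python
# Hedged sketch of the general approach: reduce each EEG epoch to a few
# processed features, then fit a model on labeled examples. Entirely
# illustrative; not any device's actual algorithm.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 256  # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / FS)  # 4-s epochs
rng = np.random.default_rng(1)

def features(epoch, fs=FS):
    """Two toy processed-EEG features for one epoch."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs * 2)
    cum = np.cumsum(pxx) / np.sum(pxx)
    sef95 = f[np.searchsorted(cum, 0.95)]  # 95% spectral edge frequency
    delta = pxx[(f >= 0.5) & (f <= 4)].sum() / pxx.sum()  # relative delta power
    return [sef95, delta]

def synth_epoch(slow_amp, fast_amp):
    """Synthetic epoch mixing slow (2 Hz) and fast (20 Hz) activity."""
    return (slow_amp * np.sin(2 * np.pi * 2 * t)
            + fast_amp * np.sin(2 * np.pi * 20 * t)
            + rng.standard_normal(t.size))

# Pretend training set: epochs labeled awake (0) or anesthetized (1).
X = [features(synth_epoch(1, 5)) for _ in range(50)]
X += [features(synth_epoch(30, 2)) for _ in range(50)]
y = [0] * 50 + [1] * 50

model = LogisticRegression().fit(X, y)

# The predicted probability of the "anesthetized" class serves as a
# toy depth-of-anesthesia index for a new epoch.
new_epoch = features(synth_epoch(25, 3))
print(f"toy depth index: {model.predict_proba([new_epoch])[0, 1]:.2f}")
```

The design point this sketch mirrors is the one the paragraph makes: the discriminating work is done by simple, interpretable features extracted from the spectrum, and the model merely maps them onto a single easy-to-read scale.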