- EC eCQMs - Eligible Clinicians
- Resolution: Answered
- Moderate
- None
- None
- Peter Basch
- 2023600299
- MedStar Health
- Thank you for your comments. We value your input and will take this feedback into account when considering changes for a future update cycle.
- CMS0002v13
- CMS0002v12
There is no disagreement with CMS2's importance as a screening measure. Depression is common in the US and is often unrecognized. CMS2 was originally drafted with exclusions for patients with "a history of depression or bipolar disorder." I am only guessing, but I believe that exclusion for "history of depression" stemmed from confusion between the problem list and the past medical history. Thus, IMO it makes sense to EXCLUDE from screening someone who is currently diagnosed with depression and is under treatment. After all, screening is meant for patients who do not already have the condition you are screening them for.
Several years ago, the exclusion for "history of depression" was removed, with the clinical rationale expressed as (paraphrasing) "patients with a history of depression may develop depression again, and thus deserve to be re-screened." I couldn't agree more. In fact, there is a set of ICD-10 codes specifically reserved for recurrent depression.
That said, the correction to the original oversight in the definition of the population to be screened was also poorly worded. It could have been fixed by excluding only patients with an active problem (or current diagnosis) of depression or bipolar disorder. Instead, by attempting to fix the measure by including all patients (except those with bipolar disorder), a new problem was created. The PHQ-9 and other depression instruments are not used only for screening; they are also used to assess treatment response in people with depression. The problem is that unless the clinician creates a new referral or writes a new prescription for an antidepressant, the patient is now considered a measure fail.
Let me clarify with an example. A patient without a diagnosis/problem of depression sees their PCP, is screened for depression, and has a PHQ-9 score of 12. The PCP determines the patient is suffering from moderate depression and refers the patient to psychiatry for further diagnosis and treatment. This patient meets the measure because a screening was done and a referral was made. I would add that in this case the measure is also capturing appropriate care.
Now the patient is seeing the psychiatrist, who confirms the diagnosis and, after shared decision-making with the patient, decides on a course of medication and counseling. At 8 or 12 weeks, a repeat PHQ-9 is administered to assess treatment efficacy. The patient now has a PHQ-9 of 6 - still a positive test (for mild depression), but one showing an excellent response to treatment. The psychiatrist and patient decide to continue treatment for another few months and, depending on how the patient is doing, to consider stopping treatment at that point. While this is a good story of appropriate care, it is also a story of a measure "gotcha," as this patient is now scored as a measure FAIL. And why is that? Because each positive depression "screen" must have a documented follow-up, such as a referral or prescription. The psychiatrist is not going to refer to herself after every PHQ-9, and in this case the medication is likely to be tapered and stopped, not refilled or modified.
Last year, this "known issue" included a suggested work-around: mapping the narrative phrase "continue current treatment" to one of two SNOMED codes (which were in the value set of follow-up satisfiers). How do you do that in a world where NLP is neither ubiquitously deployed nor consistently reliable? You add a check-box to a form. Sure, this can be done - but how likely is it to be done consistently? My guess: not often. And is it reasonable to hound clinicians to remember to click a checkbox? I think not... there are so many other things we remind clinicians to do that actually have the potential to improve care.
And to make matters worse - I see that for v14 (the 2025 update), instead of acknowledging there is a problem and working on a solution, language was inserted in the rationale section of the measure which essentially says, "we meant to do that." That language (IMO) distorts good clinical care for depression - which is to periodically re-assess treatment efficacy with the same standardized tests we use for screening - and instead says "patients with depression should be periodically re-screened..."
A good process measure with a poorly worded exclusion has been transformed into a bad measure, where the only solution is burdensome checkbox documentation for patients under treatment for depression. Scores for many clinicians providing good care will decrease, which will lead to frustration with the entire measurement process.
There is no reason to justify a bad fix for an exclusion oversight with a new clinical construct. Clinicians should repeat a PHQ-9 or similar standardized test at 8-12 weeks to assess treatment efficacy. Please just fix the measure.