Quarterly Assessments of Clinician Care (AKA Peer Review)

In the world of health centers, we commonly talk about an activity we call “Peer Review.” However, the HRSA Site Visit Protocol never actually uses the term “Peer Review.” What we understand to be peer review, HRSA calls “Quarterly Assessments of Clinician Care.” This is found in the SVP in Chapter 10: QI/QA, Element d, Questions 7-10 (April 13, 2023 version of the SVP).

The only requirements outlined by HRSA are that these assessments:

  • Are “conducted by physicians or other licensed health care professionals” (nurse practitioners, registered nurses, etc.)

  • Are conducted “on at least a quarterly basis”

  • Are “based on data systematically collected from patient records”

  • “Demonstrate that the health center is tracking and, as necessary, addressing issues related to the quality and safety of the care provided to health center patients”

So beyond these requirements, it’s up to the health center to determine what best practices they will adopt. The following are some possible best practices.

Who should do the peer review?

The peer review must be completed by “physicians or other licensed health care professionals,” but the protocol is less specific about who reviews whom.

Common questions are: “Does an MD have to review another MD?” “Can a dentist be evaluated by an MD?” “Can a Director of Behavioral Health review all behavioral health staff, or do the evaluators have to be peers?” “Can an NP evaluate an MD?” “Who needs to evaluate the CMO?” In short, HRSA does not clearly spell out answers to these questions.

In general, the best practice is to have peers evaluate peers. This is not always possible in smaller organizations, and so there are a number of options. One option is to have a higher-qualified staff member review a staff member with fewer qualifications. We have also seen health centers set up a “peer review exchange” with peers at a friendly nearby FQHC. Many health centers are not comfortable with this, since they do not always want another organization having this level of insight into their operations, but we have seen it work well. The health center can also conduct “blind reviews,” where key provider/patient information is redacted so reviewers do not know the actual identity of the clinical staff member they are reviewing.

We have seen some health centers have the director of the department (for example, the Director of Behavioral Health) evaluate all of the behavioral health providers. Though there is likely nothing wrong with this process from an HRSA compliance perspective, it may not be sustainable, and it may not provide the variety and diversity in reviews that would be best.

Another common related question we have received is whether Licensed Independent Practitioners (LIPs), Other Licensed or Certified Practitioners (OLCPs), and unlicensed/uncertified Other Clinical Staff (OCS) are all required to be included in this peer review. The Site Visit Protocol states that these assessments are completed to ensure “provider adherence to current evidence-based clinical guidelines...” “Providers” is generally understood to mean Licensed Independent Practitioners (LIPs): physicians, advanced practice registered nurses, physician assistants, etc. Most HRSA reviewers will only require “Quarterly Assessments of Clinician Care” for LIPs.

How many charts should we review every quarter?

Again, there is no requirement from HRSA, but a best practice is that each LIP and OLCP should have 3-5 charts/service dates reviewed in a systematic way every quarter. Practically, you want to choose a number that can actually be completed sustainably each quarter. It is better to review 1-2 charts per LIP/OLCP every quarter than to aim for 10 charts per LIP/OLCP and complete them inconsistently or not at all.
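To make the quarterly pull concrete, here is a minimal sketch (in Python, with made-up provider and encounter data) of selecting a fixed number of charts per provider each quarter. The data structures are hypothetical; a real version would pull encounter lists from your EHR.

```python
import random

def sample_charts_for_review(encounters_by_provider, charts_per_provider=3, seed=None):
    """Randomly select a fixed number of encounters per provider for peer review.

    encounters_by_provider: dict of provider name -> list of encounter IDs
    (a hypothetical structure; a real pull would come from your EHR).
    """
    rng = random.Random(seed)  # a fixed seed makes the quarterly pull reproducible
    selections = {}
    for provider, encounters in encounters_by_provider.items():
        k = min(charts_per_provider, len(encounters))  # small panels get all charts
        selections[provider] = rng.sample(encounters, k)
    return selections

# Illustrative quarterly pull with made-up encounter IDs:
q1 = sample_charts_for_review(
    {"Dr. A": ["e101", "e102", "e103", "e104"], "NP B": ["e201", "e202"]},
    charts_per_provider=3,
    seed=2023,
)
```

Keeping the selection method (and the resulting list) on file is one simple way to show a site visit team that charts were chosen systematically rather than hand-picked.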

How do we demonstrate that the data is collected systematically?

HRSA is looking for consistency and intentionality in reviewing clinicians’ care. Most health centers adopt an agreed-upon form that allows peer reviewers to evaluate charts in a standardized fashion. Other health centers will pull provider-level EHR reports. An example we have seen that we would not classify as systematic is a Chief Medical Officer writing a narrative paragraph on a provider’s general professionalism.
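For illustration, the idea of a standardized review form can be sketched as a small data structure. The criteria below are hypothetical; a real form would use your health center’s own evidence-based standards.

```python
from dataclasses import dataclass, field

# Hypothetical review criteria; substitute your health center's own
# evidence-based clinical standards.
CRITERIA = [
    "Assessment supports diagnosis",
    "Treatment follows clinical guidelines",
    "Medication reconciliation documented",
    "Follow-up plan documented",
]

@dataclass
class ChartReview:
    reviewer: str
    provider: str
    encounter_id: str
    scores: dict = field(default_factory=dict)  # criterion -> "met" / "not met"

    def record(self, criterion, result):
        # Only the agreed-upon criteria may be scored, which keeps
        # every review comparable across reviewers and quarters.
        if criterion not in CRITERIA:
            raise ValueError(f"Unknown criterion: {criterion}")
        self.scores[criterion] = result

    def is_complete(self):
        # A review is complete only when every criterion is scored.
        return set(self.scores) == set(CRITERIA)
```

Because every reviewer scores the same fixed criteria, the results can be tallied across providers and quarters, which is what makes the data “systematically collected.”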

How do we demonstrate that the health center is tracking and addressing issues related to the quality and safety of the care provided?

HRSA wants to see that these reviews aren’t just a box-checking exercise filed into a folder somewhere. Sometimes we will see great peer review forms completed, but no evidence that anything was ever done with the review. A best practice we see commonly is for the form to have a place at the bottom where the “reviewer” and the “reviewee” both sign and date an acknowledgement of the review, along with a place for noting deficiencies and associated action steps to improve. This is a great example of how a health center can demonstrate that issues are “tracked and addressed.”
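As a rough sketch of what “tracked and addressed” can look like in data (the record fields and sample entries below are hypothetical), a QI/QA lead could keep each completed review with its sign-offs and action steps, then filter for anything still open:

```python
from datetime import date

# Hypothetical record shape: each completed review notes any deficiency,
# an action step, and sign-off dates for reviewer and reviewee.
reviews = [
    {"provider": "Dr. A", "deficiency": None, "action_step": None,
     "reviewer_signed": date(2023, 4, 10), "reviewee_signed": date(2023, 4, 11),
     "resolved": True},
    {"provider": "NP B", "deficiency": "Missing follow-up plan",
     "action_step": "Re-review 3 charts next quarter",
     "reviewer_signed": date(2023, 4, 10), "reviewee_signed": None,
     "resolved": False},
]

def open_items(reviews):
    """Return reviews with an unresolved deficiency or a missing sign-off,
    so QI/QA leadership can show issues are tracked to closure."""
    return [r for r in reviews
            if (r["deficiency"] and not r["resolved"])
            or r["reviewer_signed"] is None
            or r["reviewee_signed"] is None]
```

A simple open-items list like this, reviewed each quarter, is concrete evidence that deficiencies are followed to resolution rather than filed and forgotten.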

How do we make this sustainable?

Let’s face it: peer review can be cumbersome and time-consuming. Our general recommendation is always to combine efforts when possible. For example, if you are doing peer review every quarter, use that data to evaluate “current clinical competence” for your credentialing and privileging process; there is no need to create a duplicative process for evaluating your clinicians’ care. As we said above, we would always prefer a health center to have a simple, sustainable peer review process versus a complex, aspirational peer review process that is not sustainable!
