Peer Learning (PL) is an engaging activity in which practicing radiologists come together to review and jointly learn from cases. The major impetus for PL lies in the overarching goal of improving diagnosis in radiology through a team-based culture that views mistakes as an opportunity to learn.1 In its book Improving Diagnosis in Health Care, the Institute of Medicine (IOM) found that most people will be affected by at least 1 diagnostic error in their lifetime.2
As a result of the IOM book Improving Diagnosis in Health Care, several external drivers are in place to change the practices of health care providers toward improved diagnosis. Radiologists realized that PL, rather than random score-based peer review, best meets the IOM goals of establishing effective teamwork, educating practitioners in the diagnostic process, learning from mistakes, creating a culture that improves diagnostic performance, and establishing a reporting mechanism for discrepancies.1 Following a 2020 Radiology Peer Learning Summit, the American College of Radiology (ACR) developed a new accreditation pathway that replaces score-based random peer review with PL.3,4 The American Board of Radiology (ABR) added PL as an alternative participatory activity for meeting Maintenance of Certification (MOC) Part 4 criteria.5 The Joint Commission (TJC) serves as another external driver of improved practitioner performance (eg, through peer review) via its Ongoing Professional Practice Evaluation (OPPE) requirements.6
Assessments of agreement among university neuroradiologists showed disagreement rates of up to 12.4%,7 much higher than the 2.9% reported for score-based random peer review.8 Among errors in neuroradiology are discrepancies regarding vascular, neoplastic, and congenital disorders, as well as artifacts.7 Moreover, neuroradiology errors can also arise from test-selection errors, protocolling errors, technical errors, and failure to communicate results in a timely fashion. Education can decrease errors: neuroradiologists with high participation rates in the Tumor Board have lower diagnostic error rates.9 Awareness of “blind spots,” for example with complex head and neck anatomy and pathology, may decrease interpretive errors and could conceivably be improved with PL.10,11 In fact, there is some evidence that PL creates learning opportunities,12,13 but nationwide adoption of PL programs remains limited,14 and scientific evidence demonstrating its effectiveness is still lacking.
This Perspectives comes from members of the American Society of Neuroradiology (ASNR) Quality, Safety and Value Committee. Here, we describe several challenges faced by neuroradiologists who are interested in serving as PL champions. Among these challenges are an inability to recruit volunteer champions to drive PL programs, a lack of resources for running a PL program, unknown effects on required reporting to TJC, and a lack of evidence favoring PL over score-based random peer review.
Challenge 1: Who Wants to Be a PL Champion?
The barriers to appointing PL champions may stem from implicit expectations that favor clinical over noninterpretive performance and from the scope of the role, which depends on the existing culture within the neuroradiology practice and the resources available to support a PL program.
The current practice environment in neuroradiology is characterized by rising clinical volumes, tight finances, and the unfolding Great Resignation. As a result, neuroradiologists may cut back on noninterpretive duties.15 In one pediatric neuroradiology program, serving as a PL champion was reported to require, at a minimum, several hours of preparation for each PL meeting, plus additional time for managing discrepancies and for external reporting, such as generating and submitting data for the ACR accreditation program or TJC or for claiming Continuing Medical Education (CME) credits.13 There are currently no established physician roles that would provide PL champions with protected time for these tasks.
PL champions face additional challenges. Some radiologists believe that participating in interesting case conferences is the same as PL. Although such conferences are peer learning in the literal sense, the term PL should be understood in the context of meeting the goals of the IOM Improving Diagnosis in Health Care report, which include several layers of accountability, foremost a process for handling discrepancies.1 There needs to be a clear process for reporting discrepancies, for consistently notifying the original interpreting radiologist of the discrepancy, and for ensuring optimal patient care. To meet TJC and ACR requirements, PL must include cases with discrepancies, but the additional inclusion of great catches, interesting cases, and so forth can foster shared learning. PL champions may need to drive this culture shift, emphasizing the importance of reporting discrepancies for learning purposes.
The ACR acknowledges the importance of separating the performance review of radiologists from learning and professional growth by creating the ACR Accreditation Pathway for PL,4 which replaces agreement/disagreement ratings with measures of participation in a PL program. For neuroradiologists who are used to randomized score-based peer review, the culture needs to shift away from perceiving discrepancy reporting as punishment or as a performance-assessment tool and toward sharing learning opportunities so that the group can learn and grow. This shift in measuring performance is crucial because traditional score-based peer review has been used as a punitive measure in the past.14,16 Achieving such a culture shift could be an impossible lift for neuroradiologists on their own; leadership support is critical to the success of any PL program.
Another culture shift revolves around how discrepant opinions are handled. Absolute certainty in medicine is hard to come by, and differences of opinion regarding a diagnosis are common in radiology. In the randomized score-based peer review system, a vote would be used to decide which image interpretation is more “correct.” This culture may be rooted in the traditional “learn-what” approach, such as reading an article or taking an online course, which may provide a sense of certainty about a diagnosis. Instead, PL emphasizes the idea of “learn-how”: sharing knowledge, offering suggestions, and discussing alternative diagnostic approaches when opinions differ.17
Thankfully, it is not necessary to design a PL program de novo. The ACR’s PL checklist and sample policies provide a great starting point for building a PL program that is founded on the IOM goals for Improving Diagnosis in Health Care.1,2 The main pillars of PL programs are a mechanism for managing discrepancies, a safe learning environment, a clear separation of learning from performance evaluation, and anonymization. There are ample opportunities to identify discrepancies in neuroradiology, for example, during comparison with prior studies; from secondary review for the Neuro-Oncology Tumor Board, other neuroimaging conferences, and reading room second-opinion consultations; during teaching sessions; and from clinical error reporting.
The PL champion is also tasked with creating a safe learning environment. Groups with higher psychological safety have a “shared belief that the team is safe for interpersonal risk-taking.”18 As a result, members of such groups practice open communication, are not afraid to voice concerns or ask questions, and seek feedback without fear of being judged. Achieving such a culture requires deliberate effort to flatten authority gradients and eliminate any language that implies blaming, shaming, or judging. To create safe learning environments, PL champions should set clear ground rules, serve as role models, foster nonjudgmental behavior by demonstrating openness to different perspectives, and actively discourage any dismissiveness or hostility.19
Challenge 2: Are There Resources for Running a PL Program?
Beyond a PL champion to drive and execute the program, resources are required for external regulatory reporting, such as for the ACR Accreditation Pathway for PL or TJC, and possibly for claiming CME credits. The required resources, beyond the PL champion’s time and enthusiasm, include software tools and support staff time.
A few commercial tools help manage various aspects of PL, but there is currently no comprehensive commercial tool that manages the entire process: starting when a radiologist submits a case, managing discrepancies, ensuring optimal clinical care, aiding PL champions in preparing for the PL conference, documenting PL performance targets, running a conference with anonymized PACS cases, capturing learning and improvement initiatives, and generating an annual report. Many commercial tools facilitate case submissions and case rating/classification (discrepancy, great catch, and so forth) but lack the ability to extract data for monthly and annual tracking of radiologist performance targets, such as the number of cases submitted per month and participation in PL conferences. Meeting attendance can be tracked separately, and this tracking is easier with virtual meeting platforms that automatically generate attendance reports at the conclusion of the meeting.
At a coauthor’s (N.K.) institution, a pediatric neuroradiology program uses the REDCap (Research Electronic Data Capture; projectredcap.org) research tool to drive a large portion of the PL process. Specifically, REDCap serves as the case-reporting tool; it can notify original readers of their reported cases and the reasons for submission, indicate whether the original reader needs to take any actions for clinical care, and provide the data tracking and summaries required by the ACR Accreditation Pathway for PL (Fig 1). Creating such tools and processes and then implementing them can be time-consuming and may require collaboration with other subject matter experts, which can delay implementation.
FIG 1. REDCap tool for PL. A, The submitter can indicate his or her name to receive credit against the monthly case submission requirement per the ACR Accreditation Pathway for PL. The submitter selects the reason for case submission, which includes discrepancies as well as interesting cases, good catches, and more. We use the PACS accession number as the case identifier. Any additional required actions can be entered, and the submitter attests to being responsible for ensuring optimal patient care. B, After the survey is submitted, a PDF is created that contains all survey input except the name of the case submitter. The submitter can enter the original reader’s email to quickly share the feedback. C, The REDCap tool allows the creation of reports that easily summarize information such as learning and improvement actions resulting from the PL program, which can be used for annual reporting for the ACR Accreditation Pathway for PL. D, We also have an administrative assistant monitor monthly case submissions and send an email with current submissions to every participating radiologist midway through the monthly reporting period.
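For groups that export PL submission data from REDCap or a commercial tool as a spreadsheet, a short script can automate much of the monthly tracking described above. The sketch below is illustrative only and is not the tooling used at the coauthor’s institution; the file name, column names (submitter, submission_date), and the monthly submission target are assumptions.

```python
"""
Minimal sketch (not the authors' actual tooling): tally monthly PL case
submissions per radiologist from a CSV export (eg, from REDCap or a
commercial PL tool) and flag anyone below an assumed submission target.
The file name, column names, and target of 2 cases/month are assumptions.
"""
import csv
from collections import Counter

SUBMISSION_TARGET = 2  # assumed monthly case-submission target per radiologist


def monthly_submission_counts(csv_path: str, month: str) -> Counter:
    """Count PL case submissions per radiologist for one month ('YYYY-MM')."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Assumed columns: 'submitter' and 'submission_date' (YYYY-MM-DD)
            if row["submission_date"].startswith(month):
                counts[row["submitter"]] += 1
    return counts


if __name__ == "__main__":
    counts = monthly_submission_counts("pl_submissions.csv", "2023-05")
    for radiologist, n in sorted(counts.items()):
        status = "meets target" if n >= SUBMISSION_TARGET else "below target"
        print(f"{radiologist}: {n} case(s) submitted ({status})")
```

A similar tally could be run against the attendance reports that virtual meeting platforms generate, keeping conference participation tracking in the same workflow.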
Presenting cases in an anonymized fashion can represent another challenge. Preserving anonymity in PL is important because it can positively affect learners’ perceptions of the value of PL, foster more critical peer feedback, and lead to improved performance.20 The extent of anonymity required may depend on the maturity of the safety culture within a group of neuroradiologists but generally involves anonymous notification of a discrepancy as well as anonymous case presentation during PL meetings. Anonymity fosters a nonpunitive atmosphere during review of cases among a group of attendings and trainees. Cases should be prepared by the PL champion to minimize the number of people who can identify the cases and readers. Interestingly, to meet the ACR Accreditation Pathway for PL criteria, the identity of anyone submitting cases must be captured to document performance targets, but the identity of the original reader whose report was flagged as a discrepancy is not required. Inclusion of interesting cases or great catches, however, can increase PL participation and transparency21 and represents an opportunity to celebrate individuals by name. Of note, many PACS do not completely mask patient identifiers, and PL champions need to be cautious when screen-sharing the entire PACS window. Certain virtual meeting applications allow sharing only a portion of the screen, which may be better suited to preserving anonymity. Another way to preserve anonymity is to create slide presentations, which can be very time-consuming.13
A key outcome of a PL program from the perspective of the ACR Accreditation Pathway for PL is the documentation of quality improvements that arose from the PL program. PL meetings and case discussions can uncover process and system issues that can then be addressed. Many issues may be addressed directly by the neuroradiologists in the PL group, such as changing CT and MR imaging protocols or reporting templates, but larger issues, such as a broken system for providing feedback from neuroradiologists to technologists, may require escalation to a dedicated improvement team.22 It may be challenging to set up a process for handing off such projects to a dedicated quality team, if the practice even has access to one.23
Challenge 3: How Does Peer Learning Meet TJC Requirements?
Another barrier affecting the transition from random score-based peer review to PL relates to external reporting of radiologists’ performance. Specifically, it is still unknown whether TJC will accept metrics derived from PL in place of the widely accepted performance-evaluation metric of agreement/disagreement rates generated by random score-based peer review.
TJC requires health care entities to provide both qualitative and quantitative data for OPPE and has traditionally accepted data from random score-based peer review to meet this requirement in radiology. It is technically possible to maintain random score-based peer review for external reporting purposes while also participating in PL, but this practice could cause confusion and mistrust among practicing radiologists, which would counteract the basic principles of a safe PL environment.
It may be better to replace score-based random peer review data reporting for OPPE with a different set of performance data. For example, 1 coauthor (N.K.) is proposing the use of report turnaround times (TAT) in conjunction with PL metrics (the number of cases submitted per radiologist per month, PL meeting participation, and so forth) as the quantitative data reported for OPPE21 (Table). Additional qualitative data that TJC may require for OPPE could be collected through other pathways. For example, annual peer evaluations could be collected, similar to those commonly used in the credentialing process (Fig 2). In addition, data from systems that track issues with physician practice could be used to reflect a qualitative assessment of radiologists’ performance (Fig 3).
FIG 2. A sample OPPE form allowing radiologists to evaluate their peers. This qualitative evaluation aligns with the 6 Accreditation Council for Graduate Medical Education (ACGME) core competencies and can serve to identify any practice concerns.
FIG 3. A sample report describing Focused Professional Practice Evaluation (FPPE) events to division directors, which represents a qualitative assessment that can be used for OPPE reporting to TJC. Division directors will not know the nature of the events that were investigated, but from this type of reporting over time they can easily glean whether a radiologist’s practice raises concern because of a higher-than-usual number of relevant issues and dispositions, such as behavior concerns, contract violations, and verbal or administrative interventions.
Table: Sample approach to defining quantitative data for OPPE use that could replace score-based random review data
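As a purely illustrative companion to the Table, the sketch below shows one way a practice might compile per-radiologist quantitative OPPE data combining report TAT with PL participation metrics. The data structures, field names, and toy numbers are hypothetical and do not represent any endorsed OPPE format.

```python
"""
Illustrative sketch of assembling the quantitative OPPE metrics discussed
above (report turnaround time plus PL participation). All field names,
data structures, and numbers are hypothetical examples, not an endorsed
OPPE definition.
"""
from statistics import median


def oppe_summary(tat_minutes: dict, cases_submitted: dict,
                 meetings_attended: dict, meetings_held: int) -> dict:
    """Build a per-radiologist quantitative summary for one OPPE period."""
    summary = {}
    for radiologist, tats in tat_minutes.items():
        summary[radiologist] = {
            "median_tat_min": round(median(tats), 1),
            "pl_cases_submitted": cases_submitted.get(radiologist, 0),
            "pl_meeting_attendance_pct": round(
                100 * meetings_attended.get(radiologist, 0) / meetings_held, 1
            ),
        }
    return summary


if __name__ == "__main__":
    # Toy data for two fictional radiologists over a 6-meeting reporting period
    report = oppe_summary(
        tat_minutes={"Radiologist A": [35, 50, 42], "Radiologist B": [60, 55, 80]},
        cases_submitted={"Radiologist A": 12, "Radiologist B": 7},
        meetings_attended={"Radiologist A": 6, "Radiologist B": 4},
        meetings_held=6,
    )
    for name, metrics in report.items():
        print(name, metrics)
```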
Neuroradiologists who are interested in discontinuing random score-based peer review will have to consider external reporting requirements and work with representatives of the relevant agencies to ensure that any new metrics satisfy them.
Challenge 4: What Is the Scientific Evidence Favoring PL?
It may be difficult for PL champions to convince leadership to abandon traditional peer review in favor of PL. There is some evidence in the scientific literature that PL is a better approach to improving diagnosis than randomized score-based peer review, but not necessarily that it can serve as a performance-evaluation tool for radiologists. There is, however, ample evidence that score-based peer review is a flawed performance-evaluation tool24 that has failed to demonstrate learning25 and to engage radiologists.14,26
If addendum rates are considered a surrogate marker of improved patient care, then PL far exceeds the effect of score-based random peer review,27 but data directly linking PL to improved patient outcomes are still missing.
There is some evidence that PL may lead to greater radiologist engagement.12,13 Physician burnout is associated with an increased risk of patient safety incidents as well as poor quality of care and low patient satisfaction.28 A recent report found that burnout among US neuroradiologists ranged from 49% to 79%.29 There remains an opportunity to generate additional scientific evidence linking PL to radiologist engagement metrics and linking improved engagement to improved patient outcomes.
Overall, the field of PL offers neuroradiologists an opportunity to take a scholarly approach. For example, evidence is needed that PL improves a neuroradiologist’s ability to reliably make an accurate diagnosis, that PL improves the cohesiveness of neuroradiology teams, and that PL can reduce burnout. There is a traditional view that high clinical volumes lead to lower academic output in neuroradiology, as measured by peer-reviewed articles, presentations, and abstracts.30 However, this simplistic linkage, which disregards factors such as seniority and work schedules, has been criticized by other neuroradiologists31 and should not deter neuroradiologists from engaging in roles that do not contribute to clinical output.
The field of PL in radiology is still evolving. Neuroradiologists have an opportunity to become leaders in this field. Meeting the challenges presented in this article can result in professional and personal growth, improved job satisfaction, and reduced feelings of burnout. These are important possible gains to consider when weighing the commitment required to fill a PL champion role.
Footnotes
Disclosure forms provided by the authors are available with the full text and PDF of this article at www.ajnr.org.
- Received May 24, 2023.
- Accepted after revision July 21, 2023.
- © 2023 by American Journal of Neuroradiology