Why You Should Care about the CPA UGRC Report
by Christopher Cimino, MD, FACMI, VP of Medical Academics, Kaplan Medical | May 12, 2021
What is the CPA UGRC and Why Should You Care?
Back in 2019, when the Invitational Conference on USMLE Scoring (InCUS) made the initial recommendation to make Step 1 Pass-Fail, it also recommended several other things. One was that a multi-organization coalition make recommendations on how to improve the match process. That task was given to the Coalition for Physician Accountability (CPA).
CPA includes as members every organization involved in medical education from medical school (undergraduate medical education, or UME) through residency (graduate medical education, or GME). The initial report was expected sometime in the summer of 2020, but world events intervened.
At that time, a more urgent problem was how the pandemic would interfere with fourth-year away electives, and in turn with students hoping to do audition electives to obtain their “best match.” CPA was positioned to address this problem, and since it was immediately urgent, that is what they focused on. In late 2020, they returned to the longer-range problem they’d been asked to address. On April 26th, 2021, the CPA Undergraduate Medical Education (UME) and Graduate Medical Education (GME) Review Committee (collectively the UGRC) published an initial report and preliminary recommendations and asked for public comment.
To put this in context, these recommendations are meant to be of the same order of magnitude as the change of Step 1 to Pass-Fail, the largest shift in medical education in this century so far. To be clear, CPA is a “coalition” of organizations and cannot implement these recommendations. The member organizations might have the power to implement some of them, but many will require additional cooperation and acceptance by students, faculty, school administrators, program directors, hospital administrators, the public, and state licensing boards.
In their preamble statement, they point out that the ecosystem of the UME to GME transition is “a decentralized collection of interdependent parts, each with their own interests, which currently do not communicate effectively or function cohesively.”
That remains true, and the work of this committee is a notable exception. The process that created this report was well thought out, and it shows. The report and recommendations are well crafted, creative, and provide a clear vision of how the process can change. But the report is silent on how any of this might be implemented. There is “some” momentum, but more is needed. Public comments are one way to sustain and increase that momentum.
Let Your Voice Be Heard
There is a lot to digest here. If you can, I urge you to read all 42 recommendations set forth in this 21-page report. But if you can’t, I have set down some of my thoughts here. Whether you agree or disagree, you can use my thoughts as a starting point for your own comments.
The broader the public commentary, the more equitable the final recommendations. The more equitable the recommendations, the broader support they will get. The broader the support, the more likely they will get implemented.
How to Grasp What is Being Recommended
In the preamble, they describe the recommendations as transactional, investigational, or transformational. They don’t say which are which; you can decide for yourself.
But they point out that:
- The transactional and investigational recommendations will be easier to carry out.
- The transformational ones are longer-lasting and of greater importance.
They also group the recommendations into categories based either on:
- Steps in the process (advising, assessment, away rotations, application, interviewing, match, post-match transition)
- Or overarching themes (oversight, diversity, faculty support, policy implications, research)
The logic in these two organizing principles makes sense for different overview purposes. However, neither approach is likely to work well for individuals trying to see how it all fits together.
Unpacking the CPA UGRC
The following was my approach in thinking about this report:
Application Overload and Applicant Competitiveness
Students have been flooding programs with applications because they are anxious about graduating from medical school without having a job. Having little information to inform them about their competitiveness at any given program, they are using a “shotgun approach” hoping some will find them acceptable. This problem is compounded by programs using an “airline booking approach” to scheduling interviews, with frequent stories of applicants getting interview offers only to find out all the slots are taken by the time they call back. What would help this situation would be if programs made public what information they use to make interview, or even applicant review, decisions. However, programs are reluctant because they are competing against each other.
As the number of applications ballooned, program directors have looked for ways to filter or limit the process because they can’t reasonably review 1,000 applications for a handful of positions. They have used Step 1 and Level 1 scores in the past. This turned those exams into super-high-stakes assessments, which have contributed to student anxiety. To reverse that trend, the exams were changed to Pass-Fail. There seems to be universal agreement that some other metric (e.g., Step 2 CK or Level 2 CE) will take their place. Another tactic has been to require unique personal statements for each program, which seems like an adversarial way to start what might be a multi-year relationship.
If the licensure exams all become Pass-Fail, then program directors will be looking for some other mechanism. The feeling is that much of what a medical school provides is subjective, and that schools have a conflict of interest: they are evaluating the same students whose success in the match they are trying to ensure.
Several of the UGRC recommendations address this complex dynamic, and are worth considering together:
Recommendations 10, 13, 22, and 23 suggest standardizing the format and data included in the Medical Student Performance Evaluations (MSPEs) and Letters of Recommendation (LORs). MSPEs are already standardized, of course, but more could be done. LORs currently are not; the committee recommends doing for LORs what was once done for the Dean’s Letters of the past when they became the MSPE, turning them into Structured Evaluation Letters (SELs). These recommendations go further in saying some of the information should exist in discrete data fields, which implies search and filter capabilities. Recommendation 22 goes one step further in saying that data fields should also be applied to other documents, such as transcripts and personal statements. A note of caution is introduced in Recommendation 23, which says thought should be given to which fields should be allowed to serve as filters, and to anticipating unintended consequences before we recreate a scenario similar to what we had with Step 1 and Level 1 scores. These recommendations fall into the Assessment and Application sections and certainly address program director concerns.
Recommendations 20 and 21 address student concerns about program information. One is to provide a comprehensive, free-of-charge, verified database of program information, and the other is to use the same data field information to provide deidentified historical data about the characteristics of applicants to specific programs. Students would be able to filter this data in the same way that program directors would filter applicants. In that way, they could find applicants with characteristics similar to their own and see which programs those applicants got interviews with and where they eventually matched.
I believe these recommendations should be implemented collectively, because they are all tied to the complex application dynamics described above. Having one without the others would create an imbalance. Stakeholders, from applicants and schools to programs and the ECFMG, would all need to be represented.
Recommendation 28, which is described as part of the match process, also aims to reduce the number of applications. It does this by suggesting an early match, which would remove the most competitive programs and applicants from the process and may in turn reduce the chaos for everyone else.
I endorse this recommendation even though it is likely to have a small effect on application numbers initially. After all, people will still hope they can get those most competitive spots, because no one really knows just how competitive they are. It also would need to be rolled out gradually, because a dramatic change would introduce unintended consequences.
The advantage of this approach is that it could potentially be implemented through cooperation between just the National Resident Matching Program (NRMP) and the Electronic Residency Application Service (ERAS); the fewer the players, the greater the chance of success. After the first year, there would be public information about which programs are the most competitive, which would guide future applicants: not much information, but more than we have now.
Applicants are sending many applications as a way to increase their chances of getting interviews, but there is evidence that the most competitive applicants receive more interview invites than they need. This disadvantages both other applicants and the programs. The programs end up using interview spots for applicants who likely will go to some other program. Less competitive applicants end up with no spots.
Recommendation 27 proposes a limit on specialty-specific interviews per applicant. This means that someone who receives more interview invites than this limit will need to make some decisions about their priorities and decline some of the invites. This may seem a little scary for someone who is unsure of their chances of matching, but the sheer number of invites should tell them how competitive they are. Presumably, someone who has reached their interview limit would then withdraw all applications from programs where they have not yet gotten an interview. Both these steps would reduce the burden on applicants and programs.
I endorse this recommendation, although I foresee difficulties in it being implemented. There currently is no uniform way that programs schedule interviews. Some use ERAS, but many do not. Many use third-party software to manage the complex schedule and rescheduling process. Confirmations might be by email, phone, or text.
Unless uniformity is brought to all programs within a specialty at the same time, there will be either no benefit or new inequities in the implementation. I see this as a daunting task.
Diversity & Inclusion
There is a section on improving diversity and equity and reducing bias. All the recommendations in this section (Recommendations 16-19) make sense, and I endorse them. There are also more subtle phrases in many other recommendations that promote these goals. For example, the limit on interviews (Recommendation 27) reduces the financial burden on applicants, and so reduces any advantage applicants might have simply because they are affluent.
For similar reasons, Recommendation 15, which asks us to examine the purpose and utility of away electives, points out that their cost disadvantages applicants with economic limitations. Are these electives really learning experiences, especially three or four in the same discipline, or an extension of the interview process?
Recommendations 25 and 26 make further suggestions to standardize the interview process and to study whether the remote virtual interviews seen in 2020 meet the needs of programs while reducing costs for applicants.
Recommendations 20 and 21 are about providing more information about both current programs and historical applicant success, and both point out that such data should be freely available to applicants.
Recommendation 6 explicitly says career advising support should be equally available to domestic and international applicants. This is an important element of diversity that is frequently overlooked.
The core diversity recommendations indicate the problem is large and difficult, and will require more freely shared information, faculty development, and continued development of new residents after they begin their programs.
I am strongly in favor of all these recommendations.
Competency-Based Assessment
There has been growing interest in and usage of Entrustable Professional Activities (EPAs) as an approach to a competency-based assessment process. Recommendations 7, 8, 9, and 12 do not explicitly mention EPAs, but strongly endorse a shift to competencies. This aligns medical education with the goals of physician practice and moves it away from proxies like standardized, scored knowledge exams. Implicit is the idea that making the licensure exams Pass-Fail should not mean a new scored exam takes their place to serve the purposes of the residency match process.
Although I endorse competencies, their implementation is complex. Creating a comprehensive set of competencies currently leads to comprehensive, exhaustive, and exhausting checklists. An approach gaining momentum is the Global Rating Assessment, which, with faculty development, can correlate well with exhaustive checklists. But it is easy to see that maintaining the required level of faculty development is a step that might be skipped, eventually leading to subjective assessments that are more prone to bias and to gaps in competency coverage. There are other approaches to improving the coverage of a competency-based assessment while reducing the demand on evaluators, but I am not aware of any schools trying them.
I see these as the transformational recommendations the committee has put forward, and it will take years before we even approach this goal.
Recommendation 24, in the application section, looks for ways to create parity between the USMLE and COMLEX-USA licensure examinations as part of the application process. DO applicants should not have to feel that they need to take two sets of exams, and double their expenses, to have parity with MD applicants. However, I think that as long as DO students are allowed to take both exams, there will be those who take both because they perceive it will increase their chances. Even if that is not true, the perception creates an inequity that advantages those who are affluent.
Recommendation 39 calls for standardization of the licensure process, at least as it applies to the first year of training. Some states require a temporary license and some do not. The process can be time-consuming and might not be completed between the time of graduation and the start of residency. This just seems like common sense.
Recommendation 40 calls for CMS rules to change how the “Initial Residency Period” is defined, which would give more flexibility to those who discover that the specialty of their dreams might not be what they want their intended career to be.
Support, Research, and Oversight
I have only covered about two-thirds of the recommendations here. The recommendations I haven’t mentioned specifically, I still support. I have skipped them either because they are less controversial or dramatic than the ones I have discussed, or because I foresee challenges in implementing them. All of the initiatives mentioned will require a culture change, and some of that can be instituted as part of the learning process for students.
The need for faculty and clinicians to change will require its own faculty development effort. Support is also needed for applicants transitioning to residency. This includes information sharing; protected time and resources to assist with the geographical, social, and emotional transition; and a smooth handoff of the competency development process to programs. There remain some unanswered questions, especially around diversity and equity. The report includes recommendations that these issues be studied.
Perhaps the most important part of the report is Recommendation 1, which calls for a group to provide ongoing oversight of the changes. This is an especially important but also challenging task, as there is no one group with centralized authority. A half-hearted implementation of any of the recommendations will undercut this effort and make any future effort more difficult. Implementations that favor specific stakeholders will only cement and codify our existing problems and inequities.
This initial report is an excellent starting point, providing both creativity and balanced solutions. It suggests to me an obvious “Recommendation Zero”: that CPA should be charged as the coalition that provides the necessary oversight and carries on this work.
Dr. Cimino has earned a reputation internationally as an award-winning medical educator. He was the founding Assistant Dean for Educational Informatics at Albert Einstein College of Medicine and former Associate Dean for Student Affairs at New York Medical College. He is board certified in Neurology and Clinical Informatics. He served as a member of the NBME Step 1 Behavioral Science Committee and the NBME End of Life Care Task Force.