Education and Training

Going Global: Scaling the Artificial Intelligence Literacy Course to an International Audience

Authors
  • Jordan D Perchik (University of Alabama at Birmingham)
  • Houman Sotoudeh (University of Alabama at Birmingham)
  • Steven A Rothenberg (University of Alabama at Birmingham)
  • Srini Tridandapani (University of Alabama at Birmingham)
  • Vishwa Parekh (University of Maryland)
  • Paul H Yi (University of Maryland)
  • Marcelo Straus Takahashi
  • Hari Trivedi (Emory University)
  • Andrew Dennis Smith (University of Alabama at Birmingham)

Abstract

Introduction: Applications of artificial intelligence (AI) in radiology continue to increase every year; however, most radiology residencies lack a dedicated AI education curriculum. Fundamental AI education resources are even sparser for trainees in low- to middle-income countries and under-resourced healthcare systems. This study assesses the effectiveness and scalability of the AI Literacy Course, a free, remote AI education curriculum designed to increase understanding of fundamental AI terms, methods, and applications in radiology among radiology trainees in the United States and internationally.

Method: A week-long AI in radiology literacy course for radiology trainees was held October 3-7, 2022. Ten 30-minute lectures utilizing a remote learning format covered basic AI terms and methods, clinical applications of AI in radiology by three different subspecialties, and special topics lectures. A proctored, hands-on clinical AI session allowed participants to directly use an FDA-cleared, AI-assisted viewer and reporting system for advanced cancer. Pre- and post-course electronic surveys were distributed to assess participants’ knowledge of AI terminology and applications, as well as their interest in AI education.

Results: A total of 25 residency programs throughout the US participated in the course, with participants attending from 10 countries. An average of 150 participants viewed the course per day. Nearly all participants reported insufficient exposure to AI in their radiology training (95.8%). Participant knowledge of fundamental AI terms and methods increased after completion of the course, with an average pre-course evaluation score of 8.3/15 and a post-course evaluation score of 10.0/15 (p=0.01).

Conclusion: The scalability of the AI Literacy Course demonstrates a viable model to bring accessible fundamental AI education to radiology trainees in the United States and internationally.

Keywords: Artificial Intelligence, Machine learning, Remote education

How to Cite:

Perchik JD, Sotoudeh H, Rothenberg SA, Tridandapani S, Parekh V, Yi PH, et al. Going global: scaling the artificial intelligence literacy course to an international audience. J Glob Radiol. 2024;10(1):783.

Rights:

Copyright © 2024 Perchik et al. This open access article is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits use, distribution, and reproduction for noncommercial purposes, provided the original author and source are credited and the original material is not modified.

Published on 24 May 2024
Peer Reviewed

Introduction

Artificial intelligence (AI) has made a substantial impact on the practice of radiology, and the scope and scale of the radiology AI market continue to increase. The majority of FDA-cleared AI applications in healthcare are related to radiology, with new applications securing FDA clearance every month (1-3). Yet despite the increasing number of applications, the growing body of research and publications, and the expanding clinical integration of AI in radiology, the majority of radiology trainees continue to report that AI education in their diagnostic radiology training program is inadequate (4-12).

Opportunities for AI education have become more available, but barriers to access remain (13-16). Few radiology training programs have dedicated AI curricula; in a recent survey of 62 US radiology training programs, fewer than half had any formal AI educational initiatives, and only 3% advertised their training pathway to residents or fellows (17). Access to AI education and hands-on experience with AI is even sparser in low-resource settings and in low- and middle-income countries (LMICs). The vast majority of radiology trainees in the United States and globally report that AI will have a substantial impact on the practice of radiology but that they have not had sufficient training in AI (11,12,18-21).

The AI Literacy Course, led by Artificial Intelligence in Radiology Education (AIRE), delivered remote lectures on fundamental terms, methods, and applications of AI in radiology for radiology training programs in the Southeastern United States (22). In 2021, the course included nine training programs and reached over 150 radiology trainees. Participants demonstrated increased knowledge of AI terms and methods and increased comfort with AI in radiology. In 2022, the course was expanded and made accessible, free of charge, to radiology trainees and practicing radiologists around the world. Remote education has become widely used in medical education, but leading a nationwide or international lecture series raises important considerations (23,24). By surveying participants of this AI Literacy Course, we assessed its effectiveness with an expanded international audience.

Methods

Outreach

Program directors and coordinators from 33 radiology training programs in the Southeastern United States were invited by email to register for the course. The AI Literacy Course directors partnered with RAD-AID and Health4theWorld, two of the largest international radiology non-governmental organizations (NGOs), to increase access to the course for international participants. These organizations were selected due to their large international audience, particularly in LMIC settings, their experience in remote education, and their existing AI education initiatives for their member programs.

The course was promoted on Twitter (now X; San Francisco, California) with social media outreach targeting radiology resident groups. Course lecturers and partner programs (radiology training programs and the international radiology outreach organizations) were encouraged to promote the course on their personal and organizational social media accounts.

Curriculum

The curriculum for the AI Literacy Course maintained the overall structure of the 2020 and 2021 courses. This curriculum was developed by a panel of three academic radiologists and a lead radiology resident, each with greater than five years of experience in AI research and education. To ensure the course met the needs of their members, the course goals and curriculum were discussed with RAD-AID and Health4theWorld leaders during curriculum development. Two 30-minute lectures were held each day and broadcast via Zoom (San Jose, California). Lectures were held from 12-1pm CST on October 3-7, 2022, and recorded lectures were posted to the course YouTube channel one week after the course. All lectures were in English.

The primary goals and objectives of the course were to establish a foundational knowledge base of AI terms, methods, and applications in radiology. Although the course was developed primarily with the educational needs of radiology trainees in mind, it was available free of charge to all participants, including medical students, residents, fellows, postdoctoral researchers, and attending physicians. The AI Literacy Course lectures rotate on a two-year cycle, highlighting different subspecialties and special topics each year; the 2021 course highlighted applications of AI in neuroradiology, abdominal imaging, breast imaging, and cardiothoracic imaging (22), while the 2022 course covered AI applications in pediatric imaging, musculoskeletal imaging, and nuclear imaging. Topics covered in the 2022 AI Literacy Course can be found in the course syllabus (Figure 1).


Figure 1. The 2022 AI Literacy Course schedule and curriculum.

Sessions included introductory lectures on fundamental terms and methods of AI, lectures on applications of AI by subspecialty, special topics lectures, and a hands-on AI demonstration with an FDA-cleared radiology AI application. The hands-on event featured an augmented image tool for oncologic imaging that provides automated segmentation of index lesions and tracks change in lesion size on restaging exams. Participants utilized the tool on a set of prepared baseline and restaging CT exams with metastatic cancer of various types on the vendor’s test server, and live technology support was provided during the demonstration.

Evaluation

A voluntary, anonymous, IRB-approved survey and evaluation were distributed to course participants before and after the course. A seven-question pre-course survey and an eight-question post-course survey assessed trainees' subjective knowledge of AI terms and methods, opinions of the impact of AI on radiology, and opinions of the AI Literacy Course (post-course survey only), scored on a Likert scale of 1 to 5. Participants were prompted to report what prior experience they had with AI, if any. The pre- and post-course evaluations consisted of the same question set; however, the question order was randomized between the pre- and post-course evaluations. The 2022 course evaluations were adapted from the 2020/2021 course evaluation (22) to allow a more direct comparison of performance between the two cohorts; answer order and wording were altered, and participants were not informed of their results or of the correct answers during the 2020, 2021, or 2022 courses. On the post-course survey, participants were able to provide feedback on the course and suggest topics for future courses.

Survey and evaluation documents were hosted on Google Forms (Mountain View, California). Statistical analyses were performed using Microsoft Excel (Redmond, Washington). Pre- and post-course evaluation scores were compared using a Student's t-test. Survey data were assessed using the Wilcoxon rank-sum test. A p-value of less than 0.05 was considered statistically significant.
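As an illustration of the analyses described above, a minimal sketch using SciPy equivalents of the same tests is shown below (the study itself used Excel). The arrays are placeholder values for demonstration only, not the published survey or evaluation data.

```python
# Illustrative sketch of the statistical comparisons described above.
# The study's analyses were performed in Excel; this uses SciPy equivalents.
# All values below are placeholders, NOT the published data.
import numpy as np
from scipy import stats

# Hypothetical 15-point evaluation scores for pre- and post-course respondents
pre_eval = np.array([7, 8, 9, 6, 10, 8, 9, 7, 8, 11])
post_eval = np.array([9, 11, 10, 8, 12, 10, 11, 9, 10, 13])

# Unpaired Student's t-test comparing pre- and post-course evaluation scores
t_stat, p_eval = stats.ttest_ind(pre_eval, post_eval)

# Hypothetical 1-5 Likert responses (familiarity with AI terms and methods)
pre_likert = np.array([2, 1, 3, 2, 2, 1, 3, 2])
post_likert = np.array([3, 3, 4, 2, 3, 3, 4, 3])

# Wilcoxon rank-sum test for the ordinal survey responses
z_stat, p_survey = stats.ranksums(pre_likert, post_likert, alternative="two-sided")

print(f"Evaluation t-test p = {p_eval:.3f}; survey rank-sum p = {p_survey:.3f}")
```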

An equivalence test was used to determine whether the results of the 2022 post-course evaluation were equivalent or inferior to those of the prior iterations of the course in 2020 and 2021. The sample size and standard deviation were used to calculate the inferiority limit (Δ), yielding an equivalence interval of 2.5/15 (17%). To establish equivalence, the 95% confidence interval of the 2022 evaluation scores had to fall within +/- 2.5/15 (17%) of the combined 2020/2021 post-course evaluation mean (25,26).
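The equivalence check itself reduces to computing a confidence interval for the difference between the 2022 mean and the 2020/2021 baseline mean and verifying that it lies entirely within ±Δ. A short sketch of that calculation from summary statistics is shown below; the critical value and variance handling are assumptions, since the authors' exact Excel calculation is not specified, so the resulting interval is an approximation rather than a reproduction of the reported -1.73 to 1.53.

```python
# Equivalence check from summary statistics: a sketch, not the authors' exact
# calculation (the z critical value and unpooled variances are assumptions).
import math

def equivalence_check(mean_test, sd_test, n_test,
                      mean_base, sd_base, n_base,
                      delta=2.5, z=1.96):
    """Return the ~95% CI of (test mean - baseline mean) and whether the
    interval lies entirely within the equivalence margin +/- delta."""
    diff = mean_test - mean_base
    se = math.sqrt(sd_test**2 / n_test + sd_base**2 / n_base)
    ci = (diff - z * se, diff + z * se)
    return ci, (-delta < ci[0] and ci[1] < delta)

# Summary statistics reported in the Results section
ci, equivalent = equivalence_check(mean_test=10.0, sd_test=3.1, n_test=43,
                                   mean_base=10.1, sd_base=3.2, n_base=34)
print(ci, equivalent)  # approximately (-1.5, 1.3), True
```

With these assumptions the interval comes out near (-1.5, 1.3), slightly narrower than the reported -1.73 to 1.53 (presumably reflecting a different critical value or variance treatment); either interval falls well within the ±2.5 margin, supporting the equivalence conclusion.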

Results

Course invitations were sent directly to 33 radiology training programs in the Southeast, and participants from 25 programs in total joined the course, representing over 500 US radiology trainees. Individuals from participating training programs could view the course as a group or join the course individually. A total of 347 individual participants from 10 countries registered for the course, including the US, Colombia, Grenada, the Netherlands, Cameroon, Nigeria, Egypt, Lebanon, Saudi Arabia, and India. Because some training programs viewed the course lectures as a group, daily participant numbers were estimated. The number of accounts logged in for daily lectures ranged from 30 to 75; allowing for individual account logins and variable attendance within the group logins, a conservative estimate of 150-250 daily participants was made, with an estimated daily average of 175.

There were 96 responses to the pre-course survey, 46 responses to the pre-course evaluation, and 45 responses to both the post-course survey and evaluation. The estimated total number of participants eligible to respond to the pre- and post-course survey and evaluation was 500 radiology resident participants plus 347 individual participants, resulting in response rates of 11.3% (96/847) and 5.4% (46/847) for the pre-course survey and evaluation, respectively, and a response rate of 5.3% (45/847) for the post-course survey and evaluation. Partial responses were accepted for the pre- and post-course surveys, which resulted in different denominators for some questions; the total number of responses for each question is specified in the denominator. Partial responses were not accepted for the evaluation, and two post-course evaluations were excluded due to incomplete submissions. The majority of pre-course survey respondents were radiology residents (71.9%, 69/96), followed by medical students (18.8%, 18/96), attending physicians (8.3%, 8/96), and postdoctoral research fellows (1%, 1/96).

Respondents reported that AI education is important to radiology training, with 74.8% (71/95) rating AI education as important or very important. The majority of respondents, 64.2% (61/95), reported low or very low familiarity with fundamental terms, methods, and applications of AI, and 95.8% (92/96) reported that they had not had sufficient training in AI. Compared with results from the 2021 survey, a higher proportion of respondents had prior experience with AI. Respondents reporting "no prior experience with AI" decreased from 77.2% (71/92) in 2021 to 59.1% (55/93) in 2022, with the largest proportions of remaining respondents reporting participation in AI research (12.9%, 12/93), previous participation in the AI Literacy Course (11.8%, 11/93), and attendance at AI lectures at their training program (10.8%, 10/93) (Figures 2a-b).


Figures 2a-b. Pre-course survey responses to the question "What experience have you had with AI, including prior to residency?" Survey respondents could select more than one answer. A) Results from the 2021 course demonstrate that 77.2% of respondents had no prior experience with AI. B) Results from the 2022 course demonstrate that 59.1% of respondents had no prior experience with AI. Most reported AI experience came from prior AI research (12.9%), previous participation in the AI Literacy Course (11.8%), or lectures through their radiology training program (10.8%).

On the post-course survey, 93.2% (41/44) of respondents reported that the course increased their understanding of AI, and 86.4% (38/44) reported interest in participating in radiology AI courses in the future. Subjective comfort with AI also increased significantly; respondents rated their familiarity with AI terms and methods at 2.1 out of 5 on the pre-course survey (1 = not at all familiar, 5 = very familiar) (Figure 3a), which increased to 3.0 out of 5 on the post-course survey (p<0.001) (Figure 3b).

a) Pre-course survey responses (95 responses) to: "Please rate your familiarity with fundamental terms, methods, and applications of artificial intelligence."

b) Post-course survey responses (44 responses) to the same question.

Figures 3a-b. Survey results showing participant comfort with fundamental terms, methods, and applications of artificial intelligence (AI) before completing the AI Literacy Course (a) and after the course (b). Participant familiarity with AI terms, methods, and applications was scored on a Likert scale of 1 to 5, with 1 being not at all familiar and 5 being very familiar.

The median score on the pre-course evaluation was 9/15 with a mean score of 55% (8.3 out of 15) (Figure 4a). There was a significant increase in participant scores on the post-course evaluation (Figure 4b), with a median score of 11/15 and a mean of 67% (10.0 out of 15) (p=0.01).


Figures 4a-b. Course evaluation scores from the 2022 AI Literacy Course on the pre-course evaluation (a) and post-course evaluation (b). A 15-question evaluation was sent to course participants before and after completion of the course. Number correct (max. 15) refers to the number of questions, out of 15, that each participant answered correctly; number of participants represents the number of participants with each score. There was a significant increase in evaluation scores from the pre-course to the post-course evaluation (p=0.01), with the average score increasing from 8.3/15 on the pre-course evaluation to 10.0/15 on the post-course evaluation.

The combined data from 2020 and 2021 yielded a post-course evaluation mean score of 68% (10.1/15) and a standard deviation of 3.2 from a total of 34 respondents. Figure 5a shows the distribution of post-course evaluation scores from the combined 2020 and 2021 courses, and Figure 5b shows the post-course evaluation scores from the 2022 course. Using the 2022 mean score of 67% (10/15) and standard deviation of 3.1 from a total of 43 respondents, a 95% confidence interval of the difference from the combined 2020/2021 mean was calculated, yielding an interval of -1.73 to 1.53. This 95% confidence interval was within the equivalence interval of -2.5 to 2.5 (-Δ to +Δ), confirming statistical equivalence (Figure 5c).


Figures 5a-c. Post-course evaluation scores and equivalence margin comparing the combined 2020 and 2021 AI Literacy Course cohorts with the 2022 AI Literacy Course. The post-course evaluation consisted of a 15-question exam scored on a scale of 0 to 15. A) Post-course evaluation scores for the 2020 and 2021 AI Literacy Courses; results for the two courses were combined into one cohort. B) Post-course evaluation scores for the 2022 AI Literacy Course. C) Equivalence test using the combined 2020 and 2021 results as the baseline data and the 2022 results as the test data. The inferiority limit (Δ) was determined to be 2.5, meaning that the 95% confidence interval of the 2022 evaluation had to fall within +/- 2.5/15 (17%) of the combined 2020 and 2021 evaluation mean to establish equivalence (25,26). The 95% confidence interval of the 2022 evaluation results was -1.73 to 1.53 (black line), which is within the equivalence interval of -2.5 to 2.5 (-Δ to +Δ, green box). The black circle at 0 represents true equivalence, or "no difference".

Discussion

There continues to be an unmet need for AI education in radiology training. The program outreach broadened the course's audience, increasing the number of partner programs from nine to 25, and the partnership with RAD-AID and Health4theWorld helped bring the course to participants in 10 countries. The vast majority of participants in the course reported a need for more AI education and inadequate AI exposure in their training. A higher proportion of participants reported prior experience with AI than in 2021, owing to more programs integrating AI-related lectures and AI research (Figure 2b). Notably, 11.8% (11 of 93) of survey respondents reported attending the AI Literacy Course previously, which demonstrates the growing impact and value of the course. An increasing number of resources have become available for AI education for radiologists in training and in practice, including those led by training programs and those led by professional societies (11,18-20), but barriers remain to effectively addressing the broader need for AI education in the US and internationally. Current AI education programs are often limited to trainees in a particular program, and courses sponsored by professional societies can cost hundreds of dollars in tuition (13-16). Fundamentally, the AI Literacy Course strives to address this disparity in AI education by providing free, accessible, and clinically relevant lectures for radiologists who will use AI in their practice.

The course was successful in increasing participant knowledge of AI, as demonstrated by the significant increase in evaluation scores after completion of the course, despite the challenges of a larger audience spanning multiple time zones and representing various practice settings and backgrounds. Course directors anticipated challenges in reaching participants in different time zones, specifically participants' ability to attend lectures consistently when lectures occurred during clinical hours or after hours, as well as challenges with language barriers, especially in relation to specialized AI vocabulary. Coordinating with RAD-AID and Health4theWorld, as well as the partnered radiology training programs in the US, enabled the course directors to develop a curriculum that met the needs of radiologists in these different settings, and despite the new challenges, the post-course evaluation scores were statistically equivalent to those of previous iterations of the course.

Hands-on experience with radiology AI continues to be a cornerstone of the course. Residents have reported in prior studies that hands-on experience with AI is an important aspect of AI education; allowing learners to see the concepts of AI in action helps demonstrate how AI can be used in clinical practice (10,11,16,22). The hands-on session for this course was held on a test server with dedicated use cases. Representatives from the vendor led a short orientation and provided real-time IT support during the session. The hands-on session was one of the highest-rated sessions, and course feedback demonstrated participant interest in continuing these sessions in future courses.

Limitations of the study include the low response rates on both the pre- and post-course surveys and evaluations. The total number of participants and daily participation in the lectures could only be estimated because participants were able to register and view the lectures as a group. The calculation of the equivalence margin was limited by the sample size of the prior courses in 2020 and 2021, which were treated as the baseline. The pre- and post-course evaluations featured questions that had been present in prior course evaluations in 2020 and 2021, which could have resulted in higher mean scores on the 2022 evaluation. Several measures were taken to address this possibility, including randomizing question order, rewording questions between each iteration of the course, and not informing participants of the correct answers to the evaluation questions; however, it is still possible that prior participants recalled questions from past courses. It should be noted, however, that only 11.8% of participants reported attending the course previously, so the effect of question recall on the mean evaluation score was likely limited. The mean post-course evaluation score for the 2022 cohort, 10/15, was nearly identical to the baseline mean of 10.1/15. However, the calculation could only support an equivalence margin of 2.5; it is possible that, had a narrower equivalence margin been selected, the null hypothesis could not have been rejected.

The course directors for the AI Literacy Course and AIRE plan to continue expanding the course and making AI education resources open and available to all learners. Lectures were recorded and uploaded to the AIRE channel on YouTube (San Bruno, California) one week after the conclusion of the course. The recorded lectures allow learners to participate in the course asynchronously; however, only participants who attended the live course were evaluated in this project. The recorded lectures also provide accessibility features, including closed captions and live subtitle translation, although these services are imperfect and prone to error, especially with highly specialized vocabulary (27). With these limitations in mind, broadening the audience of the AI course to non-English-speaking participants and programs may be best accomplished by coaching local leaders in radiology AI to develop curricula in their native languages. AIRE provides sample and past curricula on the course website, which is open access and free to download for instructors seeking to lead their own program. Future courses will broaden the types of hands-on experiences available to course participants, highlight the types of AI applications that radiologists may encounter in clinical practice, and provide opportunities for participants to train their own algorithms on curated data sets.

Conclusion

There is a continued need for AI education in radiology training throughout the United States and globally. Although opportunities for AI experience have increased, disparities in accessible content remain and are most significant in lower-resourced settings. The AI Literacy Course was successfully expanded from a regional course in the Southeastern United States to reach an audience of over 500 participants across 25 training programs and 10 countries. The course was as effective in increasing understanding of radiology AI for a global audience as it was for its smaller regional audience in previous years, and its success suggests that this open model of remote lectures is scalable and effective for a global audience.

References

  1. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. npj Digit Med. 2020;3:118. Available from: https://doi.org/10.1038/s41746-020-00324-0

  2. Artificial intelligence and machine learning (AI/ML)-enabled medical devices [Internet]. Washington, DC: Food & Drug Administration; 2023 Dec 6 [cited 2022 Aug 10]. Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices.

  3. Dreyer KJ, Wald C, Allen B Jr, et al. AI Central [Internet]. Reston, VA: American College of Radiology Data Science Institute; 2024 [cited 2023 Feb 11]. Available from: https://aicentral.acrdsi.org/

  4. Nguyen GK, Shetty AS. Artificial intelligence and machine learning: opportunities for radiologists in training. J Am Coll Radiol. 2018;15(9):1320-1321. Available from: https://doi.org/10.1016/j.jacr.2018.05.024

  5. Langlotz CP, Allen B, Erickson BJ, Kalpathy-Cramer J, Bigelow K, Cook TS, et al. A roadmap for foundational research on artificial intelligence in medical imaging: from the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology. 2019;291:781-91. Available from: https://doi.org/10.1148/radiol.2019190613

  6. Wood MJ, Tenenholtz NA, Geis JR, Michalski MH, Andriole KP. The need for a machine learning curriculum for radiologists. J Am Coll Radiol. 2019;16(5):740-742. Available from: https://doi.org/10.1016/j.jacr.2018.10.008

  7. Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional neural networks for radiologic images: a radiologist’s guide. Radiology. 2019;290:590-606. Available from: https://doi.org/10.1148/radiol.2018180547

  8. Slanetz PJ, Daye D, Chen P, Salkowski LR. Artificial intelligence and machine learning in radiology education is ready for prime time. J Am Coll Radiol. 2020;17(12):1705-1707. Available from: https://doi.org/10.1016/j.jacr.2020.04.022

  9. Tejani AS, Fielding JR, Peshock RM. What should radiology residency and fellowship training in artificial intelligence include? a trainee’s perspective – radiology in training. Radiology. 2021;299(2). Available from: https://doi.org/10.1148/radiol.2021204406

  10. Gowda V, Jordan SG, Awan OA. Artificial intelligence in radiology education: a longitudinal approach. Acad Radiol. 2022;29(5):788-790. Available from: https://doi.org/10.1016/j.acra.2021.08.022

  11. Salastekar NV, Maxfield C, Hanna TN, Krupinski EA, Heitkamp D, Grimm LJ. Artificial intelligence/machine learning education in radiology: multi-institutional survey of radiology residents in the United States. Acad Radiol. 2023;30(7):1481-1487. Available from: https://doi.org/10.1016/j.acra.2023.01.005

  12. Yang L, Ene IC, Belaghi RA, Koff D, Stein N, Santaguida PL. Stakeholders’ perspectives on the future of artificial intelligence in radiology: a scoping review. Eur Radiol. 2022;32:1477-1495. Available from: https://doi.org/10.1007/s00330-021-08214-z

  13. Lindqwister AL, Hassanpour S, Lewis PJ, Sin JM. AI-RADS: An artificial intelligence curriculum for residents. Acad Radiol. 2021;28(12):1810-1816. Available from: https://doi.org/10.1016/j.acra.2020.09.017

  14. Wiggins WF, Caton MT Jr, Magudia K, Rosenthal MH, Andriole KP. A conference-friendly, hands-on introduction to deep learning for radiology trainees. J Digit Imaging. 2021;34:1026-1033. Available from: https://doi.org/10.1007/s10278-021-00492-9

  15. The RSNA Imaging AI Certificate Program [Internet]. Oak Brook, IL: Radiological Society of North America; 2024 [cited 2023 Feb 18]. Available from: https://www.rsna.org/ai-certificate

  16. Hu R, Rizwan A, Hu Z, Li T, Chung AD, Kwan BYM. An artificial intelligence training workshop for diagnostic radiology residents. Radiol Artif Intell. 2023;5(2). Available from: https://doi.org/10.1148/ryai.220170

  17. Li D, Morkos J, Gage D, Yi PH. Artificial intelligence educational & research initiatives and leadership positions in academic radiology departments. Curr Probl Diagn Radiol. 2022;51(4):552-555. Available from: https://doi.org/10.1067/j.cpradiol.2022.01.004

  18. Gorospe-Sarasúa L, Muñoz-Olmedo JM, Sendra-Portero F, de Luis-García R. Challenges of radiology education in the era of artificial intelligence. Radiología. 2022;64(1):54-59. Available from: https://doi.org/10.1016/j.rx.2020.10.003

  19. Abuzaid MM, Elshami W, McConnell J, Tekin HO. An extensive survey of radiographers from the Middle East and India on artificial intelligence integration in radiology practice. Health Technol. 2021;11:1045-1050. Available from: https://doi.org/10.1007/s12553-021-00583-1

  20. Mollura DJ, Culp MP, Pollack E, Battino G, Scheel JR, Mango VL, et al. Artificial intelligence in low- and middle-income countries: innovating global health radiology. Radiology. 2020;297(3):513-520. Available from: https://doi.org/10.1148/radiol.2020201434

  21. Hathaway QA, Lakhani DA. Fostering artificial intelligence education within radiology residencies: a two-tiered approach. Acad Radiol. 2023;30(9):2097-2098. Available from: https://doi.org/10.1016/j.acra.2022.12.009

  22. Perchik JD, Smith AD, Elkassem AA, Park JM, Rothenberg SA, Tanwar M, et al. Artificial intelligence literacy: developing a multi-institutional infrastructure for AI education. Acad Radiol. 2023;30(7):1472-1480. Available from: https://doi.org/10.1016/j.acra.2022.10.002

  23. Lanier MH, Wheeler CA, Ballard DH. A new normal in radiology resident education: lessons learned from the COVID-19 pandemic. Radiographics. 2021;41(3). Available from: https://doi.org/10.1148/rg.2021210030

  24. Saxena A, Khamis S. “I’ll see you on Zoom!” international educators’ perceptions of online teaching amid, and beyond, Covid-19. Arab Media Soc. 2021;30:1-21. Available from: https://www.arabmediasociety.com/ill-see-you-on-zoom-international-educators-perceptions-of-online-teaching-amid-and-beyond-covid-19/

  25. Walker J. Non-inferiority statistics and equivalence studies. BJA Educ. 2019;19(8):267-271. Available from: https://doi.org/10.1016/j.bjae.2019.03.004

  26. Ahn S, Park SH, Lee KH. How to demonstrate similarity by using noninferiority and equivalence statistical testing in radiology research. Radiology. 2013;267(2):328-338. Available from: https://doi.org/10.1148/radiol.12120725

  27. Parton BS. Video captions for online courses: Do YouTube’s auto-generated captions meet deaf students’ needs? J Open Flexible Distance Learn. 2016;20(1):8-18. Available from: https://doi.org/10.61468/jofdl.v20i1.255