In this episode, we continue our conversation with Michelle Kraft, MLS, Director of Library Services, Cleveland Clinic; Jon Bonezzi, MBA, Director of Technical and Educational Resources, Cleveland Clinic; and Dr. Neil Mehta, Associate Dean of Curricular Affairs, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, who delve into ChatGPT and its pros and cons for medical education and for medical and scientific publishing. They address concerns raised by the medical library community, guidelines for responsible use in research papers, and the broader implications of AI advancement. Join them as they debate the idea of a moratorium on advanced AI training and its potential threats to, and opportunities for, academic medical centers.

Exploring ChatGPT: Navigating its Implications for an Academic Medical Center (Part 2)

Podcast Transcript

James K. Stoller:

Hello and welcome to MedEd Thread, a Cleveland Clinic Education Institute podcast that explores the latest innovations in medical education and amplifies the tremendous work of our educators across the enterprise.

Dr. Tony Tizzano:

Hi, welcome to today's episode of MedEd Thread on ChatGPT and ethical considerations for an academic medical center. I'm your host, Dr. Tony Tizzano, Director of Student and Learner Health here at Cleveland Clinic in Cleveland, Ohio. Today I'm very pleased to have Michelle Kraft, Director of Library Services and Co-Editor-in-Chief of the Journal of the Medical Library Association; Dr. Neil Mehta, Associate Dean for Curricular Affairs at the Cleveland Clinic Lerner College of Medicine at Case Western Reserve University; and finally Jonathan Bonezzi, Director of Technical and Educational Resources at Cleveland Clinic, here to join us to explore this fascinating and timely topic. Michelle, Neil, and Jonathan, welcome to the podcast. If we could get started, perhaps with you, Neil, telling us a little bit about yourself, your background, your education, what brought you to Cleveland Clinic, and your role here.

Dr. Neil Mehta:

Sure. Thanks, Tony, for having me. I'm an internist at heart, so I practice general internal medicine and primary care on the main campus at the Cleveland Clinic. And in my role in the curriculum in the medical school, I'm in charge of a lot of medical education, obviously. But if you want to look at me, I thrive at the intersection of a Venn diagram of technology, medicine, and teaching or education. A project I was recently involved in, probably the first foray of artificial intelligence into medicine, was the IBM project, where we worked with IBM to try to use natural language processing on electronic medical records to help make sense of them. Can computers, and I'm making air quotes here, make sense of electronic medical records?

Dr. Tony Tizzano:

Fabulous, thank you. And anyone who knows you knows how you have a foot in all these different camps and really are very good in all of them. Michelle?

Michelle Kraft:

Hi Tony, thanks for having me. I have been a medical librarian at the Cleveland Clinic for 25 years, and I've been the director for about six years. What got me started in libraries is that I really like finding things. What I do is like a scavenger hunt to me, and for medical libraries it's a virtual scavenger hunt: finding the right information to give to people to treat patients and help with their care. What I focus on is technology, user experience, and finding information. So, working with ChatGPT and other technologies to actually enhance or improve that kind of delivery of information is what interests me all the time.

Dr. Tony Tizzano:

Wonderful. What would we do without librarians? Jon?

Jonathan Bonezzi:

Hi Tony, thank you for having me today. So, I've been at the clinic now for about 14 years. I originally started over in protective services and then came over to education about 11 years ago. For me, a lot like Dr. Mehta, my focus is on really anything technology related. I've had a fascination with it since I was a kid. It makes sense to me. Programming and coding just kind of came naturally to me, and I love solving problems with any sort of technology. I love to build things, whether physical or from the programmatic perspective, and it's always fascinating to me. I love using tools like this to make our job easier and to assist with other things that we're working on.

Dr. Tony Tizzano:

Neil, as Associate Dean for Curricular Affairs at the Cleveland Clinic Lerner College of Medicine, please frame for us the ethical implications surrounding ChatGPT. Is it cause for excitement or cause for concern?

Dr. Neil Mehta:

That's a great question. You know, our medical school program is based on professionalism, reflective practice, and academic integrity, and a lot of these are character traits that we develop over time as part of our professional identity formation journey. What we really try to do is expose students to experiences and have them reflect on what decisions they made and what choices the people around them made during those experiences. As they reflect, they realize what they would've done, what someone else did that was different, and try to accommodate how they think or align themselves. This is just one more of those triggers that we plan to use, and students are going to be exposed to it even if we don't plan to use it. So right now, where we are is really trying to figure out how we create those experiences in a controlled setting, while they are still in medical school, so that they internalize the appropriate use of these tools. So, I'm actually very excited. This is huge. This is a test of how our medical school works, and a test of how we think people develop their professional identity.

Dr. Tony Tizzano:

Well, thank you for that, Neil. So, Michelle, how readily can a medical research library, and someone such as yourself, discern that a chatbot has served as an assistant to summarize a research paper or to generate content, an abstract, or citations?

Michelle Kraft:

It can be difficult. Sometimes the content is just a little bit off, but you don't know whether that is the actual writer or the chatbot. Right now, and I say right now, the citations are also incorrect. But as they start working with AIs and refining them, that may no longer be the case in the future, and that's important to know because it might actually be very helpful for people later on. Right now, when the citations are incorrect, that's a helpful tell. So, it requires a lot of time looking at it, inspecting it. We've also tried ChatGPT and other AIs to see how they perform literature searches in the databases, PubMed being one that's free, that everybody can access, and that the AIs can access. Right now, they don't do a very good job, because PubMed is a very structured database with controlled vocabulary and terms you would use for certain situations, and that is next-level searching even for humans, let alone AI. So, when we've looked at how an AI searches, like if somebody wants to do a systematic review and they use AI to help with it, we've found that it really kind of makes up some of the subject headings when it shouldn't. And so that's one of the ways to tell, too.
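
To make Michelle's point concrete, structured PubMed searching hinges on real controlled-vocabulary (MeSH) terms rather than invented ones. Below is a minimal sketch of such a search against NCBI's public E-utilities API; the query terms are illustrative examples, not part of the library's actual workflow.

```python
import requests

# NCBI E-utilities: search PubMed with explicit MeSH (controlled vocabulary) terms.
# Tagging a term with [MeSH] is what separates structured searching from free-text
# guessing; a made-up subject heading here would simply match nothing.
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": '"Artificial Intelligence"[MeSH] AND "Education, Medical"[MeSH]',
    "retmode": "json",
    "retmax": 5,
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print(f"Matching records: {result['count']}")
for pmid in result["idlist"]:
    print(f"PMID: {pmid}")
```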

Dr. Tony Tizzano:

So, it makes it difficult, to say the least. So, Neil, can a chatbot literally generate a presentable student essay or answer a question well enough to pass a medical exam? And in doing so, are you cheating or is this plagiarism?

Dr. Neil Mehta:

That's a great question again, and very timely, Tony. We have already seen papers where these tools achieved passing scores on multiple-choice standardized tests like USMLE Step 1. The question you're asking is about essay-type questions, which by definition should be a deeper dive and require integration of complex concepts. So, we have a research study just getting started to answer this very question. But to start the study, we did a proof of concept. We took some of our retired essay-type questions, and I ran them through Bing Chat. After three prompts, and I was also learning, like Jon was saying, to develop the right prompts, it produced an answer that the course director felt was better than the best student essay he had ever seen.

So, this was our proof of concept, and now we need to generalize it and extend it to other topic areas, but I think if the answer is not yes today, it'll be yes very soon. The part that wasn't as good was the references cited. They were accurate, and we used GPT-4, not ChatGPT, but our students generally find more specific, targeted references for each statement. These were more review papers, things that kind of covered every statement that had been made. So that may be a way to tell, but as soon as these tools have access to content behind the general paywalls, like Michelle was saying, that's going to go away.
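
As a side note, the kind of iterative prompt refinement Neil describes can be sketched with OpenAI's Python client, since Bing Chat has no public API. The model name and prompts below are invented placeholders, not the study's actual materials.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A multi-turn conversation that refines the task over three prompts, loosely
# mirroring the iterative prompting Neil describes. All content is hypothetical.
messages = [
    {"role": "user", "content": "Answer this retired medical-school essay question: <question text>."},
]

refinements = [
    "Integrate the underlying physiology and cite supporting literature.",
    "Rewrite with a clear argument structure, at the level expected of a second-year medical student.",
]

for follow_up in [None] + refinements:
    if follow_up:
        messages.append({"role": "user", "content": follow_up})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})

print(answer)  # the essay produced after the third prompt
```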

Dr. Tony Tizzano:

Boy, so that's the informational side. Jon, let's shift to your realm. Can it create useful computer code, spam, ransomware? And if so, who takes credit for it, and who's liable for it?

Jonathan Bonezzi:

Yeah, that's a great question, Tony. I'll unpack that into a couple of different sections, starting with generating computer code. The very short answer is yes, it is capable of it. The expanded version is that it's a little bit more complicated than that. As anyone who's ever been through the development cycle knows, one of the most time-intensive steps before you go to release is debugging, and it is incredibly difficult to try to debug something you did not write when it was done by another programmer. It's even more complex if it was done by an AI, which doesn't use the same logic. So, it can do it, but as of right now, I don't really think it's a viable way of doing that for anything complex. I mean, if you want to do something very simple, take it down to the simplest programming test: print "Hello world" on the screen. Anyone that's ever been a programmer knows that that's the first thing you ever learn in really any language, how to make the screen say, "Hello world."
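
For readers who haven't met it, the "Hello world" test Jon mentions is about as simple as generated code gets; in Python it is a single line.

```python
# The canonical first program: make the screen say "Hello world".
print("Hello world")
```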

Now, if we go to the next piece, spam and ransomware: ransomware, I think, is going to fall into that first category of writing code. They may be able to write applications with it, and it may be a tool used for that; we'll really have to see where that one goes. Those things are incredibly specific, they're usually directed at their target, and they're kind of tailor-made and custom for that. So, I don't necessarily know if it would be the right application right now, but, again, it could be. Spam is where it really becomes a problem, because this is something that can send out a million text messages if it gets a list of mobile numbers in the United States. It can send out a million text messages and carry on conversations with these people. I shouldn't even say the old days, because we're still in these days, but it's not like the time when you get an email that says, "Hey, here's a bill that you just paid. You know, click here," and they're hoping that somebody clicks on the link, and that's where it becomes a problem.

This can actually engage in conversation, very, very believable conversation, with you. More people than not cannot identify that they are speaking with a chatbot. It is incredibly difficult to see, until you get into really detailed topics. So, to me, that is where we really have concern with spam, because it's not just a single email. We all know that if you get something that is very, very poorly worded and uses a strange lexicon, it's probably fake; that's one of the first red flags you look for. ChatGPT, chatbots, AI like that, they don't have that problem. They understand the lexicon, and they use it appropriately.

Dr. Tony Tizzano:

Boy, spam phone calls. So, my practice, then, if I don't recognize the number, of simply not taking the call and calling the number back, because you can never call the number back. I mean, that's the only way I know. I don't know what else works, but I feel like I'm a slave to that.

Jonathan Bonezzi:

Yeah, just a quick note on that: thankfully, that won't be a problem anymore in the future. They're working on legislation where you're not going to be allowed to spoof numbers without actually taking ownership of them, so that'll help a little bit.

Dr. Tony Tizzano:

Oh, well, cross our fingers. Cross our fingers. So, Neil, given that chatbots draw from information that's already been put forth by humans, will their output be reflective of the same implicit biases held by the people who created them? And how will those individuals who put this together to train ChatGPT recognize that the data is sufficiently and appropriately diverse so that their models avoid perpetuating those issues?

Dr. Neil Mehta:

Yeah. I mean, that used to be what we thought was one of the good sides of AI: that it would not have the biases that humans have. And we very quickly realized otherwise, like from the studies in dermatology where they trained the AI to read images of people who had lighter skin, and this was for pigmented lesions of the skin. When you apply it to the general population, you can immediately see that the biases built into the training data are perpetuated by the product. That is absolutely still true, and it goes back to the big problem with ChatGPT's 3.5 version: it's built on training data from before 2021.

Now, it's also built on training data, like Michelle said, that is accessible outside the paywalls. So, unless you really make available everything that has ever been published, and even that has biases in it, right? There is really almost no way that you're going to clean this up. And, you know, think about the trolley problem type of things in ethics: there is a train or a tram running down a track, it's going to run over, say, two people, and you have a lever. If you pull it, you will divert the tram and it'll run over only one person. What do you do? Do you actively kill one person, or do you passively let two people die? These are ethical issues that humans struggle with. AI is not even going to know that it's a struggle. It's going to do whatever is in there. So, unfortunately, my answer is, I don't have an answer.

Dr. Tony Tizzano:

That's okay. I think we're into that gray area. Michelle?

Michelle Kraft:

Well, I will say that OpenAI does not tell us what data ChatGPT uses, but there are other AIs out there that are using information on the internet, so ChatGPT also uses information on the internet; we just don't know exactly what it is. What I've told people is, if it's on the internet, the good, the bad, and the ugly, it's going to be in that AI. Some AIs find information from very questionable sources that are racist, that are fake news sites, so it's important to know that if it's out there, it's going to be retrieved.

Dr. Tony Tizzano:

So, along those lines, Michelle, will artificial intelligence require a unique set of standards that only people like you have? Will it be taken out of the public arena into closed rooms of individuals who have expertise or privilege around this information, a kind of global technocracy? As I listen to the three of you talk, I can barely keep up with the conversation. So, is that where we're headed?

Michelle Kraft:

Yes and no. I think yes in the sense that there'll be those of us who understand it, who are designing it, or who are testing it and kicking the tires, so we are behind the scenes. But no, because I think the people who understand it are using it so that it can be pushed out to global groups of people, with the hope that it helps refine how we use it in everyday living.

Dr. Tony Tizzano:

Okay, excellent. So, a question for all of you. We've looked at materials that come in to us, and sometimes we identify them and say, "Hey, this looks like ChatGPT was used." It ends up at a board for academic honesty, and a student is accused, and in this case wrongly accused, of having used it, but suffers the consequences of being thought to have plagiarized the material or used ChatGPT. Now the tables turn: they are able to prove that they were the ones who created the information. What should the liability be for the institution? What is the liability for the institution?

Jonathan Bonezzi:

That's difficult, because I don't fully know if I even agree with the main premise of the question, of whether this should even be happening. As Dr. Mehta mentioned earlier about students, it's actually getting baked into assignments, used intentionally. I think a lot of the focus right now is on how we're going to identify this so we can make sure it doesn't happen, and honestly, I think it's the wrong approach. I think we're going about it the wrong way. We need to learn how to actually integrate it and use it, not treat it as, "Hey, this is a problem, we need to weed this out and make sure you don't do that." Because in all honesty, I think we're just setting ourselves up for something we can't do. We're setting an unrealistic expectation, and we're intentionally creating a hurdle that, once we get there, will leave us with the decision to just blow through that wall or stop and be done with it. I don't think that's a place where we want to be, and I think we all know what the answer's going to be when we get there, so why don't we just start working towards that now?

Dr. Tony Tizzano:

And I guess part of that is, if I understood correctly, sometimes you may not even know that you're using it. How can you be faulted for that?

Jonathan Bonezzi:

Yes.

Dr. Tony Tizzano:

Neil.

Dr. Neil Mehta:

Yeah, I mean, if we really want to think about how to create solutions for this, every challenge is actually a huge opportunity, right? And this is almost a huge existential challenge. So, what could be the opportunity? I can imagine, and again, some of these terms will be applied loosely, that people who are owners of verified, validated, good information, so imagine all the actually good peer-reviewed publications out there, all the datasets created using good techniques and good methods, release a model that is trained only on those datasets. Now you have a tool available that will clearly say it's trained only on this validated, verified, accurate, authentic information. Yeah, you can quibble about what is authentic and what is not, and what is validated and what is not, but it would be different than the whole web. So, I think that's an opportunity, and it's not too difficult to imagine someone actually making that happen.

And like, you know, with DynaMed or UpToDate, you could actually have a proprietary AI tool that is trained on information that you trust. And if you think about the uses of these tools, there are administrative uses; I think it's a given that it is 100 percent going to be better than not having it. There's just no doubt about the efficiency it adds. Then you get to working with patients in patient care, being able to craft a message back to a patient who has a question. As long as it's somehow based on this authentic information, it's going to be able to respond to patients, and they will get information they can trust, as opposed to searching on the web, which is what happens right now. And then the same thing with research: you are building off a foundation that has been vetted and validated. I think that is the future, and these tools will actually make it much, much better. Someone needs to get all the people who own this information together to share it.
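
What Neil sketches here is essentially what has come to be called retrieval-augmented generation: constrain the model to answer only from a curated, vetted corpus. Below is a minimal illustrative sketch; the documents, the keyword-overlap scoring, and the prompt template are all hypothetical stand-ins, not an actual DynaMed or UpToDate integration.

```python
# Minimal sketch of retrieval-augmented generation (RAG) over a curated corpus.
# A real system would index vetted sources (e.g., peer-reviewed publications)
# and pass the retrieved passages to a language model.

# A tiny stand-in for a vetted corpus of trusted publications (hypothetical).
CURATED_CORPUS = [
    {"source": "Peer-reviewed review, 2020",
     "text": "Metformin is a first-line therapy for type 2 diabetes."},
    {"source": "Clinical guideline, 2019",
     "text": "Annual retinal screening is recommended for diabetic patients."},
]

def retrieve(question: str, corpus: list[dict], top_k: int = 2) -> list[dict]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, passages: list[dict]) -> str:
    """Constrain the model to answer only from the retrieved, vetted passages."""
    context = "\n".join(f"[{p['source']}] {p['text']}" for p in passages)
    return (
        "Answer using ONLY the sources below; if they are insufficient, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    question = "What is the first-line therapy for type 2 diabetes?"
    prompt = build_prompt(question, retrieve(question, CURATED_CORPUS))
    print(prompt)  # this prompt would then be sent to the language model
```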

Dr. Tony Tizzano:

I'm glad we're coming to a hopeful solution. Michelle, you get the final word.

Michelle Kraft:

Well, I agree with both Jon and Neil. We keep lumping AI together with ChatGPT, because I think that's what people think of right away; it made the biggest splash. But AI is everywhere. We're using it now as we speak, and sometimes we just don't know it. The parts we use right now are making our lives a little easier, or letting us make decisions a little better. As Neil said, once we start figuring out other uses for AI, where we have the specific data, or other implementations, that's the future we're headed towards. And I think that's very interesting and fun.

Dr. Tony Tizzano:

Well, it's good. It's becoming fun. Well, thank you so much Jon, Michelle, Neil. This has probably been one of our more thought-provoking segments of the MedEd Thread. To our listeners, thank you very much for joining and we hope to see you on our next podcast. Have a wonderful day.

James K. Stoller:

This concludes this episode of MedEd Thread, a Cleveland Clinic Education Institute podcast. Be sure to subscribe to hear new episodes via iTunes, Google Play, SoundCloud, Stitcher, Spotify, or wherever you get your podcasts. Until next time, thanks for listening to MedEd Thread and please join us again soon.
