
In this episode, we talk with Michelle Kraft, MLS, Director of Library Services, Cleveland Clinic; Jon Bonezzi, MBA, Director of Technical and Educational Resources, Cleveland Clinic; and Dr. Neil Mehta, Associate Dean of Curricular Affairs, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University. They delve into ChatGPT and its pros and cons for medical education and for medical and scientific publishing, addressing concerns raised by the medical library community, guidelines for responsible use in research papers, and the broader implications of AI advancement. Join them as they debate the idea of a moratorium on advanced AI training and its potential threats to, and opportunities for, academic medical centers.


Exploring ChatGPT: Navigating its Implications for an Academic Medical Center (Part 1)

Podcast Transcript

Dr. James K. Stoller:

Hello and welcome to MedEd Thread, a Cleveland Clinic Education Institute podcast that explores the latest innovations in medical education and amplifies the tremendous work of our educators across the enterprise.

Dr. Tony Tizzano:

Hi, welcome to today's episode of MedEd Thread. I'm your host, Tony Tizzano, Director of Student and Learner Health here at Cleveland Clinic in Cleveland, Ohio. This is part one of a two-part series, Exploring ChatGPT, that focuses on the technology, its use, and its implications for an academic medical center. In a subsequent segment, we'll discuss ethical considerations related to its use in research and education. Today I'm very pleased to have Michelle Kraft, Director of Library Services and co-editor-in-chief of the Journal of the Medical Library Association; Dr. Neil Mehta, Associate Dean for Curricular Affairs at the Cleveland Clinic Lerner College of Medicine at Case Western Reserve University; and finally, Jonathan Bonezzi, Director of Technical and Educational Resources at Cleveland Clinic, here to join us to explore this fascinating and timely topic.

Michelle, Neil and Jonathan, welcome to the podcast. If we could get started, perhaps with you, Neil, telling us a little bit about yourself, your background, your education, what brought you to Cleveland Clinic and your role here.

Dr. Neil Mehta:

Sure. Thanks, Tony, for having me. I'm an internist at heart, so I practice general internal medicine and primary care on the main campus at the Cleveland Clinic. And in my role in the curriculum in the medical school, I'm in charge of a lot of medical education, obviously. But if you drew a Venn diagram of technology, medicine, and teaching or education, the intersection of that is, I think, where I thrive. A project that I was recently involved in was probably the first foray of artificial intelligence into medicine: the IBM project, where we worked with IBM to try to use natural language programming, or processing, on electronic medical records to help make sense of them. Can computers, and I'm making air quotes here, "make sense" of electronic medical records?

Dr. Tony Tizzano:

Fabulous, thank you. And anyone who knows you knows how you have a foot in all these different camps and really are very good in all of them. Michelle?

Michelle Kraft:

Hi Tony. Thanks for having me. I have been a medical librarian at the Cleveland Clinic for 25 years, and I've been the director for about six years. What got me started in libraries is that I really like finding things. What I do is like a scavenger hunt to me, and for medical libraries, it's a virtual scavenger hunt: finding the right information to give to people to treat patients and help with their care. What I focus on is technology and user experience in finding that information. So, working with ChatGPT and other technologies to actually enhance or improve that kind of delivery of information is what interests me all the time.

Dr. Tony Tizzano:

Wonderful. What would we do without librarians? John?

Jonathan Bonezzi:

Hi Tony. Thank you for having me today. So, I've been at the clinic for about 14 years now. I originally started over in protective services and then came over to education about 11 years ago. For me, a lot like Dr. Mehta, my focus is on really anything technology related. I've had a fascination with it since I was a kid; it makes sense to me. Programming and coding just kind of came naturally to me, and I love solving problems with any sort of technology. I love to build things, whether it's physical or something from the programmatic perspective. It's always fascinating to me, and I love using tools like this to make our job easier and to assist with other things that we're working on.

Dr. Tony Tizzano:

Great. Thanks for brains like yours. My son, who's an electrical engineer, always tells me when I ask him why something in computers works the way it does, "Dad, it's kind of complicated. You really don't have to understand. You just have to believe." Well John, perhaps you've got the tallest order. So, if you could help frame and define ChatGPT: what does the acronym stand for, and when did it become mainstream?

Jonathan Bonezzi:

So ChatGPT is an implementation of GPT-3.5. It originally started off with 3.0 when it was released in November of 2022. It's essentially just an artificial intelligence chatbot whose goal is to predict what the next word in any given sentence or any given response is going to be, and to use that to string together language. That sounds kind of like a foreign concept, but if you think about it, that's essentially the way that we as humans experience language. We are looking at things, we're trying to take things into context. You understand that when someone says "my favorite food is blank," you're expecting them to say pizza or hot dogs or steak. You don't need to know what they're gonna say, but you know that they're gonna say a food, because you understand the language, you understand how it flows together, you understand the weight of the words that are used within it.
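To make that next-word idea concrete, here is a toy sketch in Python. The probabilities are invented for illustration only and bear no relation to how GPT actually stores or computes its predictions:

```python
import random

# Toy "language model": invented probabilities for the next word given a
# context. A real model like GPT learns distributions over tens of thousands
# of tokens from massive training text; this is only an illustration.
next_word_probs = {
    "my favorite food is": {
        "pizza": 0.30, "steak": 0.25, "sushi": 0.20, "hot dogs": 0.15, "kale": 0.10,
    },
}

def predict_next_word(context: str) -> str:
    """Sample the next word in proportion to its assigned probability."""
    probs = next_word_probs[context]
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print("my favorite food is", predict_next_word("my favorite food is"))
```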

And that's essentially what ChatGPT tries to replicate. It answers questions, and it continues to move forward with various different sentences to give you what you're asking for. It is all based on prompts, and the quality of the output that you're gonna get from it is entirely based on the quality of the input. And then you can continue to drill down. You can ask an initial question and it gives you an answer, but then you can ask follow-up questions and further distill and refine that response to get what you want.
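As a sketch of that ask-then-refine loop, here is what a two-turn exchange looks like through OpenAI's Python package (the pre-1.0 interface current as of this recording); the model name, key, and prompts are placeholders, not recommendations:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# First prompt: the quality of this input drives the quality of the output.
messages = [{"role": "user", "content": "Explain what a chatbot is in two sentences."}]
first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
answer = first.choices[0].message.content
print(answer)

# Follow-up: the whole history is sent back with the new question, which is
# how you "drill down" and refine the response.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Now rewrite that for a ten-year-old."},
]
second = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(second.choices[0].message.content)
```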

Dr. Tony Tizzano:

So, the more detail you give up front, the better the answer you receive?

Jonathan Bonezzi:

Absolutely.

Dr. Tony Tizzano:

So, along those same lines, the acronym itself, what does it stand for?

Jonathan Bonezzi:

So GPT is generative pre-trained transformer.

Dr. Tony Tizzano:

Okay.

Jonathan Bonezzi:

And then the three that's involved, that's just the third generation. The 3.5 that's currently in use for the free version of ChatGPT is, you know, just a slight update to that. And then about two months ago, in March of this year I believe it was, they moved over to GPT-4 as the base engine for it. However, that is now behind a paywall as OpenAI starts to move forward with its revenue goals.

Dr. Tony Tizzano:

Okay. And a chatbot, you used that word. What does that mean?

Jonathan Bonezzi:

It's essentially just something where it seems like you are communicating via text, via a platform, with another individual, but there is no one on the other end. You ask questions, you make statements, and it responds to you. You can actually tell it to respond to you as though it is George Washington, and it'll frame its conversation as though it is George Washington speaking back to you.

Dr. Tony Tizzano:

Okay. And the last term that I've tried to understand: reinforcement learning with human feedback, RLHF, which was used to train ChatGPT. Help me understand that a bit better.

Jonathan Bonezzi:

The whole training component, how they put everything together with GPT, is incredibly complex. Just to give a sense of the scale of it: the base data that it used, the text that it pulled in, consumes about 45 terabytes, which, with how hard drives are nowadays, is not a lot of physical volume. But the sheer amount of verbiage that's in there: if you had started reading it when they broke ground on the Great Pyramid of Giza and never stopped, today you would only be a quarter of the way through it at average reading speeds. It is just a massive amount of text that was consumed with this. And what it did with all of that text is, again, learn how language flows, learn the weight of words: when people say things, what most often comes next? When words are strung together in a certain way, what is likely to happen after that?
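As a rough sanity check on that scale, under our own assumptions (plain text at one byte per character, about six characters per word including the space, and an average reading speed of 250 words per minute), the arithmetic comes out to tens of thousands of years of nonstop reading, millennia in any case:

```python
# Back-of-envelope estimate of how long 45 TB of plain text takes to read.
# Assumptions (ours, for illustration): 1 byte/character, ~6 characters/word
# including the space, 250 words/minute, no sleep.
bytes_of_text = 45e12
words = bytes_of_text / 6          # ~7.5 trillion words
minutes = words / 250              # ~30 billion minutes
years = minutes / (60 * 24 * 365)
print(f"about {years:,.0f} years of continuous reading")  # tens of thousands of years
```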

So, the pre-training and the reinforcement model, what was done there is it would make decisions on things. It says, "Okay, these are the two things that we think are gonna happen here," and then a human would actually select the best one, and it continues to learn what fits the bill the most. If I go back to my food example from earlier, the beginning of the sentence can be "my favorite food is blank." There could be two ways to go: you could say "my favorite food is steak," or "my favorite food is something that tastes good." Both of those are technically correct, but when a human reads it, they say, "No, the steak answer makes more sense." And then that gets kind of baked into how the entire thing works, how the processing works.

And that's where the reinforcement method comes in. It is constantly giving you answers, and you can select which one is working. Now, that's not the standard mode, where everything comes out and you always have to select, but they do have that. It's continuously going on in the background, continuously fine-tuning the process.
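A minimal sketch of that human-feedback step, with the "reward model" reduced to a plain list of stored preferences; real RLHF trains a model on many such rankings and uses it to steer the generator:

```python
# Each record pairs a prompt with the completion a human preferred and the
# one they rejected. Real RLHF fits a reward model to many such rankings.
preferences: list[tuple[str, str, str]] = []

def record_human_choice(prompt: str, option_a: str, option_b: str, pick_a: bool) -> None:
    """Store which of two candidate completions the human labeler preferred."""
    chosen, rejected = (option_a, option_b) if pick_a else (option_b, option_a)
    preferences.append((prompt, chosen, rejected))

# Both continuations are grammatical, but the concrete one reads better,
# so the human picks it and that preference gets baked into training.
record_human_choice(
    "My favorite food is",
    "steak.",
    "something that tastes good.",
    pick_a=True,
)
print(preferences)
```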

Dr. Tony Tizzano:

It's fascinating, as fascinating as it is confusing. So, Michelle, since ChatGPT has become somewhat mainstream, I understand that the medical library world has been really watching and testing its use as it relates to medical research and publications. What are your concerns?

Michelle Kraft:

Well, you're right. When it came on board, we started testing it, because people were using it to ask for information and we wanted to make sure it was giving back the correct information. The current free version, ChatGPT 3.5, its information only goes up to 2021. In other words, it doesn't have anything more current than 2021. So, if you wanted the most recent information, you may not be getting it; you might be getting something that is older, and in certain fields of medicine and certain fields of inquiry, you need that most recent information. The other problem that we discovered is that ChatGPT only accesses things that are free online. The vast majority of our medical literature, e-books, e-journals, databases, those are not free; those are subscription based. And so ChatGPT doesn't have access to those articles.

So, when somebody asks ChatGPT to summarize an article, and it happens to be a newer article behind a paywall, it makes things up. I actually tried that a while back. I asked it to summarize an article behind a paywall that was published in 2022, and it provided a summary that looked like it was on the topic. But then when I called it out and said, "Hey, this article is from 2022 and it's behind a paywall, how were you able to do this?" that's when ChatGPT said, "Oh, that's my bad. I made a mistake; I can't really access that." So, we're really concerned about how it actually generates its responses. It might not be the correct answer. It makes things up; they call it AI hallucination. It just scrapes things together and comes up with a response.

So, it's kind of unreliable in the information that it presents. It fabricates citations. If you're looking for the evidence-based answer to a medical question, it will provide what it thinks you want as the answer, and then it'll give citations that look very legit. They point to actual journals with actual authors, but it completely mixes and matches, so to speak: it pairs author A with journal B and fabricates a completely different citation. So, our caregivers and the librarians look at these citations and they look legit, but then we spend a lot of time verifying and trying to find the article, and it actually doesn't exist.
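One way a librarian, or any reader, might automate part of that verification is to ask a bibliographic index whether a cited title actually resolves to a record. This sketch uses Crossref's public REST API; the article title below is deliberately hypothetical:

```python
import requests

def citation_seems_real(title: str) -> bool:
    """Query Crossref for the closest bibliographic match to a cited title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    found = (items[0].get("title") or [""])[0]
    # A fabricated citation usually has no close title match; a human still
    # needs to check the author/journal pairing, which is what gets scrambled.
    return found.lower() == title.lower()

print(citation_seems_real("A plausible-sounding article that was never published"))
```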

Dr. Tony Tizzano:

And Michelle, you're an expert at this and yet you find this hard to do.

Michelle Kraft:

Yes.

Dr. Tony Tizzano:

So how can the person who is using this for the first time, or who is trying to take a shortcut, really count on the information they're getting? How do they weigh and measure it?

Michelle Kraft:

I probably wouldn't use it to find information if I'm doing research. I would use it to help me summarize information that I know is already out there and available. If I'm looking for information for research, I would stick to the primary resources we're already using, like the databases that we have access to, because we know the data there.

Dr. Tony Tizzano:

Great. That's a note that everyone should write down.

Jonathan Bonezzi:

Yes.

Dr. Tony Tizzano:

John, what do you think?

Jonathan Bonezzi:

You know, ChatGPT is not a search engine. It is not designed to go out and find the resources for you. The easiest way that you can think of it is that it's like a person who has read all this material, and you're asking them questions and asking for their opinion. They're going to give you their synthesized version of what they read, of what they consumed. They're not gonna give you the document itself and say, "Here you go, read."

Michelle Kraft:

And to build on John's point, it is not a search engine, yet people don't quite understand that because it's on the internet. They're used to treating anything with a prompt to type into as a search engine, and you can't use a hammer to screw in a screw.

Dr. Tony Tizzano:

Great, great, great analogy. Neil, comments on this?

Dr. Neil Mehta:

You know, I want to actually first come out and say it is May 22nd, because if someone listens to this tomorrow, the answer could be different, right? So, we should date-stamp every comment we make today.

But along those lines, John and Michelle both talked about ChatGPT, and John mentioned versions 3 and 3.5, and 4 being behind a paywall. But actually, there are tools out there to access GPT-4 without a paywall. And without being pro any company, if you use Bing for your search, it actually does access the most current papers. It doesn't have that limit of being cut off at 2021, and it does a live search. You can actually see that happen: when you are searching, it'll say "searching" right there. So, whatever we say of the free version of GPT, which is ChatGPT, it's actually changing as we speak. Something to keep in mind as our listeners listen to this: remember, this is May 22nd.

Jonathan Bonezzi:

Yeah, and that's a great point. I mean, just when I was putting together information on this, I had to modify things multiple times over the past month because they were no longer accurate. They were no longer up to date; something new was out.

Dr. Tony Tizzano:

Unbelievable. I think a decade ago I heard our CEO say that the doubling time for the medical lexicon, all that's ever been published, was about 160 days. And I looked it up before we came to this meeting: it's now 37.

Michelle Kraft:

Wow.

Dr. Tony Tizzano:

The volume, I mean, it's like trying to fathom Avogadro's number and put it into perspective. So, Neil, sticking with you, when you look at this in the context of student research at Lerner, what are your thoughts and what are your concerns?

Dr. Neil Mehta:

Oh, it's such a timely question. On Friday we had our meeting to talk about our policy towards generative AI, which is where this fits in, and it was a very, very rich discussion. Right now, we are talking about a tool that people have to go somewhere and know how to access. Even I, just now, had to mention that you could use Bing to get it; otherwise you would have to pay for it. So this is the first phase: a few knowledgeable people, an increasing number, are starting to access this, and it's a separate tool. The next phase is going to be integration. It'll be integrated into whatever word-processing tool you use, be it Google Docs or Word, and integrated into every browser. It's already happening with Chrome and Bing on the search side, and with Edge, which is Microsoft's browser. And if you use Bing, you get that.

So that's phase two. Can you imagine, right now, telling someone submitting a paper to a decent journal, "Don't use spell check or grammar check"? Would we stop them from using it? Actually, if I put on my peer-reviewer hat and I see poor spelling, I will say that person is lazy, because the tool is right there and they didn't use it. And that's going to be the next phase. When it gets there, it's going to be very difficult to even know what was created using generative AI and what was not. So, if we talk about my concern as of today, I would like some kind of firewall: you shall not use generative AI to do certain things. And that covers creating and getting your research ideas, writing your proposal for research in response to an RFP, and then writing your manuscript.

And even there you could split hairs and say, "Actually, why not use it to augment your research ideas? You may actually get some additional ideas around it." But definitely most people and most journals probably will be at a point where, one, you don't use it to write your manuscript, and you don't have it listed as an author, because if it writes your methods section, should you list it as an author? That's a controversy, and most journals do not. There are some which say you have to specify where you used generative AI. So, it's a very rapidly moving landscape. But as I talk about phase two, that's actually gonna go away. There'll be no way to identify what tool was used.

You can't have a policy which you can't police at all. It'll be purely honor code, but the authors themselves will not be sure, just like you don't remember which paragraph you edited with grammar check. So these are questions that we discussed, and where we basically ended up was: we need to prepare our students, who are gonna be the physicians of the future, to actually learn how to use this as part of the curriculum, because it'll be phase two, and I'll talk about phase three at some point, by the time they come out. So, we have to prepare them for that future.

But right now, we do feel that there are certain skills they need to learn. And learning is a struggle; learning intentionally takes effort. You can't make it so easy that they don't learn. So right now we have assignments and research where they do it themselves, but we will have curricular activities and research activities where they'll actually leverage generative AI, so they learn how to use it in the future.

Dr. Tony Tizzano:

And would see the pitfalls in doing that.

Michelle Kraft:

Neil brings up a good point about the journals. I sit on the library advisory board for the Journal of the American Medical Association, and JAMA has already put out a statement that they will not accept any articles that use ChatGPT to generate the text of the article, unless the research is about using an AI, and then that's mentioned in the methods. Nature actually allows it to be used in the article; however, you have to mention it in the methods or in an acknowledgement. They just don't allow it to become an author because, as they state, an author has to be somebody who can take responsibility, and an AI cannot take responsibility for its writing.

Dr. Tony Tizzano:

Boy, the pool gets deeper and deeper as I listen to this. So, is it possible then that someone who uses this, or maybe doesn't even know they're using it, from what Neil had said, is at risk of making their work unpublishable?

Michelle Kraft:

It depends upon the journal.

Dr. Tony Tizzano:

Okay. So, writer beware. You know, Neil, when we were first fleshing out this topic, you mentioned, and I think you touched on this, that you could use ChatGPT, actually mandate it as part of the assignment, and then turn around and have to vet it, including its citations, as Michelle described, to make sure these hallucinatory aspects aren't there. Do you see yourself preparing to do that?

Dr. Neil Mehta:

Oh, totally. I think we have to prepare for that future, because that future is probably not in years but in months. And the way you want to think about education is, you know, we often use Bloom's taxonomy, which is a way to have a hierarchy of cognitive involvement, let's say, or difficulty or complexity. At the lowest level, not that this is not important, is something you understand, and at the higher levels are things like apply, right? So, we think of knowledge as: you understand something, you use it, then you apply it, and you can explain it to others. That's how the hierarchy goes.

And right now, we are asking: can you use ChatGPT or generative AI in these domains? We are already thinking about building a taxonomy of ChatGPT. So, can you understand ChatGPT? Can you apply ChatGPT? And then, once you get to that point, what are the assignments by which students can demonstrate that they've reached that higher level of Bloom's taxonomy? We have actually started a couple of research projects in this area, involving students and faculty, to see where and how exactly we design this curriculum.

Dr. Tony Tizzano:

Boy, this really is the cutting edge. I love it. So, John, there have been thoughts of a potential moratorium, a pause in training beyond ChatGPT, for at least six months. Is that a plausible strategy?

Jonathan Bonezzi:

Not only do I think it's not plausible, I also think it would be terribly ill-advised. What we would essentially be doing by putting a moratorium on it is saying, "Hey, we don't fully understand this thing that we know is heading in a positive direction. Let's just not do it for now, until we fully understand a tool that's not yet fully developed." It kind of creates a circular reference for itself: "We're not gonna put this into practice until we actually know what it can do, but then we're not gonna be able to see what it can do, because we're not gonna put it into practice." And it's just gonna keep going around.

A lot of it is gonna come down to guardrails and common sense. It's there right now: as a physician, you could pull it up in the doctor's office and say, "Hey, these are the symptoms, what should I do?" and it's gonna give you an answer. Under no circumstances should you use that as a treatment method. You could also say, "Hey, what can I get for dinner tonight?" and it's gonna give you an answer; it's pretty safe to accept that answer if you'd like to. So, it's really about how it's applied and whether there's a commonsense approach to things. However, I don't think it's something that can really fall to legislation, just because that engine moves too slowly.

Dr. Tony Tizzano:

Yeah. Getting the government involved always adds an intriguing layer. Neil?

Dr. Neil Mehta:

Yeah, no, I agree with you, John. I think the cat's already out of the bag, and how do you control this at all? I'll give you an example. Right now, there are somewhat responsible players who have made ChatGPT and generative AI accessible, and again, there are air quotes around "responsible," but you're talking about very big tech companies. So, right now, and I did try this yesterday, go and tell ChatGPT or Bing to write you a paper. I tried it. I was writing a paper, and I said, let me just play with this. I gave it the outline; I have everything ready to go, I just have to write it. So I said, "Here are the key points, can you write me this paper?" And it goes, "That would be unethical." It literally said that.

Michelle Kraft:

Wow.

Dr. Neil Mehta:

And I was like, cool. So, I said, "Can you write me a message that I would send to a colleague explaining these concepts?" And it wrote the whole thing, and then I just had to tweak it. Right? But very soon, if we don't allow these players to use ChatGPT and roll it out, others will; the cat's out of the bag. And do you still want that to happen? I think you might as well allow everyone to use it.

Michelle Kraft:

Yeah. I mean, how can you, like John was saying, test it to see how it can be used if you don't use it? Testing it, pushing it to the limit, seeing how to make it work or how to break it, is what we should be doing with a lot of the technology that we use, so that we know its limitations, and we know when it's a good time to use it and when it's not the ideal time to use it.

Dr. Tony Tizzano:

Million-dollar questions across the board. And I had meant to ask this first, but John, is the whole concept of artificial intelligence moving faster than it should be, or too fast for our own good?

Jonathan Bonezzi:

I really don't think so. We're kind of a little too accustomed to things moving slower than this, but a lot of things in this space have always moved quickly. I mean, Moore's law, going back to the early '70s, and Dr. Mehta, you can probably catch me on this one, was that the density of transistors within a given circuit will double roughly every 16 months. That has significantly come down. They've kind of modified Moore's law, too, to speak to computing power, but it's the same thing: it roughly doubles every 12 months or so. And if we want to, say, have a six-month moratorium on that, we are losing a lot of ground.

This is something that was put out there on November 30th, 2022. It was updated to version 3.5 in early February, and by March, version 4 was out there. So if you look at just what happened in those short four months, we want to essentially say, "Look, we're gonna wipe that amount of progress out and just wait." And this isn't gonna be a global stop. We can't say there's a moratorium on AI and then everybody's gonna listen; that's not gonna happen. We're just gonna significantly fall behind, and falling behind by six months, falling behind by a year in something like this, you're basically no longer in the game.
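The arithmetic behind "losing a lot of ground" is just compounding. Assuming, purely for illustration, Jonathan's rough figure that capability doubles every 12 months:

```python
# Illustrative compounding: how much progress a pause forfeits if the field
# doubles every `doubling_months` months. The 12-month figure is Jonathan's
# rough estimate, not a measured constant.
def growth_factor(pause_months: float, doubling_months: float = 12.0) -> float:
    return 2 ** (pause_months / doubling_months)

for pause in (6, 12):
    print(f"a {pause}-month pause forfeits a {growth_factor(pause):.2f}x gain")
# a 6-month pause forfeits a 1.41x gain; a 12-month pause, a 2.00x gain
```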

Dr. Tony Tizzano:

Yeah. In another meeting I heard it's akin to asking Einstein not to come out with E = mc², to just put that on the back burner for a couple of years or six months, and the ramifications of that. So, Neil and Michelle, as educators, researchers, and publishers dive into this and into chatbots, what kind of legitimate tools are there that can be employed to identify abuse? Are there such tools?

Dr. Neil Mehta:

Well Michelle, you want me to take that first?

Michelle Kraft:

Sure.

Dr. Neil Mehta:

So, there are tools. You know, I looked it up a while ago, and again, in a few days it changes. There is a tool called, for example, ZeroGPT, which is designed to pick up use of ChatGPT. Some of the tools that create generative content are working on, quote, watermarks. They themselves are coming out and saying, "There will be a watermarking system that'll mark the content created by this." As soon as that came out, actually just last week, a paper came out which kind of debunked the whole thing, because they said there are generative AI tools that'll take content created by generative AI and strip off whatever you put on it, like a watermark.

Already, almost a year ago, there was a tool out there, it's called QuillBot, I think. You could put in a paragraph and say "transcribe," and it would keep the exact sense of the article and rewrite it better than most copywriters. So even if five students go in and use the same prompt to create content, after they have run it through some of these tools, you wouldn't be able to tell that it was created by the same chatbot. So, I feel it's a little bit like the radar detector: we are just gonna keep going down that route, and I don't think there's gonna be a tool that'll stay ahead of this.

Michelle Kraft:

I think the tools are reactive rather than proactive. In some ways people do need to use some of them; I know a lot of journal editors are using these tools because they're getting flooded with articles that clearly were ChatGPT or another AI, just based on quickly looking at the style of writing, and then the author hasn't published anything before, that kind of thing. So, using those tools in conjunction with some other investigative methods can be helpful. But the other thing to think about is that right now, with ChatGPT, we're just talking about the written word.

When Neil was talking about the watermarks, we have to remember that there are AIs out there that are now doing visuals and music and those kinds of things, generating pictures from copyrighted material, or just generating totally different pictures. They are creating voices and mimicking famous singers' voices, all sorts of things like that. So, it's a wide-open area, and we have to use a little bit of our skepticism in addition to the tools. But Neil's right, the tools will always be playing catch-up. That's why we also have to use our human brain.

Dr. Tony Tizzano:

Yeah. So maybe, as Neil pointed out, having students do exercises requiring that they use it, to point out the pitfalls, might be the best protection against them using it when they're trying to produce original work.

So boy, it's a quagmire. I can just see so many different layers. And the other thing that's become clear to me in just doing a little poking around is that there are academic institutions who have accused a student, saying, "Hey, this is not your own work," and wrongly accused them.

And there were ramifications, you know, they may have had disciplinary action, who knows, that then were turned around when the student could prove that this was not the case. And now the academic institution is on the hook legally for having impugned the integrity of this student. So, boy, we have to be careful on both sides of this equation. There are all these legitimate and not-so-legitimate uses, and we try to devise guidelines for the future. But at the end of the day, when we look at where it's at right now, does it pose a threat to transparent scientific investigation?

Dr. Neil Mehta:

Again, this is starting to get a little bit like looking into a crystal ball, because whatever we say today is gonna be different tomorrow. You talk about this point that ChatGPT was trained on things that were cut off at 2021, right? But it depends on what you train it on. Imagine you train it on a whole bunch of literature that is maybe not peer reviewed, maybe created by not very reliable sources; it'll actually respond in a very authentic way and make it look like it's real. And we are probably not too far away from a time where people will create entire journals using generative AI, and generative AI will search for the information, think these are good sources, and then use that to spout information.

So that is phase three, which is a slippery slope, but we are heading towards it, because right now, like I said, some reliable providers don't allow you to create a paper from a concept, but others will. And then how will you tell them apart? So, I feel like the inability to tell information from misinformation, what was created by humans versus what was not, the deepfakes, that probably is our biggest near-term risk. And it's very close. It's very close.

Michelle Kraft:

I would agree.

Dr. Tony Tizzano:

This segues into my next question, and that was what's on the horizon? I think you've answered that in part, John, what would you say from your perspective?

Jonathan Bonezzi:

You know, there are already some implementations of it occurring. I just read a couple of days ago that last month a city in Japan, Yokosuka, became the first city to actually use this as an employee. Individuals contact it and interact with it about administrative things related to the city. They use it to pay for parking tickets, to get information on property taxes, to do any of that. It acts essentially as an assistant in that group, but it's not there 8:00 to 5:00; it's there 24 hours a day, seven days a week. It doesn't matter if one person is asking a question or a thousand people are asking at the same time. It's there, it's ready, it's available. And it really kind of changes the dynamic on how things like that are going to work in the future.

To me, one of the big components with it, once some of the answer side of things gets fleshed out a little bit, is efficiency: how well processes can be completed, how well tasks can be completed. I mentioned as we were talking before the podcast, you know, the calculator. When that first started to become popular, it was cheating. You weren't allowed to use it for examinations; you weren't allowed to use it in tests. And now if you try to tell a college calculus student, "Nope, you're not allowed to use a calculator," they're not gonna be able to do the test. That's it. It's just part of it; that's how things are going, and it's just an accepted practice.

So, we're already there. As Dr. Mehta said, the cat's already out of the bag. Pandora's box has been opened; we've rubbed the lamp, and here's the genie.

Dr. Tony Tizzano:

So go forward with an open mind.

Jonathan Bonezzi:

Yes.

Dr. Tony Tizzano:

Michelle, thoughts?

Michelle Kraft:

Well, you know, Neil mentioned that there'll be journals created entirely by ChatGPT or some sort of AI. Springer has already published a book that was completely AI generated. They had a human editor, but it was completely AI generated. So, we're already there in some areas. Regarding the potential threat to transparent scientific investigation, I'm kind of a yes-and-no person; I guess I'm a little wishy-washy. Yes, it's a potential threat, because right now we don't know the data behind it, what data it's finding, and whether it's valid data. Though rather than a threat, I would say it's a caution, because we have to understand what it's feeding back to us. But I would also say no, because it has so many opportunities, if we use the tool correctly, to really move forward and go in different directions.

Dr. Tony Tizzano:

Boy, very profound words. Go ahead. One more layer.

Dr. Neil Mehta:

Yeah. I think this quote has been used every 20 to 30 years, so here's the modified quote: the researcher and the physician are not gonna be beaten by AI, but the physicians and researchers who use AI will beat the physicians and researchers who don't. That, at least, is a hopeful, optimistic way of looking at this, and I truly believe that it helps so much in so many ways. The concern is the blurring of lines between what is real and what is not, and how we figure that out. That is where we can look with a little bit of concern, or a lot of concern, and hopefully find ways to mitigate it.

Dr. Tony Tizzano:

Yeah, that to me is the takeaway: how do you know it's real, and how can you validate it? I just want to close by thinking about something I heard someone discuss: the way we humans look at lots of information and make very complicated decisions, as though of course we have the corner on validity compared to machines. But we got into World War I, World War II, Korea, and Vietnam, probably not predicated on the best information: two world wars and then two that we're not very proud of.

And so, this is a topic that will go on and on. I'd like to thank you all so much, John, Michelle, and Dr. Mehta; this has been a fabulous podcast. To our listeners, thank you so much for joining, and we hope to see you on our next podcast, where we explore ChatGPT as related to ethical considerations for an academic medical center. Have a wonderful day.

Dr. James K. Stoller:

This concludes this episode of MedEd Thread, a Cleveland Clinic Education Institute podcast. Be sure to subscribe to hear new episodes via iTunes, Google Play, SoundCloud, Stitcher, Spotify, or wherever you get your podcasts. Until next time, thanks for listening to MedEd Thread and please join us again soon.
