
What Iteration Really Means with AI [AI Today Podcast] AI transcript and summary - episode of podcast AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

· 19 min read




Episode: What Iteration Really Means with AI [AI Today Podcast]


Author: AI & Data Today
Duration: 00:08:15

Episode Shownotes

AI projects aren’t dying because of big problems. Rather, it’s the small things that are causing projects to fail. In this episode of AI Today, hosts Kathleen Walch and Ron Schmelzer discuss what iteration really means for AI projects. Continuous Model Iteration: we always say that AI projects are never set it and forget it. Continue reading What Iteration Really Means with AI [AI Today Podcast] at Cognilytica.

Full Transcript

00:00:01 Speaker_01
The AI Today podcast, produced by Cognilytica, cuts through the hype and noise to identify what is really happening now in the world of artificial intelligence.

00:00:10 Speaker_01
Learn about emerging AI trends, technologies, and use cases from Cognilytica analysts and guest experts.

00:00:22 Speaker_02
Hello, and welcome to the AI Today podcast. I'm your host, Kathleen Walch.

00:00:26 Speaker_00
And I'm your host, Ron Schmelzer. And it's interesting when we see the way people put AI into practice. So they're putting it into the real world and having real people interact with it.

00:00:37 Speaker_00
There's this temptation to do what we call set it and forget it. They put it up, it's working maybe okay, maybe it's working great, but then they sort of move on. They say, well, we already implemented this project.

00:00:52 Speaker_00
It was a chatbot to do this, or it was a RAG system we used to interact with documents. I already built it. Why do I need to keep looking at it? It's like, well, it's because stuff keeps changing, right? The world keeps changing.

00:01:03 Speaker_00
The data keeps changing. Models keep changing. That's what we call data drift and model drift. It's a real thing. If you don't know what it's all about, you should listen to some of our previous podcasts on those topics.

00:01:12 Speaker_00
You can learn more about that even on our site in our glossary. But I think the reason why we're talking about this today, this is really important, is that we've seen hundreds and hundreds of AI projects, you know, thousands maybe at this point.

00:01:23 Speaker_00
And of course, the vast majority of them are not successful. And it may surprise you, right? Or some of them are moderately successful in that they're doing something, but it's kind of more of a curiosity than it is

00:01:34 Speaker_00
something that's really core to their business, like, oh, yeah, we depend on this for this business operation, or it's really providing some fundamental thing. Those do exist, too, but they're the minority. They're not the majority, right?

00:01:45 Speaker_00
And I think one of the things we found here, a good way to summarize is the ones that, the projects that don't succeed, the ones that die, usually don't die for some big reason, like pick the wrong product or have the wrong data, because you could fix those big problems.

00:02:00 Speaker_00
They usually die of what's called death by a thousand cuts. It's like one little thing isn't going to hurt you, but it's just one thing after another, after another, after another. It's like bad data here, a bad iteration there, something comes up, it's not working.

00:02:16 Speaker_00
Because that's the case, the moment you stop looking at things, the moment you're just like, okay, it's good enough, think about it: didn't you only get your project implemented after realizing you had to fix 500 problems?

00:02:28 Speaker_00
Why would you think that those problems would all of a sudden go away the day you hit the upload button, or whatever, and made it available to the public? So that's what we're going to talk about here: how you can avoid death by a thousand cuts.

00:02:40 Speaker_02
Exactly. And so if you haven't done so already, subscribe to AI Today so you can get notified of all of our upcoming episodes. And I also encourage you to go back and listen to previous episodes as well.

00:02:50 Speaker_02
We talk a lot about common reasons why AI projects fail, and Ron and I both speak a lot. And so I encourage you to attend some of our sessions. Some are virtual, some are in person.

00:03:00 Speaker_02
You can find all of them on LinkedIn or in our newsletter that we'll link to in the show notes as well. But it's important to understand that AI projects are failing.

00:03:09 Speaker_02
But as we've looked at all of these AI projects, we find that there's common reasons why they fail.

00:03:14 Speaker_02
And so we like to continue to present them because even though we've presented them in the past, and even though we continue to talk about this, they're still happening.

00:03:22 Speaker_02
Maybe some people haven't heard these podcasts or they need it presented in a different way. And so that's why we're doing this. And on today's podcast, we really want to talk about

00:03:31 Speaker_02
what it means for continuous model iteration and how you have to budget for that. A lot of times, organizations, they're like, okay, we're going to move forward with this. We always say your AI projects are not free.

00:03:44 Speaker_02
Know that there's time, money, and resources that need to be devoted to this. Because, as Ron said, it's not a set it and forget it, that means it's continuous; that's going to be for the life of the project.

00:03:57 Speaker_02
Now, maybe you don't need as many people and your budget can be smaller, but you're going to have to have somebody, some resource on there to make sure that the model continues to perform as it needs to over time.

00:04:10 Speaker_02
For your project, that's going to vary how often you need to be checking that. Maybe it could be monthly. Maybe it needs to be weekly.

00:04:18 Speaker_02
Maybe it needs to be daily, depending on how integrated and integral it is, and how often your data changes, and how often you're noticing that your model is drifting and decaying over time, needs to be retrained.

00:04:30 Speaker_02
But it really is important to understand that organizations need to continually monitor their models for model decay and data decay, right? Data just changes over time. Your model's going to need to be retrained.
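As an editorial aside, the periodic monitoring described here can be sketched in a few lines of code. The example below is illustrative and not from the episode: it computes the Population Stability Index (PSI), one common, simple drift score, and the thresholds in the comments are widely cited rules of thumb, not universal standards.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index: a simple data-drift score.

    Compares the distribution of a feature at training time
    (`expected`) with its live distribution (`actual`).
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift worth a retraining review.
    """
    # Bin edges come from the training-time distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions; floor avoids log(0).
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)       # data the model was trained on
live_ok = rng.normal(0.0, 1.0, 10_000)     # live data, same distribution
live_shift = rng.normal(0.8, 1.0, 10_000)  # live data after drift

print(psi(train, live_ok))     # small value: no alarm
print(psi(train, live_shift))  # large value: flag for retraining review
```

Running a check like this on a schedule (daily, weekly, or monthly, per the cadence discussed above) is one lightweight way to notice drift before users do.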

00:04:43 Speaker_02
And you're going to have future iterations and future versions of that model. And that means that you need to budget for that. And so unfortunately, we've seen far too often that organizations go, OK, we're done. We're moving this resource to the next

00:04:59 Speaker_02
project. And you're like, well, now it's been a month and it's not performing as it used to, so what do I do? And your organization's like, sorry, we don't have resources for it, so you're going to have to wait, get in line again. And you're like, this is a pretty important thing, and people are using it, and it's continuing to not perform as expected. So what do you do? Do you shut it down?

00:05:19 Speaker_02
Do you have to kind of pull people on immediately and now have this fire alarm project that you're doing? And so this is important. This is why we always say think big, start small and iterate often.

00:05:29 Speaker_02
So think about the big picture and then start small, but know that you are going to continually iterate and you're gonna need resources on this.

00:05:37 Speaker_00
Yeah. And I think that's part of the reason why a lot of projects fail is they just take the foot off the gas, if you will, and they just kind of put it out there. And of course, stuff happens.

00:05:44 Speaker_00
And then at some point, the problems outweigh the benefits and projects get canceled. That happens all the time, right? The reason why AI is so susceptible to this is because everything is constantly changing.

00:05:56 Speaker_00
Even if we say that, for whatever reason, the data is staying the same—let's just say we're doing handwriting recognition and the data is not really changing that much, and the models don't need to change that much—the tools and technology and the way people are working are changing quite a bit.

00:06:10 Speaker_00
Maybe you had something like a handwriting tool that had some acceptable performance. Actually, I'll use sentiment analysis because that's a better case.

00:06:18 Speaker_00
A couple of years ago, you had to basically build your own models to do sentiment analysis.

00:06:23 Speaker_00
And for those that aren't aware, sentiment analysis is something like reading some text and understanding if that text is positive or negative, or angry or sad or whatever. And that may be important because maybe

00:06:34 Speaker_00
you're analyzing thousands or millions of online reviews. You want to know after you've introduced some product whether people like it or not. It's very hard to do that with people.

00:06:42 Speaker_00
So what people do is they build a machine learning model that, say, analyzes past text and you train it. And it can probably work fairly reliably. Maybe people's language doesn't change that much over time.
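As an editorial aside, the kind of model described here, trained on past labeled text and then used to score new reviews, can be sketched as a toy Naive Bayes classifier. This example is illustrative and not from the episode; a real project would use an established library and far more labeled data, but the shape is the same.

```python
from collections import Counter
import math

class TinySentiment:
    """A minimal bag-of-words Naive Bayes sentiment classifier."""

    def __init__(self):
        self.word_counts = {"pos": Counter(), "neg": Counter()}
        self.doc_counts = {"pos": 0, "neg": 0}

    def train(self, text, label):
        # Learn from past labeled text, one review at a time.
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        vocab = len(set(self.word_counts["pos"]) | set(self.word_counts["neg"]))
        scores = {}
        for label in ("pos", "neg"):
            total = sum(self.word_counts[label].values())
            # Log prior plus summed log likelihoods, with add-one smoothing.
            score = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

model = TinySentiment()
model.train("great product love it", "pos")
model.train("works great highly recommend", "pos")
model.train("terrible waste of money", "neg")
model.train("broke after one day awful", "neg")

print(model.predict("love it great"))  # pos
print(model.predict("awful waste"))    # neg
```

The drift problem discussed in this episode shows up exactly here: if the language of incoming reviews shifts away from the training examples, the learned word counts stop matching reality and accuracy quietly degrades.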

00:06:53 Speaker_00
You might say, yeah, the model we built a couple of years ago still works generally the same because the review sites are the same and people are writing the same kinds of reviews.

00:07:02 Speaker_00
But what's going to happen is that now generative AI systems are out, and maybe those NLP tools perform much better than what you built. Or maybe people are starting to write reviews

00:07:14 Speaker_00
using generative AI, and all of a sudden your user behavior starts to look different because people are using generative AI tools to write reviews, and now the inputs don't match your models as well.

00:07:24 Speaker_00
In all these situations, these are things that are not in your control. So you can't just decide as a business, well, we're going to stop funding iteration of the model because we are done.

00:07:34 Speaker_00
We're not going to introduce new features, the model will be embedded where we want it to be embedded, we're not going to change that, and maybe the sites don't change, so we're going to be like, we're done.

00:07:45 Speaker_00
Well, it's not your choice because in the future, other people may change the way things are done. So what are you going to do?

00:07:50 Speaker_00
Like all of a sudden restart the project when you realize that stuff isn't working, but you won't even know that things aren't working if you're not keeping an eye on those things.

00:07:57 Speaker_00
So a while back, there was this whole movement towards something called MLOps, which was this idea that we're going to have operational control and visibility over our machine learning systems, very ML focused, not even AI, just ML focused.

00:08:13 Speaker_00
And there are all these tools that sort of came out to help you do that. But the problem is that people, one, didn't feel like it was worth it for them to even invest in monitoring, even though if that were really true,

00:08:24 Speaker_00
they wouldn't be having so much failure, right? And two, it never really made sense to have separate tools to do it, because you would probably do the monitoring and such within the tools you're already using.

00:08:34 Speaker_00
You're not going to get some new tool to do something on the side when the tools that you already use for building and operationalizing and embedding your models are what you use every day. So you're probably going to expect that capability to be built in.

00:08:43 Speaker_00
So that whole movement actually as a technology vendor movement kind of went away. But that didn't mean that we're not doing model iteration anymore or model monitoring. We're still doing those things.

00:08:56 Speaker_00
But now, as we said in our previous podcast, this is not a technology thing. This is a process thing. It's not something you buy. It's something you do.

00:09:04 Speaker_00
And so that is still very new; most organizations haven't really figured out that they need to do this thing that was formerly, in that era of tools, called MLOps.

00:09:15 Speaker_02
Exactly. And I think that it's really important here to understand that, you know, we had brought this up in previous podcasts, but large organizations really don't always have a plan when they're, you know, moving forward with projects.

00:09:28 Speaker_02
And they definitely don't have a framework or a step-by-step approach that they are following for AI projects. We've interviewed hundreds

00:09:37 Speaker_02
of, you know, C-level and decision makers, you know, the doers at organizations, and they're just kind of winging it with their AI projects.

00:09:45 Speaker_02
Well, when you do that, you don't understand that full iteration life cycle and what it means and what is entailed. So when you're not following a step-by-step approach, you may miss things.

00:09:56 Speaker_02
And that is why a lot of times organizations don't have that resource and don't have budgeting on the backend. because they said, OK, well, we got it out there. We're done. We're moving on.

00:10:08 Speaker_02
If you've listened to our podcast for a while or heard us talk, you know we're big advocates of CPM-AI, the Cognitive Project Management for AI methodology or framework, whatever you'd like to call it. Sometimes the word methodology trips people up.

00:10:20 Speaker_02
But it really is this step-by-step approach for how to run and manage AI projects. We always say, think big, start small, and iterate often. Well, when you're thinking big, you're thinking about your overall goal that you'd like.

00:10:31 Speaker_02
And then you start small. You say, what is the smallest

00:10:34 Speaker_02
you know, thing that I can do in this iteration that's using AI? We talk about the seven patterns of AI. So, you know, how do you break this down so that you can move forward in that small iteration, and then continue to iterate on that project, add more capabilities in

00:10:51 Speaker_02
with each iteration. And that is unfortunately what people are not doing.

00:10:55 Speaker_02
So as Ron mentioned with MLOps, you have to be monitoring your models, you have to be paying attention to them, understanding how they're going to change over time, and have governance in place for all this. But you really need to follow that step-by-step approach.

00:11:11 Speaker_02
So just doing one thing doesn't mean you're actually doing AI right. And that's why it's so important.

00:11:18 Speaker_02
We also say, you know, when you're doing a lot of small little things and you think that you're all kind of moving forward, you may all be moving in different directions and you're not actually moving towards that goal.

00:11:27 Speaker_02
When you follow that step-by-step approach, you know you're doing it in a logical order. You know when you're supposed to be doing things, and everybody's on the same page. And then also you document it

00:11:37 Speaker_02
so that if somebody leaves the team, you know exactly where they were in the process and you can pick right back up. So that's why we say, you know, it really is important.

00:11:45 Speaker_02
You're gonna have to understand that there's continuous model iteration, but what exactly does that mean?

00:11:49 Speaker_02
Well, you need to be following that step-by-step approach so that you're doing it at the right time and you're understanding how it fits in and how for each individual project you're doing this too, right?

00:12:00 Speaker_02
This isn't like some organizational thing and you say, well, we're gonna iterate each model once a month. It's like, no, each project needs to be different. And that's why you need to understand it. That's why you have it written down.

00:12:10 Speaker_02
And that's why everybody on the team is on the same page.

00:12:13 Speaker_00
Yeah, so I think this is part of our general recommendations for best practices. These are practices, you know? And you might say, well, how do I know these are best practices? Well, they're proven practices, so people have been implementing them.

00:12:24 Speaker_00
And it's better to have a practice and a methodology than none at all. We keep talking about how we're still surprised that so many organizations that say that AI is so important, is so critical to their

00:12:35 Speaker_00
business organization, but they have no real method, no plan. They just let people do stuff with AI, whatever way they want to do stuff with AI. Why are they surprised that stuff isn't working? I have no idea.

00:12:46 Speaker_00
I don't know why they're taking this approach with AI, but they don't take this approach with other things in their organization, like their CRM system. You let everybody implement their own CRM system? Sure.

00:12:54 Speaker_00
Anybody can go online and sign up with a credit card, and you get your CRM of choice: free ones, paid ones, whatever.

00:13:01 Speaker_00
But you don't do that because you're like, well, that's because that's not how you run a sales organization, or a marketing organization, or cybersecurity. It's like you don't just let everybody respond however they want to respond. You have a plan.

00:13:15 Speaker_00
Yes, you need technology because you can't do cybersecurity without threat mitigation and threat detection and all this sort of stuff. But cybersecurity isn't a thing you buy, it's a thing you do, just like MLOps. So we said this many, many years ago.

00:13:28 Speaker_00
At the time, people did not necessarily agree with us, but I think we have been proven right. So if you are listening to us and you think we're wrong,

00:13:36 Speaker_00
then let us know, but we think we have been proven right that MLOps, for example, is not something you buy, it's something you do, and that the MLOps vendors in the market have, by and large, sort of either disappeared on their own or have been absorbed within larger tech companies, as they should have been.

00:13:50 Speaker_00
But that's because, just like any sort of thing that requires constant monitoring and iteration, it's a process. And then there's the who, this is the people part: yes, the process might be constant monitoring, but who does it?

00:14:03 Speaker_00
Is it the technical part of your organization? Is it the business part of your organization? Is it some project management role? A lot of times, it's not defined either. The who is very important.

00:14:14 Speaker_00
The people is very important to the process because if it's a technical person, they're going to care about the technical things that they're monitoring.

00:14:20 Speaker_00
If it's a business person, they're going to care about the business things that they're monitoring. Who is right? The answer is they're both right. That might mean you need a multidisciplinary management team.

00:14:31 Speaker_00
Some people are keeping an eye on the business KPIs and metrics. Some people are keeping an eye on the technology KPIs and metrics. That's great. Whatever works for you, but have a plan.

00:14:39 Speaker_00
Because as we've said before, failure to plan is planning for failure. Exactly. You can do better.

00:14:46 Speaker_02
I know. So, we are always big advocates of CPM-AI, like we said, the Cognitive Project Management for AI. It really is that step-by-step approach.

00:14:54 Speaker_02
We say invest in yourself so that you can learn this methodology, you can learn this step-by-step approach, and then you can enhance your career and you can go into organizations and have a proven step-by-step approach.

00:15:06 Speaker_02
Don't reinvent the wheel here, right? This is proven: hundreds of organizations have adopted it for running and managing their AI projects. We'll link to it in the show notes, or you can go to Cognilytica.com slash CPM-AI.

00:15:17 Speaker_02
And also, if you haven't done so, please subscribe to AI Today. We love to hear from our listeners. I know many of you reach out to us. You let us know what episodes you like, maybe different topics you want us to dig deeper into.

00:15:27 Speaker_02
And also, please leave reviews. We love when, you know, our podcast listeners can rate and review our podcasts, and we love five-star reviews. Like this episode and want to hear more?

00:15:38 Speaker_02
With hundreds of episodes and over 3 million downloads, check out more AI Today podcasts at aitoday.live.

00:15:45 Speaker_02
Make sure to subscribe to AI Today if you haven't already on Apple Podcasts, Spotify, Stitcher, Google, Amazon, or your favorite podcast platform. Want to dive deeper and get resources to drive your AI efforts further?

00:15:58 Speaker_02
We've put together a carefully curated collection of resources and tools, handcrafted for you, our listeners, to expand your knowledge, dive deeper into the world of AI, and provide you with the essential resources you need.

00:16:11 Speaker_02
Check it out at aitoday.live slash list. This sound recording and its contents are copyright by Cognilytica. All rights reserved. Music by Matsu Gravas. As always, thanks for listening to AI Today, and we'll catch you at the next podcast.