Prompt Engineering Best Practices: What is Prompt Chaining? [AI Today Podcast]
AI transcript and summary of an episode of the podcast AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
Episode: Prompt Engineering Best Practices: What is Prompt Chaining? [AI Today Podcast]
Author: AI & Data Today
Duration: 00:26:52
Episode Shownotes
To improve the reliability and performance of LLMs, sometimes you need to break large tasks/prompts into sub-tasks. Prompt chaining is when a task is split into sub-tasks to create a chain of prompt operations. Prompt chaining is useful if the LLM is struggling to complete your larger complex task in one step. Continue reading Prompt Engineering Best Practices: What is Prompt Chaining? [AI Today Podcast] at Cognilytica.
Full Transcript
00:00:01 Speaker_00
The AI Today podcast, produced by Cognilytica, cuts through the hype and noise to identify what is really happening now in the world of artificial intelligence.
00:00:10 Speaker_00
Learn about emerging AI trends, technologies, and use cases from Cognilytica analysts and guest experts.
00:00:22 Speaker_02
Hello, and welcome to the AI Today podcast.
00:00:24 Speaker_01
I'm your host, Kathleen Walch. And I'm your host, Ron Schmelzer. And honestly, we love talking to you. Don't be shy, even if you have a problem, like you're listening to our AI Today podcast and you disagree with us.
00:00:40 Speaker_01
Or maybe you have a comment or maybe you want to add some commentary to it. Reach out to us, email us, comment on our posts on LinkedIn. We like this engagement because the whole purpose of this podcast is not to talk to ourselves.
00:00:55 Speaker_01
The purpose of our podcast is to talk to you, our audience. So we always like to hear from you.
00:01:01 Speaker_01
And one of the things that we have been hearing is as we've been going down our series of prompt engineering, you have been really interested in the specifics.
00:01:10 Speaker_01
Because, as you know, prompt engineering and prompting, this is sort of like right now the killer app of AI, generative AI. These tools are out there. They're becoming embedded in everything we're using.
00:01:23 Speaker_01
Really, as we said before, the power of AI is in the hands of so many people that, right now, what separates productive people from unproductive people is who can actually make the most effective use of AI.
00:01:35 Speaker_01
And that's why we're going to continue along on this Prompt Engineering series, unless we hear from you and you want to do something else. So, tell us.
00:01:42 Speaker_02
Exactly. So as Ron mentioned, this is, we have a lot more podcasts lined up in our Prompt Engineering series. So subscribe to AI Today if you haven't done so already, so you can get notified of all of that.
00:01:53 Speaker_02
And if you'd like it in written form, then subscribe to our newsletter. You can go to our website and do that, or you can go to LinkedIn to the Cognilytica page and subscribe on there, where we break it down into even greater detail.
00:02:06 Speaker_02
But for today's podcast,
00:02:07 Speaker_02
You know, we thought it was important to continue on with our prompt engineering series and talk about prompt chaining, because we wanted to make sure that we're providing a comprehensive approach to how best to do this, what the best practices are, and understand that there really are two types, two general ways that you can do prompts.
00:02:30 Speaker_02
And so one is one-shot prompts, which is basically where you write out the prompt, one big prompt that contains everything that you're looking for.
00:02:39 Speaker_02
In the previous podcast, we had talked about prompt patterns, and so you want to make sure that you are following a pattern.
00:02:45 Speaker_02
We'll link to that in the show notes in case you haven't checked that out so you understand a little bit more about what patterns are.
00:02:50 Speaker_02
But they're just that step-by-step approach, making sure that you are getting all of the things that you need into your prompt. Because again, the LLM can't read your mind. So you need to specifically say what the role is and what the task is.
00:03:04 Speaker_02
Is it writing a blog post that's 500 words? Is it writing an email in a friendly tone? You need to be setting all that. So that's a one-shot prompt. But the second way that you can do this is with prompt chaining.
00:03:14 Speaker_02
And this is where you break down the prompt into smaller steps.
00:03:18 Speaker_02
So for example, if we wanted that one-shot prompt, I could say, you know, act as a social media marketer who's writing a blog post for a project manager blog, and I want a 500-word blog post on prompt engineering best practices.
00:03:37 Speaker_02
That's a one-shot prompt. The prompt chaining, you can break it down into smaller tasks. So you can do this either if you just want it to be more specific or if you have a super large prompt and it doesn't fit into the prompt window.
00:03:51 Speaker_02
So if I wanted to do prompt chaining for that example, I could say, you know, act as a project manager with 20 years of experience who is also experienced in writing blog posts and has been doing so for the past five years.
00:04:06 Speaker_02
And then you, you know, say what you want, say that you're writing a blog post on prompt engineering best practices for project managers and provide me an outline. And then it'll come back with the outline.
00:04:18 Speaker_02
Then from there, you can say, OK, this is great. Now write the first paragraph for me or the opening of my blog in this tone for this audience. You can just continue to be specific.
00:04:29 Speaker_02
And then you can continue to write section by section to get a full blog. So these are two different ways that you can do it.
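The two styles described here can be sketched in a few lines of Python. The `call_llm` function below is a hypothetical placeholder (it just echoes its prompt), not a real API; the point is the shape of the calls.

```python
# Sketch of one-shot vs. chained prompting. `call_llm` is a placeholder
# stand-in for whatever LLM API you actually use.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g., a request to a chat API)."""
    return f"[model response to: {prompt[:40]}]"

# One-shot: everything in a single prompt.
one_shot = call_llm(
    "Act as a social media marketer writing for a project manager blog. "
    "Write a 500-word blog post on prompt engineering best practices."
)

# Chained: break the task into steps, feeding each output forward.
outline = call_llm(
    "Act as a project manager with 20 years of experience who has also been "
    "writing blog posts for the past five years. Provide an outline for a "
    "post on prompt engineering best practices for project managers."
)
opening = call_llm(
    f"Using this outline:\n{outline}\n"
    "Write the opening paragraph in a friendly tone for that audience."
)
```

With a real model, you would keep going section by section, each prompt building on the previous response, exactly as described in the episode.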
00:04:38 Speaker_01
Yeah, and I think sort of the important reason why this comes up is that when you're having a conversation, if you will, with the LLM, as Kathleen mentioned, there's a context window.
00:04:48 Speaker_01
There's like all the things that the LLM considers when it's generating a response. And the context window consists not just of your prompt, but also everything that the LLM has generated in response to your prompt, right?
00:05:04 Speaker_01
And as we have talked about in earlier podcasts, LLMs don't really understand words. They understand basically numbers and tokens. And a token is a representation of your words mapped to a particular set of dimensions, so it also knows the meaning.
00:05:18 Speaker_01
Because if you say the word bat,
00:05:20 Speaker_01
it doesn't know whether you're talking about a bat that you hit a ball with, the bat that's flying around, or maybe even the batts you use for insulation. Those are all different kinds of bats, so the context helps. What happens is that the word gets mapped to a specific numerical concept that gets embedded with the other related words. So if you say baseball bat, it embeds the word bat closer to the things that have to do with baseball; if you say flying bat, it goes somewhere else,
00:05:47 Speaker_01
unless you mean you're throwing your bat. So you can see how the context really matters here. So anyway, the LLM considers all of these tokens. It has a sort of limit as to the number of tokens it will consider when it's generating a response.
00:05:59 Speaker_01
That's the context window. And it's all the stuff that's from your prompt, as well as all the responses that the LLM has generated, and then all of your follow-ups.
00:06:09 Speaker_01
That's why prompt chaining can work, because it's not like you're starting the conversation from scratch every time.
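This accumulating context can be sketched as a running message history, which is how chat APIs typically model a conversation. `call_llm` below is a hypothetical stand-in for a real chat endpoint that would receive the whole list on every turn.

```python
# Minimal sketch of why chaining works: every turn is appended to a running
# message history, so the model "remembers" earlier prompts and replies.

def call_llm(messages: list[dict]) -> str:
    """Stand-in: a real API would receive the entire message list each turn."""
    return f"[reply #{sum(1 for m in messages if m['role'] == 'user')}]"

history = []

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_llm(history)  # the model sees ALL prior turns
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Act as an editor. Outline a post on prompt chaining.")
chat("Great, now expand section 1.")  # no need to restate the role
```

The second call never repeats the role or the task; the earlier turns are already sitting in the context window, which is the "memory" described above.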
00:06:16 Speaker_01
And because it's keeping these tokens, if you will, in memory, in the context window, it can come back and say, well, I've already mapped all these concepts.
00:06:26 Speaker_01
I've already sort of come up with, narrowed down the universe of all the data I'm considering just to this particular role. I've already generated the output in a format that you want. So you could say, do more, right?
00:06:38 Speaker_01
So you may already have had experience with this idea when you've generated some text, and maybe the LLM hasn't gotten it right. And so you say, no, no, no, replace this with that, or change this, or do that.
00:06:50 Speaker_01
What you're actually doing is you're doing a form of prompt chaining. You're actually iterating by saying, OK, take everything that you've generated in the past, Use that as the context. Now, I'm basically iterating.
00:07:01 Speaker_01
I'm adding to my prompt and saying, do something else or change it. And that's what prompt chaining really is. It's just a matter of taking these prompts and the responses and putting them together.
00:07:12 Speaker_01
There's a lot of reasons why we might want to chain it. Let's just say we want to iteratively answer and ask questions about a document. So we can upload the document once.
00:07:23 Speaker_01
right, at the beginning of our conversation, or we could ask for something once. And then I can have ongoing, iterative questions about that document or the responses. And I could say, for example, this is great for learning.
00:07:36 Speaker_01
If I want to learn a subject, teach me about calculus. OK, well, there's a lot to learn about calculus. Where do you want to start? And maybe even the LLM may even say that. Tell me the basics. And then you could say, well, wait a second.
00:07:48 Speaker_01
I don't understand this. and you can dive deeper. These are sort of aspects of prompt chaining, where it's either iteratively answering questions or doing what's called the next thing, validating and refining the responses.
00:08:02 Speaker_01
And you can even have it check and say, are you sure that's the answer? I read something else somewhere else, and you can do that.
00:08:08 Speaker_01
But there's lots of other reasons why we might want to have this sort of ongoing iterative style of prompting with an LLM.
00:08:16 Speaker_02
Right, we may want to be performing parallel tasks. So this allows you to do that, really optimizing that AI assistant's performance on complex tasks.
00:08:30 Speaker_02
Also, as I mentioned earlier, maybe you want to simplify the writing of long form content, you know, maybe it can help you with breaking down the writing process.
00:08:39 Speaker_02
Maybe we're not trying to write a 500-word blog post, but we're trying to write a 5,000-word essay. So rather than just having it generate something all in one go, we can break down that writing process into outlined sections or chapters.
00:08:54 Speaker_02
Maybe we're using it to help write a full-on book. that we can then have the AI expand upon in sequence.
00:09:01 Speaker_02
So first it gives me the outline, then it gives me the first chapter, then the second chapter, and so on and so forth, rather than just writing something all in one take.
00:09:10 Speaker_02
It can also be a stepwise approach when we are doing different projects, especially if it's research projects. where now first we can have it find source documents, then we can have it extract key facts and data, and then synthesize the conclusion.
00:09:25 Speaker_02
So rather than doing that all in one go, first we get all the documents that we want. We make sure that all of them are needed. Maybe we need to be doing some refinement here with adding more documents. Then we have it extract some of that
00:09:39 Speaker_02
key information, the key facts, the key data from those findings, make sure that that's what we want. And then once we have that, then we can synthesize the conclusions.
00:09:48 Speaker_02
So this helps with that step-by-step approach so that we aren't doing everything in one go saying, hey, this is really wrong, and it's going to take a lot more effort to get it to where we want.
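That stepwise research flow can be sketched as a simple pipeline where each step's output becomes the next step's input. Again, `call_llm` is a placeholder, not a real API.

```python
# The three-step research chain: find sources, extract key facts,
# then synthesize conclusions, with each output feeding the next prompt.

def call_llm(prompt: str) -> str:
    """Placeholder: echoes the first line of the prompt as its 'result'."""
    return f"[output for: {prompt.splitlines()[0]}]"

def research_chain(topic: str) -> str:
    sources = call_llm(f"Find source documents\nTopic: {topic}")
    facts = call_llm(f"Extract key facts and data\nFrom: {sources}")
    return call_llm(f"Synthesize conclusions\nUsing: {facts}")

result = research_chain("prompt engineering")
```

In practice you would inspect and refine the output of each stage (adding or dropping source documents, for example) before passing it along, which is the whole point of doing it step by step.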
00:09:59 Speaker_01
Yeah, and we can also do that with any sort of iterative approach, whether it's computer programming, where I maybe start with a project plan or an outline, maybe some pseudocode that it can go into different aspects.
00:10:10 Speaker_01
Let's say I want our GPT system to make a game. Well, you're not going to be able to get that all done in one prompt, and you probably won't get the results back in all in one prompt.
00:10:19 Speaker_01
The other thing I want to mention is that sometimes the LLMs don't always finish generating. So they may start something and they may just summarize the results or they may stop at a certain point and you may want it to continue.
00:10:32 Speaker_01
So you might type continue, or you forgot something, or please expand, right? These are all aspects of prompt chaining, because you don't have to explain what you want to do all over again.
00:10:43 Speaker_01
Like, no, now I got to start from the beginning. I have to explain my problem, et cetera, et cetera. That's why this is so powerful, because it kind of remembers, right? I mean, that's sort of the power here.
00:10:53 Speaker_01
And of course, it's not actually remembering. It's just keeping all of that context sort of in the window of your chat. This is another little tip.
00:11:01 Speaker_01
If all of a sudden the AI system is starting to kind of drift and kind of go in a weird direction, you're like, I don't know why it's paying attention to some fact. It's like, OK, ignore that. Focus on this.
00:11:13 Speaker_01
And then every time you ask a question, it's still paying attention to it.
00:11:16 Speaker_01
That's probably the time to close your chat window and start a new one, because what you're doing is basically clearing out the context window and saying, forget all of that. Don't pay attention to all that.
00:11:26 Speaker_01
That will be annoying, though, because you do have to start everything from scratch and kind of build that context up. But maybe you could accelerate things, right? So all of these are really general approaches.
00:11:36 Speaker_01
I mean, for those of you who've been using GPT, a lot of this seems kind of obvious because you're probably already doing it. The odds of you
00:11:45 Speaker_01
putting in a prompt and getting exactly what you want and then not doing anything further is probably not happening.
00:11:52 Speaker_01
I would say that's more of a Google search style of using GPT, where you ask a question, you get a response, because you can't refine the responses that any search engine gives you. Like, OK, now trim this down and only do that.
00:12:06 Speaker_01
No, I'm sorry, you have to put in a new search query for that. That would be what would happen here if we didn't have context window. You'd have to start another.
00:12:13 Speaker_01
conversation. So, and this seems obvious honestly to a lot of us, but there are some patterns or formulae that we can follow when we are trying to do some prompt chaining, so that maybe it's not always just
00:12:30 Speaker_01
you know, responding to whatever the GPT said. Maybe we can, or the LLM system said, maybe we can think about this in advance and say, okay, let's plan for this interaction with the LLM to be a chain. So let's plan out our chain process.
00:12:45 Speaker_01
So we have a few options here.
00:12:47 Speaker_02
Exactly. So one of them is chain of thought prompting. So think about this as breaking down big tasks. And this method really does help you take that larger task that you have and break it down into smaller subtasks and then chain them together.
00:13:02 Speaker_02
We always say at Cognilytica, think big, start small, and iterate often. So think big, you have your larger task, you break it down into smaller tasks, and you continue to chain it together.
00:13:11 Speaker_02
So it helps you tackle each part one by one, and then connect each step as you go along, rather than doing one whole big monstrous thing at once. So think about, you know, it's really how you solve problems, right? Naturally, it's step by step.
00:13:25 Speaker_02
So, and again, the reason that we wanted to present this in a podcast series is just to help you maybe understand some terminology if you haven't before, or understand that maybe the way that you're doing things really is best practice, and some of those practices have names now associated with them.
00:13:39 Speaker_02
And if you haven't been doing it that way, then you can think about doing it this way. So another way is self-consistency or ReAct techniques. So we say this is, you know, really thinking harder and smarter, letting the tool help you.
00:13:53 Speaker_02
So these advanced prompt engineering techniques are designed to enhance the AI's reasoning capabilities. We talk a lot about how we don't yet have machine reasoning. So this is just a little tip and trick to help it feel like it has more reasoning.
00:14:10 Speaker_02
And these are fancy ways to make the AI think more deeply and try different approaches to solve a problem. Because maybe it's thinking in different ways that you as a human aren't able to. So you're helping augment your role.
00:14:24 Speaker_02
And think about this as it's like when you're trying to figure something out and you keep testing different solutions in your head. Or you ask yourself the same question in a few different ways to get the best answer.
00:14:36 Speaker_02
So really, this is just a helpful tool that you can use when you want to think harder and smarter. And then there's a few other ways.
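One way to sketch self-consistency: ask the model the same question several times (with sampling enabled on a real model) and keep the most common answer. The canned replies below are a stand-in for sampled model outputs.

```python
# Self-consistency sketch: sample several answers, keep the majority vote.
from collections import Counter
from itertools import cycle

# Canned "sampled" answers standing in for a real model with temperature > 0.
_canned = cycle(["42", "42", "41"])

def call_llm(prompt: str) -> str:
    return next(_canned)

def self_consistent_answer(question: str, samples: int = 3) -> str:
    answers = [
        call_llm(f"{question}\nThink step by step, then give a final answer.")
        for _ in range(samples)
    ]
    # Majority vote across the sampled answers.
    return Counter(answers).most_common(1)[0][0]

best = self_consistent_answer("What is 6 * 7?")
```

The idea is that even when any single sampled answer can be wrong, the answer the model converges on most often tends to be more reliable.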
00:14:44 Speaker_01
Yeah, and there's sort of like a related idea to that, which is called the flipped interaction.
00:14:48 Speaker_01
So you might think that it's your job as the human to tell the LLM what you want or to ask the LLM some question, and it's just going to be the wizard and answer. But actually,
00:15:00 Speaker_01
LLMs are just text predictors, so you can actually have them ask you a question. You might say, I'm interested in writing a blog post. What are the steps that I should consider to writing a blog post? You might already know them, but like,
00:15:16 Speaker_01
let's actually have the LLM come back, right? So this is flipped interaction. What flipped interaction will do is, you ask the question, what should I consider? And it comes back and gives answers.
00:15:26 Speaker_01
And then the user can go further, or even have it ask you to clarify. You could say, hey, LLM, what questions should I answer that would help me write a blog article that would get the most traffic from this audience?
00:15:41 Speaker_01
And it might say, oh, you should probably answer this question. And the LLM can actually ask you questions or have you clarify the results.
00:15:49 Speaker_01
And so this is kind of a way of working iteratively. I would like to say it's more like a therapist than a wizard, I don't know, where the therapist goes, and how does that make you feel?
00:16:00 Speaker_01
And you're basically as the person, you're kind of answering your own questions. You're not asking for it. So it's a way of doing it.
00:16:06 Speaker_01
It's good for certain styles, especially when there are many, many ways of doing something, or maybe when you're not necessarily the expert. We were actually talking about this very recently.
00:16:16 Speaker_01
One of the powers of large language models is that it does give you the capability of gaining expertise in something that you're not an expert at, right?
00:16:25 Speaker_01
If someone says, I need you to write an article, and you're not an expert, you're like, well, hey, this LLM is an expert. But it's not. It's just been trained, right, on what other experts produce, right?
00:16:34 Speaker_01
So what you're really leveraging is the output of what all the experts have produced. And it's kind of summarizing all that. That's the secret. That's what's actually happening.
00:16:43 Speaker_01
So you can thank all the experts who have actually written that stuff on the internet. Thank you guys for creating the training data because they're the ones that did it.
00:16:49 Speaker_01
But sort of in a related mode is this idea of what's called the question refinement pattern. That's another one where basically you could start with something like a general question. You can have the LLM kind of come back and
00:17:00 Speaker_01
say, well, what would help me get to a better answer faster? So the question refinement pattern of this prompt chaining technique is this iterative approach to refining and specifying the questions to kind of get more to that specific information.
00:17:16 Speaker_01
Again, really, all of this depends on what you're trying to do. It's not like we do this for every interaction with a prompt.
00:17:22 Speaker_01
If I have a simple question and a simple answer, as we said in our previous best practices one, I can use a very simple prompt pattern. I could probably do it in a single shot.
00:17:30 Speaker_01
I get my answer, life can go on, right? I don't need to have more conversations. But if I'm trying to do some analysis or I'm trying to solve a complex task, as I said, you should think of the LLM as somebody that you're working with, right? Hopefully it's knowledgeable.
00:17:47 Speaker_01
If they're not, then you should tell that LLM to be knowledgeable. You should be a knowledgeable person. And you treat that person as your assistant, not necessarily as someone who's just doing a task for you, where you tell them to do a task.
00:18:00 Speaker_01
That would be forms of prompt chaining, whereas I can do this, then do this, then do this. But also, that person can be sort of a guide and say, hey, you've done this before a few times. Maybe I'm doing this the wrong way.
00:18:12 Speaker_01
I have to say, this is a change of mentality because even for us, you have to change your way of thinking. You think, I need help. So the first thing you might think of is searching. That's one thing. You go out. But let's just be honest.
00:18:25 Speaker_01
Searching now is a mess. People have gamed the SEO system. There's lots of content that's not good. It may be obsolete. Who knows what you're dealing with? Ad content.
00:18:34 Speaker_01
The other thing is that the next response is, well, maybe I just need to hire someone to help me. I'm telling you, LLMs might change all of that.
00:18:42 Speaker_01
Maybe the first step should be like, maybe I should first query the source of something that really has access to all the information. And then if the LLM is not useful, then maybe I do something else. It's definitely a mind shift, right?
00:18:56 Speaker_02
Yeah, it absolutely is. And even we need to train our, you know, reflex muscles as well. And that's why we wanted to have a podcast series on this because we want to help our listeners train their reflex muscles as well.
00:19:09 Speaker_02
And also let them know the art of the possible, right? What other people are doing, how they're using this. Maybe it gives you ideas to say, hey, maybe I can do this with my role. Maybe my organization can use it for this reason.
00:19:23 Speaker_02
And that's why it's important that we present this. Like we said, we love to have these conversations because even we find different ways of doing things when we have these conversations and understand how other people are using things.
00:19:35 Speaker_02
So it's always wonderful to hear what everybody's doing. And if you are interested in getting to meet us in person, we have a list of all of the upcoming places that we'll be speaking at both virtually and in person in our newsletter.
00:19:49 Speaker_02
So if you haven't done so already, subscribe to the newsletter. You can do that at Cognilytica.com or you can subscribe on LinkedIn.
00:19:56 Speaker_02
So we want to wrap up this podcast by just talking about a few different ways that, you know, we say that, yes, this is still pretty new, but there are some now best practices, current practices to follow.
00:20:07 Speaker_02
So when you're looking to chain prompts, a few best practices that we recommend following is to one, decompose the task.
00:20:15 Speaker_02
So we've talked about this now, that you're going to have your big overall task that needs to be accomplished and then break it down into that smaller manageable subtasks. So I talked about think big, start small, and iterate often.
00:20:27 Speaker_02
Think big, start with one task, and continue to work on your tasks. You also want to craft your prompts. So for each subtask, you want to craft a clear and precise prompt using prompting best practices. So we talked about how there's prompt patterns.
00:20:42 Speaker_02
You want to make sure that you're using that. You want to make sure that you're providing as much detail as you can.
00:20:48 Speaker_02
so that, because again, these systems can't read your mind, you want to make sure that you're providing it. You also want to make sure that it understands that you've scoped it down. So you've said, act as a high school math teacher with 30 years of experience, or act as a marketing professional who works at a large organization and has been there for five years, whatever it is.
00:21:09 Speaker_02
And then test your prompts. So this may seem obvious, but again, you're going to have to test your prompts. It's most likely not going to work the first time as expected, maybe not the second or the third either. That's okay.
00:21:20 Speaker_02
Test it, make sure that it is performing the way that you want. And then with time, you're just going to get better as well. And you'll learn how to do this so that it's working as expected.
00:21:30 Speaker_02
And then the next step is very important because you're going to troubleshoot this, you're going to make sure that it's working, and then it will. Now you can chain your prompts.
00:21:38 Speaker_02
So begin with that initial prompt and use the output of that for the input of the subsequent prompt. And you're going to continue to repeat this process until all the prompts have been chained. Now you can say, well, how many prompts do I need?
00:21:51 Speaker_02
There really is no set rule to how many prompts you can or should have in your chain. So just understand there is no set rule, but as you're chaining these along, figure out maybe how much is too much. Is 100 chains too many? Is 10 that sweet spot?
00:22:08 Speaker_02
You're going to play around and figure it out over time. And then evaluate and iterate as needed. Obviously, you want to assess the final output against some original work objective. We always say, what is the problem that you are trying to solve?
00:22:21 Speaker_02
And compare it against that. Did it do a good job? If it didn't meet your expectations, you're going to have to tweak the prompt and repeat the process until you get that desired output.
00:22:32 Speaker_02
And remember that this really is just about adjusting and iterating your prompts. It's never going to be right the first time, but it will continue to improve over time. And that's why you want to evaluate and iterate as needed.
00:22:45 Speaker_02
As we say with all AI, it is never a set it and forget it. It is probably not going to be right the first time. So just keep trying.
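Those five practices can be sketched as a minimal loop, assuming a placeholder `call_llm` and using `{prev}` as the slot where each subtask receives the previous step's output.

```python
# Decompose, craft, chain: each subtask prompt is filled with the previous
# step's output, and the final result gets evaluated against the objective.

def call_llm(prompt: str) -> str:
    """Placeholder stand-in for a real LLM call."""
    return f"<done: {prompt[:30]}>"

def run_chain(subtask_prompts: list[str]) -> str:
    """Run each subtask in order, feeding the previous output into {prev}."""
    prev = ""
    for template in subtask_prompts:
        prev = call_llm(template.format(prev=prev))
    return prev

final = run_chain([
    "Outline a 500-word post on prompt chaining.",
    "Write the opening section based on: {prev}",
    "Polish the draft: {prev}",
])
```

There is no fixed number of links in the chain, as noted above; in a sketch like this you would add or remove subtask prompts, test each one, and then evaluate `final` against the original objective.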
00:22:52 Speaker_01
Yeah. Another thing you can do is you can even tell the LLM itself to respond with steps. People like to use double pipe syntax or double hash, whatever. It's just a way of telling the computer to separate ideas, basically.
00:23:08 Speaker_01
But you can say, respond, but respond in this way. Respond with this in step one, and then respond with this in step two, and then respond with this in step three. You can even say, tell me this is step one, step two, step three.
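The delimiter trick can be sketched like this; the canned reply stands in for what a model would plausibly return when given the instruction.

```python
# Asking the model to separate its steps with "||" makes the reply easy to
# split programmatically. The reply here is canned for illustration.

def call_llm(prompt: str) -> str:
    # A real model, given the instruction below, might return something like:
    return "Step 1: outline||Step 2: draft||Step 3: edit"

reply = call_llm(
    "List the steps to write a blog post. "
    "Separate each step with '||' so I can parse them."
)
steps = [s.strip() for s in reply.split("||")]
```

Each element of `steps` can then drive the next prompt in the chain, one subtask at a time.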
00:23:20 Speaker_01
Whatever it is you want to do. It's really kind of a powerful idea, even though it's kind of a simple idea, just like a prompt pattern that we talked about in our last podcast.
00:23:30 Speaker_01
A simple idea is just giving you a format, a recipe to write it so that you don't forget anything. Prompt chaining is just a way of dealing with the fact that we can have a back and forth with the LLM.
00:23:41 Speaker_01
It's not just about, you know, it got my answer wrong. I want another answer. That's certainly one way of doing it. But it's also like, hey, let's start refining it. Let's work together, me and the buddy, me and my LLM team here.
00:23:52 Speaker_01
Let's work together on a sort of problem and hopefully come to an answer. And I think this is sort of what's mind blowing in general. I think my last sort of general thought about this is that, you know, we've been covering AI since 2017.
00:24:04 Speaker_01
We've been doing well over 400 podcasts here on the AI Today. And it's like the pace keeps accelerating in terms of what's coming out. And you might think that after all these years, we've kind of seen it all, and maybe you'd be jaded.
00:24:17 Speaker_01
You're like, yeah, yeah, I saw that years ago. That is not the case. It's almost strange that I'm kind of almost more jazzed up about AI than we were when the promise was autonomous vehicles and image recognition was like the big thing.
00:24:31 Speaker_01
Now it's like Gen AI, which may seem to be pedestrian, everybody's doing it. But this is the most impactful aspect of AI that I think I've ever seen in terms of what it may actually do to actually change people's lives.
00:24:45 Speaker_01
We had been talking about this all the time. We're going to have to redo our AI-enabled vision of the future again, because I have this opinion that we talked about these four aspects.
00:24:54 Speaker_01
I think generative AI has impacted everything from helping you learn how to play music better to anything, all the stuff we talked about. So we will definitely do that again. But I, but I, we are definitely energized by this.
00:25:08 Speaker_01
And I think, you know, taking the tools of, even if it's just generative AI, you don't even know how to do any of the other kinds of AI. Taking that, maximizing this will have transformative impact for you as a person for sure.
00:25:22 Speaker_01
And for you as an organization and whatever else we're doing. And I don't know, future is going to be interesting.
00:25:30 Speaker_02
Yeah, we definitely agree. And that's why we said that we would re-look at our AI-enabled vision of the future, because as we continue to have these conversations, it keeps coming up. So stay subscribed so you can hear that upcoming episode.
00:25:44 Speaker_02
And also reach out to us with your thoughts. We'd love to hear what your idea of an AI-enabled vision of the future is.
00:25:50 Speaker_02
So you can reach out to us by emailing us at info at Cognilytica.com, reaching out to us on LinkedIn at Cognilytica, or personally as well. Like this episode and want to hear more?
00:26:00 Speaker_02
With hundreds of episodes and over 3 million downloads, check out more AI Today podcasts at aitoday.live.
00:26:07 Speaker_02
Make sure to subscribe to AI Today if you haven't already on Apple Podcasts, Spotify, Stitcher, Google, Amazon, or your favorite podcast platform. Want to dive deeper and get resources to drive your AI efforts further?
00:26:21 Speaker_02
We've put together a carefully curated collection of resources and tools. Handcrafted for you, our listeners, to expand your knowledge, dive deeper into the world of AI, and provide you with the essential resources you need.
00:26:33 Speaker_02
Check it out at aitoday.live slash list. This sound recording and its contents are copyright by Cognilytica. All rights reserved. Music by Matsu Gravas. As always, thanks for listening to AI Today, and we'll catch you at the next podcast.