Ethical AI and Project Management: Interview with Vince Lynch, CEO of IV.AI [AI Today Podcast]
Episode: Ethical AI and Project Management: Interview with Vince Lynch, CEO of IV.AI [AI Today Podcast]
Author: AI & Data Today
Duration: 00:47:34
Episode Shownotes
This episode is the audio recording of the discussion moderated by Kathleen Walch at PMI’s Global Summit 2024 event. This event took place in LA on September 20th. On this panel we get insights from Vince Lynch, who is CEO of IV.AI. The discussion starts off with Vince sharing some of the biggest challenges he’s seen with managing AI-driven projects.
Full Transcript
00:00:00 Speaker_06
The AI Today podcast, powered by PMI, cuts through the hype and noise to identify what is really happening now in the world of artificial intelligence.
00:00:09 Speaker_06
Learn about emerging AI trends, best practices, and use cases on making AI work for you today with PMI hosts and expert guests.
00:00:21 Speaker_07
Hello, and welcome to the AI Today podcast. I'm your host, Kathleen Walch. And on today's episode, we have a special audio recording of a discussion that I moderated at PMI's Global Summit 2024 event that took place in LA on September 20th.
00:00:36 Speaker_07
On this panel, I get to interview Vince Lynch, who is CEO of iv.ai. I hope that you enjoy the discussion. All right, well, welcome. Thank you all for being here. So excited.
00:00:48 Speaker_07
And if you were at the keynote today, you know the big announcement that Cognilytica was acquired by PMI, and now we have joined the PMI family. So I'm super excited, and I'm excited to be here today. Yay! So stay tuned for big things with AI.
00:01:04 Speaker_07
But first, I'd like to just, Vince, have you introduce yourself to the audience and let them know a little bit about what you do.
00:01:10 Speaker_01
Sure. Hi, I'm Vince. I'm the CEO of IV.AI. IV.AI is an AI platform that gets used by a lot of the largest companies in the world.
00:01:20 Speaker_01
And they use it to solve kind of hard business problems that are usually requiring multiple different AI models in the mix, different pieces of software in the mix as well.
00:01:29 Speaker_01
And then a whole bunch of kind of human strategic thought in relation to how you get the most out of the AI. We've been around since 2016, we have offices all over the world, and we are really happy that AI has gone mainstream.
00:01:43 Speaker_01
It's made our lives a lot easier, and now we get to just do fun things like this where we talk about AI use cases in relation to all these wonderful things.
00:01:52 Speaker_01
And the stuff we're going to talk about today in relation to social good, the anchorings of it are really true for everything we do as a company.
00:01:59 Speaker_01
So even though we're talking about social good things, a lot of the foundations of that are what is being deployed by some of the largest Fortune 10 companies in the world that are creating foundations that they can build upon with AI that includes these kinds of ways of thinking.
00:02:15 Speaker_07
Yeah, wonderful. So maybe before I get started with my first question so that we can get a better sense of the audience, who here is managing an AI project? OK, a few of you. And who here uses AI tools to help them do their job better? All right, great.
00:02:31 Speaker_07
So as you know, with running and managing AI projects, we always want project success, but sometimes there can be some challenges. So what are some of the biggest challenges that you've encountered in managing AI-driven projects?
00:02:44 Speaker_01
How much time do you have?
00:02:45 Speaker_07
We could spend the entire time. We have 40 minutes.
00:02:48 Speaker_01
Let's talk about all the hard stuff.
00:02:50 Speaker_01
Some of the biggest challenges are kind of classic challenges that are true for any kind of engagement with the business, which is strategic problems, where you aren't sitting there and thinking about the problem that you're solving for as it relates to the way a human can hold the information in their head.
00:03:10 Speaker_01
So if you can't look at the thing that you're building and the data that you're playing with and the model that you're going to be using and what it's likely to be doing, that's one piece of the pie.
00:03:19 Speaker_01
And then it's all the ways that engages with how that will impact the business. What happens to the human systems that are in the loop? What does that mean to all the other digital solutions that are in the loop?
00:03:29 Speaker_01
And so it's like that overall framing of a business model not being close to the nature of the challenge that's happening with all the moving parts. My water, see, my moving part was that it wasn't here on the stand and it moved right off the chair.
00:03:45 Speaker_01
So it's actually helping with what I was saying. Not being close enough to all those moving parts means that you basically end up with huge ripple effects of pain, because AI is a process.
00:03:59 Speaker_01
You know, if you're building something from scratch, you're getting the data, you're training the model, you're seeing how the model performs, you're testing the model, you're then testing it in relation to all these other pieces, and then maybe you're putting it out in the wild.
00:04:11 Speaker_01
All of those are months of engagement, potentially, if not quarters and years. So if you get it wrong in relation to the premise, in relation to the team that's building the model and training it, then you have a longer ripple.
00:04:24 Speaker_01
And it just creates a lot more pain. So it just, it kind of amplifies how important it is to have a really good strategic understanding of the entire challenge. And when that goes wrong, the whole thing goes wrong. And often that's what's happening.
00:04:39 Speaker_01
It's not being communicated well enough to the team that's working on the AI.
00:04:43 Speaker_01
There's feedback loops that are not including enough of the stakeholders to be able to really refine what's going on with the different phases of production that can be problematic. I think those are always what it stems from, in my opinion.
00:04:57 Speaker_07
Yeah, we always say think big, start small, and iterate often. And I think that a lot of times, especially project professionals can feel this. Maybe you don't think big enough, or you're not starting small enough with your team.
00:05:10 Speaker_07
And then scope creep happens. And then before you know it, you're spending way too long on each iteration, and you're not actually seeing results that you need. So there's a number of common reasons why AI projects fail. Data is a big, big part of it.
00:05:24 Speaker_07
So we have data quality and data quantity. Do we have good data and do we have enough of it or do we have too much of it? So these are really important issues. And then a few years ago now, we saw, OK, can I use AI?
00:05:38 Speaker_07
And how is this going to help my organization? And I think that that problem has been answered. People are saying, OK, I see the benefit in this. So now we're taking this approach of, OK, how do we make trustworthy, ethical, and responsible AI systems?
00:05:52 Speaker_07
Do we want a trustworthy AI framework in place so that we make sure that people actually trust what we're building? Because at the end of the day, if people don't trust the solution, they're not going to use it.
00:06:03 Speaker_07
So how can project managers ensure that AI is used ethically and responsibly in their organizations?
00:06:10 Speaker_01
Such a good question. It's such a hard problem to solve that we actually like went and tackled this with a whole bunch of other data scientists about a year and a half ago.
00:06:20 Speaker_01
So I'm on the board of a not-for-profit called the World Ethical Data Forum and it's basically 60,000 data scientists around the world that all collaborate on different things and talk about data ethics and are really kind of trying to make sure that data ethics are at the heart of the conversation with data scientists.
00:06:36 Speaker_01
And I was seeing AI go mainstream, and like ChatGPT had just come out, everybody's getting super frothy and kind of running with the thing, and like none of the, you know, logistics of how that data is being licensed have been thought about, and then how that relates to usage, and then, you know, copyright infringement and all these things were just starting to come to the fore.
00:06:57 Speaker_01
So I put together a large group of people that all work in AI, that I, you know, knew over the years. And we kind of took this internal framework that we used as a company for building AI and we kind of dumped it out to the group.
00:07:11 Speaker_01
And then we got a whole bunch of folks as well that worked in corporate and then deployed AI inside of companies before to also input. And we created this basically free tool called the Open Standard.
00:07:23 Speaker_01
that now lives out in the world and anybody can have it that wants it. If you go to the World Ethical Data Forum, sorry, World Ethical Data Foundation, you can see it there under the open standard.
00:07:33 Speaker_01
And it's basically the three major parts of building an AI model broken down, with questions you should be asking yourself in relation to each of those parts. So the first part being training a model.
00:07:45 Speaker_01
So it means you have to get data and where's that data coming from, where's it been sourced from, who's touched that data, what went into getting that data, how can you better understand it, how can you test it, how can you figure out what's going on with it.
00:07:57 Speaker_01
Who are the people that informed that that data should be used? Do you need all that data? Is there any PII information in there? All the stuff that could be happening around that part of the data collection process.
00:08:07 Speaker_01
And that is also part of the training process of like, if it's going into the model, how is it training it? What's going to happen there with the training phase?
00:08:15 Speaker_01
And then the actual build phase as well, which is like, am I using an off-the-shelf model that's been created by someone else?
00:08:22 Speaker_01
It's an open source algorithm potentially, or I'm using a third party, or I'm buying something off the shelf, or I'm creating something from scratch. All the different questions that you should be asking yourself at that phase as well.
00:08:32 Speaker_01
And then finally, the output phase. Don't just plug it into the world. Think about the different ways you can test it before you get it out there. Think about all the impacts it can make.
00:08:43 Speaker_01
Think about the audience that it's changing potentially by this AI telling them to do things. People think an AI is like a smart thing they can trust the information from, which is completely not true.
00:08:53 Speaker_01
It's like you have to really think about all the bias that can creep into that model process that can then be thought of as this like, you know, great poobah communicating things through at the end, which is really really potentially problematic.
00:09:06 Speaker_01
So you have to be considering that testing for that. So that's the process that we put out there. And then on top of that, it's the multiple people that are building.
00:09:15 Speaker_01
Because the bias part is such a huge problem, and what happens often is AI models will amplify bias, and it will creep in, and it will live in the model, and you won't even realize it's there because it came in from some of the, either the data going in or the people that are working on it, and then it ends up showing up as a thing in the world, and it creates even more problems.
00:09:33 Speaker_01
You can see this with social media when you look at recommendation engines showing you things to watch on social or whatever, showing you things to watch on TikTok, things to see on Twitter.
00:09:43 Speaker_01
All of that is connected to the bias of the recommendation model that's trying to do some mechanism, like keep you on that platform longer.
00:09:50 Speaker_01
So it has created polarization on our planet that nobody really talks about because it iterates for time spent. And time spent is often related to shock or related to disagreement.
00:10:01 Speaker_01
So it's forcing people into these rabbit holes and reinforcing their belief systems because it's only showing them things that relate to some bias that they have. So it then amplifies the bias and creates this crazy feedback loop.
00:10:11 Speaker_01
So it is a really, really important piece that is not talked about enough. And obviously, at the scale of social media, we can see the problems and it's impacting our society. In relation to a company, those problems are there too.
00:10:23 Speaker_01
In relation to a person who's building a startup, those problems are there too. So, that's the reason that we created this thing and that's why I think it's really important that everybody has access to it.
00:10:33 Speaker_01
And it's something that I think should be a free resource that is just out there in the world. And that was, you know, our kind of approach to it.
00:10:40 Speaker_01
The final bit there is in the build process. Like, it was like, we're putting this thing together, AI became super frothy. I've been working in the space for like 10 years. I was like, oh my God, everything's happening all at once.
00:10:52 Speaker_01
It's like, how do you, how do you like break it down into a system? So I thought about like me, we, and it.
00:10:58 Speaker_01
So me is like I'm the builder, I'm creating the thing, I'm responsible for my own things, what things am I asking myself as I go through the process?
00:11:06 Speaker_01
And then we is the team that should be working on the modeling process to reduce the biases there by having a larger group.
00:11:13 Speaker_01
And then it is the thing itself and what impact that thing is making and what kind of things live inside of that model that are creating this output that's scaling through.
00:11:22 Speaker_01
So that's kind of the thing with the open standard, and I think it's pretty cool. And we had a whole bunch of lawyers kind of get involved in this as well, and people from different universities.
00:11:31 Speaker_01
So it's not like an end-all be-all list, but it's a super good spot to start.
00:11:36 Speaker_07
Yeah, I like that. Me, we, it. Also, we say that your framework should be open. You should be bringing in multiple stakeholders from your organization.
00:11:45 Speaker_07
This should not be shelfware, which I think a lot of times people build something and say, oh, this is great. And now they put it away.
00:11:51 Speaker_07
It's something that you should be revisiting quite often and make sure that you have engagement from multiple groups. And then how much do you want to share externally, which there should be a strategy around that.
00:12:03 Speaker_07
But then this should be something that's shared widely internally. because people need to know exactly what's going on, all of the policies.
00:12:10 Speaker_07
So, Cognilytica has a framework as well, a trustworthy AI framework, and we go through five different layers and a series of questions that you should be asking in each layer.
00:12:17 Speaker_07
So, we start with ethical AI and a series of questions: do you know right from wrong? Then we have responsible AI: just because you can do it, should you do it? And then we have transparent AI and governed AI and then explainable AI.
00:12:30 Speaker_07
And this really comes into play with, OK, you know, everybody loves deep learning, but do we always have to have deep learning? Maybe there can be an algorithm that's not that black box algorithm that we can use for a solution.
00:12:42 Speaker_07
And so we need to be having these conversations and having all stakeholders and people from all different departments, so legal, but also have people from marketing and communications, and have the line of business there as well.
00:12:54 Speaker_07
So it's nice to hear, you know, other people are really focusing on this now. And when it comes to AI, there's so many stories out in the media, always about the bad of AI, right?
00:13:05 Speaker_07
How we have AI failures, or how we have deepfakes, or how people can no longer believe what they read, hear, or see. But AI can be used for good as well.
00:13:17 Speaker_07
And those stories aren't always covered, because I guess they're not clickbait, right? They don't keep you on the platform. So can you share an example where AI was used for good, and what role project management played in that success?
00:13:30 Speaker_01
Yeah, absolutely. And I love that your framing of your framework sounds wonderful as well. I think it's like the drilling into the different use cases.
00:13:39 Speaker_01
Actually, this use case I'll talk to kind of speaks to what you just said in relation to, you know, you don't always throw deep learning at the problem. You don't always throw a generative large language model at a problem.
00:13:50 Speaker_01
And if you only throw a generative model or something at the problem, then it's really hard to unpack what that model is actually doing. So having these additional approaches that are there can be really helpful.
00:14:00 Speaker_01
So the example I'll give is in relation to the work we did with this group called Sustain Chain. So it's a partnership between the US Coalition of Sustainability plus the United Nations. And basically, it's a platform that's really brilliant.
00:14:14 Speaker_01
It's putting together all of these different sustainability people in one spot. And it's basically kind of like, a really much more targeted LinkedIn for sustainability. It has projects on there of what other people are working on.
00:14:27 Speaker_01
You can find experts that are working in the field. There's a whole bunch of feeds in relation to what's going on with that space.
00:14:33 Speaker_01
It's this really quite large and encompassing way of approaching sustainability problems by bringing the humans in the loop into this platform. So it's a platform play. Obviously, that creates a whole bunch of need for data.
00:14:46 Speaker_01
And this goes back to what I was saying about the portability and different ways of approaching a model. So we trained an AI on all of the ESG reports for all of the public companies in the world. And going back, I think, six years.
00:14:59 Speaker_01
So through that, we had the E part of ESG, which is environment. That part was really good at understanding environmental challenges. And starting with unsupervised understanding of that.
00:15:10 Speaker_01
So figuring out, you let the model show you what stuff exists in the space of environment. What's happening with this sector? Show me the clusters of information that group together. What are the big things that happen in this thing?
00:15:21 Speaker_01
And then we ended up with the supervised model that was digging into the specifics of those things that lived inside of this big unstructured space. So now we have this model that's really good at classifying things that happen in sustainability.
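To make that unsupervised-then-supervised flow concrete, here is a minimal Python sketch; the report snippets, cluster count, labels, and scikit-learn choices are illustrative assumptions, not IV.AI's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Placeholder snippets standing in for years of ESG filings.
reports = [
    "reduced carbon emissions across the logistics fleet",
    "solar installations cut facility energy use",
    "new water recycling program at three plants",
    "board diversity targets and governance reforms",
]

vec = TfidfVectorizer()
X = vec.fit_transform(reports)

# Unsupervised pass: let the model show you what clusters of
# information group together in the unstructured space.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("discovered clusters:", clusters)

# After humans inspect and name those clusters, the names become
# labels for a supervised classifier that digs into the specifics.
labels = ["emissions", "emissions", "water", "governance"]  # human-reviewed
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(vec.transform(["wind power for warehouses"])))
```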
00:15:33 Speaker_01
We used it on our platform for these public companies, for ESG, and it was really good. We were able to expose it through, and we're getting like 95 to 99% accuracy. Very, very high.
00:15:43 Speaker_01
And that's a very good number in the AI space, to get to that level of accuracy in relation to the machine understanding a concept. And the thing that was great about it is it's also portable.
00:15:53 Speaker_01
So we're able to take it from that ESG model, making sense of corporate data, and we're able to give it to them. So they just plug it in, and they just send people that are engaging in their platform.
00:16:03 Speaker_01
I think when they're logging in and when they're building a project, it pings our model and then sends back to them the classifications of the thing that it's working on.
00:16:12 Speaker_01
So now everyone that's going into their product, Sustain Chain, is able to have the classifications that are happening around the work that they're doing so they can find each other, so that they can find projects.
00:16:22 Speaker_01
And it's like a part of the AI mix in relation to the stuff Sustain Chain's doing. They're doing a bunch of their own stuff as well. There's like other partners that are in the mix that are helping out with different parts of the problem.
00:16:32 Speaker_01
But it really speaks to the kind of nature of what you were saying before in relation to deep learning versus supervised models. And it also speaks to
00:16:41 Speaker_01
the way that if you get it right, and you have the right amount of data in there, and you spend enough time focusing on it, you can solve for one problem, and then you can drop into another.
00:16:51 Speaker_01
So it has this scalability effect, and you don't think of it as a thing that lives on a shelf.
00:16:55 Speaker_01
It's like a piece of intelligence that you can plug into and put into different spots, which is really a big part of why this stuff is so important for us all to be paying attention to.
00:17:04 Speaker_07
Yeah, you know, and it's interesting when we think about and talk about AI that, you know, lots of books are written about this, right? Science fiction, Hollywood really does a good job at saying, okay, what's the good? What's the bad for AI?
00:17:19 Speaker_07
Even in the keynote earlier today, we had Skynet, right? So when we think about AI, it creates fears and concerns that other technologies don't, right?
00:17:29 Speaker_07
Like, I don't think people have, like, tons of fears and concerns around mobile, for example. But when it comes to AI, we do. And so fears can be, you know, somewhat irrational. And maybe people think that AI is going to take over the world.
00:17:43 Speaker_07
But we can't diminish those fears because it's something real that people are feeling. And then we have concerns, which are more rational. And so we can say, well, maybe we feel that, you know, it's using too much of my data.
00:17:53 Speaker_07
So we need to have this balance between fears and concerns and how do we address that. So how do you address the fears and concerns associated with AI among project professionals?
00:18:03 Speaker_01
I think, you know, fears and concerns are healthy because they give you anchor points to look at. So it's like, well, what is my fear around this thing? Well, what is driving that fear?
00:18:14 Speaker_01
If I take the things that are driving that fear, if I break it down into the different pillars of that fear, there's maybe some useful pieces in there.
00:18:20 Speaker_01
So I think it's very healthy and wise to be concerned and fearful about a thing that you turn on and then does a bunch of stuff for you. The result of that can be really monumental.
00:18:36 Speaker_01
And do you want it to be a monument of something great that happened or a monument of something horrible that happened that lives in the world forever? So it's really important to approach it with a healthy dose of concern.
00:18:49 Speaker_01
But those concerns can be alleviated by structure. And those structures are really important for people that are building. And I think that by following the structures, like I was talking about the open standard, like the framework that you mentioned,
00:19:02 Speaker_01
taking other PM frameworks that you know work in relation to how you dissect this thing that's moving, I tend to think of AI like a human team.
00:19:11 Speaker_01
So it's like you build a model that's doing this thing, and it's like, cool, that's great, but I can't just leave it there and just not be thinking about all the moving parts or what it's doing or what it's good at or what it's bad at or the additional resources we need to give it.
00:19:23 Speaker_01
It is very kind of similar in the same way. And when you create an AI model, you are also creating a team of people that are around it.
00:19:31 Speaker_01
I heard earlier that there were challenges or fears with the project management community in relation to replacement of jobs because of AI. And I think actually the opposite is true.
00:19:41 Speaker_01
I think it's gonna create a lot more project management jobs because the complexity that AI creates in the system is actually extraordinary. And so in order to manage that, you need really good PMs that are really close to the problem.
00:19:53 Speaker_01
You need great managers that can kind of stay on top of the moving parts of the thing that's happening there.
00:19:58 Speaker_01
And because of the complexity and because of the liquid nature of the thing, you really need people that are close to the ways that you're testing it, the ways that it's learning, the ways that it's improving, the accuracies it's getting, the changing world in which it lives, because it's a model that lives in the world in time going forward.
00:20:15 Speaker_01
So it's really important to have these things in the mix, and I think that PMs are actually going to be leaned into more and more. The complexity of the things that they're having to hold in their project management is going to increase.
00:20:31 Speaker_01
And it's going to need to allow for these liquid systems, but it's kind of, I don't think that that's where a lot of the job loss is gonna come from when it comes to AI.
00:20:42 Speaker_07
Yeah, well that segues very well into my next question. How do you see the role of project managers evolving as AI becomes more integrated into the business landscape?
00:20:53 Speaker_01
I think that there's, like, the breaking down of the parts of how AI works that creates these human systems that need to take into account the nature of that moving space.
00:21:07 Speaker_01
So like, you know, where before you'd have like a product manager that would be across a product, a tech product.
00:21:13 Speaker_01
Now you need like a product manager across the back end of the tech product, across the front end, maybe across the application layer, and then maybe an overarching product manager that's sitting there as well.
00:21:23 Speaker_01
And you need multiple tiers of systems of people that are engaging with that thing. I think that's an important part of the problem.
00:21:30 Speaker_01
And I think it's interesting that that is also what's happening with the framing of these agent systems that people are deploying. We have these agentic systems where you have a supervisor AI.
00:21:41 Speaker_01
Its job is to watch the other AIs and what the other AIs are doing. These different AI agents that are focused on different parts of the problem of what they're trying to solve for. And they are only focused on that problem that they're solving for.
00:21:53 Speaker_01
And then you have ones that are just like policing the others. It's like these systems just in relation to what the LLMs are doing.
00:21:59 Speaker_01
Because of the nature of hallucinations and because of the complexity of getting high accuracy out of them in relation to a given task that's very specific. Like that's just a whole new set of things that needs to be managed by a human.
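As a rough illustration of the supervisor pattern described here, below is a minimal, self-contained Python sketch; call_llm is a stub so the example runs, and every name in it is a hypothetical stand-in, not any particular agent framework.

```python
# call_llm stands in for whatever model API you use; this stub echoes
# canned text so the sketch runs end to end.
def call_llm(prompt: str) -> str:
    return "PASS" if "Reply PASS" in prompt else f"draft answer for: {prompt}"

def worker_agent(task: str) -> str:
    # Each worker is focused only on the one problem it is solving for.
    return call_llm(f"Solve only this task: {task}")

def supervisor_agent(task: str, answer: str) -> bool:
    # The supervisor's job is to watch the other AIs' outputs.
    verdict = call_llm(
        f"Task: {task}\nAnswer: {answer}\n"
        "Reply PASS if the answer addresses the task, else FAIL."
    )
    return verdict.strip().startswith("PASS")

def run(task: str, max_retries: int = 3) -> str:
    for _ in range(max_retries):
        answer = worker_agent(task)
        if supervisor_agent(task, answer):
            return answer
    # The changing nature of the system means a human stays in the loop.
    raise RuntimeError("supervisor kept rejecting output; escalate to a human")

print(run("summarize this quarter's risk report"))
```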
00:22:13 Speaker_01
And you can't just leave it to itself because the nature of it is changing over time. And then the nature of the value that it's creating is changing over time. So I think that's what's going to happen.
00:22:21 Speaker_01
It's going to create this monster amplification effect on those folks that are the keepers of the keys. And they're going to just have to do more and more.
00:22:29 Speaker_07
Yeah. And even when we talk to project managers, I think that you look at this from two angles, right? How are project managers going to use AI to help them do their job better? And how are project managers going to run and manage AI projects?
00:22:44 Speaker_07
And so it's really exciting times ahead. And at Cognilytica, and now PMI, we have the CPM AI methodology, the Cognitive Project Management for AI methodology, which helps give a step-by-step approach for running and managing AI projects.
00:22:58 Speaker_07
You know, even going back to some of these questions earlier, and we can maybe, you know, talk through some of this and where you've seen challenges. So we always start with business understanding. What problem are we trying to solve?
00:23:09 Speaker_07
And we want to make sure that it's a real business problem. And then if it is, what parts of that need to be solved with AI and what parts of it don't need to be solved with AI? And then we want to be talking about that return on investment.
00:23:20 Speaker_07
So when we had talked earlier about some of the challenges that you see, I think some of the challenges start with just business understanding, identifying what problem we're trying to solve.
00:23:30 Speaker_07
Then once we know that we're solving a real problem, we move to data understanding. So do we have access to the data? What data do we need, the data sources? I know even with your trustworthy framework, you talk about that.
00:23:42 Speaker_07
So maybe can you share some of the challenges specifically that people have and how they're getting past that when it comes to, One, figuring out what problem they're trying to solve, and then two, some of the data challenges.
00:23:54 Speaker_01
Yeah, it's kind of, it's a lot of the things that we were saying before.
00:23:58 Speaker_01
It's just kind of, you know, really anchoring into the source of the data, where it's coming from, how you can check that you have clarity on what's in it, what's in the metadata, whether there's PII, et cetera.
00:24:12 Speaker_01
And then, I mean, it would probably be helpful to talk through a workflow to go into it. Do you have like one in mind? Do you want to talk through a business case or like a social good case or something?
00:24:22 Speaker_07
Like, come up with an example? Well, maybe we should ask the audience if they have an example they want to talk through? Has anybody had a challenge that they've been running into with a project? No? Nobody? Everybody's okay.
00:24:38 Speaker_01
Yeah, that's a great example. And it's also like such a classic problem. So it's kind of: where did the data come from, the source? If you're buying that data, it's third-party data. Where did it originally come from?
00:24:48 Speaker_01
In their terms and conditions of when they collected that data, who did they collect it from? And what were their rights to use in relation to that data? And then what is the quality of that data in relation to the thing that you're trying to solve for?
00:25:00 Speaker_01
When you put that data into your model, and then it makes an impact on the model, if that's good, if you get a good impact from it, you've got to keep paying for that data. Do you have to pay for it forever?
00:25:10 Speaker_01
Does it now inform your model and your models learn from it, and therefore you've broken your terms and conditions with the data that you brought in there? Because you have to pull it out at some point, but the model's already learned.
00:25:18 Speaker_01
So it creates this huge amount of complexity, and that's a great example of having closeness to that part of the problem, the data collection, the data sourcing.
00:25:27 Speaker_01
We have a VP of analytics, I think his title is, that all he does is think about the data that's going in all day long, VP level. And then it's just like working with others in his team. It's just like that data sourcing problem.
00:25:38 Speaker_01
What's happening with that market? How do you make sense of it? It's a monster challenge. And it really speaks to the concerns that should be there as well as the level of closeness you have to have to it.
00:25:49 Speaker_01
and the way that you think strategically about how you're training your model in the first place. So the nice thing that's happening in this large language model world is this RAG architecture thing. You may have heard of it.
00:25:59 Speaker_01
It's basically like you have the LLM. It's been trained on this thing. Somebody else's problem. I'm licensing it, and now I'm just going to do whatever it is around it.
00:26:07 Speaker_01
But if you have a RAG architecture on top, you basically show it, like: here's a bunch of the thing that I think is really important. Learn this thing, and then do this other thing. So it's like, keep referencing this thing that I'm showing you.
00:26:17 Speaker_01
That simplifies it down a lot.
00:26:19 Speaker_01
Like, you're not getting to really great, like, necessarily repeatable things at scale, but at least in relation to that model at that point in time, you can get some outputs that are really useful, and it can reduce some of that complexity for you. Because then, say you find out the company who sold you the data made a mistake:
00:26:37 Speaker_01
you have to pull that data out of your model. Okay, you just pull it out; you know what it is, because it's actually living on top of this other model.
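A minimal sketch of the RAG idea described here, using TF-IDF retrieval as a stand-in for a real embedding index; the passages, function names, and top-k choice are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Reference passages live outside the model's weights, so a bad batch
# can be pulled out without retraining anything.
passages = [
    "ESG scores are recalculated quarterly.",
    "Our data license with the vendor expires next year.",
    "PII must be stripped before any training run.",
]

vec = TfidfVectorizer().fit(passages)
index = vec.transform(passages)

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank passages by similarity to the query and keep the top k.
    scores = cosine_similarity(vec.transform([query]), index)[0]
    return [passages[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Keep referencing this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How often are ESG scores updated?"))
```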
00:26:44 Speaker_07
Great example. I know we say data is the heart of AI, and 80% of AI projects are gonna be your data engineering projects. Data is really important to get right, but also something that's hard, and I think that people just sometimes wanna move forward for whatever reason.
00:27:01 Speaker_07
You're feeling pressure, or upper management is saying, let's just move forward on this AI project, and you don't necessarily have the data in place that you should, and you just move forward, and then you have project failure, and we wonder why.
00:27:12 Speaker_07
That's why we have project managers, to help with all this. So I have one more question, and then we're going to open it up to the audience, because I'm sure many of you have questions.
00:27:20 Speaker_07
So I love this idea of augmented intelligence, where we're not looking to replace the human, but just help the human do their job better with the help of AI.
00:27:29 Speaker_07
And so we've seen this, you know, large language models, for example, we're not going to replace the human with marketing or with writing articles, but we're just going to use it to do our jobs better.
00:27:40 Speaker_07
It's a number of different ways that we can use this. So how are you seeing project managers use AI as an augmented intelligence tool to not replace the work of humans, but just enhance the work of humans?
00:27:52 Speaker_01
Yeah, I'm seeing a lot of people getting benefit from like the cold start problem, which is like allow it to create a first version of a thing and then you have a framework to work with and then you can kind of take that and tweak it.
00:28:02 Speaker_01
So it's like I think that cold start problem is wonderful in relation to the model giving you a head start. That's a big piece of it.
00:28:09 Speaker_01
I mean, for us, we literally will train models against the thing that we are trying to do, and we'll take a bunch of data, put it in, like if we're doing research, we'll grab a whole bunch of data, put it into the model, train the model, see how it performs, then be able to have a much better sense of a much larger piece of information.
00:28:26 Speaker_01
It's gone through the tool. We've even created a product on the back of that, which we are releasing in October: we scraped all the AI companies in the world, put them into the model, got really high accuracy in relation to those AI companies, and then you can search them and better understand what they're doing in relation to each other.
00:28:47 Speaker_01
So it's like, that's a great example of something that started from a need, went into a series of R&D in relation to building something for ourselves, realizing it was useful to others, and then it turned into a product.
00:28:58 Speaker_01
So it's kind of definitely like iterating, like you said before, like small then large.
00:29:05 Speaker_07
Think big, start small, iterate often.
00:29:07 Speaker_01
Think big, start small, iterate often. Great, I need a mnemonic device for that one.
00:29:13 Speaker_07
We need shirts.
00:29:14 Speaker_01
Yes.
00:29:15 Speaker_07
Watch out, next Global Summit, I'm gonna have my shirt. You know, and I think also, if anybody has not gone to the Infinity booth, check it out, because this is a really great example of how we can use AI as an augmented intelligence tool.
00:29:29 Speaker_07
They have tools in there and so project charters, for example, why start from scratch? We say never, never start with a blank page. You don't need to.
00:29:39 Speaker_07
If anybody hasn't taken the prompt engineering e-learning, I highly encourage it because it helps you. So it not only helps to say, okay, how can I use AI? What is it good for? What shouldn't I be using this for?
00:29:52 Speaker_07
But also, how do you use your power skills for AI, and how does AI help you with your power skills?
00:29:58 Speaker_07
So how can we be better communicators and collaborators and critical thinkers? This is really how we can leverage these tools, and that helps us with augmented intelligence. So we'd like to open it up to the audience now if anybody has questions. Otherwise, I have many more for Vince, but we'd like to hear from you. Okay, so does somebody have a mic that they can pass around?
00:30:19 Speaker_07
Yes, we do. It's in the back. Oh, you're loud.
00:30:22 Speaker_00
That voice, wow. Get up here. I would like to speak.
00:30:27 Speaker_05
My question is that AI is learning. Everybody is learning. This one is better, and this one is better, and this one is wrong, and there's a lot of repetition. The data is a big issue for us.
00:31:01 Speaker_05
Everybody's asking us to have this AI in your project management, even when they're not aware. Then I'm coming to the data: one, to decide which AI and to convince them. And then there's data, like this data privacy, data collection in silos, data
00:31:21 Speaker_05
bias, like at collection time, from where it's coming and what is the human bias in it, because that's at the input level. Then coming to the analysis and to the data cleaning and then analyzing all those steps. But it starts from these.
00:31:39 Speaker_05
So is there any specific or smart kind of solution to do that data right from the source
00:31:49 Speaker_05
not to get the results which are mostly different, especially for those projects which are operating in a very fragile environment like a war zone or a conflict-created zone or maybe some disaster-prone areas like earthquake or flooding or some pandemics.
00:32:05 Speaker_05
So in terms of data, if there is data, AI plays very well with it. But how do you make sure that the data coming into the system is clean, that all these things I mentioned are not happening, and that it's, like, verified data?
00:32:24 Speaker_01
So I think it goes to that kind of core question we were talking about in relation to the different systems of the build of an AI model.
00:32:33 Speaker_01
So if your question is around the data that's going into the model, how can we understand the provenance of that data, and then also how it moves throughout the system? This is a great question. I think something that people should be thinking about.
00:32:43 Speaker_01
And one of the ways of thinking about that is you're building a model and working with the data science team where you're warehousing the data, you're training the model, you're seeing the progress of that model, and then you're following it through to the output and the impact it's making and the iterations that you're driving inside of the production of the AI are being tagged and tracked properly so that that data can be pulled out if needed.
00:33:05 Speaker_01
Or you can also see what happens if you pull that data out. Or you can see what happens if you have a new iteration of that data that's just maybe a little bit enriched or something, and how that makes an impact through the chain.
00:33:16 Speaker_01
And through this process of elimination, you can figure out how it fits into the whole mix, if you're in control of the whole flow, if you're training the model, if you're part of it end to end.
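One hedged way to picture the tagging and tracking described above is a simple provenance schema, so a problematic source can be filtered out and the model retrained without it; the field names and records below are illustrative, not a real pipeline.

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    label: str
    source: str   # provenance: vendor, scrape, internal system, etc.
    license: str  # the rights-to-use terms the data arrived under

dataset = [
    Record("emissions fell 12% year over year", "environment", "vendor_a", "paid-annual"),
    Record("new audit committee formed", "governance", "internal", "owned"),
]

def drop_source(data: list[Record], bad_source: str) -> list[Record]:
    # Returns the training set with one source removed, ready to retrain
    # and compare against the previous model's performance.
    return [r for r in data if r.source != bad_source]

clean = drop_source(dataset, "vendor_a")
print(len(dataset), "->", len(clean), "records after pulling vendor_a")
```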
00:33:26 Speaker_01
It's much trickier when you're using a large language model, like you're saying in the beginning of your question, because you don't really have sight on what's going on with that model, and you don't really know what's happened with that data that's going in there.
00:33:36 Speaker_01
And this is part of the reason why these large language model companies have indemnified their users from lawsuits, because they don't really know yet either what's going to happen in relation to the data that's there, but they're saying, if we get sued, we're going to protect you.
00:33:48 Speaker_01
And I don't know exactly what the language is around it, but that's essentially the gist of it. And I think that really speaks exactly to that problem as well. It's such a big problem.
00:33:57 Speaker_01
The biggest companies in the world have had to say, we will indemnify you if you get sued by using our product. That's never happened before. It speaks to the nature and the impact that this stuff can make. So it is a massive part of the challenge.
00:34:11 Speaker_01
And as you're going through the process, you can have different versions of models that are running through that same mix.
00:34:18 Speaker_01
It's called ensemble models, this is one example, where basically they're trained on the same thing, they learn in the same way, but you have two different ways of looking at what the output of the model is, and then how they sit together, or they can train each other.
00:34:30 Speaker_01
So you basically have different iterations, different approaches of machine learning, AI, statistical regression, things you can figure out, how these things are gonna impact across your whole chain.
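A minimal sketch of the ensemble idea just described: two differently shaped models trained on the same data, with their agreement used as a quality signal. The synthetic data and model choices are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in practice this is your task's dataset.
X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two different approaches trained on the same thing.
m1 = LogisticRegression(max_iter=1000).fit(X_train, y_train)
m2 = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Where the two models disagree is a natural place for human review.
p1, p2 = m1.predict(X_test), m2.predict(X_test)
agreement = (p1 == p2).mean()
print(f"models agree on {agreement:.0%} of cases; disagreements get a second look")
```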
00:34:40 Speaker_01
So it's like, don't think of an LLM as your only approach to the problem. Think of it as a huge, amazing tool. And then other things that sit along with it can help reduce some of those blind spots that you're talking about.
00:35:04 Speaker_05
I'm going back to my organization and they're asking me, okay, tell us which one is better. So what key factors should I consider when selecting an AI system, especially for this data?
00:35:16 Speaker_01
Better in relation to what?
00:35:19 Speaker_05
For instance, this big data coming from the field, analyzing different factors. But we have seen, like, these days Power BI is like, okay, that is one of the tools, but like including the AI in it, there are some aspects where they are analyzing it for us and generating reports.
00:35:37 Speaker_05
But is there any other factor we should consider? Or like two or three key factors: okay, if these are there, then this is a strong AI system. Not a specific one, it's okay, but the factors.
00:35:50 Speaker_01
I think the word better is a good word. Quality is another good word. Confidence is another word that gets used in the space as well. What is the confidence of the model? How accurate is it against the thing? How do you test what that accuracy is?
00:36:04 Speaker_01
That open standard thing I was talking about before, there's a whole piece in there around how do you test the model and how it's performing?
00:36:10 Speaker_01
And so the way it's often done is you have a model that creates a bunch of something, and then you have humans that tag that thing that happens and say, this is true, or this was not true, or this is what I believe this thing is.
00:36:23 Speaker_01
And you have multiple verifications of that, and then that's your testing data. And then with that testing data, you can see whether that model is performing well against this specific task in relation to what we want it to do in the world.
00:36:35 Speaker_01
So if you know, I want to plug this model in in relation to speaking to someone about this topic, and I want it to be super accurate in relation to that topic. I want to then look at all the ways that it says stuff about that topic.
00:36:48 Speaker_01
I want to have an output. I want to have a bunch of humans tag a bunch of it, as much as you can, 10% at least. Then you're looking at all the human tags of it. Then you test it to see how performant it is in relation to that concept.
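A small sketch of that human-tagging test loop, with placeholder outputs and reviewer tags standing in for real data; the 10% sample rate follows the figure mentioned here.

```python
import random

# Pretend the model labeled 100 outputs; sample ~10% for human review.
model_outputs = [{"id": i, "model_label": "on_topic"} for i in range(100)]
sample = random.sample(model_outputs, k=10)

# Human reviewers tag the sampled items (placeholder tags here).
human_tags = {item["id"]: "on_topic" for item in sample}

# Accuracy on the human-tagged sample estimates how performant the
# model is against the concept you care about.
correct = sum(item["model_label"] == human_tags[item["id"]] for item in sample)
print(f"accuracy on the human-tagged sample: {correct / len(sample):.0%}")
```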
00:37:00 Speaker_01
It's great at that concept. Awesome. We might be able to trust it in relation to that problem. What happens if it goes rogue? There are different ways of approaching that.
00:37:07 Speaker_01
You can take that testing data and you can train another model on that. And then you can plug that in and you can have that model monitor the LLM.
00:37:17 Speaker_01
So there's like lots of iterations that can sit there in the mix and you have to go to that level of detail if you're trying to discern which model you're wanting to use for a given task.
00:37:26 Speaker_01
And it's like humans in the loop is like a huge part of how you get to that quality.
00:37:31 Speaker_07
Yeah, we say never remove humans from the loop. And also you need to really understand why you want it, right? And so because when you say, well, how do you compare? It's like, well, what are you comparing it on?
00:37:41 Speaker_07
Because I had gotten a question the other day about, well, what AI tool should I be using? And I said, well, honestly, what do you want to use it for? And they go, well, what are some of the good tools you recommend?
00:37:53 Speaker_07
I'm like, well, it depends on what you want it to do. So you need to say, what do you want it to do? Do you have access to that? And is it able to do what you want to do?
00:38:04 Speaker_07
And so as you continue to evaluate tools, because every single day, new tools come out. And so it's really hard to evaluate. Another problem that I've seen, because 80-plus percent of AI projects are failing, especially as we're building these models:
00:38:19 Speaker_07
So if we're out there and we're saying, well, we're a vendor X shop, so you just have to use these tools, Microsoft, for example, or whatever vendor it is that your organization uses.
00:38:29 Speaker_07
They may have really great tools, but it's not a great tool for your problem. And so you need to say, what am I trying to get to? And then what tool out there is going to help? And I always say, right, collaboration, big power skill.
00:38:43 Speaker_07
So talk to others, talk to others in the community, talk to other peers, maybe out at your organization or here, for example, and say, what are you trying to do? And then what tools are they using?
00:38:54 Speaker_07
Because it's constantly going to be changing and evolving. And so to say, well, what's the best tool? That's hard because it's like, well, what's the best for you?
00:39:16 Speaker_08
as the data and the analytics and the end product. So can you speak a little bit to, and you mentioned earlier, what AI can solve versus what AI can't solve, what it should solve versus what it can't solve?
00:39:32 Speaker_07
So I break it down to the seven patterns of AI, which I highly encourage everybody to listen to; on the AI Today podcast, we talk about that. Also some Forbes articles that I've written. And you can connect with me afterwards. I'm happy to share that.
00:39:44 Speaker_07
And that's part of the CPM AI methodology. Because the reason that we even came up with that is AI is an umbrella term. So people go, oh, I'm doing AI. And you're like, OK, well, you may be thinking about autonomous vehicles.
00:39:55 Speaker_07
I'm thinking about an AI-enabled chatbot. They're all different applications.
00:39:59 Speaker_07
And when we're building the AI systems, that means that we're going to have different algorithms that we're going to pick, or different data that needs to go into it, or different return on investment.
00:40:08 Speaker_07
And also AI is not the right solution for every single problem. So knowing the seven patterns of AI says, okay, this is what AI is good at, and if it doesn't fall into one or more of these seven patterns, maybe we shouldn't do it.
00:40:20 Speaker_07
Also, what outcomes do we need? Because Vince said earlier, his model was about 95 to 99 percent accurate. That's amazing. That's not 100 percent. AI is never going to be 100 percent. It's probabilistic, not deterministic.
00:40:35 Speaker_07
So, if you need something done the exact same way every single time, if you need 100 percent accuracy, AI is not going to be the right solution for that. So you need to understand what you want and make sure that you're having those discussions.
00:40:49 Speaker_07
Then if AI is the right solution, okay, now we can look, are we going to build this? Are we going to get a third-party tool that's already been done to solve our problem?
00:40:57 Speaker_07
But until we even know if AI is the right solution, that's where we have to start.
00:41:03 Speaker_01
Does that answer your question?
00:41:05 Speaker_04
Okay. We've got a question here.
00:41:08 Speaker_01
No, I think that's right. The only thing I would say in relation to the probabilistic thing, and this shocked me, is that AI is getting to 100% at math, like K-through-12 problems, like STEM problems for kids, which is great.
00:41:26 Speaker_01
Google built this model that my friend helped build there. It's basically like, it's getting like 100% accuracy on STEM stuff. Because it's structured answers, because it's a math problem, it's a science problem.
00:41:38 Speaker_01
Business problems never, unfortunately, have that kind of structure. So yeah, it's highly unlikely you'll get to 100%.
00:41:44 Speaker_01
But if you get to 99%, and that 1%, it doesn't say something horrible, it just can't answer the question, then it's great. And then you just have to bring a human in the loop. Or it gets to 78%, and then you have
00:41:57 Speaker_01
you know, human in the loop for the other 22, and that's cool too. You can just, like, find ways of making it work and iterating and then building human things, the things that the human is much better at, into that 22% that's left over.
00:42:10 Speaker_01
So it's like they're getting more time to do something much better and take it to a new level because they're freed up to do that.
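A minimal sketch of that human-in-the-loop split: trust the model above a confidence threshold and route everything else to a person. The 0.78 cutoff echoes the figure above but is purely illustrative; in practice it is tuned per task.

```python
THRESHOLD = 0.78  # illustrative cutoff, tuned per task in practice

def route(item: str, confidence: float) -> str:
    # Above the threshold the model's answer ships; below it, a human
    # takes over, which is where their judgment adds the most value.
    if confidence >= THRESHOLD:
        return f"auto-handled: {item}"
    return f"queued for human review: {item}"

print(route("ticket #1: routine refund", 0.93))
print(route("ticket #2: ambiguous complaint", 0.41))
```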
00:42:17 Speaker_03
Hey, a comment and actually three questions, but they're not that deep. So my comment is, thank you so much for creating all the podcasts and helping drive the change that's needed. Thank you so much for doing that.
00:42:31 Speaker_03
And my questions, I don't know if you're the right... right people to answer these questions, but they're related since you brought up PMI Infinity.
00:42:39 Speaker_03
So I heard today during the keynote that it was going to be 2.0 was going to be released by the end of the month, but I also thought I heard today that 2.0 was released today. That's my first question. Can you clarify that?
00:42:52 Speaker_03
The second question is, do you have any insight into the usage? Is it growing? Do you have any insight into that? And then my third question is, do you know if there's a new marketing campaign coming out from PMI to talk about this?
00:43:09 Speaker_03
So did you get all three of those?
00:43:11 Speaker_07
Yes, I did. OK, so by show of hands, who here has used Infinity? Oh, okay, great. So yes, usage is going up and we're going to continue to hope that usage goes up. And what'd you say, version two?
00:43:26 Speaker_07
Yeah, version two, it was announced that it'll be out by end of September. So yes. And then is there a marketing campaign? We should probably talk to marketing about that.
00:43:38 Speaker_00
It would be silly if they didn't. It sounds like they should talk about that. That's like pretty great.
00:43:44 Speaker_07
But we could connect after as well. Okay. We're being told we have to wrap up. Does anybody have, like, one? Okay. It better not be three parts. Is it one part? Okay.
00:43:59 Speaker_04
Thank you so much for the session. It's really informative and insightful. My question is, and it's with high optimism, as you said, that there'll be next generations of PMs that are needed to manage AI projects.
00:44:13 Speaker_04
What skill sets do you think that will be key for such PMs?
00:44:19 Speaker_07
who are running and managing AI projects?
00:44:21 Speaker_04
Yes.
00:44:22 Speaker_07
Well, plug for CPM AI certification. So I definitely think that everybody should get CPM AI certified now that it is part of PMI as well. Because you need to know how, right? These are data projects.
00:44:35 Speaker_07
So actually, Vince and I were talking earlier, and we were saying, Well, how are people running and managing AI projects? And unfortunately, people are not doing it with a step-by-step approach.
00:44:45 Speaker_07
So some people have told me that they're using the scientific method. And I said, no, you are not. But they wanted an answer. So I think that those skill sets are needed.
00:44:53 Speaker_07
And then as a practitioner, I think maybe you can share some of the skill sets that you see as well.
00:44:59 Speaker_01
I think having closeness to the whole of the picture of the thing and being able to hold, like, a lot of moving parts in your head is going to be a skill that becomes more and more important.
00:45:09 Speaker_01
And being okay with not knowing all the stuff is important too, where you have enough of a scope of the things that are happening to be able to understand how they're impacting the overall system that's at play from a project management perspective. That is really key.
00:45:25 Speaker_01
And then being able to like remember it
00:45:27 Speaker_01
so that when things are changing and, like, you're getting, like, five minutes of an update in a meeting with the group team that are working on the problem, you're able to track it, tag it, hold on to it, because it impacts your pod in relation to what you're doing.
00:45:42 Speaker_01
So I think that kind of like, you know, very kind of like a
00:45:46 Speaker_01
I don't even know what you call that, like holding a lot of space in your head at once in relation to these moving parts, but being okay with not having to have the depth of clarity in relation to it.
00:45:56 Speaker_01
Sometimes I find with PMs that are working on a problem, they want to go so deep because they're that kind of person, they're so good at that.
00:46:01 Speaker_01
But then, like, with so many moving parts in relation to this one model that's making this huge amount of impact, you've got to be okay with not going super deep into all of it and burning all your time, but going deep enough to understand the impact that it could make and how it sits in the system. And then in relation to your thread, you go super deep and you're the owner of that thing, and you have enough of a closeness to the other moving parts that it's okay that you're not the ultimate source of knowledge in relation to it.
00:46:29 Speaker_07
And then my one final note, if you have not visited the Infinity booth and given them your feedback, please do. All right. Thank you, everybody.
00:46:37 Speaker_02
Thank you. Keep the conversation going and keep the applause going as well. Thank you so much, guys. That was a terrific panel.
00:46:44 Speaker_07
Like this episode and want to hear more? With hundreds of episodes and over 3 million downloads, check out more AI Today podcasts at AIToday.live.
00:46:54 Speaker_07
Make sure to subscribe to AI Today if you haven't already on Apple Podcasts, Spotify, Stitcher, Google, Amazon, or your favorite podcast platform. Want to dive deeper and get resources to drive your AI efforts further?
00:47:07 Speaker_07
We've put together a carefully curated collection of resources and tools. Handcrafted for you, our listeners, to expand your knowledge, dive deeper into the world of AI, and provide you with the essential resources you need.
00:47:19 Speaker_07
Check it out at aitoday.live slash list. Music by Matsu Gravas. As always, thanks for listening to AI Today, and we'll catch you at the next podcast.