Let's go. You're listening to Making Data Simple, where we make the world of data effortless, relevant, and yes, even fun. Podcast listeners, welcome back. Al Martin here. Hope everything is going extremely well.
We're gonna jump right in, have some fun today. I have Philip Swan, who is a dynamic, forward-thinking leader renowned for turning complex challenges into strategic opportunities.
By way of role, he is the chief product and go-to-market officer at the AI Solution Group.
He blends customer-centered go-to-market strategies with cutting-edge product development, leadership, partnerships, and he does this by integrating sales, marketing, product, operations, business growth, all that put together. Welcome, Philip.
Thank you for being here on our humble show. I'm glad that you were able to find us.
Oh, thank you. I really appreciate the opportunity to meet you and your audience, so thank you for having me.
Yeah, we'll have some fun. We'll have some fun. By the way, I like to go everywhere, so we'll ask all kinds of different questions. But the first question I ask is always the same: introduce yourself, tell us who you are, and what essentially brings you to us today.
Great. Well, thank you very much for the opportunity. So my background: I started off life as a computer science graduate and engineer, and I built products for various parts of the ecosystem. I started off working for the government building out infrastructure, then went into industry and took my first company public, a company called Wind River Systems. I was part of that team, and I got a taste for business early. I've been an entrepreneur since I was eight years old, and I did my first distribution strategy at nine. It started with a paper route, and I went in and hired four of my friends.
I built out a channel strategy, and I built it at nine years old. Fantastic. And then I sold that and went on to sell baby clothes. At the age of 16, I was going to China, Hong Kong actually, as a 16-year-old, negotiating to buy product from Hong Kong so that I could earn money. I've always been an entrepreneur on that side. But my focus has always been technology. I love technology, and I love solving complex problems.
So what do you mean? You hired your friends for your paper route? I mean, like, so you could sit at home and watch and send them out so they could get the job done? Or how does that work?
No, I did the work too. I added more routes to it, and I basically took a little cut off the top for bringing them that business. It was a tiny little amount, but when you have five friends, it all added up.
The driving force in all of this was me buying a road bike as a kid, because my parents never gave me a cent of pocket money or allowance in my life. It was always about building my own responsibilities. They were really good at instilling that in me.
Driving customer solutions has been with me from the earliest age. Did you appreciate it at the time? Absolutely, I did. Really? I really did, because I got to feel the value in, hey, I bought something with my own money and I earned it. That really stuck with me.
Fantastic. What is Wind River Systems? So Wind River Systems was a company focused on what's called real-time operating systems. It was back in the day when everybody was building their own custom software, and we had a vision of driving commercial off-the-shelf embedded systems, which ultimately became the Internet of Things. I've been in this deeply embedded world my entire career, building custom solutions to solve client problems.
We took it public and then it got sold.
Congratulations. Have you ever held a corporate job?
Absolutely. After one of my startups, called Telogy Networks, I worked at Texas Instruments, where I ran a very large part of the Texas Instruments business, which was wireless. And then I went to Microsoft. Microsoft recruited me to run three global businesses for them. So yes, I have had the corporate job. I know what it's like to work in the corporate environment. And what's the verdict? The verdict is I love working with enterprises.
I don't like working inside enterprises.
Why is that? Tell me more. You gotta expand on that.
Well, the expansion on that is, I don't deal well with politics, all right? And I like getting stuff done. I don't believe in big teams. I believe in small pods to solve complex problems, because when you have too many voices in the room with different competing agendas, things slow down and things don't seem to happen.
So what I've always really instilled in all of my teams globally is that you've got to eliminate the noise and focus in on what's important for solving the customer problem. The problem when you get into a large enterprise is there's a lot of overhead. Understandably so, but it's really about how you can work within this large enterprise environment, create these small pods, and drive action forward.
And this is where AI culture is going to be changing things as we lead into this new age of
not only generative AI, but also what's about to come through the door with artificial general intelligence, or AGI, which is where humans cannot differentiate whether they're talking to a computer or a human being.
You jumped right into it. All right, fantastic. So before I go into AGI, I will ask you where your head's at on AGI, but you're the chief product and go-to-market officer at the AI Solution Group.
Is this a company that you co-founded or tell me more there?
Yes, so I'm part of the co-founding team. The company started about five months ago, and we're both a software product and services company. Our exclusive focus is on safe and responsible AI. And what does that mean?
What that means is that you've got the explainability, the transparency, and the observability to have confidence in what the artificial intelligence is actually doing for you. So the problem that we've seen is that companies are not aware of the problems that are about to hit them over the face and over the head. And that is all the regulatory efforts that are going on across the globe with respect to AI. And we are very much focused on this.
So I helped co-found this team. We all have our own respective superpowers. And our goal is really to help large enterprises, Fortune 500s and above, to bring AI solutions to the market responsibly. Now, what does that mean?
I'm sorry to just run off here. No, it's all right. Keep going. But the important thing here is to notice that 87% of AI and data projects do not see the light of day. They do not go into production. And that's a real problem. And the real issue that we have recognized is the disconnect between the business and the CIOs in the company. Because CIOs and CTOs are not generally pro-artificial intelligence.
Okay, there's a lot to go through with that. But to back up: I think I got the premise, and I get the issues around responsible AI, but the AI Solution Group, what are your use cases, or the go-to-market that you're driving? Is it just around safe and responsible AI? Start, finish, done? Ultimately, yes, it will be, but in the short term, no. So our use cases initially focus on three key areas. Manufacturing, because there's a massive problem in asset management and inventory management.
Safety and security on manufacturing floors. That is another big use case for us. The other use cases for us fall into financial services and insurance, so things like fraud detection and know-your-customer type activities. And the third one is pharma and healthcare. We're less focused on that right now; we're primarily focused on manufacturing and FSI as the verticals we're going after. And the reason for that is that's where the low-hanging fruit is today.
So you're saying manufacturing, and I presume the safety and security went along with manufacturing. I had that as a separate one. So, okay. Manufacturing, safety and security, financial services and insurance, and then pharma and healthcare. Those are the three. And those are the three because you see them as the biggest opportunities and use cases in high demand right now. In high demand right now, and where the biggest problem is. We don't look at pain points; we look at migraine-level pain points. And so the problems we're tackling are $100 million-plus problems for the companies we're working with.
Because if we can save them in terms of capital, in their capital-intensive inventory management and asset management, we can also help them with safety on the factory floor. These all combine to deliver really useful and valuable solutions to our customers. Does that make sense? Makes perfect sense.
But to drill in on that, is this essentially putting virtual agents on top of these technologies? Or tell me exactly what AI we're talking about here.
Great question. So we are literally building products on quicksand right now. And let me tell you what I mean by that. We are in a transition phase, because what we've seen with generative AI today pales in comparison to what we're going to see within six months. What I'm referring to is that AGI becomes real within six months.
So what we're building right now for our clients is really focused on agents, just like you said. And what is an agent? An agent is a software entity that does a specialized task, and you will typically have families of agents. So we are literally building agents for our customers, both microservices and agents, that will deliver those solutions. And we're very focused on security and data protection, because that is absolutely critical: data is everything with us.
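To make the "families of agents" idea concrete, here is a minimal sketch. It is illustrative only; the class names, the dispatch scheme, and the fraud-detection example are assumptions for illustration, not the AI Solution Group's actual design.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names and structure are hypothetical.
@dataclass
class Agent:
    """A software entity that performs one specialized task."""
    name: str
    task: callable  # the specialized capability this agent provides

    def run(self, payload):
        return self.task(payload)

@dataclass
class AgentFamily:
    """A family of agents that together deliver a solution."""
    agents: dict = field(default_factory=dict)

    def register(self, agent: Agent):
        self.agents[agent.name] = agent

    def dispatch(self, name: str, payload):
        # Route a request to the specialized agent responsible for it.
        return self.agents[name].run(payload)

# Example: two specialized agents in a hypothetical fraud-detection family.
family = AgentFamily()
family.register(Agent("normalize", lambda txn: {**txn, "amount": round(txn["amount"], 2)}))
family.register(Agent("flag", lambda txn: txn["amount"] > 10_000))

txn = family.dispatch("normalize", {"amount": 12500.456})
print(family.dispatch("flag", txn))  # True for amounts over 10,000
```

In practice each agent would likely wrap a model call or a microservice endpoint rather than a lambda; the point is only the one-task-per-agent decomposition described above.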
So you say AGI in six months.
You know, there's a question that I always ask some of the people I have on the podcast, one of my favorite questions. Maybe I'll ask it to you at the end. In fact, I will, so you get a heads-up. And that is: what's true that nobody believes you on? A couple of times that I've asked that question, I think more than once the answer I've received is that AGI is not going to happen. There are a lot who believe that we're a ways off, one way or another, whether it happens or not. You're saying, no, we'll have it in six months. Can you say more on that?
We have seen through various sources, which are reliable sources. One of our co-founders is the head of OpenAI's developer forum. So, without disclosing any NDAs, we see the direction where things are going, right? We have what's called o1-preview right now, which is the latest model out from OpenAI. That is the model that is training GPT-5, or the code name Orion. Orion will be AGI. They may not call it AGI, but it will be.
The big challenge that they've got is going to be around security. It's going to be around data protection. But we are all of the belief that this will happen within six months. Now, what will happen within three years is something called artificial superintelligence, which takes it to the next level, which is why safe and responsible AI is important right now.
All right. So now I'm sure you've got all the listeners very intrigued. They're leaning in. But what's your definition of AGI then?
My definition of AGI is where I, as a human, could not tell whether I was talking to a computer or a human being. It's the old Alan Turing test, the old Turing test. And I could not tell the difference between a computer and a human when this was shown to me. What was shown to me was an application where I was using it and pretended to be a very angry person on the end of a customer service call, right? And there was cursing involved, there was everything else.
The AGI went into immediate de-escalation mode and problem-solving mode with me right there, and solved the problem in less than a minute. And it was a stupid example, which was, hey, I just bought a lawnmower and the thing doesn't work, right? And I kept using general terms, pretending to be really angry. The other example we used: we had it sing "Happy Birthday" in Mandarin with a northern Chinese accent. One of my business partners' wives is from northern China, so we know that was greater than 99% accurate, and with the local accent. While these are quote-unquote simple examples, they were very real. Very compelling. Very compelling.
So when you started out, there must have been a good reason that you went right into safe and responsible AI. You didn't lead with the use cases like manufacturing, financial services, pharma. You went into safe and responsible AI. There must be a reason for that. So tell me more on that now.
Yes. So the reason for that is there are over 300 pieces of legislation around AI in motion globally today. And it's a quagmire. The EU is further ahead than anywhere else. So there's this thing called the EU AI Act, under which the EU is getting ready to sue companies for up to 7% of their global revenues. They're just itching to go after a large corporation for a multi-billion-dollar fine. And that will happen.
And when that happens, everybody's going to wake up to the fact that they need to actually pay attention to these laws and regulations.
We have on our team a gentleman by the name of Alistair L. Norris, who was Satya Nadella's chief change officer at Microsoft. He is our head of responsible AI, really focused in on the global standards, the global regulations, the global compliance issues. And we have taken this to a new level; very few other people have access to this level of knowledge within one company.
So when we're building product, we're factoring in that safe and responsible layer to ensure that we're giving our clients a green light: when they deploy to market, they're not suddenly going to run afoul of the EU, or California, or any of the other 36 US states that are doing their own pieces of legislation.
We will take care of that for you and ensure you're always in the green.That's what we do.
So is it all about regulation? Is regulation the sole driving factor that you're up against, where the idea is to ensure a company's adherence to that legislation and regulation? Or is there more behind the curtain?
There's a lot more behind this: the governance side. So what we're talking about is the compliance cycle, and what we're also helping our clients with is the governance side. How do you build governance into your company culturally?
So it's not just about technology.Our differentiation is that we bring the strategy, the technology, and the behavior to change your culture because when you talk about an AI culture,
it's radically different from the cultures that are within large enterprises today. And there are so many threats to companies and their existence on this, because you can find, as an example, Samsung's source code on GPT. Why? Because somebody at Samsung basically put the code into GPT and GPT learned it. That's called shadow AI. That is dangerous in and of itself for intellectual property.
When you're challenging the culture of a company, you've got to bring the strategy and the behavior in to make the technological changes that you're trying to make.
So it's governance, it's compliance, it's cultural bias, it's discrimination; it's all of those things that we're really focused in on with the commercially available solutions that are out there today.
We have to take that and build these environments that are safe for the employees, safe for their company and the company brand value, and safe for the company's customers as well, because it just takes one breach
for people to lose confidence in your brand.
Is that more tech or is that more process or change management or governance?
Change management. At the core of this, and you hit the nail right on the head, is a change management exercise. It's changing behavior. It's changing culture. It's changing how you're delivering better solutions and better outcomes for your customers using technology. So technology plays a major part in this, but change management, if anything, underpins absolutely everything.
I'll tell you what's going through my head. When I hear regulation, it's kind of a shame in some sense that we have to have regulation to force some of our behavior. Right now, as I look at it, many of the large language models, as we've talked about before on this podcast, just scrape the internet and everything that goes along with it, including hate speech, including copyrighted material, as if nobody should or is supposed to care.
And I see enterprises, I see individuals using that material as if, yeah, it's not my problem. And I am proud to be part of IBM.
With our large language models, you know, we're really cleaning these models, which takes us longer to go to market, so that we can fully indemnify, knowing that our models don't have the copyrighted material, don't have the hate speech, et cetera.
I guess what I'm trying to say is, it feels like most in the industry don't care right now. And you may be right.
I don't know if you've said this, but it seems like you were alluding to the fact that it's about targeting some of that regulation, because I'm not a regulation guy. I'm not sure that without regulation, people will act or change their behavior.
So maybe it is the catalyst. God, it hurts me to even say that. But what do you think?
I'm with you 100 percent, because when you talk to clients today, safe and responsible AI is not a thing for them. They're trying to figure out how to integrate AI into their work lives.
The problem is with most corporations, there's very little governance, so people are going off on their own, in their own teams.
whether it's using ChatGPT or Perplexity or Pi or whatever solution, they're putting intellectual property out on the ether, right? And training these other large language models with their data. It's only a matter of time before a catastrophe happens. And there are two types of catastrophes: one is a breach, right? And the other one is, I'm going to get a big fine, right? I'm sure there are plenty of others, but these are the core things that are on people's minds.
So it's going to take somebody either getting a big fine or having a large breach for people to start recognizing: we need to have our own clean environments, whether it's on-premise in my own dedicated data centers, or in the cloud in a private data center using a foundation model, depending on what your security posture looks like.
It's: do I prepare for the future now, with a company like the AI Solution Group, where we build it in so that when the catastrophe happens you're already there, you're protected? Or do you wait for the catastrophe to happen, and then you're scrambling and panicking into implementing a solution?
So we're not there yet from a compliance or regulatory standpoint. We're getting ready for when it happens.
What we're selling to our customers today are safe and responsible AI solutions that you can get into production now and be delivering value to your customers, whether it's internal or external, now. And that's where we're at.
So you say that much of what you have, the planning on the regulation side, is change management. Does this involve playbooks that you've learned from experience? And how will your company maintain that differentiation?
I mean, what do you know that nobody else knows?
Well, if I told you that, then... It wouldn't be a secret. We do know something that I'm not telling you and I'm not prepared to tell the audience yet, but it does tie into change management.
Change management is the core component, because we believe the lowest common denominator in change is a conversation.
So you have to start talking to people, whether it's customers or internal stakeholders, to understand: what is the problem, the migraine, that we're trying to solve for? And are we solving it in a way that is compelling for the person, the end user, who is going to be using it? It might be our customer, or it might be somebody internal. And it's got to be in a way that is compelling for them.
And so we're building out all of this knowledge to teach our customers how to fish: how to build their own center of excellence around artificial intelligence internally that will allow them to bring value, both internally and externally, to their customers using artificial intelligence. Does that mean you're more like a consulting firm? You're shaking your head no. It's both. So we're both software and services. We do both. And that is one of our core differentiators: we bring both the management consulting side of things and the product side. So we are building products as well as building solutions.
So we're doing a build-with with our customers, which means the platform we're building, which we will be releasing to market sometime in 2025, is going to be something that is useful, because we're building it with customers today.
But to be fair, Philip, if you're a listener out there, they're saying: all right, Philip comes in, he's selling software and a playbook based on his unique, super-secret experience. Why would I bring Philip and the AI Solution Group in?
What's your two minute pitch to that?
The simple reason is that every single one of us in the company has worked for, and brought product to market within, large enterprises for virtually our whole careers.
And so what we bring together is the different aspects of understanding the internals of the enterprise and working with C-suites. We've all worked for CEOs of multi-billion-dollar companies directly.
And so we understand the problems by talking to our customers, and we know how to solve those problems in a way that allows them to scale. What they're getting from us is product. And what they're getting from us is also advice on how to change their culture internally, not only to support this product, but the future going forward. So we merge both.
Our management consulting is actually a separate group within the AI Solution Group. We have a product group and we have the management consulting group.
And that's how we are able to separate our ability to actually deliver great solutions for our customers.
Yeah. Say more on the product if you could. Well, on the product we're building: we have listened to our customers over the last five months.
And what we have really focused in on is the ability to bring a platform that will effectively become what we intend to be a de facto standard in enterprise-grade AI solutions. It will allow our clients to build their own AI solutions without having to rely on large consulting firms, so they'll be self-sufficient and able to own their own destiny. And so our value is delivering that.
Why isn't Microsoft and IBM sufficient?
We're focused on abstracting the complexity out of the compliance, the safety and the ability to actually build your AI solution that can go into production without any fear of brand value erosion.
That's pretty similar to IBM's strategy, in that we're hybrid cloud. We're trying to rise above in the platform sense, so we could be a competitor, but that's okay. This is your show. It's all good.
On the tooling, I presume it's going to be an AI reference architecture around manufacturing, financial services, or pharma, as you mentioned earlier?
We already have the reference architectures. Those are already there, and we have the ability today to actually deploy them.
What we're building is a platform that will enable customers, your clients, not only to build solutions, but to outsource the development of those solutions simply and easily, without having to go and hire large, expensive consulting firms, because we are able to build that expertise and knowledge right into the platform. I can't give you more detail than that, because I'm dancing around non-disclosures and everything else.
Like you mentioned earlier, how long has the AI Solution Group been in existence? You said like six months or something?
Yes, basically five months. So what we're doing is a build with our customers right now. So we have a whole product architecture. We have all the product features that we have already lined up and prioritized. And now we're building those with our customers, understanding what they need today and what they're going to need in three months and what they're going to need in six months. How did this come about?
How did the AI Solution Group come about? You must have identified this gap and then got together with some colleagues and said, we've got to do this, we've got to do it now.
We did, and we saw the need for safe and responsible AI over a year ago, and we've been talking about it for a while.
We all basically sat down, a little mini-workshop in a hotel here in the Pacific Northwest, and we thrashed it out, and this is what we ended up with. And our clients are biting, which is good. And the reason why clients are biting is they understand the need to future-proof themselves. What's not happening yet is the conversations at the board level, or at the C-suite level about how to integrate these cultures.
So we are ahead of the curve. What we're delivering to customers today is what they want, right? Which is an AI solution that works for them, because they've had so many failed attempts. They want to see whether it's hype or not.
So in a typical case, and we would work with IBM on this as much as we'd work with Microsoft or anybody else, you go in and establish a beachhead, you make that successful, and you expand and expand and expand.
What do you see as the top issue or issues with achieving regulatory compliance today, and/or with just the definition of safe and responsible AI?
The biggest issue is that people don't understand the complexities. And they don't have people on staff who have really dug deep into what's going on in the EU, or into the regulatory landscape in the US. It honestly hasn't become a concern yet, and it won't until somebody gets sued, right? When somebody gets sued by the EU, that's when people will start to really wake up to this being a real issue. Because people right now are rushing to try to differentiate themselves from their competition, and perhaps not doing the safest or most responsible things in order to achieve those goals.
What's the biggest gap, do you think, in terms of achieving those goals?
The biggest gap, honestly, is the cleanliness of data. Whether it's PII data, whether it's inaccurate data, however you want to frame it, the most common problem is that people don't have access to clean data.
Clean data. You're right up my alley now, because I'm a data guy. Describe clean data. How do you know when it's clean? Oh, that's a great question. It's one of those things: you know it when you see it, right? When your gut tells you. Yeah, I don't know that that's going to fly, but I got you. Keep going. That's probably closer to the reality of it.
But the thing is: is your PII data sufficiently obfuscated, right? Is your customer data sufficiently accurate? Is your current information in your ERP or CRM accurate? And we all know that the chances are it's not, right? So there is very much a process that has to go into any AI readiness effort to ensure data cleanliness. And sometimes that leads to a data evacuation exercise. We've seen that happen. A data evacuation exercise?
What we saw with one of our clients was that when we got into the exercise, they suddenly realized that they had data in so many different places that it was costing them so much money.
They did a cost analysis and realized it would be cheaper if they brought it into their own data centers.
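As a toy illustration of the PII-obfuscation check described above, a cleanliness pipeline might pseudonymize identifying fields before any record is used for AI. The field names and the salt handling here are hypothetical, not a description of any specific product.

```python
import hashlib

# Toy sketch: pseudonymize PII fields before data is used for AI.
# Field names and salt handling are illustrative assumptions.
PII_FIELDS = {"name", "email", "phone"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace PII values with salted SHA-256 digests; keep other fields."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:12]  # truncated token, stable for the same input and salt
        else:
            out[key] = value
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "order_total": 42.50}
clean = pseudonymize(record, salt="rotate-me-per-dataset")
print(clean["order_total"])             # non-PII fields pass through unchanged
print(clean["name"] != record["name"])  # PII is tokenized
```

Because the same input and salt always produce the same token, joins across tables still work after obfuscation, which is one reason pseudonymization is often preferred over outright deletion.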
We're hearing this a lot now. I just did a podcast with a gentleman by the name of Ian Smith of Lighthouse Technologies, and he was talking about the repatriation of data, bringing it back into the data center. Do you see it the same way?
I see it the same way, because people are starting to wake up to the fact that a big reason their data isn't clean is that it's spread all over. It's like this Medusa of tentacles, data everywhere, and they have no clear inventory and chain of custody of that data to ensure that it's accurate.
But do you really believe they're going to bring... I mean, we just went through get to cloud, get to cloud, get to cloud. Now you think we're going to revert that trend?
No, I don't think we're going to revert the trend of being in cloud. I think they're going to go private cloud for data, or on-prem for some of it. And the reason they'll go on-prem for some of this is that there's simply not the bandwidth available to interact with the data in the real-time way you need to support some of these AI use cases.
And let me give you a specific example. Say you have a computer vision problem in a factory, and in that factory you have issues around product theft, and issues around safety, the physical safety of human beings getting injured or worse, dying.
There's a lot of data that goes through those cameras on a per-second basis if you're supporting 30 frames per second. How are you going to access that data in real time if you're not on-prem? If you've got a large factory with hundreds of cameras, you're talking tens of thousands of frames per second of video. How do you process those if not locally? And how do you store that if it's not local?
That's one of the problems that we're solving.
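To put rough numbers on that back-of-the-envelope, here is a quick sketch. The camera count, per-frame size, and compression are assumed figures for illustration, not numbers from the episode.

```python
# Back-of-the-envelope video throughput for an on-prem factory deployment.
# All inputs are illustrative assumptions.
cameras = 500           # "hundreds of cameras"
fps = 30                # frames per second per camera
frame_kb = 100          # ~100 KB per compressed 1080p frame (assumed)

frames_per_sec = cameras * fps
mb_per_sec = frames_per_sec * frame_kb / 1024
tb_per_day = mb_per_sec * 86_400 / 1024 / 1024

print(f"{frames_per_sec:,} frames/s")        # 15,000 frames/s
print(f"{mb_per_sec:,.0f} MB/s sustained")   # 1,465 MB/s
print(f"{tb_per_day:,.1f} TB/day of storage")  # 120.7 TB/day
```

Sustained gigabyte-per-second rates and triple-digit terabytes per day are why shipping raw frames to a remote cloud region is often impractical, which is the bandwidth argument made above.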
Luckily, IBM's strategy has been hybrid cloud, and we do have AI solutions on-prem, a clear differentiator. It sounds like your organization is preparing for that just as well. But given what you said, I mean, it's going to be a challenge.
Much of the AI in the industry right now, outside of IBM and yourself, is SaaS only.Is that not going to create major inhibitors to safe and responsible AI in the future, particularly as it relates to data?
No, I don't believe so. And the reason is that we believe we can offer this as a SaaS-like service over time. It will be a hybrid cloud environment; it will be a public-versus-private-cloud discussion, and everything will be as a service. We do see this as being a service.
And the challenge we put to most companies is: this is the real problem, and here's the way we need to solve it. And a lot of this, you know, to protect your data, needs to be either on-prem or in a private cloud.
And the challenge that we're seeing now is that people are really starting to understand how complex their current environment is from a data perspective. We ran into one situation where a client had 780 databases across their entire company, right? It was a massive problem, and it took us two years, this was in our prior experience, to get that situation corrected, because it was so complex. Unwinding that complexity to turn it into something simple was a very difficult exercise. And we've got the patterns down now.
And this is the key, as you know, at IBM, understanding the patterns and being able to repeat those patterns of how we deliver value to our customers.
What scares you most on the opposite side of safe and responsible AI? Obviously that implies very unsafe and irresponsible AI. So there's got to be a major concern, maybe even above and beyond your company. What gives you pause right now? The biggest pause it gives me is: is it good for people? And is it good for the planet? Right?
So everybody worried, when word processors, the old keyboard word processors, came out, you know, about how that was going to change things. We had secretaries back in those days. How was it going to change things? It was about retraining.
This is about retraining and change management, right? So my biggest fear right now is that companies just blindly put out products, like this whole screen-scraping thing Anthropic announced last week.
I immediately saw that and was like, just because you can doesn't mean you should, right? So my biggest fear is companies like Anthropic that are not thinking from an overall safety perspective.
If you release something into the wild like this, you're not thinking privacy, you're not thinking data protection, you're not thinking about the security of your customer. So that's my biggest fear.
My biggest fear is that some of these applications get into the wild and people get hurt physically or emotionally. Because this is going to magnify problems. It's going to magnify propaganda. It's going to magnify bad behavior.
It's going to magnify security risks. AI security is a real thing now. Those are my biggest fears on this, to be quite honest.
And then good for the planet: are we optimizing as much as we can to reduce our reliance on electricity, water, and other resources, so that we leave a legacy for our families in the future?
This reminds me of Jeff Goldblum's character in Jurassic Park, Dr. Malcolm or whatever, when he said, your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.
So you'd say AI is in the same vein as that sentiment there.
Yes, I am. That's my biggest fear. And the biggest flip that I have personally made is to let all of my own biases and my own experiences just flow off of me, because it's now about the art of the possible versus the impossible.
And now, just because it's possible, is it the right thing to do? And that's what we're bringing: that morality, that experience. We're a bunch of older people in the company, so we've seen it. We've been through multiple
technological revolutions, whether it's mobile, whether it was the internet, all of the ones that we've lived through. Now this is the biggest one of our lives.
This is a tectonic shift, and with tectonic shifts comes great change, and with great change comes great responsibility, and we are focusing on that.
Taking this one step deeper, because I know you've mentioned it a lot in what I've read: could you explain shadow AI?
So shadow AI is where you have... I'll go back to my Samsung example. An engineer or engineers somewhere within Samsung had
put their code into ChatGPT for some reason. I don't know what the prompt was, but you could literally go into ChatGPT and find Samsung's source code for the mobile operating system.
Shadow AI is where your intellectual property is now part of that public knowledge base that is the large language model.
And this is another attack surface for companies, because all of a sudden your intellectual property is out in the ether.
Now people are able to understand that intellectual property and come up with strategies to counteract you or to attack you, and that can cause you real damage. So it's the issue where secret knowledge
becomes public knowledge without you realizing it.
Good, good explanation, Philip. So look, what would you leave the listeners with in terms of what they should be thinking about the AI Solution Group and what you have to offer? How would you wrap this all up?
Wrap it all up. Think about your use case. What is your biggest migraine, right? Think of a migraine-level, $100-million-plus problem for you, and ask why it has not succeeded.
Where we can come in and help is with a one-to-two-day workshop where we really work through those use cases and come up with a plan that will get you into production with an AI solution based on that use case. We do this very, very quickly.
Because of our expertise and experience,
we're able to bring together the key stakeholders within the large enterprise to build up that vision for their AI strategy and deploy something into production that gets their minds thinking about the possibilities of real value versus hype in delivering those AI solutions to production.
And that production can be on IBM or Microsoft or anywhere else; it doesn't matter. It's really about that value proposition to the customer.
Sounds terrific. Where can listeners reach you and/or the AI Solution Group?
So the AI Solution Group is very easy: our website is theaisolutiongroup.ai. You can reach me on LinkedIn. My LinkedIn profile is Philip with two L's, P-H-I-L-L-I-P S-W-A-N. You can DM me and I'll always respond.
And my email is Philip with two L's at theaisolutiongroup.ai. Happy to chat at any time. So thank you.
Fantastic. All right, two final questions. First question is, and I already gave you this warning: what's true that no one agrees with you on?
What's true that nobody agrees with me on? What's true is that companies do not care about safe and responsible AI yet, except for very few. That is true.
And what we focus on is education, education, and education, and delivering value to our customers. So we are customer-obsessed. That's what's true, and nobody believes it.
And right now, if I fast-forward six months, I believe it will be a different story.
And through the education you mentioned, you're able to pivot the non-believers to believers?
You've seen success there? We have. Absolutely. And the reason is we deploy successful use cases. That's how you convert them. And we see the process; we see that we're teaching them how to fish.
So they're seeing a whole new world in front of them, which is great.
Last question. So I work out of New York. However, my hometown is Kansas City. I hear you've spent some time in Kansas City. Is that true? Yes, I have.
I used to have an outdoor grilling company, believe it or not, once upon a time. No tech at all. You're an interesting dude. All right. Yeah, that was my only foray into non-tech. And it was my first and my last. I'll just leave it at that.
I've got some scars on my back. I used to go to Kansas City. I used to sponsor the Kansas City Barbecue Society. And I love Kansas City-style barbecue. Same. I know how to do low and slow, man. I just love it. I'm with you.
See, everybody listening: Kansas City-style barbecue, the best. It's low heat and slow cook, right? Whether you're making a brisket or burnt ends, it's all in the cooling. It's not in the cooking. It's all in the cooling.
It's also because we drink a lot here. We just drink all night long while we're just letting it cook.
That is true. That is true. Nothing wrong with that either.
Awesome. Hey, Philip, thank you so much for being here. It's been a pleasure talking with you. Thanks. I really appreciate it. Philip Swan, everybody, and check out the AI Solution Group. Thank you for listening.
If you have any comments, questions, whatever you want to talk about, hit us at almartintalksdata at gmail.com. We'd love to hear from you. Until next time, we'll see you on the podcast.