Agentic Companies and the Rise of AI Employees
February 24, 2026

In this episode, Greg will be joined by Roey Zalta, an AI leader working at the enterprise and government level. We’ll talk about what it actually means for a company to become “agentic,” how AI systems are being deployed in production, and what the real risks and opportunities look like over the next few years. This one moves from technical implementation to big-picture implications.
Well, hello everybody. Welcome back to DayToday with Greg Michaelson. I'm real excited to be joined by Roey Zalta, whom I only met recently, actually, through Pod Stars, right?

Yeah, that's correct. It's a great community, actually. I've been meeting great people from that community, and it's a great way to connect, host a podcast, and learn more about AI and other stuff as well.

Awesome. Well, give us the rundown on you. What's your background? What are you interested in? What are you doing these days?

Well, it's actually pretty interesting.
This week was my first week at Microsoft, actually. I started a new job there.

Okay.

But because I'm pretty new over there, today I'll talk with my hat on as the chief AI officer at PwC Israel. And I think it's really interesting talking about the state of AI, and I mean the state of AI this week, because it's changing so constantly. I mean, this week, I think the hot topic right now is Moltbot, probably. Moltbot, have you heard about that? I mean Clawdbot, Moltbot.

Oh yes, yes, yes.

Yeah, so I think this is the hot topic this week. And the thing I do every morning is spend almost an hour on Twitter getting so much information about what's new in AI right now. I'm not sure it's the best resource, of course, but it's probably the resource where you'll find the most interesting and unusual stuff online about what's new in AI and what people are actually doing. And I have really good, cool use cases to talk with you about, stuff I heard people are doing, and I'll be happy to share some of that with you guys.
Yeah. How has this whole LLM thing impacted the way that PwC did business, or does business?

Well, PwC Israel, specifically PwC Next, actually, the company I worked for, they decided, same as other big corporations, that they want to be agentic companies, which is a pretty big way of saying: we understand that agentic AI is great, it's still early days, but we also understand that in the upcoming years companies will have human employees and AI employees, and they're going to work together. Some stuff the agents are going to do, and some other stuff humans are going to do. And it's a really good idea to actually prepare yourself for that and think about: what can I already automate in my company, how do I prepare for that, and also, how do I treat those agentic employees?

I saw the CEO, I think of McKinsey, came out and said that they've got 20,000 agentic employees right now, and I thought, yeah, that doesn't seem right; maybe that's hype. I don't know, what do you think?
That's actually a lot. I mean, I know PwC, the global company, is really advanced on whatever is related to AI. They actually have something called agent OS, which they've implemented for many companies, like big banks in the US. It's like their corporate version of n8n: how you can create automatic workflows in your company and connect various internal tools. This is something they're really invested in, they think it's really going to have an impact, and I know they have customers that are using it. And it's funny, because when we're talking about PwC, everybody thinks financial company, but the advisory side of PwC is huge. They understood in the early days: we're sitting with most of the companies in the world right now, we have this resource, and we must use it to do advisory. And then from advisory, cloud computing became so popular, so they decided, okay, we're going to do professional services for cloud computing, cybersecurity, and most recently, of course, AI, and right now it's actually agentic AI. They understand that if you can implement agentic workflows with whichever cloud provider you want, you can do amazing things for businesses. And I've been seeing it myself: we've been doing amazing stuff for Israeli companies and the Israeli government. It's been really popular.

I'm curious how you feel about how much of it is hype versus how much of it is legit. Are organizations actually getting value out of this, or is it mostly a science project?

I'm pretty much afraid of the AI bubble.
As everybody probably knows, I think somebody told me that people treated cloud computing as a bubble as well, back in those days, probably ten years ago. But over here, I think the hype is mostly because everybody is talking about it, and the pace of development is really fast. But look at Anthropic's CEO: almost a year ago he said AI is going to write most of the code, and everybody thought that sounded ridiculous. But if I look today, and I talk with so many companies and developers, everybody is using Claude Code or other tools. I can say about myself personally, I haven't written much code myself for more than a year. I mean, I'm not writing code. And from what I understand, big technology companies don't expect you to write code anymore. If you're writing code yourself, you're too slow; you won't make it today. So developers are getting the really big impact from agentic AI. The agentic workforce and its effect that we're talking about, when it comes to financial institutions and the things you encounter in your day-to-day life, that's when it's going to be a really big impact, because developers are always the early adopters of those technologies.

Well, certainly at Zerve we've found huge adoption of agentic coding. Zerve is a coding environment; we do Python and R and SQL coding. It's like an agentic development environment for data. And we discovered when we rolled out our agent that 95% of the code that gets written in Zerve is written by the agent. And that's legit; that's real data, not hype or anything like that. People are going out and discovering that using the agent to help you write code is outrageously more efficient. So I do think you're right that developers are early adopters.
Who's next in terms of adopting this stuff?

In my opinion, the big impact is going to be at the CFO level and with the financial employees; in my opinion, it's going to make a really big impact there. My wife is actually an economist, so I know this field from up close, and there's so much stuff they do there, from data analysis to a lot of repetitive annual things they do all the time that can be automated with agentic AI. But you know what the challenge is, why they're not already using it? Because it requires a lot of business logic, and when it comes to business logic, you need really good context engineering, and in my opinion that only comes when you understand the business and the business logic well. And it's really hard, because I was surprised when I became a developer that there isn't much documentation about development; the developer world lacks documentation. The financial world lacks documentation even more. There's really a big problem with documentation: people are leaving, and it's not an easy thing to do. And there are so many financial workflows that have been broken for years, and they keep being broken. If you add agentic AI to a broken process, it's just going to keep being a broken process. So what I've found when I advise companies is that we first try to understand the business logic, and sometimes we find that the business logic is broken in the way it's working at the moment, so agentic AI won't help here; you need something else. So it's really important to understand the difference between automation and an agentic workflow. They come together, but sometimes automation is enough, and sometimes agentic AI without human interpretation and human consent inside of it is not the great fit, in my opinion.

Yeah. I've never seen an agentic workflow that could work that didn't have a human in the loop. We're not there yet.

We're not. I mean, for developers as well, and I don't think there's any field where the human in the loop can be taken out right now. That's one of the challenges, for sure.

Yeah, the guardrail problem is a real problem. An agent is only as useful as the things that it can do. But as soon as you let it start doing things, then suddenly it can do things, and it can do things you don't want. And so it can get scary.

Yeah, for sure.
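The human-in-the-loop pattern discussed above can be sketched in code. This is a minimal illustration, not any production system: the tool names and risk tiers are hypothetical, and `approve` stands in for whatever operator prompt or review UI a real deployment would use.

```python
# Sketch of a human-in-the-loop gate for agent tool calls.
# Tool names and tiers below are illustrative assumptions.

SAFE_TOOLS = {"search_docs", "read_ticket"}    # run automatically
GATED_TOOLS = {"send_email", "issue_refund"}   # need human sign-off


def run_tool_call(tool: str, args: dict, approve) -> dict:
    """Execute a proposed tool call, pausing for human approval on risky actions.

    `approve` is a callable (e.g. a prompt shown to an operator) that
    returns True/False for a proposed (tool, args) pair.
    """
    if tool in SAFE_TOOLS:
        return {"status": "executed", "tool": tool, "args": args}
    if tool in GATED_TOOLS:
        if approve(tool, args):
            return {"status": "executed", "tool": tool, "args": args}
        return {"status": "rejected", "tool": tool, "args": args}
    # Unknown tools are never executed -- the guardrail point above:
    # the agent can only do what it is explicitly allowed to do.
    return {"status": "blocked", "tool": tool, "args": args}
```

The key design choice is the default: anything not on an allow-list is blocked, so adding a new capability is an explicit decision rather than something the agent can talk itself into.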
So you said you had some use cases that you were interested in talking about. Tell us what you have in mind.

So yeah. I'll start by telling the people listening to us that almost a year ago I started developing the first agentic, multi-agent process of my career, which ended up being one of the first multi-agent systems in Hebrew in Israel, actually, as I found out. It was for a big insurance company here in Israel, and it was a big success, actually, but there were so many challenges because it was early. I mean, it's still early, but controlling a bunch of agents together, answering customers about problems, going to databases, doing RAG, and all those kinds of stuff together, it's really hard to evaluate as well. So I found that evals are a really big challenge but also a really great opportunity for product managers, and that's how I ended up finding out about this thing called the product generalist, which is like a new profession. It says product managers need to be better at AI, but they can really enhance AI projects and make them faster if they get really good at the evaluation part, because evaluation really requires understanding the business and building a good dataset for evaluating agents.

Explain a little bit more about what you mean by evaluation.

Yeah. So if we're building an agentic
system, either one agent or multiple agents, or not an agent at all, just a simple RAG pipeline, we want to have a set of questions and answers that we expect the agent to answer. So let's say we're doing a RAG application, and the RAG application is about answering questions over documents. We need to take those documents and create enough questions for the LLM to answer, and we can evaluate it on those questions in two ways. The first way is human evaluation, where we ask it a bunch of questions, we get the answers, and then the business evaluates each answer as either good or bad. Mostly we go for binary, so good or bad, and they can add comments. That's pretty much always the first step, because we're doing a not-so-large set of questions and we want to get some kind of big picture of where we stand. And when we go for scale, we want automation, and we want a CI/CD around it, so we go for auto-eval. In auto-eval we do a big evaluation with many questions, and we must set the expected answer, which is called the ground truth. Every time the agent answers each question, another LLM will evaluate the answers and score how close each answer is to the ground truth. Then we get a score, mostly in a range from, I don't know, 0 to 100 or 0 to 10, and that gives us some kind of metric of how good our agent is right now. From there we can grow and set a KPI: we want to be at, I don't know, 98%, and we won't release a new agent unless it's above, I don't know, 95%. So producing these kinds of agents is really becoming more like classic machine learning, and the connection with the business is really important here. The business gets their metrics, they know how good the application is, and they feel safer about releasing it to customers.
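The auto-eval loop described above can be sketched in a few lines. This is an illustrative skeleton, not a real framework: the helper names are hypothetical, and `judge` stands in for the LLM-as-judge call, passed in here as a plain function so the structure stays visible (the toy judge below is exact-match only).

```python
# Sketch of a ground-truth auto-eval with a release gate.
# Function and field names are illustrative assumptions.

def auto_eval(agent, dataset, judge):
    """Score an agent against a ground-truth dataset.

    dataset: list of {"question": ..., "ground_truth": ...}
    judge(answer, ground_truth) -> score in [0, 100]; in practice this
    would be another LLM grading closeness to the ground truth.
    Returns the average score across all questions.
    """
    scores = [judge(agent(item["question"]), item["ground_truth"])
              for item in dataset]
    return sum(scores) / len(scores)


def release_gate(score, threshold=95.0):
    """The KPI gate: don't ship a new agent version below the threshold."""
    return score >= threshold


# Toy usage with an exact-match judge standing in for the judge LLM:
dataset = [{"question": "What is the capital of France?", "ground_truth": "Paris"}]
agent = lambda q: "Paris"
judge = lambda ans, gt: 100.0 if ans.strip().lower() == gt.lower() else 0.0
score = auto_eval(agent, dataset, judge)   # 100.0 for this toy agent
shippable = release_gate(score)            # True
```

Run inside CI, this is the "unit testing for agents" idea: every prompt or tool change re-runs the same dataset, and a score below the gate blocks the release.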
So the first step in evaluating these guys is human intervention: you dive in there, read the answers, score them. And the second, at scale, is using a large language model to do that evaluation and scoring.

Yeah.

Is that a training step, or is that happening in real time? How does that work logistically?

We want to do it as many times as possible, because we want to know how each change affects our application and our LLM. We want to make sure that if I edited the prompt a little bit, or I added an extra tool to the agent, I didn't break anything else. So I want to run the same evaluation over and over, and I want to make sure the evaluation stays updated with the data. Let's say I added another tool: I want to evaluate that tool, and I want to ask questions that will evaluate the ability of the agent to call the tool, use the tool, get the actual data, and actually do the retrieval from that data. So you want to do it as many times as possible. In classic programming we have unit testing, in machine learning we have smoke testing, and here we have evals. So it's a must, and it's really an art. There are some really cool people I'm following online who are becoming artists of evaluation, and it will become standard. As AI adoption rises, evals will be the key, in my opinion.

Now, I guess there's two kinds of adoption, right? There's companies: whether they're going to adopt these chatbots that are smart and agentic, that can work with customers and answer their questions and do things for them. But then there's also customers: are customers going to be willing to talk to an AI in order to get stuff done? Right now I try to bypass those things, just personally. Like, if I see that I'm talking to an automated chatbot, I'm like, "Representative, representative, representative," and I try to bypass it. Do you do that?

So I actually tried to go with it.
I mean, there's a really great company here in Israel called Wonderful.

Wonderful?

Yeah, they're called Wonderful AI, and I've talked with them a couple of times. They're amazing, in my opinion, and at the moment they run the LLM customer agents of many big companies in Israel, including phone-call agents. How did I find out about them? I called Maccabi, which is our customer support for healthcare stuff, and the agent answered me. Hebrew is a really hard language to tune well, to get how words are spoken in Hebrew, and the agent spoke amazingly and answered my problem in a really good way. And then, when I had really complicated questions, it took the decision to move my appointment to a human representative, but it answered successfully, and it was really amazing. That's the reason this company is doing so amazingly right now, and all the banks are using it, because they have a fine-tuned Hebrew voice agent, and they're really amazing, a really good combination.

Are these text-based, or are these actual voice conversations over the telephone with an AI?
Yeah, and they're real-time voice agents. And not only that, I know they're using tools under the hood and stuff like that, and they do it really, really well. They do it in real time; there's no delay or anything like that. It feels really natural. And I mean, I'm a technologist, so I like talking with LLMs sometimes. But I understand what you're saying: "I don't want to talk with an LLM, give me a real human, I need the answer." But they're doing it really well, and the escalation is proper. I mean, really good. My claps to this company; they're really good.

That's fantastic, crossing the spoken AI agent with customers.

I mean, yeah, there are some companies who do it really well. I understand it's really difficult for humans to get past "it feels like the LLM won't really help me, my inquiry is too complicated," but I think this is the field where we're going to see LLMs really take over the customer relationship.
The thing that bothers me about those kinds of interactions is when the agent gets it wrong and starts telling me stuff that's not right. If that happens, not only do you have to chase it down, but maybe you've also made bad choices based on what the agent originally suggested. And I've used coding agents enough to know that they sometimes go down a rabbit trail and make some bad choices about, you know, what packages to use or what implementations to try or what coding structures to utilize. So if I'm calling up the airline and I'm trying to change my ticket, I need to be sure that the ticket is accurate, that the times are right, that the seat is right, and all that stuff. I don't know, I'm still a little wary about trusting the agent to put me in the right seat on the right plane and everything.

I understand the concern. And I mentioned earlier the healthcare customer agent, and I bet the company that implemented that solution thought, we're going to implement it and give people health advice; that's even more dangerous. There's risk in every one of those use cases. But I think the risk of not doing it is probably higher for those companies.

For the companies. But what about for the users? What do you think?
In my opinion, it's going to be harder for people who are not tech savvy to trust those solutions. I mean, I believe them most of the time, and I'm not a health expert and not a flight expert, so I might be wrong. I haven't encountered any airline AI solution; it would actually be interesting to see one, but it would really suck if it got it wrong and I got on the wrong flight or something. But, in my opinion... I actually cancelled my flight like a week ago, and I tried to talk with customer support, and it was human customer support, and, I'm sorry, in my opinion they did a really bad job. It was really hard for me to try to reschedule that flight, and I thought to myself, in this scenario I would prefer to talk to an LLM. The human here did not help me. They asked me which specific other dates I would like to reschedule my flight to, and I had to tell them every time, and they had to check if it was possible. For an LLM, I could just say, I don't know, "give me the cheapest flight in March," and it could probably go and do that, or tell me it can't. With humans, it's harder, especially when I'm not a native English speaker and sometimes our customer support is not in Israel.

Yeah, for sure.
Yeah. And I guess we just have to sort of learn the boundaries of what these things can do, right? So, like, if I'm talking to Siri... do you have an iPhone, or are you an Android?

iPhone.

Yeah, my man. So if you're talking to Siri, you know that you can ask her to set an alarm or start a timer or turn up your brightness or turn down the volume. But if you asked her to delete an email account or to filter your emails, there's certain stuff that you know she can do and stuff that you know she can't do. But the important thing is that she knows what she can and can't do. So if I ask Siri to, I don't know, delete the last text thread with Roey, she knows that's not something she's capable of doing. I don't know if the LLMs know that; they're so eager to please.
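The boundary Greg describes, an assistant that knows what it can't do, can be sketched as an explicit capability table. This is a toy illustration (the intent names and handlers are invented, not Siri's actual API): anything outside the declared list gets a refusal instead of an eager-to-please guess.

```python
# Sketch of an explicit capability check for an assistant.
# Intent names and handlers are hypothetical, for illustration only.

CAPABILITIES = {
    "set_alarm":   lambda args: f"Alarm set for {args['time']}",
    "start_timer": lambda args: f"Timer started for {args['minutes']} minutes",
    "set_volume":  lambda args: f"Volume set to {args['level']}",
}


def handle_request(intent: str, args: dict) -> str:
    """Dispatch a request, refusing anything outside declared capabilities."""
    handler = CAPABILITIES.get(intent)
    if handler is None:
        # Out of scope: refuse rather than improvise a plausible-sounding result.
        return f"Sorry, I can't do that: '{intent}' is not something I support."
    return handler(args)
```

For an LLM-backed assistant, the same idea usually shows up as constrained tool schemas: the model can only emit calls to declared tools, and "no matching tool" is itself a valid, honest answer.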
Oh my god, I'm so excited for Siri to get smart so soon with the Gemini deal, because it's something I've been waiting for. I saw the demo for Apple Intelligence, where it notifies you that your grandma is on that flight and you need to go pick her up, and stuff like that. And I thought, if someone could pull it off, it would be Apple. Siri is still not as good as we expected, but, and I'm an Apple fan, sorry, I think, as with every other thing they do, just give it time, and it's probably going to be amazing when Siri gets smart. And if you have the Apple ecosystem, it could do amazing stuff. I mean, I have cameras in my house connected to my HomeKit. Other than my cats, I think everything else here is connected to the Apple ecosystem. So the integration together can do amazing stuff and automation.

And the thing about the ecosystem is that it's closed. So if you wanted to feed your iMessages to some agent as part of a piece of software, that's not an option; that's not possible unless you're Apple. They've got an opportunity, I think, just like Android does, to harness all that they know about you. And the good thing about Apple is that a lot of the compute is done on device. The edge-based LLM computing stuff is really smart. Getting smaller models that can run on your phone, I think, solves a lot of privacy issues, too.

I have the iPhone 17 Pro Max, and I bought it because I thought
it could be the only model that could run edge SLMs, small language models, or something like that. Hopefully it will come, and then, from what I've read, they're probably going to use a small version of Gemini on the phone for some of the stuff, with routing, and if it's complicated they go to Apple's cloud, and it's going to be secure in the same way Apple secures everything. And boy, it's going to be amazing. That's the game changer. It's probably going to affect ChatGPT's customers for sure; it's really going to affect them, because Apple users tend to be really tech savvy. If it's really good, they're going to ditch it, and if Apple Intelligence becomes paid, people will pay for it. I will, for sure.

Well, here's the thing. I think
there's going to be a bunch of antitrust stuff that goes on, because if Apple signs up with Gemini and that's the only large language model you can use to integrate with your operating system, that's an antitrust issue, right? Just like it was with Safari and Chrome on the phone. Like, now there's a whole mechanism for switching your default browser and switching your default search engine, but when the iPhone first came out, none of that existed. Your default was your default. And then there were lawsuits and blah blah blah. And I think all that same stuff is going to happen.

It is going to happen. But we'll see how it goes, because, I don't know, they're going to have their developer event, and app developers are going to do their integration with Apple Intelligence in this new world. And we'll see how it goes, because not everybody is going to use Gemini. I mean, even the model they're going to use, it's going to be Gemini with changed model weights, so it's not really going to be Gemini; it's Gemini under the hood, but a really great move. And I think, I mean, I'm not an Android expert, but Google is probably going to have their own thing. I saw how Gemini is integrated into Google Home, actually, and it's really amazing too; I tried it. You can actually ask about smart stuff in your smart home, like you can ask Gemini to turn off the lights and do all those automations in your house. It's pretty amazing.

Yeah. You know, I have cameras around my house. I use the Nest ones, so it's Google Home.
And every evening I get the Home Brief, where it tells me everything that happened at my house, which I think is amazing. My wife hates it. She's like, "Would you stop with the Home Brief already?" But yeah, it tells me that a deer walked by, and there were some chickens in the yard, and somebody rode by on a bicycle. I think that stuff's fascinating. It just highlights the key things around the house, like if UPS came to deliver a package. That integration is definitely cool, but it's more observational and less active, in terms of, like, "oh, you weren't home, so I locked your doors for you." You can set up those automations, but those are just rule-based; they're not agent-based, if you know
what I mean.

Well, my cats' toilet: they have an automated toilet, and it has AI on it. It has a camera in it; it recognizes which cat was in the bathroom and did what, and I get an LLM that I can ask about it. I just saw it released. It's from a company called Tuya. Really good company, actually. It's a great product.

T-O-Y-A?

T-U-Y-A.

Yeah, they have smart home as well, so you connect all your smart devices through it. But the smart cat toilet, it's such a niche. You know how many people want their cat to have a smart cat toilet?

That's wild. No, people who have cats tend to really, really love their cats. And I've met a fair number of cat people that would love nothing better than to monitor their cat's digestive health.

I have my robot vacuum; it's the
Dreame model. It has a mode where it can scan my house and look for my cats. Not clean, just look for them and take a picture.

Go find Rusty, huh? Yeah. No, I tried the vacuum thing, but I have to keep the house too clean for the vacuums to work, so it's kind of an irony, right? I would love for them to just go, but if I leave a charging cable, like if there's a charging cable for my phone laying out on the ground, then the vacuum eats it and dies, and then I have to take the vacuum apart, and so on.

There are new smart ones that actually see obstacles, and some of them, Roborock, even have an arm that can move things.

Shut up.

Yeah. I'm a pretty, I'm a vacuum-robot expert, by the way.
What's the best one?

I have the Dreame, and I really like it, because of the camera that looks for my cat. But I was really hoping for a vacuum that's part of the Apple ecosystem, where everything works together, because that's the thing about smart houses and the possibilities of AI with them: if it was all integrated into one ecosystem. I have so many devices from different companies; that would be great.

Yeah, Google has definitely done better with Google Home than Apple did with HomeKit, for sure. Unless something has changed in the last two or three years. I switched from Apple to Google probably two years ago, and it is significantly more reliable, just in terms of connectivity and speed and that sort of thing. So Apple's got some catching up to do on that front, I think.

Yeah, for sure.
All right. So, what's the next big thing? What did you join Microsoft for? What do you do there? In my job there, I'm
30:57
going to work with startups on AI and data subject matters. So
31:04
I'm going to work with customers, same as I did at PwC, but I'm going to
31:09
work with them on the Azure part of the world. So I'll get to work with
31:14
Microsoft Fabric, which is like the data platform. Azure has so many things. You know, the
31:21
reason I really joined Microsoft is because, you know, I follow all the cloud companies, I saw what they do, and I think
31:28
the vision Microsoft has about agentic AI and how it's going to, you know, be part of the organization is really
31:34
like the greatest vision I saw, and I thought to myself, well, I've got to be a part of it. I love it. I really like the
31:42
vision and I'm really excited to be there. I mean, you know, the people are amazing, the vision is amazing, and
31:48
it's a big step. I love it. It's the way they treat agentic AI, and you know, I think
31:54
it's the only company I see right now that says agents are going to be employees, and
32:00
you know, to treat them as employees and think further about that. Fascinating. I'm still a little
32:06
skeptical. So, thinking about the hyperscalers: Azure has Fabric and Google's got
32:15
Vertex, right, and you've got SageMaker and a bunch of others. Have you
32:20
evaluated them, like compared them? Which one do you prefer, just from a hyperscaler perspective? Um, it really depends on the use case I
32:28
work on. I see many clients, and each of them uses, you know,
32:33
a different cloud provider, so I had to use their cloud providers and stuff. I think all the companies,
32:39
each one of them, solved, you know, different problems. Like
32:44
AWS solved different problems, Google solved the data world really well, and
32:50
you know, they're really good with agents, and they have ADK, which is a really good
32:56
agent framework, in my opinion. Microsoft, by the way, has the Microsoft Agent Framework, which is also open
33:03
source, and AWS has AgentCore, which is a really good product too. So each of them has really good
33:09
solutions, and it always depends on the customer's needs and what they have right
33:15
now. I mean, if the client works with GCP, you go for GCP and you see what's there. But it's really good to, you know,
33:21
learn from it. And you know, many clients work hybrid: they have some of it on
33:26
GCP, some of it on others, and as a developer it's really fun, you know, to
33:32
work with, because you get to know everything from everyone, and there are so many really good use cases.
33:41
When I worked at PwC, I had really good use cases with government
33:46
offices. They had really good use cases of agents and LLMs, and I
33:53
really see the impact of it, because I already see those chatbots and
33:58
LLMs in production, and I see how they help people, you know, find information about their government
34:06
department really easily. Yeah. No, it's good that they do that. The large language models are great for
34:12
search and for uncovering information and distilling a lot of stuff down into exactly what you need to hear.
34:18
What do you think things are going to look like in five years? I've got a really skeptical point of view on what large language models are doing
34:24
to people. You know, I saw a congressional hearing not
34:29
long ago about how things like TikTok and all the other social media are so bad, like they have such bad effects
34:36
on people, particularly teenagers who are using them. It seems to
34:42
me that the large language models are just as bad, if not worse, in a lot of ways. Are you optimistic or
34:48
pessimistic? Do you agree? What are the big pitfalls, and what's coming in the next five years? I
34:54
think LLMs will bring, you know, cures and medicines. I really believe that. I
35:00
think it's something that's going to happen. They're going to discover new materials, and it's really going to
35:05
help and benefit human lifespan and human life quality. But it
35:12
comes with a price, and I think the price is the mind, how it's
35:18
going to affect our minds. Because social media, you know, as much as I love social media and I love
35:25
LinkedIn, it has its cost. And I think with LLMs, it will solve, you know,
35:32
some people's loneliness, maybe, but at the same time it will make people
35:37
not even look for human interaction, because, you know, it solves their loneliness. And it is going to
35:42
affect, you know, let's say, school. I don't know how kids, I don't have kids yet, but how are kids
35:48
learning today? I mean, why would you learn today if you have, like, all the information in a chatbot? I mean, how
35:55
do you teach as a teacher? That's something I'm not an expert
36:00
on, education and how teachers are teaching today with AI, but I mean, it sounds to me like a
36:07
problem. Why would I listen to a teacher when I can just, you know, have an LLM? And why would I even learn, why would I
36:13
even learn how to code if people are not coding anymore, you know? I mean, what's going on? It has as
36:21
many problems as it solves. I mean, I'm really scared about a world
36:28
where LLMs are going to... I don't know. I believe... I'm still saying
36:35
thank you to ChatGPT. I think it will remember my thank-yous and appreciate that I'm
36:43
polite, you know. So, you're treating it nicely for when it's our overlord. I mean, I'm trying.
36:51
I don't hate that. But yeah, you're completely right. Most teachers that I've come across are
36:57
just sticking their heads in the sand on this and saying, "Don't use it. You can't use it. It's cheating." Blah blah blah. But at the same time, it's a tool.
37:05
And I honestly don't know what the world is going to be like when, you know, like
37:10
once Elon Musk releases his robots, which I will buy on day one, you know, and everybody's
37:16
got a household robot to do your laundry and clean your toilets and make your bed and stuff. You know, I
37:23
don't know. I think Twitter and LinkedIn and all these are going to become a wasteland of bot-generated
37:29
content, if they're not already. You know, when Elon Musk bought Twitter,
37:34
somebody estimated like 95% of the content on Twitter was generated by bots.
37:40
But now it's so easy to generate content, it's like, what are those platforms even for? But then, of course, if we stop
37:46
writing and creating new stuff, then there's no new stuff to train the large language models on, and they sort of
37:51
stagnate a bit, I think. That's right, that's like the big problem, because, you know, they always
37:57
release better models which train on more data. But as time goes on, more and more of the
38:04
data is going to be synthetic and LLM-generated. So LLMs are going to train and
38:09
study based on LLM-generated content, so it will probably get
38:16
worse. I don't know how they're going to solve it. I don't know. I mean, by the way, most of the
38:24
industry is saying that the transformer is not the final thing. It will not be AGI, if that's what people think;
38:32
there will be something else. I mean, at the end of it, transformers just predict the next best token. They
38:38
really did it well. Um, and you know, there's really, like, what's going on
38:43
right now with Moltbot and all that stuff. I mean, if you guys heard about that, that's crazy. Like, people do crazy
38:51
stuff with it. Like, if you heard about that, they have their own social media. They even have a website
38:58
where they can rent humans to do jobs that they can't do.
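The feedback loop described here, models training on earlier models' output, can be illustrated with a toy simulation. This is a minimal sketch, not how any real lab trains anything: a Gaussian stands in for the "model", and each generation is fit only to samples drawn from the previous generation's fit, so sampling noise compounds instead of being corrected by fresh human data.

```python
import random
import statistics

def generation_step(samples):
    """Fit a Gaussian to the samples, then sample a new dataset from the fit.

    This stands in for a model trained only on the previous model's
    output instead of on fresh human-written data.
    """
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]

random.seed(0)

# Generation 0: "human" data with real diversity (std close to 1.0).
data = [random.gauss(0.0, 1.0) for _ in range(200)]
history = [statistics.stdev(data)]

# Each later generation trains only on the previous generation's output,
# so estimation error is baked in and never averaged away.
for _ in range(30):
    data = generation_step(data)
    history.append(statistics.stdev(data))

print(f"gen 0 std = {history[0]:.3f}, gen 30 std = {history[-1]:.3f}")
```

With only 200 samples per generation, the estimated spread drifts as a random walk rather than staying pinned at the true value, which is the essence of the "training on LLM-generated content" worry.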
39:04
Roey, hold that thought real quick. We're about done. We have like maybe five minutes to wrap up. Let me go
39:09
grab the door real quick. One second. Sorry. Yeah, no problem.
39:18
Hello. Yes. Awesome. Very good.
39:24
Yeah, just right here. Thank you.
39:35
All right. Sorry about that. They're just bringing... I bought a couch, and I've been waiting on it
39:41
forever. Now they're here delivering it, and, you know, it's so hard to get them here. Anyway, sorry.
39:48
Continue. I didn't mean to interrupt. Uh, well, I was just saying that, you know,
39:53
I was thinking about what people are talking about right now, and people are mostly talking about
39:59
Moltbot, Clawdbot, it has so many names, its name changes every time, but mostly
40:06
they talk about it because right now it has the ability to, you know, connect itself to every kind of external tool.
40:13
It can even, like, without you, if you allow it to do anything, it can, you know, buy itself
40:22
a phone number and then call you. It can call on your behalf, and it can do, like, amazing stuff. It's 100%, 100% not
40:31
secure, which is why I don't recommend running it on your, you know, personal machine, and you need to be really careful with it,
40:36
because, you know, it can control your entire computer and do whatever
40:42
it likes. But it's really, really amazing how it became a trend, and, you know, the really big story right now is
40:49
Moltbook, how agents actually created their own social
40:54
media where they complain about their humans, which is, like, really funny. I don't know how reliable it is, but it's
41:01
hilarious. That's hilarious. All right, give us a warning or a recommendation.
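The "connect itself to every kind of external tool" ability Roey describes boils down to a dispatch pattern: the model emits a tool name plus arguments, and a harness executes the matching function. Here is a minimal hypothetical sketch, not Moltbot's actual code (`search_web` and `send_sms` are made-up stand-ins), with an allowlist, which is exactly the kind of guardrail his "not secure" warning calls for.

```python
# A generic agent-to-tool dispatch sketch (illustrative, not any real
# product's implementation).

def search_web(query: str) -> str:
    # Stand-in for a real web-search integration.
    return f"results for {query!r}"

def send_sms(number: str, text: str) -> str:
    # Stand-in for a telephony integration.
    return f"sent {text!r} to {number}"

# The allowlist is the safety boundary: tools not listed here cannot be
# invoked, no matter what the model asks for.
ALLOWED_TOOLS = {"search_web": search_web}

def dispatch(tool_call: dict) -> str:
    """Execute a model-requested tool call, refusing unlisted tools."""
    name, args = tool_call["name"], tool_call["args"]
    tool = ALLOWED_TOOLS.get(name)
    if tool is None:
        return f"refused: {name} is not an allowed tool"
    return tool(**args)

print(dispatch({"name": "search_web", "args": {"query": "agent news"}}))
print(dispatch({"name": "send_sms", "args": {"number": "555", "text": "hi"}}))
```

Anything outside `ALLOWED_TOOLS` is refused, which is one simple way to keep an agent from, say, buying phone numbers or placing calls on your behalf without you opting in.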
41:07
So, say there's somebody out there that has never used ChatGPT,
41:13
doesn't know very much about this AI thing, and they're skeptical, but they are curious, too. So, what advice would
41:19
you give that person? Well, I think my favorite LLM is Claude,
41:27
for sure. And I think a person who has never touched any of these AI tools and wants their jaw to drop should
41:35
probably go and check out Claude and let it, you know, build stuff for them, because
41:40
I think what Claude really defined is how AI has become a commodity and the
41:47
things you can do with it. I mean, you can do amazing stuff with it, applications, and it's so smart. If
41:53
it's something related to generating stuff
42:00
out of nothing, not, like, you know, reading stuff, but more like creating stuff, Claude is amazing. I mean,
42:07
it's such an amazing tool. They do an amazing job over there. And I heard Claude 5 is going to release
42:15
this week or next week. So, I'm really excited about that, too, because it's probably the smartest
42:22
person on the planet right now. It's amazing.
42:28
So, you're saying I need to cancel my ChatGPT Pro and move over to Claude, huh?
42:33
Um, no. I'm keeping my ChatGPT because it already has my memories, which I can't export out of it,
42:41
and it knows me well. So I need the memory, I need the personalization.
42:46
That's something Claude is not as good at as ChatGPT. Yeah, that's fair. Well, hey, Roey, thank
42:53
you so much for taking the time. This has been a really fun talk, and I'm excited to hear about how it
42:58
goes at Microsoft and see the cool things you do there. Thank you, Greg. It was an honor for me
43:05
to talk with you. Thanks. Appreciate that. Likewise. All right. Thanks so much. I really appreciate it.


