Data Day Podcasts

From the Trenches: A Conversation with Support Leader John Forrest

January 20, 2026


Two DataRobot alumni. Decades of combined experience in AI and customer success. One conversation.

John Forrest joins Greg Michaelson on Data Day. John has built his career leading technical support at companies like Netezza, DataRobot, and now Qdrant, where he oversees customer success for one of the leading vector database platforms. Expect a candid conversation between two industry veterans about where AI has been, where it's going, what it takes to build lasting customer relationships, and lessons learned from years of solving hard problems for demanding customers.


  • 0:00

    [Music]

    0:11

    All right. Well, welcome back to Data Day with, uh, Greg Michaelson. I'm here with John Forrest, a really great

    0:16

    friend of mine, back from the DataRobot days, and I'm excited for a really good conversation with him. So, thanks for

    0:22

    joining us, John. Great to be here. Great to be, uh...

    0:28

    Oh, I seem to have lost audio. Can you hear me?

    0:33

    It must be my... No, it's on mute. Yeah, no, it wasn't you. It was me. I had muted myself. But you heard me before,

    0:40

    right? I did and then it just went away. That was weird. Okay. Yeah, it must be the connection or

    0:45

    something. No big deal. So, anyway, welcome. Thanks for being here. Really excited to have a chat with you. Yeah, I was just saying it's great to be

    0:51

    on a call with you again. Yeah, I miss working with you. So, one day, one day. Although time's running out for me, I

    0:57

    think. Yeah, you and me both, brother. Awesome. So, why don't you just start out, give us a little background on, you

    1:04

    know, how you got started doing what you're doing. And just so everyone knows, John's just about the

    1:09

    best, uh, you know, technical customer support, like dealing with

    1:15

    irate customers, solving hard technical problems kind of guy you could ever hope to have. So, how'd you get

    1:21

    into it, John? Thank you. Yeah, how did I do that? Um, yeah, so before DataRobot I worked for

    1:26

    a company called Netezza, and I mistakenly made a joke with my boss. I

    1:32

    was coming to the US actually to manage New York Stock Exchange and Bank of America as a technical account manager

    1:37

    and I jokingly said to her, hey, I could run that. She'd fired three support directors, and I said I could come and do

    1:42

    that. So rather than go to Charlotte and sit in Bank of America's office, I ended up in Boston in Netezza's office and

    1:48

    never looked back. So that was what switched me from, you know, the more kind of pre-sales technical success account

    1:55

    management type stuff to doing what I do now. So do you prefer it? You prefer the

    2:00

    the firefighting to the sales? Uh, yeah, I think people get worried about

    2:06

    it, but if you get a collaborative, you know, trusting relationship with a

    2:12

    customer you can usually overcome anything. If you don't have that, you can get into shouting matches, but you

    2:18

    need to deal with that, too. So, yeah, it's not as hard as people think it is. I'm not sure why people get so worried about doing it,

    2:24

    but uh hopefully the product's good enough it never happens. So, certainly that's true with my current company.

    2:31

    What is your current company? Tell us about it. So I work for Qdrant. I called it

    2:37

    "quant" up until I joined. Um, I was a user prior to that. My previous... But it's spelled Q-D-R-A-N-T. Yeah.

    2:44

    Yeah. Yeah, but you've got to pronounce it. It's all in the pronunciation, right? German company. I'll say that's the German spelling, which maybe isn't true

    2:50

    either, but yeah. Um, so, uh, yeah, at my previous company, I was working for

    2:57

    somebody who was trying to do what DataRobot did. You know, we both worked together at DataRobot. They tried to do what DataRobot did, um, in a kind of

    3:06

    similar way, but for mid-market customers. And then GenAI hit. They went full-on

    3:12

    GenAI, and it's all chat-based, and they needed a vector database in there.

    3:17

    Started with a competitor to Qdrant; it didn't work out very well. We switched

    3:22

    to Qdrant. So I've been using it for a year. Got to know the technology and started talking to the sales leader and

    3:29

    never looked back. So excited to be here. So it's you know it's like the Ferrari of vector databases. Is that

    3:35

    just like... So why would somebody need a vector database? That is a good question. Yeah. So

    3:41

    the general market of the GenAI hype space is all going on about RAG and

    3:47

    still going on about RAG. So it does form a foundation for RAG. But I think the really interesting use cases are

    3:54

    semantic search. So you've got a catalog on your website and you want to be able to search for similar items in the catalog, both image and text, and so

    4:01

    on. Um, so you know, real tier zero mission critical e-commerce type apps where you're doing uh semantic search.

    4:08

    Um, and obviously the flavor of the month is agentic AI and the agentic

    4:14

    memory. We're a good fit for that as well. So what do you mean by agentic memory?

    4:19

    So when you've got agents and you're going through things and interacting with the agents, they've got to store what they find out and be able

    4:25

    to recover the most relevant parts as they go through the flow. So, uh, you know, a vector database is a good place to do

    4:30

    that. Ah, I see. So, if I've got like a really long conversation context with

    4:37

    ChatGPT, say, then you might store that as a vector. You might shrink it down

    4:44

    and then search it for the relevant data. Yeah. Kind of. Yeah. Exactly. Yeah. So, you know, when you're

    4:50

    going through a flow, um, you're going to reach out to grab things from websites or go and talk to other systems and

    4:55

    pull back the context. Um, and as you go through the, you know, you can also interact with people as you go through

    5:01

    the flow as well. Um, but it's going to pull all that together. So, there's kind of short term, this is what I need to do the next step, and longer term, which is all

    5:07

    the things I've done and what I've pulled out. You know, if I'm searching websites for... Somebody gave me

    5:12

    an example of ordering a pizza, right? If you want to order a pizza with a GenAI agent, what are you going to do? So,

    5:20

    yeah, what are you going to do? Walk through that example. What are you going to do? Uh, probably not order pizza. I don't think it's a

    5:26

    bad example exactly, but, uh, you know, I've done things like support ticket automatic case handling type stuff,

    5:34

    you know, and you've got the ticket, you've got other tickets the customer's opened, so you've got relevant

    5:39

    context you might want to search to feed into the flow.
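
The pattern described above, store what the agent finds out, then recover the most relevant pieces at each step of the flow, can be sketched in a few lines. A minimal illustration, assuming the qdrant-client Python package; embed() is a placeholder standing in for a real embedding model, and the collection name and memory snippets are made up for the example:

```python
import hashlib

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

def embed(text: str) -> list[float]:
    # Placeholder embedding so the sketch runs end to end; swap in a real
    # embedding model for meaningful similarity scores.
    digest = hashlib.sha256(text.encode()).digest()  # 32 bytes
    return [b / 255.0 for b in digest] * 12          # repeated to 384 dims

client = QdrantClient(":memory:")  # local in-memory mode, handy for experimenting
client.create_collection(
    collection_name="agent_memory",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

# Store what the agent has found out so far, keeping the raw text as payload.
memories = [
    "Customer has two other open tickets about networking",
    "Customer is on release 1.7; the fix shipped in 1.9",
]
client.upsert(
    collection_name="agent_memory",
    points=[PointStruct(id=i, vector=embed(m), payload={"text": m})
            for i, m in enumerate(memories)],
)

# Before the next step in the flow, recover the most relevant memories.
hits = client.search(
    collection_name="agent_memory",
    query_vector=embed("what do we know about this customer's open issues?"),
    limit=2,
)
for hit in hits:
    print(f"{hit.score:.3f}  {hit.payload['text']}")
```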

    5:46

    So how has the whole AI agent thing impacted the way that support works for you? Uh, that's a good question. For me,

    5:54

    not at all, right? A number of people said, "Well, it's going to go away." My previous CEO said, "You

    6:00

    don't need support. We've got GenAI now." Which just isn't the case, right? Because it's really an interpersonal

    6:05

    problem rather than a technical problem. So, if you can make something sound and look exactly like a human, I think with

    6:11

    some empathy and, uh, you know, the trust I talked about, I think that's great. But we're a way away from

    6:17

    that, I think. So, yeah. No, I always skip all of that chat thing. My first thing is just "representative, representative."

    6:23

    Yeah, it's come a long way, right? I certainly have, in a previous role, put a number of those in place in

    6:29

    big organizations and they're a lot better than they were, but still it's nicer to you know, you ought to have a

    6:35

    discussion, a face-to-face discussion with somebody. So, yeah, I did see one. I was fooled by one back in the DataRobot days. Somebody used

    6:43

    a scheduler. It was an AI scheduler. And the way it worked is you just copied this scheduler on the email, and she

    6:49

    had a name. I think her name was Amy. I don't remember what app it was, but uh

    6:54

    the person just copied Amy and said, "Hey, can you schedule this?" And I exchanged emails with Amy for like hours

    7:02

    before I realized that Amy was a bot. And, uh, you know, I was talking, you know, exchanging pleasantries with

    7:09

    an AI agent. Of course, this is before agents happened, but yeah. So

    7:14

    I think there are some limited contexts where you can be fooled but most of the time not so much.

    7:21

    Uh, I think that's changing though. I do think it's changing, right? I am a big fan of Transformers and GenAI, you

    7:30

    know, it's a very exciting development. So yeah, in my career it's probably the biggest... you know, you've got dot-com and iPhones

    7:37

    and all that stuff. This is going to have that level of impact for sure. It's definitely changing the way everything works. Um, what

    7:48

    do you have any examples of how kind of the support world has changed as a

    7:53

    result of all this? I know you said you're kind of focused on the relationships, and on, you know, if

    7:58

    somebody's angry they don't want to talk to a bot, they want to talk to a human. Um, yeah, so the things I've done in

    8:05

    the past, like automated triage. A ticket comes in, you know, an issue from a customer comes in through one of the channels, and you can very accurately

    8:12

    categorize it and prioritize it based on the content, right? An LLM could do a really good job of taking an email and,

    8:19

    you know, looking at some customer context and history and working out what to do next,

    8:24

    um and that then leads you to can I automate the next step and for simple things you can so it's definitely a

    8:30

    potential way of, you know, stopping people doing mundane tasks, maybe not putting them out of the job completely, but you

    8:36

    know, when you get to more complex software, I think it's a different story. I don't, you know, you've got to

    8:42

    yeah, you know, you've seen things before and you can make suggestions to a support engineer to go look at,

    8:47

    potentially. Yeah, the software space is complicated, but I guess if you're doing like an e-commerce return, or, you know, you

    8:55

    have something like that, those kinds of things lend themselves really easily to a

    9:00

    bot-based kind of support type thing. Yeah. Billing issues, all those. Yeah. I mean, software companies have those too, right? So, maybe they don't always go to support, but yeah.
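
As a rough sketch of the automated triage idea above, here is what categorizing and prioritizing an incoming ticket with an LLM might look like. It assumes the openai Python package; the model name, categories, and priority labels are illustrative assumptions, not anything from the conversation:

```python
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def triage(ticket_text: str, customer_history: str) -> dict:
    """Categorize and prioritize a ticket from its content plus customer context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "You triage support tickets. Reply with JSON containing "
                "'category' (billing | bug | how-to | outage) and "
                "'priority' (P1 | P2 | P3), taking the history into account."
            )},
            {"role": "user", "content": (
                f"Customer history:\n{customer_history}\n\nTicket:\n{ticket_text}"
            )},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(triage("Production cluster has been down for an hour!",
             "Enterprise plan; two prior outage tickets this quarter."))
```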

    9:07

    All right. Walk me through the, uh, the John Forrest patented approach to the

    9:14

    angry customer. So, somebody's having a... they're having a technical problem. They say, uh, you know, this

    9:20

    software sucks. We're dropping our subscription. This is horrible. What's the John Forrest approach to

    9:26

    handling these guys? Yeah. So the

    9:31

    usual kind of mindset is you just need to say yes to everything, and you can't do that. So it's always a "yes, and" or a "yes, but," right? Because usually

    9:39

    the customer is wanting something. Either you can't deliver it and they maybe don't really need it, or if they

    9:44

    do need it and we can't deliver it then you need to come clean and just say hey okay goodbye kind of thing. But most of the time it's not that you know some

    9:51

    examples. Oh yeah, tell a story. Yeah, the CIO of a big, uh... I can't, I shouldn't drop names, so I

    9:58

    won't, but a big, you know, DC-based mortgage type company. Like, every month

    10:04

    the head of sales and I would go down and be yelled at because the system had had issues that month, and

    10:10

    every month we would have the same conversation well if you don't upgrade and put the bug fix on you're going to have it again next month so we'll see

    10:15

    you next month, but go and apply the bug fix. So, you know, it's just building the, uh, making sure people understand

    10:22

    what went wrong and what are the paths, what are the options they've got to prevent it happening

    10:29

    again and move forward with it. Um, it's really just being collaborative.

    10:35

    It's, you know, how do you get to the right end result with them? Um,

    10:41

    and be open and honest. And most important, when you've got a terrible memory like me, you need to be consistent. So, I need to know that what I

    10:47

    tell you now, I'm going to tell you the same thing in two weeks' time. So, I'm not going to remember if I made something up. So, don't make stuff up.

    10:53

    So, store it in a vector database. Oh, yeah. Yeah. That will change, right? I should do that. What did I

    10:58

    say last time? Yeah. Is there an application for all of these meeting recordings? I'm looking

    11:04

    for an AI agent, like an AI assistant. There is. We use one. I will drop the

    11:10

    name. We use one called Glyphic. It does a little bit of hallucination in there, but it does a pretty good job of, you

    11:15

    know, it records the meeting. It sits in the background with all the other agents listening away, but it's very much

    11:21

    sales focused. So, it kind of has a sales bias to it, but you know, we use it pretty extensively. So, you

    11:28

    know, I've seen a few others like that. But, see, the thing that I want, which I haven't really seen, is this

    11:36

    integration with life, right? So, you know, I've talked about it a couple of times. I've tried these

    11:41

    wearable things that just listen to everything uh you know and then they you know they

    11:46

    summarize it and stuff like that. The problem that I was having is that it can't differentiate between TV and

    11:52

    reality. So in this case I was watching Vampire Diaries with my wife. You've seen Vampire Diaries? It's a stupid show. Uh,

    12:00

    also an awesome show. Uh but my daily summaries started getting like interwoven with like vampires and

    12:06

    dragons and witches and spells and such. So yeah, so that was less than ideal,

    12:12

    but there's got to be context. Yeah, right.

    12:18

    You know, like my phone should know enough about me to be able to remind me

    12:23

    to do X, Y, and Z without me having to tell it to. So I'm completely willing to give up on that level of privacy in

    12:30

    order to have that level. I'm with you on that. And I think... so that's our next startup. Absolutely. There is no to-do list type system ever

    12:38

    that has worked for me. I don't... you know, it's got to be the last minute before I do what I need to do. Um,

    12:44

    like the bio you asked for. That was like 10 minutes ago. You got it.

    12:49

    So, but if it's integrated, I agree with that. If you integrate it into my, uh, my kind of... and it's learning

    12:57

    from when I should have done something and when I did it and how quickly how you know how I prioritized which it

    13:03

    absolutely could do. I could do that. Um, I think I could improve the remainder of my

    13:10

    working experience to be more efficient and effective than I have been previously. Sure. Although I guess there's something to be

    13:15

    said for only doing the things that are important enough to, you know, that they stay top of mind. Yeah.

    13:21

    Yeah. Yes. Yes. How do you measure that? That's the hard bit. And maybe there's a... Yeah, it's

    13:28

    it's not an easy problem. I wish it was. I think the solution is to just create a Slack channel for every possible thing

    13:35

    and then... That's welcome back to the DataRobot days. Tell them about the Slack

    13:42

    channel things. I take full responsibility. I built that. Yeah. Yeah. The automation in Salesforce updated every customer

    13:48

    channel like twice a week. So you were watching for things going on in customers, and then they all went white and you... oh, silence, I can't, you know.

    13:56

    So they kind of hid stuff from you. I think so. You can't read them all. Well, we had this... I mean, the problem that

    14:03

    we were trying to solve, or that I was trying to solve, was that we had loads of customers and a different set

    14:08

    of people that were involved with each set of customers and if something came up for that customer you didn't want to

    14:14

    have to go and dig for, like, okay, who's the data scientist and who's the

    14:19

    engineer and who's the support person and who's the salesperson and who's the customer you know like the whatever

    14:24

    right? So I wrote some code to, like, pull all those people into a Slack channel for every different customer. So by the

    14:30

    time, you know, DataRobot grew to a certain point, we had hundreds and hundreds of these customer support channels, and it

    14:37

    created so much noise that people just muted all of them and they became completely useless.

    14:44

    Yeah. Yeah. And there was a thing, I think at the start of last year,

    14:49

    where Slack sent something out saying, hey, there's really exciting things coming. GenAI's here. We're going

    14:54

    to do great things. I'm still waiting. You know, they could do a lot more, for sure. They could. In

    14:59

    fact, I just wrote some code yesterday, day before, to use the Slack API to

    15:06

    look at every message that I had access to on Slack and then identify what was

    15:12

    relevant to me and summarize it and give me some action items for a rolling three-day window. And so every

    15:18

    morning at 5:00 a.m. that updates and I start my day out by looking at Slack

    15:23

    and saying, "All right, what happened while I was sleeping?" Because the company's based in Ireland. Zerve is based in Ireland and so lots happens,

    15:30

    you know, while I'm asleep because everybody's waking up and getting to work. They're eight hours ahead of me. So, you know, I can get up and open up

    15:37

    Zerve and look at the code I wrote and the output and see the action items and all that kind of stuff and see, you

    15:42

    know, what happened last night and what are people worried about and what's the vibe and that sort of thing. But that's just Slack. I need that for

    15:48

    email. I need it for text messages. I need it for, you know, Facebook Messenger, for WhatsApp. I need all of

    15:55

    it, like, in one spot.
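
A rough sketch of the rolling-window Slack digest described above, assuming the slack_sdk Python package. The token is a placeholder, pagination is ignored for brevity, and summarize() is left as a stub where the LLM call, the "identify what's relevant and give me action items" step, would go:

```python
import time

from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # placeholder bot token

def collect_recent_messages(days: int = 3) -> list[str]:
    """Pull message text from the last `days` days across visible public channels."""
    oldest = str(time.time() - days * 86400)
    texts: list[str] = []
    for channel in client.conversations_list(types="public_channel")["channels"]:
        history = client.conversations_history(channel=channel["id"], oldest=oldest)
        texts.extend(m.get("text", "") for m in history["messages"])
    return texts

def summarize(messages: list[str]) -> str:
    # Stub: hand the messages to an LLM and ask for a summary of what is
    # relevant to you, plus action items for the rolling window.
    return f"{len(messages)} messages collected in the window"

print(summarize(collect_recent_messages()))
```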

    16:01

    I think you've got a product that does that for you, right? Does that not allow you to do that? Uh, pretty much. You can pull the data. At my previous company, Rapid Canvas, the

    16:07

    CTO used their product to do just that. It would pull from HubSpot, Jira, and Slack and

    16:15

    summarize what's going on with the customer each week and what the actions are and what's behind schedule and what,

    16:20

    you know... and it was a good place; you could then go and just chat with that aggregated information. So,

    16:27

    I guess the danger though would be that you're so automated and so in the loop that you never end up talking to the

    16:32

    customer and just figuring out what's on their mind. So, there's got to be room for that interpersonal connection there. There absolutely has. Yeah.

    16:39

    Yeah. Yeah. Yeah. That's uh and I think that's

    16:45

    certainly something I've enjoyed throughout the Netezza and DataRobot experience was getting to that. And

    16:52

    it's interesting. If you end up running a support organization, you see the senior team coming out and standing outside your

    16:57

    office looking through the glass. You go this can't be good. But none of them want to go and have that terrible call.

    17:02

    Those terrible calls are actually, you know, if you come out the other side, okay, you got a better relationship with

    17:08

    the customer. I was at a conference... there's a services and support

    17:14

    conference in Vegas once a year. I don't know. I've not been for 10 years. I don't know if it's still going, but uh one of the HP support guys stood up and

    17:21

    he had this chart of when you resolve a ticket and it's, you know, it goes down like this by day and you've not resolved

    17:27

    it. And he said, "There's a point though where you're better to wait because if you wait another four or five days and

    17:32

    then resolve it, it actually bottoms out and comes back up. So your customer sat gets better if you wait a little bit

    17:38

    longer, which I think was a funny, uh, analogy." So I don't think the customers would agree, but honestly, if

    17:44

    you, you know... I think most people think, with support, if I give you the right answer immediately, I'm giving you

    17:50

    the best possible support. You don't get the best customer sat from doing that. Not saying you shouldn't do it, but if

    17:56

    you actually, like, if you have an issue where you need to interact with a customer and take them

    18:01

    through the process of solving it, and work with them to get it solved, you end up with a better customer sat score.

    18:06

    For sure. Wait, say more about that. I'm not following at all. So, you're saying that if a customer is having an issue

    18:13

    and you just immediately solve it, they're less satisfied than if you took some other route? If they're less, uh...

    18:19

    Or there's less of a relationship with us as an organization, I would say. So, you know, you'll get a

    18:27

    good, uh, tick in the box, good support, but it's actually those personal

    18:33

    interactions that tend to make the difference and get you, you know, the net promoter score, where you need the nines and tens to really make a

    18:39

    difference. Those are typically given when there's some deeper relationship with a client.

    18:45

    Gotcha. So the idea is if things get bad and then you show up in the nick of

    18:50

    time and sort of save the day, that's better than if things never got bad in the first place. Uh yes. I mean, if things never got bad,

    18:58

    like if you've got the perfect product... I don't know that that's true, right? If your product never fails... but I've never worked with a company that's had a

    19:04

    product that never fails. So maybe yours doesn't, but... Yeah, Zerve's perfect. We never have bugs.

    19:09

    Exactly. That's okay. Show him. Yeah. He's off. He's off. Oh, what kind

    19:15

    of... hopefully the noise cancelling is removing the barking. Oh, yeah. No, Riverside's actually pretty amazing. Their AI

    19:21

    integration is... For everybody listening, we're recording on riverside.com, and,

    19:27

    uh, you know, they'll do, like... they can remove the ums and the ahs and, you know, all that. Uh, one of the features I

    19:34

    like the best about them is that they fix eye contact. So, you know, like I'm not looking at the camera right now. It

    19:39

    looks like I'm looking down, uh, at my screen because my camera's up here. So,

    19:44

    if I look at the camera, I can't see you. So, I'm never looking at the camera. But Riverside has a thing that

    19:49

    will automatically move your eyes so that it looks like you're looking at the camera uh as a um what do you call it?

    19:57

    You know, post-processing kind of step, which is amazing. You know, does it do... I

    20:02

    was using... I did a lot of video demos for the previous company, and there's a software where you upload the

    20:08

    video and it'll take all my terrible, you know, gaps like that one

    20:15

    out of the video altogether and then translate it, with my accent,

    20:21

    into Portuguese for Brazil or Brazilian Portuguese or Mexican Spanish. So I hear my own accent talking fluent high-speed

    20:28

    Portuguese. I went, "Oh, hopefully that's okay." And wow, the team in Brazil said, "Yeah, it's great. Sounds like you know what

    20:34

    you're talking about." So, yeah. You know, I've never tried the translation thing. There is a company that a friend of mine, Ray,

    20:41

    told me about. You know Ray, he was at DataRobot, too. He was on the podcast a week or so ago, a couple weeks ago.

    20:47

    Um, it's called ElevenLabs, and you speak to it for 30 seconds and then it replicates your voice and you can have

    20:54

    it say anything you want, which is remarkable. So, I haven't gotten a chance to play with that yet.

    21:00

    Same thing, right? You record your video and you go in and you edit the text and then it plays the text

    21:05

    sounding like you. It's got your... It does sound just like you, same speech pattern. And then you say, "Okay, I'll

    21:11

    translate into one of however many languages," and it gave me back speaking perfect fluent Portuguese,

    21:17

    which was... I got my kids the new

    21:23

    AirPods that do the auto translation if you wear them. I haven't had a chance to work on those yet, but it's definitely a brave new

    21:29

    world. Does it work in Ireland? Maybe not. Two countries separated by a common language. Yeah.

    21:34

    Yeah. Exactly. All right. So, what, um, at Qdrant,

    21:41

    you know, how is it different there than it has been previously? So the last three years have been pretty transformative with this

    21:48

    whole AI agent thing, but at the end of the day, customer support, customer service is all about like relationships

    21:54

    and fixing problems. Are things changing?

    22:01

    Um, I mean, you've still got the same challenges. We're an early stage startup, so we've

    22:07

    got a product that's moving very fast and when you do that, you've got to try and bring the customers with you. So you've got customers on older releases

    22:13

    that are hitting older problems that have long since been fixed. So, uh, you know, those

    22:20

    foundational problems are still there. You've still got to deal with them. Um, and honestly,

    22:25

    from a product perspective, we're in the middle of that, right? The agentic AI... we're one of the well-known

    22:32

    vector databases, or RAG systems; as you know, we're widely used for solving that. But internally we're at a size

    22:39

    where we don't need to. I think, you know, you look into it and the question is, why are you doing it? Are you doing it to save money? Are you doing it to

    22:45

    improve customer experience in you know using using uh technology in the support

    22:50

    process? And typically it's done to save money more than anything else, right? You want to reduce headcount,

    22:56

    um and we're not in a place where it's consistent enough to do that. We've got customers doing all kinds of different

    23:01

    things. So as we scale, we will see the change there for sure. I'm sure

    23:06

    we will. But we've also got a fairly high bar, right? So it needs to be near

    23:11

    perfect, and that takes a lot of work. You can get very good quite easily, but that last mile is hard to get.

    23:17

    Yeah. Yeah. And there's nothing worse than watching a bot mess up an interaction, you know, retrospectively,

    23:24

    right? Because then you're like, "Ah, if only a person had been involved." Yeah. Yeah. Um, how do you think the

    23:31

    That's not true, right? They do. What's that? Say again? People mess up occasionally as well, I guess. But yeah, that's true. In different ways,

    23:38

    perhaps. Yeah. Um, I'm really curious about your view of software CEOs, uh, because you've

    23:46

    seen a whole bunch of them, and you know, we've had conversations about these startup CEOs, and we maybe won't

    23:53

    go into those details, but what do startup CEOs get wrong and

    23:59

    what do they get right, in your experience? Um, yeah, that's good. And it

    24:04

    varies, right? I'd say I'm looking for my next Netezza. At Netezza, the CEO was

    24:10

    Jit Saxena. So I'm happy to throw his name out there because he's just a phenomenal guy. But he had previous startups that were successful, and

    24:17

    the culture was right. The whole atmosphere in the company... you know, if you walk up to a Netezza person and say, uh,

    24:23

    performance value and they'll all say simplicity they've all got that and it wasn't drilled into them. It was part of

    24:29

    the culture of the company. Um, so I think that, you know, you join a startup

    24:35

    not because... uh, well, hopefully there's an exit that makes you lots of money, but that's typically not

    24:40

    the case, right? The probabilities are that's not going to happen. So, you've also got to enjoy what you do. Uh, and I think the CEO is a big

    24:47

    part of that. So, um, you know, we both worked for Jeremy at DataRobot and I enjoyed it. I look back and it's a

    24:53

    close second, surprisingly. Maybe a close second to the Netezza experience.

    25:00

    So, um I think some of the common things are I've heard a number of people say, "Oh, we don't need sales people, right?

    25:07

    I don't need a salesman." And that's fine. If you want to sell, you can. If you don't need all the zeros at the end of the deals, then maybe you don't need

    25:12

    a salesperson. But I think, for most of the product space I'm in, salespeople make a big difference. And

    25:18

    understanding how to enable them to sell and make sure that we're getting to the right people is important.

    25:24

    Um, and then don't be too precious about your baby. I think that's the other thing. If it's, you know, the

    25:30

    product works like this because I want it to. It does. No, it's got to work the way the customer needs it. That customer experience comes back into it. So,

    25:36

    yeah. No, I just went through something like that at Zerve, where we made a fairly major change to the UI

    25:42

    to make it a bit more similar to what people are used to from a software development, like a data science

    25:48

    development perspective. Zerve is a data science and development

    25:55

    environment like an agentic coding environment and you know it was way different than

    26:00

    what me and the other founders had designed back in the day, and so, you

    26:06

    know, I admit I was hesitant to see my baby get changed,

    26:12

    but it's definitely better. You probably were right, but is there any point fighting against the,

    26:18

    you know, the momentum in the marketplace? I don't think so. Yeah, you get comfortable with the way something works and then it's

    26:25

    like, hard to change, right? Change is hard. Um, you know, a lot of times you'll

    26:31

    see a button in an app that says, like, switch back to classic view, you know, if they make a major change to

    26:37

    the UI. Like QuickBooks, for example, does that. Like, if you want to go back and look at whatever

    26:43

    your bank register looked like you know before they changed the UI then that button is there but you got to know that

    26:48

    there's like dozens or hundreds of engineers that are like gritting their teeth over that button because they're having to sort of maintain two UIs and

    26:56

    all that stuff just because change is hard or worse still uh let me just interact through an API from a Jupyter notebook

    27:03

    which, you know... the previous company did that. Like, no, we've got to kill the Jupyter notebook. But yeah, and it's...

    27:08

    It was... the functionality needs to be on par. It needs to be

    27:14

    easier, but it's still there, as, you know... you've learned to work in Jupyter and that's, you know, you've

    27:20

    gotten that expectation. So yeah, I mean, that's kind of the space that we're in. Like, I mean, most

    27:25

    data scientists play in Jupyter notebooks, and you know, that's fine and good. Jupyter notebooks

    27:32

    were designed at a university in California, Berkeley, to be scratch

    27:37

    pads, to be classroom scratch pads. And somehow uh they've sort of like grown into the tool of choice for data

    27:43

    scientists, but they have tons and tons of problems. Like most of them are local uh instead of cloud-based. They're

    27:48

    impossible to manage, like code versioning. They're super unstable. You know, like there's a lot of issues with notebooks. And yet motivating

    27:55

    people to kind of like move into a new environment is still hard which I suppose is not super unexpected. Have

    28:02

    you encountered that much in your various careers? Yes. Yeah. Yeah. You know, DataRobot

    28:09

    bought companies which gave you a notebook in a nice, friendly, cloud-based environment, which I don't think

    28:15

    really paid for itself in terms of the effort we put in. But, um, at a previous company, the data science team,

    28:21

    the kind of services guys going out delivering projects, they

    28:26

    would do most of the work in the notebook and then move it into the product afterwards initially and so

    28:32

    there was a good learning curve to say what needs to happen in the product to stop them doing that what do we need to

    28:37

    see in there. So eventually you get to the point where you can make it better integrated, but still there's times when

    28:43

    I want some kind of notebook-like editor to go and do stuff and just play around with it. So, it was kind of a compromise

    28:49

    as well as you said. So, yeah. Yeah. Yeah. I mean, at the end of the day, I think you end up having to give people

    28:54

    what they want and that's not a bad thing, right? You know, it depends. Well, I don't know. It

    29:00

    depends where they work. Like, if they come out of Amazon and they want all the Amazon look and feel and stuff, I think maybe it is, but I don't know. At my

    29:07

    place, maybe it is not the best idea. You mean maybe it's not the best idea? Right. Yeah, exactly.

    29:13

    Yeah, changing user behavior is hard. It is. It is. Yeah. And I think that,

    29:18

    for a, you know, a search engine like Qdrant, it's, you know,

    29:23

    people are going to use the API. So we've got a nice UI. I worked for Neo4j for years, same thing. It's a really fancy UI, and you can go through the graph,

    29:30

    but actually most people are just hitting, you know, the API and running it remotely. So your users are primarily, like, super

    29:35

    technical? Yeah. Or they're developers, basically. Yeah. They're writing code and they're just making calls and expecting

    29:41

    stuff back. So, and it's uh mostly a product that's used for enhancing the performance of agents

    29:48

    like a RAG tool. Um, I'd say the bigger use cases are

    29:53

    really semantic search. So you're looking, you know, you're searching for something in documents, and it's

    29:59

    using, you know, transformers and embedding models and so on, the same stuff, but it's not done specifically

    30:06

    for a kind of GenAI solution. It's done for, you know, somebody browsing an e-commerce website saying, you know, I

    30:13

    want brown shoes that look like this pair, and using the search to go and find those.
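
To make the brown-shoes example concrete, here is a minimal text-only sketch of catalog semantic search, assuming the sentence-transformers Python package; the model choice and toy catalog are illustrative. An image variant would swap in a multimodal embedding model, but the ranking step is the same:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

catalog = [
    "brown leather oxford shoes",
    "white canvas sneakers",
    "tan suede chelsea boots",
    "black patent dress shoes",
]

# Normalized embeddings make the dot product equal to cosine similarity.
item_vecs = model.encode(catalog, normalize_embeddings=True)
query_vec = model.encode("brown shoes that look like this pair",
                         normalize_embeddings=True)

scores = item_vecs @ query_vec
for i in np.argsort(-scores)[:2]:  # top two most similar items
    print(f"{scores[i]:.3f}  {catalog[i]}")
```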

    30:18

    Gotcha. Do you think that technology will be sort of, um, co-opted

    30:24

    or replaced or supplanted by, like, GPT?

    30:31

    Uh, that is a good question. Right. Certainly, um, there was a movement to make sure that your website

    30:38

    wasn't just searchable by Google, so it came up in the Google search. It now needs to be enabled so that the

    30:44

    training models will go and pull your context, what's not visible on the

    30:50

    website, into the model itself. Right? So there's a whole kind of way of dealing with your website now, so

    30:56

    it does appear properly in GPT. I think the answer is no for that though, right? Because... and even

    31:03

    that's not the right answer. So if you look at how it used to be done with kind of full text search

    31:08

    um, and image search, it was kind of gluing together very complex, heavyweight bits of technology. The

    31:14

    whole vector search thing has changed that. It's much easier and simpler to do now. So... but do the foundation models do

    31:22

    that under the hood? I mean, is that sort of what's happening, right? Like, if I asked

    31:27

    ChatGPT to find me, you know, a pair of brown shoes that looks like this one, it could definitely do that. Yeah,

    31:35

    I guess. But it must be reaching out. So, it's going to be reaching out to... it's not going to find it in the

    31:41

    own context of the model, right? I don't think so. Could it be in a vector database? Yeah, it definitely could be. But, uh, I'm

    31:49

    guessing there's web search in there and it's doing some real-time stuff as well. Yeah, definitely some web search, probably some image search type stuff.

    31:55

    Exactly. Yeah. See, you know, that's the scary thing. Like, OpenAI could just... or Anthropic or any of these other

    32:02

    foundation model companies. They can snap their fingers and swallow entire companies just by, you know, using the

    32:09

    magic of LLMs to solve these problems in an entirely different way. Should software companies be worried?

    32:16

    Uh, I think everybody to some extent seems to be a little bit worried, right? I don't know. But personally, no.

    32:23

    That's a good question. I don't know, right? I know at DataRobot, we used to see Terminator on the slides every so

    32:28

    often, saying, you know, 2045 the singularity happens, we're all killed. Um, I think, you know,

    32:37

    people that are doing repetitive tasks, kind of just operational repetitive tasks, those

    32:44

    could be automated. You could just go and automate those away. So I guess yes is

    32:50

    the answer to that question, but the model itself isn't doing it; it's how you pass the context. I think

    32:55

    the interesting thing is where the model starts to not just take the

    33:02

    input, process it, and give me the output, but it's actually going and doing more internally. I mean, you know, there's kind of a little bit further

    33:08

    it needs to go there before it's really going to be very scary, but it'll get there and it will be very scary, I'm

    33:14

    sure. Yeah... Well, I'm not so worried about, like, individuals, but about entire

    33:19

    companies. I saw this LinkedIn post where somebody, I don't recall who, was talking about, um, you know, how OpenAI

    33:27

    probably has a list of, like, feature requests sorted by, like, value, and

    33:35

    you know they could snap their fingers and satisfy one of those and then suddenly 10 uh startups blink out of

    33:41

    existence because now OpenAI does this thing. Although I suppose AWS has had that same sort of capability for the

    33:48

    last few years and it hasn't been an impediment to innovation. Yeah. Yeah. I think you're in

    33:54

    that kind of uh um evolutionary expansion and at some point there's

    33:59

    going to be a mass extinction event and people will disappear, and I'm sure that's coming as well, right? You saw that with the dot-com boom:

    34:07

    everything went crazy and then... Right. So we live in a world where there's 10 trillionaires and

    34:12

    nobody else has anything and it's kind of dystopian. That sort of situation. Yeah. Yeah. That's... Yep.

    34:18

    That's the future. I think as long as we're the trillionaires, it's fine, right? Yeah. That's dark. That's dark. John,

    34:23

    you took it to a dark place. No, I don't think it's that bad. I don't think so. I think the, you know, like my

    34:29

    son's just graduated with a master's in physics, looking at a kind of data science type career. It's hard to get an

    34:35

    entry-level job, you know, as a new grad. New grads are struggling to find jobs now, but he's got to embrace the technology

    34:42

    and work with it. And I think that's the difference, right? So, you can do a lot of really cool things right now that you couldn't do two, three

    34:49

    years ago. Um, and so in any space, it's worth applying the technology and you'll do well, I

    34:54

    think. Why do you think it's harder for new grads to find entry-level positions than

    35:02

    it was, say, 10, 15 years ago? Um, or I guess I'm assuming the answer. Do

    35:09

    you think that's the case? Uh yes and no, right? It depends on where

    35:14

    you decided to do a degree. If you're in the US and you're doing a co-op or something, then you've got a possible

    35:20

    entry into your first job out of that, right? So that was a good tip, which my son didn't follow. Um, so,

    35:27

    yeah, I think people believe the hype that GenAI will

    35:33

    just replace a certain set of fairly you know those entry level jobs where you're

    35:39

    learning to do something. And certainly, like, I had a guy at a previous company who was a really brilliant coder, just

    35:45

    one of the best guys I worked with, and he was in my services team, and he pinged me and said, what should I learn? I'm going to have to

    35:51

    learn some generative AI stuff somehow. What should I go and learn? So, I pointed him in the right direction, and it's,

    35:57

    you know, he spent a big chunk of time in his career being the go-to guy

    36:03

    to write the code, and now I can go and write it with Cursor, and I do, right? And it's terrible, but yeah, I

    36:08

    don't need... it doesn't need to be perfect. It needs to work. And so, I think, yeah. They're

    36:14

    not that terrible though. I mean, the large language models are outrageously good coders.

    36:20

    The issue comes when you iterate, right? So I typically fire up a Streamlit because, you know,

    36:26

    reporting out of Jira is a nightmare. So you fire up a Streamlit app against it, and you say, well, it'd be nice if I

    36:33

    just... and you start tweaking, and there's a point where the tweak breaks the code and you have to start explaining to the

    36:39

    whatever you're using that it was getting it wrong and why it was getting it wrong. So good coding practice still applies. You want to break things up and

    36:45

    only change the bit, so you only break the bit, not the whole thing. And yeah. So yeah. Yeah. I think we're at sort of

    36:52

    like, large language models can get you from zero to 0.7 a lot, and then the 0.7 to 1.0 is hard,

    37:00

    because now not only do you have to write it but you also have to like figure out what the large language models did in the first place.

    37:06

    Exactly. And... or, yeah, overdid it, right? Sometimes you get, well, this is the best way to write this.

    37:12

    Yeah. We've actually juggled with that a lot. So the coding agent in Zerve... the

    37:20

    thing about Zerve is that it can look at your entire project space and see your file system and your database structure

    37:26

    and all the output you produced and the charts and the graphs and the variable values. All of that's in the

    37:31

    context. So it can see all of it. So if I want, pardon me, if I want to look at a new

    37:38

    data set uh you know I'm uploading a CSV file or I'm evaluating you know the last

    37:43

    six months of data in my Databricks setup, or whatever it is,

    37:48

    I can say, you know, I could just tell the agent, go out and have a look at the data and tell me what you think, and then, you

    37:54

    know, it can see its results and sort of iterate, right? So, um, that's kind of been the place where

    38:02

    we're a bit differentiated from, like, you know, the code that ChatGPT might write, or, you know, Claude Code or

    38:09

    Cursor or something like that, is that we're super focused on the data and incorporating that into the

    38:14

    coding process. Whereas a lot of the other large language models, like, if you use ChatGPT, you've got to, like, teach it, like,

    38:20

    okay I have a file and these are my variable names and you know and then your context window like rolls off of

    38:26

    that, and it forgets all that stuff, and then, you know, the iteration... I think you're right, the iteration piece

    38:32

    is the challenging part. You know, if the large language model can one-shot the problem, then you're golden.

    38:37

    But if it gets something wrong or makes up something that doesn't exist, then,

    38:43

    you know, it might be time to like scrap it and start again with a different prompt and see if you get lucky the second time.

    38:49

    You get to point nine. I've been at 0.9 with a really nice UI doing everything I wanted, and I said, I just want to have

    38:54

    a, you know, a drop-down filter here, and I kill the software. I have to throw it away and start again.

    39:00

    Yes. Exactly. Because I'm not using GitHub and I'm not using what I should to version it and so on. So,

    39:06

    yeah, the undo, those integrations. It is. Yeah. And I did

    39:12

    sign up and do a few of your tutorials, and I like the product, but I'm one of those guys, I need a

    39:18

    project to go and play with it properly. But if I have to read the manual, I don't think the software is, you

    39:24

    know... it's too hard, right? Yeah. Yeah. No, I'm totally with you. But yeah, I think the

    39:30

    large language models have opened things up for people that are technical and can sort of think programmatically.

    39:35

    Um, like, I'm a certified Java programmer, C programmer. I hate Python, but I have to, you know,

    39:41

    everything's in Python now. So why do you hate Python? I don't know. It's like, things

    39:47

    shouldn't be... you know, my alignment shouldn't matter, right? And one of the best bits of code I've seen

    39:53

    is um as a C programmer you write a program and there's this little program that looks like a train and when you run

    39:58

    it, it outputs choo choo. So, you know, and you can format it so the text looks

    40:04

    like a steam train. Uh, which you couldn't do in Python. So that's the main reason. I can't do a steam train.

    40:10

    No, no steam train programs. Yeah. Yeah. No, it's... I did Fortran in my very early university days, and,

    40:16

    you know, like the whole punch card thing. I'm not punch-card elderly, I'm not that old, but you still had to write Fortran in

    40:22

    that way. And I've just not got over that pain. Ah, I see. It's a PTSD type issue.

    40:28

    It is. And I just... like, indentation shouldn't matter. It shouldn't matter. Anyway, anyway, that's been...

    40:33

    All right. What's coming in the next five years, John? Um...

    40:39

    I am looking forward to the next five years, I have to say. And the models, they kind of... it feels like

    40:46

    they've plateaued a bit. And I know you say, okay, you know, OpenAI can go in, but it feels like they're wrapping it around the core

    40:51

    model. It's not inside the model itself. So, I'm interested to see what happens when the model's not just

    40:58

    passing through and giving me the answer, it's actually going off and doing other things. So the agents kind of get built into the model itself.

    41:05

    Yeah, I think there's an interesting... you know, I did my master's thesis

    41:11

    writing a neural network with a biology professor, and we were

    41:16

    interested. There was this paper about rat brains and if you teach a rat to go through a maze, if you then go and cut a

    41:22

    slice out of its brain, it doesn't do as well at learning the maze, which is a little bit cruel and unpleasant. We

    41:27

    didn't do the cutting bit, but what we were trying to do is say, can we simulate that in the software?

    41:34

    And this was a long time ago and the answer was no, we couldn't. Not even close. But I think now, you know, if I go get the Mistral model and start

    41:40

    hacking away at it, I can see how its language... you know, I bet that, like the HAL 9000, you pull the things out and its

    41:46

    speech degrades. I should be able to see how brain-damaging a model replicates

    41:51

    what's actually going on in my neural network inside. Right. Ah, so, like, can you take, you

    41:57

    know, Claude or whatever, and give it a lobotomy so that it... Yeah. A little bit of a shave,

    42:02

    right? I guess. Yeah. Yeah. Yeah. I don't know. Do you think that the current architecture of how these

    42:08

    large language models work is a dead end? I read an article saying,

    42:13

    you know, these things can do, you know, amazing feats of, uh, marketing

    42:18

    content and coding and stuff like that, but at the end of the day, the architecture is flawed and they'll never advance beyond a certain point.

    42:24

    Uh, that's a possibility, right? That this is a dead-end technology that gets us way further than we were, but

    42:30

    can't actually get us to... That's... and I think there's another leap, right? It's a stepping stone to the

    42:35

    next, you know. And I think the worrying thing for me is that, you know, there was a research article

    42:42

    where four or five people went and tried to figure out why a certain input to the model gave a certain answer and it took

    42:48

    them months to just figure out the path and what happened. So really understanding what's going on inside... you know, people say yes, you do,

    42:55

    but really not so much, I think. Right. So yeah, a fool's errand maybe.

    43:00

    Yeah. Yeah. And... but there's enough,

    43:06

    you know, I think my retirement plan is going to be to try and do a PhD where I start trying to replicate, you know,

    43:12

    you pull a bit out of one of the open source models and see how your results change over time.

    43:18

    Right. Just simulate induced brain damage in intact models.

    43:24

    Exactly. Yeah. Just slow some bit of the network down. Can you give it dementia?

    43:30

    That sort of thing. Exactly. Okay. Okay. Well, I've got to... my clock's running out. Before I get dementia, I've got to go and play with that. But

    43:37

    I don't want to hear it's already too late. I know, you could just say... I actually was trying to put that exact

    43:43

    joke together, but I couldn't make it work. You beat me to it. Yeah. Yeah. No, it's... yeah.

    43:50

    Yeah. I don't know. What do you think? What do you think it's going to be? I mean, you don't feel... you know,

    43:56

    I've seen a lot of DataRobot folks go off and do great things, you being one of them. So you're not worried

    44:01

    that OpenAI or somebody's going to come and just kill you? Like, NotebookLM

    44:07

    is doing things that genuinely... some of the research stuff it can do, you know,

    44:14

    is putting people out of a job. You know, there's no need to go and get six people to go and study stuff. I can do it in a

    44:19

    couple hours with NotebookLM now. Doesn't charge me. Doesn't cost me anything to do. Yeah. I mean, it's always about the

    44:26

    glue code, right? It's always about, like, the implementation and the orchestration and all that sort of stuff. So,

    44:33

    you know, that's kind of the moat that Zerve has, is that, you know, we're not just

    44:39

    a sort of, like, fancy API for one of the foundation models, and we do code in a special way. We also handle things

    44:45

    like, you know, cloud orchestration and resource management and version

    44:51

    control and collaboration and you know all those kinds of things that you really really need. But, you know, if

    44:57

    you've got VS Code and you've got ChatGPT, you can sort of, like... you know, you can sort of back into it,

    45:04

    you know what I mean? So, yeah. Yeah. Um... So, I don't think... I think that gets you where you need to

    45:10

    be. I do think you need what you do. I think you still need it, right? I mean, because at the end of the day, all

    45:16

    your data is sitting where it's been sitting for the last 10, 15 years. It's not moved somewhere else. It might be in S3 buckets more than it was 10 years

    45:23

    ago, but there's still databases that have been around forever. And you know, yeah,

    45:28

    but, you know, like, I suppose when the models and the agents, you know, get good enough

    45:34

    that you can tell them to do something and not have to worry about whether they did it or did it correctly, uh that

    45:42

    there will be some major sort of societal transformations for that sort of stuff. But I think we're a long

    45:48

    way from even them doing it correctly, let alone people just being able to sort of willingly trust that they can do it

    45:53

    correctly. Uh, you know, just like the Tesla self-driving stuff.

    45:59

    It's pretty amazing the way that Teslas can drive themselves around town. In fact, I think I read somewhere that when

    46:04

    you buy a Tesla in Austin that the car delivers itself to you uh you know

    46:10

    without a driver, which is pretty cool. I don't know if that's true or not. Maybe it's just an urban legend.

    46:16

    Uh, you know, we got a Tesla a few years ago, and I was totally

    46:22

    uncomfortable letting that thing drive. Like we'd come to a left turn and I'd be like, "Uh, yeah, no, give me control. I

    46:29

    need it." Uh, so I I'm not I'm not super worried, but I

    46:34

    am super eager to use all of these things. So, I definitely want to get to the place where I can trust my, you

    46:42

    know, my Tesla to make a left turn or I can trust my iPhone to remind me if I'm

    46:48

    if I've forgotten to, you know, get ready to go to this school event or whatever, you know, that sort of

    46:54

    thing. I'm looking forward to that. That will help me a lot. That... somebody who understands what I should be doing

    46:59

    and reminds me to actually do it. Uh, you know, that personal assistant is definitely a

    47:04

    Yeah, it could be amazing. But then, you know, at the same time, I think, um, that,

    47:11

    you know, like for example, people don't know how to get places today because of GPS, right? Nobody's ever had to learn

    47:17

    how to drive to somebody's house without, you know, getting turn-by-turn directions in their dashboard.

    47:24

    So, it's like, you know, the human brain is sort of atrophying in some

    47:29

    important ways. I think that as well. Yeah. And that was a valid concern, that the models are

    47:35

    trained on what's available to them, and it's spitting out new stuff, but the new stuff is really based on that. So

    47:42

    it's interesting to see, you know, the new drug developments that were attributed partly to, you know, LLMs; basically,

    47:50

    researchers were collaborating with large language models. Or, yeah, like, everything nowadays is a copy

    47:55

    of a copy of a copy, and maybe it degrades over time, I don't know. Uh, so I think there is a challenge there

    48:01

    for people to, like, continue to build and maintain their curiosity. And it definitely makes you more efficient,

    48:07

    though. I can do things I couldn't... like, I can do things I couldn't do two years ago. There's no question about that.

    48:13

    Yep. Well, John, this was a lot of fun. I appreciate you coming on and chatting with me. It's good to see

    48:18

    you. Next time I'm in Boston, we're going to have to uh swing by and grab a beer or something.

    48:24

    I'll look forward to it. I'll look forward to it. Yeah. I enjoyed it. Thank you. Yeah. All right. Thanks, John. Appreciate it. Have a good 2026.

    48:32

    Likewise. Bye. Yeah. Thanks. Bye.
