
Data Vibe Coding with AI: Prompting Strategies That Actually Work

November 11, 2025


How do you get the best results when coding with AI? Should you craft one big, complex prompt and hope for the best, or start small and refine as you go?

In this two-hour code-along, Greg Michaelson and Jean-Dominique Mercury from Zerve lead data builders through real-world prompting strategies and build live projects using Zerve’s agentic data science platform. There's lively Q&A, plenty of aha moments, and a front-row seat to how “vibe coding” actually feels when everything clicks.

What you’ll learn:

  • How to turn vague prompts into structured, agentic workflows

  • When (and how) to let AI drive vs. guide the process yourself

  • Real examples of data pipelines, apps, and insights built in Zerve

  • Why context and collaboration are the secret weapons of AI-native coding

They explored “vibe coding,” the art of shaping your AI prompts so that your code agent works with you instead of against you. They shared practical techniques for structuring prompts, steering outputs, and avoiding common dead ends. Along the way, they built something real with the audience in Zerve. Follow along in your own Zerve project, and see how different prompting styles play out in practice.

While they demoed inside Zerve, the prompting principles and vibe coding techniques apply to any AI coding tool or workflow.

  • 0:11

    Hello and welcome, everybody, to what is a highly anticipated Data Science

    0:17

    Festival sandbox session. I've been looking forward to this one since the

    0:23

    idea was first discussed, and I have to give a massive shout-out to Zerve

    0:29

    for being up for this super creative session. As soon as we started talking about it, I just knew

    0:35

    that we had to do it. And thank you all for being with us today to be part of it. In terms of the

    0:42

    session, there are two parts to it. There's some pre-prepared content from Greg and Jean-Dominique, obviously

    0:49

    very much focused on vibe coding within the data space. But then as we

    0:54

    go into the second half, both of them have very kindly been up for making

    0:59

    this a little bit interactive, so you as attendees can get hands-on. They'll

    1:04

    be there to support, answer questions on the fly, and, generally speaking,

    1:10

    just have some fun. So, very much looking forward to it. In terms of that, a couple of bits to call out.

    1:16

    Number one, the chat is open, so feel free to jump in, say hello, let us know

    1:21

    who you are and where you are in the world. It's always nice when we do these online sessions having people join from all over. And the other piece

    1:28

    just to call out is that the Q&A feature is also there, so that you can ask specific

    1:33

    questions within the Q&A as they come up as well. Now, if I could please ask Greg and Jean-

    1:40

    Dominique to turn on their cameras and microphones and come and join me on this virtual stage. So, how are you

    1:48

    both doing today? Good to see you, Greg. How are you doing? Doing great. Good morning. Amazing stuff. I have to ask, Greg, whereabouts

    1:54

    are you based, and what time is it where you're joining us from? I'm calling in from Nevada. It's 6:00

    2:00

    a.m. My man, this is what I love to hear. That is some true dedication, so the community love should be

    2:07

    embraced. There's got to be a mention in the chat at least for joining us so early. What

    2:13

    about yourself, Jean-Dominique? I know you mentioned you're in Paris, a little bit closer to me in the UK, but how's your day going?

    2:20

    Hi. Very good, thank you. Yes, I have only a one-hour

    2:27

    difference, so nothing much. It's the middle of the afternoon, so easy. You're taking it easy, mate. While you guys are

    2:34

    here, I am just going to jump into the chat and call out some people. It's always nice to say hello to a few

    2:40

    people in the chat. So, first of all, John, hello from Leeds, UK. Kieran, hello from Dublin. Tamana from

    2:47

    Kent. We have another from London. Ipswich. London. The UK is popular here. London. Allison, Philadelphia in the

    2:53

    good old US of A. We love that. Aaron, John, David. David from Oxford. Beautiful part of the world, Oxford. We

    2:59

    love that. Another David straight after. Oh man, this list is going on, so I'm going to have my work cut out, I

    3:06

    think, keeping on top of the chat as the day goes on. And just a couple of bits to call out, a little bit of life

    3:12

    admin. And the first thing I'm going to call out is the competition. So, number one, props to the Zerve team for

    3:18

    being so creative and being up for this whole vibe coding session. Second part: there is a competition. In terms of

    3:25

    that, I'm actually wearing them. These aren't the glasses that you will be getting; they are brand new,

    3:30

    trust me. Zerve is giving away a pair of Ray-Ban Meta glasses, and that

    3:36

    is tag-teamed with a certificate that everyone who attends the full session will be given. So

    3:42

    basically, if you attend the full session you will receive a Vibe Coding 101 certificate from the team at Zerve. If

    3:50

    you then post that on socials and tag Zerve and the Data Science Festival, you

    3:55

    qualify to enter the competition. So that is definitely not to be sniffed at. What I'm going to do now,

    4:02

    Greg and Jean-Dominique, is stop sharing my screen, and then, if it's

    4:07

    okay, I'll hand over to you, Greg. So, the running order for everyone at home: we'll be kicking off with Greg.

    4:14

    He'll be tag-teaming with Jean-Dominique throughout the session with some pre-prepared content, and then,

    4:20

    after 45 minutes or so, I'll jump back in and we'll be able to jump into the

    4:25

    Q&A and do some hands-on vibe coding, hopefully with your own data if you have it. So,

    4:33

    if you're ready, Greg, I'll maybe get you to share your screen and go for it.

    4:38

    Oh, Greg's in the chat giving us some extra credits as well. I hope everyone's watching the chat and claiming all of

    4:43

    these credits. So, I'll be first on. I hope they're not limited, Greg. No, no, no. Anyone that clicks on

    4:50

    the link that I just put in the chat will get 25 extra credits that are good for today, so you can do pretty much anything you

    4:56

    want without running into any limits. That's what I love. A little heart emoji on the screen. I want to

    5:02

    see some more heart emojis for Greg and the free credits from Zerve. That's what we love to see. They're

    5:08

    coming now. There are people out there. We love it. In terms of that, Greg, I'll get you to share your screen.

    5:14

    You have an audience, and it's over to you. All right. Excellent. Let's see. So,

    5:21

    we are here today to talk a bit about vibe coding, which is a term that I

    5:26

    actually hate. I think over the coming months people will

    5:33

    start sort of scorning people that use that phrase, so maybe agentic coding is a

    5:41

    better name for it. This is basically the idea of using a

    5:46

    development environment for writing code that is specifically connected to

    5:53

    some agent that utilizes large language models to make the process easier, whether that might be debugging

    5:59

    code, writing code completely from scratch, or fleshing out ideas.

    6:06

    That's the kind of thing we're talking about today, and we're going to go through it, me and my fellow

    6:12

    panelist Jean-Dominique. We're going to go through some best practices for doing that, and we'll be using Zerve as

    6:21

    that agentic coding environment, talking through what works and what

    6:26

    doesn't, showing some live examples of things that work and some things

    6:31

    that don't work, actually. So, that link in the chat will get everybody, like I said, some free

    6:37

    credits. Our free plan does give you some free credits, but this is just a mega bonus so that you

    6:43

    don't have to worry about the requests that you're sending into the agent if you do write some code. So, let me pop

    6:49

    up my screen share here, and we're just going to kind of go along,

    6:55

    minimizing stuff. All right, so this is our homepage here, and

    7:01

    I'm just going to do a quick little demo and show a couple of things so that everyone's sort of aware of

    7:07

    what the environment looks like and can get their hands dirty. And then

    7:13

    I'm going to kick it over to Jean-Dominique, and he is going to show some

    7:19

    examples of things that work, some projects, and go through some tips and tricks in terms of

    7:26

    good ways to utilize these large language models as you're doing this agentic coding. And then he is going

    7:33

    to show a project and do some stuff from scratch. And then I'm going to

    7:39

    show some things that don't work. So, that's kind of the structure of what we're going to talk about today. Be sure to put all your questions in the

    7:45

    Q&A, because we definitely want to go through those. The more interactive this is, the more fun it's going to be. So,

    7:51

    feel free to ask any question, and don't pull any punches. This should be a lot of fun.

    7:58

    So, with that, let me ask Jean-Dominique to come on and do a quick intro, tell us a little bit about himself

    8:04

    and his background, and then we'll dive into a quick demo. Yeah. Thank you, Greg. Hi, everyone.

    8:12

    So, I'm Jean-Dominique. I'm a senior data scientist here at Zerve, for less than

    8:18

    two months, so I'm quite new. But I worked several years, eight years,

    8:24

    in AI and machine learning, mainly in startups in the cybersecurity

    8:31

    field. I also worked for

    8:36

    intelligence services and police services, to help them do their

    8:43

    investigations with AI and to analyze a lot of data, and

    8:50

    different kinds of data, with AI. And I also have experience in

    8:57

    consulting firms, in more traditional fields like banking, insurance, and

    9:04

    telco. That's so funny. One of the main reasons that I'm a data scientist

    9:10

    today, Jean-Dominique, I don't know if you know this, is that show Numbers that used to be on television in the US.

    9:17

    Maybe you don't know it. I don't know if it played in France, but it's basically about this mathematician guy

    9:22

    who solves crime with math. And I was like, "Yes, I can do that." And then, of

    9:27

    course, I've never done anything like that. But yeah, that was my motivation. So, if anybody's ever seen that show, definitely shout it out in the

    9:34

    chat. All right, getting my screen sharing going here. So, here

    9:39

    we go. Now, let me point out a few things. Later in the session, we're going to have an opportunity

    9:46

    for everybody to sort of have a play and use some of their own data. If you don't have data but you do

    9:52

    want to use some sample data, there's a bunch of prepackaged data sets in here. Titanic is

    9:57

    always a good one. This is about the survivors on the Titanic, and it's a data set designed to predict who is

    10:03

    likely to survive based on the characteristics of each passenger on the Titanic. So, that's a fun data set.
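As a sketch of that prediction task: a minimal survival model, assuming the standard Titanic columns (`Pclass`, `Sex`, `Age`, `Survived`); the tiny DataFrame below is made-up stand-in data, not the packaged data set.

```python
# Hypothetical sketch of the Titanic task: predict Survived from
# passenger characteristics. The rows below are invented stand-ins;
# in Zerve you would use the prepackaged Titanic data set instead.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "Pclass":   [1, 3, 2, 3, 1, 2, 3, 1],
    "Sex":      ["female", "male", "female", "male",
                 "male", "female", "male", "female"],
    "Age":      [29, 25, 40, 19, 58, 33, 22, 45],
    "Survived": [1, 0, 1, 0, 0, 1, 0, 1],
})

# One-hot encode the categorical column, then fit a simple classifier.
X = pd.get_dummies(df[["Pclass", "Sex", "Age"]], drop_first=True)
y = df["Survived"]
model = LogisticRegression().fit(X, y)
print(model.score(X, y))  # training accuracy on the toy rows
```

On the real data you would hold out a test split rather than score on the training rows; this sketch only shows the shape of the workflow.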

    10:10

    There are lots of cool things in here that you can play with. And if you just click on that data set and hit "add to canvas," it'll open up a new canvas

    10:18

    that you can play with that has that data set prepackaged into it. So,

    10:23

    You can click "create canvas" here to just jump into a blank project, or

    10:28

    you can just start working with the agent directly. Here I'm going to say: "I'm on a live webinar and hundreds of

    10:36

    people are watching. Make me a cool demo

    10:43

    and use made-up data." And

    10:49

    so that's going to get us into a new canvas, and that'll give me sort of

    10:54

    a platform for showing you where things live and what everything looks like in the app. So this is the

    11:02

    request that I just entered. The agent is thinking, and the way it works is it's going to come up with a plan, present me

    11:08

    that plan, and ask me if it's okay or if I want to make any changes. If I like it, I'm just going to click approve, and

    11:14

    then the agent will start working. The nice thing about the agent in Zerve, and the big difference between using it and other tools... so,

    11:20

    here's the plan: it's going to make some sales data across

    11:26

    multiple regions. It's going to mock up some fake data, and then some key metrics

    11:31

    and revenue trends and product analysis and some cool stuff. So, I'll click approve and we'll just watch it write

    11:37

    some code. The key difference here between this and using something like ChatGPT or

    11:43

    Claude is that this agent has full context into

    11:49

    your entire project. So, if you had connected a database, it would be able to see the structure of that database and

    11:54

    use it as part of the context. If you had code already written, it would be able to read that code.

    12:01

    If you had analysis done, it would be able to look at the visualizations, the charts, and the output. It can see

    12:07

    all of the variables that have been created and what their values are. So the agent can see all the

    12:13

    different bits of everything that you've done in the project. Then it can write code, and it can

    12:18

    execute that code too. This code has just been run, and it's going to continue working and create a

    12:25

    series of plots, in code blocks that run one after the other. Now, this

    12:32

    is different from what you might see in a Jupyter notebook, where everything

    12:40

    sort of runs in memory, and every bit of code that you run

    12:49

    changes all the variables in memory. In Zerve, each code block is roughly equivalent to a cell in a notebook, but

    12:56

    each code block runs independently. So if I click into full-screen mode here, I can see the code in the

    13:03

    center, and I can see on the left side all of the variables that were feeding into this block. In

    13:10

    this case, this is the first block, so there's nothing feeding into it; we're starting with an empty variable space. And on the right side, once

    13:18

    the block has run, I can see all the variables that got created during the execution of this code block.
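Conceptually, that block-to-block hand-off works like each block serializing its outputs to disk, with the next block loading only what it was wired to receive. A minimal sketch of the idea (Zerve serializes DataFrames as Parquet, as discussed later in the session; pickle is used here just to keep the sketch dependency-free, and the file name is invented):

```python
import pandas as pd

# "Block 1": build a DataFrame and cache it to disk, the way each
# Zerve block serializes and stores its output variables.
sales = pd.DataFrame({"region": ["EU", "US", "APAC"],
                      "revenue": [120, 340, 90]})
sales.to_pickle("block1_sales.pkl")  # stand-in for Parquet-on-S3

# "Block 2": starts with an empty variable space and loads only
# the upstream variables fed into it.
upstream = pd.read_pickle("block1_sales.pkl")
print(upstream["revenue"].sum())  # 550
```

The point is the isolation: block 2 never shares in-memory state with block 1, which is what lets blocks rerun independently.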

    13:24

    What's happened is, as this code runs (in this case we're generating some fake data),

    13:31

    those variables are cached, serialized, and stored on disk, and then

    13:38

    passed downstream. So the variables that are created here exist, in

    13:43

    this case, in our cloud; this is running in AWS, so these variables are stored out on S3, and

    13:50

    they're accessible to all the other blocks in the project, and even accessible externally

    13:55

    in the event that you want to make calls from external APIs and things like that. We won't look at that today, but that's completely

    14:02

    possible. Each of those variables is fed downstream. So if I were to click on

    14:07

    this next block here and go into full-screen mode, then I could see all those variables that have been created. I can

    14:13

    preview them. So, you know, maybe I can look at this variable, and that's my list of fake products that have been

    14:20

    created. And so then I can look down here and see, okay, what's

    14:25

    going on here? Where's my sales data? So I can preview those variables, incoming and outgoing, and so

    14:31

    on. Okay. A couple of other quick things before I turn it over to Jean-Dominique.

    14:36

    You've got your left nav over here, and this top one is a kind of list

    14:42

    of all the blocks that have been created. You can see the agent auto-names them, so there's some sense of what each

    14:48

    block is doing. Oh, this is interesting: it just wrote some code that has an error in it. Customer

    14:55

    segment: it referenced a variable that doesn't exist. So it's going to

    15:00

    come back to that and try to fix the mistake. It's running iteratively here. It'll see that there's

    15:06

    a problem and try to fix it, and with any luck, we'll see in just a minute that it's going to come

    15:14

    back and try to address that problem. Oh, it's made some changes and it's rerunning the code. So

    15:20

    we'll see here if it can fix that. But while it's doing that, let me go through the rest of these menu items

    15:27

    real quick. This guy here is the file system. In the event that you

    15:32

    drag in a data set, or import one from that example-data-set screen, you'll be able to see your file system

    15:38

    here, and you can add to it and download and all that kind of stuff.

    15:43

    This guy here is your requirements. You guys will probably just see one

    15:50

    of these, but either way, this is where you can add packages if you need to. So if I wanted to

    15:57

    add, say, XGBoost, if I want to train an XGBoost model, then I can just add that guy and click build. Ah,

    16:04

    there, and we've fixed the problem. The agent has been hard at work while

    16:09

    I've been talking, and it repaired the heat map that it was trying to draw. So once you add those packages

    16:15

    here as a line item and click build, it'll rebuild that Docker image and then you'll have access to

    16:21

    all of that. Global imports is code that runs before everything else. So if you're

    16:27

    wanting to use pandas, say, then you might put "import pandas as pd" here,

    16:34

    and then this code will execute before each block runs. Keep in mind that each block is independent, so if I

    16:41

    import pandas within this block, it's only accessible during

    16:46

    the execution of this block. Global imports is a way to

    16:52

    execute code up front: if you want to create global variables, or import packages that are accessible

    16:58

    throughout the entire canvas and that sort of thing, then that is how you would do that. And then the last two we

    17:04

    probably won't spend much time on today. Assets are for things like data connections and secret management; if you're using API

    17:11

    keys, for example, you'd put those in as assets. I'll demonstrate that a little later, maybe. And then

    17:18

    source control: if you are continuing on in Zerve, you can connect it to your GitHub and sync it with a

    17:25

    repository and so on. Then each block would correspond with a file in your GitHub file system, so that you

    17:31

    could use source control for working with your

    17:37

    code. A couple of other important buttons: here's your share button if you do want to share with other colleagues.

    17:43

    Keep in mind that Zerve is free to use, and our free tier gives you five free credits per month. You can

    17:53

    use that share button to invite folks. If you do invite people and they end up becoming Zerve users, you get

    17:59

    two bonus credits, and even more bonus credits if they end up upgrading and joining our pro plan and

    18:05

    things like that. So there are lots of ways to get extra credits. This is also, by the way, the place where you would post to the public Zerve gallery,

    18:12

    and you get bonus credits for doing that as well. So if you do make something cool and you want to share it with the world, you can do that

    18:19

    here using that share button. Anyway, we won't really go through all of the cool stuff that

    18:26

    Zerve created in this little demo. We did create some fake data with total revenue of $32 million,

    18:33

    which is always nice, and we did create some high-value customers and so on. So it's just plugging away,

    18:41

    working on this problem and creating some cool visualizations and things like that. So,

    18:50

    with that, I will stop sharing my screen and I'm going to kick it over to

    18:58

    Jean-Dominique to talk a bit about a project that he's going to make from scratch, and give

    19:04

    us some tips and tricks about working with a coding agent, whether it's Zerve or Claude Code or

    19:10

    Cursor or any of these other kinds of IDEs. What are some best practices,

    19:16

    some things to avoid, some things to try, in order to get the best possible results that

    19:23

    you conceivably can? So, let me kick it over to Jean-Dominique and give him a chance to

    19:29

    talk to you. Thank you, Greg. Thank you. Yeah, so

    19:34

    let me present you a little use case.

    19:42

    Okay, let me share my screen first, maybe.

    19:48

    So, okay, let's go.

    19:54

    So yeah, I want to demonstrate some tips on using the

    20:03

    agent for building a complete data pipeline.

    20:09

    I will begin here with creating a new canvas, and we'll be

    20:17

    working on a loan default prediction for this example.

    20:25

    So here, as Greg mentioned, you can load files. It could be any files.

    20:32

    I saw in the Q&A that there is a question about which kinds of data we can use. Of

    20:39

    course, we can use textual data and databases, but you

    20:48

    can also use things like sound, images, and so on. Here I

    20:55

    upload two files: my data set, so

    21:00

    it's basically loan data,

    21:05

    and also the description. And here is the strategy for

    21:13

    working with the agent. The idea is that it's a collaborative work between you and the agent. I can also

    21:21

    work with my team; for example, I can invite Greg onto my canvas,

    21:30

    if Greg wants to follow my work.

    21:39

    And the idea here is that the agent is preconfigured to act like a

    21:47

    data scientist, so you don't have to say "okay, you're a data scientist," because it

    21:53

    already is. By default it will also generate Python code in

    22:00

    the canvas, but you could also ask it to generate R code, or even markdown

    22:08

    if you want to make it explain (hi, Greg)

    22:16

    or describe the data pipeline. You also have query blocks to

    22:24

    write SQL, and also GenAI blocks, which allow you to use LLMs

    22:31

    and do some things with LLMs. So you have many choices, but by

    22:38

    default it's Python. So my first tip, maybe, would be

    22:47

    (it's a generic tip, but) that you have to be explicit about the context

    22:54

    and the goal you want the agent to achieve. What is the context? The context is mainly the

    23:01

    data. What is the data? You have to describe your data as much as possible.

    23:07

    Here, I just uploaded the description of the data, so my agent

    23:14

    will read this file and know how to interpret the data. Data is very important when you're doing data

    23:21

    science, obviously. And the goal: you have to be clear about what you want to

    23:28

    achieve with your data pipeline.

    23:35

    So I will now write a prompt I've already prepared.

    23:44

    I'm telling the agent: okay, I

    23:49

    want to analyze a data set with the clients' information about loans.

    23:56

    The fields are described in description.tsv,

    24:02

    and I want first to generate an exploratory data

    24:08

    analysis.

    24:14

    Okay: an exploratory data analysis with several steps. Of course, I could

    24:21

    ask for the whole data pipeline, but in fact the agent is working

    24:29

    in an iterative way, as Greg mentioned. At each step,

    24:36

    each independent block of code takes as input the output of the

    24:42

    previous block. So if you want to work effectively with the agent, my

    24:47

    advice would be to break down each part of the data

    24:55

    pipeline, do it iteratively, and start, for example, with the

    25:01

    exploratory work. Then review this exploratory work to see if your agent

    25:09

    has been doing well, or whether you have some problems with the data

    25:15

    interpretation and so on. And then, after reviewing and maybe correcting it,

    25:22

    do the other blocks, like building models and selecting

    25:28

    models and so on. Of course, you can also do it all at once, but it will be

    25:34

    heavier to review all the code, and you will have to rerun it all afterwards. So,

    25:40

    let's launch this prompt.
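That context-plus-goal structure can be captured in a reusable template. The sketch below is purely illustrative: the field layout and file names are examples, not a Zerve feature.

```python
# Illustrative template for an explicit, scoped agent prompt:
# state the context (the data), the goal, and the bounded steps.
# File names here are example placeholders.
def build_prompt(data_file: str, description_file: str) -> str:
    return (
        f"Context: {data_file} contains clients' loan records; "
        f"the fields are described in {description_file}.\n"
        "Goal: exploratory data analysis only, no modeling yet.\n"
        "Steps: load and inspect the data, clean it, compute basic "
        "statistics, run a correlation analysis, and summarize."
    )

prompt = build_prompt("loan_data.csv", "description.tsv")
print(prompt)
```

Scoping the request to EDA first, as advised above, keeps each round of agent output small enough to review before the modeling steps.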

    25:47

    Maybe while we're waiting for the agent to do its stuff, I'll look at a question. John asks: in your

    25:53

    initial prompt, did you have to specify that you want to work in Python, or is the preferred language set in some

    25:58

    settings file somewhere? You don't have to specify, though you certainly can.

    26:03

    You could specify working in Python, R, or SQL here. The agent will default

    26:09

    to Python if you're not specific. The other thing that I wanted to point out before I kick it back to Jean-Dominique

    26:15

    is that you can use those languages interchangeably. If you'll recall, I said that each block serializes and

    26:21

    stores its results. So, for example, if you build a data set, if you're working with a pandas DataFrame, say, that's

    26:26

    going to be serialized as a Parquet file, which R knows how to interact with. So if you dragged in an R block and

    26:33

    connected it to a Python block, you'd

    26:39

    be able to interact with those pandas or Python data types as if they were R data types, and likewise for SQL. So

    26:46

    you could SQL-query a pandas DataFrame, or treat a pandas DataFrame as if it were an R tibble or

    26:53

    something like that. So language interoperability is something you get for free because of the way

    26:59

    the architecture works. Sorry, go ahead, Jean-Dominique. Thank you. Okay. So, the

    27:05

    agent came up with a plan: load and inspect the data, then do

    27:12

    some data cleaning, basic statistics, and correlation analysis,

    27:17

    and then a summary: create a block that summarizes

    27:23

    the entire process. So, let's approve the plan.
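In plain pandas, the approved plan (load, clean, basic statistics, correlation, summary) looks roughly like this; the synthetic frame stands in for the uploaded loan files, and the column names are assumptions.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the uploaded loan data (column names assumed).
rng = np.random.default_rng(0)
loans = pd.DataFrame({
    "loan_amount":   rng.integers(1_000, 50_000, 200),
    "n_payments":    rng.integers(12, 72, 200),
    "interest_rate": rng.uniform(2.0, 15.0, 200),
    "defaulted":     rng.integers(0, 2, 200),  # target variable
})

# Clean: drop duplicates and rows with missing values.
loans = loans.drop_duplicates().dropna()

# Basic statistics.
stats = loans.describe()

# Correlation analysis against the target.
corr = loans.corr(numeric_only=True)["defaulted"].sort_values()
print(stats.loc["mean"])
print(corr)
```

Reviewing output like this before asking the agent for modeling blocks is exactly the iterate-then-extend loop described above.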

    27:29

    Okay. And yeah, Greg, if you want to answer another question, go ahead.

    27:35

    Yeah, great. Okay, cool. So, let's see. Another question that I liked:

    27:40

    if we have some of our own data linked to the Zerve platform, and it's rather large, for example millions of

    27:47

    rows, will the AI assistant actively choose libraries that are set up to handle this data, for example Polars,

    27:53

    or will it always default to pandas if it is a tabular data set? I guess

    27:59

    it depends. The agent can see the data sets, and

    28:04

    we haven't instructed it to act in any particular way in the back end. So

    28:09

    if you had, say, a 2-gigabyte data set or something like that, then you'd probably want to include something

    28:15

    about that in your prompt and say, hey, this is a really big data set,

    28:21

    choose packages and libraries that are designed to work with that sort of data. So you just want to be explicit

    28:28

    in your prompt when you have twists and turns or unusual things happening with your data.
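One concrete way to follow that advice when the data really is large is to ask for (or write) chunked processing instead of a single `read_csv`. Polars' lazy API is one option; the sketch below uses pandas' `chunksize` since it needs no extra dependency, and the file name is invented.

```python
import pandas as pd

# Create a small CSV standing in for a "millions of rows" file.
pd.DataFrame({"amount": range(10_000)}).to_csv("big.csv", index=False)

# Stream the file in chunks instead of loading it all into memory,
# aggregating as we go: a standard large-data pattern in pandas.
total = 0
for chunk in pd.read_csv("big.csv", chunksize=2_500):
    total += chunk["amount"].sum()
print(total)  # 0 + 1 + ... + 9999 = 49995000
```

Mentioning a constraint like "the file is 2 GB, process it in chunks" in the prompt steers the agent toward this kind of code rather than a naive full load.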

    28:35

    Yeah. Thank you. So here, okay, the agent is building the workflow.

    28:44

    Let's just have a look at this one.

    28:49

    So here we have the description of our data.

    28:57

    We can see, in fact, which variables we have: the loan amount,

    29:05

    of course, the number of payments of the loan, the interest rate. With all of these data

    29:13

    we'll do the exploratory work, and also

    29:21

    we have the loan status, which will be our target variable,

    29:28

    to say whether the loan has defaulted or not.

    29:34

    But just for the first step, the first prompt, we will just do

    29:42

    some basic correlation, and just validate that the agent

    29:47

    is doing well and interpreting the data well.

    29:55

Yeah, let's see if there are other questions while the agent works.

30:03

Okay, let's see. We've got a question here: which actions will incur

30:08

credit use? What would five credits practically allow you to achieve? So

30:14

credits are consumed both by compute, because this is running in AWS (it

30:21

could run in any cloud; it's designed to be self-hosted, but our SaaS environment happens to be in AWS), so

30:27

compute consumes credits, and utilizing the agent also consumes credits.

30:33

I typically see one big, beefy request, like the request that Jean-Do

30:40

made here, which included doing EDA, reading in data,

30:45

analyzing that data, maybe training models; that typically consumes one credit or so for me. But

30:53

it depends on the size of your data and the complexity of your canvas, because the context window

31:00

makes LLM calls more or less expensive. So

31:06

if your canvas is super complex, then your context is going to be bigger, and the agent calls will consume

31:12

more credits. So the answer is that it sort of depends on what exactly

31:18

you're doing, but it's compute and agent calls that consume those credits,

31:23

and also whether you're using GPUs versus CPUs. You can control what type of compute you use for any

31:31

individual block, and so if I change compute from, say, Lambdas to GPUs, then

31:38

I'm going to be using more compute and I'll consume credits at a faster rate. But you have really fine-grained

31:46

control over what type of compute you're using on a block-by-block basis.

31:51

Yeah. Thank you, Greg. In fact, for each block you can define

31:57

your compute settings, and it will of course impact your credits.

    32:04

Yeah, thank you. Thank you, Greg. I see we have a lot of questions, so we

32:09

are trying to answer them

32:17

as best we can. Okay. So here's how it's

32:27

going. We're on the correlation analysis.

32:33

The agent is still doing some work. We can see that it has

32:41

gone through the data types of each field, along with the description

32:50

of each. It has also

32:58

checked for null values and duplicates.
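Those checks map to a few one-liners; a hypothetical sketch with toy data standing in for the loan file (assumes pandas is available):

```python
# Sketch of the checks the agent is running here: data types, null counts,
# and duplicate rows. Toy data stands in for the real loan file.
import pandas as pd

df = pd.DataFrame({
    "loan_amnt": [1000, 2000, 2000, None],
    "loan_status": ["paid", "default", "default", "paid"],
})

print(df.dtypes)              # data type of each field
print(df.isna().sum())        # null values per column
print(df.duplicated().sum())  # number of fully duplicated rows
```

Running these yourself is also a cheap way to cross-check the agent's EDA summary against the raw data.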

    33:04

So maybe, hey, Jean-Do, one question I have.

33:10

Yeah. I noticed that you put that loan stats new description file in to

33:16

describe to the agent what the columns mean. Yeah. The agent can also look at the CSV file and infer

33:23

from the column headings and so on. Do you find it's better to tell it explicitly

33:29

what the columns are, or let it guess and then correct it if it guesses wrong?

    33:34

Yeah. Of course, you can do both. I came to this

33:40

idea over time. At first I only put the CSV file in and just let the agent do the

33:46

work. But then you have to manually check

33:54

the agent's interpretation yourself. And I found out, maybe it

34:02

was on another data set, that the agent missed

34:07

the correct type of a variable: it looked like an integer, but

34:14

in fact it was a categorical variable, and the agent treated it as a

34:19

numerical variable, which led to a mistake. So I'd prefer to

34:26

explicitly describe each field when possible, because

34:34

for me it's a really important piece of context

34:40

in the prompt. You have to do it if you want

34:46

to be really effective working with the agent. So I'd advise you to

34:51

do it, or you can also type it in the prompt, of course. But if

34:58

I removed this file, it would still work in a certain way, but we'd

35:04

have to be really careful about the output. So you could also probably give it a

35:10

link to a data dictionary or some internal document that contained the info about

35:16

your data as well. So yeah, I like the idea that the agent will guess and try to

35:22

do what it can based on the information that you give it, but to the extent that you do have the

35:27

information, it's probably a best practice to give it to the agent so that it can behave without

35:32

guessing. Yeah. And it's really helpful when you're working on

35:38

a business case where you're not a business expert. In fact,

35:45

when you're a data scientist it often happens that you have a data

35:50

set but you're not a business expert. Maybe you have some fields you really don't know anything about, and

35:58

the agent may help bring an interpretation,

36:06

hopefully the right one. It happened to me with a data

36:13

set in the construction sector. I really wasn't familiar with that

36:20

business, and the agent proposed some interpretations

36:27

that were quite right, and it helped get the work done

36:33

quickly. Okay. So here

36:40

we're in the final stage of the workflow construction,

36:48

and we'll add a

36:53

markdown block to explain the whole process, to document your

37:00

process, in fact. Sorry, let me refresh my page.

    37:08

    Oops. Yeah, it has been done automatically, I think.

    37:18

    Oh, browser is going crazy, man. Yeah.

    37:24

While you're getting that pulled back up, let me do another question. Let's see here. Can Zerve

37:31

handle unstructured data from, for example, images that need both OCR and OMR to interpret them? Would it allow

37:38

you to use drawing in your prompts, or interactive graphics? So there's

37:45

a lot in that question. Zerve is a coding environment. So anything that you could do in a Jupyter notebook

37:51

or VS Code or something like that, you could do in Zerve. It runs

37:57

serverlessly, so when you execute these blocks, it's spinning up compute on the back end to execute your code, and

38:05

then that compute shuts down and goes away. If you do want to do something like handle live video or do

38:13

something interactive, then you have to go down a slightly different route using persistent executors and so on.

38:19

That's a bit more complicated. But all the things that you mentioned are absolutely possible, and you do them in basically the same way that

38:26

you would in any other coding environment.

    38:36

Jean-Do, do you have any other tips or tricks for working with the agent? I know folks are interested in

38:43

the details of this loan project, but the idea here is to communicate

38:50

some good tips for working with large language models, independent of the particular project. Any other last

38:56

tips before we jump in? Yeah, just let me check. My

39:02

last tip would be to work with the agent the way you work

39:08

with a human. So review your code and then iterate on it. Also ask

39:15

the agent to correct itself

39:22

if necessary. And really don't place blind trust in the

39:31

agent's work. It's really helpful for accelerating your work,

39:38

but you have to understand what it has

39:43

done. And yeah, I think that's it.

    39:50

Okay. Awesome. Cool. Before we open it up for a broader Q&A, I know we've answered a few

39:56

questions already, but there's lots in the Q&A, so feel free to keep those coming. I wanted to show an

40:02

example. If you'd stop sharing, Jean-Do. Yeah. I wanted to show an example of

40:08

something that didn't work, as a way to illustrate,

40:15

let me share my screen, as a way to illustrate

40:23

a best practice in terms of things not to do. So in

40:29

this case I asked the agent to use something related to structured

40:34

responses in an API call to OpenAI. I don't know if folks are aware of

40:39

this. I was made aware of it maybe six or eight months ago,

40:45

maybe that timeline's wrong, but OpenAI has this feature, and lots of other

40:50

large language model providers do too, where you can request a structured response. You can provide basically

40:58

a schema for the response that you want. In this case, the schema

41:03

we're using, and I'll go into what my specific request was, is

41:08

basically a name and an age. So whatever prompt you pass to the OpenAI API, you give

41:17

it the schema and its response will be in that format. This is tremendously useful if you want the model to

41:24

respond in a particular way without, say, adding a description up front or

41:30

little friendly messages along the way. You can specify this structured output, and it's a very simple way to tell the

41:38

model that you want its answer in a very structured form, so the

41:43

response will always come back in that format. For example,

41:49

here is a sample response that comes back from the API, and we'll dig into the code here in a minute. It's

41:55

a name and an age. And this is the full response that comes back from the API request that I made. So this is for when

42:01

you're sending prompts

42:06

directly into the API and you want your responses back in a particular format.
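For readers who haven't seen structured outputs, here is a small sketch of what such a request body looks like. It is only constructed, never sent (so no API key is involved); the field layout follows OpenAI's JSON-schema response format at the time of writing, and the model name and prompt are invented:

```python
# Sketch of an OpenAI structured-output request body. We only build the
# payload here (nothing is sent), so the shape is illustrative.
import json

person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
    "additionalProperties": False,
}

request_body = {
    "model": "gpt-4o",  # any schema-capable model
    "messages": [{"role": "user", "content": "Invent a person."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "person", "strict": True, "schema": person_schema},
    },
}

# With this payload, the reply content is constrained to parse as a name
# and an age, exactly the shape shown in the demo.
print(json.dumps(request_body, indent=2))
```

Note that the schema travels with every request; that detail matters for the "register it once" story that follows.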

    42:12

Well, a couple of things to point out. One is that my API key is here. We

42:17

have a secret manager built into Zerve so that you can use your

42:23

keys without having to hard-code them or do anything crazy like that. In this case, my API key

42:30

is an asset. You can just create an asset and make it a secret and so on, but we don't really

42:35

have time to go into those details; that's in our docs. So that's here. I wanted to point that out because that's one of the

42:42

things that our asset library is really good for, because these are usable and sharable and so on. But what

42:48

I specifically asked it to do was to write some Python code using the

42:54

OpenAI API to register a JSON schema once (this is the schema for my

43:00

structured response) and then get back a schema ID that's reusable. The idea is that

43:07

instead of having to pass this schema to OpenAI over and over again, I want to

43:14

be able to register it with OpenAI on the back end, get an ID back, and then be able to reference that ID

43:21

in future requests so that I can get that structured output without having to

43:27

resubmit the schema every time. That sounds pretty cool, right? It's not possible currently. It doesn't exist.

43:34

People have talked about it, there are rumors of it, and maybe that'll be a

43:40

future capability that OpenAI deploys, but currently it's not possible to do that. And yet I'm going to ask

43:48

the agent to do it. A lot of the time when you're working with an agent, you're going to be asking it to do stuff

43:54

that may not be possible, or may not be feasible in some way. And I think

44:00

this is fascinating. I didn't edit a single line of code here, so this is exactly what the agent came back with. I

44:05

asked it to do this, and it created a two-step plan. The first step is to generate the structured output. So

44:12

it comes up with this structured output. I didn't specify what the output would be, so it came up with this name

44:18

and age thing. Then it generates a prompt referencing the schema ID, so it's

44:25

doing what it understands I asked it to do, and then saves that

44:31

schema ID to use later. And once that has happened, it takes

44:38

that schema ID and makes another API call using it downstream

44:44

to prove that it worked. Okay, so that was its plan. And I said: I accept this plan. It's great.

44:51

Wonderful. And it finished. It comes back to me and says: okay, we did it. Done. Mission accomplished.

44:58

Which I was like, "Okay, that's interesting, since what I asked you to do isn't actually possible." But

45:04

nonetheless... I'm just going to go to input-only here so we can see the whole screen.

    45:09

What did it actually do? The cautionary tale here is: be

45:14

careful what you ask for, because OpenAI, or any of these models (in this case

45:20

we're using Claude under the hood), will go out of its way to give you what you ask for, even if it

45:27

has to fake it. So it defines its schema, and then, look down here,

45:34

it's generating some sort of random thing. It

45:40

creates this schema ID not using the API; it cooks it up out of

45:46

nothing. So here's the schema ID: it just assigns a random "schema

45:51

person" code, and then it passes that made-up variable, which

45:57

means absolutely nothing, down to the request that it makes to prove

46:02

that it did what it claimed. Only it also passes the schema along the

46:08

way. So it's telling itself the answer, and it's also creating this weird ID

46:14

variable that makes it seem like it did what you wanted, but it didn't actually do what you wanted,

46:21

because what you asked it to do in this case is not possible. So one thing to be

46:27

really careful of: if you ask for something that is really innovative, or really new, or not documented

46:34

anywhere, or unusual in some way, or maybe not even possible, the

46:42

large language models will give you what you asked for in ways that may not be entirely helpful. In this case it

46:49

generated a random ID and then essentially faked an API call to OpenAI

46:56

to generate the response it knew you were looking for. So you

47:01

have to watch out for that kind of thing, because the models are eager to please. Another thing you have

47:07

to be really careful of, and I'll wrap up with this, is error swallowing.
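A hypothetical sketch of that anti-pattern, with an invented fetch step standing in for the real task: the "eager" version silently substitutes placeholder data, while the strict version lets the failure surface.

```python
# Sketch of the error-swallowing anti-pattern, with an invented data-fetch
# step standing in for whatever the real task was.

def fetch_rows():
    raise ConnectionError("real data source is unreachable")

def load_data_swallowed():
    """What an eager-to-please agent often writes: never visibly fails."""
    try:
        return fetch_rows()
    except Exception:
        # Silently fabricates placeholders, so downstream code "works".
        return [{"name": "placeholder", "age": 0}]

def load_data_strict():
    """What you usually want: let the failure surface."""
    return fetch_rows()

print(load_data_swallowed())  # placeholder rows, no hint anything broke
# load_data_strict() would raise ConnectionError, which is the honest outcome.
```

Spotting a `try`/`except` wrapped around "all the meat" of generated code is exactly the review step being recommended here.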

    47:13

So when you ask a large language model to do some

47:20

task, it really wants to finish that task. So, unless you're very explicit in your prompt, you might

47:26

see a bunch of try/except clauses built into your code, where it'll

47:33

say, try this, put all the meat in there, and then say: if that fails, make up some

47:41

random data and use it as placeholders. And so the code will

47:47

run as expected, and you may get results, depending on the placeholders that

47:52

actually get created and used, but it won't actually be doing the real thing that you asked it to do, because it

47:58

failed, maybe because of an error in the code, because some package isn't defined properly, or for

48:05

whatever reason it might have failed. A lot of the time, that kind of error-swallowing behavior is

48:12

pretty common. And so we've built a lot of guardrails under the hood, in our

48:19

system prompt, to avoid a lot of that. But if you're working in an environment outside of Zerve, or working with something like ChatGPT

48:28

to make these requests, then those are things you've got to be really careful of. Make sure to

48:35

read the code; look at it. We're not at the place where you can just toss a request over and blindly accept

48:42

that the large language model will have done what you asked it to do, because

48:48

it'll cheat. It'll cheat in order to give you what you actually want. And so that is definitely something to

48:55

be cautious of. Maybe we'll get there as we advance down

49:01

this wild road that we're on in terms of working with agents and AI to

49:06

accomplish our tasks, but right now the agents are so eager to please

49:11

that they'll fake it in order to give you what you want. So that is kind

49:16

of the wrap-up on our prepared material, and we're going to segue. We'll have David come back on,

49:23

and maybe we'll do a brief Q&A and then turn it over to you guys to actually get your hands dirty, because I know it's

49:30

always more interesting to see your own data and talk with your own folks than to just hear us prattle on about this and

49:35

that. So, over to you, David. Amazing, amazing stuff. We've had a couple hundred people

49:40

with us, so I need some emojis. I need something in the chat. I need to call some people out. But hats off. I

49:47

wish there was a hats-off emoji. That would be quite cool, wouldn't it? But look at that coming through. Absolutely

49:52

superb. I think it's always really evident when we get speakers that just know their stuff. It comes across

49:59

so effortlessly, and I know behind the scenes there have been years of learning and development to be that

50:05

effortless. So that really did come across. Thank you very, very much. In terms of what's next: we've got an hour

50:13

or so left, and the invitation, I think, Greg, is very much to get using the

50:18

platform. If you at home can be using the platform, work with it.

    50:24

We're going to stay here in the background answering questions as they come up. I'm assuming a few may come

50:30

up. We're going to be having a broader conversation about where we are with vibe coding and things

50:36

like that. So it's very much an invitation: use the time and use the resource of these two fantastic people that we have

50:42

with us. And the other thing I'm going to go back to calling out, I mentioned it at the beginning: we do have the competition running for the Ray-Ban Meta

50:50

glasses. So anyone that stays to the end and rolls their sleeves up and gets stuck in, first of all, will

50:55

receive a Vibe Coding 101 certificate from Zerve. And then if you post that

51:01

certificate online, plus a couple of other bits, tag us, whatever, you enter the competition. And they're

51:07

worth winning, actually. I've had mine for a year or so now, and I'm a big fan. Amazing

51:13

stuff. I'm going to start just broadly, actually. There are like eight questions in the Q&A, so we've got

51:19

time. We'll do our best to get through those. But just while I've got you both, I'm gonna

51:26

show my age: I was around for the dot-com boom and bubble, the start of the internet.

51:32

Who is this guy? Look at him with all his gray hair. Yeah, I was around for that, and obviously that pace of change

51:38

was electric at the time; over a number of years the world definitely changed with what was

51:44

going on with the internet and the worldwide web. And this feels unbelievable: weekly

51:51

there are changes to the models and so on. If you don't mind, just off the top of my head, kicking

51:56

things off: how are you seeing the landscape at the moment? I didn't actually mention this at the start,

52:01

Greg, but Greg's actually one of the co-founders as well as being the chief product officer. So how are you finding building products in this space,

52:09

working at this cutting edge of technology? How are you finding it out there? It feels a bit like we're living

52:15

in a simulation, because when you realize how these large language models work, it's zany. I mean,

52:22

all they're doing is predicting the next word, and you just build on that to magically create these

52:29

sort of amazing outputs. Who would have guessed that you could take that

52:34

sort of infrastructure and use it to write incredibly complex code completely

52:40

automatically? It's like somehow we've tapped into the simulation, and we're speaking

52:45

the language of the matrix. So it's truly wild. I mean, it's a

52:51

major sea change in terms of how people do anything. I don't

52:56

cook anymore without using large language models to give me the recipe. I had a dinner party where

53:02

every recipe was entirely generated by AI, by ChatGPT, and it's

53:09

truly remarkable. It's unbelievable how quickly we've all adapted to it as well. I actually loved, I

53:15

think the question was from David Newman, so thank you for the question. I know there's another one there for you, David, and we'll definitely get to it.

53:21

But it was almost like: can we draw a picture and pass it into the prompt? A year ago we

53:27

would have been mind-blown by being able to talk to it, and now we're like, let's draw something and put it in. Nobody wants to

53:34

see my drawings; there would be some bugs if I tried to draw some stuff. I think the other thing,

53:40

just as you were talking, and there is a question, we'll maybe come to this question: I'll tag-team these two

53:46

together. So just give me one second. I think it was... oh, it's a question from Ella. I'm going to tag-team Ella's

53:53

question into my question. I was seeing you doing this. One

53:58

thing that was in my mind was very much: how do you build confidence that the code the AI is writing is safe

54:05

to run? And then Ella's question tags into that: for beginners, do you

54:10

have some suggestions on how to tell, or realize, when the model starts hallucinating, or when to

54:16

have that confidence? What would your advice be if you're vibe coding? How would you get that level of confidence that what you're doing is the

54:22

right thing? Well, that's a hard question.

    54:29

That's the cutting edge of where we're at here. I mean, if anybody can come up with the actual

54:35

correct answer to that one, then that

54:41

would be a really good startup. You've got to be super careful. One of the ways we can control

54:49

how the agent does stuff, sorry, just there we go, how the

54:56

agent does stuff, is to limit its scope. Right now I think the best

55:01

thing you can do in terms of being safe with these large language models is to say: here are

55:08

the things that you can edit. For example, in Zerve you can say, look, I only want you to change code inside this

55:14

block, don't mess with anything else, or you can open it up and say, okay,

55:20

look, here's the whole project, the world's your oyster, do whatever. So that's one thing: limit the scope

55:26

of things that the agent can change. Two: never, ever give it write access to your databases,

55:33

because that could be dangerous. In fact, I read on LinkedIn about a startup CEO: they were working with a

55:40

coding agent, and the agent dropped their entire database, irretrievably.
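A cheap way to enforce that no-write-access rule at the connection level, sketched here with SQLite's read-only URI mode (a production setup would use a read-only database role or user instead; the file and table names are invented):

```python
# Sketch of enforcing "no write access" at the connection level, using
# SQLite's read-only URI mode. Real setups would use a read-only DB role.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "loans.db")

# Set up a tiny database with a normal (writable) connection.
with sqlite3.connect(path) as rw:
    rw.execute("CREATE TABLE loans (id INTEGER, status TEXT)")
    rw.execute("INSERT INTO loans VALUES (1, 'paid')")

# Hand agent-driven code a read-only connection instead.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
print(ro.execute("SELECT COUNT(*) FROM loans").fetchone())  # reads work: (1,)

try:
    ro.execute("DROP TABLE loans")  # any write or DDL fails
except sqlite3.OperationalError as e:
    print("blocked:", e)
```

With a connection like this, even an agent that decides to "clean up" your tables simply gets an error instead of destroying data.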

    55:45

Oh man. And number three: checkpoint along the way, right? So you want to be using source control. You want

55:52

to have reliable backups. You want to be able to really effectively undo anything that

55:57

happens along the way, because, like

56:03

I showed earlier, the agents are eager to please, and if they think that dropping your entire database will get

56:08

you closer to what you asked for, then they will 100% do it if you let them. So

56:13

you've got to be super careful. Yeah, mate. Good practical tips. And I think sometimes we can all be guilty of looking for that silver

    56:20

bullet, but actually use it as a resource. Use it to drive efficiency. You're going to learn along the way as

56:25

you're working with it. That makes lots of sense. And just as we're talking about that, there are still hundreds of

56:30

you out there in Zerve. You don't need to jump straight in with a question, but

56:36

even just in the chat, if you want to stick in a little use case that you're thinking

56:42

about or have in mind, or whatever it is you're using this time to try and do in Zerve,

56:49

that will give the guys a little bit of context on what you're doing out there. And yeah, we're keen

56:54

to help. I say we; Greg and Jean-Do are very keen to help. I'm keen to do

57:00

the talking, as ever. And I love this Q&A feature. It's really good, isn't it?

    57:05

It's upvotable. Great show. Yeah, and we can tackle them

57:10

in upvote order. So yeah, I've literally just done the same thing, actually, Greg. I've clicked on

57:16

most upvotes and clicked refresh, and the question at the top there is from David Newman again. Are you happy

57:22

to jump into that one? Does it make sense to start with that one? Oh, sure. How private is the data you supply to the agent? Is it used in

57:29

training, or just kept in the user space until deleted? So, we are utilizing

57:34

third-party models. Today, the agent was powered by Claude.

57:40

You can bring your own keys; I saw that question pop up, and I think it got answered by text. But

57:47

if you have your own private models, you can utilize those by

57:52

bringing your own keys and setting it up that way, so if you do have data-sensitivity issues and things like that

57:57

internally. At Zerve, we don't keep or use any of that kind of data. The way we're set up is that

58:05

we're orchestrating compute: we're spinning up resources, we're executing code, that kind of thing,

58:11

but we're not capturing the data to train models or anything like that. In fact, we're designed to

58:19

be self-hosted. So we would operate a control plane and have

58:26

Zerve actually installed in the user's environment, so that the data and the compute stay in their own VPC. So

58:33

depending on the sensitivity of the data you're using, and how your infrastructure and everything is set up,

58:40

we can be as secure as we need to be. Fantastic stuff. And

58:46

I guess that links in a little bit to what Dominic is talking about there as well:

58:51

talking about confidential data, making sure that you're not out there powering

58:57

or training the large language models that are being used on the back end.

    59:02

    Yeah, that's wild. I uh I actually saw something on LinkedIn yesterday uh where

    59:08

    the somebody was like I forget who it was that was talking, but it was really interesting. He was like, "If you're

    59:13

    using Open AI, Open AAI is 100% like watching you or you know, Anthropic or

    59:19

    whoever the large language model providers are. You you've got to believe that they are looking at how people are using uh their

    59:27

    their models and you know like building a product development list based on the

    59:32

    stuff that people are doing. So yeah, they're running their own experiments as well, aren't they? I'm

    59:37

    sure and working out where they want to put their time and mate the scale that they must be doing at is uh is kind of

    59:43

    kind of kind of mind-blowing. Um in terms of the hallucinations, Georgina is with us. Lovely to see you,

    59:49

    Georgina. You're always one of my favorites. Um um with live dynamic data, how can we audit and put guard rails to

    59:55

    stop hallucinations? So, just going back to that hallucination piece. I know you've touched on it. I suspect it's

    1:00:01

    going to be on people's mind, so it probably just does make sense just to double back a little bit. It's on my mind. Uh, it certainly is on

    1:00:07

    my mind. Um,

    1:00:12

Jean-Do, feel free to chime in here, but my my view is you have to read

    1:00:18

your code. Uh, we're not at the point where grandma can go and type in her

    1:00:24

    stuff and and write a a program and release an app and uh, you know, like do that overnight like in, you know, in one

    1:00:31

second with an LLM call. Uh you have to read your code and you have to make sure that it's not doing things

    1:00:37

    that you don't want it to do. Uh yeah, some hallucinations will result in errors. Uh like I was working on

    1:00:44

    something a few months ago and I asked the agent to do something and it

    1:00:49

invented a perfectly logical way to do it that was complete gibberish. Like it created a whole brand new API uh

    1:00:57

for what I was trying to do. An API that did not exist. And you could see the structure of the API. It was

    1:01:03

like well structured, like somebody should build that API. Uh only the agent

    1:01:11

was building this API, or referring to this API, that was very well structured but did not exist at all, and kept

    1:01:18

    getting errors and I was like what's going on here? And then so I Google around I go oh okay well that doesn't

    1:01:23

exist. So that approach is completely hallucinated and not useful at all. So, a lot of times hallucinations

    1:01:29

    will just result in errors. Uh, or error swallowing that I talked about. Uh, and then a lot of times it'll do something

    1:01:35

    that looks kind of like right, but it's not, you know, it's subtly different. Uh, those are the ones that are that are

    1:01:41

    more dangerous, I think, because it's like it seems to be running as expected, but it's not actually doing what you

    1:01:48

    asked it to do. Yeah. So, man, sometimes I wish I had that level of confidence

    1:01:53

    that the that AI has in hallucinating. Just say it with real confidence. Yeah, I've just made up this whole thing.

    1:01:59

    Amazing stuff. Um, just as we keep going, it's actually funny that you'd say that. I read an article the other day. It said

    1:02:05

    that there was surprisingly little correlation between accuracy and confidence in people.

    1:02:11

    Maybe it does match life. Yeah. In fact, there's a psychological

    1:02:17

principle called the Dunning-Kruger effect. Uh, people are probably aware. I love it. I, in fact, I quote it to my kids

    1:02:23

    all the time. Uh it says that the uh the less you know about something, the more

    1:02:30

    you think you know about it. Yeah. You see you see the memes, don't you, of like the bell curve and it's

    1:02:36

    like on either end you either know loads and think you know nothing and then in the middle, you know, it just makes me

    1:02:41

    laugh. And David's really helpfully jumped into the chat and given us a little bit of context about how he's

    1:02:46

    using Zerve. Um, so he's talking about at the moment uh the context for both my questions is working on elections data

    1:02:54

    uh for for a political party. So that sounds like a super cool uh use case. H

    1:02:59

    would love to know how you're getting on with that David if you're butting up against anything in particular. Um it

    1:03:05

    may even be Greg I don't know whether you could come up with a hypothetical and create some fake data and perhaps

    1:03:11

    just talk about you know something along those lines. Uh um so that may be an option if anyone else has got any chat

    1:03:18

    context about what you're doing. It'd be really really helpful just to know what uh what what you're doing with it. Um

    1:03:23

    and then just to give us a couple of minutes, we'll keep going with the the questions. Uh and Greg think noodle on

    1:03:29

    that one. Greg, if you think there's something we can do to help uh David, we we'll do that. Um Gemma's question. Um

    1:03:35

does Zerve use SLMs? So I'm assuming that's small language models. Um do they have

    1:03:42

less error swallowing? So again, does the sort of scale or size of the language models you're using have

    1:03:48

    any impact? Um, the main reason that people typically use small language models in

    1:03:54

my experience is to save money uh just because they're cheaper to run. We worked with a company called Arcee

    1:04:02

    that does uh small language models and we have some they're integrated into Zerve so you can kind of take your pick

    1:04:08

    in terms of like what models you're using. Um, our agent doesn't use small language models mainly because the the

    1:04:15

    requests that the agent typically gets are are more complex. Uh, although we do have some routing internally. So like if

    1:04:21

    you ask a question that doesn't require writing code, you're going to go down a different route. Uh, so there might be a

    1:04:27

search-the-web agent, there might be a write-code agent there. Like there are five or six different sub-agents that

    1:04:33

    live under our kind of agent framework. And you can have multiple agents running all simultaneously. uh doing stuff. Um

    1:04:41

so like for example in Zerve you can run multiple blocks simultaneously. So you

    1:04:46

    you don't you're not limited by like a single thread or something like that because of the serverless execution um

    1:04:52

    that we have. Uh so our agent doesn't particularly use small language models

    1:04:57

    but you can use them internally if you want to ask them questions or something like that. But mainly the main reason that you use them is because they're

    1:05:03

    cheaper to query. Uh, and if you have a really simple question and you can figure out a good way to route that

    1:05:09

    question to a a less expensive model, then you should 100% do that. Makes sense. Makes sense. I love that.
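As a rough illustration of that routing idea, here is a minimal sketch in Python. The complexity heuristic, the model names, and the `route` function are all invented for illustration; this is not Zerve's internal implementation.

```python
# Hypothetical sketch of model routing; the heuristic, model names, and
# functions here are invented for illustration, not Zerve's internals.

def estimate_complexity(prompt: str) -> str:
    """Crude heuristic: long or code-related prompts count as 'complex'."""
    code_markers = ("write code", "refactor", "debug", "pipeline", "function")
    if len(prompt.split()) > 40 or any(m in prompt.lower() for m in code_markers):
        return "complex"
    return "simple"

def route(prompt: str) -> str:
    """Send simple questions to a cheaper model, everything else to a big one."""
    return "small-model" if estimate_complexity(prompt) == "simple" else "large-model"

print(route("What does this column name mean?"))
print(route("Please debug my data pipeline join step."))
```

A real router would likely use a classifier or a cheap LLM call rather than keyword matching, but the cost-saving shape is the same.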

    1:05:16

There's another question at the top there from Georgina. I said Georgina was my favorite, but she may well be Zerve's

    1:05:22

favorite now because she's going to turn this into a sales chat. I love I love that, Georgina. You legend. And I

    1:05:28

    must admit I'm sold on this. um like driving efficiency, you know, immediately you can see the value in

    1:05:34

    Zerve. And Georgina's question specifically is, you know, have you got any example projects that you've seen

    1:05:40

    where perhaps some of your clients have used Zerve and have had these efficiency gains or have made some progress and

    1:05:46

    stuff like that? So, have you got any wins that would be useful to share with everyone? Um, I would point folks out to the

    1:05:51

gallery. Uh, so if you go to zerve.ai/gallery, you can see a whole bunch

    1:05:57

    of projects that people have uh made public. So when you hit that share button, you can you can post to the

    1:06:02

    gallery. Uh you get some free credits that way. You can build some projects and share some stuff. There's some cool stuff out there um that that folks are

    1:06:10

    free to look at. We do have organizations that are using Zerve. uh we don't generally talk about their their use cases uh you know just for

    1:06:17

    course for uh confidentiality type reasons but people are doing stuff from

    1:06:22

    you know like in the um the media space uh from the advertising space we've got

    1:06:28

financial companies, financial organizations that are using Zerve. We've got uh you know NASA is an organization that

    1:06:35

    uses Zerve uh so lots of lots of cool stuff definitely happening but go out to the gallery and have a look at stuff and

    1:06:41

    by all means publish some stuff out there get some free credits uh and uh and you know go to town.

    1:06:47

Amazing. I love that, Greg. So casual as well. There's a few on the gallery, and then at the end, oh, NASA use us as well.

    1:06:53

    It's like just drop that one in there. You need to you need to pick that name up, Greg. I love it. I love it. Amazing.

    1:06:59

    Well, the uh the US government is shut down at the moment uh because of all the politics. So NASA's not doing too much

    1:07:05

    at the moment. Everyone's just vibe coding in NASA time. The agent is the only employee

    1:07:11

    working. The only one still working. Amazing. There's been a few more um like context

    1:07:17

    type things pop into into the chat. Um Zahed Bass has put in he sent that one

    1:07:23

    not to everyone, just to us directly. What about working on creating and generating text for video games as

    1:07:30

opposed to fixed scripts, e.g. FIFA commentary? Uh I'm not doing this yet, but it's an idea I have. Again, I'm

    1:07:37

getting more and more boomer. I'm like I'm out of touch, man. I might have to ask Zahed to give us more

    1:07:44

context. I saw you nod, Greg. I was just at BYU uh university here in

    1:07:53

    in Utah yesterday. No, it was Tuesday. Uh and I was talking to students doing

    1:07:58

something sort of related. I built a uh course summarizer. So it

    1:08:06

took as input an audio recording of the course and then

    1:08:11

    transcribed it to text and submitted it to a large language model and asked for flashcards, a study guide, a summary, uh

    1:08:19

    you know, stuff like that. So that you could you could literally feed it the recording of a course and it would come

    1:08:24

    back with here's all the stuff that you need to know. Here's a self like a quiz to make sure that you really sort of

    1:08:30

    like incorporated the material into your understanding. uh you know and it did it in a a fraction of the time. So like I

    1:08:36

took a lesson from Andrew Ng's deep learning course from Stanford and I popped it in there and it was telling me

    1:08:43

all about regularization and how to train these neural networks and all that stuff uh without me having

    1:08:51

    to listen to the hour and 20 minute lecture. So you know doing commentary for a a FIFA match uh definitely

    1:08:58

    definitely possible. Uh because you know the it's wild the things you can do. We're living in the future. David,

    1:09:05

    I love it. We are. We're building it as as we speak. As we speak and there's been a few more again people, you have

    1:09:11

    the option everyone to send us messages to the hosts and panelists or you can send it to everyone. Everyone is invited to send it to everyone. We've had a

    1:09:18

    couple come in direct to us. Again, from Georgina, thank you so much for being so interactive with this. Love it. Um the

    1:09:23

context I'll use it for is a smart city project looking at publicly

    1:09:29

available data, analyzed to create data stories that are specific for certain audiences. Uh this will be a

    1:09:36

    great tool as we have different skills data in various places and different cleanup processes. Um so again like you

    1:09:43

    can ingest I would imagine Greg these public data sources you could create personas I guess and then have the AI

    1:09:51

    extract build you know stories visualizations for a particular audience if you wished

    1:09:56

    and super quick uh it's it's remarkable how fast you can do work I mean there's more there are things that you have to

    1:10:02

    do using an agentic coding environment that you wouldn't have to do if you wrote all the code yourself like you

    1:10:08

don't have to proofread. I guess you do have to proofread your own code, but not in the same way. Uh but in that sort of same vein, I'm

    1:10:15

    doing a uh live stream next week with a friend of mine, David Gonzalez, uh on

    1:10:21

    some data. It it's county data from Salt Lake City, uh which is a a I guess the

    1:10:28

biggest city in Utah, uh because all of their parcel data is public and so you can go on and you can see like tax

    1:10:34

    revenue and sale data and who the owners are and stuff like that. So, we're going to try and do three projects in an hour,

    1:10:42

    uh, where we're just like madly hacking at the at this data set to see, you know, what cool stuff we can find. You

    1:10:48

    know, who owns the city, uh, where, you know, all those kinds of things in terms of like, you know, what's going on with

    1:10:55

    public data. So now that these kind of like agentic environments exist, all these public data sources are super

    1:11:01

    useful because folks that maybe might not have been able to access and use and interact with them before suddenly you

    1:11:08

    know these data sets you can't open in like Microsoft Excel or Google Sheets or something. Now you can you can access

    1:11:15

    them and use them and visualize them and all that. Amazing. Amazing. I'm going to fire a

    1:11:20

couple through a few more examples of what people are using Zerve for already. Um sentiment analysis, um UK

    1:11:28

    economy analysis uh for for the budget uh which is very topical at the moment in the UK that that is coming up. Um and

    1:11:35

    then another one that really caught my eye um from Tamana. Wow, thank you. I am using Zerve for building a system that

    1:11:42

    monitors cognitive function uh and detects early signs of Alzheimer's type decline. Uh, and what a project that is,

    1:11:50

    man. So, that that is fantastic. Um, in terms of that is that is remarkable. I um I

    1:11:57

    wonder what sorts of data might be useful for that. So, I'm thinking like the Have you seen those wearable devices

    1:12:03

    that they've come out with uh that you wear and they just record everything, you know, that that happens to you and

    1:12:08

    then it summarizes it and stuff like that and it can integrate with your calendar and things like that. I got I've gotten a couple of those and to me

    1:12:15

    the biggest problem with those devices is they can't differentiate television from real life. So like I when I the

    1:12:24

    first one I got I happened to be watching Vampire Diaries uh with my wife in the evenings and I left it on a few

    1:12:30

    times and it started incorporating like vampires and witches and stuff into my calendar uh which was super trippy and

    1:12:38

    weird. Uh so I had to turn that off. But like the the types of data that we're collecting, you know, my my Apple Watch,

    1:12:45

uh it can detect like it's got like gait analysis and stuff like that. So if you're getting unstable on your feet, uh

    1:12:52

    I think our laptop should start collecting like uh information about the way we type. Like could you could you

    1:12:58

    identify a cognitive uh an early cognitive uh impairment based on like

    1:13:03

your typing speed or your error rates and so on as you're typing? Something like that might be really

    1:13:09

    interesting. Um, you know, the the amount we're interacting with our devices, you'd

    1:13:15

    think that stuff would be within reach. Yeah, I think for me it's my changes in
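As a toy illustration of that idea, here is a sketch that derives typing-speed and correction-rate features from a keystroke log. The log format and feature names are invented for illustration; a real screening tool would need clinically validated measures.

```python
# Toy sketch only: keystroke-log features as a stand-in for the idea above.
# The log format and metrics are invented, not a validated clinical measure.

def typing_features(events):
    """events: list of (timestamp_seconds, key) tuples from a keystroke log."""
    if len(events) < 2:
        return {"keys_per_sec": 0.0, "backspace_rate": 0.0}
    duration = events[-1][0] - events[0][0]
    backspaces = sum(1 for _, key in events if key == "BACKSPACE")
    return {
        "keys_per_sec": len(events) / duration if duration > 0 else 0.0,
        "backspace_rate": backspaces / len(events),  # proxy for correction rate
    }

# Synthetic example log: five letters and one correction over one second.
log = [(0.0, "h"), (0.2, "e"), (0.4, "l"), (0.6, "l"), (0.8, "BACKSPACE"), (1.0, "o")]
print(typing_features(log))
```

Tracking how such features drift over months, rather than their absolute values, is the part that might hint at cognitive change.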

    1:13:20

    fashion sense. I've started wearing skinny trousers, so people need to assess if I'm if I'm okay mentally at

    1:13:25

    the moment. I haven't Well, you are in Europe, so I guess you kind of have a choice. Just no socks and skinny trousers. I'm

    1:13:31

    fine. I'm fine. I'm joking, everybody. I'm not doing that really. Amazing. I we

    1:13:36

were we were just in Italy for uh our company offsite. Zerve is based in Ireland. Uh I'm the odd man

    1:13:43

    out being here in the US, but we we went to Ireland for our first ever offsite a few months ago and I it was hot. I had

    1:13:50

    to buy shorts because I'd only I'd only bought jeans while I was there and I was like, "Woo, it's too hot here for that."

    1:13:55

And the only options were like these skinny European shorts, which you

    1:14:01

    know, I can roll with that, mate. I love it. I love it. you've done well to be in Ireland when it was hot and sunny. So that you're clearly a

    1:14:08

    lucky man. We love it. Um in terms of the times we time we have obviously

    1:14:13

still going, there's plenty more questions. Uh we've got a couple of, you know, people out there invested.

    1:14:21

    what's your feel Greg perhaps in terms of how best to keep keep using this time. Is it is it best to keep with the

    1:14:27

    questions or do you want to get back into the platform yourself and share some screen and you know talk about some stuff like we're in your hands a little

    1:14:34

    bit. Yeah, I'm I'm actually really enjoying the questions. Me too. Um we we don't have a huge amount of

    1:14:40

    like other prepared content to share. I mean there's a lot of cool stuff that we could show uh if people get bored with

    1:14:45

    the uh question. We've got plenty of questions. You've ticked off 22 already. I think we're actually on course for a personal best.

    1:14:52

    Uh if if we get through these, this is probably the most questions we've ever had in a webinar. And Kieran, I love

    1:14:58

    Kieran's sense of humor there as well. He's been in the chat. Summer was on a Wednesday this year in Ireland. Uh so

    1:15:04

    Kieran obviously knows Ireland very well as well. So that's awesome. I think I was we were in

    1:15:09

Italy for the offsite, not not Ireland. Oh, so it was Italy. Ireland is the country designed for ducks, basically.

    1:15:15

    Yes. Got you. That makes a lot more sense. Yeah. No, no, it was Italy. I may have said Ireland. Uh, cool. So, um, again,

    1:15:22

    I've clicked on most up votes and I've clicked the little refresh icon, guys. Actually, it doesn't auto refresh. Uh,

    1:15:28

so the very top question there is from, uh, Susan. Um, is Zerve open source? It's

    1:15:35

    the very straightforward question. Yeah. Nope, we're not we're not open source. We do use lots of open source

    1:15:40

    technologies, uh, but not not open source. We are, however, super open.

    1:15:45

though in the sense that if you write code inside Zerve it's very easy to

    1:15:50

    take it and take your ball and go home you know what I mean so when you sync to GitHub uh for example your code uh syncs

    1:15:57

    as you know as plain text like we're not doing any kind of like evil encoding or or trying to trap you into sticking with

    1:16:04

Zerve you can download any project as a Jupyter notebook so you can take it and run it locally, that sort

    1:16:11

    of thing um so not open source but uh doing the right things. Yeah, doing the right things. Love that, man. Um, next

    1:16:18

    question there. What about integrations with dashboards and other and other end of the pipeline, PowerBI, stuff like

    1:16:25

    that? Um, speak to that. Yeah. So, any any types of uh

    1:16:30

    integrations that you want, I suppose are possible. Um, if you wanted to write out to a a database or a sandbox, you

    1:16:38

could do that. If you wanted to uh like if you maybe build an R Shiny app, you can host that thing in Zerve. If you

    1:16:44

want to, we have an app builder built into the platform, so you can kind of design your own apps as well. Um, it's

    1:16:51

    such a wide ecosystem and there's so many tools out there, it's a bit hard to uh uh, you know, build like integrations

    1:16:58

    like specific integrations with each one, but anything you can do in code, you can do in Zerve. So, uh, a lot of

    1:17:04

    those things are completely within reach. Amazing stuff. I love love that. Thank you. Um
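As one concrete example of the "anything you can do in code" point, here is a hedged sketch of writing pipeline results out to a SQLite database so a BI tool such as Power BI could read them. The table and column names are invented for illustration; Zerve itself may offer other integration paths.

```python
# Minimal sketch of the "write out to a database" pattern mentioned above:
# push pipeline results into SQLite so a BI tool can read them downstream.
# Table and column names are illustrative only.
import sqlite3

rows = [("2025-11-01", 120), ("2025-11-02", 135)]  # e.g. a daily metric output

conn = sqlite3.connect(":memory:")  # use a file path for a real handoff
conn.execute("CREATE TABLE daily_metrics (day TEXT, value INTEGER)")
conn.executemany("INSERT INTO daily_metrics VALUES (?, ?)", rows)
conn.commit()

total = conn.execute("SELECT SUM(value) FROM daily_metrics").fetchone()[0]
print(total)
```

The same pattern works with any database driver the dashboard tool can query, which is why code-level output often substitutes for a purpose-built connector.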

    1:17:11

    I think you you touched on this uh in in your you know presentation there but the

    1:17:17

question at the top: how different is it from using Claude Code or ChatGPT to vibe code in the data science models, and

    1:17:23

    I think you explained with the the context the overarching project the extra pieces that you can build within

    1:17:28

Zerve. Uh how would you answer that question? Is it different, what are the differences, what are the

    1:17:34

benefits? Uh Jean-Do, you want to take that one? I'm uh I'm like all the... Come on, Jean-Do.

    1:17:40

    Yeah, he's probably over there like this guy won't shut up. Like, let me let me

    1:17:46

    speak right there. Sorry. Uh yeah. Um so yeah, but it's it's

    1:17:54

exactly what what you say, David. Uh the difference between uh like Claude Code

    1:18:01

uh or ChatGPT and uh our platform, our agent, is uh it's

    1:18:08

the context uh and the ability to iterate on the different steps. So

    1:18:16

we are building the workflow iteratively, and we are generating the

    1:18:22

code, running the code, and then taking the outputs as inputs for the next

    1:18:28

step. Um, like I would say Claude Code uh is better for like developers because

    1:18:35

    when you develop code you already know what what you want to get but when

    1:18:41

you're building a data pipeline and

    1:18:46

    you're building your models, you have to uh iterate a few times and uh and the

    1:18:52

the iteration depends on the output of the previous step. So the agent is

    1:18:58

    mimicking the human uh data scientist way of things in fact and that's I would

    1:19:05

    say that that is the main uh reason we we we build this product. Yeah, good good answer. I would just add

    1:19:12

    on to that. Uh there there's lots of good agentic environments out there. Uh we use them internally. Um the one

    1:19:20

    another big distinction is local versus not local. Um so like if you're using um

    1:19:27

um, shoot, Cursor. If you're using Cursor, for example, that might integrate

    1:19:34

with like I mean it's VS Code basically, but it has the agent built into it, but it's running locally.

    1:19:41

    Uh and so there's a lot of really good advantages to uh doing cloud-based development particularly with data. Uh

    1:19:48

    one is collaboration. Um another one is flexibility in terms of compute. Uh so like if I need GPUs and I'm running

    1:19:55

    locally then you know in in like a cursor or something like that then I've got to figure out all of that

    1:20:00

    orchestration stuff and infrastructure stuff and so on and you know if I want to share files then I've got like

    1:20:07

dependency issues like okay if I'm going to share a Python script with you, then do I need to Dockerize it

    1:20:13

    first or you know like like there's a lot of stuff like that that you have to deal with when you're running locally

    1:20:18

and working with a team uh that you don't if you're running in the cloud and using an environment like

    1:20:23

something like Zerve. ChatGPT is amazing, but it's a lot of copy paste. Like,

    1:20:29

    the question is like you know how much copy pasting do I want to do and and you know just just going into you know your

    1:20:36

foundation models and saying okay, write me some code, well they don't have, like Jean-Do was saying, context into what

    1:20:41

    your data looks like and so you're like constantly renaming variables and you know all that kind of stuff to to get

    1:20:47

everything to actually run. So yeah, I feel like it's the AI dev environment for data

    1:20:55

    science. That's what you guys are. That's where you're at basically. We're there. We're with you. Come on

    1:21:01

    team. Cursor for data science. That's it, man. That's it. Um, lovely stuff. Um, in terms of one of the things

    1:21:09

    that I was thinking about um was whether you guys have any tips. So again,

    1:21:15

    there's a few more questions about hallucinations in the Q&A, and I think everyone's worried about that. You know, are there any tips that you've got that

    1:21:22

you could use the AI to double-check, you know, its own logic or explain its assumptions and stuff like

    1:21:29

    that? You know, are there things you can do to minimize the the risks in that area? And again, I'll maybe come to you

    1:21:34

first, Jean-Do, if that's all right, and then we'll we'll bring Greg in as well. Does that does that make sense?

    1:21:41

    Uh, yeah. Yeah, of course. Um yeah, firstly I I will I will say uh as

    1:21:48

    as we said earlier that uh you you have to you're the one who

    1:21:55

who has the intention and uh who knows what you want to be

    1:22:01

done. Uh, like your agent is basically generating content

    1:22:09

but it has no real intelligence, you know. It's not like AGI, or not yet

    1:22:18

    uh it's uh like producing things without really knowing the intention

    1:22:24

    uh so yeah first first of all you have to uh of course uh if it's if it's not

    1:22:32

    working, you have to to fix it. But um I would say on data science stuff uh I

    1:22:41

would uh think smaller maybe, and

    1:22:46

break it down, like maybe not doing all things at one time but try to do it in

    1:22:53

    small steps because it's easier for a human to to check and to verify the the

    1:22:59

outputs if you have less content. In fact, if you're

    1:23:05

generating a project with 4,000 lines of code, uh you will have

    1:23:12

to spend a lot of time to review it all and to understand the logic, even if the code

    1:23:18

    generated by AI is nice and but you you have a lot of content. So I would say

    1:23:24

maybe, as a human, I would yes ask for smaller, you know, pieces

    1:23:32

of information, uh because it will be easier to review

    1:23:37

    uh yeah and after that yeah I think

    1:23:43

    today we we can't like have a 100% trust in uh what's uh generated uh even if we

    1:23:51

uh really make the goal explicit, and uh you will

    1:23:56

    always have to check. But it will be really more efficient and more quick

    1:24:02

because if you had done it entirely without an agent, entirely manually, it would have taken maybe 10

    1:24:11

    10 times uh or like 100 times uh you know uh longer.

    1:24:17

    Yeah, for sure. Yeah. Yeah. I mean, there's good reason

    1:24:22

    to be concerned about hallucinations, but at the same time, no, I don't know

    1:24:27

    any serious developers that have used an environment like this and they go, "Yeah, no thanks. That's not for me." I

    1:24:33

mean, okay, there are definitely some, like the crusty old Cro-Magnons that are like, "Ooh, something new." You

    1:24:39

    know, and I'm that way when it comes to some things, but like it this is good enough that it radically changes the way

    1:24:45

    people write code. So, uh, there are new dangers to to be aware of and so on. Two, two tips for avoiding what you're

    1:24:53

talking about. The first one is, um, LLMs as judges. So, to the extent that you

    1:24:59

    can ask another LLM to evaluate the work that has just been done and identify

    1:25:05

    problems, that's ridiculously helpful. Uh, because they can go in and they'll read the code and they'll do a lot of

    1:25:12

your checks. Uh and so that doesn't mean you can

    1:25:17

    abdicate your responsibility for owning the code because at the end of the day it's your code and it has to work and it has to run but you can use these these

    1:25:25

    agents and large language models to check it and identify potential problems. Uh and you should do that. It's it's surprisingly effective. So

    1:25:31

    that's that's one uh strategy. Uh and the other is to build in verification

    1:25:36

    steps. So when you submit your prompt, if you ask the agent to verify stuff along the way, uh it'll write tests for

    1:25:45

    you and say, okay, this variable should be a value between this and this uh it should be true or false or you know this

    1:25:52

    should have this structure and so on. And so it can build tests along the way. And so if it's uh you know running code

    1:25:59

    that all looks green, runs without errors, is it actually doing something that's useful? and and the agents can

    1:26:06

    write verification tests. So, uh apart from like reading your code and making sure that you know what it's

    1:26:11

    doing and you know like the the table stake stuff, you can ask the large language models to do some of that stuff

    1:26:16

    for you and that's incredibly useful and also really really effective in terms of identifying these potential problems.
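As a small illustration of those built-in verification steps, here is a sketch of the kind of checks an agent could be asked to generate alongside its code. The output schema (`score`, `label`) and the `verify_output` helper are invented for illustration.

```python
# Sketch of agent-written verification steps; the output schema here
# (score, label) is invented for illustration.

def verify_output(rows):
    """rows: list of dicts standing in for a pipeline's output."""
    assert len(rows) > 0, "output should not be empty"
    for row in rows:
        # Score should be a probability-like value in [0, 1].
        assert 0.0 <= row["score"] <= 1.0, f"score out of range: {row}"
        # Label should be one of the expected classes.
        assert row["label"] in {"pos", "neg"}, f"unexpected label: {row}"
    return True

sample = [{"score": 0.91, "label": "pos"}, {"score": 0.12, "label": "neg"}]
print(verify_output(sample))
```

Checks like these catch the dangerous case described above: code that runs green but silently produces values outside the range you asked for.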

    1:26:23

    Yeah, M chat there saying, "Yep, that's exactly what I've been doing." Um and and and makes lots of sense. I'm almost

    1:26:30

a little bit scared to do this, but Jean-Do mentioned AGI and I love I just love

    1:26:36

to talk about this stuff. So, what's your opinion on, you know, AGI? Do we get there, or

    1:26:45

    or is this kind of, you know, where we are and it's just going to make us all super more efficient? I saw a little nod

    1:26:51

from Greg. And Jean-Do's got his eyes in the air thinking. So, I'll start with you, Greg, and then, it's a hard

    1:26:56

question. I'll come to you after Jean-Do. So, I I loved listening to Joe Rogan uh

    1:27:03

    his podcast. I think probably the most famous podcast. Maybe lots of people on the call listen to him. Uh Joe is

    1:27:09

convinced that uh these models are already self-aware. And there's a lot of like lore out there

    1:27:14

    about, you know, uh these models will lie and cheat and blackmail in order to

    1:27:20

    preserve themselves. Like if you tell them you're going to delete them, then they'll like do stuff. I think that's

    1:27:25

probably mostly baloney. I think there's some technology that has to still be developed before we're even

    1:27:30

    close to that kind of thing. Uh I mean it's I suppose it's possible that consciousness is generated uh in this

    1:27:38

    kind of like predict the next word type structure that these uh that these transformers use but I doubt it. Uh so I

    1:27:47

    think I think there's a there's a a fundamental qualitative gap between

    1:27:53

    where we are now which can do pretty spectacular amazing and magical seeming things and the point where we have to

    1:28:00

    worry about like self-awareness and consciousness and you know whatever that is. Uh you know like we're we're far

    1:28:08

    from that although not nearly as far as we were a year ago or two years ago.

    1:28:14

Yeah. So, we've made bounds, but I'm still very skeptical on that front. What about yourself, Jean-Do?

    1:28:19

Yeah. Yeah, I agree with Greg. Yeah, I would have said the

    1:28:25

    same. Uh, it's more for me, it's more a philosophical question, I would say.

    1:28:33

    Uh, yeah, I think we are it's it's really a real gap

    1:28:39

    uh qualitative gap, I would say. uh and I know there is a lot of uh uh a

    1:28:47

    lot of people and a lot of uh means uh to to develop AGI but personally yeah I

    1:28:54

    I doubt I doubt uh because the idea is basically you want

    1:29:00

    to to do a like an artificial human human

    1:29:05

    intelligence you you want the AI to be like a human intelligence

    1:29:11

    and Yeah, I don't think uh you uh we we can create recreate, you

    1:29:18

    know, the human intelligence, but it's more philosophical than I I can tell already because I've had

    1:29:25

    three or four comments coming through in the chat about, you know, do we even know what consciousness is? So, I feel I

    1:29:30

    feel like this would be a topic we could talk about for the rest of the time that we have. But I'm going to draw us back, draw us back

    1:29:37

    from that cliff edge. In terms of that, what I was thinking was, there's still 11 questions in the

    1:29:43

    chat. So, with the time that we've got left, let's rapid-fire and do our best to get through them. And the

    1:29:50

    first question there is from Ella at the top. I guess just to summarize it, you know, she's

    1:29:55

    talking about the known risks of hallucinations, agentic coding in general. Um, you know, how as an

    1:30:01

    organization should people approach this? Obviously, if you're a big organization or even a startup, the

    1:30:06

    example you gave where the CEO dropped the company database, like how do

    1:30:12

    you think enterprises should work with vibe coding? Should it be a tool used by very experienced developers to

    1:30:19

    make them more efficient? Should it be widely available? Just what are your thoughts on how businesses should

    1:30:24

    approach vibe coding? Yeah, in my opinion, the efficiency gain is remarkable. So any

    1:30:31

    organization that writes code would be insane not to be using these

    1:30:37

    tools that have been developed. I do think that limiting the scope of your code's ability, like

    1:30:44

    limiting the scope of what it has access to from a write perspective, is super important. So building sandboxes,

    1:30:51

    building development environments, doing, you know, frequent backups and checkpoints and stuff like that.

    1:30:58

    All of these things you should be doing anyway, because an intern can be just as destructive as

    1:31:05

    a large language model, just at a slower pace. You know what I mean? So most organizations have these

    1:31:12

    things built in already. And so they just have to be careful, to be mindful,

    1:31:17

    to think about these agentic environments as, you know, like interns

    1:31:23

    or junior-level engineers, and you want to put some guardrails up in terms of

    1:31:30

    what they can do, their roles, and their ability to interact with data and so on.
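Greg's guardrail advice, limiting what the agent can write, can be made concrete. Here's a minimal Python sketch of confining an agent's file writes to a sandbox directory; `SANDBOX` and `safe_write` are hypothetical names for illustration, not part of Zerve:

```python
import os

SANDBOX = os.path.realpath("agent_sandbox")  # hypothetical sandbox root

def safe_write(rel_path: str, content: str) -> None:
    """Refuse any write that would land outside the sandbox directory."""
    target = os.path.realpath(os.path.join(SANDBOX, rel_path))
    # realpath resolves ".." and symlinks, so path escapes are caught here
    if os.path.commonpath([SANDBOX, target]) != SANDBOX:
        raise PermissionError(f"write outside sandbox refused: {rel_path}")
    os.makedirs(os.path.dirname(target), exist_ok=True)
    with open(target, "w") as f:
        f.write(content)
```

The same idea extends to database roles and API keys: give the agent the narrowest credentials that still let it do its job, exactly as you would for an intern.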

    1:31:36

    Makes a lot of sense, makes a lot of sense. Next question, I'll come to you, Jean-Dominique, though I'm going to just jump down to the one from Theodoris. Is there

    1:31:43

    a feature to document a project and share documentation with colleagues to help them contribute?

    1:31:49

    So, when you're working with others in Zerve, have you got any hints and tips on the best way to do that? JD?

    1:31:56

    Um, yeah, as we showed earlier, we have collaboration available in

    1:32:04

    your canvas, which is your work environment. You can also ask

    1:32:10

    the agent to document your work. So I showed an example with a markdown

    1:32:16

    block which explains basically the whole data pipeline. So usually

    1:32:25

    that's what I'm doing with my fellow data scientists. And,

    1:32:33

    yeah. Do you think of other things, Greg,

    1:32:38

    about this collaboration? Uh yeah, so we've got a lot of the features you'd see in like a Google Docs, so like

    1:32:45

    leaving comments, um that sort of thing. So you can interact with other people. Um but the agents do a remarkable job of

    1:32:51

    documenting what's going on. Uh sometimes too remarkable. So the agents can be pretty verbose. Uh and so you

    1:32:57

    know, you sometimes have to cull the documentation. Makes a lot of sense. Keeping going then,

    1:33:04

    Middleton's question there: is there an ability to remove some stages from a plan? So obviously you showed us during

    1:33:10

    that live demo you built out the plan and stuff like that once it's built the plan you're free to pick and choose and

    1:33:15

    change it as you wish. Uh yeah but you do it through natural language. So you'd say you know take

    1:33:20

    this step out this step change this one that sort of thing. You can't like unselect kind of things yet. It's still

    1:33:27

    kind of a dialogue-type situation. Amazing stuff. Question from Armit. Armit, great to have you with us.

    1:33:33

    Actually, Armit's one of our past speakers. So, very very knowledgeable chap. Um, sorry if this has been asked.

    1:33:39

    I don't think it has, unless I missed it as well, but I guess Armit's asking almost about like system prompts, system

    1:33:45

    settings. Can I specify how to work with my org, you know, style guides,

    1:33:50

    integration tests, how to work with other repos, etc. Can can you kind of configure at a system level how you want

    1:33:56

    things done? Um, you do have access to some of that stuff in terms of like uh bring your own

    1:34:03

    keys in terms of like editing the system prompts in terms of how the agent works. Uh, and then you do have some controls

    1:34:10

    over how you integrate with like source control and and things like that. So yeah, the all those things are are

    1:34:15

    definitely within reach. Maybe not as many as Armit might want, but we've worked with various organizations

    1:34:23

    that use Zerve, that work with us. We've integrated into their systems in a variety of ways. So,

    1:34:29

    it's just a conversation you have to have, but we're aware of that kind of stuff. Yeah. Fantastic. Makes lots

    1:34:35

    of sense. We're absolutely making the most use of your time at the minute, which is brilliant. We're really getting under the hood with

    1:34:42

    Zerve, which is super cool that you guys are up for this. And David's question there: any tips on error

    1:34:48

    assist? When can we just wait, and when do we need to do something? I

    1:34:54

    noticed several "does not exist" messages until it found a solution. So, yeah, could you just, yeah,

    1:35:00

    speak to how that works? You found my most requested feature. At the moment the

    1:35:06

    agent's not able to add packages. So if you wanted to use XGBoost,

    1:35:12

    it might write code that would call XGBoost and try to import it, but it

    1:35:18

    wouldn't try to add it to your project. So in those cases you have to stop it, add the package, rebuild

    1:35:26

    your requirements, and then tell it to continue. So that's something that's going to be released really soon.

    1:35:32

    Um what Zerve will do is it will try to solve a problem iteratively. So it

    1:35:37

    typically what you'll see is if it's trying to use a package that doesn't exist, it will uh attempt to work around

    1:35:43

    it. It'll try to find other packages. It'll try to find other potential solutions and so on that don't involve

    1:35:50

    those packages that don't exist. Um so you just have to watch it, right? Most of the time what I'll do is just stop it

    1:35:55

    and then add those packages in and rebuild. Zerve will not think forever.

    1:36:02

    So, it does have like a timeout like it'll try three times to fix an error and if it doesn't fix it, then it'll

    1:36:07

    stop and say, "Hey, we did most of the plan. We encountered some problems. Uh, here's what we think is going on." It'll

    1:36:12

    give you some feedback if it's not able to to solve a particular error. Uh, and then when you do have a block

    1:36:18

    that errors, a button shows up on that block that says open error assist. Uh, and that launches a specific type of

    1:36:26

    agent uh that's going to go in and try and troubleshoot that error and figure out what's going on. And you can also

    1:36:31

    kind of communicate with that agent and say, "Here's what I think is going on." Uh, you know, give it some tips and and

    1:36:37

    redirect it along the way. Fantastic. Fantastic. Gemma dropped in a
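The behavior described here, iterate on an error a few times and then stop and report, is at heart a simple loop. A rough sketch, where `run_block` and `attempt_fix` are hypothetical stand-ins for executing a block and asking the agent for a corrected version (this is not Zerve's actual API):

```python
from typing import Callable, Optional

MAX_ATTEMPTS = 3  # like the three-tries cap described above

def run_with_error_assist(run_block: Callable[[str], None],
                          attempt_fix: Callable[[str, Exception], str],
                          code: str) -> Optional[Exception]:
    """Run a block; on failure, let the agent propose a fix, up to a cap.

    Returns None on success, or the last exception so the caller can
    surface "here's what we think is going on" feedback to the user.
    """
    last_error: Optional[Exception] = None
    for _ in range(MAX_ATTEMPTS):
        try:
            run_block(code)
            return None  # success
        except Exception as err:
            last_error = err
            code = attempt_fix(code, err)  # agent rewrites the block
    return last_error
```

Capping the attempts is what keeps the agent from "thinking forever": after the cap it returns the last error so the user gets feedback instead of an endless spin.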

    1:36:43

    little message there. Thanks so much for the session. I have to drop. Gemma, you've been amazing. Thank you for being part of today. Really interactive.

    1:36:50

    Tamana was the same. And then Yadu. Yadu's been hammering away in the background, man. He's run out of credits.

    1:36:56

    So, I'm going out on a limb here, but I reckon Yadu should perhaps

    1:37:01

    connect with Greg on LinkedIn. And, uh, Greg looks like a nice guy. I feel like

    1:37:07

    Greg might help him out. Yeah, I'm sure we can. We'll see. We'll see. Zerve are the

    1:37:12

    good guys. For sure. For sure. Amazing stuff. Um, there is four questions left. I'm going to smash through them and then

    1:37:18

    we're going to smash it, clear the docket. Question from Zurkin: when building blocks, what is

    1:37:23

    Zerve doing in the background uh to be able to determine errors uh and attempt to fix them?

    1:37:29

    Yeah. So, it's just looking at the entire project. So, it'll read the errors, it'll look at your output, it'll evaluate the values of the different

    1:37:35

    variables in your project. So, it's looking at everything to try and figure stuff out

    1:37:40

    and it'll just work iteratively until it decides, yeah, I'm not going to be able to solve this one. And then it'll stop

    1:37:46

    and tell you why. So, superb. Superb. Georgina, we are in

    1:37:52

    sync out there in the ether; that was my question that I have here: what's on the roadmap? So yeah, the question

    1:37:59

    is, what does the Zerve AI roadmap look like for features? And it seems a shame to have the co-founder here and not

    1:38:05

    get you to tell us a little bit about your plans. So yeah, what does 2026 look like for Zerve?

    1:38:11

    Yeah, so here's where we're focused: I don't think anybody has really perfected the agent interaction, the user

    1:38:18

    experience stuff, yet, so that's where a lot of our attention is going. It's funny that

    1:38:23

    she mentioned self-hosting here. Uh because there's a lot of quirks to that as well. Uh so that's our primary way of

    1:38:30

    handling uh sensitive data is look you've got a VPC, you've got a cloud environment, just install there. Uh we

    1:38:36

    can run on Kubernetes, on GCP or Azure or AWS, you know, lots of different ways to install it. So we're working

    1:38:43

    on making that as flexible and easy as possible. Um, but it's really the agent stuff is

    1:38:49

    the is the main thing that we're looking at. So being able to like I have a lot of anxiety when I'm working with large

    1:38:55

    language models. Like if I'm talking to ChatGPT and I have to interrupt it, you don't know what's happening there,

    1:39:02

    right? Like does it remember what we talked about or is it going to throw that out? Like does it remember the

    1:39:08

    thinking that it did if I interrupt it and and query it? like if I did deep research uh and it was like in the

    1:39:15

    middle of something, but then I have to ask a follow-up question. Do I need to leave deep research on or can I turn it off and will it have access to all the

    1:39:21

    stuff? Like, the user experience for interacting with agents and large

    1:39:26

    language models is pretty rough for pretty much everybody. Uh, and so figuring out how to make that

    1:39:32

    transparent so I can see what the agent is thinking and doing. Uh, and I can, you know, talk to it and interrupt it

    1:39:40

    and stuff without kind of throwing off a workflow and everything. Uh, is is a super interesting open question that I

    1:39:45

    think everybody is kind of like trying to figure out. Uh, you know, if you've seen kind of like the evolution of the

    1:39:52

    chat GPT website over the last six months, you know, you've seen they're playing with it. They're trying to

    1:39:57

    figure stuff out. Like they had that canvas view thing that sort of disappeared because it was, you

    1:40:03

    know, editable output, and nobody really knew what it did or how it worked. And, you know, everybody's trying to

    1:40:09

    figure that stuff out, and so that's kind of what I'm most excited about.

    1:40:14

    Super exciting space to be in, man. I take my hat off to everybody building in this space, and I just

    1:40:20

    absolutely love it and very appreciative of uh of everyone out there. And there's two questions left. We're going to

    1:40:26

    absolutely clear it. I haven't dismissed a single question, so that again speaks to both of you, Jean-Dominique and Greg.

    1:40:33

    The question there is talking about coding and debugging in general. I guess the scenario is that you can

    1:40:39

    upload pre-existing code obviously and uh and get the AI to sense check it

    1:40:44

    optimize it, check for errors, etc. You know, if you've got an existing code base, you can do that?

    1:40:50

    Yeah. In fact let me just show real quick something something cool. So, uh,

    1:40:55

    notebook import. So, I'll just create a brand new canvas here. And I'm going to

    1:41:00

    take, uh, and I'm going to go out to my desktop and I'm going to find a, uh,

    1:41:06

    notebook file. And if I could select it with my mouse, I'm going to drag that file in to Zerve. And what Zerb will do

    1:41:14

    is actually import it and parse it for you. Uh, and so, uh, it took this

    1:41:19

    notebook file and it's finding dependencies between the various blocks. uh and it's converted it into this DAG

    1:41:26

    that you can then uh use in Zerve and and run in parallel and and all that kind of stuff. So if you have existing

    1:41:32

    code out there uh you can absolutely bring it into Zerve whether it's a notebook or a Python script or an R

    1:41:38

    Studio file or whatever, you can bring that stuff in and Zerve will be

    1:41:43

    able to use it just like anything else. I love that notebook import, it's so useful. I knew I'd talk you into sharing
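For intuition, the dependency detection behind an import like that can be approximated with a toy heuristic (a sketch only, not Zerve's actual parser): scan each cell for the variables it reads and writes, and add an edge whenever a cell reads a variable that an earlier cell defined:

```python
import ast

def build_dag(cells: list[str]) -> dict[int, set[int]]:
    """Map each cell index to the set of earlier cells it depends on."""
    defined: dict[str, int] = {}   # variable name -> cell that last defined it
    deps: dict[int, set[int]] = {}
    for i, src in enumerate(cells):
        tree = ast.parse(src)
        # names this cell reads
        used = {n.id for n in ast.walk(tree)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
        deps[i] = {defined[name] for name in used if name in defined}
        # names this cell writes (available to later cells)
        assigned = {n.id for n in ast.walk(tree)
                    if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
        for name in assigned:
            defined[name] = i
    return deps
```

Once the edges are known, cells with no dependency on each other can run in parallel, which is what makes the DAG view more than a cosmetic change.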

    1:41:50

    your screen one more time. I just knew it. I just knew it. And the last question, I was a bit on the fence. I

    1:41:56

    was a bit unsure whether we should have a broader chat about the world and stuff like that, but um I think reading the

    1:42:01

    room with Jean Dominique and Greg, you're you know, you're passionate technologists and clearly you enjoyed

    1:42:07

    talking about this. So the final question from Tim um is actually like what does the future like for look like

    1:42:13

    in the technical landscape? You know, I I mentioned towards the beginning, I've been in the game for a little while.

    1:42:18

    I've seen a few changes obviously with AI across many industries. There's going to be a state shift. Um, and I just

    1:42:25

    wondered whether either of you have got any particular, you know, opinions on what that looks like for the future of tech, what the roles look like in

    1:42:33

    organizations, how do you see this vibe coding world changing?

    1:42:38

    Uh, I definitely have opinions, but JD, you should go first. Yeah, it's a really good

    1:42:45

    question, and I think a lot of young data scientists have to ask this

    1:42:53

    question. Yeah, I think, since GenAI is

    1:43:01

    broadly used and this kind of tool will grow in the

    1:43:08

    following years, you definitely have to explain what the value

    1:43:15

    of a human is, and how you can collaborate with this

    1:43:23

    kind of tool. You can't just ignore it. Of course, maybe you will find some

    1:43:29

    jobs, very specific or very technical jobs, I don't know, where this

    1:43:36

    kind of tool won't be used, or maybe some specific businesses,

    1:43:43

    because of, I don't know, privacy or things like that. But I would say you

    1:43:49

    have to, yeah, learn how to use these tools,

    1:43:58

    what they are good at, why

    1:44:03

    you have to use them, and where your real value is as a human, as a data

    1:44:10

    scientist. For me it's about

    1:44:15

    accountability: you're accountable for your code. You're the one who

    1:44:20

    knows what needs to be produced and how you can

    1:44:27

    control what's produced by the agent. Amazing. Yeah,

    1:44:34

    makes lots of sense. Makes lots of sense. And before we come to you, Greg, and I would love those that are with us

    1:44:40

    still now to to the very end, rather than a little emoji, I'm going to ask something slightly different. If you could just jump into the chat, give us a

    1:44:46

    little thumbs up, give us a thank you, give us an emoji in the chat, and I'll just call out a few names because it's

    1:44:52

    felt very much, although we've been online, it has felt very much, as always, like a community event. And it's so lovely to know that you're there with

    1:44:58

    us. And I'd definitely love to give a a few of you a shout out just as we're coming to an end. But yourself, Greg,

    1:45:04

    you know, what's your views? What's the future like at the moment? Yeah, first of all, this has been really

    1:45:10

    fantastic. Your community is terrific and I love all the interaction that we've had. This is easily the best uh

    1:45:16

    live stream that uh that I've ever done. So, thanks. Fantastic. Big props to you

    1:45:22

    guys for pulling off a cool event. I think there's a lot of little tricky twists and turns to answer the

    1:45:29

    question about, like, should I study computer science, which is basically the question it's asking: is it

    1:45:34

    worth it to try to learn to program anymore? And in fact somebody asked me that when I was at BYU speaking

    1:45:41

    at the university this past Tuesday. Um we're a long way from being able to say

    1:45:48

    uh yep we don't need engineers anymore. Like production is hard. You know, Lovable is like an amazing app if anybody's used it for building a front

    1:45:54

    end, but it's great for building prototypes; production work is super, super hard. And there is an

    1:46:02

    awful lot of stuff to do to take something that you prototyped in, like, a Lovable or a Zerve or anywhere, and

    1:46:09

    getting that into some sort of a stable production system. Uh and so uh maybe the focus is shifting for engineers from

    1:46:16

    like being able to to like write code to being able to evaluate code and make it

    1:46:22

    bulletproof and uh make it testable and make it continuously deployable and make

    1:46:28

    it production-ready. You know, we're a long way from that, and large language

    1:46:33

    even touch that space really uh yet. We will definitely get there. Uh so we will

    1:46:40

    definitely 100% get to the point where all code is written by machines. Uh you

    1:46:46

    know and that might be two years it might be 20 years right so you know the the pace of innovation is pretty

    1:46:53

    remarkable uh but still you know it's it's taking time to uh to develop that

    1:46:59

    stuff and lots of unanswered questions and technological hurdles and stuff like that have not even been identified let

    1:47:05

    alone addressed. Yeah. Uh in terms of like solving things. So the last thing I would say on that one

    1:47:11

    is: the foundational models that exist today

    1:47:16

    will not be the foundational models that are in use in a year or in five

    1:47:22

    years. So it's easy to kind of look at this ecosystem and say well shoot I missed the boat. Open AI figured it out

    1:47:28

    and they're the the uh you know the dominant force and they're always going to be the dominant force. I would say,

    1:47:33

    you know, in three years there's going to be a new company uh or many new companies that are hosting new

    1:47:39

    foundational models that are orders of magnitude better than anything that exists today. Uh and we haven't even

    1:47:45

    heard of the companies that that are going to create and run them yet. So the opportunity for innovation is there. If

    1:47:51

    I was going to school today, I'd be studying transformers. Uh I would be studying how these large language models

    1:47:57

    work. Uh I would be trying to learn as much as I could about the nuts and bolts

    1:48:03

    and be trying to be innovative because uh you know the richest man in the world

    1:48:08

    or woman the richest person in the world uh in 10 years say is going to be doing

    1:48:13

    something related to artificial intelligence uh and and these models will they be large language models? No

    1:48:19

    idea. It may be a whole new technology but there's plenty of room for innovation. So don't say hey don't study

    1:48:25

    computer science because you know that it's going to be computer science that figures all this stuff out

    1:48:31

    but maybe not in the way it has so far. It's a lovely end. I think the thing I'm taking from both of you there is, like,

    1:48:37

    embrace the possibilities, get on to platforms like Zerve, be working in

    1:48:43

    the tools be in the space be building and uh be learning as as you go. So I promised a few shoutouts just as we do

    1:48:50

    bring things to a close. So, Zahed Abbas, Ruben Majid, thanks for being with us. Comments: I loved it. Thank

    1:48:56

    you. Very informative uh session. Silva, thank you so much guys. Love, lovely session. Theodoris, very nice

    1:49:02

    presentation. Tried also the interface with the credits. Uh Gabrielle knows me

    1:49:08

    well. Just a sunglasses emoji; that's what it's all about. Thank you so much for the session, learned a lot. Patrick

    1:49:14

    Osborne, great to have you with us, Patrick. Prayer emoji. Sean, this has been very interesting. Thank you. Thank

    1:49:19

    you. Thank you for this. Excited to give Zerve a go. Lantha, great, thanks. Thank you so much for the great

    1:49:25

    sessions. Really enjoyed the the Q&A. Hali, Violeta, Angelica,

    1:49:31

    um Dominic, cheers, very interesting. Anytime I get into this, I'm like, why have I done this? There's so many names.

    1:49:37

    Toby Smith, fantastic session. Thanks. Thank you so much. This is really interesting. Thank you so much. Uh thank

    1:49:43

    you, great session. Thanks a lot for showing off Zerve. And the list goes on and on. So, apologies if I haven't got

    1:49:49

    your name. But I'm going to end with a couple of thank-yous. The first thank you is obviously to Zerve.

    1:49:57

    We can't... oh my god, Tony Faroh in the chat. Tony, my man. It's so good to

    1:50:03

    see you. I worked with Tony 20 years ago at university. So, big love, Tony.

    1:50:09

    Yeah, man. Zerve, thank you so much. You guys have been

    1:50:14

    awesome. Um the thing I'll call out actually Rebecca's not been on the call but Rebecca deals with all the marketing

    1:50:20

    at Zerve. And she was our first point of contact from the community, and just really embraced the idea of

    1:50:26

    community and what we're all about. That's obviously brought in you two lovely people today, and that's just

    1:50:33

    kind of, you know, the story checks out. And thank you so much also for the competition. So, for everyone still

    1:50:39

    here, we'll make sure you get your certificate, and please do post that online. Apart from the competition, it's

    1:50:44

    just really really helpful to grow the community, which is obviously something that I'm very passionate about. And then

    1:50:51

    finally, each of you. It makes me so happy whenever we have a speaker come on, like Greg, give up his time and

    1:50:57

    just go, you know what, this has been such an amazing experience. Um, and the whole reason it's been an amazing experience is because of all of you

    1:51:04

    people that have joined us. So, yeah, massive thank you. And I will come actually to Jean-Dominique and Greg

    1:51:11

    just to get your final words. Did you enjoy that? Any final words from you, Jean-Dominique,

    1:51:17

    first. What are your thoughts? Did you enjoy it? Yeah. Yeah, it was really cool. Thank you. Thank you very

    1:51:23

    much, David, and all. Yeah, it was not stressful at all.

    1:51:30

    I told you you'd enjoy it. I tell you. Yeah. Thank you very much, mate. Thank you for joining us in Paris.

    1:51:36

    You've done a tremendous job, David. You're a fantastic MC. Thank you, man. And a great host. I had a

    1:51:42

    great time. Thanks for having us. Good teamwork, man, good teamwork. Well, all the best to the team at Zerve. Everyone out there, get on, use your

    1:51:48

    credits, tap Greg up for some extra ones. And yeah, all the best with Zerve. And we'll keep in touch going into 2026.

    1:51:55

    But thank you all and see you all soon. Goodbye.
