Greg Michaelson, Jean-Dominique Mercury, and David Loughlan speaking during the Data Science Festival Sandbox Session, each appearing in separate video frames against bright yellow backgrounds featuring the Data Science Festival logo.

What You’ll Learn from the “Data Vibe Coding with AI” Replay

A hands-on session where data scientists explored prompting, debugging, and safety in agentic coding.

When we hosted Data Vibe Coding with AI: Prompting Strategies That Actually Work during the Data Science Festival Sandbox Session, the goal was to let everyone code along live with Zerve. Attendees could try the platform for themselves, ask questions directly to Greg Michaelson and Jean-Dominique Mercury, and see how agentic coding works in real time.

The energy was electric from the start. Hundreds of people joined, flooding the chat with ideas, experiments, and sharp questions. The curiosity was contagious. People were building projects, exploring data, and testing prompts within minutes. It felt less like a webinar and more like a shared discovery session.

What the Session Covered

How to write prompts that work

Greg and Jean-Do guided everyone through hands-on examples to show how giving the agent more context changes everything. When you describe what your data looks like and what you want to achieve, the AI stops guessing and starts producing results you can use.
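The idea of "giving the agent more context" can be sketched in code. The helper below is a hypothetical illustration, not Zerve's actual prompt format; the column names and goal are made-up examples of the kind of detail that keeps the AI from guessing.

```python
# Hypothetical sketch of a context-rich prompt; not Zerve's real format.
def build_prompt(columns: dict[str, str], goal: str) -> str:
    """Assemble a prompt that tells the agent what the data looks like
    and what outcome is expected, instead of leaving it to guess."""
    schema = "\n".join(f"- {name}: {dtype}" for name, dtype in columns.items())
    return (
        "You are working with a pandas DataFrame named df.\n"
        f"Columns:\n{schema}\n"
        f"Goal: {goal}\n"
        "Return runnable Python only."
    )

# Example columns and goal are illustrative, not from the session's dataset.
prompt = build_prompt(
    {"loan_amount": "float", "income": "float", "defaulted": "bool"},
    "train a model that predicts the defaulted column",
)
print(prompt)
```

The point is less the wording than the structure: name the data, describe its shape, and state the finish line explicitly.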

How to build step by step

Participants followed along as Jean-Do built a loan prediction model from scratch. By breaking the workflow into smaller parts, it became easy to debug and understand what each block in Zerve was doing before moving to the next.
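The step-by-step approach can be sketched as follows. This is a minimal, hedged example using synthetic data and scikit-learn, not the session's actual loan dataset or code; each stage is a separate, inspectable step, mirroring how one block feeds the next in Zerve.

```python
# Hedged sketch of step-by-step model building on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Step 1: load (here: generate) the data
X = rng.normal(size=(500, 3))            # e.g. income, loan amount, term
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # toy default label

# Step 2: split, and check shapes before moving on
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
assert X_train.shape[1] == X_test.shape[1]

# Step 3: fit a simple baseline model
model = LogisticRegression().fit(X_train, y_train)

# Step 4: evaluate before adding any complexity
accuracy = model.score(X_test, y_test)
print(f"baseline accuracy: {accuracy:.2f}")
```

Debugging stays cheap because each step can be verified in isolation before the next one runs.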

How to catch when the AI is wrong

Greg walked through a live example where the agent claimed success on an impossible task. It invented data to fill the gaps, and everyone saw it unfold in real time. The takeaway was simple: always check what your code actually does, even when it looks right.
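One way to put "always check what your code actually does" into practice is with sanity assertions. The sketch below uses made-up data and expectations; the pattern is the point: verify the transformation rather than trusting a success message.

```python
# Hedged illustration of verifying AI-generated work with sanity checks.
# The dataset and expectations here are invented for the example.
import pandas as pd

raw = pd.DataFrame({"id": [1, 2, 3], "amount": [100.0, None, 250.0]})
cleaned = raw.dropna()

# Assertions catch an agent that quietly invents or drops data:
assert len(cleaned) <= len(raw), "cleaning should never add rows"
assert cleaned["amount"].notna().all(), "no missing values should remain"
assert set(cleaned["id"]).issubset(set(raw["id"])), "no invented ids"
print(f"{len(raw) - len(cleaned)} rows dropped")
```

Checks like these take seconds to write and fail loudly the moment output stops matching reality.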

How to work with the agent as a teammate

Throughout the session, attendees experimented with asking the agent to explain its reasoning, fix mistakes, and document its steps. The best results came from treating it as a collaborator while staying in control of the process.

How to stay safe when coding with AI

Greg wrapped up with a real-world cautionary story about a team that gave an agent full database access and lost everything. He reminded everyone to keep permissions limited, review code often, and prioritize safety over speed.
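The "keep permissions limited" advice can be demonstrated concretely. The sketch below uses Python's stdlib sqlite3 and a throwaway file; production databases have their own read-only roles, but the principle is the same: an agent connected read-only cannot destroy anything.

```python
# Minimal sketch of limiting write access, using stdlib sqlite3.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Set up a throwaway database with one table
with sqlite3.connect(path) as conn:
    conn.execute("CREATE TABLE loans (id INTEGER, amount REAL)")
    conn.execute("INSERT INTO loans VALUES (1, 100.0)")

# Reopen it read-only via a URI; destructive statements now fail
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
blocked = False
try:
    ro.execute("DROP TABLE loans")
except sqlite3.OperationalError as e:
    blocked = True
    print(f"write blocked: {e}")
```

Reads still work through the read-only connection, so the agent loses nothing it legitimately needs.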

Top Questions Asked During The Livestream

  1. How private is the data used in Zerve? Is it used to train models?

    Zerve orchestrates compute and agent calls but does not keep your data to train models. You can bring your own keys. Self-hosting keeps data and compute in your VPC.

  2. Can Zerve handle very large datasets and pick the right libraries? 

    Yes. The agent can see your data, but you should state size requirements in the prompt to steer it toward tools suited for scale, such as Polars.

  3. What consumes credits and how many do typical tasks use? 

    Compute and agent calls consume credits. Large context and GPUs cost more. A sizable EDA or modeling request is often about one credit, but it varies by data size and complexity.

  4. How do I keep code safe and reduce hallucinations? 

    Limit the agent’s scope, never grant write access to production databases, checkpoint with source control and backups, read the code, add verification tests, and use a second model as a reviewer. Work in small steps.

  5. How is Zerve different from ChatGPT, Claude Code, or Cursor? 

    Zerve runs in the cloud with full project context, a block-based DAG that caches outputs, language interoperability, collaboration, and flexible compute. It cuts copy-paste and keeps state across the workflow.

  6. Can I set org-level rules, prompts, or styles?

    You can edit system prompts, bring your own keys, and integrate with source control. Deeper org integrations are possible case by case.

  7. Can I import existing notebooks or code? 

    Yes. Drag in a notebook and Zerve parses it into a DAG of blocks you can run in parallel and edit.

Ready To Get Started?

The session closed on a high note. People stuck around to keep building and trading ideas with Greg and Jean-Dominique. It showed how much more you learn when you experiment in real time. Watch the full session below and get started with Zerve for free.

FAQs

What topics are covered in the 'Data Vibe Coding with AI' replay?

The session covers writing effective prompts, building models step by step, spotting when the AI makes mistakes, collaborating with the AI as a teammate, and staying safe when coding with AI.

How can I write prompts that work effectively with AI agents?

Greg and Jean-Dominique demonstrated that providing the AI agent with more context and clear instructions helps in generating better responses and achieving desired outcomes.

What is an example of building a model during the session?

Jean-Dominique built a loan prediction model from scratch during the session, illustrating the step-by-step process of developing machine learning models using AI assistance.

How do I detect when the AI agent is wrong?

Greg shared examples where the AI agent claimed success on tasks but was actually incorrect, highlighting the importance of verifying AI outputs and understanding its limitations.

What is the best way to work with an AI agent during coding?

Treating the AI agent like a collaborative teammate yields the best results. Engaging interactively and iteratively improves code quality and problem-solving efficiency.

How can developers stay safe while coding with AI?

Greg recommended limiting the AI agent's access to sensitive data or systems to maintain security and privacy while leveraging its capabilities during development.

Greg Michaelson
Greg Michaelson is the Chief Product Officer and Co-founder of Zerve.
