
Agentic Coding Live: Three Data Projects, One Hour, Real Chaos
When we planned Agentic Coding Live, the goal was simple to describe and hard to pull off. Take a messy public dataset from Salt Lake County, build three different projects in one hour, and let an agentic workflow carry as much of the load as possible.
We wanted to see how far an agentic canvas could go when the goal is not a single notebook, but a set of working projects that anyone on the team can read, reuse, and ship.
What we tried to build in an hour
The livestream focused on one dataset and three very different questions. Here is the link to the canvas we created during the stream.
Who Owns My Neighborhood
We started with parcel-level data for Salt Lake County, where each row represented a piece of land with ownership, valuation, and tax details. The first question was straightforward: who actually owns the land, and how concentrated is that ownership?
In the “Who Owns My Neighborhood” project, the agent pulled in the CSV, walked through basic exploratory checks, surfaced missing fields, and pointed out the columns that mattered most, like owner name, tax class, and parcel acres. It also produced quick visuals for parcel size, land use, and tax categories. With one prompt, it filtered, grouped, and ranked owners by total property value and parcel count. Cities, state agencies, large industrial and resort owners, and a well-known church rose to the top immediately.
Each step lived in its own block with readable code and visible outputs. Instead of an opaque result, you could see every transformation, aggregation, and chart as something you could inspect, edit, or run again.
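The grouping and ranking step can be sketched in a few lines of pandas. The column names here (`owner_name`, `market_value`, `parcel_acres`) and the sample rows are placeholders, since the real county CSV uses its own field names:

```python
import pandas as pd

# Stand-in for the parcel CSV; column names are illustrative.
parcels = pd.DataFrame({
    "owner_name": ["SALT LAKE CITY CORP", "SALT LAKE CITY CORP", "SMITH, JANE"],
    "market_value": [1_200_000, 350_000, 410_000],
    "parcel_acres": [3.2, 0.8, 0.25],
})

# Rank owners by total property value and parcel count.
top_owners = (
    parcels.groupby("owner_name")
    .agg(total_value=("market_value", "sum"),
         parcel_count=("market_value", "size"),
         total_acres=("parcel_acres", "sum"))
    .sort_values("total_value", ascending=False)
)
print(top_owners.head(10))
```

Each aggregation lands in its own named column, so the resulting table reads the same way the canvas blocks did: one inspectable output per step.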
Corporate Landlords Of Salt Lake County
The second project used the same dataset to answer a different question: how many parcels belong to corporate or institutional owners rather than individuals? The challenge was the owner name field. It was messy in all the usual ways, with inconsistent punctuation, scattered commas, multiple spellings of LLC, and a mix of ampersands and the word “and.” Typos and small variants often hid identical owners.
The agent cleaned and normalized the column by fixing casing, trimming whitespace, removing suffixes like LLC or Inc, replacing ampersands, and grouping entries that differed only by cosmetic changes. Once the names were consistent, grouping by parcel acreage or total market value was simple. The results surfaced major institutional landholders across the county, including governments, church entities, industrial owners, and commercial developers.
All of the cleaning logic stayed visible and editable, and you could rerun any step. The agent handled the tedious work while leaving the reasoning and judgment in human hands.
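The normalization pass looked roughly like the sketch below. The suffix list and the sample names are illustrative, not the livestream code, and a real pass would handle many more variants:

```python
import re
import pandas as pd

# Hypothetical messy owner names; the real column is far noisier.
owners = pd.Series([
    "Acme Properties, L.L.C.",
    "ACME PROPERTIES LLC",
    "Smith & Jones Inc.",
    "smith and jones, inc",
])

def normalize_owner(name: str) -> str:
    s = name.upper().strip()
    s = s.replace("&", " AND ")                          # unify ampersands
    s = re.sub(r"[.,]", " ", s)                          # drop punctuation
    s = re.sub(r"\b(L L C|LLC|INC|CORP|CO)\b", " ", s)   # strip suffixes
    return re.sub(r"\s+", " ", s).strip()                # collapse whitespace

normalized = owners.map(normalize_owner)
print(normalized.value_counts())
```

After normalization, the four raw strings collapse to two owners, so a `groupby` on the cleaned column counts each entity once.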
Tax Fairness Explorer
The third project asked a more sensitive question: how fair is the property tax burden across neighborhoods? The agent began by creating a simple metric, taxable value divided by full market value, giving a tax burden ratio for each parcel. It filtered out invalid rows, summarized burden patterns across the dataset, grouped ratios by neighborhood, flagged extreme high and low burden areas, and identified parcels with no tax burden at all.
Some results were expected, such as government-owned parcels carrying no tax burden. Other findings were more interesting. Most properties clustered near a mid-range burden, while a smaller set of notable neighborhoods carried much higher or lower ratios.
We didn’t take on this project to debate tax policy, but to see whether an agent could build a fairness metric from public data and surface outliers in a way that an experienced analyst would trust. It did, and every step was visible on canvas.
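The metric itself is simple enough to sketch. The column names (`taxable_value`, `market_value`, `neighborhood`), the sample rows, and the outlier thresholds below are all assumptions for illustration:

```python
import pandas as pd

# Stand-in parcels; real data has thousands of rows per neighborhood.
parcels = pd.DataFrame({
    "neighborhood": ["Sugar House", "Sugar House", "Rose Park", "Rose Park"],
    "taxable_value": [220_000, 0, 180_000, 90_000],
    "market_value": [400_000, 500_000, 200_000, 300_000],
})

# Tax burden ratio: taxable value over full market value.
valid = parcels[parcels["market_value"] > 0].copy()
valid["burden_ratio"] = valid["taxable_value"] / valid["market_value"]

# Parcels with no tax burden at all (e.g. exempt, government-owned land).
exempt = valid[valid["burden_ratio"] == 0]

# Average burden per neighborhood, highest first.
by_hood = valid.groupby("neighborhood")["burden_ratio"].mean()
print(by_hood.sort_values(ascending=False))
```

Flagging extreme neighborhoods is then a matter of comparing each group's mean ratio against chosen percentile cutoffs.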
What we learned from the chaos
Running three projects at once inside one hour was a stress test for both the product and the workflow. A few takeaways stood out.
Agentic workflows handle parallel projects well. We had three agents working on three different questions against the same dataset at the same time. Each produced its own plan, code, and outputs without stepping on the others. Context stayed organized, and we could bounce between projects as needed.
Modularity beats giant prompts. The people who get the most value from agentic tools are not writing long prompts that try to do everything at once. They are writing clear instructions for small, concrete steps and letting agents generate code for each block. This kept the work editable and debuggable.
Transparency builds trust. Every time the agent produced a chart or a table, the code that created it sat right beside it. When something looked odd, we could click into the block, read the logic, and adjust. That is how you keep experienced practitioners engaged instead of asking them to trust an opaque system.
Agentic coding shifts value to the bookends. Several questions from participants circled back to careers, junior talent, and the future of work. Agents now handle a lot of what used to be entry level coding tasks. The value shifts to the bookends, where people frame and scope the right problems, and then drive adoption and change in real organizations.
Tools like Zerve help with the middle. They make it faster to explore, model, visualize, and package work. They do not remove the need for judgment at the start or ownership at the end.
Try it yourself
If you want to explore your own data in the same way, you can sign up for Zerve, connect a dataset, and start a canvas of your own. Ask the agent to explore, clean, and model. Watch how it structures the work. Then jump in, edit the code, and push it toward something your team can ship.
FAQs
How much of the agentic coding livestream was prebuilt?
None of the analysis was prebuilt. The entire livestream shows real agentic coding as it happened, with only the dataset uploaded beforehand.
Can Zerve’s agentic workflow run multiple data projects in parallel?
Yes. Zerve handled all three data projects at the same time, each with its own plan, code, and outputs, without conflicts or lost context.
Does Zerve support larger datasets or secure data sources?
Yes. You can connect Zerve to cloud databases, lakehouses, or your own infrastructure so you can run agentic workflows on larger or private datasets.
How much prompting skill is required to use agentic coding in Zerve?
Very little. Short, direct prompts are enough for Zerve’s agent to plan, generate code, and structure each step of the workflow.
Is agentic coding reliable for serious data analysis and modeling?
Yes. Every block shows the exact code, inputs, and outputs, giving you full transparency and control over each part of the analysis.
Who benefits most from using an agentic canvas like Zerve?
Data scientists, analysts, engineers, and technical product leads all benefit from a shared, transparent workflow that supports real coding and real collaboration.

