How to Get More From LLMs When Working With Data

Greg Michaelson

10/13/2025

Large Language Models (LLMs) are becoming indispensable in data science workflows, but they’re most effective when guided with structure. You don’t need to treat them as magical black boxes. Instead, apply systematic strategies that let the model work as a collaborator while you stay in control of decisions and interpretations.

Structured Prompts Make Better Use of LLMs

Ad-hoc prompting often leads to inconsistent results. By framing requests with repeatable tactics, you can extract insights that are both relevant and actionable. The following practices have proven effective in getting more from LLMs in real data science work:

1. Provide Regular Data Snapshots

Give the LLM recurring, structured updates that summarize the dataset. Highlight real signals like correlations, missingness, or anomalies. This helps the model “see” the dataset’s shape and stay grounded in context.
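A minimal sketch of what such a snapshot helper could look like in Python with pandas. The function name `dataset_snapshot`, the choice of statistics, and the number of correlations reported are illustrative assumptions, not a prescribed format:

```python
import numpy as np
import pandas as pd

def dataset_snapshot(df: pd.DataFrame, top_corr: int = 5) -> str:
    """Build a compact, prompt-ready summary of a DataFrame."""
    lines = [f"Rows: {len(df)}, Columns: {df.shape[1]}"]

    # Columns with missing values, worst first
    missing = df.isna().mean().sort_values(ascending=False)
    missing = missing[missing > 0]
    if not missing.empty:
        lines.append("Missingness: " + ", ".join(f"{col}={frac:.1%}" for col, frac in missing.items()))

    # Strongest pairwise correlations among numeric columns
    numeric = df.select_dtypes("number")
    if numeric.shape[1] >= 2:
        corr = numeric.corr()
        upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
        pairs = upper.stack().sort_values(key=lambda s: s.abs(), ascending=False).head(top_corr)
        for (a, b), r in pairs.items():
            lines.append(f"corr({a}, {b}) = {r:.2f}")

    return "\n".join(lines)

# Example: paste the result into your prompt
# print(dataset_snapshot(df))
```

The point is not the exact statistics but that the summary is short, regenerated after each major change to the data, and pasted into the conversation so the model always reasons from the current state.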

2. Ask for Hypotheses

Even without direct access to raw data, LLMs can identify patterns from your summaries. Prompt them to generate hypotheses about potential relationships, trends, or drivers. These can act as starting points for deeper exploration.
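One way to frame that request is sketched below. The prompt wording and the `build_hypothesis_prompt` helper are illustrative, and the `snapshot` string is assumed to come from a summary like the one in the previous step:

```python
# Illustrative prompt template for hypothesis generation; adapt the wording
# to your own project. `snapshot` is assumed to be a text summary of the data.
HYPOTHESIS_PROMPT = """You are helping with exploratory data analysis.

Dataset summary:
{snapshot}

Based only on this summary, propose three to five testable hypotheses about
relationships, trends, or likely drivers of {target}. For each one, note which
part of the summary suggests it and how it could be checked against the data.
"""

def build_hypothesis_prompt(snapshot: str, target: str) -> str:
    return HYPOTHESIS_PROMPT.format(snapshot=snapshot, target=target)

# Example usage:
# prompt = build_hypothesis_prompt(dataset_snapshot(df), target="customer churn")
```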

3. Request Targeted Exploratory Analysis

Instead of broad prompts, request specific visualizations or metrics that align with your project goals. For example, “Suggest and explain a visualization for feature importance in predicting churn.”
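Taking that churn prompt as an example, here is a sketch of how you might implement the suggestion that comes back. Permutation importance is just one reasonable choice, and the synthetic data is a stand-in for a real churn table:

```python
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a churn dataset; replace with your own X and y
features, target = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X = pd.DataFrame(features, columns=[f"feature_{i}" for i in range(6)])
y = target

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance is model-agnostic and easy to explain back to the LLM
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
order = result.importances_mean.argsort()

plt.barh(X.columns[order], result.importances_mean[order])
plt.xlabel("Mean drop in score when the feature is shuffled")
plt.title("Permutation importance for churn prediction")
plt.tight_layout()
plt.show()
```

Sharing the resulting chart (or its numbers) back with the model closes the loop and keeps the next round of suggestions grounded.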

4. Keep the Model Updated

LLMs forget context quickly. Regularly recap progress, key findings, and next steps to maintain continuity. This prevents fragmented conversations and makes the model a consistent partner across the project lifecycle.
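A lightweight way to do this is to keep a running recap that you re-send at the start of each session. The structure and field names below are one possible convention, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectRecap:
    goal: str
    findings: list[str] = field(default_factory=list)
    decisions: list[str] = field(default_factory=list)
    next_steps: list[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        def bullets(items):
            return "\n".join(f"- {item}" for item in items) or "- (none yet)"
        return (
            f"Project goal: {self.goal}\n\n"
            f"Key findings so far:\n{bullets(self.findings)}\n\n"
            f"Decisions made:\n{bullets(self.decisions)}\n\n"
            f"Next steps:\n{bullets(self.next_steps)}\n\n"
            "Please keep this context in mind for the rest of the conversation."
        )

# Example usage with placeholder content
recap = ProjectRecap(goal="Reduce churn by identifying at-risk customers")
recap.findings.append("Tenure and support tickets are the strongest churn signals")
print(recap.to_prompt())
```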

5. Treat LLMs as Problem-Solvers

When facing issues like class imbalance, missing values, or inconsistent inputs, explain the situation and ask for strategies. The LLM can propose practical options, which you can then validate and implement in code.
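For instance, if the model suggests balanced class weights plus median imputation for an imbalanced dataset with gaps, a validation sketch might look like the following. Treat it as one option to benchmark, not a recommended fix; the synthetic data stands in for your own:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic imbalanced data with injected missing values
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan  # roughly 5% missingness

# The strategy under test: median imputation + balanced class weights
pipeline = make_pipeline(
    SimpleImputer(strategy="median"),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)

# Validate with a metric that stays meaningful under imbalance
scores = cross_val_score(pipeline, X, y, cv=5, scoring="f1")
print(f"F1 across folds: {scores.mean():.3f} ± {scores.std():.3f}")
```

The division of labor stays the same: the LLM proposes candidate strategies, and you decide which one survives validation.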

Making LLMs Work With You, Not For You

LLMs shine when used as structured assistants. They won’t replace your expertise, but they can help uncover blind spots, propose alternatives, and accelerate workflows when you set clear expectations. By combining your judgment with their capacity for pattern recognition, you get more reliable results and faster progress.

FAQs (Frequently Asked Questions)

How can I get more effective results from Large Language Models (LLMs) when working with data?

Use structured tactics like providing regular data snapshots, asking for hypotheses, requesting targeted exploratory analysis, keeping the model updated, and treating it as a problem-solver for issues such as data imbalance or missing values.

Why is it important to provide regular data snapshots to LLMs?

Regular snapshots with concise summaries highlight real signals like correlations and missingness, helping LLMs generate more relevant insights and support your workflow.

How can asking for hypotheses from LLMs help in data analysis?

LLMs can’t access raw data directly, but they can spot patterns in the information you share. Hypotheses from LLMs reveal potential relationships to investigate further.

What does targeted exploratory analysis involve when using LLMs?

It means explaining your dataset clearly and asking for specific visualizations or metrics. This produces focused, useful insights aligned with your goals.

Why should I keep my LLM current during a project?

LLMs forget context quickly. Recapping progress ensures they stay aligned with your project’s direction, making interactions more coherent and productive.

How should I treat LLMs when encountering issues like imbalance or missing values in my data?

Treat them as problem-solvers: describe the issue clearly and let the LLM suggest strategies or fixes tailored to the challenge.
