๐Ÿ€Zerve chosen as NCAA's Agentic Data Platform for 2026 Hackathonยท๐ŸงฎMeet the Zerve Team at Data Decoded Londonยท๐Ÿ“ˆWe're hiring โ€” awesome new roles just gone live!
Pandas

How to Handle Mixed Data Types in a Column โ€” Python Pandas

Answer

Use pd.to_numeric() with errors='coerce' to convert mixed columns to numeric (non-convertible values become NaN). For inspection, use df['col'].apply(type).value_counts() to see what types are actually in the column. Then clean or convert based on what you find.

Why This Happens

Columns that should be numeric often contain strings like "N/A", "$100", or "unknown" mixed with actual numbers. This causes calculations to fail, dtypes to default to object, and downstream operations to break. You need to identify the bad values and decide how to handle them.
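
To see the failure mode concretely, a small sketch with toy data: the column silently falls back to the generic object dtype, and numeric operations then fail on the string entries.

```python
import pandas as pd

# Numbers mixed with string placeholders
df = pd.DataFrame({'value': [100, '200', 'N/A', 300]})

# pandas falls back to the generic object dtype
print(df['value'].dtype)  # object

# Numeric operations break on the string entries
try:
    df['value'].mean()
except TypeError as exc:
    print('mean() failed:', exc)
```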

Solution

The rule: inspect with .apply(type) and .unique() first, clean known patterns, then convert with errors='coerce', and finally handle the resulting NaNs.

import pandas as pd

df = pd.DataFrame({'value': [100, '200', 'N/A', 300, '$400', None]})

# Each snippet below is an independent option; apply it to the raw column,
# not sequentially after an earlier conversion.

# ✅ Inspect what types are actually in the column
df['value'].apply(type).value_counts()

# ✅ See unique values to understand the mess
df['value'].unique()

# ✅ Convert to numeric, coercing errors to NaN
df['value'] = pd.to_numeric(df['value'], errors='coerce')

# ✅ Clean first, then convert (for patterns like $400)
df['value'] = df['value'].replace(r'[\$,]', '', regex=True)
df['value'] = pd.to_numeric(df['value'], errors='coerce')

# ✅ Handle specific bad values before conversion
df['value'] = df['value'].replace({'N/A': None, 'unknown': None})
df['value'] = pd.to_numeric(df['value'], errors='coerce')

# ✅ Check how many values failed conversion
df['value'].isna().sum()

# ✅ For mixed int/float, convert to a consistent type
df['value'] = df['value'].astype('Int64')  # nullable integer, keeps missing values as <NA>
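
Putting the steps together, a minimal end-to-end sketch on the same toy data: normalize placeholders, strip currency symbols, coerce, then report what failed.

```python
import pandas as pd

df = pd.DataFrame({'value': [100, '200', 'N/A', 300, '$400', None]})

# 1. Normalize known placeholder strings to missing
cleaned = df['value'].replace({'N/A': None, 'unknown': None})

# 2. Strip currency symbols and thousands separators from strings
cleaned = cleaned.replace(r'[\$,]', '', regex=True)

# 3. Coerce everything to numeric; anything left over becomes NaN
df['value'] = pd.to_numeric(cleaned, errors='coerce')

# 4. Report how many values could not be converted
print(df['value'].isna().sum())  # 2 (the 'N/A' and the None)
print(df['value'].tolist())
```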

Better Workflow

In Zerve, you build this as a visual pipeline: messy data → inspection → cleaning → conversion → visualization. Each block shows its output inline, so you can spot issues at any stage. Need to adjust your cleaning logic? Only downstream blocks re-run. The modular approach means each transformation step is isolated and testable. No more scrolling through monolithic cells trying to find where the data went wrong.

