Validate CSV data with Zod schemas
CSV files have no built-in schema — every field arrives as a string, and bad data only surfaces at runtime. snaptype generates a Zod schema from your CSV in one command, so you catch errors at the parsing step.
The problem
Most CSV parsers return Record<string, string>[] — every value is a string, regardless of what the column actually contains. Writing a Zod schema by hand means guessing which columns are numbers, which are dates, and which have a fixed set of valid values.
// After parsing — every field is a string, nothing is validated
const rows = await parseCSV("users.csv");
// type: Record<string, string>[]
// Writing the schema by hand means guessing
const UserSchema = z.object({
  id: z.string().transform(Number), // is this right?
  email: z.string(), // should be z.email()?
  plan: z.string(), // "free" | "pro"? maybe?
  joinedAt: z.string(), // a date? which format?
});

The schema drifts from the real data as soon as the CSV structure changes, and invalid rows — wrong types, malformed emails, unexpected plan values — slip through silently.
The solution
snaptype reads your CSV file and inspects the actual values in each column. It distinguishes numbers from strings, detects email and ISO date formats, and identifies columns with a small set of repeated values as enums — producing a Zod schema that reflects your real data, not a best guess.
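This kind of inference can be approximated by testing each column's values against progressively more specific predicates. A minimal sketch — not snaptype's actual implementation, and the enum-detection thresholds here are purely illustrative:

```typescript
// Hypothetical column-kind inference: check every value in a column
// against narrow predicates first, falling back to plain string.
type ColumnKind = "number" | "email" | "datetime" | "enum" | "string";

const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
const ISO_DATETIME_RE =
  /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})$/;

function inferColumnKind(values: string[]): ColumnKind {
  if (values.every((v) => v !== "" && !Number.isNaN(Number(v)))) return "number";
  if (values.every((v) => EMAIL_RE.test(v))) return "email";
  if (values.every((v) => ISO_DATETIME_RE.test(v))) return "datetime";
  // Few distinct values relative to row count suggests an enum column.
  const distinct = new Set(values);
  if (distinct.size <= 10 && distinct.size < values.length / 2) return "enum";
  return "string";
}
```

Each kind then maps to a Zod constructor (`"number"` → `z.number()`, `"enum"` → `z.enum([...distinct])`, and so on) when the schema file is emitted.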
How it works
Run one command against your CSV file:
npx snaptype from-csv users.csv --zod -o src/schemas/user.ts

Given a CSV with mixed column types:
id,email,plan,joinedAt
1,alice@example.com,pro,2024-01-15T10:30:00.000Z
2,bob@example.com,free,2024-03-22T08:00:00.000Z
3,carol@example.com,free,2024-04-10T14:15:00.000Z

snaptype generates a fully typed Zod schema:
import { z } from "zod";
export const UserSchema = z.object({
  id: z.number(),
  email: z.email(),
  plan: z.enum(["pro", "free"]),
  joinedAt: z.iso.datetime(),
});

export type User = z.infer<typeof UserSchema>;

Use it to validate every row at parse time:
import { UserSchema } from "./schemas/user";
const rows = await parseCSV("users.csv");
const users = rows.map((row) => UserSchema.parse(row));
// throws immediately on invalid data — before it reaches your business logic

What you get
- Runtime validation of every CSV row — bad data surfaces immediately, not silently in production
- Semantic inference: email columns → z.email(), date columns → z.iso.datetime()
- Low-cardinality string columns auto-promoted to z.enum() — no manual listing of valid values
- Numeric columns correctly typed as numbers, not strings — no .transform(Number) boilerplate
- TypeScript type derived from the schema via z.infer — one source of truth for both
Try snaptype in 30 seconds
No account needed. Works with any CSV or JSON file, or API endpoint.
npm install -D snaptype
npx snaptype from-csv users.csv --zod -o src/schemas/user.ts