Convert JSON to CSV: Complete Guide With Examples
Learn how to convert JSON to CSV for spreadsheets, databases, and data analysis. Covers flattening nested objects, handling edge cases, and programmatic conversion in JavaScript and Python.
When You Need JSON-to-CSV Conversion
JSON is the default format for APIs and web applications, but the moment data needs to leave the developer ecosystem — into a spreadsheet, a business intelligence tool, a database import, or a report emailed to a stakeholder — CSV is usually the expected format.
Common scenarios where JSON-to-CSV conversion comes up:
- Exporting API data — pulling records from a REST API and loading them into Excel or Google Sheets for analysis.
- Database imports — many databases (PostgreSQL, MySQL, SQLite) support CSV imports out of the box, but not JSON.
- Reporting — non-technical stakeholders need data in a format they can open in a spreadsheet, not a JSON viewer.
- Data pipelines — legacy ETL systems often expect CSV as an interchange format between stages.
Try it yourself: Paste a JSON array into our JSON to CSV converter for an instant download-ready CSV file.
Simple Conversion Walkthrough
The straightforward case: you have a JSON array of flat objects where every object has the same keys. This maps directly to a CSV table.
Input JSON:
```json
[
  { "name": "Alice", "age": 30, "city": "Portland" },
  { "name": "Bob", "age": 25, "city": "Seattle" },
  { "name": "Carol", "age": 35, "city": "Denver" }
]
```

Output CSV:

```csv
name,age,city
Alice,30,Portland
Bob,25,Seattle
Carol,35,Denver
```

The object keys become column headers, and each object becomes a row. This is the happy path, and it works for most API responses that return arrays of records.
The complexity comes when your JSON isn't flat. Nested objects, arrays within objects, missing keys, and mixed types all require decisions about how to represent hierarchical data in a flat table.
Handling Nested JSON
Real-world JSON is rarely flat. An API response for a user might include a nested address object, an array of roles, and optional fields that don't appear on every record. CSV has no concept of nesting, so you need a flattening strategy.
Strategy 1: Dot Notation
The most common approach. Nested keys are concatenated with dots to form a single column name.
Input:
```json
[
  {
    "name": "Alice",
    "address": {
      "city": "Portland",
      "state": "OR"
    }
  }
]
```

Output:

```csv
name,address.city,address.state
Alice,Portland,OR
```

Dot notation preserves the structural relationship in the column name. It works well for one or two levels of nesting, but gets unwieldy for deeply nested data (`user.profile.settings.notifications.email`).
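A minimal recursive flattener for this strategy can be sketched in a few lines of Python (the `flatten` helper name is ours, not a library API):

```python
def flatten(obj, parent_key="", sep="."):
    """Recursively flatten nested dicts into dot-notation keys."""
    items = {}
    for key, value in obj.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into nested objects, carrying the prefix along.
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

row = flatten({"name": "Alice", "address": {"city": "Portland", "state": "OR"}})
# row == {"name": "Alice", "address.city": "Portland", "address.state": "OR"}
```

Each flattened dict then maps directly to one CSV row, with the dotted keys as headers.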
Strategy 2: Separate Columns Per Leaf Value
Similar to dot notation, but with customized column names that drop the nesting hierarchy. For example, `address.city` becomes just `city`. This produces cleaner headers but requires manual column mapping, and it risks name collisions if different nested objects have keys with the same name.
Strategy 3: Stringify Nested Objects
For complex nested structures that don't map well to flat columns, serialize the nested value as a JSON string within the CSV cell.
Output:
```csv
name,address
Alice,"{""city"":""Portland"",""state"":""OR""}"
Bob,"{""city"":""Seattle"",""state"":""WA""}"
```

This preserves all the data, but the CSV cells contain raw JSON, which is not ideal for spreadsheet users. It works when the CSV is an intermediate format that will be parsed again by code.
Comparing Flattening Approaches
| Approach | Readability | Data loss | Deep nesting | Best for |
|---|---|---|---|---|
| Dot notation | Good | None | Gets verbose | 1-2 levels of nesting |
| Custom column names | Best | Possible collisions | Manual effort | Known, stable schemas |
| Stringify | Poor for humans | None | Handles any depth | Machine-to-machine pipelines |
Handling Arrays Within Objects
Arrays present a different challenge. Consider a user with multiple roles:
```json
{
  "name": "Alice",
  "roles": ["admin", "editor", "viewer"]
}
```

You have several options for representing this in CSV:
- Join as delimited string: `admin;editor;viewer` in a single cell. Simple, but requires parsing later.
- Separate columns: `roles.0,roles.1,roles.2`. Works when the maximum array length is known and small.
- Separate rows: one row per role, with the user's other data repeated. This denormalizes the data but avoids multi-value cells.
- Stringify: `["admin","editor","viewer"]` as raw JSON in the cell. Preserves the structure for later processing.
The right choice depends on who consumes the CSV. For spreadsheet users, the delimited-string approach is usually the most practical. For data pipelines, stringify preserves the most information.
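The delimited-string option can be sketched as a one-pass transform (the `join_arrays` helper name is ours):

```python
def join_arrays(record, delimiter=";"):
    """Replace list values with a single delimited string for spreadsheet use."""
    return {
        key: delimiter.join(map(str, value)) if isinstance(value, list) else value
        for key, value in record.items()
    }

user = {"name": "Alice", "roles": ["admin", "editor", "viewer"]}
# join_arrays(user) == {"name": "Alice", "roles": "admin;editor;viewer"}
```

Pick a delimiter that cannot appear inside the values themselves; a semicolon is a common choice because commas already carry meaning in CSV.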
Edge Cases
Every JSON-to-CSV conversion eventually hits these cases. Knowing about them upfront saves debugging time.
Null Values
JSON null can map to an empty cell, the literal string null, or a custom placeholder. Most tools use an empty cell, which is correct for spreadsheets but loses the distinction between "null" (explicitly no value) and "missing key" (field not present). If that distinction matters, use null as a literal string.
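One way to keep that distinction, sketched with a hypothetical `cell` helper:

```python
import json

records = json.loads('[{"name": "Alice", "email": null}, {"name": "Bob"}]')

def cell(record, key, null_marker="null"):
    """Distinguish an explicit JSON null from a missing key."""
    if key not in record:
        return ""  # missing key -> empty cell
    value = record[key]
    return null_marker if value is None else str(value)

# cell(records[0], "email") == "null"   (explicit null)
# cell(records[1], "email") == ""       (key not present)
```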
Mixed Types
A JSON field might be a string in one record and a number in another. CSV columns don't have types — everything is a string — so this usually works fine at the CSV level. The problem surfaces when importing into typed systems (databases, typed dataframes). Validate types after conversion.
Missing Keys
When objects in a JSON array don't all have the same keys, the converter needs to build a union of all keys across all objects. This produces a header row with every possible column, and objects missing a key get an empty cell for that column.
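Building that union while preserving first-seen key order can be sketched as (the `union_fieldnames` helper name is ours):

```python
def union_fieldnames(data):
    """Collect every key across all objects, preserving first-seen order."""
    fieldnames = {}  # dicts keep insertion order, so this doubles as an ordered set
    for record in data:
        for key in record:
            fieldnames.setdefault(key, None)
    return list(fieldnames)

data = [{"name": "Alice", "age": 30}, {"name": "Bob", "city": "Seattle"}]
# union_fieldnames(data) == ["name", "age", "city"]
```

Passing this list to `csv.DictWriter` with `restval=""` then fills the missing cells with empty strings automatically.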
```json
[
  { "name": "Alice", "age": 30 },
  { "name": "Bob", "city": "Seattle" }
]
```

Output:

```csv
name,age,city
Alice,30,
Bob,,Seattle
```

Special Characters in Values
CSV values that contain commas, double quotes, or newlines must be wrapped in double quotes. If the value itself contains double quotes, they're escaped by doubling them. Most conversion libraries handle this automatically, but if you're building your own converter, it's the most common source of bugs.
```csv
name,bio
Alice,"Loves ""JSON"" and coding"
Bob,"Portland, OR"
```

Programmatic Conversion
JavaScript / Node.js
For simple flat data, you can convert JSON to CSV in a few lines without any dependencies:
```ts
const jsonToCsv = (data: Record<string, unknown>[]) => {
  if (data.length === 0) return "";
  // Note: headers come from the first object only; see "Missing Keys" above.
  const headers = Object.keys(data[0]);
  const rows = data.map((row) =>
    headers
      .map((header) => {
        const value = row[header] ?? "";
        const str = String(value);
        // Quote fields containing commas, quotes, or newlines; double embedded quotes.
        return str.includes(",") || str.includes('"') || str.includes("\n")
          ? `"${str.replace(/"/g, '""')}"`
          : str;
      })
      .join(",")
  );
  return [headers.join(","), ...rows].join("\n");
};
```

For production use with nested data, libraries like json2csv (from the `@json2csv/plainjs` package) handle flattening, custom transforms, and edge cases:
```ts
import { Parser } from "@json2csv/plainjs";
import { flatten } from "@json2csv/transforms";

const parser = new Parser({
  transforms: [flatten({ separator: "." })],
});
const csv = parser.parse(data);
```

Python
Python's standard library has everything you need for basic conversions:
```python
import csv
import io

def json_to_csv(data: list[dict]) -> str:
    if not data:
        return ""
    output = io.StringIO()
    writer = csv.DictWriter(output, fieldnames=data[0].keys())
    writer.writeheader()
    writer.writerows(data)
    return output.getvalue()
```

For nested JSON, the pandas library flattens and converts in one call:
```python
import pandas as pd

df = pd.json_normalize(data, sep=".")
df.to_csv("output.csv", index=False)
```

`json_normalize` handles nested objects automatically, using dot notation for column names. It's the fastest way to go from a nested API response to a clean CSV file.
Large File Considerations
When your JSON file is hundreds of megabytes or more, the conversion approach needs to change. Loading the entire file into memory, parsing it, and building a CSV string in memory can easily exceed your available RAM.
Streaming parsers are the solution. Instead of loading the whole file, they process it incrementally:
- Node.js: Use `JSONStream` or `stream-json` to parse the JSON array one object at a time, convert each to a CSV row, and write it to an output stream. Memory stays constant regardless of file size.
- Python: The `ijson` library provides iterative JSON parsing. Combine it with `csv.writer` writing to a file handle for constant-memory conversion.
- CLI: `jq` processes JSON in a streaming fashion natively. For JSON-to-CSV specifically, tools like Miller (`mlr`) and `csvkit` are designed for large-scale format conversion.
A practical rule of thumb: if your JSON file is under 50 MB, in-memory conversion is fine. Above that, switch to a streaming approach. Above 1 GB, use a CLI tool like miller or a dedicated data processing framework.
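The writing side of a streaming pipeline can be sketched with the standard library alone; in a real pipeline the record iterator would come from a streaming parser such as `ijson` (the `stream_to_csv` helper and file names here are ours):

```python
import csv

def stream_to_csv(records, out_path, fieldnames):
    """Write rows one at a time so memory use stays flat regardless of input size."""
    with open(out_path, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=fieldnames, restval="")
        writer.writeheader()
        for record in records:
            writer.writerow(record)

# In practice `records` would be e.g. ijson.items(open("big.json", "rb"), "item"),
# so the full JSON array is never held in memory. A generator stands in here:
def fake_stream():
    for i in range(3):
        yield {"id": i, "name": f"user{i}"}

stream_to_csv(fake_stream(), "output.csv", fieldnames=["id", "name"])
```

Because each row is written and discarded before the next is parsed, peak memory is bounded by the largest single record, not the file size.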
Choosing the Right Approach
| Scenario | Recommended approach |
|---|---|
| Quick one-off conversion | Online tool or browser converter |
| Flat JSON, small files | Built-in language library (csv module, simple JS function) |
| Nested JSON, complex schema | pandas.json_normalize or @json2csv with flatten transform |
| Large files (>50 MB) | Streaming parser (ijson, stream-json) or CLI (miller) |
| Recurring pipeline | Script with a library, integrated into CI/ETL |
For most developers, the quick conversion path is all they need. Paste JSON, get CSV, move on. The complexity only matters when the data is nested, the files are large, or the conversion runs unattended in a pipeline.
Try it yourself: Paste your JSON data into our JSON to CSV converter to get a clean CSV file in seconds. Need to go the other way? Use our CSV to JSON tool.