ZON Performance Benchmarks

Version: v1.3.0 | Environment: Node.js (macOS)

Overview

We compared ZON against JSON (native) and MsgPack (msgpack-lite) across three datasets:

  1. Hiking (Small, Mixed): A typical LLM context object with metadata and a small table.
  2. Large Array (1k items): A tabular dataset with 1000 rows, testing table compression.
  3. Nested Object (Deep): A deeply nested structure to test recursion overhead.
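For reference, the three dataset shapes can be sketched as follows. The field names are illustrative, not the exact benchmark fixtures:

```typescript
// Illustrative dataset shapes (hypothetical fields, not the actual fixtures).

// 1. Hiking: small mixed object — metadata plus a short table of rows.
const hiking = {
  title: "Hiking trips",
  unit: "km",
  trips: [
    { name: "Ridge Loop", distance: 12.4, elevation: 890 },
    { name: "Lake Trail", distance: 8.1, elevation: 240 },
  ],
};

// 2. Large Array: 1000 uniform rows — the best case for table compression.
const largeArray = Array.from({ length: 1000 }, (_, i) => ({
  id: i,
  name: `item-${i}`,
  value: (i * 7) % 100,
  active: i % 2 === 0,
}));

// 3. Nested Object: a deep chain with little repetition, stressing recursion.
type Nested = { depth: number; child?: Nested };
let nested: Nested = { depth: 0 };
for (let i = 1; i < 50; i++) nested = { depth: i, child: nested };
```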

Results

1. Hiking Dataset (Small)

| Format  | Size (bytes) | Tokens (GPT) | Encode (ms) | Decode (ms) |
|---------|--------------|--------------|-------------|-------------|
| JSON    | 366          | 115          | 0.035       | 0.016       |
| MsgPack | 277          | N/A          | 0.145       | 0.045       |
| ZON     | 278          | 96           | 0.180       | 0.120       |

Result: ZON saves 16.5% tokens vs JSON.

2. Large Array (1000 items)

| Format  | Size (bytes) | Tokens (GPT) | Encode (ms) | Decode (ms) |
|---------|--------------|--------------|-------------|-------------|
| JSON    | 97,891       | 31,002       | 1.520       | 0.850       |
| MsgPack | 88,903       | N/A          | 3.200       | 1.800       |
| ZON     | 68,904       | 25,004       | 15.400      | 8.500       |

Result: ZON saves 19.3% tokens vs JSON.

3. Nested Object (Deep)

| Format  | Size (bytes) | Tokens (GPT) | Encode (ms) | Decode (ms) |
|---------|--------------|--------------|-------------|-------------|
| JSON    | 4,012        | 1,480        | 0.014       | 0.033       |
| MsgPack | 2,689        | N/A          | 0.199       | 0.124       |
| ZON     | 3,330        | 1,366        | 0.221       | 0.916       |

Result: ZON saves 7.7% tokens vs JSON.
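The savings percentages follow directly from the token columns above. A quick check, with the values copied from the tables:

```typescript
// Token counts taken from the three result tables above.
const tokens = {
  hiking: { json: 115, zon: 96 },
  largeArray: { json: 31_002, zon: 25_004 },
  nested: { json: 1_480, zon: 1_366 },
};

// savings (%) = (JSON tokens - ZON tokens) / JSON tokens, to one decimal place
const savings = (json: number, zon: number) =>
  Math.round(((json - zon) / json) * 1000) / 10;

console.log(savings(tokens.hiking.json, tokens.hiking.zon));         // 16.5
console.log(savings(tokens.largeArray.json, tokens.largeArray.zon)); // 19.3
console.log(savings(tokens.nested.json, tokens.nested.zon));         // 7.7
```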

Analysis

  • Token Efficiency: ZON consistently outperforms JSON in token count, with savings ranging from 7.7% to 19.3%. This translates directly into lower LLM costs and larger effective context windows.
  • Size: ZON is significantly smaller than JSON in bytes, and on the tabular dataset it even undercuts MsgPack's binary encoding, thanks to header deduplication.
  • Performance: While slower than native JSON (as expected), ZON's throughput is well within acceptable limits for real-world applications: the 1000-item array encodes in ~15 ms.
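The size advantage on tabular data comes from header deduplication: JSON repeats every key in every row, while a tabular layout writes the header once and then only values. A minimal sketch of the idea, not ZON's actual wire format:

```typescript
// Minimal sketch of header deduplication (not ZON's actual wire format).
type Row = Record<string, string | number | boolean>;

// Plain JSON repeats "id", "name", and "active" in every one of the 1000 rows...
const rows: Row[] = Array.from({ length: 1000 }, (_, i) => ({
  id: i,
  name: `item-${i}`,
  active: i % 2 === 0,
}));
const jsonBytes = Buffer.byteLength(JSON.stringify(rows));

// ...while a tabular layout stores the keys once, followed by value-only rows.
const header = Object.keys(rows[0]);
const table = [header, ...rows.map((r) => header.map((k) => r[k]))];
const tableBytes = Buffer.byteLength(JSON.stringify(table));

console.log(tableBytes < jsonBytes); // true — keys are no longer repeated per row
```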

Methodology

Benchmarks were run using benchmarks/performance.ts.

  • JSON: JSON.stringify / JSON.parse
  • MsgPack: msgpack-lite
  • ZON: zon-format v1.3.0
  • Tokens: Measured using gpt-tokenizer (GPT-3.5/4 encoding).
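The timing approach can be sketched as a small helper like the one below. This is illustrative only — the real benchmarks/performance.ts also exercises msgpack-lite, zon-format, and gpt-tokenizer; here only the native JSON path is shown:

```typescript
import { performance } from "node:perf_hooks";

// Time encode/decode over many iterations and return per-iteration averages in ms.
function bench(value: unknown, iterations = 100) {
  const t0 = performance.now();
  let encoded = "";
  for (let i = 0; i < iterations; i++) encoded = JSON.stringify(value);
  const t1 = performance.now();
  for (let i = 0; i < iterations; i++) JSON.parse(encoded);
  const t2 = performance.now();
  return {
    bytes: Buffer.byteLength(encoded),
    encodeMs: (t1 - t0) / iterations,
    decodeMs: (t2 - t1) / iterations,
  };
}

const result = bench({ hello: "world", n: [1, 2, 3] });
console.log(result); // prints the byte size and average encode/decode times
```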