Intelligent Markdown chunking that preserves document structure and semantic relationships. Creates token-aware chunks optimized for embedding model context windows. Built on League CommonMark and Yethee\Tiktoken.
Clone the Interactive Demo to experiment with chunking in real time. Paste your Markdown, adjust parameters, and see how content gets split into semantic chunks.
use League\CommonMark\Environment\Environment;
use League\CommonMark\Parser\MarkdownParser;
use League\CommonMark\Extension\CommonMark\CommonMarkCoreExtension;
use League\CommonMark\Extension\Table\TableExtension;
use BenBjurstrom\MarkdownObject\Build\MarkdownObjectBuilder;
use BenBjurstrom\MarkdownObject\Tokenizer\TikTokenizer;
// 1) Parse Markdown with CommonMark
$env = new Environment();
$env->addExtension(new CommonMarkCoreExtension());
$env->addExtension(new TableExtension());
$parser = new MarkdownParser($env);
$filename = 'guide.md';
$markdown = file_get_contents($filename);
$doc = $parser->parse($markdown);
// 2) Build the structured model
$builder = new MarkdownObjectBuilder();
$tokenizer = TikTokenizer::forModel('gpt-3.5-turbo');
$mdObj = $builder->build($doc, $filename, $markdown, $tokenizer);
// 3) Emit hierarchically-packed chunks
$chunks = $mdObj->toMarkdownChunks(target: 512, hardCap: 1024);
foreach ($chunks as $chunk) {
    echo "---\n";
    echo "Chunk: {$chunk->id} | {$chunk->tokenCount} tokens";

    // Source position tracking for finding chunks in the original document
    $pos = $chunk->sourcePosition;
    if ($pos->lines !== null) {
        echo " | Line: {$pos->lines->startLine}";
    }

    echo "\n";
    echo implode(' › ', $chunk->breadcrumb) . "\n";
    echo "---\n\n";
    echo $chunk->markdown . "\n\n";
}
/*
---
Chunk: 1 | 163 tokens | Line: 1
demo.md › Getting Started
---
# Getting Started
Welcome to the Markdown Object demo! This tool helps you visualize how markdown is parsed and chunked.
## Features
### Real-time Processing
Type or paste markdown in the left pane and see the results instantly.
### Hierarchical Chunking
Content is automatically organized into semantic chunks that keep related information together…
---
Chunk: 2 | 287 tokens | Line: 18
demo.md › Getting Started › Advanced Options
---
## Advanced Options
Configure chunking parameters to see how different settings affect the output.
### Token Limits
Adjust the target and hard cap values to control chunk sizes…
*/

You can install the package via composer:

composer require benbjurstrom/markdown-object

// Serialize to JSON
$json = $mdObj->toJson(JSON_PRETTY_PRINT);
// Deserialize from JSON
$copy = \BenBjurstrom\MarkdownObject\Model\MarkdownObject::fromJson($json);

use BenBjurstrom\MarkdownObject\Tokenizer\TikTokenizer;
// Use a different model
$tokenizer = TikTokenizer::forModel('gpt-4');
// Or use a specific encoding
$tokenizer = TikTokenizer::forEncoding('p50k_base');
// Pass to both build() and toMarkdownChunks()
$mdObj = $builder->build($doc, $filename, $markdown, $tokenizer);
$chunks = $mdObj->toMarkdownChunks(
    target: 512,
    hardCap: 1024,
    tok: $tokenizer
);

$chunks = $mdObj->toMarkdownChunks(
    target: 256,               // Smaller target for content splitting
    hardCap: 512,              // Smaller hard cap for hierarchy
    tok: $customTokenizer,     // Optional: use a different tokenizer
    repeatTableHeaders: false  // Optional: don't repeat headers in split tables
);

Chunk token counts include the separator tokens (\n\n) added when joining content pieces, so they may be slightly higher than the sum of individual node tokens. This is expected and ensures the count accurately reflects what will be embedded.
// Build-time: sum of nodes (no separators)
echo $mdObj->tokenCount; // e.g., 155
// Chunk: includes \n\n separators between elements
echo $chunks[0]->tokenCount; // e.g., 163 (8 tokens higher)

The package uses hierarchical greedy packing to create semantically coherent chunks that respect your document's natural structure.
The chunker intelligently splits content using a two-threshold system:

- target – Soft limit for splitting large content blocks (paragraphs, code, tables)
- hardCap – Hard limit for hierarchical decisions (when to split vs. keep sections together)
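The role of the soft target limit can be illustrated with a toy splitter. This is a simplified sketch, not the package's implementation: it breaks a long paragraph at sentence boundaries and uses whitespace-delimited words as a stand-in for real tokens.

```php
<?php

// Toy stand-in for a tokenizer: counts whitespace-delimited words.
function countTokens(string $text): int
{
    return count(preg_split('/\s+/', trim($text), -1, PREG_SPLIT_NO_EMPTY));
}

// Split a long paragraph into pieces of at most $target "tokens",
// breaking only at sentence boundaries to preserve readability.
function splitAtTarget(string $paragraph, int $target): array
{
    $sentences = preg_split('/(?<=[.!?])\s+/', trim($paragraph));
    $pieces = [];
    $current = '';
    foreach ($sentences as $sentence) {
        $candidate = $current === '' ? $sentence : $current . ' ' . $sentence;
        if ($current !== '' && countTokens($candidate) > $target) {
            $pieces[] = $current;  // piece is full; flush it
            $current = $sentence;  // start a new piece
        } else {
            $current = $candidate; // still under target; keep appending
        }
    }
    if ($current !== '') {
        $pieces[] = $current;
    }
    return $pieces;
}

$paragraph = 'First sentence here. Second sentence follows. A third one ends it.';
foreach (splitAtTarget($paragraph, 6) as $i => $piece) {
    echo ($i + 1) . ': ' . $piece . "\n";
}
// 1: First sentence here. Second sentence follows.
// 2: A third one ends it.
```

The real chunker applies the same idea to paragraphs, code blocks, and tables, using actual token counts from the tokenizer rather than word counts.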
- Start whole – If the entire document fits within hardCap, return it as a single chunk
- Split hierarchically – When too large, split at the highest heading level (H1, then H2, etc.)
- Pack greedily – Combine sibling sections that fit together within hardCap
- Recurse deeply – Sections that don't fit are processed recursively with updated breadcrumbs
- Minimize fragments – After recursion, continue packing remaining siblings to avoid orphaned content
- Split smartly – Long paragraphs, code blocks, and tables break at target boundaries while preserving readability
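The pack-greedily step above can be sketched in isolation. This is a simplified illustration (again using word counts as a stand-in for tokens); the package itself packs nodes of the parsed document tree, not raw strings.

```php
<?php

// Toy token count: whitespace-delimited words.
function tokenCount(string $text): int
{
    return count(preg_split('/\s+/', trim($text), -1, PREG_SPLIT_NO_EMPTY));
}

// Greedily pack sibling sections into chunks that stay within $hardCap.
// Sections are joined with "\n\n", whose tokens a real tokenizer also counts.
function packSiblings(array $sections, int $hardCap): array
{
    $chunks = [];
    $current = '';
    foreach ($sections as $section) {
        $candidate = $current === '' ? $section : $current . "\n\n" . $section;
        if ($current !== '' && tokenCount($candidate) > $hardCap) {
            $chunks[] = $current; // close the current chunk
            $current = $section;  // start a new one with this section
        } else {
            $current = $candidate; // section still fits; keep packing
        }
    }
    if ($current !== '') {
        $chunks[] = $current;
    }
    return $chunks;
}

$sections = [
    "## Intro\nShort overview text.",
    "## Setup\nA few setup instructions.",
    "## Reference\nA much longer reference section with many many more words in it.",
];
// The first two sections pack together; the third starts a new chunk.
echo count(packSiblings($sections, 12)) . " chunks\n";
// 2 chunks
```

A section that exceeds hardCap on its own (like the third one here) is where the real algorithm recurses: it descends into the section's subsections and content blocks instead of emitting an oversized chunk.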
Run the tests with:
composer test

For detailed architecture documentation, see ARCHITECTURE.md.
For examples of hierarchical packing behavior, see EXAMPLES.md.
Please see CHANGELOG for more information on what has changed recently.
Contributions are welcome! Please feel free to submit a Pull Request.
Please review our security policy on how to report security vulnerabilities.
The MIT License (MIT). Please see License File for more information.