feat(feishu): add rich markdown, table ops, image upload, and color text#26790
Elarwei001 wants to merge 22 commits into openclaw:main
Conversation
Hi @doodlewind 👋 I noticed you're the original author of the Feishu docx implementation. This PR replaces the Children API with the Descendant API for block insertion; the reasoning and open questions are in the PR description. Happy to adjust the approach if needed. Thanks! 🙏
be7c05e to
4fc3e79
Compare
When columns hit MAX_COLUMN_WIDTH, the remaining space was not being distributed. It is now redistributed evenly across all columns that haven't reached the max, so tables fill the page width.
Instead of hardcoding 730px, use the sum of the original column_width values from the Convert API. This ensures tables match Feishu's expected dimensions for any document layout.
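The redistribution step described in these two commits could look roughly like the sketch below. The `MAX_COLUMN_WIDTH` value and the function shape are assumptions for illustration, not the PR's actual code: each column is clamped to the cap, then leftover width is spread evenly across columns still below it so the table fills `totalWidth`.

```typescript
// Hypothetical sketch: clamp each column to MAX_COLUMN_WIDTH, then spread
// the leftover width evenly across columns still below the cap so the
// table fills totalWidth. The 400px cap is an assumed value.
const MAX_COLUMN_WIDTH = 400;

function distributeWidths(desired: number[], totalWidth: number): number[] {
  const widths = desired.map((w) => Math.min(w, MAX_COLUMN_WIDTH));
  let leftover = totalWidth - widths.reduce((a, b) => a + b, 0);
  while (leftover > 0) {
    // Indices of columns that can still grow.
    const open = widths
      .map((w, i) => (w < MAX_COLUMN_WIDTH ? i : -1))
      .filter((i) => i >= 0);
    if (open.length === 0) break; // every column is at the cap
    const share = Math.max(1, Math.floor(leftover / open.length));
    for (const i of open) {
      const add = Math.min(share, MAX_COLUMN_WIDTH - widths[i], leftover);
      widths[i] += add;
      leftover -= add;
      if (leftover === 0) break;
    }
  }
  return widths;
}
```

With `totalWidth` taken from the sum of the Convert API's column_width values, the output always sums back to the page width unless every column is capped.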
- Add insertBlocksInBatches() for documents exceeding the API limit
- Add collectDescendants() to properly split blocks with their children
- Show batch progress in logs: 'Inserting batch 1/3 (1000 blocks)...'
- Automatically choose single vs. batched insert based on block count
Move large document handling (>1000 blocks) to batch-insert.ts:
- collectDescendants() - gather blocks with their children
- insertBlocksInBatches() - split and insert in batches

Keep docx.ts focused on the common case (<1000 blocks).
The API limits BOTH children_id AND descendants to 1000 each. The previous logic batched by first-level IDs, but the descendant count could still exceed 1000. Now each batch is guaranteed to contain ≤1000 total blocks.
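The batching rule in this commit can be sketched as follows. The `Batch` shape with `firstLevelIds` and `blocks` mirrors the structure visible in the PR's diff; the function name and input shape are assumptions. Each first-level block arrives bundled with all of its descendants, and a batch is flushed before it would exceed `BATCH_SIZE` total blocks.

```typescript
// Sketch of batching by total block count (self + descendants), since the
// API caps both children_id and descendants at 1000. Assumes no single
// first-level block brings more than BATCH_SIZE blocks on its own.
const BATCH_SIZE = 1000;

interface Batch<T> {
  firstLevelIds: string[];
  blocks: T[];
}

function batchByDescendants<T>(
  firstLevel: { id: string; blocks: T[] }[], // blocks = block itself + descendants
): Batch<T>[] {
  const batches: Batch<T>[] = [];
  let current: Batch<T> = { firstLevelIds: [], blocks: [] };
  for (const { id, blocks } of firstLevel) {
    // Flush the current batch before it would exceed the limit.
    if (
      current.blocks.length + blocks.length > BATCH_SIZE &&
      current.blocks.length > 0
    ) {
      batches.push(current);
      current = { firstLevelIds: [], blocks: [] };
    }
    current.firstLevelIds.push(id);
    current.blocks.push(...blocks);
  }
  if (current.blocks.length > 0) batches.push(current);
  return batches;
}
```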
@greptile-apps please review the latest changes:

New Features Added
Files Changed

Please review for code quality and potential issues.
Greptile Summary

Adds rich markdown support to Feishu document creation, including native table rendering, adaptive column widths, and batch insertion for large documents (>1000 blocks).

Key Changes:
Architecture:
Testing:

Confidence Score: 4/5
Last reviewed commit: 86e0e14
```typescript
// If adding this first-level block would exceed limit, start new batch
if (
  currentBatch.blocks.length + newBlocks.length > BATCH_SIZE &&
  currentBatch.blocks.length > 0
) {
  batches.push(currentBatch);
  currentBatch = { firstLevelIds: [], blocks: [] };
}
```
If a single first-level block has >1000 descendants, it will still be added to an empty batch, exceeding the API limit.

The condition `currentBatch.blocks.length > 0` prevents starting a new batch when the current batch is empty, but doesn't handle the case where `newBlocks.length` alone exceeds `BATCH_SIZE`.

Example scenario:
- empty batch (length = 0)
- single block with 1500 descendants
- check: `0 + 1500 > 1000 && 0 > 0` → false
- adds all 1500 blocks → exceeds limit
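One way to close this gap is to chunk an oversized group before it ever reaches the batching loop. This is a sketch, not the PR's actual fix, and the function name is hypothetical; note that in the real Descendant API, chunks after the first would have to be inserted under the already-created parent in a follow-up call, which this sketch does not show.

```typescript
// Hypothetical guard for the reviewed edge case: if one first-level block
// brings more than BATCH_SIZE blocks on its own, split its block list into
// BATCH_SIZE-sized chunks instead of appending it whole. Re-parenting the
// later chunks is left to the caller.
const BATCH_SIZE = 1000;

function splitOversized<T>(blocks: T[]): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < blocks.length; i += BATCH_SIZE) {
    chunks.push(blocks.slice(i, i + BATCH_SIZE));
  }
  return chunks;
}
```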
(Review location: extensions/feishu/src/batch-insert.ts, lines 120-127)
Hi @m1heng, would appreciate your review when you have a moment. This PR adds table rendering, image upload, table manipulation, and text color to the Feishu doc tool. These come up frequently when agents try to generate structured reports or insert diagrams, and they are hard to work around without native API support. The PR description has the full details. Thanks!
New actions:
- insert_table_row: insert a new row at a specified index
- insert_table_column: insert a new column at a specified index
- delete_table_rows: delete rows starting from an index
- delete_table_columns: delete columns starting from an index
- merge_table_cells: merge a range of cells
Move row/column manipulation functions to a dedicated module:
- insertTableRow
- insertTableColumn
- deleteTableRows
- deleteTableColumns
- mergeTableCells
Supports a simple markup syntax for colored text:
'Revenue [green]+15%[/green] YoY, Costs [red]-3%[/red]'
Tags: [red] [green] [blue] [orange] [yellow] [purple] [grey]
[bold] [bg:color] (background color)
Use case: data reports with red/green highlighting for up/down trends.
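A minimal sketch of how such markup could be tokenized into colored text runs. The function name, the `Run` shape, and the single-pass regex approach are assumptions about the real parser; only the tag set comes from the commit message above (and `[bold]`/`[bg:color]` are omitted for brevity).

```typescript
// Sketch: split a string with [color]...[/color] markup into runs of
// plain and colored text. Nesting is not handled in this minimal version.
interface Run {
  text: string;
  color?: string;
}

function parseColorMarkup(input: string): Run[] {
  const runs: Run[] = [];
  // Matching open/close tags via a backreference to the tag name.
  const re = /\[(red|green|blue|orange|yellow|purple|grey)\]([\s\S]*?)\[\/\1\]/g;
  let last = 0;
  let m: RegExpExecArray | null;
  while ((m = re.exec(input)) !== null) {
    if (m.index > last) runs.push({ text: input.slice(last, m.index) });
    runs.push({ text: m[2], color: m[1] });
    last = re.lastIndex;
  }
  if (last < input.length) runs.push({ text: input.slice(last) });
  return runs;
}
```

Each run would then map onto a Feishu text element with the corresponding text_color style.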
Add usage examples for:
- insert_table_row / insert_table_column
- delete_table_rows / delete_table_columns
- merge_table_cells
- color_text with markup syntax and data report workflow
Supports uploading local images to Feishu documents:
- data URI: data:image/png;base64,...
- plain base64 string
- absolute file path

Flow follows the official Feishu FAQ:
1. Create an empty image block (Children API)
2. Upload the image with parent_node = image block ID
3. Set the image token via a replace_image patch

Note: the tested implementation matches the official API docs. The JP-cluster cross-region issue was observed in the test env only.
Fixes image upload for JP cluster (and other non-default regions).
When parent_node is a doxjp... block ID, the drive service at the
global endpoint couldn't route to the JP datacenter, returning
'parent node not exist' (1061044).
Solution: pass extra={drive_route_token: docToken} to upload_all API.
This tells the drive service which document the block belongs to,
enabling correct datacenter routing.
Affects:
- processImages(): URL-based images in write/append actions
- uploadImageAction(): new upload_image action
Ref: Feishu docs - drive.media.uploadAll 'extra' param
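The routing payload from this commit is small enough to show directly. Per the commit message, the upload_all API accepts an `extra` field carrying the document token; the helper name below is hypothetical, and the actual SDK call is omitted since its exact signature should be taken from the Feishu drive.media.uploadAll docs.

```typescript
// Sketch: build the 'extra' payload that tells the drive service which
// document the target block belongs to, so the upload is routed to the
// correct datacenter (e.g. the JP cluster for doxjp... block IDs).
function buildUploadExtra(docToken: string): string {
  // The extra field is passed as a JSON-encoded string.
  return JSON.stringify({ drive_route_token: docToken });
}
```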
….from()

Readable.from(buffer) creates a stream with unknown Content-Length, causing form-data to declare the wrong size in headers. The server rejects with 'Error when parsing request' for images >~1KB. Passing the Buffer directly lets form-data calculate Content-Length correctly, fixing uploads for real-world image sizes. Also removes the unused Readable import from stream.
JPEG base64 strings start with '/9j/' (JPEG magic bytes in base64), which was incorrectly matched as an absolute file path.

Fix: also require path length < 1024 chars. Real file paths are short; base64-encoded images are thousands of characters long.

All 3 input formats now tested and passing:
- data URI: data:image/jpeg;base64,... ✅
- plain base64: /9j/4AAQSkZJRg... ✅
- file path: /Users/.../image.jpg ✅
Allows inserting images from http/https URLs directly:
upload_image { image: "https://example.com/photo.jpg" }
Also updates schema description to document all 4 input types:
- https/http URL (downloads via OpenClaw media fetcher)
- data URI (data:image/jpeg;base64,...)
- plain base64 string
- absolute file path
Passes mediaMaxBytes from account config for size limit enforcement.
- Rename batch-insert.ts → docx-batch-insert.ts
- Rename color-text.ts → docx-color-text.ts
- Merge table-ops.ts + table-utils.ts → docx-table-ops.ts
- Extract image operations from docx.ts → docx-picture-ops.ts
- Fix file path detection: use path.isAbsolute() + os.homedir()
instead of checking for leading '/' characters (base64 also uses '/')
- Fix plain base64 detection: standard base64 alphabet includes '/'
so the previous !includes('/') check broke valid JPEG base64 strings
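The corrected detection order described across these commits could be sketched as below. This combines the `path.isAbsolute()` fix with the earlier length heuristic; the function name, return labels, and exact ordering are assumptions for illustration.

```typescript
import * as path from "node:path";

// Sketch: classify an image input. URLs and data URIs are matched by
// prefix; file paths by path.isAbsolute() plus a length cap, because
// valid base64 (e.g. JPEG's '/9j/...') also starts with '/' but real
// file paths are short while base64 images run to thousands of chars.
type ImageInput = "url" | "data-uri" | "file-path" | "base64";

function detectImageInput(value: string): ImageInput {
  if (/^https?:\/\//.test(value)) return "url";
  if (value.startsWith("data:image/")) return "data-uri";
  if (path.isAbsolute(value) && value.length < 1024) return "file-path";
  return "base64";
}
```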
Adds action: 'insert' to the feishu_doc tool.
- Resolves the target block's parent and index via documentBlock.get and documentBlockChildren.get
- Converts markdown via the Convert API, then inserts using the Descendant API with an explicit index parameter (supports both small and large docs)
- Large documents (>1000 blocks) use batch insert with an advancing index
- Also threads parent/index params through insertBlocksWithDescendant and insertBlocksInBatches for full position support
Changes

Extends Feishu document operations with several new capabilities. All new actions are dispatched through the existing `feishu_doc` tool.

Core: Markdown → Descendant API

Replaced the old Children API path with the Descendant API for all block insertions. The Convert API handles GFM tables natively (block types 31/32), so the flow is now:

Large documents (>1000 blocks) are automatically batched (`docx-batch-insert.ts`), preserving parent-child relationships. Table column widths are calculated from cell content length (CJK chars weighted 2x) and proportionally distributed to fill the page width.

New Actions

- `upload_image`
- `color_text` with `[red]text[/red]` markup
- `insert_table_row` / `insert_table_column`
- `delete_table_rows` / `delete_table_columns`
- `merge_table_cells`

Image Upload

Three issues resolved for reliable image embedding:

- When `parent_node` is a document block ID, the API requires passing `extra: { drive_route_token: docToken }`. Per the API docs, certain upload scenarios require providing the token of the cloud document that the block belongs to.
- Pass the `Buffer` directly to the upload form, not `Readable.from(buf)`. Streams have unknown length, causing a Content-Length mismatch that silently truncates uploads.
- Use `path.isAbsolute()`: the standard base64 alphabet includes `/`, so checking `!includes('/')` breaks valid JPEG base64.

The same fixes apply to inline images processed during `write`/`append`.

File Structure
Closes #26222