
[ABI] Introduce ShapeView, minimize TensorObj exposure (#67)

Merged
tqchen merged 1 commit into apache:main from tqchen:tensor-abi
Sep 27, 2025
Conversation


@tqchen tqchen commented Sep 27, 2025

This PR minimizes TensorObj ABI exposure so that the C++ API depends only on the behavior of the DLTensor field. We also introduce ShapeView to reduce managed copies of the shape; the change will make future C++-side dependencies more stable.

We also add a few helper functions, such as data_ptr(), ndim(), and numel(), to ffi::Tensor.
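The view-vs-copy distinction can be sketched in Python; `ShapeViewSketch` and its methods are illustrative stand-ins for the idea, not the actual C++ `ShapeView` API:

```python
import array

class ShapeViewSketch:
    """Non-owning view over shape storage already held elsewhere (a sketch)."""

    def __init__(self, storage, ndim):
        self._storage = storage  # borrowed reference; no copy of the shape data
        self._ndim = ndim

    def __len__(self):
        return self._ndim

    def __getitem__(self, i):
        if not 0 <= i < self._ndim:
            raise IndexError(i)
        return self._storage[i]

    def numel(self):
        # mirrors a numel() helper: product of all extents
        n = 1
        for i in range(self._ndim):
            n *= self._storage[i]
        return n

# the shape data lives in one allocation; the view just points at it
shape_storage = array.array("q", [2, 3, 4])
view = ShapeViewSketch(shape_storage, 3)
```

A view like this stays valid only while the backing storage is alive, which is why it suits short-lived, non-owning access to a tensor's shape.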

@gemini-code-assist
Contributor

Summary of Changes

Hello @tqchen, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request significantly refines the Application Binary Interface (ABI) for Tensor objects by introducing a new ShapeView class and minimizing the internal state exposed by TensorObj. The primary goal is to enhance ABI stability for future C++ dependencies by ensuring that the C++ API relies more directly on the DLTensor field's behavior. This involves optimizing memory management through in-place allocation of shape and strides data and providing direct accessors for key tensor properties.

Highlights

  • Introduction of ShapeView: A new lightweight, non-owning ShapeView class has been introduced to efficiently represent shape data, reducing managed copies and memory allocations associated with Shape objects.
  • Minimized TensorObj ABI Exposure: The internal structure of TensorObj has been streamlined by removing internal state like cached DLManagedTensorVersioned, shape_data_, and strides_data_. This ensures the C++ API primarily depends on the behavior of the DLTensor field, leading to a more stable ABI.
  • In-place Allocation for Shape and Strides: Shape and strides data are now allocated in-place directly after TensorObj derived structures (e.g., TensorObjFromNDAlloc, TensorObjFromDLPack), optimizing memory usage and reducing the need for separate Shape object allocations.
  • New Helper Functions for ffi::Tensor: Convenience methods data_ptr(), ndim(), and numel() have been added to the ffi::Tensor class, providing direct access to underlying DLTensor properties.
  • Version Bump: The project version has been updated from 0.1.0b9 to 0.1.0b10.
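The in-place allocation idea can be illustrated with one contiguous buffer that co-locates ndim, the shape, and the strides, analogous to placing them directly after the object header. This is a sketch assuming compact row-major strides; the helper names are illustrative, not the real allocator:

```python
import array

def pack_meta(shape):
    """Pack [ndim, *shape, *strides] into a single allocation."""
    ndim = len(shape)
    strides = [1] * ndim
    for i in range(ndim - 2, -1, -1):  # compact row-major strides
        strides[i] = strides[i + 1] * shape[i + 1]
    return array.array("q", [ndim, *shape, *strides])

def unpack_shape(buf):
    ndim = buf[0]
    return list(buf[1 : 1 + ndim])

def unpack_strides(buf):
    ndim = buf[0]
    return list(buf[1 + ndim : 1 + 2 * ndim])

meta = pack_meta([2, 3, 4])
```

Co-locating the metadata in one allocation is what removes the separate Shape object allocations the old layout needed.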


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces ShapeView as a lightweight, non-owning view for tensor shapes to reduce memory allocations and copies. It also refactors TensorObj to minimize its ABI exposure by removing internal caching and data members, making the C++ API more stable. Additionally, several convenient helper functions (data_ptr, ndim, numel) have been added to ffi::Tensor. The changes are well-motivated and the implementation is solid. I've identified a couple of areas for improvement regarding type safety and code duplication, which are detailed in the comments.

Comment threads:
- include/tvm/ffi/container/shape.h (outdated)
- include/tvm/ffi/container/tensor.h
@tqchen tqchen merged commit 8ca0719 into apache:main Sep 27, 2025
7 checks passed
yzh119 added a commit to flashinfer-ai/flashinfer that referenced this pull request Sep 29, 2025

## 📌 Description

The codegen logic for PyTorch and TVM can be unified after #1641, and this PR cleans up the related codegen functions in tvm_bindings.

Other changes:
1. update tvm-ffi to 0.1.0b11 to incorporate apache/tvm-ffi#67 and
apache/tvm-ffi#68
2. rename source files: `_ops.cu` and `_pybind.cu` are renamed to
`_binding.cu`
3. remove torch-related header includes/library linking in ninja files
(#1642 (comment))
4. remove the use of `use_torch_stream` in unittests; it is no longer
required after apache/tvm-ffi#68

## 🔍 Related Issues

#1641 

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull
request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [ ] I have installed `pre-commit` by running `pip install pre-commit`
(or used your preferred method).
- [ ] I have installed the hooks with `pre-commit install`.
- [ ] I have run the hooks manually with `pre-commit run --all-files`
and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the
pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [ ] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

## Reviewer Notes

cc @MasterJH5574 please let us know what changes we need to make to
help you bump to the latest version of flashinfer in MLC.
Kathryn-cat pushed a commit to Kathryn-cat/tvm-ffi that referenced this pull request Apr 24, 2026
Adds a pure tree-walking interpreter for `tvm_ffi.pyast` expression trees. The
evaluator never calls `compile`/`eval`/`exec`, never synthesizes source text,
and never round-trips through the stdlib `ast` module.

- Class-level dispatch table `_EXPR_EVALUATOR_DISPATCH: {pyast-node-type: handler}`
  drives `ExprEvaluator.eval`.
- `OperatorDispatch` keyed by `(op_kind, operand_type, operand_index)`; `lookup`
  walks operand MRO left-to-right; `invoke` falls back to `_NATIVE_HANDLERS`
  (a `{OperationKind: operator-module-callable}` table).
- `And`/`Or`/`IfThenElse`/`Parens`/`ChainedCompare` bypass user dispatch entirely
  for short-circuit correctness; unary ops go direct to native; binary ops route
  through `dispatch.invoke`. `ChainedCompare` is flattened to pairwise
  comparisons combined with `and` semantics.
- Scope is `Mapping[str, Any]`; internally wrapped in a `ChainMap` so
  `WalrusExpr` and per-lambda/per-comprehension bindings layer over but do not
  mutate the caller's mapping.
- `_build_generator` evaluates the outermost iterable eagerly (matching Python
  genexp semantics) and iterates remaining clauses lazily inside a generator.
- `Yield`/`YieldFrom`/`AwaitExpr` are recognized but raise `EvaluationError` on
  evaluation — valid tree shapes the evaluator refuses to execute.
- `eval_assign` rejects `Attr`/`Index` targets (evaluator only produces new
  local bindings) and supports at most one `StarredExpr` per sequence with
  matching trailing/middle capture.
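A minimal Python sketch of the class-level dispatch-table pattern described above. The node classes (`Const`, `Name`, `BinOp`, `And`) are hypothetical stand-ins, not the actual `tvm_ffi.pyast` types; note how `And` short-circuits without going through operator dispatch:

```python
import operator

# Hypothetical stand-in node classes for illustration only.
class Const:
    def __init__(self, v): self.v = v

class Name:
    def __init__(self, ident): self.ident = ident

class BinOp:
    def __init__(self, op, l, r): self.op, self.l, self.r = op, l, r

class And:
    def __init__(self, *vals): self.vals = vals

# Table of native fallback handlers, keyed by operator symbol.
_NATIVE = {"+": operator.add, "*": operator.mul, "<": operator.lt}

class ExprEvaluator:
    _DISPATCH = {}  # {node-type: handler}, the class-level dispatch table

    def __init__(self, scope):
        self.scope = dict(scope)

    def eval(self, node):
        return self._DISPATCH[type(node)](self, node)

def _handler(node_type):
    def register(fn):
        ExprEvaluator._DISPATCH[node_type] = fn
        return fn
    return register

@_handler(Const)
def _eval_const(ev, n):
    return n.v

@_handler(Name)
def _eval_name(ev, n):
    return ev.scope[n.ident]

@_handler(BinOp)
def _eval_binop(ev, n):
    return _NATIVE[n.op](ev.eval(n.l), ev.eval(n.r))

@_handler(And)
def _eval_and(ev, n):
    # short-circuit: stop at the first falsy operand, bypassing dispatch
    result = True
    for v in n.vals:
        result = ev.eval(v)
        if not result:
            break
    return result
```

Keeping `And` (and similar forms) out of the user-extensible dispatch path is what preserves short-circuit semantics regardless of how operand types customize binary operators.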

Re-exported from `tvm_ffi` (purely additive, no renames or removals):

- `eval_expr`
- `eval_assign`
- `OperatorDispatch`
- `DEFAULT_DISPATCH`
- `EvaluationError`
- `UndefinedNameError`

None — runtime library additions only.

None. No existing public API is renamed, modified, or removed.

Design doc lives outside the repo; no Sphinx docs added in this commit.

- `tests/python/test_pyast_evaluator.py`: 112 tests covering every handler,
  dispatch MRO, short-circuits (`and`/`or`/chained compare), comprehension
  laziness (eager outermost iter, lazy body for generator expressions),
  starred/double-starred unpacking in calls/lists/dicts, walrus, f-strings
  (conversion flags + `format_spec`), `eval_assign` (including starred
  middle/end, Attr/Index rejection), and parity with native `eval` across
  representative sources. Includes a public-API re-export sanity check.
- `uv run pytest tests/python/test_pyast_evaluator.py` → 112 passed.
- `uv run pytest tests/python/test_pyast_evaluator.py tests/python/test_pyast.py
  tests/python/test_pyast_from_py.py` → 548 passed earlier in the session.
- `pre-commit run --files python/tvm_ffi/_pyast_evaluator.py
  tests/python/test_pyast_evaluator.py python/tvm_ffi/__init__.py` → ruff-check
  and ruff-format pass; ty has pre-existing `pytest` unresolved-import warnings
  affecting every test file in the repo; one `invalid-argument-type` for
  `ChainMap(dict, Mapping)` at 4 sites suppressed with `ty: ignore`; one
  `invalid-assignment` at the walrus-into-ChainMap line suppressed similarly.
  No new ty regressions introduced.

- Lambdas with defaults / `*args` / `**kwargs` (evaluator only supports plain
  positional `Id` params; the helper strips `:` and `=` suffixes but rejects
  `StarredExpr` params).
- Comprehension targets that are `Attr`/`Index`.
- Side-effectful `ChainMap` interaction from external callers.
- No fuzz testing.
- F-string debug specifier (`{x=}`) not explicitly tested.
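The scope layering described earlier (a `ChainMap` so walrus and per-comprehension bindings layer over, but never mutate, the caller's mapping) can be sketched as follows; `eval_with_walrus` is an illustrative helper, not part of the library:

```python
from collections import ChainMap

def eval_with_walrus(scope):
    # Layer a fresh dict over the caller's mapping; new bindings land in the
    # top layer, leaving the caller's scope untouched.
    local = ChainMap({}, scope)
    local["y"] = local["x"] + 1   # plays the role of a walrus binding
    return local["y"], "y" in scope

caller = {"x": 41}
result, leaked = eval_with_walrus(caller)
```

Writes to a `ChainMap` always go to its first mapping, which is exactly the isolation the evaluator needs.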
junrushao added a commit to Kathryn-cat/tvm-ffi that referenced this pull request Apr 29, 2026