
[Not To Land][HF][Gemma] Lower and run HF Gemma2b in ExecuTorch #4088

Closed

guangy10 wants to merge 2 commits into gh/guangy10/21/base from gh/guangy10/21/head

Conversation

@guangy10 (Contributor) commented Jun 29, 2024

This PR is a prototype to showcase the minimal changes required to lower Gemma-2b to ExecuTorch with a static KV cache and run it directly in [llama runner](https://github.com/pytorch/executorch/tree/main/examples/models/llama2), without a single line of code change in the ExecuTorch runtime.

By standardizing the contract between HuggingFace modeling code and the ExecuTorch runtime, any LLM on HuggingFace could use llama runner as a universal runtime for a given backend.
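
To make that contract concrete, here is a minimal sketch (hypothetical wrapper and names, not the actual export_hf_model.py code): llama runner drives any model whose exported program takes the current tokens plus a cache position and returns logits, with the static KV cache held inside the module:

```
import torch

# Hypothetical sketch of the runner/model contract: llama runner calls
# logits = model(tokens, input_pos); the KV cache lives inside the module
# as state, so the exported graph contains only static shapes.
class RunnerContract(torch.nn.Module):
    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model  # e.g. an HF Gemma model patched for a static cache

    def forward(self, tokens: torch.Tensor, input_pos: torch.Tensor) -> torch.Tensor:
        # tokens: [batch, seq] token ids to decode; input_pos: position(s)
        # written into the static KV cache. Returns next-token logits.
        return self.model(tokens, input_pos)
```

Since the Meta Llama models in examples/models/llama2 already expose this call signature, a patched HF model that matches it can be served by the same runner binary unchanged.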

Instructions to run the demo:

To run the demo, you need to clone huggingface/transformers and patch [PR#31706](https://github.com/huggingface/transformers/pull/31706) on top, which contains the minimal changes required on the modeling side. Apply this PR to your ExecuTorch repo; from there you can:

1. Run export_hf_model.py to lower gemma-2b to ExecuTorch:
```
python -m examples.models.export_hf_model -hfm "google/gemma-2b" --export  # The model is exported with static dims and a static KV cache
```
2. Run tokenizer.py to generate the binary tokenizer format for the ExecuTorch runtime:
```
python -m examples.models.llama2.tokenizer.tokenizer -t <path_to_downloaded_gemma_checkpoint_dir>/tokenizer.model -o <your_out_dir>/tokenizer.bin
```
3. Build and run the lowered model with llama runner by following this guide, [step 4](https://github.com/pytorch/executorch/tree/main/examples/models/llama2#step-4-run-on-your-computer-to-validate); an assumed invocation is sketched below.
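
For reference, assuming the default cmake build from that guide, running the lowered model would look roughly like this (the .pte path is a placeholder for whatever the export step produced, and the binary location depends on your build configuration):

```
cmake-out/examples/models/llama2/llama_main --model_path=<your_out_dir>/gemma-2b.pte --tokenizer_path=<your_out_dir>/tokenizer.bin --prompt="Hello"
```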

NOTE: This prototype demonstrates the feasibility of exporting and running a native HF model in ExecuTorch by reusing llama runner. It is NOT performance-optimized yet. Ongoing work along this path includes enabling 1) delegation, e.g. XNNPACK, 2) custom SDPA, and 3) the parallel prefill recently enabled in #4068.
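
For context on item 1, delegation happens at lowering time via a backend partitioner. A minimal sketch of what XNNPACK delegation could look like in this flow, using a stand-in module instead of Gemma and the standard ExecuTorch export APIs (not code from this PR):

```
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge

# Stand-in model; in the real flow this would be the exported Gemma program.
class Tiny(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x):
        return self.linear(x)

exported = torch.export.export(Tiny(), (torch.randn(1, 8),))

# Lower to edge dialect, hand matched subgraphs to the XNNPACK backend,
# then serialize a .pte file that the runtime can load as usual.
edge = to_edge(exported).to_backend(XnnpackPartitioner())

with open("tiny_xnnpack.pte", "wb") as f:
    f.write(edge.to_executorch().buffer)
```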

Stack from ghstack (oldest at bottom):

@pytorch-bot bot commented Jun 29, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/4088

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit a589e31 with merge base b7df20d:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

guangy10 added a commit that referenced this pull request Jun 29, 2024
ghstack-source-id: 2c1d595
Pull Request resolved: #4088
@facebook-github-bot added the CLA Signed label Jun 29, 2024
@mergennachin (Contributor) commented

@guangy10 Can you separate this out into two PRs? Bug fixes in the tokenizer first, and then the HF integration.

@guangy10 (Contributor, Author) commented Jul 2, 2024

> @guangy10 Can you separate this out into two PRs? Bug fixes in the tokenizer first, and then the HF integration.

Yes, it's tracked in #4112. I will provide a proper fix for it.

guangy10 added a commit that referenced this pull request Jul 11, 2024
ghstack-source-id: 2c1d595
Pull Request resolved: #4088
guangy10 changed the title from [HF][Gemma] Lower and run HF Gemma2b in ExecuTorch to [Not To Land][HF][Gemma] Lower and run HF Gemma2b in ExecuTorch Jul 12, 2024
guangy10 marked this pull request as a draft July 12, 2024
guangy10 added a commit that referenced this pull request Jul 12, 2024
ghstack-source-id: 7806f40
Pull Request resolved: #4088
guangy10 closed this Aug 15, 2024
guangy10 deleted the gh/guangy10/21/head branch August 15, 2024