Initial implementation of wasi-http serving #6091
Yes, I plan to rebase this once #6228 merges. While we're on the topic of rebase, do you have a good strategy for rebasing this repo since we're doing squash merges? My standard flow is ...
I end up using ...
@pchickey this is rebased and ready for initial review.
pchickey left a comment:
I left some idiom comments, which are pretty minor overall.
There is, however, one much bigger ask I have from this PR: now that wasmtime is running inside a tokio executor, we need to change to running wasmtime with its async API: set Config::async_support(true), and then swap out instantiate for instantiate_async, call for call_async. Then, for all of wasi-http's calls into the host, change the bindgen to set async = true and then remove all the run-inside-tokio wrappers in the synchronous bodies of the import functions.
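For reference, a rough sketch of that switch, not taken from this PR: `MyCtx`, the module path, and the `_start` export are placeholders, and exact signatures depend on the wasmtime version in use.

```rust
use wasmtime::{Config, Engine, Linker, Module, Store};

struct MyCtx; // hypothetical host state held in the Store

async fn run(module_path: &str) -> anyhow::Result<()> {
    let mut config = Config::new();
    config.async_support(true); // required before using any *_async entry points

    let engine = Engine::new(&config)?;
    let module = Module::from_file(&engine, module_path)?;

    let linker: Linker<MyCtx> = Linker::new(&engine);
    // host imports (e.g. wasi-http bindgen with `async = true`) would be added here

    let mut store = Store::new(&engine, MyCtx);
    let instance = linker.instantiate_async(&mut store, &module).await?;
    let start = instance.get_typed_func::<(), ()>(&mut store, "_start")?;
    start.call_async(&mut store, ()).await?;
    Ok(())
}
```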
Also, we need some tests to show this works. An echo server like https://github.com/bytecodealliance/wasmtime/blob/main/crates/test-programs/tests/http_tests/runtime/wasi_http_tests.rs#L14-L22 would be one good test program, and then a simple proxy that exercises the client http inside the server context as well.
impl crate::types::Method {
    pub fn new(m: &hyper::Method) -> Self {
Idiom: this constructor is better phrased as an impl From<hyper::Method> for crate::types::Method.
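A sketch of that idiom, assuming the PR's `Method` enum and reusing the match arms visible in the diff below (the other standard methods are elided); whether to convert from `&hyper::Method` or an owned value is left open:

```rust
impl From<&hyper::Method> for crate::types::Method {
    fn from(m: &hyper::Method) -> Self {
        match m {
            // ... Get, Post, Put, Delete, and the other standard methods ...
            &hyper::Method::CONNECT => Method::Connect,
            &hyper::Method::TRACE => Method::Trace,
            &hyper::Method::PATCH => Method::Patch,
            // fall back to the variant carrying the raw method string
            m => Method::Other(m.as_str().to_string()),
        }
    }
}
```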
        &hyper::Method::CONNECT => Method::Connect,
        &hyper::Method::TRACE => Method::Trace,
        &hyper::Method::PATCH => Method::Patch,
        _ => panic!("unknown method!"),
Suggested change:
-        _ => panic!("unknown method!"),
+        _ => Method::Other(m.as_str().to_string()),
    match s.as_str() {
        "http" => crate::types::Scheme::Http,
        "https" => crate::types::Scheme::Https,
        _ => panic!("unsupported scheme!"),
Suggested change:
-        _ => panic!("unsupported scheme!"),
+        s => crate::types::Scheme::Other(s.to_string()),
        Method::Options => 6,
        Method::Trace => 7,
        Method::Patch => 8,
        _ => panic!("unknown method"),
Unfortunately this one is tedious to implement properly, and it's probably OK to leave it out for a little bit until we switch over to using the component model runtime.
Suggested change:
-        _ => panic!("unknown method"),
+        Method::Other(_) => unimplemented!("Method::Other"),
impl From<&mut Stream> for bytes::Bytes {
    fn from(stream: &mut Stream) -> Self {
        stream.data.clone().into()
I don't like when clones are hidden inside unassuming impls like From, and From<&mut _> is unidiomatic as well. impl Stream { fn as_bytes(&self) -> Bytes { Bytes::from(self.data.clone()) } } would be a better way to write this one.
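Formatted for readability, the suggested helper would look roughly like this, assuming `Stream` keeps the `data` field shown above:

```rust
impl Stream {
    // An explicit method rather than a From impl, so the clone is visible at the call site.
    fn as_bytes(&self) -> bytes::Bytes {
        bytes::Bytes::from(self.data.clone())
    }
}
```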
@pchickey thanks for the review and the idiomatic suggestions. I will update this PR to reflect the async request above and fix the idiomatic bits too (and add a test). It will probably take me a few days to get enough free cycles to get it done, but it should be by the end of the week.
@pchickey as I dug into this it seems that a bunch of components (e.g. the ...) ... Do you have a perspective on if it is ok to modify them to implement ...?
Your "context" struct ... The good news is the design goal for the wasmtime async apis was to work perfectly inside a hyper service; https://github.com/fastly/Viceroy/blob/main/lib/src/execute.rs was the prototype where we actually fleshed all that design out a few years ago. So maybe you can crib some of the patterns there. The bad news is that async Rust has a steep learning curve, and the error messages from rustc aren't quite as brilliant as they are for the rest of the language. If you need to pair program on this for an hour sometime, my calendar next week is pretty open, but I am taking off Friday.
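For illustration, a minimal sketch of that shape, assuming hyper 0.14: each request is handed to an async handler, which is where the per-request wasm execution would live. The handler body here is a placeholder, not code from this PR.

```rust
use std::convert::Infallible;
use std::net::SocketAddr;

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};

// Placeholder handler: a real one would create a Store and call into the wasm instance.
async fn handle(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
    Ok(Response::new(Body::from("hello from wasm")))
}

async fn serve() -> Result<(), hyper::Error> {
    let addr = SocketAddr::from(([127, 0, 0, 1], 8080));
    let make_svc = make_service_fn(|_conn| async { Ok::<_, Infallible>(service_fn(handle)) });
    Server::bind(&addr).serve(make_svc).await
}
```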
The biggest problem that I see is that in order to make a call into the WASM module you need to pass the `Store`. The reason that I need to pass it is because the only time that you can get access to that is when the context is constructed (at least as far as I can tell). If there was a way to statically get at the `Store` ... Looks like you worked around this by constructing the store and linking inside each HTTP execution: https://github.com/fastly/Viceroy/blob/main/lib/src/execute.rs#L301. Which might be possible here, but would require some pretty significant rewiring since that is created/configured via flags etc. in a different place.
That's right - the pattern required is to construct a store inside each execution so we can give the host functions a mut ref to the Store's T; otherwise you would need to use Arc<Mutex<>> and lock every use to get something Send and Sync to communicate with outside of the execution. The context will have to be designed to take into account that it is created from the embedder's per-http-request execution, not the other way around. Viceroy passes in the LinkerPre structure to amortize linking, but that will be up to the embedder. Store::new and LinkerPre.instantiate_async are designed to be extremely fast when used with the memory pooling features in the Engine.
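A sketch of that per-execution pattern, assuming wasmtime's `InstancePre` type (the pre-linked structure referred to above) and a placeholder `Ctx` state and `handle` export: linking is amortized up front, and a fresh `Store` is created per request so host functions can take `&mut Ctx` without locking.

```rust
use wasmtime::{Engine, InstancePre, Store};

struct Ctx; // hypothetical per-request host state (the Store's `T`)

async fn execute_request(engine: &Engine, pre: &InstancePre<Ctx>) -> anyhow::Result<()> {
    // A new Store per HTTP execution; with the pooling allocator enabled on the
    // Engine this is intended to be cheap.
    let mut store = Store::new(engine, Ctx);
    let instance = pre.instantiate_async(&mut store).await?;
    let func = instance.get_typed_func::<(), ()>(&mut store, "handle")?;
    func.call_async(&mut store, ()).await?;
    Ok(())
}
```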
Just a quick update that this PR is proving to be more thorny/invasive than I thought b/c of the interplay of Store/Linker/Host. I think that there needs to be some significant refactoring of how the ... I'm still working on it (slowly) but it might be a bit of time before this arrives.
@pchickey ok, I got this working with the switch to the async APIs. Can you take a look and see if this is what you had in mind? If it looks directionally correct, I'll work on the style and tests. Thanks!
@pchickey does your 👍 mean that this looks correct? Or did it just mean "acknowledged, I will take a look when I get a chance"? Sorry, the meaning of response emojis is sometimes hard to parse :)
Hi, sorry, the +1 was an acknowledgement that I was going to review it, but I haven't had time to spend on it yet; this week I have been heads down trying to move all of the preview2-prototyping repo into this one. Taking a brief look now, this didn't turn out exactly like I had envisioned, but I am not sure exactly what to suggest to change; I will have to play with refactoring it a bit myself to figure out the right direction. Sorry that isn't very actionable feedback. I will spend time on this next week.
@pchickey sounds good. I actually spent a bunch of time on the refactoring also; I took three different paths at it and this was the one that seemed most "right". But I'm definitely open to other options. The most radical one would be to make all wasmtime executions async.
@pchickey this is updated with a rebase to reflect the recent wit changes.
@pchickey friendly ping on this, I suspect that you are busy on other things, so no rush, but wondering what the status of this is. Thanks!
Right now I am working on getting the preview 2 host streams and pollable implementation landed in #6556. That is the prerequisite for getting wasi-http integrated with the rest of the component model implementation in wasmtime. Once #6556 lands we need to do pretty hefty renovations to the wasi-http implementation to rebase it on top of that work. @dicej has successfully made a new implementation based on a somewhat similar implementation of streams and pollable: https://github.com/fermyon/spin/blob/132a32daed6632768d52a9336125073e87fee1e8/crates/wasi-cloud/src/http.rs. The current consensus is that Joel will spearhead the effort to get wasmtime-wasi-http running on top of streams - we are expecting this to more or less amount to a rewrite as he ports his work from the spin repo into this crate. His upstream work also contains a (related, but again a little bit different) implementation of the server piece of this puzzle. So, I think at this point we have to set this PR aside.
Closing as obsolete.
This builds on top of #5929 to add an implementation of http serving.
A `server.c` example is also added. This is a basic implementation, but ready for feedback/review.
Known limitations:
- The server binds to `localhost:8080` and is not currently configurable.
- The `main` method of your HTTP serving WASM must exit before the server starts (no infinite loops).

There are some interesting questions about what we actually want `wasi-http` serving to mean revealed by this implementation. Specifically, three interesting questions come to mind:
1. How to pass settings to `wasmtime` that are specific to a module. I think using an `http_config.toml` file similar to what the cache settings does makes the most sense, but interested in others' thoughts.
2. What to do with the `main`/`_start` method in the WASM module. If you request an HTTP server, should the main run? Should the server run in parallel to the main? Currently it does run the main, but it expects it to exit before the server starts up.
3. Memory lifetime: the server only starts after `main` has exited, and so the memory is recreated fresh each time. What do we expect the memory lifespan inside the module to be?

It's quite possible that some of these questions are better asked in the wasi-http spec repo rather than this implementation; happy to take the discussion there. It's related to the discussion about requests here: WebAssembly/WASI#818
cc @pchickey