In order to migrate branches to ISLE, we define a second entry point `lower_branch`, which receives the list of branch targets as an additional argument. This requires a small change to `lower_common`: the `isle_lower` callback argument changes from a function pointer to a closure, so that the extra argument can be captured. Traps make use of the recently added facility to emit safepoints from ISLE, but are otherwise straightforward.
…alliance#3739) This commit updates the allocation of a `VMExternRefActivationsTable` structure to perform zero malloc memory allocations. Previously it would allocate a page-sized `chunk` plus some space in hash sets for future insertions. The main trick implemented here is that the fast `chunk` allocation is deferred: it is allocated and configured during the slow path of the first GC. The motivation for this PR is that, given our recent work to further refine and optimize the instantiation process, this allocation started to show up in a nontrivial fashion. Most modules today never touch this table anyway, as almost none of them use reference types, so the time spent allocating and deallocating the table per-store was largely wasted. Concretely, on a microbenchmark this PR speeds up instantiation of a module with one function by 30%, decreasing the instantiation cost from 1.8us to 1.2us. Overall a pretty minor win, but when the instantiation times we're measuring are in the single-digit microseconds, this win gets magnified!
…lliance#3741) * Don't copy `VMBuiltinFunctionsArray` into each `VMContext`

This is another PR along the lines of "let's squeeze all possible performance we can out of instantiation". Before this PR we would copy, by value, the contents of `VMBuiltinFunctionsArray` into each `VMContext` allocated. This array of function pointers is modestly sized but growing over time as we add various intrinsics. Additionally, it is exactly the same for all `VMContext` allocations.

This PR attempts to speed up instantiation slightly by instead storing an indirection to the function array. This means that calling a builtin intrinsic is a tad slower since it requires two loads instead of one (one to get the base pointer, another to get the actual address). Otherwise, though, `VMContext` initialization is now simply setting one pointer instead of doing a `memcpy` from one location to another.

With some macro-magic this commit also replaces the previous implementation with one that's more `const`-friendly, which also gets us compile-time type-checks of libcalls as well as compile-time verification that all libcalls are defined.

Overall, as with bytecodealliance#3739, the win is very modest here. Locally I measured a speedup from 1.9us to 1.7us taken to instantiate an empty module with one function. While small at these scales, it's still a 10% improvement!

* Review comments
Performance results based on clockticks comparison with main HEAD (higher %change shows improvement):
As first suggested by Jan on the Zulip here [1], a cheap and effective way to obtain copy-on-write semantics of a "backing image" for a Wasm memory is to mmap a file with `MAP_PRIVATE`. The `memfd` mechanism provided by the Linux kernel allows us to create anonymous, in-memory-only files that we can use for this mapping, so we can construct the image contents on-the-fly and then effectively create a CoW overlay. Furthermore, and importantly, `madvise(MADV_DONTNEED, ...)` will discard the CoW overlay, returning the mapping to its original state. By itself this is almost enough for a very fast instantiation-termination loop of the same image over and over, without changing the address space mapping at all (which is expensive).

The only missing bit is how to implement heap *growth*. But here memfds can help us again: if we create another anonymous file and map it where the extended parts of the heap would go, we can take advantage of the fact that a `mmap()` mapping can be *larger than the file itself*, with accesses beyond the end generating a `SIGBUS`, and the fact that we can cheaply resize the file with `ftruncate`, even after a mapping exists. So we can map the "heap extension" file once with the maximum memory-slot size and grow the memfd itself as `memory.grow` operations occur.

The above CoW technique and heap-growth technique together allow us a fastpath of `madvise()` and `ftruncate()` only when we re-instantiate the same module over and over, as long as we can reuse the same slot. This fastpath avoids all whole-process address-space locks in the Linux kernel, which should mean it is highly scalable. It also avoids the cost of copying data on read, as the `uffd` heap backend does when servicing pagefaults; the kernel's own optimized CoW logic (same as used by all file mmaps) is used instead.

[1] https://bytecodealliance.zulipchat.com/#narrow/stream/206238-general/topic/Copy.20on.20write.20based.20instance.20reuse/near/266657772
Testing so far with recent Wasmtime has not been able to show the need for avoiding the process-wide mmap lock in real-world use-cases. As such, the technique of using an anonymous file and ftruncate() to extend it seems unnecessary; instead, memfd can always use anonymous zeroed memory for heap backing where the CoW image is not present, and mprotect() to extend the heap limit by changing page protections.
s390x: Migrate branches and traps to ISLE
Even though the implementation of `emit` and `emit_safepoint` may be platform-specific, the interface ought to be common so that other code in `prelude.isle` may safely call these constructors. This patch moves the definition of `emit` (from all platforms) and `emit_safepoint` (s390x only) to `prelude.isle`. This required adding an `emit_safepoint` implementation to aarch64 and x64 as well; the latter is still a stub, as special move-mitosis handling will be required.
Move emit and emit_safepoint to prelude.isle
With the addition of `sock_accept()` in `wasi-0.11.0`, wasmtime can now
implement basic networking for pre-opened sockets.
For Windows `AsHandle` was replaced with `AsRawHandleOrSocket` to cope
with the duality of Handles and Sockets.
For Unix a `wasi_cap_std_sync::net::Socket` enum was created to handle
the {Tcp,Unix}{Listener,Stream} more efficiently in
`WasiCtxBuilder::preopened_socket()`.
The addition of that many `WasiFile` implementors was mainly necessary
because of the differences in the `num_ready_bytes()` function.
A known issue is that Windows now busy-polls on sockets, because
nothing except `stdin` queries the status of Windows handles/sockets.
Another known issue on Windows is that no crate provides support for
`fcntl(fd, F_GETFL, 0)` on a socket.
Signed-off-by: Harald Hoyer <harald@profian.com>
(This was not a correctness bug, but is an obvious performance bug...)
Copyright (c) 2022, Arm Limited.
Requested from pull request comment. Shows clockticks reduced. 1-Patch/Main (positive pct is better)