Add documentation on configuring C/C++ toolchains [skip ci] #1868

jayconrod merged 1 commit into bazel-contrib:master
Conversation
@ixdy Could you read this and let me know what you think? This is a minimal example. Ideally it would include instructions for building a custom, hermetic, cross-compiling toolchain, but I haven't had time to figure that out (and probably won't for a while, to be honest). At least it shows how to install a toolchain into a repository that can be referenced from other workspaces, and how to get Bazel to use it.
> Next, we'll create a ``cc_toolchain`` target that tells Bazel where to find some
> important files. This API is undocumented and will very likely change in the
> future. We need to create one of these for each ``toolchain`` in ``CROSSTOOL``.
> The ``toolchain_identifier`` and ``cpu`` fields should match, and the
Maybe clarify that "should match" here means "should match the values in CROSSTOOL". At first I was confused because I thought that toolchain_identifier should be the same as cpu, which it's not.
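A sketch of what that pairing looks like in a BUILD file; the target names and filegroup labels below are placeholders, but the requirement is real: `toolchain_identifier` and `cpu` must match the `toolchain_identifier` and `target_cpu` fields of one `toolchain` entry in `CROSSTOOL`.

```
# Hypothetical BUILD fragment; labels are placeholders.
cc_toolchain(
    name = "cc-compiler-arm",
    all_files = ":toolchain_files",
    compiler_files = ":toolchain_files",
    dwp_files = ":toolchain_files",
    linker_files = ":toolchain_files",
    objcopy_files = ":toolchain_files",
    strip_files = ":toolchain_files",
    cpu = "arm",                         # matches target_cpu in CROSSTOOL
    toolchain_identifier = "clang-arm",  # matches toolchain_identifier in CROSSTOOL
)
```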
Thanks, this has been helpful for understanding the crosstool setup (along with reading lots of other docs). I've been working on extending this to support cross-compiling (currently from linux/amd64 -> linux/arm). I got something kinda working; I added the following to my `CROSSTOOL`:

```
toolchain {
  toolchain_identifier: "clang-arm"
  host_system_name: "linux"
  target_system_name: "linux"
  target_cpu: "arm"
  target_libc: "arm"
  compiler: "clang"
  abi_version: "unknown"
  abi_libc_version: "unknown"
  tool_path { name: "ar" path: "bin/llvm-ar" }
  tool_path { name: "cpp" path: "bin/clang-cpp" }
  tool_path { name: "dwp" path: "bin/llvm-dwp" }
  tool_path { name: "gcc" path: "bin/clang" }
  tool_path { name: "gcov" path: "bin/llvm-profdata" }
  tool_path { name: "ld" path: "bin/ld.lld" }
  tool_path { name: "nm" path: "bin/llvm-nm" }
  tool_path { name: "objcopy" path: "bin/llvm-objcopy" }
  tool_path { name: "objdump" path: "bin/llvm-objdump" }
  tool_path { name: "strip" path: "bin/llvm-strip" }
  compiler_flag: "-no-canonical-prefixes"
  linker_flag: "-no-canonical-prefixes"
  cxx_builtin_include_directory: "/usr/arm-linux-gnueabihf/include"
  linker_flag: "-fuse-ld=lld"
  compiler_flag: "-target"
  compiler_flag: "arm-linux-gnueabihf"
  compiler_flag: "-mfloat-abi=hard"
  linker_flag: "-target"
  linker_flag: "arm-linux-gnueabihf"
  compiler_flag: "-I/usr/arm-linux-gnueabihf/include"
  linker_flag: "-L/usr/arm-linux-gnueabihf/lib"
  compiler_flag: "-Wno-builtin-requires-header"
}
```

This requires running on a machine (or in a docker container) with the appropriate cross-compilation packages installed.

I'd like to find some way to not depend on those packages, though it seems tricky. Debian has prebuilt packages for all relevant architectures, but I'm not sure we want to depend on them. Linaro has prebuilt sysroots for arm, but not ppc/s390x, which Kubernetes needs. The Chromium project also has prebuilt sysroots, but again not for ppc/s390x.

Another option to consider is using musl, though we still have the issue of needing to build everything; we'd probably want to statically link rather than dynamically link (maybe better), and there may be subtle differences from glibc, which we might not want. We could drop the dependency on [...].

Along the way, I've also found a few other interesting projects:
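For reference, selecting such a toolchain from the command line might look like the following; the repository name and target label are placeholders, and the exact flags depend on the Bazel version:

```sh
# Hypothetical invocation: @my_arm_toolchain and //cmd/mybinary are
# placeholders; --cpu must match target_cpu ("arm") in the CROSSTOOL entry.
bazel build \
    --crosstool_top=@my_arm_toolchain//:toolchain \
    --cpu=arm \
    //cmd/mybinary
```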
…rib#1885) This PR implements a better way of specifying the requirements files for different (os, cpu) tuples. It allows for more granular specification than is available today and allows for future extension so that all of the sources in the select statements live in the hub repository. This replaces the previous selection of the requirements files, with a few differences in behaviour that should not be visible to the external user. Instead of selecting the right file, which we would then use to create `whl_library` instances, we parse all of the provided requirements files and merge them based on their contents. The merging is done per block within each requirements file, which allows the starlark code to understand whether we are working with different versions of the same package on different target platforms. Fixes bazel-contrib#1868. Work towards bazel-contrib#1643, bazel-contrib#735.
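The merge described above can be sketched as follows; the data model (platform -> {package: pinned line}) and the function name are invented for illustration, not the actual starlark implementation:

```python
# Sketch of merging per-platform requirements files: packages pinned
# identically everywhere collapse to one entry, while diverging pins
# keep track of which platforms they apply to.
from collections import defaultdict

def merge_requirements(files):
    """files: dict mapping platform -> {package_name: requirement_line}."""
    merged = defaultdict(set)  # (package, requirement_line) -> set of platforms
    for platform, reqs in files.items():
        for pkg, line in reqs.items():
            merged[(pkg, line)].add(platform)
    return {key: sorted(platforms) for key, platforms in merged.items()}

files = {
    "linux_x86_64": {"numpy": "numpy==1.26.4", "six": "six==1.16.0"},
    "osx_aarch64": {"numpy": "numpy==1.26.0", "six": "six==1.16.0"},
}
result = merge_requirements(files)
# "six" is pinned the same everywhere and merges into one entry;
# "numpy" diverges and keeps one entry per platform.
```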
…around (bazel-contrib#2069) This is extra preparation needed for bazel-contrib#2059. Summary:
- Create `pypi_repo_utils` for more ergonomic handling of Python in the repo context.
- Split the resolution of requirements files to platforms into a separate function to make testing easier. This also adds validation that turned out to be needed in the WIP feature PR.
- Make the code more robust about the assumption of the target platform label.

Work towards bazel-contrib#260, bazel-contrib#1105, bazel-contrib#1868.
…azel-contrib#2068) This is a small PR to reduce the scope of bazel-contrib#2059; it only moves some code from one Python file to a separate one. Work towards bazel-contrib#260, bazel-contrib#1105, bazel-contrib#1868.
…ontrib#2075) This also changes the local_runtime_repo to explicitly check for supported platforms instead of relying on a `None` value returned by the helper method. This keeps the behaviour exactly the same as before this PR, and we can potentially drop the need for the validation in the future if our local_runtime detection becomes more robust. This also makes the platform detection in `pypi_repo_utils` not depend on `uname` and only use the `repository_ctx`. Apparently `module_ctx.watch` throws an error if one attempts to watch files on the system (this seems to be left to `repository_rule`; one can only `module_ctx.watch` files within the current workspace). This was surprising, but could be worked around by unifying the code. This splits things out from bazel-contrib#2059 and makes the code more succinct. Work towards bazel-contrib#260, bazel-contrib#1105, bazel-contrib#1868.
…2059) Before this change, the `all_requirements` and related constants would include packages that need to be installed only on specific platforms, meaning that users relying on those constants (e.g. `gazelle`) would need extra work to exclude platform-specific packages. The package managers that support outputting such files now include `uv` and `pdm`. This might also be useful in cases where we attempt to handle non-requirements lock files. Note that the best way to handle this would be to move all of the requirements parsing code to Python, but that could cause regressions as it is a much bigger change. This only changes the code so that we do extra processing for the requirement lines that have env markers. The lines that have no markers will not see any change in the code execution paths, and the Python interpreter will not be downloaded. We also use the `*_ctx.watch` API where available to correctly re-evaluate the markers if the `packaging` Python sources change. Extra changes included in this PR:
- Extend `repo_utils` with a method for getting the `arch` from the `ctx`.
- Change the `local_runtime_repo` to perform the validation without relying on the implementation detail of `get_platforms_os_name`.
- Add a `$(UV)` make variable for the `uv:current_toolchain` so that we can generate the requirements for `sphinx` using `uv`.
- Swap the requirement generation to use `genrule` and `uv` for `sphinx` and co so that we can test the `requirement` marker code.

Note that the `requirement` markers are not working well with `requirement_cycles`. Fixes bazel-contrib#1105. Fixes bazel-contrib#1868. Work towards bazel-contrib#260, bazel-contrib#1975. Related: bazel-contrib#1663. --------- Co-authored-by: Richard Levasseur <rlevasseur@google.com>
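The fast-path/slow-path split for env markers can be illustrated with a toy example; the real implementation evaluates markers with the `packaging` library, while the helper names and the `==`/`!=`-only evaluator below are simplifications invented for illustration:

```python
# Lines without a ';' marker take the fast path unchanged; only lines
# with an env marker need evaluation against the target environment.
import re

def needs_marker_eval(line):
    return ";" in line

def evaluate_marker(marker, env):
    # Toy evaluator: only handles `name == "value"` / `name != "value"`.
    m = re.fullmatch(r"\s*(\w+)\s*(==|!=)\s*['\"]([^'\"]*)['\"]\s*", marker)
    if not m:
        raise ValueError("unsupported marker (toy evaluator): %r" % marker)
    name, op, value = m.groups()
    return (env[name] == value) if op == "==" else (env[name] != value)

def keep_requirement(line, env):
    if not needs_marker_eval(line):
        return True  # fast path: no marker, no interpreter needed
    _, marker = line.split(";", 1)
    return evaluate_marker(marker, env)

env = {"sys_platform": "linux"}
print(keep_requirement('pywin32==306 ; sys_platform == "win32"', env))  # False
print(keep_requirement("numpy==1.26.4", env))  # True
```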
…thon (bazel-contrib#2135) Before this PR the lockfile would become platform dependent when the `requirements` file had env markers. This was not caught because we do not have MODULE.bazel.lock checked into the `rules_python` repository: the CI runs against many versions and the lock file differs between them, so we could not run with `bazel build --lockfile_mode=error`. With this change we use the label of the `BUILD.bazel` that lives next to the `python` symlink, and since the `BUILD.bazel` is the same on all platforms, the lockfile will remain the same. Summary:
* refactor(uv): create a reusable macro for using uv for locking reqs.
* test(bzlmod): enable testing the MODULE.bazel.lock breakage across platforms.
* test(bzlmod): use a universal requirements file for 3.9. This breaks the CI, because the python interpreter file hash is added to the lock file.
* fix(bzlmod): keep the lockfile platform independent when resolving python.

Fixes bazel-contrib#1105 and bazel-contrib#1868 for real this time. Implements an additional helper for bazel-contrib#1975.