Most recently encountered in #140, where some features that needed protobuf 4 got (re-)enabled, pushing the compilation time from just under 6h to the point of being killed by Azure. Moving the discussion there into an issue:
@h-vetinari: We're getting into problematic territory here, unfortunately. Windows on protobuf 3.21 (with the feature-disablement patch) already took 5:48h, which is pushing against the 6h hard cutoff enforced by Azure. Without that feature disablement, we go from 6917 build steps to 7027 — not a big change, but seemingly enough to push it over the edge (also, not all agents are equally spec'd, so there's some randomness involved).
Still, the OSX builds for protobuf 4 are not far behind (way over 5h), so we're probably going to have to start thinking about how to slice up this package somehow. Is there a way to define a core library upon which we can build, or tranches that can be built independently?
Is there a way to define a core library upon which we can build, or tranches that can be built independently?
We should probably move this discussion to a bug? In any case, yes we can do this. The current behavior is to build all the GA libraries:
    -DGOOGLE_CLOUD_CPP_ENABLE=__ga_libraries__ \

The list is defined here:
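As an illustration, the same option can take an explicit subset of features instead of the `__ga_libraries__` alias, which is one way a split build could be driven. This is a sketch; the feature names below are examples, and the actual list lives in the repository:

```shell
# Configure a build with only a few features enabled instead of all GA
# libraries. Feature names here (storage, pubsub, spanner) are illustrative.
cmake -S . -B cmake-out \
    -DGOOGLE_CLOUD_CPP_ENABLE="storage,pubsub,spanner"
cmake --build cmake-out
```

Each such invocation builds a much smaller set of build steps than the full `__ga_libraries__` configuration, which is what makes tranche-by-tranche builds attractive for the CI timeout problem.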
And it grows over time (maybe one or two features a month, though some "features" can be quite large). There are a set of (implicit) common features that we could build first or separately. Then the features are largely independent from each other, with a small number of caveats:
I am not sure how to translate this into the conda packaging model. Should we create different packages for the core components and then a package for each feature? That seems a bit overwhelming. Any other thoughts?
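One possible shape for the conda side, sketched here with hypothetical output names, is a single feedstock using conda-build's multiple-outputs mechanism: a core output built first, a small number of coarse feature tranches built on top of it, and a metapackage that pulls everything together, rather than one package per feature:

```yaml
# Hypothetical meta.yaml sketch (package and output names are illustrative):
package:
  name: google-cloud-cpp-split
  version: "2.x"

outputs:
  # Common/core libraries that every feature depends on.
  - name: libgoogle-cloud-core

  # A tranche of features, built against the core output.
  - name: libgoogle-cloud-storage
    requirements:
      host:
        - {{ pin_subpackage("libgoogle-cloud-core", exact=True) }}

  # Metapackage so users can still install everything at once.
  - name: google-cloud-cpp
    requirements:
      run:
        - {{ pin_subpackage("libgoogle-cloud-core", exact=True) }}
        - {{ pin_subpackage("libgoogle-cloud-storage", exact=True) }}
```

This keeps the user-facing package count small (core, a handful of tranches, one metapackage) while letting each tranche be built within the CI time budget; whether the tranches live in one feedstock or several is a separate trade-off.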