Rework cipher / digest fetching for legacy nids with multiple name support#9969
levitte wants to merge 1 commit into openssl:master
Conversation
With multiple names, it's no longer viable to just grab the "first" name in the set and use that to find the legacy NID. Instead, all names for an algorithm must be checked, and if we encounter more than one NID associated with those names, we consider it an error and make that method unloadable. This ensures that all methods that do have an internal NID associated will get that NID in their structure, thereby ensuring that other parts of libcrypto that haven't moved away from using NIDs for comparison will continue to work as expected.
Without this, any provider-based algorithm implementation with multiple names risks not being detected the way it should, causing all sorts of test failures.
I think this would be better done as a check when registering names/aliases. Registration happens once; the lookups could be frequent. Failing at registration time would prevent conflicts from being introduced, rather than detecting them afterwards.
So what you're saying is that you'd like to have a name number <=> legacy NID map, is that right? I did contemplate that, and sure, that can be done as well.
Wait, hold on... This code is executed as part of constructing a method, i.e. during registration already. What I'd end up doing is moving essentially the same code to another spot, but it would still be called as part of the construction process.
I'm not suggesting a name <-> NID mapping; we're moving away from NIDs. I'm suggesting that identical names with different NIDs be checked for earlier, so that fetches won't need to check.
You're mixing up method lookup with method construction.
When's this "earlier" you're talking about?
Yes, and this is temporary code which should be removed as soon as we stop having code like this: Lines 27 to 39 in a941054. But in the meantime, we need something that does this mapping, or we see our tests fail all over the place (I've had to chase down some of them while working on #8984 and #8985).
I think I've been confusing myself; fetching happens at registration time. With the caching, the search for clashing names is skipped.
Thank you
With the storing, even. Do remember that we have both a store and a cache. |
The only concern I can see here is if an algorithm, not known to libcrypto, is registered by two different providers. This seems pretty unlikely. I think they'd also need to be in different threads to create a new object race condition. |
I'm starting to lose track of bits of the architecture :)
Haven't we already considered that possibility, and dealt with it? They will have to come with properties that somehow make them unique.
Reviewed-by: Paul Dale <paul.dale@oracle.com> (Merged from #9969)
Merged. 3204083 Rework cipher / digest fetching for legacy nids with multiple name support
With multiple names, it's no longer viable to just grab the "first" in
the set and use that to find the legacy NID. Instead, all names for
an algorithm must be checked, and if we encounter more than one NID
associated with those names, we consider it an error and make that
method unloadable.
This ensures that all methods that do have an internal NID associated
will get that NID in their structure, thereby ensuring that other
parts of libcrypto that haven't gone away from using NIDs for
comparison will continue to work as expected.