WIP: FIPS ECC related changes. (To be replaced by separate PRs) #8524
Conversation
This changeset looks rather massive and the changes are not limited only to sections guarded by the FIPS_MODE define. In my opinion it would be better to split it into smaller PRs so they can be evaluated more carefully. What are the thoughts of the rest of the team about it?
romen
left a comment
Some preliminary comments for now.
Also, I did not understand the changes in the test framework. I would recommend separating code changes and test framework changes into different commits.
static const ec_list_element curve_list[] = {
    /* prime field curves */
    /* secg curves */
#ifndef OPENSSL_NO_EC_NISTP_64_GCC_128
nit: The nested #ifdef/#if/#else/#endif should be indented
 */

#include "internal/cryptlib.h"
#include "internal/bn_int.h"
We don't really need to put the FIPS specific bn_rand_priv_range() outside this file.
I have seen that this change will also be needed in other keygens - DH/DSA.
If so, should this be a new public function rather than an internal one? If we need it to be FIPS compliant then most probably it's something that external ENGINEs need as well (and should go in a separate PR).
    do {
        if (!BN_priv_rand(rnd, bits, BN_RAND_TOP_ANY, BN_RAND_BOTTOM_ANY))
            return 0;
        count++;
Why do we keep track of count if it's not read anywhere?
 * Repeat this process until the generated BIGNUM is smaller than the 'upper'
 * bound.
 */
int bn_rand_priv_range(BIGNUM *rnd, int bits, BIGNUM *upper)
I believe this should be a static inline function in ec_key.c guarded by FIPS_MODE
Why not use BN_priv_rand_range()? Why the extra bits parameter?
The old method of generation just got a random number and checked that it is in range (internally BN_priv_rand_range() does some very interesting optimizations (involving a modulus) when the 2nd and 3rd topmost bits are not set).
The new algorithm is based on grabbing an exact number of random bits and then checking whether that value is in range.
So for FIPS I am not sure that you could argue that they are the same operation.
A side effect of the new algorithm is that the output value is 1 + rand(), which breaks some tests that feed the entropy in through a buffer (the buffer now needs 1 subtracted from it).
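For concreteness, a minimal sketch of that fixed-bit-length candidate approach (a hypothetical helper matching the bn_rand_priv_range() signature quoted from the diff above; the +1 offset mentioned would be applied by the caller):

#include <openssl/bn.h>

/* Hypothetical sketch: draw exactly 'bits' random bits and reject any
 * candidate that is not strictly below 'upper'; the caller then adds 1. */
static int rand_priv_by_bits(BIGNUM *rnd, int bits, const BIGNUM *upper)
{
    do {
        if (!BN_priv_rand(rnd, bits, BN_RAND_TOP_ANY, BN_RAND_BOTTOM_ANY))
            return 0;
    } while (BN_cmp(rnd, upper) >= 0);
    return 1;
}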
The old method of generation just got a random number and checked that it is in range (internally BN_priv_rand_range() does some very interesting optimizations (involving a modulus) when the 2nd and 3rd topmost bits are not set).
https://github.com/openssl/openssl/blame/master/crypto/bn/bn_rand.c#L113
BN_priv_rand_range() derives the number of bits from the range, so that part is functionally equivalent to what is being done here by passing the number of bits;
I admit I don't understand why BN_priv_rand_range() is checking the 2nd and 3rd topmost bits of the range as well and behaving differently depending on that check: it's 18-year-old code and I cannot find a motivation looking back at its history (it seems to be related to Bleichenbacher attacks on biased DSA nonces).
In any case, either that behaviour is still required (and the new ad-hoc method is bypassing it) or it is ossified legacy code (and we would be better off revising it rather than internally using something different that is not accessible to external developers).
So the existing one will probably be faster, as it grabs 1 more bit of random and then tries some modulus operations (by subtracting the range once or twice). I can't tell if this is biased or not. The one I am trying to use doesn't have bias.
So the issue it was trying to solve (I think) is this:
Imagine the top bit is set in the range but then there is a whole lot of zeros following the top bit. When we get a random, if the top bit is set (50% chance) and any of the other top bits are set (in the positions where there are all zeros below the top bit), then that random number will be too large and it will need to retry. To get around this it checks whether the next 2 top bits are not set - and in this case it does magic to grab an extra random bit and then tries a modulus on the bigger range.
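The following is a rough paraphrase (from memory, not a verbatim copy) of the logic around the linked bn_rand.c line, to make the trick concrete; names are illustrative:

#include <openssl/bn.h>

/* Paraphrased sketch of the range-generation trick discussed above. */
static int rand_range_sketch(BIGNUM *r, const BIGNUM *range)
{
    int n = BN_num_bits(range);

    if (n > 2 && !BN_is_bit_set(range, n - 2) && !BN_is_bit_set(range, n - 3)) {
        /* range = 100..._2, so 3*range still fits in n+1 bits */
        do {
            if (!BN_priv_rand(r, n + 1, BN_RAND_TOP_ANY, BN_RAND_BOTTOM_ANY))
                return 0;
            /* fold r into [0, range) by at most two subtractions */
            if (BN_cmp(r, range) >= 0 && !BN_sub(r, r, range))
                return 0;
            if (BN_cmp(r, range) >= 0 && !BN_sub(r, r, range))
                return 0;
        } while (BN_cmp(r, range) >= 0);  /* retries only when r >= 3*range */
    } else {
        /* range = 11..._2 or 101..._2: plain rejection sampling on n bits */
        do {
            if (!BN_priv_rand(r, n, BN_RAND_TOP_ANY, BN_RAND_BOTTOM_ANY))
                return 0;
        } while (BN_cmp(r, range) >= 0);
    }
    return 1;
}

Each residue class modulo range is hit the same number of times when r < 3*range, so the folding step does not introduce bias; it only reduces the retry probability.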
It's my understanding that the 1 is added anyway; it would just be moved to a new function.
If you do the extra-bits version and then modular reduction, you have 2 choices:
- do k := longer_bn mod (range-1) followed by k := k+1
- do k := longer_bn mod (range), and repeat if k == 0 (which happens with probability 1/n)
I would prefer the second one, in which no extra subtractions/additions are necessary.
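A minimal sketch of that second option (hypothetical helper; 64 extra bits assumed, as in the SP800-56A remark further down):

#include <openssl/bn.h>

/* Hypothetical sketch: draw extra bits, reduce mod 'range', retry on zero. */
static int rand_nonzero_mod_range(BIGNUM *k, const BIGNUM *range, BN_CTX *ctx)
{
    BIGNUM *longer = BN_new();
    int ok = 0;

    if (longer == NULL)
        return 0;
    do {
        if (!BN_priv_rand(longer, BN_num_bits(range) + 64,
                          BN_RAND_TOP_ANY, BN_RAND_BOTTOM_ANY)
                || !BN_mod(k, longer, range, ctx))
            goto err;
    } while (BN_is_zero(k));          /* k == 0 occurs with probability ~1/range */
    ok = 1;
 err:
    BN_clear_free(longer);
    return ok;
}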
And my point is that either the k += 1 or the repeat if k == 0 is currently in some other function, not the one that returns you the number in a range.
If you want to generalize something, either it's a function that returns something in 1-range, or lower-upper. And if you go for the one with lower and upper, the addition looks like the most obvious way, but you could also just repeat until you have a number in the range.
@paulidale and I have both looked a little closer at the current implementation of the rand range function - it does not appear to be biased. I don't know that it is worth trying to put in a new API just to do this fairly simple thing.
I think the following is sufficient (i.e. the old way):
/*
 * Although this is slightly different from the standard it is effectively
 * equivalent as it gives an unbiased result ranging from 1..n-1. It is also
 * faster as the standard needs to retry more often. Also doing
 * 1 + rand[0..n-2) would affect the way that tests feed dummy entropy into
 * rand so the simpler backward compatible method has been chosen here.
 */
do {
    if (!BN_priv_rand_range(priv_key, order))
        goto err;
} while (BN_is_zero(priv_key));
And my point is that either the k += 1 or the repeat if k == 0 is currently in some other function, not the one that returns you the number in a range.
Ah, I missed your point then!
I was writing on the assumption of keeping the current BN_rand_range() API of returning something in [0, range-1] inclusive and then having the caller dealing with the exclusion of the lower bound.
If you want to generalize something, either it's a function that returns something in 1-range, or lower-upper. And if you go for the one with lower and upper, the addition looks like the most obvious way, but you could also just repeat until you have a number in the range.
For a new generalized function

int BN_rand_range_ex(BIGNUM *out, const BIGNUM *min_inclusive, const BIGNUM *max_inclusive);
int BN_rand_priv_range_ex(BIGNUM *out, const BIGNUM *min_inclusive, const BIGNUM *max_inclusive);

as you say, addition seems the obvious way, and could be ok for the KATs as well, as then the offsetting wouldn't be ad-hoc in certain modes or for very specific steps (e.g., in the current code of this PR we subtract 1 only when feeding the keygen and not for sign_setup, which should generate a nonce using exactly the same logic as the keygen operation).
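If such an API were adopted, the EC keygen could call it directly. A hypothetical usage sketch (the _ex functions above are proposals discussed here, not existing OpenSSL functions):

#include <openssl/bn.h>

/* Hypothetical: draw an EC private key uniformly from [1, order - 1]. */
static int keygen_priv_sketch(BIGNUM *priv_key, const BIGNUM *order)
{
    int ret = 0;
    BIGNUM *max = BN_new();                    /* will hold order - 1 */

    if (max == NULL
            || !BN_sub(max, order, BN_value_one())
            || !BN_rand_priv_range_ex(priv_key, BN_value_one(), max))
        goto err;
    ret = 1;
 err:
    BN_free(max);
    return ret;
}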
But again, this is one more reason why this PR should be split into more parts, even if the actual code changes seem small:
- we shouldn't have an ad-hoc bn_rand_priv_range() used only by the ECC keygen that cannot be accessed externally by ENGINEs/providers or external applications;
- the proposed bn_rand_priv_range() (or the alternatives discussed in this thread) is not limited to FIPS ECC related changes;
- the proposed changes to the ECC keygen method are not limited to FIPS mode;
- changes in the ECC keygen should be mirrored in ecdsa_sign_setup() as they generally have the same requirements;
- changes to the EC_POINT comparison logic shouldn't be limited to FIPS mode and should stay unified;
- some of the considerations on point validation should probably apply to non-FIPS mode too, while some should stay FIPS mode specific.
SP800-56A seems to have 2 algorithms: one that takes 64 extra bits, and one that takes the correct number of bits.
I'll note the thread at https://mailarchive.ietf.org/arch/msg/cfrg/6k9HMqfmdiKDdFFUHG3IJOGaF8g that is perhaps timely, w.r.t. that choice of 64 extra bits.
    const unsigned char *params_seed, *params_order;
    EC_GROUP *tmp = NULL;
    BN_CTX *ctx = NULL;
    int sz = (EC_GROUP_get_degree(group) + 7) / 8;
nit: meth, field_type, sz are mixing declarations and code (and assuming group and meth are non-NULL)
     * This is done by invalidating any existing key pair, and only
     * setting the returned keypair into the key if there is no error.
     */
    BN_free(eckey->priv_key);
Does this need to be cleared before freeing?
Also, this is changing the existing behaviour of the library not only in FIPS_MODE: before eckey->priv_key would have been reused if already allocated. What are the performance implications of this change?
This is done by invalidating any existing key pair, and only setting the returned keypair into the key if there is no error.
Maybe we can design a better strategy to achieve the same behaviour?
I used BN_secure_new() to allocate, which I thought does this automagically.
Ok. Good point. But it should at least not leave the old keypair in the key.
I used BN_secure_new() to allocate, which I thought does this automagically.
Yes I noticed this, but I am not sure that every EC_KEY object that could be fed to this function has necessarily been populated by this function. Maybe it is, but we should make sure of it.
Ok.
That makes me wonder if it should fail if any private key doesn't use BN_secure_new().
I would personally wish for such a check, but that could be an API breaking change!
you can't do that because on many systems the secure heap needs root privs.
As @richsalz noted, secure allocs can be normal ones on some platforms.
A clear is required and mandating secure allocations isn't enforceable.
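For reference, the clearing variant at the quoted BN_free() call could look roughly like this (a sketch, not necessarily the PR's final code):

/* Wipe the old private scalar before dropping it; BN_clear_free() is
 * NULL-safe, just like BN_free(). */
BN_clear_free(eckey->priv_key);
eckey->priv_key = NULL;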
    /* XXX EC_POINT_cmp() assumes that the methods are equal */
-   if (r || EC_POINT_cmp(a, EC_GROUP_get0_generator(a),
+   if (r || point_cmp(a, EC_GROUP_get0_generator(a),
The comment above is not relevant anymore!
    return 1;
}

static int point_cmp(const EC_GROUP *group, const EC_POINT *a, const EC_POINT *b,
This is completely superseding EC_POINT_cmp(). Why? Should we instead improve EC_POINT_cmp() to make sure the additional checks in pnt_is_compat() are always executed?
I wasn't confident that applying that change everywhere would be good; it should be ok for the group_cmp. If the original writer of this code has an opinion on this, it would be good to know.
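One possible shape for folding the compatibility check into the comparison itself, using only public accessors (a sketch, not the PR's point_cmp()):

#include <openssl/ec.h>

/* Sketch: verify the methods match before delegating to EC_POINT_cmp(). */
static int point_cmp_checked(const EC_GROUP *group, const EC_POINT *a,
                             const EC_POINT *b, BN_CTX *ctx)
{
    /* EC_POINT_cmp() assumes matching methods, so check that first */
    if (EC_GROUP_method_of(group) != EC_POINT_method_of(a)
            || EC_GROUP_method_of(group) != EC_POINT_method_of(b))
        return -1;                         /* incomparable / error */
    return EC_POINT_cmp(group, a, b, ctx); /* 0 = equal, 1 = different, -1 = error */
}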
Hmm, there is not a great deal of source code change - the largest changes are in the test .txt files.
I would definitely recommend separating the changes in the library code from those to the testing framework as separate commits (possibly one for the …). The multiple PRs are IMHO useful for those changes that don't seem to be exclusive to the FIPS mode, and for those that are introduced because of the FIPS mode but that would benefit any mode of execution (e.g. the …).
my @valid = ();
my @invalid = ();

if (defined $ENV{'fips'}) {
I think the env var should match the pre-processor, FIPS_MODE and be set to 1 if on. Sometimes 'unsetenv' / 'unset' is not convenient.
The changes here are ahead of the framework ones. This isn't the way FIPS detection will be done but we don't know how it will be done yet. Until someone works on the latter, this is good enough.
I disagree. There's no point in making things needlessly different, especially for an interim measure.
    return security_bits >= min_strength && security_bits <= RAND_DRBG_STRENGTH;
#endif
    return security_bits <= RAND_DRBG_STRENGTH;
}
I'm wondering if we don't need something that can be set by the user. I think it's useful to be able to set a minimum security strength of 112 when not in FIPS mode. Even when in FIPS mode, it might need to be set to a higher value.
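As an illustration only, a run-time configurable minimum could take a shape like this (the function and the user_min_strength parameter are hypothetical; RAND_DRBG_STRENGTH is the macro used in the quoted code):

#include <openssl/rand_drbg.h>

/* Hypothetical: a caller-adjustable floor on top of the compile-time check. */
static int rand_strength_acceptable(int security_bits, int user_min_strength)
{
    int min_strength = user_min_strength;
#ifdef FIPS_MODE
    if (min_strength < 112)
        min_strength = 112;    /* assumed FIPS floor of 112 bits */
#endif
    return security_bits >= min_strength
           && security_bits <= RAND_DRBG_STRENGTH;
}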
kroeckx
left a comment
This code seems to have too many compile-time checks to see if we're in FIPS mode or not. I think most of that should be done at run time instead.
Separating them is a good idea. There will need to be some changes to the test framework for FIPS; we don't know the form of these yet.
I prefer this where possible. Pauli
We are not required to have technical enforcement of such limits.
In FIPS mode the EC param check is "is this an approved curve"; to me that means enforcing the strength check in FIPS mode, since there is code in the standards that checks the strength. Assurance of the params may need to be done before most operations (e.g. keygen).
A general comment: we agreed to support both FIPS and non-FIPS in the same executable. That probably means ifdef is the wrong approach.
No - this is precisely the approach documented in the Design document (see the section "Conditional Code"). The same file will be compiled twice: once for inclusion in the FIPS provider and once for inclusion in the default provider. FIPS_MODE will be defined for one of those compilations and not the other.
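To illustrate the pattern described (a sketch, not actual tree content; the macro name below is made up):

/* The same translation unit is built twice: with -DFIPS_MODE for the FIPS
 * provider and without it for the default provider. */
#ifdef FIPS_MODE
/* FIPS provider build: e.g. only approved curves are made available */
# define EC_KEYGEN_ALLOW_NON_APPROVED_CURVES 0
#else
/* Default provider build: all built-in curves remain available */
# define EC_KEYGEN_ALLOW_NON_APPROVED_CURVES 1
#endif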
Sorry, of course you're right, @mattcaswell
Regarding the test changes to …, here is my reasoning:
For completeness (i.e., I waste too much time finding the right PRs in the list), see the following PRs for related changes:

Is there any reason to keep this open?
(1) In FIPS mode only approved curves (NIST curves with size >= 224 bits) are allowed for key generation and key agreement (decryption and signature verification still allow deprecated curves of size >= 160 bits to be used); see the sketch after this list.
(2) Key agreement key generation has changed slightly in the latest standard (which is why it no longer uses the BN_rand_range() method - which may introduce bias?).
(3) Assuming that the tests will need to be run twice (once for FIPS), some tests have been modified due to the changes in (1), i.e. the curve test .txt files needed to be separated into different files consisting of:
(i) fips - which run in both the FIPS and non-FIPS tests.
(ii) non_fips - which don't run during the FIPS tests.
(4) Key validation has an extra check in it now.
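A hedged illustration of the strength rule in (1) (the helper name and the boolean parameter are hypothetical, and the approved-curve list lookup itself is omitted):

#include <openssl/ec.h>

/*
 * Hypothetical sketch: key generation and key agreement require an approved
 * curve of at least 224 bits, while verification/decryption still tolerate
 * legacy curves down to 160 bits. Membership in the approved (NIST) curve
 * list is not checked here.
 */
static int ec_curve_strength_ok(const EC_GROUP *group, int verify_or_decrypt)
{
    int bits = EC_GROUP_order_bits(group);   /* ~ curve size for NIST curves */

    return verify_or_decrypt ? bits >= 160 : bits >= 224;
}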