Mask count in shift operations on native integers #61341
Conversation
Force-pushed from 0aa2160 to 09ae707
Does this need a corresponding "breaking change" doc? Notably, some platforms supported by other runtimes might see differing behavior when compiled with C# 11 versus a prior version.
Force-pushed from 2633235 to 0873c0a
@tannergooding I was not planning to add a breaking change entry for this fix, but we can discuss it during code review.
// - When the type of `x` is `nint` or `nuint`, the shift count is given by
//   the low-order five bits of `count` on a 32-bit platform, or
//   the low-order six bits of `count` on a 64-bit platform.
//   The shift count is computed from `count & (sizeof(nint) * 8 - 1)` (or `nuint`),
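The masking rule quoted above is easy to sanity-check with a small Python sketch (a simulation of the described behavior, not compiler code; `masked_shift_count` is a hypothetical helper name):

```python
def masked_shift_count(count: int, pointer_bits: int) -> int:
    """Simulate the nint/nuint shift-count masking: count & (size-in-bits - 1)."""
    return count & (pointer_bits - 1)

# On a 32-bit platform only the low five bits of the count are used;
# on a 64-bit platform only the low six bits.
print(masked_shift_count(33, 32))  # -> 1
print(masked_shift_count(64, 64))  # -> 0
```

So the same source-level shift count can resolve to different effective counts depending on the platform pointer width, which is why the constant-folding below has to be conservative.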
TypeSymbol rightType = loweredRight.Type;
Debug.Assert(rightType.SpecialType == SpecialType.System_Int32);

if (rightConstantValue != null && rightConstantValue.IsIntegral && rightConstantValue.Int32Value <= 0x1F)
}
return oldNode == null
    ? _factory.Binary(operatorKind, type, loweredLeft, loweredRight)
}
[Fact, WorkItem(43347, "https://github.com/dotnet/roslyn/issues/43347")]
public void MaskShiftCount()
We have tests for old native integers (i.e., using a runtime without the feature flag) and new native integers (on a new runtime). This file is for the old native integers (the fix applies to them too).
Done with review pass (commit 3)
// For the predefined operators, the number of bits to shift in `x << count`, `x >> count` or `x >>> count` is computed as follows:
// [...]
// - When the type of `x` is `nint` or `nuint`, the shift count is given by
//   the low-order five bits of `count` on a 32-bit platform, or
validate("nuint", "NUintMaxValue", ">>> 63", "0x0000_0001", "0x0000_0000_0000_0001", nuint_shr_un(63));
validate("nuint", "NUintMaxValue", ">>> 64", "0xFFFF_FFFF", "0xFFFF_FFFF_FFFF_FFFF", nuint_shr_un(64));
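The expected values in these two test lines can be cross-checked with a short Python simulation (`nuint_shr_un` below is a hypothetical stand-in for the runtime's unsigned shift-right behavior, not the test helper of the same name):

```python
MASK64 = 0xFFFF_FFFF_FFFF_FFFF
MASK32 = 0xFFFF_FFFF

def nuint_shr_un(value: int, count: int, pointer_bits: int) -> int:
    """Simulate nuint >>> with the shift count masked at runtime."""
    mask = (1 << pointer_bits) - 1
    return (value & mask) >> (count & (pointer_bits - 1))

# `>>> 63` on nuint.MaxValue: the masked count is 63 (64-bit) or 31 (32-bit),
# leaving only the top bit, i.e. 1.
print(hex(nuint_shr_un(MASK64, 63, 64)))  # -> 0x1
print(hex(nuint_shr_un(MASK32, 63, 32)))  # -> 0x1
# `>>> 64`: the masked count is 0 on both widths, so the value is unchanged.
print(hex(nuint_shr_un(MASK64, 64, 64)))  # -> 0xffffffffffffffff
print(hex(nuint_shr_un(MASK32, 64, 32)))  # -> 0xffffffff
```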
// lifted value (sampler)
It was meant as "not as exhaustive" (just picking a few interesting permutations). I'll remove it.
// return x op count;
Diagnostic(ErrorCode.ERR_BadBinaryOps, $"x {op} count").WithArguments(op, type, "int").WithLocation(5, 16)
);
}
if (rightConstantValue != null
    && rightConstantValue.Discriminator == ConstantValueTypeDiscriminator.Int32
    && rightConstantValue.Int32Value is >= 0 and <= 0x1F)
Perhaps we will be able to handle some negative numbers as well with the following logic?

int originalValue = rightConstantValue.Int32Value;
int maskedValue = originalValue & 0x1F;
bool useMaskedValue = (originalValue & ~((1 << 31) | maskedValue)) == 0;
#Resolved
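The suggested condition can be probed in Python, treating the count as a 32-bit two's-complement value (a simulation of the proposed check, not the Roslyn code):

```python
def use_masked_value(original_value: int) -> bool:
    """True when every set bit of the 32-bit count is either the sign bit
    or already within the low five (masked) bits."""
    u = original_value & 0xFFFF_FFFF          # 32-bit two's-complement view
    masked_value = u & 0x1F
    return (u & ~((1 << 31) | masked_value)) == 0

print(use_masked_value(3))             # small positive counts qualify
print(use_masked_value(-2147483645))   # int.MinValue + 3: only the sign bit is extra
print(use_masked_value(-1))            # bits 5..30 are set, so it does not qualify
```

In other words, the check accepts counts whose only bits outside the 5-bit mask are the sign bit; as the thread concludes below, these negative-count cases were ultimately judged not worth optimizing.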
Done with review pass (commit 5)
&& rightConstantValue.Discriminator == ConstantValueTypeDiscriminator.Int32
&& (rightConstantValue.Int32Value & 0b100_000) == 0)
{
    // For cases where count doesn't have the sixth bit set, we statically know the masked count.
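One way to see why checking the sixth bit suffices: whenever bit five (`0b100_000`) of the count is clear, masking by 31 (32-bit) and by 63 (64-bit) produce the same value, so the folded count is platform-independent. A quick Python check of that property (a simulation, not the compiler code):

```python
def sixth_bit_clear(count: int) -> bool:
    """True when bit five of the count is clear."""
    return (count & 0b100_000) == 0

# The 5-bit and 6-bit masks differ exactly in bit five, so when that bit is
# clear the two masked counts must agree.
for count in range(-64, 256):
    c = count & 0xFFFF_FFFF  # 32-bit two's-complement view of the count
    if sixth_bit_clear(c):
        assert (c & 31) == (c & 63)
print("32-bit and 64-bit masks agree whenever the sixth bit is clear")
```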
`nint` becoming 128 bits wide would require revisions to the PE metadata format, the general IL support, and a number of other things. At this point, such a feature would break much existing .NET IL code (particularly given the prevalence of code like `if (IntPtr.Size == 8) { } else { /* assume 32-bit */ }` at some of the lowest layers).

Supporting a 128-bit platform, if it ever happens, is likely many years out. It wouldn't be unreasonable for the current compiler to just pretend it can't happen, given the likelihood that it would require a break to the existing metadata format.

It also wouldn't be unreasonable to guard against "too big" shifts, or even emit a warning (as part of a warning wave) in such scenarios, to tell the user "this might not do what you think it does" (many users think that overshifting results in 0, like it does for byte, sbyte, short, and ushort when the mask is less than 31).
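The misconception mentioned here is easy to demonstrate: with count masking, overshifting wraps the count around instead of producing 0. A small Python sketch of a masked 32-bit left shift (a simulation, not the runtime's implementation):

```python
def shl32(value: int, count: int) -> int:
    """Simulate a 32-bit left shift whose count is masked to 5 bits."""
    return (value << (count & 31)) & 0xFFFF_FFFF

print(shl32(1, 33))  # -> 2, not 0: the count 33 is masked to 1
print(shl32(1, 32))  # -> 1, not 0: the count 32 is masked to 0
```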
I can appreciate the trouble the runtime would have to go through, but I do not see a good reason for the compiler to do so as well (apart, I guess, from supporting a new metadata format, if that were even needed). We can easily have a correct implementation that doesn't depend on the assumption that 64-bit is the biggest size we will ever encounter. I do not see this assumption helping the compiler today or in the long run.
Reverted to the simpler logic (only optimize if the count is >= 0 and <= 0x1F). It isn't worth the effort to optimize the odd cases of negative counts.
This reverts commit 7dd0d62.
Fixes #43347