
Conversation

@mattip
Member

@mattip mattip commented Jun 5, 2025

Users have asked for an option to raise an error when casting overflows. This is hard to do with the NEP 50-based casting, which only offers blanket rules for casting one dtype to another, without taking the actual values into account. This PR adds a new 'same_value' option to the casting kwarg (implemented in PyArray_CastingConverter), and extends the casting loop functions to raise a ValueError if same_value casting is requested and a value would be changed by the cast. So far this is only implemented for ndarray.astype, i.e. np.array([1000]).astype(np.int8, casting='same_value') will now raise an error.

Performance: since the PR touches the assignment deep in the inner loop, I checked early on that it does not impact performance when same_value casting is not used. The loop pseudo-code now looks like this, and when compiled with -O3 (via a gcc pragma specific to the loop), the compiler seems smart enough to keep the if condition from impacting performance. Edit: it seems there is a small performance impact for some casting functions.

int same_value_casting = <check some condition>
type2 dst_value; type1 src_value;
while (<condition>) {
    <setup dst_value, src_value>
    dst_value = (cast)src_value;
    if (same_value_casting) {
        <do some checking>;
    }
    <iterate>;
}
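Concretely, a minimal instantiation of this pseudocode for an int64 → int8 cast could look like the sketch below (hypothetical; the real loops are generated from templates and operate on strided char pointers):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of a same_value-checking cast loop for
 * int64 -> int8; returns 0 on success, -1 if a value would change. */
int
cast_int64_to_int8(const int64_t *src, int8_t *dst, size_t n,
                   int same_value_casting)
{
    for (size_t i = 0; i < n; i++) {
        dst[i] = (int8_t)src[i];
        if (same_value_casting) {
            /* cast back and compare: the value must round-trip */
            if ((int64_t)dst[i] != src[i]) {
                return -1;
            }
        }
    }
    return 0;
}
```

Since `same_value_casting` is loop-invariant, an optimizing compiler can hoist the branch or clone the loop, which is why the unchecked path can stay fast.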

Protection: This PR only changes ndarray.astype. Future PRs may implement more. I tried to guard the places that use PyArray_CastingConverter to raise an error if 'same_value' is passed in:

  • array_datetime_as_string (by disallowing 'same_value' in can_cast_datetime64_units)
  • array_copyto
  • array_concatenate (by disallowing 'same_value' in PyArray_ConcatenateInto)
  • array_einsum (by disallowing 'same_value' in PyArray_EinsteinSum)
  • ufunc_generic_fastcall

'same_value' is allowed in

  • array_astype (the whole goal of this PR)
  • array_can_cast_safely
  • npyiter_init (I am pretty sure that is OK?)
  • NpyIter_NestedIters (I am pretty sure that is OK?)

Testing: I added tests for ndarray.astype() covering all combinations of built-in types, and also some tests for properly blocking casting='same_value'.

TODO:

  • disallow astype(..., casting='same_value') with datetime64 or user-defined dtypes?
  • rerun benchmarks against main
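In the meantime, the intended semantics can be emulated in pure Python on a released NumPy (a sketch only; astype_same_value is a hypothetical helper, not this PR's implementation, which does the check inside the C inner loop):

```python
import numpy as np

def astype_same_value(arr, dtype):
    """Cast arr to dtype, raising ValueError if any value would change."""
    out = arr.astype(dtype, casting='unsafe')
    # Round-trip check: every cast value must compare equal to the original.
    if not np.array_equal(out.astype(arr.dtype, casting='unsafe'), arr):
        raise ValueError("same_value cast would change values")
    return out
```

For example, `astype_same_value(np.array([1, 2, 100]), np.int8)` succeeds, while `astype_same_value(np.array([1000]), np.int8)` raises.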

Member

@seberg seberg left a comment


It is pretty cool that we can do it! That said, I would prefer some more tinkering about the approach.

Basically, I think making it work everywhere is not nearly as hard as it may seem and so I think we have to at least try.

I think the only good way to do this is that the new cast level(s) are reported by the loops (via resolve_descriptors) and integrated into PyArray_MinCastSafety. By doing so, we will achieve two things:

  1. Things should just work everywhere as long as you also set the context.flag correctly everywhere (should also not be too much).
  2. I am very sure that currently things are broken for datatypes that don't implement this flag yet. And unfortunately, that is not great. If a user does casting="same_value" a "safe" cast is fine but we need to reject "unsafe" and "same_kind" casts.

So, I think we really need something for 2, and honestly think the easiest way to solve it is this same path that gives us full support everywhere.
(And yes, I don't love having two "same_value" levels, but I doubt it can be helped.)

(All other things are just nitpicks at this time.)

#define _NPY_METH_static_data 10

/*
* Constants for same_value casting
Member


This would be a constant for special array method return values, I think (i.e. not just applicable to casts).

Would it not be worth it to just keep this inside the inner loop? Yes, you need to grab the GIL there, but on the plus side, you could actually report the error right away.

Member Author


I thought the point of having a return value from the inner loop function was so we can use it to report errors. In general I prefer a code style that separates programming metaphors as much as possible: keep the Python C-API calls separate from the "pure" C functions.

Member

@seberg seberg Jun 6, 2025


I can be convinced to add this. But it is public API, and we call these loops from a lot of places, so we need machinery to make sure that all of these places give the same errors.

That may be a great refactor! E.g. we could have a single function that does HandleArrayMethodError("name", method_result, method_flags).
But without such a refactor it feels very bolted on to me, because whether or not we check for this return depends on where we call/expect it.

EDIT: I.e. basically, wherever we have PyUFunc_GiveFloatingpointErrors() we would also pass the actual return value and do the needed logic there.

Member Author


I will try to do this as a separate PR.

@seberg seberg added the 56 - Needs Release Note. Needs an entry in doc/release/upcoming_changes label Jun 5, 2025
Member

@jorenham jorenham left a comment


Could you add this option to

_CastingKind: TypeAlias = L["no", "equiv", "safe", "same_kind", "unsafe"]

@mattip
Member Author

mattip commented Jun 5, 2025

It seems I touched something in the float -> complex astype. Running

$ spin bench --compare -t bench_ufunc.NDArrayAsType.time_astype

gives me

| Change | Before [d52bccb] | After [23dda18] | Ratio | Benchmark (Parameter)                                             |
|--------|------------------|-----------------|-------|-------------------------------------------------------------------|
| +      | 2.60±0.04μs      | 6.45±0.05μs     | 2.48  | bench_ufunc.NDArrayAsType.time_astype(('float32', 'complex64'))   |
| +      | 4.08±0.01μs      | 9.21±0.3μs      | 2.26  | bench_ufunc.NDArrayAsType.time_astype(('float64', 'complex128'))  |
| +      | 3.99±0.04μs      | 8.62±0.04μs     | 2.16  | bench_ufunc.NDArrayAsType.time_astype(('float32', 'complex128'))  |
| +      | 15.8±0.4μs       | 17.8±0.2μs      | 1.13  | bench_ufunc.NDArrayAsType.time_astype(('float16', 'float32'))     |
| +      | 16.0±0.1μs       | 18.0±0.3μs      | 1.13  | bench_ufunc.NDArrayAsType.time_astype(('float16', 'int64'))       |
| +      | 2.14±0μs         | 2.39±0.03μs     | 1.11  | bench_ufunc.NDArrayAsType.time_astype(('int32', 'float32'))       |
| +      | 18.0±0.2μs       | 19.9±0.2μs      | 1.1   | bench_ufunc.NDArrayAsType.time_astype(('float16', 'complex128'))  |
| -      | 2.20±0.03μs      | 1.98±0.04μs     | 0.9   | bench_ufunc.NDArrayAsType.time_astype(('int16', 'int32'))         |

@seberg
Member

seberg commented Jun 6, 2025

It seems I touched something in the float -> complex astype.

Maybe the compiler just didn't decide to lift the branch for some reason? I have to say that complex -> real casts wouldn't be my biggest concern; there is this weird ComplexWarning on them (in some branches at least), for a reason...

@mattip
Member Author

mattip commented Jun 15, 2025

The linting failures are from #29197; there is a PR to fix them in #29210.

The OpenSUSE failure seems to be due to network failures downloading a package.

@mattip
Member Author

mattip commented Jul 21, 2025

Somehow this caused test_nditer.py::test_iter_copy_casts to fail for [e-f], [e-d], [e-F], [e-D], [e-g], [F-e], [D-e], [G-e], but only on the debug build, which would suggest I touched something in the half casting.

Edit: it was due to the debug build exposing a missing initialization of context.flags

@mattip
Member Author

mattip commented Jul 22, 2025

CI is passing

@mattip
Member Author

mattip commented Jul 22, 2025

Hmm. I wonder what we should do with casting float-to-int or float-to-smaller-float? I think the second example should raise, but what about the third?

>>> np.array([1.0, 2.0, 100.0]).astype('int64', casting='same_value')
array([  1,   2, 100])
>>> np.array([1.2, 2.45, 100.0]).astype('int64', casting='same_value')
array([  1,   2, 100])

>>> a = np.array([1.2, 2.45, 3.14156])
>>> a == a.astype(np.half, casting='same_value')
array([False, False, False])
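The float16 case is reproducible with a plain round-trip check on current NumPy (no new casting mode needed):

```python
import numpy as np

a = np.array([1.2, 2.45, 3.14156])
# float16 has only a 10-bit mantissa, so these values cannot survive the cast:
roundtrip = a.astype(np.half).astype(a.dtype)
print(np.equal(a, roundtrip))        # [False False False]
# Values exactly representable in float16 round-trip fine:
b = np.array([1.0, 2.0, 100.0])
print(np.equal(b, b.astype(np.half).astype(b.dtype)))   # [ True  True  True]
```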

@seberg
Member

seberg commented Jul 23, 2025

I had a list of 5-6 different possible definitions, but for now I am skipping it. I am tempted to go with your a == a.astype(new_dtype, casting='same_value') must be true definition (one caveat below).
(This implies that a.astype(new_dtype, casting='same_value').astype(a.dtype) round-trips, but is stronger, since == requires that the two dtypes be promotable in the context of comparing.)

Unless we go all out and say that a same_value cast must also be a same_kind one and disallow float to integer casts entirely here, the float must clearly be an integer.
That allows the use-case of wanting to safely use a float array as an integer one. If this is not desired, the best solution may be an "is_integral" check for the dtype.

  • One additional subtlety: The float value 2e60 lacks the mantissa to exactly represent an integral value. Even though a == a.astype(new_dtype, casting='same_value') is true one could consider it unsafe from this perspective.
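The mantissa subtlety is visible from the spacing of adjacent float64 values in that range:

```python
import numpy as np

x = np.float64(2e60)
# Near 2e60 the gap between adjacent float64 values is 2**148 (~3.6e44),
# so almost no integer in that range is exactly representable:
print(np.spacing(x))     # the gap to the next representable float64
print(x + 1.0 == x)      # True: adding 1 is absorbed completely
```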

It may actually be worth bringing it up on the mailing list and meeting and sketching at least the main alternatives. (I don't usually expect much input, but sketching the alternatives tends to clarify things a bit either way.)

I'll try to look over the recent changes here soon, but ping me here or on slack if I forget!

@mattip
Member Author

mattip commented Jul 23, 2025

I think it makes sense to do the most conservative thing and require "ability to accurately round-trip" for 'same_value' in this first implementation.

@seberg
Member

seberg commented Jul 24, 2025

I think it makes sense to do the most conservative thing and require "ability to accurately round-trip" for 'same_value' in this first implementation.

I suppose we have at least a brief window in which we could still broaden the definition slightly.
There is one use-case where a different definition might be nice, and that is if we were to consider moving towards this cast mode as a default for output parameters/assignments:

arr += something
arr[...] = other_array

That is currently same-kind I think. In that context I think it is best to forgive all float inaccuracies (including over and underflows).
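For reference, plain assignment today performs these casts without any value check, so fractional parts are silently dropped:

```python
import numpy as np

arr = np.zeros(3, dtype=np.int64)
arr[...] = np.array([1.5, 2.7, -0.5])   # allowed today, truncates toward zero
print(arr)   # [1 2 0]
```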

But, I don't want to get lost too much in this. Unless we are sure that we are only interested in one of these two use-cases for floats (or float to int), it seems fine to just focus on one now and add the other if we want to later.

(I suppose I wouldn't mind asking around if there is a preference about the insufficient mantissa for float -> int casts, but it is a detail. -- but more of a sanity check because I don't have a strong gut feeling.)

@mattip
Member Author

mattip commented Sep 7, 2025

I ran a benchmark where I disabled the internal dispatching in each call to the function, and the results showed that this is the cause of the slowdown:

 static GCC_CAST_OPT_LEVEL int
 @prefix@_cast_@name1@_to_@name2@(
         PyArrayMethod_Context *context, char *const *args,
         const npy_intp *dimensions, const npy_intp *strides,
         NpyAuxData *data)
 {
-#if !@is_bool2@
+#if 0 && !@is_bool2@
     int same_value_casting = ((context->flags & NPY_SAME_VALUE_CONTEXT_FLAG) == NPY_SAME_VALUE_CONTEXT_FLAG);
     if (same_value_casting) {
         return @prefix@_cast_@name1@_to_@name2@_same_value(context, args, dimensions, strides, data);
     } else {
 #else
     {
 #endif
         return @prefix@_cast_@name1@_to_@name2@_no_same_value(context, args, dimensions, strides, data);
 }}

That would indicate that a path to recovering the lost performance would be to avoid the same_value flag check when the casting is safe.
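One way to avoid the per-call flag check entirely is to resolve the loop variant once at setup time, so each inner loop is branch-free (a hypothetical sketch, not the PR's actual template code):

```c
#include <stdint.h>
#include <stddef.h>

typedef int (*cast_loop)(const int64_t *src, int8_t *dst, size_t n);

/* Plain lossy cast: never fails. */
int loop_plain(const int64_t *src, int8_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        dst[i] = (int8_t)src[i];
    }
    return 0;
}

/* Checked cast: fails as soon as a value would change. */
int loop_same_value(const int64_t *src, int8_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        dst[i] = (int8_t)src[i];
        if ((int64_t)dst[i] != src[i]) {
            return -1;   /* caller turns this into a ValueError */
        }
    }
    return 0;
}

/* Select the loop once, outside the hot path, instead of testing
 * a context flag inside every call. */
cast_loop select_loop(int same_value_requested)
{
    return same_value_requested ? loop_same_value : loop_plain;
}
```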

@mattip
Member Author

mattip commented Sep 8, 2025

There is still a slowdown, even when disabling the same_value check and dispatching. Playing with a version of the code in Compiler Explorer, I think the NPY_GCC_OPT_3 attribute is insufficient to inline things enough to get rid of the function calls. I need to decorate the half conversion utilities as well, and also inline the To BitCast(const From &from) helper.

I also noticed the benchmarks are using itertools.combinations() and not itertools.product(), so we are only benchmarking half the casting type combinations.
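The coverage gap is easy to quantify: for n dtypes, combinations yields n*(n-1)/2 unordered pairs, while product covers all n**2 ordered (src, dst) pairs:

```python
from itertools import combinations, product

dtypes = ["int16", "int32", "int64", "float16", "float32", "float64"]
benchmarked = list(combinations(dtypes, 2))     # unordered, no self-casts
everything = list(product(dtypes, repeat=2))    # every (src, dst) ordering
print(len(benchmarked), len(everything))        # 15 36
```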

Edit: we cannot inline things like the float16 conversion routines, since they come from libnpymath.a which we link to.

@mattip
Member Author

mattip commented Sep 9, 2025

Even if I replace lowlevel_strided_loops.c.src with the one from main, I still see a slowdown. But if I also remove the flags field from PyArrayMethod_Context, I get no slowdown. Could the size of that struct be influencing runtime?


@mattip
Member Author

mattip commented Sep 14, 2025

Running benchmarks locally on an AMD Ryzen machine (the AsType benchmarks now run all possible pairings of int/float/complex dtypes), there are still some slowdowns and some speedups. I think this is acceptable?

| Change   | Before [73f8bd0f] <test_casting~1>   | After [31ee1aa4] <value-based>   |   Ratio | Benchmark (Parameter)                                            |
|----------|--------------------------------------|----------------------------------|---------|------------------------------------------------------------------|
| +        | 18.6±0.2μs                           | 21.0±0.2μs                       |    1.12 | bench_ufunc.NDArrayAsType.time_astype(('float16', 'complex64'))  |
| +        | 2.02±0.01μs                          | 2.24±0.01μs                      |    1.11 | bench_ufunc.NDArrayAsType.time_astype(('int16', 'int32'))        |
| +        | 4.47±0.04μs                          | 4.76±0.05μs                      |    1.07 | bench_ufunc.NDArrayAsType.time_astype(('complex128', 'float64')) |
| +        | 3.05±0.05μs                          | 3.24±0.05μs                      |    1.06 | bench_ufunc.NDArrayAsType.time_astype(('complex64', 'float32'))  |
| -        | 2.64±0.05μs                          | 2.51±0.05μs                      |    0.95 | bench_ufunc.NDArrayAsType.time_astype(('int64', 'int16'))        |
| -        | 24.5±0.5μs                           | 22.7±0.3μs                       |    0.93 | bench_ufunc.NDArrayAsType.time_astype(('complex128', 'float16')) |
| -        | 16.2±0.2μs                           | 15.0±0.2μs                       |    0.93 | bench_ufunc.NDArrayAsType.time_astype(('float16', 'float64'))    |
| -        | 8.06±0.07μs                          | 7.19±0.04μs                      |    0.89 | bench_ufunc.NDArrayAsType.time_astype(('complex128', 'int64'))   |
| -        | 18.7±0.1μs                           | 16.7±0.1μs                       |    0.89 | bench_ufunc.NDArrayAsType.time_astype(('float16', 'int64'))      |
| -        | 18.7±0.2μs                           | 16.3±0.3μs                       |    0.88 | bench_ufunc.NDArrayAsType.time_astype(('float16', 'int32'))      |
| -        | 18.9±0.2μs                           | 16.2±0.2μs                       |    0.86 | bench_ufunc.NDArrayAsType.time_astype(('float16', 'int16'))      |
| -        | 2.54±0.01μs                          | 2.13±0.02μs                      |    0.84 | bench_ufunc.NDArrayAsType.time_astype(('int32', 'float32'))      |

@github-actions

Diff from mypy_primer, showing the effect of this PR on type check results on a corpus of open source code:

spark (https://github.com/apache/spark)
- python/pyspark/ml/functions.py:244: note:     def [_ScalarT: generic[Any]] vstack(tup: Sequence[_SupportsArray[dtype[_ScalarT]] | _NestedSequence[_SupportsArray[dtype[_ScalarT]]]], *, dtype: None = ..., casting: Literal['no', 'equiv', 'safe', 'same_kind', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[_ScalarT]]
+ python/pyspark/ml/functions.py:244: note:     def [_ScalarT: generic[Any]] vstack(tup: Sequence[_SupportsArray[dtype[_ScalarT]] | _NestedSequence[_SupportsArray[dtype[_ScalarT]]]], *, dtype: None = ..., casting: Literal['no', 'equiv', 'safe', 'same_kind', 'same_value', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[_ScalarT]]
- python/pyspark/ml/functions.py:244: note:     def [_ScalarT: generic[Any]] vstack(tup: Sequence[Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | complex | bytes | str | _NestedSequence[complex | bytes | str]], *, dtype: type[_ScalarT] | dtype[_ScalarT] | _SupportsDType[dtype[_ScalarT]], casting: Literal['no', 'equiv', 'safe', 'same_kind', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[_ScalarT]]
+ python/pyspark/ml/functions.py:244: note:     def [_ScalarT: generic[Any]] vstack(tup: Sequence[Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | complex | bytes | str | _NestedSequence[complex | bytes | str]], *, dtype: type[_ScalarT] | dtype[_ScalarT] | _SupportsDType[dtype[_ScalarT]], casting: Literal['no', 'equiv', 'safe', 'same_kind', 'same_value', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[_ScalarT]]
- python/pyspark/ml/functions.py:244: note:     def vstack(tup: Sequence[Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | complex | bytes | str | _NestedSequence[complex | bytes | str]], *, dtype: type[Any] | dtype[Any] | _SupportsDType[dtype[Any]] | tuple[Any, Any] | list[Any] | _DTypeDict | str | None = ..., casting: Literal['no', 'equiv', 'safe', 'same_kind', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[Any]]
+ python/pyspark/ml/functions.py:244: note:     def vstack(tup: Sequence[Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | complex | bytes | str | _NestedSequence[complex | bytes | str]], *, dtype: type[Any] | dtype[Any] | _SupportsDType[dtype[Any]] | tuple[Any, Any] | list[Any] | _DTypeDict | str | None = ..., casting: Literal['no', 'equiv', 'safe', 'same_kind', 'same_value', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[Any]]

@mattip
Member Author

mattip commented Sep 15, 2025

Extending the benchmarks to all combinations of dtypes in astype: the benchmarks ran in 3m41s instead of approximately 3m30s. I think the extra 10 seconds is worth the more extensive benchmarking.

@mattip
Member Author

mattip commented Sep 17, 2025

Any more thoughts here?

Member

@seberg seberg left a comment


I think I am happy with it, and don't think there were many new things. I just scrolled through and may not have looked at the most recent changes very closely, but I think they were all re-organizations.

Unless @mhvk/someone speaks up very soon about concerns, please go ahead and merge when you are happy @mattip (and then open a follow up issue).

#else
return (double)(ToDoubleBits(h));
#endif
}
Member


I wonder if that can't be in the header then, but maybe it doesn't matter (and I guess that header may be public); not sure.

@mhvk
Contributor

mhvk commented Sep 17, 2025

Don't really have the time for another review, but I was already happy, and you addressed all my more immediate comments. Please do raise follow-up issues about things not done, such as the various casting flags and how they are used and passed on. (@seberg might this also be relevant for how we are trying to implement sorting flags -- a general mechanism to pass on flags may be useful... but maybe I'm missing the boat completely here, don't really have the time to look properly now...)

@seberg
Member

seberg commented Sep 17, 2025

a general mechanism to pass on flags may be useful... but maybe I'm missing the boat completely here, don't really have the time to look properly now

I think it is almost the same! But here, it seems right -- or at least OK -- to apply this flag to (almost) all ArrayMethods in the future (even ufuncs in principle, that right now are allowed to absorb casts even if there is no such ufunc inside or outside of NumPy, I am sure).

While for sorts it could look basically the same, but the flag and flag space must be specific to the sort gufunc, I think.

@seberg seberg merged commit abb8dac into numpy:main Sep 17, 2025
79 checks passed
bwhitt7 pushed a commit to bwhitt7/numpy that referenced this pull request Sep 23, 2025
…#29129)

* start implementing same_value casting

* work through more places that check 'cast', add a TODO

* add a test, percolate casting closer to inner loops

* use SAME_VALUE_CAST flag for one inner loop variant

* aligned test of same_value passes. Need more tests

* handle unaligned casting with 'same_value'

* extend tests to use source-is-complex

* fix more interfaces to pass casting around, disallow using 'same_value' in raw_array_assign_scalar and raw_array_wheremasked_assign_scalar

* raise in places that have a kwarg casting, besides np.astype

* refactor based on review comments

* CHAR_MAX,MIN -> SCHAR_MAX,MIN

* copy context flags

* add 'same_value' to typing stubs

* document new feature

* test, check exact float->int casting: refactor same_value check into a function

* enable astype same_value casting for scalars

* typo

* fix ptr-to-src_value -> value casting errors

* fix linting and docs, ignore warning better

* gcc warning is different

* fixes from review, typos

* fix compile warning ignore and make filter in tests more specific, disallow non-numeric 'same_value'

* fix warning filters

* emit PyErr inside the loop

* macOS can emit FPEs when touching NAN

* Fix can-cast logic everywhere for same-value casts (only allow numeric)

* reorder and simplify, from review

* revert last commit and remove redundant checks

* gate and document SAME_VALUE_CASTING for v2.4

* make SAME_VALUE a flag, not an enum

* fixes from review

* fixes from review

* inline inner casting loop and float16 -> float64 calls

* typo

* fixes for AVX512 and debug builds, inline npy_halfbits_to_doublebits only on specific platforms

* use optimize instead of inline

---------

Co-authored-by: Matti Picus <matti.picus@gmail.com>
Co-authored-by: Sebastian Berg <sebastianb@nvidia.com>
ssam18 added a commit to ssam18/numpy that referenced this pull request Nov 12, 2025
This commit implements follow-up work for same-value casting (PR numpy#29129):

1. **Enhanced Documentation**:
   - Added comprehensive documentation for NPY_SAME_VALUE_CONTEXT_FLAG
   - Documented the flag's purpose, usage, and relationship to casting system
   - Added examples and cross-references to related functionality

2. **Extended np.can_cast Support**:
   - Modified can_cast to use PyArray_CastingConverterSameValue instead of PyArray_CastingConverter
   - Updated can_cast docstring to include 'same_value' as valid casting option
   - Added examples demonstrating same_value casting usage
   - Added version note for same_value support

3. **Updated Tests**:
   - Modified test_conversion_utils.py to expect same_value support
   - Added comprehensive test suite for same_value casting in can_cast
   - Created test cases for identity casts, widening casts, and edge cases
   - Added parameter validation tests

4. **Implementation Details**:
   - Changed multiarraymodule.c to use PyArray_CastingConverterSameValue for can_cast
   - This enables same_value casting support without breaking existing functionality
   - Type hints already supported same_value through _CastingKind

This addresses the main items from issue numpy#29765 regarding extending
same_value casting beyond ndarray.astype() to other NumPy operations.

Fixes: numpy#29765
IndifferentArea pushed a commit to IndifferentArea/numpy that referenced this pull request Dec 7, 2025