
Conversation

@jchen351 (Contributor) commented on Oct 6, 2023

Description

Enable C++20 on Linux.
Current blocking issue:

  1. Eigen uses the `[=]` lambda capture, whose implicit capture of `this` is deprecated in C++20 (see the sketch below).
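
C++20 deprecates the implicit capture of `this` that `[=]` performs inside member functions; here is a minimal sketch of the warning and the usual fix (the class and member names are illustrative, not Eigen's actual code):

```cpp
struct Kernel {
  int scale = 2;

  // Deprecated in C++20: '[=]' implicitly captures 'this' because 'scale'
  // is a data member, so the compiler warns under -std=c++20.
  auto make_deprecated() {
    return [=](int x) { return x * scale; };
  }

  // Fix: spell out the 'this' capture (this syntax is valid since C++20).
  auto make_fixed() {
    return [=, this](int x) { return x * scale; };
  }
};

int main() {
  return Kernel{}.make_fixed()(3) == 6 ? 0 : 1;
}
```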

Motivation and Context

We want the latest and greatest features from C++20.

depends on

@snnn (Contributor) commented on Oct 6, 2023

We cannot do it for the Linux CUDA build yet, because the compiler we use there is GCC 8, which is too old.

@wschin (Contributor) commented on Oct 6, 2023

Just out of curiosity: why do you need C++20? Would it break the onnxruntime build in my old conda environment?

@snnn (Contributor) commented on Oct 6, 2023

For your second question, no. A new C++ standard consists of two things:

  1. Core language features like constexpr/constinit, which do not need runtime support.
  2. New std header files and functions, which do not need a new runtime library.

The first one has no impact on compatibility. The second one does have an impact on non-Windows/non-Linux systems like macOS/iOS; we can avoid the issue by not using headers that the target system does not support.
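
To make the split concrete, here is an illustrative sketch (not onnxruntime code): a C++20 core language feature such as a `concept` is resolved entirely at compile time and asks nothing of the target's runtime, whereas a new library facility only works if the target's standard library actually ships and supports the corresponding header.

```cpp
#include <cstdio>

// Language feature (C++20 concepts): purely a compile-time constraint,
// no runtime library support required on the target system.
template <typename T>
concept Addable = requires(T a, T b) { a + b; };

template <Addable T>
T sum(T a, T b) { return a + b; }

int main() {
  std::printf("%d\n", sum(1, 2));
  // A new library feature, by contrast, would also need the target's
  // libstdc++/libc++ to provide the new header and its implementation.
  return 0;
}
```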

@snnn (Contributor) commented on May 24, 2024

Wait for this: #20786. That PR will update all of our GCC toolchains to >= 11; right now we still have GCC 8.

@snnn (Contributor) commented on Jun 3, 2024

The GCC version is updated.

@snnn (Contributor) commented on Jun 3, 2024

> Just out of curiosity: why do you need C++20? Would it break the onnxruntime build in my old conda environment?

We can use the new C++ features to write better code, and it shouldn't impact our compatibility at runtime. The language features are listed in the "C++20 Language Features" section of https://gcc.gnu.org/projects/cxx-status.html.
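
One example of how such a feature can tighten the code (an illustrative sketch with assumed names, not an onnxruntime struct): C++20 designated initializers make aggregate setup self-documenting at zero runtime cost.

```cpp
// Hypothetical options struct, used only to illustrate the feature.
struct RunConfig {
  int num_threads = 1;
  bool enable_profiling = false;
  int log_level = 2;
};

int main() {
  // Pre-C++20 style: positional, easy to mix the fields up.
  RunConfig legacy{4, false, 1};

  // C++20 designated initializers: fields are named at the call site,
  // and omitted fields keep their defaults.
  RunConfig opts{.num_threads = 4, .log_level = 1};

  return (legacy.num_threads == opts.num_threads &&
          legacy.log_level == opts.log_level) ? 0 : 1;
}
```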

```cpp
template <typename InputType, typename ThresholdType, typename OutputType>
class TreeAggregatorMax : public TreeAggregator<InputType, ThresholdType, OutputType> {
 public:
  TreeAggregatorMax<InputType, ThresholdType, OutputType>(size_t n_trees,
```
@jchen351 (Contributor, Author) commented:

All the other similar classes, like TreeAggregatorMin and TreeAggregatorSum, do not include the type qualifier.
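
For context on why the qualifier has to go under C++20: a simple-template-id (the class name followed by its template argument list) is no longer a valid declarator-id for a constructor, so the declaration must use the bare injected-class-name. A minimal, self-contained sketch with stand-in names:

```cpp
#include <cstddef>

template <typename T>
struct Base {};  // stand-in for the aggregator base template

template <typename T>
struct Derived : Base<T> {
  // Diagnosed in C++20: constructor declared with a template argument list.
  // Derived<T>(std::size_t n);

  // Accepted: the injected-class-name alone, as TreeAggregatorMin and
  // TreeAggregatorSum already do.
  explicit Derived(std::size_t n) : n_(n) {}

  std::size_t n_;
};

int main() {
  Derived<int> d{3};
  return d.n_ == 3 ? 0 : 1;
}
```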

@jchen351 (Contributor, Author) commented:

depends on #21071

jchen351 added 2 commits June 19, 2024 11:18
```
…nux_c++20

# Conflicts:
#	onnxruntime/contrib_ops/cuda/math/cufft_plan_cache.h
#	orttraining/orttraining/training_ops/cuda/nn/conv_shared.h
#	orttraining/orttraining/training_ops/rocm/nn/conv_grad.cc
```
@fs-eire mentioned this pull request on Aug 30, 2024
```cpp
class OrtMutex {
#if defined(__clang__) && __cplusplus >= 202002L
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-pragma"
```
Review comment from a Contributor:

I think this can be fixed - see this patch: e34c91c

```cpp
int n = batch_size * sequence_length;
concurrency::ThreadPool::TryBatchParallelFor(
#if __cplusplus >= 202002L
    context->GetOperatorThreadPool(), n, [=, this, &failed](ptrdiff_t index) {
```
Review comment from a Contributor:

See this Core Guideline:
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#f54-when-writing-a-lambda-that-captures-this-or-any-class-data-member-dont-use--default-capture

I think they have a point. At the very least, we should consider how to make this code easier to understand.
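
A minimal illustration of what the guideline prefers (illustrative names, not the actual kernel): list every captured variable and `this` explicitly, so the lambda's dependency set is visible at the call site instead of being hidden behind a default capture.

```cpp
#include <cstdio>

struct Op {
  int hidden_size = 8;

  void Run(int n, const float* data) {
    // Default capture hides what the lambda actually touches:
    //   auto body = [=, this](int i) { ... };
    // Explicit capture makes every dependency reviewable:
    auto body = [this, data](int i) {
      std::printf("%d %f %d\n", i, static_cast<double>(data[i]), hidden_size);
    };
    for (int i = 0; i < n; ++i) body(i);
  }
};

int main() {
  float buf[3] = {1.0f, 2.0f, 3.0f};
  Op{}.Run(3, buf);
  return 0;
}
```

This is the same direction the explicit capture lists shown later in this thread take for the embedding kernel.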

@snnn (Contributor) commented on Oct 2, 2024

Now we are using NDK 27?

```
# Conflicts:
#	include/onnxruntime/core/platform/ort_mutex.h
#	onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc
#	onnxruntime/core/session/inference_session.cc
```
@github-actions bot left a comment:

You can commit the suggested changes from lintrunner.

Comment on lines 1219 to 1221:

```diff
  NodeComputeInfo compute_info;
- compute_info.create_state_func = [=](ComputeContext* context, FunctionState* state) {
+ compute_info.create_state_func = [=,this](ComputeContext* context, FunctionState* state) {
  std::unique_ptr<MIGraphXFuncState> p = std::make_unique<MIGraphXFuncState>();
```

Review comment from a Contributor, with a suggested change (it adds the space after the comma in the capture list):

```cpp
NodeComputeInfo compute_info;
compute_info.create_state_func = [=, this](ComputeContext* context, FunctionState* state) {
  std::unique_ptr<MIGraphXFuncState> p = std::make_unique<MIGraphXFuncState>();
```

@github-actions bot left a comment:

You can commit the suggested changes from lintrunner.

jchen351 added 2 commits April 1, 2025 18:03
```cpp
    [input_ids_data,
     word_embedding_length,
     sequence_length,
     position_ids_data,
     broadcast_position_ids,
     position_embedding_data,
     word_embedding_data,
     segment_ids_data,
     segment_embedding_data,
     position_embedding_length,
     segment_embedding_length,
     output_data,
     embedding_sum_data,
     gamma_data,
     beta_data,
     epsilon_value,
     hidden_size,
     &failed](ptrdiff_t index) {
```
@github-actions bot left a comment:

You can commit the suggested changes from lintrunner.

Comment on lines +96 to +115:

```cpp
    context->GetOperatorThreadPool(), n,
    [input_ids_data,
     word_embedding_length,
     sequence_length,
     position_ids_data,
     broadcast_position_ids,
     position_embedding_data,
     word_embedding_data,
     segment_ids_data,
     segment_embedding_data,
     position_embedding_length,
     segment_embedding_length,
     output_data,
     embedding_sum_data,
     gamma_data,
     beta_data,
     epsilon_value,
     hidden_size,
     &failed](ptrdiff_t index) {
#else
```

Review comment from a Contributor, with lintrunner's suggested change:

```cpp
    context->GetOperatorThreadPool(), n,
    [input_ids_data,
     word_embedding_length,
     sequence_length,
     position_ids_data,
     broadcast_position_ids,
     position_embedding_data,
     word_embedding_data,
     segment_ids_data,
     segment_embedding_data,
     position_embedding_length,
     segment_embedding_length,
     output_data,
     embedding_sum_data,
     gamma_data,
     beta_data,
     epsilon_value,
     hidden_size,
     &failed](ptrdiff_t index) {
#else
```

@snnn closed this on Jul 3, 2025