
Conversation

@karpinsn
Contributor

The new benchmark measures the performance of using ColoredConsoleTarget to log to a console. The performance fix removes the boxing that occurs in the condition passed to ConsoleRowHighlightingRule: the condition expression is checked and a cached boxed boolean value is returned from ConditionExpression. On the current benchmark this saves ~8% of allocations (from 1.33 MB to 1.22 MB). It doesn't produce a noticeable CPU saving, but it should help with GC overhead.

Before - 1.33 MB

Before image

After - 1.22 MB

After image
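
To illustrate the caching idea described above, here is a minimal sketch (illustrative only; the cached fields live on ConditionExpression in the actual fix, and their names/visibility in NLog may differ):

// Illustrative sketch only - not the actual NLog code.
// Boxing a bool allocates a new object every time; caching two boxed
// instances up front lets every condition evaluation reuse them.
internal static class BoxedBooleans
{
    public static readonly object BoxedTrue = true;   // boxed exactly once
    public static readonly object BoxedFalse = false; // boxed exactly once

    // Convert a bool result to object without allocating per call.
    public static object Box(bool value) => value ? BoxedTrue : BoxedFalse;
}

Any bool condition result can then be surfaced as an object without a new allocation per log event.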

Nik Karpinsky added 2 commits August 20, 2025 13:59
- This PR adds a BenchmarkDotNet (BDN) benchmark project to the solution to make it
  easy to microbenchmark. It also adds the Visual Studio profiler BDN
  integration so we can get profiler traces and optimize code.
- This PR removes boxing that was occurring in
  ColoredConsoleSystemPrinter by evaluating the condition and returning
  a cached boxed bool from ConditionExpression. This reduces allocations
  in the benchmark by ~8%.
@welcome

welcome bot commented Aug 20, 2025

Thanks for opening this pull request!
We will try to review this soon! Please note that pull requests with unit tests are accepted sooner 👼

@coderabbitai

coderabbitai bot commented Aug 20, 2025

Walkthrough

Adds a new NLog.Benchmarks project with a BenchmarkDotNet runner and a ColoredConsoleTarget benchmark that exercises word-highlighting. Updates the solution to include the project and configuration mappings. Adds a new overload in ConditionMethodExpression to accept Func<LogEventInfo, bool> for no-parameter condition methods.

Changes

Cohort / File(s) Summary of Changes
Benchmarks project scaffold
src/NLog.Benchmarks/NLog.Benchmarks.csproj, src/NLog.Benchmarks/Program.cs
Adds a new executable project targeting net462 with BenchmarkDotNet and diagnosers, references ..\NLog\NLog.csproj, and adds a Program.Main that runs benchmarks via BenchmarkSwitcher.
ColoredConsoleTarget benchmark
src/NLog.Benchmarks/ColoredConsoleTargetBenchmark.cs
New BenchmarkDotNet benchmark class: configures ColoredConsoleTarget with layout "${logger} ${message}", a word-highlighting rule (ignore-case "at" → red), redirects Console.Out to an in-memory stream in Setup, logs 1,000 debug messages in the benchmark, and restores output in Cleanup.
Solution integration
src/NLog.sln
Updates solution header to Visual Studio 18 and adds NLog.Benchmarks project entry with GUID and Debug/Release configuration mappings; nests the new project under an existing parent.
Condition method overload
src/NLog/Conditions/ConditionMethodExpression.cs
Adds CreateMethodNoParameters(string, Func<LogEventInfo, bool>) which delegates to the existing object-returning overload by converting bool results to boxed true/false objects.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant User
  participant Program as Program.Main
  participant Switch as BenchmarkSwitcher
  participant Runner as Benchmark Runner
  participant Bench as ColoredConsoleTargetBenchmark
  participant NLog as NLog Logger
  participant Target as ColoredConsoleTarget
  participant Console as Console.Out (Memory)

  User->>Program: start(args)
  Program->>Switch: FromAssembly(...).Run(args)
  Switch->>Runner: Discover & execute benchmarks

  rect rgb(235,245,255)
    note right of Bench: GlobalSetup
    Runner->>Bench: Setup()
    Bench->>Console: Redirect Console.Out -> MemoryStream
    Bench->>NLog: Configure logger, target, rule
  end

  rect rgb(240,255,240)
    note right of Bench: Benchmark iteration
    Runner->>Bench: WordHighlightingTextIgnoreCase()
    Bench->>Console: Clear memory buffer
    loop 1000 times
      Bench->>NLog: Logger.Debug("... at ...")
      NLog->>Target: Apply layout + highlighting
      Target->>Console: Write colored output
    end
  end

  rect rgb(255,245,235)
    note right of Bench: GlobalCleanup
    Runner->>Bench: Cleanup()
    Bench->>Console: Restore original Console.Out
  end
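
A rough sketch of what a benchmark like the one shown in this diagram could look like (class and member names here are illustrative; the PR redirects Console.Out to a MemoryStream-backed StreamWriter, while a StringWriter is used below for brevity):

using System;
using System.IO;
using BenchmarkDotNet.Attributes;
using NLog;
using NLog.Config;
using NLog.Targets;

// Hypothetical sketch of the ColoredConsoleTarget benchmark described above.
public class ColoredConsoleTargetBenchmarkSketch
{
    private TextWriter _originalOut;
    private StringWriter _memoryWriter;
    private Logger _logger;

    [GlobalSetup]
    public void Setup()
    {
        // Redirect Console.Out so the benchmark does not write to the real console.
        _originalOut = Console.Out;
        _memoryWriter = new StringWriter();
        Console.SetOut(_memoryWriter);

        var target = new ColoredConsoleTarget("console") { Layout = "${logger} ${message}" };
        // Highlight the word "at" in red, ignoring case.
        target.WordHighlightingRules.Add(
            new ConsoleWordHighlightingRule("at", ConsoleOutputColor.Red, ConsoleOutputColor.NoChange) { IgnoreCase = true });

        var config = new LoggingConfiguration();
        config.AddRuleForAllLevels(target);
        LogManager.Configuration = config;
        _logger = LogManager.GetLogger("Benchmark");
    }

    [Benchmark]
    public void WordHighlightingTextIgnoreCase()
    {
        for (int i = 0; i < 1000; i++)
            _logger.Debug("Exception at Some.Method in file.cs");
    }

    [GlobalCleanup]
    public void Cleanup()
    {
        // Restore the original console writer and drop the test configuration.
        Console.SetOut(_originalOut);
        _memoryWriter.Dispose();
        LogManager.Configuration = null;
    }
}
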
sequenceDiagram
  autonumber
  participant Rule as Condition Rule
  participant CME as ConditionMethodExpression
  participant Pred as Func<LogEventInfo,bool>
  participant Log as LogEventInfo

  Note over Rule,CME: New overload for no-parameter boolean predicates
  Rule->>CME: CreateMethodNoParameters(name, Pred)
  CME->>Pred: Invoke(LogEventInfo)
  Pred-->>CME: bool
  CME-->>Rule: return boxed true/false (object)
  Rule->>Rule: Use existing object-returning evaluation path
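
A hedged sketch of the delegation pattern this diagram describes: a bool predicate is adapted to the object-returning shape by reusing cached boxed booleans. The adapter class below is purely illustrative; in the PR the adaptation happens inside the new ConditionMethodExpression.CreateMethodNoParameters overload, and the cached boxed values live on ConditionExpression (their visibility may be internal).

using System;
using NLog;

// Illustrative adapter, not actual NLog code.
internal static class ConditionPredicateAdapter
{
    private static readonly object BoxedTrue = true;
    private static readonly object BoxedFalse = false;

    // Adapts a bool predicate to the Func<LogEventInfo, object> shape used by the
    // existing object-returning evaluation path, without boxing per call.
    public static Func<LogEventInfo, object> ToObjectReturning(Func<LogEventInfo, bool> predicate)
        => logEvent => predicate(logEvent) ? BoxedTrue : BoxedFalse;
}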

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

A rabbit hops, the benchmarks hum,
I color “at” in red—what fun! 🥕
A thousand tiny debug beats,
Console tucked in memory seats.
I box true hops and call it done.


📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between e57a2db and 7b2c188.

📒 Files selected for processing (1)
  • src/NLog.Benchmarks/NLog.Benchmarks.csproj (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/NLog.Benchmarks/NLog.Benchmarks.csproj



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
src/NLog/Targets/ColoredConsoleSystemPrinter.cs (1)

104-110: Good allocation win by returning cached boxed booleans

Using ConditionExpression.BoxedTrue/BoxedFalse via static lambdas avoids repeated boxing and the static captures nothing. Nice and targeted fix.

Optional cleanups for consistency/readability:

  • Use the same namespace qualifier for both types (either NLog.Conditions.* or Conditions.*) to avoid mixed styles on the same line.
  • If you prefer shorter lines, consider a using static import for ConditionExpression and then just return BoxedTrue/BoxedFalse.

Example diff to unify the qualifier (purely cosmetic):

- new ConsoleRowHighlightingRule(Conditions.ConditionMethodExpression.CreateMethodNoParameters("level == LogLevel.Fatal", static (evt) => evt.Level == LogLevel.Fatal ? NLog.Conditions.ConditionExpression.BoxedTrue : NLog.Conditions.ConditionExpression.BoxedFalse), ConsoleOutputColor.Red, ConsoleOutputColor.NoChange),
+ new ConsoleRowHighlightingRule(Conditions.ConditionMethodExpression.CreateMethodNoParameters("level == LogLevel.Fatal", static (evt) => evt.Level == LogLevel.Fatal ? Conditions.ConditionExpression.BoxedTrue : Conditions.ConditionExpression.BoxedFalse), ConsoleOutputColor.Red, ConsoleOutputColor.NoChange),
src/NLog.Benchmarks/Program.cs (1)

11-19: Remove unreachable code or guard it with conditional compilation

The explicit return makes the in-process debug runner unreachable, requiring pragma suppression. Wrap the two paths with #if DEBUG to keep both without disabling CS0162.

Suggested refactor:

-            BenchmarkSwitcher.FromAssembly(System.Reflection.Assembly.GetExecutingAssembly()).Run(
-                args,
-                DefaultConfig.Instance
-                    .AddJob(BenchmarkDotNet.Jobs.Job.Default.WithIterationCount(5).WithWarmupCount(1).WithEnvironmentVariables(Microsoft.VSDiagnostics.VSDiagnosticsConfigurations.GetDotNetObjectAllocEnvironmentVariables(1)).AsMutator())
-                    .AddDiagnoser(new Microsoft.VSDiagnostics.DotNetObjectAllocDiagnoser())
-            ); return;
-#pragma warning disable CS0162 // Unreachable code detected
-            BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args, new DebugInProcessConfig());
-#pragma warning restore CS0162 // Unreachable code detected
+#if DEBUG
+            BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly)
+                .Run(args, new DebugInProcessConfig());
+#else
+            BenchmarkSwitcher.FromAssembly(System.Reflection.Assembly.GetExecutingAssembly()).Run(
+                args,
+                DefaultConfig.Instance
+                    .AddJob(BenchmarkDotNet.Jobs.Job.Default
+                        .WithIterationCount(5)
+                        .WithWarmupCount(1)
+                        .WithEnvironmentVariables(Microsoft.VSDiagnostics.VSDiagnosticsConfigurations.GetDotNetObjectAllocEnvironmentVariables(1))
+                        .AsMutator())
+                    .AddDiagnoser(new Microsoft.VSDiagnostics.DotNetObjectAllocDiagnoser())
+            );
+#endif
src/NLog.Benchmarks/ColoredConsoleTargetBenchmark.cs (2)

38-41: Ensure predictable writes to the redirected Console output

Set AutoFlush to true on the StreamWriter so Console writes are flushed promptly to the underlying memory stream. It reduces buffering variability across runs.

-            streamWriter = new StreamWriter(memoryStream);
+            streamWriter = new StreamWriter(memoryStream) { AutoFlush = true };

46-49: Tidy up global state after the benchmark

Consider resetting LogManager.Configuration to avoid leaking configuration to other benchmarks/processes. Flush before restoring Console.Out for a clean hand-off.

         public void Cleanup()
         {
-            Console.SetOut(oldConsoleOutWriter);
-            streamWriter.Dispose();
-            memoryStream.Dispose();
+            streamWriter.Flush();
+            Console.SetOut(oldConsoleOutWriter);
+            streamWriter.Dispose();
+            memoryStream.Dispose();
+            LogManager.Configuration = null;
         }
src/NLog.Benchmarks/NLog.Benchmarks.csproj (1)

13-15: Avoid depending on the UnitTests project; reference NLog directly

Referencing the test project drags test-only dependencies and increases build time. The benchmark only uses NLog types; depend directly on the NLog project instead.

-  <ItemGroup>
-    <ProjectReference Include="..\..\tests\NLog.UnitTests\NLog.UnitTests.csproj" />
-  </ItemGroup>
+  <ItemGroup>
+    <ProjectReference Include="..\NLog\NLog.csproj" />
+  </ItemGroup>
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 0de74a0 and 9e6878a.

📒 Files selected for processing (5)
  • src/NLog.Benchmarks/ColoredConsoleTargetBenchmark.cs (1 hunks)
  • src/NLog.Benchmarks/NLog.Benchmarks.csproj (1 hunks)
  • src/NLog.Benchmarks/Program.cs (1 hunks)
  • src/NLog.sln (4 hunks)
  • src/NLog/Targets/ColoredConsoleSystemPrinter.cs (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
src/NLog.Benchmarks/ColoredConsoleTargetBenchmark.cs (1)
src/NLog/LogManager.cs (1)
  • LogManager (52-411)
src/NLog/Targets/ColoredConsoleSystemPrinter.cs (2)
src/NLog/Targets/ConsoleRowHighlightingRule.cs (2)
  • ConsoleRowHighlightingRule (48-50)
  • ConsoleRowHighlightingRule (58-63)
src/NLog/Targets/ColoredConsoleTarget.cs (2)
  • ConsoleRowHighlightingRule (463-471)
  • ConsoleRowHighlightingRule (473-482)
🔇 Additional comments (3)
src/NLog.Benchmarks/Program.cs (1)

11-16: Config looks solid for allocation-focused runs

The Job mutation and DotNetObjectAllocDiagnoser align with the PR goal to track object allocations. Running from the executing assembly is fine for discovering all benchmarks.

src/NLog.Benchmarks/ColoredConsoleTargetBenchmark.cs (1)

51-59: Benchmark scenario exercises the targeted path

Looping 1000 Debug events with word highlighting and default row rules will hit the DefaultConsoleRowHighlightingRules and validate the boxing fix in a realistic path. Looks good.

src/NLog.sln (1)

84-85: Solution integration looks correct

Project added with build mappings for Debug/Release and nested into the solution. No issues spotted.

Also applies to: 204-207, 239-239

@karpinsn
Contributor Author

I'll fix the unreachable code nit; I'm just waiting for a new fix in our profiling agent that found this. If folks are interested, this talk shows the agent that found this fix: https://www.youtube.com/watch?v=zhXhUMkrJDQ

@snakefoot
Contributor

snakefoot commented Aug 21, 2025

Thank you for this pull-request, very nice find.

Some observations:

  • When the NLog maintainer fails to do it correctly, there is probably a pitfall to be removed. Maybe one should create a method that supports a boolean condition and automatically does the mapping to non-boxing.
  • Really like the idea of having a benchmark project that exercises the different features of NLog. But for the Console-Target, I prefer "mocking": instead of modifying the global console, the Console-Target should get a constructor where one can provide the output stream.
  • Saw the YouTube video and it is almost magical. Amazing how it was able to recognize that multiple expression-compiles could be merged into a single expression-compile. But yes, the code-style of the Copilot changes is a little weird and fails to follow the project standards.
  • Sad to see that the recently added unit-test Enqueue_WhenGrowBehaviourAndHighlyConcurrent_GrowOnce is unstable and causing builds to fail. Will probably revert the unit-test and wait for the original author to fix it (AsyncTargetWrapper - LogEventDropped and EventQueueGrow events, #5883).

@snakefoot
Contributor

snakefoot commented Aug 21, 2025

Is it possible for you to make Copilot rewrite this pull-request to match #5968?

I like the goal of having an official NLog-Benchmark-project depending on Benchmark.net, which makes it easy to profile common usage-patterns. But I am not sure it should be mixed into this pull-request. Is it possible for you to update this pull-request to only include the code-fix?

@karpinsn
Contributor Author

Yep, happy to reformat to #5968. Would you like me to split this PR into 2 separate PRs, one for the benchmarks and one for the fix? I did it as 2 separate commits with the first being the benchmark addition and the second being the perf fix.

Saw the YouTube video and it is almost magical. Amazing how it was able to recognize that multiple expression-compiles could be merged into a single expression-compile. But yes, the code-style of the Copilot changes is a little weird and fails to follow the project standards.

Yeah, it's pretty incredible at running a whole bunch of different experiments really quickly and finding clever optimizations; I can't wait to release it! It doesn't always hit the standards right and sometimes creates or extends public interfaces, breaking things, so you need to keep it in line and touch things up afterwards. I tried to do that with this PR; feel free to point out anything you are not happy with, and I'm more than happy to change stuff. Just trying to find successes with the project to make sure it really helps folks and does some good in the community as well :)

@karpinsn
Contributor Author

But for the Console-Target, I prefer "mocking": instead of modifying the global console, the Console-Target should get a constructor where one can provide the output stream.

Happy to change this around; I was just trying to leave existing APIs as-is, and this seemed to be the best way to do that. I can add another constructor to ColoredConsoleTarget that takes in two TextWriter objects (one for error and one for out), then change ColoredConsoleTarget.GetOutput from static to an instance method that returns either the passed-in objects or the normal console ones. Is this the sort of change you are looking for? Are there any requirements on changing public classes I should be aware of?
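
Roughly what that proposal could look like, as a sketch of the suggested shape only; this is not the existing ColoredConsoleTarget API, and all member names below are hypothetical:

using System;
using System.IO;

// Hypothetical sketch of the proposed two-writer constructor, not actual NLog code.
public class ColoredConsoleTargetWithWritersSketch
{
    private readonly TextWriter _stdOut;
    private readonly TextWriter _stdErr;

    // Stand-in for the target option that selects the error stream.
    public bool StdErr { get; set; }

    // Default behavior: fall back to the real console streams.
    public ColoredConsoleTargetWithWritersSketch() { }

    // Proposed overload: benchmarks/tests inject their own writers.
    public ColoredConsoleTargetWithWritersSketch(TextWriter stdOut, TextWriter stdErr)
    {
        _stdOut = stdOut;
        _stdErr = stdErr;
    }

    // GetOutput becomes an instance method: passed-in writers if provided, else the console.
    private TextWriter GetOutput() => StdErr ? (_stdErr ?? Console.Error) : (_stdOut ?? Console.Out);
}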

@snakefoot
Contributor

Yep, happy to reformat to #5968. Would you like me to split this PR into 2 separate PRs, one for the benchmarks and one for the fix?

After further consideration, just keep this in a single pull-request (code-fix + new benchmark-project). But please update the code-fix to match #5968.

I can add another constructor to ColoredConsoleTarget that takes in two TextWriter objects

Maybe it is easier that you just commit the NLog.Benchmark-project in its current state. Then I can adjust it to my liking (I think it will become an internal constructor, only visible to the unit-test projects, that takes a method-delegate returning the TextWriter, where the default delegate returns StdOut / StdErr depending on the target configuration).
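
A minimal sketch of that delegate-based idea, assuming an internal constructor exposed to the test projects via InternalsVisibleTo (all names are hypothetical, not actual NLog code):

using System;
using System.IO;

// Hypothetical sketch of the delegate-based design described above.
public class ColoredConsoleTargetWithResolverSketch
{
    private readonly Func<TextWriter> _consoleWriterResolver;

    // Stand-in for the target option that selects the error stream.
    public bool StdErr { get; set; }

    // Public constructor keeps today's behavior: resolve the real console streams.
    public ColoredConsoleTargetWithResolverSketch()
    {
        _consoleWriterResolver = () => StdErr ? Console.Error : Console.Out;
    }

    // Internal constructor (visible to unit-test projects via InternalsVisibleTo)
    // lets tests and benchmarks supply their own TextWriter without Console.SetOut.
    internal ColoredConsoleTargetWithResolverSketch(Func<TextWriter> consoleWriterResolver)
    {
        _consoleWriterResolver = consoleWriterResolver;
    }

    private TextWriter GetOutput() => _consoleWriterResolver();
}

The advantage over the two-writer constructor above is that the default path stays identical to today's behavior and the injection point never becomes part of the public API.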

@snakefoot
Contributor

snakefoot commented Aug 22, 2025

I tried to do that with this PR; feel free to point out anything you are not happy with, and I'm more than happy to change stuff. Just trying to find successes with the project to make sure it really helps folks and does some good in the community as well :)

Maybe the "Profiler Agent" should rely on a "Code Improvement"-agent that has been prompted differently (like the "Code Review"-agent follows the project-guidelines-md). Instead of making the surgical code-fix directly (based on the profiler red-path discovery), give the task of implementing the surgical code-fix to a "Code Improvement"-agent. The "Code Improvement"-agent can then consider how the initial change would affect the overall code-base, whether it matches the project code-style, and whether other caller-locations should also be considered.

But maybe that is only for enterprise-customers, while free-riders just have to do their own code-review of the surgical code-fix.

- This commit addresses PR feedback, moving the boxing expression into
  ConditionMethodExpression. This should help other areas in the
  code that use no-param boolean funcs for ConditionMethodExpressions.
- This also updates the benchmarking package to use the latest
  BenchmarkDotNetDiagnosers package for VS. It leaves the Program.cs
  benchmark launcher as-is and uses a non-intrusive global config when
  run through the VS Profiler Agent.
@karpinsn
Contributor Author

Maybe the "Profiler Agent" should rely on a "Code Improvement"-agent

It's a really interesting thought. I looked at the change myself and touched things up, as I don't want to be inundating maintainers with bad AI contributions, and I still missed bringing this up a level. It's been fascinating to see it try different perf experiments and what it comes up with. I'm guessing we could have a "Code Improvement"-agent do a few different versions of a performance fix so the dev can choose which one they want (or which is the closest).

But maybe that is only for enterprise-customers, while free-riders just have to do their own code-review of the surgical code-fix.

Well, the Profiler Agent will be in all SKUs of Visual Studio, and if we do a "Code Improvement"-agent, hopefully that one can be as well.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
src/NLog.Benchmarks/NLog.Benchmarks.csproj (3)

4-6: Benchmark project: consider multi-targeting, mark as non-packable, and force x64 for stable results

  • Benchmarks are often run on multiple runtimes. Recommend multi-targeting to include a modern .NET runtime (e.g., net8.0) alongside net462.
  • Mark the project as non-packable to avoid accidental packaging.
  • Force x64 for more stable/realistic BDN runs and to avoid 32-bit JIT noise on some dev machines.

Apply this diff:

-  <PropertyGroup>
-    <TargetFramework>net462</TargetFramework>
-    <OutputType>Exe</OutputType>
-  </PropertyGroup>
+  <PropertyGroup>
+    <!-- Keep net462, but also allow running on a modern runtime -->
+    <TargetFrameworks>net462;net8.0</TargetFrameworks>
+    <OutputType>Exe</OutputType>
+    <IsPackable>false</IsPackable>
+    <PlatformTarget>x64</PlatformTarget>
+    <!-- Optional: keep source simple and modern -->
+    <LangVersion>latest</LangVersion>
+  </PropertyGroup>

9-11: Diagnostics package choice: prefer BDN’s platform-specific diagnosers and gate them by OS

The Visual Studio DiagnosticsHub diagnoser package tends to be Windows/VS-specific and heavy. For cross-platform CI and fewer restore issues, prefer BenchmarkDotNet.Diagnostics.Windows (Windows ETW) or keep diagnosers optional/conditional.

Option A (Windows-only diagnoser):

   <ItemGroup>
     <PackageReference Include="BenchmarkDotNet" Version="0.15.2" />
-    <PackageReference Include="Microsoft.VisualStudio.DiagnosticsHub.BenchmarkDotNetDiagnosers" Version="18.0.36422.2" />
   </ItemGroup>
+  <ItemGroup Condition="'$(OS)' == 'Windows_NT'">
+    <PackageReference Include="BenchmarkDotNet.Diagnostics.Windows" Version="0.15.2" />
+  </ItemGroup>

Please verify:

  • Whether you actually need DiagnosticsHub-specific features over BDN’s built-in/Windows diagnosers.
  • That 0.15.2 is the intended BDN version across the repo (pinning a newer/older patch may be preferable for consistency).

14-15: ProjectReference is fine as-is; consider excluding this project from default CI builds

The ProjectReference is appropriate for local benchmarking. To keep CI lean and avoid failing pipelines due to benchmarks, consider excluding the benchmarks project from the default solution build in CI (solution-level config) rather than changing this csproj.

If helpful, I can provide a separate PR to:

  • Add a solution configuration where NLog.Benchmarks is not built by default on CI.
  • Document how to run benchmarks locally (dotnet run -c Release -f net8.0).
src/NLog/Conditions/ConditionMethodExpression.cs (2)

90-93: Document the non-boxing behavior for discoverability

Add an XML summary to call out that this overload returns cached boxed booleans to avoid allocations. It helps future contributors choose this overload for boolean predicates.

Proposed doc (add above the method):

/// <summary>
/// Creates a condition method (no parameters) backed by a boolean predicate.
/// The result is converted to a cached boxed boolean (BoxedTrue/BoxedFalse) to avoid per-evaluation boxing.
/// </summary>

85-99: Ensure the parser actually invokes the bool-overload to avoid boxing

Verification shows that ConditionParser (at src/NLog/Conditions/ConditionParser.cs:151–154) only calls

var method = _configurationItemFactory.ConditionMethodFactory.TryCreateInstanceWithNoParameters(functionName);
if (method != null)
    return ConditionMethodExpression.CreateMethodNoParameters(functionName, method);

and has no path for Func<LogEventInfo,bool>, so all parser-resolved methods still box their bool result. Meanwhile, the new bool overload is only used by hand-written callers (e.g. in SetupLoadConfigurationExtensions.cs and ColoredConsolePrinter*).

To make the zero-parameter bool overload effective for parser-created conditions, pick one integration approach:

  • Factory-first
    Extend the MethodFactory API to preferentially return a Func<LogEventInfo,bool> when available:
    // In MethodFactory
    public Func<LogEventInfo,bool>? TryCreateInstanceWithNoParametersBool(string methodName) {}
    
    // In ConditionParser.CreateMethodExpression:
    var methodBool = _configurationItemFactory.ConditionMethodFactory.TryCreateInstanceWithNoParametersBool(functionName);
    if (methodBool != null)
        return ConditionMethodExpression.CreateMethodNoParameters(functionName, methodBool);
    
    // Fallback to object-returning:
    var methodObj  = _configurationItemFactory.ConditionMethodFactory.TryCreateInstanceWithNoParameters(functionName);
    if (methodObj != null)
        return ConditionMethodExpression.CreateMethodNoParameters(functionName, methodObj);
  • Parser adapter
    If the existing factory can indicate the target method’s return type (e.g. via metadata on the delegate), detect boolean-returning methods in ConditionParser and call the new bool overload directly.

Please adjust ConditionParser (around line 151) to route boolean delegates into CreateMethodNoParameters(string, Func<LogEventInfo,bool>).

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 9e6878a and 3433668.

📒 Files selected for processing (3)
  • src/NLog.Benchmarks/NLog.Benchmarks.csproj (1 hunks)
  • src/NLog.Benchmarks/Program.cs (1 hunks)
  • src/NLog/Conditions/ConditionMethodExpression.cs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/NLog.Benchmarks/Program.cs
🧰 Additional context used
🧬 Code graph analysis (1)
src/NLog/Conditions/ConditionMethodExpression.cs (2)
src/NLog/Conditions/ConditionParser.cs (2)
  • ConditionMethodExpression (113-145)
  • ConditionMethodExpression (147-183)
src/NLog/Config/MethodFactory.cs (5)
  • Func (352-361)
  • Func (363-372)
  • Func (374-383)
  • Func (385-394)
  • Func (396-443)

@karpinsn
Contributor Author

I fixed up the code to align with #5968, let me know if there are any other changes you would like.

I'm going to try and see if I can find other fixes. For future PRs, do you want each benchmark separate from the fix, or the new benchmark and fix in the same PR? I typically think of benchmarks as unit tests for perf, which is why I include them with the fix. Also, any areas / APIs in particular you want me to have the Profiler Agent try and review?

@snakefoot
Contributor

snakefoot commented Aug 22, 2025

For future PRs, do you want each benchmark separate from the fix, or the new benchmark and fix in the same PR? I typically think of benchmarks as unit tests for perf, which is why I include them with the fix.

I'm happy with the benchmark being added together with the code-fix (I just needed to adjust my world view, since few NLog contributors focus on isolated benchmarks, and those who do create benchmarks tend to make them non-generic and closely matched to their own local environment/config). I was thinking of having a dedicated benchmark for the different NLog features / NLog use-cases.

Also, any areas / APIs in particular you want me to have the Profiler Agent try and review?

I was thinking that one should have at least one benchmark for each NLog-target / NLog-Target-wrapper / NLog-Layout / NLog-LayoutRenderer. But maybe also a benchmark for the common NLog Logger use-cases (a rough sketch follows the list):

  • NLog Logger where LogLevel is below minimum-loglevel (no-op)
  • NLog Logger where LogLevel is below minimum-loglevel, and includes structured properties (no-op)
  • NLog Logger where LogLevel is below minimum-loglevel, and includes positional parameters (no-op)
  • NLog Logger where LogLevel matches a single target
  • NLog Logger where LogLevel matches a single target and includes structured properties.
  • NLog Logger where LogLevel matches a single target and includes positional parameters.
  • NLog Logger where LogLevel matches two targets

@snakefoot
Contributor

snakefoot commented Aug 22, 2025

I think you should mark the NLog.Benchmark csproj as a "test" project; then Sonar will ignore the missing code-coverage:

<SonarQubeTestProject>true</SonarQubeTestProject>
<IsPackable>false</IsPackable>

https://docs.sonarsource.com/sonarqube-server/10.8/analyzing-source-code/dotnet-environments/specify-test-project-analysis/

@karpinsn
Contributor Author

Looks like most of the other projects have:
<SonarQubeExclude>true</SonarQubeExclude>
Preference between this and SonarQubeTestProject?

@snakefoot
Contributor

snakefoot commented Aug 22, 2025

Preference between this and SonarQubeTestProject?

I prefer <SonarQubeTestProject> since I usually like the Sonar code-style nitpicks; I don't want to disable Sonar completely.

But I guess at some point then dotnet-analyzers / dotnet-coverage will take over everything :)

@snakefoot snakefoot enabled auto-merge (squash) August 22, 2025 19:44
@snakefoot snakefoot disabled auto-merge August 22, 2025 19:45
@snakefoot snakefoot enabled auto-merge (squash) August 22, 2025 19:45
@snakefoot
Contributor

snakefoot commented Aug 22, 2025

@karpinsn Just curious. Why did you choose net462 for the benchmark-project instead of net8 / net9 ? I'm currently treating .NET Framework as mostly legacy. Is it because the JIT-compiler is not so good with net462, while the JIT-compiler in net8 / net9 can hide/fix less optimal code? (Along with more aggressive GC because of DATAS)

@karpinsn
Contributor Author

@snakefoot I based it off the NLog.RegEx.Tests project, since ColoredConsoleTargetTests is what I used to have Copilot generate the benchmark. I saw you target both net462 and net8.0, so I opted for the minimum version so that Copilot won't generate anything that is net8.0-specific. I can specify multiple frameworks; BenchmarkDotNet supports running across versions so you can see the performance enhancement. Want me to do that here or generate a new PR for that?
