
MCP server + API Keys + User/Groups on Studio#3481

Merged
lvca merged 21 commits into main from mcp-server
Feb 19, 2026

Conversation


@lvca lvca commented Feb 18, 2026

I started with the MCP server and ended up providing some missing pieces in Studio (managing users and groups). The UX can be improved, but it's quite functional.

The MCP server works pretty well; I've tested it with Claude Desktop from my computer.

The MCP server has zero dependencies and only a few KB of code, so I merged it into the server. It's disabled by default.

@lvca lvca self-assigned this Feb 18, 2026
@lvca lvca added this to the 26.3.1 milestone Feb 18, 2026
@gemini-code-assist

Summary of Changes

Hello @lvca, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the ArcadeDB server by integrating a new Model Context Protocol (MCP) server, alongside comprehensive API key and user/group management capabilities. The changes enable programmatic interaction with the server via API tokens and provide a more robust security framework, with corresponding updates to the ArcadeDB Studio for easier administration.

Highlights

  • MCP Server Integration: Introduced a new Model Context Protocol (MCP) server, which is disabled by default, to enable programmatic interaction with ArcadeDB from LLM clients like Claude Desktop.
  • API Key Management: Added comprehensive API token management capabilities, including endpoints for creating, listing, and deleting API tokens, along with support for API token-based authentication.
  • User and Group Management in Studio: Implemented new HTTP handlers and corresponding UI in ArcadeDB Studio for managing server users and security groups, allowing for granular control over database access and permissions.
  • Enhanced Security Framework: Updated the core server security to support API tokens, improved the merging logic for wildcard and database-specific group configurations, and added permission checks for MCP tools.
  • MCP Tools and Configuration: Provided several built-in MCP tools for common operations like listing databases, retrieving schema, executing queries and commands, and checking server status, all configurable via a new MCP configuration mechanism.
  • Studio UI Updates: Integrated the new security management features (users, groups, API tokens) and MCP configuration into the ArcadeDB Studio interface for easier administration.


Changelog
  • server/src/main/java/com/arcadedb/server/ArcadeDBServer.java
    • Integrated MCP configuration and exposed it via a getter.
  • server/src/main/java/com/arcadedb/server/http/HttpServer.java
    • Registered new HTTP handlers for API tokens, users, groups, and MCP endpoints.
  • server/src/main/java/com/arcadedb/server/http/handler/AbstractServerHttpHandler.java
    • Updated bearer token authentication to support API tokens.
  • server/src/main/java/com/arcadedb/server/http/handler/DeleteApiTokenHandler.java
    • Added API endpoint for deleting API tokens.
  • server/src/main/java/com/arcadedb/server/http/handler/DeleteGroupHandler.java
    • Added API endpoint for deleting user groups.
  • server/src/main/java/com/arcadedb/server/http/handler/DeleteUserHandler.java
    • Added API endpoint for deleting users.
  • server/src/main/java/com/arcadedb/server/http/handler/GetApiTokensHandler.java
    • Added API endpoint for listing API tokens.
  • server/src/main/java/com/arcadedb/server/http/handler/GetGroupsHandler.java
    • Added API endpoint for listing user groups.
  • server/src/main/java/com/arcadedb/server/http/handler/GetUsersHandler.java
    • Added API endpoint for listing users.
  • server/src/main/java/com/arcadedb/server/http/handler/PostApiTokenHandler.java
    • Added API endpoint for creating API tokens.
  • server/src/main/java/com/arcadedb/server/http/handler/PostGroupHandler.java
    • Added API endpoint for creating or updating user groups.
  • server/src/main/java/com/arcadedb/server/http/handler/PostUserHandler.java
    • Added API endpoint for creating new users.
  • server/src/main/java/com/arcadedb/server/http/handler/PutUserHandler.java
    • Added API endpoint for updating existing users.
  • server/src/main/java/com/arcadedb/server/mcp/MCPConfigHandler.java
    • Added API endpoint for managing MCP server configuration.
  • server/src/main/java/com/arcadedb/server/mcp/MCPConfiguration.java
    • Implemented configuration management for the MCP server.
  • server/src/main/java/com/arcadedb/server/mcp/MCPHttpHandler.java
    • Implemented the core MCP protocol handler and tool dispatcher.
  • server/src/main/java/com/arcadedb/server/mcp/tools/ExecuteCommandTool.java
    • Added an MCP tool for executing database commands with permission checks.
  • server/src/main/java/com/arcadedb/server/mcp/tools/GetSchemaTool.java
    • Added an MCP tool for retrieving database schema information.
  • server/src/main/java/com/arcadedb/server/mcp/tools/ListDatabasesTool.java
    • Added an MCP tool for listing available databases.
  • server/src/main/java/com/arcadedb/server/mcp/tools/QueryTool.java
    • Added an MCP tool for executing read-only database queries.
  • server/src/main/java/com/arcadedb/server/mcp/tools/ServerStatusTool.java
    • Added an MCP tool for retrieving ArcadeDB server status.
  • server/src/main/java/com/arcadedb/server/security/ApiTokenConfiguration.java
    • Implemented API token storage, creation, and validation.
  • server/src/main/java/com/arcadedb/server/security/ServerSecurity.java
    • Integrated API token management and enhanced group configuration merging.
  • server/src/main/java/com/arcadedb/server/security/ServerSecurityUser.java
    • Extended user security context to support synthetic group configurations from API tokens.
  • server/src/test/java/com/arcadedb/server/ApiTokenAuthenticationIT.java
    • Added integration tests for API token functionality.
  • server/src/test/java/com/arcadedb/server/GroupManagementIT.java
    • Added integration tests for user group management.
  • server/src/test/java/com/arcadedb/server/mcp/MCPConfigurationTest.java
    • Added unit tests for MCP server configuration.
  • server/src/test/java/com/arcadedb/server/mcp/MCPPermissionsTest.java
    • Added unit tests for MCP command permission detection.
  • server/src/test/java/com/arcadedb/server/mcp/MCPServerPluginTest.java
    • Added integration tests for the MCP server.
  • server/src/test/java/com/arcadedb/server/security/ApiTokenConfigurationTest.java
    • Added unit tests for API token configuration.
  • studio/package-lock.json
    • Updated dependency lock file for ajv.
  • studio/package.json
    • Overrode ajv dependency version.
  • studio/src/main/resources/static/index.html
    • Updated HTML to include new security script, tab, and content.
  • studio/src/main/resources/static/js/studio-database.js
    • Modified global variables and login logic to support new authentication context.
  • studio/src/main/resources/static/js/studio-security.js
    • Added new JavaScript for Studio UI to manage users, groups, and API tokens.
  • studio/src/main/resources/static/js/studio-server.js
    • Added JavaScript for Studio UI to manage MCP server configuration.
  • studio/src/main/resources/static/js/studio-utils.js
    • Adjusted error notification message handling.
  • studio/src/main/resources/static/security.html
    • Added new HTML for the Studio security management tab.
  • studio/src/main/resources/static/server.html
    • Updated HTML to include a new MCP configuration tab.
Activity
  • The author, lvca, initiated the development with the MCP server and subsequently integrated missing user and group management features into Studio.
  • The MCP server was successfully tested with Claude desktop, demonstrating its functionality.
  • The MCP server was merged into the main server due to its minimal dependencies and small codebase, and is disabled by default.


mergify bot commented Feb 18, 2026

🧪 CI Insights

Here's what we observed from your CI run for 6ac9b91.

🟢 All jobs passed!

But CI Insights is watching 👀


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces significant new functionality, including an MCP (Model Context Protocol) server for LLM integration, a robust API key management system, and a comprehensive security management UI in Studio for users, groups, and API tokens. The changes are well-structured, with clear separation of concerns between backend APIs, security logic, and the new MCP components. The inclusion of extensive tests for the new features is commendable. My review focuses on improving the robustness of the MCP command permission checks and fixing a minor bug in the new Studio UI.

Comment on lines +149 to +170
private static OperationType detectCypherOperation(final String upper) {
// Schema operations in Cypher
if (upper.contains("CREATE INDEX") || upper.contains("CREATE CONSTRAINT") || upper.contains("DROP INDEX")
|| upper.contains("DROP CONSTRAINT"))
return OperationType.SCHEMA;

// Check for write clauses
final boolean hasCreate = upper.contains("CREATE") && !upper.contains("CREATE INDEX") && !upper.contains("CREATE CONSTRAINT");
final boolean hasMerge = upper.contains("MERGE");
final boolean hasSet = upper.contains(" SET ");
final boolean hasDelete = upper.contains("DELETE");
final boolean hasRemove = upper.contains("REMOVE");

if (hasDelete)
return OperationType.DELETE;
if (hasSet || hasMerge)
return OperationType.UPDATE;
if (hasCreate)
return OperationType.INSERT;

return OperationType.UNKNOWN;
}

high

The current implementation for detecting write clauses in Cypher queries uses String.contains(), which can lead to incorrect query classification. For example, a read-only query like MATCH (n) WHERE n.name = 'CREATE new things' RETURN n would be misclassified as an INSERT operation because it contains the string "CREATE". This could cause valid read queries to be blocked if insert operations are disabled.

To make the detection more robust, you should match against whole keywords, for example by using regular expressions with word boundaries (\bKEYWORD\b). This will prevent matching substrings within string literals or identifiers.

  private static OperationType detectCypherOperation(final String upper) {
    // Schema operations in Cypher
    if (upper.matches(".*\\b(CREATE|DROP)\\s+(INDEX|CONSTRAINT)\\b.*"))
      return OperationType.SCHEMA;

    // Check for write clauses using regex for whole words to avoid matching substrings in literals
    if (upper.matches(".*\\b(DELETE|REMOVE)\\b.*"))
      return OperationType.DELETE;
    if (upper.matches(".*\\b(SET|MERGE)\\b.*"))
      return OperationType.UPDATE;
    if (upper.matches(".*\\bCREATE\\b.*"))
      return OperationType.INSERT;

    return OperationType.UNKNOWN;
  }

Comment on lines +912 to +914
' <a href="#" onclick="removeMCPUser(\'' +
escapeHtml(users[i]) +
"')\" class=\"text-white ms-1\"><i class=\"fa fa-times\"></i></a></span>";

high

There appears to be a syntax error in the string concatenation for the onclick handler. An extra escaped double quote \" is present after the function call parenthesis, which will likely break the removeMCPUser function call by being rendered as a literal quote in the HTML.

Suggested change
' <a href="#" onclick="removeMCPUser(\'' +
escapeHtml(users[i]) +
"')\" class=\"text-white ms-1\"><i class=\"fa fa-times\"></i></a></span>";
' <a href="#" onclick="removeMCPUser(\'' +
escapeHtml(users[i]) +
'\')" class="text-white ms-1"><i class="fa fa-times"></i></a></span>';

Comment on lines +38 to +42
public ApiTokenConfiguration(final String configPath) {
String path = configPath;
if (!path.endsWith(File.separator))
path += File.separator;
this.filePath = path + FILE_NAME;

medium

The file path is constructed by manually appending file separators. This can be brittle across different operating systems. Using java.nio.file.Paths.get() is a more modern and platform-independent way to construct file paths.

  public ApiTokenConfiguration(final String configPath) {
    this.filePath = java.nio.file.Paths.get(configPath, FILE_NAME).toString();
  }


claude bot commented Feb 18, 2026

Code Review

This is a substantial PR adding three major features: MCP server integration, API tokens, and user/group management UI in Studio. Overall the implementation is solid and well-structured. Here are my findings:


Security Concerns

1. Expired API token returns wrong HTTP status (critical)
In AbstractServerHttpHandler.java, when authenticateByApiToken() throws ServerSecurityException for an invalid/expired token, the exception bubbles up through the general exception handler—which typically returns a 500 or catches it differently than the 401 returned for session token failures. The test testExpiredTokenReturns403 expects a 403, but the code path is asymmetric: session-token failures explicitly set 401 and return early, while API token failures throw an exception and rely on the surrounding error handler. Verify this produces a consistent response code.

2. API token stored as plain prefix + UUID (low entropy concern)
In ApiTokenConfiguration.java:

final String tokenValue = TOKEN_PREFIX + UUID.randomUUID();

UUID.randomUUID() uses SecureRandom internally on most JVMs (verified in modern Java), but this should be explicitly documented. A UUID is 122 bits of randomness—acceptable, but worth noting in a comment.
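If one wanted to make the randomness source explicit rather than rely on UUID internals, a generator along these lines could be used (a sketch only; newTokenValue and the 32-byte length are illustrative choices, not the PR's code):

```java
import java.security.SecureRandom;
import java.util.Base64;

public class TokenGenSketch {
  private static final SecureRandom RANDOM = new SecureRandom();

  // 256 bits of entropy drawn explicitly from SecureRandom, URL-safe encoded.
  static String newTokenValue(final String prefix) {
    final byte[] bytes = new byte[32];
    RANDOM.nextBytes(bytes);
    return prefix + Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
  }

  public static void main(final String[] args) {
    System.out.println(newTokenValue("at-"));
  }
}
```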

3. MCP command-detection bypass via string matching limitations
ExecuteCommandTool.detectOperationType() only checks if the command starts with certain keywords (for SQL) or contains certain strings (for Cypher). For Cypher, upper.contains("CREATE") will match any query containing the word "CREATE" anywhere—including in string literals or comments. Example: MATCH (n) WHERE n.desc = 'CREATE something' RETURN n would be detected as an INSERT. This is a false positive (safe side), but worth documenting the limitation clearly.

4. MCP route /api/v1/mcp accessible even when MCP is disabled
Routes are registered unconditionally when mcpConfig != null. The check if (mcpConfig != null) in HttpServer.java is always true since mcpConfiguration is always initialized—so the null-check is misleading. Consider either removing the dead condition or genuinely gating route registration on mcpConfig.isEnabled() (though you'd need to handle runtime enable/disable differently).

5. No rate limiting on API token creation
Any root user can create unlimited API tokens with no rate limiting or maximum count enforcement. Consider adding a max token count guard.


Bugs / Correctness Issues

6. PutUserHandler: updating databases via toJSON().put(...) may not persist correctly

existingUser.toJSON().put("databases", databases);
existingUser.toJSON().remove("_databaseCacheCleared");

If toJSON() reconstructs a new JSONObject each call rather than returning a live reference to the internal state, these mutations are lost. The _databaseCacheCleared key removal on the second call operates on a different object than the first. Verify this method returns a mutable live reference—and if not, the database assignments will silently not be saved even after security.saveUsers().

7. ApiTokenConfiguration.cleanupExpired() has a concurrent modification pattern

for (final Map.Entry<String, JSONObject> entry : tokens.entrySet()) {
    ...
    tokens.remove(entry.getKey());
}

While ConcurrentHashMap doesn't throw ConcurrentModificationException, removing during iteration over entrySet() is not guaranteed to visit all entries. Use tokens.entrySet().removeIf(...) instead.
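The removeIf rewrite could look roughly like this (a sketch with a simplified name-to-expiry map standing in for the actual tokens map):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CleanupSketch {
  // Simplified stand-in for the token store: token name -> expiresAt (epoch millis; 0 = never expires)
  static final Map<String, Long> tokens = new ConcurrentHashMap<>();

  // entrySet().removeIf() removes matching entries in one pass, avoiding the
  // remove-during-iteration pattern that can skip entries on a ConcurrentHashMap.
  static boolean cleanupExpired(final long now) {
    return tokens.entrySet().removeIf(entry -> entry.getValue() > 0 && entry.getValue() < now);
  }

  public static void main(final String[] args) {
    tokens.put("expired", 100L);
    tokens.put("valid", Long.MAX_VALUE);
    tokens.put("never", 0L);
    final boolean removed = cleanupExpired(1_000L);
    System.out.println(removed + " " + tokens.size()); // prints "true 2"
  }
}
```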

8. MCPConfiguration.updateFrom() is not synchronized
The class mixes synchronized methods (load(), save()) with non-synchronized updateFrom() that modifies multiple volatile fields non-atomically. An observer could see a partially updated configuration mid-update. Either make updateFrom() synchronized or document why this is acceptable.

9. ServerStatusTool leaks all database names regardless of user permissions

result.put("databases", new JSONArray(server.getDatabaseNames()));

ListDatabasesTool correctly filters databases by user permissions, but ServerStatusTool exposes all databases on the server. An MCP user could enumerate all databases even those they lack access to.


Code Quality

10. Fully-qualified class name in ServerSecurity.java

new com.arcadedb.serializer.json.JSONArray()

The CLAUDE.md guidelines explicitly state: "don't use fully qualified names if possible, always import the class." JSONArray is already imported in the same file—use the short name.

11. MCPConfiguration.save() uses FileWriter without specifying charset

try (final FileWriter writer = new FileWriter(getConfigFile())) {

This uses the platform default charset. Use StandardCharsets.UTF_8 or the DatabaseFactory.getDefaultCharset() pattern used consistently in ApiTokenConfiguration.save().

12. MCPServerPluginTest uses hardcoded port 2480 instead of per-index ports

private static final String MCP_URL = "http://127.0.0.1:2480/api/v1/mcp";

Other integration tests (e.g., ApiTokenAuthenticationIT, GroupManagementIT) use the 248{serverIndex} pattern via testEachServer() to support multi-server testing. This test only covers server 0.


Test Coverage Gaps

  • No test for the group merging behavior (getDatabaseGroupsConfiguration now merges wildcard + database-specific groups)—this logic change is significant and deserves a dedicated unit test
  • No test for the case where MCP is disabled and a request is made
  • testReadOnlyTokenCannotInsert uses isNotEqualTo(200) rather than a specific status code—consider asserting the exact 403

Minor Nits

  • DeleteApiTokenHandler: passing the token value as a query parameter means it appears in server access logs. Consider accepting it in the request body instead.
  • New user/group management endpoints should be added to the OpenAPI spec for completeness.


codacy-production bot commented Feb 18, 2026

Coverage summary from Codacy

See diff coverage on Codacy

Coverage variation                  Diff coverage
Report missing for 3e85168 [1]      81.57%

Coverage variation details
                                    Coverable lines  Covered lines   Coverage
Common ancestor commit (3e85168)    Report Missing   Report Missing  Report Missing
Head commit (381b6c9)               116803           72747           62.28%

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details
                        Coverable lines  Covered lines  Diff coverage
Pull request (#3481)    852              695            81.57%

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%


Footnotes

  1. Codacy didn't receive coverage data for the commit, or there was an error processing the received data. Check your integration for errors and validate that your coverage setup is correct.


codecov bot commented Feb 18, 2026

Codecov Report

❌ Patch coverage is 75.57003% with 300 lines in your changes missing coverage. Please review.
✅ Project coverage is 64.05%. Comparing base (de9ea0c) to head (6ac9b91).
⚠️ Report is 4 commits behind head on main.

Files with missing lines Patch % Lines
...a/com/arcadedb/server/security/ServerSecurity.java 70.32% 18 Missing and 9 partials ⚠️
...rcadedb/server/security/ApiTokenConfiguration.java 77.66% 17 Missing and 6 partials ⚠️
...a/com/arcadedb/server/mcp/tools/GetSchemaTool.java 69.44% 11 Missing and 11 partials ⚠️
...ava/com/arcadedb/mongo/query/MongoQueryEngine.java 0.00% 21 Missing ⚠️
...in/java/com/arcadedb/bolt/BoltNetworkExecutor.java 0.00% 14 Missing and 3 partials ⚠️
...adedb/server/http/handler/PostApiTokenHandler.java 65.30% 8 Missing and 9 partials ⚠️
...java/com/arcadedb/server/mcp/MCPConfiguration.java 84.46% 10 Missing and 6 partials ⚠️
.../arcadedb/server/mcp/tools/ExecuteCommandTool.java 79.16% 8 Missing and 7 partials ⚠️
...c/main/java/com/arcadedb/utility/SingletonSet.java 40.90% 12 Missing and 1 partial ⚠️
...server/http/handler/AbstractServerHttpHandler.java 68.29% 12 Missing and 1 partial ⚠️
... and 19 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3481      +/-   ##
==========================================
+ Coverage   63.15%   64.05%   +0.89%     
==========================================
  Files        1299     1321      +22     
  Lines       86427    87621    +1194     
  Branches    17789    17965     +176     
==========================================
+ Hits        54585    56127    +1542     
+ Misses      24102    23615     -487     
- Partials     7740     7879     +139     

☔ View full report in Codecov by Sentry.


claude bot commented Feb 19, 2026

PR Review: MCP server + API Keys + User/Groups on Studio

This is a substantial PR adding three major features: an MCP (Model Context Protocol) server, API token authentication, and Studio UI for user/group management. The overall design is solid and the code follows project conventions well. Here are my findings:


Security Concerns

[Critical] Token value stored in plaintext on disk

ApiTokenConfiguration.createToken() stores the plain token value as the map key and writes it to server-api-tokens.json in plaintext. If this file is readable by other processes or users on the system, all tokens are exposed. Consider storing a SHA-256 hash of the token for lookup and persistence, and returning the plaintext only once, at creation time (similar to how most API key systems work).
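A hash-at-rest scheme along those lines might look like this (a sketch; hashToken is a hypothetical helper, not the PR's code — the digest would serve as both the map key and the persisted value):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class TokenHashSketch {
  // Hash the plaintext token so only the digest is persisted; the plaintext
  // is returned to the caller once at creation time and never stored.
  static String hashToken(final String plaintext) {
    try {
      final byte[] digest = MessageDigest.getInstance("SHA-256")
          .digest(plaintext.getBytes(StandardCharsets.UTF_8));
      final StringBuilder hex = new StringBuilder(digest.length * 2);
      for (final byte b : digest)
        hex.append(String.format("%02x", b));
      return hex.toString();
    } catch (final NoSuchAlgorithmException e) {
      throw new IllegalStateException("SHA-256 unavailable", e);
    }
  }

  public static void main(final String[] args) {
    // On each request: hash the presented token and look up the digest.
    System.out.println(hashToken("at-example").length()); // prints 64
  }
}
```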

[High] MCP command classification is bypassable for some query languages

ExecuteCommandTool.detectOperationType() only classifies SQL and Cypher; Gremlin, GraphQL, and MongoDB always fall into UNKNOWN, which requires all write permissions to be enabled. This means users who want to use Gremlin writes must enable all write types simultaneously — likely unintended. Consider documenting this limitation or extending classification to other languages.

[Medium] Synthetic group name uses human-readable token name

In ServerSecurity.authenticateByApiToken():

final String syntheticGroupName = "_apitoken_" + tokenName;

If tokenName can conflict with existing group names, the synthetic group could potentially overlap. Using the token's UUID suffix (from the at- prefixed value) would be more collision-resistant.


Correctness Issues

[Medium] PutUserHandler mutates user state non-atomically

existingUser.toJSON().put("databases", databases);
existingUser.refreshDatabaseNames();
// ...
security.saveUsers();

This modifies the in-memory user JSON directly before the save completes. A concurrent request reading the user between the put() and saveUsers() calls could see inconsistent state. Compare with PostUserHandler which builds a new userConfig object and calls createUser(userConfig).

[Low] MCPConfiguration.toJSON() is not synchronized

updateFrom() is synchronized but toJSON() is not. Since the fields are individually volatile, single-field reads are safe, but the multi-field snapshot in toJSON() is not atomic. If strict consistency of the config snapshot matters (e.g., for the config GET endpoint), consider synchronizing toJSON() as well.

[Low] ApiTokenConfiguration.cleanupExpired() is not synchronized

The method is not synchronized but calls save() (which is). While ConcurrentHashMap.removeIf() is thread-safe, the boolean result used to trigger save() could be stale if another thread also modified the map between removeIf() and the save call.


Code Quality

[Medium] Redundant null check with contradictory comment

In HttpServer.setupRoutes():

final var mcpConfig = server.getMCPConfiguration();
if (mcpConfig != null) {  // comment: "safety guard (mcpConfig is always initialized)"

If mcpConfig is always initialized, the null check is dead code. If it could genuinely be null (e.g., during early startup), the comment is wrong. Pick one and clean it up.

[Low] PutUserHandler does not use the getQueryParameter() helper

Unlike DeleteUserHandler which calls getQueryParameter(exchange, "name"), PutUserHandler manually accesses exchange.getQueryParameters(). Using the existing helper would be more consistent.

[Low] Import ordering in MCPConfiguration.java

java.io.FileOutputStream appears after java.io.OutputStreamWriter — out of alphabetical order compared to the other imports in the file.

[Low] HTTP 200 returned for resource creation

PostApiTokenHandler and PostUserHandler return 200 on success. REST convention uses 201 Created for new resource creation. Not a bug, but worth aligning for consistency with standard practice.


Test Coverage

The test coverage is comprehensive. A few gaps:

  • PutUserHandler (user update) has no integration test. ApiTokenAuthenticationIT covers tokens, GroupManagementIT covers groups, but user update via PUT /server/users is not tested end-to-end.
  • testReadOnlyTokenCannotInsert() uses a loose assertion: assertThat(connection.getResponseCode()).isNotEqualTo(200). It would be stronger to assert the specific error code (403 or 500) that the security layer returns.
  • Wildcard + database-specific group merging is not explicitly tested. testDatabaseSpecificGroupDoesNotBreakWildcardGroups verifies root still works but does not verify that a user in only the database-specific group can use it while lacking wildcard group access.

Studio / Frontend

  • The use of escapeHtml() in onclick attributes (onclick="editUser('" + escapeHtml(name) + "')") prevents XSS in HTML context, but usernames containing a single-quote would still break the JS string. Consider using data-* attributes with event listeners attached in JS instead.
  • The package.json overrides for ajv to ^8.18.0 is a useful security/compatibility fix.

Overall this is well-structured, thoughtfully designed code with good test coverage. The main items worth addressing before merging are: the plaintext token storage concern, the PutUserHandler non-atomic mutation, and adding a test for the user update endpoint.


claude bot commented Feb 19, 2026

Code Review

This is a substantial PR adding three significant features: MCP server, API key authentication, and Studio User/Group management. Overall the implementation is well-structured. Here are my observations:


Security Issues

1. SHA-256 without salt for token hashing (ApiTokenConfiguration.java)

SHA-256 is used to hash API tokens without a salt. While API tokens are random (UUID-based) and long enough to resist brute-force, adding a salt is defense-in-depth best practice. Consider HMAC-SHA256 with a server-side secret to prevent offline attacks if the token file is exfiltrated.

2. UNKNOWN operation type requires ALL write permissions (ExecuteCommandTool.java:354-357)

The fallback for unclassified commands is to require all write permissions simultaneously, which is overly restrictive. A better approach would be to throw an explicit error saying the command cannot be classified, prompting the user to switch to a classified language/command.

3. Regex-based SQL/Cypher permission checking is bypassable (ExecuteCommandTool.java)

The keyword detection using startsWith and regex patterns can be bypassed. For example:

  • SQL: /* comment */ INSERT INTO ... would not be caught by upper.startsWith("INSERT")
  • Cypher: MATCH (n) WITH n CREATE (m) contains CREATE but also other keywords

The execute_command tool is intended for writes; consider simply requiring any write permission to be enabled rather than trying to classify the exact operation type. This is safer and simpler.

4. Token prefix length reveals too much (ApiTokenConfiguration.java:43)

PREFIX_LEN = 10 for at- + 7 chars of UUID. The prefix is stored in plaintext in the token file. Consider reducing to 6-7 chars (just enough for human identification).


Design/Architecture Issues

5. Race condition in PutUserHandler.java - saveUsers() called outside synchronized block

synchronized (security) {
    final ServerSecurityUser existingUser = security.getUser(name);
    ...
    existingUser.toJSON().put("databases", databases);
    existingUser.refreshDatabaseNames();
}
security.saveUsers();  // <- OUTSIDE the synchronized block

saveUsers() is called after releasing the lock. If two updates happen concurrently, they could interleave. Move security.saveUsers() inside the synchronized block.

6. MCPConfiguration has mixed synchronization

Some methods are synchronized (load, save, toJSON, updateFrom) but the volatile fields are also accessed in non-synchronized getters (isEnabled(), isAllowReads(), etc.). The volatile keyword handles visibility but not atomicity for compound checks. Consider whether this could lead to a TOCTOU issue if isEnabled() and isUserAllowed() are checked separately in MCPHttpHandler.

7. getDatabaseGroupsConfiguration behavior change may break existing setups

The original code returned database-specific groups OR the wildcard groups (fallback). The new code merges them. This is a semantic change that could alter existing security permissions for users who intentionally have different group configs for specific databases vs wildcard. This should be explicitly documented as a behavior change and ideally covered by a test.
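A toy illustration (stand-in maps, not the ArcadeDB types) of how the two strategies can produce different effective permissions for the same configuration:

```java
import java.util.HashMap;
import java.util.Map;

public class GroupsMergeDemo {
  // Old behavior: database-specific groups, else fall back to the wildcard.
  static Map<String, String> fallback(final Map<String, Map<String, String>> cfg, final String db) {
    return cfg.getOrDefault(db, cfg.getOrDefault("*", Map.of()));
  }

  // New behavior: wildcard groups as a base, overlaid with database-specific ones.
  static Map<String, String> merge(final Map<String, Map<String, String>> cfg, final String db) {
    final Map<String, String> out = new HashMap<>(cfg.getOrDefault("*", Map.of()));
    out.putAll(cfg.getOrDefault(db, Map.of()));
    return out;
  }

  public static void main(String[] args) {
    final Map<String, Map<String, String>> cfg = Map.of(
        "*", Map.of("admin", "all"),
        "sales", Map.of("reader", "read"));
    System.out.println(fallback(cfg, "sales").containsKey("admin"));  // false: wildcard ignored
    System.out.println(merge(cfg, "sales").containsKey("admin"));     // true: wildcard merged in
  }
}
```

A deployment that used a database-specific config to *exclude* wildcard groups would see the `admin` group reappear under the merge strategy — exactly the regression worth documenting and testing.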

8. MCPConfigHandler inconsistent with established routing patterns

The MCP config endpoint uses addExactPath which matches all HTTP methods, and branches on exchange.getRequestMethod() internally. All other endpoints use the Undertow routing builder (.get(...), .post(...)) which is the established pattern. Consider consistency.


Code Quality Issues

9. GetApiTokensHandler.java:52 - "..." appended in the API layer

entry.put("tokenPrefix", token.getString("tokenPrefix", "") + "...");

Presentational formatting (the "..." ellipsis) should be left to the UI, not the API layer.

10. PostUserHandler.java - inconsistent error handling for password validation

For password validation errors, throw new ServerSecurityException(...) is used, while other validation in the same method uses return new ExecutionResponse(400, ...). Use consistent error handling patterns throughout a handler.

11. usersToJSON() called in GetUsersHandler.java - method not visible in the diff

The handler calls security.usersToJSON(), but this method is not shown as being added to ServerSecurity in the diff. Please verify this method exists (may be pre-existing).


Missing Tests

12. MCP server has no integration tests

The PR adds ApiTokenAuthenticationIT and GroupManagementIT but there are no tests for:

  • MCP server initialize, tools/list, tools/call endpoints
  • MCP permission enforcement (allowReads=false, allowInsert=false, etc.)
  • MCP user allowlist enforcement
  • MCP config save/load roundtrip

13. testReadOnlyTokenCannotInsert has a weak assertion (ApiTokenAuthenticationIT.java:170)

assertThat(connection.getResponseCode()).isNotEqualTo(200);

This accepts any non-200 response including 500 (server error). The test should assert a specific 403 or verify the error response body to confirm it's a permission error and not an unrelated server failure.


Minor Issues

14. Wildcard imports in ApiTokenConfiguration.java

The file uses import java.io.*;, import java.nio.file.*;, etc. The project style prefers explicit imports.

15. MCPHttpHandler.java:94 - INFO-level logging for every MCP request may be too verbose

LogManager.instance().log(this, Level.INFO, "MCP %s (user=%s)", method, user.getName());

MCP tools can be called frequently by AI agents. Consider Level.FINE/Level.DEBUG or log only unexpected/error paths at INFO.

16. DeleteGroupHandler.java:58-65 - getDatabase() without null check

final DatabaseInternal db = (DatabaseInternal) httpServer.getServer().getDatabase(dbName);
security.updateSchema(db);

getDatabase() may return null or throw if the database is not open. Add a null check before calling updateSchema.


Summary

The overall architecture is clean and the security fundamentals are sound (root-only access to management endpoints, token hashing, MCP disabled by default, no plaintext token persistence). The main concerns are:

  1. saveUsers() outside the synchronized block in PutUserHandler - potential race condition
  2. Regex-based operation detection in ExecuteCommandTool is fragile (comment prefix bypass, multi-clause Cypher)
  3. No MCP integration tests - the MCP server is the main feature of this PR
  4. Behavior change in getDatabaseGroupsConfiguration (merge vs fallback) needs documentation/test coverage

@claude


claude bot commented Feb 19, 2026

Code Review

This is a substantial PR adding three major features: MCP server integration, API token authentication, and User/Group management via REST API. Overall, the code is well-structured and follows project conventions. Below is my detailed feedback.


Security Concerns

1. MCP endpoint authentication gap — MCPHttpHandler does not enforce authentication before checking user access

In AbstractServerHttpHandler.handleRequest(), if a request comes in with no Authorization header (anonymous), user will be null. MCPHttpHandler.execute() then calls config.isUserAllowed(user.getName()) — this will NPE if user is null. Worse, the endpoint must be accessible without authentication for some MCP clients. Make sure authentication is consistently required (or explicitly handled) before checking isUserAllowed.

2. Token stored as UUID — predictability concern

final String tokenValue = TOKEN_PREFIX + UUID.randomUUID();

UUID.randomUUID() uses SecureRandom internally on most JVMs, so this is acceptable. However, the token space is only 122 bits of randomness. Using SecureRandom directly to generate 256 bits would be more explicit and provably secure:

byte[] bytes = new byte[32];
SECURE_RANDOM.nextBytes(bytes);
final String tokenValue = TOKEN_PREFIX + HexFormat.of().formatHex(bytes);
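A self-contained version of that sketch (the `newToken` helper and `"at-"` prefix argument are illustrative; only `SecureRandom` and `HexFormat` are real JDK APIs, the latter from Java 17+):

```java
import java.security.SecureRandom;
import java.util.HexFormat;

public class TokenGen {
  private static final SecureRandom SECURE_RANDOM = new SecureRandom();

  static String newToken(final String prefix) {
    final byte[] bytes = new byte[32];          // 256 bits of entropy
    SECURE_RANDOM.nextBytes(bytes);
    return prefix + HexFormat.of().formatHex(bytes);
  }

  public static void main(String[] args) {
    final String t = newToken("at-");
    // 3-char prefix + 64 hex chars
    System.out.println(t.startsWith("at-") && t.length() == 67);  // true
  }
}
```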

3. Tokens file in plain config directory — no file permission hardening

server-api-tokens.json is written with no explicit file permission restriction. On Linux, consider setting it to mode 600 (owner-readable only) after creation, similar to how SSH keys are handled.
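A minimal sketch of that hardening, assuming a POSIX filesystem (the temp file stands in for the real token file; on non-POSIX filesystems such as NTFS the call is unsupported, so it is guarded):

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

public class SecureTokenFile {
  public static void main(String[] args) throws IOException {
    final Path file = Files.createTempFile("server-api-tokens", ".json");
    // Restrict to owner read/write (mode 600) right after creation.
    if (FileSystems.getDefault().supportedFileAttributeViews().contains("posix")) {
      Files.setPosixFilePermissions(file, PosixFilePermissions.fromString("rw-------"));
      System.out.println(PosixFilePermissions.toString(Files.getPosixFilePermissions(file)));
    }
  }
}
```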

4. MCP execute_command tool operates without a transaction

In ExecuteCommandTool.execute():

try (final ResultSet resultSet = database.command(language, command)) { ... }

Commands are executed directly without an explicit transaction. ArcadeDB auto-commits in this case, but the behavior on error mid-command may differ. Consider wrapping in a transaction for consistency with the rest of the codebase and clearer error semantics.

5. Query injection: The MCP permission check is text-parsing-based, not semantic

detectOperationType() parses SQL/Cypher as text. An adversary with write access could craft:

  • A SQL stored procedure or function call that issues a DELETE internally
  • A Cypher query with subqueries or CALL syntax

The database-level security (user permissions via groups) provides the actual security layer, but the checkPermission in MCP is a best-effort filter, not a guarantee. Consider documenting this limitation clearly.


Correctness Issues

6. PutUserHandler — race condition on user update

synchronized (security) {
  final ServerSecurityUser existingUser = security.getUser(name);
  ...
  existingUser.toJSON().put("databases", databases);
  existingUser.refreshDatabaseNames();
}
security.saveUsers();

saveUsers() is called outside the synchronized block. Another thread could modify the user between the in-memory update and the file save. Move security.saveUsers() inside the synchronized block.

7. ApiTokenConfiguration.getToken() — lazy expiry cleanup triggers save() on every expired token access

public JSONObject getToken(final String plaintextToken) {
  ...
  if (expiresAt > 0 && expiresAt < System.currentTimeMillis()) {
    tokens.remove(hash);
    save();  // ← disk write on every auth attempt with an expired token
    return null;
  }
  ...
}

This causes a disk write on every authentication attempt with an expired token, which could be a DoS vector if an attacker repeatedly sends expired tokens. The cleanupExpired() method already exists — use a scheduled cleanup instead of writing on every call.
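An illustrative stand-in (not the ArcadeDB classes) for the scheduled-sweep pattern: expired entries are removed in one pass on a timer, and the caller persists once per sweep instead of once per failed lookup:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledTokenCleanup {
  static final Map<String, Long> tokenExpiry = new ConcurrentHashMap<>();

  static int cleanupExpired() {
    final long now = System.currentTimeMillis();
    int removed = 0;
    for (var it = tokenExpiry.entrySet().iterator(); it.hasNext(); )
      if (it.next().getValue() < now) {
        it.remove();
        removed++;
      }
    return removed;  // caller saves the file once if removed > 0
  }

  public static void main(String[] args) throws InterruptedException {
    tokenExpiry.put("expired", System.currentTimeMillis() - 1_000L);
    tokenExpiry.put("valid", System.currentTimeMillis() + 60_000L);

    final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    timer.schedule(() -> System.out.println(cleanupExpired()), 50, TimeUnit.MILLISECONDS);
    timer.shutdown();
    timer.awaitTermination(5, TimeUnit.SECONDS);
    System.out.println(tokenExpiry.size());  // 1
  }
}
```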

8. MCPConfiguration.updateFrom() — reader/writer synchronization around allowedUsers

updateFrom() is synchronized and reassigns the volatile allowedUsers reference:

public synchronized void updateFrom(final JSONObject json) { ... }

isUserAllowed() reads the reference without synchronization. Because the field is volatile and the reference is swapped atomically, a single read is safe: a reader observes either the old complete list or the new one. The real hazard would be mutating the list in place rather than replacing it, since readers could then see a partially updated list. Either guarantee the list is always replaced with a fresh immutable instance, or use a CopyOnWriteArrayList.
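One safe shape for this (an illustrative stand-in, not the MCPConfiguration class itself) is a volatile reference to an immutable snapshot, so readers never see a half-updated list:

```java
import java.util.List;

public class AllowList {
  // Volatile reference to an immutable snapshot: readers see either the old
  // complete list or the new complete list, never a partial update.
  private volatile List<String> allowedUsers = List.of();

  void setAllowedUsers(final List<String> users) {
    allowedUsers = List.copyOf(users);           // publish a fresh immutable list
  }

  boolean isUserAllowed(final String name) {
    final List<String> snapshot = allowedUsers;  // single volatile read
    return snapshot.contains("*") || snapshot.contains(name);
  }

  public static void main(String[] args) {
    final AllowList l = new AllowList();
    l.setAllowedUsers(List.of("*"));
    System.out.println(l.isUserAllowed("root"));  // true
  }
}
```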

9. URL routing inconsistency: /server/api-tokens vs /api/v1/server/api-tokens

In HttpServer.java, the routes are registered as:

.get("/server/api-tokens", new GetApiTokensHandler(this))

But tests access http://...248X/api/v1/server/api-tokens. Check if the /api/v1 prefix is applied globally by the router or if there's a mismatch.


Code Quality

10. Raw string JSON in error responses

Several handlers return raw JSON strings like:

return new ExecutionResponse(400, "{\"error\":\"Token parameter is required\"}");

This is inconsistent with the rest of the codebase which uses JSONObject. Use new JSONObject().put("error", "...") throughout for consistency and to avoid escaping bugs.

11. MCPConfigHandler handles both GET and POST but only registers on addExactPath

MCPConfigHandler checks exchange.getRequestMethod() internally to differentiate GET vs POST, but it's registered with addExactPath which handles all methods. This works but is fragile — consider separate registrations:

routes.addExactPath("/api/v1/mcp/config", exchange -> {
  if (Methods.GET.equals(exchange.getRequestMethod()))
    new GetMCPConfigHandler(...).handleRequest(exchange);
  else
    new PostMCPConfigHandler(...).handleRequest(exchange);
});

Or document this pattern explicitly.

12. usersToJSON() / groupsToJSON() methods — verify these exist on ServerSecurity

GetUsersHandler calls security.usersToJSON() and GetGroupsHandler calls security.groupsToJSON(). These methods are not in the diff — confirm they were pre-existing public methods.

13. getDatabaseGroupsConfiguration — behavioral change with potential regression

The original code fell back to the * wildcard config only when no database-specific config existed. The new code merges wildcard and database-specific groups. This is a behavior change that could affect existing deployments where a database-specific config was intentionally used to override (not extend) wildcard groups. The merge strategy (wildcard base + database-specific overlay) should be documented clearly.


Test Coverage

14. Missing test: MCP disabled returns error

There's no test verifying that when MCP is disabled, the endpoint returns the appropriate error. testInitialize() only runs after @BeforeEach enables MCP.

15. Missing test: PutUserHandler — confirm save-outside-sync is exercised

The race condition in point #6 is not covered by any test.

16. testReadOnlyTokenCannotInsert — weak assertion

// Security error — may be 403 or 500 depending on where the exception is caught
assertThat(connection.getResponseCode()).isNotEqualTo(200);

This could pass for unrelated errors (network, 404, etc.). Be more specific: isIn(403, 500) or fix the response code so it's always 403 for permission errors.


Minor Observations

  • MCPHttpHandler.handleNotification() returns HTTP 200 with empty body. MCP spec says notifications don't get responses, but HTTP requires a response. This is correct behavior; worth a comment explaining why.

  • formatArgs in MCPHttpHandler — values printed to the log without sanitization could leak sensitive data (e.g., if a command contains passwords). Consider masking sensitive keys.

  • MCPConfiguration.load() on failure calls save() on first load even if it was a corrupt file — the original corrupt content would be silently overwritten with defaults. This could be surprising behavior in a production deployment.

  • PostUserHandler: Empty password (password.length() < 4) throws ServerSecurityException (which becomes a 500), while the handler should return a 400. Validate with an early return instead of throwing.


Summary

The feature set is well-designed and broadly useful. The main items to address before merge are:

  1. The potential NPE/auth gap in MCPHttpHandler for unauthenticated requests
  2. The race condition in PutUserHandler.saveUsers() outside the synchronized block
  3. The getToken() disk write on every expired token access
  4. The raw JSON string error responses (consistency)
  5. Verifying the behavioral change in getDatabaseGroupsConfiguration won't break existing deployments

Good job on the test coverage — it's comprehensive for a new feature.

@claude

claude bot commented Feb 19, 2026

Code Review: MCP Server + API Keys + User/Groups on Studio

This is a substantial and well-structured PR. The implementation is clean, consistent with the codebase conventions, and ships with solid integration test coverage. Here are the notes, organized by severity.


Security Issues

1. SHA-256 is not suitable for API token hashing (ApiTokenConfiguration.java)

SHA-256 is a fast hash, which makes it vulnerable to brute-force attacks if the token file is ever compromised. API tokens (like at-<UUID>) have relatively low entropy (~122 bits from UUID v4), but the concern is the precedent — and if token formats ever become more predictable, this matters more.

For static bearer tokens, a constant-time HMAC-SHA256 with a server-side secret key is the standard practice. Alternatively, using a slow hash (PBKDF2, bcrypt, or Argon2) would protect the stored hashes. Given that the tokens already have a long UUID component, the risk is low in practice, but worth noting for future token designs.
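A sketch of the HMAC-SHA256 alternative described above, using only JDK APIs; the key-management side (where the server secret lives, how it rotates) is assumed, not shown:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.HexFormat;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TokenHmac {
  // Hash a bearer token under a server-side secret: an attacker who steals
  // the token file cannot brute-force tokens without also stealing the key.
  static String hmacSha256(final byte[] serverKey, final String token) throws GeneralSecurityException {
    final Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(serverKey, "HmacSHA256"));
    return HexFormat.of().formatHex(mac.doFinal(token.getBytes(StandardCharsets.UTF_8)));
  }

  public static void main(String[] args) throws GeneralSecurityException {
    final byte[] key = "demo-server-secret".getBytes(StandardCharsets.UTF_8);
    System.out.println(hmacSha256(key, "at-1234").length());  // 64 hex characters
  }
}
```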

2. MCPConfigHandler accepts POST on GET route without explicit routing

MCPConfigHandler.execute() dispatches on exchange.getRequestMethod() internally but the route is registered via routes.addExactPath(...) which handles all HTTP methods. This means PUT, DELETE, etc. will silently fall through to the POST code path (payload null check returns 400, which is fine, but it is implicit). Minor, but worth using explicit get()/post() registrations like the other handlers do.

3. MCP endpoint authentication relies on AbstractServerHttpHandler auth — verify this works when MCP is disabled

When MCP is disabled, MCPHttpHandler.execute() returns early with a JSON-RPC error. However, AbstractServerHttpHandler still authenticates the user first. This is correct behavior, but worth confirming that unauthenticated requests to /api/v1/mcp when disabled return 401 and not the JSON-RPC "disabled" error (which would reveal the endpoint exists). Currently, the authentication layer in AbstractServerHttpHandler runs before execute(), so 401 is returned first — this is fine.


Correctness Issues

4. PutUserHandler — existingUser.toJSON().put("databases", databases) mutates internal state directly

existingUser.toJSON().put("databases", databases);
existingUser.refreshDatabaseNames();

This works only if toJSON() returns the live userConfiguration object (not a copy). If it returns a copy, the change is silently dropped and refreshDatabaseNames() sees stale data. This should be verified against the ServerSecurityUser.toJSON() implementation to ensure the reference semantics are what is expected.

5. ExecuteCommandTool — MERGE in Cypher is classified as UPDATE, but MERGE can INSERT

In Cypher/OpenCypher, MERGE is a create-or-match operation — it will insert if nothing exists. Classifying it purely as UPDATE means a user with only allowUpdate=true (and allowInsert=false) could inadvertently create records via MERGE. The safer mapping would be to require both INSERT and UPDATE permissions for MERGE.
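An illustrative keyword-to-permission mapping only (not the ArcadeDB classifier), showing the safer treatment: MERGE is match-or-create, so it maps to both CREATE and UPDATE rather than UPDATE alone:

```java
import java.util.Set;

public class CypherMergeDemo {
  enum Op { CREATE, READ, UPDATE, DELETE }

  static Set<Op> requiredPermissions(final String clause) {
    return switch (clause.toUpperCase()) {
      case "CREATE"        -> Set.of(Op.CREATE);
      case "MERGE"         -> Set.of(Op.CREATE, Op.UPDATE);  // may insert OR update
      case "SET", "REMOVE" -> Set.of(Op.UPDATE);
      case "DELETE"        -> Set.of(Op.DELETE);
      default              -> Set.of(Op.READ);
    };
  }

  public static void main(String[] args) {
    final Set<Op> merge = requiredPermissions("merge");
    System.out.println(merge.contains(Op.CREATE) && merge.contains(Op.UPDATE));  // true
  }
}
```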

6. detectCypherOperation can be fooled by keywords in string literals

The regex patterns use word boundaries (\b) which helps, but the analysis is done on the uppercased raw command string. A query like MATCH (n) WHERE n.description = 'DELETE this' RETURN n would be misclassified as a DELETE operation. This is a known limitation of regex-based SQL/Cypher classification, and the comment acknowledges word boundaries are used to help, but the string literal concern remains. Consider documenting this limitation clearly.

7. ApiTokenConfiguration.cleanupExpired() is never called

The method exists but there is no scheduler registered to call it. Expired tokens are only removed lazily (on access via getToken()). Tokens will accumulate in the file and in-memory map until they are explicitly accessed. This is functional but could cause memory growth in long-running servers with many short-lived tokens. Consider wiring this into the existing server timer infrastructure.


Code Quality Issues

8. MCPConfiguration uses volatile on individual fields but synchronized on load()/save()

The fields are marked volatile for non-blocking reads, but toJSON() and updateFrom() are synchronized. isEnabled(), isAllowReads(), etc. are not synchronized — this is intentional and correct for individual boolean reads, but a composite check like "is enabled AND is user allowed" is not atomic. For the current use case this is likely fine (the MCP handler does two separate checks), but it is worth a comment.

9. New route URLs use /api/v1/server/... but are missing from OpenAPI docs

The existing routes include GetOpenApiHandler. The new routes (/server/api-tokens, /server/users, /server/groups, /mcp, /mcp/config) should be reflected in the OpenAPI spec to keep the API documentation current.

10. PostGroupHandler returns 200 for creates, should be 201

For consistency with REST conventions and with PostUserHandler (which correctly returns 201 for creates), PostGroupHandler should return 201 for new group creation. Since it doubles as an upsert (create or update), you could accept 200 for update and 201 for create, or document it as an upsert always returning 200.

11. MCPHttpHandler logs every request at Level.INFO

LogManager.instance().log(this, Level.INFO, "MCP %s (user=%s)", method, user.getName());

For production use, tools/call operations with the query content logged (including the formatArgs() output) could expose sensitive query data in logs. Consider using Level.FINE or Level.DEBUG for the per-call argument logging. The method routing log at INFO is fine.


Minor Issues

12. MCPConfigHandler is in com.arcadedb.server.mcp but extends AbstractServerHttpHandler which is in com.arcadedb.server.http.handler — this is a cross-package dependency that slightly breaks the handler-per-package pattern

Not a bug, just a note on package organization. The other MCP handlers (MCPHttpHandler) follow the same pattern. This is an acceptable trade-off for keeping MCP self-contained.

13. getQueryParameter() in DeleteApiTokenHandler sends token value in URL

Deleting a token via DELETE /api/v1/server/api-tokens?token=<value> passes the token in the query string, which may appear in server access logs. Since the token is already being deleted, the security risk is low, but consider accepting the token (or its hash) in the request body for consistency with other write operations.

14. groupsToJSON() and usersToJSON() — verify these are new additions vs pre-existing

The review shows GetGroupsHandler calls security.groupsToJSON() and GetUsersHandler calls security.usersToJSON(). I could not confirm from the diff whether these are new methods added to ServerSecurity. If they are newly added, they should be included in the test coverage checks.


Tests

The test coverage is good — ApiTokenAuthenticationIT, GroupManagementIT, and UserManagementIT cover the main success and error paths including:

  • Create/list/delete tokens
  • Expired token handling
  • Authorization enforcement (non-root blocked)
  • testPlaintextNotPersistedOnDisk — this is a nice security-specific test

Missing test coverage:

  • MCP endpoint tests (tools/list, tools/call with query/execute_command, disabled state)
  • The group merge behavior (getDatabaseGroupsConfiguration wildcard + database-specific merge) has an integration test but no unit test for the merge logic itself

Summary

The core design is solid. The API token approach (one-time plaintext display, hash stored at rest) is correct. The MCP server being disabled by default with an explicit opt-in is the right default. The synthetic user/group construction for API token auth is clever. Main items to address before merge:

  1. Item 5 (MERGE permission) — correctness bug
  2. Item 4 (PutUserHandler mutation semantics) — verify or fix
  3. Item 7 (no expired token cleanup scheduler) — functional gap
  4. Item 2 (MCPConfigHandler HTTP method routing) — minor but clean to fix

@codacy-production

codacy-production bot commented Feb 19, 2026

Coverage summary from Codacy

See diff coverage on Codacy

Coverage variation: -10.61%    Diff coverage: 85.59%

Coverage variation details:
  Common ancestor commit (ffc9be2): 86407 coverable lines, 63054 covered, 72.97% coverage
  Head commit (9f24437): 117118 coverable lines (+30711), 73043 covered (+9989), 62.37% coverage (-10.61%)

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details:
  Pull request (#3481): 1145 coverable lines, 980 covered, 85.59% diff coverage

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%


This allows more precise MCP profiling
@claude

claude bot commented Feb 19, 2026

Code Review

This is a substantial PR that introduces an MCP server, API key authentication, and user/group management in Studio. The overall design is solid. Here are my findings:


Security Concerns

1. MCP routes have no authentication (critical)
MCPHttpHandler accepts requests and checks isUserAllowed() using a username derived from the HTTP Basic auth header — but the MCP endpoint itself appears to accept unauthenticated requests (no checkRootUser or standard auth gate). If the MCP config has allowedUsers: ["*"] or if the check is bypassable, this is a significant security hole. The handler should enforce authentication before any operation, consistent with all other AbstractServerHttpHandler subclasses.

2. GraphQL operation-type detection is naive and bypassable
detectGraphQLOperationTypes() uses a plain string prefix check:

if (trimmed.startsWith("mutation") || trimmed.startsWith("mutation "))
    return Set.of(OperationType.CREATE, OperationType.UPDATE, OperationType.DELETE);

The second condition ("mutation ") is redundant since the first already matches it. More critically, a query like mutation{...} (no space) would match the first check, but a query with leading whitespace or a comment would fall through to READ. Consider using a proper GraphQL parser or at minimum a case-insensitive regex like (?i)^mutation\b. The same issue applies to detectMongoOperationTypes() — a payload with the key "FIND" embedded inside a document value (not as a command key) would trigger a false READ classification.
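A minimal sketch of the suggested regex approach (illustrative only, not the ArcadeDB implementation): skip leading whitespace and #-comments, then match "mutation" as a whole word, case-insensitively, anchored to the start of the document:

```java
import java.util.regex.Pattern;

public class GraphQLOpDetect {
  private static final Pattern MUTATION =
      Pattern.compile("^\\s*(?:#[^\\n]*\\n\\s*)*mutation\\b", Pattern.CASE_INSENSITIVE);

  static boolean isMutation(final String query) {
    return MUTATION.matcher(query).find();
  }

  public static void main(String[] args) {
    System.out.println(isMutation("mutation{addUser}"));       // true
    System.out.println(isMutation("# note\nMutation { m }"));  // true
    System.out.println(isMutation("query { users }"));         // false
  }
}
```

This still misses mutations declared after fragment definitions; a real GraphQL parser remains the robust option.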

3. API token permissions not enforced in QueryTool
QueryTool.execute() only checks config.isAllowReads() (the MCP config), but does not validate that the authenticated ServerSecurityUser (which may be a synthetic API-token user) has access to the requested database. ExecuteCommandTool has the same gap — it calls server.getDatabase(databaseName) without verifying the token's database field scope. A token scoped to "databaseA" could query "databaseB".

4. PutUserHandler synchronized(security) block is too broad
The handler synchronizes on the ServerSecurity object to update a user, which is a reasonable pattern, but existingUser.toJSON().put("databases", databases) mutates the live JSON object directly without going through the security layer's official write path. This bypasses any future validation or audit hooks in ServerSecurity.


Correctness Issues

5. SingletonSet — redundant isEmpty() override
AbstractSet already derives isEmpty() from size() == 0, so the override returning false is harmless but unnecessary. Note that AbstractCollection's default toString() format is [element] (square brackets), so any custom toString() should keep that format to avoid breaking assertions that compare string forms.

6. MCPConfiguration default has reads enabled but MCP disabled
The constructor sets allowReads = true by default, but enabled = false. When the config file is missing and the default is written, reads are pre-enabled for when someone flips enabled to true. This is not inherently wrong but should be documented — currently the Javadoc on MCPConfiguration does not describe the security posture of defaults.

7. ApiTokenConfiguration.load() migration silently promotes plaintext tokens
The backward-compatibility migration hashes and re-saves any plaintext token fields it finds. If a second server process reads the file mid-migration, it could see an inconsistent state. Consider writing to a temp file and atomically renaming, which is the standard pattern for file-based config updates (relevant for save() too).
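A sketch of the write-to-temp-then-atomic-rename pattern mentioned above (the file names are illustrative); other processes observe either the old file or the new one, never a partial write:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicSave {
  static void save(final Path target, final String content) throws IOException {
    final Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
    Files.writeString(tmp, content);
    // Atomic rename within the same directory replaces the target in one step.
    Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
  }

  public static void main(String[] args) throws IOException {
    final Path file = Files.createTempDirectory("cfg").resolve("tokens.json");
    save(file, "{\"tokens\":[]}");
    System.out.println(Files.readString(file));  // {"tokens":[]}
  }
}
```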

8. getOperationTypes() fallback returns {CREATE, UPDATE, DELETE} for unknown DML
The default in Statement.getOperationTypes() and QueryEngine.AnalyzedQuery.getOperationTypes() is deliberately conservative (all write types), but this means that for any unclassified statement the MCP server will require ALL write permissions to be enabled. While safe, it will confuse users who cannot execute a simple statement because it triggers a DELETE permission check. Consider logging a warning when the fallback is used.


Code Quality / Style

9. Missing Apache 2.0 license headers
Several new files are missing the standard license header:

  • server/src/main/java/com/arcadedb/server/mcp/tools/ExecuteCommandTool.java
  • server/src/main/java/com/arcadedb/server/mcp/tools/QueryTool.java
  • server/src/main/java/com/arcadedb/server/http/handler/PutUserHandler.java
  • server/src/test/java/com/arcadedb/server/mcp/MCPPermissionsTest.java
  • server/src/test/java/com/arcadedb/server/GroupManagementIT.java and others in the test package

All source files in this repo have the Apache 2.0 header — please add them for consistency.

10. ExecuteCommandTool exposes checkPermission as public static
The permission checking logic is public and static, making it easy to call from tests (which it does), but also means it can be called from anywhere without going through the MCP handler's authentication checks. Consider making it package-private or extracting it to a dedicated MCPPermissionChecker class to keep the boundary clear.

11. GraphQLQueryEngine — isIdempotent() now depends on getOperationTypes()
Before this PR, isIdempotent() always returned false for GraphQL. Now it returns true for queries. This is semantically correct but is a behavior change for all existing consumers of isIdempotent() (not just the MCP server). Make sure this does not affect query caching or plan reuse paths that relied on the prior conservative false.

12. AnalyzedQuery.getOperationTypes() default vs override inconsistency
QueryEngine.AnalyzedQuery has a default implementation of getOperationTypes(), and Statement (the SQL AST base class) has its own default with the same logic. Subclasses of Statement delegate to statement.getOperationTypes() via SQLQueryEngine. This is fine, but the duplication of the fallback logic in two places (QueryEngine.AnalyzedQuery and Statement) could diverge over time. Consider having QueryEngine's default call through to the statement, or adding a comment noting both must stay in sync.


Test Coverage

13. No test for API-token database scope enforcement
ApiTokenAuthenticationIT tests expiry and read-only enforcement but does not test that a token scoped to databaseA cannot access databaseB. Given point #3 above, this is a missing regression test.

14. MCPServerPluginTest enables MCP in @BeforeEach via HTTP
This is a valid integration test pattern, but it means MCP config persistence is exercised on the real filesystem under target/. If tests run in parallel, the shared config file could cause flakiness. Consider using a unique temp directory per test run.


Minor

  • The comment // INITIALIZE MCP CONFIGURATION (always available, disabled by default) in ArcadeDBServer.start() uses all-caps convention inconsistently with surrounding comments.
  • In detectRedisOperationTypes(), SET/HSET/HMSET return {CREATE, UPDATE} — this is correct since SET is upsert-like, but the distinction is worth a brief inline comment for future readers.
  • MCPConfigHandler handles both GET and POST in a single execute() method by checking exchange.getRequestMethod(). Consider splitting into separate handler classes (consistent with the existing pattern in this codebase where Get*Handler and Post*Handler are separate) or at least adding a comment explaining why this one is combined.

@claude

claude bot commented Feb 19, 2026

Code Review: MCP Server + API Keys + User/Groups on Studio

This is a substantial PR adding three major features: an MCP server, API key authentication, and user/group management in Studio. The overall design is sound — in particular, using semantic AST-based OperationType classification for permission checking instead of text pattern matching is the right approach. Below are my findings.


Security Concerns

1. QueryTool bypasses semantic permission checking

QueryTool.execute() only checks config.isAllowReads() upfront but then calls database.query(language, query) without semantic validation. An LLM or caller could pass language: "sql" with a mutating statement (e.g., INSERT, DELETE) via the query tool — the read-only intent is not enforced by the engine. QueryTool should call ExecuteCommandTool.checkPermission(database, query, language, config) or at minimum assert that the analyzed operation types contain only READ.

// Missing in QueryTool.execute():
ExecuteCommandTool.checkPermission(database, query, language, config);

2. DeleteApiTokenHandler accepts plaintext tokens via query parameter

The token is read from a query parameter (getQueryParameter(exchange, "token")). A plaintext at- token in a URL will appear in server access logs and browser history. Consider requiring deletion by name or hash instead, or accept deletion via a request body.

3. MCPConfiguration.allowedUsers wildcard "*" is undocumented

isUserAllowed() supports "*" to allow all users, but this is not mentioned in any comment or documentation. Given the security implications, this should be clearly documented.

4. authenticateByApiToken has no brute-force protection

SHA-256 is a fast hash — a token file, if stolen, could be brute-forced quickly against a short token namespace. Consider adding a failed-attempt counter per IP, or use PBKDF2/bcrypt for token hashing.


Correctness Issues

5. getDatabaseGroupsConfiguration behavior change may break existing setups

The refactored method now merges wildcard (*) groups with database-specific groups instead of the previous fallback-only behavior. Users who relied on the wildcard being exclusive for databases without specific overrides may see unexpected merged permissions. This should be called out in release notes.

6. MoveVertexStatement.getOperationTypes() is missing CREATE

MOVE VERTEX migrates records from one type to another — it inserts into the target type while deleting from the source. The current {UPDATE, DELETE} is missing CREATE. Consider returning {CREATE, UPDATE, DELETE}.

7. ApiTokenConfiguration.cleanupExpired() is never called automatically

It's defined but there's no scheduled timer calling it. Expired tokens stay in memory (though rejected at access time) until the next load(). Either wire this up to a timer or add a comment explaining how callers are expected to invoke it.


Code Quality

8. SingletonSet duplicates java.util.Collections.singleton()

Collections.singleton() (JDK since 1.3) is a lightweight, immutable, single-element Set with O(1) contains() — the same characteristics as SingletonSet. The custom class adds code to maintain without a measurable benefit. Consider replacing CollectionUtils.singletonSet() calls with Collections.singleton().
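A quick illustration of the JDK alternative, including its immutability (mutation attempts throw):

```java
import java.util.Collections;
import java.util.Set;

public class SingletonDemo {
  public static void main(String[] args) {
    final Set<String> ops = Collections.singleton("READ");
    System.out.println(ops.contains("READ") && ops.size() == 1);  // true
    try {
      ops.add("WRITE");  // immutable view: this throws
    } catch (UnsupportedOperationException e) {
      System.out.println("immutable");
    }
  }
}
```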

9. GraphQL operation type detection is too coarse

detectGraphQLOperationTypes() maps any "mutation" to {CREATE, UPDATE, DELETE}. This will block valid mutations when only one write type is enabled. Worth a // TODO: improve per-field mutation analysis comment.

10. MongoDB operation type detection uses fragile string matching

upper.contains("\"INSERT\"") can false-positive on JSON string values containing those words, not just command keys. Consider parsing the JSON and checking the first key of the command object.
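A sketch of first-key classification along the lines suggested above; a minimal extractor stands in for a real JSON parser here to keep the example self-contained (it ignores escaped quotes and other edge cases, so it is only illustrative):

```java
public class MongoOpDetector {
  /** Returns the first key of a JSON object, or null if none is found. */
  static String firstKey(final String json) {
    final int brace = json.indexOf('{');
    if (brace < 0)
      return null;
    final int open = json.indexOf('"', brace);
    if (open < 0)
      return null;
    final int close = json.indexOf('"', open + 1);
    return close < 0 ? null : json.substring(open + 1, close);
  }

  /** Classifies by the command key, not by scanning the whole string. */
  static boolean isInsert(final String json) {
    final String key = firstKey(json);
    return key != null && key.toLowerCase().startsWith("insert");
  }

  public static void main(String[] args) {
    assert isInsert("{\"insertOne\": {\"document\": {\"name\": \"x\"}}}");
    // A field VALUE mentioning "INSERT" no longer produces a false positive:
    assert !isInsert("{\"find\": \"docs\", \"filter\": {\"note\": \"INSERT\"}}");
    System.out.println("first-key classification ok");
  }
}
```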

11. handleNotification() returns HTTP 200 instead of 204

The MCP spec says notifications don't get a JSON-RPC response. HTTP 204 No Content is more semantically correct than 200 with an empty body.


Test Coverage

12. QueryTool has no dedicated test

Given the permission bypass described above, QueryTool needs test coverage verifying that a mutating query submitted via the query tool is rejected.

13. testOpenCypherAdminViaEngine asserts isDDL() for SHOW USERS

This works because CypherAdminStatement triggers isDDL(), but it's semantically confusing. A comment explaining the relationship between ADMIN and DDL classification would help.


Minor Nits

  • ApiTokenConfiguration uses String.format("%02x", b) in a byte loop — HexFormat.of().formatHex(bytes) (Java 17+) is cleaner and avoids per-byte object allocation.
  • The null check on mcpConfiguration in HttpServer is dead code (it's always initialized before httpServer starts). The comment says as much — just remove the null check entirely.
  • API token-authenticated users have names like "apitoken:<name>" and will always fail checkRootUser. This is presumably intentional, but a comment would help maintainers understand why API tokens can't manage other tokens.
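The HexFormat suggestion from the first nit can be sketched as follows (toHex is an illustrative helper name):

```java
import java.util.HexFormat;

public class HexDemo {
  // Java 17+: formats a byte array as lowercase hex without per-byte allocation.
  static String toHex(final byte[] bytes) {
    return HexFormat.of().formatHex(bytes);
  }

  public static void main(String[] args) {
    final byte[] bytes = { (byte) 0xDE, (byte) 0xAD, (byte) 0xBE, (byte) 0xEF };

    // Equivalent per-byte loop (the pattern the review flags):
    final StringBuilder sb = new StringBuilder();
    for (final byte b : bytes)
      sb.append(String.format("%02x", b));

    assert toHex(bytes).equals(sb.toString());
    System.out.println(toHex(bytes)); // deadbeef
  }
}
```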

@claude

claude bot commented Feb 19, 2026

PR Review: MCP server + API Keys + User/Groups on Studio

This is a substantial PR (~6K lines added) adding three major features: MCP server, API key management, and Studio UI for user/group management. The implementation is well-structured with solid security practices overall. Below are specific observations.


MCP Server

Positives:

  • Disabled by default — safe-by-default is correct.
  • Permission granularity is good (separate flags for read/insert/update/delete/schema/admin).
  • Semantic, parser-based operation type detection avoids naive string matching. The right approach.

Issues / Suggestions:

  1. JSON-RPC errors always return HTTP 200. In MCPHttpHandler.jsonRpcError(), the HTTP status code is always 200 even for protocol/authorization errors. The MCP spec allows non-200 codes for HTTP-layer errors — returning 503 when disabled and 401 when unauthorized would help proxies and monitoring tools. The current approach is technically compliant but may confuse tooling.

  2. INFO-level logging of every MCP method call will generate significant log noise in production with frequent polling (e.g., ping). Consider FINE/DEBUG level for routine calls, INFO only for write operations or errors.

  3. Volatile field read ordering in MCPConfiguration. toJSON() and updateFrom() are synchronized, but individual field getters (isEnabled(), isAllowReads(), etc.) are not. The fields are volatile (correct for visibility), but a multi-getter check-act sequence is not atomic — a concurrent config update could yield an inconsistent combination of values. Very low-risk in practice, worth noting for correctness.

  4. Silent config file creation on first startup. MCPConfiguration.load() calls save() when the file doesn't yet exist, with no log message. An INFO-level log would help operators understand why the file appeared.


API Token Management

Positives:

  • Tokens stored as SHA-256 hashes — never plaintext at rest. Excellent.
  • File permissions set to 600 on POSIX systems.
  • tokenSuffix for identification without exposure.
  • Brute-force protection via tokenFailures map.
  • Deletion rejects plaintext tokens in query parameters (log-safe). Very good.
  • Backward compatibility migration from old plaintext format is handled cleanly.

Issues / Suggestions:

  1. tokenFailures map grows unboundedly. Entries are only removed on successful auth or during lockout window check. With many distinct bad tokens (e.g., a scanning attack), this plain HashMap grows without bound. Consider periodically pruning entries where now - entry[1] > TOKEN_LOCKOUT_MS, or using a bounded LinkedHashMap.

  2. SHA-256 without salting. Since tokens are high-entropy (256-bit random), omitting salt is safe here. A brief comment explaining this would help future maintainers who might otherwise question it.

  3. No validation of permissions JSON structure in PostApiTokenHandler. Malformed permissions could cause a runtime exception in authenticateByApiToken. Root access is required to hit this path, making this a reliability concern rather than a security issue, but basic structural validation would improve robustness.
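The bounded-map alternative from item 1 can be sketched with LinkedHashMap's removeEldestEntry hook (newBoundedMap and the cap value are illustrative, not from the PR):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedFailureMap {
  /** Returns a map that evicts its least-recently-accessed entry past maxEntries. */
  static <K, V> Map<K, V> newBoundedMap(final int maxEntries) {
    return new LinkedHashMap<>(16, 0.75f, true) { // true = access order
      @Override
      protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
        return size() > maxEntries;
      }
    };
  }

  public static void main(String[] args) {
    final Map<String, Long> m = newBoundedMap(3);
    for (int i = 0; i < 10; i++)
      m.put("token" + i, System.currentTimeMillis());
    assert m.size() == 3; // bounded despite 10 inserts
    System.out.println("map stays bounded");
  }
}
```

Note LinkedHashMap is not thread-safe, so the existing synchronized access would still be needed.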


User/Group Management HTTP Handlers

Positives:

  • All handlers require checkRootUser().
  • Root user deletion is prevented.
  • Admin group deletion from wildcard database is prevented.

Issues / Suggestions:

  1. Minimum password length of 4 characters in PostUserHandler/PutUserHandler is below current conventional minimums. 8 would be more appropriate.

  2. PutUserHandler mutates the live user object's internal JSON via existingUser.toJSON().put("databases", databases). This is intentional and refreshDatabaseNames() is called after, but it's an unusual pattern. Worth verifying toJSON() returns a mutable reference (not a defensive copy) and that no concurrent reads of the underlying JSON occur between the mutation and save.

  3. New API endpoints (/server/users, /server/groups, /server/api-tokens) are not documented in the OpenAPI spec (/openapi.json). The existing API documentation should be updated.


OperationType / Query Engine

Positives:

  • Clean OperationType enum abstraction.
  • Safe default implementation in QueryEngine.AnalyzedQuery.getOperationTypes().
  • Per-language overrides in OpenCypher, Gremlin, MongoDB, Redis are well-considered.

Issues / Suggestions:

  1. SingletonSet reimplements Collections.singleton(). The JDK already provides a lightweight singleton set. The comment claims this avoids overhead of Set.of(), but this is unverified and likely negligible in practice. Removing the custom class reduces maintenance surface.

  2. MongoDB operation type detection uses contains() on the raw query string. A field value containing the literal string "INSERT" inside a find query document would incorrectly classify it as CREATE. Consider whether this precision is acceptable; actual permission enforcement also happens at the database layer.

  3. Redundant condition in GraphQL mutation detection:
    The check trimmed.startsWith("mutation") || trimmed.startsWith("mutation ") — the second condition is a strict subset of the first. Remove it.


Test Coverage

Coverage is comprehensive: OperationTypeTest and OperationTypeIntegrationTest cover the full SQL/OpenCypher matrix; ApiTokenAuthenticationIT, GroupManagementIT, UserManagementIT cover the HTTP APIs thoroughly; the MCP server has three dedicated test classes.

One gap: no test for the brute-force lockout behavior in authenticateByApiToken (verifying the N+1th failure within the window is rejected with the lockout message).


Summary

The security fundamentals are solid: hashed token storage, root-only administration, semantic operation analysis via the query parser/AST rather than regex. Main items to address before merge (item numbers count continuously across the sections above):

  • High: item 5 (unbounded tokenFailures map)
  • Medium: items 1 (JSON-RPC HTTP status codes), 2 (log verbosity), and 10 (missing OpenAPI docs)
  • Low: items 8 (password minimum), 11 (custom SingletonSet), and 13 (redundant condition)

@claude

claude bot commented Feb 19, 2026

PR Review: MCP Server + API Keys + User/Groups on Studio

This is a substantial PR adding three major features: an MCP (Model Context Protocol) server, API key authentication, and user/group management endpoints. Overall the implementation is well-structured and security-conscious, but there are several issues worth addressing.


Security Issues

1. Brute-Force Protection Key is Bypassable (ServerSecurity.java)

The attempt key is the hash prefix of the invalid token being submitted:

final String attemptKey = ApiTokenConfiguration.hashToken(tokenValue).substring(0, 8);

An attacker can bypass the lockout by slightly varying their guesses each time (e.g., appending a counter). Different invalid tokens hash to different prefixes, each with an independent counter — the lockout never triggers for any single "slot." The failure map should be keyed on the client IP address instead, which is the standard approach for rate-limiting.

2. Auth Order in MCPHttpHandler.execute() Leaks Server State

if (!config.isEnabled())
    return jsonRpcError(null, -32600, "MCP server is disabled");  // Before auth check!
if (payload == null)
    return jsonRpcError(null, -32700, "Parse error");             // Before auth check!
if (user == null || !config.isUserAllowed(user.getName()))
    return jsonRpcError(...);

An unauthenticated request can probe whether MCP is enabled/disabled. The user == null / authorization check should come first.

Additionally, jsonRpcError always returns HTTP 200. Unauthenticated requests should receive HTTP 401, not a 200 with a JSON-RPC error body, since many HTTP clients and API gateways rely on the status code.

3. Expired Token Removal in getToken() Not Persisted (ApiTokenConfiguration.java)

if (expiresAt > 0 && expiresAt < System.currentTimeMillis()) {
    tokens.remove(hash);  // Does NOT call save()
    return null;
}

The expired token is removed from memory but not from disk. After a server restart it reappears. Either call save() here, or remove this eager eviction and rely solely on cleanupExpired(). Also, cleanupExpired() is defined but I couldn't find where it is scheduled — it should run periodically.
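One way to wire up the periodic cleanup, sketched with illustrative names (TokenStore here is not the PR's ApiTokenConfiguration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TokenStore {
  // token hash -> expiration timestamp (0 = never expires)
  private final Map<String, Long> tokens = new ConcurrentHashMap<>();

  void put(final String hash, final long expiresAt) {
    tokens.put(hash, expiresAt);
  }

  /** Removes every expired token; a real implementation would also call save(). */
  synchronized void cleanupExpired() {
    tokens.entrySet().removeIf(e -> e.getValue() > 0 && e.getValue() < System.currentTimeMillis());
  }

  /** Schedules the cleanup to run periodically instead of per-lookup eviction. */
  void startCleanup(final ScheduledExecutorService scheduler) {
    scheduler.scheduleAtFixedRate(this::cleanupExpired, 1, 1, TimeUnit.MINUTES);
  }

  int size() {
    return tokens.size();
  }

  public static void main(String[] args) {
    final TokenStore store = new TokenStore();
    store.put("live", System.currentTimeMillis() + 60_000);
    store.put("dead", System.currentTimeMillis() - 1);
    store.cleanupExpired();
    assert store.size() == 1; // only the unexpired token remains
    System.out.println("expired token evicted");
  }
}
```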


Code Quality Issues

4. GraphQL Mutation Detection is Fragile (GraphQLQueryEngine.java)

if (trimmed.startsWith("mutation") || trimmed.startsWith("mutation "))
  • startsWith("mutation") already covers startsWith("mutation ") — the second condition is redundant
  • Comments before the keyword (e.g., # comment\nmutation { ... }) would not be detected

Consider delegating to the actual GraphQL parser for accurate classification, consistent with how SQL uses the AST.
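Short of full parser delegation, even stripping line comments first would fix the case above (isMutation is an illustrative helper; a # inside a string literal would still confuse this sketch, which is why the parser remains the better option):

```java
public class GraphQLOpDetector {
  /** Strips # line comments, then checks whether the operation is a mutation. */
  static boolean isMutation(final String query) {
    final StringBuilder sb = new StringBuilder();
    for (final String line : query.split("\n")) {
      final int hash = line.indexOf('#');
      sb.append(hash >= 0 ? line.substring(0, hash) : line).append('\n');
    }
    return sb.toString().trim().startsWith("mutation");
  }

  public static void main(String[] args) {
    assert isMutation("mutation { addUser(name: \"x\") { id } }");
    // A leading comment no longer hides the keyword:
    assert isMutation("# create a user\nmutation { addUser(name: \"x\") { id } }");
    assert !isMutation("query { users { id } }");
    System.out.println("comment-tolerant detection ok");
  }
}
```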

5. MongoDB Operation Detection Uses String Matching (MongoQueryEngine.java)

if (upper.contains("\"INSERT\"") || upper.contains("\"INSERTONE\"") ...)

A value containing the word "insert" in a MongoDB document could produce a false positive. The existing MongoDB parser should be used for accurate classification.

6. SingletonSet Duplicates JDK Functionality

Set.of(element) in modern JDKs is already highly optimized — it returns a dedicated Set12 implementation with no wrapper overhead. The comment says it avoids overhead from Set.of() but that claim is inaccurate for Java 21. This custom class adds ~90 lines to maintain for negligible benefit. Consider removing it and using Set.of() directly.

7. PutUserHandler — Ambiguous User Name Source

String name = getQueryParameter(exchange, "name");
if (name == null || name.isBlank())
    name = payload.getString("name", "");

Accepting the user name from either the query parameter or the request body is inconsistent with other handlers and creates ambiguity when both are present with different values. The OpenAPI spec documents a query parameter — the body fallback should be removed.

8. Logging Query Content at INFO Level (MCPHttpHandler.java)

LogManager.instance().log(this, Level.INFO, "MCP tools/call '%s' %s (user=%s)", toolName, formatArgs(args), ...);

formatArgs logs query content at INFO level. Even with 100-char truncation, queries may contain sensitive data (passwords in CREATE USER, PII in WHERE clauses). This should be DEBUG/FINE level.


Design Issues

9. No Integration Test for MCP Server

There are good tests for API tokens (ApiTokenAuthenticationIT) and group management (GroupManagementIT), but no integration test for the MCP HTTP handler. Testing initialize, tools/list, and tools/call round-trips would provide a regression baseline.

10. Double Parsing in QueryTool and ExecuteCommandTool

Both tools call engine.analyze(query) for permission/idempotency checking, then call database.query() / database.command() which parses the same query again internally. Minor performance concern — the analyzed result is discarded before execution.


Minor Issues

  • DeleteGroupHandler parameter name mismatch: The OpenAPI spec documents the delete parameter as "group" but the handler reads getQueryParameter(exchange, "name"). One is wrong.

  • Fully-qualified names in PostApiTokenHandler: private static final java.util.Set<String> VALID_ACCESS_VALUES = java.util.Set.of(...) — per project conventions, import the class and use the simple name.

  • Inconsistent charset in ApiTokenConfiguration.save(): Uses FileWriter(file, DatabaseFactory.getDefaultCharset()) while MCPConfiguration correctly uses OutputStreamWriter with explicit StandardCharsets.UTF_8. These should be consistent.


What's Good

  • Semantic OperationType analysis via the query parser AST (not regex) for permission checking is a solid design
  • API token hashing (SHA-256, unsalted) is justified by 256-bit SecureRandom entropy — reasoning in the comment is correct
  • File permissions set to 600 on POSIX systems for the token file
  • Good test coverage for API tokens and group management
  • MCPConfiguration defaults are secure (write operations all disabled, reads enabled only for listed users)
  • Plaintext token never stored on disk — correct security practice
  • MCP disabled by default — correct default posture
  • Backward compatibility migration for old token format is handled cleanly

@claude

claude bot commented Feb 19, 2026

PR Review: MCP server + API Keys + User/Groups on Studio

This is a large, well-structured PR that adds three major features: an MCP (Model Context Protocol) server, API token authentication, and REST endpoints for managing users/groups. The code quality is generally high with good security practices and solid test coverage. Here are my findings:


Security

Positives:

  • API tokens are stored as SHA-256 hashes only — plaintext is returned once and never persisted. Good.
  • File permissions are set to owner-only (mode 600) on POSIX systems for the token file.
  • The brute-force protection on token authentication (5 failures → 30s lockout) is a reasonable safeguard.
  • The comment explaining why salt is not needed for high-entropy random tokens is accurate and helpful.
  • Plaintext tokens are rejected in the DELETE endpoint to prevent URL/log leakage.
  • Auth check before isEnabled() check in MCPHttpHandler avoids leaking server state to unauthenticated callers.

Issues:

  1. MCPHttpHandler returns HTTP 200 for all JSON-RPC errors including auth failures (-32600) and method-not-found (-32601). While this follows the JSON-RPC 2.0 spec for transport-level errors, returning 200 for Authentication required can interfere with HTTP-level proxies and WAFs that inspect status codes. Consider returning 401 for auth errors and 503 when the server is disabled, while keeping 200 for JSON-RPC method/param errors.

  2. Token failure tracking uses LinkedHashMap with manual synchronized blocks — the tokenFailures map in ServerSecurity is a LinkedHashMap wrapped in synchronized blocks. This is correct but slightly fragile. A ConcurrentHashMap with a scheduled cleanup task would be more idiomatic for this codebase.

  3. deleteToken() in ApiTokenConfiguration accepts both plaintext and hash while DeleteApiTokenHandler explicitly rejects plaintext. These two layers are inconsistent. Consider restricting deleteToken() at the configuration level as well, since callers outside the HTTP handler could invoke it with a plaintext token.

  4. MCP tools bypass the standard ArcadeDB authorization layer — QueryTool, ExecuteCommandTool, and GetSchemaTool call server.getDatabase(databaseName) and execute queries without verifying that the authenticated user has access to the specified database. ListDatabasesTool correctly filters by user.getAuthorizedDatabases(), but the other tools do not. An MCP user could access a database they are not authorized for if the MCP config allows reads.

  5. ServerStatusTool exposes HA topology information (cluster name, leader name, election status) gated only by isAllowReads(). Consider whether HA topology should be gated separately.


Code Quality

  1. SingletonSet is unnecessary — Set.of(element) already creates a compact single-element immutable set since Java 9. The custom SingletonSet class adds complexity without measurable benefit. The comment Avoids the overhead of Set.of() which creates a wrapper object is not well-founded for this use case.

  2. AnalyzedQuery.execute() returns null as a sentinel when direct execution is not supported. This requires callers to handle null and fall back, which is an unusual contract. A boolean canExecuteDirect() or an UnsupportedOperationException would be cleaner. The null-check pattern appears multiple times across QueryTool and ExecuteCommandTool.

  3. MCPConfiguration.updateFrom() inconsistency — when updating allowedUsers, it calls json.getJSONArray("allowedUsers") without a default value, unlike load() which uses the two-argument form json.getJSONArray("allowedUsers", null). This will throw if the field is absent in a partial update payload.

  4. PostGroupHandler/DeleteGroupHandler iterate all open databases on every group change — server.getDatabase(dbName) can open a closed database as a side effect. Worth considering if this is intentional.

  5. MongoDB operation detection uses fragile string matching — detectMongoOperationTypes() in MongoQueryEngine uses upper.contains("\"INSERT\"") on the raw query string. This could be fooled by JSON field values containing command keywords (e.g., a document field named INSERT). Since this is used for MCP permission checking, a false classification could allow or block operations incorrectly.


Minor Issues

  1. MCP routes use addExactPath with full /api/v1/mcp path while other new routes use relative paths under the prefix router. Verify the MCP routes are reachable and consistent with the rest of the API.

  2. testReadOnlyTokenCannotInsert asserts isIn(403, 500) — accepting HTTP 500 as a valid response for a permissions error means the permission check may not be caught and translated correctly. The expected response should be a consistent 403.

  3. Missing test for MCP database authorization — there is no test verifying that an MCP user cannot access a database they are not authorized for (related to issue 4 above).


Summary

This is a substantial and well-thought-out feature addition. The security-sensitive areas (token storage, brute-force protection, root-only admin endpoints) are handled correctly. The main concerns are:

  • Missing database-level authorization check in MCP tools (security issue)
  • HTTP 200 for auth failures in the MCP handler (security posture)
  • The custom SingletonSet class adds unnecessary complexity

The test coverage is excellent for the new REST endpoints and MCP configuration. Overall, good work.

@claude

claude bot commented Feb 19, 2026

Code Review: MCP Server + API Keys + User/Groups on Studio

This is a substantial PR (~6600 additions) adding three major features: an MCP server, API key authentication, and user/group management in Studio. The overall approach is solid and well thought out. Here are my observations:


Positive Highlights

  • Security-first design: API tokens are stored as SHA-256 hashes (never plaintext), file permissions are set to 600 on POSIX systems, brute-force lockout is implemented, and plaintext tokens are rejected from DELETE endpoints.
  • Semantic permission checking: Using AST-based OperationType classification rather than text matching is the right approach. This is robust against injection tricks.
  • MCP disabled by default: Good default configuration with explicit opt-in for each operation type.
  • Auth check before isEnabled() check in MCPHttpHandler.execute(): Prevents leaking server state to unauthenticated callers.
  • Test coverage: Integration tests cover most key scenarios including expired tokens, read-only enforcement, and plaintext rejection.

Issues and Suggestions

Security

1. MCP enabled/disabled status leaks to authenticated users

The order in MCPHttpHandler.execute() is: auth check → isEnabled() → payload parse → isUserAllowed(). An authenticated user not in allowedUsers can discover whether MCP is enabled (503) vs. disabled. Consider checking isUserAllowed() before returning the 503, or returning 403 consistently.

2. QueryTool: limit parameter is not validated

final int limit = args.getInt("limit", DEFAULT_LIMIT);

A caller could send limit: -1 or limit: Integer.MAX_VALUE. Consider clamping: Math.min(Math.max(1, limit), DEFAULT_LIMIT).
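The suggested clamping, as a self-contained sketch (DEFAULT_LIMIT mirrors the tool's documented default):

```java
public class LimitClamp {
  static final int DEFAULT_LIMIT = 1000;

  /** Clamps a caller-supplied limit into the [1, DEFAULT_LIMIT] range. */
  static int clamp(final int requested) {
    return Math.min(Math.max(1, requested), DEFAULT_LIMIT);
  }

  public static void main(String[] args) {
    assert clamp(-1) == 1;                    // negative values raised to 1
    assert clamp(50) == 50;                   // in-range values pass through
    assert clamp(Integer.MAX_VALUE) == 1000;  // oversized values capped
    System.out.println("limit clamped");
  }
}
```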

3. ApiTokenConfiguration.getToken(): Expired token removal triggers save() on every lookup

This is called on every authenticated request. If many expired tokens exist, this could become a write bottleneck under load. Consider deferring removal to cleanupExpired() and only returning null without saving during getToken().


Correctness

4. QueryTool: Result set iterated outside a transaction

Unlike ExecuteCommandTool which wraps execution in database.transaction(), QueryTool does not. Read-only queries that return lazy iterators may fail if iterated outside a transaction context. Consider wrapping for consistency.

5. MCPConfiguration.updateFrom(): Silent no-op when allowedUsers is explicitly null

if (json.has("allowedUsers")) {
  final JSONArray usersArray = json.getJSONArray("allowedUsers", null);
  if (usersArray == null)
    return;  // silently ignores the remaining fields too

An explicit {"allowedUsers": null} early-returns and skips processing of any other keys that follow in the JSON. This is likely unintentional.
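The control-flow fix can be sketched with plain collections standing in for the PR's JSONObject/JSONArray types (UpdateFromDemo and its fields are illustrative): an explicit null skips only that field instead of aborting the whole update.

```java
import java.util.List;
import java.util.Map;

public class UpdateFromDemo {
  String name = "default";
  List<String> allowedUsers = List.of("root");

  @SuppressWarnings("unchecked")
  void updateFrom(final Map<String, Object> json) {
    if (json.containsKey("allowedUsers")) {
      final Object users = json.get("allowedUsers");
      if (users instanceof List<?> list)      // explicit null: leave field unchanged
        allowedUsers = List.copyOf((List<String>) list);
    }
    if (json.containsKey("name"))             // still processed either way
      name = (String) json.get("name");
  }

  public static void main(String[] args) {
    final UpdateFromDemo cfg = new UpdateFromDemo();
    final Map<String, Object> payload = new java.util.HashMap<>();
    payload.put("allowedUsers", null);
    payload.put("name", "updated");
    cfg.updateFrom(payload);
    assert cfg.allowedUsers.equals(List.of("root")); // unchanged by the null
    assert cfg.name.equals("updated");               // no longer skipped
    System.out.println("remaining fields applied");
  }
}
```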

6. PutUserHandler: Password validation throws instead of returning 400

if (password.length() < 8)
  throw new ServerSecurityException("User password must be at least 8 characters");

PostUserHandler returns ExecutionResponse(400, ...) for the same check, but PutUserHandler throws — which ends up as a 403 (SecurityException handler). The HTTP status should be 400 for validation errors. Be consistent.

7. ServerSecurity.updateUser(): Not synchronized

updateUser() modifies the users map without synchronization, unlike most other mutation methods in this class. Consider adding synchronized or using the same locking strategy.

8. OpenCypherQueryEngine: isDDL() semantic change for CypherAdminStatement

CypherAdminStatement was previously treated as DDL (line 111 change). It is now reclassified as non-DDL with ADMIN operation type. This is semantically correct but could be a breaking change for any callers that relied on isDDL() returning true for admin statements. Worth auditing.


Minor Code Style / Design

9. MCPConfigHandler: Unhandled HTTP methods

Only GET and POST are explicitly handled. PUT/DELETE requests fall through to the POST (config update) path silently. Consider returning 405 for unsupported methods.

10. SingletonSet vs. Set.of(element) or Collections.singleton()

The comment says this avoids the overhead of Set.of(). In JDK 21 Set.of(element) returns a specialized Set12 instance that is similarly lightweight. Unless benchmarked, SingletonSet adds maintenance overhead. Collections.singleton() is a simpler existing alternative.

11. Mixed volatile + synchronized in MCPConfiguration

Fields are individually volatile for simple reads (e.g., isEnabled()), while toJSON() and updateFrom() are synchronized for atomic multi-field operations. This is correct but the volatile modifiers are redundant for the synchronized methods. A comment explaining the rationale would help future maintainers.


Testing

12. testNonRootCannotManageTokens: Likely compile error

getServer(serverIndex).getSecurity().createUser("testuser", "testpass");

The createUser method takes a JSONObject, not two strings. The correct call would be:

security.createUser(new JSONObject().put("name", "testuser").put("password", security.encodePassword("testpass")));

13. No MCP server integration tests

Integration tests cover the API token feature well, but there are no tests for MCP HTTP handler functionality (e.g., tools/call with query tool, permission enforcement via checkPermission, tools/list, initialize). Consider adding an MCPServerIT test class.


Summary

Well-architected code that takes security seriously. The main action items:

  1. Fix the testNonRootCannotManageTokens test compile error (item 12).
  2. Address QueryTool missing transaction wrapper (item 4).
  3. Fix PutUserHandler returning 403 for validation errors instead of 400 (item 6).
  4. Add limit clamping in QueryTool (item 2).
  5. Fix early return in MCPConfiguration.updateFrom() when allowedUsers is null (item 5).

The MCP server design is clean and the semantic operation type system is a solid foundation for fine-grained access control.

@claude

claude bot commented Feb 19, 2026

Code Review

This is a substantial and well-structured PR that adds three major features: an MCP server, API key authentication, and Studio UI for user/group management. The overall design is sound. Below are my observations.


Security

1. Brute-force lockout uses token hash prefix as the key (ServerSecurity.java)

The lockout key is hashToken(tokenValue).substring(0, 8) - 8 hex chars = 32 bits of distinguishing power. An attacker who can enumerate their own token's hash prefix can craft requests that share the same lockout key as a target token, causing a targeted denial-of-service that locks out legitimate users. Since the full hash is already computed in the same call, using the full hash (or at least 32+ chars) as the key has essentially zero extra cost and eliminates the collision risk entirely.

// Current (collision-prone)
final String attemptKey = ApiTokenConfiguration.hashToken(tokenValue).substring(0, 8);

// Suggested
final String attemptKey = ApiTokenConfiguration.hashToken(tokenValue);

2. Lockout window resets only when the lockout expires, not per request (ServerSecurity.java)

The entry[1] timestamp is only updated when the lockout period is already over, so after 5 failures the attacker can try once every TOKEN_LOCKOUT_MS (30 s) without ever re-triggering the full lockout. Consider resetting the timestamp on every new failure to implement a sliding window.

3. ApiTokenConfiguration.getToken() is not synchronized but calls tokens.remove() + save() (ApiTokenConfiguration.java)

getToken() checks expiry and calls tokens.remove(hash) and then save(), but is not synchronized. save() is synchronized, but the check-then-remove sequence between two concurrent callers can result in a double-save (harmless) or, more importantly, a TOCTOU window where the expiry check and removal are not atomic. Consider making getToken() synchronized like load() and save().

4. authenticateByApiToken creates a ServerSecurityUser with null password (ServerSecurity.java)

The synthetic user built for an API-token caller has no password. If any downstream code path calls user.getPassword() and tries to verify it (e.g., a future authentication path, audit log, or HA replication), it will silently operate on a null password. It would be safer to mark this more explicitly (e.g., a boolean flag) so callers can distinguish token-authenticated users from password-authenticated ones.

5. MCPConfiguration allows runtime toggling but no re-auth (MCPHttpHandler.java)

The isEnabled() check happens on every request, which is good for toggling. However, if MCP is re-enabled while a previous request is in-flight, there is no guard. This is likely acceptable given the synchronization on configuration loads, but worth a comment.


API Design

6. execute_command tool has no result-set limit (ExecuteCommandTool.java)

QueryTool correctly enforces a configurable limit parameter (default 1000). ExecuteCommandTool streams all results into a JSONArray records with no bound. A runaway INSERT...SELECT or a massive UPDATE returning all modified records could OOM the server. Applying a configurable cap here is advisable.

7. QueryTool passes Collections.emptyMap() as parameters to analyzed.execute() (QueryTool.java)

The tool receives args but there is no mechanism for the MCP caller to pass query parameters to the SQL/Cypher engine. The current implementation therefore silently ignores parameterized queries. Either document this limitation clearly in the tool description, or add a params field to the input schema.


Code Quality

8. SingletonSet vs Set.of(element) (SingletonSet.java)

The Javadoc claims this avoids the overhead of Set.of(), but the JDK's Set.of(element) creates an ImmutableCollections.Set1 which is also a single-field wrapper class with essentially the same heap footprint and O(1) operations. The custom implementation adds 93 lines of code and a bespoke iterator whose equals() implementation differs slightly from AbstractSet's (which delegates to containsAll). Unless there is a measured benchmark showing a meaningful difference, using Set.of() is simpler and keeps the codebase smaller.

9. Duplicate permission-checking overloads in ExecuteCommandTool (ExecuteCommandTool.java)

There are two checkPermission overloads and a getOperationTypes static helper. The checkPermission(Database, String, String, MCPConfiguration) overload is defined but never called from within the MCP tooling itself (the calling code uses the analyze+execute pattern directly). If this is intended as a public API, that is fine, but if it is dead code it should be removed.

10. MCPConfiguration.save() silently swallows IOExceptions on directory creation

if (!configDir.exists())
    configDir.mkdirs();  // return value not checked

If mkdirs() fails (e.g., permissions), the subsequent FileOutputStream will throw and the exception is swallowed with a log warning. Checking the return value and surfacing the error would make failures easier to diagnose.
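A sketch of the suggested check (ensureDirectory is an illustrative helper, not a method in the PR):

```java
import java.io.File;

public class MkdirsCheck {
  /** Creates the directory if needed, failing loudly when creation is impossible. */
  static File ensureDirectory(final File configDir) {
    // The extra isDirectory() re-check handles a concurrent creation race.
    if (!configDir.exists() && !configDir.mkdirs() && !configDir.isDirectory())
      throw new IllegalStateException("Unable to create configuration directory " + configDir);
    return configDir;
  }

  public static void main(String[] args) {
    final File tmp = ensureDirectory(new File(System.getProperty("java.io.tmpdir"), "mcp-demo-dir"));
    assert tmp.isDirectory();
    tmp.delete(); // clean up the demo directory
    System.out.println("directory ensured");
  }
}
```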

11. Wildcard imports in ApiTokenConfiguration.java

The file uses import java.io.*, import java.nio.file.*, etc. The project style (per CLAUDE.md) is to import specific classes. This file is inconsistent with the rest of the codebase.

12. AnalyzedQuery.execute() default returns null (QueryEngine.java)

The contract documented in the Javadoc says "Returns null if direct execution is not supported". Returning null as a sentinel forces every call site to null-check, which is error-prone. A default no-op implementation that delegates back to the normal execution path, or an Optional, would be cleaner.
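An Optional-based contract could look like this sketch (the interface shown is illustrative, not the actual QueryEngine API):

```java
import java.util.Optional;

public class AnalyzedQueryDemo {
  interface AnalyzedQuery {
    /** Empty when direct execution is not supported by this engine. */
    default Optional<String> execute() {
      return Optional.empty();
    }
  }

  public static void main(String[] args) {
    final AnalyzedQuery unsupported = new AnalyzedQuery() {};
    final AnalyzedQuery supported = new AnalyzedQuery() {
      @Override
      public Optional<String> execute() {
        return Optional.of("result");
      }
    };
    // The type system forces callers to handle the empty case explicitly:
    assert unsupported.execute().isEmpty();
    assert supported.execute().orElse("fallback").equals("result");
    System.out.println("no null sentinel needed");
  }
}
```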


Test Coverage

13. No test for the brute-force lockout

ApiTokenAuthenticationIT covers the happy path and basic negative cases, but there is no test exercising the lockout (5+ consecutive failures within 30 s).

14. No test for token expiry during an active request

The expiry cleanup in getToken() is tested indirectly, but there is no test for the concurrent case where a token expires between the authentication step and the actual query execution.

15. MCP permission tests are unit tests only (MCPPermissionsTest.java)

MCPPermissionsTest tests permissions in isolation using mocked objects, which is good. There should also be an integration test that starts an actual server, enables MCP, and verifies that a restricted-permission token cannot issue a forbidden operation through the HTTP endpoint.


Minor Nits

  • The isApiToken() check is already a static method in the same class as the private constant TOKEN_PREFIX = "at-", which is good; also verify the prefix is not guessable and cannot collide with session tokens (AU-). The current prefixes are distinct, which is correct.
  • In MCPHttpHandler.handleToolsCall, the log message uses Level.INFO for every tool call including errors. Security-sensitive errors (permission denied) should arguably log at Level.WARNING to make them visible in default log configurations without requiring INFO-level logging.
  • BoltNetworkExecutor log level changes from INFO to FINE when debug is true: this is correct for debug-mode messages. However, the non-debug logs for 'negotiated version' and 'WebSocket upgrade completed' are now wrapped in if (debug) and therefore completely suppressed. If those events are useful for normal-operation observability, they should remain unconditional at a lower level such as FINE, or be gated on a configurable log level rather than the debug flag.

Overall this is a high-quality addition. The semantic operation-type classification via OperationType / getOperationTypes() is a particularly solid design choice over text-pattern matching. The main issues to address before merge are the brute-force lockout collision risk (#1) and the missing result-set limit on execute_command (#6).

@claude

claude bot commented Feb 19, 2026

Code Review: MCP Server + API Keys + User/Groups on Studio

This is a substantial PR with three main areas of change: MCP server implementation, API token authentication, and user/group management APIs. Overall the code is well-structured and thought-out. Below are observations across quality, security, correctness, and test coverage.


MCP Server

Good:

  • Disabled by default with an opt-in enabled flag - correct conservative default for a new attack surface.
  • Auth check happens before the isEnabled() check in MCPHttpHandler.execute(), correctly avoiding server state leakage to unauthenticated callers.
  • Semantic parser-based permission checking via OperationType avoids the brittle text-matching approach that would have been easy to bypass. Good design choice.
  • formatArgs() sanitizes log output against log injection (newline/tab replacement) - nice defensive touch.

Issues / Suggestions:

  1. MCPHttpHandler relies on a null-user guard - the parent AbstractServerHttpHandler only sets user if credentials are provided; a request with no Authorization header reaches execute() with user == null. The handler guards against this with if (user == null) return jsonRpcError(..., 401), which is correct, but it relies on the guard being in place. Consider enforcing auth at the framework level (e.g., by overriding an isAuthenticationRequired() method) so MCP endpoints cannot accidentally become open if the null check is ever missed.

  2. MCPConfigHandler silently treats any non-GET as a config update - the handler dispatches on Methods.GET but otherwise falls through to a POST/update for all other methods. A DELETE or PUT request would silently be treated as a config update. Consider returning 405 for unsupported methods.

  3. MCP routes are added outside the main router block - they are registered as exact paths directly on the routing handler, bypassing the pathPrefix("/api/v1", ...) grouping. Functionally fine, but worth a comment or unification for readability.

  4. QueryTool default limit - already defined as DEFAULT_LIMIT = 1000, good. Consider surfacing this cap in the MCP tool description so AI clients know it applies.

  5. ExecuteCommandTool wraps all commands in database.transaction() - correct for writes, but READ commands that somehow reach execute_command will also be wrapped in an explicit transaction, generating unnecessary WAL overhead. Minor.

  6. Double parsing for non-SQL engines - when analyzed.execute() returns null, the code falls back to database.command() / database.query() (correct). This parses the command twice for Gremlin, GraphQL, etc. The design is intentional and documented, but a brief comment at the call sites in QueryTool and ExecuteCommandTool would help future readers.


API Token Authentication

Good:

  • Tokens stored as SHA-256 hashes, never plaintext - the comment explaining why salting is not needed (256-bit entropy) is appreciated.
  • File permissions set to 600 on POSIX systems.
  • Brute-force protection via failure counter + lockout period.
  • Backward-compatibility migration from old plaintext token format.
  • The tokenSuffix (last 4 chars) aids identification without exposing the full token - sensible UX decision.

Issues / Suggestions:

  1. Minor race condition in authenticateByApiToken - the brute-force check (read) and the subsequent getToken() call are not atomic. The tokenFailures.compute() call is atomic, but the initial read-and-check at the top of the method is not. Low risk in practice, but the entire check-and-gate logic could be made atomic with a single compute call.

  2. cleanupExpired() in ApiTokenConfiguration is never called - it is a public method with no scheduled trigger. Either wire it up to ServerSecurity's existing cleanup timer or remove the method.

  3. listTokens() returns references to live JSONObject entries from the ConcurrentHashMap. GetApiTokensHandler copies fields individually so it is safe there, but the method contract allows accidental mutation. Consider returning deep copies.

  4. Token name uniqueness is not enforced - createToken() allows multiple tokens with the same name. Only the hash is the unique key. This is confusing from a UX standpoint; consider validating uniqueness at creation time.
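The atomic check-and-gate suggested in point 1 can be sketched with a single compute() call; the Failure record, MAX_FAILURES, and LOCKOUT_MS below are hypothetical stand-ins for ServerSecurity's actual fields:

```java
import java.util.concurrent.ConcurrentHashMap;

public class LockoutSketch {
  // Hypothetical failure record: consecutive-failure count plus last-failure time.
  record Failure(int count, long lastMs) {}

  static final int MAX_FAILURES = 5;
  static final long LOCKOUT_MS = 30_000;

  private final ConcurrentHashMap<String, Failure> failures = new ConcurrentHashMap<>();

  /** Records the attempt and reports lockout state in one atomic compute() call. */
  public boolean lockedOut(final String tokenHash, final boolean authFailed) {
    final long now = System.currentTimeMillis();
    final Failure f = failures.compute(tokenHash, (k, prev) -> {
      if (prev != null && prev.count() >= MAX_FAILURES && now - prev.lastMs() < LOCKOUT_MS)
        return prev;  // locked out: keep state unchanged
      if (!authFailed)
        return null;  // success clears the counter (mapping removed)
      return new Failure(prev == null ? 1 : prev.count() + 1, now);
    });
    return f != null && f.count() >= MAX_FAILURES && now - f.lastMs() < LOCKOUT_MS;
  }
}
```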


User / Group Management APIs

Good:

  • checkRootUser is called consistently on all admin endpoints.
  • Password length validation (8-256 chars) in both PostUserHandler and PutUserHandler.
  • Password hashes are never returned in GetUsersHandler.
  • Protection against deleting the root user and the admin group from the wildcard database.

Issues / Suggestions:

  1. Inconsistent synchronization - PutUserHandler uses synchronized(security) but PostUserHandler does not. Both mutate the user store. Synchronizing inside ServerSecurity methods (rather than at the handler level) would be more consistent and prevent races.

  2. User name validation missing - PostUserHandler checks name.isBlank() but does not validate the format. A user name like apitoken:foo conflicts with the synthetic user names used for API token auth (prefixed with apitoken:). Explicitly rejecting the apitoken: prefix would prevent this collision.

  3. User name in DELETE query parameter - the name appears in server access logs. Acceptable for admin-only endpoints, but worth documenting.


OperationType / Query Classification

Good:

  • The default fallback of {CREATE, UPDATE, DELETE} for unknown operations errs on the side of caution - correct for a security boundary.
  • All query engines updated consistently.

Issues / Suggestions:

  1. MongoQueryEngine uses string-contains matching - a command like {"find": "users", "filter": {"deleteMany": 1}} would incorrectly match as DELETE because "DELETEMANY" appears in the filter value. This is the exact class of bypass the semantic analysis for SQL/Cypher was designed to prevent. The MongoDB wire protocol module has a proper command parser; using it here would be more reliable.

  2. GraphQLQueryEngine.detectGraphQLOperationTypes falls back to READ on parse error - a malformed GraphQL mutation is classified as READ and would pass QueryTool's isIdempotent() check (it would then fail at execution time). Low risk in practice, but the intentional behavior should be documented.
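The false positive described in point 1 can be reproduced with the naive check (assumed shape of the string-contains matching per the review, not the verified MongoQueryEngine code):

```java
class ContainsBypassDemo {
  // Assumed shape of the string-contains classification described in the
  // review: any occurrence of "DELETE" anywhere in the command matches,
  // even inside a filter value.
  static boolean naiveIsDelete(final String command) {
    return command.toUpperCase().contains("DELETE");
  }
}
```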


SingletonSet

  1. SingletonSet reinvents Collections.singleton() - Java has had Collections.singleton() since 1.2 and Set.of(element) since Java 9, both returning immutable single-element sets with comparable overhead. The claimed advantage in the Javadoc is not meaningful in practice. Consider removing this class and using Set.of() directly, which would also allow removing CollectionUtils.singletonSet().
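The two JDK factories in question, for comparison (a minimal sketch):

```java
import java.util.Collections;
import java.util.Set;

class SingleElementSets {
  // Both factories return immutable single-element sets that are equal to
  // each other, making a custom SingletonSet unnecessary.
  static Set<String> viaCollections() { return Collections.singleton("CREATE"); } // since Java 1.2
  static Set<String> viaSetOf()       { return Set.of("CREATE"); }                // since Java 9
}
```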

Bolt Log Level Changes

The INFO to FINE changes for debug-gated messages are correct.

One note: one previously unconditional INFO log (successful BOLT version negotiation) was moved behind the if (debug) guard. This removes a production-visible connection log that might be useful for monitoring. Whether this is intentional is worth confirming.


Test Coverage

Overall coverage is good. A few gaps:

  • No test for the MongoDB operation type detection bypass (point 14).
  • No test for MCP behavior when allowReads = false through the HTTP endpoint.
  • No test for the cleanupExpired() method.
  • MCPPermissionsTest and the semantic operation type tests are thorough and well-designed.

Summary

The MCP server design is solid, the security model is well thought-out, and the test coverage is substantial. Main areas for attention before merge:

  1. MongoDB operation type detection (point 14) - string-contains is a security-relevant bypass risk for MCP permission checks.
  2. cleanupExpired() never called (point 8) - wire it up or remove it.
  3. User name validation (point 12) - prevent apitoken: prefix collision.
  4. SingletonSet redundancy (point 16) - minor code simplification opportunity.

@codacy-production

codacy-production bot commented Feb 19, 2026

Coverage summary from Codacy

See diff coverage on Codacy

Coverage variation: -9.78%    Diff coverage: 83.63%

Coverage variation details:
                                   Coverable lines   Covered lines    Coverage
Common ancestor commit (de9ea0c)   86427             62351            72.14%
Head commit (6ac9b91)              117179 (+30752)   73075 (+10724)   62.36% (-9.78%)

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details:
                       Coverable lines   Covered lines   Diff coverage
Pull request (#3481)   1228              1027            83.63%

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%


@claude

claude bot commented Feb 19, 2026

Code Review

This is a substantial PR adding three major features: an MCP server, API key authentication, and user/group management in Studio. The overall approach is well-structured and security-conscious. Here are observations by category.


Security

Positive observations:

  • API tokens are stored as SHA-256 hashes only; plaintext is returned once on creation and never persisted — correct approach.
  • Brute-force protection on token authentication via failure counting is a good addition.
  • Deletion endpoint rejects plaintext tokens in query parameters (preventing log/URL history leakage).
  • File permissions on token storage are set to 600 on POSIX systems.
  • MCP is disabled by default; all write operations are opt-in individually.
  • Semantic, parser-based operation classification for permission checks — avoids regex/text-matching bypass vulnerabilities.

Concerns:

  1. MCP logs query content at INFO level — MCPHttpHandler.java logs every tools/call with arguments, including the full query text, at Level.INFO. In a production environment, query text may contain sensitive data (user IDs, tokens, PII). The existing sanitizeForLog() truncates long strings but does not redact sensitive content. Consider Level.FINE for argument-level details, similar to what was done for the Bolt driver in this same PR.

  2. authenticateByApiToken subtle ordering — The failure-count compute() lambda is atomic on the ConcurrentHashMap bucket, but apiTokenConfig.getToken(tokenValue) is called outside the lambda. A concurrent brute-force attempt and a legitimate request could both observe a non-null token, yet only one of them clears the counter. Low practical risk, but worth noting.

  3. MCPConfiguration.updateFrom() silent no-op for null allowedUsers — If the JSON payload contains "allowedUsers": null, getJSONArray("allowedUsers", null) returns null and the method returns early without giving feedback. A client trying to set an empty list by sending null would be silently ignored. Consider treating null as an empty list or returning an error.

  4. SingletonSet vs JDK alternatives — Collections.singleton() from the JDK provides an immutable single-element set that is already well-optimized and widely understood. The custom SingletonSet adds code surface and creates an anonymous Iterator object on each iteration. Consider using Collections.singleton() to reduce complexity.

  5. dropUser() is not synchronized — createUser() and updateUser() were made synchronized in this PR, but dropUser() is not. This could allow concurrent modifications to the users map. It should be synchronized for consistency.


Code Quality

  1. isDDL() behavioral change in OpenCypherQueryEngine — The original code treated both CypherDDLStatement and CypherAdminStatement as DDL. This PR separates them (admin is no longer DDL). This is semantically correct but is a behavioral change that may affect callers relying on the old contract. Worth documenting.

  2. PutUserHandler returns 403 for user-input validation errors — When the new password is too short/long, the handler throws ServerSecurityException. The base handler catches SecurityException (parent class) and maps to 403. But password-length validation is a client input error (400), not an authorization error (403). Other handlers in this PR correctly use return new ExecutionResponse(400, ...) for equivalent checks. This will confuse API clients.

  3. OpenCypherQueryEngine.getOperationTypes() fallback — When ops is empty after inspecting all clauses (non-read, non-DDL, non-admin, no specific mutations detected), it falls back to Set.of(CREATE, UPDATE, DELETE). A comment explaining why this is the safe default (rather than an error) would help future maintainers.

  4. Import ordering in HttpServer.java — The MCP handler imports are inserted between two groups of related imports. Minor style nit.


Performance

  1. ApiTokenConfiguration.createToken() is O(n) for name uniqueness — Scans all tokens linearly to find duplicate names. Fine for typical usage (tokens are few), but a secondary name-to-hash index would scale better.

  2. MCPHttpHandler.handleToolsList() rebuilds JSONObject tool definitions on every call — Tool definitions are static/immutable. Pre-building them as static constants would avoid repeated allocation on each tools/list request.

  3. Double-parsing for non-SQL engines in MCP tools — QueryTool and ExecuteCommandTool call analyzed.execute(), which returns null for Cypher/Gremlin/GraphQL, forcing a full re-parse. This is noted in comments. A future improvement would have those engines implement execute().


Test Coverage

The test coverage is good overall — ApiTokenAuthenticationIT, OperationTypeIntegrationTest, OperationTypeTest, and GroupManagementIT provide solid coverage.

Missing test scenarios:

  • No tests for MCPHttpHandler directly: tools/list, tools/call with each tool, the enabled=false rejection path, and allowedUsers whitelist enforcement.
  • No test for PutUserHandler returning the correct 400 status for short passwords (a test would have caught the bug noted above).
  • No test verifying dropUser() is thread-safe under the new security model.

Minor Nits

  • MoveVertexStatement.getOperationTypes() uses Set.of(...) while single-element callers use CollectionUtils.singletonSet() — minor inconsistency.
  • MCPConfiguration builds paths with string concatenation while ApiTokenConfiguration uses Paths.get() — inconsistent style.
  • Confirm ATTRIBUTIONS.md does not need updates if any transitive dependencies changed.

Overall this is solid, well-designed work. The security design around API tokens is particularly good. The main actionable items are: the PutUserHandler returning 403 instead of 400 for validation errors, the dropUser() synchronization gap, and INFO-level query content logging in MCP.

@claude

claude bot commented Feb 19, 2026

PR Review: MCP Server + API Keys + User/Groups on Studio

I've completed a thorough review of PR #3481. This is an excellent addition to ArcadeDB with solid implementation. Below are my findings organized by category.


✅ Overall Assessment

Strong Points:

  • Well-structured MCP server implementation with clean separation of concerns
  • Comprehensive test coverage (503 test lines across 3 test files)
  • Secure API token handling with SHA-256 hashing and proper migration path
  • Fine-grained permission model with semantic operation type detection
  • Zero external dependencies for MCP server as stated

Code Quality: High - follows project conventions and coding standards


🔒 Security Review

Excellent Security Practices:

  1. API Token Security (ApiTokenConfiguration.java):

    • ✅ SHA-256 hashing of tokens (no plaintext storage)
    • ✅ Secure token generation using SecureRandom (32 bytes = 256 bits entropy)
    • ✅ One-time display of plaintext token
    • ✅ POSIX file permissions (600) for token storage file
    • ✅ Automatic migration from plaintext to hashed tokens with cleanup
    • ✅ Expiration checking with automatic cleanup
  2. MCP Server Security (MCPHttpHandler.java):

    • ✅ Authentication required for all MCP endpoints
    • ✅ User authorization checking before exposing server state
    • ✅ Database-level access control enforced
    • ✅ Disabled by default
    • ✅ Fine-grained permission model (allowReads, allowInsert, allowUpdate, allowDelete, allowSchemaChange, allowAdmin)
    • ✅ Semantic query analysis to prevent write operations in read-only tools
  3. Query Operation Detection (OperationType.java + query engines):

    • ✅ New semantic analysis framework to classify query operations
    • ✅ Prevents privilege escalation (e.g., INSERT via query tool)
    • ✅ Comprehensive test coverage in OperationTypeTest.java and OperationTypeIntegrationTest.java

Security Considerations:

Minor Issue - Logging Sensitive Data:

  • Location: MCPHttpHandler.java:79, MCPHttpHandler.java:129
  • The MCP handler logs all request parameters at INFO level, which could expose sensitive query data
  • Recommendation: Consider using FINE/DEBUG level or sanitizing logged queries
  • Current mitigation: The formatArgs() method does truncate long strings to 100 chars and sanitizes newlines

Note on Token Hashing:

  • The comment at line 191-193 in ApiTokenConfiguration.java correctly explains why salting isn't needed for high-entropy random tokens
  • This is technically correct, but for defense-in-depth, consider adding a server-specific pepper stored outside the JSON file
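A minimal sketch of the suggested pepper: HMAC-SHA256 of the token keyed with a secret held outside the token file, e.g. in an environment variable. The helper below is hypothetical, not the current implementation:

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class PepperedHash {
  // HMAC-SHA256 keyed with a server-side pepper; an attacker who steals the
  // token file alone cannot verify candidate tokens without the pepper.
  static String hash(final String token, final byte[] pepper) throws Exception {
    final Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(pepper, "HmacSHA256"));
    final byte[] digest = mac.doFinal(token.getBytes(StandardCharsets.UTF_8));
    final StringBuilder hex = new StringBuilder(digest.length * 2);
    for (final byte b : digest)
      hex.append(String.format("%02x", b));
    return hex.toString();
  }
}
```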

🐛 Potential Bugs & Issues

1. Race Condition in Token Cleanup

File: ApiTokenConfiguration.java:63-90
Issue: During load(), expired tokens are removed and needsSave triggers a save, but new tokens might be created between load and save
Severity: Low
Recommendation: Consider using atomic file operations or ensuring cleanup happens in a single synchronized block

2. Missing Null Checks

File: MCPConfiguration.java:199-205
Issue: If usersArray is not null but contains null elements, getString(i) could return null and cause issues
Recommendation: Add null check when iterating:

for (int i = 0; i < usersArray.length(); i++) {
  String user = usersArray.getString(i);
  if (user != null) users.add(user);
}

3. Directory Creation Without Error Handling

File: MCPConfiguration.java:87-89
Issue: `mkdirs()` can fail silently
Recommendation:

if (!configDir.exists() && !configDir.mkdirs())
  throw new IOException("Failed to create config directory: " + configDir);

⚡ Performance Considerations

Positive:

  1. SingletonSet (SingletonSet.java) - Smart optimization to avoid Set.of() overhead for single-element sets used in operation type detection
  2. Query reuse in QueryTool (QueryTool.java:90-94) - Reuses already-parsed SQL statements when possible, avoiding double-parsing
  3. CopyOnWriteArrayList for MCP allowed users - Good choice for read-heavy configuration

Concerns:

1. Synchronization Overhead

  • MCPConfiguration uses synchronized on multiple methods (load, save, toJSON, updateFrom)
  • For read-heavy operations like isUserAllowed(), consider using ReadWriteLock or making fields volatile with lock-free reads
  • Current: Every config access locks
  • Better: Only write operations lock
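The suggested lock split can be sketched as follows; the field and method names mirror the review's description, not the actual MCPConfiguration class:

```java
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class McpConfigSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private Set<String> allowedUsers = Set.of();

  // Read-heavy path takes only the shared lock.
  boolean isUserAllowed(final String user) {
    lock.readLock().lock();
    try {
      return allowedUsers.isEmpty() || allowedUsers.contains(user); // empty = everyone
    } finally {
      lock.readLock().unlock();
    }
  }

  // Rare updates take the exclusive lock.
  void setAllowedUsers(final Set<String> users) {
    lock.writeLock().lock();
    try {
      allowedUsers = Set.copyOf(users);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```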

2. JSON Serialization in Request Path

  • MCPHttpHandler.java:147 calls toolResult.toString() on every query result
  • For large result sets, this could be expensive
  • Consider streaming JSON or limiting result size earlier

3. File I/O on Every Config Change

  • MCPConfiguration.save() writes to disk on every config update
  • For frequent updates, consider batching or async writes
  • Current implementation is safe but potentially slow under load

📊 Test Coverage

Excellent Coverage:

  • MCPServerPluginTest.java: 503 lines, 17 test methods covering:

    • ✅ Initialize/tools list
    • ✅ All 5 MCP tools (list_databases, get_schema, query, execute_command, server_status)
    • ✅ Permission enforcement
    • ✅ User authorization
    • ✅ Database-level access control
    • ✅ Disabled state handling
    • ✅ Query vs command separation
    • ✅ Limit enforcement
  • ApiTokenAuthenticationIT.java: 402 lines covering token lifecycle

  • MCPPermissionsTest.java: 183 lines for permission model

  • UserManagementIT.java, GroupManagementIT.java: 626 combined lines

Missing Test Cases:

  1. Token expiration edge cases - tokens expiring mid-request
  2. Concurrent token creation with same name
  3. MCP configuration file corruption recovery
  4. Studio JavaScript - No automated tests for the new studio-security.js (896 lines)

🎨 Code Quality & Best Practices

Following Project Standards: ✅

  • ✅ Imports without fully qualified names
  • ✅ Proper use of final keyword on parameters and variables
  • ✅ Single-line if statements without braces (project style)
  • ✅ Assertion style: assertThat(x).isTrue() (per CLAUDE.md)
  • ✅ Use of JSONObject and JSONArray from com.arcadedb.serializer.json
  • ✅ No author attributions in source code

Minor Style Issues:

1. Inconsistent Error Handling

// File: MCPConfiguration.java:80-83
catch (final IOException e) {
  LogManager.instance().log(this, Level.WARNING, "Error loading...", e.getMessage());
} catch (final Exception e) {
  LogManager.instance().log(this, Level.WARNING, "Corrupt MCP...", e.getMessage());
}

Recommendation: Pass exception object, not message:

LogManager.instance().log(this, Level.WARNING, "Error loading MCP configuration", e);

2. Magic Numbers

  • TOKEN_BYTES = 32, SUFFIX_LEN = 4, DEFAULT_LIMIT = 1000 - Well defined as constants ✅
  • MCP_PROTOCOL_VERSION = "2025-03-26" - Consider making this configurable or documented

3. String Concatenation in Loops

  • MCPHttpHandler.formatArgs() uses StringBuilder
  • Studio JS uses += in loops (acceptable for UI code)

🌐 Studio (Frontend) Review

New Files:

  • security.html (321 lines) - User/Group/API Token management UI
  • studio-security.js (896 lines) - Client-side logic

Observations:

Positive:

  • Clean Bootstrap 5 UI
  • Proper HTML escaping: escapeHtml() used consistently
  • AJAX error handling present

Concerns:

1. No XSS Protection Verification
I see escapeHtml() being called; verify the implementation is robust. Check studio-utils.js:

// A robust implementation escapes all five significant characters; '&' first:
function escapeHtml(text) {
  return String(text).replace(/&/g, '&amp;').replace(/</g, '&lt;')
    .replace(/>/g, '&gt;').replace(/"/g, '&quot;').replace(/'/g, '&#39;');
}

2. Password Validation
studio-security.js:99 mentions "Min 4 characters" but I don't see client-side validation. Server-side validation should enforce this.

3. Sensitive Data in Browser
API tokens are displayed one-time, which is good, but ensure they're not cached in browser history/autocomplete.


📝 Documentation & Comments

Good:

  • ✅ Clear JavaDoc on OperationType enum
  • ✅ SingletonSet has a helpful class comment
  • ✅ Comment explaining no-salt decision in ApiTokenConfiguration
  • ✅ JSON-RPC method handling is self-documenting via switch expression

Missing:

  • ❌ No README or documentation for MCP server setup/configuration
  • ❌ API token format/structure not documented (at- prefix, suffix purpose)
  • ❌ MCP permission model not documented in code
  • ❌ Studio features not documented in changelog/release notes

🔧 Architectural Concerns

1. Bolt Protocol Logging Change

Files: BoltNetworkExecutor.java
Change: Debug logs changed from INFO to FINE
Impact: This is a breaking change for users who rely on INFO-level Bolt logs
Recommendation: Consider making this configurable or documenting the change

2. Query Engine Interface Changes

Files: QueryEngine.java, all query engine implementations
Change: New analyze() method and AnalyzedQuery interface
Impact: Breaking change for custom query engines
Recommendation: Ensure this is documented in upgrade notes

3. ServerSecurity Growing Complexity

ServerSecurity.java now has 156 new lines for API token management. Consider extracting token management into a separate manager class for better SRP (Single Responsibility Principle).


🎯 Recommendations Summary

High Priority:

  1. Add null checks in MCPConfiguration user array iteration
  2. Improve directory creation error handling in MCPConfiguration
  3. Verify escapeHtml() implementation in Studio
  4. Add MCP server documentation (setup, configuration, permission model)

Medium Priority:

  1. Consider ReadWriteLock for MCPConfiguration to reduce lock contention
  2. Add token expiration integration tests
  3. Document Bolt logging level change (breaking change)
  4. Document QueryEngine API changes (breaking change for extensions)

Low Priority (Nice to Have):

  1. Consider adding server-side pepper to token hashing for defense-in-depth
  2. Add automated tests for Studio JavaScript
  3. Consider extracting API token management from ServerSecurity to separate class
  4. Add MCP request rate limiting to prevent abuse

✨ Conclusion

This is a high-quality PR that adds significant value to ArcadeDB. The implementation demonstrates:

  • Strong security awareness
  • Comprehensive testing
  • Clean code following project conventions
  • Thoughtful design (operation type detection, zero dependencies)

The issues identified are minor and mostly relate to defensive programming, documentation, and potential optimizations rather than critical bugs.

Recommendation: APPROVE with minor follow-up items

Great work on this feature! The MCP server integration and API token system will be valuable additions to ArcadeDB.


Review conducted by Claude Code following ArcadeDB's CLAUDE.md guidelines

@claude

claude bot commented Feb 19, 2026

Code Review for PR #3481: MCP Server + API Keys + User/Groups on Studio

This PR adds significant new functionality: an MCP (Model Context Protocol) server, API token authentication, and user/group management UI in Studio. Overall, the implementation is well-structured with good security practices and comprehensive test coverage. Below are my findings organized by category.

✅ Strengths

Security

  • Excellent token hashing implementation: SHA-256 hashing with 256-bit entropy tokens in ApiTokenConfiguration.java:194-202
  • Proper file permissions: API token file permissions set to owner-only (600) on POSIX systems in ApiTokenConfiguration.java:118-124
  • Rate limiting for tokens: Token failure tracking with lockout mechanism in ServerSecurity.java:74-76
  • Semantic permission checking: Using AST-based operation type detection instead of regex pattern matching in ExecuteCommandTool.java:104-117
  • Defense in depth: Multiple authorization layers (user auth, MCP config, database permissions)
  • Backward compatibility: Automatic migration from plaintext to hashed tokens in ApiTokenConfiguration.java:75-84

Code Quality

  • Strong test coverage: 6 new test files covering MCP functionality, API tokens, permissions, user/group management
  • Performance optimization: SingletonSet implementation avoids Set.of() overhead for single-element sets
  • Clean separation of concerns: MCP tools are properly separated into individual classes
  • Proper use of project conventions: Uses JSONObject.get*() with default values to reduce boilerplate
  • Good logging: Structured logging throughout with appropriate log levels

Architecture

  • MCP disabled by default: Security-conscious default configuration
  • Query reuse: Smart optimization to avoid double-parsing in QueryTool.java:96 and ExecuteCommandTool.java:88
  • Fine-grained permissions: Separate controls for reads, inserts, updates, deletes, schema changes, admin operations

🔍 Issues Found

High Priority

None identified - the code appears to be production-ready.

Medium Priority

  1. Potential DOS via unlimited query results (server/src/main/java/com/arcadedb/server/mcp/tools/ExecuteCommandTool.java:89-94)

    • The execute_command tool has no limit on result set size, unlike query which has a 1000-record default limit
    • Recommendation: Add a configurable limit (e.g., DEFAULT_LIMIT = 1000) and apply it to the result iteration
  2. Token cleanup could accumulate failures (server/src/main/java/com/arcadedb/server/security/ServerSecurity.java:142-152)

    • The tokenFailureCleanupTimer is only created if null, but stopService() cancels it without setting it to null
    • If loadUsers() is called multiple times (e.g., config reload), timers could accumulate
    • Recommendation: Set tokenFailureCleanupTimer = null after cancel() in stopService():178
  3. File path concatenation could be cleaner (server/src/main/java/com/arcadedb/server/mcp/MCPConfiguration.java:90,213)

    • Using File.separator concatenation instead of Path.of() or Paths.get()
    • Recommendation: Use Paths.get(rootPath, "config", "mcp-config.json") for better readability and cross-platform compatibility
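The cap suggested in point 1 could be a small generic helper that stops collecting once the limit is reached (a sketch, not the actual tool code):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class ResultCap {
  // Mirrors the query tool's existing cap; generic so it works with any
  // result iterator.
  static final int DEFAULT_LIMIT = 1000;

  static <T> List<T> collectUpTo(final Iterator<T> it, final int limit) {
    final List<T> out = new ArrayList<>();
    while (it.hasNext() && out.size() < limit)
      out.add(it.next());
    return out;
  }
}
```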

Low Priority

  1. OpenAPI spec generator appears incomplete (server/src/main/java/com/arcadedb/server/http/handler/OpenApiSpecGenerator.java)

    • Only shows first 100 lines in review, but the createPaths() method needs to populate actual endpoint definitions
    • Recommendation: Verify that endpoint discovery and schema generation are fully implemented
  2. Minor: Log format could be more structured (server/src/main/java/com/arcadedb/server/mcp/MCPHttpHandler.java:167-201)

    • The formatArgs() and formatResult() methods are well-done, but consider using structured logging (JSON) for better parsing by log aggregators
    • This is minor and current approach is acceptable for human readability
  3. Consider validating token name format (server/src/main/java/com/arcadedb/server/security/ApiTokenConfiguration.java:127)

    • No validation on token name characters (e.g., preventing special chars, length limits)
    • Recommendation: Add basic name validation to prevent injection or display issues

📋 Recommendations

General

  1. Documentation: Consider adding inline examples to the MCP tool descriptions for common use cases
  2. Monitoring: The MCP server should expose metrics (request count, error rate, latency) for production monitoring
  3. API token expiration: Consider adding a background job to automatically clean up expired tokens from the file (currently done lazily on load/access)

Testing

  1. The test coverage is excellent. Consider adding:
    • Concurrent access tests for MCPConfiguration and ApiTokenConfiguration to verify thread safety
    • Tests for token expiration edge cases (expiring during request execution)
    • Tests for MCP permission boundary cases (e.g., UPSERT requiring both CREATE and UPDATE permissions)

Performance

  1. Consider caching query analysis results in MCP tools if the same queries are frequently executed
  2. The CopyOnWriteArrayList for allowed users is appropriate for read-heavy workloads, but verify this matches the expected usage pattern

🎯 Compliance with CLAUDE.md

  • ✅ Code adheres to existing style (no unnecessary braces, final keywords used)
  • ✅ Uses project's JSON classes (com.arcadedb.serializer.json.JSONObject)
  • ✅ No new dependencies added (MCP implemented with zero dependencies as stated)
  • ✅ Comprehensive test coverage for all new features
  • ✅ No System.out usage except for legitimate console interaction (password prompts)
  • ✅ Security-conscious implementation throughout
  • ✅ Proper import usage (no fully qualified names in code)

📊 Summary

This is a high-quality PR that adds significant functionality with strong attention to security and code quality. The implementation demonstrates good architectural decisions and follows the project's conventions. The issues identified are minor and can be addressed in follow-up commits or accepted as-is given the low risk.

Recommendation: ✅ APPROVE with optional improvements suggested above.

The code is ready to merge. The suggested improvements are enhancements rather than blockers.


Review performed by Claude Code on 2026-02-19

@lvca lvca merged commit 90665be into main Feb 19, 2026
8 of 13 checks passed
@lvca lvca deleted the mcp-server branch February 19, 2026 22:19
@claude

claude bot commented Feb 19, 2026

PR Review: MCP Server + API Keys + User/Groups Management

I've completed a comprehensive review of this PR. Overall, this is excellent work with strong security practices, clean architecture, and comprehensive test coverage.


🎯 Summary

Overall Assessment: APPROVED ✅

This PR adds:

  • MCP (Model Context Protocol) server integration for AI-powered database operations
  • API token authentication system with secure token management
  • User and group management UI in Studio
  • OpenAPI specification generator
  • Operation type classification for semantic permission checking

The implementation demonstrates thoughtful engineering with multiple defense-in-depth security measures, efficient design patterns, and comprehensive testing.


🔒 Security Review

Excellent Security Practices

  1. API Token Security (ApiTokenConfiguration.java):

    • ✅ Hash-only storage (SHA-256) - plaintext never persisted
    • ✅ Secure random generation (256-bit entropy via SecureRandom)
    • ✅ One-time plaintext display (only during creation)
    • ✅ Automatic expiration handling
    • ✅ POSIX file permissions (mode 600)
    • ✅ Backward compatibility migration from plaintext to hashes
    • ✅ Deletion protection (rejects plaintext tokens in delete API)
  2. MCP Server Authorization (MCPHttpHandler.java):

    • ✅ Multi-layered checks: authentication → MCP enabled → user allowlist
    • ✅ Semantic query analysis (parser-based, not regex) prevents SQL injection bypass
    • ✅ Granular permissions (READ, INSERT, UPDATE, DELETE, SCHEMA, ADMIN)
    • ✅ Disabled by default (explicit opt-in required)
    • ✅ Root-only configuration access
  3. Brute Force Protection (ServerSecurity.java:572-600):

    • ✅ Rate limiting: 5 failed attempts → 30-second lockout
    • ✅ Hash-prefix tracking (first 8 chars)
    • ✅ Early lockout check before token lookup
    • ✅ Automatic reset on success
  4. Input Validation:

    • ✅ Token name, database, expiration validation
    • ✅ Password length enforcement (8-256 chars)
    • ✅ Username prefix protection (prevents "apitoken:" prefix)
    • ✅ Proper HTTP status codes (401, 403, 503)
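The generate-then-hash flow described above (256-bit SecureRandom token, SHA-256 digest persisted instead of the plaintext) can be sketched with standard JDK APIs; class and method names here are illustrative, not the actual ApiTokenConfiguration implementation:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.HexFormat;

// Sketch: the plaintext token is shown once to the caller; only the
// SHA-256 hex digest would ever be written to disk.
public class TokenSketch {
  public static String generateToken() {
    final byte[] raw = new byte[32];                    // 256 bits of entropy
    new SecureRandom().nextBytes(raw);
    return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
  }

  public static String sha256Hex(final String token) throws NoSuchAlgorithmException {
    final byte[] digest = MessageDigest.getInstance("SHA-256")
        .digest(token.getBytes(StandardCharsets.UTF_8));
    return HexFormat.of().formatHex(digest);
  }
}
```

On an incoming request, the presented token is hashed the same way and compared against the stored digest, so a leaked config file yields no usable credentials.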

⚠️ Minor Security Considerations

  1. MCP Logging Verbosity (MCPHttpHandler.java:135):

    • Full queries logged at INFO level
    • Consider moving to DEBUG to reduce log exposure
    • Current sanitization (truncation to 100 chars) is good but could cut context
  2. No Explicit CSRF Protection:

    • MCP uses JSON-RPC over POST with Bearer tokens
    • CSRF risk is minimal (requires custom headers)
    • ✅ Design is sound, consider documenting this decision
  3. Token Hash Suffix (ApiTokenConfiguration.java:137):

    • Last 4 characters stored for UX
    • ✅ Acceptable tradeoff with 256-bit random tokens

No critical vulnerabilities found.


💻 Code Quality

Excellent Patterns

  1. Consistent Error Handling:

    • Proper JSON-RPC error codes
    • SecurityException for permission denials
    • Meaningful error messages
  2. Thread Safety:

    • CopyOnWriteArrayList for MCP allowed users
    • ConcurrentHashMap for tokens
    • synchronized on config mutations
  3. Resource Management:

    • Try-with-resources for file I/O
    • ResultSet auto-closing
    • Proper transaction wrapping
  4. Code Reuse:

    • AbstractServerHttpHandler base class
    • Shared checkRootUser() method
    • Consistent JSON serialization

CLAUDE.md Compliance

  • ✅ No unnecessary dependencies
  • ✅ Uses project's JSONObject/JSONArray classes
  • ✅ Imports instead of fully qualified names
  • ✅ Single-statement if blocks without braces
  • ✅ final keyword on parameters/variables
  • ✅ No System.out debug statements
  • ✅ Tests use AssertJ syntax: assertThat(x).isTrue()
  • ✅ No Claude author attribution

🎨 Clean Architecture

  • Clear separation of concerns (config, handlers, tools, security)
  • Lightweight permission checks (O(1) hash lookups)
  • Query parsing optimization (single parse for permission + execution)
  • Lazy token cleanup (no periodic scanning overhead)
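The lazy-cleanup pattern noted above (expired entries purged on access instead of by a background scanner) can be illustrated as follows; the store and its fields are assumptions, not the real token store:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of lazy expiration: a lookup that finds an expired entry removes
// it on the spot, so no periodic scan is needed.
public class LazyTokenStore {
  private record Entry(String hash, long expiresAtMillis) {}

  private final Map<String, Entry> byHash = new ConcurrentHashMap<>();

  public void put(final String hash, final long expiresAtMillis) {
    byHash.put(hash, new Entry(hash, expiresAtMillis));
  }

  public boolean isValid(final String hash, final long nowMillis) {
    final Entry e = byHash.get(hash);
    if (e == null)
      return false;
    if (e.expiresAtMillis() <= nowMillis) {
      byHash.remove(hash);   // lazy cleanup on access
      return false;
    }
    return true;
  }
}
```

The tradeoff: entries that are never looked up again linger until the next load, which is why the earlier recommendation suggests an optional background cleanup job.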

⚡ Performance

Efficient Design

  1. Query Optimization (ExecuteCommandTool.java:80-95):

    • Single parse for both permission check and execution
    • Reuses parsed statement when possible
    • Avoids double parsing overhead
  2. Lightweight Auth:

    • O(1) hash lookups for tokens
    • O(n) user allowlist check (acceptable for small lists)
    • No database queries during auth
  3. Smart Caching:

    • Expired tokens removed during access (lazy)
    • Scheduled cleanup for brute-force tracking
    • No periodic file scanning

💡 Potential Optimizations (Low Priority)

  1. MCP allowedUsers (MCPConfiguration.java:172):

    • Linear search could be O(1) with Set
    • Current implementation fine for small user lists
    • Only optimize if lists grow large
  2. Result Set Limits:

    • Default 1000 records is reasonable
    • No memory cap on individual record size
    • Consider adding max result size in bytes

🧪 Test Coverage

Comprehensive Testing (1088+ lines of tests)

MCP Server Tests (MCPServerPluginTest.java - 503 lines):

  • ✅ Protocol methods (initialize, tools/list, ping, notifications)
  • ✅ All 5 tools tested (list_databases, get_schema, query, execute_command, server_status)
  • ✅ Security (unauthorized users, database authorization)
  • ✅ Permission enforcement (read-only rejection, insert denial)
  • ✅ Error cases (disabled MCP, unknown tool, method not found)
  • ✅ Edge cases (query limits, notification 204 response)

MCP Permissions (MCPPermissionsTest.java - 183 lines):

  • ✅ Granular permission checks (INSERT, UPDATE, DELETE, SCHEMA, ADMIN, READ)
  • ✅ Combined operations (UPSERT requires CREATE + UPDATE)
  • ✅ User allowlist (wildcard support, null handling)
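The combined-operation rule those tests exercise (UPSERT requires both CREATE and UPDATE) reduces to a set-containment check. The enum values and method names below are modelled on the OperationType idea but are assumptions, not the actual ArcadeDB API:

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch: an UPSERT may either insert or modify, so the permission check
// must require both operation types, not just one.
public class PermissionSketch {
  enum Op { READ, CREATE, UPDATE, DELETE, SCHEMA, ADMIN }

  static Set<Op> requiredForUpsert() {
    return EnumSet.of(Op.CREATE, Op.UPDATE);
  }

  static boolean allowed(final Set<Op> granted, final Set<Op> required) {
    return granted.containsAll(required);
  }
}
```

A user holding only CREATE (or only UPDATE) is rejected, which is exactly the boundary case the permission tests verify.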

API Token Tests (ApiTokenConfigurationTest.java + ApiTokenAuthenticationIT.java - 576 lines):

  • ✅ CRUD operations
  • ✅ Security (hash-only storage, expiration, plaintext rejection)
  • ✅ Persistence (save/load, expired cleanup)
  • ✅ Validation (duplicate names, invalid tokens)
  • ✅ End-to-end auth flow
  • ✅ Permission enforcement

User/Group Management (UserManagementIT.java + GroupManagementIT.java - 683 lines):

  • ✅ CRUD operations for users and groups
  • ✅ Authorization checks
  • ✅ Validation and error handling

📋 Test Coverage Gaps (Medium Priority)

  1. Missing Brute-Force Tests:

    • No test for MAX_TOKEN_FAILURES lockout
    • No test for lockout expiration (30 seconds)
    • Recommend adding ApiTokenBruteForceIT.java
  2. Missing MCP Edge Cases:

    • No test for very large query results (>1000 records)
    • No test for malformed JSON-RPC requests
    • No test for concurrent MCP config updates
  3. Missing Error Scenarios:

    • No test for filesystem failures (permissions, disk full)
    • No test for corrupted config files
    • Recovery is handled but not tested

📝 Specific File Analysis

New Utilities

OperationType.java ✅:

  • Clean enum for semantic operation classification
  • Well-documented purpose (MCP permission checking)
  • Used throughout query engines for permission enforcement

SingletonSet.java ✅:

  • Lightweight immutable set implementation
  • Avoids Set.of() overhead
  • Proper equals/hashCode/iterator implementation
  • Good performance optimization for single-element sets
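A minimal immutable single-element set in the spirit described above might look like this; it is a sketch of the idea, not the actual SingletonSet.java:

```java
import java.util.AbstractSet;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Sketch: AbstractSet supplies equals/hashCode, so only size, contains and
// a one-shot iterator are needed.
public final class SingletonSet<E> extends AbstractSet<E> {
  private final E element;

  public SingletonSet(final E element) { this.element = element; }

  @Override public int size() { return 1; }

  @Override public boolean contains(final Object o) {
    return element == null ? o == null : element.equals(o);
  }

  @Override public Iterator<E> iterator() {
    return new Iterator<E>() {
      private boolean done = false;
      @Override public boolean hasNext() { return !done; }
      @Override public E next() {
        if (done) throw new NoSuchElementException();
        done = true;
        return element;
      }
    };
  }
}
```

With a single final field and no backing array, allocations stay minimal on the hot paths where a one-element set is the common case.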

OpenApiSpecGenerator.java ✅:

  • Comprehensive OpenAPI 3.0 spec generation
  • Documents all new endpoints (users, groups, tokens)
  • Proper security schemes (basicAuth, bearerAuth)
  • Good API documentation foundation

Studio Frontend (studio-security.js)

✅ Strengths:

  • Consistent jQuery/Bootstrap 5 usage (per CLAUDE.md)
  • Proper HTML escaping (escapeHtml() function)
  • DataTables integration for user/group/token tables
  • Clear separation of concerns (users, groups, tokens)

⚠️ Minor Issues:

  • Password placeholder says "Min 4 characters" but backend requires 8 (line 98)
    • Should be: .attr("placeholder", "Min 8 characters")
  • Consider adding client-side validation to match backend rules

🎯 Recommendations

🔴 High Priority

None - No critical issues found

🟡 Medium Priority

  1. Add Brute-Force Tests:

    • Test lockout behavior after 5 failures
    • Test lockout expiration
    • Test failure count reset on success
  2. Add Filesystem Failure Tests:

    • Test config save failure handling
    • Test corrupted JSON recovery
    • Test concurrent config access
  3. Fix Studio Password Placeholder:

    • Change "Min 4 characters" to "Min 8 characters"
    • Match backend validation (PostUserHandler.java:48-52)
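The lockout policy the brute-force tests would cover (5 failures lock a hash prefix for 30 seconds, success resets the counter) can be sketched as below; the constants and names are assumptions drawn from the review notes, not the ServerSecurity implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: failures are counted per hash prefix; hitting the threshold sets
// a lockout deadline, and a successful auth clears the state entirely.
public class LockoutTracker {
  static final int MAX_FAILURES = 5;
  static final long LOCKOUT_MILLIS = 30_000;

  private record State(int failures, long lockedUntil) {}

  private final Map<String, State> states = new ConcurrentHashMap<>();

  public boolean isLocked(final String prefix, final long now) {
    final State s = states.get(prefix);
    return s != null && now < s.lockedUntil();
  }

  public void onFailure(final String prefix, final long now) {
    states.merge(prefix, new State(1, 0),
        (old, ignored) -> {
          final int f = old.failures() + 1;
          return new State(f, f >= MAX_FAILURES ? now + LOCKOUT_MILLIS : 0);
        });
  }

  public void onSuccess(final String prefix) {
    states.remove(prefix);
  }
}
```

Tests over a model like this can pass the clock in explicitly, so lockout expiration is verified without sleeping for 30 real seconds.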

🟢 Low Priority

  1. Reduce MCP Logging Verbosity:

    • Move full query logging from INFO to DEBUG
    • Keep summary logging at INFO level
  2. Document CSRF Design:

    • Add comment explaining JSON-RPC + Bearer = inherent CSRF resistance
  3. Consider HMAC for Tokens:

    • Current SHA-256 is secure with high entropy
    • HMAC-SHA256 with server secret adds defense-in-depth
    • Current approach is acceptable
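The HMAC variant suggested above keys the digest with a server-side secret, so a leaked hash file cannot be verified offline without also stealing the key. A sketch with the standard `javax.crypto` API (key handling here is purely illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: same verification flow as plain SHA-256, but the digest depends
// on a server secret in addition to the token itself.
public class HmacTokenHash {
  public static String hmacSha256Hex(final byte[] serverSecret, final String token) throws Exception {
    final Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(serverSecret, "HmacSHA256"));
    return HexFormat.of().formatHex(mac.doFinal(token.getBytes(StandardCharsets.UTF_8)));
  }
}
```

As the review notes, with 256-bit random tokens the plain hash is already infeasible to brute-force; HMAC is defense-in-depth, not a fix for a weakness.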

🎉 Highlights

Particularly Well Done

  1. Security-First Design: Multiple defense layers, proper cryptography, no shortcuts
  2. Semantic Query Analysis: Parser-based operation type detection (not regex) is brilliant
  3. Test Coverage: 1088+ lines of tests covering happy paths, edge cases, and security
  4. Code Organization: Clear separation, consistent patterns, reusable components
  5. Performance: Efficient query reuse, lazy cleanup, minimal overhead
  6. CLAUDE.md Adherence: Perfect compliance with project standards

Innovation

The OperationType enum + semantic query analysis is a sophisticated approach to permission checking that prevents common bypass techniques. This is significantly more robust than regex-based approaches.


✅ Final Verdict

APPROVED FOR MERGE

This PR represents high-quality work that enhances ArcadeDB's security and functionality without introducing vulnerabilities or technical debt. The code is production-ready.

Recommendations:

  • Merge as-is
  • Address medium-priority test gaps in follow-up PRs
  • Fix Studio password placeholder in quick follow-up

Files Reviewed: 61 files changed (+6865 / -118 lines)

  • Core: MCP server, API tokens, operation types
  • Server: HTTP handlers, security integration
  • Studio: User/group/token management UI
  • Tests: Comprehensive coverage across all new features

Great work! 🚀

robfrank pushed a commit that referenced this pull request Mar 5, 2026
* feat: new MCP server

Issue #3460

* chore: moved MCP component from separate module into server

* feat: created new api token to use for authentication and MCP server

* feat: added users and groups UX in studio

* chore: renamed some labels in studio

* fix: all concerns from 2 reviews

* fix: all findings from latest review

* fix: keep token short version for reference

* fix: improved discovery of operations types by query languages

This allows to be more precise in MCP profiling

* fix: implemented recommendation from latest review

* fix: addressed latest review

* Update the PR from the latest review

* fix: implemented suggestions from latest review

* fix: latest issues found from the review

* chore: bolt protocol now logs only if bolt debug is true in global cfg

* fix: implemented latest recommendation from review

* fix: mcp server last fixed before merging

* MCP minor fixes

(cherry picked from commit 90665be)
