Merged
Conversation
vk-playground pushed a commit to vk-playground/mcp-context-forge that referenced this pull request on Sep 14, 2025

Docker Compose scripts and docs

Signed-off-by: Mihai Criveti <crivetimihai@gmail.com>
vk-playground pushed a commit to vk-playground/mcp-context-forge that referenced this pull request on Sep 14, 2025

Docker Compose scripts and docs
vk-playground pushed a commit to vk-playground/mcp-context-forge that referenced this pull request on Sep 16, 2025

Docker Compose scripts and docs

Signed-off-by: Vicky Kuo <vicky.kuo@ibm.com>
yiannis2804 added a commit to yiannis2804/mcp-context-forge that referenced this pull request on Feb 19, 2026

…BM#8)

Address code review suggestion from @jonpspri:

Problem: The _check_resource_access logic (owner, team, visibility) is well thought out but never executed, because no callsite passes resource_type to the decorator. It could be forgotten.

Solution:
- Added a comprehensive NOTE explaining this is Phase 2+ scaffolding
- Documents why it is currently not called (no resource_type parameter)
- Provides a Phase 2 activation plan with 4 clear steps
- Includes example future usage
- Prevents the implementation from being forgotten

Current state:
- resource is always None in check_access()
- _check_resource_access never executes
- Permission checks are permission-level only

Future Phase 2:
- Decorators will pass resource_type
- resource_id is extracted from function params
- Fine-grained per-resource access control
- Checks ownership, team membership, visibility

Related: PR IBM#2682, Phase 1 Code Review Item IBM#8

Signed-off-by: yiannis2804 <yiannis2804@gmail.com>
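The "example future usage" that NOTE mentions suggests a decorator-driven flow. Below is a minimal, self-contained sketch of that Phase 2 shape; the require_permission name, the RESOURCES table, and the plain-dict user and resource models are hypothetical illustrations, not the actual mcp-context-forge API:

```python
import functools
from typing import Any, Callable, Optional

# Hypothetical in-memory stand-ins for the real user and resource models.
RESOURCES = {
    ("tool", "t1"): {"owner": "alice", "team": "core", "visibility": "private"},
}

def _check_resource_access(user: dict, resource: dict) -> bool:
    """Owner, team, and visibility checks: the logic the NOTE scaffolds."""
    if resource["visibility"] == "public":
        return True
    if resource["owner"] == user["id"]:
        return True
    return resource["team"] in user["teams"]

def require_permission(permission: str, resource_type: Optional[str] = None,
                       resource_id_param: str = "resource_id") -> Callable:
    """Phase 1 callsites pass no resource_type, so only the permission check
    runs; a Phase 2 callsite would pass resource_type to activate the
    per-resource branch below."""
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(user: dict, *args: Any, **kwargs: Any) -> Any:
            if permission not in user["permissions"]:
                raise PermissionError(f"missing permission: {permission}")
            if resource_type is not None:  # the branch that is dormant today
                resource = RESOURCES.get((resource_type, kwargs.get(resource_id_param)))
                if resource is not None and not _check_resource_access(user, resource):
                    raise PermissionError("resource access denied")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

# Phase 2 style usage: passing resource_type activates the fine-grained check.
@require_permission("tools.read", resource_type="tool")
def get_tool(user: dict, *, resource_id: str) -> str:
    return f"tool {resource_id}"

user = {"id": "bob", "teams": ["core"], "permissions": ["tools.read"]}
print(get_tool(user, resource_id="t1"))  # allowed via team membership
```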
crivetimihai pushed a commit that referenced this pull request on Feb 24, 2026

Address code review suggestion from @jonpspri:

Problem: The _check_resource_access logic (owner, team, visibility) is well thought out but never executed, because no callsite passes resource_type to the decorator. It could be forgotten.

Solution:
- Added a comprehensive NOTE explaining this is Phase 2+ scaffolding
- Documents why it is currently not called (no resource_type parameter)
- Provides a Phase 2 activation plan with 4 clear steps
- Includes example future usage
- Prevents the implementation from being forgotten

Current state:
- resource is always None in check_access()
- _check_resource_access never executes
- Permission checks are permission-level only

Future Phase 2:
- Decorators will pass resource_type
- resource_id is extracted from function params
- Fine-grained per-resource access control
- Checks ownership, team membership, visibility

Related: PR #2682, Phase 1 Code Review Item #8

Signed-off-by: yiannis2804 <yiannis2804@gmail.com>
aidbutlr added a commit to aidbutlr/mcp-context-forge that referenced this pull request on Mar 3, 2026

CYFR-380 Resync project 20260114
gandhipratik203 added a commit that referenced this pull request on Mar 19, 2026

Closes #3740

## What changed

### Plugin fixes (plugins/rate_limiter/rate_limiter.py)

- Config validation at __init__: _validate_config() parses all rate strings at startup — bad config raises immediately, not mid-request
- Graceful degradation: both hooks are wrapped in try/except; unexpected errors are logged and the request is allowed through (permissive)
- prompt_pre_fetch now enforces by_tool limits using prompt_id as the key
- MemoryBackend: asyncio.Lock makes counter increments atomic
- MemoryBackend: a background TTL sweep evicts expired windows (0.5 s interval)
- RedisBackend: atomic INCR+EXPIRE via a Lua script; shared state across all gateway instances; native TTL expiry; falls back to memory on error

### Test additions (tests/unit/.../test_rate_limiter.py)

- Gap tests: 4 xfail -> pass (shared state, eviction, prompt by_tool, graceful degradation); 1 xfail remains (fixed window burst, deferred)
- Edge case tests: malformed/unsupported config raises at init (not at request time); runtime errors degrade gracefully via mock injection
- The Redis backend test uses an injected FakeRedis — no live server required

### Config changes

- plugins/config.yaml: RateLimiterPlugin enabled in enforce mode
- tests/performance/plugins/config.yaml: RateLimiterPlugin set to permissive for inclusion in cProfile benchmark runs

Signed-off-by: Pratik Gandhi <gandhipratik203@gmail.com>
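The RedisBackend bullet above describes the standard Redis fixed-window pattern: a Lua script that makes INCR and EXPIRE atomic. Here is a minimal sketch of that pattern using redis-py's asyncio client; the rl:user:... key shape, the 30/m limit, and the 60 s window are illustrative values, not the plugin's actual configuration:

```python
import asyncio
import redis.asyncio as redis

# INCR and EXPIRE must run atomically: if a process died between two separate
# calls, the counter could be left with no TTL and never reset.
LUA_INCR_EXPIRE = """
local current = redis.call('INCR', KEYS[1])
if current == 1 then
    redis.call('EXPIRE', KEYS[1], ARGV[1])
end
return current
"""

async def allow(client: redis.Redis, key: str, limit: int, window_s: int = 60) -> bool:
    """Fixed-window check: True while this window's counter stays within limit."""
    count = await client.eval(LUA_INCR_EXPIRE, 1, key, window_s)
    return int(count) <= limit

async def main() -> None:
    client = redis.Redis()
    # A per-user key: every gateway instance increments the same shared counter.
    print(await allow(client, "rl:user:alice:60", limit=30))

asyncio.run(main())
```

The EXPIRE fires only when INCR returns 1 (the first hit of a window), so the key resets itself via the TTL Redis manages natively, which is the "native TTL expiry" the bullet mentions.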
crivetimihai pushed a commit that referenced this pull request on Mar 21, 2026

Closes #3740

## What changed

### Plugin fixes (plugins/rate_limiter/rate_limiter.py)

- Config validation at __init__: _validate_config() parses all rate strings at startup — bad config raises immediately, not mid-request
- Graceful degradation: both hooks are wrapped in try/except; unexpected errors are logged and the request is allowed through (permissive)
- prompt_pre_fetch now enforces by_tool limits using prompt_id as the key
- MemoryBackend: asyncio.Lock makes counter increments atomic
- MemoryBackend: a background TTL sweep evicts expired windows (0.5 s interval)
- RedisBackend: atomic INCR+EXPIRE via a Lua script; shared state across all gateway instances; native TTL expiry; falls back to memory on error

### Test additions (tests/unit/.../test_rate_limiter.py)

- Gap tests: 4 xfail -> pass (shared state, eviction, prompt by_tool, graceful degradation); 1 xfail remains (fixed window burst, deferred)
- Edge case tests: malformed/unsupported config raises at init (not at request time); runtime errors degrade gracefully via mock injection
- The Redis backend test uses an injected FakeRedis — no live server required

### Config changes

- plugins/config.yaml: RateLimiterPlugin enabled in enforce mode
- tests/performance/plugins/config.yaml: RateLimiterPlugin set to permissive for inclusion in cProfile benchmark runs

Signed-off-by: Pratik Gandhi <gandhipratik203@gmail.com>
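The two MemoryBackend bullets (asyncio.Lock for atomic increments, a 0.5 s background TTL sweep) correspond to a structure roughly like this sketch; the class layout and names are assumptions, not the plugin's real code:

```python
import asyncio
import time

class MemoryBackend:
    """Per-process fixed-window counters; illustrative, not the plugin's code."""

    def __init__(self, sweep_interval: float = 0.5) -> None:
        # key -> (window_expires_at, count)
        self._windows: dict[str, tuple[float, int]] = {}
        self._lock = asyncio.Lock()  # serializes read-modify-write increments
        # Assumes construction inside a running event loop.
        self._sweeper = asyncio.ensure_future(self._sweep(sweep_interval))

    async def allow(self, key: str, limit: int, window_s: int = 60) -> bool:
        async with self._lock:
            now = time.monotonic()
            expires_at, count = self._windows.get(key, (now + window_s, 0))
            if expires_at <= now:  # window rolled over, start a fresh count
                expires_at, count = now + window_s, 0
            self._windows[key] = (expires_at, count + 1)
            return count + 1 <= limit

    async def _sweep(self, interval: float) -> None:
        # Background TTL sweep: evict expired windows so idle keys
        # don't accumulate for the life of the process.
        while True:
            await asyncio.sleep(interval)
            async with self._lock:
                now = time.monotonic()
                self._windows = {k: v for k, v in self._windows.items() if v[0] > now}
```

Without the lock, two concurrent hooks could both read the same count and write back the same incremented value, undercounting by one; the lock makes the read-modify-write a single critical section.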
crivetimihai added a commit that referenced this pull request on Mar 21, 2026

…dation (#3750)

* test(rate-limiter): harden rate limiter plugin — gaps #1-#8

Closes #3740

## What changed

### Plugin fixes (plugins/rate_limiter/rate_limiter.py)

- Config validation at __init__: _validate_config() parses all rate strings at startup — bad config raises immediately, not mid-request
- Graceful degradation: both hooks are wrapped in try/except; unexpected errors are logged and the request is allowed through (permissive)
- prompt_pre_fetch now enforces by_tool limits using prompt_id as the key
- MemoryBackend: asyncio.Lock makes counter increments atomic
- MemoryBackend: a background TTL sweep evicts expired windows (0.5 s interval)
- RedisBackend: atomic INCR+EXPIRE via a Lua script; shared state across all gateway instances; native TTL expiry; falls back to memory on error

### Test additions (tests/unit/.../test_rate_limiter.py)

- Gap tests: 4 xfail -> pass (shared state, eviction, prompt by_tool, graceful degradation); 1 xfail remains (fixed window burst, deferred)
- Edge case tests: malformed/unsupported config raises at init (not at request time); runtime errors degrade gracefully via mock injection
- The Redis backend test uses an injected FakeRedis — no live server required

### Config changes

- plugins/config.yaml: RateLimiterPlugin enabled in enforce mode
- tests/performance/plugins/config.yaml: RateLimiterPlugin set to permissive for inclusion in cProfile benchmark runs

Signed-off-by: Pratik Gandhi <gandhipratik203@gmail.com>

* chore(config): enable Redis backend for RateLimiterPlugin in plugins/config.yaml

Switch the default stack config from in-memory to Redis-backed rate limiting. This ensures the 30/m per-user limit is enforced as a true shared limit across all gateway instances rather than 30/m per process.

Validated via Redis MONITOR: all 3 gateway instances atomically increment the same rl:user:<id>:60 counter via the Lua INCR+EXPIRE script.

Signed-off-by: Pratik Gandhi <gandhipratik203@gmail.com>

* test(rate-limiter): add benchmark-rate-limiter load test for multi-instance correctness

Adds locustfile_rate_limiter.py and a make benchmark-rate-limiter target to demonstrate the multi-instance rate limit enforcement gap and its fix. The test sends 1 req/s (60 req/min = 2x the 30/m limit) through 3 gateway instances. With a memory backend, each instance only sees ~20 req/min and never fires the limiter (~0% failures). With the Redis backend, the shared counter reaches 30/min and blocks ~50% of requests — clearly showing the fix works across instances.

Expected results:
- Memory backend: ~0% blocked (each instance sees 20 req/min < the 30/m limit)
- Redis backend: ~50% blocked (shared counter: 60 req/min > the 30/m limit)

Signed-off-by: Pratik Gandhi <gandhipratik203@gmail.com>

* test(rate-limiter): add hardening tests, bypass resistance, PII fix, and updated docs

- Add 22 new unit tests (70 passed total, 4 xfailed):
  - Permissive vs enforce mode through PluginExecutor
  - Redis fallback: memory takeover when Redis is down, limit still enforced, no-fallback graceful degradation
  - Cross-tenant isolation: independent counters, no counter bleed between tenants
  - Header accuracy: Retry-After bounds, X-RateLimit-Reset future/consistency, Remaining decrement
  - Bypass resistance: None/whitespace user identity, tool name case sensitivity and whitespace (documented as xfail gaps)
  - PII: violation description must not contain user or tenant identifiers
- Fix PII leak in the violation description: remove user/tenant from the description string in both prompt_pre_fetch and tool_pre_invoke — identifiers appeared in log output via the permissive-mode manager warning and the enforce-mode PluginViolationError message
- Rewrite plugins/rate_limiter/README.md: it described the old pre-fix implementation (in-memory only, no Redis, Redis as a TODO); it now documents both backends, a full config reference, response headers, examples, and an accurate limitations table
- Update the plugin-manifest.yaml description to reflect Redis backend support

Closes #3740

Signed-off-by: Pratik Gandhi <gandhipratik203@gmail.com>

* fix(rate-limiter): review fixes — dead code, test correctness, config validation

- Remove the unused module-level _allow() function (dead code — the plugin uses self._rate_backend.allow() directly)
- Fix the test_graceful_degradation test: it patched _allow(), which is never called by the plugin; it now patches backend.allow() via patch.object so the try/except error path is actually exercised
- Add a prompt_pre_fetch graceful degradation test (previously only tested for tool_pre_invoke)
- Fix the inconsistent by_tool lookup in tool_pre_invoke: remove the unnecessary hasattr(__contains__) guard, aligning with the prompt_pre_fetch pattern
- Add backend validation to _validate_config(): a typo like 'reddis' now raises ValueError at startup instead of silently falling back to memory
- Add a test for malformed by_tool rate string validation
- Add a test for invalid backend name validation
- Change the default config mode from enforce to permissive for safety (consistent with all other security plugins in the default config)

Signed-off-by: Mihai Criveti <crivetimihai@gmail.com>

---------

Signed-off-by: Pratik Gandhi <gandhipratik203@gmail.com>
Signed-off-by: Mihai Criveti <crivetimihai@gmail.com>
Co-authored-by: Mihai Criveti <crivetimihai@gmail.com>
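The test_graceful_degradation review fix in the last commit (patch the backend.allow() the hook actually calls, not a dead module-level _allow()) comes down to a test shaped roughly like this; the plugin/tool_payload/context fixtures and the continue_processing result flag are assumptions about the plugin test harness, not verified API:

```python
from unittest.mock import patch

import pytest

@pytest.mark.asyncio
async def test_graceful_degradation(plugin, tool_payload, context):
    # Patch the method the hook actually calls; patching an unused
    # module-level _allow() would leave the error path untested.
    with patch.object(plugin._rate_backend, "allow", side_effect=RuntimeError("boom")):
        result = await plugin.tool_pre_invoke(tool_payload, context)
    # An unexpected backend error is logged and the request is let through.
    assert result.continue_processing
```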
ecthelion77 pushed a commit to forterro/mcp-context-forge that referenced this pull request on Mar 30, 2026

Merged IBM/mcp-context-forge upstream/main into feature/upstream-sync-march30.

Key upstream additions:
- Security: Server ID validation in Streamable HTTP, secrets detection, content size limits, service account support
- SSO: Stale team membership revocation, groups claim extraction for generic OIDC providers, sync_roles flag
- RBAC: Session-token team narrowing Layer 2, permission-based menu hiding
- Observability: Fix duplicate DB session middleware, metrics returning 0 after cleanup, metrics_cache leak fix
- Tools: Configurable forbidden description patterns (replaces our IBM#18)
- Plugins: retry-with-backoff, PII filter Rust hardening, URL reputation
- Infra: Remove MySQL/MongoDB support (PostgreSQL only), rate limiter fix
- A2A: Cascade agent state changes to MCP tools
- UI: Persist admin table filters, team member modal fixes

Conflicts resolved (10 files):
- admin.py: kept upstream team preservation on edit + our OIDC sync params
- schemas.py: kept upstream configurable patterns + our meta-server fields
- gateway_service.py: kept upstream visibility propagation fix
- oauth_manager.py: kept our expires_in=None fix (patch IBM#20)
- sso_service.py: adopted upstream _build_normalized_user_info refactor
- team_management_service.py: kept our PermissionError + upstream UNSET/skip_limits
- streamablehttp_transport.py: adapted meta-server loading to use the validated server_id
- sso_bootstrap.py: combined upstream scope preservation + our smart team_mapping merge
- test_sso_*.py: adopted upstream test refactoring

Patches now obsolete (superseded by upstream):
- IBM#1 (SSO email_verified) — upstream b668d2b
- IBM#8 (teams=None) — upstream b2b6c12
- IBM#18 (tool description sanitize) — upstream bd803e5 (configurable patterns)