[FEATURE][SECURITY]: Content Size and Type Security Limits for Resources and Prompts
Goal
Implement configurable content validation for resources and prompts with size limits, content type restrictions, and security validation when content is submitted via the API. This prevents abuse, DoS attacks, and injection of malicious content.
Why Now?
Security Compliance: OWASP recommends validating all input sizes and types
DoS Prevention: Large content submissions can exhaust memory and storage
Data Quality: Type restrictions ensure only valid content is stored
Injection Protection: Pattern detection blocks XSS and other injection attacks
📖 User Stories
US-1: Security - Enforce Content Size Limits
As a security engineer
I want content size limits enforced on all submissions
So that large uploads cannot exhaust system resources
Acceptance Criteria:
Scenario: Resource content size limit
Given CONTENT_MAX_RESOURCE_SIZE=102400 (100KB)
When a user creates a resource with 200KB content
Then the request should be rejected with 413 Payload Too Large
And the response should indicate the size limit
Scenario: Prompt template size limit
Given CONTENT_MAX_PROMPT_SIZE=10240 (10KB)
When a user creates a prompt with 20KB template
Then the request should be rejected with 413 Payload Too Large
Scenario: Content within limits
Given CONTENT_MAX_RESOURCE_SIZE=102400
When a user creates a resource with 50KB content
Then the resource should be created successfully
Technical Requirements:
Check content size before processing
Return 413 with clear error message
Log oversized submission attempts
Apply limits consistently across create and update operations
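The size check above can be sketched as follows. This is a minimal illustration, not the project's implementation: the setting name `CONTENT_MAX_RESOURCE_SIZE` comes from this issue, while `ContentSizeError` and `check_content_size` are hypothetical names; the API layer would map the exception to HTTP 413.

```python
import logging

logger = logging.getLogger(__name__)

CONTENT_MAX_RESOURCE_SIZE = 102400  # 100KB, per the scenario above

class ContentSizeError(Exception):
    """Maps to HTTP 413 Payload Too Large in the API layer."""
    def __init__(self, size: int, limit: int):
        super().__init__(f"Content size {size} bytes exceeds limit of {limit} bytes")
        self.size = size
        self.limit = limit

def check_content_size(content: str, limit: int = CONTENT_MAX_RESOURCE_SIZE) -> None:
    # Measure encoded bytes, not characters, so multi-byte UTF-8 counts correctly.
    size = len(content.encode("utf-8"))
    if size > limit:
        # Log the oversized attempt before rejecting, per the requirements above.
        logger.warning("Oversized submission rejected: %d > %d bytes", size, limit)
        raise ContentSizeError(size, limit)
```

Running the same check in both create and update handlers keeps the limits consistent across operations.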
US-2: Security - Restrict Content Types
As a security engineer
I want only allowed content types accepted for resources
So that dangerous file types cannot be uploaded
Acceptance Criteria:
Scenario: Allowed MIME type
Given CONTENT_ALLOWED_RESOURCE_MIMETYPES=text/plain,text/markdown
When a user creates a resource with mimeType "text/plain"
Then the resource should be created successfully
Scenario: Disallowed MIME type
Given CONTENT_ALLOWED_RESOURCE_MIMETYPES=text/plain,text/markdown
When a user creates a resource with mimeType "text/html"
Then the request should be rejected with 400 Bad Request
And the response should list allowed types
Scenario: MIME type detection from URI
Given a resource with URI "notes.md"
When no explicit mimeType is provided
Then the system should detect "text/markdown"
And validate against allowed types
Technical Requirements:
Configure allowed MIME types per entity type
Auto-detect MIME type from URI if not provided
Validate declared type matches detected type
Default to safe types (text/plain, text/markdown)
US-3: Security - Block Malicious Patterns
As a security engineer
I want content scanned for malicious patterns
So that XSS and injection attacks are blocked
Acceptance Criteria:
Scenario: Block script injection
Given content containing "<script>alert(1)</script>"
When a user attempts to create a resource
Then the request should be rejected with 400 Bad Request
And a security warning should be logged
Scenario: Block JavaScript URLs
Given content containing "javascript:void(0)"
When a user attempts to create a resource
Then the request should be rejected
Scenario: Block event handlers
Given content containing "onclick=alert(1)"
When a user attempts to create a resource
Then the request should be rejected
Scenario: Clean content passes
Given content with "This is normal markdown content"
When a user creates a resource
Then the resource should be created successfully
Technical Requirements:
Configure blocked patterns list
Scan content case-insensitively
Log security violations with user context
Allow admins to customize blocked patterns
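A case-insensitive pattern scan covering the three scenarios above might look like this. The default pattern list is illustrative; per the requirements, a real deployment would load and extend it from the `CONTENT_BLOCKED_PATTERNS` setting.

```python
import re

# Illustrative defaults; admins would customize these via CONTENT_BLOCKED_PATTERNS.
BLOCKED_PATTERNS = [
    re.compile(r"<script\b", re.IGNORECASE),   # script injection
    re.compile(r"javascript:", re.IGNORECASE), # javascript: URLs
    re.compile(r"\bon\w+\s*=", re.IGNORECASE), # inline event handlers: onclick=, onerror=, ...
]

def scan_for_malicious_patterns(content: str) -> list[str]:
    """Return the patterns that matched; an empty list means the content is clean."""
    return [p.pattern for p in BLOCKED_PATTERNS if p.search(content)]
```

The caller would reject the request with 400 and log the matched patterns together with the user context when the returned list is non-empty.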
US-4: Security - Validate Prompt Templates
As a security engineer
I want prompt templates validated for safe syntax
So that template injection attacks are prevented
Acceptance Criteria:
Scenario: Balanced template braces
Given a prompt template with balanced braces "{{user}}"
When a user creates the prompt
Then the prompt should be created successfully
Scenario: Unbalanced braces rejected
Given a prompt template "Hello {{user" (missing closing braces)
When a user attempts to create the prompt
Then the request should be rejected with validation error
Scenario: Dangerous template patterns blocked
Given a prompt template "{{__import__('os')}}"
When a user attempts to create the prompt
Then the request should be rejected as security violation
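The two template checks above (brace balancing and dangerous-pattern detection) can be sketched as below. The function name and the specific dangerous patterns are assumptions for illustration, not the project's definitive rule set.

```python
import re

# Illustrative dangerous-pattern list; a real rule set would be broader.
DANGEROUS_TEMPLATE_PATTERNS = [
    re.compile(r"__\w+__"),         # Python dunders such as __import__
    re.compile(r"\{\{.*?\(.*?\)"),  # function calls inside placeholders
]

def validate_prompt_template(template: str) -> None:
    # Balanced-brace check: every opening {{ needs a matching }}.
    if template.count("{{") != template.count("}}"):
        raise ValueError("Unbalanced template braces")
    for pattern in DANGEROUS_TEMPLATE_PATTERNS:
        if pattern.search(template):
            raise ValueError(f"Dangerous template pattern: {pattern.pattern}")
```

A simple count comparison catches the "Hello {{user" case; nesting-aware parsing would need a small stack-based scanner instead.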
US-5: Operator - Rate Limit Content Creation
As a platform operator
I want content creation rate limited
So that rapid creation cannot overwhelm the system
Acceptance Criteria:
Scenario: Rate limit content creation
Given CONTENT_CREATE_RATE_LIMIT_PER_MINUTE=3
And user "alice" has created 3 resources in the last minute
When "alice" attempts to create another resource
Then the request should be rejected with 429 Too Many Requests
And Retry-After header should be included
Scenario: Concurrent operation limit
Given CONTENT_MAX_CONCURRENT_OPERATIONS=2
And user "alice" has 2 create operations in progress
When "alice" starts a third create operation
Then the operation should be queued or rejected
Technical Requirements:
Track creation rate per user
Implement sliding window rate limiting
Track concurrent operations
Return appropriate retry timing
🏗 Architecture
Content Validation Flow
flowchart TD
A[Content Submission] --> B{Size Check}
B -->|Exceeds Limit| C[413 Payload Too Large]
B -->|Within Limit| D{MIME Type Check}
D -->|Disallowed| E[400 Bad Request]
D -->|Allowed| F{Pattern Scan}
F -->|Malicious| G[400 Security Violation]
F -->|Clean| H{Encoding Check}
H -->|Invalid UTF-8| I[400 Invalid Encoding]
H -->|Valid| J[Store Content]
Content Security Service
classDiagram
    class ContentSecurityService {
        -dangerous_patterns: List~Pattern~
        +validate_resource_content(content, uri, mime_type)
        +validate_prompt_content(template, name)
        -detect_mime_type(uri, content)
        -validate_content(content, mime_type, context)
        -validate_prompt_template_syntax(template, name)
    }
    class ContentRateLimiter {
        -operation_counts: Dict
        -concurrent_operations: Dict
        +check_rate_limit(user, operation)
        +record_operation(user, operation)
        +end_operation(user)
    }
📋 Implementation Tasks
Phase 1: Configuration
CONTENT_MAX_RESOURCE_SIZE setting
CONTENT_MAX_PROMPT_SIZE setting
CONTENT_ALLOWED_RESOURCE_MIMETYPES setting
CONTENT_BLOCKED_PATTERNS setting
.env.example
Phase 2: Content Security Service
mcpgateway/services/content_security.py
Phase 3: Rate Limiting
Phase 4: Service Integration
Phase 5: Monitoring
Phase 6: Testing
⚙️ Configuration Example
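A possible `.env` fragment using the setting names from this issue. The values match the scenarios above; the comma-separated formats for types and patterns are assumptions about how the settings would be parsed.

```env
# Size limits (bytes)
CONTENT_MAX_RESOURCE_SIZE=102400
CONTENT_MAX_PROMPT_SIZE=10240

# Allowed MIME types (comma-separated)
CONTENT_ALLOWED_RESOURCE_MIMETYPES=text/plain,text/markdown

# Blocked content patterns (comma-separated, scanned case-insensitively)
CONTENT_BLOCKED_PATTERNS=<script,javascript:,onclick=

# Rate limiting
CONTENT_CREATE_RATE_LIMIT_PER_MINUTE=3
CONTENT_MAX_CONCURRENT_OPERATIONS=2
```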
✅ Success Criteria
🏁 Definition of Done
make verify
🔗 Related Issues