added github workflows; added dockerfile; added docker-compose; #18

Closed
skluthe wants to merge 3 commits into calcom:main from skluthe:feature/docker

Conversation


@skluthe skluthe commented Apr 15, 2021

Currently having an issue running the container:

calendoso_1  | internal/modules/cjs/loader.js:883
calendoso_1  |   throw err;
calendoso_1  |   ^
calendoso_1  | 
calendoso_1  | Error: Cannot find module '/root/.npm/_npx/240/lib/node_modules/prisma/scripts/preinstall-entry.js'
calendoso_1  |     at Function.Module._resolveFilename (internal/modules/cjs/loader.js:880:15)
calendoso_1  |     at Function.Module._load (internal/modules/cjs/loader.js:725:27)
calendoso_1  |     at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
calendoso_1  |     at internal/main/run_main_module.js:17:47 {
calendoso_1  |   code: 'MODULE_NOT_FOUND',
calendoso_1  |   requireStack: []
calendoso_1  | }
calendoso_1  | npm ERR! code ELIFECYCLE
calendoso_1  | npm ERR! errno 1
calendoso_1  | npm ERR! prisma@2.21.2 preinstall: `node scripts/preinstall-entry.js`
calendoso_1  | npm ERR! Exit status 1
calendoso_1  | npm ERR! 
calendoso_1  | npm ERR! Failed at the prisma@2.21.2 preinstall script.
calendoso_1  | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
calendoso_1  | 
calendoso_1  | npm ERR! A complete log of this run can be found in:
calendoso_1  | npm ERR!     /root/.npm/_logs/2021-04-15T15_49_11_857Z-debug.log
calendoso_1  | Install for [ 'prisma@latest' ] failed with code 1

But other than that it should be pretty good. The workflows require you to add a PAT with the packages scope from GitHub, plus a Docker Hub username and password, to this repo's secrets.

It'll automatically build multi-arch images (arm, x86, etc.) and push them to ghcr.io and Docker Hub. I also added a docker-compose.yml for testing.

If you have any questions, a solution for the error above, or any requests to change this PR, feel free to let me know. Most of it was copied from https://github.com/selfhostedpro/yacht, so if you're curious about how any of this works, that repo may give you some insight.


vercel bot commented Apr 15, 2021

@skluthe is attempting to deploy a commit to the calendso Team on Vercel.

A member of the Team first needs to authorize it.

@skluthe skluthe mentioned this pull request Apr 15, 2021
Author

skluthe commented Apr 15, 2021

Didn't see #7, sorry about that! There may be some useful stuff from this PR to copy over.

Co-authored-by: 50bbx <leonardostenico@gmail.com>
@fabioelia

@skluthe ran into a similar issue; it looks like the postinstall script (https://github.com/calendso/calendso/blob/main/package.json#L9) isn't running. I had to run `yarn postinstall` manually before the prisma command in order for it to work.

@pumfleet
Contributor

Going to close this PR as we're not officially supporting Docker. We may reconsider supporting it in future. Thanks very much for your help though!

@pumfleet pumfleet closed this Jun 22, 2021
Member

PeerRich commented Jul 3, 2021

Docker support now exists here: https://github.com/calendso/calendso-docker, powered by the community. It is not officially maintained by the calendso core team, yet.

KATT added a commit that referenced this pull request Sep 3, 2021
Arjun3492 referenced this pull request in onehashai/Cal-ID Sep 25, 2025
pedroccastro added a commit that referenced this pull request Feb 21, 2026
* feat: add abuse scoring schema, types, and data model

Foundational layer for the abuse scoring pipeline:

- Extend WatchlistType enum (SPAM_KEYWORD, SUSPICIOUS_DOMAIN, EMAIL_PATTERN, REDIRECT_DOMAIN)
- Add dedicated User columns: abuseScore (Int) + abuseData (Json?)
- Use dedicated columns instead of metadata JSONB to prevent API data leakage
- Define abuseMetadataSchema owned by the feature (zero @calcom/prisma dependency)
- Derive types via z.infer with Zod safeParse validation
- Register abuse-scoring feature flag in AppFlags

* fix: tighten abuseMetadata schema and seed feature flag

- Fix zod import to named import (codebase convention)
- Add .datetime() validation on at/lockedAt/lastAnalyzedAt fields
- Document why REDIRECT_DOMAIN is excluded from flags enum
- Seed abuse-scoring feature flag (enabled: false)

* feat: add scoring engine, alert system, and DTOs

- Pure calculateScore() function with seven signal types (signup flags, malicious redirects, forward params, content spam, high/elevated booking velocity, self-booking pattern). Signal caps prevent stacking
- Score range 0-100 with thresholds at 50 (alert) and 80 (lock)
- Slack alerter with admin user link and DI interface for testability
- Zod-validated DTOs at repository-service boundary
- 20 tests, zero mocks for scoring (pure function), full coverage for alert payloads and error resilience
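The capped, thresholded scoring described above could be sketched roughly as follows. This is a hypothetical illustration, not the real implementation: the signal names and weights are invented; only the per-signal caps and the 50 (alert) / 80 (lock) thresholds come from the commit message.

```typescript
// Sketch of a pure scoring function with per-signal caps and
// alert/lock thresholds. Signal names and weights are illustrative.
type AbuseSignal = { type: string; weight: number };

// Cap each signal type so repeated hits cannot stack past a ceiling.
const SIGNAL_CAPS: Record<string, number> = {
  signupFlag: 20,
  maliciousRedirect: 40,
  contentSpam: 30,
  bookingVelocity: 30,
};

const ALERT_THRESHOLD = 50; // from the commit message
const LOCK_THRESHOLD = 80; // from the commit message

function calculateScore(signals: AbuseSignal[]): {
  score: number;
  shouldAlert: boolean;
  shouldLock: boolean;
} {
  // Sum weights per signal type, clamping each type at its cap.
  const perType = new Map<string, number>();
  for (const s of signals) {
    const cap = SIGNAL_CAPS[s.type] ?? 0;
    perType.set(s.type, Math.min((perType.get(s.type) ?? 0) + s.weight, cap));
  }
  // Clamp the total into the 0-100 range.
  let score = 0;
  for (const v of perType.values()) score += v;
  score = Math.max(0, Math.min(100, score));
  return {
    score,
    shouldAlert: score >= ALERT_THRESHOLD,
    shouldLock: score >= LOCK_THRESHOLD,
  };
}
```

Because the function is pure, it can be tested with no mocks, which matches the "zero mocks for scoring" note above.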

* fix: tighten scoring types and parse abuseData at DTO boundary

- Add const arrays as source of truth for flag/signal/lockedReason types
- Use z.enum() in abuseMetadataSchema from const arrays
- Parse abuseData at DTO boundary with .catch(null) (fail-open)
- Remove getAbuseMetadata indirection from scoring — DTO handles it
- Type ABUSE_SIGNAL_CAPS with AbuseSignalType
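The "const arrays as source of truth" pattern and the fail-open parse at the DTO boundary might look something like the sketch below. The flag names are illustrative, and the hand-rolled `parseAbuseData` stands in for the Zod `safeParse`/`.catch(null)` pipeline described above.

```typescript
// Const array as the single source of truth; the union type is derived
// from it, so adding a flag in one place updates both runtime and types.
const ABUSE_FLAGS = ["SPAM_KEYWORD", "SUSPICIOUS_DOMAIN", "EMAIL_PATTERN"] as const;
type AbuseFlag = (typeof ABUSE_FLAGS)[number];

interface AbuseMetadata {
  flags: AbuseFlag[];
}

// Fail-open: an unreadable abuseData value yields null rather than
// throwing, mirroring the `.catch(null)` behavior in the commit message.
function parseAbuseData(raw: unknown): AbuseMetadata | null {
  try {
    if (typeof raw !== "object" || raw === null) return null;
    const flags = (raw as { flags?: unknown }).flags;
    if (!Array.isArray(flags)) return null;
    if (!flags.every((f) => (ABUSE_FLAGS as readonly string[]).includes(f))) return null;
    return { flags: flags as AbuseFlag[] };
  } catch {
    return null;
  }
}
```

A caps table typed against the derived union (`Record<AbuseFlag, number>`) would then refuse to compile if a flag name drifted out of sync with the array.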

* refactor: inject webhook URL into SlackAbuseAlerter via constructor

* feat: add abuse scoring repository, service, and DI wiring

- AbuseScoringRepository with Zod-validated DTOs at output boundary
- AbuseScoringService with checkSignup (Gate 1), shouldCheckEventType (Gate 2), checkBookingVelocity (Gate 3), shouldMonitor, and analyzeUser
- DI layer: tokens, modules (repository, service, alerter), container
- Service uses Pick<> on concrete types for type-safe dependency injection
- analyzeUser consolidates watchlist queries and wraps in try/catch (fail-open)
- 50 unit tests covering service methods and edge cases
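The `Pick<>`-on-concrete-types injection style mentioned above can be sketched as follows; the class and method names here are hypothetical stand-ins, not the actual repository/service API.

```typescript
// The service depends only on the repository methods it actually calls,
// so tests can pass a small object literal instead of a full mock.
class AbuseScoringRepository {
  async getAbuseScore(userId: number): Promise<number> {
    return 0; // a real implementation would query the database
  }
  async saveAbuseScore(userId: number, score: number): Promise<void> {}
}

type ScoringDeps = Pick<AbuseScoringRepository, "getAbuseScore" | "saveAbuseScore">;

class AbuseScoringService {
  constructor(private readonly repo: ScoringDeps) {}

  async bumpScore(userId: number, delta: number): Promise<number> {
    const current = await this.repo.getAbuseScore(userId);
    const next = Math.min(100, current + delta); // keep inside the 0-100 range
    await this.repo.saveAbuseScore(userId, next);
    return next;
  }
}
```

In a unit test the service could be constructed with `{ getAbuseScore: async () => 40, saveAbuseScore: async () => {} }`, which is presumably why 50 tests were feasible without a mocking framework.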

* refactor: remove getAbuseMetadata, resolve webhook URL via DI module

* feat: add abuse scoring watchlist types to blocklist create modal

Replace ToggleGroup with grouped Select dropdown supporting all
WatchlistType values. Two groups: Blocking (Email, Domain, Username)
and Abuse Scoring (Spam Keyword, Suspicious Domain, Email Pattern,
Redirect Domain). Per-type validation and placeholders included.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: display all watchlist types in blocklist table

Update type badge column to handle all 7 WatchlistType values
instead of only EMAIL and DOMAIN.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add abuse scoring tasker pattern and DI wiring

Adds the async/sync dispatch infrastructure for abuse scoring analysis
following the established Tasker pattern (calendars, webhooks, bookings).

- AbuseScoringTasker extends base Tasker with analyzeUser method
- Trigger.dev schemaTask with queue (concurrency 5), retry (3x), OOM bump
- Sync fallback via AbuseScoringTaskService delegating to AbuseScoringService
- Full DI wiring: tokens, modules, containers in di/tasker/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add abuse scoring event hooks for Gates 2 and 3

Adds hook functions that PR 5 will wire into EventType and booking handlers:

- onEventTypeChange: Gate 2 — checks shouldCheckEventType, dispatches analysis
- onBookingCreated: Gate 3 — two paths: flagged users via shouldMonitor,
  unflagged users via checkBookingVelocity

Both are fire-and-forget with fail-open error handling.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
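The fire-and-forget, fail-open behavior described above could be shaped like this minimal sketch; the function names are illustrative, not the real handler code.

```typescript
// Fire-and-forget hook: booking creation must never fail (or block)
// because abuse analysis failed.
type AnalyzeFn = (userId: number) => Promise<void>;

function onBookingCreated(userId: number, analyzeUser: AnalyzeFn): void {
  // Deliberately not awaited: the caller continues immediately.
  analyzeUser(userId).catch((err) => {
    // Fail-open: log and swallow so the booking path is unaffected.
    console.error("abuse analysis failed", err);
  });
}
```

The `.catch` is the fail-open half: a rejected analysis promise is logged rather than propagated into the booking flow.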

* feat: register abuse-scoring task in trigger.config.ts

Adds the abuse-scoring trigger directory to the dirs array so
Trigger.dev discovers the analyze-user task.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add onSignup hook and wire Gate 1 into signup handlers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: wire Gate 2 and Gate 3 hooks into EventType and booking handlers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: remove unrelated diff noise from alerts.ts and types.ts

* feat: add scoring engine, alert system, and DTOs (#9)

* refactor: inject webhook URL into SlackAbuseAlerter via constructor

* Migrate user abuse score to its own table

* Move to separate data table

* Write abuse score to table for new users

* Fix `abuseMetadataSchema` to not expect db fields

* Rename method to `shouldCheckUsersEventType`

* fix: resolve type errors in scoring.ts and AbuseScoringRepository.ts

Co-Authored-By: joe@cal.com <j.auyeung419@gmail.com>

* fix: rename shouldCheckEventType to shouldUsersCheckEventType in tests

Co-Authored-By: joe@cal.com <j.auyeung419@gmail.com>

---------

Co-authored-by: Pedro Castro <pedro@cal.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>