📝 Walkthrough

Adds a production-ready Docker deployment and management suite: new docker-compose, nginx configs, multi-stage Dockerfiles, setup/manage scripts, db init/backup/restore tooling, environment templates, documentation, and multiple script removals/rewrites for a consolidated orchestration and deployment workflow.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant quick-setup as quick-setup.sh
    participant manage as manage.sh
    participant compose as docker-compose
    participant Services as Docker Services
    User->>quick-setup: run ./quick-setup.sh
    quick-setup->>quick-setup: create .env from .env.example & generate secrets
    quick-setup->>User: prompt domain / build options
    User->>quick-setup: choose options
    quick-setup->>manage: call install_worklenz()
    manage->>compose: docker compose up -d
    compose->>Services: start containers (db, redis, minio, backend, frontend, nginx)
    Services->>Services: initialize services (db-init-wrapper runs, nginx loads certs, etc.)
    manage->>User: display access URLs and status
```

```mermaid
sequenceDiagram
    participant Postgres
    participant db-init as db-init-wrapper.sh
    participant Backup
    participant SQL as SQL schema files
    Postgres->>db-init: container entrypoint triggers
    db-init->>db-init: check /var/lib/postgresql/data/.initialized
    alt initialized exists
        db-init->>Postgres: skip initialization
    else
        db-init->>Backup: search for latest backup file
        alt backup found
            Backup->>db-init: provide backup
            db-init->>Postgres: restore backup via psql
        else no backup
            db-init->>SQL: apply 0_extensions -> 1_tables ... migrations
            SQL->>Postgres: execute SQL scripts
        end
        db-init->>Postgres: create .initialized marker
    end
```
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~70 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
CONTRIBUTING.md (1)
36-51: Doc inconsistency: PR target branch says `main` and later says `development`.

Pick one and align both spots to avoid contributor confusion. Proposed fix (example: standardize on `main`):

```diff
-7. Create a pull request against the main repository's `main` branch.
+7. Create a pull request against the main repository's `main` branch.
@@
-All contributions should be submitted as pull requests against the `development` branch of the main repository.
+All contributions should be submitted as pull requests against the `main` branch of the main repository.
```

worklenz-backend/database/sql/4_functions.sql (3)
1284-1365: `deserialize_user`: good direction, but ensure the notification_settings insert can't insert a NULL team_id.

If a user has no teams (or `active_team` is NULL and no team row exists), the `INSERT INTO notification_settings (user_id, team_id, ...) SELECT ...` can attempt `team_id = NULL` and fail (or create hard-to-debug state depending on constraints). Consider guarding with `WHERE ... IS NOT NULL`.
6569-6675: Duplicate function definitions: `get_sort_column_name` + `update_task_sort_orders_bulk` appear twice.

Even with `OR REPLACE`, this is confusing and can regress behavior depending on which version is last (notably the differing supported `group_by` cases). Please dedupe and keep a single canonical implementation.
4341-4434: Remove the duplicate function definitions to prevent silent behavior divergence.

- The `get_sort_column_name()` function is defined twice (lines 4342 and 6615) with different behavior: the first version supports `'members'` grouping and returns `'member_sort_order'`, but the second version omits this case and would silently fall back to status sorting. Similarly, `update_task_sort_orders_bulk()` is defined twice (lines 6569 and 6632).
- All required columns (`status_sort_order`, `priority_sort_order`, `phase_sort_order`, `member_sort_order`) do exist on the `tasks` table.
- Consolidate to a single definition of each function to ensure consistent behavior across the codebase.
🤖 Fix all issues with AI agents
In @.env.example:
- Line 56: Inline comments after dotenv values (e.g., the DB_PASSWORD line) can
be parsed as part of the secret by some dotenv parsers. Remove the inline
comment after the value and place any explanatory text on a separate comment
line above the variable (or drop it entirely) for DB_PASSWORD and the other
affected entry around line 100. Ensure no variable lines carry trailing "#"
comments so secrets remain exact.
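As a runnable sketch of the fix (the variable name is from the file; the value is a placeholder, not a real secret), moving the comment onto its own line keeps the value exact:

```shell
# Hypothetical .env fragment: comment on its own line, nothing after the value.
cat > /tmp/example.env <<'EOF'
# Rotate this password before deploying to production
DB_PASSWORD=placeholder_password
EOF

# POSIX shells and strict dotenv parsers now read the exact value:
. /tmp/example.env
echo "$DB_PASSWORD"   # → placeholder_password
```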
In @docker-compose.yaml:
- Around line 69-88: The healthcheck for the redis service is using `redis-cli
--raw incr ping`, which increments a key instead of checking connectivity;
update the redis service's healthcheck `test` to call `redis-cli ping` (and if
your container requires the password set by `command` use `redis-cli -a
${REDIS_PASSWORD:-worklenz_redis_pass} ping`) so the `redis` service healthcheck
performs a proper ping check rather than mutating data.
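Under that suggestion, the corrected healthcheck might look like the following sketch (default password value copied from the instruction above; intervals illustrative):

```yaml
  redis:
    healthcheck:
      # "redis-cli ping" checks connectivity without mutating any key
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD:-worklenz_redis_pass}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
```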
- Around line 28-67: The db-backup service's entrypoint/command loop currently
runs gzip and cleanup blindly after a successful pg_dump and doesn't check gzip
result; update the command so that after pg_dump returns success you run gzip
and verify its exit status (treat gzip failure as a backup failure: log an
error, remove the incomplete uncompressed file if desired, and skip the
cleanup), and ensure the find cleanup that deletes old backups runs only when
the current backup (including gzip) fully succeeded; reference the db-backup
service's entrypoint/command, the BACKUP_FILE/BACKUP_RETENTION_DAYS variables,
and the pg_dump/gzip/find steps when implementing these checks and conditional
cleanup.
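One way to sketch the conditional gzip/cleanup inside the service's command (service, variable, and path names follow the instruction above; the interval and file pattern are illustrative, and `$$` keeps compose from interpolating the shell variables):

```yaml
  db-backup:
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        while true; do
          BACKUP_FILE="/backups/worklenz_$$(date +%Y%m%d_%H%M%S).sql"
          if pg_dump -h postgres -U "$$POSTGRES_USER" "$$POSTGRES_DB" > "$$BACKUP_FILE" \
             && gzip "$$BACKUP_FILE"; then
            # prune old backups only after the current backup fully succeeded
            find /backups -name 'worklenz_*.sql.gz' \
              -mtime +"$${BACKUP_RETENTION_DAYS:-7}" -delete
          else
            echo "backup failed; skipping cleanup" >&2
            rm -f "$$BACKUP_FILE"   # drop the incomplete uncompressed file
          fi
          sleep 86400
        done
```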
- Around line 90-113: The docker-compose service "minio" currently uses the
floating image tag "minio/minio:latest"; change the image reference for the
minio service to a pinned release tag in the form
"minio/minio:RELEASE.YYYY-MM-DDThh-mm-ssZ" (or use an environment variable like
MINIO_VERSION and set image to "minio/minio:${MINIO_VERSION}") so deployments
are reproducible; pick a specific release from MinIO's GitHub releases and
update the image value for the minio service accordingly, then test upgrades in
lower environments before changing the tag again.
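The env-var approach can be sketched as below; the `:?` form refuses to start with an unset variable, which avoids hard-coding a placeholder tag (pick a real `RELEASE.YYYY-MM-DDThh-mm-ssZ` tag from MinIO's GitHub releases):

```yaml
  minio:
    # Pin the release via .env, e.g. MINIO_VERSION=RELEASE.YYYY-MM-DDThh-mm-ssZ
    image: "minio/minio:${MINIO_VERSION:?set MINIO_VERSION in .env}"
```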
In @manage.sh:
- Around line 519-525: The restore code currently pipes the SQL dump into psql
which appends data; modify the restore sequence around the psql invocation to
first drop and recreate the target database (using psql -c "DROP DATABASE IF
EXISTS \"${DB_NAME:-worklenz_db}\"; CREATE DATABASE
\"${DB_NAME:-worklenz_db}\";" -U "${DB_USER:-postgres}") before importing, or
detect if the dump is a custom-format and use pg_restore with --clean --no-owner
against the DB (e.g., pg_restore --clean -U "${DB_USER:-postgres}" -d
"${DB_NAME:-worklenz_db}" "$temp_dir/$backup_content/database.sql"), ensuring
you still run these commands against the postgres container via $compose_cmd
exec -T postgres and preserve the existing success/failure logging
(print_success or error).
In @scripts/db-init-wrapper.sh:
- Around line 6-37: The script's current use of "set -e" plus later manual $?
checks causes restore failures to abort before fallback logic runs and the
.initialized marker gets written unconditionally even after partial failures;
update the restore and schema/migration blocks to: remove reliance on
post-failure $? checks by capturing command results directly (e.g., run gunzip |
psql and check its exit status immediately), ensure the fallback schema
initialization (the schema import block that uses psql on files in DB_DIR)
executes when restore fails, make migration commands (the section running
migrations) fail the script on error (exit non-zero) rather than continuing, and
only write the "/var/lib/postgresql/data/.initialized" marker after all steps
(restore or schema import and migrations) complete successfully; additionally
quote all variable usages like "$LATEST_BACKUP", "$POSTGRES_USER",
"$POSTGRES_DB", and paths to avoid word-splitting and ensure safe quoting when
invoking psql and gzip.
In @SETUP_THE_PROJECT.md:
- Around line 114-158: Change the bold-only "Option 1: Automated Setup
(Easiest)" and "Option 2: Manual Docker Setup" lines into proper headings (e.g.,
"#### Option 1: Automated Setup (Easiest)" and "#### Option 2: Manual Docker
Setup") to satisfy MD036, and wrap the bare URLs shown under "Access the
Application" (https://localhost, http://localhost, http://localhost:9001) as
either markdown links or inline code (e.g., `<https://localhost>` or
`https://localhost`) to satisfy MD034; update the sections titled "Option 1" and
"Option 2" and the URL instances accordingly.
In
@worklenz-backend/database/migrations/20260102000000-fix-division-by-zero-in-task-ratio.sql:
- Around line 33-42: The numerator includes the parent task but the denominator
omits it, allowing ratios >100%; make numerator/denominator consistent by
setting _total_tasks = _sub_tasks_count + 1 (so it counts the parent too), then
compute _ratio = (_total_completed / _total_tasks) * 100 and remove the
special-case IF branch (division-by-zero is no longer possible because
total_tasks >= 1). Ensure you update the assignments for _total_tasks and the
single calculation of _ratio and remove the IF/ELSE block that handled
_total_tasks == 0; references: _total_completed, _parent_task_done,
_sub_tasks_done, _sub_tasks_count, _total_tasks, _ratio.
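The corrected arithmetic can be sketched in Python (names mirror the SQL variables above; this is an illustration of the intended logic, not the migration itself):

```python
def task_complete_ratio(sub_tasks_done: int, sub_tasks_count: int,
                        parent_task_done: bool) -> float:
    """Count the parent task in both numerator and denominator, so the
    ratio stays within [0, 100] and the denominator is never zero."""
    total_completed = sub_tasks_done + (1 if parent_task_done else 0)
    total_tasks = sub_tasks_count + 1  # parent included -> always >= 1
    return (total_completed / total_tasks) * 100

# A done parent with no subtasks is 100%, not a division by zero:
print(task_complete_ratio(0, 0, True))    # → 100.0
# 2 of 3 subtasks done, parent not done: 2 of 4 tasks in the set:
print(task_complete_ratio(2, 3, False))   # → 50.0
```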
In @worklenz-backend/database/sql/4_functions.sql:
- Around line 3448-3458: The _total_completed currently includes the parent via
_parent_task_done while _total_tasks excludes it, causing ratios >100%; change
the logic to make both counts consistent (either add the parent to _total_tasks
or exclude _parent_task_done from _total_completed) and mirror the new
migration’s approach to avoid duplicating the bug; update the block that
computes _total_completed/_total_tasks (variables _total_completed,
_sub_tasks_count, _parent_task_done, _total_tasks, _ratio) so the denominator
and numerator refer to the same set of tasks and retain the division-by-zero
guard already present.
In @worklenz-backend/Dockerfile:
- Around line 62-64: The COPY in the Dockerfile references /app/release which
doesn't exist; either create the expected release file at /app/release in the
builder stage (so the existing COPY --from=builder --chown=worklenz:worklenz
/app/release ./release will succeed and the runtime file-constants.ts reads
./release), or update the COPY to pull the actual path created in the builder
(/app/build/RELEASE_VERSION) by changing the source to that path and keeping the
destination ./release so the runtime still finds the release file; adjust the
builder stage to write the version to the chosen source path if you opt to
create /app/release.
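The second option sketched as a Dockerfile fragment (the source path `/app/build/RELEASE_VERSION` and the `worklenz` user/group come from the instruction above):

```dockerfile
# Copy the file the builder actually writes; keep ./release as the
# destination so file-constants.ts still reads it at runtime.
COPY --from=builder --chown=worklenz:worklenz /app/build/RELEASE_VERSION ./release
```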
In @worklenz-backend/src/passport/index.ts:
- Around line 19-25: passport-google-mobile.ts currently always exports a
CustomStrategy instance so the guard using GoogleMobileLogin in
passport/index.ts always passes; change passport-google-mobile.ts to match
passport-google.ts by checking the required env vars and export null when they
are missing, otherwise construct and export the CustomStrategy instance (the
exported symbol GoogleMobileLogin). Ensure the exported name matches what
passport/index.ts imports (GoogleMobileLogin) and follows the same
conditional-export pattern as GoogleLogin so passport.use("google-mobile",
GoogleMobileLogin) only runs when env vars are present.
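The guard pattern can be sketched in plain JavaScript (the function and env var names are illustrative; the real module would return a `CustomStrategy` instance instead of a stub object):

```javascript
// Sketch: export null unless the required env vars are present, mirroring
// the conditional-export pattern passport-google.ts already uses.
function buildGoogleMobileStrategy(env) {
  if (!env.GOOGLE_CLIENT_ID || !env.GOOGLE_CLIENT_SECRET) {
    return null; // the guard in passport/index.ts then skips passport.use(...)
  }
  // Real code: return new CustomStrategy(async (req, done) => { ... });
  return { name: "google-mobile" };
}

// Missing config -> null, so the strategy is never registered:
console.log(buildGoogleMobileStrategy({}));   // → null
```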
In @worklenz-frontend/Dockerfile:
- Around line 48-51: The Dockerfile currently creates a placeholder file setting
window._env_ = {}, but the runtime env-config.sh and frontend code
(src/config/env.ts) expect properties under window.VITE_*; update the
placeholder creation in the Dockerfile to initialize the same naming convention
(e.g., set window.VITE_API_URL, window.VITE_SOCKET_URL and other window.VITE_*
keys or at least window.VITE = {}) so env-config.sh and the code that reads
window.VITE_* find the expected keys; modify the RUN that creates
/app/build/env-config.js (and the chown) to use window.VITE_* instead of
window._env_.
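A runnable sketch of the placeholder generation using the `window.VITE_*` convention (key names taken from the instruction above; it writes `./env-config.js` here instead of `/app/build/env-config.js` purely for illustration):

```shell
# Emit window.VITE_* keys so code reading window.VITE_API_URL etc. finds
# the expected names (instead of an empty window._env_ object).
cat > ./env-config.js << ENVEOF
window.VITE_API_URL = "${VITE_API_URL:-}";
window.VITE_SOCKET_URL = "${VITE_SOCKET_URL:-}";
ENVEOF

cat ./env-config.js
```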
- Around line 25-27: The Dockerfile currently runs the Vite build via RUN npm
run build but later copies from /app/build (COPY /app/build ...), which is wrong
because Vite outputs to dist/ by default; update the COPY instructions that
reference /app/build to /app/dist (or alternatively set Vite's build.outDir to
"build" in vite.config or package.json to keep /app/build), ensuring the
Dockerfile and Vite config reference the same output directory.
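If you choose to keep `/app/build`, the Vite side can be sketched as follows (assumes a standard `vite.config.js`; merge with the project's existing options):

```javascript
// vite.config.js — emit to build/ instead of Vite's default dist/,
// so the Dockerfile's COPY /app/build paths remain correct.
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    outDir: 'build',
  },
});
```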
🧹 Nitpick comments (14)
worklenz-backend/src/passport/passport-strategies/passport-google.ts (1)
22-27: Consider wrapping JSON.parse in a try-catch for robustness.

If `req.query.state` contains malformed JSON (e.g., from a tampered or corrupted OAuth redirect), `JSON.parse` will throw. While the outer try-catch will handle it, this results in an error response rather than gracefully proceeding with empty state. Suggested improvement:

```diff
-    const state = JSON.parse(req.query.state as string || "{}");
-    if (state) {
+    let state: { team?: string; teamMember?: string } = {};
+    try {
+      state = JSON.parse(req.query.state as string || "{}");
+    } catch {
+      // Proceed with empty state if parsing fails
+    }
+    if (state.team || state.teamMember) {
       body.team = state.team;
       body.member_id = state.teamMember;
     }
```

worklenz-frontend/Dockerfile (2)
53-66: Heredoc `COPY <<EOF` requires BuildKit; env JS generation is also injection-prone.

- `COPY <<'EOF' ...` will fail on builders without BuildKit/dockerfile v1.4+ support.
- Writing JS with raw `"${ENV}"` can break the file (quotes/newlines) and can become an injection vector if env values are not fully controlled.

Proposed changes (BuildKit + safer JS payload):

```diff
+# syntax=docker/dockerfile:1.6
+
@@
-COPY --chown=worklenz:worklenz <<'EOF' /app/env-config.sh
+COPY --chown=worklenz:worklenz <<'EOF' /app/env-config.sh
 #!/bin/sh
 # Generate env-config.js with runtime environment variables
-cat > /app/build/env-config.js << ENVEOF
-window.VITE_API_URL = "${VITE_API_URL}";
-window.VITE_SOCKET_URL = "${VITE_SOCKET_URL}";
-window.VITE_APP_TITLE = "${VITE_APP_TITLE:-Worklenz}";
-window.VITE_APP_ENV = "${VITE_APP_ENV:-production}";
-window.VITE_ENABLE_RECAPTCHA = "${VITE_ENABLE_RECAPTCHA:-false}";
-window.VITE_RECAPTCHA_SITE_KEY = "${VITE_RECAPTCHA_SITE_KEY:-}";
-window.VITE_ENABLE_GOOGLE_LOGIN = "${VITE_ENABLE_GOOGLE_LOGIN:-false}";
-window.VITE_ENABLE_SURVEY_MODAL = "${VITE_ENABLE_SURVEY_MODAL:-false}";
-ENVEOF
+# Minimal escaping for JS string literals
+js_escape() {
+  # escapes: backslash, double-quote, newline, carriage return
+  printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' -e ':a;N;$!ba;s/\n/\\n/g' -e 's/\r/\\r/g'
+}
+
+cat > /app/build/env-config.js << ENVEOF
+window._env_ = {
+  VITE_API_URL: "$(js_escape "${VITE_API_URL}")",
+  VITE_SOCKET_URL: "$(js_escape "${VITE_SOCKET_URL}")",
+  VITE_APP_TITLE: "$(js_escape "${VITE_APP_TITLE:-Worklenz}")",
+  VITE_APP_ENV: "$(js_escape "${VITE_APP_ENV:-production}")",
+  VITE_ENABLE_RECAPTCHA: "$(js_escape "${VITE_ENABLE_RECAPTCHA:-false}")",
+  VITE_RECAPTCHA_SITE_KEY: "$(js_escape "${VITE_RECAPTCHA_SITE_KEY:-}")",
+  VITE_ENABLE_GOOGLE_LOGIN: "$(js_escape "${VITE_ENABLE_GOOGLE_LOGIN:-false}")",
+  VITE_ENABLE_SURVEY_MODAL: "$(js_escape "${VITE_ENABLE_SURVEY_MODAL:-false}")"
+};
+ENVEOF
 EOF
```

Also applies to: 71-78
6-6: Update serve to 14.2.5; Node 22 LTS is acceptable.

Node 22-alpine is stable with Active LTS through October 2025 and Maintenance LTS (security fixes) through April 2027, so no immediate concerns. However, serve@14.2.1 (published August 2023) is outdated; the latest version is 14.2.5 (September 2025), which includes a compression dependency security update. Update to serve@14.2.5 for maintenance freshness and dependency management, or if remaining on 14.2.1, document the pinning decision.
.env.example (1)
184-189: Consider not defaulting `DOCKER_USERNAME` to a real account in an example file.

This can mislead users into thinking they should publish under that namespace. Maybe make it `yourdockerhubuser`.

worklenz-backend/Dockerfile (1)
56-58: Image size/security: runtime copies full `node_modules` (likely includes devDependencies).

If `npm ci` installs dev deps for build, you may want `npm prune --omit=dev` after `npm run build` (in the builder) and copy the pruned `node_modules` into runtime.

nginx/nginx.conf (1)
31-37: Avoid deprecated/duplicated security headers; set explicit rate-limit status codes.

- `X-XSS-Protection` (Line 35) is obsolete; recommend dropping it (and keeping headers defined in one place to avoid duplicates with `nginx/conf.d/worklenz.conf`).
- Consider setting `limit_req_status 429;` / `limit_conn_status 429;` so clients don't see 503s for throttling.

Proposed diff:

```diff
 http {
@@
     # Security headers
     server_tokens off;
     add_header X-Frame-Options "SAMEORIGIN" always;
     add_header X-Content-Type-Options "nosniff" always;
-    add_header X-XSS-Protection "1; mode=block" always;
     add_header Referrer-Policy "no-referrer-when-downgrade" always;
@@
     # Rate limiting zones
     limit_req_zone $binary_remote_addr zone=api_limit:10m rate=50r/s;
     limit_req_zone $binary_remote_addr zone=login_limit:10m rate=5r/m;
     limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
+
+    # Prefer explicit throttling status codes
+    limit_req_status 429;
+    limit_conn_status 429;
```
nginx/conf.d/worklenz.conf (2)
56-67: Re-check HSTS scope and `/uploads/` caching semantics.

- Lines 57-58: `Strict-Transport-Security ... includeSubDomains` is risky if this config might ever serve multiple unrelated hostnames (or if you don't control all subdomains).
- Lines 213-217: `expires 1d` + `Cache-Control: immutable` is a mismatch unless URLs are content-hashed; consider either a longer TTL or dropping `immutable`. Also confirm whether `proxy_cache` is actually enabled elsewhere before relying on `proxy_cache_valid`.

Also applies to: 207-217
121-175: Make WebSocket upgrade headers conditional using a map; the nested location block is valid.

The nested `location` block at lines 234-238 is valid nginx syntax (regex locations inside prefix locations are allowed) and does not need to be moved.

However, lines 128 and 163 should use a conditional variable for the `Connection` header to follow best practice.

Recommended map-based solution — add this map block in the `http` context (at the top of the configuration):

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

Then replace the unconditional headers:

- Line 128: `proxy_set_header Connection "upgrade";` → `proxy_set_header Connection $connection_upgrade;`
- Line 163: `proxy_set_header Connection "upgrade";` → `proxy_set_header Connection $connection_upgrade;`

This ensures that `Connection: upgrade` is only sent when the client requests an upgrade, allowing socket.io polling fallbacks to work properly on normal HTTP requests.

README.md (1)
134-137: Consider using proper Markdown link syntax for URLs.

Static analysis flagged bare URLs. While functional, using Markdown link syntax improves readability and consistency. Proposed fix:

```diff
 **Access the application:**
-- **Application**: https://localhost (or http://localhost)
-- **MinIO Console**: http://localhost:9001 (login: minioadmin/minioadmin)
+- **Application**: <https://localhost> (or <http://localhost>)
+- **MinIO Console**: <http://localhost:9001> (login: minioadmin/minioadmin)
```

quick-setup.sh (2)
69-112: Declare and assign separately to avoid masking return values.

Per Shellcheck SC2155, combining declaration and assignment can mask failures in the subshell. While unlikely to cause issues here since `generate_secret` has a fallback, it's good practice to separate them. Proposed fix for one example (apply the pattern to all):

```diff
-    local db_password=$(generate_secret)
+    local db_password
+    db_password=$(generate_secret)
```
205-248: Modifying docker-compose.yaml directly may cause git conflicts.

The script uses `sed` to update image names in `docker-compose.yaml`. This creates uncommitted changes that may conflict with future git pulls. Instead of modifying `docker-compose.yaml`, consider using environment variable interpolation in the compose file (which docker-compose already supports via `.env`):

```yaml
  backend:
    image: ${DOCKER_USERNAME:-chamikajaycey}/worklenz-backend:latest
```

Then update only the `.env` file with the username.

manage.sh (3)
20-27: Remove unused variable REPO_DIR.

`REPO_DIR` is declared but never used. Remove it to avoid confusion.

```diff
 BACKUP_DIR="$SCRIPT_DIR/backups"
-REPO_DIR="$SCRIPT_DIR"
```
109-116: Secure alternative for secret generation could fail silently.

The `cat /dev/urandom | tr ... | fold ... | head` pipeline can produce empty output on some systems if any pipe stage fails. Consider adding validation. Proposed fix:

```diff
 generate_secret() {
     if command -v openssl &> /dev/null; then
         openssl rand -hex 32
     else
-        cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1
+        local secret
+        secret=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1)
+        if [ -z "$secret" ]; then
+            echo "fallback_secret_$(date +%s)_$$" | sha256sum | cut -d' ' -f1
+        else
+            echo "$secret"
+        fi
     fi
 }
```
457-476: Use mapfile for array assignment from command output.

Per Shellcheck SC2207, using `mapfile` or `read -a` is safer than direct array assignment from command substitution. Proposed fix:

```diff
-    local backups=($(ls -t "$BACKUP_DIR"/worklenz_backup_*.tar.gz 2>/dev/null))
+    local backups=()
+    mapfile -t backups < <(ls -t "$BACKUP_DIR"/worklenz_backup_*.tar.gz 2>/dev/null)
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `nginx/ssl/cert.pem` is excluded by `!**/*.pem`
- `nginx/ssl/key.pem` is excluded by `!**/*.pem`
📒 Files selected for processing (27)
- .env.example
- .gitignore
- CONTRIBUTING.md
- DOCKER_SETUP.md
- README.md
- SETUP_THE_PROJECT.md
- backup.sh
- docker-compose.yaml
- docker-compose.yml
- improved_deserialize_user.sql
- manage.sh
- nginx/conf.d/worklenz.conf
- nginx/nginx.conf
- quick-setup.sh
- scripts/db-init-wrapper.sh
- start.bat
- start.sh
- stop.bat
- stop.sh
- task-progress-methods.md
- update-docker-env.sh
- worklenz-backend/Dockerfile
- worklenz-backend/database/migrations/20260102000000-fix-division-by-zero-in-task-ratio.sql
- worklenz-backend/database/sql/4_functions.sql
- worklenz-backend/src/passport/index.ts
- worklenz-backend/src/passport/passport-strategies/passport-google.ts
- worklenz-frontend/Dockerfile
💤 Files with no reviewable changes (9)
- task-progress-methods.md
- docker-compose.yml
- improved_deserialize_user.sql
- start.sh
- stop.sh
- backup.sh
- update-docker-env.sh
- stop.bat
- start.bat
🧰 Additional context used
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 49-49: [UnorderedKey] The HTTPS_PORT key should go before the HTTP_PORT key
(UnorderedKey)
[warning] 56-56: [UnorderedKey] The DB_PASSWORD key should go before the DB_USER key
(UnorderedKey)
[warning] 56-56: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
[warning] 57-57: [UnorderedKey] The DB_MAX_CLIENTS key should go before the DB_NAME key
(UnorderedKey)
[warning] 86-86: [UnorderedKey] The REDIS_DB key should go before the REDIS_PASSWORD key
(UnorderedKey)
[warning] 98-98: [UnorderedKey] The AWS_BUCKET key should go before the AWS_REGION key
(UnorderedKey)
[warning] 99-99: [UnorderedKey] The AWS_ACCESS_KEY_ID key should go before the AWS_BUCKET key
(UnorderedKey)
[warning] 100-100: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
[warning] 102-102: [UnorderedKey] The MINIO_BROWSER key should go before the S3_URL key
(UnorderedKey)
🪛 LanguageTool
DOCKER_SETUP.md
[style] ~307-~307: You have already used this phrasing in nearby sentences. Consider replacing it to add variety to your writing.
Context: ...mod -aG docker $USER ``` *Note: You may need to log out and back in for this change to ...
(REP_NEED_TO_VB)
🪛 markdownlint-cli2 (0.18.1)
DOCKER_SETUP.md
59-59: Link fragments should be valid
(MD051, link-fragments)
60-60: Link fragments should be valid
(MD051, link-fragments)
135-135: Bare URL used
(MD034, no-bare-urls)
135-135: Bare URL used
(MD034, no-bare-urls)
136-136: Bare URL used
(MD034, no-bare-urls)
213-213: Bare URL used
(MD034, no-bare-urls)
259-259: Bare URL used
(MD034, no-bare-urls)
335-335: Bare URL used
(MD034, no-bare-urls)
SETUP_THE_PROJECT.md
120-120: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
133-133: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
146-146: Bare URL used
(MD034, no-bare-urls)
146-146: Bare URL used
(MD034, no-bare-urls)
147-147: Bare URL used
(MD034, no-bare-urls)
README.md
135-135: Bare URL used
(MD034, no-bare-urls)
135-135: Bare URL used
(MD034, no-bare-urls)
136-136: Bare URL used
(MD034, no-bare-urls)
259-259: Bare URL used
(MD034, no-bare-urls)
🪛 Shellcheck (0.11.0)
quick-setup.sh
[warning] 75-75: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 76-76: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 77-77: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 78-78: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 79-79: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 80-80: Declare and assign separately to avoid masking return values.
(SC2155)
manage.sh
[warning] 26-26: REPO_DIR appears unused. Verify use (or export if used externally).
(SC2034)
[warning] 70-70: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[warning] 156-156: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 277-277: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 324-324: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 353-353: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 374-374: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 397-397: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 398-398: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 439-439: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 461-461: Prefer mapfile or read -a to split command output (or quote to avoid splitting).
(SC2207)
[warning] 470-470: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 471-471: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 472-472: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 505-505: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 506-506: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 511-511: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 592-592: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 599-599: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 606-606: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 613-613: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 620-620: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 628-628: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 695-695: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 696-696: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 697-697: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 1196-1196: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 1248-1248: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 1301-1301: Declare and assign separately to avoid masking return values.
(SC2155)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (24)
.gitignore (1)
81-86: LGTM: ignores match generated SSL/backups/temp artifacts.

If you intend to keep empty `backups/` or `nginx/ssl/` directories in the repo, consider adding a `.gitkeep` and un-ignoring it explicitly.

CONTRIBUTING.md (1)
11-23: Nice improvement: clearer onboarding with the Docker path.

The `./quick-setup.sh` + `./manage.sh` callouts are helpful.

worklenz-backend/Dockerfile (1)
6-6: No action required — `libvips42` is available in Debian Bookworm.

The `node:20-slim` base image uses Debian Bookworm (Debian 12), and `libvips42` is packaged and available in its repositories. The Dockerfile correctly specifies `libvips42` in the runtime stage (lines 42–48), and it will resolve and install successfully with `apt-get`.

worklenz-backend/database/migrations/20260102000000-fix-division-by-zero-in-task-ratio.sql (1)
4-5: No issue found — explicit `BEGIN;` and `COMMIT;` are correct for this migration runner.

This repository uses a custom shell-based migration runner (`00_init.sh`) that executes each `.sql` migration file as an isolated `psql` session using the `-f` flag. Unlike frameworks that wrap migrations in implicit transactions, `psql -f` does not automatically wrap the file's contents in a transaction. The explicit `BEGIN;` and `COMMIT;` statements are therefore necessary to ensure migration atomicity — without them, a partial migration success is possible if an error occurs mid-execution. This pattern is consistently used across 20+ migration files in the repository and is the correct approach for this runner setup.

worklenz-backend/database/sql/4_functions.sql (1)
4117-4125: Verify whether `COALESCE(_schedule_id, 'null'::json)` is actually used downstream and if it's redundant.

The code correctly sets `_schedule_id` to either JSON `null` or a JSON object via the CASE statement, so `_schedule_id` should never be SQL NULL. If `COALESCE(_schedule_id, 'null'::json)` is applied later, it would be redundant since the SELECT INTO already guarantees a JSON value. Confirm that the function's return structure matches API client expectations for `schedule_id` (JSON null vs object), and verify the same applies to lines 4148–4149.

DOCKER_SETUP.md (1)
235-242: Change service name from `db` to `postgres` in the migration exec command.

Line 365 references `docker compose exec db pg_dump` but the docker-compose.yaml defines the PostgreSQL service as `postgres`, not `db`. Update to `docker compose exec postgres pg_dump -U postgres worklenz_db > backup.sql`.

docker-compose.yaml (6)
1-27: PostgreSQL service configuration looks solid.

Good use of health checks, security options, and proper volume mounting for initialization scripts. The `start_period: 30s` is appropriate for database initialization.
115-139: minio-init sets the bucket policy to public download.

Line 134 sets `anonymous set download`, which allows unauthenticated public access to all files in the bucket. This may be intentional for serving uploaded assets, but verify this is the desired security posture for production deployments.
141-228: Backend service configuration looks comprehensive.

Good separation of concerns with required vs optional environment variables, proper health checks, and security options. The use of `condition: service_healthy` for dependencies ensures proper startup order.
229-260: Frontend service configuration is well-structured.

Proper dependency on backend health, environment variable injection for runtime configuration, and appropriate health check configuration.
261-303: Nginx reload loop and Certbot renewal cycle are properly configured.

The nginx command sets up a 6-hour reload cycle for certificate updates, and certbot runs a 12-hour renewal check loop. This is a common pattern for Let's Encrypt certificate management.
304-325: Network and volume definitions are appropriate. The `internal: true` flag on the `worklenz-backend` network properly isolates database and cache services from external access. Named volumes with the local driver are suitable for single-host deployments.

README.md (3)
94-148: Quick Start documentation is comprehensive and well-organized. The two-path approach (automated vs. manual) with clear steps makes onboarding straightforward. Good inclusion of management commands.
221-287: Production deployment documentation is thorough. Clear distinction between quick and manual setup, proper SSL/Let's Encrypt guidance, and deployment mode explanations. The management commands section provides good operational guidance.
354-387: Security features and backup documentation is well-structured. Good coverage of security best practices and backup commands. The checklist format makes it easy to follow.
quick-setup.sh (3)
63-66: Good fallback for secret generation. The fallback to `/dev/urandom` with `xxd` when `openssl` is unavailable ensures the script works across different environments.
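A minimal sketch of that fallback chain (the function name and the 32-byte length mirroring `openssl rand -hex 32` are assumptions, not the script's exact code):

```shell
# Generate a 64-character hex secret: prefer openssl, fall back to /dev/urandom + xxd
generate_secret() {
  if command -v openssl >/dev/null 2>&1; then
    openssl rand -hex 32
  else
    # 32 random bytes -> 64 hex chars on a single line
    head -c 32 /dev/urandom | xxd -p -c 32 | tr -d '\n'
  fi
}

SECRET="$(generate_secret)"
echo "$SECRET"
```

Either branch yields the same 64-character lowercase hex string, so downstream consumers of the secret need no knowledge of which tool was available.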
293-312: Script calls manage.sh for installation - good modularity. Delegating the actual installation to `manage.sh` avoids code duplication and ensures consistency between the quick setup and interactive management workflows.
1-14: Script uses `set -e` for fail-fast behavior. Good practice for setup scripts to exit on the first error, preventing partial/broken installations.
manage.sh (6)
66-73: Sourcing the .env file is correct, but consider validation. The `set -a` / `source` / `set +a` pattern correctly exports variables. ShellCheck's SC1090 warning is expected since the path is dynamic. Consider adding basic validation that the file was sourced successfully.
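A hedged sketch of what that validation could look like (the function name and the list of required keys are assumptions):

```shell
# Source an env file with set -a so every assignment is exported,
# then verify that critical variables actually ended up non-empty.
load_env() {
  env_file="$1"
  if [ ! -f "$env_file" ]; then
    echo "ERROR: $env_file not found" >&2
    return 1
  fi
  set -a
  # shellcheck disable=SC1090
  . "$env_file"
  set +a
  # Fail fast if critical variables are still empty after sourcing
  for var in DB_PASSWORD SESSION_SECRET; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "ERROR: $var is missing or empty in $env_file" >&2
      return 1
    fi
  done
}
```

This keeps the `set -a` semantics intact while turning a silently missing variable into an explicit startup error.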
389-450: Backup function is comprehensive with good progress reporting. The 4-step backup process (database, Redis, MinIO, config) with proper error handling and cleanup of old backups is well-implemented. Good use of deployment mode detection to skip irrelevant backups in advanced mode.
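The old-backup cleanup can be sketched with a `find`-based prune (the directory, file pattern, and 7-day default are assumptions, not the script's actual values):

```shell
# Remove compressed backups older than the retention window
BACKUP_DIR="${BACKUP_DIR:-./backups}"
RETENTION_DAYS="${BACKUP_RETENTION_DAYS:-7}"

mkdir -p "$BACKUP_DIR"
# -mtime +N matches files last modified more than N*24 hours ago
find "$BACKUP_DIR" -name 'worklenz_backup_*.tar.gz' -type f -mtime "+$RETENTION_DAYS" -delete
```

Driving the retention window from an environment variable keeps the policy tunable without editing the script.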
452-561: Restore function has proper safety checks. The confirmation prompt requiring explicit "yes" input and the warning about data replacement are good safeguards against accidental data loss.
1027-1224: Install function is well-structured with good user guidance. The step-by-step installation with domain configuration, optional custom image building, SSL setup, and service startup provides a smooth onboarding experience. Good error handling throughout.
1230-1270: Upgrade function creates a backup before upgrading - good practice. The backup-before-upgrade pattern protects against upgrade failures. However, consider adding a rollback option if the upgrade fails.
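A rollback wrapper could be sketched like this (the helper names and command-as-string style are assumptions, not manage.sh's actual structure):

```shell
# Back up first; if the upgrade command fails, run the rollback command.
upgrade_with_rollback() {
  backup_cmd="$1"; upgrade_cmd="$2"; rollback_cmd="$3"
  if ! $backup_cmd; then
    echo "Backup failed; aborting upgrade" >&2
    return 1
  fi
  if $upgrade_cmd; then
    echo "Upgrade succeeded"
  else
    echo "Upgrade failed; rolling back" >&2
    $rollback_cmd
    return 1
  fi
}

# Hypothetical wiring inside manage.sh:
# upgrade_with_rollback backup_worklenz pull_and_restart restore_latest_backup
```

Passing the three steps as commands keeps the control flow (backup, attempt, rollback-on-failure) testable independently of Docker.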
1423-1509: CLI interface is comprehensive with good help documentation. The command-line interface supports all major operations and falls back to interactive mode when no arguments are provided. Help output is clear and informative.
```yaml
# MinIO Object Storage (Express mode - default)
minio:
  image: minio/minio:latest
  container_name: worklenz-minio
  restart: unless-stopped
  command: server /data --console-address ":9001"
  environment:
    MINIO_ROOT_USER: ${AWS_ACCESS_KEY_ID:-minioadmin}
    MINIO_ROOT_PASSWORD: ${AWS_SECRET_ACCESS_KEY:-minioadmin}
    MINIO_BROWSER: ${MINIO_BROWSER:-on}
  volumes:
    - minio_data:/data
  networks:
    - worklenz-backend
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3
    start_period: 10s
  security_opt:
    - no-new-privileges:true
  profiles:
    - express
```
🧩 Analysis chain
🌐 Web query:
MinIO latest stable version 2026
💡 Result:
The latest stable MinIO release is tag RELEASE.2025-10-15T17-29-55Z (published Oct 16, 2025). (github.com)
🌐 Web query:
MinIO minio/minio docker image versioning best practices
💡 Result:
Summary — best practices for using minio/minio Docker image
- Pin full release tags (RELEASE.YYYY-MM-DDThh-mm-ssZ) or image digests; never rely on :latest. This ensures reproducible, auditable deployments. [1][2]
- Prefer the Kubernetes Operator or official deployment methods for production rather than single-container standalone images; the Operator handles upgrades/compatibility better. [1]
- Test upgrades in lower environments and read release notes before upgrading; do not blindly upgrade production. Follow MinIO's upgrade checklist. [1][2]
- Avoid mc admin update for containerized MinIO; upgrade by replacing container images (and coordinate parallel upgrades for distributed setups). [1][2]
- For highest safety, pin by digest (sha256) in orchestrator manifests and CI, and use immutable tags in registries/air-gapped mirrors so you can roll back. [1][2]
- Automate CI: scan images for CVEs, run integration tests against the pinned tag, and have a documented rollback/upgrade playbook tied to release notes. [2]
- If an official Docker image for a release isn't available (some releases may require building from source), build and publish your own internal image with the exact release tag and embed version metadata; track upstream GitHub releases for urgent security fixes. (There have been reports of missing/pending official images for security releases; if so, build from the tagged release.) [2][3]
Sources
- MinIO Docker Hub / upgrade checklist and tag examples. [1]
- MinIO GitHub releases (official RELEASE.* tags, release notes and build-from-source instructions). [2]
- Community/GitHub discussion reporting missing/pending Docker images for security releases (reason to have internal build/mirror plan). [3]
If you want, I can: (a) inspect your current manifests and produce pinned tag/digest updates, or (b) provide sample Kubernetes Deployment/DaemonSet manifests that pin digest and include a CI upgrade workflow.
Pin MinIO to a specific release version instead of latest.
Using the latest tag can introduce unexpected breaking changes in production. Pin to a specific release tag (format: RELEASE.YYYY-MM-DDThh-mm-ssZ) for reproducibility and auditability. Before upgrading, test in lower environments and review MinIO's release notes.
Proposed fix
```diff
 minio:
-  image: minio/minio:latest
+  image: minio/minio:RELEASE.2025-10-15T17-29-55Z
   container_name: worklenz-minio
```

(Check MinIO GitHub releases for the latest stable version when deploying.)
🤖 Prompt for AI Agents
In @docker-compose.yaml around lines 90 - 113, The docker-compose service
"minio" currently uses the floating image tag "minio/minio:latest"; change the
image reference for the minio service to a pinned release tag in the form
"minio/minio:RELEASE.YYYY-MM-DDThh-mm-ssZ" (or use an environment variable like
MINIO_VERSION and set image to "minio/minio:${MINIO_VERSION}") so deployments
are reproducible; pick a specific release from MinIO's GitHub releases and
update the image value for the minio service accordingly, then test upgrades in
lower environments before changing the tag again.
```shell
$compose_cmd up -d postgres
sleep 10

if [ -f "$temp_dir/$backup_content/database.sql" ]; then
    $compose_cmd exec -T postgres psql -U "${DB_USER:-postgres}" -d "${DB_NAME:-worklenz_db}" < "$temp_dir/$backup_content/database.sql" 2>/dev/null
    print_success "Database restored"
fi
```
Database restore lacks pre-restore database reset.
The restore uses psql to run the SQL dump, but this appends to existing data rather than replacing it. Consider dropping and recreating the database, or using pg_restore with --clean if using custom format dumps.
Proposed fix
if [ -f "$temp_dir/$backup_content/database.sql" ]; then
+ # Drop and recreate database to ensure clean restore
+ $compose_cmd exec -T postgres psql -U "${DB_USER:-postgres}" -d postgres -c "DROP DATABASE IF EXISTS ${DB_NAME:-worklenz_db};" 2>/dev/null || true
+ $compose_cmd exec -T postgres psql -U "${DB_USER:-postgres}" -d postgres -c "CREATE DATABASE ${DB_NAME:-worklenz_db};" 2>/dev/null
$compose_cmd exec -T postgres psql -U "${DB_USER:-postgres}" -d "${DB_NAME:-worklenz_db}" < "$temp_dir/$backup_content/database.sql" 2>/dev/null
print_success "Database restored"
fi

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
$compose_cmd up -d postgres
sleep 10

if [ -f "$temp_dir/$backup_content/database.sql" ]; then
    # Drop and recreate database to ensure clean restore
    $compose_cmd exec -T postgres psql -U "${DB_USER:-postgres}" -d postgres -c "DROP DATABASE IF EXISTS ${DB_NAME:-worklenz_db};" 2>/dev/null || true
    $compose_cmd exec -T postgres psql -U "${DB_USER:-postgres}" -d postgres -c "CREATE DATABASE ${DB_NAME:-worklenz_db};" 2>/dev/null
    $compose_cmd exec -T postgres psql -U "${DB_USER:-postgres}" -d "${DB_NAME:-worklenz_db}" < "$temp_dir/$backup_content/database.sql" 2>/dev/null
    print_success "Database restored"
fi
```
🤖 Prompt for AI Agents
In @manage.sh around lines 519 - 525, The restore code currently pipes the SQL
dump into psql which appends data; modify the restore sequence around the psql
invocation to first drop and recreate the target database (using psql -c "DROP
DATABASE IF EXISTS \"${DB_NAME:-worklenz_db}\"; CREATE DATABASE
\"${DB_NAME:-worklenz_db}\";" -U "${DB_USER:-postgres}") before importing, or
detect if the dump is a custom-format and use pg_restore with --clean --no-owner
against the DB (e.g., pg_restore --clean -U "${DB_USER:-postgres}" -d
"${DB_NAME:-worklenz_db}" "$temp_dir/$backup_content/database.sql"), ensuring
you still run these commands against the postgres container via $compose_cmd
exec -T postgres and preserve the existing success/failure logging
(print_success or error).
```typescript
const googleStrategy = process.env.GOOGLE_CLIENT_ID && process.env.GOOGLE_CLIENT_SECRET
  ? new GoogleStrategy.Strategy({
      clientID: process.env.GOOGLE_CLIENT_ID,
      clientSecret: process.env.GOOGLE_CLIENT_SECRET,
      callbackURL: process.env.GOOGLE_CALLBACK_URL as string,
      passReqToCallback: true
    },
    (req, _accessToken, _refreshToken, profile, done) => void handleGoogleLogin(req, _accessToken, _refreshToken, profile, done))
  : null;
```
Missing validation for GOOGLE_CALLBACK_URL environment variable.
The condition checks GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET, but GOOGLE_CALLBACK_URL on line 67 is used with a type assertion (`as string`) without validation. If the callback URL is unset while the other two are set, `undefined` will be passed cast as a string, likely causing OAuth failures at runtime.
Proposed fix
-const googleStrategy = process.env.GOOGLE_CLIENT_ID && process.env.GOOGLE_CLIENT_SECRET
+const googleStrategy = process.env.GOOGLE_CLIENT_ID && process.env.GOOGLE_CLIENT_SECRET && process.env.GOOGLE_CALLBACK_URL
? new GoogleStrategy.Strategy({
clientID: process.env.GOOGLE_CLIENT_ID,
clientSecret: process.env.GOOGLE_CLIENT_SECRET,
- callbackURL: process.env.GOOGLE_CALLBACK_URL as string,
+ callbackURL: process.env.GOOGLE_CALLBACK_URL,
passReqToCallback: true
},

```dockerfile
# Build React application for production with Vite
RUN npm run build
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find and examine Vite configuration
cd worklenz-frontend
# Look for Vite config files
echo "=== Searching for Vite config files ==="
find . -maxdepth 2 -type f \( -name "vite.config.*" -o -name "vitest.config.*" \) 2>/dev/null | head -20
# Check package.json for build script
echo "=== Checking package.json build script ==="
cat package.json | grep -A 3 '"build"' || true
# Search for outDir configuration
echo "=== Searching for outDir configuration ==="
rg -i 'outDir' . --type ts --type js 2>/dev/null || true

Repository: Worklenz/worklenz
Length of output: 178
🏁 Script executed:
#!/bin/bash
# Check the Dockerfile structure
cd worklenz-frontend
echo "=== Dockerfile lines 20-30 ==="
sed -n '20,30p' Dockerfile
echo ""
echo "=== Dockerfile lines 40-50 ==="
sed -n '40,50p' Dockerfile
echo ""
echo "=== List worklenz-frontend root ==="
ls -la

Repository: Worklenz/worklenz
Length of output: 178
🌐 Web query:
Worklenz frontend Vite build configuration output directory
💡 Result:
The frontend build outputs to the Vite default "dist" directory — i.e. worklenz-frontend/dist. [1][2][3]
Sources:
- Worklenz repo README (frontend folder layout / build instructions). [1]
- Worklenz Docker guide (references ANGULAR_DIST_DIR / built frontend dist path). [2]
- Vite docs — default build outDir is "dist". [3]
🌐 Web query:
Worklenz GitHub repository Dockerfile Vite
💡 Result:
The Worklenz repo is on GitHub (Worklenz/worklenz). The frontend lives in the worklenz-frontend folder and the project uses Vite environment variables (VITE_API_URL, VITE_SOCKET_URL). The repo includes Docker support (docker-compose + Dockerfiles) to build/run the frontend service. See the repository and Docker guide for details. [1][2]
Sources:
- Worklenz GitHub repository (worklenz, frontend folder listed). [1]
- Worklenz Docker guide / docs (mentions VITE_API_URL, docker setup). [2]
Vite outputs to dist/ by default, but the Dockerfile copies from /app/build, which will ship an empty app.
Worklenz frontend builds to worklenz-frontend/dist (Vite default). Change the COPY instruction from /app/build to /app/dist, or explicitly configure Vite's build.outDir to "build" if that directory is intentional.
Also applies to: 45-47
🤖 Prompt for AI Agents
In @worklenz-frontend/Dockerfile around lines 25 - 27, The Dockerfile currently
runs the Vite build via RUN npm run build but later copies from /app/build (COPY
/app/build ...), which is wrong because Vite outputs to dist/ by default; update
the COPY instructions that reference /app/build to /app/dist (or alternatively
set Vite's build.outDir to "build" in vite.config or package.json to keep
/app/build), ensuring the Dockerfile and Vite config reference the same output
directory.
…ocker health checks
- Reformat .env.example comments for better readability of required password fields
- Enhance backup script with detailed error handling, compression verification, and cleanup logic
- Fix Redis health check to use authentication when password is configured
- Correct backend Dockerfile to copy RELEASE_VERSION from build directory
- Conditionally enable Google mobile authentication strategy based on environment variables
- Initialize
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
worklenz-backend/src/passport/passport-strategies/passport-google-mobile.ts (1)
28-30: Replace the tokeninfo endpoint with local token verification for production. The code uses Google's `tokeninfo` endpoint, which Google's official documentation explicitly states is unsuitable for production: it is intended for debugging only and is subject to rate limits and intermittent throttling. For this production setup, verify tokens locally using either:

- Google's official client libraries (e.g., `google-auth-library`), which handle key fetching, signature validation, and claim verification automatically.
- JWKS-based verification: fetch Google's public keys from the JWKS URI, cache them per HTTP headers, validate the token signature, and verify the required claims (`aud`, `iss`, `exp`).

The current code does validate the `aud` and `iss` claims, which is good practice, but these checks only work after an external call to tokeninfo. Local verification eliminates the external dependency and rate-limit risk.
🤖 Fix all issues with AI agents
In @docker-compose.yaml:
- Around line 161-163: The backend service currently only lists PostgreSQL under
depends_on; when using the express profile it must also wait for Redis. Update
the backend service's depends_on to include the Redis service (with the same
service_healthy condition) when profile: express, or alternatively add startup
logic in the backend entrypoint (e.g., in the backend start script) to
retry/wait for Redis availability and log a clear message if Redis is
unavailable; ensure references to the Docker Compose depends_on block and the
express profile (and the Redis service name "redis") are updated accordingly.
In @worklenz-backend/src/passport/passport-strategies/passport-google-mobile.ts:
- Around line 110-112: The guard that initializes googleMobileStrategy only
checks GOOGLE_CLIENT_ID but handleMobileGoogleAuth accepts
GOOGLE_ANDROID_CLIENT_ID and GOOGLE_IOS_CLIENT_ID as well; change the
initialization condition for googleMobileStrategy to check whether any of
GOOGLE_CLIENT_ID, GOOGLE_ANDROID_CLIENT_ID, or GOOGLE_IOS_CLIENT_ID is set
(e.g., use a logical OR across those env vars) so the
CustomStrategy(handleMobileGoogleAuth) is created when any mobile client ID is
configured.
🧹 Nitpick comments (4)
.env.example (1)
38-42: Consider tightening default CORS configuration for production.
`SERVER_CORS=*` allows all origins by default, which is permissive. While the comments explain this is for development, the example file defaults could be safer. Consider defaulting to a more restrictive value or adding a stronger warning. Additionally, `SOCKET_IO_CORS` references `http://localhost` while `SERVER_CORS` uses `*`, which is inconsistent.

Suggested improvement

```diff
 # CORS Configuration
 # For development: * (allows all origins)
 # For production: your-domain.com or https://your-domain.com
-SERVER_CORS=*
+# WARNING: Change this for production deployments!
+SERVER_CORS=http://localhost
 SOCKET_IO_CORS=http://localhost
```

docker-compose.yaml (2)
28-79: Backup service is functional with good error handling. The inline backup script properly handles failures, cleans up incomplete files, and implements retention-based cleanup. Using the `backup` profile makes this opt-in.

One consideration: the first backup runs immediately on container start, then waits 24 hours. If you want consistent daily backups at a specific time, consider using `crond` instead of the sleep loop.
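If `crond` is not available in the container, the sleep loop can at least target a fixed local time. A sketch (the 02:00 slot is an assumption; this relies on GNU date's `-d` parsing, so BSD/macOS date would need `-j -f` instead):

```shell
# Compute seconds until the next occurrence of HH:00, so the backup loop
# fires at a fixed local time instead of "24h after container start".
seconds_until_hour() {
  hour="$1"
  now=$(date +%s)
  target=$(date -d "today ${hour}:00" +%s)
  if [ "$target" -le "$now" ]; then
    target=$((target + 86400))   # already past today's slot; aim for tomorrow
  fi
  echo $((target - now))
}

# Hypothetical loop body: sleep "$(seconds_until_hour 02)" && run_backup
```

This keeps backup timestamps aligned day to day even when the container restarts at arbitrary times.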
91-92: Redis health check exposes the password in the command. The health check command includes the password directly, which could be visible in process listings inside the container. Consider using the `REDISCLI_AUTH` environment variable instead.

Suggested improvement

```diff
 healthcheck:
-  test: ["CMD", "sh", "-c", "redis-cli -a ${REDIS_PASSWORD:-worklenz_redis_pass} ping"]
+  test: ["CMD", "sh", "-c", "REDISCLI_AUTH='${REDIS_PASSWORD:-worklenz_redis_pass}' redis-cli ping"]
   interval: 10s
```

worklenz-backend/Dockerfile (1)
56-57: Production image includes devDependencies. Copying `node_modules` directly from the builder includes all dependencies, including devDependencies. This increases image size and attack surface. Consider pruning dev dependencies.

Suggested improvement in builder stage

```diff
 # Build TypeScript application
 RUN npm run build

+# Remove dev dependencies for production
+RUN npm prune --omit=dev
+
 # Create release version file
 RUN echo "$(date '+%Y-%m-%d %H:%M:%S')" > build/RELEASE_VERSION
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- .env.example
- docker-compose.yaml
- worklenz-backend/Dockerfile
- worklenz-backend/src/passport/passport-strategies/passport-google-mobile.ts
- worklenz-frontend/Dockerfile
🧰 Additional context used
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 49-49: [UnorderedKey] The HTTPS_PORT key should go before the HTTP_PORT key
(UnorderedKey)
[warning] 57-57: [UnorderedKey] The DB_PASSWORD key should go before the DB_USER key
(UnorderedKey)
[warning] 58-58: [UnorderedKey] The DB_MAX_CLIENTS key should go before the DB_NAME key
(UnorderedKey)
[warning] 87-87: [UnorderedKey] The REDIS_DB key should go before the REDIS_PASSWORD key
(UnorderedKey)
[warning] 99-99: [UnorderedKey] The AWS_BUCKET key should go before the AWS_REGION key
(UnorderedKey)
[warning] 100-100: [UnorderedKey] The AWS_ACCESS_KEY_ID key should go before the AWS_BUCKET key
(UnorderedKey)
[warning] 104-104: [UnorderedKey] The MINIO_BROWSER key should go before the S3_URL key
(UnorderedKey)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (12)
worklenz-backend/src/passport/passport-strategies/passport-google-mobile.ts (1)
110-114: No action needed. The nullable export is properly handled in the only consumer (`index.ts` at lines 23-25), which guards against the null case with an `if` check before registering the strategy.

.env.example (2)
65-80: LGTM! The security secrets section is well-documented with clear placeholder values and instructions for generating secure random strings using `openssl rand -hex 32`. The naming convention makes it obvious these must be changed.
201-255: Well-structured configuration guide. The embedded configuration guide provides clear, actionable instructions for different deployment scenarios (Express mode, production with custom domain, advanced mode, and Google login setup). This is excellent documentation that will help users get started quickly.
docker-compose.yaml (4)
1-27: PostgreSQL service is well-configured. Good use of required variable syntax (`${DB_PASSWORD:?DB_PASSWORD is required}`), proper health checks with `start_period` to allow initialization time, and security hardening with `no-new-privileges`. The encoding settings in `POSTGRES_INITDB_ARGS` ensure proper UTF-8 support.
146-146: Verify whether public anonymous download is intended for the bucket. The MinIO init sets the bucket to allow anonymous downloads (`mc anonymous set download`). This is appropriate for publicly-served assets but may not be suitable for private file storage. Consider documenting this behavior or making it configurable.
273-302: Nginx configuration is well-designed for production. Good practices observed:
- Depends on both frontend and backend being healthy
- Volume mounts for configuration and SSL certificates
- Health check endpoint
- Periodic reload every 6 hours for certificate refresh
- Security hardening with `no-new-privileges`
316-322: Good network isolation design. The `worklenz-backend` network is marked as `internal: true`, which prevents direct external access to database, Redis, and MinIO containers. Only nginx exposes ports to the host, providing a proper security boundary.

worklenz-backend/Dockerfile (2)
5-33: Builder stage is well-structured. Appropriate build dependencies for native module compilation (python3, make, g++ for node-gyp, postgresql-server-dev-all for pg_native). The timestamp-based release version is a simple but effective approach for tracking builds.
72-86: LGTM! Production runtime is properly secured. Good security practices:
- Non-root user execution
- tini as init system for proper signal handling and zombie process reaping
- Health check for orchestration integration
- Minimal runtime dependencies
worklenz-frontend/Dockerfile (3)
21-26: Clever runtime environment injection pattern. Using placeholder values (`__VITE_API_URL__`, `__VITE_SOCKET_URL__`) at build time and replacing them at runtime via `env-config.js` is a solid approach for deploying the same image to different environments without rebuilding.
48-73: Runtime configuration approach is sound. The two-step approach (placeholder creation during build, actual values at runtime) ensures the configuration file always exists. The heredoc syntax for inline script creation is a clean approach.

Ensure the frontend's `index.html` loads `env-config.js` before the main application bundle, typically via a script tag like:

```html
<script src="/env-config.js"></script>
```
77-103: LGTM! Startup configuration is well-designed. Good practices:

- Pinned `serve@14.2.1` version for reproducibility
- `set -e` for early failure on errors
- `exec` to replace the shell process with serve (proper signal handling)
- `--no-request-logging` reduces log noise
- Health check and tini init system properly configured
```yaml
depends_on:
  postgres:
    condition: service_healthy
```
Backend should depend on Redis when using express profile.
The backend depends only on PostgreSQL, but in express mode it also requires Redis. When Redis is not available, the backend may fail to start or experience runtime errors.
Suggested fix
Since Redis is profile-conditional, you may need to handle this in the startup script or document that the express profile must be used. Alternatively, ensure the backend gracefully handles Redis unavailability during startup.
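A generic entrypoint-side retry helper could look like this (the probe command shown in the comment, service name, and timeouts are assumptions):

```shell
# Retry a probe command until it succeeds or the attempts are exhausted.
wait_for() {
  probe="$1"; retries="${2:-30}"; delay="${3:-2}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if $probe >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "Dependency probe '$probe' failed after $retries attempts" >&2
  return 1
}

# Hypothetical use in the backend entrypoint:
# wait_for "redis-cli -h redis -p 6379 ping" 30 2 || echo "Starting without Redis cache" >&2
```

Parameterizing the probe lets the same helper wait for Redis, Postgres, or MinIO without duplicating the loop, and keeps the "continue without cache" decision explicit and logged.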
🤖 Prompt for AI Agents
In @docker-compose.yaml around lines 161 - 163, The backend service currently
only lists PostgreSQL under depends_on; when using the express profile it must
also wait for Redis. Update the backend service's depends_on to include the
Redis service (with the same service_healthy condition) when profile: express,
or alternatively add startup logic in the backend entrypoint (e.g., in the
backend start script) to retry/wait for Redis availability and log a clear
message if Redis is unavailable; ensure references to the Docker Compose
depends_on block and the express profile (and the Redis service name "redis")
are updated accordingly.
```typescript
const googleMobileStrategy = process.env.GOOGLE_CLIENT_ID
  ? new CustomStrategy(handleMobileGoogleAuth)
  : null;
```
Logic inconsistency: Guard should check all mobile client IDs.
The initialization guard only checks GOOGLE_CLIENT_ID, but the authentication logic accepts three different client IDs (including GOOGLE_ANDROID_CLIENT_ID and GOOGLE_IOS_CLIENT_ID at lines 34-38). If only Android or iOS client IDs are configured without the web GOOGLE_CLIENT_ID, the strategy won't initialize even though mobile authentication should work.
🔧 Proposed fix to check any of the three client IDs
-const googleMobileStrategy = process.env.GOOGLE_CLIENT_ID
- ? new CustomStrategy(handleMobileGoogleAuth)
- : null;
+const googleMobileStrategy =
+ process.env.GOOGLE_CLIENT_ID ||
+ process.env.GOOGLE_ANDROID_CLIENT_ID ||
+ process.env.GOOGLE_IOS_CLIENT_ID
+ ? new CustomStrategy(handleMobileGoogleAuth)
+ : null;

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const googleMobileStrategy =
  process.env.GOOGLE_CLIENT_ID ||
  process.env.GOOGLE_ANDROID_CLIENT_ID ||
  process.env.GOOGLE_IOS_CLIENT_ID
    ? new CustomStrategy(handleMobileGoogleAuth)
    : null;
```
🤖 Prompt for AI Agents
In @worklenz-backend/src/passport/passport-strategies/passport-google-mobile.ts
around lines 110 - 112, The guard that initializes googleMobileStrategy only
checks GOOGLE_CLIENT_ID but handleMobileGoogleAuth accepts
GOOGLE_ANDROID_CLIENT_ID and GOOGLE_IOS_CLIENT_ID as well; change the
initialization condition for googleMobileStrategy to check whether any of
GOOGLE_CLIENT_ID, GOOGLE_ANDROID_CLIENT_ID, or GOOGLE_IOS_CLIENT_ID is set
(e.g., use a logical OR across those env vars) so the
CustomStrategy(handleMobileGoogleAuth) is created when any mobile client ID is
configured.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores
✏️ Tip: You can customize this high-level summary in your review settings.