test: comprehensive coverage improvement (35% → 87%) #120
Conversation
Add 603 new tests across 14 test files, bringing non-web coverage from 35% to 87%. Key improvements:
- telegram_backup.py: 61% → 94%
- listener.py: 50% → 92%
- connection.py: 43% → 100%
- telegram_import.py: 86% → 100%
- avatar_utils.py: 32% → 100%
- export_backup.py: 0% → 99%
- __main__.py: 0% → 98%
- setup_auth.py: 35% → 96%
- db/base.py: 19% → 90%
- db/adapter.py: 17% → 54%
- scheduler.py: 0% → 82%
- realtime.py: 21% → 87%
- db/migrate.py: 9% → 83%
| GitGuardian id | GitGuardian status | Secret | Commit | Filename | |
|---|---|---|---|---|---|
| 30456597 | Triggered | Generic Password | ff0b307 | tests/test_db_base.py | View secret |
| 30459025 | Triggered | Generic Password | 832735a | tests/test_db_migrate.py | View secret |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secrets safely. Learn the best practices here.
- Revoke and rotate these secrets.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation.
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 17 minutes and 5 seconds.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered using the `@coderabbitai full review` command. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (15)
📝 Walkthrough

This PR adds comprehensive test coverage across 13 new test modules, covering avatar utilities, database operations, Telegram backup/import, authentication, real-time notifications, scheduling, and CLI functionality. Tests total approximately 10,400 lines and employ mocking, async patterns, and integration-style assertions.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 3

✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

    @@            Coverage Diff             @@
    ##             main     #120      +/-   ##
    ===========================================
    + Coverage   44.56%   85.93%   +41.36%
    ===========================================
      Files          21       21
      Lines        5865     5865
    ===========================================
    + Hits         2614     5040    +2426
    + Misses       3251      825    -2426

☔ View full report in Codecov by Sentry.
@coderabbitai full review

✅ Actions performed
Full review triggered.
Actionable comments posted: 8
🧹 Nitpick comments (1)
tests/test_config_extended.py (1)
37-37: Prefer `sys.executable` over hardcoded `"python3"`.

Hardcoding `python3` breaks on environments where the running interpreter is `python`, `py`, or a venv-specific binary not named `python3`. `sys.executable` guarantees you run `-m src.config` with the same interpreter that's executing the tests.

🛠️ Proposed fix (apply to all three subprocess.run calls)

    +import sys
     ...
    -    ["python3", "-m", "src.config"],
    +    [sys.executable, "-m", "src.config"],

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_config_extended.py` at line 37, Replace hardcoded "python3" in the subprocess.run argv lists with sys.executable so the tests invoke the same interpreter running the test; update each occurrence of ["python3", "-m", "src.config"] used in the three subprocess.run calls and ensure tests/test_config_extended.py imports sys at top if not already present (look for the subprocess.run invocations to locate each replacement).
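The suggested pattern can be sketched as a self-contained snippet. Here a `-c` program stands in for `-m src.config` (which exists only in this repository) so the sketch runs anywhere; the point is that `sys.executable` pins the child process to the interpreter running the tests:

```python
import subprocess
import sys

# "-c" with a print stands in for "-m src.config"; using sys.executable
# instead of a literal "python3" makes the call work in venvs and on
# systems whose interpreter binary has a different name.
result = subprocess.run(
    [sys.executable, "-c", "print('src.config')"],
    capture_output=True,
    text=True,
)
assert result.returncode == 0
assert "src.config" in result.stdout
```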
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/test_config_extended.py`:
- Around line 85-87: The assertion is tautological because "src.config" is
appended to the haystack; update the test to actually verify the subprocess ran
by checking result.returncode (e.g., self.assertEqual(result.returncode, 0)) and
then assert that "src.config" appears in the real output
(self.assertIn("src.config", result.stdout + result.stderr)) without
concatenating the literal into the haystack; locate the failing assertion that
references result, stdout, and stderr and replace it with these two concrete
checks.
In `@tests/test_db_adapter.py`:
- Around line 1196-1204: The test contains invalid Python 3 exception syntax in
the fake_refresh function causing import failure; update the exception handling
in fake_refresh (the except clause used when calling setattr on obj with
attributes from vars(mock_account)) to use a parenthesized tuple for multiple
exceptions (i.e., catch (AttributeError, TypeError)) or split into separate
except blocks, then leave mock_session.refresh assigned to fake_refresh
unchanged.
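The fix reduces to Python 3 exception-tuple syntax. A minimal sketch (the `Frozen`/`safe_setattr` names are illustrative, not from the PR):

```python
# Python 2 accepted `except AttributeError, TypeError:`; in Python 3 that is
# a SyntaxError, and multiple exception types must be a parenthesized tuple.
class Frozen:
    __slots__ = ()  # instances accept no new attributes

def safe_setattr(obj, name, value):
    try:
        setattr(obj, name, value)
    except (AttributeError, TypeError):  # Python 3 syntax
        return False
    return True

class Plain:
    pass

assert safe_setattr(Frozen(), "x", 1) is False  # AttributeError swallowed
assert safe_setattr(Plain(), "x", 1) is True
```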
In `@tests/test_listener_extended.py`:
- Line 128: Many async test coroutines (e.g., test_stop_clears_running_flag and
the other ~72 async test methods in tests/test_listener_extended.py) are missing
the required `@pytest.mark.asyncio` decorator; add `@pytest.mark.asyncio` above
every async def test_* coroutine in this file and ensure pytest is imported
(import pytest) at the top if not already present so pytest-asyncio will run
those tests correctly.
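A decorated async test might look like the sketch below. The `Protector` class is a hypothetical stand-in for the listener; the decorator is the point, since pytest-asyncio only runs `async def` tests that carry the mark (unless the plugin is in auto mode or a module-level `pytestmark = pytest.mark.asyncio` is set):

```python
import asyncio
import pytest

class Protector:
    """Hypothetical stand-in for the listener under test."""
    def __init__(self):
        self._running = False
    def start(self):
        self._running = True
    async def stop(self):
        await asyncio.sleep(0)  # yield to the loop, as a real shutdown would
        self._running = False

@pytest.mark.asyncio
async def test_stop_clears_running_flag():
    protector = Protector()
    protector.start()
    await protector.stop()
    assert protector._running is False
```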
In `@tests/test_main_module.py`:
- Around line 240-267: Tests leak BACKUP_PATH/SESSION_DIR into os.environ and
touch real /tmp paths; update both test_main_data_dir_sets_environment_variables
and test_main_data_dir_creates_directories to run main() inside a sandboxed
environment by wrapping the call with patch.dict(os.environ, {}, clear=True) (so
BACKUP_PATH and SESSION_DIR do not persist across tests) and use a
tempfile.TemporaryDirectory() for the --data-dir argument instead of hard-coded
/tmp paths; keep the existing patch("pathlib.Path.mkdir") but if resolution
still hits the FS consider also patching pathlib.Path.resolve in the same
context. Reference: tests test_main_data_dir_sets_environment_variables,
test_main_data_dir_creates_directories, main(), os.environ["BACKUP_PATH"],
os.environ["SESSION_DIR"], and pathlib.Path.mkdir/resolve.
In `@tests/test_realtime.py`:
- Line 80: Add an import for pytest at the top of tests/test_realtime.py and
mark every async test coroutine with the pytest.mark.asyncio decorator (e.g.,
add `@pytest.mark.asyncio` above async def
test_init_detects_postgresql_from_db_manager and the other async def test_*
functions) so all 34 async tests are explicitly decorated per guidelines; ensure
the decorator is applied to each async test function in the file.
In `@tests/test_scheduler.py`:
- Around line 194-211: The test enables scheduler.config.fill_gaps which causes
_run_backup_job() to import and await src.telegram_backup.run_fill_gaps; fix by
explicitly mocking that symbol with an AsyncMock (e.g.,
patch("src.telegram_backup.run_fill_gaps", new_callable=AsyncMock, create=True))
whenever you call scheduler._run_backup_job() in these tests (including the
other case around lines 252-266), so the real gap-fill path is never executed
and the call returns a controlled value; ensure the mock is active in the same
context where _run_backup_job() is awaited.
- Line 183: This file contains 22 async test functions (e.g., async def
test_run_backup_job_calls_run_backup(self, scheduler_with_connection):) that are
missing the required pytest.mark.asyncio decorator; add "import pytest" at top
if absent and prepend `@pytest.mark.asyncio` above each async test function
definition (or apply pytestmark = pytest.mark.asyncio at module scope if you
prefer a module-level marker) so pytest-asyncio runs them correctly; ensure
every async test (functions starting with "test_" that are async) in
tests/test_scheduler.py is decorated.
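The `AsyncMock` patch with `create=True` can be sketched like this. A stub module stands in for `src.telegram_backup` so the snippet is self-contained, and `_run_backup_job` is a simplified stand-in for the scheduler's gap-fill path:

```python
import asyncio
import sys
import types
from unittest.mock import AsyncMock, patch

# Stub module standing in for src.telegram_backup; in the real test suite
# the actual module is patched instead.
src_pkg = types.ModuleType("src")
tb_mod = types.ModuleType("src.telegram_backup")
src_pkg.telegram_backup = tb_mod
sys.modules["src"] = src_pkg
sys.modules["src.telegram_backup"] = tb_mod

async def _run_backup_job():
    # mirrors the scheduler's gap-fill path: lazy import, then await
    from src.telegram_backup import run_fill_gaps
    return await run_fill_gaps()

with patch("src.telegram_backup.run_fill_gaps",
           new_callable=AsyncMock, create=True) as mock_fill:
    mock_fill.return_value = "gaps-filled"  # controlled result, no real work
    result = asyncio.run(_run_backup_job())

assert result == "gaps-filled"
mock_fill.assert_awaited_once()
```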
In `@tests/test_telegram_backup_extended.py`:
- Around line 1639-1665: The tests assert that backup.db.close is called but the
current run_backup implementation only ensures TelegramBackup.disconnect is
awaited (it does not close backup.db), so update the two tests
test_run_backup_connects_backs_up_disconnects and
test_run_backup_disconnects_on_error to stop expecting
mock_backup.db.close.assert_awaited_once(); instead only assert
mock_backup.connect.assert_awaited_once(),
mock_backup.backup_all.assert_awaited_once() (or that it raised), and
mock_backup.disconnect.assert_awaited_once() to match the behavior of
run_backup.
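The assertion set the prompt recommends can be exercised against a hedged sketch of `run_backup`'s contract as the review describes it (connect, back up, always disconnect; closing `backup.db` is not part of the flow):

```python
import asyncio
from unittest.mock import AsyncMock

async def run_backup(backup):
    # sketch of the described contract, not the project's implementation
    await backup.connect()
    try:
        await backup.backup_all()
    finally:
        await backup.disconnect()

mock_backup = AsyncMock()
asyncio.run(run_backup(mock_backup))

mock_backup.connect.assert_awaited_once()
mock_backup.backup_all.assert_awaited_once()
mock_backup.disconnect.assert_awaited_once()
mock_backup.db.close.assert_not_awaited()  # db.close is never awaited here
```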
---
Nitpick comments:
In `@tests/test_config_extended.py`:
- Line 37: Replace hardcoded "python3" in the subprocess.run argv lists with
sys.executable so the tests invoke the same interpreter running the test; update
each occurrence of ["python3", "-m", "src.config"] used in the three
subprocess.run calls and ensure tests/test_config_extended.py imports sys at top
if not already present (look for the subprocess.run invocations to locate each
replacement).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 87de1301-977e-4771-a9d3-c2fe7360969e
📒 Files selected for processing (14)
tests/test_avatar_utils.py, tests/test_config_extended.py, tests/test_connection.py, tests/test_db_adapter.py, tests/test_db_base.py, tests/test_db_migrate.py, tests/test_export_backup.py, tests/test_listener_extended.py, tests/test_main_module.py, tests/test_realtime.py, tests/test_scheduler.py, tests/test_setup_auth.py, tests/test_telegram_backup_extended.py, tests/test_telegram_import_extended.py
    async def fake_refresh(obj):
        for attr in vars(mock_account):
            if not attr.startswith("_"):
                try:
                    setattr(obj, attr, getattr(mock_account, attr))
                except AttributeError, TypeError:
                    pass

    mock_session.refresh = fake_refresh
🧩 Analysis chain

🏁 Script executed:

    #!/bin/bash
    # Confirm the file has a syntax error as-is.
    python3 -c "import ast, sys; ast.parse(open('tests/test_db_adapter.py').read()); print('PARSED OK')" 2>&1 || echo "SYNTAX ERROR CONFIRMED"
    # Also check whether ruff is wired into the tests/ tree
    rg -n 'tests' pyproject.toml setup.cfg ruff.toml .ruff.toml 2>/dev/null | rg -i 'exclude|include|src' || true

Repository: GeiserX/Telegram-Archive
Length of output: 484
SyntaxError on line 1201 — test file will not parse.

Line 1201 uses invalid Python 3 syntax:

    except AttributeError, TypeError:

This causes `SyntaxError: multiple exception types must be parenthesized` and prevents the entire tests/test_db_adapter.py module from being imported. All tests in this file fail to collect.

Fix

    async def fake_refresh(obj):
        for attr in vars(mock_account):
            if not attr.startswith("_"):
                try:
                    setattr(obj, attr, getattr(mock_account, attr))
    -            except AttributeError, TypeError:
    +            except (AttributeError, TypeError):
                    pass

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_db_adapter.py` around lines 1196 - 1204, The test contains invalid
Python 3 exception syntax in the fake_refresh function causing import failure;
update the exception handling in fake_refresh (the except clause used when
calling setattr on obj with attributes from vars(mock_account)) to use a
parenthesized tuple for multiple exceptions (i.e., catch (AttributeError,
TypeError)) or split into separate except blocks, then leave
mock_session.refresh assigned to fake_refresh unchanged.
        protector.start()
        assert protector._running is True

    async def test_stop_clears_running_flag(self):
🧩 Analysis chain

🏁 Script executed:

    #!/bin/bash
    python - <<'PY'
    import ast
    from pathlib import Path
    path = Path("tests/test_listener_extended.py")
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.AsyncFunctionDef) and node.name.startswith("test_"):
            has_asyncio_mark = any(
                (isinstance(dec, ast.Attribute) and dec.attr == "asyncio")
                or (
                    isinstance(dec, ast.Call)
                    and isinstance(dec.func, ast.Attribute)
                    and dec.func.attr == "asyncio"
                )
                for dec in node.decorator_list
            )
            if not has_asyncio_mark:
                print(f"{path}:{node.lineno}: missing pytest.mark.asyncio")
    PY

Repository: GeiserX/Telegram-Archive
Length of output: 4789
Add @pytest.mark.asyncio to all async test methods.
All 73 async test methods in this file are missing the @pytest.mark.asyncio decorator required by the coding guidelines. This can cause tests to fail collection or be skipped depending on pytest-asyncio mode.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_listener_extended.py` at line 128, Many async test coroutines
(e.g., test_stop_clears_running_flag and the other ~72 async test methods in
tests/test_listener_extended.py) are missing the required `@pytest.mark.asyncio`
decorator; add `@pytest.mark.asyncio` above every async def test_* coroutine in
this file and ensure pytest is imported (import pytest) at the top if not
already present so pytest-asyncio will run those tests correctly.
    class TestRealtimeNotifierInitMethod:
        """Tests for RealtimeNotifier.init (async)."""

        async def test_init_detects_postgresql_from_db_manager(self):
🧩 Analysis chain

🏁 Script executed:

    #!/bin/bash
    python - <<'PY'
    import ast
    from pathlib import Path
    path = Path("tests/test_realtime.py")
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.AsyncFunctionDef) and node.name.startswith("test_"):
            has_asyncio_mark = any(
                (isinstance(dec, ast.Attribute) and dec.attr == "asyncio")
                or (
                    isinstance(dec, ast.Call)
                    and isinstance(dec.func, ast.Attribute)
                    and dec.func.attr == "asyncio"
                )
                for dec in node.decorator_list
            )
            if not has_asyncio_mark:
                print(f"{path}:{node.lineno}: missing pytest.mark.asyncio")
    PY

Repository: GeiserX/Telegram-Archive
Length of output: 1913
Add pytest.mark.asyncio to all async test methods.
This file contains 34 async test functions without the pytest.mark.asyncio decorator. Per coding guidelines, all async tests in tests/**/*.py must be explicitly marked. Add import pytest at the top and decorate each async def test_* method accordingly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_realtime.py` at line 80, Add an import for pytest at the top of
tests/test_realtime.py and mark every async test coroutine with the
pytest.mark.asyncio decorator (e.g., add `@pytest.mark.asyncio` above async def
test_init_detects_postgresql_from_db_manager and the other async def test_*
functions) so all 34 async tests are explicitly decorated per guidelines; ensure
the decorator is applied to each async test function in the file.
        scheduler._listener = None
        return scheduler

    async def test_run_backup_job_calls_run_backup(self, scheduler_with_connection):
🧩 Analysis chain

🏁 Script executed:

    #!/bin/bash
    python - <<'PY'
    import ast
    from pathlib import Path
    path = Path("tests/test_scheduler.py")
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.AsyncFunctionDef) and node.name.startswith("test_"):
            has_asyncio_mark = any(
                (isinstance(dec, ast.Attribute) and dec.attr == "asyncio")
                or (
                    isinstance(dec, ast.Call)
                    and isinstance(dec.func, ast.Attribute)
                    and dec.func.attr == "asyncio"
                )
                for dec in node.decorator_list
            )
            if not has_asyncio_mark:
                print(f"{path}:{node.lineno}: missing pytest.mark.asyncio")
    PY

Repository: GeiserX/Telegram-Archive
Length of output: 1321
Add @pytest.mark.asyncio to all async test methods.
All 22 async test functions in this file lack the required @pytest.mark.asyncio decorator. Without it, tests may fail collection or be skipped depending on pytest-asyncio mode. Per coding guidelines for tests/**/*.py: "Use pytest.mark.asyncio for async tests."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_scheduler.py` at line 183, This file contains 22 async test
functions (e.g., async def test_run_backup_job_calls_run_backup(self,
scheduler_with_connection):) that are missing the required pytest.mark.asyncio
decorator; add "import pytest" at top if absent and prepend `@pytest.mark.asyncio`
above each async test function definition (or apply pytestmark =
pytest.mark.asyncio at module scope if you prefer a module-level marker) so
pytest-asyncio runs them correctly; ensure every async test (functions starting
with "test_" that are async) in tests/test_scheduler.py is decorated.
- Use sys.executable instead of hardcoded python3 in subprocess calls
- Fix tautological assertion in config test
- Fix Python 2 exception syntax in db adapter test
- Eliminate env var leakage with patch.dict clear=True and tempfile
- Remove assertions for uncalled db.close in run_backup tests
Second pass of coverage improvements:
- db/adapter.py: 54% → 90% (+82 tests)
- telegram_backup.py: 94% → 97% (+36 tests)
- listener.py: 92% → 99% (+16 tests)
- realtime.py: 87% → 100% (+4 tests)
- db/base.py: 90% → 99% (+5 tests)
- db/migrate.py: 83% → 95% (+5 tests)
- db/__init__.py: 60% → 100% (+4 tests)
- scheduler.py: 82% → 91% (+8 tests)
- web/thumbnails.py: 0% → 98% (29 tests)
- web/push.py: 0% → ~75% on CI (22 tests, skipped locally)
- web/main.py: 0% → ~30% on CI (68 tests, skipped locally)
Summary
Type of Change
Database Changes
`scripts/`

Data Consistency Checklist
N/A — test-only changes, no production code modified.
Coverage Improvements
`telegram_backup.py`, `listener.py`, `connection.py`, `telegram_import.py`, `avatar_utils.py`, `export_backup.py`, `__main__.py`, `setup_auth.py`, `config.py`, `db/base.py`, `db/migrate.py`, `scheduler.py`, `realtime.py`, `db/adapter.py`
- test_telegram_backup_extended.py — 105 tests
- test_listener_extended.py — 111 tests
- test_db_adapter.py — 97 tests (rewritten)
- test_main_module.py — 49 tests
- test_connection.py — 36 tests
- test_scheduler.py — 34 tests
- test_realtime.py — 34 tests
- test_db_base.py — 30 tests
- test_export_backup.py — 18 tests
- test_setup_auth.py — 16 tests
- test_db_migrate.py — 15 tests
- test_avatar_utils.py — 12 tests
- test_telegram_import_extended.py — 42 tests
- test_config_extended.py — 3 tests

Testing
- (`python -m pytest tests/ -v`) — 1008 passed
- (`ruff check .`)
- (`ruff format --check .`)
Deployment Notes
Summary by CodeRabbit
Release Notes