feat(test): Splitting test_view file for better test structure. Closes #794 (#803)

Merged

regulartim merged 4 commits into intelowlproject:develop from drona-gyawali:split/test on Feb 16, 2026
Conversation

drona-gyawali (Contributor) commented on Feb 12, 2026

Description

This PR splits the test cases in test_views.py into separate files.

Related issues

Closes #794

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue).
  • New feature (non-breaking change which adds functionality).
  • Breaking change (fix or feature that would cause existing functionality to not work as expected).

Checklist

  • I have read and understood the rules about how to Contribute to this project.
  • The pull request is for the branch develop.
  • I have added documentation of the new features.
  • Linter (Ruff) gave 0 errors. If you have correctly installed pre-commit, it does these checks and adjustments on your behalf.
  • I have added tests for the feature/bug I solved. All the tests (new and old ones) gave 0 errors.
  • If changes were made to an existing model/serializer/view, the docs were updated and regenerated (check CONTRIBUTE.md).
  • If the GUI has been modified:
    • I have a provided a screenshot of the result in the PR.
    • I have created new frontend tests for the new component or updated existing ones.

Important Rules

  • If you fail to complete the checklist properly, your PR won't be reviewed by the maintainers.
  • If your changes decrease the overall test coverage (you will know after the Codecov CI job is done), you should add the required tests to fix the problem.
  • Every time you make changes to the PR and you think the work is done, you should explicitly ask for a review. After being reviewed and receiving a "change request", you should explicitly ask for a review again once you have made the requested changes.

drona-gyawali (Contributor, Author) commented on Feb 12, 2026

To my surprise, I noticed that test_feed_types.py was never actually running, neither in CI (as far as I can tell) nor during normal test execution. The reason was that there was no __init__.py file inside the api directory, so Django did not treat it as a proper package and therefore never discovered or executed the tests inside it.
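The discovery pitfall is easy to reproduce with plain unittest, which Django's test runner builds on. Below is a minimal, self-contained sketch (the directory and module names are made up for illustration): on recent CPython (3.11+), discovery silently skips a directory that has no __init__.py.

```python
import os
import tempfile
import unittest

# A tiny test module we drop into a freshly created directory.
SAMPLE_TEST = """\
import unittest

class FeedTypeTests(unittest.TestCase):
    def test_trivial(self):
        self.assertTrue(True)
"""


def discovered_count(pkg_name, with_init):
    """Create <tmp>/<pkg_name>/test_feed_types.py, optionally add
    __init__.py, and return how many tests unittest discovery finds."""
    root = tempfile.mkdtemp()
    pkg = os.path.join(root, pkg_name)
    os.makedirs(pkg)
    with open(os.path.join(pkg, "test_feed_types.py"), "w") as fh:
        fh.write(SAMPLE_TEST)
    if with_init:
        open(os.path.join(pkg, "__init__.py"), "w").close()
    # Fresh loader each time, so nothing is cached between the two runs.
    suite = unittest.TestLoader().discover(start_dir=root)
    return suite.countTestCases()


# On CPython 3.11+, a directory without __init__.py is silently skipped.
print("without __init__.py:", discovered_count("api_no_init", False))
print("with __init__.py:", discovered_count("api_with_init", True))
```

Dropping an empty __init__.py into the directory is all it takes for the loader to import it as a package and pick up the tests inside it.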

While splitting test_views and creating a new views directory under api, I added an __init__.py file to ensure proper test discovery. At that point, the tests started failing with a database connection already closed error. Initially, I was confused because I followed the same structure used in the cronjobs test directory, yet it was still failing.

After investigating further, I realized this was actually a long-standing issue. The failure had been hidden because test_feed_types.py was never executed before. Since it wasn’t running, the underlying error never surfaced.

The root cause turned out to be the overridden tearDownClass method in tests/__init__.py. In Django, when we override lifecycle methods like tearDownClass, we must call super().tearDownClass() to allow Django to complete its internal cleanup. That call was missing, so Django's internal test-database lifecycle was being interrupted, which eventually led to the “connection already closed” error.

(screenshot: failing test output)

After adding super().tearDownClass(), everything passed successfully.

I just wanted to clarify two things and would like your guidance:

  1. Was test_feed_types.py intentionally left out of test discovery?
  2. Was there a specific reason for not calling super().tearDownClass() in the overridden method?
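To illustrate the second point, here is a minimal sketch of the failure mode using plain unittest rather than Django. FakeConnection and BaseTestCase are hypothetical stand-ins for Django's test-database connection and django.test.TestCase, but the lifecycle problem is the same: an override that skips super().tearDownClass() silently drops the base class's cleanup.

```python
import unittest


class FakeConnection:
    """Hypothetical stand-in for Django's test-database connection."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class BaseTestCase(unittest.TestCase):
    """Plays the role of django.test.TestCase: it owns the connection
    and is responsible for closing it in tearDownClass."""

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.connection = FakeConnection()

    @classmethod
    def tearDownClass(cls):
        cls.connection.close()
        super().tearDownClass()


class BrokenTests(BaseTestCase):
    @classmethod
    def tearDownClass(cls):
        # BUG: no super().tearDownClass() -> base cleanup never runs.
        pass

    def test_something(self):
        self.assertFalse(self.connection.closed)


class FixedTests(BaseTestCase):
    @classmethod
    def tearDownClass(cls):
        super().tearDownClass()  # base cleanup runs as expected

    def test_something(self):
        self.assertFalse(self.connection.closed)


runner = unittest.TextTestRunner(verbosity=0)
runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(BrokenTests))
print("broken subclass left connection open:", not BrokenTests.connection.closed)
runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(FixedTests))
print("fixed subclass closed connection:", FixedTests.connection.closed)
```

In the real code the skipped cleanup is Django tearing down its test-database state, which is why the mistake only surfaces later as a “connection already closed” error in a subsequent test class.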

regulartim (Collaborator) commented:

Good catch @drona-gyawali !

Was test_feed_types.py intentionally left out of test discovery?

No, I don't think so. This was only merged recently in #725, and I don't remember leaving these tests out on purpose.

Was there a specific reason for not calling super().tearDownClass() in the overridden method?

I also don't think so. As far as I can see, calling the super method here is essential. The Django docs specifically warn not to forget it.

I guess you found 2 proper bugs! Thanks a lot! :)

regulartim (Collaborator) left a review:
Good work! 👍

I just want to discuss the best way to fix the bug you found.

    @@ -189,6 +189,7 @@ def tearDownClass(cls):
        IOC.objects.all().delete()
        CowrieSession.objects.all().delete()
        CommandSequence.objects.all().delete()
regulartim (Collaborator) commented on Feb 13, 2026:
Now that we properly call the super method, do we need these four model.objects.all().delete() calls?

If not, the override does nothing but call its super method, so we can remove the whole method. Am I right?

drona-gyawali (Contributor, Author) replied:

You're right. Also, correct me if I'm wrong, but it seems we use explicit deletes to ensure no stale data is left behind between tests.

My concern is that if we keep adding new models to the test suite, we'll always have to remember to update tearDown(), which might become a maintainability headache as the project grows. I agree that we should avoid explicit deletes if possible, but I was wondering: is there a specific performance reason for doing it manually, or can we safely rely on the super method?

regulartim (Collaborator) commented on Feb 13, 2026:

but it seems we are using explicit deletes because we want to ensure no stale data is left behind between tests

Yes, but that is exactly what tearDownClass is supposed to do. So I don't think we will end up with stale data if we remove the manual deletes.

My concern is that if we keep adding new models to the test suite, we’ll always have to remember to update tearDown(), which might become a maintainability headache as the project grows.

I agree.

is there a specific performance reason for doing it manually, or can we safely rely on super method?

I honestly don't know. This code was added before I joined the project. But if we remove it and all tests pass in a reasonable amount of time, then that's fine for me.

regulartim (Collaborator) left a review:

Thanks a lot! :)

regulartim merged commit f2d6ba4 into intelowlproject:develop on Feb 16, 2026. 4 checks passed.
drona-gyawali (Contributor, Author):

Thanks a lot! :)

Thank you very much :)
