Programming, Python, Tech Journey

Stealthy Automation in Selenium and Playwright: When and Why It’s Used

Introduction
Automation tools like Selenium and Playwright are widely used for browser automation, testing, and data collection. A common question among beginners concerns “stealthy automation”: a set of techniques that make automated browsers behave like real humans so that websites cannot detect them.

What Is Stealthy Automation?
Stealthy automation refers to techniques used to hide the fact that a browser is being controlled by code. This includes masking browser fingerprints, removing automation flags, and simulating human-like behavior. It is most commonly used in:

  • Web scraping on sites that block bots
  • Automated testing where human-like interactions are required

It is not needed when testing websites you own or automating internal systems.

Selenium and Stealth
Selenium is a popular browser automation tool, but by default, it is easily detectable by websites due to:

  • navigator.webdriver flag being set to true
  • Automation-specific browser extensions
  • Headless browser detection

To reduce detection, developers can:

  • Use undetected-chromedriver to hide automation flags
  • Disable Chrome automation extensions and experimental switches
  • Simulate human behavior with random delays and scrolling
  • Avoid headless mode
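As a minimal sketch of the “human-like delays” idea, the helper below pauses for a random interval between actions; the `human_pause` name and its bounds are my own invention, not part of Selenium or any stealth library:

```python
import random
import time

def human_pause(min_s=0.5, max_s=2.0):
    """Sleep for a random, human-like interval and return its length."""
    delay = random.uniform(min_s, max_s)
    time.sleep(delay)
    return delay

# In a Selenium script this would sit between actions, e.g.:
#   element_one.click()
#   human_pause()
#   element_two.click()
d = human_pause(0.01, 0.05)  # tiny bounds just for demonstration
print(0.01 <= d <= 0.05)  # True
```

Randomizing both the length and the placement of pauses avoids the fixed-interval timing patterns that naive scripts exhibit.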

Playwright and Stealth
Playwright is another browser automation framework. It is generally harder to detect than Selenium due to:

  • Cleaner automation architecture
  • Better browser emulation and human-like behavior options
  • Fewer default automation traces

Do All Developers Use Stealth?
No. Stealth is not used by default in professional automation. Use cases:

  1. Testing Websites (QA/Dev): Stealth is usually unnecessary because developers control the site. Speed and reliability matter more than looking human.
  2. Web Scraping / Data Collection: Stealth is often applied if sites have anti-bot mechanisms, but only when legal and allowed.
  3. Internal Business Automation (RPA): Stealth is generally not needed.
  4. Abuse / Gray-area Automation: Heavily stealthy automation is risky and may violate laws or site terms of service.

Why Stealth Isn’t Default
Using stealth adds complexity, can slow down automation, increases maintenance due to browser updates, and makes debugging harder. Professionals adopt a “use the simplest tool that works” approach.

Conclusion
Stealthy automation is a specialized tool, not a standard requirement. It is best applied only when necessary for scraping or interacting with sites that actively block automation. For testing and internal automation, normal Selenium or Playwright setups are sufficient.

Standard
Programming, Python, Tech Journey

Understanding __init__.py in Python: The Key to Organized Packages

Introduction

Python is a versatile programming language known for its simplicity and readability. One feature that often confuses beginners is the __init__.py file, which plays a crucial role in Python packages. Understanding its purpose can help you better organize your code and create reusable modules.


What is __init__.py?

__init__.py is a special Python file that is placed inside a directory to make Python treat that directory as a package. In older versions of Python, it was mandatory to include this file; in newer versions, it is optional for namespace packages, but still widely used for package initialization and controlling imports.


Primary Uses of __init__.py

  1. Marking a Directory as a Package
    When Python encounters a directory containing __init__.py, it treats the directory as a package, allowing you to import modules from it using dot notation: from mypackage import mymodule
  2. Package Initialization
    You can include initialization code inside __init__.py that runs when the package is first imported. This is useful for setting up package-level variables, logging, or other configurations.
  3. Controlling the Public API
    Perhaps the most important use of __init__.py is re-exporting selected modules or functions to simplify the package interface for users. Without it, users would have to know the internal structure of your package to import components: # Inside mypackage/__init__.py from .module1 import func1 from .module2 import ClassA # Now users can do: from mypackage import func1, ClassA This allows the package to hide internal complexity and maintain a clean, stable API.

Real-world Example

Many popular Python libraries use this approach. For instance, in SQLAlchemy, a function called create_engine is defined deep inside the internal structure (sqlalchemy/engine/create.py). However, users can import it directly from the top-level package:

from sqlalchemy import create_engine

This works because SQLAlchemy’s __init__.py re-exports the function from the internal submodule, providing a simple and intuitive interface for developers.
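To see this mechanism end to end, here is a small, self-contained sketch that builds a throwaway package on disk and imports the re-exported function; the mypackage, module1, and func1 names mirror the earlier example and are purely illustrative:

```python
import os
import sys
import tempfile

# Build a throwaway package that re-exports a function via __init__.py.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypackage")
os.makedirs(pkg)

with open(os.path.join(pkg, "module1.py"), "w") as f:
    f.write("def func1():\n    return 'hello from func1'\n")

# __init__.py lifts func1 to the package's top level.
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .module1 import func1\n")

sys.path.insert(0, root)
from mypackage import func1  # no need to know about module1

print(func1())  # hello from func1
```

Without the re-export in __init__.py, users would have to write `from mypackage.module1 import func1`, exposing the internal layout.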


Summary

The __init__.py file is more than just a marker file. It is a powerful tool that helps:

  • Structure Python packages clearly
  • Initialize package-level code
  • Control what parts of the package are publicly accessible

By using __init__.py thoughtfully, you can make your Python packages more maintainable and user-friendly, hiding internal details while exposing a clean interface for others to use.

Python

Optimizing Visual Studio Code for Professional Python Development

Visual Studio Code (VS Code) is a widely used, lightweight editor that can be transformed into a powerful development environment with the right tools and configurations. In this article, we’ll walk you through the must-have extensions, configurations, and practices for making VS Code a professional-grade Python development setup. Whether you’re a beginner or an experienced developer, these tips will help you maximize productivity and maintain clean, efficient code.

1. Essential Extensions for Python Development

Python (by Microsoft)

This extension is the backbone of Python development in VS Code. It provides essential features like IntelliSense (code completion), linting, debugging, and testing.

  • Features: Syntax highlighting, code navigation, support for virtual environments, integrated Jupyter notebook support.
  • Installation: You can install it directly from the Extensions Marketplace by searching for “Python.”

Pylance (by Microsoft)

Pylance is an optimized language server extension that works alongside the Python extension, offering deeper IntelliSense, fast analysis, and type checking using Python’s type hinting.

  • Why You Need It: Enhanced code navigation and type checking, which makes your code more robust and maintainable.

Black Formatter

Black is an opinionated Python code formatter that ensures your code is consistently styled. It helps streamline your workflow by automatically formatting your code when you save your files, enforcing standards like PEP8.

  • How to Use: Install Black from the Extensions Marketplace and configure VS Code to format on save. Add the following to your settings.json:

    "editor.formatOnSave": true,
    "python.formatting.provider": "black"

Python Docstring Generator

Maintaining high-quality documentation is essential in professional development. This extension helps you generate docstrings automatically for your functions, classes, and modules.

  • Why You Need It: Easily create well-formatted docstrings in Python’s standard formats (like Google, NumPy, or Sphinx) to keep your codebase well-documented.
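As an illustration of what such a generated docstring looks like, here is a hypothetical function documented in the Google style (the `scale` function and its parameters are invented for the example):

```python
def scale(values, factor):
    """Multiply every value by a constant factor.

    Args:
        values (list[float]): Numbers to scale.
        factor (float): Multiplier applied to each value.

    Returns:
        list[float]: A new list with each value multiplied by factor.
    """
    return [v * factor for v in values]

print(scale([1.0, 2.0], 3.0))  # [3.0, 6.0]
```

The extension generates the Args/Returns skeleton from the function signature; you fill in the descriptions.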

Pyright (Static Type Checker)

Pyright is a static type checker designed for Python. Using it with type hints allows you to catch bugs early and improve code readability.

  • How to Use: Once installed, Pyright will analyze your code and point out potential issues based on type annotations. For example:

    def greet(name: str) -> str:
        return "Hello, " + name

Linting: Flake8 or Pylint

Linters help ensure that your code follows best practices and is free of common errors. Flake8 and Pylint are popular Python linters that you can easily integrate into your workflow.

  • How to Use: Install Flake8 or Pylint from the Marketplace. Configure your settings.json to enable the chosen linter.

    "python.linting.flake8Enabled": true,
    "python.linting.enabled": true

Jupyter (by Microsoft)

If you work with Jupyter notebooks for data science or machine learning, the Jupyter extension allows you to open, run, and edit Jupyter notebooks directly in VS Code.

  • Why You Need It: Seamless integration with notebooks, especially useful for interactive coding and visualization tasks like data exploration.

GitLens

For version control, GitLens is a powerful extension that extends the capabilities of Git inside VS Code. It allows you to visualize code changes, understand commit history, and perform advanced Git operations directly from the editor.

  • Why You Need It: It enhances Git’s functionality, making it easier to track changes, especially in a team setting.

2. Configuring VS Code for Python Development

Once you have the right extensions installed, it’s time to fine-tune your development environment with the right settings.

Setting up the Python Interpreter

VS Code needs to know which Python interpreter to use for your project, especially when you’re using virtual environments.

  • How to Select an Interpreter: Open the command palette (Ctrl + Shift + P or Cmd + Shift + P on macOS) and type Python: Select Interpreter. Choose the correct interpreter for your project.

Automatic Formatting and Linting

Enabling auto-formatting and linting ensures that your code remains clean and follows coding standards. You can add these settings to your settings.json for a seamless experience:

"editor.formatOnSave": true,
"python.formatting.provider": "black",
"python.linting.flake8Enabled": true,
"python.linting.enabled": true

Terminal Integration with Virtual Environments

For managing virtual environments, it’s helpful to have the terminal automatically activate the correct Python environment when opened. Add this to your settings.json:

"python.terminal.activateEnvironment": true

Testing Framework Integration

You can configure VS Code to run tests using popular frameworks like Unittest, Pytest, or Nose2. Set up your preferred testing framework in settings.json:

"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"python.testing.nosetestsEnabled": false

Now, you can run tests directly from VS Code using the testing panel or the command palette.


3. Advanced Features for Professional Development

Using Snippets for Faster Coding

VS Code supports code snippets, which can be a great time-saver. For example, you can create snippets for frequently used code structures like function definitions or classes. There are many pre-built Python snippets extensions available in the Marketplace.

Debugging in VS Code

The Python extension includes a robust debugger, allowing you to set breakpoints, inspect variables, and step through your code. You can start debugging by pressing F5 after setting breakpoints in your code.

Workspace Settings for Team Consistency

When working in teams, it’s important to share consistent settings across the workspace. You can save project-specific settings in the .vscode/settings.json file, ensuring that all developers working on the project follow the same configurations (e.g., formatting rules, interpreter paths, linting preferences).


4. Organizing Your Projects for Scalability

Project Structure

A clean and scalable project structure is important, especially when collaborating with others. Here’s a standard structure to follow for professional Python projects:

my_project/
│
├── src/
│   ├── main.py
│   └── utils.py
│
├── tests/
│   └── test_main.py
│
├── .vscode/
│   └── settings.json
│
├── .gitignore
├── requirements.txt
└── README.md
  • src/: Contains your source code.
  • tests/: Contains unit tests.
  • .vscode/: Stores project-specific settings.
  • requirements.txt: Lists the dependencies for the project.
  • README.md: Contains documentation for the project.
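As a sketch of what tests/test_main.py might contain, here is a minimal pytest-style test; the `add` helper stands in for a function that would normally be imported from src/main.py (both names are assumptions for illustration):

```python
# tests/test_main.py (illustrative)
def add(a, b):
    """Stand-in for a function normally imported from src/main.py."""
    return a + b

def test_add():
    assert add(2, 3) == 5

test_add()  # pytest would normally discover and run this for you
```

With `"python.testing.pytestEnabled": true` set as shown earlier, tests like this appear in the VS Code testing panel automatically.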

Version Control with Git

In professional environments, Git is a must for version control. With GitLens and Git integration in VS Code, you can manage your repository, commit changes, resolve merge conflicts, and view commit history, all from the editor.


5. Best Practices for Python Development in VS Code

  • Use Virtual Environments: Always use virtual environments to isolate dependencies per project.
  • Write Tests Early: Set up testing frameworks like Pytest early in your project to catch bugs before they become problems.
  • Document Your Code: Use docstrings and extensions like Python Docstring Generator to maintain well-documented code.
  • Follow PEP8: Enforce code formatting standards like PEP8 using Black and linting tools like Flake8 or Pylint.
  • Use Git Effectively: Regular commits, meaningful commit messages, and leveraging GitLens for in-depth repository management ensure version control best practices.

Conclusion

By enhancing Visual Studio Code with these powerful extensions and fine-tuning the environment to your needs, you’ll have a highly efficient, professional setup for Python development. From formatting and linting to debugging and version control, this guide helps you turn VS Code into a full-fledged Python IDE, making your development process faster, cleaner, and more scalable.

Programming, Python

Python Closures

def make_multiplier(factor):
	"""
	code
	"""

double = make_multiplier(2)
triple = make_multiplier(3)

print(double(5)) # Output: 10
print(triple(5)) # Output: 15

First, try to work out for yourself what is happening here. Don’t worry about what happens inside the make_multiplier function; you just need to figure out what it returns. Take a few minutes, or longer if you like, before reading further.

Okay, let’s check it. The ‘make_multiplier’ function is called twice with different arguments, and the results are saved in the ‘double’ and ‘triple’ variables. So we know that ‘make_multiplier’ returns something.

In the two print statements, we can see that ‘double’ and ‘triple’ are called as functions. So ‘make_multiplier’ must itself return a function. Let’s update the code above with this new fact. Try it yourself before going further.

def make_multiplier(factor):
	def multiply(x):
		"""
		code
		"""
	return multiply

double = make_multiplier(2)
triple = make_multiplier(3)

print(double(5)) # Output: 10
print(triple(5)) # Output: 15

I don’t think the code above needs much explanation. We simply defined the ‘multiply’ function inside the ‘make_multiplier’ function. We call ‘make_multiplier’ the outer (or enclosing) function, and ‘multiply’ the inner (or nested) function; “nested function” and “inner function” mean the same thing.

Let’s go further. Looking at the print statements, we can see that ‘double’ multiplies its argument by 2 and ‘triple’ multiplies its argument by 3. Where does this multiplication happen? It must happen inside the ‘multiply’ function. Update the code again, and try it yourself first.

def make_multiplier(factor):
	def multiply(x):
		return factor*x
	return multiply

double = make_multiplier(2)
triple = make_multiplier(3)

print(double(5)) # Output: 10
print(triple(5)) # Output: 15

If you couldn’t do it, at least try to understand the code above. The calculation inside ‘multiply’ needs two values. One is clearly passed in when we call double/triple. Where does the other one come from? It was passed earlier to ‘make_multiplier’. Passing an argument to ‘multiply’ is nothing unusual, but using ‘factor’, the argument of the outer/enclosing function, inside ‘multiply’ should feel strange. Why? Because we call ‘make_multiplier’ (the outer function) before we ever call ‘multiply’ (the inner function). By that point, ‘make_multiplier’ has already returned and finished. Normally, once a function returns, we can’t access its local variables (its function scope) because they are destroyed when the function finishes. Yet here we access ‘factor’ even after ‘make_multiplier’ has returned. How? We used ‘factor’ in the inner function and then returned the inner function, so the inner function “remembers” the value of ‘factor’ before it would have been destroyed. This is called a closure: a nested function that can access and remember the values of its outer/enclosing function even after the outer function has returned.

Here is the summary

  • a function can return another function
  • outer function = enclosing function
  • inner function = nested function
  • when the inner function accessing outer function’s variable even after completing(returning) the outer function, then we call it a ‘closure’.
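If you want to see this “remembering” directly, Python stores the captured values on the inner function’s __closure__ attribute; a quick sketch using the same example:

```python
def make_multiplier(factor):
    def multiply(x):
        return factor * x
    return multiply

double = make_multiplier(2)
triple = make_multiplier(3)

# Each closure carries its own captured 'factor' in a cell object.
print(double.__closure__[0].cell_contents)  # 2
print(triple.__closure__[0].cell_contents)  # 3
print(double(5), triple(5))                 # 10 15
```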

Programming, Python

List Comprehensions in Python

  • List comprehensions are a way to create a new list from an existing iterable with a shorter syntax
  • To do the same thing without a list comprehension, we need a for loop with conditional statements
  • With a list comprehension it is much simpler
  • Here is the syntax
    • newlist = [expression for item in iterable if condition == True]
    • In expression, we define what should be added to the newlist. It can be simply the item, the item after some modification, or something else.
    • The iterable can be anything that can be iterated over
  • Let’s compare the two approaches with an example
# create new list with numbers higher than 30 without list comprehensions
list_one = [32, 25, 15, 12, 43, 32, 48, 68]
new_list = []

for item in list_one:
    if item>30:
        new_list.append(item)
print(new_list)
# [32, 43, 32, 48, 68]

# create new list with numbers higher than 30 with list comprehensions
new_list = [item for item in list_one if item>30]
print(new_list)
# [32, 43, 32, 48, 68]
  • I think you can see how much easier it is to write the code this way
  • We can also modify the items going into new_list
new_list = ['item is: {}'.format(item) for item in list_one if item>30]
print(new_list)
# ['item is: 32', 'item is: 43', 'item is: 32', 'item is: 48', 'item is: 68']

  • we can also use list comprehensions with multiple lists
# create new_list with pairs whose sum is higher than 40
# (item_one + item_two > 40), without list comprehensions
list_one = [32, 25, 15, 12, 43, 32, 48, 68]
list_two = [15, 8, 63, 52, 74, 23, 31, 12]
new_list = []

for item_one in list_one:
    for item_two in list_two:
        if (item_one+item_two) > 40:
            new_list.append((item_one, item_two))
print(new_list)
# [(32, 15), (32, 63), (32, 52), (32, 74), (32, 23), (32, 31), (32, 12),
#  (25, 63), (25, 52), (25, 74), (25, 23), (25, 31), (15, 63), (15, 52),
#  (15, 74), (15, 31), (12, 63), (12, 52), (12, 74), (12, 31), (43, 15),
#  (43, 8), (43, 63), (43, 52), (43, 74), (43, 23), (43, 31), (43, 12),
#  (32, 15), (32, 63), (32, 52), (32, 74), (32, 23), (32, 31), (32, 12),
#  (48, 15), (48, 8), (48, 63), (48, 52), (48, 74), (48, 23), (48, 31),
#  (48, 12), (68, 15), (68, 8), (68, 63), (68, 52), (68, 74), (68, 23),
#  (68, 31), (68, 12)]


# create new_list with pairs whose sum is higher than 40
# (item_one + item_two > 40), with list comprehensions
new_list = [(item_one, item_two) for item_one in list_one for item_two in list_two if (item_one+item_two)>40]
print(new_list)
# [(32, 15), (32, 63), (32, 52), (32, 74), (32, 23), (32, 31), (32, 12),
#  (25, 63), (25, 52), (25, 74), (25, 23), (25, 31), (15, 63), (15, 52),
#  (15, 74), (15, 31), (12, 63), (12, 52), (12, 74), (12, 31), (43, 15),
#  (43, 8), (43, 63), (43, 52), (43, 74), (43, 23), (43, 31), (43, 12),
#  (32, 15), (32, 63), (32, 52), (32, 74), (32, 23), (32, 31), (32, 12),
#  (48, 15), (48, 8), (48, 63), (48, 52), (48, 74), (48, 23), (48, 31),
#  (48, 12), (68, 15), (68, 8), (68, 63), (68, 52), (68, 74), (68, 23),
#  (68, 31), (68, 12)]
  • I hope you enjoyed this.
Python, Tech Journey

Reshaping Arrays with Python

  • NumPy’s reshape method can be used to change the shape of arrays

  • Here, reshaping not only means changing the number of dimensions an array has, like changing a 1D array into a 2D or 3D array and vice versa.

  • With reshaping it is also possible to change an array’s structure without changing its dimensions. For example, we can change a (2, 3) 2D array, which has 2 rows and 3 columns, into a (3, 2) 2D array, which has 3 rows and 2 columns.

  • To make this concrete, let’s work through a real coding example

  • Suppose we initially have a 1D sequence like [2, 4, 6, 5, 8, 10]. Let’s define it. (Actually, Python has no built-in arrays; what we define here is a list. It looks like an array, but if you check its type, you will see it is a list.)

    arr = [2, 4, 6, 5, 8, 10]

  • Then convert this into a NumPy array (this, and the snippets below, assume import numpy has been done first)

    import numpy
    narr = numpy.array(arr)

  • If you want, you can check the data type of the created object using the type() function

    type(narr)
  • You can also check the shape of this array (you will see it is a 1D array)

    numpy.shape(narr)

Here, for a 1D array it gives output like (x,), for 2D arrays (x, y), and for 3D arrays (x, y, z). The variables x, y, and z give the length (number of elements) of each dimension. For example, in an (x, y) 2D array, x is the number of rows and y is the number of columns.

  • Before converting, print narr to see what our (6,) 1D NumPy array looks like
  • Now, as an example, let’s convert this (6,) 1D array into a (2, 3) 2D array

    shp1_narr = numpy.reshape(narr, (2, 3))

    Here, the first argument is the NumPy array we want to convert, and the second argument is the new shape, given as a tuple. Check the shape of the created array and print it to see what it looks like.
  • Note that only the shape has changed; the order of the elements stays the same.


  • Let’s convert this (2, 3) 2D array into a (3, 2) 2D array, then check its shape and print it
  • Nice…

  • Finally, let’s convert this (3, 2) 2D array back into a (6,) 1D array
  • I think you now understand reshaping. You can reshape sequences without NumPy too, but as you can see, NumPy makes it really quick and simple.
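The whole walkthrough above can be run end to end as follows; the variable names follow the article, while shp2_narr and flat_narr are names I have added for the steps whose code was not shown:

```python
import numpy

arr = [2, 4, 6, 5, 8, 10]                     # a plain Python list
narr = numpy.array(arr)                        # (6,) 1D array
print(narr.shape)                              # (6,)

shp1_narr = numpy.reshape(narr, (2, 3))        # 2 rows, 3 columns
print(shp1_narr.shape)                         # (2, 3)

shp2_narr = numpy.reshape(shp1_narr, (3, 2))   # 3 rows, 2 columns
print(shp2_narr.shape)                         # (3, 2)

flat_narr = numpy.reshape(shp2_narr, (6,))     # back to 1D
print(flat_narr.tolist())                      # [2, 4, 6, 5, 8, 10]
```

Because reshape preserves row-major element order, flattening the array at the end recovers the original sequence.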

Linux, Python, Tech Journey

How to create a Python Virtual Environment with pipenv on Linux

  • Install pipenv
python3.9 -m pip install pipenv
  • Create virtual environment
    • Here we define the default Python version for the environment (here it is Python 3.9)
    • The created virtual environment is not added to the current working directory; it is located at ‘~/.local/share/virtualenvs/’ (in the user’s home directory)
python3.9 -m pipenv --python 3.9
  • Activate virtual environment
python3.9 -m pipenv shell
  • Deactivating current virtual environment
deactivate
  • Removing current virtual environment
python3.9 -m pipenv --rm

Python, Tech Journey

How to change Python version for IPython

  • First, find the IPython location using the ‘which’ command (which ipython)
  • Then open that IPython script with a text editor

nano /usr/bin/ipython

  • Then change the ‘VERSION’ number in the script to the version you want (here I changed it to 3.7)
  • Save the file, and that’s it (sometimes the system will ask you to re-login)
  • Enter the ‘ipython’ command to check the change