How can I mark a test as skipped during collection using `pytest_collection_modifyitems`?

I want pytest to collect all tests and then, using the `pytest_collection_modifyitems` hook, skip certain tests based on a condition retrieved from a database.

The common solution involves accessing a protected member (`_request`) like this:

import pytest

def pytest_collection_modifyitems(items, config):
    # get skip condition from database
    for item in items:
        if skip_condition:
            item._request.applymarker(pytest.mark.skipif(True, reason='Put any reason here'))

Accessing `_request` feels unsafe. Is there a cleaner or officially supported way to mark tests as skipped during collection using `pytest_collection_modifyitems`?

Hey! I’ve tackled this before, and you’re right: accessing `_request` feels a bit hacky.

I found that a cleaner approach is to apply a marker directly to the item, without touching protected members:

import pytest

def pytest_collection_modifyitems(items, config):
    for item in items:
        if skip_condition:  # check your DB or other condition
            item.add_marker(pytest.mark.skip(reason="Condition met, skipping test"))

Using `add_marker` keeps it safe and readable, and pytest will honor the skip during test execution.

This approach worked well for me when skipping tests dynamically based on DB values.
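
For reference, here's a fuller sketch of what this can look like when it lives in conftest.py. The SQLite table skip_list and the load_skip_names helper are hypothetical stand-ins for whatever your database actually provides:

import sqlite3

import pytest

def load_skip_names(db_path="skips.db"):
    # Hypothetical helper: fetch the names of tests that should be skipped.
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT test_name FROM skip_list").fetchall()
    finally:
        conn.close()
    return {name for (name,) in rows}

def pytest_collection_modifyitems(items, config):
    # Query the database once per session rather than once per collected item.
    skip_names = load_skip_names()
    for item in items:
        if item.name in skip_names:
            item.add_marker(pytest.mark.skip(reason="Disabled via skip_list table"))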

I ran into the same issue, and what helped me was avoiding `_request` entirely. The trick is that `pytest_collection_modifyitems` runs after collection, so you can just use `item.add_marker()` to mark tests as skipped:

import pytest

def pytest_collection_modifyitems(items, config):
    for item in items:
        if some_condition_from_db(item.name):  # your own lookup against the database
            item.add_marker(pytest.mark.skip(reason="Skipped based on DB condition"))

It feels much safer and is officially supported. I liked this because it keeps the code future-proof and avoids fragile internals.
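
One thing to watch: for parametrized tests, item.name includes the parameter id (e.g. test_refund[visa]), so if the database only stores bare test names, matching on item.nodeid or on a prefix can be more robust. A small sketch, with the hard-coded prefix list standing in for whatever your DB query returns:

import pytest

# Hypothetical node id prefixes, e.g. loaded from the database at session start.
SKIP_PREFIXES = [
    "tests/test_payments.py::test_refund",
]

def pytest_collection_modifyitems(items, config):
    for item in items:
        # item.nodeid looks like "tests/test_payments.py::test_refund[visa]",
        # so a prefix match covers every parametrized variant of a test.
        if any(item.nodeid.startswith(prefix) for prefix in SKIP_PREFIXES):
            item.add_marker(pytest.mark.skip(reason="Skipped based on DB condition"))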

In my experience, using `_request` is usually unnecessary. I use `add_marker` to mark tests dynamically:

import pytest

def pytest_collection_modifyitems(items, config):
    for item in items:
        if should_skip(item):  # your own predicate, e.g. a database lookup
            item.add_marker(pytest.mark.skip(reason="Skipping dynamically"))

This way, pytest handles the skip cleanly during test execution. I found it also makes debugging easier since the skipped tests show up clearly in the report instead of being silently ignored.
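
On that note, running pytest with the -rs flag prints each skipped test along with its reason in the short test summary, which is a quick way to confirm your hook marked the items you expected:

pytest -rs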