Python Logging: Essential Guide and Best Practices

July 16, 2025

Python's logging module is powerful but often overlooked. Many developers never learn to use it properly, leaving their applications harder to debug and maintain.

In this article, we will delve into the essential aspects of Python's logging module that every developer should master.

Understanding Python's logging Module

Python provides a well-designed logging module by default. In most cases, there's no need to use third-party libraries for logging with Python. The built-in module is powerful, flexible, and follows industry best practices.

Loggers Are Singletons

The logging.getLogger(name) function returns the same logger instance every time it's called with the same name.

import logging

logger1 = logging.getLogger('myapp')
logger2 = logging.getLogger('myapp')
print(logger1 is logger2)  # True

So there's no need to implement a custom singleton wrapper to reuse loggers - Python handles this for you.

Logger Hierarchy and Handler Inheritance

A logger named a.b.c is a child of logger named a.b, which in turn is a child of logger a. This hierarchy becomes particularly important when you understand how handlers work.

Child loggers propagate their log records up to their ancestors' handlers by default. The record is emitted by every handler found along the way - here, only the root handler installed by basicConfig - and it always carries the name of the logger that created it:

import logging

logging.basicConfig(level=logging.DEBUG, format='%(name)s: %(message)s')

parent_logger = logging.getLogger('a.b')
child_logger = logging.getLogger('a.b.c')
child_logger.info('This is a log message')
# Output:
# a.b.c: This is a log message

However, what's equally important - and often misunderstood - is handler inheritance. Because records propagate upward, a logger with no handlers of its own is effectively served by its ancestors' handlers, all the way up to the root logger. This is why logging.basicConfig() affects all loggers in your application by default.

Here's where it gets interesting. Let's say you have a logger without any handlers:

import logging

# Configure only the root logger
logging.basicConfig(level=logging.INFO, format='%(name)s - %(message)s')

# These loggers have no handlers of their own
app_logger = logging.getLogger('myapp')
db_logger = logging.getLogger('myapp.database')
api_logger = logging.getLogger('myapp.api')

# All of them will use the root logger's handler
app_logger.info('Application started')       # Works fine
db_logger.info('Database connected')         # Works fine
api_logger.info('API server listening')     # Works fine

Now, suppose you want database logs to also go to a separate file:

# Add a file handler specifically to the database logger
file_handler = logging.FileHandler('database.log')
file_handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(message)s'))
db_logger.addHandler(file_handler)

# Now database logs go to BOTH console (inherited) AND file (own handler)
db_logger.info('This appears in both console and database.log')

This dual behavior can sometimes catch developers off guard. If you want the database logger to only log to the file and not to the console, you need to disable propagation:

db_logger.propagate = False

Understanding this inheritance behavior is crucial because it explains why you can configure logging once at the application level and have it work throughout your entire codebase.
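Levels are inherited the same way: a logger left at NOTSET defers to the nearest ancestor that has an explicit level set. A quick sketch (the logger names are illustrative):

```python
import logging

logging.getLogger().setLevel(logging.WARNING)  # configure only the root

child = logging.getLogger("myapp.database")
print(child.level)                # 0 (NOTSET - no level of its own)
print(child.getEffectiveLevel())  # 30 (WARNING, inherited from the root)
```

getEffectiveLevel() walks up the hierarchy until it finds a logger with a level other than NOTSET, which is why configuring the root once is usually enough.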

Logging Per Module vs Per Class

Since the module is the fundamental unit of Python software organization, the recommended convention is to use module-level loggers. In each module that uses logging, add this line at the top right after your import statements:

import logging

logger = logging.getLogger(__name__)

This creates a singleton logger for the module, so you don't need to pass it around to functions. You can then use logger.info(), logger.debug(), logger.error(), or logger.warning() inside any function or class within that module.

Best Practices

Use Proper Log Levels

Maintaining meaningful logs hinges on selecting the right log level. I've seen too many applications where everything is logged at INFO level, making it nearly impossible to filter signal from noise in production.

  • DEBUG: For detailed technical information, useful during diagnosis.
  • INFO: Confirms high-level application logic is working.
  • WARNING: Indicates unexpected but handled situations, such as deprecated features or recoverable errors.
  • ERROR: Signifies a serious problem preventing a function from completing, often due to exceptions that don't crash the app.
  • CRITICAL: Marks a fatal error that could cause the application to abort, demanding immediate attention.
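These levels are thresholds, not just labels: a logger drops every record below its configured level. A quick way to check what would actually be emitted (a minimal sketch; the logger name is arbitrary):

```python
import logging

logger = logging.getLogger("levels_demo")
logger.setLevel(logging.WARNING)  # DEBUG and INFO records are now dropped

print(logger.isEnabledFor(logging.DEBUG))    # False
print(logger.isEnabledFor(logging.INFO))     # False
print(logger.isEnabledFor(logging.WARNING))  # True
print(logger.isEnabledFor(logging.ERROR))    # True
```

isEnabledFor() consults the effective level, so a logger with no level of its own checks against the threshold inherited from its nearest configured ancestor.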

Log Exception Tracebacks

When logging exceptions, always include the full traceback to make debugging easier. This is one of those practices that seems obvious in hindsight but is surprisingly often forgotten in the heat of development.

import logging

logger = logging.getLogger(__name__)

try:
    result = 10 / 0
except ZeroDivisionError:
    # Method 1: exc_info=True attaches the current traceback
    logger.error("Division by zero occurred", exc_info=True)
    # Method 2: exception() logs at ERROR level with the traceback (preferred).
    # In real code, use one method or the other - both together would
    # log the traceback twice.
    logger.exception("Division by zero occurred")

The exception() method is equivalent to error() with exc_info=True; it should only be called from within an exception handler and is generally preferred for logging exceptions.

Strategic Log Placement

Optimal log volume is essential. Excessive logging obscures critical information, while insufficient logging leaves you blind when issues arise. Finding the right balance often requires some trial and error.

Consider adding log messages at these key locations:

  • Entry and exit points of critical functions: When entering and leaving key business logic
  • Before and after external calls: APIs, database queries, file operations
  • Decision points: When your code branches based on conditions
  • Exception handlers: Always log what went wrong
  • At state changes: When important variables or application state changes

import logging

logger = logging.getLogger(__name__)

def process_user_data(user_id):
    logger.info("Starting to process data for user %s", user_id)

    try:
        user_data = fetch_user_from_database(user_id)
        logger.debug("Retrieved user data for user %s", user_id)

        if user_data.is_premium:
            logger.info("User %s is premium, applying special processing", user_id)
            result = premium_processing(user_data)
        else:
            logger.info("User %s is standard, applying normal processing", user_id)
            result = standard_processing(user_data)

        logger.info("Successfully processed data for user %s", user_id)
        return result

    except DatabaseError as e:
        logger.exception("Database error while processing user %s", user_id)
        raise
    except Exception as e:
        logger.exception("Unexpected error while processing user %s", user_id)
        raise

Note how I'm using lazy %-style formatting (%s placeholders with separate arguments) rather than f-strings for the log messages. This becomes important for performance, which we'll discuss next.

Choose Proper Way to Configure Logging

You need to configure logging as early as possible - ideally in your main entry point, before any code that emits log messages runs.

import logging
import sys

def setup_logging():
    formatter = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    )

    root_logger = logging.getLogger()
    root_logger.setLevel(logging.INFO)
    root_logger.handlers.clear()

    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(formatter)
    root_logger.addHandler(console_handler)

    file_handler = logging.FileHandler('application.log')
    file_handler.setFormatter(formatter)
    root_logger.addHandler(file_handler)

if __name__ == "__main__":
    setup_logging()

Thanks to handler inheritance, configuring the root logger this way ensures all your module-level loggers automatically use these handlers unless they have their own specific configuration. This is why you can sprinkle logger = logging.getLogger(__name__) throughout your modules without worrying about individual configuration.
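If you prefer a declarative setup, the standard library also provides logging.config.dictConfig, which expresses the same configuration as data - handy when the config should live in a JSON or YAML file. A sketch equivalent to the console part of setup_logging() above:

```python
import logging
import logging.config

LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "standard",
            "stream": "ext://sys.stdout",  # resolved to the sys.stdout object
        },
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}

logging.config.dictConfig(LOGGING_CONFIG)
logging.getLogger("myapp").info("Configured via dictConfig")
```

dictConfig replaces the root logger's handlers with the ones listed, so it plays the same role as the manual handlers.clear() and addHandler() calls.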

Performance Considerations

Logging performance becomes critical in high-throughput applications. I learned this the hard way when debugging a performance issue that turned out to be caused by expensive debug logging that was supposedly "disabled."

String Formatting and Lazy Evaluation

The most common performance pitfall is premature string formatting. When you use f-strings or .format() in log messages, Python evaluates the formatting regardless of whether the log level is enabled.

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)  # DEBUG is disabled

# Bad: the f-string is always formatted, even when DEBUG is disabled
logger.debug(f"Processing user {user_id} with data {expensive_computation()}")

# Good: lazy formatting - the string is only built if the record is emitted
logger.debug("Processing user %s with data %s", user_id, expensive_computation())

Performance Impact: In the problematic example, the f-string is formatted on every call, even when DEBUG logging is disabled. With lazy %-style formatting, the string interpolation is deferred until a handler actually emits the record. One caveat: the arguments themselves - including the expensive_computation() call - are still evaluated at call time in both versions. To skip the computation entirely, guard it with isEnabledFor(), as described below.

Advanced Performance Techniques

1. Conditional Logging for Complex Operations

When log message preparation is expensive, check if logging is enabled first:

if logger.isEnabledFor(logging.DEBUG):
    # Only do expensive work if DEBUG is enabled
    detailed_info = analyze_complex_data_structure(data)
    logger.debug("Analysis results: %s", detailed_info)

2. LoggerAdapter for Context Without Overhead

Use LoggerAdapter to attach shared context once instead of repeating it in every log call. This is particularly useful in web applications where you want to include request IDs or user information in every log message:

import logging

class ContextAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        return f"[User: {self.extra['user_id']}] {msg}", kwargs

logger = logging.getLogger(__name__)
context_logger = ContextAdapter(logger, {'user_id': '12345'})

# Now all log messages automatically include user context
context_logger.info("Processing payment")  # Output: [User: 12345] Processing payment

3. Avoid Logging in Tight Loops

Be cautious about logging inside loops, especially with large datasets. This can easily become a bottleneck:

# Bad: Logs every iteration
for i in range(1000000):
    logger.debug("Processing item %d", i)
    process_item(i)

# Better: Log periodically
for i in range(1000000):
    if i % 10000 == 0:
        logger.debug("Processed %d items", i)
    process_item(i)

# Best: Log a summary before and after the loop
logger.info("Starting to process %d items", len(items))
for item in items:
    process_item(item)
logger.info("Finished processing %d items", len(items))

Structured Logging

For modern applications, especially those deployed in containerized environments, consider using structured logging (JSON format). This makes it much easier to parse and analyze logs programmatically:

import logging
import json

class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_entry = {
            'timestamp': self.formatTime(record),
            'level': record.levelname,
            'logger': record.name,
            'message': record.getMessage(),
            'module': record.module,
            'function': record.funcName,
            'line': record.lineno
        }

        if record.exc_info:
            log_entry['exception'] = self.formatException(record.exc_info)

        # Include any extra fields attached via the logging call's `extra` parameter
        extra_fields = getattr(record, 'extra_fields', {})
        log_entry.update(extra_fields)

        return json.dumps(log_entry)

handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger = logging.getLogger(__name__)
logger.addHandler(handler)
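To feed the extra_fields hook that this formatter checks for, attach the data through the logging call's extra parameter. Note that extra_fields is just this example's own convention, not a standard logging attribute:

```python
import logging

logger = logging.getLogger("json_demo")

# Keys passed via `extra` become attributes on the LogRecord, so the
# JSONFormatter above can read them back as record.extra_fields.
logger.warning(
    "Payment failed",
    extra={"extra_fields": {"user_id": "12345", "amount": 99.90}},
)
```

Because `extra` keys are set as plain attributes on the record, they survive propagation and reach every handler, regardless of which formatter is attached.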

Conclusion

Proper logging is essential for building maintainable Python applications. By understanding how logger hierarchy and handler inheritance work, choosing appropriate log levels, strategically placing logs, including tracebacks, and optimizing for performance, you'll create applications that are much easier to debug and monitor in production.
