Chapter 229: Task Affinity and Core Pinning

Chapter Objectives

By the end of this chapter, you will be able to:

  • Define task affinity and understand its importance in multi-core systems.
  • Explain the strategic reasons for pinning a task to a specific CPU core.
  • Use the xTaskCreatePinnedToCore() function to bind tasks to Core 0 or Core 1.
  • Understand the purpose of the tskNO_AFFINITY option.
  • Analyze the performance implications of different core pinning strategies.
  • Write code that correctly handles task creation on both single-core and dual-core ESP32 variants.

Introduction

In the previous chapter, we explored the power of Symmetric Multiprocessing (SMP), where the FreeRTOS scheduler intelligently distributes tasks across available cores. This automatic load balancing is highly efficient for general-purpose applications. However, in advanced embedded systems, there are often compelling reasons to exert more direct control over which task runs on which core.

This level of control is achieved through task affinity, or core pinning. By assigning a task to a specific core, you can isolate critical processes, optimize hardware access, and guarantee performance for real-time operations. For example, you might dedicate one core to handling time-sensitive network protocols while the other runs the main application logic, preventing them from ever interfering with each other. This chapter will teach you how to master this powerful technique, moving from letting the scheduler decide to dictating exactly how your application leverages the dual-core architecture of the ESP32.

Theory

What is Task Affinity?

Task affinity is a property that instructs the operating system’s scheduler to run a particular task only on a specified subset of available CPU cores. In the context of ESP-IDF and its dual-core variants, this means we can “pin” a task to a specific core. Once a task is pinned, the FreeRTOS scheduler will only ever schedule it to run on that designated core, even if the other core is completely idle.

By default, tasks created with the standard xTaskCreate() function have no affinity. The scheduler is free to run them on any available core. This is equivalent to specifying the special value tskNO_AFFINITY. By explicitly setting an affinity, we override this default behavior.
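For example, these two calls are equivalent ways to create an unpinned task (a sketch; sensor_task is an illustrative name, not part of ESP-IDF):

```c
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

void sensor_task(void *pvParameters);  // placeholder task function

void create_unpinned(void)
{
    // Both calls create a task with no core affinity; the scheduler may
    // run it on either core and may migrate it between cores.
    xTaskCreate(sensor_task, "sensor", 2048, NULL, 5, NULL);

    // Equivalent, with the lack of affinity made explicit:
    xTaskCreatePinnedToCore(sensor_task, "sensor", 2048, NULL, 5, NULL,
                            tskNO_AFFINITY);
}
```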

Why Pin a Task to a Specific Core?

While letting the scheduler manage core allocation is often effective, manual pinning is a critical tool for optimization and reliability. Here are the primary reasons to implement it:

| Reason / Strategy | Core 0 (PRO_CPU) Role | Core 1 (APP_CPU) Role | Key Benefit |
|---|---|---|---|
| Performance Isolation | Handles time-sensitive protocols like Wi-Fi and Bluetooth. | Runs primary application logic, complex algorithms, or UI tasks. | Guarantees that network stack activity will not delay or interrupt the main application task, ensuring real-time performance. |
| Cache Performance | (Can be used) | Pin a CPU-intensive task that repeatedly accesses the same data. | Maximizes “warm cache” hits, avoiding slow data fetches from main memory and significantly boosting computation speed. |
| Hardware Affinity | May be favored for certain low-level peripherals. | Interacts with peripherals that are not core-sensitive. | Ensures optimal operation and avoids potential timing issues with peripherals that have specific core dependencies. |
| Simplified Synchronization | Pin multiple tasks that share data to the same core (either Core 0 or Core 1). | | Eliminates the possibility of simultaneous multi-core access to shared data, allowing for simpler synchronization primitives than SMP-safe spinlocks. |
  1. Performance Isolation & Real-Time Guarantees: This is the most common reason. The Wi-Fi and Bluetooth stacks in ESP-IDF are complex and have strict timing requirements. For this reason, they are pinned to Core 0 (the PRO_CPU). If you have a similarly critical task in your application, such as real-time motor control, processing audio data from an I2S peripheral, or running a complex algorithm, you can pin it to Core 1 (the APP_CPU). This ensures that no matter how busy the networking stack gets on Core 0, your critical task on Core 1 will have an entire processor to itself, free from interruptions.
  2. Cache Performance: Modern CPUs rely heavily on caches (small, fast memory banks) to store frequently used data and instructions. When a task runs on a core, its data populates that core’s cache. If the scheduler moves the task to another core, the new core’s cache is “cold” (doesn’t have the task’s data), and the data must be fetched again from main memory, which is much slower. Pinning a CPU-intensive task to a single core ensures it always benefits from a “warm” cache, which can significantly boost its performance.
  3. Hardware-Specific Constraints: While most peripherals on the ESP32 can be accessed from either core, some low-level operations or legacy components might have performance characteristics that favor a particular core. Pinning a task that interacts heavily with such a peripheral ensures optimal operation.
  4. Simplified Synchronization: As discussed in the last chapter, sharing data between tasks running on different cores requires SMP-safe locks like spinlocks. However, if you pin two tasks that share data to the same core, you guarantee they can never run in parallel. This simplifies synchronization, as you no longer need to protect against simultaneous multi-core access. A simpler critical section (like taskENTER_CRITICAL() without a spinlock) might be sufficient.
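The last point can be sketched as follows (assuming a dual-core target; producer_task, consumer_task, and shared_count are illustrative names). Because both tasks are pinned to Core 1, briefly suspending the scheduler on that core is enough to make the shared access safe:

```c
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

static uint32_t shared_count = 0;  // shared only by tasks pinned to Core 1

void producer_task(void *pvParameters)
{
    while (1) {
        // Both tasks run exclusively on Core 1, so suspending the scheduler
        // briefly makes this update atomic; no SMP spinlock is needed.
        vTaskSuspendAll();
        shared_count++;
        xTaskResumeAll();
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}

void consumer_task(void *pvParameters)
{
    while (1) {
        vTaskSuspendAll();
        uint32_t snapshot = shared_count;
        xTaskResumeAll();
        (void)snapshot;  // ...use the value here
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

void start_same_core_tasks(void)
{
    // Pinning both tasks to Core 1 guarantees they never run in parallel.
    xTaskCreatePinnedToCore(producer_task, "producer", 2048, NULL, 5, NULL, 1);
    xTaskCreatePinnedToCore(consumer_task, "consumer", 2048, NULL, 5, NULL, 1);
}
```

Note that this only protects against the other task, not against ISRs; if an interrupt handler also touches the data, a critical section is still required.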
graph TD
    Scheduler[Scheduler] 
    
    subgraph Core0["Core 0"]
        WiFi["Wi-Fi Stack"]
        Bluetooth["Bluetooth Stack"]
        TaskA["Your Pinned Task A"]
    end
    
    subgraph Core1["Core 1"]
        MainLoop["Main Loop"]
        WebServer["Web Server"]
        TaskB["Your Pinned Task B"]
    end
    
    Scheduler --> Core0
    Scheduler --> Core1

How to Pin a Task in ESP-IDF

ESP-IDF provides a specific function to handle task creation with core affinity: xTaskCreatePinnedToCore().

C
BaseType_t xTaskCreatePinnedToCore(
    TaskFunction_t    pvTaskCode,
    const char * const pcName,
    const uint32_t    usStackDepth,
    void * const pvParameters,
    UBaseType_t       uxPriority,
    TaskHandle_t * const pvCreatedTask,
    const BaseType_t  xCoreID
);

This function is identical to xTaskCreate(), with the addition of one final parameter:

| Parameter | Type | Description |
|---|---|---|
| pvTaskCode | TaskFunction_t | Pointer to the function that implements the task. |
| pcName | const char * | A descriptive name for the task (mainly for debugging). |
| usStackDepth | uint32_t | The size of the task stack. Note that ESP-IDF specifies this in bytes, unlike vanilla FreeRTOS, which uses words. |
| pvParameters | void * | A value that is passed as the parameter to the created task. |
| uxPriority | UBaseType_t | The priority at which the task should run (0 is lowest). |
| pvCreatedTask | TaskHandle_t * | Can be used to pass out a handle to the created task. |
| xCoreID | BaseType_t | The core to pin the task to: 0 (Core 0, PRO_CPU), 1 (Core 1, APP_CPU), or tskNO_AFFINITY (no affinity; the scheduler chooses). |
  • xCoreID: This specifies which core the task should be pinned to.
    • 0: Pin the task to Core 0 (PRO_CPU).
    • 1: Pin the task to Core 1 (APP_CPU).
    • tskNO_AFFINITY: Do not pin the task to any specific core. The scheduler is free to run it on either Core 0 or Core 1. This makes the function behave like xTaskCreate().

Tip: The app_main function, the entry point for your application, runs inside the main task, which is itself a FreeRTOS task. On dual-core ESP32s it is pinned to Core 0 by default (this can be changed via the CONFIG_ESP_MAIN_TASK_AFFINITY option in menuconfig).

Practical Examples

Let’s see core pinning in action.

Example 1: Pinning Tasks to Specific Cores

This example creates two tasks and explicitly pins one to each core. Each task then prints its assigned core ID to prove it is running where we intended.

Code
C
#include <stdio.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_log.h"

static const char *TAG = "CORE_PINNING";

// This task will be pinned to a specific core.
void pinned_task_function(void *pvParameters)
{
    // The parameter is just for logging
    char *task_name = (char *)pvParameters;

    while (1)
    {
        // xPortGetCoreID() returns the ID of the core the task is currently running on.
        ESP_LOGI(task_name, "I am alive and running on Core %d", xPortGetCoreID());
        vTaskDelay(pdMS_TO_TICKS(1000));
    }
}

void app_main(void)
{
    ESP_LOGI(TAG, "Starting Core Pinning Demo.");
    ESP_LOGI(TAG, "app_main is running on Core %d", xPortGetCoreID());

    // Create a task and pin it to Core 0.
    xTaskCreatePinnedToCore(
        pinned_task_function,   // Task function
        "Pinned_Task_Core_0",   // Name of the task
        2048,                   // Stack size
        "Pinned_Task_Core_0",   // Task parameter
        5,                      // Priority
        NULL,                   // Task handle
        0);                     // Core ID

    // Create another task and pin it to Core 1.
    xTaskCreatePinnedToCore(
        pinned_task_function,   // Task function
        "Pinned_Task_Core_1",   // Name of the task
        2048,                   // Stack size
        "Pinned_Task_Core_1",   // Task parameter
        5,                      // Priority
        NULL,                   // Task handle
        1);                     // Core ID
}
Build and Flash Instructions
  1. Create a new ESP-IDF project in VS Code.
  2. Replace the contents of main.c with the code above.
  3. Build, flash, and open the monitor.
Observation

You will see clear output showing each task running exclusively on its assigned core. Pinned_Task_Core_0 will only ever report Core 0, and Pinned_Task_Core_1 will only ever report Core 1.

Plaintext
I (315) CORE_PINNING: Starting Core Pinning Demo.
I (315) CORE_PINNING: app_main is running on Core 0
I (325) Pinned_Task_Core_0: I am alive and running on Core 0
I (335) Pinned_Task_Core_1: I am alive and running on Core 1
I (1325) Pinned_Task_Core_0: I am alive and running on Core 0
I (1335) Pinned_Task_Core_1: I am alive and running on Core 1
...

Example 2: Performance Isolation

This example demonstrates how pinning can protect a time-sensitive task from a CPU-heavy task. We will have a task that blinks an LED at a precise interval and another that performs a meaningless but intensive calculation.

flowchart TD

    A(Start Demo):::startStyle --> B{Pinning Strategy?};
    
    B -->|"Use Pinning<br><b>(Recommended)</b>"| C[Create <b>Blinky Task</b><br>Priority 5];
    C --> D[Pin Blinky Task to<br><b>Core 0</b>];
    D --> E[Create <b>Heavy Task</b><br>Priority 1];
    E --> F[Pin Heavy Task to<br><b>Core 1</b>];
    F --> G((Isolated Execution));
    G --> H[<b>Result:</b><br>Blinky is perfectly stable.<br>Heavy task runs without<br>interfering with Blinky.]:::successStyle;

    B -->|"No Affinity<br><b>(For Comparison)</b>"| I[Create <b>Blinky Task</b><br>Priority 5];
    I --> J[Create <b>Heavy Task</b><br>Priority 1];
    J --> K((Shared Execution on<br>Either Core));
    K --> L[<b>Result:</b><br>Scheduler may run both tasks<br>on the same core. Heavy task<br>can disrupt Blinky's timing.]:::checkStyle;
    
    %% Define styles
    classDef startStyle fill:#EDE9FE,stroke:#5B21B6,stroke-width:2px,color:#5B21B6
    classDef processStyle fill:#DBEAFE,stroke:#2563EB,stroke-width:1px,color:#1E40AF
    classDef decisionStyle fill:#FEF3C7,stroke:#D97706,stroke-width:1px,color:#92400E
    classDef checkStyle fill:#FEE2E2,stroke:#DC2626,stroke-width:1px,color:#991B1B
    classDef successStyle fill:#D1FAE5,stroke:#059669,stroke-width:2px,color:#065F46
Code
C
#include <stdio.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_log.h"
#include "driver/gpio.h"

#define BLINK_GPIO CONFIG_BLINK_GPIO // Set via menuconfig; or hard-code a pin, e.g. 2

static const char *TAG = "PERF_ISOLATION";

// A CPU-intensive task that just wastes CPU cycles
void heavy_computation_task(void *pvParameters)
{
    ESP_LOGI(TAG, "Heavy computation task started on core %d", xPortGetCoreID());
    volatile uint64_t counter = 0;
    while(1)
    {
        counter++; // Just do something to keep the CPU busy
    }
}

// A time-sensitive task to blink an LED
void blinky_task(void *pvParameters)
{
    ESP_LOGI(TAG, "Blinky task started on core %d", xPortGetCoreID());
    gpio_reset_pin(BLINK_GPIO);
    gpio_set_direction(BLINK_GPIO, GPIO_MODE_OUTPUT);

    while(1)
    {
        gpio_set_level(BLINK_GPIO, 0);
        vTaskDelay(pdMS_TO_TICKS(500));
        gpio_set_level(BLINK_GPIO, 1);
        vTaskDelay(pdMS_TO_TICKS(500));
    }
}

void app_main(void)
{
    ESP_LOGI(TAG, "Starting Performance Isolation Demo on core %d", xPortGetCoreID());

    // Scenario: Pin the sensitive task to Core 0 and the heavy task to Core 1.
    // This isolates them completely.
    ESP_LOGI(TAG, "Pinning blinky to Core 0, and heavy task to Core 1.");

    xTaskCreatePinnedToCore(blinky_task, "Blinky", 2048, NULL, 5, NULL, 0);
    xTaskCreatePinnedToCore(heavy_computation_task, "Heavy", 2048, NULL, 1, NULL, 1);

    // To see the non-isolated behavior, you could comment out the lines above
    // and uncomment the lines below. Blinky still has the higher priority, so
    // it will preempt the heavy task, but sharing a core invites scheduling
    // jitter and idle-task watchdog warnings.
    // ESP_LOGI(TAG, "Running both tasks without affinity.");
    // xTaskCreate(blinky_task, "Blinky", 2048, NULL, 5, NULL);
    // xTaskCreate(heavy_computation_task, "Heavy", 2048, NULL, 1, NULL);
}
Build and Flash Instructions
  1. Open menuconfig (idf.py menuconfig or use the VS Code UI).
  2. Go to Example Configuration -> Blink GPIO number and set it to a GPIO connected to an LED on your board (e.g., 2 for the built-in LED on many boards). This menu entry only appears if your project provides the matching Kconfig option; otherwise, hard-code the GPIO number in the BLINK_GPIO define.
  3. Save and exit menuconfig.
  4. Build, flash, and run.
Observation

You will see the LED blinking at a steady 1 Hz rate. The heavy_computation_task spins at full speed on Core 1 but cannot interfere with blinky_task, which has Core 0 entirely to itself. Note that because the heavy task never blocks, it starves the idle task on its core; depending on your configuration, the task watchdog may log warnings about IDLE1, which you can ignore for this demo or disable in menuconfig. If you instead run the version without pinning, the scheduler may place both tasks on the same core. blinky_task still has the higher priority and will preempt the spinning loop, but the two tasks now compete for one core, making timing disturbances and watchdog warnings about the starved idle task far more likely.
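If you want numbers instead of eyeballing the LED, a variant of blinky_task could log its actual period using esp_timer_get_time(), which returns microseconds since boot (a sketch; the function and tag names are illustrative):

```c
#include <inttypes.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_log.h"
#include "esp_timer.h"

// Drop-in variant of blinky_task that reports timing jitter instead of
// driving a GPIO.
void blinky_with_measurement(void *pvParameters)
{
    int64_t last_us = esp_timer_get_time();
    while (1) {
        vTaskDelay(pdMS_TO_TICKS(500));
        int64_t now_us = esp_timer_get_time();
        // Deviation from the nominal 500 ms period, in microseconds.
        int64_t jitter_us = (now_us - last_us) - 500000;
        last_us = now_us;
        ESP_LOGI("JITTER", "period error: %" PRId64 " us", jitter_us);
    }
}
```

Running this pinned to Core 0 versus unpinned lets you compare the jitter figures directly.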

Variant Notes

Core affinity behavior depends entirely on the number of cores available.

  • Dual-Core (ESP32, ESP32-S3):
    • xTaskCreatePinnedToCore() works as described.
    • Passing xCoreID as 0 pins to Core 0.
    • Passing xCoreID as 1 pins to Core 1.
    • Passing tskNO_AFFINITY allows the task to run on either core.
  • Single-Core (ESP32-S2, ESP32-C3, ESP32-C6, ESP32-H2):
    • These variants only have a Core 0.
    • The function xTaskCreatePinnedToCore() is still available for API compatibility across the ESP32 family.
    • Passing xCoreID as 0 or tskNO_AFFINITY will successfully create the task on the only available core (Core 0).
    • CRITICAL: Passing xCoreID as 1 will fail. The function will return pdFAIL and the task will not be created.

Warning: When writing code intended to run on multiple ESP32 variants, you must check the return value of xTaskCreatePinnedToCore(). A call that works on a dual-core ESP32 will fail on a single-core ESP32 if you try to pin to Core 1.
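In practice this means a pattern like the following (a sketch; worker_task is a placeholder for your own task function):

```c
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_log.h"

void worker_task(void *pvParameters);  // placeholder task function

void create_worker_or_log(void)
{
    // On a single-core chip this call fails because Core 1 does not exist.
    BaseType_t ok = xTaskCreatePinnedToCore(worker_task, "worker", 2048,
                                            NULL, 5, NULL, 1);
    if (ok != pdPASS) {
        ESP_LOGE("AFFINITY", "worker task not created (is Core 1 available?)");
        // Fall back here, e.g. retry on Core 0 or with tskNO_AFFINITY.
    }
}
```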

Common Mistakes & Troubleshooting Tips

| Mistake / Issue | Symptom(s) | Troubleshooting / Solution |
|---|---|---|
| Pinning to a non-existent core: calling xTaskCreatePinnedToCore(…, 1) on a single-core chip. | The task fails to start, so the application does not behave as expected. The function returns pdFAIL, which goes unnoticed if the return value is not checked. | Check the return value: always ensure the call returned pdPASS. Use #if CONFIG_FREERTOS_UNICORE to conditionally compile code for single-core vs. dual-core targets. |
| Interfering with protocol stacks: pinning a demanding, high-priority task to Core 0. | Wi-Fi disconnects, laggy Bluetooth, high network latency, or total protocol failure. The system may crash with watchdog timer errors. | Reserve Core 0 (PRO_CPU) for Wi-Fi/BT and pin demanding application tasks to Core 1 (APP_CPU). If a task must run on Core 0, ensure its priority does not starve the system tasks. |
| Unnecessary pinning: pinning every task “just in case” without a specific reason. | The system may perform worse than with no affinity. The scheduler’s flexibility is reduced, making it harder to balance loads efficiently, and the code becomes more complex. | Have a clear reason: pin only for performance isolation, real-time needs, or cache optimization. For general tasks, use xTaskCreate() or tskNO_AFFINITY and let the scheduler manage them. |

Exercises

  1. LED Race: Write a program that blinks two LEDs at different rates using two tasks. Pin both tasks to Core 1. Create a third, high-priority “interferer” task that does nothing but spin in a while(1) loop. First, run the interferer task with tskNO_AFFINITY. Observe the LEDs. Then, pin the interferer task to Core 1 and observe again. What happens to the LEDs and why? Finally, pin the interferer to Core 0 and observe.
  2. Portable Task Creation: Write a function create_worker_task() that creates a task. This function should be “smart”: on a dual-core system, it should pin the new task to Core 1. On a single-core system, it should pin it to Core 0. Use preprocessor macros (like CONFIG_FREERTOS_UNICORE) to achieve this.
  3. Cache Performance Measurement: Write a task that allocates a large array and repeatedly performs a series of calculations on it. Measure the time it takes to complete 1000 iterations. Run two instances of this task. First, run them both with tskNO_AFFINITY. Then, pin each instance to a different core (0 and 1). Finally, pin both instances to the same core (e.g., Core 1). Compare the execution times. You should find that pinning to separate cores gives the best throughput, while pinning both to the same core is slowest, since the two instances must time-share one core and each context switch can evict the other’s working set from the cache.

Summary

  • Task Affinity (or core pinning) forces the scheduler to run a task on a specific, designated CPU core.
  • The primary function for this is xTaskCreatePinnedToCore(), which takes an additional xCoreID parameter.
  • Pinning is essential for performance isolation, protecting real-time tasks from CPU-heavy tasks.
  • Pinning can improve cache locality, boosting the performance of computationally intensive tasks.
  • By default, system tasks like Wi-Fi/Bluetooth are pinned to Core 0 (PRO_CPU). It’s best practice to put application logic on Core 1 (APP_CPU).
  • Using tskNO_AFFINITY for xCoreID allows the scheduler to manage the task freely, which is the default and often desired behavior.
  • On single-core devices, attempting to pin to Core 1 will fail. Code must be written defensively to handle this.

Further Reading
