Chapter 230: Inter-Processor Communication
Chapter Objectives
By the end of this chapter, you will be able to:
- Define Inter-Processor Communication (IPC) in the context of a multi-core system.
- Explain why IPC is a necessary tool for certain dual-core applications.
- Utilize the `esp_ipc` API to execute functions on a different core.
- Differentiate between blocking (`esp_ipc_call_blocking`) and non-blocking (`esp_ipc_call`) IPC calls.
- Pass arguments to functions running on another core.
- Understand the limitations and best practices for using IPC.
- Write portable code that handles IPC on both single-core and dual-core ESP32 variants.
Introduction
In our exploration of dual-core programming, we’ve learned how the FreeRTOS scheduler can distribute tasks across cores and how we can pin specific tasks to a core for performance or reliability. This creates a powerful parallel processing environment. But what happens when a task pinned to one core needs to perform an action that is restricted to, or more efficient on, the other? For example, your main application task on Core 1 might need to trigger a low-level Wi-Fi driver function that is tightly coupled with the protocol stack running on Core 0.
Simply calling the function won’t work if the task is on the wrong core. This is where Inter-Processor Communication (IPC) becomes essential. IPC isn’t just about sending data between cores (which queues and event groups already do well); it’s about explicitly requesting that one core execute a piece of code on your behalf. This chapter introduces the `esp_ipc` driver, a powerful mechanism for orchestrating work and respecting the architectural boundaries within your dual-core ESP32.
Theory
What is Inter-Processor Communication?
Inter-Processor Communication is a set of mechanisms that allows the different processors (cores) in a multi-processor system to communicate with each other. This communication can be for exchanging data or, more relevant to this chapter, for coordinating work.
The `esp_ipc` API in ESP-IDF provides a specific form of IPC: cross-core function invocation. It gives a task running on one core the ability to say to the other core, “Please run this specific function for me with these arguments.”
Think of a busy restaurant kitchen with two chefs, one specializing in sauces (Core 0) and the other in grilling (Core 1). The grilling chef might prepare a steak but needs a specific sauce to finish the dish. Instead of trying to make the sauce themselves (which would be inefficient and break the kitchen’s workflow), the grilling chef simply passes an order to the sauce chef, who prepares it and hands it back. The `esp_ipc` API is the system for passing these orders between chefs.
```mermaid
graph TD
    %% Define styles for different node types based on the established theme
    classDef successStyle fill:#D1FAE5,stroke:#059669,stroke-width:2px,color:#065F46
    classDef processStyle fill:#DBEAFE,stroke:#2563EB,stroke-width:1px,color:#1E40AF
    classDef checkStyle fill:#FEE2E2,stroke:#DC2626,stroke-width:1px,color:#991B1B
    classDef decisionStyle fill:#FEF3C7,stroke:#D97706,stroke-width:1px,color:#92400E

    %% Subgraph for the calling core
    subgraph Core 1
        direction TB
        A[/"Task A (Running)"/]:::successStyle
        A_Blocked("<font color=#991B1B>Task A is Blocked</font>"):::checkStyle
        A_Resumes[/"Task A (Unblocked/Resumes)"/]:::successStyle
    end

    %% Subgraph for the target core
    subgraph Core 0
        direction TB
        B["IPC Service Task"]:::decisionStyle
        C["do_work() function executes"]:::processStyle
        B --> |"Receives & dispatches call"| C
    end

    %% Define the flow of execution
    A -- "1- Calls <b>esp_ipc_call_blocking</b>(core=0, func=do_work)" --> A_Blocked
    A_Blocked -.-> |"Sends request to Core 0"| B
    C -- "2- Execution Completes" --> A_Resumes
```
The ESP-IDF IPC Mechanism
At a high level, the `esp_ipc` driver works as follows:
- Request: A task on the “calling core” calls one of the `esp_ipc_call` functions, providing the target core ID, a pointer to the function to be executed, and a single `void*` argument.
- Queuing: The IPC driver places this request into a dedicated, high-priority queue for the “target core”.
- Signaling: The driver signals the target core that a new IPC request is pending. This is done via a high-priority mechanism that causes an IPC service task on the target core to run almost immediately.
- Execution: The high-priority IPC service task on the target core wakes up, pulls the function pointer and argument from its queue, and executes the requested function.
- Synchronization (for blocking calls): If the initial call was blocking, the calling task on the original core will be suspended. Once the function finishes on the target core, the IPC service task signals back to the calling core, waking the original task up.
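The request/queue/execute flow above can be modeled in a few lines of plain, host-side C. This is an illustrative sketch of the mechanism only — the one-slot "queue", the `ipc_request_t` type, and the function names here are invented for explanation and are not the real driver internals:

```c
#include <assert.h>
#include <stddef.h>

// Signature matches the real esp_ipc callback type: void (*)(void *)
typedef void (*ipc_func_t)(void *arg);

// A one-slot "request queue" standing in for the driver's per-core queue
typedef struct {
    ipc_func_t func;   // function the calling core wants executed
    void *arg;         // single void* argument, as in esp_ipc
    int pending;       // 1 while a request is queued ("signaled")
} ipc_request_t;

// Steps 1-3: the calling core enqueues a request and signals the target core
static int ipc_enqueue(ipc_request_t *q, ipc_func_t func, void *arg)
{
    if (q->pending) return -1;   // queue slot still occupied
    q->func = func;
    q->arg = arg;
    q->pending = 1;              // "signal": work is waiting
    return 0;
}

// Step 4: the target core's service task pulls and executes the request
static void ipc_service_run(ipc_request_t *q)
{
    if (q->pending) {
        q->func(q->arg);
        q->pending = 0;          // step 5: completion is now observable
    }
}

// Example payload: increments the int it is given
static void increment(void *arg) { (*(int *)arg)++; }
```

Note that the payload does nothing until the service side runs — exactly why a non-blocking caller cannot assume its request has executed the moment the call returns.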
```mermaid
sequenceDiagram
    actor TaskA as Task A (on Core 1)
    participant Core1 as Core 1 CPU
    participant IPC_Service as IPC Service (on Core 0)
    participant Core0 as Core 0 CPU

    par
        TaskA ->> Core1: Is Running
    end
    TaskA ->>+ IPC_Service: esp_ipc_call_blocking(core=0, func=do_work)
    Note right of TaskA: Task A is now blocked,<br>waiting for completion.
    IPC_Service ->>+ Core0: Executes do_work() on behalf of Task A
    Core0 -->> Core0: ...work is done...
    Core0 ->>- IPC_Service: do_work() completes
    IPC_Service ->>- TaskA: Signals completion
    Note right of TaskA: Task A unblocks and<br>resumes execution.
    par
        TaskA ->> Core1: Continues Running
    end
```
Blocking vs. Non-Blocking IPC Calls
The `esp_ipc` API provides two primary modes of operation, suiting different application needs.
- `esp_ipc_call_blocking(uint32_t core_id, esp_ipc_func_t func, void* arg)`: This is a synchronous call. The task that calls it will be blocked and will not resume execution until `func` has completely finished running on the target core `core_id`. This is the simplest way to use IPC, as it behaves much like a regular function call, just with the execution happening elsewhere. It’s ideal when you need the result or effect of the remote function immediately before proceeding.
- `esp_ipc_call(uint32_t core_id, esp_ipc_func_t func, void* arg)`: This is a non-blocking, asynchronous call. It is a “fire-and-forget” mechanism. The calling task queues the function for execution on the target core and then immediately continues with its own work without waiting. This is highly efficient if you don’t need to know when the remote function completes.
| Feature | `esp_ipc_call_blocking()` | `esp_ipc_call()` |
|---|---|---|
| Behavior | Synchronous | Asynchronous (“fire-and-forget”) |
| Calling Task | Blocks until the remote function has completely finished. | Returns immediately after queueing the function for execution; does not wait. |
| Ideal Use Case | When you need the result or completion of the remote action before proceeding. | When you want to trigger work on another core without halting the current task. |
| Data Return | Safer. Pass a pointer to a struct/variable; the data is populated and valid when the call returns. | Requires careful memory management. You cannot pass a pointer to a local stack variable. |
| Performance | Introduces wait states: the calling task is blocked while waiting (though its core can schedule other tasks). | More efficient for the calling task, which continues working immediately. |
Warning: The function signature for IPC calls is fixed: `void my_ipc_function(void *arg)`. It takes one `void` pointer as an argument and returns nothing. If you need to pass multiple parameters, you must wrap them in a `struct` and pass a pointer to it. If you need to get data back, this struct must have members that the remote function can write to.
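Because the callback takes a single `void*`, multiple inputs and any outputs all travel through one struct. A minimal host-side sketch of the pattern — `add_params_t` and `add_on_remote_core` are illustrative names, not part of ESP-IDF:

```c
#include <assert.h>

// One struct carries both the inputs and the output slot
typedef struct {
    int a;        // input
    int b;        // input
    int result;   // written by the "remote" function
} add_params_t;

// Matches the fixed IPC signature: void func(void *arg)
static void add_on_remote_core(void *arg)
{
    add_params_t *p = (add_params_t *)arg;
    p->result = p->a + p->b;   // "return" the value through the struct
}

// On the ESP32, the caller would do:
//   add_params_t params = { .a = 2, .b = 3 };
//   esp_ipc_call_blocking(0, add_on_remote_core, &params);
//   // params.result is valid here, because the call blocked until done
```

This is also why the blocking variant pairs naturally with output parameters: the struct is guaranteed to be fully written by the time the call returns.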
Practical Examples
Example 1: Simple Blocking IPC
In this example, our `app_main` task on Core 1 will request that Core 0 run a simple function to print its core ID.
Code
```c
#include <stdio.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_log.h"
#include "esp_ipc.h"

static const char *TAG = "IPC_DEMO";

// This is the function we want to execute on the other core.
// It must have this exact signature: void func(void* arg).
void print_core_id_task(void *arg)
{
    // The argument is passed from the IPC call
    int some_value = *(int*)arg;
    ESP_LOGI(TAG, "IPC task running on Core %d, received value: %d", xPortGetCoreID(), some_value);
}

void app_main(void)
{
    ESP_LOGI(TAG, "app_main started on Core %d.", xPortGetCoreID());
    int my_arg = 123;

    ESP_LOGI(TAG, "Requesting Core 0 to run a function via blocking IPC...");
    // This call will block until print_core_id_task finishes on Core 0
    esp_ipc_call_blocking(0, print_core_id_task, &my_arg);

    ESP_LOGI(TAG, "Blocking IPC call finished. app_main continues on Core %d.", xPortGetCoreID());
}
```
Build and Flash Instructions
- Create a new ESP-IDF project in VS Code.
- Copy the code into your `main.c`.
- Build, flash, and monitor.
Observation
The logs will show a clear sequence. `app_main` requests the IPC call, then the message from the IPC task on Core 0 appears, and only after that does the final message from `app_main` appear, proving it was blocked.
```
I (315) IPC_DEMO: app_main started on Core 1.
I (325) IPC_DEMO: Requesting Core 0 to run a function via blocking IPC...
I (325) IPC_DEMO: IPC task running on Core 0, received value: 123
I (335) IPC_DEMO: Blocking IPC call finished. app_main continues on Core 1.
```
Example 2: Non-Blocking IPC with Data Pointer Caveat
This example uses a non-blocking call. It highlights a critical pitfall: passing a pointer to a variable on the stack.
Code
```c
#include <stdio.h>
#include <string.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_log.h"
#include "esp_ipc.h"

static const char *TAG = "IPC_NON_BLOCKING";

// A struct to hold our data in a stable memory location
typedef struct {
    char message[50];
    int call_count;
} ipc_data_t;

// Global or heap-allocated data is safe to pass via non-blocking IPC
ipc_data_t g_ipc_data;

void async_task_on_other_core(void* arg)
{
    // Wait a moment to simulate work and ensure app_main has moved on.
    // (Note: this also stalls Core 0's IPC service task while it waits,
    // so real IPC callbacks should be kept short.)
    vTaskDelay(pdMS_TO_TICKS(500));
    ipc_data_t* data = (ipc_data_t*)arg;
    ESP_LOGI(TAG, "Async IPC task running on Core %d", xPortGetCoreID());
    ESP_LOGI(TAG, "Message: %s, Count: %d", data->message, data->call_count);
}

void app_main(void)
{
    ESP_LOGI(TAG, "app_main started on Core %d.", xPortGetCoreID());

    // Prepare the data in a safe (global) memory location
    strcpy(g_ipc_data.message, "Hello from app_main!");
    g_ipc_data.call_count = 1;

    ESP_LOGI(TAG, "Requesting Core 0 to run a function via NON-blocking IPC...");
    esp_ipc_call(0, async_task_on_other_core, &g_ipc_data);

    // This log appears immediately because the call was non-blocking
    ESP_LOGI(TAG, "NON-blocking IPC call returned immediately. app_main continues.");

    // If we passed a pointer to a local variable here, it would be invalid
    // by the time async_task_on_other_core runs, as app_main might have
    // already returned and its stack been cleaned up.
}
```
```mermaid
graph TD
    subgraph Core 1
        direction TB
        A["app_main starts"]:::startStyle
        A --> B{"<font size=2><b>Stack Frame for app_main</b><br>ipc_data_t my_local_data;</font>"}:::processStyle
        B --> C["esp_ipc_call(0, ..., &my_local_data)"]:::checkStyle
        C --> D["app_main continues & returns"]:::processStyle
        D --> E["<b>Stack Frame for app_main is Destroyed!</b><br><i>my_local_data no longer exists.</i>"]:::checkStyle
    end
    subgraph Core 0
        direction TB
        F["... some time later ..."]:::processStyle
        F --> G["IPC Task finally runs"]:::startStyle
        G --> H["Tries to access pointer &my_local_data"]:::checkStyle
        H --> I["<b>Undefined Behavior!</b><br>Reads garbage data or crashes."]:::checkStyle
    end
    C -.->|Pointer to my_local_data| G

    %% Define styles
    classDef startStyle fill:#EDE9FE,stroke:#5B21B6,stroke-width:2px,color:#5B21B6
    classDef processStyle fill:#DBEAFE,stroke:#2563EB,stroke-width:1px,color:#1E40AF
    classDef checkStyle fill:#FEE2E2,stroke:#DC2626,stroke-width:1px,color:#991B1B
```
Observation
The log from `app_main` saying the call returned appears before the log from the task on Core 0. This demonstrates the asynchronous “fire-and-forget” nature of the call.
```
I (316) IPC_NON_BLOCKING: app_main started on Core 1.
I (326) IPC_NON_BLOCKING: Requesting Core 0 to run a function via NON-blocking IPC...
I (326) IPC_NON_BLOCKING: NON-blocking IPC call returned immediately. app_main continues.
I (836) IPC_NON_BLOCKING: Async IPC task running on Core 0
I (836) IPC_NON_BLOCKING: Message: Hello from app_main!, Count: 1
```
Variant Notes
IPC is a concept for multi-core systems. Its behavior differs significantly based on the chip.
- Dual-Core (ESP32, ESP32-S3):
  - The `esp_ipc` API functions as described, providing true inter-processor function calls between Core 0 and Core 1. This is the intended use case.
- Single-Core (ESP32-S2, ESP32-C3, ESP32-C6, ESP32-H2):
  - These variants only have a single core, Core 0.
  - The `esp_ipc` functions are still included in the API for portability.
  - If a task on Core 0 calls `esp_ipc_call(0, ...)` or `esp_ipc_call_blocking(0, ...)`, there is no “other core” to send the request to. The driver recognizes this and simply executes the function directly, as if it were a normal function call. The “blocking” call returns as soon as the function has run.
  - Attempting to call `esp_ipc_call(1, ...)` on a single-core chip fails with an `ESP_ERR_INVALID_ARG` error.
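The single-core fallback can be mirrored in portable application code. The host-side sketch below shows the decision logic, under loudly labeled assumptions: `NUM_CORES`, `current_core_id`, and `run_on_core` are invented stand-ins for this illustration (on a real target you would use `CONFIG_FREERTOS_UNICORE`, `xPortGetCoreID()`, and the actual `esp_ipc` calls):

```c
#include <assert.h>

#define NUM_CORES 1            // pretend we are on a single-core variant
typedef void (*ipc_func_t)(void *arg);

static int current_core_id(void) { return 0; }  // stand-in for xPortGetCoreID()

// Mirrors the driver's behavior: invalid core -> error; the only (or same)
// core -> direct call; otherwise a real cross-core request would be queued.
static int run_on_core(int core_id, ipc_func_t func, void *arg)
{
    if (core_id < 0 || core_id >= NUM_CORES)
        return -1;                       // analogous to ESP_ERR_INVALID_ARG
    if (NUM_CORES == 1 || core_id == current_core_id()) {
        func(arg);                       // direct call, no IPC machinery
        return 0;
    }
    // (dual-core path: enqueue for the other core's IPC service task)
    return 0;
}

static void mark_done(void *arg) { *(int *)arg = 1; }
```

With `NUM_CORES` set to 2 the same wrapper would route the request through the queueing path instead, which is what makes this pattern portable across variants.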
Common Mistakes & Troubleshooting Tips
| Mistake / Issue | Symptom(s) | Troubleshooting / Solution |
|---|---|---|
| Dangling pointer in a non-blocking call: passing `esp_ipc_call()` a pointer to a local (stack) variable. | Crashes, Guru Meditation errors, or silent data corruption. The remote task reads garbage because the original stack frame is gone. | Use persistent memory: for non-blocking calls, pass pointers only to global variables, static variables, or heap-allocated memory (e.g., via `malloc`). |
| IPC deadlock: Core 0 blocks waiting for Core 1 while Core 1 simultaneously blocks waiting for Core 0. | The entire system freezes; neither task makes progress. A watchdog timer will likely trigger and reset the device. | Avoid circular blocking dependencies. Design a clear communication flow (e.g., Core 1 is always the client, Core 0 the server). Use non-blocking calls if a response is not immediately needed. |
| No direct return value: trying to get a result like `int x = esp_ipc_call_blocking(...)`. | This compiles, but `x` receives the `esp_err_t` status code, not your function’s result — the IPC callback itself has signature `void func(void* arg)` and returns nothing. | Pass data by reference: pass a pointer to a struct as the argument. The remote function writes its results into the struct’s members for the calling task to read after the blocking call returns. |
| High-frequency signaling: using IPC in a tight loop for very low-latency synchronization. | Poor performance due to the overhead of context switching, queueing, and cross-core signaling on every call. | Use the right tool: IPC is for delegating function calls, not microsecond-level signaling. For high-frequency synchronization, use FreeRTOS semaphores, event groups, or hardware spinlocks. |
Exercises
- Return Value Simulation: Create a function `calculate_on_core0(void* arg)`. It should take a pointer to a `struct` containing two integers (`a` and `b`) and a third member for the `result`. The function should calculate `a + b` and store it in `result`. From `app_main` on Core 1, use `esp_ipc_call_blocking` to run this function and print the result afterward.
- Asynchronous Completion Signal: Use a non-blocking IPC call (`esp_ipc_call`) to trigger a “long-running” task on Core 0 (use a `vTaskDelay` of 2 seconds to simulate work). The `app_main` on Core 1 should not wait. Create a binary semaphore before making the call and pass the semaphore handle as the argument to the IPC function. After the work is done, the IPC function should `xSemaphoreGive()` the semaphore. The `app_main` task should then `xSemaphoreTake()` the semaphore to confirm the job is done.
- Core-Safe Wrapper: Many ESP-IDF drivers are not thread-safe, and some are recommended for use from a single core. Let’s simulate this. Create a function `void safe_action(void)` that runs a specific piece of logic (e.g., `ESP_LOGI(TAG, "Action executed on Core 0")`). Write `safe_action` so that it can be called from a task on any core but ensures the `ESP_LOGI` part always runs on Core 0. (Hint: check `xPortGetCoreID()` inside the function.)
- IPC Stress Test: Create two tasks, one pinned to Core 0 and one pinned to Core 1. The task on Core 1 should, in a loop, call `esp_ipc_call_blocking` to run a function on Core 0; the function should just `vTaskDelay` for 10 ms. At the same time, the task on Core 0 should, in its loop, call `esp_ipc_call_blocking` to run a function on Core 1 that also delays for 10 ms. See if you can induce a deadlock. What happens? How would you fix it?
Summary
- Inter-Processor Communication (IPC) is a mechanism for a task on one core to request the execution of a function on another core.
- It is essential when a task needs to trigger logic that is pinned to or must run on a specific core (e.g., interacting with the Wi-Fi stack on Core 0).
- The primary ESP-IDF API is `esp_ipc`:
  - `esp_ipc_call_blocking()`: Synchronous. Waits for the remote function to complete.
  - `esp_ipc_call()`: Asynchronous. Returns immediately (“fire-and-forget”).
- The function executed via IPC must have the signature `void func(void* arg)`. Data must be passed in and out via the pointer argument.
- Passing pointers to stack variables in non-blocking IPC calls is a critical bug; use global or heap memory instead.
- On single-core ESP32 variants, the IPC API executes the function directly without any cross-core mechanism.
Further Reading
- ESP-IDF IPC Driver API Reference: https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/system/ipc.html