concurrent.interpreters — Multiple interpreters in the same process

Added in version 3.14.

Source code: Lib/concurrent/interpreters


The concurrent.interpreters module constructs higher-level interfaces on top of the lower-level _interpreters module.

The module is primarily meant to provide a basic API for managing interpreters (AKA “subinterpreters”) and running things in them. Running mostly involves switching to an interpreter (in the current thread) and calling a function in that execution context.

For concurrency, interpreters themselves (and this module) don’t provide much more than isolation, which on its own isn’t useful. Actual concurrency is available separately through threads; see below.

See also

InterpreterPoolExecutor

Combines threads with interpreters in a familiar interface.

Isolating Extension Modules

How to update an extension module to support multiple interpreters.

PEP 554

PEP 734

PEP 684

Availability: not WASI.

This module does not work or is not available on WebAssembly. See WebAssembly platforms for more information.

Key details

Before we dive in further, there are a small number of details to keep in mind about using multiple interpreters:

  • isolated, by default

  • no implicit threads

  • not all PyPI packages support use in multiple interpreters yet

Introduction

An “interpreter” is effectively the execution context of the Python runtime. It contains all of the state the runtime needs to execute a program. This includes things like the import state and builtins. (Each thread, even if there’s only the main thread, has some extra runtime state, in addition to the current interpreter, related to the current exception and the bytecode eval loop.)

The concept and functionality of the interpreter have been a part of Python since version 2.2, but the feature was only available through the C-API and not well known, and the isolation was relatively incomplete until version 3.12.

Multiple Interpreters and Isolation

A Python implementation may support using multiple interpreters in the same process. CPython has this support. Each interpreter is effectively isolated from the others (with a limited number of carefully managed process-global exceptions to the rule).

That isolation is primarily useful as a strong separation between distinct logical components of a program, where you want to have careful control of how those components interact.

Note

Interpreters in the same process can technically never be strictly isolated from one another since there are few restrictions on memory access within the same process. The Python runtime makes a best effort at isolation but extension modules may easily violate that. Therefore, do not use multiple interpreters in security-sensitive situations, where they shouldn’t have access to each other’s data.

Running in an Interpreter

Running in a different interpreter involves switching to it in the current thread and then calling some function. The runtime will execute the function using the current interpreter’s state. The concurrent.interpreters module provides a basic API for creating and managing interpreters, as well as the switch-and-call operation.

No other threads are started automatically for the operation, though there is a helper for doing so. There is another dedicated helper for calling the builtin exec() in an interpreter.

When exec() (or eval()) is called in an interpreter, the code runs using the interpreter’s __main__ module as the “globals” namespace. The same is true for functions that aren’t associated with any module. This is the same as how scripts invoked from the command-line run in the __main__ module.

Concurrency and Parallelism

As noted earlier, interpreters do not provide any concurrency on their own. They strictly represent the isolated execution context the runtime will use in the current thread. That isolation makes them similar to processes, but they still enjoy in-process efficiency, like threads.

All that said, interpreters do naturally support certain flavors of concurrency, as a powerful side effect of their isolation. That isolation enables a different approach to concurrency than you can take with async or threads. It’s a concurrency model similar to CSP or the actor model, one that is relatively easy to reason about.

You can take advantage of that concurrency model in a single thread, switching back and forth between interpreters, Stackless-style. However, this model is more useful when you combine interpreters with multiple threads. This mostly involves starting a new thread, where you switch to another interpreter and run what you want there.

Each actual thread in Python, even if you’re only running in the main thread, has its own current execution context. Multiple threads can use the same interpreter or different ones.

At a high level, you can think of the combination of threads and interpreters as threads with opt-in sharing.

As a significant bonus, interpreters are sufficiently isolated that they do not share the GIL, which means combining threads with multiple interpreters enables full multi-core parallelism. (This has been the case since Python 3.12.)

Communication Between Interpreters

In practice, multiple interpreters are useful only if we have a way to communicate between them. This usually involves some form of message passing, but can even mean sharing data in some carefully managed way.

With this in mind, the concurrent.interpreters module provides a queue.Queue implementation, available through create_queue().

“Sharing” Objects

Any data actually shared between interpreters loses the thread-safety provided by the GIL. There are various options for dealing with this in extension modules. However, from Python code the lack of thread-safety means objects can’t actually be shared, with a few exceptions. Instead, a copy must be created, which means mutable objects won’t stay in sync.

By default, most objects are copied with pickle when they are passed to another interpreter. Nearly all of the immutable builtin objects are either directly shared or copied efficiently.

There is a small number of Python types that actually share mutable data between interpreters, such as memoryview and Queue.

Reference

This module defines the following functions:

concurrent.interpreters.list_all()

Return a list of Interpreter objects, one for each existing interpreter.

concurrent.interpreters.get_current()

Return an Interpreter object for the currently running interpreter.

concurrent.interpreters.get_main()

Return an Interpreter object for the main interpreter. This is the interpreter the runtime created to run the REPL or the script given at the command-line. It is usually the only one.

concurrent.interpreters.create()

Initialize a new (idle) Python interpreter and return an Interpreter object for it.

concurrent.interpreters.create_queue(maxsize=0, *, unbounditems=UNBOUND)

Initialize a new cross-interpreter queue and return a Queue object for it.

maxsize sets the upper bound on the number of items that can be placed in the queue. If maxsize is less than or equal to zero, the queue size is infinite.

unbounditems sets the default behavior when getting an item from the queue whose original interpreter has been destroyed. See Queue.put() for supported values.

concurrent.interpreters.is_shareable(obj)

Return True if the object can be sent to another interpreter without using pickle, and False otherwise. See “Sharing” Objects.

Interpreter objects

class concurrent.interpreters.Interpreter(id)

A single interpreter in the current process.

Generally, Interpreter shouldn’t be called directly. Instead, use create() or one of the other module functions.

id

(read-only)

The underlying interpreter’s ID.

whence

(read-only)

A string describing where the interpreter came from.

is_running()

Return True if the interpreter is currently executing code in its __main__ module and False otherwise.

close()

Finalize and destroy the interpreter.

prepare_main(ns=None, /, **kwargs)

Bind the given objects into the interpreter’s __main__ module namespace. This is the primary way to pass data to code running in another interpreter.

ns is an optional dict mapping names to values. Any additional keyword arguments are also bound as names.

The values must be shareable between interpreters. Some objects are actually shared, some are copied efficiently, and most are copied via pickle. See “Sharing” Objects.

For example:

interp = interpreters.create()
interp.prepare_main(name='world')
interp.exec('print(f"Hello, {name}!")')

This is equivalent to setting variables in the interpreter’s __main__ module before calling exec() or call(). The names are available as global variables in the executed code.

exec(code, /)

Run the given source code in the interpreter (in the current thread).

code is a str of Python source code. It is executed as though it were the body of a script, using the interpreter’s __main__ module as the globals namespace.

There is no return value. To get a result back, use call() instead, or communicate through a Queue.

If the code raises an unhandled exception, an ExecutionFailed exception is raised in the calling interpreter. The actual exception object is not preserved because objects cannot be shared between interpreters directly.

This blocks the current thread until the code finishes.

call(callable, /, *args, **kwargs)

Call callable in the interpreter (in the current thread) and return the result.

Nearly all callables, args, kwargs, and return values are supported. All “shareable” objects are supported, as are “stateless” functions (meaning non-closures that do not use any globals). For other objects, this method falls back to pickle.

If the callable raises an exception, an ExecutionFailed exception is raised in the calling interpreter.

call_in_thread(callable, /, *args, **kwargs)

Start a new Thread that calls callable in the interpreter and return the thread object.

This is a convenience wrapper that combines threading with call(). The thread is started immediately. Call join() on the returned thread to wait for it to finish.

Exceptions

exception concurrent.interpreters.InterpreterError

This exception, a subclass of Exception, is raised when an interpreter-related error happens.

exception concurrent.interpreters.InterpreterNotFoundError

This exception, a subclass of InterpreterError, is raised when the targeted interpreter no longer exists.

exception concurrent.interpreters.ExecutionFailed

This exception, a subclass of InterpreterError, is raised when the running code raised an uncaught exception.

excinfo

A basic snapshot of the exception raised in the other interpreter.

exception concurrent.interpreters.NotShareableError

This exception, a subclass of TypeError, is raised when an object cannot be sent to another interpreter.

Communicating Between Interpreters

class concurrent.interpreters.Queue(id)

A cross-interpreter queue that can be used to pass data safely between interpreters. It provides the same interface as queue.Queue. The underlying queue can only be created through create_queue().

When an object is placed in the queue, it is prepared for use in another interpreter. Some objects are actually shared and some are copied efficiently, but most are copied via pickle. See “Sharing” Objects.

Queue objects themselves are shareable between interpreters (they reference the same underlying queue), making them suitable for use with Interpreter.prepare_main().

id

(read-only)

The queue’s ID.

maxsize

(read-only)

The maximum number of items allowed in the queue. A value of zero means the queue size is infinite.

empty()

Return True if the queue is empty, False otherwise.

full()

Return True if the queue is full, False otherwise.

qsize()

Return the number of items in the queue.

put(obj, block=True, timeout=None, *, unbounditems=None)

Put obj into the queue. If block is true (the default), block if necessary until a free slot is available. If timeout is a positive number, block at most timeout seconds and raise QueueFullError if no free slot is available within that time.

If block is false, put obj in the queue if a free slot is immediately available, otherwise raise QueueFullError.

unbounditems controls what happens when the item is retrieved via get() after the interpreter that called put() has been destroyed. If None (the default), the queue’s default (set via create_queue()) is used. Supported values:

  • UNBOUND – get() returns the UNBOUND sentinel in place of the original object.

  • UNBOUND_ERROR – get() raises ItemInterpreterDestroyed.

  • UNBOUND_REMOVE – the item is silently removed from the queue when the original interpreter is destroyed.

put_nowait(obj, *, unbounditems=None)

Equivalent to put(obj, block=False).

get(block=True, timeout=None)

Remove and return an item from the queue. If block is true (the default), block if necessary until an item is available. If timeout is a positive number, block at most timeout seconds and raise QueueEmptyError if no item is available within that time.

If block is false, return an item if one is immediately available, otherwise raise QueueEmptyError.

get_nowait()

Equivalent to get(block=False).

exception concurrent.interpreters.QueueEmptyError

This exception, a subclass of queue.Empty, is raised from Queue.get() and Queue.get_nowait() when the queue is empty.

exception concurrent.interpreters.QueueFullError

This exception, a subclass of queue.Full, is raised from Queue.put() and Queue.put_nowait() when the queue is full.

Basic usage

Creating an interpreter and running code in it:

from concurrent import interpreters

interp = interpreters.create()

# Run source code directly.
interp.exec('print("Hello from a subinterpreter!")')

# Call a function and get the result.
def add(x, y):
    return x + y

result = interp.call(add, 3, 4)
print(result)  # 7

# Run a function in a new thread.
def worker():
    print('Running in a thread!')

t = interp.call_in_thread(worker)
t.join()

Passing data with prepare_main():

interp = interpreters.create()

# Bind variables into the interpreter's __main__ namespace.
interp.prepare_main(greeting='Hello', name='world')
interp.exec('print(f"{greeting}, {name}!")')

# Can also use a dict.
config = {'host': 'localhost', 'port': 8080}
interp.prepare_main(config)
interp.exec('print(f"Connecting to {host}:{port}")')

Using queues to communicate between interpreters:

interp = interpreters.create()

# Create a queue and share it with the subinterpreter.
queue = interpreters.create_queue()
interp.prepare_main(queue=queue)

# The subinterpreter puts results into the queue.
interp.exec("""
import math
queue.put(math.factorial(10))
""")

# The main interpreter reads from the same queue.
result = queue.get()
print(result)  # 3628800

Running CPU-bound work in parallel using threads and interpreters:

import time
from concurrent import interpreters

def compute(n):
    total = sum(range(n))
    return total

interp1 = interpreters.create()
interp2 = interpreters.create()

# Each interpreter runs in its own thread and does not share
# the GIL, enabling true parallel execution.
t1 = interp1.call_in_thread(compute, 50_000_000)
t2 = interp2.call_in_thread(compute, 50_000_000)
t1.join()
t2.join()

Tip

For many use cases, InterpreterPoolExecutor provides a higher-level interface that combines threads with interpreters automatically.