How to migrate from pytest-operator to Jubilant¶
Many charm integration tests use pytest-operator and python-libjuju. This guide explains how to migrate your integration tests from those libraries to Jubilant.
To get help while you’re migrating tests, please keep the API reference handy, and make use of your IDE’s autocompletion – Jubilant tries to provide good type annotations and docstrings.
Migrating your tests can be broken into three steps:

1. Update your dependencies
2. Add fixtures to `conftest.py`
3. Update the tests themselves

Let's look at each of these in turn.
Update your dependencies¶
The first thing you'll need to do is add `jubilant` as a dependency in your `tox.ini` or `pyproject.toml`. You can also remove the dependencies on `juju` (python-libjuju), `pytest-operator`, and `pytest-asyncio`.
If you're using `tox.ini`, the diff might look like:

```diff
 [testenv:integration]
 deps =
     boto3
     cosl
-    juju>=3.0
+    jubilant~=1.0
     pytest
-    pytest-operator
-    pytest-asyncio
     -r{toxinidir}/requirements.txt
```
If you're migrating a large number of tests, you may want to do it in stages. In that case, keep the old dependencies in place until the end and migrate tests one at a time, so that both pytest-operator and Jubilant tests can run together.
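During such a staged migration, your `tox.ini` would temporarily list both sets of dependencies (versions here are illustrative):

```ini
[testenv:integration]
deps =
    boto3
    cosl
    # Old dependencies: keep until all tests are migrated
    juju>=3.0
    pytest-operator
    pytest-asyncio
    # New dependency
    jubilant~=1.0
    pytest
    -r{toxinidir}/requirements.txt
```

Once the last pytest-operator test is gone, remove the old dependencies in one final cleanup commit.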
Add fixtures to `conftest.py`¶

The pytest-operator library includes pytest fixtures, but Jubilant does not include any fixtures, so you'll need to add one or two fixtures to your `conftest.py`.

A `juju` model fixture¶

Jubilant expects that a Juju controller has already been set up, either using Concierge or a manual approach. However, you'll want a fixture that creates a temporary model. We recommend naming the fixture `juju`:
```python
# tests/integration/conftest.py

import jubilant
import pytest


@pytest.fixture(scope='module')
def juju(request: pytest.FixtureRequest):
    keep_models = bool(request.config.getoption('--keep-models'))
    with jubilant.temp_model(keep=keep_models) as juju:
        juju.wait_timeout = 10 * 60
        yield juju  # run the test
        if request.session.testsfailed:
            log = juju.debug_log(limit=1000)
            print(log, end='')


def pytest_addoption(parser):
    parser.addoption(
        '--keep-models',
        action='store_true',
        default=False,
        help='keep temporarily-created models',
    )
```
In your tests, use the fixture like this:

```python
# tests/integration/test_charm.py

def test_active(juju: jubilant.Juju):
    juju.deploy('mycharm')
    juju.wait(jubilant.all_active)

    # Or wait for just 'mycharm' to be active (ignoring other apps):
    juju.wait(lambda status: jubilant.all_active(status, 'mycharm'))
```
A few things to note about the fixture:

- It includes a command-line parameter `--keep-models`, to match pytest-operator. If the parameter is set, the fixture keeps the temporary model around after running the tests.
- It sets `juju.wait_timeout` to 10 minutes, to match python-libjuju's default `wait_for_idle` timeout.
- If any of the tests fail, it uses `juju.debug_log` to display the last 1000 lines of `juju debug-log` output.
- It is module-scoped, like pytest-operator's `ops_test` fixture. This means that a new model is created for every `test_*.py` file, but not for every test.
An application fixture¶

If you don't want to deploy your application in each test, you can add a module-scoped `app` fixture that deploys your charm and waits for it to go active.

The following fixture assumes that the charm has already been packed with `charmcraft pack` in a previous CI step (Jubilant has no equivalent of `ops_test.build_charm`):
```python
# tests/integration/conftest.py

import pathlib

import jubilant
import pytest


@pytest.fixture(scope='module')
def app(juju: jubilant.Juju):
    juju.deploy(
        charm_path('mycharm'),
        'mycharm',
        resources={
            'mycharm-image': 'ghcr.io/canonical/...',
        },
        config={
            'base_url': '/api',
            'port': 80,
        },
        base='ubuntu@22.04',
    )
    # ... do any other application setup here ...
    juju.wait(jubilant.all_active)

    yield 'mycharm'  # run the test


def charm_path(name: str) -> pathlib.Path:
    """Return full absolute path to given test charm."""
    # We're in tests/integration/conftest.py, so parent*3 is repo top level.
    charm_dir = pathlib.Path(__file__).parent.parent.parent
    charms = [p.absolute() for p in charm_dir.glob(f'{name}_*.charm')]
    assert charms, f'{name}_*.charm not found'
    assert len(charms) == 1, 'more than one .charm file, unsure which to use'
    return charms[0]
```
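The glob logic in `charm_path` is easy to exercise on its own. Here's a quick check using a temporary directory in place of the repo root (the `.charm` filename and the `find_charm` helper are made up for illustration):

```python
import pathlib
import tempfile


def find_charm(charm_dir: pathlib.Path, name: str) -> pathlib.Path:
    """Return the single packed charm matching name_*.charm in charm_dir."""
    charms = [p.absolute() for p in charm_dir.glob(f'{name}_*.charm')]
    assert charms, f'{name}_*.charm not found'
    assert len(charms) == 1, 'more than one .charm file, unsure which to use'
    return charms[0]


with tempfile.TemporaryDirectory() as tmp:
    charm_dir = pathlib.Path(tmp)
    # charmcraft pack names files like <charm>_<base>-<arch>.charm
    (charm_dir / 'mycharm_ubuntu-22.04-amd64.charm').touch()
    path = find_charm(charm_dir, 'mycharm')

print(path.name)  # mycharm_ubuntu-22.04-amd64.charm
```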
In your tests, you'll need to specify that the test depends on both fixtures:

```python
# tests/integration/test_charm.py

def test_active(juju: jubilant.Juju, app: str):
    status = juju.status()
    assert status.apps[app].is_active
```
Update the tests themselves¶

Many features of pytest-operator and python-libjuju map quite directly to Jubilant, except without using `async`. Here is a summary of what you need to change:

- Remove `async` and `await` keywords, and replace `pytest_asyncio.fixture` with `pytest.fixture`
- Replace introspection of python-libjuju's `Application` and `Unit` objects with `juju.status`
- Replace `model.wait_for_idle` with `juju.wait` and an appropriate ready callable
- Replace `unit.run` with `juju.exec`; note the different return type and error handling
- Replace `unit.run_action` with `juju.run`; note the different return type and error handling
- Replace other python-libjuju methods with equivalent `Juju` methods, which are normally much closer to the Juju CLI commands

Let's look at some specifics in more detail.
Deploying a charm¶

To migrate a charm deployment from pytest-operator, drop the `await`, change `series` to `base`, and replace `model.wait_for_idle` with `juju.wait`:
```python
# pytest-operator
postgres_app = await model.deploy(
    'postgresql-k8s',
    channel='14/stable',
    series='jammy',
    revision=300,
    trust=True,
    config={'profile': 'testing'},
)
await model.wait_for_idle(apps=[postgres_app.name], status='active')
```

```python
# jubilant
juju.deploy(
    'postgresql-k8s',
    channel='14/stable',
    base='ubuntu@22.04',
    revision=300,
    trust=True,
    config={'profile': 'testing'},
)
juju.wait(lambda status: jubilant.all_active(status, 'postgresql-k8s'))
```
Fetching status¶

A python-libjuju model is updated in the background using websockets. In Jubilant you use ordinary Python function calls to fetch updates:

```python
# pytest-operator
async def test_active(app: Application):
    assert app.units[0].workload_status == ActiveStatus.name
```

```python
# jubilant
def test_active(juju: jubilant.Juju, app: str):
    status = juju.status()
    assert status.apps[app].units[app + '/0'].is_active
```
Waiting for a condition¶

However, instead of calling `status` directly, it's usually better to wait for a certain condition to be true. In python-libjuju you used `model.wait_for_idle`; in Jubilant you use `juju.wait`, which has a simpler and more consistent API.

The `wait` method takes a ready callable, which takes a `Status` object. Internally, `wait` polls `juju status` every second and calls the ready callable, which must return True three times in a row (this is configurable).
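The "N successes in a row" behaviour is easy to model in plain Python. This is a simplified sketch of the polling loop, not Jubilant's actual implementation (`wait_for` and its parameters are illustrative only):

```python
import time


def wait_for(ready, get_status, successes=3, delay=1.0, timeout=600):
    """Poll get_status() every `delay` seconds until ready() returns
    True `successes` times in a row; raise TimeoutError otherwise."""
    deadline = time.monotonic() + timeout
    in_a_row = 0
    while time.monotonic() < deadline:
        if ready(get_status()):
            in_a_row += 1
            if in_a_row >= successes:
                return
        else:
            in_a_row = 0  # any failed check resets the streak
        time.sleep(delay)
    raise TimeoutError('condition never held')


# Simulated statuses: not ready twice, then ready from the third poll on.
statuses = iter(['waiting', 'waiting', 'active', 'active', 'active'])
polls = []

def get_status():
    status = next(statuses)
    polls.append(status)
    return status

wait_for(lambda s: s == 'active', get_status, successes=3, delay=0)
print(len(polls))  # 2 not-ready polls + 3 ready polls in a row = 5
```

Requiring several consecutive successes guards against declaring readiness during a transient "active" blip while hooks are still firing.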
You can optionally provide an error callable, which also takes a `Status` object. If the error callable returns True, `wait` raises a `WaitError` immediately.

Jubilant provides helper functions to use for the ready and error callables, such as `jubilant.all_active` and `jubilant.any_error`. These check whether the workload status of all (or any) applications and their units is in a given state.

For example, here's a simple `wait` call that waits for all applications and units to go "active" and raises an error if any go into "error":
```python
# pytest-operator
async def test_active(model: Model):
    await model.deploy('mycharm')
    await model.wait_for_idle(status='active')  # implies raise_on_error=True
```

```python
# jubilant
def test_active(juju: jubilant.Juju):
    juju.deploy('mycharm')
    juju.wait(jubilant.all_active, error=jubilant.any_error)
```
It's usually best to wait on workload status with the `all_*` and `any_*` helpers. However, if you want to wait specifically for unit agent status to be idle, you can use `jubilant.all_agents_idle`:

```python
# pytest-operator
async def test_idle(model: Model):
    await model.deploy('mycharm')
    await model.wait_for_idle()
```

```python
# jubilant
def test_idle(juju: jubilant.Juju):
    juju.deploy('mycharm')
    juju.wait(jubilant.all_agents_idle)
```
It's common to use a `lambda` function to customize the callable or compose multiple checks. For example, to wait specifically for `mysql` and `redis` to go active and `logger` to be blocked:

```python
juju.wait(
    lambda status: (
        jubilant.all_active(status, 'mysql', 'redis')
        and jubilant.all_blocked(status, 'logger')
    ),
)
```
The `wait` method also has other options (see `juju.wait` for details):

```python
juju.deploy('mycharm')
juju.wait(
    ready=lambda status: jubilant.all_active(status, 'mycharm'),
    error=jubilant.any_error,
    delay=0.2,  # poll "juju status" every 200ms (default 1s)
    timeout=60,  # set overall timeout to 60s (default juju.wait_timeout)
    successes=7,  # require ready to return success 7x in a row (default 3)
)
```
Integrating two applications¶

To integrate two charms, remove the `async`-related code and replace `model.add_relation` with `juju.integrate`. For example, to integrate discourse-k8s with three other charms:

```python
# pytest-operator
await asyncio.gather(
    model.add_relation('discourse-k8s', 'postgresql-k8s:database'),
    model.add_relation('discourse-k8s', 'redis-k8s'),
    model.add_relation('discourse-k8s', 'nginx-ingress-integrator'),
)
await model.wait_for_idle(status='active')
```

```python
# jubilant
juju.integrate('discourse-k8s', 'postgresql-k8s:database')
juju.integrate('discourse-k8s', 'redis-k8s')
juju.integrate('discourse-k8s', 'nginx-ingress-integrator')
juju.wait(jubilant.all_active)
```
Executing a command¶

In pytest-operator tests, you used `unit.run` to execute a command. With Jubilant (as with Juju 3.x) you use `juju.exec`. Jubilant's `exec` returns a `jubilant.Task`, and it also checks errors for you:

```python
# pytest-operator
unit = model.applications['discourse-k8s'].units[0]
action = await unit.run('/bin/bash -c "..."')
await action.wait()
logger.info(action.results)
assert action.results['return-code'] == 0, 'Enable plugins failed'
```

```python
# jubilant
task = juju.exec('/bin/bash -c "..."', unit='discourse-k8s/0')
logger.info(task.results)
```
Running an action¶

In pytest-operator tests, you used `unit.run_action` to run an action. With Jubilant, you use `juju.run`. Similar to `exec`, Jubilant's `run` returns a `jubilant.Task` and checks errors for you:

```python
# pytest-operator
app = model.applications['postgresql-k8s']
action = await app.units[0].run_action('get-password', username='operator')
await action.wait()
password = action.results['password']
```

```python
# jubilant
task = juju.run('postgresql-k8s/0', 'get-password', {'username': 'operator'})
password = task.results['password']
```
The `cli` fallback¶

Similar to how you could call `ops_test.juju`, with Jubilant you can call `juju.cli` to execute an arbitrary Juju command. The `cli` method checks errors for you and raises a `CLIError` if the command's exit code is nonzero:

```python
# pytest-operator
return_code, _, scp_err = await ops_test.juju(
    'scp',
    '--container',
    'postgresql',
    './testing_database/testing_database.sql',
    f'{postgres_app.units[0].name}:.',
)
assert return_code == 0, scp_err
```

```python
# jubilant
juju.cli(
    'scp',
    '--container',
    'postgresql',
    './testing_database/testing_database.sql',
    'postgresql-k8s/0:.',
)
```
A `fast_forward` context manager¶

Pytest-operator has a `fast_forward` context manager which temporarily speeds up `update-status` hooks to fire every 10 seconds (instead of Juju's default of every 5 minutes). Jubilant doesn't provide this context manager, as we don't recommend it for new tests. If you need it for migrating existing tests, you can define it as:

```python
import contextlib


@contextlib.contextmanager
def fast_forward(juju: jubilant.Juju):
    """Context manager that temporarily speeds up update-status hooks to fire every 10s."""
    old = juju.model_config()['update-status-hook-interval']
    juju.model_config({'update-status-hook-interval': '10s'})
    try:
        yield
    finally:
        juju.model_config({'update-status-hook-interval': old})
```
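You can check the save-and-restore behaviour of this context manager without a live controller by driving it with a small stand-in object (`FakeJuju` here is made up for illustration and mimics only the `model_config` calls used above):

```python
import contextlib


@contextlib.contextmanager
def fast_forward(juju):
    """Temporarily speed up update-status hooks to fire every 10s."""
    old = juju.model_config()['update-status-hook-interval']
    juju.model_config({'update-status-hook-interval': '10s'})
    try:
        yield
    finally:
        juju.model_config({'update-status-hook-interval': old})


class FakeJuju:
    """Minimal stand-in for jubilant.Juju: model_config() with no
    argument returns the config; with a dict argument, updates it."""

    def __init__(self):
        self._config = {'update-status-hook-interval': '5m'}

    def model_config(self, values=None):
        if values is None:
            return dict(self._config)
        self._config.update(values)


juju = FakeJuju()
with fast_forward(juju):
    inside = juju.model_config()['update-status-hook-interval']
after = juju.model_config()['update-status-hook-interval']
print(inside, after)  # 10s 5m
```

In a real test you'd pass the `juju` fixture instead: `with fast_forward(juju): juju.wait(...)`. Because the restore happens in a `finally` block, the original interval is put back even if the test body raises.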
See more¶

This discourse-k8s migration PR shows how we migrated a real charm's integration tests.