Developer setup¶
There are three ways to run laputa:
1. Running laputa with the local-env-dev: recommended for onboarding, or when you need auth for developing
2. Running laputa locally with docker without auth: recommended for simple tests, but this mode is not well maintained at the moment
3. Running laputa with python: a bit more complex but useful for local development
Running laputa with the local-env-dev¶
If you really need authentication, you can run a complete laputa stack locally along with curity, vault, spicedb, etc.
In order to do that, please follow the readme of local-env-dev.
You will need to have the local-env-dev, the layout service and the dataset service started (in local-env-dev mode) before this step.
Then to start laputa:
1. Copy the laputa/common/parameters/config.yml.dist file into laputa/common/parameters/config.yml.
2. Modify it so it has the following properties for services, tenancy and auth:
dataset_service: enable
dataset_service_url: http://host.docker.internal:5003
layout_service_flag: enable
layout_service_url: http://host.docker.internal:3000
tenancy_flag: "enable"
tenancy:
  workspace_id: d53a053b-2925-436f-add1-eb2f492cab5a
  tenant_id: 2814c680-afb1-4d64-95f2-c271b6a9a3e4
toucan_oidc_configuration:
  algorithms:
    - RS512
  audiences:
    - workspace-toucan
    - workspace-toucan-code
    - toucan-infra-admin-client
    - toucan-embed-client
  jwks:
    uri: https://auth.toucan.local/2814c680-afb1-4d64-95f2-c271b6a9a3e4/oauth/oauth-anonymous/jwks
    cache_ttl: 3600
    min_time_between_attempts: 60
toucan_user_management:
  uri: https://auth.toucan.local/2814c680-afb1-4d64-95f2-c271b6a9a3e4/user-management/graphql/admin
  vault:
    secret_path: https://public-vault.toucan.local/v1/toucan_oauthapp_tenant/self/2814c680-afb1-4d64-95f2-c271b6a9a3e4/toucan-admin-management-client
    token_header: X-Vault-Token
    token: 00000000-0000-0000-0000-000000000000
toucan_micro_service_client:
  vault:
    secret_path: https://public-vault.toucan.local/v1/toucan_oauthapp_tenant/self/2814c680-afb1-4d64-95f2-c271b6a9a3e4/toucan-micro-service-client
    token_header: X-Vault-Token
    token: 00000000-0000-0000-0000-000000000000
toucan_websocket_client:
  client_id: toucan-laputa-websocket-client
  client_secret: admin
  uri: https://auth.toucan.local/2814c680-afb1-4d64-95f2-c271b6a9a3e4/oauth/oauth-introspect
toucan_public_embed_client:
  client_id: toucan-embed-client
  jwt_assertion_audience: https://auth.toucan.local/2814c680-afb1-4d64-95f2-c271b6a9a3e4/oauth/oauth-token
  jwt_assertion_issuer: 2814c680-afb1-4d64-95f2-c271b6a9a3e4-embed
curity_admin_api_configuration:
  username: admin
  password: admin
  embed_client_public_key_id: 2814c680-afb1-4d64-95f2-c271b6a9a3e4-public_embed_verification_key
  curity_admin_url_prefix: https://auth.toucan.local/admin/api/restconf/data
  oauth_profile_id: 2814c680-afb1-4d64-95f2-c271b6a9a3e4-oauth
spicedb:
  url: "spicedb.toucan.local:50051"
  preshared_key: "toucan-local-env-dev-spicedb"
  ca_cert_path: "/usr/local/share/ca-certificates/spicedb-ca-cert.crt"
- In the same file, uncomment the db_encryption_secret and put a random string in it
- When the local-env-dev is running, you can then run:
export MKCERT_DIR=$(mkcert -CAROOT)
export LOCAL_ENV_DEV_DIR=path-to-local-env-dev # replace this with the path of your local env dev
docker compose -f local-docker/docker-compose.yml --profile websocket_ssl up -d
# (the websocket_ssl profile is optional, but it is required when using the full stack.
# Use --profile websocket_no_ssl if you want to use ws:// instead of wss://)
- Run docker compose logs -f api (or worker) to follow the logs of the api or of the worker
- It may take a few minutes to install. Wait for the log *** Boot finished (took X.XXs). Ready to handle requests ! *** (be careful, there are a lot of logs)
- Go to http://localhost:5000/, you should get {"status":"OK","version":"vXXX.0.0"}
If you want to bring down the laputa stack, use docker compose -f local-docker/docker-compose.yml down; it will clean everything.
More details in the docker-compose docs.
Running laputa locally with docker without auth¶
The simplest way to run laputa is to use docker with the no-auth mode, but this mode is not well maintained:
1. Copy the laputa/common/parameters/config.yml.dist file into laputa/common/parameters/config.yml.
2. Modify it so it has the following properties for services, tenancy and auth:
dataset_service: enable
dataset_service_url: http://host.docker.internal:5003
layout_service_flag: enable
layout_service_url: http://host.docker.internal:3000
# Keep this flag enabled when running without the authentication service
# It overrides `tenancy_flag` but still requires `tenant_id` and `workspace_id`
# to be set in the `tenancy` config
no_auth_mode: enable
tenancy_flag: enable
tenancy:
  tenant_id: 'aaaaaaaa-aaaa-4aaa-aaaa-aaaaaaaaaaaa'
  workspace_id: 'aaaaaaaa-aaaa-4aaa-aaaa-aaaaaaaaaaab'
- In the same file, uncomment the db_encryption_secret and put a random string in it
- Run docker compose -f local-docker/docker-compose-no-auth.yml up -d, then copy the name of the container whose name contains api
- Run docker logs -f $container_name, replacing $container_name with the name of the api container (e.g. docker logs -f local-docker-api-1)
- It may take a few minutes to install. Wait for the log *** Boot finished (took X.XXs). Ready to handle requests ! *** (be careful, there are a lot of logs)
- Go to http://localhost:5000/, you should get {"status":"OK","version":"vXXX.0.0"}
Note that with this configuration, the server restarts automatically when files are modified. Sometimes not all of the config is reloaded, though, and you may need to do a hard reset: bring the stack down and start it again (see below).
If you want to bring down the laputa stack, use docker compose -f local-docker/docker-compose-no-auth.yml down; it will clean everything.
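A hard reset is simply the down command followed by the up command again:
docker compose -f local-docker/docker-compose-no-auth.yml down
docker compose -f local-docker/docker-compose-no-auth.yml up -d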
More details in the docker-compose docs.
Running with a local python environment (a bit complex but better if you develop in laputa regularly)¶
If you often develop in laputa and you want a good experience in your IDE, you can use a local python environment. But be careful, it can be a bit complex as you will have to install many system packages.
Install required system packages¶
Debian and derivatives systems¶
sudo apt-get install -y autoconf build-essential curl freetds-dev git libbz2-dev libffi-dev liblzma-dev libncurses5-dev libpq-dev libreadline-dev libsasl2-dev libsnappy-dev libssl-dev libtool libxmlsec1-dev libxmlsec1-openssl llvm make python3-dev python-openssl tk-dev unixodbc-dev wget xz-utils zlib1g-dev
Other systems¶
If you're on redhat derivatives, the package names will differ a bit from the debian ones:
# These include pyenv dependencies
sudo dnf install -y autoconf bzip2 bzip2-devel cmake csnappy-devel curl cyrus-sasl-devel freetds-devel gcc git libffi-devel libpq-devel libtool libtool-ltld-devel llvm make ncurses-devel openssl-devel python-devel readline-devel snappy sqlite sqlite-devel tk-devel unixODBC-devel wget xmlsec1-devel xmlsec1-openssl xz-devel zlib-devel
For any other system, follow this doc.
MacOS¶
You'll also need to install a few packages; feel free to copy-paste this line (if you have brew installed):
brew install coreutils mkdocs libtool make automake autoconf libmagic unixodbc pyenv
Install python¶
Laputa supports python3.10.
There are numerous ways to install python and handle multiple versions (including
downloading compiled binaries, recompiling from source, or using distributions like
anaconda). We will describe the two most
natural and common ways to do that: your OS package manager or pyenv.
Option 1: use your OS package manager¶
On debian derivatives:
sudo apt update && sudo apt install python3.10
On redhat derivatives:
sudo yum install rh-python3.10
Option 2: use pyenv¶
Pyenv lets you easily switch between multiple
versions of Python, as nvm does for node.
We need at least pyenv version 2.3.
To install pyenv:
curl https://pyenv.run | bash
(or follow this doc)
If pyenv asks you to add a few lines to your .bashrc, do it and restart bash afterwards.
Once pyenv is installed, use it to install python 3.11.10:
pyenv install 3.11.10
If the installation is successful, the following command should print Python 3.11.10:
`pyenv root`/versions/3.11.10/bin/python -V
You can then switch your current shell session to this python version:
pyenv shell 3.11.10
Tip
You can use pyenv local to set the python version for a specific directory only:
pyenv local 3.11.10 # run this at the root of the laputa repository
Install laputa and its dependencies¶
git clone git@github.com:toucantoco/laputa
cd laputa/
Install uv globally, then install laputa's dependencies with uv sync.
Laputa's dependencies are managed by uv in a persistent environment saved in the .venv directory.
You can find more information in the uv documentation.
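For example, a minimal sketch (the installer below is one of the options documented by uv; use whichever install method you prefer):
# Install uv globally (see the uv documentation for other install methods)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install laputa's dependencies into .venv
uv sync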
Install the pre-commit hooks:
pre-commit install
NOTES:
- On some Apple Silicon machines (M1, etc.) the py-mini-racer installation does not work, so its installation is now optional. This package is useful for parsing CSON files, so CSON parsing does not work by default locally (old stuff such as etl_config.cson, locales.cson or permission.cson). You can still install it along with all additional packages with uv sync --all-extras. If you really want to use it on a Mac M1, see the section "install Laputa on a m1 mac".
- If you're facing issues with a package, make sure you have installed all the packages listed in "Install required system packages".
Later on, to run commands use uv run <command> in laputa's directory.
Start the required databases and services¶
Laputa needs to connect to a redis and a mongo server. If you already have them on your system you can use them, but we usually recommend using docker containers (if you don't have a running docker installation on your system, first install docker):
docker run -d -p 6379:6379 --rm redis:6.2.1-alpine
docker run -d -p 27017:27017 --rm mongo:5.0.4
docker run -d -p 3000:3000 --rm quay.io/toucantoco/screenshot:1.3.0
# Or with docker compose
docker compose -f local-docker/docker-compose.yml up -d mongo redis
N.B.: the quay.io/toucantoco/screenshot container is optional.
The default laputa configuration (./laputa/common/parameters/config.yml or
./tests/common/parameters/config.yml for tests) expects the redis server
to be reachable on redis:6379, the mongo one on mongo:27017 and
the screenshot service on screenshot:3000.
Therefore, either change the configuration or edit your /etc/hosts file so
that mongo, redis and screenshot resolve to localhost:
head -n 1 /etc/hosts
127.0.0.1 localhost redis mongo screenshot
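If those aliases are missing, one way to add them (a sketch; adapt it if your /etc/hosts is organized differently):
# Append the aliases to /etc/hosts (requires sudo)
echo "127.0.0.1 redis mongo screenshot" | sudo tee -a /etc/hosts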
You can also change the docker command lines above to expose the services on
different ports if you prefer, provided that you update the config.yml
accordingly.
Configuration¶
As said earlier, sensitive default parameters are available in
common/parameters and they should be copied by dropping the .dist extension:
## App configuration
cp common/parameters/config.yml{.dist,}
The copy from config.yml.dist to config.yml is done each time you run a goal that has
configure as a dependency (for example configure itself, docker, or jenkins).
This essentially means that those commands will overwrite any changes made to config.yml.
If you want to use a config.yml file located somewhere else,
just set the TOUCAN_CONFIG environment variable accordingly.
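For example (the path below is only an illustration):
# Point laputa at a config file stored outside the repository
export TOUCAN_CONFIG=/path/to/your/config.yml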
Uncomment the line db_encryption_secret, otherwise the server won't start.
This value is used to encrypt all secret fields in the database, so it should not change across restarts.
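Any random string works; one way to generate one (assuming openssl is available):
# Generate a random value suitable for db_encryption_secret
openssl rand -hex 32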
Make sure the tests pass¶
uv run pytest tests/models/test_user.py
Start laputa¶
uv run toucan-ctl runserver
# or
make dev-server
Using Docker¶
You can run laputa with docker. make docker-start-local-stack will start:
- An API server listening on localhost:5000
- A websocket server listening on localhost:5001
- A redis server listening on localhost:6379
- A mongo server listening on localhost:27017
- Two celery workers
Development¶
Linters/Mypy¶
We use ruff and black to make sure all the code respects good standards. We also use mypy to check typing issues.
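A sketch of how you might run them locally through uv (the exact targets and options used in CI may differ):
# Lint, check formatting and type-check (the "." targets are an assumption)
uv run ruff check .
uv run black --check .
uv run mypy .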
Run tests in a docker container¶
If you didn't do a full laputa installation, you can build a docker image with testing dependencies and use it to run the tests:
make docker-build-testing &&\
docker compose -f docker-compose-testing.yml run --rm api pytest tests/
Python's dependencies management¶
We use uv for a simple, secure and reproducible installation of our Python dependencies.
So you should have uv installed on your machine!
If it's not, please refer to the laputa installation section above.
(Re)installing Laputa's development dependencies is as simple as uv sync: this will also remove any unnecessary packages.
In order to update a dependency:
1) Update the dependency version in pyproject.toml
2) Run uv sync: this will update your uv.lock file
Please check the uv sync documentation for an overview of all available options.
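For example, after bumping a version constraint in pyproject.toml:
uv sync            # re-resolves the dependencies and updates uv.lock
git diff uv.lock   # review what actually changed before committing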
Migrations¶
If you need to update the data model in a non-backward-compatible way, you may want to write a migration script to make sure the old data will automatically be updated when your new code is deployed.
Migrations are python scripts whose filename and content must follow a strict format. They are
located in the app/migrations/ directory.
Migration usage, specs, gotchas and templates are described in PAT-14.
Caution!
As much as possible, please write reentrant and reversible migration scripts.
Advice on writing resilient migrations¶
We recommend using only pymongo to write migrations.
It can be tempting to use MongoEngine models to make queries easier, but keep in mind that migrations must stay exactly
the same over time, while the rest of the code base can evolve. Their execution should always lead to the same result,
otherwise subsequent migrations could fail. For this reason, if you ever want to use models, copy them into your migration and rename them,
and never import them from laputa.models. This way, even if the models change later, the migration will still produce the
same result.
An example of such migration can be found in laputa/app/migrations/migration_2020_09_30_13_00_00_privileges_allsmallapps.py.
Migrations of small app config files¶
Since we have committed templates in the code, modifying the front configs in the database
with a migration is not enough!
We also need to migrate the front configs of those templates and commit the new configs.
In order to run a migration on all the committed front configs, there is a utility in
laputa/common/utils/migration_utils.py that you can run manually and that should help you!