Integrate OnFinality into your CI/CD Pipeline (Substrate only)
Summary
This walkthrough shows how to automate your development process so any change to your node codebase can be automatically built, and then deployed to an existing or new OnFinality network.
You can use this to create a brand-new one-validator network from changes to your codebase, run tests against it, or join more nodes to simulate a more complicated scenario. You can safely delete the whole network once you're done, paying only a small amount since we charge on an hourly basis.
This guide covers integrating OnFinality's CLI into GitHub Actions to:
- Update the network spec
- Deploy new nodes
- Perform a rolling update of existing nodes in the network spec
- Apply common deployment strategies for dev and staging environments
Prerequisites
- A valid user network and running nodes on OnFinality
- An existing pipeline that builds and publishes a binary to DockerHub
- The OnF CLI tool installed locally with an access key configured. See OnFinality CLI Tool and Access Keys
- A generated chain spec file that you have access to
We have provided some example chainspec files at https://github.com/OnFinality-io/onf-cli/tree/master/sample/onf-testnet:
- rococo-local.json (relay chain, optional)
- karura-dev-2000.json (parachain)
For your own chain you should generate your own chainspec file. Please see the open-web3-stack/parachain-launch GitHub repo for an example of how to generate a chainspec file.
You do not need to provide a relay chain if your chain is not running as a parachain (i.e. it runs as a standalone Substrate chain).
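As a rough sketch, a raw chainspec can typically be produced with Substrate's `build-spec` subcommand. The binary name `mynode` and the chain ID `local` below are placeholders; substitute your own values:

```
# Generate a human-readable chain spec (binary name and chain ID are placeholders)
./target/release/mynode build-spec --chain local --disable-default-bootnode > chainspec-plain.json

# Convert it to the raw format used when bootstrapping the network
./target/release/mynode build-spec --chain chainspec-plain.json --raw > chainspec.json
```

The raw form bakes the genesis state into storage key/value pairs, which is the format nodes consume at startup.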
Other Resources
Our Github Action: https://github.com/OnFinality-io/action-onf-release
Chain Config Files
Chain Config for Standalone Chain/Parachain
We have provided an example chain config file at https://github.com/OnFinality-io/onf-cli/tree/master/sample/onf-testnet:
- bootstrap-parachain-config.yaml (parachain)
networkSpec
This section contains basic information about the image version, the node types, and a reference to the chain spec file that was created earlier. Also included are the libp2p addresses of the bootnodes for the parachain.
validator
The example files are configured to automatically deploy 3 validators as part of the bootstrap process. All three run in the OnFinality Japan location with 2 compute units and 30 GB of storage each. Each exposes a public port protected by an API key.
bootNode
If you want to add bootnodes to the new network, adjust the count property in the bootNode section. The libp2p addresses of all newly created bootnodes will then be added to the network spec.
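Putting these sections together, a chain config file follows roughly this shape. The field names and values below are illustrative only; refer to the sample bootstrap-parachain-config.yaml for the exact schema:

```yaml
networkSpec:
  name: my-testnet              # illustrative name
  imageVersion: v1.0.0
  chainspec: ./chainspec.json   # reference to the chain spec generated earlier

validator:
  count: 3          # deploy 3 validators during bootstrap
  region: jp        # OnFinality Japan location
  computeUnits: 2   # 2 compute units per node
  storage: 30Gi     # 30 GB of storage per node

bootNode:
  count: 2          # increase to create more bootnodes
```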
Start Network
Bootstrap Relaychain (Optional)
Skip this step if your chain is not running as a parachain (it’s running as a standalone chain)
Run `onf network bootstrap -f bootstrap-relaychain-config.yaml`
After you run the command, OnFinality will create a new network spec in the workspace and deploy any new nodes as defined in the config file.
Bootstrap Parachain/Standalone Chain.
First, modify the --bootnodes parameters in the networkSpec definition section, replacing the addresses with those of the bootnodes you created in the previous step. You can get the libp2p addresses in the OnFinality web app, or via the CLI tool.
Run `onf network bootstrap -f bootstrap-parachain-config.yaml`
After you run the command, OnFinality will create a new network spec in the workspace and deploy any new nodes as defined in the config file.
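The two bootstrap steps above can be combined into a single script. The commands and config file names below match the samples used throughout this guide:

```
# 1. Bootstrap the relay chain first (skip this for a standalone chain)
onf network bootstrap -f bootstrap-relaychain-config.yaml

# 2. Copy the relay chain bootnode libp2p addresses into the --bootnodes
#    parameters of bootstrap-parachain-config.yaml, then bootstrap the parachain
onf network bootstrap -f bootstrap-parachain-config.yaml
```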
Automate via a GitHub Action (optional)
First, we assume there is already a workflow that builds and publishes a Docker image to an image repository such as DockerHub; if not, you can find an example in Docker Image Requirements. Such a workflow might look like the following GitHub Action:
```yaml
jobs:
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-20.04
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Check out the repo
        uses: actions/checkout@v2.5.0
      # Login to Docker Hub using the credentials stored in the repository secrets
      - name: Log in to Docker Hub
        uses: docker/login-action@v2.1.0
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}
      # Get the commit short hash, to use as the rev
      - name: Calculate rev hash
        id: rev
        run: echo "value=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
      # Build and push two images: one with the version tag and one with the latest tag
      - name: Build and push Docker images
        uses: docker/build-push-action@v3.2.0
        with:
          context: .
          push: true
          tags: ${{ secrets.DOCKER_REPO }}:v${{ steps.rev.outputs.value }}, ${{ secrets.DOCKER_REPO }}:latest
```
Note that the new image version is the first 7 characters of the commit hash (sha_short).
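To illustrate, `git rev-parse --short HEAD` abbreviates the full 40-character commit hash, and the image tag prefixes it with `v`. The same truncation can be reproduced by taking the first 7 characters (the hash value below is made up):

```python
def short_rev(full_sha: str, length: int = 7) -> str:
    """Return the abbreviated commit hash, as used for the image tag."""
    return full_sha[:length]

# Hypothetical full commit hash, for illustration only
full = "9fceb02d0ae598e95dc970b74767f19372d61af8"
tag = f"v{short_rev(full)}"
print(tag)  # -> v9fceb02
```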
Now we add two steps at the end: the first pushes the new image version to an existing OnFinality network spec, and the second performs a rolling upgrade of all nodes to the new version.
```yaml
# You need to add the following secrets to your GitHub Repository or Organization to make this work
# OnFinality access credential instructions: https://documentation.onfinality.io/support/onfinality-cli-tool-and-access-keys
# - ONF_ACCESS_KEY: The unique access key to OnFinality
# - ONF_SECRET_KEY: A secret access key to OnFinality
# - ONF_WORKSPACE_ID: The ID of your OnFinality workspace; you can retrieve this from your workspace settings. E.g. 6683212593101979648
# - ONF_NETWORK_KEY: The ID of your OnFinality network; you can retrieve this from the URL when viewing the network. E.g. f987705c-fe75-4069-99b4-77d62c4fe58k
      ....
      - name: Update image version of the existing network spec
        uses: "OnFinality-io/action-onf-release@v1"
        with:
          # These keys should be in your GitHub secrets
          # https://documentation.onfinality.io/support/onfinality-cli-tool-and-access-keys
          onf-access-key: ${{ secrets.ONF_ACCESS_KEY }}
          onf-secret-key: ${{ secrets.ONF_SECRET_KEY }}
          onf-workspace-id: ${{ secrets.ONF_WORKSPACE_ID }}
          onf-network-key: ${{ secrets.ONF_NETWORK_KEY }}
          # Add a new image version to the network spec
          onf-sub-command: image
          onf-action: add
          image-version: v${{ steps.rev.outputs.value }}
      - name: Perform a rolling upgrade to all nodes
        uses: "OnFinality-io/action-onf-release@v1"
        with:
          # These keys should be in your GitHub secrets
          onf-access-key: ${{ secrets.ONF_ACCESS_KEY }}
          onf-secret-key: ${{ secrets.ONF_SECRET_KEY }}
          onf-workspace-id: ${{ secrets.ONF_WORKSPACE_ID }}
          onf-network-key: ${{ secrets.ONF_NETWORK_KEY }}
          # Perform a rolling upgrade to nodes
          onf-sub-command: node
          onf-action: upgrade
          image-version: v${{ steps.rev.outputs.value }}
          percent: 30 # Percentage of nodes to upgrade at a time
```
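The percent setting controls how many nodes are upgraded in each batch of the rolling upgrade, so the network keeps a majority of nodes serving while each batch restarts. As a rough illustration of the batching arithmetic only (not OnFinality's actual implementation), with 10 nodes and 30% per batch:

```python
import math

def rolling_batches(nodes: list[str], percent: int) -> list[list[str]]:
    """Split nodes into upgrade batches of at most ceil(n * percent / 100)."""
    batch_size = max(1, math.ceil(len(nodes) * percent / 100))
    return [nodes[i:i + batch_size] for i in range(0, len(nodes), batch_size)]

nodes = [f"node-{i}" for i in range(10)]
for batch in rolling_batches(nodes, 30):
    # upgrade this batch, wait until the nodes are healthy, then continue
    print(batch)
# 10 nodes at 30% -> batches of 3, 3, 3 and a final batch of 1
```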
Extend it Further
There are many more possibilities to explore. Once a new version is rolled out, you can execute tests against the new client, and then add an extra step to upgrade the runtime to the new version before running further tests against the new runtime.