
Verifier in CI

Verifier in CI gives you control over where and when to run integration tests against your containerized software. Learn more about the Verifier toolkit.

Verifier in CI allows you to take a containerized version of the Verifier testing tool and run it in your own environment, such as in GitLab CI. Today, Tangram Pro supports exporting an example GitLab CI file which can be integrated into your existing repository's CI file. You are welcome to use Verifier in CI in other places, with the understanding that we currently don't have formal support for it in Tangram Pro. Reach out to us through the support links at the bottom of the page if you have questions or need assistance with Verifier.

To use Verifier in CI, you'll need to have the Verifier role on your user account.

What You'll Need

To run Verifier in CI, you'll need each of the following:

  • Access to Tangram Pro
    • For configuring your component & transport mechanism
    • For obtaining config files via the API
  • Your component, containerized to include the modified watcher script
  • The appropriate transport proxy, if needed, for communication with your component
  • A GitLab CI configuration which includes the provided partial CI file
    • You'll need to have access to the CI/CD settings for the repo under test to set a variable
  • Network access at CI runtime
    • GitLab will need to pull the Verifier image from Tangram Pro

We'll discuss each of these below.

Getting Started

Tangram Pro

Test configuration happens inside Tangram Pro: Verifier needs your input in order to understand how to talk to the component and what to test.

Tangram Pro should be used for the following:

  • Configuring the transport proxy settings (such as hostname and port)
  • Setting up the component's message interface
  • Specifying message sequences that the component should be tested on
  • Writing YAML field constraints for messages in those sequences

Once you've done component configuration in Tangram Pro, you'll be able to export a set of example files to help you set up Verifier tests in GitLab CI (the same files which you may be seeing now). This also enables pulling your configuration files via the Tangram Pro API: the included partial CI file will do that for you.

GitLab CI

Using Verifier in CI is possible in any containerized environment, but we provide an example configuration specifically for GitLab CI, which is very common in self-hosted environments. The example config will be generated for you when you open the Verifier in CI panel from the Verify mode in a Tangram Pro project.

This partial CI example includes:

  • Pulling configuration files for your project directly from Tangram Pro via the API
  • Running Verifier against your component
  • Generating JUnit test results which can be viewed within GitLab
  • Pushing Verifier results to Tangram Pro, which provides a better viewing experience for test results

The partial CI example does NOT include an example image build: the ways that builds can be done here will vary depending on how your GitLab administrators have configured your instance.
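As a rough sketch only, one common pattern is a Docker-in-Docker build job like the one below. The job name, base image, and Docker-in-Docker service are assumptions, and the CI_REGISTRY_* variables are GitLab's predefined registry variables; your administrators may instead provide kaniko, buildah, or runners with direct Docker access.

```yaml
# Hypothetical build job: adapt to your instance's runner configuration.
build:component_image:
  image: docker:latest
  services:
    - docker:dind                     # assumes the runner permits Docker-in-Docker
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    # Wrap your existing component image with watcher.sh via the provided Dockerfile
    - docker build -t "$CI_REGISTRY_IMAGE/component:latest" .
    - docker push "$CI_REGISTRY_IMAGE/component:latest"
```

However your instance builds images, the end result should be a pushed image that your verifier:test_component job can reference.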

Tangram Pro Registry Access

The Verifier image is stored in the Tangram Pro registry: you'll need to give GitLab CI access to this registry so that it can pull the image. Granting this access isn't difficult and can be done in a number of different ways, as listed in these GitLab docs. The simplest way which requires no administrative environment changes to GitLab is using a CI/CD variable. We'll repeat these steps here.

Start by getting a registry token:

  • In Tangram Pro, click on your profile icon and then on User Settings
  • Go to the API Keys tab, and create a new API key which has permissions to the Registry Scope
  • Copy the token to your clipboard

Next, you'll need to turn this token into Docker config. This can be done in a couple of different ways, as discussed in the links above, but the following is the quickest and won't interfere with Docker on your local machine. Replace the <USERNAME> and <TOKEN> sections in this command with your Tangram Pro username and the token you copied; it should work in a standard Unix-compatible shell like bash or sh:

printf "<USERNAME>:<TOKEN>" | openssl base64 -A

This command base64-encodes your username and token.

Now, create a new CI/CD variable in the settings of your GitLab repo. The name of the variable must be DOCKER_AUTH_CONFIG. Paste in the following content, but don't save yet:

{"auths":{"<TPRO URL>":{"auth":"<ENCODED>"}}}
  • Replace <TPRO URL> with the URL of the Tangram Pro instance you're using. This URL should not have any prefix (i.e., don't include https:// on the front of the URL: it should look like pro.tangramflex.io)
  • Replace <ENCODED> with the output of the printf | openssl command you ran previously, being careful not to copy a newline at the end of the generated code.
  • We recommend that you set the variable visibility to Masked and hidden
  • Uncheck Expand variable reference
  • You may find it convenient to uncheck Protect variable if you want to run Verifier in CI on unprotected branches and tags (for example, in merge request pipelines)

At this point, go ahead and save the variable.
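Putting those steps together, the whole variable value can be produced in one short shell session. The username (user), token (token), and hostname below are placeholders for your own values:

```shell
# Build the DOCKER_AUTH_CONFIG value from placeholder credentials.
ENCODED=$(printf "user:token" | openssl base64 -A)
AUTH_CONFIG=$(printf '{"auths":{"pro.tangramflex.io":{"auth":"%s"}}}' "$ENCODED")
echo "$AUTH_CONFIG"
# → {"auths":{"pro.tangramflex.io":{"auth":"dXNlcjp0b2tlbg=="}}}
```

Whatever you paste into GitLab should be exactly this one-line JSON, with no trailing newline copied along.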

Tangram Pro API Key

As mentioned previously, the CI example uses API calls to Tangram Pro in order to pull Verifier config files which are needed at runtime. Using these API calls allows you to modify your sequence tests in Tangram Pro and automatically use the latest tests from GitLab without needing to manually copy new files. However, these API calls do require an API key.

To create an API key:

  • Generate an API key with API Scope
    • Click on your profile icon in the top right, go to User Settings > API Keys
  • Create a new variable in your GitLab repo's CI/CD settings for that API key
    • Using the variable name TPRO_API_KEY means our provided CI config file needs fewer changes from you
  • If you used a variable name other than TPRO_API_KEY, update the generated CI file to reference your variable instead
  • We recommend Masked and hidden
  • Leave Expand variable reference enabled
  • You may again find it convenient to uncheck Protect variable

JUnit Test Results

The provided CI configuration triggers a child pipeline, which parses the Verifier results and then displays them in a Test tab. To view these results:

  • Once the Verifier jobs have completed, view them in the Pipeline panel of GitLab
  • Find the verifier:trigger_junit_results job, and you'll see an arrow pointing to the triggered pipeline. Click on that pipeline
  • On the triggered pipeline, click on the Test panel to see results
    • Results will be separated by run: if you've configured multiple reruns in the Tangram Pro test configuration, each run will appear here
    • Inside each run, pass/fail results will be shown for each sequence. Viewing details for a failed test will show details about why the component failed the test

Component Containerization

To use Verifier in CI, you must create a new image build job in your CI pipeline. This job uses the provided Dockerfile (modified to be based on your existing component image) to wrap your component image, and the component process, with our watcher script.

The watcher.sh script is a required wrapper which allows Verifier to send signals to, and receive signals from, your component process without requiring any code changes. Specifically, it uses file-based signaling to manage the component lifecycle:

  • The startfile tells watcher.sh to start your component process
  • The killfile tells watcher.sh to end your component process
  • The watchfile tells Verifier when your component process ends

Managing component lifecycle allows Verifier to ensure that every sequence test is conducted on equal footing, with a fresh component state.
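To make this concrete, here is a minimal self-contained sketch of the same file-based handshake; the temporary directory and the no-op standing in for the component are illustrative, not the real paths or process:

```shell
# Demo of the startfile/watchfile handshake with a dummy component.
DIR=$(mktemp -d)
STARTFILE="$DIR/startfile"
WATCHFILE="$DIR/watchfile"
: > "$STARTFILE"
: > "$WATCHFILE"

# Stand-in watcher: block until the startfile is written, "run" a dummy
# component, then mark the watchfile once the component exits.
(
    while [ ! -s "$STARTFILE" ]; do sleep 1; done
    true                        # dummy component process; exits immediately
    echo "done" > "$WATCHFILE"
) &

echo "start" > "$STARTFILE"     # what Verifier does to launch the component
wait                            # let the background watcher finish
cat "$WATCHFILE"                # what Verifier reads to learn the component exited
# → done
```

The real watcher.sh below follows this same loop, with the killfile added so Verifier can also stop a still-running component between sequence tests.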

You can find the example Dockerfile to wrap your component image with the watcher script below.

Watcher Script

The watcher.sh script WILL require a modification for it to work: specifically, you'll need to modify the run_on_start function to start your component process, and the wait_for_kill function to kill your component process. Without these changes, the script will not work as expected.

You can find this file below.

Transport Network

GitLab supports running Services on jobs, which can be used to run any transport proxies your component may need (such as Kafka, ActiveMQ, NATS, etc.). This transport proxy should be added as a service to the verifier:test_component job, filling in the options highlighted for you.

Verifier needs to know how to talk to your proxy: set the transport's IP/hostname on the job service using the alias attribute, and use the same value for the transport IP/hostname setting when configuring your transport in Tangram Pro.
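For example, a transport proxy service might be attached to the job as follows; the image name is a placeholder, and transport-proxy stands in for whatever hostname you configured in Tangram Pro:

```yaml
# Sketch: attach your transport proxy as a job service.
verifier:test_component:
  services:
    - name: nats:latest        # placeholder; use your transport proxy image
      alias: transport-proxy   # must match the transport IP/hostname set in Tangram Pro
```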

Viewing Results

Results are available in three ways:

  • View the results.json file produced by the verifier:test_component job
    • This contains all data, but is not particularly suited to human viewing
  • View the triggered pipeline Test tab as discussed earlier
  • View the results in Tangram Pro, which are pushed automatically via the API
    • We recommend this option: results appear in the Verify panel of the project from which you configured the Verifier in CI test

Example files

Example Dockerfile

See above for details on this file.

Dockerfile

# The image will be based on your own component image
FROM <YOUR COMPONENT IMAGE>

# This watcher script will handle the execution & lifecycle of your component process
COPY --link ./watcher.sh /watcher.sh
RUN chmod +x /watcher.sh

ENTRYPOINT ["/bin/sh", "/watcher.sh"]

Example Watcher Script

See above for details on this file.

watcher.sh

#!/usr/bin/env sh

STARTFILE=/builds/startfile
WATCHFILE=/builds/watchfile
KILLFILE=/builds/killfile

LOGFILE=/builds/component.log

run_on_start() {
    # Modify this line to run your component process
    /runway/test/component/component
    echo "Component exited" | tee -a $LOGFILE
    echo "done" > $WATCHFILE

    # kill this shell fork
    exit 0
}

wait_for_start() {
    echo "=============================" >> $LOGFILE
    echo "Waiting for start..." | tee -a $LOGFILE
    while [ ! -s "$STARTFILE" ]; do sleep 1; done
    echo "Received start" | tee -a $LOGFILE

    # Clear the start signal file (':' truncates portably; 'echo -n' is not
    # guaranteed under plain sh)
    : > "$STARTFILE"
}

list_descendants() {
  local children=$(ps -o pid= --ppid "$1")

  for pid in $children
  do
    list_descendants "$pid"
  done

  echo "$children"
}

wait_for_kill() {
    echo "Waiting for kill signal..." | tee -a $LOGFILE
    while [ ! -s "$KILLFILE" ] && [ ! -s "$WATCHFILE" ]; do sleep 0.1; done
    if [ -s "$KILLFILE" ]; then
        echo "Received kill signal" | tee -a $LOGFILE
        kill $(list_descendants $$)
    fi

    : > "$KILLFILE"
    : > "$WATCHFILE"
}


# Create & clear signal files to start (':' truncates portably under plain sh)
: > "$STARTFILE"
: > "$WATCHFILE"
: > "$KILLFILE"

chmod 777 $STARTFILE
chmod 777 $WATCHFILE
chmod 777 $KILLFILE


while true; do
    echo "Starting loop..." | tee -a $LOGFILE

    wait_for_start
    run_on_start &
    wait_for_kill

    echo "Exiting loop" | tee -a $LOGFILE
done