Verifier in CI
Verifier in CI gives you control over where and when to run integration tests against your containerized software. Learn more about the Verifier toolkit.
Verifier in CI allows you to take a containerized version of the Verifier testing tool and run it in your own environment, such as in GitLab CI. Today, Tangram Pro supports exporting an example GitLab CI file which can be integrated into your existing repository's CI file. You are welcome to use Verifier in CI in other places, with the understanding that we currently don't have formal support for it in Tangram Pro. Reach out to us through the support links at the bottom of the page if you have questions or need assistance with Verifier.
To use Verifier in CI, you'll need to have the Verifier role on your user account.
What You'll Need
To run Verifier in CI, you'll need each of the following:
- Access to Tangram Pro
  - For configuring your component & transport mechanism
  - For obtaining config files via the API
- Your containerized component, including the modified watcher script
- The appropriate transport proxy, if needed, for communication with your component
- A GitLab CI pipeline which includes the provided partial CI file
  - You'll need access to the CI/CD settings for the repo under test to set a variable
- Network access at CI runtime
  - GitLab will need to pull the Verifier image from Tangram Pro
We'll discuss each of these below.
Getting Started
Tangram Pro
Test configuration happens inside Tangram Pro: Verifier needs your input in order to understand how to talk to the component and what to test.
Tangram Pro should be used for the following:
- Configuring the transport proxy settings (such as hostname and port)
- Setting up the component's message interface
- Specifying message sequences that the component should be tested on
- Writing YAML field constraints for messages in those sequences
After completing component configuration in Tangram Pro, you can export a set of example files to assist in setting up Verifier tests in CI. These files also pull configuration files using the Tangram Pro API; the included partial CI file handles this process for you.
GitLab CI
Using Verifier in CI is possible in any containerized environment, but we provide an example configuration specifically for GitLab CI, which is very common in self-hosted environments. The example config will be generated for you when you open the Setup Tests in CI panel from the Verify mode in a Tangram Pro project.
This partial CI example includes:
- Pulling configuration files for your project directly from Tangram Pro via the API
- Running Verifier against your component
- Generating JUnit test results which can be viewed within GitLab
- Pushing Verifier results to Tangram Pro, which provides a better viewing experience for test results
The partial CI example does NOT include an example image build: the ways that builds can be done here will vary depending on how your GitLab administrators have configured your instance.
You can download this partial GitLab CI file by clicking on the Setup Tests in CI button in the Verify mode of a Tangram Pro project, and then clicking on the Download your gitlab-ci file button.
Tangram Pro Registry Access
The Verifier image is stored in the Tangram Pro registry: you'll need to give GitLab CI access to this registry so that it can pull the image. Granting this access isn't difficult and can be done in a number of different ways, as listed in these GitLab docs. The simplest way which requires no administrative environment changes to GitLab is using a CI/CD variable. We'll repeat these steps here.
Start by getting a registry token:
- In Tangram Pro, click on your profile icon and then on User Settings
- Go to the API Keys tab, and create a new API key which has permissions to the Registry Scope
- Copy the token to your clipboard
Next, you'll need to turn this token into Docker config. This can be done in a couple different ways as discussed in the links above, but this is the quickest way and won't interfere with Docker on your local machine. You'll need to use your Tangram Pro username and replace the <USERNAME> and <TOKEN> sections in this command, which should work in a standard Unix-compatible shell like bash or sh:
printf "<USERNAME>:<TOKEN>" | base64
If you don't have base64 installed but you do have openssl installed, you can try this command instead:
printf "<USERNAME>:<TOKEN>" | openssl base64 -A
If you still run into issues, try following the GitLab docs on this topic.
This command encodes your username and token into the format Docker expects for registry authentication.
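If you want to confirm the encoding before using it, you can round-trip it; a quick check (on some systems the decode flag is -D rather than -d):

printf "<USERNAME>:<TOKEN>" | base64 | base64 -d

The decoded output should exactly match the <USERNAME>:<TOKEN> string you started with.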
Now, create a new CI/CD variable in the settings of your GitLab repo. The key of the variable must be DOCKER_AUTH_CONFIG. Paste in the following content, but don't save yet:
{"auths":{"<TPRO URL>":{"auth":"<ENCODED>"}}}
- Replace <TPRO URL> with the URL of the Tangram Pro instance you're using. This URL should not have any prefix (i.e., don't include https:// on the front of the URL: it should look something like pro.tangramflex.io)
- Replace <ENCODED> with the output of the printf | base64 command you ran previously, being careful not to copy a newline at the end of the generated code (see the sketch after this list)
- You may find it convenient to uncheck Protect variable if you want to run Verifier in CI on unprotected branches and tags (for example, in merge request pipelines)
- Uncheck Expand variable reference
- We recommend that you set the variable visibility to Masked and hidden
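As a concrete sketch, assuming a hypothetical username jdoe, token tpro_abc123, and the instance pro.tangramflex.io, the entire variable value can be produced in one shell pipeline:

# Hypothetical values: substitute your own username, token, and instance URL
ENCODED=$(printf "jdoe:tpro_abc123" | base64)
printf '{"auths":{"pro.tangramflex.io":{"auth":"%s"}}}\n' "$ENCODED"

Command substitution strips the trailing newline from the base64 output, so the encoded value is safe to embed in the JSON.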
If you run Verifier in CI and get a 401 Unauthorized error in the pipeline because GitLab can't pull the Verifier image from Tangram Pro, it's probably because something went wrong with this step.
- Double check that you're creating an API key with Registry scope in Tangram Pro
- Check the spelling of your username in the printf | base64 command
Go ahead and save the variable now.
Tangram Pro Authentication
As mentioned previously, the CI example uses API calls to Tangram Pro in order to pull Verifier config files which are needed at runtime. Using these API calls allows you to modify your sequence tests in Tangram Pro and automatically use the latest tests from GitLab without needing to manually copy new files. However, these API calls do require an API key.
You'll need to create two CI/CD Variables in GitLab:
- TPRO_USERNAME
- TPRO_API_KEY
To create an API key:
- In Tangram Pro, generate an API key with API Scope
  - Click on your profile icon in the top right, go to User Settings > API Keys, and create a new key
- In GitLab, create a new variable under your repository's CI/CD settings
  - Using the variable key TPRO_API_KEY will require fewer changes from you to our provided CI config file
  - Copy the Token value from the key you created in Tangram Pro into the value of the GitLab variable
  - You do not need to repeat the Docker auth settings from above here: the variable value should be only the API token
  - If you used a variable key other than TPRO_API_KEY, you'll need to update that variable in the generated CI file to use your variable
- We recommend setting the variable's visibility to Masked and hidden
- Leave Expand variable reference enabled
- You may again find it convenient to uncheck Protect variable for the same reasons as discussed above
You should repeat this process for a new variable with a key of TPRO_USERNAME:
- The variable value should contain the username for the Tangram Pro account for which you've created an API key
  - It's critical that you use the username and not the email address associated with the account
- It's less important for this value to be Masked and hidden or protected
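For context, here is a purely hypothetical sketch of the kind of authenticated call the generated verifier:pull_api job makes with these variables; the endpoint path and authentication style below are illustrative placeholders, not the actual Tangram Pro API (the generated CI file contains the real call):

# Hypothetical sketch only: <CONFIG ENDPOINT> and the header format are placeholders
curl --fail \
  --header "Authorization: Bearer ${TPRO_API_KEY}" \
  "https://pro.tangramflex.io/<CONFIG ENDPOINT>" \
  --output verifier-config.json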
Test Configuration
You may be used to Verifier options like Test Runs or Retries in Tangram Pro. When running Verifier in CI, you can control these same parameters using environment variables that are included (along with default values) in the generated GitLab CI file, as shown in the sketch after this list.
- VERIFIER_RUNS: The number of times Verifier will test all sequences (equivalent to Test Runs Total in Tangram Pro)
  - Because Verifier will generate randomized message fields (within your applied constraints) each time it performs a run, this can be a useful way to test a component against a variety of input values in a single Verifier execution
- VERIFIER_RETRIES: The number of attempts Verifier has to get each sequence to pass (equivalent to Max Path Retries in Tangram Pro)
  - This is useful when you have a component that may behave randomly in some way. Set the number of attempts high enough that the component has a very high probability of producing the expected sequence in at least one attempt
- VERIFIER_STARTUP_MS: The amount of time (in milliseconds) that Verifier should give your component to start
- VERIFIER_TIMEOUT_MS: The maximum delay (in milliseconds) before Verifier determines that the component under test is not going to send a response
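A minimal sketch of overriding these variables in shell form; the specific values here are illustrative, not the shipped defaults:

# Illustrative values; adjust to suit your component and test suite
export VERIFIER_RUNS=3            # test all sequences three times with freshly randomized fields
export VERIFIER_RETRIES=5         # give each sequence up to five attempts to pass
export VERIFIER_STARTUP_MS=2000   # allow the component 2 seconds to start
export VERIFIER_TIMEOUT_MS=10000  # wait up to 10 seconds for a component response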
JUnit Test Results
The provided CI will trigger another pipeline, which parses the Verifier results and then displays them in a Test tab. To view these results:
- Once the Verifier jobs have completed, view them in the Pipeline panel of GitLab
- Find the verifier:trigger_junit_results job, and you'll see an arrow pointing to the triggered pipeline. Click on that pipeline
- On the triggered pipeline, click on the Test panel to see results
  - Results will be separated by run: if you've configured multiple runs in the Tangram Pro test configuration, each run will appear here
  - Inside each run, pass/fail results will be shown for each sequence. Viewing details for a failed test will show why the component failed the test
Component Containerization
To use Verifier in CI, you must create a new image build job in your CI pipeline that uses the provided Dockerfile (modified to base it off of your existing component image) to wrap your component image (and component process) with our watcher script.
The watcher.sh script is a required wrapper which allows Verifier to send signals to and receive signals from your component process without requiring any code changes. Specifically, it uses file-based signaling to handle the component lifecycle:
- The startfile tells watcher.sh to start your component process
- The killfile tells watcher.sh to end your component process
- The watchfile tells Verifier if your component process ends
Managing component lifecycle allows Verifier to ensure that every sequence test is conducted on equal footing, with a fresh component state.
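As a minimal sketch of that signaling (grounded in the watcher.sh script shown later, which treats any non-empty signal file as a trigger and writes "done" to the watchfile when the component exits):

# Illustrative only: Verifier performs these writes for you during a test run
echo start > /builds/startfile   # non-empty startfile: watcher.sh launches the component
echo kill > /builds/killfile     # non-empty killfile: watcher.sh stops the component
cat /builds/watchfile            # contains "done" once the component process has exited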
You can find the example Dockerfile to wrap your component image with the watcher script below.
On the verifier:test_component job of the generated GitLab CI file, you'll see a placeholder for your watcher-wrapped component image as a service. You'll need to replace that with your component image for testing to work. Refer to the GitLab documentation for more information about setting up services if you need to pass environment variables to your component, set an alias hostname to connect to the component, or provide other details.
Watcher Script
The watcher script is compatible with (at least) sh and bash. You may execute it with bash (or another shell) if you need extra functionality not provided by sh. The core function of this script is to allow Verifier to manage the lifecycle of your component through three files:
- /builds/startfile: Allows Verifier to start the component process
- /builds/watchfile: Watches and notifies Verifier if the component dies
- /builds/killfile: Allows Verifier to forcefully stop the component process
The watcher.sh script WILL require modification for it to work: specifically, you'll need to modify the run_on_start function to start your component process, and the wait_for_kill function to kill your component process. Without these changes, the script will not work as expected. You can find this file below.
Example 1: Perhaps your component is Java-based. In run_on_start you might replace <YOUR PROCESS EXECUTABLE> with java -jar component.jar. You would then need to modify wait_for_kill to kill that process, for example using pkill java.
Example 2: Your component may be a binary that you run with /home/user/component --config ./config.yaml. You would replace <YOUR PROCESS EXECUTABLE> in the run_on_start function with that startup command, and you could then run pkill component in wait_for_kill to kill the component process.
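As a concrete sketch of the Example 1 changes (assuming the Java component from that example), the two modified functions in watcher.sh would look something like this:

run_on_start() {
    # Modified: launch the Java component instead of the placeholder
    java -jar component.jar 2>&1 | tee -a $LOGFILE
    echo "Component exited" | tee -a $LOGFILE
    echo "done" > $WATCHFILE
    # kill this shell fork
    exit 0
}

wait_for_kill() {
    echo "Waiting for kill signal..." | tee -a $LOGFILE
    while [ ! -s "$KILLFILE" ] && [ ! -s "$WATCHFILE" ]; do sleep 0.1; done
    if [ -s "$KILLFILE" ]; then
        echo "Received kill signal" | tee -a $LOGFILE
        # Modified: kill the Java process started by run_on_start
        pkill java
    fi
    echo -n > $KILLFILE
    echo -n > $WATCHFILE
}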
If you would like to find another way to manage the component lifecycle that fits this interface, you're welcome to replace watcher.sh with your own implementation.
Transport Network
GitLab supports running Services on jobs, which can be used to run any transport proxies your component may need (such as Kafka, ActiveMQ, NATS, etc.). This transport proxy should be added as a service to the verifier:test_component job, filling in the options highlighted for you.
Verifier needs to know how to talk to your proxy: ensure that you set the transport's IP/hostname on the job service using the alias attribute, and match this value in the transport IP/hostname setting when configuring your transport in Tangram Pro.
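If connections fail at runtime, a quick reachability check from a job script can confirm the alias and port are correct; a minimal sketch, assuming a hypothetical service alias of activemq on port 61616 and that nc is available in the job image:

# Hypothetical alias/port; substitute the values you set on the job service
nc -z activemq 61616 && echo "transport reachable" || echo "transport unreachable"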
Verifier (and Verifier in CI) needs to connect to your component in order to send and receive messages while testing message sequences. You can configure these transport settings using the Transport panel in your Tangram Pro project's Design mode.
You can learn more about the available settings for each transport type in the Transport Configuration section.
Making Sequence & Message Set Changes
Verifier in CI uses the Tangram Pro API key to automatically pull the latest Verifier configuration for your project from Tangram Pro when running in the GitLab pipeline (this occurs in the verifier:pull_api job if you're using our generated GitLab CI file).
To change the sequences that your component gets tested against, the transport configuration, or the messages that are used in the test sequences, use Tangram Pro to modify the Flex package or sequences just as you normally would for Verifier. The next time Verifier in CI runs, it will use the latest configuration from Tangram Pro.
Viewing Results
Results are available in three ways:
- View the results.json file produced by the verifier:test_component job
  - This contains all data, but is not particularly suited to human viewing
- View the triggered pipeline's Test tab as discussed earlier
- View the results in Tangram Pro, which are automatically pushed via the API
  - We recommend viewing the results in Tangram Pro; they can be found in the Verify panel of the project from which you configured the Verifier in CI test
Example files
Example Dockerfile
See above for details on this file.
Dockerfile
# The image will be based off of your own component image
FROM <YOUR COMPONENT IMAGE>
# This watcher script will handle the execution & lifecycle of your component process
COPY --link ./watcher.sh /watcher.sh
RUN chmod +x /watcher.sh
ENTRYPOINT ["/bin/sh", "/watcher.sh"]
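A minimal usage sketch for this Dockerfile (the image tag is hypothetical; run from the directory containing the Dockerfile and watcher.sh):

# Build the watcher-wrapped image locally or in your CI build job
docker build -t my-wrapped-component:latest .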
Example Watcher Script
See above for details on this file.
watcher.sh
#!/usr/bin/env sh
STARTFILE=/builds/startfile
WATCHFILE=/builds/watchfile
KILLFILE=/builds/killfile
LOGFILE=/builds/component.log
run_on_start() {
# Modify this line to run your component process
<YOUR PROCESS EXECUTABLE> 2>&1 | tee -a $LOGFILE
echo "Component exited" | tee -a $LOGFILE
echo "done" > $WATCHFILE
# kill this shell fork
exit 0
}
wait_for_start() {
echo "=============================" >> $LOGFILE
echo "Waiting for start..." | tee -a $LOGFILE
while [ ! -s "$STARTFILE" ]; do sleep 1; done
echo "Received start" | tee -a $LOGFILE
# Clear the start signal file
echo -n > $STARTFILE
}
wait_for_kill() {
echo "Waiting for kill signal..." | tee -a $LOGFILE
while [ ! -s "$KILLFILE" ] && [ ! -s "$WATCHFILE" ]; do sleep 0.1; done
if [ -s "$KILLFILE" ]; then
echo "Received kill signal" | tee -a $LOGFILE
pkill <YOUR PROCESS EXECUTABLE>
fi
echo -n > $KILLFILE
echo -n > $WATCHFILE
}
# Create & clear signal files to start
echo -n > $STARTFILE
echo -n > $WATCHFILE
echo -n > $KILLFILE
chmod 777 $STARTFILE
chmod 777 $WATCHFILE
chmod 777 $KILLFILE
while true; do
echo "Starting loop..." | tee -a $LOGFILE
wait_for_start
run_on_start &
wait_for_kill
echo "Exiting loop" | tee -a $LOGFILE
done
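Before pushing to CI, you can smoke-test your watcher.sh modifications locally; a minimal sketch, assuming /builds is writable on your machine:

# Run the watcher in the background, then exercise the signal files
mkdir -p /builds
sh ./watcher.sh &
echo start > /builds/startfile    # watcher should launch your component
sleep 5                           # let the component run briefly
echo kill > /builds/killfile      # watcher should stop the component
cat /builds/component.log         # review the captured component output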
Transport Configuration
Verifier in CI supports a fixed set of transports for communicating with your component. Each transport type has specific configuration options that need to be set in Tangram Pro's Transport panel. Below are the available transports and their configuration options.
Queue-Based Transports
ActiveMQ
ActiveMQ transport uses the publish/subscribe pattern and requires the following configuration:
Required Settings:
- Broker URI: The URI for configuring the broker, typically in the format <IP>:<port> with any additional options
  - Example: failover:(tcp://127.0.0.1:61616)?timeout=1000
Optional Settings:
- Auto Acknowledge Broker: When enabled, automatically acknowledges data from the broker
- Convert bytes to text: Converts bytes to text when sending/receiving
- Max Sending Size: Maximum allowed message size for sending (in bytes)
- Max Receiving Size: Maximum allowed message size for receiving (in bytes)
- Startup Wait in ms: Time in milliseconds to wait for network initialization
TLS/SSL Configuration (Optional):
- Keystore Value Path: Path to the keystore file
- Keystore Password: Password for the keystore
- Trust Cert File Path: Path to the trust store file
- Trust Cert Password: Password for the trust store
Kafka
Kafka transport uses the publish/subscribe pattern with the following settings:
Required Settings:
- Broker URL: URL for the broker (format: <IP>:<port>)
- Component Group ID: Group ID for the component
Optional Settings:
- Message Partition: Partition to use for messages
- Max Sending Size: Maximum message size for sending (in bytes)
- Max Receiving Size: Maximum message size for receiving (in bytes)
- Startup Wait in ms: Network initialization wait time (in milliseconds)
MQTT
MQTT transport uses the publish/subscribe pattern and requires configuration for both publishing and subscribing:
Required Settings:
- Pub Socket: Publishing socket configuration
  - IP/Hostname: IP address to connect/bind to
  - Port: Port number
  - Socket Type: Either Bind (server) or Connect (client)
- Sub Socket: Subscribing socket configuration (same fields as Pub Socket)
Optional Settings:
- Max Sending Size: Maximum message size for sending (in bytes)
- Max Receiving Size: Maximum message size for receiving (in bytes)
- Startup Wait in ms: Network initialization wait time (in milliseconds)
TLS Configuration (Optional):
- Certificate authority file path: Path to certificate authority certificate files
- PEM-encoded certificate file path: Path to PEM-encoded certificate file
- PEM-encoded key file path: Path to PEM-encoded key file
NATS
NATS transport uses the publish/subscribe pattern with extensive configuration options:
Optional Settings:
- URL: NATS server URL (format: nats://<IP>:<port>)
- Max Sending Size: Maximum message size for sending (in bytes)
- Max Receiving Size: Maximum message size for receiving (in bytes)
- Startup Wait in ms: Network initialization wait time (in milliseconds)
Authentication (Optional):
- NATS Server Username: Username for the NATS server
- NATS Server Password: Password for the NATS server
- NKey for the NATS server: NKey for the NATS server
- NKey seed: Seed for the NKey
TLS Configuration (Optional):
- PEM-encoded key file path: Path to TLS key file
- PEM-encoded certificate file path: Path to TLS certificate file
RabbitMQ
RabbitMQ transport uses the publish/subscribe pattern with the following configuration:
Required Settings:
- URL: Broker URL (format: amqp://<IP>:<port>)
- Virtual Channel Number: Virtual channel number
Optional Settings:
- Max Sending Size: Maximum message size for sending (in bytes)
- Max Receiving Size: Maximum message size for receiving (in bytes)
- Startup Wait in ms: Network initialization wait time (in milliseconds)
SSL Configuration (Optional):
- PEM-formatted certificate: Path to PEM-formatted certificate
- Certificate Authority Cert File: Path to certificate authority cert file
- PEM-formatted secret key: Path to PEM-formatted secret key
- Peer verification: Enable peer verification
- Hostname verification: Enable hostname verification
ZeroMQ
ZeroMQ transport uses the publish/subscribe pattern and requires configuration for both publishing and subscribing:
Required Settings:
- External Proxy: Set to true if using an external proxy
- Pub Socket: Publishing socket configuration
  - IP/Hostname: IP address to connect/bind to
  - Port: Port number
  - Socket Type: Either Bind (server) or Connect (client)
- Sub Socket: Subscribing socket configuration (same fields as Pub Socket)
Optional Settings:
- Max Sending Size: Maximum message size for sending (in bytes)
- Max Receiving Size: Maximum message size for receiving (in bytes)
- Startup Wait in ms: Network initialization wait time (in milliseconds)
Curve Authentication (Optional):
- Public Key File: Path to public key file
- Secret Key File: Path to secret key file
- Server IP: Curve authentication server IP
- Server Key: Auth server public key
Packet-Based Transports
TCP
TCP transport creates a direct connection with the following configuration:
Required Settings:
- IP/Hostname: IP address to bind/connect to
- Port: Port number
- Socket Type: Either Bind (server) or Connect (client)
Optional Settings:
- Max Message Size: Maximum message buffer size (in bytes)
- Startup Wait in ms: Network initialization wait time (in milliseconds)
- Expect Topic: Expect a topic string before the data packet
UDP
UDP transport provides connectionless communication with the following configuration:
Required Settings:
- IP/Hostname: IP address to bind/connect to
- Port: Port number
- Socket Type: Either Bind (server) or Connect (client)
Optional Settings:
- Max Message Size: Maximum message buffer size (in bytes)
- Startup Wait in ms: Network initialization wait time (in milliseconds)
- Expect Topic: Expect a topic string before the data packet
Note: For both TCP and UDP transports, when Expect Topic is set to true, you must provide a topic in the message metadata to specify how messages will be sent and identified.