Welcome to IceCI’s documentation!¶
Introduction¶
Concept¶
IceCI is a continuous integration system designed for Kubernetes from the ground up. Running in the cluster, it operates on Kubernetes primitives while providing a layer of abstraction to make creating and maintaining pipelines more accessible. It also provides a web UI for visualization and easy monitoring of pipeline runs.
First steps¶
New to IceCI and want to find out what it’s all about? Our quickstart tutorial will guide you through installation and basic pipeline configuration. Start running builds in less than 15 minutes!
Getting started: Quickstart
Installation and configuration¶
Learn more about IceCI’s components and the way they’re deployed and configured in the cluster.
How to install IceCI: Installation
Advanced configuration options: Configuration
Pipeline reference¶
Learn about the structure of the pipeline configuration file, its building blocks and available options.
How to build pipeline files: Building pipelines
Pipeline config file reference: Pipeline structure
Environment variables reference: Environment
Quickstart¶
This tutorial will help you install IceCI in your Kubernetes cluster and set up your first pipeline using an example repository in less than 15 minutes. Let’s start integrating!
Note
To make this quickstart guide actually quick we’ll fork an example public repository, so no secret configuration will be needed.
Prerequisites¶
Before installing IceCI you’ll need to have access to a Kubernetes cluster. If you’re using an existing cluster in the cloud or on-premise, you’re pretty much good to go. You can also launch IceCI in a local cluster using Minikube or K3s with k3sup. In that case just follow the installation instructions provided by the respective documentation.
Note
When using Minikube you will need to enable the ingress addon to be able to reach the UI. It can be enabled with minikube addons enable ingress. For more information please refer to the documentation.
Installing and running IceCI¶
Prepare an IceCI namespace¶
Note
This step is optional - if you’d rather run IceCI in the default namespace, go ahead and skip to the next step.
Create a namespace in which IceCI will operate and update your kubectl config to use that namespace.
kubectl create ns iceci
kubectl config set-context --current --namespace=iceci
Installing IceCI¶
You can use a handy all-in-one manifest to install IceCI. Applying it in your cluster will set up all the necessary objects to run the applications.
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/all_in_one.yaml
Once all the applications are running, you’re all ready to go.
Note
Apart from the all-in-one file, all the manifests can be found as separate files in the IceCI GitHub repository.
Configuring the repository¶
Now you can access the UI through your browser and add a repository to start running pipelines. To do that, simply click the + button at the top of the left navigation bar and create a new repository using the form.
Note
When using Minikube, the default IP address of the VM is 192.168.99.100. You can run minikube ip to make sure that it’s the address you should be using.

For the purposes of this guide, we’ll fork the quickstart example repository. After forking this repository, all you need to do is copy the clone URL and paste it into the Repository URL field in the form.
Note
The example repository is also a template, so instead of forking you can use it to create a new one!
Once the repository is added to IceCI you’re all set. IceCI will react to Git events, so all that’s left is to push a commit to trigger a new pipeline. Try it for yourself!
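A quick way to try it without touching any files is an empty commit (this assumes your clone of the fork uses the default origin remote and the master branch):

```shell
# Any push generates a Git event that IceCI picks up on its next scan.
# An empty commit triggers a build without changing any files.
git commit --allow-empty -m "Trigger IceCI build"
git push origin master
```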

Next steps¶
As you can see, the example pipeline is very simple - just to get you acquainted with the structure of the config file. For more information about the configuration file - check out the Pipelines section of the documentation. We’ve also prepared a small Python application along with a ready pipeline which you can find on GitHub.
Installation¶
IceCI’s installation consists of multiple components. This guide will show you how you can customize your installation to best suit your needs.
Important
The provided yaml files are suitable only for single-node installations. If you want to deploy IceCI in a multi-node cluster, the storage configuration will need to be updated.
Ice operator¶
Custom resources¶
IceCI uses a range of custom Kubernetes resources. They are required by the operator to properly handle repositories and run builds.
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/crds/iceci.io_gitpipelines_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/crds/iceci.io_gitwatchers_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/crds/icekube.io_backingservices_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/crds/icekube.io_dockerbuilds_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/crds/icekube.io_gitclones_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/crds/icekube.io_pipelines_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/crds/icekube.io_tasks_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/crds/icekube.io_workspaces_crd.yaml
Note
You can find out more about Kubernetes custom resources in the documentation
Service accounts and roles¶
The operator uses two separate service accounts - one is used by the operator itself, the other one is used by the pipeline steps. Those service accounts are required for the operator to work properly.
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/operator/service_account.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/operator/role.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/operator/role_binding.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/operator/step_service_account.yaml
Storage¶
The operator uses two types of storage - dynamic and shared storage. By default, both of the persistent volumes are of hostPath type. The storage is required by the operator.
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/operator/storage-pvc.yaml
Important
As mentioned, the hostPath storage type will be suitable only for single node clusters and will have to be changed for multi node installations.
The operator¶
The operator is the heart of IceCI, responsible for handling the creation and lifecycle of all the objects related to running the pipelines.
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/operator/operator.yaml
DB Sync¶
PostgreSQL¶
A PostgreSQL database is used to store all the data extracted by the sync application.
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/app/postgres.yaml
Sync¶
The sync application monitors the cluster for object changes (like new object creation or status updates) - as well as logs - and stores them in the database for persistence. Installing the application is optional unless you’re using the API and UI.
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/app/sync.yaml
API and UI¶
The API and UI provide a web interface for interacting with the operator. The applications allow for creating and monitoring pipelines, providing access to information about the pipelines themselves, as well as specifics regarding builds. Installing the applications is optional.
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/app/api.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/app/ui.yaml
kubectl apply -f https://raw.githubusercontent.com/IceCI/IceCI/master/manifests/app/ingress.yaml
Important
The API pulls information from the database, hence the sync application is required for the web applications to function correctly.
Further reading¶
For more information about customizing the configuration of particular apps, please check out the Configuration page.
Configuration¶
This guide will show you how you can customize and configure IceCI’s installation for your cluster.
Note
We’ll focus on customizing the deploy by modifying the YAML manifests provided in IceCI’s repository.
Before we start, clone IceCI’s repository so you can make the modifications locally.
git clone https://github.com/IceCI/IceCI.git
Storage config¶
The storage configuration manifests can be found in manifests/operator/pvc-storage.yaml. As you can see, two types of storage are defined - static and dynamic storage. In the provided installation, the volumes are configured as 1Gi host paths. As mentioned on the installation page, this works fine for single-node clusters. You might want to change the type and the requested disk space, though, depending on the kind of setup you’ll be deploying the application in.
The names of the storage PVCs are defined in the environment variables in manifests/operator/operator.yaml - you can create the PVCs with different names, but remember to update them in the operator configuration.
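If you’d rather not use the provided hostPath volumes, a replacement PVC might look like the sketch below. The claim name, storage class and size are illustrative, not the actual values from the manifests - check manifests/operator/operator.yaml for the names the operator expects:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iceci-dynamic-storage   # hypothetical name - must match the operator's configuration
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard    # replaces hostPath-backed storage for multi-node setups
  resources:
    requests:
      storage: 5Gi              # illustrative - size to your workload
```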
Important
The operator isn’t the only application making use of those PVCs - keep in mind that the sync app uses them as well! You can find it’s manifest in manifests/app/sync.yaml
.
Database config¶
The default IceCI manifests deploy a PostgreSQL instance for persisting pipeline and build data - it can be found in manifests/app/postgres.yaml. If you already have a database up and running that you’d like to use with IceCI, you can simply skip the creation of that DB and configure the applications to use your own. All you have to do is configure the following environment variables in the API and sync apps. Their manifests can be found in manifests/app/api.yaml and manifests/app/sync.yaml, respectively.
- name: ICECI_DB_USER
  value: <db_user>
- name: ICECI_DB_HOST
  value: <db_host>
- name: ICECI_DB_NAME
  value: <db_name>
- name: ICECI_DB_PASS
  value: <db_pass>
- name: ICECI_DB_PORT
  value: <db_port>
- name: ICECI_DB_DIALECT
  value: <db_dialect>
Resource requests and limits¶
By default, the applications deployed in the cluster don’t have any resource requests or limits set. You can add them easily by editing the deployment manifests for each of the applications that you decide to deploy in the cluster. You can read more about resource requests and limits in the Kubernetes documentation.
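As a sketch, a requests/limits block added to a container spec in one of the deployment manifests might look like this (the values are illustrative and should be tuned to your cluster):

```yaml
# Fragment of a Deployment's container spec - values are illustrative.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```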
Further reading¶
Now that you’ve got IceCI installed and configured in your cluster, it’s time to build some pipelines. If you need a quick refresher on how a basic pipeline is built, you can go back to the Quickstart page or visit one of the example repositories, like the Quickstart example or the Python Flask API example. For a more comprehensive guide on how the pipeline configuration files are structured, check out the Pipelines page.
Overview¶
IceCI pipelines are defined in the .iceci.yaml file stored in the root of your Git repository.
Keep in mind - after adding a repository containing a pipeline definition to IceCI, the system won’t build anything until a Git event occurs.
The repository is scanned for new events every minute - when a new event is recorded, a build is triggered. The build is performed based on the contents of the pipeline definition file.
Attention
Currently the only supported Git events are commits. Support for tags and pull requests will be added in future versions.
Building pipelines¶
Building blocks¶
There are a couple of key elements in IceCI pipelines. Some of them may be familiar from other continuous integration systems.
Steps¶
Overview¶
Steps are the bread and butter of every pipeline. They are the operations executed during a pipeline run. Steps run in the order defined in the steps field of the pipeline spec.
When one of the steps fails, the failure handlers are executed and the pipeline finishes. No other steps in the pipeline will run.
Note
Currently IceCI does not support parallel pipeline execution. This feature is considered for future versions.
Every step is executed in a container, running as a pod in the Kubernetes cluster. Every step has the same volume mounted in the /workspace directory inside the container running the step. This volume contains the source code related to the Git event that occurred in the repository. The volume is persistent across the whole pipeline and is isolated from other pipelines.
Note
If you create files in the /workspace directory in any of your steps during pipeline execution, all subsequent steps will have access to those files.
Examples¶
Simple steps¶
Here’s an example of a working pipeline with 2 simple docker run steps.
steps:
- name: step1
  dockerRun:
    image: busybox
    script: "echo 'Hello world!'"
- name: step2
  dockerRun:
    image: busybox
    script: |
      echo "step 2"
      env
Persistent workspace¶
An example of generating a file and then accessing it in the next step.
steps:
- name: generate-date
  dockerRun:
    image: busybox
    script: "date > date.log"
- name: print-date
  dockerRun:
    image: busybox
    script: "cat date.log"
Environment variables¶
Here’s an example of passing environment variables to a container.
steps:
- name: env-test
  dockerRun:
    image: busybox
    script: "printenv ENV_VAR_1"
    environment:
    - name: ENV_VAR_1
      value: test-value
Note
As you can see, the environment variable value is hardcoded into the pipeline. This is fine if your build configuration doesn’t contain passwords or other sensitive data. For more information on how to manage sensitive data in IceCI, see the secrets section.
Files¶
Here’s an example of mounting files from a secret in a container.
steps:
- name: file-test
  dockerRun:
    image: busybox
    script: "cat /mnt/file"
    files:
    - path: /mnt/file
      fromSecret: file-secret
Note
The content of a file can’t be defined inline. Every file has to have a reference to a secret, from which the content is pulled.
Services¶
Overview¶
A service in IceCI is a container that is running during the whole pipeline execution. Services let you run backing containers for your pipeline - for example a PostgreSQL database for your integration tests.
Every service has a name - in the pipeline it can be resolved using this name. If you create a website service, running the curl http://website command in a pipeline step will hit the website service, assuming it’s a container listening on port 80 (like nginx, for example).
Note
The pipeline will stop all services after finishing - regardless of the pipeline result.
Warning
If a service stops during pipeline execution, the pipeline will fail regardless of the exit code of the service’s main process.
Examples¶
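Here’s a minimal sketch of a pipeline with a backing service - the website/nginx pairing follows the description above, and the step uses busybox’s wget to hit the service on port 80:

```yaml
services:
- name: website
  image: nginx          # listens on port 80, resolvable as http://website
steps:
- name: check-service
  dockerRun:
    image: busybox
    script: "wget -q -O - http://website"
```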
Failure handlers¶
Overview¶
Failure handlers are very similar to docker run steps. They execute a set of commands in a container. The main difference is that they run after a pipeline step fails. This allows you to add cleanup and notification actions to the pipeline.
Failure handlers are defined globally and can be referenced at both the global pipeline level and the step level. Thanks to that, you can create a global error notification, but also add certain cleanup logic to specific steps.
Failure handlers are executed in the order of appearance in the onFailure section of the pipeline. Failure handlers for steps are executed first, followed by failure handlers defined in the globals section.
Important
When multiple failure handlers are specified and one of them fails, the others will still execute. The pipeline tries to run every failure handler, ignoring the statuses of the previous ones.
Important
Failure handlers don’t affect the status of the pipeline. If a step fails, but all failure handlers finish correctly, the pipeline will still be in failed status!
Examples¶
Single failure handler¶
Below is an example of running a failure handler after a failed step. It also shows how environment variables are injected into every step and failure handler in the pipeline.
steps:
- name: step-that-fails
  dockerRun:
    image: busybox
    script: "noSuchCommand"
  onFailure:
  - handlerName: failure-handler-1
failureHandlers:
- name: failure-handler-1
  image: busybox
  script: 'echo "step $ICE_FAILED_STEP_NAME has failed"'
Global failure handlers¶
Below is an example of running failure handlers from both the step and pipeline level.
steps:
- name: step-that-fails
  dockerRun:
    image: busybox
    script: "noSuchCommand"
  onFailure:
  - handlerName: failure-handler-1
  - handlerName: failure-handler-2
globals:
  onFailure:
  - handlerName: failure-handler-1
  - handlerName: failure-handler-3
failureHandlers:
- name: failure-handler-1
  image: busybox
  script: 'echo "failure handler $ICE_STEP_NAME"'
- name: failure-handler-2
  image: busybox
  script: 'echo "failure handler $ICE_STEP_NAME"'
- name: failure-handler-3
  image: busybox
  script: 'echo "failure handler $ICE_STEP_NAME"'
Note
Notice that failure-handler-1 will run twice because it’s declared both in the globals section and in the step. Currently IceCI does not implement any deduplication mechanism for failure handlers.
Environment variables and files¶
Here’s an example of defining environment variables and files at the failure handler level.
steps:
- name: step-that-fails
  dockerRun:
    image: busybox
    script: "noSuchCommand"
  onFailure:
  - handlerName: failure-handler-1
failureHandlers:
- name: failure-handler-1
  image: busybox
  script: |
    echo $ICE_FH
    cat /mnt/file
  environment:
  - name: ICE_FH
    value: failure-handler-env
  files:
  - path: /mnt/file
    fromSecret: failure-secret
Globals¶
Overview¶
Some of the settings that can be specified for steps can also be specified in the globals section - this means they will be applied to all steps in the pipeline. Thanks to this, you don’t have to redeclare settings (like environment variables) in each step, but can set them globally for the whole pipeline instead.
Note
Objects from the global section will be passed to steps in the pipeline only when it makes sense. See the globals
reference for more details.
Examples¶
Environment variables¶
Below is an example of setting up global environment variables and overriding them at the step level.
globals:
  environment:
  - name: GLOBAL_ENV
    value: global-value
steps:
- name: step1
  dockerRun:
    image: busybox
    script: "printenv GLOBAL_ENV"
- name: step2
  dockerRun:
    image: busybox
    script: "printenv GLOBAL_ENV"
    environment:
    - name: GLOBAL_ENV
      value: local-value
Files¶
Here’s an example of setting up a global file and overriding it at the step level.
globals:
  files:
  - path: /mnt/file
    fromSecret: global-secret
steps:
- name: step1
  dockerRun:
    image: busybox
    script: "cat /mnt/file"
- name: step2
  dockerRun:
    image: busybox
    script: "cat /mnt/file"
    files:
    - path: /mnt/file
      fromSecret: local-secret
Additional objects¶
Secrets¶
Overview¶
Secrets are objects in IceCI responsible for storing sensitive data. They are stored as secrets in the Kubernetes cluster. Currently IceCI distinguishes 4 types of secrets: generic, Docker, Git SSH key and Git token.
Attention
The git ssh key and git token secret types are used for cloning the repository and are never directly referred to in the pipelines. They can only be used while creating repositories in the UI.
Note
Currently secrets can only be configured through the UI, in the settings section.
Generic secret¶
Generic secrets store sensitive data that can be used in pipelines. Their values can be passed as environment variables to containers. Generic secrets can be used in dockerRun steps, services and failure handlers. They can also be defined in the globals scope of the pipeline.
Example¶
Here’s an example of passing a value from generic-secret as the environment variable ENV_FROM_SECRET.
steps:
- name: step1
  dockerRun:
    image: busybox
    script: |
      printenv ENV_FROM_SECRET
    environment:
    - name: ENV_FROM_SECRET
      fromSecret: generic-secret
Here’s an example of passing a value from generic-secret to both a service and a dockerRun step in the pipeline via the globals section.
globals:
  environment:
  - name: ENV_FROM_SECRET
    fromSecret: generic-secret
services:
- name: envcheck
  image: busybox
  script: |
    printenv ENV_FROM_SECRET
    sleep 99999
steps:
- name: step1
  dockerRun:
    image: busybox
    script: |
      printenv ENV_FROM_SECRET
Further reading¶
For more information about passing secrets as environment variables, see the environment variable reference.
Docker secret¶
Overview¶
A Docker secret stores credentials used to communicate with Docker registries. It can be used both for pulling images from private registries and for pushing images after building them in the dockerBuild step. In both cases the dockerSecret field is used.
A Docker secret can also be specified in the globals section of the pipeline - this way it’ll be passed to every object that has a dockerSecret field. If a Docker secret is specified at the object level, it will override the global one.
Examples¶
Here’s an example of using Docker images from a private registry to run both a service and a step in the pipeline.
services:
- name: db
  image: mrupgrade/private:db
  dockerSecret: dockerhub
steps:
- name: step1
  dockerRun:
    image: mrupgrade/private:debian10
    dockerSecret: dockerhub
    script: "echo Hello world"
Note
When running this example in your own pipelines, remember to change the image value to a repository and image that you have read access to. You also need to create a Docker secret named dockerhub.
Here’s an example of setting up dockerSecret at the global level so it doesn’t have to be repeated in every step, service and failure handler.
globals:
  dockerSecret: dockerhub
services:
- name: db
  image: mrupgrade/private:db
steps:
- name: step1
  dockerRun:
    image: mrupgrade/private:debian10
    script: "echo Hello world"
Note
When running this example in your own pipelines, remember to change the image value to a repository and image that you have read access to. You also need to create a Docker secret named dockerhub.
Further reading¶
For more information on how to use Docker secrets, check the reference for these pipeline objects: dockerRun, dockerBuild, service, failureHandler and globals.
Git SSH key¶
A Git SSH key stores an SSH key used to communicate with a Git server. It’s used for cloning the repository and monitoring any changes that may occur.
The secret is specified while adding a repository to IceCI. After entering an SSH clone URL in the Repository URL field - for example git@github.com:MrUPGrade/example-python-flask-api.git - the Secret names dropdown will show you all the available Git SSH secrets.

Note
Git SSH keys are used whenever the repository is accessed via SSH, regardless of whether it’s a public or private repository.
Git token¶
A Git token stores a token used to communicate with a Git server. It’s used for cloning the repository and monitoring any changes that may occur.
The secret is specified while adding a repository to IceCI. After entering an HTTP clone URL in the Repository URL field - for example https://github.com/MrUPGrade/example-python-flask-api.git - the Secret names dropdown will list all the available Git token secrets.

Note
A Git token is used only when accessing a private repository via HTTPS. For public HTTPS repositories the token can be skipped and no secrets are needed.
Pipeline structure¶
Pipeline root object¶
The root of the pipeline yaml file consists of the following fields.

steps : list(Step)¶
List of Step objects. For more information and examples on how steps work, see the steps section.

services : list(Service)¶
List of Service objects. Services will all be created at once, at the beginning of the pipeline. For more information and examples on how services work, see the services section.

failureHandlers : list(FailureHandler)¶
List of all FailureHandler objects available in the whole pipeline. They can be referenced by both Step objects and global failure handlers. For more information and examples on how failure handlers work, see the failure handlers section.

globals : object¶
The globals object contains settings that will be passed down to the relevant objects in the pipeline. For more information and examples, see the globals section. It contains the following fields:

dockerSecret : string¶
Name of the Docker secret used for communicating with the Docker registry. This value will be passed to every object that has a dockerSecret field.

onFailure : list(FailureHandlerReference)¶
List of global failure handlers. These failure handlers will be run after every failed step in the pipeline, no matter what type it was.

environment : list(EnvironmentVariable)¶
List of environment variables. These will be passed to every Docker run step in the pipeline.
Important
Failure handlers have access to all the environment variables of a given step injected into their spec, so they’re available in the failure handler as well.

files : list(File)¶
List of files that will be mounted in every Docker run step in the pipeline.
Important
Like environment variables, files from a given step are also mounted into failure handlers.
Objects and types¶
Definitions of all objects and types used in the pipeline definition.
Step : Object¶
Step is an object in the pipeline representing a single execution unit. For more information and examples of how steps work, see the steps section.

name¶
Name of the step. This name will be displayed in the UI.

onFailure : list(FailureHandlerReference)¶
List of FailureHandlerReference objects. All of the failure handlers will be executed in the order declared in this list. If global failure handlers are also defined, they will run after those specified here.

dockerRun : DockerRun¶
Docker run step is used for running various commands in a container. For the full reference see DockerRun.

dockerBuild : DockerBuild¶
Docker build step is used to build a Docker image and publish it to a specified Docker registry. For the full reference see DockerBuild.

Important
A step can be either a dockerRun or a dockerBuild type - never both. If more than one of these fields is set, the pipeline will fail during validation.
DockerRun : Object¶
Docker run executes the script inside a Docker container running a Docker image. If any of the commands exits with a code other than 0, the step will fail.

image : string¶
Docker image used to run your commands.

dockerSecret : string = ""¶
Name of the Docker secret used for communicating with the Docker registry. Used for pulling images from private registries.

script : string¶
A string containing the script that will be executed. The shell is run with set -e, so the script will fail if any of the commands exits with a code other than 0. If empty, the default command from the Docker image will be executed.

environment : list(EnvironmentVariable)¶
List of environment variables passed to the container.

files : list(File)¶
List of files that will be mounted in the container.
DockerBuild : Object¶
Builds a Docker image and pushes it to the registry.

registry : string = docker.io¶
Address of the Docker registry used to store images.

user : string¶
User used to log in to the Docker registry.

imageName : string¶
Name of the image that will be built. This is not the full image name - just the name of the image without the tag, user and registry.

tag : string¶
Tag of the Docker image that will be built.

contextPath : string = .¶
Path that will be used as the context in the Docker build process. See the docker help command for more information.
Important
The context will be set to ., but the current working directory is set to the workspace folder, so it behaves as if the Docker command were run inside the source directory.

dockerfilePath : string = Dockerfile¶
Path to the Dockerfile.

dockerSecret : string¶
Name of the Docker secret used for communicating with the Docker registry. Used for pushing the built image to the registry.
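The reference above can be illustrated with a sketch of a dockerBuild step - the user, image name and tag here are illustrative, and registry and contextPath fall back to their defaults of docker.io and .:

```yaml
steps:
- name: build-image
  dockerBuild:
    user: mrupgrade           # registry user - illustrative
    imageName: example-app    # image name without user, registry or tag
    tag: "0.1.0"
    dockerSecret: dockerhub   # Docker secret holding push credentials
```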
Service : Object¶
Service runs an application specified by script inside a Docker image.

name : string¶
Name of the service, used in the UI and for communicating with the service. This name will be added to /etc/hosts of every step and failure handler in the pipeline, so the service can be resolved by name.

image : string¶
Docker image used to run the service.

dockerSecret : string¶
Name of the Docker secret used for communicating with the Docker registry. Used for pulling images from private registries.

script : string = ""¶
A script or command that will be executed. If empty, the default command from the Docker image will be executed.

environment : list(EnvironmentVariable)¶
List of environment variables passed to the container.
FailureHandler : Object¶
Failure handler is an object representing a special kind of pipeline execution unit for handling errors in pipeline steps. Its definition is very similar to the docker run step.

name : string¶
Name of the failure handler. This name is used by FailureHandlerReference.

image : string¶
Docker image used to run the failure handler.

dockerSecret : string¶
Name of the Docker secret used for communicating with the Docker registry. Used for pulling images from private registries.

script : string¶
A script or command that will be executed. If empty, the default command from the Docker image will be executed.

environment : list(EnvironmentVariable)¶
List of environment variables passed to the container.

files : list(File)¶
List of files that will be mounted in the container.
FailureHandlerReference : Object¶
Failure handler reference is an object representing a reference to a defined failure handler.

handlerName¶
Name of the FailureHandler object to run. The failure handler must be defined in the failureHandlers list - otherwise the pipeline will fail during validation.
EnvironmentVariable : Object¶
The environment variable object represents an environment variable within a container. You can provide a value inline through the value field or reference a secret by its name using the fromSecret field.

name¶
The name of the environment variable that will be passed to the container.

value¶
Value for a given environment variable.

fromSecret¶
Name of a secret from which the value should be retrieved and injected into the container as an environment variable.

Note
Currently IceCI supports creating environment variables by explicitly entering their values in the pipeline yaml or by providing a secret name from which the value should be taken. These options are exclusive for a given variable - you can’t have both value and fromSecret set at the same time, or the pipeline validation will fail.
File : Object¶
The file object represents a file that’ll be mounted in a container from a secret. Unlike environment variables, file values cannot be provided inline - they have to reference a secret.

path¶
The absolute path that the file will be mounted at.

fromSecret¶
Name of a secret from which the value should be retrieved and mounted into the container as a file.
Environment¶
IceCI generates additional metadata for every build and passes this data to the relevant objects in the pipeline.
Note
In the UI, these environment variables won’t be visible in either step or failure handler details. They are injected during execution - the UI only shows variables defined in the pipeline.
Steps¶
Environment variables specified here are injected into every step and failure handler in the pipeline.

ICE_BUILD_NUMBER¶
Sequential number of the build in a repository. This value is unique only in the context of a given repository.
Example value: 12

ICE_STEP_NAME¶
Name of the step that’s currently executing.
Example value: run-tests

ICE_SERVICE_XXX¶
IP address of the XXX service. This variable is created for every service defined in the pipeline spec.
Example value: 10.0.0.1

ICE_GIT_EVENT_TYPE¶
Git event type. Currently only commit is supported.
Example value: commit

ICE_GIT_COMMIT_SHA¶
SHA of the Git commit.
Example value: 93126518fa6eec3447d1d57c503aeebfd84f23ec

ICE_GIT_BRANCH_NAME¶
Name of the branch on which the event happened.
Example value: master

ICE_GIT_TAG¶
Not supported in the current version. Git tag name. This environment variable is set only if ICE_GIT_EVENT_TYPE is set to tag.
Example value: 0.1.0

ICE_GIT_LOG_HEADER¶
Git log header encoded in base64.
Example value: VXBkYXRlICdSRUFETUUubWQnCg==

ICE_GIT_LOG_MESSAGE¶
Git log body (without the header) encoded in base64.
Example value: VXBkYXRlICdSRUFETUUubWQnCg==

ICE_GIT_AUTHOR_NAME¶
Name of the event author.
Example value: iceci

ICE_GIT_AUTHOR_EMAIL¶
Email of the event author.
Example value: iceci@iceci.io

ICE_GIT_AUTHOR_DATE¶
Date of the event.
Example value: Wed, 5 Feb 2020 01:24:15 +0100
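Since the log header and message are base64-encoded, a step would typically decode them before use - for example with base64 -d, which is available in busybox and coreutils. Here the variable is set manually to the documented example value; in a real step it’s injected by IceCI:

```shell
# Decode the documented example value of ICE_GIT_LOG_HEADER.
ICE_GIT_LOG_HEADER="VXBkYXRlICdSRUFETUUubWQnCg=="
echo "$ICE_GIT_LOG_HEADER" | base64 -d
```

This prints the commit header, Update 'README.md', followed by a newline.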
Failure handler¶
Environment variables specified here are injected into every failure handler in the pipeline.

ICE_FAILED_STEP_NAME¶
Name of the failed step.
Example value: run-tests

Important
Failure handlers also have all of the failed step’s environment variables injected - this includes secrets.