This document explains how to structure and use modular Jenkins pipelines using external Groovy include files.
The goal of this approach is to improve pipeline readability, maintainability, and reuse by separating logic into smaller, well-defined components.

The repository structure used in this example is:

https://github.com/faustobranco/devops-db/tree/master/knowledge-base/jenkins/include-files

.
├── README.md
└── include-files
    ├── includes
    │   ├── stepOne.groovy
    │   └── stepTwo.groovy
    ├── main.groovy
    └── parameters
        ├── parameters.groovy
        ├── projectChoices.groovy
        └── versionChoices.groovy

This structure separates pipeline orchestration, stage logic, and dynamic parameter configuration.


1. Why Use Include Files in Jenkins Pipelines

As pipelines grow, a single Jenkinsfile quickly becomes difficult to maintain.
Common problems include:

  • Long scripts with hundreds of lines
  • Repeated logic across pipelines
  • Difficult debugging
  • Reduced readability

Using include files allows pipelines to be split into logical modules:

Component      Responsibility
main.groovy    Main pipeline orchestration
includes/      Modular pipeline step implementations
parameters/    Parameter definitions and dynamic choices

This modular approach allows each file to have a single responsibility.


2. Pipeline Architecture Overview

The main pipeline (main.groovy) performs three main tasks:

  1. Load pipeline parameters
  2. Load external include scripts
  3. Execute the pipeline logic

The execution flow is:

Jenkins
   │
   ├─ Load parameters (checkout + read files)
   │
   ├─ Start declarative pipeline
   │
   ├─ Load include scripts
   │
   └─ Execute modular steps


3. Repository Hierarchy

The repository uses the following hierarchy:

include-files
│
├── main.groovy
│
├── includes
│   ├── stepOne.groovy
│   └── stepTwo.groovy
│
└── parameters
    ├── parameters.groovy
    ├── projectChoices.groovy
    └── versionChoices.groovy

Why This Structure

includes/

Contains reusable pipeline logic.

These files behave exactly like code inside Jenkins steps.

Example responsibilities:

  • Build logic
  • Deployment steps
  • Validation
  • Integration tasks

parameters/

Contains dynamic parameter logic.

This allows parameters to be version controlled and maintained separately from the pipeline code.

parameters.groovy

def getParameters(projectScript, versionScript) {

    return [

        [
            $class: 'CascadeChoiceParameter',
            choiceType: 'PT_SINGLE_SELECT',
            name: 'PROJECT',
            description: 'Select project',
            script: [
                $class: 'GroovyScript',
                script: [
                    $class: 'SecureGroovyScript',
                    script: projectScript,
                    sandbox: false
                ]
            ]
        ],

        [
            $class: 'CascadeChoiceParameter',
            choiceType: 'PT_SINGLE_SELECT',
            name: 'APPLICATION_VERSION',
            description: 'Select version',
            referencedParameters: 'PROJECT',
            script: [
                $class: 'GroovyScript',
                script: [
                    $class: 'SecureGroovyScript',
                    script: versionScript,
                    sandbox: false
                ]
            ]
        ],

        choice(
            name: 'ENVIRONMENT',
            choices: ['test','stage','prod'],
            description: 'Environment'
        )

    ]
}

return this

projectChoices.groovy

def items = [
    "kubernetes",
    "vault",
    "gitlab"
]

return items

versionChoices.groovy

def versions = [
    "kubernetes": ["3.11.1", "3.11.0", "3.10.5"],
    "vault": ["2.5.0", "2.4.1", "2.4.0"],
    "gitlab": ["1.8.2", "1.8.1"]
]

if (!PROJECT) {
    return ["select project first"]
}

return versions[PROJECT] ?: ["no versions found"]

main.groovy

Coordinates everything.

This file:

  • Loads parameters
  • Loads include modules
  • Runs pipeline stages

@Library('devopsdb-global-lib') _

import devopsdb.utilities.Utilities
def obj_Utilities = new Utilities(this)   // shared-library helper used by the pipeline

def includes_steps = [:]                  // map that will hold the loaded include scripts

node {
    // Ensure the repository exists in the workspace
    checkout scm

    dir('include-files') {

        def projectScript = readFile('parameters/projectChoices.groovy')
        def versionScript = readFile('parameters/versionChoices.groovy')

        def parametersFile = load 'parameters/parameters.groovy'

        properties([
            parameters(
                parametersFile.getParameters(projectScript, versionScript)
            )
        ])
    }
}

pipeline {
    agent {
        kubernetes {
            yaml GeneratePodTemplate('1234-ABCD', 'registry.devops-db.internal:5000/img-jenkins-devopsdb:2.0')
            retries 2
        }
    }

    options { timestamps() }
    stages {

        stage('Load Includes') {
            steps {
                container('container-1') {
                    script {

                        includes_steps.stepOne = load "include-files/includes/stepOne.groovy"
                        includes_steps.stepTwo = load "include-files/includes/stepTwo.groovy"

                    }
                }
            }
        }

        stage('Run Steps') {
            steps {
                container('container-1') {
                    script {
                        includes_steps.stepOne.stepOne()
                        includes_steps.stepTwo.stepTwo()
                    }
                }
            }
        }

        stage('Example') {
            steps {
                echo "Project: ${params.PROJECT}"
                echo "Version: ${params.APPLICATION_VERSION}"
                echo "Environment: ${params.ENVIRONMENT}"
            }
        }
    }
}

4. Using a Map to Store Included Scripts

To load multiple scripts safely, a Groovy map is used.

Example concept:

def includes_steps = [:]

This creates an empty map.

Scripts can then be dynamically added:

includes_steps.stepOne = load "include-files/includes/stepOne.groovy"
includes_steps.stepTwo = load "include-files/includes/stepTwo.groovy"

After loading them, the functions defined in those files can be executed.

Example:

includes_steps.stepOne.stepOne()
includes_steps.stepTwo.stepTwo()

Why Use a Map

Using a map provides several advantages:

Prevents namespace pollution

Instead of creating many variables:

stepOneScript
stepTwoScript
stepThreeScript

everything is grouped inside a single structure.

Avoids Jenkins CPS serialization issues

Jenkins pipelines run on a CPS execution engine that serializes pipeline state between steps.
Grouping the loaded script objects in a single map keeps them organized and reduces the number of top-level variables the engine must track, which lowers the risk of serialization problems.

Improves scalability

Additional modules can be added easily:

includes_steps.deploy = load "include-files/includes/deploy.groovy"
includes_steps.rollback = load "include-files/includes/rollback.groovy"
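Because the modules live in one map, they can also be invoked generically. The sketch below is a hypothetical extension, not part of the example repository; it assumes each include file defines a method named after its map key (stepOne.groovy defines stepOne(), and so on), as in the files shown earlier.

```groovy
// Hypothetical: run every loaded module by naming convention.
// Groovy allows dynamic method dispatch via a GString method name.
includes_steps.each { name, module ->
    echo "Running module: ${name}"
    module."${name}"()   // calls stepOne.stepOne(), stepTwo.stepTwo(), ...
}
```

This keeps the Run Steps stage unchanged as new modules are added to the map.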

5. Loading Pipeline Parameters

In this example, the dynamic parameters are stored in:

include-files/parameters

These files define the logic for parameters such as:

  • Project selection
  • Version selection
  • Environment selection

The parameters are loaded before the pipeline starts.

Example flow:

checkout scm
read parameter scripts
load parameters definition
apply properties(parameters)

Why checkout scm Is Required

Jenkins needs access to the repository files before reading them.

Since the pipeline script itself may be retrieved independently, we must explicitly perform:

checkout scm

This ensures that:

  • The repository is cloned into the workspace
  • The parameter files are available for reading

Without this step, Jenkins would fail to locate the parameter scripts.


6. CascadeChoiceParameter Script Approval

The pipeline uses Active Choices parameters such as CascadeChoiceParameter.

These parameters execute Groovy scripts dynamically.

Because Jenkins enforces script security, these scripts often require manual approval.

If approval is required, Jenkins will show a warning in:

Manage Jenkins → In-Process Script Approval

Administrators must approve the scripts before they can run.

Typical approval scenarios include:

  • Groovy scripts using non-sandbox execution
  • External API calls
  • Dynamic parameter logic

Without approval, the parameter rendering will fail.
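One way to reduce the approval burden is to run the parameter scripts in the Groovy sandbox instead of with sandbox: false, as the example parameters.groovy does. This is a hedged alternative, not the repository's configuration: sandboxed scripts are restricted to a whitelist of safe methods, so more complex parameter logic may still require approval.

```groovy
// Variation on the parameters.groovy script block: sandboxed execution.
// Sandboxed scripts avoid whole-script approval but are limited to
// whitelisted methods; calls outside the whitelist will be rejected.
script: [
    $class: 'GroovyScript',
    script: [
        $class: 'SecureGroovyScript',
        script: projectScript,
        sandbox: true
    ]
]
```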


7. Writing Code Inside Include Files

Each file inside includes/ behaves like code written inside Jenkins pipeline steps.

For example, commands such as:

  • echo
  • sh
  • withCredentials
  • env
  • params

are available.

Example conceptual structure inside an include file:

stepOne.groovy

def stepOne() {
    echo "Running step one"
    def str_Password = GeneratePassword(20, true)
    println String.format("New password: %s", str_Password)
    env.NEW_PASSWORD = str_Password
}

return this

stepTwo.groovy

def stepTwo() {
    echo "Running step two"
    env.APP_NAME = params.PROJECT
    env.APP_VERSION = params.APPLICATION_VERSION
    env.APP_ENV = params.ENVIRONMENT
    sh 'env'
}

return this

Important points:

  • Functions must be defined normally in Groovy
  • return this allows Jenkins to access the functions after loading the script

After loading the file using load, the pipeline can call the function directly.
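A new module follows the same pattern. The file below is a hypothetical includes/deploy.groovy (not in the example repository) illustrating the convention: define a function, end with return this.

```groovy
// Hypothetical includes/deploy.groovy, following the same pattern
// as stepOne.groovy and stepTwo.groovy.
def deploy() {
    echo "Deploying ${params.PROJECT} ${params.APPLICATION_VERSION} to ${params.ENVIRONMENT}"
    sh "echo 'deployment commands would run here'"
}

return this
```

It would be loaded and called exactly like the other steps: includes_steps.deploy = load "include-files/includes/deploy.groovy", then includes_steps.deploy.deploy().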


8. Why Paths Are Referenced Differently

You may notice two different path styles:

Parameter loading

parameters/projectChoices.groovy

Include loading

include-files/includes/stepOne.groovy

This difference exists because of workspace context.

When the parameters are loaded, the pipeline changes the working directory:

dir('include-files')

Inside that context, paths are relative to include-files.

However, when loading scripts inside the pipeline stages, the working directory returns to the workspace root.

Therefore the full relative path must be used.

This ensures that Jenkins can correctly locate the files regardless of execution context.
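The two contexts can be sketched side by side; the paths match this repository's layout:

```groovy
// Path resolution depends on the current working directory.
node {
    dir('include-files') {
        // Inside dir(): paths resolve relative to include-files/
        def txt = readFile('parameters/projectChoices.groovy')
    }
    // Outside dir(): back at the workspace root,
    // so the full relative path is required.
    def stepOne = load 'include-files/includes/stepOne.groovy'
}
```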


9. Benefits of This Approach

Using this architecture provides several advantages.

Maintainability

Large pipelines become easier to manage.

Reusability

Modules can be reused across multiple pipelines.

Clear separation of responsibilities

File           Responsibility
main.groovy    Pipeline orchestration
includes/      Execution logic
parameters/    Parameter configuration

Version control

All pipeline logic lives in Git.

Easier debugging

Each step can be modified independently.


10. Summary

This modular pipeline structure introduces a clean separation between:

  • Pipeline orchestration
  • Execution steps
  • Parameter configuration

Key concepts include:

  • Using checkout scm to load repository files for parameters
  • Storing included scripts inside a Groovy map
  • Structuring pipeline logic using external modules
  • Approving Active Choice scripts when required
  • Writing include files as standard Jenkins step code

This architecture allows pipelines to scale while remaining readable and maintainable.

As pipelines grow in complexity, modularization becomes essential for maintaining clarity and reducing operational risk.