Pipelines-as-a-Service: How to put some order in a chaotic Jenkins
- ikerdomingoperez
- Sep 10
The Reality of Jenkins
Jenkins is everywhere. It is the most popular CI/CD orchestrator and, as it happens with flexible tools, it is often misused. I have been dealing with Jenkins for a while. Across different companies and personal projects I have seen a few things, and learnt many others. I am aware every company is different, every team manages Jenkins their own way, and that's fine. Today, I will try to speak for those struggling with their Jenkins setup and willing to fix it.

What’s the Real Problem?
Chaos.
Yes, senior members of the team won't see much chaos there, as they are familiar with the setup and know exactly where everything is. But picture a new joiner, or someone from a different team, trying to find their way through a Jenkins labyrinth: jobs taking their Jenkinsfiles from different repositories, sometimes from non-main legacy branches; Freestyle jobs with random bat scripts; jobs with scripted pipelines pasted as inline text; and so on. It feels like every job has been created ad hoc, manually, without following any standards, solving the same problems over and over without ever looking at other jobs for reference. And, look, I have done that myself too. I don't think there are many people around who are totally free from blame.
So, first, let's enumerate the potential pain points:
Excessive duplication - no DRY principles
No centralized pipelines - hard to make a change across the board
Different solutions for the same problem - confusion
Difficulty finding out what the real plugin requirements are
Ad-hoc and decentralized compliance and security checks
Business logic spread all over the place
Accountability - who is responsible when something fails?
Troubleshooting - no need to explain
Blast radius - say you need to modify a Jenkinsfile that is taken from source control: how can you tell how many jobs use it? (a problem for another post!)
Again, I'm sure there are very competent companies around with a setup like that, where each sub-team is accountable and manages their pipelines on their own, in a proactive and engaged manner. But there will also be others that don't work that way. The goal of this post is to try to put some order in the chaos. At least on the pipelines side.
The Solution: Pipelines-as-a-Service
Change doesn’t happen overnight. Start small, then slowly absorb job after job, just like Katamari Damacy would do.
The idea of the solution is to centralize the pipelines and define some rules, but make them dynamic and generic enough so they can scale with custom requirements and, most importantly, test the sh*t out of them. This gives you Pipelines-as-a-Service: shared, scalable, reusable and testable CI/CD that takes the weight off the teams' shoulders and lets your DevOps team take true ownership.
You have a Terraform deployment? We have a pipeline for that. You have a Java project? Again? Pick this one, a thousand Java repos already use it. This is more than just standardization: it's about removing friction and increasing confidence by using a generic, tested solution for the same problems.
Start Small: Draft Reusable Pipelines
How to start? If you try to look at the big picture, it can feel overwhelming. Let's make it easy: start with one technology. Pick one that your team works with, as you will become your own first customer. For the sake of this example, we will assume that your team already has some Terraform or Ansible projects, or some Python libraries, lying around.
Pick one and draft a pipeline that can fit all of your repositories of that technology. Use variables or parameters and make it project agnostic: the idea is that, in your Jenkinsfile, there is not a single reference to any of your projects or repositories, nor to anything only your team does.
It doesn't have to work; you are just creating a draft, a blueprint. Keep it high level.
Grow: Identify Common Ground
Now, repeat that with the other technologies in your stack. You should end up with three different blueprints. High-level designs only.
Take a step back and compare them. What do they have in common? Often, the common ground is the technology-agnostic steps: publish, checkout, email notifications. Python, Ansible and Terraform couldn't be more different from each other, but when it comes down to a build pipeline, they share plenty of steps. Identify them and make sure your drafts clearly highlight those: they will form the foundation of your shared Pipelines-as-a-Service.
Split, Centralize, and DRY the Steps
Duplication is one of the big bad bosses, the enemy of maintainability. If two or more pipelines share a step, write it once and reference it everywhere, so that when you change it, the change is reflected automatically wherever it is used. Even if you centralize the pipelines, you really don't want to loop over dozens of Jenkinsfiles searching for that email notification step you need to modify.
Follow the same approach for every step, because even if it is only used once today, you may need it again tomorrow. It also helps consistency. Split and store each step in its own file, or any other modular structure: the idea is that each step is treated independently. This makes everything more manageable, and easier to troubleshoot, change, extend and test.
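For instance, a hypothetical layout could be as simple as one snippet file per step (the names are made up; any modular structure works just as well):
steps/
    checkout.groovy
    lint_py.groovy
    lint_tf.groovy
    pytest.groovy
    build_py.groovy
    deploy.groovy
    publish.groovy
    cleanup.groovy
    notify.groovy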
You will sell this as Pipelines-as-a-Service, but the real value is in the steps.
Compose Jenkinsfiles Dynamically
Great, we have a bunch of files with steps. Now it is time to create a solution that puts those steps together in the order you want. First, let's create a data structure that defines the list of steps, in a specific order, for each Jenkinsfile:
python_build.jenkinsfile:
- checkout
- lint_py
- pytest
- build_py
- publish
- cleanup
- notify
terraform_build.jenkinsfile:
- checkout
- lint_tf
- deploy
- destroy
- publish
- cleanup
- notify
ansible_build.jenkinsfile:
- checkout
- lint_ans
- test
- publish
- cleanup
- notify
Not bad. Now you can implement a script or mechanism to assemble the Jenkinsfiles. If you also need to manage different agent labels or different environment sections, you can adapt the data structure to something like:
python_build.jenkinsfile:
  agent_label: aws
  environment_section: basic_build_environment
  steps:
    - checkout
    - lint_py
    - pytest
    - build_py
    - publish
    - cleanup
    - notify
terraform_build.jenkinsfile:
  agent_label: onprem
  environment_section: onprem_build_environment
  steps:
    - checkout
    - lint_tf
    - deploy
    - destroy
    - publish
    - cleanup
    - notify
Perfect: we have generated three different Jenkinsfiles and reused steps like checkout, publish, cleanup and notify. So far it is just a draft, nothing usable yet.
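The assembling mechanism itself does not need to be fancy. Here is a minimal Python sketch, assuming the structure above is stored in a pipelines.yaml file, that each step has a matching snippet in steps/<name>.groovy containing a complete stage block, and that environment sections live in environments/<name>.groovy; all names and paths are illustrative, not a prescription:

from pathlib import Path
import yaml  # PyYAML

def render_jenkinsfile(spec: dict) -> str:
    # Concatenate the step snippets in the declared order.
    stages = "\n".join(
        Path("steps", f"{step}.groovy").read_text() for step in spec["steps"]
    )
    environment = Path("environments", f"{spec['environment_section']}.groovy").read_text()
    # Wrap everything in a minimal declarative pipeline skeleton.
    return (
        "pipeline {\n"
        f"    agent {{ label '{spec['agent_label']}' }}\n"
        f"{environment}\n"
        "    stages {\n"
        f"{stages}\n"
        "    }\n"
        "}\n"
    )

definitions = yaml.safe_load(Path("pipelines.yaml").read_text())
Path("generated").mkdir(exist_ok=True)
for filename, spec in definitions.items():
    Path("generated", filename).write_text(render_jenkinsfile(spec))

Whether you commit the generated files or build them on the fly is up to you; either way, the step snippets remain the single source of truth.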
Testing Pipelines: Own Your Service
Testing is crucial.
You are no longer "just a DevOps": now you are the vendor. You are offering Pipelines-as-a-Service, you are taking ownership and accountability, and the client will knock on your door if a pipeline has a syntax error or doesn't do its job. So you need to test them. You also need to document them, and the tests will play a big role in that documentation. Your goal is not only to say "I have a Jenkinsfile for building Java which you can use"; you also have to demonstrate that it works and, if the repository needs to follow some structure, show it with examples.
One way to test these Jenkinsfiles is to run them. Not as real jobs. I mean, yes, real, but, you know, dummy jobs. If you have followed the previous steps, at this point you only have a draft of the pipelines. This was intentional, a way to prepare the ground for a nice session of TDD: Test-Driven Development.
Set up minimal, representative test repositories for each type:
a Python repo with a main.py that prints hello world, some requirements, and a setup.py or pyproject.toml (sketched right after this list)
an Ansible role/playbook with some requirements that just prints some vars
a Terraform project that uses some external providers and creates some outputs
Create 3 Jobs in Jenkins and point them to each of the Jenkinsfiles
Run the jobs
Write the steps
Repeat #3 and #4 until the jobs finish successfully, validate the result
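To give an idea of how small these repositories can be, the Python one could contain little more than this (purely illustrative):

# main.py - the whole "application" of the dummy Python test repository.
# Next to it: a requirements.txt (or pyproject.toml / setup.py) with a couple of
# pinned dependencies, so the install, lint and build steps have real work to do.
def main() -> None:
    print("hello world")

if __name__ == "__main__":
    main()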
Some steps may not run on a successful build. Maybe the notification only notifies on failure? You can create a different branch on the test repository with faulty code, to make the pipeline fail and trigger the notification. Does it notify? I'm sure it does. Great. So... how do you put this in code?
Remember the pipeline structure? Let's extend it a bit:
python_build.jenkinsfile:
  agent_label: aws
  environment_section: basic_build_environment
  steps:
    - checkout
    - lint_py
    - pytest
    - build_py
    - publish
    - cleanup
    - notify
  test_jobs:
    path/to/python_test_repository/main: SUCCESSFUL
    path/to/python_test_repository/fail: FAILURE
Each pipeline can receive multiple test jobs, each of them with an expected result. Now you can put together a script or mechanism to run those jobs and assert that the result is the expected one. If it matches, the test passes!
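A rough sketch of that mechanism, assuming the definitions above live in pipelines.yaml and that some run_job() helper triggers a Jenkins job and returns its final result string; the helper and the result mapping are assumptions for illustration:

import yaml

# The labels used in the YAML above vs. the result strings Jenkins reports.
RESULT_MAP = {"SUCCESSFUL": "SUCCESS", "FAILURE": "FAILURE"}

def check_pipeline(name: str, definition: dict, run_job) -> list[str]:
    # Run every declared test job for this pipeline and collect any mismatches.
    failures = []
    for job_path, expected in definition.get("test_jobs", {}).items():
        actual = run_job(job_path)  # returns Jenkins' final result string
        if actual != RESULT_MAP[expected]:
            failures.append(f"{name} / {job_path}: expected {expected}, got {actual}")
    return failures

The run_job() part - pointing the test job at the right Jenkinsfile, triggering it and waiting for the result - is exactly what the next section automates.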
Testing as Part of CI/CD
Yes, testing is nice, but once live, your pipelines will be used by hundreds of jobs daily. Before touching a live Jenkinsfile, you need to validate it. PR reviews are not enough. How can you include the testing in the CI/CD workflow of the Pipelines-as-a-Service themselves?
With Python, of course! (Or Groovy, or bash, or whatever.) The idea is to have a script that:
List all the Jenkinsfiles that have been modified (diff from main or the previous commit)
For each Jenkinsfile:
    List the assigned test jobs
    For each test job, using the Jenkins REST API:
        Update config.xml to point to the Jenkinsfile from this commit
        Trigger the job
        Wait until it completes (with some timeout)
        Validate the result
You may need a bit of parallel execution, as you don't want to be waiting for hours when you change multiple pipelines. But, still, the idea remains: assign the pipeline from the current branch to the test job and run it.
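A stripped-down sketch of that script, talking to the Jenkins REST API through plain requests. It assumes API-token authentication (so no CSRF crumb handling), dedicated test jobs that nothing else triggers, generated Jenkinsfiles sitting at the repository root with the same names as in pipelines.yaml, and a naive branch substitution in config.xml; every name here is illustrative:

import subprocess
import time

import requests
import yaml

JENKINS = "https://jenkins.example.com"  # hypothetical Jenkins base URL
AUTH = ("ci-bot", "api-token")           # user + API token
EXPECTED = {"SUCCESSFUL": "SUCCESS", "FAILURE": "FAILURE"}

def job_url(job_path: str) -> str:
    # "path/to/job_name" -> ".../job/path/job/to/job/job_name" (folders in Jenkins)
    return JENKINS + "/job/" + "/job/".join(job_path.split("/"))

def run_test_job(job_path: str, branch: str, timeout: int = 1800) -> str:
    base = job_url(job_path)
    session = requests.Session()
    session.auth = AUTH
    # 1. Point the job at the Jenkinsfile from this branch. A real script would
    #    edit the XML properly; a blunt string replace is enough for a sketch.
    cfg = session.get(f"{base}/config.xml").text
    session.post(f"{base}/config.xml", data=cfg.replace("*/main", f"*/{branch}"),
                 headers={"Content-Type": "application/xml"})
    # 2. Trigger the job (use buildWithParameters instead if the job takes parameters).
    next_build = session.get(f"{base}/api/json").json()["nextBuildNumber"]
    session.post(f"{base}/build")
    # 3. Wait for that specific build to finish.
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = session.get(f"{base}/{next_build}/api/json")
        if resp.status_code == 200:
            build = resp.json()
            if not build.get("building") and build.get("result"):
                return build["result"]
        time.sleep(15)
    return "TIMEOUT"

# Only test the Jenkinsfiles touched by this change.
changed = subprocess.check_output(
    ["git", "diff", "--name-only", "origin/main...HEAD"], text=True).splitlines()
branch = subprocess.check_output(
    ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True).strip()
with open("pipelines.yaml") as fh:
    definitions = yaml.safe_load(fh)
for name, spec in definitions.items():
    if name not in changed:
        continue
    for job_path, expected in spec.get("test_jobs", {}).items():
        result = run_test_job(job_path, branch)
        status = "OK" if result == EXPECTED[expected] else "FAIL"
        print(f"{status} {name} / {job_path}: expected {expected}, got {result}")

Add parallelism and a non-zero exit code on any mismatch, and this becomes the validation gate for every change to the pipelines repository.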
Going Live: Becoming Your Own First Customer
Once you have your first three pipelines around, you will be your own first customer. So, point all your jobs to your fancy new Jenkinsfiles. And test the sh*t out of them. There will be steps that didn't fail on the test jobs because they were dummy repositories, and that now fail with real jobs. This is when you find that out. It's time to adapt and extend those test repositories, and to fix the errors in the pipelines. After a while it will all be golden, and you can start thinking about onboarding active teams and scaling.
Scaling Up: More Technologies, More Teams, More Value
Put your Katamari Damacy hat on: absorb more projects that use technologies you already have pipelines for. Extend your solution to new technologies. Sooner rather than later you will receive new requirements and custom modification requests. Handle those thoughtfully, with love and care: some teams will ask just out of laziness, but others out of a lack of understanding, and because they honestly want to use your setup. You may need to create different variations of that Python build Jenkinsfile to allow Poetry or simple scripts, or an alternate version of the Terraform one with a user-input step. To keep things clean on your side, you can start grouping the pipelines by technology; this way you can declare some common steps and make it more DRY, or take the chance to add extra validations. Example:
python:
  - python_setup_build.jenkinsfile:
      agent_label: aws
      environment_section: basic_build_environment
      steps:
        - checkout
        - lint_py
        - pytest
        - build_setup_py
        - publish
        - cleanup
        - notify
      test_jobs:
        path/to/python_test_repository/main: SUCCESSFUL
        path/to/python_test_repository/fail: FAILURE
  - python_script_build.jenkinsfile:
      agent_label: aws
      environment_section: basic_build_environment
      steps:
        - checkout
        - lint_py
        - pytest
        - build_zip
        - publish
        - cleanup
        - notify
      test_jobs:
        path/to/python_test_repository/script: SUCCESSFUL
  - python_poetry_build.jenkinsfile:
      agent_label: aws
      environment_section: basic_build_environment
      steps:
        - checkout
        - lint_py
        - pytest
        - build_poetry_py
        - publish
        - cleanup
        - notify
      test_jobs:
        path/to/python_test_repository/poetry: SUCCESSFUL
Uhm... that is not very DRY. Let's refactor it a bit:
python:
  common:
    agent_label: aws
    environment_section: basic_build_environment
    pre_steps:
      - checkout
      - lint_py
      - pytest
    post_steps:
      - publish
      - cleanup
      - notify
  pipelines:
    - name: python_setup_build.jenkinsfile
      steps:
        - build_setup_py
      test_jobs:
        path/to/python_test_repository/main: SUCCESSFUL
        path/to/python_test_repository/fail: FAILURE
    - name: python_script_build.jenkinsfile
      steps:
        - build_zip
      test_jobs:
        path/to/python_test_repository/script: SUCCESSFUL
    - name: python_poetry_build.jenkinsfile
      steps:
        - build_poetry_py
      test_jobs:
        path/to/python_test_repository/poetry: SUCCESSFUL
Much better. At a quick glance you can tell that the only difference between them is the build step. And we don't need to test the notify-on-failure behaviour again in the new pipelines, as it is already covered by the first one.
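Under the hood, the assembling script only needs one extra step to expand this grouped form back into the flat per-Jenkinsfile structure used earlier. A sketch (the field names follow the YAML above; everything else is illustrative):

def expand_group(group: dict) -> dict:
    # Turn the common/pipelines form into one flat spec per Jenkinsfile.
    common = group["common"]
    expanded = {}
    for pipeline in group["pipelines"]:
        expanded[pipeline["name"]] = {
            "agent_label": common["agent_label"],
            "environment_section": common["environment_section"],
            # The shared pre/post steps wrap the pipeline-specific build step(s).
            "steps": common["pre_steps"] + pipeline["steps"] + common["post_steps"],
            "test_jobs": pipeline.get("test_jobs", {}),
        }
    return expanded

Everything downstream, from Jenkinsfile generation to the test jobs, keeps working on the flat structure it already understands.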
DRY, scalable, customizable, generic, reusable. Doesn't sound too bad, right?
Handling Business Logic and Shared Libraries
The main problem with this setup, if not handled properly, is the business logic. When you try to make things generic and reusable in a Jenkinsfile, you may end up with large Groovy or shell blocks, full of conditionals and decision branches. That is not bad in itself, but it looks bad in a Jenkinsfile. It also makes maintenance more challenging: even if you have a test case for that pipeline, you may not have test cases for each decision branch of the business logic in that step that is used in 20 different Jenkinsfiles.
Luckily, Jenkins has a thing called Shared Libraries. In short, they are Groovy code loaded directly from source control, in which you can define functions and use them from your pipelines.
Think of it like this: you create a Groovy function called publish, and you invoke it in your pipeline simply as publish(). The function itself can access environment variables and parameters (artifact, version, target, etc.), so you can keep all the logic inside that function, keeping your pipeline clean. Write some tests as part of the CI/CD workflow of your shared libraries and you will have all bases covered. Oh, you don't know much about Groovy, and even less about Groovy tests? You don't like the idea of loading a Shared Library directly from source control into a production pipeline? That's fine, and very understandable. I have something for you: Shared Libraries in python.
If you use shared libraries wisely, changes to the pipelines themselves will be minimal. You will find yourself changing just the functions, which have their own set of tests, so all the business logic stays fully covered.
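As a rough illustration of that idea in Python (hypothetical names, and not necessarily how the linked post does it): keep the publish logic in a plain function driven by environment variables, and give it its own unit test.

# publish_logic.py - hypothetical helper holding the publish business logic.
# The pipeline step only exports environment variables and calls this module,
# so the decision branches live in testable code instead of in the Jenkinsfile.
import os
from typing import Mapping

def build_publish_command(env: Mapping[str, str] = os.environ) -> list[str]:
    artifact = env["ARTIFACT"]
    version = env.get("VERSION", "0.0.0-dev")
    target = env.get("TARGET", "snapshots")
    return ["curl", "-fsS", "-T", f"{artifact}-{version}.tar.gz",
            f"https://repo.example.com/{target}/"]

def test_defaults_to_snapshot_upload():
    # pytest picks this up; the logic is exercised without touching Jenkins at all.
    cmd = build_publish_command({"ARTIFACT": "myapp"})
    assert "myapp-0.0.0-dev.tar.gz" in cmd
    assert "https://repo.example.com/snapshots/" in cmd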
Conclusion: The True Value of Pipelines-as-a-Service
Centralizing Jenkins pipelines delivers serious benefits: clarity, consistency, maintainability, and transparency across your DevOps organization. It shows how your company actually builds and deploys, exposes compliance and governance gaps, and makes ongoing improvement possible. Pipelines-as-a-Service doesn't just put some order in the chaos: it empowers teams, clarifies accountability, and allows for true scalability.
It's not perfect yet, and that's fine. You will hit edge cases, weird legacy setups, and unexpected failures. But now you have a foundation. You have a system that can grow, adapt, and scale. And most importantly, you have taken the first step towards turning Jenkins from a headache into a platform.