Mainframe DevOps: Bringing Git, Jenkins, and CI/CD to z/OS
23.01.2024

The term mainframe DevOps might sound like an oxymoron to those accustomed to the agile, cloud-native world of distributed systems. Yet for the thousands of U.S. enterprises running mission-critical workloads on IBM Z and z/OS—processing payments, managing insurance claims, handling core banking transactions, and administering government benefits—modernizing development practices isn't optional. It's essential for survival.
Mainframes aren't going anywhere. They process approximately 87% of all credit card transactions globally and handle roughly $8 trillion in payments annually. U.S. banks, insurers, healthcare providers, and government agencies depend on the reliability, security, and raw processing power that z/OS delivers. The challenge isn't replacing these systems; it's modernizing how teams develop, test, and deploy applications running on them.
Historically, mainframe development followed a path dramatically different from distributed systems. While web and mobile teams embraced Git, Jenkins, continuous integration, and automated testing years ago, mainframe shops often remained tethered to green-screen editors like ISPF, proprietary source control systems, manual change management processes, and release cycles measured in months rather than days.
This gap created friction. Younger developers trained in modern tooling found mainframe development archaic and frustrating. Cross-platform projects that spanned mainframe and cloud struggled with incompatible workflows. Organizations couldn't achieve the agility their business demanded while their core systems languished in legacy development practices.
The good news: this gap is closing rapidly. A new generation of tools and practices brings DevOps principles—continuous integration, automated testing, infrastructure as code, and rapid feedback loops—to z/OS environments. IBM's Z DevOps Acceleration Program provides frameworks and guidance for implementing modern CI/CD pipelines. Open-source initiatives like Zowe offer APIs and CLI tools that make z/OS accessible to standard DevOps toolchains. Major vendors including IBM, Broadcom, and BMC have reimagined their mainframe tooling for the DevOps era.
This comprehensive guide explores the practical reality of mainframe DevOps for U.S. enterprises. We'll examine how to build CI/CD pipelines that include z/OS, which tools enable Git-based version control for COBOL and PL/I code, how Jenkins integrates with mainframe build and deployment processes, and what z/OS automation patterns work in production environments. Whether you're a mainframe development manager planning modernization, a platform engineer implementing pipelines, or an enterprise architect designing hybrid DevOps workflows, you'll find actionable guidance grounded in real capabilities and proven patterns.
Before diving into tools and pipelines, let's establish what mainframe DevOps actually means and how core DevOps principles apply to z/OS environments.
DevOps, at its essence, is about reducing the friction between development and operations through automation, collaboration, and continuous feedback. The fundamental practices—version control, automated builds, continuous integration, automated testing, and streamlined deployment—apply regardless of platform. However, the implementation details differ significantly between distributed systems and mainframes.
The classic mainframe development model evolved in an era when stability and control were paramount, and the cost of failures was measured in millions of dollars of downtime.
This led to processes that prioritized safety over speed:
Source Control: Code resided in Partitioned Data Sets (PDS) on the mainframe itself. Developers edited COBOL, PL/I, or Assembler programs using ISPF panels, making changes directly to production-like environments. Version control, when it existed, came from commercial products like CA Endevor, IBM SCLM, or Serena ChangeMan that operated within the mainframe ecosystem. These systems tracked versions and enforced promotion paths, but they were isolated from the version control systems used by distributed teams.
Build Process: Compilation typically happened on-demand or through batch JCL jobs. Developers compiled their programs individually, often without understanding the full dependency chain. Build automation existed but was brittle, relying on hard-coded JCL and manual intervention when problems arose.
Testing: Testing was often manual, performed by QA teams weeks after development. Automated unit testing existed but wasn't standard practice. Integration testing happened late in the cycle, when multiple components came together in test regions.
Deployment: Changes moved through environments (development, test, QA, production) via formal change control processes involving paperwork, committee approvals, and scheduled deployment windows. Production deployments often occurred during weekend maintenance windows to minimize business impact.
Release Cadence: Many mainframe applications released quarterly, semi-annually, or even annually. Emergency fixes followed expedited processes, but planned enhancements accumulated into large, risky releases.
This model delivered stability. Production incidents were relatively rare, and when they occurred, extensive audit trails helped identify root causes. However, it also created significant problems: slow time-to-market for new features, difficulty onboarding developers accustomed to modern tooling, and inability to respond quickly to business needs.
Contemporary mainframe DevOps transforms this model while preserving the governance and auditability that regulated industries require:
Version Control: Source code for COBOL, PL/I, Assembler, JCL, and copybooks lives in Git repositories, just like Java or Python code. Developers work in branches, submit pull requests, conduct code reviews, and merge changes following the same workflows used across the enterprise. According to Broadcom's guidance on Git and mainframe development, this standardization dramatically improves collaboration between mainframe and distributed teams.
Automated Builds: Every code commit triggers an automated build process. Tools like IBM's Dependency Based Build (DBB) analyze source code, identify dependencies, and compile only what changed, dramatically speeding up builds. Build scripts are version-controlled alongside application code, ensuring reproducibility.
Continuous Integration: A CI server like Jenkins monitors Git repositories, triggering builds and tests automatically. Within minutes of committing code, developers know whether their changes compile cleanly and pass automated tests. This rapid feedback prevents problems from accumulating.
Automated Testing: Unit tests, component tests, and integration tests run automatically as part of CI pipelines. Tools like BMC's Topaz for Total Test enable test automation that was previously impractical. Performance tests validate that changes don't degrade response times.
Continuous Deployment: Successful builds produce versioned artifacts—load modules, configuration files, JCL—stored in artifact repositories. Automated deployment tools promote these artifacts through environments based on policies rather than manual intervention. While production deployment might still require approval gates, the mechanical deployment process is automated and repeatable.
Rapid Release Cycles: Modern mainframe shops increasingly deploy weekly, daily, or even multiple times per day for non-critical changes. This requires robust automation, comprehensive testing, and strong collaboration between development and operations teams.
Organizations pursuing mainframe DevOps typically aim for several interconnected goals:
Accelerated Delivery: Reducing cycle time from code commit to production deployment enables faster response to business needs. Features that previously took months can ship in weeks or days.
Reduced Errors: Automation eliminates manual steps where mistakes occur. Consistent, repeatable processes produce more reliable outcomes than processes dependent on human memory and attention.
Improved Collaboration: When mainframe and distributed teams use common tools—Git, Jenkins, Jira—they collaborate more effectively. Hybrid applications that span platforms become easier to develop and maintain.
Developer Experience: Modern IDEs like Visual Studio Code or Eclipse, integrated with z/OS through plugins, provide mainframe developers with the same productive environments that distributed developers enjoy. This improves satisfaction and makes mainframe development accessible to newer developers.
Auditability and Compliance: Paradoxically, automation often improves compliance. Every build, test, and deployment generates logs. Version control provides complete change history. Automated gates enforce policies consistently. These capabilities satisfy auditors while enabling faster delivery.
To make mainframe DevOps concrete, let's explore a reference architecture for a CI/CD pipeline that includes z/OS applications. While specific implementations vary, successful pipelines share common patterns.
Imagine a pipeline diagram with these stages flowing left to right: Source Control → Build → Test → Package → Deploy → Monitor. Each stage produces artifacts and metrics that feed into subsequent stages. Failure at any stage stops the pipeline, providing rapid feedback.
The foundation is Git-based version control. All mainframe source code—COBOL programs, copybooks, PL/I modules, JCL scripts, and configuration files—resides in Git repositories. IBM's Z DevOps installation guide describes how to structure these repositories, whether as monorepos containing entire applications or as separate repos for different components.
Branching Strategy: Most mainframe teams adopt either Gitflow or trunk-based development, adapted for their specific needs. Gitflow uses long-lived branches for development, release, and production, with feature branches for individual changes. Trunk-based development uses short-lived feature branches that merge quickly into main, with feature flags controlling functionality in production.
For mainframe applications with stringent change control requirements, a modified Gitflow often works well. Development happens in feature branches. Pull requests trigger automated builds and tests before merging to the development branch. Release branches stabilize code before production, with only critical fixes allowed. This provides clear points for governance gates while enabling continuous integration.
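As a concrete illustration, here is a minimal sketch of that modified Gitflow from the command line; the branch names and the COBOL member are purely illustrative:

# Feature work branches off the shared development branch
git checkout develop && git pull
git checkout -b feature/claims-edit-check

# ...edit COBOL source, copybooks, or JCL in a modern IDE...
git add src/cobol/CLMEDIT.cbl
git commit -m "Add edit check for claim adjustment codes"
git push -u origin feature/claims-edit-check
# A pull request from this branch triggers the automated build and tests

# Cutting a release: a stabilization branch that accepts only critical fixes
git checkout develop
git checkout -b release/2024.02
git push -u origin release/2024.02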
Hybrid Git Workflows: Some organizations maintain legacy SCM systems alongside Git during transitions. Code synchronizes between systems, allowing teams to adopt Git gradually. Developers can use modern Git workflows while automated sync keeps legacy systems updated for compliance or operational reasons. While not ideal long-term, this hybrid approach reduces migration risk.
When developers push code to Git, webhooks notify the CI server, triggering a build. The build stage checks out code, analyzes dependencies, compiles programs, and produces executable load modules.
Dependency Based Build (DBB): IBM's DBB framework revolutionizes mainframe builds by analyzing source code to understand dependencies between programs, copybooks, and modules. Rather than recompiling everything, DBB identifies what changed and rebuilds only affected components. This intelligent incremental building dramatically reduces build times from hours to minutes.
zAppBuild: Built on DBB, zAppBuild provides ready-to-use build scripts for common mainframe languages and scenarios. Organizations can customize these scripts for their specific needs, incorporating custom compilation options, link-edit parameters, and packaging requirements.
The build process interacts with z/OS through several mechanisms. Build scripts might submit JCL jobs through z/OSMF REST APIs or Zowe CLI, compile programs using USS-based compilers, or invoke IBM's Enterprise COBOL or PL/I compilers remotely. The resulting load modules, along with metadata about their creation, become versioned artifacts.
Testing represents one of the most transformative aspects of mainframe DevOps. Automated testing, once rare in mainframe environments, becomes standard practice.
Unit Testing: Unit tests verify individual programs or modules in isolation. Tools like BMC's Topaz for Total Test allow developers to create test cases that exercise COBOL or PL/I code with various inputs, asserting expected outputs. These tests run automatically during CI, catching regressions immediately.
Component Testing: Component tests verify that groups of programs work together correctly. For example, a component test might verify that a CICS transaction invokes multiple subroutines and produces the expected database updates.
Integration Testing: Integration tests validate interactions with external systems—databases, message queues, APIs. These tests often require dedicated test environments that mirror production topology.
Performance Testing: Automated performance tests measure response times and resource consumption, failing the pipeline if performance degrades beyond acceptable thresholds. This prevents performance regressions from reaching production.
Testing frameworks generate reports in standard formats like JUnit XML, which CI servers parse to display test results. Failed tests stop the pipeline and notify developers immediately.
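For reference, a minimal JUnit XML report looks like the following; the suite, program, and test names here are illustrative:

<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="PAYROLL.unit" tests="3" failures="1" time="4.2">
  <testcase classname="PAYCALC" name="computes-overtime-at-1.5x" time="1.1"/>
  <testcase classname="PAYCALC" name="rejects-negative-hours" time="0.9"/>
  <testcase classname="PAYEDIT" name="validates-state-tax-code" time="2.2">
    <failure message="Expected return code 0, got 8"/>
  </testcase>
</testsuite>

A Jenkins junit step pointed at files like this marks the build unstable or failed as appropriate and renders per-test trends in the UI.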
Successful builds produce artifacts: compiled load modules, configuration files, JCL procedures, and metadata describing the build. These artifacts are versioned and stored in artifact repositories like JFrog Artifactory or Sonatype Nexus, the same repositories used for Java JARs or Python packages.
Versioning follows semantic versioning conventions (major.minor.patch) or includes Git commit hashes, ensuring traceability from artifacts back to source code. Artifact metadata records which source versions were included, when the build occurred, which tests passed, and who approved the build.
This approach treats mainframe applications like any other software, with clear provenance and reproducible builds. If production issues arise, teams can identify exactly which code version is running and reproduce builds for analysis or rollback.
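As a sketch of what this looks like in practice, the fragment below packages build output and publishes it with traceability metadata. The Artifactory host, repository name, and credential variables are assumptions for illustration; BUILD_NUMBER is the standard Jenkins-provided variable:

# Version the artifact and record the Git commit it was built from
VERSION="1.4.2"
COMMIT=$(git rev-parse --short HEAD)

# Package the load modules and build metadata produced by the build stage
tar -czf "payroll-${VERSION}-${COMMIT}.tar.gz" target/

# Deploy to Artifactory; matrix parameters (;key=value) attach searchable properties
curl -u "${ART_USER}:${ART_TOKEN}" \
  -T "payroll-${VERSION}-${COMMIT}.tar.gz" \
  "https://artifacts.example.com/artifactory/mainframe-releases/payroll/${VERSION}/payroll-${VERSION}-${COMMIT}.tar.gz;git.commit=${COMMIT};build.number=${BUILD_NUMBER}"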
Deployment automation moves artifacts through environments—from development to test to QA to production—without manual file transfers or JCL submissions.
Deployment Tools: IBM's DevOps Deploy (formerly UrbanCode Deploy) provides application release automation specifically designed for heterogeneous environments including mainframes. It orchestrates complex deployments involving multiple components, handling dependencies, rollback procedures, and approval gates.
BMC and Broadcom offer similar deployment automation tools integrated with their broader mainframe DevOps suites. These tools understand z/OS concepts like datasets, load libraries, and CICS regions, automating the mechanics of deployment while respecting governance requirements.
Deployment Patterns: While blue/green or canary deployments common in cloud environments aren't always applicable to mainframes, modern mainframe shops implement analogous patterns. CICS regions can be staged with new versions while old versions continue serving traffic, then traffic shifts to new regions after validation. Batch jobs can run new program versions in parallel with old versions for comparison before cutover.
Approval Gates: Production deployments typically require approval gates where human decision-makers review changes before proceeding. Modern pipelines implement these gates without sacrificing automation. Jenkins Pipeline, for example, supports input steps that pause for approval. Deployment tools provide dashboards where approvers review change details, test results, and risk assessments before authorizing production deployment.
The final pipeline stage involves monitoring applications in production and feeding observability data back into development. z/OS applications generate logs, performance metrics, and transaction traces that CI/CD systems consume.
Monitoring tools collect this data and expose it through dashboards and alerts. When performance degrades or errors increase, alerts notify teams. These metrics also feed back into pipeline stages—performance tests can query production baselines to set appropriate thresholds, and CI systems can track trends in code quality or test coverage.
This closed-loop feedback ensures continuous improvement. Teams observe how changes affect production, learn from problems, and incorporate that learning into development practices.
The mainframe DevOps ecosystem includes tools from major vendors, open-source projects, and specialized providers. Understanding the landscape helps organizations select appropriate tooling for their needs.
Git has become the de facto standard for version control across all software development, and mainframe is no exception. Adopting Git for mainframe code provides several critical benefits:
Standardization: When mainframe and distributed teams both use Git, they share common workflows, tools, and vocabulary. Developers can switch between projects without learning different version control systems. Code reviews use standard pull request workflows regardless of platform.
Tooling Ecosystem: Git's massive ecosystem—IDE integrations, code review tools, CI/CD integrations, analytics platforms—becomes available for mainframe code. Developers can use Visual Studio Code or IntelliJ IDEA with Git integration for mainframe development just as they would for Python or JavaScript.
Branching and Merging: Git's sophisticated branching and merging capabilities enable parallel development at scale. Multiple teams can work on different features simultaneously, merging changes when ready. This level of concurrent development was difficult with traditional mainframe SCM systems.
Implementation Patterns: Organizations typically adopt one of several patterns for Git integration: migrating fully to Git as the system of record; running Git alongside a legacy SCM such as Endevor, ISPW, or ChangeMan with automated synchronization between the two; or adopting Git incrementally, starting with new code and code under active change.
The Zowe project provides crucial infrastructure for Git adoption through its CLI and APIs, enabling Git workflows to interact seamlessly with z/OS datasets and USS file systems.
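As a small illustration of that bridging, the sketch below pulls PDS members into a local Git working tree; the dataset names are illustrative:

# Pull every member of the COBOL source PDS into the working tree
zowe zos-files download all-members "PROD.PAYROLL.COBOL" \
  --directory src/cobol --extension cbl

# Pull the copybooks those programs include
zowe zos-files download all-members "PROD.PAYROLL.COPYLIB" \
  --directory src/copybook --extension cpy

# Snapshot the source into Git
git add src/
git commit -m "Import PAYROLL source from PROD datasets"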
Jenkins, the widely deployed open-source automation server, serves as the CI/CD orchestrator for many mainframe DevOps implementations. Its plugin architecture, pipeline-as-code capabilities, and extensive community make it well-suited for hybrid environments.
Why Jenkins for Mainframe? Organizations already using Jenkins for distributed applications can extend their existing infrastructure to include mainframe workloads. Security, user management, and operational procedures remain consistent across platforms. Developers use familiar Jenkinsfile syntax whether building Node.js applications or COBOL programs.
Integration Mechanisms: Jenkins communicates with z/OS through several channels: the Zowe CLI, which pipeline stages execute to submit jobs, manage datasets, and retrieve results; z/OSMF REST APIs, called directly over HTTPS; and vendor-specific plugins from IBM, Broadcom, and BMC that expose their mainframe tooling as pipeline steps.
Example Jenkins Pipeline: Here's a simplified Jenkinsfile illustrating key stages for a mainframe application:
pipeline {
    agent any

    environment {
        ZOWE_OPT_HOST = 'mainframe.example.com'
        ZOWE_OPT_USER = credentials('zos-credentials')
    }

    stages {
        stage('Checkout') {
            steps {
                git branch: 'main',
                    url: 'https://github.com/example/cobol-app.git'
            }
        }

        stage('Build') {
            steps {
                sh '''
                    # Use DBB/zAppBuild to compile
                    groovyz /path/to/zAppBuild/build.groovy \
                        --application MYAPP \
                        --sourceDir ${WORKSPACE}/src
                '''
            }
        }

        stage('Unit Test') {
            steps {
                sh '''
                    # Run unit tests via Zowe CLI
                    zowe jobs submit data-set "MYAPP.TEST.JCL(UNITTEST)" \
                        --wait-for-output
                '''
                junit 'test-results/*.xml'
            }
        }

        stage('Deploy to Test') {
            steps {
                sh '''
                    # Upload load modules to test environment
                    zowe files upload file-to-data-set \
                        target/MYAPP.LOAD \
                        "TEST.MYAPP.LOAD"
                '''
            }
        }

        stage('Integration Test') {
            steps {
                sh '''
                    # Run integration tests
                    zowe jobs submit data-set "MYAPP.TEST.JCL(INTTEST)" \
                        --wait-for-output
                '''
            }
        }

        stage('Approve for Production') {
            steps {
                input message: 'Deploy to production?',
                      ok: 'Deploy'
            }
        }

        stage('Deploy to Production') {
            steps {
                sh '''
                    # Automated production deployment
                    # (typically using deployment automation tools)
                '''
            }
        }
    }

    post {
        always {
            archiveArtifacts artifacts: 'target/*.load',
                             fingerprint: true
        }
        failure {
            mail to: 'mainframe-devops@example.com',
                 subject: "Pipeline Failed: ${env.JOB_NAME}",
                 body: "Check ${env.BUILD_URL}"
        }
    }
}
This example demonstrates Git checkout, DBB-based builds, automated testing via Zowe CLI, approval gates, and deployment stages. Real implementations would be more complex, with additional error handling, artifact management, and environment-specific configurations.
IBM provides a comprehensive set of tools designed specifically for z/OS DevOps, documented in the IBM Z DevOps Guide.
IBM Developer for z/OS (IDz): An Eclipse-based IDE providing modern development experience for mainframe languages. IDz integrates with Git, supports local syntax checking, and connects to z/OS systems for compilation and debugging. Developers get content assist, refactoring tools, and other IDE features common in distributed development.
Dependency Based Build (DBB): As discussed earlier, DBB analyzes code dependencies and performs intelligent incremental builds. It's the foundation for efficient CI/CD on z/OS, dramatically reducing build times.
zAppBuild: Sample build scripts built on DBB that organizations can customize. These scripts handle common scenarios for COBOL, PL/I, Assembler, and other mainframe languages, providing a starting point for build automation.
IBM DevOps Deploy (UrbanCode Deploy): Application release automation for hybrid environments. UrbanCode orchestrates complex deployments involving mainframe and distributed components, managing dependencies, sequences, and rollback procedures.
z/OS Connect: Exposes mainframe applications as RESTful APIs, enabling integration with modern DevOps pipelines and cloud-native applications. z/OS Connect includes DevOps capabilities for API development and deployment.
These tools integrate with Git and Jenkins, fitting into broader enterprise DevOps toolchains. IBM's Z DevOps Acceleration Program provides detailed guidance, reference architectures, and sample implementations for organizations adopting these tools.
Broadcom's Mainframe DevOps Suite emphasizes "open-first" DevOps, focusing on integration with standard DevOps tools and practices.
Git Integration: Broadcom tools connect with Git repositories, allowing developers to manage mainframe code in Git while leveraging Broadcom's specialized mainframe development capabilities. This hybrid approach provides both modern version control and mainframe-specific tooling.
Jenkins Plugins: Broadcom provides Jenkins plugins that enable pipeline integration with their mainframe development and testing tools. Pipelines can invoke Broadcom's build, test, and deployment capabilities as standard Jenkins stages.
API-First Architecture: Broadcom tools expose REST APIs that Jenkins, GitLab CI, or other CI/CD orchestrators can consume. This open architecture ensures interoperability with diverse toolchains.
Code Analysis and Quality: Broadcom's suite includes static analysis and quality tools that integrate into CI/CD pipelines, failing builds that don't meet quality thresholds. This shifts quality enforcement left, catching issues during development rather than in testing or production.
The suite supports scenarios common in large U.S. enterprises: integrating mainframe applications with cloud services, exposing mainframe functionality through APIs, and modernizing mainframe code while maintaining operational continuity.
BMC (which acquired Compuware's mainframe tools) offers the AMI DevX portfolio, designed specifically for modernizing mainframe development.
ISPW (DevOps Source Code Management): ISPW provides modern SCM capabilities designed for mainframe, including Git integration. According to BMC's documentation, ISPW can synchronize with Git repositories, enabling hybrid workflows where some teams use Git while ISPW maintains governance and control.
Topaz for Total Test: Automated testing framework for mainframe applications. Topaz for Total Test allows developers to create unit and integration tests that run automatically in CI/CD pipelines. Tests can stub external dependencies, enabling isolated testing of COBOL and PL/I programs.
Topaz Workbench: Modern Eclipse-based IDE for mainframe development, providing similar capabilities to IBM's IDz but with tight integration to BMC's broader toolset.
Jenkins Integration: BMC provides detailed examples of Jenkins pipelines that incorporate BMC tools. These pipelines demonstrate build automation, automated testing with Topaz for Total Test, and deployment orchestration.
BMC's approach emphasizes developer experience, recognizing that modernizing mainframe development requires tools that appeal to contemporary developers while respecting the unique characteristics of mainframe platforms.
The Open Mainframe Project, part of the Linux Foundation, fosters open-source innovation for mainframe platforms.
Zowe: The flagship project, Zowe provides an extensible framework for z/OS with CLI tools, REST APIs, and a web-based interface. Zowe's importance to mainframe DevOps cannot be overstated—it serves as the integration layer that connects modern DevOps tools with z/OS systems.
The Zowe CLI enables scriptable automation from any platform. Commands like zowe files list data-set, zowe jobs submit data-set, and zowe files upload file-to-data-set make z/OS resources accessible to Jenkins pipelines, GitLab CI, or any automation platform.
GitHub + Jenkins + Modern IDEs: The Open Mainframe Project blog describes patterns for using GitHub, Jenkins, and IntelliJ IDEA for mainframe development. This fully open-source stack demonstrates that mainframe DevOps doesn't require expensive proprietary tools, though commercial tools may provide additional convenience and support.
Community Innovation: The open-source ecosystem around mainframe drives innovation that benefits all users. Projects like Zowe evolve based on community feedback, and patterns shared by early adopters help others avoid pitfalls.
Automation is the cornerstone of DevOps, and z/OS offers multiple automation interfaces that CI/CD pipelines can leverage.
The z/OS Management Facility (z/OSMF) provides RESTful APIs for managing z/OS systems. These APIs enable programmatic access to system resources from any HTTP client, making them ideal for Jenkins pipelines or custom automation scripts.
Common Operations: submitting JCL and polling job status and output through the restjobs service; creating, reading, and writing datasets and z/OS UNIX files through the restfiles service; and issuing operator console commands.
Example: Submitting a job and waiting for completion using curl and z/OSMF:
# Submit JCL job; capture jobname and jobid from the JSON response
RESPONSE=$(curl -s -X PUT "https://zos.example.com:443/zosmf/restjobs/jobs" \
  -H "Content-Type: text/plain" \
  -H "X-CSRF-ZOSMF-HEADER: true" \
  --user "${ZUSER}:${ZPASS}" \
  --data-binary @myjob.jcl)
JOB_NAME=$(echo "${RESPONSE}" | jq -r '.jobname')
JOB_ID=$(echo "${RESPONSE}" | jq -r '.jobid')

# Poll until the job reaches OUTPUT status
while true; do
  STATUS=$(curl -s "https://zos.example.com:443/zosmf/restjobs/jobs/${JOB_NAME}/${JOB_ID}" \
    -H "X-CSRF-ZOSMF-HEADER: true" \
    --user "${ZUSER}:${ZPASS}" \
    | jq -r '.status')
  if [ "${STATUS}" = "OUTPUT" ]; then
    break
  fi
  sleep 10
done

This level of scriptability enables Jenkins or other CI/CD tools to interact with z/OS programmatically, no different from how they interact with Linux servers or cloud APIs.
The Zowe CLI provides higher-level abstractions over z/OSMF and other z/OS interfaces, simplifying common operations.
Example Automation Sequence:
# Set up environment (credentials stored in Jenkins)
export ZOWE_OPT_HOST=zos.example.com
export ZOWE_OPT_USER=${ZUSER}
export ZOWE_OPT_PASSWORD=${ZPASS}

# Upload newly compiled load module
zowe files upload file-to-data-set \
  "target/PAYROLL.load" \
  "PROD.PAYROLL.LOADLIB(PAYROLL)" \
  --binary

# Back up current JCL
zowe files download data-set \
  "PROD.PAYROLL.PROCLIB(PAYJOB)" \
  --file "backup/PAYJOB.jcl"

# Upload updated JCL
zowe files upload file-to-data-set \
  "updated/PAYJOB.jcl" \
  "PROD.PAYROLL.PROCLIB(PAYJOB)"

# Submit smoke test and capture the job ID from the JSON response
JOBID=$(zowe jobs submit data-set \
  "TEST.PAYROLL.JCL(SMOKE)" \
  --wait-for-output --rfj | jq -r '.data.jobid')

# Check return code; restore the backed-up JCL if the smoke test failed
RC=$(zowe jobs view job-status-by-jobid ${JOBID} --rfj | jq -r '.data.retcode')
if [ "${RC}" != "CC 0000" ]; then
  zowe files upload file-to-data-set \
    "backup/PAYJOB.jcl" \
    "PROD.PAYROLL.PROCLIB(PAYJOB)"
  exit 1
fi

This example demonstrates uploading artifacts, creating backups, running tests, and implementing automated rollback—all scriptable operations that integrate naturally into Jenkins pipelines.
Automated Environment Provisioning: Create and configure test environments programmatically, as sketched after these patterns. Submit JCL to allocate datasets, define CICS regions, initialize databases, and load test data. Tear down environments when testing completes.
Smoke Tests After Deployment: Automatically run validation tests after every deployment, verifying that applications start correctly and basic functionality works. Fail the pipeline and alert teams if smoke tests fail.
Automated Rollback: When production deployments fail validation, automatically restore previous versions. Copy backup load modules, restore JCL, and restart applications. This automated rollback capability gives teams confidence to deploy more frequently.
Configuration as Code: Store JCL, CICS definitions, and other configuration in Git alongside application code. Deploy configuration changes using the same pipeline as code changes, ensuring consistency between environments.
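Here is a sketch of the provisioning and smoke-test patterns using the Zowe CLI; the dataset names and JCL members are illustrative:

# Allocate a fresh test load library for this pipeline run
zowe zos-files create data-set-partitioned "TEST.PAYROLL.LOADLIB" \
  --record-format U --block-size 32760

# Initialize test data by submitting a setup job and waiting for completion
zowe jobs submit data-set "TEST.PAYROLL.JCL(LOADDATA)" --wait-for-output

# ...deploy the build under test, then run the smoke suite...
zowe jobs submit data-set "TEST.PAYROLL.JCL(SMOKE)" --wait-for-output

# Tear the environment down when testing completes
zowe zos-files delete data-set "TEST.PAYROLL.LOADLIB" --for-sure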
Moving from traditional mainframe development practices to modern DevOps isn't a flip-the-switch transformation. It requires phased approaches that balance modernization benefits with operational stability.
Most organizations work through a phased adoption along these lines, beginning with a foundation phase:
Goals: Establish version control and basic automation without disrupting production.
Actions:
Risks and Mitigations:
Goals: Implement automated builds, introduce automated testing, and accelerate feedback loops.
Actions:
Risks and Mitigations:
Goals: Implement continuous delivery to production with appropriate governance.
Actions:
Risks and Mitigations:
Many organizations run hybrid environments where some applications use modern DevOps practices while others remain on legacy processes. This is acceptable and often necessary. The key is preventing the hybrid state from becoming permanent.
Synchronization Patterns: Tools can synchronize between Git and legacy SCM systems, allowing gradual migration. As teams become comfortable with Git, sync can be disabled for those applications.
Tool Choice Flexibility: Don't mandate single tools for all teams. Some might prefer VS Code, others Eclipse-based IDEs, and some might continue using ISPF for certain tasks. Flexibility increases adoption.
Progressive Migration: Prioritize applications for modernization based on change frequency, business criticality, and team readiness. Frequently changed applications benefit most from automation. Teams eager to adopt new tools make better pilots than reluctant teams.
To illustrate how these concepts come together, let's examine realistic scenarios based on patterns commonly seen in U.S. enterprises.
Background: A top-10 U.S. bank with extensive z/OS infrastructure running core banking applications developed over 40 years. The bank faced pressure to accelerate feature delivery for digital banking while maintaining the reliability that regulations and customers demand.
Initial State:
Approach:
The bank adopted a three-year modernization program:
Year 1: Established Git repositories for new and frequently changed applications. Implemented Jenkins with basic pipelines that validated syntax and performed compilation. Provided VS Code with Zowe extensions to interested developers. Ran legacy processes in parallel, ensuring no disruption.
Year 2: Deployed IBM's DBB for intelligent builds. Introduced Topaz for Total Test and began building automated test suites for critical transaction paths. Extended Jenkins pipelines to include automated testing. Implemented artifact management with Artifactory.
Year 3: Rolled out deployment automation with IBM DevOps Deploy. Implemented approval gates for production deployments. Moved to monthly releases for most applications, with weekly releases for less critical code.
Toolchain:
Outcomes:
Lessons Learned:
Background: A regional health insurer needed to accelerate claims processing system changes to compete with larger national players and comply with evolving healthcare regulations.
Initial State:
Approach:
The insurer implemented a "DevOps-first" strategy for all new development while gradually modernizing existing code:
Phase 1: Adopted Git for all new code and all code being modified. Implemented Jenkins pipelines with automated compilation and basic validation. No changes to deployment processes yet.
Phase 2: Built comprehensive automated test suites using Broadcom's testing tools integrated with Jenkins. Achieved 80% code coverage for newly written and modified code. Started requiring tests for all changes.
Phase 3: Implemented automated deployment to non-production environments. Maintained manual deployment to production but with automated preparation of deployment packages. This satisfied auditors while eliminating manual errors.
Toolchain:
Outcomes:
Lessons Learned:
Background: A federal agency running benefit administration systems on mainframes needed to modernize to improve citizen services while working within government budget and security constraints.
Initial State:
Approach:
The agency chose an open-source-first strategy:
Foundation: Implemented Git (GitLab Community Edition in private cloud) and Jenkins. Used Zowe CLI extensively for z/OS automation, avoiding expensive proprietary connectors.
Build Automation: Developed custom build scripts using USS and IBM compilers directly rather than purchasing DBB initially. While less sophisticated, this approach worked within budget constraints.
Testing: Created custom testing frameworks using available tools and open-source components. Testing wasn't as comprehensive as commercial tools might enable, but provided significant improvement over manual testing.
Deployment: Built deployment automation using shell scripts, Zowe CLI, and z/OSMF APIs. Implemented extensive logging and validation for audit requirements.
Toolchain:
Outcomes:
Lessons Learned:
Treat Mainframe as Just Another Platform: Apply the same DevOps principles used for distributed systems. While implementation details differ, the concepts of version control, automated builds, continuous testing, and streamlined deployment apply equally.
Standardize on Git: Align mainframe and distributed teams on common version control. Git's ubiquity makes it the obvious choice. This alignment improves collaboration and makes mainframe development accessible to broader talent pools.
Automate Testing Comprehensively: Invest heavily in test automation. Automated tests provide the confidence to release frequently. Start with unit tests for individual programs, then build integration and system tests. Aim for high coverage of critical paths.
Implement Pipeline Observability: Ensure pipelines generate comprehensive logs and metrics. Track build times, test pass rates, deployment frequency, and failure rates. Use these metrics to continuously improve processes.
Invest in Training and Culture: Technical tools alone don't create DevOps. Invest in cross-training mainframe specialists in modern tools and DevOps engineers in mainframe concepts. Foster collaboration between teams. Address cultural resistance through education and demonstrated success.
Start Small, Expand Progressively: Don't attempt to transform everything simultaneously. Start with pilot projects that demonstrate value. Learn from early implementations. Expand to additional applications as confidence and capability grow.
Maintain Strong Governance: DevOps doesn't mean abandoning governance. Implement appropriate approval gates for production changes. Maintain audit trails. Work with compliance teams to ensure automated processes meet regulatory requirements.
Document Processes and Patterns: Create runbooks, architecture documentation, and troubleshooting guides. This knowledge capture is essential for onboarding and reduces dependency on individual experts.
Underestimating Cultural Challenges: Technical implementation is often easier than organizational change. Expect resistance, skepticism, and anxiety about new approaches. Address these through clear communication, executive sponsorship, and demonstrated quick wins.
Overcomplicating Early Pipelines: Start simple. A pipeline that checks out code, compiles it, and runs a few tests provides value. Don't try to build the perfect pipeline initially. Evolve pipelines as understanding deepens.
Neglecting Compliance Requirements: Engage audit, security, and compliance teams early. Show how automation improves rather than undermines control. Demonstrate that Git provides better change history than PDS comments and that pipeline logs exceed manual approval forms for auditability.
Assuming Tools Solve Everything: Tools enable DevOps but don't create it. DevOps is fundamentally about people, processes, and culture. Tools support these elements but can't substitute for them.
Ignoring Performance Implications: Monitor pipeline performance. Builds shouldn't take hours. Tests should provide fast feedback. If pipelines are slow, developers will work around them. Optimize for speed without sacrificing quality.
Failing to Plan for Failure: Implement and test rollback procedures. Ensure teams know how to quickly recover from failed deployments. Practice failure scenarios during low-stakes periods.
Inconsistent Tool Adoption: Don't mandate specific tools without providing training and support. Ensure tools are genuinely better than what they replace, not just newer.
For a U.S. enterprise beginning mainframe DevOps and z/OS CI/CD implementation:
Foundation (Months 1-3):
Basic Automation (Months 4-6):
Expansion (Months 7-12):
Continuous Delivery (Months 13-18):
Governance Throughout:
Mainframe DevOps with modern CI/CD pipelines is no longer aspirational—it's achievable and increasingly essential. U.S. enterprises running mission-critical workloads on IBM Z and z/OS can adopt the same DevOps practices that have transformed distributed systems development, gaining faster delivery, higher quality, and improved developer satisfaction while maintaining the reliability and governance that regulated industries demand.
The technological foundation exists. Git provides version control that works as well for COBOL as for Python. Jenkins orchestrates pipelines that seamlessly include z/OS alongside cloud services. Zowe bridges modern DevOps tools and traditional mainframe systems. IBM, Broadcom, and BMC offer comprehensive toolchains designed for enterprise mainframe DevOps. Open-source alternatives provide options for budget-constrained organizations.
The challenge is organizational rather than technical. Successfully implementing mainframe DevOps requires executive commitment, cultural change, cross-functional collaboration, and persistent effort over months or years. Organizations that approach modernization as a journey rather than a destination—starting with pilot projects, learning from experience, and expanding progressively—consistently achieve their goals.
Looking ahead, several trends will shape mainframe DevOps evolution. AI-assisted testing and code analysis will help teams understand complex legacy applications and accelerate modernization. Increasing convergence between cloud-native and mainframe DevOps practices will further break down silos. Open-source innovation through projects like Zowe will continue democratizing access to mainframe development tools.
The organizations that invest in mainframe DevOps today position themselves for long-term success. They can retain their mainframe investments—proven systems processing billions of transactions reliably—while gaining the agility and speed that business demands. They can attract and retain talented developers by offering modern tooling and practices. They can compete more effectively by delivering features and improvements faster than competitors stuck in quarterly release cycles.
For mainframe development managers, platform engineers, DevOps leads, and enterprise architects reading this: the tools, practices, and patterns exist to transform your mainframe development. The path is clear, validated by numerous successful implementations across banking, insurance, healthcare, retail, and government sectors. The question isn't whether mainframe DevOps is possible, but when your organization will begin the journey.
Start small. Choose a pilot application. Implement Git and basic CI/CD. Build from there. The investment will pay dividends in faster delivery, higher quality, improved team morale, and competitive advantage that grows stronger with each iteration.
Q: Can we really use Git for mainframe source code like COBOL and JCL?
A: Yes, absolutely. Git works well with mainframe source code. COBOL, PL/I, Assembler, JCL, and copybooks are text files that Git handles like any other source code. Modern tools like Zowe provide extensions that help synchronize between Git repositories and z/OS datasets. Many large enterprises now manage all their mainframe code in Git, benefiting from standard branching, merging, and code review workflows.
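One practical detail: z/OS traditionally stores source in EBCDIC, while Git tooling expects UTF-8. The Zowe CLI converts text on upload and download, and Git itself can manage encodings per file type via .gitattributes. A sketch, assuming the IBM-1047 code page (which varies by shop) and a repository cloned on z/OS UNIX:

# Repo content is stored as UTF-8; working-tree files are converted on checkout
*.cbl text working-tree-encoding=IBM-1047
*.cpy text working-tree-encoding=IBM-1047
*.jcl text working-tree-encoding=IBM-1047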
Q: How does Jenkins integrate with z/OS systems for CI/CD pipelines?
A: Jenkins integrates with z/OS through several mechanisms. The Zowe CLI provides command-line tools that Jenkins pipeline stages can execute to interact with z/OS—submitting jobs, managing datasets, and retrieving results. The z/OSMF REST APIs enable programmatic access to z/OS resources from Jenkins. Additionally, vendor-specific plugins from IBM, Broadcom, and BMC provide pre-built integrations. Jenkins treats z/OS like any other build target, orchestrating mainframe builds alongside distributed application builds.
Q: What about compliance and audit requirements? Doesn't DevOps conflict with mainframe governance?
A: DevOps actually improves compliance when implemented properly. Git provides complete change history with who made what changes when and why. CI/CD pipelines generate comprehensive logs of every build, test, and deployment. Approval gates in pipelines enforce governance while automating mechanical steps. Automated processes are more consistent and auditable than manual processes. Many regulated organizations find that compliance teams become advocates once they see the improved auditability that modern DevOps provides.
Q: How long does it take to implement mainframe DevOps and CI/CD for z/OS?
A: Timeline varies based on organization size, current maturity, and scope. A pilot implementation with basic Git and Jenkins pipelines for a single application typically takes 2-4 months. Expanding to comprehensive test automation and deployment automation across multiple applications usually requires 12-18 months. Full organizational transformation with mainstream adoption across the mainframe portfolio often takes 2-3 years. The key is starting small, learning, and expanding progressively rather than attempting wholesale transformation simultaneously.
Q: What skills do our mainframe developers need to learn for DevOps?
A: Mainframe developers need familiarity with Git for version control (branches, commits, pull requests), basic understanding of CI/CD concepts and Jenkins, comfort with modern IDEs like VS Code or Eclipse, and understanding of automated testing principles. These are learnable skills, especially with good training and support. Conversely, DevOps engineers need to learn mainframe concepts—z/OS architecture, COBOL basics, JCL, and deployment considerations. Cross-training benefits both groups, creating hybrid expertise that's valuable for successful mainframe DevOps.
Q: Can we implement mainframe DevOps without expensive commercial tools?
A: Yes. Open-source tools can enable mainframe DevOps, though commercial tools may provide additional convenience and support. Git, Jenkins, and Zowe are open source and provide core capabilities for version control, CI/CD orchestration, and z/OS automation. Organizations can build custom build scripts and testing frameworks. However, commercial tools like IBM's DBB, BMC's Topaz for Total Test, or Broadcom's DevOps suite provide specialized mainframe capabilities that accelerate implementation and reduce custom development. The choice depends on budget, timeline, and internal capability to build and maintain custom solutions.
Q: How do we handle the cultural resistance from experienced mainframe developers?
A: Cultural change is often the biggest challenge. Successful approaches include: involving skeptics early in planning and pilot projects, demonstrating benefits through small wins rather than mandating wholesale change, respecting existing expertise while showing how new tools augment rather than replace it, providing comprehensive training and support, and ensuring executive sponsorship that makes modernization a priority rather than optional. Many experienced mainframe developers embrace modern tools once they see how Git, automated testing, and CI/CD make their work easier and more satisfying. Start with willing adopters and let success speak for itself.