Emulation and Virtualization: Running Mainframes on x86 & Cloud Platforms

Integration & Modernization

By Hannah Reed

Introduction: Why Emulate or Virtualize Mainframes?

Despite decades of predictions about their demise, IBM mainframes remain the backbone of enterprise computing across banking, insurance, healthcare, retail, and government sectors. These systems process an estimated 30 billion business transactions daily, handling everything from ATM withdrawals to airline reservations to Social Security benefits. For organizations dependent on IBM Z systems running z/OS, the challenge isn't whether to maintain these critical workloads—it's how to make mainframe development, testing, and modernization more accessible and cost-efficient.

This is where mainframe emulator technology comes into play. Rather than requiring expensive dedicated IBM Z hardware for every development task or test scenario, emulation and virtualization solutions allow organizations to run mainframe workloads on commodity x86 hardware or in public cloud environments. This fundamental shift opens new possibilities for DevOps practices, accelerates application development cycles, and provides cost-effective pathways for training the next generation of mainframe engineers.

However, the landscape of mainframe emulation options can be confusing. Terms like "emulation," "virtualization," "rehosting," and "containerization" are often used interchangeably but represent distinctly different technical approaches with very different implications for licensing, performance, and production readiness.

Defining the Key Concepts

Mainframe emulation refers to software that implements the IBM mainframe instruction set (z/Architecture and its predecessors) on different hardware—typically x86-64 processors. Tools like Hercules, IBM zPDT, and IBM Z Development and Test Environment (ZD&T) fall into this category. These emulators translate mainframe CPU instructions into x86 equivalents, emulate mainframe I/O devices, and create an environment where z/OS and other mainframe operating systems can run without modification.

Virtualization, in the mainframe context, traditionally refers to partitioning actual IBM Z hardware using technologies like Logical Partitions (LPARs) and z/VM. This is different from x86-based emulation—it's running multiple operating system instances on the same physical mainframe, with each partition having near-native performance. When mainframe professionals talk about "virtualization," they often mean these native IBM Z technologies rather than emulation on different hardware.

Rehosting or binary translation solutions, exemplified by platforms like LzLabs, take a different approach. Rather than emulating the entire mainframe environment, they provide runtime environments that can execute mainframe binaries (COBOL programs, JCL scripts) on x86 Linux systems without the need for a full z/OS stack. This approach targets production workload migration rather than just development and testing.

Setting Realistic Expectations

This article focuses primarily on development, testing, training, and selective modernization use cases for mainframe emulation. While some organizations have successfully used emulation-based rehosting for production workloads, this remains a specialized path with significant architectural and risk considerations. The primary value proposition for most enterprises lies in using emulation to:

  • Create affordable development environments for application programmers
  • Build automated testing pipelines that don't consume production MIPS
  • Train new mainframe engineers without requiring access to production systems
  • Experiment with modernization approaches before committing to major transformations
  • Conduct proof-of-concept work for cloud migration strategies

Understanding what emulation can and cannot do—and the licensing, performance, and support implications of different tools—is essential for making informed decisions about mainframe development and modernization strategies.

Architectural Background: IBM Z vs x86 & Cloud

To understand mainframe emulation, we need to appreciate the fundamental differences between IBM Z architecture and the x86-64 systems that power most modern servers and cloud infrastructure.

Instruction Set Architecture Differences

IBM Z systems use a proprietary instruction set architecture called z/Architecture (and its predecessors System/370 and ESA/390). This CISC (Complex Instruction Set Computer) architecture evolved over decades specifically for high-throughput transaction processing and includes specialized instructions for decimal arithmetic, cryptographic operations, and data compression that have no direct x86 equivalents.

In contrast, x86-64 processors use a fundamentally different instruction set, originally developed by Intel and extended to 64 bits by AMD. When you run a mainframe emulator on x86 hardware, the software must translate every z/Architecture instruction into a sequence of x86 instructions that produces the same result. This translation process—whether done through interpretation, dynamic recompilation, or binary translation—inherently introduces performance overhead.
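
The interpretive approach can be illustrated with a toy sketch. This is deliberately simplified—real emulators decode actual z/Architecture opcodes, and zPDT/ZD&T use dynamic binary translation rather than pure interpretation—and the three-instruction machine here is invented purely for illustration:

```python
# Toy sketch of interpretive emulation: each "guest" instruction is decoded
# and dispatched to a host-language handler. The per-instruction decode and
# dispatch is where interpretation overhead comes from.

def run(program, regs):
    """Interpret a tiny, made-up register machine one instruction at a time."""
    handlers = {
        "LOAD": lambda r, v: regs.__setitem__(r, v),               # load immediate
        "ADD":  lambda r, s: regs.__setitem__(r, regs[r] + regs[s]),
        "SUB":  lambda r, s: regs.__setitem__(r, regs[r] - regs[s]),
    }
    for op, *args in program:
        handlers[op](*args)    # decode + dispatch on every instruction
    return regs

regs = run([("LOAD", "R1", 7), ("LOAD", "R2", 5), ("ADD", "R1", "R2")],
           {"R1": 0, "R2": 0})
print(regs["R1"])  # 12
```

Dynamic recompilation avoids repeating this dispatch cost by translating hot guest code sequences into native host code once, then reusing the translation.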

I/O and Device Architecture

Mainframe I/O architecture differs significantly from x86 systems. IBM Z uses channel subsystems with specialized I/O processors that offload data movement from the main CPU. Devices are addressed using a channel/control unit/device numbering scheme (like 0.0.0100 for a DASD volume). Emulators must simulate these channel subsystems and map mainframe device types to x86 equivalents or files.

Common emulated mainframe devices include:
  • 3390 DASD (Direct Access Storage Device): emulated as large files on the x86 filesystem
  • 3270 terminals: emulated through telnet or terminal emulation software
  • OSA (Open Systems Adapter): emulated network interfaces for TCP/IP connectivity
  • 3420/3480/3490 tape drives: emulated using files or actual tape devices
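
Because emulated 3390 volumes are ordinary host files, their approximate size follows directly from the device geometry (15 tracks per cylinder, 56,664 usable bytes per track). A back-of-the-envelope calculation, ignoring emulator container-format overhead and compression:

```python
# Rough size estimate for emulated 3390 DASD volumes stored as host files.
# Actual on-disk size varies with the emulator's container format and whether
# compressed formats (e.g. Hercules CCKD) are used.

TRACKS_PER_CYL = 15
BYTES_PER_TRACK = 56_664

MODELS = {"3390-3": 3_339, "3390-9": 10_017}   # cylinders per model

for model, cyls in MODELS.items():
    size_gb = cyls * TRACKS_PER_CYL * BYTES_PER_TRACK / 1024**3
    print(f"{model}: ~{size_gb:.1f} GB uncompressed")
```

A full-size 3390-9 volume therefore costs roughly 8 GB of host disk—cheap on modern SSDs, but worth budgeting for when a test system needs dozens of volumes.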

Performance Characteristics and Overhead

The performance gap between emulated mainframes and real IBM Z hardware varies dramatically based on workload characteristics. Single-threaded integer workloads might run at 10-30% of native speed on a high-end x86 processor, while I/O-intensive workloads could see better relative performance due to modern SSD storage. However, workloads that rely heavily on z/Architecture's specialized instructions (decimal arithmetic, compression, encryption) will typically show the largest performance degradation.

According to IBM documentation on ZD&T, performance expectations should be set appropriately: these environments are designed for functional testing and development, not for performance benchmarking or capacity planning. A typical ZD&T instance might provide the equivalent of 100-200 MIPS, compared to thousands or tens of thousands of MIPS in production IBM Z systems.

Licensing Constraints and Advantages

One of the most important architectural considerations isn't technical—it's legal. IBM operating systems like z/OS, z/VM, z/VSE, and their associated middleware (CICS, DB2, IMS) require IBM licenses regardless of the hardware they run on. Running current versions of z/OS on unauthorized emulators violates IBM's licensing terms and intellectual property rights.

The Hercules emulator, while technically capable of running z/OS, is only legally usable with public domain IBM operating systems (like MVS 3.8j from 1981) or with Linux distributions compiled for z/Architecture. For any commercial z/OS development work, organizations must use IBM-sanctioned tools like zPDT or ZD&T, which include appropriate licensing provisions.

Despite these constraints, the advantages of x86-based mainframe environments are compelling:
  • Cost reduction: Development and test environments that might cost $50,000-200,000 annually in mainframe MIPS can run on x86 hardware or cloud instances costing $1,000-10,000 per month
  • Elasticity: Cloud-based emulated environments can be provisioned on-demand and destroyed when not needed
  • Developer productivity: Engineers can have personal mainframe instances on laptops or workstations rather than sharing congested development LPARs
  • Experimentation freedom: Organizations can test modernization approaches, new software versions, or architectural changes without impacting production systems

Overview of Mainframe Emulation & Virtualization Options

The mainframe emulation landscape includes several distinct tools and platforms, each designed for different use cases and operating under different licensing models.

Open Source: The Hercules Emulator

Hercules is an open-source mainframe emulator that runs on Linux, Windows, macOS, and other operating systems. Originally created by Roger Bowler in 1999, Hercules can emulate System/370, ESA/390, and z/Architecture mainframe systems. The software is distributed under the Q Public License and actively maintained by a community of developers and mainframe enthusiasts.

Common legitimate uses for Hercules include:

  • Running public-domain IBM operating systems like MVS 3.8j, VM/370, or DOS/VS for educational purposes
  • Developing and testing Linux on IBM Z (s390x) applications
  • Preserving historical computing systems and software
  • Teaching mainframe concepts in academic settings without requiring IBM hardware

What Hercules is NOT:

Hercules is not a legally sanctioned environment for running current, licensed versions of z/OS, z/VM, or z/VSE without explicit IBM authorization. While technically capable of running these systems, doing so violates IBM's software licensing terms. Most commercial organizations should use IBM-supported tools (zPDT or ZD&T) for any z/OS development work.

The Hercules community maintains extensive documentation, mailing lists, and support forums. For hobbyists, students, and organizations working with public-domain mainframe software or Linux on Z, Hercules provides valuable free access to mainframe technology.

IBM zPDT (System z Personal Development Tool)

IBM zPDT is an officially sanctioned product that allows developers to run small-scale IBM Z environments on x86 Linux systems. As detailed in IBM Redbooks documentation, zPDT provides a "laptop mainframe" capability where individual developers or small teams can run z/OS and associated middleware for application development and testing.

Key characteristics of zPDT:

  • Runs as a Linux application on x86-64 hardware
  • Emulates IBM Z processors, I/O channels, and devices
  • Supports z/OS, z/VM, z/VSE, and Linux on Z
  • Includes tools for managing emulated devices (DASD, tape, network)
  • Requires IBM licensing for the operating systems and middleware running within it
  • Typically used by ISVs, IBM business partners, and enterprises under specific agreements

According to recent industry reporting, IBM has been transitioning customers from PC-based zPDT toward ZD&T and cloud-hosted alternatives. The IBM Redbooks ISV zPDT Guide provides comprehensive technical documentation for ISVs and developers using zPDT for mainframe software development.

IBM Z Development and Test Environment (ZD&T)

IBM Z Development and Test Environment represents IBM's current strategic direction for x86-based mainframe emulation. ZD&T is a commercial product specifically designed to enable development, testing, demonstration, and training activities on commodity x86 hardware or in cloud environments.

ZD&T is available in two primary editions:

  • Personal Edition: Designed for individual developers or small teams, suitable for application development and unit testing
  • Enterprise Edition: Provides web-based management, multi-user access, snapshot capabilities, and integration features for larger development organizations

As documented in IBM's official ZD&T overview, the platform enables organizations to:

  • Create reproducible z/OS test environments from production system images
  • Run full z/OS middleware stacks (CICS, DB2, IMS, MQ, etc.) for integration testing
  • Support multiple z/OS releases simultaneously for compatibility testing
  • Integrate mainframe testing into modern CI/CD pipelines
  • Provide hands-on training environments without impacting production systems

The Enterprise Edition documentation highlights advanced capabilities including web UI management, REST APIs for automation, and support for containerized deployments. These features make ZD&T particularly attractive for organizations pursuing DevOps practices in mainframe environments.

Commercial Rehosting and Emulation Platforms

Beyond development-focused emulators, several vendors offer platforms designed for production workload migration. These solutions typically combine emulation, binary translation, or containerization to enable mainframe applications to run on x86 infrastructure.

LzLabs Software Defined Mainframe (SDM) provides a managed container environment that can execute mainframe binaries without requiring z/OS. As explained in their cloud-native applications guide, this approach maintains binary compatibility with mainframe applications while enabling deployment on Linux servers or cloud platforms.

Other vendors in this space include Micro Focus (Enterprise Server), Rocket Software, and various system integrators offering transformation services. These platforms target production migration scenarios rather than just development and testing, with correspondingly different risk profiles, support models, and licensing arrangements.

Comparison of Mainframe Emulation Tools

Hercules
  • Licensing model: Open source (QPL)
  • Primary use cases: Hobbyist, education, Linux on Z development
  • Legally supported IBM OSes: Public domain OSes (MVS 3.8j, VM/370), Linux on Z
  • Cloud deployment: Yes (community support)

IBM zPDT
  • Licensing model: Commercial (IBM license required)
  • Primary use cases: ISV development, small-scale testing
  • Legally supported IBM OSes: z/OS, z/VM, z/VSE (with proper licensing)
  • Cloud deployment: Limited (primarily on-premises)

IBM ZD&T
  • Licensing model: Commercial (IBM license required)
  • Primary use cases: Enterprise dev/test, CI/CD integration, training
  • Legally supported IBM OSes: z/OS, z/VM, z/VSE, Linux on Z
  • Cloud deployment: Yes (AWS, Azure, on-premises)

LzLabs SDM
  • Licensing model: Commercial (subscription)
  • Primary use cases: Production rehosting, modernization
  • Legally supported IBM OSes: Runs mainframe binaries without z/OS
  • Cloud deployment: Yes (designed for cloud)

The choice among these tools depends on your organization's specific needs, licensing situation, budget, and whether the goal is development/testing or production workload migration.

The Hercules Emulator in Practice

For organizations and individuals working with public-domain mainframe software or Linux on Z development, Hercules provides a capable, cost-free emulation platform.

History and Community

Hercules began as a personal project by Roger Bowler to preserve mainframe computing history. Over 25 years, it has evolved into a sophisticated emulator supporting System/370, ESA/390, and z/Architecture instruction sets. The Hercules project maintains active development with regular updates, bug fixes, and enhancements contributed by a global community of developers.

The emulator runs on multiple host platforms including Linux (various distributions), Windows, macOS, FreeBSD, and Solaris. This cross-platform capability means developers can work with mainframe environments on whatever operating system they prefer.

Legal Use Cases and Boundaries

Understanding what you can legally do with Hercules is crucial. The emulator itself is entirely legal open-source software. The licensing constraints apply to the operating systems and software you run within Hercules:

Legitimate Hercules use cases:

  • Public domain operating systems: MVS 3.8j (from 1981), VM/370, DOS/VS, and other IBM systems released into the public domain can be freely used
  • Linux on IBM Z: Distributions like Debian s390x, Ubuntu for s390x, or Fedora for s390x are freely available and can be legally run in Hercules
  • Educational purposes: Teaching mainframe concepts, assembly language programming, or operating system internals using public-domain software
  • Historical preservation: Running vintage mainframe applications and maintaining computing history

Where Hercules crosses legal boundaries:

  • Running current, licensed versions of z/OS without IBM authorization
  • Using commercial mainframe software (CICS, DB2, IMS) without proper licenses
  • Production use of business applications in Hercules without vendor approval

For any commercial mainframe development work with current z/OS releases, organizations should use IBM-sanctioned tools like ZD&T that include appropriate licensing provisions.

Setting Up Hercules: A High-Level Walkthrough

While a complete Hercules setup tutorial is beyond this article's scope, here's an overview of the process for running Linux on Z:

Step 1: Install Hercules

On a Debian/Ubuntu Linux system:

sudo apt-get update
sudo apt-get install hercules

Or compile from source for the latest features:

git clone https://github.com/hercules-390/hyperion.git
cd hyperion
./configure
make
sudo make install

Step 2: Create a Configuration File

Hercules uses a configuration file to define the emulated mainframe. A minimal config might include:

CPUSERIAL 001234
CPUMODEL 3090
# Modern Linux on Z distributions require z/Architecture mode
ARCHLVL z/Arch
MAINSIZE 1024
NUMCPU 2

# Console
0009 3215-C /
000C 3505 ./input/reader.txt
000D 3525 ./output/punch.txt
000E 1403 ./output/printer.txt

# DASD
0120 3390 ./dasd/linux01.120
0121 3390 ./dasd/linux02.121

# Network
0A00 CTCI 192.168.1.100 /dev/net/tun

Step 3: Prepare Boot Media

Download a Linux on Z distribution (like Debian s390x) and create bootable media. The Linux kernel and initial ramdisk are loaded by Hercules.

Step 4: Start Hercules and Install

hercules -f hercules.conf

Connect to the console (typically via telnet to localhost:3270) and proceed with Linux installation following distribution-specific instructions.
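
Before pointing a TN3270 client at the console, it can help to confirm the port is actually listening. A small sketch—port 3270 matches the example above; adjust to whatever console port your configuration defines:

```python
# Check whether the emulator's console port is accepting connections before
# launching a TN3270 client such as c3270 or x3270.
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if port_open("localhost", 3270):
    print("console ready - connect with a TN3270 client")
else:
    print("console not listening yet - check the Hercules log")
```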

Community Resources

The Hercules community maintains extensive resources:

  • Official documentation: Installation guides, configuration references, and FAQs at hercules-390.org
  • Mailing lists: Active discussion forums for troubleshooting and best practices
  • GitHub repositories: Source code, issue tracking, and contribution guidelines
  • Third-party tutorials: Numerous blog posts and videos demonstrating specific setups

For organizations or individuals interested in mainframe technology without the cost of IBM hardware, Hercules provides valuable hands-on learning opportunities within its legal boundaries.

zPDT: Personal IBM Z Environments for Developers

IBM zPDT (System z Personal Development Tool) represents IBM's approach to providing individual developers with personal mainframe environments for application development and testing.

Architecture and Capabilities

As detailed in the comprehensive IBM zPDT Redbook, zPDT is a Linux application that creates an emulated IBM Z environment on x86-64 hardware. The architecture includes several key components:

Processor Emulation: zPDT emulates IBM Z processors using dynamic binary translation, converting z/Architecture instructions to x86 equivalents at runtime. Performance depends heavily on the host processor—modern Intel Xeon or AMD EPYC processors with high single-thread performance deliver the best results.

Device Emulation: zPDT simulates the full complement of mainframe devices:

  • 3390 DASD: Implemented as files on the Linux filesystem, supporting various volume sizes (3390-3, 3390-9, etc.)
  • 3270 terminals: Accessible via TN3270 protocol over TCP/IP
  • OSA network adapters: Providing TCP/IP connectivity to and from the z/OS guest
  • Tape devices: Emulated using files or connected to physical tape hardware

Management Tools: zPDT includes command-line and GUI utilities for managing the emulated environment, starting/stopping systems, attaching devices, and monitoring resource usage.

Typical Usage Scenarios

zPDT primarily serves Independent Software Vendors (ISVs) and IBM business partners developing mainframe software products:

ISV Application Development: Software vendors building products for z/OS (database tools, monitoring solutions, security products) use zPDT to develop and test across multiple z/OS releases without maintaining physical mainframe hardware.

Quality Assurance Testing: QA teams can reproduce customer environments, test compatibility across z/OS versions, and validate fixes in controlled settings before releasing updates.

Customer Proof-of-Concepts: ISVs can quickly spin up zPDT environments to demonstrate products to prospective customers without requiring access to customer mainframes.

Internal Enterprise Development: Some large enterprises use zPDT under IBM licensing agreements to provide developers with personal development instances, reducing contention for shared development LPARs.

Licensing and Constraints

zPDT itself requires an IBM license, and the z/OS and middleware running within zPDT also require appropriate IBM licenses. This is not a loophole for running unlicensed z/OS—organizations must work with IBM to ensure proper licensing for all software used in zPDT environments.

The "personal" in zPDT refers to scale rather than licensing. These environments are limited in capacity—typically providing the equivalent of 100-200 MIPS—and designed for individual developer use rather than team-shared infrastructure or production workloads.

Evolution and Current Status

Recent IBM strategy has emphasized ZD&T and cloud-hosted alternatives over PC-based zPDT. While zPDT remains available and supported, new customers are typically directed toward ZD&T Enterprise Edition or IBM's Wazi as a Service offerings. This shift reflects the broader industry trend toward cloud-based development environments and team-accessible infrastructure rather than developer-local instances.

Organizations currently using zPDT should evaluate whether ZD&T or cloud-hosted alternatives better meet their long-term needs, particularly as teams embrace DevOps practices requiring integration with CI/CD pipelines and collaborative development workflows.

IBM Z Development and Test Environment (ZD&T) on x86 and Cloud

IBM ZD&T represents the current state-of-the-art for x86-based mainframe emulation focused on enterprise development and testing use cases.

Core Architecture and Capabilities

ZD&T provides a comprehensive platform for running z/OS and associated middleware on x86 hardware for non-production purposes. The architecture includes:

Emulation Engine: Like zPDT, ZD&T uses binary translation to execute z/Architecture instructions on x86 processors. However, ZD&T includes optimizations and features specifically designed for team environments rather than individual developers.

Full Stack Support: ZD&T can run complete z/OS environments including:

  • CICS Transaction Server for transaction processing
  • DB2 for z/OS database management
  • IMS for hierarchical databases and transaction processing
  • IBM MQ for messaging
  • z/OS Connect for API enablement
  • Virtually any z/OS subsystem or ISV product

Image Management: One of ZD&T's key capabilities is the ability to create development and test images from production system snapshots. Organizations can:

  1. Take a snapshot of a production z/OS system
  2. Mask or sanitize sensitive data
  3. Import the image into ZD&T
  4. Create multiple clones for different testing purposes
  5. Reset to known-good states as needed

Personal vs Enterprise Editions

ZD&T Personal Edition targets individual developers or very small teams. It provides basic emulation capabilities with limited administrative features, suitable for application development and unit testing.

ZD&T Enterprise Edition offers advanced capabilities for larger organizations:

  • Web-based management UI: Administrators can manage multiple ZD&T instances through a browser interface
  • REST APIs: Automation and integration with CI/CD tools through programmatic interfaces
  • Multi-user access: Support for teams sharing ZD&T instances
  • Snapshot and cloning: Rapid creation of multiple environments from templates
  • Resource pools: Efficient management of multiple concurrent test environments

The Enterprise Edition documentation provides detailed information about these capabilities and recommended deployment architectures.
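
A pipeline script driving those REST APIs might look like the following sketch. The base URL, endpoint path, and payload fields here are illustrative assumptions, not the actual ZD&T API surface—consult the Enterprise Edition documentation for the real interface:

```python
# Hypothetical sketch of calling a ZD&T Enterprise Edition management server
# from automation. Endpoint paths and payload fields are invented for
# illustration; only the request-building mechanics are shown (nothing is sent).
import json
import urllib.request

BASE = "https://zdt-manager.example.com/api"   # hypothetical management server

def build_request(path, payload, token):
    """Assemble an authenticated JSON POST request."""
    return urllib.request.Request(
        f"{BASE}{path}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_request("/instances", {"image": "zos-masked", "clones": 2}, "TOKEN")
print(req.full_url)   # https://zdt-manager.example.com/api/instances
```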

Performance Expectations and Sizing

Organizations must set realistic expectations about ZD&T performance. These are functional test environments, not performance testing platforms. A typical ZD&T instance might provide:

  • Processor capacity: 100-500 MIPS depending on host hardware
  • Memory: 16-64 GB for the z/OS guest
  • I/O throughput: Dependent on host storage (SSDs strongly recommended)
  • Network: Generally adequate for application testing but not high-volume transaction simulation

When sizing ZD&T environments, consider:

  • Single-threaded performance: z/Architecture emulation is largely single-threaded, so host CPU clock speed matters more than core count
  • Memory: Provide generous RAM to avoid paging on the host system
  • Storage: Fast SSDs dramatically improve DASD emulation performance
  • Network: Adequate bandwidth for file transfers and remote access

Cloud Deployment Scenarios

ZD&T's design makes it well-suited for cloud deployment, offering several advantages over on-premises installations:

AWS Deployment: The AWS Partner Network blog details how to deploy ZD&T on Amazon EC2 instances. Recommended instance types include compute-optimized instances (C5 or C6i families) with high single-thread performance.

Azure Deployment: Microsoft Azure supports ZD&T on various VM sizes, with Fsv2 series providing good price-performance balance for emulation workloads.

Google Cloud Deployment: While less commonly documented, ZD&T can run on Google Cloud Compute Engine using compute-optimized machine types.

Hybrid Approaches: Some organizations run ZD&T on-premises for continuous access while maintaining cloud instances for burst capacity or geographically distributed teams.

Example Implementation Workflow

Consider a typical enterprise implementing ZD&T for CI/CD integration:

Phase 1: Environment Setup

  1. Provision x86 server or cloud instance with appropriate resources
  2. Install Linux operating system (typically RHEL or SUSE)
  3. Install ZD&T software following IBM documentation
  4. Configure storage for DASD volumes and system images

Phase 2: Image Creation

  1. Create snapshot of production z/OS system using IBM tools
  2. Apply data masking to remove sensitive information
  3. Transfer masked image to ZD&T environment
  4. Import and configure image within ZD&T
  5. Validate that applications function correctly in emulated environment

Phase 3: CI/CD Integration

  1. Create automation scripts (shell, Python, or Ansible) to:
     • Start ZD&T instance
     • Upload test data and application code
     • Execute test suites
     • Capture results
     • Shut down instance
  2. Integrate scripts with Jenkins, GitLab CI, or GitHub Actions
  3. Configure pipeline triggers (commit, pull request, scheduled)
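
The scripts in step 1 can be sequenced by a single driver. The script names below are placeholders invented for illustration, not real deliverables; the dry-run flag lets the sequence be inspected without a live ZD&T instance:

```python
# Sketch of a CI driver that runs the stage scripts in order and aborts the
# pipeline on the first failure. Script names are hypothetical placeholders.
import subprocess

STEPS = [
    ["./start_zdt.sh"],
    ["./upload_test_data.sh"],
    ["./submit_test_jobs.sh"],
    ["./collect_results.sh"],
    ["./stop_zdt.sh"],
]

def run_pipeline(steps, dry_run=False):
    executed = []
    for cmd in steps:
        if dry_run:
            executed.append(" ".join(cmd))   # record what would run
        else:
            subprocess.run(cmd, check=True)  # raise and abort on failure
    return executed

print(run_pipeline(STEPS, dry_run=True)[0])   # ./start_zdt.sh
```

In a real pipeline the teardown step belongs in an always-run cleanup phase (like the `post { always { ... } }` block in a Jenkins pipeline) so the instance is released even when tests fail.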

Phase 4: Operational Management

  1. Establish procedures for image updates (security patches, OS updates)
  2. Monitor resource utilization and performance
  3. Manage costs (especially for cloud-based instances)
  4. Document environment configurations for reproducibility

This workflow enables organizations to shift mainframe testing left in the development cycle, catching issues earlier and reducing the burden on production LPARs.

Cloud Mainframe Lab Setups & Dev/Test Pipelines

The combination of mainframe emulation and cloud infrastructure creates powerful possibilities for modernizing mainframe development practices.

Cloud-Based z/OS Testing Architecture

A typical cloud-based ZD&T architecture for automated testing might include:

Compute Layer:

  • ZD&T instances on compute-optimized VMs (AWS C5/C6i, Azure Fsv2, GCP C2)
  • On-demand provisioning via Infrastructure-as-Code (Terraform, CloudFormation)
  • Auto-shutdown policies to minimize costs when not actively testing

Storage Layer:

  • High-performance block storage (AWS EBS io2, Azure Premium SSD) for DASD volumes
  • Object storage (S3, Azure Blob) for z/OS images, backups, and test data
  • Automated snapshot policies for point-in-time recovery

Network Layer:

  • VPC/VNet isolation for security
  • VPN or Direct Connect for on-premises integration
  • Load balancers for accessing 3270 terminals and APIs

Integration Layer:

  • REST APIs for programmatic control of ZD&T
  • Message queues for test job orchestration
  • Logging and monitoring integration (CloudWatch, Azure Monitor, ELK stack)

Infrastructure-as-Code for ZD&T Provisioning

Modern DevOps practices emphasize treating infrastructure as code. Here's a conceptual Terraform example for provisioning a ZD&T environment on AWS:

# Simplified example - not production-ready
resource "aws_instance" "zdt_server" {
  ami           = data.aws_ami.rhel8.id
  instance_type = "c6i.4xlarge"

  root_block_device {
    volume_size = 200
    volume_type = "gp3"
  }

  ebs_block_device {
    device_name = "/dev/sdf"
    volume_size = 1000
    volume_type = "io2"
    iops        = 10000
  }

  # assumes an aws_security_group.zdt resource defined elsewhere
  vpc_security_group_ids = [aws_security_group.zdt.id]

  user_data = templatefile("install_zdt.sh", {
    zos_image_url = var.zos_image_url
    license_key   = var.ibm_license_key
  })

  tags = {
    Name       = "ZDT-Test-Environment"
    Purpose    = "CI-CD-Testing"
    CostCenter = "Engineering"
  }
}

This infrastructure-as-code approach enables:

  • Reproducible environments: Identical ZD&T instances can be created on-demand
  • Version control: Infrastructure changes tracked alongside application code
  • Team collaboration: Multiple engineers can share environment definitions
  • Cost optimization: Environments destroyed when not needed, recreated when required

CI/CD Pipeline Integration

Integrating ZD&T into continuous integration pipelines transforms mainframe testing from a manual, batch-oriented process to an automated, feedback-driven workflow.

Jenkins Pipeline Example (conceptual):

pipeline {
    agent any

    stages {
        stage('Provision ZDT') {
            steps {
                sh 'terraform apply -auto-approve'
                sh 'wait_for_zdt_ready.sh'
            }
        }

        stage('Upload Test Code') {
            steps {
                sh 'upload_cobol_programs.sh'
                sh 'upload_jcl_jobs.sh'
                sh 'upload_test_data.sh'
            }
        }

        stage('Run Tests') {
            steps {
                sh 'submit_test_jobs.sh'
                sh 'wait_for_completion.sh'
                sh 'collect_results.sh'
            }
        }

        stage('Validate Results') {
            steps {
                sh 'parse_test_output.sh'
                junit 'test-results/*.xml'
            }
        }
    }

    post {
        always {
            sh 'terraform destroy -auto-approve'
        }
    }
}

This pipeline:

  1. Provisions a fresh ZD&T instance
  2. Uploads application code and test scenarios
  3. Executes test suites on the emulated z/OS
  4. Collects and validates results
  5. Destroys the environment to avoid ongoing costs

Benefits and Use Cases

Offloading Production LPARs: By running regression and integration tests in ZD&T, organizations reduce MIPS consumption on production mainframes. A major bank might save $500,000-$1M annually by moving 20-30% of functional test workload from mainframe to cloud-based emulation.

Accelerated Release Cycles: Development teams can test more frequently without competing for shared test LPAR time slots. Instead of weekly test windows, teams might test on every commit or pull request.

Training and Onboarding: New engineers can practice on isolated ZD&T instances without risk of impacting shared environments. Training scenarios can be reset instantly, allowing trainees to retry exercises.

Disaster Recovery Rehearsals: Organizations can use ZD&T to practice recovery procedures, validate backup processes, and train operations staff without requiring production-like mainframe capacity.

Version Migration Testing: Before upgrading production z/OS versions, teams can validate applications on new releases using ZD&T, identifying compatibility issues early.

Caveats and Considerations

Performance Limitations: ZD&T is not suitable for performance testing or capacity planning. Timing-dependent applications might behave differently on emulated hardware.

Data Sensitivity: Production data must be thoroughly masked or synthesized before use in cloud environments. Even in isolated VPCs, regulatory compliance (GDPR, HIPAA, PCI-DSS) requires careful data handling.

Network Configuration: Emulated mainframes in cloud VPCs need careful network design to access on-premises resources, external APIs, or shared services.

Cost Management: While cheaper than mainframe MIPS, cloud resources still accumulate costs. Organizations must implement proper governance: automated shutdowns, resource tagging, cost alerts, and regular usage reviews.
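
The shutdown-and-tagging governance described above can be reduced to a small policy function. The sketch below is illustrative only: the tag names (`keep-alive`, `env`) and the two-hour idle limit are assumptions, not a standard; a real implementation would feed this decision into your cloud provider's stop/terminate API.

```python
from datetime import datetime, timedelta

def should_auto_stop(tags, last_activity, now, idle_limit_hours=2):
    """Decide whether an emulator instance is a candidate for automated shutdown.

    Instances tagged 'keep-alive=true' or 'env=prod-support' are exempt
    (hypothetical tag conventions); anything else idle longer than the
    limit should be stopped to avoid accumulating cloud costs.
    """
    if tags.get("keep-alive") == "true":
        return False
    if tags.get("env") == "prod-support":
        return False
    return now - last_activity > timedelta(hours=idle_limit_hours)

now = datetime(2024, 5, 1, 18, 0)
print(should_auto_stop({"env": "dev"}, datetime(2024, 5, 1, 12, 0), now))        # True
print(should_auto_stop({"keep-alive": "true"}, datetime(2024, 5, 1, 12, 0), now))  # False
```

A nightly job applying a rule like this, plus cost alerts and resource tagging at provision time, covers most of the governance burden.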

Licensing Compliance: Even in development environments, organizations must maintain proper IBM licensing for z/OS and middleware. Cloud deployment doesn't exempt you from licensing requirements.

Emulation Tools & Rehosting Platforms in Modernization

Mainframe emulation plays multiple roles in broader modernization strategies, from enabling gradual migration to serving as a permanent rehosting platform.

The Modernization Spectrum

As outlined in modernization frameworks from Tech Mahindra, Huawei, and Adaptigent, organizations face several strategic options:

Retain: Keep workloads on IBM Z, potentially with modernization of development practices, API enablement, or DevOps integration.

Rehost: Move applications to x86 infrastructure while maintaining COBOL code and JCL, using emulation or binary translation platforms.

Replatform: Migrate to cloud infrastructure with some code changes, potentially moving from COBOL to Java or other languages.

Refactor: Significant code restructuring to adopt cloud-native patterns, microservices, containers.

Rebuild: Complete rewrite using modern languages and architectures.

Replace: Adopt Commercial Off-The-Shelf (COTS) or SaaS solutions.

Emulation and rehosting typically support the Rehost strategy, though they also enable Retain strategies through better dev/test environments.

Rehosting Platforms: Beyond Development Emulation

While tools like ZD&T target development and testing, several vendors offer production-grade rehosting:

LzLabs Software Defined Mainframe: As described in their cloud-native guide, LzLabs SDM creates a containerized runtime environment for mainframe applications. Rather than emulating the entire z/OS stack, it provides:

  • Binary compatibility for COBOL programs
  • JCL execution without z/OS Job Entry Subsystem
  • Data access layers that map mainframe data structures to relational databases
  • Integration with modern DevOps tooling and cloud services

Micro Focus Enterprise Server: Provides COBOL runtime and mainframe service emulation on distributed platforms, supporting gradual migration of mainframe applications to x86 or cloud.

Rocket BlueZone: Provides terminal emulation and host access software; Rocket's broader portfolio adds modernization and migration tooling for running legacy applications on modern infrastructure.

Risk and Architectural Considerations

Organizations considering production rehosting face different risk profiles than those using emulation for development:

Performance and Scalability: Production workloads demand consistent performance and horizontal scaling capabilities. Emulation overhead must be acceptable for business requirements, and the platform must scale to handle production volumes.

Support and SLAs: Development tools like ZD&T carry different support commitments than production platforms. Organizations must ensure vendors provide appropriate service levels for critical workloads.

Vendor Dependency: Rehosting trades dependency on IBM mainframe hardware for dependency on the rehosting platform vendor. Evaluate vendor viability, technology roadmap, and exit strategies before committing.

Integration Complexity: Production applications integrate with numerous external systems, databases, APIs, and services. Rehosted environments must support all these integration points while potentially spanning on-premises and cloud infrastructure.

Compliance and Audit: Regulated industries (banking, healthcare, insurance) face strict compliance requirements. Rehosting platforms must demonstrate compliance with relevant frameworks (SOX, PCI-DSS, HIPAA) and undergo regular audits.

Data-First Modernization Strategies

Treehouse Software's white paper advocates for "data-first" approaches where organizations initially replicate mainframe data to cloud data warehouses or lakes, enabling analytics and new applications while leaving core transaction processing on the mainframe. Emulation can support this strategy by:

  • Providing test environments for data extraction and transformation logic
  • Enabling experimentation with data replication tools without production impact
  • Supporting development of cloud-native applications that consume mainframe data
  • Facilitating gradual migration as cloud applications mature and can replace mainframe functions

Hybrid Architectures

Many successful modernization journeys involve hybrid architectures where:

  • Core transaction processing remains on IBM Z for performance and reliability
  • New customer-facing applications run in cloud environments
  • APIs bridge mainframe services with cloud applications
  • Development and testing occur primarily in cloud-based emulated environments
  • Data replication keeps cloud analytics platforms synchronized with mainframe systems of record

Emulation tools enable organizations to experiment with these architectures, validate integration patterns, and develop cloud-native components without immediate pressure to migrate production workloads off the mainframe.

Licensing, Compliance, and Risk Management

The legal and compliance aspects of mainframe emulation are as important as the technical considerations.

IBM Software Licensing Requirements

IBM's software licenses apply regardless of the hardware platform. Key points:

z/OS and System Software: Running z/OS, z/VM, z/VSE, or z/TPF requires appropriate IBM licenses. These licenses may be:

  • Development/test licenses (typically more affordable, with usage restrictions)
  • Production licenses (full cost, no restrictions)
  • Time-limited evaluation licenses

Organizations using ZD&T or zPDT must obtain proper development/test licenses from IBM. The emulator license doesn't include operating system or middleware rights.

ISV and Middleware Licensing: Products like CICS, DB2, IMS, MQ, and third-party ISV software also require separate licensing. Even in development environments, software licenses matter.

Hercules and Public Domain Software: Using Hercules with public domain operating systems (MVS 3.8j, VM/370) doesn't require IBM licensing, as these older releases were effectively placed in the public domain. However, running current z/OS releases on Hercules without IBM authorization violates licensing terms and intellectual property rights.

When to Use Hercules vs IBM Tools

Choose Hercules when:

  • Learning mainframe concepts for educational purposes
  • Developing Linux on Z applications
  • Working with public domain historical software
  • Budget constraints prevent commercial alternatives
  • No licensed IBM software will be used

Choose zPDT or ZD&T when:

  • Developing commercial applications requiring current z/OS releases
  • Testing with licensed middleware (CICS, DB2, IMS)
  • Requiring IBM support for development environments
  • Working in regulated industries with compliance requirements
  • Building production-grade software needing validated test environments

Data Protection and Compliance

Moving mainframe data to x86 or cloud environments introduces compliance considerations:

Data Classification and Masking: Before loading production data into ZD&T or rehosting platforms:

  1. Classify data by sensitivity level (public, internal, confidential, regulated)
  2. Mask or tokenize personal information (PII) to comply with GDPR, CCPA
  3. Sanitize payment card data per PCI-DSS requirements
  4. Anonymize protected health information (PHI) for HIPAA compliance
  5. Remove or hash Social Security Numbers, account numbers, and other sensitive identifiers
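Steps 2-5 above share one core operation: replacing a sensitive value with a deterministic surrogate so referential integrity survives the refresh. A minimal sketch, assuming hypothetical field names (`ssn`, `account_no`, `name`) rather than any real copybook layout:

```python
import hashlib

def mask_record(record, secret_salt="rotate-me"):
    """Return a copy of a customer record safe for test environments.

    SSNs and account numbers become salted one-way hashes -- stable
    across image refreshes, so joins between files still work -- and
    names become a generic placeholder. The salt must be kept out of
    the test environment and rotated per policy.
    """
    def token(value):
        digest = hashlib.sha256((secret_salt + value).encode()).hexdigest()
        return digest[:12]  # short, deterministic surrogate

    masked = dict(record)
    masked["ssn"] = token(record["ssn"])
    masked["account_no"] = token(record["account_no"])
    masked["name"] = "TEST CUSTOMER"
    return masked

rec = {"ssn": "123-45-6789", "account_no": "0012345678", "name": "Jane Doe"}
print(mask_record(rec)["name"])  # TEST CUSTOMER
```

In practice this logic runs during image creation, before any data leaves the mainframe's security perimeter.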

Data Residency: Regulations may restrict where data can be stored and processed. European GDPR, Brazilian LGPD, and Chinese data sovereignty laws impose geographic constraints. Organizations must:

  • Deploy cloud instances in compliant regions
  • Use encryption for data in transit and at rest
  • Document data flows for regulatory audits
  • Implement access controls and audit logging
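Region selection can be gated in provisioning code before any instance is created. The sketch below is illustrative: the regime-to-region mapping must come from your own legal review, and the region names are AWS-style examples.

```python
# Hypothetical mapping from regulatory regime to permitted cloud regions.
RESIDENCY_RULES = {
    "gdpr": {"eu-west-1", "eu-central-1"},
    "lgpd": {"sa-east-1"},
}

def compliant_region(data_regimes, region):
    """Return True only if the region satisfies every regime the data falls under."""
    return all(region in RESIDENCY_RULES[r] for r in data_regimes)

print(compliant_region(["gdpr"], "eu-central-1"))          # True
print(compliant_region(["gdpr", "lgpd"], "eu-central-1"))  # False: no region satisfies both
```

A check like this belongs in the Terraform/provisioning pipeline as a hard gate, not in documentation alone.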

Synthetic Data Generation: Some organizations generate synthetic test data that mimics production characteristics without containing real customer information. This approach eliminates many compliance concerns while providing realistic test scenarios.
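A seeded generator is the simplest form of this idea: production-like shape, zero real PII, and reproducible runs. Field names and value ranges below are illustrative; a real generator would be tuned to match production distributions (record counts, key cardinality, skew).

```python
import random

def synthetic_customers(n, seed=42):
    """Generate fake customer rows with production-like shape but no real data.

    The fixed seed makes test runs reproducible; SSNs use the 900- area
    number range, which is never issued to real people.
    """
    rng = random.Random(seed)
    rows = []
    for i in range(n):
        rows.append({
            "cust_id": f"C{i:08d}",
            "ssn": f"900-{rng.randint(10, 99)}-{rng.randint(1000, 9999)}",
            "balance": round(rng.uniform(0, 250_000), 2),
            "state": rng.choice(["NY", "TX", "CA", "FL"]),
        })
    return rows

data = synthetic_customers(3)
print(len(data), data[0]["cust_id"])  # 3 C00000000
```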

Support and Risk Management

IBM Support Boundaries: IBM supports ZD&T and zPDT for their intended use cases (development, testing, training). Support doesn't extend to:

  • Production workloads running on emulators
  • Unsupported configurations or modifications
  • Performance issues inherent to emulation
  • Third-party software compatibility problems

Disaster Recovery: Development emulators aren't typically included in disaster recovery plans. However, organizations should:

  • Back up ZD&T configurations and automation scripts
  • Maintain copies of z/OS images in multiple locations
  • Document procedures for recreating environments after failures
  • Consider multi-region cloud deployments for critical development infrastructure

Security Hardening: Emulated environments require security measures:

  • Network isolation (VPCs, firewalls, security groups)
  • Access controls (authentication, authorization, MFA)
  • Vulnerability scanning and patching
  • Encryption for sensitive data and communications
  • Logging and monitoring for security events
  • Regular security assessments

Real-World Usage Scenarios & Mini Case Studies

Examining how organizations actually use mainframe emulation provides practical insights beyond technical specifications.

Scenario 1: ISV Building z/OS Management Tools

Background: A software company develops monitoring and automation tools for z/OS systems. They sell to hundreds of customers running different z/OS releases, middleware versions, and configurations.

Challenge: Testing across customer environments required maintaining multiple expensive mainframe LPARs or negotiating test access with customers—both impractical and cost-prohibitive.

Solution: The ISV adopted ZD&T Personal Edition for individual developers and ZD&T Enterprise Edition for QA teams. They maintain multiple z/OS images representing common customer configurations (z/OS 2.4, 2.5, 3.1 with various middleware combinations).

Implementation:

  • Developers receive laptop workstations capable of running ZD&T Personal instances
  • Each developer maintains 2-3 z/OS images for primary development and basic compatibility testing
  • QA maintains a server cluster running ZD&T Enterprise with 10+ test environments
  • Automated test suites run nightly against all supported configurations
  • Customer-reported issues are reproduced in ZD&T before fixes are developed

Outcomes:

  • Development costs reduced by approximately $300,000 annually compared to LPAR-based development
  • Time-to-market improved as engineers test locally without waiting for LPAR access
  • Product quality improved through comprehensive compatibility testing
  • Customer satisfaction increased due to faster issue resolution

Key Success Factors:

  • Proper IBM licensing for all z/OS versions and middleware
  • Investment in automation (environment provisioning, test execution, result collection)
  • Discipline around image management (version control, update procedures)
  • Clear policies on when to escalate to real mainframe hardware for specific investigations

Scenario 2: Bank Modernizing COBOL Applications

Background: A regional bank operates critical banking applications written in COBOL and running on z/OS with CICS and DB2. They're modernizing gradually, adding REST APIs, integrating with mobile banking, and improving development practices.

Challenge: Production mainframe MIPS are expensive and heavily utilized. Development and testing compete with production workloads. Release cycles take 6-9 months due to testing bottlenecks. New developers struggle to learn mainframe technology without disrupting operations.

Solution: The bank deployed ZD&T Enterprise Edition in AWS to create a cloud-based development and testing environment parallel to production.

Implementation:

  • Created masked production image with anonymized customer data
  • Deployed ZD&T on AWS c6i.8xlarge instances (32 vCPUs, 64 GB RAM)
  • Built Terraform modules for automated environment provisioning
  • Integrated ZD&T with Jenkins CI/CD pipeline
  • Established processes for bi-weekly production image refreshes
  • Trained developers on using cloud-based mainframe environments

Workflow:

  1. Developer commits COBOL or JCL changes to Git repository
  2. Jenkins pipeline triggered automatically
  3. Fresh ZD&T instance provisioned from template
  4. Code deployed to test environment
  5. Automated regression tests executed
  6. Results reported to developer
  7. ZD&T instance destroyed (runtime: 2-3 hours total)

Outcomes:

  • MIPS consumption on production mainframe reduced 25%
  • Development cycle times reduced from 6-9 months to 3-4 months
  • Testing frequency increased from quarterly to weekly
  • AWS costs approximately $8,000/month vs $25,000/month for equivalent mainframe MIPS
  • Developer satisfaction improved with self-service test environments
  • Onboarding time for new developers reduced from 6 months to 3 months

Challenges:

  • Initial image creation took 3 months (data masking, validation, troubleshooting)
  • Network configuration for accessing on-premises services required careful planning
  • Some test scenarios revealed timing differences between ZD&T and production requiring test adjustments
  • Cultural change management needed to shift from LPAR-centric to cloud-based development

Scenario 3: University Teaching Mainframe Concepts

Background: A computer science department wanted to offer mainframe systems programming courses but lacked budget for IBM hardware or commercial licenses.

Challenge: Providing hands-on mainframe experience without hardware or software costs. Enabling students to experiment freely without breaking shared systems.

Solution: Deployed Hercules emulator running MVS 3.8j (public domain) and Linux on Z in university's computer lab and cloud infrastructure.

Implementation:

  • Installed Hercules on lab Linux workstations (one emulator per student)
  • Provided pre-configured MVS 3.8j images for OS concepts courses
  • Created Linux on Z images for systems administration courses
  • Developed lab exercises covering JCL, TSO/ISPF, assembler programming, COBOL
  • Made cloud-based Hercules instances available for remote students
  • Published all configurations and images for self-study

Outcomes:

  • 50-75 students per year gain mainframe exposure at zero licensing cost
  • Students practice freely without fear of impacting production systems
  • Several graduates hired by enterprises specifically due to mainframe skills
  • Course materials shared with other universities, expanding mainframe education
  • Alumni community maintains knowledge base of Hercules configurations and exercises

Educational Value:

  • Understanding legacy computing architectures
  • Learning batch processing concepts still relevant in modern data engineering
  • Experiencing different computing paradigms beyond cloud-native development
  • Preparing for careers supporting critical enterprise systems

Scenario 4: Insurance Company Evaluating Modernization

Background: An insurance company operates policy administration systems on mainframe. Claims processing, underwriting, and billing all depend on COBOL applications. Management asked IT to evaluate modernization options.

Challenge: Understanding what modernization might look like without committing to expensive transformation programs. Assessing whether rehosting could work for their applications.

Solution: Conducted 6-month proof-of-concept using ZD&T to analyze applications and LzLabs SDM to test rehosting feasibility.

PoC Structure:

Phase 1 - Analysis (2 months):

  • Created ZD&T environment with representative production subset
  • Documented application dependencies, integration points, data flows
  • Identified 3 candidate applications for rehosting pilot

Phase 2 - Rehosting Test (3 months):

  • Worked with LzLabs to rehost a non-critical claims reporting application
  • Maintained ZD&T environment as baseline for comparison
  • Validated functional equivalence between mainframe and rehosted versions
  • Performance tested under simulated load

Phase 3 - Assessment (1 month):

  • Evaluated results against success criteria
  • Analyzed costs, risks, benefits
  • Presented recommendations to leadership

Findings:

  • Rehosting technically feasible for selected applications
  • Performance acceptable for reporting workloads, questionable for high-volume transaction processing
  • Integration complexity higher than initially estimated
  • Modernization requires 3-5 year roadmap, not big-bang migration
  • ZD&T valuable for ongoing development even if rehosting deferred

Decision:

  • Proceed with gradual modernization: API enablement, data replication to cloud
  • Deploy ZD&T for development/test regardless of rehosting timeline
  • Revisit production rehosting in 18 months with better data integration in place
  • Continue using ZD&T for application analysis and modernization planning

Lessons Learned:

  • Proof-of-concept investment ($150,000) prevented premature commitment to wrong strategy
  • ZD&T provided safe environment for experimentation
  • Modernization requires patience and iterative approach
  • Technology feasibility only one dimension—organizational readiness equally important

Best Practices and Pitfalls

Drawing from successful implementations and lessons learned, here are practical guidelines for mainframe emulation initiatives.

Best Practices

Define Clear Objectives: Specify whether you're addressing:

  • Development environment needs
  • Testing and QA automation
  • Training and education
  • Proof-of-concept for modernization
  • Production workload rehosting

Each objective suggests different tools, architectures, and success metrics.

Start Small and Iterate: Begin with a limited scope:

  • Single application or subsystem
  • Small development team
  • Narrow set of test scenarios
  • Limited z/OS configuration

Expand based on lessons learned rather than attempting comprehensive deployment immediately.

Invest in Automation: Manual environment management becomes unsustainable as usage grows. Automate:

  • Infrastructure provisioning (Terraform, CloudFormation, Ansible)
  • Image creation and updates
  • Test execution and result collection
  • Environment teardown and cost management
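The teardown item deserves special care: whatever orchestrates provisioning must guarantee destruction even when tests fail, mirroring the `post { always { ... } }` stage in the Jenkins pipeline shown earlier. A minimal sketch, with the three phases injected as callables (in practice, thin wrappers around `terraform apply`, job-submission scripts, and `terraform destroy`):

```python
def run_ephemeral_tests(provision, run_tests, destroy):
    """Run a test suite inside a freshly provisioned environment,
    always tearing it down afterward.

    try/finally guarantees destroy() runs on success, test failure,
    or exception -- so a broken run never leaves a billable emulator
    instance behind.
    """
    provision()
    try:
        return run_tests()
    finally:
        destroy()

events = []
result = run_ephemeral_tests(
    provision=lambda: events.append("up"),
    run_tests=lambda: "passed",
    destroy=lambda: events.append("down"),
)
print(events, result)  # ['up', 'down'] passed
```

The same shape works whether the orchestrator is Jenkins, a GitHub Actions workflow, or a standalone script.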

Implement Proper Data Governance:

  • Classify data by sensitivity level
  • Mask or synthesize sensitive information
  • Document data flows for compliance
  • Implement access controls and audit logging
  • Regular review of data handling practices

Integrate with Existing Tools: Connect emulated environments to:

  • Version control systems (Git)
  • CI/CD platforms (Jenkins, GitLab, GitHub Actions)
  • Monitoring and logging (Splunk, ELK, CloudWatch)
  • Issue tracking (Jira, ServiceNow)
  • Documentation systems (Confluence, SharePoint)

Build Expertise Gradually: Develop internal capabilities:

  • Train staff on emulation tools and cloud platforms
  • Document procedures and configurations
  • Create runbooks for common tasks
  • Establish Centers of Excellence
  • Mentor junior engineers with experienced mainframe professionals

Establish Support Relationships: For commercial tools:

  • Understand vendor SLAs and support boundaries
  • Establish escalation procedures
  • Participate in user communities
  • Provide feedback to vendors on roadmaps

Common Pitfalls to Avoid

Unrealistic Performance Expectations: Emulated environments won't match production mainframe performance. Don't use emulators for:

  • Performance benchmarking
  • Capacity planning
  • Load testing requiring production-scale throughput
  • Latency-sensitive timing validation

Licensing Ignorance or Violations: Assuming emulation equals free mainframe access. Remember:

  • IBM operating systems require licenses regardless of platform
  • Development/test licenses have usage restrictions
  • Violating licensing can lead to audit findings and financial penalties
  • Hercules doesn't exempt you from licensing requirements for current software

Inadequate Security: Treating development environments as less sensitive than production:

  • Dev/test often contain production data copies
  • Compromised development systems enable production attacks
  • Compliance violations occur in test environments too
  • Apply security rigor appropriate to data sensitivity

Environment Drift and Configuration Sprawl: Without discipline, emulated environments multiply and diverge:

  • Implement lifecycle management (creation, usage, archival, destruction)
  • Version control configurations
  • Regular cleanup of unused resources
  • Cost tracking and chargeback

Underestimating Complexity: Assuming emulation setup is turnkey:

  • Image creation requires significant effort
  • Network configuration can be intricate
  • Integration with enterprise systems takes time
  • Cultural and process changes accompany technology adoption

Treating Emulation as Silver Bullet: Viewing emulators as magic solution to all mainframe challenges:

  • They're tools, not strategies
  • Modernization requires holistic approach
  • People, process, and culture matter as much as technology
  • Long-term success demands sustained commitment

Conclusion

Mainframe emulation and virtualization have evolved from hobbyist curiosities to enterprise-grade development and testing solutions. Tools like Hercules, zPDT, and particularly IBM ZD&T enable organizations to break free from the constraints of shared production mainframes for development work, creating opportunities for modernizing development practices, accelerating release cycles, and training the next generation of mainframe engineers.

The landscape offers options for different scenarios. Hercules provides free access to mainframe technology for education and public-domain software. IBM's zPDT and ZD&T deliver commercially supported environments for professional development with current z/OS releases. Rehosting platforms like LzLabs offer pathways for organizations considering production workload migration to x86 or cloud infrastructure.

Cloud deployment of mainframe test environments represents a particularly powerful evolution. By combining ZD&T's emulation capabilities with AWS, Azure, or Google Cloud infrastructure, organizations achieve elasticity impossible with physical mainframes. Development teams can provision z/OS environments on-demand, integrate mainframe testing into CI/CD pipelines, and pay only for resources actually consumed. The typical enterprise can reduce development and testing costs by 40-60% while simultaneously improving developer productivity and release frequency.

However, emulation is not a magic solution. Performance remains limited compared to production IBM Z systems—these are functional test environments, not performance benchmarking platforms. Licensing requirements persist regardless of underlying hardware; running z/OS still requires IBM licenses. Data protection and compliance demand careful attention when moving production data to x86 or cloud environments. And successful implementation requires not just technology deployment but organizational change around development practices, automation, and DevOps culture.

Looking forward, several trends will shape the mainframe emulation landscape:

Cloud-Native Development: Increasing integration of mainframe development into cloud-based IDEs, version control, and CI/CD platforms. IBM's Wazi as a Service represents this direction—cloud-hosted mainframe development environments accessible through browsers.

Mainframe-as-a-Service: Growing offerings that provide on-demand access to mainframe environments without managing infrastructure. Similar to how database-as-a-service simplified data tier management, mainframe-as-a-service could democratize access to IBM Z capabilities.

AI-Assisted Automation: Application of AI and machine learning to automate environment provisioning, test generation, and result analysis. Language models could potentially generate test data, create JCL jobs, or suggest modernization opportunities based on code analysis.

Hybrid Architectures: Continued evolution toward architectures where production workloads run on IBM Z for performance and reliability while development, testing, and modernization activities occur on cloud-based emulated environments. This hybrid approach balances the strengths of each platform.

Expanded Modernization Patterns: As rehosting platforms mature and proven migration patterns emerge, more organizations will evaluate production workload migration. However, many will likely adopt hybrid strategies where core transaction processing remains on IBM Z while new capabilities deploy cloud-native.

For organizations running IBM mainframes, the question isn't whether to explore emulation and virtualization—it's how to incorporate these tools effectively into development, testing, and modernization strategies. The technology has matured, the economics are compelling, and the competitive pressure to accelerate development cycles continues mounting.

Success requires approaching emulation strategically rather than tactically. Define clear objectives aligned with business outcomes. Invest in automation and integration with modern development tooling. Build internal expertise while leveraging vendor support appropriately. Manage costs, licensing, and compliance rigorously. Start with focused pilots that deliver measurable value, then expand based on lessons learned.

The mainframe isn't disappearing anytime soon—too much critical business logic runs on these systems, and the reliability and transaction processing capabilities remain unmatched. But the way organizations develop, test, and modernize mainframe applications is transforming. Emulation and virtualization on x86 and cloud platforms are essential enablers of this transformation, making mainframe technology more accessible, flexible, and aligned with contemporary development practices. Organizations that embrace these capabilities position themselves to maintain mainframe excellence while evolving toward the future of enterprise computing.
