Mainframe
This section explains mainframes: their advantages, typical uses and workloads, the roles involved in operating them, and the important programming languages used on mainframes.
Overview
The term mainframe, in this topic, refers to computers that can support thousands of applications and input/output devices to simultaneously serve thousands of users.
A business might have a large server collection that includes transaction servers, database servers, e-mail servers and Web servers. These large collections of servers are sometimes referred to as server farms. The hardware required to perform a server function can range from little more than a cluster of rack-mounted personal computers to the most powerful mainframes manufactured today.
A mainframe is the central data repository, or hub, in a corporation’s data processing centre, linked to users through less powerful devices such as workstations or terminals. The presence of a mainframe often implies a centralised form of computing, as opposed to a distributed form of computing. Centralising the data in a single mainframe repository saves customers from having to manage updates to more than one copy of their business data, which increases the likelihood that the data is current.
However, the distinction between centralised and distributed computing is rapidly blurring as smaller machines continue to gain in processing power and mainframes become ever more flexible and multi-purpose. Market pressures require that today’s businesses constantly re-evaluate their IT strategies to find better ways of supporting a changing marketplace. As a result, mainframes are now frequently used in combination with networks of smaller servers in a multitude of configurations. The ability to dynamically reconfigure a mainframe’s hardware and software resources (such as processors, memory, and device connections), while applications continue to run, underscores the flexible and evolving nature of the modern mainframe. While mainframe hardware has become harder to pigeon-hole, so have the operating systems that run on mainframes. Years ago, in fact, the terms defined each other: a mainframe was any hardware system that ran a major IBM operating system. This meaning has also blurred in recent years because these operating systems can be run on very small systems.
Computer manufacturers and IT professionals often use the term platform to refer to the hardware and software that are associated with a particular computer architecture. For example, a mainframe computer and its operating system (and their predecessors) are considered a platform; UNIX on a Reduced Instruction Set Computer (RISC) system is considered a platform; personal computers can be seen as several different platforms, depending on which operating system is being used. So, to return to the original question: “What is a mainframe?” Today, the term mainframe can best be used to describe a style of operation, applications, and operating system facilities. To start with a working definition, “a mainframe is what businesses use to host the commercial databases, transaction servers, and applications that require a greater degree of security and availability than is commonly found on smaller-scale machines.”
Early mainframe systems were housed in enormous, room-sized metal boxes or frames, which is how the term mainframe originated. The early mainframe required large amounts of electrical power and air-conditioning, and the room was filled mainly with I/O devices. Also, a typical customer site had several mainframes installed, with most of the I/O devices connected to all of the mainframes. At their peak physical size, typical mainframe installations occupied 2,000 to 10,000 square feet (roughly 200 to 1,000 square metres), and some installations were even larger.
Starting around 1990, mainframe processors and most of their I/O devices became physically smaller, while their functionality and capacity continued to grow. Mainframe systems today are much smaller than earlier systems—about the size of a large refrigerator. In some cases, it is now possible to run a mainframe operating system on a PC that emulates a mainframe. Such emulators are useful for developing and testing business applications before moving them to a mainframe production system.
Clearly, the term mainframe has expanded beyond merely describing the physical characteristics of a system. Instead, the word typically applies to some combination of the following attributes:
- Backwards compatibility with previous mainframe operating systems, applications, and data.
- Centralised control of resources.
- Hardware and operating systems that can share access to disk drives with other systems, with automatic locking and protection against destructive simultaneous use of disk data.
- A style of operation, often involving dedicated operations staff who use detailed operations procedure books and highly organised procedures for backups, recovery, training, and disaster recovery at an alternative location.
- Hardware and operating systems that routinely work with hundreds or thousands of simultaneous I/O operations.
- Clustering technologies that allow the customer to operate multiple copies of the operating system as a single system. This configuration, known as a Parallel Sysplex, is analogous in concept to a UNIX cluster, but it allows systems to be added or removed as needed while applications continue to run. This flexibility allows mainframe customers to introduce new applications, or discontinue the use of existing applications, in response to changes in business activity.
- Additional data and resource sharing capabilities. For example, in a Parallel Sysplex, it is possible for users across multiple systems to access the same databases concurrently, where database access is controlled at the record level.
- Optimised I/O for business-related data processing applications supporting high speed networking and terabytes of disk storage.
As the performance and cost of hardware resources such as the central processing unit (CPU) and external storage media improve, and as the number and types of devices that can be attached to the CPU increase, the operating system software can take full advantage of the improved hardware.
Advantages of Mainframe
The reasons to use a mainframe are many, but most generally they fall into one or more of the following categories:
- Reliability, availability, and serviceability
- Security
- Scalability
- Continuing compatibility
- Evolving architecture
- Extensibility
- Total cost of ownership
- Environmental friendliness
The reliability, availability, and serviceability (or “RAS”) of a computer system have always been important factors in data processing. RAS has become accepted as a collective term for many characteristics of hardware and software that are prized by mainframe users. The terms are defined as follows:
- Reliability: The system’s hardware components have extensive self-checking and self-recovery capabilities. The system’s software reliability is a result of extensive testing and the ability to make quick updates for detected problems. One of the operating system’s features is a Health Checker, which identifies potential problems before they impact availability or, in worst cases, cause system or application outages.
- Availability: The system can recover from a failed component without impacting the rest of the running system. This applies to hardware recovery (the automatic replacing of failed elements with spares) and software recovery (the layers of error recovery that are provided by the operating system).
- Serviceability: The system can determine why a failure occurred. This allows for the replacement of hardware and software elements while impacting the operational system as little as possible. This term also implies well-defined units of replacement, either hardware or software.
A computer system can be described as reliable, available, and serviceable when all of its applications are available for use, it rarely requires downtime for upgrades or repairs, and it is easy to fix when an error condition does bring it down.
An organisation’s most valuable resource is its data, which includes customer lists, accounting data, employee information, and so on. This critical data needs to be securely managed and controlled and, at the same time, made available to authorised users. The mainframe computer has extensive capabilities to share the firm’s data among multiple users while protecting it.
In the IT environment, data security is defined as protection against unauthorised access, transfer, modification, or destruction, whether accidental or intentional. To protect data and to maintain the resources necessary to meet the security objectives, customers typically add a sophisticated security manager product to their mainframe operating system. The customer’s security administrator often bears the overall responsibility for using the available technology to transform the company’s security policy into a usable plan. A secure computer system prevents users from accessing or changing any objects on the system, including user data, except through system-provided interfaces that enforce authority rules. The mainframe provides a very secure system for processing large numbers of heterogeneous applications that access critical data.
The mainframe's built-in security throughout the software stack means that z/OS does not suffer from the virus attacks and buffer-overflow problems characteristic of many distributed environments, owing to its architectural design and use of registers. Hardware-enabled security offers unmatched protection for workload isolation, storage protection, and secured communications. Built-in security embedded throughout the operating system, network infrastructure, middleware, application, and database architectures delivers secured infrastructures and secured business processing, and fosters compliance. The mainframe’s cryptography executes at multiple layers of the infrastructure, ensuring protection of data throughout its life cycle.
It has been said that the only constant is change. Nowhere is that statement truer than in the IT industry. In business, positive results can often trigger a growth in IT infrastructure to cope with increased demand. The degree to which the IT organisation can add capacity without disruption to normal business processes or without incurring excessive overhead (non-productive processing) is largely determined by the scalability of the particular computing platform. By scalability, we mean the ability of the hardware, software, or a distributed system to continue to function well as it is changed in size or volume; for example, the ability to retain performance levels when adding processors, memory, and storage. A scalable system can efficiently adapt to work with larger or smaller networks, performing tasks of varying complexity. The mainframe provides functionality for both vertical and horizontal scaling, where software and hardware collaborate to accommodate various application requirements. As a company grows in employees, customers, and business partners, it usually needs to add computing resources to support business growth. One approach is to add more processors of the same size, with the resulting overhead of managing this more complex setup. Alternatively, because the mainframe is a share-everything architecture, a company can consolidate its many smaller processors into fewer, larger systems. Mainframes exhibit scalability characteristics in both hardware and software, with the ability to run multiple copies of the operating system software as a single entity called a system complex, or sysplex.
Mainframe customers tend to have a very large financial investment in their applications and data. Some applications have been developed and refined over decades. Some applications were written many years ago, while others may have been written yesterday. The ability of an application to work in the system or its ability to work with other devices or programs is called compatibility.
The need to support applications of varying ages imposes a strict compatibility demand on mainframe hardware and software, which have been upgraded many times since the first System/360 mainframe computer was shipped in 1964. Applications must continue to work properly. Thus, much of the design work for new hardware and system software revolves around this compatibility requirement.
The overriding need for compatibility is also the primary reason why many aspects of the system work as they do, for example, the syntax restrictions of the job control language (JCL), which is used to control job scheduling and execution. Any new design enhancements made to JCL must preserve compatibility with older jobs so that they can continue to run without modification. The desire and need for continuing compatibility is one of the defining characteristics of mainframe computing. Absolute compatibility across decades of changes and enhancements is not possible, of course, but the designers of mainframe hardware and software make it a top priority. When an incompatibility is unavoidable, the designers typically warn users at least a year in advance that software changes might be needed.
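As a minimal sketch of that rigid, long-stable JCL syntax, consider the following hypothetical job (the job, program, and data set names are invented for illustration):

```jcl
//PAYROLL  JOB (ACCT01),'NIGHTLY PAYROLL',CLASS=A,MSGCLASS=X
//*  Run a (hypothetical) payroll program against the transaction file
//STEP1    EXEC PGM=PAYCALC
//TRANS    DD  DSN=PROD.PAYROLL.TRANS,DISP=SHR
//REPORT   DD  SYSOUT=*
```

Statements begin with // in the first two columns and present their fields in a fixed order; it is precisely this stability that allows jobs written decades ago to run unmodified today.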
Technology has always accelerated the pace of change. New technologies enable new ways of doing business, shifting markets, changing customer expectations, and redefining business models. Each major enhancement to technology presents opportunities. Companies that understand and prepare for changes can gain advantage over competitors and lead their industries. To support an on demand business, the IT infrastructure must evolve accordingly. At its heart, the data centre must transition to reflect these needs: it must be responsive to changing demands, variable enough to support a diverse environment, flexible so that applications can run on the optimal resources at any point in time, and resilient enough to support an always open-for-business environment. For over four decades, the IBM mainframe has been a leader in data and transaction serving. The announcement of the latest machine provides a strong combination of heritage mainframe characteristics plus new functions designed around scalability, availability, and security. IBM further enhances the capabilities of the mainframe by introducing optimised capacity settings with subcapacity central processors (CPs). With the introduction of CPU capacity settings, the mainframe now has a comprehensive server range to meet the needs of businesses spanning mid-range companies to large enterprises. In addition, the availability of special-purpose processors improves total cost of ownership and provides greater overall throughput.
In software engineering, extensibility is a system design principle in which the implementation takes future growth into consideration. It is a systemic measure of the ability to extend a system and of the level of effort required to implement the extension. Extensions can be provided through the addition of new functionality or through modification of existing functionality. The mainframe’s central theme is to provide for change while minimising the impact on existing system functions. As the mainframe evolves into more of an autonomic system, it takes on tasks not anticipated in its original design. Its ultimate aim is to create the definitive self-managing computing environment, one that can absorb rapid growth and facilitate expansion. Many built-in features perform software management, runtime health checking, and transparent hardware hot-swapping. Extensibility also comes in the form of cost containment and has been with the mainframe for a long time in different forms: it is a share-everything architecture, and component and infrastructure reuse is a characteristic of its design.
Many organisations are under the false impression that the mainframe is a server that carries higher overall software, hardware, and people costs. Most organisations do not accurately calculate the total costs of their server proliferation, largely because chargeback mechanisms do not exist, because only incremental mainframe investment costs are compared to incremental distributed costs, or because total shadow costs are not weighed in. Many organisations also fail to recognise the path-length delays and context switching of running workloads across many servers, which typically add up to a performance penalty nonexistent on the mainframe. Also, the autonomic capabilities of the mainframe (reliability, scalability, self-managing design) may not be taken into consideration. Distributed servers encounter an efficiency barrier whereby adding incremental servers after a certain point fails to add efficiency. The total diluted cost of the mainframe is not used correctly in calculations; instead, the delta costs attributed to an added workload often make the comparisons erroneous. A distributed server’s cost per unit of work never approximates the incremental cost of a mainframe, and over time it is unlikely that a server farm could achieve the economies of scale associated with a fully loaded mainframe, regardless of how many devices are added. In effect, there is a limit to the efficiencies realisable in a distributed computing environment. These inefficiencies are due to shadow costs, execution of only one style of workload versus a balanced workload, underutilisation of CPUs, people expense, and the real estate costs of distributed operations management.
Refurbishing existing data centres can also prove cost-prohibitive, for example when installing new cooling units that require reconfiguration of floors. The cost of power over time also requires consideration as part of data centre planning. Alongside the rising trend in energy costs is a trend toward high-density distributed servers that stress the power capacity of today’s environments. This trend has been met with rising energy bills and facilities that simply cannot accommodate new energy requirements. Distributed servers are producing power and cooling requirements per square foot that stress current data centre power thresholds. Because these servers have an attractive initial price point, their popularity has increased; however, their compact electronics generate heat that can be costly to remove. The mainframe’s virtualisation leverages the power of many servers within a small hardware footprint. Today’s mainframe reduces the impact of energy cost to a near-negligible value when calculated on a per-logical-server basis, because more applications, several hundred of them, can be deployed on a single machine. With mainframes, fewer physical servers running at a near-constant energy level can host multiple virtual software servers. This allows a company to optimise the utilisation of hardware and consolidate its physical server infrastructure by hosting servers on a small number of powerful System z servers. With server consolidation onto System z, often using Linux, companies get better hardware utilisation and reduce floor space and power consumption while driving down costs. The mainframe is designed to scale up and out, for instance by adding more processors to an existing hardware frame and leveraging existing MIPS, which retain their value during upgrades. (With distributed systems, the hardware and processing power are typically simply replaced after 3-4 years of use.) By adding MIPS to the existing mainframe, more workloads can be run cost-effectively without changing the footprint; there is no need for another server that would in turn require additional environmental work, networking, and cooling. The mainframe's Integrated Facility for Linux (IFL) processors can easily run hundreds of instances of Linux at an incremental cost of 75 watts of power.
Users of Mainframe Computers
Just about everyone has used a mainframe computer at one point or another. If you ever used an automated teller machine (ATM) to interact with your bank account, you used a mainframe. Today, mainframe computers play a central role in the daily operations of most of the world’s largest corporations. While other forms of computing are used extensively in business in various capacities, the mainframe occupies a coveted place in today’s e-business environment. In banking, finance, health care, insurance, utilities, government, and a multitude of other public and private enterprises, the mainframe computer continues to be the foundation of modern business.
Until the mid-1990s, mainframes provided the only acceptable means of handling the data processing requirements of a large business. These requirements were then (and are often now) based on large and complex batch jobs, such as payroll and general ledger processing.
The mainframe owes much of its popularity and longevity to its inherent reliability and stability, a result of careful and steady technological advances that have been made since the introduction of the System/360 in 1964. No other computer architecture can claim as much continuous, evolutionary improvement while maintaining compatibility with previous releases. Because of these design strengths, the mainframe is often used by IT organisations to host the most important, mission-critical applications. These applications typically include customer order processing, financial transactions, production and inventory control, payroll, and many other types of work. One common impression of a mainframe’s user interface is the 80x24-character “green screen” terminal, named for the old cathode ray tube (CRT) monitors from years ago that glowed green. In reality, mainframe interfaces today look much the same as those for personal computers or UNIX systems. When a business application is accessed through a Web browser, there is often a mainframe computer performing crucial functions “behind the scenes.” Many of today’s busiest Web sites store their production databases on a mainframe host. New mainframe hardware and software products are ideal for Web transactions because they are designed to allow huge numbers of users and applications to rapidly and simultaneously access the same data without interfering with each other. This security, scalability, and reliability are critical to the efficient and secure operation of contemporary information processing. Corporations use mainframes for applications that depend on scalability and reliability. For example, a banking institution could use a mainframe to host the database of its customer accounts, for which transactions can be submitted from any of thousands of ATM locations worldwide. Businesses today rely on the mainframe to:
- Perform large-scale transaction processing (thousands of transactions per second).
- Support thousands of users and application programs concurrently accessing numerous resources.
- Manage terabytes of information in databases.
- Handle large-bandwidth communication.
The roads of the information superhighway often lead to a mainframe.
Typical Mainframe Workloads
Most mainframe workloads fall into one of two categories: batch processing or online transaction processing (including Web-based applications).
One key advantage of mainframe systems is their ability to process terabytes of data from high-speed storage devices and produce valuable output. For example, mainframe systems make it possible for banks and other financial institutions to perform end-of-quarter processing and produce reports that are necessary to customers (for example, quarterly stock statements or pension statements) or to the government (for example, financial results). With mainframe systems, retail stores can generate and consolidate nightly sales reports for review by regional sales managers. The applications that produce these statements are batch applications; that is, they are processed on the mainframe without user interaction. A batch job is submitted on the computer, reads and processes data in bulk and produces output, such as customer billing statements. An equivalent concept can be found in a UNIX script file or a Windows® command file, but a z/OS batch job might process millions of records. While batch processing is possible on distributed systems, it is not as commonplace as it is on mainframes because distributed systems often lack:
- Sufficient data storage.
- Available processor capacity, or cycles.
- Sysplex-wide management of system resources and job scheduling.
Mainframe operating systems are typically equipped with sophisticated job scheduling software that allows data centre staff to submit, manage, and track the execution and output of batch jobs. Batch processes typically have the following characteristics:
- Large amounts of input data are processed and stored, large numbers of records are accessed, and a large volume of output is produced.
- Immediate response time is usually not a requirement. However, batch jobs often must complete within a “batch window,” a period of less-intensive online activity, as prescribed by a service level agreement (SLA).
- Information is generated about large numbers of users or data entities (for example, customer orders or a retailer’s stock on hand).
- A scheduled batch process can consist of the execution of hundreds or thousands of jobs in a pre-established sequence.
During batch processing, multiple types of work can be generated. Consolidated information such as profitability of investment funds, scheduled database backups, processing of daily orders and updating of inventories are common examples.
Today’s mainframe can run traditional batch workloads written in languages such as COBOL, as well as batch UNIX and batch Java programs. These runtimes can execute standalone or participate collaboratively within a single job stream. This makes batch processing extremely flexible by integrating different execution environments centrally on a single server.
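As a hedged sketch of that flexibility, the hypothetical job stream below combines a traditional compiled program step with a Java step run under z/OS UNIX through the standard BPXBATCH utility (all job, program, and path names are invented):

```jcl
//NIGHTLY  JOB (ACCT01),'MIXED BATCH',CLASS=A,MSGCLASS=X
//*  Step 1: a traditional batch program (for example, compiled COBOL)
//UPDINV   EXEC PGM=INVUPDT
//TRANS    DD  DSN=PROD.ORDERS.DAILY,DISP=SHR
//MASTER   DD  DSN=PROD.INVENTORY.MASTER,DISP=OLD
//*  Step 2: a Java batch program run through BPXBATCH
//RPTGEN   EXEC PGM=BPXBATCH,PARM='SH java -jar /u/prod/reports.jar'
//STDOUT   DD  SYSOUT=*
//STDERR   DD  SYSOUT=*
```

Because both steps belong to one job, scheduling, logging, and restart handling are managed centrally.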
Transaction processing that occurs interactively with the end user is referred to as online transaction processing (OLTP). Typically, mainframes serve a vast number of transaction systems. These systems are often mission-critical applications that businesses depend on for their core functions. Transaction systems must be able to support an unpredictable number of concurrent users and transaction types. Most transactions are executed in short time periods—fractions of a second in some cases. One of the main characteristics of a transaction system is that the interactions between the user and the system are very short. The user will perform a complete business transaction through short interactions, with immediate response time required for each interaction. These systems are currently supporting mission-critical applications; therefore, continuous availability, high performance, and data protection and integrity are required. Online transactions are familiar to most people. Examples include:
- ATM transactions such as deposits, withdrawals, inquiries, and transfers.
- Supermarket payments with debit or credit cards.
- Purchase of merchandise over the Internet.
For example, inside a bank branch office or on the Internet, customers use online services when checking an account balance or moving funds between accounts. In fact, an online system performs many of the same functions as an operating system:
- Managing and dispatching tasks
- Controlling user access authority to system resources
- Managing the use of memory
- Managing and controlling simultaneous access to data files
- Providing device independence
Some industry uses of mainframe-based online systems include:
- Banks - ATMs, teller systems for customer service and online financial systems.
- Insurance - Agent systems for policy management and claims processing.
- Travel and transport - Airline reservation systems.
- Manufacturing - Inventory control, production scheduling.
- Government - Tax processing, license issuance and management.
Multiple factors can influence the design of a company’s transaction processing system, including:
- Number of users interacting with the system at any one time.
- Number of transactions per second (TPS).
- Availability requirements of the application. For example, must the application be available 24 hours a day, seven days a week, or can it be brought down briefly one night each week?
User interactions with the system vary from installation to installation, based on these factors. Before personal computers and intelligent workstations became popular, the most common way to communicate with online mainframe applications was through 3270 terminals. These devices were sometimes known as “dumb” terminals, but they had enough intelligence to collect and display a full screen of data rather than interacting with the computer for each keystroke, saving processor cycles. The characters were green on a black screen, so the mainframe applications were nicknamed “green screen” applications.
Many installations are now reworking their existing mainframe applications to include Web browser-based interfaces for users. This work sometimes requires new application development, but it can often be done with vendor software purchased to “re-face” the application.
Online transactions usually have the following characteristics:
- A small amount of input data, a few stored records accessed and processed, and a small amount of data as output.
- Immediate response time, usually less than one second.
- Large numbers of users involved in large numbers of transactions.
- Round-the-clock availability of the transactional interface to the user.
- Assurance of security for transactions and user data.
In a bank branch office, for example, customers use online services when checking an account balance or making an investment.
- A customer uses an ATM, which presents a user-friendly interface for various functions: withdrawals, account balance queries, deposits, transfers, or cash advances from a credit card account.
- Elsewhere in the same private network, a bank employee in a branch office performs operations such as consulting, fund applications, and money ordering.
- At the bank’s central office, business analysts tune transactions for improved performance. Other staff use specialised online systems for office automation to perform customer relationship management, budget planning, and stock control.
- All requests are directed to the mainframe computer for processing.
- Programs running on the mainframe computer perform updates and inquiries to the database management system (for example, DB2).
- Specialised disk storage systems store the database files.
Specialty Engines to Characterise Workload
A feature of the mainframe is that it gives customers the capability to tailor their server configuration to the type of workload they elect to run on it. The mainframe can configure CPUs as specialty engines to off-load specific work to separate processors. This enables the general-purpose CPUs to continue processing the standard workload, increasing the overall ability to complete more batch jobs or transactions. In these scenarios, the customer benefits from greater throughput and a lower overall total cost of ownership.
Roles in the Mainframe World
Mainframe systems are designed to be used by large numbers of people. Most of those who interact with mainframes are end users—people who use the applications that are hosted on the system. However, because of the large number of end users, applications running on the system, and the sophistication and complexity of the system software that supports the users and applications, a variety of roles are needed to operate and support the system.
In the IT field, these roles are referred to by a number of different titles; for example:
- System programmers
- System administrators
- Application designers and programmers
- System operators
- Production control analysts
In a distributed systems environment, many of the same roles are needed as in the mainframe environment. However, the job responsibilities are often not as well defined. Since the 1960s, mainframe roles have evolved and expanded to provide an environment in which the system software and applications can function smoothly and effectively and serve many thousands of users efficiently. While it may seem that the size of the mainframe support staff is large and unwieldy, the numbers become comparatively small when one considers the number of users supported, the number of transactions run, and the high business value of the work that is performed on the mainframe. This relates to the cost containment mentioned earlier. This section focuses mainly on the system programmer and application programmer roles in the mainframe environment. There are, however, several other important jobs involved in the “care and feeding” of the mainframe, and we touch on some of these roles to give you a better idea of what’s going on behind the scenes. Mainframe activities, such as the following, often require cooperation among the various roles:
- Installing and configuring system software.
- Designing and coding new applications to run on the mainframe.
- Introduction and management of new workloads on the system, such as batch jobs and online transaction processing.
- Operation and maintenance of the mainframe software and hardware.
The following sections cover each role in more detail. A feature of the mainframe is that it requires fewer personnel to configure and run than other server environments. Many of the administration roles are automated through runtime rules that allow the system to run without manual intervention. These rules are based on installation policies that are integrated with the configuration.
In a mainframe IT organisation, the system programmer plays a central role. The system programmer installs, customises, and maintains the operating system and also installs or upgrades products that run on the system. The system programmer might be presented with the latest version of the operating system to upgrade the existing systems or the installation might be as simple as upgrading a single program, such as a sort application. The system programmer performs such tasks as the following:
- Planning hardware and software system upgrades and changes in configuration.
- Training system operators and application programmers.
- Automating operations.
- Capacity planning.
- Running installation jobs and scripts.
- Performing installation-specific customisation tasks.
- Integration-testing the new products with existing applications and user procedures.
- System-wide performance tuning to meet required levels of service.
The system programmer must be skilled at debugging problems with system software. These problems are often captured in a copy of the computer's memory contents called a dump, which the system produces in response to a failing software product, user job, or transaction. Armed with a dump and specialised debugging tools, the system programmer can determine where the components have failed. When the error has occurred in a software product, the system programmer works directly with the software vendor’s support representatives to discover whether the problem’s cause is known and whether a patch is available. System programmers are needed to install and maintain the middleware on the mainframe, such as database management systems, online transaction processing systems and Web servers. Middleware is a software “layer” between the operating system and the end user or end user application. It supplies major functions that are not provided by the operating system. Major middleware products such as DB2, CICS, and IMS can be as multifaceted as the operating system itself.
For large mainframe shops, it is not unusual for system programmers to specialise in specific products, such as CICS, IMS or DB2.
The distinction between system programmer and system administrator varies widely among mainframe sites. In smaller IT organisations, where one person might be called upon to perform several roles, the terms may be used interchangeably. In larger IT organisations with multiple departments, the job responsibilities tend to be more clearly separated. System administrators perform more of the day-to-day tasks related to maintaining the critical business data that resides on the mainframe, while the system programmer focuses on maintaining the system itself. One reason for the separation of duties is to comply with auditing procedures, which often require that no one person in the IT organisation be allowed to have unlimited access to sensitive data or resources. Examples of system administrators include the database administrator (DBA) and the security administrator. While system programmer expertise lies mainly in the mainframe hardware and software areas, system administrators are more likely to have experience with the applications. They often interface directly with the application programmers and end users to make sure that the administrative aspects of the applications are met. These roles are not necessarily unique to the mainframe environment, but they are key to its smooth operation.
In larger IT organisations, the system administrator maintains the system software environment for business purposes, including the day-to-day maintenance of systems to keep them running smoothly. For example, the database administrator must ensure the integrity of the data that is stored in the database management systems and efficient access to it. Other examples of common system administrator tasks can include:
- Installing software.
- Adding, deleting and maintaining user profiles.
- Maintaining security resource access lists.
- Managing storage devices and printers.
- Managing networks and connectivity.
- Monitoring system performance.
In matters of problem determination, the system administrator generally relies on the software vendor support centre personnel to diagnose problems, read dumps, and identify corrections for cases in which these tasks aren’t performed by the system programmer.
The application designer and application programmer (or application developer) design, build, test, and deliver mainframe applications for the company’s end users and customers. Based on requirements gathered from business analysts and end users, the designer creates a design specification from which the programmer constructs an application. The process includes several iterations of code changes and compilation, application builds, and unit testing. During the application development process, the designer and programmer must interact with other roles in the enterprise. For example, the programmer often works on a team of other programmers who are building code for related application program modules. When completed, each module is passed through a testing process that can include function, integration, and system-wide tests. Following the tests, the application programs must be acceptance tested by the user community to determine whether the code actually satisfies the original user requirement.
In addition to creating new application code, the programmer is responsible for maintaining and enhancing the company’s existing mainframe applications. In fact, this is often the primary job for many of today’s mainframe application programmers. While mainframe installations still create new programs with Common Business Oriented Language (COBOL) or PL/I, languages such as Java and C/C++ have become popular for building new applications on the mainframe, just as they have on distributed platforms. Widespread development of mainframe programs written in high-level languages such as COBOL and PL/I continues at a brisk pace, despite rumours to the contrary. Many thousands of programs are in production on mainframe systems around the world, and these programs are critical to the day-to-day business of the corporations that use them. COBOL and other high-level language programmers are needed to maintain existing code and make updates and modifications to existing programs. Also, many corporations continue to build new application logic in COBOL and other traditional languages, and IBM continues to enhance their high-level language compilers to include new functions and features that allow those languages to continue to take advantage of newer technologies and data formats. These programmers can benefit from state-of-the-art integrated development environments (IDEs) to enhance their productivity. These IDEs include support for sophisticated source code search and navigation, source code refactoring, and syntax highlighting. IDEs also assist with defining repeatable build processing steps and identifying dependent modules which must be rebuilt after changes to source code have been developed.
The system operator monitors and controls the operation of the mainframe hardware and software. The operator starts and stops system tasks, monitors the system consoles for unusual conditions, and works with the system programming and production control staff to ensure the health and normal operation of the systems.
As applications are added to the mainframe, the system operator is responsible for ensuring that they run smoothly. New applications from the Applications Programming Department are typically delivered to the Operations Staff with a run book of instructions. A run book identifies the specific operational requirements of the application, which operators need to be aware of during job execution. Run book instructions might include, for example: application-specific console messages that require operator intervention, recommended operator responses to specific system events, and directions for modifying job flows to accommodate changes in business requirements.
The operator is also responsible for starting and stopping the major subsystems, such as transaction processing systems, database systems, and the operating system itself. These restart operations are not nearly as commonplace as they once were, as the availability of the mainframe has improved dramatically over the years. However, the operator must still perform an orderly shutdown and startup of the system and its workloads, when it is required. In case of a failure or an unusual situation, the operator communicates with system programmers, who assist the operator in determining the proper course of action, and with the production control analyst, who works with the operator to make sure that production workloads are completed properly.
The production control analyst is responsible for making sure that batch workloads run to completion without error or delay. Some mainframe installations run interactive workloads for online users, followed by batch updates that run after the prime shift, when the online systems are not running. While this execution model is still common, many companies with world-wide operations and live Internet-based access to production data are finding the “daytime online/night-time batch” model to be obsolete. However, batch workloads continue to be a part of information processing, and skilled production control analysts play a key role.
A common complaint about mainframe systems is that they are inflexible and hard to work with, specifically in terms of implementing changes. The production control analyst often hears this type of complaint, but understands that the use of well-structured rules and procedures to control changes strengthens the mainframe environment and helps to prevent outages. In fact, one reason that mainframes have attained a strong reputation for high levels of availability and performance is that there are controls on change, and it is difficult to introduce change without proper procedures.
Mainframe Programming Languages Overview
Computer programming languages can be classified as follows:
- Machine language, the 1st generation: direct machine code.
- Assembler, the 2nd generation: uses mnemonics to represent the instructions, which are later translated into machine language by an assembler program; Assembler language is an example.
- Procedural languages, the 3rd generation, also known as high-level languages (HLLs): for example, Pascal, FORTRAN, Algol, COBOL, PL/I, Basic, and C. The coded program, called a source program, has to be translated through a compilation step.
- Non-procedural languages, the 4th generation, also known as 4GLs: used for predefined functions in applications for databases, report generators, and queries; for example, RPG, CSP, and QMF™.
- Visual programming languages that use a mouse and icons, such as Visual Basic and Visual C++.
- Hypertext Markup Language (HTML), used for writing World Wide Web documents.
- Object-oriented languages (OO technology), such as Smalltalk, Java™, and C++.
- Other languages, for example languages for 3D applications.
Each computer language has evolved separately, driven by the creation of and the adaptation to new standards. The most widely used computer languages supported by z/OS include:
- Assembler
- COBOL
- PL/I
- C/C++
- Java
- CLIST
- REXX™
The choice of programming language depends on a number of factors. The following are some of the considerations when choosing a programming language to develop an application:
- What is the type and nature of the application (e.g. on-line or batch)?
- Is performance a consideration?
- What are the budget constraints for development and ongoing support?
- What are the time constraints of the project?
- Do we need to write some of the subroutines in different languages because of the strengths of a particular language versus the overall language of choice?
- Do we use a compiled or an interpreted language?
The sections that follow look at considerations for several languages commonly supported on the mainframe.
Assembler language is a symbolic programming language that can be used to code instructions instead of coding in machine language.
The Assembler language is the symbolic programming language that is closest to the machine language in form and content, and therefore is an excellent candidate for writing programs in which:
- You need control of your program, down to the byte or bit level.
- You must write subroutines for functions that are not provided by other symbolic programming languages, such as COBOL, FORTRAN, or PL/I.
Assembler language is made up of statements that represent either instructions or comments. The instruction statements are the working part of the language, and they are divided into the following three groups:
- A machine instruction is the symbolic representation of a machine language instruction in the instruction set. It is called a machine instruction because the assembler translates it into the machine language code that the computer can execute.
- An assembler instruction is a request to the assembler, to do certain operations during the assembly of a source module; for example, defining data constants, reserving storage areas, and defining the end of the source module.
- A macro instruction or macro is a request to the assembler program, to process a predefined sequence of instructions called a macro definition. From this definition, the assembler generates machine and assembler instructions, which it then processes as if they were part of the original input in the source module.
The assembler produces a program listing containing information that was generated during the various phases of the assembly process. It is really a compiler for Assembler language programs.
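The hypothetical fragment below, offered only as a sketch, shows the three kinds of instruction statements side by side; the labels and field names are invented, and the remark on each line indicates which kind of statement it is:

```hlasm
SAMPLE   CSECT                   assembler instruction: begin a section
         L     3,FLDA            machine instruction: load FLDA into R3
         A     3,FLDB            machine instruction: add FLDB to R3
         ST    3,SUM             machine instruction: store R3 into SUM
         WTO   'SUM UPDATED'     macro instruction: expands into further
*                                machine and assembler instructions
FLDA     DC    F'10'             assembler instruction: define constant
FLDB     DC    F'32'             assembler instruction: define constant
SUM      DS    F                 assembler instruction: reserve storage
         END   SAMPLE            assembler instruction: end of module
```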
With its origins in the late 1950s, COBOL is one of the oldest and most commonly used programming languages in the world. Common Business Oriented Language (COBOL) has its primary domain in business, finance, and administrative systems for companies and governments. It is an English-like programming language that is widely used to develop business-oriented applications in the area of commercial data processing. In addition to the traditional characteristics provided by the COBOL language, the current version of COBOL is capable, through COBOL functions, of integrating COBOL applications into Web-oriented business processes. With the capabilities of this release, application developers can do the following:
- Utilise new debugging functions in Debug Tool.
- Enable interoperability with Java™ when an application runs in an IMS™ Java-dependent region.
- Simplify the componentization of COBOL programs and enable interoperability with Java components across distributed applications.
- Promote the exchange and usage of data in standardised formats including XML and Unicode.
With Enterprise COBOL for z/OS and OS/390, COBOL and Java applications can interoperate in the e-business world. The COBOL compiler produces a program listing containing all the information that it generated during the compilation. The compiler also produces information for other processors, such as the binder. Before the computer can execute your program, the object deck has to be run through another process to resolve the addresses where instructions and data will be located. This process is called link-editing and is performed by the binder.
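As a flavour of COBOL's English-like syntax, here is a minimal, hypothetical program (the ENVIRONMENT DIVISION, one of COBOL's four standard divisions, is omitted because this example needs no configuration):

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. GREETCUST.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      *    A 12-character field with an illustrative initial value.
       01  WS-CUSTOMER-NAME   PIC X(12) VALUE 'A. CUSTOMER'.
       PROCEDURE DIVISION.
           DISPLAY 'HELLO, ' WS-CUSTOMER-NAME.
           STOP RUN.
```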
Programming Language/I (PL/I, pronounced “P-L one”) is a full-function, general-purpose, high-level programming language.
PL/I is suitable for the development of:
- Commercial applications
- Engineering/scientific applications
- Many other applications
The relationship between JCL and program files is the same for PL/I as it is for COBOL and other HLLs.
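A minimal, hypothetical PL/I program might look like the following (the names and values are illustrative only):

```pli
 AVERAGE: PROCEDURE OPTIONS(MAIN);
    /* two illustrative fixed-decimal values */
    DECLARE A   FIXED DECIMAL(7,2) INITIAL(12.50);
    DECLARE B   FIXED DECIMAL(7,2) INITIAL(7.50);
    DECLARE AVG FIXED DECIMAL(7,2);
    AVG = (A + B) / 2;                /* compute the mean  */
    PUT SKIP LIST('AVERAGE IS', AVG); /* write the result  */
 END AVERAGE;
```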
The C language contains a concise set of statements with functionality added through its library. This division enables C to be both flexible and efficient. An additional benefit is that the language is highly consistent across different systems. C is a programming language designed for a wide variety of programming purposes, including:
- System-level code
- Text processing
- Graphics
The process of compiling a C source program and then link-editing the object deck into a load module is basically the same as it is for COBOL. The relationship between JCL and program files is the same for C/C++ as it is for COBOL and other HLLs.
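That consistency across systems can be seen in a minimal, hypothetical example: the standard C source below compiles unchanged with the z/OS C compiler or with any other conforming compiler:

```c
#include <stdio.h>

int main(void)
{
    double balance = 250.75;    /* illustrative account balance */
    printf("Account balance: %.2f\n", balance);
    return 0;
}
```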
Java™ is an object-oriented programming language developed by Sun Microsystems Inc. Java can be used for developing traditional mainframe commercial applications as well as Internet and intranet applications that use standard interfaces.
Java is an increasingly popular programming language used for many applications across multiple operating systems. IBM® is a major supporter and user of Java across all of the IBM computing platforms, including z/OS®. The z/OS Java products provide the same, full function Java APIs as on all other IBM platforms. In addition, the z/OS Java licensed programs have been enhanced to allow Java access to z/OS unique file systems. Programming languages such as Enterprise COBOL and Enterprise PL/I in z/OS provide interfaces to programs written in Java. These languages provide a set of interfaces or facilities for interacting with programs written in Java.
The various Java Software Development Kit (SDK) licensed programs for z/OS help application developers use the Java APIs for z/OS, write or run applications across multiple platforms, or use Java to access data that resides on the mainframe. Some of these products allow Java applications to run in only a 31-bit addressing environment. However, with 64-bit SDKs for z/OS, pure Java applications that were previously storage-constrained by 31-bit addressing can be executed in a 64-bit environment. Also, some mainframes support a special processor for running Java applications called the zSeries® Application Assist Processor (zAAP). Programs can be run interactively through z/OS UNIX® or in batch.
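As a minimal, hypothetical illustration, the class below is ordinary, platform-neutral Java; once compiled, the same bytecode can run under z/OS UNIX, in batch, or on any other platform with a Java runtime:

```java
// Hypothetical example: the class and variable names are invented.
public class BalanceReport {
    public static void main(String[] args) {
        double balance = 250.75;    // illustrative account balance
        System.out.printf("Account balance: %.2f%n", balance);
    }
}
```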
The CLIST and REXX™ languages are the two command languages available from TSO/E. The term CLIST (pronounced "see list") stands for command list; it is called this because the most basic CLISTs are lists of TSO/E commands. The CLIST language enables you to work more efficiently with TSO/E. The CLIST language is an interpreted language. Like programs in other high-level interpreted languages, CLISTs are easy to write and test. You don’t have to compile or link-edit them. To test a CLIST, you simply correct any errors that might occur and run it until the program runs without error.
When you invoke a CLIST, it issues the TSO/E commands in sequence. The CLIST programming language is used for:
- Performing routine tasks (such as entering TSO/E commands).
- Invoking other CLISTs.
- Invoking applications written in other languages.
- ISPF applications (such as displaying panels and controlling application flow).
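A minimal, hypothetical CLIST might look like the following; when invoked, the interpreter simply issues each statement in sequence (the messages are invented):

```clist
PROC 0
/* A trivial CLIST: write a message, list catalog entries, finish */
WRITE LISTING YOUR CATALOGED DATA SETS ...
LISTCAT
WRITE ... DONE
```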
The Restructured Extended Executor (REXX™) language is a procedural language that allows programs and algorithms to be written in a clear and structured way. It can be either interpreted or compiled. An interpreted language differs from languages such as COBOL in that it is not necessary to compile a REXX command list before executing it. However, you can choose to compile a REXX command list before executing it to reduce processing time.
The REXX programming language is typically used for:
- Performing routine tasks, such as entering TSO/E commands.
- Invoking other REXX execs.
- Invoking applications written in other languages.
- ISPF applications (displaying panels and controlling application flow).
- One-time quick solutions to problems.
- System programming.
- Wherever another compiled HLL could otherwise be used.
The structure of a REXX program is simple. It provides a conventional selection of control constructs. For example, these include IF... THEN... ELSE... for simple conditional processing, SELECT... WHEN... OTHERWISE... END for selecting from a number of alternatives, and several varieties of DO... END for grouping and repetitions. No GOTO instruction is included, but a SIGNAL instruction is provided for abnormal transfer of control such as error exits and computed branching. The relationship between JCL and program files is the same for REXX as it is for COBOL and other HLLs.
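The short, hypothetical exec below sketches those constructs; under TSO/E, a REXX program conventionally begins with a comment containing the word REXX:

```rexx
/* REXX: a hypothetical exec showing DO, IF, and SELECT */
do i = 1 to 3                        /* DO ... END for repetition  */
  if i = 1 then
    say 'first pass'                 /* IF ... THEN ... ELSE       */
  else
    select                           /* SELECT ... WHEN ... END    */
      when i = 2 then say 'second pass'
      otherwise say 'pass number' i
    end
end
```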