Velocity Software, Inc. is recognized as a leader in the performance measurement of z/VM and Linux on z. The Velocity Performance Suite consists of a set of tools that enable installations running z/VM to manage Linux and z/VM performance. In addition, many components of server farms can be measured and analyzed. Performance data can be viewed in real time through either 3270 or a browser. The CLOUD Implementation (zPRO) component is designed for full cloud PaaS implementation, as well as to extend the capabilities of the z/VM sysprog (system programmer) to the browser world; this feature moves system management to the point-and-click crowd. Archived data and reports can be kept available for long-term review and reporting using zMAP. The zVPS (formerly ESALPS) components are: zMON (formerly ESAMON, real-time display of performance data), zTCP (formerly ESATCP, SNMP data collection), zMAP (formerly ESAMAP, historical reporting and archiving), zVWS (formerly ESAWEB, a z/VM-based web server), zTUNE (a subscription service), zVIEW (formerly SHOWCASE, web-based viewing of performance data), and zPRO (new to the quality line of Velocity Software products). Velocity continues to work with other software vendors to ensure a smooth interface with or from other products such as VM:Webgateway, CA-Webgateway, EnterpriseWeb, MXG, and MICS. Velocity Software remains the leader and innovator in z/VM performance, Linux performance, and the management of cloud computing.

The z/VM Tuning Reference, by Velocity Software

The information and suggestions contained in this document are provided on an as-is basis without any warranty either expressed or implied. The use of this information or the implementation of any of the suggestions is at the reader's own risk. The evaluation of the information for applicability is the reader's responsibility. Velocity Software may make improvements and/or changes to this publication at any time.

Overview

This reference guide is extracted from Velocity Software's presentation on configuration guidelines. It provides high-level configuration and tuning recommendations whose results can be measured using zVPS. Most installations will see significant benefit from the recommendations suggested here. Installations with more complex requirements will want to evaluate the recommendations based on measurements of their particular systems.

This abbreviated Tuning Guide discusses the following topics from the perspective of both a traditional z/VM environment and a Linux server farm environment:

For performance questions or further information about evaluating these recommendations in your z/VM Linux environment, please contact Barton Robinson of Velocity Software at:

Velocity Software, Inc.
PO Box 390640
Mountain View, CA 94039-0640
650-964-8867


DASD Subsystem Performance Summary

DASD Configuration Guidelines for z/VM:

Do NOT combine spool, paging, TDISK, and minidisks at the volume level (this means dedicated volumes!). Page and spool use algorithms designed to minimize seeks and overhead within their areas. Putting either page or spool on the same volume as any other active area will result in contention and overhead.

Furthermore, multiple page or spool allocations should not reside on the same volume.

TDISK is formatted regularly and should not be assigned to the same volume as data with a performance requirement.

z/OS and VM data should be segregated at the control unit level to avoid error recovery complications and to reduce performance spikes when z/OS runs I/O intensive batch jobs.
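
A quick way to verify that page, spool, and TDISK space really does live on dedicated volumes is to display the allocation map of the CP-owned volumes (a privileged CP command, shown here as a minimal check):

CP QUERY ALLOC MAP

Any volume whose map shows PAGE or SPOOL extents mixed with other allocation types is a candidate for reorganization.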

DASD Planning Using Access Density

When allocating DASD using the access density methodology, consider the following planning guidelines. Note that they are rules of thumb, not hard-and-fast rules; installations should always review performance to ensure that the users' needs have been met. Access density is defined as the number of I/Os expected per second per gigabyte of data. The ESADSD6 report provides data access densities at the device level. The following recommendations for current DASD technology are intended to keep device busy below 10%. This number is intentionally conservative, so that the guideline still produces positive results when estimates are wrong.

For some Linux volumes and for z/VM paging volumes, the I/Os are larger and longer in duration. Plan for Linux volumes and page volumes to have service times of 1-2 ms per I/O; with the 10% device busy target, a device should therefore be targeted at 50-100 I/Os per second. Traditional 4K I/Os also have service times in the 1-2 ms range, which means 50 I/Os per second per volume is a reasonable target.
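
As a worked example of the arithmetic (the data size and access density figures below are hypothetical; real access densities come from the ESADSD6 report):

expected I/O rate = data size (GB) x access density (I/O per second per GB)
200 GB x 0.5 I/O per second per GB = 100 I/O per second
100 / 50 I/O per second per volume = at least 2 volumes for this data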

SCSI is currently not suited for high access data or paging due to reduced performance.

Control Unit Planning

He with the most cache wins. In a Linux environment, the issue is often the non-volatile write cache (NVS), as Linux will buffer writes and then write out data in large bursts that can overflow the write cache. Ensure there is a mechanism for detecting NVS-full conditions. Minimizing Linux server storage sizes also reduces the potential for this problem by reducing the storage available to cache write data.

Channels

Channels today rarely impact performance. PAV/HyperPAV is always good.

Measuring the DASD Subsystem

Each of the above tuning recommendations can be evaluated using the following zVPS reports:


Storage Subsystem Performance

Storage requirements should be reduced as much as possible to avoid unnecessary paging delays. Linux adds several guidelines. Plan on 2GB of storage for z/VM, MDC, and the infrastructure (TCPIP, DIRMAINT, zVPS).

Linux Storage Planning Guidelines

With Linux, the over-commit ratio is the planning target. If you plan for 20 Linux servers of 1 GB each, with a target over-commit ratio of 2, then 12 GB is required: 20 servers times 1 GB, divided by 2, plus the 2 GB of storage for z/VM. For WAS and Domino environments, an over-commit target of 1.5 is reasonable. For Oracle and other virtualization-friendly applications, an over-commit of 3 is reasonable.
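
Expressed as a formula, using the numbers above:

real storage required = (servers x server storage size) / over-commit ratio + 2 GB for z/VM and infrastructure
(20 x 1 GB) / 2 + 2 GB = 12 GB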

To fit more servers into existing storage, decrease Linux server storage sizes until the servers just start to swap, then repeat. This is the largest tuning knob available for improving storage utilization.
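
One way to watch for the onset of swapping from inside Linux itself (standard Linux tooling, complementing the zVPS reports):

vmstat 5

Nonzero values in the si and so columns indicate swap-in and swap-out activity; light, occasional swapping is the goal, while sustained swapping suggests the server has been shrunk too far.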

System Settings

Many SRM settings are no longer useful.

Storage Analysis

Use the following reports to evaluate storage.


Paging Subsystem Performance

Review the options for reducing storage requirements BEFORE analyzing or enhancing the paging subsystem. Many times, storage requirements can be reduced so that paging requirements drop significantly; when that is the case, any time spent on the paging subsystem will have been wasted.


Paging Configuration Requirements

The following requirements for the paging subsystem are listed in order. Ensure page packs are dedicated.
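
Page space allocation and utilization can be checked directly with a privileged CP command (a minimal check):

CP QUERY ALLOC PAGE

A widely cited rule of thumb is to keep allocated page space well under 50% utilized; verify against current IBM guidance for your z/VM level.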


Spooling Configuration Requirements

Spool very rarely impacts performance. Have a sufficient number of spool volumes to keep each device less than 20 percent busy during peak periods. Maintain sufficient space to ensure console logs are available for problem determination.
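
Spool utilization can be checked the same way (a minimal check):

CP QUERY ALLOC SPOOL

The output shows per-volume spool usage, so both the spread across volumes and the space remaining for console logs can be tracked.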


Paging/Spooling Analysis

The following reports should be used for analyzing the paging and spooling subsystems:


Processor Subsystem Performance

Moore's law is dead; long live the mainframe. Processor cycle speeds have not changed significantly in several generations, so the objective now is to get more work done with fewer cycles. Reducing CPU requirements from a system tuning perspective can be done with the following actions.


System Settings

Many guidelines for SRM settings have changed over the years, so for those who have their own "SRMSET EXEC" that has been carried forward for years, there may be some new recommendations.

Many of the old guidelines had to do with controlling access to the dispatch list: when resources were constrained, virtual machines would be delayed on the eligible list. This function no longer exists.
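
Before carrying an old SRMSET EXEC forward, display the SRM values actually in effect (a minimal check; the appropriate values depend on your z/VM level and workload):

CP QUERY SRM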


Processor Performance Analysis

Use the following reports to evaluate the processor performance:


Minidisk Caching

There are three areas of interest for MDC.

For CMS and shared Linux disk workloads, analysis of several systems has shown a pattern of diminishing returns from MDC; the largest gain is from the first 100 MB. Note that Linux servers sharing one or two disks can avoid I/O with MDC. In no case should the z/VM Control Program (CP) be allowed to determine how much storage is to be used for MDC: many case studies have shown that CP will cause paging spikes by allocating too much storage to MDC. The following command should be issued to control MDC storage, where the maximum sets a reasonable limit on the size of MDC:

SET MDC STORAGE 0M 256M

For VSE systems that benefit from the MDC track read (meaning that for every I/O to disk, the full track is read and cached), ensure that MIN and MAX are the same to maintain consistent performance:

SET MDC STORAGE 1024M 1024M
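
The limits in effect, and the storage MDC is actually using, can then be verified (a minimal check):

CP QUERY MDCACHE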

Measuring MDC

The following reports can be used for analyzing different aspects of MDC performance:


Linux Configuration Recommendations

These recommendations are considered best practices and have been validated in hundreds of Linux installations.


Service Machine Performance Summary


Configuring TCPIP for z/VM

TCPIP should have the following options set to provide optimum service. The SHARE setting can be modified later to fit requirements if TCPIP's requirements are very large.
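
As an illustration of the kind of settings involved (the SHARE value below is an example only, not a specific recommendation; tune it to your requirements):

CP SET SHARE TCPIP ABSOLUTE 3%
CP SET QUICKDSP TCPIP ON

The equivalent SHARE and OPTION QUICKDSP statements in the TCPIP directory entry make the settings permanent.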

Tuning z/VM Database Service Machines

Database service machines, such as SQL, are a shared resource and should have the following options set to provide optimum service. This does not include Linux servers, unless they are shared as a resource by many other servers.
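
A comparable sketch for a database service machine (SQLSRV is a hypothetical userid; the RELATIVE share shown is illustrative):

CP SET SHARE SQLSRV RELATIVE 1000
CP SET QUICKDSP SQLSRV ON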

Measuring Service Machines

Each of the above tuning recommendations can be evaluated using the following zVPS reports:



Tuning Traditional CMS Workloads

The following guidelines are for traditional CMS workloads and have no impact on Linux server farm workloads.

Performance Analysis

Use the following reports to evaluate impacts of these functions on performance:



Functional Requirements for Managing Linux Performance Under z/VM


What Every IT Professional Should Know

Achieving these results introduces the following challenges:


