UMA Data Pool Definition (DPD)
Copyright © 1997 The Open Group
Introduction
Purpose
This document is one of a family of documents that comprise the
Universal Measurement Architecture (UMA), and which define
interfaces and data formats for Performance Measurement.
UMA was originally defined by the Performance Management Working Group (PMWG)
and subsequently adopted by The Open Group.
This document defines a performance data pool
for the analysis and management of computer systems,
and an organisation to facilitate the collection and use of
such data.
The UMA is defined in the following documents:
-
Guide to the Universal Measurement Architecture (see reference UMA).
This document provides an overview of the UMA.
-
UMA Measurement Layer Interface Specification (see reference MLI).
This document defines functional characteristics
for a high-level open Application
Program Interface (API) to be used by Measurement Application Programs
(MAPs) to request and receive data.
It also defines header formats to be appended to the data captured
by a low-level Data Capture Interface (DCI).
-
UMA Data Capture Interface Specification (see reference DCI).
This document defines a standard programming interface for capturing
data provided by systems and applications.
-
UMA Datapool Specification (this document).
Audience
The metrics defined in this document span a wide range of uses.
The audience for these metrics
ranges from the end-user to the system developer:
-
End-users (customers) are concerned about adequate response
time for their particular application (that is, productivity).
-
The performance analyst/engineer uses these metrics for
modelling the performance of new systems/applications or changes
to existing ones,
tuning the overall performance of a system or an application to
a particular customer environment, and
measuring current capacity and predicting future capacity needs.
-
Data centres and MIS organisations, being service oriented,
are concerned about the quantity and quality of the computing services
provided under Service Level Agreements with their customers.
In this case, the metrics are used for accounting,
real-time monitoring to detect and correct poor service,
tracking service quality with control charts of the key metrics
specified in Service Level Agreements with customers,
and workload characterisation and balancing.
-
Hardware and software vendors need to assure the performance
of their products during development and after release to the market.
-
Performance management application vendors need standard metrics
with an open application interface to make it economically feasible
to develop such products.
Scope
The metrics defined in this document attempt to meet the data
needs of the various audiences mentioned above.
Although the metrics are heavily influenced by the currently
available measurements,
an attempt is made to recommend new metrics to correct the
deficiencies experienced
with existing technology.
Metrics are grouped into "Classes" and "Subclasses" based on their
functionality and content. Furthermore, to reflect the current technology
and to accommodate future growth, each metric is assigned a "Level"
of maturity. Specifically, each metric belongs to one of the following
four categories:
-
Level 0
-
Level 1
-
Optional
-
Platform/Vendor Specific.
The first three categories are part of the Datapool Standard.
The Level 0 specification is an attempt to formalise
existing common practice, and should be implementable on the bulk of the UNIX
installed base, using OS releases that were available in 1995.
The Level 1 specification is intended to provide direction for OS vendors, and defines
a common set of metrics that are needed to implement performance management
tools.
Additional details on the levels are found in Chapter 2.
This document defines no interfaces or other architecture, only data
and a data organisation.
Performance and capacity management of operating
systems has traditionally been considered "internal" to the
operating system, and as such differs from one operating system to another
and from one implementation to another. Most
operating systems have, as a matter of necessity,
performance analysis modules, narrowly targeted at
the type of hardware, software and networking
facilities implemented within the system.
Most operating systems provide ad hoc or specially
tailored performance metrics. Some of these tools
are developed as internal support tools for
benchmarking, or on demand from performance analysts
and capacity planners. These tools are generally
confined to a single machine and cannot be
interrogated remotely.
The new era of networking and interoperability
views performance management and capacity planning
from the user's perspective. Multiple machines and
operating systems can be involved in the
interaction with the user. This approach requires
capture and presentation of performance metrics to
be clearly defined and portable between platforms
and operating systems.
In addition, the data used in this specification
is presented in as vendor- and implementation-independent
a manner as possible; however,
a mechanism is provided for vendor data extensions.
Conformance
Support for Datapool Level 0 is mandatory, while
support for higher levels is optional.
A claim of conformance to a level higher than zero means that
all metrics defined as mandatory at that level must be provided.