Systems Management: Application Response Measurement (ARM) API
Copyright © 1998 The Open Group
Testing Your Instrumentation
Procedure
The following tasks are recommended for testing your instrumentation
after you have included the ARM API calls in your program.
- Link to the NULL library that is part of the ARM SDK. If the link
fails, it means that you are not linking to the correct library, or
you are using incorrect names or parameters in at least one of the ARM
API calls.
- Once you can link successfully, run your application, including
the calls to the ARM API, and verify that your application performs
correctly. No testing of the API calls is done beyond the linking
parameters, because the NULL library simply returns zero every time it
is called. Running the application is useful to ensure that you did not
inadvertently alter the program in a way that affects its basic
function.
- Assuming you have compiled the logging agent source, link to the
resulting logging agent. Run your application, including the calls to
the ARM API, and verify that your application performs correctly.
- Manually review the log created by the logging agent to verify that
the correct parameters are passed on each call. These parameters
include transaction ids that connect start calls to the correct
transaction class, start handles that connect stop calls to the correct
start calls, and any of the optional parameters. Optional advanced
parameters include correlators that indicate the parent/child
relationship between transactions and components, and metrics about
the transaction or application state.
After your application has run for a considerable period of time in a
simulated production environment, search the log for error messages
(identified by ERROR in the text) and informative messages (identified
by INFO in the text). Upon successful completion of this test, you
should be confident that your ARM API calls are correct. A sample log
is provided in Logging Agent Sample Output.
- Link to a performance measurement product (if available) and run the
application under typical usage scenarios. This tests the entire
system of application plus management tools.
Logging Agent Sample Output
17:47:39.sss: arm_init: Application <Appl_0> User <User_0> = Appl_id <1>
17:47:39.sss: arm_getid: Application <Appl_0> User <User_0> Transaction <Tran_0>
Detail <This is transaction type 0>
17:47:39.sss: arm_getid: Application <Appl_0> User <User_0> Transaction <Tran_0>
= Tran_id <1>
17:47:39.sss: arm_getid: Application <Appl_0> User <User_0> Transaction <Tran_0>
Metric Field <1> Type <1> Name <This is a Counter32 user metric >
17:47:39.sss: arm_start: Application <Appl_0> User <User_0> Transaction <Tran_0>
= Start_handle <1>
17:47:39.sss: arm_start: Application <Appl_0> User <User_0> Transaction <Tran_0>
Start_handle <1> Metric < This is a Counter32 user metric > : <0>
17:47:40.sss: arm_update: Application <Appl_0> User <User_0> Transaction <Tran_0>
Start_handle <1> Metric < This is a Counter32 user metric > : <2>
17:47:41.sss: arm_stop: Application <Appl_0> User <User_0> Transaction <Tran_0>
Start_handle <1> Status <0>
17:47:41.sss: arm_stop: Application <Appl_0> User <User_0> Transaction <Tran_0>
Start_handle <1> Metric < This is a Counter32 user metric > : <4>
17:47:41.sss: arm_end: Application <Appl_0> User <User_0> appl_id <1>