::cisco::eem::event_register_appl sub_system 798 type 9999
#------------------------------------------------------------------
# EEM policy used for measuring the CPU performance of EEM policies.
#
# July 2005, Cisco EEM team
#
# Copyright (c) 2005-2008 by cisco Systems, Inc.
# All rights reserved.
#------------------------------------------------------------------
###
### Input arguments:
###
###     arg1 $iter                   - current iteration count
###
### The following EEM environment variables are used:
###
### _perf_iterations (mandatory)     - number of iterations over which we
###                                    will run our measurement.
###     Example:
###     event manager environment _perf_iterations 100
###
### _perf_fast (optional)            - if set to any value, run this policy
###                                    in high-performance mode.
###
### _perf_fast_no_refresh (optional)
###                                  - if _perf_fast is set, set this to
###                                    any value and environment variables
###                                    will not be refreshed.  By default
###                                    they will be refreshed upon each
###                                    event iteration.
###
### _perf_cmd1 (optional)            - optional non-interactive CLI command
###                                    to be executed as part of the
###                                    measurement test.
###     Example:
###     event manager environment _perf_cmd1 enable
###
### _perf_cmd2 (optional)            - optional non-interactive CLI command
###                                    to be executed as part of the
###                                    measurement test.
###                                    To use _perf_cmd2, _perf_cmd1 MUST
###                                    be defined.
###     Example:
###     event manager environment _perf_cmd2 show ver
###
### _perf_cmd3 (optional)            - optional non-interactive CLI command
###                                    to be executed as part of the
###                                    measurement test.
###                                    To use _perf_cmd3, _perf_cmd1 MUST
###                                    be defined.
###     Example:
###     event manager environment _perf_cmd3 show int counters protocol status
###
### Description:
###     Iterate through _perf_iterations of this policy.  It is up to the
###     user to calculate the average execution time based on the system
###     timestamps.  Optional commands _perf_cmd1, _perf_cmd2 and
###     _perf_cmd3 are executed if defined.
###
###     A value of 100 for _perf_iterations is a good starting point.
###
### Outputs:
###     Console output.
###
### Usage example:
###     >conf t
###     >service timestamps debug datetime msec
###     >event manager environment _perf_iterations 100
###     >event manager policy ap_perf_test_base_cpu.tcl
###     >event manager policy no_perf_test_init.tcl
###     >end
###
###     Oct 16 14:57:17.284: %SYS-5-CONFIG_I: Configured from console by console
###     >event manager run no_perf_test_init.tcl
###
###     Oct 16 19:32:02.772: %HA_EM-6-LOG:
###         eem_policy/no_perf_test_init.tcl: EEM performance test start
###     Oct 16 19:32:03.115: %HA_EM-6-LOG:
###         eem_policy/ap_perf_test_base_cpu.tcl: EEM performance test iteration 1
###     Oct 16 19:32:03.467: %HA_EM-6-LOG:
###         eem_policy/ap_perf_test_base_cpu.tcl: EEM performance test iteration 2
###     ...
###     Oct 16 19:32:36.936: %HA_EM-6-LOG:
###         eem_policy/ap_perf_test_base_cpu.tcl: EEM performance test iteration 100
###     Oct 16 19:32:36.936: %HA_EM-6-LOG:
###         eem_policy/ap_perf_test_base_cpu.tcl: EEM performance test end
###
### The user must calculate the total execution time and the average time
### of execution from the syslog timestamps.  In this example:
###     total time                    = 19:32:36.936 - 19:32:02.772 = 34.164 seconds
###     average script execution time = 34.164 / 100 = 341.64 milliseconds
###

# check that all the environment variables we need exist
# If any of them does not exist, print an error message and quit
if {![info exists _perf_iterations]} {
    set result \
        "Policy cannot be run: variable _perf_iterations has not been set"
    error $result $errorInfo
}

# ensure our target iteration count > 0
if {$_perf_iterations <= 0} {
    set result \
        "Policy cannot be run: variable _perf_iterations <= 0"
    error $result $errorInfo
}

# decide whether event_wait (high-performance mode) should refresh the
# EEM environment variables on each iteration
if {[info exists _perf_fast_no_refresh]} {
    set refresh_vars 0
} else {
    set refresh_vars 1
}

namespace import ::cisco::eem::*
namespace import ::cisco::lib::*

set done 0

while {$done == 0} {
    # query the event info
    array set arr_einfo [event_reqinfo]
    if {$_cerrno != 0} {
        set result [format "component=%s; subsys err=%s; posix err=%s;\n%s" \
            $_cerr_sub_num $_cerr_sub_err $_cerr_posix_err $_cerr_str]
        error $result
    }

    # data1 carries the iteration count published by the previous run
    set iter $arr_einfo(data1)
    set iter [expr {$iter + 1}]

    # if _perf_cmd1 is defined
    if {[info exists _perf_cmd1]} {
        # open the CLI library
        if {[catch {cli_open} result]} {
            error $result $errorInfo
        } else {
            array set cli1 $result
        }

        # execute the command defined in _perf_cmd1
        if {[catch {cli_exec $cli1(fd) $_perf_cmd1} result]} {
            error $result $errorInfo
        }

        # if _perf_cmd2 is defined
        if {[info exists _perf_cmd2]} {
            # execute the command defined in _perf_cmd2
            if {[catch {cli_exec $cli1(fd) $_perf_cmd2} result]} {
                error $result $errorInfo
            } else {
                set cmd_output $result
            }
        }

        # if _perf_cmd3 is defined
        if {[info exists _perf_cmd3]} {
            # execute the command defined in _perf_cmd3
            if {[catch {cli_exec $cli1(fd) $_perf_cmd3} result]} {
                error $result $errorInfo
            } else {
                set cmd_output $result
            }
        }

        # close the CLI library
        if {[catch {cli_close $cli1(fd) $cli1(tty_id)} result]} {
            error $result $errorInfo
        }
    }

    # log a message for this iteration
    set msg [format "EEM performance test iteration %s" $iter]
    action_syslog priority info msg $msg
    if {$_cerrno != 0} {
        set result [format "component=%s; subsys err=%s; posix err=%s;\n%s" \
            $_cerr_sub_num $_cerr_sub_err $_cerr_posix_err $_cerr_str]
        error $result
    }

    # use the iteration count from the previous run to determine when to end
    if {$iter >= $_perf_iterations} {
        # log the final message
        action_syslog priority info msg "EEM performance test end"
        if {$_cerrno != 0} {
            set result [format \
                "component=%s; subsys err=%s; posix err=%s;\n%s" \
                $_cerr_sub_num $_cerr_sub_err $_cerr_posix_err $_cerr_str]
            error $result
        }
        exit 0
    }

    # cause the next iteration to run by publishing the application event
    # this policy is registered for
    event_publish sub_system 798 type 9999 arg1 $iter
    if {$_cerrno != 0} {
        set result [format \
            "component=%s; subsys err=%s; posix err=%s;\n%s" \
            $_cerr_sub_num $_cerr_sub_err $_cerr_posix_err $_cerr_str]
        error $result
    }

    if {[info exists _perf_fast]} {
        # high-performance mode: report this event as complete, then wait
        # for the next published event in the same Tcl process instead of
        # letting the policy be rescheduled for every iteration
        event_completion status 0
        if {$iter < $_perf_iterations} {
            array set _event_state_arr [event_wait refresh_vars $refresh_vars]
            if {$_event_state_arr(event_state) != 0} {
                exit 0
            }
        } else {
            set done 1
        }
    } else {
        set done 1
    }
}
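#------------------------------------------------------------------
### Companion trigger policy (sketch):
###
### The usage example above registers and runs no_perf_test_init.tcl to
### kick off the measurement.  That policy is a separate file; the lines
### below are a minimal sketch of what it needs to do, assuming only the
### standard EEM Tcl calls already used in this file, and are kept as
### comments here:
###
###     ::cisco::eem::event_register_none
###
###     namespace import ::cisco::eem::*
###     namespace import ::cisco::lib::*
###
###     # give the user a starting timestamp to measure from
###     action_syslog priority info msg "EEM performance test start"
###
###     # publish the application event ap_perf_test_base_cpu.tcl is
###     # registered for; arg1 0 makes the first iteration log as 1
###     event_publish sub_system 798 type 9999 arg1 0
#------------------------------------------------------------------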
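#------------------------------------------------------------------
### Working out the average (sketch):
###
### As noted in the header, the average execution time comes from the
### syslog timestamps.  A small tclsh sketch using the sample timestamps
### above (assuming both fall on the same day), kept as comments here:
###
###     set start [expr {(19*3600 + 32*60 + 2) + 0.772}]   ;# 19:32:02.772
###     set end   [expr {(19*3600 + 32*60 + 36) + 0.936}]  ;# 19:32:36.936
###     set iterations 100
###     puts [format "average per iteration: %.2f ms" \
###         [expr {($end - $start) * 1000.0 / $iterations}]]
###     ;# -> average per iteration: 341.64 ms
#------------------------------------------------------------------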