Getting to Know EMDIAG: repvfy execute optimize

In my group, we work with a lot of customers with very large EM environments, in the range of 2000+ agents. So as you can imagine, there’s a bit of optimizing that needs to be done to account for these numbers.

A few of these standard tweaks have been bundled into the repvfy execute optimize command. You can make all of these changes individually, but if you want them all done at once, optimize is your tool.

There are 3 categories of optimization handled at this point: internal tasks, repository settings and the target system. The script first evaluates the size of your repository based on the number of agents, and from there determines which optimizations need to be done right away or recommended for future implementation.
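
If you want to run it yourself, repvfy lives in the emdiag directory under the OMS home (the bin path below is an assumption based on a typical EMDIAG install; adjust it to your own layout). Capturing the output in a log file makes it easy to review the recommendations later:

$ cd $OMS_HOME/emdiag/bin
$ ./repvfy execute optimize | tee optimize_$(date +%Y_%m_%d).log

The command prompts for the SYSMAN password before it starts making any changes.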

Internal Task Tuning

Enterprise Manager uses short and long task workers, depending on the task activity. We typically recommend 2 workers of each type for most larger systems, so that is what repvfy execute optimize sets. Smaller systems are usually fine with the default setting of 1 each. You can view the configuration in EM on the Manage Cloud Control -> Repository page. Here you can also configure the number of short workers, but not the long workers. If you see a high collection backlog, that’s an indication that you need additional task workers.


The next step is to evaluate the current settings of the job system and ensure that enough connections are available for it. This change is not implemented automatically; the emctl command to make it is printed out for you, as it requires an OMS restart to take effect. Recommendations for a large job system load can be found in the Sizing chapter of the Advanced Installation Guide. Note that increasing the number of connections may also require an increase in the database processes parameter.
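
As a sketch of what that follow-up looks like (the values 72 and 600 below are placeholders; use the numbers optimize prints for your environment):

-- Check the current processes limit in the repository database
SQL> show parameter processes

-- Raise it if needed; this takes effect after a database restart
SQL> alter system set processes=600 scope=spfile;

$ emctl set property -name oracle.sysman.core.conn.maxConnForJobWorkers -value 72 -module emoms
$ emctl stop oms; emctl start oms     # the property change requires an OMS bounce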

Repository Settings Tuning

EM tracks system errors in one of its tables. In larger systems, the MGMT_SYSTEM_ERROR_LOG table can become quite large over the 31-day default retention. The optimize script reduces the log retention to 7 days for normal operation.
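
If you’re curious how big that error log actually is in your repository, a quick check from SQL*Plus (connected as a DBA user) might look like this:

SQL> select count(*) from sysman.mgmt_system_error_log;
SQL> select round(bytes/1024/1024) size_mb from dba_segments where owner='SYSMAN' and segment_name='MGMT_SYSTEM_ERROR_LOG';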

There are also various levels of tracing enabled by default; if you’re not actually using the traces, they can generate a lot of extra activity during normal operations. Tracing is turned off by the optimize command. It can be re-enabled at any time using the repvfy send start_trace -name <name> and repvfy send start_repotrace commands.
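
For example, to turn a specific trace back on after optimize has switched everything off (the module name EM.GDS is just an illustration, borrowed from the sample output below):

$ repvfy send start_trace -name EM.GDS
$ repvfy send start_repotrace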

Finally, this step looks for any invalid SYSMAN objects and recompiles them, then checks for stale optimizer statistics and makes a recommendation as needed.
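
If you want to see what this step will find before running it, you can list the invalid objects yourself. The recompile and statistics calls below are standard Oracle and EM routines, shown here as a sketch:

-- List invalid objects in the SYSMAN schema
SQL> select object_name, object_type from dba_objects where owner='SYSMAN' and status='INVALID';

-- Recompile them all in one go
SQL> exec UTL_RECOMP.RECOMP_SERIAL('SYSMAN');

-- Refresh the SYSMAN statistics, as optimize recommends
SQL> exec emd_maintenance.gather_sysman_stats_job(p_gather_all=>'YES');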

System Tuning

After an EM outage or downtime, all the agents will attempt to upload their data and update their status (or heartbeat) with the OMS. There’s a grace period during which no alerts are sent. In larger systems, the default grace period may not be long enough for all agents to check in before alerts start going out, so the optimize command increases it.

In 12.1.0.3 and higher, you can also increase the number of threads that perform the ping heartbeat tasks. This should be done if you have more than 2000 agents per OMS. The optimize command will make this calculation for you and recommend the appropriate emctl command to set the heartbeatPingRecorderThreads property. Recommendations for a large number of agents can be found in the Sizing chapter of the Advanced Installation Guide.
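
A quick way to check whether you’re near that 2000-agent mark is to count the agent targets in the repository (mgmt_targets is the standard SYSMAN targets table; divide by the number of OMS servers for a per-OMS figure):

SQL> select count(*) agent_count from sysman.mgmt_targets where target_type='oracle_emd';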

The optimize command will only output the items that require attention, so not every item will appear in the output on every site.
The recommended values reported in the output are specific to THAT environment and should not simply be copied over to another one. To tune another EM environment, run the optimize script on that environment.

Sample output from a small EM system:

bash-4.1$ ./repvfy execute optimize

Please enter the SYSMAN password:
SQL*Plus: Release 11.1.0.7.0 - Production on Thu Jul 9 07:59:35 2015

Copyright (c) 1982, 2008, Oracle. All rights reserved.

SQL> Connected.

Session altered.
Session altered.

========== ========== ========== ========== ========== ========== ==========
== Internal task system tuning ==
========== ========== ========== ========== ========== ========== ==========

- Setting the number of short workers to 2 (1->2)
- Setting the number of long workers to 2 (1->2)
========== ========== ========== ========== ========== ========== ==========
========== ========== ========== ========== ========== ========== ==========
== Job system tuning ==
========== ========== ========== ========== ========== ========== ==========

- On each OMS, run this command:
  $ emctl set property -name oracle.sysman.core.conn.maxConnForJobWorkers -value 72 -module emoms
  This change will require a bounce of the OMS

========== ========== ========== ========== ========== ========== ==========
========== ========== ========== ========== ========== ========== ==========
== Repository tuning ==
========== ========== ========== ========== ========== ========== ==========
- Setting retention for MGMT_SYSTEM_ERROR_LOG table to 7 days (31->7)

- Disabling PL/SQL tracing for module (EM.GDS)
- Disabling PL/SQL tracing for module (EM_DBM)

- Disabling repository metric tracing for ID (1234)

- Recompiling invalid object (foo,TRIGGER)
- Recompiling invalid object (bar,CONSTRAINT)

- Stale CBO statistics in the repository. Gather statistics for the SYSMAN schema
  Command to use:
  $ repvfy send gather_stats
  Or:
  SQL> exec emd_maintenance.gather_sysman_stats_job(p_gather_all=>'YES');

========== ========== ========== ========== ========== ========== ==========
========== ========== ========== ========== ========== ========== ==========
== Target system tuning ==
========== ========== ========== ========== ========== ========== ==========

- Setting the PING grace period to (90) (60->90)

- Set the parameter oracle.sysman.core.omsAgentComm.ping.heartbeatPingRecorderThreads to 3
  $ emctl set property -module emoms -name oracle.sysman.core.omsAgentComm.ping.heartbeatPingRecorderThreads -value 3

========== ========== ========== ========== ========== ========== ==========
not spooling currently

Getting to Know EMDIAG: repvfy diag all

If you’ve worked with me, called me about a problem with your Enterprise Manager, or attended any of my sessions, you’ve probably heard me talk about EMDIAG. One of the most popular components of EMDIAG is the repvfy tool: basically a series of scripts and queries that pull data from the repository to help diagnose configuration and data issues. You can get more details on downloading and installing it in EMDIAG Troubleshooting Kits Master Index (Doc ID 421053.1).

There are 3 components that make up EMDIAG: repvfy, omsvfy and agtvfy. Today, the feature I’m introducing you to lives in repvfy, the component that pulls data from the EM repository.

repvfy diag all  

This is my go-to these days. Instead of telling a customer I need X, Y, Z and A, B, C, I just get this. diag all runs through the various EMDIAG reports that are frequently used when troubleshooting issues with support and development, and zips them all into a single file that can then be uploaded easily. There’s also a shorter version, repvfy diag core.
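
Running it is a one-liner, and the resulting zip lands in the emdiag tmp directory, as you can see below ($OMS_HOME here stands in for your OMS installation directory):

$ ./repvfy diag all
$ ls -t $OMS_HOME/emdiag/tmp/repvfy_*.zip | head -1     # newest archive, ready to upload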

adding: advisor_day_2015_07_06_084304.log (deflated 81%)
adding: advisors_2015_07_06_084304.log (deflated 83%)
adding: agent_health_2015_07_02_120925.log (deflated 83%)
adding: analyze.log (deflated 83%)
adding: backlog_2015_07_06_084304.log (deflated 84%)
adding: body1.log (stored 0%)
adding: body2.log (stored 0%)
adding: body3.log (stored 0%)
adding: cursor_2015_07_06_084304.log (deflated 84%)
adding: custom_2015_07_06_084304.log (deflated 86%)
adding: deinstall.log (deflated 79%)
adding: details_2015_07_02_082451.log (deflated 82%)
adding: details_2015_07_02_082451.sql (deflated 68%)
adding: details_2015_07_06_084304.log (deflated 87%)
adding: details_2015_07_06_084304.sql (deflated 74%)
adding: errors_2015_07_06_084304.log (deflated 83%)
adding: install.log (deflated 80%)
adding: job_health_2015_07_06_084304.log (deflated 83%)
adding: loader_health_2015_07_06_084304.log (deflated 91%)
adding: metric_stats_2015_07_06_084304.log (deflated 92%)
adding: mtm_2015_07_06_084304.log (deflated 89%)
adding: notif_health_2015_07_06_084304.log (deflated 85%)
adding: performance_2015_07_06_084304.log (deflated 88%)
adding: ping_health_2015_07_06_084304.log (deflated 76%)
adding: pkg.log (deflated 62%)
adding: space_2015_07_06_084304.log (deflated 91%)
adding: system_2015_07_06_084304.log (deflated 86%)
adding: task_health_2015_07_06_084304.log (deflated 83%)
adding: upgrade2.log (stored 0%)
adding: verify.log (deflated 89%)
adding: verify_2015_07_02_082451.log (deflated 49%)
adding: verify_2015_07_06_084304.log (deflated 45%)
adding: views.log (deflated 82%)

File created: /u01/oracle/em12r5/oms/emdiag/tmp/repvfy_2015_07_06_084304.zip

So just what does it gather information about? Here’s a one-line summary of each report:

advisors  – ADDM, ASH and AWR reports from the repository database
agent_health – summary of deployed agents, plugins and targets as well as availability and ping statistics
backlog – statistics from dbms_scheduler, loader subsystem, job subsystem, notification subsystem and the task/worker subsystem
cursor – cursor parameters and statistics for EM SQL
custom – summary of EM customizations done
errors – targets, agents, plugins, metrics, collections, jobs in error
job_health – summary of job subsystem configuration, statistics and performance
loader_health – summary of loader subsystem configuration, statistics and performance
metric_stats – performance summary of repository, loader subsystem, purge policies and metrics including top targets and metrics
mtm – summary of Repository and OMS configuration, housekeeping jobs, agent and plugin deployments
notif_health – summary of notification subsystem configuration, statistics and performance
performance – performance summary of repository, OMS, agents and internal subsystems
ping_health – summary of agent ping jobs and communication
space – summary of schema statistics collections, table/index sizes and fragmentation
system – full configuration summary
task_health – summary of task subsystem configuration, statistics and performance
verify/details – the standard verification checks with detailed output

Depending on the issue you’re seeing, I’ll typically look at different reports. If you have problems with notifications, I’m obviously going to go through the notif_health report, and probably the backlog and job_health reports too. If I’m just trying to get a good understanding of how your system is built, what targets you manage and what you’re doing with them, I’d start with the system and custom reports.
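
You don’t have to unpack the whole archive to peek at a single report; standard unzip options will do (the file names follow the timestamped pattern shown in the listing above):

$ unzip -l repvfy_2015_07_06_084304.zip
$ unzip -p repvfy_2015_07_06_084304.zip 'notif_health_*.log' | less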

In future posts, we’ll break down some of these reports in detail, but that’s it for today!