A guide for configuring, tuning, and enhancing Workforce Central v6.2 from a performance perspective.
Document Revision: C
The information in this document is subject to change without notice and should not be construed as a commitment by Kronos Incorporated. Kronos Incorporated assumes no responsibility for any errors that may appear in this manual. This document or any part thereof may not be reproduced in any form without the written permission of Kronos Incorporated. All rights reserved. Copyright 2009, 2010, 2011. Altitude, Altitude Dream, Cambridge Clock, CardSaver, Datakeeper, Datakeeper Central, eForce, Gatekeeper, Gatekeeper Central, Imagekeeper, Jobkeeper Central, Keep.Trac, Kronos, Kronos Touch ID, the Kronos logo, My Genies, PeoplePlanner, PeoplePlanner & Design, Schedule Manager & Design, ShiftLogic, ShopTrac, ShopTrac Pro, StarComm, StarPort, StarSaver, StarTimer, TeleTime, Timekeeper, Timekeeper Central, TimeMaker, Unicru, Visionware, Workforce Accruals, Workforce Central, Workforce Decisions, Workforce Express, Workforce Genie, and Workforce TeleTime are registered trademarks of Kronos Incorporated or a related company. 
Altitude MPP, Altitude MPPXpress, Altitude Pairing, Altitude PBS, Comm.Mgr, CommLink, DKC/Datalink, eDiagnostics, Experts at Improving the Performance of People and Business, FasTrack, Hireport, HR and Payroll Answerforce, HyperFind, Kronos 4500 Touch ID, Kronos 4500, Kronos 4510, Kronos Acquisition, Kronos e-Central, Kronos KnowledgePass, Kronos TechKnowledgy, KronosWorks, KVC OnDemand, Labor Plus, Momentum Essentials, Momentum Online, Momentum, MPPXpress, Overall Labor Effectiveness, Schedule Assistant, Smart Scheduler, Smart View, Start Quality, Start WIP, Starter Series, StartLabor, Timekeeper Decisions, Timekeeper Web, VisionPlus, Winstar Elite, WIP Plus, Workforce Absence Manager, Workforce Acquisition, Workforce Activities, Workforce Analytics, Workforce Attendance, Workforce Central Portal, Workforce Connect, Workforce Employee, Workforce ESP, Workforce Forecast Manager, Workforce HR, Workforce Leave, Workforce Manager, Workforce MobileTime, Workforce Operations Planner, Workforce Payroll, Workforce Record Manager, Workforce Recruiter, Workforce Scheduler, Workforce Smart Scheduler, Workforce Tax Filing, Workforce Timekeeper, Workforce View, and Workforce Worksheet are trademarks of Kronos Incorporated or a related company. The source code for Equinox is available for free download at www.eclipse.org. All other trademarks or registered trademarks used herein are the property of their respective owners and are used for identification purposes only. When using and applying the information generated by Kronos products, customers should ensure that they comply with the applicable requirements of federal and state law, such as the Fair Labor Standards Act. Nothing in this Guide shall be construed as an assurance or guaranty that Kronos products comply with any such laws. 
Published by Kronos Incorporated
297 Billerica Road, Chelmsford, Massachusetts 01824-4119 USA
Phone: 978-250-9800, Fax: 978-367-5900
Kronos Incorporated Global Support: 1-800-394-HELP (1-800-394-4357)
For links to information about international subsidiaries of Kronos Incorporated, go to http://www.kronos.com

Document Revision History

Document Revision   Product Version          Release Date
A                   Workforce Central 6.2    November 2010
B                   Workforce Central 6.2    March 2011
C                   Workforce Central 6.2    April 2011
Contents
About This Guide
  Organization of this guide ... 10

Chapter 1: Workforce Central Guide Best Practices
  Application server best practices ... 14
    JRE heap size ... 14
  Database server best practices ... 17
    Database platform ... 17
    Database layout ... 18
    Oracle recommendations ... 19
    SQL Server recommendations ... 24
    Best practice for configuring SQL Server Max Memory setting ... 27
  Best practices for TEMPDB layout ... 28
  Operational practices ... 29
    Optimize data transmission throughput ... 29
    Keep database statistics current ... 29
  Rebuilding an index (SQL Server) ... 34
    Reorganize indexes ... 34
    Rebuild indexes when necessary ... 34
    Monitor the database for fragmentation ... 35
    Running the Workforce Timekeeper Reconcile utility ... 36
  HyperFind performance ... 40
  Reports best practices ... 42
    Using dedicated report servers ... 42
    Limiting the number of employees per report ... 44
    Following procedural best practices ... 45
  SSRS 2005 best practices ... 46
  SSRS 2008 best practices ... 47
  Next Generation User Interface best practices ... 47
  Process Manager best practices ... 48
    Process template design recommendations ... 48
    Configuration recommendations ... 52
    Process governors ... 53

Chapter 2: Workforce Timekeeper Best Practices
  Configuring the Workforce Timekeeper application for optimum performance ... 60
  Genie performance ... 62
    Create Genies with only the necessary data columns ... 62
    Assign Genies as needed to users ... 64
    Limit the number of employees displayed in Scheduling Genies to below 1,000 ... 64
    Do not calculate totals on Genies ... 65
  Totalizer best practices ... 66
    General configuration considerations ... 67
    Totalizer extensibility interface ... 68
    Multiple instances ... 68
    Additional recommendations ... 69
  HTML client performance ... 71
    HTML client versus applets ... 71
    CPU consumption ... 72
    Quick Time Stamp throughput (kiosk capability) ... 72

Chapter 3: Workforce Device Manager Best Practices
  Device communication settings recommendations ... 74
  Dedicated Workforce Device Manager servers (version 6.2.1 and later) ... 76
  Recommended threshold for terminals per server ... 77
  Load balancing punch uploads ... 77
  Recommended Web/application server resource settings ... 77
    Modify maximum database connections ... 78
    Modify web server processing threads ... 78

Chapter 4: Workforce Scheduler Best Practices
  Best practices ... 82

Chapter 5: Workforce Forecast Manager Best Practices
  Best practices ... 86
  Configuring number of forecasting and Auto-Scheduler engines ... 88

Chapter 6: Workforce Operations Planner Best Practices
  Best practices ... 92

Chapter 7: Workforce Attendance Best Practices
  Best practices ... 94

Chapter 8: Workforce Analytics Best Practices
  SQL Server best practices ... 98
    Set Maximum Degree of Parallelism to 1 ... 98
    Configure Workforce Analytics database appropriately ... 100
    Configure Workforce Analytics data mart server appropriately ... 100
  Oracle best practices ... 102
    Configure Workforce Analytics database ... 102
    Configure Workforce Analytics data mart server ... 103
    Oracle Provider for OLE DB ... 103
  Open Query best practices ... 104
  Workforce Analytics product performance best practices ... 105
  Analysis Services SSAS best practices ... 106

Chapter 9: Workforce HR/Payroll Best Practices
  Database configuration ... 110
  Preventing connections from timing out ... 111

Chapter 10: Workforce Record Manager Best Practices
  Configure ample I/O bandwidth ... 114
  Use dedicated 64-bit application servers ... 115
  Run the WRM COPY and/or PURGE during non-peak processing hours ... 115
  Run COPY and/or PURGE operations in 1-month increments ... 116
  Define an archive strategy earlier rather than later ... 116
  Process more tasks concurrently (64-bit only) ... 117
  Use Row Count validation for COPY operations ... 117
  Drop indexes on the target database for the COPY ... 118
  Increase Threshold to process non-historic tables in a single operation (64-bit) ... 120

Chapter 11: Workforce Integration Manager and Workforce Central Import Best Practices

Appendix A: Performance Tuning Oracle Indexes
  Overview ... 130
  Block splitting ... 131
  Sparse population ... 132
  Empty blocks ... 132
  Height/Blevel ... 132
  Indexes in the Kronos environment ... 133
    Monotonically increasing indexes ... 134
    Indexes with no pattern of inserts ... 134
    Primary key indexes ... 134
    Date datatype indexes ... 135
  Workforce Central index maintenance recommendations ... 137
    Index coalesce ... 137
    Rebuilding an index ... 137

Appendix B: Recommendations for using Workforce Central with VMware
  General recommendation ... 142
  Hardware and software requirements ... 142
    Hardware ... 142
    Software ... 143
  VM configuration ... 143
    Memory allocation ... 143
    Virtual CPU allocation ... 144
    System resource allocation ... 144
    Monitoring virtual machine usage ... 144

Appendix C: Recommendations for using Workforce Central with Hyper-V
  Best practices ... 145

Appendix D: Additional information: Workforce Record Manager
  Database definitions ... 147
  Index List for the Property WrmSetting.Option.IndexList ... 150
  Oracle Tuning ... 151
  SQL Server Tuning ... 152
This document provides information that you can use to configure, tune, and enhance the behavior of the Kronos Workforce Central product suite from a performance perspective. The Workforce Central system can be tuned by altering the system and setup configuration. An equally valid and effective approach is to alter the way that Workforce Central is used. For example, if reports consume resources during peak operation, run the reports at off-peak times. Both methods are discussed in this book. Note: This document is specific to Workforce Central v6.2. Previous best practices do not carry forward from earlier versions of Workforce Central. If there is a v5.2 or v6.0 best practice that is not explicitly called out in this v6.2 document, then it does not apply. Because there is no practical one-size-fits-all method of managing Workforce Central performance, the best practices described in this document are recommendations based on the Kronos Performance Group's performance testing. These suggestions for system configuration, patterns of use, and activities to avoid can generally be followed regardless of the environment in which the application is running. Important: While we expect and hope that your implementation of the best practices recommended in this guide will help you achieve optimal performance of your Workforce Central system, the results from our performance testing are highly dependent upon workload, specific application requirements, system design, and implementation. System performance will vary as a result of these and other factors. This guide is not intended, and should not be relied upon, as a guarantee of system performance. The results described in this guide should not be used as a substitute for a specific customer application benchmark when making capacity planning or product evaluation decisions.
Chapter 9, Workforce HR/Payroll Best Practices, on page 109, recommends special attention to database configuration and describes the procedure to prevent connections from timing out when you run long reports.

Chapter 10, Workforce Record Manager Best Practices, on page 113, presents system and application best practices for copying and purging data from the customer database within acceptable maintenance windows.

Chapter 11, Workforce Integration Manager and Workforce Central Import Best Practices, on page 123, lists several best practices to help achieve optimal performance for Workforce Integration Manager.

Appendix A, Performance Tuning Oracle Indexes, on page 129, provides background information and recommendations about tuning Oracle indexes for optimal performance.

Appendix B, Recommendations for using Workforce Central with VMware, on page 141, outlines recommendations for configuring and deploying Workforce Central in a VMware environment.

Appendix C, Recommendations for using Workforce Central with Hyper-V, on page 145, provides hardware and software recommendations for using Hyper-V with Workforce Central.

Appendix D, Additional information: Workforce Record Manager, on page 147, includes information needed for completing practices to optimize Workforce Record Manager performance.
Chapter 1: Workforce Central Guide Best Practices
The Workforce Central suite is a comprehensive solution for managing every phase of the employee relationship: staffing, developing, deploying, tracking, and rewarding. It consists of a number of separate, yet tightly integrated applications that are both extensible and unified to provide a centralized data repository and flexible self-service capabilities. This chapter describes recommendations to optimize performance of Workforce Central components that are available at the platform level and that are used by more than one Workforce Central application. This chapter consists of the following sections:
- Application server best practices on page 14
- Database server best practices on page 17
- Best practices for TEMPDB layout on page 28
- Operational practices on page 29
- HyperFind performance on page 40
- Reports best practices on page 42
- SSRS 2005 best practices on page 46
- SSRS 2008 best practices on page 47
- Next Generation User Interface best practices on page 47
- Process Manager best practices on page 48
7. From c:\Kronos\wfc\bin, execute the following command:

   jbossWinService.bat install

   The following message appears: The Jboss_wfc service was successfully installed.

8. Restart Workforce Central.

Changing JBoss heap size on an application server running Solaris or AIX

1. Navigate to <JBoss>/bin.
2. With a text editor, open run.conf.
3. Locate JAVA_OPTS= and ensure that it reads as shown in this step.

   Note: For AIX environments, you do not need to enter the following lines:
   -Dsun.rmi.dgc.client.gcInterval=3600000
   -Dsun.rmi.dgc.server.gcInterval=3600000

   JAVA_OPTS="-Xms128m -Xmx1024m \
   -Dsun.rmi.dgc.client.gcInterval=3600000 \
   -Dsun.rmi.dgc.server.gcInterval=3600000 \
   -XX:PermSize=128m \
   -XX:MaxPermSize=256m \
   -Dsun.lang.ClassLoader.allowArraySyntax=true \
   -Doracle.jdbc.defaultNChar="true" \
   -Doracle.jdbc.convertNcharLiterals="true" \
   -Dfile.encoding=UTF8 \
   -Djava.awt.headless=true"

4. Add the Kronos endorsed library to the JBoss endorsed library path with the following line:

   JBOSS_ENDORSED_DIRS=${JBOSS_HOME}/lib/endorsed:/usr/local/kronos/endorsed_libs

5. Save your edits and close run.conf.
6. Restart Workforce Central.
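After editing run.conf, a quick way to confirm that the heap flags were saved correctly is to grep the file for the -Xms and -Xmx options. The sketch below is illustrative only: it writes a sample run.conf to /tmp; on a real system you would point RUN_CONF at <JBoss>/bin/run.conf instead.

```shell
# Illustrative check that run.conf contains the intended heap flags.
# The sample file and its contents are placeholders, not Kronos-supplied values.
RUN_CONF=/tmp/run.conf
cat > "$RUN_CONF" <<'EOF'
JAVA_OPTS="-Xms128m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=256m"
EOF

# Extract the minimum and maximum heap settings from the file.
min_heap=$(grep -o -e '-Xms[0-9]*m' "$RUN_CONF")
max_heap=$(grep -o -e '-Xmx[0-9]*m' "$RUN_CONF")
echo "Min heap: $min_heap  Max heap: $max_heap"
```

Running this against the sample file prints the -Xms128m and -Xmx1024m values recommended in the procedure above.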
Database platform
Workforce Central version 6.2 can use the following SQL Server or Oracle databases:
- SQL Server 2005 or SQL Server 2008
- Oracle 10g R2, or Oracle 11g R1 or R2
Each database platform is designed with different strategies for SQL optimization, table indexing, and other performance considerations. The Kronos strategy is to tune the Workforce Central system to perform consistently on all supported database platforms.
Database layout
The database layout includes both the physical and logical layout of the database:
- Physical layout: The hard drives and the distribution of the database components on the drives.
- Logical layout: The distribution of the database elements, such as tablespaces or file groups, indexes, and other database-specific objects, across the physical layout.
With Workforce Central version 6.2, you can specify the file groups that are needed for the installation, or you can use RAID (Redundant Array of Independent Disks).
- RAID storage: Your specific RAID implementation will determine how to allocate the required tablespaces.
- Non-RAID disk allocation: If you are not using RAID, Kronos recommends that you use at least nine disks when installing a Workforce Timekeeper database. Assign file groups tkcs1 through tkcs9, one to each disk. Refer to Installing Workforce Timekeeper for more information.
RAID (Redundant Array of Independent Disks) technology combines two or more physical hard disks into a single logical unit. Although many RAID implementations are available, Kronos recommends using hardware-level RAID storage or SAN disk storage with production-quality drives. The following lists sample recommended RAID configurations for installations of varying size. See your Kronos Representative for recommendations specific to your environment.

Small installation (fewer than 5,000 employees):
- Hardware RAID controller with at least three drives for the database storage. Disk I/O performance improves as the number of disks increases.
- Logical drive created as a RAID partition with available drives to be used for all tablespaces (Oracle database) or file groups (SQL Server database).

Medium installation (5,000 to 20,000 employees):
- One or more hardware RAID controllers with five or more disk drives for database storage. Disk I/O performance improves as the number of disks increases.
- Logical drive created as a RAID partition with available drives to be used for all tablespaces (Oracle database) or file groups (SQL Server database).
- SAN storage for database disk storage. Disk I/O performance improves as the number of disks increases.
- Log files placed on logical units (LUNs) that are separate from the data LUNs.
- Striped RAID configuration for database files (for example, RAID 5 or 10) and a mirrored configuration for log files (for example, RAID 1 or 10).

Large installations (more than 20,000 employees):
- One or more hardware RAID controllers with seven or more disk drives for database storage. Disk I/O performance improves as the number of disks increases.
- Logical drive created as a RAID partition with available drives to be used for all tablespaces (Oracle database) or file groups (SQL Server database).
- SAN storage for database disk storage. Disk I/O performance improves as the number of disks increases.
- Log files placed on logical units (LUNs) that are separate from the data LUNs.
Oracle recommendations
Workforce Central v6.2 supports Oracle 10G and Oracle 11G. With each version, you set memory management instance parameters in a slightly different way. See Oracle Initialization Parameter Recommendations on page 21 for explanations of parameter settings that are specific to Oracle 10G and Oracle 11G. The following recommendations apply to both Oracle 10G and Oracle 11G:
- Set the initialization parameter CURSOR_SHARING to EXACT. This setting has the most predictable performance and functionality in both Kronos benchmarks and customer production environments. Kronos strongly recommends that you not use the values SIMILAR or FORCE for CURSOR_SHARING; these values have shown isolated instances of either significant performance degradation or internal Oracle errors.
- Set the value of the initialization parameter OPEN_CURSORS to 500 or higher. Kronos performance tests have shown that lower values for this parameter can lead to Oracle errors.

For information regarding optimal Oracle index tuning, see Performance Tuning Oracle Indexes on page 129.
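A DBA can confirm both settings on a live instance with SQL*Plus `show parameter` commands. The sketch below leaves the actual SQL*Plus call commented out because it requires DBA credentials and a reachable instance; only the expected values, taken from the recommendations above, are echoed.

```shell
# Sketch: verify CURSOR_SHARING and OPEN_CURSORS on a live Oracle instance.
# The sqlplus invocation is commented out; it assumes DBA access.
#
# sqlplus -s / as sysdba <<'EOF'
# show parameter cursor_sharing
# show parameter open_cursors
# EOF
#
# Expected values per this section's recommendations:
expected="cursor_sharing=EXACT open_cursors>=500"
echo "$expected"
```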
Oracle Initialization Parameter Recommendations A key to assuring optimal application performance with an Oracle platform is to minimize the amount of physical I/O required to process a query. Kronos has tested a number of parameter settings for optimal performance of an Oracle database with Workforce Timekeeper. The following table lists the Oracle initialization parameter recommendations. Set initialization parameters to assure optimal performance of Workforce Timekeeper. If there are other documented or undocumented Oracle initialization parameter settings that are not listed in the following table, do not change their default settings. Note: Although Workforce Timekeeper functions without setting these values, performance may not be optimal. These recommendations are based on testing and resolution of customer escalation issues, and on internal customer and synthetic database testing.
Oracle 11G only:

MEMORY_TARGET / MEMORY_MAX_TARGET
  Recommended setting: Non-zero value based on RAM available on the database server.
  Purpose/comments: Allows Oracle to manage and allocate memory to its processes and shared areas. Set this value to the maximum memory that can be allocated to Oracle processes.

DB_CACHE_SIZE
  Recommended setting: 0 (default setting).
  Purpose/comments: Allows Oracle to determine the pool size.

SHARED_POOL_SIZE
  Recommended setting: 0.
  Purpose/comments: Allows Oracle to determine the pool size.

Oracle 11G and 10G:

CURSOR_SHARING
  Recommended setting: EXACT.
  Purpose/comments: Provides the most predictable performance and functionality in both Kronos benchmarks and customer production systems. Important: Kronos strongly recommends that you do not use SIMILAR or FORCE because these values have caused isolated instances of significant performance degradation or internal Oracle errors.

DB_BLOCK_SIZE
  Recommended setting: Set at the time the database is built; it cannot be changed except by rebuilding the database.
  Purpose/comments: Helps to minimize I/O. Performance tests have demonstrated that a majority of the application functions perform better with the recommended block size.

DB_FILE_MULTIBLOCK_READ_COUNT
  Recommended setting: Default.
  Purpose/comments: Represents the number of database blocks to be read for each I/O. In most cases, the default setting is appropriate. If you set this value too high, the Oracle optimizer performs table scans instead of using indexes.

Shared Server (MTS)
  Recommended setting: Disabled.
  Purpose/comments: Disables MTS (now called Shared Server), with which there are known performance issues when using Workforce Timekeeper. Note: All performance testing and optimization work on Workforce Timekeeper was done with this feature disabled.

OPEN_CURSORS
  Recommended setting: 500 or higher.
  Purpose/comments: Specifies the maximum number of cursors available.

DISK_ASYNCH_IO
  Recommended setting: TRUE.
  Purpose/comments: Recommended for environments supporting asynchronous I/O. Some benchmarks have shown significant benefit when using asynchronous I/O.

FILESYSTEMIO_OPTIONS
  Recommended setting: ASYNCH.
  Purpose/comments: Supports asynchronous I/O. (Default value is NONE.) Note: DISK_ASYNCH_IO must be set to TRUE for this parameter to have an effect.

OPTIMIZER_MODE
  Recommended setting: ALL_ROWS.

PROCESSES
  Recommended setting: 500, or see comments to define more precisely.
  Purpose/comments: Defines the total number of connections that can be made to the Oracle database. For optimum performance, calculate the value as the sum of the following: the number of Oracle background processes (configuration-dependent); the sum of the site.database.max property from each instance of Workforce Central; and any ad hoc connections that need to be made to the database.

RESOURCE_LIMIT
  Recommended setting: FALSE.

PGA_AGGREGATE_TARGET
  Recommended setting: 0.
  Purpose/comments: Disables automatic PGA memory management so that sort memory is governed by SORT_AREA_SIZE (see below).

SORT_AREA_RETAINED_SIZE
  Recommended setting: By default, equal to SORT_AREA_SIZE.
  Purpose/comments: Specifies the maximum amount of session memory that is available for any individual sort. After a sort operation is complete, the user's memory size is reduced to the value specified by SORT_AREA_RETAINED_SIZE.

SORT_AREA_SIZE
  Recommended setting: 1310720.
  Purpose/comments: Controls the amount of memory that is available to an Oracle session to perform sorting. The recommended value reduces the probability of having to write to temporary segments, which reduces the amount of physical I/O. Important: Ensure that this sorting activity occurs in memory.
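The PROCESSES arithmetic described above can be sketched as a small script. The instance count, site.database.max values, and ad hoc connection estimate below are hypothetical placeholders; substitute the figures from your own deployment.

```shell
# Hypothetical inputs; replace with values from your environment.
oracle_background=40        # Oracle background processes (configuration-dependent)
wfc_instance_max="150 150"  # site.database.max from each Workforce Central instance
adhoc_connections=25        # DBA tools, monitoring, batch jobs, etc.

# Sum the three components described in the PROCESSES comments.
total=$oracle_background
for m in $wfc_instance_max; do
  total=$((total + m))
done
total=$((total + adhoc_connections))
echo "Suggested PROCESSES value: $total"
```

With the placeholder figures above, the script suggests a PROCESSES value of 365; if the computed value falls below 500, the flat recommendation of 500 still applies.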
SQL Server recommendations

Kronos recommends setting the Max Degree of Parallelism option to 1 on both the Workforce Central source database and the Workforce Analytics data mart database. This causes SQL Server to use only a single CPU at a time for any one user's transaction. SQL Server can still use any other available processors for handling transactions from other users. You can set the Max Degree of Parallelism setting in either of two ways:
- Using the SQL Server Management Studio user interface
- Executing a query in SQL Server Management Studio
To set this parameter using the SQL Server Management Studio user interface:
1. Select Start > Programs > Microsoft SQL Server > SQL Server Management Studio.
2. In the Connect to Server dialog box, enter the appropriate information for your environment.
3. Click Connect.
4. From Management Studio:
   a. Right-click the server name and select Properties.
   b. Select Advanced from the left side of the workspace.
5. In the Parallelism section, set the Max Degree of Parallelism value to 1.
6. Click OK.

To set this parameter by executing a query in SQL Server Management Studio:
1. From the left navigation bar of SQL Server Management Studio, select the name of the Workforce Central database.
2. From the header, click New Query.
3. In the Management Studio query window, enter the following text:

   exec sp_configure 'show advanced options', 1
   go
   reconfigure with override
   go
   exec sp_configure 'max degree of parallelism', 1
   go
   reconfigure with override
   go

4. Click Execute.
Example 1
For a 100,000-employee company, a 32 GB system is recommended by the Expert Sizer. SQL Server's Max Server Memory is set to 27 GB.
Important: If other applications (for example, SSRS) are running on the database server, you must consider the memory needs of those applications before using the setting in this example. See Example 2 on page 27.

Example 2
For a 100,000-employee company, a 32 GB system is recommended by the Expert Sizer. SSRS is installed on the database server and is estimated to consume 5 GB of memory. To account for this memory usage, set SQL Server's Max Server Memory parameter to 22 GB (32 GB physical - 5 GB SSRS - 5 GB SQL Server overhead = 22 GB for the SQL Server buffer pool).
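Example 2's arithmetic can be expressed as a small script. The SSRS estimate and overhead figure are taken from the example itself, not universal constants; adjust them to match the applications actually running on your database server.

```shell
# Max Server Memory arithmetic from Example 2 (figures are example values).
physical_gb=32     # RAM recommended by the Expert Sizer
ssrs_gb=5          # estimated SSRS memory usage on the database server
overhead_gb=5      # SQL Server overhead outside the buffer pool

max_server_memory_gb=$((physical_gb - ssrs_gb - overhead_gb))
echo "Max Server Memory: ${max_server_memory_gb} GB"
```

With the example figures, this yields the 22 GB buffer pool allocation described above.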
Operational practices
The Workforce Central database requires regular maintenance for optimal system performance and data security. The following sections describe the minimum tasks to perform to maintain optimal database performance:
- Optimize data transmission throughput on page 29
- Keep database statistics current on page 29
- Monitor the database for fragmentation on page 35
- Running the Workforce Timekeeper Reconcile utility on page 36
Never compute statistics during times of peak system usage because the process can cause performance degradation in a Workforce Central system.
To use this script:
1. Connect to the Workforce Central database using a SQL query tool and the database owner's user ID and password (for example, tkcsowner).
2. To execute the script, enter:

   start statsddl.sql
   or
   @statsddl.sql

   The script generates an output file called stats.sql in the current directory.
3. Use the SQL query tool to run the stats.sql script that was generated in step 2. This script updates all statistics for objects in the Workforce Central database and returns the following information:
   - Number of rows
   - Number of rows sampled
   - Number of rows changed
   - Last analyzed date
Run the stats.sql script once each week or after any major data additions or deletions. Run it when database activity is minimal (for example, late at night). Database administrators can schedule the script to run as a cron job (UNIX) or other scheduled event (Windows).
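On UNIX, the weekly off-peak run described above can be scheduled with a cron entry along the following lines. The user name, password, log location, and script path are placeholders for your environment, not Kronos defaults:

```shell
# Run the generated stats.sql every Sunday at 02:00, when activity is minimal.
# Replace tkcsowner/password and the paths with values for your site.
0 2 * * 0 sqlplus -s tkcsowner/password @/opt/kronos/scripts/stats.sql >> /var/log/kronos_stats.log 2>&1
```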
Updating SQL Server database statistics

When a query is sent to a SQL Server database, the query optimizer evaluates the query and the tables being queried to determine whether it is most efficient to retrieve data using a table scan or an index. The query optimizer creates an execution plan and bases much of its optimization on the statistics for each table. If table statistics are out of date, the query optimizer may create an execution plan that causes performance issues. For example, for smaller tables, the query optimizer usually does a full table scan to retrieve the data. If a query is run against a table that has 1,000,000 rows, but the table statistics indicate that there are only 500 rows, the optimizer could do a full table scan of the million rows. If the statistics were up to date, the query optimizer would have determined that the best way to access the data was by using an available index. You can keep table statistics up to date manually or automatically with SQL Server. Each method of updating statistics has advantages and disadvantages:
Automatic
- Advantage: You can rely on the statistics being current.
- Disadvantage: With SQL Server's automated schedule, you cannot know when statistics will be updated. There is always a chance that statistics will be updated at an inappropriate time, such as in the middle of your payroll processing day. This can cause major database resource contention.

Manual
- Advantage: You control when the update runs, so you can schedule it for off-peak times.
- Disadvantage: Because the update process is not scheduled, you can forget to perform the update.
Select the most appropriate method for your environment. It is extremely important that you update statistics regularly.
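Outside the Kronos-supplied statsddl.sql workflow, SQL Server also lets you refresh statistics for an individual table directly. A minimal sketch, assuming a hypothetical table name rather than an actual Kronos schema object:

```sql
-- Refresh statistics for one table with a full scan (table name is illustrative)
UPDATE STATISTICS dbo.TIMESHEETITEM WITH FULLSCAN;

-- Check when each statistics object on the table was last updated
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.TIMESHEETITEM');
```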
Determining whether the database is set to automatically update statistics
1. Right-click the database name in SQL Server Management Studio and select Properties.
2. From the upper-left corner of the Database Properties dialog box, click Options.
3. Review the Automatic portion of the screen. If the Auto Update Statistics option is set to True, SQL Server automatically updates statistics on tables. (When SQL Server performs updates is based on its own algorithms.)

Updating statistics manually
1. Connect to the Workforce Central database using the database owner's user ID and password (such as tkcsowner).
2. Using a SQL query tool, open the Kronos-supplied statsddl.sql script. This script is located in the following folder on the Workforce Timekeeper application server:
   \Kronos\wfc\applications\dbmanager\scripts\database\mss\dbautilities
3. Execute the script.
4. Click the results window and save the results as a file named stats.sql.
5. Using a SQL query tool, open the stats.sql file and execute the script. This process updates statistics on all Workforce Central product tables in the database and returns the following information:
   - Number of rows
   - Number of rows sampled
   - Number of rows changed
   - Last analyzed date
Guidelines for determining when to update statistics

Note: For all future statistics updates, complete only step 5, using the stats.sql script.

- Run stats.sql every week as part of normal maintenance.
- Run stats.sql after large operations such as extensive additions or deletions of data (for example, after running an import or using Workforce Record Manager to delete a large amount of data).
- Run stats.sql at times when very few users are using Workforce Central. You can schedule the update using SQL Server's SQL Agent.
Reorganize Indexes
Reorganize an index when the index is not heavily fragmented. Use the ALTER INDEX statement with the REORGANIZE clause (which replaces DBCC INDEXDEFRAG).
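As a sketch, the statement takes the following form. The index and table names below are hypothetical, not Kronos schema objects:

```sql
-- Reorganize one lightly fragmented index
ALTER INDEX IX_EMPLOYEE_NAME ON dbo.EMPLOYEE REORGANIZE;

-- Or reorganize every index on the table
ALTER INDEX ALL ON dbo.EMPLOYEE REORGANIZE;
```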
Accessing database reports
1. From the upper-right corner of the Workforce Timekeeper workspace, click Setup. Then, select System Settings from the System Configuration box.
2. Click the Database tab.
3. Select a report to run.

Running the dbreport.sql script

An alternative to running reports from the Database tab is to run the dbreport.sql script. Depending on the type of database, this script is located in one of the following directories:
The dbreport.sql script shows tables and indexes that have 15 or more extents. Any tables or indexes listed would benefit from a rebuild. Running this script creates a report that displays the state of the Kronos database at any specified time. This report details statistics about the database, including:
- Object name
- Index fill factor percentages
- Table fragmentation
- Number of allocated pages/blocks and number of pages/blocks used
- Number of data pages
- Space used
instance_name is the name of the Workforce Central instance. (The default name is wfc.)

Note: This URL is case-sensitive. For example, if your Web server is on a machine named apex, you enter: http://apex/wfc/DBMgrLogonServlet
2. Allow the system to install ActiveX controls, if requested to do so.
3. In the Workforce Central 6.1 Database Manager logon window, enter the database owner ID (for example, TKCSOWNER) and database password.
4. Click Logon.
Note: In a combined Workforce HR/Payroll - Workforce Timekeeper environment, sa is often the database owner. See your database administrator for information specific to your environment.
5. When the Database Manager screen appears, select the applicable version of Workforce Central from the drop-down box. Then, click Reconcile.
6. Click OK.
7. While reconcile is in progress, verify that all the processes are running. To do this:
   a. In the Segment Maintenance box, click View Status.
   b. In the Status page, click Refresh.
8. When you are finished with the Database Manager, close the browser.

Output of the Reconcile utility

After the reconcile is run, it writes a report to the Database Manager log file. The log file is located on the application server machine in c:\Kronos\wfc\logs. The name of this file is DBManager_timestamp.log, where timestamp is the date and time that the reconcile was run.

Note: The same report information is also collected in wfc.log.
The log file identifies any missing triggers, permissions, or other objects. Immediately repair any issues revealed by the Reconcile utility to help ensure the integrity of the database. Run this utility at least weekly.

If non-Kronos indexes are present, the Reconcile utility returns information in the Reconcile report similar to the following:

Non-KRONOS PRIMARY KEYS are: ux1_timesheetitem

Indexes listed in this section of a Reconcile report are not regular objects in the Workforce Central database. The example is likely an index added by a database administrator after a performance study showed that the index would help speed up Oracle query responses. In general, Kronos recommends that you do not add indexes to the Workforce Central database except when requested to do so by Kronos or as part of a Kronos product installation. Maintenance and support of customer-created indexes are customer responsibilities.

Important: If the Reconcile utility reports that Workforce Central indexes, tables, or other objects are non-Kronos objects, verify that the Reconcile was performed while connected to the database as the database owner (such as tkcsowner). If re-running Reconcile as the database owner still shows the Workforce Central objects as non-Kronos objects, contact Kronos Global Support immediately to address a potential object ownership issue in the database.
HyperFind performance
HyperFind queries consist of dynamically created SQL statements that return a list of employees based on criteria established by the user. This list of employees is used to populate Genies, reports, and Schedule Editor functions. To achieve optimal HyperFind performance, construct HyperFind queries using the following guidelines:

- Whenever possible, include multiple entries of the same type of criteria in the same query line. For example, the following syntax represents an inefficient HyperFind criteria specification:

  ((Any home or transferred-in employees worked in */MACNT/*/*/*/*/* OR
    Any home or transferred-in employees worked in */MADIV/*/*/*/*/* OR
    Any home or transferred-in employees worked in */MAEST/*/*/*/*/* OR
    Any home or transferred-in employees worked in */MASLS/*/*/*/*/*))

  The following syntax is a more efficient way of expressing the same HyperFind query:

  ((Any home or transferred-in employees worked in */MACNT, MADIV, MAEST, MASLS/*/*/*/*/*))

- If you need to specify all labor level entries for a labor level and the number of labor level entries is greater than 30, leave the field blank. Do not use the wildcard characters % or *. In previous versions of Workforce Central, these wildcard characters generated inefficient SQL, which caused unpredictable query behavior. Although Kronos has improved this behavior in recent versions of Workforce Central, Kronos recommends that you leave a field blank if all entries need to be specified.
Review the SQL in the organization's All Home query and other employee group HyperFind queries to ensure that they contain efficient SQL. Efficient SQL syntax follows the guidelines for:
- Using blank fields
- Specifying wildcards
- Creating SQL expressions

When creating HyperFind queries using Worked Accounts, use the All Home and Transferred In Employees HyperFind as a guide to create your own Worked Accounts HyperFind. This HyperFind demonstrates the correct way to select both of the following:
- Employees' hours in a manager/user's employee group, whether the employees worked in that employee group or in another employee group
- Employees from other employee groups who worked in the manager/user's group (employees who transferred in)

Set up labor account sets or organization sets appropriately to limit the number of All Home employees.
Workforce Central reports can significantly affect CPU utilization and consume large amounts of memory. Be aware of the following best practices when running reports:
- Using dedicated report servers on page 42
- Limiting the number of employees per report on page 44
- Following procedural best practices on page 45
Use the following recommendations for configuring report engines (concurrent report execution processes):
- When running reports on a general-purpose application server, configure the number of report agents to be no greater than 75% of the total number of system cores. For example, on an 8-core system, configure up to 6 report agents.
- When running reports on a dedicated report server:
  - Configure one report agent per core.
  - Configure one transformer thread for every two report agents.
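The sizing rules above can be sketched as follows. The function name and return shape are ours; this illustrates the arithmetic only and is not a Kronos-supplied tool:

```python
import math

def report_engine_settings(cores, dedicated):
    """Return (report_agents, transformer_threads) per the sizing rules.

    General-purpose application server: at most 75% of cores as agents;
    no transformer-thread guidance is given for this case (None).
    Dedicated report server: one agent per core, one transformer thread
    per two agents.
    """
    if dedicated:
        agents = cores
        transformer_threads = agents // 2
    else:
        agents = math.floor(cores * 0.75)
        transformer_threads = None
    return agents, transformer_threads

print(report_engine_settings(8, dedicated=False))  # (6, None)
print(report_engine_settings(8, dedicated=True))   # (8, 4)
```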
To create a dedicated report server, you must:
- Disable the report service on the primary application server that handles user authentication.
- Enable the report service on the application server on which you want to run reports.
Perform these steps:
1. In the upper-right corner of the Workforce Timekeeper workspace, click Setup.
2. From the System Configuration box, select System Settings.
3. Select the Reports tab.
4. If you are disabling the report engine on a general-purpose application server, click false for the site.reporting.engine.enable setting. If you are configuring a dedicated report server:
   a. Click true for the site.reporting.engine.enable property setting.
   b. Enter the number of report agents for the site.reporting.MaximumRepAgents setting.
5. Click Save.
6. Click Save.
This setting is located in the rsreportserver.config file. The default value of this property is False, which indicates that snapshot and report data will be stored in the temporary database of the SSRS
repository. Testing has shown that when this property is changed to True, user-perceived page-to-page navigation in the HTML browser for a report can be as much as 40% faster. This reduced time translates into less CPU utilization on the SSRS server because the SSRS repository is not used as much. Setting this property to True allows you to keep the SSRS repository database and Reporting Services on the same physical machine, regardless of load, since the CPU utilization of SQL Server is minimized.
The following example shows how to convert multiple branch tasks to a multibranch task:
- Avoid using XOR tasks.
- Combine calls to an API in a single task to eliminate calling the same API multiple times. To do this:
  - Use a Script task to pass the values returned from the API call to the next task in the process.
  - If you have API tasks that call the same API in different ways with different attributes, use the same API task preceded by a Script task to set the specific attributes that you need.
When using a Branch task, use the Rule condition (specified on the Branch Properties tab) instead of a Script condition. Kronos recommends this because the Script condition is executed frequently, adversely affects performance, and sometimes causes unexpected results. If it is appropriate to use some type of script, use a callback script, which is executed only once, instead of the Script condition. You specify the callback script on the Callback Scripts tab of the Branch task properties sheet.
The following examples show the use of a script condition and a callback script. Script condition
Callback script
Configuration recommendations
Best practices for configuring Process Manager include the following:
- Consider using the process pooling mechanism for high-volume templates. Pooling is the ability to configure a pre-allocated pool of process instances so that they are available for rapid retrieval when a user initiates a request. By reducing demand on the database CPU, process pooling improves response time and overall Process Manager performance. The pool-building event, however, can use a significant amount of system resources. During the pool-building event, all actively deployed process templates with a pool size greater than zero are instantiated in the database for future use. Therefore, consider the following best practices when you are creating a process pool:
  - Schedule pool-building events regularly. However, because pool-building consolidates the process (pre)allocation into a brief concentrated period, the events should be neither too frequent (every minute) nor too infrequent (four times a year).
  - Do not build a large pool during times of peak Workforce Central activity, such as during payroll processing. Schedule pool-building events during off-peak times.
  - Do not build pools so large that they are not used. For example, if all vacation requests are due during the first week of each quarter, increase the pool size of the Time Off Request process just before the first week, and then reduce it after the first week.
During regular operation of the Workforce Central applications, when a request for a new process instance is made, a pre-allocated process instance from the pool is used. This reduces the response time and database load, because the work to create a process instance has already been done. The pool is then reduced by one. If the pool is depleted before the next pool-building event, the standard process allocation is used.

- Delete old processes from the database when they are no longer of use and do not need to be saved. Workforce Record Manager provides functionality to purge completed processes.
- Keep database statistics current for both SQL Server and Oracle. Process Manager database tables are especially sensitive to statistics.
- By default, the Process Engine is enabled. If process throughput begins to affect overall application performance during peak load periods, you can disable the Process Engine as follows:
  a. From the upper-right corner of the Workforce Timekeeper workspace, click Setup.
  b. From the System Configuration box, select System Settings.
  c. Select the Business Automation tab.
  d. Change the following business automation parameter from true to false: wba.processengine.enabled
  e. Save your edit.
Process governors
Kronos has identified process governors as the mechanism for limiting Process Manager load to improve system performance. Governors monitor system activity and reject user process requests when certain conditions are detected. However, Kronos recommends using governors only when there is a need to throttle process generation to improve system performance.

Important: When using governors for Process Manager, you are using system-wide governors. At this time, there is no way to associate a Process Manager template with a governor.

A simple governor framework allows zero or more governors to be plugged into Process Manager. Each governor is a Java class that monitors some system metric. Metrics may or may not be related to Process Manager. Governors, which you should enable only on an as-needed basis, are externally configurable using the custom_WBA.properties file, which is located in the following folder:

C:\Kronos\wfc\applications\wba\properties
You can create this file if one does not already exist. For each governor configuration property listed in the following sections, add a line to the custom_WBA.properties file. For example:

private.wba.governors=private.wba.workflow.UserRequestGovernor
private.wba.workflow.UserRequestGovernor.threshold=2
Important: Because it is difficult to predict how any one customer will use Process Manager, analyze the current workload and performance characteristics before you implement governors. You can observe currently experienced levels of concurrency and throughput by enabling governor logging without enabling any of the governors themselves. Use this information as the basis for establishing any governing values.

Governor configuration

The following properties affect general governor functionality:
private.wba.governors
- Default value: (empty)
- Description: A comma-separated list of governor class names. If empty (the default setting), no governors are activated. As a convenience to the user, WBA.properties contains a commented-out entry for each implemented governor class.

private.wba.governors.log
- Default value: false
- Description: Set to true to enable governor logging. Each governor will log its current state on every process request. Activate this only for debug purposes. Leaving this property on in a production environment may adversely affect performance.

private.wba.governor.outputfile
- Default value: c:/Kronos/governorstats.csv
- Description: The file name to which governor statistics should be logged.
- User Request governor on page 41
- Throughput governor on page 42
- Response Time governor on page 43
- Form Timeout governor on page 43
User Request governor

The User Request governor monitors the number of active user process requests at the current time. Active user process requests are those that are user-initiated and contain a Process Manager form as the first user interface to the process. An example of an active user process request is the Time Off Request. A process is counted by this governor if the user has requested a form, but the form has not yet been rendered.

Note: The only processes included in this count are ones that are launched by requesting Process Manager forms.
Configuration property: private.wba.workflow.UserRequestGovernor.threshold
- Default value: 0
- Description: The number of active requests above which the governor should be activated. A value of 0 disables the governor.
Kronos recommends the use of the User Request governor when you want to control the number of concurrently executing Process Manager processes. This governor is best suited for heavyweight templates, where heavyweight is defined as a template that does a significant amount of system processing (possibly asynchronously) and executes for an extended period of time (minutes or more).

Throughput governor

The Throughput governor monitors the current process throughput. Throughput is defined as the number of processes that have been processed during a given, configurable time period. When this number of processes exceeds the threshold, the governor is activated.
Note: The only processes included in this count are ones that are launched by requesting Process Manager forms.
Configuration property: private.wba.workflow.WorkflowThroughputGovernor.threshold
- Default value: 0
- Description: The highest number of processes that can be activated during a configured time period (see below). A value of 0 disables the governor.

Configuration property: private.wba.workflow.WorkflowThroughputGovernor.window.millis
- Default value: 0
- Description: The time period in milliseconds over which activated processes should be counted. A value of 0 disables the governor.
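For illustration only, a Throughput governor limited to 100 processes per 5-minute window could be configured in custom_WBA.properties as follows. The threshold and window values here are invented for the example, not Kronos recommendations:

```
private.wba.governors=private.wba.workflow.WorkflowThroughputGovernor
private.wba.workflow.WorkflowThroughputGovernor.threshold=100
private.wba.workflow.WorkflowThroughputGovernor.window.millis=300000
```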
Kronos recommends using this governor when you want to control the number of processes that the system will process per unit of time. This governor is best suited for lightweight templates, where lightweight is defined as a template that has a few forms and may involve several user interactions. An example of a lightweight template is the Time Off Request template. For this governor to be effective, Kronos recommends that you keep the window size small (5 minutes or less) to avoid spikes of activity.

Response Time governor

The Response Time governor monitors a value that is roughly equivalent to the user response time for Process Manager forms. The response time is calculated by averaging all response times over a given, configurable time period. When this average exceeds the configured threshold, the governor is activated.

Note: Only forms that launch processes are monitored by this governor.
Configuration property: private.wba.workflow.ResponseTimeGovernor.threshold
- Default value: 0
- Description: The response time above which the governor will be activated. A value of 0 disables the governor.
- Default value: 0
- Description: The time period in milliseconds over which response times are averaged. The time window is defined from the current instant back. A value of 0 disables the governor.
Form Timeout governor

The Form Timeout governor counts form timeouts over a given, configurable time period. When the number of timeouts exceeds the configured threshold, the governor is activated.

Note: Only forms that launch processes are monitored by this governor.

This governor is different from the others in that it is reactive rather than proactive. Although Kronos does not prefer this approach, this governor can be viewed as a second line of defense in case other governors are not effective.
This governor has two configuration properties:
- The number of timeouts (over the configured interval) above which the governor will be activated. A value of 0 disables the governor.
- The time period in milliseconds over which form timeouts are counted. The time window is defined from the current instant back. A value of 0 disables the governor.
Chapter 2
Workforce Timekeeper is the foundation of the time and attendance and scheduling capability of the Workforce Central suite. It must be installed before you can install Workforce Scheduler, Workforce Record Manager, or other Time & Attendance and Scheduling products. Workforce Central components place varying demands on overall system performance, and each of the following components is discussed separately in this chapter:
- Configuring the Workforce Timekeeper application for optimum performance on page 60
- Genie performance on page 62
- Totalizer best practices on page 66
- HTML client performance on page 62
In addition to ensuring that you have an efficient configuration, adopt the following practices to improve Workforce Timekeeper performance:

Use only features needed in the timecard

The Workforce Timekeeper flexitime feature allows users to view their accruals in the Totals & Schedule window of their timecard.
Accruals window
Although displaying the accruals window in the timecard's Totals & Schedule window has only a moderate cost on the database, it can add to application server response time. Kronos recommends that you turn this feature off if you are not using it.

Do not edit jobs or rules during peak processing times

If you edit jobs or pay rules during peak processing times, the impact on the application server can be significant, especially if you change multiple attributes. The best practice is to avoid making configuration edits during peak processing periods.

Do not make pay code edits in the timecard beyond the current pay period

Making pay code edits beyond the current pay period in the timecard causes slow system performance and totalization. To make pay code edits beyond the current pay period, Kronos recommends that you make the edits in the Schedule Planner instead of the timecard. The projected balance information is the same whether the time is entered in the Schedule Planner or in the timecard.
Genie performance
Genies can have a significant impact on overall application performance. The impact is primarily due to the volume and/or complexity of the SQL that is generated. Genies affect the performance of the application server when a large quantity of data is returned from the database server. The application server collects data in 200-row chunks. This means, for example, that if over 5,000 rows of data are returned by the database server, the application server will take some time to retrieve those rows, resulting in significant CPU use. There are four simple practices that you should follow for optimum Genie and HyperFind performance:
- Create Genies with only the necessary data columns on page 62
- Assign Genies as needed to users on page 64
- Limit the number of employees displayed in Scheduling Genies to below 1,000 on page 64
- Do not calculate totals on Genies on page 65
Most Genie columns generate an individual SQL statement; however, note the following:
- Pay codes generate one SQL statement for all pay code totals.
- Exceptions generate one SQL statement for a group of exception conditions.
- The Missed Punch Genie column is treated separately from the other exception conditions and generates its own SQL statement.

For example, a Reconcile Timecard Genie could contain the following columns:
- Employee Name
- Home Account
- Unexcused Absence (Exception)
- Missed Punch
- Early In (Exception)
- Late In (Exception)
- Early Out (Exception)
- Late Out (Exception)
- Unscheduled Hours (Exception)
- Totals Up to Date

This Genie generates five SQL statements each time it is activated. To limit the number of SQL statements generated by a Genie, create Genies with only the necessary data columns. For example, if you only want to see the missed punches for a group of employees, do not use the Reconcile Timecard Genie as defined in the previous example. Instead, create another detail Genie with just the Employee Name and Missed Punches columns. This new Genie generates only two SQL statements for each execution, versus the five in the original example. The reduction in SQL statements helps to increase the CPU capacity of the database server and to improve the elapsed time for the Genie to return the necessary data.
The underlying strategy of the Background Processor is to precalculate and store calculated data (in this case, employee totals) to amortize the cost of calculation among the various components that share the data. In addition, de-coupling the process of updating totals from interactive tasks speeds interactive response time, but at the cost of introducing latency between the time the totals are invalidated and the time the Background Processor updates them. This section includes the following topics that you can use to minimize the chances of the Background Processor causing performance issues:
- General configuration considerations on page 67
- Totalizer extensibility interface on page 68
- Multiple instances on page 68
- Additional recommendations on page 69
Stand-alone configuration
Important: The fixed read size and minimum queue size Background Processor properties should be left at their defaults (unless directed by Kronos Engineering to change them). This is extremely important for minimizing the impact on memory usage if you have significant numbers of employees with unsigned-off data.
Multiple instances
In Workforce Central 6.2, multiple instances can be installed on a single physical application server system in two ways:
- Instances that are created by the Workforce Central Configuration Manager utility.
- Instances that are manually created to enhance application scalability on a single physical application server system. This type of instance is referred to as an instance for vertical scalability.

The main difference between these instance types is that vertical-scaling instances respond to requests from a single URL, whereas each Configuration Manager instance responds to its own URL.
The following table shows which type of instance to use for a particular situation:
- Separate Background Processor and application server processing: Configuration Manager
- Overcome Totalizer hook inefficiencies in the Callable Totalizer: Instance for vertical scalability
- Overcome Totalizer hook inefficiencies in the Background Processor: Configuration Manager
- Configure stand-alone Background Processor instances: Configuration Manager
Additional recommendations
The following are important additional recommendations for optimizing the performance of the Background Processor:
- Configure Background Processors appropriately on page 56
- Keep employee totals up to date on page 57
- Keep signoffs up to date on page 57
Configure Background Processors appropriately

Given that the Background Processor is the batch-oriented process responsible for updating employee totals, it is important that you configure it appropriately:
- Ensure that Background Processor streams are always running.
- Provide sufficient Background Processor threads to meet the desired turnaround time for updated totals.
- Install all servers that run Background Processor threads on a high-bandwidth LAN with the database server.
- Ensure that the recommended database administration practices are implemented. This includes keeping database statistics up to date, reducing or removing table fragmentation, and re-creating indexes frequently.
Keep employee totals up to date

The Background Processor must keep up with demand so that employee totals are up to date in the database. When the database server is working with the Background Processor, it has less capacity to support other activities. If the Background Processor is not keeping employee totals up to date, users invoke the Callable Totalizer more frequently to view totaled amounts interactively. However, the Callable Totalizer is expensive to use:
- It makes the database do work twice.
- It uses a significant amount of application server CPU resources.
- It does not calculate scheduled against actual totals.
When calculations are done independently of the application server, the Background Processor removes the calculation time from end-user response time. The goal is to avoid having the Background Processor lag behind in calculations. A number of events can cause the Background Processor to fall behind. Take the following steps to minimize the likelihood of that happening:
- Never edit pay rules at times of peak activity. When pay rules are changed or configured, the Background Processor must re-total all employees. A good time to change rules is immediately after a signed-off pay period.
- Ensure that routine database maintenance, as described in Database best practices on page 13, is up to date.
- Make sure that the Background Processor (as well as the application servers) is on the same LAN segment as the database server.
Keep signoffs up to date

Improve performance of the Totalizer, and of Workforce Central in general, by keeping signoffs up-to-date, typically completed every pay period. If you do not sign off timecards regularly, every time the Totalizer runs, it reprocesses timeframes that are not signed off, performing extra work that robs performance from other parts of the product. Failure to do signoffs for extended periods of time may result in significantly poorer performance and application stability issues.
Kronos Incorporated
This section describes some of the unique performance issues of the HTML client and contains the following sections:
- HTML client versus applets on page 71
- CPU consumption on page 72
- Quick Time Stamp throughput (kiosk capability) on page 72
CPU consumption
An easy way to reduce the work done by the application server is to modify the function access profile for employees who use the HTML client. This reduces the CPU consumption of the HTML client to below that of an equivalent applet user with the same features enabled. To modify the function access profile:
1. From the upper-right corner of the Workforce Timekeeper workspace, click Setup.
2. In the Access Setup dialog box, select Function Access Profiles, then select Default.
3. In the Edit Function Access Profile dialog box, expand Workforce Employee.
4. Select Timecard Editor for Employees (My Timecard).
5. Change the access for Calculate Totals in My Timecard to Disallowed.
6. Click Save.
Chapter 3
Workforce Device Manager enables the flow of data between Workforce Central applications and data collection devices. Since Workforce Central v6.1, Workforce Device Manager has replaced Data Collection Manager (DCM). Based on performance testing results, Kronos recommends the following best practices when using Workforce Device Manager:
- Device communication settings recommendations on page 74
- Dedicated Workforce Device Manager servers (version 6.2.1 and later) on page 76
- Recommended threshold for terminals per server on page 77
- Load balancing punch uploads on page 77
- Recommended Web/application server resource settings on page 77
For optimum performance, Kronos recommends that you use the device-initiated protocol. However, if you use the server-initiated protocol and you have more than 200 server-initiated devices, Kronos recommends that you configure the netcheck interval to be 10 minutes or longer.

Regardless of whether you use the device- or server-initiated protocol, the automatic collection interval is set to 20 seconds by default. To improve throughput and reduce system resource utilization when using automatic punch collection, increase the automatic collection interval value to 60 seconds. Increasing this value allows Workforce Device Manager to process punches more efficiently in larger batches. To change the automatic collection interval value:
1. In the Workforce Timekeeper user interface, select Setup > Device Manager Setup > Device Communication Settings.
2. Select the applicable communications template, and click Edit.
3. Select the communication mode, Device Initiated or Server Initiated.
4. Scroll to the Data Collection area and change the Automatic data collection interval to 60.
5. Click Save.

If downloads will be done frequently (for example, several times each day), reduce the Download retry value to 1 on the Device Communication Setting Editor page.

Note: Vertical scaling is supported only for the device-initiated protocol.
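The benefit of a longer collection interval can be seen with a toy calculation (this is an illustrative model, not Kronos code; it simply assumes each collection cycle pays a fixed per-cycle overhead regardless of how many punches it picks up):

```python
def cycles_per_hour(interval_seconds):
    """Number of automatic collection cycles run per hour at a given interval."""
    return 3600 // interval_seconds

# At the 20-second default, 180 cycles run per hour; at the recommended
# 60 seconds, only 60 cycles run, so each cycle collects a larger batch of
# punches and the fixed per-cycle overhead is paid a third as often.
default_cycles = cycles_per_hour(20)   # 180
tuned_cycles = cycles_per_hour(60)     # 60
```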
The following illustration shows where the NetCheck interval and data collection options are on the Device Communication Setting Editor page.
Kronos recommends the use of 64-bit Workforce Central to provide better scalability. Do not combine dedicated servers; use separate dedicated WDM, report, and WIM servers as needed.
Note: You only need to apply these modifications to the Workforce Central instances that are used for device communication.
If this value approaches 1,000, use multiple web servers, because Apache has a hard-coded limit of 1,024.

IIS Web server

Increase the value of the connection_pool_size configuration property in the workers.properties file for each application server instance used for device communication. This file is located in the following directory:

\Kronos\jboss_connectors\IIS\conf

Look up the value in the table below to determine the appropriate value for your site:
Devices per application server instance    IIS connection_pool_size setting
Fewer than 250                             250
250 to 500                                 400
Increase the value of the configuration property maxThreads in the server.xml file, which is located in the following directory:
...\Kronos\jboss\server\wfc_staging\deploy\ jboss-web.deployer
Note: Do not make changes to the JBoss server settings in the instance-specific folder. Changes that exist in an instance-specific folder will be lost when a new service pack is installed. To enable a change in the JBoss server settings to be distributed to all instances, first make the change in the wfc_staging folder. Then, make the change on all application servers. Finally, run the Configuration Manager after changing the JBoss server settings on each server.
Because there are multiple maxThreads properties in this file, you only need to change the section for the AJP 1.3 connector as follows:
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="@JBossAJPConnectorPort@" address="${jboss.bind.address}"
    protocol="AJP/1.3" emptySessionPath="true" enableLookups="false"
    maxThreads="250" redirectPort="@JBossRedirectPort@"
    useBodyEncodingForURL="true"/>
<Engine name="jboss.web" defaultHost="localhost">

Look up the value in the table below to determine the appropriate value for your site:
Devices per application server instance    AJP 1.3 maxThreads setting
Fewer than 250                             250
250 to 500                                 400
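The IIS connection_pool_size table and the AJP 1.3 maxThreads table use the same device-count bands and the same recommended values. A small helper sketch (the function name is illustrative, not part of the product) makes the mapping explicit:

```python
def recommended_setting(devices_per_instance):
    """Recommended value for both the IIS connection_pool_size and the
    AJP 1.3 maxThreads settings, per the sizing tables above."""
    if devices_per_instance < 250:
        return 250
    if devices_per_instance <= 500:
        return 400
    # The guide does not size more than 500 devices per instance;
    # add application server instances instead of raising this further.
    raise ValueError("more than 500 devices per instance is not sized here")
```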
Chapter 4
Based on the performance tests run against Workforce Scheduler, this chapter presents recommended best practices to help you achieve optimal performance for your Workforce Scheduler environment.
Best Practices
You should keep the value of the following property at the default of 50:

global.WtkScheduler.MaximumNoOfRuleViolationsSentToClient

This property governs the number of rule violations sent from the application server to the client. Testing of Workforce Scheduler showed that the loading of rule violations was a memory-intensive operation and significantly impacted user response times. Therefore, setting this value higher than 50 results in longer response times. To check the value of this property:
a. Log on to Workforce Timekeeper as a system administrator or SuperUser.
b. In the upper-right corner of the Workforce Central workspace, click Setup. Then, from the System Configuration box, select System Settings.
c. In the System Settings dialog box, click the Global Values tab.
d. Check the value of the global setting global.WtkScheduler.MaximumNoOfRuleViolationsSentToClient. If the value is higher than 50, change the setting to 50.

Kronos recommends that Schedule Groups operations with more than 300 employees per group be conducted for shorter schedule periods in the Schedule Planner view, even as short as one day. Changes beyond the period in view are processed in the background. This practice lowers demand on system resources, CPU, and memory on the client machine.

The response time of Schedule Planner is sensitive to the number of employees loaded and the number of weeks viewed. Kronos recommends that you minimize the number of employees loaded in the Schedule Planner. Limit the number of employees to no more than 500 for four to six weeks per Schedule Planner at any given time.
When running Schedule Generator interactively, cover the shortest schedule period and the smallest number of employees possible. Schedule Generator is CPU-intensive on the application server and can impact the response times of other interactive users in the system. Therefore, it is good practice to run Schedule Generator during off-peak hours of operation.
Chapter 5
Workforce Forecast Manager is typically used in retail environments where staffing requirements vary depending on business demand. For example, the number of staff needed to meet the level of service required in a grocery store can vary from day to day. Based on the performance tests run against Workforce Forecast Manager, the best practices presented in this chapter will help you achieve optimal performance for Workforce Forecast Manager.

Note: The best practices noted in this document also apply to Schedule Generator.

This chapter contains the following sections:
- Best practices on page 86
- Configuring number of forecasting and Auto-Scheduler engines on page 88
Best practices
For large stores (more than 100 employees per store), Kronos recommends that you run volume forecasting, labor forecasting, and auto-schedule generation using Event Manager during off-peak hours. This is especially true for auto-schedule generation. (See Configuring number of forecasting and Auto-Scheduler engines on page 88 for more information.) Running Auto-Scheduler in interactive mode at higher user loads consumes significant amounts of application server CPU and causes response times to suffer.

You should keep the value of the following property at the default of 50:

global.WtkScheduler.MaximumNoOfRuleViolationsSentToClient

This property governs the number of rule violations sent from the application server to the client. Testing of Workforce Scheduler showed that the loading of rule violations was a memory-intensive operation and significantly impacted user response times. Therefore, setting this value higher than 50 results in longer response times. To check the value of this property:
a. Log on to Workforce Timekeeper as a system administrator or SuperUser.
b. In the upper-right corner of the Workforce Timekeeper workspace, click Setup. Then, from the System Configuration box, select System Settings.
c. In the System Settings dialog box, click the Global Values tab.
d. Check the value of the global setting global.WtkScheduler.MaximumNoOfRuleViolationsSentToClient. If the value is higher than 50, change the setting to 50.

Auto-Scheduler response times and number of operations

The default setting for Run iterations is 100. This number of iterations results in better quality schedules. Do not change this setting unless you have experimented with different numbers of iterations and analyzed the resulting schedules. As expected, higher numbers of iterations result in longer response times.
Auto-Scheduler response time at different store levels

Tests have shown that generating schedules at the store level takes longer compared to the sum of the response times for the individual departments. Running Auto-Scheduler at the department level reduces the overall time to generate schedules. Depending upon business needs, group the jobs in a store into multiple option sets.

Number of Auto-Scheduler engines

Auto-Scheduler is application server CPU-intensive. A general rule is to ensure that the number of Auto-Scheduler engines does not exceed 75% of the available logical CPUs (cores) on the application server. For example, if there are 8 cores available, configure no more than six Auto-Scheduler engines.
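The 75% sizing rule can be sketched as a quick calculation (a hypothetical helper, not part of the product):

```python
import math

def max_autoscheduler_engines(logical_cpus):
    """Cap Auto-Scheduler engines at 75% of available logical CPUs (cores)."""
    return math.floor(logical_cpus * 0.75)

# Example from the text: 8 cores available -> no more than 6 engines.
```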
3. Set the value based on the number of concurrent users expected on this particular application server. For Volume and Labor Forecasting and Auto-Scheduler, use the following equation to calculate the appropriate value:
<Number of concurrent users> multiplied by 100, where 100 is the Auto-Scheduler engine rating. For example, if the expected number of users is 50, then the value should be 5000.

Caution: Because Auto-Scheduler engines are CPU-intensive, Kronos highly recommends that you minimize the number of users running these engines interactively. Otherwise, the response times of other interactive users will be impacted.
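The equation above amounts to a single multiplication (the function and constant names below are illustrative, not product settings):

```python
AUTO_SCHEDULER_ENGINE_RATING = 100  # engine rating used by the guide's formula

def engine_setting_value(concurrent_users):
    """<Number of concurrent users> multiplied by the engine rating of 100."""
    return concurrent_users * AUTO_SCHEDULER_ENGINE_RATING

# Example from the text: 50 expected users -> a value of 5000.
```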
Chapter 6
Workforce Operations Planner is a retail scheduling product that enables budget stakeholders to collaboratively develop and review the budget. It also generates a labor forecast in hours as well as dollars for a future fiscal period. Based on the performance tests run against Workforce Operations Planner, this chapter presents the recommended best practices to help you achieve optimal performance for Workforce Operations Planner.
Best practices
While working with a large number of stores (400 and above) that have longer fiscal periods (longer than a quarter), it is important to execute the following functions when the demand for system resources is not significant:
- Generate Plan with volume data
- Generate Plan with labor data
- Edit plan
The Batch Processing Framework processes each request based on its priority and the time that it is entered in the queue. If priorities for two requests are the same, then the request that was entered in the queue first is processed first. Note: The Workforce Operations Planner batch requests are entered into the queue with the highest priority. Therefore, if other batch requests with lower priority (for example, running the Forecaster or Auto-Scheduler engines) are already in the queue, Workforce Operations Planner batch requests run before lower priority requests.
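The queueing behavior described above — priority first, then first-in-first-out within a priority — can be modeled with a simple priority queue (a simplified sketch, not the actual Batch Processing Framework code; names are illustrative):

```python
import heapq
import itertools

class BatchQueue:
    """Toy model of the batch queue: lower number = higher priority;
    ties are broken by arrival order (FIFO)."""
    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()  # monotonically increasing tiebreaker

    def submit(self, priority, name):
        heapq.heappush(self._heap, (priority, next(self._arrival), name))

    def next_request(self):
        return heapq.heappop(self._heap)[2]

q = BatchQueue()
q.submit(5, "Forecaster run")      # lower-priority request, arrived first
q.submit(1, "Operations Planner")  # highest priority, arrived later
q.submit(5, "Auto-Scheduler run")
# The Operations Planner request runs first despite arriving later; the two
# priority-5 requests then run in arrival order.
```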
Chapter 7
Workforce Attendance interprets the time and attendance data that is collected by Workforce Timekeeper based on your company's attendance policies. It also generates applicable warnings to help supervisors manage their employees' attendance. Based on the performance tests run against Workforce Attendance, this chapter presents the recommended best practices to help you achieve optimal performance for Workforce Attendance.
Best practices
If you have more than 5000 employees, you should run Workforce Attendance processing in batch mode during off-peak hours. Because the response time of interactive mode can be prohibitive during periods of peak activity, use Event Manager to schedule batches to run during off-peak hours daily. This is especially important if you are using any of the following features:
- Lost time
- Lost time with time collection
You can increase the number of Workforce Attendance processing batches that can be run concurrently by changing the following in the System Settings:
a. In the upper-right corner of the Workforce Timekeeper workspace, click Setup.
b. In the System Configuration box, select System Settings.
c. Select the Batch Service tab.
d. Change the value of site.BatchService.numberOfCPU to 2 to set the number of CPU cores.
e. Click Save.
Note: If you increase this setting, you should also consider the impact on other Workforce Central 6.2 processing.
Chapter 8
Workforce Analytics extracts data from the Workforce Central database and transforms the data into a target data warehouse model that is designed for optimal reporting and analytics. For optimal performance, consider the recommendations in the following sections:
- SQL Server best practices on page 98
- Oracle best practices on page 102
- Open Query best practices on page 104
- Workforce Analytics product performance best practices on page 105
- Analysis Services SSAS best practices on page 106
6. Click OK.

If you prefer, you can disable parallelism by executing a query. From the left-side navigation bar of SQL Server Management Studio, select the name of the Workforce Central database and click New Query from the header. In the Management Studio query window, enter the following, then click Execute:

exec sp_configure 'show advanced options', 1
go
reconfigure with override
go
exec sp_configure 'max degree of parallelism', 1
go
reconfigure with override
go
c. In Management Studio, right-click the server name, select Properties, and select Connections from the left side of the workspace.
d. Change the query timeout value.
e. Click OK.

Note: Kronos recommends that you set the Remote Query timeout value to 1800 (30 minutes) to allow potentially long queries to complete.
Oracle memory settings

Kronos recommends that you set only MEMORY_TARGET and MEMORY_MAX_TARGET and leave the detailed memory management to Oracle 11g. You should set these memory target values based on the available memory on the server and the operating system's upper limits.

Database writer processes

The database writer (DBWn) writes modified blocks from the database buffer cache to the data files. Kronos recommends that you use one database writer process for each CPU core, up to a maximum of four writer processes.
The SSAS ExternalCommandTimeout property controls how long Analysis Services waits before timing out when issuing commands to external data sources, such as relational and other OLAP sources. Setting this property to 14400 allows sufficient time for the cube processing query to complete. To change the ExternalCommandTimeout property on the SSAS server:
1. Connect to the Analysis Services server using SQL Management Studio.
2. Right-click the server name and select Properties.
3. Select Show Advanced (All) Properties at the bottom of the workspace.
4. Find the ExternalCommandTimeout property and change its value to 14400.
Kronos also recommends that you put the Analysis Services cubes on a volume with multiple drives and adequate space. This is required for Analysis Services performance. The following shows where we recommend changing the default directory name to one that has enough disk space.
Chapter 9
Workforce HR and Workforce Payroll are core components of the Workforce Central suite of products. Both applications use Microsoft Transaction Server (MTS), Internet Information Server (IIS), and a SQL Server database to provide a single point of access for HR and payroll needs.
Database configuration
Database configuration is key to HR/Payroll application performance. If you have not already done so, review the following sections of this document: Database server best practices on page 17 Best practices for TEMPDB layout on page 28
4. In the Internet Information Services (IIS) Manager dialog box:
a. Expand Web site > Default Web Site.
b. Right-click Admin and select Properties.
c. In the Admin Properties dialog box, click Configuration.
d. In the Application Configuration dialog box, select the Options tab.
e. Select Enable session state and set the session timeout to 30 minutes.
f. Set the ASP script timeout to 1800 seconds.
g. Click OK until you return to the main Internet Information Services (IIS) Manager dialog box.
5. Restart IIS:
a. Select Start > Control Panel > Administrative Tools > Services.
b. In the Services panel, right-click IIS Admin Service and click Restart.
c. Exit the Control Panel.
To do this on a machine running Windows Server 2008 R2:
1. On the web server machine, select Start > Administrative Tools > Internet Information Services (IIS) Manager.
2. Expand Application Pools.
3. Right-click Classic .NET AppPool.
a. Select Set Application Pool Defaults.
b. Set Idle Time-out (minutes) to 30.
c. Click OK.
4. Expand the Sites directory listing.
a. Right-click the website.
b. Select Manage Web Site.
c. Select Advanced Settings.
d. Expand Behavior and Connection Limits.
e. Change Connection Time-out (seconds) to 1800.
5. Select the WFC website (usually Default Web Site).
6. In the ASP.Net Session State dialog box for the WFC website:
a. Set Time-out (in minutes) to 30.
b. Select the WFC website (usually Default Web Site) again.
c. Select Yes to save changes in the Session State dialog box.

Using any version of IIS, you must also modify the c:\Kronos\whrpr\web\web.config property file. Set the values of the following properties to 1800:

<httpRuntime executionTimeout="1800" />
<add key="AsyncPostBackTimeout" value="1800" />
<add key="iKeepAliveTime" value="1800" />
Chapter 10
A key element in minimizing the impact of Workforce Record Manager (WRM) on the operation of the production database is understanding how to fit WRM processing into allocated maintenance windows, or being able to run WRM tasks during periods when processing on the production database is minimal. Based on testing of WRM in Workforce Central 6.2, the following are key recommendations to help obtain optimal performance of the WRM COPY and PURGE processes. Note that unless otherwise specified, any Workforce Central Application Tuning options are found in the Record Retention Options and Tuning tab under Setup/System Settings. Each of these recommendations is discussed in more detail later in this chapter:
- Configure ample I/O bandwidth.
- Use dedicated 64-bit application servers.
- Run the WRM COPY during non-peak processing hours.
- Avoid running the PURGE with active Workforce Central user activity or processing.
- Run COPY and/or PURGE operations in 1-month increments.
- Define and implement archive strategies within the first year of overall deployment.
- Process more tasks concurrently by setting the application property WrmSetting.Tuning.MaxThreads to at least 6, especially on an application server with 8 processors (64-bit application servers ONLY).
- Use Row Count Validation for COPY operations rather than Complete Binary Validation for optimal performance.
- Never select the option No Validation, so that the Target Database is assured to be in a valid state.
- Set the application property WrmSetting.Option.DropTargetIndexes to true to drop indexes on the Target Database during a COPY operation.
- Move the data for large non-historic tables in one operation by setting the property WrmSetting.Tuning.CopyChunkRowIdThreshold to a value higher than the maximum number of rows in the largest non-historic table (64-bit application servers ONLY).

Note that for the properties WrmSetting.Tuning.MaxThreads and WrmSetting.Tuning.CopyChunkRowIdThreshold, the application limits the values entered through the Record Retention Options and Tuning tab to 4 and 500000 respectively, due to the memory limitations of 32-bit application servers. To override this GUI limitation, enter the following lines into the file custom_WRM_OptionsAndTuning.properties, which is located in the directory \Kronos\wfc\applications\wrm\properties on the application server. The application server must be restarted after making these changes for the changes to take effect:

WrmSetting.Tuning.MaxThreads=6
WrmSetting.Tuning.CopyChunkRowIdThreshold=50000000
Run the WRM COPY and/or PURGE during non-peak processing hours
Running the WRM COPY introduces some overhead on the production database due to the population of temporary node tables to establish the data to be copied, extraction of data from the Source Database to be copied to the Target Database, and data validation routines to ensure the data being copied to the Target Database matches the data on the Source Database. Since this overhead occurs on the largest of transaction tables, it is strongly recommended not to run the COPY during peak processing hours such as pay period end or other times where the system is the busiest. It is recommended to define a maintenance window in which to run the WRM COPY, similar to how database maintenance tasks are scheduled.
The bottom line: if the COPY is run within the first year of full rollout, the database will be in a state where the PURGE can be run whenever you decide to reduce the size of the production database. This eliminates a significant delay if the database suddenly exceeds size requirements or causes typical database maintenance procedures to exceed their maintenance windows.
The count query is run on both the Source and Target Databases and the results compared for data validity. To assess the performance impact of these two validation options, a COPY experiment was run with Database #2 as defined in Table 15 on page 148 in Appendix D. The experiments consisted of running the COPY for 1-month of data with binary validation, then with Row Count validation. The results were as follows: Complete Binary Validation COPY time: 4.1 hours Row Count Validation COPY time: 1.4 hours Note that with this test case, the elapsed time to run the COPY increased by almost a factor of 3 when using Complete Binary validation versus Row Count validation. What is more significant is the actual time measured for the validation methods. Recall that for binary validation, queries are run for every chunk processed and those queries are run in parallel. The elapsed times for the validation methods were measured as follows for the aforementioned experiment: Complete Binary Validation processing times: 17.5 hours Row Count Validation processing times: 7.2 minutes Since the process is multi-threaded, the 17.5 hours of processing is distributed amongst 6 threads configured for the property WrmSetting.Tuning.MaxThreads. With fewer threads, the overall elapsed time for the COPY would have significantly increased due to the time requirements for complete binary validation.
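The reported numbers can be sanity-checked with simple arithmetic (values taken directly from the experiment above):

```python
# Overall COPY elapsed times from the experiment.
binary_copy_hours = 4.1
rowcount_copy_hours = 1.4
slowdown = binary_copy_hours / rowcount_copy_hours  # ~2.9x: "almost a factor of 3"

# Validation-only processing time, spread across the configured threads.
binary_validation_hours = 17.5
threads = 6  # WrmSetting.Tuning.MaxThreads used in the experiment
per_thread_hours = binary_validation_hours / threads  # ~2.9 hours of validation per thread
```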
The default setting for this property is false; it is recommended that you change it to true. The property that defines the indexes to be dropped is named WrmSetting.Option.IndexList. The recommended indexes to drop are defined in Index List for the Property WrmSetting.Option.IndexList on page 150 in Appendix D. This list of indexes is based on running the WRM COPY on several Oracle and SQL Server databases. To display the impact of inserting records into and deleting records from large tables with heavy indexing, an experiment was run using Database #1, as defined in Table 14 on page 148 in Appendix D, to INSERT/DELETE records from the WFCTOTAL table with full indexing applied and without the indexes. Table 12 represents the results when varying the number of records inserted into and deleted from the WFCTOTAL table. All times are measured in seconds.
Table 12: Elapsed times (seconds) and records per second for WFCTOTAL INSERT/DELETE operations, with indexes and with no indexes.
Note the significance of the indexing impact. When inserting 135 million records into the fully indexed WFCTOTAL table, the elapsed time was close to 11 hours. When removing all indexes (except for the primary key), the elapsed time to INSERT the same 135 million records was reduced from 11 hours with indexing down to 8 minutes with the indexes removed. The performance impact
on COPY will be more significant each month for the fully indexed tables as the Target Database grows.
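The throughput implied by the WFCTOTAL experiment can be worked out from the figures above (the "11 hours" and "8 minutes" values are taken as stated, so the rates are approximate):

```python
records = 135_000_000

indexed_seconds = 11 * 3600   # ~11 hours with full indexing
no_index_seconds = 8 * 60     # ~8 minutes with indexes removed

indexed_rate = records / indexed_seconds    # ~3,400 records/second
no_index_rate = records / no_index_seconds  # ~281,000 records/second
speedup = no_index_rate / indexed_rate      # ~80x faster without the indexes
```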
Notice that with the higher setting for WrmSetting.Tuning.CopyChunkRowIdThreshold, the Node Population and Transfer time of 1.7 hours is eliminated from the COPY job. Subsequently, the overall elapsed time for the COPY job was reduced from 7.9 hours with the default property setting down to 5.9 hours with the optimized property setting.
Chapter 11
Workforce Integration Manager is a next-generation data integration tool that interfaces Workforce Central products with other business-critical applications. Unlike Workforce Connect, which is an entirely Windows client-side utility, Workforce Integration Manager allows users to run and maintain interfaces from their Workforce Central web interface. Interface programmers will use the client-side Interface Designer (which continues to use the Workforce Connect look-and-feel) to create and update data integration interfaces.

Note: Workforce Central 6.2 continues to support Workforce Connect 6.0. For Workforce Connect performance recommendations, refer to the v6.0 version of Best Practices for Optimal Workforce Central Performance.

Based on the performance tests run against Workforce Integration Manager, the following list of best practices will help you achieve optimal performance for Workforce Integration Manager:

Use multi-server import
Kronos recommends that you use Parallel XML import (using two or more Workforce Central servers) for large-volume imports. For large data sets, using more Workforce Central servers in parallel will make the total import time shorter.

Incremental (delta) data import
For daily and regularly scheduled imports of data into Workforce Central, design the imports so that only incremental or changed data is imported regularly. This practice helps to minimize the volume of data imported, resulting in faster total import times.
Scheduling memory- or CPU-intensive links
When possible, avoid running links that consume large amounts of CPU or memory during peak processing periods. If it is necessary to run links during peak processing periods (such as pay period end or schedule maintenance processing), run the interfaces on a server that has no interactive users on it.

Database vs. memory sort
Sorting in memory has fairly low CPU, time, and memory overhead. However, if out-of-memory conditions occur when using a SQL Query, Workforce Activities data, Workforce Activities changed data, or the Workforce Timekeeper API as a source, consider reducing the knx.engine.MaxInMemorySortSize system setting to fewer records to push the interfaces to sort in the database instead. Reducing the value of this system setting will not have a significant impact on the memory utilization of other data source types. To access this system setting:
a. In the Workforce Timekeeper workspace, select Setup > System Configuration > System Settings.
b. Select the Data Integration tab.
c. Locate the knx.engine.MaxInMemorySortSize setting and change it accordingly.
d. Click Save.
Break up large integrations into multiple steps
As seen in the scalability tests, large workloads can be processed much more quickly when they are executed in smaller parallel chunks. Smaller data sets may also alleviate out-of-memory conditions. This data should be broken up outside the WIM product. For example, the data could be segmented in the source application by:
- Breaking a single large input file into multiple smaller input files using a shell script, or
- Adding an additional where clause in the case of a database source.
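The file-splitting approach can be sketched as follows (the text suggests a shell script; this is an equivalent Python version, with file naming and chunk size purely illustrative):

```python
def split_input_file(path, lines_per_chunk):
    """Write consecutive chunks of `path` to path.part0, path.part1, ...
    and return the list of chunk file names."""
    chunk_paths = []
    with open(path) as src:
        lines = src.readlines()
    for i in range(0, len(lines), lines_per_chunk):
        chunk_path = f"{path}.part{i // lines_per_chunk}"
        with open(chunk_path, "w") as dst:
            dst.writelines(lines[i:i + lines_per_chunk])
        chunk_paths.append(chunk_path)
    return chunk_paths
```

Each resulting chunk can then be fed to a separate interface run, letting the chunks execute in parallel.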
Consider the impact of automatically triggered integrations in large updates
Large updates should be done at times when the system usage will not impact other users, or the automatically triggered integrations should be temporarily disabled during such an update.
For example, if a large-scale reorganization of the company occurs in an HR or ERP system with integrations configured to start automatically for each record updated:
a. Disable the integration trigger in the source application's database.
b. Update the company organization and job assignments.
c. Re-import the organizational structure and job assignment data.
d. Re-enable the integration trigger in the source database.

Disable virus scanners in certain circumstances
Disable the application server's on-access virus scanning in the mapped folder directories. Scanning input or output files during link execution can have a significant impact on both CPU consumption and link execution time. Also disable the client's on-access virus scanner when opening large lookup tables or output files in the browser.

Disqualify records as early in link processing as possible
The earlier a record is disqualified, the sooner the Integration Manager engine can begin processing the next record.
- If possible, filter input data before passing it into the Integration Manager engine. In the case of a SQL query, try to filter out records in the where clause.
- Disqualification should take place in a variable. Variables are processed in a top-down manner, so disqualifications should take place in the topmost variables in the Interface Designer.
- Disqualification can take place in the output fields.
Only input necessary data Try to input only the required data into the Integration Manager engine. Avoid processing data sets that include unnecessary columns or unnecessary rows. Processing these additional records increases the time and memory footprint required to execute the interface. In SQL query sources, retrieve only the columns and rows needed for processing. In text files, try to preprocess the files to contain only the necessary data.
Chapter 11
Create a record retention policy to meet the needs of your business Create a record retention policy that removes interface results at an interval that supports system usability and the amount of history likely to be reviewed for interface analysis. Although the number of interface results (Interface Results Summary reports) stored in Workforce Central should not impact system performance, this is a good practice to establish. Minimize memory usage Integrations take place in the shared Workforce Central Java memory space. This memory has limitations beyond the amount of physical memory on the system. Using unnecessary amounts of memory may cause link execution to fail or may disrupt other users of the same Workforce Central instance. Several areas to keep in mind regarding memory usage in Workforce Integration Manager include the following: Link/interface size Remove any unused variables, fields, and lookup tables. Data size Only input necessary data; avoid unnecessary rows or columns in input data. Lookup table size Remove any unused tables, columns, and rows. Reduce avoidable errors If hundreds of errors are generated during every execution, try to isolate the root cause and eliminate it. Do not run more interfaces than memory allows Running more interfaces concurrently will speed execution, but if memory becomes tight, consider installing a second instance of Workforce Integration Manager rather than running more threads in a single instance.
Optimize lookup tables Lookup tables with no wildcards or those with an asterisk (*) on the last line perform faster than those with many wildcards. In tables where more wildcards are necessary, try to put the table in order of most common usage.
Use XML import Kronos recommends that you use XML import for all data imports; XML imports are faster than table imports. Insert vs. Update When importing data using XML API methods, it is important to use the correct import type: Insert, for new records to be added, or Update, for updating data that already exists in the database. Note that the Update import type allows inserting new records; however, the best performance is achieved by using the correct import type. Manage thread pool The default Workforce Integration Manager thread pool is configured to run up to five integration interfaces concurrently. In cases where running five interfaces concurrently exhausts the Java heap space or provides unacceptably slow performance to interactive users of the instance, lower this number. To process more interfaces in parallel, you can increase this value. For maximum performance of individual interfaces, Kronos suggests setting this value no higher than the number of processing cores available to the Workforce Central instance. To change the thread pool size: a. Select Setup > System Configuration > System Settings. b. Select the Data Integration tab. c. Locate the knx.engine.ThreadPoolSize property and change the value to the desired number. d. Click Save.
Use a dedicated WIM server If any of the following conditions exist, use a dedicated WIM server: The number of employees in the installation is more than 10,000. Large imports/exports are carried out during interactive usage of the system (applicable to installations of all sizes). Memory- or CPU-intensive links must be run during Pay Period End or Shift Change usage periods.
An existing server can be allocated for this purpose. More dedicated servers can be added if circumstances dictate, such as when multiple long import/export jobs are run concurrently during peak interactive usage periods. Do not combine dedicated servers. Use separate dedicated WDM, report, and WIM servers as needed.
Appendix A
This appendix provides background information and recommendations about tuning Oracle indexes for optimal performance, including: Background information about the internal management of Oracle indexes: what occurs in terms of growth and storage inside the index during the normal life cycle of data change. Guidance on assessing the quality of an index and determining whether any maintenance should be carried out to increase performance within the database. Overview on page 130 Block splitting on page 131 Sparse population on page 132 Empty blocks on page 132 Height/Blevel on page 132 Indexes in the Kronos environment on page 133 Workforce Central index maintenance recommendations on page 137
Overview
Index maintenance within any database environment has long been a subject of debate. Some of the common topics that are debated are: Whether you rarely need to rebuild indexes or you should rebuild them regularly. Whether indexes whose values grow sequentially to the right should be rebuilt regularly. Whether indexes deleted from the left should be rebuilt regularly. Whether index blocks whose values are deleted are reused within the index. Whether indexes of a certain height/Blevel need to be rebuilt.
The following sections provide some guidance on these topics with respect to the Kronos database environment. The primary rule to follow for index maintenance in any environment is to understand how an application uses an index before you rebuild it. Rules such as rebuilding the index when the percentage of deleted entries within the index grows above a certain threshold, or when the Blevel of the index grows beyond 4, are not entirely accurate. Deleted records are only marked for deletion initially; an update consists of a deletion followed by an insertion. The space held by deleted entries in the index is typically cleaned out and reused when: The block is completely empty and is placed back on the free list. A record is inserted into the block, which cleans out the deleted entries within that block. Any DML on the block cleans out the deleted entries already present within that block.
Block splitting
When a block in the middle section of an index's logical structure fills, it can split in either of the following ways: 50:50 split A new adjacent block is created and the contents of the original block are split 50:50 between the two blocks. 90:10 split When the block splits, the inner block remains mostly full and the new outer (right) block contains only a small amount of row data. The 90:10 split method allows the index to grow with its data very compactly stored within the index blocks, as opposed to the 50:50 split, which leaves unused space in each block after the split. The 50:50 split assumes that data may be inserted in any location, so it leaves space available in both blocks. With Oracle 10g, when the right-most leaf block fills (as in an index whose new values increase sequentially), a 90:10 split occurs. The 50:50 and 90:10 methods have a significant impact on the density of storage within the index blocks and on the performance of index range scans. Caution: Like the sparse population condition (described next), the 50:50 split can increase the actual number of blocks required for a range scan. Although rebuilding to compress density can reduce the range scan cost, there is an important caveat: the 50:50 split provides important space availability for random inserts due to table inserts or updates. Without that available space, the index must undergo a much higher rate of block splits. Block splits are a very expensive operation and can easily outweigh the cost of reading a few more blocks during a scan.
Sparse population
The presence of sparse data distribution over index blocks can have a significant impact on index range scans. Denser packing can reduce this cost, but as explained in Block splitting on page 131, denser packing also increases split operations.
Empty blocks
The presence of large numbers of empty blocks within an index due to delayed block cleanout can be a serious problem because it can impact index range scan costs, depending on the optimizer statistics collection method. Large numbers of empty blocks retained on the end of an index can severely skew the cost of an index range scan. In addition, large numbers of empty blocks within the index can cause similar problems. There seems to be a trade-off between the severe cost of a transaction cleaning its own blocks out and the impact on range scan estimates.
Height/Blevel
Height/Blevel is one of the least useful of the more popular metrics for examining index storage health. Leaf blocks greatly outnumber branch blocks. For height/Blevel to be an issue, it requires a very specific use profile to cause resource pressure. Also, rebuilding to only reduce height/Blevel requires a previous reduction in rows and can result in further block splitting costs if growth returns. Using the height/Blevel of an index solely as a guideline for rebuilding that index is not a recommended practice. As an index grows and expands, the height/Blevel may grow and expand with it. Keep in mind that it takes time and resources for an index to grow and expand and get to a certain height/Blevel, and it usually does so for good reasons.
Indexes in the Kronos environment
With this in mind, there may still be a need to rebuild some indexes, depending on how the application's DML activities affect them. This section provides an overview of the different types of indexes that exist within the Workforce Central database and how different types of DML can affect them. Indexes within the Workforce Central database can be separated into the following types: Monotonically increasing indexes on page 134 Indexes with no pattern of inserts on page 134 Primary key indexes on page 134 Date datatype indexes on page 135
If deletes are sporadic and do not result in emptied blocks, the space within these types of indexes will not be reused as any new entries are monotonically increasing.
For example, transactional tables such as WFCTOTAL, TIMESHEETITEM, PUNCHEVENT, ACCRUALTRAN, ACCRUALEDIT, PUNCHEVENTTRC, WFCEXCEPTION, and others are candidates for rebuilding their primary key indexes. With the exception of Workforce Record Manager purges, these tables do not have bulk deletes performed against them. Tables such as PERSONIMPORT, PERSONMANYIMPORT, MYWTKEMPLOYEE, and others are not candidates for index rebuilds because they do have bulk deletes that result in emptied blocks performed against them.
Historical date indexes These indexes have date information that does not increase monotonically. Examples of these types of date columns are apply dates. A transaction can be deleted and reinserted at any point and it can affect historical records. Rebuilding indexes of this nature can be detrimental to the overall performance of the database, because these types of indexes work best having free space in their blocks for updates and future inserts.
Index coalesce
The index coalesce operation should become a regular operation in index maintenance. It is very effective in reducing empty blocks that are still attached to the logical index structure, minimizing excessive sparseness.
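As a sketch, a coalesce is a single DDL statement per index. The schema prefix below is an assumption; X1_WFCTOTAL is one of the Workforce Central indexes named elsewhere in this guide:

```sql
-- Merge adjacent, sparsely populated leaf blocks back into fewer blocks.
-- Unlike a rebuild, this works in place on the existing index structure.
ALTER INDEX wfc_owner.X1_WFCTOTAL COALESCE;
```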
Rebuilding an index
1. Using the index analysis script in this step, collect statistical information on the current storage structure for the Workforce Central schema. This SQL script carries out ANALYZE ... VALIDATE STRUCTURE commands on the indexes within a schema, collects the data into a work table, and spools out the contents. The script can be modified to insert the results into a table, or to output HTML for import into Excel for easier analysis. Important: To collect index storage statistics with the ANALYZE command, it must be run in offline mode. While it is running on an index, all DML against that segment is blocked.
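The core of such a script is the standard Oracle ANALYZE ... VALIDATE STRUCTURE command, which populates the session-level INDEX_STATS view with a single row for the most recently analyzed index. A minimal sketch, with an assumed schema prefix:

```sql
-- Offline analysis: blocks DML against this index while it runs.
ANALYZE INDEX wfc_owner.X1_PUNCHEVENT VALIDATE STRUCTURE;

-- INDEX_STATS holds results for the last ANALYZE only, which is why the
-- full script loops over the schema's indexes and copies each row into
-- a work table before analyzing the next one.
SELECT name, height, blocks, lf_rows, del_lf_rows, pct_used
FROM   index_stats;
```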
2. Using the results of the script, identify indexes that potentially need to be rebuilt. The following are key data points to focus on: The PCT_USED space of an index provides a good starting point for the analysis. Note that PCT_USED is relative to the actual number of rows in that index; if an index has only one row, its PCT_USED will be very low, so it may be more reasonable to focus on indexes that use more than a certain number of blocks. The DEL_LF_ROWS column is the second item of interest. This column reports how many leaf row entries have been deleted; it should be used in conjunction with the PCT_USED column, not alone. After you locate indexes with a low PCT_USED and/or a high DEL_LF_ROWS, categorize these indexes according to the list in Indexes in the Kronos environment on page 133 (for example, primary key, date, and so forth), and make sure you understand each index's insert method and potential, as well as its deletion method and potential.
3. Rebuild the indexes identified in step 2 and monitor the system for performance improvements. 4. After identifying and rebuilding indexes, always retain the records of which indexes were rebuilt, and the storage data used to determine their candidacy. If a given index continues to resurface as needing to be rebuilt, it may be that sparse population is the normal state of the index and rebuilds only force the index to immediately undergo expensive block splits as it quickly returns to its normal state. In addition, indexes that benefit from rebuilding may again reach a state requiring another rebuild, and the analysis time can be reduced if records of the rebuilds are maintained.
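Step 3 is a standard Oracle rebuild; the index name below is illustrative. The ONLINE keyword, where available for your Oracle edition, reduces (but does not eliminate) DML blocking during the rebuild:

```sql
-- Rebuild an index identified in step 2 of the analysis.
ALTER INDEX wfc_owner.X1_PUNCHEVENT REBUILD ONLINE;
```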
Appendix B
Based on a series of tests completed in the Kronos performance lab, this appendix outlines recommendations for configuring and deploying Workforce Central in a VMware environment. Important: The recommendations in this appendix are based on the assumption that the hardware used is dedicated to the Kronos suite of applications. If your production systems have non-Kronos applications running on the same physical hardware, you must consider peak usage scenarios and overall system utilization. Also note that these are general recommendations, not rules, and you should understand the need for and the ramifications of deviating from them. This appendix contains the following topics: General recommendation on page 142 Hardware and software requirements on page 142 VM configuration on page 143
General recommendation
Although VMware has a multitude of options for tuning workloads, Kronos recommends that you keep the VMware configuration as simple as possible. Performance testing of Workforce Central in a VMware environment obtained optimal results when running VMware with an out-of-the-box configuration, taking the default settings for system parameters and tuning options.
Hardware
The hardware listed below represents the minimum processor specifications to be used in the sizing process for optimal performance of Workforce Central on VMware. The total hardware requirements will depend on individual customer use cases and will be determined by your local Kronos Representative through a hardware-sizing process. Dual-core systems based on Intel 5130 processor running at 2 GHz or greater Quad-core systems based on Intel 5335 processor running at 2 GHz or greater Dual-core systems based on AMD 2200 series processor running at 2.4 GHz All AMD quad-core processor systems 4 GB memory per processor core (depending on the number and size of the virtual machines) 1 GB Ethernet adapter per system If you use vSphere 4.0, Kronos recommends that you implement VMware's hardware best practices for vSphere. To do this, you must modify the following BIOS settings: Enable Intel or AMD Virtualization Technology (VT). Enable Execute Disable.
Note: Check with your hardware vendor for how to access your machine's BIOS settings. For example, to enable hardware Virtualization Technology and Execute Disable in a Dell BIOS, press F2 as the machine boots, and then select CPU Information.
Software
vSphere 4.0
VMware ESX 4.0 and 3.5
VMware ESXi 4.0 and 3.5
VM configuration
Recommendations for configuring the virtual machine (VM) for use with Workforce Central are outlined in the following topics: Memory allocation on page 143 Virtual CPU allocation on page 144 System resource allocation on page 144 Monitoring virtual machine usage on page 144
Memory allocation
Configure VMware virtual machines with between 2 and 4 GB of memory, depending on whether the VM is used for the Workforce Central reporting functionality: If the instance of Workforce Central running within the VM is used to generate reports, you should consider using 4 GB of memory for the VM. If the instance of Workforce Central running within the VM is not used to generate reports, 2 GB of memory for the VM is adequate.
Appendix C
Best practices
Use stand-alone Hyper-V for virtualization. Refer to Appendix B, Recommendations for using Workforce Central with VMware, on page 141 for CPU requirements. Use Windows 2008 R2 as the guest OS. Memory (RAM) requirements are the same as the memory requirements for the physical server; however, Kronos recommends that you provide at least 4 GB of RAM per virtual machine. Close the Hyper-V Manager console when it is not in use, because it consumes CPU resources.
Appendix D
The following topics contain information specific to optimizing Workforce Record Manager. Use this information in conjunction with the recommendations in Chapter 10, Workforce Record Manager Best Practices, on page 113. Database definitions on page 147 Index List for the Property WrmSetting.Option.IndexList on page 150 Oracle Tuning on page 151 SQL Server Tuning on page 152
Database definitions
For testing the 6.2 release of Workforce Record Manager, five customer databases (described in Table 14 through Table 18) were selected to represent different data distributions, employee counts, database platforms, application use, and accumulations of historical data. The key criteria for each database are as follows: Number of active and configured employees: Because WRM processes historical data, it is important to know the total employee count as well as the number of active employees. Number of years of historical data: The amount of historical data will have an impact on performance and size for both the source and target databases. Size of the source database. Number of punches processed per day: This metric has an impact on the amount of PUNCHEVENT data processed for each COPY and PURGE job.
Number of shifts processed per day: This metric has an impact primarily on the amount of SHIFTASSIGNMNT data processed for each COPY and PURGE job. Applications configured: The Workforce Central Suite components configured for a customer will have an impact on the number and type of records processed each month.
Index List for the Property WrmSetting.Option.IndexList
WrmSetting.Option.IndexList=X1_TIMESHEETITEM,X2_TIMESHEETITEM,X3_TIMESHEETITEM,X4_TIMESHEETITEM,X5_TIMESHEETITEM,X6_TIMESHEETITEM,X7_TIMESHEETITEM,X8_TIMESHEETITEM,X9_TIMESHEETITEM,XA_TIMESHEETITEM,XB_TIMESHEETITEM,XC_TIMESHEETITEM,XD_TIMESHEETITEM,XE_TIMESHEETITEM,X10_TIMESHEETITEM,X1_PUNCHEVENT,X2_PUNCHEVENT,X3_PUNCHEVENT,X4_PUNCHEVENT,X5_PUNCHEVENT,X1_WFCTOTAL,X2_WFCTOTAL,X3_WFCTOTAL,X4_WFCTOTAL,X5_WFCTOTAL,X6_WFCTOTAL,X7_WFCTOTAL,X1_ACCRUALTRAN,X2_ACCRUALTRAN,X3_ACCRUALTRAN,X4_ACCRUALTRAN,X5_ACCRUALTRAN,X6_ACCRUALTRAN,X7_ACCRUALTRAN,X8_ACCRUALTRAN,X9_ACCRUALTRAN,X1_AUDITITEMSTR,X2_AUDITITEMSTR,X1_SCHEDULEDTOTAL,X2_SCHEDULEDTOTAL,X3_SCHEDULEDTOTAL,X4_SCHEDULEDTOTAL,X1_AVLPATTRNASGN,X2_AVLPATTRNASGN,X3_AVLPATTRNASGN,X4_AVLPATTRNASGN,X5_AVLPATTRNASGN,X1_SHIFTCODE,X2_SHIFTCODE,XU1_SHIFTCODE,X1_SHFTSEGORGTRAN,X2_SHFTSEGORGTRAN,X3_SHFTSEGORGTRAN,X1_SHIFTSEGMENT,X2_SHIFTSEGMENT,X3_SHIFTSEGMENT,X4_SHIFTSEGMENT,X1_SHIFTASSIGNMNT,X2_SHIFTASSIGNMNT,X3_SHIFTASSIGNMNT,X4_SHIFTASSIGNMNT,X5_SHIFTASSIGNMNT,X6_SHIFTASSIGNMNT,X1_WORKEDSHIFT,X2_WORKEDSHIFT,X3_WORKEDSHIFT,X4_WORKEDSHIFT,X5_WORKEDSHIFT,X1_GROUPSHIFT,X2_GROUPSHIFT,X1_
Oracle Tuning
For Oracle platforms, you should be aware of several settings that help achieve optimal performance with the WRM Archive process. The general theme of these parameters is to use significant Oracle memory, minimize redo log switches, and take advantage of asynchronous I/O on UNIX servers. The specific settings are as follows: Set the REDO log size to at least 5 gigabytes per REDO log file: WRM COPY and PURGE are both operations that generate significant write activity to the database. This activity will generate significant REDO log activity, and log switches, if the REDO log files are too small. Log switches essentially delay write activity for up to a few seconds, so these switches should be minimized. Performance data showed that using a REDO log size of 5 gigabytes kept the number of log switches down to approximately 4 per hour, which resulted in minimal performance impact. Use asynchronous I/O: When using Oracle on a regular file system, there is some overhead in writing to the file system. It used to be recommended that raw devices be used to overcome this overhead; an alternative that yielded positive performance results is the Oracle asynchronous I/O functionality. To take advantage of asynchronous I/O, the following Oracle parameter settings are necessary: disk_asynch_io = TRUE filesystemio_options = asynch Use multiple DB Writer processes: Due to the significant amount of write activity that occurs during a COPY or PURGE, increasing the number of Oracle DB Writer processes helps decrease the time to run a COPY or PURGE job. To increase the number of DB Writer processes, the following parameter setting is recommended:
DB_WRITER_PROCESSES=4 Disable the Oracle recycle bin: As of Oracle 10g, Oracle introduced a concept similar to the Windows Recycle Bin, in which dropped objects are kept in the database (under cryptic names) even after being deleted, so that they can be recovered later, much as files are recovered through the Windows Recycle Bin. The problem with this functionality is that the size of the database is not reduced as quickly as desired, because the deleted data is retained. To disable this functionality, the following Oracle parameter setting is necessary: recyclebin = off Use a large System Global Area (SGA): Due to the significant amount of I/O generated by the WRM Archive, it is critical that the use of Oracle memory be maximized to reduce the amount of physical I/O, which in turn improves the speed of the Archive process. To achieve this, Oracle should be configured to use as much physical memory as possible. In Oracle 11g, two parameters essentially manage Oracle memory. It is recommended that these parameters be set to 75% of the available physical memory if possible. For example, on an Oracle server with 8 gigabytes of RAM, the parameter settings would be as follows: MEMORY_TARGET=6GB MEMORY_MAX_TARGET=6GB
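Taken together, the Oracle settings above can be applied from SQL*Plus roughly as follows. This is a sketch that assumes the instance uses an spfile and has 8 GB of RAM (hence the 6 GB, 75% values); most of these parameters are static, so the instance must be restarted afterward:

```sql
ALTER SYSTEM SET disk_asynch_io       = TRUE   SCOPE = SPFILE;
ALTER SYSTEM SET filesystemio_options = ASYNCH SCOPE = SPFILE;
ALTER SYSTEM SET db_writer_processes  = 4      SCOPE = SPFILE;
ALTER SYSTEM SET recyclebin           = OFF    SCOPE = SPFILE;
ALTER SYSTEM SET memory_target        = 6G     SCOPE = SPFILE;
ALTER SYSTEM SET memory_max_target    = 6G     SCOPE = SPFILE;
-- REDO log size is not an init parameter: create new 5 GB log groups
-- with ALTER DATABASE ADD LOGFILE ... SIZE 5G, then drop the old ones.
```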
SQL Server Tuning
Use Read Committed Snapshot Isolation (RCSI): By default, SQL Server places shared locks on data when reading from tables. These shared locks, if held long enough, can block other processes from writing to that table, causing blocking and other undesirable behavior. As of SQL Server 2005, the concept of RCSI was introduced. RCSI invokes a locking strategy similar to Oracle's, in which SELECT statements do not lock data during a read operation. Testing has shown that using this approach significantly reduces locking and deadlocking behavior. To use this option, apply the following commands to the database on which the option is to be enabled: ALTER DATABASE <DB Name> SET ALLOW_SNAPSHOT_ISOLATION ON GO ALTER DATABASE <DB Name> SET READ_COMMITTED_SNAPSHOT ON GO
Set the Max Degree of Parallelism to 1: This setting controls how SQL Server decomposes a long-running query and uses parallel threads to run it. The default of 0 implies that the number of threads will be equal to the number of processors on the database server. Performance testing and research indicated that using the default of 0 actually introduces excessive locking and blocking behavior, and negatively impacts the execution plans of some queries. To achieve optimal performance for WRM, it is recommended that this parameter be set to 1, which essentially disables the decomposition of individual queries. It does NOT hinder the ability to execute multiple queries concurrently. To change this setting, execute the following as the system administrator: sp_configure 'allow updates',1 go reconfigure with override go
sp_configure 'show advanced options',1 go reconfigure with override go sp_configure 'max degree of parallelism',1 go reconfigure with override go