
Best Practices for Optimal Workforce Central Performance

A guide for configuring, tuning, and enhancing Workforce Central v6.2 from a performance perspective.

Kronos Workforce Central Suite Version 6

Document Revision: C

The information in this document is subject to change without notice and should not be construed as a commitment by Kronos Incorporated. Kronos Incorporated assumes no responsibility for any errors that may appear in this manual. This document or any part thereof may not be reproduced in any form without the written permission of Kronos Incorporated. All rights reserved. Copyright 2009, 2010, 2011.

Altitude, Altitude Dream, Cambridge Clock, CardSaver, Datakeeper, Datakeeper Central, eForce, Gatekeeper, Gatekeeper Central, Imagekeeper, Jobkeeper Central, Keep.Trac, Kronos, Kronos Touch ID, the Kronos logo, My Genies, PeoplePlanner, PeoplePlanner & Design, Schedule Manager & Design, ShiftLogic, ShopTrac, ShopTrac Pro, StarComm, StarPort, StarSaver, StarTimer, TeleTime, Timekeeper, Timekeeper Central, TimeMaker, Unicru, Visionware, Workforce Accruals, Workforce Central, Workforce Decisions, Workforce Express, Workforce Genie, and Workforce TeleTime are registered trademarks of Kronos Incorporated or a related company.

Altitude MPP, Altitude MPPXpress, Altitude Pairing, Altitude PBS, Comm.Mgr, CommLink, DKC/Datalink, eDiagnostics, Experts at Improving the Performance of People and Business, FasTrack, Hireport, HR and Payroll Answerforce, HyperFind, Kronos 4500 Touch ID, Kronos 4500, Kronos 4510, Kronos Acquisition, Kronos e-Central, Kronos KnowledgePass, Kronos TechKnowledgy, KronosWorks, KVC OnDemand, Labor Plus, Momentum Essentials, Momentum Online, Momentum, MPPXpress, Overall Labor Effectiveness, Schedule Assistant, Smart Scheduler, Smart View, Start Quality, Start WIP, Starter Series, StartLabor, Timekeeper Decisions, Timekeeper Web, VisionPlus, Winstar Elite, WIP Plus, Workforce Absence Manager, Workforce Acquisition, Workforce Activities, Workforce Analytics, Workforce Attendance, Workforce Central Portal, Workforce Connect, Workforce Employee, Workforce ESP, Workforce Forecast Manager, Workforce HR, Workforce Leave, Workforce Manager, Workforce MobileTime, Workforce Operations Planner, Workforce Payroll, Workforce Record Manager, Workforce Recruiter, Workforce Scheduler, Workforce Smart Scheduler, Workforce Tax Filing, Workforce Timekeeper, Workforce View, and Workforce Worksheet are trademarks of Kronos Incorporated or a related company.

The source code for Equinox is available for free download at www.eclipse.org. All other trademarks or registered trademarks used herein are the property of their respective owners and are used for identification purposes only.

When using and applying the information generated by Kronos products, customers should ensure that they comply with the applicable requirements of federal and state law, such as the Fair Labor Standards Act. Nothing in this Guide shall be construed as an assurance or guaranty that Kronos products comply with any such laws.

Published by Kronos Incorporated
297 Billerica Road, Chelmsford, Massachusetts 01824-4119 USA
Phone: 978-250-9800, Fax: 978-367-5900
Kronos Incorporated Global Support: 1-800-394-HELP (1-800-394-4357)
For links to information about international subsidiaries of Kronos Incorporated, go to http://www.kronos.com

Document Revision History

Document Revision    Product Version           Release Date
A                    Workforce Central 6.2     November 2010
B                    Workforce Central 6.2     March 2011
C                    Workforce Central 6.2     April 2011

Contents

About This Guide
  Organization of this guide ... 10

Chapter 1: Workforce Central Guide Best Practices
  Application server best practices ... 14
    JRE heap size ... 14
  Database server best practices ... 17
    Database platform ... 17
    Database layout ... 18
    Oracle recommendations ... 19
    SQL Server recommendations ... 24
    Best practice for configuring SQL Server Max Memory setting ... 27
  Best practices for TEMPDB layout ... 28
  Operational practices ... 29
    Optimize data transmission throughput ... 29
    Keep database statistics current ... 29
  Rebuilding an index (SQL Server) ... 34
    Reorganize Indexes ... 34
    Rebuild Indexes when Necessary ... 34
    Monitor the database for fragmentation ... 35
    Running the Workforce Timekeeper Reconcile utility ... 36
  HyperFind performance ... 40
  Reports best practices ... 42
    Using dedicated report servers ... 42
    Limiting the number of employees per report ... 44
    Following procedural best practices ... 45
  SSRS 2005 best practices ... 46
  SSRS 2008 best practices ... 47
  Next Generation User Interface best practices ... 47
  Process Manager best practices ... 48
    Process template design recommendations ... 48
    Configuration recommendations ... 52
    Process governors ... 53

Chapter 2: Workforce Timekeeper Best Practices
  Configuring the Workforce Timekeeper application for optimum performance ... 60
  Genie performance ... 62
    Create Genies with only the necessary data columns ... 62
    Assign Genies as needed to users ... 64
    Limit the number of employees displayed in Scheduling Genies to below 1,000 ... 64
    Do not calculate totals on Genies ... 65
  Totalizer best practices ... 66
    General configuration considerations ... 67
    Totalizer extensibility interface ... 68
    Multiple instances ... 68
    Additional recommendations ... 69
  HTML client performance ... 71
    HTML client versus applets ... 71
    CPU consumption ... 72
    Quick Time Stamp throughput (kiosk capability) ... 72

Chapter 3: Workforce Device Manager Best Practices
  Device communication settings recommendations ... 74
  Dedicated Workforce Device Manager servers (version 6.2.1 and later) ... 76
  Recommended threshold for terminals per server ... 77
  Load balancing punch uploads ... 77
  Recommended Web/application server resource settings ... 77
    Modify maximum database connections ... 78
    Modify web server processing threads ... 78

Chapter 4: Workforce Scheduler Best Practices
  Best Practices ... 82

Chapter 5: Workforce Forecast Manager Best Practices
  Best practices ... 86
  Configuring number of forecasting and Auto-Scheduler engines ... 88

Chapter 6: Workforce Operations Planner Best Practices
  Best practices ... 92

Chapter 7: Workforce Attendance Best Practices
  Best practices ... 94

Chapter 8: Workforce Analytics Best Practices
  SQL Server best practices ... 98
    Set Maximum Degree of Parallelism to 1 ... 98
    Configure Workforce Analytics database appropriately ... 100
    Configure Workforce Analytics data mart server appropriately ... 100
  Oracle best practices ... 102
    Configure Workforce Analytics database ... 102
    Configure Workforce Analytics data mart server ... 103
    Oracle Provider for OLE DB ... 103
  Open Query best practices ... 104
  Workforce Analytics product performance best practices ... 105
  Analysis Services SSAS best practices ... 106

Chapter 9: Workforce HR/Payroll Best Practices
  Database configuration ... 110
  Preventing connections from timing out ... 111

Chapter 10: Workforce Record Manager Best Practices
  Configure Ample I/O Bandwidth ... 114
  Use dedicated 64-bit application servers ... 115
  Run the WRM COPY and/or PURGE during non-peak processing hours ... 115
  Run COPY and/or PURGE Operations in 1-month Increments ... 116
  Define an archive strategy earlier rather than later ... 116
  Process more tasks concurrently (64-bit ONLY) ... 117
  Use Row Count validation for COPY operations ... 117
  Drop indexes on the target database for the COPY ... 118
  Increase Threshold to process non-historic tables in a single operation (64-bit) ... 120

Chapter 11: Workforce Integration Manager and Workforce Central Import Best Practices ... 123

Appendix A: Performance Tuning Oracle Indexes
  Overview ... 130
  Block splitting ... 131
  Sparse population ... 132
  Empty blocks ... 132
  Height/Blevel ... 132
  Indexes in the Kronos environment ... 133
    Monotonically increasing indexes ... 134
    Indexes with no pattern of inserts ... 134
    Primary key indexes ... 134
    Date datatype indexes ... 135
  Workforce Central index maintenance recommendations ... 137
    Index coalesce ... 137
    Rebuilding an index ... 137

Appendix B: Recommendations for using Workforce Central with VMware
  General recommendation ... 142
  Hardware and software requirements ... 142
    Hardware ... 142
    Software ... 143
  VM configuration ... 143
    Memory allocation ... 143
    Virtual CPU allocation ... 144
    System resource allocation ... 144
    Monitoring virtual machine usage ... 144

Appendix C: Recommendations for using Workforce Central with Hyper-V
  Best practices ... 145

Appendix D: Additional information Workforce Record Manager
  Database definitions ... 147
  Index List for the Property WrmSetting.Option.IndexList ... 150
  Oracle Tuning ... 151
  SQL Server Tuning ... 152

About This Guide

This document provides information that you can use to configure, tune, and enhance the behavior of the Kronos Workforce Central product suite from a performance perspective. The Workforce Central system can be tuned by altering the system and setup configuration. An equally valid and effective approach is to alter the way that Workforce Central is used. For example, if reports consume resources during peak operation, run the reports at off-peak times. Both methods are discussed in this book.

Note: This document is specific to Workforce Central v6.2. Previous best practices do not carry forward from earlier versions of Workforce Central. If there is a v5.2 or v6.0 best practice that is not explicitly called out in this v6.2 document, then it does not apply.

Because there is no practical one-size-fits-all method of managing Workforce Central performance, the best practices described in this document are recommendations based on the Kronos Performance Group's performance testing. These suggestions for system configuration, patterns of use, and activities to avoid can generally be followed regardless of the environment in which the application is running.

Important: While we expect and hope that your implementation of the best practices recommended in this guide will help you achieve optimal performance of your Workforce Central system, the results from our performance testing are highly dependent upon workload, specific application requirements, system design, and implementation. System performance will vary as a result of these and other factors. This guide is not intended, and should not be relied upon, as a guarantee of system performance. The results described in this guide should not be used as a substitute for a specific customer application benchmark when making capacity planning or product evaluation decisions.

Organization of this guide


This guide is organized by Workforce Central suite v6.2 application:

- Chapter 1, Workforce Central Guide Best Practices, on page 13 recommends ways to optimize performance of core Workforce Central components: the database, reports, the Background Processor, the Callable Totalizer, and Process Manager.
- Chapter 2, Workforce Timekeeper Best Practices, on page 59 describes how to configure the application server, Genies, and the HTML client user interface for optimum performance.
- Chapter 3, Workforce Device Manager Best Practices, on page 73 describes some best practices to enable your data collection devices and Workforce Device Manager to function optimally.
- Chapter 4, Workforce Scheduler Best Practices, on page 81 describes several recommendations for configuration settings, RAM, and memory utilization management to optimize Workforce Scheduler performance.
- Chapter 5, Workforce Forecast Manager Best Practices, on page 85 describes configuration settings and operational procedures to optimize Workforce Forecast Manager performance.
- Chapter 6, Workforce Operations Planner Best Practices, on page 91 identifies system settings and operational practices to keep performance optimal.
- Chapter 7, Workforce Attendance Best Practices, on page 93 explains how to set Workforce Attendance processing for batch mode to optimize system performance.
- Chapter 8, Workforce Analytics Best Practices, on page 97 presents SQL Server, Oracle, Open Query, and Analysis Services SSAS-related best practices and general recommendations to optimize Analytics product performance.


- Chapter 9, Workforce HR/Payroll Best Practices, on page 109 recommends special attention to database configuration and describes the procedure to prevent connections from timing out when you run long reports.
- Chapter 10, Workforce Record Manager Best Practices, on page 113 presents system and application best practices to be able to copy and purge data from the customer database in acceptable maintenance windows.
- Chapter 11, Workforce Integration Manager and Workforce Central Import Best Practices, on page 123 describes several best practices to help achieve optimal performance for Workforce Integration Manager.
- Appendix A, Performance Tuning Oracle Indexes, on page 129 provides background information and recommendations about tuning Oracle indexes for optimal performance.
- Appendix B, Recommendations for using Workforce Central with VMware, on page 141 outlines recommendations for configuring and deploying Workforce Central in a VMware environment.
- Appendix C, Recommendations for using Workforce Central with Hyper-V, on page 145 provides hardware and software recommendations for using Hyper-V with Workforce Central.
- Appendix D, Additional information Workforce Record Manager, on page 147 includes information needed for completing practices to optimize Workforce Record Manager performance.


Chapter 1

Workforce Central Guide Best Practices

The Workforce Central suite is a comprehensive solution for managing every phase of the employee relationship: staffing, developing, deploying, tracking, and rewarding. It consists of a number of separate, yet tightly integrated applications that are both extensible and unified to provide a centralized data repository and flexible self-service capabilities. This chapter describes recommendations to optimize performance of Workforce Central components that are available at the platform level and that are used by more than one Workforce Central application.

This chapter consists of the following sections:

- Application server best practices on page 14
- Database server best practices on page 17
- Best practices for TEMPDB layout on page 28
- Operational practices on page 29
- HyperFind performance on page 40
- Reports best practices on page 42
- SSRS 2005 best practices on page 46
- SSRS 2008 best practices on page 47
- Next Generation User Interface best practices on page 47
- Process Manager best practices on page 48


Application server best practices


JRE heap size
On a 32-bit application server, 4 GB of physical memory is recommended for each WFC instance to be deployed. A maximum heap size of 1024 MB for each instance is required.

On a 64-bit application server, 16 GB of physical memory is recommended for each WFC instance to be deployed. Configure each WFC instance JVM to use a maximum heap size of 4GB, with 8GB preferred. The total maximum heap size configured for all WFC instances should not exceed more than half of the physical memory of the system. For example, if the physical memory on the application server is 16GB and there is one WFC instance running on this server, set the maximum heap size to 8GB for that instance. If there are two WFC instances, then set each instance to a maximum heap size of 4GB, so that the total does not exceed 8GB.

Changing heap size on a JBoss application server running Windows

1. Stop Workforce Central.
2. With a text editor, open \Kronos\wfc\bin\jbossWinService.bat. The default settings for minimum and maximum heap values are 512MB (-Xms parameter) and 4096MB (-Xmx parameter). Change the -Xmx setting to the desired value and save the file.
3. Open C:\<Jboss>\bin\run.bat with a text editor.
4. Locate set JAVA_OPTS= and change the default value to the same value you specified in jbossWinService.bat.
5. Save the file.
6. Open a command prompt, navigate to C:\Kronos\wfc\bin, and execute the following command:

   jbossWinService.bat uninstall

   The following message appears: The Jboss_wfc service was successfully uninstalled.


7. From c:\Kronos\wfc\bin, execute the following command:

   jbossWinService.bat install

   The following message appears: The Jboss_wfc service was successfully installed.
8. Restart Workforce Central.

Changing JBoss heap size on an application server running Solaris or AIX

1. Navigate to <JBoss>/bin.
2. With a text editor, open run.conf.
3. Locate JAVA_OPTS= and ensure that it reads as shown in this step.

   Note: For AIX environments, you do not need to enter the following lines:
   -Dsun.rmi.dgc.client.gcInterval=3600000
   -Dsun.rmi.dgc.server.gcInterval=3600000

   JAVA_OPTS="-Xms128m -Xmx1024m \
   -Dsun.rmi.dgc.client.gcInterval=3600000 \
   -Dsun.rmi.dgc.server.gcInterval=3600000 \
   -XX:PermSize=128m \
   -XX:MaxPermSize=256m \
   -Dsun.lang.ClassLoader.allowArraySyntax=true \
   -Doracle.jdbc.defaultNChar="true" \
   -Doracle.jdbc.convertNcharLiterals="true" \
   -Dfile.encoding=UTF8 \
   -Djava.awt.headless=true \
   -Dsun.lang.ClassLoader.allowArraySyntax=true"

4. Add the Kronos-endorsed library in the JBoss-endorsed library path using the following text:

   set JBOSS_ENDORSED_DIRS=%JBOSS_HOME%/lib/endorsed:/usr/local/kronos/endorsed_libs

5. Save your edits and close run.conf.
6. Restart Workforce Central.


Database server best practices


In the Workforce Central suite, the database is the primary shared resource. Database performance affects all aspects of the product. How well the database performs is largely the result of the following factors:

- Database platform on page 17
- Database layout on page 18
- Oracle recommendations on page 19
- SQL Server recommendations on page 24
- Best practice for configuring SQL Server Max Memory setting on page 27

Database platform
Workforce Central version 6.2 can use the following SQL Server or Oracle databases:

- SQL Server 2005 or SQL Server 2008
- Oracle 10g R2, or Oracle 11g R1 and R2

Each database platform is designed with different strategies for SQL optimization, table indexing, and other performance considerations. The Kronos strategy is to tune the Workforce Central system to perform consistently on all supported database platforms.


Database layout
The database layout includes both the physical and logical layout of the database:

- Physical layout: The hard drives and the distribution of the database components on the drives.
- Logical layout: The distribution of the database elements such as tablespaces or file groups, indexes, and other database-specific objects across the physical layout.

With Workforce Central version 6.2, you can specify the file groups that are needed for the installation or you can use RAID (Redundant Array of Independent Disks):

- RAID storage: Your specific RAID implementation will determine how to allocate the required tablespaces.
- Non-RAID disk allocation: If you are not using RAID, Kronos recommends that you use at least nine disks when installing a Workforce Timekeeper database. Assign file groups tkcs1 through tkcs9 to each disk. Refer to Installing Workforce Timekeeper for more information.

RAID (Redundant Array of Independent Disks) technology combines two or more physical hard disks into a single logical unit. Although many RAID implementations are available, Kronos recommends using hardware-level RAID storage or SAN disk storage with production-quality drives. The following lists sample recommended RAID configurations for installations of varying size. See your Kronos Representative for recommendations specific to your environment.

Small installations (fewer than 5,000 employees):

- Hardware RAID controller with at least three drives for the database storage. Disk I/O performance improves as the number of disks increases.
- Logical drive created as a RAID partition with available drives to be used for all tablespaces (Oracle database) or file groups (SQL Server database).

Medium installations (5,000 to 20,000 employees):

- One or more hardware RAID controllers with five or more disk drives for database storage. Disk I/O performance improves as the number of disks increases.
- Logical drive created as a RAID partition with available drives to be used for all tablespaces (Oracle database) or file groups (SQL Server database).
- SAN storage for database disk storage. Disk I/O performance improves as the number of disks increases.
- Log files placed on logical units (LUNs) that are separate from the data LUNs.
- Striped RAID configuration for database files (for example, RAID 5 or 10) and a mirrored configuration for log files (for example, RAID 1 or 10).

Large installations (more than 20,000 employees):

- One or more hardware RAID controllers with seven or more disk drives for database storage. Disk I/O performance improves as the number of disks increases.
- Logical drive created as a RAID partition with available drives to be used for all tablespaces (Oracle database) or file groups (SQL Server database).
- SAN storage for database disk storage. Disk I/O performance improves as the number of disks increases.
- Log files placed on logical units (LUNs) that are separate from the data LUNs.

Oracle recommendations
Workforce Central v6.2 supports Oracle 10G and Oracle 11G. With each version, you set memory management instance parameters in a slightly different way. See Oracle Initialization Parameter Recommendations on page 21 for explanations of parameter settings that are specific to Oracle 10G and Oracle 11G.

The following recommendations apply to both Oracle 10G and Oracle 11G:

- Set the initialization parameter CURSOR_SHARING to EXACT. This setting has the most predictable performance and functionality in both Kronos benchmarks and customer production environments. Kronos strongly recommends that you not use the values of SIMILAR or FORCE for CURSOR_SHARING. These values have displayed isolated instances of either significant performance degradation or internal Oracle errors.
- Set the value for the initialization parameter OPEN_CURSORS to 500 or higher. Kronos performance tests have shown that lower values for this parameter can lead to Oracle errors.

For information regarding optimal Oracle index tuning, see Performance Tuning Oracle Indexes on page 129.
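For illustration only (this example is not part of the Kronos-supplied scripts, and it assumes the instance starts from an spfile), the dynamic parameters above can be applied with ALTER SYSTEM:

    -- Illustrative sketch only: apply the recommended values on an spfile-based instance.
    ALTER SYSTEM SET cursor_sharing = 'EXACT' SCOPE = BOTH;
    ALTER SYSTEM SET open_cursors = 500 SCOPE = BOTH;
    -- PROCESSES (discussed below) is a static parameter; it takes effect only after a restart:
    ALTER SYSTEM SET processes = 500 SCOPE = SPFILE;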


Oracle Initialization Parameter Recommendations

A key to assuring optimal application performance with an Oracle platform is to minimize the amount of physical I/O required to process a query. Kronos has tested a number of parameter settings for optimal performance of an Oracle database with Workforce Timekeeper. The following list gives the Oracle initialization parameter recommendations. Set these initialization parameters to assure optimal performance of Workforce Timekeeper. If there are other documented or undocumented Oracle initialization parameter settings that are not listed below, do not change their default settings.

Note: Although Workforce Timekeeper functions without setting these values, performance may not be optimal. These recommendations are based on testing and resolution of customer escalation issues, and on internal customer and synthetic database testing.

Oracle 11G only

MEMORY_TARGET, MEMORY_MAX_TARGET
  Recommended setting: Non-zero value based on RAM available on the database server.
  Purpose/comments: Allows Oracle to manage and allocate memory to its processes and shared areas. Set this value to the maximum memory that can be allocated to Oracle processes.

DB_CACHE_SIZE
  Recommended setting: 0 (default setting).
  Purpose/comments: Allows Oracle to determine the pool size.

SHARED_POOL_SIZE
  Recommended setting: 0 (default setting).
  Purpose/comments: Allows Oracle to determine the pool size.

Oracle 11G and 10G

CURSOR_SHARING
  Recommended setting: EXACT
  Purpose/comments: Provides the most predictable performance and functionality in both Kronos benchmarks and customer production systems. Important: Kronos strongly recommends that you do not use SIMILAR or FORCE because these values have caused isolated instances of significant performance degradation or internal Oracle errors.

DB_BLOCK_SIZE
  Recommended setting: 8192
  Purpose/comments: Helps to minimize I/O. Performance tests have demonstrated that a majority of the application functions perform better with this block size setting. DB_BLOCK_SIZE is set at the time the database is built. The setting cannot be changed except by rebuilding the database.

DB_FILE_MULTIBLOCK_READ_COUNT
  Recommended setting: Default setting.
  Purpose/comments: Represents the number of database blocks to be read for each I/O. In most cases, the default setting is appropriate. If you set this value too high, the Oracle optimizer performs table scans instead of using indexes.

DISPATCHERS (or MTS_DISPATCHERS)
  Recommended setting: Blank
  Purpose/comments: Disables MTS (now called Shared Server), with which there are known performance issues when using Workforce Timekeeper. Note: All performance testing and optimization work on Workforce Timekeeper was done with this feature disabled.

OPEN_CURSORS
  Recommended setting: 500 (minimum)
  Purpose/comments: Specifies the maximum number of cursors available.

DISK_ASYNCH_IO
  Recommended setting: TRUE
  Purpose/comments: Recommended for environments supporting Asynchronous I/O. Some benchmarks have shown significant benefit when using Asynchronous I/O.

FILESYSTEMIO_OPTIONS
  Recommended setting: ASYNCH
  Purpose/comments: Supports Asynchronous I/O. (Default value is NONE.) Note: DISK_ASYNCH_IO must be set to TRUE for this parameter to have an effect.

OPTIMIZER_MODE
  Recommended setting: ALL_ROWS

PROCESSES
  Recommended setting: 500, or see the comments to define the value more precisely.
  Purpose/comments: Defines the total number of connections that can be made to the Oracle database. For optimum performance, calculate the value as the sum of the following: the number of Oracle background processes (configuration-dependent), the sum of the site.database.max property from each instance of Workforce Central, and any ad hoc connections that need to be made to the database.

RESOURCE_LIMIT
  Recommended setting: FALSE
  Purpose/comments: Turns off resource checking.

Oracle 10G only

DB_CACHE_SIZE
  Recommended setting: No less than 262144000.
  Purpose/comments: Represents the size of the buffer cache in database blocks. If this parameter is set too low, the amount of disk I/O will increase, degrading overall performance. The key is to do as much processing of data in memory as practical. This parameter can be set higher based on the use case. However, if the setting is too high, performance can degrade.

PGA_AGGREGATE_TARGET
  Recommended setting: 0
  Purpose/comments: The recommended setting prevents the PGA settings from being ignored (SORT_AREA_SIZE, HASH_AREA_SIZE, SORT_AREA_RETAINED_SIZE). Note: WORK_AREA_SIZE_POLICY must be set to MANUAL to set this parameter to 0.

SHARED_POOL_SIZE
  Recommended setting: No less than 300MB.
  Purpose/comments: This parameter can be set higher based on the use case. However, if the setting is set too high, performance can degrade because of the need to scan for SQL in a larger-than-necessary shared pool. If necessary, increase the value in increments of approximately 15%. The entire SGA should not exceed 50% of the total physical memory of the hardware platform.

SORT_AREA_RETAINED_SIZE
  Recommended setting: 1310720
  Purpose/comments: Specifies the maximum amount of session memory that is available for any individual sort. By default, the value is equal to the size of SORT_AREA_SIZE. After a sort operation is complete, the user's memory size is reduced to the value specified by SORT_AREA_RETAINED_SIZE.

SORT_AREA_SIZE
  Recommended setting: 1310720
  Purpose/comments: Controls the amount of memory that is available to an Oracle session to perform sorting. The recommended value reduces the probability of having to write to temporary segments, which reduces the amount of physical I/O. Important: Ensure that this sorting activity occurs in memory.

SQL Server recommendations


Set RCSI ON

Ensure that Read Committed Snapshot Isolation (RCSI) is turned on to reduce blocking. Enabling SI/RCSI is a simple DDL operation:

    alter database <database name> set read_committed_snapshot ON
    go

Set maximum degree of parallelism to 1

Workforce Central requires that the number of processors used in parallel plan execution be limited to 1. By default, SQL Server uses parallelism to break down large SQL transactions into multiple smaller transactions, based on the number of CPUs available. For example, if four CPUs are available, a single large transaction could be broken down into four smaller transactions and executed in parallel to improve the transaction speed. However, parallelism tends to increase blocking on totals-related Workforce Central tables. Therefore, you must disable parallelism on the Workforce Central source database and the Workforce Analytics data mart database. This causes SQL Server to use only a single CPU at a time for any one user's transaction. SQL Server can still use any other available processors for handling transactions from other users.

You can set the Max Degree of Parallelism setting in either of two ways:

- Using the SQL Server Management Studio user interface
- Executing a query in SQL Server Management Studio

To set this parameter using the SQL Server Management Studio user interface:

1. Select Start > Programs > Microsoft SQL Server > SQL Server Management Studio.
2. In the Connect to Server dialog box, enter the appropriate information for your environment.
3. Click Connect.
4. From Management Studio:
   a. Right-click the server name and select Properties.
   b. Select Advanced from the left side of the workspace.


5. Change the Max Degree of Parallelism to 1.

6. Click OK.

To set this parameter by executing a query in SQL Server Management Studio:

1. From the left navigation bar of SQL Server Management Studio, select the name of the Workforce Central database.
2. From the header, click New Query.
3. In the Management Studio query window, enter the following text:

   exec sp_configure 'show advanced options', 1
   go
   reconfigure with override
   go
   exec sp_configure 'max degree of parallelism', 1
   go
   reconfigure with override
   go

4. Click Execute.
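After either method, a quick verification can be run from a query window. The following check is illustrative only (it is not part of the Kronos procedure, and the database name wfc is a placeholder for your Workforce Central database):

    -- Illustrative verification only: placeholder database name 'wfc'.
    -- RCSI should report 1, and max degree of parallelism should report a run_value of 1.
    select name, is_read_committed_snapshot_on from sys.databases where name = 'wfc';
    exec sp_configure 'max degree of parallelism';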


Best practice for configuring SQL Server Max Memory setting


Kronos recommends a 64-bit operating system for the database server.

Note: The tuning parameters in this section are for 64-bit operating systems. For 32-bit operating systems on the database server, the default settings are appropriate.

Based on recommendations from Microsoft, the following table presents the best practice for configuring SQL Server's Max Memory parameter.
Employees        DB Server Physical Memory Required    SQL Server Max Server Memory
Up to 10,000     8GB                                   4GB
Above 10,000     16GB or more                          = (Physical Memory - 5GB)*

Example 1: For a 100,000-employee company, a 32GB system is recommended by the Expert Sizer. SQL Server's Max Server Memory is set to 27GB.

Important: If other applications (for example, SSRS) are running on the database server, you must consider the memory needs for those applications before using the setting in this example. See Example 2.

Example 2: For a 100,000-employee company, a 32GB system is recommended by the Expert Sizer. SSRS is installed on the database server and is estimated to consume 5GB of memory. To account for this memory usage, set SQL Server's Max Server Memory parameter to 22GB (32GB physical - 5GB SSRS - 5GB SQL Server overhead = 22GB for the SQL Server buffer pool).
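The Max Server Memory value can also be set from a query window instead of the server Properties dialog. The following sketch is illustrative only and uses the 27GB value from Example 1 (27 x 1024 = 27,648 MB):

    -- Illustrative only: set Max Server Memory to 27GB (27,648 MB) per Example 1.
    exec sp_configure 'show advanced options', 1
    go
    reconfigure with override
    go
    exec sp_configure 'max server memory (MB)', 27648
    go
    reconfigure with override
    go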


Best practices for TEMPDB layout


Certain activities in the Workforce Central suite will create temporary tables in TEMPDB. You must give TEMPDB the same I/O throughput considerations as the Workforce Central database. Never locate TEMPDB in its default location on the Windows system drive (for example, C:\). TEMPDB must be striped across multiple drives by using a RAID configuration or by using multiple data files across multiple disks. Throughput into TEMPDB is particularly important for customers running the Payroll or Analytics products.
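As a hedged sketch of the multiple-data-file approach (the drive letters, paths, and sizes below are assumptions for illustration, not Kronos guidance):

    -- Illustrative only: spread TEMPDB across separate physical disks.
    -- File moves take effect the next time SQL Server restarts.
    alter database tempdb modify file (name = tempdev, filename = 'E:\tempdb\tempdb.mdf');
    alter database tempdb add file (name = tempdev2, filename = 'F:\tempdb\tempdb2.ndf', size = 2048MB);
    alter database tempdb modify file (name = templog, filename = 'G:\tempdb\templog.ldf');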


Operational practices
The Workforce Central database requires regular maintenance for optimal system performance and data security. The following sections describe the minimum tasks to perform to maintain optimal database performance:

- Optimize data transmission throughput on page 29
- Keep database statistics current on page 29
- Monitor the database for fragmentation on page 35
- Running the Workforce Timekeeper Reconcile utility on page 36

Optimize data transmission throughput


Ensure that the local area network has very fast data transmission throughput to the database.

Keep database statistics current


Out-of-date statistics can cause major performance issues. Statistics must be updated regularly, especially after large operations (for example, after running an import or using Workforce Record Manager to delete a large amount of data). The more volatile an object is, the more often its statistics need to be updated. How you update statistics depends on the type of database.

Updating Oracle database statistics

Kronos provides a script called statsddl.sql that is used periodically to update statistics in the Workforce Central database. The script is located, by default, in the following directory on the Windows application server:

    \Kronos\wfc\applications\dbmanager\scripts\database\ora\dbautilities

Note the following important information:

- Disable Oracle automatic statistics generation so that it does not overwrite Kronos statistics collection.


- Never compute statistics during times of peak system usage because the process can cause performance degradation in a Workforce Central system.

To use this script:

1. Connect to the Workforce Central database using a SQL query tool and the database owner's user ID and password (for example, tkcsowner).
2. To execute the script, enter:

   start statsddl.sql
   or
   @statsddl.sql

   The script generates an output file called stats.sql in the current directory.
3. Use the SQL query tool to run the stats.sql script that was generated in step 2. This script updates all statistics for objects in the Workforce Central database and returns the following information:
   - Number of rows
   - Number of rows sampled
   - Number of rows changed
   - Last analyzed date

Run the stats.sql script once each week or after any major data additions or deletions. Run it when database activity is minimal (for example, late at night). Database administrators can schedule the script to run as a cron job (UNIX) or other scheduled event (Windows).
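Conceptually, the generated stats.sql issues calls to Oracle's standard statistics package. As an illustration only (the exact calls that statsddl.sql generates may differ, and the schema name TKCSOWNER is the common default rather than a given), a schema-wide gather looks like this:

    -- Illustrative only: gather statistics for the Workforce Central schema.
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'TKCSOWNER',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);  -- also gather statistics for indexes
    END;
    /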


Updating SQL Server database statistics

When a query is sent to a SQL Server database, the query optimizer evaluates the query and the tables being queried to determine whether it is most efficient to retrieve data using a table scan or an index. The query optimizer creates an execution plan and bases much of its optimization on the statistics for each table. If table statistics are out of date, the query optimizer may create an execution plan that will cause performance issues. For example, for smaller tables, the query optimizer usually does a full table scan to retrieve the data. If a query is run against a table that has 1,000,000 rows, but the table statistics indicate that there are only 500 rows, the optimizer could do a full table scan of the million rows. If the statistics were up to date, the query optimizer would have determined that the best way to access the data was by using an available index.

You can keep table statistics up to date manually or automatically with SQL Server. Each method of updating statistics has advantages and disadvantages:
Automatic
  Advantage: You can rely on the statistics being current.
  Disadvantage: With SQL Server's automated schedule, you cannot know when statistics will be updated. There is always a chance that statistics will be updated at an inappropriate time, such as in the middle of your payroll processing day. This can cause major database resource contention.

Manual
  Advantage: You can control when the statistics are run.
  Disadvantage: Since the update process is not scheduled, you can forget to perform the update.

Select the most appropriate method for your environment. It is extremely important that you update statistics regularly.


Determining whether the database is set to automatically update statistics

1. Right-click the database name in SQL Server Management Studio and select Properties.
2. From the upper-left corner of the Database Properties dialog box, click Options.
3. Review the Automatic portion of the screen. If the Auto Update Statistics option is set to True, then SQL Server automatically updates statistics on tables. (When SQL Server performs updates is based on its own algorithms.)

Updating statistics manually

1. Connect to the Workforce Central database, using the database owner's user ID and password (such as tkcsowner).
2. Using a SQL query tool, open the Kronos-supplied statsddl.sql script. This script is located in the following folder on the Workforce Timekeeper application server:

   \Kronos\wfc\applications\dbmanager\scripts\database\mss\dbautilities

3. Execute the script.
4. Click the results window and save the results as a file named stats.sql.
5. Using a SQL query tool, open the stats.sql file and execute the script. This process updates statistics on all Workforce Central product tables in the database, and returns the following information:
   - Number of rows
   - Number of rows sampled
   - Number of rows changed
   - Last analyzed date
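The auto-update setting can also be checked from a query window rather than the Properties dialog. This check is illustrative only and is not part of the Kronos procedure; substitute your Workforce Central database name for the placeholder wfc:

    -- Illustrative only: returns 1 if Auto Update Statistics is enabled, 0 if not.
    select DATABASEPROPERTYEX('wfc', 'IsAutoUpdateStatistics') as auto_update_stats;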


Guidelines for determining when to update statistics

Note: For all future statistics updates, complete only step 5, using the stats.sql script.

- Run stats.sql every week as part of normal maintenance.
- Run stats.sql after large operations such as extensive additions or deletions of data (for example, after running an import or using Workforce Record Manager to delete a large amount of data).
- Run stats.sql at times when very few users are using Workforce Central. You can schedule the update using SQL Server's SQL Agent.


Rebuilding an index (SQL Server)


SQL Server maintains indexes whenever insert, update, or delete operations are made to database tables. Eventually these modifications can fragment the indexes to the point that queries that use them slow down. Heavily fragmented indexes can degrade query performance and cause your application to respond slowly to end users. To determine the amount of fragmentation of the indexes, execute the sys.dm_db_index_physical_stats function. The column avg_fragmentation_in_percent gives the amount of fragmentation. If the fragmentation is greater than 30%, the indexes should be rebuilt. If the fragmentation is between 5% and 30%, they should be reorganized.
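For example, a query along the following lines reports per-index fragmentation for the current database, using the thresholds above. It is an illustrative sketch, not a Kronos-supplied script:

    -- Illustrative only: list indexes with more than 5% average fragmentation.
    select object_name(ips.object_id) as table_name,
           i.name as index_name,
           ips.avg_fragmentation_in_percent
    from sys.dm_db_index_physical_stats(db_id(), null, null, null, 'LIMITED') as ips
    join sys.indexes as i
      on i.object_id = ips.object_id and i.index_id = ips.index_id
    where ips.avg_fragmentation_in_percent > 5
    order by ips.avg_fragmentation_in_percent desc;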

Reorganize Indexes
Reorganize an index when the index is not heavily fragmented. Use the ALTER INDEX statement with the REORGANIZE clause (the REORGANIZE clause replaces DBCC INDEXDEFRAG).

Rebuild Indexes when Necessary


Rebuilding an index drops the index and creates a new one. In doing this, fragmentation is removed, disk space is reclaimed by compacting the pages using the specified or existing fill factor setting, and the index rows are reordered in contiguous pages (allocating new pages as needed). This can improve disk performance by reducing the number of page reads required to obtain the requested data. The following methods can be used to rebuild clustered and non-clustered indexes:

- ALTER INDEX with the REBUILD clause. This statement replaces the DBCC DBREINDEX statement.
- CREATE INDEX with the DROP_EXISTING clause.
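As a brief sketch of both maintenance operations (the table and index names below are placeholders, not Workforce Central objects):

    -- Illustrative only: placeholder object names.
    -- Fragmentation between 5% and 30%: reorganize.
    alter index ix_example on dbo.example_table reorganize;
    -- Fragmentation above 30%: rebuild.
    alter index ix_example on dbo.example_table rebuild;
    -- Or rebuild all indexes on the table in one statement:
    alter index all on dbo.example_table rebuild;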


Monitor the database for fragmentation


Fragmented indexes can be a source of performance issues. You can run the Kronos-supplied database reports to stay ahead of fragmentation in the database. Kronos recommends that you run the database reports and address any fragmentation issues every one to two weeks. Perform this task more often if report results show significant fragmentation; perform this task less often if database results show less frequent fragmentation. For example, if the database contains a relatively small number of employees, fragmentation will be less severe.

Workforce Timekeeper includes four database reports that can be run through the user interface:

- Space Allocation
- Object Reconciliation Information
- Schema Reconciliation Information
- Tuning Parameters

Accessing database reports

1. From the upper-right corner of the Workforce Timekeeper workspace, click Setup. Then, select System Settings from the System Configuration box.
2. Click the Database tab.
3. Select a report to run.

Running the dbreport.sql script

An alternative to running reports from the Database tab is to run the dbreport.sql script. Depending on the type of database, this script is located in one of the following directories:

SQL Server: \Kronos\wfc\applications\dbmanager\scripts\database\mss\dbautilities
Oracle: \Kronos\wfc\applications\dbmanager\scripts\database\ora\dbautilities

The dbreport.sql script shows tables and indexes that have 15 or more extents. Any tables or indexes listed would benefit from a rebuild. Running this script creates a report that displays the state of the Kronos database at any specified time. This report details statistics about the database that include:

- Object name
- Index fill factor percentages
- Table fragmentation
- Number of allocated pages/blocks and number of pages/blocks used
- Number of data pages
- Space used

Running the Workforce Timekeeper Reconcile utility


Overview

The Reconcile utility examines the Workforce Central database schema to confirm that all tables, indexes, constraints, triggers, and other objects that should be in a Workforce Central database are present. Run the Reconcile utility from the Web interface of Database Manager. Correct any errors, especially missing database objects, to achieve predictable functionality and performance.

Procedure

1. Open a browser and enter the following URL:

   http://WebServer/instance_name/DBMgrLogonServlet

   where:
   - WebServer is the name of the machine where the Web server is installed.
   - instance_name is the name of the Workforce Central instance (the default name is wfc).

   Note: This URL is case-sensitive. For example, if your Web server is on a machine named apex, you enter: http://apex/wfc/DBMgrLogonServlet
2. Allow the system to install ActiveX controls, if requested to do so.
3. In the Workforce Central 6.1 Database Manager logon window, enter the database owner ID (for example, TKCSOWNER) and database password.
4. Click Logon.

Note: In a combined Workforce HR/Payroll - Workforce Timekeeper environment, sa is often the database owner. See your database administrator for information specific to your environment.


5. When the Database Manager screen appears, select the applicable version of Workforce Central from the drop-down box. Then, click Reconcile.

A confirmation message appears.

6. Click OK.
7. While the reconcile is in progress, verify that all the processes are running. To do this:
   a. In the Segment Maintenance box, click View Status.
   b. In the Status page, click Refresh.
8. When you are finished with the Database Manager, close the browser.

Output of the Reconcile utility

After the reconcile is run, it writes a report to the Database Manager log file. The log file is located on the application server machine in c:\Kronos\wfc\logs. The name of this file is DBManager_timestamp.log, where timestamp is the date and time that the reconcile was run.

Note: The same report information is also collected in wfc.log.


The log file identifies any missing triggers, permissions, or other objects. Immediately repair any issues revealed by the Reconcile utility to help ensure the integrity of the database. Run this utility at least weekly.

If non-Kronos indexes are present, the Reconcile utility returns information in the Reconcile report similar to the following:

    Non-KRONOS PRIMARY KEYS are: ux1_timesheetitem

Indexes listed in this section of a Reconcile report are not regular objects in the Workforce Central database. The example is likely an index added by a database administrator after a performance study showed that the index would help speed up Oracle query responses. In general, Kronos recommends that you do not add indexes to the Workforce Central database except when requested to do so by Kronos or as part of a Kronos product installation. Maintenance and support of customer-created indexes are customer responsibilities.

Important: If the Reconcile utility reports that Workforce Central indexes, tables, or other objects are non-Kronos objects, verify that the Reconcile was performed when connected to the database as the database owner (such as tkcsowner). If re-running Reconcile as the database owner still shows the Workforce Central objects as non-Kronos objects, contact Kronos Global Support immediately to address a potential object ownership issue in the database.


HyperFind performance
HyperFind queries consist of dynamically created SQL statements that return a list of employees based on criteria established by the user. This list of employees is used to populate Genies, reports, and Schedule Editor functions. To achieve optimal HyperFind performance, construct HyperFind queries using the following guidelines:

- Whenever possible, include multiple entries of the same type of criteria in the same query line. For example, the following syntax represents an inefficient HyperFind criteria specification:

    ((Any home or transferred-in employees worked in */MACNT/*/*/*/*/* OR
      Any home or transferred-in employees worked in */MADIV/*/*/*/*/* OR
      Any home or transferred-in employees worked in */MAEST/*/*/*/*/* OR
      Any home or transferred-in employees worked in */MASLS/*/*/*/*/*))

  The following syntax is a more efficient way of expressing the same HyperFind query:

    ((Any home or transferred-in employees worked in */MACNT, MADIV, MAEST, MASLS/*/*/*/*/*))

- If you need to specify all labor level entries for a labor level and the number of labor level entries is greater than 30, leave the field blank. Do not use the wildcard characters % or *. In previous versions of Workforce Central, these wildcard characters generated inefficient SQL, which caused unpredictable query behavior. Although Kronos has improved this behavior in recent versions of Workforce Central, Kronos recommends that you leave a field blank if all entries need to be specified.


- Review the SQL in the organization's All Home query and other employee group HyperFind queries to ensure that they contain efficient SQL. Efficient SQL syntax follows the guidelines for:
  - Using blank fields
  - Specifying wildcards
  - Creating SQL expressions

- When creating HyperFind queries using Worked Accounts, use the All Home and Transferred In Employees HyperFind as a guide to create your own Worked Accounts HyperFind. This HyperFind demonstrates the correct way to select both of the following:
  - Employees' hours in a manager/user's employee group, whether the employees worked in the employee group or worked in another employee group
  - Employees from other employee groups who worked in the manager/user's group (employees who transferred in)

- Set up labor account sets or organization sets appropriately to limit the number of All Home employees.


Reports best practices


Workforce Central 6.2 includes basic and advanced reporting functionality:

- Basic reporting uses the Report Definition Language Client (RDLC) from Microsoft.
- Advanced reporting uses the Microsoft SQL Server Reporting Services (SSRS) engine.

Workforce Central reports can significantly affect CPU utilization and consume large amounts of memory. Be aware of the following best practices when running reports:

- Using dedicated report servers on page 42
- Limiting the number of employees per report on page 44
- Following procedural best practices on page 45

Using dedicated report servers


Workforce Central 6.2 has significantly improved database access for a number of reports. This means that reports spend a greater percentage of their time in concentrated CPU usage on the report/application server. As reports spike CPU usage on a general application/report server, end-user response times can be affected. To minimize the effect on response time, Kronos recommends that you use dedicated report servers with a minimum of 4GB of RAM for 32-bit or 8GB of RAM for 64-bit.

When the primary application server receives a report request from a client, the server does not decide on a per-request basis whether to run the report itself or relay the request to another server. The server either always runs the report itself or always delegates the request to other servers. The server does provide every report server an equal opportunity to receive report requests.

Use the following recommendations for configuring report engines (concurrent report execution processes):
- When running reports on a general-purpose application server, configure the number of report agents to be no greater than 75% of the total number of system cores. For example, on an 8-core system, configure up to 6 report agents.
- When running reports on a dedicated report server:
  - Configure one report agent per core.
  - Configure one transformer thread for every two report agents.

To create a dedicated report server, you must:
- Disable the report service on the primary application server that handles user authentication.
- Enable the report service on the application server on which you want to run reports.

Perform these steps:
1. In the upper-right corner of the Workforce Timekeeper workspace, click Setup.
2. From the System Configuration box, select System Settings.
3. Select the Reports tab.
4. If you are disabling the report engine on a general-purpose application server, click false for the site.reporting.engine.enable setting. If you are configuring a dedicated report server:
   a. Click true for the site.reporting.engine.enable setting.
   b. Enter the number of report agents for the site.reporting.MaximumRepAgents setting.

5. Click Save.
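For reference, the two settings map to values like the following. This is an illustrative sketch shown in properties-file form for clarity; in practice you change the values through the System Settings UI as described above, and the values shown (a dedicated 8-core report server with one agent per core) are assumptions.

site.reporting.engine.enable=true
site.reporting.MaximumRepAgents=8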

Limiting the number of employees per report


To limit the number of employees per report, create a custom report and deploy it. For optimal performance, ensure that a single report includes fewer than 1,000 employees. Where possible, break up large reports into subsets of fewer than 1,000 employees per report.

Workforce Central 6.2 has a mechanism that can limit the number of employees included in a report. This mechanism is similar to the one used to limit the output of Genies and the Schedule Planner. By default, most reports are assigned a value that effectively places no limit on the number of employees. However, you can change the value for a report:
1. In the upper-right corner of the Workforce Timekeeper workspace, click Setup.
2. From the Common Setup box, select Report Setup.
3. Select the Report Setup tab.
4. Select the name of the report for which you want to limit the number of employees.
5. Select the Limit Number of Employees check box and enter the maximum number of employees to be associated with this report.

6. Click Save.

Following procedural best practices


Because running reports is such a resource-intensive activity, take the following steps to reduce the impact of reports on performance:

- Test HyperFind queries before running reports. When you set up a report to run, test the HyperFind queries that the report uses before running the report in a production environment. This is important because if a report uses a HyperFind query that performs poorly, the query continues to run in Java even when you try to cancel a report that is taking too long or affecting other resources. This can cause system backlogs.

- Schedule reports during non-peak hours. To reduce the impact on report performance, run long-running reports, especially Time Detail and Hours by Labor Account, during off-hours, and avoid running reports at times of peak Workforce Timekeeper activity. Moving processing to off-peak hours reduces resource consumption and, therefore, hardware costs. This is particularly true for reports that include more than 1,000 employees.

- Do not run reports for more than 250 employees from a Genie. Functional and performance issues can occur when users run a report from a Genie if the report contains more than 250 employees. You must run reports that contain more than 250 employees through the Reports menu. If a report contains fewer than 250 employees, running the report from a Genie is an appropriate option if you have a HyperFind that is causing a performance issue. Although the HyperFind is used when you click the Genie, the HyperFind is not called again when you run a report as Previously Selected Employees. The employees returned by the HyperFind in the Genie are populated into the MYWTKEMPLOYEE table for processing by the report.

SSRS 2005 best practices


Based on the performance tests run against SSRS, the following best practices help achieve optimal SSRS performance:

- SSRS and the Workforce Central application/web server can be installed on the same physical machine for companies with 10,000 employees or fewer. This figure is based on reports being run by supervisors with 75 or fewer employees in a department. Running SSRS and Workforce Central on the same physical server requires a minimum of 8 GB of physical RAM.

- When separating SSRS and Workforce Central onto different machines, Kronos recommends that the SSRS server be configured with 4 GB of RAM.

- The use of multiple IIS worker processes on an SSRS machine (also known as Web Gardens) serves as an optimal load balancer of reports on a single machine. Performance data shows that when running multiple worker processes, CPU utilization is distributed almost evenly between the processes. A general rule is to configure one IIS worker process per CPU.

- You do not need to split the SSRS database repository and the IIS processes onto separate servers for performance reasons. Analysis of the performance data reveals that the average CPU utilization of a SQL Server database process on the SSRS server is minimal (5% or less), even when running at a rate of 738 reports per hour.

- For optimal memory utilization and report performance, keep the number of employees included in a report to a minimum, or run larger reports during off-peak hours. For reports with employee detail, such as the Time Detail report, consider running the report in subsets of 1,000 employees or fewer.

- The following SSRS configuration setting has a significant impact on report paging and overall SSRS CPU utilization:
WebServiceUseFileShareStorage

This setting is located in rsreportserver.config. The default value of this property is False, which indicates that snapshot and report data are stored in the temporary database of the SSRS repository. Testing has shown that when this property is changed to True, the user-perceived response time to move from page to page in the HTML browser for a report can be as much as 40% faster. This reduced time translates into less CPU utilization on the SSRS server because the SSRS repository is not utilized as much. Setting the value of this property to True allows you to keep the SSRS repository database and Reporting Services on the same physical machine, regardless of load, since the CPU utilization of SQL Server is minimized.
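In rsreportserver.config, the change is a single element value. A minimal sketch of the relevant fragment (the surrounding elements of the file are omitted):

<!-- rsreportserver.config: store snapshot and report data on the
     file system instead of in the SSRS temporary database -->
<WebServiceUseFileShareStorage>True</WebServiceUseFileShareStorage>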

SSRS 2008 best practices


It is not necessary to split the SSRS database repository and the SSRS processes onto separate servers for performance reasons. However, the SSRS repository database files must be placed on a non-system drive (a drive other than C:). Kronos recommends that you use 64-bit SSRS 2008 for optimal performance and scalability. If you will make primary use of Advanced Reports, use a separate SSRS server.

Next Generation User Interface best practices


Kronos recommends that:
- You use a one-week schedule period for displaying widgets. Two weeks is acceptable if it suits your business needs. However, Kronos does not recommend schedule periods longer than two weeks.
- When you use widgets, you select only HyperFinds that return 100 or fewer employees.
- You use Firefox as your browser on Windows systems.
- The client system has 2 GB of RAM.

Process Manager best practices


Process Manager enables you to automate common business processes in the areas of time and attendance, scheduling, and human resources. It includes a number of templates that you can customize to address specific business needs. For example, managers and employees can use Process Manager to submit online forms and access online messages as they initiate or complete a specific business process. This section outlines process template design and configuration recommendations that you can incorporate in your Workforce Central environment. Specific topics include:
- Process template design recommendations on page 48
- Configuration recommendations on page 52
- Process governors on page 53

Process template design recommendations


When using or creating Process Manager templates, Kronos recommends the following practices:
- Kronos standard templates consist of between 30 and 70 tasks per template. For high-volume templates, keep the process definition as simple as possible, using as few tasks as necessary. Fewer tasks per template provide higher throughput and better performance. The process definitions tested by Kronos had approximately 30 tasks.
- Avoid extraneous tasks and minimize the use of attributes, to make the templates as lightweight as possible.
- Minimize the latency of services with which custom tasks may interact. Tasks that call into resource- and/or time-intensive APIs create overhead and can slow response times during periods of high throughput.
- Use multi-branch tasks instead of multiple branch tasks. Doing so improves performance significantly; in addition, the template structure is simpler and easier to understand.

The following example shows how to convert multiple branch tasks to a multi-branch task:

- Avoid using XOR tasks.
- Combine calls to an API in a single task to eliminate calling the same API multiple times. To do this:
  - Use a Script task to pass the values returned from the API call to the next task in the process.
  - If you have API tasks that call the same API in different ways with different attributes, use the same API task preceded by a Script task to set the specific attributes that you need.

- When using a Branch task, use the Rule condition (specified on the Branch Properties tab) instead of a Script condition. Kronos recommends this because the Script condition is executed frequently, adversely affects performance, and sometimes causes unexpected results. If it is appropriate to use some type of script, use a callback script, which is executed only once, instead of the Script condition. You specify the callback script on the Callback Scripts tab of the Branch task properties sheet.

The following examples show the use of a script condition and a callback script. Script condition

Callback script

Configuration recommendations
Best practices for configuring Process Manager include the following:

- Consider using the process pooling mechanism for high-volume templates. Pooling is the ability to configure a pre-allocated pool of process instances so that they are available for rapid retrieval when a user initiates a request. By reducing demand on the database CPU, process pooling improves response time and overall Process Manager performance. The pool-building event, however, can use a significant amount of system resources. During the pool-building event, all actively deployed process templates with a pool size greater than zero are instantiated in the database for future use. Therefore, consider the following best practices when you are creating a process pool:
  - Schedule pool-building events regularly. However, because pool-building consolidates the process (pre)allocation into a brief concentrated period, the events should be neither too frequent (every minute) nor too infrequent (four times a year).
  - Do not build a large pool during times of peak Workforce Central activity, such as during payroll processing. Schedule pool-building events during off-peak times.
  - Do not build pools so large that they are not used. For example, if all vacation requests are due during the first week of each quarter, increase the pool size of the Time Off Request process just before the first week, and then reduce it after the first week.

During regular operation of the Workforce Central applications, when a request for a new process instance is made, a pre-allocated process instance from the pool is used. This reduces response time and database load, because the work to create a process instance has already been done. The pool is then reduced by one. If the pool is depleted before the next pool-building event, the standard process allocation is used.

- Delete old processes from the database when they are no longer of use and do not need to be saved. Workforce Record Manager provides functionality to purge completed processes.

- Keep database statistics current for both SQL Server and Oracle. Process Manager database tables are especially sensitive to statistics.

- By default, the Process Engine is enabled. If process throughput begins to affect overall application performance during peak load periods, you can disable the Process Engine as follows:
  a. From the upper-right corner of the Workforce Timekeeper workspace, click Setup.
  b. From the System Configuration box, select System Settings.
  c. Select the Business Automation tab.
  d. Change the following business automation parameter from true to false: wba.processengine.enabled
  e. Save your edit.

Process governors
Kronos has identified process governors as the mechanism for limiting Process Manager load for improved system performance. Governors monitor system activity and reject user process requests when certain conditions are detected. However, Kronos recommends using governors only when there is a need to throttle process generation to improve system performance.

Important: When using governors for Process Manager, you are using system-wide governors. At this time, there is no way to associate a Process Manager template with a governor.

A simple governor framework allows zero or more governors to be plugged into Process Manager. Each governor is a Java class that monitors some system metric. Metrics may or may not be related to Process Manager. Governors, which you should enable only on an as-needed basis, are externally configurable using the custom_WBA.properties file, which is located in the following folder:

C:\Kronos\wfc\applications\wba\properties

You can create this file if one does not already exist. For each governor configuration property listed in the following sections, add a line in the custom_WBA.properties file. For example:

private.wba.governors=private.wba.workflow.UserRequestGovernor
private.wba.workflow.UserRequestGovernor.threshold=2

Important: Because it is difficult to predict how any one customer will use Process Manager, analyze the current workload and performance characteristics before you implement governors. You can measure the levels of concurrency and throughput you currently experience by enabling governor logging without enabling any of the governors themselves. Use this information as the basis for establishing any governing values.

Governor configuration

The following properties affect general governor functionality:
private.wba.governors
  Default value: (empty)
  A comma-separated list of governor class names. If empty (the default setting), no governors are activated. As a convenience, WBA.properties contains a commented-out entry for each implemented governor class.

private.wba.governors.log
  Default value: false
  Set to true to enable governor logging. Each governor will log its current state on every process request. Activate this only for debugging purposes; leaving this property on in a production environment may adversely affect performance.

private.wba.governor.outputfile
  Default value: c:/Kronos/governorstats.csv
  The file name to which governor statistics are logged.

Governor types

The following governors are available for activation:

- User Request governor on page 55
- Throughput governor on page 55
- Response Time governor on page 56
- Form Timeout governor on page 57

User Request governor

The User Request governor monitors the number of active user process requests at the current time. Active user process requests are those that are user-initiated and contain a Process Manager form as the first user interface to the process. An example of an active user process request is the Time Off Request. A process is counted by this governor if the user has requested a form, but the form has not yet been rendered.

Note: The only processes included in this count are those launched by requesting Process Manager forms.

private.wba.workflow.UserRequestGovernor.threshold
  Default value: 0
  The number of active requests above which the governor is activated. A value of 0 disables the governor.

Kronos recommends the use of the User Request governor when you want to control the number of concurrently executing Process Manager processes. This governor is best suited for heavyweight templates, where heavyweight is defined as a template that does a significant amount of system processing (possibly asynchronously) and executes for an extended period of time (minutes or more).

Throughput governor

The Throughput governor monitors the current process throughput. Throughput is defined as the number of processes that have been processed during a given, configurable time period. When this number of processes exceeds the threshold, the governor is activated.

Note: The only processes included in this count are those launched by requesting Process Manager forms.

private.wba.workflow.WorkflowThroughputGovernor.threshold
  Default value: 0
  The highest number of processes that can be activated during the configured time period (see below). A value of 0 disables the governor.

private.wba.workflow.WorkflowThroughputGovernor.window.millis
  Default value: 0
  The time period in milliseconds over which activated processes are counted. A value of 0 disables the governor.

Kronos recommends using this governor when you want to control the number of processes that the system will process per unit of time. This governor is best suited for lightweight templates, where lightweight is defined as a template that has a few forms and may involve several user interactions. An example of a lightweight template is the Time Off Request template. For this governor to be effective, Kronos recommends that you keep the window size small (5 minutes or less) to avoid spikes of activity.

Response Time governor

The Response Time governor monitors a value that is roughly equivalent to the user response time for Process Manager forms. The response time is calculated by averaging all response times over a given, configurable time period. When this average exceeds the configured threshold, the governor is activated.

Note: Only forms that launch processes are monitored by this governor.

private.wba.workflow.ResponseTimeGovernor.threshold
  Default value: 0
  The response time above which the governor will be activated. A value of 0 disables the governor.

private.wba.workflow.ResponseTimeGovernor.window.millis
  Default value: 0
  The time period in milliseconds over which response times are averaged. The time window is defined from the current instant back. A value of 0 disables the governor.

Form Timeout governor

The Form Timeout governor counts form timeouts over a given, configurable time period. When the number of timeouts exceeds the configured threshold, the governor is activated.

Note: Only forms that launch processes are monitored by this governor.

This governor is different from the others in that it is reactive rather than proactive. Although Kronos does not prefer this approach, this governor can be viewed as a second line of defense in case other governors are not effective.

private.wba.workflow.TimeoutGovernor.threshold
  Default value: 0
  The number of timeouts (over the configured interval) above which the governor will be activated. A value of 0 disables the governor.

private.wba.workflow.TimeoutGovernor.window.millis
  Default value: 0
  The time period in milliseconds over which form timeouts are counted. The time window is defined from the current instant back. A value of 0 disables the governor.
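Putting the pieces together, a custom_WBA.properties that activates only the Throughput governor might look like the following sketch. The property names come from the tables above; the threshold and window values are assumptions that you would derive from your own logged workload data.

# Activate only the Throughput governor (comma-separated list of class names)
private.wba.governors=private.wba.workflow.WorkflowThroughputGovernor
# Reject new requests once more than 200 processes start in any 5-minute window
private.wba.workflow.WorkflowThroughputGovernor.threshold=200
private.wba.workflow.WorkflowThroughputGovernor.window.millis=300000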


Chapter 2

Workforce Timekeeper Best Practices

Workforce Timekeeper is the foundation of the time and attendance and scheduling capability of the Workforce Central suite. It must be installed before you can install Workforce Scheduler, Workforce Record Manager, or other Time & Attendance and Scheduling products. Workforce Central components place varying demands on overall system performance, and each of the following components is discussed separately in this chapter:
- Configuring the Workforce Timekeeper application for optimum performance on page 60
- Genie performance on page 62
- Totalizer best practices on page 66
- HTML client performance on page 71

Configuring the Workforce Timekeeper application for optimum performance


The Workforce Timekeeper application is CPU-intensive, which means that the amount of CPU resources available is the gating factor at times of peak activity. Although your system was sized to accommodate CPU requirements at times of peak activity, you should monitor system resources as your needs change:
- Configure sufficient resources to maintain server CPU utilization under 50% and network utilization under 25%.
- Use fewer, faster server processors (as opposed to a larger number of slower processors) while maintaining the server resource utilization goal.
- Ensure that each client network link has sufficient bandwidth. Browsers that run Workforce Timekeeper require a minimum network bandwidth of 256 Kbps for optimum performance. As network bandwidth decreases below this threshold, end-user response times degrade, regardless of server utilization. Increasing latency due to a large number of network hops or large propagation delays will exacerbate the degradation.
- Use fast clients. In a number of cases, such as during logon, client-processing time is the largest component of response time, so fast client speed is critical to achieving good response times. If you cannot use fast clients, consider using the HTML client user interface instead of the Java interface. For more information about the HTML client, see HTML client performance on page 71.

In addition to ensuring that you have an efficient configuration, you should adopt the following practices to improve Workforce Timekeeper performance:

- Use only features needed in the timecard. The Workforce Timekeeper flexitime feature allows users to view their accruals in the Totals & Schedule window of their timecard (the accruals window). Although displaying the accruals window in the timecard's Totals & Schedule window has only a moderate cost on the database, it can increase application server response time. Kronos recommends that you turn this feature off if you are not using it.

- Do not edit jobs or rules during peak processing times. If you edit jobs or pay rules during peak processing times, the impact on the application server can be significant, especially if you change multiple attributes. The best practice is to avoid making configuration edits during peak processing periods.

- Do not make pay code edits in the timecard beyond the current pay period. Making pay code edits beyond the current pay period in the timecard slows system performance and totalization. To make pay code edits beyond the current pay period, Kronos recommends that you make the edits in the Schedule Planner instead of the timecard. The projected balance information will be the same whether the time is entered in the Schedule Planner or in the timecard.

Genie performance
Genies can have a significant impact on overall application performance. The impact is primarily due to the volume and/or complexity of the SQL that is generated. Genies affect the performance of the application server when a large quantity of data is returned from the database server. The application server collects data in 200-row chunks. This means, for example, that if over 5,000 rows of data are returned by the database server, the application server will take some time to get those rows, resulting in significant CPU use. There are four simple practices that you should follow for optimum Genie and HyperFind performance:
- Create Genies with only the necessary data columns on page 62
- Assign Genies as needed to users on page 64
- Limit the number of employees displayed in Scheduling Genies to below 1,000 on page 64
- Do not calculate totals on Genies on page 65

Create Genies with only the necessary data columns


The amount of SQL that Workforce Timekeeper generates has an impact on the database server CPU utilization, especially with larger numbers of system users.

Most Genie columns generate an individual SQL statement; however, note the following:
- Pay codes generate one SQL statement for all pay code totals.
- Exceptions generate one SQL statement for a group of exception conditions.
- The Missed Punch Genie column is treated separately from the other exception conditions and generates its own SQL statement.

For example, a Reconcile Timecard Genie could contain the following columns:
- Employee Name
- Home Account
- Unexcused Absence (Exception)
- Missed Punch
- Early In (Exception)
- Late In (Exception)
- Early Out (Exception)
- Late Out (Exception)
- Unscheduled Hours (Exception)
- Totals Up to Date

This Genie generates five SQL statements each time it is activated. To limit the number of SQL statements generated by a Genie, you should create Genies with only the necessary data columns. For example, if you only want to see the missed punches for a group of employees, you should not use the Reconcile Timecard Genie as defined in the previous example. Instead, you should create another detail Genie with just the Employee Name and Missed Punches columns. This new Genie generates only two SQL statements for each execution, versus the five in the original example. The reduction of SQL statements helps to increase the CPU capacity of the database server, as well as to improve the elapsed time of the Genie to return the necessary data.

Assign Genies as needed to users


Another simple practice to follow is to assign only Genies that are likely to be used by each user. Do not assign Genies to a user who is unlikely to need them. For example, a user who will never need to sign off on a group of employees should not be assigned the Pay Period Close Genie.

Limit the number of employees displayed in Scheduling Genies to below 1,000


The global.LongList.ScheduleSummaryEmployeeThreshold property sets the maximum number of employees that can be displayed in the Scheduling Genies. To access this property:
1. From the upper-right corner of the Workforce Timekeeper workspace, click Setup, and then select System Settings from the System Configuration box.
2. In the System Settings dialog box, click the Global Values tab.
3. Find the global.LongList.ScheduleSummaryEmployeeThreshold entry and see what value has been assigned.

By default, this property is set to 200, which is usually appropriate. If you need to display more than 200 employees in a Scheduling Genie, set the value higher than 200, but do not set it over 1,000. If you set the value of this property to over 1,000, the SQL generated by com.kronoswfc.business.sched.GroupMemberService.selectGroupByEmployees causes the equivalent of an infinite loop. While this query is running, access to key tables such as PERSON and WTKEMPLOYEE is blocked, which can essentially lock up the application.

Do not calculate totals on Genies


Calculating totals from a Genie can drain system performance, especially if the calculations involve a large number of employees. It is much more efficient for calculations to be done with a report.

Totalizer best practices


The Totalizer is a set of code that calculates totals in Workforce Central applications. Two components access the Totalizer:

- The Background Processor invokes the Totalizer and stores information in the database. It is used to offload computationally intensive work from the application server. Because the Background Processor prepares data ahead of time, the information is ready in a presentable form when a user logs on to Workforce Timekeeper. By offloading the computation to the Background Processor, Workforce Timekeeper can be optimized for interactive use.

- The Callable Totalizer invokes the Totalizer interactively in the GUI to total employees whose totals have not yet been computed by the Background Processor and are not up to date. It does not send the totals to the database. Totals that are generated online by the Callable Totalizer must be generated a second time by the Background Processor in order to save the values in the database.

The underlying strategy of the Background Processor is to precalculate and store calculated data (in this case, employee totals) to amortize the cost of calculation among the various components that share the data. In addition, decoupling the process of updating totals from interactive tasks speeds interactive response time, at the cost of introducing latency between the time the totals are invalidated and the time the Background Processor updates them. This section includes the following topics that you can use to minimize the chances of the Background Processor causing performance issues:
- General configuration considerations on page 67
- Totalizer extensibility interface on page 68
- Multiple instances on page 68
- Additional recommendations on page 69

General configuration considerations


The Background Processor (BGP) requires a fully functional application server and cannot be run independently of the Workforce Timekeeper application; it is automatically installed with the Workforce Timekeeper application. The Background Processor can either be combined with the main Workforce Timekeeper application or run as a dedicated stand-alone system. Considerations for each type of configuration are as follows:

Combined configuration
- Configure no more BGP threads per instance than 75% of the available cores on the system, to balance CPU and memory resources between BGP processing and other application server processing. For example, if a system has 8 cores, configure no more than 6 BGP threads.
- Overall, do not configure more than 20 threads against any database instance for totalization.
- You may need to increase the maximum number of database connections above the default to accommodate both application server and Background Processor types of processing.

Stand-alone configuration
- As a general rule, use no more than 100% of the available cores on the system. For example, if a system has 8 cores, configure no more than 8 BGP threads per instance.
- If you need more BGP capacity, configure additional instances rather than more threads, for better scalability and throughput.
- To fully utilize a stand-alone Background Processor server, create one Workforce Central instance per core.

Use stand-alone Background Processor instances when:
- Employees are not regularly signed off.
- Shift change processing overlaps other use cases (pay period end in particular).

Important: The fixed read size and minimum queue size Background Processor properties should be left at their defaults (unless directed by Kronos Engineering to change them). This is extremely important for minimizing the impact on memory usage if you have significant numbers of employees with unsigned-off data.

Totalizer extensibility interface


To accommodate Totalizer extensibility:
- You should separate Background Processor and application server processing into separate Workforce Central instances. This is because the Callable Totalizer and Background Processor share the same synchronization point for the hook within an application server instance.
- You can circumvent hook inefficiency and hook synchronization scalability problems by installing multiple instances of the Background Processor and application servers.
- You should re-factor hooks for thread safety to avoid synchronization issues. The use of synchronization is property-driven and can be changed to an unsynchronized call.

Multiple instances
In Workforce Central 6.2, multiple instances can be installed on a single physical application server system in two ways:
- Instances that are created by the Workforce Central Configuration Manager utility.
- Instances that are manually created to enhance application scalability on a single physical application server system. This type of instance is referred to as an instance for vertical scalability.

The main difference between these instance types is that vertical scaling instances respond to requests from a single URL, whereas each Configuration Manager instance responds to its own URL.

The following shows which type of instance to use for a particular situation:
- To separate Background Processor and application server processing: use a Configuration Manager instance.
- To overcome Totalizer hook inefficiencies in the Callable Totalizer: use an instance for vertical scalability.
- To overcome Totalizer hook inefficiencies in the Background Processor: use a Configuration Manager instance.
- To configure stand-alone Background Processor instances: use Configuration Manager instances.

Additional recommendations
The following are important additional recommendations for optimizing the performance of the Background Processor:
- Configure Background Processors appropriately on page 69
- Keep employee totals up to date on page 70
- Keep signoffs up to date on page 70

Configure Background Processors appropriately

Given that the Background Processor is the batch-oriented process responsible for updating employee totals, it is important that you configure it appropriately:
- Ensure that Background Processor streams are always running.
- Provide sufficient Background Processor threads to meet the desired turnaround time for updated totals.
- Install all servers that run Background Processor threads on a high-bandwidth LAN with the database server.
- Ensure that the recommended database administration practices are implemented. This includes keeping database statistics up to date, reducing or removing table fragmentation, and re-creating indexes frequently.

Keep employee totals up to date

The Background Processor must keep up with demand so that employee totals are up to date in the database. When the database server is working with the Background Processor, it has less capacity to support other activities. If the Background Processor is not keeping employee totals up to date, users invoke the Callable Totalizer more frequently to view totaled amounts interactively. However, the Callable Totalizer is expensive to use:
- It makes the database do work twice.
- It uses a significant amount of application server CPU resources.
- It does not calculate scheduled against actual totals.

When calculations are done independently of the application server, the Background Processor removes the calculation time from end-user response time. The goal is to avoid having the Background Processor lag behind in calculations. A number of events can cause the Background Processor to fall behind. Take the following steps to minimize the likelihood of that happening:
- Never edit pay rules at times of peak activity. When pay rules are changed or configured, the Background Processor must re-total all employees. A good time to change rules is immediately after a signed-off pay period.
- Ensure that routine database maintenance, as described in Database best practices on page 13, is up to date.
- Make sure that the Background Processor (as well as the application servers) is on the same LAN segment as the database server.

Keep signoffs up to date

Improve performance of the Totalizer, and Workforce Central in general, by keeping signoffs up to date, typically completing them every pay period. If you do not sign off timecards regularly, every time the Totalizer runs it reprocesses timeframes that are not signed off, performing extra work that robs performance from other parts of the product. Failure to do signoffs for extended periods of time may result in significantly poorer performance and application stability issues.

HTML client performance


The primary design goal of the HTML client user interfaces is to support lower-end PC hardware as well as browsers on non-PC hardware. The HTML client implements most of the Workforce Employee features found in the Java version of the application. For the specific features included, refer to the following documents:
- Getting Started with Workforce Timekeeper: A Guide for Employees
- Getting Started with Workforce Timekeeper: A Guide for Managers

This section describes some of the unique performance issues of the HTML client and contains the following sections:
- HTML client versus applets on page 71
- CPU consumption on page 72
- Quick Time Stamp throughput (kiosk capability) on page 72

HTML client versus applets


The user interface in the HTML client is not downloaded once and reused, as is the case with the Java applet JAR files. Each time data changes in the HTML client, the page must be rebuilt on the application server. These page rebuilds require additional CPU resources on the application server beyond what is required for returning raw data to the applet. The HTML client uses the same business objects as the applet version of the software for all database interaction; therefore, the applet and HTML client database loads are identical.

CPU consumption
An easy way to reduce the work done by the application server is to modify the function access profile for employees who use the HTML client. This reduces the CPU consumption of the HTML client to below that of an equivalent applet user with the same features enabled. To modify the function access profile:
1. From the upper-right corner of the Workforce Timekeeper workspace, click Setup.
2. In the Access Setup dialog box, select Function Access Profiles, then select Default.
3. In the Edit Function Access Profile dialog box, expand Workforce Employee.
4. Select Timecard Editor for Employees (My Timecard).
5. Change the access for Calculate Totals in My Timecard to Disallowed.
6. Click Save.

Quick Time Stamp throughput (kiosk capability)


If you use time stamps collected at kiosks, you probably use the Quick Time Stamp functionality of the HTML client. Unlike terminals, which use Workforce Device Manager to stream punches into the database, Quick Time Stamp requires that each user log on to the application server and retrieve his or her access profile. Therefore, Quick Time Stamp requires significantly more application server and database server resources to process a shift change than Workforce Device Manager does to process the equivalent shift change.

Chapter 3

Workforce Device Manager Best Practices

Workforce Device Manager enables the flow of data between Workforce Central applications and data collection devices. Since Workforce Central v6.1, Workforce Device Manager has replaced Data Collection Manager (DCM). Based on performance testing results, Kronos recommends the following best practices when using Workforce Device Manager:
- Device communication settings recommendations on page 74
- Dedicated Workforce Device Manager servers (version 6.2.1 and later) on page 76
- Recommended threshold for terminals per server on page 77
- Load balancing punch uploads on page 77
- Recommended Web/application server resource settings on page 77

Device communication settings recommendations


Device Manager interacts with devices using one of the following communication protocols:
- Device-initiated: Devices initiate all communication with Device Manager.
- Server-initiated: The majority of communications are initiated from the application server. Communication can be initiated directly from Device Manager to the device or through a proxy.

For optimum performance, Kronos recommends that you use the device-initiated protocol. However, if you use the server-initiated protocol and you have more than 200 server-initiated devices, Kronos recommends that you configure the NetCheck interval to be 10 minutes or longer. Regardless of whether you use the device-initiated or server-initiated protocol, the automatic collection interval is set to 20 seconds by default. To improve throughput and reduce system resource utilization when using automatic punch collection, increase the automatic collection interval value to 60 seconds. Increasing this value allows Workforce Device Manager to process punches more efficiently in larger batches. To change the automatic collection interval value:
1. In the Workforce Timekeeper user interface, select Setup > Device Manager Setup > Device Communication Settings.
2. Select the applicable communications template, and click Edit.
3. Select the communication mode: Device Initiated or Server Initiated.
4. Scroll to the Data Collection area and change the Automatic data collection interval to 60.
5. Click Save.

If downloads will be done frequently (for example, several times each day), reduce the Download retry value to 1 on the Device Communication Setting Editor page.

Note: Vertical scaling is supported only for the device-initiated protocol.

The following illustration shows where the NetCheck interval and data collection options are on the Device Communication Setting Editor page.

Dedicated Workforce Device Manager servers (version 6.2.1 and later)


Use dedicated Workforce Device Manager servers for handling punch processing from devices, Workforce Device Manager batch job processing, and the Workforce Device Manager user interface. Kronos recommends dedicated Workforce Device Manager servers in the following scenarios:
- The number of devices is more than 200.
- The number of Workforce Central application server instances (whether through virtualization, multiple servers, or physical servers) is more than 4.

Kronos recommends the use of 64-bit Workforce Central to provide better scalability. Do not combine dedicated servers; use separate dedicated WDM, report, and WIM servers as needed.

Recommended threshold for terminals per server


For those instances that are configured to communicate with devices, Kronos recommends the following limits:
- 32-bit implementation: Maximum of 500 devices per instance and a maximum of 1,000 devices per server.
- 64-bit implementation: Maximum of 1,000 devices per server.

Load balancing punch uploads


For installations with more than 500 devices, punch uploads need to be load balanced across multiple application servers. This can be accomplished in two ways:
- The primary web server communication setting property can refer to a load balancer. There must be enough instances of Workforce Central behind the load balancer so that each instance handles punches from no more than 500 devices. For example, if 1,500 devices are used at the customer's site, configure three Workforce Central instances behind the load-balanced URL.
- Alternatively, use different primary web servers for the devices. Devices should be logically grouped so that no more than 500 devices share communications profiles with the same primary web server.

See the Workforce Central System Administrator's Guide: Device Manager for additional information about web servers and load balancing.

Recommended Web/application server resource settings


When using Workforce Device Manager, Kronos 4500 terminals communicate with the web/application server the same way that end users do. This often results in the need to increase the amount of defined consumable resources (database connections, web server connections, and application server processing threads) to avoid errors related to a shortage of these resources. Kronos recommends that you modify these resources as described in the following sections:
- Modify maximum database connections on page 78

- Modify web server processing threads on page 78
- Modify application server processing threads on page 79

Note: You only need to apply these modifications to the Workforce Central instances that are used for device communication.

Modify maximum database connections


The system configuration property site.database.max (located on the Database tab in Setup > System Configuration > System Settings) needs to be set appropriately based on the number of devices in use. Look up the appropriate value for your site in the following table:

Devices per application server instance    site.database.max setting
Fewer than 250                             250
250 to 500                                 400

Modify web server processing threads


The properties that control the number of available processing threads on the web server need to be set appropriately. This is done in different files depending on which web server is being used, Apache or IIS.

Apache web server

Increase the value of the configuration property ThreadsPerChild in the httpd.conf file located in the \Kronos\apache\conf directory. Look up the appropriate value for your site in the following table:

Devices per primary web server    ThreadsPerChild setting
Fewer than 250                    300
250 to 500                        500
500 to 1,000                      1000
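As a concrete illustration, for a primary web server handling 250 to 500 devices the httpd.conf change would look like the following sketch; the value is taken from the table above, so adjust it to your device count.

# \Kronos\apache\conf\httpd.conf
# Worker threads for the Apache child process, sized for 250-500 devices
ThreadsPerChild 500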

If this value approaches 1,000, use multiple web servers, because Apache has a hard-coded limit of 1,024.

IIS web server

Increase the value of the connection_pool_size configuration property in the workers.properties file for each application server instance used for device communication. This file is located in the following directory:

\Kronos\jboss_connectors\IIS\conf

Look up the appropriate value for your site in the following table:

Devices per application server instance    IIS connection_pool_size setting
Fewer than 250                             250
250 to 500                                 400
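A sketch of the corresponding workers.properties entry, assuming a worker named worker1 serving 250 to 500 devices (the worker name is hypothetical; use the worker names already defined in your file):

# \Kronos\jboss_connectors\IIS\conf\workers.properties
# AJP connection pool sized for an instance serving 250-500 devices
worker.worker1.connection_pool_size=400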

Modify application server processing threads

Increase the value of the configuration property maxThreads in the server.xml file, which is located in the following directory:

...\Kronos\jboss\server\wfc_staging\deploy\jboss-web.deployer

Note: Do not make changes to the JBoss server settings in the instance-specific folder. Changes that exist in an instance-specific folder will be lost when a new service pack is installed. To enable a change in the JBoss server settings to be distributed to all instances, first make the change in the wfc_staging folder. Then, make the change on all application servers. Finally, run the Configuration Manager after changing the JBoss server settings on each server.

Because there are multiple maxThreads properties in this file, you only need to change the section for the AJP 1.3 connector:

<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="@JBossAJPConnectorPort@" address="${jboss.bind.address}"
    protocol="AJP/1.3" emptySessionPath="true" enableLookups="false"
    maxThreads="250" redirectPort="@JBossRedirectPort@"
    useBodyEncodingForURL="true"/>

Look up the appropriate maxThreads value for your site in the following table:

Devices per application server instance    AJP 1.3 maxThreads setting
Fewer than 250                             250
250 to 500                                 400

Chapter 4

Workforce Scheduler Best Practices

Based on the performance tests run against Workforce Scheduler, this chapter presents recommended best practices to help you achieve optimal performance for your Workforce Scheduler environment.

Best Practices
You should keep the value of the following property at the default of 50:

global.WtkScheduler.MaximumNoOfRuleViolationsSentToClient

This property governs the number of rule violations sent from the application server to the client. Testing of Workforce Scheduler showed that loading rule violations is a memory-intensive operation that significantly impacts user response times, so setting this value higher than 50 results in longer response times. To check the value of this property:
a. Log on to Workforce Timekeeper as a system administrator or SuperUser.
b. In the upper-right corner of the Workforce Central workspace, click Setup. Then, from the System Configuration box, select System Settings.
c. In the System Settings dialog box, click the Global Values tab.
d. Check the value of the global.WtkScheduler.MaximumNoOfRuleViolationsSentToClient setting. If the value is higher than 50, change it to 50.

Kronos recommends that Schedule Group operations involving more than 300 employees per group be conducted over shorter schedule periods in the Schedule Planner view, even as short as one day. Changes beyond the period in view are processed in the background. This practice lowers demand on system resources, CPU, and memory on the client machine.

The response time of the Schedule Planner is sensitive to the number of employees loaded and the number of weeks viewed. Kronos recommends that you minimize the number of employees loaded in the Schedule Planner. Limit the number of employees to no more than 500 for four to six weeks per Schedule Planner at any given time.

When running Schedule Generator interactively, cover the shortest schedule periods and the smallest number of employees possible. Schedule Generator is CPU-intensive on the application server and can impact the response times of other interactive users in the system. Therefore, it is good practice to run Schedule Generator during off-peak hours of operation.

Chapter 5

Workforce Forecast Manager Best Practices

Workforce Forecast Manager is typically used in retail environments where staffing requirements vary depending on business demand. For example, the number of staff needed to meet the level of service required in a grocery store can vary from day to day. Based on the performance tests run against Workforce Forecast Manager, the best practices presented in this chapter will help you achieve optimal performance for Workforce Forecast Manager.

Note: The best practices noted in this document also apply to Schedule Generator.

This chapter contains the following sections:
- Best practices on page 86
- Configuring number of forecasting and Auto-Scheduler engines on page 88

Best practices
For large stores (more than 100 employees per store), Kronos recommends that you run volume forecasting, labor forecasting, and auto-schedule generation using Event Manager during off-peak hours. This is especially true for auto-schedule generation. (See Configuring number of forecasting and Auto-Scheduler engines on page 88 for more information.) Running Auto-Scheduler in interactive mode at higher user loads consumes significant amounts of application server CPU and causes response times to suffer.

You should keep the value of the following property at the default of 50:

global.WtkScheduler.MaximumNoOfRuleViolationsSentToClient

This property governs the number of rule violations sent from the application server to the client. Testing of Workforce Scheduler showed that loading rule violations is a memory-intensive operation that significantly impacts user response times, so setting this value higher than 50 results in longer response times. To check the value of this property:
a. Log on to Workforce Timekeeper as a system administrator or SuperUser.
b. In the upper-right corner of the Workforce Timekeeper workspace, click Setup. Then, from the System Configuration box, select System Settings.
c. In the System Settings dialog box, click the Global Values tab.
d. Check the value of the global.WtkScheduler.MaximumNoOfRuleViolationsSentToClient setting. If the value is higher than 50, change it to 50.

Auto-Scheduler response times and number of iterations: The default setting for Run iterations is 100. This number of iterations results in better-quality schedules. Do not change this setting unless you have experimented with different numbers of iterations and analyzed the resulting schedules. As expected, higher numbers of iterations result in longer response times.

Auto-Scheduler response time at different store levels: Tests have shown that generating schedules at the store level takes longer than the sum of the response times for the individual departments. Running Auto-Scheduler at the department level reduces the overall time to generate schedules. Depending upon business needs, group the jobs in a store into multiple option sets.

Number of Auto-Scheduler engines: Auto-Scheduler is CPU-intensive on the application server. A general rule is to ensure that the number of Auto-Scheduler engines does not exceed 75% of the available logical CPUs (cores) on the application server. For example, if there are 8 cores available, configure no more than six Auto-Scheduler engines.

Configuring number of forecasting and Auto-Scheduler engines


It is very important to provide a sufficient number of engines when multiple users want to execute the Forecasting and Auto-Scheduler engines interactively. If you do not, users will be blocked from running forecasting or Auto-Scheduler. Use the following procedure and guidelines to configure the number of engines:
1. From the Workforce Central user interface, select Setup > System Configuration > System Settings.
2. Click the Engine Controller tab. Note that the following system setting is set to a default value of 100: site.EngineController.maximumUtilizationPercentage

3. Set the value based on the number of concurrent users expected on this particular application server. For Volume and Labor Forecasting and Auto-Scheduler, use the following equation to calculate the appropriate value:


<Number of concurrent users> multiplied by 100

where 100 is the Auto-Scheduler engine rating. For example, if the expected number of users is 50, then the value should be 5000.

Caution: Because Auto-Scheduler engines are CPU-intensive, Kronos highly recommends that you minimize the number of users running these engines interactively. Otherwise, the response times of other interactive users will be impacted.


Chapter 6

Workforce Operations Planner Best Practices

Workforce Operations Planner is a retail scheduling product that enables budget stakeholders to collaboratively develop and review the budget. It also generates a labor forecast, in hours as well as dollars, for a future fiscal period. Based on the performance tests run against Workforce Operations Planner, this chapter presents the recommended best practices to help you achieve optimal performance for Workforce Operations Planner.


Best practices
When working with large numbers of stores (400 and above) that have long fiscal periods (longer than a quarter), it is important to execute the following functions when the demand for system resources is not significant:
- Generate Plan with volume data
- Generate Plan with labor data
- Edit plan

The Batch Processing Framework processes each request based on its priority and the time that it entered the queue. If the priorities of two requests are the same, the request that entered the queue first is processed first.

Note: Workforce Operations Planner batch requests are entered into the queue with the highest priority. Therefore, if other batch requests with lower priority (for example, running the Forecaster or Auto-Scheduler engines) are already in the queue, the Workforce Operations Planner batch requests run before the lower-priority requests.


Chapter 7

Workforce Attendance Best Practices

Workforce Attendance interprets the time and attendance data that is collected by Workforce Timekeeper based on your company's attendance policies. It also generates applicable warnings to help supervisors manage their employees' attendance. Based on the performance tests run against Workforce Attendance, this chapter presents the recommended best practices to help you achieve optimal performance for Workforce Attendance.


Best practices
If you have more than 5000 employees, you should run Workforce Attendance processing in batch mode during off-peak hours. Because the response time of interactive mode can be prohibitive during periods of peak activity, use Event Manager to schedule batches to run daily during off-peak hours. This is especially important if you are using either of the following features:
- Lost time
- Lost time with time collection

You can increase the number of Workforce Attendance processing batches that can run concurrently by changing the following in the System Settings:
a. In the upper-right corner of the Workforce Timekeeper workspace, click Setup.
b. In the System Configuration box, select System Settings.
c. Select the Batch Service tab.
d. Change the value of site.BatchService.numberOfCPU to 2 to set the number of CPU cores.


e. Click Save.

Note: If you increase this setting, you should also consider the impact on other Workforce Central 6.2 processing.


Chapter 8

Workforce Analytics Best Practices

Workforce Analytics extracts data from the Workforce Central database and transforms it into a target data warehouse model that is designed for optimal reporting and analytics. For optimal performance, consider the recommendations in the following sections:
- SQL Server best practices on page 98
- Oracle best practices on page 102
- Open Query best practices on page 104
- Workforce Analytics product performance best practices on page 105
- Analysis Services SSAS best practices on page 106


SQL Server best practices


When using a SQL Server database for Workforce Central:
- Set Maximum Degree of Parallelism to 1 on page 98
- Configure Workforce Analytics database appropriately on page 100
- Configure Workforce Analytics data mart server appropriately on page 100

Set Maximum Degree of Parallelism to 1


Workforce Central requires that the number of processors used in parallel plan execution be limited to 1. By default, SQL Server uses parallelism to break large SQL transactions down into multiple smaller transactions based on the number of CPUs available. For example, if four CPUs are available, a single large transaction could be broken down into four smaller transactions that execute in parallel to improve transaction speed. However, because parallelism tends to increase blocking on totals-related Workforce Central tables, you must disable this feature on both the Workforce Central source database and the Workforce Analytics data mart database. SQL Server will then use a single CPU for any one transaction, although it can still use all available processors across transactions. To set this parameter:
1. Select Start > Programs > Microsoft SQL Server > SQL Server Management Studio.
2. In the Connect to Server dialog box, enter the appropriate information for your environment.
3. Click Connect.
4. From Management Studio, right-click the server name, select Properties, and select Advanced from the left side of the workspace.


5. Change the Max Degree of Parallelism to 1.

6. Click OK.

If you prefer, you can disable parallelism by executing a query. From the left-side navigation bar of SQL Server Management Studio, select the name of the Workforce Central database and click New Query from the header. In the Management Studio query window, enter the following, then click Execute:

exec sp_configure 'show advanced options', 1
go
reconfigure with override
go
exec sp_configure 'max degree of parallelism', 1
go
reconfigure with override
go
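
To confirm the change afterward, you can query the sys.configurations catalog view; this is a quick verification sketch, and a value_in_use of 1 indicates that parallelism is disabled:

-- Verify the running value of max degree of parallelism
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'max degree of parallelism';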


Configure Workforce Analytics database appropriately


Kronos strongly recommends that you use as many disks as possible, as well as a RAID solution, to optimize database performance. A hardware RAID device is usually the most effective type of RAID. For optimum performance, Kronos recommends the following:
1. Locate the tempdb system database (including its log and data file) on a separate RAID volume. The space on the tempdb drive should follow the disk space calculation spreadsheet recommendation.
2. Locate the IA_DB and IA_ETL_DB log files on a separate RAID disk volume.
3. Locate the IA_DB and IA_ETL_DB data files on a separate RAID system with multiple disk drives (15K RPM drives are recommended).
4. Use separate disk volumes for indexes and data files.
5. Configure the RAID system with:
   - Read-ahead enabled
   - Write-back caching enabled

Configure Workforce Analytics data mart server appropriately


Kronos recommends that you configure the data mart server as follows:
1. The data mart server should have a minimum of 4 GB of RAM.
2. Allocate the TEMPDB with adequate space: 50 GB or more. For large customers, Kronos recommends that you allocate 100 GB across four files of 25 GB each. The TEMPDB files should be set to grow by 10% with an UNLIMITED maximum size.
3. You should increase the query timeout on the data mart database server as follows:
   a. Select Start > Programs > Microsoft SQL Server > SQL Server Management Studio.
   b. In the Connect to Server dialog box, enter the appropriate information for your environment and click Connect.


   c. In Management Studio, right-click the server name, select Properties, and select Connections from the left side of the workspace.
   d. Change the query timeout value.

   e. Click OK.

Note: Kronos recommends that you set the Remote Query timeout value to 1800 (30 minutes) to allow potentially long queries to complete.
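
If you prefer to script these data mart settings rather than use the GUI, the following T-SQL sketch shows one way to do so. The logical file name and path are placeholder examples only; adjust counts and sizes to your disk space calculation:

-- Example only: add one of the four recommended 25 GB TEMPDB data files
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 25GB,
          FILEGROWTH = 10%,
          MAXSIZE = UNLIMITED);

-- Set the remote query timeout to 1800 seconds (30 minutes)
EXEC sp_configure 'remote query timeout', 1800;
RECONFIGURE;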


Oracle best practices


When using an Oracle database for Workforce Central, Kronos recommends the following settings and configuration modifications:
- Configure Workforce Analytics database on page 102
- Configure Workforce Analytics data mart server on page 103
- Oracle Provider for OLE DB on page 103

Configure Workforce Analytics database


Kronos recommends the following settings on the Workforce Central database:

Oracle initialization parameter settings
A key to assuring optimal application performance on an Oracle platform is to minimize the amount of physical I/O required to process a query. With this in mind, Kronos has tested a number of parameter settings for optimal performance of an Oracle database with Workforce Central. For Workforce Analytics, Kronos recommends the following Oracle initialization parameter settings:
- UNDO_RETENTION=3600. This dynamic parameter specifies the length of time (in seconds) to retain undo.
- DB_FILE_MULTIBLOCK_READ_COUNT=16. This setting determines the maximum number of database blocks read in one I/O operation during a full table scan.

Oracle memory settings
Kronos recommends that you set only MEMORY_TARGET and MEMORY_MAX_TARGET and leave the detailed memory management to Oracle 11g. You should set these memory target values based on the available memory on the server and the operating system's upper limits.

Database writer processes
The database writer (DBWn) writes modified blocks from the database buffer cache to the data files. Kronos recommends that you use one database writer process for each CPU core, up to a maximum of four writer processes.
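
As an illustration, the recommended parameters can be applied with ALTER SYSTEM, as sketched below. The memory sizes and writer-process count are placeholders that must be derived from your server's RAM and core count; the static parameters take effect only after an instance restart:

-- UNDO_RETENTION and DB_FILE_MULTIBLOCK_READ_COUNT are dynamic
ALTER SYSTEM SET undo_retention = 3600;
ALTER SYSTEM SET db_file_multiblock_read_count = 16;

-- Example sizes only; size these from available server memory
ALTER SYSTEM SET memory_max_target = 16G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target = 12G SCOPE = SPFILE;

-- One writer per CPU core, up to four (this example assumes 4+ cores)
ALTER SYSTEM SET db_writer_processes = 4 SCOPE = SPFILE;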


Configure Workforce Analytics data mart server


Kronos recommends that you configure the data mart server as follows:
1. Put IA data, IA indexes, IA_ETL data, and IA_ETL indexes on separate drives. If this is not possible, separate the IA and IA_ETL data by putting IA data and IA_ETL indexes together, and IA indexes and IA_ETL data together.
2. Create the data mart database with a 16K block size.
3. Use the same 16K block size for all tablespaces.
4. Create 10 redo log files, each 1 GB in size.
5. Locate the TEMP and UNDO tablespaces on separate drives or SAN LUNs.
6. For large customers, create the UNDO tablespace with four files of 50 GB each.
7. Allocate the TEMP tablespace with adequate space: 50 GB or more. For large customers, Kronos recommends that you allocate 100 GB across four files of 25 GB each. The TEMP tablespace should be set to grow by 10% with an UNLIMITED maximum size.
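
A partial sketch of these steps in SQL*Plus syntax follows. All file paths, group numbers, and tablespace names are placeholder assumptions; note that Oracle autoextend grows by a fixed increment (NEXT) rather than by a percentage:

-- One of the ten recommended 1 GB redo log groups (path is an example)
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u05/oradata/iadm/redo04.log') SIZE 1G;

-- One of the four 50 GB UNDO tablespace data files
ALTER TABLESPACE undotbs1
  ADD DATAFILE '/u06/oradata/iadm/undo02.dbf' SIZE 50G;

-- One of the four 25 GB TEMP tablespace files, allowed to grow without limit
ALTER TABLESPACE temp
  ADD TEMPFILE '/u07/oradata/iadm/temp02.dbf' SIZE 25G
  AUTOEXTEND ON NEXT 2G MAXSIZE UNLIMITED;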

Oracle Provider for OLE DB


Use Oracle Provider for OLE DB to connect to the Oracle data mart from SQL Server Analysis Services.
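
For example, the Analysis Services data source would use a connection string of the following form; the TNS alias and credentials shown here are placeholders:

Provider=OraOLEDB.Oracle;Data Source=IA_DM;User ID=ia_user;Password=*****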


Open Query best practices


Use the Oracle OLE DB driver to connect to the Oracle source database. In the Open Query configuration, also follow all the best practices for the SQL Server data mart described in SQL Server best practices.


Workforce Analytics product performance best practices


Workforce Analytics uses its own data mart to store analytical data. This allows the data to be stored in a way that optimizes performance and takes the load off the timekeeping transactional database. Before you can view analytical data, you must import historical data from your timekeeping database. When you run the Historical Data Load Utility, you must specify whether you are running the historical load for the detail or summary ETL. Both the Core Detail and Core Summary historical loads must be run, in that order, for each source/tenant combination. Kronos recommends that you complete the Core Detail historical load for a given source/tenant combination before running the Core Summary load for that tenant and source. Kronos also recommends the following:
1. Run the historical load for 24 months or less by setting the Historical Load Start Date parameter accordingly in the ETL Configuration dialog box.
2. If you have a large database, Kronos recommends that you set the Time Summary Cube Days ETL configuration parameter to 92 days. If you have a smaller database, you can set this parameter to a higher value. This setting is used by the Time Summary Daily cube to load data; it sets the number of days for which the scheduled nightly ETL process should load time summary data into the Cube Time Summary fact table.
For more information, refer to the Workforce Analytics Installation Guide.


Analysis Services SSAS best practices


Kronos recommends that you change the ExternalCommandTimeout property on the SSAS server from the default of 3600 seconds (1 hour) to 14400 seconds (4 hours). This property sets the number of seconds that SSAS waits before timing out when issuing commands to external data sources, such as relational and other OLAP sources. Setting this property to 14400 allows sufficient time for the cube processing query to complete. To change the ExternalCommandTimeout property on the SSAS server:

1. Connect to the Analysis Services server using SQL Management Studio.
2. Right-click the server name and select Properties.
3. Select Show Advanced (All) Properties at the bottom of the workspace.
4. Find the ExternalCommandTimeout property and change its value to 14400.


Kronos also recommends that you put the Analysis Services cubes on a volume backed by multiple drives and with adequate space; this is required for Analysis Services performance. In the Analysis Services server properties, change the default data directory to one that has enough disk space.


Chapter 9

Workforce HR/Payroll Best Practices

Workforce HR and Workforce Payroll are core components of the Workforce Central suite of products. Both applications use Microsoft Transaction Server (MTS), Internet Information Server (IIS), and a SQL Server database to provide a single point of access for HR and payroll needs.


Database configuration
Database configuration is key to HR/Payroll application performance. If you have not already done so, review the following sections of this document:
- Database server best practices on page 17
- Best practices for TEMPDB layout on page 28


Preventing connections from timing out


To prevent connections from timing out when you run long reports, change the Workforce HR/Payroll connection timeout setting for server.scriptimeout. To do this on a machine running Windows Server 2003:
1. On the web server machine, select Start > Administrative Tools > Internet Information Services (IIS) Manager.
2. Expand the directory listing, right-click Web Sites, and select Properties.
3. In the Web Sites Properties dialog box:
   a. Select the Web Site tab and set the connection timeout to 1800 seconds.
   b. Select the Home Directory tab, and click Configuration.
   c. In the Application Configuration dialog box, select the Options tab.
   d. Select Enable session state and set the session timeout to 30 minutes.
   e. Set the ASP script timeout to 1800 seconds.
   f. Click OK until you return to the main Internet Information Services (IIS) Manager dialog box.

4. In the Internet Information Services (IIS) Manager dialog box:
   a. Expand Web site > Default Web Site.
   b. Right-click Admin and select Properties.
   c. In the Admin Properties dialog box, click Configuration.
   d. In the Application Configuration dialog box, select the Options tab.
   e. Select Enable session state and set the session timeout to 30 minutes.
   f. Set the ASP script timeout to 1800 seconds.
   g. Click OK until you return to the main Internet Information Services (IIS) Manager dialog box.
5. Restart IIS:
   a. Select Start > Control Panel > Administrative Tools > Services.
   b. In the Services panel, right-click IIS Admin Service and click Restart.
   c. Exit the Control Panel.


To do this on a machine running Windows Server 2008 R2:
1. On the web server machine, select Start > Administrative Tools > Internet Information Services (IIS) Manager.
2. Expand Application Pools.
3. Right-click Classic .NET AppPool.
   a. Select Set Application Pool Defaults.
   b. Set Idle Time-out (minutes) to 30.
   c. Click OK.
4. Expand the Sites directory listing.
   a. Right-click the website.
   b. Select Manage Web Site.
   c. Select Advanced Settings.
   d. Expand Behavior and Connection Limits.
   e. Change Connection Time-out (seconds) to 1800.
5. Select the WFC website (usually Default Web Site).
6. In the ASP.Net Session State dialog box for the WFC website:
   a. Set Time-out (in minutes) to 30.
   b. Select the WFC website (usually Default Web Site) again.
   c. Select Yes to save the changes in the Session State dialog box.

With any version of IIS, you must also modify the c:\Kronos\whrpr\web\web.config property file. Set the values of the following properties to 1800:

<httpRuntime executionTimeout="1800" />
<add key="AsyncPostBackTimeout" value="1800" />
<add key="iKeepAliveTime" value="1800" />
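
In a standard ASP.NET web.config, the httpRuntime element belongs under system.web and the two keys under appSettings. The fragment below is a sketch of where these settings typically sit, assuming an otherwise unchanged file:

<configuration>
  <appSettings>
    <add key="AsyncPostBackTimeout" value="1800" />
    <add key="iKeepAliveTime" value="1800" />
  </appSettings>
  <system.web>
    <httpRuntime executionTimeout="1800" />
  </system.web>
</configuration>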


Chapter 10

Workforce Record Manager Best Practices

A key element in minimizing the impact of Workforce Record Manager (WRM) on the operation of the production database is understanding how to fit WRM processing into allocated maintenance windows, or how to run WRM tasks during periods when processing on the production database is minimal. Based on testing of WRM in Workforce Central 6.2, the following are key recommendations to help obtain optimal performance of the WRM COPY and PURGE processes. Note that unless otherwise specified, any Workforce Central Application Tuning options are found in the Record Retention Options and Tuning tab under Setup/System Settings. Each of these recommendations is discussed in more detail later in this chapter:
- Configure ample I/O bandwidth.
- Use dedicated 64-bit application servers.
- Run the WRM COPY during non-peak processing hours.
- Avoid running the PURGE with active Workforce Central user activity or processing.
- Run COPY and/or PURGE operations in 1-month increments.
- Define and implement archive strategies within the first year of overall deployment.
- Process more tasks concurrently by setting the application property WrmSetting.Tuning.MaxThreads to at least 6, especially on an application server with 8 processors (64-bit application servers ONLY).
- Use Row Count Validation for COPY operations rather than Complete Binary Validation for optimal performance. Never select the option No Validation, so that the Target Database is assured to be in a valid state.


- Set the application property WrmSetting.Option.DropTargetIndexes to true to drop indexes on the Target Database during a COPY operation.
- Move the data for large non-historic tables in one operation by setting the property WrmSetting.Tuning.CopyChunkRowIdThreshold to a value higher than the maximum number of rows in the largest non-historic table (64-bit application servers ONLY).

Note that for the properties WrmSetting.Tuning.MaxThreads and WrmSetting.Tuning.CopyChunkRowIdThreshold, the application limits the values entered through the Record Retention Options and Tuning tab to 4 and 500000 respectively, due to the memory limitations of 32-bit application servers. To override this GUI limitation, enter the following lines into the file custom_WRM_OptionsAndTuning.properties, which is located in the directory \Kronos\wfc\applications\wrm\properties on the application server, and then restart the application server for the changes to take effect:

WrmSetting.Tuning.MaxThreads=6
WrmSetting.Tuning.CopyChunkRowIdThreshold=50000000

Configure Ample I/O Bandwidth


Workforce Record Manager is very I/O-intensive on the database server due to the large volumes of data being moved during a COPY operation to a target database, or the large volumes of data being deleted from the production database. For optimal performance, it is critical to have ample I/O bandwidth and to balance I/O for optimal INSERT and DELETE performance on the database server. For example, on Oracle configurations, as discussed in Appendix D, take advantage of asynchronous I/O; testing has shown significant performance improvements using this option. For the database server (source or target), while faster processors help overall performance, configuring ample I/O bandwidth with multiple disks is the key to achieving the quickest times to both copy and purge data. If the I/O bandwidth and balancing are not adequate, the speed and number of processors on the database server are not as significant, and the best practices for running Workforce Record Manager, such as using more threads, will not have the desired impact.


Use dedicated 64-bit application servers


One of the enhancements introduced with Workforce Central 6.2 is the ability to use 64-bit application servers. Workforce Record Manager can take advantage of a 64-bit architecture to process more tasks concurrently and to process more data in a single transaction. These capabilities, in turn, reduce the elapsed time required to process COPY and/or PURGE operations. On 32-bit systems, the key limitation was the size of the Java heap on the application server, which was limited to approximately 1.4 GB. Therefore, limitations were imposed on the number of records being passed through the application server, which limited concurrency and prevented processing large tables in one operation. Testing on the 64-bit application server in Workforce Central 6.2 shows that this limitation is removed, allowing more concurrency as well as the processing of larger chunks of data in one operation. Some of the new recommendations for WRM 6.2, as noted above, apply only to 64-bit application servers and have helped improve WRM performance significantly in some cases over previous releases.

Run the WRM COPY and/or PURGE during non-peak processing hours
Running the WRM COPY introduces overhead on the production database due to the population of temporary node tables to establish the data to be copied, the extraction of data from the Source Database to the Target Database, and the data validation routines that ensure the data copied to the Target Database matches the data on the Source Database. Because this overhead occurs on the largest of the transaction tables, it is strongly recommended not to run the COPY during peak processing hours, such as pay period end or other times when the system is busiest. Define a maintenance window in which to run the WRM COPY, similar to how database maintenance tasks are scheduled.


Run COPY and/or PURGE Operations in 1-month Increments


To run the COPY and/or PURGE during a maintenance window, it is important to identify a unit of work that fits into that window. Testing of each of the customer databases for WRM 6.2 revealed that, when adhering to the best practices specified in this document, COPY and PURGE operations fit into a reasonable maintenance window if the data was processed one month at a time. The Summary of Results section details the times achieved based on 1-month runs of the COPY and PURGE jobs, and explains why, in most cases, running the jobs in 1-month increments is optimal for fitting the process into most maintenance windows. Depending on the maintenance window, it may be possible to process more than one month of data; in most cases, you can use the timings for processing one month of data to extrapolate whether more data fits into the given window. This extrapolation assumes that the COPY and/or PURGE operations are processing timeframes in which the application was fully rolled out to all stores and/or business units.

Define an archive strategy earlier rather than later


To get the best use out of Workforce Record Manager Archive, and to maintain optimal production database performance and size, it is critical to implement WRM Archive strategies within the first year of rollout. If the database grows to the point where maintenance procedures exceed their time allotments and user transactions slow down because of the larger database, it may take a long time to reach the point where data can be purged from the production database. As an example, consider Database #1 used for this assessment. Recall that the COPY operation for each month took about 8 hours to complete. Suppose that this customer accumulated 8 years of data and then decided to use Workforce Record Manager to remove 7 years of that data. Assume as well that this customer had a maintenance window of 8 hours in which to run the COPY. Based on the performance data, it would take the equivalent of 84 maintenance windows (7 years of monthly COPY jobs) just to run the COPY jobs in preparation for removing the data from the database.


The bottom line is that if the COPY is run within the first year of full rollout, the database will be in a state where the PURGE can be run as soon as it is decided to reduce the size of the production database. This eliminates a significant delay if the database suddenly exceeds size requirements or causes typical database maintenance procedures to exceed their maintenance windows.

Process more tasks concurrently (64-bit ONLY)


WRM provides a mechanism, the WrmSetting.Tuning.MaxThreads property, to process multiple tables concurrently for the COPY and the PURGE. By default, this property is set to 4. In releases of WRM before 6.2, it was not recommended to set this value any higher because of the 32-bit architecture and the risk of OutOfMemory errors when the data being sent through the Java heap exceeded 32-bit limitations; in some cases, the value of this property was actually reduced in previous releases. With a 64-bit application server, however, this Java heap limitation does not exist, and it is possible to configure this value higher than the default of 4. Performance testing of each of the database samples for WRM 6.2 used a value of 6 for the WrmSetting.Tuning.MaxThreads property. Based on this successful testing, it is recommended to use a value of 6 for WrmSetting.Tuning.MaxThreads unless the application server has fewer than 6 processors.

Use Row Count validation for COPY operations


From a performance perspective, it is recommended to use Row Count validation as the validation method for the COPY process. Complete Binary validation essentially runs queries returning all records processed in a chunk to the application server from the Source and Target Databases, to compare each column and row of data processed. This operation is very expensive from both an application server and a database server perspective. Row Count validation runs queries to get the row counts of tables after the tables have been fully processed.


The count query is run on both the Source and Target Databases and the results are compared for data validity. To assess the performance impact of these two validation options, a COPY experiment was run with Database #2, as defined in Table 15 on page 148 in Appendix D. The experiment consisted of running the COPY for one month of data with Complete Binary validation, then with Row Count validation. The results were as follows:
- Complete Binary Validation COPY time: 4.1 hours
- Row Count Validation COPY time: 1.4 hours
Note that in this test case, the elapsed time to run the COPY increased by almost a factor of 3 when using Complete Binary validation instead of Row Count validation. What is more significant is the actual time measured for the validation methods themselves. Recall that for binary validation, queries are run for every chunk processed, and those queries run in parallel. The elapsed times for the validation methods in this experiment were:
- Complete Binary Validation processing time: 17.5 hours
- Row Count Validation processing time: 7.2 minutes
Because the process is multi-threaded, the 17.5 hours of processing was distributed among the 6 threads configured for the property WrmSetting.Tuning.MaxThreads. With fewer threads, the overall elapsed time for the COPY would have increased significantly due to the time requirements of Complete Binary validation.

Drop indexes on the target database for the COPY


Inserting data into fully indexed tables, especially tables with millions of rows, can take significant time due to the overhead of maintaining the indexes as data is inserted, especially on Oracle platforms. The COPY has a feature that drops indexes on large tables specified by the user so that the performance of INSERT operations is significantly enhanced. The name of the property is WrmSetting.Option.DropTargetIndexes; its default setting is false, and it is recommended that you change it to true. The property that defines the indexes to be dropped is named WrmSetting.Option.IndexList. The recommended indexes to drop are defined in Index List for the Property WrmSetting.Option.IndexList on page 150 in Appendix D. This list of indexes is based on running the WRM COPY on several Oracle and SQL Server databases.
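
For example, the setting can be enabled with a single line in the custom_WRM_OptionsAndTuning.properties file described earlier in this chapter; the index list itself is supplied separately through WrmSetting.Option.IndexList, using the entries from Appendix D:

WrmSetting.Option.DropTargetIndexes=true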


To show the impact of inserting records into, and deleting records from, large tables with heavy indexing, an experiment was run using Database #1, as defined in Table 14 on page 148 in Appendix D, to INSERT and DELETE records in the WFCTOTAL table with full indexing applied and without the indexes. Table 12 shows the results when varying the number of records inserted into and deleted from the WFCTOTAL table. All times are measured in seconds.

Table 12: Impact on DELETE/INSERT Operations when removing Indexes on WFCTOTAL


DELETE times and records per second:

Record Count   With Indexes (sec)   Rec/sec   No Indexes (sec)   Rec/sec
33,138         60                   552       2                  16,569
415,881        816                  510       15                 27,725
1,295,478      3,133                413       57                 22,728
2,473,664      7,172                345       108                22,904
5,473,291      10,285               532       232                23,592
13,502,150     39,377               343       573                23,564

INSERT times and records per second:

Record Count   With Indexes (sec)   Rec/sec   No Indexes (sec)   Rec/sec
33,138         26                   1,275     1                  33,138
415,881        735                  566       10                 41,588
1,295,478      3,688                351       43                 30,127
2,473,664      6,282                394       90                 27,485
5,473,291      9,668                566       198                27,643
13,502,150     38,690               349       484                27,897

Note the significance of the indexing impact. When inserting 13.5 million records into the fully indexed WFCTOTAL table, the elapsed time was close to 11 hours. When all indexes except the primary key were removed, the elapsed time to INSERT the same 13.5 million records was reduced from 11 hours to 8 minutes. The performance impact on the COPY will become more significant each month for the fully indexed tables as the Target Database grows.

Increase Threshold to process non-historic tables in a single operation (64-bit)


For the COPY operation, non-historic tables are copied to the Target Database in a single operation unless the number of records is greater than the threshold set by the property WrmSetting.Tuning.CopyChunkRowIdThreshold. By default, this property is set to 500000. The reason for this setting was that on 32-bit systems, passing more than 500000 records in one statement could result in Java OutOfMemory errors on the application server. When a non-historic table has more than 500000 records, the chunking approach is used: a Node Table is populated with the primary key and SEQUENCEID values, and the table is copied like the historic tables. In many cases, the non-historic tables are below this threshold. However, for larger customers using Shift Group functionality, the GROUPSHIFT table, which is defined as a non-historic table, can contain more than 20 million rows. Testing for one of the customer databases indicates that the Node Population stage of processing this table alone took just under 2 hours. On a 64-bit application server, this limitation does not exist. The same table can now be populated in one step by passing all records through the application server, creating the SQLLDR and/or BCP files containing the data, and loading the records into the Target Database. To validate this assumption, an experiment was run with Database #3, as defined in Table 16 on page 149 in Appendix D. The first test left the property at the default value of 500000. The second test set the value of WrmSetting.Tuning.CopyChunkRowIdThreshold to 300000000. The elapsed times to process the GROUPSHIFT table in these experiments are shown in Table 13:


Table 13: GROUPSHIFT Processing with WrmSetting.Tuning.CopyChunkRowIdThreshold Setting Changes


GROUPSHIFT processing elapsed time (21 million records), in hours:

Task                             Default Setting   Best Practice Setting
Node Population and Transfer     1.7               0
Extract and Load of Base Table   2.6               2.2
Total Time                       4.3               2.2

Notice that with the higher setting for WrmSetting.Tuning.CopyChunkRowIdThreshold, the 1.7 hours of Node Population and Transfer time is eliminated from the COPY job. As a result, the overall elapsed time for the COPY job was reduced from 7.9 hours with the default property setting to 5.9 hours with the optimized setting.


Chapter 11

Workforce Integration Manager and Workforce Central Import Best Practices

Workforce Integration Manager is a next-generation data integration tool that interfaces Workforce Central products with other business-critical applications. Unlike Workforce Connect, which is an entirely Windows client-side utility, Workforce Integration Manager allows users to run and maintain interfaces from their Workforce Central web interface. Interface programmers use the client-side Interface Designer (which continues to use the Workforce Connect look-and-feel) to create and update data integration interfaces.

Note: Workforce Central 6.2 continues to support Workforce Connect 6.0. For Workforce Connect performance recommendations, refer to the v6.0 version of Best Practices for Optimal Workforce Central Performance.

Based on the performance tests run against Workforce Integration Manager, the following best practices will help you achieve optimal performance for Workforce Integration Manager:

Use multi-server import
Kronos recommends that you use parallel XML import (using two or more Workforce Central servers) for large-volume imports. For large data sets, using more Workforce Central servers in parallel shortens the total import time.

Incremental (delta) data import
For daily and regularly scheduled imports of data into Workforce Central, design the imports so that only incremental or changed data is imported regularly. This practice helps to minimize the volume of data imported, resulting in faster total import times.


Scheduling memory- or CPU-intensive links
When possible, avoid running links that consume large amounts of CPU or memory during peak processing periods. If it is necessary to run links during peak processing periods (such as pay period end or schedule maintenance processing), run the interfaces on a server that has no interactive users on it.

Database vs. memory sort
Sorting in memory has fairly low CPU, time, and memory overhead. However, if out-of-memory conditions occur when using a SQL query, Workforce Activities data, Workforce Activities changed data, or the Workforce Timekeeper API as a source, consider reducing the knx.engine.MaxInMemorySortSize system setting to fewer records to push the interfaces to sort in the database instead. Reducing the value of this system setting does not have a significant impact on the memory utilization of other data source types. To access this system setting:
a. In the Workforce Timekeeper workspace, select Setup > System Configuration > System Settings.
b. Select the Data Integration tab.
c. Locate the knx.engine.MaxInMemorySortSize setting and change it accordingly.
d. Click Save.

Break up large integrations into multiple steps
As seen in the scalability tests, large workloads can be processed much more quickly when they are executed in smaller parallel chunks. Smaller data sets may also alleviate out-of-memory conditions. The data should be broken up outside the WIM product. For example, the data could be segmented in the source application by:
- Breaking a single large input file into multiple smaller input files using a shell script, or
- Adding an additional where clause in the case of a database source.

Consider the impact of automatically triggered integrations in large updates
Large updates should be done at times when the system usage will not impact other users, or automatic triggering should be temporarily disabled during such an update.


For example, if a large-scale reorganization of the company occurs in an HR or ERP system with integrations configured to start automatically for each record updated:
a. Disable the integration trigger in the source application's database.
b. Update the company organization and job assignments.
c. Re-import the organizational structure and job assignment data.
d. Re-enable the integration trigger in the source database.

Disable virus scanners in certain circumstances
Disable on-access virus scanning on the application server in directories mapped to mapped folders; scanning input or output files during link execution can have a significant impact on both CPU consumption and link execution time. Also disable the client on-access virus scanner when opening large lookup tables or output files in the browser.

Disqualify records as early in link processing as possible
The earlier a record is disqualified, the sooner the Integration Manager engine can begin processing the next record:
- If possible, filter input data before passing it into the Integration Manager engine. In the case of a SQL query, try to filter out records in the where clause.
- Disqualification should take place in a variable. Variables are processed in a top-down manner, so disqualifications should take place in the topmost variables in the Interface Designer.
- Disqualification can take place in the output fields.

Only input necessary data
Try to input only the required data into the Integration Manager engine. Avoid processing data sets that include unnecessary columns or rows; processing these additional records increases the time and memory footprint required to execute the interface. In SQL query sources, retrieve only the columns and rows needed for processing. In text files, try to preprocess the files to contain only the necessary data.


Create a record retention policy to meet the needs of your business
Create a record retention policy that removes interface results at an interval that supports system usability and the amount of history likely to be reviewed to support interface analysis. Although the number of interface results (Interface Results Summary reports) stored in Workforce Central should not impact system performance, this is a good practice to establish.

Minimize memory usage
Integrations take place in the shared Workforce Central Java memory space. This memory has limitations beyond the amount of physical memory on the system. Using unnecessary amounts of memory may cause link execution to fail or may interrupt other users of the same Workforce Central instance. Several areas to keep in mind with memory usage in Workforce Integration Manager include the following:
- Link/interface size: Remove any unused variables, fields, and lookup tables.
- Data size: Only input necessary data. Avoid unnecessary rows or columns in input data.
- Lookup table size: Remove any unused tables, columns, and rows.
- Reduce avoidable errors: If hundreds of errors are generated during every execution, try to isolate the root cause and eliminate it.
- Do not run more interfaces than memory allows: Running more interfaces concurrently speeds execution, but if memory becomes tight, consider installing a second instance of Workforce Integration Manager rather than running more threads in a single instance.

Optimize lookup tables
Lookup tables with no wildcards, or with only an asterisk (*) on the last line, perform faster than tables with many wildcards. In tables where more wildcards are necessary, try to put the table in order of most common usage.


Use XML import
Kronos recommends that you use XML import for all data imports. XML imports are faster than table imports.

Insert vs. Update
When importing data using XML API methods, it is important to use the correct import type: Insert for new records to be added, or Update for updating data that already exists in the database. Note that the Update import type also allows inserting new records of data; however, the best performance is achieved by using the correct import type.

Manage the thread pool
The default Workforce Integration Manager thread pool is configured to run up to five integration interfaces concurrently. If running five interfaces concurrently exhausts the Java heap space or provides unacceptably slow performance to interactive users of the instance, lower this number. To process more interfaces in parallel, you can increase this value. For maximum performance of individual interfaces, Kronos suggests setting this value no higher than the number of processing cores available to the Workforce Central instance. To change the thread pool size:
a. Select Setup > System Configuration > System Settings.
b. Select the Data Integration tab.
c. Locate the knx.engine.ThreadPoolSize property and change the value to the desired number.
d. Click Save.

Use a dedicated WIM server
If any of the following conditions exist, use a dedicated WIM server:
- The number of employees in the installation is more than 10,000.
- Large imports/exports are carried out during interactive usage of the system. This applies to installations of all sizes.
- Memory- or CPU-intensive links must be run during pay period end or shift change usage periods.

Kronos recommends the use of 64-bit WFC to provide better scalability.


An existing server can be allocated for this purpose. More dedicated servers can be added if circumstances dictate, such as when multiple long import/export jobs run concurrently during peak interactive usage periods. Do not combine dedicated servers; use separate dedicated WDM, report, and WIM servers as needed.


Appendix A

Performance Tuning Oracle Indexes

This appendix provides background information and recommendations about tuning Oracle indexes for optimal performance, including:
- Background information about the internal management of Oracle indexes: what occurs in terms of growth and storage inside the index during the normal life cycle of data change.
- Guidance on assessing the quality of an index and determining whether any maintenance should be carried out to increase performance within the database.

This appendix contains the following sections:
- Overview on page 130
- Block splitting on page 131
- Sparse population on page 132
- Empty blocks on page 132
- Height/Blevel on page 132
- Indexes in the Kronos environment on page 133
- Workforce Central index maintenance recommendations on page 137


Overview
Index maintenance within any database environment has long been a subject of debate. Some of the common topics that are debated are:
- Whether you rarely need to rebuild indexes or should rebuild them regularly.
- Whether indexes whose values grow sequentially to the right should be rebuilt regularly.
- Whether indexes deleted from the left should be rebuilt regularly.
- Whether index blocks whose values are deleted are reused within the index.
- Whether indexes of a certain height/Blevel need to be rebuilt.

The following sections provide some guidance on these topics with respect to the Kronos database environment. The primary rule to follow for index maintenance in any environment is to understand how an application uses an index before you rebuild the index. Rules such as rebuilding the index when the percentage of deleted entries within the index grows above a certain percentage, or when the Blevel of the index grows beyond 4, are not entirely accurate. Deleted records are initially only marked for deletion, and an update consists of a deletion followed by an insertion. The space held by deleted entries in the index is typically cleaned out and reused when:
- The block is completely empty and is placed back on the free list.
- Any record is inserted into the block; this cleans out the deleted entries within that block.
- Any DML occurs on the block; this cleans out the deleted entries already present within that block.

Key points to understand are covered in:
- Block splitting on page 131
- Sparse population on page 132
- Empty blocks on page 132
- Height/Blevel on page 132


Block splitting
When a block in the middle section of an index's logical structure fills, it can split in either of the following ways:
- 50:50 split: A new adjacent block is created and the contents of the original block are split 50:50 between the two blocks.
- 90:10 split: When the block splits, the inner block remains mostly full and the new outer (right) block contains only a small amount of row data.
The 90:10 split method allows the index to grow with its data very compacted within the index blocks, as opposed to the 50:50 split, which leaves unused space in each block after the split. The 50:50 split assumes that data may be inserted in any location, so it leaves space available in both blocks. With Oracle 10g, when the right-most leaf block fills (as in an index whose new values increase sequentially), a 90:10 split occurs. The choice between the 50:50 and 90:10 methods has a significant impact on the density of storage within the index blocks and on the performance of index range scans.

Caution: Like the sparse population condition (described next), the 50:50 split can increase the actual number of blocks required for a range scan. Although rebuilding to compress density can reduce the range scan cost, there is an important caveat: the 50:50 split provides important space availability for random inserts due to table inserts or updates. Without that available space, the index must undergo a much higher rate of block splits. Block splits are a very expensive operation and can easily outweigh the cost of reading a few more blocks during a scan.


Sparse population
The presence of sparse data distribution over index blocks can have a significant impact on index range scans. Denser packing can reduce this but, as explained in Block splitting on page 131, at the cost of increased split operations.

Empty blocks
The presence of large numbers of empty blocks within an index due to delayed block cleanout can be a serious problem because it can impact index range scan costs, depending on the optimizer statistics collection method. Large numbers of empty blocks retained on the end of an index can severely skew the cost of an index range scan. In addition, large numbers of empty blocks within the index can cause similar problems. There seems to be a trade-off between the severe cost of a transaction cleaning its own blocks out and the impact on range scan estimates.

Height/Blevel
Height/Blevel is one of the least useful of the more popular metrics for examining index storage health. Leaf blocks greatly outnumber branch blocks, so a very specific usage profile is required before height/Blevel causes resource pressure. Also, rebuilding solely to reduce height/Blevel requires a previous reduction in rows and can result in further block-splitting costs if growth returns. Using the height/Blevel of an index as the sole guideline for rebuilding it is not a recommended practice. As an index grows and expands, its height/Blevel may grow and expand with it. Keep in mind that it takes time and resources for an index to grow to a certain height/Blevel, and it usually does so for good reasons.


Indexes in the Kronos environment


Before you rebuild any index, a thorough analysis should always be done first. Rebuilding an index does not always improve performance; in some cases, it can actually degrade performance. For example, some indexes can reach a state where certain metrics might suggest a rebuild, but the index is in fact at a stable state where the different performance aspects are in balance. It takes an index time and overhead to reach that balanced state, and when rebuilt, the index immediately begins to return to its pre-rebuilt state, which can be a costly process. When an index is rebuilt, the data is tightly packed and unused space is reduced. If an index receives random inserts, as opposed to monotonically increasing inserts, block splits might recur and impact performance of the database. Again, it is critical to understand the application and how its DML activities relate to the data inside the index. The majority of indexes within Oracle remain balanced and compact automatically due to the following features that Oracle has implemented:
- 50:50 block splits and self-balancing mechanisms
- The 90:10 block split mechanism for monotonically increasing values
- Reusability of deleted row space within an index
- Reusability of emptied blocks for subsequent index block splits

With this in mind, there may still be a need to rebuild some indexes, depending on how the application's DML activities affect them. This section provides an overview of the different types of indexes that exist within the Workforce Central database and how different types of DML can affect them. Indexes within the Workforce Central database can be separated into the following types:
- Monotonically increasing indexes on page 134
- Indexes with no pattern of inserts on page 134
- Primary key indexes on page 134
- Date datatype indexes on page 135


Monotonically increasing indexes


Monotonically increasing indexes grow from left to right and are present with varying data types. Inserts are always at one end of the index; they are not sporadic and they follow a pattern. Any index with this insert behavior is very unlikely to reuse unused space. These indexes are likely candidates for rebuilds, unless bulk deletes are being performed; in that case, the unused space results in emptied blocks that will be reused.

Indexes with no pattern of inserts


Indexes with no pattern of inserts come in varying data types. Inserts against these indexes do not monotonically increase; inserts and deletes are random and can happen anywhere in the index. This means that these indexes require free space in each block to handle the random deletions and insertions; having that free space prevents block splits from occurring. These indexes exist within the Workforce Central database, but there is a constraint on just how random the inserts can be. For example, edits and updates within Workforce Central typically occur for the current and previous pay period. Inserts and updates can still occur for periods before and after that; however, if a period is locked, inserts and updates for those specific records are prevented. This reduces the randomness of inserts and updates, and reduces the negative impact of unnecessarily rebuilding an index of this type.

Primary key indexes


Primary key indexes are monotonically increasing indexes. They grow from left to right and contain only unique values. This means that when records are deleted from these indexes, the unused space is unlikely to be reused. This still depends on how deletes are performed against the index: if deletes are done in bulk, as in the MYWTKEMPLOYEE table, the fourth feature listed above (reusability of emptied blocks for subsequent index splits) will allow the space to be reused.


If deletes are sporadic and do not result in emptied blocks, the space within these indexes will not be reused, because any new entries are monotonically increasing.

For example, transactional tables such as WFCTOTAL, TIMESHEETITEM, PUNCHEVENT, ACCRUALTRAN, ACCRUALEDIT, PUNCHEVENTTRC, WFCEXCEPTION, and others are candidates for rebuilding their primary key indexes. With the exception of Workforce Record Manager purges, these tables do not have bulk deletes performed against them. Tables such as PERSONIMPORT, PERSONMANYIMPORT, MYWTKEMPLOYEE, and others are not candidates for index rebuilds, because they do have bulk deletes performed against them that result in emptied blocks.

Date datatype indexes


The Workforce Central database contains several date datatype indexes. These indexes are unique in that they appear to be very non-selective. They are used in range scan operations, which means that unused space can be critical to their performance. These indexes fall into two distinct categories, current date indexes and historical date indexes:

Current date indexes
These indexes reside on columns that store only the current date of a particular transaction. For example, assume that there is an index on the ENTEREDONDTM column of the TIMESHEETITEM table. This column stores the entered-on date of the transaction, which is always the date when the transaction occurred. The key point is that there will be several transactions on a given day and, therefore, several records with the same date. Inserts into this index increase monotonically as the days go by. Deletes against this table, with the exception of Record Manager purges, are not typically performed in bulk, so the likelihood of subsequent inserts reusing the unused space is low. Therefore, these indexes should be monitored as candidates for rebuilds.


Historical date indexes
These indexes contain date information that does not increase monotonically. Examples of these date columns are apply dates: a transaction can be deleted and reinserted at any point, and it can affect historical records. Rebuilding indexes of this nature can be detrimental to the overall performance of the database, because these indexes work best with free space in their blocks for updates and future inserts.


Workforce Central index maintenance recommendations


Given the varying nature of Workforce Central application use cases and customizations, it is not appropriate to list specific indexes as the official rebuild candidates. In all likelihood, the list will differ at each site and change over time. It is far more useful to outline a process with which the DBA can perform general maintenance and then monitor and identify potential rebuild candidates. Recommendations are described in the following topics:
- Index coalesce on page 137
- Rebuilding an index on page 137

Index coalesce
The index coalesce operation should become a regular part of index maintenance. It is very effective at reclaiming empty blocks that remain attached to the logical index structure, minimizing excessive sparseness.
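For reference, a coalesce is a single DDL statement per index. A minimal sketch, using a Workforce Central index name purely for illustration:

-- Coalesce merges the contents of adjacent, sparsely populated leaf
-- blocks in place; unlike a rebuild, it does not allocate a new segment
-- and does not take the index offline.
ALTER INDEX X1_WFCTOTAL COALESCE;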

Rebuilding an index
1. Using the index analysis script (see Script for index analysis below), collect statistical information on the current storage structure for the Workforce Central schema. The script carries out analyze validate structure commands on the indexes within a schema, collects the data into a work table, and spools out the contents. The script can be modified to insert the results into a table, or to output HTML for import into Excel, for easier analysis. Important: To collect index storage statistics, the analyze command must be run in offline mode. While it is running on an index, all DML against that segment is blocked.


Script for index analysis
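The shipped analysis script is not reproduced here. The following is a minimal sketch of such a script, assuming it is run as the Workforce Central schema owner; the INDEX_ANALYSIS work table name and the column selection are illustrative:

-- Validate each index in the current schema and capture the resulting
-- INDEX_STATS row into a work table. ANALYZE ... VALIDATE STRUCTURE is an
-- offline operation: DML against the index is blocked while it runs.
-- (Partitioned indexes, which need per-partition handling, are skipped.)
CREATE TABLE index_analysis AS
    SELECT name, height, blocks, lf_rows, del_lf_rows, pct_used
    FROM   index_stats
    WHERE  1 = 0;   -- empty copy of the INDEX_STATS columns of interest

BEGIN
    FOR ix IN (SELECT index_name FROM user_indexes
               WHERE partitioned = 'NO') LOOP
        EXECUTE IMMEDIATE
            'ANALYZE INDEX ' || ix.index_name || ' VALIDATE STRUCTURE';
        -- INDEX_STATS holds exactly one row: the index just analyzed
        INSERT INTO index_analysis
            SELECT name, height, blocks, lf_rows, del_lf_rows, pct_used
            FROM   index_stats;
    END LOOP;
    COMMIT;
END;
/

SPOOL index_analysis.lst
SELECT * FROM index_analysis ORDER BY pct_used;
SPOOL OFF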


2. Using the results of the script, identify indexes that potentially need to be rebuilt. The following are key data points to focus on:
- The PCT_USED space of an index provides a good starting point in the analysis. Note that PCT_USED is relative to the actual number of rows in the index; if an index contains only one row, its PCT_USED will be very low. It may be more reasonable to focus on indexes that use more than a certain number of blocks.
- The DEL_LF_ROWS column is the second item of interest. This column reports how many leaf row entries have been deleted. It should be used in conjunction with the PCT_USED column, not on its own.
After you locate indexes with a low PCT_USED and/or a high DEL_LF_ROWS (a query such as the sketch below can help), categorize these indexes according to the list in Indexes in the Kronos environment on page 115 (for example, primary key, date, and so forth), and make sure that their insert potential and method are understood. Assess the deletion method and potential in addition to the insert potential and method.
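A sketch of such a query, assuming the results were collected into the hypothetical INDEX_ANALYSIS work table from step 1 (the thresholds shown are illustrative and should be tuned per site):

-- Flag sparsely packed indexes of non-trivial size, and indexes with a
-- high proportion of deleted leaf rows.
SELECT name, blocks, pct_used, lf_rows, del_lf_rows
FROM   index_analysis
WHERE  (pct_used < 50 AND blocks > 128)
   OR  del_lf_rows / NULLIF(lf_rows, 0) > 0.2
ORDER  BY pct_used;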

3. Rebuild the indexes identified in step 2 and monitor the system for performance improvements.

4. After identifying and rebuilding indexes, always retain records of which indexes were rebuilt and of the storage data used to determine their candidacy. If a given index continually resurfaces as needing to be rebuilt, sparse population may be the normal state of that index, and rebuilds only force it to immediately undergo expensive block splits as it quickly returns to that state. In addition, indexes that do benefit from rebuilding may again reach a state requiring another rebuild, and the analysis time can be reduced if records of the rebuilds are maintained.
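The rebuild in step 3 is likewise one DDL statement per index. A minimal sketch (the index name is illustrative; the ONLINE keyword, an Enterprise Edition feature, keeps the index available for DML during the rebuild):

-- Rebuild recreates the index in freshly allocated, densely packed blocks.
ALTER INDEX X2_PUNCHEVENT REBUILD ONLINE;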


Appendix B

Recommendations for using Workforce Central with VMware

Based on a series of tests completed in the Kronos performance lab, this appendix outlines recommendations for configuring and deploying Workforce Central in a VMware environment. Important: The recommendations in this appendix are based on the assumption that the hardware used is dedicated to the Kronos suite of applications. If your production systems have non-Kronos applications running on the same physical hardware, you must consider peak usage scenarios and overall system utilization. Also note that these are general recommendations, not rules, and you should understand the need for, and the ramifications of, deviating from them. This appendix contains the following topics:
- General recommendation on page 142
- Hardware and software requirements on page 142
- VM configuration on page 143


General recommendation
Although VMware has a multitude of options for tuning workloads, Kronos recommends that you keep the VMware configuration as simple as possible. Performance testing of Workforce Central in a VMware environment obtained optimal results when running VMware with an out-of-the-box configuration, taking the default settings for system parameters and tuning options.

Hardware and software requirements


Performance testing of Workforce Central in a VMware environment used the following hardware and software.

Hardware
The hardware listed below represents the minimum processor specifications to be used in the sizing process for optimal performance of Workforce Central on VMware. The total hardware requirements depend on individual customer use cases and are determined by your local Kronos Representative through a hardware-sizing process.
- Dual-core systems based on the Intel 5130 processor running at 2 GHz or greater
- Quad-core systems based on the Intel 5335 processor running at 2 GHz or greater
- Dual-core systems based on the AMD 2200 series processor running at 2.4 GHz
- All AMD quad-core processor systems
- 4 GB of memory per processor core (depending on the number and size of the virtual machines)
- 1 Gb Ethernet adapter per system

If you use vSphere 4.0, Kronos recommends that you implement VMware's hardware best practices for vSphere. To do this, you must modify the following BIOS settings:
- Enable Intel or AMD Virtualization Technology (VT)


- Enable hardware Data Execution Protection (DEP)
- Enable the Intel XD bit

Note: Check with your hardware vendor for how to access your machine's BIOS settings. For example, to enable hardware Virtualization Technology and Execute Disable on a Dell BIOS, press F2 as the machine boots, and then select CPU Information.

Software
- vSphere 4.0
- VMware ESX 4.0 and 3.5, VMware ESXi 4.0 and 3.5

VM configuration
Recommendations for configuring the virtual machine (VM) for use with Workforce Central are outlined in the following topics:
- Memory allocation on page 143
- Virtual CPU allocation on page 144
- System resource allocation on page 144
- Monitoring virtual machine usage on page 144

Memory allocation
Configure VMware virtual machines with between 2 and 4 GB of memory, depending on whether the VM is used for the Workforce Central reporting functionality:
- If the instance of Workforce Central running within the VM is used to generate reports, consider using 4 GB of memory for the VM.
- If the instance of Workforce Central running within the VM is not used to generate reports, 2 GB of memory for the VM is adequate.


Virtual CPU allocation


Configure VMware virtual machines with up to two virtual CPUs per VM. Using more virtual CPUs will provide only a minimal increase in throughput because most Workforce Central applications become memory constrained before they can take advantage of the additional CPU resources. In general, dual-virtual-CPU VMs provide a good balance between response time and system resource usage.

System resource allocation


Do not over-commit system resources during peak-period processing. Overallocating resources leads to poor system response times and reduced throughput.
- The total amount of memory configured for all active virtual machines should not exceed the physical memory in the system. Refer to VMware's Performance Tuning Best Practices for ESX Server 3 for information on calculating VMware's memory overhead.
- The total number of virtual CPUs allocated for all active virtual machines should not exceed the number of cores available in the system. The general recommendation is one core per virtual CPU.

Monitoring virtual machine usage


Because of the way that time and timer interrupts are handled by VMware, many of the most utilized counters in the Windows Performance Monitor are not accurate in a virtual machine. To monitor the performance of virtual machines, you should use either the VMware Virtual Infrastructure Client or the esxtop utility.


Appendix C

Recommendations for using Workforce Central with Hyper-V

Best practices
- Use stand-alone Hyper-V with virtualization. Refer to Appendix B, Recommendations for using Workforce Central with VMware, on page 141 for CPU requirements.
- Use Windows 2008 R2 as the guest OS.
- Memory (RAM) requirements are the same as the memory requirements for the physical server; however, Kronos recommends that you provide at least 4 GB of RAM per virtual machine.
- Close the Hyper-V Manager console when you are not using it, because it consumes CPU resources.


Appendix D

Additional information Workforce Record Manager

The following topics contain information specific to optimizing Workforce Record Manager. Use this information in conjunction with the recommendations in Chapter 10, Workforce Record Manager Best Practices, on page 113.
- Database definitions on page 147
- Index List for the Property WrmSetting.Option.IndexList on page 150
- Oracle Tuning on page 151
- SQL Server Tuning on page 152

Database definitions
For testing the 6.2 release of Workforce Record Manager, five customer databases (described in Table 14 through Table 18) were selected to represent different data distributions, employee counts, database platforms, application use, and accumulations of historical data. The key criteria for each database are as follows:
- Number of active and configured employees: Because WRM processes historical data, it is important to know the total employee count as well as the number of active employees.
- Number of years of historical data: The amount of historical data affects performance and size for both the source and target databases.
- Size of the source database
- Number of punches processed per day: This metric affects the amount of PUNCHEVENT data processed for each COPY and PURGE job.


- Number of shifts processed per day: This metric primarily affects the amount of SHIFTASSIGNMNT data processed for each COPY and PURGE job.
- Applications configured: The Workforce Central Suite components configured for a customer affect the number and type of records processed each month.

Table 14: Database #1 Specification


Metric                        Value
Database Platform             Oracle 11.1.0.7
Source Database Size (GB)     430
Configured Employees          879,410
Years of Historical Data      3.3
Active Employees              124,097
Punches Per Day               126,613
Shifts Per Day                94,333
Application Components        Platform, WTK, WFS, WFF

Table 15: Database #2 Specification


Metric                        Value
Database Platform             SQL Server 2008
Source Database Size (GB)     201
Configured Employees          72,312
Years of Historical Data      8.2
Active Employees              18,368
Punches Per Day               44,667
Shifts Per Day                20,000
Application Components        Platform, WTK, WFS, WIM, WDM


Table 16: Database #3 Specification


Metric                        Value
Database Platform             Oracle 11.1.0.7
Source Database Size (GB)     157
Configured Employees          91,780
Years of Historical Data      8.2
Active Employees              18,192
Punches Per Day               48,167
Shifts Per Day                31,833
Application Components        Platform, WTK, WFS, WIM, WDM

Table 17: Database #4 Specification


Metric                        Value
Database Platform             Oracle 11.1.0.7
Source Database Size (GB)     60
Configured Employees          46,821
Years of Historical Data      1.5
Active Employees              17,632
Punches Per Day               55,167
Shifts Per Day                47,333
Application Components        Platform, WTK, WFS

Table 18: Database #5 Specification


Metric                        Value
Database Platform             SQL Server 2008
Source Database Size (GB)     338
Configured Employees          445,174
Years of Historical Data      1.5
Active Employees              312,053
Punches Per Day               540,167
Shifts Per Day                0
Application Components        Platform, WTK

Index List for the Property WrmSetting.Option.IndexList


The list in Figure 1 represents the recommended indexes to populate for the application property WrmSetting.Option.IndexList. Note that this set of index recommendations applies to both Oracle and SQL Server. To apply these indexes, copy the entire text string and paste it into the file custom_WRM_OptionsAndTuning.properties, located in the directory \Kronos\wfc\applications\wrm\properties. After adding this information to the file, restart the application server so that the new property setting takes effect.
Figure 1 - Index List for WrmSetting.Option.IndexList

WrmSetting.Option.IndexList=X1_TIMESHEETITEM,X2_TIMESHEETITEM,X3_TIMESHEETITEM,X4_TIMESHEETITEM,X5_TIMESHEETITEM,X6_TIMESHEETITEM,X7_TIMESHEETITEM,X8_TIMESHEETITEM,X9_TIMESHEETITEM,XA_TIMESHEETITEM,XB_TIMESHEETITEM,XC_TIMESHEETITEM,XD_TIMESHEETITEM,XE_TIMESHEETITEM,X10_TIMESHEETITEM,X1_PUNCHEVENT,X2_PUNCHEVENT,X3_PUNCHEVENT,X4_PUNCHEVENT,X5_PUNCHEVENT,X1_WFCTOTAL,X2_WFCTOTAL,X3_WFCTOTAL,X4_WFCTOTAL,X5_WFCTOTAL,X6_WFCTOTAL,X7_WFCTOTAL,X1_ACCRUALTRAN,X2_ACCRUALTRAN,X3_ACCRUALTRAN,X4_ACCRUALTRAN,X5_ACCRUALTRAN,X6_ACCRUALTRAN,X7_ACCRUALTRAN,X8_ACCRUALTRAN,X9_ACCRUALTRAN,X1_AUDITITEMSTR,X2_AUDITITEMSTR,X1_SCHEDULEDTOTAL,X2_SCHEDULEDTOTAL,X3_SCHEDULEDTOTAL,X4_SCHEDULEDTOTAL,X1_AVLPATTRNASGN,X2_AVLPATTRNASGN,X3_AVLPATTRNASGN,X4_AVLPATTRNASGN,X5_AVLPATTRNASGN,X1_SHIFTCODE,X2_SHIFTCODE,XU1_SHIFTCODE,X1_SHFTSEGORGTRAN,X2_SHFTSEGORGTRAN,X3_SHFTSEGORGTRAN,X1_SHIFTSEGMENT,X2_SHIFTSEGMENT,X3_SHIFTSEGMENT,X4_SHIFTSEGMENT,X1_SHIFTASSIGNMNT,X2_SHIFTASSIGNMNT,X3_SHIFTASSIGNMNT,X4_SHIFTASSIGNMNT,X5_SHIFTASSIGNMNT,X6_SHIFTASSIGNMNT,X1_WORKEDSHIFT,X2_WORKEDSHIFT,X3_WORKEDSHIFT,X4_WORKEDSHIFT,X5_WORKEDSHIFT,X1_GROUPSHIFT,X2_GROUPSHIFT,X1_MGRAPPROVAL,X2_MGRAPPROVAL,X1_WFCEXCEPTION,X2_WFCEXCEPTION,X3_WFCEXCEPTION,X4_WFCEXCEPTION,X1_SHIFTAPPLYDATE,X2_SHIFTAPPLYDATE

Oracle Tuning
For Oracle platforms, you should be aware of several settings that help achieve optimal performance with the WRM Archive process. The general theme of the settings is to give Oracle significant memory, minimize redo log switches, and take advantage of asynchronous I/O on UNIX servers. The specific settings are as follows:

Set the redo log size to at least 5 gigabytes per redo log file: WRM COPY and PURGE both generate significant write activity to the database, which in turn generates significant redo log activity, and frequent log switches if the redo log files are too small. A log switch can delay write activity by up to a few seconds, so these switches should be minimized. Performance data showed that a redo log size of 5 gigabytes kept the number of log switches down to approximately 4 per hour, which resulted in minimal performance impact.

Use asynchronous I/O: When using Oracle on a regular file system, there is some overhead in writing to the file system. It was once recommended that raw devices be used to overcome this overhead; an alternative that yielded positive performance results is Oracle's asynchronous I/O functionality. To take advantage of asynchronous I/O, the following Oracle parameter settings are necessary:

disk_asynch_io = TRUE
filesystemio_options = asynch

Use multiple DB Writer processes: Due to the significant amount of write activity that occurs during a COPY or PURGE, increasing the number of Oracle DB Writer processes helps decrease the time to run a COPY or PURGE job. To increase the number of DB Writer processes, the following parameter setting is recommended:


DB_WRITER_PROCESSES=4

Disable the Oracle recycle bin: Oracle 10g introduced a concept similar to the Windows Recycle Bin: dropped objects are kept in the database under a system-generated name so that they can be recovered later, much as files are recovered through the Windows Recycle Bin. The problem with this functionality is that the size of the database is not reduced as quickly as desired, because the deleted data is retained. To disable this functionality, the following Oracle parameter setting is necessary:

recyclebin = off

Use a large System Global Area (SGA): Due to the significant amount of I/O generated by the WRM Archive, it is critical to maximize the use of Oracle memory in order to reduce physical I/O and thereby improve the speed of the Archive process. Oracle should therefore be configured to use as much physical memory as possible. In Oracle 11g, two parameters essentially manage Oracle memory; it is recommended that they be set to 75% of the available physical memory if possible. For example, on an Oracle server with 8 gigabytes of RAM, the parameter settings would be as follows:

MEMORY_TARGET=6G
MEMORY_MAX_TARGET=6G
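A sketch of applying these settings with ALTER SYSTEM, assuming an spfile-managed instance (several of these parameters are static, so the simplest approach is to write them all to the spfile and restart):

ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE = SPFILE;
ALTER SYSTEM SET filesystemio_options = ASYNCH SCOPE = SPFILE;
ALTER SYSTEM SET db_writer_processes = 4 SCOPE = SPFILE;
ALTER SYSTEM SET recyclebin = OFF SCOPE = SPFILE;
ALTER SYSTEM SET memory_max_target = 6G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target = 6G SCOPE = SPFILE;
-- Restart the instance (for example, SHUTDOWN IMMEDIATE then STARTUP in
-- SQL*Plus) so that the static parameters take effect.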

SQL Server Tuning


On SQL Server, the general themes to optimize performance are similar to Oracle. However, with SQL Server, it is also critical to reduce locking and blocking activity, especially since the COPY and PURGE jobs manipulate major transaction tables and run threads concurrently. The following are tuning recommendations for SQL Server to achieve optimal performance:


Use Read Committed Snapshot Isolation (RCSI): By default, SQL Server places shared locks on data when reading from tables. These shared locks, if held long enough, can block other processes from writing to that table, causing blocking and other undesirable behavior. SQL Server 2005 introduced RCSI, which invokes a locking strategy similar to Oracle's, in which SELECT statements do not lock data during a read operation. Testing has shown that this approach significantly reduces locking and deadlocking behavior. To use this option, apply the following commands to the database on which the option is to be enabled (note that the READ_COMMITTED_SNAPSHOT option cannot be turned on while other connections are active in the database):

ALTER DATABASE <DB Name> SET ALLOW_SNAPSHOT_ISOLATION ON
GO
ALTER DATABASE <DB Name> SET READ_COMMITTED_SNAPSHOT ON
GO

Set the max degree of parallelism to 1: This setting controls how many parallel threads SQL Server uses to decompose and run a single long-running query. The default of 0 means that the number of threads equals the number of processors on the database server. Performance testing and research indicated that the default of 0 actually introduces excessive locking and blocking behavior, and negatively impacts the execution plans of some queries. To achieve optimal performance for WRM, it is recommended that this parameter be set to 1, which disables the parallel decomposition of individual queries. It does NOT hinder the ability to execute multiple queries concurrently. To change this setting, execute the following as the system administrator:

sp_configure 'allow updates',1
go
reconfigure with override
go
sp_configure 'show advanced options',1
go
reconfigure with override
go
sp_configure 'max degree of parallelism',1
go
reconfigure with override
go
