
It was a truly rewarding period at Kribhco Shyam Fertilizers Limited, where I underwent one month of vocational training on the IBM Application System/400. Being placed in the E.D.P Department was an unforgettable experience: it exposed me to an industrial working environment and to the way an industry actually functions. I would like to place on record my deep gratitude to my training mentor, Mr. Akshay Katiyar of the Electronic Data Processing (E.D.P) Department, for imparting his knowledge to me. I hold in equally high esteem Mr. Kuldeep Balyan, Mr. A.K Sinha, and Mr. S.M Ghufran, who devoted their precious time and extremely valuable knowledge to my training schedule, and I am greatly obliged for their assistance whenever I found myself in doubt. Finally, I sincerely thank everyone for the cooperation, coordination, devotion and support they extended to me.

Yours faithfully,
(Ambuj Kr. Pandey)

Kribhco Shyam Fertilizers Limited (KSFL) is located at Piproula, in close vicinity of Shahjahanpur (the City of Martyrs) in the state of Uttar Pradesh, and is a joint venture of Krishak Bharati Cooperative (KRIBHCO) and Shyam Telecommunications Limited. Mr. Chandra Pal Singh Yadav is the Honourable Chairman of the company, while Mr. Virendra Kumar Kaushik is its Managing Director. It is one of the major fertilizer projects in the country, set up to counter the scarcity of fertilizer in India, and is equipped with the latest technology available to date. The plant is spread over an area of about 777 acres of lush green land, and the whole project is valued at around Rs. 2,200 Crores. Urea is the primary product of the company, with Ammonia as the secondary product. Manufacturing is handled by KSFL while marketing is done by KRIBHCO, which sells the urea under the brand name KRIBHCO UREA. Urea is a fertilizer covered under the Essential Commodities Act, and hence its sale is monitored and guided by the Government of India. The company is powered by a highly skilled and efficient workforce.

Concise Description of K.S.F.L.


Primary Product : Urea (NH2CONH2).
Output          : 7,26,000 MTPA (Urea); 3,34,000 MTPA (Ammonia); Rs. 800 Crores/Year.
Land            : Factory - 609 Acres; Township - 167 Acres.

Commercial Production Started on 28/12/1995.

Plant Capacity
Output:
  Ammonia Plant          : 1350 MTPD.
  Urea Plant             : 2200 MTPD (2 Streams).
  Captive Power Plant    : 20 MW + Stand-by (20 MW).
  Steam Generation Plant : 200 TPH + Stand-by (200 TPH).
  D.M Plant              : 11040 CMPD.
  Bagging Plant          : 2880 MTPD (6 Lines) + 2 Stand-by Lines.
Input:
  Fuel Naphtha           : 291.0 MTPD.
  Feed & Fuel Gas        : 1.42 MstdCMPD.
  Water                  : 1450 CMPH.

Prime Consultants:
  Project & Development India Limited.
  Toyo Engineering India Limited (for Ammonia Storage).

Process Licensors:
  M/S Haldor Topsoe, Denmark, for the Ammonia Process.
  M/S Snamprogetti SPA, Italy, for the Urea Process.
  M/S Giammarco-Vetrocoke for the CO2 Removal Process.

Technical Information About Ammonia Plant:


Plant Capacity             : 1350 MTPD.
Process Licensor           : Haldor Topsoe, Denmark.
Prime Consultant           : P.D.I.L, Baroda.
Stream Days                : 330 Days.
Annual Production Capacity : 445500 MT.
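These figures are mutually consistent: the stated annual capacity is simply the daily plant capacity multiplied by the stream days. A quick check (in Python, purely for illustration, not anything from the plant's systems):

```python
# Annual production capacity = daily plant capacity * stream days per year.
ammonia_mtpd = 1350      # metric tonnes per day
stream_days = 330        # operating days per year

annual_mt = ammonia_mtpd * stream_days
print(annual_mt)  # 445500, matching the stated annual production capacity
```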

Raw Material & Utilities Consumption:
  Natural Gas Feed & Fuel          : 918.5 SM3/MT.
  D.M Water (Make-Up)              : 0.96 M3/MT.
  Raw Water (Make-Up)              : 5051 M3/MT.
  Power                            : 29.94 Kwh/MT.
  Energy Consumption (Feed & Fuel) : 7.78 MKCal/MT.

By-Products:
  Carbon-di-Oxide : 1740 MTPD.
  Export Steam    : 202.4 MTPD.

Ammonia Manufacturing Process:

Natural Gas and Steam are fed to the Primary Reformer; Air is added in the Secondary Reformer; the gas then passes (with H.P Steam) through CO Conversion and CO2 Removal (where CO2 is drawn off), followed by Methanation and Synthesis, and the product Ammonia goes to Ammonia Storage.

[Block Diagram of Ammonia Plant: Natural Gas + Steam -> Primary Reformer -> (+ Air) Secondary Reformer -> (+ H.P Steam) CO Conversion -> CO2 Removal (CO2 off-take) -> Methanation -> Synthesis -> Ammonia Storage -> Ammonia.]

Technical Information About Urea Plant:


Plant Capacity             : 2 X 1100 MTPD.
Process Licensor           : Snamprogetti, Italy.
Prime Consultant           : P.D.I.L, Baroda.
Stream Days                : 330 Days.
Annual Production Capacity : 726000 MT.

Raw Materials & Utilities Consumption:
  Ammonia (100% Conc.)   : 1265 MTPD (0.575 MT/MT).
  Carbon-di-Oxide        : 1650 MTPD (0.75 MT/MT).
  Steam (103 ATA, 515 C) : 2420 MTPD (1.10 MT/MT).
  Raw Water (Make-Up)    : 6600 M3/Day (3.00 M3/MT).
  Power                  : 143000 Kwh/Day (65.00 Kwh/MT).
  Export Condensate      : 0.95 M3/MT.

Urea Manufacturing Process:

Ammonia, CO2 and Steam are fed to Synthesis; the product then passes through Stripping, Decomposition and Concentration, is Prilled and Bagged, and is despatched to market.

[Block Diagram of Urea Plant: Ammonia + CO2 + Steam -> Synthesis -> Stripping -> Decomposition -> Concentration -> Prilling -> Bagging -> To Market.]

Contents:
  Acknowledgment.
  Certificate.
  Company Profile (Preface).
  Objective of Training.
  Acquaintance with IBM's AS/400:
    Introduction.
    System Structure Overview.
    Basic Hardware Structure.
    Machine Interface.
    Low Level System Services.
    Execution Environment Support.
    Organization Of OS/400 Objects.
    Utilities Incorporated In AS/400.
    Work Management.
    AS/400 Security.
    Concluding Remarks For AS/400.
  Conclusion.
  References.

Objective Of Training:
The most pronounced reason for taking industrial training at one of the country's major fertilizer industries, Kribhco Shyam Fertilizers Limited, was to experience a live industrial working environment. In parallel, it was IBM's Application System/400 that drew me to KSFL, for the sake of updating my knowledge. IBM's AS/400 is a family of easy-to-use, business-oriented computers designed for companies. It is the most widely used midrange family of computers in the industry and one of the most successful products IBM has ever manufactured, widely installed in large enterprises at the department level, in small corporations, in government agencies, and in almost every industry segment. As companies integrate e-commerce on a global scale into their fundamental business processes, their prospective customers, established customers and active partners can take advantage of increased revenue and decreased expenses through software globalization; they can also improve communications and increase savings. The AS/400's standard functions, plus its many communications options and supporting software, give users flexibility for various communications environments, and its application programming interface provides capabilities not found in earlier operating systems. The basic architecture of AS/400 systems makes for a very productive program development environment: the built-in database and single-level storage provide high-level structures and consistency, and together with the available programming tools they can increase programmer productivity. The programmer also has the flexibility to choose from a variety of programming languages. Owing to these fruitful features and the worldwide significance of the AS/400, I opted to take my training at Kribhco Shyam Fertilizers Limited on IBM's Application System/400.

Introduction:
IBM's Application System/400 is a general-purpose, mid-range family of computers first introduced in 1988. Many of the basic architectural characteristics of its hardware and operating system originated in the System/38, one of its predecessors. The basic system objectives and requirements underlying the design of the System/38 included:

- Hardware independence for the operating system.
- Enhanced productivity for system and application programmers.
- Optimization of the system for interactive processing.
- Greater integrity and reliability for interactive processing.
- Major usability improvement over predecessor systems.
- Extendibility for the operating system and its applications.
- Leading-edge commercial application support.

The System/38 requirements applied equally well to the development of the AS/400 family of computers. In addition, several major objectives existed for the AS/400 itself: compatibility with System/36, System/38, and Systems Application Architecture; a range of products from the size of a System/36 to double the size of a System/38; improved personal computer affinity via seamless interfaces; and market leadership in communications. AS/400 systems exclusively use the IBM Operating System/400 (OS/400). It is a multi-user operating system that works with the Licensed Internal Code (LIC) instructions to implement the functions that are basic to the AS/400

architecture. OS/400 can perform tasks under the direct control of both the user and an application program. The AS/400 system differs from traditional systems in several ways. It offers more compatibility across the product line, since a single operating system and architecture are used consistently across the entire family. It also offers very high performance compared to the earlier System/3X computers, achieved by a combination of faster processors, extended storage and improved fixed-disk subsystems. The software architecture, too, differs from that of more traditional systems: implementing functions such as security, database and communications in microcode, and providing a one-piece operating system, results in improved efficiency, consistency and simplicity. The AS/400's standard functions, plus its many communications options and supporting software, give users flexibility for various communications environments, and its application programming interface provides capabilities not found in earlier operating systems. The basic architecture of AS/400 systems makes for a very productive program development environment: the built-in database and single-level storage provide high-level structures and consistency, which, along with the programming tools available for the AS/400, can increase programmer productivity. The programmer has the flexibility to choose one of the following programming languages for their application programs:

- BASIC
- C
- CL (command language)
- COBOL (X3.23-1974 and X3.23-1985)
- FORTRAN
- Pascal
- PL/I
- REXX
- RPG II and RPG III

Some of the key AS/400 architectural characteristics that were developed to support these objectives included:

- High-level, abstract machine interface (MI).
- Pervasive late binding.
- Capability-based (object-oriented) operating system (Operating System/400).
- Segment-based virtual addressing (hardware and licensed internal code).
- Relational database management system (RDBMS).
- High level of inherent operating system integrity and reliability.
- Consistent interfaces to lower-level services.
- Wide range of high-function primitives.
- High-function program model (automatic and static storage initialization, exceptions, debug, trace, and so on).
- High degree of fault tolerance and fault isolation in the system software support.
- Major application programming interfaces (APIs) of predecessors fully supported.

In the AS/400, a number of hardware and software design and architectural approaches were used, often in a unique or innovative manner, to provide the benefits of these characteristics without incurring the performance overhead normally associated with them. The hardware design features include tagged storage for pointers, high-function input/output processors (IOPs) to offload processing from the CPU, and high-function microcode primitives and services. The software architectural features include single-level storage management and automatic use of all of main storage as a DASD (Direct-Access Storage Device) cache, high-function MI primitives and services, an object-oriented architecture, a single common code generator producing re-entrant programs, an integrated, natively supported System/36 execution environment, and cooperative processing (involving personal computers). The AS/400 marks a new beginning in the business computing world: this new generation of systems, with advanced technology and advanced applications, serves as a growth platform for the customer to expand in application, size and network complexity.


System Structure Overview


The hardware and licensed internal code implement an instruction set and multiprogramming primitives called the Internal Microprogrammed Interface (IMPI). The licensed internal code portion of the system is implemented using IMPI instructions and contains the traditional operating-system kernel functions (storage management, resource management, authority, low-level Systems Network Architecture [SNA] layers of I/O operations, and so on) as well as the basic object handlers that provide the foundation for the object orientation of the operating system. The licensed internal code implements a higher-level interface known as the MI. This MI instruction set, although giving the appearance of being directly executed, is compiled into IMPI instructions by a licensed internal code component known as the Translator. The operating system (Operating System/400, or OS/400) is implemented on the MI layer and, in concert with the licensed internal code, contains all of the traditional operating-system functions plus many services normally provided as separate products on other systems (communications, RDBMS, automatic configuration, performance data collection, and so on). OS/400 supports a free-format command language (CL), which can be either interpreted or compiled, extensive system displays and menus, and system services in support of both licensed programs (compilers, editors, office, programming workbench, and so on) and the largest inventory of commercial applications in the industry available at this stage in the life cycle of a system. It provides batch and interactive capability for commercial and office applications. Among the software and hardware features of the AS/400 system is its layered machine architecture.


Layered Architecture Of AS/400: Elements & Interfaces.


The user and parts of the operating system are provided a high-level machine interface (MI). Below this, vertical licensed internal code (VLIC) implements the remainder of the operating system functions. VLIC uses internal microprogrammed interface (IMPI) instructions, and horizontal licensed internal code (HLIC) performs the operations specified by the IMPI instructions. This layered architecture allows the underlying hardware and software interfaces and functions to change, to take advantage of technology advances, without affecting the end user.

Many of the high-level functions performed by software in other systems are provided below the IMPI in the AS/400 system. Complex functions, such as dispatching tasks, queuing, and input/output (I/O) operations, are provided at the IMPI by means of hardware and HLIC. These are some of the functions that an operating system must often alter significantly when multiprocessors are introduced into the architecture: the uniprocessor assumption, that only one task at a time runs and alters data structures shared by other tasks, is no longer true for multiprocessors. The unique architecture of the AS/400 system made it an ideal candidate for design as a shared-memory multiprocessor. The layered architecture made it easier to introduce multiprocessing without requiring changes to existing applications, because multiprocessor changes could be made below the MI. In addition, the AS/400 system was already a multitasking system: task-to-task communications are controlled by built-in instructions supported by HLIC, and other functions, such as the task dispatcher, are also written in HLIC. This type of high-level support at the IMPI level made it possible to incorporate a multiprocessor architecture while restricting most of the changes to the hardware and HLIC levels.

One of the major changes made at the IMPI level in order to implement multiprocessing was the redefinition of many instructions as relatively atomic instructions. Only one instruction that operated on shared IMPI data structures was allowed to run at a time; the instruction execution on one processor was suspended by the processor hardware if another instruction of the same


type was already running on another processor. A unique hardware lock was used by the HLIC for each type of relatively atomic instruction. Other hardware changes were also made to support multiprocessors: a common shared bus allowed multiple processors to access main storage; a cache was attached to each processor to reduce the main-storage-access bandwidth needs of the processors; and a mechanism was added to send messages between processors.
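The per-type hardware lock can be pictured, very loosely, in software terms: every "relatively atomic" instruction type gets one lock, so two processors running the same type are serialized. This is only a sketch in Python threads (the instruction names and the shared structure are invented for illustration, not AS/400 primitives):

```python
import threading

# One lock per "relatively atomic" instruction type: a processor executing
# such an instruction blocks any other processor running the same type.
_type_locks = {"enqueue": threading.Lock(), "dequeue": threading.Lock()}

shared_queue = []  # stands in for an IMPI data structure shared by processors

def atomic(instr_type):
    """Decorator: serialize all instructions of the same type."""
    def wrap(fn):
        def run(*args):
            with _type_locks[instr_type]:
                return fn(*args)
        return run
    return wrap

@atomic("enqueue")
def enqueue(item):
    shared_queue.append(item)

@atomic("dequeue")
def dequeue():
    return shared_queue.pop(0) if shared_queue else None

# Many "processors" enqueue concurrently; the type lock keeps them serialized.
threads = [threading.Thread(target=enqueue, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(shared_queue))  # 100
```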



Basic Hardware Structure


The AS/400 family of computers is a system made up of several processors: the main processor, a service processor, one or more storage control processors, one or more local workstation processors, and optional communications processors. The storage control, local workstation, and communications processors offload functions from the main processor. The AS/400 main processor hardware provides control storage, main storage, a set of internal registers, and an address translation unit. The most heavily used parts of the licensed internal code execute in the high-speed control storage, whereas the rest of the licensed internal code executes IMPI instructions in main storage.

The IMPI instruction set provides 16 general-purpose registers, a condition-code register, and an instruction-address register. This instruction set is used by the licensed internal code to implement the MI instruction set; the high-level MI instruction set is not interpreted but is translated by the licensed internal code to the IMPI instruction set before execution. The IMPI instruction set is similar to the System/360-System/370 instruction set, but with many extensions. It provides one-, two-, four-, and six-byte registers with the ability to do arithmetic operations on one-, two-, and four-byte integers, a binary floating-point implementation, and decimal arithmetic on integers of up to 15 digits. The IMPI has a large number of register-immediate and storage-immediate instructions, which provide faster execution than their register-storage and storage-storage counterparts. The IMPI provides instructions that allow a fast implementation of many of the MI instructions, and it also provides many instructions that implement common sequences of more basic instructions: for example, there are test-and-branch instructions which can be used to test a bit and branch depending on its contents.

Low-Level System Services


High-level IMPI instructions:

The IMPI instruction set, made available by the hardware and licensed internal code, includes some functions which, on most machines, would be built from more primitive instructions by the operating system. Because these functions are implemented in AS/400 hardware and licensed internal code, they perform much faster than if built from primitive instructions. They include:

- Task dispatching: the IMPI provides a fast, prioritized, pre-emptive task dispatcher.
- Queuing: the IMPI provides a set of instructions and data structures that allow tasks to communicate via messages. The queuing functions are integrated with the task dispatching functions, so that the receive-message functions place a task in a wait state until an appropriate message is available on the queue. This allows the licensed internal code layer to be implemented as a multitasking, message-passing system.
- Serialization: the IMPI provides instructions that give tasks a very fast serialization mechanism.
- Locking: the IMPI includes a set of instructions for the management of lock conflict. These instructions make available a fast hashing function for accessing symbolic locks and for automatic conflict detection.
- Data compression: the IMPI has a set of instructions that perform SNA and Multileaving Remote Job Entry (MRJE) data compression. These instructions perform the CPU-intensive compression algorithms much faster than the equivalent algorithms built from general-purpose, low-level IMPI instructions.
- Data scanning: the IMPI provides instructions that perform complex operations on character-string data, including scanning for specific characters and trailing-blank truncation.
- Array subscripting: in support of high-level languages, the IMPI provides a set of instructions that compute array element addresses from array indexes.
- Supervisor link: the IMPI provides a set of instructions used to route requests from user programs (MI programs) into the licensed internal code layer. These instructions automatically allocate a save area, save the registers of the process, and route execution to the proper function; a complementary instruction restores the registers, frees the save area, and returns to the user program.
- Implicit instructions: the IMPI provides that any unimplemented instructions are executed as if they were supervisor link instructions, so the licensed internal code can implement complex functions as if they were IMPI instructions.
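The queuing behaviour described above, where a receiving task waits until a message arrives and a send readies it again, can be sketched with an ordinary blocking queue. This is a Python analogy, not AS/400 code; the worker, queue and messages are all invented for illustration:

```python
import queue
import threading

# A task's "receive message" puts it into a wait state until a message
# arrives on its queue, mirroring the IMPI send/receive queue primitives.
msg_queue = queue.Queue()
results = []

def worker():
    while True:
        msg = msg_queue.get()   # blocks: the task waits on the queue
        if msg is None:         # sentinel message: end of work
            break
        results.append(msg.upper())

t = threading.Thread(target=worker)
t.start()
for text in ["send", "receive", "dispatch"]:
    msg_queue.put(text)         # "send message" readies the waiting task
msg_queue.put(None)
t.join()
print(results)  # ['SEND', 'RECEIVE', 'DISPATCH']
```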


The IMPI also provides an attribute bit for each quadword (16 bytes on a 16-byte boundary) of main storage. This bit is not addressable by the normal IMPI instructions used to address storage. The bit specifically identifies quadwords in storage containing MI pointers, the addresses that MI programs may use and manipulate; MI programs have no direct access to the tag bit. The tag bit is turned on by the licensed internal code when a pointer is set, and turned off by hardware any time the quadword is modified (except through a controlled set of IMPI pointer-manipulation instructions). This procedure allows the system to detect invalid pointers: it is not possible for an MI program to counterfeit a pointer or to modify a pointer in an invalid way. The attribute-bit implementation allows the validation of pointers in an extremely efficient way and is the basis for system integrity at the MI layer. An error detected during the execution of an IMPI instruction is routed to the licensed internal code using the same technique used for the supervisor link instructions; the IMPI identifies many exceptional conditions in this way, allowing the licensed internal code layer to take appropriate action.
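The tag-bit mechanism can be modelled in a few lines: the tag is set only by the trusted pointer-store path and dropped by any ordinary store, so a forged pointer fails validation. Everything here (class and method names) is an illustrative sketch, not the real hardware interface:

```python
class Quadword:
    """A 16-byte storage unit with a hidden tag bit, as on the AS/400.
    The tag is set only by the trusted pointer-store path and cleared by
    any ordinary store into the quadword, so forged pointers are detectable."""
    def __init__(self):
        self.data = bytes(16)
        self.tag = False                     # not addressable by normal stores

    def store_pointer(self, addr):           # licensed-internal-code path
        self.data = addr.to_bytes(16, "big")
        self.tag = True

    def store_bytes(self, b):                # any ordinary store
        self.data = bytes(b).ljust(16, b"\0")[:16]
        self.tag = False                     # "hardware" drops the tag

    def load_pointer(self):
        if not self.tag:
            raise ValueError("invalid pointer: tag bit not set")
        return int.from_bytes(self.data, "big")

q = Quadword()
q.store_pointer(0x1000)
assert q.load_pointer() == 0x1000
q.store_bytes(q.data)            # "counterfeit": rewrite the same bytes directly
try:
    q.load_pointer()
except ValueError as e:
    print(e)                     # invalid pointer: tag bit not set
```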

Index Support:

The licensed internal code layer implements a general balanced binary tree with front compression. This binary tree function is used extensively for fast, keyed information retrieval within the licensed internal code and OS/400 components. The implementation is highly optimized to minimize the number of disk operations required to retrieve an entry; the tree is balanced at a page level, providing a very broad, short tree. Binary tree indexes are used within the licensed internal code by:

- Storage management, for permanent, temporary, and free-space directories.
- Database, for indexed file support.
- Libraries, for object name to address resolution.
- Security, as a fast mechanism to check a user's authority to perform object operations.
- Event management, to provide a fast way of finding processes that act as monitors for specific events.

The binary tree function is also made available to OS/400 through an MI object, the index, which contains a binary tree. This is used within OS/400 for:

- Message files, so that the text of a message can be found quickly.
- Job queues, so that jobs may be ordered by priority and status.
- Output queues, so that spool files may be ordered by priority and status.

Measurements on customer systems with heavy database applications show 10 to 15 percent of the total CPU being used by the index support code.
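The essential idea of the index object, keyed entries kept ordered for fast retrieval, can be sketched as follows. This is a minimal stand-in using a sorted list (the real machine index is a balanced tree with front compression, optimized to cut disk reads); the class, keys and addresses are invented for illustration:

```python
import bisect

class KeyedIndex:
    """Stand-in for the machine index: keyed entries kept in sorted order
    for fast lookup by binary search."""
    def __init__(self):
        self.keys, self.values = [], []

    def insert(self, key, value):
        i = bisect.bisect_left(self.keys, key)
        self.keys.insert(i, key)
        self.values.insert(i, value)

    def find(self, key):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.values[i]
        return None

# e.g. object-name-to-address resolution, as a library might use the index
idx = KeyedIndex()
idx.insert("PAYROLL", 0x4F00)
idx.insert("INVOICE", 0x5A20)
print(hex(idx.find("PAYROLL")))  # 0x4f00
```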


Storage Management:

The AS/400 hardware and licensed internal code provide a single-level storage addressing architecture. A better term might be uniform addressable storage. As objects (files, programs, control blocks, directories, and so on) are created, they are allocated disk space and are assigned a range of virtual addresses. These virtual addresses are used by the IMPI instructions to address the object data directly. The storage management licensed internal code reads the object data from disk into main storage on demand, as required by instruction access. This is known as demand paging. Essentially all of main storage is used as a cache for objects stored on disk. Storage management is divided into two parts: auxiliary storage management and main storage management. Auxiliary storage management allocates disk space to objects, whereas main storage management handles the demand paging. Auxiliary storage management assigns disk space to the virtual addresses of an object. Main storage management moves the pages of an object between disk storage and main storage. The CPU addressing hardware translates the virtual address of an object into the corresponding main storage address.
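Demand paging over a single address space can be sketched in a few lines: every reference goes through a virtual address, and a page is brought from "disk" into "main storage" only on first touch. The page size, the dictionaries standing in for disk and main storage, and the fault counter are all invented for illustration:

```python
# Sketch of single-level storage: every object lives at a virtual address;
# a page is read from "disk" into main storage only when first referenced.
PAGE = 512
disk = {}          # virtual page number -> page contents (backing store)
main = {}          # virtual page number -> page contents (the cache)
faults = 0

def read_byte(vaddr):
    global faults
    vpn, off = divmod(vaddr, PAGE)
    if vpn not in main:                              # page fault: demand-page it in
        faults += 1
        main[vpn] = disk.get(vpn, bytearray(PAGE))   # absent pages read as zeros
    return main[vpn][off]

disk[3] = bytearray(b"A" * PAGE)
assert read_byte(3 * PAGE + 10) == ord("A")   # first touch faults the page in
assert read_byte(3 * PAGE + 11) == ord("A")   # already resident, no fault
print(faults)  # 1
```

Note how pages with no backing data read as zeros, which echoes the zero-on-first-reference guarantee described later for newly created objects.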

Auxiliary storage management:

Auxiliary storage management uses a binary buddy system to manage free disk space. The binary buddy system only allows disk-space blocks (extents) whose sizes are a power of two: one sector, two sectors, four sectors, eight sectors, and so on, are valid free-space block sizes. This scheme has several performance advantages:

- Garbage collection (the recombination of small blocks of free space into larger blocks) is very simple and fast. When a disk block is freed, a simple check can be made to see whether its buddy is also free; if it is, the two buddies are combined, and the process is repeated until no free buddy is found.
- External free-space fragmentation is nearly eliminated in most real-world cases.

Auxiliary storage management uses binary tree indexes to maintain allocated and free-disk-space directories. These indexes are organized so that most operations (allocation, deallocation, and translation between virtual addresses and disk addresses) can be performed with a single index operation. Auxiliary storage management uses one of two techniques to select the disk unit (actuator) from which space will be allocated: if the request is small (less than or equal to 32K bytes), a randomized round-robin scheme is used; if the request is large, the disk unit with the greatest percentage of free space is selected. Data on the system is thus fairly well spread out among the disk units, which provides reasonable disk-access balancing. Storage management forces newly created objects to contain binary zeros on first reference. This guarantees that a new object never contains old data from a deleted object

that occupied the same disk space. No performance penalty occurs, because the virtual address assigned to the object is stored in a header associated with each sector on disk: when a page of an object is read into main storage, its virtual address is compared with the address stored in the header, and if they do not match, the contents of the disk sector are not part of this object and the page is zeroed. This technique eliminates the need to zero disk space when it is allocated (or freed), or to maintain a large table containing an entry for each virtual page indicating whether it has ever been referenced. Optionally, auxiliary storage management may divide the disk units into auxiliary storage pools (ASPs). Most user data (files, programs, and so on) is stored in the system ASP, while certain objects, such as journals and saved files, may be created and stored in other, user ASPs. This provides a physical separation between the active data files and the on-line backup, which improves performance by avoiding disk-arm contention.
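The binary buddy scheme described above can be sketched compactly: allocations are rounded up to a power of two, larger blocks are split on demand, and a freed block recombines with its buddy (whose offset differs in exactly one bit) until no free buddy remains. This is an illustrative sketch, not the actual allocator; sizes are in sectors:

```python
def round_up_pow2(n):
    p = 1
    while p < n:
        p *= 2
    return p

class BuddyAllocator:
    """Minimal binary buddy system over `size` sectors: blocks are always a
    power of two, and a freed block recombines with its free buddy."""
    def __init__(self, size):
        self.size = size
        self.free = {size: [0]}        # block size -> list of free offsets

    def alloc(self, n):
        want = round_up_pow2(n)
        size = want
        while size <= self.size and not self.free.get(size):
            size *= 2                  # find a bigger free block to split
        if size > self.size:
            raise MemoryError("no free block large enough")
        off = self.free[size].pop()
        while size > want:             # split down, keeping upper buddies free
            size //= 2
            self.free.setdefault(size, []).append(off + size)
        return off, want

    def free_block(self, off, size):
        buddy = off ^ size             # buddy offset differs in exactly one bit
        peers = self.free.get(size, [])
        if buddy in peers:             # buddy is free: combine and repeat
            peers.remove(buddy)
            self.free_block(min(off, buddy), size * 2)
        else:
            self.free.setdefault(size, []).append(off)

b = BuddyAllocator(16)
off, size = b.alloc(3)       # rounds up to a 4-sector block
b.free_block(off, size)      # coalesces all the way back to one 16-sector block
print(b.free[16])            # [0]
```

Note how freeing the 4-sector block triggers the repeated buddy check the text describes, recombining all the way back to the original 16-sector extent.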

Main storage management:

The basis for main storage management is a simple demand-paging scheme with an LRU (least recently used) page replacement algorithm. Performance would not be adequate with this simple approach alone in most environments, so main storage management provides functions that allow other components of the licensed internal code and the operating system to improve the paging performance of the machine. Some of these functions are:

- Requesting that large blocks of virtual pages be read into storage prior to any reference to them. This can be performed either synchronously or asynchronously with respect to the requestor.
- Requesting that blocks of virtual pages be cleared. This allocates zeroed pages of main storage to the virtual pages without doing any I/O operations.
- Identifying blocks of virtual pages not likely to be referenced in the near future. These pages are written to disk (if changed) and put at the head of the LRU list.
- Dividing main storage into pools. A customer may divide main storage into pools, and each user and system task is assigned to one of the pools; all of a task's paging requests are satisfied only from its assigned pool. In this way, the customer can ensure that a batch job, for example, will not steal the pages of a higher-priority interactive user.

The integrated database licensed internal code is highly optimized to reduce both I/O requests and main storage requirements. When handling a request to read a virtual page into main storage, main storage management must determine the disk address assigned to the given virtual address. This is done by finding the entry in the binary tree index built by auxiliary storage management; main storage management also maintains a lookaside directory of recently used virtual addresses, which can be examined very quickly, so the index operation can often be avoided.
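LRU replacement within a pool can be sketched with an ordered dictionary: touching a page moves it to the recently-used end, and when the pool is full the page at the least-recently-used end is stolen. The class, frame count and page names are invented for illustration:

```python
from collections import OrderedDict

class Pool:
    """One main-storage pool with least-recently-used page replacement."""
    def __init__(self, frames):
        self.frames = frames
        self.pages = OrderedDict()     # page id -> contents, oldest first

    def touch(self, page, load):
        if page in self.pages:
            self.pages.move_to_end(page)                # recently used: protect it
        else:
            if len(self.pages) >= self.frames:
                self.pages.popitem(last=False)          # steal the LRU page
            self.pages[page] = load(page)               # "read from disk"
        return self.pages[page]

pool = Pool(frames=2)
load = lambda p: f"data-{p}"
pool.touch("A", load)
pool.touch("B", load)
pool.touch("A", load)                  # A is now the most recently used
pool.touch("C", load)                  # pool full: B (the LRU page) is stolen
print(list(pool.pages))  # ['A', 'C']
```

Tasks assigned to different pools would each get their own instance, so one task's paging cannot steal another pool's frames, as the text describes.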


With the single-level addressing structure of the AS/400, main storage can be thought of as a cache for virtual storage. In this way, little within the system is sensitive to the main storage size: as more main storage is added to a system, the amount of disk I/O activity is reduced, since more data is automatically cached in main storage.

Access groups:

Storage management uses an MI object, the access group, as a container in which other objects may be suballocated. The access group gathers the many small objects associated with a process into a few large blocks of disk space. When a process enters a long wait (for terminal response), its access group is written to disk in the fewest possible I/O operations, and the main storage pages are then placed at the top of the LRU list. When the process executes again, the pages of the objects in its access group (that were in main storage before the long wait) are read back into main storage.

If the demand for main storage pages is small, storage management determines dynamically that the access groups of a process need not be written to or read from disk at all. This determination is based on a number of dynamically monitored factors, including: the general faulting rate in the pool; the number of pages of the access group still resident in the pool at the start of the last few transactions; the number of faults that occurred on the access group during the last transaction; and any simple patterns detected in the read and write decisions over the last few transactions (both for the pool and for the specific access group). The amount of data and history gathering done is directly tied to the general faulting rate in the pool, so that this overhead is also minimized as demand in the pool decreases. With this enhancement, response time for machines with a large amount of main storage is fast and the CPU resource used is substantially reduced: access-group swapping in a highly memory-constrained system can consume 30 percent of the CPU resource of the system, but swapping decreases to 1-2 percent of the CPU as the paging demand in the pool decreases. Because main storage is used as a cache for virtual storage, and because swapping can be turned on and off dynamically, there is a strong and direct relationship between main storage size and the amount of disk I/O operations required on a system: if the main storage size is increased, the amount of disk I/O decreases. The system shifts smoothly from an environment of heavy swapping and faulting to one where I/O activity is required only for randomly accessed data as main storage is added.


Schematic Diagram Showing Relationship Between Object & Storage.


Resource and process management:

Resource and process management are the licensed internal code components that control the execution of user and system tasks within the system. Although the IMPI instructions supply a task dispatcher, its pre-emptive, priority scheduler is not adequate for a system with other resource constraints. For example, allowing all processes to compete for the CPU could quickly force the working set (the number of main storage pages required to run without excessive page faults) of the system to exceed the available main storage. The process management component implements a scheduler, limiting the number of processes that may actively compete for pages in a storage pool to a number set for that pool by the user. An active process may become ineligible to compete when it has used a certain amount of CPU time, known as a timeslice. An active process that becomes ineligible or that reaches a long wait for terminal response has its access group purged. When an access group is purged, any changed pages are written to disk, and the pages are forced to the top of the LRU list. This action amounts to swapping the process out. A process eligible to compete has its access group swapped in.
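As a rough illustration of the eligibility scheme described above (a simplified sketch; the job names, activity level, and timeslice values are invented, and real swapping involves access groups rather than a queue), the following model admits at most a fixed number of jobs at a time and "swaps out" any job that exhausts its timeslice:

```python
from collections import deque

def schedule(jobs, activity_level, timeslice):
    """Toy pool scheduler: at most `activity_level` jobs compete at once;
    a job that exhausts its `timeslice` becomes ineligible and re-queues."""
    eligible = deque()
    waiting = deque(jobs.items())      # (name, remaining CPU time needed)
    swaps = 0
    while waiting and len(eligible) < activity_level:
        eligible.append(waiting.popleft())        # admit up to the limit
    finished = []
    while eligible:
        name, need = eligible.popleft()
        if need <= timeslice:
            finished.append(name)                 # completes within its slice
        else:
            swaps += 1                            # timeslice used up: swap out
            waiting.append((name, need - timeslice))
        while waiting and len(eligible) < activity_level:
            eligible.append(waiting.popleft())    # admit a waiting job
    return finished, swaps

done, swaps = schedule({"A": 5, "B": 12, "C": 3}, activity_level=2, timeslice=5)
```

Job B exceeds its timeslice twice and is swapped out each time, so short jobs A and C complete first even though B was admitted earlier.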

Relational database support:

The key AS/400 system component, from the performance standpoint of commercial applications, is its relational database management system (RDBMS). Because of the performance-critical nature of this component, the majority of the run-time support and management of the RDBMS (including journaling and commitment control support) is implemented in the licensed internal code layer (below the MI layer). Run-time support is closely integrated with two other key performance areas of licensed internal code support: index support and storage management. Index support is heavily used to implement the logical views of the database in the most performance-efficient manner possible. Storage management services are extensively used by the RDBMS to maximize and overlap disk I/O operations and to minimize working set size. Anticipatory asynchronous reads and writes on database record segment pages and indexes are done based on expected or historical reference patterns. Blocking of multiple data pages to and from disk is done automatically when sequential processing patterns are detected or at the request of the application. Journals can also be placed in an auxiliary storage pool, separate from the rest of the system, to eliminate contention for the disk arm. One important consequence of the single-level store as it relates to the database is the cost of ensuring that all changed pages associated with a file have been forced to disk when the file is deactivated (closed). Because of the implicit sharing (or caching) provided by main storage management, finding all changed pages of an object currently in memory requires either examining all of the pages in the main store or checking each page of the object to determine whether it is in main store. This technique becomes prohibitively expensive as the size of the main store and the object increases. On the System/38 Model 700 with 32M bytes of main store, this approach was consuming up to 30 percent of the system CPU in the RAMP-C benchmark. A bit-map technique was implemented to resolve this problem: at file close time, a bit map associated with the file identifies which pages were modified, thus restricting the number of page examinations required.

Sophisticated search features, based on estimates made with incomplete information, or guesstimates, of the number of selected keys in specified indexes, minimize processing time for dynamic queries. This key range guesstimate technique is unique to the System/38 and AS/400 in that it is done dynamically, without requiring any additional index management at update time or static key counting routines run at the user's request. Implicit index sharing by multiple logical views is done when equivalent sequencing is specified in the logical view definition, avoiding the maintenance of multiple indexes at execution. Such sharing is particularly important on the AS/400 because of the serialization protocols currently used in the RDBMS. These protocols result in all of the indexes involved in an update being locked concurrently while the update is in progress. Therefore, the potential for contention on a file increases with the number of concurrently updated logical views over it. This potential can be a serious bottleneck on a large system with a heavily updated file that has a large number of logical views over it. This design will need to be changed to provide for more granular serialization as the system size and number of supported users grow. Combining the characteristics of implicitly cached main storage, automatically balanced disk arm utilization, high-function horizontal IMPI primitives, and the low-level, integrated implementation of the RDBMS results in unusually good performance characteristics for a relational database.
This result is a key contributor to the good price/performance characteristics of the AS/400.
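The bit-map technique can be sketched as follows (illustrative Python, not the actual licensed internal code; the class name and page counts are invented): one bit per page of a file is set when the page is modified, so close-time processing examines only the changed pages instead of scanning all of main storage.

```python
class FilePageBitmap:
    """One bit per page of a file; a set bit means the page was modified
    and must be forced to disk when the file is closed."""
    def __init__(self, num_pages):
        self.bits = 0
        self.num_pages = num_pages

    def mark_changed(self, page):
        """Record a modification to the given page (called on each update)."""
        self.bits |= 1 << page

    def changed_pages(self):
        """Pages to examine and write at file close time."""
        return [p for p in range(self.num_pages) if self.bits >> p & 1]

bm = FilePageBitmap(1024)
for page in (3, 700, 701):
    bm.mark_changed(page)
# Only the 3 marked pages need examination instead of all 1024.
```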


Machine interface

Abstract machine:

From a performance standpoint, perhaps the most important architectural feature of the AS/400 is the machine interface (MI) layer. The MI layer is an architecturally enforced boundary (a formally structured set of instructions) between the licensed internal code layer of the system and the OS/400 layer. The MI instruction set, although giving the architectural appearance and function of direct execution, is actually compiled. The OS/400, and all code above it (licensed programs, applications), is implemented entirely on the MI. The MI instruction set can be categorized into several logical groupings: computational instructions; instructions for specific objects (over 15 different object types are supported); locking, exceptions, and events; and machine resource observation and management. All of the system's compilers are targeted to this MI instruction set, producing a program template which is then used as input to the MI instruction Create Program. This instruction invokes a translator component in the licensed internal code layer that translates the program template into a program object containing an IMPI instruction stream. Generating the instruction stream involves normal code generation chores (such as register optimization, temporary operand management, and so on) followed by the final step of encapsulating all of the generated pieces into a new program object. A system pointer is returned to the program object, which can then be used as the operand of a call or transfer MI instruction. MI instructions are characterized by being high-level, generic, and machine-independent. There is no concept of registers, physical storage locations, or other hardware-specific characteristics in the instruction syntax. For example, the computational instructions consist of generic arithmetic operations and string manipulation operations.
To add two numbers together, a single add numeric instruction exists that accepts any combination of numeric operand types and precisions. At translate time, if the type and precision of the operands are known, an appropriate set of IMPI instructions is generated to perform the operation, performing type conversions and precision adjustments as required. If the operand attributes are not determined at translate time (i.e., late binding was used via data pointers), a Supervisor Link (SVL) to the appropriate licensed internal code routine is generated, and the operation is performed in an interpretive manner when executed. Along with the traditional numeric and string manipulation instructions supported in the computational class, a number of higher-function instructions for performing common string-handling operations exist. Besides generalized versions of the System/370-like translate instructions, there are instructions in support of parsing (scanning for the occurrences of a particular string in another string or an array) and of string compression and decompression (MRJE and SNA flavors). Special support for double-byte character strings (DBCS, for ideographic character sets) is also provided in the scan instruction. Character string operands can be up to 32K bytes in length, and arrays of up to 16 megabytes are supported. Each class of object supported by the MI layer has its own unique set of instructions appropriate for that class (e.g., a program object supports create, delete, call, transfer, and materialize instructions). In general, these instructions (at execution) result in an SVL operation to invoke the appropriate licensed internal code routine to perform the function; the same is true for most of the other instructions in the remaining two categories. A program object contains an instruction stream that is a mixture of: IMPI instructions, corresponding to early-bound computational MI instructions; and SVLs to licensed internal code routines, to perform more complex and late-bound operations, such as object management, database access, authorization management, and so on. This mixture results in a machine interface that is high-level, abstract, late-bound, and interpretive in nature. The machine interface is translated, however, into an instruction stream where the performance-critical computational and string-handling operations are handled in line, with compiled, early-bound performance characteristics (where possible). Furthermore, since there is a single translator for a single MI instruction set targeted by all compilers on the system, it is comparatively easy to enhance the IMPI support (i.e., to provide additional high-function primitives) and quickly take advantage of the enhancement, because only one code generator must be modified.

MI objects:

A dominant characteristic of the AS/400, both externally (to the user) and internally (in the OS/400 design and implementation), is its object-oriented architecture. The basic object handlers are implemented in the licensed internal code layer, providing the support for the set of objects at the MI. These objects are accessed by the OS/400 and licensed programs (LPs) via the respective object-specific MI instructions. The MI objects present a set of common functions (via MI instructions) to all of the system code built on top of the MI layer, thus providing the benefits of improved integrity and reliability, functional and interface consistency, optimized performance, and reduced operating system code redundancy. These benefits come from formally encapsulated functions and data structures that are centralized, carefully implemented, and easily accessed. The structures are widely used throughout the operating system and LPs as basic building blocks for the functions and objects they provide. This formalized and rigidly enforced data abstraction model is a key contributor to the integrity, reliability, and usability characteristics of OS/400. It also contributes significantly to its performance characteristics by providing a highly shared implementation of common constructs, which can then be highly optimized. Several MI objects are used in support of the RDBMS of the system. These include cursor, data space, data space index, journal port, journal space, and commit block. These MI objects provide the basic supporting building blocks for the OS/400 RDBMS. Most of the fundamental areas of the functions of the operating system are supported through appropriate MI objects. Other objects that have a key influence on the performance of the system include contexts (libraries), user profiles (authorization), and programs.

Contexts and address resolution:

The context object maps the symbolic identification (type and name) of an MI object to its virtual address. Above the MI layer, this virtual address is embodied in a 16-byte pointer, which can only be produced and manipulated through MI instructions (such as object creates and resolve pointer) that are designed in the architecture. Pointers are hardware-tagged so that they cannot be counterfeited or manipulated through interfaces not conforming to the architecture. Since pointers are the primary mechanism for identifying object operands on MI instructions, context objects serve as the mechanism for mapping the symbolic object identification, of an object provided by a user, to the virtual address needed to access the object on the system. Context objects are used by OS/400 to support what is presented to the user as a library. A user specified (and modifiable) list of libraries is associated with each job on the system, and objects can be referenced by the user explicitly qualified to a specific library. If not explicitly qualified to a library, the library list of the job resolves the reference by searching each library on the list in order until a matching entry is found. Context objects are implemented as indexes (keyed by object type and name) to provide optimum performance for this address resolution. User applications refer to all of the objects making up an application symbolically, and everything is represented as an object in the system (including the users job itself, over 40 different external object types are on the system). This representation combined with the late-bound nature of the system (no link-editing, late-bound calls, each CL command represented by an object, and so on) results in this address resolution operation occurring very frequently in the system, often accounting for 5 percent of the CPU usage in interactive applications.
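The library-list resolution described above can be sketched roughly as follows (a hypothetical model: the library names, object types, and addresses are invented for illustration, and real contexts are keyed machine indexes, not dictionaries):

```python
def resolve(name, obj_type, library_list, contexts):
    """Map a symbolic (type, name) reference to a virtual address by
    searching each library on the job's library list in order."""
    for library in library_list:
        entry = contexts.get(library, {}).get((obj_type, name))
        if entry is not None:
            return entry                 # first matching entry wins
    raise LookupError(f"{obj_type} {name} not found in library list")

# contexts: library -> {(object type, name): virtual address}
contexts = {
    "QSYS":  {("*PGM", "QCMD"): 0x1000},
    "MYLIB": {("*PGM", "PAYROLL"): 0x2000, ("*FILE", "EMP"): 0x3000},
}
addr = resolve("PAYROLL", "*PGM", ["QSYS", "MYLIB"], contexts)
```

An unqualified reference to PAYROLL misses in QSYS and is found in MYLIB; the order of the library list determines which object a name resolves to.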

User profiles and authority management:

System authorization management is based on user profiles. Each system user is represented by a user profile object, which serves as the repository for all authorization information related to that user. All objects created on the system are owned by a specific user, and authorization to use, modify, and manage an object (and, in some cases, the data within it) can be controlled on an individual user basis. At creation time, the object is given a default level of access authority that applies to all users. The authority level can be overridden on an individual user basis to give a user more or less authority to access each object. Each operation or access to an object must be verified by the system to ensure the user's authority. This level of authority checking, in combination with the granularity of objects typically used in an application (data and device files, programs, libraries, data areas, commands, spool files, data queues, device and controller descriptions, output queues, message queues, menus, and so on), implies the potential for a great deal of execution overhead, and a number of optimizations exist to minimize this overhead. The enhancements include:

All object authority user profile attribute: When this attribute is present in the user profile attempting an operation, no further checking is required. This mechanism is used when the user configures the system to run without resource authorization checking. It can also be granted to selected profiles when resource authorization checking is active.

Default authority in the object: The object's default level of authority is stored in the object itself, along with a bit that specifies whether any specific (private) authorities have been granted to specific users. This avoids any user profile lookup if no private authorities exist for the object.

Pointer authority: A user's authority to access an object can be stored in a resolved object pointer as part of the address resolution operation. An example is database file open processing, which performs an address resolution, stores authority in the pointer used for subsequent operations against the file (within the same job), and thereby avoids authorization checking on the data accesses to the file.

A number of additional constructs exist for controlling object authorizations (such as group profiles, adopted authorities, and authorization lists). Complete authorization verification can result in several user profiles being accessed.
The user profile object itself is implemented as an index (using the virtual address of the object as the key), thus providing optimum performance for random lookup operations when they do have to be made. The most expensive part of this authority resolution is the index operations against the user profiles. These operations have been observed in some customer systems to consume 15 to 20 percent of the total processor when the various optimizations described above were disabled. The authority verification algorithm has been optimized to perform the checks in ascending order of cost, attempting to avoid the index operations if possible. For example, the authority in the pointer is checked first, next the user profile(s) are checked for all object authority, and then the object is checked for no private authorities and sufficient default authority. This order typically results in less than 5 percent of the authority verifications performing an index operation (0 percent if resource authorization checking is not active).
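The ascending-cost ordering of the checks might be sketched like this (a hypothetical model; the data structures, field names, and return values are invented, not OS/400 interfaces). The expensive profile index lookup runs only when the three cheaper checks cannot decide:

```python
def verify_authority(pointer_auth, user_profile, obj, required):
    """Perform authority checks cheapest-first; return whether access is
    allowed and which checks were actually performed."""
    checks = []
    # 1. Authority cached in the resolved pointer (cheapest).
    checks.append("pointer")
    if required in pointer_auth:
        return True, checks
    # 2. All-object authority attribute in the user profile.
    checks.append("all-object")
    if user_profile.get("all_object"):
        return True, checks
    # 3. Default (public) authority stored in the object itself.
    checks.append("default")
    if not obj["has_private_auth"] and required in obj["public_auth"]:
        return True, checks
    # 4. Expensive path: index lookup of private authorities.
    checks.append("profile-index")
    private = user_profile.get("private", {}).get(obj["name"], set())
    return required in private, checks

obj = {"name": "EMP", "has_private_auth": False, "public_auth": {"read"}}
ok, checks = verify_authority(set(), {}, obj, "read")
```

Here public authority suffices, so the costly profile-index step is never reached; a pointer that already carries the authority would stop after the first check.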


Space objects:

A space object is an MI object that is essentially a free-format byte string (up to 16 megabytes in length), which can be freely accessed and manipulated using MI computational instructions. Access to this byte string is gained through a special-purpose pointer called a space pointer (SPP). An SPP identifies the space object and an offset within it. The SPP can be used as the operand for many of the MI instructions. The offset within an SPP can be manipulated via specific MI instructions provided for this purpose. A high-performance form of a space pointer, called a machine space pointer (MSPP), is supported with limitations on its use, such that its actual storage location cannot be accessed directly from an MI program. This pointer can be manipulated as a six-byte virtual address, potentially being optimized into a register across MI instructions, without compromising program debug support. A specific authority to access the object is required in order to set a space pointer (from a system pointer) to the space object, but once it has been initially set, its offset within the space can be manipulated without any authorization checking. At the time the space object is created, 16 megabytes of address space are reserved for the object, with the actual disk allocations being made only upon explicit request or, optionally, automatically on first reference to an offset. The space object provides a high-performance free-format construct for use when the frequency of reference or unpredictability of use would make more formal encapsulation of the object impractical. It often serves the function of GETMAIN-type support in more conventional systems, without the space-management (chaining and so on) problems normally associated with those older mechanisms.
Space objects are extensively used for control blocks within OS/400 as well as for many of the external objects (commands, job descriptions, menus, device files, data areas, and so on) presented to the user.
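A space pointer can be pictured as an (object, offset) pair whose offset arithmetic requires no further authorization checking once the pointer has been set (a toy model; the byte layout and class are invented for illustration):

```python
class SpacePointer:
    """Toy model of a space pointer: a space object plus a byte offset."""
    def __init__(self, space, offset=0):
        self.space = space          # bytearray standing in for the space object
        self.offset = offset

    def advance(self, n):
        """Adjust the offset; no authority check is needed for this."""
        self.offset += n

    def read(self, length):
        """Read `length` bytes starting at the current offset."""
        return bytes(self.space[self.offset:self.offset + length])

space = bytearray(b"HDR!payload-----")   # free-format byte string
spp = SpacePointer(space)                # authority checked once, at set time
spp.advance(4)                           # skip a 4-byte header
```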


Schematic Diagram Of Space Object & Space Pointer.

MI program architecture:



All MI programs are reentrant; that is, the instruction stream and other constant execution entities are nonmodifiable and shared among multiple users. Only one copy ever exists in main storage, regardless of the number of concurrent users. Storage for program variables and other process-specific pieces of the program is allocated and managed in process-specific storage by the MI on appropriate call and return boundaries. (The external call is implemented as an SVL routine in the licensed internal code.) In addition to allocating and managing this storage in a manner consistent with its attributes (static or automatic), the MI can initialize program variables to specific values at the time the program is called, by specifying the initial values in the variable declarations. These features, plus other services such as exception-description management, invocation-tracing support, event management, and so on, provide a very rich, productive programming model at the MI level. From a performance standpoint, this rich support can make external program calls expensive. The minimum path length is on the order of 60 instructions, with much longer path lengths being incurred depending on the features used (such as the number of variables initialized). Path length has not been a significant problem in the commercial application arena, as that arena is characterized by large programs and relatively infrequent external calls. Since MI programs are reentrant (they do not have to be loaded or relocated), they have their program variable storage automatically allocated (in separate segments) at call time and can be identified either symbolically (late-bound call) or by virtual address pointer (early-bound call). Since all other external references are resolved at execution time, there is no concept of a link loader at the MI level. Program linkage is dynamic, implicitly occurring at external call time.
If the called program is symbolically identified, an implicit address resolution is performed using an explicitly specified context or an implicitly specified list of contexts (an address resolution list associated with the process). This resolution maps the symbolic program name to a virtual address. The address can optionally replace the symbolic specification (in the process's program variable storage area) so that subsequent call executions do not incur the overhead of the address resolution. This option is commonly used in application programs to provide dynamic binding to the programs on the first call; subsequent calls in the language's run unit then reuse the resolved address. Similar techniques are also applied to other external references by the program. This linkage technique has been further refined for the OS/400 system code by building a system entry point table containing the addresses of all of the system programs (built at the time the system code is installed). All external calls within the system code, and from application code to system code, are done via these pre-resolved pointers. Similar techniques are heavily used within the OS/400 system code to early-bind other external references. Numerous control blocks and structures are built and initialized at different points in time (install, initial program load, job initiation, first use, and so on), binding external addressability at the most appropriate point based on functional and performance considerations and tradeoffs. A vast majority of external references are early-bound without losing the flexibility of late binding. Late binding is still used freely when functional considerations make it desirable.
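The resolve-once, reuse-thereafter call linkage can be sketched as follows (the program name, entry-point table, and addresses are invented; this illustrates the caching idea, not the actual SVL mechanism):

```python
class CallResolver:
    """First call pays for symbolic address resolution; later calls in the
    same job reuse the cached resolved pointer."""
    def __init__(self, entry_points):
        self.entry_points = entry_points   # name -> virtual address
        self.cache = {}                    # resolved pointers held by the job
        self.resolutions = 0

    def call(self, name):
        addr = self.cache.get(name)
        if addr is None:
            self.resolutions += 1          # costly symbolic resolution
            addr = self.entry_points[name]
            self.cache[name] = addr        # early-bind subsequent calls
        return addr                        # dispatch via resolved address

r = CallResolver({"QDBGETKY": 0x4000})
for _ in range(100):
    r.call("QDBGETKY")
```

One hundred calls incur only a single resolution; the pre-built system entry point table described above plays the same role for calls into system code.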

Program debug:

The MI also supports a set of common program debug functions, including the ability to set breakpoints on MI instructions as well as to display and modify program variables while at a breakpoint. Breakpoint support is implemented through licensed internal code support and designates an address range within an instruction stream (specific to a process) where interrupts will be presented on instruction execution. This design allows breakpoints to be set on a program at any time (in a process-specific manner) without incurring any extra overhead in the instruction stream when running with no breakpoints set. The program variable display and modification support is provided via a table, generated by the translator, that maps program variables to their storage locations at execution. Currently this support is automatically provided, so a recompile is not needed to perform program debugging operations. To make this support as predictable as possible, the MI architecture guarantees that the storage locations associated with variables are always current at MI instruction boundaries (the only place where breakpoints are serviced) and that changes made to variables while at a breakpoint will be reflected immediately in the execution of the program. Ensuring this predictability places some constraints on register optimization. Although addresses are currently optimized into registers across MI instructions, data items are not. This restriction can result in poor performance for tightly coded loops where the loop control code and array index values cannot be optimized into registers. For the typical commercial application environment, this condition is generally not a problem because of the existence of the high-function string manipulation MI instructions, which usually eliminate the need for tightly coded loops at the MI.
As newer languages and engineering and scientific languages (Pascal, C, FORTRAN) are supported on the system, this performance shortcoming of the MI may become more serious, requiring a relaxation of this aspect of the architecture as a program option.

Transaction processing model:


The AS/400 IMPI supports a basic tasking model represented by a task dispatching element (a 512-byte memory-resident control block). The licensed internal code layer of the system combines this tasking model with several other constructs to provide an MI process model. These constructs include: the user profile; the process access group; program variable storage (the program automatic storage area, PASA, and the program static storage area, PSSA); MI exception-handling support; event-handling support; and object-locking support. The OS/400 combines an MI process with additional structures and support to present a job to the user. The additional structures include: the job message queue; the output queue; the local data area; the MI response queue (the I/O interface to the MI); and the data management communications queue (which manages QTEMP library file opens and dynamic file redirection). All of this system function, available in support of a user's job, in combination with the previously discussed support (reentrant programs, dynamic address resolution, storage management, RDBMS, and so on), results in a transaction processing model based in each user's job. This model results in a dramatically simplified, flexible, and dynamic application development environment. Application control flow is single-thread and free of the resource bottleneck constraints that confront more conventional transaction processing environments. Each user's job contends for and accesses resources dynamically and independently of other users, using shared copies of the permanent objects in main storage (code, data, indexes, and control blocks) and its own job-specific program variable and file-access buffer areas. Thus, the performance benefits of shared system resources are achieved without the drawbacks of restricted address space, complex and inflexible resource management, and rigid early-bound requirements which come from transaction processing models servicing multiple users under a single task.

Execution Environment Support



AS/400 & System/36 Environment:

One of the major challenges in the development of the AS/400 was providing a platform to support the execution of the System/36 application family with equivalent or improved price and performance. Given the radical differences in the architectures, designs, and heritages of the two systems, the conventional solution would have been to support a hardware-based emulation mode on the new system. This choice would have had the advantage of providing object code compatibility, but it would not have achieved the objective of immediately providing a wide range of new functions, productivity, and capacity to System/36 applications. An alternative, software-based solution was implemented instead. The AS/400 System/36 Environment (S36E) provides source code compatibility for System/36 applications on the AS/400. Compatibility is accomplished by providing a mapping layer of support and structures in OS/400 that maps the System/36 Application Programming Interfaces (APIs) to the underlying native support in OS/400. As a result, the S36E is an integrated extension to OS/400 rather than an emulator or a mode. There is no concept of hot-keying between the environments. Applications running in the S36E share the same system facilities and support that an AS/400 application does. The S36E language compilers generate code that runs directly on the AS/400 hardware, and the System/36 command language invokes the appropriate OS/400 services directly. The database, spool, security, message handler, display facilities, and so on used by applications running in the S36E are the same as, and are fully shared with, those used by native AS/400 applications. This system compatibility results in performance characteristics similar to native applications. Although there is some performance overhead incurred in mapping some System/36 functions to the appropriate native services, this overhead is generally in the 5 to 15 percent range.
When a migrated System/36 application does experience significantly degraded performance (compared to the equivalent sized System/36), it is usually caused by the design of the application. That is, it is making unusually heavy use of a system service, which is significantly more expensive on the AS/400 than it was on the System/36. For example, the creation and deletion of a file on the System/36 is relatively cheap (fast) since it primarily involves a Volume Table of Contents (VTOC) update (the System/36 had a simple flat file system). On AS/400 all files are part of a full-function RDBMS, and the creation of one file involves creating and linking a number of complex control blocks as well as the updating of the data dictionary. Creation and deletion of a database file on AS/400 is much more costly (and slow) than on System/36. However, a System/36 application executing in the S36E on AS/400 is using a full-function RDBMS file instead of the limited-function flat file on the System/36, making much of the function (and performance) of the integrated RDBMS immediately available to the System/36 application.


System/36 Environment applications that make heavy use of those system functions which perform comparatively poorly on the AS/400 have been observed to sometimes require 50 percent more system resource than they did on the System/36 and may have to be modified (usually in relatively simple ways) to achieve acceptable performance.

Save/restore (backup and recovery):

One implication of the auxiliary storage management scheme of the AS/400 (distributing the disk extents associated with an object across multiple DASD units) is that a simple sector-by-sector copy of the contents of a single device to a backup medium is of no value in the event of a future device failure. Since any single device, in general, contains only some of the pieces of any specific object, a backup of those pieces is out of synchronization with the other pieces residing on other devices. Short of doing a sector-image backup of all of the relevant DASD on the system and then restoring all of these devices (essentially reloading the entire system), a sector-image backup has little value. Thus the system save and restore strategy is based on a higher-level, object-oriented premise: essentially collecting and copying complete images of objects to the backup media on the basis of an object, a group of objects, a library, or a group of libraries. This process is clearly more complex, requiring significantly more system processing for organization and management, particularly for complex database networks where many files (physical and logical views) may be interconnected so that they must be backed up together. For smaller objects, significantly more disk I/O activity is required, since smaller disk I/O operations must be used to collect the small extents associated with these objects. Other object-related information, such as authorizations, which are not physically stored with the object but must be recoverable, also adds complication to this kind of scheme. To maximize save and restore processing performance, a number of different strategies and supports have been developed, both to improve the performance of the processing itself and to reduce the volume of objects that must be backed up.
The save and restore design employs extensive multitasking and main storage buffering, achieving the maximum possible amount of concurrent disk I/O operations and overlap with media I/O activity. The multitasking and buffering can be easily restricted by tuning parameters (CPU priority and buffer sizes, which directly affect the amount of concurrent disk I/O operations) so that the impact of this activity on the rest of the system can be controlled when running in a non-dedicated environment.

To reduce the volume of data to be backed up, the system supports a save-changed-objects scheme, whereby only those objects that have changed in a library (since the last time the entire library was backed up) are saved. Database files being journaled can be exempted from this procedure, since a journal save achieves the same result (in less time, if the file is large and the activity comparatively low).

The system also supports the concept of a save file. This file, residing on DASD, is a simulated tape file which can be used as a substitute for removable media in save and restore operations. If the save file is placed in an auxiliary storage pool (ASP) separate from the rest of the system, it provides the following benefits:
1. Operator-less backup; for example, unattended backup overnight.
2. Improved performance; i.e., a simulated tape device that runs at DASD speed.
3. Improved flexibility. The backup can be done unattended when the system and objects are not in use, then optionally copied with low overhead (at device speed) to removable media during the prime shift without interfering with normal operations and use of the objects. Alternatively, if the save file is in a separate ASP, it can be left online: if a disk unit in the system ASP is lost, the system ASP can be reloaded (after appropriate repair actions), and the ASP containing the save file can then be logically reattached to the system and used as the source for restore operations as appropriate.

Checksums:

Probably the most innovative feature of the AS/400 system in this area is the facility known as checksums. This facility provides data redundancy on the DASD of the system using an exclusive-ORing technique, such that the contents of any disk drive on the system can be reconstructed from the contents of several other disk drives (from three to seven, depending on the system's configuration). Although the implementation of the concept on the AS/400 does not allow continued operation of the system while a disk device is inoperable, it does provide data recovery characteristics similar to DASD mirroring at a fraction of the DASD cost (13 to 33 percent additional DASD required, depending on the configuration). Although there is a CPU cost for the support (about 5 to 10 percent for interactive workloads) and an increase in disk I/O operations (about 25 percent for interactive workloads), it provides a cost-effective solution for many users who want no data loss from a DASD failure.

The checksumming concept implies that, for every write of changed data to disk, the corresponding data locations on all of the other DASD in the checksum set must be read into memory, a checksum calculated, and the result written out to the checksum disk. Three key optimizations were adopted in the AS/400 implementation of checksumming which allow its performance to be acceptable. First, when a changed page is written to DASD, the old data in the location is read into memory along with the old version of the checksum for that data. By exclusive-ORing the new data, old data, and old checksum together, the new checksum value can be derived. This method avoids having to read all of the DASD locations that correspond to the checksum, reducing the required disk I/O operations from N (where N is the number of DASDs in the checksum set) to four when writing changed data to disk.
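The exclusive-OR parity scheme and the four-I/O update optimization described above can be sketched in a few lines of Python. This is a minimal illustration of the technique, not IBM's actual implementation; each "drive" is simply a byte string, and the checksum drive holds the XOR of the data drives.

```python
# Sketch of the checksum (parity) idea: the checksum drive holds the XOR
# of the data drives, so any single drive can be rebuilt from the others.

def xor_bytes(*blocks):
    """XOR an arbitrary number of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three data drives plus one checksum drive.
d1, d2, d3 = b"\x10\x20", b"\x0f\x0f", b"\xa0\x05"
checksum = xor_bytes(d1, d2, d3)

# Recovery: a failed drive's contents are the XOR of all the survivors.
assert xor_bytes(checksum, d2, d3) == d1

# The key optimization: updating one page needs only the old data and the
# old checksum (four disk I/Os in total), not a read of every member drive.
new_d1 = b"\x11\x22"
checksum = xor_bytes(checksum, d1, new_d1)  # old checksum ^ old data ^ new data
d1 = new_d1
assert xor_bytes(checksum, d2, d3) == d1    # still reconstructs correctly
```

The second assertion shows why only four I/Os are needed: XORing the old data back out of the checksum and the new data in gives exactly the parity a full re-read of the set would produce.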


Second, the checksum data for a checksum set is spread evenly across the DASDs in the set, thus spreading the I/O activity required to maintain it evenly among all of the members of the set and avoiding over-utilization of any one DASD arm. Third, the storage on the system is segregated into two classes: temporary objects, whose existence does not span IPLs, and permanent objects. Since the temporary objects normally represent 5 percent or less of the DASD space on a system but account for 40 to 60 percent of the DASD writes on a typical customer system, segregating these two classes of storage and providing checksum protection only for the permanent objects significantly reduces the number of DASD operations that incur checksumming overhead. The negative implication is that the system cannot continue to run when a DASD fails, as the portions of temporary objects stored on that device are no longer available and cannot be recovered. System operation cannot be resumed until the failing device has been repaired (and the permanent data on it reconstructed, if lost during the repair action).

For systems that are not CPU-bound and which add an appropriate amount of DASD and/or main memory (adding memory almost always results in a significant reduction in total disk I/O activity), interactive performance with the checksum support active is usually equivalent to that prior to activating checksums (and adding the appropriate hardware). Very disk-write-intensive batch performance can degrade significantly, in some extreme cases by as much as a factor of three. This performance can usually be improved by fixing problems in the application (such as blocking factors), by changing file placement to get better overlap between DASD controllers, or by adding DASD controllers/buses.


Organization of OS/400 Objects


Objects:

On the AS/400, everything that can be stored or retrieved is stored in an object. Examples of objects are libraries, files, executable programs, queues, and more. Objects share some common attributes, such as name, type, size, description, date created, and owner. The concept of an object allows the system to perform certain standard operations, such as authorization management, on all object types. The primary object types are:

*LIB - Libraries.
*FILE - Files.
*PGM - Compiled programs.
*OUTQ - Output queues.

Libraries:

Every object is contained in a library. A library is an object, of type *LIB, that contains a group of objects. It is similar to the "root" or top-level directory on UNIX, MS-DOS, and VAX/VMS. However, unlike those systems, a library cannot "contain" other libraries (with the exception of QSYS, the system master library, which "contains" all libraries on the system). An interesting implication of the non-hierarchical nature of libraries is that two users cannot have libraries with the same name. There are three general categories of libraries:
1. QSYS - the library that contains all other libraries.
2. System-supplied libraries. NOTE: all IBM-supplied library names begin with the letter "Q" or "#".
3. User-created libraries.
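The single-level, non-hierarchical library structure can be sketched as a simple data model. This is an illustrative Python sketch, not an OS/400 interface; the class and method names are made up, but it shows why library names are global and why a second library with the same name is rejected.

```python
# Hypothetical model of the flat library structure: QSYS holds every
# library, and library names are therefore global across the system.

class System:
    def __init__(self):
        self.qsys = {}                  # library name -> {object name -> object}

    def crtlib(self, name):
        if name in self.qsys:
            # Two users cannot each own a library with the same name.
            raise ValueError(f"Library {name} already exists")
        self.qsys[name] = {}

    def put(self, lib, obj_name, obj):
        self.qsys[lib][obj_name] = obj  # objects live directly in one library

sys400 = System()
sys400.crtlib("USERA")
sys400.put("USERA", "EMPMAST", "<*FILE PF-DTA>")

try:
    sys400.crtlib("USERA")              # a second USERA library is rejected
except ValueError as e:
    print(e)                            # Library USERA already exists
```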

Files:

A file is an object, of type *FILE, that contains data in the form of a database, device data, or a group of related records that are handled as a unit. In this manual, we are primarily concerned with database files. There are two types of database files:
1. Physical files.
2. Logical files.


A physical file contains actual data stored on the system. It has a fixed-length record format. Primarily there are two kinds of physical files:
1. Data physical files.
2. Source physical files.

A data physical file (*FILE PF-DTA) contains data that cannot be compiled, such as an input file to a program. In conventional terms, a data physical file is a data file, for example an employee master file. A data physical file normally has a record format. This record format is defined using Data Description Specifications (DDS, a language that is used to describe database files to the system). The description is then compiled to produce a *FILE object with attribute PF-DTA.

A source physical file (*FILE PF-SRC) contains source statements, for example the source statements of a Pascal or COBOL program. A source physical file has the attribute "PF-SRC" and is usually created using the "Create Source Physical File" (CRTSRCPF) command. A source physical file is actually a special type of data physical file.

The data records in a physical file can be grouped into members, and a physical file may contain one or more members. These members are not objects themselves but subsets of an object. This implies that all members of an object share the same basic characteristics, such as ownership and security, with the other members in the object.

In a PF-SRC file, each member contains source statements for a program or DDS source. Members have an attribute associated with them which, in the case of PF-SRC members, determines how the various system programs (such as the editor and compilers) on the AS/400 treat the member. This attribute is specified when creating the member, and allows compilation to be totally automatic. Once, for example, a member has been specified as having an attribute of CBL (for a COBOL program), the AS/400 editor, SEU, will format the program as a COBOL program, and when PDM (Program Development Manager) is given the instruction to compile the file, it "knows" that it should invoke the COBOL compiler.
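The attribute-driven compilation just described amounts to a dispatch table: the member's attribute selects the create command. The member attributes and CL command names below are real, but the table and helper function are an illustrative Python sketch, not PDM's internals.

```python
# Hedged sketch of attribute-driven compilation in the spirit of PDM.

COMPILERS = {
    "CBL":  "CRTCBLPGM",   # COBOL source -> COBOL compiler
    "RPG":  "CRTRPGPGM",   # RPG source   -> RPG compiler
    "PF":   "CRTPF",       # DDS for a physical file
    "DSPF": "CRTDSPF",     # DDS for a display file
}

def compile_member(member_name, attribute):
    """Pick the create command from the member's attribute, as PDM does."""
    command = COMPILERS.get(attribute)
    if command is None:
        raise ValueError(f"No compiler registered for attribute {attribute}")
    return f"{command} for member {member_name}"

print(compile_member("EMPMAST", "PF"))   # CRTPF for member EMPMAST
```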


Screen View of Working With Members.


The name of the source physical file is SRCFILE, and it is contained in the library YOURLIB. Although in the above example source members of different types are stored in the same source physical file, you will probably want to store source programs of the same type in a separate source physical file. For example, you may want to keep all your RPG source programs in a PF-SRC file called, say, "RPGSRC" and your DDS source in a PF-SRC file called "DDSSRC". You may also use the standard IBM-supplied names such as "QRPGSRC", "QDDSSRC", and "QCLSRC" for the various PF-SRC files. However, you may choose to include source members of different types belonging to the same application in the same PF-SRC file.

In a data physical file (PF-DTA), the member(s) contain data for use by programs. Normally, a PF-DTA file will have only one member (by default, the member's name is the same as the file name). However, it is possible to include multiple members in a single PF-DTA file.

It is important to understand, at this point, the difference between "source" and "data" with regard to data physical files. In the "Work with Members Using PDM" screen above, the "PF" member, "EMPMAST", contains DDS source that defines a physical file. When this source member is compiled, it produces a *FILE object with attribute PF-DTA. This compiled object is the actual file that is used to hold data records.


A data logical file (*FILE LF-DTA) is a data file that contains no actual data, but provides a different method of viewing the data of the accompanying data physical file(s) which it internally references. It is similar to the concept of a "view" in SQL. A data logical file is described to the system using DDS; when the DDS source is compiled, a *FILE object with the attribute LF-DTA is produced.

Another *FILE object type that you may encounter in your programming courses is the device file. A device file contains a description of how data is to be presented to a program from a device, or vice versa. Two common types of device files are printer files (*FILE PRTF) and display files (*FILE DSPF). A printer file describes the attributes that printed output will have, such as the length and width of a printed page; it can be created using the "Create Printer File" (CRTPRTF) command. A display file describes what information is to be displayed and where it is to be displayed on the screen of a display station. One way of defining and creating a display file is with the Screen Design Aid (SDA) utility.
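The "view" analogy can be made concrete with standard SQL, shown here through Python's sqlite3 module rather than AS/400 DDS; the table and view names are made up. The "physical file" is the table holding the rows, and the "logical file" is a view that selects and reorders them without storing any data of its own.

```python
# A logical file holds no data of its own, much like an SQL view.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE empmast (empno INTEGER, name TEXT, dept TEXT)")
con.executemany("INSERT INTO empmast VALUES (?, ?, ?)",
                [(2, "RAO", "EDP"), (1, "PANDEY", "EDP"), (3, "SINHA", "HR")])

# The view references the table's data without storing any rows itself,
# presenting the same records in a different order with fewer fields.
con.execute("CREATE VIEW empbyname AS "
            "SELECT name, empno FROM empmast ORDER BY name")

rows = con.execute("SELECT * FROM empbyname").fetchall()
print(rows)   # [('PANDEY', 1), ('RAO', 2), ('SINHA', 3)]
```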

Other Object Types:

A program object (*PGM) is a compiled program. The attribute for a *PGM object indicates the language the program was written in. For instance, when a COBOL source program is compiled, it produces an object of type *PGM with the attribute CBL.

An important object type on the AS/400 is the output queue (*OUTQ). On the AS/400, whenever something is printed, the output goes to an output queue and stays there as a spooled file. A spooled file, like a member, is not an object itself but a subset of an object. The spooled file stays in the output queue until it is directed to a printer or removed. An output queue has already been created for you with the creation of your user profile; its name is normally the same as that of your user profile.


The relationships between the various objects that have been discussed in this section are as follows:

Relationships between the various Objects.

Utilities incorporated in AS/400



Program Development Manager (PDM):

The Program Development Manager (PDM) is a set of utilities under OS/400 designed to simplify the creation and development of software. It automates file and member creation, editing, compilation and program execution, and allows the programmer to manage their environment from a set of standard menus.

Source Entry Utility (SEU):

The IBM AS/400 provides an integrated set of Application Development Tools (ADT) to design, develop, and maintain applications. One such tool is the Program Development Manager (PDM), described above, which offers:
1. An integrated application development environment.
2. List-oriented selection of items for development or maintenance.
3. An extendable interface to tools through user-defined options.

Another tool is the Source Entry Utility (SEU), a full-screen editor that provides syntax checking of source statements. PDM is one tool that may be used to access SEU.

Data File Utility (DFU):

The Data File Utility (DFU) is a program generator that helps you create programs to enter data, update files, and make file inquiries. You do not need to know a programming language to use DFU: you create the program by responding to a series of displays, and DFU generates the program for you.

DFU provides a quick way of updating a file using a temporary program, so you do not have to define a DFU program first. DFU also allows you to create database maintenance programs faster than you could by using programming languages (for example, RPG). DFU programs can perform several jobs; for example, a single DFU program can allow you to enter new records into a file, update fields within existing records, or perform file inquiry tasks.

DFU creates data entry programs from definitions based on the descriptions of existing database files. You must create an Interactive Data Definition Utility (IDDU), Data Description Specifications (DDS), or RPG II description, and then create physical or logical files based on the description. The descriptions are then used during the definition of your DFU program. After you have defined a program, you can recall and run that program as often as required.

Creating a DFU Program:



To create a DFU program for data entry, select the record formats and fields you want to use. The data entry displays you request are then created. For example, if you use two record formats, DFU creates two different data entry displays, and you can switch between the two formats while you are running your program. To help you create your program, DFU:
1. Reviews available database record formats.
2. Reviews field specifications for a record.
3. Reviews definition status.
4. Shows help text to explain the fields in a definition display.

DFU also allows you to print a summary of the defined program that shows all your selected fields and record formats.

Running a Program:
While running a DFU program, you can:
1. Change, delete, or display a record in a file.
2. Add new records to a file by typing data into the displayed fields.
3. Change or view records in a file by typing an approximate key value and then using the page keys to locate the desired record.
4. Select new record formats (or types, if the file is RPG-defined).
5. Retrieve the next or previous record for any record format or type.
6. Automatically duplicate one or more fields. This is useful when a data file contains a field that is the same in each record and you do not want to retype it each time.
7. Print the program-specified fields of the current database record when in Display mode.
8. Show the status of the DFU program that is running.
9. Present the total number of additions, deletions, and changes processed during the current DFU session.
10. Print an audit report listing the changes made to a data file.
11. Automatically generate key values or Relative Record Numbers (RRN).
12. Accumulate the sum of additions and subtractions in a specific field in a record.
13. Add a positive or negative integer to the field for each new consecutive record.
14. Update records while rolling through the file.
15. Change the key value during Change mode while running the DFU program.
16. Initialize fields with a default value.
17. Hide fields on the data entry display by specifying both Output only and Non-display for the fields.
18. See field values displayed in different forms by using an edit word when an edit code does not give the desired editing.
19. Validate data entered into fields.

Data File Utility (DFU) Menu:


The AS/400 DFU menu allows you to select options to run, create, change, or delete a DFU program, or run a temporary program.

Screen view of DFU option menu.


For each option, DFU begins a prompting sequence of displays that takes you through all the necessary steps. Following is a brief description of each option:
1. Run a DFU program: allows you to select and run an existing DFU program.
2. Create a DFU program: allows you to define a new DFU program.
3. Change a DFU program: allows you to change an existing DFU program.
4. Delete a DFU program: allows you to delete an existing DFU program.
5. Update data using temporary program: allows you to change or add data to a file without having to predefine a DFU program.


Distributed Data Management Support:

DFU supports AS/400 DDM files in DFU program definition and operation. Distributed Data Management (DDM) accesses data files that reside on remote IBM systems, allowing you to retrieve, add, update, or delete records in a file on another system. In addition, a remote system can access your system's database for record retrieval.

Screen Design Aid (SDA):


You can use the Screen Design Aid (SDA) to perform the following tasks:
1. Design a menu to present a list of options from which the user makes a selection.
2. Design a display to help the user navigate through an application program.
3. Create online help information for displays and menus.

Advantages of SDA:
SDA offers several advantages over traditional methods of designing displays because it:
1. Creates data description specifications (DDS). You do not need extensive knowledge of the DDS coding forms, keywords, or syntax to use SDA.
2. Presents displays in functional groups to make DDS keyword selection easier at the file, record, or field level.
3. Allows you to select fields from existing database files to design a display.
4. Allows you to see the display you are designing or changing as you work on it.
5. Allows you to test displays with the data and status of the condition indicators specified for each test.
6. Allows you to create the menus and the message files that Application System/400* (AS/400*) environment SDA uses to run the menus.
7. Allows you to create the menus and the control language (CL) programs that System/38 environment SDA uses to run the menus.
8. Allows you to create a display file from the DDS source statements that SDA creates.
9. Supplies error messages with explanations. Diagnostics are supplied for conflicting source statements when you select DDS keywords.

SDA concepts & Terminology:


The concepts and terminology used in SDA include:
1. The relationship between display files, records, and fields.
2. Terms used in SDA.
3. Special considerations for menus.


Relationship between Display Files, Records, and Fields: When you work with SDA, you need to understand the relationship between display files, records, and fields. A display file contains one or more records. Each record specifies all the characteristics of one display. Each display is composed of fields that are designated as input, output, both (input and output), or constants.
Description of Terms Used in SDA:

The following is a brief explanation of some of the terms that you will encounter while using SDA: keyword, field, record, member, and file.

Keyword: You use keywords to define displays, fields, records, and files:
1. When defining a field, you use field-level keywords.
2. When defining a display (record), you use record-level keywords.
3. When defining a file (all the records), you use file-level keywords.

The set of keywords available on the AS/400 system makes up a language called Data Description Specifications (DDS). On the AS/400 system, displays are described by DDS, which groups all the fields on one display into one record, and all the records within a member into a file.
Field:

The term field is used in two different ways:
1. In DDS, a field is an item that you specify for defining a display.
2. In a database file, a field is an item that you define for storing data.

Record: The term record is used in two different ways:
1. In DDS, all the fields on a display are grouped in a record. To DDS, a record represents a display. When you define a display, SDA prompts you for a record name to be used for the display. When you compile your DDS to create a display file, you reference each display in the display file by its record name. When you test a display file, SDA prompts you for the record name within the display file that you want to test.
2. In a database file, a record is a group of fields and their definitions. The record also stores data from the fields. The record itself is in a database file. When you retrieve field definitions from a database file, SDA prompts you for the name of the record and the database file.

Member: A member stores DDS statements. When you define a display in SDA, corresponding DDS source statements are produced. When you want to store the DDS source, SDA prompts you for a name for the member, source file, and library where you want the source to be stored. The member is stored in a database source file, which you compile to create a display file.

File: The term file is used in three different ways:
1. For a database file containing data definitions.
2. For a database source file that contains the DDS source member.
3. For a display file that contains compiled DDS.

SDA produces DDS for the displays that you define. You must compile the DDS into a display file before you can use the display.


Work Management

Introduction:

The AS/400 contains many system objects, and the way they interrelate helps determine the efficiency of your system. Work management functions control the work done on the system. When the Operating System/400 (OS/400) licensed program is installed, it includes a work management environment that supports all interactive and batch work. Work management supports the commands and internal functions necessary to control system operation and the daily workload on the system. In addition, work management contains the functions you need to distribute resources among your applications.

AS/400 work management allows you to control the way work is managed on your system. The OS/400 licensed program allows you to tailor this support or to create your own work management environment; to do this, you need to understand the work management concepts. Because IBM ships all AS/400 systems with everything necessary to run typical operations, you are not required to learn about work management to use your system. However, to change the way your system manages work to better meet your needs, affect the order in which your jobs are run, solve a problem, improve the system's performance, or simply look at jobs on the system, you need an understanding of work management. In other words, if you understand work management, you know what affects the various pieces of the system, and how to change them so they operate most efficiently.

Because of the system's complexity, learning about work management in stages can be helpful. You could begin by deciding what work you need the system to do and how you want that work done.

Simple System:

The purpose of the system is to perform work. Work enters, work is processed, and work leaves the system. If you think of work management in these three terms, work management will be easier to understand. Work management describes where work enters the system, where and with what resources work is processed, and where output from work goes.


A Simple System
Complex System:

A complex system is many simple systems operating together. Using this definition, the simple systems within the AS/400 system are the subsystems.

What is ILE?:

ILE is a new set of tools and associated system support designed to enhance program development on the AS/400 system. The capabilities of this new model can be exploited only by programs produced by the new ILE family of compilers. That family includes ILE RPG/400*, ILE COBOL/400*, ILE C/400*, and ILE CL.

What are the benefits of ILE?:

ILE offers numerous benefits over previous program models. Those benefits include binding, modularity, reusable components, common run-time services, coexistence, and a source debugger. They also include better control over resources, better control over language interactions, better code optimization, a better environment for C, and a foundation for the future.

Binding:

The benefit of binding is that it helps reduce the overhead associated with calling programs. Binding the modules together speeds up the call. The previous call mechanism is still available, but there is also a faster alternative. To differentiate between the two types of calls, the previous method is referred to as a dynamic or external program call, and the ILE method is referred to as a static or bound procedure call. The binding capability, together with the resulting improvement in call performance, makes it far more practical to develop applications in a highly modular fashion. An ILE compiler does not produce a program that can be run. Rather, it produces a module object (*MODULE) that can be combined (bound) with other modules to form a single runnable unit; that is, a program object (*PGM). Just as you can call an RPG program from a COBOL program, ILE allows you to bind modules written in different languages. Therefore, it is possible to create a single runnable program that consists of modules written separately in RPG, COBOL, C, and CL.
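The difference between the two call flavors can be illustrated with a loose Python analogy (an assumption for illustration only, not how ILE works internally): a dynamic program call resolves the target by name every time it is made, while a bound procedure call has already resolved the target, so each call is a direct jump.

```python
# Analogy for dynamic (resolve-by-name) vs. bound (pre-resolved) calls.
import importlib

def dynamic_call(module_name, func_name, *args):
    """Resolve the callee by name at call time, like a dynamic program call."""
    module = importlib.import_module(module_name)
    return getattr(module, func_name)(*args)

# "Binding" resolves the reference once, ahead of time.
bound_sqrt = importlib.import_module("math").sqrt

print(dynamic_call("math", "sqrt", 16.0))   # 4.0 -- name lookup on every call
print(bound_sqrt(16.0))                     # 4.0 -- lookup already done
```

Both calls produce the same answer; the bound form simply pays the resolution cost once instead of on every call, which is the essence of the performance argument made above.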


Modularity:
The benefits of using a modular approach to application programming include the following:
1. Faster compile time: The smaller the piece of code we compile, the faster the compiler can process it. This benefit is particularly important during maintenance, because often only a line or two needs to be changed, yet when we change two lines, we may have to recompile 2000 lines. That is hardly an efficient use of resources. If we modularize the code and take advantage of the binding capabilities of ILE, we may need to recompile only 100 or 200 lines. Even with the binding step included, this process is considerably faster.
2. Simplified maintenance: When updating a very large program, it is very difficult to understand exactly what is going on. This is particularly true if the original programmer wrote in a different style from your own. A smaller piece of code tends to represent a single function, and it is far easier to grasp its inner workings. The logical flow becomes more obvious, and when you make changes, you are far less likely to introduce unwanted side effects.
3. Simplified testing: Smaller compilation units encourage you to test functions in isolation. This isolation helps to ensure that test coverage is complete; that is, that all possible inputs and logic paths are tested.
4. Better use of programming resources: Modularity lends itself to greater division of labor. When you write large programs, it is difficult (if not impossible) to subdivide the work. Coding all parts of a program may stretch the talents of a junior programmer or waste the skills of a senior programmer.
5. Easier migration of code from other platforms: Programs written on other platforms, such as UNIX**, are often modular. Those modules can be migrated to the AS/400 system and incorporated into an ILE program.

Reusable Components:

ILE allows you to select packages of routines that can be blended into your own programs. Routines written in any ILE language can be used by all AS/400 ILE compiler users. The fact that programmers can write in the language of their choice ensures you the widest possible selection of routines. The same mechanisms that IBM and other vendors use to deliver these packages to you are available for you to use in your own applications. Your installation can develop its own set of standard routines, and do so in any language it chooses. Not only can you use off-the-shelf routines in your own applications; you can also develop routines in the ILE language of your choice and market them to users of any ILE language.

Common Run-Time Services:

A selection of off-the-shelf components (bindable APIs) is supplied as part of ILE, ready to be incorporated into your applications. These APIs provide services such as:
1. Date and time manipulation.
2. Message handling.
3. Math routines.
4. Greater control over screen handling.
5. Dynamic storage allocation.

Coexistence with Existing Applications:

ILE programs can coexist with existing OPM programs. ILE programs can call OPM programs and other ILE programs; similarly, OPM programs can call ILE programs and other OPM programs. Therefore, with careful planning, it is possible to make a gradual transition to ILE.

Source Debugger:

The source debugger allows you to debug ILE programs and service programs.


AS/400 Security

Topics covered:
1. Overall Principles.
2. System Values.
3. User Profiles.
4. Object Authority.
5. Authorization Lists.
6. The Security Model.
7. Application Security.
8. Communications Basics.
9. Communications Security.

Overall Principles:
All system entities are OBJECTS: files, programs, commands, device descriptions, job queues, etc. Objects have owners, who have privileged rights to the object. Other users may be granted specific authority to an object by its owner or by a system security officer. Any principal not specifically given or denied authority to a specific object receives the default authority assigned to *PUBLIC; *PUBLIC is the "and everyone else" authority. Certain principals may be given broad authority over entire classes of objects; this is done through Special Authorities.

System Values:
System Values determine overall operating system characteristics:
1. Security.
2. Character set.
3. System date and time.
4. Default device naming.
5. Minimum operating system memory allocation.

System security values include:
1. Overall level of security.
2. Permitted password characteristics.
3. Default library list.
4. Auditing of security events.
5. Protection of operating system objects and memory.

User Profiles:
User Profiles provide system access to principals. User profiles are required to sign on, to submit a batch job, or to start a communications process. User profiles define the initial user environment:
1. Initial program and menu.
2. Print queue, job description, message queue, etc.

User profiles contain the Special Authorities, which provide broad classes of high-level access: *ALLOBJ, *AUDIT, *SECADM, and *SERVICE. A profile may be assigned to an individual user or to a group; individuals may be members of a group, which is a good way to organize profiles. Some profiles exist only for purposes of object ownership and are not intended to provide system access.

Object Authority:
Object Authority gives the access rights to an object. Authorities are classed as either management or data authorities. The primitive management authorities are: operational, management, existence, alter, and reference. The primitive data authorities are: read, add, update, delete, and execute. Authorities may be combined in conventional ways: *ALL, *CHANGE, *EXCLUDE.

Authority can be assigned to profiles in three ways: private (to a specific user), group (to a user through membership in a group), and public (the "and everyone else" authority, applying to anyone not otherwise designated).
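The combined values bundle the primitives listed above. A sketch of the commonly cited expansions follows; *USE is an additional common value not listed in this report, and the exact sets should be verified against IBM's Security Reference for the release in use:

```python
# Combined object authorities expanded into primitive ones (sketch; verify
# against IBM's documentation for the exact mappings on a given release).
MGMT = {"*OBJOPR", "*OBJMGT", "*OBJEXIST", "*OBJALTER", "*OBJREF"}
DATA = {"*READ", "*ADD", "*UPD", "*DLT", "*EXECUTE"}

COMBINED = {
    "*ALL":     MGMT | DATA,                       # every primitive authority
    "*CHANGE":  {"*OBJOPR"} | DATA,                # operate on the object + all data rights
    "*USE":     {"*OBJOPR", "*READ", "*EXECUTE"},  # read-only style access
    "*EXCLUDE": set(),                             # explicitly no access at all
}

def allows(combined, primitive):
    return primitive in COMBINED[combined]

print(allows("*CHANGE", "*UPD"))  # True
print(allows("*USE", "*DLT"))     # False
```

Note that *EXCLUDE is an explicit empty grant, which is different from having no entry at all: an *EXCLUDE entry stops the fall-through to *PUBLIC.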

Authorization Lists:
A feature carried over from the S/36: an authorization list is a list of specific users having similar rights to a set of objects. Authorities on the list can be changed in real time, with no object locking involved.
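The point of an authorization list is that many objects reference one shared list, so a single change takes effect for all of them at once. A minimal sketch, with hypothetical object and user names:

```python
# Sketch: one authorization list shared by several objects (hypothetical names).
autl = {"CLERK1": "*USE", "SUPER1": "*CHANGE", "*PUBLIC": "*EXCLUDE"}

# Objects reference the shared list rather than carrying private authorities.
objects = {"ORDERS": autl, "INVOICES": autl}

def authority_for(obj_name, user):
    acl = objects[obj_name]
    return acl.get(user, acl["*PUBLIC"])

print(authority_for("ORDERS", "CLERK1"))    # *USE
autl["CLERK1"] = "*CHANGE"                  # one edit to the list...
print(authority_for("INVOICES", "CLERK1"))  # ...affects every object: *CHANGE
```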

AS/400 Security Model:

(Figure: the security model invoked in the AS/400.)


Application Security:
Implementation in an application environment relies on: library security, program-enabled security, and application menus (with limited capability to keep users within the menus).
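The menu approach can be illustrated with a toy dispatcher: a limited-capability user can only pick from the options the menu offers, so anything off-menu is rejected. The menu entries here are hypothetical:

```python
# Sketch of menu-based application security: the user chooses only from
# offered options; arbitrary commands are rejected (hypothetical menu).
MENU = {
    "1":  "Display orders",
    "2":  "Print report",
    "90": "Sign off",
}

def handle_option(choice):
    if choice not in MENU:
        return "Invalid option"   # anything off-menu is refused
    return MENU[choice]

print(handle_option("1"))
print(handle_option("DLTF"))  # a raw command, not a menu option: rejected
```

As the communications security notes below caution, this alone is weak protection; it should be backed by real object authority.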

Communications Basics:

AS/400 communications are SNA-derived, with an emphasis on the minicomputer environment. APPN is the routing (network-layer) protocol; APPC is the session/transport-layer protocol. The model is a minicomputer with a line, a remote controller, and remote devices. This model is applied to LAN-based configurations, TCP/IP, and all other communications.

Communication Security:

Secure device objects. Secure the authority to change the configuration. Use a location password for APPC communications. Don't use Secure Location (which treats the remote system as trusted). Don't rely on menu security and limited capability alone. Use outboard security devices when needed.


Concluding remarks for AS/400:


The architecture of AS/400 is characterized by a number of features normally associated with poor performance, such as a hardware-independent operating system, a relational database, pervasive late binding, and a broad range of functions. However, through extensive use of techniques such as low-level implementations of highly used primitives, an innovative storage management system, careful scoping of early- and late-bound features based on function and performance tradeoffs, and many other optimization techniques, the AS/400 exhibits competitive price and performance characteristics in the commercial application marketplace.


I hereby conclude that my vocational training was highly instructive and was precisely oriented to my benefit. It was a privilege to gain information on the IBM Application System 400 and to witness its utility and functionality in software development, database maintenance, and much more. During my training period I learned the scope of work of some major departments, including E.D.P, and the schematic manner in which the flow of data in the industry is carried out and maintained. It was also of great interest to watch the practical field implementation of the concepts built in the classroom. Finally, I would summarize my training as highly knowledge-worthy and beneficial to me.

>>>> Thanking You <<<<



I took assistance from, and consulted, the following sources of information:

IBM AS/400 user manuals provided at the company.
Mastering The AS/400, authored by Jerry Fottral.
Official website of IBM: www.ibm.com
Encyclopedia website: www.wikipedia.com

