
Client Server System

UNIT I
Client/Server System concepts: Introduction and Concepts, 2-Tier Architecture, 3-Tier Architecture, n-Tier Architecture, Benefits of Client/Server, Case study of n-Tier Architecture, Client/Server Models, Gartner Classification, Middleware, Database connectivity and its needs, Upsizing, Downsizing, Rightsizing, Characteristics, Types of Servers and Clients, Future of Client/Server Computing.

INTRODUCTION AND CONCEPTS:


Client-Server computing:
Client/server is a computational architecture that involves client processes requesting service from server processes.

Client/server computing is the logical extension of modular programming. Modular programming has as its fundamental assumption that separation of a large piece of software into its constituent parts ("modules") creates the possibility for easier development and better maintainability. Client/server computing takes this a step farther by recognizing that those modules need not all be executed within the same memory space. With this architecture, the calling module becomes the "client" (that which requests a service), and the called module becomes the "server" (that which provides the service). The logical extension of this is to have clients and servers running on the appropriate hardware and software platforms for their functions: for example, database management system servers running on platforms specially designed and configured to perform queries, or file servers running on platforms with special elements for managing files. It is this latter perspective that has created the widely-believed myth that client/server has something to do with PCs or UNIX machines.
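To make the request-reply idea concrete, here is a minimal sketch in Python using only the standard library: a server process waits passively for a request, performs the work, and replies, while the client initiates the dialog. The port number and the toy "UPPER" service are invented for illustration and are not from any particular product.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9090            # hypothetical address for the demo
srv = socket.create_server((HOST, PORT))  # server binds and listens first

def server():
    """Server process: waits passively for a request, then replies."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode()        # e.g. "UPPER:hello"
        verb, _, payload = request.partition(":")
        reply = payload.upper() if verb == "UPPER" else "ERROR"
        conn.sendall(reply.encode())              # dispatch the response

threading.Thread(target=server, daemon=True).start()

# Client process: initiates the dialog by requesting the service.
with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(b"UPPER:hello")
    print(sock.recv(1024).decode())               # prints: HELLO
```

Running the script prints HELLO; the same request-reply pattern, scaled up, underlies everything else in this unit.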

Client process:
The client is a process or program that sends a message to a server process or program, requesting the server to perform a task or service. Client programs usually manage the user-interface portion of the application, validate data entered by the user, dispatch requests to server programs, and sometimes execute business logic. The client-based process is the front-end of the application that the user sees and interacts with. The client process contains solution-specific logic and provides the interface between the user and the rest of the application system. The client process also manages the local resources that the user interacts with, such as the monitor, keyboard, workstation CPU and peripherals. One of the key elements of a client workstation is the graphical user interface (GUI). Normally part of the operating system, the window manager detects user actions, manages the windows on the display and displays the data in the windows.

Server process:
A server process is a process or program that fulfills the client request by performing the task requested. Server programs generally receive requests from client programs, execute database retrievals and updates, manage data integrity and dispatch responses to client requests. Sometimes server programs execute common or complex business logic. The server-based process may run on another machine on the network; this server could be the host operating system or network file server, in which case the server provides both file system services and application services. Or, in some cases, another desktop machine provides the application services. The server process acts as a software engine that manages shared resources such as databases, printers, communication links, or high-powered processors. The server process performs the back-end tasks that are common to similar applications.


Cooperative Processing:
Cooperative processing is computing which requires two or more distinct processors to complete a single transaction.

Cooperative processing is related to both distributed and client/server processing. It is a form of distributed computing where two or more distinct processes are required to complete a single business transaction. Usually, these programs interact and execute concurrently on different processors. Cooperative processing can also be considered to be a style of client/server processing if communication between processors is performed through a message passing architecture.

Distributed Processing:
Distributed processing is the distribution of applications and business logic across multiple processing platforms. Distributed processing implies that processing will occur on more than one processor in order for a transaction to be completed. In other words, processing is distributed across two or more machines and the processes are most likely not running at the same time, i.e. each process performs part of an application in a sequence. Often the data used in a distributed processing environment is also distributed across platforms.


TWO-TIER ARCHITECTURE:

Two-tier architecture is where a client talks directly to a server, with no intervening server. It is typically used in small environments (fewer than 50 users).

In two-tier client/server architectures, the client interface is usually located in the user's desktop environment and the database management services are in a more powerful server that services many clients. The user system interface environment and the database management server environment split the processing management duties. The database management server contains stored procedures and triggers. Two-tier architectures are typical of environments with few clients, homogeneous environments, and closed environments (e.g. DBMS).

The characteristics of two-tier architecture include:
1. Application components are distributed between the server and client software.
2. In addition to part of the application software, the server also stores the data, and all data accesses are through the server.
3. The presentation to the user is handled strictly by the client software.
4. The PC clients assume the bulk of the responsibility for the application logic.
5. The server assumes the bulk of the responsibility for data integrity checks, query capabilities, data extraction and most of the data-intensive tasks, including sending the appropriate data to the appropriate clients.

The whole point of client-server architecture is to distribute components of an application between a client and a server so that, for example, a database can reside on a server machine (for example a UNIX box or mainframe), a user interface can reside on a client machine (a desktop PC), and the business logic can reside in either or both components. Client/server applications started with a simple, 2-tiered model consisting of a client and an application server.

Fat Client/Server Deployment:

The most common implementation is a 'fat' client - 'thin' server architecture, placing application logic in the client. The database simply reports the results of queries implemented via dynamic SQL using a call level interface (CLI) such as Microsoft's Open Database Connectivity (ODBC).
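As a rough sketch of this fat-client style, the snippet below sends dynamic SQL through ODBC from the client, where the business logic (the totalling) also runs. It assumes the third-party pyodbc package; the DSN, credentials, and orders table are hypothetical.

```python
# Fat-client sketch: application logic lives on the client, which sends
# dynamic SQL to the database through ODBC.
import pyodbc  # third-party ODBC bridge, assumed installed

conn = pyodbc.connect("DSN=SalesDB;UID=app;PWD=secret")  # hypothetical DSN
cursor = conn.cursor()

# Business logic (filtering and totalling) executes here, on the client.
cursor.execute("SELECT region, amount FROM orders WHERE amount > ?", 1000)
total = sum(row.amount for row in cursor.fetchall())
print(f"High-value order total: {total}")
conn.close()
```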

Thin Client/Server Deployment:

An alternate approach is to use a thin client - fat server configuration that invokes procedures stored at the database server. The term thin client generally refers to user devices whose functionality is minimized, either to reduce the cost of ownership per desktop or to provide more user flexibility and mobility. In either case, presentation is handled exclusively by the client, processing is split between client and server, and data is stored on and accessed through the server. Remote database transport protocols such as SQL-Net are used to carry the transaction. The network 'footprint' is very large per query, so the effective bandwidth of the network, and thus the corresponding number of users who can effectively use the network, is reduced. Furthermore, network transaction size and query transaction speed are slowed by this heavy interaction. These architectures are not intended for mission-critical applications. Development tools that generate 2-tiered fat client implementations include PowerBuilder, Delphi, Visual Basic, and Uniface. The fat server approach, using stored procedures, is more effective in gaining performance, because the network footprint, although still heavy, is lighter than that of a fat client.
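By contrast, a thin-client sketch under the same assumptions (pyodbc, a hypothetical DSN, and a hypothetical stored procedure) pushes the work into the database server; the client only invokes the procedure via the standard ODBC call escape syntax and displays the result.

```python
# Thin-client sketch: business logic runs in a stored procedure on the
# server; the client only invokes it. Procedure and DSN are hypothetical,
# and calling conventions vary slightly by DBMS.
import pyodbc

conn = pyodbc.connect("DSN=SalesDB;UID=app;PWD=secret")
cursor = conn.cursor()

# ODBC call escape sequence; assumes the procedure returns a one-row result.
cursor.execute("{CALL total_high_value_orders (?)}", 1000)
row = cursor.fetchone()
print(f"High-value order total: {row[0]}")
conn.close()
```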

Example:
The UNIX print spooler is an example of two-tier client-server architecture. The client (the UNIX lp command) reads a file to be printed and passes the file's contents to the server. The server performs a service by printing the file.

Advantages:
Accessibility: The server can be accessed remotely and across multiple platforms.
Speed: Good application development speed.
Durability: Most tools for the 2-tier architecture are very robust.
Development: Ease of application development.
Economy: Lower total costs than mainframe legacy systems.
User friendly: It uses the familiar point-and-click interface.
Stability: Two-tier architectures work well in relatively homogeneous environments with fairly static business rules.

Disadvantages:
Non-Adaptability: 2-tier architecture is not suited for dispersed, heterogeneous environments with rapidly changing business logic.
Software Incompatibility: Because the bulk of the application logic is on the client, there is a problem of client software version control and new version redistribution.
Complexity: Security can be complicated because a user may require separate passwords for each SQL server accessed.

THREE-TIER ARCHITECTURE:


Three-tier architecture introduces a server or an "agent" between the client and the server. The role of the agent is many-fold. It can provide translation services (as in adapting a legacy application on a mainframe to a client/server environment), metering services (as in acting as a transaction monitor to limit the number of simultaneous requests to a given server), or intelligent agent services (as in mapping a request to a number of different servers, collating the results, and returning a single response to the client).

The most popular type of n-tier client-server architecture to evolve from two-tier architecture was three-tier architecture, which separated application components into three logical tiers: a presentation layer, a functionality layer, and a data layer. Application components are well-defined and separate processes, each running on a different platform:
1. The user interface, which runs on the user's computer (the client).
2. The functional modules that actually process data. This middle tier runs on a server and is often called the application server.
3. A database management system (DBMS) that stores the data required by the middle tier. This tier runs on a second server called the database server.

In this type of system, the user interface tier communicates only with the business logic tier, never directly with the database access tier. The business logic tier communicates both with the user interface tier and the database access tier. The 3-tier architecture attempts to overcome some of the limitations of 2-tier schemes by separating presentation, processing, and data into separate distinct entities. The middle-tier servers are typically coded in a highly portable, non-proprietary language such as C. Middle-tier functionality servers may be multithreaded and can be accessed by multiple clients, even those from separate applications.


3-Tiered Application Architecture

The client interacts with the middle tier via a standard protocol such as DLL, API, or RPC. The middle tier interacts with the server via standard database protocols. The middle tier contains most of the application logic, translating client calls into database queries and other actions, and translating data from the database into client data in return. If the middle tier is located on the same host as the database, it can be tightly bound to the database via an embedded 3GL interface. This yields a very highly controlled and high-performance interaction, thus avoiding the costly processing and network overhead of SQL-Net, ODBC, or other CLIs. Furthermore, the middle tier can be distributed to a third host to gain processing power capability.
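A minimal sketch of such a middle tier, using only Python's standard library for illustration: a functionality server exposes a business operation over RPC and alone translates it into a database query. The port, database file, table, and function name are all invented for the example.

```python
# Middle-tier "functionality server" sketch: the presentation client
# speaks a standard RPC protocol; only this tier issues SQL to the data
# tier (here stood in for by a local SQLite file).
import sqlite3
from xmlrpc.server import SimpleXMLRPCServer

def customer_balance(customer_id: int) -> float:
    """Business logic: translate an RPC call into a database query."""
    db = sqlite3.connect("accounts.db")   # hypothetical data tier
    row = db.execute(
        "SELECT balance FROM accounts WHERE id = ?", (customer_id,)
    ).fetchone()
    db.close()
    return row[0] if row else 0.0

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(customer_balance)
server.serve_forever()   # presentation tier calls in over RPC
```

A presentation client could then call xmlrpc.client.ServerProxy("http://localhost:8000").customer_balance(42) without knowing any SQL, which is exactly the advantage listed below.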

Advantages of 3-Tier Architecture:


RPC calls provide greater overall system flexibility than SQL calls in 2-tier architectures.
The 3-tier presentation client is not required to understand SQL. This allows firms to access legacy data, and simplifies the introduction of new database technologies.
It provides for more flexible resource allocation. Modularly designed middle-tier code modules can be reused by several applications.
3-tier systems such as Open Software Foundation's Distributed Computing Environment (OSF/DCE) offer additional features to support distributed applications development.
The added modularity makes it easier to modify or replace one tier without affecting the other tiers.
Separating the application functions from the database functions makes it easier to implement load balancing.

N-TIER ARCHITECTURE:

The 3-tier architecture can be extended to N-tiers when the middle tier provides connections to various types of services, integrating and coupling them to the client, and to each other. Partitioning the application logic among various hosts can also create an N-tiered system. Encapsulation of distributed functionality in such a manner provides significant advantages such as reusability, and thus reliability. As applications become Web-oriented, Web server front ends can be used to offload the networking required to service user requests, providing more scalability and introducing points of functional optimization. In this architecture, the client sends HTTP requests for content and presents the responses provided by the application system. On receiving requests, the Web server either returns the content directly or passes it on to a specific application server. The application server might then run CGI scripts for dynamic content, parse database requests, or assemble formatted responses to client queries, accessing data or files as needed from a back-end database server or a file server.

Web-Oriented N-Tiered Architecture
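The division of labor just described can be sketched with Python's standard library. This is purely illustrative: the /app/ path convention and the stubbed application logic are invented, and a production system would run dedicated Web and application servers on separate hosts.

```python
# Web-tier sketch: the HTTP front end serves static content directly and
# delegates dynamic requests to application logic, which in a real system
# would query the back-end database or file server.
from http.server import BaseHTTPRequestHandler, HTTPServer

def application_logic(path: str) -> str:
    # Stand-in for the application server tier.
    return f"Dynamic content for {path}"

class FrontEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/app/"):
            body = application_logic(self.path)  # pass to application tier
        else:
            body = "Static content"              # Web server answers directly
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body.encode())

HTTPServer(("localhost", 8080), FrontEnd).serve_forever()
```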


By segregating each function, system bottlenecks can be more easily identified and cleared by scaling the particular layer that is causing the bottleneck. For example, if the Web server layer is the bottleneck, multiple Web servers can be deployed, with an appropriate server load-balancing solution to ensure effective load balancing across the servers as shown below.

Client Server System

Four-Tiered Architecture with Server Load Balancing

Advantages:


The N-tiered approach has several benefits:
Different aspects of the application can be developed and rolled out independently.
Servers can be optimized separately for database and application server functions.
Servers can be sized appropriately for the requirements of each tier of the architecture.
More overall server horsepower can be deployed.
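As a toy illustration of the server load balancing mentioned above, the sketch below spreads incoming requests across a pool of Web servers in round-robin order. The host names are hypothetical; real deployments use a dedicated balancer appliance or service rather than application code.

```python
# Round-robin dispatch: each request goes to the next server in the pool,
# so load is spread evenly across the scaled-out Web tier.
import itertools

web_servers = ["web1.example.com", "web2.example.com", "web3.example.com"]
next_server = itertools.cycle(web_servers)   # endless round-robin iterator

def route(request_id: int) -> str:
    server = next(next_server)
    return f"request {request_id} -> {server}"

for i in range(6):
    print(route(i))   # requests alternate evenly across the pool
```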


COMPARISON BETWEEN THE TIER ARCHITECTURES:

Stable, low-volume growth: Two-tier
Low reporting and batch processing needs: Two-tier
Minor integration of other technology (e.g., Internet): Two-tier
LAN-based application deployment: Two-tier
Variable system deployment scenarios at different levels of business using LANs & WANs: Three-tier
Regular changes in business logic and rules: Three-tier
Extensive use of Internet or telephony integration: Three-tier
WAN-based application deployment: Three-tier
Variable-demand batch processing: N-tier
Variable-demand report processing: N-tier
Web service process delivery: N-tier
Casual use by many networked clients: N-tier

BENEFITS OF CLIENT/SERVER
A properly designed client/server system provides a company and its employees with numerous benefits. Such a system enables people to do their jobs better by allowing them to focus their time and energies on acquiring new accounts, closing deals and working with customers, rather than on administrative tasks. It provides instant access to information for decision-making, facilitates communication, and reduces time, effort and cost for accomplishing tasks. The following sections outline the major benefits of client/server.

Improved Information Access
A well-designed client/server system provides users with easy access to all the information that they need to get their jobs done. With a few mouse button clicks, the user-friendly front-end application displays information that the user requests. This information may reside at different databases or even on physically separate servers, but the intricacies of data access are hidden from the user. The client/server system also contains powerful features that enable the users to further analyze this retrieved information. Therefore, they can manipulate this information to answer what-if questions. Because all this information access and functionality is provided from a single system, users no longer need to log into several different systems or depend on other people to get their answers.

Increased Productivity
A client/server system increases its users' productivity by providing them with the tools to complete their tasks faster and more easily. For example, a powerful data-entry screen with graphical controls and programming logic to support business rules enables users to enter information more quickly and with fewer errors and omissions. It automatically validates information, performs calculations, and reduces duplicate data entry. Client/server systems can be integrated with other technologies such as e-mail, document imaging, and groupware to lead to additional productivity gains.

Automated Business Processes
A client/server system can automate a company's business processes and be a workflow solution by eliminating a great deal of manual labor and enabling processes to be completed sooner with fewer errors. For example, suppose a company's current business process of completing a purchase order is completely manual. It involves searching through a cabinet to find a purchase order form, filling it out, performing all the calculations with a calculator, determining who should approve it, and then sending it to that person through interoffice mail. A client/server system can automate this process and accomplish it in a fraction of the time. An electronic version of the purchase order can be designed in the front-end application and be available online. Using the GUI, a user quickly enters the information, and the system automatically performs all the calculations. Then the form is automatically routed across the network to the appropriate person (based on a business rule) for approval. The approver immediately receives the purchase order in their electronic in-box for review and does not have to wait for it to arrive through interoffice mail.

Powerful Reporting Capabilities
Because the information in a client/server system is stored in a relational database, the information can be easily queried for reporting purposes. Programmers can, of course, quickly create new reports by using SQL. However, client/server systems can provide features that enable end users to create their own reports and customize existing ones without having to learn SQL. With these capabilities, users can generate reports much faster than in the past and are no longer completely dependent on IS to provide reports. Those people who used to take a hard-copy report and then retype all the information into a spreadsheet so that they could regenerate reports save a tremendous amount of time by using the client/server system.

Improved Customer Service
A company can improve its customer service by providing faster answers and minimizing the number of times that a customer has to contact the company. A client/server system enables customer service representatives to serve their customers better, and one key reason is its ability to provide information from different data sources. A bank, for example, may have several physically separate databases. Each of these databases stores a specific type of customer account information, such as savings, mortgage and student loan. Currently, a customer who has all three types of accounts with this bank and needs information on all of them has to call three different numbers, which is very inconvenient. A client/server system can be designed to provide a customer service representative with access to information from all three databases. Therefore, the customer only needs to call one number. Customers are looking for this type of convenience.

Rapid Application Development
Most client/server development tools enable programmers to create applications by taking advantage of object-oriented programming techniques and developing application modules. By reusing objects and code that have already been written for existing systems, new client/server systems can be developed much faster. GUI design tools provide drag-and-drop facilities that allow programmers to quickly create visual screens without having to program the underlying code. Client/server applications can be easily modified in case a change, such as a new business rule, is necessary. In addition, client/server tools can be used to quickly create system prototypes that enable the developer to demonstrate the system to users and get immediate feedback.

Cost Reductions and Savings
A client/server system reduces costs in a number of ways, some of which are easier to quantify than others. Many companies have replaced their mainframe systems with client/server and saved millions of dollars in annual maintenance costs. Others have benefited from the online information access and significantly reduced their paper-associated costs, including its purchase, storage and distribution. This online information also enables people to quickly identify marketing campaigns and sales strategies that are failing and then cancel them before wasting any more money. Because people can accomplish their tasks faster, they save time and effort, which also translates into financial savings. Also, as employees are empowered and able to do more, the number of employees can be reduced, if that is a company goal.

Increased Revenue
A client/server system does not generate revenue itself. However, by providing easy access to crucial information along with data analysis tools, it can play a significant role in contributing to increased revenue by enabling people to identify opportunities and to make the right decisions. The following are some examples of how a client/server system contributes to increased revenue:

Enables a new product to be developed faster so that it hits the market sooner
Enables a company to spot sales opportunities faster
Identifies which marketing campaigns work well and should be used again
Identifies what types of products and features a particular customer base wants
Identifies sales trends that you can use to your advantage

Quick Response to the Changing Marketplace

Businesses are changing rapidly. The marketplace is now more competitive than ever and will continue to be more and more so. Companies are faced with the challenge of keeping their business up-to-date, and they must do business efficiently in order to remain in the marketplace. The computer systems that were developed in the 1980s tended to be based around a centralized computer system. LANs were connected to this system, yet little or no real business processing was done on the LANs or the PCs. Any change to the business was made on the centralized system. If a new product was to be sold or a new accounting system was to be implemented, it was normally placed on the main computer. As time went on and more and more systems were placed on the centralized computer, the costs of running this machine rose. The time to change this system if a new business function was needed also increased. Over time, this situation has become so bad that it is not uncommon to hear of systems taking in excess of three years to develop and implement when the product needs to be ready for the marketplace in six months.

N-TIER ARCHITECTURE Definition:


In software engineering, multi-tier architecture (often referred to as n-tier architecture) is a client-server architecture in which an application is executed by more than one distinct software agent. For example, an application that uses middleware to service data requests between a user and a database employs multi-tier architecture. The most widespread use of "multi-tier architecture" refers to three-tier architecture.

CASE STUDY - TWENTIETH CENTURY FOX

Upgrading the Financials System in a High-Utilization Organization

Challenge: To upgrade the ERP Financials system and transition to an internet-enabled self-service applications environment while supporting a large user base and maintaining a high level of uptime.

Solution: In order to ensure that web-based n-tier architecture met all of Fox's requirements, the CherryRoad team conducted comprehensive pre-upgrade planning, load testing and system monitoring. CherryRoad's rigorous, structured approach to load testing incorporated a proven third-party automated testing product.

Benefits: Twentieth Century Fox was able to roll out their Financials system to a large user population without experiencing any significant performance issues.

Twentieth Century Fox is a $4 billion integrated entertainment company with operations in three business segments: Filmed Entertainment, Twentieth Century Fox Television Studios, and Cable Network Programming. A News Corporation Company subsidiary, Fox is based in Beverly Hills, California and has more than 8,000 employees and contractors.

Transitioning to a Web-Based N-Tier Architecture
When Fox made the decision to upgrade its ERP Financials system, it faced some of the same challenges that many large enterprises encounter in transitioning to a web-based architecture, including:
Ensuring acceptable online performance for a large number (500) of end-users
Supporting a high volume of batch processes, especially during peak periods of report processing
Maintaining high uptime requirements
Minimizing new hardware procurement costs

Fox engaged CherryRoad Technologies for the upgrade, based on CherryRoad's successful past work with the company on Financials implementations, upgrades, and evaluations. CherryRoad had implemented Fox's Accounts Receivable and Billing systems, then upgraded the overall Financials system, and implemented Asset Management, all successful projects, completed on time and on budget. For the upgrade, CherryRoad laid out a plan to ensure that the web-based architecture met all of Fox's requirements:

Pre-Upgrade Planning: Before the upgrade, perform an Upgrade Readiness Evaluation, including designing a comprehensive hardware architecture that included all components, costs, and configuration of the web-based n-tier architecture.
Load Testing: During the upgrade, utilize Segue SilkPerformer utilities to stress test the online and batch components to determine their upper limits.
System Monitoring: For post-production support, establish monitoring procedures and make additional recommendations to enable IT to constantly monitor all components of the architecture to proactively prevent issues.

Pre-Upgrade Planning
Prior to the upgrade, CherryRoad initiated the project with a Readiness Assessment, which included architecting the new hardware environment. To ensure the new web-based architecture would support their extensive online and batch requirements, the CherryRoad team used industry benchmarks, best practices, and normalized hardware metrics to define baselines. They quickly captured critical data to properly size infrastructure requirements and configured report servers to eliminate bottlenecks. The team also addressed critical factors in designing the hardware architecture, including issues of scalability, administration, and load balancing and failover. They used multiple smaller servers in a server cluster, a more scalable solution than a single large server. In addition, in selecting servers, they identified the vendors' latest product lines, to maximize support and maintenance, and used the initial baseline benchmarks to validate the choices. The end result was a comprehensive specification document that included alternative hardware configurations, server and switch model numbers, software and middleware, and a detailed budget. Fox was therefore able to procure the new hardware and receive vendor certification well before the upgrade began.

Load Testing
CherryRoad validated the configuration with a rigorous and structured approach to load testing with a proven third-party automated testing product. Fox and CherryRoad partnered with Segue, a leading provider of load testing applications, to assist in this effort. Using the SilkPerformer product, the team simulated conditions of high-volume online users, peak batch processing periods, and intensive transaction processing. A key focus was Fox's extensive use of nVision reporting, which under the new architecture centralized all report processing and could potentially create bottlenecks. Because of the careful planning done before the upgrade, the configured servers were able to pass load testing and proved that hardware issues would be minimized at the completion of the upgrade.

System Monitoring
In order to ensure that hardware problems are detected and proactively solved on an ongoing basis, CherryRoad assisted Fox in implementing a systematic process of monitoring all internet-enabled self-service applications components. This included using utilities such as Tuxedo monitors, as well as those delivered with the Oracle RDBMS. The Unix operating system also provides various tools that provide statistics on system utilization. Fox is also using the Segue Service Analysis Module (SAM) to monitor back-end systems and create effective monitoring metrics.

A Successful Launch
Transitioning to an internet-enabled environment requires careful planning, particularly for organizations with a large number of users and high processing requirements. It is therefore critical that planning and testing be performed before, during, and after the upgrade. CherryRoad effectively helped Twentieth Century Fox navigate through this transition.


As Cindy McKenzie, VP of Corporate IT for Fox, said, "Thanks to CherryRoad's comprehensive approach to infrastructure design and testing, we were able to roll out our Financials system to our large user base without experiencing any significant performance issues."

CASE STUDY OF N-TIER ARCHITECTURE


MASTERS ACADEMY & COLLEGE

Company Overview
Masters Academy & College, based in Calgary, Alberta, opened its doors in 1997 and now has a total of 600 students. The vision of the school is about creating Profound Learning, a 21st century model for value-based education. Profound Learning aims to exceed all the current standards set by Alberta Education, and to equip students as knowledge workers with skills to enable them to succeed in an ever-changing world. Integral to the Masters philosophy is a commitment to technology. As such, the school has a better than 2:1 student-to-computer ratio, with 300 desktop units and 50 laptops all running the Windows 2000 operating system. These client computers are all networked around several network servers running Windows 2000 Advanced Server to provide file sharing, email and high-speed Internet connectivity for every student from every computer.

Business Challenge
Aside from the school's commitment to technology, what makes the school different, and what it considers one of its key methods in producing superior students, is the Masters assessment system. In a nutshell, benchmarks are set to establish a basic quality standard for student work. Bonus marks are available for exceeding the quality standard (EQS), but penalties are also applied if, for example, work is handed in late. If a student submits unsatisfactory work, then the teacher will not accept it. Students are expected to rework their assignments until the quality standard is met. The philosophy of this method reflects the school's belief that every student can produce quality work. Masters students are encouraged to produce quality work handed in on time and, whenever possible, to exceed the quality standard. The problem Masters faced with its marking system was that there were several criteria to be recorded and blended before an assignment's final mark could be reached: the quality grade and any EQS bonuses or penalties. What it produced for school administrators was a vast amount of data that had to be collated before each and every grade could be calculated. They were using a basic database and spreadsheet system, but the solution was cumbersome. It was awkward to enter and interpret the data, because the system was not designed for the Masters model. They needed a more robust, scalable and tailored solution. Having looked across the market for the newest and most powerful technology, the school chose Microsoft's .NET platform.

Solution
An early solution to the problem was tried in prototype form using Microsoft's Excel spreadsheet system. This allowed teachers to compute the final mark based on all of the criteria, but it was extremely cumbersome. This prototype was refined for two years until all the parameters of the assessment system were in place. EDS Canada in Calgary was called in at this stage to develop and install a customized system. This was achieved using the Microsoft .NET platform with the specific implementation of a SQL Server 2000 database, and the creation of a user interface using the beta-stage Visual Studio .NET development system software suite. More specifically, the interface was developed using the Visual Basic 6.0 development system and JavaScript languages. To ensure that there was teacher-only secure access to the network, existing Windows 2000 Active Directory directory service authentication was used. Report cards will soon be generated as PDF files using Crystal Decisions' Crystal Reports to create a read-only document for students and parents to see. The N-tier architecture was a natural choice for the Masters project as it offered a strong solution based on the client/server program model. This distributed computing model is part of the fundamental basis of the .NET platform for delivering Web services. This architecture enables application programs to be distributed across three or more disparate computers or servers in a distributed network environment. In this environment, user interface programming is done on the individual user's computer, business processes are done on a centralized computer, and the data that is needed is stored on a database managed by an alternate computer. By utilizing the N-tier architecture, Masters is able to take advantage of a network in which any one tier can run the appropriate operating system platform or processor and can be updated separately without disrupting any of the other tiers. This ensures that any upgrades to the network that occur happen seamlessly, without compromising the performance of the network. N-tier was also an obvious choice to avoid the problem of having a solution directly connected to the database; that arrangement creates data bottlenecks when too much data tries to pass through. Using N-tier means that if the school should decide to modify its network, then the entire system will not need to be extensively revamped, but scaled to need. The architecture of the complete solution allows teachers to easily implement the school's unique marking system, while leaving it flexible enough for a wide variety of web service expansions.

Business Benefits
The objective of a school is not to make money. Masters' goals are qualitative, not quantitative: the number of students produced is not as important as the quality of each student. In this objective Masters differs from a traditional business, because it could enroll an endless number of students and still fail in its mission.


Where Masters is exactly like a modern business is in concern for lost time. The .NET platform is going to help the school outpace other schools by allowing it to know in advance which students need more assistance in a specific area: "Typically you go for a teacher-parent interview after 3-4 months of school. You go in and they say that Johnny is not working up to his potential. You don't know where the missing piece is. You'd like to have seen something happen 3 months ago, but nobody knew what was happening. What we're looking at is the timeliness of relevant information being captured and presented, and that doesn't happen in education," says Rudmik. Now, with the .NET platform installed, Masters can know immediately when a block occurs in a child's learning, because the data of a student's learning curve is gathered in real time from each classroom and stored in the school's central database. Soon, this information will lose no time being transmitted to parents, instead of taking several months before the next parent night. Rudmik calculates that 98 per cent of the school's parents have home Internet access. In the near future, parents and students will be able to access from home a continually updating report card, so that all parties can know how a child is progressing at any given time. The advantage is that no time is lost to get on top of a problem before it becomes so overwhelming that the class leaves a student behind. Once the system is connected to the web, Masters hopes to employ the Web Services aspect of its data collection to report to Alberta Learning, the province's board of education in Edmonton, continuously and in real time. At the moment the school reports electronically and infrequently using a cumbersome system. Reporting will become an easy, automatic and continually updated method, all of which grows naturally out of the robust, flexible and scalable platform provided by Microsoft's .NET platform.

Resulting Value
Installing the .NET platform has allowed Masters Academy & College to move from a difficult system, which was troublesome for the teachers to use, to having a ticket to the coming Web Services revolution. Says Rudmik, "It was a cumbersome process with the amount of information, data, the process, and the problems. It was a complex system we had built. It was beyond typically what spreadsheets are used for. We brought in the .NET platform, and it solved that one side, but it also gave us the capacity to build toward our vision of real-time reporting." Solving that problem also opens new possibilities: soon the school will be able to immediately communicate its findings to all parties; and it positions this forward-looking school to be fully ready for connecting to the world beyond Calgary, Alberta.



CLIENT/SERVER MODELS:
Client/server systems can be classified based on the way in which the systems have been built. The most widely accepted range of classifications has come from the Gartner Group, a market research firm in Stamford, Connecticut (see Figure 1.1). Although your system will differ slightly in terms of design, these models give you a good idea of how client/server systems can be built. These models are not, however, mutually exclusive, and most good systems will use several of these styles to be effective and efficient. Over time, client/server systems may move between models as applications are replaced or enhanced. These models demonstrate that a full definition of a client/server system is a system in which a client issues requests and receives work done by one or more servers. The "more servers" statement is important because the client may need to access several distinctly separate network systems or hosts. The following sections describe each of the five basic models. In its simplest form, client/server identifies a system whereby a client issues a request to a second machine called the server, asking that a piece of work be done. The client is typically a personal computer attached to a LAN, and the server is usually a host machine such as a PC file server, UNIX file server, or midrange/mainframe.

Gartner Group Model

The job requests can include a variety of tasks, for example:
Return all records from the customer file database where name of customer = Holly
Store this file in a specific file server data directory
Attach to CompuServe and retrieve these items
Upload this data packet to the corporate mainframe

To enhance this definition, you should also consider the additional requirements that a business normally has.

Model 1: Distributed Presentation


Distributed presentation means that both the client and the server machines format the display presented to the end user. The client machine intercepts display output from the server intended for a display device and reroutes the output through its own processes before presenting it to the user. As the figure below shows, the easiest model is to provide terminal emulation on the client alongside other applications. This approach is very easy to implement using products such as WallData's Rumba or Attachmate, but provides no real business benefit other than to begin a migration to client/server. Sometimes a company may use a more advanced form of terminal emulation whereby it hides the emulation screen and copies some of its contents, normally key fields, onto a Visual Basic or Borland Delphi screen. This copying is often referred to as screen scraping. Screen scraping enables a company to hide its mainframe and midrange screens and present them under a PC interface such as Windows or OS/2. The major benefit of screen scraping is that it allows a system to migrate from an old mainframe-based system to a new client/server system in small incremental steps.

Figure: Distributed presentation - terminal emulation and screen scraping. (Client: presentation, via screen scraping or terminal emulation; Server: program logic and data, connected over the network.)

Model 2: Remote Presentation




It may be necessary to move some of the application's program logic to the PC from the host computer. The second model, as shown in the figure below, allows for some business/program logic as well as the presentation to reside on the PC. This model is particularly useful when moving from a dumb terminal environment to a PC-LAN environment. The logic can be of any type; however, validation of fields, such as ensuring that states and zip codes are valid, is an ideal type of logic.

Figure: Business or program logic on the PC.

(Client: presentation and some program logic, such as validation; Server: some program logic and data, connected over the network.)

Model 3: Distributed Logic


A distributed logic client/server application splits the logic of the application between the client and server processes. Typically, an event-driven GUI application on the client controls the application flow, and logic on the server centrally executes the business and database rules. The client and server processes can communicate using a variety of middleware tools, including APPC, Remote Procedure Calls (RPC), or data queues. Differentiating between the remote presentation and distributed logic models isn't always easy. For example, if a remote presentation application performs some calculations with the data it receives, does it therefore become a distributed logic application? This overlap between the models can sometimes make the models confusing. The following figure shows the distributed logic client/server model.

Figure: The distributed logic client/server model. (Client: presentation and program logic; Server: program logic and data, connected over the network.)

Model 4: Remote Data




With the remote data model, the client handles all the application logic and end-user presentation, and the server provides only the data. Clients typically use remote SQL or Open Database Connectivity (ODBC) to access the data stored on the server. Applications built in this way are currently the most common in use. The figure below shows this model.

Figure: In the remote data model, all the application logic resides on the PC. (Client: presentation and all program logic; Server: data logic, connected over the network.)


Model 5: Distributed Data


Finally, the distributed data model uses data distributed across multiple networked systems. Data sources may be distributed between the client and the server or multiple servers. The distributed data model requires an advanced data management scheme to enforce data concurrency, security, and integrity across multiple platforms. As you would expect, this model is the most difficult client/server model to use. It is complex and requires a great deal of planning and decision-making to use effectively. The figure below shows this model.

Figure: The distributed data model. (Client: presentation, all program logic, and some data; Servers 1-3: some data each, connected over the network.)

MIDDLEWARE:
Middleware is used to glue together applications or components. A few examples of middleware include:
IPC by sockets, shared memory
TCP/IP, X.25
Common database
RPC, CORBA
RMI
MOM

Middleware connectivity allows applications to transparently communicate with other programs or processes, regardless of their location. The key element of connectivity is the network operating system (NOS). The NOS provides services such as routing, distribution, messaging, file and print, and network management services. NOSs rely on communication protocols to provide specific services. The protocols are divided into three groups:
1. Media protocols,
2. Transport protocols and
3. Client-server protocols

Media protocols determine the type of physical connections used on a network. Some examples of media protocols are Ethernet, Token Ring, Fiber Distributed Data Interface (FDDI), coaxial and twisted-pair.

A transport protocol provides the mechanism to move packets of data from client to server. Some examples of transport protocols are Novell's IPX/SPX, Apple's AppleTalk, Transmission Control Protocol/Internet Protocol (TCP/IP), Open Systems Interconnection (OSI) and Government Open Systems Interconnection Profile (GOSIP). Once the physical connection has been established and transport protocols chosen, a client-server protocol is required before the user can access the network services. A client-server protocol dictates the manner in which clients request information and services from a server and also how the server replies to that request. Some examples of client-server protocols are NetBIOS, RPC, Advanced Program-to-Program Communication (APPC), Named Pipes, Sockets, Transport Level Interface (TLI) and Sequenced Packet Exchange (SPX).

Types of Middleware:
1. Remote Procedure Calls (RPC): the client makes calls to procedures running on remote computers; synchronous and asynchronous.
2. Message-Oriented Middleware (MOM): asynchronous calls between the client and server via message queues.
3. Publish/Subscribe: push technology; the server sends information to the client when it becomes available.
4. Object Request Broker (ORB): object-oriented management of communications between clients and servers.
5. SQL-oriented Data Access: middleware between applications and database servers.
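The MOM and publish/subscribe styles can be illustrated with a small, in-process Python sketch. Real products run across machines, so this only shows the shape of the idea; the queue contents, topic, and message names are invented.

```python
import queue
import threading

# --- Message-Oriented Middleware: an asynchronous message queue ---
orders = queue.Queue()

def order_server():
    """The server drains the queue; the client never waits for it."""
    while True:
        msg = orders.get()
        if msg is None:                 # sentinel: shut the server down
            break
        print("server processed", msg)

worker = threading.Thread(target=order_server)
worker.start()
orders.put({"item": "widget", "qty": 3})   # client returns immediately
orders.put(None)
worker.join()

# --- Publish/Subscribe: a broker pushes data to registered clients ---
subscribers = {}

def subscribe(topic, callback):
    subscribers.setdefault(topic, []).append(callback)

def publish(topic, data):
    for notify in subscribers.get(topic, []):
        notify(data)                    # push to every interested client

subscribe("prices", lambda d: print("client saw price", d))
publish("prices", 42.0)
```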

Database Middleware:
1. ODBC (Open Database Connectivity): most DB vendors support this.
2. OLE DB: Microsoft's enhancement of ODBC.
3. JDBC (Java Database Connectivity): special Java classes that allow Java applications/applets to connect to databases.

Middleware Vendors:
1. Noble Net:

Noble Net specializes in providing high-quality middleware tools for client-server development. Its premier product is EZ-RPC, an RPC precompiler tool kit that includes an enhanced XDR (packaged as an IDL), a precompiler, and various libraries. EZ-RPC is available on more than 40 platforms, including most UNIXes, most Windows versions, Macs, VMS, OS/2, and several others. Noble Net also publishes a Windows rpcgen and distributes the IONA Corporation's Orbix Object Request Broker development toolkit. A new product, a distributed two-tier ODBC driver SDK, is available for those working with databases. Noble Net provides free evaluation copies of EZ-RPC to qualified programmers.

2. Piccolo:

Piccolo, from Cornerstone Software, Inc., is a message-oriented middleware product that provides application developers with peer-to-peer connectivity without regard for the underlying communications transport (i.e. TCP/IP, NetBIOS, Async). Piccolo is supported on the UNIX versions AIX, SCO, HP-UX (HP9000/700 & 800), Tandem S2 Integrity, Solaris 2.1, and Silicon Graphics (SGI). It is also supported on Windows 3.x, Windows NT, and the Tandem Non-Stop Kernel. Application developers use the Piccolo API to access data and applications residing on any of the supported platforms on a network. The developers need no programming knowledge of the underlying transport protocol.

3. PIPES Platform:

PIPES Platform, from Peer Logic, is message-oriented middleware that provides the essential communications services for distributing applications across the enterprise. PIPES Platform's process-to-process messaging allows development of applications with an asynchronous, non-blocking, event-driven architecture. A dynamic name service lets applications find, at run time, and communicate with any application resource in the PIPES Platform network. PIPES Platform automatically maintains information on all PIPES Platform resources, even as machines and applications are added or moved. Session management services provide guaranteed message delivery, integrity, prioritization, sequencing, dynamic re-routing and error handling. PIPES Platform's cross-platform and multiprotocol support provide a consistent communications interface that allows developers to focus on business logic, not communications.

4. SmartSockets:

SmartSockets, from Talarian Corporation, is a rapid application development toolkit which enables processes to communicate quickly, reliably, and securely across different operating system platforms, through the use of messages. The communicating processes can reside on the same machine, on a LAN, on a WAN, or anywhere on the Internet. SmartSockets is an industrial-strength package which takes care of network interfaces, guarantees delivery of messages, handles communication protocols, and deals with recovery after system/network failures. SmartSockets's programming model is built specifically to offer high-speed interprocess communication, scalability, reliability and fault tolerance.

It supports a variety of communication paradigms including publish-subscribe, peer-to-peer, and RPC. Included as part of the package are graphical tools for monitoring and debugging an application. SmartSockets is available on most UNIX platforms, OpenVMS, Windows 3.1, Windows 95, Windows NT, and OS/2.

DATABASE CONNECTIVITY AND ITS NEEDS:

Open Database Connectivity (ODBC)


ODBC specifies a standard CLI.
ODBC is a superset of the ANSI/ISO CLI.
ODBC uses standard SQL (SQL-92).
ODBC defines a minimum SQL for non-RDBMS data.
ODBC drivers expose existing functionality.
ODBC is available on Windows, Macintosh, OS/2, UNIX, etc.
ODBC is used by most commercial applications.
ODBC has over 370 drivers from over 100 companies.
ODBC speed is comparable to that of a native CLI.

ODBC is:
A database API specification

ODBC is not:
A heterogeneous query engine
A database management system
A way to add database features

ODBC Architecture:

The various components of the ODBC Architecture are described as follows:

Application layer: Only one application resides in the application layer at a time. The application calls ODBC functions. The application layer is linked to the Driver Manager. The application layer is written by many companies.

Driver Manager: One Driver Manager exists. The Driver Manager loads and unloads drivers. The Driver Manager implements ODBC functions. The Driver Manager passes most ODBC function calls to drivers. The Driver Manager handles backward compatibility. The Driver Manager is written by Microsoft or Visigenic.

Driver: There may be one or more drivers per application. The driver implements ODBC functions. The driver is a thin layer over an RDBMS, and a thick layer over a non-RDBMS (it then includes an SQL engine). The driver is written by a small number of companies.

Data Source: There may be one or more data sources per driver. The Data Source contains actual data. Typical examples of Data Source include RDBMS, dBase file, spreadsheet, etc.
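Assuming the pyodbc package on a machine with ODBC configured, the layers above surface in application code roughly as follows. The "Payroll" DSN and the employees table are hypothetical; a DSN would be defined in the ODBC administrator, naming a concrete driver and data source.

```python
# How the ODBC layers appear from the application layer (pyodbc assumed).
import pyodbc

print(pyodbc.dataSources())   # driver manager: the registered data sources

# connect(): the driver manager reads the DSN, loads the matching driver,
# and the driver opens the actual data source (RDBMS, dBase file, ...).
conn = pyodbc.connect("DSN=Payroll;UID=hr;PWD=secret")
for row in conn.cursor().execute("SELECT name FROM employees"):
    print(row.name)           # the application only ever calls ODBC functions
conn.close()
```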

RIGHTSIZING
As client/server technology evolves, the battle cry is now rightsizing: design new applications for the platform they are best suited for, as opposed to using a default placement. An application should run in the environment that is most efficient for that application. The client/server model allows applications to be split into tasks and those tasks performed on individual platforms. Developers review all the tasks within an application and determine whether each task is best suited for processing on the server or on the client. In some cases, tasks that involve a great deal of number-crunching are performed on the server and only the results transmitted to the client. In other cases, the workload of the server or the trade-offs between server MIPS (millions of instructions per second) and client MIPS, together with the communication time and network costs, may not warrant the use of the server for data-intensive, number-crunching tasks. Determining how the tasks are split can be the major factor in the success or failure of a client/server application. And if the first client/server application is a failure, for whatever reason, it may be a long time before there is a second. Some variations on this theme are:
Downsizing: A host-based application is downsized when it is re-engineered to run in a smaller or LAN-based environment.
Upsizing: Applications that have outgrown their environment are re-engineered to run in a larger environment.
Smartsizing: In contrast to rightsizing, which is technology based, smartsizing affects the entire organizational structure and involves re-engineering and redesigning the business process, as well as the information systems that support the process.

DOWNSIZING:
Downsizing involves porting applications from mainframe and mid-range computers to a smaller platform or a LAN-based client/server architecture. One potential benefit of downsizing is lowered costs. Computer power is usually measured in MIPS. Currently, the cost of mainframe MIPS varies from $75,000 to $150,000; midrange MIPS about $50,000 and desktop micro MIPS about $300. A micro that can perform as a LAN server ranges from $1,000 to $3,000 per MIPS. As technology improves, the costs of LAN servers and micros continue to drop. The midrange and mainframe (host) technologies are improving at a slower rate. Their costs are dropping at an even slower rate. However, the cost benefit is not as straightforward as it appears. Host MIPS are used more efficiently and the processor has a higher utilization rate. Hosts automatically provide services (such as backup, recovery, and security) that must be added to LAN servers. Host software costs more than micro software, but more copies of micro software are required. Mainframes require special rooms, operators, and systems programmers. Micros sit on a desk. LAN servers use existing office space and require no specialized environment.

Another way to look at the cost benefit is to recognize where most of an organization's MIPS are today: on the desktop! And most of those MIPS aren't fully utilized. Figure 1.4 illustrates the relationship between the number of LAN-connected micros and the number of business micros. Gartner Group (Stamford, Connecticut) predicts that by 1996 there will be nearly five million LANs and 75 percent of all business micros will be connected to a LAN. By using the existing desktop MIPS, organizations can postpone or eliminate hardware acquisitions. Many of these desktop machines are already linked to a central machine using terminal emulation software, so the network is already in place. Other potential benefits of downsizing are improved response time, decreased systems development time, increased flexibility, greater control, and implementation of strategic changes in workflow processes. In addition, mainframe applications downsized to a desktop/LAN environment allow data to be accessed by other applications. However, the decision to downsize should be made on an application-by-application basis. Downsizing the wrong application could put an organization at risk. According to Theodore P. Klein, president of the Boston-based consulting firm Boston Systems Group, Inc., an organization must answer the following questions when evaluating applications for downsizing:
Is the application departmental, divisional, or enterprise-wide?
What is the database size and how must it be accessed?
Is the application functionally autonomous?
How familiar with the new technology are the users and IS staff?
Is the data in the application highly confidential?
What level of system downtime can be tolerated?

Downsizing is not as easy as buying and installing hardware and software that support client/server computing. The larger environments that these applications run on have built-in features, such as capacity planning and performance monitoring, that are still in their infancy on client/server platforms. As a result, client/server environments must be fine-tuned to reduce bottlenecks and make optimal use of processing cycles. While hardware and software cost savings may be almost immediate and dramatic, processing savings will be slower to realize and less impressive. When evaluating applications for downsizing, an organization must also recognize the political issues involved. In many organizations, ownership of information systems represents power. Downsizing applications changes the organizational structure. It is important that the political issues be planned for and dealt with.

UPSIZING:
Even as companies are downsizing from their glass-housed mainframes to distributed LAN-based systems, they are planning for the future by ensuring that these new systems are expandable. When an application outgrows the current environment, the capacity of the environment should be increased or the application should be ported to a larger environment with no disruption to users. Environments can be expanded in many ways, which include:

Increasing memory and storage on the server
Swapping a more powerful processor into the server
Adding processors to the server
Upgrading to more robust network software

For expansion to occur with a minimum of disruption to the users, open systems (hardware and software) should be used whenever possible.

Smartsizing:
Smartsizing is based on re-engineering the business processes themselves, in contrast to downsizing, which re-implements existing automated systems on smaller or LAN-based platforms. Downsizing focuses on cost savings and increasing current productivity; while the code for the application may be streamlined, little or no thought is given to the process itself. Smartsizing implies that information technology can make the business process more efficient and increase profits. Business re-engineering focuses on using technology to streamline internal workflow tasks, such as order entry and customer billing. Information technology can also be used to increase customer satisfaction and to develop and bring products to market faster.

Characteristics of client/server architecture:


The basic characteristics of client/server architectures are:

Asymmetrical protocols:

There is a many-to-one relationship between clients and a server. Clients always initiate a dialog by requesting a service. Servers wait passively for requests from clients.
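To make this asymmetry concrete, here is a minimal sketch in Java (the port number 9090 and the echo behavior are illustrative assumptions, not taken from any particular product). The server blocks passively in accept(), waiting for requests; the client always opens the connection and speaks first.

    // EchoServer.java: the server waits passively and never initiates a dialog.
    import java.io.*;
    import java.net.*;

    public class EchoServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket listener = new ServerSocket(9090)) {  // hypothetical port
                while (true) {
                    try (Socket client = listener.accept();         // blocks until a client calls
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()));
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        out.println("ECHO: " + in.readLine());      // replies only when asked
                    }
                }
            }
        }
    }

    // EchoClient.java: the client always initiates by requesting a service.
    import java.io.*;
    import java.net.*;

    public class EchoClient {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("localhost", 9090);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                out.println("hello");               // the request
                System.out.println(in.readLine());  // the reply
            }
        }
    }

Note that the many-to-one relationship shows up naturally here: any number of EchoClient processes can call the single EchoServer, but never the reverse.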

Encapsulation of services:

The server is a specialist: when given a message requesting a service, it determines how to get the job done. Servers can be upgraded without affecting clients as long as the published message interface used by both is unchanged.
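A hedged sketch of this idea in Java, with all names (PriceService and its implementations) hypothetical: as long as the published interface stays the same, the server's implementation can be upgraded or replaced without recompiling, or even notifying, the clients.

    // The published message interface: clients are written against this alone.
    interface PriceService {
        double quote(String productCode);
    }

    // Version 1 of the server: a simple in-memory lookup (stubbed here).
    class InMemoryPriceService implements PriceService {
        public double quote(String productCode) { return 9.99; }
    }

    // Version 2, swapped in later, say backed by a DBMS. Clients are unaffected
    // because the message interface they depend on has not changed.
    class DatabasePriceService implements PriceService {
        public double quote(String productCode) {
            // ... query the database here ...
            return 9.99;
        }
    }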

Integrity:

The code and data for a server are centrally maintained, which results in cheaper maintenance and the protection of shared data integrity. At the same time, clients remain personal and independent.

Location transparency:

The server is a process that can reside on the same machine as a client or on a different machine across a network. Client/server software usually hides the location of a server from clients by redirecting service requests. A program can be a client, a server, or both.
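A minimal client-side sketch of location transparency, assuming hypothetical server.host and server.port configuration properties: the client names a logical server, and configuration, not code, decides whether that server lives on the same machine ("localhost") or across the network.

    import java.io.IOException;
    import java.net.Socket;

    class LocateServer {
        public static void main(String[] args) throws IOException {
            // Defaults are illustrative; a deployment would override them.
            String host = System.getProperty("server.host", "localhost");
            int port = Integer.getInteger("server.port", 9090);
            try (Socket s = new Socket(host, port)) {
                System.out.println("Connected to " + host + ":" + port);
            }
        }
    }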

Message-based exchanges:

Clients and servers are loosely-coupled processes that can exchange service requests and replies using messages.

Modular, extensible design:

The modular design of a client/server application enables that application to be fault-tolerant. In a fault-tolerant system, failures may occur without causing a shutdown of the entire application. In a fault-tolerant client/server application, one or more servers may fail without stopping the whole system, as long as the services offered on the failed servers are also available on servers that are still active. Another advantage of modularity is that a client/server application can respond automatically to increasing or decreasing system loads by adding or shutting down one or more services or servers.

Platform independence

The ideal client/server software is independent of hardware or operating system platforms, allowing you to mix client and server platforms. Clients and servers can be deployed on different hardware using different operating systems, optimizing the type of work each performs.

Reusable code

Service programs can be used on multiple servers.

Scalability

Client/server systems can be scaled horizontally or vertically. Horizontal scaling means adding or removing client workstations with only a slight performance impact. Vertical scaling means migrating to a larger and faster server machine or adding server machines.

Separation of Client/Server Functionality

Client/server is a relationship between processes running on the same or separate machines. A server process is a provider of services. A client is a consumer of services. Client/server provides a clean separation of functions.

Shared resources

One server can provide services for many clients at the same time, and regulate their access to shared resources.
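The sketch below illustrates this characteristic under stated assumptions (the pool of 20 worker threads and the three shared "printer ports" are purely illustrative): one server process services many clients at the same time, while a semaphore regulates their access to the shared resource.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    class SharedResourceServer {
        private static final Semaphore printerPorts = new Semaphore(3); // shared resource
        private static final ExecutorService workers = Executors.newFixedThreadPool(20);

        public static void main(String[] args) throws IOException {
            try (ServerSocket listener = new ServerSocket(9090)) {
                while (true) {
                    Socket client = listener.accept();      // many clients, one server
                    workers.submit(() -> handle(client));
                }
            }
        }

        private static void handle(Socket client) {
            try {
                printerPorts.acquire();                     // wait if all ports are busy
                try {
                    // ... service the request using the shared resource ...
                } finally {
                    printerPorts.release();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                try { client.close(); } catch (IOException ignored) { }
            }
        }
    }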

Types of Servers:
The concept of a server developed as organizations needed to share expensive peripherals, such as laser printers, CD-ROM readers, and FAX machines. Our discussion of servers will, however, relate to servers that promote the sharing of data as opposed to the sharing of peripherals. The six types of servers are:

1. File server
2. Application server
3. Data server
4. Compute server
5. Database server
6. Communication server

Their differences are based on where data is handled and how it is transferred.

File Server:
File servers manage a work group's applications and data files so that they may be shared by the group. File servers are very I/O oriented: they pull large amounts of data off their storage subsystems and pass the data over the network. When data from a file is requested, a file server transmits all records of the file and the entire index to the client. The client either selects records (based on query criteria) as they are received or loads the whole file and its index into memory and then reviews it. File servers require many slots for network connections and a large-capacity, fast hard disk subsystem.

File locking is handled by locking the entire file or by locking byte ranges; there is no differentiation between read locks and write locks at this level. When multiple users access shared files, the file server engine checks for contention. If it detects contention at the file-lock level, it waits until the resource is free. There can be no scheduling of multiple users, no cache management, no lock manager, and minimal concurrency control in the DBMS sense, because there is no single engine to which all the required information is available. These DBMS-like features are usually handled by the client software, which anticipates the best way to process the data.

Unless each data file is locked for exclusive use and some client-side indexing technique is used, all data must be moved across the network before filter, sort, or merge operations can be applied. This situation forces heavy network traffic. Two techniques used to minimize the amount of data that passes over the network are:

1. Organizing data so that the data needed by a particular application request is stored in a single contiguous block.

2. Storing copies of data accessed by more than one user to help with concurrency problems.

Of course, these techniques require developers to build integrity and synchronization handling into the processing of the application. The following figure illustrates a cross-business server.

Figure: Cross-business Server
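The heavy-traffic behavior described above can be sketched from the client side as follows; the mapped network drive, file name, and record format are illustrative assumptions. The point to notice is that the entire file crosses the network before any selection happens (contrast this with the database server example later in this section).

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    class FileServerClient {
        public static void main(String[] args) throws IOException {
            // "Z:" stands for a drive mapped to the file server (an assumption).
            // Every record travels over the network before filtering begins.
            List<String> allRecords = Files.readAllLines(Paths.get("Z:/shared/orders.dat"));
            long overdue = allRecords.stream()
                    .filter(rec -> rec.contains("OVERDUE"))  // selection is client-side
                    .count();
            System.out.println("Overdue orders: " + overdue);
        }
    }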

Application Server:
An application server is a machine that serves as a host replacement (and in some cases actually is a host). When applications are downsized from a host, one option is to install the applications on a smaller machine that runs the same software and to hook all the users to the new box. This process requires no modifications to the host-based application software. For client/server applications that are classified as host-based, the host is the server to the GUI-based clients, as shown in the figure below.

Figure: Application server

Data Server:
A data server is data-oriented and used only for data storage and management, as illustrated in the figure below. A data server is used in conjunction with a compute server and may be used by more than one compute server. A data server does not perform any application logic processing; the processing done on a data server is limited to rule-based procedures, such as data validation, required as part of the data management function.

Figure: Data server and compute server

Data servers perform multiple searches through large amounts of data and frequently update massive tables. These tasks require fast processors, large amounts of memory, and substantial hard disk capacity. However, for the most part, these computers send relatively small amounts of data across the network.

Compute Server:
A compute server passes client requests for data to a data server and forwards the results of those requests to clients (see Figure 8.4). Compute servers may perform application logic on the results of the data requests before forwarding data to the client. Compute servers require processors with high performance capabilities and large amounts of memory but relatively low disk-subsystem capacity and throughput. By separating the data from the computation processing, an organization can optimize its processing capabilities. Since a data server can serve more than one compute server, compute-intensive applications can be spread among multiple servers.
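A minimal sketch of the compute-server role, with all names hypothetical: the compute tier fetches raw rows from the data server, applies the application logic itself, and forwards only a small computed result to the client.

    // Compute-server sketch: application logic lives here, between the client
    // and the data server. Interface and method names are assumptions.
    class ComputeServer {
        interface DataServer {                        // the data tier's contract
            double[] fetchOrderTotals(String region);
        }

        private final DataServer dataServer;

        ComputeServer(DataServer dataServer) {
            this.dataServer = dataServer;
        }

        // Runs on the compute server, not on the client or the data server.
        double taxedRevenue(String region, double taxRate) {
            double sum = 0;
            for (double total : dataServer.fetchOrderTotals(region)) {
                sum += total * (1 + taxRate);         // illustrative business rule
            }
            return sum;                               // only this value goes to the client
        }
    }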

Database Server:
This is the most typical use of server technology in client/server applications. Most, if not all, of the application is run on the client. The database server accepts requests for data, retrieves the data from its database (or makes a request for the data from another node), and passes the results of the request back to the client. Compute servers working with data servers provide the same functionality. Using a database server or a combination of data and compute servers, the data management function is on the server, and the client program consists of application-specific code as well as presentation logic.

Because the database engine is separate from the client, the disadvantages of file servers disappear. Database servers can have a lock manager, multiuser cache management, and scheduling, and thus have no need for redundant data. Database and data/compute servers improve request handling by processing a SQL client request and sending back to the client only the data that satisfies the request. This is much more efficient in terms of network load than a file server architecture, where the complete file is often sent from the server to the client.

Because SQL allows records to be processed in sets, an application can, with a single SQL statement, retrieve or modify a set of server database records. Older database systems have to issue separate sequential requests for each desired record of each of the base tables. Because SQL can create a results table that combines, filters, and transforms data from base tables, considerable savings in data communication are realized even for data retrieval. The requirements for these servers are a function of the size of the database, the speed with which the database must be updated, the number of users, and the type of network used.
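The set-oriented behavior described here can be sketched with standard JDBC; the connection URL, credentials, and table layout are illustrative assumptions, and a suitable JDBC driver is assumed to be on the classpath. Note that the WHERE clause is evaluated by the database server, so only the matching rows travel back over the network.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    class DatabaseClient {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:postgresql://dbhost/sales";     // hypothetical server
            try (Connection conn = DriverManager.getConnection(url, "user", "pw");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT order_id, total FROM orders WHERE status = ?")) {
                ps.setString(1, "OVERDUE");
                try (ResultSet rs = ps.executeQuery()) {       // one statement, a set of rows
                    while (rs.next()) {
                        System.out.println(rs.getLong("order_id") + " " + rs.getDouble("total"));
                    }
                }
            }
        }
    }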

Communication Server:

Communication servers provide gateways to other LANs, networks, midrange computers, and mainframes. They have relatively modest system requirements, with perhaps the greatest demands being those for multiple slots and fast processors to translate networking protocols.

TYPES OF CLIENTS:
A client is a computer system that accesses a remote service on another computer over some kind of network. The term was first applied to devices that were not capable of running their own stand-alone programs but could interact with remote computers via a network; these dumb terminals were clients of the time-sharing mainframe computer. Clients are generally classified as "fat clients," "thin clients," or "hybrid clients."

Fat client:
A fat client (also known as a thick client or rich client) is a client that performs the bulk of any data processing operations itself and does not necessarily rely on the server. The fat client is most common in the form of a personal computer, since personal computers and laptops can operate independently. Programming environments for rich clients include Curl, Delphi, Droplets, Java, Win32, and X11.

Thin client:
A thin client is a minimal sort of client. Thin clients use the resources of the host computer. A thin client's job is generally just to graphically display pictures provided by an application server, which performs the bulk of any required data processing. Programming environments for thin clients include JavaScript/AJAX (client-side automation), ASP, JSP, Ruby on Rails, Python's Django, PHP, and others, depending on the server-side back end; the client displays HTML pages or rich media such as Flash, Flex, or Silverlight.
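A minimal sketch of the server side of a thin-client arrangement, using the small HTTP server bundled with the JDK (com.sun.net.httpserver); the port and page content are illustrative assumptions. All processing happens on the server, and the thin client (any web browser) merely renders the HTML it receives.

    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    class ThinClientBackend {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/report", exchange -> {
                // All application processing runs server-side; the client only displays.
                byte[] page = ("<html><body>Rows processed: "
                        + expensiveComputation() + "</body></html>").getBytes();
                exchange.sendResponseHeaders(200, page.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(page);
                }
            });
            server.start();
        }

        private static int expensiveComputation() { return 42; } // stands in for real work
    }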

Hybrid client:
A hybrid client is a mixture of the above two client models. Similar to a fat client, it processes locally, but it relies on the server for data storage. This approach offers features from both the fat client (multimedia support, high performance) and the thin client (high manageability, flexibility).

FUTURE OF CLIENT/SERVER COMPUTING:


Three-tier architectures have been used successfully since the early 1990s on thousands of systems of various types throughout the Department of Defense (DoD) and in commercial industry, wherever distributed information computing in a heterogeneous environment is required. An Air Force system that is evolving from a legacy architecture to a three-tier architecture is the Theater Battle Management Core System (TBMCS). Multi-tier architectures have also been widely and successfully applied in some of the biggest Internet servers.

A distributed object-oriented client/server model, based on the Common Object Request Broker Architecture (CORBA), has been established to interface beam dynamics application programs at the Swiss Light Source (SLS) to essential software packages. These include the accelerator physics package TRACY, the Common DEVice (CDEV) control library, a relational database management system, and a logging facility for error messages and alarm reports. The software architecture allows remote clients to invoke compute-intensive methods, such as beam orbit correction procedures, on a dedicated server running the UNIX derivative Linux. Client programs typically make use of graphical user interface (GUI) elements provided by specialized toolkits such as Tk or Java Swing, while monitored data required by procedures utilizing the TRACY library, such as beam optics parameters, are marshalled to the model server for fast analysis. Access to the SLS accelerator devices is achieved through a generic C++ CDEV server.

Since several of the servers have write privileges to sensitive software and hardware channels, it is intended to add authentication procedures that identify and authorize the client, e.g. through use of the Secure Sockets Layer (SSL), a protocol also supported by MICO. Server diagnostic tools will also be added to provide a synopsis of usage and performance. It is envisioned that configuration, calibration, and other data will be held in an Oracle database. Work is in progress within the CDEV community to interface Oracle to the CDEV service layer, allowing easy database access through the CDEV device/message paradigm. Of the CORBA facilities and services that are becoming increasingly available, particularly appealing is the Event Service, which offers a convenient channel for distributing data to one or more consumers. Data from a supplier is distributed to consumers, on a push or pull basis, without the supplier requiring knowledge of the receiving objects. Such a service could usefully be employed in the distribution of calibrated data to client consumers.

The use of Java is noticeably gaining momentum in the accelerator physics community; its features of garbage collection, exception handling, and integrated thread support are desirable assets for building large-scale distributed systems. Java Swing and JavaBeans further offer components for building GUI operator interfaces (OPIs). In this respect, an effort to coordinate activities with the aim of releasing a standard Java OPI is on the Software Sharing Workshop agenda.

Client/server computing is very much becoming the norm for most commercial organizations. It brings value and business benefit to those that use it. Yet inevitably it will become the norm for the computing industry as well, and the vendors and resellers will move on to more advanced systems and techniques that will ultimately replace even the newer client/server systems in use today. The following sections describe the major areas that will ultimately bring about improvements to the client/server environment and the systems that you build. The essence is that client/server will get bigger, better, and faster!

Improvements at the Client:


Perhaps the area least familiar to you, the systems manager, is the robustness of the client. The client is, after all, where you are expecting to place the business systems, yet you require it to operate at the level of your centralized midrange or mainframe computer. The client workstation will continue to improve in its capability as a strong, reliable business computer. This improvement will require changes in the components of the hardware itself, the operating systems, and the applications.

The Hardware:
Client hardware continues to change at a rapid pace; performance escalates while prices plummet. A 75 MHz Pentium was once the norm; now 120 MHz is the baseline for a typical machine. Everything about PC hardware is getting bigger and faster. The architecture is also becoming more user-friendly, with the advent of technologies such as Plug and Play, and more integrated, with single cards combining audio, fax, modem, and voice mail capabilities. Network interface cards are becoming smarter; they can now power on the machine, if required, so that updates can be made without user intervention. This capability will increase as the integration of networking software and client machines improves, making for a more tightly knit systems environment.

In time, the role of the traditional personal computer may become only that of a very intelligent graphical workstation attached to a very large, powerful client/server system. Then the client/server environment will have turned full circle and returned to a somewhat centralized architecture. The machine at the desktop will be a powerful network computer with its own processor, GUI, and memory capacity, yet the applications and data will all be stored on a central server. Although this scenario may occur, I cannot help thinking that the loss of that personal aspect may be too much for most companies to adopt this approach.

The Operating Systems:


With the likes of Windows 95, Windows NT V4.0, and OS/2 Warp, desktop operating systems have become more and more reliable. They have also taken on board more and more of the advanced technologies, such as multithreading, multitasking, security, and communications, normally found in larger systems. As time goes on and these operating systems evolve further, you will continue to see improvements that will benefit the client/server environments that you build. These products will continue to have built into them the features normally found within large-scale operating systems. Also, you will begin to see complete integration of Internet and intranet support within the operating systems. Navigating and browsing the Internet will become as easy as moving between folders on a hard disk. Further out, improvements in voice recognition and voice activation will allow the human voice to control the operation of the computer rather than the more traditional keyboard and mouse.

Just as object orientation has proliferated through the application development tools, it also will move into the operating system. Object orientation in an operating system is not a necessity, but as advancements are made in operating system functionality, you will see this functionality delivered as objects.

Companies demand compatibility with their existing installed environments. The challenge for the operating systems vendors is to build in state-of-the-art advancements yet still maintain that compatibility. They must provide an ever-increasing number of new features and support for advanced 32-bit applications, but they must also tailor the systems to certain usage requirements. The operating systems also have to offer maximum compatibility with older drivers, older applications, and older equipment.

The Application Development Environment and Programming:


The future of the development tools for C/S is somewhat uncertain. One thing, however, is certain: programming will never be the same as it has been over the past decade. The meteoric rise in the number of tools capable of supporting C/S development has to come to an end. A significant number of tools have developed into robust, reliable toolkits that can build your client/server systems. These products include Visual Basic 4, Delphi 2, Magic, Progress, Obsydian, and PowerBuilder. Over time, the toolsets will be split into those recognized as providing adequate capabilities for department-level systems and those capable of producing large, scalable, enterprise systems.

All the tools available will move to true object orientation. The use of objects within the development tools will allow developers to make the switch to true object-based systems. The strength of the development tools will become their capability to deliver business systems built from pre-built objects joined together by business analysts and business users.

Improvements on the Network:

Here Comes UNIX!

UNIX is gaining popularity on PC-based LANs. UNIX offers several advantages over current network operating systems. It offers symmetrical multiprocessing, meaning that several processors can divide up the processing load of a server; Microsoft's Windows NT offers this feature, as does VINES, but NetWare still does not. UNIX also offers built-in communications, a powerful script language, and program portability from one UNIX hardware platform to another. Perhaps equally important, UNIX was designed specifically for large networks and for providing security on such networks.

Historically, UNIX has failed to dominate the LAN NOS market because it required very fast processors and extensive memory resources, both of which were very expensive until recently. Also, UNIX is complex enough to require a very well-trained support person as well as users who have been trained in its basic commands. UNIX will grow in popularity on PC-based LANs because it can now overcome its heretofore most serious limitation: its reputation for being a very user-unfriendly environment. The use of X Window and Motif has allowed users to move from the more familiar Microsoft Windows environments to those of UNIX.

More and more throughout the late 90s, LANs will also include UNIX servers as part of their structure, providing server resources to a variety of client workstations. In the initial stages, this inclusion will be in the form of database servers, such as Informix, but as traditionally PC-based companies become more familiar with UNIX systems, the roles will expand to business systems such as graphical repositories, transaction processors, systems management consoles, and data warehouse/statistical analysis systems.

Summary:

As is the case with most information systems areas, the future of client/server looks both exciting and challenging. Whereas in years past computing environments went through major changes with the passing of every 10 years, client/server and distributed computing presents the formidable task of changing every two years! This constant change is an advantage in that the technology will not stagnate or stop being developed, yet it also presents big headaches for you, the systems professional. At the end of the day, the future of your client/server systems is based on the future of your business. The client/server models can help your business get to where it wants to go. As your business evolves and improves, you can use the more advanced technology highlighted in this chapter to enable the business to achieve its goals.
